
## @dharr great! Is there a reason why...

@dharr great! Sometimes fsolve() doesn't go through for smaller degree polynomials: Approx_new(1)_error.mw

@dharr thanks for the explanations.

1. Yes, after a second read I realize those references are useful for data fitting, which I don't need to care about for now.
2. But the command plot((fapprox-f(x))/f(x)) returns another plot? How can I turn that into numbers? It would be interesting to have the average error (in %) over the whole curve, and perhaps also the min and max error with the x values at which they occur, just to have a measure of the interpolation accuracy beyond the visuals...or what else do you suggest?
3. I must be doing something wrong to find the decay coefficient a: Approx_new.mw
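For 2., one way I imagine turning the error curve into numbers is sketched below; fapprox, f, and the range 0..10 are placeholders for whatever is in the actual worksheet:

```maple
# Sketch only: fapprox (the approximation) and f (the target) are assumed
# to be operators, and 0..10 stands in for the actual x-range.
relerr := x -> (fapprox(x) - f(x))/f(x):                      # pointwise relative error
avg := evalf(Int(abs(relerr(x)), x = 0 .. 10)/10);            # average |error| over the range
worst := Optimization:-Maximize(abs(relerr(x)), x = 0 .. 10): # max |error| and the x where it occurs
best := Optimization:-Minimize(abs(relerr(x)), x = 0 .. 10):  # min |error| and the x where it occurs
```

Multiplying avg by 100 would then give the average error in %.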

Thanks

## Follow-up...

@dharr thanks for giving more context. I like the idea of trying to "hack things that work well at both ends". Not only could it lead to simple approximate expressions, but it should also help with interpretability, which is important to me.

You write "but you interpolated the polynomial not the overall function fapprox.". Yes, my mistake.

Since you mention fitting vs interpolation, and simplicity of the function is important to me, which approach do you recommend? If I understood correctly, the paper shows that the beautiful and simple Eq. (13) can replace Eq. (2) well at both low and high frequencies. I read "complex nonlinear least squares fitting was done using Maple’s NLPSolve routine using the nonlinearsimplex option [15], with a custom calling program [16] that also derives the standard errors in the usual way, i.e., from the values of the derivatives of the impedance with respect to the parameters at the minimum [17]."

1. Should I consider looking into [15,16,17] to try and find a similarly accurate and simple approximation for my function? Would that be challenging yet worthy? Would you help?
2. You also mention "relative errors (with respect to Eq. (2))" and "systematic error in the parameters can be estimated by individually varying the parameters to find the minimum in the residual sum of squares". I think it could be interesting to quantify these errors for my approximation as well.
3. Talking about interpolation instead, you mention "Exact value and derivative at zero preclude any of the things that fit (as opposed to interpolate) arbitrary functions unless they are carefully designed not to disrupt the exact values" and "c1*x term will mess up the derivative at zero". Which replacement term would preserve the derivative at 0?
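On 3., my understanding is that any correction term that vanishes faster than x at 0 (e.g. c1*x^2, or c1*x^2/(1+x^2) if it should also stay bounded) leaves both the value and the first derivative at 0 untouched. A minimal check, with f0, d0, c1 kept symbolic:

```maple
# g matches value f0 and slope d0 at x = 0 regardless of c1,
# because the correction term is O(x^2) near the origin.
g := x -> f0 + d0*x + c1*x^2/(1 + x^2):
g(0);      # f0
D(g)(0);   # d0, independent of c1
```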

Thanks

## nice...

@acer both approximations look nice. I confirm that the rational polynomial, while stranger looking, performs slightly better: approx_ac2_moreplots.mw

## simple function...

@dharr thanks.

3. is indeed true!

Is this what you meant in 4.? Approx.mw (which perhaps can be implemented more concisely using loops?)

## @dharr I can't execute your wor...

@dharr I can't execute your worksheets, MakeFunction doesn't execute...how come?

Thank you for your files. Yes I was wrong on the last derivative, I forgot to multiply by sigma__v/sigma__d. Thanks for clarifying.

Moreover:

1. In your first worksheet, you define small f but then don't actually use it? I think this is related to MakeFunction not going through...when I open both worksheets I get the warning "This worksheet was created with a newer version of Maple. It may not work as expected". I got the same warning with a previous worksheet of yours, but never encountered issues during execution with my older version of Maple.
2. Does plot(D[3](F)(1, 2, gamma), gamma=0..5) mean plotting D[3](F) while fixing sigma__d=1 and sigma__v=2, and does plot3d(D[3](F)(sigma__d, sigma__v, 1), sigma__d=0..5, sigma__v=0..5) mean plotting D[3](F) while fixing gamma=1? I think so, but I need to solve the MakeFunction issue first.
3. The plots clearly show that D[3](F) is negative for some parameter values, and I already know that it's negative for all values just by looking at D[3](F)(sigma__d, sigma__v, gamma), given that D(f) is negative and sigma__v^2 is positive...but what about D[2](F)? Shouldn't it be positive or negative depending on parameter values (since D(f) is negative, as per your second file, but f is positive and all underlying variables are positive)?
4. Let's say I still wanted to obtain an approximation of f(Gamma) for Gamma from 0 up to the Gamma that brings f(Gamma) reasonably close to sqrt(5)/5...Such an approximating polynomial should be a very simple (no more than one line) function of Gamma that captures the original shape well up to the point where it starts stabilizing at sqrt(5)/5 (at very roughly which Gamma?). How would you do this?
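On the MakeFunction issue: MakeFunction was only added in a recent Maple release (2022, if I remember correctly), so older versions won't recognize it; unapply is the long-standing equivalent. A possible workaround, where expr stands for whatever expression the worksheet turns into F:

```maple
# Workaround sketch for older Maple: build F with unapply instead of MakeFunction.
# expr is a placeholder for the actual expression from the worksheet.
local gamma:   # gamma is otherwise Euler's constant in Maple
F := unapply(expr, sigma__d, sigma__v, gamma):
plot(D[3](F)(1, 2, gamma), gamma = 0 .. 5);   # d/dgamma at sigma__d = 1, sigma__v = 2
```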

Thanks again.

## Follow-up doubts...

@mmcdara thanks!

I still have doubts:

1. In the first and second derivatives of Lambda__1 (which is really nothing other than the complicated f(Gamma)), the Lambda__1 on the right-hand side is a placeholder for the complicated f(Gamma), am I right? Is it like rewriting f'(x) in terms of f(x) instead of x? In other words, the synthetic representation is surely accurate, but does it tell me anything about the dependence on Gamma?
2. Related to 2., can these synthetic representations tell me anything about at least the sign of dl1dsv, given that the first term is negative and the second one is positive? Accordingly, I expect this derivative to be positive for small values of sigma__v and then to turn negative once sigma__v grows beyond a critical value. How do I find this critical value? I think I need an approximation of my f(Gamma) and its derivative for this, right?
3. When I plot(lambda__1, Gamma=1..10) I plot only one root among the four because I am only interested in the positive one, and there is only one for strictly positive Gamma. THIS is the f(Gamma) whose expression and derivatives matter to me: the one and only positive f(Gamma). All my points above concern only the positive f(Gamma); I am not interested at all in the other three roots (does it mean your synthetic representations of the derivatives are not yet "specified" to the positive one?).
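On 2., once an expression (or approximation) for dl1dsv is in hand, I imagine the critical sigma__v could be found numerically by fixing the other parameters and solving for the sign change. A sketch, where dl1dsv stands for the symbolic derivative from the worksheet, and the fixed values and search range are arbitrary guesses:

```maple
# dl1dsv is assumed to be the symbolic derivative d(lambda__1)/d(sigma__v).
crit := fsolve(eval(dl1dsv, {gamma = 1, sigma__d = 1}) = 0, sigma__v = 0 .. 10);
# sign check on either side of the candidate root:
evalf(eval(dl1dsv, {gamma = 1, sigma__d = 1, sigma__v = crit - 0.1}));
evalf(eval(dl1dsv, {gamma = 1, sigma__d = 1, sigma__v = crit + 0.1}));
```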

I hope I clarified myself a bit more. Thanks for the explanations!

## gamma limit to 0 also a constant? (not z...

@dharr thank you for showing me how to run asymptotic tests.

I think that as gamma goes to zero the limit should also be a constant:

Expected limits for gamma, based on the limits of f(Gamma) computed in a separate worksheet:

Expected limit to 0:

 > sqrt(2)/2*sigma__v/sigma__d; evalf(%);

Expected limit to infinity:

 > sqrt(5)/5*sigma__v/sigma__d; evalf(%);

## limits of expressions converted to radic...

@dharr thank you.

The difference in processing time between computing limits of RootOf(...quartic...) and computing limits of its conversion to radicals is still not clear to me. As an example: caseAcaseB.mw

Case A is exactly your worksheet. Case B is a very similar analysis of another quartic x_B with slightly different coefficients. Computing limits of x_B seems fast but unreliable, while computing limits of convert(x_B, radical) is much, much slower (after ~1-2 hours, still no output). For Case A as well, computing limits of convert(x_A, radical) is much slower than computing limits of x_A directly, but it is still definitely manageable (it takes 1-2 minutes more).

Am I doing anything wrong? More specifically, would you please propose an ad hoc method to compute the three limits for Case B? Thank you so much for your help and kindness!
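One ad hoc route that avoids convert(..., radical) altogether: any finite limit of a root of p(_Z, s) as s -> 0 must be a root of p(_Z, 0), so it may be enough to solve the degenerate polynomial and then check numerically which candidate the branch of interest approaches. A sketch, using the quartic from earlier in this thread as a stand-in for x_B (the index = real[...] selector is a guess and may need adjusting to pick the right branch):

```maple
# Stand-in quartic; replace p with the actual Case B polynomial.
p := 40*_Z^4 + 36*s*_Z^3 + (9*s^2 - 4)*_Z^2 - 4*s*_Z - s^2:
cands := [solve(eval(p, s = 0), _Z)];   # candidate limits as s -> 0
# numeric check of which candidate the chosen real branch approaches:
evalf(eval(RootOf(p, _Z, index = real[2]), s = 1e-6));
```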

## One last thought...

@dharr thanks a lot.

I confirm that

`limit(convert(lambda__2, radical), sigma__d=0, right) assuming positive;`

returns infinity.

Good catch on using series(). Perhaps the series() tool can also give me more insight into how the function behaves at some meaningful values of the underlying parameters...for example, when I plot the lambda against one param (e.g., 'a') while setting the other two params (e.g., 'b' and 'c') to wildly diverging values, the smoothness breaks down and an "unstable regime" sets in at some critical value of 'a': MaPal93_sigma_d_new.mw.

But I am not sure how insightful these "extreme" calibrations are...because, in contrast to the first root of the 6th-degree polynomial, for which a Gamma_max (i.e., a strict bound on the product of the three params: really interesting!) was visibly obtainable, the real and positive lambda_2 = lambda_3 solution that is the root of our quartic seems to exist for any positive Gamma = gamma*sigma_v*sigma_d (the red plot)...

(but at least we know the upper and lower bounds for f(Gamma)...)

## sigma_d to 0: limit undefined?...

@dharr amazing, thanks a lot. I wish I could give you another best answer!

Interestingly, while the limits for gamma make sense, I was expecting the limit for sigma_d to 0 to give me infinity, but it gives me 'undefined'...

 > restart;

Limits of Lambda__2 - directly from RootOf should work but has bugs (reported)

 > Lambda__2 := RootOf(40*_Z^4 + 36*Gamma*_Z^3 + (9*Gamma^2 - 4)*_Z^2 - 4*Gamma*_Z - Gamma^2);
 > L2 := convert(Lambda__2, radical):
 > limit(Lambda__2, Gamma = 0);        # fast but wrong
 > limit(Lambda__2, Gamma = infinity); # fast but wrong
 > limit(L2, Gamma = 0);               # slow but right
 > limit(L2, Gamma = infinity);        # very slow but right

Sensitivity analysis

 > restart;
 > local gamma:
 > Gamma := gamma*sigma__v*sigma__d:
 > Lambda__2 := RootOf(40*_Z^4 + 36*Gamma*_Z^3 + (9*Gamma^2 - 4)*_Z^2 - 4*Gamma*_Z - Gamma^2):
 > lambda__1 := sqrt(2/5)*sigma__v/sigma__d:
 > lambda__2 := Lambda__2*sigma__v/sigma__d:
 > lambda__3 := lambda__2:

Via RootOfs

 > dlam2dsigv:=diff(lambda__2,sigma__v):
 > eval(dlam2dsigv,{gamma=1,sigma__v=2, sigma__d=3}): evalf(%);

Do limits via radicals to be on the safe side


Check with plot for some values of the other parameters


## red one?...

@dharr in your summary you write "f(Gamma) is a (known) complicated function of Gamma that lies between 1/sqrt(10) = 0.316 and 1/3 = 0.333". You are referring exactly to the red one (allvals[1]) in your plot above, in your comment "Always the first - here is the plot - the positive one is consistently the first (red) one.", am I right? How do you find its min and max (maximize() and minimize() keep running forever on my computer...)?
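When maximize()/minimize() hang, a crude but reliable fallback I would try is to evaluate the indexed real root numerically on a grid and take the extremes. A sketch; the index = real[...] selector and the Gamma window 0.01..20 are guesses that may need adjusting:

```maple
# f(Gamma): one real root of the quartic, evaluated numerically.
f := G -> evalf(eval(RootOf(40*_Z^4 + 36*Gamma*_Z^3 + (9*Gamma^2 - 4)*_Z^2
                            - 4*Gamma*_Z - Gamma^2, _Z, index = real[2]), Gamma = G)):
vals := [seq(f(g), g = 0.01 .. 20, 0.05)]:
min(vals), max(vals);   # approximate bounds of f(Gamma) on the sampled window
```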

Moreover, it seems like this f(Gamma) is extremely smooth...

1. It doesn't tell me much about the sensitivity of lambda__2 = lambda__3 = f(Gamma)*sigma__v/sigma__d to changes in the underlying parameters sigma__v, sigma__d, and gamma (my ultimate goal). How can I make sense of this?
2. What does lambda_2 = lambda_3 look like in the limits to 0 and infinity for sigma__v, sigma__d, and gamma?

## Lambda_3 always negative for the first t...

@dharr thanks a lot. I really appreciate all the details and I am learning a lot.

You write: "As an example consider the first solution with the 6th order polynomial. Plots show that for positive gamma there are solutions with positive Lambda__2 but negative Lambda__2*, which may be enough to know that this is not useful to you. But suppose you wanted to know the maximum Gamma value for which it is possible to find a positive Lambda__2. You can differentiate and solve to find this value is Gamma_max = 0.29407, so  for any sigma__v and sigma__d and gamma, their product has to be less than 0.29407 to give a real solution (with a positive Lambda__2 and Lambda__3**)."

*I think you meant negative Lambda__3

**I think you meant negative Lambda__3

Didn't you show, in the first side-by-side plot, that Lambda_2 and Lambda_3 have opposite signs for both of the first two roots of the 6th-degree polynomial?

Separately, what's the purpose of the plot where you superimpose Lambda_2 and Lambda_3?

## Upper bound for the degree of implicit e...

@dharr thanks for going above and beyond! I will take a look at that paper.

Would you argue that seeking closed-form roots (with or without non-dimensionalization) is only reasonable if we know that the lambdas' implicit polynomial representations in _Z are not beyond quartics?

For example, suppose your alias beta in NondimshortAllEqns.mw, the one that made ans23[2] = {Lambda_2 = beta, Lambda_3 = beta}, were not a quartic but instead a polynomial in _Z of degree 10 (not even 5, since I know there may be special but still non-numerical ways to obtain the roots...). Would it be a waste of time to try to find the roots of a 10th-degree polynomial in _Z (without resorting to numerical or plotting strategies)?

## @acer indeed, the non-dimensionaliz...

@acer indeed, the non-dimensionalization arose in this thread as an aside to answering the question. But I respect moderators' choice if you want to keep it all here...

@dharr I do not know about Hermite normal forms...but I guess it requires some matrix reformulation of the setup?
