
## all clear...

@dharr thanks a lot! Your three answers all make sense to me now.

## zeros, optima and scaling...

@dharr thanks a lot. I really appreciate your help.

This should be correct if I am not overlooking anything: derivatives_final.mw. The file only includes the lambda-lambda, beta-beta, and alpha-alpha comparisons. Next, I will try the lambda-beta, lambda-alpha, and beta-alpha comparisons (6 additional comparisons in total, i.e., one for each pair wrt sigma_d and wrt sigma_v). Hopefully I can get back to you if I encounter issues.

Three minor questions:

1. Two of the second derivatives change sign. For each, I find the zero with solve(), but it returns two zeros instead of the single one I expected. Why?
2. I find minima and maxima with DirectSearch:-GlobalOptima, which is accurate most of the time but quite slow. Is there a more efficient alternative? I write "most of the time" because occasionally it gives me results that are completely off from the peaks and troughs I can eyeball in the plots.
3. My axes are always gamma*sigma_d^2 vs gamma*sigma_d*sigma_v. Are there other parameter combinations I could use? Would it make sense to rescale the plots together with the axes, e.g., dividing both by gamma*sigma_d so that the new y-axis and x-axis become sigma_d and sigma_v, respectively? How would I do that? The plots might then be easier to interpret.
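On question 1: a frequent reason for getting two zeros is that the expression handed to solve() is quadratic (or even) in the unknown, so the roots come as a pair and only the one inside the admissible region (all your parameters are strictly positive) is the sign change you care about. A toy illustration of the filtering step, in Python rather than Maple and with a made-up quadratic standing in for the real second derivative:

```python
import math

# Stand-in for a second derivative that is quadratic in a parameter s:
# h(s) = s**2 - c changes sign at s = +/- sqrt(c). A symbolic solver
# returns both roots; only the positive one is admissible when s plays
# the role of a strictly positive parameter such as sigma_d.
c = 9.0
roots = [-math.sqrt(c), math.sqrt(c)]   # what solve() typically hands back
admissible = [r for r in roots if r > 0]
print(admissible)  # [3.0]
```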

Thanks a lot.

## further questions...

@dharr working_example2.mw works great. Thanks a lot for the helpful comments too!

About the derivatives, please check derivatives_2.mw (with alias, without approximation) or derivatives_2_(1).mw (without alias, with approximation). Perhaps the derivatives look nicer in the second one? I am not sure.

Questions:

1. How do I determine the signs? For example, the partial derivative of lambda wrt sigma_v should be positive according to the inert form (given that Lambda is positive, at least for the root I care about, and that d(Lambda)/d(Gamma) is negative), but my if/else check always returns negative. How do I fix it?
2. Are my second derivatives correct? (See the header of the worksheet and the 8 additional derivatives compared to your version.) Specifically, does the symmetry of second derivatives always hold in my cases? If not, to compare the 8 derivatives against each other, should I differentiate first wrt sigma_d or sigma_v and then wrt gamma, or vice versa?
3. The scaling seemed easy, but I ran into issues when plotting. See, for example, the comparison at the bottom of the script.
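On question 2: for any expression that is twice continuously differentiable in the parameters (which these should be away from singularities), the symmetry of second derivatives (Schwarz/Clairaut) guarantees the differentiation order does not matter. A quick numeric check on a stand-in smooth function, in Python rather than Maple just so it is self-contained:

```python
import math

# Sample smooth function f(x, y) = x**3 * y**2 + sin(x*y).
def f(x, y):
    return x**3 * y**2 + math.sin(x * y)

# Hand-derived d/dx(d/dy f) = 6*x**2*y + cos(x*y) - x*y*sin(x*y).
def fyx(x, y):
    return 6 * x**2 * y + math.cos(x * y) - x * y * math.sin(x * y)

# Finite-difference approximation of the other order, d/dy(d/dx f).
h = 1e-4
x0, y0 = 0.7, 1.3
fd = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
      - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
print(abs(fd - fyx(x0, y0)) < 1e-5)  # True: both orders agree
```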

Thank you for looking into it.

@dharr I think I managed to do the beta1-beta3 comparison: working_example.mw. Could you please check if it is correct?

Questions:

1. However, I have sigma_d3/sigma_d on the y-axis instead of sigma_d/sigma_d3 as in the lambda1-lambda3 comparison you did. Does an inverted y-axis make sense, or should I use the same y-axis to facilitate interpretation?
2. For the alpha-alpha_s comparison, I found that alpha > alpha_s regardless of the parameter values. However, I am having a tough time doing the beta-alpha and beta-alpha_s comparisons. Can you help?
3. For the partial derivatives, I set them up using the chain rule and your previous code. Are they all correct (see the inert forms)? Now, if I want to scale them so that I can compare them with each other, I would first need to visualize them with aliases, but that is not working as I expected (see my script). And even if I succeed with the aliases, the manual scaling still seems quite hard.

What do you propose? (And should I migrate this to a separate question, which I would title "Non-dimensionalization and finding the key pairs of parameter combinations"?)

(Notation: L is lambda_1=lambda_2, B is beta_1=beta_2, A is alpha_1=alpha_2, As is alpha_2s=alpha_1s.)

## @dharr thank you, as always...

@dharr thank you, as always! After reading your details, I understand that what you propose is the simplest and most effective way to assess magnitudes and do what I wanted. (Best answer went to @mmcdara because his approach was proposed first and effectively tackled my initial request about the simpler functions. I admit that I later changed my request to a more complicated case, and I would have given you best answer had I asked about the complicated case from the beginning.)

Back to your approach: I think I got it, and I have a working example that I will upload here once MaplePrimes allows me to upload files again (are you also encountering issues?).

In summary, I think:

1. Your approach always requires manual scaling tailored to the specific functions at hand, but it is very effective.
2. Working with Lambda, which is just a quartic in implicit form, is actually simpler than working with its approximate form.
3. The underlying idea is to obtain two parameter combinations that I can plot as x vs y and then identify the equality line.
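Point 3 can be sanity-checked on the simple beta/beta__3 pair from the worksheet further down in this thread: there, f__1 > f__3 collapses to comparing the two parameter combinations sqrt(5)*sigma__d and sqrt(2)*sigma__d3, so in those coordinates the boundary is exactly the equality line y = x. A brute-force check (Python rather than Maple, just to keep it self-contained):

```python
import math, random

# f1 = beta = 5*sd**2/(8*sv**2), f3 = beta3 = sd3**2/(4*sv**2).
# Algebraically f1 > f3  iff  sqrt(5)*sd > sqrt(2)*sd3, so plotting
# Y = sqrt(5)*sd against X = sqrt(2)*sd3 makes y = x the boundary.
random.seed(0)
checked = 0
for _ in range(1000):
    sv = random.uniform(0.1, 10.0)
    sd = random.uniform(0.1, 10.0)
    sd3 = random.uniform(0.1, 10.0)
    f1 = 5 * sd**2 / (8 * sv**2)
    f3 = sd3**2 / (4 * sv**2)
    if abs(f1 - f3) < 1e-9 * max(f1, f3):
        continue  # skip razor-edge ties where rounding could flip the test
    X, Y = math.sqrt(2) * sd3, math.sqrt(5) * sd
    assert (f1 > f3) == (Y > X)
    checked += 1
print(checked, "random samples consistent with: f1 > f3 iff Y > X")
```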

I will try to make it work for my derivatives too now and get back to you here.

## more details...

@dharr thanks. Perhaps this is good enough for f_1 (= f_2) and f_3, but it can be complicated to customize for "other functions". In contrast, @mmcdara's solve() seems closer to a standard approach: I just plug in my two functions and it returns the precise inequality (between the parameters, or products/ratios of them) that dictates the relative magnitudes, without any scaling (which I think is only needed for 2D plotting). So, how do I fix complicated_comparison.mw? Or are there advantages of your "plotting approach" over solve() that I am not seeing? (See my last paragraph below.)

@dharr What do I mean by "other functions"? I mean (a) functions built from combinations of f_1 (I call them g_1, g_2, and g_3 in the script right below, and beta_1, beta_2, and beta_3 in the script at the bottom of my comment), and (b) partial derivatives of f_1 and f_3 wrt my underlying parameters. For both (a) and (b) it is perhaps useful to work with the approximate form of Lambda rather than Lambda directly, but your "plotting approach" can get tricky: complicated_comparison2.mw.

What do I mean by (b)? See partial_derivatives.mw. My END GOAL is to do exactly the same type of comparisons I did here for the limit of gamma to infinity, but for finite gamma (so I will have additional partial derivatives wrt gamma). Hence the convenience of using the approximate form of Lambda instead of Lambda directly (the partial derivatives of the latter get too messy). I am interested in both the sign and the relative absolute magnitude of the partial derivatives. For the finite-gamma case, (i) there may be sign switches for some partial derivatives, e.g., a derivative that is positive over one range of parameter values and negative over another, and (ii) your 2D plotting approach can be handy for the relative magnitudes whenever the boundary between the two domains is easier to plot than to express as a formula (in contrast to the gamma-to-infinity case, where plotting is not needed because the inequalities are very simple).

## precise specifications...

@mmcdara thanks. I fixed my mistakes and adapted your script to construct piecewise comparisons for my other simple functions, which depend on just 3 parameters.

Now I have a more complicated case with 4 parameters, gamma, sigma__v, sigma__d, and sigma__d3, all strictly positive. I am pretty sure this is still simple enough for solve() to tackle. Note that the first function depends only on gamma, sigma__v, and sigma__d, while the second depends only on sigma__v and sigma__d3.

Precise specifications: I want to build a piecewise function that finds the parameter ranges such that (a) f_1 > f_2, (b) f_1 < f_2, and (c) f_1 = f_2. I need:

1. To exclude the trivial solutions param = param and param > 0 (e.g., sigma__v = sigma__v and sigma__v > 0, and the same for the other 3 params).
2. To express all the other solutions in the most meaningful way. E.g., perhaps it is simplest/most compact to express param1 > ...combination of the other 3 params... rather than param2 > ...combination of the other 3 params... (which of the 4 variants is simplest?), or perhaps it is simpler and more meaningful to have products of params on both sides of the inequality, e.g., param1*param2 > param3*param4 or similar.
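For a sense of what point 2 could look like in practice, here is the reduction for the simpler beta/alpha pair from my other comment below (only a stand-in: the actual functions in complicated_comparison.mw differ). Dividing out the common positive factor leaves a single ratio condition, verified numerically here in Python:

```python
import math, random

# f1 = beta = 5*sd**2/(8*sv**2), f2 = alpha = sqrt(5)*sd/(8*sv).
# Dividing both by the common positive factor sd/(8*sv) leaves
# sqrt(5)*sd/sv vs 1, i.e. the comparison collapses to the single
# ratio condition sd/sv > 1/sqrt(5).
random.seed(1)
checked = 0
for _ in range(1000):
    sv = random.uniform(0.1, 10.0)
    sd = random.uniform(0.1, 10.0)
    f1 = 5 * sd**2 / (8 * sv**2)
    f2 = math.sqrt(5) * sd / (8 * sv)
    if abs(f1 - f2) < 1e-9 * max(f1, f2):
        continue  # skip razor-edge ties where rounding could flip the test
    assert (f1 > f2) == (sd / sv > 1 / math.sqrt(5))
    checked += 1
print(checked, "samples consistent with: f1 > f2 iff sd/sv > 1/sqrt(5)")
```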

My failed attempt:

complicated_comparison.mw

Thanks!

## not robust comparison...

@mmcdara Please check the following file. What am I not seeing about the f_1 vs f_3 comparison? Note that sigma__d is different from sigma__d3.

```
beta = (5*sigma__d^2)/(8*sigma__v^2):   f__1 := rhs(%);
alpha = sqrt(5)*sigma__d/(8*sigma__v):  f__2 := rhs(%);
beta__3 = (sigma__d3^2)/(4*sigma__v^2): f__3 := rhs(%);

a__1 := (solve([f__1 > f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:
b__1 := (solve([f__1 < f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:
c__1 := (solve([f__1 = f__2, sigma__v > 0], [sigma__v]) assuming sigma__d > 0)[]:
piecewise(
    remove(has, a__1, 0)[], ('f__1' > 'f__2'),
    remove(has, b__1, 0)[], ('f__1' < 'f__2'),
    remove(has, c__1, 0)[], ('f__1' = 'f__2')
);

a__2 := (solve([f__1 > f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
b__2 := (solve([f__1 < f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
c__2 := (solve([f__1 = f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
piecewise(
    remove(has, a__2, 0)[], ('f__1' > 'f__3'),
    remove(has, b__2, 0)[], ('f__1' < 'f__3'),
    remove(has, c__2, 0)[], ('f__1' = 'f__3')
);

a__3 := (solve([f__2 > f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
b__3 := (solve([f__2 < f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
c__3 := (solve([f__2 = f__3, sigma__v > 0], [sigma__v]) assuming sigma__d > 0, sigma__d3 > 0)[]:
piecewise(
    remove(has, a__3, 0)[], ('f__2' > 'f__3'),
    remove(has, b__3, 0)[], ('f__2' < 'f__3'),
    remove(has, c__3, 0)[], ('f__2' = 'f__3')
);
```
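A possible reading of why the f__1 vs f__3 comparison resists a solve over sigma__v: both functions share the factor 1/sigma__v^2, so f__1 - f__3 = (5*sigma__d^2 - 2*sigma__d3^2)/(8*sigma__v^2), and the sign never changes as sigma__v varies; the boundary lives in the (sigma__d, sigma__d3) plane instead. A small numeric check (Python, outside Maple):

```python
# f1 - f3 = (5*sd**2 - 2*sd3**2)/(8*sv**2): the sign is independent of
# sv, so solving over sigma__v alone cannot produce a boundary; the
# comparison is decided entirely by sigma__d vs sigma__d3.
def f1_minus_f3_positive(sd, sd3, sv):
    return (5 * sd**2 / (8 * sv**2) - sd3**2 / (4 * sv**2)) > 0

sd, sd3 = 2.0, 1.0   # 5*sd**2 = 20 > 2 = 2*sd3**2, so f1 > f3 here
signs = {f1_minus_f3_positive(sd, sd3, sv) for sv in (0.01, 0.5, 3.0, 100.0)}
print(signs)  # {True}: same sign for every sigma__v
```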

## @mmcdara thanks. I will get back to...

@mmcdara thanks. I will get back to you if I have any issue with adapting your script to other f_1 and f_2.

You write "there is no such thing in the worksheet". I know it's quite odd (I have never encountered this before), but the blur is on the plot in the actual worksheet. Here's a screenshot; my plot looks exactly like this in the worksheet:

(This is a minor concern but I was curious about it.)

## I will try both scripts with other funct...

@mmcdara thanks a lot! Both the plotting approach and the piecewise approach are useful. I think this should be enough. I will run this for the multitude of comparisons I want to do, and hopefully I can get back to you if I encounter any issues with other f_1 and f_2.

When you write "For a more complex case than yours" in the first script, you mean more convoluted forms of f_1 and f_2 that still depend on just two variables, right? If I have, say, three variables, then I can use add_on.mw, right?

A minor follow-up: why is the plot significantly blurred (legend, numbers on the axes, axis titles, inequalities on the plot, and even the y = x blue line)?

## Now ok...

@dharr it runs smoothly now. Thanks!

Explore(plot) is indeed a nice way to pin down good approximations.

## Using Explore(plot)...

@dharr what am I doing wrong? MaPal93approx(3).mw. I was trying to play with the Explore command to replicate your a and b...

## Impressive!...

@dharr that's such a simple expression and so accurate!

I managed to get 2.6% with (f(0) - f(infinity))*exp(a*x)*(1 + c_1*x + c_2*x^2) + f(infinity) if I interpolate the quadratic polynomial using the two roots of the L-polynomials, but yours is truly impressive.

What is the rationale for abandoning the P(x)? Replicating P(x)'s role of "submitting" to the decaying exponential at infinity, while introducing a denominator as in the conventionally more accurate rational functions? How did you come up with the -1/8 and the 5/6?
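For context, the family in my 2.6% attempt pins down both endpoints by construction whenever a < 0: at x = 0 it returns f(0) exactly, and the exponential kills the polynomial factor at infinity, whatever c_1 and c_2 are. A quick check with placeholder coefficients (all numbers below are hypothetical, Python rather than Maple):

```python
import math

# g(x) = (f0 - finf)*exp(a*x)*(1 + c1*x + c2*x**2) + finf matches f(0)
# at x = 0 and decays to f(infinity) for a < 0, regardless of c1, c2.
f0, finf, a, c1, c2 = 1.0, 0.2, -1.5, 0.3, 0.05

def g(x):
    return (f0 - finf) * math.exp(a * x) * (1 + c1 * x + c2 * x**2) + finf

print(abs(g(0.0) - f0) < 1e-12)    # True: endpoint at 0 is matched
print(abs(g(50.0) - finf) < 1e-9)  # True: the exponential wins at infinity
```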

## Thanks for clarifying...

@dharr My objective, as you guessed, was indeed (1). I got hooked as I learned more and more about the different approaches from you and acer. Sorry for not making that explicit enough.

I have been seeking a simple and interpretable approximation from the very beginning, so (1) is more aligned with my goal. However, I think it was still interesting to maximize the accuracy of the approximation on the shorter scale, even though that is not my primary concern.

I really appreciate all the details!