
## Am I correct?...

@mmcdara questions:

1. Is true_Gamma (=1.112) the asymptote in the last plot?
2. Is the Lennard-Jones potential the green curve? That is, the functional form K with optimal parameters a, b, c, d, f?
3. If so, is my summary below correct?
• regardless of rho, for Gamma < true_Gamma my surface is always negative
• for Gamma > true_Gamma, my surface is negative if rho < green curve and positive if rho > green curve

Any misunderstanding?

Why don't you embed the L-J snippet in the worksheet and post your comment as an answer I can upvote/give best answer?

EDIT: I encounter an error when running it on my machine: Surface1_fsolve_workaround_mmcdara_MaPal.mw

## @acer I don't understand.Suppos...

@acer I don't understand.

Suppose this command:
plot3d(Z(X,Y),X=0.01..10,Y=0.01..10,'colorscheme'=["zgradient",["LightGray", "Gray", "Green"]], labels = [X, Y, Zeta(X,Y)], view=[default,default,0..10]);

What do you mean by

max( lowvalue, min( maxvalue,  expr ) )

around the expression?
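For concreteness, here is my reading of the suggestion (a sketch only, assuming Z is already defined; the clamping bounds 0 and 10 mirror the view range above):

```maple
# Hedged sketch: wrap the plotted expression itself in max/min so values are
# clamped to [0, 10] before plotting, instead of relying only on 'view'.
Zc := (X, Y) -> max(0, min(10, Z(X, Y))):
plot3d(Zc(X, Y), X = 0.01 .. 10, Y = 0.01 .. 10,
       'colorscheme' = ["zgradient", ["LightGray", "Gray", "Green"]],
       labels = [X, Y, 'Zeta(X,Y)'], view = [default, default, 0 .. 10]);
```

Is that what you mean, and would the colorscheme gradient then map onto the clamped z-range?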

## thanks...

@mmcdara it's the same numerical calibration exercise, but this time for sigma_delta2 (ncal3): discontinuity_persists_ncal3.mw

## Other reasons fsolve() breaks...

@mmcdara thank you for your ideas so far.

I don't want to spam this thread with another worksheet but I did find another instance of fsolve() breaking before the end of the for loop (besides the still unsolved and different issue with the discontinuity). Perhaps this is helpful to know for other contributors still on this (@Carl Love) or other readers.

Apparently, hitting the lower bound of the chosen range for the solution is not the only reason fsolve() can "break". As such, dynamic adjustment of the lower bound does not always solve the issue. For the specific calibration I am referring to, the last valid iteration before the "break" has all six lambdas well above their lower bounds.

"even if you find a good strategy for one calibration case there is no guarantee it still performs well for another one." I agree: as you argued in your previous comment, perhaps the work required just to make this specific calibration run is not worth the effort.

Let's see if Carl Love has some other ideas in mind.

"Why do you believe positive solutions exist for INDEX > 0.8731? Do you feel they do or are you sure they really do?" I thought about it. I agree with you that it's not necessarily the case. Based on my understanding of the problem at hand, there should be positive and smooth solutions (for all 6 lambdas) for a range of sigma_* (* wildcard) values within (0,x] where x is unknown but surely <1. It seems like you found that such x is 0.8731 for this specific calibration. Of course x may differ for a different calibration.

P.S.: could you please check the update to my comment above? Why does the discontinuity persist for a broader plotting range? Thanks.

## black-box procedures/operator-form inste...

@acer thanks for describing the inner workings of fsolve().

You write: "Moreover (and I know this sounds facile, but I'm being serious), it can help to pause and try to define the notion of a "root" -- in this float context. Once you have such, you can program for it."
I don't understand what you have in mind.

Also: "Using black-box procedures can separate the working precision for the functional evaluations (Digits now set within the scope of the procedures) from the accuracy requirement in the residual check." So how do I use operator form, instead of expressions, for the equations? What exactly does that mean? And are you confident that the residual check at INDEX ~ 0.554 is the root cause of the singularity?
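Is "operator form" something like the following? (A sketch only; the equation is a placeholder, not the worksheet's actual system.)

```maple
# Hedged sketch of operator form: pass a procedure instead of an expression.
# Digits is an environment variable, so assigning it inside the procedure
# raises the working precision only for these functional evaluations.
F := proc(x)
    Digits := 30;          # higher working precision inside the black box
    evalf(exp(x) - 3*x);   # placeholder residual; not the worksheet's equation
end proc:
fsolve(F, 0 .. 1);         # fsolve calls F as a black box over the range
```

If so, I still don't see how the residual-check accuracy is specified separately.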

## adjusting the INDEX step "along the way"...

@mmcdara thanks.

You write: "Maybe reducing the step of INDEX before reaching this value or increasing the number of Digits would prove I'm wrong?"
How do I reduce the step of INDEX dynamically? Are you thinking of a for sub-loop within the main for loop, starting from 0.8731 onwards?
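To make my question concrete, is this the kind of structure you have in mind? (A sketch only; solve_at is a hypothetical wrapper around the worksheet's fsolve call at a given INDEX value.)

```maple
# Hedged sketch of dynamic step refinement: on failure, retreat to the last
# good INDEX and advance again with a 10x smaller step.
step := 1e-3:
last_good := 0:
INDEX := 1e-3:
while INDEX < 1.0 do
    sol := solve_at(INDEX);          # hypothetical wrapper around fsolve
    if sol::set then
        last_good := INDEX;          # success: record and advance
        INDEX := INDEX + step;
    elif step > 1e-6 then
        step := step/10;             # failure: refine near the problem value
        INDEX := last_good + step;
    else
        break;                       # give up below the minimum step
    end if;
end do:
```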

Moreover, I set Digits:=30 inside the "if decrease then", right before lam_ranges:=...
Since I am not seeing any improvement with that, I assume that placing the Digits command there is not what you had in mind...

Anyway, I understand if you are done looking at this thread, and I thank you for your time and ideas so far. It is already an improvement over stopping at the 600-something iteration, and it is nice to see that the discontinuity is actually not there. Perhaps I can think about it some more. Initially I thought that decreasing the lambda__i1 lower bound even further could help, but I now understand that there may simply be no positive lambda__i1 (however small) beyond that threshold iteration... Maybe dynamic adjustment of the INDEX step is the technical solution we are looking for... Let's see if anyone else has other ideas.

UPDATE: See discontinuity_persists.mw (leaving aside what happens for INDEX>0.5):

1. Did your script contain a spelling mistake? See the two edited lines of code highlighted in electric blue.
2. When plotting for INDEX from 1e-3 to 0.5, why does the discontinuity at ~0.27 NOT go away despite reducing the step to 1e-4? (Plotting for INDEX from 0.250 by 1e-4 to 0.300 does make the discontinuity disappear. I thought that once the step is reduced to 1e-4 there would be no discontinuity at all, regardless of the plotting range.)

## "version for you"...

@Carl Love thank you for entering the thread. You can take a look at this worksheet: fsolve_loop_breaks.mw.

Hopefully your fix tackles both problematic iterations, that is, the one breaking fsolve() as well as the one causing the discontinuity. It would also be useful to include some basic statistics about the error magnitudes over the whole loop.

@mmcdara lambda_d1 = lambda_d2 and lambda_i1 = lambda_i2 only if sigma__v1=sigma__v2. All 6 lambdas are distinct if these two parameters are set to different values (see for example fsolve_help_mmcdara_ncal1.mw).
By the way, what is the purpose of wrapping the array construction in the if lam::set then statement? What would happen without it?

Two issues related to running your script for ncal2: fsolve_help_mmcdara_ncal2_breaks.mw

1. discontinuity emerges for sigma__delta1~0.27
2. fsolve() "breaks" at the 614th iteration

How can I avoid/prevent the discontinuity, and how can I "skip" the 614th iteration so that fsolve() runs "all the way" to the 1000th and final iteration and fully fills the lamRes1 array?
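To make the "skip" idea concrete, here is the loop structure I imagine (a sketch only; eqs, guess, lambdas and lam_ranges are hypothetical stand-ins for the worksheet's actual names):

```maple
# Hedged sketch: fsolve returns its own call unevaluated when it fails, so the
# result is of type 'set' only on success. Storing 'undefined' on failure lets
# the loop reach iteration 1000 instead of breaking at 614.
for k to 1000 do
    lam := fsolve(eqs(k), guess, lam_ranges);   # hypothetical names
    if lam::set then
        lamRes1[k] := eval(lambdas, lam);       # store the numeric solution
        guess := lam;                           # warm start for iteration k+1
    else
        lamRes1[k] := undefined;                # skip this iteration, continue
    end if;
end do:
```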

## More context: How do I replace fsolve no...

@Axel Vogt thank you but as I replied to @mmcdara I don't think the negativity is the reason why fsolve() breaks after a specific iteration. The fact that fsolve is inside the loop is an important aspect of the issue I am trying to solve.

@tomleslie once wrote:
"When data 'goes missing' it is because the fsolve() command is failing to find a solution for certain loop index values, at which point all subsequent code 'breaks'. In order to assist the fsolve() command as much as possible, there are a couple of possibilities:

1. one strategy is to restrict the region over which it has to search for a solution
2. a second strategy is to provide fsolve with a "starting point" which is "close" to the desired answer"

My worksheet fsolve_does_not_work.mw uses strategy (2) above: the starting point for the i-th loop iteration is the result of the (i-1)-th iteration. As the initial guess for the first iteration in each calibration I used {lambda__11 = 0.25, lambda__12 = 0.25, lambda__13 = 0.25, lambda__21 = 0.25, lambda__22 = 0.25, lambda__23 = 0.25, lambda__31 = 0.25, lambda__32 = 0.25, lambda__33 = 0.25}. This strategy generates values only up to iteration 55 out of 1000.

@Carl Love suggested to replace fsolve non-answers with undefined:
"Like many things in numerical analysis, fsolve is an iterative process attempting to create a numeric sequence (or a sequence of numeric vectors) that converges in some sense. Now, I've learned over the years that despite our best efforts (e.g., using all the tricks that Tom has provided), there will always be some cases where fsolve doesn't converge. Indeed, I don't think that I've ever had a practical system of nonlinear equations with parameters and more than 50 sets of parameter instantiations where I could get fsolve to give an answer for all instantiations. So, whenever you get a non-answer from fsolve, whatever values that would've gone into your plots had it returned numeric values should be replaced by the keyword undefined."

My questions then are:

1. How do I do this replacement within the fsolve() command? Could I then replace each such "undefined" with a simple interpolation between the last valid iteration and the next valid one? By doing so, perhaps I could nicely plot all my curves up to the 1000th iteration.
2. What is fsolve's default (implicit) convergence criterion? How do I output the error from within the fsolve() command, in case I want a better idea of the error magnitudes relative to the numerical solutions at each iteration?
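For question 1, the interpolation step could perhaps look like this (a sketch only; vals is a hypothetical stand-in for one row of my result array, and I assume the first and last entries are valid):

```maple
# Hedged sketch: fill 'undefined' gaps by linear interpolation between the
# nearest valid neighbours.
fill_gaps := proc(vals::Array)
    local k, lo, hi, n;
    n := numelems(vals);
    for k from 2 to n - 1 do
        if vals[k] = undefined then
            lo := k - 1;                 # last valid index (vals[1] assumed valid)
            hi := k + 1;                 # scan forward for the next valid index
            while hi < n and vals[hi] = undefined do hi := hi + 1; end do;
            if vals[hi] <> undefined then
                vals[k] := vals[lo] + (vals[hi] - vals[lo])*(k - lo)/(hi - lo);
            end if;
        end if;
    end do;
    vals;
end proc:
```

Would something along these lines be reasonable, or is there a cleaner built-in way?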

## @acer mmm, okay thank you. No other...

@acer mmm, okay thank you. No other way to simply show a longer line symbol in the legend box? Even though I really didn't want to, your example actually made me realize that perhaps using three different colors would look clearer overall...

## My question was different...

@mmcdara thank you for showing that my example was perhaps a bad one, since you actually found that there are no solutions assuming positivity of all 6 lambdas (I think this is consistent with the fsolve() loop breaking at the first iteration).

My question was different though: I would like to address fsolve() "fragility", i.e., I want to know how to help fsolve() to pin down solutions at each iteration.

Allow me to consider a different example. In fsolve_works.mw, fsolve() correctly generates the full array over 1000 iterations; note that sigma_v1=sigma_v2=0.1. However, in fsolve_does_not_work.mw, fsolve() correctly generates the array for the first 55 iterations and 'breaks' after that. Everything is the same as in the working worksheet except that sigma_v1=0.1 while sigma_v2=0.11 (only slightly different now). The fact that the first 55 iterations go through suggests that strictly positive solutions exist for all 9 lambdas (unlike my badly chosen original example), and yet fsolve() 'breaks' at a specific iteration. I want to know how to address this. Maybe skip the problematic iterations and interpolate to fill the gaps? How would I do that? What else could I try?

@mmcdara @acer ? Would you please follow up? I would prefer to have my remaining doubts clarified in this thread instead of opening a new question...

## @dharr thank you!! That's perfe...

@dharr thank you!! That's perfect. Yes, the scaling is super easy to see now. Most likely it will be this easy for all the other functions too. Moreover, the "new" notation and the related reduction of numerical evaluations make the deriv much more compact.

## Automated scaling...

@dharr thanks a lot. Works perfectly for all lambda_1 and lambda_2 derivatives.

However, as you anticipated, it's harder to manually scale derivatives of functions of lambda_1 and lambda_2. See for example d(beta_1)/d(sigma_d). An old answer of yours could perhaps help you think of a procedure to automatically do such scaling. See the weblink I attached at the bottom of this worksheet:

(How come there is no built-in Maple command that returns something similar to indets() but with powers?)

EDIT: after some manual handling I found that the transformation deriv/(sigma__d*gamma) nicely scales this specific derivative, so it's all good for this one. What I did: 1) first evaluate deriv at sigma__v1=Gamma__1/sigma__d/gamma, sigma__v2=Gamma__2/sigma__d/gamma; 2) then focus on the first term of deriv only. I am not sure whether I can do the same for the other derivatives, and with similar ease...
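In code, the manual handling I describe would be something like (a sketch only; deriv as computed in the worksheet below):

```maple
# Hedged sketch of the scaling described above: divide by sigma__d*gamma and
# rewrite sigma__v1, sigma__v2 in terms of Gamma__1 and Gamma__2.
scaledderiv := eval(expand(deriv/(sigma__d*gamma)),
    {sigma__v1 = Gamma__1/sigma__d/gamma, sigma__v2 = Gamma__2/sigma__d/gamma}):
indets(scaledderiv);   # check: ideally only Gamma__1 and Gamma__2 remain
```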

 > restart;
 > local gamma:

Non-dim and dim variables. Since there are more dim vars than non-dim vars, we allow sigma__d and sigma__v2 (and sigma__d3 if using the 3rd eqn) to remain; see dims. We can scale by these after the differentiation. (Definitions in startup code.)

 > 'nondims'=nondims; 'dims'=dims;
 (1)

d(beta__1)/d(sigma__d)

 > beta__1 := (gamma*sigma__v2^2 + 2*lambda__2)*sigma__v1^2/((2*gamma*(lambda__1 + lambda__2)*sigma__v2^2 + 4*lambda__1*lambda__2)*sigma__v1^2 + 4*lambda__1*lambda__2*sigma__v2^2);
 (2)
 > lambda1 := eval(eval(lambda__1, dims), Lambda__1 = Lambda__1(Gamma__1, Gamma__2)); lambda2 := eval(eval(lambda__2, dims), Lambda__2 = Lambda__2(Gamma__1, Gamma__2));
 (3)
 > beta1 := eval(beta__1,{lambda__1=lambda1,lambda__2=lambda2});
 (4)

Diff needs to know the functional dependencies of Gamma__1 and Gamma__2, but we withhold the actual function.

 > G1G2:={Gamma__1=Gamma__1(sigma__d, sigma__v1, gamma), Gamma__2=Gamma__2(sigma__d, sigma__v2, gamma)};
 (5)

Do the derivative.

 > deriv:=diff(eval(beta1, G1G2),sigma__d):

Substitute the pieces we will do numerically with functions. Start with the D[1](Lambda__1) etc

 > DiGjsubs:={seq(seq(D[i](cat(Lambda__,j))(Gamma__1(sigma__d, sigma__v1, gamma), Gamma__2(sigma__d, sigma__v2, gamma))=fdiff(cat(Lnum,j),[i],[Gamma__1,Gamma__2]),j=1..2),i=1..2)}: deriv:=subs(DiGjsubs,deriv):

Next, the functions Lambda__1 and Lambda__2 are replaced by the numerical routines (in startup code).

 > deriv:=subs(Lambda__1(Gamma__1(sigma__d, sigma__v1, gamma), Gamma__2(sigma__d, sigma__v2, gamma))=Lnum1(Gamma__1,Gamma__2),      Lambda__2(Gamma__1(sigma__d, sigma__v1, gamma), Gamma__2(sigma__d, sigma__v2, gamma))=Lnum2(Gamma__1,Gamma__2),deriv):

Now evaluate using the functional forms of Gamma__1 and Gamma__2

 > deriv:=eval(deriv,{Gamma__1(sigma__d, sigma__v1, gamma) = sigma__d*sigma__v1*gamma,             Gamma__2(sigma__d, sigma__v2, gamma) = sigma__d*sigma__v2*gamma});
 (6)
 > indets(deriv);
 (7)

Final scaling and removal of the gamma

 > #scaledderiv:=eval(expand(sigma__d^2/sigma__v1*deriv), {sigma__v1=Gamma__1/sigma__d/gamma,sigma__v2=Gamma__2/sigma__d/gamma});

Make it a function and check it out

 > #scaled_dlambda_1dsigma_d:=unapply(scaledderiv,Gamma__1,Gamma__2);
 > #plot3d(scaled_dlambda_1dsigma_d(Gamma__1,Gamma__2),Gamma__1=0.1..5, Gamma__2=0.1..5,grid=[25,25]);

Ideas for fixing final scaling (to obtain an expression only dependent on Gamma_1 and Gamma_2)

 > restart;
 > # Idea 1): something like the following, but taking as input A. the actual expression and B. the parameter in the expression whose highest exponent I need to find.
 > # Then apply such a new function to numer(deriv) and denom(deriv) separately and scale accordingly.
 > # Finally, check deriv_scaled with indets(): if indets does not include any of the fundamental parameters, we are done.
 > f := proc(x) `if`(x::`^`, op(2, x), 1) end:
 > easy := f(x^9);
 (8)
 > # wrong of course
 > medium := f(2+x^9);
 (9)
 > # also wrong of course
 > hard := f(2+x^9-a^4+b^2-10*a^7);
 (10)
 > # Idea 2): Some variation of one answer of yours: https://www.mapleprimes.com/questions/235941-How-To-Obtain-Degree-Of-X-To-The-Power.
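On the question above about a built-in command: Maple's degree() already returns the highest power of a variable in a polynomial expression, which may cover Idea 1 when applied to the polynomial numerator and denominator separately:

```maple
# degree(expr, x) gives the highest power of x in a polynomial expression
degree(x^9, x);                                  # 9
degree(2 + x^9, x);                              # 9
degree(2 + x^9 - a^4 + b^2 - 10*a^7, a);         # 7
# For a rational expression such as deriv, apply it to numer and denom:
# degree(numer(deriv), sigma__d) and degree(denom(deriv), sigma__d).
```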