
## @C_R thank you for the detailed ans...

@C_R thank you for the detailed answer.

Isn't this equivalent to the solution I mentioned in my last comment to @mmcdara ??

That is, wouldn't it be simpler to just take the linear term of the polynomial (since the quadratic term is surely negative), plug the condition that cancels the X terms into it, and check when the resulting expression is negative? Then, either:

1. lambda_1*delta_1 + lambda_2*delta_2 - lambda_3*delta_3 > 0 and theta < 0, or
2. lambda_1*delta_1 + lambda_2*delta_2 - lambda_3*delta_3 < 0 and theta > 0

In other words, it would suffice to show (which I can't do at the moment with just the equations I have) that lambda_1*delta_1+lambda_2*delta_2-lambda_3*delta_3 and theta having the same sign leads to a contradiction. That would prove that the two terms can only have opposite signs or, equivalently, that not only the quadratic term but also the linear term is necessarily negative.
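Spelling this out (just a restatement of the argument above, writing c for the coefficient that remains once the X terms are cancelled):

```latex
P(\theta) = -(\lambda_1+\lambda_2+\lambda_3)\,\theta^{2} + c\,\theta
          = \theta\,\bigl(c - (\lambda_1+\lambda_2+\lambda_3)\,\theta\bigr),
\qquad c = \lambda_1\delta_1 + \lambda_2\delta_2 - \lambda_3\delta_3 .
```

If c and theta have opposite signs, the two factors theta and c - (lambda_1+lambda_2+lambda_3)*theta have opposite signs as well, so P(theta) < 0. If instead they share a sign (say both positive), then P(theta) > 0 on 0 < theta < c/(lambda_1+lambda_2+lambda_3), which is exactly why the opposite-sign condition is needed.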

## @mmcdara yes I do agree that my str...

@mmcdara yes, I do agree that my strategy is flawed, or at least missing some crucial elements, but I am a bit confused by the last paragraph of your script. Would you mind elaborating, please?

So, what you suddenly call Z would be the Non_Trivial_Theta_Root found just above, right? And what does HDT stand for? And when you talk about the sign of -HDT, do you simply mean positive (since all the lambdas are positive)? If I understood the broader picture correctly, you are basically saying that the polynomial in theta (diffcond) is not necessarily negative, as I was hoping, since it has a zero, i.e., it changes sign at theta = Non_Trivial_Theta_Root. In other words, it cannot be negative for every value of theta. But I am not sure I follow why it matters whether such a zero is positive or negative...

Perhaps I am asking something really trivial which I can't see at the moment (might be too tired), but it would be helpful if you could rephrase that last paragraph...

For example, wouldn't it be simpler to just take the linear term of the polynomial (since the quadratic term is surely negative), plug the condition that cancels the X terms into it, and check when the resulting expression is negative? Then, either:

1. lambda_1*delta_1 + lambda_2*delta_2 - lambda_3*delta_3 > 0 and theta < 0, or
2. lambda_1*delta_1 + lambda_2*delta_2 - lambda_3*delta_3 < 0 and theta > 0

In other words, it would suffice to show (which I can't do at the moment with just the equations I have) that lambda_1*delta_1+lambda_2*delta_2-lambda_3*delta_3 and theta having the same sign leads to a contradiction. That would prove that the two terms can only have opposite signs or, equivalently, that not only the quadratic term but also the linear term is necessarily negative.

## constraint...

@mmcdara what if I add the following constraint?

`X__3*lambda__3-X__2*lambda__2-X__1*lambda__1=0`
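As a quick cross-check of what this constraint buys, here is the substitution done in SymPy rather than Maple (an independent sanity check, not part of the worksheet; variable names mirror it):

```python
# Sanity check in SymPy: with the proposed constraint, the coefficient of
# theta in difference_term loses its X-dependence entirely.
import sympy as sp

X1, X2, X3 = sp.symbols('X1 X2 X3')
l1, l2, l3 = sp.symbols('lambda1 lambda2 lambda3', positive=True)
d1, d2, d3 = sp.symbols('delta1 delta2 delta3', real=True)

# coefficient of theta in difference_term (from output (6) below)
coeff = (X1*l1 + X2*l2 - X3*l3
         + l1*(X1 + d1) + l2*(X2 + d2) - l3*(X3 + d3))

# impose X__3*lambda__3 - X__2*lambda__2 - X__1*lambda__1 = 0,
# i.e. X3 = (X1*l1 + X2*l2)/l3
reduced = sp.simplify(coeff.subs(X3, (X1*l1 + X2*l2)/l3))
print(reduced)  # only the lambda*delta terms remain
```

So under this constraint the linear coefficient collapses to lambda_1*delta_1 + lambda_2*delta_2 - lambda_3*delta_3, i.e. the X terms cancel exactly.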

## looking at the linear term...

@mmcdara thanks, but I'd rather avoid numerical approaches as much as possible to show this. I agree with you that I might be missing some additional constraint...

Anyway, here I analyze the linear term further (since the quadratic term in theta is trivially always negative). Perhaps output (8) is useful in some way...

 > restart;
 > local gamma;
 (1)
 > assume(0 < gamma, 0 < nu__02, 0 < nu__01, 0 <= sigma__v, delta__1::real, delta__2::real, delta__3::real, theta::real); interface(showassumed=0);
 (2)
 > wo_theta := X__3*(-X__3*lambda__3 - delta__3*lambda__3 + DEV) + X__2*(-X__2*lambda__2 - delta__2*lambda__2 - nu__02) + X__1*(-X__1*lambda__1 - delta__1*lambda__1 - nu__01) + X__2*(nu__02 + DEV/2) + X__1*(nu__01 + DEV/2) - gamma*X__2^2*sigma__v^2/4 - gamma*X__1^2*sigma__v^2/4 + gamma*X__2*X__1*sigma__v^2/2;
 (3)
 > with_theta := X__3*(-X__3*lambda__3 - theta*lambda__3 - delta__3*lambda__3 + DEV) + X__2*(-X__2*lambda__2 + theta*lambda__2 - delta__2*lambda__2 - nu__02) + X__1*(-X__1*lambda__1 + theta*lambda__1 - delta__1*lambda__1 - nu__01) + X__2*(nu__02 + DEV/2) + X__1*(nu__01 + DEV/2) - gamma*X__2^2*sigma__v^2/4 - gamma*X__1^2*sigma__v^2/4 + gamma*X__2*X__1*sigma__v^2/2 + theta*(lambda__1*(X__1 + delta__1 - theta) + lambda__2*(X__2 + delta__2 - theta) - lambda__3*(X__3 + delta__3 + theta));
 (4)
 > collect(with_theta, theta);
 (5)
 > solve(wo_theta > with_theta, theta) assuming 0 < gamma, 0 < nu__02, 0 < nu__01, 0 < sigma__v, delta__1::real, delta__2::real, delta__3::real, theta::real;
 > solve(with_theta < wo_theta, theta);
 > difference_term := (-lambda__1 - lambda__2 - lambda__3)*theta^2 + (X__1*lambda__1 + X__2*lambda__2 - X__3*lambda__3 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3))*theta;
 (6)
 > # I would expect difference_term to be < 0 for any theta different from 0.
 > # (Note that lambda_1, lambda_2, and lambda_3 are always > 0, while theta, the three X, and the three delta can be positive or negative. In other words, it suffices to show that the linear term in theta is always negative.)
 > linear_term := (X__1*lambda__1 + X__2*lambda__2 - X__3*lambda__3 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3))*theta;
 (7)
 > solve(linear_term<0,theta);
 (8)
 > solve(0 < X__1*lambda__1 + X__2*lambda__2 - X__3*lambda__3 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3),[delta__1,delta__2,delta__3]) assuming lambda__1>0,lambda__2>0,lambda__3>0;
 (9)
 > solve(X__1*lambda__1 + X__2*lambda__2 - X__3*lambda__3 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3) < 0,[delta__1,delta__2,delta__3]) assuming lambda__1>0,lambda__2>0,lambda__3>0;
 (10)

EDIT: I am now wondering whether the additional specification missing from my system is that the linear term in theta is actually 0 (in which case the sum of the two theta terms, linear and quadratic, is negative overall...)
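If that is indeed the missing specification, the conclusion would be immediate (writing c for the coefficient of theta in difference_term):

```latex
c = 0 \;\Longrightarrow\;
\textit{difference\_term} = -(\lambda_1+\lambda_2+\lambda_3)\,\theta^{2} < 0
\quad \text{for all } \theta \neq 0,
```

since lambda__1, lambda__2, lambda__3 > 0.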

Download inequality_new.mw

## putting all together...

@dharr I was trying to use the non-dimensionalized expressions obtained in Case_with_radicals.mw and Case_2.mw to express the solution to a system of 2 equations in compact form (similarly to what you helped me with previously) for further analysis: putting_all_together.mw. The script includes (1) the system whose solution I can't rewrite in compact form, and (2) a working example for a simpler system, copied from a previous answer of yours.

My goal is the same as before: identify real and positive (ideally unique) solutions for lambda_1 and lambda_2, which in this case differ from each other. Thanks a lot for taking a look.

## @dharr thanks. 1. I understand now...

@dharr thanks.

1. I understand now. In the file I attach in point 2 I got rid of gamma^9*sigma__d^9, but I can't get rid of the rest...

2. What am I missing in the single equation case: Case_with_radicals.mw? And how do I extend this to a system of 2 equations (those two at the bottom of the script)?

## Follow-up questions...

@dharr thanks a lot. I have two follow-up questions:

1. In Case_1.mw (your script), dividing by sigma__d1^4 still leaves the difference equal to 0 (see the lines highlighted in yellow). Why doesn't it matter?
2. In Case_with_radicals.mw I tried to apply your script to two much simpler expressions, but with a (simple) radical the matrix of exponents gets weird. I suspect that I need to divide the radical out before that step, but I am not sure. Would you please take a look?

Thanks a lot.
[I confirm that p3_complex is fine (see Case_2.mw) but that p3_mostcomplex requires one more nondim var (see Case_3.mw). However, Case 3 simply collapses to Case 2 if the "extra" nondim var is set to 1, exactly as I expected.]

## crashes...

@dharr thanks a lot for all the details!

I added:

 > allvals := map(simplify, [allvalues(Lambda)]):
 > plot3d(allvals, Gamma = 0 .. 10, Psi = 0 .. 10);

at the bottom of your script, but it crashes during execution (after ~30 s). I have Maple 2023. Moreover, I get this prompt when trying to save the file (before I even try to execute):

Separately, do you think two non-dim variables will also be enough for the last two degree-10 polynomials in non-dimensionalization.mw?

## @dharr thanks a lot!...

@dharr thanks a lot!

## examples provided...

I included two simpler examples for a 4th-degree polynomial and a 6th-degree polynomial, respectively. The examples are from @dharr.

## A or B...

@mmcdara @Kitonum can I assume that Maple is always (consistently) responsive to the sign of terms under square roots?

In other words, whenever Maple outputs a result (to whatever computation I ask it to do, not just limit) which does not include signum(), should I conclude that (A) Maple didn't encounter any term of ambiguous sign or (B) Maple did encounter ambiguous terms but implicitly assumed their signs, let's say positive, so that its final output is without signum()?
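For intuition only (this is an analogy, not a statement about Maple's internals), SymPy behaves as in case (A): a sign-ambiguous subterm is left unevaluated unless an assumption pins down its sign:

```python
# In SymPy, sqrt(x**2) is only simplified to x once x is assumed positive;
# with no assumption the ambiguous form is left as-is rather than guessed.
import sympy as sp

x = sp.Symbol('x')                  # no sign assumption
xp = sp.Symbol('x', positive=True)  # explicitly assumed positive

print(sp.sqrt(x**2))    # stays sqrt(x**2): the sign of x is ambiguous
print(sp.sqrt(xp**2))   # simplifies to x: the assumption resolves the sign
```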

## @mmcdara very interesting... I ran...

@mmcdara very interesting...

I ran your script with Maple 2023. Perhaps Maple 2023 thinks my expression (b) is simpler than yours (a):

[worksheet outputs showing the two simplified forms omitted; see simp.mw]

Download simp.mw

Anyway, best answer of course!

## simplify() issue...

@mmcdara thanks a lot for all the work. But how about the simplify issue that I mentioned?

## Mean and Variance of A+B+C...

@mmcdara so the mean and variance of my A+B+C sum would simply be these:

Separately, I noticed (also on other occasions) that simplify() does not behave consistently. Your simplify() on Cov(A,B) gives a linear expression (no awkward fractions), which is easier to read than mine with fractions. How can I make Maple "prioritize" your simplify() output over mine?

 > restart
 > with(Statistics):
 > RVS := [Nu__1    = RandomVariable(Normal(0, sigma__nu)),
          Nu__2    = RandomVariable(Normal(0, sigma__nu)),
          Delta__1 = RandomVariable(Normal(0, sigma__d)),
          Delta__2 = RandomVariable(Normal(0, sigma__d)),
          Delta__3 = RandomVariable(Normal(0, sigma__d3))]
 (1)
 > X__1 := beta__1*(Nu__1+Nu__2)+alpha__1*Delta__1+alpha__2s*Delta__2;
 > X__2 := beta__2*(Nu__1+Nu__2)+alpha__2*Delta__2+alpha__1s*Delta__1;
 > X__3 := beta__3*(Nu__1+Nu__2)+alpha__3*Delta__3;
 (2)
 > A := X__1*(-lambda__1*X__1-lambda__1*Delta__1+Nu__1);
 > B := X__2*(-lambda__2*X__2-lambda__2*Delta__2+Nu__2);
 > C := X__3*(-lambda__3*X__3-lambda__3*Delta__3+Nu__1+Nu__2);
 (3)
 > `Cov(A, B)` := Covariance(eval(A, RVS), eval(B, RVS)):
 > `Cov(A, B)` := simplify(`Cov(A, B)`);
 (4)
 > # and so on for Cov(A, C) and Cov(B, C)
 > Obj := A+B+C;
 (5)
 > `Var(Obj)` := Variance(eval(Obj, RVS)):
 > `Var(Obj)` := simplify(`Var(Obj)`);
 (6)
 > `Exp(Obj)` := Mean(eval(Obj, RVS)):
 > `Exp(Obj)` := simplify(`Exp(Obj)`);
 (7)

Download No_problem_MaPal.mw
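As a side note, the bilinearity that makes Cov(A, B) a linear expression in the variances can be cross-checked in SymPy on a toy pair of normals (a sketch only, not the actual X and Delta definitions from the worksheet):

```python
# Toy cross-check: covariance of linear combinations of independent normals
# is bilinear in the coefficients, so the result is linear in the variances.
import sympy as sp
from sympy.stats import Normal, covariance

s = sp.Symbol('sigma_nu', positive=True)
Nu1 = Normal('Nu1', 0, s)
Nu2 = Normal('Nu2', 0, s)   # independent of Nu1 by construction

A = Nu1 + 2*Nu2
B = 3*Nu1 - Nu2

# Cov(A, B) = 3*Var(Nu1) - 2*Var(Nu2) = sigma_nu**2
print(sp.simplify(covariance(A, B)))
```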

## @acer good catch. My mistake. See n...

@acer good catch. My mistake. See new worksheet in question body.
