mmcdara


MaplePrimes Activity


These are replies submitted by mmcdara

@Kitonum 

The expression I get is by far simpler... did I miss something?
Maybe using LargeExpressions:-Veil "freezes" the coefficients so that evalc no longer works as expected? The quick probe below seems consistent with this.
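
Here is a minimal probe of that hypothesis (it relies on evalc's convention that unknown names are assumed to be real):

restart;
with(LargeExpressions):
Veil[C](R + S*I):    # C[1] now hides a complex-valued expression
evalc(Im(C[1]));     # returns 0: evalc treats the veiled name C[1] as real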

Thanks in advance.

restart;

with(LargeExpressions):

eq := T*z^2 - (R + S*I)*z +G-K + H*I;

T*z^2-(R+I*S)*z+G-K+I*H

(1)

veq := collect(eq, z, Veil[C]);

T*z^2-z*C[1]+C[2]

(2)

sol:=solve(veq,z);

sol1,sol2:=evalc~([%])[];

expand(sol1*sol2)=a^2+b^2;

simplify(sol1+sol2=2*a);

simple_1 := simplify(sol1-sol2=2*b);

(1/2)*(C[1]+(-4*T*C[2]+C[1]^2)^(1/2))/T, -(1/2)*(-C[1]+(-4*T*C[2]+C[1]^2)^(1/2))/T

 

(1/2)*(C[1]+(1/2)*abs(4*T*C[2]-C[1]^2)^(1/2)*(1-signum(4*T*C[2]-C[1]^2)))/T+((1/4)*I)*abs(4*T*C[2]-C[1]^2)^(1/2)*(1+signum(4*T*C[2]-C[1]^2))/T, ((1/2)*C[1]-(1/4)*abs(4*T*C[2]-C[1]^2)^(1/2)*(1-signum(4*T*C[2]-C[1]^2)))/T-((1/4)*I)*abs(4*T*C[2]-C[1]^2)^(1/2)*(1+signum(4*T*C[2]-C[1]^2))/T

 

((1/8)*I)*abs(4*T*C[2]-C[1]^2)*signum(4*T*C[2]-C[1]^2)^2/T^2+(1/4)*abs(4*T*C[2]-C[1]^2)*signum(4*T*C[2]-C[1]^2)/T^2-((1/8)*I)*abs(4*T*C[2]-C[1]^2)/T^2+(1/4)*C[1]^2/T^2 = a^2+b^2

 

C[1]/T = 2*a

 

(1/2)*abs(4*T*C[2]-C[1]^2)^(1/2)*(I*signum(4*T*C[2]-C[1]^2)-signum(4*T*C[2]-C[1]^2)+1+I)/T = 2*b

(3)

rew := op(select(has, indets(simple_1), signum)[]) = Phi;

4*T*C[2]-C[1]^2 = Phi

(4)

simple_2 := map(factor, algsubs(rew, simple_1))

(-1/2+(1/2)*I)*abs(Phi)^(1/2)*(signum(Phi)-I)/T = 2*b

(5)

print~([(rhs=lhs)(rew), seq( C[i] = Unveil[C]( C[i] ), i=1..LastUsed[C] ) ]):

Phi = 4*T*C[2]-C[1]^2

 

C[1] = R+I*S

 

C[2] = I*H+G-K

(6)

eval(simple_2, (rhs=lhs)(rew));

(-1/2+(1/2)*I)*abs(4*T*C[2]-C[1]^2)^(1/2)*(signum(4*T*C[2]-C[1]^2)-I)/T = 2*b

(7)

Unveil[C](%);

(-1/2+(1/2)*I)*abs(4*T*(I*H+G-K)-(R+I*S)^2)^(1/2)*(signum(4*T*(I*H+G-K)-(R+I*S)^2)-I)/T = 2*b

(8)

 


 

Download LargeExpressions.mw

 

@dharr 

I was sure the challenge would tempt you.
I voted up, of course.

@Scot Gould 

Starting point:
Computing the integral of a function f(x) over an interval x=a..b can always be interpreted as computing the expectation of the random variable Y = f(X), where X is a Uniform random variable with support a..b.
Indeed, denoting phi(x) the probability density function of X (phi(x) = 1/(b-a) if a <= x <= b and 0 elsewhere):

J = int(f(x), x=a..b) = (b-a) * int(f(x)*phi(x), x=a..b)

The integral on the right represents the expectation (aka mean) E(Y) of the random variable Y.
Then J = (b-a)*E(Y).
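
A quick numerical check of this identity (a minimal sketch with my own toy choices f = x^2, a = 0.8, b = 3):

with(Statistics):
f := x -> x^2:
a, b := 0.8, 3.0:
X := RandomVariable(Uniform(a, b)):
int(f(x), x = a .. b), (b - a)*Mean(f(X));   # both return 8.8293...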

The Monte-Carlo* method, originally designed to assess the expectation of a random variable, generalizes to estimate the integral of a function over a bounded (generally connected) domain.
*This name has been adapted into Monegasque: Monte-Carlu [ˌmõteˈkaʀlu]. The typographical rules for toponyms used by the Imprimerie Nationale [the French national printing house] require Monte-Carlo to be written with a hyphen. It is generally pronounced /Monté-Carlo/, but some say /Monté-Carl'/.


Here is "your" problem:
How to assess the mean value of func(X) where X is a Uniform random variable with support 0.8..3?
One simple idea is to use a Monte-Carlo approach:

  • draw a sample S from X, and let n be the size of this sample,
  • then add(func~(S))/n is the Monte-Carlo estimator of Mean(func(X)) (see the sketch right after this list).
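
For instance (a minimal sketch; the sharply peaked func below is my own arbitrary example):

with(Statistics):
func := x -> exp(-200*(x - 1)^2):   # nearly zero except near x = 1
n := 10^4:
S := Sample(Uniform(0.8, 3.0), n):  # draw the sample S from X
MC_mean := add(func~(S))/n;         # Monte-Carlo estimator of Mean(func(X))
(3.0 - 0.8)*MC_mean;                # corresponding estimate of int(func(x), x = 0.8..3)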
     

What may go wrong in doing this?
Let us suppose that func(x) is a function whose value is close to zero over a large portion of the interval 0.8..3, and which takes very large values in a narrow subdomain of 0.8..3. Let us say the "effective" domain of variation of func has a size equal to w (for instance 0.001).
Drawing the sample S randomly will give you about n*w/(3-0.8) points in this "effective" domain of variation, meaning that we have only a small chance to capture the true behaviour of func in the region where it strongly changes. As a consequence your Monte-Carlo estimator add(func~(S))/n can be of poor quality.
Being of poor quality means that doing the same computation again with another sample S' (independent from the first one) can lead to a result significantly different from the first one, unless the value of n is huge.
So the estimator, of which add(func~(S))/n is one particular realization and add(func~(S'))/n another, is a random variable that may present large deviations: one says its variance is high.

The goal being to get reliable values for the estimator of Mean(func(X)), the naive way is to increase the value of n (the variance of the Monte-Carlo estimator decreases as n^(-1)). The drawback is an increase in computational time.
All the other ways consist in applying a more astute strategy where the sampling is done so as to capture the behaviour of func.
The simplest strategy is named Importance Sampling.
Its goal is to reduce the variance of the Monte-Carlo estimator by introducing an auxiliary random variable Z whose PDF (Probability Density Function) is closer to func than the PDF of X is.
Then drawing a sample from Z will generate points whose spreading gives a closer picture of the regions where func really varies.

The basic idea and an illustrative example are provided in this file Importance_Sampling_Idea.mw.
Look how Importance Sampling dramatically decreases the variance of the Monte-Carlo estimator.
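
For the record, here is a minimal self-contained sketch of the idea (the peaked func, the Normal(1, 0.1) proposal and the sample size are my own arbitrary choices, not those of the .mw files):

restart:
with(Statistics):
func := x -> exp(-200*(x - 1)^2):   # nearly zero on 0.8..3 except near x = 1
a, b := 0.8, 3.0:
n := 10^3:
# proposal density: a Normal(1, 0.1) PDF renormalized over a..b
phi := z -> exp(-(z - 1)^2/(2*0.1^2))/(0.1*sqrt(2*Pi)):
c := evalf(Int(phi(z), z = a .. b)):
g := z -> phi(z)/c:
Z := RandomVariable(Distribution(PDF = (z -> piecewise(a <= z and z <= b, g(z), 0)))):
S := Sample(Z, n):
add(func~(S) /~ g~(S))/n;   # importance-sampling estimate of int(func(x), x = a..b)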

Let us go back to the MMA code.
Here is a new file which ALMOST does in Maple what is done with MMA: TruncatedNormal_3.mw.
That is, assessing the mean value of func over the interval 0.8..3 by using both the crude Monte-Carlo method (which implicitly considers that the random variable X is uniform with some support, which I arbitrarily took as 0.8..3 but which could be something else, in which case the estimator is known up to an arbitrary multiplicative constant) and assessing the same estimator by using an auxiliary random variable Z whose PDF is "close" to func (Z is the truncated gaussian random variable).

You will see that the formula used to assess the Importance Sampling Estimator (ISE) does not exactly correspond to the formula the MMA code uses:

  1. The term 
    func[RV]/Distrib[RV, 1, 0.399]
    # which translates into
    func~(RV) /~ Distrib~(RV, 1, 0.399)
    should be 
    func[RV]*f[RV]/Distrib[RV, 1, 0.399]
    # which translates into
    func~(RV) *~ f~(RV) /~ Distrib~(RV, 1, 0.399)
    # where f is the PDF of the "native" random variable
    
    ...
    
    # unless f = 1 for any real (which seems to be nonsense as its integral
    # from -oo to +oo is not equal to 1!!!) but is classically used under the
    # name of "improper distribution" in Bayesian statistics
  2. The presence of the term 
    Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]
    Which seems to mean that Distrib is not a true distribution (this integral should naturally be equal to 1, which is the case with the construction of the TruncatedGaussian I made).
     


Recently added
The MMA code doesn't assess a mean value of func(X) but directly jumps to using the Monte-Carlo method to assess the integral of func(x) by using a majoring function (the PDF of the truncated gaussian random variable).
This is why the red line above is replaced by the pink one (see also the double green equality at the top of this reply).
The code is by far unnecessarily complex. For instance there is no need to use a truncated gaussian distribution to assess this integral; moreover, the "1.1*x-0.1" stuff is useless and brings nothing but confusion.
Whatever... this new file explains what Monte-Carlo integration is, from its historical origins to its classical present-day use.

Monte_Carlo.mw



About Monte-Carlo integration
"[You] wondered if the sampling technique provided was going to be more useful than the crude method as a demonstration of a single-variable example of Monte Carlo integration.  Of course, Maple and Mathematica can quickly integrate the function. "
The L2 norm of the integration error of the Monte-Carlo method is n^(-1/2) whatever the dimension of the integration domain. This decreases significantly more slowly than the error of a simple rectangle-rule integration, which is n^(-1)... but only in dimension 1: it degrades to n^(-1/d) in dimension d.
A Simpson method has an integration error which evolves as n^(-2/d) in dimension d and is then outperformed by Monte-Carlo integration in domains of dimension higher than 4 (n^(-1/2) beats n^(-2/d) as soon as 1/2 > 2/d, that is d > 4).
In fact the main interest of Monte-Carlo integration lies in the simplicity of the method and the fact that it is efficient in domains of any shape.
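
A rough 1D illustration of these rates (my own toy example, f = sin on 0..1 with n = 10^4; I use the midpoint variant of the rectangle rule):

restart:
with(Statistics):
f := x -> sin(x):
a, b := 0., 1.:
n := 10^4:
exact := evalf(Int(f(x), x = a .. b)):
S := Sample(Uniform(a, b), n):
MC := (b - a)*add(f~(S))/n:                                     # crude Monte-Carlo
Rect := (b - a)/n*add(f(a + (i - 1/2)*(b - a)/n), i = 1 .. n):  # midpoint rule
abs(MC - exact), abs(Rect - exact);   # in 1D the MC error is orders of magnitude larger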


Last point:
"Could you recommend a book, website, etc., that would help me get up to speed with probability distributions?"
I'll get back to you shortly.

I have heard positive feedback about Khan Academy (which I just browsed very quickly to form a first impression).
If by chance you speak French, WikiStat is really quite good (I am not sure there is an English version).
"Good" off-the-shelf books are not that easy to find (they surely do exist).

As I am very lazy, my line of conduct is "why redo what has already been done elsewhere?"
For example, Geogebra has a module for creating solid patterns (nets), and as I think you are a French speaker, see the video here: Obtenir un patron avec Geogebra

About your question concerning Maple and an easy way to develop a solid into its net, I believe the answer is no, it is not that easy.
What we can do easily as a first step is to:

  • "extract" the faces of a polyhedron defined by plottools
  • use GraphTheory to create cuts, duplicate vertices and edges and so on.

Here is an example... but as you will see the cuts are done by hand and the final result has to be rescaled according to the coordinates you use in plottools.
First_bricks.mw

Another way could be to use geom3d (instead of plottools): the faces are then obtained in a simple way, for instance

restart;
with(geom3d):
solid := tetrahedron(t, point(o, 1, 2, 3), 3):
draw(solid);
faces(solid);
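
If I remember correctly, faces(solid) returns each face as the list of the coordinates of its vertices, which gives a convenient starting point for the GraphTheory step mentioned above.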

For more information, search "patron d'un solide" ("net of a solid") in your favorite browser.
You will find HERE a detailed explanation of an algorithm based on the spanning tree of the graph associated with a convex polyhedron.

I remember @dharr recently did a lot of work concerning graphs; try browsing his answers/replies on this subject.

@jrive 

What if Rhi < Rlo?
Are imaginary resistors physical resistors?

Had you spoken of impedances instead of resistors (a resistor's impedance is real, whereas it is purely imaginary for a capacitor, for instance), I wouldn't have sent my comment.

Note that "my" equation (4) indicates there is no solution when Rhi < RIo.

@acer 

To avoid any misunderstanding: I will not say that you did something wrong, but that the OP did not consider some natural / physical conditions.
With this in mind, the solutions you get are not physical (as the OP speaks about resistors, all the Rs are strictly positive values, which leads to the condition Rhi > Rlo and excludes your second solution).

Anticipating a possible clarification from @jrive, this solution seems more physical:

restart

eq1 := (R2+Rhi)*R1/(R1+R2+Rhi) = Rlo

(R2+Rhi)*R1/(R1+R2+Rhi) = Rlo

(1)

 

eq2 := R1*Rlo/(R1+Rlo)+R2 = Rhi

R1*Rlo/(R1+Rlo)+R2 = Rhi

(2)

evala({solve({eq1, eq2}, {R1, R2}, explicit)})

{{R1 = ((Rhi-Rlo)*Rhi)^(1/2)*Rlo/(Rhi-Rlo), R2 = ((Rhi-Rlo)*Rhi)^(1/2)}, {R1 = -((Rhi-Rlo)*Rhi)^(1/2)*Rlo/(Rhi-Rlo), R2 = -((Rhi-Rlo)*Rhi)^(1/2)}}

(3)

# The previous solutions require Rhi > Rlo and all the impedances to be strictly positive
# for these solutions to be physical.

physical_sol := solve({eq1, eq2}, {R1, R2}, explicit, useassumptions) assuming positive

physical_sol := piecewise(Rlo < Rhi, [{R1 = sqrt(Rhi)*Rlo/sqrt(Rhi-Rlo), R2 = (Rhi-Rlo)*(Rhi+sqrt(Rhi)*sqrt(Rhi-Rlo))/(sqrt(Rhi)*sqrt(Rhi-Rlo)+Rhi-Rlo)}], [])

(4)

# So, I would say the physical solutions are

physical_sol := (eval(physical_sol) assuming Rlo < Rhi)[]

{R1 = Rhi^(1/2)*Rlo/(Rhi-Rlo)^(1/2), R2 = (Rhi-Rlo)*(Rhi+Rhi^(1/2)*(Rhi-Rlo)^(1/2))/(Rhi^(1/2)*(Rhi-Rlo)^(1/2)+Rhi-Rlo)}

(5)
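
# A quick numeric sanity check of (5), with my own illustrative values Rhi = 2, Rlo = 1:

check := eval(physical_sol, {Rhi = 2., Rlo = 1.}):
eval({eq1, eq2}, check union {Rhi = 2., Rlo = 1.});
# both equations reduce to numeric identities (1. = 1. and 2. = 2., up to rounding)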
 

 

Download ac_2_mmcdara.mw


It would be clearer if you uploaded your Maple worksheet (use the big green arrow in the menu bar).

@Moqifang 

restart

u(xi):=c[0]+c[1]*a^(f(xi))

c[0]+c[1]*a^f(xi)

(1)

Eq1 := -(alpha^2 + beta)*u(xi) + k^2*diff(u(xi), xi $ 2) + b*u(xi)^3 = 0

-(alpha^2+beta)*(c[0]+c[1]*a^f(xi))+k^2*(c[1]*a^f(xi)*(diff(f(xi), xi))^2*ln(a)^2+c[1]*a^f(xi)*(diff(diff(f(xi), xi), xi))*ln(a))+b*(c[0]+c[1]*a^f(xi))^3 = 0

(2)

D1 := diff(f(xi), xi) = (a^(-f(xi))*p + q + r*a^f(xi))/ln(a);

diff(f(xi), xi) = (a^(-f(xi))*p+q+r*a^f(xi))/ln(a)

(3)

D2 := diff(f(xi), xi$2) = eval(diff(rhs(D1), xi), D1);

diff(diff(f(xi), xi), xi) = (-a^(-f(xi))*(a^(-f(xi))*p+q+r*a^f(xi))*p+r*a^f(xi)*(a^(-f(xi))*p+q+r*a^f(xi)))/ln(a)

(4)

DEq1_0 := eval(Eq1, {D1, D2})

-(alpha^2+beta)*(c[0]+c[1]*a^f(xi))+k^2*(c[1]*a^f(xi)*(a^(-f(xi))*p+q+r*a^f(xi))^2+c[1]*a^f(xi)*(-a^(-f(xi))*(a^(-f(xi))*p+q+r*a^f(xi))*p+r*a^f(xi)*(a^(-f(xi))*p+q+r*a^f(xi))))+b*(c[0]+c[1]*a^f(xi))^3 = 0

(5)

# Setting Z = a^f(xi)

DEq1_1 := collect(eval(expand(DEq1_0), a^f(xi) = Z), Z)

(2*k^2*r^2*c[1]+b*c[1]^3)*Z^3+(3*k^2*q*r*c[1]+3*b*c[0]*c[1]^2)*Z^2+(2*k^2*p*r*c[1]+k^2*q^2*c[1]+3*b*c[0]^2*c[1]-alpha^2*c[1]-beta*c[1])*Z+k^2*c[1]*p*q+b*c[0]^3-c[0]*alpha^2-c[0]*beta = 0

(6)

# I don't understand why this command doesn't work ???
coeffs(DEq1_1, Z);

Error, invalid arguments to coeffs
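
# A likely explanation (my reading of the coeffs requirements): DEq1_1 is an
# equation, not a polynomial, and coeffs does not map over "=".
# Applying it to the left-hand side should work:
coeffs(lhs(DEq1_1), Z);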

 

# Workaround

with(LargeExpressions):

collect(DEq1_1, Z, Veil[C] );
CoefficientNullity := [ seq( 0 = factor(Unveil[C]( C[i] )), i=1..LastUsed[C] ) ];



sols := solve({CoefficientNullity[], b > 0}, [alpha, c[0], c[1]]):
sols := eval(sols) assuming b > 0;
whattype(sols);

print(cat('_'$120)):
print~(sols):

Z^3*C[1]+3*Z^2*C[2]-Z*C[3]-C[4] = 0

 

[0 = c[1]*(2*k^2*r^2+b*c[1]^2), 0 = c[1]*(k^2*q*r+b*c[0]*c[1]), 0 = c[1]*(-2*k^2*p*r-k^2*q^2-3*b*c[0]^2+alpha^2+beta), 0 = -k^2*p*q*c[1]-b*c[0]^3+alpha^2*c[0]+beta*c[0]]

 

[[c[0] = 0, c[1] = 0], [alpha = (b*c[0]^2-beta)^(1/2), c[1] = 0], [alpha = -(b*c[0]^2-beta)^(1/2), c[1] = 0], [alpha = -(1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = -((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = -I*2^(1/2)*r*k/b^(1/2)], [alpha = -(1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = ((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = I*2^(1/2)*r*k/b^(1/2)], [alpha = (1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = -((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = -I*2^(1/2)*r*k/b^(1/2)], [alpha = (1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = ((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = I*2^(1/2)*r*k/b^(1/2)]]

 

list

 

________________________________________________________________________________________________________________________

 

[c[0] = 0, c[1] = 0]

 

[alpha = (b*c[0]^2-beta)^(1/2), c[1] = 0]

 

[alpha = -(b*c[0]^2-beta)^(1/2), c[1] = 0]

 

[alpha = -(1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = -((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = -I*2^(1/2)*r*k/b^(1/2)]

 

[alpha = -(1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = ((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = I*2^(1/2)*r*k/b^(1/2)]

 

[alpha = (1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = -((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = -I*2^(1/2)*r*k/b^(1/2)]

 

[alpha = (1/2)*(8*k^2*p*r-2*k^2*q^2-4*beta)^(1/2), c[0] = ((1/2)*I)*k*q*2^(1/2)/b^(1/2), c[1] = I*2^(1/2)*r*k/b^(1/2)]

(7)

# In case you would like to remove complex solutions:

RealSols := remove(has, sols, I):
print~(RealSols):

[c[0] = 0, c[1] = 0]

 

[alpha = (b*c[0]^2-beta)^(1/2), c[1] = 0]

 

[alpha = -(b*c[0]^2-beta)^(1/2), c[1] = 0]

(8)

# Check:

eval(DEq1_1, RealSols[1]);
simplify(eval(DEq1_1, RealSols[2]));
simplify(eval(DEq1_1, RealSols[3]));

0 = 0

 

0 = 0

 

0 = 0

(9)
 

NULL

Download maple_sheet_mmcdara_2.mw

@CaptNnFalcon 

See HERE for some hints.

@C_R 

Thanks for taking over.

@MaPal93 

Z has a trivial root theta = 0 and I focus on the other one.

What is HDT? Read my code (equation (5)).

The sign of -HDT is "+" if HDT < 0 and "-" if HDT > 0.

By the way, if you are puzzled by the meaning of HDT, make your choice here: acronyms

@MaPal93 

What do you think will happen if you add this constraint?
Do you sincerely believe that this will change something?

Download What_do_you_expect.mw

I believe you should question your intuition or, if you are certain of it, write the assumptions into your problem BEFORE verifying whether your intuition is right or not.
What you are trying to achieve right now is to find, through trial and error, the assumptions that will prove A POSTERIORI that your intuition is good... which has never been a scientific way to proceed.

@MaPal93 

You "might be missing out some additional constraint" ?
No, you surely did.

Your equation (6) reads

convert(difference_term, horner, theta)

theta*((-lambda__1 - lambda__2 - lambda__3)*theta - X__3*lambda__3 + X__2*lambda__2 + X__1*lambda__1 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3))

Considered as a polynomial in theta, it has 2 zeros (one of them being trivial).
The other one is:

Z = (2*X__1*lambda__1+2*X__2*lambda__2-2*X__3*lambda__3+delta__1*lambda__1+delta__2*lambda__2-delta__3*lambda__3)/(lambda__1+lambda__2+lambda__3)
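
A quick way to recover this root (a minimal sketch, rebuilding difference_term from the horner form above):

P := theta*((-lambda__1 - lambda__2 - lambda__3)*theta - X__3*lambda__3 + X__2*lambda__2 + X__1*lambda__1 + lambda__1*(X__1 + delta__1) + lambda__2*(X__2 + delta__2) - lambda__3*(X__3 + delta__3)):
[solve(P = 0, theta)];   # returns [0, Z] (the second root, once simplified, is Z above)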

Then difference_term has one sign to the right of Z and the opposite sign to the left of it.
The only possibility would be that, for any values of the Xs, deltas and lambdas, difference_term > 0 to the right of Z AND Z < 0.
Nothing in your assumptions enables one to think this is the case.

Take a numerical example

ru := rand(0. .. 10.0):
indets(wo_theta, name):
pars := convert(% minus {DEV, X__1, X__2, X__3}, list):
simplify( eval( wo_theta-with_theta, pars=~[seq(ru(), k=1..numelems(pars))] ) ):

expl := collect(%, theta);
6.308500620*theta^2 + (0.6008920768*X__3 - 5.051628992*X__2 - 6.964480172*X__1 - 30.63867180)*theta

Why would this expression be positive?

For instance expl < 0 as soon as X__3 satisfies

solve(expl < 0, X__3) assuming theta > 0
{X__3 < -10.49855850*theta + 8.406882345*X__2 + 11.59023465*X__1 + 50.98864336}

Example:

eval(%, theta=1)
{X__3 < 40.49008486 + 8.406882345*X__2 + 11.59023465*X__1}

So even assuming the Xs are > 0, there exists an X__3 such that wo_theta - with_theta < 0.

@dharr 

Sorry for this late reply.
I've just seen that  @acer identified the problem.

1 2 3 4 5 6 7 Last Page 1 of 126