mmcdara


MaplePrimes Activity


These are replies submitted by mmcdara

@acer 

Thank you, acer, for this very clear answer.

@acer 

Hi, 
In my posted remark I used fsolve, but my first guess was to use RootFinding:-NextZero instead.
It seems that the latter is a little faster than the former.
Unfortunately, RootFinding:-NextZero doesn't seem capable of finding the two highest positive zeros of ChebyshevT(N, x) for N >= 25 (Maple 2015).
Do you think that fsolve is more versatile than RootFinding:-NextZero, or that some tuning of the latter is possible?

(just by curiosity)
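For the record, here is the kind of comparison I had in mind (a minimal sketch of my own; the starting point cos(3*Pi/(2*N)) is the known second-highest Chebyshev node, so both calls are asked for a zero in the region where NextZero fails for me):

restart:
N  := 25:
P  := x -> evalf(ChebyshevT(N, x)):          # operator form, as RootFinding:-NextZero expects
lo := evalf(cos(3*Pi/(2*N))) - 1e-3:         # just below the second-highest zero
# fsolve on the expression form, restricted to the right of lo
CodeTools:-Usage( fsolve(ChebyshevT(N, x), x = lo .. 1) );
# RootFinding:-NextZero, walking to the right from the same point
CodeTools:-Usage( RootFinding:-NextZero(P, lo) );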

A remark:
Computing the zeros of the ChebyshevT polynomial this way is quite inefficient:
 

restart:

CodeTools:-Usage( evalf(allvalues(RootOf(ChebyshevT(10, x), x))) ):
CodeTools:-Usage( evalf(allvalues(RootOf(ChebyshevT(11, x), x))) ):

memory used=22.28MiB, alloc change=36.00MiB, cpu time=507.00ms, real time=507.00ms, gc time=7.83ms
memory used=303.98MiB, alloc change=40.00MiB, cpu time=10.10s, real time=10.08s, gc time=192.72ms

# a better way

ZT := proc(N)
  local P := simplify(ChebyshevT(N, x), 'ChebyshevT'):  # explicit polynomial form of ChebyshevT(N, x)
  local z, n, M, zr:
  z := 0:
  M := `if`(N::even, N/2, (N-1)/2);                     # number of strictly positive zeros
  for n from 1 to M do
    # find the next zero to the right of the last one found
    z := z, fsolve(P, x=z[-1]..1, avoid={x=z[-1]})
  end do; 
  zr := -z[2..-1][]:                                    # the negative zeros, by symmetry
  if N::even then z := z[2..-1] end if:                 # 0 is a zero only when N is odd
  sort([zr, z])
end proc:

CodeTools:-Usage(ZT(10));
CodeTools:-Usage(ZT(11)):

memory used=1.17MiB, alloc change=0 bytes, cpu time=19.00ms, real time=19.00ms, gc time=0ns
memory used=0.67MiB, alloc change=0 bytes, cpu time=10.00ms, real time=11.00ms, gc time=0ns

CodeTools:-Usage(ZT(35)):
memory used=5.41MiB, alloc change=32.00MiB, cpu time=180.00ms, real time=161.00ms, gc time=41.76ms
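Just for completeness (my own aside): the zeros of ChebyshevT(N, x) are known in closed form, cos((2*k-1)*Pi/(2*N)) for k = 1..N, so for this particular family no root finding is needed at all:

ZTexact := N -> sort(evalf([seq(cos((2*k - 1)*Pi/(2*N)), k = 1 .. N)])):
CodeTools:-Usage(ZTexact(35)):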

 

@Carl Love 

Thanks for your answer.

Here is the plot, you can take it and put it into your own answer



Bravo again

@Carl Love 

Great !!!

I had thought about replacing the log of a product of derivatives by a sum of logarithms, but I never pushed it through.
Not fixing the number of iterations a priori is an excellent idea (especially since we only discover along the way how many iterations are needed to converge).
Could you help me understand why these lines seem to be the key to your procedure?

F := a-> x-> a*x*(1-x);
f := parse(sprintf("%a", eval(F)));
D(f);
            a -> x -> a x (1 - x)
            a -> x -> a x (1 - x)
                  0
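To make explicit what I meant by replacing the log of a product of derivatives by a sum of logarithms, here is a minimal sketch of my own (not Carl's procedure): by the chain rule the derivative of the n-th iterate at x0 is the product of f' along the orbit, so the exponent can be accumulated as a running sum of logs instead of differentiating the huge iterated expression.

restart:
f  := (a, x) -> a*x*(1 - x):       # logistic map
df := (a, x) -> a*(1 - 2*x):       # its derivative with respect to x
lyap := proc(a, x0, n)
  local x, s, k;
  x := x0;
  s := 0;
  for k from 1 to n do
    s := s + ln(abs(df(a, x)));    # sum of logs instead of log of a product
    x := f(a, x);
  end do;
  evalf(s/n)
end proc:
lyap(3.7, 0.2, 1000);              # Lyapunov exponent estimate at a = 3.7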


Obviously I vote up; twice rather than once, if I could.

@Christopher2222 

I agree one hundred percent

@Pepini    @Carl Love   @acer 

I would like to have your opinion about the following idea: is it interesting, or is it a stupid white elephant?

Let's start with this code

f := a -> x -> a*x*(1 - x);
g := n -> a -> x -> (f(a)@@n)(x):
a -> x -> a x (1 - x)
r := [seq(3..4, 0.01)]:

iter := 10:

expr := eval(diff(g(iter)(a)(x), x), x=x0):

My first idea to reduce the computational time was to convert expr into its Horner form in order to reduce the number of operations.
This approach speeds up the computation as long as iter <= 8, but Maple (2015) seems to fail to find the Horner form for higher values of iter.
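To be concrete, here is the kind of thing I did (a minimal sketch with iter = 5 and x0 = 0.2):

restart:
f  := a -> x -> a*x*(1 - x):
g  := n -> a -> x -> (f(a)@@n)(x):
p  := eval(diff(g(5)(a)(x), x), x = 0.2):   # derivative of the 5th iterate: a polynomial in a
ph := convert(p, horner, a):                # the same polynomial in nested (Horner) form
length(p), length(ph);                      # compare the sizes of the two representations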

So I thought of this: CodeGeneration (here CodeGeneration:-Matlab) can build an optimized representation of an expression (here a simple polynomial).
I then did this:

expr := CodeTools:-Usage( eval(diff(g(11)(a)(x), x), x=x0) ):

writeto(cat(currentdir(), "/opt")):
CodeGeneration:-Matlab(expr, optimize):
writeto(terminal):
                               0

# Open file "opt"
# Change " ^ " into "^"
# In this worksheet:
#    type ftn := proc(a)
#    copy paste the content of file "opt"
#    add "return=%:"
#    add "end proc":

Here are the computational times for iter=10 and 101 pts between 3 and 4.

restart;
with(plots):
M := 100;
r := evalf([seq(a/M, a = 3*M .. 4*M)]):
x := t -> t;
x0 := 0.2;
                              100
t -> t
                              0.2
f := a -> x -> a*x*(1 - x);
g := n -> a -> x -> (f(a)@@n)(x):
a -> x -> a x (1 - x)
r := [seq(3..4, 0.01)]:

iter := 10:

t0 := time():

expr   := eval(diff(g(iter)(a)(x), x), x=x0):
phi    := unapply(expr, a):
L      := phi~(r):
lambda := (ln@abs)~(L)/~iter: 

time()-t0;


                             1.308

versus


t0 := time():

L      := ftn~(r):
lambda := (ln@abs)~(L)/~iter: 

time()-t0;
                             0.018

Computing 1001 pts with iter=20 requires only 0.675 seconds (plus 20 s to generate the optimized MATLAB code and... let's say half a minute to transform it into Maple code).

Lyapounov_2.mw

Example:


Of course it is a very cumbersome and partly manual strategy. It could be improved if it were possible to generate the MATLAB code, not of expr itself, but of a procedure ftn such as ftn := unapply(expr, a), and then simply use Matlab:-FromMFile to automatically create the Maple code of ftn.
But it seems that CodeGeneration:-Matlab doesn't handle operator forms like ftn?

Is there a procedure like CodeGeneration:-Matlab(expr, optimize) which transforms a Maple expression into another one whose number of operations is minimal?
I've recently seen some answers concerning simplify/size, but this function seems extremely slow compared to CodeGeneration:-Matlab(expr, optimize).

Do you think that this approach could be useful in computing this Lyapounov exponent, or do you think it's a do-it-yourself dead end?
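For what it's worth, one possibility I have not benchmarked (so take it as an assumption rather than a recommendation) is to stay entirely inside Maple with the older codegen package, which can wrap the expression in a procedure and apply common-subexpression optimization to it:

restart:
f := a -> x -> a*x*(1 - x):
g := n -> a -> x -> (f(a)@@n)(x):
expr    := eval(diff(g(10)(a)(x), x), x = 0.2):      # same expression as above, iter = 10
ftn     := codegen[makeproc](expr, a):               # a procedure of a equivalent to expr
ftn_opt := codegen[optimize](ftn):                   # common-subexpression elimination
codegen[cost](ftn), codegen[cost](ftn_opt);          # compare the operation counts
r := [seq(3 .. 4, 0.01)]:
CodeTools:-Usage( (ln@abs)~(ftn_opt~(r)) /~ 10 ):    # same Lyapunov estimate as before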

@Syrup_Maker 

You're welcome.
By the way, in case you are not familiar with animate: the number of frames per second can't be changed programmatically (maybe it can in more recent versions?), so you will have to adjust this number directly in the animation toolbar.

@Carl Love 

I didn't know.
I have read here several answers based on the use of this package (which I have never used myself... I found the help pages quite hard to read) and I had pictured it as some kind of "miracle package".
Thank you for this clarification.

I hope you will be able to speed up my preliminary work, for, at this stage, it is getting too technical for me.

@Preben Alsholm 

OK, thank you, Preben.
I'm going to copy my answer in the original thread.
I hope @jennierubyjane will follow the thread.

@tangentspace 

By the way, when I tried to run your code, I got some errors:
I've just edited my previous answer; sorry for the inconvenience.

"Can you see if it's still a saddle point if we look at it this way?"
Seen this way, the point [Rth=8.66666, Rload=8.48585] is a maximum, since both components of the gradient of power, now interpreted as a function of Rth and Rload alone, are negative:

Diff(power(Rth, Rload), Rth) = -0.2656550650e-3, 
Diff(power(Rth, Rload), Rload) = -0.465591929e-4

"Maybe I posed the question incorrectly. The real variables in this problem are Rload and Rth."
Sure, and it made me waste a lot of time.

Now let's go back to your original question:
Is an analytic solution possible for this problem?
Technically YES, but practically NO.
The expressions become far too complex to be solved in a reasonable amount of time and to be useful.
Look at the code below and replace the colons by semicolons to convince yourself how huge these expressions are.

restart:

params := [T1 = 310. ,  T2 = 300. , Rthhot = 0.1 , Rthcold = 6.,  R = 6. , Z=1/305.];

[T1 = 310., T2 = 300., Rthhot = .1, Rthcold = 6., R = 6., Z = 0.3278688525e-2]

(1)

f     := -(T1 - X)/Rthhot + (X - Y)/Rth + S^2*X*(X - Y)/(R + Rload) + (-1)*0.5*S^2*Rload*(X - Y)^2/(R + Rload)^2:

g     := (Y - T2)/Rthcold - (X - Y)/Rth - S^2*Y*(X - Y)/(R + Rload) + (-1)*0.5*S^2*Rload*(X - Y)^2/(R + Rload)^2:

h     := S^2*Rth/R - Z:
power := S^2*(X - Y)^2*Rload/(R + Rload)^2:

# define the maximization variables

vars := [Rth, Rload]

[Rth, Rload]

(2)

rule_h := isolate(h, S^2);

S^2 = Z*R/Rth

(3)

fg := eval([f, g], rule_h);

[-(T1-X)/Rthhot+(X-Y)/Rth+Z*R*X*(X-Y)/(Rth*(R+Rload))-.5*Z*R*Rload*(X-Y)^2/(Rth*(R+Rload)^2), (Y-T2)/Rthcold-(X-Y)/Rth-Z*R*Y*(X-Y)/(Rth*(R+Rload))-.5*Z*R*Rload*(X-Y)^2/(Rth*(R+Rload)^2)]

(4)

sol_XY := CodeTools:-Usage( solve(fg, [X, Y]) )[]:

memory used=28.99MiB, alloc change=32.00MiB, cpu time=342.00ms, real time=1.26s, gc time=6.76ms

 

length(sol_XY);       # roughly the number of characters used to represent sol_XY
has(sol_XY, RootOf);  # which shows that sol_XY is still not explicitly defined

19717

 

true

(5)

val_XY := [allvalues(sol_XY)]:
whattype(val_XY);     # val_XY is a list made of
numelems(val_XY);     # 3 couples [X=..., Y=...] of solutions
length(val_XY[1]);    # each solution has approximately this length
has(val_XY, RootOf);  # now val_XY contains explicit expressions for X and Y

list

 

3

 

2664045

 

false

(6)

po := eval(power, rule_h):
po := eval(po, val_XY[1]):
length(po);                 # this is the length of the function to maximize.
 

2664077

(7)

dpo := [diff(po, Rth), diff(po, Rload)]:
length(dpo);

# CodeTools:-Usage( solve(dpo, [Rth, Rload]) ):

41183710

(8)

 

 

Download YES_but_NO.mw

@dharr 

As I don't know what @The function's knowledge is, nor what they are really looking for, I deliberately presented a two-step approach based on elementary equivalences.
But you're right, this could be done in a single step.

@vv  We definitely don't have the same interpretation of the question: given the figure, I treated the case where the six lengths are six consecutive integers... which seemed a reasonable interpretation of a partly stated question.

@minhthien2016: It would have been nice if you had completely clarified your problem: what type of tetrahedron are you looking for? What does "Is there another tetrahedron like that?" mean to you?

@tangentspace 

"it seems to be the maximum." is not enough to say the solution is a maximum:

EDITED

restart:

Check MMA (Mathematica) solution

R :=  6:

power := S^2*(X - Y)^2*Rload/(R + Rload)^2;
MMA   := [X=309.918, Y =304.91, Rload=8.48585, Rth= 8.66666, S= 0.0476431];

S^2*(X-Y)^2*Rload/(6+Rload)^2

 

[X = 309.918, Y = 304.91, Rload = 8.48585, Rth = 8.66666, S = 0.476431e-1]

(1)

vars := indets(power, name);
D2   := [seq(diff(power, v$2), v in vars)];

{Rload, S, X, Y}

 

[-4*S^2*(X-Y)^2/(6+Rload)^3+6*S^2*(X-Y)^2*Rload/(6+Rload)^4, 2*(X-Y)^2*Rload/(6+Rload)^2, 2*S^2*Rload/(6+Rload)^2, 2*S^2*Rload/(6+Rload)^2]

(2)

# check

eval(D2, MMA)

[-0.908663370e-5, 2.028457996, 0.1835850884e-3, 0.1835850884e-3]

(3)

# MMA's solution is not a maximum but a saddle point

mma := map(u -> if lhs(u) in vars then u end if, MMA);

[X = 309.918, Y = 304.91, Rload = 8.48585, S = 0.476431e-1]

(4)

cl := NULL:
for k1 from 1 to 3 do
  for k2 from k1+1 to 4 do
    k := [k1, k2];
    m := select(`not`@`in`, [$1..4], k);
    r := seq(lhs(mma[m[n]])=0.5*rhs(mma[m[n]])..1.5*rhs(mma[m[n]]), n=1..2);
    cl := cl, plots:-contourplot(eval(power, mma[k]), r, title=typeset(fixed=lhs~(mma[k])), size=[400, 400])
  end do:
end do:

plots:-display(`<|>`(cl))

 

 

 

 

 

 

# The pattern around an extremum

expl := a^2+b^2+c^2+d^2;
plots:-contourplot(eval(expl, [a=0, b=0]), c=-1..1, d=-1..1, title=typeset(fixed=[a,b]))

a^2+b^2+c^2+d^2

 

 

 

Download constrained_maximization_3.mw
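As an extra cross-check (my own addition, not in the worksheet above), one can look at the full Hessian of power in the free variables at MMA's point: eigenvalues of both signs point to a saddle rather than a maximum, consistently with the pure second derivatives computed above.

restart:
R     := 6:
power := S^2*(X - Y)^2*Rload/(R + Rload)^2:
MMA   := [X = 309.918, Y = 304.91, Rload = 8.48585, S = 0.0476431]:
vars  := [Rload, S, X, Y]:
H  := VectorCalculus:-Hessian(power, vars):   # 4x4 Hessian in the free variables
He := evalf(map(eval, H, MMA)):               # evaluated at MMA's point
LinearAlgebra:-Eigenvalues(He);               # mixed signs => saddle point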

@tangentspace 

Could you give me an example so that I understand where I would be wrong?

Nevertheless there is something I do not understand: you define power (the objective function to maximize)  as a function of (X, Y, Rload, Rth, S), but Rth doesn't appear in its expression.

You say "if I assign numerical values to T1, T2, R, Rthhot, Rthcold, and Z,...", which means that Rth would be a maximization variable... do we agree on this?
If so, this is quite a strange problem, where a maximization variable appears only in the constraints...

Do we also agree that the maximization variables thus are Rload, S, X, Y?

In such a case, here is the edited version of my previous worksheet.
The conclusion hasn't changed: still no solution, even for the unconstrained problem.

So, here again, post your solution or I'm done with this.

constrained_maximization_2.mw
