C_R

MaplePrimes Activity


These are replies submitted by C_R

@Gabriel Barcellos

As I see it at the moment:

  • The equations eqm, eqm2 and eqTPO can be simplified to the equations below, but that is pretty much it. As @dharr said, the problem is the large number of exponentials, and this has not changed.
  • I could not optimize your original code further than the procedure p_O2 (see my answer to @acer). A gain in speed by a factor of 2 to 3 is possible, and on my fast machine I could compute solutions. On my old PC it simply took too long. The code should be faster with the above simplifications; I can try if you can provide simplified equations.
  • I do not think that approximating the exponentials by series is a good idea. I cannot see how you will get exact results.
  • I also think that using computer algebra is less prone to errors.
  • I still think that a worksheet for spin 3/2 would be helpful, so that @one man can have a look at the equations and apply a different root-finding method. If that works, it will be quite easy to scale up to spin 7/2. A worksheet with the big equations is simply not manageable because of long computation delays and the responsiveness of the GUI; I can hardly edit code on my machine. What is possible is replacing S0 and Z0 in a worksheet for spin 3/2 and then executing the worksheet for spin 7/2.

Update:

I forgot to upload

eqm_eqm2_eqTPO.mw (again corrected)

Update 2:

I had to correct eqTPO in the above.

Update 3:

I missed adding m2 to the argument of d__i in eqm2 and eqTPO. Now corrected.

Redundancies:
2 of the 6 sums in the above equations appear twice.

There is also redundancy in the sums of eqm and eqm2. Computing all 50238 exponentials, for example, in the denominator of

covers all the exponentials in the numerator. Here is code to verify this:

exp_n := indets(numer(lhs(eqm)), function): nops(%);  # exponentials in the numerator
exp_d := indets(denom(lhs(eqm)), function): nops(%);  # exponentials in the denominator
exp_d minus exp_n; nops(%);                           # exponentials only in the denominator

(Note: the difference is the set of exponentials that do not depend on m. This is the reason why I had to correct a second time.)

The number of non-redundant exp calls is less than 50% of the exp calls in the above equations. So far I have not found a smart way to remove the redundancies using Maple's strengths. It could be that fsolve already recognizes some of the redundancies and there is not much to gain. Maybe someone can tell.
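Outside of Maple, the effect of eliminating duplicated exp evaluations can be illustrated with a small Python sketch. The term lists below are made up (the real equations have tens of thousands of terms); only the caching idea carries over:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_exp(x):
    """Evaluate exp(x) only once per distinct argument."""
    return math.exp(x)

# Hypothetical stand-in for the equations: the numerator's exponents
# are a subset of the denominator's, as observed above.
den_exponents = [k * 1.0e-4 for k in range(5000)]
num_exponents = den_exponents[:2500]

# Evaluate the denominator first; the numerator then only hits the cache.
den = sum(cached_exp(x) for x in den_exponents)
num = sum(cached_exp(x) for x in num_exponents)
ratio = num / den

info = cached_exp.cache_info()  # 5000 misses, 2500 hits
```

Whether this pays off inside fsolve depends on how often the residual is re-evaluated with identical subexpressions, which is exactly the open question above.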

@acer 

I could make hfloat work together with the fsolve call; evalhf does not work when calling fsolve.
Making an optimized procedure out of the expression showed the biggest gain in computation time (real time).

It seems to me that for this particular problem there is not much to gain with hardware floats. I am surprised.

Download Campo_Médio_spin_7_2_-_Forum_optimize_03_b.mw

@Carl Love

How can what you demonstrated above be mapped to a list of procedures? Here is an example:

P:= [proc(x) sin(x) end proc,proc(x) sin(x)^2 end proc,proc(x) sin(x)^3 end proc]:

P(3.);

         [0.1411200081, 0.01991485669, 0.002810384737]

For the time being, I work separately, list element by list element.
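For comparison, the same pattern — applying each function in a list to one argument — written as a Python sketch (the three functions mirror the Maple example above):

```python
import math

# List of functions, analogous to the list of Maple procedures P
P = [lambda x: math.sin(x),
     lambda x: math.sin(x) ** 2,
     lambda x: math.sin(x) ** 3]

def apply_all(funcs, x):
    """Apply every function in the list to the same argument."""
    return [f(x) for f in funcs]

values = apply_all(P, 3.0)
# values ≈ [0.1411200081, 0.01991485669, 0.002810384737]
```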

@acer 

There was an example in my question that has disappeared. I didn't have time to rewrite it.
I've progressed enough (with the support I got from this question) in the other question with the 100k exp calls that I feel confident in giving an overview of some ways to improve computing performance. What I can see so far: the gain from hardware floats is limited.
I will post it there once presentable.

@acer 

dharr answered one part. His answer can be used for the second part of my question.

@dharr 

Excellent. Is there a way to do this with the body?

subs(sin = evalhf@sin, sin(x))
                        (evalhf@sin)(x)

@Gabriel Barcellos 

In the latest file I uploaded I tried Compiler:-Compile. It does not finish in a reasonable time, which is probably due to the large expressions. It takes much more time than generating a procedure internally (the help page indicates that this can be a problem).

For evalhf I will ask separately. I have tried a few things but always got error messages. Edit: This does not work in my examples.
It could be that evalhf is limited to procedures with not more than 500 function calls. In that case it is not an option.

A key to testing this and other new approaches would be a test file with fewer exponential expressions (e.g. spin 3/2). See my reply to @one man. If we get new methods to work, we can scale up to spin 7/2.

@dharr 

Not always: it is as if the GUI does some housekeeping from time to time.

I am wondering whether the context panel or the palette (or something else) plays a role, since they are updated depending on the kernel state and the cursor position.

Maybe there are GUI functions that can be turned off.

@one man 

My computer works but becomes unresponsive and I cannot work efficiently.

It should be possible to reduce the number of spins to work on algorithms and to better analyse the equations. 

If the OP is interested, he should provide S0 and Z0 for spin 3/2. This should make the problem manageable.

Let's wait for his feedback.

@one man 

The lhs of eqm and eqm2 is basically a fraction of two big sums of exponential functions in the parameters d, m, m2 and T.

The lhs and rhs of eqTPO contain sums; one summand is an ln function of a big sum of exponential functions.

I have updated my response with new procedures to evaluate the equations.

You can analyse it this way:

aaa := op(3, lhs(eqm2));   # third operand of the left-hand side
bbb := op(4, lhs(eqm2));   # fourth operand
whattype(aaa);
whattype(bbb);
nops(aaa);                 # number of operands
nops(bbb);
op(1 .. 4, aaa);           # first four summands
ccc := op(1, bbb);
nops(ccc);
op(1 .. 4, ccc);

@Gabriel Barcellos

The Time command I used in my worksheet measures the system time. The codetools:-usage output shows the CPU time and the real time; the latter should match the system-time difference before and after the fsolve call.
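The distinction between CPU time and real (wall-clock) time that codetools:-usage reports can be illustrated generically; a minimal Python sketch (not Maple-specific):

```python
import time

def busy_work(n):
    """Purely CPU-bound loop."""
    s = 0.0
    for i in range(n):
        s += (i % 7) * 0.5
    return s

wall_start = time.perf_counter()  # real (wall-clock) time
cpu_start = time.process_time()   # CPU time of this process

busy_work(500_000)

wall_elapsed = time.perf_counter() - wall_start
cpu_elapsed = time.process_time() - cpu_start
# For single-threaded CPU-bound work the two are close; multi-core work
# can make CPU time exceed real time, and waiting (I/O, sleep) makes
# real time exceed CPU time.
```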

@acer suggested something I have no experience with. If I interpret him correctly, he proposes converting the equations for fsolve into procedures. In doing so, evalhf can be applied to the procedure and the functions inside it, and code can also be generated with the Compiler:-Compile command.

Attached is a way that uses the codegen library to generate procedures. It is about 2 times faster. Since it takes 5 minutes to optimize the procedures, it is advisable to define and optimize them only once by including d as a fourth parameter. I have not done this for lack of time.

I am curious how much my attempt can be outperformed by other methods.

Campo_Médio_spin_7_2_-_Forum_optimize.mw

 

P.S.:

 

I have seen in your code that n and h are not used. I was further wondering whether an m is missing in the exponent of Z0 (since further down in the document a is replaced by J*m*something).

P.P.S.:

The root you want to encircle with several fsolve calls could perhaps be found with Draghilev's method.

Look at this example. Similarly to your problem, a projection from R^4 (in your case d, m, m2 and T) to R^3 (m, m2, T) could be performed. Varying d would walk you along the pink line toward the root you are looking for.

@one man could tell whether that is an option for your problem. As for @acer's suggestion, I have no practical experience with that method.
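The idea of walking along d toward a root can be sketched generically as a parameter continuation: solve for one value of d, then reuse that root as the initial guess for the next d. A toy Python illustration with a made-up one-dimensional family (your system is four-dimensional, and Draghilev's method is more sophisticated than this):

```python
def newton(f, df, x0, tol=1e-12, maxit=50):
    """Basic scalar Newton iteration."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy family f(x; d) = x**3 - d.  Walk d from 0.5 to 5.0 and reuse the
# previous root as the starting point for the next value of d.
roots = []
x = 1.0
for k in range(1, 11):
    d = 0.5 * k
    x = newton(lambda t: t**3 - d, lambda t: 3.0 * t * t, x)
    roots.append(x)
```

Because each step starts close to the solution, the Newton iteration converges in a few steps, which is the main appeal of continuation over restarting fsolve blind each time.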

Update: This file contains procedures for the equations with the four parameters d, m, m2 and T

Campo_Médio_spin_7_2_-_Forum_optimize_02.mw

fsolve is a notch faster when using these. An attempt with Compiler:-Compile did not succeed.

@ecterrab 

Short version:

There is no seamless derivation top-down from the principle of least action

[equation image: the principle of least action]

to the principle of d’Alembert in its classical form

[equation image: d'Alembert's principle in its classical form]

(Seamless means without deliberately multiplying by -1 at some point.)

Long version:

It took several iterations to get it to the point. I hope it is understandable and sufficiently consistent.

My intention is to use the new function Physics:-LagrangeEquations for d'Alembert's principle (and maybe for other principles) for systems with non-constraint forces (external forces excluding constraint forces). This is relevant in some non-conservative systems (e.g. with friction) and in non-holonomic systems (e.g. with control forces), where no energy potential can be defined for the non-constraint forces (in order to account for them in a Lagrangian).

In your first reply you pointed out that Physics:-LagrangeEquations can change the sign of the output. This led me to the wrong conclusion that this is the reason for the wrong sign in my example worksheet Lagrange_Equation.mw. This false conclusion got further support from the comparison of a second case (the first example of ?dsolve,numeric,DAE) where

   VariationalCalculus:-EulerLagrange returned - A - B = 0 and

   Physics:-LagrangeEquations returned A + B = 0

For this case VariationalCalculus:-EulerLagrange produces the correct sign when used to calculate the left-hand side of d'Alembert's principle. However, neither Physics:-LagrangeEquations nor VariationalCalculus:-EulerLagrange produces the correct sign in my example worksheet. So, in one instance Physics:-LagrangeEquations with the new option keeprelativesign can be used, in another it cannot. The reason, once noticed, is simple: the left-hand side of d'Alembert's principle has the opposite sign of the Lagrange equations. The sign is correct only when Physics:-LagrangeEquations is used for d'Alembert's principle this way:

Physics:-LagrangeEquations(-L,…,keeprelativesign)=Q; #note the sign

I do not see another way to use the command Physics:-LagrangeEquations generically for d'Alembert's principle (which is my intention). For this reason (and for compatibility with VariationalCalculus:-EulerLagrange) I consider an option to keep the sign relevant.
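In symbols, the sign relation argued above reads (using generalized coordinates q_j and generalized forces Q_j; this restates the argument, it is not Maple output):

```latex
% Lagrange's equations as obtained from L:
\frac{d}{dt}\frac{\partial L}{\partial \dot q_j} - \frac{\partial L}{\partial q_j} = 0

% The classical form of d'Alembert's principle used here has the opposite
% sign on the left-hand side, hence the call with -L:
\frac{\partial L}{\partial q_j} - \frac{d}{dt}\frac{\partial L}{\partial \dot q_j} = Q_j
```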

Naming a command option keeprelativesign explains what the command does. Naming it non-holonomic, on the other hand, describes the context, for those who know what non-holonomic means. I therefore think that keeprelativesign is a good choice, and the description could refer to cases where this option should be used. Thank you for adding this option.

Below, I have tried to identify cases where a sign mismatch can occur when deriving one principle from another. It can basically happen (but does not have to) on the way up from Newton's equations to the principle of least action, or on the way down.

Comparing some references

D'Alembert's principle from https://en.wikipedia.org/wiki/Lagrangian_mechanics, though not named as such there:

[equation image: d'Alembert's principle as given in the Wikipedia article]

The right-hand side matches the left-hand side from Goldstein’s book that you cite (assuming T is replaced by L):

[equation image: the corresponding equation from Goldstein's book]

Also, one of my old textbooks derives Lagrange's equations for holonomic systems from d'Alembert's principle and is in line with the left-hand side of Goldstein's book above.

However, https://en.wikipedia.org/wiki/Lagrangian_mechanics chooses a different path to derive Lagrange’s equations from d’Alembert’s principle:

[equation image: Lagrange's equations as derived in the Wikipedia article]

This is done by substituting Q__j into d'Alembert's principle and doing (lhs-rhs)(…) on it instead of (rhs-lhs)(…). Both operations are correct, but only the second allows for what I did in my worksheet: replacing T by L in d'Alembert's principle.

This deliberate choice of sign is also possible when going backwards from the principle of least action to Lagrange's equations. You state:

[equation image: statement of the principle of least action]

Strictly speaking, this is Hamilton's principle, but https://en.wikipedia.org/wiki/History_of_variational_principles_in_physics#action_principle_names states that Feynman called Hamilton's principle the principle of least action. According to https://en.wikipedia.org/wiki/Principle_of_stationary_action, "the principle of least action" is a common name for several distinct principles.

Anyway, in https://en.wikipedia.org/wiki/Hamilton%27s_principle the same result is derived in more detail by defining the change in the action functional this way

[equation image: definition of the change in the action functional]

Since in the principle the change is stationary (not necessarily minimal), it is equally possible to change the sign of the integrand (i.e. to take L(q, q_dot) - L(q+eps, q_dot+eps_dot)). By doing so the result will match the left-hand side of Goldstein's book above. Both ways are mathematically correct. Again, the selection of one of the two possible ways to define the change of the action functional in a stationary context is a deliberate choice of sign. I would probably have applied the same definition as in the functional above because it is in line with the often-used convention: error = measured value minus true value. Another reason to stay with the above functional is that the variation of the action (and the variation of the Lagrangian) can be written in the form of a total differential, which leads literally, via integration by parts, to

[equation images: the variation of the action, and after integration by parts the Euler–Lagrange equations]
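Written out, the standard steps behind those two expressions are (for one coordinate q; the boundary terms vanish because the variation is zero at the endpoints):

```latex
\delta S
= \int_{t_0}^{t_1}\!\left(
    \frac{\partial L}{\partial q}\,\delta q
  + \frac{\partial L}{\partial \dot q}\,\delta \dot q
  \right) dt
= \int_{t_0}^{t_1}\!\left(
    \frac{\partial L}{\partial q}
  - \frac{d}{dt}\frac{\partial L}{\partial \dot q}
  \right)\delta q \, dt
= 0,
\qquad\text{hence}\qquad
\frac{\partial L}{\partial q} - \frac{d}{dt}\frac{\partial L}{\partial \dot q} = 0.
```

Note that with this definition of the variation the left-hand side comes out with the sign opposite to Goldstein's form, which is exactly the choice being discussed.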

As long as we are dealing with conservative systems, the sign of the left-hand side of Lagrange's equations is not a problem. But for non-conservative systems (where no dissipative energy term for the Lagrangian can be defined for the non-conservative forces) and for non-holonomic systems, the sign matters. For those cases https://en.wikipedia.org/wiki/Lagrangian_mechanics has to change the sign of the left-hand side to correctly account for the sign of forces derived from a dissipative term:

[equation image: Lagrange's equations with dissipative forces from the Wikipedia article]

This uncommented change of sign of the left-hand side within this otherwise well-made reference is unfortunate and makes the reference somewhat inconsistent.

 

P.S.:

My "flat" statement was more a wish directed at principles that Maple has not covered extensively yet (like d'Alembert's principle), not at the principle of least action.

D'Alembert's principle leads directly to Newton's equations of motion and is closely related to the principle of virtual work. For this alone, d'Alembert's principle has a place in physics. In combination with non-constraint forces it has a unique position, its closest cousin being Lagrange's equations of the first kind. However, it looks to me as if the principle of virtual work and d'Alembert's principle are used more in engineering applications, whereas in modern physics the principle of least action dominates.

In classical mechanics, d'Alembert's principle, Lagrange's equations of the first kind, and Lagrangians modified by constraints all deal with forces. Why can't they be combined into one principle that fits all? On the one hand, I have not found good references connecting these dots. On the other hand, I have not found a reason why d'Alembert's principle cannot be changed into a modified principle using the Lagrangian or even a modified Lagrangian (see P.P.P.S.).

I am not sure whether a consistent description including Lagrange's equations of the first kind, d'Alembert's principle and the principle of least action can be found without a deliberate change in sign. At the moment I only see a flip of sign for the variation of the action (which I think is out of the question) or a reformulation of the common form of d'Alembert's principle. (It would be a great relief if someone could demonstrate the opposite.)

So, if there is no elegant unification, Maple can at best help physics and engineering users apply the principles as they are, making them aware of signs when they matter.

 

P.P.S.: Talking about engineering applications: my old textbook is consistent in itself but does not mention the principle of least action. It does mention that Hamilton's principle states that the action int(L(t), t = t0 .. t1) becomes stationary, which is, according to the book, "an insight of philosophical but not of technical value". It also considers the Lagrange equations of the second kind unnecessarily complicated for performing calculations and therefore, in the practical part of the book, uses only d'Alembert's principle and Jourdain's principle to solve problems. It also mentions the Gaussian principle, which it says had little technical relevance. The author clearly had to do all calculations by hand, which must have been laborious at the time. These times have changed with computer algebra. I wonder if he would refrain from making such statements today.

P.P.P.S.:

Reformulating d’Alembert’s classical principle from my worksheet (omega stands for alpha_dot)

[equation image: d'Alembert's classical principle from the worksheet]

to

[equation image: the reformulated principle]

would allow the use of Physics:-LagrangeEquations without minus sign.

If the Lagrangian accounts for external forces by defining potential and dissipative energy terms, the above equation becomes

[equation image: the equation with a Lagrangian including potential and dissipative terms]

where Q__NP stands for all forces that are not included in potential or dissipative energy terms. This equation should be usable with a modified Lagrangian accounting for constraint forces.
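For reference, this appears to correspond to the common textbook form with a Rayleigh dissipation function D and remaining generalized forces (my notation; Q_j^NP as defined above, so this is a guess at the pictured equation, not a transcription of it):

```latex
\frac{d}{dt}\frac{\partial L}{\partial \dot q_j}
- \frac{\partial L}{\partial q_j}
+ \frac{\partial D}{\partial \dot q_j}
= Q_j^{\mathrm{NP}}
```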


@Scot Gould 

Was that a couple of years ago, before 2020?

@Gabriel Barcellos 

Your case is interesting in many respects: a benchmark for Maple from physics.
Attached is a file with my edits in orange.
Campo_Médio_spin_7_2_-_Forum_benchmark.mw

My preliminary conclusion is that you do not need more memory, and a much faster processor is not available. @acer already mentioned some options to improve fsolve root finding. I would like to add symbolic simplification of the equations. (fsolve is "robust", proven code; I do not think that it does a lot of symbolic preconditioning.)

If you have already reached an endpoint of simplification, I still assume that compromises in numerical precision can substantially speed up fsolve (compare the time it takes fsolve to compute one root to the many roots that implicitplot computes).

Some observations:

Computing Z0, implicit plots and fsolve mainly occupy one core.
System load before running the worksheet.

Computing the "best case": two cores are busy. The CPU time is about half the real time.

Computing with "core hopping": the task mserver.exe is distributed over more cores.

Whereas in the best case the overall system load displayed is around 5%, it now hops between 5 and 20%. The real time can be longer than the CPU time displayed by codetools. I cannot explain what system state triggers the OS to distribute the task over more cores.

@Scot Gould 

This does not sound like bug fixing; more like a rebuild of the interface, or maybe even a completely new interface.
I am still trying to understand to what extent the current Java IDE contributes to GUI problems and whether such problems also occur on Apple OS and Linux. It seems to be non-trivial to fix randomly occurring "glitches".

The last substantial and visible overhaul was from Maple 2020 to Maple 2021. Did you get this feedback before 2020?
