Carl Love

Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@vv Yes, I like it! It's definitely better than mine. I had forgotten about the possibility of using single quotes as an ephemeral grouping operator.

 

@impostersyndrome 

I came up with a shorter formula for your permutation. It's conceptually and algebraically simpler, although it may appear a bit more complex to you due to your (perhaps) unfamiliarity with two pieces of Maple syntax: elementwise operations (applying an operation to all elements of a container with ~) and arrow operators (x-> e is equivalent to proc(x) e end proc for expressions whose only dependencies on x appear explicitly; the left side of the arrow can contain multiple symbol variables, with or without typespecs, such as (x::posint, y::realcons)-> ...):

eq_arrangement:= (k::posint)-> k +~ (i-> (-1)^i*iquo(i,2))~([$1..2*k-1]):

Note that this formula doesn't require special treatment of the 1st element, k. The iquo is fast integer division with the remainder discarded. To me, it's conceptually simpler than the "shuffling" of your procedure, although opinions may vary on that.
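As a quick self-contained check of that formula (the definition is repeated so the snippet runs on its own), the smallest cases can be verified by hand:

```maple
# Repeats the definition from above so this snippet is self-contained.
eq_arrangement:= (k::posint)-> k +~ (i-> (-1)^i*iquo(i,2))~([$1..2*k-1]):

eq_arrangement(3);
        # [3, 4, 2, 5, 1]
eq_arrangement(4);
        # [4, 5, 3, 6, 2, 7, 1]
```

The pattern k, k+1, k-1, k+2, k-2, ... matches the first row of the DoIt(13) output shown further below.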

I'm still guessing that you'd like to find the relationship between k and the order of the permutation. Let me know. I've written some tools to explore that, but they're difficult to post from my phone.

The equation is trivial to solve symbolically, even by hand, so why do you want to solve it numerically? Are you sure that you're not missing some symbols in the equation?

@impostersyndrome 

A permutation p in list form can be applied to list L simply by L[p]. Thus, the following procedure will show the whole cycle for any k:

DoIt:= proc(k::posint)
local L:= NaturalNumbers(k), p:= eq_arrangement(k);
    to GroupTheory:-PermOrder(Perm(p)) do L:= L[p]; print(L) od;
    return
end proc
:
DoIt(13);
[13, 14, 12, 15, 11, 16, 10, 17, 9, 18, 8, 19, 7, 20, 6, 21, 5, 22, 4, 23, 3, 24, 2, 25, 1]
[7, 20, 19, 6, 8, 21, 18, 5, 9, 22, 17, 4, 10, 23, 16, 3, 11, 24, 15, 2, 12, 25, 14, 1, 13]
[10, 23, 4, 16, 17, 3, 22, 11, 9, 24, 5, 15, 18, 2, 21, 12, 8, 25, 6, 14, 19, 1, 20, 13, 7]
[18, 2, 15, 21, 5, 12, 24, 8, 9, 25, 11, 6, 22, 14, 3, 19, 17, 1, 16, 20, 4, 13, 23, 7, 10]
[22, 14, 6, 3, 11, 19, 25, 17, 9, 1, 8, 16, 24, 20, 12, 4, 5, 13, 21, 23, 15, 7, 2, 10, 18]
[24, 20, 16, 12, 8, 4, 1, 5, 9, 13, 17, 21, 25, 23, 19, 15, 11, 7, 3, 2, 6, 10, 14, 18, 22]
[25, 23, 21, 19, 17, 15, 13, 11, 9, 7, 5, 3, 1, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24]
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]

If you'd like the cycle to be returned by the procedure rather than simply displayed, let me know. It just requires a small modification.
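For the record, here's one possible form of that modification. This is only a sketch: it assumes that NaturalNumbers(k) returns [$1..2*k-1] (which matches the output above, so the helper is replaced inline here), and that eq_arrangement is defined as earlier in the thread.

```maple
# Sketch only: returns the cycle as a list of lists instead of printing it.
DoItList:= proc(k::posint)
local
    L:= [$1..2*k-1],        # assumed equivalent to NaturalNumbers(k)
    p:= eq_arrangement(k),
    R:= [];
    to GroupTheory:-PermOrder(GroupTheory:-Perm(p)) do
        L:= L[p];  R:= [R[], L]   # append the next arrangement
    od;
    return R
end proc:
```

The only structural change from DoIt is that each arrangement is appended to R rather than printed, and R is returned.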

@one man What is the significance of the coefficient of s produced by Draghilev? In this case, that would be sqrt(141)/5, or about 2.38. Obviously this number can be replaced by 1, and the parameterization would still be valid.

@September It has to use something as an upper limit if nothing is specified. That something is called a default value. The default range of the independent variable in plot is -10..10.

Changing any properties (such as axis length) after the plot is created does not cause any recomputation of function values.

@tomleslie The method used by Maximize in this case is sqp (sequential quadratic programming). This computes numeric (in this case) derivatives of the objective function. This method is not used by DirectSearch (which I think avoids derivatives entirely).

You can get information, including the iteration count, by setting infolevel[Optimization]:= 5.

@mmcdara I do not understand why Tom's method produces the correct result while mine doesn't. Theoretically, it should be doing the same as mine. Setting the infolevel also shows the methods doing the same thing.

infolevel[Optimization]:= 5:

My method's output:

NLPSolve: calling NLP solver
NLPSolve: using method=sqp
NLPSolve: number of problem variables 2
NLPSolve: number of nonlinear inequality constraints 0
NLPSolve: number of nonlinear equality constraints 0
NLPSolve: number of general linear constraints 0
NLPSolve: feasibility tolerance set to 0.1053671213e-7
NLPSolve: optimality tolerance set to 0.3256082241e-11
NLPSolve: iteration limit set to 50
NLPSolve: infinite bound set to 0.10e21
NLPSolve: trying evalhf mode
NLPSolve: trying evalf mode
attemptsolution: number of major iterations taken 0

Tom's method's output:

NLPSolve: calling NLP solver
NLPSolve: using method=sqp
NLPSolve: number of problem variables 2
NLPSolve: number of nonlinear inequality constraints 0
NLPSolve: number of nonlinear equality constraints 0
NLPSolve: number of general linear constraints 0
NLPSolve: feasibility tolerance set to 0.1053671213e-7
NLPSolve: optimality tolerance set to 0.3256082241e-11
NLPSolve: iteration limit set to 50
NLPSolve: infinite bound set to 0.10e21
NLPSolve: trying evalhf mode
NLPSolve: trying evalf mode
attemptsolution: number of major iterations taken 9


 

@mmcdara My only goal was to correct your syntax. Yes, I noticed the single iteration. It seems to be very common with Optimization. You probably need to adjust some of its numerous options.

@Matt C Anderson 

Did you mean to put that in another thread? I don't see any connection to this thread. Your worksheet is about exact polynomial algebra, and this thread is about fitting a sinusoidal function.

By the way, I like your idea of posting PDFs of worksheets. I think I'll start using that because they are immediately viewable from almost any browser, regardless of whether the viewing computer has Maple. If more people did this, I'd be able to view their worksheets from my phone, which I use to read MaplePrimes about half the time.

@mmcdara In your code, it's solve, not CharacteristicPolynomial, that determines the order of the roots/eigenvalues. It may indeed now be the case that solve uses a consistent order (I don't know); however, that has not always been true.

Your code can only handle low-degree cases without parameters. It also has trouble with repeated eigenvalues. For example, change Sy to an identity matrix.

Your title gives the impression that using sort is undesirable.

For floating-point cases, the algorithm used by Eigenvalues doesn't solve a polynomial. There are other methods with less round-off error.

@Carl Love The code in the Answer above has been updated in several significant ways:

  1. It now handles all documented calling sequences of LinearAlgebra:-Eigenvalues, in particular the generalized eigenvalue problem (whose input is two square matrices).
  2. It now preserves all Matrix options, in particular shape and datatype, which Eigenvalues uses for algorithm selection.
  3. All options to Eigenvalues are handled, including options that do not yet exist but may be added in the future, as long as they're of type {name, name= anything}.
  4. The results are now remembered after sorting.

Regarding 2: This is done by applying ToInert to the Matrices, which makes them immutable, which in turn makes them suitable as cache/remember-table indices.
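A minimal sketch of why ToInert helps here (assuming a recent Maple; both names are stock library commands):

```maple
# Matrices are mutable objects that compare by address, so two Matrices
# with identical entries are *different* remember-table indices:
A:= Matrix([[1, 2], [3, 4]]):
B:= Matrix([[1, 2], [3, 4]]):
evalb(A = B);                     # false: distinct mutable objects

# Their ToInert forms are ordinary immutable expressions, so structurally
# identical Matrices produce equal indices, as a cache table needs:
evalb(ToInert(A) = ToInert(B));   # true
```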

Regarding 3: On the off chance that an option not of that type is added, or if a completely new calling sequence is added, it'll still be handled correctly; it just won't be sorted.

Regarding 4: The earlier code sorted them after remembering them. While that did produce correct results, the new way is more efficient.

Finally, let me re-emphasize that this process is completely transparent to the end user. Once the overload is defined, you can call LinearAlgebra:-Eigenvalues in exactly the same ways that you've always called it, and it'll produce output in exactly its documented formats. The only thing that my code does is intercept that output, sort it, and remember it. The overload command is an extremely powerful tool for modifying what stock commands do in transparent ways.

@Kitonum Unfortunately, there are subtle, nonstandardized, imprecise, and often overlapping shades of meaning between residuals and errors, even when those words are used in a purely mathematical context. The present context appears to be fitting a real-valued model function to a finite set of data. In such a case the goodness of fit is often empirically checked by "plotting the residuals". One usually wants to check (often simply visually) not just the magnitude of the residuals but also that their mean is 0 (unbiasedness) and that their variance is constant over different subintervals of the independent variable (homoscedasticity). This can't be done if you use absolute values.

Of course, the 1-norm and infinity-norm of the residuals are also useful information wrt goodness of fit, and of course they use absolute value, but they don't require a plot. That a plot was requested suggests that the situation described in the first paragraph applies.

@Scot Gould The set sorting order is used in my Answer below.

@Preben Alsholm 

That's great, and very useful to know in general. Vote up! I didn't know that procedure options could be subsop'd. It's not allowed for some op numbers of a procedure.

There are 5 issues that I'd like to address:

  1. I don't think that option system would be desirable, as it makes the remembered data too ephemeral. Option cache can be used to replace the timewise ephemerality of system with spacewise ephemerality.
  2. There's also the issue of the ephemerality (or, more precisely, the mutability) of Matrices themselves, which makes them less-than-ideal candidates as arguments to cache- or remember-table procedures.
  3. A consistent sort of the eigenvalues and eigenvectors can make this more useful across sessions.
  4. The sort can also be used for list-form output (output= list).
  5. This can all be done with no need to change the calling sequence of LinearAlgebra:-Eigenvectors (from the end user's POV) by using overload.

All these issues are solved in the following Answer.

 
