284 Reputation

8 Badges

16 years, 295 days

Professor of Economics, SUNY Stony Brook. I use Maple for research (testing theoretical modelling ideas with various kinds of deterministic and stochastic general equilibrium models in Macroeconomics and Urban Economics), for graduate classes in Macroeconomics and Econometrics, and for undergraduate classes in Urban Economics and Computational Methods. Note: You can communicate with me in German and French as well as English.

MaplePrimes Activity

These are replies submitted by TomM

@Carl Love 


Yes, indeed, unifloat takes samples (its estimated degree plus 1) and interpolates. However, it takes these samples equally spaced around a complex circle of radius 1 centered at the origin. It then simply zeros out the imaginary parts of all the coefficients. It turns out that the accuracy can be improved by increasing the radius and treating the imaginary parts in a more nuanced fashion. I have experimented with the following:

I let the radius be set by a global variable. If I increase the radius, the extra degrees that don't really belong go to zero, and the imaginary parts go to zero relative to the real parts. Thus I zero out all coefficients with an absolute value less than, say, 10^(-12), and all imaginary parts whose absolute ratio to the real part is also less than 10^(-12). These two changes (with a large enough radius and small enough zeroing criteria) reproduce the "minor" method's results as closely as desired.
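To make the idea concrete, here is a small Python sketch of the same scheme (my own code, not Maple's unifloat procedure, and the function name is my own): sample the polynomial on a circle of adjustable radius, recover coefficients by an inverse DFT, and zero out negligible coefficients and negligible imaginary parts.

```python
import cmath

# My own sketch of the interpolation idea (not Maple's unifloat code):
# sample a degree-d polynomial at d+1 equally spaced points on a complex
# circle of radius `radius`, recover the coefficients by an inverse DFT,
# then zero out negligible coefficients and negligible imaginary parts.
def interp_real_coeffs(f, degree, radius=1.0, tol=1e-12):
    n = degree + 1
    pts = [radius * cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    vals = [f(z) for z in pts]
    coeffs = []
    for m in range(n):
        # inverse DFT of the samples, then undo the radius scaling
        c = sum(vals[j] * cmath.exp(-2j * cmath.pi * j * m / n)
                for j in range(n)) / n / radius ** m
        if abs(c) < tol:                                  # drop tiny coefficient
            c = 0.0
        elif c.real != 0 and abs(c.imag / c.real) < tol:  # drop tiny imaginary part
            c = c.real
        coeffs.append(c)
    return coeffs

# f(x) = 2 - 3x + x^2, sampled on a circle of radius 10
print(interp_real_coeffs(lambda z: 2 - 3*z + z*z, 2, radius=10.0))
```

With a large enough radius and small enough tolerances, the returned coefficients come back purely real and close to the exact values.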

[For those who don't know: before redefining the LinearAlgebra:-`Determinant\unifloat` procedure, one has to turn off the kernelopts setting that hides the local procedures of the LinearAlgebra module.]


I get the impression that many of you are implicitly thinking that the unifloat[x] method is using an algorithm with some asymptotic property. That is, the algorithm has very good speed behavior as the matrix gets larger, and the coefficients converge in some sense to the correct answers as the matrix gets larger. If someone knows explicitly of such a result, I'd like to get a reference.

On the other hand, what I know is that for small problems unifloat[x] gives seriously wrong (or at least very difficult to interpret) answers. This means that one has to either first convert floating-point coefficients to rationals, or use a routine such as SmithForm, which does the conversion automatically (followed by the plain Determinant routine with no options, since the Smith form will be diagonal with entries in R[x]), or use a method that essentially applies the permutation definition directly, such as "minor".

It would be nice if Maple's Determinant help page contained this information and a caution about the R[x] problems.

Note: I produced my first example by using only the highest-degree term in each matrix element of the second example. I used this example to try to determine what was going on, and because I could quickly get the highest and lowest possible degrees in the correct determinant. I never thought of using those methods first on a large problem (whatever large might be).

@Carl Love 

But my second example is dense, and unifloat still doesn't produce an interpretable and useful answer.


Thanks, everyone.

As I said before, unifloat[x] would seem to be the appropriate method, although it does not seem to use an appropriate algorithm: it computes with floating point and uses Gaussian elimination, which gives rise to the terrible effects of roundoff error in this case. However, taking the Smith form of the matrix first gives just what one wants. That routine is meant to operate on R[x]; it automatically converts the floating-point entries to exact rational numbers first and does its Gaussian-elimination-like operations correctly for R[x]. One can then get a multiple of the correct determinant (that is, the correct determinant multiplied by a scalar). Since I only want the roots, the multiplication factor is unimportant to me. However, by taking the determinants of the two left and right transformation matrices, one can recover that factor, which always has to be a scalar since these matrices are products of elementary matrices.
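The underlying principle (once the floating-point entries are converted to exact rationals, every elimination step is exact, so there is no roundoff) can be sketched outside Maple. The following Python fragment is my own illustration, not Maple's internals; it uses Fraction(str(x)) to turn each decimal entry into an exact rational before doing ordinary Gaussian elimination:

```python
from fractions import Fraction

# My own sketch (not Maple's internals): convert each decimal entry to an
# exact rational via its printed form, then do ordinary Gaussian elimination.
# Every arithmetic step on Fractions is exact, so there is no roundoff.
def det_exact(rows):
    m = [[Fraction(str(x)) for x in r] for r in rows]
    n = len(m)
    det = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if m[r][col] != 0), None)
        if piv is None:          # a zero column: determinant is zero
            return Fraction(0)
        if piv != col:           # row swap flips the sign
            m[col], m[piv] = m[piv], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= f * m[col][c]
    return det

print(det_exact([[0.1, 0.2], [0.3, 0.4]]))  # exactly -1/50
```

The same elimination done in floating point would carry roundoff into every coefficient; done over the rationals, the answer is exact.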

This leads me to think that the unifloat method should be changed to automatically convert to rationals and then use the Smith form.

@Carl Love 

First, I would have thought that the unifloat[x] method should have worked, because it assumes that the matrix elements are polynomials in a single variable with floating-point coefficients, which is just the situation I am looking at. One really needs to be wary of this method, which the help page leads one to believe is exactly the correct method. And, indeed, the default method seems to revert to it in this case.

Perhaps there is just no way to program a determinant for such a matrix so that its roots are given sufficiently precisely. It may be that only by using the exact calculations that rational coefficients allow (Maple does not round off rationals), or by essentially using the permutation definition (which is also what recursive minors compute), can one be assured of getting a polynomial result whose roots are sufficiently precise.

What all your comments tell me is that Maple does not seem to have a better "automatic" procedure for this case. So far, for the "minor" method, my problem is small enough that computing time is not a factor. Converting the coefficients to rationals is also not a real problem.

@Carl Love 

First, it is clear that, since all elements in the first column of my first example are polynomials of at least degree 1, the determinant cannot have a constant term.

Second, one can go through (using Maple) the individual permutations in the permutation definition of a determinant and determine the largest power that turns up. For my first example this is 8, not 9.

Third, I used Maple to go directly through the permutation definition of the determinant. This reproduced exactly the method=minor results.
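As a toy illustration of that permutation definition, here is a short Python version (my own sketch, with my own function names; polynomials are stored as low-to-high lists of exact Fraction coefficients). On small examples it reproduces what a cofactor ("minor") expansion gives:

```python
from fractions import Fraction
from itertools import permutations

# My own toy version of the permutation definition of a determinant, with
# polynomial entries stored as low-to-high lists of exact Fraction coefficients.

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(n)]

def sign(perm):
    s = 1  # parity by counting inversions
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_permutation(M):
    # sum over all permutations of signed products of entries
    n = len(M)
    total = [Fraction(0)]
    for perm in permutations(range(n)):
        term = [Fraction(sign(perm))]
        for i in range(n):
            term = poly_mul(term, M[i][perm[i]])
        total = poly_add(total, term)
    return total

# 2x2 example with entries in Q[x]: det([[x, 1], [2, x]]) = x^2 - 2
x = [Fraction(0), Fraction(1)]
M = [[x, [Fraction(1)]], [[Fraction(2)], x]]
print(det_permutation(M))  # coefficients -2, 0, 1 (low to high)
```

Because every operation is exact rational arithmetic, the degree bounds and the absence of a constant term come out exactly, with no spurious coefficients.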

The following is the actual, more extensive matrix that I am interested in. The first example was just to try to figure out what was going wrong in a simpler case (using just the highest-degree term in each element).

What I am interested in from the determinant polynomial is its set of roots. (I am actually doing an errors-in-variables estimate of the coefficients of an approximating algebraic curve.) Thus if I cannot get a good, interpretable set of roots, I am lost, and the overall error in the determinant coefficients is not what matters here.

Below I give the definition of the matrix used, and then the results for the minor method, the default method, and the unifloat method. Yes, the default method is the unifloat method, because my matrix elements are in R[x], BUT the help page says it uses Gaussian elimination. The fracfree method gives an error, no doubt because the help page says it is for matrices with elements in Z[x]. I notice that one responder first turns the polynomial coefficients into exact rationals; I already pointed out that this works because Maple does exact calculations with rationals.

You will see that only the "minor" method below gives usable, interpretable roots. Also, as I previously noted, with rational coefficients the results are exactly the same as for the "minor" method. For the default, unifloat, method, all but one of the roots are solidly complex. If they were even close to the correct roots, I might know how to interpret them, but since they are not the correct roots, they only serve to obscure things.

So here's the example:

`ψ_data_ex` := Matrix(6, 6, [[180*sigma2^2-53417.4875669262*sigma2+2.55312019922288*10^6, -17980.4676072929*sigma2+1.48876202145622*10^6, 60*sigma2^2-17777.2544622252*sigma2+1.26999620604100*10^6, -1792.74593023400*sigma2+1.47107749463710*10^5, -597.004097753022*sigma2+89710.8619398197, -60*sigma2+8902.91459448771], [-17980.4676072929*sigma2+1.48876202145622*10^6, 60*sigma2^2-17777.2544622252*sigma2+1.26999620604100*10^6, 1.47594266713082*10^6-17980.4676072929*sigma2, -597.004097753022*sigma2+89710.8619398197, 89455.7164588473-597.581976744666*sigma2, 5993.48920243096], [60*sigma2^2-17777.2544622252*sigma2+1.26999620604100*10^6, 1.47594266713082*10^6-17980.4676072929*sigma2, 180*sigma2^2-53246.0392064249*sigma2+2.52236958317357*10^6, 89455.7164588473-597.581976744666*sigma2, -1791.01229325907*sigma2+1.46048381658527*10^5, -60*sigma2+8874.33986773749], [-1792.74593023400*sigma2+1.47107749463710*10^5, -597.004097753022*sigma2+89710.8619398197, 89455.7164588473-597.581976744666*sigma2, -60*sigma2+8902.91459448771, 5993.48920243096, 597.581976744666], [-597.004097753022*sigma2+89710.8619398197, 89455.7164588473-597.581976744666*sigma2, -1791.01229325907*sigma2+1.46048381658527*10^5, 5993.48920243096, -60*sigma2+8874.33986773749, 597.004097753022], [-60*sigma2+8902.91459448771, 5993.48920243096, -60*sigma2+8874.33986773749, 597.581976744666, 597.004097753022, 60]])

" method = minor";
`Det_ψ_data_ex` := sort(Determinant(`ψ_data_ex`, method = minor));
norm_fac := max(seq(abs(coeff(`Det_ψ_data_ex`, sigma2, i)), i = 0 .. 9));
relative_p := sort(`Det_ψ_data_ex`/norm_fac);
roots_p := fsolve(relative_p, complex)

"method = default";
`Det_ψ_data_ex` := sort(Determinant(`ψ_data_ex`));
norm_fac := max(seq(abs(coeff(`Det_ψ_data_ex`, sigma2, i)), i = 0 .. 9));
relative_p := sort(`Det_ψ_data_ex`/norm_fac);
roots_p := fsolve(relative_p, complex);

" method = unifloat[sigma2]";
`Det_ψ_data_ex` := sort(Determinant(`ψ_data_ex`, method = unifloat[sigma2]));
norm_fac := max(seq(abs(coeff(`Det_ψ_data_ex`, sigma2, i)), i = 0 .. 9));
relative_p := sort(`Det_ψ_data_ex`/norm_fac);
roots_p := fsolve(relative_p, complex)

Here are the results:

" method = minor"
`Det_ψ_data_ex` := 186624000000*sigma2^8-7.322237352*10^13*sigma2^7+1.167816239*10^16*sigma2^6-9.707497494*10^17*sigma2^5+4.477166594*10^19*sigma2^4-1.124489089*10^21*sigma2^3+1.395557199*10^22*sigma2^2-6.72105102*10^22*sigma2+1.2667708*10^22
norm_fac := 6.72105102*10^22
relative_p := 2.776708575*10^(-12)*sigma2^8-1.089448262*10^(-9)*sigma2^7+1.737550029*10^(-7)*sigma2^6-0.1444342182e-4*sigma2^5+0.6661408434e-3*sigma2^4-0.1673085185e-1*sigma2^3+.2076397270*sigma2^2-1.000000000*sigma2+.1884780812
roots_p := .196358275828771, 14.2552568223045, 14.7363162314595, 48.2900528956757, 49.8699195146855, 83.2511169784422, 83.7567600395173, 97.9966150998332

"method = default"
`Det_ψ_data_ex` := -8.09527159898091*10^14*sigma2^9-3.71919218887944*10^14*sigma2^8+4.68051527145545*10^14*sigma2^7+1.16457363073298*10^16*sigma2^6-9.69868249773796*10^17*sigma2^5+4.47716413947149*10^19*sigma2^4-1.12448839673387*10^21*sigma2^3+1.39555702965615*10^22*sigma2^2-6.72104791815170*10^22*sigma2+1.26677278632965*10^22
norm_fac := 6.72104791815170*10^22
relative_p := -1.20446568713159*10^(-8)*sigma2^9-5.53364926745266*10^(-9)*sigma2^8+6.96396652494423*10^(-9)*sigma2^7+1.73272627262154*10^(-7)*sigma2^6-0.144303129747736e-4*sigma2^5+0.666140785483749e-3*sigma2^4-0.167308492727293e-1*sigma2^3+.207639797640356*sigma2^2-1.*sigma2+.188478463739031
roots_p := -11.0014855758297-5.90889388930547*I, -11.0014855758297+5.90889388930547*I, -1.71291902117475-10.5657205273659*I, -1.71291902117475+10.5657205273659*I, .196358694448831, 5.10515019731233-7.13344289603453*I, 5.10515019731233+7.13344289603453*I, 7.28136119151786-2.22432765697437*I, 7.28136119151786+2.22432765697437*I

" method = unifloat[sigma2]"
`Det_ψ_data_ex` := -8.09527159898091*10^14*sigma2^9-3.71919218887944*10^14*sigma2^8+4.68051527145545*10^14*sigma2^7+1.16457363073298*10^16*sigma2^6-9.69868249773796*10^17*sigma2^5+4.47716413947149*10^19*sigma2^4-1.12448839673387*10^21*sigma2^3+1.39555702965615*10^22*sigma2^2-6.72104791815170*10^22*sigma2+1.26677278632965*10^22
norm_fac := 6.72104791815170*10^22
relative_p := -1.20446568713159*10^(-8)*sigma2^9-5.53364926745266*10^(-9)*sigma2^8+6.96396652494423*10^(-9)*sigma2^7+1.73272627262154*10^(-7)*sigma2^6-0.144303129747736e-4*sigma2^5+0.666140785483749e-3*sigma2^4-0.167308492727293e-1*sigma2^3+.207639797640356*sigma2^2-1.*sigma2+.188478463739031
roots_p := -11.0014855758297-5.90889388930547*I, -11.0014855758297+5.90889388930547*I, -1.71291902117475-10.5657205273659*I, -1.71291902117475+10.5657205273659*I, .196358694448831, 5.10515019731233-7.13344289603453*I, 5.10515019731233+7.13344289603453*I, 7.28136119151786-2.22432765697437*I, 7.28136119151786+2.22432765697437*I


data_deg_ex := Matrix(6, 6, [[180*sigma2^2, -17980.4676072929*sigma2, 60*sigma2^2, -1792.74593023400*sigma2, -597.004097753022*sigma2, -60*sigma2], [-17980.4676072929*sigma2, 60*sigma2^2, -17980.4676072929*sigma2, -597.004097753022*sigma2, -597.581976744666*sigma2, 5993.48920243096], [60*sigma2^2, -17980.4676072929*sigma2, 180*sigma2^2, -597.581976744666*sigma2, -1791.01229325907*sigma2, -60*sigma2], [-1792.74593023400*sigma2, -597.004097753022*sigma2, -597.581976744666*sigma2, -60*sigma2, 5993.48920243096, 597.581976744666], [-597.004097753022*sigma2, -597.581976744666*sigma2, -1791.01229325907*sigma2, 5993.48920243096, -60*sigma2, 597.004097753022], [-60*sigma2, 5993.48920243096, -60*sigma2, 597.581976744666, 597.004097753022, 60]]);

Using the "Determinant" function

Det_data_deg_ex := Determinant(data_deg_ex);

yields the result:

Det_data_deg_ex := -5.5815798784*10^10+1.47922245027397*10^14*sigma2^7+1.17304067445167*10^11*sigma2^8

Notice that this is a 9th degree polynomial, which is not possible, and has a constant term, which is not possible. In addition, neither the constant term nor the coefficient of the 9th degree term are small.


After I discovered that the code-edit region did not have 2D math, I never even tried it. I only use the code-edit region for simple one-statement calls to my procedures from the action-definition parts of my embedded components, so there is no possible use of any 2D math there. All the complicated stuff is done in the procedure definitions on the standard worksheet.


Thanks for your comments. They led me to dig deeper for the problem, which turned out to be an extra space that had gotten into the calling name of an embedded component. I missed it at first because all the errors were about not finding a component.

What I am doing is having one worksheet with all the package-loading statements and the definitions of the procedures that will be used by another worksheet containing the embedded components. This works (and has always worked, once I eliminated the extra-space problem) because the procedures are actually run by the worksheet with the components. In the past I didn't realize the distinction and had thought that the common server was allowing the components to be known by both worksheets. But really, the procedures are data structures, and thus "variables" to Maple, and thus known to the common computing engine. The worksheet running them has to have the embedded components, not the worksheet in which they are defined. I use this two-worksheet process because the code-edit region does not use standard GUI editing and printing, while the extra worksheet, of course, has standard editing. The students are told to first bring up and execute an entire "setup" worksheet using the same server as the "run" worksheet. They do not need to know anything about the programming on the "setup" worksheet; they just fill in and press the buttons on the "run" worksheet.

Thanks again.


Actually, I have now tracked the occurrence of the problem to simply using Maple 2015. Here's a simple example.

On one worksheet I enter:


>< Here I put a radio button with the default name RadioButton0 and default value "false">

> GetProperty(RadioButton0, value);
      < the output is correctly "false" >


Now I start a new worksheet, carefully setting the server to be the same as for the first worksheet. Both worksheets are open, and the first worksheet has been entirely executed. On the second worksheet I simply put:

>GetProperty(RadioButton0, value)

The output is the error message:
Error, (in DocumentTools:-GetProperty) Attempted to retrieve property of unknown component RadioButton0.

The second worksheet seems to be aware of, and able to use, the DocumentTools package. But it doesn't seem to be aware of the embedded components on the first worksheet. I used exactly this procedure many times with Maple 18, and it always worked. (This led me at first to think that opening a Maple 18 worksheet with components in Maple 2015 was the problem, but, as you see above, the problem exists inside Maple 2015 itself.)


Yes, indeed, and that's even more, and more complicated, persistence than I've noticed so far.


@Rouben Rostamian  

Of course, the display statement produces a plot size proportional to the Maple window size because it has an explicit size option forcing it to do that.

When I comment out the display(...) statement and run that execution group again, I get a thumbnail for the second plot, where I just put the name of the plot alone in the statement. BUT if I uncomment the display statement and re-execute that same execution group once, I still get a thumbnail for the third plot in that group. However, when I re-execute the group a second time (or more), I get a plot which is effectively using the size option from the just-previous display command. When the size in the display command is set to be proportional, the third plot is proportional to the Maple window. If I set the size option in the display command to a fixed number of pixels, say [500, 500], then the display output is not proportional, and the third plot has the same size as the display output and thus is also not proportional (on the second execution).

Here's another, less variable example. In the same execution group as the current plot definition, call the first plot test_plot_1 and make a second test_plot_2 using the cosine. Then below the two plot definitions put a display statement to put both plots on the same graph with a size option. Follow that with two statements naming the two plots in turn. With this setup, the execution group always produces the last two plots as thumbnails.

Finally, I don't need or want a thumbnail to verify that the plot-definition statement has been executed. I just need output in the form <plot name> := PLOT(...) as we used to have. If I use a colon, I do not get this verification, and if I use a semicolon, Maple uses more space and makes reading the output more difficult by putting in the thumbnail.

The bottom line of all of this is that Maple has a bug, and the only useful and reliable workaround is to put a size option in every plot definition where the default used to work nicely in previous Maple versions.

@Rouben Rostamian  

The following is some simple code which shows this behavior even in Maple 2015.1 (my Maple says there are no more updates beyond .1). The code is composed of three execution groups, as indicated.

> with(plots);

> test_plot := plot(sin(x), x = -Pi .. Pi);
  display([test_plot], size = [0.3, 1.0]);
  test_plot;

> test_plot;


The second execution group gives a thumbnail plot in the output of the first statement. The second statement produces a full-sized plot. However, the third statement in that execution group produces only a thumbnail plot.

In the third execution group, and in any others, just the name of the plot produces a full-sized plot. 

Of course, I can completely shut off the output from the plot-definition statement, but I do want to verify that a set of plotting instructions has been executed. What I do not need is a lot of space being taken up by the thumbnail just to get this verification.


My example, and where I discovered the problem, is a Table of 2 rows and 2 columns (although I started with 1 row and 3 columns, which would have been nicer). I have 2D plots and no page breaks between rows (and do not really wish any page breaks between rows). I would like each plot to be about half a page width. I am using Maple 18. I am not plotting an array of plots per se, that is, I am not using the Array plotting feature of Maple. I am simply putting code in three of the Table cells, and that code generates the plots. However, I hide the code, since I don't want to "trouble" the readers with seeing it. I will give the rather short and simple code below (if that works). I have tried the size option, but it seems to have absolutely no effect. I don't see how changing the font size of the legend will help; the font size being used already seems to be the smallest readable size on the screen. Why would making the font bigger make it appear?

At this point, I would just like to have the plots on one page with roughly square cells, thus using as much of the page as possible. That works on the screen. However, I now see that the Table cells are approximately square, that the Table takes up approximately the whole width of the printed page, but that each plot takes up only the upper left 1/4 of a cell (although it takes up the whole cell on my screen.) And there are no legends.

I have now discovered that when I don't hide the input (code), the legends are printed, although I wouldn't have considered the legends to be "input".

I have also tried to print the plots as a plot array directly with display( ). Well, the plots are still just as small, and, although parts of the legends now print, the first half of each legend in the right-hand column is chopped off.

I have even tried simply copying each plot into its cell. That has the same result as producing the plot in the cell and hiding the input.
Whether I print to my hp printer (an OfficeJet Pro 8600) or to a pdf file, I get the same results.
I have tried to insert my code here, but it won't paste. However, it is quite simple and each cell has no more than 5 statements.
