acer


MaplePrimes Activity


These are replies submitted by acer

@icegood That's another good example. (So, there are problems when the data values all lie near either DBL_MIN or DBL_EPSILON.)

The plot structure for this example also contains data adequate to produce a smooth curve in the plot, but the plot drivers cannot handle it.

If it is this easy to repair the (already computed) plot data structure "manually" in Maple code, then the plotting routines might be able to accommodate it too.

 

restart:

f:=(y,a,b)->(8*(b+a))*(exp(-y))^2*y^2/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)-(32*(b+a))*(exp(-y)+1)*y*exp(-y)/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)+(16*(b+a))*(exp(-y)+1)*y^2*exp(-y)*((2*(1+2*b+2*a))*exp(-y*(1+2*b+2*a))-(4*(a+2*b+1))*exp(-y*(a+2*b+1))-(4*(a+2*b+2))*exp(-y*(a+2*b+2))+(2*(b+a+1))*exp(-2*y*(b+a+1))+(2*(b+a))*exp(-(2*(b+a))*y)-(4*(a+1))*exp(-y*(a+1))+(8*(b+1))*exp(-2*y*(b+1))-4*a*exp(-y*a)+(2*(b+a))*exp(-2*y)-(2*((2*(b+a))*y+4*b+1))*exp(-2*y)+(4*(b+a))*exp(-y)-(-6+(4*(b+a))*y)*exp(-y)+2*a+2*b)/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)^2+(8*(b+a))*(exp(-y)+1)*y^2*exp(-y)/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)+(8*(b+a))*(exp(-y)+1)^2/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)-(16*(b+a))*(exp(-y)+1)^2*y*((2*(1+2*b+2*a))*exp(-y*(1+2*b+2*a))-(4*(a+2*b+1))*exp(-y*(a+2*b+1))-(4*(a+2*b+2))*exp(-y*(a+2*b+2))+(2*(b+a+1))*exp(-2*y*(b+a+1))+(2*(b+a))*exp(-(2*(b+a))*y)-(4*(a+1))*exp(-y*(a+1))+(8*(b+1))*exp(-2*y*(b+1))-4*a*exp(-y*a)+(2*(b+a))*exp(-2*y)-(2*((2*(b+a))*y+4*b+1))*exp(-2*y)+(4*(b+a))*exp(-y)-(-6+(4*(b+a))*y)*exp(-y)+2*a+2*b)/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)^2+(8*(b+a))*(exp(-y)+1)^2*y^2*((2*(1+2*b+2*a))*exp(-y*(1+2*b+2*a))-(4*(a+2*b+1))*exp(-y*(a+2*b+1))-(4*(a+2*b+2))*exp(-y*(a+2*b+2))+(2*(b+a+1))*exp(-2*y*(b+a+1))+(2*(b+a))*exp(-(2*(b+a))*y)-(4*(a+1))*exp(-y*(a+1))+(8*(b+1))*exp(-2*y*(b+1))-4*a*exp(-y*a)+(2*(b+a))*exp(-2*y)-(2*((2*(b+a))*y+4*b+1))*exp(-2*y)+(4*(b+a))*exp(-y)-(-6+(4*(b+a))*y)*exp(-y)+2*a+2*b)^2/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)^3-(4*(b+a))*(exp(-y)+1)^2*y^2*(-2*(1+2*b+2*a)^2*exp(-y*(1+2*b+2*a))+4*(a+2*b+1)^2*exp(-y*(a+2*b+1))+4*(a+2*b+2)^2*exp(-y*(a+2*b+2))-4*(b+a+1)^2*exp(-2*y*(b+a+1))-4*(b+a)^2*exp(-(2*(b+a))*y)+4*(a+1)^2*exp(-y*(a+1))-16*(b+1)^2*exp(-2*y*(b+1))+4*a^2*exp(-y*a)-(8*(b+a))*exp(-2*y)+(4*((2*(b+a))*y+4*b+1))*exp(-2*y)-(8*(b+a))*exp(-y)+(-6+(4*(b+a))*y)*exp(-y))/(-2*exp(-y*(1+2*b+2*a))+4*exp(-y*(a+2*b+1))+4*exp(-y*(a+2*b+2))-exp(-2*y*(b+a+1))-exp(-(2*(b+a))*y)+4*exp(-y*(a+1))-4*exp(-2*y*(b+1))+4*exp(-y*a)+((2*(b+a))*y+4*b+1)*exp(-2*y)+(-6+(4*(b+a))*y)*exp(-y)+(2*(b+a))*y-4*b-3)^2:

Digits:=400:
P:=plot(f(y,0.173734641949551155, 0.02), y=5.4938108636702..5.49381086367034):
P;

data:=evalf[30](op([1,1],indets(P,specfunc(anything,CURVES)))):
const:={evalf[14](data[..,2][])}[1];
newdata:=[seq(evalf[30]([t[1],(t[2]-const)*10^14]),t=data)]:
miny,maxy:=min(newdata[..,2]),max(newdata[..,2]);

0.65555435660341e-12

0.687994651740957e-13, 0.1725660332336273e-12

plots:-display(subsindets(P,specfunc(anything,CURVES),
                          z->CURVES(newdata,op(2..nops(z),z))),
               ytickmarks=[seq(evalf[17](miny+(i-1)*(maxy-miny)/5
                               =nprintf("%a",
                                        const+(miny+(i-1)
                                               *(maxy-miny)/5)
                                               *10^(-14))),
                               i=1..6)]);

 

 

icegood_plot01.mw

@gkokovidis Thanks. I'm fairly sure that Classic was available for Maple 14 on Solaris, but couldn't remember whether it had been dropped for Maple 15.

Is the Classic GUI available with Maple on Solaris?

acer


@alex_01 

You write that you worry about the performance of your LS approach, by which you likely mean its robustness and correctness, since your concern covers the two cases where its error norm is much higher. But you have not acknowledged that using the explicitly formed Matrix inverse of (A^%T.A) is not the best numerical way to compute (A^%T.A)^(-1).A^%T, as was illustrated by the numeric example above. That exponent of (-1) is just a convenient way to denote the Matrix inverse for publication. It does not mean that explicit inverse calculation is always numerically appropriate. You can use LinearSolve and still validly claim that you are computing that same formula. And you ought to, as it might clear up one of the two cases of concern over your LS approach.
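
To make that concrete, here is a minimal sketch (with made-up random data): both assignments below compute the same published formula, but only the first forms the explicit inverse.

with(LinearAlgebra):
A := RandomMatrix(100, 10, generator = 0.0 .. 1.0, datatype = float[8]):
y := RandomVector(100, generator = 0.0 .. 1.0, datatype = float[8]):
# the published formula taken literally, with an explicit inverse of A^%T.A
x1 := MatrixInverse(Transpose(A) . A) . Transpose(A) . y:
# the same formula, but solved as a linear system instead of inverting
x2 := LinearSolve(Transpose(A) . A, Transpose(A) . y):
Norm(x1 - x2, 2);   # small here, but can grow badly when A is ill-conditioned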

You write that you are concerned about the performance of the SVD method as used by the LeastSquares command for the over-determined case (m>n), and you give an example where it (and the other approaches) all produce a noticeable error norm. But what value did you expect? It won't typically be anywhere close to zero, for the kinds of random entries you're using in the Matrix, because the m>n (df>0) system will very likely be overdetermined. That term means that there cannot be a solution which solves the system exactly -- an overdetermined system is inconsistent, and its least squares solution will have a non-zero error norm.
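
For instance (random data, just to illustrate the point): the least squares residual is generally nonzero, yet it cannot be improved upon, since it is orthogonal to the column space of the Matrix.

with(LinearAlgebra):
A := RandomMatrix(20, 5, generator = 0.0 .. 1.0, datatype = float[8]):
y := RandomVector(20, generator = 0.0 .. 1.0, datatype = float[8]):
x := LeastSquares(A, y):
Norm(A . x - y, 2);                    # generally nowhere near zero
Norm(Transpose(A) . (A . x - y), 2);   # but ~0: the residual is orthogonal to Col(A)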

You may have to show the existence of a significantly better solution before the method=SVD solution can be cast in serious doubt.

You asked about eigenvalues. If you mean you want the eigenvalues of (A^%T.A), then those are the squares of the singular values of the Matrix A. Note that one does not necessarily have to spend the time and memory to explicitly form (A^%T.A) as a Maple Matrix in order to compute the singular values of the floating-point Matrix A. The computational work is done in compiled C functions, and clever workspace use/re-use may also mean that some intermediate forms are never produced in any stand-alone/returnable fashion.
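
A quick check of that relationship, as a sketch with random data (SingularValues never needs A^%T.A as an explicit Matrix; it is only formed here for the comparison):

with(LinearAlgebra):
A := RandomMatrix(8, 5, generator = 0.0 .. 1.0, datatype = float[8]):
sv := convert(SingularValues(A), list):                          # singular values of A, descending
ev := sort(map(Re, convert(Eigenvalues(Transpose(A) . A), list)), `>`):  # eigenvalues of A^%T.A
seq(sv[i]^2 - ev[i], i = 1 .. 5);                                # all differences ~0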

In your first comment you claim that your Maple 13 does not have an SVD method implemented within LeastSquares. But that is untrue.

> infolevel[LinearAlgebra]:=1:

> LinearAlgebra[LeastSquares](X, Y, method=SVD):

SingularValues: calling external function
SingularValues: NAG hw_f08kef
SingularValues: NAG hw_f08kff
SingularValues: NAG hw_f08kff
SingularValues: NAG hw_f08mef
LeastSquares: calling external function
LeastSquares: NAG hw_f06paf
LeastSquares: NAG hw_f06pbf
LeastSquares: NAG hw_f06paf

> LinearAlgebra[LeastSquares](X, Y):

LeastSquares: calling external function
LeastSquares: NAG hw_f08aef
LeastSquares: NAG hw_f08agf
LeastSquares: NAG hw_f07tef

> kernelopts(version);
          Maple 13.02, X86 64 WINDOWS, Oct 12 2009, Build ID 436951

What was more recently added was the ability of LeastSquares to make use of the so-called "thin SVD" which makes it much more efficient in the greatly overdetermined case m>>n. (See here for a stand-alone implementation of that, outside of LinearAlgebra.)

If you want to see a more user-level implementation of the SVD approach then you could do worse than look over several questions posed by member herclau in the second half of 2010, e.g. 1, 2, 3.

Note that in some cases the QR method may be faster, while the SVD method might be more robust. This is a common dichotomy: better speed vs. better assurance of correctness.
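
If you want to see how the two compare on your own data, a rough sketch (made-up sizes and random entries here) is:

with(LinearAlgebra):
A := RandomMatrix(2000, 50, generator = 0.0 .. 1.0, datatype = float[8]):
y := RandomVector(2000, generator = 0.0 .. 1.0, datatype = float[8]):
t0 := time(): xQR := LeastSquares(A, y, method = QR): time() - t0;
t0 := time(): xSVD := LeastSquares(A, y, method = SVD): time() - t0;
# the residual norms should agree closely; the timings need not
Norm(A . xQR - y, 2), Norm(A . xSVD - y, 2);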

You might think that everyone else should be able to figure out, without explanation beforehand, your intended meanings: that df = num.rows - num.cols, that "ey" means "Estimated Y" and is computed via A.x, that data1 is the solution in the code while `x` is the solution elsewhere, that X is the Matrix in the code while A is the Matrix elsewhere, and that sometimes Y means lowercase y while X does not mean lowercase x. But all of that makes it harder to read. I think it isn't as considerate of your audience as it could be.

acer

@Markiyan Hirnyk You misunderstand my points, I believe. Maple has not had session-dependent set ordering based on memory addresses for several major releases, so it is not relevant that you get the same order upon multiple re-execution here.

What I was trying to say is that the order will depend upon the lexicographic order (although that too can vary with the `setsort` option when Maple is launched). So, choose some other name for the parameter, i.e. Z instead of b, and its position in the set or list will be different. So choosing B[2] is ad hoc and will not reliably produce the same kind of result for other examples.

Also, one can choose a formal parameter x along with an additional (scoped) b, or a formal parameter b along with an additional (scoped) x. Both of these could produce the same list B, in which case the code that you showed, on its own, will be insufficient to determine which is the additional parameter. That is why, when manipulating the inert form, I deliberately excluded the formal parameters (of any operator present) before calling FromInert.

This is just another instance of the fact that lexicographic set ordering is, essentially, about as indeterminate as the old address-based set ordering was. The latter was indeterminate due to what may or may not have occurred previously in the session, while the former is indeterminate because the user has free rein to choose his or her variable names at will.

Post-conversion to a list does not pertain to or alter the above.


@Markiyan Hirnyk Let's also recall that the positioning in the indets result may depend upon set ordering.

So determining the placement is problematic.

> restart:
> h := Statistics:-Distribution(PDF = (x-> Dirac(x-b))):
> B := indets(ToInert(eval(h:-PDF)), atomic);           
               B := {1, "Dirac", "arrow", "b", "x", "operator"}

> h := Statistics:-Distribution(PDF = (x-> Dirac(x-A))):
> B := indets(ToInert(eval(h:-PDF)), atomic);           
               B := {1, "A", "Dirac", "arrow", "x", "operator"}

> h := Statistics:-Distribution(PDF = (x-> Dirac(x-Z))):
> B := indets(ToInert(eval(h:-PDF)), atomic);           
               B := {1, "Dirac", "Z", "arrow", "x", "operator"}

Since this may have been intended for user-defined distributions, the formal parameter name(s) of the PDF operator may also be ad hoc:

> h := Statistics:-Distribution(PDF = (b-> Dirac(b-x))):
> B := indets(ToInert(eval(h:-PDF)), atomic);           
               B := {1, "Dirac", "arrow", "b", "x", "operator"}

In this last example, we would need to determine the parameter name as well, so as to not misidentify.
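
A minimal sketch of doing that for this last example (the particular exclusions here are ad hoc to this example, not a general solution):

h := Statistics:-Distribution(PDF = (b -> Dirac(b - x))):
p := eval(h:-PDF):
pars := {seq(convert(t, 'string'), t = [op(1, p)])}:            # formal parameter(s), here {"b"}
cands := select(type, indets(ToInert(p), 'atomic'), 'string'):  # the string atomics shown above
cands minus pars minus {"Dirac", "arrow", "operator"};          # leaves {"x"}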

This all relates to why I originally sieved out the formal parameter names and some other atomics, when processing the inert form.

But I suppose that all this is moot now, as member icegood may choose another interface for his code's users.


Thanks for uploading your worksheet. Please note that we cannot yet run it, as it relies on some procedures like procBombcurve and (presumably assigned names of expressions like) PreBombcurve and PostBombcurve. Is it possible for you to upload everything required to run the worksheet? (Uploads and links to multiple files could be included, if you are at liberty to do so.)

Also, I notice that the lines of the worksheet must be executed out of order, even if the missing bits were available. For example, it executes a test of procedures `p` and `procRsum` before it defines/assigns the procedures `procPatrientres` and `procC14incell`. That is not critical, but it may confuse someone trying to run it.

It's slightly unwieldy for procedures such as procPatientres to assign to globals like `residual`, when it might be more straightforward to just have them return the computed value. (And, in turn, any other procedure which called that could assign the result to its own local `residual`.) Doing it that way can reduce mix-ups, keep the global namespace clean, and avoid the need for unassignments at the top level to prevent inadvertent mistakes.
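
A minimal sketch of that style (the procedure name is borrowed loosely from the worksheet, but the body here is made up purely for illustration):

procPatientres := proc(observed::list, predicted::list)
  local residual;
  residual := add((observed[i] - predicted[i])^2, i = 1 .. nops(observed));
  return residual;   # the caller decides where this goes
end proc:

res := procPatientres([1.0, 2.1, 2.9], [1.1, 2.0, 3.0]);   # assigned to the caller's own name, not a global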

acer

I really like purely exact approaches to such problems: no hidden float log, no `floor` or `ceil`, no outright or hidden `evalf` calls at all. The above method seems to be like that -- just exact multiplication and `op`. That reduces the risk of mistaken exclusion/inclusion due to roundoff or other numerical difficulty.

And it's probably quite fast.

And trailing-zero removal may be applied afterwards.

I'm supposing that `union` itself is not invoking evalf, and is relying only on evalb here.

Of course, for such tasks the speed can depend heavily on details. Is it for cases where a) there are many duplicates of just a few "unique" values, b) few duplicates of many "unique" values, or c) some mixed scenario?

acer


@alex_01 I could be mistaken in thinking that your problem is necessarily nonlinear. And you are right: solving it could likely be faster if it could be reformulated as LP. (There are constraints to accommodate, albeit simple bounds.) I'll try and give it some proper thought.

Ideally, the help-pages and Example worksheets are supposed to explain how to get Matrix form, with examples that explain it all. I agree, it can be messy. But as you say, seeing patterns is still not enough: one has to fully understand the structures in order to be able to get it for any new problem.  If I can find time, I'll convert this example, forcing use of NLPSolve.
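
In the meantime, for the LP reformulation, the Matrix form with only simple bound constraints would look roughly like the sketch below (all the data is made up; the precise calling sequences are on the LPSolve help page):

with(Optimization):
c  := Vector([1.0, -2.0, 0.5], datatype = float):   # objective coefficients, minimize c^%T.x
A  := Matrix([[1.0, 1.0, 1.0]], datatype = float):  # inequality constraints A . x <= b
b  := Vector([2.0], datatype = float):
bl := Vector([0.0, 0.0, 0.0], datatype = float):    # simple bounds bl <= x <= bu
bu := Vector([1.0, 1.0, 1.0], datatype = float):
LPSolve(c, [A, b], [bl, bu]);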

Cheers. Sorry if I've been overbearing in this thread.

 

