acer

32303 Reputation

29 Badges

19 years, 308 days
Ontario, Canada


MaplePrimes Activity


These are replies submitted by acer

Some experimenting suggests that the bulk of the overhead may be in the analysis of the problem type. If I replace

result := Optimization:-Maximize(eq, Q=0..1);

with

result := Optimization:-NLPSolve(eq, Q=0..1, maximize=true, method=quadratic);

then the computation to obtain the plot gets about 10 times faster and also uses less memory. In full,

Payoff := proc(Q, s, p, pa)
  1 - 1/24*s^3 - 1/2*s*min(1, Q + 1/2*s)^2 + 1/4*s^2*min(1, Q + 1/2*s)
  - 1/3*min(1, Q + 1/2*pa)^3 + 1/3*min(1, Q + 1/2*s)^3
  + Q*min(1, Q + 1/2*pa)^2 - Q*min(1, Q + 1/2*s)^2
  - Q^2*min(1, Q + 1/2*pa) + Q^2*min(1, Q + 1/2*s)
  + s*Q*min(1, Q + 1/2*s) - 1/2*pa + 1/2*pa*min(1, Q + 1/2*pa)^2
  + 1/4*pa^2 - 1/4*pa^2*min(1, Q + 1/2*pa)
  - s*Q + pa*Q - pa*Q*min(1, Q + 1/2*pa) - p*Q;
end proc:

Payoffeq := Payoff(Q, s, p, pa);

Qopt2 := proc(sval, pval, paval)
  local eq, result;
  global Payoffeq;
  if not type({args}, 'set'(numeric)) then
    return 'procname'('args');
  end if;
  eq := subs({s=sval, p=pval, pa=paval}, Payoffeq);
  result := Optimization:-NLPSolve(eq, Q=0..1, maximize=true, method=quadratic);
  if type(result, 'list'({numeric, list})) then
    return rhs(op(result[2]));
  else
    return 'procname'(args);
  end if;
end proc:

Qopt2(.2, 0.1e-1, 1);

plot('Qopt2'(s, 0.1e-1, 1), s=0.1..0.5);

I traced through the code in the debugger when Optimization:-Maximize was used instead of Optimization:-NLPSolve with a supplied method. It looked to me as if the method chosen internally was 'default', which later got used as 'quadratic' and not as 'branchandbound'. Separate timings and the userinfo output both corroborate the idea that method=branchandbound is not used when Optimization:-Maximize is used here; the userinfo suggests that method=quadratic is selected. So the problem-analysis and method-selection overhead must be very large, since forcing method=quadratic and calling NLPSolve directly gives a great performance improvement. acer
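Here is a minimal sketch of how the comparison can be made, assuming the Payoff, Payoffeq, and Qopt2 definitions above. Qopt1 is a hypothetical variant, not part of the original code, differing only in that it calls Optimization:-Maximize.

Qopt1 := proc(sval, pval, paval)
  # hypothetical variant for timing comparison: identical to Qopt2
  # except that Maximize (with automatic method selection) is used
  local eq, result;
  global Payoffeq;
  if not type({args}, 'set'(numeric)) then
    return 'procname'('args');
  end if;
  eq := subs({s=sval, p=pval, pa=paval}, Payoffeq);
  result := Optimization:-Maximize(eq, Q=0..1);
  if type(result, 'list'({numeric, list})) then
    return rhs(op(result[2]));
  else
    return 'procname'(args);
  end if;
end proc:

# rough CPU-time comparison of the two approaches
st := time(): plot('Qopt1'(s, 0.1e-1, 1), s=0.1..0.5): time() - st;
st := time(): plot('Qopt2'(s, 0.1e-1, 1), s=0.1..0.5): time() - st;

# to watch which method Maximize selects internally, raise the userinfo level
infolevel[Optimization] := 3:
Optimization:-Maximize(subs({s=.2, p=0.1e-1, pa=1}, Payoffeq), Q=0..1);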
Thanks for pointing out that error, Paulina. One thing that I found slightly disappointing was that, using Optimization, the final plot took more resources (time, and memory judging by all the gc calls). I haven't checked to see whether using a "solution module" is leaner. acer
An array is a Maple table data structure which is indexed in a particular way, i.e. by integers. A vector and a matrix are two further specializations of arrays which have 1 and 2 dimensions respectively and whose indexing starts at the value 1. The older, deprecated linalg package is for dealing with the older, deprecated vector and matrix objects. The evalm command is for evaluating vector and matrix expressions (formulae, say) because those objects have what is called "last name evaluation". See ?last_name_eval.

The rtable is a newer sort of general data structure. One instance of it is the newer Array, which is indexed by integer values. The newer Vector and Matrix objects are 1- and 2-dimensional instances of rtable whose indexing starts at the value 1. The LinearAlgebra package is for dealing with Vectors and Matrices. Some top-level arithmetic operators such as `.`, `+`, `^`, etc., also know how to deal with these objects. The rtable objects do not have the "last name evaluation" property. There is no need to use evalm on them, and doing so is just generally inefficient.

An important difference is that Vector objects have an orientation (as a row or as a column), which vector objects lack. Maple is picky about the orientation, insofar as it relates to things like the validity of multiplication.

The two families, rtable-based and array-based objects, should almost never be used in a mixed way.

Your example could also be written this way,

A := Matrix(7, 7, [[1, -5, 0, 0, 0, 0, 0],
                   [0, 1, -5/4, 0, 0, 0, 0],
                   [0, 0, 1, -5/4, 0, 0, 0],
                   [0, 0, 0, 1, -5/4, 0, 0],
                   [0, 0, 0, 0, 1, -5/4, 0],
                   [0, 0, 0, 0, 0, 1, -5/4],
                   [0, 0, 0, 0, 0, 0, 1]]);

B := Vector(1 .. 7, [1, 1/4, 1/4, 1/4, 1/4, 0, 204]);

A^(-1) . B;

You might also look at ?LinearAlgebra[LinearSolve], in contrast to using the above method in which B is premultiplied by the inverse of A.

As an illustration of the orientation issue, using Matrix A and Vector B as above, compare,

1/A . B;

with,

B/A;   # or B . 1/A;

You can use the type() command to distinguish programmatically between, say, Matrices and matrices. Eg, type(A,Matrix) or type(A,matrix). You can also use the convert() command to convert from one to the other. Eg, convert(A,matrix).

You can use the syntax A^%T to transpose the Matrix A. You can also use this syntax to produce a copy of a Vector B with the other orientation. acer
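As a short sketch following on from the above (assuming the Matrix A and Vector B defined there):

# solve A . X = B directly, rather than forming the inverse explicitly
X := LinearAlgebra:-LinearSolve(A, B);

# distinguish the new Matrix from an old-style matrix programmatically
type(A, 'Matrix'), type(A, 'matrix');      # true, false

# convert to the old family; evalm is then needed to see past the
# last-name evaluation of the old matrix object
Aold := convert(A, matrix):
evalm(Aold);

# transpose syntax: works on the Matrix, and reorients a copy of the Vector
A^%T;
B^%T;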
I checked your Q_opt method, which examines the plot structures. It's not OK, even for single points. One can see this by inserting a global variable, say getit, and duplicating the plot() call inside procedure Q_opt() as,

getit := plot(subs(s=s_val, subs(p=p_val, subs(pa=pa_val, Payoff))), Q=0..1);

Then call Q_opt(.2, 0.1e-1, 1), which returns 0.791120720833333291. Then issue plots[display](getit), and one can see that the maximal value in the plot is just above 0.9 (which does not agree with 0.7911 but does agree with my earlier results above). acer
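A rough sketch of how that check can be done numerically rather than by eye, assuming getit has been captured as above and that the curve data sits in the usual spot of the PLOT structure as a list of [Q, value] pairs in this Maple version:

# pull the curve data out of the captured PLOT structure
data := op([1, 1], getit):

# maximal plotted Payoff value; here it comes out just above 0.9
max(seq(pt[2], pt = data));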
If you are looking only for integer solutions, you could use isolve() instead of solve(). acer
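For instance, on a made-up linear Diophantine equation (just to illustrate the suggestion):

# integer solutions only, returned as a family parametrized by an
# integer parameter such as _Z1
isolve(3*x + 5*y = 7);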
Sorry, I didn't explain well what I meant to convey. The help-page ?$ mentions that one may have to be careful due to possible (unwanted) premature evaluation when using $. It seemed to me that that warning would be better if it were accompanied by more examples.

Take the concatenation example. In that case, the x and the i are being concatenated prematurely, before Maple instantiates i at the values 1 and 2. Hence one gets xi,xi instead of x1,x2.

In the case of mutable objects like tables or rtables (eg. Vectors), only one instance of the object will be created, even if it is repeated or assigned to multiple names. It's the same cause: the object is created by an evaluation that happens prematurely (before the instantiation of i). By (careful) use of unevaluation quotes, one can delay the evaluation of the expression so that a separate instance gets created each time.

> (x,y) := Vector(1) $i=1..2:
> x[1]:=17: x[1],y[1]; evalb(x=y);
                                 17, 17
                                  true

> (x,y) := 'Vector'(1) $i=1..2:
> x[1]:=17: x[1],y[1]; evalb(x=y);
                                 17, 0
                                 false

I too prefer not to use $, unless the circumstances are very simple. I find that the efficiency gained over using seq() is negligible compared to the extra care that $ requires. acer
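A small sketch of the seq() alternative mentioned above: seq evaluates its first argument anew for each value of the index, so no unevaluation quotes are needed here.

> (x,y) := seq(Vector(1), i=1..2):
> x[1]:=17: x[1],y[1]; evalb(x=y);
                                 17, 0
                                 false

# seq creates a distinct Vector on each iteration, so x and y are separate objects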
Yes, your M1 and M2 are the same object.

> evalb(M1=M2);
                                  true

This is not specific to rtables. It relates to the lack of the usual evaluation of the expression when using $. Other examples,

> i $ i=1..2;
                                  1, 2

> x||i $i=1..2;
                                 xi, xi

> cat(x,i) $i=1..2;
                                 xi, xi

> (x,y) := table([]) $i=1..2:
> x[foo]:=bar:
> y[foo];
                                  bar

It would be good if this were mentioned on the ?$ help-page. As it is, there is only mention of a side-issue, that uneval quotes may be necessary to avoid premature evaluation. acer
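A hedged sketch of the unevaluation-quote workaround applied to the table example above, analogous to the 'Vector' case in the other reply:

> (x,y) := 'table'([]) $i=1..2:
> x[foo]:=bar:
> y[foo];
                                y[foo]

# with the quotes, the table([]) call is repeated unevaluated, so a separate
# table is created for each name and the assignment to x[foo] is not seen in y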
One can follow what happens, in the debugger. Below, I followed some of what happens for the example solve(abs(-sin(x)/x+1) = 1, x).

One routine in which action takes place here is `solve/rec2`. It sets up a SYSTEM, first, as,

[[{abs(-sin(x)/x+1) = 1}, {}, {x}, {}, true, false, 1, {x}]]

Then, it makes calls to SolveTools:-Transformers[i] on SYSTEM. The transformers i which seem to affect this example include Abs and Inequality (I think).

The Abs Transformer turns SYSTEM into this,

[[{z+sin(x)/x-1, z-1, 0 < z}, {}, {x, z}, {}, true, false, 1, {x}],
 [{z-sin(x)/x+1, z-1, 0 < z}, {}, {x, z}, {}, true, false, 1, {x}],
 [{z+sin(x)/x-1, z, z-1}, {}, {x, z}, {}, true, false, 1, {x}]]

The Inequality Transformer then turns it into this,

[[{z+sin(x)/x-1, z-1, 0 < z}, {}, {x, z}, [{z = 1, x = 0}], true, true, 1, {x}],
 [{z-sin(x)/x+1, z-1, 0 < z}, {}, {x, z}, [{z = 1, x = 0}], true, true, 1, {x}],
 [{z+sin(x)/x-1, z, z-1}, {}, {x, z}, {}, true, false, 1, {x}]]

That SYSTEM finally gets passed to SolveTools:-MakeSolutions, which returns [{x = 0}, {x = 0}].

One would have to know what the members in the system format mean, to know whether either Transformer is doing something unjustified. One could also follow it into SolveTools:-MakeSolutions, but I haven't done that. acer
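A brief sketch of how to reproduce this kind of trace, using the stopat debugger facility (the routine name `solve/rec2` is as observed above, and may differ in other Maple versions):

# set a breakpoint in the internal routine, then run the example;
# step with 'next'/'into' and inspect SYSTEM as each Transformer is applied
stopat(`solve/rec2`):
solve(abs(-sin(x)/x+1) = 1, x);

# remove the breakpoint when done
unstopat(`solve/rec2`):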
> fsolve(sin(x)/x-1=0, x=Pi);
                                  0.

That's not so good. acer
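For context on why that returned value is questionable (a small sketch: the equation is satisfied only in the limit as x approaches 0, where the expression itself is undefined):

> eval(sin(x)/x - 1, x = Pi);    # value at the supplied starting point
                                  -1
> limit(sin(x)/x - 1, x = 0);    # 0 only in the limit; the expression is 0/0 at x = 0
                                  0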
In Maple 11.00,

> restart:
> s:=solve(x^3-3*x+1 = 0):
> seq(is(convert(i,trig), 'real'), i = s);
                              true, true, true

However, one can still confuse is(), even to the point of a spurious false instead of FAIL,

> is( simplify(radnormal(rationalize(s[1]))), real);
                                 false

In this example, there is also this form of s[1], which is clearly real,

> simplify(convert(s[1],trig),trig):
> lprint(%);
4^(1/2)*cos(2/9*Pi)

> seq(simplify(eval(x^3-3*x+1,x=simplify(convert(i,trig),trig))),i=s);
                                0, 0, 0

There will be room for strengthening is() and simplify() for a long time to come. acer
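A quick numeric cross-check of the same point (a sketch, assuming s as computed above; evalf of the radical forms may carry tiny imaginary round-off):

# floats of the three symbolic roots; any imaginary parts are just round-off
evalf([s]);

# the same three real roots found purely numerically, for comparison
fsolve(x^3-3*x+1 = 0, x);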
A good prize would be something of value to the recipient. Of course, it's difficult to guess in advance what will be of value to any of a group of relatively unknown people. But one thing that the especially good mentoring contributors on MaplePrimes likely share is a high regard for a properly working Maple. Maybe a valuable prize would be a promise that a bug of the recipient's (reasonable) choice gets fixed in the next major release. acer