acer

MaplePrimes Activity


These are replies submitted by acer

@JAnd Give it a whirl (and try the small 4x3 case too, by uncommenting that line).

By the way, what kind of data is in your Vector? Integers and rationals, or floats, or symbolics?

restart:
with(ArrayTools):

#m,n := 4,3;
m,n := 288,180:

V := Vector[row](m*n,(i)->i):

st:=time():
   m:=Reshape(V,[n,m])^%T;
printf("                             %.3f seconds\n",time()-st);


                             [ 288 x 180 Matrix     ]
                             [ Data Type: anything  ]
                        m := [ Storage: rectangular ]
                             [ Order: Fortran_order ]

                             0.016 seconds

m[1..3,1..3]; # checking top left corner

                               [  1    2    3]
                               [             ]
                               [181  182  183]
                               [             ]
                               [361  362  363]

st:=time():
   p:=FlipDimension(m,1);
printf("                             %.3f seconds\n",time()-st);


                             [ 288 x 180 Matrix     ]
                             [ Data Type: anything  ]
                        p := [ Storage: rectangular ]
                             [ Order: Fortran_order ]

                             0.000 seconds

p[-3..-1,1..3]; # checking bottom left corner

                               [361  362  363]
                               [             ]
                               [181  182  183]
                               [             ]
                               [  1    2    3]
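
For readers more familiar with NumPy, the reshape-then-transpose and flip steps above have a direct analogue (an illustrative cross-language sketch, not Maple; `order='F'` mirrors Maple's Fortran_order storage):

```python
import numpy as np

m, n = 288, 180
V = np.arange(1, m * n + 1)            # row vector 1..m*n, like the Maple V

# Analogue of Reshape(V,[n,m])^%T: fill column-major, then transpose,
# so rows of M read 1,2,3,... across, matching the Maple output
M = V.reshape((n, m), order='F').T
print(M[:3, :3])    # top left corner: rows start at 1, 181, 361

# Analogue of ArrayTools:-FlipDimension(m,1): reverse the row order
P = np.flipud(M)
print(P[-3:, :3])   # bottom left corner: rows start at 361, 181, 1
```

Both operations are cheap in NumPy too, since `.T` and `flipud` return views rather than copies.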

@JAnd Sorry if I'm being dense, but could you show, in plain text, the input and output that you expect (and which command of mine wasn't right for that)? Is this the kind of thing that you need to do many times, or at large size? Thanks.

It wasn't clear which way you wanted the entries laid out, by column or row.

`Reshape` can be a bit quicker, even with an explicit transposition to get the entries laid out in rows (presuming that you didn't simply want order=C_order, which would be a faster way to get that layout).

restart:
m,n:=33,22:
iter:=1009:

V:=Vector[row](m*n,(i)->i):
with(ArrayTools):

st:=time():
for i from 1 to iter do
   m1:=Reshape(V,[n,m])^%T;
end do:
time()-st;
                                    0.249

st:=time():
for i from 1 to iter do
   L:=convert(V, list):
   m2:=Matrix(m, n, L);
end do:
time()-st;
                                    0.655

st:=time():
for i from 1 to iter do
   m3:=Matrix(m, n,(i,j)->V[(i-1)*n+j]);
end do:
time()-st;
                                    1.186

LinearAlgebra:-Norm(m1-m2),LinearAlgebra:-Norm(m2-m3);
                                    0, 0
#m1,m2,m3;

restart:
m,n:=33,22:
iter:=1000:

V:=Vector[row](m*n,(i)->i):
with(ArrayTools):

st:=time():
for i from 1 to iter do
   m1:=Reshape(V,[m,n])^%T;
end do:
time()-st;
                                    0.203

st:=time():
for i from 1 to iter do
   L:=convert(V, list):
   m2:=Matrix(n, m, L);
end do:
time()-st;
                                    0.640

st:=time():
for i from 1 to iter do
   m3:=Matrix(n,m,(i,j)->V[(i-1)*m+j]);
end do:
time()-st;
                                    1.170

LinearAlgebra:-Norm(m1-m2),LinearAlgebra:-Norm(m2-m3);
                                    0, 0
#m1,m2,m3;
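
The three Maple constructions above all produce the same row-major layout; the index arithmetic behind the per-entry initializer, (i,j) -> V[(i-1)*n+j], can be sanity-checked with a small pure-Python sketch (hypothetical names, 0-based indexing):

```python
m, n = 4, 3
V = list(range(1, m * n + 1))          # flat row vector 1..12

# Approach 1: chop V into m rows of length n (the reshape-plus-transpose route)
m1 = [V[i*n:(i+1)*n] for i in range(m)]

# Approach 3: per-entry initializer, mirroring (i,j) -> V[(i-1)*n+j] (1-based in Maple)
m3 = [[V[i*n + j] for j in range(n)] for i in range(m)]

print(m1 == m3)    # True: identical row-major layouts
```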

acer

Sorry, I forgot to add this:

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..100, b=0..100, c=0..100,
   iterationlimit=400,feasibilitytolerance=1e-5);

   [3.99999999999999467, [a = 32.1004819041634, b = 42.3076441354364, 

     c = 53.1072282340437, t = 0.604446569149606, u = 0.796645683487556]]

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..100, b=0..100, c=0..100,
   iterationlimit=4000,feasibilitytolerance=1e-7);

   [4.00000000000000089, [a = 32.8320116158925, b = 40.1608997908390, 

     c = 51.8732962008298, t = 0.632927036071215, u = 0.774211448509924]]

But here is my objection to these results: we may believe, on various grounds, that the exact global minimum is exactly 4. But without that knowledge, how can we trust the numerical optimizers? All the above results, from both GlobalSolve and Minimize, have constraint violations. And here is the kicker: allowing the very same constraint violations, we can get these solvers to produce "false" results with much lower objective values. E.g.,

restart:
Optimization:-Minimize(
   (3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c,
   {a^2+2*b*c*u >= b^2+c^2, a*t+b*u <= c,
   b^2*(t^2-u^2)/(t^2-1)+c^2 <= 2*b*c*u},
   t=0..1, u=0..1, a=0..0.001, b=0..0.001, c=0..0.001,
   iterationlimit=400,feasibilitytolerance=1e-5);

       [0., [a = 0., b = 0., c = 0.00100000000000000, t = 0., u = 0.]]

The problem is that the last result above, with objective value 0.0 (or other cases very close to 0.0), seems to have "essential" constraint violations which are due not just to numerical (floating-point) effects. If you first choose some given feasibility tolerance, then it may be possible to find a value of `c` such that a=0 and b=0 give an objective result less than 4 (and perhaps close to zero). How shall we discriminate between one "ok" kind of constraint violation and another "not ok" kind?

I believe that the globally minimal value may be 4, and that the constraint violations at a=0 and b=0 (say) are essential, exact violations. But without analytical examination, how can we distinguish among situations involving constraint violations?
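
To make the "essential violation" point concrete: plugging the reported point a=0, b=0, c=0.001, t=0, u=0 into the three constraints by hand (a quick check in Python rather than Maple; the names v1, v2, v3 are mine) shows two of them violated by about c^2 = 1e-6, which sits just inside the 1e-5 feasibility tolerance:

```python
a, b, c, t, u = 0.0, 0.0, 0.001, 0.0, 0.0

# constraint 1: a^2 + 2*b*c*u >= b^2 + c^2  (positive value = violation)
v1 = (b**2 + c**2) - (a**2 + 2*b*c*u)
# constraint 2: a*t + b*u <= c              (positive value = violation)
v2 = (a*t + b*u) - c
# constraint 3: b^2*(t^2-u^2)/(t^2-1) + c^2 <= 2*b*c*u
v3 = (b**2*(t**2 - u**2)/(t**2 - 1) + c**2) - (2*b*c*u)

print(v1, v2, v3)   # ~1e-6, -0.001, ~1e-6: two violations of size c^2, inside tol 1e-5
```

So the spurious objective value of 0.0 rests on violations that shrink with c but never vanish, not on mere rounding.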

acer

@STHence Well, then I must apologize for not understanding just what you were after. I guess I got confused and thought that you primarily wanted to use SmithForm (for some reason of your own). And I was thinking about making a comment -- but did not -- about floating-point.

You may possibly be interested to know that Maple will use (comparison amongst) entries of the diagonal Matrix `S` produced by SingularValues when computing the rank of a floating-point Matrix. This ties into earlier stuff about Rank and Digits.
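
For comparison, the analogous SVD-based rank computation in NumPy terms (an illustrative analogue, not Maple's actual implementation; the threshold formula below is a common default, not necessarily Maple's):

```python
import numpy as np

# A 3x3 float Matrix of exact rank 2: the third row is the sum of the first two
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

s = np.linalg.svd(A, compute_uv=False)            # singular values, descending
tol = max(A.shape) * np.finfo(float).eps * s[0]   # tolerance tied to working precision
rank = int(np.sum(s > tol))
print(rank)    # 2: the tiny third singular value falls below the tolerance
```

This is why the computed rank of a float Matrix can change with the working precision: the tolerance scales with the machine epsilon at that precision.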

Best regards.

@Rowlinginthedeep I wrote a short piece of code that did it a bit like that, but it's at home and I'm in the office right now. Instead of "removing" points, I used a weight vector, with zero entries for points to be ignored. I compared each residual (and/or y-difference w.r.t. the line) to the max of such, as a basis for rejection.

Also, I did it as an iterated process, on the following grounds. At the first iteration, the residuals are being computed against what is taken to be the initial "wrong" approximating curve (line). After a "corrected" new line is computed (using weights to sieve out the noise), all the points will have new residuals. It may well be that, with these new residuals based on the new line, the set of points that should be included/excluded is slightly different from what it was the first time. So a new weight vector can be constructed, and the process repeated. And so on. For the example at hand, after about 5 iterations of this process the weight vector stopped needing to be changed, the set of rejected points stabilized, and the approximating line stopped looking like it was being influenced by the "noise" (i.e., rotated too much clockwise).

I'm not a statistician, and I just cooked this up, so anyone who knows better please speak up.
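
A rough sketch of that weights-and-iterate idea in Python/NumPy (my own cooked-up version, with made-up data and a made-up rejection threshold, not the code described above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, x.size)    # points near the line y = 2x + 1
y[::10] += 8.0                                       # inject a few large "noise" points

w = np.ones_like(x)                 # weight vector: 1 = keep, 0 = ignore
for _ in range(10):                 # iterate until the kept set stabilizes
    coeffs = np.polyfit(x[w > 0], y[w > 0], 1)       # refit the line on kept points
    resid = np.abs(y - np.polyval(coeffs, x))        # residuals against the new line
    w_new = (resid < 0.5 * resid.max()).astype(float)  # reject near-max residuals
    if np.array_equal(w_new, w):    # weight vector stopped changing
        break
    w = w_new

print(coeffs)    # slope/intercept close to 2 and 1 once the outliers are zero-weighted
```

As in the description above, the first fit is pulled toward the noise, the noisy points then show the largest residuals and get zero weight, and the refit stabilizes after a few iterations.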

Could you add, in a Comment, the Maple code (procedure) that you used, and also let us know the operating system and whether your Maple 16 is 32-bit or 64-bit?

acer

This also seems to work (and provides some functional `ssystem` as a bonus),

restart:

unprotect(ssystem):
ssystem:=eval(`dsolve/numeric/ssystem`):
protect(ssystem):

cp:=Compiler:-Compile( proc(x) sin(x); end proc ):

cp(2.1);

                         0.863209366648873710

Who knew!

acer
