acer

MaplePrimes Activity


These are replies submitted by acer

Upon a moment's reflection, it is not at all a good idea to suggest Matrix(Ans^%T,shape=antisymmetric), as that is the very sort of inefficiency we're trying to avoid. It creates two whole new large Matrices. Sorry.

Instead, the line in the autocompiled proc,

     A[i,j]:=Xmean/Xstdev;

could be followed immediately by,

     A[j,i] := -A[i,j];

acer

Inside the Sharpe procedure there is a double loop over i and j. As I wrote it, the inner loop runs from 1 to i. If you instead made j run from 1 to NSTOCK then it would compute all NSTOCK^2 entries.

But the results are antisymmetric, so there is no need to make twice the computational effort. If you want the Result Matrix to actually contain all of the antisymmetric results then you can keep the `for j from 1 to i` (or make it `to i-1`) and instead have outerproc return an antisymmetric copy of Ans. That is to say, the last line in outerproc could be,

   Matrix(Ans^%T,shape=antisymmetric);

Indenting my code does not make it faster. I hope that it makes it more legible. Ideally, it would also have more comments.

Yes, it's a big deal that there are many ways to implement an algorithm in Maple, some of which might be very fast and some very slow, with no completely clear-cut characterization of what causes the difference.

If you want just the top 20 items, write a for-loop that walks the Results 20 times, storing the next greatest entry on each sweep. That can be much faster than sorting the whole thing.
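Here is a rough sketch of that kind of selection sweep, assuming (as in my example elsewhere in this thread) that the pairwise results sit in the strict lower triangle of a float[8] Matrix named Ans of dimension nstock; the names picked and results, and the count of 20, are just illustrative.

    picked := table():    # positions already reported
    results := []:
    for k from 1 to 20 do
      best := -infinity; bi := 0; bj := 0;
      for i from 2 to nstock do
        for j from 1 to i-1 do
          if not assigned(picked[i,j]) and Ans[i,j] > best then
            best := Ans[i,j]; bi := i; bj := j;
          end if;
        end do;
      end do;
      picked[bi,bj] := true;                 # mask it out for the next sweep
      results := [op(results), [best, bi, bj]];
    end do:
    results;   # 20 [value, i, j] triples, largest first

That is roughly 20*nstock^2/2 comparisons, while sorting all the entries costs on the order of nstock^2*log(nstock) and also forces you to flatten the Matrix into some other structure first.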

acer

Recognize that `option autocompile` does not mean that the routine will certainly be Compiled! If you write it with uncompilable stuff in it, then it will simply run in Maple's regular interpreter.

You can try hitting the proc (the one that's intended to be compiled) with Compiler:-Compile, to see whether it will in fact compile.
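For example, something along these lines (a small sketch; Sharpe refers to the inner proc in the listing below, and the printed messages are just illustrative):

    try
      cSharpe := Compiler:-Compile(Sharpe);
      print("Sharpe compiled; you can call cSharpe directly if you like");
    catch:
      print("Sharpe is not compilable as written:", lastexception);
    end try;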

Also, just because you have written something that can be Compiled does not necessarily mean that it, or anything auxiliary to it, has been implemented efficiently.

It strikes me that using combinat to get all the permutations (of two picked items) appears to be a poor idea in this context. It forms the permutations of the indices and stores them in an otherwise unneeded structure. That's aside from the fact that the computational formula appears to be symmetric (modulo a minus sign?).

> outerproc:=proc(SR::Matrix(datatype=float[8]))
> local Ans, n, nstock;
>   n,nstock := op(1,SR);
>   Ans := Matrix(nstock,nstock,storage=rectangular,
>                 datatype=float[8]);
>   Sharpe(Ans,nstock,n,SR); # acts inplace on Ans
>   #Matrix(Ans^%T,shape=symmetric);
>   ## or shape=antisymmetric ?
>   Ans;
> end proc:
>
> Sharpe:=proc(A::Matrix(datatype=float[8]),
>              NSTOCK::integer[4],
>              N::integer[4],
>              SR::Matrix(datatype=float[8]))
> option autocompile;
> local ap1::float, ap2::float, i::integer[4], j::integer[4],
>       r::integer[4], Xmean::float, Xstdev::float;
>   for i from 1 to NSTOCK do
>     for j from 1 to i-1 do
>       ap1 := 0.;
>       for r to N do
>         ap1 := ap1+SR[r,i]-SR[r,j];
>       end do;
>       Xmean := ap1/N;
>       ap2 := 0.;
>       for r to N do
>         ap2 := ap2+(SR[r,i]-SR[r,j]-Xmean)^2;
>       end do;
>       Xstdev := sqrt(ap2/(N-1));
>       A[i,j]:=Xmean/Xstdev;
>     end do;
>   end do;
> NULL;
> end proc:
>
>
> with(Statistics):
> nstock := 50:
> n := 100:
> StockReturn := Matrix(`<,>`(seq(Sample(RandomVariable(Normal(0, 1)),n),
>                                 i=1..nstock)))^%T;
                                   [ 100 x 50 Matrix      ]
                    StockReturn := [ Data Type: float[8]  ]
                                   [ Storage: rectangular ]
                                   [ Order: Fortran_order ]
 
>
> st:=time():
> Result := outerproc(StockReturn);
                             [ 50 x 50 Matrix             ]
                   Result := [ Data Type: float[8]        ]
                             [ Storage: triangular[upper] ]
                             [ Shape: symmetric           ]
                             [ Order: Fortran_order       ]
 
> time()-st;
                                     0.144
 
>
> st:=time():
> Result := outerproc(StockReturn);
                             [ 50 x 50 Matrix             ]
                   Result := [ Data Type: float[8]        ]
                             [ Storage: triangular[upper] ]
                             [ Shape: symmetric           ]
                             [ Order: Fortran_order       ]
 
> time()-st;
                                     0.013

If you want to make the result symmetric, or antisymmetric, fill in the commented bit above, in the outer proc.

Don't sort the whole result unless you really have a good reason to. If you only want the top 20 results then write something that gets just those. Sorting the whole thing is a different ballgame in computational complexity. Don't do it unless you really need it all for some other computation. The same goes for reforming the result into a table, or whatever.

You might want to check that the answers are coming out right (by hand, using another method but on the exact same data).

acer

Pass in N, the length of the Vector, as an argument to the proc you want to Compile.

If you hate having to pass in that N, then once you have finished writing the compilable proc you can write another proc which doesn't take N and wraps around it.

outer_proc:=proc(V::Vector)
local N;
  N := op(1,V); # gets the dimension
  compilable_proc(V,N);
end proc:

compilable_proc:=proc(V::Vector(datatype=float[8]), N::integer[4])
option autocompile;
    ..stuff..
end proc:

Also, don't make a whole bunch of individual compilable procs. Make one single mother-of-a-compilable-proc that calculates the mean, the SD, does all the updating of the data, and writes all the output to its output Matrix argument. An important goal can be to cut it down so that, when it runs, it makes only one or two Maple-level function calls.
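Just to illustrate the shape of that (a rough sketch, not your actual computation: the name stats_all, the column-wise mean/SD, and the 2-by-M output layout are made up for the example):

    stats_all := proc(X::Matrix(datatype=float[8]),
                      Out::Matrix(datatype=float[8]),
                      N::integer[4], M::integer[4])
    option autocompile;
    local i::integer[4], j::integer[4], s::float, m::float;
      # one compilable proc does all the work: means go into row 1 of Out,
      # standard deviations into row 2, with no Maple-level calls inside.
      for j from 1 to M do
        s := 0.;
        for i from 1 to N do s := s + X[i,j]; end do;
        m := s/N;
        Out[1,j] := m;
        s := 0.;
        for i from 1 to N do s := s + (X[i,j]-m)^2; end do;
        Out[2,j] := sqrt(s/(N-1));
      end do;
      NULL;
    end proc:

Called from the Maple level only once, e.g. stats_all(X, Out, 100, 5) with Out a 2x5 float[8] Matrix allocated beforehand.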

acer

The only ImageTools routine that was of use to me in this was Read. I'm simply implementing some formulae for such distortion and applying them to the (layers of the) image Array.

I'm using option autocompile on the procedures, with evalhf as a fallback. The polynomial formula for forward distortion is evaluated using Horner's rule. I was only looking at the kinds of distortion due to focal length (i.e. wide-angle barrel distortion, pincushion, etc.) and not due to mismatched focal planes. I'm using an order 3 polynomial.
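For what it's worth, "Horner's rule for an order 3 polynomial" just means nesting the evaluation so that it costs three multiplications and three additions; the coefficient names below are illustrative, not my actual distortion model:

    # c0 + c1*r + c2*r^2 + c3*r^3, evaluated Horner-style
    horner_cubic := proc(r::float, c0::float, c1::float, c2::float, c3::float)
    option autocompile;
      c0 + r*(c1 + r*(c2 + r*c3));
    end proc: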

The very fast prototype version just truncates the Array index value when accessing from the source Array. A less lossy version would interpolate. (I haven't figured out yet whether I'll use CurveFitting:-ArrayInterpolation or a hard-coded formula for that.) I also haven't yet split out the action so as to treat the three color layers separately, though I understand that this may be desirable: there is a rare lens issue in which focal-length distortion affects some colors more than others, so that correcting them separately could be worthwhile.

I probably won't have time to make it presentable until next week at earliest.

acer

> restart:

> with(Statistics):
> G:=Sample(RandomVariable(Normal(0,1)),10):

> Mean(G);
                                -0.04797088930

> StandardDeviation(G);
                                 0.7424204031

> tmean := 0.0:
> for kk from 1 to 10 do
>   tmean := tmean + G[kk];
> end do:
> tmean := tmean/10;
                            tmean := -0.04797088937

> tSD := 0.0:
> for kk from 1 to 10 do
>   tSD := tSD + (G[kk]-tmean)^2;
> end do:
> sqrt(tSD/(10));
                                 0.7043218365

> sqrt(tSD/(10-1));
                                 0.7424204031

So, you can replace calls to Statistics:-Mean or Statistics:-StandardDeviation with explicit arithmetic like the above, so as to be able to use the calculation with the Compiler.
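A minimal sketch of that replacement, wrapped so that the Compiler can digest it (the name vec_sd is made up, and the n-1 divisor matches the sample standard deviation above):

    vec_sd := proc(V::Vector(datatype=float[8]), n::integer[4])
    option autocompile;
    local k::integer[4], m::float, s::float;
      m := 0.;
      for k from 1 to n do m := m + V[k]; end do;
      m := m/n;                      # the mean, as in the tmean loop above
      s := 0.;
      for k from 1 to n do s := s + (V[k]-m)^2; end do;
      sqrt(s/(n-1));                 # sample standard deviation
    end proc:

    vec_sd(G, 10);   # should agree with StandardDeviation(G) above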

acer

Since Maple's Compiler is generally available (a bundled Watcom compiler on 32bit Windows, the free MS compiler on 64bit Windows, free gcc on Linux, OSX, and Solaris) it seems OK to use it for this task.

#Number theory
#Calculation of 10000000 fibonacci numbers

> t,tr:=time(),time[real]():
> N:=10^7:
> A := LinearAlgebra:-RandomVector(N,generator=100..1000,
>                 outputoptions=[datatype=integer[4]]):

> f := proc(a::Vector(datatype=integer[4]),b::Vector,n)
>         option autocompile; local phi,s5,i;
>         s5:=sqrt(5.); phi:=0.5*s5+0.5;
>         for i from 1 to n do
>           b[i]:=(phi^a[i]-(-phi)^(-a[i]))/s5;
>         end do;
>         NULL;
>      end proc:

> B:=Vector(N,datatype=float[8]):
> f(A,B,N): # results are in B
> min(time()-t,time[real]()-tr);
                                     5.340

(32bit Linux, Maple 13.01, Intel(R) Pentium(R) 4 CPU 3.20GHz)

acer

The question "how do I assign the results of a `solve` call to the individual variable names?" is a frequently asked question.

But often it is the "wrong" question for the given task, where the "right" question is more like, "how do I use the results of a `solve` call?".

Doug's reply above, showing the distinction between `assign` versus 2-parameter `eval`, is what ought to go in the FAQ page for this topic.

(A FAQ item for this topic might simply use equations and not Matrix M, to make it appear more general.)
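Something like this, say (the equations are made up just to show the mechanics):

    sol := solve({x + y = 3, x - y = 1}, {x, y});
    # sol := {x = 2, y = 1}

    eval(x^2 + y, sol);   # use the solution, leaving x and y unassigned
    # 5

    assign(sol);          # or commit the values to the names x and y
    x, y;
    # 2, 1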

acer

The question "how do I assign the results of a `solve` call to the individual variable names?" is a frequently asked question.

But often it is the "wrong" question, for the given task, where the "right" question is often more like, "how to I use the results of a `solve` call?".

Doug's reply above, showing the distinction between `assign` versus 2-parameter `eval`, is what ought to go in the FAQ page for this topic.

(A FAQ item for this topic might simply use equations and not Matrix M, to make it appear more general.)

acer
