acer

MaplePrimes Activity


These are replies submitted by acer

See also Preben Alsholm's procedure, ResizePlot, here.

@hermitian As you've guessed, CodeTools:-Usage is just something that you can wrap around a computational function call, to get an easy printing of the time and memory resources that get used. It's not necessary here, and might not have been available in Maple 13.

The `epsilon` is an option to evalf(Int(...)), the numeric integration (quadrature) command. It is the accuracy tolerance. You see, fsolve is picky and might not return a result unless it is satisfied with the convergence of its result. And fsolve might try to attain an accuracy that depends upon the current Digits setting at the level at which it was called. So I set the quadrature accuracy-tolerance `epsilon` small enough to get 10 digits of accuracy, but put that inside the EnergyDensity procedure, which had Digits raised high enough (hopefully) to allow evalf/Int to satisfy such a tolerance.

The other options inside evalf(Int(...)) were to force use of a method suitable for a semi-infinite range of numeric integration. See the evalf/Int help-page. If Digits=15 had not been high enough to allow evalf/Int to attain the provided `epsilon` accuracy then I'd have had to set Digits even higher within `EnergyDensity` and not specify _d01amc, which only works up to Digits=15 and hardware double precision.
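
The idea can be illustrated outside Maple. Below is a rough Python sketch (my own, not what evalf/Int does internally) of adaptive quadrature over a semi-infinite range: the substitution x = t/(1-t) maps [0, infinity) onto [0, 1), and the recursion subdivides until an absolute tolerance (the analogue of `epsilon`) is met. The integrand exp(-x) and the tolerance value are illustrative.

```python
import math

def adaptive_simpson(f, a, b, eps):
    """Adaptive Simpson quadrature of f on [a, b] to absolute tolerance eps."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, eps):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        if abs(left + right - whole) <= 15.0 * eps:
            # Richardson extrapolation of the coarse and refined estimates
            return left + right + (left + right - whole) / 15.0
        return (recurse(a, m, fa, flm, fm, left, eps / 2.0)
                + recurse(m, b, fm, frm, fb, right, eps / 2.0))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), eps)

def integrate_0_to_inf(f, eps=1e-10):
    """Integrate f over [0, inf) via the substitution x = t/(1-t),
    dx = dt/(1-t)^2, which maps the semi-infinite range onto [0, 1)."""
    def g(t):
        u = 1.0 - t
        return f(t / u) / (u * u)
    # stop just short of t = 1, where the transformed integrand tends to 0
    return adaptive_simpson(g, 0.0, 1.0 - 1e-12, eps)

# Example: int_0^inf exp(-x) dx = 1
print(integrate_0_to_inf(lambda x: math.exp(-x)))
```

Note how the requested tolerance `eps` is only attainable because the arithmetic carries more precision than the tolerance asks for, which is the same interplay as between Digits and `epsilon` above.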

acer 

@tomasz74 If your original statement, "Data points are taken in 1Hz", means that the data points form a grid (regular in both x- and y-directions) then you should be able to get a smooth surface from the (stock, non-add-on) plots:-surfdata command. Sorry if that's a repeat, or not appropriate for your situation.

Ok, I think I've got a better handle on this now.

The Voronoi and Delaunay (incidence) graphs are dual, but that doesn't immediately give the geometric details of the Voronoi diagram (cells) from the details of the Delaunay triangulated mesh. But it turns out to not be so hard to compute.

I suppose that I am trying to say two main things:

1) One can either do the heavy lifting (distance calculations, like with Euclidean metric, involving circles) to compute the Delaunay triangulation, or the Voronoi diagram. But it shouldn't be necessary to make that effort for both. Now, I happen to have access to what seems to be a decent Delaunay triangulation mesh generator (producing details of the triangles and the incidence graph). I mostly only found schemes for computing the Voronoi diagram from scratch in various computational geometry texts. But a little observation reveals that the edges of the Voronoi cells may be computed directly from the intersections of the right bisectors of the triangles of the Delaunay triangulation.

2) We should be striving to compute the Voronoi cells as defined by straight line segments (well, according to choice of metric as Euclidean distance). So the program should be identifying those line segments. Therefore approaches (1, 2) which approximate those lines without final identification are not providing the ideal target.

I'm not saying that it's wrong to compute the Voronoi diagram the hard way. Whether to do the "hard part" in computing the Voronoi or the Delaunay geometric details is a matter of choice. (And if you happen to have one then subsequently computing the other is much less hard.) But if one does elect to compute the Voronoi diagram as the hard part then the final result should be the defining geometric details of the polytopes and not just a plot.

note: I will try to find time to produce an end-to-end computation: points -> Delaunay -> Voronoi cells, but I have only limited computer time this week.

acer

@muffinman123 As Roman noted above, "Under the interface tab you can also make Maple create new worksheets (with prompts) instead of documents (free form) by default."

See Tools->Options->Interface and then the drop-down menu for "Default format for new worksheets", which you would set to "Worksheet" instead of "Document".

It doesn't matter for partitioning something as small as 20, but if code is to scale then it's better to construct M with `seq` or `select` than to build it up triangularly with augmented lists or sets.

restart:
with(combinat, partition):
st,ba := time(),kernelopts(bytesalloc):
N,n := 40,10:
L := partition(N):
M := []:
for j to nops(L) do
   if nops(L[j]) = n then
      M := [op(M), L[j]];
   end if;
end do:
time()-st, kernelopts(bytesalloc)-ba;

                        1.700, 60282080

map(c -> sort(c, `>=`) , M):
nops(%);

                              3590

restart:
st,ba:=time(),kernelopts(bytesalloc):
N,n := 40,10:
L := combinat[partition](N):
M := select(z->nops(z)=n,L):
time()-st, kernelopts(bytesalloc)-ba;

                        0.188, 27389032
map(sort,M,`>=`):
nops(%);

                              3590

restart:
st,ba:=time(),kernelopts(bytesalloc):
N,n := 40,10:
L:=combinat[partition](N,N-n+1):
M := select(z->nops(z)=n,L):
time()-st, kernelopts(bytesalloc)-ba;

                        0.202, 22671304
map(sort,M,`>=`):
nops(%);

                              3590
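
As a cross-check outside Maple, the count 3590 can be reproduced directly (a small Python sketch, names mine) from the standard recurrence for the number of partitions of N into exactly n positive parts, p(N, n) = p(N-1, n-1) + p(N-n, n): either the smallest part is 1 (remove it) or every part exceeds 1 (subtract 1 from each part).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_partitions_exact(N, n):
    """Number of partitions of N into exactly n positive parts,
    via p(N, n) = p(N-1, n-1) + p(N-n, n)."""
    if N == 0 and n == 0:
        return 1
    if N <= 0 or n <= 0 or n > N:
        return 0
    return count_partitions_exact(N - 1, n - 1) + count_partitions_exact(N - n, n)

print(count_partitions_exact(40, 10))  # 3590, matching nops(M) above
```

Counting via the recurrence avoids materializing all 37338 partitions of 40, which is the same scaling point as preferring `select` over the triangular list building.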

acer

Just a minor comment on Joe's answer, for Christopher:

It's possible to make try..catch behave a bit like this.

F:=proc(x::uneval)
   try
      eval(x);
   catch:
      lastexception[2];
   end try;
end proc:

F( tan(Pi/4) );

                               1

F( tan(Pi/6) );

                            1  (1/2)
                            - 3     
                            3       

F( tan(Pi/2) );

             "numeric exception: division by zero"

traperror( tan(Pi/4) );

                               1

traperror( tan(Pi/6) );

                            1  (1/2)
                            - 3     
                            3       

traperror( tan(Pi/2) );

             "numeric exception: division by zero"

And with try..catch one has more power (to intercept and handle some particular errors separately, or to have a `finally` clause, etc).
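
The same wrap-and-trap pattern is available in most languages; for comparison, a rough Python analogue (helper name mine) of the F procedure above:

```python
def trap(thunk):
    """Evaluate a deferred computation; if it raises, return the error
    message instead (similar in spirit to F and traperror above)."""
    try:
        return thunk()
    except Exception as exc:
        return str(exc)

print(trap(lambda: 2 + 2))  # 4
print(trap(lambda: 1 / 0))  # division by zero
```

As with try..catch in Maple, the except clause can be narrowed to particular exception types, and a finally clause can be added.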

acer

@Okimatsuki With a rank of 3, this system of three equations in four variables has 1 free parameter in its solution.

restart:

with(LinearAlgebra):

eqs:={x2 = 3*x4-2, x1 = -5*x4+3, x3 = -4*x4+2}:

A,b:=GenerateMatrix(eqs,[x1,x2,x3,x4]):

Rank(A);

                                         3

X:=LinearSolve(A,b,'free'=t):

Equate([x1,x2,x3,x4],X);

           [x1 = 3 - 5 t[4], x2 = -2 + 3 t[4], x3 = 2 - 4 t[4], x4 = t[4]]

solve(eqs);

               {x1 = -5 x4 + 3, x2 = 3 x4 - 2, x3 = -4 x4 + 2, x4 = x4}
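
The parametric answer can be sanity-checked outside Maple; the plain-Python sketch below (function names mine, with t playing the role of t[4]) substitutes the general solution back into the three equations.

```python
def general_solution(t):
    """General solution of the rank-3 system in (x1, x2, x3, x4),
    with x4 = t as the single free parameter."""
    return (3 - 5 * t, -2 + 3 * t, 2 - 4 * t, t)

def residuals(t):
    """The three equations rewritten as expressions that must vanish."""
    x1, x2, x3, x4 = general_solution(t)
    return (x2 - (3 * x4 - 2), x1 - (-5 * x4 + 3), x3 - (-4 * x4 + 2))

# Every value of the free parameter yields an exact solution.
for t in (-2, 0, 1, 7):
    assert residuals(t) == (0, 0, 0)
print("all residuals zero")
```
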