MaplePrimes Activity


These are replies submitted by acer

@LijiH 

"1, it will restart the workspace."

No.


"2, it will use packages PolynomialTools and RandomTools."

If you mean that proc1 and proc2 will contain calls to those packages, then yes. Just incorporate `uses` statements inside the bodies of proc1 and proc2. No, you should not try to accomplish this rebinding globally, merely by loading your package. See the help-page ?proc for a note on `uses`.


"3, it will protect the letters I, J and K."

Yes, sure. Just put calls to the `protect` command inside the ModuleLoad procedure of your package module. You will only see the effects of ModuleLoad being run if you `savelib` the package to a Library archive and then restart and load it.


"4, define M, which is not a proc but will be used in some proc that Helloworld offers."

Sure, a module can have locals as well as exports. You can define M in the definition of the module. You can even have proc1 or proc2 write values to such module locals.
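
Putting points 2 through 4 together, a rough sketch of such a package module might look like this (the bodies of proc1 and proc2 below are just hypothetical placeholders):

HelloWorld := module()
  option package;
  export proc1, proc2;
  local M, ModuleLoad;
  ModuleLoad := proc()
    protect('I', 'J', 'K');   # runs when the package is loaded from an archive
    NULL;
  end proc;
  proc1 := proc(p)
    uses PolynomialTools;     # rebinds only within this procedure's body
    M := CoefficientList(p, 'x');   # an export may write to the module local M
  end proc;
  proc2 := proc(n::posint)
    uses RandomTools;
    Generate(list(integer(range = 1 .. n), 3));
  end proc;
end module: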


You are going to have to create a Library archive, and set libname, to get this to behave how you want. See here for some notes on savelib'ing a module.
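
Something along these lines, say (the directory here is hypothetical; adjust to your own setup):

libdir := cat(kernelopts('homedir'), "/maple/lib"):      # hypothetical location
LibraryTools:-Create(cat(libdir, "/HelloWorld.mla")):    # once, to make the archive
libname := libdir, libname:
savelib('HelloWorld'):
# In later sessions, set libname the same way; ModuleLoad then runs
# the first time the package is accessed, eg. via with(HelloWorld).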

@TomM You wrote, "However, the whole package is not bundled with Maple, in the sense that the code (dlls) comes with the Maple program distribution." Are you claiming that the dll is shipped with Maple, or not? A dynamic shared CLAPACK library object is, in fact, included in the Maple installation on at least 32 and 64 bit MS-Windows and Linux.

The real and complex double precision functions (Dxxxxx and Zxxxxx, but not single precision Cxxxxx and Sxxxxx) of CLAPACK 3.0 should be in the clapack.dll that is indeed shipped with Maple. If there are some functions in the dll which are not exported, then that would be something to fix.

There isn't a dynamic BLAS library object shipped with Maple to which you can dynamically link, except for ATLAS on Linux.

I agree with you that there is some extra work to set up a define_external. That's why I posted a link to an example of doing so. And that's why I asked whether people would be interested in a Maple Library package that contains pre-made Maple (Library, interpreted) language level entry points to the routines.

This topic has only come up two or three times in this forum's history (another instance being a request for LDL decomposition, many years ago I think, to which I also responded with code for a wrapperless define_external to a CLAPACK function). When people seldom ask for something, it's often hard to tell whether they don't want it or simply don't realize they could ask for it.
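
For reference, a wrapperless hookup to one of those CLAPACK functions looks roughly as follows. This is only a hedged sketch: the dll name, the f2c-style trailing underscore, the integer width, and the exact argument-passing conventions are assumptions that vary by platform, so check ?define_external and the exports of your local clapack library.

# Sketch: link to CLAPACK's dtrtri (in-place triangular Matrix inversion).
dtrtri := define_external('dtrtri_',
    'uplo'::string, 'diag'::string,
    'n'::REF(integer[4]),
    'a'::ARRAY(datatype = float[8], order = Fortran_order),
    'lda'::REF(integer[4]),
    'info'::REF(integer[4]),
    'LIB' = "clapack.dll"):    # may need a full path, or a different name on Linux

A := Matrix([[2., 0., 0.], [1., 2., 0.], [1., 1., 2.]],
            'datatype' = float[8], 'order' = Fortran_order):
dtrtri("L", "N", 3, A, 3, 'info'):   # on success A's lower triangle is inverted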

ps. Are any of the CARE/DARE/LyapunovSolve/SylvesterSolve routines possible alternatives for the kind of DS system you need solved?

acer


A natural followup here is to try to make the resulting exported image file look more like a Maple plot, with axes, text, labels, etc.

This can be done by creating an empty plot with all the desired text and axes, exporting that to an image, reading the image back in, and then using that to mask the density image (which pretty much amounts to overlaying them, if the text is black).

An exported empty plot has a blank border, as the axes don't lie right at the image boundary. This can be accommodated either by fitting the density image to the axes-bounded portion of the empty plot's image, or vice-versa by chopping off the blank border of a slightly larger empty plot. I take the second approach, below. (I suspect the first would be a better choice, if one were aiming to produce a good, fully fledged plot exporter or plot driver replacement.) The border size also varies with the choice of Maple interface, eg. commandline or Standard GUI.

This calls `ArgPlot` as coded in the Post. This is rough, top-level stuff, but ideally it could be made into a more general procedure.

F:=z->`if`(z=0,Pi,exp(1/z)):
hsize,vsize:=300,300:

st,str,ba,bu:=time(),time[real](),kernelopts(bytesalloc),kernelopts(bytesused):
  Q:=ArgPlot(F,-0.25,0.25,vsize,hsize):
time()-st,time[real]()-str,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

st,str,ba,bu:=time(),time[real](),kernelopts(bytesalloc),kernelopts(bytesused):
_Env_plot_warnings:=false:
tempfile:=cat(kernelopts('homedir'),"/temp.jpg"):
# The Standard GUI's plot driver pads the plot with whitespace.
# So we increment the target size for export, and then later chop.
# The padding seems to be absolute in size, rather than relative
# to the dimensions, at least for the GUI's jpeg plot device. Certainly the
# padding can be much greater for jpeg export using the commandline interface's
# jpeg plot driver. Adjust accordingly.
plotsetup('jpeg','plotoutput'=tempfile,'plotoptions'=cat("height=",vsize+6+4,
                                                         ",width=",hsize+5+5));
plot(I,-0.25..0.25,'view'=[-0.25..0.25,-0.25..0.25]);
plotsetup('default'):
try fflush(tempfile); catch: end try:
fclose(tempfile);
# Why doesn't the GUI plot driver flush before returning control?!
# The timing is affected by a delay (YMMV, on NFS or a network drive)
# to wait for the image file to be written. If the `Read` below fails
# and claims the file does not exist, the delay may need increasing.
Threads:-Sleep(1): # 1 second
_Env_plot_warnings:='_Env_plot_warnings':
T:=ImageTools:-Read(tempfile,'format'='JPEG'):
correctedT:=rtable_redim(T[1+6..-1-4,1+5..-1-5,1..-1],1..vsize,1..hsize,1..3):
# The mask could likely be done more efficiently in-place on Q ("by hand"),
# with saving on allocation and thus subsequent collection.
newQ:=ImageTools:-Mask(Q,correctedT):
ImageTools:-Write(cat(kernelopts('homedir'),"/arg_expinv.jpg"),newQ,'quality'=95):
time()-st,time[real]()-str,kernelopts(bytesalloc)-ba,kernelopts(bytesused)-bu;

ImageTools:-View(newQ);

 

This can be compared with less attractive results using `densityplot`. (I understand that densityplot's `grid` option is not the same as the width/height of an image. But increasing the `grid` values, for densityplot or complexplot3d, can be prohibitive in terms of time & memory resources and the performance of the plot driver/exporter and of the GUI itself. These aspects were the original motivation. The usual plotting commands cannot match the image-creation method here, especially at higher or publication-quality resolutions.)

restart:

F:=z->`if`(z=0,Pi,exp(1/z)):
hsize,vsize:=300,300:
plots:-densityplot(-argument(exp(1/(a+b*I))),a=-0.25..0.25,b=-0.25..0.25,
                   colorstyle=HUE, style=patchnogrid,scaletorange=-Pi..Pi,
                   grid=[hsize,vsize],axes=normal);

Exporting using context-menus (right-click, takes a long time...) produces this,

And using the GUI's plot driver with the plotsetup command produces this,

plotsetup('jpeg','plotoutput'="driv_expinv.jpg",'plotoptions'=cat("height=",300,
                                                                  ",width=",300));

acer

As far as the computational part goes, there seems to be some bottleneck in mapping log[2] over the complex-valued Matrix D2. Ideally, that subtask should be solvable with a very fast implementation using the Compiler, but I've put in comments about some weirdness with the Compiled procedure's behaviour there. (I'll submit a bug report.)

For producing higher resolution graphics than what densityplot (with colorstyle=HUE) provides, you might have a look at this recent Post. At higher resolutions, building the image directly can be much faster and more memory efficient than using densityplot. (The GUI itself uses a huge amount of unreported memory for larger grids.)

The code below has some tweaks, with original lines commented out. (Uncommenting and running side by side might give some insight into why some ways are faster.) My machine runs the code below, with n=2^8, in under 7 seconds, and the originally Posted code in about 18 seconds. But I believe that a well-behaved Compiled complex log[2] map over a complex[8] Matrix would allow it to be done in around 1 second.

If you want the computational portions much faster, you could consider going with Compiled procedures to do all the computation, giving up the `~` mapping and evalhf. That also ensures sticking with hardware-datatype Matrices (which didn't happen with the original code) and helps keep the memory use down.

restart:
with(DiscreteTransforms): with(plots):

n:=2^8:

Start:=time():

M := Matrix(n, n, 1, datatype = float):
Ii := Vector(n, proc (i) options operator, arrow; i end proc, datatype = float):
x := Vector(n, proc (i) options operator, arrow; Ii[i]-(1/2)*n end proc):
y := Vector(n, proc (i) options operator, arrow; (1/2)*n-Ii[i] end proc):
X, Y := Matrix(n, n, proc (i, j) options operator, arrow; x[j] end proc),
        Matrix(n, n, proc (i, j) options operator, arrow; y[i] end proc):
R := 10:

st:=time():
A := map[evalhf](`<=`,`~`[`+`](`~`[`^`](X, 2), `~`[`^`](Y, 2)), R^2):
time()-st;

#st:=time():
#AA := `~`[`@`(evalb, `<=`)](`~`[`+`](`~`[`^`](X, 2), `~`[`^`](Y, 2)), R^2):
#time()-st;

st:=time():
M := evalhf(zip(`and`,(A, M))):
time()-st;

#st:=time():
#MM:=`~`[evalhf](`~`[`and`](A, M)):
#time()-st;
#LinearAlgebra:-Norm(M-MM);

#plots:-listdensityplot(M, smooth = true, axes = none);

D1 := FourierTransform(M, normalization = none):
D2 := ArrayTools[CircularShift](D1, (1.0/2)*n, (1.0/2)*n):

st:=time():
Q1:=Matrix(map[evalhf](t->Re(abs(t)),D2),datatype=float[8]):
time()-st;

#st:=time():
#Q11:=abs(D2);
#time()-st;
#LinearAlgebra:-Norm(Q1-Q11);

st:=time():
plots:-listdensityplot(Matrix(map[evalhf](t->Re(abs(t)),D2),datatype=float[8]),
                       smooth = true, axes = none, colorstyle = HUE,
                       style=patchnogrid);
time()-st;

st:=time():
D3:=`~`[subs(LN2=evalhf(ln(2)),t->ln(t)/LN2)](D2):
time()-st;

#st:=time():
#D33 := `~`[log[2]](D2):
#time()-st;
#LinearAlgebra:-Norm(D3-D33);

# There is a very weird behaviour, where both evalhf and a Compiled version of
# this makeD3 routine can (for n=8 say) take either less than a second, or 25 seconds,
# intermittently. The very same calls, without re-Compilation, but vastly different timing.
# I've commented the attempts out, and am using the uncommented log[2] method above.
makeD3:=proc(n::integer, a::Matrix(datatype=complex[8]), b::Matrix(datatype=complex[8]))
  local i::integer, j::integer, LN2::float;
  LN2:=1.0/ln(2.0);
  for i from 1 to n do
    for j from 1 to n do
      #b[i,j]:=log[2](a[i,j]);
      b[i,j]:=ln(a[i,j])*LN2;
    end do;
  end do;
  NULL;
end proc:

#st:=time():
#cmakeD3:=Compiler:-Compile(makeD3,optimize=true):
#time()-st;

#st:=time():
#D3:=Matrix(n,n,datatype=complex[8]):
#evalhf(makeD3(n,D2,D3)):
#time()-st;

#D3:=Matrix(n,n,datatype=complex[8]):
#st:=time():
#cmakeD3(n,D2,D3);
#time()-st;

st:=time():
plots:-listdensityplot(abs(D3), smooth = true, axes = none, colorstyle = HUE,
                       style=patchnogrid);
time()-st;

time()-Start;

[note: every time I edit this code it screws up the less-than symbols in the pre html tags and misinterprets them, removing content.]

aperture_density.mw

acer


@newlam All I see is base64 encoded source in your post, not images.

You can no longer paste 2D Math from Maple straight into Mapleprimes, since the site was revised. And mere images of your code can be awkward for us, since nobody wants to have to retype it.

You can use the green up-arrow in the editor here to upload your worksheet. Or you can paste the code as 1-D plain text Maple input.

Forcing the tickmarks seems to be key to this kind of thing, in current Maple. And that can be done for the x-axis or y-axis separately, or for both.

Flipping the plot in the x-direction can often be accomplished by a simple change of variables in the expression (as in the link Kamel has given). But it can also be done for either the x- or the y-direction using plottools, eg here.
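
A rough sketch of that plottools route (the example curve is arbitrary): reflect about the y-axis, then force tickmarks so the labels still read with the original values.

p := plot(x^2, x = 0 .. 4):
pr := plottools:-reflect(p, [[0, 0], [0, 1]]):   # mirror the data across the y-axis
plots:-display(pr,
   'tickmarks' = [[seq(-k = convert(k, 'string'), k = 0 .. 4)], 'default']);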

acer


@raffamaiden The lprint command is for line (1D, linear) printing, as its help-page explains. That is lprint's purpose; it does not do 2D formatted printing. What you described is 2D formatted printing.
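
For example (output shown roughly):

expr := x^2/3 + sqrt(y):
lprint(expr);   # 1-D line-printed:  1/3*x^2+y^(1/2)
print(expr);    # 2-D typeset output in the GUI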


A few more comments might be added, if I may. The next two paragraphs offer further support for the idea that symbolic inversion is not a good approach here.

The purely numerical floating-point computation (of the inverse, or of solving a linear system), whether done in Maple or Matlab, will be careful about selection of pivots. That is, the floating-point implementation uses the specific numeric values during pivot selection. Symbolically inverting the Matrix removes this carefulness, regardless of whether the symbolic pivots are chosen according to symbolic length or by no scheme at all. Only a scheme of symbolic pivot selection that could qualify the relative magnitudes of the pivot candidates, under assumptions on the unknowns, would have a hope of regaining this care. But that could be difficult, might require a case-split solution, and would add to the symbolic processing overhead. And the symbolic LU (or inversion) doesn't allow for a custom pivot-selection strategy in the symbolic case.

Even if you keep some form of Testzero, it's still only checking whether the pivots are symbolically identical to zero. It provides no safety against some pivot attaining a zero value for particular numeric values of the unknowns. If that happens, the whole approach breaks with division by zero.

Perhaps there is some other way to improve the performance of the Matlab code. Do you compute the inverse in order to solve linear equations like A.x=b? It wasn't clear from your post whether you might be solving for various different RHSs `b` with the same numeric instance of Matrix `A`. If so, you could do the numeric LU factorization of `A` just once for the given numeric values of the variables, and then use only forward- and backward-substitution for each distinct RHS instance. This won't help if you solve for just a single RHS for any particular set of values of the variables.
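
In Maple the factor-once, solve-many pattern might look like this minimal sketch (the Matrix and RHS data are just random placeholders):

with(LinearAlgebra):
A := RandomMatrix(5, 5, 'generator' = 0. .. 1., 'datatype' = float[8]):
P, L, U := LUDecomposition(A):    # numeric factorization, done once: A = P.L.U
for b in [RandomVector(5, 'datatype' = float[8]),
          RandomVector(5, 'datatype' = float[8])] do
    y := LinearSolve(L, Transpose(P) . b);  # solve the lower-triangular system
    x := LinearSolve(U, y);                 # solve the upper-triangular system
end do: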

acer

A few more comments might be added, if I may. The next two paragraphs are other suggestions to support the idea that symbolic inversion is not a good idea here.

The purely numerical floating-point computation (of the inverse, or of solving a linear system) whether done in Maple or Matlab will be careful about selection of pivots. That is, the floating-point implementation use the specific numeric values during pivot selection. Symbolically inverting the Matrix will remove this carefulness, regardless of whether the symbolic pivots are chosen according to symbolic length, or by no scheme. Only a scheme of symbolic pivot selection that could qualify the relative magnitude of the pivot choices under assumptions on the unknowns would have a hope of regaining this care. But it could be difficult and might require a case-split solution, and it would add to the symbolic processing overhead. And the symbolic LU (or inversion) doesn't allow for a custom pivot selection strategy in the symbolic case.

Even if you keep some form of Testzero, it's still only checking whether the pivots are symbolically identical to zero or not. It doesn't provide for any safety that, given particular numeric values for the unknowns, some pivot might not attain a zero value. If that happens then the whole approach breaks with division by zero.

Perhaps there is some other way to improve the performance of the Matlab code. Do you compute the inverse in order to solve linear equations like A.x=b? It wasn't clear from your post whether you might be solving at different instances for various different RHSs `b`, for the same numeric instance of Matrix `A`. If you will have to solve for different RHSs `b`, for the very same numeric values of the unknowns, then you could consider doing the numeric LU factorization of `A` just once for the given numeric values of the variables, and then using just forward- and backward-substitution for each distinct RHS instance. This isn't going to help, if you solve for just one single RHS instance for any particular set of values of the variables.

acer

The idea that a range (ie. a restriction on the domain) should not be necessary for numerical, floating-point rootfinding of a general expression or function is wrong.

There is no default range that will work for all problems. Provide a suggestion for an all-purpose range, and it can be broken by example. Provide a rule that tries to deduce a good working restricted domain by examining the function (under a time limit or operational resource limit), and it too can be broken by example.
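
A small illustration (the numbers are arbitrary):

R := 10.0^10:                  # stand-in for some imagined built-in default range
fsolve(x - 2*R, x = -R .. R);  # the root lies outside [-R, R]: nothing found
fsolve(x - 2*R, x);            # unrestricted: finds the root at 2*R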

Numerical analysis shouldn't be considered too arcane for younger students, if only to illustrate to them some of the difficulties. Younger people can understand not being able to see the forest for the trees, or not being able to see the trees for the forest.

acer

