acer


MaplePrimes Activity


These are replies submitted by acer

My 64bit Linux installations of Maple 2016.0 (Build ID 1113130), Maple 2016.1 (Build ID 1132667), and Maple 2016.1a (Build ID 1133417) each contain only libicuucmpl.so.56.1 (and its two associated symlinks).

Just to check, I reinstalled Maple 2016.0 to an entirely fresh location, and then ran the 2016.1a upgrade installer obtained from here. After the upgrade installation completes, running the command-line interface (CLI, aka the TTY interface) of that installation works fine for me, and there kernelopts(version) reports Build ID 1133417. But for that installation I also see only libicuucmpl.so.56.1 and its two expected symlinks in its bin.X86_64_LINUX subdirectory.
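In case it helps, here is one way to check which libicuucmpl files a given installation contains, from within Maple itself (a small sketch, assuming the stock directory layout):

# List the libicuucmpl files in the binaries subdirectory of the running Maple
# (e.g. bin.X86_64_LINUX on 64bit Linux).
FileTools:-ListDirectory(kernelopts(':-bindir'), 'returnonly' = "libicuucmpl*");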

So I don't quite see how libicuucmpl.so.49.1.1 could have ended up in your Maple 2016.1a installation. I note that the 64bit Linux version of Maple 2015.2 did ship that particular binary object file. Is it possible that you somehow selected the wrong installation for the upgrade?

I am running Ubuntu 14.04.4 LTS, but I don't quite see how that would affect what the installer puts in place. (I suppose it's possible...)

If you are absolutely sure that you ran the 64bit Linux Maple 2016.1a upgrade installer against a Maple 2016.x installation location, and you still see that wrong binary, then I suggest you contact Technical Support.

acer

Do you mean that you want to use a concatenation of existing symbols from the current palettes as a new symbol that can be used for both 1D plaintext input and typeset 2D Input (including with its own new entry, say, in the Favorites palette)?

acer

This is a good question.

See also this old post by John May on the topic of subexpression labelling using the mechanism that Carl cites in his Answer.

The thread of vv's Answer is also worthwhile.

acer

@mmcdara That error message likely came from an attempt to assign to L[n] or some other entry of the list.

L:=[seq(0,i=1..100)]:
L[55]:=x: # succeeds

LL:=[seq(0,i=1..101)]:
LL[55]:=x;
Error, assigning to a long list, please use Arrays

The reason the kernel does this is that lists are not really mutable objects, and one cannot assign into a list and have it use the same memory space as before. Instead, assigning into a list entry creates a new list object. It's set up so that the name to which your list is assigned (say, L) is then reassigned with the new list, and this makes it appear as if lists were mutable. But the new list must get created and the old list must be garbage collected, which can affect performance. For large lists whose entries are assigned over and over again, the performance hit can be extreme. It's a bit awful (IMO) that the kernel allows list entry assignment even for small lists of length 100 or less, because it just fosters inefficient programming.

Basically, assigning into list entries can be terrible for performance. As Bad Practice goes, it's right up there with repeated list/set augmentation, e.g. L:=[op(L),new] .

In contrast, tables and rtables (the latter includes Array, Matrix, Vector) are mutable containers, and entries can be replaced "in place" without the container itself being made anew.
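For example, here is a rough timing sketch of that contrast (the size N is arbitrary; the actual timings will vary by machine):

restart;
N := 10^4:

# Repeated list augmentation: a brand new list is built on every iteration.
st := time():
L := []:
for i to N do L := [op(L), i] end do:
time() - st;

# Mutable Array: entries are assigned in place, with no copying of the container.
st := time():
A := Array(1 .. N):
for i to N do A[i] := i end do:
time() - st;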

@farahnaz You need to assign to filename the correct location of the file. Change it to wherever it is on your machine. It will be different from where it is on my machine. Something like,

filename := "C:/blah/blah/blah/16.3.xlsx";

Use forward slashes in that string.
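That string then gets used where the file is read in, e.g. something like the following (a minimal sketch, assuming the file is read with ExcelTools:-Import; the path shown is purely a placeholder):

# Read the spreadsheet from the (hypothetical) location assigned to filename.
filename := "C:/Users/you/Documents/16.3.xlsx":
M := ExcelTools:-Import(filename):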

Don't forget to respond to my comments about the weird "X" data values, and clarify your intentions.

In Maple versions 10.02 through Maple 2016.1 I get a result with

ax := 1: ay := 2: a := 0.5: b := 0.25:
expr := ax*cos(lambda)+ay*sin(lambda)-(a+b*lambda):
Student:-Calculus1:-Roots(expr, lambda=-2*Pi..2*Pi, numeric);

In Maple versions 10.02 and 11.02 the attempt without the numeric option emits an error message.

And you can just apply the max and min commands to the list returned from the Roots command.
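That is, something like this (a small sketch continuing from the code above):

R := Student:-Calculus1:-Roots(expr, lambda = -2*Pi .. 2*Pi, numeric):
max(R), min(R);   # largest and smallest of the returned roots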

[edited to remove speed commentary, thanks Markiyan]

 

@Carl Love Here's a difference between ~ and map. Note the datatype of the following return values. This is with the default of Digits=10 and UseHardwareFloats=deduced. I am not sure of the timing difference between map[evalhf] and ~ for a float[8] rtable and an operator that is evalhf'able and/or builtin.

map[evalhf](sin, Vector(3,1,datatype=float[8])): rtable_options(%,datatype);
                                                          float[8]

map(sin, Vector(3,1,datatype=float[8])): rtable_options(%,datatype);        
                                                          anything

sin~(Vector(3,1,datatype=float[8])): rtable_options(%,datatype);            
                                                          float[8]

Note the working precision used for this arithmetic operation, though.

restart;
map(`/`,Vector([2.0],datatype=float[8]),3.0);
                                                    [0.666666666600000]

`/`~(Vector([2.0],datatype=float[8]),3.0);
                                                    [0.666666666666667]

I suspect that the Library zip command invokes the undocumented kernel builtin named rtable_zip in the following float[8] case. So I suppose that for pairs of float[8] rtables it might be used directly.

zip(arctan,Vector([1],datatype=float[8]),Vector([2],datatype=float[8]));
                                                    [0.463647609000806]

rtable_options(%,datatype);
                                                          float[8]

rtable_zip(arctan,Vector([1],datatype=float[8]),Vector([2],datatype=float[8]));
                                                    [0.463647609000806]

rtable_options(%,datatype);
                                                          float[8]

Are you saying that you can evaluate f(x,y) reasonably quickly for all of some 30x30 x-y pairs? And then you want to find the volume under the surface (by interpolating those 30x30 f values at points required by a quadrature scheme)?

Furthermore, are you saying that when you use ArrayInterpolation to set up an f-hat (which interpolates f at just a single point, on demand) that it is too slow when you pass this to evalf(Int(...))?

ArrayInterpolation doesn't necessarily set up a whole interpolation scheme each time it's called. The basic schemes are precompiled in C, and the data is used to supply the numeric coefficients required by the spline scheme. What incurs some of the overhead is the creation of temporary rtables for input and output (and getting the data into the desired form). Some of that overhead can be alleviated by using module locals to get the single x-y point into the scheme, and the scalar result back, and by supplying the data in just the right form. I've found that this can make it a bit faster, but not super quick. I can dig up an example of that from my files, later in the day.
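For illustration, here is a rough sketch of the kind of on-demand interpolating procedure I have in mind (the 30x30 data and grid below are hypothetical stand-ins, not your actual f):

# Hypothetical 30x30 data on a uniform grid over [0,1]x[0,1].
xs := Vector(30, i -> (i-1)/29.0, datatype = float[8]):
ys := Vector(30, j -> (j-1)/29.0, datatype = float[8]):
A := Matrix(30, 30, (i,j) -> evalf(sin(xs[i])*cos(ys[j])), datatype = float[8]):

# fhat interpolates at a single x-y point, on demand.
fhat := proc(x, y)
  CurveFitting:-ArrayInterpolation([xs, ys], A, Matrix([[x, y]]), method = spline)[1];
end proc:

# Pass it to the numeric integrator, with a coarse accuracy target.
evalf(Int(fhat, [0 .. 1, 0 .. 1], epsilon = 1e-5, method = _cuhre));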

How accurate do you need the result? Were you able to try using the epsilon accuracy option of evalf(Int(..)) in your earlier attempt, for a coarser target accuracy? Did it help? What evalf(Int(...)) method got tried, do you know?

If you have the 30x30 data points computed quickly enough, then interpolating all x-y pairs on a finer full rectangular grid is then lightning fast. If your integrand is not singular in the domain then 2D Romberg integration might suffice. That has the virtue that it only needs a rectangular grid (and when you subdivide h->h/2 then most earlier evaluations just need to get re-weighted as squares' midpoints become corners, etc). So that's a route which avoids evalf(Int(...)) altogether, but of course one has to code the 2D Romberg (and make it refine if you want an error estimate from it, and make it adaptive if you want error-estimate-based selective subdivision...).
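For what it's worth, the all-at-once interpolation onto a finer regular grid looks something like this (reusing the hypothetical xs, ys, A from the sketch further above):

# Interpolate onto a finer 200x200 grid in a single call; the looping is all
# done in precompiled code, so this part is quick.
xf := Vector(200, i -> (i-1)/199.0, datatype = float[8]):
yf := Vector(200, j -> (j-1)/199.0, datatype = float[8]):
Z := CurveFitting:-ArrayInterpolation([xs, ys], A, [xf, yf], method = spline):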

Going back to an interpolating-on-demand procedure, if it's done really well then ideally it should be faster to handle even a lot of software-float scalar x-y points separately than evaluating some big 2D piecewise-spline expression at all those separate x-y pairs required by evalf(Int(...)) would be. It may be possible to dig a fast implementation out of the guts of dsolve/numeric. I've been meaning to try harder at that.

Any chance we could get the whole code, so as to not have to make something up?

 

@mstevens Your request in bold has at least one ambiguity. You state that you want it to "display the results in a+jb or mag <angle". If you mean that you'd want a setting that could be toggled, to switch the default return mode say, then please be clear and explicit. But otherwise, what did you intend by that "or"?

The code I wrote allows for the earlier parts of your bolded text, I think. But it currently returns in mag <angle form, and conversion of individual results to a+jb complex form requires an action (which I can offer as a right-click menu action too). I could also add a setting for the return mode.

It's important to note that there is a big difference between working with new objects at the surface level and changing how all of Maple returns or prints complex values. Even changing how Maple displays all complex numeric values would be tricky, because purely real values will be inherently problematic: on a case-by-case basis, within even a single expression, they might be wanted displayed as a phasor with zero angle or as a pure real. Phasors containing unknown variables are also inherently problematic.

Certainly objects (in the Maple language technical sense) aren't the only way to go about this. An alternative would be to have the <angle operator simply evaluate directly to a+jb form, for all operations to occur as usual, and for nonreal complex numbers to optionally display in <angle form (just a display thing -- the underlying value is still an explicit complex number). I considered that. It's doable. I did it with objects partly because I wanted to do something nontrivial with objects (that wasn't quaternions).
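As a rough illustration of that alternative (a hypothetical &angle neutral operator, with the angle taken in degrees; this is just a sketch, not the object-based code I posted):

# Hypothetical infix operator: evaluates immediately to an ordinary complex value.
`&angle` := (mag, ang) -> evalf(mag * exp(I * ang * Pi/180)):

z := 5 &angle 36.87;                        # roughly 4.0 + 3.0*I
abs(z), evalf(180/Pi * argument(z));        # back to magnitude and angle in degrees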

You are of course quite free to go your own way here.

I'm not sure if I made my view clear above, but I would really like to see Maple get in-situ or telescoping context-menu functionality. I first asked for it years ago. It would serve many good uses.

@mstevens My sheets show how to do almost all of that easily, IMNSHO.

It's true that the functionality is not built into stock Maple. Not all that surprising, as Maple has to support scientists and mathematicians as well as engineers. Often the needs of those groups are in opposition.

I can make Maple do many kinds of things. I suggest that you focus on telling me how you want my phasor objects (cited above) to be improved. I mean specifics. I am happy to help.

The objects I coded already act like phasors; don't be distracted by my caution above about phasor-phasor multiplication. You would be better off asking me to adjust the code than attempting it yourself, IMO.

One of the things I can do is add right-click context-menu items to change the representation mode (degree polar aka phasor, vs radian polar, vs explicit imaginary, etc). Those would return on a new line, though, when toggled. At present only numeric formatting and units can be adjusted/toggled "in situ", but not other right-click actions.

@mstevens I added phasor to the tags in that Question for which I cited my Answer above.

Note that "phasor" may be a slight misnomer for what I coded, in the following technical sense. I have a suspicion that two arbitrary phasors cannot just be multipled together, although scalar multiplication of a phasor is valid. Thecode I wrote does compute such a product of two of its objects (and you might be able to amend the static `*` export for that coded object in this regard, if so inclined...). So I suppose that technically it's more a degree-polar representation of a complex number than a phasor proper. Let me know if you run into problems.

@Carl Love Yes, I mean the garbage-collector of the JVM, which is distinct from Maple's kernel's garbage collector. The JVM garbage-collector does memory management of objects in the GUI's (JVM's) memory space.

Something like a PLOT3D Maple structure is stored internally in the Maple kernel as a DAG, when constructed via a Maple command. But when "printed" by a Maple command the kernel sends it to the GUI, which stores it in its own internal representation. Such objects in the JVM are managed by the JVM's own memory management system (the JVM's garbage collector), and that has little-to-nothing to do with whether the Maple kernel still has a reference to its own internal DAG representation. They are quite separate things.

If you look at this tutorial (in particular the "Key Hotspot Components" in the "Exploring the JVM Architecture" tab) you can find where it states, "The heap is where your object data is stored. This area is then managed by the garbage collector selected at startup. Most tuning options relate to sizing the heap and choosing the most appropriate garbage collector for your situation."

In some older versions of Maple, depending on the JRE version in use, it was sometimes better for Maple's Java GUI's performance to allocate a larger JVM heap. That could sometimes be done with an edit to Maple's GUI's initialization file (not the user's Maple language initialization file, confusingly known by the same kind of name). But note that, with some versions, the JVM wouldn't start if this was set too high (see also Search results on Mapleprimes for keyword maxheap). I don't advise anyone to mess with that stuff unless their GUI doesn't launch with default settings.

If I recall correctly, Maple 2016 uses "Java 8", i.e. some version 1.8.xx. On Windows 7 and 64bit Linux I can execute the following 1D plaintext Maple command (if system calls are enabled in the GUI Options) from within my Maple 2016.1 to determine the Java version in use.

system(cat(kernelopts(':-mapledir'),"/jre",
    `if`(kernelopts(':-platform')="windows",
         "/bin/java.exe",
         cat(".",StringTools:-SubstituteAll(kernelopts(':-system')," ","_"),
             "/bin/java")),
    " -version")):

That shows something like java version "1.8.0_66" in my 64bit Linux Maple 2016.1, while it showed java version "1.6.0_45" in my 64bit Linux Maple 2015.2.

I suspect that the switch from Java 6 to 8 has improved GUI performance, and some of that may [1, 2] be due to improved JVM garbage collection.

@Markiyan Hirnyk I thought that you'd be interested in a shorter example (of your bivariate example with the spurious solution of z=0).

From your problematic example above I first got here:

restart;                
ee := x-(x*(x-z))^(1/2):
solve( ee, {z} );       
                                                          {z = 0}

eval(ee, %);            
                                                      x - (x^2)^(1/2)

And from there I got to my univariate example above.

For anyone finding this Question by searching, a variation on using the caption option, with color and multiple lines, is here.

@Markiyan Hirnyk  hmm

solve( x - (x^2)^(1/2) );
                                        x