MaplePrimes Activity


These are replies submitted by acer

@Axel Vogt It may be that inttrans:-fourier is splitting it, and then running into failure:

int(erf(x)*exp(-I*x*k),x = -infinity .. infinity):
lprint(%);
  -2*I*exp(-1/4*k^2)/k

int(erf(x)*cos(-I*x*k),x = -infinity .. infinity);
                     0

I*int(-erf(x)*sin(-I*x*k),x = -infinity .. infinity):
lprint(%);
  I*int(I*erf(x)*sinh(x*k),x = -infinity .. infinity)

The integrals involving erf(x)*cos(-I*x*k) and erf(x)*exp(-I*x*k) are handled successfully by method=meijergspecial.
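For reference, that method can be requested directly in the int call, using the option exactly as named above, e.g.

int(erf(x)*cos(-I*x*k), x = -infinity .. infinity, method = meijergspecial);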

This is an engaging read.

Readers who like this might also be interested in these old posts by John May: 1, 2, 3, 4, 5, 6.

@Magma I do not have access at this moment to a machine with enough RAM to complete your large example.

However, I have noticed something troubling about the performance on your smaller example (which I included in my first Comment), as measured by CodeTools:-Usage with 10 iterations.

Maple 2015.2 (or earlier, approx.)
Your BP:
memory used=41.43MiB, cpu time=229.70ms, real time=232.80ms, gc time=12.40ms
Carl's modified BP:
memory used=12.47MiB, cpu time=97.80ms, real time=97.80ms, gc time=4.41ms

Maple 2016.0 (or later, approx.)
Your BP:
memory used=193.62MiB, cpu time=1.03s, real time=990.50ms, gc time=96.95ms
Carl's modified BP:
memory used=152.51MiB, cpu time=783.60ms, real time=757.30ms, gc time=76.69ms
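
For the record, the figures above come from calls of this form, where BP and BPcarl stand for your procedure and Carl's modification, and M for the smaller example's input (the names here are just placeholders, not the exact session):

CodeTools:-Usage(BP(M), iterations = 10);
CodeTools:-Usage(BPcarl(M), iterations = 10);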

@Magma In order to get Carl's revised BP to behave the same (on the earlier example A) as your procedure BP in Maple 15, all I had to do was modify one call to seq so that it does not use the newer calling sequence seq(U).

Otherwise I simply copied and pasted as 1D input.
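
The change is of this flavor (U here is just an illustrative placeholder, not the actual container in Carl's code):

U := [a, b, c]:      # placeholder container, for illustration
# newer single-argument calling sequence (not accepted by Maple 15):
seq(U);
# an older equivalent, accepted by Maple 15:
seq(u, u in U);      # (for a range U, the older form would be seq(i, i = U))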

BP_15.mw

Did you have another input example that was problematic?

@Christopher2222 In my opinion the iPhone is a far more interesting target than the Fire for an Arm port.

The Fire also makes it difficult to install arbitrary programs as applications. That also applies (at present) to an x86-64 based Chromebook, although there is some mitigation for those who run a terminal shell in developer mode or who re-install with Ubuntu, etc.

Another Arm platform that might interest some is the Raspberry Pi.

On a related topic, I am hoping that the worldwide cloud/thin-client craze will peter out, and that something as locally useful as plugins will re-emerge.

@Kitonum Indeed, all methods can suffer from the problem of prior uniquification by the kernel. Your last edit is another example of what I was showing: forcing the order in which the variables are used to complete the square is enough (but it will still succumb if the unwanted form has been constructed previously).

Unfortunately I don't know how sort could be used to forcibly re-order the terms (as for some similar problems) because the target is not expanded.

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10:
CompleteSquare(P, [y,x]);

(x-1)^2+(y-1/2)^2-13/4 = 10

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10:
(y-1/2)^2+(x-1)^2-13/4=10:
CompleteSquare(P, [y,x]);

(y-1/2)^2+(x-1)^2-13/4 = 10

 

Download zw2.mw

@Kitonum Your approach is not generally reliable -- it doesn't work if the alternate form has been formed beforehand in the same session.

Given the above weakness (to which Carl also alluded), there are easier ways to get the same effect.

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10;
P1 := lhs(P);
A := CompleteSquare(P1, x);
op(1,A)+CompleteSquare(A-op(1,A), y)=rhs(P);

x^2+y^2-2*x-y-2 = 10

x^2+y^2-2*x-y-2

(x-1)^2+y^2-y-3

(x-1)^2+(y-1/2)^2-13/4 = 10

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10;
(y - 1/2)^2 + (x - 1)^2 - 13/4 = 10: # zwischenzug
P1 := lhs(P);
A := CompleteSquare(P1, x);
op(1,A)+CompleteSquare(A-op(1,A), y)=rhs(P);

x^2+y^2-2*x-y-2 = 10

x^2+y^2-2*x-y-2

(x-1)^2+y^2-y-3

(y-1/2)^2+(x-1)^2-13/4 = 10

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10;
CompleteSquare(CompleteSquare(P,y),x);

x^2+y^2-2*x-y-2 = 10

(x-1)^2+(y-1/2)^2-13/4 = 10

restart;
with(Student:-Precalculus):
P := x^2 + y^2 - 2*x - y - 2 = 10;
(y - 1/2)^2 + (x - 1)^2 - 13/4 = 10: # zwischenzug
CompleteSquare(CompleteSquare(P,y),x);

x^2+y^2-2*x-y-2 = 10

(y-1/2)^2+(x-1)^2-13/4 = 10

 

Download zwisch.mw

@vv The persistence of those plot labels (up until output removal) in your example is intentionally designed GUI behavior, as I explained previously in my Answer. I do not much like that behavior, but it's neither new nor very mysterious.

Yet what is indeed buggy is that, in the OP's original example, axis labels from one plot got applied to another within the same execution group.

I conjecture that this might be an overlooked corner case in the logical flow of the GUI code that handles "plot persistence". It might also relate to the use of semicolons on assignment statements involving plots.

@mmcdara That's quite a different mechanism.

Plot inheritance (aka plot persistence) affects output of single execution groups (or document blocks) independently. It consists of the GUI automatically re-applying certain display qualities from a prior plotting output upon re-execution of the very same group/block. The effect does not transfer between groups/blocks. It can be cleared by removal of the output of the particular group/block.

What you're describing now is that interface settings are managed by the GUI (the I in GUI stands for Interface) and, within a GUI session, are not reset by a restart. That affects multiple groups/blocks together.

It is somewhat related to the unfortunate fact that calls to interface should go in a separate group/block from calls to restart.
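
A minimal sketch of that separation (rtablesize is just an illustrative setting):

restart;                        # in its own execution group

# in a separate, subsequent execution group:
interface(rtablesize = 20);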

Are you trying to say that you want to avoid the overhead of three calls to Transpose (or any kind of copying)? That is, are you trying to avoid any overhead incurred by, say,
    Transpose(LinearSolve(A^%T,b^%T),inplace) ?

How large is your Matrix? Are the entries all floats? Is b always a Vector? Does the Matrix have an indexing function (i.e., a shape)?

Are you looking for a convenient syntax, or is high efficiency a primary concern?

In the floating-point case, are you concerned with possibly less accuracy from b.A^(-1)?
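
For concreteness, here is a small hypothetical instance of the pattern mentioned above, for solving x . A = b with a float Matrix (the names, sizes, and entries are just for illustration):

with(LinearAlgebra):
A := Matrix(3, 3, (i,j) -> evalf(1/(i+j)), datatype = float[8]):
b := Vector[row](3, i -> evalf(i), datatype = float[8]):
x := Transpose(LinearSolve(Transpose(A), Transpose(b))):
Norm(x . A - b);   # should be (close to) zero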

Here is a simpler example with only a single plot rather than an animation.

In the Classic interface of Maple 2019.0 (32bit Windows) the PLOT3D structure that contains two LIGHT substructures will render using both light sources together.

In the Standard GUI the renderer only uses one of the light sources. As far as I know this has always been the case.

When the plotting commands were retrofitted to use the new keyword handling (?paramprocessing), any arguments of the form light=[...] got the new handling, so that only the last instance gets utilized. This change came some time after Maple 7. Prior to that, multiple instances of the light=[...] option were all utilized, and the resulting PLOT3D structure contained the corresponding multiple LIGHT substructures (but only Classic would utilize them in rendering).

In summary:
1) The Standard GUI renders using only one of any LIGHT substructures found, even when there are several.
2) The Classic GUI (32bit Maple 2019 on MS-Windows) can still render using multiple LIGHT substructures, if they are present.
3) The parameter-processing of the plotting commands now ignores all but the last light=[...] keyword option, so there is no longer any convenient way to construct a 3D plot that contains multiple LIGHT substructures.

Here's an example that I did in Maple 2019.0, 32bit Classic on Windows 7. I have replaced the images, so that it renders here as it does in Classic. (In the Standard GUI the first plot, which contains both LIGHT substructures, renders the same as the third plot. The default orientation may differ from Classic, but that's not the point.)

restart;
with(plots): with(plottools):
P := display(sphere([0,0,0],1,color=gray,style=surface)):
L1 := LIGHT(90, 40, 1, 0.5, 0.8):
L2 := LIGHT(90, -80, 1, 0.2, 0.1):

op(0,P)(op(P),L1,L2);


op(0,P)(op(P),L1);


op(0,P)(op(P),L2);


 

Download multiplelights.mws

@sand15 I recommend getting Maple 2019.

It comes bundled with LLVM.

For the very best experience in terms of stability and performance, I'd also recommend running it (or any modern version) on Linux.

@mmcdara For fun, here is a version using Threads:-Seq and a few other storage twists.

Here I get 602.30ms of real time for the Box-Muller version, versus 466.70ms for the default Sample method and 2.31s for Sample's envelope method, at N=10^7.

CodeTools:-Usage(Statistics:-Sample(Normal(0,1), 10^7, method=envelope), iterations=10):
memory used=95.36MiB, alloc change=0.52GiB, cpu time=2.33s, real time=2.31s, gc time=21.52ms

CodeTools:-Usage(BoxMuller_ac3(10^7), iterations=10):
memory used=79.96MiB, alloc change=0.78GiB, cpu time=938.50ms, real time=602.30ms, gc time=0ns

CodeTools:-Usage(Statistics:-Sample(Normal(0,1), 10^7), iterations=10):
memory used=76.31MiB, alloc change=0.75GiB, cpu time=466.30ms, real time=466.70ms, gc time=0ns

I was disappointed with the speedup under Threads:-Seq, though I did not try to find the optimal splitting length, given that the overhead can swamp the gains if too many threads are used for tasks that are too small.

Using Threads:-Task and binary splitting seems a bookkeeping headache, although perhaps a spot in each subvector could store its length and the number of relevant entries.

Also, as written these procedures could actually return more than N values, although it would not be much effort to restrict this efficiently to at most N.
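
For reference, here is a minimal serial sketch of the Box-Muller transform itself; the procedure name is illustrative, and this is not the threaded BoxMuller_ac3 revision discussed above.

BoxMuller_sketch := proc(N::posint)
  local n, U1, U2, R, i, tp;
  n := iquo(N+1, 2);                      # number of uniform pairs needed
  tp := evalf(2*Pi);
  U1 := Statistics:-Sample(Uniform(0,1), n);
  U2 := Statistics:-Sample(Uniform(0,1), n);
  R := Vector[row](2*n, datatype=float[8]);
  for i to n do
    R[2*i-1] := sqrt(-2.0*ln(U1[i]))*cos(tp*U2[i]);
    R[2*i]   := sqrt(-2.0*ln(U1[i]))*sin(tp*U2[i]);
  end do;
  R[1..N];                                # trim to exactly N entries
end proc:

Its timing can be compared against Statistics:-Sample(Normal(0,1), N) with CodeTools:-Usage in the same way as above.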

[edit] Apparently I forgot to attach the revision. I'll do that later today.

[edit] I also realize that another 5-10% of the time (and some memory overhead) can be shaved off by forming result R with exactly N entries, and quitting the loop early once N results are attained, etc. I will revise this evening.

@Carl Love I think that you're right, in the sense that it's not clear whether the OP's example is input or the result of some earlier computations. Since the OP asked just last week about substituting into the result of a series call, this Question is less clear than it could be.

@mmcdara Ok, so you are getting 64ms for N=10^5 with _ac2 under evalhf mode. That's a good start.

On 64bit platforms recent versions of Maple are supposed to use the LLVM compiler by default (unless the inmem=false option is passed to Compile, in which case it tries to use gcc from the OS). The LLVM compiler is bundled with the installation of Maple itself.

If your Maple version is "older" you might have to install gcc on your OSX (or Linux) box. It's free.

I don't recall which was the first Maple version to ship with LLVM bundled on the 64bit installs.

If you cannot get the Compiler to work in any version, including 2019.2, say, then there must be some internal problem that prevents it from working. It'd be nice to get this resolved for you.
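
If it helps with the diagnosis, a trivial (illustrative) test of whether the Compiler is functional is:

p := proc(x::float)::float; x^2 + 1.0; end proc:
cp := Compiler:-Compile(p):      # add inmem=false to force the gcc route instead
cp(2.0);                         # should return 5.0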
