tomleslie



These are answers submitted by tomleslie

As in the attached, which for some reason is failing to display inline here :-(
 

Download solveSYS.mw

The following works

restart;
interface(rtablesize=10):
data:="x:='x';y:='y';sol:=dsolve(diff(y(x),x)=1,y(x));";
eval~(parse~(StringTools:-Split(data, ";")));

"x:='x';y:='y';sol:=dsolve(diff(y(x),x)=1,y(x));"

 

[x, y, y(x) = x+_C1]

(1)

 

Download trickPar.mw

 

Within a procedure, one should use

uses plots;

rather than

with(plots);

So the attached will work.

For future reference, you should upload code in editable/executable form using the big green up-arrow in the MaplePrimes toolbar, rather than posting 'pictures' of code which have to be retyped.

  restart;
  bezier2:= proc(p0, p1, p2, p3)
                 uses plots:
                 local z:
                 z:= (r, s)-> p0[s]*(1-r)^3+3*p1[s]*r*(1-r)^2+3*p2[s]*r*(1-r)+p3[s]*r^3:
                 spacecurve( [z(t,1), z(t,2), 0],
                             t=0..1,
                             axes=boxed,
                             color=blue
                           );
          end proc:
  bezier2( [2,5], [3.5,4], [3.5,1], [2,0]);

 

interface(rtablesize=10):

 

Download doPlot.mw

I've never been too impressed by the RealDomain() package, but for this problem it isn't really necessary, because the following will work.

restart;
sol:= solve({x^2+y^2+z^2-4*x+6*y-2*z-11 = 0, 2*x+2*y-z = -18}):
assume(x::real, y::real, z::real);
yv:= solve(Im~(allvalues~(sol))):
`union`(allvalues~(eval(sol, yv[]))[2..3],yv);

The attached code does not really address any of the (somewhat) odd behaviour which has been reported by the OP in this thread. It is an attempt to produce code which is somewhat more flexible and efficient than that which I originally supplied.

  1. The "flexibility" improvement is that it will automatically determine the optimal(?) number of parallel tasks to run, based on the value returned by kernelopts(numcpus) on the executing machine. It will automatically determine the correct parameters for each of the parallel tasks. It handles the case where the size of the problem does not divide exactly by the number of parallel tasks, by making the final task in the parallel set (marginally) longer than the others.
  2. The "efficiency" improvement (if any) is obtained by using elementwise operation, zip() and map() functions to replace seq() functions whereever possible.

The "flexibility" stuff is a benefit (easier to understand/maintain) and probably of benefit when running on machines whose core count is differetn from mine.

The "efficiency" stuff was a bit of  waste of time, since execution performance is pretty much what I obtained before :-(

Anyhow, for what it is worth, check the attached.

#
# Using the fastest sequential code I came up with
#
  restart;
  nelems:= 10000000:
  n:= 374894756873546859847556:
  #n:= 700;
  A:= Array(1 .. 4, 1 .. nelems):
  st:= time[real]():
  A[1,..]:=  Array( 1..nelems,
                    i->i^10*n
                  ):
  A[2,..]:= length~(A[1, ..])-1:
  A[3,..]:= zip( (i,j)-> iquo(i, 10^(j-2)),
                 A[1,..],
                 A[2,..]
               ):
  A[4,..]:= map(irem, A[1,..], 1000):
  time[real]() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

53.697

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(1)

  restart;
#
# Parallelize the above across numcpus tasks
#
  nelems:= 10000000:
  n:= 374894756873546859847556:
  st:= time():
#
# Procedure which is executed in each of the
# parallel tasks
#
  doTask:= proc( sv, nv)
                 local r:= nv-sv+1,
                       B:= Array(1..4,1..nv-sv+1),
                      i :
                 B[1,..]:= Array(1..r,i->(sv-1+i)^10*n):
                 B[2,..]:= length~(B[1, ..])-~1:
                 B[3,..]:= zip( (i,j)-> iquo(i, 10^(j-2)),
                                B[1,..],
                                B[2,..]
                              ):
                 B[4,..]:= map(irem, B[1,..], 1000):
                 return B;
           end proc:
#
# Procedure which is executed after all parallel
# tasks are complete to "assemble" all of their
# results into a single output
#
  contTask:= proc( )
                  return ArrayTools:-Concatenate(2, _passed ):
             end proc:
#
# Procedure which figures out how many parallel tasks
# should be run, what the arguments for these tasks
# should be, and what should be run when all parallel
# tasks have completed
#                  
  setupTask:= proc(n, nc)
                   local i, rng:=floor(n/nc):

                   Threads:-Task:-Continue
                   ( contTask,
                     seq( Task=[doTask, (i-1)*rng+1, i*rng ],
                          i=1..nc-1
                        ),
                          Task=[doTask, (nc-1)*rng+1, n ]
                   );

              end proc:

  st:= time[real]():
  A:= Threads:-Task:-Start( setupTask,
                            nelems,
                            kernelopts(numcpus)
                          ):
  time[real]() - st;
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

25.363

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(2)

interface(rtablesize=10):

 

Download speedUP4.mw

Something like the following:

  restart:
#
# ignore this command: only needed to make
# worksheets display properly on the
# MaplePrimes website
#
  interface(rtablesize=10):
#
# Load necessary packages
#
  with(LinearAlgebra):
  with(CodeTools):
#
# make sure that truly random matrices are being
# generated each time this code is executed. If
# the same result is required each time the code
# is executed, comment out the following line
#
  randomize():
#
# Specify the number of times that the calculation
# is done for each matrix size. NB a different
# matrix of the same size will be used for each
# iteration
#
  numiters:=10:
#
# Do the calculation recording the matrix size
# and the average cpu time taken for each matrix
# size. Average is taken over 'numiters' iterations
#
  seq( [i, CPUTime
           ( Eigenvectors
             ( RandomMatrix(i) ),
             iterations=numiters
           )[1]
       ],
       i=2..9
     );

[2, 0.1400000000e-1], [3, 0.3280000000e-1], [4, .2558000000], [5, 0.6710000000e-1], [6, .1217000000], [7, .2168000000], [8, .3916000000], [9, .6489000000]

(1)

 


 

Download cpuIter.mw

#
# Define the size of the (assumed square)
# matrix
#
  n:=20:
#
# Generate a random nXn matrix
#
  m:=LinearAlgebra:-RandomMatrix(n,n):
#
# Multiply its diagonal elements
#
  mul( m[i,i], i=1..n);

Maple 7 was superseded seventeen years ago, so I doubt if anyone here still has a copy: I certainly don't! I only have Maple releases going back five years, so I can't test anything for your release - just make some suggestions which you can try

  1. If the [$65..95] construct is causing a problem - don't use it. The construct [seq(i, i=65..95)] *ought* to do the same thing.
  2. Although the convert(...., `bytes`) command appears to work, I notice from the Maple update history that the StringTools package was introduced in Maple 7, and this contains specific commands for converting integers to letters and vice versa. Thus the construct [seq(StringTools[Char](i), i=65..95)] *ought* to do exactly the same as convert( [$65..95], 'bytes' ). And possibly without the "Error" message?! (See the short sketch after this list.)
  3. The warning messages about the redefinition of "changecoords" and "arrow" are not caused by anything in this worksheet. I can only assume that these arise because of code you have in your Maple initialisation file (maple.ini). If you want to get rid of these warning messages you will need to locate/edit this initialisation file. Depending on your installation, the relevant file could be in a variety of locations. Try typing "?worksheet,reference,installation" or "?maple.ini" (without the quotes) at the Maple prompt and you ought to get some idea where your maple.ini file is located.
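As a quick sanity check of points (1) and (2), here is a minimal sketch (run in a recent Maple release, so behaviour in Maple 7 may differ). Note that convert/bytes returns a single string, whereas the StringTools construct returns a list of one-character strings, which can be joined with cat() if a single string is wanted:

  #
  # Same character codes, built without the [$65..95] construct
  #
  s1:= convert( [seq(i, i=65..70)], 'bytes' );         # "ABCDEF"
  s2:= cat( seq( StringTools[Char](i), i=65..70 ) );   # "ABCDEF"
  evalb( s1 = s2 );                                    # true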

     

If I take the fastest code I had in my earlier answer and increase the size of the problem by 10X to nelems=10000000, then this sequential code takes about 55 secs on my machine (compared with ~6 secs for nelems=1000000), so execution time looks pretty linear with problem size.

If I parallelize the code into 4 tasks, then for the same size problem (i.e. nelems=10000000) the execution time drops to ~23 secs on my machine. (The parallelized code is a bit rough - I was in a hurry, sorry.)

So a further 2.5X speed-up.

Check the attached - but be aware that the total execution time is about 85 secs (on my machine).

#
# Using the fastest sequential code I came up with
# with nelems=10000000. Takes about 55 secs on my
# machine
#
  restart;
  nelems := 10000000:
  n := 374894756873546859847556:
  op(n):
  A := Array(1 .. 4, 1 .. nelems):
  length(n):
  st := time[real]():
  A[1,1..nelems]:= Array([seq( i^10*n, i=1..nelems)]):
  A[2,1..nelems]:= Array([seq( length(A[1, i])-1, i=1..nelems)]):
  A[3,1..nelems]:= Array([seq(iquo(A[1,i], 10^(A[2, i]-2)),i=1..nelems)]):
  A[4,1..nelems]:= Array([seq(irem(A[1,i],1000),i=1..nelems)]):
  time[real]() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

55.113

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(1)

#
# Parallelize the above across four tasks, Takes about
# 23 secs on my machine, so a speed up of 2.5X on the
# above sequential code
#
  restart;
  nelems := 10000000:
  n := 374894756873546859847556:
  op(n):
  length(n):
  st := time():
  doTask:=proc(sv, nv)
               local rng:=nv-sv+1,
                     B:=Array(1..4,1..nv-sv+1),
                     i :
               B[1,1..rng]:= Array([seq( (sv-1+i)^10*n, i=1..rng)]):
               B[2,1..rng]:= Array([seq( length(B[1, i])-1, i=1..rng)]):
               B[3,1..rng]:= Array([seq(iquo(B[1,i], 10^(B[2, i]-2)),i=1..rng)]):
               B[4,1..rng]:= Array([seq(irem(B[1,i],1000),i=1..nv-sv+1)]):
               return B;
         end proc:
  time() - st;
  contTask:=proc( V,W,X,Y )
                  return copy(ArrayTools:-Concatenate(2, V, W, X, Y)):
           end proc:
                  
  setupTask:=proc(n)
                  Threads:-Task:-Continue
                  ( contTask,
                    Task=[doTask, 1, n/4 ],
                    Task=[doTask, n/4+1, n/2],
                    Task=[doTask, n/2+1, 3*n/4],
                    Task=[doTask, 3*n/4+1, n]
                  );
            end proc:

  st := time[real]():
  A:=Threads:-Task:-Start( setupTask, nelems):
  time[real]() - st;
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

 

0.

 

24.836

 

3748943819789586888963018855788947266280687433550338920856680612859778836854072888791259847556

 

93

 

374

 

556

(2)

interface(rtablesize=10);

[10, 10]

(3)

 

Download speedUP2.mw

 

 

 

I'm pretty sure that any Matrix initialiser function which takes two arguments can be used.

For example, the following

#
# Consider an initialiser function
# which takes two arguments
#
  f:=(x,y)->x+y;
  Matrix(3, 3, f);

will "work".

When you use eg `+`, this is just the "prefix form" of the + operator, and will accept two (or more) arguments. So for example

`+`(1,2,3,4,5,6);

will correctly return 21.
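Putting the two observations together, here is a small sketch showing that the prefix form `+` can itself be passed as the two-argument initialiser, so that the entry in row i, column j is simply i+j:

  #
  # Equivalent to Matrix(3, 3, (i,j)->i+j)
  #
  Matrix(3, 3, `+`);

This produces the same Matrix as the explicit f:=(x,y)->x+y example above.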

I suspect that this is a version issue, because I see the same problem in my Maple 2018.2 installation - see the attached.

It seems to be a "bug" in odetest(), which got "fixed" with the release of Maple 2019

Using Maple 2018.2, one can check validity without using odetest(). Just substitute the proposed solutions into the original ODEs/boundary conditions. This confirms that the solutions are OK, so the problem must be with odetest().

See the attached.

#
# Initialize and show version: possibly a Maple
# version issue with this problem
#
  restart:
  kernelopts(version);
  Physics:-Version();

`Maple 2018.2, X86 64 WINDOWS, Nov 16 2018, Build ID 1362973`

 

"C:\Users\TomLeslie\maple\toolbox\2018\Physics Updates\lib\Physics Updates.maple", `2019, March 14, 23:22 hours, version in the MapleCloud: 341, version installed in this computer: 326.`

(1)

#
# Using lists rather than sets tests two solutions
# and four BCs, so odetest() should return 6 zeros
# if everything is good - but it doesn't.
#
# Last two entries in the output list are non-zero,
# suggesting that the problem arises with the final
# two boundary conditions
#
  laplace := -phiL(z) + diff(phiL(z),z$2)=0,
             -phiM(z) + diff(phiM(z),z$2)=0:

  BCs:=  phiL(d1)=0,
         phiM(-d1)=0,
         phiL(0)=phiM(0),
         D(phiL)(0)-D(phiM)(0)=-n:

  sol:= [ phiM(z) = n/2/coth(d1)*(cosh(z)+coth(d1)*sinh(z)),
          phiL(z) = n/2/coth(d1)*(cosh(z)-coth(d1)*sinh(z))
        ]:

  odetest(sol, [laplace, BCs]);

[0, 0, 0, 0, (1/2)*(2*phiL(0)*coth(d1)-n)/coth(d1), (D(phiM))(0)-(1/2)*n]

(2)

#################################################
# Test the ode system with the obtained solutions.
#
# Should give 0=0, 0=0, which it does!
#
  eval( laplace, sol);
#################################################
# Test the first BC, phiL(d1)=0 with the relevant
# solution.
#
# Should result in 0, which it does!
#
  simplify
  ( eval
    ( eval
      ( phiL(z), sol),
      z=d1
    ),
    trig
  );
##################################################
# Test the second BC phiM(-d1)=0 with the relevant
# solution.
#
# Should result in 0, which it does!
#
  simplify
  ( eval
    ( eval
      ( phiM(z), sol),
      z=-d1
    ),
    trig
  );
##################################################
# Test the third BC phiL(0)=phiM(0) with the
# relevant solutions.
#
# Should result in 0, which it does!
#
  simplify
  ( eval
    ( eval
      ( phiL(z), sol),
      z=0
    ),
    trig
  )-
  simplify
  ( eval
    ( eval
      ( phiM(z), sol),
      z=0
    ),
    trig
  );
###############################################
# Test the fourth BC D(phiL)(0)-D(phiM)(0)=-n
# with the relevant solutions.
#
# Should result in -n, which it does!
#
  eval
  ( diff
    ( eval
      ( phiL(z), sol),
      z
    ),
    z=0
  )-
  eval
  ( diff
    ( eval
      ( phiM(z), sol),
      z
    ),
    z=0
  );

0 = 0, 0 = 0

 

0

 

0

 

0

 

-n

(3)

 

Download odeTest2.mw

 

  1. Simple problem - you use sets rather than lists for both the ode system and the solution functions. Sets do not maintain the 'order' in which you enter elements; in fact the elements will be sorted (more-or-less) lexicographically. This will cause the arguments to the odetest() command to be presented in a slightly different order - so the output may(?) be in a (correspondingly?) different order. Since you have two solutions and four boundary conditions, there ought to be six outputs (ideally all zero). If you use sets as arguments to odetest(), then these six outputs will (ideally) collapse to a set with the single element 0, since sets cannot contain duplicate elements. Since you are getting three outputs, this implies that you are getting four zeros and two non-zeros: the set container collapses the four zeros to a single zero, so you get an output set of three elements (see the small illustration after this list). I recommend that you use list arguments rather than sets, as in the attached: this will maintain all six (hopefully zero) outputs, and if anything is non-zero you can work out which solution/BC "failed".
  2. I'm restricted in identifying which two solutions/BCs are causing issues because, as the attached shows, using either sets or lists I get either {0} or [0,0,0,0,0,0], indicating that odetest() thinks that everything is fine. This may be a version issue. What version are you running? Check by using the same commands I used at the top of the attached.
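A small illustration of the set-collapsing behaviour described in point (1) above: six test results, four of which are zero, collapse to three elements in a set, but are all retained in a list:

  {0, 0, 0, 0, a, b};      # returns {0, a, b}
  [0, 0, 0, 0, a, b];      # returns [0, 0, 0, 0, a, b]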


For some reason this worksheet won't display here - no idea why: you will have to download to view

Download odeTest.mw

The attached performs your calculation in three different ways: your original code, then two "improved" versions, all for nelems=1000000, which is long enough to make the timings more-or-less reliable, but short enough that I don't get bored waiting. Before considering parallelization, I think it is important to make the "simple" code as fast as possible.

The overall speed-up in the attached is about 5X. I may get a chance to look at improvements from parallelization later.

#
# OP's original - takes ~35secs for nelems=1000000
# on my machine
#
  restart;
  nelems := 1000000:
  n := 374894756873546859847556:
  op(n):
  A := Array(1 .. 4, 1 .. nelems):
  length(n):
  st := time():
  for i to nelems do
      A[1, i] := i^10*n;
      A[2, i] := length(A[1, i]) - 1;
      b := convert(A[1, i], string);
      A[3, i] := parse(b[1 .. 3]);
      A[4, i] := parse(b[-3 .. -1]);
  end do:
  time() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

37.643

 

374891007942848343450975036111986883129568134169301890673940708132428295071299847556

 

83

 

374

 

556

(1)

#
# Simple code rewrite. Gives about 4X speed-up.
# Takes about 9.5secs for nelems=1000000 on my
# machine
#
  restart;
  nelems := 1000000: # ~ takes about 50mins when n=100,000,000
  n := 374894756873546859847556:
  op(n):
  A := Array(1 .. 4, 1 .. nelems):
  length(n):
  st := time():
  for i to nelems do
      A[1, i]:= i^10*n;
      A[2, i]:= length(A[1, i]) - 1;
      A[3, i]:= iquo(A[1,i], 10^(A[2, i]-2));
      A[4, i]:= irem(A[1,i],1000);
  end do:
  time() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

9.656

 

374891007942848343450975036111986883129568134169301890673940708132428295071299847556

 

83

 

374

 

556

(2)

#
# Use seq() commands rather than for-loop. Gives
# about 30% further speed-up. Takes about 6.5secs
# for nelems=1000000 on my machine
#
  restart;
  nelems := 1000000:
  n := 374894756873546859847556:
  op(n):
  A := Array(1 .. 4, 1 .. nelems):
  length(n):
  st := time():
  A[1,1..nelems]:= Array([seq( i^10*n, i=1..nelems)]):
  A[2,1..nelems]:= Array([seq( length(A[1, i])-1, i=1..nelems)]):
  A[3,1..nelems]:= Array([seq(iquo(A[1,i], 10^(A[2, i]-2)),i=1..nelems)]):
  A[4,1..nelems]:= Array([seq(irem(A[1,i],1000),i=1..nelems)]):
  time() - st;
  A:
  A[1, -2];
  A[2, -2];
  A[3, -2];
  A[4, -2];

6.303

 

374891007942848343450975036111986883129568134169301890673940708132428295071299847556

 

83

 

374

 

556

(3)

interface(rtablesize=10);

[10, 10]

(4)

 

Download speedUP.mw

 

Note that before you edited/removed the pictures from your textbook, the first example had the initial conditions tau=0, prey(0)=40, pred(0)=16, although the variable names 'pred' and 'prey' were given as 'x' and 'y'.

The attached reproduces the first few graphs from your textbook piccies with these conditions.

Observation

In a predator/prey model, the numbers of both quantities should be integers, and you should really be using difference equations rather than differential equations: non-integer numbers of either predator or prey do not make sense - unless you can define something like (say) 0.1 of a lion!! (A minimal sketch of such a discrete update is given below.)
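By way of illustration, here is a minimal sketch of what a discrete-step (difference-equation) predator/prey update might look like. The rate constants a, b, c, d below are purely made-up values (not taken from the OP's textbook), and round() is used simply to keep both populations integer-valued:

  #
  # Hypothetical rate constants - for illustration only
  #
  a:= 0.1:  b:= 0.002:  c:= 0.0025:  d:= 0.2:
  prey[0]:= 40:  pred[0]:= 16:
  for k from 0 to 9 do
      prey[k+1]:= round( prey[k] + a*prey[k] - b*prey[k]*pred[k] );
      pred[k+1]:= round( pred[k] + c*prey[k]*pred[k] - d*pred[k] );
  end do:
  seq( [prey[k], pred[k]], k=0..10 );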

I suppose that if the numbers of both predator and prey are "sufficiently large", then using continuous variables may be an acceptable(?) approximation.

However in your revised post you start with pred(0)=1 and prey(0)=1. This fails the "sufficiently large numbers" test mentioned above. It is also (logically) nonsense because

with prey(0)=1, no reproduction is possible. Even without a predator, the number of prey would stay at 1. Furthermore, given a moderately peckish predator, the number of prey will decline from 1 to 0 quite quickly. Some time later the predator will also die of starvation!

Attached worksheet won't display inline here: it contains the "rtablesize" workaround so the problem is probably one of the plot options I have used ( 'size=' maybe?). You will have to download to view :-(

Download predPrey.mw

A slightly different(?) way


 

  restart;
  with(plots):
  nFrames:=100:
  step:=evalf(2*Pi/nFrames):
  display( [ pointplot
             ( [ seq( [j*Pi, 0], j=0..6 )],
               symbol=solidcircle,
               symbolsize=10,
               color=red
             ),
             display
             ( [ seq( plot( [ sin(x+phi),
                              sin(x-phi),
                              sin(x+phi)+sin(x-phi)
                            ],
                            x=0..6*Pi,
                            color=[blue, green, black]
                          ),
                      phi=0..step*(nFrames-1), step
                    )
               ],
               insequence=true
             )
           ],
           axes=boxed,
           gridlines=false
         );

 

interface(rtablesize=10);

[10, 10]

(1)

 


 

Download stWave.mw
