Items tagged with multicore

As I understand it, Maple will detect and use the available cores in a system if the calculation is suitable for multi-core execution.

As I am installing Maple on a multi-user cluster that uses a scheduler to run Maple scripts, I want to ensure that the Maple jobs only use the number of cores allocated to the job.

Is it possible to set the number of cores used?

If I have misunderstood how Maple works (I am new to it), or if there is a section in the documentation that explains this, please point me in the right direction. I haven't found this information so far.
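
For what it's worth, kernelopts(numcpus) can be set as well as read, so one approach (a sketch only, assuming a SLURM-style scheduler; the environment-variable name is scheduler-specific) is to cap the thread count at the top of the batch script:

    # Sketch: read the allocated core count from the scheduler's environment
    # (SLURM_CPUS_PER_TASK is just an example; use your scheduler's variable)
    # and cap Maple's threaded kernel at that many cores.
    ncores := parse(getenv("SLURM_CPUS_PER_TASK")):
    kernelopts(numcpus = ncores):

Note that this governs Maple's own thread pool; compiled libraries that Maple calls may have separate thread-count settings.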

I use Maple 12 on my dual-core laptop, and I plan to buy Maple 2016 and an 8-core system to take advantage of multiple processors.

Do Maple 2016 functions take advantage of multiprocessor systems, or will it be the same as having one processor?
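
Not an authoritative answer, but a quick way to see whether a given computation benefits from more cores is to compare seq with Threads:-Seq on a thread-safe function (a minimal sketch; the function below is purely illustrative):

    kernelopts(numcpus);                              # cores Maple has detected
    f := n -> add(irem(i^2, 97), i = 1 .. n):         # illustrative thread-safe work
    t := time[real](): [seq(f(10^6), k = 1 .. 8)]: time[real]() - t;
    t := time[real](): [Threads:-Seq(f(10^6), k = 1 .. 8)]: time[real]() - t;

As far as I understand, only code written against the Threads or Grid packages (or library routines that call compiled multithreaded code) spreads across cores; ordinary sequential Maple code runs on one core.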

I'm running calculations like this:

    N:=10000;
    f := (i,j)-> (some complicated procedure depending on i and j);
    M:= Matrix([Threads:-Seq([Threads:-Seq( f(i,j), j=1..N)], i=1..N)]);

I have a server with 20 cores, but each core supports two hardware threads, so this code should be able to max out all 40 threads. What I notice, though, is that at most 20 threads are ever in use at a time.

I checked, and kernelopts(numcpus) returns 20.

Does anyone have any advice on how to maximize my resource usage?
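
One thing that might be worth trying (a sketch, not a definitive fix): kernelopts(numcpus) is settable, so you can tell the Threads scheduler to target all 40 logical threads before building the Matrix; whether hyperthreading actually helps will depend on the workload.

    kernelopts(numcpus = 40):   # override the detected value of 20
    N := 10000:
    M := Matrix([Threads:-Seq([Threads:-Seq(f(i, j), j = 1 .. N)], i = 1 .. N)]):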

Our previous article described the design of fast algorithms for multiplying and dividing sparse polynomials. We have integrated these algorithms into the expand and divide commands of Maple 14. In this post I want to talk a bit about what you might see when you try Maple 14. Keep in mind that the product isn't released yet and I don't work for Maplesoft, so general disclaimers apply. Nevertheless, one of the first things you may notice is this.

(Screenshot: task manager with Maple 14)
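
As a rough illustration (my own sketch, not taken from the article), the kind of input that exercises the new sparse multiplication and division routines is a large multivariate product followed by an exact division:

    # Build two sparse random polynomials, multiply them with expand, and
    # divide the product back out; divide assigns the quotient to q.
    f := randpoly([x, y, z], degree = 30, terms = 1000):
    g := randpoly([x, y, z], degree = 30, terms = 1000):
    p := expand(f*g):
    divide(p, f, 'q');   # returns true; q should equal g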

For double-precision ("hardware") real and complex floating-point operations on Matrices, Vectors, and Arrays, Maple makes use of its external-calling mechanism to get to compiled code. A great deal of such compiled code for array operations relies on what are known as the Basic Linear Algebra Subprograms (BLAS). The BLAS libraries provide support not only directly for Matrix-Vector arithmetic but also indirectly in other external compiled libraries used by Statistics, ArrayTools, LinearAlgebra[Modular], etc.
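
For instance (a minimal sketch), a product of two hardware-datatype Matrices is dispatched through external calling to compiled BLAS code:

    # Matrices with datatype = float[8] hold hardware doubles, so the product
    # below is computed in compiled (BLAS) code rather than the Maple interpreter.
    A := Matrix(1000, 1000, (i, j) -> evalf(1/(i + j - 1)), datatype = float[8]):
    B := Matrix(1000, 1000, (i, j) -> evalf(sin(i*j)), datatype = float[8]):
    P := A . B: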
