Mr. Roman Pearce

## Profile

- Reputation: 1678
- Member for: 19 years, 140 days
- Role: Research Associate, CECM/SFU

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

## high performance computer algebra...

A rough answer to your question is that the numerical software community has historically put much more effort into achieving high performance, and this has made their software more useful. It requires knowledge of algorithms, attention to detail, and a certain amount of stubbornness to get the job done.

In the computer algebra community there are many individual projects dedicated to high performance, but the only system which can really claim to do it well is Magma.  And the focus of Magma is on algebra - it has a more restricted scope than Maple.  Maple has parts which are undeniably fast, and of course its numerical components (with fixed precision) are good.  But there are quite a few areas where Maple is grossly behind the capabilities of the hardware.  I'm thinking of:

- polynomials with 100M terms

- matrices with 100M entries

- plots with 100M points

- graphs with 100M edges

- groups with 100M elements

etc.

The number 100M = 10^8 is not a coincidence, it's an estimate derived from the number of machine cycles per second.  Current computers run at roughly 2.5 GHz x 4 cores = 10^10 cycles per second, and GPUs are at 10^11 and closing in on 10^12.  Anything you do in a high performance program should take at most 100s of cycles per element, so the above estimates are reasonable, and within a factor of 10 for O(n log n) algorithms.
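The cycle-budget arithmetic above can be sketched in a few lines; the clock rate, core count, and per-element budget are the rough figures from the text, not measurements:

```python
import math

# Rough cycle-budget estimate (all numbers are the text's assumptions).
cpu_hz = 2.5e9                   # clock rate of one core, in cycles/second
cores = 4
cycles_per_sec = cpu_hz * cores  # ~1e10 cycles/second for the whole CPU

budget_per_item = 100            # "at most 100s of cycles" spent per element
items_per_sec = cycles_per_sec / budget_per_item
print(f"{items_per_sec:.0e} items per second")   # ~1e8, i.e. 100M

# For an O(n log n) algorithm on n = 1e8 items, the log factor alone is
# about 27x, which is where the "within a factor of 10" caveat comes from.
print(math.log2(1e8))            # ~26.6
```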

So that's what we should be able to do.  I personally would like to have it, I'm sure it'd be useful.  But in the computer algebra community you will often run into people who lack imagination and don't see the need to handle large objects or do large problems.  They think polynomials with 2000 terms are large.  I think that listening to these voices would consign the entire field to irrelevance because Moore's Law is not stopping anytime soon.  In my opinion this attitude, along with the notion that "performance doesn't matter", has already done enough damage.

## thanks...

Thanks for the great article and example.

## firefox sidebar...

On Firefox 3.0.11 on Linux I get the sidebar loading below the page and on the left.

Edit: I logged in remotely and it's fixed.  It must have just been a caching quirk on my end.

## 1 GB...

I typically have matrices or polynomials sucking up 1 GB or more.  One wrong semicolon and the standard interface dies.  I mostly use the terminal.

## Fifa referee simulation...

isgoal := proc(g::boolean) `if`(rand() mod 100 < 95, g, not g) end proc:

If you post your code or worksheet we can try to help you.  Otherwise it's hard to make suggestions.
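For anyone not reading Maple, a Python rendering of the one-liner above (the 95% figure is the joke's assumption, not data):

```python
import random

def isgoal(g: bool) -> bool:
    """Return the correct call 95% of the time, the wrong one otherwise.

    A Python sketch of the Maple `isgoal` one-liner above, for illustration.
    """
    return g if random.randrange(100) < 95 else not g

# Over many trials the simulated referee is right about 95% of the time.
trials = 100_000
correct = sum(isgoal(True) for _ in range(trials))
print(correct / trials)   # roughly 0.95
```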

## flat structure is fine...

I like the new flat structure and think it should be tried.  Many forums use this, you just have to get into the habit of quoting.

## @acer there's a button "all" above ...

@acer there's a button "all" above the list to show them all

## editing posts...

Good luck tracking this down.  I was editing posts and adding tags.

## congratulations...

Congratulations on getting this out in the wild.  I think it's pretty good.

## can't display polynomial...

Also, here's a polynomial that you can't display using pretty printing in the standard interface at all:

f := randpoly([x,y,z,t,u,v],degree=15,dense):

## no elision...

Turning off elision doesn't work (bug).  Try setting it to kernelopts(maximmediate) instead.

## Larrabee...

Larrabee v1 unfortunately did not ship, so we have not been able to test a real manycore processor.  A Niagara II Sparc might be a rough approximation, but testing it would require a big commitment of resources.  I think it's clear that we're going to see more and slower cores.  When you optimize for power that's what you get.

As for dense polynomials, no, we did not do any dense algorithms such as the FFT.  Marc Moreno Maza at the University of Western Ontario is working on those, with encouraging results (as you pointed out).  We have done the sparse algorithms, which are Maple's default, and in the dense case you get classical complexity with no sorting overhead.  For multivariate polynomials up to a given total degree, i.e. degree(f(x,y,z))=n, this is not a bad tradeoff.  The FFT is usually written for the case degree(f,x[i])=n.
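As a toy illustration of the sparse representation (a minimal dict-based sketch, not the heap-merge algorithms Maple actually uses), terms can be stored as exponent-vector/coefficient pairs and multiplied by combining like terms:

```python
# Toy sparse multivariate polynomial multiplication.
# A polynomial is a dict mapping exponent tuples to coefficients,
# e.g. x + y in variables (x, y) is {(1, 0): 1, (0, 1): 1}.

def sparse_mul(f, g):
    """Multiply two sparse polynomials, combining like terms as we go."""
    h = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            e = tuple(a + b for a, b in zip(ef, eg))  # add exponent vectors
            h[e] = h.get(e, 0) + cf * cg              # accumulate coefficient
    return {e: c for e, c in h.items() if c != 0}     # drop cancelled terms

# (x + y) * (x - y) = x^2 - y^2
f = {(1, 0): 1, (0, 1): 1}
g = {(1, 0): 1, (0, 1): -1}
print(sparse_mul(f, g))   # {(2, 0): 1, (0, 2): -1}
```

Only the nonzero terms are ever stored or touched, which is the point of the sparse default: a dense polynomial pays nothing extra beyond the classical term-by-term cost.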
