roman_pearce

Mr. Roman Pearce

1623 Reputation

18 Badges

15 years, 330 days
CECM/SFU
Research Associate
Abbotsford, British Columbia, Canada

I am a research associate at Simon Fraser University and a member of the Computer Algebra Group at the CECM.

MaplePrimes Activity


These are answers submitted by roman_pearce

f := mods( randpoly(x,dense,degree=1,coeffs=rand(1..4)), 5):

 

It depends on what you're doing.  Some algorithms in Maple are parallelized, e.g. numerical linear algebra, polynomial arithmetic, and Groebner bases, but you'll need to do larger computations to see a benefit.
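For example, multiplication of large polynomials is threaded automatically. A rough sketch (the sizes are arbitrary and the timings are machine-dependent):

kernelopts(numcpus);          # cores detected by Maple
f := expand((1+x+y+z)^30):
g := expand((1+x+y+z)^30)+1:
CodeTools:-Usage( expand(f*g) ):

Watch your cpu monitor during the product; several cores should be active.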

This is possible but not efficient in Maple 2016.

f := randpoly([x,y,z]):
G := [seq(randpoly([x,y,z],degree=2),i=1..3)]:
r := Groebner:-NormalForm(f, G, tdeg(x,y,z), 'Q'):
Q;  # quotients for G
expand( f - add(Q[i]*G[i], i=1..nops(G)) - r );

If your curve is y=f(x) you would use y-f(x).  E.g. for y=(x-1)^2 it would be algcurves[genus](y-(x-1)^2,x,y);

Maple automatically sorts and combines like terms.  E.g. if you enter (x-1)+(x+1) Maple will produce 2*x.

To keep the brackets for display purposes, you can place an empty name (two backquotes, ``) before each opening bracket, e.g. ``(x-1)+``(x+1); however, this is a hack, and Maple will not recognize the result as a polynomial.  You will have to call expand to get rid of the brackets before you compute anything.

If you don't like the ordering, you can use the sort command to re-sort the terms in a different ordering.

You can also use the collect command to collect with respect to one or more variables.
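To illustrate, a small sketch of the commands above:

p := ``(x-1) + ``(x+1);     # brackets kept for display only
q := expand(p);             # q = 2*x, a genuine polynomial
f := 1 + x + x^3 + x^2:
sort(f, x);                 # x^3 + x^2 + x + 1
collect(a*x + b*x + c, x);  # (a+b)*x + c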

I'm surprised there isn't a library routine for this.  Here's my attempt.  You can specify the variables as an optional second argument.

`mod/Reduce` := proc(f) local c, m, v, p;
  if nargs=1 then error "insufficient arguments" end if;
  p := args[-1];
  v := `if`(nargs=2, indets(f), args[2]);
  c := [coeffs(f mod p, v, 'm')]:
  m := subsindets([m], name^integer, x->op(1,x)^(modp(op(2,x)-1,p-1)+1));
  Expand(inner(c,m)) mod p;
end proc:

f := randpoly([x,y,z,RootOf(z^4+z+1)],degree=10,terms=10);
Reduce(f) mod 2;
Reduce(f) mod 3;
Reduce(f,x) mod 2;
Reduce(f,{x,y}) mod 2;

RootFinding:-Isolate does a very specific thing.  For polynomial systems with a finite number of solutions, it computes a Groebner basis and constructs a Rational Univariate Representation of the system, which is a triangular form with some nice properties.  From that it computes bounding boxes for the solutions, and by default it uses those boxes to output floating point numbers with all correct digits.  So it uses three different algorithms in combination to solve a system exactly and gives you a guarantee about the output.

The solve command is more general and necessarily more flexible.  It can "solve" systems with an infinite number of solutions.  It can select among different algorithms.  One algorithm computes a Groebner basis and an RUR, just like RootFinding:-Isolate.  In other cases it may use a different algorithm and you may get a different representation for the solutions.  This is all fair game.

When you call evalf to numerically evaluate the results from solve, you should expect good results, but evalf doesn't guarantee that the resulting numbers are free of numerical error.  You can double check it by evaluating again at a higher setting of Digits, but that is not a proof.

This situation is far from ideal.  For solving it would be better to have just one or two algorithms instead of many, and for evaluation it would be better if numerical error did not show up in the output.  Both problems are difficult to address in general, so what you're seeing is a compromise between speed, complexity, and the state of the art.
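As a small illustration on a toy system (the exact output format varies by version):

sys := [x^2+y^2-4, x*y-1]:
RootFinding:-Isolate(sys, [x,y]);   # floats with all digits correct
sols := solve(sys, [x,y]):          # exact solutions
evalf(sols);                        # fast, but not guaranteed error-free
Digits := 30:  evalf(sols);         # re-check at higher precision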

@Ronan 

Use semicolons instead of colons at the ends of statements to see the output.

restart:
# here's some polynomials and variables for the columns
eqns := [seq(randpoly([x,y,z],degree=3,terms=20),i=1..10)];
var := [x,y];

# convert polynomials to [[coefficients],[monomials]]
P := map(proc(f) local c,m; c := coeffs(f,var,'m'); [[c],[m]] end proc, eqns):

# build set of monomials
M := [op({seq(op(i[2]),i=P)})]:

# build mapping of monomial to column
T := table([seq(M[i]=i,i=1..nops(M))]):

# assign to matrix
A := Matrix(nops(eqns),nops(M)):
for i to nops(P) do
  c,m := op(P[i]);
  for j to nops(c) do
    A[i,T[m[j]]] := c[j];
  end do:
end do:

A;

Try the following; suppose your set of equations is called S. We'll compute a canonical Groebner basis for the system, then try removing the equations one by one by setting them to zero. If we get the same basis back, the equation is redundant.

with(Groebner):
S := [op(S)]:  # make sure S is a list
R := S:        # make a copy of it
tord := 'tdeg'(SuggestVariableOrder(S));
G := Basis(S, tord):
for i to nops(S) do
   T := subsop(i=0, S);
   if Basis(T, tord) = G then
      S := T;               # same basis: equation i is redundant, drop it
   else
      R := subsop(i=0, R);  # equation i is needed, so keep it out of R
   end if;
end do:
S := remove(`=`,S,0);  # equations we kept
R := remove(`=`,R,0);  # equations we removed
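As a sanity check, here is a hypothetical system in which the third equation is a combination of the first two, since 2*x^2-1 = (x^2+y^2-1) + (x+y)*(x-y):

S := [x^2+y^2-1, x-y, 2*x^2-1]:

After running the code above, 2*x^2-1 should end up in R.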

Here is a simple loop in Maple:

for i from 1 to 10 do
  print(i);
end do;

Here is an if/then/else block:

i := 10;

if isprime(i) then
  print("prime");
else
  print("composite");
end if;

For your homework problem, try looping and counting composites.

It's likely that you have either a quad core cpu with four cores or a dual core cpu like the Core i3 with hyperthreading.  In either case the operating system sees four cores, and 25% means you are using one of them completely.  Some parts of Maple can take advantage of multiple cores, but it is not yet possible to do this for everything at all times.
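You can verify what Maple sees with kernelopts:

kernelopts(numcpus);   # logical cores visible to the kernel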

@MDD Load this code first.


unprotect(PolynomialIdeals:-EliminationIdeal):
PolynomialIdeals:-EliminationIdeal := proc(J::PolynomialIdeal, X::set(name), $)
local X2, U, G, tord, i;
option cache;
uses PolynomialIdeals;
  X2, U := selectremove(member, IdealInfo:-Variables(J) intersect indets(IdealInfo:-Generators(J), 'name'), X);
  if nops(U) = 0 then
    return J
  elif nops(X2) = 0 then
    return PolynomialIdeal(0, 'characteristic' = IdealInfo:-Characteristic(J));
  elif nops(X2) = 1 then
    G := UnivariatePolynomial(op(X2), J);
    return PolynomialIdeal(G, 'characteristic' = IdealInfo:-Characteristic(J), 'variables' = X2,
      'known_groebner_bases' = ['plex'(op(X2)) = [G], 'tdeg'(op(X2)) = [G]])
  end if;
  tord := select(Groebner:-ShortMonomialOrders:-IsElimOrder, IdealInfo:-KnownGroebnerBases(J, X2 union U), X2);
  if 0 < nops(tord) then
    G := [seq(Groebner:-ShortMonomialOrders:-ProjectOrder(i, X2) = remove(has, Groebner:-Basis(J, i, ':-normalize' = false), U), i = tord)];
    return PolynomialIdeal(rhs(G[1]), 'characteristic' = IdealInfo:-Characteristic(J), 'variables' = X2,
      'known_groebner_bases' = G)
  end if;
  tord := 'lexdeg'(selectremove(member, [Groebner:-SuggestVariableOrder(J, X2 union U)], U));
  G := remove(has, Groebner:-Basis(J, tord, ':-normalize' = false), U);
  PolynomialIdeal(G, 'characteristic' = IdealInfo:-Characteristic(J), 'variables' = X2,
    'known_groebner_bases' = [Groebner:-ShortMonomialOrders:-ProjectOrder(tord, X2) = G])
end proc:

 

The Core i5 and i7 are good. You want a quad-core processor. Intel uses a four-digit numbering scheme where the first digit is the processor generation, and the generations alternate between new designs and die shrinks to a smaller lithography. A 'K' suffix indicates an unlocked enthusiast processor, which is typically faster. Here are some recent processors:

6xxx Skylake (14nm) was just released. The i7 6700 is fast and expensive. The i5 6600 looks like the best value for high performance. The 6500 should be better than most older cpus. The 6400 may not be faster than older Haswell chips.

5xxx Broadwell (14nm) is a die shrink of Haswell. These are rare because Intel had delays at 14nm. The 5775C and 5675C at 3.3/3.1 GHz appear slow, but they have a massive 128MB L4 cache which is very interesting for large data.

4xxx Haswell (22nm) is common. 4440/4460/4570/4590/4670/4690/4770 go from 3.1 to 3.5GHz. We have a 4570 and it's good. You should be able to get a good deal on these now.

These cpus are older, but still hold up well:

3xxx Ivy Bridge (22nm) is a die shrink of Sandy Bridge. The 3770k was a good performer.

2xxx Sandy Bridge (32nm) was a big leap over the previous generation. The 2600k was popular.

These cpus are showing their age:

Westmere (32nm, 2010) is a die shrink of Nehalem. Start of the Core i5's.

Nehalem (45nm, 2009) was the start of the Core i7's.

The AMD cpus were competitive on value but they have fallen behind. We have a Piledriver FX-8350 (32nm) and I thought it was a good value for 8 cores. It's slower than the Haswell cpus. AMD has a new architecture (Zen) in the pipeline which is worth watching.

As mentioned by acer, you want a 64-bit OS but I think that's standard.  Windows is fine.  Get lots of RAM to make your machine last.  I would say 16GB minimum, 32GB is worthwhile.  Get a solid state hard drive for your OS and programs no matter what.

The Ore_algebra package is built on Maple's commutative polynomial data structures, with the unfortunate side effect that non-commutative polynomials do not display correctly.  It is computing as if the D[i]'s were on the right.

You can fix the display of expanded polynomials with the sort command:

sort(D[1]*x[1]+1, [x[1],D[1]]);

To collect with respect to the D[i]:

collect(x[1]*D[2]+x[2]*D[2], {D[1],D[2]});

Or do both:

sort(collect(x[1]*D[2]+x[2]*D[2], {D[1],D[2]}), [x[1],x[2],D[1],D[2]]); 

Here's a routine to automate it:

skew_print := proc(F,A)
local f,d,x;
 d := A["polynomial_indets"];
 x := A["rational_indets"];
 f := collect(F,d);
 sort(f,[op(x),op(d)]);
end proc:
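A possible usage sketch, assuming an algebra built with Ore_algebra (the particular declaration below is illustrative):

with(Ore_algebra):
A := skew_algebra(diff=[D[1],x[1]], diff=[D[2],x[2]]):
p := skew_product(D[1], x[1], A);   # x[1]*D[1] + 1, stored commutatively
skew_print(p, A);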

 

Drive C: is protected in Windows Vista and newer.  You can write to C:\Users\yourname\... but not anywhere else without administrator privileges.
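For example (with a hypothetical user name and file name), write under your own profile:

f := x^2 + 1:
currentdir("C:\\Users\\yourname\\Documents");  # your own folder is writable
save f, "f.mpl";                               # succeeds without admin rights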
