Carl Love


MaplePrimes Activity


These are answers submitted by Carl Love

Here is a version that I've reworked to make use of parallel processing. I needed to increase the number of side vectors b to 2^12 in order to get a meaningful time measurement. If your number of side vectors is only about 100 (like you said), it will be extremely wasteful to use the code below. This is because the overhead involved in setting up the parallel processing will be far greater than the time to solve the problem on one processor.


restart:

The float[8] technique used in this worksheet is extremely fast and is limited to moduli less than 2^25. If you need a larger modulus, the code is very easy to modify (let me know; a rough sketch of the change follows), and it is only a little slower.
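As an aside, here is a minimal sketch of the kind of modification I mean: use a hardware integer datatype instead of float[8] in the Modular calls. This is only an illustration with made-up sizes and a made-up modulus; check ?LinearAlgebra,Modular for the exact modulus limit of each datatype in your version of Maple.

q:= nextprime(2^30):   # a modulus too large for the float[8] datatype
M:= Matrix(
     LinearAlgebra:-RandomMatrix(4, shape= triangular[upper, unit]),
     shape= rectangular
):
M:= LinearAlgebra:-Modular:-Mod(q, M, integer[]):
v:= LinearAlgebra:-Modular:-Mod(q, LinearAlgebra:-RandomVector(4), integer[]):
LinearAlgebra:-Modular:-BackwardSubstitute(q, M, v):   # v is overwritten with the solution mod q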

N:= 2^7:  #order of matrix A
n:= 2^12:  #number of vectors b[i]
p:= 127:  #prime modulus

isprime(p);

true

Generate a random example upper triangular nonsingular A.

A:= LinearAlgebra:-RandomMatrix(
     N, shape= triangular[upper, unit], datatype= float[8]
):

Convert to shape= rectangular, which is required by LinearAlgebra:-Modular.

A:= Matrix(A, shape= rectangular):

Input A to the Modular system.

A:= LinearAlgebra:-Modular:-Mod(p, A, float[8]):

 

Generate a list of random side vectors b[k].

b:= [seq](
     LinearAlgebra:-RandomVector(N, datatype= float[8]),
     k= 1..n
):

Make a solving procedure that can be mapped into parallel processors. It returns the solution for a single side vector.

Solve:= proc(b::Vector, A::Matrix, p::posint)
     uses LAM= LinearAlgebra:-Modular;
     local B:= LAM:-Mod(p, b, float[8]);
     LAM:-BackwardSubstitute(p, A, B);
     B
end proc:     

 

For reasons that I don't understand, it seems necessary to make a single bogus call to LinearAlgebra:-Modular:-BackwardSubstitute and generate an error before the parallel version will work. This only seems to be necessary once after each restart.

LinearAlgebra:-Modular:-BackwardSubstitute(p);

Error, (in LinearAlgebra:-Modular:-BackwardSubstitute) incorrect number of arguments

 

Threads:-Map is a parallel processing version of map. All that you need to run the parallel code is...

# X:= Threads:-Map(Solve, b, A, p);

...the rest of the code in the command below is just there to measure the time.

CodeTools:-Usage(
     proc() :-X:= Threads:-Map(Solve, b, A, p) end proc()
):

memory used=5.30MiB, alloc change=15.31MiB, cpu time=234.00ms, real time=36.00ms

It looks like the code parallelizes extremely well:

36.*kernelopts(numcpus)/234.;

1.23076923076923

It's actually the best parallelization ratio that I've ever achieved in my own code.

But if you compare the time with the single-processor time for the same number of b's, you will see that there are no savings from using the parallel code. You would have to use about 2^16 b's before there would be any savings.

 

Verify results (not really necessary):

for k to n do
     if LinearAlgebra:-Norm(
             LinearAlgebra:-Modular:-AddMultiple(
                  p,
                  p-1,
                  LinearAlgebra:-Modular:-Multiply(p, A, X[k]),
                  LinearAlgebra:-Modular:-Mod(p, b[k], float[8])
             )
        )
        <> 0
     then error "Bad solve"
     end if
end do;
"All solutions check";

"All solutions check"

 


Download ParallelModular.mw

 

At the sizes that you are talking about, it is not worth using multiple parallel processors. The following solution command runs in time less than the smallest time that can be measured directly by Maple, which is 1/64 seconds. If you really want to run on multiple parallel processors, let me know. But I think that the overhead of initiating the processes would outweigh any time savings.

Here's an example.

 

restart:

The float[8] technique used in this worksheet is extremely fast and is limited to moduli less than 2^25. If you need a larger modulus, the code is very easy to modify (let me know), and it is only a little slower.

N:= 2^7:  #order of matrix A
n:= 2^6:  #number of vectors b[i]
p:= 127:  #prime modulus

isprime(p);

true

Generate a random example upper triangular nonsingular A.

A:= LinearAlgebra:-RandomMatrix(
         N, shape= triangular[upper, unit], datatype= float[8]
    ):

Convert to shape= rectangular, which is required by LinearAlgebra:-Modular.

A:= Matrix(A, shape= rectangular):

Input A to the Modular system.

A:= LinearAlgebra:-Modular:-Mod(p, A, float[8]):

 

Generate a list of random side vectors b[k].

b:= [seq](LinearAlgebra:-RandomVector(N, datatype= float[8]),
         k= 1..n
    ):

Combine all the vectors into one matrix.

B:= `<|>`(seq(b[k], k= 1..n)):

Input it to Modular.

B:= LinearAlgebra:-Modular:-Mod(p, B, float[8]):

 

LinearAlgebra:-Modular:-BackwardSubstitute does not "return" the answer. Rather, it overwrites B with the answer. In order to save B, we need to copy it.

X:= LinearAlgebra:-Modular:-Copy(p, B):

Solve system AX=B.

LinearAlgebra:-Modular:-BackwardSubstitute(p, A, X);

 

Verify results (not really necessary):

for k to n do
     if LinearAlgebra:-Norm(
             LinearAlgebra:-Modular:-Multiply(p, A, X[.., k])
             - B[.., k]
        )
        <> 0
     then error "Bad solve"
     end if
end do;
"All solutions check";

The kth solution vector can be extracted thus:

k:= 9:

X9:= X[.., k];

X9 := (1 .. 128 column Vector; Data Type: float[8]; Storage: rectangular; Order: C_order)

(Optional) Convert back to true integer form.

X9int:= map(trunc,X9);

X9int := (1 .. 128 column Vector; Data Type: anything; Storage: rectangular; Order: C_order)

X9int[37], X9[37];

121, HFloat(121.0)

 

 

Download ModularSolve.mw

It can be done with a single plot command. No transform or display is needed.

restart:
V:= exp(-x)*y^3/eta^3:
eta:= 1+k*x+sin(2*Pi*x):
x:= 0.1:
Ks:= [.1, .5, -.1]:
plot(
     [seq](eval([V, y, y= 0..eta], k= K), K= Ks),
     labels= ["v", "y"],
     color= [black, red, blue],
     linestyle= [solid, dash, dot],
     legend= [seq](k = K, K= Ks)
);

It is sad that you have not received more help... I can give you a tiny bit of help, but I have very little knowledge of this subject area. I think that some of what you want can be done with the ?DynamicSystems package. In particular, there are ?DynamicSystems,BodePlot and ?DynamicSystems,TransferFunction.
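For instance, here is a bare-bones sketch; the transfer function is just a placeholder, since I don't know your actual system.

sys:= DynamicSystems:-TransferFunction(1/(s^2 + 2*s + 1)):   # placeholder 2nd-order system
DynamicSystems:-PrintSystem(sys);
DynamicSystems:-BodePlot(sys);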

I think that Joe Riel may know this package well. Maybe he will notice this namecheck and respond.

Also, could you please retype your expressions in normal characters? The Maple Math typesetting is horrible---nearly impossible to read.

Since no-one has answered yet and I know at least part of the answer, I will try to give some help. But I don't know much about Maplets yet.

What I can tell you for certain: Unlike some other languages (such as C), Maple does not allow assignments (:=) to be embedded in expressions. By changing a comma (,) to a semicolon (;), your code above can be made syntactically correct:

BoxRow(...
   GridLayout([
      [CheckBox['CB1'](), "Box1"],
      ....for all 6 boxes...
])); #<-- Here's the change

lstW:= ['CB1','CB2',...for all 6 boxes];

What I'm not sure about: While the above is syntactically correct, I don't know whether it does anything useful.
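To see the statement-versus-expression point in isolation, here is a tiny example of my own (it has nothing to do with Maplets):

L:= [a:= 1, b:= 2];   # syntax error: an assignment is a statement, not an expression
L:= [a = 1, b = 2];   # fine: equations are expressions, so they can be list members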

 

 

A plot of the function that you call pdf clearly shows that it is not a p.d.f. Its horizontal asymptotes are at e, not 0. Thus CDF is meaningless.  Also note that the result returned by dsolve has the form f(x) = ..., so, if the concept were meaningful at all, you'd need to have rhs(pdf) where you currently just have pdf.
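For example, rhs extracts the expression from an equation of that form (the equation below is just a made-up stand-in for the dsolve output):

pdf:= f(x) = exp(-x^2):   # hypothetical stand-in for the dsolve result
rhs(pdf);                 # returns exp(-x^2), the expression that plot (or int) would need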

Your X25 and X2 are Vectors of polynomials. Quotient does not take Vectors as arguments, but it will take polynomials. So you could do

Quotient(X25[2], X2[2]);

Come up with some other name for the imaginary unit. Let's say you choose i (but it does not need to be a single letter). Then do

interface(imaginaryunit= i);

Then you're free to use I however you want.

In Maple 17, all you need to do is

local I;

and that will automatically change the imaginary unit to _I.
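A quick illustration of the first method (i is just the replacement name chosen above):

interface(imaginaryunit= i):
I:= 5:        # I is now an ordinary name, so it can be assigned
I^2, i^2;     # 25, -1   (i is now the imaginary unit)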

The solution returned by dsolve is taking a long time to compute plot points because non-numeric dsolve computes symbolic solutions in exact arithmetic even when the input coefficients are floats. This can lead to some extremely large rational numbers in the solution. (You can override this default with dsolve(..., convert_to_exact= false).) In this case, the solution is of the form exp(Int(N/D^2)), where the integral is unevaluated, N is a polynomial of degree 8, D is a polynomial of degree 5, and the coefficients of each polynomial are integers having several hundred digits each. When you try to plot this solution, Maple attempts a numerical integration (quadrature) of the exponent for each plotted point. For most of those points, it gives up because it cannot achieve the accuracy specified by your value of Digits (probably set at the default 10).

It may be possible to carefully adjust the digits (lowercase d) and epsilon options on the numeric integration (see ?evalf,Int ) so that a plot can be obtained in a reasonable time. But it is much easier and much more efficient to use dsolve(..., numeric) and odeplot, like this:

pdf := dsolve(motion union ic, numeric);

plots:-odeplot(pdf, x= -0.01..0.01);

When you see the plot, you'll know why I chose such a narrow range.
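(For the record, the tolerance adjustment mentioned above would look something like the sketch below; the integrand and the option values are placeholders. See ?evalf,Int for the details.)

evalf(Int(exp(-t^2), t= 0..1, digits= 5, epsilon= 1e-3));   # loosened quadrature tolerances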

The above Answers assume that the integral is real as if that were obvious. Perhaps it is obvious to some, but it wasn't to me. So here's a proof that it is real.

 

J:= exp(I*epsilon*ln((2+t)/(2-t)))*exp(I*n*t)/sqrt(4-t^2)/Pi;

exp(I*epsilon*ln((2+t)/(2-t)))*exp(I*n*t)/((-t^2+4)^(1/2)*Pi)

plot(eval(Im(J), [epsilon= -0.4, n= 1]), t= -2..2);

The imaginary part appears to be an odd function of t. That would make the integral over a symmetric real interval real. Let's prove it's an odd function.

evalc(Im(J)) assuming t::RealRange(-2,2);

(sin(epsilon*ln((2+t)/(2-t)))*cos(n*t)+cos(epsilon*ln((2+t)/(2-t)))*sin(n*t))/((-t^2+4)^(1/2)*Pi)

combine(%);

sin(epsilon*ln(-(2+t)/(t-2))+n*t)/((-t^2+4)^(1/2)*Pi)

ImJ:= unapply(%, t):

simplify(ImJ(t)+ImJ(-t)) assuming epsilon::real, n::real, t::RealRange(-2,2);

0

Therefore the imaginary part of the integrand is an odd function of t (for real n and epsilon).

 

Download odd_imag_part.mw

Why use such a crude trial-and-error method to find the B for each A? Here's a way using Optimization:-Minimize:

restart:
for k to 300 do
     A:= .01*k;
     Bmin:= Optimization:-Minimize(
          B, {exp(B)-A-1 >= 0, exp(B)-A/2-1.5 >= 0},
          feasibilitytolerance= 1e-6, optimalitytolerance= 1e-6,
          initialpoint= {B= .1}
     )[1];
     Sol[k]:= [A,Bmin]
end do:
plot(convert(Sol,list), labels= ['A', B[min]]);

Or we can get a fully symbolic solution thus:
L1:= solve(exp(B)-A-1, B);
                           ln(A + 1)
L2:= solve(exp(B)-A/2-3/2, B);
                            /1     3\
                          ln|- A + -|
                            \2     2/
convert(max(L1,L2), piecewise) assuming A>0;
                 /          /1     3\                  \
        piecewise|A <= 1, ln|- A + -|, 1 < A, ln(A + 1)|
                 \          \2     2/                  /


We can give this problem a more sophisticated combinatorial treatment. This might be appropriate for a case where the number of permutations is so great that iterating through them is undesirable. We note that any permutation of the four fractions in any solution is also a solution, essentially the same. Thus we can reduce the search by a factor of 4! = 24. The following returns the 8 unique solutions ((Kitonum's 192) / 4!) in 0.343 secs.

restart:
eq:= add(a[k]/a[k+4], k= 1..4) = a[9]:
S:= {$1..9}:
Sols:= table():

for s in S do
     S1:= S minus {s};
     for C in combinat:-choose(S1,4) do
          C1:= S1 minus C;
          for P in combinat:-permute([C1[]]) do
               A:= [C[],P[],s];
               if evalb(eval(eq, a= A)) then
                    Sols[A]:= [][]
               end if
          end do
     end do
end do:
 
for Sol in indices(Sols, nolist) do
   print(eval(eq, a= ``~(Sol)))
end do;

                  (2)   (3)   (5)   (7)      
                  --- + --- + --- + --- = (9)
                  (8)   (6)   (4)   (1) 

     
                  (1)   (4)   (6)   (7)      
                  --- + --- + --- + --- = (5)
                  (3)   (8)   (9)   (2)  

    
                  (3)   (5)   (6)   (7)      
                  --- + --- + --- + --- = (9)
                  (2)   (1)   (8)   (4)   

   
                  (3)   (4)   (5)   (7)      
                  --- + --- + --- + --- = (8)
                  (9)   (1)   (2)   (6)   

   
                  (2)   (4)   (5)   (7)      
                  --- + --- + --- + --- = (9)
                  (3)   (8)   (6)   (1) 

     
                  (2)   (5)   (6)   (9)      
                  --- + --- + --- + --- = (7)
                  (1)   (4)   (8)   (3)  

    
                  (2)   (5)   (6)   (7)      
                  --- + --- + --- + --- = (9)
                  (8)   (1)   (3)   (4)    

 
                  (1)   (2)   (7)   (9)      
                  --- + --- + --- + --- = (5)
                  (6)   (8)   (3)   (4)      

 

First, get rid of with(linalg). Then replace those old linalg commands with the single command

Solution:= LinearAlgebra:-LinearSolve(A, XY):

Then you can plot the exact and the approximate solution together with

F:= x-> (1-x)*cos(x):
plots:-display(
     [
          plots:-pointplot([seq]([h*(k-2), Solution[k]], k= 2..m+1), color= blue),
          plot(F(x), x= 0..1)
     ],
     symbol= diamond
);

You can plot the errors with

plots:-pointplot(
     [seq]([h*(k-2), F(h*(k-2)) - Solution[k]], k= 2..m+1),
     symbol= cross, title= "Errors"
);

You need to use lists rather than sets because you have no control over the order that things will appear in a set. Also, your indices themselves need to be lists because "naked" sequences cannot be distinct members of lists (or sets). Once you have lists, let's say A is your list of indices and B is the list of values. Then just do

A =~ B
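For example (this is the same small A and B used with zip below):

A:= [[1,2], [3,4]]:   # the indices, as lists
B:= [5,6]:            # the values
A =~ B;
                    [[1, 2] = 5, [3, 4] = 6]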

It is possible for a naked sequence to be either side (or both sides) of an equation. You can achieve this with

zip((x,y)-> op(x)=y, A, B)

zip is the two-set (or two-list) analog of map that you were looking for.

Example:

A:= [[1,2], [3,4]]:
B:= [5,6]:
zip((x,y)-> op(x)=y, A,B);
                    [(1, 2) = 5, (3, 4) = 6]

There is no effective way to make such an assumption. Even if you were to include every possible inequation x[i] <> x[j], you would likely just overload the ability of a solver. That's why Combinatorial Optimization and Constraint Satisfaction Problems are their own areas of study, distinct from, for example, Linear Programming.

But you can solve such an equation by cycling through all permutations if the number is not too great. In the case at hand, we have 9! = 362,880, which is reasonable. The following executes in 0.235 secs for me.

restart:
eq:= a[9] = add(a[2*k-1]/a[2*k], k= 1..4):
A:= combinat:-firstperm(9):
while not evalb(eval(eq, a= A)) do  A:= combinat:-nextperm(A)  od:
eval(eq, a= ``~(A));

First 361 362 363 364 365 366 367 Last Page 363 of 392