MaplePrimes Activity


These are replies submitted by Carl Love

@brian bovril Reading through that thread to find my code that you refer to is brutal, so I put it here instead:

EratosthenesSieve:= proc(N::posint)
# Returns list of primes <= N 
   local
      # Only record data for the odds >= 3
      M:= iquo(N-1,2)
     ,prime:= rtable(1..M, fill= true)
     ,p, P, k
   ;
   # The odd number 2*p+1 is prime iff prime[p] remains true.
   for p to trunc(evalf(sqrt(N)/2)) do
      if prime[p] then
         P:= 2*p+1;
         # Mark odd multiples of P as composite, starting at P^2, whose index is (P^2-1)/2 = (P+1)*p.
         for k from (P+1)*p by P to M do  prime[k]:= false  end do
      end if
   end do;
   [`if`(N=1,[][],2), seq(`if`(prime[p], 2*p+1, [][]), p= 1..M)]
end proc:
     
CodeTools:-Usage(EratosthenesSieve(2^14)):
memory used=0.55MiB, alloc change=0 bytes, cpu time=16.00ms, real time=8.00ms, gc time=0ns

 

You say, essentially, that you want to plot P(t) vs. E. That's 3 dimensions: E = 0..4 (on the horizontal axis), t = 0..T (on which axis?), and P = P(0)..P(T) (on the vertical axis, I guess). Do you understand why this can't work?

If you want to plot P(T) vs. E, that can be done, because T is not a dimension; it's just the number 20.

If you want a 3D plot of P(t) vs. (E, t), that can be done.

If you want a sequence of 2D plots of P(t) vs. t for various specific values of E, that can be done, but E won't be an axis.

If you want a sequence of 2D plots of P(t) vs. t with E varying continuously through 0..4 with its value shown by color gradation, that can be done. In this case, color is the 3rd dimension.

If you want the same thing as an animation with E varying with time (which is thus the third dimension), that can be done.
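
To illustrate, here's a minimal sketch of the 3D-surface and animation options, using a made-up P (your actual P(t) would of course come from your own problem):

# Hypothetical stand-in for P(t) as a function of the parameter E:
P:= (E, t)-> exp(-E*t)*sin(t):

# 3D surface of P(t) vs. (E, t):
plot3d(P(E, t), E= 0..4, t= 0..20, labels= ["E", "t", "P"]);

# Animation of the 2D plot of P(t) vs. t as E varies through 0..4:
plots:-animate(plot, [P(E, t), t= 0..20], E= 0..4);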

Your example of what you expect shows multiple distinct plots. Why is that?

@torabi That's just a small excerpt from a math paper with the formulas that you want to implement. I can't debug code in a language that I don't know well (Matlab) and then translate it to Maple. Please use explicit, nontrivial matrices instead of scalars for everything that should be a matrix. When you've done that, I'll take another look at it.

@torabi Sorry then, I don't know Matlab well enough to figure it out. From my reading of Matlab's online help, size(M,1) refers to the extent of the 1st dimension of matrix M. I didn't see any meaning for size(M,1) when M is a scalar. Hopefully someone else can answer. I do have a vague memory that in Matlab scalars are actually 1x1 matrices. Could that be the key to the solution?

In my translation table above, I added a translation for T= S:h:F because it looks like you need that for this code.

@Rouben Rostamian  wrote:

  • [T]he system A.U=B is solved m times within a for-loop with the same A, while B varies. In my mind I justified doing the inversion once and then applying the inverse m times as being not worse than solving the system m times, but I may be wrong on that.

Using theoretical considerations alone, i.e., order asymptotics, I'll explain why the O(n) solver is better even in this case where you repeatedly use the same coefficient matrix A for different right-hand-side vectors B. Let's say that A1:= A^(-1). Then simply doing the multiplication U:= A1.B even once requires O(n^2) steps. This is not even counting the time to compute the inverse.

@acer wrote

  • Maple doesn't offer precomputing the P,LU for a band storage, in part because it needs a structure larger than input AT and things get a bit awkward functionality-wise.

I just wrote a tridiagonal solver that stores prefactors and thus beats that NAG band solver from your example by a factor of 5 in running time. The prefactors could be stored in place in the tridiagonal Matrix (without altering the fact that it is tridiagonal), but I didn't implement that aspect. Indeed, they could simply replace the main diagonal and 1st subdiagonal.

In my example, rather than solving repeatedly with the same right-hand-side vector, I use a different right-hand-side vector for each solve.
 

restart:

N:= 2^12: #space dimension size
maxiter:= 2^7: #time dimension size

AT:= LinearAlgebra:-RandomMatrix(
   N$2, generator= 0.0..100.0, datatype= hfloat, shape= band[1,1]
):

V:= LinearAlgebra:-RandomMatrix((N,maxiter), generator= -100.0..100.0, datatype= hfloat):
XT:= Matrix((N,maxiter), datatype= hfloat):

#Time and accuracy test of Maple's/NAG's TriD solver:

st:= time():
for i to maxiter do
   XT[..,i]:= LinearAlgebra:-LinearSolve(AT,V[..,i])
end do:
time()-st;
LinearAlgebra:-Norm(AT.XT - V);

.172

0.210472710371334415e-7

#Carl's 2-stage TriD solver:
TriD_prefactor_inner:= proc(
   A::Array(datatype= float[8]),
   B::Array(datatype= float[8]),
   C::Array(datatype= float[8]),
   n::integer[4]
)::integer[4];
local k::integer[4];
   for k to n do A[k]:= A[k]/B[k]; B[k+1]:= B[k+1] - A[k]*C[k] od;
   0 #Just to return something
end proc:

TriD_prefactor_comp:= Compiler:-Compile(TriD_prefactor_inner):

TriD_prefactor:= proc(M::Matrix(square, storage= band[1,1], datatype= hfloat))
local   
   n:= rhs(rtable_dims(M)[1])-1,
   A:= rtable(1..n, k-> M[k+1, k], datatype= hfloat),
   B:= rtable(1..n+1, k-> M[k, k], datatype= hfloat),
   C:= rtable(1..n, k-> M[k, k+1], datatype= hfloat)
;
   TriD_prefactor_comp(A, B, C, n);
   (A,B,C)
end proc:

TriD_solve_inner:= proc(
   A::Array(datatype= float[8]),
   B::Array(datatype= float[8]),
   C::Array(datatype= float[8]),
   X::Array(datatype= float[8]),
   n::integer[4]
)::integer[4];
local k::integer[4];
   for k to n do X[k+1]:= X[k+1] - A[k]*X[k] od;
   X[n+1]:= X[n+1]/B[n+1];
   for k from n by -1 to 1 do X[k]:= (X[k]-C[k]*X[k+1])/B[k] od;
   0 #Just to return something
end proc:

TriD_solve_comp:= Compiler:-Compile(TriD_solve_inner):

TriD_solve:= proc(A, B, C, D)
local n:= rtable_num_elems(A), X:= rtable(D);
   TriD_solve_comp(A, B, C, X, n);
   X
end proc:
   

XT:= Matrix((N,maxiter), datatype= hfloat): #Clear old solutions.

#Time and accuracy test of Carl's 2-stage solver
st:= time():
AT_pf:= TriD_prefactor(AT):
for i to maxiter do
   XT[..,i]:= TriD_solve(AT_pf, V[..,i])
end do:
time()-st;
LinearAlgebra:-Norm(AT.XT - V);

0.32e-1

0.468513432494432891e-7

 


 

Download TriD_prefactored.mw

@torabi  GIGO[*1]: Translations can only work if the original is correct. My Matlab knowledge is quite rusty, but in this case, it looks like M must be a Matrix for your code to make any sense in either language. Yet, you have M:= 2, making M a scalar. So, I must ask: Did you ever run the code in Matlab? And did it work?

[*1]GIGO stands for "Garbage In, Garbage Out", a tongue-in-cheek law of computer science.

Vote up for the derivation and exposition.

Regarding the iterative solver: For an n x n tridiagonal A, the system A.U = B can be solved in O(n) steps and O(1) auxiliary memory, which is much faster than even multiplying by A^(-1), let alone computing A^(-1) in the first place. And often the memory alone needed for A^(-1) is prohibitive. See the Wikipedia article "Tridiagonal matrix algorithm".
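
For reference, here's a minimal uncompiled Maple sketch of that algorithm (a compiled, prefactored version appears in another reply on this page). Here a, b, c are the sub-, main, and superdiagonals, and d is the right-hand side:

ThomasSolve:= proc(a, b, c, d)
local n:= numelems(b), B:= Vector(b), X:= Vector(d), w, k;
   # Forward sweep: eliminate the subdiagonal.
   for k from 2 to n do
      w:= a[k-1]/B[k-1];
      B[k]:= B[k] - w*c[k-1];
      X[k]:= X[k] - w*X[k-1]
   end do;
   # Back substitution.
   X[n]:= X[n]/B[n];
   for k from n-1 by -1 to 1 do X[k]:= (X[k] - c[k]*X[k+1])/B[k] end do;
   X
end proc:

ThomasSolve([1., 1.], [4., 4., 4.], [1., 1.], [5., 6., 5.]); # returns <1., 1., 1.>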

It works fine, like this:

IfProc:= x-> if x=0 then 1 elif x=1 then 2 elif x=3 then 7 else 10 fi:
IfProc~([$0..4]);
                       [1, 2, 10, 7, 10]

 

@shayas75 I'm not sure what you mean. If you type exactly what I entered in red (the color makes no difference), you should get the answer 2. Isn't that good enough?

@torabi 

Matlab           Maple
-----------------------------------------------------
Q(.., k)         Q[.., k]  
size(M,k)        (1+rhs-lhs)([rtable_dims(M)][k])
T= S:h:F         T:= <seq(S..F, h)>
length(T)        1+max((rhs-lhs)~([rtable_dims(T)]))
zeros(N,lt)      Matrix(N,lt)
A\D              LinearAlgebra:-LinearSolve(A,D) 
for n = M:N      for n from M to N do
for n = M:S:N    for n from M by S to N do
for n = [a,b,c]  for n in [a,b,c] do

Anticipating your next question, I also translated some for loop syntax.
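
For example, a quick check of the size, length, and colon-range translations on a made-up 3x5 Matrix:

M:= LinearAlgebra:-RandomMatrix(3, 5):
(1+rhs-lhs)([rtable_dims(M)][1]);    # size(M,1) -> 3
(1+rhs-lhs)([rtable_dims(M)][2]);    # size(M,2) -> 5
1+max((rhs-lhs)~([rtable_dims(M)])); # length(M) -> 5
T:= <seq(0..1, 0.25)>;               # T = 0:0.25:1, a 5-element column Vector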

Here's a detailed example of finding the conjugate of a partition via its Ferrers diagram. The diagram both verifies the correspondence between partitions and their conjugates and provides an effective way to compute the conjugate.

Consider the partition: 14 = 1 + 3 + 4 + 6. It has length 4 and maximum 6. The conjugate will have length 6 and maximum 4. Here's the Ferrers diagram:

6: @ @ @ @ @ @
4: @ @ @ @ 
3: @ @ @
1: @
----------------------------Now count the entries in each column
   4 3 3 2 1 1

So, the conjugate partition is 14 = 1 + 1 + 2 + 3 + 3 + 4. Computers are better at adding 1s than counting @s, so in my program, it's like this:

  [0, 0, 0, 0, 0, 1] +  #1
  [0, 0, 0, 1, 1, 1] +  #3
  [0, 0, 1, 1, 1, 1] +  #4
  [1, 1, 1, 1, 1, 1]    #6 
----------------------------Add these as vectors
  [1, 1, 2, 3, 3, 4]

Now suppose that you called my program with arguments (20, 6). It would first call combinat:-partition(14, 6). One of the partitions in the returned list would be [1, 3, 4, 6]. The Ferrers process in my program converts that to [1, 1, 2, 3, 3, 4]. Finally, 1 is added to each entry, yielding [2, 2, 3, 4, 4, 5], which is a partition of 20 of length 6, which is what was desired.
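
Here's a minimal Maple sketch (not the full program referred to above) of that vector-addition step:

ConjugateSketch:= proc(P::list(posint))
local m:= max(P), p, k;
   # Part p contributes a 0-1 vector with p trailing 1s; the sum of these
   # vectors is the conjugate partition (in ascending order).
   [seq(add(`if`(k > m-p, 1, 0), p= P), k= 1..m)]
end proc:

ConjugateSketch([1, 3, 4, 6]);      # [1, 1, 2, 3, 3, 4]
1 +~ ConjugateSketch([1, 3, 4, 6]); # [2, 2, 3, 4, 4, 5], a length-6 partition of 20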

@ms0439883 I said that the min(n-k, k) was needed for the case k > n/2. In your example, n=5, k=2, so k < n/2. The min is only required to satisfy the overly zealous argument checker in combinat:-partition. It has nothing to do with the logic of my algorithm, with conjugates of partitions, or with Ferrers diagrams.

If you haven't yet, read that Wikipedia article that I linked, which'll explain how Ferrers diagrams are used to get the conjugate. Later, I'll post an example. 

If this Question is pursuant to your other Question about a system of 4 degree-3 polynomials with 4 variables and 3 symbolic parameters: The Newton-Raphson method cannot work with symbolic parameters.

I don't think that there's any way around it, in Maple or any other system: If you want usable solutions, then you'll need to supply numeric values for the parameters. If you do supply numeric values, the command fsolve should provide results very quickly, such that you could, for example, make plots of results versus parameters.
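
For example, with a small made-up 2-variable polynomial system and made-up parameter values a and b (your actual system has 4 variables and 3 parameters):

sys:= {x^3 + a*x*y - 1, y^3 - b*x + y - 2}:
fsolve(eval(sys, {a= 1.5, b= 0.7}), {x, y}); # returns one real numeric solution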

@brian bovril But you yourself said 

  • Just to clarify, each contestant was given an opaque bag containing a buff. Once they were dispensed, they were told to reveal their buffs (simultaneously). So that would be Scenario 2.

And aren't we all in agreement that the probability for Scenario 2 is 6/1001, whereas 1/1001 is the probability for Scenario 1?? (MMcdara used the word "strategy" instead of "scenario".)
