Carl Love

28055 Reputation

25 Badges

12 years, 364 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@moses Look closely at the beginning of the first for loop in procedure `α__ν`. You have

for i from 1 to `n__pile ` do

It should be

for i from 1 to `n__pile` do

In other words, you have an extra space at the end of `n__pile`.

Note that there's no need to use name quotes around n__pile. If you had simply typed n__pile without the quotes, this error would have been avoided.
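To see concretely that the two names are distinct (a minimal sketch with a made-up value):

n__pile:= 5:                  #hypothetical value
evalb(`n__pile ` = n__pile);  #false -- the name with the trailing space is a different symbol
`n__pile `;                   #still an unassigned name, so a loop bound built from it can't be evaluated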

@moses If I understand you correctly, you want to evaluate the matrix that you currently have entered as alpha, with each value of S coming from the corresponding entry in the matrix S__p. Is that correct?

@pchin I figured that something like this was going on since there is no error message when trying to use colorscheme with densityplot. Can it be fixed (in the future) so that option scaletorange respects option zrange?

@Kitonum How do you get Maple's array plots (side-by-side plots) to display on MaplePrimes? It never works for me. What browser are you using and which Maple GUI?

@Kitonum The usual case is that numpoints alone isn't enough to get an exact number of points. The default setting is adaptive= true, which attempts to fill in the sharp turns of a curve after fulfilling the numpoints requirement. You can read about it at ?plot,options. If you count the points in your own posted example, you'll see that there are 25, which is why I brought up this issue.

 

@acer I stumbled upon this technique myself when looking for a way to get more control over the size of 3d plots projected into the xy-plane. It's great that you've found a single command to do it.

What's the purpose of each pair of unevaluation quotes in the following line of code:

P2d:=':-PLOT'(remove(type,[op(P3d)],
     ':-specfunc(anything,{:-AXESLABELS,:-LIGHTMODEL,:-ORIENTATION})')[]):

@Bendesarts Yes, your guess as to the reason for the limited usability of the style of module that I proposed is entirely correct.

@Kitonum If you want precisely the number of points specified, then you also have to include adaptive= false, as in

plot(x^2, x= 0..2, style= point, numpoints= 20, adaptive= false);
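One way to verify the count (a sketch, assuming the point list is the first operand of the CURVES substructure, which is itself the first operand of the PLOT structure that plot returns):

P:= plot(x^2, x= 0..2, style= point, numpoints= 20):                   #default adaptive= true
nops(op([1,1], P));                                                    #typically more than the 20 requested
P:= plot(x^2, x= 0..2, style= point, numpoints= 20, adaptive= false):
nops(op([1,1], P));                                                    #exactly 20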

The following worksheet shows a procedure that makes short work of the above, together with an example of using a typeset sentence with an embedded equation in a userinfo statement.


TSprintf:= proc()
uses T= Typesetting;
     T:-mrow(seq(`if`(e::string, T:-mn(e), T:-Typeset(T:-EV(e))), e= [args]))
end proc:

EQ:= s^2-4*s+1+sqrt(u/Pi)=3:

TSprintf("The equation is  ", EQ, ".");

"The equation is  s^2-4 s+1+sqrt(u/Pi)=3."

(1)

MyProc:= proc(eq)
     (* other code *)
     userinfo(1, MyProc, NoName, print(TSprintf("The equation is  ", eq, ".")));
     (* other code *)
end proc:

infolevel[MyProc]:= 1:

MyProc(EQ);

"The equation is  s^2-4 s+1+sqrt(u/Pi)=3."

(2)

 


Download Typeset.mw

 

Would you please post the above as an attached worksheet? It'd be much easier to read and use without all the extraneous junk created by translating from 2D input to 1D input. You can use the green up-arrow in the toolbar of the MaplePrimes editor to attach a file.

When I use Google Chrome, the white space is stripped, and I put it back space by space. When I use Mozilla Firefox, the white space is not stripped.

Your participation in MaplePrimes has been sorely missed.

@LaarsHelenius

Doesn't the algorithm require some way to detect unboundedness?

When I try the input that you suggest, I seem to get an infinite loop; perhaps I just didn't let it run long enough. Either way, there should be a limit on the number of iterations as a safeguard, so I added another keyword parameter, max_iters, which defaults to 99.

I added a userinfo statement that displays the number of iterations used. This will only execute if you give the command

infolevel[Karmarkar]:= 1:

I put coercion on the parameters A and c, so you no longer need to wrap them with evalf in the call.

So here's the new code:

Karmarkar:= proc(
     A::coerce(Matrix(datatype= float[8]), evalf),
     c::coerce(Vector[row](datatype= float[8]), evalf),
     {epsilon::positive:= 1e-4},
     {fudge_factor::realcons:= 0.9},
     {max_iters::posint:= 99}
)
uses LA= LinearAlgebra;
local
     n:= LA:-ColumnDimension(A),                    #number of variables
     y:= Vector(n, fill= 1/n, datatype= float[8]),  #center of the simplex in the transformed space
     r:= evalf(1/sqrt(n*(n-1))),                    #radius of the sphere inscribed in the simplex
     x_new:= `+`(LA:-NullSpace(A)[]),               #initial point satisfying A.x = 0
     x,                                             #previous iterate
     C:= epsilon,                                   #convergence measure (initialized so the loop runs at least once)
     Ones:= Vector[row](n, fill= 1),
     J:= LA:-IdentityMatrix(n, datatype= float[8]),
     Diag, B, p, iter
;
     for iter to max_iters while C >= epsilon do
          x:= x_new;
          Diag:= LA:-DiagonalMatrix(x/LA:-Norm(x, 1));  #projective rescaling of the current point
          B:= <A.Diag, Ones>;
          p:= (J - B^+.(B.B^+)^(-1).B).(c.Diag)^+;      #rescaled objective gradient projected onto the null space of B
          y:= y - fudge_factor*r*p/LA:-Norm(p,2);       #move a fraction of the inscribed radius in the descent direction
          x_new:= Diag.y/(Ones.Diag.y);                 #map back to the original space
          C:= abs(c.(x_new - x)/(c.x))                  #relative change in the objective value
     end do;
     if iter > max_iters then
          error
               "Didn't converge; error = %1. "
               "Increasing max_iters or epsilon may resolve this problem.",
               C
     end if;
     userinfo(1, Karmarkar, "Used", iter-1, "iterations.");
     x_new
end proc:
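For illustration, here is a hypothetical call (a sketch only: the data is made up, chosen just so that the single row of A sums to zero, as Karmarkar's standard form requires; thanks to the coercion, the exact integer data can be passed directly):

infolevel[Karmarkar]:= 1:
A:= <<1 | 1 | -2>>:   #1 x 3 constraint Matrix for the standard form A.x = 0, sum(x) = 1, x >= 0
c:= <2 | 3 | 1>:      #row Vector of objective coefficients
Karmarkar(A, c);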

@Carl Love Here's an updated worksheet. I made the procedure faster, and it uses less memory. I also added several utility procedures, mostly related to plotting the clusters.

Fuzzy C-Means Clustering

restart:

FuzzyCMeans:= proc(
     X::Matrix(datatype= float[8]),   #The rows of X are the data points.
     #number of clusters and number of means:
     {clusters::And(posint, satisfies(x-> x > 1)):= 2},
     {fuzziness::satisfies(x-> x > 1):= 2},   #fuzziness factor
     {epsilon::positive:= 1e-4},   #convergence criterion
     {max_iters::posint:= 999},   #max number of iterations
     {Norm::procedure:= proc(V) local x; sqrt(add(x^2, x= V)) end proc}
)
option `Written by Carl J Love, 2016-Apr-13`;
description
    "Fuzzy C-Means clustering algorithm.
     See http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/cmeans.html
     and https://en.wikipedia.org/wiki/Fuzzy_clustering
    "
;
uses LA= LinearAlgebra;
local
     c:= clusters,     #number of clusters and number of means
     m:= fuzziness,    #fuzziness factor
     n:= op([1,1], X), #number of points
     d:= op([1,2], X), #dimension
     N:= Norm,         #d-dimensional vector norm
     #U[i,j] will be the degree (on a 0..1 scale) to which point i belongs to cluster j.
     U:= Matrix((n,c), datatype= float[8]),
     UM:= Matrix((n,c), datatype= float[8]), # U^~m
     #U1 is just an update copy of U.
     U1:= LA:-RandomMatrix((n,c), generator= 0..1, datatype= float[8]),
     C:= Matrix((c,d), datatype= float[8]), #The rows of C will be the cluster centers.
     e:= 2/(m-1), #fuzzy exponent used to update U
     err:= epsilon,  #max matrix difference on current iteration
     i, j, k, jj #loop indices
;
     for jj to max_iters while err >= epsilon do
          ArrayTools:-Copy(U1, U);
          ArrayTools:-Copy(U, UM);
          map[evalhf, inplace](`^`, UM, m);
          for j to c do
               C[j,..]:= add(UM[i,j]*X[i,..], i= 1..n) / add(UM[i,j], i= 1..n)
          end do;
          for i to n do  for j to c do
               U1[i,j]:= 1/add((N(X[i,..]-C[j,..])/N(X[i,..]-C[k,..]))^e, k= 1..c)
          end do  end do;
          LA:-MatrixAdd(U,U1,1,-1,inplace); #U:= U-U1
          err:= max(map[evalhf, inplace](abs, U))
     end do;
     if jj > max_iters then  
          error "Didn't converge; error = %1. Increase max_iters or epsilon.", err
     end if;
     userinfo(1, FuzzyCMeans, "Used", jj-1, "iterations.");      
     (C,U1)                
end proc:

Here are some utility procedures for working with the clusters. The first produces a list that gives, for each point, the index of the cluster to which it most strongly belongs.

AssignToClusters:= (U::Matrix)->
     map(proc(i) local p; member(max(U[i,..]), U[i,..], p); p end, [$1..op([1,1], U)])
:
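For example, a hypothetical usage (assuming that (C, U) have already been computed, as they are further below in this worksheet):

A:= AssignToClusters(U):                         #e.g. [1, 1, ..., 2, 2]
#Tally how many points belong most strongly to each cluster:
seq(ListTools:-Occurrences(k, A), k= 1..op([1,2], U));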
     

The next three procedures are for plotting 2D clusters.

PlotClusters2D:= proc(X::Matrix, U::Matrix)
local A:= AssignToClusters(U), k, J:= [$1..op([1,1], U)], c:= op([1,2], U);
     plot(
          [seq(X[select(i-> A[i]=k, J), ..], k= 1..c)],
          color= [seq(COLOR(HUE, k/c), k= 1..c)],
          style= point, symbol= diagonalcross, symbolsize= 16, _rest
     )
end proc:

PlotClusterMeans2D:= proc(C::Matrix)
local c:= op([1,1], C), k;
     plot(
          `[]`~(convert(C, listlist)),
          color= [seq(COLOR(HUE, k/c), k= 1..c)],
          style= point, symbol= circle, symbolsize= 32, _rest
     )
end proc:

PlotClustersAndMeans2D:= (X::Matrix, C::Matrix, U::Matrix)->
     plots:-display(
          [PlotClusters2D(X,U), PlotClusterMeans2D(C)],
          gridlines= false, _rest
     )
:
 

#Test data:
X:=
     [[5.37,19.50],[5.73,19.44],[4.66,20.03],[5.66,20.38],[4.22,21.97],
      [5.08,20.67],[5.08,19.08],[4.54,20.06],[4.35,19.82],[5.19,20.72],
      [4.48,19.95],[5.76,19.81],[4.15,18.68],[6.37,20.60],[5.58,21.13],
      [5.76,19.91],[5.85,19.02],[5.00,19.71],[5.42,20.31],[4.77,21.45],
      [8.61,19.48],[10.70,20.31],[10.99,20.28],[11.68,21.28],[9.12,20.77],
      [10.30,20.07],[10.40,21.62],[10.95,20.34],[9.79,20.29],[9.69,20.86],
      [10.02,21.45],[11.05,20.19],[9.20,17.96],[10.49,19.88],[9.61,19.49],
      [10.33,19.59],[9.29,20.94],[10.17,19.64],[10.97,20.32],[10.08,19.16]
     ]
:
XM:= Matrix(X, datatype= float[8]):

infolevel[FuzzyCMeans]:= 1:

(C,U):= FuzzyCMeans(
     XM,
     #All options are set to their default values here:
     clusters= 2, fuzziness= 2, epsilon= 1e-4, max_iters= 999,
     Norm= proc(V) local x; sqrt(add(x^2, x= V)) end proc
);

FuzzyCMeans: Used 8 iterations.

 

C, U := Matrix([[5.17622947731172, 20.0938362193081], [10.2161565054282, 20.2331200701138]]),  (40 x 2 Matrix of membership degrees, datatype float[8]; display elided)

(1)

PlotClustersAndMeans2D(XM, C, U);

 

The two clusters are obvious in this case; consequently, the number of iterations was low. Let's ask for three clusters:

(C,U):= CodeTools:-Usage(FuzzyCMeans(XM, clusters= 3));

FuzzyCMeans: Used 60 iterations.

memory used=60.99MiB, alloc change=0 bytes, cpu time=530.00ms, real time=534.00ms

 

C, U := Matrix([[9.59964568674007, 19.8238795074764], [10.6838151702604, 20.4543667502028], [5.14884499714405, 20.0909363818171]]),  (40 x 3 Matrix of membership degrees, datatype float[8]; display elided)

(2)

PlotClustersAndMeans2D(XM, C, U);

 

I had expected that the right cluster would be divided in two, but we see that it converged instead on a division of the left cluster. The algorithm only claims to converge to a local minimum, and the initial guess is randomized, so perhaps another try will give a different result:

 

(C,U):= CodeTools:-Usage(FuzzyCMeans(XM, clusters= 3));

FuzzyCMeans: Used 47 iterations.
memory used=47.79MiB, alloc change=0 bytes, cpu time=483.00ms, real time=481.00ms

 

C, U := Matrix([[5.27087202309135, 19.5727941736791], [5.20533446031621, 20.7234689370591], [10.2698266410416, 20.2497388751727]]),  (40 x 3 Matrix of membership degrees, datatype float[8]; display elided)

(3)

PlotClustersAndMeans2D(XM, C, U);

 

 

 

Download Fuzzy_clusters.mw

@AndrewG Having read the link that you gave, I think that k-means clustering is just a crude first attempt at C-means clustering. So perhaps you can tell me when and why anyone would use k-means clustering given that they have access to C-means?

You wrote the above as if we were familiar with your assignment, as if we were your classmates who've already been working on it. But it's pretty much unintelligible without the bigger picture. How about posting the assignment itself?
