ahmedluss

40 Reputation

4 Badges

10 years, 253 days

MaplePrimes Activity


These are replies submitted by ahmedluss

Dear all,

I have a program written in MATLAB for Particle Swarm Optimization. How can I convert it into a program written in Maple?

The program:

% Particle Swarm Optimization in MATLAB
function [xmin, fxmin, iter] = PSO

% Initialize the parameters
success = 0;                      % success flag
PopSize = 20;                     % swarm size
MaxIt = 5000;                     % maximum number of iterations
iter = 0;                         % iteration counter
fevals = 0;                       % function-evaluation counter
maxw = 1.2;                       % maximum inertia weight
minw = 0.1;                       % minimum inertia weight
weveryit = floor(0.75*MaxIt);     % iterations over which w decreases
c1 = 0.5;                         % cognitive (self-confidence) factor
c2 = 0.5;                         % social (swarm-confidence) factor
inertdec = (maxw-minw)/weveryit;  % inertia-weight decrement
w = maxw;                         % initial inertia weight
f = 'DeJong';                     % objective function
dim = 2;                          % problem dimension
upbnd = 5;                        % upper bound of the search space
lwbnd = -5;                       % lower bound of the search space
GM = 0;                           % global minimum of DeJong's function
ErrGoal = 1e-3;                   % error goal

% Initializing the swarm positions and velocities
popul = rand(dim, PopSize)*(upbnd-lwbnd) + lwbnd;
vel = rand(dim, PopSize);

% Evaluate the initial swarm
for i = 1:PopSize,
  fpopul(i) = feval(f, popul(:,i));
  fevals = fevals + 1;
end

% Initializing the best positions and fitness values
bestpos = popul;
fbestpos = fpopul;

% Finding the best particle in the initial population
[fbestpart, g] = min(fpopul);
lastbpf = fbestpart;

% SWARM EVOLUTION LOOP - START
while (success == 0) & (iter < MaxIt),
  iter = iter + 1;

  % Update the value of the inertia weight w
  if (iter <= weveryit)
    w = maxw - (iter-1)*inertdec;
  end

  % VELOCITY UPDATE
  for i = 1:PopSize,
    A(:,i) = bestpos(:,g);
  end
  R1 = rand(dim, PopSize);
  R2 = rand(dim, PopSize);
  vel = w*vel + c1*R1.*(bestpos-popul) + c2*R2.*(A-popul);

  % SWARM UPDATE
  popul = popul + vel;

  % Evaluate the new swarm
  for i = 1:PopSize,
    fpopul(i) = feval(f, popul(:,i));
    fevals = fevals + 1;
  end

  % Updating the best positions and fitness values
  changeColumns = fpopul < fbestpos;
  fbestpos = fbestpos.*(~changeColumns) + fpopul.*changeColumns;
  bestpos(:, find(changeColumns)) = popul(:, find(changeColumns));

  % Updating the index of the best particle
  [fbestpart, g] = min(fbestpos);

  % Checking the stopping criterion
  if abs(fbestpart - GM) <= ErrGoal
    success = 1;
  else
    lastbpf = fbestpart;
  end
end
% SWARM EVOLUTION LOOP - END

% Output the best position found and its fitness
xmin = bestpos(:,g);   % best position found, whose fitness is fbestpos(g)
fxmin = fbestpos(g);

% DeJong's (sphere) objective function
function DeJong = DeJong(x)
DeJong = sum(x.^2);
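For anyone attempting the translation, here is a rough sketch of the same loop in Python/NumPy; the variable names follow the MATLAB listing above, but the random seed and the use of NumPy are my own choices, not part of the original, so treat it as a reference implementation of the algorithm rather than the definitive port:

```python
import numpy as np

def pso(pop_size=20, max_it=5000, dim=2, lwbnd=-5.0, upbnd=5.0,
        c1=0.5, c2=0.5, maxw=1.2, minw=0.1, gm=0.0, err_goal=1e-3,
        f=lambda x: np.sum(x**2)):       # DeJong's (sphere) function
    """Sketch of the MATLAB PSO listing; names mirror that listing."""
    rng = np.random.default_rng(0)       # assumed seed, for reproducibility
    weveryit = int(0.75 * max_it)        # iterations over which w decays
    inertdec = (maxw - minw) / weveryit  # inertia-weight decrement
    w = maxw
    popul = rng.random((dim, pop_size)) * (upbnd - lwbnd) + lwbnd
    vel = rng.random((dim, pop_size))
    fpopul = np.array([f(popul[:, i]) for i in range(pop_size)])
    bestpos = popul.copy()               # per-particle best positions
    fbestpos = fpopul.copy()             # per-particle best fitness values
    g = int(np.argmin(fbestpos))         # index of the global best
    it = 0
    while it < max_it:
        it += 1
        if it <= weveryit:               # linearly decreasing inertia weight
            w = maxw - (it - 1) * inertdec
        r1 = rng.random((dim, pop_size))
        r2 = rng.random((dim, pop_size))
        A = bestpos[:, [g]]              # global best, broadcast to all columns
        vel = w * vel + c1 * r1 * (bestpos - popul) + c2 * r2 * (A - popul)
        popul = popul + vel
        fpopul = np.array([f(popul[:, i]) for i in range(pop_size)])
        improved = fpopul < fbestpos     # particles that beat their own best
        fbestpos = np.where(improved, fpopul, fbestpos)
        bestpos[:, improved] = popul[:, improved]
        g = int(np.argmin(fbestpos))
        if abs(fbestpos[g] - gm) <= err_goal:
            break
    return bestpos[:, g], fbestpos[g], it
```

The same structure (outer `while`, per-particle evaluation loop, vectorized velocity update) carries over to a Maple `proc` with `LinearAlgebra` matrices.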
 

Thank you so much

Ahmed

 


Dear,

The algorithm of Particle Swarm Optimization:

In PSO a number of simple entities, the particles, are placed in the search space of some problem or function, and each evaluates the objective function at its current location. Each particle then determines its movement through the search space by combining some aspect of the history of its own current and best (best-fitness) locations with those of one or more members of the swarm, with some random perturbations. The next iteration takes place after all particles have been moved. Eventually the swarm as a whole, like a flock of birds collectively foraging for food, is likely to move close to an optimum of the fitness function.

Each individual in the particle swarm is composed of three D-dimensional vectors, where D is the dimensionality of the search space: the current position xi, the previous best position pi, and the velocity vi. The current position xi can be considered as a set of coordinates describing a point in space. On each iteration of the algorithm, the current position is evaluated as a problem solution. If that position is better than any that has been found so far, then the coordinates are stored in the second vector, pi. The value of the best function result so far is stored in a variable that can be called pbesti (for "previous best"), for comparison on later iterations. The objective, of course, is to keep finding better positions and updating pi and pbesti. New points are chosen by adding vi coordinates to xi, and the algorithm operates by adjusting vi, which can effectively be seen as a step size.

The particle swarm is more than just a collection of particles. A particle by itself has almost no power to solve any problem; progress occurs only when the particles interact. Problem solving is a population-wide phenomenon, emerging from the individual behaviors of the particles through their interactions. In any case, populations are organized according to some sort of communication structure or topology, often thought of as a social network. The topology typically consists of bidirectional edges connecting pairs of particles, so that if j is in i's neighborhood, i is also in j's. Each particle communicates with some other particles and is affected by the best point found by any member of its topological neighborhood. This is just the vector pi for that best neighbor, which we will denote with pg. The potential kinds of population "social networks" are hugely varied, but in practice certain types have been used more frequently. In the particle swarm optimization process, the velocity of each particle is iteratively adjusted so that the particle stochastically oscillates around the pi and pg locations. The (original) process for implementing PSO is as in the algorithm below.

Original PSO:

1: Initialize a population array of particles with random positions and velocities on D dimensions in the search space.

2: loop

3: For each particle, evaluate the desired optimization fitness function in D variables.

4: Compare the particle's fitness evaluation with its pbesti. If the current value is better than pbesti, then set pbesti equal to the current value, and pi equal to the current location xi in D-dimensional space.

5: Identify the particle in the neighborhood with the best success so far, and assign its index to the variable g.

6: Change the velocity and position of the particle according to the following equations:

vi ← vi + U(0, φ1) ⊗ (pi − xi) + U(0, φ2) ⊗ (pg − xi),

xi ← xi + vi.                    (1)

7: If a criterion is met (usually a sufficiently good fitness or a maximum number of iterations), exit loop.

8: end loop

Notes:

– U(0, φi) represents a vector of random numbers uniformly distributed in [0, φi], randomly generated at each iteration and for each particle.

– ⊗ is component-wise multiplication.

– In the original version of PSO, each component of vi is kept within the range [−Vmax, +Vmax].

thanks.
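Step 6 of the algorithm can be sketched in code as follows. Python is used only for illustration, and the values of φ1, φ2 and Vmax are assumed examples, since the text leaves them unspecified:

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity_update(v_i, x_i, p_i, p_g, phi1=2.05, phi2=2.05, vmax=4.0):
    """One application of the original PSO update (step 6, equation (1)).

    U(0, phi) is a vector of uniform random numbers in [0, phi], drawn
    fresh for each particle at each iteration; the products with
    (p_i - x_i) and (p_g - x_i) are component-wise.  phi1, phi2 and
    vmax are illustrative values, not prescribed by the algorithm text.
    """
    u1 = rng.uniform(0.0, phi1, size=v_i.shape)
    u2 = rng.uniform(0.0, phi2, size=v_i.shape)
    v_new = v_i + u1 * (p_i - x_i) + u2 * (p_g - x_i)
    v_new = np.clip(v_new, -vmax, vmax)  # keep each component in [-Vmax, +Vmax]
    x_new = x_i + v_new                  # position update
    return v_new, x_new
```

Note that when a particle sits exactly at both pi and pg, the two attraction terms vanish and the velocity passes through unchanged (up to clamping).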

 


Hello all,

 

Thank you so much, but I have the algorithm of this method; could anyone help me write it in Maple?

Regards.

The algorithm: After finding the two best values, the particle updates its velocity and position according to equations (1) and (2) respectively.

(vk+1)i = w (vk)i + c1 r1 (pbesti − (xk)i) + c2 r2 (gbestk − (xk)i)        (1)

(xk+1)i = (xk)i + (vk+1)i        (2)

where (vk)i is the velocity of the ith particle at the kth iteration and (xk)i is the current solution (or position) of the ith particle. r1 and r2 are random numbers generated uniformly between 0 and 1. c1 is the self-confidence (cognitive) factor and c2 is the swarm confidence (social) factor. Usually c1 and c2 are in the range from 1.5 to 2.5. Finally, w is the inertia factor that takes linearly decreasing values downward from 1 to 0 according to a predefined number of iterations, as recommended by Haupt and Haupt [2004].

The 1st term in equation (1) represents the effect of the inertia of the particle, the 2nd term represents the particle memory influence, and the 3rd term represents the swarm (society) influence. The velocities of the particles on each dimension may be clamped to a maximum velocity Vmax, which is a parameter specified by the user. If the sum of accelerations causes the velocity on that dimension to exceed Vmax, then this velocity is limited to Vmax [Haupt and Haupt 2004]. Another type of clamping is to clamp the position of the current solution to a certain range in which the solution has a valid value; otherwise the solution is meaningless [Haupt and Haupt 2004]. In this chapter, position clamping is applied with no limitation on the velocity values.
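Equations (1) and (2), together with the position clamping just described, can be sketched for one particle as follows. Python is used for illustration; the values of c1, c2 and the search bounds are assumed examples, not taken from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso_step(v_k, x_k, pbest, gbest, w, c1=2.0, c2=2.0,
             lwbnd=-5.0, upbnd=5.0):
    """Equations (1) and (2) for one particle, with position clamping.

    c1, c2 and the bounds are illustrative values; r1 and r2 are
    uniform in [0, 1] and redrawn on every call.
    """
    r1 = rng.random()
    r2 = rng.random()
    v_next = w * v_k + c1 * r1 * (pbest - x_k) + c2 * r2 * (gbest - x_k)  # eq. (1)
    x_next = x_k + v_next                                                 # eq. (2)
    x_next = np.clip(x_next, lwbnd, upbnd)  # position clamping, as in the text
    return v_next, x_next
```

With pbest = gbest = xk, the two attraction terms vanish and only the inertia term w·(vk)i moves the particle, which makes the role of w easy to check.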

 


PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. During every iteration, each particle is updated by following two "best" values. The first one is the position vector of the best solution (fitness) this particle has achieved so far; the fitness value is also stored. This position is called pbest. The other best position that is tracked by the particle swarm optimizer is the best position obtained so far by any particle in the population. This best position is the current global best and is called gbest.
