Sergey Moiseev

Sergey N. Moiseev received M.S., Ph.D. and Dr.Sc. degrees in radio physics from Voronezh State University, Voronezh, Russia, in 1986, 1993 and 2003, respectively. From 1984 to 2003 his research topics included the theory and methods of signal processing, nonlinear optimization, decision-making theory, time-series prediction, statistical radio physics, and ionospheric sporadic-channel models. He is currently a principal scientist at JSC Kodofon, Voronezh, Russia. His current research interests span a wide area of communications.

MaplePrimes Activity


These are replies submitted by Sergey Moiseev

@mehdi jafari The help pages are available as usual:
1) Type ?DirectSearch, or ?Search, or ?SolveEquations, etc.
2) Place the cursor on a package command and press F2.

If you have Maple 2017 and installed the package (v. 2.01) automatically from the MapleCloud, then there should not be any problems.

If you have Maple 17 (or an earlier version), the help for the package (v. 2.0) is the file DirectSearch.hdb.

If you have Maple 18 (or a later version), the help for the package (v. 2.0) is the file DirectSearch.help.

@mehdi jafari The 'checkexit' is an option of the DirectSearch package. This package is not part of Maple itself; you must install it from the Application Center or from the MapleCloud.

The objective function does not contain the variables x3 and y3. If the variables option is not specified, the problem variables are the indeterminates of type name found in the objective function (see the help page). That is, x3 and y3 are not treated as optimization variables, even though the constraints contain them. Therefore, in such cases the variables option must be specified: variables=[x1, x2, x3, x4, y1, y2, y3, y4].
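For instance (a hedged sketch: the objective and constraints below are illustrative, not the original poster's code):

with(DirectSearch):
# hypothetical objective: x3 and y3 do not occur in it
obj := (x1-x4)^2 + (y1-y4)^2:
# ...but the constraints do contain x3 and y3
cons := [x2 <= x3, x3 <= x4, y2 <= y3, y3 <= y4]:
# without the variables option, x3 and y3 would be ignored by the optimizer
sol := GlobalOptima(obj, cons, variables=[x1, x2, x3, x4, y1, y2, y3, y4]);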

@Markiyan Hirnyk Yes, it is because the step size is equal to 1 by default for all the DirectSearch local optimizers. If a function/equation has intervals of constant value, the step size should be greater than the maximum length of those intervals. A large step size allows the optimizer to step over these constant-value intervals.

@Carl Love The sequence of commands:

Data:=[[50, 0.24623131e-1], [150, 0.30492576e-1], [250, 0.37921405e-1], [350, 0.41231232e-1], [450, 0.48429132e-1], [550, 0.53879065e-1], [650, 0.63028417e-1], [750, 0.7402681e-1], [850, 0.84360848e-1], [950, 0.93120204e-1], [1050, .109879577], [1150, .125467687], [1250, .143243543], [1350, .163460496], [1450, .180390724], [1550, .198975116], [1650, .212349613], [1750, .241651973], [1850, .26603949], [1950, .274659312], [2050, .302078022]];

X := <Data[..,1]>;
Y := <Data[..,2]>;

f := a+b*abs(x-d)^c; # model function
# this model has four parameters, i.e. fewer parameters than the initial model

sol:=DirectSearch:-DataFit(f, [c>=1], X, Y, x, tolerances=10^(-14));
f1:=eval(f,sol[2]);

with(plots):
p1,p2:=pointplot(X,Y),plot(f1,x=min(X)..max(X));
plots[display](p1,p2);

# run DataFit several times (the solution is unstable) and select the best solution

# by the way, the model f := a+b*abs(x)^c with three parameters also gives a good fit
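The three-parameter fit mentioned in the comment above can be tried the same way (a sketch reusing the X, Y and the DataFit call from the listing; untested):

f3 := a+b*abs(x)^c;  # three-parameter model (the shift d is dropped)
sol3 := DirectSearch:-DataFit(f3, [c>=1], X, Y, x, tolerances=10^(-14));
plots[display](plots[pointplot](X,Y), plot(eval(f3, sol3[2]), x=min(X)..max(X)));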

@Carl Love 73 is the number of objective-function evaluations needed to obtain the solution. The objective function is the average sum of squared residuals. Therefore 0.64 is the average sum of squared residuals for the solution, and the standard deviation of the residuals is sqrt(0.64)=0.8. The solution has only 2 or 3 digits, perhaps because the data has only 1 digit, or because of round-off errors during optimization.

By the way, an interesting solution can be obtained by the minimax method:

y := A*x+B*(sum(exp(-n^2*(x-C))/n^2, n = 1 .. 10));
X := Vector([1, 2, 3, 4, 5], datatype = float);
Y := Vector([2, 2, 6, 6, 8], datatype = float);

solm:=DataFit(y, X, Y, x, fitmethod=minimax);
ysolm:=eval(y,solm[2]);

solm:=[1.07497119958174, [A = 1.67553273091582, B = -35.0352285667293, C = -2.84336714978538], 842]


@Markiyan Hirnyk The solution [2, [x=1]] cannot be the maximum of the following function on the range x=-1.4..1.4:

f:=x^2+1-exp(-1/(1000*(x-1))^2);

See, for example, the plot:

evalf(eval(f,x=1.00001));
evalf(eval(f,x=1.0001));
plot(f, x=0.99999..1.00001);

The maximum point must have x>1, and the maximum value must be strictly greater than 2, because the term exp(-1/(1000*(x-1))^2) of the function is symmetric about x=1, while the term x^2+1 is greater at x=1+epsilon than at x=1-epsilon.
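This asymmetry is easy to verify numerically (a small sketch using the f defined above; the exp term cancels by symmetry, leaving the difference of the x^2 terms):

f := x^2+1-exp(-1/(1000*(x-1))^2):
eps := 1e-6:
# f is strictly larger just to the right of x=1 than just to the left
evalf(eval(f, x=1+eps) - eval(f, x=1-eps));  # positive, approximately 4*eps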

What do you mean? The accuracy is very high. See:

f:=x^2+1-exp(-1/(1000*(x-1))^2);
plot(f,x=1.000309735..1.0003097378);

@john2 Replace the incorrect option 'maximise' with the correct one, 'maximize'.


@Markiyan Hirnyk It would be much better if the int() command performed this change of variable itself.

More powerful symbolic int() and solve().

Compared to Mathematica, Maple cannot evaluate many symbolic integrals and cannot solve many equations and inequalities with additional parameters. For example:

int(y^2*(1-exp(-y/b))^(alpha-1)*exp(-y/b), y=0 .. infinity) assuming alpha>0,b>0;

solve(sqrt(x+a)<x, x);

@Markiyan Hirnyk The DirectSearch solutions approach the minimum of the objective function, -> 4. If we assume that the true minimum of the objective function is equal to 4, we can solve the problem analytically with the solve command:

f:=(3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c;
solve(f=4, [a,b,c,t,u])[1];

If we take into account Acer's assumption u=t=sqrt(2)/2, the solution becomes simpler:

f1:=eval(f, [t=sqrt(2)/2,u=sqrt(2)/2]);
solve(f1=4, [a,b,c])[1];

Now, by setting various values of a and b, we can obtain an infinite number of solutions that give the minimum of the objective function, 4.
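One concrete member of this solution family can be picked out as follows (a sketch; the values a=1, b=1 are arbitrary choices, not from the original problem):

f1 := eval((3*a/sqrt(1-u^2)+b/sqrt(1-t^2))/c, [t=sqrt(2)/2, u=sqrt(2)/2]):
s := solve(f1=4, [a,b,c])[1]:
eval(s, [a=1, b=1]);  # a specific (a,b,c) achieving the objective value 4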

