Venkat Subramanian

416 Reputation

13 Badges

15 years, 42 days

MaplePrimes Activity


These are questions asked by Venkat Subramanian

Let us say f(x) is a procedure, and x can take two possible values.
How do I store the statistics from CodeTools:-Usage(f(1)) and CodeTools:-Usage(f(2)) for ease of comparison? Hopefully, I am not missing any help pages/examples.

Dear All,
My group recently published a DAE solver in Maple, https://mapletransactions.org/index.php/maple/article/view/16701
This solver is highly competitive with existing large-scale DAE solvers (even in other languages): it performs symbolic computation of the analytical Jacobian (including its sparsity pattern) and uses a parallel sparse direct solver (PARDISO).

The code and paper use ListTools. If the attached procedure can be made threadsafe and parallelized, the resulting DAE solver would be publishable and better than most solvers (in any language) available today. We welcome collaborations and suggestions. The code works for N = 40, but the kernel connection is lost for higher values of N, and it is not stable.

thanks

Dr. Venkat Subramanian
PS: for this specific problem, it is trivial to write the Jacobian and residual in a for loop that can be run in parallel (which helps us solve 1 million or more DAEs), but the attached code (makeproc) handles DAEs resulting from sophisticated discretization approaches for PDEs without user input.

 


restart:

Digits:=15;

15

makeproc := proc(n0, nf, Nt, Eqode::Array, Vars::list, Vars1::list,
                 Equation::Array, j11::Matrix(storage = sparse))
local i, j, LL, LL2, L, i1, varsdir, eqs;
    for i from n0 to nf do
        eqs := rhs(Eqode[i]);
        # variables (other than t) appearing in this ODE's right-hand side
        L := indets(eqs) minus {t};
        if nops(L) > 0 then
            LL := [seq(ListTools:-Search(L[j], Vars), j = 1 .. nops(L))];
            LL2 := ListTools:-MakeUnique(LL);
            # drop the 0 entry that Search returns for "not found"
            if LL2[nops(LL2)] = 0 then
                LL := [seq(LL2[i1], i1 = 1 .. nops(LL2) - 1)];
            end if;
            if LL2[1] = 0 then
                LL := [seq(LL2[i1], i1 = 2 .. nops(LL2))];
            end if;
            varsdir := [seq(Vars[LL[i1]] = 0.5*uu[LL[i1]] + uu_old[LL[i1]],
                            i1 = 1 .. nops(LL))];
        else
            varsdir := [1 = 1];
        end if;

        # discretized residual for equation i
        Equation[i] := uu[i] - deltA*subs(varsdir, t = tnew, eqs);

        # sparse Jacobian entries: differentiate only wrt variables present
        L := indets(Equation[i]);
        LL := [seq(ListTools:-Search(L[j], Vars1), j = 1 .. nops(L))];
        LL2 := ListTools:-MakeUnique(LL);
        if LL2[nops(LL2)] = 0 then
            LL := [seq(LL2[i1], i1 = 1 .. nops(LL2) - 1)];
        end if;
        if LL2[1] = 0 then
            LL := [seq(LL2[i1], i1 = 2 .. nops(LL2))];
        end if;
        for j to nops(LL) do
            j11[i, LL[j]] := diff(Equation[i], uu[LL[j]]);
        end do;
    end do;
end proc:
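For readers outside Maple, the index bookkeeping inside makeproc can be sketched in Python: find each variable's position in the global variable list (0 standing for "not found", as with ListTools:-Search), deduplicate, and drop the not-found marker. The function name and string variables below are illustrative, not part of the worksheet.

```python
def present_indices(eq_vars, all_vars):
    """Return the (1-based, Maple-style) positions in all_vars of the
    variables in eq_vars that are actually present, deduplicated."""
    pos = {v: k + 1 for k, v in enumerate(all_vars)}   # 1-based, like Maple
    hits = [pos.get(v, 0) for v in eq_vars]            # 0 marks "not found"
    seen, out = set(), []
    for k in hits:
        if k and k not in seen:                        # skip 0 and repeats
            seen.add(k)
            out.append(k)
    return out
```

This is what lets the procedure differentiate each residual only with respect to the variables that actually occur in it, producing the sparse Jacobian pattern.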

CodeTools[ThreadSafetyCheck](makeproc);

0, 1

#infolevel[all]:=10;

 

N:=40;h:=1.0/N:

40

with(Threads[Task]):

Eqs:=Array(1..N,1..N):

for i from 1 to N do
    for j from 1 to N do
        if i = 1 then left := 0 else left := (c[i,j](t) - c[i-1,j](t))/h end if;
        if i = N then right := -0.1 else right := (c[i+1,j](t) - c[i,j](t))/h end if;
        if j = 1 then bot := 0 else bot := (c[i,j](t) - c[i,j-1](t))/h end if;
        if j = N then top := -0.1 else top := (c[i,j+1](t) - c[i,j](t))/h end if;
        Eqs[i,j] := diff(c[i,j](t), t) = (right - left)/h + (top - bot)/h - c[i,j](t)^2;
    end do;
end do:
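The loop above builds, for each grid node, one-sided flux differences with zero flux on the left/bottom boundaries, a fixed flux of -0.1 on the right/top boundaries, and a -c^2 reaction term. The same right-hand side can be sketched in plain Python (the function name and row-major flattening are illustrative, not from the worksheet):

```python
def rhs_grid(c, N, h):
    """Return dc/dt for the flattened N*N grid c (row-major: c[i*N+j]),
    using the same fluxes and boundary values as the Maple loop."""
    dcdt = [0.0] * (N * N)
    for i in range(N):
        for j in range(N):
            cij = c[i * N + j]
            left  = 0.0  if i == 0     else (cij - c[(i - 1) * N + j]) / h
            right = -0.1 if i == N - 1 else (c[(i + 1) * N + j] - cij) / h
            bot   = 0.0  if j == 0     else (cij - c[i * N + j - 1]) / h
            top   = -0.1 if j == N - 1 else (c[i * N + j + 1] - cij) / h
            dcdt[i * N + j] = (right - left) / h + (top - bot) / h - cij * cij
    return dcdt
```

With a uniform field all interior differences cancel, so every interior node reduces to the reaction term alone, while the right/top corner also feels the two boundary fluxes.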

 

eqs1:=Array([seq(seq(Eqs[i,j],i=1..N),j=1..N)]):

ics:=[seq(seq(c[i,j](t)=1.0,i=1..N),j=1..N)]:

Vars:=[seq(seq(c[i,j](t),i=1..N),j=1..N)]:

Vars1:=[seq(uu[i],i=1..N^2)]:

Equation:=Array(1..N^2):j11:=Matrix(1..N^2,1..N^2,storage=sparse):eqs:=copy(Equation):

CodeTools:-Usage(makeproc(1,N^2,N^2,eqs1,Vars,Vars1,Equation,j11));

memory used=21.47MiB, alloc change=24.99MiB, cpu time=235.00ms, real time=231.00ms, gc time=62.50ms

1-deltA*(-1600.00000000000-.50*uu[1600]-1.0*uu_old[1600])

j11[5,5];

1-deltA*(-2400.00000000000-.50*uu[5]-1.0*uu_old[5])

#Equation[1];

makeprocDistribute := proc(i_low, i_high, Nt, Eqode::Array, Vars::list,
                           Vars1::list, Equation::Array,
                           j11::Matrix(storage = sparse))
local i_mid;
    if 200 < i_high - i_low then
        i_mid := iquo(i_low + i_high, 2);
        Continue(null,
            Task = [makeprocDistribute, i_low, i_mid, Nt, Eqode, Vars,
                    Vars1, Equation, j11],
            Task = [makeprocDistribute, i_mid + 1, i_high, Nt, Eqode, Vars,
                    Vars1, Equation, j11]);
    else
        makeproc(i_low, i_high, Nt, Eqode, Vars, Vars1, Equation, j11);
    end if;
end proc:
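The divide-and-conquer pattern behind makeprocDistribute can be sketched outside the Maple Task model with the Python standard library: split the index range in half until it is at most `grain` indices wide, then run the leaf ranges as thread-pool jobs. All names below are illustrative; `work` stands in for makeproc.

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(lo, hi, grain=200):
    """Yield (lo, hi) leaf ranges covering lo..hi, each at most ~grain wide,
    mirroring the `200 < i_high - i_low` splitting threshold."""
    if hi - lo > grain:
        mid = (lo + hi) // 2
        yield from split_ranges(lo, mid, grain)
        yield from split_ranges(mid + 1, hi, grain)
    else:
        yield (lo, hi)

def distribute(work, lo, hi, grain=200):
    """Run work(a, b) on each leaf range in a thread pool; return results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(work, a, b)
                   for a, b in split_ranges(lo, hi, grain)]
        return [f.result() for f in futures]
```

Note that the splitting itself is safe; whether the leaves can run concurrently depends on the leaf procedure being threadsafe, which is exactly the open issue with makeproc above.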

j11[5,5];

1-deltA*(-2400.00000000000-.50*uu[5]-1.0*uu_old[5])

N^2;

1600

NN:=N^2;

1600

Equation:=Array(1..N^2):j11:=Matrix(1..N^2,1..N^2,storage=sparse):

CodeTools:-Usage(Start(makeprocDistribute,1,NN,NN,eqs1,Vars,Vars1,Equation,j11)):

memory used=22.01MiB, alloc change=239.31MiB, cpu time=985.00ms, real time=241.00ms, gc time=1.61s

j11[5,5];

1-deltA*(-2400.00000000000-.50*uu[5]-1.0*uu_old[5])

 



 

Download makeproctest.mw
 


 


 


restart:

Digits:=15;

15

A:=Matrix(4,4,[[-1,2,0,0],[2,-1,2,0],[0,2,-1,2],[0,0,1,-1]],datatype=float[8],storage=sparse);

Matrix(4, 4, {(1, 1) = -1.0000000000000000, (1, 2) = 2.0000000000000000, (1, 3) = 0., (1, 4) = 0., (2, 1) = 2.0000000000000000, (2, 2) = -1.0000000000000000, (2, 3) = 2.0000000000000000, (2, 4) = 0., (3, 1) = 0., (3, 2) = 2.0000000000000000, (3, 3) = -1.0000000000000000, (3, 4) = 2.0000000000000000, (4, 1) = 0., (4, 2) = 0., (4, 3) = 1.0000000000000000, (4, 4) = -1.0000000000000000})

b:=Vector(4,[1,0.,0,0],datatype=float[8]):

 

sol:=LinearAlgebra:-LinearSolve(A,b,method=SparseDirectMKL);

Error, invalid input: LinearAlgebra:-LinearSolve expects value for keyword parameter method to be of type identical(none,SparseLU,SparseDirect,SparseDirectMKL,SparseIterative,LU,QR,solve,hybrid,Cholesky,subs,modular), but received SparseDirectMKL



 

Download buglinearsolve.mw

I was pleased to see the description of SparseDirectMKL, but it is not implemented properly yet. It is a step in the right direction, so please make it available in the near future.

 

In a recent question/conversation, I had discussed integrating dsolve/numeric-based codes with NLPsolve at 
https://www.mapleprimes.com/questions/236494-Inconsistent-Behavior-With-Dsolvenumeric-And-NLPSolve-sqp

@acer was able to help resolve the issue by calling NLPSolve with a higher optimality tolerance.

I am starting a new question/post to show the need for evalhf as the previous post takes too long to load.
Code is attached for the same.
 

restart:

currentdir();

"C:\Users\Venkat\OneDrive - The University of Texas at Austin\Documents\trial\Learnningsqpoptim"

Test code written by Dr. Venkat Subramanian at UT Austin, 05/31/2023, updated multiple times since. This code uses the CVP approach (piecewise-constant control) to perform optimal control. NLPSolve combined with an RK2 approach to integrate the ODEs (constant step size) works well for this problem. The dsolve/numeric based codes also work (with the optimality-tolerance fix from acer), but cannot handle large numbers of optimization variables (nvar).
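For readers unfamiliar with CVP, the whole objective can be sketched in a few lines of Python: u(t) is piecewise constant on nvar equal intervals of [0, 1]; each interval is integrated (here with Heun's two-stage method, playing the role of the RK2 below) starting from the previous interval's end state, and the objective is -cb(1). All names and the sub-step count are illustrative, not from the worksheet.

```python
def cvp_objective(x):
    """x: list of nvar control levels; returns -cb at t = 1 for the
    system ca' = -(u + u^2/2)*ca, cb' = u*ca - 0.1*cb, ca(0)=1, cb(0)=0."""
    nvar = len(x)
    dt = 1.0 / nvar
    n_sub = 64                     # fixed sub-steps per control interval
    h = dt / n_sub
    ca, cb = 1.0, 0.0
    for u in x:                    # one pass per piecewise-constant level
        for _ in range(n_sub):
            f1a = -(u + u * u / 2.0) * ca
            f1b = u * ca - 0.1 * cb
            pa, pb = ca + h * f1a, cb + h * f1b        # Euler predictor
            f2a = -(u + u * u / 2.0) * pa
            f2b = u * pa - 0.1 * pb
            ca += h * (f1a + f2a) / 2.0                # trapezoidal corrector
            cb += h * (f1b + f2b) / 2.0
    return -cb
```

The optimizer then searches over the nvar control levels; for a constant control u = 0.1 this reproduces the value near -0.0902579 seen in the worksheet.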

Digits:=15;

15

 

eqodes:=[diff(ca(t),t)=-(u+u^2/2)*1.0*ca(t),diff(cb(t),t)=1.0*u*ca(t)-0.1*cb(t)];

[diff(ca(t), t) = -1.0*(u+(1/2)*u^2)*ca(t), diff(cb(t), t) = 1.0*u*ca(t)-.1*cb(t)]

soln:=dsolve({op(eqodes),ca(0)=alpha,cb(0)=beta},type=numeric,'parameters'=[alpha,beta,u],savebinary=true):

 

 

Note that Vector or Array form can be used in RK2h to implement the procedure for any number of variables/ODEs. The challenge arises when implicit methods are used to integrate ODEs/DAEs and run in evalhf form (this can be done with a Gauss-elimination-type linear solver, but it will be limited to a small number of ODEs/DAEs, say 200 or so).

RK2h := proc(NN, u, dt, y0::Vector(datatype = float[8]))
    # one control interval integrated with NN two-stage (RK2) sub-steps;
    # this procedure could be made more efficient in vector form
local j, c1mid, c2mid, c10, c20, c1, c2;
option hfloat;
    c10 := y0[1]; c20 := y0[2];
    for j from 1 to NN do
        c1mid := c10 + dt/NN*(-(u + u^2/2)*c10);
        c2mid := c20 + dt/NN*(u*c10) - 0.1*dt/NN*c20;
        c1 := c10/2 + c1mid/2 + dt/NN/2*(-(u + u^2/2)*c1mid);
        c2 := c20/2 + c2mid/2 + dt/NN/2*(u*c1mid) - 0.1*dt/NN/2*c2mid;
        c10 := c1; c20 := c2;
    end do;
    y0[1] := c10; y0[2] := c20;
end proc:

 

soln('parameters'=[1,0,0.1]);soln(0.1);

[alpha = 1., beta = 0., u = .1]

[t = .1, ca(t) = HFloat(0.9895549324188543), cb(t) = HFloat(0.009898024276129616)]

 

 

ssdsolve := proc(x)
    # objective via the dsolve/numeric parametric solution: integrate over
    # nvar piecewise-constant control intervals and return -cb(1).
    # (A type(x, Vector) guard is not needed for this problem, but might
    # be needed for others.)
local z1, i, c10, c20, dt, u;
global soln, nvar;
    interface(warnlevel = 0);
    dt := evalf(1.0/nvar);
    c10 := 1.0; c20 := 0.0;
    for i from 1 to nvar do
        u := x[i];
        soln('parameters' = [c10, c20, u]);
        z1 := soln(dt);
        c10 := subs(z1, ca(t)); c20 := subs(z1, cb(t));
    end do;
    -c20;
end proc:

 

 

ssRK2 := proc(x)
    # objective based on RK2: integrate over nvar piecewise-constant
    # control intervals with RK2h and return -cb(1); evalhf-friendly
option hfloat;
local i, dt, u, NN, y0;
global nvar, RK2h;
    y0 := Array(1 .. 2, [1.0, 0.0], datatype = float[8]);
    dt := evalf(1.0/nvar);
    NN := 256*2/nvar;  # NN is hardcoded assuming nvar is a power of 2, <= 256
    for i from 1 to nvar do
        u := x[i];
        evalhf(RK2h(NN, u, dt, y0));
    end do;
    -y0[2];
end proc:

nvar:=2;

2

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float):

bl := Vector(nvar,[seq(0.,i=1..nvar)],datatype=float):bu := Vector(nvar,[seq(5.,i=1..nvar)],datatype=float):

infolevel[Optimization]:=15;

15

CodeTools:-Usage(Optimization:-NLPSolve(nvar,evalhf(ssRK2),[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 11

memory used=0.96MiB, alloc change=0 bytes, cpu time=16.00ms, real time=121.00ms, gc time=0ns

 

When NLPSolve calls the RK2-based procedure, it falls back to evalf for the numerical gradient (fdiff). Providing a gradient procedure avoids this.

gradRK2 := proc(x, g)
    # forward-difference gradient of ssRK2; reuses the base objective
    # value, so the gradient costs nvar + 1 objective evaluations
option hfloat;
local base, i, xnew, del;
global nvar, ssRK2;
    xnew := Array(1 .. nvar, datatype = float[8]);
    base := ssRK2(x);
    for i to nvar do xnew[i] := x[i]; end do;
    for i to nvar do
        del := max(1e-5, 1e-7*x[i]);
        xnew[i] := xnew[i] + del;
        g[i] := (ssRK2(xnew) - base)/del;
        xnew[i] := xnew[i] - del;
    end do;
end proc:
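The same forward-difference pattern as gradRK2, sketched in Python: perturb one coordinate at a time and reuse the base objective value, so an n-variable gradient costs n + 1 objective evaluations. The name fd_grad and the use of abs() in the step rule are illustrative additions.

```python
def fd_grad(f, x):
    """Forward-difference gradient of scalar objective f at point x."""
    base = f(x)
    g = [0.0] * len(x)
    xnew = list(x)
    for i in range(len(x)):
        delta = max(1e-5, 1e-7 * abs(x[i]))   # absolute floor on the step
        xnew[i] += delta
        g[i] = (f(xnew) - base) / delta
        xnew[i] = x[i]                        # restore before next coordinate
    return g
```

The forward difference has O(delta) truncation error, which is adequate here since the optimizer only needs the gradient to its optimality tolerance.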

CodeTools:-Usage(Optimization:-NLPSolve(nvar,evalhf(ssRK2),[],initialpoint=ic0,[bl,bu],objectivegradient=gradRK2,optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

attemptsolution: number of major iterations taken 11

memory used=184.77KiB, alloc change=0 bytes, cpu time=31.00ms, real time=110.00ms, gc time=0ns

A significant saving in memory is seen. CPU time is also lower, which becomes more apparent at larger values of nvar.

 

The dsolve/numeric based code works for optimization, but evalf is apparently invoked for both the objective and the gradient.

CodeTools:-Usage(Optimization:-NLPSolve(nvar,(ssdsolve),[],initialpoint=ic0,[bl,bu],optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 11

memory used=14.61MiB, alloc change=37.00MiB, cpu time=94.00ms, real time=213.00ms, gc time=46.88ms

 

Providing a gradient procedure doesn't help with evalhf computation for dsolve/numeric based procedures.

graddsolve := proc(x, g)
    # forward-difference gradient of ssdsolve (same pattern as gradRK2)
local base, i, xnew, del;
global nvar, ssdsolve;
    xnew := Array(1 .. nvar, datatype = float[8]);
    base := ssdsolve(x);
    for i to nvar do xnew[i] := x[i]; end do;
    for i to nvar do
        del := max(1e-5, 1e-7*x[i]);
        xnew[i] := xnew[i] + del;
        g[i] := (ssdsolve(xnew) - base)/del;
        xnew[i] := xnew[i] - del;
    end do;
end proc:

CodeTools:-Usage(Optimization:-NLPSolve(nvar,(ssdsolve),[],initialpoint=ic0,[bl,bu],objectivegradient=graddsolve,optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 11

memory used=3.82MiB, alloc change=0 bytes, cpu time=31.00ms, real time=129.00ms, gc time=0ns

 

Calling both the RK2 and dsolve based procedures again to check the values and computation metrics.

s1RK2:=CodeTools:-Usage(Optimization:-NLPSolve(nvar,evalhf(ssRK2),[],initialpoint=ic0,[bl,bu],objectivegradient=gradRK2,optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

attemptsolution: number of major iterations taken 11

memory used=185.09KiB, alloc change=0 bytes, cpu time=16.00ms, real time=95.00ms, gc time=0ns

s1dsolve:=CodeTools:-Usage(Optimization:-NLPSolve(nvar,ssdsolve,[],initialpoint=ic0,[bl,bu],objectivegradient=graddsolve,optimalitytolerance=1e-6)):

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

NLPSolve: feasibility tolerance set to 0.1053671213e-7

NLPSolve: optimality tolerance set to 0.1e-5

NLPSolve: iteration limit set to 50

NLPSolve: infinite bound set to 0.10e21

NLPSolve: trying evalhf mode

NLPSolve: trying evalf mode

attemptsolution: number of major iterations taken 11

memory used=3.82MiB, alloc change=0 bytes, cpu time=31.00ms, real time=122.00ms, gc time=0ns

s1RK2[1];s1dsolve[1];

-.523900304316377463

-.523901163953022553

 

Next, a for loop is written to optimize for increasing values of nvar. One can see that evalhf is important for larger values of nvar. While dsolve/numeric is a superior code, not being able to use it in evalhf form is a significant weakness and should be addressed. Note that dsolve/numeric itself evaluates procedures in evalhf or compiled form, so hopefully this is an easy fix.

infolevel[Optimization]:=0:

for j from 1 to 9 do
nvar:=2^(j-1):
ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float):
bl := Vector(nvar,[seq(0.,i=1..nvar)],datatype=float):bu := Vector(nvar,[seq(5.,i=1..nvar)],datatype=float):
soptRK[j]:=CodeTools:-Usage(Optimization:-NLPSolve(nvar,evalhf(ssRK2),[],initialpoint=ic0,[bl,bu],objectivegradient=gradRK2,optimalitytolerance=1e-6)):
print(2^(j-1),soptRK[j][1]);

od:

memory used=88.40KiB, alloc change=0 bytes, cpu time=0ns, real time=10.00ms, gc time=0ns

1, HFloat(-0.5008626988793192)

memory used=155.66KiB, alloc change=0 bytes, cpu time=16.00ms, real time=30.00ms, gc time=0ns

2, -.523900304316377463

memory used=175.36KiB, alloc change=0 bytes, cpu time=62.00ms, real time=80.00ms, gc time=0ns

4, -.535152497919956782

memory used=243.69KiB, alloc change=0 bytes, cpu time=156.00ms, real time=239.00ms, gc time=0ns

8, -.540546896131004706

memory used=260.09KiB, alloc change=0 bytes, cpu time=141.00ms, real time=260.00ms, gc time=0ns

16, -.542695734426874465

memory used=385.84KiB, alloc change=0 bytes, cpu time=313.00ms, real time=545.00ms, gc time=0ns

32, -.542932877726400531

memory used=0.65MiB, alloc change=0 bytes, cpu time=734.00ms, real time=1.18s, gc time=0ns

64, -.543011976841572652

memory used=1.24MiB, alloc change=0 bytes, cpu time=1.55s, real time=2.40s, gc time=0ns

128, -.543035276649319276

memory used=2.99MiB, alloc change=0 bytes, cpu time=3.45s, real time=5.92s, gc time=0ns

256, -.543041496228812814

for j from 1 to 6 do
nvar:=2^(j-1):
ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float):
bl := Vector(nvar,[seq(0.,i=1..nvar)],datatype=float):bu := Vector(nvar,[seq(5.,i=1..nvar)],datatype=float):
soptdsolve[j]:=CodeTools:-Usage(Optimization:-NLPSolve(nvar,evalf(ssdsolve),[],initialpoint=ic0,[bl,bu],objectivegradient=graddsolve,optimalitytolerance=1e-6)):
print(2^(j-1),soptdsolve[j][1]);

od:

memory used=0.66MiB, alloc change=0 bytes, cpu time=16.00ms, real time=11.00ms, gc time=0ns

1, HFloat(-0.5008631947224957)

memory used=3.79MiB, alloc change=0 bytes, cpu time=15.00ms, real time=52.00ms, gc time=0ns

2, -.523901163953022553

memory used=21.28MiB, alloc change=0 bytes, cpu time=78.00ms, real time=236.00ms, gc time=0ns

4, -.535153942647626391

memory used=144.82MiB, alloc change=-4.00MiB, cpu time=469.00ms, real time=1.60s, gc time=46.88ms

8, -.540549407239521607

memory used=347.30MiB, alloc change=16.00MiB, cpu time=1.27s, real time=3.45s, gc time=140.62ms

16, -.542699055038344258

memory used=1.33GiB, alloc change=-8.00MiB, cpu time=7.66s, real time=15.92s, gc time=750.00ms

32, -.542936165630524603

 

SS:=[seq(soptRK[j][1],j=1..9)];

[HFloat(-0.5008626988793192), -.523900304316377463, -.535152497919956782, -.540546896131004706, -.542695734426874465, -.542932877726400531, -.543011976841572652, -.543035276649319276, -.543041496228812814]

 

E1:=[seq(SS[i]-SS[i+1],i=1..nops(SS)-1)];

[HFloat(0.02303760543705824), 0.11252193603580e-1, 0.5394398211048e-2, 0.2148838295869e-2, 0.237143299527e-3, 0.79099115172e-4, 0.23299807746e-4, 0.6219579494e-5]

To get 6 digits of accuracy we need nvar = 256, which may not be attainable with the dsolve/numeric approach unless we are able to call it in evalhf form.

 

 


 

Download ImportanceofevalhfNLPSolve.mw

dsolve/numeric + NLPSolve shows inconsistent behavior. This combination is important for parameter estimation and optimal control. Can anyone fix this? Hopefully, I am not making a mistake.


Test code written by Dr. Venkat Subramanian at UT Austin, 05/31/2023. This code uses the CVP approach (piecewise-constant control) to perform optimal control. NLPSolve in combination with the dsolve/numeric parametric form is buggy: it fails for some values of nvar and works for others. Ideally, increasing nvar should show convergence with respect to the objective function.

restart:

Digits:=15;

15

 

eqodes:=[diff(ca(t),t)=-(u+u^2/2)*1.0*ca(t),diff(cb(t),t)=1.0*u*ca(t)-0.1*cb(t)];

[diff(ca(t), t) = -1.0*(u+(1/2)*u^2)*ca(t), diff(cb(t), t) = 1.0*u*ca(t)-.1*cb(t)]

soln:=dsolve({op(eqodes),ca(0)=alpha,cb(0)=beta},type=numeric,'parameters'=[alpha,beta,u],compile=true,savebinary=true):

 

 

ss := proc(x)
    # objective via the dsolve/numeric parametric form: integrate over
    # nvar piecewise-constant control intervals and return -cb(1)
local z1, i, c10, c20, dt, u;
global soln, nvar;
    interface(warnlevel = 0);
    if type(x, Vector) then
        dt := evalf(1.0/nvar);
        c10 := 1.0; c20 := 0.0;
        for i from 1 to nvar do
            u := x[i];
            soln('parameters' = [c10, c20, u]);
            z1 := soln(dt);
            c10 := subs(z1, ca(t)); c20 := subs(z1, cb(t));
        end do;
        -c20;
    else
        'procname'(args);
    end if;
end proc:

 

nvar:=3;#code works for nvar:=3, but not for nvar:=2

3

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(3, {(1) = .1000000000000000, (2) = .1000000000000000, (3) = .1000000000000000})

 

ss(ic0);

HFloat(-0.09025793628011441)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(3, {(1) = 0., (2) = 0., (3) = 0.})

Vector[column](%id = 36893491136404053036)

infolevel[all]:=1;

1

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu]);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 3

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

[-.531453381886523, Vector(3, {(1) = .8114345197312305, (2) = 1.189413022736622, (3) = 2.427689509710160})]

nvar:=2;#code works for nvar:=3, but not for nvar:=2

2

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(2, {(1) = .1000000000000000, (2) = .1000000000000000})

 

ss(ic0);

HFloat(-0.09025793011810783)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(2, {(1) = 0., (2) = 0.})

Vector[column](%id = 36893491136437818540)

infolevel[all]:=1;

1

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu]);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 2

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

Error, (in Optimization:-NLPSolve) no improved point could be found

 

nvar:=5;#code works for nvar:=3, but not for nvar:=2

5

ic0:=Vector(nvar,[seq(0.1,i=1..nvar)],datatype=float[8]);

Vector(5, {(1) = .1000000000000000, (2) = .1000000000000000, (3) = .1000000000000000, (4) = .1000000000000000, (5) = .1000000000000000})

 

ss(ic0);

HFloat(-0.09025792639212991)

bl := Vector(nvar,[seq(0.,i=1..nvar)]);bu := Vector(nvar,[seq(5.,i=1..nvar)]);

Vector(5, {(1) = 0., (2) = 0., (3) = 0., (4) = 0., (5) = 0.})

Vector[column](%id = 36893491136472713804)

infolevel[all]:=1;

1

Optimization:-NLPSolve(nvar,ss,[],initialpoint=ic0,[bl,bu]);

NLPSolve: calling NLP solver

NLPSolve: using method=sqp

NLPSolve: number of problem variables 5

NLPSolve: number of nonlinear inequality constraints 0

NLPSolve: number of nonlinear equality constraints 0

NLPSolve: number of general linear constraints 0

[-.537338871783244, Vector(5, {(1) = .7435767224059456, (2) = .9013440906589676, (3) = 1.148772394841535, (4) = 1.629079877401040, (5) = 3.222801229872320})]

 


 

Download test.mw
