First, it should be noted that when a numerical ODE solution is obtained in Maple, the values at which the solution will be requested are (by default) not known in advance, so the only possible mode of operation is to compute solution values *as* they are requested.

The lsode method in Maple is more-or-less a direct mapping of the published Fortran algorithm with a Maple interface.

That interface was designed to integrate from a starting point to a requested end point, outputting the value at that end point. The issue lies in how continuation of the integration (i.e. the request of a second point) is performed:

1) If the new point is farther from the initial value than the last requested point, integration continues from the last requested point.

2) If the new point is closer to the initial value than the last requested point, integration restarts from the initial point.
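This continuation policy can be modeled with a short Python sketch (all names here are invented for illustration; this is a toy fixed-step RK4 integrator, not Maple's code). Note that lsode's actual step control is adaptive, which is precisely why the continue-vs-restart choice changes the step sequence, and hence the computed values:

```python
import math

def rk4_step(f, x, y, h):
    """One classical RK4 step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

class LsodeLikeInterface:
    """Toy model of the two continuation rules described above."""
    def __init__(self, f, x0, y0, h=1e-3):
        self.f, self.x0, self.y0, self.h = f, x0, y0, h
        self.x_last, self.y_last = x0, y0   # last requested point

    def _integrate(self, x_from, y_from, x_to):
        x, y = x_from, y_from
        while x < x_to - 1e-12:
            step = min(self.h, x_to - x)
            y = rk4_step(self.f, x, y, step)
            x += step
        return y

    def __call__(self, x):
        if x >= self.x_last:
            # rule 1: moving away from x0 -> continue from the last point
            y = self._integrate(self.x_last, self.y_last, x)
        else:
            # rule 2: moving back toward x0 -> restart from the initial point
            y = self._integrate(self.x0, self.y0, x)
        self.x_last, self.y_last = x, y
        return y

dsn = LsodeLikeInterface(lambda x, y: math.sin(x), 0.0, 0.0)
dsn(1.0)    # integrates 0 -> 1.0
dsn(1.2)    # rule 1: continues 1.0 -> 1.2
dsn(0.1)    # rule 2: restarts from 0 and integrates 0 -> 0.1
dsn(1.2)    # rule 1 again, but now continues from 0.1, not from 1.0
```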

One can actually observe this directly in Maple:

> mysin := proc(x) if not type(x,'numeric') then 'procname'(x);

> else lprint(`Request at x=`,x); sin(x); end if; end proc:

> dsn := dsolve({diff(y(x),x)=mysin(x),y(0)=0}, numeric,

> method=lsode, known={mysin});

`Request at x=`, 0

dsn := proc(x_lsode) ... end proc

> dsn(1);

`Request at x=`, 0.

`Request at x=`, .316227766016837939e-3

`Request at x=`, .316227766016837939e-3

...

`Request at x=`, .822526225536629685

`Request at x=`, .919835463478538284

`Request at x=`, 1.01714470142044688

[x = 1., y(x) = 0.459697513613591]

> dsn(1.2);

`Request at x=`, 1.

`Request at x=`, 1.00031207948782574

`Request at x=`, 1.00031207948782574

...

`Request at x=`, 1.13710328654956050

`Request at x=`, 1.19166312131692576

`Request at x=`, 1.24622295608429101

[x = 1.2, y(x) = 0.637642166314161]

> dsn(0.1);

`Request at x=`, 0.

`Request at x=`, .316227766016837953e-4

`Request at x=`, .632455532033675906e-4

...

`Request at x=`, .104069022625259522

`Request at x=`, .988159476246405072e-1

`Request at x=`, .119564316892035533

[x = 0.1, y(x) = 0.00499559764096088]

> dsn(1.2);

`Request at x=`, .100000000000000006

`Request at x=`, .100376805608346789

`Request at x=`, .100376805608346789

...

`Request at x=`, 1.03776125115700646

`Request at x=`, 1.13534129741291556

`Request at x=`, 1.28009936449751138

[x = 1.2, y(x) = 0.637642154095732]

Item 1 has the unfortunate side effect that the value computed at a point depends upon all of the points requested before it, and can change if that sequence of requests is different.

In fact this is directly apparent from the above example:

[x = 1.2, y(x) = 0.637642166314161]

[x = 1.2, y(x) = 0.637642154095732]

where the results at x=1.2 differ depending on whether a value was requested at 1.0 vs. 0.1 before obtaining them. For some problems the differences can be much more substantial than the above.

This issue can be alleviated by the (brutal) use of the 'startinit=true' option (not really recommended), which forces consistency by performing a full integration from the initial point for each requested point.
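In effect, 'startinit=true' amounts to the following (a hedged Python sketch with invented names; fixed-step RK4 stands in for lsode's actual stepping): every request re-integrates from scratch, so the answer cannot depend on the request history, at the cost of repeating all of the work each time:

```python
import math

def integrate_from_start(f, x0, y0, x, h=1e-3):
    """Full fixed-step RK4 integration from the initial point to x
    (a stand-in for what startinit=true forces lsode to do)."""
    xc, yc = x0, y0
    while xc < x - 1e-12:
        step = min(h, x - xc)
        k1 = f(xc, yc)
        k2 = f(xc + step/2, yc + step*k1/2)
        k3 = f(xc + step/2, yc + step*k2/2)
        k4 = f(xc + step, yc + step*k3)
        yc += step*(k1 + 2*k2 + 2*k3 + k4)/6
        xc += step
    return yc

f = lambda x, y: math.sin(x)
a = integrate_from_start(f, 0.0, 0.0, 1.2)   # first request at 1.2
integrate_from_start(f, 0.0, 0.0, 0.1)       # intervening request at 0.1
b = integrate_from_start(f, 0.0, 0.0, 1.2)   # second request at 1.2
assert a == b   # identical by construction: no state is carried over
```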

In contrast, the default nonstiff (rkf45) and stiff (rosenbrock) methods are more recent Maple implementations with a tighter association with the Maple working style.

These methods were specifically implemented to resolve this repeatability issue, as it was part of their design.

By default they use a natural interpolant associated with the integration method for point queries, and always take the 'natural' step (i.e. the one computed from the error criteria) when obtaining a requested point. As a result, continuation past a previously computed solution value always gives the same result as computation directly from the initial condition, *WITHOUT* having to re-integrate from the initial point.
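The interpolant idea can be pictured with another hedged Python sketch (invented names; a cubic Hermite interpolant stands in for the method's true natural interpolant): the integrator advances only by its own natural steps, keeps every accepted step, and answers all point queries by interpolation on that stored mesh, so the query order cannot change the computed trajectory:

```python
import bisect
import math

def rk4_step(f, x, y, h):
    """One classical RK4 step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def hermite(x, xa, ya, fa, xb, yb, fb):
    """Cubic Hermite interpolation on [xa, xb] from values and slopes."""
    h = xb - xa
    t = (x - xa) / h
    return ((1 + 2*t)*(1 - t)**2 * ya + t*(1 - t)**2 * h * fa
            + t*t*(3 - 2*t) * yb + t*t*(t - 1) * h * fb)

class InterpolatingSolver:
    """Forward-only sketch: natural steps + interpolation for queries."""
    def __init__(self, f, x0, y0, h=0.05):
        self.f, self.h = f, h
        self.xs, self.ys = [x0], [y0]   # accepted mesh, never discarded

    def _extend_to(self, x):
        # advance by the solver's own 'natural' steps until the mesh covers x
        while self.xs[-1] < x:
            xn, yn = self.xs[-1], self.ys[-1]
            self.xs.append(xn + self.h)
            self.ys.append(rk4_step(self.f, xn, yn, self.h))

    def __call__(self, x):
        if x <= self.xs[0]:
            return self.ys[0]           # forward queries only in this sketch
        self._extend_to(x)
        i = min(bisect.bisect_right(self.xs, x), len(self.xs) - 1) - 1
        xa, xb = self.xs[i], self.xs[i + 1]
        ya, yb = self.ys[i], self.ys[i + 1]
        return hermite(x, xa, ya, self.f(xa, ya), xb, yb, self.f(xb, yb))

dsn = InterpolatingSolver(lambda x, y: math.sin(x), 0.0, 0.0)
a = dsn(1.2)   # extends the mesh past 1.2, then interpolates
dsn(0.1)       # answered from the stored mesh -- no re-integration
b = dsn(1.2)   # same stored mesh, so bit-for-bit the same answer
assert a == b
```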

As for efficiency, there is a compile option that is supported by rkf45, rosenbrock, and ck45 that is not available for the other solvers in Maple. The codes themselves are implemented in a compiled language, so the combination of the compiled integrators and the compilation of the problem should yield a highly efficient solution process.

Here is an example similar to the one above, done with rkf45, where I display only the results of the calls:

[x = 1.0, y(x) = 0.459697678370392]

[x = 1.2, y(x) = 0.637642235621288]

[x = 0.1, y(x) = 0.00499585264319642]

[x = 1.2, y(x) = 0.637642235621288]

Note that the pair of results for 1.2 are *identical*.

Furthermore, the rkf45, rosenbrock, and ck45 methods *also* support a pre-compute/storage operating mode, where the integration is performed at the time the solution is requested and the results are stored, making any subsequent solution-value queries into table lookups. This is controlled through the range option, but beware that sufficient storage must be allocated to accurately compute all solution values in the requested range at the requested tolerance, so memory usage may be a concern for a rapidly varying solution (or for use of a nonstiff method on a stiff problem).
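The range/storage mode can be pictured as follows (again a hedged Python sketch with invented names, not Maple's implementation; linear interpolation between stored steps keeps the sketch short, whereas Maple stores enough data to interpolate at full accuracy): the whole requested range is integrated once up front, every step is stored, and later queries become pure lookups. The stored table is exactly where the memory cost comes from:

```python
import math

def rk4_step(f, x, y, h):
    """One classical RK4 step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

class RangeSolver:
    """Integrate the whole range once up front; queries become lookups."""
    def __init__(self, f, x0, y0, x_end, h=0.01):
        self.x0, self.h = x0, h
        self.table = [y0]     # one stored value per step: the memory cost
        x, y = x0, y0
        while x < x_end:
            y = rk4_step(f, x, y, h)
            x += h
            self.table.append(y)
        self.x_end = x

    def __call__(self, x):
        if not (self.x0 <= x <= self.x_end):
            raise ValueError("outside the precomputed range")
        # pure table lookup, with linear interpolation between stored steps
        t = (x - self.x0) / self.h
        i = min(int(t), len(self.table) - 2)
        frac = t - i
        return (1 - frac) * self.table[i] + frac * self.table[i + 1]

dsn = RangeSolver(lambda x, y: math.sin(x), 0.0, 0.0, 2.0)
val = dsn(1.2)   # no integration happens here -- just a lookup
```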