>The previous version of the table and the ranking on Maple seems to be made in a rush doubting the validity of other entries as well. I am not well versed with the other platforms to accept or deny your conclusions.
Maple is the one I am not very well versed in because I haven't had a license in years, so I checked through the documentation to see what changed, but it's clear I missed some things. I am sorry about that, but I think it has all been fixed up now. In the end, my opinion of Maple's suite has changed a lot. For example, looking at the delay diffeq suite, it looks like it's designed in a similar way to what we did in DelayDiffEq.jl, where the idea is to get a stiff solver by extending a Rosenbrock method. I am sure the Maple devs saw the same reuse tricks that can be done in this case. The accuracy shown in your extended post on the state-delay problem, along with the docs example, points to there being some form of discontinuity tracking as well, probably done silently on top of their events system if they also followed the DKLAG paper, since otherwise those examples wouldn't get errors that tight. So in the end these choices look quite similar to the choices we made, and it's nice to see some precedent. I now think that Maple's delay equation setup is very good; I just don't get why it's documented so poorly.
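To be concrete about what I mean by discontinuity tracking, here's a rough sketch of the idea for constant lags (this is not DelayDiffEq.jl's or Maple's actual code, just an illustration): the derivative discontinuity at the initial time propagates to integer multiples of each lag, getting one order smoother each time, so the solver wants to step to those points exactly instead of letting the error estimator trip over them.

```julia
# Sketch only: collect the propagated discontinuity times t0 + k*tau for each
# constant lag tau, up to the solver's order + 1 (past that the solution is
# smooth enough that the discontinuity no longer affects the error estimate).
# A real implementation would hand these to the integrator as mandatory stop
# points (e.g. tstops) and handle state-dependent lags via the event system.
function propagated_discontinuities(t0, tf, lags; order = 4)
    points = Float64[]
    for tau in lags
        for k in 1:(order + 1)
            t = t0 + k * tau
            t <= tf && push!(points, t)
        end
    end
    return sort!(unique!(points))
end

propagated_discontinuities(0.0, 10.0, (1.0, 1.5))
```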
I still think that the choices for the explicit RK tableaus are odd and suboptimal, and I hope that the Maple devs update those to more modern tableaus. But those are the kinds of "sugar" things. In general, it seems that for "most people's problems at standard tolerances" some explicit RK method or some higher order Rosenbrock method does well, and we have all of Hairer's benchmarks and DiffEqBenchmarks.jl to go off of for that, so it's pretty clear that Maple is hitting that area pretty solidly.

Still, it's missing some of the "sugar" like fully-implicit RK methods (radau), which our and Hairer's benchmarks say are important for high accuracy solving of stiff ODEs. It's also missing SDIRK methods, which our benchmarks say trade with Rosenbrock methods as being more efficient in some cases. The case I mentioned was a semilinear stiff quorum-sensing model. I'll see if we can get that added to DiffEqBenchmarks.jl, but basically what's shown is that we agree with Hairer's benchmarks that the methods from his SDIRK4 code are uncompetitive, while TRBDF2 and the newer Kvaerno and Kennedy & Carpenter methods (which are the basis of ARKODE) are much better than the SDIRK methods benchmarked in Hairer. All of these cases essentially involve semilinearity, where it made a big difference that a Rosenbrock method is required to re-calculate the Jacobian every step, whereas an SDIRK method only uses the Jacobian inside the Newton iterations of its implicit steps, so the standard Hairer-style SDIRK implementation can skip Jacobian calculations and re-factorizations. But again, these are edge cases looking for just a little more efficiency on some problems, while explicit RK + Rosenbrock + LSODE covers quite a bit of ground. That, plus the fact that Maple lets you compile the functions, bumped up my opinion of Maple's set of solvers to very good but not excellent. I am sure that down the line we can write a Julia-to-Maple bridge to do more extensive benchmarking (Julia links to Sundials/LSODA/etc., so then there's a direct way to do quite a few comparisons), but since I have found that implementations don't differ much these days from what Hairer described, I'll assume the Maple devs know what they're doing and would get similar efficiency out of a compiled solver as anyone else does.
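For anyone wondering what I mean by that Jacobian-reuse argument, here's a toy sketch (nobody's production code, just the idea, using implicit Euler as the simplest SDIRK-like method): a Rosenbrock method builds its stages directly out of W = I - γhJ, so J has to be current at every step, while an SDIRK step only uses W inside a simplified Newton iteration, so the factorization can be kept across steps and refreshed only when the iteration starts converging slowly.

```julia
using LinearAlgebra

# Toy sketch of Jacobian/factorization reuse in a Newton-based implicit step.
# `f` is the ODE right-hand side and `jac` its Jacobian, both user-supplied.
function implicit_euler!(u, f, jac, h, nsteps; tol = 1e-8)
    W = lu(I - h * jac(u))                 # factorize once up front
    for _ in 1:nsteps
        z = u + h * f(u)                   # explicit Euler predictor
        for iter in 1:20
            dz = W \ (z - u - h * f(z))    # simplified Newton step, reusing W
            z -= dz
            norm(dz) < tol && break
            # converging too slowly? only now pay for a fresh J and LU
            iter == 5 && (W = lu(I - h * jac(z)))
        end
        u .= z
    end
    return u
end

# e.g. a small stiff linear test (constant J, so W never needs refreshing)
f(u)    = [-1000.0 * u[1] + u[2], -u[2]]
fjac(u) = [-1000.0 1.0; 0.0 -1.0]
implicit_euler!([1.0, 1.0], f, fjac, 0.01, 100)
```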
But I will say that Maple should definitely document not just how to use their stuff, but what they are doing. It's really hard to know what Maple is doing in its solvers sometimes. Most of the other suites have some form of publication that details exactly what methods they implemented and why. Maple's docs don't even seem to cover that, and I only found out some of these details through forum links here. What I can find on Maple are Shampine's old PSE papers, but it seems a lot has changed since then.
>Regarding your comment on 10,000 or more ODEs, are you talking about well defined system with well defined pattern for Jacobian or matrix or arbitrarily working with a random set of matrix? I agree that Maple is weak for this, but MATLAB switches to sparse solvers for large systems.
Yes. A quintessential example is a method of lines discretization of a reaction-diffusion equation. The Brusselator is a standard example, but I like to use a system of 8 reactants or something like that. These PDE discretizations usually have a natural banded structure, which is why suites like Sundials have built-in banded linear solver choices for handling them, or give the ability to pass in user-defined linear solvers. It definitely depends on the audience, but "using ODE solvers to write PDE solvers" is definitely a large group of users from what I've found, and so handling this well matters to those who are building scientific software on top of the ODE suites.
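As a rough sketch of where that structure comes from (a hypothetical setup, not one of the benchmarks): a 1D Brusselator discretized with second-order finite differences, with the two species interleaved at each grid point, gives a Jacobian that is banded with half-bandwidths equal to the number of species. That's exactly the structure a banded linear solver (Sundials' built-in one, or a user-supplied one) exploits, and for 10,000+ ODEs it's the difference between dense O(n^3) factorizations and essentially O(n) banded ones every time the Jacobian is refactorized.

```julia
# Sketch: 1D Brusselator reaction-diffusion, method of lines, no-flux boundaries.
# State layout u = [x_1, y_1, x_2, y_2, ...], so the coupling only reaches the
# neighboring grid point and the Jacobian is banded with half-bandwidths 2.
function brusselator_1d!(du, u, p, t)
    A, B, D, dx, N = p
    for i in 1:N
        x, y = u[2i-1], u[2i]
        xl = i == 1 ? x : u[2i-3]   # left/right neighbors, mirrored at the
        xr = i == N ? x : u[2i+1]   # boundaries for no-flux conditions
        yl = i == 1 ? y : u[2i-2]
        yr = i == N ? y : u[2i+2]
        du[2i-1] = A + x^2 * y - (B + 1) * x + D * (xl - 2x + xr) / dx^2
        du[2i]   = B * x - x^2 * y           + D * (yl - 2y + yr) / dx^2
    end
end
```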
>I have had bad experience with shooting methods (in particular single shooting) for BVPs. It will be trivial to break any such code for DAE BVPs.
Yes, and honestly the BVP solvers were a tough call between "Fair" and "Good" here. I put Julia's as "Good" here because of flexibility. The impetus for finishing and releasing them was that we were talking about them in our chat channel and someone wanted to use them with the boundary constraint that the maximum of the velocity over the interval was 1. Since our setup involves writing constraints using the (possibly continuously extended) solution, this was possible. And then we had some people use it for multipoint BVPs without having to do any transformations of them. So that, plus the fact that our MIRK-based method can do (stiff) multipoint BVPs and has a specialized form for banded Jacobian handling when it's a two-point BVP, is why I ended up giving it a "Good". But that doesn't mean it's close to complete at all. The Shooting methods are nice, but many problems are too sensitive to the initial condition to use them, so we really need to get a continuous extension, mass matrices, singularity handling, and adaptivity to complete our MIRK-based method. Given what's missing, it's clear that some people's problems can't be solved well by this setup, so one could also put a "Fair" on it, but I don't think you can justify a "Poor" because it does handle so many unique things. MATLAB actually does quite well in this area, with bvp4c solving two-point BVPs with singularities and stiffness just fine, but it's only a "Good" because it doesn't go any further. Maple's documentation explicitly states that it shouldn't be used for stiff BVPs, and it's unclear to me whether it can do more than two-point BVPs, so I think that justifies the "Fair" rating.
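For reference, this is roughly what that "max of the velocity is 1" constraint looks like in our setup (a hedged sketch of a hypothetical pendulum problem; the exact BVProblem/boundary-condition signature has shifted across BoundaryValueDiffEq.jl versions, so treat the details as an assumption rather than a copy-paste example). The point is just that the boundary condition is written against the solution object itself, so a constraint over the whole interval is one more residual rather than something you have to transform into a two-point condition:

```julia
using OrdinaryDiffEq, BoundaryValueDiffEq

# A pendulum where we want θ(0) = 0 and the maximum angular velocity over
# the interval to equal 1 (hypothetical example for illustration).
function pendulum!(du, u, p, t)
    du[1] = u[2]                 # dθ
    du[2] = -9.81 * sin(u[1])    # dω
end

# The boundary condition receives the (continuously extended) solution, so we
# can sample it anywhere in the interval, not just at the endpoints.
function bc!(residual, sol, p, t)
    ts = range(0.0, 1.0, length = 101)
    residual[1] = sol(0.0)[1]                         # θ(0) = 0
    residual[2] = maximum(s -> sol(s)[2], ts) - 1.0   # max velocity = 1
end

prob = BVProblem(pendulum!, bc!, [0.0, 0.5], (0.0, 1.0))  # [0.0, 0.5] is the initial guess
sol  = solve(prob, Shooting(Tsit5()))
```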
I did miss in the table that COLSYS, along with COLDAE, is so flexible. If anything deserves an "Excellent", Netlib would be it. And the sensitivity analysis of DASPK is missing. That will get updated.
Of course, simple tables with rankings can only be ballparks because it's all about the details, so I try to flesh out the details as much as possible in the text and hope to get the table as close and as reasonable as possible. I really dropped the ball in the first version of the table for Maple and I'm sorry about that. But as to:
>I am not well versed with the other platforms to accept or deny your conclusions.
let me just share the other main concerns that have been brought up:
1. Someone on Hacker News wondered why I omitted Intel's ODE solvers. They were fine, but they were discontinued in 2011 and are just vanilla solvers without event handling or other special features.
2. Someone emailed me about Mathematica having GPU usage (and now it looks like someone else posted a comment about it on my site), but that was for Mathematica in general, and their GPU stuff doesn't apply to its ODE solvers.
3. Someone on Twitter mentioned that SciPy does do event handling. I can't find it in the docs at all, so I am waiting for confirmation. From what I can find, there are just sites showing how to hack in event handling, and those hacks don't even make use of dense output (I cannot find a dense output method in the docs, and the dev channels seem to show nobody has picked up that implementation as a project yet). So unless I'm missing something big, it seems like this won't change. Edit: looks like we are in agreement here now.
So it seems Maple is the only area that really had some big valid objections, and those have since been corrected. Given the amount of press this somehow got (man, these were just mental notes I was sharing while comparing what's available to figure out what to do next, haha), I think there's some confidence to be gained from the fact that devs of the other languages haven't really voiced objections. Though I'm sure there are corrections that can and will be made as other issues are brought up.