MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Posts
This is my current collection of Maple bugs, gathered in one post before I lose track of them. Hopefully these can be fixed in Maple 2025.2. For each problem I attach a separate worksheet, so there are a few worksheets here.

This is all on Linux, using Maple 2025.1 with the latest SupportTools and the latest Physics updates.

1. Random crashes. This one is very strange. The crash happens randomly: you might need to try a few times to see it, or close the worksheet and reopen it.
     

    restart;

Example: random crashes

     

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC],y(t)) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [(5/2)*2^(1/2)*(-(1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)-sin(2*t)*cos(8)+sin(8)*cos(2*t))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC],y(t)) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    Error, (in anonymous procedure called from cos) too many levels of recursion

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC]) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [-(5/2)*2^(1/2)*((1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)-sin(8)*cos(2*t)+sin(2*t)*cos(8))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC]) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [-(5/2)*2^(1/2)*((1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)-sin(8)*cos(2*t)+sin(2*t)*cos(8))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC]) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    Error, (in signum) too many levels of recursion

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC],y(t)) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    Error, (in anonymous procedure called from cos) too many levels of recursion

    restart

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC],y(t)) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [(5/2)*2^(1/2)*(-(1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)+sin(8)*cos(2*t)-sin(2*t)*cos(8))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC],y(t)) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [-(5/2)*2^(1/2)*((1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)+sin(2*t)*cos(8)-sin(8)*cos(2*t))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]

    restart;

    sol:=y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2);
    ode:=diff(y(t),t) = (25-y(t)^2)^(1/2);
    IC:=y(4)=-5;
    odetest(sol,[ode,IC]) assuming t>1;

    y(t) = (-25*sin(t+arctan(sin(4)*cos(4)/(sin(4)^2-1)))^2+25)^(1/2)

    diff(y(t), t) = (25-y(t)^2)^(1/2)

    y(4) = -5

    [-(5/2)*2^(1/2)*((1-sin(2*t)*sin(8)-cos(2*t)*cos(8))^(1/2)*(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2)+sin(2*t)*cos(8)-sin(8)*cos(2*t))/(1+sin(2*t)*sin(8)+cos(2*t)*cos(8))^(1/2), -10]


    Download random_crashes_sept_8_2025.mw

     

2. Collection of bugs from solve(identity) (another one related to solve(identity) is at the end)

    interface(version);

    `Standard Worksheet Interface, Maple 2025.1, Linux, June 12 2025 Build ID 1932578`

    SupportTools:-Version();

    `The Customer Support Updates version in the MapleCloud is 29 and is the same as the version installed in this computer, created June 23, 2025, 10:25 hours Eastern Time.`

    Physics:-Version();

    `The "Physics Updates" version in the MapleCloud is 1877 and is the same as the version installed in this computer, created 2025, July 11, 19:24 hours Pacific Time.`

     

    Example 1

     

    restart;

    eq:=1/8*A^2*exp(2*theta*(B+I))+1/8*exp(2*theta*(B-I))*A^2-1/4*A^2*exp(2*B*theta)-1/4*exp(theta*(B-2*I))*A*B-1/4*exp(theta*(B+2*I))*A*B+1/2*A*B*exp(B*theta)+1/4*exp(theta*(B-2*I))*A*C+1/4*exp(theta*(B+2*I))*A*C-1/2*A*C*exp(B*theta)-1/4*I*exp(theta*(B-2*I))*A+1/4*I*exp(theta*(B+2*I))*A+1/4*C^2*cos(2*theta)-1/4*C^2-1/2*C*sin(2*theta)-1/2*cos(2*theta)-1=0:
    the_vars:=[A, B, C]:
    solve(identity(eq,theta),the_vars);

    Error, (in gcd/doit) too many levels of recursion

     

    Example 2

     

    restart;

    eq:=-x^(1/2)-1/2*x*A^2+A*B*sinh(B*x)-1/2*x*A^2*cosh(2*B*x)=0;
    the_vars:=[A, B]:
    solve(identity(eq,x),the_vars);

    -x^(1/2)-(1/2)*x*A^2+A*B*sinh(B*x)-(1/2)*x*A^2*cosh(2*B*x) = 0

    Error, (in gcd/doit) too many levels of recursion

     

     

    Example 3

     

    restart;

    eq:=1 = X*(2*cos(X)*cos(x0)-X*sin(X)*cos(x0)-2*sin(X)*sin(x0)-X*cos(X)*sin(x0)-x0*sin(X)*cos(x0)-x0*cos(X)*sin(x0))*(2*Y*ln(Y+y0)+Y+2*y0*ln(Y+y0)+y0)/Y/(X*cos(X)*cos(x0)-X*sin(X)*sin(x0)+x0*cos(X)*cos(x0)-x0*sin(X)*sin(x0)+sin(X)*cos(x0)+cos(X)*sin(x0))/(2*ln(Y+y0)+2*Y/(Y+y0)+1+2*y0/(Y+y0));

    1 = X*(2*cos(X)*cos(x0)-X*sin(X)*cos(x0)-2*sin(X)*sin(x0)-X*cos(X)*sin(x0)-x0*sin(X)*cos(x0)-x0*cos(X)*sin(x0))*(2*Y*ln(Y+y0)+Y+2*y0*ln(Y+y0)+y0)/(Y*(X*cos(X)*cos(x0)-X*sin(X)*sin(x0)+x0*cos(X)*cos(x0)-x0*sin(X)*sin(x0)+sin(X)*cos(x0)+cos(X)*sin(x0))*(2*ln(Y+y0)+2*Y/(Y+y0)+1+2*y0/(Y+y0)))

    solve(identity(eq,X),[x0,y0]);

    Error, (in signature) too many levels of recursion

    solve(identity(eq,X),[x0,y0,Y]);

    Error, (in signature) too many levels of recursion

     

     


     

    Download collection_of_maple_internal_errors_sept_6_2025.mw

     

3. Adding Physics:-Setup(assumingusesAssume = true) makes combine fail

    interface(version);

    `Standard Worksheet Interface, Maple 2025.1, Linux, June 12 2025 Build ID 1932578`

    SupportTools:-Version();

    `The Customer Support Updates version in the MapleCloud is 29 and is the same as the version installed in this computer, created June 23, 2025, 10:25 hours Eastern Time.`

    Physics:-Version();

    `The "Physics Updates" version in the MapleCloud is 1877 and is the same as the version installed in this computer, created 2025, July 11, 19:24 hours Pacific Time.`

    restart

    Physics:-Setup(assumingusesAssume = true):

    A:=1/6*ln(u^2+1)+1/3*arctan(u)+1/6*ln(u^2-3^(1/2)*u+1)-1/3*arctan(2*u-3^(1/2))+1/6*ln(u^2+3^(1/2)*u+1)-1/3*arctan(2*u+3^(1/2));
    combine(A,ln) assuming real;

    (1/6)*ln(u^2+1)+(1/3)*arctan(u)+(1/6)*ln(u^2-3^(1/2)*u+1)-(1/3)*arctan(2*u-3^(1/2))+(1/6)*ln(u^2+3^(1/2)*u+1)-(1/3)*arctan(2*u+3^(1/2))

    Error, (in assuming) when calling 'is'. Received: 'invalid input: (u^2+1)^(1/6)*(u^2-3^(1/2)*u+1)^(1/6) <> 0'

    Physics:-Setup(assumingusesAssume = false):

    combine(A,ln) assuming real;

    ln((u^2+1)^(1/6)*(u^2-3^(1/2)*u+1)^(1/6))+ln((u^2+3^(1/2)*u+1)^(1/6))+(1/3)*arctan(u)-(1/3)*arctan(2*u-3^(1/2))-(1/3)*arctan(2*u+3^(1/2))

     


     

    Download adding_Phsyics_makes_combine_fail_sept_6_2025.mw

     

    4. odetest internal error when adding assuming

    interface(version);

    `Standard Worksheet Interface, Maple 2025.1, Linux, June 12 2025 Build ID 1932578`

    SupportTools:-Version();

    `The Customer Support Updates version in the MapleCloud is 29 and is the same as the version installed in this computer, created June 23, 2025, 10:25 hours Eastern Time.`

    Physics:-Version();

    `The "Physics Updates" version in the MapleCloud is 1877 and is the same as the version installed in this computer, created 2025, July 11, 19:24 hours Pacific Time.`

    restart;

    sol:=y(x) = 6*x/(3*x-2*LambertW(-3/2*exp(5/2*x+5/6*_C2)))+1/2*x+1/3;
    ode:=x-2*y(x)-1+(3*x-6*y(x)+2)*diff(y(x),x) = 0;
    odetest(sol,ode,y(x)) assuming positive;

    y(x) = 6*x/(3*x-2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*_C2)))+(1/2)*x+1/3

    x-2*y(x)-1+(3*x-6*y(x)+2)*(diff(y(x), x)) = 0

    Error, (in depends) too many levels of recursion

odetest(sol,ode,y(x)); #removing the positive assumption, it now works

    -(40/3)*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))^4/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))+180*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))^3*x/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))-450*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))^2*x^2/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))+315*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))*x^3/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))-(40/3)*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))^3/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))-252*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))^2*x/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))+630*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))*x^2/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))+315*x^3/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))-432*x*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))/((-3*x+2*LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2)))^3*(1+LambertW(-(3/2)*exp((5/2)*x+(5/6)*c__2))))

     


     

    Download internal_odetest_error_sept_6_2025.mw

     

5. solve(identity, ...) gives an internal error when one variable is missing

    interface(version);

    `Standard Worksheet Interface, Maple 2025.1, Linux, June 12 2025 Build ID 1932578`

    SupportTools:-Version();

    `The Customer Support Updates version in the MapleCloud is 29 and is the same as the version installed in this computer, created June 23, 2025, 10:25 hours Eastern Time.`

    Physics:-Version();

    `The "Physics Updates" version in the MapleCloud is 1877 and is the same as the version installed in this computer, created 2025, July 11, 19:24 hours Pacific Time.`

    restart;

    eq:=-A^2*exp(2*B*x)+A*B*exp(B*x)-2*A*C*exp(B*x)-C^2-a*cos(b*x)^m*(A*exp(B*x)+C+1)=0;

    -A^2*exp(2*B*x)+A*B*exp(B*x)-2*A*C*exp(B*x)-C^2-a*cos(b*x)^m*(A*exp(B*x)+C+1) = 0

    the_vars:=[A, B, C,m]: #all variables are listed
    solve(identity(eq,x),the_vars);

    [[A = 0, B = B, C = -(1/2)*a-(1/2)*(a^2-4*a)^(1/2), m = 0], [A = 0, B = B, C = -(1/2)*a+(1/2)*(a^2-4*a)^(1/2), m = 0], [A = -C-(1/2)*a-(1/2)*(a^2-4*a)^(1/2), B = 0, C = C, m = 0], [A = -C-(1/2)*a+(1/2)*(a^2-4*a)^(1/2), B = 0, C = C, m = 0]]

the_vars:=[A, B, C]:   #forgot to add the m variable to the list; now it gives an internal error
    solve(identity(eq,x),the_vars);

    Error, (in depends) too many levels of recursion

     


     

    Download missing_variable_solve_sept_6_2025.mw

     

6. ODESteps gives an internal error (this was a question before; I moved it here so everything is in one place)
     

    interface(version);

    `Standard Worksheet Interface, Maple 2025.1, Linux, June 12 2025 Build ID 1932578`

    SupportTools:-Version();

    `The Customer Support Updates version in the MapleCloud is 29 and is the same as the version installed in this computer, created June 23, 2025, 10:25 hours Eastern Time.`

    restart;

    ode:=x^2*diff(y(x),x$2)+(x^2-5*x)*diff(y(x),x)+(5-6*x)*y(x)=0; #22942.  

    x^2*(diff(diff(y(x), x), x))+(x^2-5*x)*(diff(y(x), x))+(5-6*x)*y(x) = 0

    sol:=dsolve(ode);

    y(x) = c__1*x^5*(x+5)+c__2*x*(x^4*(x+5)*Ei(1, x)+(-x^4-4*x^3+3*x^2-4*x+6)*exp(-x))

    Student:-ODEs:-ODESteps(ode)

    Warning, cannot verify that the given particular solution, y(x) = 1+1/5*x, actually solves the corresponding homogeneous ODE, diff(diff(y(x),x),x)+1/x*(x-5)*diff(y(x),x)-(-5+6*x)/x^2*y(x) = 0

    Error, (in Student:-ODEs:-ChangeVariables) the ODE, diff(diff(U(T),T),T) = 5*(T^2+6*T-5)/T^2/(5+T)*U(T)-diff(U(T),T)*(T^2+2*T-25)/T/(5+T), contains the undifferentiated dependent variable, U(T), but the transformation %3, does not

     


     

    Download internal_error_ODESteps_sept_2_2025.mw

    I must thank @Scot Gould for having asked this question more than a year ago and thus, without meaning to, having been the driving force behind this post.

    There is an enormous literature about Monte-Carlo integration (MCI for short) and you might legitimately ask "Why another one?".

    A personal experience.
    Maybe if I tell you about my experience you will better understand why I believe that something is missing in the traditional courses and textbooks, even the most renowned ones.

    For several years, I led training seminars in statistics for engineers working in the field of numerical simulation.

    At some point I always came to speak about MCI and (as anyone does today) I introduced the subject by presenting the estimation of the area of a disk by randomly picking points in its circumscribed square and assessing its area from the proportion of points it contained.



    Once done I switched (still as anybody does) to the Monte-Carlo summation formula (see Wikipedia for instance).

    One day an attendee asked me this question "Why do you say that this [1D] summation formula is the same thing that the [2D] counting of points in the [circle within a box] example you have just presented?"

    I have to say I was surprised by this question for it seemed to me quite evident that these two ways of assessing the area were nothing but two different points of view of, roughly, the same thing.

    So I gave a quick, mostly informal, explanation (that I am not proud of) and, because the clock was running, I kept teaching the class.

But this question really puzzled me, and I looked for a simple but rigorous way to prove that these two approaches were (were they?) equivalent, at least in some reasonable sense.

The thing is that trying to derive simple explanations based on counting is not enough; you have to resort to certain probabilistic arguments to get out of it. Indeed, sticking to the counting approach leads to the more reasonable position that these two approaches are not equivalent.

    The end of the story is that I spent more time on these two approaches of MCI during the trainings that followed.

I explained that, yes, the summation formula seems to be the reference today, but that the old counting strategy still has some advantages and can even give access to information that the summation formula cannot.

    About this post.
This post focuses mainly on what I call the Historical viewpoint (counting points), and its first part aims to answer the question "Is this point of view equivalent or not to the Modern (summation formula) one?" (and if it is, in what sense is it so?).

Let me illustrate this with the example @Scot Gould presented in his question. The brown bold curve in the left figure is the graph of the function func(x) (whose expression is of no interest here), and the brown area represents the area we want to assess using MCI.

In the Historical approach I picked uniformly at random N = 100 points within the gray box (of area 2.42), found that 26 of them were in the brown region, and said the area of the latter is 2.42 x 26/100 = 0.6292. The Modern approach consists in picking N random points uniformly in the range x = [0.8, 3] and using the blue formula to get an estimation of this same area (Lbox is the x-length of the gray box, here equal to 2.2).
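For concreteness, here is a minimal Maple sketch of the two estimators. It is not taken from the attached worksheets: the integrand f is a hypothetical stand-in for func(x), and the box height 1.1 is simply the quoted box area 2.42 divided by Lbox = 2.2.

restart;
with(Statistics):
f := x -> 0.3*(x - 0.8)*(3 - x) + 0.2:   # hypothetical stand-in for func(x)
a := 0.8:  b := 3.0:  H := 1.1:          # x-range and assumed box height (box area S = 2.2*1.1 = 2.42)
N := 100:
X := Sample(Uniform(a, b), N):
Y := Sample(Uniform(0, H), N):
# Historical (counting) estimator: fraction of points under the curve, times the box area
K := add(`if`(Y[i] <= f(X[i]), 1, 0), i = 1 .. N):
area_historical := (b - a)*H*K/N;
# Modern (summation formula) estimator: average of f over the x-sample, times Lbox
area_modern := (b - a)*Mean(map(f, X));

Repeating the first estimator only ever returns multiples of S/N, whereas the second one varies continuously; this is exactly the issue discussed below.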

The question is: am I assessing the same thing when I apply either method? And, perhaps more importantly, do my estimators have the same properties?


And here appears a first problem:

• Whatever the number of times you repeat the Historical sampling method, even with different points, you will always get a number of points in the brown region between 0 and N inclusive, meaning that if S is the area of the gray box, the estimation of the brown area is always one of the numbers {0, S/N, 2S/N, ..., S}.
• By contrast, repetitions of the Modern approach lead to a continuum of values for this brown area.
• So, saying the two approaches might be equivalent amounts to saying that a discrete set is equivalent to an uncountable one.

If we remain at the elementary counting level, then, the Historical and Modern viewpoints are not equivalent.

    Towards a probabilistic model of the Historical Process:
    This goes against everything you may have heard or read: so, are the authors of these statements all wrong?

Yes, from a strict Historical point of view; but happily not if we interpret the Historical approach in a looser, probabilistic manner (although this still needs to be considered carefully, as shown in the main worksheet).

This probabilistic manner relies upon a probabilistic model of the Historical process, where the event "K points out of N belong to the brown area" is interpreted as the realization of a very special random variable named Poisson-Binomial (do not worry if you have never heard of it: a lot of statisticians have not either).

    In a few words, whereas a Binomial random variable is the sum of several independent and identically distributed Bernoulli random variables, a Poisson-Binomial random variable is the sum of several independent but not necessarily identically distributed Bernoulli random variables. Thus the Poisson-Binomial distribution generalizes the Binomial one.
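As a small illustration (a minimal sketch, not taken from the attached worksheet; the success probabilities p below are hypothetical), a Poisson-Binomial variable can be simulated in Maple by summing Bernoulli draws with different probabilities, each drawn as a Binomial(1, p[j]) sample:

with(Statistics):
p := [0.2, 0.5, 0.7, 0.9]:                        # hypothetical, non-identical success probabilities
n := 1000:                                        # number of Poisson-Binomial realizations
B := [seq(Sample(Binomial(1, p[j]), n), j = 1 .. nops(p))]:  # independent Bernoulli(p[j]) samples
PB := add(B[j], j = 1 .. nops(p)):                # each entry of PB is one Poisson-Binomial draw
Mean(PB), add(p[j], j = 1 .. nops(p));            # empirical mean vs. the theoretical mean sum(p[j])

The empirical mean should be close to the sum of the p[j], which is the kind of property used next to compare the expectations of the two area estimators.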

Using the properties of Poisson-Binomial random variables, one can prove in a rigorous way that the expectations of the area estimators for the Historical and Modern approaches are identical.

    So, given this "trick" the two methods are thus equivalent, are they not? And that settles it.

    In fact, no, the matter of equivalence still remains.

    When uncertainty enters the picture.
Generally we cannot satisfy ourselves with the estimation of the area alone, and we would like to have information about the reliability of this estimation. For instance, if I find this value is 0.6292, am I ready to bet my salary that I am right? Of course not, unless I am insane; but things would change if I were capable of saying, for instance, "I am 95% sure that the true value of the area is between 0.6 and 0.67".

For the Historical viewpoint the Poisson-Binomial model makes it possible to assess an uncertainty (not the uncertainty!) of the area estimation. But things are subtle, because there are different ways to compute an uncertainty:

• At the elementary level the height of the gray box is an essential parameter, but it does not necessarily give a good estimation of this uncertainty (one can easily reduce the latter arbitrarily close to 0!).
• To get a reliable uncertainty estimation, appealing to a branch of probability theory related to Extreme Value Theory (EVT for short) is necessary (all of this is explained in the attached worksheet).


From the Modern point of view it is enough to observe that there is no concept of "box height", and that it is then impossible to assess any uncertainty. Question: "If that is so, how can (all the) MCI procedures return an uncertainty value?"
The answer is simple: they consider a virtual encapsulating box whose height is the maximum of the func(xi). This trick enables providing an uncertainty, but it is a non-conservative estimation (an over-optimistic one if you prefer; in other terms, an estimation we must regard very carefully).

    So, at the end Historical and Modern approaches are equivalent only if we restrict to the estimation of the area, but no longer as soon as we are interested in the quality of this estimation.

    What does the attached file contain?
The attached file devotes a lot of attention to the estimation of the estimator uncertainty.
The core theory is named (Right) EndPoint theory (I found nothing on Wikipedia, nor any easy-to-read papers about it, so I more or less arbitrarily decided to refer to it by this name). Basically it enables assessing the (usually right) end-point of a distribution known only through (right-)censored data.
The simplest example is that of a New York pedestrian who looks at taxi numbers and asks himself how to assess the highest number a taxi has. Here we know this number exists (meaning that some related distribution is bounded), but the situation can be more complex if one does not even know whether this distribution is bounded or not (in which case one seeks a right end-point whose probability of being exceeded is less than some small value).
A conservative, and thus reliable, uncertainty on the area estimator can only be derived in the framework of the end-point theory.
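To make the taxi example concrete, here is a tiny sketch using the classical "German tank" estimator for the maximum of a discrete uniform distribution. This is a standard textbook estimator, not the end-point machinery developed in the worksheet, and the true maximum Nmax below is of course hypothetical:

with(Statistics):
Nmax := 1000:                                 # hypothetical true highest taxi number
k := 20:                                      # number of taxis observed
obs := Sample(DiscreteUniform(1, Nmax), k):   # the observed taxi numbers
m := max(convert(obs, list)):                 # largest number actually seen
Nhat := m*(1 + 1/k) - 1;                      # classical unbiased estimate of Nmax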

Once the basics of this theory are understood, it becomes relatively simple to enhance the Historical approach to get estimators with smaller uncertainties.
I present different ways to do this: one (even if derived otherwise) is named Importance Sampling, and the other leads in a straightforward way to algorithms that are quite close to some used in the CUBA library (partially accessible through evalf/Int).

    The last important, if not fundamental, concept discussed in this article concerns the distinction between dispersion interval and confidence interval, concepts that are unfortunately not properly distinguished due to the imprecision of the English language (I apologize to native English speakers for these somewhat harsh words, but this is the reality here).

    Some references are provided in attached (main) worksheet, but please, if you don't want to end up even more confused than you were before, avoid Wikipedia.

    To sum up.
This note is a non-orthodox presentation of MCI centered around the Historical viewpoint which, I am convinced, deserves a little more attention than the disk-in-the-square picture commonly displayed in MCI courses and textbooks.
And I am all the more convinced of it because this old-fashioned (antiquated?) approach is an open door to some high-level probability theories such as the EndPoint and EVT ones.

Of course this post is not an advocacy against the Modern approach, and it does not mean that you should ignore classical texts, or that the Law of Large Numbers (LLN) and the Central Limit Theorem are useless in MCI.

    Maple, but not just Maple.
A part of the attached worksheet presents results I got with R (a programming language for statistical computing and data visualization), simply because Maple 2015 (and this is still true for Maple 2025) did not contain the functions I needed.

For instance, R implements the CUBA library in a far more complete way than Maple does (I give a critical discussion of the way Maple does it), enabling, among other things, changing the random seed.

    Main worksheet (I apologize in advance for typos that could remain in the texts)
    A_note_on_Monte-Carlo_Integration.mw

    The main worksheet refers to this one
    How_does_the_variance_of_f_impact_the_estimator_dispersion.mw

    Extra worksheet: An introduction to Importance Sampling
    Importance_Sampling.mw

    Hi again all,

    Was trying to be helpful at

    mathforums.com

    and made these two Maple files.  

    simple_square_root_loop.mw

    simple_square_root_loop.pdf

    Hope that helps.

    Maple is the best :-)

    goodbye for now.

     

    Matthew


Under the name of mmcdara (unfortunately inaccessible since the major July 2025 MaplePrimes outage, and probably lost forever, God rest his soul), I published a post about the Multivariate Normal Distribution two years ago.

    The current post continues in the same vein and presents the construction of a few new Multivariate Random Variables (MRV for short) named Multinomial (see for instance this recent question), Dirichlet, Categorical and related compound distributions.
I advise interested readers to take a quick look at these names on Wikipedia (more specific references are given at the top of the worksheet).

As I explained (in fact as my alter ego did) in Multivariate Normal Distribution, the Statistics package is limited to univariate random variables, and thus implementing MRVs requires a little cunning.
    Here is a list of a few problems you face:

• Whereas the expectation (sometimes named "mean") of a univariate random variable is a number or an expression, the expectation of an MRV is a vector (or a list, an n-tuple, ...) of numbers or expressions.

    So far, so good, except that the Mean attribute of Distribution can only be a scalar quantity. So if you want to assign a vector to Mean you have to code it some way and do something like Decode(Mean(My_MRV)) to get the expectation in a vector form.
     

• The Variance case is even trickier, because MRV variances are matrices.
       
• Beyond this, some very useful attributes like ParentName and Parameters cannot be instantiated in the definition of user random variables (whether they are MRVs or not), implying here again a bit of gymnastics to, if not really instantiate these attributes, at least be able to retrieve them when needed.
       
• Finally, last but not least, RandomSample is not appropriate for sampling MRVs, for reasons which are explained in the attached worksheet (a purely illustrative sampling sketch follows this list).
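Purely as an illustration (a minimal sketch, assuming category probabilities p and N trials; this is not the implementation used in the attached file), here is one way to draw a single Multinomial realization by binning uniform draws:

MultinomialDraw := proc(p::list, N::posint)
  # one Multinomial(N, p) realization obtained by counting N categorical draws
  local cum, u, counts, i, k;
  cum := ListTools:-PartialSums(p);                  # cumulative probabilities
  u := Statistics:-Sample(Uniform(0, 1), N);         # N uniform draws on (0,1)
  counts := Array(1 .. nops(p), fill = 0);
  for i to N do
    k := 1;
    while u[i] > cum[k] do k := k + 1 end do;        # category of the i-th draw
    counts[k] := counts[k] + 1;
  end do;
  convert(counts, list);
end proc:
MultinomialDraw([0.2, 0.3, 0.5], 10);                # counts over the 3 categories, summing to 10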


The file below contains more than 20 procedures enabling the definition of the studied MRVs, the decoding of the coded attributes, visualization (which is not that immediate because the supports of the MRVs I focus on are simplexes), parameter estimation against empirical observations (frequentist and Bayesian points of view), and so on.

    Multinomial_Dirichlet_and_so_on.mw

    Nevertheless, there is still a lot missing, but at some point I believe we need to decide that the work is over.

     

    On the very first day of class, a student once told math educator Sam Densley: “Your class feels safe.”

[Image: an open classroom door with students inside]

    Honestly, I can’t think of a better compliment for a teacher. I reflected on this in a LinkedIn post, and I want to share those thoughts here too.

    A Story of Struggle

    I rarely admit this, because it still carries a sting of shame. In my role at Maplesoft, people often assume I was naturally good at math. The truth is, I wasn’t. I had to work hard, and I failed along the way.

    In fact, I failed my very first engineering course, Fundamentals of Electrical Engineering. Not once, but twice. The third time, I finally earned an A.

    That second failure nearly crushed me. The first time, I told myself I was just adjusting to university life. But failing again, while my friends all passed easily, left me feeling stupid, ashamed, and like I didn’t belong.

    When I got the news, I called my father. He left work to meet me, and instead of offering empty reassurances, he did something unexpected: he told me about his own struggles in school, the courses he failed, the moments he nearly gave up. Here was someone I admired, a successful engineer, admitting that he had stumbled too.

    In that moment, the weight lifted. I wasn’t dumb. I wasn’t alone.

    That experience has stayed with me ever since: the shame, the anxiety, the voice in my head whispering “I’m not cut out for this.” But also the relief of realizing I wasn’t the only one. And that’s why I believe vulnerability is key.

    When teachers open up, something powerful happens:

    • Students stop thinking they’re the only ones who feel lost.
    • They see that failure isn’t the end; it’s part of the process.
    • It gives students permission to be honest about their own struggles.

    That’s how you chip away at math anxiety and help students believe: “I can do this too.”

    Why Vulnerability Matters

[Image: an abstract metallic mask with mathematical symbols]

    I can’t recall a single teacher in my own schooling who openly acknowledged their academic struggles. Why is that?

    We tell students that “struggle is normal,” but simply saying the words isn’t enough. Students need to see it in us.

    When teachers hide their struggles, students assume they’re the only ones who falter. That’s when math anxiety takes root. But when teachers are vulnerable, the cycle breaks. Students realize that struggle doesn’t mean they’re “bad at math.” It means they’re learning. Vulnerability builds trust, and trust is the foundation of a safe classroom.

    What I Hear from Instructors

    In my work at Maplesoft, I often hear instructors say: “Students don’t come to office hours — I wish they did.”

    And I get it. Sometimes students are too anxious or hesitant to ask for help, even when a teacher makes it clear they’re available. That’s one of the reasons we built the Student Success Platform. It gives instructors a way to see where students are struggling without calling anyone out. Even if students stay silent, their struggles don’t stay invisible.

    But tools can only go so far. They can reveal where students need support and even help illuminate concepts in new ways. What they can’t do is replace a teacher. Real learning happens when students feel safe, and that safety comes from trust. Trust isn’t built on flawless lectures or perfect answers. It grows when teachers are willing to be human, willing to admit they’ve struggled too.

    That’s when students believe you mean it. And that’s when they’re more likely to walk through the door and ask for help.

    The Real Lesson

    Ultimately, what matters most in the classroom, whether in mathematics or any other subject, isn’t perfection. It’s effort.

    As a new school year begins, it’s worth remembering:

    • Students don’t just need formulas.
    • They need to know struggle is normal.
    • They need to know questions are welcome.
    • They need to know the classroom is safe enough to try.

    Because long after they move on, that’s what they’ll remember: not just what they learned, but how they felt.

    The need to solve quadratic equations never seems to disappear. Whether it is completing a physics problem, solving a differential equation, or performing equilibrium calculations in chemistry, quadratic equations are an integral part of all STEM-based disciplines.

     

    Depending on the complexity of the quadratic equation, the typical 'guess-and-check' method taught in most high school classes can often be frustrating and time-consuming. Professor of mathematics Dr. Po-Shen Loh, in his new method shown here, recognizes some important properties of solutions to quadratic equations and integrates them into a more intuitive approach that students are much more likely to feel motivated by.

     

    For example, consider the equation x^2 - 14x + 45 = 0. Most students are taught to first factor this equation by thinking of two numbers that multiply to 45 and add to -14. After trying multiple values, we would discover that those values are -5 and -9. We would use these values to factor the equation into the form (x-5)*(x-9) = 0. Setting each factor equal to zero, we would get x = 5 or x = 9. Equivalently, to solve for x more directly, we need two numbers that multiply to 45 and add to 14 (again, x = 5 and x = 9).
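For readers who want to check this in Maple itself, the factorization and the roots of this example can be confirmed directly:

factor(x^2 - 14*x + 45);        # returns (x - 5)*(x - 9)
solve(x^2 - 14*x + 45 = 0, x);  # returns the roots 5 and 9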

     

    The only way to speed up this process of guess-and-check is to do enough similar problems until the guesses become second nature. Not to mention, this becomes exponentially more difficult as the coefficient on x^2 increases (for example, solving the equation 6x^2 + 7x - 20 = 0).

     

    For the example above, Dr. Loh's method builds on a simple starting point:

     

    (i) We know that the numbers (call them R and S) add to 14

    (ii) We know that since the numbers add to 14, they must have a mean value of 14/2 = 7

    (iii) If the two numbers have an average of 7, they must be an equal 'distance' (call this distance z) from 7

    (iv) We can write the two numbers as R = 7+z and S = 7-z

    (v) Since the numbers R and S multiply to 45, then (7+z)*(7-z) = 45 ⇒ 49 - z^2 = 45. In other words, z^2 = 4, so z = +2 or z = -2

    (vi) The solution to the equation is then R = 7+2 = 9 and S = 7-2 = 5 (as we predicted)

     

    We can generalize this idea for any complex coefficients a, b and c in the equation ax^2 + bx + c = 0 to actually prove the quadratic formula. However, using Dr. Loh's method on specific examples (as above) helps build intuition for why the quadratic formula works in the first place. Other proof methods such as completing the square are just as mathematically sound, but they do not utilize the mathematical instinct that makes solving a problem in mathematics so gratifying.
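As a small aside, the averaging steps above translate almost verbatim into a few lines of Maple. This sketch is mine, not part of Dr. Loh's write-up or the linked document, and it only handles the monic case x^2 + b*x + c = 0:

LohSolve := proc(b, c)
  local m, z;
  m := -b/2;              # the two roots average to -b/2
  z := sqrt(m^2 - c);     # their common distance z from that average
  [m + z, m - z];         # the roots, R = m + z and S = m - z
end proc:
LohSolve(-14, 45);        # returns [9, 5] for x^2 - 14*x + 45 = 0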

     

    Although I am currently a student working for Maplesoft, I had not used Maple Learn extensively beforehand. Dr. Loh's idea of creating a more intuitive way to solve such a conventional problem inspired me to create a document in Maple Learn, linked here, outlining the steps above.

     

    Learning new ways to solve a problem in mathematics is exciting, but it is often difficult to present in a way that is clear, visually-appealing and easy to create. Most online mathematical environments are difficult to navigate and typically lack visualizations to accompany an idea. With Maple Learn, it felt comforting to open a clean canvas where I was able to easily build a document in just a few hours that not only summarized the main ideas of this new method, but also showed the user why the method works using live animations and colour schemes (see some examples below).

     

     

    I surprised myself (as well as my managers) by how quickly I was able to transfer all of my ideas into the document. I could also split related content into groups and use collapsible sections to keep the document uncluttered and easy to read.

     

    I also took advantage of the freedom to explore other documents and directly reference them through hyperlinks.

     

    Sometimes it can be difficult to follow a new concept without having some background information. Adding these references makes it simple for the reader to access supporting documents and ensure there are no knowledge gaps to be filled along the way. Once you make a document, you also have the option to publish it to your own gallery and make it public for others to use and learn from.

     

    Maple Learn has been incredibly helpful for sharing the things that interest me the most. If you have something related to mathematics that excites you, try not to keep it to yourself. Consider using Maple Learn to share your ideas with the world and see your vision come to life!

    This post is written by a mathematics teacher who usually views Maple’s new initiatives from an educational perspective, and I’m well aware that others may see things differently. A single user might be delighted by a new feature that fits their personal workflow. An advanced user might not care if something requires a workaround.

    There are also many preferences when it comes to how the interface should look. I often consider whether something will work well for our high school as a whole. We have students who are not very mathematically or scientifically inclined, and others who are. That’s why user-friendliness is essential. Some packages have been developed to make things easier for students. We try to avoid too many workarounds, since these often create problems for them.

    Now, on to Maple 2025’s new interface:

    When Microsoft introduced tabs and ribbons instead of menus and toolbars in Word many years ago, I personally thought it was a good idea. I can imagine it working well in Maple too — especially if the different elements are placed logically on the tabs, and frequently used functions are easy to access.

    However, I just returned from summer vacation, ready for a new school year, only to discover something surprising: the Windows version comes with the new ribbon interface, while the Mac version still has the old one! For any teacher, this is a nightmare scenario: teaching a class where the Windows and Mac interfaces look completely different. Has Maplesoft ended up caught between two chairs here?

    I’ve heard that a Mac version with tabs and ribbons is under development. But since it’s not ready yet, we can’t use it. On Windows, I also noticed a strange extra application called “Maple 2025 Screen Readers”. If you open it directly, you get an odd mix of modern 2D notation and old 1D Maple notation, which is simply unacceptable. If you instead click “Screen Reader Mode” in the top-right corner, it looks more normal. But does that mean it’s fully functional? If so, we might be able to combine this with the Mac version that still uses the old interface — and then switch next year to both Windows and Mac with tabs and ribbons. Still, I must say that Maplesoft is providing far too little information on this! Around 75% of our students use Macs, while only 25% use Windows.

    Another issue: When saving a Maple file on a Windows computer, you’re forced into Maple’s own “Save As” window. I’ve previously suggested that it should instead open directly in Windows’ native File Explorer, which is far more powerful. In File Explorer, you can quickly use Quick Access shortcuts to save the file in the right folder. In Maple’s “Save As” window, however, it often takes 6–7 extra clicks to reach the desired location. For students who aren’t very tech-savvy, navigating through a deep folder tree can be a real challenge. Why doesn’t Maplesoft just use Windows’ own File Explorer, which students are already familiar with? Most other programs do. Perhaps someone can explain why Maplesoft insists on keeping their own limited “Save As” dialog.

    Finally: I do believe that tabs and ribbons can be a good solution, but there’s still work to be done in placing items on appropriate tabs. For example, although I personally use the F5 keyboard shortcut to switch between Text, Non-executable Math, and Math mode, I know many students prefer to click on these options in Maple 2024. In the new interface, it now takes two or three clicks to do so. Since this is a function used very frequently, that’s a drawback. Couldn’t users be allowed to customize the Quick Access toolbar — via the Options menu — so these items can be placed there if needed?

     

     

    With the launch of ChatGPT 5.0, many people are testing it out and circulating their results. In our “random” Slack channel, where we share anything interesting that crosses our path, Filipe from IT posted one that stood out. He’d come across a simple math problem, double-checked it himself, and confirmed it was real:

[Image: the ChatGPT 5.0 example]

    As you can see, the AI-generated solution walked through clean, logical-looking steps and somehow concluded:

    x = –0.21

    I have two engineering degrees, and if I hadn’t known there was an error, I might not have spotted it. If I’d been tired, distracted, or rushing, I would have almost certainly missed it because I would have assumed AI could handle something this simple.

    Most of us in the MaplePrimes community already understand that AI needs to be used with care. But our students may not always remember, especially at the start of the school year if they’ve already grown used to relying on AI without question. 

    And if we’re honest, trusting without double-checking isn’t new. Before AI, plenty of us took shortcuts: splitting up the work, swapping answers, and just assuming they were right. I remember doing it myself in university, sometimes without even thinking twice. The tools might be different now, but that habit of skipping the “are we sure?” step has been around for a long time.

    The difference now is that general-purpose AI tools such as ChatGPT have become the first place we turn for almost anything we do. They respond confidently and are often correct, which can lead us to become complacent. We trust them without question. If students develop the habit of doing this, especially while they are still learning, the stakes can be much higher as they carry those habits into work, research, and other areas of their lives.

    The example above is making its rounds on social media because it’s memorable. It’s a basic problem, yet the AI still got it wrong and in a way that’s easy to miss if you’re not paying attention.

    Using it in the classroom can be a great way to help students remember that AI’s answers need to be checked. It’s not about discouraging them from using AI, but about reinforcing the habit of verifying results and thinking critically about what they see.
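As a purely hypothetical illustration of that habit (the equation below is made up; the actual problem from the screenshot is not reproduced here), checking a claimed answer in Maple takes one line:

eq := 3*x - 7 = 11:     # a made-up equation
claimed := x = 5:       # a made-up "AI" answer to verify
eval(eq, claimed);      # gives 8 = 11, so the claim is wrong
solve(eq, x);           # Maple returns the correct root, 6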

    So here’s my suggestion:

    • Show this example in your class, no matter the subject. If your students are using AI, they’ll benefit from seeing it.
    • Spend 10 minutes discussing it.
    • Use it as a jumping-off point to talk about what’s OK and not OK when using AI for your course.
    • Share other examples like this throughout the year as small reminders, so “critical thinking” becomes second nature.

    This isn’t just about catching an AI’s bad subtraction. It’s about building a culture of verification and reasoning in our students. The tools will keep improving, but so will the temptation to turn off our own thinking.

    If we can help students get into the habit of checking, AI can be a powerful partner without putting them on autopilot.

    To the MaplePrimes community: How do you talk to your students to help them build strong habits when working with AI? Do you bring in examples like this one, or use other strategies? I’d love it if you could share your thoughts, tips, and ideas.

     

    I was wondering whether MapleSoft has opted for an outdated concept. Here is Gemini's answer:

    Yes, there are many alternatives to ribbon-based GUIs. The ribbon UI, while popular and effective for many applications (especially those with a large number of functions like Microsoft Office), isn't the only way to design an interface. Here are some common alternatives:

     

    • Traditional Menus and Toolbars: This is the classic approach seen in older software. A menu bar at the top provides access to all commands, while toolbars offer quick-access icons for the most common functions. Many applications, especially on Linux, still use this model. A variation is to allow the user to customize their toolbars.

    • Command Palettes: This is a popular approach in modern text editors and IDEs (like Visual Studio Code or Sublime Text). The user presses a keyboard shortcut to bring up a search bar that allows them to quickly find and execute any command in the application. This is highly efficient for power users who know what they're looking for.

    • Contextual Menus (Right-Click Menus): These menus appear when a user right-clicks on an element. They present a list of actions relevant to the selected item. This is a very common and intuitive way to provide functionality without cluttering the main interface.

    • Sidebar or Panel-Based Interfaces: This design uses a vertical panel, often on the left or right side of the screen, to display tools, options, or settings. This is often seen in design software, where the user can access layers, brushes, or properties from a dedicated panel.

    • Hamburger Menus: A three-line icon (the "hamburger") that, when clicked, reveals a menu of options. This is a very common design in mobile applications and is increasingly used in desktop web applications and software to save screen space.

    • Floating Toolbars/Heads-Up Display (HUD): These are toolbars that appear near the user's cursor or the selected object. This keeps the tools close to where the user is working, reducing mouse travel. This is a common pattern in creative applications like 3D modeling or video editing software.

    The best choice depends on the type of application, the target user, and the platform. For applications with a huge number of features, a ribbon can be a good way to organize them visually. For applications with a more limited set of functions, or for a mobile-first design, a simpler approach is often better.

Nothing excitingly new, but it offers the insight that Maple's GUI is a hybrid that adds other concepts to the ribbon. Since the Maple ribbon is young and not part of the training data set, I asked for a better GUI for Word instead:

    A "better" GUI for Word is subjective, as different users have different needs. The current ribbon interface is great for discovering features, but it can also feel cluttered.

    So, instead of just one idea, here are three different approaches that could improve the Word GUI for different types of users:

     

The Adaptive Interface

This GUI would learn from your habits. Instead of a static ribbon, the tabs and icons would change based on what you're doing.

    • Smart Ribbon: The ribbon would automatically switch to the most relevant tab. For example, if you highlight text, the "Home" tab would expand with formatting options like font size, bold, and color. If you're near a table, the "Table Design" and "Layout" tabs would appear instantly.

    • Contextual Command Palette: A simple keyboard shortcut (like Ctrl + K) would bring up a search bar in the middle of the screen. You could type "insert table of contents" or "change line spacing" and the command would execute without you having to hunt for it in the ribbon. This is a much faster option for power users.

     

The Minimalist Interface

This approach would strip away everything that isn't essential for a focused writing experience.

    • Floating Toolbar: When you start typing, the entire ribbon disappears. When you highlight text, a small, floating toolbar with only the most basic formatting options (bold, italic, font size, bullet points) appears right next to your cursor.

    • "Focus Mode" on Steroids: A dedicated "Focus Mode" would hide all menus, toolbars, and even the scroll bar. You would see only your document on a clean background. All advanced functions would be accessible through the command palette or a single-click button that temporarily brings back the full ribbon.

     

The Modular Interface

This design would let users build their own ideal workspace.

    • Customizable Side Panels: Instead of one giant ribbon, the GUI would be made of modular side panels that you could dock, undock, and resize. You could have a "Styles" panel on the left and a "Review" panel on the right, or a "References" panel that's only visible when you're working on a bibliography.

    • "My Tab": Users could create their own custom ribbon tab called "My Tab" and drag and drop their most-used commands into it. For example, a student might have "Footnote," "Citations," and "Table of Contents" all on one tab for easy access.

    Each of these designs solves a different problem with the current ribbon. The Adaptive Interface helps you find what you need quickly, the Minimalist Interface promotes focus, and the Modular Interface gives you complete control over your workspace.

The highlighted passages point to areas that have already been mentioned in many comments on the GUI. However, there is one more issue that really stands in the way of my working with the new GUI. It could be summarized as "too many clicks". The former GUI (less intuitive, IMO) was, in this respect, much better optimized for productivity.
For example, inserting a text passage, or putting the cursor into a text passage, implies that the user most likely desires formatting options. With the new GUI it is necessary to click on the Edit tab (after having clicked on the Insert tab), or to take an even more click-heavy route via the quick access bar, which for this use case is not really quick.

With the former GUI, which had a larger quick access bar and a toolbar (shown in red in the screenshot), only one click was required, with substantially less mouse movement.

    Personally I would switch to the new GUI with the following improvements

    • a quick access bar that is customizable
    • a smart ribbon that switches to the edit mode tab when the cursor is placed on editable text or a new text/input/document block is inserted

Having the functions that I use most frequently available in the quick access toolbar (highlighted in yellow) would allow me to minimize the ribbon while keeping the same productivity, with even more screen space than before.

    Keyboard shortcuts that differ from standard OS shortcuts are not a viable alternative for me.

    Overall, the direction with the new ribbon seems to be right to get new users productive faster. It seems to be a good choice without clear alternatives, and its graphical design aligns much better with the core values Maple provides.

    However, becoming productive fast does not mean that the productivity is high. From this perspective the former GUI is not outdated yet. The workflow with it is much faster and more focussed on math and code.

    Perhaps MapleSoft has solutions that will make the new GUI even more productive than the former GUI. This would be great!

In Maple 2025 there are many strange issues, such as plot errors in the Math Apps under Computer Science > Boolean Algebra that did not occur in Maple 2024. Furthermore, in the 2025 version, when you open the list of packages to load, some packages are hidden, and you must hide the taskbar to see them. Finally, the ribbon interface in Maple 2025 is really not suitable: restart and startup code should not be placed on the Home tab, some interface elements should be removed, or an option to retain the 2024 interface should be provided. I sincerely hope my suggestions are taken into consideration. Thank you.


    The Summer Issue of Maple Transactions has been published.  There are articles from a range of interests: research, education, and personal stories.

    Have a look, and I hope you find something of value in the issue.

     

    We are pleased to announce that the registration for the Maple Conference 2025 is now open!

    Like the last few years, this year’s conference will be a free virtual event. Please visit the conference page for more information on how to register.

    This year we are offering a number of new sessions, including more product training options, and an Audience Choice session.
    Also included in this year's registration is access to an in-depth Maple workshop day presented by Maplesoft's R&D members following the conference.  You can find an overview of the program on the Sessions page. Those who register before September 14th, 2025 will have a chance to vote for the topics they want to learn more about during the Audience Choice session.

    We hope to see you there!

    Hi,

    look at this Maple code.

    short_list_prime_factorization_fun.mw

    short_list_prime_factorization_fun.pdf

    Have a good day

    Matthew

    Hi,

check out this Maple code

    positive_odd_integer_factorization_data.pdf

    positive_odd_integer_factorization_data.mw

    that is all

    Regards,

    Matthew

    Hi,

    have some Maple code to share.

    prime_triplet_0_4_6.mw

    prime_triplet_0_4_6.pdf

    Enjoy

    Matthew

    ps Prime numbers are fun

    see https://t5k.org/

     
