MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

Latest Post
  • Creating the Graph Equation of an Apple in Cartesian Space using a single Implicit Function only, run by Maple software

    Enjoy...

    Please click the link below to see the full equation in the Maple file:

    2._Apel_3D_A.mw

     

    Creating the Graph Equation of a Candle on the Cartesian Plane using a single Implicit Function only, run by Maple software

    Enjoy...

    3D_Candle.mw

    Sea_Shells.mw

     

     

     

    Today I'm very grateful to have the inspiration to create the Graph Equation of a 3D Candle in Cartesian Space using a single 3D Implicit Function only, run by Maple software.

    Enjoy... 

    Candle_1.mw

     

    Today I got the inspiration to create the graph equation of a "Petrol Truck" using only a single implicit equation in Cartesian space, run by Maple software.

    Maple software is amazing...

    Enjoy...

     

    CREATING THE 3D GRAPH EQUATION OF A BACTERIOPHAGE USING ONLY A SINGLE IMPLICIT EQUATION IN CARTESIAN SPACE, RUN BY MAPLE SOFTWARE

    MAPLE SOFTWARE IS AMAZING...

    ENJOY...

     

    GRAPH EQUATION OF A FEATHER

    AND THE EQUATION IS:

    ENJOY...

    I like this equation and am posting it because it is so beautiful...

    Click the link below to see the full equation and download the Maple file:

    Bulu_Angsa_3.mw

     

    GRAPH EQUATION OF "383" CREATED BY DHIMAS MAHARDIKA

    ENJOY...

    with(plots):

    DHIMAS MAHARDIKA EQUATION

    plots:-implicitplot(15-8.*cos(y)^(79/2)-32.*cos(y)^(37/2)+96.*cos(y)^(33/2)-96.*cos(y)^(29/2)+4.*cos(x)^(61/2)+4.*cos(x)^(31/2)-12.*cos(x)^(27/2)+12.*cos(x)^(23/2)+24.*cos(y)^29-48.*cos(y)^27+16.*cos(y)^8-64.*cos(y)^6+96.*cos(y)^4-4.*cos(x)^(19/2)-6.*cos(x)^19-4.*cos(x)^(57/2)+32.*cos(y)^(25/2)+24.*cos(y)^25+8.*cos(y)^(75/2)-cos(x)^38+cos(y)^50-64.*cos(y)^2+4.*cos(x)^2-6.*cos(x)^4+4.*cos(x)^6-cos(x)^8+12.*cos(x)^21-6.*cos(x)^23, x = -15 .. 15, y = -15 .. 15, numpoints = 50000, thickness = 4, colour = blue)

     

    NULL

    Download 383.mw

     

    Drawing the Eiffel Tower using an Implicit Equation in Cartesian Space

     

    The recent Maple 2023 release comes with a multitude of new features, including a new Canvas Scripting Gallery full of templates for creating interactive Maple Learn documents.

    The Maple Learn Scripting Gallery can be accessed through Maple, by searching “BuildInteractiveContent Maple2023” in the search bar at the top of the application and clicking on the only result that appears. This will bring you to the help page titled “Build and Share Interactive Content”, which can also be found by searching “scripting gallery” in the search bar of a Maple help page window. The link to the Maple Learn Scripting Gallery is found under the “Canvas Scripting” section on this help page and clicking on it will open a Maple workbook full of examples and templates for you to explore.

    The interactive content in the Scripting Gallery is organized into five main categories – Graphing, Visualization, Quiz, Add-ons and Options, and Applications Optimized for Maple Learn – each with its own sub-categories, templates, and examples.

    One of the example scripts that I find particularly interesting is the “Normal Distribution” script, under the Visualizations category.

     

     

    All of the code for each of the examples and templates in the gallery is provided, so we can see exactly how the Normal Distribution script creates a Maple Learn canvas. It displays a list of grades, a plot for the grade distribution to later appear on, math groups for the data’s mean and variance, and finally a “Calculate” button that runs a function called UpdateStats.

    The initial grades loaded into the document result in the below plot, created using Maple’s DensityPlot and Histogram functions, from the Statistics package. 




     

    The UpdateStats function takes the data provided in the list of grades and uses a helper function, getDist, to generate the new plot to display the data, the distribution, the mean, and the variance. Then, the function uses a Script object to update the Maple Learn canvas with the new plot and information.

    The rest of the code is contained in the getDist function, which uses a variety of functions from Maple’s Statistics package. The Normal Distribution script takes advantage of Maple’s ability to easily calculate mean and variance for data sets, and to use that information to create different types of random variable distributions.
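    To get a feel for what such a helper does, here is a minimal stand-alone sketch using the same Statistics commands. The grade list, names, and plotting choices below are my own illustration; the actual UpdateStats and getDist code in the gallery may differ.

    restart:
    with(Statistics):

    # Example data (made up for this sketch, not the gallery's grade list)
    grades := [62, 71, 75, 78, 80, 83, 85, 88, 91, 95]:

    mu    := Mean(grades):                 # sample mean
    sigma := StandardDeviation(grades):    # sample standard deviation

    # Fit a normal random variable to the data and overlay its density on a histogram
    X := RandomVariable(Normal(mu, sigma)):
    plots:-display(
        Histogram(grades, frequencyscale = relative),
        DensityPlot(X, thickness = 3)
    );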

    Using the “Interactive Visualization” template, provided in the gallery, many more interactive documents can be created, like this Polyhedra Visualization and this Damped Harmonic Oscillator – both from the Scripting Gallery – or like my own Linear Regression: Method of Least Squares document.


     

    Another new feature of Maple 2023 is the Quiz Builder, also featured in the Scripting Gallery. Quizzes created using Quiz Builder can be displayed in Maple or launched as Maple Learn quizzes, and the process for creating such a quiz is short.

    The QuizBuilder template also provides access to many structured examples, available from a dropdown list:


    As an example, check out this Maple Learn quiz on Expected Value: Continuous Practice. Here is what the quiz looks like when generated in Maple:


     

    This quiz, in particular, is “Fill-in the blank” style, but Maple users can also choose “Multiple Choice”, “True/False”, “Multiple Select”, or “Multi-Line Feedback”. It also makes use of all of the featured code regions from the template, providing functionality for checking inputted answers, generating more questions, showing comprehensive solutions, and providing a hint at the press of a button.

    Check out the Maple Learn Scripting Gallery for yourself and see what kinds of interactive content you can make for Maple and Maple Learn!

     

    Maple 2023: The colorbar option for contour plots does not work when used with the Explore command. See the example below.

    No_colorbar_when_exploring_contour_plots.mw
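    For readers who want to try it without opening the worksheet, a minimal reproduction might look like the following (a sketch; the uploaded worksheet and the exact option spelling may differ slightly):

    with(plots):

    # On its own, the contour plot shows its colour bar as expected
    contourplot(sin(x)*cos(y), x = -Pi .. Pi, y = -Pi .. Pi, colorbar = true);

    # Wrapped in Explore, the colour bar is reportedly not displayed
    Explore(contourplot(sin(a*x)*cos(y), x = -Pi .. Pi, y = -Pi .. Pi, colorbar = true),
            a = 1.0 .. 3.0);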
     

    Just installed Maple 2023 on a MacBook Air M1.

    Maple Tasks doesn't work :-(


    This post is closely related to this one, Centered Divided Difference approximations, whose purpose is to build Finite-Difference (FD) approximation schemes.
    In this latter post I also talked, without explicitly naming it, about the Truncation Error; see for instance https://en.wikiversity.org/wiki/Numerical_Analysis/Truncation_Errors.

    I am focusing here on a less well-known concept named the "Equivalent Equation" (sometimes called the "Modified Equation").
    The seminal paper (no free access, which is surprising since it was first published 50 years ago) is this one by Warming and Hyett https://www.sciencedirect.com/science/article/pii/0021999174900114.
    For a worked example you can look here https://guillod.org/teaching/m2-b004/TD2-solution.pdf.
    An alternative method to that of Warming and Hyett to derive the Equivalent Equation is given here (in French)
    http://www.numdam.org/item/M2AN_1997__31_4_459_0.pdf (Carpentier et al.)

    I have never heard of the concept of the Modified Equation applied to elliptic PDEs; its main domain of application is advection PDEs (parabolic PDEs in simpler cases).

    Basically, any numerical scheme for solving an ODE or a PDE has a Truncation Error: this means there is some discrepancy between the exact solution of this PDE and the solution the scheme provides.
    This discrepancy depends on the truncation error, in space alone for ODEs or elliptic PDEs, or in space and time for hyperbolic or advection PDEs.

    One main problem with the Truncation Error is that it does not, by itself, tell us what impact it will have on the computed solution. For instance, will the scheme introduce diffusion, antidiffusion, scattering, ...?
    The aim of the Modified Equation is to answer these questions by constructing the continuous equation that a given numerical scheme solves with zero truncation error.
    This is the original point of view Warming and Hyett developed in their 1974 paper.

    This is a subject I worked on 30 years ago (Maple V), and again in 2010.
    It is very easy with Maple to do the painstaking development that Warming and Hyett did by hand half a century ago. It is even so easy that the trick used by Carpentier et al. to make the calculations easier has lost some of its interest.

    Two examples are given in the attached file
    EquivalentEquation.mw
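    To give the flavour of the computation without opening the worksheet, here is a minimal sketch (not the attached file's code) for the first-order upwind scheme applied to the advection equation u_t + c*u_x = 0 with c > 0:

    restart:

    # Residual of the upwind scheme, written in terms of the exact solution u(x, t)
    R := (u(x, t + dt) - u(x, t))/dt + c*(u(x, t) - u(x - dx, t))/dx:

    # Taylor expansion about (x, t): the leading part reproduces u_t + c*u_x,
    # the remaining terms are the truncation error
    T := convert(mtaylor(R, [dt = 0, dx = 0], 4), diff);

    # Warming & Hyett's step is then to eliminate the time derivatives in the error
    # terms using the scheme itself (at leading order u_tt = c^2*u_xx), which shows
    # that the scheme really solves an advection-diffusion equation
    #    u_t + c*u_x = (c*dx/2)*(1 - c*dt/dx)*u_xx + ...
    # i.e. the "modified" or "equivalent" equation, with its numerical diffusion
    # made explicit.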

    If a moderator thinks that this post should be merged with Centered Divided Difference approximations, he can do it freely, I won't be offended.
     


    This code enables building Centered Divided Difference (CDD) approximations of derivatives of a univariate function.
    Depending on the stencil we choose we can get arbitrary high order approximations.

    The extension to bivariate functions is based upon what is often named tensorization in numerical analysis: for instance diff(f(x, y), [x, y]) is obtained this way (the description here is purely notional):

    1. Let CDD_x be the CDD approximation of diff(f(x), x).
      CDD_x is a linear combination of shifted replicates of f(x).
    2. Let s be one of these shifted replicates.
      Let CDD_y(s) be the CDD approximation of diff(s(y), y).
    3. Replace in CDD_x each shifted replicate by its corresponding expression CDD_y(s).


    REMARKS:

    • When I write for instance "approximation of diff(f(x), x)", this must be intended as a short for "approximation of diff(f(x), x) at point x=a"
    • I agree that a notation like, for instance, diff(f(a), a) is not rigorous and that something like a Leibniz notation would be better. Unfortunately I don't know how to get it when I use mtaylor.
       

    restart:


    CDDF stands for Centered Divided Difference Formula

    CDDF := proc(f, A, H, n, stencil)
      description "f = target function,\nA = point where the derivatives are approximated,\nH = step,\nn = order of the derivative,\nstencil = list of points for the divided difference\n";
      local tay, p, T, sol, unknown, Unknown, expr:

      tay := (s, m) -> convert(
                         eval(
                           convert(
                             taylor(op(0, f)(op(1, f)), op(1, f)=A, m),
                             Diff
                           ),
                           op(1, f)=A+s*H),
                         polynom
                       ):

      p   := numelems(stencil):
      T   := add(alpha[i]*tay(i, p+1), i in stencil):
      T   := convert(%, diff):

      if p > n+1 then
        sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..p]))], [seq(alpha[i], i in stencil)])[];
      else
        sol := solve([seq(coeff(T, H, i)=0, i in subsop(n+1=NULL, [$0..n]))], [seq(alpha[i], i in stencil)])[];
      end if:

      if `and`(is~(rhs~(sol)=~0)[]) then
        WARNING("no solution found"):
        return
      else
        unknown := `union`(indets~(rhs~(sol))[])[];
        Unknown := simplify(solve(eval(T, sol) = Diff(op(0, f)(A), A$n), unknown));
        sol     := lhs~(sol) =~ eval(rhs~(sol), unknown=Unknown);
        expr    := normal(eval(add(alpha[i]*op(0, f)(A+i*H), i in stencil), sol));
      end if:

      return expr
    end proc:

    Describe(CDDF)


    # f = target function,
    # A = point where the derivatives are approximated,
    # H = step,
    # n = order of the derivative,
    # stencil = list of points for the divided difference
    #
    CDDF( f, A, H, n, stencil )
     

     

    # 2-point approximation of diff(f(x), x) | x=a

    CDDF(f(x), a, h, 1, [-1, 1]);
    convert(simplify(mtaylor(%, h=0, 4)), Diff);

    -(1/2)*(f(a-h)-f(a+h))/h

     

    Diff(f(a), a)+(1/6)*(Diff(Diff(Diff(f(a), a), a), a))*h^2

    (1)

    # 3-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [-1, 0, 1]);
    convert(simplify(mtaylor(%, h=0)), Diff);

    -(-f(a-h)+2*f(a)-f(a+h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/12)*(Diff(Diff(Diff(Diff(f(a), a), a), a), a))*h^2

    (2)

    # 5-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    -(1/12)*(f(a-2*h)-16*f(a-h)+30*f(a)-16*f(a+h)+f(a+2*h))/h^2

     

    Diff(Diff(f(a), a), a)-(1/90)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^4

    (3)

    # 7-point approximation of diff(f(x), x$2) | x=a

    CDDF(f(x), a, h, 2, [$-3..3]);
    # simplify(taylor(%, h=0, 10));
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    -(1/180)*(-2*f(a-3*h)+27*f(a-2*h)-270*f(a-h)+490*f(a)-270*f(a+h)+27*f(a+2*h)-2*f(a+3*h))/h^2

     

    Diff(Diff(f(a), a), a)+(1/560)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^6

    (4)

    # 4-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-3/2..3/2, 1)]);
    convert(simplify(mtaylor(%, h=0, 6)), Diff);

    -(f(a-(3/2)*h)-3*f(a-(1/2)*h)+3*f(a+(1/2)*h)-f(a+(3/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)+(1/8)*(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a))*h^2

    (5)

    # 6-point staggered approximation of diff(f(x), x$3) | x=a

    CDDF(f(x), a, h, 3, [seq(-5/2..5/2, 1)]);
    # simplify(taylor(%, h=0, 8));
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (1/8)*(f(a-(5/2)*h)-13*f(a-(3/2)*h)+34*f(a-(1/2)*h)-34*f(a+(1/2)*h)+13*f(a+(3/2)*h)-f(a+(5/2)*h))/h^3

     

    Diff(Diff(Diff(f(a), a), a), a)-(37/1920)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a))*h^4

    (6)

    # 5-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-2..2]);
    convert(simplify(mtaylor(%, h=0, 8)), Diff);

    (f(a-2*h)-4*f(a-h)+6*f(a)-4*f(a+h)+f(a+2*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)+(1/6)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a))*h^2

    (7)

    # 7-point approximation of diff(f(x), x$4) | x=a

    CDDF(f(x), a, h, 4, [$-3..3]);
    convert(simplify(mtaylor(%, h=0, 10)), Diff);

    (1/6)*(-f(a-3*h)+12*f(a-2*h)-39*f(a-h)+56*f(a)-39*f(a+h)+12*f(a+2*h)-f(a+3*h))/h^4

     

    Diff(Diff(Diff(Diff(f(a), a), a), a), a)-(7/240)*(Diff(Diff(Diff(Diff(Diff(Diff(Diff(Diff(f(a), a), a), a), a), a), a), a), a))*h^4

    (8)


    A FEW 2D EXTENSIONS

    # diff(f(x, y), [x, y]) approximation over a (2 by 2)-point stencil

    stencil := [-1, 1]:

    # step 1: approximate diff(f(x, y), x) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 1, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0]), Diff)

    (1/4)*(f(a-h, b-k)-f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(k*h)

     

    Diff(Diff(f(a, b), a), b)+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(f(a, b), a), b), b), b))*k^2

    (9)

    # Approximation of diff(f(x, y), [x, x, y, y]) over a (3 by 3)-point stencil


    stencil := [-1, 0, 1]:

    # step 1: approximate diff(f(x, y), x$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y$2) over stencil < stencil > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 2, stencil)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 8), Diff)

    -(2*f(a, b-k)-4*f(a, b)+2*f(a, b+k)-f(a-h, b-k)+2*f(a-h, b)-f(a-h, b+k)-f(a+h, b-k)+2*f(a+h, b)-f(a+h, b+k))/(h^2*k^2)

     

    Diff(Diff(Diff(Diff(f(a, b), a), a), b), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b), b))*h^2+(1/12)*(Diff(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b), b))*k^2

    (10)

    # Approximation of diff(f(x, y), [x, x, y]) over a (3 by 2)-point stencil

    stencil_x := [-1, 0, 1]:
    stencil_y := [-1, 1]:

    # step 1: approximate diff(f(x, y), x$2) over stencil < stencil_x >

    fx  := CDDF(f(x), a, h, 2, stencil_x):
    fx  := eval(% , f=(u -> f[u](y))):
    ix  := [indets(fx, function)[]]:

    # step 2: approximate diff(g(y), y) over stencil < stencil_y > where
    #         g represents any function in fx.

    fxy := add(map(u -> CDDF(u, b, k, 1, stencil_y)*coeff(fx, u), ix)):

    # step 3: rewrite fxy in a more convenient form

    [seq(u=op([0, 0], u)(op([0, 1], u), op(1, u)), u in indets(fxy, function))]:
    fxy := simplify( eval(fxy, %) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (1/2)*(2*f(a, b-k)-2*f(a, b+k)-f(a-h, b-k)+f(a-h, b+k)-f(a+h, b-k)+f(a+h, b+k))/(h^2*k)

     

    Diff(Diff(Diff(f(a, b), a), a), b)+(1/12)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a), b))*h^2+(1/6)*(Diff(Diff(Diff(Diff(Diff(f(a, b), a), a), b), b), b))*k^2

    (11)

    # Approximation of the laplacian of f(x, y)

    stencil := [-1, 0, 1]:

    # step 1: approximate diff(f(x, y), x$2) and diff(f(x, y), y$2) over stencil < stencil >

    fx  := CDDF(f(x), a, h, 2, stencil):
    fy  := CDDF(f(y), b, k, 2, stencil):

    fxy := simplify( eval(fx, f=(u -> f(u, b))) + eval(fy, f=(u -> f(a, u))) );

    convert(mtaylor(fxy, [h=0, k=0], 6), Diff)

    (f(a-h, b)-2*f(a, b)+f(a+h, b))/h^2+(f(a, b-k)-2*f(a, b)+f(a, b+k))/k^2

     

    Diff(Diff(f(a, b), a), a)+Diff(Diff(f(a, b), b), b)+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), a), a), a), a))*h^2+(1/12)*(Diff(Diff(Diff(Diff(f(a, b), b), b), b), b))*k^2

    (12)

     


     

    Download CDD.mw

    Recently Mapleprimes user @vs140580  asked here about finding the shortest returning walk in a graph (explained more below). I provided an answer using the labelled adjacency matrix. @Carl Love pointed out that the storage requirements are significant. They can be reduced by storing only the vertices in the walks and not the walks themselves. This can be done surprisingly easily by redefining how matrix multiplication is done using Maple's LinearAlgebra:-Generic package.

    (The result is independent of the chosen vertex.)

    restart

    with(GraphTheory)

    Consider the following graph. We want to find, for a given vertex v, the shortest walk that visits all vertices and returns to v. A walk traverses the graph along the edges, and repeating an edge is allowed (as distinct from a path, in which an edge can be used only once).

    G := AddVertex(PathGraph(5), [6, 7]); AddEdge(G, {{3, 7}, {4, 6}, {6, 7}}); DrawGraph(G, layout = circle, size = [250, 250])

    GRAPHLN(undirected, unweighted, [1, 2, 3, 4, 5, 6, 7], Array(1..7, {(1) = {2}, (2) = {1, 3}, (3) = {2, 4}, (4) = {3, 5}, (5) = {4}, (6) = {}, (7) = {}}), `GRAPHLN/table/2`, 0)

    n := NumberOfVertices(G)

    7

    A := AdjacencyMatrix(G)

    As is well known, the (i, j) entry of A^k gives the number of walks of length k between vertices i and j. The labelled adjacency matrix shows the actual walks.

    Alab := `~`[`*`](A, Matrix(n, n, symbol = omega))

    Matrix(%id = 36893490455680637396)

    For example, the (3,3) entry of Alab^2 has 3 terms, one for each walk of length 2 that starts and ends at vertex 3. The omega[i, j] factors in a term give the edges visited.

    So the algorithm to find the shortest walk is to raise Alab to successively higher powers until one of the terms in the diagonal entry for the vertex of interest has indices for all the vertices.

    Alab^2

    Matrix(%id = 36893490455709504684)

    The expressions for higher powers get very large quickly, so an algorithm that retains only the sets of vertices in each term (as a set of sets) will use less storage space. We can therefore consider the following modified labelled adjacency matrix.

    B := Matrix(n, n, proc (i, j) options operator, arrow; if A[i, j] = 1 then {{i, j}} else {} end if end proc)

    Matrix(%id = 36893490455703852204)

    Now we need to modify how we do matrix multiplication, but Maple has the LinearAlgebra:-Generic package to do this. We can redefine addition and multiplication to operate correctly on the sets of sets.

    Consider two sets of sets of vertices for walks.

    a := {{7}, {1, 3}, {2, 6, 7}}; b := {{1, 2}, {2, 3, 8}}

    {{7}, {1, 3}, {2, 6, 7}}

    {{1, 2}, {2, 3, 8}}

    Addition is just combining the possibilities, and set union will do this. And addition of "zero" should add no possibilities, so we take {} as zero.

    `union`(a, b); `union`(a, {})

    {{7}, {1, 2}, {1, 3}, {2, 3, 8}, {2, 6, 7}}

    {{7}, {1, 3}, {2, 6, 7}}

    Multiplication is just combining all pairs by union. Notice here that {7} union {1,3,5} and {1,5} union {3,7} give the same result, but that we do not get duplicates in the set.

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, b))}

    {{1, 2, 3}, {1, 2, 7}, {1, 2, 3, 8}, {1, 2, 6, 7}, {2, 3, 7, 8}, {2, 3, 6, 7, 8}}

    The unit for multiplication should leave the set of sets unchanged, so {{}} can be used

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {{}}))}

    {{7}, {1, 3}, {2, 6, 7}}

    And we should check that zero multiplied by a is zero

    {seq(seq(`union`(i, j), `in`(i, a)), `in`(j, {}))}

    {}

    Define these operations for the LinearAlgebra:-Generic package:

    F[`=`] := `=`; F[`/`] := `/`; F[`0`] := {}; F[`1`] := {{}}; F[`+`] := `union`; F[`*`] := proc (x, y) options operator, arrow; {seq(seq(`union`(i, j), `in`(i, x)), `in`(j, y))} end proc

    Warning, (in F[*]) `j` is implicitly declared local

    Warning, (in F[*]) `i` is implicitly declared local

    Compare B^2 with Alab^2. We have lost information about the details of the walks except for the vertices visited.

    LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B, B)

    Matrix(%id = 36893490455680647164)

    So here is a procedure for finding the length of the shortest walk starting and ending at vertex v.

    WalkLength:=proc(G::Graph,v)
      uses GraphTheory;
      local x,y,i,j,vv,A,B,F,n,vertset;
      if IsDirected(G) then error "graph must be undirected" end if;
      if not member(v,Vertices(G),'vv') then error "vertex not in graph" end if;
      if not IsConnected(G) then return infinity end if;
      F[`=`]:=`=`:F[`/`]:=`/`: # not required but have to be specified
      F[`0`]:={}:
      F[`1`]:={{}}:
      F[`+`]:=`union`;
      F[`*`]:=(x,y)->{seq(seq(i union j, i in x), j in y)};
      n:=NumberOfVertices(G);
      vertset:={$(1..n)};
      A:=Matrix(n,n, (i, j)-> if AdjacencyMatrix(G)[i, j] = 1 then {{i, j}} else {} end if);
      B:=copy(A);
      for i from 2 do
        B:=LinearAlgebra:-Generic:-MatrixMatrixMultiply[F](B,A);
      until vertset in B[vv,vv];
      i;
    end proc:

    WalkLength(G, 1)

    10

    NULL

    Download WalksGenericSetOfSets.mw

    In an age where our lives are increasingly integrated online, cybersecurity is more important than ever. Cybersecurity is the practice of protecting online information, systems, and networks from malicious parties. Whenever you access your email, check your online banking, or make a post on Facebook, you are relying on cybersecurity systems to keep your personal information safe. 

    Requiring that users enter their password is a common security practice, but it is nowhere near hacker-proof. A common password-hacking strategy is the brute-force attack. This is when a hacker uses an automated program to guess random passwords until the right one is found. The dictionary attack is a similar hacking strategy, where guesses come from a list like the 10,000 Most Common Passwords.

    The easiest way to prevent this kind of breach is to use strong passwords. First, to protect against dictionary attacks, never use a common password like “1234” or “password”. Second, to protect against brute-force attacks, consider how the length and characters used affect the guessability. Hackers often start by guessing short passwords using limited types of characters, so the longer and more special characters used, the better.

    Using the Strong Password Exploration Maple Learn document, you can explore how susceptible your passwords may be to a brute-force attack. For example, a 6-character password using only lowercase letters and numbers could take as little as 2 seconds to hack.

    By contrast, an 8-character password using uppercase letters, lowercase letters, and 10 possible special characters could take more than 60 hours to crack.
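    The arithmetic behind such estimates is easy to reproduce; here is a small sketch, assuming a rate of roughly 10^9 guesses per second (the rate that matches the figures quoted above; real attack rates vary widely):

    restart:

    # Brute-force search space divided by an assumed guessing rate
    guesses_per_second := 1.0e9:
    crack_time := (alphabet_size, length) -> alphabet_size^length / guesses_per_second:

    crack_time(36, 6);            # 26 lowercase letters + 10 digits, 6 characters: ~2 seconds
    crack_time(62, 8) / 3600.0;   # 52 letters + 10 special characters, 8 characters: ~60 hours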

    These hacking times are only estimations, but they do provide insight into the relative strength of different passwords. To learn more about password possibilities, check out the Passwords Collection on Maple Learn

    The inverse problem of a mathematical question is often very interesting.

    I'm glad to see that Maple 2023 has added several new graph operations. The GraphTheory[ConormalProduct], GraphTheory[LexicographicProduct], GraphTheory[ModularProduct] and GraphTheory[StrongProduct] commands were introduced in Maple 2023.

    In fact, we often encounter their inverse problems in graph theory as well. Fortunately, most of them can find corresponding algorithms, but the implementation of these algorithms is almost nonexistent.

     

    I once asked a question involving the inverse operation of the lexicographic product.

    Today, I will introduce the inverse operation of line graph operations. (In fact, I am trying to approach these problems with a unified perspective.)

     

    Obtaining the line graph of a graph is easy, but what about the reverse? That is to say, testing whether a graph is a line graph. Moreover, if a graph g is the line graph of some other graph h, can we find h? (h may not be unique: the Whitney isomorphism theorem tells us that if the line graphs of two connected graphs are isomorphic, then the underlying graphs are isomorphic, except in the case of the triangle graph K_3 and the claw K_{1,3}, which have isomorphic line graphs but are not themselves isomorphic.)

    Wikipedia tells us that there are two approaches, one of which is to check if the graph contains any of the nine forbidden induced subgraphs. 

    Beineke's forbidden-subgraph characterization:  A graph is a line graph if and only if it does not contain one of these nine graphs as an induced subgraph.

    This approach can always be implemented, but it may not be very fast. Another approach is to use the linear time algorithms mentioned in the following article. 

    • Roussopoulos, N. D. (1973), "A max {m,n} algorithm for determining the graph H from its line graph G", Information Processing Letters, 2 (4): 108–112, doi:10.1016/0020-0190(73)90029-X, MR 0424435

    Or: 

    •   Lehot, Philippe G. H. (1974), "An optimal algorithm to detect a line graph and output its root graph", Journal of the ACM, 21 (4): 569–575, doi:10.1145/321850.321853, MR 0347690, S2CID 15036484.


    SageMath can do that: 

       root_graph()

    Return the root graph corresponding to the given graph.

        is_line_graph()

    Check whether a graph is a line graph.

    For example, K_{2,2,2,2} is not the line graph of any graph.

    K2222 = graphs.CompleteMultipartiteGraph([2, 2, 2, 2])
    C=K2222.is_line_graph(certificate=True)[1]
    C.show() # Containing the ninth forbidden induced subgraph.
    

     


     

    Another Sage example for showing that the complement of the Petersen graph is the line graph of K_5.

    P = graphs.PetersenGraph()
    C = P.complement()
    sage.graphs.line_graph.root_graph(C, verbose=False)
    

     

    (Graph on 5 vertices, {0: (0, 1), 1: (2, 3), 2: (0, 4), 3: (1, 3), 4: (2, 4), 5: (3, 4), 6: (1, 4), 7: (1, 2), 8: (0, 2), 9: (0, 3)})
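    The forward direction, at least, is already available in Maple's GraphTheory package, and it can confirm this particular root-graph relationship (a quick sketch):

    restart:
    with(GraphTheory): with(SpecialGraphs):

    # The complement of the Petersen graph is isomorphic to the line graph of K_5,
    # consistent with the Sage root_graph output above
    P  := PetersenGraph():
    C  := GraphComplement(P):
    L5 := LineGraph(CompleteGraph(5)):
    IsIsomorphic(C, L5);   # true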
    

     

    Following this line of thought, can Maple gradually implement the inverse operations of some standard graph operations? 

    Here are some examples:

    •   CartesianProduct
    •   TensorProduct
    •   ConormalProduct
    •   LexicographicProduct
    •   ModularProduct
    •   StrongProduct
    •   LineGraph
    •  GraphPower

    The areas of statistics and probability are my favorite in mathematics. This is because I like to be able to draw conclusions from data and predict the future with past trends. Probability is also fascinating to me since it allows us to make more educated decisions about real-life events. Since we are supposed to get a big snow storm in Waterloo, I thought I would write a blog post discussing conditional probability using the Probability Tree Generator, created by Miles Simmons.

    If the probability of snowfall on any given day during a Waterloo winter is 0.75, the probability that the schools are closed given that it has snowed is 0.6, and the probability that the schools are closed given that it hasn't snowed is 0.1, then we get the following probability tree, created by Miles's Learn document:

    From this information we can come to some interesting conclusions:

    What is the probability that the schools are closed on a given day?

    From the Law of Total Probability, we get: P(closed) = P(closed | snow) P(snow) + P(closed | no snow) P(no snow) = 0.6(0.75) + 0.1(0.25) = 0.475.

    Thus, during a very snowy Waterloo winter, we could expect a 0.475 chance of schools being closed on any given day. 

    One of the features of this document is that the node probabilities are calculated for you. You can see this by comparing the second-last step above to the numbers at the ends of the probability tree's nodes.

    What is the probability that it has snowed given that the schools are closed?

    From Bayes' Theorem, we get: P(snow | closed) = P(closed | snow) P(snow) / P(closed) = 0.45 / 0.475 ≈ 0.947.

    Thus, during a very snowy Waterloo winter, we expect there to be a probability of 0.947 that it has snowed if the schools are closed. 
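    The same two numbers can be reproduced directly in Maple, separately from the Learn document (a quick sketch):

    restart:

    P_snow              := 0.75:   # P(snow)
    P_closed_given_S    := 0.6:    # P(closed | snow)
    P_closed_given_notS := 0.1:    # P(closed | no snow)

    # Law of total probability
    P_closed := P_closed_given_S*P_snow + P_closed_given_notS*(1 - P_snow);   # 0.475

    # Bayes' theorem
    P_snow_given_closed := P_closed_given_S*P_snow / P_closed;                # ~0.947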

    We can also add more events to the tree. For example, if the students are happy or sad given that the schools are open:

    Even though we would all love schools to be closed 47.5% of the winter days in Waterloo, these numbers were just for fun. So, the next time you are hoping for a snow day, make sure to wear your pajamas inside out and sleep with a spoon under your pillow that night!

    To explore more probability tree fun, be sure to check out Miles’s Probability Tree Generator, where you can create your own probability trees with automatically calculated node probabilities and export your tree to a blank Maple Learn document. Finally, if you are interested in seeing more of our probability collection, you can find it here!

     

    The moment we've all been waiting for has arrived: Maple 2023 is here!

    With this release we continue to pursue our mission to provide powerful technology to explore, derive, capture, solve and disseminate mathematical problems and their applications, and to make math easier to learn, understand, and use. Bearing this in mind, our team of mathematicians and developers have dedicated the last year to adding new features and enhancements that not only improve the math engine but make that math engine more easily accessible within a user-friendly interface.

    And if you ever wonder where our team gets inspiration, you don't need to look further than Maple Primes. Many of the improvements that went into Maple 2023 came as a direct result of feedback from users. I’ll highlight a few of those user-requested features below, and you can learn more about these, and many, many other improvements, in What’s New in Maple 2023.

    • The Plot Builder in Maple 2023 now allows you to build interactive plot explorations where parameters are controlled by sliders or dials, and customize them as easily as you can other plots

    Plot Builder Explore

     

    • In Maple 2023, 2-D contour and density plots now feature a color bar to show the values of the gradations.


    • For those who write a lot of code:  You can now open your .mpl Maple code files directly in Maple’s code editor, where you can  view and edit the file from inside Maple using the editor’s syntax highlighting, command completion, and automatic indenting.

    Programming Improvements

    • Integration has been improved in many ways. Here’s one of them:  The definite integration method that works via MeijerG convolutions now does a better job of checking conditions on parameters so that they are only applied under proper assumptions. It also tells you the conditions under which the method could have produced an answer, so if your problem does meet those conditions, you can add the appropriate assumptions to get your result.
    • Many people have asked that we make it easier for them to create more complex interactive Math Apps and applications that require programming, such as interactive clickable plots, quizzes that provide feedback, examples that provide solution steps. And I’m pleased to announce that we’ve done that in Maple 2023 with the introduction of the Quiz Builder and the Canvas Scripting Gallery.
  • The new Quiz Builder comes loaded with sample quizzes and makes it easy to create your own custom quiz questions. Launch the Quiz Builder next time you want to author interactive quizzes with randomized questions, different response types, hints, feedback, and solution display. It’s probably one of my favorite features in Maple 2023.

    • The Scripting Gallery in Maple 2023 provides 44 templates and modifiable examples that make it easier to create more complex Math Apps and interactive applications that require programming. The Maple code used to build each application in the scripting gallery can be easily viewed, copied and modified, so you can customize specific applications or use the code as a starting point for your own work

    • Finally, here’s one that is bound to make a lot of people happy: You can finally have more than one help page open at the same time!

    For more information about all the new features and enhancements in Maple 2023, check out the What’s New in Maple 2023.

    P.S. In case you weren’t aware - in addition to Maple, the Maplesoft Mathematics Suite includes a variety of other complementary software products, including online and mobile solutions, that help you teach and learn math and math-related courses.  Even avid Maple users may find something of interest!

    I was looking at symbolically solving a second-order differential equation and it looks like the method=laplace option has a sign error when the coefficients are presented in a certain way. Below are some examples with and without method=laplace that should all have the same closed form. Note that the results for s6 and s8 (outputs (6) and (8)) have a different sign in the exponential than they should have (which is a HUGE problem):

    restart

    s1 := dsolve([diff(x(t), t, t)+2*a*(diff(x(t), t))+a^2*x(t)], [x(t)])

    {x(t) = exp(-a*t)*(_C2*t+_C1)}

    (1)

    s2 := dsolve([diff(x(t), t, t)+2*a*(diff(x(t), t))+a^2*x(t)], [x(t)], method = laplace)

    x(t) = exp(-a*t)*(t*(D(x))(0)+x(0)*(a*t+1))

    (2)

    s3 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/b+x(t)/b^2], [x(t)])

    {x(t) = exp(-t/b)*(_C2*t+_C1)}

    (3)

    s4 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/b+x(t)/b^2], [x(t)], method = laplace)

    x(t) = exp(-t/b)*(t*(D(x))(0)+x(0)*(b+t)/b)

    (4)

    s5 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/sqrt(L*C)+x(t)/(L*C)], [x(t)])

    {x(t) = exp(-(L*C)^(1/2)*t/(L*C))*(_C2*t+_C1)}

    (5)

    s6 := dsolve([diff(x(t), t, t)+2*(diff(x(t), t))/sqrt(L*C)+x(t)/(L*C)], [x(t)], method = laplace)

    x(t) = (t*(D(x))(0)+3*C*L*x(0)*t/(L*C)^(3/2)+x(0))*exp((L*C)^(1/2)*t/(L*C))

    (6)

    s7 := dsolve([L*C*(diff(x(t), t, t))+2*sqrt(L*C)*(diff(x(t), t))+x(t)], [x(t)])

    {x(t) = exp(-(L*C)^(1/2)*t/(L*C))*(_C2*t+_C1)}

    (7)

    s8 := dsolve([L*C*(diff(x(t), t, t))+2*sqrt(L*C)*(diff(x(t), t))+x(t)], [x(t)], method = laplace)

    x(t) = exp(t/(L*C)^(1/2))*(t*(D(x))(0)+x(0)*(L*C+3*(L*C)^(1/2)*t)/(L*C))

    (8)

    s9 := dsolve([diff(x(t), t, t)+2*z*wn*(diff(x(t), t))+wn^2*x(t)], [x(t)])

    {x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

    (9)

    s10 := dsolve([diff(x(t), t, t)+2*z*wn*(diff(x(t), t))+wn^2*x(t)], [x(t)], method = laplace)

    x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

    (10)

    s11 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)])

    {x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

    (11)

    s12 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)], method = laplace)

    x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

    (12)

    s13 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)])

    {x(t) = _C1*exp((-z+(z^2-1)^(1/2))*wn*t)+_C2*exp(-(z+(z^2-1)^(1/2))*wn*t)}

    (13)

    s14 := dsolve([(diff(x(t), t, t))/wn^2+2*z*(diff(x(t), t))/wn+x(t)], [x(t)], method = laplace)

    x(t) = exp(-wn*t*z)*(cosh((wn^2*(z^2-1))^(1/2)*t)*x(0)+(x(0)*wn*z+(D(x))(0))*sinh((wn^2*(z^2-1))^(1/2)*t)/(wn^2*(z^2-1))^(1/2))

    (14)

    NULL

    Download DsolveLaplaceIssues.mw
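    As a quick independent check (not part of the worksheet above), odetest can be applied to the returned solutions; a residual that does not simplify to zero flags the suspect results:

    # Assuming the worksheet above has been executed, so s7 and s8 exist
    ode := L*C*diff(x(t), t, t) + 2*sqrt(L*C)*diff(x(t), t) + x(t):

    odetest(s7[], ode) assuming L > 0, C > 0;   # 0: the default-method solution checks out
    odetest(s8, ode) assuming L > 0, C > 0;     # nonzero residual, consistent with the sign error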

    Hello everyone! Alex, Sarah, and I decided to create this collection of financial literacy documents as we noticed a lack of resources for this strand in mathematics. With many curricula around the world implementing financial literacy concepts, we thought it might be useful not just for Ontario, but for many jurisdictions around the world. 

    There are 4 documents in the Simple Interest collection: Introduction, Equation Generator, Mental Calculations, and Reflection. The Introduction is designed for intermediate and advanced level students, as it introduces students to the concept of interest and how to calculate it. Students fill in the table using the calculations on the right. This provides enough scaffolding so that students of various grades can participate in this activity. 

     

    The Equation Generator document uses sliders to help students investigate linear equations in the form of y=mx+b. It also relates the simple interest equation (I=Prt) to the linear equation by asking students to compare interest rates. The idea behind this document is to bridge concepts outlined in the 2021 grade 9 destreamed math curriculum; in particular, the financial literacy, and linear relations strands. The document provides some reflection questions for students to think about the relationship between the variables. 
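    As a small aside for teachers who also use Maple itself, the connection between I = Prt and y = mx + b is easy to visualize (a sketch with made-up values):

    restart:

    # Simple interest I = P*r*t is linear in t with slope m = P*r and intercept 0;
    # the accumulated amount A = P + P*r*t has the same slope and intercept P.
    principal := 1000:   # P (example value)
    rate      := 0.05:   # r, per year (example value)

    interest := t -> principal*rate*t:
    amount   := t -> principal + interest(t):

    plot([interest(t), amount(t)], t = 0 .. 10,
         labels = ["t (years)", "dollars"],
         legend = ["interest I = P*r*t", "amount A = P + P*r*t"]);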

    The third document in the collection is the mental calculations document which presents a series of questions in increasing difficulty designed to help students compare interest rates. Students are intended to choose which scenario they think is more appropriate without using a calculator. There are hints provided on the right side if students wish for a hint, as well as explanations further to the right of the hints and answers below the main questions. Through our analysis of the curricula around the world, we noticed that many jurisdictions focus on mental math as a skill that their students should develop. Students may not always have access to a calculator and it is important for them to know how to make financially sound decisions or analyze advertisements that they may see around their neighbourhood. 

     

    Lastly, the Reflection page is where students are able to analyze their findings. In particular, “interest” may carry a negative connotation for students, so we want them to think of the potential benefits of interest as well. The reflection questions are designed to help students consolidate their learning and can be further expanded on by the teacher. Such possibilities include scenario-based questions. 

    We hope you find these documents helpful! 

    I did not see such a post for 2023 so I am starting one.

    What are your top 3 bug fixes or improvements you would like to see in Maple 2023? 

    Here are mine

    1. Fix timelimit so that it completes at or near the time requested and does not hang or take 10 hours when asked to time out after, say, 30 seconds.
    2. Fix the large number of server crashes and the many random hangs when running large computations that take a long time, i.e. make Maple more robust.
    3. Allow the user to remove output from a worksheet while it is still running (Evaluate -> Remove Output from Worksheet).

    Maple is a nice language and has many nice functions. But it is the usability part of Maple that leaves many bad impressions, because of these things that should really have been fixed by now given how long Maple has been around.

     

    Happy Valentine’s Day to everyone in the MaplePrimes community. Valentine’s Day is a time to celebrate all things love and romance. To celebrate, we at Maplesoft wanted to share our hearts with you.

     


     

    Today the heart shape represents love, affection, and a major organ. Though the heart’s full meaning today is unique to the modern era, the shape itself is much older.

     In ancient Greece, the Cyrenese people used the heart-shaped seed of a plant called silphium as a form of contraception. The seed became so widely used that it is featured on Cyrenese currency. This is the first case of the heart shape being connected to love and passion, but the form did not yet have an association with the human heart.

    French poet Thibault de Blaison was the first to use a pear-shaped human heart to symbolize love in his thirteenth-century romance “Roman de la Poire”. Later, during the Renaissance, artists began to paint the Sacred Heart of Jesus in a spade-like shape. Depictions of the heart continued to develop, and by the Victorian era the heart we know and love today had taken shape and started to appear on Valentine’s Day cards.

    The simplicity and symmetry of the heart shape, which likely led to its widespread popularity, also makes the form convenient to define mathematically.
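    For instance, one classic implicit heart curve takes a single Maple command (this particular equation is just an illustration and is not necessarily one of the four definitions used in the document):

    with(plots):

    # A classic implicit heart curve
    implicitplot((x^2 + y^2 - 1)^3 - x^2*y^3 = 0,
                 x = -1.5 .. 1.5, y = -1.5 .. 1.5,
                 numpoints = 30000, thickness = 3, colour = red);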

    To find the equation for your heart, use the Valentine Hearts Maple Learn document. Choose one of four ways to define your heart, then move the sliders and change the color to make a unique equation for your heart. 

    Once you’re done, take a screenshot and share it with your Valentine. Who says math isn’t romantic?

     

     
