MaplePrimes Posts

MaplePrimes Posts are for sharing your experiences, techniques and opinions about Maple, MapleSim and related products, as well as general interests in math and computing.

    While googling around for Season 8 spoilers, I found data sets that can be used to create a character interaction network for the books in the A Song of Ice and Fire series, and the TV show they inspired, Game of Thrones.

    The data sets are the work of Dr Andrew Beveridge, an associate professor at Macalester College (check out his Network of Thrones blog).

    You can create an undirected, weighted graph using this data and Maple's GraphTheory package.

    Then, you can ask yourself really pressing questions like

    • Who is the most influential person in Westeros? How has their influence changed over each season (or indeed, book)?
    • How are Eddard Stark and Randyll Tarly connected?
    • What do eigenvectors have to do with the battle for the Iron Throne, anyway?

    These two applications (one for the TV show, and another for the novels) have the answers, and more.

    The graphs for the books tend to be more interesting than those for the TV show, simply because of the far broader range of characters and the intricacy of the interweaving plot lines.

    Let’s look at some of the results.

    This is a small section of the character interaction network for the first book in the A Song of Ice and Fire series (the entire visualization is large, simply because of the sheer number of characters).

    The graph was generated by GraphTheory:-DrawGraph (with method = spring, which models the graph as a system of charged particles that repel each other, connected by springs).
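    The spring idea itself is easy to prototype outside of Maple. Here is a minimal Python sketch of a force-directed layout (not Maple's implementation): every pair of vertices repels, and each edge also pulls its endpoints together like a spring. The graph, force constants and rest length are made up for illustration.

```python
import math
import random

def spring_layout(edges, n, iters=200, step=0.02, seed=1):
    # Toy force-directed ("spring") layout: every pair of vertices repels,
    # while adjacent vertices are also pulled together by a spring.
    random.seed(seed)
    pos = [[random.random(), random.random()] for _ in range(n)]
    adj = {(a, b) for a, b in edges} | {(b, a) for a, b in edges}
    for _ in range(iters):
        force = [[0.0, 0.0] for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                dx = pos[i][0] - pos[j][0]
                dy = pos[i][1] - pos[j][1]
                d = math.hypot(dx, dy) or 1e-9
                f = 0.01 / d**2            # inverse-square repulsion
                if (i, j) in adj:
                    f -= d - 0.3           # spring with rest length 0.3
                force[i][0] += f * dx / d
                force[i][1] += f * dy / d
        for i in range(n):
            pos[i][0] += step * force[i][0]
            pos[i][1] += step * force[i][1]
    return pos

# Two triangles joined by a single bridge edge (a made-up mini-network)
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
pos = spring_layout(edges, 6)
```

    After a few hundred iterations, connected vertices settle near the spring's rest length while the two triangles drift apart, which is why spring layouts tend to reveal community structure.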

    The highlighted vertices are the most influential characters, as determined by their Eigenvector centrality (more on this later).


    The importance of a vertex can be described by its centrality, of which there are several variants.

    Eigenvector centrality, for example, is given by the dominant eigenvector of the adjacency matrix: each vertex's centrality is its entry in that eigenvector, so influence is quantified by both the number and the importance of neighboring vertices.
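    To make that concrete, here is a small Python sketch (not Maple's GraphTheory code) of eigenvector centrality computed by power iteration on a made-up four-vertex graph:

```python
import math

def eigenvector_centrality(adj, iters=100):
    # Power iteration: repeatedly replace each vertex's score with the sum of
    # its neighbours' scores, then renormalize; the scores converge to the
    # dominant eigenvector of the adjacency matrix.
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    return x

# Toy graph: a triangle 0-1-2 with a pendant vertex 3 hanging off vertex 0
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]
c = eigenvector_centrality(adj)
```

    On this toy graph, vertex 0 comes out most central (it has the most, and the best-connected, neighbors), while the pendant vertex scores lowest.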

    This plot shows the 15 most influential characters in Season 7 of the TV show Game of Thrones. Jon Snow is the clear leader.

    Here’s how the Eigenvector centrality of several characters changes over the books in the A Song of Ice and Fire series.

    A clique is a set of vertices in which every pair is connected by an edge. Here’s the largest clique in Season 7 of the TV show.
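    For small graphs, the definition translates directly into a brute-force search. This Python sketch (the character names and edges are purely illustrative, not the show's actual data) returns a largest clique:

```python
from itertools import combinations

def largest_clique(vertices, edges):
    # Brute force: check vertex subsets from largest to smallest and return
    # the first one in which every pair of vertices is joined by an edge.
    adj = {frozenset(e) for e in edges}
    for k in range(len(vertices), 1, -1):
        for group in combinations(vertices, k):
            if all(frozenset(p) in adj for p in combinations(group, 2)):
                return set(group)
    return set(vertices[:1])

# Made-up mini-network; the only triangle is Jon-Dany-Tyrion
edges = [("Jon", "Dany"), ("Jon", "Tyrion"), ("Dany", "Tyrion"),
         ("Jon", "Sansa"), ("Sansa", "Arya")]
clique = largest_clique(["Jon", "Dany", "Tyrion", "Sansa", "Arya"], edges)
```

    Brute force is exponential in the number of vertices, so real clique finders (including Maple's) use cleverer search, but the definition being checked is exactly the one above.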

    Game of Thrones has certainly motivated me to learn more about graph theory (yes, seriously, it has). It's such a wide, open field with many interesting real-world applications.

    Enjoy tinkering!

    I recently had a wonderful and valuable opportunity to meet with some primary school students and teachers at Holbaek by Skole in Denmark to discuss the use of technology in the classroom. The Danish education system has long been an advocate of using technology and digital learning solutions to augment learning for its students. One of the technology solutions they are using is Maple, Maplesoft’s comprehensive mathematics software tool designed to meet the unique and complex needs of STEM courses. It is rare to find Maple being used at the primary school level, so it was fascinating to see first-hand how Maple is being incorporated at the school.

    In speaking with some of the students, I asked them what their education was like before Maple was incorporated into their course. They told me that before they had access to Maple, the teacher would put an example problem on the whiteboard and they would have to take notes and work through the solution in their notebooks. They definitely prefer the way the course is taught using Maple. They love the fact that they have a tool that lets them work through the solution and provides context for the answer, as opposed to just giving them the solution. It forces them to think about how to solve the problem. The students expressed to me that Maple has transformed their learning, and they cannot imagine going back to lectures with just a whiteboard and notebook.

    Here, I am speaking with some students about how they have adapted Maple to meet their needs ... and about football. Their team had just won 12-1.


    Mathematics courses, and on a broader level, STEM courses, deal with a lot of complex materials and can be incredibly challenging. If we are able to start laying the groundwork for competency and understanding at a younger age, students will be better positioned for not only higher education, but their careers as well. This creates the potential for stronger ideas and greater innovation, which has far-reaching benefits for society as a whole.

    Jesper Estrup and Gitte Christiansen, two passionate primary school teachers, were responsible for introducing Maple at Holbaek by Skole. It was a pleasure to meet with them and discuss their vision for improving mathematics education at the school. They wanted to provide their students with experience using a technology tool so they would be better equipped to handle learning in the future. With the use of Maple, the students achieved the highest grades in their school system. As a result of this success, Jesper and Gitte decided to develop primary school level content for a learning package to further enhance the way their students learn and understand mathematics, and to benefit other institutions seeking to do the same. Their efforts resulted in the development of Maple-Skole, a new educational tool, based on Maple, that supports mathematics teaching for primary schools in Denmark.

    Maplesoft has a long-standing relationship with the Danish education system. Maple is already used in high schools throughout Denmark, supported by the Maple Gym package. This package is an add-on to Maple that contains a number of routines to make working with Maple more convenient within various topics. These routines are made available to students and teachers with a single command that simplifies learning. Maple-Skole is the next step in the country’s vision of utilizing technology tools to enhance learning for its students. And having the opportunity to work with one tool all the way through their schooling will provide even greater benefit to students.

    (L-R) Henrik and Carolyn from Maplesoft meeting with Jesper and Gitte from Holbaek by Skole


    Maple-Skole helps foster greater knowledge and competency in primary school students by developing a passion for mathematics early on. This is a big step, and one that we hope will revolutionize mathematics education in the country. It is exciting to see both the great potential of the Maple-Skole package and the fact that young students are already embracing Maple in such a positive way.

    For us at Maplesoft, this exciting new package provides a great opportunity not only to improve our relationships with educational institutions in Denmark, but also to be a part of something significant: enhancing the way students learn mathematics. We strongly believe in the benefits of Maple-Skole, which is why it will be offered to schools at no charge until July 2020. I truly believe this new tool has the potential to revolutionize mathematics education at a young age, leaving students better prepared as they move forward in their education.


    The Physics Updates for Maple 2019 (currently v.331 or higher) are already available for installation via MapleCloud. This version contains further improvements to the Maple 2019 capabilities for solving PDE & BC, as well as to the tensor simplifier. To install these Updates:

    • Open Maple
    • Click the MapleCloud icon in the upper-right corner to open the MapleCloud toolbar
    • In the MapleCloud toolbar, open Packages
    • Find the Physics Updates package and click the install button (the last one under Actions)
    • To check for new versions of Physics Updates, click the MapleCloud icon. If the Updates icon has a red dot, click it to install the new version

    Note that the first time you install the Updates in Maple 2019, you need to install them from Packages, even if you had already installed these Updates in your copy of Maple 2018.

    Also, at this moment you cannot use the MapleCloud to install the Physics Updates for Maple 2018. So, to install the latest version of the Updates for Maple 2018, open Maple 2018 and enter PackageTools:-Install("5137472255164416", version = 329, overwrite)

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    This application solves a system of two compatible equations in two variables, and graphs the intersection point of the variables "x" and "y". To observe the intersection point more closely, use the zoom button that becomes active when you manipulate the graph. To change the variables ("x" and "y"), edit the code behind the button that solves and graphs. In Spanish.

    Lenin Araujo Castillo

    Ambassador of Maple

    Maple users often want to write a derivative evaluated at a point using Leibniz notation, as a matter of presentation, with appropriate variables and coordinates. For instance:


    Now, Maple uses the D operator for evaluating derivatives at a point, but this can be a little clunky:

    p := D[1,2,2,3](f)(a,b,c);
    q := convert( p, Diff );

    u := D[1,2,2,3](f)(5,10,15);
    v := convert( u, Diff );

    How can we tell Maple, programmatically, to print this in a nicer way? We amended the print command (see below) to do this. For example:

    print( D[1,2,2,3](f)(a,b,c), [x,y,z] );
    print( D[1,2,2,3](f)(5,10,15), [x,y,z] );

    print( 'D(sin)(Pi/6)', theta );

    Here's the definition of the custom version of print:

    # Type to check if an expression is a derivative using 'D', e.g. D(f)(a) and D[1,2](f)(a,b).
    TypeTools:-AddType( 'Dexpr',
            proc( f )
                   if op( [0,0], f ) <> D and op( [0,0,0], f ) <> D then
                           return false;
                   end if;
                   if not type( op( [0,1], f ), 'name' ) or not type( { op( f ) }, 'set(algebraic)' ) then
                           return false;
                   end if;
                   if op( [0,0,0], f ) = D and not type( { op( [0,0,..], f ) }, 'set(posint)' ) then
                           return false;
                   end if;
                   return true;
            end proc
    ):
    # Create a local version of 'print', which will print expressions like D[1,2](f)(a,b) in a custom way,
    # but otherwise print in the usual fashion.
    local print := proc()
            local A, B, f, g, L, X, Y, Z;
            # Check that a valid expression involving 'D' is passed, along with a variable name or list of variable names.
            if ( _npassed < 2 ) or ( not _passed[1] :: 'Dexpr' ) or ( not _passed[2] :: 'Or'('name','list'('name')) ) then
                   return :-print( _passed );
            end if;
            # Extract important variables from the input.
            g := _passed[1]; # expression
            X := _passed[2]; # variable name(s)
            f := op( [0,1], g ); # function name in expression
            A := op( g ); # point(s) of evaluation
            # Check that the number of variables is the same as the number of evaluation points.
            if nops( X ) <> nops( [A] ) then
                   return :-print( _passed );
            end if;
            # The differential operator.
            L := op( [0,0], g );
            # Find the variable (univariate) or indices (multivariate) for the derivative(s).
            B := `if`( L = D, X, [ op( L ) ] );
            # Variable name(s) as expression sequence.
            Y := op( X );
            # Check that the point(s) of evaluation is/are distinct from the variable name(s).
            if numelems( {Y} intersect {A} ) > 0 then
                   return :-print( _passed );
            end if;
            # Find the expression sequence of the variable names.
            Z := `if`( L = D, X, X[B] );
            return print( Eval( Diff( f(Y), Z ), (Y) = (A) ) );
    end proc:

    Do you use Leibniz Notation often? Or do you have an alternate method? We’d love to hear from you!

    Last year, I read a fascinating paper that presented evidence of an exoplanet, inferred through the “wobble” (or radial velocity) of the star it orbits, HD 3651. A periodogram of the radial velocity revealed the orbital period of the exoplanet – about 62.2 days.

    I found the experimental data and attempted to reproduce the periodogram. However, the data was irregularly sampled, as is most astronomical data. This meant I couldn’t use the standard Fourier-based tools from the signal processing package.

    I started hunting for the techniques used in the spectral analysis of irregularly sampled data, and found that the Lomb-Scargle approach is often used for astronomical data. I threw together some simple prototype code and successfully reproduced the periodogram in the paper.
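    For reference, the classical Lomb-Scargle formula is short enough to prototype in a few lines. This is a Python sketch of the textbook formula (not my prototype code, and not Maple's implementation), applied to a made-up irregularly sampled sinusoid:

```python
import math

def lomb_scargle(t, y, freqs):
    # Classical Lomb-Scargle periodogram for irregularly sampled data: for
    # each trial frequency, least-squares fit a sinusoid, using the phase
    # offset tau that decouples the sine and cosine terms.
    ybar = sum(y) / len(y)
    yc = [v - ybar for v in y]
    power = []
    for f in freqs:
        w = 2 * math.pi * f
        tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                         sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
        cs = [math.cos(w * (ti - tau)) for ti in t]
        sn = [math.sin(w * (ti - tau)) for ti in t]
        pc = sum(c * v for c, v in zip(cs, yc)) ** 2 / sum(c * c for c in cs)
        ps = sum(s * v for s, v in zip(sn, yc)) ** 2 / sum(s * s for s in sn)
        power.append(0.5 * (pc + ps))
    return power

# Irregular sample times; signal has frequency 0.5 (a period of 2 "days")
t = [0.7 * i + 0.5 * math.sin(i) for i in range(60)]
y = [math.sin(2 * math.pi * 0.5 * ti) for ti in t]
freqs = [0.05 * k for k in range(1, 20)]     # trial frequencies 0.05 .. 0.95
p = lomb_scargle(t, y, freqs)
best = freqs[p.index(max(p))]
```

    The periodogram peaks at the true frequency even though the samples are unevenly spaced, which is exactly why this estimator is a workhorse for astronomical time series.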


    After some (not so) gentle prodding, Erik Postma’s team wrote their own, far faster and far more robust, implementation.

    This new functionality makes its debut in Maple 2019 (and the final worksheet is here).

    From a simple germ of an idea, to a finished, robust, fully documented product that we can put in front of our users – that, for me, is incredibly satisfying.

    That’s a minor story about a niche I’m interested in, but these stories are repeated time and time again.  Ideas spring from users and from those that work at Maplesoft. They’re filtered to a manageable set that we can work on. Some projects reach completion in under a year, while other, more ambitious, projects take longer.

    The result is software developed by passionate people invested in their work, and used by passionate people in universities, industry and at home.

    We always pack a lot into each release. Maple 2019 contains improvements for the Maple functions that nearly everyone uses – such as solve, simplify and int – as well as features that target specific groups (such as those who share my interest in signal processing!)

    I’d like to highlight a few of the new features that I find particularly impressive, or that have caught my eye because they’re cool.

    Of course, this is only a small selection of the shiny new stuff – everything is described in detail on the Maplesoft website.

    Edgardo, a research fellow at Maplesoft, recently sent me an independent comparison of Maple’s PDE solver versus those in Mathematica (in case you’re not aware, he’s the senior developer for that function). He was excited – this test suite demonstrated that Maple was far ahead of its closest competitor, both in the number of PDEs solved and in the time taken to return those solutions.

    He’s spent another release cycle working on pdsolve – it’s now more powerful than before. Here’s a PDE that Maple now successfully solves.

    Maplesoft tracks visits to our online help pages - simplify is well inside the top ten most visited pages. It’s one of those core functions that nearly everyone uses.

    For this release, R&D has made many improvements to simplify. For example, Maple 2019 better simplifies expressions that contain powers, exponentials and trig functions.

    Everyone who touches Maple uses the same programming language. You could be an engineer that’s batch processing some data, or a mathematical researcher prototyping a new algorithm – everyone codes in the same language.

    Maple now supports C-style increment, decrement, and assignment operators, giving you more concise code.

    We’ve made a number of improvements to the interface, including a redesigned start page. My favorite is the display of large data structures (or rtables).

    You now see the header (that is, the top-left) of the data structure.

    For an audio file, you see useful information about its contents.

    I enjoy creating new and different types of visualizations using Maple's sandbox of flexible plots and plotting primitives.

    Here’s a new feature that I’ll use regularly: given a name (and optionally a modifier), polygonbyname draws a variety of shapes.

    In other breaking news, I now know what a Reuleaux hexagon looks like.

    Since I can’t resist talking about another signal processing feature, FindPeakPoints locates the local peaks or valleys of a 1D data set. Several options let you filter out spurious peaks or valleys.

    I’ve used this new function to find the fundamental frequencies and harmonics of a violin note from its periodogram.

    Speaking of passionate developers who are devoted to their work, Edgardo has written a new e-book that teaches you how to perform tensor computations using Physics. You get this e-book when you install Maple 2019.

    The new LeastTrimmedSquares command fits data to an equation while not being significantly influenced by outliers.

    In this example, we:

    • Artificially generate a noisy data set with a few outliers, but with the underlying trend Y = 5X + 50
    • Fit straight lines using CurveFitting:-LeastSquares and Statistics:-LeastTrimmedSquares

    The LeastTrimmedSquares fit correctly recovers the underlying trend.
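    The idea behind least trimmed squares can be sketched in a few lines: fit, discard the points with the largest squared residuals, and refit on the rest. The Python sketch below uses a simplified "concentration step" iteration started from the full least-squares fit (real implementations such as FAST-LTS use many random starting subsets); the data mirrors the Y = 5X + 50 example, with exact inliers so the recovery is clean.

```python
def ols(pts):
    # Ordinary least-squares straight-line fit, returning (slope, intercept).
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

def least_trimmed_squares(pts, h, steps=20):
    # Concentration ("C-step") iteration: fit, keep only the h points with
    # the smallest squared residuals, refit on those, and repeat.
    fit = ols(pts)
    for _ in range(steps):
        m, b = fit
        keep = sorted(pts, key=lambda p: (p[1] - (m * p[0] + b)) ** 2)[:h]
        fit = ols(keep)
    return fit

# Exact trend Y = 5X + 50 with three gross outliers mixed in
pts = [(x, 5 * x + 50) for x in range(20)] + [(5, 200), (10, 200), (15, 200)]
slope, intercept = least_trimmed_squares(pts, h=20)
```

    Ordinary least squares on the same data is dragged noticeably off the trend by the three outliers; trimming them out of the fit recovers the underlying line.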

    We try to make every release faster and more efficient. We sometimes target key changes in the core infrastructure that benefit all users (such as the parallel garbage collector in Maple 17). Other times, we focus on specific functions.

    For this release, I’m particularly impressed by this improved benchmark for factor, in which we’re factoring a sparse multivariate polynomial.

    On my laptop, Maple 2018 takes 4.2 seconds to compute and consumes 0.92 GiB of memory.

    Maple 2019 takes a mere 0.27 seconds, and only needs 45 MiB of memory!

    I’m a visualization nut, and I always get a vicarious thrill when I see a shiny new plot, or a well-presented application.

    I was immediately drawn to this new Maple 2019 app – it illustrates the transition between day and night on a world map. You can even change the projection used to generate the map. Shiny!


    So that’s my pick of the top new features in Maple 2019. Everyone here at Maplesoft would love to hear your comments!

    It is my pleasure to announce the return of the Maple Conference! On October 15-17th, in Waterloo, Ontario, Canada, we will gather a group of Maple enthusiasts, product experts, and customers, to explore and celebrate the different aspects of Maple.

    Specifically, this conference will be dedicated to exploring Maple’s impact on education, new symbolic computation algorithms and techniques, and the wide range of Maple applications. Attendees will have the opportunity to learn about the latest research, share experiences, and interact with Maple developers.

    In preparation for the conference we are welcoming paper and extended abstract submissions. We are looking for presentations which fall into the broad categories of “Maple in Education”, “Algorithms and Software”, and “Applications of Maple” (a more extensive list of topics can be found here).

    You can learn more about the event, plus find our call-for-papers and abstracts, here:

    There have been several posts, over the years, related to visual cues about the values associated with particular 2D contours in a plot.

    Some people ask or post about color-bars [1]. Some people ask or post about inlined labelling of the curves [1, 2, 3, 4, 5, 6, 7]. And some post about mouse popup/hover-over functionality [1], which was added as general new 2D plot annotation functionality in Maple 2017 and is available for the plots:-contourplot command via its contourlabels option.

    Another possibility consists of a legend for 2D contour plots, with distinct entries for each contour value. That is not currently available from the plots:-contourplot command as documented. This post is about obtaining such a legend.

    Aside from the method used below, a similar effect may be possible (possibly with a little effort) using contour-plotting approaches based on individual plots:-implicitplot calls for each contour level, e.g. using Kitonum's procedure, or an undocumented, alternate internal driver for plots:-contourplot.

    Since I like the functionality provided by the contourlabels option, I thought that I'd hijack that (and the _HOVERCONTENT plotting substructure that plot annotations now generate) and get a relatively convenient way to produce a color-key via the 2D plotting legend. This is not supposed to be super-efficient.

    Here below are some examples. I hope that it illustrates some useful functionality that could be added to the contourplot command. It can also be used to get a color-key for use with densityplot.


    contplot:=proc(ee, rng1, rng2)
      local clabels, clegend, i, ncrvs, newP, otherdat, others, tcrvs, tempP;
      others := [ _rest ];
      (clegend, others) := selectremove(type, others, identical(:-legend)=anything);
      (clabels, others) := selectremove(type, others, identical(:-contourlabels)=anything);
      if nops(clegend)>0 then
        if nops(clabels)>0 then
          return ':-PLOT'(seq(':-CURVES'(op(ncrvs[i]),op(indets(tcrvs[i],'specfunc(:-LEGEND)'))),
          return tempP;
        end if;
      elif nops(clabels)>0 then
        return plots:-contourplot(ee,rng1,rng2,others[],
        return plots:-contourplot(ee,rng1,rng2,others[]);
      end if;
    end proc:

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 9,
          legendstyle = [location = right],

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 17,
          legendstyle = [location = right],

    # Apparently legend items must be unique, to persist on document re-open.

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 11,
          legendstyle = [location = right],
          legend=['contourvalue',seq(cat($(` `,i)),i=2..5),
                  'contourvalue',seq(cat($(` `,i)),i=6..9),

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 8,

    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 13,

    conts:=[seq(low..high*1.01, (high-low)/(N-1))]:
    contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = conts,

      subsindets(contplot((x^2+y^2)^(1/2), x=-2..2, y=-2..2,
                          contours = 7,
      contplot((x^2+y^2)^(1/2), x=-2..2, y=-2..2,
          contours = 7, #grid=[50,50],
          legendstyle = [location=right],


      contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 5,
          thickness=0, filledregions),
      contplot(x^2+y^2, x=-2..2, y=-2..2,
          contours = 5,
          legendstyle = [location=right],

      contplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          contours = [seq(-1+(i-1)*(1-(-1))/(N-1),i=1..N)],
          legendstyle = [location=right],
       plots:-densityplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          style=surface, restricttoranges),

      contplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          contours = [seq(-1+(i-1)*(1-(-1))/(N-1),i=1..N)],
          legendstyle = [location=right],
          legend=['contourvalue',seq(cat($(` `,i)),i=2..3),
                  'contourvalue',seq(cat($(` `,i)),i=5..6),
                  'contourvalue',seq(cat($(` `,i)),i=8..9),
                  'contourvalue',seq(cat($(` `,i)),i=11..12),
       plots:-densityplot(sin(x)*y, x=-2*Pi..2*Pi, y=-1..1,
          style=surface, restricttoranges),






    A Complete Guide for performing Tensors computations using Physics


    This is an old request: a complete guide for using Physics to perform tensor computations. This guide, shown below with Sections closed, is linked at the end of this post as a pdf file with all the sections open, and also as a Maple worksheet that allows for reproducing its contents. Most of the computations shown are reproducible in Maple 2018.2.1, and a significant part also in previous releases, but to reproduce everything you need the Maplesoft Physics Updates version 283 or higher installed. Feedback on how to improve this presentation is welcome.


    Physics is a package developed by Maplesoft, an integral part of the Maple system. In addition to its commands for Quantum Mechanics, Classical Field Theory and General Relativity, Physics includes 5 other subpackages, three of them also related to General Relativity: Tetrads, ThreePlusOne and NumericalRelativity (work in progress), plus one to compute with Vectors and another related to the Standard Model (also a work in progress).


    The presentation is organized as follows. Section I covers the functionality provided with the Physics package for computing with tensors in Classical and Quantum Mechanics (so including Euclidean spaces), Electrodynamics and Special Relativity. The material of Section I is also relevant in General Relativity, to which Section II, on curved spacetimes, is entirely devoted. (The sub-section on the Newman-Penrose formalism needs to be filled with more material, and a new section devoted to the EnergyMomentum tensor would be appropriate. I will complete these two things as time permits.) Section III is about transformations of coordinates, relevant in general.


    For an alphabetical list of the Physics commands with a brief one-line description and a link to the corresponding help page see Physics: Brief description of each command .


    I. Spacetime and tensors in Physics



    This section contains everything necessary for working with tensors in Classical and Quantum Mechanics, Electrodynamics and Special Relativity. This material is also relevant for computing with tensors in General Relativity, for which there is a dedicated Section II, Curved spacetimes.


    Default metric and signature, coordinate systems


    Tensors, their definition, symmetries and operations



    Physics comes with a set of predefined tensors, mainly the spacetime metric  g[mu, nu], the space metric  gamma[j, k], and all the standard tensors of  General Relativity. In addition, one of the strengths of Physics is that you can define tensors, in natural ways, by indicating a matrix or array with its components, or indicating any generic tensorial expression involving other tensors.


    In Maple, tensor indices are letters, as when computing with paper and pencil, lowercase or uppercase, Latin or Greek, entered using indexation, as in A[mu], and displayed as subscripts. Contravariant indices are entered by preceding the letter with ~, as in A[`~mu`], and are displayed as superscripts. You can work with two or more kinds of indices at the same time, e.g., spacetime and space indices.


    To input Greek letters, you can spell them, as in mu for μ, or simpler, use the shortcuts for entering Greek characters. Right-click your input and choose Convert To → 2-D Math input to give your spelled-out tensorial expression textbook-quality typesetting.


    Not every indexed object or function is, however, automatically a tensor. You first need to define it as such using the Define  command. You can do that in two ways:



    Passing the tensor being defined, say F[mu, nu], possibly indicating symmetries and/or antisymmetries for its indices.


    Passing a tensorial equation where the left-hand side is the tensor being defined as in 1. and the right-hand side is a tensorial expression - or an Array or Matrix - such that the components of the tensor being defined are equal to the components of the tensorial expression.


    After defining a tensor - say A[mu] or F[mu, nu] - you can perform the following operations on algebraic expressions involving them:



    Automatic formatting of repeated indices, one covariant the other contravariant


    Automatic handling of collisions of repeated indices in products of tensors


    Simplify  products using Einstein's sum rule for repeated indices.


    SumOverRepeatedIndices  of the tensorial expression.


    Use TensorArray  to compute the expression's components


    TransformCoordinates .


    If you define a tensor using a tensorial equation, in addition to the items above you can:



    Get each tensor component by indexing, say as in A[1] or A[`~1`]


    Get all the covariant and contravariant components by respectively using the shortcut notation A[] and A[`~`].


    Use any of the special indexing keywords valid for the pre-defined tensors of Physics; they are: definition, nonzero and, in the case of tensors of 2 indices, also trace and determinant.


    No need to specify the tensor dependency for differentiation purposes - it is inferred automatically from its definition.


    Redefine any particular tensor component using Library:-RedefineTensorComponent


    Minimize the number of independent tensor components using Library:-MinimizeTensorComponent


    Compute the number of independent tensor components - relevant for tensors with several indices and different symmetries - using Library:-NumberOfTensorComponents .


    The first two sections illustrate these two ways of defining a tensor and the features described. The next sections present the existing functionality of the Physics package to compute with tensors.


    Defining a tensor passing the tensor itself


    Defining a tensor passing a tensorial equation


    Automatic formatting of repeated tensor indices and handling of their collisions in products


    Tensor symmetries


    Substituting tensors and tensor indices


    Simplifying tensorial expressions




    Visualizing tensor components - Library:-TensorComponents and TensorArray


    Modifying tensor components - Library:-RedefineTensorComponent


    Enhancing the display of tensorial expressions involving tensor functions and derivatives using CompactDisplay


    The LeviCivita tensor and KroneckerDelta


    The 3D space metric and decomposing 4D tensors into their 3D space part and the rest


    Total differentials, the d_[mu] and dAlembertian operators


    Tensorial differential operators in algebraic expressions


    Inert tensors


    Functional differentiation of tensorial expressions with respect to tensor functions


    The Pauli matrices and the spacetime Psigma[mu] 4-vector


    The Dirac matrices and the spacetime Dgamma[mu] 4-vector


    Quantum not-commutative operators using tensor notation


    II. Curved spacetimes



    Physics comes with a set of predefined tensors, mainly the spacetime metric g[mu, nu], the space metric gamma[j, k], and all the standard tensors of general relativity, respectively entered and displayed as: Einstein[mu,nu] = G[mu, nu], Ricci[mu,nu] = R[mu, nu], Riemann[alpha, beta, mu, nu] = R[alpha, beta, mu, nu], Weyl[alpha, beta, mu, nu] = C[alpha, beta, mu, nu], and the Christoffel symbols Christoffel[alpha, mu, nu] = GAMMA[alpha, mu, nu] and Christoffel[~alpha, mu, nu] = GAMMA[~alpha, mu, nu], respectively of the first and second kinds. The Tetrads and ThreePlusOne subpackages have other predefined related tensors. This section is thus all about computing with tensors in General Relativity.


    Loading metrics from the database of solutions to Einstein's equations


    Setting the spacetime metric indicating the line element or a Matrix


    Covariant differentiation: the D_[mu] operator and the Christoffel symbols


    The Einstein, Ricci, Riemann and Weyl tensors of General Relativity


    A conversion network for the tensors of General Relativity


    Tetrads and the local system of references - the Newman-Penrose formalism


    The ThreePlusOne package and the 3+1 splitting of Einstein's equations


    III. Transformations of coordinates


    See Also


    Physics , Conventions used in the Physics package , Physics examples , Physics Updates



    Download the worksheet, or the pdf version with sections open: Tensors_-_A_Complete_Guide.pdf

    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    Recently, my research team at the University of Waterloo was approached by Mark Ideson, the skip for the Canadian Paralympic men’s curling team, about developing a curling end-effector, a device to give wheelchair curlers greater control over their shots. A gold medalist and multi-medal winner at the Paralympics, Mark has a passion to see wheelchair curling performance improve and entrusted us to assist him in this objective. We previously worked with Mark and his team on a research project to model the wheelchair curling shot and help optimize their performance on the ice. The end-effector project was the next step in our partnership.

    The use of technology in the sports world is increasing rapidly, allowing us to better understand athletic performance. We are able to gather new types of data that, when coupled with advanced engineering tools, allow us to perform more in-depth analysis of the human body as it pertains to specific movements and tasks. As a result, we can refine motions and improve equipment to help athletes maximize their abilities and performance. As a professor of Systems Design Engineering at the University of Waterloo, I have overseen several studies on the motor function of Paralympic athletes. My team focuses on modelling the interactions between athletes and their equipment to maximize athletic performance, and we rely heavily on Maple and MapleSim in our research and project development.

    The end-effector project was led by my UW students Borna Ghannadi and Conor Jansen. The objective was to design a device that attaches to the end of the curler’s stick and provides greater command over the stone by pulling it back prior to release.  Our team modeled the end effector in Maple and built an initial prototype, which has undergone several trials and adjustments since then. The device is now on its 7th iteration, which we felt appropriate to name the Mark 7, in recognition of Mark’s inspiration for the project. The device has been a challenge, but we have steadily made improvements with Mark’s input and it is close to being a finished product.

    Currently, wheelchair curlers use a device that keeps the stone static before it’s thrown. Having the ability to pull back on the stone and break the friction prior to release will provide great benefit to the curlers. As a curler, if you can only push forward and the ice conditions aren’t perfect, you’re throwing at a different speed every time. If you can pull the stone back and then go forward, you’ve broken that friction and your shot is far more repeatable. This should make the game much more interesting.

    For our team, the objective was to design a mechanism that not only allowed curlers to pull back on the stone, but also had a release option with no triggers on the curler’s hand. The device we developed screws on to the end of the curler’s stick, and is designed to rest firmly on the curling handle. Once the curler selects their shot, they can position the stone accordingly, slide the stone backward and then forward, and watch the device gently separate from the stone.

    For our research, the increased speed and accuracy of MapleSim’s multibody dynamic simulations, made possible by the underlying symbolic modelling engine, Maple, allowed us to spend more time on system design and optimization. MapleSim combines principles of mechanics with linear graph theory to produce unified representations of the system topology and modelling coordinates. The system equations are automatically generated symbolically, which enables us to view and share the equations prior to a numerical solution of the highly-optimized simulation code.

    The Mark 7 is an invention that could have significant ramifications in the curling world. Shooting accuracy across wheelchair curling is currently around 60-62%, and if new technology like the Mark 7 is adopted, that number could grow to 70 or 75%. Improved accuracy will make the game more enjoyable and competitive. Having the ability to pull back on the stone prior to release will eliminate some instability for the curlers, which can help level the playing field for everyone involved. Given the work we have been doing with Mark’s team on performance improvements, it was extremely satisfying for us to see them win the bronze medal in South Korea. We hope that our research and partnership with the team can produce gold medals in the years to come.


    Throughout the course of a year, Maple users create wildly varying applications on all sorts of subjects. To mark the end of 2018, I thought I’d share some of the 2018 submissions to the Maple Application Center that I personally found particularly interesting.

    Solving the 15-puzzle, by Curtis Bright. You know those puzzles where you have to move the pieces around inside a square to put them in order, and there’s only one free space to move into?  I’m not good at those puzzles, but it turns out Maple can help. This is one of a collection of new, varied applications using Maple’s SAT solvers (if you want to solve the world’s hardest Sudoku, Maple’s SAT solvers can help with that, too).

    Romeo y Julieta: Un clasico de las historias de amor... y de las ecuaciones diferenciales [Romeo and Juliet: A classic story of love... and differential equations], by Ranferi Gutierrez. This one made me laugh (and even more so once I put some of it in google translate, which is more than enough to let you appreciate the application even if you don’t speak Spanish). What’s not to like about modeling a high-drama love story using DEs?

    Prediction of malignant/benign of breast mass with DNN classifier, by Sophie Tan. Machine learning can save lives.

    Hybrid Image of a Cat and a Dog, by Samir Khan. Signal processing can be more fun than I realized. This is one of those crazy optical illusions where the picture changes depending on how far away you are.

    Beyond the 8 Queens Problem, by Yury Zavarovsky. In true mathematical fashion, why have 8 queens when you can have n?  (If you are interested in this problem, you can also see a different solution that uses SAT solvers.)

    Gödel's Universe, by Frank Wang.  Can’t say I understood much of it, but it involves Gödel, Einstein, and Hawking, so I don’t need to understand it to find it interesting.

    Overview of the Physics Updates


    One problem pointed out several times about the Physics package documentation is that the information is scattered. There are help pages for each Physics command; a page on Physics conventions; another with examples in different areas of physics; and a "What's new in Physics" page for each release, with illustrations only shown there. Then there are a number of MaplePrimes posts describing the Physics project and showing how to use the package to tackle different problems. We seldom find the information we are looking for fast enough.


    This post thus organizes and presents all those elusive links in one place. All the hyperlinks below are alive from within a Maple worksheet. A link to this page will also appear in all the Physics help pages in the next Maple release. Comments on practical ways to improve this presentation of information are welcome.



    As part of its commitment to providing the best possible environment for algebraic computations in Physics, Maplesoft launched, during 2014, a Maple Physics: Research and Development website. That enabled users to ask questions, provide feedback and download updated versions of the Physics package, around the clock.

    The "Physics Updates" include improvements, fixes, and the latest new developments, in the areas of Physics, Differential Equations and Mathematical Functions. Since Maple 2018, you can install/uninstall the "Physics Updates" directly from the MapleCloud .

    Maplesoft incorporated the results of this accelerated exchange with people around the world into the successive versions of Maple. Below are two sections:


    The Updates of Physics, an organized collection of links per Maple release, where you can find a description with examples of the subjects developed in the Physics package, from 2012 till 2019.


    The Mapleprimes Physics posts, containing the most important posts describing the Physics project and showing the use of the package to tackle problems in General Relativity and Quantum Mechanics.

    The update of Physics in Maple 2018 and back to Maple 16 (2012)




    Physics Updates during 2018


    Tensor product of Quantum States using Dirac's Bra-Ket Notation


    Coherent States in Quantum Mechanics


    The Zassenhaus formula and the algebra of the Pauli matrices


    Multivariable Taylor series of expressions involving anticommutative (Grassmannian) variables


    New SortProducts command


    A Complete Guide for Tensor computations using Physics



    Physics Maple 2018 updates


    Automatic handling of collision of tensor indices in products


    User defined algebraic differential operators


    The Physics:-Cactus package for Numerical Relativity


    Automatic setting of the EnergyMomentumTensor for metrics of the database of solutions to Einstein's equations


    Minimize the number of tensor components according to its symmetries, relabel, redefine or count the number of independent tensor components


    New functionality and display for inert names and inert tensors


    Automatic setting of Dirac, Pauli and Gell-Mann algebras


    Simplification of products of Dirac matrices


    New Physics:-Library commands to perform matrix operations in expressions involving spinors with omitted indices


    Miscellaneous improvements



    Physics Maple 2017 updates


    General Relativity: classification of solutions to Einstein's equations and the Tetrads package


    The 3D metric and the ThreePlusOne (3 + 1) new Physics subpackage


    Tensors in Special and General Relativity


    The StandardModel new Physics subpackage



    Physics Maple 2016 updates


    Completion of the Database of Solutions to Einstein's Equations


    Operatorial Algebraic Expressions Involving the Differential Operators d_[mu], D_[mu] and Nabla


    Factorization of Expressions Involving Noncommutative Operators


    Tensors in Special and General Relativity


    Vectors Package


    New Physics:-Library commands


    Redesigned Functionality and Miscellaneous



    Physics Maple 2015 updates






    Tetrads in General Relativity


    More Metrics in the Database of Solutions to Einstein's Equations


    Commutators, AntiCommutators, and Dirac notation in quantum mechanics


    New Assume command and new enhanced Mode: automaticsimplification


    Vectors Package


    New Physics:-Library commands





    Physics Maple 18 updates




    4-Vectors, Substituting Tensors


    Functional Differentiation


    More Metrics in the Database of Solutions to Einstein's Equations


    Commutators, AntiCommutators


    Expand and Combine


    New Enhanced Modes in Physics Setup




    Vectors Package


    New Physics:-Library commands





    Physics Maple 17 updates


    Tensors and Relativity: ExteriorDerivative, Geodesics, KillingVectors, LieDerivative, LieBracket, Antisymmetrize and Symmetrize


    Dirac matrices, commutators, anticommutators, and algebras


    Vector Analysis


    A new Library of programming commands for Physics



    Physics Maple 16 updates


    Tensors in Special and General Relativity: contravariant indices and new commands for all the General Relativity tensors


    New commands for working with expressions involving anticommutative variables and functions: Gtaylor, ToFieldComponents, ToSuperfields


    Vector Analysis: geometrical coordinates with functional dependency

    Mapleprimes Physics posts




    The Physics project at Maplesoft


    Mini-Course: Computer Algebra for Physicists


    A Complete Guide for Tensor computations using Physics


    Perimeter Institute-2015, Computer Algebra in Theoretical Physics (I)


    IOP-2016, Computer Algebra in Theoretical Physics (II)


    ACA-2017, Computer Algebra in Theoretical Physics (III) 



    General Relativity



    General Relativity using Computer Algebra


    Exact solutions to Einstein's equations 


    Classification of solutions to Einstein's equations and the ThreePlusOne (3 + 1) package 


    Tetrads and Weyl scalars in canonical form 


    Equivalence problem in General Relativity 


    Automatic handling of collision of tensor indices in products 


    Minimize the number of tensor components according to its symmetries


    Quantum Mechanics



    Quantum Commutation Rules Basics 


    Quantum Mechanics: Schrödinger vs Heisenberg picture 


    Quantization of the Lorentz Force 


    Magnetic traps in cold-atom physics 


    The hidden SO(4) symmetry of the hydrogen atom


    (I) Ground state of a quantum system of identical boson particles 


    (II) The Gross-Pitaevskii equation and Bogoliubov spectrum 


    (III) The Landau criterion for Superfluidity 


    Simplification of products of Dirac matrices


    Algebra of Dirac matrices with an identity matrix on the right-hand side


    Factorization with non-commutative variables


    Tensor Products of Quantum State Spaces 


    Coherent States in Quantum Mechanics 


    The Zassenhaus formula and the Pauli matrices 



    Physics package generic functionality



    Automatic simplification and a new Assume (as in "extended assuming")


    Wirtinger derivatives and multi-index summation

    See Also


    Conventions used in the Physics package, Physics, Physics examples, A Complete Guide for Tensor computations using Physics



    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    The Zassenhaus formula and the algebra of the Pauli matrices


    Edgardo S. Cheb-Terrab1 and Bryan C. Sanctuary2

    (1) Maplesoft

    (2) Department of Chemistry, McGill University, Montreal, Quebec, Canada



    The implementation of the Pauli matrices and their algebra was reviewed during 2018, including the algebraic manipulation of nested commutators, resulting in faster computations using simpler and more flexible input. As frequently happens, improvements of this type suddenly turn research problems presented in the literature as intractable in practice into tractable ones.


    As an illustration, we tackle below the derivation of the coefficients entering the Zassenhaus formula shown in section 4 of [1] for the Pauli matrices up to order 10 (results in the literature go up to order 5). The computation presented can be reused to compute these coefficients up to any desired higher order (hardware limitations may apply). A number of examples which exploit this formula and its dual, the Baker-Campbell-Hausdorff formula, occur in connection with the Weyl prescription for converting a classical function to a quantum operator (see sec. 5 of [1]), as well as when solving the eigenvalue problem for classes of mathematical-physics partial differential equations [2].
    To reproduce the results below - a worksheet with these contents is linked at the end - you need your Maple 2018.2.1 updated with the
    Maplesoft Physics Updates version 280 or higher.



    [1] R.M. Wilcox, "Exponential Operators and Parameter Differentiation in Quantum Physics", Journal of Mathematical Physics, V.8, 4, 1967.


    [2] S. Steinberg, "Applications of the Lie algebraic formulas of Baker, Campbell, Hausdorff, and Zassenhaus to the calculation of explicit solutions of partial differential equations", Journal of Differential Equations, V.26, 3, 1977.


    [3] K. Huang, "Statistical Mechanics", John Wiley & Sons, Inc. 1963, p217, Eq.(10.60).


    Formulation of the problem

    The Zassenhaus formula expresses exp(lambda*(A+B)) as an infinite product of exponential operators involving nested commutators of increasing complexity

    exp(lambda*(A+B)) = exp(lambda*A) * exp(lambda*B) * exp(lambda^2*C[2]) * exp(lambda^3*C[3]) * ...
                      = exp(lambda*A)*exp(lambda*B)*exp(-(1/2)*lambda^2*%Commutator(A, B))*exp((1/6)*lambda^3*(2*%Commutator(B, %Commutator(A, B))+%Commutator(A, %Commutator(A, B))))*...

    Given A, B and their commutator E = %Commutator(A, B), if A and B commute with E, C[n] = 0 for n >= 3 and the Zassenhaus formula reduces to the product of the first three exponentials above. The interest here is in the general case, when %Commutator(A, E) <> 0 and %Commutator(B, E) <> 0, and the goal is to compute the Zassenhaus coefficients C[n] in terms of A, B for arbitrary finite n. Following [1], in that general case, differentiating the Zassenhaus formula with respect to lambda and multiplying from the right by exp(-lambda*(A+B)) one obtains

    A+B = A + exp(lambda*A)*B*exp(-lambda*A) + exp(lambda*A)*exp(lambda*B)*2*lambda*C[2]*exp(-lambda*B)*exp(-lambda*A) + ...

    This is an intricate formula, which however (see eq.(4.20) of [1]) can be represented in abstract form as


    0 = Sum(lambda^n*{A^n, B}/factorial(n), n = 0 .. infinity) + 2*lambda*Sum(Sum(lambda^(n+m)*{A^m, B^n, C[2]}/(factorial(n)*factorial(m)), n = 0 .. infinity), m = 0 .. infinity) + 3*lambda^2*Sum(Sum(Sum(lambda^(n+m+k)*{A^k, B^m, C[2]^n, C[3]}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. infinity), m = 0 .. infinity), k = 0 .. infinity) + ...

    from where an equation to be solved for each C[n] is obtained by equating to 0 the coefficient of lambda^(n-1). In this formula, the repeated commutator bracket is defined inductively in terms of the standard commutator %Commutator(A, B) by

    {B, A^0} = B, {B, A^(n+1)} = %Commutator(A, {B, A^n})

    {C[j], B^n, A^0} = {C[j], B^n}, {C[j], B^n, A^m} = %Commutator(A, {C[j], B^n, A^(m-1)})

    and higher-order repeated-commutator brackets are similarly defined. For example, taking the coefficient of lambda and lambda^2 and respectively solving each of them for C[2] and C[3] one obtains

    C[2] = -(1/2)*%Commutator(A, B)

    C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B))

    This method is used in [3] to treat quantum deviations from the classical limit of the partition function for both a Bose-Einstein and Fermi-Dirac gas. The complexity of the computation of C[n] grows rapidly and in the literature only the coefficients up to C[5] have been published. Taking advantage of developments in the Physics package during 2018, below we show the computation up to C[10] and provide a compact approach to compute them up to arbitrary finite order.
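The truncated Zassenhaus product can be checked numerically. The following is an illustrative Python sketch (the post itself works in Maple); `expm_series` is a helper defined here, a plain Taylor-series matrix exponential, adequate for the small norms involved:

```python
# Numerically check e^(L(A+B)) ~ e^(LA) e^(LB) e^(L^2 C2) e^(L^3 C3)
# for A = sigma_1, B = sigma_3, with C2 and C3 as derived above.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(X, Y):
    return X @ Y - Y @ X

def expm_series(M, terms=30):
    # Plain Taylor-series matrix exponential; fine for small norms.
    out = np.eye(2, dtype=complex)
    term = np.eye(2, dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A, B = s1, s3
C2 = -comm(A, B) / 2
C3 = comm(B, comm(A, B)) / 3 + comm(A, comm(A, B)) / 6

lam = 0.1
lhs = expm_series(lam * (A + B))
rhs = (expm_series(lam * A) @ expm_series(lam * B)
       @ expm_series(lam**2 * C2) @ expm_series(lam**3 * C3))
err = np.max(np.abs(lhs - rhs))   # truncation error, O(lambda^4)
```

At lam = 0.1 the residual is of order lambda^4, confirming the truncation order of the product stopped at C[3].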


    Computing up to C[10]

    Set the signature of spacetime such that its space part is equal to +++ and use lowercaselatin letters to represent space indices. Set also A, B and C[n] to represent quantum operators


    Setup(op = {A, B, C}, signature = `+++-`, spaceindices = lowercaselatin)

    * Partial match of 'op' against keyword 'quantumoperators'




    [quantumoperators = {A, B, C}, signature = `+ + + -`, spaceindices = lowercaselatin]


    To illustrate the computation up to C[10], a convenient example, where the commutator algebra is closed, consists of taking A and B as Pauli matrices which, multiplied by the imaginary unit, form a basis for the su(2) algebra, which in turn exponentiates to the relevant Special Unitary Group SU(2). The algebra of the Pauli matrices involves a commutator and an anticommutator


    %Commutator(Physics:-Psigma[i], Physics:-Psigma[j]) = (2*I)*Physics:-LeviCivita[i, j, k]*Physics:-Psigma[k], %AntiCommutator(Physics:-Psigma[i], Physics:-Psigma[j]) = 2*Physics:-KroneckerDelta[i, j]
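These relations are easy to verify numerically. A minimal Python sketch (the worksheet does this with Maple's Physics package instead):

```python
# Verify [s_i, s_j] = 2*I*eps_ijk*s_k and {s_i, s_j} = 2*delta_ij
# for the Pauli matrices, over all index pairs.
import numpy as np

sigma = {1: np.array([[0, 1], [1, 0]], dtype=complex),
         2: np.array([[0, -1j], [1j, 0]], dtype=complex),
         3: np.array([[1, 0], [0, -1]], dtype=complex)}

def eps(i, j, k):
    # Levi-Civita symbol for indices in {1, 2, 3}
    return (i - j) * (j - k) * (k - i) // 2

I2 = np.eye(2, dtype=complex)
ok = all(
    np.allclose(sigma[i] @ sigma[j] - sigma[j] @ sigma[i],
                sum(2j * eps(i, j, k) * sigma[k] for k in (1, 2, 3)))
    and np.allclose(sigma[i] @ sigma[j] + sigma[j] @ sigma[i],
                    2 * (i == j) * I2)
    for i in (1, 2, 3) for j in (1, 2, 3)
)
```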


    Assign now A and B to two Pauli matrices, for instance

    A := Psigma[1]



    B := Psigma[3]



    Next, to extract the coefficient of lambda^n from

    0 = Sum(lambda^n*{A^n, B}/factorial(n), n = 0 .. infinity) + 2*lambda*Sum(Sum(lambda^(n+m)*{A^m, B^n, C[2]}/(factorial(n)*factorial(m)), n = 0 .. infinity), m = 0 .. infinity) + 3*lambda^2*Sum(Sum(Sum(lambda^(n+m+k)*{A^k, B^m, C[2]^n, C[3]}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. infinity), m = 0 .. infinity), k = 0 .. infinity) + ...

    to solve it for C[n+1] we note that each term has a factor lambda^m multiplying a sum, so we only need to take into account the first n+1 terms (sums) and in each sum replace infinity by the corresponding n-m. For example, given C[2] = -(1/2)*%Commutator(A, B), to compute C[3] we only need to compute these first three terms:

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1 .. 2)+2*lambda*(Sum(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n = 0 .. 1), m = 0 .. 1))+3*lambda^2*(Sum(Sum(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n = 0 .. 0), m = 0 .. 0), k = 0 .. 0))

    then solving for C[3] one gets C[3] = (1/3)*%Commutator(B, %Commutator(A, B))+(1/6)*%Commutator(A, %Commutator(A, B)).

    Also, since to compute C[n] we only need the coefficient of lambda^(n-1), it is not necessary to compute all the terms of each multiple-sum. One way of restricting the multiple-sums to only one power of lambda consists of using multi-index summation, available in the Physics package (see Physics:-Library:-Add ). For that purpose, redefine sum to extend its functionality with multi-index summation

    Setup(redefinesum = true)

    [redefinesum = true]


    Now we can represent the same computation of C[3] without multiple sums and without computing unnecessary terms as

    0 = Sum(lambda^n*{B, A^n}/factorial(n), n = 1)+2*lambda*(Sum(lambda^(n+m)*{C[2], A^m, B^n}/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(Sum(lambda^(n+m+k)*{C[3], A^k, B^m, C[2]^n}/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))
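The constrained sums above range over index tuples with a fixed total (e.g. n+m = 1), i.e. the weak compositions of an integer. In languages without multi-index summation, enumerating these tuples directly mimics it; a hypothetical Python sketch:

```python
# Enumerate all tuples of `parts` nonnegative integers summing to `total`,
# i.e. the index tuples kept by a multi-index sum like n + m = total.
from itertools import product

def compositions(total, parts):
    """All tuples of `parts` nonnegative integers summing to `total`."""
    return [t for t in product(range(total + 1), repeat=parts) if sum(t) == total]
```

For instance, compositions(1, 2) yields (0, 1) and (1, 0): exactly the two terms kept by the n+m = 1 sum above.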

    Finally, we need a computational representation for the repeated commutator bracket 

    {B, A^0} = B, {B, A^(n+1)} = %Commutator(A, {B, A^n})

    One way of representing this commutator bracket operation is defining a procedure, say F, with a cache to avoid recomputing lower order nested commutators, as follows

    F := proc (A, B, n) options operator, arrow; if n::negint then 0 elif n = 0 then B elif n::posint then %Commutator(A, F(A, B, n-1)) else 'F(A, B, n)' end if end proc

    proc (A, B, n) options operator, arrow; if n::negint then 0 elif n = 0 then B elif n::posint then %Commutator(A, F(A, B, n-1)) else 'F(A, B, n)' end if end proc


    Cache(procedure = F)


    For example,

    F(A, B, 1)

    %Commutator(Physics:-Psigma[1], Physics:-Psigma[3])


    F(A, B, 2)

    %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))


    F(A, B, 3)

    %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3])))
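The procedure F above can be sketched in Python as a memoized nested commutator, here specialized to fixed matrices A = sigma_1 and B = sigma_3 as in the worksheet (an illustrative translation, not the Maple code; matrices are not hashable, hence the explicit dict cache):

```python
# Python analogue of F: the n-fold nested commutator [A, [A, ..., [A, B]]],
# memoized on n, for fixed A = sigma_1 and B = sigma_3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
A, B = s1, s3

_cache = {}
def F(n):
    if n < 0:
        return np.zeros_like(B)
    if n == 0:
        return B
    if n not in _cache:
        prev = F(n - 1)
        _cache[n] = A @ prev - prev @ A   # [A, F(n-1)]
    return _cache[n]
```

For example, F(1) gives [s1, s3] = -2*I*s2, the same commutator the %Commutator output above represents unevaluated.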


    We can now set the value of C[2]

    C[2] := -(1/2)*Commutator(A, B)



    and enter the formula that involves only multi-index summation

    H := sum(lambda^n*F(A, B, n)/factorial(n), n = 2)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = 1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = 0))

    (1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]


    from where we compute C[3] by solving for it the coefficient of lambda^2, and since due to the multi-index summation this expression already contains lambda^2 as a factor,

    C[3] = Simplify(solve(H, C[3]))

    C[3] = (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]
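The solve step can be mirrored numerically: the coefficient of lambda^2 in H is (1/2)*[A,[A,B]] + 2*([A,C[2]]+[B,C[2]]) + 3*C[3], and isolating C[3] reproduces the result above. An illustrative Python sketch:

```python
# Reproduce the solve step for C3: set the lambda^2 coefficient of H to zero
# and isolate C3, recovering C[3] = (2/3)*s3 - (4/3)*s1 for A = s1, B = s3.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(X, Y):
    return X @ Y - Y @ X

A, B = s1, s3
C2 = -comm(A, B) / 2                                  # = I*s2
coeff = comm(A, comm(A, B)) / 2 + 2 * (comm(A, C2) + comm(B, C2))
C3 = -coeff / 3
```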


    In order to generalize the formula for H for higher powers of lambda, the right-hand side of the multi-index summation limit can be expressed in terms of an abstract N, and H transformed into a mapping:


    H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2)), N)

    proc (N) options operator, arrow; lambda^N*F(Physics:-Psigma[1], Physics:-Psigma[3], N)/factorial(N)+2*lambda*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], I*Physics:-Psigma[2], n), m)), n+m = N-1))+3*lambda^2*(sum(Physics:-`*`(Physics:-`^`(lambda, n+m+k), Physics:-`^`(Physics:-`*`(factorial(n), factorial(m), factorial(k)), -1), F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(I*Physics:-Psigma[2], C[3], n), m), k)), n+m+k = N-2)) end proc


    Now we have





    lambda*%Commutator(Physics:-Psigma[1], Physics:-Psigma[3])+(2*I)*lambda*Physics:-Psigma[2]


    The following is already equal to (11)


    (1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))+2*lambda*(lambda*%Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])+lambda*%Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))+3*lambda^2*C[3]


    In this way, we can reproduce the results published in the literature for the coefficients of Zassenhaus formula up to C[5] by adding two more multi-index sums to (13). Unassign C first


    H := unapply(sum(lambda^n*F(A, B, n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(A, F(B, C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(A, F(B, F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(A, F(B, F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(A, F(B, F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)*factorial(p)), n+m+k+l+p = N-4)), N)

    We now compute up to C[5] in one go

    for j to 4 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do









    The nested-commutator expression solved in the last step for C[5] is


    (1/24)*lambda^4*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], Physics:-Psigma[3]))))+2*lambda*((1/6)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/2)*lambda^3*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2])))+(1/6)*lambda^3*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], I*Physics:-Psigma[2]))))+3*lambda^2*((1/2)*lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[1], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(Physics:-Psigma[3], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[1], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+lambda^2*%Commutator(Physics:-Psigma[3], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1]))+(1/2)*lambda^2*%Commutator(I*Physics:-Psigma[2], %Commutator(I*Physics:-Psigma[2], (2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1])))+4*lambda^3*(lambda*%Commutator(Physics:-Psigma[1], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(Physics:-Psigma[3], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator(I*Physics:-Psigma[2], -((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2]))+lambda*%Commutator((2/3)*Physics:-Psigma[3]-(4/3)*Physics:-Psigma[1], 
-((1/3)*I)*((3*I)*Physics:-Psigma[1]+(6*I)*Physics:-Psigma[3]-4*Physics:-Psigma[2])))+5*lambda^4*(-(8/9)*Physics:-Psigma[1]-(158/45)*Physics:-Psigma[3]-((16/3)*I)*Physics:-Psigma[2])


    With everything understood, we now want to extend these results, generalizing them into an approach to compute an arbitrarily large coefficient C[n], then use that generalization to compute all the Zassenhaus coefficients up to C[10]. Typing the formula for H for higher powers of lambda is, however, prone to typographical mistakes. The following is a program, using the Maple programming language, that produces these formulas for an arbitrary integer power of lambda:

    Formula := proc(A, B, C, Q)


    This Formula program uses a sequence of summation indices, with as many indices as the order of the coefficient C[n] we want to compute; in this case we need 10 of them

    summation_indices := n, m, k, l, p, q, r, s, t, u

    n, m, k, l, p, q, r, s, t, u
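Although the body of the Formula procedure is elided above, the nested bracket term it assembles for each power of lambda can be mimicked by a short string-building routine; a hypothetical Python sketch (the names F, A, B, C2, ... follow the worksheet's notation):

```python
# Build the order-q nested bracket term: F is nested over A, B, C2, ..., C[q-1]
# around the innermost operand C[q] (or B when q = 1), innermost first,
# attaching one summation index per nesting level.
def term(q, indices="nmklpqrstu"):
    names = ["A", "B"] + [f"C{j}" for j in range(2, q + 1)]
    expr = names[-1]                  # innermost operand
    for name, i in zip(reversed(names[:-1]), indices[:q]):
        expr = f"F({name}, {expr}, {i})"
    return expr
```

For example, term(3) returns "F(A, F(B, F(C2, C3, n), m), k)", matching the structure printed by Formula(A, B, C, 3) above.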


    To avoid interference from the results computed in the loop (17), unassign C again



    Now the formulas typed by hand and used above to compute each of C[2], C[3] and C[5] are constructed by the computer

    Formula(A, B, C, 2)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))


    Formula(A, B, C, 3)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))


    Formula(A, B, C, 5)

    sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))



    Now construct the formula for C[10] and make it a mapping with respect to N, as done for C[5] after (16)

    H := unapply(Formula(A, B, C, 10), N)

    proc (N) options operator, arrow; sum(lambda^n*F(Physics:-Psigma[1], Physics:-Psigma[3], n)/factorial(n), n = N)+2*lambda*(sum(lambda^(n+m)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], C[2], n), m)/(factorial(n)*factorial(m)), n+m = N-1))+3*lambda^2*(sum(lambda^(n+m+k)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], C[3], n), m), k)/(factorial(n)*factorial(m)*factorial(k)), n+m+k = N-2))+4*lambda^3*(sum(lambda^(n+m+k+l)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], C[4], n), m), k), l)/(factorial(n)*factorial(m)*factorial(k)*factorial(l)), n+m+k+l = N-3))+5*lambda^4*(sum(lambda^(n+m+k+l+p)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], C[5], n), m), k), l), p)/(factorial(n)*factorial(l)*factorial(m)*factorial(k)*factorial(p)), n+m+k+l+p = N-4))+6*lambda^5*(sum(lambda^(n+m+k+l+p+q)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], C[6], n), m), k), l), p), q)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(k)*factorial(q)), n+m+k+l+p+q = N-5))+7*lambda^6*(sum(lambda^(n+m+k+l+p+q+r)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], C[7], n), m), k), l), p), q), r)/(factorial(n)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(r)), n+m+k+l+p+q+r = N-6))+8*lambda^7*(sum(lambda^(n+m+k+l+p+q+r+s)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], C[8], n), m), k), l), p), q), r), s)/(factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(s)), n+m+k+l+p+q+r+s = N-7))+9*lambda^8*(sum(lambda^(n+m+k+l+p+q+r+s+t)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], C[9], n), m), k), l), p), q), r), s), t)/(factorial(s)*factorial(n)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(t)), n+m+k+l+p+q+r+s+t = 
N-8))+10*lambda^9*(sum(lambda^(n+m+k+l+p+q+r+s+t+u)*F(Physics:-Psigma[1], F(Physics:-Psigma[3], F(C[2], F(C[3], F(C[4], F(C[5], F(C[6], F(C[7], F(C[8], F(C[9], C[10], n), m), k), l), p), q), r), s), t), u)/(factorial(s)*factorial(n)*factorial(t)*factorial(r)*factorial(l)*factorial(m)*factorial(p)*factorial(q)*factorial(k)*factorial(u)), n+m+k+l+p+q+r+s+t+u = N-9)) end proc


    Now compute the coefficients of the Zassenhaus formula, up to C[10], all in one go

    for j to 9 do C[j+1] := Simplify(solve(H(j), C[j+1])) end do
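As an independent sanity check of the truncated expansion (outside Maple), the low-order coefficients have well-known closed forms: C2 = -[A,B]/2 and C3 = [B,[A,B]]/3 + [A,[A,B]]/6. The Python sketch below — a hedged numeric check, not part of the Maple derivation — uses these textbook values with A = sigma_1 and B = sigma_3, and verifies that appending each exponential factor shrinks the error in exp(lambda*(A+B)) ≈ exp(lambda*A)*exp(lambda*B)*exp(lambda^2*C2)*exp(lambda^3*C3):

```python
# Numeric check of the truncated Zassenhaus expansion for 2x2 matrices,
# using the textbook closed forms C2 = -[A,B]/2, C3 = [B,[A,B]]/3 + [A,[A,B]]/6
# (assumed here; the post derives the C[n] symbolically with Maple's Physics package).

def mmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def madd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mscale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def comm(X, Y):
    # commutator [X, Y]
    return madd(mmul(X, Y), mscale(-1, mmul(Y, X)))

def expm(X, terms=25):
    # matrix exponential via its Taylor series (entries are small here, so this converges fast)
    R = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = mscale(1.0 / k, mmul(T, X))
        R = madd(R, T)
    return R

def maxdiff(X, Y):
    return max(abs(X[i][j] - Y[i][j]) for i in range(2) for j in range(2))

A = [[0.0, 1.0], [1.0, 0.0]]    # Pauli sigma_1
B = [[1.0, 0.0], [0.0, -1.0]]   # Pauli sigma_3
C2 = mscale(-0.5, comm(A, B))
C3 = madd(mscale(1 / 3, comm(B, comm(A, B))), mscale(1 / 6, comm(A, comm(A, B))))

lam = 0.05
lhs = expm(mscale(lam, madd(A, B)))
rhs2 = mmul(mmul(expm(mscale(lam, A)), expm(mscale(lam, B))), expm(mscale(lam**2, C2)))
rhs3 = mmul(rhs2, expm(mscale(lam**3, C3)))
err2 = maxdiff(lhs, rhs2)   # truncation error after the C2 factor: O(lambda^3)
err3 = maxdiff(lhs, rhs3)   # after the C3 factor: O(lambda^4), strictly smaller
```

Each additional exponential factor improves the agreement by one power of lambda, which is exactly what the symbolic computation of higher C[n] buys.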


    Notes: with the material above you can compute higher-order values of C[n]. To do so, you need to:


    Unassign C, as was done above on two occasions, to avoid interference from the results just computed.


    Add more summation indices to the sequence summation_indices in (19), as many as the maximum value of n in C[n].


    Keep in mind that the growth in size and complexity is significant: each C[n] takes considerably more time to compute than all of the previous ones together.


    Re-execute the input line (23) and the loop (24).



    Edgardo S. Cheb-Terrab
    Physics, Differential Equations and Mathematical Functions, Maplesoft

    A common question to our tech support team is about completing the square for a univariate polynomial of even degree, and how to do that in Maple. We’ve put together a solution that we think you’ll find useful. If you have any alternative methods or improvements to our code, let us know!

    # Procedure to complete the square for a univariate
    # polynomial of even degree.
    CompleteSquare := proc( f :: depends( 'And'( polynom, 'satisfies'( g -> ( type( degree(g,x), even ) ) ) ) ), x :: name )
           local a, g, k, n, phi, P, Q, r, S, T, u:
           # Degree and parameters of polynomial.
           n := degree( f, x ):
           P := indets( f, name ) minus { x }:
           # General polynomial of square plus constant.
           g := add( a[k] * x^k, k=0..n/2 )^2 + r:
           # Solve for unknowns in g.
           Q := indets( g, name ) minus P:
           S := map( expand, { solve( identity( expand( f - g ) = 0, x ), Q ) } ):
           if numelems( S ) = 0 then
                  return NULL:
           end if:
           # Evaluate g at the solution, and re-write square term
           # so that the polynomial within the square is monic.
           phi := u -> lcoeff(op(1,u),x)^2 * (expand(op(1,u)/lcoeff(op(1,u),x)))^2:  
           T := map( evalindets, map( u -> eval(g,u), S ), `^`(anything,identical(2)), phi ):
           return `if`( numelems(T) = 1, T[], T ):
    end proc:
    # Examples.
    CompleteSquare( x^2 + 3 * x + 2, x );
    CompleteSquare( a * x^2 + b * x + c, x );
    CompleteSquare( 4 * x^8 + 8 * x^6 + 4 * x^4 - 246, x );
    m, n := 4, 10;
    r := rand(-10..10):
    for i from 1 to n do
           CompleteSquare( r() * ( x^(m/2) + randpoly( x, degree=m-1, coeffs=r ) )^2 + r(), x );
    end do;
    # Compare quadratic examples with Student:-Precalculus:-CompleteSquare()
    # (which is restricted to quadratic expressions).
    Student:-Precalculus:-CompleteSquare( x^2 + 3 * x + 2 );
    Student:-Precalculus:-CompleteSquare( a * x^2 + b * x + c );

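For the quadratic case, the coefficient matching done by CompleteSquare reduces to the familiar closed form a*x^2 + b*x + c = a*(x + b/(2*a))^2 + (c - b^2/(4*a)). As a point of comparison outside Maple, here is a minimal Python sketch of that special case (the function name is ours; exact arithmetic via the standard fractions module):

```python
from fractions import Fraction

def complete_square_quadratic(a, b, c):
    """Return (A, h, k) such that a*x^2 + b*x + c == A*(x + h)^2 + k (requires a != 0)."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    return a, b / (2 * a), c - b * b / (4 * a)

# Same first example as above: x^2 + 3*x + 2 == (x + 3/2)^2 - 1/4
A, h, k = complete_square_quadratic(1, 3, 2)
```

The general even-degree case handled by the Maple procedure is genuinely harder — the unknowns enter nonlinearly — which is why the procedure leans on solve with the identity option rather than a closed formula.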
    For a higher-order example:

    f := 5*x^4 - 70*x^3 + 365*x^2 - 840*x + 721;
    g := CompleteSquare( f, x ); # 5 * ( x^2 - 7 * x + 12 )^2 + 1
    h := evalindets( f, `*`, factor ); # 5 * (x-3)^2 * (x-4)^2 + 1
    p1 := plot( f, x=0..5, y=-5..5, color=blue ):
    p2 := plots:-pointplot( [ [3,1], [4,1] ], symbol=solidcircle, symbolsize=20, color=red ):
    plots:-display( p1, p2 );

    tells us that the minimum value of the expression is 1, and it occurs at x=3 and x=4.
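As a quick cross-check of that reading, a few lines of Python (independent of Maple) confirm that f and its completed-square form agree, and that the minimum value 1 is attained at both x = 3 and x = 4:

```python
def f(x):
    return 5 * x**4 - 70 * x**3 + 365 * x**2 - 840 * x + 721

def g(x):
    # completed-square form from above; the squared factor vanishes at x = 3 and x = 4,
    # so f(x) >= 1 everywhere with equality exactly at those two points
    return 5 * (x**2 - 7 * x + 12) ** 2 + 1

assert all(f(x) == g(x) for x in range(-10, 11))
assert f(3) == 1 and f(4) == 1
```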

    The Joint Mathematics Meetings are taking place next week (January 16 – 19) in Baltimore, Maryland, U.S.A. This will be the 102nd annual winter meeting of the Mathematical Association of America (MAA) and the 125th annual meeting of the American Mathematical Society (AMS).

    Maplesoft will be exhibiting at booth #501 as well as in the networking area. Please stop by to chat with me and other members of the Maplesoft team, as well as to pick up some free Maplesoft swag or win some prizes.

    This year we will be hosting a hands-on workshop on Maple: A Natural Way to Work with Math

    This special event will take place on Thursday, January 17, from 6:00 to 8:00 p.m. in Holiday Ballroom 4 at the Hilton Baltimore.


    There are also several other interesting Maple-related talks:

    MYMathApps Tutorials

    MAA General Contributed Paper Session on Mathematics and Technology 

    Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.

    Room 323, BCC
    Matthew Weihing*, Texas A&M University 
    Philip B Yasskin, Texas A&M University 


    The Logic Behind the Turing Bombe's Role in Breaking Enigma. 

    MAA General Contributed Paper Session on Mathematics and Technology 

    Wednesday January 16, 2019, 1:00 p.m.-1:55 p.m.
    Room 323, BCC
    Neil Sigmon*, Radford University 
    Rick Klima, Appalachian State University 


    On a software accessible database of faithful representations of Lie algebras. 

    MAA General Contributed Paper Session on Algebra, I 

    Wednesday January 16, 2019, 2:15 p.m.-6:25 p.m.
    Room 348, BCC
    Cailin Foster*, Dixie State University 

    Discussion of Various Technical Strategies Used in College Math Teaching. 

    MAA Contributed Paper Session on Open Educational Resources: Combining Technological Tools and Innovative Practices to Improve Student Learning, IV 

    Friday January 18, 2019, 8:00 a.m.-10:55 a.m.
    Room 303, BCC
    Lina Wu*, Borough of Manhattan Community College-The City University of New York 

    An Enticing Simulation in Ordinary Differential Equations that predict tangible results. 

    MAA Contributed Paper Session on The Teaching and Learning of Undergraduate Ordinary Differential Equations 

    Friday January 18, 2019, 1:00 p.m.-4:55 p.m.
    Room 324, BCC
    Satyanand Singh*, New York City College of Technology of CUNY 

    An Effort to Assess the Impact a Modeling First Approach has in a Traditional Differential Equations Class. 

    AMS Special Session on Using Modeling to Motivate the Study of Differential Equations, I 
    Saturday January 19, 2019, 8:00 a.m.-11:50 a.m.

    Room 336, BCC
    Rosemary C Farley*, Manhattan College 
    Patrice G Tiffany, Manhattan College 


    If you are attending the Joint Mathematics Meetings this week and plan on presenting anything on Maple, please feel free to let me know and I'll update this list accordingly.

    See you in Baltimore!


    Maple Product Manager
