Maple Questions and Posts

These are Posts and Questions associated with the product, Maple


Hello there, 

Here is a set of non-linear equations:

 

restart;

with(LinearAlgebra):

TrainLoad := -10*10^6*(cos(convert(40*degrees, radians)) + I*sin(convert(40*degrees, radians)));
#   -10000000*cos((2/9)*Pi) - (10000000*I)*sin((2/9)*Pi)

evalf(TrainLoad, 7);
#   -7660444. - 6427876.*I

f1n2 := (0.03 + I*0.1515)*Ix[c1] - (0.03 + I*0.1515)*Ix[c2] + 2*V[at1] - 55*10^3 = 0;
#   (0.03 + 0.1515*I)*Ix[c1] + (-0.03 - 0.1515*I)*Ix[c2] + 2*V[at1] - 55000 = 0

f3n4 := (1.6 + I*6.24)*Ix[c1] + (1.12 + I*2.64)*Ix[c2] + V[t] - V[at1] = 0;
#   (1.6 + 6.24*I)*Ix[c1] + (1.12 + 2.64*I)*Ix[c2] + V[t] - V[at1] = 0

f5n6 := (1.36 + I*4.44)*Ix[c2] + V[at2] - V[t] = 0;
#   (1.36 + 4.44*I)*Ix[c2] + V[at2] - V[t] = 0

f7n8 := (-1.12 - I*2.64)*Ix[c1] + (-3.92 - I*12.00)*Ix[c2] + V[at2] - V[at1] = 0;
#   (-1.12 - 2.64*I)*Ix[c1] + (-3.92 - 12.00*I)*Ix[c2] + V[at2] - V[at1] = 0

f9n10 := V[t]*(Ix[c1] - Ix[c2]) + TrainLoad = 0;
#   V[t]*(Ix[c1] - Ix[c2]) - 10000000*cos((2/9)*Pi) - (10000000*I)*sin((2/9)*Pi) = 0

polynomials := {f1n2, f3n4, f5n6, f7n8, f9n10};

variables := {Ix[c1], Ix[c2], V[at1], V[at2], V[t]};

fsolve(polynomials, variables, complex);
#   {Ix[c1] = 955.2297105 - 5281.491898*I, Ix[c2] = -505.0156845 + 2424.830843*I,
#    V[at1] = 26894.34238 + 4.981252431*I, V[at2] = 10829.70666 + 56.66545127*I,
#    V[t] = -623.3636109 + 1112.165758*I}

 

 


The fsolve() command was able to come up with a solution.

Then, when

f9n10 := V[t] * (Ix[c1] - Ix[c2]) + TrainLoad = 0;

becomes

f9n10 := V[t] * conjugate(Ix[c1] - Ix[c2]) + TrainLoad = 0;

fsolve() refuses to produce a solution.

Would you tell me how to make the fsolve() command work with the conjugate() operator?

Thank you, 

In Kwon Park
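For reference, one workaround that is often suggested: fsolve does not accept conjugate() directly, so split each complex unknown into real and imaginary parts and let evalc, which treats unknown symbols as real, eliminate the conjugate. A minimal sketch, reusing the equations above; the names a1..w2 are hypothetical:

# Sketch only (not from the original worksheet): split the unknowns into
# real/imaginary parts, then split each equation into Re and Im parts.
X := {Ix[c1] = a1 + I*b1, Ix[c2] = a2 + I*b2,
      V[at1] = u1 + I*v1, V[at2] = u2 + I*v2, V[t] = w1 + I*w2}:
sys := eval({f1n2, f3n4, f5n6, f7n8,
             V[t]*conjugate(Ix[c1] - Ix[c2]) + TrainLoad = 0}, X):
# evalc assumes the symbols are real, which removes conjugate()
resys := map(e -> (evalc(Re(lhs(e))) = 0, evalc(Im(lhs(e))) = 0), evalf(sys)):
fsolve(resys, {a1, a2, b1, b2, u1, u2, v1, v2, w1, w2});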


 

 

Download no_conjugate.mw

 

Hello,

 

I am one of the software coordinators for CUNY and was wondering how I would obtain access to download Maple for the college community.

I have the following program which constructs the multiplication table, CI, for a matrix Lie algebra and evaluates the difference between CI's row dimension and its rank. The code is a little convoluted because "LieTable" formats the entries very strangely and forces incorrect rank values.

The matrix CI is constructed rather quickly (within a few seconds), and everything works well with "small" examples (up to 12 basis elements has evaluated within seconds). However, the example I've included is for a 27-dimensional Lie algebra. As I stated, CI is constructed quickly, even in larger examples, but the rank evaluation (i.e., LinearAlgebra:-Rank(CI)) has never completed for the example I've included. I let it run for about 3 hours before shutting it down.

I have an older Macbook Air which I am using to run these computations. Could this simply be an issue of not enough computing power?

I have attempted to import the matrix CI into Mathematica (to see if this was simply a limitation of Maple), but that is its own headache (Mathematica reads entries of the matrix incorrectly).

 

Any recommendations would help. If this is an issue of computing power, I can get access to a more powerful system soon. It doesn't seem that the code itself causes the issue, since it is not the construction of the matrix that gives me trouble but the evaluation of its rank. I am rather naive about Maple (and programming in general), though, so I may be wrong.
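One cheap experiment (a sketch, not a fix: the numeric rank of a float matrix can differ from the exact rank, so treat this only as a sanity check) is to see whether a floating-point rank computation finishes quickly:

# Sketch: compare the exact rank with a fast floating-point estimate.
# evalf converts the exact entries to floats, so Rank can use
# numerical linear algebra, which is usually much faster.
with(LinearAlgebra):
CIf := evalf(CI):      # CI as constructed in the attached worksheet
Rank(CIf);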

 

Index_and_Contact.mw

I want to do the substitution f(t) - ff(t) = epsilon for any variable t in Maple:

 

expand(myerror);
#   -(2/15)*f(x-2*h)/h - (1/6)*f(x)/h + (3/10)*f(x+3*h)/h
#    + (2/15)*ff(x-2*h)/h + (1/6)*ff(x)/h - (3/10)*ff(x+3*h)/h

myfunc := t -> f(t) - ff(t) = epsilon;

algsubs(myfunc(t), myerror);
#   (-(2/15)*f(x-2*h) - (1/6)*f(x) + (3/10)*f(x+3*h))/h
#    - (-(2/15)*ff(x-2*h) - (1/6)*ff(x) + (3/10)*ff(x+3*h))/h

subs(f(-h*n + x) = 1, ff(-h*n + x) = 0, f(x) = 1, ff(x) = 0,
     f(h*m + x) = 1, ff(h*m + x) = 0, myerror)*epsilon;
#   (4/15)*epsilon/h
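One way to perform this substitution for every argument t at once (a sketch on a toy expression, not from the original worksheet; the idea is that f(t) - ff(t) = epsilon is equivalent to ff(t) = f(t) - epsilon) is to substitute ff itself as an operator:

# Sketch: rewrite every call ff(t) as f(t) - epsilon, whatever t is
expr := 2*f(a) - 2*ff(a) + 5*f(b) - 5*ff(b):     # toy expression
expand(eval(expr, ff = (t -> f(t) - epsilon)));
#   7*epsilon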

 

How do I change equations from the 2D-output into definitions?

#! Change the order (in this case: second):
solve({-alpha[-1] + 2*alpha[2] = 1, 1/2*alpha[-1]*h + 2*alpha[2]*h = 0,
       alpha[-1]/h + alpha[0]/h + alpha[2]/h = 0},
      {alpha[-1], alpha[0], alpha[2]});
#   {alpha[-1] = -2/3, alpha[0] = 1/2, alpha[2] = 1/6}

lhs({alpha[-1] = -2/3, alpha[0] = 1/2, alpha[2] = 1/6}[1]) := rhs({alpha[-1] = -2/3, alpha[0] = 1/2, alpha[2] = 1/6}[1]);
#   lhs(alpha[-1] = -2/3) := rhs({alpha[-1] = -2/3, alpha[0] = 1/2, alpha[2] = 1/6}[1])
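One commonly used way to turn such a solution set into actual definitions (a sketch) is to capture it and call assign:

# Sketch: assign makes the solved-for names take their solution values
sol := solve({-alpha[-1] + 2*alpha[2] = 1, 1/2*alpha[-1]*h + 2*alpha[2]*h = 0,
              alpha[-1]/h + alpha[0]/h + alpha[2]/h = 0},
             {alpha[-1], alpha[0], alpha[2]}):
assign(sol):
alpha[-1], alpha[0], alpha[2];
#   -2/3, 1/2, 1/6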

 

 

I found in the Application Center a rather old work (2010) titled Generation of correlated random numbers (see here view.aspx).
This work contains a few errors that I thought were worth correcting.

Basically, the works I refer to concern the sampling of linearly correlated random variables (correlation in the Pearson sense). Classical textbooks generally discuss this topic by considering only gaussian random variables, and present two methods to generate linearly correlated samples: one based on the Cholesky decomposition of the correlation matrix, the other based on its SVD decomposition.

Now the question is: can we apply either of these two procedures to generate linearly correlated samples of arbitrary random variables?
The answer is NO, and the reason is strongly related to a fundamental property of gaussian random variables (GRVs): any linear combination of GRVs is still a GRV.
But things are not that simple, because even the multi-gaussian case handled with the Cholesky decomposition or the SVD can lead to undesired results if no precautions are taken.

The aim of this post is to show the wrong results we obtain by thoughtlessly applying these decompositions and, of course, to show how we must proceed to avoid them.

Let's start with a very simple point of common sense: suppose U1 and U2 are two independent identically distributed (iid) random variables, and that we have some "function" F which, when applied to the couple (U1, U2), generates a couple (A1, A2) of linearly correlated random variables. Thus F(U1, U2) = (A1, A2).
Let's suppose this same relation holds if we replace U1 and U2 by "a sample of U1" and "a sample of U2", and thus (A1, A2) by "a sample of the bivariate (A1, A2) whose components are linearly correlated". Let's call S the cloud one could obtain by using, for instance, the ScatterPlot(A1, A2) procedure of Maple.

Let's suppose now that instead of computing F(U1, U2) I decide to compute F(U2, U1). Let's call (A1', A2') the corresponding joint sample and write S' := ScatterPlot(A2', A1').
It seems natural (and it is!) to think that S and S' will be the same up to sampling artifacts.

Any correct method to generate samples from (linearly or not) correlated random variables must exhibit this similarity of patterns between S and S'. But this is not the case in the work cited above (view.aspx).

The safest way to correlate random variables, even in the Pearson sense, is to use the concept of COPULAS (there is a work on copulas in the Application Center, but for a quick overview see here Copula_(probability_theory)).
For this special case of linear correlation one can use copulas without knowing it, and this is very simple: as soon as the procedure F introduced above gives correct results when U1 and U2 are standard GRVs,

  • take any couple (R1, R2) of arbitrary random variables,
  • build a map M : (R1, R2) --> (U1, U2),
  • generate (A1, A2) = F(U1, U2),
  • compute M^(-1)(A1, A2).


What is the point of correcting a work that is 10 years old?
A very simple answer is that the Cholesky decomposition (or SVD) is still the emblematic method for linearly correlating random variables. It is the only one presented in textbooks, the only one most students have been taught (unless they have had an extensive background in probability or statistics), and thus a systematic source of wrong results that users are not even aware of.


Next point: it is well known that the Pearson correlation cannot be lower than -1 or higher than +1, but it is a common mistake to think that any value between -1 and +1 can be reached.
This is guaranteed for GRVs, but not for some other random variables.
For a classical counter-example see 04_correlation_2016_cost_symposium_fkuo_tagged.pdf

The notation used in the attached file is mainly that of the initial work view.aspx.

restart:


This work is aimed to correct the procedure used in  https://fr.maplesoft.com/applications/view.aspx?SID=99806
to correlate arbitrary random variables in the (common) Pearson's sense.

with(LinearAlgebra):
with(plots):
with(Statistics):

 

GAUSSIAN RANDOM VARIABLES

 

# First example: both A1 and A2 are centered gaussian random variables
#                The order we use (A1 next A2 or A2 next A1) to define Ma doesn't matter

Y   := RandomVariable(Normal(0, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Second example : both A1 and A2 are non-centered gaussian random variables with equal standard deviations.
#                  The order we use to define Ma does matter

Y   := RandomVariable(Normal(1, .25)):
rho := .9:
Q   := 10^4:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

 

# Second example corrected: to avoid order's dependency proceed this way
#    1/ center A1 and A2
#    2/ correlate the now centered rvs
#    3/ uncenter the couple of correlated rvs


C1  := convert(Scale(A1, scale=Mean), Vector[row]):
C2  := convert(Scale(A2, scale=Mean), Vector[row]):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)+~Mean(A1), Column(Rs2, 2)+~Mean(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)+~Mean(A1), Column(Rs2, 1)+~Mean(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example : both A1 and A2 are centered gaussian random variables with unequal standard deviations.
#                 The order we use to define Ma does matter

rho := .9:
Q   := 10^4:
A1  := Sample(Normal(0, 1), Q):
A2  := Sample(Normal(0, 2), Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# Third example corrected: to avoid order's dependency proceed this way
#    1/ scale A1 and A2
#    2/ correlate the now scaled rvs
#    3/ unscale the couple of correlated rvs


C1  := A1 /~ StandardDeviation(A1):
C2  := A2 /~ StandardDeviation(A2):

Ma  := `<,>`(`<,>`(C1), `<,>`(C2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


A1A2 := ScatterPlot(Column(Rs2, 1)*~StandardDeviation(A1), Column(Rs2, 2)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(C2), `<,>`(C1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2)*~StandardDeviation(A1), Column(Rs2, 1)*~StandardDeviation(A2), title = "Correlated Normal RV", opts, color=red):

display(A1A2, A2A1);

 

# More generally: to avoid order's dependency proceed this way
#    1/ transform A1 and A2 into standard gaussian random variables (mean and standard deviation scalings)
#    2/ correlate the now scaled rvs
#    3/ unscale the couple of correlated rvs

 

A MORE COMPLEX EXAMPLE:

NON GAUSSIAN RANDOM VARIABLES
(here two LogNormal rvs)

 

 

# Preliminary
#   the expectation (mean) of a LogNormal rv cannot be 0;
#   as a consequence it is expected that the order used to build Ma will matter
#
# Proceed as Igor Hlivka did

 

Y   := RandomVariable(LogNormal(.5, .25)):
rho := .9:
Q   := 1000:
A1  := Sample(Y, Q):
A2  := Sample(Y, Q):
Ma  := `<,>`(`<,>`(A1), `<,>`(A2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

ScatterPlot(A1, A2, color = red, title = ["Raw LogNormal RV", font = [TIMES, BOLD, 12]]):
A1A2 := ScatterPlot(Column(Rs2, 1), Column(Rs2, 2), title = "Correlated LogNormal RV", opts, color=blue):

# And now change, as usual, the order in Ma

Ma  := `<,>`(`<,>`(A2), `<,>`(A1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

A2A1 := ScatterPlot(Column(Rs2, 2), Column(Rs2, 1), title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

# How can we prevent the order used to assemble Ma from mattering?
#
# A close examination of what was done with gaussian rvs shows that in all cases we
# went back to standard gaussian rvs before correlating them.
# So let's just do the same thing here.
#
# Of course it's not as immediate as previously...
# (please do not focus on the slowness of the code; it is written to clearly explain
# what is done, not to be fast)



#-------------------------------------- from Y to standard gaussian
G  := RandomVariable(Normal(0, 1)):
G1 := Vector[row](Q, q -> Quantile(G, Probability(Y > A1[q], numeric), numeric)):
G2 := Vector[row](Q, q -> Quantile(G, Probability(Y > A2[q], numeric), numeric)):
# could be replaced by this faster code
#   cdf_Y := unapply(CDF(Y, z), z) assuming z > 0;
#   cdf_G := unapply(CDF(G, z), z);
#   S1    := sort(A1):
#   ini   := -10:
#   V     := Vector[row](Q):
#   for q from 1 to Q do
#     V[q] := fsolve(cdf_G(z)=cdf_Y(S1[q]), z=ini);
#     ini  := V[q]:
#   end do:
#------------------------------------------------------------------

Ma  := `<,>`(`<,>`(G1), `<,>`(G2)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:


opts := titlefont = [TIMES, BOLD, 12], symbol=point, transparency=0.5:


#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------
A1A2 := ScatterPlot(C1, C2, title = "Correlated LogNormal RV", opts, color=blue):



Ma  := `<,>`(`<,>`(G2), `<,>`(G1)):
MA  := Transpose(Ma):
Cor := Matrix([[1, rho], [rho, 1]]):
Cd2 := LUDecomposition(Cor, method = 'Cholesky', output = ['U']):
Rs2 := MA . Cd2:

#-------------------------------------- from standard gaussian to Y
C1 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 1], numeric), numeric)):
C2 := Vector[row](Q, q -> Quantile(Y, Probability(G > Rs2[q, 2], numeric), numeric)):
#------------------------------------------------------------------

A2A1 := ScatterPlot(C2, C1, title = "Correlated LogNormal RV", opts, color=red):

display(A1A2, A2A1);

 

 

CONCLUSION: Be extremely careful when correlating non-standard gaussian random variables, and more generally non-gaussian random variables.


Correlating rvs the way Igor Hlivka did can be recast in the more general framework of COPULA THEORY.

Mathematically, a bidimensional copula C is a function from [0, 1] x [0, 1] to [0, 1] which is the joint CDF of a bivariate random variable whose two marginals are uniform on [0, 1].
See for instance https://en.wikipedia.org/wiki/Copula_(probability_theory)
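For reference (a standard textbook fact, not stated in the original worksheet), the gaussian copula used implicitly here can be written, in the same plain notation as above, as

C_rho(u1, u2) = Phi_rho(Phi^(-1)(u1), Phi^(-1)(u2)),

where Phi is the CDF of a standard GRV and Phi_rho is the joint CDF of a bivariate gaussian with correlation rho; the Quantile and Probability calls discussed next play exactly the roles of Phi^(-1) and Phi.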

What I did here to "correlate" A1 and A2 was nothing but applying, step by step, a GAUSSIAN COPULA to the bivariate (A1, A2) random variable.
In Quantile(G, Probability(Y > A1[q], numeric), numeric), the inner expression Probability(Y > A1[q], numeric) maps A1 onto [0, 1] (as required in the definition of a copula), while the outer Quantile(G, ..., numeric) is the copula step itself (once the same operation has been done on A2).

 

 


 

Download LInear_Correlated_Random_Variables.mw

Why am I not able to replace O(h^3), and in what other way can I achieve the result? (I want to get the coefficients of the f(x), D(f)(x), and (D@@2)(f)(x) terms.)

Q(h);
#   (1/h)*( alpha[-2]*(f(x) - 2*D(f)(x)*h + 2*(D@@2)(f)(x)*h^2 + O(h^3))
#           + alpha[0]*f(x)
#           + alpha[3]*(f(x) + 3*D(f)(x)*h + (9/2)*(D@@2)(f)(x)*h^2 + O(h^3)) )

subs(f(x) = 1, D(f)(x) = 0, (D@@2)(f)(x) = 0, O(h^3) = 0, collect(Q(h), f(x)));
#   alpha[0]/h + (alpha[-2]*(1 + O(h^3)) + alpha[3]*(1 + O(h^3)))/h
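A possible workaround (a sketch; subsindets rewrites every subexpression of a given type, so the literal form of the O terms no longer matters):

# Sketch: structurally remove every O(...) call, then collect and
# read off the coefficients of f(x), D(f)(x), (D@@2)(f)(x)
Qc := collect(expand(subsindets(Q(h), 'specfunc(anything, O)', u -> 0)),
              [f(x), D(f)(x), (D@@2)(f)(x)]):
coeff(Qc, f(x)), coeff(Qc, D(f)(x)), coeff(Qc, (D@@2)(f)(x));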

 

How do I combine Greek letters and Latin letters in variable names? E.g., here I want to combine the Greek letter Delta and the Latin letter x to be used as one variable name:


Delta*x;
#   Delta x

Deltax;
#   Deltax
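One possibility (a sketch; I am relying on back-quoted names containing an HTML entity such as &Delta; being typeset as the corresponding symbol in a worksheet, which should be verified in your Maple version):

# Sketch: a single name intended to typeset as the Greek Delta followed by x
`&Delta;x` := 3;
2*`&Delta;x`;
#   6   (the name displays with a Delta glyph in 2-D output)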

 

 

The original question (crossed through below) was too vague, so I have tried to clarify it:

I have the following expression:
Q(h);
#   (1/h)*( -(2/3)*f(x) - D(f)(x)*h + (1/2)*(D@@2)(f)(x)*h^2 + O(h^3) + (1/2)*f(x)
#           + (1/6)*f(x) + 2*D(f)(x)*h + 2*(D@@2)(f)(x)*h^2 + O(h^3) )

It doesn't expand the following way:
expand(Q(h));
#   -2*(f(x) - D(f)(x)*h + (1/2)*(D@@2)(f)(x)*h^2 + O(h^3))/(3*h) + f(x)/(2*h)
#    + (f(x) + 2*D(f)(x)*h + 2*(D@@2)(f)(x)*h^2 + O(h^3))/(6*h)

But it does expand when I add the multiplication symbols manually:
expand(2*(f(x) - D(f)(x)*h + 1/2*(D@@2)(f)(x)*h^2 + O(h^3))/(3*h) + f(x)/(2*h) + (f(x) + 2*D(f)(x)*h + 2*(D@@2)(f)(x)*h^2 + O(h^3))/(6*h));
#   (4/3)*f(x)/h - (1/3)*D(f)(x) + (2/3)*h*(D@@2)(f)(x) + (5/6)*O(h^3)/h

How can I expand my output without having to add the multiplication symbols?
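A diagnostic sketch (my assumption: in 2-D input an expression such as 2(...) can be parsed as function application rather than multiplication, which would explain this behaviour): lprint shows the exact 1-D form of Q(h), making such parsing accidents visible:

# Sketch: display the true internal (1-D) form of Q(h)
lprint(Q(h));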


------------------------------------old question below, please disregard----------------------






I found there is an assume=[...] option, so I changed my code to use it instead of ... assuming ....

Now I find that some tests fail. Should assuming and assume=[...] not give the same result?

restart;
sol := y(x)^(1/2) = -1 + 2*exp(x):
ode := diff(y(x), x) - 2*y(x) = 2*y(x)^(1/2):

odetest(sol, ode) assuming x::real, x > 0;

gives 0, but

odetest(sol, ode, assume = [x::real, x > 0]);

does not give zero. Please see the screenshot below.

The reason I changed is that I started saving assumptions in a list, so it was easier to use assume=[...] than assuming: assuming wants an expression sequence after it, and I did not know how to convert my collected list of assumptions to an expression sequence on the fly. Otherwise I would not have changed the code.

But my question is: Should both give same result? Why is the result different?

Help says

The assume routine sets variable properties and relationships between variables. Similar functionality is provided by the assuming command.

I am probably not using assume= correctly. I thought I could pass the same conditions I had used with assuming, just put into a list.
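As an aside, one pattern sometimes shown for feeding a stored list of assumptions to assuming (a sketch; I am treating the exact calling sequence as an assumption to be checked rather than documented fact) is the functional form of `assuming`, which takes the expression and the assumptions as lists:

# Sketch: call `assuming` in functional form with a saved list of assumptions
conds := [x::real, x > 0]:
`assuming`([odetest(sol, ode)], conds);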
 

This happens each time I run a long loop (2,500 iterations, which takes about 3 hrs to complete).

Maple always hangs (it does not time out on odetest()). But my question is not about that, as it is something I have had to deal with for a long time now and have mentioned many times before. (Maybe one day Maplesoft will fix it.)

But I noticed this as well: when Maple hangs (and it always hangs at least once during this loop), I click on the "interrupt the current operation" button, and this does stop the hang.

Next I do a restart and start the loop from the counter where it hung, in order to continue.

But it still hangs at that same iteration. I repeat this, and it still hangs.

Now I close Maple altogether, start Maple again, open the same worksheet, and repeat from the same iteration as before; now it does not hang.

This tells me that restart and "interrupt the current operation" do not clean everything as expected. Why else would only restarting Maple make it work?

It means mserver.exe (a separate process from the front end) is still caching something, and that is why it hangs at that iteration.

I can reproduce this each time I run the whole loop from the start.

I can't make a minimal example, since I have no idea where it hangs and why, and it is related to running a long loop.

I just know it hangs when doing odetest() with timeout, which never times out, and it seems random at which iteration it decides to hang.

But my question is really basic: does mserver.exe keep any information about the earlier user session/worksheet even after restart? Help says that restart clears the internal memory of the kernel.

Isn't mserver.exe the Maple kernel? If so, what could explain that only restarting Maple clears the hang? I am just looking for ideas that could explain this.

This type of problem is the most annoying thing about Maple for me.

Maple 2020.1 on Windows 10.

 

Whoever designed Maple's user interface thinks very differently from me. How can I insert a second section into Chapter 1? There's no space between the end of Section 1 and the end of Chapter 1 for me to put the cursor, so where should I put the cursor, and what should I do?


The Eigenvectors result changes every time it is run.

How can I make the eigenvector results the same on every run?
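If the differences are only in scale or sign (an eigenvector is determined only up to a nonzero scalar factor), one sketch of a remedy is to normalize each returned eigenvector to a canonical form; the Matrix A below is a stand-in example, not from the original post:

# Sketch: post-process Eigenvectors output into a reproducible form
with(LinearAlgebra):
A := Matrix([[2., 1.], [1., 2.]]):            # stand-in example matrix
vals, vecs := Eigenvectors(A):
# canonical form: unit 2-norm, first entry with nonnegative real part
for j to ColumnDimension(vecs) do
    v := Column(vecs, j);
    v := v / Norm(v, 2);
    if Re(v[1]) < 0 then v := -v end if;
    vecs[1 .. -1, j] := v;
end do:
vals, vecs;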

 
