acer


MaplePrimes Activity


These are answers submitted by acer

seq(`&#`||k||`;`,k=945..969);

`α`, `β`, `γ`, `δ`, `ϵ`, `ζ`, `η`, `θ`, `ι`, `κ`, `λ`, `μ`, `ν`, `ξ`, `ο`, `π`, `ρ`, `ς`, `σ`, `τ`, `υ`, `φ`, `χ`, `ψ`, `ω`

Download ent_seq.mw

Those are just Maple names; they line-print as the numeric (decimal) character codes and pretty-print (2D Output) as the typeset lowercase Greek letters.

For example, the Maple name `α` gets typeset in 2D Output as the symbol for the Greek letter alpha.

You don't need to wrap them in any MathML-style markup (e.g. mo(...), mi(...)) for that basic effect.

ps. It's not entirely clear what you want to accomplish; perhaps you're trying to do something different. Are you looking for a way to programmatically generate the literal names alpha, beta, ..., omega, written out in the Latin alphabet?
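If so, here is a minimal sketch (an assumption on my part about the goal): simply listing the Latin-spelled names, which also happen to pretty-print as the Greek symbols in 2D Output.

greeks := [alpha, beta, gamma, delta, epsilon, zeta, eta, theta, iota,
           kappa, lambda, mu, nu, xi, omicron, pi, rho, sigma, tau,
           upsilon, phi, chi, psi, omega]:  # note: gamma is also Maple's Euler constant

lprint(greeks);  # line-prints the Latin spellings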

 

restart;

 

is_symbol_inside_func_only:=
  (e,f,y)->type(subsindets(e,specfunc(Not(freeof(y)),f),freeze),freeof(y)):
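# The idea: freeze every call to f whose arguments involve y, then test
# whether any occurrence of y remains elsewhere in the expression.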

 

expr:=3*ln(1+y)+ln(3*y)*y+ln(y)+cos(7*y):
is_symbol_inside_func_only(expr,ln,y); #should return false

expr:=3*ln(1+y)+ln(3*y):
is_symbol_inside_func_only(expr,ln,y); #should return true

expr:=ln(y)+ln(3*y)+cos(y):
is_symbol_inside_func_only(expr,ln,y); #should return false


expr:=3+cos(y):
is_symbol_inside_func_only(expr,cos,y); #should return true

expr:=y+ln(y):
is_symbol_inside_func_only(expr,ln,y); #should return false

false

true

false

true

false

Download type_chk_ex.mw

That is the behavior of evalb, which is the default mechanism for if..then conditional testing. (The evalb Help page describes the kinds of equivalence it tests.)

For such a polynomial comparison you have a few alternatives for obtaining a mathematical equivalence test of the kind needed here, including expanding (or putting the difference into normal form) before comparing, or using is or testeq.

evalb( (x+1)^2 = x^2+2*x+1 );

false

Any of the tests at the end could be used here instead.

(You wouldn't need the wrapping evalb calls, though,
 since that's automatically part of the if..then check.)

if expand( (x+1)^2=x^2+2*x+1 ) then
   print(1);
else
   print(0);
end if;

1

evalb( expand( (x+1)^2 = x^2+2*x+1 ) );

true

evalb( normal( (x+1)^2 - (x^2+2*x+1) = 0 ) );

true

testeq( (x+1)^2=x^2+2*x+1 );

true

is( (x+1)^2=x^2+2*x+1 );

true

Download evalb_et_alia.mw

In general, some such mathematical tests for equivalence are potentially more expensive than others; the user gets to decide which to use.
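For a rough illustration (my own example; actual timings will vary), one could compare a couple of these tests on a larger polynomial:

p := randpoly(x, degree = 50):
q := expand((x + 1)*p):
CodeTools:-Usage( evalb( normal( (x + 1)*p - q ) = 0 ) );
CodeTools:-Usage( is( (x + 1)*p = q ) );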

In Maple 2021 and later the form you want is produced directly. (You haven't told us what older version you are using.)

Here are three different ways to get that integration result.

restart;

kernelopts(version); # later versions get it directly

`Maple 2020.2, X86 64 LINUX, Nov 11 2020, Build ID 1502365`

H:=Int(x^17*cos(x^6),x):

expand(value(IntegrationTools:-Parts(H,x^6)));

(1/3)*x^6*cos(x^6)+(1/6)*x^12*sin(x^6)-(1/3)*sin(x^6)

expand(convert(int(x^17*cos(x^6),x=0..x),trig));

(1/3)*x^6*cos(x^6)+(1/6)*x^12*sin(x^6)-(1/3)*sin(x^6)

int(x^17*cos(x^6),x=0..x,method=meijerg);

(1/3)*x^6*cos(x^6)+(1/6)*x^12*sin(x^6)-(1/3)*sin(x^6)

Download int_old_ex.mw

ps. In Maple 2021 and later the result,
   1/3*x^6*cos(x^6)+1/6*(x^12-2)*sin(x^6)
is produced by default (no method) and by method=risch, and a close trig form by method=meijerg. In Maple 2020 and earlier the method option was not supported for indefinite integration (hence my 3rd workaround above).
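As a quick sanity check (not part of the original question), any of those antiderivatives can be verified by differentiation:

res := (1/3)*x^6*cos(x^6) + (1/6)*x^12*sin(x^6) - (1/3)*sin(x^6):
simplify( diff(res, x) - x^17*cos(x^6) );  # 0 confirms the antiderivative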

You appear to have made two kinds of mistake.

The first is that it looks like you've forgotten to put either an explicit multiplication symbol or a space (to denote multiplication implicitly) between the name a and the left open-parenthesis.

The result is that you've entered a(x-a), which is a function call of a (an as-yet-undefined operator). You haven't used any syntax for a*(x-a), the product of a and x-a.

The second mistake is that you subtracted instead of added.

restart

expr := (x-a)^2-2*a(x-a)

(x-a)^2-2*a(x-a)

lprint(expr)

(x-a)^2-2*a(x-a)

indets(expr, function)

{a(x-a)}

new := (x-a)^2-2*a*(x-a)

(x-a)^2-2*a*(x-a)

lprint(new)

(x-a)^2-2*a*(x-a)

indets(new, function)

{}

expand(new)

3*a^2-4*a*x+x^2

intended := (x-a)^2+2*a*(x-a)

(x-a)^2+2*a*(x-a)

expand(intended)

-a^2+x^2

Download 2d_mul_ex.mw

There are several things you can do with that in Maple. For example,

restart;

ee := Sum((-1)^r*x^r, r=0..N);

Sum((-1)^r*x^r, r = 0 .. N)

value(ee);
eval(%, N=4);
sort(normal(%), x, ascending);

-(-x)^(N+1)/(x+1)+1/(x+1)

x^5/(x+1)+1/(x+1)

1-x+x^2-x^3+x^4

add((-1)^r*x^r, r=0..4);

1-x+x^2-x^3+x^4

limit(ee, N=infinity) assuming x<1, x>-1;

1/(x+1)

convert(1/(x+1), FPS);

Sum((-1)^k*x^k, k = 0 .. infinity)

series(1/(x+1), x);

series(1-x+x^2-x^3+x^4-x^5+O(x^6),x,6)

Download sum_ex.mw

ps. I converted your Post into a Question.

The following code shows how you can programmatically turn equations into such scalar expressions, or vice versa.

It also lets you do that if you have a mix of both, and it works just as well if everything is of one type or the other.

It's a simple task, so the code to handle it ought to be simple. I've tried to make this code straightforward, so that a Maple beginner could learn a bit from it; I deliberately didn't make it as terse as possible.

restart;

ex1 := a*x^2 + b*x = v:

ex2 := c*x^2 + d*x - w:


Suppose you have a mixed collection -- some equalities, some scalar expressions.

L := [ex1, ex2];

[a*x^2+b*x = v, c*x^2+d*x-w]

Turn all the equalities into scalar expressions.

map(expr -> ifelse( expr::`=`,
                    (lhs-rhs)(expr),
                    expr ),
    L);

[a*x^2+b*x-v, c*x^2+d*x-w]

Turn all the scalar expressions into equalities.

map(expr -> ifelse( expr::`=`,
                    expr,
                    expr=0 ),
    L);

[a*x^2+b*x = v, c*x^2+d*x-w = 0]


And now, a brief note on the syntax for invoking type:

type(ex1, `=`);
evalb( ex1::`=` ); # shorthand that works in conditionals

true

true

Download type_equals.mw

The following is a deliberately glossed-over simplification.

Personal computers that one could purchase today have CPUs with associated hardware (circuits) specialized for the task of performing arithmetic on floating-point numbers stored/encoded in a mere 64 bits (double precision) or 32 bits (single precision) of memory.

Such so-called "hardware floats" are restricted in range, both in the number of decimal digits of the mantissa (roughly 15 for double precision) and in the exponent.

If your machine's Operating System is made aware via some program (eg. compiled C, Maple, etc) of such floats in memory then it can make calls to have the relevant hardware chip perform such "direct" arithmetic operations. The hardware chips of personal computers made today can do billions of such operations per second (provided the program and OS can supply them that fast).

The essence of this speed is due to the numbers being stored in (64, or 32) bits that are encoded in a form that the hardware arithmetic unit understands directly, and the fact that the range in which these hardware floats exist is quite restricted. The restricted possible range of these floats allows for adequately accurate yet very fast dedicated circuitry or algorithms to be possible.

But Maple can also handle floating-point numbers from a much wider range -- with many more possible decimal digits, and much smaller/greater exponent. The floating-point arithmetic on such so-called "software float" numbers is emulated by the software program, and the numbers are stored in another kind of specialized encoding, ie. a more involved, higher level, dedicated data structure. These operations are considerably slower than those so-called hardware float operations mentioned earlier. Such software floats also require additional overhead (eg. memory-management).

Maple is an interpreted language, and the Maple program has a core interpreter that performs symbolic computations as well as performs/dispatches integer and software-float computations. Maple also has an alternate interpreter, available as the evalhf command, which can do float arithmetic more quickly because it uses the very compact hardware float representation and the mentioned low-level computations that the machine's chips can perform directly. Sometimes the relative performance benefit is as high as a factor of 15-30. The evalhf benefit is mostly restricted to a relatively smaller assortment of numeric operations, however.

Throughout the above, I've just written "arithmetic". But the situation is similar for elementary operations such as sqrt, sin, exp, etc. Either the Operating System's runtime math library or the hardware chipset offers dedicated mechanisms to compute those operations accurately and quickly for hardware floats. Computing those for arbitrary-precision software floats is computationally more expensive.

There are a couple of other ways to get some speed benefits of hardware float computation, from within Maple. Eg. the Compiler, and option hfloat on a procedure. (See also these two articles [1], [2], now old.)
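As a rough illustration (my own example; the relative gain varies with the operations involved), here is the same purely numeric procedure evaluated under evalhf and under the regular software-float interpreter:

tst := proc(n) local s, i;
         s := 0.0;
         for i from 1 to n do s := s + sin(i*1.0) end do;
         s;
       end proc:
CodeTools:-Usage( evalhf( tst(10^5) ) );
CodeTools:-Usage( tst(10^5) );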

There are cleverer mathematical ways to determine whether there are real roots, but the following is (deliberately) a minor adjustment to your existing code.

S5AléatoireParaboleSommetGraphe_ac.mw
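For reference, here is a sketch of one such cleverer test. It assumes the parabola is written as a*x^2 + b*x + c with a <> 0 (which may not match the attached worksheet's exact setup); real roots exist precisely when the discriminant is nonnegative.

has_real_roots := (a, b, c) -> is( b^2 - 4*a*c >= 0 ):
has_real_roots(1, -5, 6);  # true: two real roots
has_real_roots(1, 2, 3);   # false: no real roots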

The cited solution holds for complex x and y, including the case where either is real and negative.

restart;

eq:=Z^2=y/x;

Z^2 = y/x

sols := [solve(eq,{Z})];

[{Z = (x*y)^(1/2)/x}, {Z = -(x*y)^(1/2)/x}]

eval~(eq,sols);

[y/x = y/x, y/x = y/x]

Or, perhaps,

[allvalues(solve({eq},Z,explicit=false))];

[{Z = (y/x)^(1/2)}, {Z = -(y/x)^(1/2)}]

eval~(eq,%);

[y/x = y/x, y/x = y/x]

Download solve_res.mw

[edit] In general, if for some reason you wanted to restrict the solution to x>0, y>0 then you could impose those as extra conditions. Here, fortunately, the desired formulation is then obtained. (These steps are deliberately split.)

restart;

eq:=Z^2=y/x;

Z^2 = y/x

conds := x>0,y>0

0 < x, 0 < y

temp := solve({eq,conds},Z);

temp := piecewise(0 < x and 0 < y, [{Z = sqrt(y)/sqrt(x)}, {Z = -sqrt(y)/sqrt(x)}], [])

temp assuming conds;

[{Z = y^(1/2)/x^(1/2)}, {Z = -y^(1/2)/x^(1/2)}]

Download solve_ex_conds.mw

The lists [ [1], [1,1], [2], [1,2], [2,2] ] could be padded with 0 on the right, so that each has two elements.

Then the list of D terms could be reindexed (sorted) using the permutation obtained from sorting that list of padded lists.

Then that newly sorted list of D terms could be used to sort the original expression with a plex ordering.

restart;

Vars    := [seq(U[i], i=1..2)]:

AtPoint := [seq(P[i], i=1..2)]:

mt      := mtaylor(f(Vars[]), Vars =~ AtPoint, 3);

f(P[1], P[2])+(D[1](f))(P[1], P[2])*(U[1]-P[1])+(D[2](f))(P[1], P[2])*(U[2]-P[2])+(1/2)*(D[1, 1](f))(P[1], P[2])*(U[1]-P[1])^2+(D[1, 2](f))(P[1], P[2])*(U[1]-P[1])*(U[2]-P[2])+(1/2)*(D[2, 2](f))(P[1], P[2])*(U[2]-P[2])^2

K := map(t -> select(has, [op(t)], D)[], select(has,[op(mt)],D));

[(D[1](f))(P[1], P[2]), (D[2](f))(P[1], P[2]), (D[1, 1](f))(P[1], P[2]), (D[1, 2](f))(P[1], P[2]), (D[2, 2](f))(P[1], P[2])]

S1 := K[sort(map(t->[op([0,0,..],t),0][..2], K),output=permutation)];

[(D[1](f))(P[1], P[2]), (D[1, 1](f))(P[1], P[2]), (D[1, 2](f))(P[1], P[2]), (D[2](f))(P[1], P[2]), (D[2, 2](f))(P[1], P[2])]


other terms at end

sort(mt, order=plex(S1[]));

(U[1]-P[1])*(D[1](f))(P[1], P[2])+(1/2)*(U[1]-P[1])^2*(D[1, 1](f))(P[1], P[2])+(U[1]-P[1])*(U[2]-P[2])*(D[1, 2](f))(P[1], P[2])+(U[2]-P[2])*(D[2](f))(P[1], P[2])+(1/2)*(U[2]-P[2])^2*(D[2, 2](f))(P[1], P[2])+f(P[1], P[2])


other terms at front

sort(mt, order=plex(ListTools:-Reverse(S1)[]), ascending);

f(P[1], P[2])+(U[1]-P[1])*(D[1](f))(P[1], P[2])+(1/2)*(U[1]-P[1])^2*(D[1, 1](f))(P[1], P[2])+(U[1]-P[1])*(U[2]-P[2])*(D[1, 2](f))(P[1], P[2])+(U[2]-P[2])*(D[2](f))(P[1], P[2])+(1/2)*(U[2]-P[2])^2*(D[2, 2](f))(P[1], P[2])


Download sort_D.mw

There is a large Section of the Help system, entitled "Applications and Example Worksheets". It is organized mainly by subject, and its worksheets often use multiple commands to accomplish more involved computations (differential equations, statistics, plotting, etc, etc). This is quite different from the kinds of example usually seen in any single command's Help-page.

Also, if you're new to Maple then something to consider is what kind of environment you want.

There is marked-up 2D Input versus plaintext 1D Maple Notation input mode. A Help-page's Examples Section can be toggled from one to the other using an icon in a menu-bar at top.

The defaults for Maple include 2D Input mode and Documents (as opposed to 1D input and Worksheets), and those aspects can both be changed and set as new preferences.

The User Manual has a great deal of 2D Input in Worksheets. The input is marked-up math, entered using left panel palettes or command-completion. It also has many basic examples that are solved using the right context-panel menus. It also has a great deal of Equation-label style of reference to previous statements. It also points users at Tutors and Assistants (popup solvers). Some people love all those convenient and powerful things, and some people gladly get by (easily) without any of them.

It's difficult to describe just how very different two people's work can look. You should probably try out the alternative modes, so that you don't miss out on something that you might prefer.

In my opinion the User Manual has quite a lot of focus on approaches and functionality and convenience that would be useful to a student.

A decent, inexpensive 3rd-party book that uses only 1D input (my own preference, in a Worksheet) is Understanding Maple by Ian Thompson. Perhaps it's fair to suggest that this focuses more on programming, and could be useful for scientific computation work in Maple.

You have attempted to use the syntax for plotting solutions from pdsolve(...,numeric), but your solutions are produced by dsolve(...,numeric), which is handled differently.

Also, your solutions contain f1(eta), but no f(eta).
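For reference, plotting a dsolve(...,numeric) result is usually done with plots:-odeplot. A generic sketch (using a made-up ODE and conditions, not your actual system):

sol := dsolve({diff(f1(eta), eta, eta) = -f1(eta),
               f1(0) = 0, D(f1)(0) = 1}, numeric):
plots:-odeplot(sol, [eta, f1(eta)], eta = 0 .. 5);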

sachi_stream_error_3d_ac.mw

Your code attempts to find the general symbolic formula for the result, which seems like a bad decision.

Instead, you might construct a re-usable procedure which does the linear-algebra computations as a purely numeric computation for each numeric input value of z and v.

By the way, did you really intend that the first three columns of P3 and P4 be all zero? That doesn't seem to make doing the eigen-solving very interesting... Perhaps you might check your earlier formulas and methodology?

The numeric linear-algebra could be made even faster, but it might not be worthwhile unless you could first confirm that the formulas and overall approach are correct.

restart;

with(LinearAlgebra):
with(plots):

omega := v/h:
t := a[0]+a[1]*x +a[2]*x^2 +a[3]*sinh(omega*x)+a[4]*cosh(omega*x)+a[5]*sin(omega*x)+a[6]*cos(omega*x):
r := diff(t, x): R:=diff(t,x$3):
b1 := eval(t, x = q+3*h) = y[n+3]:
b2 := eval(r, x = q) = f[n]:
b3 := eval(r, x = q+h) = f[n+1]:
b4 := eval(r, x = q+2*h) = f[n+2]:
b5 := eval(r, x = q+3*h) = f[n+3]:
b6 := eval(r, x = q+4*h) = f[n+4]:
b7 := eval(R, x = q+4*h) = g[n+4]:
c := seq(a[i], i = 0 .. 6):
k := solve({b || (1 .. 7)}, {c}):
l := assign(k):
Cf := t: m := diff(Cf, x$3):
S4 := y[n+4] = combine(simplify(expand(eval(Cf, x = q+4*h)), size),trig):
collect(S4, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S3 := y[n+2] = simplify(expand(eval(Cf, x = q+2*h)), size):
collect(S3, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S2 := y[n+1] = simplify(expand(eval(Cf, x = q+h)), size):
collect(S2, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):
S1 := y[n] = simplify(expand(eval(Cf, x = q)),size):
collect(S1, [y[n+3], f[n], f[n+1], f[n+2],f[n+3],f[n+4], g[n+4]]):

h := 1:
YN_1 := seq(y[n+k], k = 1 .. 4):
A1, a1 := GenerateMatrix([S || (1 .. 4)], [YN_1]):
YN := seq(y[n-k], k = 3 .. 0, -1):
A2, a2 := GenerateMatrix([S || (1 .. 4)], [YN]):
FN_1 := seq(f[n+k], k = 1 .. 4):
A3, a3 := GenerateMatrix([S || (1 .. 4)], [FN_1]):
FN := seq(f[n-k], k = 3 .. 0, -1):
A4, a4 := GenerateMatrix([S || (1 .. 4)], [FN]):
GN_1 := seq(g[n+k], k = 1 .. 4):
A5, a5 := GenerateMatrix([S || (1 .. 4)], [GN_1]):

P1 := A1-ScalarMultiply(A3, z)-ScalarMultiply(A5, z^3):
P3 := A2+ScalarMultiply(A4, z):

funcP5M := proc(z,v,i::posint) local P1,P3,P4;
  if not [z,v]::list(numeric) then return 'procname'(args); end if;
  P1 := evalf(eval(:-P1,[:-z=z,:-v=v]));
  P3 := evalf(eval(:-P3,[:-z=z,:-v=v]));
  P4 := LinearSolve(P1, P3): #print(P4);
  LinearAlgebra:-Eigenvalues(P4)[i];
end proc:

funcP5M(15,-6,4);

-.102336220176213+0.*I

CodeTools:-Usage(
  implicitplot(Re(funcP5M(z,v,4)) = 1, z = -15 .. 15, v = -15 .. 15,
               gridrefine = 2, filled, coloring = [blue, gray],
               labels = [z, v], axes = boxed)
);

memory used=4.69GiB, alloc change=-8.00MiB, cpu time=28.99s, real time=24.27s, gc time=7.33s

 

Download RASTDFFAM_ac.mw

I am not aware of any shortcut syntax in Maple for linear-solving that is equivalent to Matlab's A\b.

Computing inv(A)*b is not a good "standard" way to solve a floating-point linear system, and it's not what Matlab does when you enter A\b. One reason for that is that explicitly forming the matrix inverse and then multiplying by it is not generally as accurate.

IIRC, the A\b syntax in Matlab is a shortcut for its mldivide command. And mldivide differs from its linsolve by doing some extra tests (possibly expensive for larger matrices).

If A is square then Matlab's linsolve will do an LU decomposition; otherwise it may use QR (like a least-squares approach).

And that's what I'd recommend for you in Maple too. If A is a square floating-point Matrix then try the LinearSolve command, and otherwise try LeastSquares. But I suggest not using MatrixInverse (or a shortcut like A^(-1)), followed by multiplication.

note. Those Maple commands also have options, e.g. method=LU is the default for LinearSolve in the square float case.
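If you really want an A\b-like one-liner, here is a hypothetical helper (a sketch, not a built-in command; the name mldiv is my own) that dispatches along the lines recommended above:

mldiv := proc(A::Matrix, b::{Vector, Matrix})
  uses LinearAlgebra;
  if RowDimension(A) = ColumnDimension(A) then
    LinearSolve(A, b);    # square: LU-based solve by default
  else
    LeastSquares(A, b);   # rectangular: least-squares solve
  end if;
end proc:

Usage would then be mldiv(A, b); note that it skips the extra special-form checks that Matlab's mldivide performs.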

restart;

with(LinearAlgebra):

 

A := <<1|2|3>,<1|2|2.99>,<2|4.001|6.001>>;

Matrix(3, 3, {(1, 1) = 1, (1, 2) = 2, (1, 3) = 3, (2, 1) = 1, (2, 2) = 2, (2, 3) = 2.99, (3, 1) = 2, (3, 2) = 4.001, (3, 3) = 6.001})

b := RandomVector(3,datatype=float[8]):

sol := LinearSolve(A,b):

 

Now let's also solve using the explicit matrix inverse of A.

 

Ainv := MatrixInverse(A):

altsol := Ainv.b:


Let's compare the forward error of both solutions.

Norm( A.altsol - b );

0.769162511460308451e-10

Norm( A.sol - b );

0.181898940354585648e-11


The LinearSolve solution (done internally using LU) has a
smaller forward error (norm) than does the solution obtained
using the explicit Matrix inverse.

The Matrix A is not very well-conditioned.

ConditionNumber(A);

72024.00198


Let's suppose now that you will eventually get another rhs b to
solve, for the same lhs A.

If you had multiple rhs vectors up front, you could just solve
then all together in one call. It's the scenario in which you only
later get other rhs's that makes it worthwhile to re-use some
computation.

Yes, you could gain efficiency by re-using a precomputed
matrix inverse. But even here that is not necessary -- and hence
is still not better.

The following result consists of a Vector of the pivot details, and
the L and U superimposed (the diagonal of L is implicitly all 1). We
will be able to re-use this factorization step in multiple, separate
linear solvings.

naglu := [LUDecomposition(A, output=NAG)]:


The same answer as before.

LinearSolve(naglu, b):
Norm(% - sol);

0.

b2 := RandomVector(3,datatype=float[8]):


We can now re-use the LU factorization for rhs b2.

x2 := LinearSolve(naglu, b2):

Norm( A.x2 - b2 );

0.139053213388251606e-10


The same kind of re-use can be done using the factorization
from the QRDecomposition command, and passing that to
subsequent LinearSolve calls.

However QRDecomposition might not do column-pivoting.
So, if you want to do it as a least-squares computation (eg. A has
more rows than columns), and you are going to have many
rhs's b which are not all available up front, and you want the best
general accuracy, then you might elect to do an SVD factorization.

Tricky to say more, without knowing the actual situation.

 

Download LSexample.mw

ps. Some of the extra tests that Matlab's mldivide does (ie. more than what linsolve does) involve checks for various special forms (symmetry, etc).

Maple's LinearSolve and LUDecomposition do some of that -- but quickly -- by checking the Matrix for special shape/storage options applied at Matrix-creation time. Scans of the data are not done.
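For example (a small sketch of my own), declaring shape and datatype at Matrix-creation time lets LinearSolve pick a specialized path without scanning the data:

S := Matrix(500, 500, (i, j) -> evalf(1/(i + j)) + `if`(i = j, 1.0, 0.0),
            shape = symmetric, datatype = float[8]):
v := LinearAlgebra:-RandomVector(500, datatype = float[8]):
CodeTools:-Usage( LinearAlgebra:-LinearSolve(S, v) ):  # symmetric storage is known from S itself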
