These are answers submitted by acer

The term abstract linear algebra is quite often used to cover these tasks.

If you enter abstract linear algebra into the MaplePrimes top-bar search field, you can see that it's on several people's wish-lists.

acer

The Statistics[NonlinearFit] command can also do this sort of problem. I believe that it actually calls LSSolve to do the work. So which you prefer to use may be a matter of taste.

# split each data point into its first (X) and second (Y) component
X:=map(t->op(1,t),data):
Y:=map(t->op(2,t),data):

# the model, with fitting parameters A, B, and L
model := ((A*exp((2*Pi*x)/L))/(1+exp((2*Pi*x)/L)))+((B*exp((2*Pi*x)/L))/(1+exp((2*Pi*x)/L))^2):

Statistics[NonlinearFit](model,X,Y,x);

The results I saw were the same as what Robert obtained with LSSolve.
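
If you also want the fitted values of the parameters, and not just the fitted formula, then something like this should work (if I'm remembering the names accepted by the output option correctly),

Statistics[NonlinearFit](model,X,Y,x,
                         output=[leastsquaresfunction,parametervalues]);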

acer

> RootFinding:-NextZero(x->BesselJ(0, x),0);
                                  2.404825558
 
> RootFinding:-NextZero(x->BesselJ(0, x),%);
                                  5.520078110
 
> RootFinding:-NextZero(x->BesselJ(0, x),%);
                                  8.653727913

acer

When you see 0.5e-1*x it means 0.5e-1 times x. And the "e" in 0.5e-1 is scientific notation (base 10). The "e-1" on the end means that it is scaled by 10^(-1).

So, 0.5e-1 = 0.5*10^(-1) = 0.05

On the other hand, the "e" in y=e^0.05 is the base of the natural logarithm, which gets translated to Maple's exponential function exp().
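
For example (the exact display may differ slightly by version and interface),

> 0.5e-1;
                                     0.05
 
> 0.5e-1 * x;
                                    0.05 x
 
> exp(0.05);
                                  1.051271096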

Does that make sense? Can you copy and paste it back from Notepad to Maple and get the correct object once again?

acer

> r := a -> fsolve(a*x^3+x^2-a*x+1, x);
                                       3    2
                   r := a -> fsolve(a x  + x  - a x + 1, x)

> r(4);
                   -1.229095388, 0.2991393984, 0.6799559895

> R := a ->select(t->is(t,real),simplify(evalc([solve(a*x^3+x^2-a*x+1, x)]))):

> R(4);

                          1/2
                    12 237        Pi
[7/6 sin(1/3 arctan(---------) + ----) - 1/12,
                       289        6

                        Pi                                Pi    1/2
    -7/12 sin(1/3 %1 + ----) - 1/12 - 7/12 sin(-1/3 %1 + ----) 3   ,
                        6                                 3

                        Pi                                Pi    1/2
    -7/12 sin(1/3 %1 + ----) - 1/12 + 7/12 sin(-1/3 %1 + ----) 3   ]
                        6                                 3

                 1/2   1/2
             12 3    79
%1 := arctan(-------------)
                  289

> evalf(%);
                  [0.6799559899, -1.229095388, 0.2991393982]



acer

It would help if you could post a small example showing what you're trying to do.

acer

Have a look at the LinearAlgebra:-EigenConditionNumbers help-page.

The EigenConditionNumbers routine already has lots of optional return objects. Maybe it would be nice to have estimated error bounds added as new ones.

If you run EigenConditionNumbers at hardware precision on a real float Matrix, after having assigned infolevel[LinearAlgebra] to be 1 or higher, then you should see that the CLAPACK functions dspevd or dgeevx get used (according to whether or not the Matrix has the symmetric indexing function).

Then check out these links, here and here, which show code fragments relating error bounds to condition numbers.

 Let's look at the second of those links, for the nonsymmetric eigenvalue problem. The (Fortran) code fragment indicates that there are eigenvalue error bounds something like,

   EERRBD(I) = EPSMCH*ABNRM/RCONDE(I)

where EPSMCH is the machine epsilon, RCONDE(I) are the reciprocal condition numbers for the computed eigenvalues, and ABNRM is the computed 1-norm of the balanced Matrix.

Now, that link shows a single-precision example, using sgeevx and slamch. So I suppose that for dgeevx one would want dlamch.

> DLAMCH := define_external('dlamch_',
> xx::string,
> RETURN::float[8],
> LIB=ExternalCalling:-ExternalLibraryName("clapack",'HWFloat')):
> DLAMCH("E");
                                                 -15
                          0.111022302462515654 10

If you want to use balancing with EigenConditionNumbers then ABNRM might be harder to get. You could set up a whole new define_external call for dgeevx to obtain it, or toggle off balancing for EigenConditionNumbers. Getting the right value for ABNRM is the place where I'd want to test my logic.

That's all for hardware precision. For Digits > evalhf(Digits), or with UseHardwareFloats=false, some different "machine epsilon" would get used. It's probably something like 1.0*10^(-Digits).

Cursory and informal experiment indicates that the estimated error bounds can be quite loose. The actual computed results are often quite a bit better.

Let's try the example from that second link,

> A := Matrix([[58,9,2],[186,383,96],[-912,-1551,-388]],datatype=float):

> evals,rconde := LinearAlgebra:-EigenConditionNumbers(A,
> 'balance'='none','output'=['values','conditionvalues']);

                    [49.9999999999994884 + 0. I]  [ 0.135802507121023514  ]
                    [                          ]  [                       ]
   evals, rconde := [2.00000000006298473 + 0. I], [0.000463973651664679803]
                    [                          ]  [                       ]
                    [0.99999999993727917 + 0. I]  [0.000465192433087686387]
 
> EPSMCH:=0.111022302462515654e-15:
> abnrm := LinearAlgebra:-Norm(A,1):

> bounde := Vector(3,(i)->EPSMCH*abnrm/rconde[i]);
                                  [               -11]
                                  [0.1588456196 10   ]
                                  [                  ]
                        bounde := [               -9 ]
                                  [0.4649322930 10   ]
                                  [                  ]
                                  [               -9 ]
                                  [0.4637141932 10   ]

And a slightly suspect check, to compare using SLAMCH's epsilon as they reported it (hence, maybe not exactly right).

> bounde := Vector(3,(i)->(5.9*10^(-8))*abnrm/rconde[i]);
                                    [0.0008441449458]
                                    [               ]
                          bounde := [ 0.2470765303  ]
                                    [               ]
                                    [ 0.2464292019  ]

which is close enough to their estimated bounds, I suppose.

The bottom line: for this Matrix A, EigenConditionNumbers is returning eigenvalues which are accurate to about 10^(-9) or so. You can see by plain inspection that the computed eigenvalues are somewhat more accurate than the estimated error bounds would require.
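
For instance, if the exact eigenvalues of this particular A really are 50, 2, and 1 (which the computed values strongly suggest), then the actual errors can be compared against bounde,

> Digits := 18:  # raised, so that the tiny differences don't just round away
> exact := <50, 2, 1>:
> Vector(3, (i)->abs(Re(evals[i]) - exact[i]));

which gives errors of roughly 5*10^(-13), 6*10^(-11), and 6*10^(-11), each somewhat smaller than the corresponding entry of bounde.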

acer

For question 1) read the help on diff, for 2) read the help on limit, for 3a) read the help on piecewise, for 3b) read the help on plot, for 3c) read the help on int, for 4a) read the help on LinearAlgebra,RowOperation, for 4b) read the help on LinearAlgebra,SubMatrix, and for 5) read the help on fsolve.

acer

Evaluating an expression at specific values of the parameters: another FAQ.

Notice that it is R and not R(s) in the definition of y.

> R := 0.6e-1*(arctan(10*s/1.3+(-10)*.3)/Pi+1/2):
> y := -R*cos(6*Pi*s/1.3):

> eval(y,s=0.026);
             /  0.07366634316                \
            -|- ------------- + 0.03000000000| cos(0.1200000000 Pi)
             \       Pi                      /
 

> evalf(%);
                                -0.006091221169

acer

Are any of these what you had in mind?

f := x-> sum(1/n^2,n=x+1..infinity);
plot(f,1..5);
plots:-pointplot([seq([i,eval(f(i))],i=1..10)]);

But for this case, the sum can be found as a function known to Maple, i.e. Psi(1,x+1).

fe := sum(1/n^2,n=x+1..infinity) assuming x::posint;
plot(fe,x=1..5);
plots:-pointplot([seq([i,eval(fe,x=i)],i=1..10)]);

I wasn't sure whether you wanted a continuous or a discrete plot. You can create a proc (rather than an arrow operator) with a type attached to the argument (using :: notation), but other than that Maple doesn't easily associate domains with "functions".
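
For example, here's a small sketch of such a proc (the name g is just for illustration), which will accept only positive integer arguments:

g := proc(x::posint)
  Psi(1, x+1);  # equals sum(1/n^2, n=x+1..infinity) for positive integer x
end proc:
g(3);
g(1.5);  # this one raises an invalid-input error, since 1.5 is not a posint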

acer

What's the domain of interest here? You'll want the function f to be 1-1 wherever you hope to invert it properly.

Let's look at your function,

f := x -> piecewise(
   x<=20,
   100-x-(3/80)*x^2+(3/1600)*x^3-2*x,
   x>=20 and
   x<=fsolve(1495/16-(201/64)*x+(33/1280)*x^2-(3/25600)*x^3=0),
   1495/16-(201/64)*x+(33/1280)*x^2-(3/25600)*x^3);

plot(f,-100..100);
plot(f,-40..10);
Optimization:-Maximize(f,-40..10);

# numerically invert f on either branch, split at the maximum near x = -17.37
finv := proc(y, highlow := 'high')
global f;
 if highlow = 'high' then
   fsolve(x -> f(x) - y, -17.37 .. 100);
 else
   fsolve(x -> f(x) - y, -100 .. -17.37);
 end if;
end proc:


And so now, using -17.37 (the maximal point) as the cut-off between the high and low ranges for the domain,

> finv(50); f(finv(50));
                                  16.02900152
 
                                  50.00000001
 
> finv(50,low); f(finv(50));
                                 -38.85067771
 
                                  50.00000001

> finv(f(2),high);
                                  2.000000000
 
> finv(f(-20),low);
                                 -20.00000000

acer

>  patmatch((1.330+2.440*x)*exp(-3.660*x),
> (a_num::realcons+b_num::realcons*x)*exp(c_num::realcons*x),
> 'res');
                                     true
 
> res;
                [a_num = 1.330, b_num = 2.440, c_num = -3.660]

You can utilize those values, in res, using 2-argument eval() later on. But if you really want the assignments actually done, then,

> a_num;
                                     a_num
 
> assign(res);
> a_num;
                                     1.330
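
And, for comparison, here is what the 2-argument eval approach mentioned above looks like (which would work even without doing the assign),

> eval(a_num + b_num*x, res);
                               1.330 + 2.440 x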

acer

> E1 := x^2-y = 5:
> E2 := x-y^2 = -13:

> RootFinding:-BivariatePolynomial([rhs(E2)-lhs(E2),rhs(E1)-lhs(E1)],
>                                  [x, y]);

[3.000000000 + 0. I, 4.000000000 - 0. I],
 
    [-2.860805853 - 0. I, 3.184210129 + 0. I],
 
    [1.114907541 - 0. I, -3.756981174 + 0. I],
 
    [-1.254101688 + 0. I, -3.427228955 - 0. I]
 
> RootFinding:-Isolate([rhs(E2)-lhs(E2),rhs(E1)-lhs(E1)],
>                      [x, y]);

[[x = 1.114907541, y = -3.756981174], [x = -1.254101688, y = -3.427228955],
 
    [x = -2.860805853, y = 3.184210129], [x = 3., y = 4.]]

> _EnvExplicit:=true:
> esol := [solve({E2, E1}, {x, y}, AllSolutions)]:
> seq( simplify(eval([E1,E2],esol[i])), i=1..nops(esol) );

[5 = 5, -13 = -13], [5 = 5, -13 = -13], [5 = 5, -13 = -13], [5 = 5, -13 = -13]

acer

Try fsolve(K=S*F^n, F).

If K and S are known real numeric values and n is a known positive integer then that should return the real solutions as floating-point numbers (which appears to be what you're after).
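
For example, with made-up values K=100, S=2, and n=3,

> fsolve(100 = 2*F^3, F);
                                  3.684031499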

acer

This is the sort of thing that should be in the Tasks section of the help system.

The closest Task I can find to this is ?Task,EquationOfLineBetweenTwo2DPoints . And that one is done using the geometry package, so it can't serve to instruct anyone in the underlying mathematics, which is a slight shame.

What's below can also be done using `.` for DotProduct and ^%T for Transpose, or VectorCalculus could be used. I'm not sure which is clearer.

with(LinearAlgebra):
r0 := <0, 4/3, - 5/3>:
 
p1,b1 := <3,1,-1>, 3:
DotProduct(Transpose(<x,y,z>), p1) = b1;
                               3 x + y - z = 3
# zero, only if r0 lies on plane 1
DotProduct(Transpose(p1), r0) - b1;
                                      0
p2,b2 := <1,2,4>, -4:
DotProduct(Transpose(<x,y,z>), p2) = b2;
                             x + 2 y + 4 z = -4
# zero, only if r0 lies on plane 2
DotProduct(Transpose(p2), r0) - b2;
                                      0
# direction of the line: perpendicular to both p1 and p2, the normals of planes 1 and 2
L := CrossProduct(p1, p2):

# zero, only if L is perpendicular to p1 the normal of plane 1
DotProduct(Transpose(p1), L);
                                      0
# zero, only if L is perpendicular to p2 the normal of plane 2
DotProduct(Transpose(p2), L);
                                      0
# parametric Vector form, one form of the answer
Lpara := r0 + t*L:

# zero for every t, only if each point of Lpara lies on plane 1
DotProduct(Transpose(p1), Lpara) - b1;
                                      0
# zero for every t, only if each point of Lpara lies on plane 2
DotProduct(Transpose(p2), Lpara) - b2;
                                      0
# parametric form (using x,y, and z notation)
DotProduct(Transpose(<x,y,z>), Lpara, conjugate=false);
                            /4       \     /  5      \
                    6 t x + |- - 13 t| y + |- - + 5 t| z
                            \3       /     \  3      /

acer
