vv

MaplePrimes Activity


These are replies submitted by vv

I would rather suggest something like

deg := Pi/180:    # constant
45*deg;
        Pi/4

sin(30*deg);
        1/2

arccos(1/2) / deg;
        60

After all the radian is a natural unit while the degree is an arbitrary (but convenient) one, used for historical reasons.

Note that if such an option were implemented, most existing programs would fail for a user who sets it to "degree".
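A built-in alternative, if one prefers not to define a constant, is unit conversion (a minimal sketch; check the exact unit names against ?convert,units):

```
convert(45, 'units', 'degrees', 'radians');         # Pi/4
sin(convert(30, 'units', 'degrees', 'radians'));    # 1/2
```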

So, you think that I had the impression that the very short SQ procedure is a rigorous proof of the celebrated Toeplitz' conjecture :-)

@Markiyan Hirnyk 

@one man 

The worksheet works in Maple 2016/2017.
The attached version should work in Maple 17 too, but I cannot test it.
If you have problems please attach the result.

Tangent_plane_VV1.mw

@optoabhi 

There have been many discussions about this on the forum (just search for them).

- The Maple engine is text based, so the 2D math is internally converted to 1D anyway.
- 2D expressions may contain hidden fields that are difficult or impossible to detect visually;
  if something goes wrong it will be hard to debug.
- Most users who write substantial code prefer Worksheet mode, with 1D input and 2D output.
  The 2D input is reserved for presentations.
- A beginner may think that the 2D input mode is easier, but when starting more complex coding he/she will find that the old 1D input is more reliable.
  For an occasional user the 2D option is probably fine.
   

@nm 

You can choose symbolic values for x and y, and after that give them special values in order to simplify the result (optional).

Facxy:=proc(F::algebraic, X::name=anything, Y::name=anything)
  F=1/eval(F,[X,Y]).eval(F,Y).eval(F,X), is(F*eval(F,[X,Y])=eval(F,X)*eval(F,Y))
end:

(I have used Rouben's remark to simplify the procedure.)

 Facxy(cos(x + y + 1) + sin(x - 1)*sin(y + 2),   x=a, y=b);

Facxy(1/(x-1)*y,x=a,y=b);

 

 

I'd recommend trying a different programming and presentation style.
For example, for the curve alone, with animation:

 

NPar :=proc(F::{list,set}, X::{list,set}(`=`), L, N)
# Natural Parametrization; L=length, N=number of points generated
local n:=nops(X), x:=lhs~(X),  d, i,j, J,t,r,s;
if nops(F) <> n-1 then error "The number of functions must be nops(X)-1" fi;
J:=Matrix(n,n-1, (i,j) -> diff(F[j],x[i]));
d:=seq(LinearAlgebra:-Determinant(J[[1..j-1,j+1..n]])*(-1)^j,j=1..n);
d:=subs(x=~x(t), [d]);  
r:=sqrt(add(d^~2));
s:=dsolve({seq( (diff(x[i](t),t) = d[i]/r), i=1..n), X[](0)}
          ,numeric, output=Array([seq(i*L/N,i=0..N)]) );  
s[2,1][..,2..n+1];
end:

###############################

L1 := 1.2:
f1 := (x1-.5)^4+(x2-1)^4+x3^4+2*x1*x2*x3-L1^4:
f2 := (x1-sin((x1^2+x2^2+x3^2)^.5)^2)^2+(x2-sin(x1)^2)^2+(x3-sin(x1)^2)^2-1:

X0:=fsolve({f1,f2,x1-x3});

{x1 = -.4294971683, x2 = -0.7179716957e-1, x3 = -.4294971683}


K:=NPar([f1,f2], X0, 9.6, 100):

with(plots):

BG:=implicitplot3d(f2, x1 = -1.5 .. 2.5, x2 = -1 .. 3, x3 = -1 .. 2,
           color = green, transparency = .5, numpoints = 3000, style = surface):

animate(pointplot3d, [ 'K'[1..floor(a)], color=red, style=line, thickness=3 ], a=1..101, background=BG, frames=50);

 

 

 

@asa12 

Sorry, I cannot understand the problem. What eigenvector?   sys2 and sys3 are complex-valued so Minimize is out of the question.

@ernilesh80 

1. Yes, only the gradients of the active constraints must be computed. The objective function may be nonlinear, but the conditions of the Kuhn-Tucker theorem must be verified.

2. Using DirectSearch you can find directly the global max (in a domain that makes sense, where the function is real valued).

with(DirectSearch):
a1:= 5:a2:= 10:W:= 1000:
f:= (8.680555553*((1.28*(.8*W1^.7*E1^.3-15.0*(-.4+T)^2*W1^.7-.1800000000*E1-.6199999998*W1))*sqrt(-0.5333333334e-1*E1^.3+0.5333333334e-1*W1^.3+1.000000000*(-.4+T)^2)+(1.20*(.8*W2^.6*E2^.4-24.0*(-.4+T)^2*W2^.6-.2400000000*E2-.5600000000*W2))*sqrt(-0.3333333334e-1*E2^.4+0.3333333334e-1*W2^.4+1.000000000*(-.4+T)^2)+7.0656*W1^.7*E1^.3+5.1840*W2^.6*E2^.4+(7.200*(.1706666667-1.28*T+.4*T*(4*T+40)-20*T^2))*W1^.7+(8.640*(.2133333333-1.60*T+.4*T*(5*T+40)-20*T^2))*W2^.6-3.36384*E1-4.85376*W1-3.91680*E2-2.99520*W2-(.6816*(54*W1+4*E1+105*W2+5*E2))*T-3.4560-(.624*(-4*E1-54*W1-5*E2-105*W2))*T))/T:
g:= W1*a1+W2*a2 - W <=0:
dom:=E1 = 80..100, E2 = 4..6, T = 0.1 .. 1,  W1 = 100 .. 200, W2 = 10 .. 20:
GlobalOptima( f, {g, dom},maximize);

[19181.2056127683, [E1 = 92.9095671121395, E2 = 5.08993012696828, T = .161751877713913, W1 = 172.549681895113, W2 = 13.7251590524430], 4971]

@tomleslie 

1. When ||Y - X.b|| is minimized for b and X is invertible, the minimum is 0 and is obtained for b = X^(-1).Y

You can check this in the worksheet adding
ans3 := simplify(X^(-1).Y);

and you will see that ans1 = ans2 = ans3.

(A symbolic square matrix is treated as invertible).

2.  Transpose(Y-X.b).K.(Y-X.b)   can be used only if K is positive (semi)definite.

 
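The (semi)definiteness of K can be checked with LinearAlgebra (a minimal sketch with an assumed 2x2 matrix):

```
with(LinearAlgebra):
K := Matrix([[2, 1], [1, 2]]):
IsDefinite(K, query = 'positive_semidefinite');    # true
```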

@tomleslie 

With these assumptions X is invertible (genericity applies), so b = X^(-1).Y

You should clarify first the mathematical aspects.

1. b as a solution of the minimization problem is not unique in general.
You probably want the pseudo-inverse (Moore-Penrose). See ?MatrixInverse

2. The generalization using S:=Transpose(X.b-Y).MatrixInverse(C).(X.b-Y)
is strange because S could be unbounded (inf S  could be - infinity).
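For a rank-deficient X the Moore-Penrose solution can be obtained directly (a minimal sketch with assumed data):

```
with(LinearAlgebra):
X := Matrix([[1, 2], [2, 4], [3, 6]]):   # rank 1, so X is not left-invertible
Y := Vector([1, 2, 3]):
b := MatrixInverse(X, method = pseudo) . Y;   # minimum-norm least-squares solution
```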

@nm 

The new ODE is solved via the Bernoulli method (use infolevel to see this). Forcing "separable" ==>

dsolve(diff(y(x), x) = 3*x^2*y(x)^2 , y(x), ['separable']);

    

 
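The method actually used can be seen by raising the info level (a minimal sketch):

```
infolevel[dsolve] := 3:
dsolve(diff(y(x), x) = 3*x^2*y(x)^2, y(x));   # the trace shows which methods are tried
```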

@Adam Ledger 

Sorry but the term is a standard one. See https://en.wikipedia.org/wiki/Pathological_(mathematics)

@Adam Ledger 

f(x) = x^2*ln(x),  f(0) = 0

is a "pathological" function; it is of class C^1 but not C^2  (f'' is unbounded at 0 and actually f''(0+) = - infinity).

Note that for such functions, the meaning of O(...) in Maple differs from the Landau symbol
(in the sense that sub-polynomial terms such as ln(x) may be included in O; see ?series).
Note also that the newer  MultiSeries:-multiseries  uses the standard (Landau) meaning for O.
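The unbounded second derivative can be verified directly (a minimal check):

```
f := x^2*ln(x):
simplify(diff(f, x, x));               # 2*ln(x) + 3
limit(diff(f, x, x), x = 0, right);    # -infinity
```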

  

Here are the results in Maple 2017.

 

S1 := series(x^2*ln(x), x, 1);
        series(+O(x^2),x,2)

S2 := series(x^2*ln(x), x, 2);
        series(+O(x^2),x,2)

S := series(x^2*ln(x), x, 3);
        series(ln(x)*x^2,x)

whattype(S); op(0,S);
        series
        x

SS := MultiSeries:-series(x^2*ln(x), x, 5);
        series(-ln(1/x)*x^2,x)

whattype(SS); op(0,SS);
        series
        x

M := MultiSeries:-multiseries(x^2*ln(x), x, 5);
        SERIES(Scale, [-1/_var[1/ln(1/x)]], 0, algebraic, [2], infinity, integer, _var[x], -_var[x]^2/_var[1/ln(1/x)])

whattype(M); op(0,M);
        function
        SERIES

diff(x^2*ln(x), x);
        2*x*ln(x)+x

diff(x^2*ln(x), x, x);
        2*ln(x)+3

 

@rlopez 

It depends of course on how the residuals are computed in the implicit approach.
But the space is finite dimensional, so all the norms are equivalent (for linear models).
The main problem here is the model, which I think is not adequate; even y = a*x + b is much better.
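For comparison, the linear model can be fitted in one line (a minimal sketch with hypothetical data):

```
with(Statistics):
Xd := Vector([1, 2, 3, 4, 5], datatype = float):
Yd := Vector([2.1, 3.9, 6.2, 8.1, 9.8], datatype = float):
Fit(a*x + b, Xd, Yd, x);    # least-squares fit of y = a*x + b
```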
