sand15

1556 Reputation

15 Badges

11 years, 100 days

MaplePrimes Activity


These are replies submitted by sand15

@JoyDivisionMan

Here is an alternative to what I called the "pointwise construction" of the self-convolution of the function f.
Being based on the SignalProcessing:-Convolution function, this computation should be much faster than the "pointwise construction" I presented in my answer (numeric convolution and correlation generally use the FFT algorithm), but it requires some care to rescale the result correctly.

Here is the code (X is the same 'X' as in my reply, with step=0.01).

a := Array(f~(X), 'datatype' = 'float'[ 8 ] ):
b := a:
c := SignalProcessing:-Convolution(a, b):

N := numelems(c):
K := 2*(max-min)(X)/(N-1):

#----------------------- raw convolution
# dataplot(c, style=line);

plots:-display(
  #----------------------- rescaling to get 'c' in true values
  plottools:-transform(
    (x, y) -> [x*K, y*K])
    (dataplot(c, style=line, color=red, legend="SignalProcessing:-Convolution")
  )
  ,
  plot([seq([x, Exact(x)], x in X)], legend="exact", color=blue)
  , view=[0..4, default]
)

Result
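Outside Maple, the same rescaling logic can be sanity-checked with NumPy (a sketch, not part of the original worksheet): a discrete convolution of sampled values approximates the continuous convolution only after multiplication by the sampling step, which is exactly what the factor K above accounts for.

```python
import numpy as np

# Self-convolution of f(x) = x^2 on (0, 2], zero elsewhere (same toy f as below).
# The exact value at x = 2 is int_0^2 u^2*(2-u)^2 du = 16/15.
step = 0.001
x = np.arange(-2, 2, step)            # sampling grid for f
f = np.where((x > 0) & (x <= 2), x**2, 0.0)

c = np.convolve(f, f) * step          # '* step' is the rescaling discussed above

# Output sample k of the convolution corresponds to abscissa 2*x[0] + k*step,
# so the sample at abscissa x = 2 is:
k = int(round((2 - 2 * x[0]) / step))
print(c[k])                           # close to 16/15
```

Without the `* step` factor the raw discrete convolution is off by a factor 1/step, which is the "true values" rescaling issue mentioned above.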

@JoyDivisionMan 

Detailed explanation using a toy problem

restart

f := x -> piecewise(x <= 0, 0, x > 2, 0, x^2);

proc (x) options operator, arrow; piecewise(x <= 0, 0, 2 < x, 0, x^2) end proc

(1)

SelfConvolution := x -> Int(f(tau)*f(x-tau), tau=0..4);

proc (x) options operator, arrow; Int(f(tau)*f(x-tau), tau = 0 .. 4) end proc

(2)


Pointwise approximation of  SelfConvolution(x)

`Approximation at point x[i]` = step * Sum('f'(h[j])*'f'(x[i]-h[j]), j=1..J);

`where:`, step =h[i+1]-h[i]

`Approximation at point x[i]` = step*(Sum(f(h[j])*f(x[i]-h[j]), j = 1 .. J))

 

`where:`, step = h[i+1]-h[i]

(3)

# A small size approximation (J=5) :

step := 1:
h    := [$(-2..2)];  

[-2, -1, 0, 1, 2]

(4)

# Select only a few values for x[i]

X := [$0..4];

[0, 1, 2, 3, 4]

(5)

# Apply f to each element in h

A := f~(h);

[0, 0, 0, 1, 4]

(6)

# Construct the sequence x[i]-h[1], ..., x[i]-h[J]

x_shifted := x -~ h;  # shifts x by all elements of h

[x+2, x+1, x, x-1, x-2]

(7)

# Apply f to each element of x_shifted

A_shifted := f~(x_shifted );  

A_shifted := [piecewise(x <= -2, 0, 0 < x, 0, (x+2)^2), piecewise(x <= -1, 0, 0 < x-1, 0, (x+1)^2), piecewise(x <= 0, 0, 2 < x, 0, x^2), piecewise(x <= 1, 0, 0 < x-3, 0, (x-1)^2), piecewise(x <= 2, 0, 0 < x-4, 0, (x-2)^2)]

(8)

# Evaluate the Sum in relation (3) for any value of x

C := step * simplify(add(A *~ A_shifted));   # pointwise multiplication of A by A_shifted

C := piecewise(x <= 1, 0, x <= 2, (x-1)^2, x <= 3, 5*x^2-18*x+17, x <= 4, 4*(x-2)^2, 4 < x, 0)

(9)

# For comparison

simplify(value(SelfConvolution(x)));

piecewise(x < 0, 0, x <= 2, (1/30)*x^5, x < 4, 64/5-16*x+(16/3)*x^2-(1/30)*x^5, 4 <= x, 0)

(10)

# Define a function 'Approx' from x to C

Approx := unapply(C, x):

# Evaluate 'Approx' for all values x[i] in X.
# This gives a pointwise approximation of SelfConvolution(x):

Exact    = [ seq( [x, value(SelfConvolution(x))], x in X) ];
'Approx' = [ seq( [x, Approx(x)], x in X) ];  # Not a very good approximation, but an approximation nevertheless.
                                              # To improve it, decrease the step value.

Exact = [[0, 0], [1, 1/30], [2, 16/15], [3, 47/10], [4, 0]]

 

Approx = [[0, 0], [1, 0], [2, 1], [3, 8], [4, 16]]

(11)


Once you have understood the principle with this small example, do it again with a smaller value of 'step', for instance

step := 0.01;

0.1e-1

(12)

h := [seq(-2..2, step)]:
X := [seq(0..4, step)]:
A := f~(h):

x_shifted := x -~ h:
A_shifted := f~(x_shifted ):

C := step * simplify(add(A *~ A_shifted)):

Approx := unapply(C, x):

Exact := unapply(simplify(value(SelfConvolution(x))), x);

plots:-display(
  plot([seq([x, Exact(x)], x in X)], legend="exact", color=blue)
  ,
  plot([seq([x, Approx(x)], x in X)], legend="approx", color=red)
)

proc (x) options operator, arrow; piecewise(x < 0, 0, x <= 2, (1/30)*x^5, x < 4, 64/5-16*x+(16/3)*x^2-(1/30)*x^5, 4 <= x, 0) end proc

 

 
 

 

Download Detailed_explanation_with_a_toy_example.mw
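For readers outside Maple, the pointwise construction above can be mirrored in a few lines of Python (a sketch with the same toy f, not part of the worksheet); the value at x = 2 should approach the exact 16/15 given in output (11).

```python
import numpy as np

def f(x):
    # toy function from the worksheet: x^2 on (0, 2], zero elsewhere
    return np.where((x > 0) & (x <= 2), x**2, 0.0)

step = 0.01
h = np.linspace(-2, 2, 401)           # integration nodes (the 'h' list)
X = np.linspace(0, 4, 401)            # evaluation points (the 'X' list)

A = f(h)                              # f applied to each node
# Approx(x[i]) = step * sum_j f(h[j]) * f(x[i] - h[j])   -- relation (3)
approx = np.array([step * np.sum(A * f(x - h)) for x in X])

i2 = 200                              # index of x = 2 in X
print(approx[i2])                     # close to the exact value 16/15
```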


"This system of nonlinear equations was proposed by an artificial intelligence named Alice." means absolutely nothing out of context.

Alice's answers are nonsensical out of context, and the last "Practical applications" item is one of the most stupid things I have ever read.
By the way, Alice forgot to mention that this set of equations could also have a practical application in predicting 

She should have stayed in Wonderland, where she was more brilliant.

@Andiguys 

Question_1_sand15.mw and reply_1_sand15.mw do indeed give different results because they do not solve the same equations.
To be clearer: the equations you solve in your initial question and in the comment to my answer are not the same.

Pay attention to what you did and read First_and_second_problem_compared.mw carefully.

@Andiguys 

Read carefully the attached file

NULL

restart

with(Optimization):

_local(Pi);

q_relation := q = (1/2)*(Ce*tau-Ci+Pr)/Cr;

q = (1/2)*(Ce*tau-Ci+Pr)/Cr

(1)

Pr_relation := Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2;

Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2

(2)

subs(Pr_relation, q_relation);

q = (1/2)*((1/2)*Ce*tau-(1/2)*Ci+(1/2)*q*t-(1/2)*Cd+1/2)/Cr

(3)

isolate((3), q)

q = (Ce*tau-Cd-Ci+1)/(4*Cr-t)

(4)

simplify(subs((4), Pr_relation), size)

Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t)

(5)

``

map(simplify, solve({q_relation, Pr_relation}, {q, Pr}), size);

{Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t), q = (-Ce*tau+Cd+Ci-1)/(-4*Cr+t)}

(6)


 

simplify((6)[1] - (5))

0 = 0

(7)

simplify((6)[2] - (4))

0 = 0

(8)
 

 

Download reply_1_sand15.mw
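As a cross-check outside Maple (a sympy sketch, not part of the attached worksheet): solving the same pair of relations reproduces the closed form of output (4).

```python
import sympy as sp

q, Pr, Ce, tau, Ci, Cr, Cd, t = sp.symbols('q Pr Ce tau Ci Cr Cd t')

# Relations (1) and (2) from the worksheet
q_relation  = sp.Eq(q,  (Ce*tau - Ci + Pr) / (2*Cr))
Pr_relation = sp.Eq(Pr, -Ce*tau/2 + q*t/2 - Cd/2 + Ci/2 + sp.Rational(1, 2))

sol = sp.solve([q_relation, Pr_relation], [q, Pr], dict=True)[0]

# Expected closed form for q, from output (4)
q_expected = (Ce*tau - Cd - Ci + 1) / (4*Cr - t)
print(sp.simplify(sol[q] - q_expected))   # 0, i.e. the two forms agree
```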

@nm 

Thanks for sharing your experience

Maybe a lead: the mw file I wanted to upload contains debugger outputs; if I remove them the file can be uploaded.

For information: I get no error using Maple 2015, even for a little bit complex display.

With_Maple2015.mw

@acer 

You're right that A^(N/2) * B^(N/2) and (A*B)^(N/2) are not always equivalent, and I had even thought of writing "Assuming that A^(N/2) * B^(N/2) = (A*B)^(N/2) holds...".
But I was also guided by the expressions of the two terms whose equality the OP wanted to prove, and without this assumption I was pretty sure the proof could not be established.
So I thought that this equivalence went without saying.

But again you're right, I should have been more detailed in my answer.

@FDS 

Is there a policy to mark your latest post as best?
Yes, you should see a trophy at the top right of my answer: just click on it.

See you next time!

@FDS 

The updated worksheet  Stress_Strain_Curve_2.mw
And an illustration of what you can do with it, in the spirit of what you show in your pptx.

About polynomial regression... there is really no mystery behind it.
The first question is: are you familiar with "multiple" linear regression in the case of Q regressors X_1, ..., X_Q?
If so, "regressing" a dense polynomial of degree Q-1 in a single indeterminate Z is nothing but using "multiple" linear regression while setting X_n = Z^(n-1), n = 1..Q.
To give an example, suppose you want to fit a dense quadratic polynomial a + b*Z + c*Z^2: simply introduce auxiliary variables X_1 = 1, X_2 = Z, X_3 = Z^2 and regress your dependent quantity no longer on a + b*Z + c*Z^2 but on a*X_1 + b*X_2 + c*X_3... and that's it!

If you are not familiar with "multiple" regression I advise you to read any elementary textbook/course/MOOC... on the subject.
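To illustrate the point with a minimal sketch (made-up data, not from the worksheet): fitting a + b*Z + c*Z^2 by least squares on the design matrix [1, Z, Z^2] is exactly multiple linear regression on the auxiliary regressors X_1, X_2, X_3.

```python
import numpy as np

Z = np.linspace(0, 5, 50)
y = 2.0 + 0.5 * Z - 0.3 * Z**2         # noiseless quadratic, so the fit is exact

# Auxiliary regressors X1 = 1, X2 = Z, X3 = Z^2 -> multiple linear regression
Xmat = np.column_stack([np.ones_like(Z), Z, Z**2])
coeffs, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
print(coeffs)                          # recovers a, b, c
```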

Beyond that, an important question often appears: what degree should the polynomial have so that the fitted polynomial fulfills two contradictory requirements?

  1. Fidelity to the data: otherwise said, the fitted polynomial must be "close" to the data or, mathematically speaking, the residual sum of squares between the data and their restitution by the fitted polynomial must be small.
    Obviously, as a dense polynomial of degree D-1 depends on D parameters, a set of data of size D will be exactly fitted by this dense polynomial.
    We usually say here that the model is over-parameterized.
  2. Stability (or robustness, or generalization capability...): let us take this previous sample of size D and this same polynomial of degree D-1.
    This polynomial may have D-1 real roots and if, unhappily, some of them are located within the range of the regressor (Z), then the fitted polynomial will present undesired overshoots between consecutive values of the regressor.
    In fact it is extremely likely that the fitted polynomial will behave this way.
    If you want to avoid this phenomenon, use a polynomial of degree 0 or 1.

So, to satisfy the fidelity-to-the-data requirement you must take a large degree, close to the sample size, but if you are more concerned by "stability" or a "smooth fit", then you must take low-degree polynomials.
There exists a lot of literature about strategies which balance these two contradictory requirements. I already gave you a few Wiki references on the subject.

What I wrote in my worksheet is that each branch is a sample of size about one thousand.
So a dense polynomial of degree 5, for instance, is very far from being an over-parameterized polynomial. The balancing stuff I spoke about above matters only when the number of parameters of the model (not necessarily a polynomial one) is of the order of the sample size, let us say 20% or 25% of the latter.
You would be right to consider model selection (here degree selection) if you were investigating polynomials of degree 200, for example. But that is very far from being the case.

As I wrote in this last worksheet, a more important criterion is that two polynomial models intersect within the L5 and EP5 ranges if you do not want to create unrealistic approximated cycles.
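The over-parameterization point in item 1 can be seen numerically (a sketch with arbitrary data, not the stress-strain sample): a dense polynomial of degree D-1 fitted to D points leaves a residual sum of squares of essentially zero, while a low-degree fit does not.

```python
import numpy as np

Z = np.linspace(0, 1, 8)               # D = 8 data points
y = np.sin(7 * Z)                      # arbitrary, clearly non-quadratic data

low  = np.polyfit(Z, y, 2)             # degree 2: smooth but biased
full = np.polyfit(Z, y, len(Z) - 1)    # degree D-1 = 7: interpolates exactly

rss_low  = np.sum((np.polyval(low,  Z) - y)**2)
rss_full = np.sum((np.polyval(full, Z) - y)**2)
print(rss_low, rss_full)               # rss_full is essentially zero
```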

@FDS 

Feel free to tell me if this goes in the right direction:
Stress_Strain_Curve_2.mw

I understand the notion of cyclic loading-unloading, but you want to assess the "surface area" of each cycle, and so you need to define more precisely what characterizes a cycle in geometric terms: where it starts (likely the intersection point between a loading path and the next/previous unloading path) and where it ends.
So I propose my interpretation for the first cycle only... I am not going to go further if I am not on the right track.

My Maple version is too old to manage workbooks. Could you simply upload the stress-strain data?

@janhardo 

Beautiful, I vote up

Error messages are pretty clear: you have to specify an approximate solution.

For more details execute help(dsolve,numeric_bvp,advanced) and help(dsolve,numeric,BVP) in a worksheet and search for the word approxsoln.

Defining an approximate solution can be quite tricky, and the best way to do this is to construct a simplified version of your problem using simplified physics (do a dimensional analysis, identify second-order effects, discard them), to solve it, and to use this solution as approxsoln (which can be either a procedure or an array).
You are likely the best placed to do this because it is you who knows the physics behind your equations.

For instance, what is the order of magnitude of
compared to the other terms? Can you simplify it?

Or can you approximate BCs like?
Can you simplify them from a physical point of view?
As a rule, a BC on the first derivative which depends nonlinearly on the second derivative makes a BVP complicated to solve.

My advice is: begin by doing this yourself and come back to us if you run into Maple difficulties solving your problem.
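The role of an approximate solution is the same in any collocation-based BVP solver. As an illustration (a scipy sketch on a toy Bratu-type problem, not the OP's equations), the initial guess passed to solve_bvp plays exactly the role of Maple's approxsoln:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy BVP: y'' = -exp(y), y(0) = y(1) = 0
def rhs(x, y):
    return np.vstack([y[1], -np.exp(y[0])])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0, 1, 11)
y_guess = np.zeros((2, x.size))
y_guess[0] = 0.5 * x * (1 - x)        # crude parabola: the 'approxsoln' analogue

sol = solve_bvp(rhs, bc, x, y_guess)
print(sol.status, sol.sol(0.5)[0])    # status 0 means the solver converged
```

A small guess selects the lower of the two solutions this problem admits; a different guess can land on the other branch, which is why the choice of approxsoln matters.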

@JoyDivisionMan 

By the way, the attached worksheet explains in more detail how to get the pdf of Y = cos(X), where X is a uniform random variable with support (-π, +π).
Two approaches are used: a direct construction of PDF(Y) and an indirect one by differentiation of CDF(Y).

To compute the cdf of, let's say, Z = sin(X) (which is obviously the same as that of Y) without doing all the stuff contained in the attached file (solve/allsolutions, branch identification, aggregation of partial pdfs/cdfs over the different branches, ...), it is much simpler to observe that sin(X) = cos(π/2 - X), then define an auxiliary random variable A = π/2 - X and replace everywhere it occurs f_X(something) by f_A(some other thing), which is equal to f_X(π/2 - something).
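A quick numerical check of this claim (a Python sketch, not in the worksheet): for X uniform on (-π, π), the empirical CDFs of cos(X) and sin(X) coincide and match the exact CDF from output (2).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-np.pi, np.pi, 200_000)

# Exact CDF of Y = cos(X) from output (2): (2*arcsin(y) + pi) / (2*pi) on [-1, 1]
F = lambda y: (2 * np.arcsin(y) + np.pi) / (2 * np.pi)

for y in (-0.5, 0.0, 0.5):
    print(np.mean(np.cos(X) <= y),     # empirical CDF of cos(X)
          np.mean(np.sin(X) <= y),     # empirical CDF of sin(X): same law
          F(y))                        # exact value
```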

restart

with(Statistics):

X := RandomVariable(Uniform(-Pi, Pi)):

Y := cos(X):

f__Y := CodeTools:-Usage( unapply(PDF(Y, y), y) )

memory used=17.78MiB, alloc change=34.00MiB, cpu time=273.00ms, real time=2.38s, gc time=0ns

 

proc (y) options operator, arrow; piecewise(y <= -1, 0, y < 1, 1/(Pi*(-y^2+1)^(1/2)), 1 <= y, 0) end proc

(1)

restart

with(Statistics):

X := RandomVariable(Uniform(-Pi, Pi)):

Y := cos(X):

F__Y := CodeTools:-Usage( unapply(CDF(Y, y), y) )

memory used=41.23MiB, alloc change=70.01MiB, cpu time=618.00ms, real time=2.02s, gc time=56.15ms

 

proc (y) options operator, arrow; piecewise(y <= -1, 0, y <= 1, (1/2)*(2*arcsin(y)+Pi)/Pi, 1 < y, 1) end proc

(2)

f__Y := unapply(diff(F__Y(y), y) , y)

proc (y) options operator, arrow; piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/((-y^2+1)^(1/2)*Pi), y = 1, undefined, 1 < y, 0) end proc

(3)


FIRST WAY: BUILD DIRECTLY PDF(Y)

plot(cos(x), x=Support(X, output='range'))

 


This graph is made of two "branches" where the cosine function has a monotone variation:
          (1)  from -Pi to 0 cosine is monotonically increasing
          (2)  from 0 to Pi  cosine is monotonically decreasing


Watch out:
The inverse cosine function is the arccosine function only on intervals where cosine is monotone.
So the reciprocal of  y = cos(x) over -Pi..Pi is not x = arccos(y).
Indeed:

solve(cos(x)=y, x, allsolutions)

2*Pi*_Z3-2*arccos(y)*_B3+arccos(y)

(4)

indets((4), name) minus {y, Pi}:
about~(%):

Originally _B3, renamed _B3~:

  is assumed to be: OrProp(0,1)

Originally _Z3, renamed _Z3~:
  is assumed to be: integer

 


Choosing _Z3 = 0 gives
         

reciprocal := eval((4), _Z3=0)

-2*arccos(y)*_B3+arccos(y)

(5)


And the two branches are
         

phi[1] := unapply( eval(reciprocal, _B3=0), y);
phi[2] := unapply( eval(reciprocal, _B3=1), y);

proc (y) options operator, arrow; arccos(y) end proc

 

proc (y) options operator, arrow; -arccos(y) end proc

(6)


pdf  fX(x) of X
   

f__X := unapply( PDF(X, x), x)

proc (x) options operator, arrow; piecewise(x < -Pi, 0, x < Pi, (1/2)/Pi, 0) end proc

(7)


The pdf  fY(y) of Y is given by this formula

result := Sum('f__X'(phi[i](y)) * abs(diff(phi[i](y), y)), i=1..2)

Sum(f__X(phi[i](y))*abs(diff(phi[i](y), y)), i = 1 .. 2)

(8)

result := value(result)

result := piecewise(arccos(y) < -Pi, 0, arccos(y) < Pi, 1/(2*Pi), 0)/sqrt(abs(y^2-1))+piecewise(-arccos(y) < -Pi, 0, -arccos(y) < Pi, 1/(2*Pi), 0)/sqrt(abs(y^2-1))

(9)

f__Y := unapply(simplify(result), y):
f__Y(y);

piecewise(y < -1, 0, y = -1, 1/(2*Pi*sqrt(y^2-1)), y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, 1/(Pi*sqrt(y^2-1)), 1 < y, 0)

(10)

plot(f__Y(y), y=-1..1, title="Arcsine law")
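Formula (8) can also be checked numerically point by point (a Python sketch): summing f_X(phi[i](y)) * |phi[i]'(y)| over the two branches must reproduce the arcsine density 1/(Pi*sqrt(1 - y^2)).

```python
import numpy as np

# Uniform density on (-pi, pi), as in output (7)
f_X = lambda x: np.where((x > -np.pi) & (x < np.pi), 1 / (2 * np.pi), 0.0)

def f_Y(y):
    # formula (8): sum over the two branches phi_1 = arccos, phi_2 = -arccos
    dphi = 1 / np.sqrt(1 - y**2)                 # |phi_i'(y)|, same for both
    return f_X(np.arccos(y)) * dphi + f_X(-np.arccos(y)) * dphi

y = 0.3
print(f_Y(y), 1 / (np.pi * np.sqrt(1 - y**2)))  # both give the arcsine density
```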

 


ALTERNATIVE WAY: BUILD CDF(Y) AND DERIVE IT TO GET PDF(Y)


cdf  FX(x) of X
   

F__X := unapply( CDF(X, x), x)

proc (x) options operator, arrow; piecewise(x < -Pi, 0, x < Pi, (1/2)*(x+Pi)/Pi, 1) end proc

(11)


If all the phi[i](y) were increasing functions of y, the cdf  F__Y(y) of Y would be written

result := Sum(chi('F__X'(phi[i](y))), i=1..2);

Sum(chi(F__X(phi[i](y))), i = 1 .. 2)

(12)


If some phi[i](y) are decreasing functions of y, the cdf  F__Y(y) of Y is defined by this formula

result := Sum(chi('F__X'(phi[i](y))), i=1..2), ``(chi('F__X'(phi[i](y))) = piecewise(diff(phi[i](y), y) > 0, 'F__X'(phi[i](y)), 1-'F__X'(phi[i](y))))

result := Sum(chi(`#msub(mi("F"),mi("X"))`(phi[i](y)))*`,`(chi(`#msub(mi("F"),mi("X"))`(phi[i](y))) = piecewise(0 < diff(phi[i](y), y), `#msub(mi("F"),mi("X"))`(phi[i](y)), 1-`#msub(mi("F"),mi("X"))`(phi[i](y)))), i = 1 .. 2)

(13)

plot([phi[1](y), phi[2](y)], y=-1..1, color=[red, blue], legend=[typeset(phi__1(y)), typeset(phi__2(y))])

 


Thus

result := ``(1 - F__X(phi[1](y))) + F__X(phi[2](y))

result*`:=`(1-piecewise(arccos(y) < -Pi, 0, arccos(y) < Pi, (1/2)*(arccos(y)+Pi)/Pi, 1))+piecewise(-arccos(y) < -Pi, 0, -arccos(y) < Pi, (1/2)*(Pi-arccos(y))/Pi, 1)

(14)

F__Y := unapply(simplify(expand~(result)), y):
F__Y(y);

piecewise(y < -1, 1, y <= 1, (Pi-arccos(y))/Pi, 1 < y, 1)

(15)


So PDF(Y) is:

diff(F__Y(y), y);

piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, undefined, 1 < y, 0)

(16)


Verify that fY(y) obtained using the direct way (equation (10)) is almost everywhere equal to expression (16) 

"Almost everywhere" means "outside the points of null probability measure (y = ±1) where (16) is undefined".
   Note that the existence of these points comes from the discontinuity of PDF(X) and that they could be removed from
   formulas (10) and (16) (because of their null probability measure).

simplify(f__Y(y) - diff(F__Y(y), y));

piecewise(y = -1, undefined, y = 1, undefined, 0)

(17)

plot(
  [f__Y(y), (16)]
  , y=-1..1
  , color=[red, blue]
  , thickness=[1, 5]
  , transparency=[0, 0.7]
  , legend=[typeset('f__Y'(y)), typeset('diff(F__Y(y), y)')]
)

 


NOTE THAT MAPLE IS CAPABLE OF COMPUTING PDF(Y) DIRECTLY

PDF(cos(X), y)

piecewise(y <= -1, 0, y < 1, 1/(sqrt(-y^2+1)*Pi), 1 <= y, 0)

(18)


But I am pretty sure this is only by chance for, from what I have observed when asking Maple to compute the PDF of a polynomial
function of X, Maple does not seem to identify "branches" nor to use formulas like (8) or (13).

In the cos(X) case, paying no particular attention to the correctness of the formulas and wrongly invoking a symmetry argument,
we would get:

WrongFormulaButCorrectResult := diff(simplify(2*F__X(phi[2](y))), y);

WrongFormulaButCorrectResult := diff(simplify(2*F__X(phi[1](y))), y);

WrongFormulaButCorrectResult := piecewise(y < -1, 0, y = -1, undefined, y < 1, 1/(sqrt(-y^2+1)*Pi), y = 1, undefined, 1 < y, 0)

 

piecewise(y < -1, 0, y = -1, undefined, y < 1, -1/((-y^2+1)^(1/2)*Pi), y = 1, undefined, 1 < y, 0)

(19)

V := RandomVariable(Uniform(-5*Pi/3, Pi/3)):

PDF(cos(V), y);    # Should be equal to (10) or (16)

plot(%, y=-1..1, title="Obviously wrong", titlefont=[Tahoma, bold, 14]);

piecewise(y <= 1/2, 0, y < 1, 1/(sqrt(-y^2+1)*Pi), 1 <= y, 0)

 

 
 

 

Download cosine.mw
