sand15

1613 Reputation

16 Badges

11 years, 155 days

MaplePrimes Activity


These are replies submitted by sand15

@vv 

I have been taught this strict French formalism and even though I stopped doing pure math a long time ago, it has shaped my way of thinking—as it probably has for everyone who went through that phase during their math studies.

My wife, who is a little younger than I am, teaches mathematics to undergraduate and graduate students, and I asked her (before writing my previous response) whether notations such as

int(f(x), x=0..+infinity) = infinity

were acceptable or not.
She answered that it isn't among colleagues of about her age, but that it is quite common among the "new" generation of teachers, which is a constant point of discord when it is time to build the teaching programs. A kind of quarrel of the Ancients and the Moderns.

I was taught about continuity and limits, the epsilon-delta stuff, when I was 15 (9th/10th grade in the US educational system). Today those concepts are introduced in the second year of the mathematics and physics bachelor's program, the argument being that "we are going to lose too many pupils if we talk to them about continuity of a function"... and so, for most 19/20-year-old math students, the idea of continuity boils down to the fact that the pen always slides on the sheet of paper when you draw f(x).
I don't think the French educational math system is an exception. A while ago there was a post about some highly ranked Maplesoft person who had given a talk about the impoverishment of math teaching in the US.

How can a newly appointed high school math teacher, who only discovered abstract concepts like continuity and limits (to name just a few) two years ago, possess the rigor required to teach these subjects... given that most of them are no longer even part of the course curriculum they are expected to teach?
My older daughter (31) got two master's degrees in mathematics before deciding to teach math in high school (junior and senior years). Even though she's of course much younger than I am, she was part of the last cohorts of high school students who were taught the epsilon-delta material. Sometimes she feels disheartened to see how the math curriculum has deteriorated over the past fifteen years.

While some teachers lament this situation (regardless of their age, even if there are more of them among the older ones), others simply accept it, saying of their students "they're rubbish and wouldn't understand". Still others (our system is almost entirely public, and teachers are civil servants paid by the state) believe it is not their role to argue against the official national curriculum (whose sole purpose is to ensure that the percentage of graduates is even higher than the previous year). Finally, some have never even encountered certain basic mathematical concepts (basic at least for a 60-year-old) during their own education.

So you are right saying that "the French mathematical tradition as a whole is much more relaxed and I have seen f(oo), f(-oo) and other shorthands in many places!".
But how could it be otherwise, given what I wrote above?
Not making an effort to teach mathematical rigor (whether out of laziness or because that’s not what the government expects of you) inevitably leads to neglecting that very rigor and writing things like what you mentioned.

I think tools like Maple should be the last strongholds of this rigor.

@nm  @dharr  @Kitonum  

"should the underlying CAS system handle this itself and do the right thing (which would be to return undefined in this example),"

Yes, it should.
I'm French, and the French mathematical school considers that writing things like

eval(expr,x=infinity)

is pure nonsense. Writing something like this on a math exam gives you a zero on the question.
The correct writing in pure mathematics is

limit(expr, x=+infinity)


The English point of view seems far less strict, and notations where the infinity symbol is treated as a number are quite common.
I have no doubt that any English-speaking mathematician is aware that this is an abusive condensed notation. But the problem is that when a CAS accepts it, this opens the door to wrong interpretations, wrong results... or questions on this site.
So my own opinion is that Maple should not accept people writing eval(expr, x=infinity) (or the like) and should raise an error.

Another example

# Even this is incorrect (while commonly used)

int(x, x=0..+infinity)
                            infinity

# and should be written instead

limit(int(x, x=0..L), L=+infinity)
                            infinity
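For comparison, the "limit of a proper integral" definition can also be checked in SymPy (Python); this is just an illustration of the underlying mathematics, not a Maple equivalent:

```python
from sympy import symbols, integrate, limit, oo

x, L = symbols('x L', positive=True)

# First integrate over a finite interval [0, L] ...
proper = integrate(x, (x, 0, L))     # gives L**2/2

# ... then take the limit L -> infinity, which is the
# rigorous definition of the improper integral.
print(limit(proper, L, oo))          # oo
```

Note that SymPy, like Maple, also accepts the condensed `integrate(x, (x, 0, oo))` form; the point here is only that the two-step writing is the one that matches the pure-math definition.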


Note that this same difference of points of view between the English and French (and maybe other) mathematical schools appears in the notion of the "domain of a function" (see Wikipedia for instance).
The French definition makes a distinction between a function and an application, that is, whether or not you consider an application to be a function; the English definition does not make this distinction.

Look at this simpler example: a manifold of dimension 2 (the region below the curve) has a finite measure (the area) while it is bounded by a manifold of lower dimension (the curve) of infinite measure (the length):

restart
f := x -> exp(-x):
int(f(x), x=0..+infinity)
                               1

L := unapply(sqrt(1+diff(f(x), x)^2), x):
int(L(x), x=0..+infinity);  # intuitively obvious result

                            infinity

I say this result is intuitively obvious because the length of the curve is necessarily larger than its projection along the x-axis... which is infinite.

This is the same as in your problem: a manifold of dimension 2 (the surface of Gabriel's horn) of infinite 2D measure bounds a manifold of dimension 3 (the volume of this horn) of finite 3D measure.
In fact the infinities you refer to are "not the same" and you cannot compare 2D and 3D measures. For instance the 3D measure of Gabriel's horn surface is 0: would you say a finite volume is bounded by a surface of null measure?

Another point of view: the space-filling Peano curve is a curve of infinite length which is dense in the unit square. This means that a manifold of dimension d may have infinite length (w.r.t. a d-measure) while being dense within a manifold of dimension d' > d of finite d'-measure.

@acer 

"is still just as easily directly possible even if one also uses the Horner form" ... really?

What is the representation which leads to the quickest answer to the questions: What is the coefficient of x^6? What is the coefficient of x^8?

Another example: What is the coefficient of x^6?



But I had forgotten that you are never wrong, so I'm likely mistaken and the Horner form is surely more suitable.
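To make the point concrete, here is a small SymPy (Python) sketch of my own (the polynomial is made up, not the one from the thread) contrasting coefficient extraction from the expanded form and from the Horner form:

```python
from sympy import symbols, horner, expand

x = symbols('x')

# An expanded (dense) polynomial: 1 + 2*x + ... + 9*x**8.
# Coefficient extraction is immediate on this form.
p = sum((i + 1) * x**i for i in range(9))
print(p.coeff(x, 6))             # 7, read directly off the expanded form

# The Horner form is nested, so the coefficient of x^6 is no longer
# visible: you must re-expand the expression first.
h = horner(p)
print(expand(h).coeff(x, 6))     # 7 again, but only after re-expansion
```

The Horner form is the right representation for fast evaluation; the expanded form is the right one for reading off coefficients.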

@JoyDivisionMan

Here is an alternative to what I named the "pointwise construction" of the self-convolution of the function f.
Based on the SignalProcessing:-Convolution function, this computation should be much faster than the "pointwise construction" I presented in my answer (numeric convolution and correlation generally use the FFT algorithm), but it requires some care to correctly rescale the result.

Here is the code (X is the same 'X' as in my reply, with step=0.01).

a := Array(f~(X), 'datatype' = 'float'[ 8 ] ):
b := a:
c := SignalProcessing:-Convolution(a, b):

N := numelems(c):
K := 2*(max-min)(X)/(N-1):

#----------------------- raw convolution
# dataplot(c, style=line);

plots:-display(
  #----------------------- rescaling to get 'c' in true values
  plottools:-transform(
    (x, y) -> [x*K, y*K])
    (dataplot(c, style=line, color=red, legend="SignalProcessing:-Convolution")
  )
  ,
  plot([seq([x, Exact(x)], x in X)], legend="exact", color=blue)
  , view=[0..4, default]
)

Result: (plot comparing the rescaled convolution, in red, with the exact curve, in blue)
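The rescaling idea itself is language-agnostic: a discrete convolution must be multiplied by the sampling step to approximate the continuous convolution integral. A minimal sketch in Python/NumPy (my own toy example, a box function whose self-convolution is a triangle, not the f of this thread):

```python
import numpy as np

step = 0.01
x = np.arange(0.0, 1.0, step)        # sampling grid on [0, 1)
a = np.ones_like(x)                  # samples of the box function on [0, 1)

# The discrete convolution alone just sums products of samples;
# multiplying by the step turns it into a Riemann-sum approximation
# of the continuous convolution integral.
c = np.convolve(a, a) * step

# The self-convolution of the box is the triangle t -> t on [0, 1],
# whose peak value (f*f)(1) = 1 is reached at index len(x)-1.
print(c[len(x) - 1])                 # ~1.0
```

This is exactly the role of the factor K in the Maple code above: without it the raw convolution 'c' is off by a constant scale.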

@JoyDivisionMan 

Detailed explanation using a toy problem

restart

f := x -> piecewise(x <= 0, 0, x > 2, 0, x^2);

proc (x) options operator, arrow; piecewise(x <= 0, 0, 2 < x, 0, x^2) end proc

(1)

SelfConvolution := x -> Int(f(tau)*f(x-tau), tau=0..4);

proc (x) options operator, arrow; Int(f(tau)*f(x-tau), tau = 0 .. 4) end proc

(2)


Pointwise approximation of  SelfConvolution(x)

`Approximation at point x[i]` = step * Sum('f'(h[j])*'f'(x[i]-h[j]), j=1..J);

`where:`, step =h[i+1]-h[i]

`Approximation at point x[i]` = step*(Sum(f(h[j])*f(x[i]-h[j]), j = 1 .. J))

 

`where:`, step = h[i+1]-h[i]

(3)

# A small size approximation (J=5) :

step := 1:
h    := [$(-2..2)];  

[-2, -1, 0, 1, 2]

(4)

# Select only a few values for x[i]

X := [$0..4];

[0, 1, 2, 3, 4]

(5)

# Apply f to each element in h

A := f~(h);

[0, 0, 0, 1, 4]

(6)

# Construct the sequence x[i]-h[1], ...x[i]-h[J]

x_shifted := x -~ h;  # shifts x by all elements of h

[x+2, x+1, x, x-1, x-2]

(7)

# Apply f to each element of x_shifted

A_shifted := f~(x_shifted );  

A_shifted := [piecewise(x <= -2, 0, 0 < x, 0, (x+2)^2), piecewise(x <= -1, 0, 0 < x-1, 0, (x+1)^2), piecewise(x <= 0, 0, 2 < x, 0, x^2), piecewise(x <= 1, 0, 0 < x-3, 0, (x-1)^2), piecewise(x <= 2, 0, 0 < x-4, 0, (x-2)^2)]

(8)

# Evaluate the Sum in relation (3) for any value of x

C := step * simplify(add(A *~ A_shifted));   # pointwise multiplication of A by A_shifted

C := piecewise(x <= 1, 0, x <= 2, (x-1)^2, x <= 3, 5*x^2-18*x+17, x <= 4, 4*(x-2)^2, 4 < x, 0)

(9)

# For comparison

simplify(value(SelfConvolution(x)));

piecewise(x < 0, 0, x <= 2, (1/30)*x^5, x < 4, 64/5-16*x+(16/3)*x^2-(1/30)*x^5, 4 <= x, 0)

(10)

# Define a function 'Approx' from x to C

Approx := unapply(C, x):

# Evaluate 'Approx' for all values x[i] in X.
# This gives a pointwise approximation of SelfConvolution(x):

Exact    = [ seq( [x, value(SelfConvolution(x))], x in X) ];
'Approx' = [ seq( [x, Approx(x)], x in X) ];  # Not a very good approximation but nevertheless an approximation.
                                              # To enhance it decrease the step value

Exact = [[0, 0], [1, 1/30], [2, 16/15], [3, 47/10], [4, 0]]

 

Approx = [[0, 0], [1, 0], [2, 1], [3, 8], [4, 16]]

(11)


Once you have understood the principle on this small example, do it again with a smaller value of  'step', for instance

step := 0.01;

0.1e-1

(12)

h := [seq(-2..2, step)]:
X := [seq(0..4, step)]:
A := f~(h):

x_shifted := x -~ h:
A_shifted := f~(x_shifted ):

C := step * simplify(add(A *~ A_shifted)):

Approx := unapply(C, x):

Exact := unapply(simplify(value(SelfConvolution(x))), x);

plots:-display(
  plot([seq([x, Exact(x)], x in X)], legend="exact", color=blue)
  ,
  plot([seq([x, Approx(x)], x in X)], legend="approx", color=red)
)

proc (x) options operator, arrow; piecewise(x < 0, 0, x <= 2, (1/30)*x^5, x < 4, 64/5-16*x+(16/3)*x^2-(1/30)*x^5, 4 <= x, 0) end proc

 

 
 

 

Download Detailed_explanation_with_a_toy_example.mw
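For readers more comfortable with Python, the same pointwise construction can be sketched as follows (the toy f and the exact result x^5/30 on [0, 2] come from the worksheet above; the translation itself is mine):

```python
import numpy as np

def f(t):
    # Toy function from the worksheet: t^2 on (0, 2], zero elsewhere.
    return np.where((t > 0) & (t <= 2), t * t, 0.0)

step = 0.01
h = np.arange(-2.0, 2.0 + step, step)       # integration grid

def self_convolution(x):
    # Pointwise (Riemann-sum) approximation of the convolution integral:
    #   step * sum_j f(h_j) * f(x - h_j)
    return step * np.sum(f(h) * f(x - h))

# Exact value at x = 2 is 2^5/30 = 16/15, about 1.0667
print(self_convolution(2.0))
```

As in the Maple worksheet, decreasing 'step' improves the approximation, at the cost of more terms in the sum.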


"This system of nonlinear equations was proposed by an artificial intelligence named Alice." means absolutely nothing out of context.

Alice's answers are nonsensical out of context, and the last "Practical applications" item is one of the most stupid things I have ever read.
By the way Alice forgot to mention this set of equations could also have a practical application in predicting 

She should have stayed in Wonderland, where she was more brilliant.

@Andiguys 

Question_1_sand15.mw and reply_1_sand15.mw indeed give different results because they do not solve the same equations.
To be clearer: the equations you solve in your initial question and in the comment to my answer are not the same.

Pay attention to what you did and read First_and_second_problem_compared.mw carefully.

@Andiguys 

Read carefully the attached file


restart

with(Optimization):

_local(Pi);

q_relation := q = (1/2)*(Ce*tau-Ci+Pr)/Cr;

q = (1/2)*(Ce*tau-Ci+Pr)/Cr

(1)

Pr_relation := Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2;

Pr = -(1/2)*Ce*tau+(1/2)*q*t-(1/2)*Cd+(1/2)*Ci+1/2

(2)

subs(Pr_relation, q_relation);

q = (1/2)*((1/2)*Ce*tau-(1/2)*Ci+(1/2)*q*t-(1/2)*Cd+1/2)/Cr

(3)

isolate((3), q)

q = (Ce*tau-Cd-Ci+1)/(4*Cr-t)

(4)

simplify(subs((4), Pr_relation), size)

Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t)

(5)


map(simplify, solve({q_relation, Pr_relation}, {q, Pr}), size);

{Pr = ((2*Ce*tau+2*Cd-2*Ci-2)*Cr-t*(Ce*tau-Ci))/(-4*Cr+t), q = (-Ce*tau+Cd+Ci-1)/(-4*Cr+t)}

(6)


 

simplify((6)[1] - (5))

0 = 0

(7)

simplify((6)[2] - (4))

0 = 0

(8)
 

 

Download reply_1_sand15.mw

@nm 

Thanks for sharing your experience

A possible lead: the mw file I wanted to upload contains debugger outputs; if I remove them, the file can be uploaded.

For information: I get no error using Maple 2015, even for a somewhat complex display.

With_Maple2015.mw

@acer 

You're right that A^(N/2) * B^(N/2) and (A*B)^(N/2) are not always equivalent, and I had even thought of writing "Assuming that A^(N/2) * B^(N/2) = (A*B)^(N/2) holds...".
But I was also guided by the expressions of the two terms whose equality the OP wanted to prove, and without this assumption I was pretty sure the proof could not be established.
So I thought that this equivalence went without saying.

But again you're right, I should have given more detail in my answer.

@FDS 

"Is there a policy to mark your latest post as best?"
Yes, you should see a trophy at the top right of my answer: just click on it.

See you next time!

@FDS 

The updated worksheet: Stress_Strain_Curve_2.mw
And an illustration of what you can do with it, in the spirit of what you show in your pptx.

About polynomial regression... there is really no mystery behind it.
The first question is: are you familiar with "multiple" linear regression in the case of Q regressors X1, ..., XQ?
If so, "regressing" a dense polynomial of degree Q-1 in a single indeterminate Z is nothing but using "multiple" linear regression while setting Xn = Z^(n-1), n=1..Q.
To give an example, suppose you want to fit a dense quadratic polynomial a+b*Z+c*Z^2: simply introduce the auxiliary variables X1 = 1, X2 = Z, X3 = Z^2 and regress your dependent quantity no longer on a+b*Z+c*Z^2 but on a*X1+b*X2+c*X3... and that's it!

If you are not familiar with "multiple" regression I advise you to read any elementary textbook/course/MOOC... on the subject.
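As an illustration of this reduction (in Python/NumPy rather than Maple, and with made-up data), fitting a quadratic is just ordinary least squares on the powers of Z:

```python
import numpy as np

# Hypothetical data drawn from a known quadratic 2 + 3*Z + 0.5*Z^2
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 + 3.0 * z + 0.5 * z**2

# Build the regressors X1 = 1, X2 = Z, X3 = Z^2 ...
X = np.column_stack([np.ones_like(z), z, z**2])

# ... and solve the "multiple" linear regression by least squares.
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)      # recovers [2.0, 3.0, 0.5] (up to rounding)
```

The design matrix X is exactly the "auxiliary variables" trick described above; nothing specific to polynomials is involved in the solver itself.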

Beyond that an important question often appears: what is the degree of the polynomial to use in order that the fitted polynomial fulfills two contradictory requirements:

  1. Fidelity to the data: otherwise said, the fitted polynomial must be "close" to the data or, mathematically speaking, the residual sum of squares between the data and their restitution by the fitted polynomial must be small.
    Obviously, as a dense polynomial of degree D-1 depends on D parameters, a set of data of size D will be exactly fitted by this dense polynomial.
    We say in this case that the model is over-parameterized.
  2. Stability (or robustness, or generalization capability...): let us take this previous sample of size D and this same polynomial of degree D-1.
    This polynomial may have D-1 real roots and if, unhappily, some of them are located within the range of the regressor (Z), then the fitted polynomial will present undesired overshoots between consecutive values of the regressor.
    In fact it is extremely likely that the fitted polynomial will behave this way.
    If you want to avoid this phenomenon, use a polynomial of degree 0 or 1.

So, to satisfy the fidelity-to-the-data requirement you must take a large degree, close to the sample size, but if you are more concerned by "stability" or a "smooth fit", then you must take low-degree polynomials.
There exists a lot of literature about strategies which balance these two contradictory requirements. I already gave you a few Wiki references on the subject.

What I wrote in my worksheet is that each branch is a sample of size about one thousand.
So a dense polynomial of degree 5, for instance, is very far from being an over-parameterized polynomial. The balancing act I spoke about above matters only when the number of parameters of the model (not necessarily a polynomial one) is of the order of the sample size, let us say 20% or 25% of the latter.
You would be right to consider model selection (here, degree selection) if you were investigating polynomials of degree 200, for example. But that is very far from being the case.

As I wrote in this last worksheet, a more important criterion is that two polynomial models intersect within the L5 and EP5 ranges if you do not want to create unrealistic approximated cycles.

@FDS 

Feel free to tell me if this is going in the right direction:
Stress_Strain_Curve_2.mw

I understand the notion of cyclic loading-unloading, but you want to assess the "surface area" of each cycle, and so you need to define more precisely what characterizes a cycle in geometric terms: where it starts (likely the intersection point between a loading path and the next/previous unloading path) and where it ends.
So I propose my interpretation for the first cycle only... I am not going to go further unless I am on the right track.
