Dr. Robert J. Lopez, Emeritus Professor of Mathematics at the Rose-Hulman Institute of Technology in Terre Haute, Indiana, USA, is an award-winning mathematics educator and the author of several books, including Advanced Engineering Mathematics (Addison-Wesley, 2001). For over two decades, Dr. Lopez has also been a visionary figure in the introduction of Maplesoft technology into undergraduate education. Dr. Lopez earned his Ph.D. in mathematics from Purdue University, his MS from the University of Missouri - Rolla, and his BA from Marist College. He has held academic appointments at Rose-Hulman (1985-2003), Memorial University of Newfoundland (1973-1985), and the University of Nebraska - Lincoln (1970-1973). His publication and research history includes manuscripts and papers in a variety of pure and applied mathematics topics. He has received numerous awards for outstanding scholarship and teaching.

Too restrictive?...

@brian bovril The help page at ?Statistics,Regression,Solution shows that one can obtain a regression analysis for both linear and nonlinear fits. The linear-fit analysis contains more items, but some of its items also appear in the list for nonlinear fits.
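
As a concrete sketch (the data here is invented for illustration), the full analysis described on that help page can be captured through the solutionmodule output of LinearFit:

with(Statistics):
X := Vector([1, 2, 3, 4, 5]):
Y := Vector([2.1, 3.9, 6.2, 7.8, 10.1]):
sol := LinearFit([1, x], X, Y, x, output = solutionmodule):
sol:-Results();    # the regression-analysis quantities described on the help page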

Lagrange Expansion theorem...

OK, during lunch I realized what I needed. I think the Lagrange Expansion theorem applies. My first recourse for a reference is "Perturbation Techniques in Mathematics, Physics, and Engineering" by Bellman. Could probably pull it out of Wiki, but I'm addicted to print. Will have a go at it after some afternoon errands.
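
For reference, the classical statement of the theorem (Bellman's notation may differ): if w is defined implicitly by w = a + t*phi(w), then for a suitable analytic function g,

g(w) = g(a) + Sum( (t^n/n!) * (d/da)^(n-1) [ g'(a)*phi(a)^n ], n = 1 .. infinity ).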

How valid?...

Interesting comment. I can obtain that result by applying the asympt command to the equation that I ended up solving numerically. (Bring all terms to one side first.) This gives an asymptotic expansion in terms of r and s. Setting the sum of the first two terms to zero and solving for s gives the result in vv's comment. Solving for s becomes intractable if more terms are taken, so I'm left with the analytic question: how valid is it to expand first, then solve? I'll need to think more about this technique. Perhaps vv's result can be obtained in some other way?

Return of BSplineCurve now clear...

OK, you have clarified for me what BSplineCurve returns.

I then went back to example2.mw, where you obtain 44 for the integral based on a cubic spline built from 200 data points taken from the graph of xydata. Graph that cubic spline and compare it to the graph of "expand(Interpolant(p1))". I would not trust the 44 obtained by integrating this expanded interpolant.

Graph the parametric curve returned by BSplineCurve in another_way.mw. Although this curve is close to the piecewise-linear curve given by xydata, it does not appear to be defined over the whole interval [0,8]. This might account for the lower value of 34 obtained when integrating.

I think I stand by my 49.54 obtained by integrating under the piecewise-linear spline that approximates the modified data on [0,8.01].

Darkness: The BSplineCurve command returns two curves, c[1] and c[2]. This I do not understand. Their graphs are significantly different.

Some Light: By changing the last data point from [8,3] to [8.01,3], it is possible to get Maple's Spline command to work. Use it to create a degree-one spline for the data and graph it; the graph corresponds to that of xydata. The curves c[1] and c[2] sit underneath the graph of the piecewise-linear spline. The area under the spline is 49.54, whereas the areas under the BSplineCurves are on the order of 27.
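
A sketch of that computation (xydata is the data from the worksheet, with the last point changed to [8.01, 3]; it is not reproduced here):

with(CurveFitting):
s := Spline(xydata, x, degree = 1):   # piecewise-linear interpolant through the modified data
area := int(s, x = 0 .. 8.01);        # 49.54 for the data in question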

Interesting approach...

Markiyan, the BSplineCurve command returns a strange data structure, but op(1,c1) is a piecewise function that can be integrated.

The plot command needs set braces or list brackets around its two arguments because it is being asked to graph both the data (xydata) and the spline.

Your approach of extracting the data points from the graph is very clever. Maple handled a spline with 200 nodes. Perhaps it could handle a spline with all 2000?

Is the data uniformly spaced?...

If the data is uniformly spaced, it would be simple to write the sum expressing, say, the trapezoid rule or Simpson's rule. Otherwise, you have to fit some continuous function to the data, whether piecewise-linear, a spline, or something else. I would hesitate to build a spline with 2000 nodes. Perhaps break the integration into subintervals, with splines through the points corresponding to each subinterval.
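
For instance, a direct trapezoid-rule sum handles uniform and nonuniform spacing alike (the data here is invented for illustration):

xy := [[0, 1], [1, 3], [2.5, 2], [4, 5]]:    # hypothetical data points
T := add((xy[i+1][1] - xy[i][1])*(xy[i][2] + xy[i+1][2])/2, i = 1 .. nops(xy) - 1);
# T = 11.0 for this data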

No magic bullet here, so you will have to do some work to get the result you want.

Wow! Great!...

@Kitonum  The elegant simplicity of this latest solution is beautiful. Thanks for pouring your insights into this forum.

We all learn stuff here...

@DJJerome1976 I am as delighted with Preben's solution using freeze/thaw as you are! It allows the names u and v to be manipulated as names, but the vectors to be manipulated as objects. I'll be adding this trick to my Red Book of Maple Magic.
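
For anyone meeting freeze/thaw for the first time, here is a tiny generic illustration (not Preben's actual code):

p := freeze(sin(x)):       # p is an inert name standing for sin(x)
q := expand((p + 1)^2):    # manipulate p as if it were an ordinary name
thaw(q);                   # sin(x)^2 + 2*sin(x) + 1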

By the way, in Kitonum's solution using Equate, the sequence of two Equate commands results in a sequence of lists. In the Context Menu, there is a Join option that would join the contents of the two lists into one list. The code behind this Join option is a map of op, which Kitonum accomplishes with op~, again demonstrating his mastery of Maple syntax.

Equate vs. =~...

Just as I was about to post an answer to this question, Kitonum's equivalent solution appeared. Our solutions agree on the need to restructure the calculation because Maple cannot solve vector equations directly; such equations must be reduced to equations between components.

Kitonum uses the efficient =~ to equate components of vectors, a construct that entered Maple after the Equate command, which was designed to do the same thing. "Equate" replaced the "equate" command in the old "student" package.

Being old-school, I would prefer the explicit Equate command. Kitonum's coding abilities far exceed mine, and his use of constructs like =~ reflects his facility with Maple syntax.
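
A quick comparison of the two constructs on a small example:

u := <1, 2>:  v := <a, b>:
Equate(u, v);    # [1 = a, 2 = b], a list of scalar equations
u =~ v;          # the same equations, returned elementwise in a Vector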

It does not prevent solving for y...

@vv Just tried your suggestion, but I get the same result with and without the option singsol=all. The "implicit" option prevents Maple from solving for y. It's the solving for y that causes Maple to return the one expression that contains both solutions. When I solved the equation by hand, I noticed that I obtained sqrt(y) after the integration. So, if Maple is returning y=..., I wanted a way to see the step before. Hence, I thought of "implicit."

Mathematica's two solutions are equivalent...

Maple's dsolve command gives the singular solution y=0 and a solution that can be put into the form (1/4)*(x-a)^2. Depending on whether the constant of integration "a" is positive or negative, this expression contains "both solutions" returned by Mathematica.

If the equation is solved "by hand" one gets 2*sqrt(y)=(c +/- x), so Maple is solving for y.

If the option "implicit" is included in the dsolve command, then all three solutions are returned: the singular solution y=0, and the two solutions x-2*sqrt(y)-a=0 and x+2*sqrt(y)-a=0.
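
The ODE itself is not quoted in this excerpt; the behavior described is reproduced by (y')^2 = y, an assumed stand-in consistent with all the solutions mentioned above:

ode := diff(y(x), x)^2 = y(x):
dsolve(ode);                    # singular solution y(x) = 0 plus a quadratic family
dsolve(ode, y(x), implicit);    # y(x) = 0 together with both branches x -/+ 2*sqrt(y(x)) - _C1 = 0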

a[2]/aa[2] -> 1...

Using Kitonum's a[2] and aa[2], I find that simplify(a[2]/aa[2]) returns 1. However, simplifying the difference does not produce zero.

Student MultivariateCalculus Plane command...

The Plane command in the Student MultivariateCalculus package will generate the representation of a plane from a list of three points it contains. The return has already had the equivalent of the primparts operator applied. It was in an attempt to understand just what the primparts command did that I spent time looking at these calculations. I ended up with the following code to generate all the planes induced by the given list L.

with(Student:-MultivariateCalculus):
for k from 1 to nops(L) do
    temp := GetRepresentation(Plane(L[k][]));
    Temp := lhs(temp) - rhs(temp);
    print(Temp*signum(lcoeff(Temp)) = 0);
end do:

The Plane command creates a "plane object," which is, I believe, a module containing all the information about the plane. To see an equation for the plane, apply the GetRepresentation command. This returns an equation in the form a x+...=d, so some jiggering is needed to move everything to the left. Then, multiplication by the signum of the lead coefficient puts the equation in the desired form (as suggested by vv). Kitonum suggests a different strategy to make the lead coefficient positive.

Printing every equation on a separate line made it easier to compare equations. I noticed that every equation contained the variable x, so I added [[0,0,0],[1,0,0],[0,0,1]] to L and was happy to see that my code returned y=0 for that case.

I find that unless I explore the various bits of code provided in the responses to questions on MaplePrimes, I really don't learn much just by reading through the replies.

permute?...

In an attempt to understand both the original question and Kitonum's procedure that answered it, I tinkered around with the notion of permutations. I then wrote the following lines of code.

q := proc(n, k)
    combinat:-permute([0 $ (n - k), 1 $ k], n);
end proc:

and

for k from 0 to 5 do
    q(5, k);
end do;

I got the same results as Kitonum. Of course, my code has no error checks, etc., so it is rather simplistic, but I think it captures the task and helped me understand what was initially asked. Kitonum writes wonderful and sophisticated code, but personally, I find it very difficult to extract mathematical concepts from slick computer code. I guess that's because I never studied coding formally, but came to it the way an amateur comes to woodworking. Something gets built, but there's a lot of noise and sawdust as a by-product.
