
Dr. Robert J. Lopez, Emeritus Professor of Mathematics at the Rose-Hulman Institute of Technology in Terre Haute, Indiana, USA, is an award-winning educator in mathematics and is the author of several books, including Advanced Engineering Mathematics (Addison-Wesley, 2001). For over two decades, Dr. Lopez has also been a visionary figure in the introduction of Maplesoft technology into undergraduate education. Dr. Lopez earned his Ph.D. in mathematics from Purdue University, his MS from the University of Missouri - Rolla, and his BA from Marist College. He has held academic appointments at Rose-Hulman (1985-2003), Memorial University of Newfoundland (1973-1985), and the University of Nebraska - Lincoln (1970-1973). His publication and research history includes manuscripts and papers in a variety of pure and applied mathematics topics. He has received numerous awards for outstanding scholarship and teaching.

MaplePrimes Activity

These are replies submitted by rlopez

It's certainly nice to be appreciated, and the webmaster who spotted Bob Jantzen's comment sent it to me with the title "Someone really liked this post!"

What the general readership might not know is that Bob Jantzen and I have communicated on a number of occasions over the years. He was one of the first instructors to take syntax-free computing in Maple seriously, and has posted a lot of his research and pedagogy on his web site. I've rummaged there more than once and find that the extent of Bob's interests is bigger than my capacity to keep up.

We never got a chance to say "hello" face-to-face, the closest being the time I gave a talk at Drexel University in Philadelphia. Bob took the train down from Villanova and was in the audience. He tells me he tried to introduce himself after the talk, but the commotion around the podium did not subside in time for him to greet me before he had to catch his train back.

Bob has been a good colleague, and I really appreciate his positive comments on this article and his support in general. I hope he continues to enjoy his stay in Italy, strike or no strike.

RJL Maplesoft

Yes, that avoids the need for a limit because it's the first step in the derivation of the formula for the tangent of a sum. The next step in that derivation is division through by cos(x)cos(y). Since cos(y) = cos(Pi/2) = 0, this division step is why a limit must be taken when the tangent formula is used.

RJL Maplesoft

The purpose wasn't to verify that the slopes of perpendicular lines are negative reciprocals, but to investigate how a novice might be led to that discovery, using only elementary tools.

Of course, Maple evaluates tan(x+Pi/2) directly. But how can this evaluation be reproduced stepwise? It requires the formula for the tangent of a sum. However, that will result in tan(Pi/2) appearing in both the numerator and denominator. Simple evaluation does not work. Hence, right from the start, the limit becomes the appropriate tool.
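The same limit can also be checked numerically outside of Maple. The Python sketch below (the test angle x = 0.7 is an arbitrary choice) evaluates the tangent-of-a-sum formula as y approaches Pi/2 and compares the result with -1/tan(x), the negative reciprocal of the slope:

```python
import math

def tan_sum(x, y):
    """Tangent-of-a-sum formula: tan(x+y) = (tan x + tan y)/(1 - tan x tan y)."""
    return (math.tan(x) + math.tan(y)) / (1 - math.tan(x) * math.tan(y))

x = 0.7  # arbitrary test angle
# The formula is indeterminate at y = Pi/2 itself, so let y approach Pi/2.
for h in (1e-3, 1e-5, 1e-7):
    print(tan_sum(x, math.pi/2 - h))

print(-1 / math.tan(x))  # the limiting value: tan(x + Pi/2) = -cot(x)
```

The printed values settle onto -cot(x), which is exactly the stepwise conclusion the limit is meant to deliver.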

OK, knowing that a stepwise solution is going to require the notion of a limit, how well does Maple handle the limit? At top level, Maple's limit command clearly gets the right result, but the Limit Methods tutor hangs up on this because of the way it is programmed.

All of these issues become relevant in the preparation of a lesson for the intended audience, namely, students first learning that the slopes of perpendicular lines are negative reciprocals. That was the point of the discussion - to see if a "derivation" of the result could be constructed within Maple, and if so, how smoothly it would go.

RJL Maplesoft

 The objective function f2 is the square of the distance from the center of the circle to a data point. The quantity sigma2 is the sum of squares of the deviations sqrt(f2)-r. It is sigma2 that is measuring the deviations from the circle. The function f2 is an intermediate step in this calculation.

Consider the problem in the xy-plane. The circle would be (x-h)^2+(y-k)^2=r^2. For the data point (u,v), the comparable deviation would be of the form (u-h)^2+(v-k)^2. The geometry shows that this is the square of the distance from the center to the data point. Comparing its square root to r is then a measure of how close the point is to the circle.
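A minimal numeric sketch of this 2D fit (in Python rather than Maple; the sample data, step size, and iteration count are illustrative assumptions) minimizes sigma2 directly by gradient descent on (h, k, r):

```python
import math, random

def fit_circle(points, iters=20000, lr=0.01):
    """Minimize sigma2 = sum( (sqrt((u-h)^2 + (v-k)^2) - r)^2 )
    over the center (h, k) and radius r by plain gradient descent."""
    # Start from the centroid and the mean distance to it.
    h = sum(u for u, _ in points) / len(points)
    k = sum(v for _, v in points) / len(points)
    r = sum(math.hypot(u - h, v - k) for u, v in points) / len(points)
    for _ in range(iters):
        gh = gk = gr = 0.0
        for u, v in points:
            d = math.hypot(u - h, v - k)  # sqrt(f2): distance to the center
            e = d - r                     # deviation from the circle
            gh += 2 * e * (h - u) / d
            gk += 2 * e * (k - v) / d
            gr += -2 * e
        h -= lr * gh; k -= lr * gk; r -= lr * gr
    return h, k, r

# Noisy samples of the circle centered at (1, 2) with radius 3
random.seed(1)
pts = [(1 + 3*math.cos(t) + random.gauss(0, 0.01),
        2 + 3*math.sin(t) + random.gauss(0, 0.01))
       for t in [2*math.pi*i/12 for i in range(12)]]
print(fit_circle(pts))  # close to (1, 2, 3)
```

Note that the residual e = sqrt(f2) - r is the deviation from the circle itself, not from its squared form, which is the distinction the reply is drawing.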

For the 3D case, sigma1 is used to find the best-fit plane. Then, sigma2 is used to find the best-fit circle, in analogy with the 2D case. Fitting the points to a sphere, as was done originally, is not as effective as the process used in the updated version of this calculation, as the graphs in Figures 1 and 3 show.

RJL Maplesoft

At the point (x, x^2) on the graph of the parabola y = x^2, the equation of the normal line is v = -(1/(2*x))*(u - x) + x^2, where u and v are the coordinates of points (u,v) along the line. If points along the normal line are to be expressed as (x,y), then points along the parabola have to be parametrized differently. For example, along the parabola the generic point could be (a, a^2), in which case the equation of the normal line through this point would be y = -(1/(2*a))*(x - a) + a^2. In either representation of the line, it should be clear that the point-slope form was used, with the slope of the normal being the negative reciprocal of the slope along the parabola (e.g., -1/(2*a) versus 2*a).
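The negative-reciprocal relationship is easy to verify numerically. Here is a small Python sketch (the point a = 1.5 is an arbitrary choice) that builds the normal line in point-slope form and checks both the perpendicularity and the point of tangency:

```python
def parabola_slope(a):
    """Slope of y = x^2 at x = a (the derivative 2x evaluated at a)."""
    return 2 * a

def normal_line(a):
    """Normal line at (a, a^2) in point-slope form, as a function u -> v,
    with slope -1/(2a), the negative reciprocal of 2a."""
    m = -1 / parabola_slope(a)
    return lambda u: m * (u - a) + a * a

a = 1.5
m_tan = parabola_slope(a)   # slope of the parabola at x = a
m_norm = -1 / m_tan         # slope of the normal
print(m_tan * m_norm)       # -1.0: the two lines are perpendicular
line = normal_line(a)
print(line(a))              # 2.25 = a^2: the normal passes through (a, a^2)
```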
Whether the question is "Is Pi and 3.14 the same?" or "To what extent is the residual of the approximate solution of the PDE zero?", I think the mathematical issues are the same. Unless one poses a definition of "the same" (or, equivalently, of "zero"), the questions are not well defined. It's like asking if two vectors are the same without a definition of the norm being used to measure the difference between them.

Jim's "easy" question yields to an evalf, which shows the extent to which 3.14 approximates Pi. Jim's PDE example is more difficult because it requires finding the maximum of a function of two variables. The residual for the approximate solution is a function of two variables, and any viable technique for finding the maximum of its absolute value would suffice.

For the function in this example, the greatest difficulty seems to be the division by r. However, each such divisor sits under BesselJ(1, lambda*r), and as r goes to zero, so does BesselJ(1, lambda*r). The issue is whether r = 0 is a removable singularity or not. It turns out that it is, and there is a finite upper bound to the maximum of the residual.

But if we change the norm (say, to the rms norm), then the maximum of the residual would most likely change. So the question as to whether the residual of the approximate solution is sufficiently close to zero to warrant approval must be linked to a definition of the norm and to the measure of tolerance that will be accepted as sufficient.

Using the infinity norm, I determined that the max was about 1.27 * 10^(-9), and that it occurred in the limit as both r and t approached zero.

RJL Maplesoft
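The removable-singularity claim can be illustrated numerically. The Python sketch below (the value lambda = 2 is an arbitrary choice for illustration) builds BesselJ(1, x) from its power series and watches J1(lambda*r)/r approach the finite value lambda/2 as r shrinks:

```python
import math

def J1(x, terms=25):
    """Bessel function of the first kind, order 1, from its power series:
    J1(x) = sum_{m>=0} (-1)^m / (m! (m+1)!) * (x/2)^(2m+1)."""
    s = 0.0
    for m in range(terms):
        s += (-1)**m / (math.factorial(m) * math.factorial(m + 1)) * (x/2)**(2*m + 1)
    return s

# Since J1(x) ~ x/2 near x = 0, the quotient J1(lam*r)/r tends to lam/2
# as r -> 0, so r = 0 is a removable singularity of the quotient.
lam = 2.0
for r in (1e-1, 1e-3, 1e-6):
    print(J1(lam * r) / r)   # tends to lam/2 = 1.0
```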