acer


MaplePrimes Activity


These are replies submitted by acer

What you need to know here is what is meant by the average value of a continuous function over a range.

For a discrete problem (e.g. the heights of a finite number of people in a class) the average is usually taken to be the sum of the heights divided by the number of people.

So, for a continuous range of values (x values) how do you "add up" all the values? Isn't it by finding the area under the curve? And how do you "divide by the number of values"? Isn't it by dividing by the length of the range?
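
For instance, here is a minimal sketch in Maple (the particular function and range are just made up for illustration, and are not from your assignment):

f := x -> x^2:
a, b := 0, 3:
avg := int(f(x), x = a .. b)/(b - a);   # area under the curve divided by the length of the range, here 9/3 = 3

So the average value is the height of the horizontal line that has the same area over the range as f does.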

acer

You don't need `maximize` or `minimize` in order to find the values of x where f becomes flat (i.e. where the slope of f is zero).

> f := cos((1+sin(x))^(1/2));
                                                1/2
                           f := cos((1 + sin(x))   )

> df := diff(f,x);
                                               1/2
                               sin((1 + sin(x))   ) cos(x)
                    df := -1/2 ---------------------------
                                                 1/2
                                     (1 + sin(x))

Look: df can only be zero when one of the two factors in the numerator of df is zero, that is, only when either cos(x) is zero or sin((1+sin(x))^(1/2)) is zero. And any such special values must lie within 0..5, which is the range that you were given.

So, what value in 0..5 makes cos(x) zero?

Now look: sin(0) is 0, yes? So, if (1+sin(x))^(1/2) were zero then sin((1+sin(x))^(1/2)) would also be zero. And if 1+sin(x) were zero then (1+sin(x))^(1/2) would be zero. So what value of x in 0..5 makes 1+sin(x) zero? What value of x in 0..5 makes sin(x) equal to -1?

But watch out! When (1+sin(x))^(1/2) is zero then the denominator of df is also zero, and you cannot simply evaluate 0/0. So check the limit, and the graph.

> eval(df,x=3*Pi/2);
Error, numeric exception: division by zero
> limit(df,x=3*Pi/2);
                                       0

plot( f, x=0..5 );
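
As a supplementary check (not something the question demands), you can confirm those special values numerically and see what f is at each of them:

fsolve( cos(x) = 0, x = 1 .. 2 );        # 1.570796327, i.e. Pi/2
fsolve( 1 + sin(x) = 0, x = 4 .. 5 );    # 4.712388980, i.e. 3*Pi/2
eval( f, x = Pi/2 ), eval( f, x = 3*Pi/2 );   # cos(sqrt(2)) and 1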

You see, you don't have to resort to maximize() and minimize(), though for comparison they would be used like this:

maximize( f, x=0..5, location );
minimize( f, x=0..5, location );

acer

The only thing I had to use was a single CrossProduct call (once normals S1 and S2 are obtained) to get the Vector that is perpendicular to both normals. That's simpler than two Nullspace calls and an IntersectionBasis call. And it's simpler than using `solve` as above, because the CrossProduct technique is quite understandable -- which is very likely why the hint is as it is.

Those DotProduct calls that I included, as checks illustrating that the result is perpendicular to both normals, are just that: checks. They are not necessary for getting the result. One should already know that they will come out OK. But this student is learning.

The (instructor's?) hint was to use the fact that the resulting line lies in both planes, and hence must be perpendicular to both normals. So CrossProduct is the (standard) simple way to get its direction.

Indeed, CrossProduct, DotProduct, and the equation forms for planes and lines are all one needs to solve most problems like this in such a class. (That's also why I put in those checks -- to help drive that point home.) Getting familiar with those two routines, and the ramifications of their results, is usually what this level of linear algebra instruction is all about. Learn how to solve one problem like this by hand and one has a head start on the others.

But one wants to get students to think about why it works, and not just put down,

r0 + t*LinearAlgebra[CrossProduct](S1, S2);
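
For example, here is a rough sketch of the whole computation; the two planes (and hence S1, S2, and r0) are made up for illustration, not taken from the original Question:

with(LinearAlgebra):
# illustrative planes:  x + 2*y - z = 4   and   2*x - y + 3*z = 1
S1 := <1, 2, -1>:                        # normal of the first plane
S2 := <2, -1, 3>:                        # normal of the second plane
d := CrossProduct(S1, S2);               # direction of the line of intersection
DotProduct(d, S1), DotProduct(d, S2);    # both 0, so d is perpendicular to each normal
# a point r0 on both planes (take z = 0 and solve the two plane equations)
solve({x + 2*y = 4, 2*x - y = 1}, {x, y});   # x = 6/5, y = 7/5
r0 := <6/5, 7/5, 0>:
r0 + t*d;                                # parametric form of the line of intersection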

I just wish that this were the sort of content in more of the help Tasks. Too many of the Tasks are simple things (better) covered by some routine's single help-page.

acer

It seems likely that you are supposed to give justifications about the maximum and minimum in terms of calculus. So just getting the answer directly from Maple's `maximize` routine won't help with the explanation.

If the maximum and minimum do not occur at the boundary of the region, then what happens to the derivative of f at the points at which f is maximum and minimum?
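
For instance (with a made-up two-variable g, since the actual f isn't reproduced here), at an interior maximum or minimum the first derivatives must vanish:

g := 4 - x^2 - y^2:                                # illustrative function
solve({diff(g, x) = 0, diff(g, y) = 0}, {x, y});   # {x = 0, y = 0}
eval(g, {x = 0, y = 0});                           # 4, the interior maximum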

acer

Yes, I had considered this frontend use, but it wasn't clear to me what the final goal was. That is to say, what would be done with the result. I guessed the "wrong" way, and changed the Dirac(v) term so that an example might appear useful. The OP's followup question made it clearer, so Joe's frontend suggestion is better.

acer

> m := matrix([[1,"t"],[3.4,17]]);
                                    [ 1     "t"]
                               m := [          ]
                                    [3.4    17 ]
 
> writedata(terminal,m,string,
>           proc(f,x::algebraic) fprintf(f,`%a`,x) end proc);
1       t
3.4     17
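
(The procedure supplied as the fourth argument is called for any entry which does not match the given format -- here the numeric entries, since the format is string -- and the %a format prints an arbitrary Maple expression.)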

acer

My Maple 11 is installed at /usr/local/maple11.02/ and its JRE is at  /usr/local/maple11.02/jre.X86_64_LINUX/ .

I also have a private JRE at ~/jre1.5.0_11/ . So you could try renaming (and hence moving aside) your <MAPLE>/jre.X86_64_LINUX/ and then creating in its place a symlink to the system JRE.

For example, using my locations,

cd /usr/local/maple11.02
mv jre.X86_64_LINUX jre.X86_64_LINUX.orig
ln -s ~/jre1.5.0_11 jre.X86_64_LINUX

You would naturally replace ~/jre1.5.0_11 above with the location of the Arch Linux JRE, and also replace /usr/local/maple11.02 with the location where your Maple 11 is installed.

acer

duplicate, please ignore

Investigation of the code called by Statistics:-ScatterPlot reveals that the xerrors and yerrors optional arguments are not taken into account when calling NonlinearFit (i.e., when the fit option is also passed to ScatterPlot).

This can be confirmed by stepping through the computation in Maple's debugger. The relevant routines may be instrumented with these preliminary commands:

kernelopts(opaquemodules=false):
stopat(Statistics:-Visualization:-ScatterPlot:-BuildPlotStruct);
stopat(Statistics:-Visualization:-ScatterPlot:-BuildPlotStructTab[default]);
stopat(Statistics:-Visualization:-ScatterPlot:-BuildPlotStructTab[':-errors']);
stopat(Statistics:-Visualization:-ScatterPlot:-BuildPlotStructTab[':-fit']);

In the case where yerrors is supplied and xerrors is not (error in the dependent-variable data only), weighted least squares might be tried. See here and here. The second of those links may show the way to compute a set of weights which could be passed to Statistics:-NonlinearFit. Doing this by hand (since ScatterPlot doesn't do it) might make a nice blog post.

But the definition of what yerrors is supposed to represent could be in question. Right now it is used by ScatterPlot to produce plotted lines through the data points which indicate the possible spread at each plotted data point. But if instead it were taken to indicate the variances of each (presumably uncorrelated) measurement then appropriate least-squares weights might be computed. Assuming that the supplied data points in argument Y represented the means, and assuming a reasonable distribution, then with this new definition of yerrors the spread could still be displayed (say, as a wide, specific confidence interval). It would get more complicated if the measurements' errors were correlated. Even in the uncorrelated case it would have to be checked that NonlinearFit's handling of its weights parameter conforms to the weight appearing in the normal equations in the reference link. (If that's not true then there's not much sense in computing weights from the variance.) I haven't investigated that.
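
As a rough sketch of that by-hand idea (the model, data, and error values below are purely illustrative, and yerrors is here being interpreted as per-point standard deviations):

with(Statistics):
X := Vector([1, 2, 3, 4, 5], datatype = float):
Y := Vector([2.1, 3.9, 9.2, 15.8, 25.3], datatype = float):
yerrs := Vector([0.2, 0.3, 0.5, 0.4, 0.6], datatype = float):
W := map(s -> 1/s^2, yerrs):                 # weights taken as 1/variance
NonlinearFit(a*x^b, X, Y, x, weights = W, initialvalues = [a = 1, b = 2]);

Whether those weights mean the same thing as the weight in the normal equations of the reference is exactly the point that would still need checking.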

If there is error (from measurement, say) in both the dependent and independent variables' data then a total least squares approach may be useful. That naturally covers the case where both xerrors and yerrors are present.

Getting one's hands on the actual parameter values computed for the nonlinear fit as done by ScatterPlot would be an enhancement. (Even a raised infolevel doesn't show them.) Having that enhancement when xerrors and yerrors are supplied would be nicer still. And having NonlinearFit itself also accept (co-)variance information for its data -- in lieu of weights -- would be better yet. These are Statistics routines, after all.

acer
