pagan

5147 Reputation

23 Badges

17 years, 126 days

 

 

"A map that tried to pin down a sheep trail was just credible,

 but it was an optimistic map that tried to fix the path made by the wind,

 or a path made across the grass by the shadow of flying birds."

                                                                 - _A Walk through H_, Peter Greenaway

 

MaplePrimes Activity


These are answers submitted by pagan

You wrote "equations", but here's an example with two expressions f and g. If you really have equations, each with an equals-sign and a right- and left-hand-side, then you can form expressions from them using lhs(eq)-rhs(eq).

Basically, you can find intersections by finding where their difference (f-g) is zero.

f:=sin(x+5):
g:=1/45*((x-2)^3-10*(x-2)-1):
xpts:=Student:-Calculus1:-Roots(f-g,x=-10..10,numeric):
seq([t,eval(f,x=t)],t in xpts);
plot([f,g],x=-10..10,view=[-10..10,-1..1]);

convert(y,sin)

Do you mean that you want the RGB values (say) for a given i,j pixel, so as to check whether it is the zero triple?

C[i,j,1..3]; # 1D Array of all three color layer values at i,j

LinearAlgebra:-Norm(<C[i,j,1..3]>);

You could use C[i,j] instead of C[i,j,1..3]. But the key is to avoid testing with the equals-sign against a list or zero-Vector, since such an equality test only checks whether the two objects are identical, not whether their entries match.

Instead, you could test against a triple-zero-Array using ArrayTools:-IsEqual, or against a triple-zero-Vector using LinearAlgebra:-Equal, or against scalar zero after applying LinearAlgebra:-Norm. (These were all suggested recently in another thread.)
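For example, a minimal sketch of those three tests (assuming C is an m x n x 3 image Array with float[8] datatype, and i and j are fixed integer indices):

ZV:=Vector(3,datatype=float[8]): # zero-Vector
LinearAlgebra:-Equal(<C[i,j,1..3]>,ZV);
ArrayTools:-IsEqual(C[i,j,1..3],Array(1..3,datatype=float[8]));
evalb(LinearAlgebra:-Norm(<C[i,j,1..3]>)=0.);

Each of these returns true exactly when all three color values at i,j are zero.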

Presumably, by "max area" you mean a region which you want fitted with a concave approximation or other smoothed cap. What is the criterion for deciding whether a pair of local maxima lie in distinct max areas, or are to be taken as spikes within the same max area? Is it based on closeness of the x values of the maxima, on implications for the derivatives of the smoothing cap, or something else? Without such a criterion, I don't think your question is unambiguous.

You can evaluate an expression R at a value for an indeterminate x as follows.

eval(R,x=1.5);
seq(eval(R,x=X),X in [0.5,1.0,1.5,2.0]);
seq(eval(R,x=0.5*i),i=1..4);

Or you can construct a procedure from expression R, and invoke it with some value.

fR:=unapply(R,x);
fR(1.5);

You could also assign a numeric value to x, after which your R would automatically evaluate to a numeric value. But then you wouldn't be able to use x symbolically (as a name) again until you unassigned it. So that is much less practical than the first two methods.
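For completeness, that third method looks like this (R being any expression in x):

x:=1.5:
R; # now evaluates numerically
x:='x': # unassign, so that x is a symbolic name again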

> a:=[1,2,3,4,5]:
> b:=[5,4,3,2,1,0]:
> c:=[1,2]:

> zip(`+`,zip(`+`,a,b,0),c,0);
                              [7, 8, 6, 6, 6, 0]
 
> foldl((x,y)->zip(`+`,x,y,0),a,b,c);
                              [7, 8, 6, 6, 6, 0]

There is a 2D Math input problem in your Document.

Where it seems to contain the single call simplify((5)), there are really two separate inputs: one is "simplify;" and the other is "((5))". They merely appear to be a single command call.

If you re-enter (re-type) that call in a new execution block, it does simplify to what you wanted, in Maple 13.01 at least.

The whole approach of using lists for what appears to be a task suited for genuinely mutable data structures is dubious.

A Maple list is not really a mutable data structure. Sure, you can assign to an entry of a list with fewer than a hundred elements, or subsop into even longer lists. But both of those actions actually create wholly new structures, which means more collectible garbage. (It's implemented so as to be convenient, with the replacement by the new object being done nicely and quietly. But it still produces collectible garbage unnecessarily.) That extra, avoidable memory-management cost usually makes the method less efficient than necessary.

> restart:
> L:=[1,2,3,4];
                               L := [1, 2, 3, 4]
 
> addressof(L);
                                    7505560
 
> L[3]:=15:
> L;
                                 [1, 2, 15, 4]
 
> addressof(L);
                                    7505656

If you intend to do a great deal of such entry replacement, and if you want it efficient, then you should likely be using an Array (rtable) or a table instead of a list.
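For comparison, the analogous replacement in an Array happens in-place, so (unlike the list example above) the address does not change:

A:=Array([1,2,3,4]):
addressof(A);
A[3]:=15: # modifies A in-place; no new object is created
addressof(A); # same address as before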

I suggest using the FourierTransform and InverseFourierTransform routines from the DiscreteTransforms package. For float[8] and complex[8] datatype Arrays they will do a large portion of their work in fast external (compiled C) routines. They do not demand a power of 2 as the length of the input.

In contrast, the deprecated FFT and iFFT routines require an input whose length is an exact nonnegative integer power of 2. And they do not even run internally under the (somewhat fast, but still slower than compiled C) floating-point evalhf interpreter -- you have to wrap their calls in evalhf by hand.
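As a rough sketch of the suggested routines (the length 5 here is deliberately not a power of 2):

with(DiscreteTransforms):
V:=Vector([1.,2.,3.,4.,5.],datatype=complex[8]):
F:=FourierTransform(V):
InverseFourierTransform(F); # recovers V, up to roundoff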

Some aspects of an rtable (Array, Matrix, or Vector) can be changed in-place without creating a new object. And some other aspects can only be changed by creating a new object.

For example

> T:=Array(1..2,[-1.1,2.3],datatype=float[8]);
              T := [-1.10000000000000009, 2.29999999999999982]

> rtable_options(T);
        datatype = float[8], subtype = Array,
        storage = rectangular, order = Fortran_order

> # can only change datatype by creating a new Array
> newT:=Array(T,datatype=anything):

> rtable_options(newT);
        datatype = anything, subtype = Array,
        storage = rectangular, order = Fortran_order

> # can change the order, and the subtype for this example, in-place
> rtable_options(T,order=C_order,subtype=Vector[row]);

> rtable_options(T);
     datatype = float[8], subtype = Vector[row],
     storage = rectangular, order = C_order

In particular, the subtype can be changed in-place for 1D and 2D Arrays provided that the indices start from 1. The bounds of the indices can be changed in-place, using the rtable_redim() command. And so rtable_redim() also relates to changing the subtype in-place.

The `readonly` attribute can always be toggled on, but never off, in-place. The order can be toggled in-place between C_order and Fortran_order.

All other aspects such as indexing function, storage, and datatype cannot be changed in-place. Altering them requires creation of a new object (with the new, desired option). See the simple example above.
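For example, a small sketch of rtable_redim changing the index bounds in-place:

A:=Array(1..4,[1,2,3,4]):
rtable_redim(A,0..3): # shift the bounds in-place
A[0], A[3]; # the same data, now indexed from 0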

I don't know what you are referring to when you write of a properties() function. I can note, however, that quite a few such manipulations of rtables are available from the right-click context-sensitive menus.

It means `on condition that`, just as it often means in English.

Mostly, I think you have misunderstood.

Firstly, Normal(0,1), with mean 0 and standard deviation 1, does not imply positive values. On the contrary, it is a bell-shaped distribution centred around 0, so sampling from it will produce both positive and negative values.

Secondly, given a continuous distribution over a domain of nonzero measure (width, say), the probability of a sampled point being either end-point (or, any given point) is essentially zero. (Ok, it's not exactly zero, since Statistics:-Sample takes samples from the computer's range of floating-point numbers of which there are only finitely many representable at a given working precision. But it's close enough to zero so that the probability of the end-point being in a sample with finitely many members is negligible. In practice, it doesn't really matter whether you want the range to be open or closed.)

Thirdly, the range of possible values of the Normal distribution is unlimited (or, only limited by what can be represented as a float). There is no finite domain for that distribution in its usual sense. You may have been wanting a sample from the distribution Uniform(0,1).

restart:
X:=Statistics:-RandomVariable('Uniform'(0,1)):
Statistics:-Sample(X,5);

If you really wanted a sample of a Normal random variable, constrained by finite bounds, please say so.

Did you mean this, with z instead of y?

> f1 := (x, z, l) -> x*z-l*x+2*z*l:
> f2 := (x, z, l) -> x^2+4*l*x:
> f3 := (x, z, l) -> -x^2+4*x*z-12:

> solve({f1(x,z,l),f2(x,z,l),f3(x,z,l)},Explicit);
         /      1                   \    /    1                   \ 
        { l = - - I, x = 2 I, z = -I }, { l = - I, x = -2 I, z = I }
         \      2                   /    \    2                   / 

In Maple, the default is for capital I to denote the square root of -1.

It might be easier to use expressions, without first creating operators.

> f1:=x*z-l*x+2*z*l:
> f2:=x^2+4*l*x:
> f3:=-x^2+4*x*z-12:

> solve({f1,f2,f3},Explicit);
         /      1                   \    /    1                   \ 
        { l = - - I, x = 2 I, z = -I }, { l = - I, x = -2 I, z = I }
         \      2                   /    \    2                   / 

The `copy` command will create N with the same entries as M without being the same identical object.

But you probably would be better off not doing that. You don't want to create and recreate a new Matrix for N at each iteration since that would be producing lots of collectible garbage. It does seem that you knew this, since you created two Matrices for M and N before the loop started. So it seems that you intended simply to copy the contents of M into N. You might be able to use ArrayTools:-Copy to do that efficiently.

Your test of M=N is also not what you likely intended since that will check merely that M and N are the same identical object. Try LinearAlgebra:-Norm(M-N), possibly testing whether that is small enough (or exactly zero, depending on your domain). You could use LinearAlgebra:-Equal, or the undocumented builtin EqualEntries, to compare M and N if the entries are exact, but I like using Norm of the difference when testing that float entries are "close enough".
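A minimal sketch of both suggestions (assuming M and N are same-size Matrices with float[8] datatype, so Copy can use fast external routines):

M:=Matrix([[1.,2.],[3.,4.]],datatype=float[8]):
N:=Matrix(2,2,datatype=float[8]):
ArrayTools:-Copy(M,N): # copy entries of M into the existing N
LinearAlgebra:-Norm(M-N); # 0. when the entries agree exactly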

What you meant by "modify M based on information in z" is intriguing. I can imagine schemes where you have little idea until the update is mostly done whether the final result will differ from N (whose entries equal those of the original M). But, of course, if  the update involves updating each entry "separately" then you can test each such updated entry as it's computed. That way, you could bail out early and save effort. You've very likely already considered all that.

One easy way is to use the Matrix/Vector/Array indexing syntax with square brackets. The LinearAlgebra package is not required for that.

> M:=Matrix([[1,0,0,1],[0,1,0,1],[1,0,1,0],[1,0,0,1]]):
 
> M[1,1..4];
                                 [1, 0, 0, 1]
 
> M[1,1..-1];
                                 [1, 0, 0, 1]

> M[1]; # this last one with Maple 12 or later
                                 [1, 0, 0, 1]