acer

These are answers submitted by acer

I'm not sure (yet) whether this is the full solution. I didn't have the patience to see whether solve({fff},AllSolutions) would ever return a result, as it took so long.

> solve(simplify(eval({fff},{x[22]=1,x[23]=0,x[31]=0,x[32]=1,y[21]=0})),AllSolutions);

                           {T = 1/_Z1}

> about(_Z1);

Originally _Z1, renamed _Z1~:
  is assumed to be: integer

I got this by substituting new simple names for each of the sin & cos calls, solving, resubstituting, and trying to test whether the resubstitutions introduced inconsistency.
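Here is a rough sketch of that mechanism on a toy system (this small system, and the names sys, calls, frozen, and cand, are my own stand-ins, not the actual fff):

sys := { x*sin(w*T) - 1, cos(w*T) - x }:
calls := indets(sys, 'trig'):                            # the sin and cos calls
frozen := subs( {seq(c = freeze(c), c in calls)}, sys ): # replace each call by a new name
cand := [ solve(frozen) ]:                               # solve with the calls as plain names
# thaw each candidate and test whether the original system is still satisfied
seq( simplify( subs( thaw(s), sys ) ), s in cand );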

Oh, hang on! There is at least this larger set,

> SOL:=solve(eval({fff},[x[31]=0]),AllSolutions);

  {T = 1/_Z1, x[22] = 1, x[23] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z2, x[22] = 1, x[23] = 0, x[32] = 1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z3), x[22] = 1, x[23] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z4), x[22] = 1, x[23] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi/2 + 2*Pi*_Z5), x[22] = 0, x[23] = -1, x[32] = 1, y[21] = 0},
  {T = 2*Pi/(Pi/2 + 2*Pi*_Z6), x[22] = 0, x[23] = -1, x[32] = 1, y[21] = 0},
  {T = 2*Pi/(-Pi/2 + 2*Pi*_Z7), x[22] = 0, x[23] = 1, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(-Pi/2 + 2*Pi*_Z8), x[22] = 0, x[23] = 1, x[32] = -1, y[21] = 0}

> nops({SOL});
                               8

> seq(simplify(eval({fff},teqs union {x[31]=0})),teqs in {SOL});
             {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}

And of course by the symmetry of Matrices `c` and `h` in `eq` we also have to check the following. (There are some solutions unique to each set, where x[31]<>0 or x[23]<>0.)

> SOL2:=solve(eval({fff},[x[23]=0]),AllSolutions);
  {T = 1/_Z9, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z10, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z11, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z12, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z13, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 1/_Z14, x[22] = 1, x[31] = 0, x[32] = 1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z15), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z16), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z17), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z18), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z19), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 2*Pi/(Pi + 2*Pi*_Z20), x[22] = 1, x[31] = 0, x[32] = -1, y[21] = 0},
  {T = 1/_Z21, x[22] = 1, x[31] = 1, x[32] = 0, y[21] = 1},
  {T = 1/_Z22, x[22] = 1, x[31] = -1, x[32] = 0, y[21] = -1},
  {T = 2*Pi/(Pi + 2*Pi*_Z23), x[22] = 1, x[31] = 1, x[32] = 0, y[21] = -1},
  {T = 2*Pi/(Pi + 2*Pi*_Z24), x[22] = 1, x[31] = -1, x[32] = 0, y[21] = 1},
  {T = 2*Pi/(Pi + 2*Pi*_Z25), x[22] = 1, x[31] = RootOf(_Z^2+3), x[32] = 2, y[21] = RootOf(_Z^2+3)},
  {T = 1/_Z26, x[22] = 1, x[31] = RootOf(_Z^2+3), x[32] = -2, y[21] = -RootOf(_Z^2+3)}

> nops({SOL2});
                               18

> seq(simplify(eval({fff},teqs union {x[23]=0})),teqs in {SOL2});
{0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, {0}, 

  {0}, {0}, {0}, {0}, {0}

acer

You can either not use Units:-Standard:-`^` and keep the float powers, or you can use rational powers.

For the first of those, note that you can get all of Units:-Standard loaded excepting some individual parts such as its `^` export. For example,

> restart:
> with(Units[Standard]): unwith(Units:-Standard,`^`):

> n:=1.38:
> p1:=1.1*Unit(bar):
> v1:=3*Unit(dm^3):
> v2:=0.4*Unit(dm^3):

> v1^n;
               4.554359738 ['dm'^3]^1.38

> p2:=(p1*v1^n)/(v2^n);
               17.74096457 ['bar']

Notice how the exponents in v1^n do not simplify above. I doubt that one could get them to combine using `simplify`, either.

For the second of those two approaches, you can either enter everything yourself as pure rationals (e.g. 1.38 = 138/100, etc.) or you can convert them with Maple (either once up front, or on-the-fly).

> restart:
> with(Units[Standard]):

> n:=1.38:
> p1:=1.1*Unit(bar):
> v1:=3*Unit(dm^3):
> v2:=0.4*Unit(dm^3):

> R:=t->convert(t,rational):

> evalf( (R(p1)*v1^R(n))/(R(v2)^R(n)) );
                1.774096458*10^6 ['Pa']
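For completeness, here is a sketch of the up-front variant: the same data, with the floats converted to exact rationals once at assignment time (the 'rational' conversions are the only change from the assignments above).

restart:
with(Units[Standard]):

n  := convert(1.38, 'rational'):        # 69/50
p1 := convert(1.1, 'rational')*Unit(bar):
v1 := 3*Unit(dm^3):
v2 := convert(0.4, 'rational')*Unit(dm^3):

evalf( (p1*v1^n)/(v2^n) );              # about 1.774*10^6 ['Pa'], as above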

acer

You don't have MYPLOT as the return value of your procedure. So the plotobj argument doesn't get displayed or "printed".

You can work around that in a few ways. One way might be to assign MYPLOT to a local, then reset the 'default' plot options, and then return that local as the last line of the procedure.
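Here is a rough sketch of that first approach (assuming the goal is both the jpeg export and an inline display of the returned plot; the file path is just the one from the example below):

resizeMYPLOT := proc(plotObj)
   local result;
   plotsetup('jpeg', 'plotoutput' = "c://temp/bar.jpg",
             'plotoptions' = "height=1000,width=1000");
   result := plotObj;
   print(result);          # exported to the jpeg file while that driver is active
   plotsetup('default');
   result;                 # the returned local is displayed under the default driver
end proc: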

Or you could do it like this,

resizeMYPLOT := proc(plotObj)
   plotsetup('jpeg', 'plotoutput' = "c://temp/bar.jpg",
             'plotoptions' = "height=1000,width=1000");
   print(plotObj);
   plotsetup('default');
   NULL;
end proc:

plot(cos(x),x=-12..12);
resizeMYPLOT(%);

acer

I tried copying from and pasting to various Components' properties fields successfully in 64-bit Maple on Windows 7 using Ctrl-X, Ctrl-C, and Ctrl-V. Those keyboard shortcuts worked for me.

acer

I wasn't sure whether it is mol or m^3 that you want automatically changed into NM3. Below, I'll take it to be mol that you want handled that way. Presumably you want anything which would otherwise return involving mol to instead return involving NM3. Is that right?

First, one can add a new unit. After that, you can do things like convert(..,units,mol/sec,NM3/sec).

But you can also add a new system, which is a version of SI which uses NM3 instead of mol for the dimension of `amount_of_substance`.

> restart

> Units:-AddUnit(NM3,conversion=22.413*1000*mol,context=SI,prefix=SI);

> convert(20, units, mol, NM3);

                        0.0008923392674

> Units:-AddSystem(MySI,Units:-GetSystem(SI),NM3);
> Units:-UseSystem(MySI):

> with(Units[Natural]):

> 15*10^5*mol/sec;

                66.92544506 [[NM3/second]]

If I may, I advise using Units[Standard] instead of Units[Natural]. I believe that the latter is a design mistake, basically robbing one of almost all single letters (which otherwise come in handy for other purposes). It can also lead to confusion. Note that Units[Standard] can be used without needing the name of the Unit() constructor to appear in 2D Math input -- a palette entry or command-completion can directly produce the double-bracket unit appearance in 2D Math input.
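For comparison, here is a rough sketch of the same computation using Units[Standard] instead (I've written mol/s rather than mol/sec inside the Unit call; everything else mirrors the commands above):

restart:
Units:-AddUnit(NM3, conversion = 22.413*1000*mol, context = SI, prefix = SI):
Units:-AddSystem(MySI, Units:-GetSystem(SI), NM3):
Units:-UseSystem(MySI):
with(Units[Standard]):

15*10^5*Unit(mol/s);     # combines to roughly 66.925 in NM3 per second,
                         # and the letter m remains free as an ordinary name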

acer

There is no easy way to get such a fine-grained display of intermediate steps for all manner of calculations.

But if you are able to do the requisite programming, you can sometimes coerce Maple into emulating such an environment.

For example, see here or here.

acer

Quite a few requests have been made, over several years, for some quick, easy visual cue as to whether an indexed name is a table reference or an atomic identifier underneath.

Would it be useful if I were to try to put together a quick context-menu entry (automatically loadable into your session using user-owned initialization files, say) which could immediately show the situation upon right-click (but without necessitating actual 1D conversion)?

acer

The ss1 object is a Maple module, and it has exports a, b, c, and d. (See the ?exports help page.)

Hence you can access them programmatically like so,

ss1:-a;

ss1[a];

Of the two forms above, the first works the same even if you have assigned to the global name a. But if you have assigned to global a, then the second form needs to be called like ss1['a'] instead.

And if you are inside a procedure with a local a, and global a has also been assigned, then the first form works unchanged. But the second form would have to become something like ss1[':-a'] instead.

For these reasons it is often easier to just use ss1:-a, to avoid having to type out all of ss1[':-a'] which looks silly in comparison. One of the few benefits of the ss1[a] form is that you can do nifty stuff like seq(ss1[x], x in [a,b,c,d]).
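Here is a small self-contained illustration (ss1 below is a toy module of my own, standing in for your object):

ss1 := module() export a, b, c, d; a:=1: b:=2: c:=3: d:=4: end module:

ss1:-a;                               # 1
ss1[a];                               # 1, provided the global name a is unassigned
seq(ss1[x], x in [a, b, c, d]);       # 1, 2, 3, 4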

acer

@brian abraham One of the common goals of using a Window Function is to correct the oscillation phenomenon near the end-points. Since there is (by definition) no original data beyond what is supplied, the FFT will usually not be able to accommodate such a discontinuity. The FFT process relies on "knowing" that the data extends in some (overlay of a) periodic way. A hard nonzero boundary for the data thus induces an effect similar to the Gibbs phenomenon.

A key purpose of using a Window Function to scale the data so that it tends strongly to zero near the original data boundary is to allow the data to be safely padded further on with zero values. Since the scaled data is converging to zero, padding with zero (or near-zero) values makes the FFT process see a natural continuation. This makes the FFT and inverse FFT behave much better at the "old" boundary, since they now act on a wider data set. And it's not just that the "old" boundaries are well inside the extended data set; the fact that the data is all nice is also key. The augmented data can easily be made to appear to extend the old curve that is tending to zero near the old boundary. The augmented data doesn't even have to oscillate in order to do its job effectively. There is no significant distinction between the augmented data and the (now tiny) oscillation in the scaled old data near the old boundary. The augmented data is effectively periodic (though literally it is not) because it is so close to zero and its would-be oscillation is minuscule.

I will try to demonstrate using the previously posted example. I've added a third data set. It is based on the windowed data set, but padded with ten percent more points at each end. It's not padded with zeros; it's padded with a continuation of the scaled curve. I've also added two more plots, to show close up where the padded data meets the scaled original data.

restart:

 

freqpass:=proc(M::Vector,rng::range(nonnegint))
local Mt, Xlow, Xhi;
  Xlow,Xhi := op(rng);
  Mt := DiscreteTransforms:-FourierTransform(M):
  if Xlow>1 then ArrayTools:-Fill(Xlow-1,0.0,Mt,0,1); end if;
  if Xhi<op(1,Mt) then ArrayTools:-Fill(op(1,Mt)-Xhi-1,0.0,Mt,Xhi+1,1); end if;
  Mt[1]:=Mt[1]/2;
  2*map(Re,DiscreteTransforms:-InverseFourierTransform(Mt));
end proc:

 

f := x -> 5*x + sin(128*x) - sin(130*x) + sin(133*x) - sin(137*x);


a,b,n := -Pi,Pi,1000:

 

M := Vector(n,(i)->(evalhf@f)(a+i*(b-a)/n),datatype=float[8]):

Mg := Vector(n,(i)->evalhf(f(a+i*(b-a)/n)*exp(-1/2*(a+i*(b-a)/n)^2)),datatype=float[8]):
numpad := trunc(n*0.10):
Mgpadded := Vector(n+2*numpad,datatype=float[8]):
Mgpadded[numpad+1..numpad+1+n]:=Mg:
Mgpadded[1..numpad] := Vector(numpad,(i)->M[1]
           *evalhf(exp(-1/2*(a+(-numpad+i)*(b-a)/n)^2)),datatype=float[8]):
Mgpadded[n+numpad+1..n+2*numpad] := Vector(numpad,(i)->M[n]
           *evalhf(exp(-1/2*(a+(n+i)*(b-a)/n)^2)),datatype=float[8]):

 

plots:-pointplot([seq([k,M[k]],k=1..op(1,M))],style=line,view=[0..1000,-20..20]);

plots:-pointplot([seq([k,Mg[k]],k=1..op(1,Mg))],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=1..op(1,Mgpadded))],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=1..numpad+100)],style=line);
plots:-pointplot([seq([k,Mgpadded[k]],k=n+numpad+1-100..n+2*numpad)],style=line);

[The five plots produced above (the raw data, the windowed data, the padded data, and two close-ups of the seams) appear here in the original worksheet.]

filtered := freqpass(M,1..30):

filteredg := freqpass(Mg,1..30):
filteredgpadded := freqpass(Mgpadded,1..30)[numpad+1..numpad+1+n]:

plots:-pointplot([seq([k,filtered[k]],k=1..op(1,filtered))],style=line);

plots:-pointplot([seq([k,exp(1/2*(a+k*(b-a)/n)^2)*filteredg[k]],
                      k=1..op(1,filteredg))],style=line);
plots:-pointplot([seq([k,exp(1/2*(a+k*(b-a)/n)^2)*filteredgpadded[k]],
                 k=1..op(1,filteredgpadded))],style=line);

[The three plots of the filtered results appear here in the original worksheet.]

Download window_correction.mw

I liked Axel's the most, primarily because it gives a smooth-looking curve quickly.

But just for fun, here is yet another way to get a less smooth-looking curve more slowly. (I like the space that shows between the curve and the y-axis. Sigh.)

ee := evalc( subs(z=x+y*I, abs(z-2)=2 ) );
plots:-implicitplot(ee, x=-4..4, y=-4..4);

acer

Can you use Maple to demonstrate that "A_10x10 is not PD under the assumption that B_4x4 is not PD" much faster than you can demonstrate that "B_4x4 is PD under the assumption that A_10x10 is PD"?

The latter of those two ways entails computing the whole logical formula for whether A is PD, up front. You've discovered that unless it is sparse or special this can be very expensive.

But what if the logical formula for whether B_4x4 was PD was more cheaply attainable?

And what if you also devised a test of PD which could utilize a given logical formula T at any given internal logical step, by demanding that any subcheck be done under the assumption of T?

Under such circumstances, you could use the negation of the logical formula for B_4x4 being PD as T. And you could utilize ~T while computing whether A_10x10 was PD or not. If you got lucky then that computation would return 'false' quickly, reaching a logical contradiction before having to complete. That would allow you to conclude your hypothesis.

Now, how can one compute PD of a Matrix under some logical condition? If you look at (line numbers for Maple 14),

showstat(LinearAlgebra:-LA_Main:-IsDefinite,24..38);

then you can see code for testing PD of a symmetric Matrix, I think. Near the bottom there is a test, 0 < Re(de). You might be able to change that to something like (0<Re(de) and not(T)) or similar. Or maybe it would be better to use `is` and `assuming`?
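Here is a tiny illustration of that `is`/`assuming` idea, on a toy symmetric Matrix of my own (not your A or B):

with(LinearAlgebra):
A := Matrix([[a, 1], [1, a]], shape = symmetric):
m1 := A[1, 1]:                  # leading principal minors of A
m2 := Determinant(A):
is(m1 > 0) assuming a > 2;      # expected: true
is(m2 > 0) assuming a > 2;      # expected: true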

Oh, but now I realize that you are interested in positive-semidefinite and not positive-definite. I doubt that the above approach is suitable for any method that has to compute whether A_10x10 is PSD by calculating all its eigenvalues at once. Now, what was that rule for computing PSD using minors? Did it involve all the minors, as opposed to just the principal minors, or am I misremembering?

acer

You are mistaken in your claim that "In NLPSolve, f(Pi) or f(-Pi) are also calculated". If you trace through the calculation (using `trace`, or `stopat`, or just with `print(x)` at the start of `f`) then you can see that only floating-point approximations are passed to `f` during your call to NLPSolve.
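For instance, with a toy objective of my own (the 'procname' trick just keeps the call unevaluated for symbolic arguments):

f := proc(x)
   if not type(x, 'numeric') then
      return 'procname'(args);
   end if;
   print(x);                    # log exactly what NLPSolve passes in
   sin(x) + x^2/10;
end proc:

Optimization:-NLPSolve(f(t), t = -Pi .. Pi);

Every printed argument is a float such as HFloat(...); the exact symbols Pi and -Pi never reach f.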

acer

So, are you saying that a, b, c, d, wr, and p take the same values in each of fmax, fmin, and fmoy?

In that case, over what variable(s) is fmax the maximum? It can't be over {a,b,c,d,wr,p} if those are to be taken as fixed when computing fmax, fmin, and fmoy. So then the only thing left to vary when computing fmax and fmin is `t`. Is that correct? I'm supposing that it is, for now.

My first attempt seems to run afoul of a known NLPSolve bug.

> f:=(a+2*b+c*d)*sin(wr*p*t);
> fmoy := p*wr*(int(f, t = 0 .. Pi/p/wr))/Pi;
> fmax:='op(1,NLPSolve(f,t=0..2,maximize))':
> fmin:='op(1,NLPSolve(f,t=0..2))':

                    (a + 2 b + c d) sin(wr p t)

                    2 (a + 2 b + c d)/Pi

> Optimization:-NLPSolve((fmax-fmin)/fmoy,a=1..5,b=0.1..0.5,c=0.4..0.8,
> d=10..20,wr=200..300,p=5..20);

Warning, no iterations performed as initial point satisfies first-order conditions
[0., [a = HFloat(3.0), b = HFloat(0.3), c = HFloat(0.6000000000000001),
      d = HFloat(15.0), p = HFloat(12.5), wr = HFloat(250.0)]]

My next attempt (re-using the assignments to f, fmax, etc. from above) to work around that problem doesn't seem to work out. I am wondering whether it is because the objective and its gradient are not continuous (which is a quality that the NLPSolve methods generally require).

> F:=proc(A,B,C,DD,WR,P)
> local fmax, fmin, res;
> fmax:=Optimization:-NLPSolve(eval(f,[a=A,b=B,c=C,d=DD,wr=WR,p=P]),t=0..2,maximize);
> fmin:=Optimization:-NLPSolve(eval(f,[a=A,b=B,c=C,d=DD,wr=WR,p=P]),t=0..2);
> evalf((op(1,fmax)-op(1,fmin))*eval(fmoy,[a=A,b=B,c=C,d=DD,wr=WR,p=P]));
> end proc:

> objf := proc(V::Vector)
> local res;
> F(V[1],V[2],V[3],V[4],V[5],V[6]);
> end proc:

> objfgradient := proc(X::Vector,G::Vector)
> G[1] := fdiff( F, [1], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[2] := fdiff( F, [2], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[3] := fdiff( F, [3], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[4] := fdiff( F, [4], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[5] := fdiff( F, [5], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> G[6] := fdiff( F, [6], [X[1],X[2],X[3],X[4],X[5],X[6]] );
> NULL;
> end proc:

> Optimization:-NLPSolve( 6, objf, 'objectivegradient'=objfgradient,
> 'initialpoint'=Vector([1,0.2,0.5,11,210,6]));

Error, (in Optimization:-NLPSolve) no improved point could be found

Maybe a global optimization application (GlobalOptimization or DirectSearch) could do something useful (with either the first or second of those two approaches).

acer

What exactly is the question?

Is `r` a Vector, and do you want something like this?

map(t->(-1)^(t-1), r);

## Or, less efficiently as they each use an extra intermediary Vector,
# (-1)^~(r-~1);
# `^`~(-1,`-`~(r,1))
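For instance, with a small sample Vector (the values in r here are just for illustration):

r := Vector([1, 2, 3, 4]):
map(t->(-1)^(t-1), r);    # Vector([1, -1, 1, -1])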

acer
