MaplePrimes Activity


These are replies submitted by acer

@luigidavinci The size 2000x2000 is not very large; a non-sparse Matrix of that size, with storage=rectangular and datatype=float[8], should fit into about 32MB of memory.
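As a quick check on that figure (a minimal sketch; the arithmetic is just the number of entries times 8 bytes per hardware double):

M := Matrix(2000, 2000, datatype=float[8]):  # dense, storage=rectangular

2000*2000*8;  # 32000000 bytes, i.e. about 32MB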

How fast do you need the computation to be? That is to say, is speed or memory the more critical constraint for you?

If you just want a few smallest eigenvalues plus associated eigenvectors then you could try the link I gave above, to a wrapperless external-call to compute "selected" eigenvectors. It might not be as fast as a dedicated routine for handling a significantly sparse system. And it really only speeds up the subsequent computation of the subset of eigenvectors: the initial eigenvalue computation step likely takes just as much time as computing them all. That, combined with its being for full storage rather than sparse storage, might well mean that it is not at all the functionality you're after. But I felt I should mention it. I believe that there is a short paragraph in that post detailing how to invoke the attached code, so as to compute eigenvectors for only a specified number of smallest eigenvalues.
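For comparison, the stock dense approach is simply to compute all the eigenvalues and then keep the smallest few. A sketch only, using a random symmetric example Matrix (symmetric, so that the eigenvalues come back real):

with(LinearAlgebra):

A := RandomMatrix(2000, 2000, generator=-1.0..1.0,
                  outputoptions=[shape=symmetric, datatype=float[8]]):

evals := Eigenvalues(A):           # computes all 2000 eigenvalues

sort(convert(evals, list))[1..5];  # keep just the five smallest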

Do you recall which NAG F12 or ARPACK function will more quickly compute (only a selected number of) smallest eigenvalues?


It looks like some arguments to C function f12abc are of the type "double **", in which case a wrapper would likely need to be written by hand.

Maybe I ought to ask: what particular flavour of eigen-problem are you hoping to solve? Is it as convenient to use ARPACK itself, as opposed to NAG?

What platform are you on? (Hand-written wrappers need to be compiled, and usually linked to the target software's (usually shared) library. So either you'd need to be able to compile and link yourself, or state which platform. Linux often makes life much easier here, but might not be an absolute requirement. Win64 is a little trickier than Win32, and OSX can sometimes be a pain. None of this means it's infeasible, though.)

acer

@Christopher2222 I sent you a contact message with your mid in it (it should arrive as email, but maybe check your spam folder).

You can also see it in MapleCloud groups that you create (e.g. groupnameyouinvent@mid). And in Posts (not Questions or Comments) it may appear as a string in some of the definitions for s.eVar22, s.evar7, s.prop7, or s.prop9 (although these particulars could easily change in future). And I dimly recall it appearing in the URLs of one's posts in the "old" mapleprimes.

@Christopher2222 I've never seen or heard of a full list.

I once googled the terms "acer" and "maplesoft", because I was having trouble finding an old post. (Googling for "acer" and "maple" isn't very productive, for an unsurprising reason.) That search came up with this as the very first link. That link uses what appears to be the membership id number associated with the handle "acer", and I recalled that version 1 of mapleprimes also used that number for some aspects of pages/urls of material I'd posted.

So then I thought: what are the numbers for other handles? Rather than jump straight to the wayback machine, instead I just looked at the source, in my firefox, of one of my own Posts (not Question or Comment). And I found the id number therein (actually, two likely looking numbers, of which one did not pan out). That led to trying it for a Post made by another handle. Science, of a sort.

I agree, Comments are too hard to find.

It's more of a big deal since mapleprimes' search facility is dysfunctional, and a great number of older but very informative posts from the 'version 1' of mapleprimes were mis-imported as Post+Comments instead of Question+Answers+Comments.

If you need to find all your posted material, by the way, try here.

acer

@Alejandro Jakubi I'm not sure that you've understood my meaning. Yes, it's quite clear that the univariate functions f1 and f2, from R->C as you have given them, have the same number of points (two) at which they have jump discontinuities. So, I would agree, neither is superior as a univariate function of x alone. But before concluding that f1 and f2 are essentially of the same mathematical merit, and before moving on to judge them by notational form alone, I would consider their differences as functions from C->C.

Consider f1 as f1(x,y) with complex domain, and similarly for f2, as given in my comment above. Then the imaginary part of f1 has a jump discontinuity, as one moves across the real x-axis and not as one moves along it, for all x in (-infinity,-2/3) union (0,infinity). But the imaginary part of f2 has a jump discontinuity, as one moves across the x-axis, only on the finite interval (-2/3,0). One thing I am wondering is whether this has any bearing on anyone else's preferring one of {f1,f2} over the other.
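One can probe that difference numerically, just above and below the real axis. A minimal sketch (the point x=1 lies in f1's semi-infinite region of discontinuity but not in f2's finite one):

f1 := -arctanh(1+3*x):
f2 := 1/2*ln(x)-1/2*ln(2+3*x):

jump := f -> evalf( eval(Im(f), x=1+1e-8*I) - eval(Im(f), x=1-1e-8*I) ):

jump(f1), jump(f2);  # f1's Im jumps by about Pi at x=1; f2's does not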

@Alejandro Jakubi Thanks for your input.

Of course, I made no claim that either result was continuous (everywhere). I mentioned continuity in the sense that, yes, they are discontinuous at different sets of points, and perhaps the nature of that difference is worth examining.

I don't really see that behaviour along the real axis should so strongly dictate the form of the general result for the whole complex domain. If x had been assumed real, then that particular aspect of the behaviour might be relevant, but it's a general complex domain that's also under consideration.

You mentioned jumps. That's one aspect that I was considering. Consider that the imaginary part of f1 has a jump discontinuity across two semi-infinite regions of the real axis, while the imaginary part of f2 has such a jump only across a single finite subinterval (-2/3 to 0, is it?). I had hoped that the colouring in complexplot3d might have shown that better. But one can also see it like so,

f1:=-arctanh(1+3*x):

f2:=1/2*ln(x)-1/2*ln(2+3*x):

plots:-display(
  plot3d((a,b)->Re(eval(f1,x=a+b*I)), -2..2, -2..2, color=cyan),
  plot3d((a,b)->Im(eval(f1,x=a+b*I)), -2..2, -2..2, color=gold),
               view=[-2..2,-2..2,-5..5], axes=box);

plots:-display(
  plot3d((a,b)->Re(eval(f2,x=a+b*I)), -2..2, -2..2, color=cyan),
  plot3d((a,b)->Im(eval(f2,x=a+b*I)), -2..2, -2..2, color=gold),
               view=[-2..2,-2..2,-5..5], axes=box);

So one thing that I'm wondering is: could the containment of the discontinuity to a finite interval (versus two semi-infinite ones) favour one form over the other? Or, alternatively, is one such sort of form more often easily "repairable" via addition of constants? What about symmetry? The f1 plot seems to have that going for it, over f2. I'm not saying that simplicity of form as estimated by operation count, etc, is irrelevant here. Rather, I'm curious as to whether anyone can suggest other (possibly more) important considerations.

@Thomas Richard Yes, thanks, I'm aware of dsolve's convert_to_exact option. It's the difference in default behaviours, along with the difference in convenience of `dsolve` over `simplify` for handling this, which I feel might be confusing for some inexperienced users.

And, indeed, there are various choices for separately preprocessing input, so as to convert floats to exact quantities. (Some members might wish to see here for an old post on that theme.)
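For example, one such preprocessing route is just to rationalize every float up front, before handing the expression to dsolve or simplify. A sketch using evalindets (the ODE is the one from the example just below), after which it behaves like exact input:

> ode := (4*diff(y(x),x)-3.0*y(x)^2)/(2-3.0*y(x)):

> dsolve( evalindets(ode, float, convert, rational) );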

I get uneasy in the presence of inconsistencies, the cause of which I cannot explain easily.

Why does the second of these see float contagion, while the first does not?

> dsolve( (4*diff(y(x),x)-3.0*y(x)^2)/(2-3.0*y(x)) );

                                         4      
                             y(x) = ------------
                                    -3 x + 4 _C1

> simplify( (4*diff(y(x),x)-3.0*y(x)^2)/(2-3.0*y(x)) );

                            /   / d      \          2\
                         1. |4. |--- y(x)| - 3. y(x) |
                            \   \ dx     /           /
                       - -----------------------------
                                 -2. + 3. y(x)    

While I agree with Jacques that the user ought to understand that a floating-point number is not, in general, a good (and far less so: equivalent) alternative for an exact quantity, I also think that computer algebra systems are out on a limb when it comes to mixed floats and symbolics. There's a pretty decent body of understanding of the discipline of exact symbolic computation. And there's a large body of understanding of the theory and practice of floating-point computation. But the mixture of the two... is fledgling in comparison.

When floats were introduced in Maple, perhaps it seemed like a good idea to just "allow" a float coefficient on a symbolic quantity. But in all seriousness, I'm not sure that I really understand what kind of animal that is supposed to be. How should it behave? What are the rules for symbolically manipulating it? What are the rules for numerically manipulating it, while the symbol stays a symbol? Does the scale of the value that the symbol could take on affect the arithmetic with such animals?
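Even before any library command gets involved, the automatic simplifier takes a stance on those questions, silently doing float arithmetic on the coefficients of an exact symbol, which is part of what I mean (outputs as in a default Digits=10 session):

> 2.0*x - x;

                                     1.0 x

> x/3 + 0.1*x;

                                 0.4333333333 x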

Let me give another example. Why is discont allowed to return an answer here, and what is that answer supposed to mean? Is it supposed to represent any of the particular three literal floating-point numbers appearing inside `expr`? If not, then what does it in fact mean? (The result from `fdiscont`, on the other hand, can be much better explained: there may be discontinuities, when calculating at some specific working precision, within the returned range.)

> restart:

> Digits:=10:

> expr := 1/((x-10.81665)*(x-10.81665382)*(x-10.816653826392));

                                         1                           
       expr := ------------------------------------------------------
               (x - 10.81665) (x - 10.81665382) (x - 10.816653826392)

> # What aspect of `expr`, literal or otherwise, does this
> # exact rational result represent?
> discont(expr, x);

                             /54078437215095621\ 
                            { ----------------- }
                             \5000000000000000 / 

> # Even if we evaluate to floating-point, the meaning is unclear.
> evalf(%);

                                {10.81568744}

> # Don't misunderstand. I know how to compute and get that last, single result.
> # I just don't know for sure, what it's supposed to *mean*. I guess it means
> # something like: here is a result due to approximating the original problem to
> # the current working precision. But that doesn't tell me anything about how
> # accurate that result is as an approximation to any (or all) of the
> # discontinuities of the *original* problem.
> # As a user, how would I know that `discont` is maybe not what I want to apply
> # for this example?
> # What about applying `diff`, will that be safe and appropriate? Or `int`? Etc.

> fsolve(expand(1/expr), x);

                                 10.81568744

> # Next result is more clear: discontinuities may lie in this range.

> fdiscont(expr, x=0..100);

                 [10.8160639211926 .. 10.8172942195832]

So, why is `discont` even allowed to return a result for that `expr`?

This next bit is important: nobody wants a wrong result, or one for which the explanation (of why it's "an answer") is unclear. So it might not matter that you are clever enough to construct examples where there is no such confusion. That still leaves the genuine problem cases to worry about. And it's not just the outright problem cases that need worrying about. It's also the fine line that marks the boundary between the meaningful and the nonmeaningful results that should concern us.

Another example. Do you believe that the derivative of the piecewise below, `pw`, is discontinuous at x=2 or not? Or do you think that I've already cheated by writing "the derivative", as if it couldn't depend conceptually upon Digits or something else, and vary? Or perhaps you have your own philosophy of such mixed exact-symbolic/float expressions.

> restart:

> pw := piecewise( x>2, (x^2-13.)/(x-3.605551275), x+(13)^(1/2) );

                   pw :=  /     2                         
                          |    x  - 13.                   
                          | ---------------        2 < x  
                         <  x - 3.605551275               
                          |                               
                          |         (1/2)                 
                          \   x + 13             otherwise

> # Let's differentiate, and only afterwards convert to exact form.
> dpw:=diff(pw, x);

         dpw :=  /                  1.                       x < 2.
                 |                                                 
                 |           Float(undefined)                x = 2.
                 |                                                 
                <                         / 2      \               
                 |      2. x           1. \x  - 13./               
                 | --------------- - ------------------      2. < x
                 | x - 3.605551275                    2            
                 \                   (x - 3.605551275)             

> simplify(map(expand@rationalize,map(identify,dpw)));

                          / undefined        x = 2  
                         {                          
                          \     1          otherwise

> # Now, instead, convert to exact form before differentiating.
> simplify(map(expand@rationalize,map(identify,pw)));

                                       (1/2)
                                 x + 13     

> # Obviously, this is not discontinuous at x=2.
> diff(%,x);

                                      1

> discont(pw, x);

                               /   144222051\ 
                              { 2, --------- }
                               \   40000000 / 

> evalf(%);

                              {2., 3.605551275}

> fdiscont(pw, x=-20..20);

                                     []

acer

Also possible is:

intk := proc(d, n, A, Lambda)
  # numerical quadrature over k from 0 to 1; lambdak is assumed to be
  # defined already. The operator form of the integrand delays its
  # evaluation until evalf/Int supplies float values for k.
  evalf( Int( k -> lambdak(k, n, A, d, Lambda), 0.0 .. 1.0 ) );
end proc:

intk(2.,14,1.,1.);

acer

With the start of a new school year, the MapleCloud is flooded with dozens of posts (mostly empty or near-empty). It's a new form of social networking: giving a shout out to your peeps via the Cloud message title.

acer

@mnhoff Roundoff error, upon floating-point evaluation following numeric substitution, can vary greatly according to the particular form of the symbolic expression. Symbolic reformulation such as is done by the simplify command does not necessarily produce a rearrangement that will be prone to less roundoff, in general.

Yes, sol3 seems to suffer more at the numeric data point you mentioned. I don't see a compelling reason why this should (or, I'm afraid, should not) be true for all other data inputs.

> forget(evalf):
> evalf({eval(sol1,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])}):
> sort(%);                                                                
          {-143.4745136, -0.01428010101, 0.01428010101, 143.4745136}

> forget(evalf):
> evalf[100]({eval(sol1,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])}):
> sort(evalf(%));                                                         
          {-143.4744804, -0.01427413114, 0.01427413114, 143.4744804}

> 
> forget(evalf):                                                          
> evalf(eval(sol2,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):         
> sort(indets({%},identical(M)=anything));                                
  {M = -143.4584940, M = -0.01427433450, M = 0.01427433450, M = 143.4584940}

> forget(evalf):                                                   
> evalf[100](eval(sol2,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({evalf(%)},identical(M)=anything));                       
  {M = -143.4744804, M = -0.01427413114, M = 0.01427413114, M = 143.4744804}

> 
> forget(evalf):  
> evalf(eval(sol3,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({%},identical(M)=anything));
  {M = -295.0142973, M = -0.01425070076, M = 0.01425070076, M = 295.0142973}

> forget(evalf):                                                   
> evalf[100](eval(sol3,[x1=-17.275,x2=-180.93,y1=238.59,y2=-156.49])):
> sort(indets({evalf(%)},identical(M)=anything));
  {M = -143.4744804, M = -0.01427413114, M = 0.01427413114, M = 143.4744804}

It seems like a reasonable supposition that, "in the main", lengthier symbolic reformulations are more at risk of such float-evaluation roundoff. But it is not true in general.
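A deliberately extreme toy example (not one of your sol expressions) of how the written form alone, not the mathematical content, decides the damage:

> Digits := 10:

> e1 := (x-1)^2:  e2 := expand(e1):  # the same function, in two forms

> eval(e1, x=1.000001);

                                       -12
                                 1. 10

> eval(e2, x=1.000001);

                                     0.

The expanded form loses every significant digit at that point, to cancellation, while the factored form is fine.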

P.S. I've been wanting to do a followup to an earlier bloggish post relating roundoff to session-dependent "order issues" (i.e. about sums rather than products). But you may find this of some interest. It won't help you decide whether your `sol1` is better for use in exported code (C, Matlab, or other double-precision computational environments).

acer
