acer


These are answers submitted by acer

The Translator produces equivalent commands by default, rather than both producing and then evaluating them.

One can see that MmaTranslator:-Mma:-Get would actually do something (but only if the resulting command is run or evaluated, naturally).

> interface(verboseproc=3):

> eval(MmaTranslator:-Mma:-Get);
proc()
local line, last;
    last := readline(args);
    line := last;
    while line <> 0 do last := line; line := readline(args) end do;
    last
end proc

If the MmaTranslator:-MmaToMaple assistant is run, then it produces just the equivalent. It has a checkbox, off by default, to toggle evaluating the equivalent command.
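For example (a small sketch; FromMma is the package's string-translation command, though I haven't rechecked option names across all versions), one can obtain the Maple equivalent as an expression and only then choose to act on it:

> with(MmaTranslator):
> e := FromMma("Sin[x]^2 + Cos[x]^2"):
> simplify(e);
                               1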

Both the MmaTranslator and the Matlab:-FromMatlab systems have some "Maple runtime" routines. Some Mathematica commands (such as your Get example) are translated to equivalents in the respective runtimes. The functionality which has to be produced might not already exist as some other Maple routine outside of those "runtimes". In some sense, these packages can be seen partly as emulators rather than wholly translators. That's my take, anyway.

acer

The expressions you've given for LF1 and LF2 appear to be the same. But I think that I understand what you're after -- a global maximum. Using Optimization:-NLPSolve on the bivariate expression, you are getting only local maxima, which differ according to the supplied ranges, etc.

The NLPSolve command is supposed to be able to do global optimization for univariate expressions, however. And your bivariate expression can be seen as the product of two univariate expressions (one in terms of only p, and the other in terms of only lambda). At the global max of the bivariate expression, won't those two separate univariate expressions attain their global positive maxima or global negative minima (together)?

# Exponents are high, so increase the working precision.
> restart:
> Digits := 100:

> part1 := Optimization:-NLPSolve((1-p)^12 * p^6, p = 0.1..0.9,
>              method=branchandbound, nodelimit=100, maximize);
part1 := [0.000010572491946857255657431169057246221172365512139963253208324\
    76673684648619603595616751183234400388153, [p = 0.333333333333333333333\
    33333333333333333333333333333736020909385629132457466811964496740584798\
    92106085]]

> part2 := Optimization:-NLPSolve(lambda^20/((exp(lambda))^6*(1-1/exp(lambda))^6),
>              lambda=1..10, method=branchandbound, nodelimit=100, maximize);
part2 := [74.64332738477354229696423984447562230886661996047134706648672868\
    808536810802011914892463634162665307, [lambda = 3.197059146345953482481\
    14658306428866389061101553410448249261020291420127792868684784402879499\
    2627541]]

> 1/1658880 * part1[1] * part2[1];
0.4757221605312909515206819021881908456041938871196218125541162231701706979\
    031545368422338561424874866 * 10^(-9)

> LF:=(1/1658880)*(1-p)^12*p^6*lambda^20\
> /((exp(lambda))^6*(1-1/exp(lambda))^6):

> eval(LF,[part1[2][1],part2[2][1]]);
0.4757221605312909515206819021881908456041938871196218125541162231701706979\
    031545368422338561424874865 * 10^(-9)
 
> part1[2][1], part2[2][1];
p = 0.333333333333333333333333333333333333333333333333337360209093856291324\
    5746681196449674058479892106085, lambda = 3.197059146345953482481146583\
    06428866389061101553410448249261020291420127792868684784402879499262754\
    1

Checking with the (add-on) GlobalOptimization package,

> LF:=(1/1658880)*(1-p)^12*p^6*lambda^20\
> /((exp(lambda))^6*(1-1/exp(lambda))^6):
> GlobalOptimization:-GlobalSolve(LF,p=0\
> .1..0.9,lambda=1..10,maximize,method=multistart);
[0.475490028627130724 * 10^(-9),
    [p = 0.331418551180686749, lambda = 3.17705810474458694]]

So, since the objective is a product of "stand-alone" univariate expressions (in p and lambda separately), you may be able to split it and apply NLPSolve to each factor separately, getting global results with more assurance (with less influence from the whim of the supplied ranges than with your original LF1, etc).

acer

A record of the commands issued during a session of the commandline (console, TTY) Maple interface can also be saved.

See the options --historyfile=histFile and --historysize=histSize in the ?maple help-page. The history file can be specified when the session is invoked, while the default is ~/.maple_history .
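For instance, an invocation like the following (the file name myhist and the size 200 are just example values) would record the session's command history into ./myhist:

    maple --historyfile=myhist --historysize=200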

acer

See the help-pages for ssystem and system.
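For example (a small sketch; ssystem captures the command's output and returns it along with the exit status, while system passes the output straight through and returns only the status):

> ssystem("date");
> system("date");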

acer

I'm not sure if this is the same phenomenon as you are seeing, but the Maple 12.xx Standard GUI has problems with delayed rendering in some Linux distributions. AFAIK, it can occur in "older" distributions. It might be related to a clash between an "old" glibc and the "newer" JRE. The problem manifests itself as a 5-10 sec white pause over the whole GUI output canvas. It occurs especially after pasting.

It had nothing to do with plots, since small expressions would get it too. Oh, and on a machine that had it, it would occur with a slightly shorter delay for every output.

On an old Fedora box of mine the problem was very bad in 12.xx but has improved in 13. If you are stuck with Maple 12 for a while, then you might consider upgrading your distribution. I don't know of the problem occurring on any current Linux distro.

acer

Could you post the actual code, or upload a worksheet for that to this site?

acer

In Maple 9.5.1, for Matrix M,

rtable_scanblock( M, [rtable_dims(M)],'Maximum');
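For example (the Matrix entries here are just for illustration),

> M := Matrix([[3,1],[7,2]]):
> rtable_scanblock( M, [rtable_dims(M)], 'Maximum');

which should pick out the largest entry, 7 in this case.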

acer

I, for one, welcome the new Japanese overlords.

acer

You might fiddle with something like this, either to customize with your own time interval length, adjust the rounding/truncating, or even whether to print it or to refresh a Component.

The time interval used below, throughout, is 5 sec. Adjust to taste.

One can handle the case that Maple has a Thread-blocking operation (gc, simpl table write, etc) going on right at the 5sec rollover. Below, that case should be handled by printing at the next opportunity, while also resetting the tick-point.

You mentioned realtime. That is very difficult to do very well, with assurances.

p := proc() local X,oldX,f;
  f := proc() local i; for i from 1 to 100000 do i+i;
                       end do;
       end proc;
  oldX:=0;
  while true do
    X:=time[real]();
    if X-oldX<=5 then f();
    else
      if (X-oldX>=5 or `mod`(trunc(X),5)=0) then
        print(X); oldX:=X; f();
      else f();
      end if;
    end if;
  end do;
end proc:
                                                                                
Threads:-Create(p());
                                                                                
int(exp(x^101/(x^11102-3)),x);

I only tried on a single core machine, where it seemed to work ok. Experience on multi-core might be interesting.

acer

Suppose that you start with code like this, and that you would in fact be satisfied with double-precision results. The following runs out of memory, and takes forever.


with(Statistics):
with(RandomTools):
maxiter := 1000000:
n := 5:
A := Array(1..1000000):
C := Array(1..n):
 
for k to maxiter do
t := Generate(list(distribution(Normal(0,1)),n)):
tmean := Mean(t):
tSD := StandardDeviation(t):
 
for c to n do
C[c] := evalf((t[c]-tmean)/tSD):
end do:
 
C := sort(C):
 
for b to n do
C[b] := CumulativeDistributionFunction(Normal(0,1), C[b]):
end do:
 
A[k] := max(seq(max(abs(i/n - C[i]), abs((i-1)/n - C[i])), i = 1..n)):
end do:

Note the following,

> with(Statistics):
> CumulativeDistributionFunction(Normal(0,1), x);
                                              1/2
                                           x 2
                             1/2 + 1/2 erf(------)
                                             2

That alongside various other code optimizations may produce the following, which runs in about 23sec and allocates about 320MB (Maple 12, 64bit Linux),


st,bu,ba := time(),kernelopts(bytesused),kernelopts(bytesalloc):
Digits := trunc(evalhf(Digits)):
with(Statistics):
maxiter := 1000000:
n := 5:
A := Array(1..1000000,datatype=float):
C := Array(1..n,datatype=float):
t_S := RandomVariable(Normal(0,1)):
 
myproc := proc(A::Array,C::Array,all_t::Vector,
               maxiter::integer,n::integer)
  local k::integer, kk::integer, lC::Vector,
        temp::float,themax::float, tmean::float,
        tSD::float, kminusonen::integer;
  for k from 1 to maxiter do
    kminusonen := (k-1)*n;
    tmean := add(all_t[kminusonen+kk],kk=1..n)/n;
    tSD := sqrt(add((all_t[kminusonen+kk]-tmean)^2,kk=1..n)/(n-1));
    for kk from 1 to n do
      C[kk] := (all_t[kminusonen+kk]-tmean)/tSD:
    end do:
    lC := eval(sort(C)):
    for kk from 1 to n do
      lC[kk] := 1/2+1/2*erf(1/2*lC[kk]*2^(1/2));
    end do:
    themax := 0;
    for kk from 1 to n do
      temp := max(abs(kk/n - lC[kk]), abs((kk-1)/n - lC[kk]));
      if temp>themax then themax:=temp; end if;
    end do;
    A[k]:=themax;
  end do;
  NULL;
end proc:
 
all_t := Sample(t_S,maxiter*n):
evalhf(myproc(A,C,all_t,maxiter,n)):
time()-st,kernelopts(bytesused)-bu,kernelopts(bytesalloc)-ba;

Creating a version which may be entirely compiled gets it down to about 3sec to run with 55MB allocated,

st,bu,ba := time(),kernelopts(bytesused),kernelopts(bytesalloc):
Digits := trunc(evalhf(Digits)):
with(Statistics):
maxiter := 1000000:
n := 5:
A := Array(1..1000000,datatype=float):
C := Array(1..n,datatype=float):
lC := Array(1..n,datatype=float):
t_S := RandomVariable(Normal(0,1)):
 
myproc := proc(A::Array(datatype=float[8]),C::Array(datatype=float[8]),
               lC::Array(datatype=float[8]),all_t::Vector(datatype=float[8]),
               maxiter::integer,n::integer)
  local k::integer, kk::integer, temp::float, themax::float,
        tmean::float, tSD::float, kminusonen::integer, inf::float,
        j::integer, kkbest::float, jbest::float, fn::float;
  fn := 1.0*n;
  inf := 99999.9;
  for k from 1 to maxiter do
    kminusonen := (k-1)*n;
    tmean := 0.0;
    for kk from 1 to n do
      tmean := tmean + all_t[kminusonen+kk];
    end do;
    tmean := tmean/fn;
    tSD := 0.0;
    for kk from 1 to n do
      tSD := tSD + (all_t[kminusonen+kk]-tmean)^2;
    end do;
    tSD := sqrt(tSD/(fn-1.0));
    for kk from 1 to n do
      C[kk] := (all_t[kminusonen+kk]-tmean)/tSD:
    end do:
    # slowest sorter known to mankind
    for kk from 1 to n do
      kkbest := inf;
      for j from 1 to n do
        if C[j]<kkbest then
          kkbest := C[j];
          jbest := j;
        end if;
      end do;
      lC[kk] := kkbest;
      C[jbest] := inf;
    end do:
    for kk from 1 to n do
      lC[kk] := 1.0/2.0+(1.0/2.0)*erf((1.0/2.0)*lC[kk]*(2.0^(1/2)));
    end do:
    themax := 0;
    for kk from 1 to n do
      temp := max(abs(kk/fn - lC[kk]), abs((kk-1)/fn - lC[kk]));
      if temp>themax then themax:=temp; end if;
    end do;
    A[k]:=themax;
  end do;
  NULL;
end proc:
 
myproc_c := Compiler:-Compile(myproc):
 
all_t := Sample(t_S,maxiter*n):
myproc_c(A,C,lC,all_t,maxiter,n):
time()-st,kernelopts(bytesused)-bu,kernelopts(bytesalloc)-ba;

It's a bit of a pity that module exports cannot be called from within evalhf, or that Compiler:-Compile cannot translate Statistics:-Mean or Statistics:-StandardDeviation.

acer

Your un is created as a list. Don't use a list for this sort of task. Use a mutable data structure (for which element replacement is both possible and appropriate) such as a Vector, Array, or table instead. For example,

un := Vector(6):

See the section on data structures in the Maple Portal, or the Introductory Programming Guide, for details on those objects.

ps. That Portal page could explain "replacement" of a list element more fully (by describing subsop on a list and what that entails).
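A quick illustration of the difference (the name un and the values here are just examples): element replacement acts in place on a Vector, while a list would need subsop to build an entirely new list:

> un := Vector(6):
> un[2] := 13:           # fine; the Vector is modified in place
> L := [1,2,3]:
> L := subsop(2=13, L):  # a brand-new list gets built and reassigned

For long lists, repeated subsop in a loop gets expensive, which is another reason to prefer a mutable structure for this sort of task.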

acer

Don't use vector, matrix, and array. They are deprecated. Use Vector, Matrix and Array.

CurveFitting:-PolynomialInterpolation doesn't accept lowercase vector and matrix. The printing difference you mentioned is due to the fact that lowercase vector and matrix have last_name_eval.

Maple is a case-sensitive language.

ps. It is highly misleading to have the help-page vector(deprecated) be titled "Overview of Vectors". It's no wonder that new users find all this confusing. Fortunately the matrix(deprecated) and array(deprecated) help-pages haven't been treated to such a misguided mix-up with the capitalization in their titles.
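The last_name_eval behaviour behind that printing difference can be seen directly (a small sketch):

> v := vector([1,2,3]):   # deprecated lowercase constructor
> v;                      # displays just the name v, due to last_name_eval
> print(v);               # forces display of the entries
> V := Vector([1,2,3]):
> V;                      # displays the entries directly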

acer

The purpose of that op() call is to extract the integrand of the inert Int call.

The general idea of the code seems to be to extract the integrand expression, get an alternate form, make a procedure that computes that, optimize that procedure, instantiate that at a dummy (r) to get a new integrand expression, recreate a new Int using the new integrand, and then finally to do numeric quadrature.

> F := Int(expr,t=a..b);
                                       b
                                      /
                                     |
                               F :=  |   expr dt
                                     |
                                    /
                                      a

> op(F)[1];
                                     expr

> IntegrationTools:-GetIntegrand(F); # another way
                                     expr

Why do you insert uneval quotes around the commands, which prevent them from doing their intended job?

acer

It's not clear at what precision you intend the calculations to be done. That Generate call specifies 20 digits for the random values, but the environment variable Digits is not set above the default value (10).

If you would be satisfied with double precision then (leaving Digits alone) you could create Array A with the datatype=float[8] option. That could allow Statistics:-Mean (etc) to work with it using fast, memory efficient compiled routines.

For your loop of 10 million iterations I could imagine that quite a bit of Maple's time would be spent in creating and garbage collecting all those lists (t, and sorted t). They are small, but there are 20 million of them produced.

It might be useful if the ArrayTools package got a new routine to sort Vectors efficiently (in the memory sense, acting either inplace on the original or on a supplied container Vector for the result). More below on getting such efficient functionality yourself.

It's been mentioned before on this site that an improved Statistics:-Sample might allow one to re-use a float[8] Vector by populating it inplace. (That's how some good non-Maple random number generators work.) In the absence of that, you might consider the memory hit of generating all n*10^7 random values up front in a single float[8] Vector. Then you could simply act on clumps of 6 entries at a time. The code would keep track of which clump number it was working on and simply index into the large Vector appropriately. Such a large Vector would take about 500MB to allocate, and less than 10 seconds to create.

> with(Statistics):
> Dist := RandomVariable(Uniform(0,1)):
> st,ba:=time(),kernelopts(bytesalloc):
> T := Sample(Dist,6*10^7):
> time()-st,kernelopts(bytesalloc)-ba;
                               9.260, 480065524

This can also give an idea of how fast Maple could compute the mean of such a hardware float[8] Vector,

> st,ba:=time(),kernelopts(bytesalloc):
> Mean(T);
                                 0.4999735416

> time()-st,kernelopts(bytesalloc)-ba;
                                   0.148, 0

If 500MB allocation at once is too much then you could consider acting on a smaller number of repeated data Vectors -- say ten float[8] Vectors, each with n*10^6 entries. That way only about 90MB would need to be allocated at once, with each large Vector becoming garbage-collectible quickly once it has been processed. The idea is to play off total memory allocation against the cost of garbage collection of all those t's and sorted-t's.

If you feel adventurous, you might write a procedure which accepts a Vector and sorts it inplace. That could then be hit with Maple's Compiler:-Compile to provide a version which runs very fast and leanly. Since your n=6, the complexity of the sorting algorithm isn't relatively important. You could either create a single workspace float[8] Vector named sorted_t and repeatedly use ArrayTools:-Copy to get the current clump of the large T data Vector into it, or even better you could write the sorting procedure to accept full T as well as the "clump number" and then index into T appropriately in order to sort just the current clump of n entries inplace.

The procedure to be compiled could also do that max(...) work and populate float[8] Array A. In other words, write a single procedure to do it all, the sorting, the max&abs work, and the assignment into A. For fun, you could precompute one (or two) float[8] Vectors containing the i/n (or (i-1)/n) for quick re-use.
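As a rough sketch of that suggestion (untested here; the procedure name and the insertion-sort choice are mine, with small n making the simple sort acceptable), something along these lines could be fed to Compiler:-Compile:

clumpsort := proc(T::Vector(datatype=float[8]),
                  clump::integer, n::integer)
  local i::integer, j::integer, base::integer, tmp::float;
  base := (clump-1)*n;
  # insertion sort of T[base+1 .. base+n], inplace
  for i from 2 to n do
    tmp := T[base+i];
    j := i-1;
    while j>=1 and T[base+j]>tmp do
      T[base+j+1] := T[base+j];
      j := j-1;
    end do;
    T[base+j+1] := tmp;
  end do;
  NULL;
end proc:

cclumpsort := Compiler:-Compile(clumpsort):

The max&abs work and the assignment into A could then be folded into the same compiled procedure, as described above.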

acer

This is called 2-argument eval, or evalat ("eval at", because it evaluates at some values). Notice that the instantiation can be obtained without having to actually assign to alpha. That can be useful because it leaves the original object as is, while also leaving name alpha unassigned and thus free immediately for continued symbolic use (no need to unassign it).

> eval(Result,alpha=Pi/2);
                                 [0    0    1]
                                 [           ]
                                 [1    0    0]
                                 [           ]
                                 [0    0    1]

I should mention that lowercase matrix and linalg are deprecated in favour of capitalized Matrix and LinearAlgebra.

acer
