6473 Reputation

17 Badges

8 years, 43 days

MaplePrimes Activity

These are replies submitted by mmcdara


Great, I vote up


Impressive, I vote up.


Here are several ways to achieve what you want:




P := seq(plot(x^n, x=-1..1, color=ColorTools:-Color([1/n, 1-1/n, 0]), size=[800, 400]), n=1..4):

Q1 := display(P[1], P[3]):
Q2 := display(P[2], P[4]):

here := cat(kernelopts(homedir), "/Desktop/example.jpg"):
plotsetup(jpeg, plotoutput = here):
   display(< Q1 | Q2 >);

img := ImageTools:-Read(here):




NOTE: I don't know why the tickmarks font has been changed by plotsetup (just do display(< Q1 | Q2 >) after plotsetup(default) to see what the jpg file should contain).
I'm not even sure this issue can be fixed, because the Plotting Devices help page says: "Notes: Font control is not available in this driver. The Maple JPEG driver is based in part on software provided by the Independent JPEG Group. JPEG is designed for photographs of natural scenes, which unlike plots, do not have sharp edges and abrupt color changes. GIF is a superior file format for plots; JPEG will make sharp edges blurry."

Any attempt to modify the tickmarks (using for instance 

axis[1]=[tickmarks=[seq(i=sprintf("%s", convert(i, string)), i=-1..1)]]

) leads to the message Warning, ignoring axis information in _AXIS[n] structures
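Since the help page quoted above says GIF is a superior format for plots, a possible workaround (a sketch I have not fully explored; the file name is just an example) is to export with the gif driver instead of jpeg:

```maple
# Export the same array of plots with the gif driver instead of jpeg
there := cat(kernelopts(homedir), "/Desktop/example.gif"):
plotsetup(gif, plotoutput = there):
display(< Q1 | Q2 >);
plotsetup(default):
```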

@Ali Hassani 

Thanks for the clarifications.

So, if I understand correctly: you have a differential equation which models the behaviour of some physical system; you know from physical expertise that its solution should be of the form, let's say, 1-exp(-5*t); and the polynomial f(t) in your initial question is the series representation of the exact solution that DTM gave you?

Plotting f(t) for t > 0 clearly shows that f does not behave as Physics suggests it should.
Maybe you should provide us the differential equation so that we can try to see how to help you.

I look forward to reading from you

First point
Over which t-range do you want to approximate your polynomial by a sum like alpha[0]+add(alpha[i]*exp(-beta[i]*t), i=1..5)?

As you wrote "sum of exponential functions with a negative power", I assume you mean that the betas are strictly negative, am I right?
In that case the range over which f is to be approximated should be a..b, with a >= 0.
Am I still right?

Second point
Even with a range a..b, with a >= 0, there exist some additional constraints for all the betas to be strictly negative.
For instance f(t) must be convex everywhere in a..b.
Look here at the zeroes of diff(f(t), t$2):

fsolve(diff(f(t), t$2)=0, t=-100..100);
plot(abs(diff(f(t), t$2)), t=-0.8..1.8, axis[2]=[mode=log])
     -0.5543081877, 0.5294373439, 0.5525008020, 1.539900777

Then, for instance, all the betas are strictly negative if you consider the range -0.5543081877 .. 0.5294373439, but some are positive while others are negative in the range -0.5543081877 .. 0.6.
Given the rapid variation of f(t) for |t| > 1, the local discrepancies between f(t) and its "sum of exponentials" approximations around the zeroes of diff(f(t), t$2) are graphically invisible... but they do exist.

In the range t=0..1, f is correctly approximated by

(adding more exponential terms only marginally improves the quality of the fit over this range).
The above approximation (denoted fit1 below) can be obtained this way:


f := 0.020399949322360296902872908942 + 0.0261353198432118595103693714851*t^3 + 0.0240968505875842806805439681431*t^4 + 0.0148456155621193706595799212802*t^5 + 0.0239969764160351203722354728376*t^2 + 0.0204278458408370651586217048716*t - 0.00450853634927256388740864146173*t^6 - 0.0355389767483113696513996149731*t^7 - 0.0766669789661906882315038416910*t^8 - 0.120843030849135239578151569663*t^9 - 0.153280689906711146639066606024*t^10 - 0.150288711858517536713273977277*t^11 - 0.0808171080937786380164380347445*t^12 + 0.0872390654213369913348407061899*t^13 + 0.373992140377042586618283139889*t^14 + 0.766807288928470485618700282187*t^15 + 1.19339994493571167326973251788*t^16 + 1.49476369302534328383069681700*t^17 + 1.41015598591182237492637420929*t^18 + 0.593451797299651247527539427688*t^19 - 1.31434443870999971750661332301*t^20:
f := unapply(f, t):

# Assuming this t-range
domain := 0..1;

g := unapply(alpha[0]+sum(alpha[i]*exp(-beta[i]*t), i=1.. n) , (t, n)) :

N := 1;

X := [seq](domain, 0.01):
Y := f~(X):

fit1 := Statistics:-NonlinearFit(g(t, N), X, Y, t);
plot([f(t), fit1], t=domain, color=[blue, red], legend=[typeset('f'(t)), typeset('g'(t, N))]);

# RSS stands for Residual Sum of Squares and gives a measure of the 
# quality of the approximation of f(t) by g(t, 1).
# As the step [0.01] goes to 0, RSS converges to the L2 norm of f(t)-g(t, 1)
# on the interval 0..1

RSS_1 := Statistics:-NonlinearFit(g(t, N), X, Y, t, output=residualsumofsquares);

Look at the file for more information and to see how you could proceed with higher order approximations (you will see that Statistics:-NonlinearFit has some limitations and that Maple provides better alternatives).

As the above file is restricted to domains of the form a..b with a >= 0, you will also find in the file some hints about how to find approximations over ranges a..b with b <= 0, and ranges a..b where a < 0 and b > 0.

But this requires being extremely careful when the beta constraints are written.
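As a minimal sketch of how such constraints on the betas could be written, NonlinearFit accepts a parameterranges option (the bounds 0..100 below are arbitrary placeholders; adapt them, including their sign, to the convention you retain):

```maple
# Sketch: constrain the betas through parameterranges (n = 2 terms here;
# assumes g, X, Y are defined as above)
fit2 := Statistics:-NonlinearFit(
          g(t, 2), X, Y, t
          , parameterranges = [seq(beta[i] = 0..100, i=1..2)]
        );
```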

@Carl Love 

What I meant by "it imposes a strict discipline of regular backup" is that before launching an operation that you are not sure will run correctly in a reasonable amount of time, do not forget to backup the worksheet before executing that operation.
We cannot rely upon the Autosave feature.

This regularly happens to me (I am probably a bit absent-minded) and I often blame myself for not having thought of saving my work before. 

It is indeed quite painful.
When I'm fed up going back and forth between the two worksheets, I simply kill the attached process, or even all of Maple (in the task manager, the "guilty" mserver process is not released and keeps consuming memory).
This imposes a strict discipline of regular back-up.


You realize, I presume, that the plots you provide have nothing to do with the data you provided before?
Just use 


to visualize X.
The abscissa is a number from 1 to 781 and the ordinate is X with, possibly, some unknown unit. So something completely different from what you present here.
Could it be that each plot here has abscissa equal to X and ordinate equal to Y?

You wrote contradictory things:

  • The object is to use Gaussian Smoothing and to ensure the endpoints fit.  
  • Therefore please use the method that will work best

Either I have to use Gaussian Smoothing, or you leave me free to use whatever method I consider best.
For instance my initial reply uses gaussian kernel smoothing, which seems, at least to me, a good method. The problem is that, as I never used the term "2D gaussian smoothing" explicitly, you seem to have swept it aside with a wave of your hand. Am I wrong?
Without any good reason to use Gaussian Smoothing, I don't see why I should use it.

I'm sorry to tell you that I have the strong feeling that I'm wasting my time.
I think you should wait for someone else to help you; good luck.


Things end here then, you haven't been very cooperative (you could have asked your "friend", if he really exists).

Helpers here do not appreciate having to decode LaTeX script in order to recode it in Maple.
What they appreciate is when the member who asks a question has made a minimum effort to deliver Maple code. We do not ask for this code to be perfect, but it really is the bare minimum to do.

To illustrate this let me take this simple line of yours

Parameters: m =1; a= 10^3/cm; M = 6.3*10^{17}; D = 10^{-5}cm^2/sec; \Omega = 1.7*10^{23} cm^3/Atoms; \tau = A * D* \Omega * t = A^{'}t = 10^{-14}cm^3/s;

In Maple syntax there are a lot of errors (curly brackets instead of parentheses, missing multiplication symbols, names like \tau or \Omega not being allowed in Maple, the triple equality [in bold font above], ...).
I don't understand what A^{'} is meant to be for you. Is A a matrix and A' its transpose?

Whatever, this is what a correct definition of Parameters should look like in Maple:

local D;  # D is a protected name in Maple
Parameters := {
  m = 1,
  a = 10^3 * Units:-Unit(cm),
  M = 6.3*10^(17),
  D = 10^(-5) * Units:-Unit(cm^2/s),
  Omega = 1.7*10^(23) * Units:-Unit(cm^3),
  #  tau = A * D* Omega * t = A^{'}  ### incomprehensible
  t = 10^(-14) * Units:-Unit(cm^3/s)
}:

Parameters := simplify(Parameters union {eval(tau = A*D*Omega*t, Parameters)});

# To integrate numerically wrt epsilon, A must be given a numeric value.

It already took me some effort to do this, and it will take me, or another member, a larger one to decipher the expression of W^{+,-}(x, \tau, M) and code it correctly in Maple.

Last but not least, your plot is unreadable and it is completely impossible to understand what you draw.

Could you make some effort and code your relations in syntax-error-free Maple?
Without this minimum work I doubt you will find help here.

If you want to integrate numerically a function F(x, a, b, c, ...) with respect to x you must give a, b, c, ... numerical values.
I have only little knowledge of Mathematica, but I guess it is the same there.
So do with Maple what you did with Mathematica.



f := (epsilon, a, x, tau) -> exp(-(epsilon/2/a)^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2

proc (epsilon, a, x, tau) options operator, arrow; exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2 end proc


# No numerical value returned because a, x, tau are names

J := Int(f(epsilon, a, x, tau), epsilon=0..+infinity);
evalf(J);  # no numerical value returned

Int(exp(-(1/4)*epsilon^2/a^2-tau*epsilon^3)*(cos(epsilon*x)-1)/epsilon^2, epsilon = 0 .. infinity)


Int(exp(-.2500000000*epsilon^2/a^2-1.*tau*epsilon^3)*(cos(epsilon*x)-1.)/epsilon^2, epsilon = 0. .. Float(infinity))


# Once a, x, tau are given numeric values the integral is computed

J := Int(f(epsilon, 1, 1, 1), epsilon=0..+infinity);

Int(exp(-(1/4)*epsilon^2-epsilon^3)*(cos(epsilon)-1)/epsilon^2, epsilon = 0 .. infinity)
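A variant of the same idea (a sketch; Jnum is a name I introduce for illustration): wrap the numerical integration in a procedure of the remaining parameters, so that the numeric values are supplied only at evaluation time:

```maple
# Numerical integration deferred until a, x, tau receive numeric values
# (assumes f is defined as above)
Jnum := (a, x, tau) -> evalf(Int(f(epsilon, a, x, tau), epsilon = 0 .. +infinity)):

Jnum(1, 1, 1);  # returns a numeric value
```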







... but I don't think NLPSolve can still handle the problem for large n.
For instance a single run of NLPSolve takes about 10 seconds for n=50, which means my naive strategy, consisting in starting from 1000 different points to try and catch a possible global maximum, rapidly becomes prohibitive.

I'd never heard of the OP's problem, but I'd already been interested in statistical problems related to packing problems. For example, how to place N points in the unit square so that they are as far apart as possible (similar to placing N disks of identical radius in the unit square).
Those problems are usually addressed by specific algorithms, mostly stochastic (greedy, genetic, moving particles, ...) or Centroidal Voronoi Tessellation (see here pp 648-650).

Just for fun, here is the best solution I got to pack 14 spheres in the unit cube in such a way that the sum of their radii is maximum.
The result is absolutely stunning: all the spheres have the same radius!




The packing should be such that the sum of the radii is maximum.

epsilon := 1e-8:

n := 14:
J := add(r[i], i=1..n):
P := [seq(<x[i], y[i], z[i]>, i=1..n)]:
cons_1 := seq(seq(add((P[i]-P[j])^~2) >= (r[i]+r[j])^2, j=i+1..n), i=1..n-1):
cons_2 := seq(seq(op([P[i][k]-r[i] >= 0, P[i][k]+r[i] <= 1]), i=1..n), k=1..3):
cons_3 := seq(r[i] >= epsilon, i=1..n):

SoR : Sum of the Radii

Max_trials : maximum number of trials
Note: I used a while loop instead of a for loop because NLPSolve sometimes returns an error.

Packing := proc(n, Max_trials)
  local SoR, SoR_max, error_found, error_count, rep, StartAt, opt, sor, error_type, Best_packing;

  SoR     := Vector(Max_trials):
  SoR_max := 0:

  error_found := false:
  error_count := 0:

  rep := 0:
  while rep < Max_trials do
    StartAt := {
                 seq(x[i] = rand(0. .. 1.)(), i=1..n),
                 seq(y[i] = rand(0. .. 1.)(), i=1..n),
                 seq(z[i] = rand(0. .. 1.)(), i=1..n),
                 seq(r[i] = rand(0. .. 1e-6)(), i=1..n)
               }:
    try
      opt := Optimization:-NLPSolve(J, {cons_1, cons_2, cons_3}, maximize, iterationlimit=10^4, initialpoint=StartAt):
    catch:
      error_found := true:
    end try:
    if `not`(error_found) then
      rep := rep+1:
      sor := evalf[6](eval(add(r[i], i=1..n), opt[2])):
      SoR[rep] := sor:
      if sor > SoR_max then
        SoR_max      := sor:
        Best_packing := opt[2]:
      end if:
    else
      error_found := false:
      error_count := error_count+1:
      error_type  := lastexception:
    end if:
  end do:

  return table([
           Errors = `if`(error_count = 0, None, [Type = error_type, Occurrences = error_count]),
           RadSum = SoR,
           Best   = Best_packing
         ]):
end proc:

res := Packing(14, 1000):

SoR := res[RadSum]:
Best_packing := res[Best]:

[Type = (Optimization:-NLPSolve, "no improved point could be found"), Occurrences = 136]


Statistics:-Histogram(SoR, minbins=100);

# Radii of the best packing
plots:-listplot(
  (evalf[6]@rhs)~(select(has, Best_packing, r))
  , style=point
  , size=[1200, 100]
  , color=red
  , thickness=3
  , title="Radii"
);

couleur := () -> ColorTools:-Color([rand()/10^12, rand()/10^12, rand()/10^12]):

plots:-display(
  seq(
    plottools:-sphere(eval([x[i], y[i], z[i]], Best_packing), eval(r[i], Best_packing), color=couleur(), style=surface)
    , i=1..n
  )
  , plottools:-cuboid([0,0,0], [1,1,1], color="LightGray", transparency=0.8)
  , title=typeset(Sum(r[i], i=1..n) = evalf[6](eval(add(r[i], i=1..n), Best_packing)))
  , scaling=constrained
);



# Radii

print~(sort((evalf@rhs)~(select(has, Best_packing, r)))):


From packomania, click on Spheres in a cube at the bottom of the page.

What is the largest radius that 14 identical spheres packed in a unit cube may have?
The value given on that page is 0.207106781186547524400844362105.

It is remarkable that two a priori different problems have the same solution.


Add-on: 3D packings of 9 to 14 spheres:

Question: optimal 3D packings of n identical spheres which are also optimal 3D packings of n spheres whose sum of radii is maximum are obtained for n=1, 2, 3, 4, 8, 14. Who comes next?


Your file Data_for_2nd_Order_Gassian_Smoothing_6_21_2024.maple can't be downloaded.
Change the extension to mw (not maple); this should work.

Unfortunately the paper is indeed not available for free.
I'm going to do some research on Google Scholar from its title.

See you soon.

PS: I'm sorry if you found me a bit rough, that wasn't my intention.


A few questions:

  • Are z[F0](t) and z[R0](t) explicitly known?
    If it is the case, you just have to define them as I did in my file. More precisely I wrote
    SYS_example := eval(SYS, RemainingFunc =~ t):

    which is, I agree, quite unclear. 
    RemainingFunc is equal to  { z[F0](t) , z[R0](t) } and  RemainingFunc =~ t means 

    { z[F0](t) = t, z[R0](t) = t } 

    If, for instance, z[F0](t) = cos(t) and  z[R0](t) = exp(-t) then define SYS_example this way

    SYS_example := eval(SYS,  {z[F0](t) = cos(t) , z[R0](t) = exp(-t)}):


  • If you don't explicitly know these functions, for instance if they are known only at some values of t, you must construct from these points two analytic expressions to represent both of them.
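For instance, assuming (purely for illustration, the sample data below are made up) that z[F0](t) has been measured at a few times, a spline gives such an analytic representation:

```maple
# Hypothetical sampled data for z[F0](t)
T := [seq(0.1*k, k = 0..10)]:
Z := [seq(evalf(cos(0.1*k)), k = 0..10)]:

# Piecewise-polynomial analytic representation built from the points
zF0 := CurveFitting:-Spline(T, Z, t):

# It can then be substituted into SYS as before:
# SYS_example := eval(SYS, z[F0](t) = zF0):
```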