**First point**

Over which t-range do you want to approximate your polynomial by a sum like **alpha[0]+add(alpha[i]*exp(-beta[i]*t), i=1..5)**?

As you wrote "**sum of exponential functions with a negative power**" I assume you mean that the **beta**s are strictly negative, am I right?

In which case the range over which **f** is to be approximated should be **a..b**, with **a >= 0**.

Am I still right?

**Second point**

Even with a range **a..b** where **a >= 0**, there exist additional constraints for all the **beta**s to be strictly negative.

For instance, **f(t)** must be **convex** everywhere in **a..b**.

Look at the zeroes of **diff(f(t), t$2)**:

fsolve(diff(f(t), t$2)=0, t=-100..100);
        -0.5543081877, 0.5294373439, 0.5525008020, 1.539900777
plot(abs(diff(f(t), t$2)), t=-0.8..1.8, axis[2]=[mode=log]);

Then, for instance, all the **beta**s are strictly negative if you consider the range **-0.5543081877..0.5294373439**, but some are positive while others are negative in the range **-0.5543081877..0.6**.

Given the rapid variation of **f(t)** for **|t| > 1**, the local discrepancies between **f(t)** and its "sum of exponentials" approximation around the zeroes of **diff(f(t), t$2)** are graphically invisible... but they do exist.

In the range **t=0..1**, **f** is correctly approximated by

0.299348428151280e-1+1.18657264326334*10^(-7)*exp(17.3637775324664*t)

(adding more exponential terms only marginally improves the quality of the fit over this range).

The above approximation (denoted **fit1** below) can be obtained this way:

restart:
f := 0.020399949322360296902872908942 + 0.0261353198432118595103693714851*t^3 + 0.0240968505875842806805439681431*t^4 + 0.0148456155621193706595799212802*t^5 + 0.0239969764160351203722354728376*t^2 + 0.0204278458408370651586217048716*t - 0.00450853634927256388740864146173*t^6 - 0.0355389767483113696513996149731*t^7 - 0.0766669789661906882315038416910*t^8 - 0.120843030849135239578151569663*t^9 - 0.153280689906711146639066606024*t^10 - 0.150288711858517536713273977277*t^11 - 0.0808171080937786380164380347445*t^12 + 0.0872390654213369913348407061899*t^13 + 0.373992140377042586618283139889*t^14 + 0.766807288928470485618700282187*t^15 + 1.19339994493571167326973251788*t^16 + 1.49476369302534328383069681700*t^17 + 1.41015598591182237492637420929*t^18 + 0.593451797299651247527539427688*t^19 - 1.31434443870999971750661332301*t^20:
f := unapply(f, t):
# Assuming this t-range
domain := 0..1;
g := unapply(alpha[0]+sum(alpha[i]*exp(-beta[i]*t), i=1..n), (t, n)):
N := 1;
X := [seq](domain, 0.01):
Y := f~(X):
fit1 := Statistics:-NonlinearFit(g(t, N), X, Y, t);
plot([f(t), fit1], t=domain, color=[blue, red], legend=[typeset('f'(t)), typeset('g'(t, N))]);
# RSS stands for Residual Sum of Squares and gives a measure of the
# quality of the approximation of f(t) by g(t, 1).
# As the step (0.01) goes to 0, RSS times the step converges to the
# squared L2 norm of f(t)-g(t, 1) on the interval 0..1.
RSS_1 := Statistics:-NonlinearFit(g(t, N), X, Y, t, output=residualsumofsquares);
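If you want to push the fit to two exponential terms, a minimal sketch could look like this (the initial values are my own illustrative guesses seeded from **fit1**, not values from **Methods.mw**; **NonlinearFit** is quite sensitive to them):

```
# Hypothetical two-term fit (N = 2); the starting points are guesses.
N   := 2:
ini := [alpha[0]=0.03, alpha[1]=1e-7, beta[1]=-17, alpha[2]=1e-3, beta[2]=-1]:
fit2  := Statistics:-NonlinearFit(g(t, N), X, Y, t, initialvalues=ini);
RSS_2 := Statistics:-NonlinearFit(g(t, N), X, Y, t, initialvalues=ini,
                                  output=residualsumofsquares);
# Comparing RSS_2 to RSS_1 quantifies the (marginal) improvement.
```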

Look at the file Methods.mw for more information and to see how you could proceed for higher-order approximations (you will see that **Statistics:-NonlinearFit** has some limitations and that Maple provides better alternatives).
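One such alternative is **Optimization:-LSSolve**, which minimizes the sum of squared residuals directly and accepts bounds on the parameters. A sketch (not necessarily the method used in Methods.mw, and assuming a plain least-squares formulation is acceptable):

```
# Sketch only: build the residuals of the N = 1 model at the sample
# points and minimize (half) their sum of squares with LSSolve.
res := [seq(g(X[k], 1) - Y[k], k = 1 .. numelems(X))]:
sol := Optimization:-LSSolve(res);
# sol[1] is half the residual sum of squares, sol[2] the fitted parameters.
```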

As the above file is restricted to domain(s) of the form **a..b** with **a >= 0**, you will find in the file Methods_different_domains.mw some hints about how to find approximations over ranges **a..b** with **b <= 0** and ranges **a..b** where **a < 0 and b > 0**.

**But this requires being extremely careful when the beta constraints are written.**
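For instance, with **Statistics:-NonlinearFit** the sign of the **beta**s can be forced through the **parameterranges** option. A sketch (the bounds below are arbitrary illustrative choices, and their signs must be adapted to the domain you work on):

```
# Force beta[1] to be (strictly) negative by bounding it away from 0;
# -100 and -1e-6 are arbitrary bounds chosen for illustration.
fit1c := Statistics:-NonlinearFit(g(t, 1), X, Y, t,
                                  parameterranges = [beta[1] = -100 .. -1e-6]);
```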