@Carl Love

"the likelihood function is only properly defined for distributions with at least one symbolic parameter"

**Right**, the likelihood function is indeed a function that must depend upon some parameters (those of the target distribution).

Indeed, the Maple help pages say:

*[likelihood] n. (Statistics) the probability of a given sample being randomly drawn, regarded as a function of the parameters of the population.*

**So I should have read those pages more carefully...**

If S is a sample, D some distribution with parameters P, and L denotes the likelihood (function), the expression of L is often written

L(D(P) ; S) to emphasize L is considered as a function of P (or D(P)).

Once P is instantiated to some values P*, L(D(P*) ; S) becomes a number.

My mistake comes from the common usage of the term likelihood, which may represent either the likelihood function itself, L(D(P) ; S), or its value L(D(P*) ; S)... and in this latter case we often talk about the "likelihood of the sample S" (as it is the probability density of S given D(P*)).
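To make the distinction concrete, here is a minimal Python sketch (independent of Maple); the sample values and the Normal(mu, sigma) model are purely illustrative:

```python
import math

# Hypothetical sample S (illustrative values)
S = [4.9, 5.1, 5.3, 4.7, 5.0]

def normal_pdf(x, mu, sigma):
    # Density of Normal(mu, sigma) at x
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def L(mu, sigma, sample=S):
    # Likelihood L(Normal(mu, sigma) ; S): a function of the parameters mu, sigma
    p = 1.0
    for x in sample:
        p *= normal_pdf(x, mu, sigma)
    return p

# While mu and sigma remain free, L is a function of the parameters...
# ...and once P is instantiated to some P* = (5.0, 0.2), it becomes a number:
value = L(5.0, 0.2)
```

The same object, L, plays both roles: called with free parameters it is the likelihood function, and evaluated at a specific P* it is "the likelihood of the sample S".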

____________________________________________________________

When you write "What is the likelihood, or probability, that you've correctly estimated the parameters when there are no parameters? Of course it's 1."

I'm not completely sure of that.

Admittedly, from a Bayesian perspective, we can write something like p(S) = int( p(S | P)*p(P), dP), where p(P) is some prior on P.

Rewriting this integral in terms of the likelihood gives p(S) = int( L(P ; S)*p(P), dP); dividing by p(S) then yields int( L(P ; S)*p(P)/p(S), dP) = int( p(P | S), dP) = 1... which seems to confirm your claim, except that there exists no distribution without parameters: so "... when there are no parameters? Of course it's 1." doesn't seem to make sense.
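As a sanity check, this hypothetical Python sketch discretizes that integral for a Normal model with unknown mean (the sample, the prior Normal(0, 10), and the quadrature grid are all illustrative assumptions); the posterior mass int( p(P | S), dP) comes out to 1 as expected:

```python
import math

# Illustrative sample; Normal(mu, SIGMA) model with unknown mean mu
S = [4.9, 5.1, 5.3, 4.7, 5.0]
SIGMA = 1.0

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood(mu):
    # L(mu ; S) for the Normal(mu, SIGMA) model
    p = 1.0
    for x in S:
        p *= normal_pdf(x, mu, SIGMA)
    return p

def prior(mu):
    # Assumed prior on mu: Normal(0, 10)
    return normal_pdf(mu, 0.0, 10.0)

# Crude quadrature over a grid of mu values in [-20, 20]
dmu = 0.01
grid = [i * dmu for i in range(-2000, 2001)]

p_S = sum(likelihood(mu) * prior(mu) for mu in grid) * dmu          # marginal p(S)
post_mass = sum(likelihood(mu) * prior(mu) / p_S for mu in grid) * dmu  # int( p(mu | S), dmu )
```

Note that it is the *posterior* that integrates to 1; the marginal p(S) itself is just some density value, which is part of why the blanket "it's 1" answer feels suspicious to me.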

Maybe Maple uses some shortcut to return the value 1?

____________________________________________________________

"What I'm wondering is What happened to the factors of **1/sqrt(2*Pi) **that usually appear in the Normal PDF?"

Here again we face a loose usage in the language of statistics: in many situations the likelihood is considered to be defined only up to an arbitrary multiplicative constant.

This comes from the fact that the information which really matters is generally the ratio of two different likelihoods.

For instance, Likelihood(Normal(m, s), S) / Likelihood(Normal(m', s'), S).
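In such a ratio the constant (2*Pi)^(-n/2) cancels, which is why dropping it is harmless. A small Python sketch of this cancellation (the sample and the two parameter pairs standing in for (m, s) and (m', s') are illustrative):

```python
import math

# Illustrative sample of size n = 5
S = [4.9, 5.1, 5.3, 4.7, 5.0]

def kernel(x, mu, sigma):
    # Normal density WITHOUT the 1/sqrt(2*Pi) factor
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / sigma

def L_full(mu, sigma):
    # Full likelihood: the 1/sqrt(2*Pi) factor included for each observation
    p = 1.0
    for x in S:
        p *= kernel(x, mu, sigma) / math.sqrt(2 * math.pi)
    return p

def L_kernel(mu, sigma):
    # Likelihood up to the constant (2*Pi)^(-n/2)
    p = 1.0
    for x in S:
        p *= kernel(x, mu, sigma)
    return p

# The constant cancels in the ratio, so both versions carry the same information
ratio_full = L_full(5.0, 0.2) / L_full(5.2, 0.3)
ratio_kernel = L_kernel(5.0, 0.2) / L_kernel(5.2, 0.3)
```

Both ratios are equal (up to floating-point rounding), so, presumably, this is why Maple feels free to drop the 1/sqrt(2*Pi) factors.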

In any event, thanks for your clarification, which has the merit of bringing me back to my student years.