sand15

350 Reputation

10 Badges

5 years, 81 days

MaplePrimes Activity


These are replies submitted by sand15

@Carl Love 

 

I did not want to sound offensive.
Put it down to blunders due to the fact that English is not my native language; words do not have the same harshness from one language to another.

About the way I stated the problem there are, I agree, a lot of things to say, and I willingly recognize that I should have done better.


And don't go thinking that I'm one of those people who never admits to having made an error.
I can assure you I never thought this about you.

So, I'm sorry if I have hurt you.
Let's leave it there, and please accept my apologies.
 

@Carl Love

CodeTools[Usage] being rather inefficient for estimating the computational time (see my new question), I did this:

t0 := time():
for r from 1 to 100 do   Code(...) end do:
time()-t0;

where "Code" is one of "NimMatrix first version", "NimMatrix last version", and "My best code".

CONDITIONS: BASE = 5, K = 3 (==> NimMatrix(5, 3))

The results (Maple 2018 / Windows 7) are
"NimMatrix last version"    1.529 s  --->  15.29 ms / run
"NimMatrix first version"   2.371 s  --->  23.71 ms / run
"My best code"              5.273 s  --->  52.73 ms / run

As you've seen earlier, my first comparisons were made using CodeTools[Usage].
I tried the same here, running 10 times each of these codes and doing a restart before each new run. 
The cpu times I obtained for
"NimMatrix last version"    32, 63, 31, 62, 63, 32, 78, 54, 63, 63
"NimMatrix first version"   63, 47, 62, 47, 46, 47, 78, 47, 62, 46
"My best code"              62, 78, 78, 78, 63, 62, 62, 31, 78, 78

I didn't compute the means of the CPU times, but several things can be observed:

  1. The times vary over a relatively large ratio (~2.5 for "NimMatrix last version" or "My best code").
  2. CodeTools[Usage] seems to deliver rounded values (strangely, 78 and 63 appear many times).
  3. The times are roughly the same, and no clear ranking can be deduced from them (even if tendencies can be inferred).
  4. The times are significantly larger than those obtained by the first method (which returns the real time, necessarily larger than the CPU time).


I guess that the strong differences between the results produced by the two methods come from the fact that the first one (r = 1..100) does no restart between runs.
My code doesn't use any remember option and is probably less sensitive to the lack of restart (besides, you may notice that
52.73 ms/run is not, off the top of my head, far from the mean of [62, 78, 78, 78, 63, 62, 62, 31, 78, 78]).
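For completeness, both CPU and wall-clock times can be captured in a single loop. This is just a sketch of the procedure described above, where Code is a placeholder for the procedure being timed:

```
# Sketch of the timing harness used above; Code is a placeholder
# for "NimMatrix first version", "NimMatrix last version", or "My best code".
N := 100:
t_cpu  := time():        # CPU time so far
t_real := time[real]():  # wall-clock time so far
for r from 1 to N do Code() end do:
(time() - t_cpu)/N;         # mean CPU seconds per run
(time[real]() - t_real)/N;  # mean real (wall-clock) seconds per run
```

Measuring both in the same loop makes the CPU-versus-real-time gap visible directly, without relying on two separate experiments.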

 

@nm 

I agree.
In fact there are many ways to perform this addition.
At the beginning I thought that some bitwise operations existed in Maple, but I failed to find them.
Then I decided to do my own coding and faced the problem of lists (a and b) of different lengths (typically I need to compute "A plus B" for each pair in {0..3^n}^2; of course the relation "A plus B" = "B plus A" halves the number of operations).

I first wrote something more or less the same as your solution (maybe a little bit longer) and found it too complicated, because it also used ListTools[Reverse].
This is why I came to use a polynomial representation (my "pa := add(a[k]*x^(k-1), k=1..numelems(a))").
But this looked rather "artificial".
Using gfun seemed promising because the coding was even shorter ... but I faced some difficulties at the last steps.

The key of your solution lies in the line c := parse(cat(op(a)))+parse(cat(op(b)));
It is a more astute way to handle lists of different lengths than using polynomials.

Thank you for your answer
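For the record, the carry-free digitwise addition can also be computed directly on the lists, padding the shorter one with zeros. A minimal sketch (the name NimAdd and the padding convention are mine):

```
# Digitwise addition modulo BASE ("nim addition"), padding the
# shorter list with zeros so both lists have the same length.
NimAdd := proc(a::list, b::list, BASE::posint)
  local n, A, B, k;
  n := max(numelems(a), numelems(b));
  A := [op(a), 0 $ (n - numelems(a))];
  B := [op(b), 0 $ (n - numelems(b))];
  [seq(irem(A[k] + B[k], BASE), k = 1..n)]
end proc:

NimAdd([1, 2], [2, 2, 1], 3);  # [0, 1, 1]
```

This avoids both the string round-trip of parse/cat and the polynomial representation, at the cost of the explicit padding step.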
 

@acer 

Absolutely right!
I'm ashamed not to have thought of that myself :-(

Of course the few questions I posted here are just a part of a more general program but your proposal could answer them.

Thank you, acer.

@acer

For information ...

I keep using the Maplets package (probably not many people here still do that), and all its help pages quote option names and option values.

 

@Adam Ledger 

Surely an ambitious and interesting initiative!

As I said, working with Maplets seems to be very uncommon these days.
It feels as if Embedded Components and interactive documents are on the way to taking the lead now. I keep finding interesting features in Maplets, even if they suffer some drawbacks (their programming complexity and the lack of possibilities to debug them efficiently).



BTW, when I said that "I did not have MAPLE right now", I just wanted to say that I was on a terminal where Maple wasn't installed. But I have it in my office and at home ... so don't worry, I don't need to try and get an older version on the web :-)

@vv

Thanks (which implies I must be very careful in using them).

@Carl Love 
Hi,

I often use variables of the form X__"something", for instance
for k from 1 to K do
   X__||k := "some expression"
end do:

If this sequence of instructions is part of a procedure body, then I receive no warning if the variables X__1 ... X__K have not been explicitly declared as local.
This has always surprised me, but I never went further.
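If it helps, my understanding is that names built at run time with the concatenation operator || are global, and Maple's implicit-local mechanism only applies to names appearing literally on the left-hand side of an assignment, which would explain the absence of any warning. A small sketch:

```
# X__||k builds the names X__1, X__2, ... at run time; they are
# global, so no "implicitly declared local" warning is emitted.
p := proc(K::posint)
  local k;
  for k from 1 to K do
    X__||k := k^2   # assigns to the GLOBAL name X__k
  end do
end proc:
p(3):
X__2;  # 4: the assignments persist outside the procedure
```

A side effect worth noticing: such assignments escape the procedure, which may or may not be what one wants.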

@Thomas Richard 

Thanks Thomas

@nm 

 

Sorry!
Thanks for the answer; I'm going to look at the link you gave.

@Carl Love

 

"the likelihood function is only properly defined for distributions with at least one symbolic parameter" 

Right, the likelihood function ... is a function which must depend upon some parameters (those of the target distribution).

Indeed, the Maple help pages say that:

[likelihood] n. (Statistics) the probability of a given sample being randomly drawn, regarded as a function of the parameters of the population. 

 

So I should have read those pages more carefully ...


If S is a sample, D some distribution with parameters P, and L denotes the likelihood (function), the expression of L is often written
L(D(P) ; S) to emphasize that L is considered as a function of P (or of D(P)).
Once P is instantiated to some values P*, L(D(P*) ; S) becomes a number.
My mistake comes from the common usage of the term likelihood, which may represent either the likelihood function itself, L(D(P) ; S), or its value L(D(P*) ; S) ... and in this latter case we often talk about the "likelihood of the sample S" (as it is the probability density of S given D(P*)).
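In Maple terms, this distinction between the function and its value can be seen with Statistics:-Likelihood on a distribution with symbolic parameters (the sample S below is purely illustrative):

```
with(Statistics):
S := [0.2, -0.5, 1.1]:              # an illustrative sample
L := Likelihood(Normal(m, s), S):   # an expression in the parameters m, s
eval(L, [m = 0, s = 1]);            # instantiating P = (m, s) yields a number
```

The unevaluated L plays the role of L(D(P) ; S), and the eval result that of L(D(P*) ; S).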

 

____________________________________________________________

When you write "What is the likelihood, or probability, that you've correctly estimated the parameters when there are no parameters? Of course it's 1."
I'm not completely sure of that.
Admittedly, from a Bayesian perspective, we can write something like p(S) = int( p(S | P)*p(P), dP ), where p(P) is some prior on P.
Rewriting this integral in terms of the likelihood, we have p(S) = int( L(P ; S)*p(P), dP ) = 1 ... which seems to confirm your claim, except that there exists no distribution without parameters: then "... when there are no parameters? Of course it's 1." doesn't seem to make sense.
Maybe Maple uses some shortcut to return the value 1 ?

 

____________________________________________________________


"What I'm wondering is What happened to the factors of 1/sqrt(2*Pi) that usually appear in the Normal PDF?"
Here again we face some approximations of statistical language: in many situations the likelihood is considered as defined only up to an arbitrary multiplicative constant.
This comes from the fact that the information which really matters is generally the ratio of two different likelihoods.
For instance Likelihood(Normal(m, s), S) / Likelihood(Normal(m', s'), S).

In any event, thanks for your clarification, which has the merit of bringing me back to my student years.

 

@Carl Love 

 

By the way, thanks for making me discover the syntax op([2, 1, 1, 2], ...).
It is shorter (maybe slightly less clear) than the op(2, op(1, ...)) that I used to use.

@Kitonum 

Thank you.
It is exactly what I was expecting.

So I take it that Maple implicitly selects the first interval [-Pi/2, +Pi/2] when solve(sin(x)=y, x) returns arcsin(y).
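For what it's worth, the full solution family can be requested explicitly; a quick sketch:

```
# By default, solve picks the principal branch of the inverse:
solve(sin(x) = y, x);   # arcsin(y)

# Asking for all solutions returns the whole family, parametrized
# by auxiliary integer (_Z) and binary (_B) names:
_EnvAllSolutions := true:
solve(sin(x) = y, x);
```

Setting _EnvAllSolutions back to false (or restarting) restores the default principal-branch behaviour.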

@vv

You write
 "First note that there is not such thing as "global inverse" of f, unless f (supposed to be C^1) is strictly monotonic."
Thank you, vv, for this quick math reminder, but I know this perfectly well.

Maybe I should have written
"I want to construct some kind of pseudo global inverse of f over R by putting 'side by side' local inverse functions"
instead of
"I want to construct the global inverse of f over R by putting 'side by side' local inverse functions"
to be clearer?

I had thought that I was clear enough when saying
     The idea is to define the global inverse g of f over R by
     g := y ->  piecewise(y < f(a__1), g__0(y), ..., y < f(a__n), g__(n-1)(y))
     where g__p(y) is the inverse function of the restriction of f to ] a__p, a__(p+1) [

I realize it was not the case ... or maybe you were so scandalized (with good reason) by reading the first lines ("find the inverse of a non-monotonic function") that you did not keep reading the rest of my question?
I do not hold this against you: "This function is not strictly monotonic over R [and] I want to construct [its] global inverse" is really disturbing, and I guess that hearing this would make me hit the roof too.
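As an aside, a purely numeric variant of this "side by side" construction avoids symbolic local inverses altogether. A sketch with a hypothetical f (the names f and LocalInv are mine):

```
# f is strictly monotonic on each interval delimited by its critical
# points x = -1 and x = 1.
f := x -> x^3 - 3*x:

# Numeric local inverse: the solution of f(x) = y restricted to [lo, hi].
LocalInv := (y, lo, hi) -> fsolve(f(x) = y, x = lo..hi):

LocalInv(0, 1, 5);   # the root of x^3 - 3*x = 0 in [1, 5], i.e. sqrt(3)
```

Each monotonicity interval gets its own LocalInv call, and the piecewise glue then selects the branch from the value of y, exactly as in the g := y -> piecewise(...) construction quoted above.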

@Carl Love 

 

Thank you Carl.
My problem today concerns a polynomial function f, so your answer will be very valuable.

By the way: I have always been surprised that solve(sin(x)=y, x) returns arcsin(y), just as if the inverse of "sin" were defined everywhere.
If it is not too much to ask, could you say a little more about this?

Thanks for everything.
