## Are you sure...

that nothing is wrong BEFORE these lines?
For instance that you didn't write

`segment(segm[A,B]):  # no comma after segm`

?

It works well in Maple 2015

 > restart:
 > kernelopts(version)
 (1)
 > with(geometry):
 > point(A,0,0),point(B,1,1):
 > segment(segm, [A,B]):
 > try
 > a := map(coordinates, DefinedAs(segm))[1];
 > b := map(coordinates, DefinedAs(segm))[2];
 > catch "wrong type of arguments":
 > Alert(cat("Error, (in geometry:-DefinedAs) wrong type of arguments, variable segm: ", whattype(segm), segm), table(), 5)
 > end try;
 >
 (2)
 >

## @Carl Love  Just for fun: I suppos...

@Carl Love

Just for fun: I suppose you knew Ramanujan's anecdote about the number 1729?
If not, here it is: https://vedicmathschool.org/story-of-hardy-ramanujan-number-1729/

## @vv You're right, I made a conf...

You're right, I confused the elements of the sigma-algebra, which are sets, with the set of elements from which this sigma-algebra is built.

@Zeineb

Taking into account @vv's remark, here is an edited version of the code I delivered previously.
I believe there is a simpler way to build the sigma algebra generated by a subset S of a set of sets.
Some examples are given in the attached file.

 > restart

PROPOSAL

 > kernelopts(version)
 (1)

WATCH OUT: the code below might not work with Maple 2023
(see here https://www.mapleprimes.com/questions/236556-Solve-Equation-With-Parameter-Using-Maple#comment295355).

As I do not have Maple 2023 I cannot guarantee the following code will work correctly.

 > # Here is what C looks like with Maple 2015.2
 > C := {RealRange(0, Open(1)), RealRange(Open(1), 2), {1}};
 > # The point is that not all C components are sets
 > type~([C[]], set);
 (2)
 > # Step 1: separate components depending on whether they are sets or not
 > Cs, Cns := selectremove(type, C, set);
 (3)
 > # Initialize A by components of type set
 > PointSets := map(a -> {z=a[]}, Cs):
 > A := copy(PointSets);
 (4)
 > # Complete A with the Cns elements after having represented them as sets.
 > #
 > # Step 1: build the sets corresponding to each Cns component
 > ContinuousSets := `or`(map(c -> {convert(z in c, relation)}, Cns));
 > # Step 2: assemble all sets
 > A := A union ContinuousSets
 (5)
 > # Sigma-algebra generated by a subset of A
 > # EXAMPLE 1
 > # Let S be some subset of C
 > S := {C[1]}:
 > # Or, in a more convenient form
 > S := {A[1]};
 > # Then the sigma-algebra generated by S is
 > sigma := { {}, S, A minus S, A }:
 > print~(sigma):
 (6)
 > # EXAMPLE 2
 > # Let S be some subset of A
 > S := {A[1], A[3]};
 > # Then the sigma-algebra generated by S is
 > sigma := { {}, S, A minus S, A }:
 > print~(sigma):
 (7)
 > # EXAMPLE 3
 > # Let S be some set
 > S := {A[3], {z=2}};
 > # Check if S is a subset of A and, if so, build the sigma-algebra.
 > if S subset A then
 >   sigma := { {}, S, A minus S, A }:
 >   print~(sigma):
 > else
 >   error cat("S (", S, ") is not a subset of A (", A, ")")
 > end if;
 >
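Not Maple, but just to illustrate the "simpler way" I alluded to: here is a minimal Python sketch (the function name `sigma_algebra` is mine, purely illustrative) that generates the sigma-algebra of a finite family by brute-force closure under complement and pairwise union. Applied to EXAMPLE 1 it reproduces the four-element family { {}, S, A minus S, A }.

```python
from itertools import combinations

def sigma_algebra(universe, generators):
    """Close a family of subsets (frozensets of atoms) of a finite
    universe under complement and pairwise union until a fixed point."""
    U = frozenset(universe)
    fam = {frozenset(), U} | {frozenset(g) for g in generators}
    changed = True
    while changed:
        changed = False
        for s in list(fam):
            if U - s not in fam:          # closure under complement
                fam.add(U - s)
                changed = True
        for a, b in combinations(list(fam), 2):
            if a | b not in fam:          # closure under (finite) union
                fam.add(a | b)
                changed = True
    return fam

# Mimic EXAMPLE 1: the universe split into the three atoms [0,1), {1}, (1,2]
atoms = {"[0,1)", "{1}", "(1,2]"}
sa = sigma_algebra(atoms, [{"[0,1)"}])
# sa is { {}, S, A minus S, A } -- four elements, as in the worksheet
```

For a finite universe this brute-force closure always terminates, since the family can only grow inside the finite power set.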

## @vv I don't understand.A sigma-...

I don't understand.
A sigma-algebra T(S) of a set S is a subset of P(S) (the power set of S), whose cardinality is of course a power of 2.
But there is no reason for the cardinality of T(S) to be a power of 2 too.
Am I right?

In any event I didn't verify whether the set produced by ALG was indeed closed under complementation and countable union, nor whether it correctly computes the complement of a set of the form {p <= z < q} (which I doubt, for, as you noted, ALG is essentially written for finite sets).
This is why I advised the OP to check whether their procedure ALG works correctly for uncountable sets.
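For the record, checking those two closure properties is cheap when the universe is finite. Here is a small Python sketch (my own helper, not part of ALG) of the verification I had in mind:

```python
def is_sigma_algebra(universe, family):
    """Check, for a FINITE universe, that `family` contains the empty set
    and the universe, and is closed under complement and pairwise union.
    (For finite families, pairwise-union closure implies countable-union
    closure, so this is a complete check in the finite case only.)"""
    U = frozenset(universe)
    fam = {frozenset(s) for s in family}
    if frozenset() not in fam or U not in fam:
        return False
    for s in fam:
        if U - s not in fam:
            return False
    return all(a | b in fam for a in fam for b in fam)

atoms = {1, 2, 3}
ok  = is_sigma_algebra(atoms, [set(), {1}, {2, 3}, {1, 2, 3}])   # True
bad = is_sigma_algebra(atoms, [set(), {1}, {1, 2, 3}])           # False: {1}'s complement missing
```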

## Yet it's clear...

@Saha

the error message
Warning, expecting only range variable t in expression sin(x)-sin(x)*t+1/2*sin(x)*t^2-1/6*sin(x)*t^3 to be plotted but found name x

means that you want to plot wrt t something which also depends on x.
So either give x a value or use plot3d (see the corresponding help pages).

Have you observed that the function you want to plot is of the form

`f(t, x) = sin(x)*g(t)`

?
So why not try to plot simply g(t) ... which is the beginning of the Taylor expansion of exp(-t) at t=0.
Look at these plots:

```plot3d( sin(x)*mtaylor(exp(-t), t=0, 4) , t=-5..5, x=-2*Pi..2*Pi);
plot3d( sin(x)*exp(-t) , t=-5..5, x=-2*Pi..2*Pi);
```
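As a quick numerical cross-check (in Python rather than Maple) that the expression in the warning really is sin(x) times the degree-3 Taylor polynomial of exp(-t):

```python
import math

def taylor_exp_neg(t, order=3):
    """Degree-`order` Taylor polynomial of exp(-t) at t = 0:
    1 - t + t^2/2 - t^3/6 + ..."""
    return sum((-t) ** k / math.factorial(k) for k in range(order + 1))

# The expression from the warning,
#   sin(x) - sin(x)*t + sin(x)*t^2/2 - sin(x)*t^3/6,
# factors as sin(x) * taylor_exp_neg(t, 3):
x, t = 0.7, 0.3
warned = math.sin(x) - math.sin(x)*t + math.sin(x)*t**2/2 - math.sin(x)*t**3/6
assert abs(warned - math.sin(x) * taylor_exp_neg(t, 3)) < 1e-12
```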

## @NanaP There are always questions t...

There are always questions to which several players contribute and others that remain more "confidential".

You'll quickly understand all this with a little practice.

The most upsetting thing (even for me) is that you'll never know why your question remains unanswered (is it that no one has an answer to offer, or is it that the problem is so uninteresting or so poorly presented that no one cares to answer it?)

So a good practice is probably to present your problem in the most concise, but nevertheless detailed, form and to deliver what you have been able to achieve with Maple (generally people don't like having to work from an image, a reference, or a piece of some article to figure out what the OP [the acronym used to designate the person who asked the question] wants).

Concerning successive questions, I agree that when no one has given you a reply, it's very tempting to bring up a question again. But you have to account for time zones: half a day may separate a question and its answer.

Concerning the difficulty of finding Maple solutions or algorithms on the web, I agree that it is extremely difficult.
The items you get generally refer to Mapleprimes. A piece of advice: even if the Search/Advanced Search feature could be improved, do not hesitate to browse among the Questions and Posts to see if similar problems have not already been answered.

Hope to see you soon.

To conclude this thread, I believe, here is a more complete version of BDS_2.mw where I added several estimators of the left and right endpoints of the underlying distribution, given a sample from it and the knowledge that it is left and right bounded.
You will see that the estimators are quite good if the distribution is "smooth" and quite rough if it is sharp (here left-peaked).
BSD_2b.mw

If you'd like to dig further into the subject, here are some companion articles (understanding them requires some knowledge of Extreme Value Theory).
https://boris.unibe.ch/36886/1/10687_2007_Article_52.pdf
https://arxiv.org/pdf/1412.3972.pdf
https://projecteuclid.org/journals/annals-of-statistics/volume-12/issue-4/Estimating-an-Endpoint-of-a-Distribution-with-Resampling-Methods/10.1214/aos/1176346811.full

https://mendel-journal.org/index.php/mendel/article/view/28/30

UPDATED
(simpler definition of the SubjectiveBeta Distribution)

COMMENT 1
You should avoid posting a new reply before the previous one has received an answer.

COMMENT 2:
Note from @NanaP: The Use_formulas.mw file by @mmcdara stopped at the 25 percentile "theoretical" calculation and did not go beyond this point until a histogram of simulated values is created

You're partially wrong: the first file I sent you (BDS.mw) "goes as far as the histogram", maybe you didn't pay attention?

My file Use_formulas.mw contains indeed some errors: I used the moments to compute the Skewness and Kurtosis instead of the central moments.
Apologies.

Here is a file which "[achieve] the ultimate goal of being able to simulate from a custom defined distribution of SB(minimum, mode, mean, maximum) .",  although I didn't particularly like the way you worded it.

The procedure SubjectiveBeta builds a "Subjective Beta Distribution" given its mean, mode, min value and max value

```SubjectiveBeta := proc(me, mo, P, Q)
# me = mean
# mo = mode
# P  = xmin
# Q  = xmax
```
 > restart:
 > with(Statistics):
 > # Basically you want to parameterize a Beta distribution by its mean and its mode
 > # (the xmin and xmax stuff is just a translation and a scaling).
 > #
 > # So let's derive the expressions of the usual parameters alpha and beta from the
 > # mode and the mean.
 > U  := RandomVariable('Beta'(alpha, beta)):
 > ME := Mean(U);
 > MO := Mode(U);
 > solve({ME=MyMean, MO=MyMode}, {alpha, beta}); # no solution found
 (1)
 > # No solution being found, let's assume alpha > 1, beta > 1 and try to find one.
 > MO := Mode(U) assuming alpha > 1, beta > 1;
 > reparam := solve({ME=MyMean, MO=MyMode}, {alpha, beta});
 (2)
 > # Here we get a solution provided MyMean <> MyMode.
 > #
 > # The developments which follow make sense only if alpha > 1, beta > 1 and
 > # MyMean <> MyMode.
 > #
 > # I already pointed out the absurdity of such a re-parameterization but it is
 > # your responsibility to do it and not of my concern.
 > # Preliminary work to get the desired formulas
 > U := RandomVariable('Beta'(alpha, beta)):
 > V := x__min + (x__max-x__min)*U:
 > eval(reparam, {MyMean=me, MyMode=mo}):
 > eval(reparam, {MyMean=(me-P)/(Q-P), MyMode=(mo-P)/(Q-P)}):
 > simplify(%);
 (3)
 > SubjectiveBeta := proc(me, mo, P, Q)
 >    # me = mean
 >    # mo = mode
 >    # P  = xmin
 >    # Q  = xmax
 >    local A  := (-2*mo+P+Q)*(-me+P)/((me-mo)*(-Q+P));
 >    local B  := -(-2*mo+P+Q)*(-me+Q)/((me-mo)*(-Q+P));
 >    local f0 := unapply(PDF('Beta'(A, B), t), t);
 >    local F0 := unapply(CDF('Beta'(A, B), t), t);
 >    printf("According to Maple help pages : %a\n", [nu=A, omega=B]);
 >    Distribution(
 >      PDF = unapply(f0((x-P)/(Q-P)) / (Q-P), x),
 >      CDF = unapply(F0((x-P)/(Q-P)), x),
 >      Mean = me,
 >      Variance = (Q-P)^2*Variance('Beta'(A, B)),
 >      Skewness = simplify(CentralMoment('Beta'(A, B), 3) / Variance('Beta'(A, B))^(3/2)),
 >      Kurtosis = simplify(CentralMoment('Beta'(A, B), 4) / Variance('Beta'(A, B))^2),
 >      Mode = mo,
 >      Conditions = [Q > P],
 >      RandomSample = proc(N::nonnegint)
 >                       P +~ (Q-P) *~ Sample('Beta'(A, B), N)
 >                     end proc
 >    )
 > end proc:
 > BSD := RandomVariable(SubjectiveBeta(1500, 1400, 1000, 2100))
 According to Maple help pages : [nu = 15/11, omega = 18/11]
 (4)
 > N := 10^4:
 > S := Sample(BSD, N):
 > f := PDF(BSD, x):
 > plots:-display(
 >   Histogram(S),
 >   plot(f, x=1000..2100, color=red, thickness=2)
 > );
 > plots:-display(
 >   ScatterPlot(sort(S), [$1..N]/~N, symbol=point, legend="Empirical CDF"),
 >   plot(CDF(BSD, x), x=1000..2100, color=red, thickness=2, legend="CDF")
 > );
 > 'Mean'          = evalf(Mean(BSD));
 > 'EmpiricalMean' = Mean(S);
 > print():
 > 'Variance'          = evalf(Variance(BSD));
 > 'EmpiricalVariance' = Variance(S);
 > print():
 > 'Skewness'          = evalf(Skewness(BSD));
 > 'EmpiricalSkewness' = Skewness(S);
 > print():
 > 'Kurtosis'          = evalf(Kurtosis(BSD));
 > 'EmpiricalKurtosis' = Kurtosis(S);
 > print():
 > 'Mode'          = evalf(Mode(BSD));
 > 'EmpiricalMode' = Mode(S);
 (5)
 > BSD := RandomVariable(SubjectiveBeta(1500, 2000, 1000, 2100))
 According to Maple help pages : [nu = 9/11, omega = 54/55]
 (6)
 > plot(PDF('Beta'(9/11, 54/55), t), t=0..1);
 > Mode('Beta'(9/11, 54/55));
 (7)
 > S := Sample(BSD, 10^4):
 > f := PDF(BSD, x):
 > plots:-display(
 >   Histogram(S),
 >   plot(f, x=1000..2100, color=red, thickness=2)
 > );
 > 'Mean'          = evalf(Mean(BSD));
 > 'EmpiricalMean' = Mean(S);
 > print():
 > 'Variance'          = evalf(Variance(BSD));
 > 'EmpiricalVariance' = Variance(S);
 > print():
 > 'Skewness'          = evalf(Skewness(BSD));
 > 'EmpiricalSkewness' = Skewness(S);
 > print():
 > 'Kurtosis'          = evalf(Kurtosis(BSD));
 > 'EmpiricalKurtosis' = Kurtosis(S);
 > print():
 > 'Mode'          = evalf(Mode(BSD));
 > 'EmpiricalMode' = Mode(S);
 (8)
 > BSD := RandomVariable(SubjectiveBeta(1500, 1500, 1000, 2100));
 >
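If you want to double-check the shape parameters outside Maple, here is a small Python transcription (exact rational arithmetic; the function name is mine) of the two `local A, B` formulas from SubjectiveBeta. It reproduces the nu = 15/11, omega = 18/11 printed above:

```python
from fractions import Fraction

def subjective_beta_params(me, mo, P, Q):
    """The two shape parameters used inside SubjectiveBeta, derived from
    mean `me`, mode `mo` and support [P, Q] (requires me != mo)."""
    A = (-2*mo + P + Q) * (-me + P) / ((me - mo) * (-Q + P))
    B = -(-2*mo + P + Q) * (-me + Q) / ((me - mo) * (-Q + P))
    return A, B

A, B = subjective_beta_params(*map(Fraction, (1500, 1400, 1000, 2100)))
# Worksheet output for these inputs: [nu = 15/11, omega = 18/11]
assert (A, B) == (Fraction(15, 11), Fraction(18, 11))

# Sanity check on the standard-Beta scale:
# mean = A/(A+B) and mode = (A-1)/(A+B-2) must match the rescaled inputs.
assert A / (A + B) == Fraction(1500 - 1000, 2100 - 1000)
assert (A - 1) / (A + B - 2) == Fraction(1400 - 1000, 2100 - 1000)
```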

I hope that's all right with you, because I'm done with this subject.

## @NanaP  I'm not sure I underst...

@NanaP

I'm not sure I understood correctly so correct me if I'm wrong:

• You have a random variable X with a SB (short for "subjective Beta") distribution of parameters xmin, xmax, alpha, beta (I'm not sure of this point, maybe they are xmin, xmax, mode, mean ?).
• You have a sample S of X, let's say of size N, from which you compute some empirical statistics, for instance
the mean
```m := (1/N)*add(s[n], n=1..N)
```
• Some facts are:
• the above empirical statistic is the best unbiased asymptotic estimator of Mean(X) (of course you already knew that);
• the classical estimation of the variance
`v := (1/(N-1))*add((s[n]-m)^2, n=1..N)`

also has this same property.

• The case of Skewness and Kurtosis is a little bit more complex. If I'm not mistaken there are no formulas for unbiased estimators (nearly unbiased estimators do exist).

• The case of xmin and xmax is even more complex.
I have some material concerning what is called "the endpoint distribution", that is the estimation of xmin or xmax from sample S.
Here is a first reference, https://arxiv.org/pdf/1412.3972.pdf, do not hesitate to ask for some more.

• Concerning the estimation of the mode there is no unbiased estimator, but a lot of estimators do exist.
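The first bullets can be sketched numerically. A Python sketch (for brevity; the xmax estimator below is the naive "largest observation plus last spacing" one, far cruder than the resampling-based estimators in the references):

```python
import random

def sample_stats(s):
    """Sample mean and the unbiased (1/(N-1)) variance estimator
    discussed above."""
    n = len(s)
    m = sum(s) / n
    v = sum((x - m) ** 2 for x in s) / (n - 1)
    return m, v

def naive_right_endpoint(s):
    """A classic naive estimator of the right endpoint xmax:
    x(n) + (x(n) - x(n-1)), i.e. largest observation plus last spacing."""
    t = sorted(s)
    return t[-1] + (t[-1] - t[-2])

random.seed(0)
s = [random.uniform(0.0, 1.0) for _ in range(10_000)]
m, v = sample_stats(s)        # close to 1/2 and 1/12 for Uniform(0, 1)
xr = naive_right_endpoint(s)  # close to the true endpoint, 1
```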

## @NanaP Thank you for introducing me...

Thank you for introducing me to something I didn't know about, even though I've been working in a parallel domain for a long time.

Hope my contribution will be useful

## You're right...

@acer
I had observed on my own that isolate didn't work for this example.
So you are right saying that the imaginary unit is not the cause of my difficulty in the sense that I could faced them without having I.

It's just that for the particular case b=1 isolate works if c <> I, and not instead, so my quick jump to an erroneous conclusion.

## @acer  Thank you acer for all thes...

@acer

Thank you acer for all this information.

## Seems you ask two different questions, do...

The case

`Min := 1000; Mlikely := 1400; Avg := 1500; Max := 2100;`

can be solved by using the definitions of Mean, Variance, Skewness, Kurtosis, Mode and Quantile:

The Beta Subjective Distribution

 (1)

 (2)

 (3)

 (4)

 (5)

 (6)

 (7)

 > plot(f, x=Min..Max, gridlines=true)
 > mom := r -> evalf(Int(x^r*f, x=Min..Max));
 > _Mean     := mom(1);
 > _Variance := mom(2)-_Mean^2;
 > _Skewness := mom(3) / mom(2)^(3/2);
 > _Kurtosis := mom(4) / mom(2)^2:
 > Optimization:-Maximize(f, x=Min..Max);
 > _Mode := eval(x, %[2]);
 > Q := z -> evalf(Int(f, x=Min..z)):
 > p := 0.25;
 > _Q__||p := fsolve('Q(z)'=p, z=`if`(p < 0.5, Min, Max));
 (8)
 >

Note that the mode is not necessarily unique (Beta(a, a) has two modes, at x=0 and x=1, when a < 1 and a continuum of modes when a=1).
So, I don't know what you want to do, but this re-parameterization you want to use is extremely dangerous.

The Beta Subjective Distribution

with(Statistics):

 (1)

 (2)

 (3)

About the Beta distribution there are only, up to my knowledge, these parameterizations:

• the two shape parameters (a) and (b), the most common parameterization and the one Maple adopts;
• mean (M) and variance (V);
• mean and precision (inverse of variance, noted tau below);

As you can see, there is a 1-to-1 map between each pair of these parameterizations:

```m := Mean(Beta(a, b)):
v := Variance(Beta(a, b)):
solve({m=M, v=V}, {a, b});

solve({m=M, 1/v=tau}, {a, b});
/        / 2        \      / 2        \        \
|      M \M  - M + V/      \M  - M + V/ (M - 1)|
< a = - --------------, b = -------------------- >
|            V                      V          |
\                                              /
/      3        2              / 2                \        \
{ a = -M  tau + M  tau - M, b = \M  tau - M tau + 1/ (M - 1) }
\                                                          /
```
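A quick numerical sanity check (Python, exact rationals; function names are mine) that the first closed form printed by solve really inverts (a, b) -> (mean, variance):

```python
from fractions import Fraction as F

def beta_mean_var(a, b):
    """Mean and variance of Beta(a, b)."""
    m = a / (a + b)
    v = a * b / ((a + b) ** 2 * (a + b + 1))
    return m, v

def beta_from_mean_var(M, V):
    """Invert (a, b) -> (mean, variance), using the closed form
    printed by solve above."""
    a = -M * (M**2 - M + V) / V
    b = (M**2 - M + V) * (M - 1) / V
    return a, b

a, b = F(2), F(3)
M, V = beta_mean_var(a, b)                 # 2/5 and 1/25
assert beta_from_mean_var(M, V) == (a, b)  # the round trip is exact
```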

This VOSE article you refer to seems to be oriented toward risk assessment, for which this Subjective Beta Distribution might be, according to the authors, and likely under some restrictions, an interesting tool.
I have been working in the domain of Reliability Analysis (assessment of probabilities of failure due to extreme events in engineering, something probably not that far from the "risk assessment" VOSE talks about?) for more than 25 years and I had never heard of the approach developed by VOSE (but each domain develops its own methods with little inter-operability).

Probability ontology
http://www.math.wm.edu/~leemis/2008amstat.pdf
https://en.wikipedia.org/wiki/Relationships_among_probability_distributions

Different distributions
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013898/bin/supp_btw170_Swat_et_al_Supplementary1_ProbOnto_KBprintout.pdf

Different parameterizations
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5013898/bin/supp_btw170_Swat_et_al_Supplementary2_ProbOnto_useInPharmML.pdf

## @NanaP Does it mean you want to re-...

Does it mean you want to re-parameterize a Generalized Beta distribution by min, max, mode and mean instead of alpha, beta, min and max?

If so you will have a problem because the relation (alpha, beta) --> (mode, mean) is not a 1-to-1 mapping.
Take the simplest case alpha > 1, beta=alpha, xmin=0, xmax=1 (thus the Generalized Beta is just Beta(alpha, alpha)).
As Mean(Beta(alpha, alpha)) = Mode(Beta(alpha, alpha)) = 1/2 whatever alpha > 1, a re-parameterization of the Beta distribution in terms of mode and mean doesn't say anything about the pdf, cdf, variance, ... of this distribution.
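To make the non-identifiability concrete, here is a tiny Python check (exact rationals): for every alpha > 1, Beta(alpha, alpha) has mean = mode = 1/2, yet its variance changes with alpha, so (mode, mean) cannot pin the distribution down.

```python
from fractions import Fraction as F

def beta_mean_mode_var(a):
    """Mean, mode and variance of the symmetric Beta(a, a), a > 1."""
    mean = a / (a + a)              # always 1/2
    mode = (a - 1) / (a + a - 2)    # (a-1)/(2a-2) = 1/2 for every a > 1
    var  = a * a / ((a + a) ** 2 * (a + a + 1))   # = 1/(4*(2a+1))
    return mean, mode, var

stats = {a: beta_mean_mode_var(F(a)) for a in (2, 3, 10)}
# (mean, mode) is (1/2, 1/2) for every alpha, yet the variances all differ:
assert all(m == F(1, 2) and mo == F(1, 2) for m, mo, _ in stats.values())
assert len({v for _, _, v in stats.values()}) == 3
```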

## You're right...

@C_R

but the expression

`I*Int(f(x), x) = something - 2*I*Int(f(x), x);`

comes from some computation and I didn't want to extract the Int component from its left-hand side and isolate it from the equality by hand.
In fact I wrote f(x) but I don't even know a priori what f(x) is.
So you can do something like this:

```expr := I*Int(f(x), x) = something - 2*I*Int(f(x), x):
J := select(has, [op(lhs(expr))], Int)[]:
isolate(expr, J)
/
|             1
|  f(x) dx = -- I something
|             3
/
```
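The algebra behind that result is elementary: I*J = s - 2*I*J gives 3*I*J = s, hence J = s/(3*I) = -I*s/3. A numerical cross-check in Python (J standing in for the integral, s for "something"):

```python
def isolate_int(s):
    """Solve I*J = s - 2*I*J for J by hand: 3*I*J = s, so
    J = s/(3*I) = -I*s/3. Here J stands in for Int(f(x), x)."""
    return s / 3j

# Check the isolated value satisfies the original equation for a few s:
for s in (1 + 0j, 2 - 5j, -0.5 + 3j):
    J = isolate_int(s)
    assert abs(1j * J - (s - 2j * J)) < 1e-12   # original equation holds
    assert abs(J - (-1j * s / 3)) < 1e-12       # matches -I*s/3
```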

The point is that if you replace I by some constant c, then isolate works as expected:

```c*Int(f(x), x) = something - 2*c*Int(f(x),x):
isolate(%, lhs(%))
/  /        \
| |         |   1
c | |  f(x) dx| = - something
| |         |   3
\/          /
I*Int(f(x), x) = something - 2*I*Int(f(x),x):
isolate(%, lhs(%))
/  /        \                   /  /        \
| |         |                   | |         |
I | |  f(x) dx| = something - 2 I | |  f(x) dx|
| |         |                   | |         |
\/          /                   \/          /
```