mmcdara


MaplePrimes Activity


These are replies submitted by mmcdara

@anthei 

You're right: for an unknown reason, NonlinearFit uses w[i]^2 instead of w[i].
In my opinion this should be considered a bug, because nothing in the help pages explains or justifies this difference in the way the weights are treated.

 

 

To complete my previous answer: in the case where the additive noise is not stationary (that is, heteroskedastic) and has a variance which depends on the value of X, it's common to give more weight to observations made with a small measurement error (= noise).
These weights are commonly identified with the inverse of the variance V(x) of this error.
In this case you have to minimize the sum of ( obs[n] - Model(x[n]) )^2 / V(x[n]).
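To make this concrete, here is a minimal Maple sketch of such a weighted fit (the data and the variance model V(x) are invented for illustration; and given the w[i]^2 behaviour noted above, you may need to pass the square roots of the intended weights):

with(Statistics):
X := Vector([1.0, 2.0, 3.0, 4.0, 5.0]):
V := x -> 0.1*x^2:                            # assumed variance model of the noise
Y := Vector([1.6, 1.1, 0.9, 0.5, 0.6]):       # invented observations
W := Vector([seq(1/V(X[n]), n = 1 .. 5)]):    # weights = 1/V(x[n])
# aims at minimizing sum( W[n]*(Y[n] - model(X[n]))^2 )
NonlinearFit(a*exp(b*x), X, Y, x, weights = W);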
 

@Preben Alsholm 

 

Thank you for your detailed reply.

About the last point you mention, I'd never paid attention to the fact that dsolve turns floats into rationals.
There is probably a good reason to proceed this way (the same conversion is also used by PolyhedralSets and ..., for efficiency reasons I guess).
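For the record, a minimal example of this behaviour (my own toy ODE, not the one from this thread):

ode := diff(y(x), x) = 0.5*y(x):
dsolve(ode);
#   y(x) = _C1*exp(x/2)   <- the float 0.5 has been turned into the rational 1/2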

Thanks again
 

@Preben Alsholm 

Thank you, Preben, for the lengthy job you did.

I would like your opinion on a few points:

  • Do you think that some simplification of the raw solution in terms of other special functions could cancel the imaginary part?

  • Do you have any idea why the solution contains Kummer functions with rational parameters (an inner process of Maple seems to force a conversion to rationals)?

  • Do you think these imaginary parts could be avoided if the solution contained Kummer functions with real parameters instead of rational ones?
     

And, thanks again.

@acer 
Thank you, Acer, for this very detailed analysis. I did indeed feel a little bit as if I had been tinkering.
The method proposed by Rouben seems more judicious to me, even if I don't know enough about the internal mechanisms Maple uses to judge its relevance.

@Rouben Rostamian  

Thank you, Rouben, this is indeed a simpler way to do what I had in mind.
 

@vv 

Thanks for the answer.
The local ... statement can indeed be useful.
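For other readers: assuming the suggestion refers to the top-level local declaration introduced in Maple 2019 (a guess on my part, as the original statement is abbreviated), here is a minimal sketch:

local gamma:    # shadow the protected global constant gamma
gamma := 0.5:
gamma + 1;      # 1.5 : the local variable, not Euler's constant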

@vv 

Thanks for the solution.
I'm not at all a specialist of FE (for the moment I'm reading this book 100914book.pdf to discover this new field to me) but I have hoped a "fully resolved solution", that is a solution wich would not contain this arbitrary function a(x). Bur maybe it's not possible?

OH, SORRY, I JUST REALIZED I WROTE   F(1/x) = x^r* * F(x)   INSTEAD OF   F(1/x) = x^r * F(x)   (r > 0, F defined over R\{0}) !!!


I found (on the web) a solution for
F(a*x/y, b/y) = y^r * F(x, y)   with r = 3, a = -1, b = 1, but I simply don't understand how it was derived.

The problem occurs in the Bayesian treatment of straight-line fitting ("Bayesian linear regression" if you prefer).
In that case it's better to rewrite this FE in the form
F(-alpha/beta, 1/beta) = beta^3 * F(alpha, beta) (see here frequentism-and-bayesianism-4-bayesian-in-python, paragraph "prior on slope [beta] and intercept [alpha]").

I was looking for a more general solution in case r, a and b have different values.
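For what it's worth, the solution discussed in the cited blog post is, if I remember correctly, F(alpha, beta) = (1 + beta^2)^(-3/2), and it is easy to check in Maple that it does satisfy the FE (for beta > 0):

F := (alpha, beta) -> (1 + beta^2)^(-3/2):
simplify( F(-alpha/beta, 1/beta) - beta^3*F(alpha, beta) ) assuming beta > 0;
                               0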

@acer 

Thanks.
No, rsolve doesn't seem relevant, and dsolve can only work on special cases. I only knew the package gfun by name and had never looked at it closely.
I'm going to see if it can be of any use to me.

A lot of examples of the kind of equations I'm interested in are given here: functional-equations
(only ad hoc manual solutions driven by visual observation of the equations).

I've been stuck for some time trying to figure out equations like
F(1/x) = x^r * F(x)   (r > 0, F defined over R\{0})
or, more generally,
F(a*x/y, b/y) = y^r * F(x, y)   (a <> 0, b <> 0, r > 0)
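As a side note, one family of solutions of the first equation is F(x) = |x|^(-r/2) * G(x), where G is any function invariant under inversion, G(1/x) = G(x) (for instance G(x) = H(x + 1/x)). A quick Maple check of the basic particular solution for x > 0:

F := x -> x^(-r/2):
simplify( F(1/x) - x^r*F(x) ) assuming x > 0, r > 0;
                               0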

@Scot Gould 

Thanks for your feedback.
Feel free to contact me if you want to run the code and run into any difficulties.

@Scot Gould 
Is your code being reviewed by Maplesoft? I don't think so.
I've uploaded it again in this reply; please tell me if you're still unable to download it.
Bayesian_Inference_ABC+MCMC_NF_2.mw

PS: the file contains the last plots and weighs 2.3 MB: maybe it is too big to be downloaded correctly. In that case, here is the
same file without any plots (129 KB):
Bayesian_Inference_No_Plot.mw

@Preben Alsholm 

I have done some tests and your solution helps me a lot.

Thanks again

@Preben Alsholm @acer

I apologize for my late response.
Thanks for your reply.

You wrote: "But you might want to unassign all but the last one:" Yes, I think this is closer to my problem (as I sketched it out in my recent response to Acer).

Speaking of Acer, he seems to warn against unassigning the _ProbabilityDistributionXXX instances. Even if your solution seems satisfactory to me, isn't there a risk of collateral damage?

@acer 

Sorry for this late reply.
You wrote "Does that mean that you are trying to improve performance or conserve memory?": YES, my main concern is the  slowdown of Maple that, I suspect, comes from the large number of objects Maple stores.

You wrote "Or do you have some other motivation?" YES, because I can have a huge number of random variables.

In a few words what I'm interested in is the development of Markov Chain Monte Carlo (MCMC) methods in some "bayesian calibration " framework. Some of the simpler MCMC methods require only one extra random variable for each of the parameters to calibrate. This of example the case for Metropolis-Hastings algorithm where each of these extra RV (dubbed "proposal") is invariant all along the iteration of the MCMC method.
For instance, if you have 2 parameters to calibrate you can choose 2 gaussian RV proposal of mean 0 and fixed standard deviation. Within the MCMC algorithm these proposals are sampled to draw new values of the parameters to calibrate.

More advanced MCMC methods adaptively modify the parameters of the proposals as the process evolves.
My idea was (I keep this gaussian example) to define a "generic" RV of formal standard deviation "s", for instance by writing 
proposal := s -> RandomVariable(Normal(0, s)), and to generate new values of the parameters by writitng Sample(X(s), ...) (remember the value of s can change at each iterate of the markov Chain).
Given a Markov Chain can have tens of thousands of iterations, you understand that this method can run to memory problems.
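To make the pattern concrete, here is a minimal sketch (my own illustration with invented step sizes, not the solution under discussion). The first loop builds a fresh _ProbabilityDistribution instance at each iterate, which is where the accumulation comes from; the second keeps a single Normal(0, 1) variable and rescales its samples, so nothing new piles up:

with(Statistics):

# pattern described above: a new RV is created at every iterate
proposal := s -> RandomVariable(Normal(0, s)):
theta := 0.0:
for k from 1 to 10 do
    s := 1.0/sqrt(k);                          # adaptive proposal width (invented)
    theta := theta + Sample(proposal(s), 1)[1];
end do:

# memory-friendlier variant: one fixed RV, samples rescaled by s
Z := RandomVariable(Normal(0, 1)):
theta := 0.0:
for k from 1 to 10 do
    s := 1.0/sqrt(k);
    theta := theta + s*Sample(Z, 1)[1];
end do: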

Finally: I'm going to test your solution.

Thanks for your involvement.

@tomleslie 

Good contribution, Tom.

I agree that models can talk rubbish if they are fed with data of poor credibility.
For this outbreak the infection rate is indeed a very important parameter, a small variation of which can give rather different results (not to mention the differences between the models themselves).
I did something myself with the SIR model.
The reference I started from is this one: https://www.up.ac.za/media/shared/259/pages-from-lnepid.zp128358.pdf
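For readers who want to experiment, here is a minimal Maple sketch of the classic SIR model (the rates are invented for illustration, not taken from that reference; J denotes the infected fraction because the name I is reserved in Maple):

b := 0.3:  g := 0.1:                  # infection and recovery rates (invented)
sys := { diff(S(t), t) = -b*S(t)*J(t),
         diff(J(t), t) =  b*S(t)*J(t) - g*J(t),
         diff(R(t), t) =  g*J(t) }:
ics := { S(0) = 0.99, J(0) = 0.01, R(0) = 0 }:
sol := dsolve(sys union ics, numeric):
plots:-odeplot(sol, [[t, S(t)], [t, J(t)], [t, R(t)]], t = 0 .. 160);

A small change of b (say 0.3 to 0.35) visibly shifts the epidemic peak, which illustrates the sensitivity mentioned above.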
