mmcdara



These are replies submitted by mmcdara

@Rouben Rostamian  

Oh sorry, I didn't notice.

@dharr 

Great, I vote up!



Both @dharr and I already told you that your code was full of syntax errors, and it seems the only thing you are capable of is repeating the same question with the same syntax errors.
Are you kidding us?


Either you define delta as a list, vector, or array (delta[n]), or as a function (delta(n)), not both!
I believe there is no recurrence behind what you mistakenly try to do, which is basically updating the values of Theta for consecutive x values.
So, the first thing YOU HAVE TO CORRECT YOURSELF is to replace square brackets with parentheses wherever needed.
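To make the distinction concrete, here is a minimal sketch (the names delta and deltaF are made up):

restart:
delta := Vector(5):       # a container: delta[n] means "the n-th entry of delta"
delta[1] := 0.5;          # valid indexed assignment
deltaF := n -> n^2:       # a function: deltaF(n) means "apply the procedure to n"
deltaF(3);                # returns 9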

@dharr 

I agree; I nevertheless posted an answer, for what it's worth.

Avoid replicating questions: yours will be read and answered as soon as possible. Don't forget that MaplePrimes is a worldwide community with some members still sleeping at the moment :-)
More seriously, it's almost impossible to answer a question if you do not provide all the material: use the green up-arrow in the menu bar to upload your full worksheet.

See you soon.

PS1: Click on your first question, then in the bottom-right corner click on More, and lastly delete it.

Look here to find a clue:
https://www.maplesoft.com/support/help/errors/view.aspx?path=Warning,%20data%20could%20not%20be%20converted%20to%20float%20Matrix
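For what it's worth, one simple way to reproduce this warning (just an illustrative guess, your situation may differ) is to pass a Matrix that still contains an unassigned name to a plotting command:

restart:
M := Matrix([[1, a], [2, 4]]):   # 'a' is unassigned, so M cannot be converted to floats
plots:-matrixplot(M);            # Warning, data could not be converted to float Matrix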

@acer 

I hadn't understood the subtlety.
Thanks for your reply.

@Carl Love 

Thanks Carl.
@acer's code seems to display exactly the same things whether print/ is explicitly mentioned or not... which was the underlying reason for my question:
prettyprint.mw
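For the record, here is a minimal sketch of what the print/ mechanism does; the name F is hypothetical. A procedure named `print/F` controls how an unevaluated call F(...) is displayed:

F := proc(x) 'procname'(x) end proc:   # F(x) returns itself unevaluated
`print/F` := proc(x) x^2 end proc:     # display rule: show x^2 instead of F(x)
F(3);                                  # prints 9, although the value is still F(3)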

@acer 

What is the purpose of print/ in the names of the procedures?

@dharr 

You're right: when you know the values of the experimental inputs where the data have been recorded, the best option is output=Array.
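A minimal sketch of that option on a toy ODE (the equation and the recorded times are made up for illustration):

restart:
ode := diff(y(t), t) = -y(t):
# the numeric solution is returned exactly at the recorded inputs
dsolve({ode, y(0) = 1}, numeric, output = Array([0, 0.5, 1.0, 1.5]));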

 

@vv  @lcz 

Thank you vv, I'll take a look at it right away


If I'm not mistaken, the last plot passed is always in the foreground: if p and q are plot objects, then display(p, q) displays p < q and display(q, p) displays q < p (where "<" means "behind").
The same rule also holds if p is a plot object and q an implicitplot object, or if both are implicitplot objects.

restart:
with(plots):
p := plot(x, x = -3/2..3/2, thickness = 10, color = red):
q := implicitplot(x^2 + y^2 = 1, x = -1..1, y = -1..1, thickness = 10, color = blue):
display(p, q, axes = none);  # q (the circle) in front
display(q, p, axes = none);  # p (the line) in front

Here p and q are both CURVES structures, i.e. each plot is of the form PLOT(CURVES(...)).

The case 

restart:
with(plots):
p := plot(x, x = -3/2..3/2, thickness = 10, color = red):
s := plottools:-disk([0, 0], 1):
display(p, s, axes = none);
display(s, p, axes = none);

always displays s < p.
I guess the reason is that s is an object of type PLOT(POLYGONS(...)) and that a 2D object is always put behind a 1D object.
For instance

p := CURVES(...):
q := POLYGONS(...):
PLOT(p, q);  # displays q < p
PLOT(q, p);  # displays q < p: the POLYGONS object is behind in both cases


To control the display order, the only method which always works is to display objects of the same structure.
For the OP's case, a solution consistent with plottools:-disk, that is with PLOT(POLYGONS(...)) objects, is to transform the object "c" (the blue arc) into a PLOT(POLYGONS(...)) object itself.
For instance

f  := t -> x0*(1-t)^3 + 3*x1*t*(1-t)^2 + 3*x2*t^2*(1-t) + x3*t^3:  # x-coordinate of the cubic Bezier curve
g  := t -> y0*(1-t)^3 + 3*y1*t*(1-t)^2 + 3*y2*t^2*(1-t) + y3*t^3:  # y-coordinate of the cubic Bezier curve
cd := [seq([f(t), g(t)], t in [seq](0..1, 0.01))]:                 # points sampled along the arc
no := [seq(eval([diff(g(t), t), -diff(f(t), t)], t=i), i in [seq](0..1, 0.01))]:  # normal direction at each point
no := [seq(no[i] /~ norm(`<,>`(no[i]), 2), i = 1..numelems(cd))]:  # normalize the normals
ci := [seq([cd[i][1] + no[i][1]*0.01, cd[i][2] + no[i][2]*0.01], i = 1..numelems(cd))]:  # offset copy of the arc
C  := PLOT(POLYGONS([cd[], ListTools:-Reverse(ci)[]], COLOR(RGB, 0, 0, 1))):  # thin blue polygonal strip

DocumentTools:-Tabulate([
  display(p0, p1, C, scaling = constrained, axes = none),
  display(p0, C, p1, scaling = constrained, axes = none),
  display(C, p0, p1, scaling = constrained, axes = none)
]);  # the three stacking orders side by side (p0 and p1 come from the original worksheet)




An advantage of this method is that it keeps working whatever shape you use to represent the nodes of the arc.

@vv 

We crisscrossed.

@lcz 

I remember I asked a question about the display order, maybe a year ago.
@acer gave me some information about it, but I'm not able to find his reply.

@lcz 

That's normal: given the symmetry of your problem, last = 1 - first, and I used this as a shortcut.
Here is the general version:

Download Disk+Bezier_2.mw

(Note that there could remain situations where the drawing is still incorrect;
the culprit lines are then these:

first := select(is, [solve(((f(t)-x0)^2+(g(t)-y0)^2)=0.02^2)], positive)[]:
last  := select(`<`, [solve(((f(t)-x3)^2+(g(t)-y3)^2)=0.02^2)], 1)[];

If this happens, observe what these commands

[solve(((f(t)-x0)^2+(g(t)-y0)^2)=0.02^2)];
[solve(((f(t)-x3)^2+(g(t)-y3)^2)=0.02^2)];

return and try to fix the problem.
Do not hesitate to come back to me if you have any problem.)

@Rouben Rostamian  

Hi, 
Given where the two series came from, I considered (I agree, in an implicit way, without asking for more details) that the series were paired in the following sense.

  • Let T = t1, ..., tN (t being some independent parameter, scalar or not, it doesn't matter).
  • Let F and G be two processes which both take t as (single, to simplify) input.
  • Let A = F~(T) and B = G~(T).
    I say that A and B are paired series.
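As a minimal Maple sketch of this pairing (F and G are made up for illustration):

T := [seq(0.1*i, i = 1..5)]:
F := t -> sin(t):           # e.g. the simulation process
G := t -> sin(t) + 0.05:    # e.g. the experimental process
A := F~(T):  B := G~(T):    # A[i] and B[i] correspond to the same input T[i]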


This is what everyone does when measuring deviations between experimental results and simulation results.

From a purely logical point of view your question is legitimate
(the deviation between A and a permuted A depends on this permutation).
But when it comes to experiment-simulation comparisons I believe your question is out of context (no offence intended).
 

@Otttimor 

In my case neither the model nor the data are supposed to be perfect.
One classically recognizes two different causes for simulation-to-experiment discrepancy:

  1. A model error (or model bias), either deliberate (one deliberately simplifies reality for numerical purposes; a simple example is a 2D axisymmetric model instead of a 3D one) or not (one then speaks of "lack of knowledge" to indicate that we just do not know what the true physics is).
    This error is NOT random, even if it's treated as such in a Bayesian framework.
  2. Measurement errors (bias and/or random errors).

A discrepancy between a simulation and an experiment can involve only one type of error, or both.
In my previous reply I just used "model error" because it was simpler to build an illustrative example from it.

In your case you say you have data with measurement error (which is always the case).
Basically, all I did in the worksheet I delivered remains applicable: you have a countable set of measurements (both t and Konc can be erroneous) and a continuous response (the solution of the ODE). When you compute some criterion to estimate how close the simulations are to the experiments, you don't care where the source of the discrepancy lies (in the model or in the data).
So, YES, you have done things correctly in your file.

"I don't quite understand what is happening in the norm-equation, but it seems like it would be the right equation with my type of situation?"
The only question you have to ask yourself is "Is this L2 norm the best one to use in my case?".
Maybe the Linfinity norm would be better?
It's up to you to know.
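For illustration only, here is how the two norms compare on a made-up residual vector:

with(LinearAlgebra):
res := Vector([0.1, -0.3, 0.05, 0.2]):   # simulation minus experiment (fictitious values)
Norm(res, 2);                            # L2 norm: an overall measure of the deviation
Norm(res, infinity);                     # Linfinity norm: the worst-case deviation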

PARENTHESIS 1
However, when you want to go further, for instance to identify whether the discrepancy comes from parametric errors or from a model error, you have to do specific analyses.
For instance, if you are extremely confident in your model you can claim that the discrepancy originates in parametric errors, that is in measurement errors.
Let us take a single parameter: let P be its true but unknown value, let p be the measurement of P that some device gives you, and let E be the random variable which models the measurement error of this device. When you simulate your system, the natural choice of value to give to this parameter is the measured value p, when in fact we should use P instead.
Schematically, if F represents the black-box model, F(P) is replaced by F(p) = F((p-P)+P) ≈ F(P) + (p-P)·F'(P): an error on the input (here, using p instead of P) propagates to an error on the output of the code.
This means that even if the measurement of Konc were error free, the code would not give the same result as the experiment, just because the "input" of the former is p while the "input" of the latter is P (the true value of P).
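A tiny numerical illustration of this first-order propagation (F, P_true and p_meas are made up):

F := P -> exp(-P):               # a stand-in for the black-box model
P_true := 1.0:  p_meas := 1.1:   # true value vs. measured value
F(p_meas) - F(P_true);           # actual error on the output
(p_meas - P_true)*D(F)(P_true);  # first-order estimate of that error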

Now add to this the fact that the experimental measure of Konc is likely to be noisy, and you get one more reason to have a discrepancy between the simulation and the experiment.
Some specific approaches have been developed to handle the fact that the inputs of the code can be noisy and thus different from the true values the experiment uses (without knowing their values precisely). Some of these methods can be seen as the search for the best consensus between experimental and numerical results, and generally lead to better agreement between them and thus a smaller discrepancy.

PARENTHESIS 2
There exists a very simple test to detect a possible model error.
Instead of plotting the response of the code and the experimental data on the same figure, plot the difference between the latter and the former evaluated at the experimental inputs (this is called "residual analysis"). If the resulting cloud doesn't exhibit any particular shape, it's likely your model is error free. If some pattern emerges, this may come from a non-stationary error bias (not the most common situation), or from a model error (the most likely situation).
In your case, the model overpredicts for t > 100, which should make you suspicious of it beyond 100 s (some physical phenomenon neglected, maybe?).
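As a sketch of such a residual plot (t_exp, Konc_exp and the code's response Y are hypothetical placeholders for your own data):

# residuals: experiment minus simulation, evaluated at the experimental inputs
res := [seq([t_exp[i], Konc_exp[i] - Y(t_exp[i])], i = 1..numelems(t_exp))]:
plots:-pointplot(res, symbol = solidcircle);  # an unstructured cloud suggests no model error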

I hope I didn't drown you in all these thoughts.
Please do not hesitate to contact me if you need clarification or additional help.
