sand15

787 Reputation

11 Badges

9 years, 185 days

MaplePrimes Activity


These are replies submitted by sand15

@Carl Love Thanks for those clarifications

@tomleslie 
@Carl Love

I spoke too quickly: if G is a directed graph, DrawGraph(G, style=planar) returns an error in Maple 2016 ... except in very simple cases

DrawGraph(Graph({[a, b], [b, c], [c, a]}), style=planar)         # works
DrawGraph(Graph({[a, x], [a, y], [b, x], [b, y]}), style=planar) # does not work

In the few textbooks on graph theory I have read, I did not find any statement that planarity is restricted to undirected graphs only.
But I am not sure about that.
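A possible workaround (just a sketch, relying on GraphTheory:-UnderlyingGraph to drop the orientations) is to draw the undirected skeleton of the digraph, since planarity does not depend on edge directions:

```
with(GraphTheory):
G := Graph({[a, x], [a, y], [b, x], [b, y]}):   # the directed case that fails
H := UnderlyingGraph(G):                        # forget the edge orientations
IsPlanar(H);                                    # true: this is just a 4-cycle
DrawGraph(H, style = planar);
```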

@tomleslie 
@Carl Love 

I finally succeeded in solving my problem.
The key idea (this is the mountain of the title) is to add a dimension to the polygons in order to generate a 3D plot (I now use PLOT3D(....) instead of PLOT) and to orient the drawing to obtain the desired flat representation.

As Carl wrote, PLOT places CURVES in front of POLYGONS, but PLOT3D is less strict (there is also a STYLE(HIDDEN) option that allows some variants)


While waiting for a simpler solution, I would nevertheless like to thank you for your comments
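For the record, here is a minimal sketch of the trick (the structure names come from the PLOT3D plot-structure help pages; the concrete coordinates and colours are only illustrative): every 2D point receives a third coordinate equal to 0, and the scene is viewed from the top so that it reads as a 2D drawing while PLOT3D's depth handling decides what hides what.

```
# lift a list of 2D points into the z = 0 plane
lift := pts -> [seq([p[1], p[2], 0], p = pts)]:

PLOT3D(
    POLYGONS(lift([[0, 0], [2, 0], [2, 1], [0, 1]]), COLOUR(RGB, 0.8, 0.8, 1.0)),
    CURVES(lift([[0, 0], [2, 1]]), COLOUR(RGB, 1.0, 0, 0)),
    STYLE(PATCHNOGRID),
    ORIENTATION(-90, 0)    # top-down view: the result looks two-dimensional
);
```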



AUXILIARY QUESTION :
In Maple 2015.2, DrawGraph(G, style=planar) returns an error if G is a directed graph.
I have just checked that the same command performs well in Maple 2016: was it a bug in Maple 2015.2, or is it an evolution in Maple 2016?

@Carl Love 

I am now at the office (sand15athome) and I have just noticed some errors in my previous response (I wrote it in a hurry).
I apologize for this and send you something more accurate

____________________________________________________________________________________

Thank you Carl.
I suspected Maple used some ranking based on the dimension (point, line, surface) of the structures to plot.
But I had some hope that a workaround could exist.
The reasons are:

1 : DrawGraph displays edges only "outside" the rectangles that represent the vertices of the graph

2 : plottools:-getdata(DrawGraph(...)) returns a list of polygons, some of which correspond to edges from a point A to a point B while others are rectangles centered at points A and B.
So, "materially", the edges really run from A to B, but they are hidden by these rectangles ... so I suspected the latter were placed in the foreground ... hence my question

@Carl Love 

@Kitonum

@taro 

It has been a delight for me to read your discussion.
I believe I now have a better understanding of the roles of "command1~", "map" and "`command1/command2`~".

Thank you all

 

@Kitonum  I naively thought that the two forms were equivalent.

It is true that I usually write things like convert~(L, string), but I saw, somewhere in the many answers on MaplePrimes, that some people sometimes write things like `command1/command2`~(...) instead of (and here I'm probably wrong) command1~(..., command2)

Probably I shouldn't try to be innovative with things I do not master

Thanks again
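For what it's worth, here is how I now understand the equivalence (with `convert/string` being the procedure that implements convert(..., string)):

```
L := [1, 2, 3]:
convert~(L, string);        # extra arguments are broadcast: ["1", "2", "3"]
map(convert, L, string);    # the map form, same result
`convert/string`~(L);       # the underlying procedure applied elementwise
```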

@Carl Love That is a very interesting feature! Great thanks for the trick

@Carl Love   ... but I have to confess I rarely use it, for readability reasons.
The same holds for the tilde operator, where I prefer to use the "map" function.

Thank you for your contribution

 

@acer  I thank you for your extensive answer.

Generally I use the "||" constructor but, in the present case, I wanted a nice rendering of the equations, hence the "__" constructor.
It never occurred to me to combine the two as you do at the end of your answer.


If it is not too much to ask, I would like to know the best way to proceed in this situation:

Suppose you define the pressure p of a gas by the EOS p = K*v^(-n), where n is the polytropic index and K some suitable constant.
In some situations n is defined by the ratio cp/cv of the heat capacities at constant pressure (cp) and volume (cv).

In physics textbooks it is common to write relations such as
p = K*v^(-n)
n = cp/cv

but if I do this in Maple with the indexed form c[p], the subscript p is evaluated, so I get c with a subscript equal to K*v^(-n) (which is perfectly normal).
To preserve the physical notation I usually write

p := K*v^(-n)
n := c__p/c__v  # to avoid evaluation of p

Is this a safe way to proceed?
Is there a better alternative?
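To make my question concrete, here is the sketch I have in mind (c__p is a literal name, so assigning p beforehand does no harm, whereas the indexed form c[p] would evaluate its index):

```
restart:
p := K*v^(-n):
n := c__p/c__v:   # c__p, c__v are plain names: p is not evaluated here
p;                # K*v^(-c__p/c__v), displayed with proper subscripts
# with n := c[p]/c[v] instead, the index p would evaluate to K*v^(-n)
```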

Thanks in advance ... and thank you again for your previous answer

 

@Carl Love 

 

Mathematically speaking :

(FR) la fonction "sécante" est définie comme étant la fonction réciproque de la fonction "cosinus" ...

(FR -> EN) the function "secant" is defined as the reciprocal function of function "cosine" ...

 

So there is absolutely no doubt that the correct translation of reciprocal is réciproque


But, in day-to-day language, even among people who share a mathematical background and are used to using mathematics in their activities (excluding teachers and professors), it is very common to use the French word inverse (inverse in English) to refer to the reciprocal function.

It is very likely that this abuse of terminology is related to the notation F^(-1) for the reciprocal of the function F.
Thus it is not unusual to hear that "the secant is the inverse function of the cosine function" (if not "the secant is the inverse of the cosine")

@acer 

I always thought that it was a pity that NameToRGB24 does not accept a "palette = " option ...
                                                                        ... but it was just an undocumented feature !

(I do understand Carl's disappointment)

In any event your answer is perfect.

Great thanks

 

@Carl Love 

Your redefinition of RGB24toName is a very astute stopgap.

About the ColorTools package: I believe it is fairly powerful, though not very easy to work with.
One of the main criticisms I would make is that it is quite difficult to pick "the" right color (or to build one's own palette) from the existing ones, because GetPalette( NameOfThePalette ) displays the colors in a disturbing order (see "Resene" for example).

Thanks for the answer ... and for having pointed out a translation mistake :

(FR) fonction réciproque  <--> inverse function (EN)  ... I will remember this

@acer 

You write

  1. Your XP machine seems to be 4 physical cores without hyperthreading capability
    TRUE : this is a capability that is disabled by default (here again a company policy)

  2. your new Windows 7 machine seems to be 4 physical cores with hyperthreading capability
    TRUE again : I asked that hyperthreading be enabled on my new "Windows 7" machine.
  3. On the Windows 7 machine it's quite possible that the OS distributes the load  ...
    Very likely indeed
    Here is a table that summarizes the performances I have just obtained (new machine / Windows 7)
    (10000 runs, distributed over N nodes ... or the nearest integer to 10000 that N divides)

    N | Approx. mean load (%) | Execution time (s) | Observation from the task monitor (performance tab)
    2 |          25           |        881         | 4 active cores
    3 |          38           |        557         | 6 active cores
    4 |          50           |        409         | 8 active cores
    5 |          60           |        381         | ''
    6 |          73           |        370         | ''
    7 |          90           |        355         | ''
    8 |          95           |        343         | 8 active cores, all "flat"


    One can notice that the execution time with N nodes (TN) decreases roughly linearly between N=2, 3, 4.
    For N larger than 4 the improvement is slighter.
    The rightmost column refers to visual observation of the task monitor (Ctrl+Alt+Del, "performance" tab). For N >= 4 the 8 cores exhibit significant activity, while for N=3 two of them have no load at all, and for N=2 four cores are inactive (odd nodes are active and even ones inactive).
    The approximate mean load column (from the task monitor) increases as the execution time decreases (which seems normal).

    It seems to me that the table above corroborates what you write in your last paragraph (at least as far as I understand ...)

 

Great thanks to you Acer for this fruitful answer

@Carl Love 
I agree: the ratio 3.2/3.5 is anything but significant.

But the ratio 4/8 of the numbers of cores/nodes should be:

given that all the cores receive the same number of runs to execute (respectively 2500 on the 4-core PC and 1250 on the 8-core one) and that all the cores are active (I do not use Grid:-Launch plus a Send-Receive protocol), the expected execution time should (?) be divided by 2 on the 8-core machine ... all other things being equal, and more specifically with the same OS.

 

Now, I agree with your suggestion "so you should run your test using the same number of nodes on each machine"
But two difficulties arise :

  1. Considering the performances I announced beforehand, I would have liked to carry out some extended comparisons. But (company policy obliges) the migration of operating systems is generally an opportunity to upgrade the workstation, if not to change it. This is what was done for me, and I am no longer able to test my code on my previous machine.
    Accordingly, my comparisons are probably biased.
  2. I have observed the following behaviour while using Grid:-Run as described in my initial post.
    Let us suppose I am working on a 2x2-core machine and that (1) I distribute 10000 runs over 4 cores and next (2) I distribute these 10000 runs over 2 cores (same proc or not ???)
    Let T(4) and T(2) be the corresponding execution times. I could expect T(2) to be twice T(4) ... but, for a reason I don't know, it is not the case (I'm not a specialist in parallel computing or processor architectures).
    A quick look at the performance tab of the task monitor shows, in case (1), that the 4 cores are loaded the same way (say 95% during the whole computation sequence) ... whereas in case (2) two cores are loaded up to a level of 75% (with large deviations) while the 2 others remain between 10% and 30%.

    Furthermore, the performance history is very chaotic in case (2) whereas quite flat in case (1) ... something I (mis)interpreted as better task control by the operating system in case (1)

On "my" new Windows-7 machine (4 dual core processors) I have obtained the following results

  • Distribution over 8 cores (nodes) : 343 s
  • Distribution over 4 cores (nodes) : 409 s (?!?!?!)
    These results seem to corroborate your claim "using more nodes will incur a higher percentage of administrative costs" (???)

 

On "my" old Windows-XP machine (2 dual core processors) I had obtained these results

  • Distribution over 4 cores (nodes) : 504 s
  • Distribution over 2 cores (nodes) : 983 s
    The expected 1:2 ratio is realized here, suggesting a higher efficiency in task control (???)

 

So I keep thinking that something "is not going well" (more likely  with Windows-7)

Other point: now that I have 8 nodes available to me, is it perhaps better to use Grid:-Launch with a "master" node and to distribute the computation over the remaining 7 ???
There are a lot of questions and posts here I need to look at: even the distribution of similar computations is not as simple as one might think.
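For the record, here is a minimal sketch of what I have in mind (the toy task and the node count are assumptions; only Grid:-Setup, Grid:-Run and Grid:-Wait are used):

```
with(Grid):
Setup("local", numnodes = 8):    # node 0 kept as a "master"

task := proc(k)
    local i, s;
    # toy workload standing in for the real simulation
    s := 0.0;
    for i to 10^5 do s := s + evalf(sin(i)) end do;
    s
end proc:

for k to 7 do Run(k, task, [k]) end do:   # distribute over nodes 1..7
Wait();                                   # block until every node has finished
```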

Even if your answer is far from the miraculous solution I was hoping for, it leads me to ask myself a lot of questions.

 I thank you for that

 
