acer

MaplePrimes Activity


These are replies submitted by acer

@FDS To the right of an Answer's title is a thumbs-up icon and (if you posted the Question) a cup icon.

Anyone (with a reputation score of at least 10) can vote up an Answer by clicking the thumbs-up icon.

The person who posts the Question can also accept an Answer as best by clicking the cup icon.

Naturally, you are free to do this for any of the Questions you post in this forum.

@Rouben Rostamian  The worksheet attached by simplevn1967 was saved by him using Maple 17 (released 2013).

@lemelinm The effect of your first suggestion -- to control the color of the surface "grid" lines differently from the color/shading of the surface -- is already mostly possible to achieve, and without having to recompute the surface's data points (eg. z-values).

The key here is to compute (once) the plot values, either as pure surface or pure wireframe style. Then the other can be constructed by overriding that option, ie. without recomputing the values.

For example (and you could also adjust other qualities of these wire-frame lines, eg. thickness=2, etc.),

with(plots):
# compute the surface data only once
P := plot3d(-x^3+y^2, x=-1..1, y=-1..1, style=surface):
# overlay P with a second copy whose color and style options are
# overridden, so the wireframe is added without recomputing the data
display(P, display(P, overrideoption,
                   color="Orange", style=wireframe));

Naturally, that example applied the override only to a single surface. One might well not want to have it as a blanket effect on all compound parts of a VolumeOfRevolution result. (But even there, it can be done selectively, with a bit of care.)

I'd agree that a simple choice of options for the effect you've described would be more user-friendly. On the other hand, there are several other things I'd rather see take priority, eg. separate x/z, y/z, and x/y aspect ratios for the axes.

ps. I think that jtreiman's original suggestions (at top) produce a result that is easier to interpret than the default for the command in question.

@Christopher2222 Calls to time() ought to be calls to time[real]().

You could do that with something like this:

caring_phase_pemanenan_predator_dgn_parametersesuai_jurnal_ac3.mw

I indexed table T by t-values, only because I don't know how you plan to access the maxima later. You could also index by the ordinal, `found`, etc.
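Roughly, the two indexing schemes might look like this (the names tval, xval, and found here are just placeholders matching the description above):

   # indexed by the t-value at which each maximum occurs
   T[tval] := xval;
   # or, indexed by a running ordinal counter (initialize found := 0 beforehand)
   found := found + 1;
   T[found] := [tval, xval];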

It's not clear from your latest question whether you want the global maximum (over t=0..100, say) or all the local maxima each time around (when diff(x(t),t) is zero).

Here's a simple way to get that global max from the phaseportrait data itself, and the corresponding time, for one of the initial values.
caring_phase_pemanenan_predator_dgn_parametersesuai_jurnal_ac2.mw

There are several more complicated approaches, eg. wrapping x(t) in a proc that does extra tracking, or using dsolve/numeric's events option, etc. That kind of approach can also come in useful if you want to save values "each time around", or each time diff(x(t),t)=0, etc. Let us know if you need that.
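As a rough illustration of the events approach -- using a made-up system, not your model, and halting at the first crossing (recording every crossing takes a little more setup):

   sys := {diff(x(t),t) = x(t)*(1-y(t)), diff(y(t),t) = y(t)*(x(t)-1)}:
   ics := {x(0) = 0.5, y(0) = 1.5}:
   # the trigger is the right-hand side of the x equation, ie. where diff(x(t),t)=0
   sol := dsolve(sys union ics, numeric,
                 events = [[x(t)*(1-y(t)) = 0, halt]]):
   sol(100);   # integration stops at the first t where x'(t) vanishes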

Are x,y,z all to be considered as real?

Why do you use the symbolic option?

Could you provide some examples of input and desired answer?

@AHSAN If you want to move the horizontal line segment,

   plot([[-5, 0], [5, 0]], color=black)

either up or down, then simply change the second values in the lists to something else. Do you not understand that the second values in the lists are the vertical (y) coordinates of the line's end-points?

Similarly, if you want to move the vertical line segment,

   plot([[0, ymin-eps], [0, ymax+eps]], color=black)

either left or right, then simply change the first values in the lists. Do you not understand that the first values in the lists are the horizontal (x) coordinates of the line's end-points?
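For example, with arbitrary illustrative values, this moves the horizontal segment up to y=2 and the vertical segment over to x=3:

   plot([[-5, 2], [5, 2]], color=black)
   plot([[3, ymin-eps], [3, ymax+eps]], color=black)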

I think you could have easily figured this out for yourself.

@AHSAN The rangesonly option makes getdata return only the ranges (rather than the data) for all the dimensions of the plot.

A 2D plot has two dimensions. So in your case it's returning two ranges. The indexing, [2], accesses the second of those ranges, which is the vertical range.
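For instance, with a throwaway 2D plot (not your actual one):

   P := plot(sin(x), x = 0 .. 2*Pi):
   plottools:-getdata(P, rangesonly);      # two ranges: horizontal and vertical
   plottools:-getdata(P, rangesonly)[2];   # just the second, ie. the vertical range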

I think that you could have figured that out.

Sorry, I don't understand what you mean or want by "show these line according to desire position".

@AHSAN That operator/procedure assigned to r is used to construct the columns, as rectangles.

The second parameter of that operator, y, is used to specify the vertical end (ie. height, whether positive or negative) of the column.

The other end of the column, vertically, is at 0, ie. the x-axis.
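A rough sketch of such an operator -- the half-width w and the particular values below are just for illustration, not your actual code:

   w := 0.4:    # assumed column half-width
   r := (x, y) -> plottools:-rectangle([x - w, 0], [x + w, y]):
   plots:-display(r(1, 3.2), r(2, -1.5));   # columns from the x-axis up to 3.2, and down to -1.5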

ps. I think that you could have figured that out.

@AHSAN I have modified the procedure so that it is more flexible, accepting additional plotting options and the numerical format for the column labels.

Help_Bar_Graph_acc.mw

And now it can be called to produce these, say,

@Nicole Sharp Regarding your statement about extra parentheses being required in Maple, you could also enter that as,

   evalf(2*Pi^5*1.380649e-23^4/(15*299792458^2*6.62607015e-34^3))

@Christopher2222 No, it's not ridiculous or terrible -- it was just two short examples. I did not intend to be harshly critical.

I think that the overall idea is a good one. Indeed I suggested the same thing in 2008 but, while that devolved into a lengthy discussion about the performance of float[8] Matrix Rank (for no reason other than that Nasser Abbasi once gave it, with results that did not hold for long...), there was not a flood of examples.

It's a difficult thing to do very well. More importantly, it's pretty natural for a benchmark suite to get revised and reshaped.

Perhaps it would be helpful to start a list of areas for benchmarking, eg:
 - linear algebra
 - statistics
 - integration
 - differential equations
 - polynomial manipulation
 - special function evaluation
etc. Each of those might be sensibly split into numeric and symbolic computation. There's also group theory, graph theory, etc. Interface responsiveness is also interesting (eg. is 2-D input slow to parse, how fast is 2-D output, etc).

And comparison of benchmark numbers between operating systems, product releases, and hardware, might all be interesting.

[edit] There have been a few performance comparisons between competing products (eg. Mma, Maple, Matlab, etc). One of those (now defunct, I think) was mentioned by me in this old Post. Maybe I still have some code from it; I'd have to find time to look.

Another product speed comparison was done by Wolfram, comparing Mma and Maple at both hardware and higher working precision, for a variety of numeric computations. Somewhere I have a Linux script which can run the float[8] LinearAlgebra portion of the Maple computations in an early snapshot of that -- only about a dozen or so commands were tested. I used to use that to compare Maple + generic-CLAPACK/ATLAS/MKL across releases. I had a swanky worksheet embedded-components interface, and it threw up a bunch of plots to overlay and compare Maple releases. I could probably find it and dust it off. The automation was useful. [edit] sursumCorda has since mentioned this suite, below, and provided the Maple code of a much later revision. I find the examples interesting, but I don't like the bookkeeping or methodology.

I think that individual benchmark examples should be run in fully separate kernel sessions, preferably from separate plaintext files.
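As a rough sketch, one such plaintext file might look like the following (the file name and the particular computation are just placeholders), with each file launched in its own fresh kernel session so that no remember tables, caching, or memory use is shared between items:

   # bench_symbolic_int.mpl -- one hypothetical benchmark item, in its own file
   t0 := time[real]():
   int(ln(x)^2/(1 + x^2), x = 0 .. infinity):
   printf("symbolic integration item, real time: %.3f sec\n", time[real]() - t0);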

@Christopher2222 IMO there are several weaknesses in your methodology. (I order these by decreasing severity, from mistake to quibble.)

1) You're measuring the sum of CPU time across multiple cores, which is a serious flaw. If you want to measure the wall-clock duration for which the user has to wait here, then either use time[real]() or an appropriate call to CodeTools:-Usage. Your approach gives a confused idea both of how a machine with more cores might finish the job more quickly and of how long the computation actually takes.

2) As you have it, you're also measuring the time it takes for the interface to print the output. While that may be relatively negligible here, a clearer picture in general would come from having separate operations to measure computation and (possibly) something else dedicated to measuring interface (eg. GUI, CLI) output timing. Also, there are some computations whose results take much longer to print than to compute. You could end the computation with a full colon. (See the short sketch after this list.)

3) This example tests the performance of the externally linked Intel MKL's dgesvd routine. That might interest some people [see also discussion in a 2008 Post of mine in which I too suggested a Maple benchmark, and also here], but it's not a great indicator of Maple's speed on general numerics (whether in evalhf or arbitrary precision modes).

4) Why set UseHardwareFloats:=true ? Shouldn't you have a specific and clear reason for changing away from default settings -- for this or any benchmark example?
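Here is a small sketch of points 1) and 2) together, on an arbitrary float[8] example: the wall-clock duration is measured alongside the summed CPU time, and the trailing full colon keeps the printing of output out of the measurement.

   M := LinearAlgebra:-RandomMatrix(2000, 2000,
            outputoptions = [datatype = float[8]]):
   tc := time():  tr := time[real]():
   LinearAlgebra:-SingularValues(M):      # full colon: display is not measured
   time() - tc, time[real]() - tr;        # summed CPU time vs wall-clock time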

@FDS The first form works ok for me, on columns of DS.

Test_date_ac2.mw

Perhaps you tried it with the wrong kind of single-quotes?

If you know for sure which columns will have the numeric entries, then it's more efficient to examine only those. See attachment.
