MaplePrimes Activity


These are replies submitted by Carl Love

@Østerbro The doubling of the final numeric output will stop if you end your commands with a colon (:).
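
For example, compare:

evalf(Pi);   # semicolon: the result is displayed
evalf(Pi):   # colon: the command executes, but the display of its result is suppressed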

Your teacher's demands regarding units and regarding numbers are inconsistent: The teacher wants the units automatically simplified but doesn't want the numeric computation automatically simplified.

@tomleslie wrote:

[I]f I use ctrl-alt-delete->performance, then I get 8 "cpu" panes. I have always interpreted this as my processor is capable of running 8 threads.

Yes, that is correct.

Important to note that an application may not ask for multiple threads, but Intel/Windows will decide that multiple threads are possible, and run them anyway.

I find that hard to believe because software that can decide when multiple threads are possible in other software is kinda a holy grail of multi-programming. The example that you show below is not an example of this for reasons that I will explain in a moment. So, do you have another example?

 

As an example, if I execute

 

restart:
L:= RandomTools:-Generate(list(integer, 2^18)):
CodeTools:-Usage(mul(x, x= L), iterations= 4):

 

...according to the ctrl-alt-delete/performance monitor, four of my "cpus" start working really hard....My assumption has always been that this is what happens when win7/intel attempts to multithread for efficiency purposes.

 

The effect that you are seeing is due to Maple's multithreaded garbage collection, which by default uses four threads (this is adjustable). See ?updates,Maple17,Performance -> Parallel Garbage Collector. Note that this multithreaded garbage collection is in effect (by default) all the time, regardless of whether you are using a multi-processing package.
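
If I recall correctly, the number of collector threads can be queried and changed with kernelopts(gcmaxthreads) (check that help page to confirm the exact option name):

kernelopts(gcmaxthreads);      # the current number of garbage-collection threads (default 4)
kernelopts(gcmaxthreads= 2):   # lower it, e.g. to 2, if you want to see the effect in the CPU panes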

On the other hand, if I execute your code group

...
L:= RandomTools:-Generate(list(integer, 2^18)):
CodeTools:-Usage(Threads:-Mul(x, x= L), iterations= 4):

.... This is way faster than the previous version - as in 36.59x in real time. I have always put this down to the fact that Maple is *way* better at determining an optimum multithreading strategy than win7/intel is.

That effect can't possibly be due to multithreading. If you're using n threads, then the theoretical maximum real-time performance increase would be a factor of n (8 in this case). The effect that you're seeing is due to mul using a very poor algorithm for multiplying long lists of rational numbers. I assume that it's using a straightforward "linear" loop, something akin to

proc(L::list) local x, p:= 1;  for x in L do p:= p*x end do end proc:

but written in C. I can write a better divide-and-conquer multiplication algorithm using a paint brush clenched in my butt cheeks---and here it is:

Mul:= proc(L::list)
local n:= nops(L), m;
     if n < 4 then `*`(L[])
     else
          m:= iquo(n,2);
          thisproc(L[..m]) * thisproc(L[m+1..])
     end if
end proc:

Comparison:

L:= RandomTools:-Generate(list(integer, 2^16)):
p1:= CodeTools:-Usage(mul(L), iterations= 4):
memory used=9.36GiB, alloc change=0 bytes, cpu time=6.18s, real time=5.74s, gc time=2.29s

p2:= CodeTools:-Usage(Mul(L), iterations= 4):
memory used=32.32MiB, alloc change=0 bytes, cpu time=398.50ms, real time=406.00ms, gc time=0ns

p1-p2;

     0

That's a performance-improvement factor of 6.18/.3985 = 15.5. (Also, my code is 9.36/.03232 = 290 times more efficient at memory utilization.) The non-multi-threaded part of the algorithm used by Threads:-Mul is closer to mine than it is to the default mul. The rest of your factor-of-36 performance improvement is due to the multi-threading. To be fair to Maple's mul, it wasn't designed specifically for long lists of rationals, and it does just fine with lists of floats.
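
If you want to check the float case yourself, a sketch along these lines (the flavor and sizes are my arbitrary choices) should do:

Lf:= RandomTools:-Generate(list(float(range= 0..1), 2^16)):
CodeTools:-Usage(mul(Lf), iterations= 4):
CodeTools:-Usage(Mul(Lf), iterations= 4):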

If I execute kernelopts(numcpus), it returns 8. Trust me, I only have 1 CPU on this machine, and the relevant help page states

 "this will be the actual number of CPUs that the machines has (treating hyperthreaded CPUs as 1 CPU). " 

so I would expect the answer to be 1, because I only have one CPU (which admittedly can support 8 threads).

That statement (from ?kernelopts -> numcpus) is either flat-out wrong or it's poorly worded. The term CPU is being used ambiguously. The default value of kernelopts(numcpus) is the number of threads your machine can support. The name of the option should be changed to numthreads.
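
In other words (the 8 here is what you reported above):

kernelopts(numcpus);   # 8 on your machine: the number of hardware threads, not of physical CPUs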

I don't really understand the purpose of your final code group....

By changing numcpus, I hope to change the number of threads used, and thus change the times.

[N]othing in it changes the number of cpus/threads....

Do you see the code kernelopts(numcpus= n)? That's supposed to change the number of threads. The print statement confirms that numcpus is changing.
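
Like other kernelopts settings, the assigning form returns the previous value, so the change is easy to confirm and to undo. A minimal illustration (the value 2 is arbitrary):

old:= kernelopts(numcpus= 2):   # set it to 2; save the previous value
kernelopts(numcpus);            # confirms that it is now 2
kernelopts(numcpus= old):       # restore the original setting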

I don't understand why each iteration executes about twice as fast as the single calculation above. I can only assume that the loop construct changes the way that the calculation is "threaded" and produces something more efficient - no idea why.

No, it's because the first run is repeatedly allocating memory from the O/S; the subsequent runs are using the memory that has already been allocated for the first run. That's the purpose of my comment "First warm-up and stretch the memory. Otherwise the timings are invalid." See ?updates,Maple16,memorymanagement and ?updates,Maple17,Performance -> Multiple Memory Regions.
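
In other words, precede any serious timing with a throwaway run of about the same size. A sketch of the pattern, reusing the Threads:-Mul example from above:

Threads:-Mul(x, x= L):                                    # warm-up: forces the memory to be allocated from the O/S
CodeTools:-Usage(Threads:-Mul(x, x= L), iterations= 4):   # only now are the timings meaningful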

@firmaulana A linear program (LP) (as opposed to an integer linear program (ILP)) with 1340 constraints and roughly the same number of variables isn't exceptionally large. I'd guess that a dedicated LP package such as LINDO could handle it. You should probably use Maple to put the problem into a matrix form that you could pass to LINDO. There's a LINDO plug-in for Excel. Certainly passing matrices between Maple and Excel is easy, although there's probably some limitation on the width of an Excel worksheet.
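
As a sketch of the Maple-to-Excel step (the Matrix and the file name are just stand-ins): any Matrix can be written to a spreadsheet with ExcelTools.

A:= Matrix([[1, 2, 3], [4, 5, 6]]):                # stand-in for the constraint-coefficient matrix
ExcelTools:-Export(A, "C:/temp/constraints.xlsx");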

@nm You're making several mistakes:

1. Semicolons are not part of statements.

2. Nor do semicolons terminate, complete, or finish statements; rather, just as in English, semicolons separate statements. In this way Maple syntax differs from the corrupted, impure syntax of the C family of languages; it is instead derived from the beautiful syntax of the Algol family of languages.

3. A procedure definition is not a statement; it's an expression, a data structure just like lists, sets, etc. Only if it's assigned to something does it become a statement. (An isolated expression can be considered a form of statement. This is only useful if a procedure has side effects.)

4. A pair of parentheses in isolation, (), is equivalent to the NULL expression sequence; it's a valid expression sequence like any other.

5. P:= proc() whatever end proc() is not equivalent to P(), nor did I say it was. The things that are equivalent are

P:= proc() whatever end proc;  E:= P();  P:= 'P';

and

E:= proc() whatever end proc();  or  E:= (()-> whatever)();

These latter forms avoid the wasted syntax and memory of giving a name to a procedure which will never be used again. All these forms produce the same result, E.
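
A concrete instance, with 2 + 3 standing in for whatever:

P:= proc() 2 + 3 end proc;  E:= P();  P:= 'P';   # E is 5; P ends up unassigned again
E:= proc() 2 + 3 end proc();                     # same E, with no named procedure left behind
E:= (()-> 2 + 3)();                              # same again, in arrow notation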

It follows from (2) that a semicolon is never required immediately before end, else, elif, fi, od, or catch. It follows from (4) that although such semicolons aren't required, they are allowed. It follows from (3) that an anonymous procedure definition can be passed as an argument; indeed, such passing is very commonly seen as the first argument of map, select, etc.
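
For example:

map(x-> x^2, [1, 2, 3]);        # an anonymous procedure passed directly to map
                  [1, 4, 9]
select(x-> x > 1, [1, 2, 3]);   # and to select
                   [2, 3]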

Everything that I've said above is about the allowances and flexibility of the syntax; there's nothing about its restrictions. This is the antithesis of "fussy". If you want more things to generate syntax errors, then it's you who wants a fussy language.

@taro In Maple, you never need to copy the output and modify it manually. The following procedure will take any function expression and distribute the functional operator over the first argument if that argument is a sum in the sense of type `+` (not a call to the commands sum or Sum):

Distribute:= (f::function)-> maptype(:-`+`, op(0,f), op(f)):
L:= Limit(f(a+h)-f(a), h= 0);
Distribute(L);

Please provide your complete code. In the context that you've given so far, diff(conjugate(phi(X)), x1) is obviously 0.

Please provide example code.

@firmaulana No, I don't think that it's worth it to try a faster computer. If the limitation had been memory, it might be worth it to try a larger computer.

I believe that different processors may use a different number of clock cycles per operation, so the ratio 3.2/3.5 doesn't apply. Also, using more nodes will incur a higher percentage of administrative costs, so you should run your test using the same number of nodes on each machine.

@Christopher2222 My guess is that this web page has a way of detecting that it is being queried by a computer program rather than by a regular person, and in that case it sends back an essentially null response. If you can get some other program to read the page, I've figured out exactly how to get today's price off of it.

@Østerbro I'm not very familiar with the Units package---and I have no familiarity whatsoever with embedded components like tables---and your worksheet is overwhelming. Please send me a few examples that show the system not working, and please describe what you would like the output to look like. Please point out exactly where the output is incorrect. Please make it a straightforward worksheet with no startup code, no tables, and no use of non-stock packages like Gympack. If I can make that work, then we can graduate to tables, etc.  Also please note that I have zero knowledge of your field of study or your native language. I do however have familiarity with scientific literature in general: I know what units are, I know all the standard units, and I know how they are supposed to be displayed. 

@tomleslie The raw unnormalized Laplacian matrix L is also symmetric and positive semidefinite. Perhaps you were misled by the word symmetric in symmetric normalized Laplacian matrix. The word is there to distinguish that normalization from the other, the random-walk normalized Laplacian matrix, which isn't symmetric.
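
If you want to convince yourself numerically, here's a quick check (the small example graph is my own; any graph will do):

G:= GraphTheory:-Graph({{1,2}, {2,3}, {1,3}, {3,4}}):
Lap:= GraphTheory:-LaplacianMatrix(G):                            # the raw, unnormalized Laplacian
LinearAlgebra:-IsDefinite(Lap, query= 'positive_semidefinite');   # should return true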

@firmaulana Yes, that's the correct syntax. I didn't expect it to finish instantly, but I do expect it to take much less than 12 hours. Let me know how long it actually takes.

@firmaulana Are you sure that you executed the statement that assigns the constraints to kendala? In other words, you must hit the Return key (or Enter key) on that assignment statement before it takes effect.

It may be best to re-execute the entire worksheet.

@firmaulana 

I'm suggesting that you remove assume= binary and give all variables bounds 0..1. This can be done with a very small modification of your code. Take the set of constraints out of the LPSolve command and assign the set to a variable, let's say Kendali. Then change the LPSolve command to

Sol:= LPSolve(z, Kendali, (indets({z, Kendali}, name)=~ 0..1)[], maximize);

I'm not saying for sure that this will work. It's just worth a try. It should take much less time. I vaguely recall a theorem (it's been 30 years) that under certain fairly common conditions, a binary ILP can be solved as a regular LP and the solution will turn out to be binary anyway.

 
