Carl Love

MaplePrimes Activity


These are replies submitted by Carl Love

@Markiyan Hirnyk There are two problems with your command:

B:= LinearSolve(2, A, 2);

The first is that the return value of LinearSolve with the default option inplace is NULL. So even if the command worked, B would just be NULL.

The second is that the second 2 in your command says that the rightmost two columns of A are "augmented". In other words, the matrix A represents a linear system C.X = D where C is 3x2 and D is 3x2, the leftmost two columns of A being the C and the rightmost two columns being the D. This system is clearly inconsistent, its bottom row being < 0 0 | 1 1 >. If LinearSolve chooses to call that "singular" rather than "inconsistent", I am willing to forgive it. If it is to be considered a bug, it is only because of the choice of words in the error message.
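To make that reading concrete, here is a small sketch (only the bottom row < 0 0 | 1 1 > comes from the discussion; the top two rows of A are made up for illustration, and the names Cm and Dm are used because D is protected in Maple):

A:= Matrix([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 1]]):
Cm:= A[.., 1..2];   # the 3x2 coefficient matrix C (leftmost two columns)
Dm:= A[.., 3..4];   # the 3x2 right-hand side D (the two "augmented" columns)
# The bottom row of < Cm | Dm > says 0 = 1 for each right-hand-side column,
# so C.X = D has no solution; LinearSolve reports this as a singular matrix.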

@Markiyan Hirnyk The examples all seem consistent with what I said. With option inplace, the default, the result is obtained by re-examining the input matrix rather than by using the direct output of the command, which is NULL.

@Markiyan Hirnyk Yes, of course, what I have created does not represent the original graph internally in Maple. This is a workaround required to create an approximate visualization of the Asker's original, which seemed to be what she wanted.

@erik10 What is the maximum number of distinct entries in A? It can't be 10000 because you wouldn't be able to visualize 10000 distinct vertical lines on one plot.

I multiplied by 1000 because I thought that that would cover any perceivable visual differences.

I don't understand what you're saying about no repetitions in the A vector. My code does not rely on there being repetitions.

If only the frequencies are given, then A could default to [$1..nops(B)] (i.e. [1, 2, ..., |B|]).
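As a small illustration of that default (the frequency list B here is made up):

B:= [5, 2, 7]:         # frequencies only
A:= [$1..nops(B)];     # default x-values: [1, 2, 3]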

Do you mean the package Gravitation written by John Fredsted?

Good solution. I like it better than my own.

@itsme Rather than copying and pasting, use lprint(%) and you'll get

hypergeom([2, 2, (-I*beta*epsilon+Pi)/Pi, (I*beta*epsilon+Pi)/Pi], [1, (-I*beta*epsilon+2*Pi)/Pi, (I*beta*epsilon+2*Pi)/Pi], 1/exp(4))/((beta^2*epsilon^2+Pi^2)*exp(4))

Here's Kitonum's procedure in more-standard Maple, using recursion and avoiding unnecessary recomputation.

a:= proc(n::posint)
option remember;
     # Odd n: multiply the two preceding terms. Even n: add the terms three and two back.
     `if`(n::odd, thisproc(n-2)*thisproc(n-1), thisproc(n-3) + thisproc(n-2))
end proc:

# Seed the base cases directly into the procedure's remember table.
(a(1), a(2)):= (1, 2):
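For example, a quick check (these values follow from the recurrence and the seeded base cases):

seq(a(n), n= 1..8);
     1, 2, 2, 3, 6, 5, 30, 11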

@acer 

"but I'll chip in with some anecdotal evidence if that's OK."

Sure, your input is most welcome. So, it seems that your technique is still roughly based on numcpus. Perhaps that optimal value was 1/16th? Two tasks per core? I'd like to see how many threads Threads:-Seq, Threads:-Map, etc., use when there's no tasksize specified, but the code is hidden in the kernel procedure Threads:-MultiThreadInterface.
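For instance, here is a hedged sketch of what I mean by tying the task size to numcpus (the two-tasks-per-core figure is just a guess, not a measured optimum):

n:= 1000:
# Explicit tasksize so that roughly two tasks are created per core.
Threads:-Seq(i^2, i= 1..n, tasksize= ceil(n/(2*kernelopts(numcpus))));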

Darin:

Thank you for this post. I've gained more understanding of Threads:-Task from two readings of it than from multiple readings of the help pages and the prepackaged example worksheet.

Your examples of using Task in a scalable way, and most other examples I've seen, bifurcate the problem into subtasks until a "base case" is reached. The base cases seem somewhat arbitrarily chosen, and may end up being inefficiently small on future machines. Why not use kernelopts(numcpus) to determine how many subtasks to use? That would be scalable.
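To make the suggestion concrete, here is a hedged sketch (my own illustration, not code from your post): choose the base-case size from the problem size and kernelopts(numcpus), so that the bifurcation stops at a depth that scales with the machine; two base tasks per core is just an assumed target.

AddRange:= proc(a::integer, b::integer, basesize::posint)
local m;
     # Base case: the range is small enough to sum directly.
     if b - a + 1 <= basesize then return add(k, k= a..b) end if;
     # Otherwise split the range in half and continue with two child tasks,
     # combining their results with `+`.
     m:= iquo(a+b, 2);
     Threads:-Task:-Continue(`+`,
          Task= [AddRange, a, m, basesize],
          Task= [AddRange, m+1, b, basesize]
     )
end proc:

N:= 10^6:
basesize:= ceil(N/(2*kernelopts(numcpus))):   # about two base tasks per core
Threads:-Task:-Start(AddRange, 1, N, basesize);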

A few days ago, I also posted a question/Comment to another of your parallelization blog entries. I know that it's very easy for followups to old posts to be missed, so you probably didn't see it.
