Carl Love


MaplePrimes Activity


These are replies submitted by Carl Love

@Earl Better yet, any polygon can be partitioned into triangles whose vertices are vertices of the polygon.

@Carl Love I made an incorrect generalization from the n=2 case to the n=3 case. At first I thought that no group of three should occur more than once. Actually, the condition is much stricter: no pair can appear in more than one of the groups of three. This seems to be a much more difficult problem, and the number of solutions is far less than 51 or 55. Brian's generalization of Kitonum's solution only applies to the more liberal condition that no group of three occurs more than once.

@ben2015 It's easy. Use a Vector, not a list. Use add and mul rather than sum and product.

A:= <a||(1..10)>:   # Vector of the ten names a1, ..., a10
F:= unapply(add(A(k)*(1-A(k))^2*mul((1-2*A(l))^3, l= 1..k-1), k= 1..10), a||(1..10));
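As a quick sanity check (my own arbitrary test values, not from the original Question), F can then be evaluated numerically:

F(0.1 $ 10);   # evaluate at a1 = a2 = ... = a10 = 0.1; the $ operator repeats the value ten times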

@Earl I'm not sure if this helps, but any polygon can be partitioned into triangles.

@brian bovril 

My argument is that 55 is an upper bound, not an upper and lower bound, for the g=12, n=3 case (binomial(12,3) / (12/3) = 55). You got 51 solutions, so 51 is a lower bound. Do you have reason to believe that there are more than 51 solutions?

@max125 By "parsed", vv means that in 2-D input, Maple attempts to interpret your expression as it's being typed. With Maple Input (also called 1-D input), no interpretation is attempted until you press Enter, and only the characters that you actually typed, as they appear on your screen, are interpreted. The word "parse" is nearly synonymous with "interpret".

@wgarn 

In regular Maple, the command

ans := map(proc (x) options operator, arrow; expand(convert(convert(x, rational), expln)) end proc, {ans});

can be shortened to

ans:= (expand@convert)~({ans}, compose, rational, expln);

I don't know if this also works in Maple TA.
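As a small illustration with a made-up input (mine, not from the thread):

ans:= 1.5*sinh(x):
(expand@convert)~({ans}, compose, rational, expln);
    # floats become rationals, sinh is rewritten via exp, and the result is expanded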

 

@vv 

It's a very interesting attempt. It has some flaws that can perhaps be worked out without seriously increasing the timing. The most interesting part is that it shows that subs(A=~B, C) is implemented in a smart way when A, B, and C are very long lists. The easy/naive way would be O(nops(A)*nops(C)). From using your procedure, I suspect that it's either O(nops(A)*ln(nops(C))) or O(ln(nops(A))*nops(C)). Either the first or the second argument is first sorted so that subsequent lookups take logarithmic time.
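For readers unfamiliar with that pattern, here is a toy illustration (mine) of the elementwise-equation substitution being discussed:

A:= [a, b, c]:  B:= [1, 2, 3]:  C:= [c, a, b, a]:
subs(A =~ B, C);   # [3, 1, 2, 1]: each entry of C is looked up in the A -> B map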

The flaws: The first is the same as with L1toL2_C: if the lists have repeated elements, then the return value isn't a permutation of the indices even when L1 is indeed a permutation of L2. The second applies only to your added attempt that uses Arrays: if some of the elements of L1 are themselves lists, then the returned result may be utter nonsense. For example:

L1:= [[a,b],a]:
L2:= [a,[a,b]]:
Perm1to2_E_bis(L2,L1);

Regarding the timing: Nothing can be concluded from a single trial whose results are so close; the results aren't reproducible. You need repeated trials, and you need to try them in both orders, or (preferably) with an intervening restart. I did repeated tests of L1toL2_C versus Perm1to2_D with nops(L1) as high as 2^22 (about four million), and I reached no conclusion.
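A minimal repeated-trial harness in that spirit might look like this (a sketch; the trial count is arbitrary, and L1toL2_C and Perm1to2_D are the procedures from this thread):

for trial to 5 do
    print(CodeTools:-Usage(L1toL2_C(L1, L2), quiet, output= realtime),
          CodeTools:-Usage(Perm1to2_D(L1, L2), quiet, output= realtime))
end do: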

@Markiyan Hirnyk I withdraw the statement "remove(has, fnormal(...), 0.*I) works...." I must replace it with "It's never a good idea to use remove(has, fnormal(...), 0.*I); use simplify(fnormal(...), zero) instead." My original statement was simply intended to give your code the benefit of the doubt, since you'd already used the remove construct, apparently successfully. Upon closer inspection, I found that your remove commands did nothing, because your expressions didn't contain 0.*I after you'd applied fnormal. On the other hand, since has(1. + 0.*I, 0.*I) returns false, the remove wouldn't have worked even if your expression had contained 0.*I.
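To illustrate with a minimal example (mine; default Digits assumed):

z:= fnormal(1. + 1.*10^(-15)*I);   # 1. + 0.*I
has(z, 0.*I);                      # false: a complex float is a single atomic object
simplify(z, zero);                 # 1.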

@Kitonum Your technique shows directly and manifestly that seven is a lower bound for the answer. On the other hand, the fact that your answer is itself a partition of the set of all 28 pairs of golfers shows that seven is also an upper bound. In other words, binomial(g,n) / (g/n) is an upper bound for the answer, where g is the number of golfers and n is the size of their grouping (n=2 in this case).
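A quick numeric check of that bound for the case at hand (g = 8, n = 2):

binomial(8, 2) / (8/2);   # 7, matching the answer above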

@vv Your procedure obviously has time complexity O(nops(L1)*nops(L2)). I was sure that could be improved upon, at least theoretically. My two entries above are even worse for long lists: their inefficiency is hidden in the conversion to disjoint-cycle form. Below, I give a procedure that does the job with just three sorts and a reindexing, so its time complexity is O(nops(L1)*ln(nops(L1))).
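(The procedure itself followed in the original reply. As a stand-in, here is my simplified reconstruction of the sorting idea, using two sorts plus a reindexing loop; it assumes sort's output= permutation option and has the same repeated-elements flaw discussed above.)

PermBySort:= proc(L1::list, L2::list)
local n:= nops(L1), k,
      p1:= sort(L1, 'output'= 'permutation'),   # indices that sort L1
      p2:= sort(L2, 'output'= 'permutation'),   # indices that sort L2
      P:= Array(1..n);
    # The k-th smallest elements match: L1[p1[k]] = L2[p2[k]], so set P[p2[k]]:= p1[k].
    for k to n do P[p2[k]]:= p1[k] end do;
    [seq(P)]   # a list P with L1[P] = L2
end proc: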

This leaves the question: Can a permutation in permlist form be converted to disjcyc form in time O(nops(P)*ln(nops(P)))? Looking at the code of `convert/disjcyc`, one sees immediately that woefully inefficient iterative sequence-building and set-building processes are used in three places.
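For what it's worth, even linear time seems achievable by walking each cycle once with a visited array. A sketch (mine, not the library's code):

PermlistToDisjcyc:= proc(P::list(posint))
local n:= nops(P), seen:= Array(1..n), cycles:= table(), nc:= 0, j, k, cyc;
    for j to n do
        if seen[j] = 0 and P[j] <> j then   # an unvisited nontrivial cycle starts here
            cyc:= j;  seen[j]:= 1;  k:= P[j];
            while k <> j do  cyc:= cyc, k;  seen[k]:= 1;  k:= P[k]  end do;
            nc:= nc+1;  cycles[nc]:= [cyc]
        end if
    end do;
    [seq(cycles[k], k= 1..nc)]
end proc:

PermlistToDisjcyc([2, 3, 1, 5, 4]);   # [[1, 2, 3], [4, 5]], agreeing with convert/disjcyc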

@patient Oh, okay. By "all curves" I thought that you meant both U(x) and S(x). Here's what you want. Your first try above was almost right.

plots:-display(seq(pU(alpha, -1..-.02), alpha= .1..10));

@Markiyan Hirnyk As mentioned at ?fnormal, simplify(..., zero) is the preferred way to remove the extraneous 0.*I terms produced by fnormal. The remove(has, fnormal(...), 0.*I) works in the case of a simple sum, of course.

The problem seems to be corrected now. How did that happen?

@vv You wrote:

"The order in the resulting list is reversed lexicographic, unlike cartprod where the order is lexicographic. What changes are needed to have the same order?"

Just a trivial change is needed. Change CartProd to

CartProd:= proc(L::list(list))
uses LT= ListTools;
local S, _i, V:= _i||(1..nops(L));
    # Reversing both the tuple and the loop order makes the last factor vary
    # fastest, so the output order is lexicographic.
    [eval(subs(S= seq, foldl(S, LT:-Reverse([V]), (V=~ LT:-Reverse(L))[])))]
end proc:

This adds time complexity O(nops(L))---an insignificant amount. The time complexity of CartProd is O(`*`(nops(L),nops~(L)[])), or, using your m and n, it's O(m*n^m)---i.e., the time is proportional to the length of the output, which is obviously the minimum possible complexity.
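A quick check of the corrected (now lexicographic) order on a tiny example of my own:

CartProd([[1, 2], [x, y]]);   # [[1, x], [1, y], [2, x], [2, y]]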

"I have tested the speed of both versions for a list L given by"

m:= 7;  n:= 10;
L:= [seq([seq(10*i+j, j= 1..n)], i= 1..m)];

That's extremely large.

"I have used only time[real]()."

Use CodeTools:-Usage instead.
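For example (the terminating colon suppresses the huge output, while Usage still prints its statistics):

CodeTools:-Usage(CartProd(L)):   # reports memory usage plus CPU and real times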

"I was surprised to see that CartProd was only about twice as fast as cartprod for such large lists. How do you explain this?"

I've thought about it for several hours, and I can't explain it. If you make the example a little smaller---(m,n):= (6,10)---then CartProd beats cartprod by a factor of 8.

"Have you an idea about the complexity in both cases?"

I'm not sure about cartprod. You can look at the code with showstat(T[nextvalue]). If this code is O(nops(L)), then obviously the overall process is also O(`*`(nops(L),nops~(L)[])).

