Carl Love

28110 Reputation

25 Badges

13 years, 122 days
Himself
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

Perhaps you're missing a semicolon on the first line.

@Markiyan Hirnyk Hence the use of a midpoint method, either bvp[midrich] or bvp[middefer].

After correcting for "initial Newton iteration is not converging" by using a continuation parameter (I used A[1]:= 1/C, continuation= C), I am left with the much more difficult-to-fix error "Newton iteration is not converging". See ?dsolve,numeric_bvp,advanced .
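As a minimal sketch of how a continuation parameter is specified (the Bratu-type ODE and boundary conditions below are placeholders for illustration, not the actual problem):

#The nonlinearity is scaled by the continuation parameter C; dsolve solves a
#sequence of problems as C is ramped from 0 (linear, easy) to 1 (the real problem).
ode:= diff(y(x), x, x) = C*exp(y(x)):
bcs:= y(0) = 0, y(1) = 0:
sol:= dsolve({ode, bcs}, numeric, method= bvp[midrich], continuation= C);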

@Ratch You use option implicit when you do not want the answer to be in a form solved for y(x).

@Axel Vogt What he wants is

seq(fsolve(eval(NN, [x,N]=~ L[k])), k= 1..nops(L));

But all of the fsolves return unevaluated. I guess that's because either the required accuracy cannot be achieved or there is truly no solution. Further exploration with RootFinding:-NextZero and plot (see my other Answer) shows that both situations occur.

@J4James I misinterpreted your question. I thought that you wanted the one value of d that is the best fit for all the data. If you want a separate d for each (x, N) pair, you can use fsolve without NonlinearFit.

@sakhan Converting the result back into hex is required; it is a necessary step in getting back to plaintext. You cannot take an arbitrary integer and apply convert([...], bytes) to it: the integers in the list need to be in the range 0-255. Here's a little procedure to convert hex to plaintext:

HexToPlaintext:= H-> convert(sscanf(H, cat("%2x" $ length(H))), bytes);
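For example (the hex string here is chosen purely for illustration; it is the hex encoding of "Maple"):

HexToPlaintext("4D61706C65");
                            "Maple"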

However, if I apply this to your hex strings, I just get garbage characters rather than English text. So I wonder how you got your number1 through number11.

What is the result when you call rtable_eval(A) ?

@John Fredsted It is just a coincidence that the final result is also equal to B . < 3, 0, 1 >. If you change any entry in A, this is no longer true.

For shorthand, note that Vector([a,b,c]) is equivalent to < a, b, c >, and that Transpose(A) is equivalent to A^%T.
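A small illustration of both shorthands (the matrix and vector here are arbitrary):

A:= Matrix([[1, 2], [3, 4]]):
A^%T . < 5, 6 >;   #same as LinearAlgebra:-Transpose(A) . Vector([5, 6])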

@Joe Riel How is it possible that the whole procedure takes only 48 seconds when simply the call to Iterator takes over 200 seconds for me? I am using a compiled iterator. That is, the line

AllP:= [seq(P, P= Iterator:-SetPartitions({S[]}, [[4,4]]))]:

takes over 200 seconds for me. Could there be that much difference between our compilers?

@Joe Riel That's an astute observation, Joe, that ln(1.0) = ln(1.00). I missed that, and I should've used add.

This issue of when `+` is more efficient than add is quite complex, but my tests show that `+`(L[]) is faster than add(x, x= L); so I default to using `+` in that situation. The map is a confounding factor: I don't know whether `+`(map(f,L)[]) is faster than add(f(x), x= L), but I'd guess that the add is faster.
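A quick way to run such a comparison yourself (the list L below is just a large arbitrary example; CodeTools:-Usage reports time and memory):

L:= [seq(evalf(k), k= 1..10^6)]:
CodeTools:-Usage(`+`(L[])):
CodeTools:-Usage(add(x, x= L)):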

NN is an expression, but not an equation, so it can't be solved. Did you intend for it to be equated to 0?

Using Maple 17, I have no trouble evaluating in the loop. I get,

                       0.074999999990753
                       0.149262458185785
                       0.187942405730181
                       0.198418374384841
                       0.199889286003792
                       0.199995945427242
                       0.199999923257023
                       0.199999999255143
                       0.199999999996312
                       0.199999999999991
                       0.200000000000000
                              0.2
                              0.2
                              0.2
                              0.2
                       0.200000000000000
                       0.199999999999991
                       0.199999999996312
                       0.199999999255143
                       0.199999923257023
                       0.199995945427242
                       0.199889286003792
                       0.198418374384841
                       0.187942405730181
                       0.149262458185785
                       0.074999999990753

What Maple are you using?

@brian bovril 

Yes and yes, it seems complicated.

@brian bovril 

I don't know of a good way to pick the cut-off value for the variation. (I hesitate to call the quantity "variance" since that has a well-defined mathematical meaning.) If you had a large number of such sets of 16 numbers to process, you might be able to figure it out from a random sample of those.

Your method of measuring the variation is not valid because it depends on the order of the blocks within the partition. Note that the two values of P that you got are essentially the same except for the order and the presence of decimal points; hence the variation measure should give the same value for both. Also, you only take four pairwise differences, when there are actually six (binomial(4,2) = 6) pairs to consider. In the code below, I have a corrected version of this type of variation measure. It uses combinat:-choose to iterate over all pairs of blocks.

I realized that I could use the property that the ln of a product is the sum of the lns to speed up my program. Now I only need to take the ln of each number once.

You should check whether you have the Maple compiler: just take out the phrase compile= false from the code below. If you don't get an error message, then you have the compiler. Using it leads to about a 25% decrease in the time for Iterator:-SetPartitions.

 

restart:

 

S:= [1829.0, 1644.0, 1594.0, 1576.0, 1520.0, 1477.0, 1477.00, 1404.0,
     1392.0, 1325.0, 1313.0, 1297.0, 1292.0, 1277.0, 1249.0, 1236.0]:


#Labels

SL:= [seq(A||i,i=1..nops(S))]:

assign(Label ~ (S) =~ SL); #Create remember table of labels.


#Procedure to measure variation
assign(Ln~(S)=~ evalf(ln~(S))); #Create remember table of ln's.

lnp:= `+`(Ln~(S)[])/4:

Var:= P-> `+`(map(b-> abs(`+`(map(Ln, b)[]) - lnp), P)[]):


#Generate list of all partitions.

st:= time():  

AllP:= [seq(P, P= Iterator:-SetPartitions({S[]}, [[4,4]], compile= false))]:

t1:= time()-st;

 

#Find partition with minimal variation

Min:= proc(S::{list,set}, P::procedure)

local M:= infinity, X:= (), x, v;

     for x in S do

          v:= P(x);

          if v < M then  M:= v;  X:= x  end if

     end do;

     X

end proc:

 

P:= Min(AllP, Var);
t2:= time()-st-t1;

#Apply Labels

subsindets(P, realcons, Label);  

 

213.203

[{1236.0, 1277.0, 1576.0, 1644.0}, {1249.0, 1292.0, 1392.0, 1829.0}, {1297.0, 1404.0, 1477.00, 1520.0}, {1313.0, 1325.0, 1477.0, 1594.0}]

119.797

[{A14, A16, A2, A4}, {A1, A13, A15, A9}, {A12, A5, A7, A8}, {A10, A11, A3, A6}]

#Brian's naive "all pairs" measure of variation.
Var2:= P-> add(abs(`*`(B[1][])-`*`(B[2][])), B= combinat:-choose(P,2)):

P2:= Min(AllP, Var2);
t3:= time()-st-t1-t2;

[{1236.0, 1277.0, 1576.0, 1644.0}, {1249.0, 1292.0, 1392.0, 1829.0}, {1297.0, 1404.0, 1477.00, 1520.0}, {1313.0, 1325.0, 1477.0, 1594.0}]

309.516

The naive variation selects the same partition, but it takes nearly three times as much processor time.

 

 

Download Equal_products.mw

@Gauss 

LinearAlgebra:-ColumnSpace(A) gives you a basis for the image of A.
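For example (the matrix here is arbitrary, chosen so that its third column is the sum of the first two, giving rank 2):

A:= Matrix([[1, 2, 3], [2, 4, 6], [1, 0, 1]]):
LinearAlgebra:-ColumnSpace(A);  #returns a list of 2 basis Vectors for the image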
