Carl Love

28110 Reputation

25 Badges

13 years, 120 days
Wayland, Massachusetts, United States
My name was formerly Carl Devore.

MaplePrimes Activity


These are replies submitted by Carl Love

@casperyc Your two-stage method and my two-stage method may be effectively the same thing. Either way, it is much better time-wise than passing all the polynomial substitutions to simplify at once.

@Alejandro Jakubi wrote:

If that complexity is something inherent to the polynomial rather than a measure of how the polynomial is written, does it mean, for instance, that it remains invariant under a linear change of variables?

I am not sure about that. But if expand(p - q) = 0 is true, then p and q have the same complexity, by my measure. Note that SolveTools:-Complexity does not have this property.

My notion is heuristic at this point. But the idea clearly has some merit, since it makes the simplify commands run thousands of times faster, and the results are obviously fully simplified with respect to the side relations. If you take a look at the worksheets in this thread, I think that you'll see what I mean.

@Jimmy You may be able to do something by incorporating the logarithm directly into the model rather than applying it to the plot after the fact. For example, the model is currently

i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)-i = 0

You could make that

ln(i0*(exp(1000*(v-i*rs)/n0/(259/10))-1)) - ln(i) = 0

This might distribute the error more evenly along the curve. I am not sure.

Kitonum wrote: Carl, your code is compact and elegant, but it works too slowly. Can you explain why?

No, I can't explain it. It uses two library procedures: `convert/base` and ListTools:-Reverse; all the rest is kernel code, and the library procedures are very simple. It would be possible to profile it to figure out how much time is spent in the library procedures. Although parse is kernel, one must assume that it is quite complex.
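One way to investigate would be to time the suspect pieces in isolation with CodeTools:-Usage. This is only a sketch; the inputs below are arbitrary test data, not the actual data from the thread:

```maple
# Time each library helper separately to see where the cost is.
CodeTools:-Usage(convert(123456789, base, 10));

L := [seq(k, k = 1..10^5)]:
CodeTools:-Usage(ListTools:-Reverse(L));

# parse is kernel, but may still dominate if it is internally complex.
CodeTools:-Usage(parse("123456789"));
```

Comparing the reported CPU times against the total for the whole procedure would show whether the library calls or parse account for the slowness.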

A generalization: This bug manifests if there is division by any variable in any index containing the summation variable, no matter how deeply it is buried:

sum(g(3+a[k+f(1/j)]), k= 1..n);

@Jimmy I am not sure if you are questioning that or just stating a fact. residualmeansquare will always be less than residualsumofsquares because residualmeansquare = residualsumofsquares / degreesoffreedom, and degreesoffreedom is an integer greater than 1 in any practical problem.

Are your "letters" actually entered as j[0], j[1], ..., j[15]? If they are not, would it be convenient to put them in that style?

Is it possible that any of the exponents are 0 or 1? Are the exponents all integers?

It is the expected behaviour: g(i) evaluates to 0 because i is not 1. Maple's standard order of evaluation means that g(i) is evaluated before it is passed to the sum command; i has no value at that time, so it is not 1.
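A minimal sketch of the difference (the procedure g here is a hypothetical example that returns 1 only at i = 1):

```maple
g := i -> `if`(i = 1, 1, 0):

sum(g(i), i = 1..5);    # g(i) evaluates to 0 before sum sees it
sum('g(i)', i = 1..5);  # unevaluation quotes delay evaluation of g(i)
add(g(i), i = 1..5);    # add has special evaluation rules for its first argument
```

For an explicit finite range, add is usually the better tool, since it delays evaluation of its first argument by design.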

@Alejandro Jakubi There's also codegen:-cost and SoftwareMetrics:-HalsteadMetrics.

However, the complexity that I am talking about is something inherent to the polynomial rather than a measure of how the polynomial is written.

Specifically, we have a set of n polynomials {P[k] $ k= 1..n} and a set of n distinct names {v[k] $ k= 1..n}. We have a set E of very large polynomials on which we want to perform the simplify with side relations

simplify(E, {P[k] = v[k] $ k= 1..n})

However, that takes way too much time (when n is, say, 10). I have discovered that it is sometimes possible to achieve the same effect as the above command nearly instantly by applying the simplifications one at a time, but only if they are applied in a certain order. I call that order "complexity" because it seems related to the usual notions of being less or more complex. But it is a partial order, not a total order.
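As a small hypothetical illustration of the staged approach (the polynomials and names here are made-up placeholders, not the actual ones from the worksheets):

```maple
e := expand((x^2 + y^2)^3 + x*y*(x^2 + y^2)):

# All side relations at once:
simplify(e, {x^2 + y^2 = u, x*y = v});

# One at a time, with the more complex substitution applied first:
simplify(simplify(e, {x^2 + y^2 = u}), {x*y = v});
```

On large inputs, the staged form can avoid the expensive Groebner-basis work that handling all the side relations simultaneously entails, provided the substitutions are ordered from more complex to less complex.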

@casperyc "Complexity", in the sense that I am using it, is a partial order of polynomials, but not a total order. I said that the last two "cannot be ranked by complexity"; I did not say that you had them in the wrong order, because there is no right or wrong order. They have to be processed as a pair.

I do not have a precise definition of complexity; it is a heuristic concept. If A has higher powers than B but is otherwise identical, then I say that A is more complex than B.

You wrote:

Also, my current method
sskappa( sstest(4,2) , kappa );
ssTwoStage(%,4);
take less than 20 sec.

I thought the goal was to use the substitutions sstest3(4,2) instead of sstest(4,2).
