acer

32333 Reputation

29 Badges

19 years, 323 days
Ontario, Canada

Social Networks and Content at Maplesoft.com

MaplePrimes Activity


These are replies submitted by acer

I was just minimizing a squared expression, so that if NLPSolve found values where the objective was (approximately zero) then that could be a zero of the non-squared expression. (I used NLPSolve as an alternative approach, because fsolve requires as many expressions/functions as variables. You might have just one function but several variables.)
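As a language-neutral sketch of that squared-objective trick (Python here rather than Maple; the function `f` and the plain gradient-descent loop are my own illustration, not anything from the worksheet): one equation in two unknowns cannot go to fsolve directly, but minimizing the square of the residual can still locate a zero.

```python
# Sketch of the squared-objective idea: one equation f(x, y) = 0 in two
# unknowns, minimized as f**2.  If the minimum is (approximately) zero,
# the minimizer is (approximately) a zero of f itself.

def f(x, y):
    return x + y - 3.0              # one equation, two unknowns

def minimize_squared(x, y, rate=0.1, iters=200):
    """Minimize f(x, y)**2 by gradient descent; grad of f**2 is 2*f*grad(f)."""
    for _ in range(iters):
        r = f(x, y)
        x -= rate * 2.0 * r * 1.0   # df/dx = 1
        y -= rate * 2.0 * r * 1.0   # df/dy = 1
    return x, y

x, y = minimize_squared(5.0, 5.0)
print(abs(f(x, y)) < 1e-8)          # objective driven to (approximately) zero
```

Any point on the line x + y = 3 is a zero here, which is exactly why a minimizer (rather than fsolve) is the natural tool for an underdetermined system.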

I have to rush off home. Since I wasn't sure of your exact query, I didn't make use of x(t) in my earlier code. I used just y(t). I'll look again, if nobody posts a methodology for your clarified task before then.

acer


There's a lot of scope for improving the code below. (A probabilistic fallback might also be worthwhile. I suspect that a fallback using testeq & signum might not work; testeq is rather limited.)

Note that `is` can depend upon Digits.

> isconvex := proc(expr,A::realcons:=-infinity,B::realcons:=infinity)
> local a,b,t,x,use_expr,res;
>   x := indets(expr,name) minus {constants};
>   if nops(x) = 1 then
>     x := op(x);
>   else error "expecting an expression in exactly one variable";
>   end if;
>   # For more than one variable, one could test
>   # positive-definiteness of the Hessian.
>   res := is( diff(expr,x,x) >=0 )
>            assuming x>=A, x<=B;
>   if res<>FAIL then
>     return res;
>   else
>     userinfo(1,'isconvex',`falling back to the definition`);
>     # Is it better to expand, or simplify, or put
>     # Normalizer calls around the two-arg evals or
>     # their difference, or ...?
>     use_expr:=expand(expr);
>     is( eval(use_expr,x=(1-t)*a+t*b) <=
>         (1-t)*eval(use_expr,x=a)+t*eval(use_expr,x=b) )
>       assuming a<=b, t>=0, t<=1, b<=B, a>=A;
>   #else
>   ##Is global optimization another fallback?
>   end if;
> end proc:

> # Note: the userinfo output shows that none of the examples
> # below had to fall back to the definition after the 2nd
> # derivative test returned FAIL. But many such examples exist.
> infolevel[isconvex]:=1:

> isconvex( y^2 );
                                     true

>
> isconvex( y^3 );
                                     false

> isconvex( y^3, 0 );
                                     true

>
> isconvex( (y-3)^2 );
                                     true

>
> isconvex( (y-3)^4 );
                                     true

>
> isconvex( exp(y) );
                                     true

> isconvex( -exp(y) );
                                     false

>
> isconvex( -sin(y), 0, Pi );
                                     true
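The definition-based fallback in the proc above can also be checked numerically by sampling. A rough Python analog (random sampling is of course probabilistic evidence, not a proof, and all names here are my own):

```python
import random

def is_convex_sampled(f, a, b, trials=2000, tol=1e-9):
    """Probabilistic convexity check on [a, b] straight from the definition:
    f((1-t)*x + t*y) <= (1-t)*f(x) + t*f(y) for sampled x, y, t.
    A True result is only evidence, not a proof; False is a counterexample."""
    rng = random.Random(0)                  # fixed seed: reproducible runs
    for _ in range(trials):
        x, y, t = rng.uniform(a, b), rng.uniform(a, b), rng.random()
        if f((1 - t) * x + t * y) > (1 - t) * f(x) + t * f(y) + tol:
            return False
    return True

# mirrors the Maple examples above
print(is_convex_sampled(lambda y: y**2, -10, 10))   # True
print(is_convex_sampled(lambda y: y**3, -10, 10))   # False
print(is_convex_sampled(lambda y: y**3, 0, 10))     # True
```

This is essentially what a "probabilistic fallback" for the symbolic `is` call would look like, just without the symbolic guarantees.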

acer

Perhaps someone will add it as the block comment syntax for Maple on wikipedia's language syntax comparison page (both here, and here).

acer


I'm not sure what you mean by "solve for ground". What are the variables?

Are you trying to find the value of 'ground' that satisfies a given y(t) at a given t?

> Z:=proc(TT,GG,YY) global ground; ground:=GG; hite(TT)-YY; end proc:

> fsolve( 'Z'(33.,gr,11853.) );
                                  106.7378019

Or are you trying to find both a 'ground' and a 't' that satisfy a given y(t)?

> Optimization:-Minimize( 'Z'(t,gr,11700)^2, gr=0..200, t=0..100 );
  [0.240100000000000009e-16, [gr = 75.8416010127991882, t = 33.6050437589785673]]

> fsolve( 'Z'(33.605044,gr,11700.) );
                                  75.84162069
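The pattern behind that Z proc is: fix the target height and time, then root-find in the single remaining unknown. A hedged Python analog (the `height` function below is a made-up stand-in for the worksheet's hite(t), which I don't have):

```python
# Fix t and the target Y, then solve height(t, ground) = Y for 'ground'
# by bisection.  The trajectory formula is purely illustrative.

def height(t, ground):
    # hypothetical trajectory: quadratic in t, offset by 'ground'
    return ground + 500.0 * t - 4.9 * t * t

def bisect(f, lo, hi, tol=1e-10):
    """Find a root of f on [lo, hi] by bisection (f must change sign)."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# solve height(33, ground) = 11853 for 'ground', analogous to the fsolve call
g = bisect(lambda gr: height(33.0, gr) - 11853.0, 0.0, 1000.0)
print(round(g, 4))
```

Since `ground` enters the model linearly here, any bracketing root-finder nails it; fsolve on the one remaining unknown is doing the same job.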

You mentioned `minimize`. What do you wish to minimize, in terms of what else?

acer


Despite the new error messages you mention, I believe that one of those approaches will still be needed.

The "procedure form" of Maximize is somewhat prone to doing that (Test-1), especially if a procedure to compute the objective gradient (and the constraint Jacobian, if constraint procedures are supplied) is not explicitly supplied. A "known" bug.

As for your Test-2 and Test-3 attempts, it might be nice to have your .xls data, to figure out what went wrong. (Could it be that that `ln` call is producing a non-real?)

acer


Try with `cos`, since there is no `cosine`.

acer


I realized that was your line of thinking when I posted, and that is why I mentioned the 'parameters' option. Have a look at it. It might help you pull the dsolve call (which is heavy machinery) out of the loop, and allow you to specify the w1_1, w1_2, and w1_3 as parameters. At the very least, you can pull out the diff calls.

acer

Why are those (unchanging) diff calls inside the loop? They don't need to be done 70000 times, right? Not the main culprit, perhaps, but it's often good to look at such aspects.

Do you really need Digits=16? That is just one digit of working precision above the cutoff between fast double-precision and slower "software" floats.

Can you replace that Normalize call with something that acts in-place? (Either a Compiler:-Compile'd in-place proc of your own, or an existing call to Norm plus a call to VectorScalarMultiply with its inplace option, etc.)

Does the dsolve,numeric call really need to be inside the loop, or could it too be called just once up front with its new 'parameters' option to handle the changing w1? See the ?dsolve,numeric,IVP help-page.

That's all presupposing that you don't want to change the basic approach.
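The "set up the solver once, vary only the parameters inside the loop" idea behind dsolve's 'parameters' option can be sketched in Python (all names here are illustrative, and the fixed-step Euler integrator is a crude stand-in for Maple's numeric IVP machinery):

```python
# Build the integrator once; each loop iteration supplies only the
# changing parameter w1, so the setup cost is not paid 70000 times.

def make_solver(f, y0, t_end, steps=1000):
    """Return a closure that integrates y' = f(t, y, w1) from y(0) = y0
    to t = t_end with fixed-step Euler, for a caller-supplied w1."""
    h = t_end / steps
    def solve(w1):
        t, y = 0.0, y0
        for _ in range(steps):
            y += h * f(t, y, w1)
            t += h
        return y
    return solve

# y' = -w1*y, y(0) = 1  =>  y(1) = exp(-w1)
solve = make_solver(lambda t, y, w1: -w1 * y, 1.0, 1.0)
for w1 in (0.5, 1.0, 2.0):      # the loop varies only the parameter
    print(solve(w1))            # close to exp(-w1) for each
```

In Maple the analogous move is one dsolve,numeric call up front with parameters=[w1_1, w1_2, w1_3], then cheap parameter updates inside the loop.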

acer

Yes, VerifyTools can be useful.

It's a pity that some packages which can sometimes be useful (such as VerifyTools, EqualEntries, PiecewiseTools, etc) are undocumented.

I should submit an SCR noting that VerifyTools is not protected (and that it should then be put on the list of undocumented protected names).

acer

