
MaplePrimes Activity


These are answers submitted by acer

It is worthwhile looking at the result of your call PDF(0.5*X1 + 0.5*X2, t), i.e. at what is actually being passed to the plot command, and comparing it with other formulations.

The integral can be computed (even with floating-point coefficients) without as much numeric damage to the coefficients and to the ensuing evaluation at the plotting points. For this particular example, accuracy can be lost during the reformulation (e.g. expand, normal, simplify, ...).

PDFexpandedpwpoly.mw

I am not suggesting that this is a better workaround than using exact rational coefficients for the original mix. I am trying to address the OP's question about why it can go wrong.
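
For comparison, here is a minimal sketch of the two formulations side by side. The Uniform distributions are only placeholders, since your actual random variables are defined in your own worksheet:

with(Statistics):
X1 := RandomVariable(Uniform(0, 1)):    # placeholder distributions
X2 := RandomVariable(Uniform(0, 2)):
p_exact := PDF(1/2*X1 + 1/2*X2, t):     # exact rational weights
p_float := PDF(0.5*X1 + 0.5*X2, t):     # float weights, as in your call
plot([p_exact, p_float], t = -1 .. 2);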

@Maria2212 That call to assume inside procedure S is the cause of the problem.

Remove the call to assume that you have inside S. Then try changing the call to dsolve inside S to this, instead:

   dsolve(L,y(v)) assuming lambda<0;

Also, inside procedure S there is a problematic call to subs which contains this:

   indets(%%,name)[4]

What name are you trying to extract using that indets call? That method of getting at the name is fragile and error-prone. It should be replaced with something better, but first you should tell us which name it is trying to obtain. Is it one of _C1, _C2, lambda, or v? (Judging from your supplied output, I suspect it is a roundabout attempt to get the name lambda, but perhaps that is another programming mistake.) This should be fixed.

You should also get rid of all use of % and %% inside S, and instead assign to additional local variables.

The use of sum and name-concatenation inside procedure Galerkin is poor. Better would be add and indexed names.
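
For instance, a small sketch of what I mean (the names and the bound here are made up, just to show the idiom):

# rather than concatenated names with sum, e.g.
#    sum(c||i*x^i, i = 1 .. 5)
# use indexed names with add:
add(c[i]*x^i, i = 1 .. 5);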

You should not set Digits:=2. That is a very poor way to display results to only 2 digits, since it makes the computation itself incur significant roundoff error.
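
If the aim is only to display a result to 2 digits, keep Digits at its default and round just for output. A minimal sketch:

val := evalf(Pi/7):        # computed at the default working precision
evalf[2](val);             # round a copy to 2 digits, for display only
printf("%.2f\n", val);     # or format it with printf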

There are a couple of difficulties. One is the radical in the denominator, which can be handled using evala or rationalize. Another is the conjugated term.
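
To illustrate just the first point, on a made-up expression (your actual one is in the attached worksheet), either of these clears the radical from the denominator:

ee := 1/(1 + sqrt(2)):
rationalize(ee);
evala(Normal(ee));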

example_symbolic_ac.mw

Are you looking for something like one of these?

arg_hue.mw

Or are you saying that you want the shading from your 2D image for your call to complexplot3d? (It really isn't clear to me from your Question, sorry.) In that case, could you simply negate the expression passed to complexplot3d? E.g.,

plots:-complexplot3d(-(z - 1)/(z^2 + z + 1), z = -4 - 4*I .. 4 + 4*I
                     , style=surface, view = [-2..2,-2..2,0..2]
                     #, orientation=[-90,0,0]
                     , grid = [201, 201], shading = zhue
                     #, lightmodel=none
                     );

Do you also want contours?

You could also use printf.
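
For example, a rough sketch of the kind of formatted printing I mean (the values here are placeholders; the actual ones are computed in the attached worksheet):

for i to 3 do
    printf("%3d   %12.8f\n", i, evalf(Pi + 10.0^(-i)));
end do;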

Refine_Extrapolation_ac.mw

Here are various ways around that problem in your old Maple 13.

temp2.mw

(The problem relates to the presence of name r both within the indexed name P[r] and standalone, inside the radical.)

Here is a way, using freeze and thaw.  I also added an option for simplification within collect.

T := (p*a^(-Phi(xi))+q+r*a^Phi(xi))/ln(a):
u[0] := C[0]+C[1]*a^Phi(xi)+C[2]*a^(2*Phi(xi)):
u[1] := diff(u[0], xi):
d[1] := C[1]*a^Phi(xi)*T*ln(a)+2*C[2]*a^(2*Phi(xi))*T*ln(a):
u[2] := diff(d[1], xi):
d[2] := C[1]*a^Phi(xi)*T*ln(a)*(p*a^(-Phi(xi))+q+r*a^Phi(xi))
        +C[1]*a^Phi(xi)*(-p*a^(-Phi(xi))*T*ln(a)+r*a^Phi(xi)*T*ln(a))
        +4*C[2]*a^(2*Phi(xi))*T*ln(a)*(p*a^(-Phi(xi))+q+r*a^Phi(xi))
        +2*C[2]*a^(2*Phi(xi))*(-p*a^(-Phi(xi))*T*ln(a)+r*a^Phi(xi)*T*ln(a)):
expr := expand((2*k*k)*w*beta*d[2]-(2*alpha*k*k)*d[1]-2*w*u[0]+k*u[0]*u[0]):

thaw(collect(subs(a^Phi(xi)=freeze(a^Phi(xi)), expr),
             [freeze(a^Phi(xi))], simplify@factor));



And, if you would like terms like  (a^Phi(xi))^4  to become  a^(4*Phi(xi)) ,

combine(%, power);

collectexample.mw

Depending on your metric for measuring the size of an expression, there are a few alternatives that can reduce the size a bit more, e.g. collecting with respect to some additional names.

There is an old problem with Optimization and "operator form" where the internal construction of the gradient procedure gets confused and produces something which merely returns zero(es). (It's a problem in the automatic differentiation.)

Some aspects of that were fixed a while back, but your example looks like it may be running into a similar problem.  (...a zero gradient confuses the algorithm into thinking that first-order conditions have been satisfied.)

Here are three workarounds. The first and second use "expression form". The first delays premature evaluation of a call to TV by using dummy variables. The second uses a slight modification of procedure TV that returns unevaluated when passed nonnumeric arguments. The third uses a manually constructed gradient procedure that itself does numeric differentiation (via fdiff); see this old Post.
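
As a rough sketch of the key idiom in the second workaround (TV here is only a stand-in for your own procedure body):

TV := proc(p)
    # Return unevaluated for nonnumeric input, so that Optimization's
    # setup (including its automatic differentiation) cannot evaluate
    # this procedure prematurely at symbolic arguments.
    if not type(p, numeric) then
        return 'procname'(args);
    end if;
    # ... the original numeric computation on p goes here ...
end proc: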

Download dsol_param_Maximize.mw

You can simply use the top-level solve command.

For example,

eqs := [ K1 = N*(R9 + R6) + R9 / (R9 + R6)*N^2 + (2*R6 + 2*R9)*N + R9,
         K2 = N*(R6 + R9) / (R6 + R9)*N^2 + (2*R6 + 2*R9)*N + R9 ]:

sols := solve(eqs, [R6,R9], explicit);

solveexample.mw

One way to get something stronger than an evalb-style comparison (e.g. via EqualEntries, or LinearAlgebra:-Equal) is,

    andmap(is,v1-v2,0)

There are other terse ways.

The following happens due to automatic simplification in the kernel (i.e. you cannot delay it with unevaluation quotes, once you have this expression):

'2^(-1/2)';

              (1/2)*2^(1/2)

The following division does not suffer from that effect, while still using unevaluation quotes. But it is a relatively poor solution, since a pair of uneval-quotes gets stripped off upon each evaluation.

'1/sqrt(2)';

              1/sqrt(2)

Solutions that require an ad hoc number of pairs of uneval-quotes, tailored to suit the application, are clumsy and poor: too few quotes and the effect vanishes, yet too many and ugly quotes remain visible in the output.

There are a few other approaches that do not suffer from the ephemeral nature of uneval-quotes. One is to use an inert form:

1/%sqrt(2);

Other choices include wrapping key parts with a call to ``(...) , which can be undone using expand or evalindets.
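
For example, a small sketch of that last approach:

1/``(sqrt(2));      # the wrapped radical is held in the denominator
expand(%);          # strips the ``() wrapper; automatic simplification then gives (1/2)*sqrt(2)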

More work is involved if you want to replace forms in precomputed results. For example, to replace the following returned result one might sensibly wish to first check that the integer base of the radical in the numerator divides into the denominator.

7/(6*sqrt(2));

              (7/12)*2^(1/2)

But first you might want to consider whether this is for final presentation only, or whether you need computation/manipulation of the fixed-up/somehow-inertized expressions.

If you inform the identify command of additional base constant values then it can find an appropriate match.

If you provide factor with an (ad hoc, alas) extension then you can get exact trig-form results without going via floating-point approximations (i.e. without evalf).
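
As a generic illustration of that mechanism, on a made-up quadratic rather than your actual example:

factor(x^2 - 2*x - 1);              # irreducible over the rationals
factor(x^2 - 2*x - 1, sqrt(2));     # splits over the field extended by sqrt(2)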

Download solvetrig.mw

Sorry, this site's not allowing me to inline the worksheet.

The SolveSteps command is part of the Student:-Basics package. It looks like you may have mistakenly tried to use the command name without either qualifying it fully or loading the package.

Try either of these:

Student:-Basics:-SolveSteps([12*x + y = 18, 7*x - 8*y = 32]);

or

with(Student:-Basics, SolveSteps);
SolveSteps([12*x + y = 18, 7*x - 8*y = 32]);

Is this the kind of substitution you want to achieve? (two ways)

ee:=BesselK(a1,b1)+BesselK(a2,b2);

applyrule(BesselK(a::anything,b::anything)
          = 1/(2*GAMMA(a))*(b/2)^(-a),
          ee);

subsindets(ee, specfunc(anything,BesselK),
           u->1/(2*GAMMA(op(1,u)))*(op(2,u)/2)^(-op(1,u)));

BKsubs.mw

P.S. I added a factor of 1/2, for fun...

The non-CUDA timing can change because the MKL bundled with Maple (i.e. the version that matches the Intel compiler used to build the relevant parts of Maple) can be a later one. It can change from release to release of Maple, and Intel strives to improve it. When newer architectures arrive, there may well be additional potential optimizations that require new coding by Intel.

The CUDA implementation of the relevant DGEMM function might also be improved over time, as well as for new architectures. There's no compelling reason why the Intel MKL should improve at the same rate as the CUDA implementation.

If I correctly understand your ratio definition (and if the CUDA timing is roughly the same for your two cited Maple versions), then the drop in the ratio could be explained simply by the fact that Intel has improved the performance of its MKL DGEMM, at least for your platform and hardware. This has been a general trend for some years now.

The cost of transferring the data to and from the GPU card has also become a larger relative portion of the whole computation, as the timings improve. On some platforms and architectures the MKL DGEMM on the CPU is actually faster than the CUDA DGEMM on the GPU plus the data-transfer cost, for the 4000x4000 example.
