kfli

75 Reputation

4 Badges

9 years, 84 days

MaplePrimes Activity


These are questions asked by kfli

Hi! I have been trying to calculate the value of the Clenshaw derivative. I can get the regular Clenshaw recurrence to work, but not the derivative. I am working from this thread: https://scicomp.stackexchange.com/questions/27865/clenshaw-type-recurrence-for-derivative-of-chebyshev-series. (P.S. Note that I have halved the A[z+1] term. I have tried it both ways, but the overall result is wrong either way, so I am guessing something more important is wrong.)

This is my code so far:

Clenshaw_Dx_1D := proc(z,C,Nm,s)
local k,A,B: global Clen1D_Dx:

# Arrays rather than Vectors, so the index can start at z = 0 or z = 1.
A := Array(z..Nm+1+z);
B := Array(z..Nm+1+z);

# Standard Clenshaw recurrence on the coefficients C.
for k from Nm-1+z by -1 to 1+z do
A[k] := C[k] + 2*s*A[k+1] - A[k+2]:
od:

# Companion recurrence for the derivative, per the linked thread.
for k from Nm-1+z by -1 to 1+z do
B[k] := 2*A[k+1] + 2*s*B[k+1] - B[k+2]:
od:
Clen1D_Dx := A[z+1]/2 + s*B[1+z] - B[2+z]:

end:

Here z can be 0 or 1, depending on the index your C array starts at. C is the array of Chebyshev coefficients of a Chebyshev series approximating, for example, u(x). The Chebycoeff1D procedure below calculates the Chebyshev coefficients, and the code after it calls the procedure for a specific function. Try it with some values of xM, for example 4-10.
 

Chebycoeff1D := proc(express,Nn,C2,A,B)
local Cfac,fac1x,fac2x,k,K;

Cfac := Array(1..Nn);
fac1x := Pi/Nn;
# Sample the function at the Chebyshev-Gauss nodes mapped to the interval.
for k from 1 to Nn do
  Cfac[k] := eval(subs(x=evalf(cos(fac1x*(k-0.5)))*A+B, evalf(express)));
od;
# A discrete cosine transform of the samples gives the coefficients.
for K from 1 to Nn do
  fac2x := Pi*(K-1)/Nn;
  C2[K-1] := (2/Nn)*add(Cfac[k]*evalf(cos(fac2x*(k-0.5))), k=1..Nn);
od:
end:
xM := 8:                        # resolution; try values 4-10
nn := 1.0:
Lc := 0.0: Rc := 1.0:
func := nn*sin(2*Pi*x) + 0.5*nn*sin(Pi*x):
Chebycoeff1D(func, xM+1, C, 0.5*(Rc-Lc), 0.5*(Rc+Lc)):

Once you have the C array, call Clenshaw_Dx_1D. For example:
 

Clenshaw_Dx_1D(0,C,xM+1,0.0);  # Evaluates f'(x) in the middle of the domain (s = 0 maps to x = 0.5).

Check with

subs(x=0.5, diff(func,x));

The overall "pattern/shape" of the derivative values is correct, just not the amplitude. Any help here? Where is the Clenshaw derivative procedure going wrong?
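One thing I have not ruled out (an assumption on my part, not a confirmed fix): the coefficients are computed on [Lc, Rc] through the map x = 0.5*(Rc-Lc)*s + 0.5*(Rc+Lc), so a derivative taken with respect to the Chebyshev variable s still needs the chain-rule factor ds/dx = 2/(Rc-Lc), which is 2 on [0, 1]. A quick check along those lines:

dFds := Clenshaw_Dx_1D(0, C, xM+1, 0.0):   # derivative with respect to s at s = 0
dFdx := 2/(Rc-Lc)*dFds;                    # chain rule for the mapped domain
evalf(subs(x=0.5, diff(func,x)));          # exact value for comparison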

Hello,

My question is mathematical in nature, so it might be a little out of place, but I thought I would give it a shot.

You have a series of Chebyshev coefficients on two adjoining subdomains, let's say S1 = [0, 0.5] and S2 = [0.5, 1]. So far you are still in spectral space. If you want to compute the solution in real space, you can sum the coefficients against the Chebyshev polynomials.

Now imagine you change the intervals to S1 = [0, 0.6] and S2 = [0.6, 1]. Is there a way to manipulate the Chebyshev coefficients from both initial subdomains to create a new set of Chebyshev coefficients that fits the solution on the new subdomains?

The brute-force method would be to reconstruct the solution in real space from the Chebyshev polynomials and then use that to form a new set of Chebyshev coefficients. Alternatively, you can use Clenshaw to evaluate the solution at several points and then use those values to create new Chebyshev coefficients; a sketch of that route is below.

But what if we could stay in spectral space and create the new Chebyshev coefficients directly? Is that possible? If so, how?
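For concreteness, here is a minimal sketch of the brute-force route (my own sketch, reusing the quadrature from Chebycoeff1D above; EvalOld stands for a hypothetical evaluator of the original piecewise series, e.g. Clenshaw on S1 or S2 depending on where the point falls):

# Sketch: sample the old piecewise series at the Chebyshev nodes of a new
# subdomain [a,b], then recompute coefficients with the Chebycoeff1D quadrature.
RemapCheby := proc(EvalOld, Nn, a, b)
local fac1x, fac2x, k, K, xk, Cfac, Cnew;
  Cfac := Array(1..Nn);
  Cnew := Array(0..Nn-1);
  fac1x := Pi/Nn;
  for k from 1 to Nn do
    xk := 0.5*(b-a)*evalf(cos(fac1x*(k-0.5))) + 0.5*(b+a);
    Cfac[k] := EvalOld(xk);              # evaluate the old solution in real space
  od;
  for K from 1 to Nn do
    fac2x := Pi*(K-1)/Nn;
    Cnew[K-1] := (2/Nn)*add(Cfac[k]*evalf(cos(fac2x*(k-0.5))), k=1..Nn);
  od;
  return Cnew;
end: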

Hello! 

I am currently programming Anderson acceleration for fixed-point iteration in Maple. The algorithm comes from Walker et al. 2011 (ANDERSON ACCELERATION FOR FIXED-POINT ITERATIONS); if you are interested in this problem, please read that short paper. The code Walker supplies runs fine in Matlab, where qrdelete is a built-in function. In Maple, however, I have decided to skip in-place operations on the QR factors and have instead opted to create a new QR factorization every time I increase or decrease the number of stored residuals df. But here comes the kicker: somehow Maple turns a Vector or Matrix into a procedure when I Concatenate or DeleteColumn. I could really use a working Anderson acceleration code for my research (my research is not on AA or root solvers in general, but on spectral methods). I will paste the entire code here; it is my attempt at getting Walker's original Matlab code to work in Maple.

I could use some pointers and tips. Can you program this in a more efficient way? I would be happy to learn. Note that the code works for a host of different equations, but not all. Feel free to let me know if this question or inquiry is inappropriate and I will of course delete the post.

restart:
Digits:=12:
Runtime:=time():
with(LinearAlgebra):
with(plots):
with(orthopoly):
with(ArrayTools):
    

phix := Vector(2):
X    := Vector(2):
phix[1] := cos(x[2]):
phix[2] := 3*cos(x[1]):
X[1] := 0.0:
X[2] := 0.0:

#Code AndersonAcceleration.
AndersonAcceleration:=proc(N,phi,X0)
global x, xS, here1, here2;
local mMax, itmax, atol, rtol, droptol, beta, AAstart, res_hist, df, DGg, gamma;
local DG, mAA, iter, gval,fval,res_norm, tol, f_old, g_old, y, i, k, Q, R, QRg, dfT, DGT;

(*
%------------------------------------------------------------------------
% Function xS = AndersonAcceleration(N,phi,X0).
%
% Fixed-point iteration with or without Anderson acceleration.
% 'phi' is the fixed point iteration map and 'xS' is the 
% solution, so that xS = phi(xS).
%
% Input arguments:
%    X0 = Initial value solution. Note: In this function this variable 
%        must be a column vector.
%       1. 'mMax' = maximum number of stored residuals (non-negative integer).
%       NOTE: 'mMax' = 0 => no acceleration. default=1000
%       2. 'itmax' = maximum allowable number of iterations. default=1000
%       3. 'atol' = absolute error tolerance. default=1.0e-6
%       4. 'rtol' = relative error tolerance. default=1.0e-3
%       5. 'droptol' = tolerance for dropping stored residual vectors to 
%       improve conditioning: If 'droptol' > 0, drop residuals if the
%       condition number exceeds droptol; if droptol <= 0,
%       do not drop residuals.
%       6. 'beta' = damping factor: If 'beta' > 0 (and beta ~= 1), then 
%       the step is damped by beta; otherwise, the step is not damped.
%       NOTE: 'beta' can be a function handle; form beta(iter), where iter 
%       is the iteration number and 0 < beta(iter) <= 1.
%       7. 'AAstart' = acceleration delay factor: If 'AAstart' > 0, start 
%       acceleration when iter = AAstart.
%
% Output:
% xS = Solution vector.
%
% The notation used is that of H.F. Walker: Anderson Acceleration:
% Algorithms and implementation
%------------------------------------------------------------------------
*)

mMax    := 1000:
itmax   := 1000:
atol    := 1.0e-8:
rtol    := 1.0e-12:
droptol := 1.0e4:
beta    := 1.0:
AAstart := 0:

# Initialize storage arrays and number of stored residuals.
DG := Matrix():   # empty (0 x 0) Matrix, the analogue of Matlab's []
df := Matrix():
DGg := Vector(N);
QRg := Vector(N);
mAA := 0:

for iter from 0 to itmax do

   x := X0:                # phi is written in terms of the global x
   gval := Vector(phi):    # re-evaluate the fixed-point map g at x
   fval := gval - X0:      # residual f(x) = g(x) - x
   res_norm := norm(fval,2):
   print(res_norm);    
   # Set the residual tolerance on the initial iteration.
   if iter = 0 then
      tol := max(atol,rtol*res_norm):
   fi:
    
   # Convergence test, if converged the loop stops.
   if res_norm <= tol then
      print(res_norm);
      break;   # Breaks for-loop
   fi:
    
   # If res_norm is larger than 1e8 for iter > 5, stop: the iteration has diverged.
   if res_norm >1e8 and iter > 5 then
      print(res_norm);
      break; # Breaks for-loop, diverged
   fi:

   # Fixed point iteration without acceleration, if mMax == 0.
   if mMax = 0 or iter < AAstart then
      # We update X0 <- g(X0) to obtain the next approximate solution.
      for i from 1 to N do
         X0[i] := gval[i]:
      od:
   else
      # With Anderson acceleration.
      # Update the df vector and the DG array.
      if iter > AAstart then
         if mAA < mMax or Size(df,2) = 1 then
            print(whattype(df));
            df := Concatenate(2,df,fval-f_old):
            DG := Concatenate(2,DG,gval-g_old):
         else 
            df := Concatenate(2,df[..,-1],fval-f_old):
            DG := Concatenate(2,DG[..,-1],gval-g_old):   
         fi:
         mAA := mAA + 1:
      fi:   # iter
      print(iter,mAA);
      print(whattype(df));
      # We define the old g and f values for the next iteration
      f_old := fval;
      g_old := gval;
      
      if mAA = 0 then
         # Initialization
         # If mAA = 0, update X0 <- g(X0) to obtain the next approximate
         # solution. No least-squares problem is solved for iter = 0.
         for i from 1 to N do
            X0[i] := gval[i]:
         od:
      else
         if mAA > 1 then
            Q,R := QRDecomposition(df,datatype=float);
            while ConditionNumber(R) > droptol do
                if mAA = 2 then
                   print('here1'):
                   df := convert(DeleteColumn(df,1),Vector);
                   DG := convert(DeleteColumn(DG,1),Vector);
                else
                   df := DeleteColumn(df,1);
                   DG := DeleteColumn(DG,1);
                fi:
                Q,R := QRDecomposition(df,datatype=float);
                mAA := mAA - 1;
                print(Q,R,mAA);
            od:
          
            if Size(df,2) > 1 then
               print(Q,R);
               gamma := LeastSquares([Q,R],fval);
               print(gamma);
            else
               R := norm(df,2);
               Q := MTM[mldivide](R,df);
               gamma := MTM[mldivide](R,Transpose(Q).fval);
               print(gamma);
            fi:
         else
            R := norm(df,2);
            Q := MTM[mldivide](R,df);
            gamma := MTM[mldivide](R,Transpose(Q).fval);
         fi:
 
         if Size(gamma,1) > 1 then
            DGg:=DG.gamma:
         else
            DGg:=DG*gamma;
         fi:

         # Update the approximate solution.
         for i from 1 to N do
            X0[i] := gval[i] - DGg[i];
         od:
         print('Sol',X0);
         
         # Damping for non-zero beta
         if beta > 0 and beta <> 1 then
            if mAA = 1 then
               QRg := Q*R*gamma;
            else
               QRg := df.gamma;
            fi:
            for i from 1 to N do
               X0[i] := X0[i] - (1-beta)*(fval[i] - QRg[i]);
            od:
         fi:# isa(beta ...
         print(iter,mAA);
      fi: # mAA = 0
   fi:# mMax == 0

od:
xS := Vector(N);
for i from 1 to N do xS[i]:=X0[i]: od:
return xS
end:

AndersonAcceleration(2,phix,X):
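One pattern I have been experimenting with (my own suggestion, not from Walker's code) is to keep df a Matrix at all times, so that Concatenate and DeleteColumn never see a bare Vector:

# Grow the residual-difference matrix; Matrix(fval - f_old) makes an
# N x 1 Matrix instead of a Vector, so later column operations stay well-typed.
if ArrayTools:-Size(df, 2) = 0 then
   df := Matrix(fval - f_old);
else
   df := Concatenate(2, df, fval - f_old);
fi:
# Drop the oldest column without converting to a Vector; DeleteColumn on an
# N x 2 Matrix returns an N x 1 Matrix, which QRDecomposition still accepts.
if LinearAlgebra:-ColumnDimension(df) > 1 then
   df := DeleteColumn(df, 1);
fi: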


 

Hello!

I have a pretty simple problem, and I feel pretty dumb just asking this.
If I have a 2x2 Matrix and I remove one column and one row, how do I get the result to be a scalar rather than a 1x1 Matrix?

with(LinearAlgebra):
R := RandomMatrix(2);
R1 := DeleteRow(R,-1);
R2 := DeleteColumn(R1,-1);


    R  := [ -82  41 ]
          [ -70  91 ]
    R1 := [ -82  41 ]
    R2 := [ -82 ]
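For example, indexing does give me the scalar, but it feels clumsy; I am wondering whether the Delete commands can produce it directly:

r := R2[1,1];   # scalar entry of the 1x1 Matrix
r := R[1,1];    # same scalar, straight from the original 2x2 Matrix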

 

Hi!

Is it possible to create a vector, say

phi := Vector(2);
phi[1] := x[1]+2*x[2];
phi[2] := x[2]**2+x[1];

and then save it in a file that you can use in Matlab as a function handle?

I want to be able to create the phi vector in Maple but use it in Matlab.
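A minimal sketch of the kind of thing I mean (the wrapper procedure, the use of the CodeGeneration package, and the file name phi.m are just my illustration):

with(CodeGeneration):
# Wrap the Vector components in a procedure of plain scalar arguments,
# then translate the procedure to Matlab source code written to phi.m.
phiProc := proc(x1, x2)
   local y;
   y := Array(1..2);
   y[1] := x1 + 2*x2;
   y[2] := x2^2 + x1;
   return y;
end proc:
Matlab(phiProc, output = "phi.m");

In Matlab the generated file could then be wrapped in a function handle, e.g. f = @phiProc.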

Thanks for the help!

 
