kfli

30 Reputation

4 Badges

3 years, 238 days

MaplePrimes Activity


These are questions asked by kfli

Hello,

My question is mathematical in nature, so it might be a little out of place, but I thought I would give it a shot.

You have a series of Chebyshev coefficients on two adjoining subdomains, let's say S1 = [0, 0.5] and S2 = [0.5, 1]. So far you are still in spectral space. If you want to compute the solution in real space, you can sum the coefficients against the Chebyshev polynomials.

Now imagine you change the intervals to S1 = [0, 0.6] and S2 = [0.6, 1]. Is there a way to manipulate the Chebyshev coefficients from the two initial subdomains to create a new set of Chebyshev coefficients that fits the solution on the new subdomains?

The brute-force method would be to reconstruct the solution in real space from the Chebyshev polynomials and then fit a new set of Chebyshev coefficients to it. Alternatively, you can use Clenshaw's algorithm to evaluate the solution at several points and then use those values to compute the new Chebyshev coefficients.

But what if we could stay in spectral space and create the new Chebyshev coefficients directly? Is that possible? If so, how?
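For context, the brute-force route I have in mind looks something like this (a sketch only; the coefficient vectors a and b and the helper evalOld are made up for illustration, with dummy data so the snippet runs):

with(orthopoly):
N := 16:                                  # coefficients per subdomain
# Dummy old coefficients on [0,0.5] and [0.5,1], for illustration only.
a := Vector(N, i -> evalf(1/i^2)):
b := Vector(N, i -> evalf(1/i^3)):
# Evaluate the old piecewise Chebyshev series at a point x in [0,1].
evalOld := proc(x, a, b, N)
   local t, k;
   if x <= 0.5 then
      t := 4.0*x - 1.0:                   # map [0, 0.5] onto [-1, 1]
      add(a[k+1]*T(k, t), k = 0 .. N-1);
   else
      t := 4.0*x - 3.0:                   # map [0.5, 1] onto [-1, 1]
      add(b[k+1]*T(k, t), k = 0 .. N-1);
   end if;
end proc:
# New coefficients on S1 = [0, 0.6]: sample the old solution at the
# mapped Chebyshev nodes and apply the discrete Chebyshev transform.
c := Vector(N):
for k from 0 to N-1 do
   c[k+1] := evalf((2/N)*add(evalOld(evalf(0.3*(cos(Pi*(j+1/2)/N) + 1)), a, b, N)
                             *cos(Pi*k*(j+1/2)/N), j = 0 .. N-1)):
od:
c[1] := c[1]/2:                           # the k = 0 coefficient is halved

The same loop with the map t -> 0.2*t + 0.8 gives the coefficients on S2 = [0.6, 1]. What I am after is a way to obtain c without ever leaving coefficient space.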

Hello! 

I am currently programming Anderson acceleration for fixed-point iteration in Maple. The algorithm comes from Walker et al. (2011), "Anderson Acceleration for Fixed-Point Iterations" (if you are interested in this problem, please read that short paper). The code that Walker supplies runs fine in Matlab, where qrdelete is a built-in function. In Maple, however, I have decided to skip the updating operations on the QR factors and instead create a new QR factorization every time I increase or decrease the number of stored residuals df.

However, here comes the kicker: somehow Maple decides to turn a Vector or Matrix into a procedure when I use Concatenate or DeleteColumn. I could really use a working Anderson acceleration code for my research (my research is not based on AA or root solvers in general, but on spectral methods). I will paste the entire code here; this is my attempt at getting Walker's original Matlab code to work in Maple.

I could use some pointers and tips. Can you program this in a more efficient way? I would be happy to learn. *Note that the code works for a host of different equations, but not all. Feel free to let me know if this question is inappropriate and I will of course delete the post.
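Before the full code, here is a minimal illustration of the reshaping the algorithm needs, assuming I am reading Maple's behaviour correctly: initializing the history matrix as Matrix(N,0) (an empty matrix with N rows) lets Concatenate grow it column by column, whereas a bare Matrix() is 0 x 0 and cannot be extended with length-N columns.

with(LinearAlgebra): with(ArrayTools):
N  := 2:
df := Matrix(N,0):                              # empty, but with N rows
df := Concatenate(2, df, Vector([1.0, 2.0])):   # now N x 1
df := Concatenate(2, df, Vector([3.0, 4.0])):   # now N x 2
df := DeleteColumn(df, 1):                      # back to N x 1, still a Matrix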

restart:
Digits:=12:
Runtime:=time():
with(LinearAlgebra):
with(plots):
with(orthopoly):
with(ArrayTools):
    

# Test problem: a two-variable fixed-point map phi(x) and initial guess X.
phix := Vector(2):
X    := Vector(2):
phix[1] := cos(x[2]):
phix[2] := 3*cos(x[1]):
X[1] := 0.0:
X[2] := 0.0:

#Code AndersonAcceleration.
AndersonAcceleration:=proc(N,phi,X0)
global x, xS, here1, here2;
local mMax, itmax, atol, rtol, droptol, beta, AAstart, res_hist, df, DGg, gamma;
local DG, mAA, iter, gval,fval,res_norm, tol, f_old, g_old, y, i, k, Q, R, QRg, dfT, DGT;

(*
%------------------------------------------------------------------------
% Function xS = AndersonAcceleration(N,phi,X0).
%
% Fixed-point iteration with or without Anderson acceleration.
% 'phi' is the fixed point iteration map and 'xS' is the 
% solution, so that xS = phi(xS).
%
% Input arguments:
%    X0 = Initial value solution. Note: In this function this variable 
%        must be a column vector.
%       1. 'mMax' = maximum number of stored residuals (non-negative integer).
%       NOTE: 'mMax' = 0 => no acceleration. default=1000
%       2. 'itmax' = maximum allowable number of iterations. default=1000
%       3. 'atol' = absolute error tolerance. default=1.0e-6
%       4. 'rtol' = relative error tolerance. default=1.0e-3
%       5. 'droptol' = tolerance for dropping stored residual vectors to 
%       improve conditioning: If 'droptol' > 0, drop residuals if the
%       condition number exceeds droptol; if droptol <= 0,
%       do not drop residuals.
%       6. 'beta' = damping factor: If 'beta' > 0 (and beta ~= 1), then 
%       the step is damped by beta; otherwise, the step is not damped.
%       NOTE: 'beta' can be a function handle; form beta(iter), where iter 
%       is the iteration number and 0 < beta(iter) <= 1.
%       7. 'AAstart' = acceleration delay factor: If 'AAstart' > 0, start 
%       acceleration when iter = AAstart.
%
% Output:
% xS = Solution vector.
%
% The notation used is that of H.F. Walker: Anderson Acceleration:
% Algorithms and implementation
%------------------------------------------------------------------------
*)

mMax    := 1000:
itmax   := 1000:
atol    := 1.0e-8:
rtol    := 1.0e-12:
droptol := 1.0e4:
beta    := 1.0:
AAstart := 0:

# Initialize storage arrays and number of stored residuals.
# Note: Matrix(N,0) is an empty matrix with N rows, so Concatenate can
# grow it column by column; a bare Matrix() is 0 x 0 and cannot be
# concatenated with columns of length N.
DG := Matrix(N,0):
df := Matrix(N,0):
DGg := Vector(N):
QRg := Vector(N):
mAA := 0:

for iter from 0 to itmax do

   x := X0:                     # expose the current iterate to the global map phi
   gval := Vector(phi):         # evaluate the fixed-point map g(x)
   fval := gval - X0:           # residual f(x) = g(x) - x
   res_norm := Norm(fval,2):
   print(res_norm);
   # Set the residual tolerance on the initial iteration.
   if iter = 0 then
      tol := max(atol,rtol*res_norm):
   fi:
    
   # Convergence test, if converged the loop stops.
   if res_norm <= tol then
      print(res_norm);
      break;   # Breaks for-loop
   fi:
    
   # If res_norm exceeds 1e8 after iter > 5, stop: the iteration has diverged.
   if res_norm >1e8 and iter > 5 then
      print(res_norm);
      break; # Breaks for-loop, diverged
   fi:

   # Fixed point iteration without acceleration, if mMax == 0.
   if mMax = 0 or iter < AAstart then
      # We update X0 <- g(X0) to obtain the next approximate solution.
      for i from 1 to N do
         X0[i] := gval[i]:
      od:
   else
      # With Anderson acceleration.
      # Update the df vector and the DG array.
      if iter > AAstart then
         if mAA < mMax or Size(df,2) = 1 then
            print(whattype(df));
            df := Concatenate(2,df,fval-f_old):
            DG := Concatenate(2,DG,gval-g_old):
         else 
            df := Concatenate(2,df[..,-1],fval-f_old):
            DG := Concatenate(2,DG[..,-1],gval-g_old):   
         fi:
         mAA := mAA + 1:
      fi:   # iter
      print(iter,mAA);
      print(whattype(df));
      # We define the old g and f values for the next iteration
      f_old := fval;
      g_old := gval;
      
      if mAA = 0 then
         # Initialization.
         # If mAA = 0, update X0 <- g(X0) to obtain the next approximate
         # solution. No least-squares problem is solved for iter = 0.
         for i from 1 to N do
            X0[i] := gval[i]:
         od:
      else
         if mAA > 1 then
            Q,R := QRDecomposition(df,datatype=float);
            while ConditionNumber(R) > droptol do
                if mAA = 2 then
                   print('here1'):
                   df := convert(DeleteColumn(df,1),Vector);
                   DG := convert(DeleteColumn(DG,1),Vector);
                else
                   df := DeleteColumn(df,1);
                   DG := DeleteColumn(DG,1);
                fi:
                Q,R := QRDecomposition(df,datatype=float);
                mAA := mAA - 1;
                print(Q,R,mAA);
            od:
          
            if Size(df,2) > 1 then
               print(Q,R);
               gamma := LeastSquares([Q,R],fval);
               print(gamma);
            else
               R := Norm(df,2);
               Q := MTM[mldivide](R,df);
               gamma := MTM[mldivide](R,Transpose(Q).fval);
               print(gamma);
            fi:
         else
            R := Norm(df,2);
            Q := MTM[mldivide](R,df);
            gamma := MTM[mldivide](R,Transpose(Q).fval);
         fi:
 
         if Size(gamma,1) > 1 then
            DGg:=DG.gamma:
         else
            DGg:=DG*gamma;
         fi:

         # Update the approximate solution.
         for i from 1 to N do
            X0[i] := gval[i] - DGg[i];
         od:
         print('Sol',X0);
         
         # Damping for non-zero beta
         if beta > 0 and beta <> 1 then
            if mAA = 1 then
               QRg := Q*R*gamma;
            else
               QRg := df.gamma;
            fi:
            for i from 1 to N do
               X0[i] := X0[i] - (1-beta)*(fval[i] - QRg[i]);
            od:
         fi:# isa(beta ...
         print(iter,mAA);
      fi: # mAA = 0
   fi:# mMax == 0

od:
xS := Vector(N);
for i from 1 to N do xS[i]:=X0[i]: od:
return xS
end:

AndersonAcceleration(2,phix,X):


 

Hello!

I have a pretty simple problem and I feel pretty dumb just asking this.
If I have a 2x2 Matrix and I remove one column and one row, how do I get the result to be a scalar rather than a 1x1 Matrix?

with(LinearAlgebra):
R := RandomMatrix(2);
R1 := DeleteRow(R,-1);
R2 := DeleteColumn(R1,-1);


                          R := [ -82  41 ]
                               [ -70  91 ]
                         R1 := [ -82  41 ]
                         R2 := [ -82 ]
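One workaround, I suppose, is to index into the 1x1 result explicitly:

R2[1,1];   # -82, a true scalar

but I would prefer the deletion itself to return a scalar.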

 

Hi!

Is it possible to create a vector, say

phi := Vector(2);
phi[1] := x[1]+2*x[2];
phi[2] := x[2]**2+x[1];

and then save it in a file that you can use in Matlab as a function handle?

I want to be able to create the phi vector in Maple but use it in Matlab.
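I have wondered whether the CodeGeneration package might be a starting point, along these lines (a sketch only; the output file name is arbitrary, and I realize this produces a plain Matlab script rather than a function handle):

with(CodeGeneration):
phi := Vector(2):
phi[1] := x[1] + 2*x[2]:
phi[2] := x[2]^2 + x[1]:
# Translate the Vector to Matlab syntax and write it to a file.
Matlab(phi, output = "phi.m"):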

Thanks for the help!

 

Hello,

So my question is fairly simple, but I am not sure whether Maple can do this. I want to create a Matrix of partial derivatives (a Jacobian), but computing the full Jacobian at once maxes out my RAM. So I would like to compute each row (or column) of the Jacobian separately and save each row (or column) into a Matrix that has already been written to a file. Basically, I would like to store values in a file (in a loop) without overwriting the file.
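Something along these lines is what I have in mind (a sketch only; F, vars, and the file name "jacobian.txt" are placeholders for my actual system):

# Append one Jacobian row per pass instead of holding the whole
# Matrix in memory. F is a Vector of expressions and vars a list of
# variables; both are assumed to exist already.
fd := fopen("jacobian.txt", APPEND):
for i from 1 to LinearAlgebra:-Dimension(F) do
   row := [seq(diff(F[i], vars[j]), j = 1 .. nops(vars))]:
   fprintf(fd, "%a\n", row):   # one row of partial derivatives per line
od:
fclose(fd):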
