MapleEnthusiast

60 Reputation

2 Badges

4 years, 344 days

MaplePrimes Activity


These are replies submitted by MapleEnthusiast

Thanks for asking, @acer:

I've been working with a system of linear equations that is homogeneous (of degree 1). When brought to matrix form, this system generates a singular matrix, which is non-invertible, so I was applying the Moore-Penrose inverse. Thanks to your great help in past posts, I was able to get rid of the homogeneity issue by using model constraints (side relations). By now, I've also figured out how to make the corresponding matrix non-singular (by choosing a "scaling factor"). Nevertheless, I was still curious to see how closely the Moore-Penrose solution approximates the actual ("true") solution of the original problem. It turns out that, by increasing the digit precision (in my particular example beyond 13 digits), the Moore-Penrose solution converges quite well to the "true" solution.
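The convergence check I ran was along these lines (a minimal sketch with a hypothetical singular matrix standing in for my actual system):

```maple
with(LinearAlgebra):
# Hypothetical singular matrix and right-hand side (placeholders, not my system):
A := Matrix([[1., 2., 3.], [2., 4., 6.], [1., 0., 1.]]):
b := Vector([1., 2., 1.]):
# Compare the pseudoinverse solution at increasing working precision:
for d in [10, 13, 20] do
    Digits := d:
    print(d, MatrixInverse(A, method = pseudo) . b):
end do:
```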

Thanks again for your great help with this and all my previous posts -- I really appreciate it!

Thanks, @acer -- that helps! 

@acer Sorry for not showing the output earlier, it's:

Matrix(2, 2, [[-0.3285714552*10^10, -0.1714285853*10^10], [-0.3285714553*10^10, -0.1714285852*10^10]])

As you've already hinted at with the condition number, I'm trying to invert the singular matrix, and I thought the (Moore-Penrose) pseudoinverse should be able to do so (instead of returning huge, nearly identical numbers), no?

Dear @vv, @acer, and @Carl Love,

In Puzzle_4.mw, I have further reduced the size of the system and simplified it in terms of parameters. I hope this will allow you to better recognize the puzzle that I've been struggling with for a while. As previously mentioned, simplification under a constraint seems to deliver different polynomial terms depending on the choice of the unconstrained variable. More specifically, if I choose R[3] as the unconstrained variable (see function A in the attached Maple file) and differentiate the numerator of R[1] with respect to delta[2], I get 8 terms of the polynomial (see eqs. (6) and (7)), whereas if I choose R[2] as the unconstrained variable (see function B in the attached file) and differentiate the numerator of R[1] with respect to the symmetric delta[3], I get 12 terms (see eqs. (9) and (10)). This is very puzzling given that the system is symmetric in the R's and delta's. I would really appreciate any help in this matter.

@Carl Love I was able to perform the modification for the double sum myself (it was in fact trivial). I do experience, however, a major issue/puzzle with simplifying an equation under a parameter constraint, which I explain in a different thread (since I was advised against splitting off the thread, I just include the link and kindly ask you to have a look at it): https://www.mapleprimes.com/questions/230689-How-Do-I-Solve-Symbolic-Linear-Equations-#comment274341

Thank you very much in advance!

Dear @acer, really sorry for bugging you again... I was just wondering whether you happen to have a spare moment to look into my problem. I've been tinkering with it for a while now and have encountered the puzzle described above (which may have something to do with how Maple simplifies equations under a parameter constraint), which, unfortunately, I cannot resolve on my own. I'd really appreciate any help in this matter. Thank you.

@vv I have previously checked that the "simplification wrt size" does not explain the puzzle (see Puzzle_3.mw, where I have eliminated the "size" option inside the simplify command and still get identical results). Neither does your mini-example (which is indeed not puzzling), as there are no (1+epsilon)^a terms in my example. Hence the puzzle remains, and I'd appreciate any help with resolving it. Thanks.

@vv The reason to expect qualitatively similar results/patterns is the fact that all R's and delta's are isomorphic/symmetric. My specific question is: what explains that, say, epsilon shows up 232 times in the numerator of the derivative of A with respect to a delta, vs. 151 times in the numerator of the derivative of B with respect to a symmetric delta?

Thanks @vv. Let me elaborate on a couple of issues from your response and clarify my question:

1. In my (little) experience with this forum, reducing my actual problem to a mini-example, which, albeit contributing to better understanding, does not solve the problem at hand, has proven rather counter-productive (I have to pose my question multiple times, which wastes people's time). So please let me stay with my actual problem.

2. In the attached Puzzle_2.mw, I have further reduced the number of indices to the bare minimum, which allows me to tackle the actual problem. The structure of the problem is the same as before and the goal is to express in a compact form the analytical (as opposed to numerical) solution of the numerator of the following function for any given J:

where R_m is the free and unconstrained variable that is set to zero. Please also note that, given the constraints (j≠i, m≠j, m≠i, and k≠i), the smallest system size for which I can show the puzzle is J=4 (which I used in this example).

3. I understand that the choice of the unconstrained variable (say, R_3=0 vs. R_4=0) will lead to quantitatively different R_1's in A and B. I'm still puzzled, however, why the pattern changes qualitatively depending on the choice of the scaling factor (given that the two alternative scaling factors enter the system isomorphically).

For instance, I used numboccur(..., epsilon) to count the number of occurrences of epsilon in the derivative of A and the derivative of B. It turns out that epsilon occurs 232 times in the former and only 151 times in the latter case (see the attached Puzzle_2). This is puzzling, no? The only possible explanation I can come up with is that, despite the fact that A and B are isomorphic, for whatever reason Maple uses the constraint differently in the two cases. But if that is the case, I'm not sure how to reach my goal, given that, qualitatively, the solution patterns provided by Maple vary (arbitrarily?) case by case.
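To illustrate what I mean by "counting occurrences" (a toy expression, not my actual derivative):

```maple
# numboccur counts how many times a subexpression occurs in an expression:
expr := epsilon^2 + 3*epsilon*delta[2] + delta[2]^2:
numboccur(expr, epsilon);
```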

Thank you very much for your continuous help and not giving up on my problem!

Dear @acer and @vv,

Sorry for bugging you again. I was just wondering whether you were able to look into my puzzle above and might be able to resolve it. I really appreciate your help.

Dear @acer and @vv,

I’m still tinkering with this problem and have encountered a puzzle related to your codes, which, unfortunately, I couldn’t resolve myself (see Puzzle.mw):

Please note that, while the original system and the constraint remain the same, I have extended my example from J=3 to J=4 (i.e., in addition to R[1,1], R[2,1], R[3,1], we now also have R[4,1]). Now, using your codes, I solve this system in two different ways:

1. The function A in the attached Puzzle.mw designates R[3,1] as a free and unconstrained variable, and

2. The function B further below designates R[4,1] as a free and unconstrained variable.

In each case, I set the unconstrained variable equal to zero (i.e., R[3,1]=0 in case 1 and R[4,1]=0 in case 2) and set the following parameters to zero: {delta[1, 1] = 0, delta[3, 1] = 0, delta[4, 1] = 0}.

After substituting the above values, in each case I differentiate R[1,1] with respect to delta[2, 1], which yields equation (2) in case 1 and equation (3) in case 2. It turns out that these two equations are qualitatively different, in the sense that equation (2) has a larger number of polynomial terms, and the terms themselves differ qualitatively (e.g., equation (3) contains terms with only 2 multiplicatively connected pi's, whereas the smallest number of multiplicatively connected pi's in equation (2) is larger). This is puzzling to me: how come the choice of the unconstrained variable, everything else equal (and the system being fully "symmetric"), qualitatively affects the results?
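Schematically, the two branches are as follows (A and B stand for the solved expressions for R[1,1] from Puzzle.mw; the names here are just placeholders for what the worksheet computes):

```maple
# Common parameter substitutions:
vals := {delta[1, 1] = 0, delta[3, 1] = 0, delta[4, 1] = 0}:
# Case 1: R[3,1] is the free variable, set to zero before differentiating:
eq2 := diff(eval(A, vals union {R[3, 1] = 0}), delta[2, 1]):
# Case 2: R[4,1] is the free variable, set to zero before differentiating:
eq3 := diff(eval(B, vals union {R[4, 1] = 0}), delta[2, 1]):
```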

Thank you very much in advance!

@vv I found the mistake from my previous post (and sincerely apologize for the confusion): I missed the fact that the matrix is already defined as the identity matrix minus some other matrix K, so with E := simplify(MatrixInverse(I - A), size) I was effectively calculating the inverse of (I - (I - K)). Once I correct this mistake, I do get the expected result of "0/0", which is identical to solving the linear system under the constraint. (So there is no difference/disconnect between the two approaches.)

Yet, I do have a specific follow-up question regarding the calculation of the Moore-Penrose inverse in Maple: Once I incorporate the constraints into the original system using sys := simplify({eq1, eq2, eq3}, Constraints) and calculate the matrix inverse using: 

Rs := indets(sys, specindex(R))

(A, B) := LinearAlgebra:-GenerateMatrix(sys, Rs)

E := simplify(MatrixInverse(A), size)

I get the error message Error, (in LinearAlgebra:-MatrixInverse) singular matrix (which, I believe, is what you were referring to in your previous post with "the matrix of your system is singular if the relations are considered"). I'm still wondering: how do I calculate in Maple the Moore-Penrose inverse of this singular matrix? Is it:

LinearAlgebra[MatrixInverse](A, method=pseudo) ?

Many thanks in advance.

P.S. I've been trying to calculate the pseudoinverse of a 3x3 matrix using the above command, and it has been running (on a 64GB machine) for more than 5 hours -- is this normal, or am I doing something wrong?

@vv I appreciate and understand your simple example. 

However, coming back to my original example/approach, I'm still not sure why a) Maple seems to deliver a different solution with the matrix-inverse compared to the solve-solution, and b) once I utilize the constraint, I no longer get the division by zero with the matrix-inverse solution (perhaps the answer to both questions is that I made a mistake when calculating the matrix-inverse, but, unfortunately, I couldn't identify it myself after multiple attempts). 

Also, just to reiterate my last (conceptual) question from the previous post: can't one use the Moore-Penrose inverse to calculate the inverse of a linear system of equations that is homogeneous of degree 1?

@vv @acer @Carl Love et al.

I'm experiencing a disconnect between the simple solve and the 'matrix inverse' solution and was wondering whether you could resolve it:

Just to remind you, my problem is a homogeneous linear system (of degree 1) of the following form:

R[1,1]=f(R[1,1], R[2,1], R[3,1], Parameters),

R[2,1]=g(R[1,1], R[2,1], R[3,1], Parameters),

R[3,1]=h(R[1,1], R[2,1], R[3,1], Parameters),

with the auxiliary restriction (constraint) on Parameters explicitly stated above. 

As mentioned in my original Example.mw, if I solve the above system for the unknowns {R[1,1], R[2,1], R[3,1]} using Maple's solve command and then utilize the constraint for any of the solutions, I get the error message "Error, (in simplify/siderels:-Recurse) indeterminate expression of the form 0/0" (which results from the fact that the system is homogeneous of degree 1, and, hence, each R is defined only up to scale).
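The sequence that produces the error is, in outline (eq1..eq3 and Constraints as defined in Example.mw):

```maple
# Solve the homogeneous system symbolically:
sol := solve({eq1, eq2, eq3}, {R[1, 1], R[2, 1], R[3, 1]}):
# Simplifying any solution under the side relations triggers the 0/0 error:
simplify(eval(R[1, 1], sol), Constraints);
```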

As stated in my latest post, I then tried to solve this system by inverting the matrix as follows (see Moore-Penrose.mw):

R=E*Parameters,

where E = (I-A)^{-1} is the (3x3) matrix inverse calculated using E := simplify(MatrixInverse(I - A), size), and Parameters is the 3x1 vector of parameters. Now, when I calculate F := E[1,1]*B[1] + E[1,2]*B[2] + E[1,3]*B[3], that should give me R[1,1] as a function of parameters only, right? It turns out, however, that the solution for F:

a) seems to be different from the solution for R[1,1] that I get from using the solve-command, and

b) once I utilize the constraint, no longer yields the error message ("...indeterminate expression of the form 0/0").

Hence, I was wondering: what am I missing/doing wrong here?

Perhaps more broadly, I thought that, if a linear system of equations is homogeneous (of degree 1) and, hence, non-invertible, one can still calculate the Moore-Penrose inverse. I was thus wondering: how can I utilize the constraint in the original system so that Maple realizes the system is non-invertible and calculates the Moore-Penrose (instead of the regular) inverse?

I really appreciate your help (and sorry again for the lengthy post...).

Dear @acer, thanks for taking your time to respond -- fully understandable. 

@vv Sorry for asking stupid questions (as previously mentioned, I am not a mathematician -- something for which I cannot apologize often enough). I appreciate your advice regarding lengthy names -- I will make sure to replace them with simpler variables in the future. In fact, I have done so in the .mw document posted below by removing all "d logs", which originally denoted percentage changes of the variables R and delta.

I have a quick follow-up question: Couldn't one use the Moore-Penrose (pseudo)inverse to express R's as a function of delta (without a scaling factor/numeraire):

In this updated document (Moore_Penrose.mw), I've tried to do so using the following code, parts of which I learned from @Carl Love in a different question/thread (thanks Carl!):

sys := [eq1, eq2, eq3];

Rs := indets(sys, specindex(R));

(A, B) := LinearAlgebra:-GenerateMatrix(sys, Rs);

with(LinearAlgebra):

C := Matrix(3, 3, [[1, 0, 0], [0, 1, 0], [0, 0, 1]]):

E := simplify(MatrixInverse(C - A), size)

Question to a question: does E := simplify(MatrixInverse(C - A), size) calculate the Moore-Penrose inverse? I assume so according to this Maple help page (which says: "compute the inverse of a square Matrix or the Moore-Penrose pseudo-inverse of a Matrix"), but just wanted to make sure. I also found another Maple help page, which calls for different code: F := LinearAlgebra[MatrixInverse](C - A, method = pseudo). However, the latter seems to take too long to calculate, hence I wasn't sure whether it is correct.
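My current understanding (which I'd be grateful to have confirmed) is that, for a square Matrix, MatrixInverse attempts the ordinary inverse and errors on a singular input, so the pseudoinverse must be requested explicitly, e.g.:

```maple
with(LinearAlgebra):
M := Matrix([[1, 2], [2, 4]]):   # a singular 2x2 example
# MatrixInverse(M) would raise "singular matrix"; this requests the pseudoinverse:
MatrixInverse(M, method = pseudo);
```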

Lastly, I've tried to evaluate the denominator (determinant) of (any entry of) the above matrix using the constraint, and it no longer yields the "Error, (in simplify/siderels:-Recurse) indeterminate expression of the form 0/0" that I was getting when solving the homogeneous system with the plain solve() command. Could anyone with experience with the Moore-Penrose inverse please confirm whether what I'm doing makes sense? Thank you very much in advance!

 
