What does calculating the inverse of a matrix mean?

Assume I have 3 equations, $x+2y+z=2$, $3x+8y+z=12$, $4y+z=2$, which can be represented in matrix form ($Ax = b$) like this:

$$\begin{pmatrix}
1 & 2 & 1\\
3 & 8 & 1\\
0 & 4 & 1
\end{pmatrix}\begin{pmatrix}
x\\
y\\
z
\end{pmatrix} = \begin{pmatrix}
2\\
12\\
2
\end{pmatrix}$$

Then the inverse of $A$, $A^{-1}$, would be:

$$\begin{pmatrix}
2/5 & 1/5 & -3/5\\
-3/10 & 1/10 & 1/5\\
6/5 & -2/5 & 1/5
\end{pmatrix}$$

So, my question is: what does this even mean? We know that $A$ is a coefficient matrix representing the 3 equations above, so what does $A^{-1}$ mean with respect to these 3 equations? What I have done to the 3 equations is exactly my question.

Please note that I understand very well how to find the inverse of a matrix; I just don't understand the intuition of what's happening, and the meaning of the manipulations I am applying to the equations when they are in matrix form.
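A quick numerical check of the matrices in the question (a minimal sketch assuming NumPy is available; not part of the original post) confirms that the stated inverse is correct and that applying it to $b$ produces the solution of the system:

```python
import numpy as np

# Coefficient matrix A and right-hand side b from the system above.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

A_inv = np.linalg.inv(A)   # matches the matrix of fifths and tenths above
x = A_inv @ b              # applying the inverse solves A x = b

print(np.round(A_inv, 3))
print(x)                   # the solution vector (x, y, z)
```

Substituting the resulting $(x, y, z)$ back into the three original equations satisfies all of them.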










  • If $Ax=b$ then $A^{-1}Ax=A^{-1}b$ $\Leftrightarrow$ $Ix=A^{-1}b$ $\Leftrightarrow$ $x=A^{-1}b$.
    – A.Γ.
    3 hours ago










  • @A.Γ. That's not what I am asking.
    – Eyad H.
    3 hours ago










  • To understand $A^{-1}$, forget about the system of equations for a moment and just think about $A$. The inverse of a square matrix $A$ is the matrix (denoted $A^{-1}$) which has the property that $AA^{-1} = I$, where $I$ is the identity matrix. This is analogous to the fact that the inverse of a number $a$ is the number (denoted $a^{-1}$) such that $aa^{-1} = 1$.
    – littleO
    1 hour ago















linear-algebra






asked 3 hours ago by Eyad H.











5 Answers

















Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,\dots,X_n$, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,\dots,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ is the coefficient matrix of the original system of equations after the new variables have been substituted in. If you work this out for the case $n=2$, it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).

Now, what all this tells you is the following: if you have $A$ and you found that $B=A^{-1}$ is its inverse, then introducing new variables $Y_1,\dots,Y_n$, expressing the $X_i$ in terms of those by reading off the coefficients in the inverse matrix $B$, and substituting these variables into the original system will result in a very, very simple system. Namely, the coefficients after substituting will be the coefficients in $AB=I$. This is the simplest system in the world. So, finding the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which makes the system of equations particularly nice.

Again, this holds true for all systems, not just $n\times n$.

– Ittay Weiss
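This change-of-variables reading can be checked numerically (a minimal sketch assuming NumPy; the matrix is the one from the question): substituting $x = By$ with $B=A^{-1}$ turns the coefficient matrix into $AB = I$.

```python
import numpy as np

# A is the coefficient matrix of the question's system.
A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
B = np.linalg.inv(A)

# Substituting x = B y into A x = b yields (A B) y = b,
# and A B is the identity: the new system simply reads y_i = b_i.
AB = A @ B
print(np.round(AB, 12))
```

In the new coordinates the system is already solved, which is exactly the sense in which the inverse "undoes" the coefficient matrix.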






Let us organize our equations in this way:

$$x\begin{bmatrix} 1 \\ 3 \\ 0 \end{bmatrix}
+ y\begin{bmatrix} 2 \\ 8 \\ 4 \end{bmatrix}
+ z\begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}
= \begin{bmatrix} 2 \\ 12 \\ 2 \end{bmatrix}$$

After applying the inverse we get:

$$2\begin{bmatrix} 2/5 \\ -3/10 \\ 6/5 \end{bmatrix}
+ 12\begin{bmatrix} 1/5 \\ 1/10 \\ -2/5 \end{bmatrix}
+ 2\begin{bmatrix} -3/5 \\ 1/5 \\ 1/5 \end{bmatrix}
= \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

Initially, on the right side we had $[2\ 12\ 2]^T$. Using the inverse, we instead place $[x\ y\ z]^T$ on the right side.

Another way is to think of $[2\ 12\ 2]^T$ as a point represented using the three vectors $[1\ 3\ 0]^T$, $[2\ 8\ 4]^T$, $[1\ 1\ 1]^T$, with $x,y,z$ as the scaling factors. Now we have expressed the same point as $[x\ y\ z]^T$ using the column vectors of $A^{-1}$.

– nature1729
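This column-combination picture can likewise be verified numerically (a minimal sketch assuming NumPy): the weights $x,y,z$ combine the columns of $A$ into $b$, and the weights $2,12,2$ taken from $b$ combine the columns of $A^{-1}$ into the solution.

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
Ainv = np.linalg.inv(A)
b = np.array([2.0, 12.0, 2.0])
sol = np.linalg.solve(A, b)          # the (x, y, z) of the system

# b is the columns of A weighted by (x, y, z)...
lhs = sol[0] * A[:, 0] + sol[1] * A[:, 1] + sol[2] * A[:, 2]
assert np.allclose(lhs, b)

# ...and (x, y, z) is the columns of A^{-1} weighted by (2, 12, 2).
combo = b[0] * Ainv[:, 0] + b[1] * Ainv[:, 1] + b[2] * Ainv[:, 2]
print(combo)
```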






If you think of the matrix as representing a function from $\mathbb{R}^3$ to itself, maybe that will help. The inverse could then be thought of as the inverse of that function, in the same manner that you would invert, say, $f(x)=x^3+5$. The question you're answering when multiplying the right-hand side of the equation by the inverse is: "what vector do I put into my function $A(x)$ so that I get out the RHS?"






      • While this perspective is true, it does not quite answer the OP's question, since it shifts the focus away from the equations.
        – Ittay Weiss
        3 hours ago

















A matrix is just an array filled with numbers. But you have learnt how to "multiply" two matrices in order to get a third one. Getting the inverse of a matrix $A$ is just finding another matrix, named $A^{-1}$, such that $$A^{-1}A=\mathrm{id}.$$
Now it happens that this multiplication rule, which may seem abstract and nonsensical, is precisely defined to respect solving a system of equations. Namely, if you write your system as you did, $A\bar x=\bar y$, then it is equivalent to the system $BA\bar x=B\bar y$ for any invertible matrix $B$, and choosing $B=A^{-1}$ gives you the system $\bar x=A^{-1}\bar y$, because of the (good) way multiplication is defined (associative, etc.). This is precisely what you wanted to know: $\bar x$ expressed in terms of $\bar y$.
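The equivalence $A\bar x=\bar y \Leftrightarrow BA\bar x=B\bar y$ for invertible $B$ can be illustrated numerically (a minimal sketch assuming NumPy; the random choice of $B$ is my own illustration, not from the answer):

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

# Any invertible B yields an equivalent system B A x = B b
# with the same solution set.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))      # almost surely invertible

x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(B @ A, B @ b)
assert np.allclose(x1, x2)

# Choosing B = A^{-1} reduces the system to x = A^{-1} b directly.
x3 = np.linalg.inv(A) @ b
print(x1)
```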






It depends on the method you use to calculate the inverse. For instance, suppose you use the LU decomposition. Then given the matrix $A$ we have

$$ \underbrace{L_{m-1} \cdots L_2 L_1}_{L^{-1}} A = U \tag{1} $$

that is, $A = LU$ is composed of a lower and an upper triangular matrix. The lower triangular $L$ is a record used to keep track of eliminating the entries to produce the matrix $U$, the (row) echelon form: its entries are simply the ratios used between rows. When you take the inverse of either factor you end up with a lower or upper triangular matrix again. In either event, this means that

$$ A = LU \implies A^{-1}A = (LU)^{-1}(LU) = U^{-1}L^{-1}LU = I \tag{2} $$

Technically, if you are attempting to find the solution vector, you use two steps: forward substitution and back substitution.

If we have the $Ax=b$ problem, we have

$$ LUx=b \tag{3} $$

so we get two problems. First,

$$ Ly = b \tag{4} $$

is solved by forward substitution:

$$ y_1 = \frac{b_1}{l_{11}} \tag{5} $$
$$ y_i = \frac{1}{l_{ii}} \bigg( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \bigg) \tag{6} $$

Then

$$ Ux = y \tag{7} $$

is solved by back substitution:

$$ x_i = \frac{1}{u_{ii}} \bigg( y_i - \sum_{j=i+1}^{N} u_{ij} x_j \bigg) \tag{8} $$

The intuition depends on the matrix decomposition.
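The substitution formulas (5)–(8) can be sketched in code. The LU routine below omits pivoting for brevity (it assumes nonzero pivots, which holds for the question's matrix), so it is a teaching sketch rather than a production implementation:

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting (assumes nonzero pivots)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # row ratio, recorded in L
            U[i, k:] -= L[i, k] * U[k, k:]   # eliminate entry below the pivot
    return L, U

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L, as in (5)-(6)."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_sub(U, y):
    """Solve U x = y for upper-triangular U, as in (8)."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

L, U = lu_nopivot(A)
x = back_sub(U, forward_sub(L, b))   # solve L y = b, then U x = y
print(x)
```

Note that the solution is obtained without ever forming $A^{-1}$ explicitly, which is why solvers prefer the two triangular substitutions over computing the inverse.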






          5 Answers
          5






          active

          oldest

          votes








          5 Answers
          5






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes








          up vote
          1
          down vote













          Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,dots ,X_n $, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,dots ,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ corresponds to the matrix of coefficients of the original system of equations after substituting the new variables in. If you work this out for the case $n=2$ it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).



          Now, what all this tells you is that if you have $A$ and you found that $B=A^-1$ is its inverse, then if you introduce new variables $Y_1,dots , Y_n$ and express the $X_i$ in terms of those by reading the coefficient in the inverse matrix $B$, then substituting these variables into the original system will result in a very very simple system. Namely, the coefficient after substituting will be the coefficients in $AB=I$. This is the simplest system in the world. So, find the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which make the system of equations particularly nice.



          Again, this holds true for all systems, not just $ntimes n$.






          share|cite|improve this answer
























            up vote
            1
            down vote













            Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,dots ,X_n $, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,dots ,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ corresponds to the matrix of coefficients of the original system of equations after substituting the new variables in. If you work this out for the case $n=2$ it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).



            Now, what all this tells you is that if you have $A$ and you found that $B=A^-1$ is its inverse, then if you introduce new variables $Y_1,dots , Y_n$ and express the $X_i$ in terms of those by reading the coefficient in the inverse matrix $B$, then substituting these variables into the original system will result in a very very simple system. Namely, the coefficient after substituting will be the coefficients in $AB=I$. This is the simplest system in the world. So, find the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which make the system of equations particularly nice.



            Again, this holds true for all systems, not just $ntimes n$.






            share|cite|improve this answer






















              up vote
              1
              down vote










              up vote
              1
              down vote









              Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,dots ,X_n $, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,dots ,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ corresponds to the matrix of coefficients of the original system of equations after substituting the new variables in. If you work this out for the case $n=2$ it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).



              Now, what all this tells you is that if you have $A$ and you found that $B=A^-1$ is its inverse, then if you introduce new variables $Y_1,dots , Y_n$ and express the $X_i$ in terms of those by reading the coefficient in the inverse matrix $B$, then substituting these variables into the original system will result in a very very simple system. Namely, the coefficient after substituting will be the coefficients in $AB=I$. This is the simplest system in the world. So, find the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which make the system of equations particularly nice.



              Again, this holds true for all systems, not just $ntimes n$.






              share|cite|improve this answer












              Matrix multiplication corresponds to substituting new variables for the given ones in the system of linear equations. In more detail, for a system of $n$ equations in $n$ unknowns $X_1,dots ,X_n $, suppose that $A$ represents the system of equations. Suppose now that you introduce new variables $Y_1,dots ,Y_n$ and you express each $X_i$ as a linear combination of the new variables. If you write $B$ for the matrix of coefficients of the $X_i$ represented as combinations of the $Y_i$, then the matrix $AB$ corresponds to the matrix of coefficients of the original system of equations after substituting the new variables in. If you work this out for the case $n=2$ it's easy to see what is going on. This in fact is one way to motivate the definition of matrix multiplication (in general, not just for square matrices).



              Now, what all this tells you is that if you have $A$ and you found that $B=A^-1$ is its inverse, then if you introduce new variables $Y_1,dots , Y_n$ and express the $X_i$ in terms of those by reading the coefficient in the inverse matrix $B$, then substituting these variables into the original system will result in a very very simple system. Namely, the coefficient after substituting will be the coefficients in $AB=I$. This is the simplest system in the world. So, find the inverse of a matrix is equivalent to finding a change of coordinates, from the $X_i$'s to the $Y_i$'s, which make the system of equations particularly nice.



              Again, this holds true for all systems, not just $ntimes n$.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered 3 hours ago









              Ittay Weiss

              62.4k699181




              62.4k699181




















                  up vote
                  1
                  down vote













                  Let us organize our equations in this way:



                  $xbegin bmatrix
                  1 \
                  3 \
                  vdots \
                  0
                  endbmatrix$

                  + $ybegin bmatrix
                  2 \
                  8 \
                  vdots \
                  4
                  endbmatrix$

                  + $zbegin bmatrix
                  1 \
                  1 \
                  vdots \
                  1
                  endbmatrix$

                  = $begin bmatrix
                  2 \
                  12 \
                  vdots \
                  2
                  endbmatrix$



                  When you have done inverse we got:



                  $2begin bmatrix
                  2/5 \
                  -3/10 \
                  vdots \
                  6/5
                  endbmatrix$

                  + $12begin bmatrix
                  1/5 \
                  1/10 \
                  vdots \
                  2/5
                  endbmatrix$

                  + $2begin bmatrix
                  -3/5 \
                  1/5 \
                  vdots \
                  1/5
                  endbmatrix$

                  = $begin bmatrix
                  x \
                  y \
                  vdots \
                  z
                  endbmatrix$



                  Initially on right side we had $[2 12 2 ]^T$. Using inverse, we want to place $[x y z ]^T$ on right side.



                  An another way is to think $[2 12 2 ]^T$ as a point represented using three vectors $[1 3 0 ]^T$, $[2 8 4 ]^T$, $[1 1 1 ]^T$. $x,y,z$ was the scaling factors. Now we transformed same point into $[x y z ]^T$ using vectors associated with column vectors of $A^-1$.






                  share|cite|improve this answer
























                    up vote
                    1
                    down vote













                    Let us organize our equations in this way:



                    $xbegin bmatrix
                    1 \
                    3 \
                    vdots \
                    0
                    endbmatrix$

                    + $ybegin bmatrix
                    2 \
                    8 \
                    vdots \
                    4
                    endbmatrix$

                    + $zbegin bmatrix
                    1 \
                    1 \
                    vdots \
                    1
                    endbmatrix$

                    = $begin bmatrix
                    2 \
                    12 \
                    vdots \
                    2
                    endbmatrix$



                    When you have done inverse we got:



                    $2begin bmatrix
                    2/5 \
                    -3/10 \
                    vdots \
                    6/5
                    endbmatrix$

                    + $12begin bmatrix
                    1/5 \
                    1/10 \
                    vdots \
                    2/5
                    endbmatrix$

                    + $2begin bmatrix
                    -3/5 \
                    1/5 \
                    vdots \
                    1/5
                    endbmatrix$

                    = $begin bmatrix
                    x \
                    y \
                    vdots \
                    z
                    endbmatrix$



                    Initially on right side we had $[2 12 2 ]^T$. Using inverse, we want to place $[x y z ]^T$ on right side.



                    An another way is to think $[2 12 2 ]^T$ as a point represented using three vectors $[1 3 0 ]^T$, $[2 8 4 ]^T$, $[1 1 1 ]^T$. $x,y,z$ was the scaling factors. Now we transformed same point into $[x y z ]^T$ using vectors associated with column vectors of $A^-1$.






                    share|cite|improve this answer






















                      up vote
                      1
                      down vote










                      up vote
                      1
                      down vote









                      Let us organize our equations in this way:



                      $xbegin bmatrix
                      1 \
                      3 \
                      vdots \
                      0
                      endbmatrix$

                      + $ybegin bmatrix
                      2 \
                      8 \
                      vdots \
                      4
                      endbmatrix$

                      + $zbegin bmatrix
                      1 \
                      1 \
                      vdots \
                      1
                      endbmatrix$

                      = $begin bmatrix
                      2 \
                      12 \
                      vdots \
                      2
                      endbmatrix$



                      When you have done inverse we got:



                      $2begin bmatrix
                      2/5 \
                      -3/10 \
                      vdots \
                      6/5
                      endbmatrix$

                      + $12begin bmatrix
                      1/5 \
                      1/10 \
                      vdots \
                      2/5
                      endbmatrix$

                      + $2begin bmatrix
                      -3/5 \
                      1/5 \
                      vdots \
                      1/5
                      endbmatrix$

                      = $begin bmatrix
                      x \
                      y \
                      vdots \
                      z
                      endbmatrix$



                      Initially on right side we had $[2 12 2 ]^T$. Using inverse, we want to place $[x y z ]^T$ on right side.



                      An another way is to think $[2 12 2 ]^T$ as a point represented using three vectors $[1 3 0 ]^T$, $[2 8 4 ]^T$, $[1 1 1 ]^T$. $x,y,z$ was the scaling factors. Now we transformed same point into $[x y z ]^T$ using vectors associated with column vectors of $A^-1$.






                      share|cite|improve this answer












                      Let us organize our equations in this way:



                      $xbegin bmatrix
                      1 \
                      3 \
                      vdots \
                      0
                      endbmatrix$

                      + $ybegin bmatrix
                      2 \
                      8 \
                      vdots \
                      4
                      endbmatrix$

                      + $zbegin bmatrix
                      1 \
                      1 \
                      vdots \
                      1
                      endbmatrix$

                      = $begin bmatrix
                      2 \
                      12 \
                      vdots \
                      2
                      endbmatrix$



                      When you have done inverse we got:



                      $2begin bmatrix
                      2/5 \
                      -3/10 \
                      vdots \
                      6/5
                      endbmatrix$

                      + $12begin bmatrix
                      1/5 \
                      1/10 \
                      vdots \
                      2/5
                      endbmatrix$

                      + $2begin bmatrix
                      -3/5 \
                      1/5 \
                      vdots \
                      1/5
                      endbmatrix$

                      = $begin bmatrix
                      x \
                      y \
                      vdots \
                      z
                      endbmatrix$



                      Initially on right side we had $[2 12 2 ]^T$. Using inverse, we want to place $[x y z ]^T$ on right side.



                      An another way is to think $[2 12 2 ]^T$ as a point represented using three vectors $[1 3 0 ]^T$, $[2 8 4 ]^T$, $[1 1 1 ]^T$. $x,y,z$ was the scaling factors. Now we transformed same point into $[x y z ]^T$ using vectors associated with column vectors of $A^-1$.







                      share|cite|improve this answer












                      share|cite|improve this answer



                      share|cite|improve this answer










                      answered 1 hour ago









                      nature1729

                      43338




                      43338




















                          up vote
                          0
                          down vote













                          If you think of the matrix representation as a function from $mathbbR^3$ to itself, maybe that will help? The inverse could then be thought of as the inverse of the function, in the same manner that you would invert, say, $f(x)=x^3+5$. The question you're answering when multiplying the right hand side of the equation with the inverse is "what vector do I put in to my function A(x) so that I get out the RHS".






                          • While this perspective is true it does not quite answer OP's question since it changes the focus away from the equations.
                            – Ittay Weiss
                            3 hours ago














                          answered 3 hours ago









                          edo

                          A matrix is just an array filled with numbers, but you have learnt how to "multiply" two matrices to get a third one. Finding the inverse of a matrix $A$ is just finding another matrix, named $A^{-1}$, such that $$A^{-1}A=I.$$
                          Now it happens that this multiplication rule, which may seem abstract and arbitrary, is defined precisely so that it respects solving systems of equations. Namely, if you write your system as $A\bar x=\bar y$, as you did, then it is equivalent to the system $BA\bar x=B\bar y$ for any invertible matrix $B$, and choosing $B=A^{-1}$ gives the system $\bar x=A^{-1}\bar y$, thanks to the (good) way multiplication is defined (associativity etc.). That is precisely what you wanted to know: $\bar x$ expressed in terms of $\bar y$.
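Numerically, with NumPy as an assumed tool, choosing $B = A^{-1}$ and multiplying both sides looks like this:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])

B = np.linalg.inv(A)               # choose B = A^{-1}

# The system B A x = B b collapses to x = B b, since B A = I.
assert np.allclose(B @ A, np.eye(3))
x = B @ b
print(x)
```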






                              edited 1 hour ago

























                              answered 2 hours ago









                              Drike

                                  It depends on the method you use to calculate the inverse. For instance, suppose you use the LU decomposition. Then, given the matrix $A$, we have

                                  $$ \underbrace{L_{m-1} \cdots L_2 L_1}_{L^{-1}} A = U \tag{1} $$

                                  that is, $A = LU$ is composed of a lower and an upper triangular matrix. The lower triangular $L$ is a record you keep of the row operations used to eliminate entries and produce the $U$ matrix (the row echelon form); its entries are simply the ratios between rows. The inverse of a lower (upper) triangular matrix is again lower (upper) triangular. In either case we have

                                  $$ A = LU \implies A^{-1}A = (LU)^{-1}(LU) = U^{-1}L^{-1}LU = I. \tag{2}$$

                                  Technically, if you are attempting to find the solution vector, you use two steps: forward substitution and back substitution.

                                  If we have the $Ax=b$ problem, then

                                  $$ LUx=b \tag{3}$$

                                  splits into two problems. First,

                                  $$ Ly = b, \tag{4}$$

                                  solved by forward substitution:

                                  $$ y_1 = \frac{b_1}{l_{11}}, \tag{5} $$
                                  $$ y_i = \frac{1}{l_{ii}} \bigg( b_i - \sum_{j=1}^{i-1} l_{ij} y_j \bigg). \tag{6} $$

                                  Then

                                  $$ Ux = y, \tag{7} $$

                                  solved by back substitution:

                                  $$ x_i = \frac{1}{u_{ii}} \bigg( y_i - \sum_{j=i+1}^{N} u_{ij} x_j \bigg). \tag{8} $$

                                  The intuition depends on the matrix decomposition.
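A small sketch of equations (5)-(8) in Python (no pivoting, so it assumes the factorization exists; `lu_solve` is my own name, not a library routine):

```python
import numpy as np

def lu_solve(A, b):
    """Solve Ax = b: factor A = LU, then forward- and back-substitute."""
    n = len(b)
    L = np.eye(n)
    U = A.astype(float).copy()
    # Record each elimination ratio in L while reducing U to upper triangular.
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    # Forward substitution: Ly = b, equations (5)-(6).
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    # Back substitution: Ux = y, equation (8).
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[1.0, 2.0, 1.0],
              [3.0, 8.0, 1.0],
              [0.0, 4.0, 1.0]])
b = np.array([2.0, 12.0, 2.0])
print(lu_solve(A, b))   # matches np.linalg.solve(A, b)
```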






                                      edited 4 mins ago

























                                      answered 1 hour ago









                                      Ryan Howe
