Why do pushdown automata use a stack?

I'm taking a computer theory class and my professor told us that a pushdown automaton cannot use data structures other than a stack (like a queue or multiple stacks). Why is that?










  • It's the definition. – Raphael♦, 6 hours ago

  • Then why is this the definition? Why not a queue? Is there a reason for it? – LTKills, 4 hours ago

  • Somebody found it interesting. To your point: the "why" is historical. What are you really interested in? – Raphael♦, 4 hours ago















automata pushdown-automata computation-models






asked 9 hours ago by LTKills; edited 4 hours ago by reinierpost

3 Answers
If you replace the stack with a queue or with multiple stacks, the computational power increases. (As you know, a queue can be simulated with two stacks.) An automaton with a queue, or with two stacks, is as powerful as a Turing machine, whereas a pushdown automaton with a single stack is strictly weaker.
You can read more about that here.
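The parenthetical claim that two stacks can simulate a queue is easy to sketch. Below is a minimal illustration (the name `TwoStackQueue` is my own, not from the answer): enqueue pushes onto one stack, and dequeue pops from the other, refilling it in reversed order whenever it runs empty, which restores FIFO order.

```python
class TwoStackQueue:
    """A FIFO queue built from two LIFO stacks (Python lists used as stacks)."""

    def __init__(self):
        self.inbox = []   # receives newly enqueued items
        self.outbox = []  # serves items in FIFO order

    def enqueue(self, x):
        self.inbox.append(x)

    def dequeue(self):
        if not self.outbox:
            # Reversing the inbox turns its LIFO order into FIFO order.
            while self.inbox:
                self.outbox.append(self.inbox.pop())
        if not self.outbox:
            raise IndexError("dequeue from empty queue")
        return self.outbox.pop()
```

Each element is pushed and popped at most twice, so the simulation costs only amortized constant time per operation.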






– OmG, answered 7 hours ago
    If you equip a finite control with different memory models, the resulting classes of automata have very different capabilities. In fact, that's a huge part of what automata theory concerns itself with.



    Just a few examples:



    • finite memory --> regular

    • single stack --> context-free

    • single queue --> Turing-complete

    • two stacks --> Turing-complete

    • infinite tape --> Turing-complete

    • heaps --> we don't really know

    And so on and so on.
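To make the "single stack --> context-free" line concrete, here is a minimal sketch (my own illustration, not part of the original answer) of a deterministic pushdown automaton for the classic context-free language $a^n b^n$, $n \ge 0$. The finite control has two states, and the stack counts the unmatched a's:

```python
def accepts_anbn(w):
    """Simulate a tiny deterministic PDA for { a^n b^n : n >= 0 }."""
    stack = []
    state = "reading_a"  # push phase; switches to "reading_b" on the first 'b'
    for c in w:
        if state == "reading_a" and c == "a":
            stack.append("A")   # push one marker per 'a'
        elif c == "b" and stack:
            state = "reading_b" # after the first 'b', no 'a' may follow
            stack.pop()         # match one pending 'a'
        else:
            return False        # 'a' after 'b', or a 'b' with nothing to match
    return not stack            # accept iff every 'a' was matched
```

A finite automaton cannot recognize this language because it has no way to count the a's; the unbounded stack is exactly what adds that power.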



    Note that you can also modify other elements:



    • The alphabet -- make it infinite, or continuous.

    • The transition function -- make it a relation, or allow $\varepsilon$-transitions.

    • The state set -- make it infinite, or allow multiple starting states.

    • The inputs -- make them infinite, or trees, or timed sequences.

    Interesting things can happen!



    That's the beauty of mathematical definitions: you usually get something. Then you can spend many months finding out what it is you got. Maybe it's interesting, maybe not. Maybe it's useful (a dreaded question after talks!), maybe not. But it is always something.






    – Raphael♦, answered 4 hours ago
      OmG and Raphael have already answered your question:



      • pushdown automata use a stack because they're defined that way

      • if they didn't use a stack, what you'd get is a different type of automaton, with different properties

      So at this point you may ask: but why does my professor present the pushdown automaton, not some other kind of automaton? What makes the pushdown automaton, with a stack, more interesting than an automaton with some other data structure?



      The answer is that pushdown automata can recognize exactly the context-free languages, which were introduced early on in the history of computer science, in the 1950s, as a technique to describe the syntax of languages: both natural languages, such as English or Russian, and computer languages such as programming languages (the first programming language they described was FORTRAN).



      This was a big deal at the time. Cybernetics and behaviorism were all the rage. Computers were starting to be applied commercially. One of the areas they were applied to was language processing. How can we describe language?



      Well, a language consists of utterances (things people say), which consist of sequences of items (words, or sounds). A person who wants to say something emits those utterances; another person receives them and makes sense of them. What if we want to make a computer do the same thing? We need a way to describe a language that can be taught to a computer: a way for a computer to generate valid utterances, and a way for a computer to recognize them - such that the utterances are actually those of a language humans use, such as English or Russian.



      Descriptions of devices that could do such things already existed: there were state machines and various kinds of automata. However, Noam Chomsky, a mathematician who was tasked with teaching a class at MIT on how to build software for automatic translation, realized that state machines aren't powerful enough to describe natural languages such as English, because they cannot describe the phenomenon of recursion in language. He needed something similar, but with the power to describe recursion, and arrived at context-free grammars. He proved that these can describe more languages than state machines can, but fewer than some other techniques - a result known as the Chomsky hierarchy. He hoped they would describe languages such as English and Russian.



      Context-free grammars generate utterances. Chomsky thought they basically described how a person generates sentences when speaking. You also need a device to describe how utterances are recognized. That is what pushdown automata are for. They can recognize exactly the utterances generated by a context-free grammar. So they were thought to describe basically how a human recognizes sentences when listening.



      Soon afterwards, it was discovered that context-free languages aren't actually capable of describing the syntax of languages completely - you need all kinds of additional machinery to do it properly. This is true both for natural languages and artificial languages. (To be honest, I think Chomsky meant to come up with visibly pushdown languages, except he didn't realize the difference, and if he had, it would have saved us all a lot of hassle.)



      However, context-free languages and pushdown automata are mathematically very simple and elegant devices, and the Chomsky hierarchy is a simple and elegant result, so they are very useful in education to explain the basics of computer-based language description and recognition (formal language theory). For this reason, they have continued to be a standard part of the theoretical computer science curriculum, and many techniques used in practice are based on them, so they really are requisite knowledge if you want to study anything related to natural language processing, programming language implementation, and other language-related topics.









                            This was a big deal at the time. Cybernetics and behaviorism were all the rage, and computers were just starting to be applied commercially. One of the areas they were applied to was language processing. How can we describe language?



                            Well, a language consists of utterances (things people say), which in turn consist of sequences of items (words, or sounds). A person who wants to say something emits those utterances; another person receives them and makes sense of them. What if we want to make a computer do the same thing? We need a way to describe a language that can be taught to a computer: a way to generate valid utterances, and a way to recognize them - such that the utterances are actually those of a language humans use, such as English or Russian.



                            Descriptions of devices that could do such things already existed: there were state machines and various kinds of automata. However, Noam Chomsky, a mathematician who was tasked with teaching a class at MIT on how to build software for automatic translation, realized that state machines aren't powerful enough to describe natural languages such as English, because they cannot describe the phenomenon of recursion in language. He needed something similar, but with the power to describe recursion, and arrived at context-free grammars. He proved that these can indeed describe more languages than state machines can, but fewer than some other techniques - a result known as the Chomsky hierarchy. He hoped they would describe languages such as English and Russian.
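                            To make the recursion point concrete, here is a minimal sketch (my own illustration, not from the answer above) of a recursive context-free rule. The grammar S → ( S ) | ε generates parentheses nested to any depth; a finite state machine, having only finitely many states, cannot track unbounded nesting depth:

```python
# Illustrative sketch: the recursive grammar rule  S -> ( S ) | empty.
# Each recursive call expands S once more, producing one deeper level
# of nesting -- exactly the kind of structure a finite state machine,
# with its fixed number of states, cannot keep track of.

def generate(depth: int) -> str:
    """Expand S 'depth' times via S -> ( S ), then finish with S -> empty."""
    if depth == 0:
        return ""                              # S -> empty
    return "(" + generate(depth - 1) + ")"     # S -> ( S )
```

For example, `generate(3)` yields `"((()))"`: three applications of the recursive rule, then the empty expansion.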



                            Context-free grammars generate utterances. Chomsky thought they basically described how a person generates sentences when speaking. You also need a device to describe how utterances are recognized. That is what pushdown automata are for. They can recognize exactly the utterances generated by a context-free grammar. So they were thought to describe basically how a human recognizes sentences when listening.
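                            As a small illustration of that recognition side (again my own sketch, not part of the original answer), here is a pushdown automaton for the balanced-parentheses language generated by the grammar above. The stack is what gives it the power a finite state machine lacks: every open parenthesis is pushed, every close parenthesis pops its match, and the string is accepted exactly when the stack ends up empty:

```python
# Sketch of a pushdown automaton recognizing balanced parentheses,
# a context-free language no finite state machine can recognize.

def pda_balanced(s: str) -> bool:
    """Accept iff s is a balanced string over '(' and ')'."""
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)      # push on every open parenthesis
        elif ch == ')':
            if not stack:         # close with nothing to match: reject
                return False
            stack.pop()           # pop the matching open parenthesis
        else:
            return False          # symbol outside the alphabet
    return not stack              # accept iff the stack is empty
```

So `pda_balanced("(()())")` accepts, while `pda_balanced("(()")` and `pda_balanced(")(")` reject.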



                            Soon afterwards, it was discovered that context-free languages aren't actually capable of describing the syntax of languages completely - you need all kinds of additional machinery to do it properly. This is true both for natural languages and artificial languages. (To be honest, I think Chomsky meant to come up with visibly pushdown languages, except he didn't realize the difference, and if he had, it would have saved us all a lot of hassle.)



                            However, context-free languages and pushdown automata are mathematically very simple and elegant devices, and the Chomsky hierarchy is a simple and elegant result, so they are very useful in education to explain the basics of computer-based language description and recognition (formal language theory). For this reason, they have continued to be a standard part of the theoretical computer science curriculum, and many techniques used in practice are based on them, so they really are requisite knowledge if you want to study anything related to natural language processing, programming language implementation, and other language-related topics.







                            share|cite|improve this answer






















                            answered 2 hours ago









                            reinierpost

