Automata learning without counterexamples

In Angluin's automata learning framework, a student aims to learn a regular language $L \subseteq \Sigma^*$ by asking two types of questions to his teacher:



Word queries: given $w \in \Sigma^*$, is $w \in L$?



Equivalence queries: given a language $K \subseteq \Sigma^*$, is $K = L$? If not, the teacher gives a counterexample, i.e. a word $w \in (K \setminus L) \cup (L \setminus K)$.



Using Angluin's algorithm, the student learns $L$ with a number of queries polynomial in the number of states of the minimal DFA of $L$ and in the size of the counterexamples.
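For concreteness, the two query types can be sketched as a minimal teacher interface. This is a hypothetical illustration, not code from Angluin's paper: the class and method names are invented, and equivalence is only checked up to a length bound as a stand-in for a real automata-equivalence check.

```python
import itertools

class Teacher:
    """Teacher for a target regular language L, given as a membership
    predicate. Equivalence is approximated by comparing the hypothesis
    with L on all words up to a fixed length bound."""

    def __init__(self, target, alphabet, max_len):
        self.target = target      # target(w) -> bool: is w in L?
        self.alphabet = alphabet
        self.max_len = max_len    # bound used to approximate equivalence

    def word_query(self, w):
        """Word (membership) query: is w in L?"""
        return self.target(w)

    def equivalence_query(self, hypothesis):
        """Equivalence query: returns (True, None) if the hypothesis agrees
        with L on all words up to max_len, else (False, counterexample)."""
        for n in range(self.max_len + 1):
            for letters in itertools.product(self.alphabet, repeat=n):
                w = "".join(letters)
                if self.target(w) != hypothesis(w):
                    return False, w
        return True, None

# Example target: words over {a, b} with an even number of a's
teacher = Teacher(lambda w: w.count("a") % 2 == 0, "ab", max_len=6)
print(teacher.word_query("aba"))                  # True (two a's)
print(teacher.equivalence_query(lambda w: True))  # (False, 'a')
```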



Now consider a restricted scenario in which the teacher no longer gives counterexamples: an equivalence query is answered only YES or NO. Is it still possible to learn $L$ with a polynomial number of queries? I conjecture that it is not, because for every polynomial-length sequence of queries and answers one can find several regular languages consistent with the answers.



Does anyone see how to prove this?










      automata-theory regular-language






      asked 4 hours ago









      user49692

      391




1 Answer






Consider password automata: for each $w \in \{0,1\}^n$, the DFA $M_w$ accepts the language $\{w\}$. In this case, a membership query is the same as an equivalence query, and clearly you will need exponentially many of these to find the "needle in the haystack". (This holds even if the learner knows in advance that the target automaton is of this form.) For a formal proof, a teacher could simply respond to the first $2^n - 1$ membership/equivalence queries with NO, adaptively choosing the target automaton as the unique remaining possibility.
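This adversarial counting argument can be simulated directly. The sketch below is hypothetical (the function name is illustrative): the teacher answers NO while more than one password remains consistent with its answers, so each query eliminates exactly one candidate and $2^n - 1$ queries are needed before the target is pinned down.

```python
import itertools

def queries_needed(n):
    """Simulate a learner probing the passwords in {0,1}^n one by one
    against an adversarial teacher that answers NO whenever more than
    one candidate password is still consistent with its answers."""
    candidates = set(itertools.product("01", repeat=n))
    queries = 0
    for w in sorted(candidates):
        if len(candidates) == 1:
            break  # target is now determined; no further query needed
        queries += 1           # one membership/equivalence query on w
        candidates.discard(w)  # NO answer eliminates exactly this candidate
    return queries

print(queries_needed(4))  # 15 = 2**4 - 1
```

Since a NO answer rules out at most one password, no ordering of the queries can do better than this bound.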



          Update. I neglected to mention that the password automaton has $n+1$ states and is learnable in $n+1$ usual Angluin queries.






                edited 40 mins ago

























                answered 3 hours ago









                Aryeh
