Is there a *simple* proof that the intersection of a CFL and a regular language is a CFL?
I am following a course on complexity theory in which formal languages are part of the material. There is one proof that, no matter how hard I try, is still so complex that I cannot make it even halfway through: namely, the proof of the statement that the intersection of a CFL and a regular language is again a CFL.
The proof provided to us is 2-3 pages of dense text and notation. The ones online are also heavily notation-dependent, and unfortunately Sipser does not cover it in his book Introduction to the Theory of Computation. I'm wondering if anyone knows a straightforward, less notation-heavy proof that would help me understand it, or even reproduce it, because at this moment I don't understand the proof at all.
regular-languages context-free proof-techniques
edited 39 mins ago by Thinh D. Nguyen
asked 1 hour ago by Abdul Malek Altawekji
2 Answers
I don't know of a simple proof, but there is a proof where the underlying idea is simple to understand.
Suppose you have a PDA $N$ (deterministic or nondeterministic; it doesn't matter) which accepts the language $L_N$, and whose set of states is $S$. Further suppose you have a DFA $M$ with $T$ as its set of states which accepts the language $L_M$.
Conceptually, you can construct a new PDA whose set of states is the Cartesian product $S \times T$, and which "runs" both the PDA and the DFA in parallel. A transition from state $(s_i, t_j)$ to state $(s_p, t_q)$ does whatever the PDA does when transitioning from $s_i$ to $s_p$, but also takes the appropriate DFA transition from $t_j$ to $t_q$.
An accept state in this new PDA is any state which corresponds to an accept state in both $M$ and $N$. Then this PDA should, assuming we get the details right, accept the language $L_M \cap L_N$.
There are a lot of details that you need to get right, which makes the proof fairly technical. So it's not a short proof. But it's simple to understand the idea behind it.
If you want to understand how this works, I would suggest trying to write a reasonably formal proof that the intersection of two regular languages is regular using a similar construction.
And as an additional exercise: Why can't you use this technique to prove that the intersection of two context-free languages is context-free?
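The suggested exercise can be sketched in code: run both DFAs in parallel on the same input and accept only when both components accept. The encoding below (a DFA as a triple of start state, accepting states, and a transition dict) is a hypothetical minimal representation chosen for illustration, not anything from the course material.

```python
# Toy DFA encoding (an assumption for illustration): a DFA is a triple
# (start_state, accepting_states, delta) where delta[(state, symbol)] -> state.

def intersect_dfas(d1, d2):
    """Product construction: simulate both DFAs in lockstep on the same input."""
    s1, acc1, delta1 = d1
    s2, acc2, delta2 = d2

    def accepts(word):
        state = (s1, s2)  # a product state is a pair of component states
        for sym in word:
            # each component takes its own step on the same input symbol
            state = (delta1[(state[0], sym)], delta2[(state[1], sym)])
        # accept iff BOTH components end in one of their accept states
        return state[0] in acc1 and state[1] in acc2

    return accepts

# Example: "even number of a's" intersected with "ends in b"
even_a = (0, {0}, {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 0, (1, 'b'): 1})
ends_b = ('x', {'y'}, {('x', 'a'): 'x', ('x', 'b'): 'y',
                       ('y', 'a'): 'x', ('y', 'b'): 'y'})
accepts = intersect_dfas(even_a, ends_b)
```

The PDA version is the same idea: only the PDA component carries a stack, and the DFA component never touches it, which is exactly why the trick fails for two PDAs.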
answered 35 mins ago by Pseudonym
For the additional exercise: it is obviously impossible to merge two stacks, which may differ vastly in length as the two PDAs run.
– Thinh D. Nguyen, 15 mins ago
This exercise may help the OP if he/she already understands what is going on in the proof for DFAs, but it is of no help if he/she has not yet figured out the cross product in your answer, because in that case there is no second stack to add to the picture.
– Thinh D. Nguyen, 12 mins ago
We should imagine the way a pushdown automaton (PDA) works.
A PDA has a read-only head pointing at the input string. At each moment, the head rests at some index of the input. While the PDA runs, the head moves only from left to right, in a single pass, but it may stay at an index to "think" for a while.
With that picture, we can see that there are two kinds of transitions:
Consuming transitions: those that move the input head one step to the right.
Non-consuming transitions: those that keep the input head in place; these only manipulate the stack.
Now, construct the cross product of the PDA $P$ and the DFA $A$ as usual, i.e. take the set of states to be the Cartesian product of the set of states of $P$ and the set of states of $A$.
Then, each consuming transition of $P$ is allowed in the cross-product PDA only when the corresponding transition exists in $A$.
And while the PDA is "thinking" (i.e. performing non-consuming moves), the DFA $A$'s state must remain unchanged.
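The two rules above can be sketched as a small transition-builder for the cross-product PDA. The tuple encoding here is an assumption for illustration, not the answer's notation: a PDA transition is (state, symbol, pop, push, next_state), with symbol = None marking a non-consuming move.

```python
# Hypothetical encodings, for illustration only: a PDA transition is a tuple
# (state, symbol, pop, push, next_state), where symbol=None means a
# non-consuming (epsilon) move; the DFA is a dict delta[(state, symbol)] -> state.

def product_transitions(pda_trans, dfa_delta, dfa_states):
    """Build the transitions of the cross-product PDA described above."""
    out = []
    for (p, a, pop, push, p2) in pda_trans:
        if a is None:
            # non-consuming move: the DFA component must stay put
            for q in dfa_states:
                out.append(((p, q), None, pop, push, (p2, q)))
        else:
            # consuming move: allowed only if the DFA has a transition on the
            # same symbol, in which case the DFA component steps along with it
            for q in dfa_states:
                if (q, a) in dfa_delta:
                    out.append(((p, q), a, pop, push, (p2, dfa_delta[(q, a)])))
    return out

# Example: one consuming and one non-consuming PDA transition, paired with a
# two-state DFA over the symbol 'a'
pda_trans = [('p0', 'a', 'Z', 'AZ', 'p0'), ('p0', None, 'A', '', 'p1')]
dfa_delta = {('q0', 'a'): 'q1', ('q1', 'a'): 'q0'}
product = product_transitions(pda_trans, dfa_delta, ['q0', 'q1'])
```

Note how the epsilon transitions appear once per DFA state, always with the DFA component unchanged, while each consuming transition drags the DFA component along its own edge.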
edited 10 mins ago
answered 17 mins ago by Thinh D. Nguyen