Calculating inter-annotator agreement
Are there situations in which it is acceptable to skip calculating expected agreement and use only observed agreement as a reliable measure? I have a multi-label classification task (specifically, annotation of semantic relations between words in a sentence, so the probability of chance agreement is very low) with multiple annotators.
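For concreteness, with more than two annotators the observed agreement is commonly taken as the average pairwise proportion of matching labels. Below is a minimal sketch in Python; the labels and relation names are purely illustrative, not from my actual data:

    from itertools import combinations

    # Hypothetical annotations: one list per annotator, one label per item.
    annotations = [
        ["agent", "patient", "instrument", "agent"],  # annotator A
        ["agent", "patient", "instrument", "theme"],  # annotator B
        ["agent", "theme", "instrument", "agent"],    # annotator C
    ]

    def observed_agreement(raters):
        """Average pairwise proportion of items given the same label by two raters."""
        pair_scores = []
        for r1, r2 in combinations(raters, 2):
            matches = sum(a == b for a, b in zip(r1, r2))
            pair_scores.append(matches / len(r1))
        return sum(pair_scores) / len(pair_scores)

    print(f"Observed agreement: {observed_agreement(annotations):.3f}")  # 0.667 here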
expected-value inter-rater cohens-kappa
asked 2 hours ago by Elena
1 Answer
Yes; it is perfectly in order to calculate the proportion of agreement and give a confidence interval for it. In fact, this is often helpful even in situations where the raters use only two categories but one of them is very rare. In such cases Cohen's $\kappa$ can be low while the proportion of agreement is high, so presenting both gives a better picture. I would suggest reporting both in your case as well, so the reader can see exactly what is happening.
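As an illustration of that point (not part of the original answer; the two-rater data below are purely hypothetical, with one rare category), this sketch computes the raw proportion of agreement with a Wilson confidence interval alongside Cohen's $\kappa$, using scikit-learn and statsmodels:

    from sklearn.metrics import cohen_kappa_score
    from statsmodels.stats.proportion import proportion_confint

    # Hypothetical ratings from two annotators; "rel" is the rare category.
    rater1 = ["none"] * 95 + ["rel"] * 3 + ["none", "rel"]
    rater2 = ["none"] * 95 + ["none"] * 3 + ["rel", "rel"]

    n_items = len(rater1)
    n_agree = sum(a == b for a, b in zip(rater1, rater2))

    p_obs = n_agree / n_items                        # observed (raw) agreement
    low, high = proportion_confint(n_agree, n_items, alpha=0.05, method="wilson")
    kappa = cohen_kappa_score(rater1, rater2)        # chance-corrected agreement

    print(f"Proportion of agreement: {p_obs:.3f} (95% CI {low:.3f}-{high:.3f})")
    print(f"Cohen's kappa:           {kappa:.3f}")

On data like this the raw agreement comes out at 0.96 while $\kappa$ is roughly 0.3, which is exactly the contrast described above: reporting both lets the reader judge absolute agreement and how much of it exceeds chance.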
answered 2 hours ago by mdewey; edited 1 hour ago by Nick Cox