Multiple comparisons problem with Cohen's Kappa



04-28-2015 05:04 PM

Hey guys,

I am trying to use Cohen's kappa to compare the levels of agreement between four raters across four categories. However, the statistic I'm getting from the mAGREE macro is the level of agreement of all raters combined, reported for each rating category.

Example output:

| Rating | Kappa | Prob>Z |
|---|---|---|
| 1 | 0.58 | >.001 |
| 2 | 0.62 | >.001 |
| 3 | 0.45 | >.001 |
| 4 | 0.78 | >.001 |

The table that I want to end up filling out looks something like this:

| | Rater 1 | Rater 2 | Rater 3 | Rater 4 | Rater 5 |
|---|---|---|---|---|---|
| Rater 1 | | | | | |
| Rater 2 | | | | | |
| Rater 3 | | | | | |
| Rater 4 | | | | | |
| Rater 5 | | | | | |

What I am thinking of doing is calculating a kappa statistic for every pair of raters. But then my question is: does that count as multiple comparisons, and would it increase my Type I error rate? Thanks.
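For illustration, here is a minimal sketch of the pairwise approach in Python (not the mAGREE macro). The rater names and ratings below are made up; Cohen's kappa is computed from observed agreement and chance-expected agreement for each pair, which is what would fill one cell of the rater-by-rater table:

```python
from itertools import combinations
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' category labels of equal length."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Chance agreement: product of each rater's marginal category proportions
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical data: 5 raters assigning 12 subjects to 4 categories
ratings = {
    "R1": [1, 2, 2, 3, 4, 1, 2, 3, 4, 4, 1, 2],
    "R2": [1, 2, 2, 3, 4, 1, 2, 3, 4, 3, 1, 2],
    "R3": [1, 2, 3, 3, 4, 2, 2, 3, 4, 4, 1, 1],
    "R4": [2, 2, 2, 3, 4, 1, 2, 3, 3, 4, 1, 2],
    "R5": [1, 1, 2, 3, 4, 1, 3, 3, 4, 4, 2, 2],
}

# One kappa per rater pair fills the upper triangle of the table
for r1, r2 in combinations(ratings, 2):
    print(f"{r1} vs {r2}: kappa = {cohens_kappa(ratings[r1], ratings[r2]):.3f}")
```

In SAS itself, PROC FREQ with the AGREE option on a two-way table is the usual way to get a kappa for one pair of raters at a time.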


04-28-2015 10:43 PM

Computing a p-value for Cohen's kappa seems rather odd in the first place: you would never expect the null hypothesis to be true. If it were even close to true, there would be no point in computing an agreement statistic at all, and even a highly significant kappa may be quite low for any practical purpose.

So, doing many kappas does increase the chance of type I error, but if there is even a tiny chance of a type I error, something is very wrong.
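To put a rough number on that increase (assuming, implausibly, that all nulls were true and the tests were independent): with the five raters from the table above there are C(5,2) = 10 pairwise kappas, so at α = 0.05 per test the chance of at least one false positive is about 40%:

```python
from math import comb

raters, alpha = 5, 0.05
tests = comb(raters, 2)           # 10 pairwise kappas for 5 raters
fwer = 1 - (1 - alpha) ** tests   # P(at least one false positive)
print(tests, round(fwer, 3))      # prints: 10 0.401
```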


04-29-2015 08:17 AM

"does that count as multiple comparisons and would it increase my levels of Type I error?"

Sorry if I'm wrong, but I think you may be confusing this with ANOVA. An agreement test is not ANOVA, so multiple comparisons and Type I error adjustments would not apply in the same way.