Hello,
I'm testing whether two empirical distributions are identical. I have a group of people with two observations of the variable EKC, one 'before' and one 'after' some intervention, and I'm using the Kolmogorov-Smirnov (K-S) test to compare the two distributions. Additionally, the observations come from a national survey, so each individual carries a survey weight (the variable WEIGHT) used to produce national estimates. See the code below:
ods graphics on;

proc npar1way data=dat edf;
   freq weight;
   class time;
   var ekc;
   ods output KS2Stats=ks;
run;
Notice in the table below that the two distributions are very similar (almost identical).
Percentiles  |  Before |   After | Difference | % change
-------------|---------|---------|------------|---------
100% Max     |  3289.1 |  3279.5 |       -9.6 |      0.3
99%          |  2443.7 |  2436.6 |       -7.1 |      0.3
95%          |  2180.5 |  2173.5 |       -6.9 |      0.3
90%          |  2047.3 |  2040.8 |       -6.4 |      0.3
75% Q3       |  1838.8 |  1832.6 |       -6.2 |      0.3
50% Median   |  1623.3 |  1617.6 |       -5.6 |      0.3
25% Q1       |  1427.8 |  1422.7 |       -5.2 |      0.4
10%          |  1271.8 |  1266.9 |       -4.9 |      0.4
5%           |  1187.6 |  1182.7 |       -4.9 |      0.4
1%           |  1029.4 |  1024.9 |       -4.6 |      0.4
0% Min       |   642.3 |   638.4 |       -3.9 |      0.6
Nevertheless, the K-S test comparing the two samples suggests that the distributions are different (Pr > KSa < .0001).
I'm not sure whether the fact that the two empirical distributions are not independent (they come from the same group of individuals, measured before and after the intervention) affects the test. If it does, can you please suggest a valid alternative test?
Thanks a lot,
A.G.
And remember if you have a large N, small differences are easier to pick up and more likely to be statistically significant even if they're not practically significant.
Thank you!
If it's pre-post measures though, you usually analyze the difference in the scores and see if that's centered on 0.
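A minimal sketch of that approach, assuming a hypothetical one-row-per-person dataset DAT_WIDE with variables EKC_BEFORE and EKC_AFTER (not shown in the original post) and ignoring the survey weights for the moment:

/* Hypothetical wide layout: one row per person, EKC measured before and after */
data diff;
   set dat_wide;
   ekc_diff = ekc_after - ekc_before;   /* post-minus-pre difference */
run;

/* Default tests for location (t, sign, signed rank) of H0: EKC_DIFF centered at 0 */
proc univariate data=diff;
   var ekc_diff;
run;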
Right now you're using the 'default' null hypothesis of 0, but that doesn't have to be true...
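For instance, a sketch using the same hypothetical wide dataset: the H0= option in PROC TTEST lets you test the paired difference against a value other than 0.

/* Paired t test against a nonzero null; -5 is an illustrative value only */
proc ttest data=dat_wide h0=-5;
   paired ekc_after*ekc_before;
run;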
Thanks.
The apparent extreme sensitivity of the K-S test here is due to the use of the FREQ statement. FREQ specifies a frequency, not a weight. When you say "x=10, freq=100", the procedure considers that you have 100 independent measurements at 10, not a single measurement with a sampling weight of 100. SAS does not provide a weighted K-S test (if such a thing exists).
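A quick way to see the distinction, sketched with the original DAT, EKC, and WEIGHT variables: with a FREQ statement the reported N is the sum of the frequencies, while with a WEIGHT statement N stays at the number of observations and the weights only enter the weighted estimates.

proc means data=dat n mean;
   var ekc;
   freq weight;     /* treated as a replication count: N inflates */
run;

proc means data=dat n mean;
   var ekc;
   weight weight;   /* treated as a sampling weight: N = number of rows */
run;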
Properly weighted statistics are provided by the SURVEYxxxx procs.
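For example, a sketch of a design-based analysis of the paired difference with PROC SURVEYMEANS, reusing the hypothetical EKC_DIFF dataset from above (add STRATA/CLUSTER statements if the survey design includes them):

proc surveymeans data=diff mean clm;
   var ekc_diff;      /* weighted mean difference and its confidence limits */
   weight weight;     /* survey weight from the national survey             */
run;

A confidence interval that excludes 0 would point to a systematic before/after shift.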
I suspect there might be two issues here. As Reeza pointed out, the large sample size might be producing statistical significance where there is no practically meaningful difference. Additionally, observations from the two groups are not independent; I'm not sure how sensitive the K-S test is to that.
Thank you!