Hello everyone, I have 3 questions about testing a drug against a placebo.
1. What tests can I use to see whether there is a significant difference or not?
For SBP data from adult males, the significance test shows that μ (drug) is significantly different from μ0 (placebo = 129 mm Hg) at the significance level alpha = 0.05, and the 95% CI (129.1 to 130.9 mm Hg) did not include 129 mm Hg. On the other hand, the difference between μ (drug) and μ0 (placebo) is not significant at alpha = 0.01; the 99% CI (128.8 to 131.2 mm Hg) for μ does indeed contain μ0.
2. Which CI and alpha is it better to use?
3. Can someone give a simple explanation that would help me understand this difference between the 95% and 99% CIs? (A worked sketch follows below.)
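For question 3, here is a minimal sketch of how the two intervals are built from the same sample; the sample mean, SD, and n below are hypothetical values chosen only so the printed intervals roughly match the ones quoted above:

```python
# Minimal sketch: 95% vs 99% one-sample t confidence intervals for mean SBP.
# xbar, sd, and n are made-up summary statistics, not the real study data.
import numpy as np
from scipy import stats

mu0 = 129.0           # placebo reference mean (mm Hg)
xbar = 130.0          # hypothetical sample mean under the drug
sd = 5.0              # hypothetical sample standard deviation
n = 120               # hypothetical sample size
se = sd / np.sqrt(n)  # standard error of the mean

for level in (0.95, 0.99):
    tcrit = stats.t.ppf(1 - (1 - level) / 2, df=n - 1)
    lo, hi = xbar - tcrit * se, xbar + tcrit * se
    print(f"{level:.0%} CI: ({lo:.1f}, {hi:.1f}) mm Hg, contains mu0? {lo <= mu0 <= hi}")
```

The 99% interval uses a larger critical t value than the 95% interval, so it is always wider when built from the same data; that extra width is what lets it reach down far enough to cover 129 mm Hg.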
Thanks,
V.
The significance of the difference was evaluated with a p-value:
p was < 0.05, so with alpha = 0.05 the difference is significant and the null hypothesis H0 is rejected;
but p was > 0.01, so with alpha = 0.01 the difference is not significant and H0 is not rejected.
Sorry, I missed this information earlier.
Think about what the p value is saying: if it is less than 0.05, then there is less than one chance in twenty that a difference as large as the one observed would come about due to chance variability if the null hypothesis of no difference were true. If it is less than 0.01, then there is less than one chance in one hundred that a difference as large as the one observed would come about due to chance alone. It indicates how much "risk" there is in rejecting the null hypothesis of no difference.
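As a concrete illustration of that decision rule (not the original analysis; the summary statistics below are made up, chosen to give a p-value between 0.01 and 0.05 like the one described in the question):

```python
# Sketch of the decision rule: one p-value compared against two alpha levels.
# mu0, xbar, sd, and n are hypothetical summary statistics.
import numpy as np
from scipy import stats

mu0, xbar, sd, n = 129.0, 130.0, 5.0, 120
t_stat = (xbar - mu0) / (sd / np.sqrt(n))        # one-sample t statistic
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # two-sided p-value

for alpha in (0.05, 0.01):
    decision = "reject H0" if p_value < alpha else "fail to reject H0"
    print(f"p = {p_value:.3f}, alpha = {alpha}: {decision}")
```

With these made-up numbers the p-value lands between 0.01 and 0.05, so the same result is declared significant at alpha = 0.05 but not at alpha = 0.01, which is exactly the pattern in the original post.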
OK, that's the standard. Now comes the part that doesn't answer your questions, but is where I get out a soapbox and make a speech. All of the p values depend on a very strong assumption--a null hypothesis of no difference. But in the real world, this could never be true. NEVER. It might be closely approximated to the level of your measurement instrument, but the probability of two population means being exactly equal can be shown through some pretty rigorous math to be exactly zero. So p values are approximate tools to begin with, and their interpretation should never come down to whether a hypothesis is true or false, but how likely the difference observed could be due to random variation.
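To see that point in action, here is a small simulation sketch (all numbers arbitrary): the true mean differs from the reference value by only 0.05 mm Hg, a difference nobody would care about clinically, yet with a large enough sample the test still ends up rejecting:

```python
# Sketch: a tiny but nonzero true difference becomes "significant" once n is huge.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu0, true_mean, sd = 129.0, 129.05, 5.0   # true difference of only 0.05 mm Hg

for n in (100, 10_000, 1_000_000):
    sample = rng.normal(true_mean, sd, size=n)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)
    print(f"n = {n:>9,}: p = {p_value:.4f}")
```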
Steve Denham