
Actual P Value for statistical procedures


03-14-2011 03:21 PM

In statistical analysis these days, we generally want to present the actual p value for a significant result rather than a truncated one such as p<0.0001. I am wondering whether it is possible to have SAS output the actual p value directly.

Thanks



Posted in reply to larry2011

03-14-2011 05:20 PM

Here is one way.

You can use ODS OUTPUT to create a SAS data set from each relevant part of a procedure's output. For instance, you could use:

proc glimmix;
  ods output tests3=tests3;
  class trt b;
  model y = trt|b;
run;

to store the Type 3 test results (the F tests, degrees of freedom, and p values) in a data set called tests3. The p value is stored in a variable called ProbF (just print the data set to check). Then in a print procedure, you can apply a different format to ProbF. Below is one way, where I request 12 decimal places for the p value:

proc print data=tests3;
  format ProbF pvalue15.12;  * note: no space in pvalue15.12 ;
run;

The relevant ODS table is different for each procedure. Check the documentation for your desired procedure.
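If you don't know the ODS table name for a particular procedure, one way to find it is ODS TRACE, which writes the name of every table the procedure produces to the SAS log. A minimal sketch (the data set name mydata is a placeholder):

```sas
/* Turn on tracing so table names appear in the log */
ods trace on;

proc glm data=mydata;
  class trt;
  model y = trt;
run;

ods trace off;

/* The log lists "Output Added" notes with a Name: field for each
   table; that name is what goes on the left side of an
   ODS OUTPUT statement, e.g. ods output ModelANOVA=tests3; */
```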



03-16-2011 04:32 PM

Thanks, lvm.


Posted in reply to larry2011

03-17-2011 07:16 AM

Now comes the question: Why? Why do you need the "actual" p value? Once you get to values this small, even trivial deviations from the assumptions in the analysis lead to changes that look like something is going on. For instance, a change from 0.0000012 to 0.000006 looks like a five-fold change in the p value, yet this amount of change in absolute value isn't at all uncommon with even slight changes in the underlying distribution of the residuals.

Wouldn't the effect size be more useful? You already know that it is unlikely that the deviation you see is due to chance alone. It strikes me (and that's just me and my personal opinion) that knowing the change observed is five sigma as opposed to three sigma would be more useful. The conversion to p value from effect size is really, really dependent on assumptions about the distribution.

Maybe I've been reading too many physics related things lately...

Or maybe I just went through the same kind of argument with a QA auditor.

SteveDenham



Posted in reply to SteveDenham

03-17-2011 02:31 PM

Actually, it is common to look at the "actual" small p values in some fields, such as in molecular biology. This is not my area, but in the analysis of microarrays, there are hundreds or thousands of genes tested for a signal in a single experiment, and -log(p) is determined for each, or for differences, etc., and these are then graphed in various ways. Check out the classic: Wolfinger et al. (2001; Journal of Computational Biology 8: 625-637). I know that a standardized effect size could show the same thing, but it appears to be the tradition in some fields to look at -log(p), where p is not truncated. Multiplicity adjustments are also done, depending on the study.
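As a sketch of that idea, -log10(p) can be computed in a DATA step from the untruncated p values that ODS OUTPUT stores. Here I assume the tests3 data set and ProbF variable from the earlier reply:

```sas
/* Compute -log10(p) from the p values saved by ODS OUTPUT;
   tests3 and ProbF are from the GLIMMIX example above. */
data neglogp;
  set tests3;
  if ProbF > 0 then negLog10P = -log10(ProbF);
run;

proc print data=neglogp;
  format ProbF pvalue15.12;
run;
```

The `if ProbF > 0` guard avoids taking the log of a zero p value, which can occur when a p value underflows to 0 in the stored data set.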


03-18-2011 07:08 AM

Thanks, lvm, for an answer that makes it clear why you might need "actual" p values. Microarray data is so information dense that even I can see that the CLT has made deviations from normality trivially small, and so the p values are meaningful.

Thanks again!

SteveDenham
