<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: How to compare Sensitivity, specificity, PPV, NPV between two diagnostic tests in Statistical Procedures</title>
    <link>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495499#M25683</link>
    <description>&lt;P&gt;The comparison of diagnostic tests against a gold standard can be done by comparing their ROC curves, which are composed of sensitivity and specificity values over a range of cutpoints. This is directly available using the ROC and ROCCONTRAST statements in PROC LOGISTIC. See the example titled "Comparing Receiver Operating Characteristic Curves" in the Examples section of the LOGISTIC documentation.&lt;/P&gt;</description>
    <pubDate>Thu, 13 Sep 2018 20:23:38 GMT</pubDate>
    <dc:creator>StatDave</dc:creator>
    <dc:date>2018-09-13T20:23:38Z</dc:date>
    <item>
      <title>How to compare Sensitivity, specificity, PPV, NPV between two diagnostic tests</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495111#M25659</link>
      <description>&lt;P&gt;Hello,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Looking for help and some validation on my current procedure.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have two diagnostic tests, Test1 and Test2, for which I have calculated sensitivity, specificity, NPV, PPV, and F1 scores. I want to determine whether one test is superior on each of these parameters. My data are coded 1 if the test is positive and 0 if the test is negative; the same coding is used for disease status.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For sensitivity and specificity, so far I have used McNemar's test following this method:&amp;nbsp;&lt;A href="https://onlinecourses.science.psu.edu/stat509/node/152/" target="_blank"&gt;https://onlinecourses.science.psu.edu/stat509/node/152/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;For specificity I made sure that the first row and first column held the test-negative results, but I am getting the same p-value.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For PPV and NPV, I have found this macro:&amp;nbsp;&lt;A href="https://support.sas.com/resources/papers/proceedings15/2141-2015.pdf" target="_blank"&gt;https://support.sas.com/resources/papers/proceedings15/2141-2015.pdf&lt;/A&gt;&lt;/P&gt;&lt;P&gt;But I can't seem to get the macro to work, as it references "t" and I can't figure out where the t is coming from.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have no clue where to start for the F1 score, though I've read a paper that references using McNemar's test for that as well; I'm not sure how that would work.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any help is greatly appreciated!&lt;/P&gt;</description>
      <pubDate>Thu, 13 Sep 2018 00:51:16 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495111#M25659</guid>
      <dc:creator>LC06</dc:creator>
      <dc:date>2018-09-13T00:51:16Z</dc:date>
    </item>
    <item>
      <title>Re: How to compare Sensitivity, specificity, PPV, NPV between two diagnostic tests</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495499#M25683</link>
      <description>&lt;P&gt;The comparison of diagnostic tests against a gold standard can be done by comparing their ROC curves, which are composed of sensitivity and specificity values over a range of cutpoints. This is directly available using the ROC and ROCCONTRAST statements in PROC LOGISTIC. See the example titled "Comparing Receiver Operating Characteristic Curves" in the Examples section of the LOGISTIC documentation.&lt;/P&gt;</description>
      <pubDate>Thu, 13 Sep 2018 20:23:38 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495499#M25683</guid>
      <dc:creator>StatDave</dc:creator>
      <dc:date>2018-09-13T20:23:38Z</dc:date>
    </item>
    <item>
      <title>Re: How to compare Sensitivity, specificity, PPV, NPV between two diagnostic tests</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495553#M25687</link>
      <description>&lt;P&gt;Thanks for the response. I know ROC curves are often used to compare diagnostic tests, but I've also seen direct comparisons of sensitivity, specificity, NPV, PPV, and F1. I've figured out the first four, but I'm still having a hard time figuring out how to apply McNemar's test to the F1 statistic, as this paper stated they did.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.ncbi.nlm.nih.gov/pubmed/29425639" target="_blank"&gt;https://www.ncbi.nlm.nih.gov/pubmed/29425639&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Fri, 14 Sep 2018 01:07:16 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/How-to-compare-Sensitivity-specificity-PPV-NPV-between-two/m-p/495553#M25687</guid>
      <dc:creator>LC06</dc:creator>
      <dc:date>2018-09-14T01:07:16Z</dc:date>
    </item>
  </channel>
</rss>