<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Calibration using GLIMMIX in Statistical Procedures</title>
    <link>https://communities.sas.com/t5/Statistical-Procedures/Calibration-using-GLIMMIX/m-p/90904#M4474</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am using PROC GLIMMIX to develop a model for mortality by a specified time point (binary outcome of alive or dead at 30 days). The data are clustered by hospital, so I am using PROC GLIMMIX to fit a random-intercept model with the reporting hospital as the subject variable. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To properly evaluate the performance of my models, I would like to examine their calibration and discrimination. I found this article (&lt;A href="http://support.sas.com/kb/41/364.html" title="http://support.sas.com/kb/41/364.html"&gt;41364 - ROC analysis for binary response models fit in the GLIMMIX, NLMIXED, GAM or other procedures&lt;/A&gt;) detailing how to create an ROC curve and get a c-statistic (i.e., area under the ROC curve) for examining model discrimination; however, I am still having trouble figuring out how to get a good measure of calibration. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;PROC GLIMMIX does not have the LACKFIT option to produce a Hosmer-Lemeshow statistic as PROC LOGISTIC does (and I am fairly certain that this statistic is not appropriate for clustered data anyway). I would like to produce something along the lines of a "plot of expected vs. observed mortality rates across deciles of increasing risk" but am having trouble figuring out how to go about it. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any help you could provide would be greatly appreciated. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 25 Jul 2013 17:49:37 GMT</pubDate>
    <dc:creator>rhysticlight</dc:creator>
    <dc:date>2013-07-25T17:49:37Z</dc:date>
    <item>
      <title>Calibration using GLIMMIX</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Calibration-using-GLIMMIX/m-p/90904#M4474</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I am using PROC GLIMMIX to develop a model for mortality by a specified time point (binary outcome of alive or dead at 30 days). The data are clustered by hospital, so I am using PROC GLIMMIX to fit a random-intercept model with the reporting hospital as the subject variable. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To properly evaluate the performance of my models, I would like to examine their calibration and discrimination. I found this article (&lt;A href="http://support.sas.com/kb/41/364.html" title="http://support.sas.com/kb/41/364.html"&gt;41364 - ROC analysis for binary response models fit in the GLIMMIX, NLMIXED, GAM or other procedures&lt;/A&gt;) detailing how to create an ROC curve and get a c-statistic (i.e., area under the ROC curve) for examining model discrimination; however, I am still having trouble figuring out how to get a good measure of calibration. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;PROC GLIMMIX does not have the LACKFIT option to produce a Hosmer-Lemeshow statistic as PROC LOGISTIC does (and I am fairly certain that this statistic is not appropriate for clustered data anyway). I would like to produce something along the lines of a "plot of expected vs. observed mortality rates across deciles of increasing risk" but am having trouble figuring out how to go about it. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any help you could provide would be greatly appreciated. &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 25 Jul 2013 17:49:37 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Calibration-using-GLIMMIX/m-p/90904#M4474</guid>
      <dc:creator>rhysticlight</dc:creator>
      <dc:date>2013-07-25T17:49:37Z</dc:date>
    </item>
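The "plot of expected vs. observed mortality rates across deciles of increasing risk" the poster asks about boils down to a simple grouping computation on the model's predicted probabilities. Below is a hedged, language-neutral sketch in Python of that grouping logic (the names `phat` and `died` are assumptions, standing in for predicted probabilities exported from GLIMMIX, e.g. via an OUTPUT statement, and the 30-day outcome indicator); it is an illustration of the idea, not SAS code:

```python
# Sketch of an "observed vs. expected by risk decile" calibration check.
# Assumptions: phat = predicted 30-day mortality probabilities (one per
# patient), died = 0/1 observed outcomes, in the same order.
import numpy as np

def calibration_by_decile(phat, died, n_groups=10):
    """Sort subjects by predicted risk, split into near-equal-size groups,
    and return per-group mean predicted (expected) and observed mortality."""
    phat = np.asarray(phat, dtype=float)
    died = np.asarray(died, dtype=float)
    order = np.argsort(phat)                 # indices, lowest risk first
    groups = np.array_split(order, n_groups) # near-equal-size risk groups
    expected = [phat[g].mean() for g in groups]
    observed = [died[g].mean() for g in groups]
    return expected, observed
```

For a well-calibrated model, plotting `observed` against `expected` should scatter around the 45-degree line. In SAS itself, the analogous steps would be a ranking of the predicted probabilities into ten groups followed by group means of the outcome and the prediction; the Python version is only meant to make the computation explicit.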
    <item>
      <title>Re: Calibration using GLIMMIX</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Calibration-using-GLIMMIX/m-p/90905#M4475</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The hard part is interpreting anything like an HL statistic in light of the clustering.&amp;nbsp; I would suggest doing a within-hospital lack-of-fit test for each hospital, as well as one overall test that essentially ignores the clustering.&amp;nbsp; If the latter shows a lack of fit, it might then be quickly identified as being due to a specific hospital.&amp;nbsp; I think all of these tests would have to be obtained from PROC LOGISTIC, first with a BY hospital statement, and then without.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You might be able to take the within-hospital p-values as data for a generalized linear model with a beta distribution, using the sample sizes as weights.&amp;nbsp; This might provide a better pooled representation than the "ignore the clustering" approach.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Good luck.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Steve Denham&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 29 Jul 2013 18:49:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Calibration-using-GLIMMIX/m-p/90905#M4475</guid>
      <dc:creator>SteveDenham</dc:creator>
      <dc:date>2013-07-29T18:49:43Z</dc:date>
    </item>
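The reply's idea of an overall lack-of-fit test plus one within each hospital can also be sketched outside SAS. The following Python illustration (an assumption-laden stand-in, not the PROC LOGISTIC LACKFIT implementation) computes a Hosmer-Lemeshow-type chi-square on risk groups, and then applies it within each hospital as a BY-group analogue; `hospital`, `phat`, and `died` are hypothetical exported columns:

```python
# Hosmer-Lemeshow-type chi-square, overall and within hospitals.
# Assumptions: phat = predicted probabilities, died = 0/1 outcomes,
# hospital = cluster identifier per record.
import numpy as np
from collections import defaultdict

def hosmer_lemeshow(phat, died, n_groups=10):
    """Chi-square over groups of predicted risk: sum of
    (O_g - E_g)^2 / (n_g * pbar_g * (1 - pbar_g)), with the usual
    df = number_of_groups - 2. Returns (chi2, df)."""
    phat = np.asarray(phat, dtype=float)
    died = np.asarray(died, dtype=float)
    order = np.argsort(phat)
    chi2, used = 0.0, 0
    for g in np.array_split(order, n_groups):
        n = len(g)
        if n == 0:
            continue
        o = died[g].sum()   # observed deaths in group
        e = phat[g].sum()   # expected deaths in group
        if 0 < e < n:       # skip degenerate groups (all-0 or all-1 risk)
            chi2 += (o - e) ** 2 / (e * (1 - e / n))
            used += 1
    return chi2, used - 2

def hl_by_hospital(hospital, phat, died, n_groups=10):
    """Apply the statistic within each hospital, like a BY statement."""
    idx = defaultdict(list)
    for i, h in enumerate(hospital):
        idx[h].append(i)
    return {h: hosmer_lemeshow(np.take(phat, i), np.take(died, i), n_groups)
            for h, i in idx.items()}
```

Under adequate fit, each chi-square should be roughly comparable to its degrees of freedom; a single hospital with an outlying value would point to where the lack of fit originates, which is the diagnostic use the reply describes.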
  </channel>
</rss>

