<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>rss.livelink.threads-in-node</title>
    <link>https://communities.sas.com/</link>
    <description>SAS Support Communities</description>
    <pubDate>Mon, 29 Apr 2024 00:55:23 GMT</pubDate>
    <dc:creator>Community</dc:creator>
    <dc:date>2024-04-29T00:55:23Z</dc:date>
    <item>
      <title>Visual Analytics - Scatter Plot blank with large datasets</title>
      <link>https://communities.sas.com/t5/SAS-Visual-Analytics/Visual-Analytics-Scatter-Plot-blank-with-large-datasets/m-p/926238#M17973</link>
      <description>&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm trying to get scatter plots of some numeric columns in my dataset with ~1 million records, but it shows up empty. I've tried multiple combinations of columns and it's always just blank, like this:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sc5_0-1714349434536.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95983iD6D57AA414D1C088/image-size/medium?v=v2&amp;amp;px=400" role="button" title="sc5_0-1714349434536.png" alt="sc5_0-1714349434536.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm guessing it's an issue with the number of data points, because when I charted a test dataset&amp;nbsp; i ~ = j with 1000 records it worked.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="sc5_1-1714350018970.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95984iB5C6A8537283C56E/image-size/medium?v=v2&amp;amp;px=400" role="button" title="sc5_1-1714350018970.png" alt="sc5_1-1714350018970.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But when I expand the same dataset to a million records it's blank.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Does anyone know how I can fix this and plot scatter plots with large datasets?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you!&lt;/P&gt;</description>
      <pubDate>Mon, 29 Apr 2024 00:21:56 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Visual-Analytics/Visual-Analytics-Scatter-Plot-blank-with-large-datasets/m-p/926238#M17973</guid>
      <dc:creator>sc5</dc:creator>
      <dc:date>2024-04-29T00:21:56Z</dc:date>
    </item>
    <item>
      <title>logistic regression adjusting for covariate</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/logistic-regression-adjusting-for-covariate/m-p/926225#M46060</link>
      <description>&lt;P&gt;Hi&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;I want to explore the relationship between fruit consumption and diabetes. I've categorized fruit intake into tertiles, while diabetes status is binary (yes/no). I want to adjust the findings for age (continuous), gender (categorical), race (categorical), and physical activity&amp;nbsp;(categorical) in the analysis. Is the logistic regression code that I want to use correct?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;proc&lt;/STRONG&gt; &lt;STRONG&gt;logistic&lt;/STRONG&gt; data=SS plots(only)=(effect oddsratio);&lt;/P&gt;&lt;P&gt;class Gender(ref='1') race(ref='1') physical_activity(ref='1')/param=ref ;&lt;/P&gt;&lt;P&gt;model diabetes(event='1')= fruitcode age Gender race physical_activity/Clodds=wald;&lt;/P&gt;&lt;P&gt;title " Association of fruit with diabetes (adjust for age, gender, race, physical activity) ";&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;run&lt;/STRONG&gt;;&lt;/P&gt;&lt;P&gt;Fruitcode: I used fruit tertile in this code.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;For finding the P-trend, I used this code:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;proc&lt;/STRONG&gt; &lt;STRONG&gt;logistic&lt;/STRONG&gt; data=SS plots(only)=(effect oddsratio);&lt;/P&gt;&lt;P&gt;model diab(event='1')= fruit age Gender race physical_activity/Clodds=wald;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;run&lt;/STRONG&gt;;&lt;/P&gt;&lt;P&gt;Fruit is continuous.&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 19:09:09 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/logistic-regression-adjusting-for-covariate/m-p/926225#M46060</guid>
      <dc:creator>Manije72</dc:creator>
      <dc:date>2024-04-28T19:09:09Z</dc:date>
    </item>
    <item>
      <title>Create new date variable based on specific condition</title>
      <link>https://communities.sas.com/t5/New-SAS-User/Create-new-date-variable-based-on-specific-condition/m-p/926222#M41569</link>
      <description>&lt;P&gt;I have a data set that contains DMRN, the last visit date, the presence of UI, and the recorded date of UI.&lt;/P&gt;
&lt;TABLE width="50"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;DMRN&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;last_visit&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;UI&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;recorded_time&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;31&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;26AUG2021&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;06APR2018&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;31&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;26AUG2021&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;16JAN2020&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;31&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;26AUG2021&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;1&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;&lt;STRONG&gt;4MAY2021&lt;/STRONG&gt;&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;33&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;24MAY2022&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;02MAR2020&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;33&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;24MAY2022&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;24MAY2022&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD&gt;
&lt;P&gt;35&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;01DEC2014&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;0&lt;/P&gt;
&lt;/TD&gt;
&lt;TD&gt;
&lt;P&gt;25MAR2013&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would like to create new_date for each DMRN based on these conditions: &lt;BR /&gt;if UI = 1 then new_date = recorded_time&lt;BR /&gt;else if UI = 0 then new_date = last_visit&lt;BR /&gt;The outcome would look like this:&amp;nbsp;&lt;/P&gt;
&lt;TABLE style="width: 50px;" border="1" width="411px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;DMRN&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;last_visit&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;UI&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;recorded_time&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;new_date&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;06APR2018&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;&lt;STRONG&gt;4MAY2021&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;16JAN2020&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;&lt;STRONG&gt;4MAY2021&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;&lt;STRONG&gt;1&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;&lt;STRONG&gt;4MAY2021&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;&lt;STRONG&gt;4MAY2021&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;
&lt;P&gt;02MAR2020&lt;/P&gt;
&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;35&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;01DEC2014&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;25MAR2013&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;01DEC2014&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Could you please help me with the code? Thank you very much&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 18:16:01 GMT</pubDate>
      <guid>https://communities.sas.com/t5/New-SAS-User/Create-new-date-variable-based-on-specific-condition/m-p/926222#M41569</guid>
      <dc:creator>tan-wongv</dc:creator>
      <dc:date>2024-04-28T18:16:01Z</dc:date>
    </item>
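The rule stated in the post above ("if UI = 1 then new_date = recorded_time, else new_date = last_visit") reads row by row, yet the expected output assigns 4MAY2021 to every record of DMRN 31, including the UI = 0 rows. Judging from that output, the rule seems to operate per DMRN group: if any record in the group has UI = 1, its recorded_time becomes new_date for the whole group; otherwise last_visit is used. A minimal sketch of that group-level logic in Python (a hypothetical stand-in for the SAS data step, using the sample rows from the post):

```python
# Sample rows from the post: (DMRN, last_visit, UI, recorded_time)
rows = [
    (31, "26AUG2021", 0, "06APR2018"),
    (31, "26AUG2021", 0, "16JAN2020"),
    (31, "26AUG2021", 1, "4MAY2021"),
    (33, "24MAY2022", 0, "02MAR2020"),
    (33, "24MAY2022", 0, "24MAY2022"),
    (35, "01DEC2014", 0, "25MAR2013"),
]

# Pass 1: remember the recorded_time of the UI = 1 record (if any) per DMRN.
ui_date = {}
for dmrn, last_visit, ui, recorded in rows:
    if ui == 1:
        ui_date[dmrn] = recorded

# Pass 2: new_date is the group's UI = 1 date when one exists,
# otherwise the row's last_visit date.
result = [
    (dmrn, last_visit, ui, recorded, ui_date.get(dmrn, last_visit))
    for dmrn, last_visit, ui, recorded in rows
]
```

In SAS terms this corresponds to a two-pass approach: one step (a data step or PROC SQL) that extracts the UI = 1 date per DMRN, merged back onto the original data by DMRN.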
    <item>
      <title>Keep only value and date that occurred before or at last visit date</title>
      <link>https://communities.sas.com/t5/New-SAS-User/Keep-only-value-and-date-that-occurred-before-or-at-last-visit/m-p/926221#M41568</link>
      <description>&lt;P&gt;I have a dataset that contains DMRN, the last visit date, the presence of UI, and the recorded date of UI.&lt;/P&gt;
&lt;TABLE style="width: 50px;" border="1" width="50"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;DMRN&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;last_visit&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;UI&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;recorded_time&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;06APR2018&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;16JAN2020&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;1&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;4MAY2021&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;&lt;STRONG&gt;31&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;&lt;STRONG&gt;26AUG2021&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;&lt;STRONG&gt;1&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;&lt;STRONG&gt;26MAY2022&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;02MAR2020&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;24MAY2022&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;35&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;01DEC2014&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;25MAR2013&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px" height="30px"&gt;&lt;STRONG&gt;35&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="100.703px" height="30px"&gt;&lt;STRONG&gt;01DEC2014&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="40px" height="30px"&gt;&lt;STRONG&gt;1&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="113.344px" height="30px"&gt;&lt;STRONG&gt;05JAN2015&lt;/STRONG&gt;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would like to keep only the UI value and recorded_time that occurred on or before the last_visit date for each DMRN. Could you please help me with the code? The outcome would look like this:&lt;/P&gt;
&lt;TABLE style="width: 50px;" border="1" width="311px"&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;DMRN&lt;/TD&gt;
&lt;TD width="100.703px"&gt;last_visit&lt;/TD&gt;
&lt;TD width="40px"&gt;UI&lt;/TD&gt;
&lt;TD width="113.344px"&gt;recorded_time&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px"&gt;06APR2018&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px"&gt;16JAN2020&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;31&lt;/TD&gt;
&lt;TD width="100.703px"&gt;26AUG2021&lt;/TD&gt;
&lt;TD width="40px"&gt;1&lt;/TD&gt;
&lt;TD width="113.344px"&gt;4MAY2021&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px"&gt;02MAR2020&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;33&lt;/TD&gt;
&lt;TD width="100.703px"&gt;24MAY2022&lt;/TD&gt;
&lt;TD width="40px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px"&gt;24MAY2022&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD width="58.7969px"&gt;35&lt;/TD&gt;
&lt;TD width="100.703px"&gt;01DEC2014&lt;/TD&gt;
&lt;TD width="40px"&gt;0&lt;/TD&gt;
&lt;TD width="113.344px"&gt;25MAR2013&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;</description>
      <pubDate>Sun, 28 Apr 2024 18:01:15 GMT</pubDate>
      <guid>https://communities.sas.com/t5/New-SAS-User/Keep-only-value-and-date-that-occurred-before-or-at-last-visit/m-p/926221#M41568</guid>
      <dc:creator>tan-wongv</dc:creator>
      <dc:date>2024-04-28T18:01:15Z</dc:date>
    </item>
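The filter described in the post above keeps a record only when recorded_time falls on or before last_visit for that DMRN. DATE9.-style values compare correctly once parsed as dates; a small Python sketch of the row filter (a hypothetical stand-in for the SAS step, using the sample rows from the post):

```python
from datetime import datetime

def d(s):
    # Parse a DATE9.-style value such as 06APR2018 or 4MAY2021.
    return datetime.strptime(s, "%d%b%Y")

# Sample rows from the post: (DMRN, last_visit, UI, recorded_time)
rows = [
    (31, "26AUG2021", 0, "06APR2018"),
    (31, "26AUG2021", 0, "16JAN2020"),
    (31, "26AUG2021", 1, "4MAY2021"),
    (31, "26AUG2021", 1, "26MAY2022"),
    (33, "24MAY2022", 0, "02MAR2020"),
    (33, "24MAY2022", 0, "24MAY2022"),
    (35, "01DEC2014", 0, "25MAR2013"),
    (35, "01DEC2014", 1, "05JAN2015"),
]

# Keep a record only when recorded_time is on or before last_visit.
kept = [r for r in rows if d(r[3]) <= d(r[1])]
```

In SAS, once both variables hold numeric date values, the same comparison works directly as a subsetting IF in a data step: `if recorded_time <= last_visit;`.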
    <item>
      <title>Proc logistic effect plot using GTL</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Proc-logistic-effect-plot-using-GTL/m-p/926220#M364495</link>
      <description>&lt;P&gt;Hello,&lt;BR /&gt;I am trying to view the graph template used by PROC LOGISTIC's EFFECTPLOT, so that I can modify the template for some changes&lt;BR /&gt;like the Y axis label. But when I use the code below&lt;/P&gt;&lt;PRE&gt;proc template;
source Stat.Logistic.Graphics.EffectCont/ file="&amp;amp;path/reg.txt";
run;&lt;/PRE&gt;&lt;P&gt;I get this log message:&lt;/P&gt;&lt;P&gt;"link Stat.Logistic.Graphics.EffectCont to Common.Zreg.Graphics.EffectCont;"&lt;/P&gt;&lt;P&gt;Can someone tell me what this log message means?&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 17:52:08 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Proc-logistic-effect-plot-using-GTL/m-p/926220#M364495</guid>
      <dc:creator>rs72</dc:creator>
      <dc:date>2024-04-28T17:52:08Z</dc:date>
    </item>
    <item>
      <title>Making header border lines invisible/white in selective columns with PROC REPORT</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Making-header-border-lines-invisible-white-in-selective-columns/m-p/926219#M364494</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'm creating an output that requires this header layout&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="smackerz1988_1-1714045922018.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95851i5B9146D7E42375A0/image-size/medium?v=v2&amp;amp;px=400" role="button" title="smackerz1988_1-1714045922018.png" alt="smackerz1988_1-1714045922018.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;I'm using this code&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;proc report data=final out=view missing nowindows split="*" contents=" ";
   column pag ("MedDRA System Organ Class /* Preferred Term" (" " aesoc))    
    ("trt (N=&amp;amp;trt_count.)" ("arm1*(N=&amp;amp;arm1_count.)" ('Subjects with TEAE*n (%)' arm1))
    ("arm2*(N=&amp;amp;arm2_count.)" ('Subjects with TEAE*n (%)' arm2)))
    ord;
   define pag / order  order = internal noprint ;
   define aesoc  / display " " left flow style(header)=[just=l] STYLE(column)=[&amp;amp;col_style asis=on cellwidth=12.00cm];
   define ord / order order =  internal noprint;   
   define arm2 / " " display   center flow STYLE(column)=[&amp;amp;col_style cellwidth=7.00cm];
   define arm1 / " " display center   flow STYLE(column)=[&amp;amp;col_style cellwidth=7.00cm];
   

    compute after pag / style ={&amp;amp;line_small};
        line "";
    endcomp;

    break after pag / page;

compute before _page_ / left style={&amp;amp;tit_style};
   line "Table 1.4.3 Severe treatment-emergent adverse events by System Organ Class and Preferred Term with incidence of &amp;gt;=5% in at least one arm - Separately by TPC regimen - Safety Analysis Set";   
endcomp;

	compute before ord / style ={&amp;amp;line_small};
		line "";		
	endcomp;

 compute after _page_ / left style={&amp;amp;foot_style};
   line "Notes: NE: Not evaluable; SOC: System Organ Class; PT: Preferred Term; TEAE: Treatment-emergent adverse event. Adverse events were coded using the MedDRA dictionary, Version 26.1 and graded according to NCI CTCAE v5.0.";   
endcomp;
run;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;However, this is my output:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="smackerz1988_0-1714317401092.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95981i683CA4BE3A0181DA/image-size/medium?v=v2&amp;amp;px=400" role="button" title="smackerz1988_0-1714317401092.png" alt="smackerz1988_0-1714317401092.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;How can I make the border line for this column invisible or remove it entirely?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 15:17:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Making-header-border-lines-invisible-white-in-selective-columns/m-p/926219#M364494</guid>
      <dc:creator>smackerz1988</dc:creator>
      <dc:date>2024-04-28T15:17:46Z</dc:date>
    </item>
    <item>
      <title>find the count of variable by group</title>
      <link>https://communities.sas.com/t5/SAS-Programming/find-the-count-of-variable-by-group/m-p/926217#M364493</link>
      <description>&lt;P&gt;Hello all,&amp;nbsp;&lt;/P&gt;&lt;P&gt;I initially posted this in the wrong format, sorry. I am creating a better representation of the issue so I can get help on this.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a dataset with a category (teachers, students) and a list of forms (form2, event46, case32) which they are expected to fill within 60 days after school resumes. Event46 has a group variable (grp=A, B, C); that is, a person can be in any one of the groups. Among the teachers and students, some did not fill the form (0), some filled the form within 30 days (1), and some filled the form within 31-60 days (2).&amp;nbsp;&lt;/P&gt;&lt;P&gt;The values 0, 1, 2 indicate when the form was filled:&lt;/P&gt;&lt;P&gt;No form filled = 0&lt;/P&gt;&lt;P&gt;Form filled within 0-30 days = 1&lt;/P&gt;&lt;P&gt;Form filled within 31-60 days = 2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is what I want:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Table 1:&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. If a teacher or student filled only form2 within 30 days and did not fill event46 or case32, they should be in the "form2 only" column. The same applies to the other forms.&lt;/P&gt;&lt;P&gt;If they filled the form after 30 days, they go into the 31-60 days count for their category.&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. If anyone filled more than one form, whether within 30 days, within 31-60 days, or both, they should be in the "multiple" column.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Table 2:&lt;/P&gt;&lt;P&gt;3. Finally, I want a total of those who did not fill any form, for both categories (teacher and student).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have created a sample output and a demo dataset. Thanks so much for your help.&lt;/P&gt;&lt;PRE&gt;&lt;CODE class=""&gt;data have;
input category $ form2 case32 event46 grp $;
datalines;
Teacher 1 0 0 .
Student 1 0 0 .
Teacher 0 1 0 .
Teacher 1 0 1 A
Teacher 0 0 1 C 
Teacher 0 0 1 B
Teacher 1 0 0 .
Student 0 1 1 C
Student 0 0 1 C
Student 0 0 1 C
Student 1 0 0 .
Student 0 1 0 .
Student 0 2 0 .
Teacher 1 1 1 B
Teacher 0 0 2 A 
Teacher 0 0 2 B
Teacher 2 0 0 .
Student 0 1 1 C
Teacher 0 0 0 .
Student 0 0 0 .
Student 0 0 1 B 
Teacher 0 0 1 B 
Teacher 0 2 0 .
Teacher 2 2 0 .
Student 0 1 0 .
Teacher 2 0 1 A
Student 0 0 1 A 
Teacher 0 0 0 .
Student 2 0 0 .
Teacher 0 0 2 C
Student 0 0 2 C
Student 0 0 2 B
Student 0 0 2 A
;
run;&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;&lt;SPAN&gt;The expected output should look like this:&lt;/SPAN&gt;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Form2&lt;/P&gt;&lt;P&gt;only&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Case32 only&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Event46 A&lt;/P&gt;&lt;P&gt;only&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Event46 B&lt;/P&gt;&lt;P&gt;only&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Event46 C&lt;/P&gt;&lt;P&gt;Only&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;Multiple&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&lt;STRONG&gt;Total&lt;/STRONG&gt;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Teacher fill form within 0-30 days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Teacher fill form within 31-60 days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Student fill form within 
0-30days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Student fill form within 31-60days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;&lt;STRONG&gt;Total&lt;/STRONG&gt;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;&amp;nbsp;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;Total&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Teacher who did not complete any form within 0-60 days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Student who did not complete any form within 0-60 days&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;&lt;P&gt;Total&lt;/P&gt;&lt;/TD&gt;&lt;TD&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 14:54:45 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/find-the-count-of-variable-by-group/m-p/926217#M364493</guid>
      <dc:creator>CathyVI</dc:creator>
      <dc:date>2024-04-28T14:54:45Z</dc:date>
    </item>
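The Table 1 rules in the post above reduce to: count how many of the three forms carry a nonzero code; zero forms means "no form filled" (tallied in Table 2), more than one means "multiple", and exactly one puts the person in that form's "only" column, split by grp for event46. The teacher/student split and the 0-30 vs 31-60 day rows (codes 1 vs 2) are separate groupings applied afterwards. A sketch of the per-row classification step, assuming "filled" means a nonzero code (the function name is illustrative):

```python
def classify(form2, case32, event46, grp):
    """Return the Table 1 column for one respondent (name is illustrative)."""
    filled = [name for name, code in
              (("form2", form2), ("case32", case32), ("event46", event46))
              if code != 0]
    if not filled:
        return "none"        # no form filled; counted separately in Table 2
    if len(filled) > 1:
        return "multiple"    # more than one form, regardless of timing
    if filled == ["event46"]:
        return "event46 %s only" % grp   # split the event46 column by grp
    return "%s only" % filled[0]
```

Once each row has a column label, the counts per (category, timing, column) cell follow from a straightforward cross-tabulation, e.g. PROC FREQ or PROC REPORT in SAS.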
    <item>
      <title>Need Advice on Handling High-Dimensional Data in Data Science Project</title>
      <link>https://communities.sas.com/t5/SAS-Data-Science/Need-Advice-on-Handling-High-Dimensional-Data-in-Data-Science/m-p/926214#M10766</link>
      <description>&lt;P&gt;Hey everyone,&lt;/P&gt;&lt;P&gt;I’m relatively new to data science and currently working on a project that involves a dataset with over 60 columns. Many of these columns are categorical, with more than 100 unique values each.&lt;/P&gt;&lt;P&gt;My issue arises when I try to apply one-hot encoding to these categorical columns. It seems like I’m running into the curse of dimensionality problem, and I’m not quite sure how to proceed from here.&lt;/P&gt;&lt;P&gt;I’d really appreciate some advice or guidance on how to effectively handle high-dimensional data in this context. Are there alternative encoding techniques I should consider? Or perhaps there are preprocessing steps I’m overlooking?&lt;/P&gt;&lt;P&gt;Any insights or tips would be immensely helpful.&lt;/P&gt;&lt;P&gt;Thanks in advance!&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 11:57:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Data-Science/Need-Advice-on-Handling-High-Dimensional-Data-in-Data-Science/m-p/926214#M10766</guid>
      <dc:creator>tress</dc:creator>
      <dc:date>2024-04-28T11:57:46Z</dc:date>
    </item>
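For high-cardinality categoricals like those described in the post above, commonly suggested alternatives to one-hot encoding include frequency encoding (replace each level with how often it occurs), target encoding (replace each level with a smoothed mean of the outcome), feature hashing, or grouping rare levels into an "other" bucket before encoding. A minimal frequency-encoding sketch in Python (toy data; a real column would have 100+ distinct levels):

```python
from collections import Counter

# Toy high-cardinality column; a real one would have 100+ distinct levels.
city = ["paris", "tokyo", "paris", "lima", "tokyo", "paris"]

counts = Counter(city)
n = len(city)

# Frequency encoding: each level becomes its relative frequency,
# yielding ONE numeric column instead of one indicator column per level.
encoded = [counts[v] / n for v in city]
```

The trade-off: dimensionality stays flat no matter how many levels exist, but distinct levels with equal frequencies become indistinguishable, so it works best as one of several engineered views of the column.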
    <item>
      <title>Error message</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Error-message/m-p/926211#M364490</link>
      <description>&lt;PRE&gt;&lt;CODE class=""&gt;/*Model 1*/

Proc logistic data= dormant_pca_cc;
class BU_419_DQ E1_B_07;
model FCS_bad_9m_Jun19 =
BU_419_DQ 
BU_26_AB_NUM 
BU_345_HN_NUM 
RR_CHR_105 
BU_162_GG_NUM 
RR_CHR_169 
BU_799_TEB_NUM 
RR_CHR_111
BU_301_PL_NUM 
BU_1552_SHC_NUM
BU_1577_RIC_NUM
RR_CHR_122
E1_B_07
bu_670_uz_num;
ods output GlobalTests = Globaltests_full;
run; 

data _null_;
set globaltests_full;
if test = "Likelihood Ratio" then do;
call symput ("ChiSq_full", ChiSq);
call symput ("DF_full", DF);
END;
RUN;


/*Model 2*/

Proc logistic data= dormant_pca_cc2;
class BU_419_DQ E1_B_07;
model FCS_bad_9m_Jun19 =
BU_419_DQ 
BU_26_AB_NUM 
BU_345_HN_NUM 
RR_CHR_105 
BU_162_GG_NUM 
RR_CHR_169 
BU_799_TEB_NUM 
RR_CHR_111
BU_301_PL_NUM 
BU_1552_SHC_NUM
BU_1577_RIC_NUM
RR_CHR_122
E1_B_07;
ods output GlobalTests = Globaltests_reduced;
run; 

data _null_;
set globaltests_reduced;
if test = "Likelihood Ratio" then do;
call symput ("ChiSq_reduced", ChiSq);
call symput ("DF_reduced", DF);
END;
RUN;

data = LRT_result;
LR =(&amp;amp;ChiSq_full - &amp;amp;ChiSq_reduced);
DF = (&amp;amp;DF_full - &amp;amp;DF_reduced);
p=1 - probchi(chiSq,DF);
RUN;


&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;This is a code to check the change in likelihood ratio. I am getting the below error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;&lt;CODE class=""&gt;1                                                          The SAS System                               10:46 Sunday, April 28, 2024

1          ;*';*";*/;quit;run;
2          OPTIONS PAGENO=MIN;
3          %LET _CLIENTTASKLABEL='Test2';
4          %LET _CLIENTPROCESSFLOWNAME='Model Macro';
5          %LET _CLIENTPROJECTPATH='C:\Users\8522019\OneDrive - Lloyds Banking Group\Desktop\Likelihood test1.egp';
6          %LET _CLIENTPROJECTPATHHOST='MMD014713504257';
7          %LET _CLIENTPROJECTNAME='Likelihood test1.egp';
8          %LET _SASPROGRAMFILE='';
9          %LET _SASPROGRAMFILEHOST='';
10         
11         ODS _ALL_ CLOSE;
12         OPTIONS DEV=SVG;
13         GOPTIONS XPIXELS=0 YPIXELS=0;
14         %macro HTML5AccessibleGraphSupported;
15             %if %_SAS_VERCOMP_FV(9,4,4, 0,0,0) &amp;gt;= 0 %then ACCESSIBLE_GRAPH;
16         %mend;
17         FILENAME EGHTML TEMP;
18         ODS HTML5(ID=EGHTML) FILE=EGHTML
19             OPTIONS(BITMAP_MODE='INLINE')
20             %HTML5AccessibleGraphSupported
21             ENCODING='utf-8'
22             STYLE=HtmlBlue
23             NOGTITLE
24             NOGFOOTNOTE
25             GPATH=&amp;amp;sasworklocation
26         ;
NOTE: Writing HTML5(EGHTML) Body file: EGHTML
27         
28         data = LRT_result;
           ____
           180

ERROR 180-322: Statement is not valid or it is used out of proper order.

29         LR =(&amp;amp;ChiSq_full - &amp;amp;ChiSq_reduced);
           __
           180

ERROR 180-322: Statement is not valid or it is used out of proper order.

30         DF = (&amp;amp;DF_full - &amp;amp;DF_reduced);
           __
           180

ERROR 180-322: Statement is not valid or it is used out of proper order.

31         p=1 - probchi(chiSq,DF);
           _
           180

ERROR 180-322: Statement is not valid or it is used out of proper order.

32         RUN;
33         
34         %LET _CLIENTTASKLABEL=;
35         %LET _CLIENTPROCESSFLOWNAME=;
36         %LET _CLIENTPROJECTPATH=;
37         %LET _CLIENTPROJECTPATHHOST=;
38         %LET _CLIENTPROJECTNAME=;
39         %LET _SASPROGRAMFILE=;
40         %LET _SASPROGRAMFILEHOST=;
41         
42         ;*';*";*/;quit;run;
43         ODS _ALL_ CLOSE;
44         
45         
46         QUIT; RUN;
47         &lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Can anyone help me correct the code? That would be great!&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 10:35:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Error-message/m-p/926211#M364490</guid>
      <dc:creator>Arjayita</dc:creator>
      <dc:date>2024-04-28T10:35:14Z</dc:date>
    </item>
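For reference, the ERROR 180-322 messages in the log above point at `data = LRT_result;`: a DATA statement names the data set without an equals sign, so it should read `data LRT_result;`. A second problem is `p=1 - probchi(chiSq,DF);`, which references a variable `chiSq` that is never created in that step; presumably the intended argument is the difference `LR`. The corrected computation can be sketched in Python; because the full model adds exactly one predictor (bu_670_uz_num), the test has df = 1, where the chi-square upper tail equals erfc(sqrt(x/2)) and no external stats library is needed:

```python
from math import erfc, sqrt

def lrt_pvalue_df1(chisq_full, chisq_reduced):
    """p-value of a likelihood-ratio test when the two nested models
    differ by exactly one parameter (df = 1).

    For df = 1, 1 - CDF_chi2(x) = erfc(sqrt(x / 2)).
    """
    lr = chisq_full - chisq_reduced
    return erfc(sqrt(lr / 2))
```

The SAS fix, in the same spirit, would be `data LRT_result; LR = &ChiSq_full - &ChiSq_reduced; DF = &DF_full - &DF_reduced; p = 1 - probchi(LR, DF); run;`.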
    <item>
      <title>How to test the interaction between strata and treatment groups in a mmrm model?</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/How-to-test-the-interaction-between-strata-and-treatment-groups/m-p/926210#M46059</link>
      <description>&lt;P&gt;I constructed an MMRM model as below:&lt;/P&gt;&lt;PRE&gt;&lt;CODE class=""&gt; proc mixed data = have;
 class id treatment week strata;
 model chg = base treatment week treatment*week treatment*strata/ ddfm = KR;
 repeated week/ subject = id type = UN;
 lsmeans treatment*week/ cl alpha = 0.05 diff;
 ods output Tests3=tests3 lsmeans=lsmeans diffs=diffs;
 run;&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;(1) id: the patient id;&lt;/P&gt;&lt;P&gt;(2) treatment: contains 2 groups, "drug" and "placebo";&lt;/P&gt;&lt;P&gt;(3) chg: change from baseline of hba1c;&lt;/P&gt;&lt;P&gt;(4) base: the value of hba1c at baseline;&lt;/P&gt;&lt;P&gt;(5) week: contains 3 levels: "week8", "week16" and "week24";&lt;/P&gt;&lt;P&gt;(6) strata: stratification based on the median of baseline hba1c, i.e.&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; group1 = patients whose baseline hba1c &amp;lt;= median of baseline hba1c;&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; group2 = patients whose baseline hba1c &amp;gt; median of baseline hba1c.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In general, I'd like to know how to test the interaction between strata and treatment groups. I am not sure whether I constructed the model in the right way, because when I looked at the results (below), the values of Diffs in LS means were pretty close between strata, but the p for interaction was significant. (The Diffs in LS means were taken from the output file diffs, and the p for interaction from the output file tests3.)&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="interaction.PNG" style="width: 999px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95980i9BD39C2E31893048/image-size/large?v=v2&amp;amp;px=999" role="button" title="interaction.PNG" alt="interaction.PNG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I also checked the Diffs in LS means at week8 and week16; the diffs in LS means were all pretty close between the 2 strata, but the p-values for interaction were all significant (p-value &amp;lt; 0.0001). So I am not sure whether I am interpreting the result in the right way. How different should the data look if the p for interaction is statistically significant? Thanks.&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 09:22:42 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/How-to-test-the-interaction-between-strata-and-treatment-groups/m-p/926210#M46059</guid>
      <dc:creator>Robin_moon</dc:creator>
      <dc:date>2024-04-28T09:22:42Z</dc:date>
    </item>
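One common way to test the strata-by-treatment interaction, sketched with the same variable names as the post and under the assumption that the strata main effect should also enter the model when its interaction is tested: the Type 3 F-test for treatment*strata in the Tests3 output is the interaction test.

```sas
proc mixed data = have;
  class id treatment week strata;
  model chg = base strata treatment week treatment*week treatment*strata / ddfm = kr;
  repeated week / subject = id type = un;
  lsmeans treatment*strata / cl alpha = 0.05 diff;  /* strata-specific treatment differences */
  ods output Tests3 = tests3 LSMeans = lsmeans Diffs = diffs;
run;

/* The treatment*strata row of TESTS3 carries the interaction F-test. */
proc print data = tests3;
  where Effect = 'treatment*strata';
run;
```

Note that a significant interaction test with similar LS-mean differences is worth double-checking against the Type 3 table printed by PROC MIXED itself.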
    <item>
      <title>Specify non-inferiority margin using mixed model for repeated measurement</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Specify-non-inferiority-margin-using-mixed-model-for-repeated/m-p/926203#M364487</link>
      <description>&lt;P&gt;I have a dataset structured as repeated measurements, and a mixed model for repeated measures is used for the analysis. The standard code to get the average change from baseline (note that here we are comparing baseline with the mean of the last 4 visits) and the associated p-value is posted below. Based on this, how can I test non-inferiority using a margin of -1, i.e., that the change from baseline is greater than -1? The null hypothesis is that this change is smaller than or equal to -1. Can anyone suggest how to write SAS code for this test?&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="sas"&gt;proc mixed data=mydata;
  class id visit sex(ref='F');
  model change = age sex baseline visit*baseline/ ddfm=kr fullx;
  repeated visit / subject = id type = un;
  lsmeans visit ;
  estimate 'Average' Intercept 1 age 1 sex 0.5 0.5 
           visit 0 0 0 0 0 0.25 0.25 0.25 0.25 base &amp;amp;basemean 
           visit*base 0 0 0 0 0 &amp;amp;basemean1 &amp;amp;basemean1 &amp;amp;basemean1 &amp;amp;basemean1 / cl;
run;&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 06:15:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Specify-non-inferiority-margin-using-mixed-model-for-repeated/m-p/926203#M364487</guid>
      <dc:creator>DingTao</dc:creator>
      <dc:date>2024-04-28T06:15:05Z</dc:date>
    </item>
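One way to get the one-sided non-inferiority p-value, sketched under the assumption that the 'Average' ESTIMATE statement from the post produces the change of interest: capture the estimate with ODS OUTPUT and shift the t-test by the margin yourself. The post's coefficient specification is kept as-is (note the MODEL statement names the covariate baseline while the ESTIMATE statement references base; these would need to match in a real run).

```sas
/* Capture the 'Average' estimate, its standard error and df. */
ods output Estimates = est;
proc mixed data=mydata;
  class id visit sex(ref='F');
  model change = age sex baseline visit*baseline / ddfm=kr fullx;
  repeated visit / subject = id type = un;
  estimate 'Average' Intercept 1 age 1 sex 0.5 0.5
           visit 0 0 0 0 0 0.25 0.25 0.25 0.25 base &basemean
           visit*base 0 0 0 0 0 &basemean1 &basemean1 &basemean1 &basemean1 / cl;
run;

/* One-sided test of H0: change <= -1 versus H1: change > -1. */
data ni_test;
  set est;
  margin = -1;
  t_ni = (Estimate - margin) / StdErr;
  p_ni = 1 - probt(t_ni, DF);   /* small p_ni supports non-inferiority */
run;
```

Equivalently, non-inferiority at the 2.5% one-sided level holds when the lower bound of the two-sided 95% CI from the ESTIMATE statement exceeds -1.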
    <item>
      <title>Find Standard deviation from 2, or more, columns/variables with aim of SQL</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Find-Standard-deviation-from-2-or-more-columns-variables-with/m-p/926187#M364473</link>
      <description>&lt;P&gt;Good evening, everyone:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;In my last post I tried to explain an example, but for logistical reasons I couldn't finish the explanation there, so here I go again:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I have this data set:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;data have;&lt;BR /&gt;input id South North east west ;&lt;BR /&gt;cards;&lt;BR /&gt;1 2 4 12 9&lt;BR /&gt;2 3 . 11 10&lt;BR /&gt;3 2 4 12 10&lt;BR /&gt;4 6 4 13 12&lt;BR /&gt;;&lt;/P&gt;
&lt;P&gt;proc sql;&lt;BR /&gt;create table want as&lt;BR /&gt;select *, &lt;BR /&gt;sum(south + north)/(count(south + north)) as mean_south_north, &lt;BR /&gt;std(south and north) as sd_south_north &lt;BR /&gt;from have;&lt;BR /&gt;quit;&lt;/P&gt;
&lt;P&gt;proc print data = want;&lt;BR /&gt;run;&lt;/P&gt;
&lt;P&gt;Unfortunately I got this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;DIV class="branch"&gt;
&lt;DIV&gt;
&lt;DIV align="center"&gt;
&lt;TABLE class="table" summary="Procedure Print: Data Set WORK.WANT" frame="box" rules="all" cellspacing="0" cellpadding="5"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH class="r header" scope="col" width="21.8167px" height="38px"&gt;Obs&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="40px" height="38px"&gt;id&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="40px" height="38px"&gt;south&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="40px" height="38px"&gt;north&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="40px" height="38px"&gt;west&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="40px" height="38px"&gt;east&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="92.8833px" height="38px"&gt;sum_south_north&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="99.5667px" height="38px"&gt;mean_south_north&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="91.2px" height="38px"&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;sd_south_north&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="84.95px" height="38px"&gt;sum_west_east&lt;/TH&gt;
&lt;TH class="r header" scope="col" width="96.9167px" height="38px"&gt;mean_west_east&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TH class="r rowheader" scope="row" width="21.8167px" height="30px"&gt;1&lt;/TH&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;1&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;10&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;.&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;15&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;9&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="92.8833px" height="30px" class="r data"&gt;116&lt;/TD&gt;
&lt;TD width="99.5667px" height="30px" class="r data"&gt;25.2&lt;/TD&gt;
&lt;TD width="91.2px" height="30px" class="r data"&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;8.48528&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="84.95px" height="30px" class="r data"&gt;74&lt;/TD&gt;
&lt;TD width="96.9167px" height="30px" class="r data"&gt;12.3333&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="r rowheader" scope="row" width="21.8167px" height="30px"&gt;2&lt;/TH&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;2&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;40&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;12&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;10&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;14&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="92.8833px" height="30px" class="r data"&gt;116&lt;/TD&gt;
&lt;TD width="99.5667px" height="30px" class="r data"&gt;25.2&lt;/TD&gt;
&lt;TD width="91.2px" height="30px" class="r data"&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;8.48528&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="84.95px" height="30px" class="r data"&gt;74&lt;/TD&gt;
&lt;TD width="96.9167px" height="30px" class="r data"&gt;12.3333&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="r rowheader" scope="row" width="21.8167px" height="30px"&gt;3&lt;/TH&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;3&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;50&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;14&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;13&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="40px" height="30px" class="r data"&gt;&lt;FONT color="#00FF00"&gt;&lt;STRONG&gt;13&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/TD&gt;
&lt;TD width="92.8833px" height="30px" class="r data"&gt;116&lt;/TD&gt;
&lt;TD width="99.5667px" height="30px" class="r data"&gt;25.2&lt;/TD&gt;
&lt;TD width="91.2px" height="30px" class="r data"&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;8.48528&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/TD&gt;
&lt;TD width="84.95px" height="30px" class="r data"&gt;74&lt;/TD&gt;
&lt;TD width="96.9167px" height="30px" class="r data"&gt;12.3333&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The &lt;FONT face="arial black,avant garde" color="#FF0000"&gt;&lt;STRONG&gt;RED BOLD&lt;/STRONG&gt;&lt;/FONT&gt; text in the table above shows the incorrectly computed standard deviation; the correct value is&amp;nbsp;&lt;STRONG&gt;&lt;FONT face="arial black,avant garde" color="#FF0000"&gt;18.4715998224301,&lt;/FONT&gt;&lt;/STRONG&gt;&lt;FONT face="arial,helvetica,sans-serif" color="#000080"&gt; the real standard deviation of 10, 40, 50, 12 and 14. Note that there is a missing value in the data set, so it is very important to take it into account as well.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" color="#000080"&gt;Other important issue, it would be grate make the program to find the standard deviation from other columns and put them as other variable, for example the standard deviation of west and east observations , as std_west_east, considering &lt;STRONG&gt;&lt;FONT color="#00FF00"&gt;GREEN DATA&lt;/FONT&gt;&lt;/STRONG&gt; in the table&lt;/FONT&gt;&lt;FONT face="arial,helvetica,sans-serif" color="#000080"&gt; above &lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" color="#000080"&gt;i appreciate any help to complete this task correcting the code i provided to you above.&lt;/FONT&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;FONT face="arial,helvetica,sans-serif" color="#000080"&gt;thanks in advance&lt;/FONT&gt;&lt;/P&gt;</description>
      <pubDate>Sun, 28 Apr 2024 00:11:40 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Find-Standard-deviation-from-2-or-more-columns-variables-with/m-p/926187#M364473</guid>
      <dc:creator>jonatan_velarde</dc:creator>
      <dc:date>2024-04-28T00:11:40Z</dc:date>
    </item>
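The expression std(south and north) computes the standard deviation of the logical expression "south and north" (a column of 0/1 values), which is why the result is wrong. A sketch of one way to pool both columns with PROC SQL, assuming the HAVE data set from the post: stack the two columns into a single one with a view, then aggregate it (MEAN and STD ignore missing values automatically, so the missing value is handled correctly).

```sas
/* Stack south and north into one column so SQL aggregates see all values. */
data stacked / view=stacked;
  set have;
  value = south; output;
  value = north; output;
  keep value;
run;

proc sql;
  create table want as
  select h.*,
         (select mean(value) from stacked) as mean_south_north,
         (select std(value)  from stacked) as sd_south_north
  from have as h;
quit;
```

The same pattern with a second view over west and east yields std_west_east.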
    <item>
      <title>Column mapping in load table step in SAS Viya 4.0</title>
      <link>https://communities.sas.com/t5/Moving-to-SAS-Viya/Column-mapping-in-load-table-step-in-SAS-Viya-4-0/m-p/926137#M74</link>
      <description>&lt;P&gt;We are migrating SAS 9.4 DI jobs to SAS Viya 4. When we migrated a table loader transformation to a load table step, the column mappings were not migrated properly: columns with non-matching names are not mapped. In the SAS 9.4 DI table loader transformation we can map any column to any other column regardless of its name, but in SAS Viya this is a big concern. Can anyone suggest a solution? The client has lots of DI jobs to migrate to SAS Viya.&lt;/P&gt;</description>
      <pubDate>Sat, 27 Apr 2024 09:51:06 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Moving-to-SAS-Viya/Column-mapping-in-load-table-step-in-SAS-Viya-4-0/m-p/926137#M74</guid>
      <dc:creator>ParitoshACN</dc:creator>
      <dc:date>2024-04-27T09:51:06Z</dc:date>
    </item>
    <item>
      <title>fname middlename lastname</title>
      <link>https://communities.sas.com/t5/SAS-Programming/fname-middlename-lastname/m-p/926135#M364433</link>
      <description>&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;data ds;
infile datalines truncover;
input name $40.;
datalines;
Virat Kohli
Surya Kumar Yadav
Rohit Sharma
Dhoni
Chetandra Pratap Singh Chauhan
;
run;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;Find the firstname, middlename, and lastname:&lt;/P&gt;
&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="pavank_0-1714203596486.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95970i10EB96A6BBE24895/image-size/medium?v=v2&amp;amp;px=400" role="button" title="pavank_0-1714203596486.png" alt="pavank_0-1714203596486.png" /&gt;&lt;/span&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 27 Apr 2024 08:55:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/fname-middlename-lastname/m-p/926135#M364433</guid>
      <dc:creator>pavank</dc:creator>
      <dc:date>2024-04-27T08:55:14Z</dc:date>
    </item>
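A sketch of one common approach with SCAN and COUNTW, assuming the DS data set above and the convention that a single-word name is only a first name: take the first word as firstname, the last word as lastname, and everything in between as middlename.

```sas
data want;
  set ds;
  length firstname middlename lastname $40;
  nwords = countw(name, ' ');
  firstname = scan(name, 1, ' ');
  if nwords > 1 then lastname = scan(name, -1, ' ');
  /* words between the first and the last form the middle name */
  do i = 2 to nwords - 1;
    middlename = catx(' ', middlename, scan(name, i, ' '));
  end;
  drop nwords i;
run;
```

For example, "Surya Kumar Yadav" splits into Surya / Kumar / Yadav, and "Chetandra Pratap Singh Chauhan" keeps "Pratap Singh" as the middle name.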
    <item>
      <title>Write a code on SAS on likelihood ratio</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Write-a-code-on-SAS-on-likelihood-ratio/m-p/926116#M364419</link>
      <description>I have one dataset, DORMANT_PCA_CC, which has fourteen variables: Bu_419_DQ, Bu_26_ab_num, Bu_345_hn_num, rr_chr_105, bu_162_gg_num, rr_chr_169, bu_799_teb_num, rr_chr_111, bu_301_pl_num, bu_1552_shc_num, bu_1577_ric_num, rr_chr_122, e1_b_07, bu_670_uz_num.&lt;BR /&gt;I want to drop one variable, bu_670_uz_num.&lt;BR /&gt;&lt;BR /&gt;To test whether the dropped variable bu_670_uz_num is significant or not, I shall perform a likelihood ratio test of the two models. How do I write SAS code to check the change in likelihood?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance</description>
      <pubDate>Sat, 27 Apr 2024 01:13:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Write-a-code-on-SAS-on-likelihood-ratio/m-p/926116#M364419</guid>
      <dc:creator>Arjayita</dc:creator>
      <dc:date>2024-04-27T01:13:14Z</dc:date>
    </item>
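A sketch of one way to do the likelihood ratio test, assuming a binary response modeled with PROC LOGISTIC (the model type is not stated in the post, and the response name `target` is hypothetical; substitute your own): fit the full and reduced models, capture -2 Log L with ODS OUTPUT, and compare the difference to a chi-square with 1 degree of freedom.

```sas
/* Full model (all fourteen variables). Response 'target' is hypothetical. */
ods output FitStatistics = fit_full;
proc logistic data=DORMANT_PCA_CC;
  model target(event='1') = Bu_419_DQ Bu_26_ab_num Bu_345_hn_num rr_chr_105
        bu_162_gg_num rr_chr_169 bu_799_teb_num rr_chr_111 bu_301_pl_num
        bu_1552_shc_num bu_1577_ric_num rr_chr_122 e1_b_07 bu_670_uz_num;
run;

/* Reduced model: bu_670_uz_num dropped. */
ods output FitStatistics = fit_reduced;
proc logistic data=DORMANT_PCA_CC;
  model target(event='1') = Bu_419_DQ Bu_26_ab_num Bu_345_hn_num rr_chr_105
        bu_162_gg_num rr_chr_169 bu_799_teb_num rr_chr_111 bu_301_pl_num
        bu_1552_shc_num bu_1577_ric_num rr_chr_122 e1_b_07;
run;

/* LR chi-square = difference of the two -2 Log L values; 1 df for 1 variable. */
data lrt;
  merge fit_full   (where=(Criterion='-2 Log L')
                    rename=(InterceptAndCovariates=m2ll_full))
        fit_reduced(where=(Criterion='-2 Log L')
                    rename=(InterceptAndCovariates=m2ll_reduced));
  LR = m2ll_reduced - m2ll_full;
  p  = 1 - probchi(LR, 1);
  keep LR p;
run;
```

A small p then says the dropped variable contributed significantly to the fit.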
    <item>
      <title>Gompertz curve modelling using NLIN PROC</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Gompertz-curve-modelling-using-NLIN-PROC/m-p/926113#M364416</link>
      <description>&lt;P&gt;I want to use the Gompertz curve model to develop a growth curve and also get the analysis components. The model that I want to use is:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;BWt = BW0 * exp((L/K)(1 - exp(-Kt))).&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The equation includes parameters such as BWt (recorded body weight at age t), BW0 (estimated weight at hatching), L (initial specific growth rate), and K (maturation rate, or exponential factor of decay of the specific growth rate).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;How can I go about developing the code?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;LINE in the data stands for the animal breed. This is a sample of the data set; the full data set is huge.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;TABLE&gt;&lt;TBODY&gt;&lt;TR&gt;&lt;TD&gt;LINE&lt;/TD&gt;&lt;TD&gt;BW0&lt;/TD&gt;&lt;TD&gt;WEEK&lt;/TD&gt;&lt;TD&gt;BWT&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;39.0&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;153.6&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;39.0&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;128.4&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;37.0&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;143.2&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;31.8&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;107.0&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;32.6&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;144.6&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;32.4&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;117.2&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;35.6&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;154.2&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;37.4&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;.&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;33.4&lt;/TD&gt;
&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;66.6&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;32.0&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;92.6&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;37.6&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;154.0&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;34.8&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;125.6&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;34.4&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;113.8&lt;/TD&gt;&lt;/TR&gt;&lt;TR&gt;&lt;TD&gt;F01&lt;/TD&gt;&lt;TD&gt;37.2&lt;/TD&gt;&lt;TD&gt;2&lt;/TD&gt;&lt;TD&gt;146.8&lt;/TD&gt;&lt;/TR&gt;&lt;/TBODY&gt;&lt;/TABLE&gt;</description>
      <pubDate>Fri, 26 Apr 2024 23:47:04 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Gompertz-curve-modelling-using-NLIN-PROC/m-p/926113#M364416</guid>
      <dc:creator>samkelomotsa</dc:creator>
      <dc:date>2024-04-26T23:47:04Z</dc:date>
    </item>
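A sketch of a PROC NLIN fit of the Gompertz equation from the post, assuming the data set is named HAVE with the columns LINE, WEEK and BWT shown. The hatch-weight parameter is named BW0_est here because the data already contain a BW0 column, and the starting values are illustrative guesses to adjust for your data; a BY statement fits one curve per breed.

```sas
proc sort data=have;
  by LINE;
run;

proc nlin data=have method=marquardt;
  by LINE;                                 /* one curve per breed line */
  parms BW0_est = 35  L = 0.5  K = 0.1;    /* illustrative starting values */
  model BWT = BW0_est * exp((L / K) * (1 - exp(-K * WEEK)));
  output out=pred predicted=BWT_hat;       /* fitted growth curve */
run;
```

The parameter estimates table then gives BW0_est, L and K per LINE, and PRED holds the fitted curve for plotting.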
    <item>
      <title>Obtain the mean from two or more columns using SQL and put it as a new column</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Obtain-the-mean-from-two-or-more-columns-using-SQL-and-put-it-as/m-p/926112#M364415</link>
      <description>&lt;P&gt;I have this data corresponding to many participants; data were collected from them in different locations.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="0" cellspacing="0"&gt;&lt;COLGROUP width="22"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="43"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="66"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="117"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="102"&gt;&lt;/COLGROUP&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD height="17" align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;id&amp;quot;}"&gt;id&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;South&amp;quot;}"&gt;South&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;North&amp;quot;}"&gt;North&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;east&amp;quot;}"&gt;east&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;west&amp;quot;}"&gt;west&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;1&lt;/TD&gt;
&lt;TD align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;TD align="center"&gt;9&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;3&lt;/TD&gt;
&lt;TD align="center"&gt;.&lt;/TD&gt;
&lt;TD align="center"&gt;11&lt;/TD&gt;
&lt;TD align="center"&gt;10&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;3&lt;/TD&gt;
&lt;TD align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;TD align="center"&gt;10&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;6&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;13&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Now I need to obtain the mean (or average) of all of them and put it into new columns, along with the standard deviation.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Note that both the mean and the standard deviation consider all observations; in SQL I don't want to divide the sum by the number of observations by hand, that is not fashionable LOL.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I would like to obtain this:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;TABLE border="0" cellspacing="0"&gt;&lt;COLGROUP width="22"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="43"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="66"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="117"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="102"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="37"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="39"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="117"&gt;&lt;/COLGROUP&gt; &lt;COLGROUP width="102"&gt;&lt;/COLGROUP&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TD height="17" align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;id&amp;quot;}"&gt;id&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;South&amp;quot;}"&gt;South&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;North&amp;quot;}"&gt;North&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;mean_south_north&amp;quot;}"&gt;mean_south_north&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;SD_south_north&amp;quot;}"&gt;SD_south_north&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;east&amp;quot;}"&gt;east&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;west&amp;quot;}"&gt;west&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;mean_east_west&amp;quot;}"&gt;mean_east_west&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;SD_east_west&amp;quot;}"&gt;SD_east_west&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;1&lt;/TD&gt;
&lt;TD align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;3,57142857142857&lt;/TD&gt;
&lt;TD align="center"&gt;1,39727626201154&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;TD align="center"&gt;9&lt;/TD&gt;
&lt;TD align="center"&gt;11,125&lt;/TD&gt;
&lt;TD align="center"&gt;1,35620268186054&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;3&lt;/TD&gt;
&lt;TD align="center" data-sheets-value="{ &amp;quot;1&amp;quot;: 2, &amp;quot;2&amp;quot;: &amp;quot;.&amp;quot;}"&gt;.&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;3,57142857142857&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,39727626201154&lt;/TD&gt;
&lt;TD align="center"&gt;11&lt;/TD&gt;
&lt;TD align="center"&gt;10&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;11,125&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,35620268186054&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;3&lt;/TD&gt;
&lt;TD align="center"&gt;2&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;3,57142857142857&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,39727626201154&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;TD align="center"&gt;10&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;11,125&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,35620268186054&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TD height="17" align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center"&gt;6&lt;/TD&gt;
&lt;TD align="center"&gt;4&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;3,57142857142857&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,39727626201154&lt;/TD&gt;
&lt;TD align="center"&gt;13&lt;/TD&gt;
&lt;TD align="center"&gt;12&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;11,125&lt;/TD&gt;
&lt;TD align="center" data-sheets-formula="=R[-1]C"&gt;1,35620268186054&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The first mean comes from the observations in South and North, and the second from East and West, along with their respective standard deviations.&lt;/P&gt;
&lt;P&gt;Thanks in advance&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 26 Apr 2024 23:42:15 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Obtain-the-mean-from-two-or-more-columns-using-SQL-and-put-it-as/m-p/926112#M364415</guid>
      <dc:creator>jonatan_velarde</dc:creator>
      <dc:date>2024-04-26T23:42:15Z</dc:date>
    </item>
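A sketch of one PROC SQL way to get the table above, assuming a HAVE data set with the columns shown: since SQL aggregate functions work down a single column, first stack each pair of columns into one with a view, then attach the pooled mean and standard deviation to every row (missing values are excluded automatically, so the missing North value is handled as desired).

```sas
data south_north / view=south_north;   /* pool South and North values */
  set have;
  value = South; output;
  value = North; output;
  keep value;
run;

data east_west / view=east_west;       /* pool east and west values */
  set have;
  value = east; output;
  value = west; output;
  keep value;
run;

proc sql;
  create table want as
  select h.*,
         (select mean(value) from south_north) as mean_south_north,
         (select std(value)  from south_north) as SD_south_north,
         (select mean(value) from east_west)   as mean_east_west,
         (select std(value)  from east_west)   as SD_east_west
  from have as h;
quit;
```

With the four observations shown this reproduces the desired 3.5714/1.3973 for South-North and 11.125/1.3562 for East-West.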
    <item>
      <title>Proc sql-creating new table not working</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Proc-sql-creating-new-table-not-working/m-p/926110#M364413</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;I am trying to use proc sql to create a new table (code below) but I am getting an error (log screenshot attached). Could anyone please help me with this issue? Thank you.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;proc sql;&lt;BR /&gt;create table ae as&lt;BR /&gt;select distinct usubjid, aedecod,&lt;BR /&gt;catx('@', smq01nam, smq02nam, smq03nam, smq04nam, smq05nam, smq06nam, smq07nam, smq08nam,&lt;BR /&gt;smq09nam, smq10nam) as aesmq,&lt;BR /&gt;aebodsys, trt01an&lt;BR /&gt;from snpm&lt;BR /&gt;where 'Broad' in (smq01sc, smq02sc, smq03sc, smq04sc, smq05sc, smq06sc, smq07sc, smq08sc,&lt;BR /&gt;smq09sc, smq10sc)&lt;BR /&gt;and not missing(catx('@', smq01nam, smq02nam, smq03nam, smq04nam, smq05nam, smq06nam, smq07nam, smq08nam,&lt;BR /&gt;smq09nam, smq10nam));&lt;BR /&gt;quit;&lt;/P&gt;</description>
      <pubDate>Fri, 26 Apr 2024 22:42:17 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Proc-sql-creating-new-table-not-working/m-p/926110#M364413</guid>
      <dc:creator>billi_billi</dc:creator>
      <dc:date>2024-04-26T22:42:17Z</dc:date>
    </item>
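Without the log it is hard to be certain, but a likely cause is the IN operator: in PROC SQL (as in the DATA step) the IN list must be constants, not variable names, so `'Broad' in (smq01sc, ...)` is rejected. A sketch of a rewrite using the WHICHC function to search across the variables instead, and CALCULATED to reuse the AESMQ expression:

```sas
proc sql;
  create table ae as
  select distinct usubjid, aedecod,
         catx('@', smq01nam, smq02nam, smq03nam, smq04nam, smq05nam,
                   smq06nam, smq07nam, smq08nam, smq09nam, smq10nam) as aesmq,
         aebodsys, trt01an
  from snpm
  /* WHICHC returns the position of 'Broad' among the listed
     variables, or 0 if none of them holds that value. */
  where whichc('Broad', smq01sc, smq02sc, smq03sc, smq04sc, smq05sc,
                        smq06sc, smq07sc, smq08sc, smq09sc, smq10sc) > 0
    and not missing(calculated aesmq);
quit;
```

If the log shows a different error, posting its exact text would narrow this down further.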
    <item>
      <title>Tip of the Week: SAS Enterprise Guide – Connecting to Viya 4</title>
      <link>https://communities.sas.com/t5/Dicas-e-recursos/Dica-da-Semana-SAS-Enterprise-Guide-Conex%C3%A3o-com-Viya-4/ba-p/926105</link>
      <description>&lt;P&gt;The new version of SAS Enterprise Guide, 8.4, can connect to Viya 4. See how in this article!&lt;/P&gt;</description>
      <pubDate>Fri, 26 Apr 2024 20:07:36 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Dicas-e-recursos/Dica-da-Semana-SAS-Enterprise-Guide-Conex%C3%A3o-com-Viya-4/ba-p/926105</guid>
      <dc:creator>ericlesvictor</dc:creator>
      <dc:date>2024-04-26T20:07:36Z</dc:date>
    </item>
    <item>
      <title>PROC COUNTREG - Scoring with Zero Inflate Conway Maxwell Poisson (ZICMP)</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/PROC-COUNTREG-Scoring-with-Zero-Inflate-Conway-Maxwell-Poisson/m-p/926066#M46056</link>
      <description>&lt;P&gt;Hello Sas Community,&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN class=""&gt;I am currently working on a project where I need to utilize PROC COUNTREG in SAS, particularly focusing on utilizing ZICMP distribution with spatialeffects.&amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;SPAN&gt;I'm trying to manually score based on the parameters, and from the documentation, I've reconstructed the following steps.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Let:&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;n be the number of observations&lt;/LI&gt;&lt;LI&gt;m be the number of model regressors&lt;/LI&gt;&lt;LI&gt;s be the number of spatial effects regressors&lt;/LI&gt;&lt;LI&gt;z be the number of zero model regressors&lt;/LI&gt;&lt;LI&gt;d be the number of dispersion model regressors&lt;/LI&gt;&lt;LI&gt;X1 (n x m) be the matrix containing the values of m model regressors&lt;/LI&gt;&lt;LI&gt;X2 (n x s) be the matrix containing the values of s spatial effects regressors&lt;/LI&gt;&lt;LI&gt;Z1 (n x z) be the matrix containing the values of z zero model regressors&lt;/LI&gt;&lt;LI&gt;D1 (n x d) be the matrix containing the values of d dispersion model regressors&lt;/LI&gt;&lt;LI&gt;Wmat (n x n) be the spatial weights matrix&lt;/LI&gt;&lt;LI&gt;Beta1 (m x 1) be the vector containing the coefficients of m model regressors&lt;/LI&gt;&lt;LI&gt;Beta2 (s x 1) be the vector containing the coefficients of s spatial effects regressors&lt;/LI&gt;&lt;LI&gt;Gamma (z x 1) be the vector containing the coefficients of z zero model regressors&lt;/LI&gt;&lt;LI&gt;Delta (d x 1) be the vector containing the coefficients of d dispersion model regressors&lt;/LI&gt;&lt;LI&gt;Intercept be the value of the intercept of the model&lt;/LI&gt;&lt;LI&gt;Inf_Intercept be the value of the intercept of the zero model&lt;/LI&gt;&lt;LI&gt;Disp_Intercept be the value of the intercept of the dispersion model&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span 
class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_2-1714146494522.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95929i3EE0174BB7444023/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_2-1714146494522.png" alt="Psueri_2-1714146494522.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_3-1714146494522.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95933i484457023B8BCBB6/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_3-1714146494522.png" alt="Psueri_3-1714146494522.png" /&gt;&lt;/span&gt;&amp;nbsp;, or&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_4-1714146494523.png" style="width: 133px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95935i14CF4B62EA0D4428/image-dimensions/133x44?v=v2" width="133" height="44" role="button" title="Psueri_4-1714146494523.png" alt="Psueri_4-1714146494523.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_5-1714146494523.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95934iD900AB8C2B76DFCD/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_5-1714146494523.png" alt="Psueri_5-1714146494523.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_6-1714146494523.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95937i0C068C24384D84BB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_6-1714146494523.png" 
alt="Psueri_6-1714146494523.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_7-1714146494524.png" style="width: 292px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95938i8ED392BEB3B9D7B1/image-dimensions/292x27?v=v2" width="292" height="27" role="button" title="Psueri_7-1714146494524.png" alt="Psueri_7-1714146494524.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;But the values of the lambda vector in the COUNTREG output differ from the ones computed as exp(xbeta); exp(xbeta) instead matches the mu value in the COUNTREG output.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_8-1714146494524.png" style="width: 263px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95936iD9E3F54D619BA70F/image-dimensions/263x25?v=v2" width="263" height="25" role="button" title="Psueri_8-1714146494524.png" alt="Psueri_8-1714146494524.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;(the documentation has a typo in the ZICMP formula, but it is correct for CMP, and the results match the COUNTREG output)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_9-1714146494524.png" style="width: 288px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95939iC1AC9DD11CC85FCE/image-dimensions/288x93?v=v2" width="288" height="93" role="button" title="Psueri_9-1714146494524.png" alt="Psueri_9-1714146494524.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now, there is no closed form for this normalization factor. Following the paper:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Approximating the Conway–Maxwell–Poisson distribution normalization constant &lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Steven B. 
Gillispie &lt;/EM&gt;&lt;EM&gt;∗&lt;/EM&gt;&lt;EM&gt; and Christopher G. Green &lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;University of Washington, Seattle, USA — Department of Statistics &lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;Technical Report no. 615 &lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;June 6, 2013&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I found this approximation formula&amp;nbsp;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_10-1714147059902.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95940iFB9F3DC290B0DBB5/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_10-1714147059902.png" alt="Psueri_10-1714147059902.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And then&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_11-1714147103297.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95941iBB9054448D6B7BFB/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_11-1714147103297.png" alt="Psueri_11-1714147103297.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_14-1714147225679.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95945i4EB4651610802E42/image-size/medium?v=v2&amp;amp;px=400" role="button" title="Psueri_14-1714147225679.png" alt="Psueri_14-1714147225679.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Psueri_15-1714147225680.png" style="width: 400px;"&gt;&lt;img src="https://communities.sas.com/t5/image/serverpage/image-id/95944iC3FD3B7CCC90CB3A/image-size/medium?v=v2&amp;amp;px=400" role="button" 
title="Psueri_15-1714147225680.png" alt="Psueri_15-1714147225680.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Now, the predicted results &lt;SPAN&gt;&lt;SPAN class=""&gt;I obtain do not match those from the COUNTREG output. The differing values are:&lt;/SPAN&gt;&lt;/SPAN&gt;&amp;nbsp;Lambda (as mentioned above), P(y=0), P(y!=0), and Predicted (the latter three are probably consequences of the Lambda discrepancy).&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any examples, tips, or resources you could share would be greatly appreciated. Thank you in advance for your assistance!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;Piero&lt;/P&gt;</description>
      <pubDate>Fri, 26 Apr 2024 16:13:50 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/PROC-COUNTREG-Scoring-with-Zero-Inflate-Conway-Maxwell-Poisson/m-p/926066#M46056</guid>
      <dc:creator>Psueri</dc:creator>
      <dc:date>2024-04-26T16:13:50Z</dc:date>
    </item>
  </channel>
</rss>