<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>lvm Tracker</title>
    <link>https://communities.sas.com/kntur85557/tracker</link>
    <description>lvm Tracker</description>
    <pubDate>Tue, 12 May 2026 05:56:36 GMT</pubDate>
    <dc:date>2026-05-12T05:56:36Z</dc:date>
    <item>
      <title>Re: mianalyze of lsmestimate</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/mianalyze-of-lsmestimate/m-p/946007#M47284</link>
      <description>&lt;P&gt;Very important: the datafile input to the macro can only have ONE variable (one column), with the test statistics. You need to create a datafile by keeping only this variable.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Oct 2024 17:59:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/mianalyze-of-lsmestimate/m-p/946007#M47284</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-10-02T17:59:41Z</dc:date>
    </item>
    <item>
      <title>Re: mianalyze of lsmestimate</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/mianalyze-of-lsmestimate/m-p/946005#M47283</link>
      <description>&lt;P&gt;If I understand correctly, you have obtained the test statistic for each imputation in addition to the estimate and SE. MIANALYZE will correctly combine the estimates (a contrast for each imputation) to give you a global estimate, but you want a global test for significance. This can't be done with a PROC, but Paul Allison wrote a nice macro to do this based on the work by Li et al (Li, K. H., Meng, X. L., Raghunathan, T. E., and Rubin D. B. 1991. Significance levels from repeated P-values with multiply-imputed data. Statitsica Sin. 1:65-92.) and&amp;nbsp;Schafer (1997. Analysis of Incomplete Multivariate Data. London - UK: Chapman and Hall. 444pp.).&amp;nbsp; The macro combines chi-square test statistics from each imputation to estimate a global F statistic for significance. If the test statistic for your problem is a student t for each imputation, then simply square the t values to obtain chi-square statistics for each imputation (the square of a t is a F statistic, but with 1 numerical df, chi-square and F are the same). If your test statistic for each impuation is an F statistic, then multiply the F by the numerical df to obtain the chi-square. (This all assume that you have reasonably large denominator df. ) The macro works by having a file with only one variable, the chi-square test statistic for each impuation. It is run with:&lt;/P&gt;
&lt;P&gt;%combchi(df=1, data=DATAFILE);&lt;/P&gt;
&lt;P&gt;One does not use the denominator df.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Macro:&lt;/P&gt;
&lt;PRE&gt;%macro combchi(df=,chi=,data=);
*---From Paul Allison (pooling chi-squared tests for significance
	across all imputation results);
proc iml;
  DF=&amp;amp;df;
  %if &amp;amp;chi ^= and &amp;amp;data ^= %then %do;
    print "Error: Can't specify both CHI= and DATA=";
	abort;
  %end;
  %if &amp;amp;chi ^= %then %do; g2={&amp;amp;chi}; %end;
  %if &amp;amp;data ^= %then %do; 
     use &amp;amp;data;
     read all into g1;
     g2=g1`; 
  %end;
  m=ncol(g2);
  g=sqrt(g2);
  mg2=sum(g2)/m;
  r=(1+1/m)*(ssq(g)-(sum(g)**2)/m)/(m-1);
  F=(mg2/df - r*(m+1)/(m-1))/(1+r);
  DDF=(m-1)*(1+1/r)**2/df**(3/m);
  P=1-probf(f,df,ddf);
  print f df ddf;
  print p;
run;
quit;
%mend combchi;&lt;/PRE&gt;
&lt;P&gt;The example call here is for 1 numerator df for each test.&amp;nbsp; The macro can also be run without a datafile, by inputting the test values directly (not discussed here).&lt;/P&gt;
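The pooling arithmetic in the macro (from Li et al. 1991) is simple enough to sketch outside SAS as well. The following Python function is a minimal, hypothetical re-implementation of the same formulas, not part of Allison's macro; it stops at the F statistic and df, since the p-value step needs an F CDF (e.g., from scipy), and it assumes the chi-squares are not all identical (so r &gt; 0).

```python
import math
from statistics import mean, variance

def combchi(chi2_values, df):
    """Pool chi-square statistics from m imputations (Li et al. 1991).

    chi2_values : one chi-square statistic per imputation
    df          : numerator df of each test
    Returns (F, num_df, den_df); the p-value is 1 - CDF_F(F, num_df, den_df).
    """
    m = len(chi2_values)
    g = [math.sqrt(x) for x in chi2_values]        # sqrt of each chi-square
    mg2 = mean(chi2_values)                        # average chi-square
    r = (1 + 1/m) * variance(g)                    # between-imputation variation
    F = (mg2/df - r*(m + 1)/(m - 1)) / (1 + r)     # pooled F statistic
    ddf = (m - 1) * (1 + 1/r)**2 / df**(3/m)       # denominator df
    return F, df, ddf
```

Note that `statistics.variance` is the sample variance (n-1 denominator), matching `(ssq(g)-(sum(g)**2)/m)/(m-1)` in the IML code.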
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 02 Oct 2024 17:51:45 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/mianalyze-of-lsmestimate/m-p/946005#M47283</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-10-02T17:51:45Z</dc:date>
    </item>
    <item>
      <title>Re: TEMPLATE: how to combine the equivalent of LAYOUT LATTICE and LAYOUT DATAPANEL</title>
      <link>https://communities.sas.com/t5/Graphics-Programming/TEMPLATE-how-to-combine-the-equivalent-of-LAYOUT-LATTICE-and/m-p/929299#M24652</link>
      <description>&lt;P&gt;Thanks for the very helpful workaround!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 May 2024 19:24:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Graphics-Programming/TEMPLATE-how-to-combine-the-equivalent-of-LAYOUT-LATTICE-and/m-p/929299#M24652</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-05-22T19:24:43Z</dc:date>
    </item>
    <item>
      <title>TEMPLATE: how to combine the equivalent of LAYOUT LATTICE and LAYOUT DATAPANEL</title>
      <link>https://communities.sas.com/t5/Graphics-Programming/TEMPLATE-how-to-combine-the-equivalent-of-LAYOUT-LATTICE-and/m-p/928959#M24642</link>
      <description>&lt;P&gt;I have an interesting challenge with TEMPLATE.&amp;nbsp; I am trying to create a lattice where each plot within the lattice is a panel of graphs. For a single panel of graphs, I can easily do this with SGPANEL, but I need to make a single figure with several different panels. The following TEMPLATE code works just fine to create a 1x4 panel of graphs (instead of using SGPANEL):&lt;/P&gt;
&lt;PRE&gt;proc template;
define statgraph test12;
begingraph;
  layout datapanel classvars=(type _trt) / rows=1 columns=4;
    layout prototype;
      bandplot x=year limitupper=uclmarg limitlower=lclmarg;
      seriesplot x=year y=y / lineattrs=graphFit;
      scatterplot x=year y=y / markerattrs=(size=8px color=blue symbol=circlefilled);
    endlayout;
  endlayout;
endgraph;
end;
run;&lt;/PRE&gt;
&lt;DIV&gt;I would like to wrap the equivalent of a "LAYOUT LATTICE" around this to have several rows, where each row is a panel of four (or whatever) graphs (with a different response variable for each row). I realize that LAYOUT LATTICE does not allow for DATAPANEL, so I am looking for a workaround. For instance, I would like to use something like:&lt;/DIV&gt;
&lt;PRE&gt;proc template;
define statgraph test12;
begingraph;
  layout lattice / columns=1 rows=3;                            /* added wrapper */
    layout datapanel classvars=(type _trt) / rows=1 columns=4;
      layout prototype;
        bandplot x=year limitupper=uclmarg limitlower=lclmarg;
        seriesplot x=year y=p / lineattrs=graphFit;
        scatterplot x=year y=y / markerattrs=(size=8px color=blue symbol=circlefilled);
      endlayout;
    endlayout;
    layout datapanel classvars=(type _trt) / rows=1 columns=4;  /* second row */
      layout prototype;
        bandplot x=year limitupper=uclmarg2 limitlower=lclmarg2;
        seriesplot x=year y=p2 / lineattrs=graphFit;
        scatterplot x=year y=y2 / markerattrs=(size=8px color=blue symbol=circlefilled);
      endlayout;
    endlayout;
    ....
  endlayout;
endgraph;
end;
run;&lt;/PRE&gt;
&lt;DIV&gt;&lt;BR /&gt;Any recommendations on how to do this? Thanks.&amp;nbsp;&lt;/DIV&gt;
&lt;DIV&gt;LVM&lt;/DIV&gt;</description>
      <pubDate>Sun, 19 May 2024 21:00:58 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Graphics-Programming/TEMPLATE-how-to-combine-the-equivalent-of-LAYOUT-LATTICE-and/m-p/928959#M24642</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-05-19T21:00:58Z</dc:date>
    </item>
    <item>
      <title>Re: SAS code for proc glimmix data - interaction analysis</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/SAS-code-for-proc-glimmix-data-interaction-analysis/m-p/919815#M45681</link>
      <description>&lt;P&gt;Since you are considering LOC a fixed effect (by putting it in the MODEL statement), you would not also include LOC as random effect (using the RANDOM statement).&amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If your replicates are actually blocks (which I am calling rep here), then you would want to use&lt;/P&gt;
&lt;P&gt;RANDOM rep(LOC);&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 11 Mar 2024 17:39:54 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/SAS-code-for-proc-glimmix-data-interaction-analysis/m-p/919815#M45681</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-03-11T17:39:54Z</dc:date>
    </item>
    <item>
      <title>Re: Mixture of chi square with NLMixed in sas</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Mixture-of-chi-square-with-NLMixed-in-sas/m-p/911425#M45236</link>
      <description>&lt;P&gt;I think you are asking about testing Ho: Variance = 0 vs Ha: Variance &amp;gt;0. When you use the COVTEST statement in GLIMMIX, the procedure automatically uses a mixture of chi-square statistics when appropriate (and it tells you in the results).&amp;nbsp; The mixture is typically needed when the variance parameter is on the boundary (e.g., 0).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://support.sas.com/kb/40/724.html" target="_blank" rel="noopener"&gt;https://support.sas.com/kb/40/724.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;to learn about COVTEST (this is different from the COVTEST option in MIXED).&amp;nbsp; The online documentation for GLIMMIX explains COVTEST in more detail, with simpler examples. This is a likelihood ratio test, where the ratio is compared with a chi-square statistic (mixture or otherwise, depending on the situation).&amp;nbsp; This is also explained in:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://support.sas.com/resources/papers/proceedings/proceedings/forum2007/177-2007.pdf" target="_blank" rel="noopener"&gt;https://support.sas.com/resources/papers/proceedings/proceedings/forum2007/177-2007.pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 12 Jan 2024 17:56:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Mixture-of-chi-square-with-NLMixed-in-sas/m-p/911425#M45236</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-01-12T17:56:14Z</dc:date>
    </item>
    <item>
      <title>Re: Stepwise Model Selection for longitudinal binary data using PROc GENMOD</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Stepwise-Model-Selection-for-longitudinal-binary-data-using-PROc/m-p/911421#M359403</link>
      <description>&lt;P&gt;You should check out this article from the SAS Global Forum:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://support.sas.com/resources/papers/proceedings14/1822-2014.pdf" target="_blank" rel="noopener"&gt;https://support.sas.com/resources/papers/proceedings14/1822-2014.pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;It deals with stepwise selection using GLIMMIX. In GLIMMIX you can basically fit most of the models of GENMOD, and many more. But the syntax of GLIMMIX is different for repeated measures/longitudinal data (similar, but different enough to be confusing).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But I must say: be very cautious when doing stepwise selection (using these traditional approaches). There are a lot of statistical reasons why this may not be a good idea. The newer methods in GLMSELECT, etc, are much better (but as you noticed, no way to deal with correlated data).&lt;/P&gt;</description>
      <pubDate>Fri, 12 Jan 2024 17:32:49 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Stepwise-Model-Selection-for-longitudinal-binary-data-using-PROc/m-p/911421#M359403</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2024-01-12T17:32:49Z</dc:date>
    </item>
    <item>
      <title>Re: Calculating weight for site effect based on standard error</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Calculating-weight-for-site-effect-based-on-standard-error/m-p/903042#M44788</link>
      <description>&lt;P&gt;It is not clear what you are trying to do here. It appears to me that you want to do a meta-analysis. That is, you have results from each site (means, SEs, ...), and now you want to combine to determine an overall mean and SE (effect size mean for the population of sites). If that is what you are doing, MIXED or GLIMMIX can certainly be used, where one overrides the residual with a weight term. There are tricks to the coding that are not obvious. But I don't know if that is what you are trying to do. It is possible that you simply want a hierarchical linear mixed model, but with separate residual for each site, all in one analysis (no meta-analysis). That also can be done, where you get a different residual for each site (sort of like using weights).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I can't give you code until you more fully describe what you are trying to do. Maybe with a toy example.&lt;/P&gt;</description>
      <pubDate>Tue, 14 Nov 2023 19:31:10 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Calculating-weight-for-site-effect-based-on-standard-error/m-p/903042#M44788</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2023-11-14T19:31:10Z</dc:date>
    </item>
    <item>
      <title>Re: Estimating treatment effects, 2 Group Pre-Post Matched Analysis</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Estimating-treatment-effects-2-Group-Pre-Post-Matched-Analysis/m-p/880071#M43540</link>
      <description>&lt;P&gt;As indicated by StatDave, you need to use NLMIXED. THis is a very powerful procedure, but it requires programming. In addition to the link give, I like the following for very good instruction:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://stats.oarc.ucla.edu/sas/faq/how-do-i-run-a-random-effect-zero-inflated-poisson-model-using-nlmixed/" target="_blank"&gt;https://stats.oarc.ucla.edu/sas/faq/how-do-i-run-a-random-effect-zero-inflated-poisson-model-using-nlmixed/&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sun, 11 Jun 2023 19:24:46 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Estimating-treatment-effects-2-Group-Pre-Post-Matched-Analysis/m-p/880071#M43540</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2023-06-11T19:24:46Z</dc:date>
    </item>
    <item>
      <title>Re: Proc Mix insufficient memory issue</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Proc-Mix-insufficient-memory-issue/m-p/860252#M42510</link>
      <description>&lt;P&gt;The specific details for PROC MIXED are here:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.3/statug/statug_mixed_details58.htm#statug_mixed017845" target="_blank"&gt;https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.3/statug/statug_mixed_details58.htm#statug_mixed017845&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Memory determination depends on a lot of things. For example, how one specifies the model can make a big difference. A statement such as&lt;/P&gt;
&lt;P&gt;random A A*B;&lt;/P&gt;
&lt;P&gt;can use more memory (and be slower to fit) than using:&lt;/P&gt;
&lt;P&gt;random int B / sub=A;&lt;/P&gt;
&lt;P&gt;because the latter is processed by subjects.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 22 Feb 2023 18:36:10 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Proc-Mix-insufficient-memory-issue/m-p/860252#M42510</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2023-02-22T18:36:10Z</dc:date>
    </item>
    <item>
      <title>Re: Error code in SAS when choosing unstructured working correlation within Proc Genmod</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Error-code-in-SAS-when-choosing-unstructured-working-correlation/m-p/842211#M41747</link>
      <description>&lt;P&gt;I use MIXED and GLIMMIX much more than I do GENMOD. But.... I think you should make VID a class variable (add it to the CLASS statement). Also, if you have more than one observation for each combination of year and VID, you will have a problem.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Nov 2022 21:08:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Error-code-in-SAS-when-choosing-unstructured-working-correlation/m-p/842211#M41747</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-11-02T21:08:41Z</dc:date>
    </item>
    <item>
      <title>Re: PROC GLIMMIX Giving Incorrect Means</title>
      <link>https://communities.sas.com/t5/SAS-Programming/PROC-GLIMMIX-Giving-Incorrect-Means/m-p/842210#M333026</link>
      <description>&lt;P&gt;As indicated elsewhere, the LSMEAN is not a simple arithmetic average of the observations. It is an estimate (prediction) based on the model. The arithmetic means will only agree with LSMEANS under very specific (limited) situations. Furthermore, the LSMEANS for a non-normal distribution will definitely not be equivalent to an arithmetic average of the raw data points.&lt;/P&gt;</description>
      <pubDate>Wed, 02 Nov 2022 21:02:04 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/PROC-GLIMMIX-Giving-Incorrect-Means/m-p/842210#M333026</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-11-02T21:02:04Z</dc:date>
    </item>
    <item>
      <title>Re: Normality Assumption for Mixed Model Analysis</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Normality-Assumption-for-Mixed-Model-Analysis/m-p/820106#M40561</link>
      <description>&lt;P&gt;Your choice of an unstructured covariance matrix (UN) could be problematic if you have many times. The number of parameters to estimate for variances and covariances grows geometrically with time. I recommend you check out the user's guide for examples of other choices for type=.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The second example in the MIXED user's guide is a good place to start, if you haven't already read it. Or read:&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.jstor.org/stable/1400366?seq=1" target="_blank"&gt;https://www.jstor.org/stable/1400366?seq=1&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 17:37:09 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Normality-Assumption-for-Mixed-Model-Analysis/m-p/820106#M40561</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T17:37:09Z</dc:date>
    </item>
    <item>
      <title>Re: Normality Assumption for Mixed Model Analysis</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/Normality-Assumption-for-Mixed-Model-Analysis/m-p/820105#M40560</link>
      <description>&lt;P&gt;Best approach is to look at the studentized residuals in graphic form. Add PLOTS=studentpanel on the procedure statement. See the user guide about how to interpret. Normality is not overly important (within reason), but constant variability is important.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 17:29:49 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/Normality-Assumption-for-Mixed-Model-Analysis/m-p/820105#M40560</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T17:29:49Z</dc:date>
    </item>
    <item>
      <title>Re: satterthwaitie adjustment for linear combination of variance components, possibly correlated</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820102#M40559</link>
      <description>&lt;P&gt;If you have a recent version of SAS/STAT, you could directly use PROC BGLIMM for a full Bayesian analysis.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 17:25:16 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820102#M40559</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T17:25:16Z</dc:date>
    </item>
    <item>
      <title>Re: satterthwaitie adjustment for linear combination of variance components, possibly correlated</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820101#M40558</link>
      <description>&lt;P&gt;The text indicated df calculated using the Satterthwaite method. But I have only scanned this.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 17:23:24 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820101#M40558</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T17:23:24Z</dc:date>
    </item>
    <item>
      <title>Re: satterthwaitie adjustment for linear combination of variance components, possibly correlated</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820065#M40552</link>
      <description>&lt;P&gt;And I just remembered this article that deals with prediction intervals for mixed models, also with extensive SAS code in the online supplement. I know that address Satterthwaite.&lt;/P&gt;
&lt;P&gt;&lt;A href="https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.8386" target="_blank"&gt;https://onlinelibrary.wiley.com/doi/epdf/10.1002/sim.8386&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 16:28:26 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820065#M40552</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T16:28:26Z</dc:date>
    </item>
    <item>
      <title>Re: satterthwaitie adjustment for linear combination of variance components, possibly correlated</title>
      <link>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820058#M40551</link>
      <description>&lt;P&gt;I see you are trying to calculate prediction intervals rather than confidence intervals. These are not straightforward for mixed models in SAS. I usually do this "brute force", but I have not dealt with the complexity of Satterthwaite or Kenward-Roger df adjustments (although I am a huge fan of KR adjustments, in general). These calculations can be quite challenging, but I think there may be some IML programs out there to do this (but maybe not for your problem).&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Your question made me think of the following article that may be of help. I have not studied it -- it is sitting on my desk to study at some point. It deals with prediction intervals for mixed models, and the online supplement has extensive SAS code.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://www.tandfonline.com/doi/full/10.1080/19466315.2020.1776762" target="_blank"&gt;https://www.tandfonline.com/doi/full/10.1080/19466315.2020.1776762&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2022 16:14:10 GMT</pubDate>
      <guid>https://communities.sas.com/t5/Statistical-Procedures/satterthwaitie-adjustment-for-linear-combination-of-variance/m-p/820058#M40551</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2022-06-23T16:14:10Z</dc:date>
    </item>
    <item>
      <title>Re: Obtaining minimum variance quadratic unbiased estimates as starting values for the covariance fa</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Obtaining-minimum-variance-quadratic-unbiased-estimates-as/m-p/746289#M234098</link>
      <description>These are the initial estimates (guesses) of your variances, in the order in you random statements. You need to put something in, perhaps based on results from some other experiment.  You can have sas try a range of guesses, and it will use the one that gives the minimum -2LL as the starting value for the optimization. Example&lt;BR /&gt;parms (.2 to 2.5 by .2) (1 to 5 by 1) (.5 to 5 by .5);&lt;BR /&gt;You would need ballpark estimates to start.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Jun 2021 15:55:56 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Obtaining-minimum-variance-quadratic-unbiased-estimates-as/m-p/746289#M234098</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2021-06-07T15:55:56Z</dc:date>
    </item>
    <item>
      <title>Re: Obtaining minimum variance quadratic unbiased estimates as starting values for the covariance fa</title>
      <link>https://communities.sas.com/t5/SAS-Programming/Obtaining-minimum-variance-quadratic-unbiased-estimates-as/m-p/745788#M233850</link>
      <description>&lt;P&gt;This is because you have three variance components in your model. When you use the PARMS statement, you must give a starting value (or a held fixed value) for each variance (and/or covariance) component. Something like&amp;nbsp;&lt;/P&gt;
&lt;P&gt;parms (2) (2) (.1);&lt;/P&gt;
&lt;P&gt;Note: in the old post, the last parameter (the second in that case) was being held fixed (not estimated). This is because you used the "/ hold=2" option, where the "2" refers to the second variance component. If you now want to estimate all three variances, don't include that option.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;With the error you got, you might have an overparameterized model. That is, possibly one of the variance components is actually 0 in your random-coefficients model.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Jun 2021 13:04:38 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Programming/Obtaining-minimum-variance-quadratic-unbiased-estimates-as/m-p/745788#M233850</guid>
      <dc:creator>lvm</dc:creator>
      <dc:date>2021-06-04T13:04:38Z</dc:date>
    </item>
  </channel>
</rss>

