04-11-2024
Laser_Taco_
Fluorite | Level 6
Member since
11-15-2023
- 8 Posts
- 4 Likes Given
- 2 Solutions
- 0 Likes Received
Latest posts by Laser_Taco_
Subject | Views | Posted
Re: Confidence interval for the regression fit | 885 | 11-22-2023 04:35 PM
Re: Interpreting cubic vs. quadratic model fit and p values. | 2157 | 11-22-2023 03:41 PM
Re: Interpreting cubic vs. quadratic model fit and p values. | 2176 | 11-22-2023 02:45 PM
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 3103 | 11-20-2023 04:18 PM
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 3229 | 11-17-2023 11:03 AM
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 3271 | 11-16-2023 07:31 PM
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 3327 | 11-15-2023 04:35 PM
Proc mixed, defining data structure for desired comparison (Random effect and subject) | 3382 | 11-15-2023 12:01 PM
Activity Feed for Laser_Taco_
- Posted Re: Confidence interval for the regression fit on Statistical Procedures. 11-22-2023 04:35 PM
- Liked Re: Interpreting cubic vs. quadratic model fit and p values. for mthorne. 11-22-2023 04:13 PM
- Posted Re: Interpreting cubic vs. quadratic model fit and p values. on Statistical Procedures. 11-22-2023 03:41 PM
- Posted Re: Interpreting cubic vs. quadratic model fit and p values. on Statistical Procedures. 11-22-2023 02:45 PM
- Liked Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) for SteveDenham. 11-21-2023 12:42 PM
- Posted Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) on Statistical Procedures. 11-20-2023 04:18 PM
- Liked Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) for SteveDenham. 11-17-2023 06:57 PM
- Posted Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) on Statistical Procedures. 11-17-2023 11:03 AM
- Posted Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) on Statistical Procedures. 11-16-2023 07:31 PM
- Liked Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) for jiltao. 11-15-2023 04:46 PM
- Posted Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) on Statistical Procedures. 11-15-2023 04:35 PM
- Posted Proc mixed, defining data structure for desired comparison (Random effect and subject) on Statistical Procedures. 11-15-2023 12:01 PM
Posts I Liked
Subject | Likes | Author
Re: Interpreting cubic vs. quadratic model fit and p values. | 1 | mthorne
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 1 | SteveDenham
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 1 | SteveDenham
Re: Proc mixed, defining data structure for desired comparison (Random effect and subject) | 1 | jiltao
11-22-2023 04:35 PM
Hi there @AnaG_,

Here is a link to the formulas in the documentation, and here is a link to the formulas themselves. They follow the standard calculation for a confidence interval found in statistics textbooks.

Regards,
Kyle
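The "standard calculation" Kyle refers to can be sketched numerically. This is a hedged illustration of the textbook confidence interval for the mean response in simple linear regression, not the SAS documentation's exact formulas; the data are hypothetical and the t critical value is hard-coded from a table.

```python
import math

# Textbook CI for the mean response (the "fit") at x0 in simple linear
# regression: yhat(x0) +/- t * s * sqrt(1/n + (x0 - xbar)^2 / Sxx).
# Data below are hypothetical, for illustration only.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

# Residual standard error with n - 2 degrees of freedom.
sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
s = math.sqrt(sse / (n - 2))

x0 = 3.0
yhat = b0 + b1 * x0
t_crit = 3.182  # t_{0.975, df = 3}, taken from a t table
half_width = t_crit * s * math.sqrt(1 / n + (x0 - xbar) ** 2 / sxx)
print(yhat - half_width, yhat + half_width)
```

The same half-width formula, evaluated at every design point, is what produces the familiar hourglass-shaped confidence band around a fitted line.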
11-22-2023 03:41 PM
Hi,

My response will be somewhat general, as I would need a bit more information to give a more informative answer.

For the purpose of estimating 50% of the maximum response, fitting the highest-order polynomial that is significant should provide you with the best estimate for your data. Emphasis on _your_ data: an important concept is that you want to fit the simplest model that accounts for most of the variance. Increasing the order of the polynomial can lead to overfitting and limit the model's ability to generalize to other similar data.

As for the change in the significance of the linear term: it aligns with the tighter fit to your data once the cubic term is included, which leaves the linear coefficient with little importance. If you want to describe *only* your data, you are fine to use the cubic model. If you want to extrapolate to other data sets (making inferences, predictions, etc.), you'd be better off with the quadratic model.

Regards,
Kyle
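The "50% of the maximum response" estimate Kyle mentions can be made concrete for a fitted quadratic. A minimal sketch, assuming hypothetical coefficients (not values from this thread): the maximum of y = b0 + b1*x + b2*x^2 with b2 < 0 sits at the vertex, and the half-maximum doses fall out of the quadratic formula.

```python
import math

# Hypothetical fitted quadratic with a downward-opening parabola (b2 < 0).
b0, b1, b2 = 1.0, 4.0, -0.5

x_max = -b1 / (2 * b2)                     # vertex: location of the maximum
y_max = b0 + b1 * x_max + b2 * x_max ** 2  # maximum response

# Doses giving 50% of the maximum solve b2*x^2 + b1*x + (b0 - y_max/2) = 0.
c = b0 - y_max / 2
disc = b1 ** 2 - 4 * b2 * c
roots = [(-b1 + sign * math.sqrt(disc)) / (2 * b2) for sign in (1, -1)]
print(x_max, y_max, sorted(roots))
```

Note there are two half-maximum doses, one on each side of the peak; for a dose-response application you would typically report the one on the rising limb.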
11-22-2023 02:45 PM
Hello, What exactly is the purpose of the model? Are you looking to describe the fit or use the model for prediction?
11-20-2023 04:18 PM
Hi @SteveDenham,

I'm not sure I understand what you mean by "converted to a ratio or some other type of value, so that a single value is obtained per pen at each preference measurement (time), you should be able to improve the analysis." I understand that having multiple responses from the same pen is my issue here. As for converting the values to a ratio, each value per diet/pen is already a ratio: Diet__ : Total Consumed.

Regards,
11-17-2023 11:03 AM
Hello @SteveDenham,

Thank you for your response! Yes, you are correct: there are 3 pens/block (I used the word "Rep" in my initial post) with 8 blocks in total. The reason I have two values for each pen is the different consumption from the two feeders in each pen. If I were to use only 1 value from the pen, I am unsure how I would differentiate the consumption between the two feeders within the pen (800 vs. 100, etc.).

To your point on proportions, I also considered this and agree, especially given a handful of cases where the consumptions for the feeders within a pen are 0 and 1, respectively. Admittedly, I am a bit unfamiliar with GLIMMIX and GEE, but should the code look something like this?

proc glimmix data=Exp200;
  class Pen Trt Block Feed;
  model y = Feed / ddfm=KR;
  random Block / subject=Pen;  /* unsure if subject=Pen is necessary */
  lsmeans Feed / pdiff lines adjust=tukey;
run;

As for the failure to converge, I was able to get the model to converge (with no errors) by removing the NOBOUND option, but the covariance parameter estimate for Block with subject=Pen was zero, which is odd.

Regards,
Kyle
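For the pens where one feeder's proportion is exactly 0 or 1, a common workaround (not something proposed in this thread, just an illustration) is the empirical logit computed from the underlying counts, which stays finite at both extremes where the plain logit log(p/(1-p)) blows up. A sketch with hypothetical counts:

```python
import math

# The plain logit is infinite at p = 0 or p = 1.  The empirical logit,
# log((y + 0.5) / (n - y + 0.5)) computed from counts, is finite everywhere.
def empirical_logit(eaten, total):
    return math.log((eaten + 0.5) / (total - eaten + 0.5))

# (amount eaten, total offered) per pen -- hypothetical numbers.
pens = [(0, 20), (10, 20), (20, 20)]
transformed = [empirical_logit(y, n) for y, n in pens]
print(transformed)
```

The 0.5 continuity correction is a conventional choice; fitting the counts directly with a binomial or beta distribution in a generalized linear mixed model is the alternative SteveDenham's GLIMMIX suggestion points toward.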
11-16-2023 07:31 PM
Thank you again for the response. How might using only one of the observations impact the interpretation or the results?

Regards,
11-15-2023 04:35 PM
Hi Jill,

Thank you for the response. I also think that is the issue: since these values are percentages that add to 100% when combined, they aren't independent. Any idea or suggestion on how I can rearrange the analysis?

Regards,
Kyle
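The dependence described here can be checked directly: within a pen the two feeder percentages sum to 100, so one value determines the other and, across pens, the pair is perfectly negatively correlated. A small sketch using the pref1 values of the first four example pens from the original post:

```python
# Each tuple is (feeder A %, feeder B %) for one pen; pairs sum to 100,
# so the pen really contributes one piece of information, not two.
prefs = [(68.75, 31.25), (79.7101, 20.2899), (100.0, 0.0), (92.5926, 7.4074)]
assert all(abs(a + b - 100.0) < 1e-6 for a, b in prefs)

# Pearson correlation across pens makes the linear dependence explicit.
n = len(prefs)
ma = sum(a for a, _ in prefs) / n
mb = sum(b for _, b in prefs) / n
cov = sum((a - ma) * (b - mb) for a, b in prefs) / n
va = sum((a - ma) ** 2 for a, _ in prefs) / n
vb = sum((b - mb) ** 2 for _, b in prefs) / n
corr = cov / (va * vb) ** 0.5
print(round(corr, 6))
```

This is why modeling both percentages per pen as separate responses violates the independence assumption: the second response is just 100 minus the first.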
11-15-2023 12:01 PM
Hello,

Experiment details: 24 pens, 3 feed types (800, 400, 100), 8 replicates, RCBD.

The purpose is to evaluate preference for feed type (800, 400, 100). To make the comparisons, three "treatments" were created (A = 800 vs 100, B = 800 vs 400, C = 400 vs 100); the A, B, C treatments were used to allocate 2 of the 3 feeds to 1 pen in an RCBD. For 5 days, a known amount of feed was added to each feeder/pen, and after 24 h the feed was weighed back to determine the feed disappearance. Preference was defined as ((disappearance of the given feed (800, 400, or 100)) / total feed disappearance (combined total of the 2 feeders/pen)) * 100, giving the proportion of each feed per pen per day.

In the proposal (as provided to me...), the statistical analysis is supposed to use "...preference test as the fixed effect and replicate as the random effect. LSmean and means are to be separated using pdiff with Tukey's adjustment with pen as the EU." IMHO this is somewhat lacking and not very descriptive.

CODE:

data Exp200;
  input PEN DIET Treatment Rep Feed pref1 pref2 pref3 pref4 pref5;
  datalines;
1 1 3 1 800 68.7500 57.6471 67.8571 57.9545 78.9157
1 3 3 1 100 31.2500 42.3529 32.1429 42.0455 21.0843
2 1 1 1 800 79.7101 44.7368 44.4444 41.3043 48.3333
2 2 1 1 400 20.2899 55.2632 55.5556 58.6957 51.6667
3 2 2 1 400 100.0000 67.5676 85.1852 80.5825 56.0847
3 3 2 1 100 0.0000 32.4324 14.8148 19.4175 43.9153
4 1 1 2 800 92.5926 57.7465 65.5172 73.9130 59.0278
4 2 1 2 400 7.4074 42.2535 34.4828 26.0870 40.9722
....
;
run;

proc print; run;

%macro model_loop;
  %let yvar1 = pref1;
  %let yvar2 = pref2;
  %let yvar3 = pref3;
  %let yvar4 = pref4;
  %let yvar5 = pref5;
  %do i = 1 %to 5;
    proc mixed data=Exp200 NOBOUND ASYCOV method=REML;
      title "&&yvar&i";
      class Pen Treatment Rep Feed;
      model &&yvar&i = Feed / ddfm=KenwardRoger;
      random Rep / subject=Pen;
      lsmeans Feed / pdiff adjust=tukey;
      store out=work.glm&&yvar&i;
    run;
    proc plm restore=work.glm&&yvar&i;
      title "&&yvar&i PLM";
      lsmeans Feed / pdiff lines adjust=tukey;
    run;
  %end;
%mend model_loop;
%model_loop;
quit;

The issue I'm having is a failure to converge. I have a feeling that it has to do with the specification of the subject. I guess I don't have a super specific question; I just don't feel like this is the correct way to analyze the data and am looking for advice.

Regards,
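The preference definition above reduces to a one-line calculation. A sketch with hypothetical disappearance weights (the 1.10 kg and 0.50 kg figures are made up; they happen to reproduce pen 1's pref1 values from the example data):

```python
# Preference as defined in the post: disappearance of one feed divided by
# the combined disappearance of both feeders in the pen, times 100.
def preference(disappearance, total_disappearance):
    return disappearance / total_disappearance * 100.0

# One pen on treatment A (800 vs 100): offered minus weighed-back feed.
gone_800 = 1.10   # kg of the 800 feed that disappeared (hypothetical)
gone_100 = 0.50   # kg of the 100 feed that disappeared (hypothetical)
total = gone_800 + gone_100

pref_800 = preference(gone_800, total)
pref_100 = preference(gone_100, total)
print(pref_800, pref_100)  # the two percentages sum to 100 by construction
```

Because the two percentages per pen sum to 100 by construction, only one of them carries information, which is the source of the dependence problem discussed later in the thread.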