You want to fit a model to the training data set, and then apply that fitted model to the validation data set. This is not what you have done ... you have fit a whole new model to the validation data set.
Here is an example of how to apply the fitted model to the validation data set: http://support.sas.com/kb/39/724.html
PROC LOGISTIC models binary outcomes.
However, you also have time, which makes it more complicated.
PROC AUTOREG and PROC ARIMA are probably your starting point.
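Here is a minimal sketch of that fit-then-apply idea (it may not match the linked example exactly; the data set names train and valid are placeholders, and f_read is the fall score named in the post below):

/* fit the model on the training data only */
proc logistic data = train;
   model camp_flag = f_read;
   /* apply that fitted model to the validation data; no second model is fit */
   score data = valid out = valid_scored;
run;

The OUT= data set then holds a predicted probability for every validation-set student.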
@GreggB wrote:
My objective is to predict if a student will be flagged to attend a summer reading camp that is determined by a test score generated during end-of-year testing in May. The variable used to predict is a reading score earned in the Fall.
I call the response variable camp_flag and the fall score f_read.
My model (I’m assuming) is something like:
model camp_flag = f_read
I have 2 years of data, so I want to use one year to create the model and use the other year to test the accuracy of the model’s ability to predict camp_flag. Camp_flag is 0 or 1.
My online search is a bit overwhelming. I just need a suggestion on which procedure to learn to accomplish this.
Is the time issue because the 2 tests are several months apart or because my 2 data sets are from 2 different years?
I don't see a need for Time Series ARIMA or AUTOREG if there are only two measurements per student.
A simple logistic regression using the fall measurement to predict the camp flag (which is determined by the end-of-May test score) should work. The two different years could be used as an additional predictor variable.
In any event, I would combine both years of data, and randomly select individuals to be the training data set, and other randomly selected individuals to be the validation data set.
For some reason I thought that time would be a factor. Can students be sent to the camp more than once? Does their previous attendance affect their likelihood to attend again?
If not, I totally agree with @PaigeMiller that you should combine both years and take a random sample, BUT make sure to either include or exclude a student entirely. A single student shouldn't have records in both the training and validation data sets.
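One way to enforce that is to assign the split at the student level first and then carry it back to the records. A sketch, assuming the combined data set is called twoyears and the student identifier is named ID (both names are assumptions at this point in the thread):

/* one row per student */
proc sort data = twoyears out = ids(keep = id) nodupkey;
   by id;
run;

/* randomly assign each student, not each record, to a group */
data id_groups;
   set ids;
   if ranuni(7) <= .5 then group = 'TRAIN';
   else group = 'VALID';
run;

/* attach the group back to every record for that student */
proc sort data = twoyears;
   by id;
run;

data train valid;
   merge twoyears id_groups;
   by id;
   if group = 'TRAIN' then output train;
   else output valid;
   drop group;
run;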
They would attend only once. To be sure, I can unduplicate by Student ID.
I think I read about what you're saying - the data is divided into 2 sets using ranuni. One set is used to create the model and the other set is used for prediction?
@GreggB wrote:
They would attend only once. To be sure, I can unduplicate by Student ID.
I think I read about what you're saying - the data is divided into 2 sets using ranuni. One set is used to create the model and the other set is used for prediction?
Yes, that's one way to do it.
And make sure that the different years are a categorical predictor variable in the model.
proc logistic data = twoyears outest=estimates_2yrs;
model camp_flag = RIT;
run;
quit;
twoyears looks like so: (ID is unique; termName has 2 possible values; camp_flag is 0 or 1)
termName ID RIT camp_flag
2016-2017 001 249 0
2017-2018 002 279 1
1. You're saying my model should be camp_flag = termName RIT ?
2. I want to make sure my objective is clear: I have a 3rd data set (termName = 2019-2020) that contains RIT and I want to predict the camp_flag value so that students most likely to have a value of 0 based on their end-of-year test can be identified now and receive academic intervention. My next step?
What is RIT?
My mistake. It is the fall reading score I referred to as f_read earlier.
@GreggB wrote:
proc logistic data = twoyears outest=estimates_2yrs;
model camp_flag = RIT;
run;
quit;
twoyears looks like so: (ID is unique; termName has 2 possible values; camp_flag is 0 or 1)
termName ID RIT camp_flag
2016-2017 001 249 0
2017-2018 002 279 1
1. You're saying my model should be camp_flag = termName RIT ?
2. I want to make sure my objective is clear: I have a 3rd data set (termName = 2019-2020) that contains RIT and I want to predict the camp_flag value so that students most likely to have a value of 0 based on their end-of-year test can be identified now and receive academic intervention. My next step?
updated code:
proc logistic data = twoyears outest=estimates_2yrs;
class termname;
model camp_flag = termname rit;
run;
quit;
Since termname is not numeric I used a CLASS statement. Is this correct?
If so, I interpret this as TermName not being significant.
Analysis of Maximum Likelihood Estimates

| Parameter | | DF | Estimate | Standard Error | Wald Chi-Square | Pr > ChiSq |
|---|---|---|---|---|---|---|
| Intercept | | 1 | 18.7084 | 1.8919 | 97.7879 | <.0001 |
| TermName | Fall 2016-2017 | 1 | -0.1980 | 0.1377 | 2.0676 | 0.1505 |
| rit | | 1 | -0.1225 | 0.0113 | 118.1675 | <.0001 |
Yes, that's correct.
/* split the data randomly with 50/50 split */
data train valid;
set twoyears; /* 2 years of data combined */
if ranuni(7) <= .5 then output train; else output valid;
run;
/*compare the 2 data sets */
proc logistic data = train outest=estimates_train;
model camp_flag = rit;
run;
quit;
proc logistic data = valid outest=estimates_valid;
model camp_flag = rit;
run;
quit;
Based on what I have studied I believe this is the next step. Here is the % concordant for train and valid, respectively. Is PROC SCORE my next step, using "twoyears"? I'm not sure which portion of the output to look at to determine if I have a model that's good for prediction.
Association of Predicted Probabilities and Observed Responses (train)

| Statistic | Value | Statistic | Value |
|---|---|---|---|
| Percent Concordant | 94.3 | Somers' D | 0.892 |
| Percent Discordant | 5.1 | Gamma | 0.898 |
| Percent Tied | 0.6 | Tau-a | 0.099 |
| Pairs | 29455 | c | 0.946 |
Association of Predicted Probabilities and Observed Responses (valid)

| Statistic | Value | Statistic | Value |
|---|---|---|---|
| Percent Concordant | 89.0 | Somers' D | 0.788 |
| Percent Discordant | 10.1 | Gamma | 0.795 |
| Percent Tied | 0.9 | Tau-a | 0.063 |
| Pairs | 23648 | c | 0.894 |
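A sketch of one possible next step, following the earlier advice to fit on the training data only and then apply that single fitted model elsewhere instead of refitting. The data set name year1920 for the 2019-2020 students is an assumption, and TermName is left out here because the 2019-2020 data would carry a level the fitted model has never seen (and it did not look significant above):

/* fit the model once, on the training half only, and store it */
proc logistic data = train outmodel = camp_model;
   model camp_flag = rit;
run;

/* apply that stored model to the validation half; FITSTAT prints fit
   statistics for the scored data, i.e. performance on students the
   model was not fit to */
proc logistic inmodel = camp_model;
   score data = valid out = valid_scored fitstat;
run;

/* apply the same stored model to the 2019-2020 students; the OUT=
   data set attaches predicted probabilities to each student */
proc logistic inmodel = camp_model;
   score data = year1920 out = year1920_scored;
run;

The middle step is what the earlier advice about not refitting on the validation data was getting at, and the last step lets you rank the 2019-2020 students by their predicted probability of camp_flag so they can be targeted for intervention.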