JudithRourke
SAS Moderator
Member since 06-28-2011
- 24 Posts
- 0 Likes Given
- 0 Solutions
- 0 Likes Received
Activity Feed for JudithRourke
- Posted Calculating Elasticities in an Almost Ideal Demand System on SAS Code Examples. 08-07-2023 11:19 AM
- Posted Calculating Price Elasticity of Demand on SAS Code Examples. 08-07-2023 11:19 AM
- Posted Chow Test for Structural Breaks on SAS Code Examples. 08-07-2023 11:19 AM
- Posted Calculating Elasticities from a Translog Cost Function on SAS Code Examples. 08-07-2023 11:18 AM
- Posted Calculating Economic Indices on SAS Code Examples. 08-07-2023 11:17 AM
- Posted Bootstrapping Correct Critical Values in Tests for Structural Change on SAS Code Examples. 08-07-2023 11:15 AM
- Posted Analysis of Unobserved Component Models Using PROC UCM on SAS Code Examples. 08-07-2023 11:14 AM
- Posted Bivariate Granger Causality Test on SAS Code Examples. 08-07-2023 11:10 AM
- Posted Simulation: The Critical Technology in Digital Twin Development on Research and Science from SAS. 03-01-2023 02:29 PM
- Posted Innovative contributions to NeurIPS 2022 on Research and Science from SAS. 02-10-2023 12:00 PM
- Posted Using PROC DEEPCAUSAL to optimize revenue through policy evaluation on Research and Science from SAS. 11-30-2022 12:00 PM
- Posted Powering disruption: A DevOps journey at SAS on Research and Science from SAS. 11-28-2022 11:14 AM
- Posted Performance Improvements in PROC CSSM for Scalable State Space Modeling on Research and Science from SAS. 09-08-2022 10:29 AM
- Posted SAS Develops Powerful Tool For CROs and Pharmaceutical Companies on Research and Science from SAS. 09-01-2022 01:59 PM
- Posted Inventors in SAS Analytics R&D on Research and Science from SAS. 08-18-2022 10:54 AM
- Posted The SAS Batting Lab: The Model Powering the Analysis on Research and Science from SAS. 07-07-2022 11:28 AM
- Posted Nobel Prize Winners and SAS Causal Econometrics Software on Research and Science from SAS. 01-14-2022 03:26 PM
- Posted Using Network Analysis and Machine Learning to Identify Virus Spread Trends in COVID-19 on Research and Science from SAS. 01-14-2022 11:54 AM
- Posted Towards Optimized Actions in Critical Situations of Soccer Games with Deep Reinforcement Learning on Research and Science from SAS. 01-14-2022 11:15 AM
- Posted Battling Deforestation Using Crowd-Driven AI on Research and Science from SAS. 01-14-2022 11:04 AM
My Library Contributions
08-07-2023 11:19 AM · 2 Likes
Economists are often interested in price and income elasticities. Price elasticity is defined as the percentage change in quantity demanded for some good with respect to a one percent change in the price of the good (own price elasticity) or of another good (cross-price elasticity). Mathematically,
\varepsilon_{ij} = \frac{\partial q_i}{\partial p_j}\,\frac{p_j}{q_i}

where ε_ij is the cross-price elasticity for i ≠ j and the own-price elasticity for i = j, p_j is the price of the jth good, and q_i is the quantity demanded for the ith good. A price elasticity greater than 1 in absolute value is called price elastic, and a price elasticity smaller than 1 in absolute value is called price inelastic. A given percentage increase in the price of an elastic good reduces the quantity demanded by a higher percentage than the same increase does for an inelastic good.
Income elasticity is defined as the percentage change in quantity demanded with respect to a one percent change in income:

\eta_i = \frac{\partial q_i}{\partial x}\,\frac{x}{q_i}

where x is total income.
Price elasticities can either be derived from the Marshallian demand equation or the Hicksian demand equation. The Marshallian demand equation is obtained from maximizing utility subject to the budget constraint, while the Hicksian demand equation is derived from solving the dual problem of expenditure minimization at a certain utility level. Elasticities derived from Marshallian demand are called Marshallian or uncompensated elasticities, and elasticities derived from Hicksian demand are called Hicksian or compensated elasticities. Marshallian elasticities can be transformed into Hicksian elasticities through the Slutsky equation:
\varepsilon^{H}_{ij} = \varepsilon^{M}_{ij} + w_j\,\eta_i

where ε^H_ij is the Hicksian elasticity, ε^M_ij is the Marshallian elasticity, w_j is the budget share of good j, and η_i is the income elasticity of good i. More detailed discussions of the Marshallian and Hicksian demand relations and the Slutsky equation can be found in many standard economics textbooks; see Nicholson (1992) and Gravelle and Rees (1992).
Analysis
In this example, you calculate the Marshallian and the Hicksian price elasticities and the income elasticity for the Almost Ideal Demand System (AIDS) model described in the example "Estimating an Almost Ideal Demand System Model." The model is as follows:
w_i = \alpha_i + \sum_{j} \gamma_{ij} \ln p_j + \beta_i \ln(X/P)

where w_i is the budget share associated with the ith good, γ_ij is the slope coefficient associated with the jth good in the ith share equation, p_j is the price of the jth good, X is the total expenditure on the system of goods, and P is the price index.
The AIDS model implies that the Marshallian price elasticity of good i with respect to the price of good j is

\varepsilon^{M}_{ij} = -\delta_{ij} + \frac{\gamma_{ij} - \beta_i \left( w_j - \beta_j \ln(X/P) \right)}{w_i}

where δ_ij is the Kronecker delta (δ_ij = 1 if i = j and 0 otherwise). Income elasticity is given by

\eta_i = 1 + \frac{\beta_i}{w_i}
If you are interested in elasticities at a specific point, an ESTIMATE statement can be used in the MODEL procedure to obtain estimates and standard errors of the elasticity at that point. For example, if you want to calculate the own-price elasticity for beef and you know both the budget share for beef, say 0.5, and ln(X/P), say 9.0, then you can use an ESTIMATE statement in the MODEL procedure as follows (see "Estimating an Almost Ideal Demand System Model" for the rest of the MODEL procedure code):
data aids_;
input year qtr pop b_q p_q c_q t_q b_p p_p c_p t_p cpi pc_exp;
... more datalines ...
;
run;
/* Full Nonlinear AIDS Model */
proc model data=aids;
w_b = ab + gbb*lpb + gbp*lpp + gbc*lpc + gbt*lpt + bb*(lx-p) + abco1*co1
+ absi1*si1 + ab_t*t ;
w_p = ap + gbp*lpb + gpp*lpp + gpc*lpc + gpt*lpt + bp*(lx-p) + apco1*co1
+ apsi1*si1 + ap_t*t ;
w_c = ac + gbc*lpb + gpc*lpp + gcc*lpc + gct*lpt + bc*(lx-p) + acco1*co1
+ acsi1*si1 + ac_t*t ;
fit w_b w_p w_c / itsur nestit outs=rest outest=fin2 converge = .00001
maxit = 1000 ;
parms ab bb gbb gbp gbc gbt abco1 absi1 ab_t
ap bp gpp gpc gpt apco1 apsi1 ap_t
ac bc gcc gct acco1 acsi1 ac_t
at gtt ;
estimate 'elasticity beef' (gbb - bb*(.5 - bb*9.0))/.5 - 1;
run;
quit;
This will yield the estimate for the elasticity when the budget share for beef is 0.5, and ln(X/P) = 9.0. The output is shown below.
The MODEL Procedure
Nonlinear ITSUR Estimates

Term              Estimate    Approx Std Err    t Value    Approx Pr > |t|    Label
elasticity beef   -0.93823    0.0355            -26.40     <.0001             (gbb - bb*(.5 - bb*9.0))/.5 - 1
The estimated own price elasticity for beef suggests that increasing the price for beef by 1% will reduce the demand for beef by 0.94%. Such information will be useful for setting prices. The ESTIMATE statement also provides standard error estimates.
If you are not interested in standard errors on the elasticities, and you want to compute all your own price and cross-price elasticities for the system, it is more convenient to do it in IML as shown below. Recall that the parameters estimated in the nonlinear AIDS model are in the data set fin2. The variables not needed in the calculation are eliminated in the following data step so it is easier to read the data set into IML.
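Neither the data step that trims fin2 nor the step that creates the meanw data set of sample means is shown in this example. The following is a minimal sketch of what they might look like; the variable names in the estimation data set (called aids here, with shares w_b through w_t, prices bm, pm, cm, tm, and expenditure x) are assumptions based on the surrounding code.

data fin2;
   set fin2;
   /* keep only the parameter estimates used in the elasticity calculations */
   keep gbb gbp gbc gbt gpp gpc gpt gcc gct gtt
        bb bp bc ab ap ac at;
run;

/* one-observation data set of sample means (assumed variable names) */
proc means data=aids noprint mean;
   var w_b w_p w_c w_t bm pm cm tm x;
   output out=meanw mean=;
run;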
To calculate elasticities for the nonlinear AIDS model, you first need to read in the estimated parameters from the output data set fin2:
proc iml;
use fin2;
read all var {gbb gbp gbc gbt gpp gpc gpt gcc gct gtt} ;
read all var {bb bp bc ab ap ac at} ;
close fin2;
Note that the elasticities have meanings only at a specific data point. In the current example, you calculate the elasticities at the mean point of the data. The following example illustrates the nonlinear AIDS case.
/* recall meanw contains the means of the variables */
use meanw;
/* read in the mean shares */
read all var {w_b w_p w_c w_t} ;
/* read in the mean price and expenditure */
read all var {bm pm cm tm x } ;
lpb = log(bm);
lpp = log(pm);
lpc = log(cm);
lpt = log(tm);
lx=log(x);
close meanw;
To calculate the elasticity matrix with its own price elasticity as diagonal elements and cross-price elasticities as off-diagonal elements, you can express the parameters in matrix form and use matrix manipulation in the calculation.
/* Budget share vector */
w = w_b//w_p//w_c//w_t;
/* gamma(i,j) matrix */
gij = (gbb||gbp||gbc||gbt)//
(gbp||gpp||gpc||gpt)//
(gbc||gpc||gcc||gct)//
(gbt||gpt||gct||gtt);
/* turkey parameter based on sum-to-one constraint */
bt= 0-bb-bp-bc;
a=ab//ap//ac//at; /* alpha(i) vector */
b=bb//bp//bc//bt; /* beta(i) vector */
You then specify the nonlinear price index as described in the example "Estimating an Almost Ideal Demand System Model":
a0=0;
p = a0 + ab*lpb + ap*lpp + ac*lpc + at*lpt +
.5*(gbb*lpb*lpb + gbp*lpb*lpp + gbc*lpb*lpc + gbt*lpb*lpt +
gbp*lpp*lpb + gpp*lpp*lpp + gpc*lpp*lpc + gpt*lpp*lpt +
gbc*lpc*lpb + gpc*lpc*lpp + gcc*lpc*lpc + gct*lpc*lpt +
gbt*lpt*lpb + gpt*lpt*lpp + gct*lpt*lpc + gtt*lpt*lpt );
Now you calculate each element of the elasticity matrix:
nk=ncol(gij);
mi = -1#I(nk);
ff2 = j(nk,nk,0); /* Initialize Marshallian elasticity matrix */
fic2 = j(nk,nk,0); /* Initialize Hicksian elasticity matrix */
fi2 = j(nk,1,0); /* Income elasticity vector */
/* prepare for plotting the elasticity matrices*/
/* initialize index vectors for the X- and Y-axis */
x = j(nk*nk,1,0);
y = j(nk*nk,1,0);
/* initialize vector to store elasticity matrices */
Helast = j(nk*nk,1,0);
Melast = j(nk*nk,1,0);
i=1;
do i=1 to nk;
fi2[i,1] = 1 + b[i,]/w[i,];
j=1;
do j=1 to nk;
ff2[i,j] = mi[i,j] + (gij[i,j] - b[i,]#(w[j,]-b[j,]#(lx-p)))/w[i,];
fic2[i,j] = ff2[i,j] + w[j,]#fi2[i,];
x[(i-1)*nk+j,1] = i ;
y[(i-1)*nk+j,1] = j ;
Melast[(i-1)*nk+j,1] = ff2[i,j] ;
Helast[(i-1)*nk+j,1] = fic2[i,j] ;
end;
end;
/*create data set for plotting*/
create plotdata var{x y Melast Helast} ;
append;
close plotdata;
run;
quit;
The calculated Marshallian elasticity matrix for the nonlinear AIDS model is given below:
Marshallian Elasticity Matrix

            BEEF         PORK         CHICKEN      TURKEY
BEEF        -0.944019    -0.018546    -0.108581     0.0234774
PORK        -0.026672    -0.851963    -0.111961    -0.041822
CHICKEN     -0.164428    -0.098732    -0.195914    -0.122067
TURKEY       0.0180247   -0.477217    -0.590155    -0.572917
The results show that all own price elasticities are negative, and all of the elasticities are less than 1 in absolute value, meaning that all goods are inelastic.
The income elasticities are reported below:
Income Elasticity

BEEF       1.0476685
PORK       1.0324174
CHICKEN    0.5811402
TURKEY     1.6222649
The results show that turkey consumption is the most sensitive to income changes, while chicken consumption is the least sensitive to income changes.
Finally, the Hicksian elasticity matrix is given below:
Hicksian Elasticity Matrix

            BEEF         PORK         CHICKEN      TURKEY
BEEF        -0.382661     0.2802386    0.0385055    0.0639165
PORK         0.526515    -0.557529     0.0329848   -0.001971
CHICKEN      0.1469568    0.0670036   -0.114325    -0.099635
TURKEY       0.8872617   -0.014564    -0.362398    -0.510299
The own price Hicksian elasticities are also negative for all four goods as expected.
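Note that the COLOR=colorval option in the PROC G3D step below refers to a character variable colorval that is not created by the PROC IML statements above. A minimal sketch (an assumption, based on the color interpretation given after the plot) that flags positive Hicksian elasticities in red and negative ones in green is:

data plotdata;
   set plotdata;
   length colorval $ 8;
   /* red bars for positive elasticities, green bars for negative ones */
   if Helast >= 0 then colorval = 'red';
   else colorval = 'green';
run;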
The following statements illustrate plotting the Hicksian elasticity in three dimensions.
proc g3d data = plotdata;
scatter x*y=Helast
/ grid
shape='pillar'
color=colorval
caxis=blue
rotate=60
size=2.5
yticknum=4
xticknum=4
zticknum=3
zmin=-1
zmax=1;
run;
quit;
The Hicksian elasticity matrix is plotted in the figure shown below. The red bar indicates positive elasticity, while the green bar indicates negative elasticity.
Figure 1: Hicksian Elasticity
Acknowledgment
The SAS program used in this example is based on code provided by Dr. Barry Goodwin.
References
Gravelle, H., and Rees, R. (1992), Microeconomics, New York: Longman Publishing.
Nicholson, W. (1992), Microeconomic Theory: Basic Principles and Extensions, Fifth Edition, Fort Worth: Dryden Press.
SAS Institute Inc. (1999), SAS/ETS User's Guide, Version 8, Cary, NC: SAS Institute Inc.
08-07-2023 11:19 AM · 2 Likes
Figure 1: Quantity and Price for Beef
The price elasticity of demand is defined as the percentage change in quantity demanded for some good with respect to a one percent change in the price of the good. For example, if the price of some good goes up by 1% and, as a result, sales fall by 1.5%, the price elasticity of demand for this good is -1.5%/1% = -1.5. Thus, price elasticity measures the responsiveness of quantity demanded to changes in price. A price elasticity greater than one in absolute value is called price elastic, and a price elasticity less than one in absolute value is called price inelastic. A given percentage increase in the price of an elastic good reduces the quantity demanded by a higher percentage than the same increase does for an inelastic good. In general, a necessary good is less elastic than a luxury good. For an introductory text on price elasticities, see Nicholson (1992). Price elasticity can be expressed as:

\varepsilon = \frac{dQ}{dP}\,\frac{P}{Q}

where ε is the price elasticity, P is the price of the good, and Q is the quantity demanded for the good.
Analysis
In this example, you will calculate the price elasticity of demand for beef in a simple log-linear demand model. The data consist of quarterly retail prices and per capita consumption for beef. The data period covers the first quarter of 1977 through the third quarter of 1999. The data were obtained from the USDA Red Meats Yearbook (accessed 2001).
The log-linear demand model has the following form:

\ln Q = a + b \ln P

where Q and P are defined as before, and a and b are parameters to be estimated.
The log-linear demand model is a very simple one. In the real world, there may be many additional complexities that need to be considered. For example, prices of other closely related goods may have a significant effect on the quantity of beef demanded, so they may also enter the right-hand side of the demand equation. At the store level, there may also be occasions when sales remain zero regardless of the price, for example, when the good is out of stock. For demonstration purposes, these complexities are ignored.
The log-linear demand function implies that the price elasticity of demand is constant:

\varepsilon = \frac{d\ln Q}{d\ln P} = b
Thus, to obtain an estimate of the price elasticity, you just need an estimate of b. You can use the AUTOREG procedure to obtain the estimates.
To use the AUTOREG procedure, you first read in the price and quantity data in the DATA step, and transform the price and quantity data into log forms:
data a ;
input yr qtr q p ;
date = yyq(yr,qtr) ;
format date yyq6. ;
lq = log(q) ;
lp = log(p) ;
datalines;
1977 1 22.9976 142.1667
1977 2 22.6131 143.9333
1977 3 23.4054 146.5
1977 4 22.7401 150.8
1978 1 22.0441 160
... more datalines ...
1997 4 16.2354 279.3
1998 1 16.6884 273.4667
1998 2 17.1985 278.1
1998 3 17.5085 277.3667
1998 4 16.6475 279.5333
1999 1 16.6785 278
1999 2 17.7635 284.7667
1999 3 17.6689 289.2333
;
run ;
Then you specify the input data set in the PROC AUTOREG statement and specify the regression model in a MODEL statement. If you are not concerned with autocorrelated errors and just want to do an ordinary least-squares regression, you can specify the model as follows:
proc autoreg data=a outest=estb ;
model lq = lp ;
output out=out1 r=resid1 ;
title "OLS Estimates";
run ;
This will yield ordinary least squares estimates of a and b. The output is shown in Figure 2.
OLS Estimates
The AUTOREG Procedure

Ordinary Least Squares Estimates

SSE                 0.12252634        DFE                  89
MSE                 0.00138           Root MSE             0.03710
SBC                 -334.26774        AIC                  -339.28946
Regress R-Square    0.8521            Total R-Square       0.8521
Durbin-Watson       1.1073

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1      5.8364            0.1294      45.10             <.0001
lp            1     -0.5314            0.0235     -22.64             <.0001
Figure 2: OLS Estimates
The parameter estimate for b is equal to -0.5314, which suggests that increasing the price for beef by 1% will reduce the demand for beef by 0.53%.
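The residual plots in this example reference a data set named plot2 that is not created by the statements shown. A minimal sketch (an assumption) that builds it by merging the residuals from the two AUTOREG output data sets, once both AUTOREG steps in this example have been run, is:

data plot2;
   merge out1(keep=date resid1) out2(keep=date resid2);
   by date;
run;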
The following SAS code uses the GPLOT procedure to plot the residuals obtained from the OLS estimation, as shown in Figure 3.
proc gplot data=plot2 ;
title 'OLS Model Residual Plot' ;
axis1 label=(angle=90 'Residuals') ;
axis2 label=('Date') ;
symbol1 c=blue i=needle v=none ;
plot resid1*date / cframe=ligr haxis=axis2 vaxis=axis1 ;
run ;
Figure 3: OLS Model Residual Plot
Note that the residual errors from the OLS regression do not appear to be random. The residual plot shows evidence of autocorrelation in the error process. This violates the independent error assumption of the classical regression model, and the OLS estimates will be inefficient. Thus, the parameter estimates are not as accurate as they could be.
To improve the efficiency of the estimated parameters, you can correct for autocorrelation using the AUTOREG procedure. Specify the order of autocorrelation using the NLAG= option. Since the data are quarterly, this example chooses order 1 as well as order 4 for seasonality considerations. You can use the METHOD= option to specify the maximum likelihood estimation.
proc autoreg data=a outest=estb ;
model lq = lp / nlag=(1 4) method=ml ;
output out=out2 r=resid2 ;
title "OLS, Autocorrelations, & Maximum Likelihood Estimates";
run ;
The output in Figure 4 first shows the initial OLS results. Then the estimates of autocorrelations and the maximum likelihood estimates are displayed.
OLS, Autocorrelations, & Maximum Likelihood Estimates
The AUTOREG Procedure

Ordinary Least Squares Estimates

SSE                 0.12252634        DFE                  89
MSE                 0.00138           Root MSE             0.03710
SBC                 -334.26774        AIC                  -339.28946
Regress R-Square    0.8521            Total R-Square       0.8521
Durbin-Watson       1.1073

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1      5.8364            0.1294      45.10             <.0001
lp            1     -0.5314            0.0235     -22.64             <.0001

Estimates of Autocorrelations

Lag    Covariance    Correlation
  0      0.00135        1.000000
  1      0.000564       0.419098
  2     -0.00001       -0.007458
  3      0.000459       0.340944
  4      0.000895       0.664708

Maximum Likelihood Estimates

SSE                 0.04786518        DFE                  87
MSE                 0.0005502         Root MSE             0.02346
SBC                 -407.51126        AIC                  -417.55469
Regress R-Square    0.6109            Total R-Square       0.9422
Durbin-Watson       1.4299

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1      5.1843            0.1929      26.87             <.0001
lp            1     -0.4131            0.0353     -11.69             <.0001
AR1           1     -0.1765            0.0713      -2.48             0.0152
AR4           1     -0.7134            0.0723      -9.87             <.0001
Figure 4: OLS, Autocorrelations, and Maximum Likelihood Estimates
By including an autoregressive error structure in the model, the total R-square statistic increases from 0.8521 to 0.9422 in the maximum likelihood estimation, and the AIC and SBC information criteria become more negative. These results indicate a substantial improvement over the OLS estimation.
Notice also that the estimate for b changes from -0.5314 to -0.4131, which means that the estimated elasticity for beef demand is smaller in absolute value than the OLS estimate. Since the estimated elasticity is smaller than 1 in absolute value, you may conclude that demand for beef is relatively inelastic.
Figure 5 illustrates the residuals obtained from the autoregressive error model estimation.
proc gplot data=plot2 ;
title 'Autoregressive Error Model Residual Plot' ;
axis1 label=(angle=90 'Residuals') ;
axis2 label=('Date') ;
symbol1 c=blue i=needle v=none ;
plot resid2*date / cframe=ligr haxis=axis2 vaxis=axis1 ;
run ;
Figure 5: Autoregressive Error Model Residual Plot
References
Nicholson, W. (1992), Microeconomic Theory: Basic Principles and Extensions, Fifth Edition, Fort Worth: Dryden Press.
SAS Institute Inc. (1999), SAS/ETS User's Guide, Version 8, Cary, NC: SAS Institute Inc.
USDA-ERS Electronic Data Archive, Red Meats Yearbook, housed at Cornell University's Mann Library, [http://usda.mannlib.cornell.edu/], accessed 25 September 2001.
08-07-2023 11:19 AM
Perhaps the most important assumption of any time series model is that the underlying process is the same across all observations in the sample. It is therefore necessary to carefully analyze time series data that include periods of violent change. A tool that is particularly useful in this regard is the Chow test.
The Chow test is commonly used to test for structural change in some or all of the parameters of a model in cases where the disturbance term is assumed to be the same in both periods.
The Chow test is an application of the F-test, and it requires the sum of squared errors from three regressions - one for each sample period and one for the pooled data.
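In its standard form, with k coefficients in the regression and subperiod sample sizes n_1 and n_2, the Chow statistic is

F = \frac{\left[\mathrm{SSE}_p - (\mathrm{SSE}_1 + \mathrm{SSE}_2)\right]/k}{(\mathrm{SSE}_1 + \mathrm{SSE}_2)/(n_1 + n_2 - 2k)}

where SSE_p is the sum of squared errors from the pooled regression and SSE_1 and SSE_2 are the sums of squared errors from the two subperiod regressions.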
Analysis
In an investigation of the demand for food in the United States, researchers may want to determine whether the structure of the demand equation changed after World War II.
Exploring the Data Set
The data for this study include yearly observations on per capita food consumption, the price of food, and per capita income for the years 1927-1941 and 1948-1962 (Maddala 1992). There are no observations for the war years between 1942 and 1947. The DATA step creates a SAS data set named FOOD, reads data values into the variables YEAR, Q, P, and Y, and creates the constant term ONE and the log transformations LNQ, LNP, LNY.
data food;
input year q p y @@;
retain one 1;
lnq = log(q);
lnp = log(p);
lny = log(y);
datalines;
27 88.9 91.7 57.7
28 88.9 92.0 59.3
29 89.1 93.1 62.0
...
;
run;
Once the FOOD data set is created, the interactive data analysis feature of SAS/INSIGHT software can be used to check the data for errors and to explore graphically possible relationships among the variables.
In this case, a 3-D rotating plot of the variables LNQ, LNP, and LNY shows a break between observations 15 and 16, corresponding to the periods 1927-1941 and 1948-1962. This evidence suggests that a test for a structural break in a model for the demand for food may be appropriate.
Computing the Chow Test
The AUTOREG procedure specifies a linear regression of the log of per capita food consumption on the log price of food, the log of per capita income, and a constant term (included automatically). The CHOW= option in the MODEL statement performs Chow tests at the specified breakpoints. The breakpoint candidates 15, 16, and 17, corresponding to the years 1941, 1948, and 1949, are chosen based on the preceding analysis.
proc autoreg data=food;
model lnq = lnp lny / chow=(15 16 17);
run;
CHOW Test
The AUTOREG Procedure

Dependent Variable    lnq

Ordinary Least Squares Estimates

SSE                 0.00286947        DFE                  27
MSE                 0.0001063         Root MSE             0.01031
SBC                 -182.30489        AIC                  -186.50848
Regress R-Square    0.9731            Total R-Square       0.9731
Durbin-Watson       1.2647

Structural Change Test

Test    Break Point    Num DF    Den DF    F Value    Pr > F
Chow             15         3        24       5.07    0.0074
Chow             16         3        24       5.54    0.0049
Chow             17         3        24       1.29    0.2992

Variable     DF    Estimate    Standard Error    t Value    Approx Pr > |t|
Intercept     1      4.0473            0.1360      29.76             <.0001
lnp           1     -0.1189            0.0404      -2.95             0.0066
lny           1      0.2412            0.0134      17.95             <.0001
R² is a measure of the appropriateness of the model. The value of 0.9731 implies that LNP and LNY together explain more than 97% of the variation in LNQ.
The coefficient estimates are highly significant, and the negative sign on LNP and the positive sign on LNY match the intuition that quantity demanded moves inversely with price and directly with income.
Notice that the Chow test is highly significant for breakpoints 15 and 16, which correspond to the years 1941 and 1948. This is not a very surprising result given the state of the world at that time.
References
Chow, G. C. (1960), "Tests of Equality between Sets of Coefficients in Two Linear Regressions," Econometrica, 28, 591-605.
Fisher, F. M. (1970), "Tests of Equality between Sets of Coefficients in Two Linear Regressions: An Expository Note," Econometrica, 38, 361-366.
Greene, W. H. (1993), Econometric Analysis, Second Edition, New York: Macmillan Publishing Company.
Maddala, G. S. (1992), Introduction to Econometrics, Second Edition, New York: Macmillan Publishing Company.
08-07-2023 11:18 AM
Elasticities of substitution are an important measure of production relationships. When derived demand systems are obtained from a cost function, it is possible to estimate several elasticities of substitution along with price elasticities. For a firm with a single output production function, price elasticity is the percentage change in quantity demanded of an input with respect to a one percent change in the price of the input (own price elasticity) or of another input (cross-price elasticity). This is expressed as:
\eta_{ij} = \frac{\partial x_i}{\partial w_j}\,\frac{w_j}{x_i}

where η_ij is the price elasticity, w_j is the price of the jth input, and x_i is the quantity of the ith input.
The elasticity of substitution measures the ease with which two inputs can be substituted for one another in the production process. It is mathematically defined as:
\sigma_{ij} = \frac{d\ln(x_j/x_i)}{d\ln(\mathrm{MRTS}_{j,i})}

where MRTS_{j,i} is the marginal rate of technical substitution of input j for input i. Problems arise when this form of the elasticity of substitution is used to describe production processes with more than two inputs. Several other definitions of the elasticity of substitution are explored in the analysis that follows.
Analysis
This example computes elasticities from a system of derived demand equations obtained from a translog cost function. The translog cost function (showing only the input-price terms relevant to the elasticity calculations) is

\ln C = \alpha_0 + \sum_i \alpha_i \ln w_i + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij}\,\ln w_i \ln w_j

and, using Shephard's lemma, the derived demand equations in cost-share form are

s_i = \alpha_i + \sum_j \gamma_{ij}\,\ln w_j

where s_i is the cost share of the ith input.
For the translog cost function, the price elasticities of demand are

\eta_{ij} = \frac{\gamma_{ij} + s_i s_j}{s_i} \text{ for all } i \neq j \quad \text{and} \quad \eta_{ii} = \frac{\gamma_{ii} + s_i^2 - s_i}{s_i} \text{ for all } i.

Hicks-Allen elasticities of substitution are given by

\sigma_{ij} = \frac{\gamma_{ij} + s_i s_j}{s_i s_j} \text{ for all } i \neq j \quad \text{and} \quad \sigma_{ii} = \frac{\gamma_{ii} + s_i^2 - s_i}{s_i^2} \text{ for all } i.

Morishima elasticities of substitution are simply computed as

\sigma^{M}_{ij} = \eta_{ij} - \eta_{jj}.
Some care must be taken when using elasticities of substitution to characterize production relationships. Hicks' original concept of the elasticity of substitution applied to the case of production with two inputs. In cases with more than two inputs, the Hicks concept can still be applied, but output and all other inputs besides the pair under investigation must be held constant. The Hicks-Allen elasticity presented here (referred to occasionally as the Allen or Allen/Uzawa elasticity) attempts to rectify the inadequacies of the Hicks concept when applied to more than two inputs. As Blackorby and Russell (1989) show, however, the Hicks-Allen elasticity is a poor measure on this account: what little information it contains can be found in the parameter estimates alone. In the case of many factors of production, the best measure of substitution between inputs is the Morishima elasticity. Developed by the economist of the same name, this elasticity is an exact measure of the ease of substitution and provides complete comparative statics information about relative factor shares. It comes much closer to realizing the goals of Hicks' original elasticity in the case of many inputs. Both elasticities are calculated in this example to demonstrate the impact of choosing one or the other in a given situation.
Data and parameter estimates were previously stored in the data sets est and klems in the example "Estimating a Derived Demand System from a Translog Cost Function." The elasticities are evaluated at the sample means, so the MEANS procedure is used to compute the sample mean cost shares and store this information in the data set meanshares.
proc means data = klems noprint mean;
   var sk sl se sm ss;
   output out = meanshares mean = sk sl se sm ss;
run;
Elasticities are most easily computed using the IML procedure, as the following statements demonstrate. Because some of the parameters were not estimated, their values must be backed out by applying the homogeneity and symmetry restrictions.
proc iml;
/*Read in parameter estimates*/
use est;
read all var {gkk gkl gke gkm gks};
read all var {gll gle glm gls};
read all var {gee gem ges};
read all var {gmm gms};
close est;
/*Calculate S parameter based on homogeneity constraint*/
gss=0-gks-gls-ges-gms;
/*Read in mean cost shares and construct vector*/
use meanshares;
read all var {sk sl se sm ss};
close meanshares;
w = sk//sl//se//sm//ss;
print w;
/*Construct matrix of parameter estimates*/
gij = (gkk||gkl||gke||gkm||gks)//
(gkl||gll||gle||glm||gls)//
(gke||gle||gee||gem||ges)//
(gkm||glm||gem||gmm||gms)//
(gks||gls||ges||gms||gss);
print gij;
nk=ncol(gij);
mi = -1#I(nk); /*Initialize negative identity matrix*/
eos = j(nk,nk,0); /*Initialize Hicks-Allen EOS matrix*/
mos = j(nk,nk,0); /*Initialize Morishima EOS matrix*/
ep = j(nk,nk,0); /*Initialize price elasticity of demand matrix*/
/*Calculate Hicks-Allen EOS and price elasticity of demand matrices*/
i=1;
do i=1 to nk;
j=1;
do j=1 to nk;
eos[i,j] = (gij[i,j]+w[i]#w[j]+mi[i,j]#w[i])/(w[i]#w[j]);
ep[i,j] = w[j]#eos[i,j];
end;
end;
/*Calculate Morishima EOS Matrix*/
i=1;
do i=1 to nk;
j=1;
do j=1 to nk;
mos[i,j] = ep[i,j]-ep[j,j];
end;
end;
run;
Elasticities are reported in Figure 1.
Price Elasticities of Demand

             Capital     Labor      Energy     Materials    Services
Capital      -0.338      0.227      0.0183      0.0593       0.0335
Labor         0.0650    -0.630      0.0315      0.231        0.303
Energy        0.0606     0.364     -0.0915     -0.170       -0.163
Materials     0.0167     0.227     -0.0145     -0.233        0.00367
Services      0.0679     2.148     -0.1000      0.0265      -2.142

Hicks-Allen Elasticities of Substitution

             Capital     Labor      Energy     Materials    Services
Capital      -2.993      0.575      0.536       0.148        0.600
Labor         0.575     -1.594      0.921       0.574        5.435
Energy        0.536      0.921     -2.679      -0.423       -2.925
Materials     0.148      0.574     -0.423      -0.579        0.0658
Services      0.600      5.435     -2.925       0.0658     -38.437

Morishima Elasticities of Substitution

             Capital     Labor      Energy     Materials    Services
Capital       0          0.857      0.110       0.292        2.176
Labor         0.403      0          0.123       0.463        2.445
Energy        0.399      0.994      0           0.0627       1.979
Materials     0.355      0.857      0.0771      0            2.146
Services      0.406      2.778     -0.0084      0.259        0

Figure 1: Elasticity Matrices
Own price elasticities of demand are all negative. Using the Hicks-Allen elasticity, all pairs of inputs are substitutes except energy and services and energy and materials. The matrix of Hicks-Allen elasticities is symmetric by design. In general, the degree of substitution is not particularly high except in the case of labor and services. This indicates that the textile industry has responded to increased competition from foreign firms with lower labor costs by substituting away from labor to greater use of services. The Morishima elasticities support this interpretation, but the magnitudes of these elasticities seem more reasonable. Virtually all inputs are substitutes under this measure.
References
Blackorby, C., and Russell, R. R. (1989). “Will the Real Elasticity of Substitution Please Stand Up? (A Comparison of the Allen/Uzawa and Morishima Elasticities).” American Economic Review 79:882–888.
Chambers, R. G. (1988). Applied Production Analysis: A Dual Approach. New York: Cambridge University Press.
Diewert, W. E., and Wales, T. J. (1987). “Flexible Functional Forms and Global Curvature Conditions.” Econometrica 55:43–68.
Jorgenson, D. (1986). “Econometric Methods for Modeling Producer Behavior.” In Handbook of Econometrics, edited by Z. Griliches, and M. D. Intriligator, 1841–1915. Amsterdam: North-Holland.
08-07-2023 11:17 AM · 1 Like
Overview
This example illustrates the calculation of several widely known economic indices, such as the Laspeyres, Paasche, Bowley, and Fisher indices, by defining them with PROC FCMP and then accessing the compiled functions in a DATA step through the SAS global option CMPLIB. An economic index is a statistic about the economy that is used to study relative movements in prices or quantities over a period of time. Such indices assist decision-making aimed at a stable economy by helping to predict future performance.
Some of the popular ways of computing economic indices are given by the Laspeyres, Paasche, Bowley, and Fisher index formulas. Each of the index formulas can be used to compute both a price index and a quantity index. A price index measures the change in price over time for a fixed basket of products and services. A quantity index measures the change in consumption over time for a basket of goods with a fixed value at a certain time. Some examples of price-related economic indices are the consumer price index (CPI), import and export price indices, producer price indices, and the employment cost index. The growth rate of gross domestic product (GDP) is an example of a quantity-related change. The following indices are calculated in this example:
Laspeyre’s
Paasche’s
Bowley’s
Fisher’s
Marshall Edgeworth’s
Mitchell’s
Walsh’s
geometric mean
harmonic mean
Details
An ideal index might be expected to have fixed weights in the numerator and denominator, but when prices change, the quantities purchased are rarely identical over two given periods. Although the Laspeyres and Paasche formulas are two widely used methods for calculating indices, they do not account for the fact that consumers typically react to price changes by changing the quantities they purchase. With an increase in price, the consumer tends to reduce the quantity purchased; hence the weights entering the Paasche index are smaller than those of the Laspeyres index. As a result, the Paasche index systematically understates inflation while the Laspeyres index overstates it.
To compensate for this discrepancy, different formulas were introduced. Bowley's and Fisher's formulas use the arithmetic mean and the geometric mean of the Laspeyres and Paasche index values, respectively. The Marshall-Edgeworth formula uses an average of the base-period and current-period quantities as weights. Fisher's index is more appropriate when dealing with percentage changes. As their names suggest, the harmonic mean index computes a harmonic average and the geometric mean index computes a geometric mean.
Except for the Laspeyres, harmonic mean, and geometric mean indices, all the other indices require knowledge of the current expenditure pattern. This is a disadvantage because collecting current information and updating weights require more time and effort. Taken together, the indices serve as an overall measure of relative movements.
In the following formulas, let period 0 denote the reference point in an earlier period, sometimes known as the base period, and let period t denote the current time of interest with which the base period is compared. Let i = 1, ..., n denote the items in the basket over which the summation is carried out. Let p_{0,i} and q_{0,i} denote the price and quantity of the ith item in the base period, and let p_{t,i} and q_{t,i} denote its price and quantity in the current period t. Some of the fixed-weight price indices illustrated in this example and defined by using PROC FCMP are enumerated below; the function names defined in PROC FCMP are shown in parentheses. For each index type, the quantity index formula is defined analogously to its corresponding price index formula.
- Laspeyres: price index (laspeyres_price_index), quantity index (laspeyres_qty_index)
- Paasche: price index (paasche_price_index), quantity index (paasche_qty_index)
- Bowley: price index (bowley_price_index), quantity index (bowley_qty_index)
- Fisher: price index (fisher_price_index), quantity index (fisher_qty_index)
- geometric mean (GM): price index (geometricmean_price_index), quantity index (geometricmean_qty_index)
- harmonic mean (HM): price index (harmonicmean_price_index), quantity index (harmonicmean_qty_index)
- Marshall-Edgeworth: price index (marshall_edgeworth_price_index), quantity index (marshall_edgeworth_qty_index)
- Walsh: price index (walsh_price_index), quantity index (walsh_qty_index)
- Mitchell: price index (mitchell_price_index), quantity index (mitchell_qty_index)
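For reference, the Laspeyres and Paasche price indices, which the laspeyres_price_index and paasche_price_index functions implement, take the standard forms below; Bowley's index is their arithmetic mean and Fisher's index is their geometric mean:

P_L = \frac{\sum_i p_{t,i}\, q_{0,i}}{\sum_i p_{0,i}\, q_{0,i}}, \qquad
P_P = \frac{\sum_i p_{t,i}\, q_{t,i}}{\sum_i p_{0,i}\, q_{t,i}}, \qquad
P_B = \frac{P_L + P_P}{2}, \qquad
P_F = \sqrt{P_L \, P_P}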
The q_a in Mitchell's index denotes the relative importance of the items. Mitchell advocated using the average of the quantities bought and sold over a period of time as the weights (Kenney and Keeping 1962). In this example, the Mitchell price index function is calculated with the weight "wt" (the q_a above) equal to the average of q_0 and q_n. You can supply your own weights depending on how you rank the items in their relative order of importance. Instead of using this form of Mitchell's formula, some users might prefer to weight by the value of their products, v_a = q_a × p_0, and compute Mitchell's price index with those value weights. Similarly, Mitchell's quantity index formula uses value weights, defined in that case as v_a = p_a × q_0.
Note that the price indices defined by using PROC FCMP can be used to compute the quantity indices as well, by interchanging the price and quantity values. To avoid confusion, the quantity index formulas are defined separately. For example, the Laspeyres price index function takes the arguments laspeyres_price_index(p0, q0, pn). You can obtain the Laspeyres quantity index by changing the arguments to laspeyres_price_index(q0, p0, qn), replacing the p's with q's and vice versa. This yields the same result as using laspeyres_qty_index(p0, q0, qn).
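As a quick illustration, the two calls below return the same Laspeyres quantity index. The sketch assumes the index functions have already been compiled to sasuser.ecoidx (as shown in the Analysis section that follows) and uses the cheese and wine data introduced later in this example.

options cmplib=(sasuser.ecoidx);
data _null_;
   /* base prices, base quantities, and current quantities */
   array p0[2] _temporary_ (15 22);
   array q0[2] _temporary_ (100 25);
   array qn[2] _temporary_ (108 38);
   /* interchange prices and quantities in the price-index function ... */
   idx1 = laspeyres_price_index( q0, p0, qn );
   /* ... or call the dedicated quantity-index function */
   idx2 = laspeyres_qty_index( p0, q0, qn );
   put idx1= idx2=;
run;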
Analysis
You can define the index functions as subroutines by using the FCMP procedure. In this example, the functions are stored in "sasuser.ecoidx.economic_indicators" by using the OUTLIB= option in the PROC FCMP statement. The laspeyres_price_index function created by PROC FCMP is shown as follows:
%let OUTLIB = sasuser.ecoidx.economic_indicators;
/*-------- create functions with PROC FCMP --------*/
proc fcmp outlib= &OUTLIB;
function laspeyres_price_index( p0[*], q0[*], pn[*] ) label= "Laspeyres Price Index";
/*---------------------------------------------------------------------
* ENTRY: laspeyres_price_index
*
* PURPOSE: Computes Laspeyres Price index.
*
* USAGE: idxl_p = laspeyres_price_index( p0, q0, pn );
* p0 denotes price vector for items at time 0/base;
* q0 denotes quantity vector for items at time 0/base;
* pn denotes price vector for items at time n;
* In the following example for 2 items,
* the Laspeyres Price index has been calculated.
*
* NOTE: Missing values as arguments to the function return missing value.
*
* EXAMPLES: idxl_p = laspeyres_price_index( p0, q0, pn );
*
*
*--------------------------------------------------------------------*/
if dim(p0) ~= dim(q0) | dim(p0) ~= dim(pn) then do;
Put "ERROR: Arguments to laspeyres_price_index do not have the same dimensions";
return( . );
end;
num=0;den=0;
do i=1 to dim(p0);
num=num + (pn[i]*q0[i]);
den=den + (p0[i]*q0[i]);
end;
idxl_p=num/den;
return( idxl_p );
endsub;
/* ... the remaining index functions are defined similarly ... */
quit;
The variable IDXL_P holds the computed Laspeyres price index. The RETURN statement in a subroutine enables the computed result to be returned when the specified function is invoked. Note that the IDXL_P result is used later in a DATA step given below. You need the base price p0, the base quantity q0, and the current price pn to calculate the Laspeyres price index. The input arguments for the laspeyres_price_index function are p0[*], q0[*], and pn[*]. The [*] in p0[*] denotes an array that contains the base prices for all the items in the basket. The other subroutines that define the other index functions are created similarly. The complete statements can be found in the accompanying SAS source file (sas.html). The data set contains wine and cheese products over two time periods, T1 and T2. The observations are as follows:
Product type    T1 q    T1 p    T2 q    T2 p
Cheese           100      15     108      18
Wine              25      22      38      16
The library catalog 'sasuser.ecoidx' in the SAS global option CMPLIB= specifies where to look for the previously compiled functions created in PROC FCMP. For more information about the FCMP procedure, see the Base SAS Procedures Guide, Version 9. The following DATA step reads the data and calculates the various price indices.
/*-------- test functions with datastep --------*/
/***calculate price index***/
options CMPLIB= (sasuser.ecoidx);
data indices(drop=i);
array p0[2] _temporary_ (15 22);
array q0[2] _temporary_ (100 25);
array pn[2] _temporary_ (18 16);
array qn[2] _temporary_ (108 38);
array wt_q[2] _temporary_;
array wt_p[2] _temporary_;
do i =1 to dim(p0);
wt_q[i] = .5 * (q0[i]+ qn[i]);
wt_p[i] = .5 * (p0[i]+ pn[i]);
end;
idxl_p = laspeyres_price_index( p0, q0, pn );
idxp_p = paasche_price_index( p0, pn, qn );
idxb_p = bowley_price_index( p0, q0, pn, qn );
idxf_p = fisher_price_index( p0, q0, pn, qn );
idxgm_p = geometricmean_price_index( p0, q0, pn );
idxhm_p = harmonicmean_price_index( p0, q0, pn );
idxme_p = marshall_edgeworth_price_index( p0, q0, pn, qn );
idxw_p = walsh_price_index( p0, q0, pn, qn );
idxm_p = mitchell_price_index( p0, wt_q, pn, 1 ); /*<----user supplied _type_ */
idxl_q = laspeyres_qty_index( p0, q0, qn );
idxp_q = paasche_qty_index( q0, pn, qn );
idxb_q = bowley_qty_index( p0, q0, pn, qn );
idxf_q = fisher_qty_index( p0, q0, pn, qn );
idxgm_q = geometricmean_qty_index( p0, q0, qn );
idxhm_q = harmonicmean_qty_index( p0, q0, qn );
idxme_q = marshall_edgeworth_qty_index( p0, q0, pn, qn );
idxw_q = walsh_qty_index( p0, q0, pn, qn );
idxm_q = mitchell_qty_index( q0, wt_p, qn, 1 ); /*<----user supplied _type_ */
run;
Figure 3.1 shows the price indices computed with the Laspeyres, Paasche, Bowley, Fisher, geometric-mean, harmonic-mean, Marshall-Edgeworth, Walsh, and Mitchell index formulas.
Obs    idxl_p     idxp_p     idxb_p     idxf_p     idxgm_p    idxhm_p    idxme_p    idxw_p     idxm_p
  1    1.07317    1.03909    1.05613    1.05599    1.04914    1.02181    1.03259    1.05670    1.05459
Figure 3.1: Price Indices
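As a check on the first two entries, the Laspeyres and Paasche price indices for these two products can be computed by hand from the data above:

P_L = \frac{18(100) + 16(25)}{15(100) + 22(25)} = \frac{2200}{2050} \approx 1.07317, \qquad
P_P = \frac{18(108) + 16(38)}{15(108) + 22(38)} = \frac{2552}{2456} \approx 1.03909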
Note that the relative importance of the items, denoted by "wt" (or q_a in Mitchell's price index formula), is calculated as an average of the base and current quantities when computing the price indices. You can supply your own weights "wt" based on your judgment of the relative importance of the items.
As pointed out earlier, if you need to calculate the Laspeyres quantity index, you can either enter laspeyres_price_index(q0, p0, qn), with the p's replaced by q's and vice versa, or use laspeyres_qty_index(p0, q0, qn). Note that during the quantity index calculation, the weight "wt" in Mitchell's formula is calculated as an average of the base and current prices. Again, you might use the v_a-weighted Mitchell formula; in that case, you would enter mitchell_price_index(p0, wt, pn, 2), with "2" instead of "1" for _type_. For the same data set, the quantity indices are computed.
Figure 3.2 shows the various quantity indices computed for the same data with the Laspeyres, Paasche, Bowley, Fisher, geometric-mean, harmonic-mean, Marshall-Edgeworth, Walsh, and Mitchell index formulas.
Obs    idxl_q     idxp_q    idxb_q     idxf_q     idxgm_q    idxhm_q    idxme_q    idxw_q     idxm_q
  1    1.19805    1.16      1.17902    1.17887    1.18371    1.17094    1.08822    1.17771    1.17835
Figure 3.2: Quantity Indices
When a price increases, the price index increases; when the price falls, the index falls as well. The same is true for the quantity index. For the small data set presented here, you can anticipate the results: over the two time periods, the price of cheese increases and the price of wine decreases, while the quantities purchased of both cheese and wine increase. If you compare the two sets of results above, the quantity index values are somewhat larger than the price index values, which is what the relative changes in the data lead you to expect.
References
SAS Institute Inc. (2003), The FCMP Procedure, Version 9, Cary, NC: SAS Institute Inc.
Kenney, J. F., and Keeping, E. S. (1962), Mathematics of Statistics, Pt.1, 3rd Edition, Princeton, NJ: Van Nostrand, 64–74.
08-07-2023 11:15 AM
Overview
Tests for structural change in a time series variable are typically performed by modeling a likely breakpoint for the structural model with a dummy variable that has value 0 before the break and value 1 after the break. The residual sums of squares from this unrestricted model and from a restricted model with no breakpoint are then compared using the standard F-test.
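In the notation used by the BATTERY module later in this example, with SSE_R and m parameters for the restricted model, SSE_U and k parameters for the unrestricted model, and n observations in the regression, the statistic is

F = \frac{(\mathrm{SSE}_R - \mathrm{SSE}_U)/(k - m)}{\mathrm{SSE}_U/(n - k)}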
Christiano, in his 1988 paper "Searching for a Break in GNP," questioned this traditional method of testing for structural change in a time series process. The gist of his paper is that traditional methods of testing for a break in trend are flawed in that the significance levels overstate the likelihood of the trend break alternative hypothesis. He attributes this to the fact that conventional tests assume that the breakpoint is exogenous to the test, whereas, in practice, the breakpoint is almost always chosen a priori.
Christiano uses quarterly data on log GNP from 1948:1 to 1988:4 to fit the trend-stationary (TS) model

y_t = \mu + \beta t + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t
A battery of F-tests is computed for each of the draws from the error distribution. The unrestricted model is

y_t = \mu + \delta d_t + \beta t + \theta\,(d_t \cdot t) + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \epsilon_t

for i = 3, ..., T-2, where the dummy variable d_t has a value of zero before the breakpoint i and a value of one after the breakpoint i.
The critical values come from the 95th percentile of each row of the matrix F = [F_ij], where F_ij is the F-value for the ith breakpoint in the jth simulation (rows index breakpoints; columns index simulations).
Pre-test adjusted critical values take into account the fact that the breakpoint is commonly picked before the test is computed. To account for this bias, the 95th percentile of the largest F-value from each simulation is calculated.
Analysis
The data for this example come from the Citibase data set CITIQTR in the SASHELP library. The variables DATE, GDPQ, and L_GDPQ are read into the SAS data set GDP, where GDPQ is the quarterly gross domestic product in 1987 dollars from 1980:1 to 1991:4 and L_GDPQ is the logarithmic transformation of GDPQ.
data gdp;
set sashelp.citiqtr;
keep date gdpq l_gdpq;
l_gdpq = log(gdpq);
run;
A plot of the data in Figure 1 reveals a likely candidate for a structural break around 1983.
Figure 1: Log of Quarterly GDP, 1980-1991
SAS's Interactive Matrix Language (SAS/IML) is used to perform the 10,000 simulations for this bootstrapping example. IML is a powerful language that enables you to read variables from SAS DATA sets into matrices for computations and manipulations that may be too complicated or unwieldy in traditional DATA steps or procedures.
Begin by invoking PROC IML, reading the time series of interest, and initializing a few variables.
proc iml;
simnum = 10000;
echo = 500;
use gdp;
read all into data var{gdpq};
read all into date var{date};
ldata = log(data);
nlag = 2;
strtvls = ldata[1:nlag,];
y = ldata[3:nrow(ldata)];
y_1 = ldata[2:nrow(ldata)-1];
y_2 = ldata[1:nrow(ldata)-2];
SIMNUM sets the number of simulations to 10,000. ECHO is a feedback variable used later in the program to report progress: every 500 iterations, the program prints the total number of simulations computed to that point. Y, Y_1, and Y_2 are vectors containing the log of the time series variable GDPQ and its first and second lags.
The next step is to define some modules, which can be thought of as either functions or subroutines.
The module BATTERY takes as input the time series variables Y, Y_1, and Y_2 and returns as output a column vector of F-tests, one for each possible breakpoint. The module also returns the matrix of variables in the restricted model and the numbers of variables in the restricted and unrestricted models. Notice that the dummy variable DI is used to control the breakpoint. The F-values are calculated directly using matrix algebra.
start battery(y,y_1,y_2) global(xr, k, m);
batf = 0;
n = nrow(y);
t = cusum(j(n,1,1));
xr = j(n,1,1) || t || y_1 || y_2;
m = ncol(xr);
er = (i(n) - xr*inv(xr`*xr)*xr`)*y;
rss = er`*er;
do i=(m-1) to n-2 by 1;
di = j(i-1,1,0) // j(n-i+1,1,1);
xu = j(n,1,1) || di || t || di#t || y_1 || y_2;
k = ncol(xu);
eu = (i(n) - xu*inv(xu`*xu)*xu`)*y;
uss = eu`*eu;
fstat = ((rss-uss)/(k-m)) / (uss/(n-k));
batf = batf // fstat;
end;
batf = batf[2:nrow(batf),];
return(batf);
finish battery;
The module BOOTSTRP takes as input the number of observations in the original data set, NDATA; a vector of starting values, STRTVLS; a vector of fitted errors from which to make random draws, EHAT; and the vector of parameter estimates from the fitted model, BETAHAT. It returns a column vector of simulated data, YSIM.
start bootstrp(ndata, strtvls, ehat, betahat);
ner = nrow(ehat);
rndehat = j(ner,1,1);
do eloop=1 to ner by 1;
rndehat[eloop] = ehat[ceil(ner*uniform(0))];
end;
ysim = j(ndata,1,1);
ysim[1:nrow(strtvls),] = strtvls;
t = cusum(j(ndata,1,1));
do l=(nrow(strtvls)+1) to ndata by 1;
ysim[l] = betahat[1] +
betahat[2]*t[l-2] +
betahat[3]*ysim[l-1] +
betahat[4]*ysim[l-2] +
rndehat[l-2];
end;
return(ysim);
finish bootstrp;
The module SORTCM sorts each row of the input matrix by column and returns the resultant sorted matrix.
start sortcm(x);
temp1=x;
do i=1 to nrow(temp1) by 1;
temp2 = temp1[i,];
temp3 = temp2;
temp2[,rank(temp2)] = temp3;
temp1[i,] = temp2;
end;
return(temp1);
finish sortcm;
The main program begins by computing a battery of F-tests for the original data and calculating the parameter estimates and residuals from the restricted TS model.
fdata = battery(y, y_1, y_2);
temp = cusum(j(nrow(fdata)+m-2,1,1));
brkpt = temp[(m-1):nrow(temp),];
ehat = (i(nrow(y)) - xr*inv(xr`*xr)*xr`)*y;
betahat = inv(xr`*xr)*xr`*y;
Next, the F matrix is initialized and the 10,000 simulations are performed. The results of each simulation are horizontally concatenated to the F matrix and the current simulation number is printed on the screen every 500 iterations.
f = j(nrow(ehat)-2*(k-m),1,0);
ndata = nrow(ldata);
do simloop=1 to simnum by 1;
ysim = bootstrp(ndata, strtvls, ehat, betahat);
y = ysim[3:ndata];
y_1 = ysim[2:ndata-1];
y_2 = ysim[1:ndata-2];
f = f || battery(y, y_1, y_2);
if mod(simloop,echo)=0 then print simloop;
end;
The maximum F statistic from each simulation is found and sorted in the row vector FCOLMAX. The rows of F are sorted to create the matrix FSORT. The column associated with the 95th percentile is then selected from each, and a column vector of 95% critical values for the standard F-test is constructed; these become the variables FMAX_95, F_95, and FSTD_95, respectively.
f = f[,2:ncol(f)];
fcolmax = sortcm(f[<>,]);
fsort = sortcm(f);
cv95 = int(.95*simnum);
brkpt = date[brkpt,];
fdata = fdata;
fstd_95 = j(nrow(fdata),1,finv(.95,k-m,nrow(ehat)-k));
f_95 = fsort[,cv95];
fmax_95 = j(nrow(fdata),1,fcolmax[,cv95]);
Finally, a SAS DATA set is created containing the variables of interest and PROC IML is exited.
create sasuser.bootout
var{brkpt fdata fstd_95 f_95 fmax_95};
append;
quit;
The GPLOT procedure can be used to view the results of the test, shown in Figure 2.
title 'Bootstrapped Critical Values';
axis1 label=('Break Point') minor=none
order=('01jan80'd to '01jan91'd by year);
axis2 label=(angle=90 'F-statistics')
order=(0 to 15 by 3);
symbol1 c=red i=join; /* for fdata */
symbol2 c=black i=spline; /* for fstd_95 */
symbol3 c=green i=spline; /* for f_95 */
symbol4 c=blue i=spline; /* for fmax_95 */
proc gplot data=sasuser.bootout;
format brkpt year4.;
plot fdata * brkpt = 1
fstd_95 * brkpt = 2
f_95 * brkpt = 3
fmax_95 * brkpt = 4 / overlay
vaxis=axis2
vminor=1
haxis=axis1
legend
cframe=ligr;
run;
quit;
Figure 2: Bootstrapped Critical Values
References
Christiano, L. J. (1988), "Searching for a Break in GNP," N.B.E.R. Working Paper No. 2695.
Rappoport, P. and Reichlin, L. (1988), "Segmented Trends and Nonstationary Time Series," Economic Journal, 168-177.
SAS Institute Inc. (1990), SAS/IML Usage and Reference, Version 6, First Edition, Cary, NC: SAS Institute Inc.
08-07-2023 11:14 AM
Overview
The UCM procedure analyzes and forecasts equally spaced univariate time series data using the Unobserved Components Model (UCM). A UCM decomposes a response series into components such as trend, seasonal, cycle, and regression effects due to predictor series. These components capture the salient features of the series that are useful in explaining and predicting its behavior. The UCMs are also called Structural Models in the time series literature. This example illustrates the use of the UCM procedure by analyzing a yearly time series.
A Series with Trend and a Cycle
The time series data analyzed in this example are annual age-adjusted melanoma incidences from the Connecticut Tumor Registry (Houghton, Flannery, and Viola 1980) from 1936 to 1972. The observations represent the number of melanoma cases per 100,000 people.
The following DATA step reads the data and creates a date variable to label the measurements.
data melanoma ;
input Incidences @@ ;
year = intnx('year','1jan1936'd,_n_-1) ;
format year year4. ;
label Incidences = 'Age Adjusted Incidences of Melanoma per 100,000';
datalines ;
0.9 0.8 0.8 1.3 1.4 1.2 1.7 1.8 1.6 1.5
1.5 2.0 2.5 2.7 2.9 2.5 3.1 2.4 2.2 2.9
2.5 2.6 3.2 3.8 4.2 3.9 3.7 3.3 3.7 3.9
4.1 3.8 4.7 4.4 4.8 4.8 4.8
;
run ;
Figure 1 shows a plot of the data.
Figure 1: Melanoma Incidences Plot
To analyze this series, a UCM that contains a trend component, a cycle component, and an irregular component is appropriate. A time series y_t that follows such a UCM can be formally described as

y_t = \mu_t + \psi_t + \epsilon_t

where μ_t is the trend component, ψ_t is the cycle component, and ε_t is the error term. The error term is also called the irregular component and is assumed to be Gaussian white noise with variance σ²_ε. The trend μ_t is modeled as a stochastic component with a slowly varying level and slope. Its evolution is described as follows:

\mu_t = \mu_{t-1} + \beta_{t-1} + \eta_t, \qquad \eta_t \sim N(0, \sigma^2_\eta)
\beta_t = \beta_{t-1} + \xi_t, \qquad \xi_t \sim N(0, \sigma^2_\xi)

The disturbances η_t and ξ_t are assumed to be independent. There are some interesting special cases of this trend model, obtained by setting one or both of the disturbance variances σ²_η and σ²_ξ equal to zero. If σ²_ξ is set equal to zero, you get a linear trend model with a fixed slope. If σ²_η is set to zero, the resulting model usually has a smoother trend. If both variances are set to zero, the resulting model is the deterministic linear time trend μ_t = μ_0 + β_0 t.
The cycle component ψ_t is modeled as follows:

\psi_t = \rho\,(\psi_{t-1}\cos\lambda + \psi^{*}_{t-1}\sin\lambda) + \nu_t
\psi^{*}_t = \rho\,(-\psi_{t-1}\sin\lambda + \psi^{*}_{t-1}\cos\lambda) + \nu^{*}_t

Here ρ is the damping factor, where 0 ≤ ρ ≤ 1, and the disturbances ν_t and ν*_t are independent N(0, σ²_ν) variables. This results in a damped stochastic cycle that has a time-varying amplitude and phase and a fixed period equal to 2π/λ.
The parameters of this UCM are the disturbance variances σ²_ε, σ²_η, σ²_ξ, and σ²_ν; the damping factor ρ; and the frequency λ.
The following syntax fits the UCM to the melanoma incidences series:
proc ucm data = melanoma;
id year interval = year;
model Incidences ;
irregular ;
level ;
slope ;
cycle ;
run ;
Begin by specifying the input data set in the PROC UCM statement. Next, use the ID statement in conjunction with the INTERVAL= option to specify the time interval between observations. Note that the values of the ID variable are extrapolated for the forecast observations based on the value of the INTERVAL= option. The MODEL statement is then used to specify the dependent variable. If there are any predictors in the model, they are specified in the MODEL statement on the right-hand side of the equation.
Finally, the IRREGULAR statement is used to specify the irregular component, the LEVEL and SLOPE statements are used to specify the trend component, and the CYCLE statement is used to specify the cycle component. Notice that different components in the model are specified by separate statements and that each component statement has a different set of options, which can be found in the SAS/ETS User's Guide. These options are useful for specifying additional details about that component. The following output from the UCM procedure in Figure 2 shows the parameter estimates for this model.
Final Estimates of the Free Parameters

Component    Parameter         Estimate       Approx Std Error    t Value    Approx Pr > |t|
Irregular    Error Variance    0.05706        0.01750               3.26     0.0011
Level        Error Variance    7.328566E-9    4.70077E-6            0.00     0.9988
Slope        Error Variance    8.71942E-11    5.61859E-8            0.00     0.9988
Cycle        Damping Factor    0.96476        0.04857              19.86     <.0001
Cycle        Period            9.68327        0.62859              15.40     <.0001
Cycle        Error Variance    0.00302        0.0022975             1.31     0.1893
Figure 2: Parameter Estimates
The table shows that the disturbance variances for the level and slope components are highly insignificant. This suggests that a deterministic trend model may be more appropriate. The estimated period of the cycle is about 9.7 years. Interestingly, this is similar to another well-known cycle, the sun-spot activity cycle, which is known to have a period of 9 to 11 years. This provides some support for the claim that melanoma incidences are related to sun exposure. The estimate of the damping factor is 0.96, which is close to 1. This suggests that the periodic pattern of melanoma incidences does not diminish quickly.
The procedure outputs a variety of other statistics useful in model diagnostics, such as series forecasts and component estimates, which point toward the use of a deterministic trend model. You can construct this model with a fixed linear trend by holding the values of the level and slope disturbance variances fixed at zero. These types of modifications in the model specification are very easy to do in the UCM procedure. The following syntax illustrates some of this functionality.
ods html;
ods graphics on;

proc ucm data=melanoma;
   id year interval=year;
   model Incidences;
   irregular;
   level variance=0 noest;
   slope variance=0 noest;
   cycle plot=smooth;
   estimate back=5 plot=(normal acf);
   forecast lead=10 back=5 plot=decomp;
run;

ods graphics off;
ods html close;
The ID, MODEL, and IRREGULAR statements appear as they did in the first model. In this model, however, you specify some specific options in the remaining component statements:
- In the LEVEL and SLOPE statements, the variances are set to zero to create a model with a fixed linear trend. The NOEST option is also included in these statements to hold the parameter values fixed at zero rather than estimate them.
- In the CYCLE statement, the PLOT= option requests a plot of the smoothed estimate of the cycle component.
- In the ESTIMATE statement, the BACK= option controls the span of observations used in parameter estimation. Here BACK=5 specifies a hold-out sample of five observations, which are omitted from the estimation. The PLOT= option requests residual diagnostic plots.
- In the FORECAST statement, the LEAD= option specifies the number of periods to forecast beyond the historical period; here, 10 multi-step forecasts are produced. The BACK= option tells PROC UCM to begin the multi-step forecasts five observations before the end of the historical data, which coincides with the beginning of the hold-out sample specified by BACK= in the ESTIMATE statement. Thus the 10 multi-step forecasts cover the five hold-out observations (1968 through 1972) plus five years into the future (1973 through 1977). Finally, the PLOT= option generates the series decomposition plots.
The ODS GRAPHICS ON statement invokes the ODS Graphics system. The PLOT= options in the CYCLE and FORECAST statements cause ODS to produce high-resolution plots of the specified components. The ODS GRAPHICS OFF statement turns off the graphics system. Note that ODS Graphics is experimental in SAS 9 and 9.1.
The parameter estimates for the deterministic trend model are shown in Figure 3:
Final Estimates of the Free Parameters

Component   Parameter         Estimate   Approx Std Error   t Value   Approx Pr > |t|
Irregular   Error Variance    0.05675    0.02387            2.38      0.0174
Cycle       Damping Factor    0.94419    0.08743            10.80     <.0001
Cycle       Period            9.76778    0.89263            10.94     <.0001
Cycle       Error Variance    0.00590    0.0045948          1.28      0.1994

Figure 3: Parameter Estimates for Deterministic Trend Model
The procedure prints a variety of model diagnostic statistics by default (not shown). You can also request different residual plots. The model residual histogram and autocorrelation plots that follow in Figure 4 and Figure 5 do not show any serious violations of the model assumptions.
Figure 4: Prediction Error Histogram
Figure 5: Prediction Error Autocorrelations
The component plots in the model are useful for understanding the series' behavior and detecting structural breaks in the evolution of the series. The following plot in Figure 6 shows the smoothed estimate of the cycle component in the model.
Figure 6: Smoothed Cycle Component
Forecasts for Variable Incidences

Obs   year   Forecast   Standard Error   95% Confidence Limits
33    1968   4.342356   0.30415          3.746235   4.938476
34    1969   4.550798   0.32420          3.915380   5.186216
35    1970   4.693234   0.33336          4.039858   5.346611
36    1971   4.763516   0.33408          4.108734   5.418299
37    1972   4.783619   0.33260          4.131739   5.435500
38    1973   4.792227   0.33172          4.142069   5.442386
39    1974   4.828202   0.33070          4.180042   5.476362
40    1975   4.915774   0.33029          4.268425   5.563122
41    1976   5.056911   0.33408          4.402118   5.711704
42    1977   5.232987   0.34403          4.558710   5.907264

Figure 7: Forecasts for Variable Incidences
The forecasts beyond the hold-out sample indicate that four to five incidences of melanoma per 100,000 people can be expected in each of the next five years.
You can also obtain a model-based "decomposition" of the series that shows the incremental effects of adding together the different components that are present in the model. The following trend and trend-plus-cycle plots in Figure 8 and Figure 9 show such a decomposition for the current example; a code sketch for saving these component estimates to a data set follows the figures.
Figure 8: Smoothed Trend Estimate
Figure 9: Sum of Trend and Cycle Components
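If you want the component estimates behind these plots in a data set (for custom reporting or further analysis), one option is the OUTFOR= option of the FORECAST statement. The following is a minimal sketch, assuming the same data and deterministic trend model used above; the output data set name forecast_out is arbitrary, and you should check the FORECAST statement options in the SAS/ETS User's Guide for your release.
proc ucm data=melanoma;
   id year interval=year;
   model Incidences;
   irregular;
   level variance=0 noest;
   slope variance=0 noest;
   cycle;
   estimate back=5;
   /* OUTFOR= stores series forecasts and component estimates in a data set */
   forecast lead=10 back=5 outfor=forecast_out;
run;

/* Inspect the first few rows of the stored forecasts and components */
proc print data=forecast_out(obs=10);
run;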
References
Houghton, A. N., Flannery, J., and Viola, V. M. (1980), "Malignant Melanoma in Connecticut and Denmark," International Journal of Cancer, 25, 95-114.
SAS Institute Inc. (2002), SAS/ETS User's Guide, Version 9, Cary, NC: SAS Institute Inc.
08-07-2023
11:10 AM
Overview
Figure 1: GDP and Gasoline Price
A question that frequently arises in time series analysis is whether or not one economic variable can help forecast another economic variable. For instance, it has been well documented that nearly all of the postwar economic recessions have been preceded by large increases in the price of petroleum. Does this imply that oil shocks cause recessions?
One way to address this question was proposed by Granger (1969) and popularized by Sims (1972). Testing causality, in the Granger sense, involves using F-tests to test whether lagged information on a variable Y provides any statistically significant information about a variable X in the presence of lagged X. If not, then "Y does not Granger-cause X."
There are many ways to implement a test of Granger causality. One particularly simple approach uses the autoregressive specification of a bivariate vector autoregression. Assume a particular autoregressive lag length p, and estimate the following unrestricted equation by ordinary least squares (OLS):

x_t = c_1 + α_1 x_{t-1} + ... + α_p x_{t-p} + β_1 y_{t-1} + ... + β_p y_{t-p} + u_t

Conduct an F test of the null hypothesis H_0: β_1 = β_2 = ... = β_p = 0 by estimating the following restricted equation, also by OLS:

x_t = c_0 + γ_1 x_{t-1} + ... + γ_p x_{t-p} + e_t

Compare their respective sums of squared residuals, SSE_1 (unrestricted) and SSE_0 (restricted).
If the test statistic

S_1 = [(SSE_0 - SSE_1)/p] / [SSE_1/(T - 2p - 1)],

where T is the number of observations, is greater than the specified critical value of the F(p, T - 2p - 1) distribution, then reject the null hypothesis that Y does not Granger-cause X.
It is worth noting that with lagged dependent variables, as in Granger-causality regressions, the test is valid only asymptotically. An asymptotically equivalent test is given by

S_2 = T (SSE_0 - SSE_1) / SSE_1,

which is compared with a critical value from the χ²(p) distribution.
Another caveat is that Granger-causality tests are very sensitive to the choice of lag length and to the methods employed in dealing with any non-stationarity of the time series.
Analysis
Producing the desired test statistics requires some preliminary data manipulation. Two Citibase data sets are read from the SASHELP library: CITIQTR, from which the variables DATE and GDPQ are kept, and CITIMON, from which DATE and EEGP are kept.
data gdp;
set sashelp.citiqtr;
keep date gdpq;
run;
data gp;
set sashelp.citimon;
keep date eegp;
run;
A problem arises from the fact that GDPQ is quarterly gross domestic product, measured in billions of 1987 dollars from 1980:1 to 1991:4, while EEGP is an index (base year 1987) of monthly retail gas prices from January 1980 to December 1991. In order to use these two series in this analysis, the observations must be of the same frequency. The EXPAND procedure is used to transform the monthly gas price observations into quarterly observations, which are then merged with the GDP data to create a combined data set of quarterly observations. The OBSERVED= option enables you to control the observation characteristics of the input time series and of the output series. In this example, the average of the three monthly observations is used to create each quarterly observation. Then one- and two-period lags of each series are created with the LAG and LAG2 functions.
proc expand data=gp out=temp from=month to=qtr;
convert eegp / observed=average;
id date;
run;
data combined;
merge gdp temp;
by date;
run;
data causal;
set work.combined;
gdpq_1 = lag(gdpq);
gdpq_2 = lag2(gdpq);
eegp_1 = lag(eegp);
eegp_2 = lag2(eegp);
run;
After the data are processed, the unrestricted and restricted models are estimated with the AUTOREG procedure, and output data sets containing the residuals from each regression are created.
* unrestricted model;
proc autoreg data=causal;
model gdpq = gdpq_1 gdpq_2 eegp_1 eegp_2;
output out=out1 r=e1; /* output residuals */
run;
* restricted model;
proc autoreg data=out1;
model gdpq = gdpq_1 gdpq_2;
output out=out2 r=e0; /* output residuals */
run;
These residuals can then be read into vectors in PROC IML and used to calculate the test statistics with matrix algebra.
ods select Iml._LIT1010
Iml.TEST1_P_VAL1
Iml.TEST2_P_VAL2;
ods html body='exgran01.htm';
* compute test;
proc iml;
start main;
use out1;
read all into e1 var{e1};
close out1;
use out2;
read all into e0 var{e0};
close out2;
p = 2; /* # of lags */
T = nrow(e1); /* # of observations */
sse1 = ssq(e1);
sse0 = ssq(e0);
* F test;
test1 = ((sse0 - sse1)/p)/(sse1/(T - 2*p - 1));
p_val1 = 1 - probf(test1,p,T - 2*p - 1);
* asymptotically equivalent test;
test2 = (T * (sse0 - sse1))/sse1;
p_val2 = 1 - probchi(test2,p);
print "IML Result",, test1 p_val1,,
test2 p_val2;
finish;
run;
quit;
ods html close;
IML Result

test1       p_val1
3.8623494   0.0286651

test2       p_val2
8.6229196   0.013414

Figure 2: Bivariate Granger Causality Test Results
As shown in Figure 2, with p (the number of lags included in the regressions) set equal to two, both test statistics are significant at the 5% level. Thus, it would seem that past values of petroleum prices help to predict GDP.
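Because Granger-causality tests are sensitive to the choice of lag length, it can be useful to cross-check the result with a different implementation or lag order. One option is the CAUSAL statement in PROC VARMAX, which carries out a Granger-causality test directly from a fitted vector autoregression. The following is a minimal sketch, not part of the original example, that assumes the WORK.COMBINED data set created above; check the PROC VARMAX documentation for the exact syntax available in your release.
proc varmax data=combined;
   id date interval=qtr;
   /* Fit a bivariate VAR(2); change P= to examine other lag lengths */
   model gdpq eegp / p=2;
   /* Null hypothesis: the GROUP2 variable (eegp) does not Granger-cause
      the GROUP1 variable (gdpq) */
   causal group1=(gdpq) group2=(eegp);
run;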
References
Ashley, R. (1988), "On the Relative Worth of Recent Macroeconomic Forecasts," International Journal of Forecasting, 4, 363-376.
Ashley, R., Granger, C.W.J., and Schmalensee, R. (1980), "Advertising and Aggregate Consumption: An Analysis of Causality," Econometrica, 48, 1149-1168.
Berndt, E. (1991), The Practice of Econometrics: Classic and Contemporary, New York: Addison-Wesley.
Geweke, J., Meese, R., and Dent, W. (1983), "Comparing Alternative Tests of Causality in Temporal Systems: Analytic Results and Experimental Evidence," Journal of Econometrics, 21, 161-194.
Granger, C.W.J. (1969), "Investigating Causal Relations by Econometric Models and Cross-Spectral Methods," Econometrica, 37, 424-438.
Hamilton, J. (1994), Time Series Analysis, Princeton, NJ: Princeton University Press.
Sims, C. (1972), "Money, Income and Causality," American Economic Review, 62, 540-552.
Sims, C. (1980), "Macroeconomics and Reality," Econometrica, 48, 1-48.
03-01-2023
02:29 PM
3 Likes
Members of the SAS Analytics Center of Excellence and the SAS Internet of Things departments put together a tutorial that describes how they envision digital twins being developed in industry and the pivotal role simulation plays in their development. Using supply chain digital twins as an example application, they introduce a digital twin framework that simulation practitioners might find useful when developing their own digital twin solutions to understand what did happen, predict what may happen, and determine how to fix future problems before they happen. They conclude with simulation research streams that contribute to the use of simulation in digital twin development. This tutorial was initially presented at the Winter Simulation Conference (WSC) 2022.
02-10-2023
12:00 PM
As one of the most prestigious machine learning and AI conferences, NeurIPS 2022 attracted researchers and practitioners from all over the world, who exchanged the latest research ideas and progress in machine learning (ML), deep learning, and AI. Analytics R&D director Yan Xu and Analytics R&D manager Brandon Reese attended NeurIPS 2022, where SAS made several notable contributions. One was a workshop, co-organized by Yan and Brandon, that concentrated on higher-order optimization in machine learning. The other was a team of Brandon, Yan, and other SAS colleagues placing fifth in the EURO Meets NeurIPS 2022 Vehicle Routing Competition. Read more about the motivation and highlights of the higher-order optimization workshop, and find out what the SAS team accomplished during the Vehicle Routing Competition!
11-30-2022
12:00 PM
In a SAS Data Science blog post, SAS' Gunce Walton introduces a new scoring capability, explains how it uses deep neural networks (DNNs), and shares use cases for it with PROC DEEPCAUSAL.
11-28-2022
11:14 AM
1 Like
Billy Dickerson and Connie Dunbar of SAS R&D chronicle key challenges and lessons learned in SAS' journey to continuous integration (CI) and continuous delivery (CD). Delivering software to our customers faster and more frequently is just one aspect of SAS' transformation. Read more.
09-08-2022
10:29 AM
2 Likes
By Rajesh Selukar
Introduction
PROC CSSM has several new performance enhancements. Of these, the introduction of a new, more efficient likelihood gradient algorithm has the most significant impact on its scalability. In the latest version of PROC CSSM (the 2022.1.3 release), you might notice significant speedups for some of your programs. Generally, noticeable speedups can be seen for programs with large data sets as well as large and complex state space models. For instance, if you run the example A Dynamic Factor Model for the Yield Curve, you will notice that your program runs about four times faster than in previous releases. When you run a slightly modified version of this program that uses a non-diagonal autoregressive matrix in the VARMA specification of the zeta factor (state zeta(3) type=VARMA(p=1) cov(g);), it runs about eight times as fast. Moreover, this slightly more complex state space model fits the data better. Large data sets and complex state space models are common in many fields. For example, in the analysis of data generated by multi-level, multi-subject longitudinal studies, complex state space models are the norm. For more information on the state space modeling of such hierarchical, longitudinal data, see Functional Modeling of Longitudinal Data with the SSM Procedure.
In some situations, the scalability improvements in the latest version of PROC CSSM can reduce the program's running times from hours to minutes and from days to hours. The new algorithm is based on the backpropagation (or adjoint) method in the algorithmic-differentiation (AD) literature. Prior to this, PROC CSSM used a Kalman filter-smoother (KFS) based gradient algorithm. The new backpropagation-based algorithm turns out to be much more efficient than the KFS-based algorithm, particularly when at least some of the model parameters affect the transition matrix of the state space model. Next, let’s summarize the results of two small performance studies that investigated the performance gains due to the new gradient algorithm.
Time Comparison of Old and New Gradient Algorithms
When the parameter vector does not have any parameter that affects the transition matrix, the computation times of the new and old gradient algorithms are expected to be comparable. Nevertheless, our initial testing suggests that the new algorithm is usually faster than the old algorithm even in such cases. When the parameter vector has some parameters that affect the transition matrix, the new algorithm is often significantly faster than the old one. To give some idea of the performance gains due to the new algorithm in practical scenarios, let me present the results of two small performance studies. In these studies, the timings of the gradient computation and of the overall parameter estimation are compared for the old and the new algorithm under identical settings. Note that, due to finite-precision arithmetic, the gradient vectors produced by these two mathematically equivalent algorithms can differ slightly. This in turn can lead to the following situations during parameter estimation:
- Nearly identical parameter estimates are obtained in the same number of iterations.
- Nearly identical parameter estimates are obtained in a different number of iterations.
- The parameter estimates differ significantly. This can happen because the optimization process fails to converge in one or both cases, or because the two runs converge to different local optima.
The first two situations are more common when the model and the data are reasonably in accord and the parameter estimation problem is well posed. The problems included in the performance studies were chosen with this consideration in mind.
The first performance study covers all the illustrative examples in the documentation for PROC CSSM, including the "Getting Started" example. In all, this study has 18 different parameter estimation problems. While relatively small in problem size, the illustrative examples cover various state space modeling scenarios. A summary of the findings for this study is as follows:
- For all examples, nearly identical parameter estimates were obtained; in a few cases the number of iterations needed to reach the optimum differed.
- In 17 of the 18 cases, the new algorithm computed the gradient faster than the old algorithm, and in the one case that defied this pattern the new algorithm was only marginally slower.
- With the new algorithm, the combined parameter estimation time for these 18 examples was less than half the combined time with the old algorithm; that is, an overall speedup of more than 2x was achieved.
The second study considered moderate-sized problems of diverse types from real-life settings. For SSMs, the computational cost of parameter estimation depends on factors such as nObs (the number of observations in the data set), nParm (the size of the parameter vector), stateSize (the size of the latent state vector), and so on. In the comparison between the old and the new algorithm, nTParm, the number of parameters that affect the transition matrix, is also a key factor. In this study, the problems were chosen with different values for these factors. The findings are summarized in Table 1. The Speed-Up column in Table 1 shows the ratio of parameter estimation times between the old and the new algorithm. It shows that under a variety of situations the new algorithm provides a considerable speedup over the old one.
Problem #   nObs   nParm   nTParm   stateSize   Speed-Up
1           1380   25      6        140         9.0x
2           6324   39      9        6           8.1x
3           1581   8       0        349         1.8x
4           4360   4       0        547         2.1x
5           2004   7       2        95          3.1x
In conclusion, I would love to hear about your experience, whether good or bad, of working with this change in PROC CSSM. Drop me a note at Rajesh Selukar@sas.com.
09-01-2022
01:59 PM
4 Likes
SAS Clinical Enrollment Simulation Cloud is a powerful tool developed specifically for clinical trial enrollment planning in the pharmaceutical industry. However, it lacked the capability to quickly answer the what-if questions that are important for problem diagnosis and management of a clinical trial. SAS has now developed a new sensitivity analysis technology for use with the SAS Clinical Enrollment Simulation Cloud. It can conduct local sensitivity analysis to answer the what-if questions for any number of stochastic inputs without running additional simulations beyond the basic scenario. Instead of directly opening more sites to improve only the most important KPI—the time it takes to enroll a given target number of patients—the sensitivity measures suggest smart resource management and effort allocation strategies that are both time and cost efficient. Read more about sensitivity analysis in clinical trial simulation at SAS.
08-18-2022
10:54 AM
Here at SAS, we believe curiosity is at the heart of human progress. SAS is filled with great researchers, engineers, and scientists who, in turn, develop great products. Coming up with new and innovative approaches to help our customers is one of the things we do best. In many cases, our innovations result in patents. In 2021, SAS filed 59 patent applications, and 93 new patents were issued. The Analytics R&D Division leads the pack in patent applications. Take a look at what we have invented.