<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Different ADF test results between SAS and Python in SAS Forecasting and Econometrics</title>
    <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985580#M5034</link>
    <description>&lt;P&gt;Sorry for my late response.&lt;/P&gt;
&lt;P&gt;Here is the documentation on the computation of ADF test p values in SAS:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect019.htm#etsug_macros000931" target="_blank"&gt;SAS Help Center: PROBDF Function for Dickey-Fuller Tests&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;**********************************************************************************************************&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;The PROBDF function is calculated from approximating functions fit to empirical quantiles that are produced by a Monte Carlo simulation that employs&lt;SPAN&gt;&amp;nbsp;10^8&amp;nbsp;&lt;/SPAN&gt;replications for each simulation. Separate simulations were performed for selected values of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and for&lt;SPAN&gt;&amp;nbsp;d = 1, 2, 4, 6, 12&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(where&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;are the second and third arguments to the PROBDF function).&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;The maximum error of the PROBDF function is approximately&lt;SPAN&gt;&amp;nbsp;+-10^-3&amp;nbsp;&lt;/SPAN&gt;for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the set (1,2,4,6,12) and can be slightly larger for other&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;values. Because the number of simulation replications used to produce the PROBDF function is much greater than the 60,000 replications used by Dickey and colleagues (Dickey and Fuller&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A tabindex="0" href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect025.htm#etsug_macrosdick_d79" target="_blank"&gt;1979&lt;/A&gt;; Dickey, Hasza, and Fuller&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A tabindex="0" href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect025.htm#etsug_macrosdick_d84" target="_blank"&gt;1984&lt;/A&gt;), the PROBDF function can be expected to produce results that are substantially more accurate than the critical values reported in those papers.&lt;/P&gt;
&lt;P&gt;**************************************************************************************************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Different software packages may implement different algorithms, and even with similar algorithms there can still be minor differences in the implementation details, so the computed p-values may not be identical, as in the case you observe.&lt;/P&gt;
&lt;P&gt;I hope this helps.&lt;/P&gt;</description>
    <pubDate>Fri, 27 Mar 2026 23:21:38 GMT</pubDate>
    <dc:creator>SASCom1</dc:creator>
    <dc:date>2026-03-27T23:21:38Z</dc:date>
    <item>
      <title>Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984920#M5028</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I'm a new SAS user. I have been trying to replicate the tests I conducted in Python (e.g. normality, Dicky Fuller Test) in SAS. However, I have noticed that there are differences in results (Pr &amp;lt; Tau) and ADF stat between SAS and Python. For clarity, I used PROC ARIMA for ADF in SAS and adfuller package in Python. May I know why there are differences in result?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Appreciate if anyone can respond to this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 06:32:36 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984920#M5028</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-18T06:32:36Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984924#M5029</link>
      <description>&lt;P&gt;Generic question gets a generic response: differences arise from different details in the programming and often the machines the code executes on. Things like number of decimal places maintained internally for computations can easily effect results even when using the same algorithms and just because a statistical test has the same name in different packages there is very likely no commonality at all points in the way the algorithm was programmed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;HOW much difference might be the question that you need an answer to.&amp;nbsp; So perhaps sharing the results in question is a place to start.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Also, some programs will report the results using different defaults. For instance, in a logistic regression with a true/false type outcome, one program may default to modeling the "true" value and the other the "false". So the same data would tend to report something that looks like a complement of the other (70% true or 30% false, for example).&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;For serious details you might need to include:&lt;/P&gt;
&lt;P&gt;Your data&lt;/P&gt;
&lt;P&gt;Your code for both approaches&lt;/P&gt;
&lt;P&gt;The output&lt;/P&gt;
&lt;P&gt;The research question you want answered, so we can validate that the SAS approach (at least) is using appropriate options.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Some of the SAS procedures have a "Details" section in the online help that may include some of the computation details.&lt;/P&gt;
      <pubDate>Wed, 18 Mar 2026 07:59:50 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984924#M5029</guid>
      <dc:creator>ballardw</dc:creator>
      <dc:date>2026-03-18T07:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984925#M5030</link>
      <description>&lt;P&gt;You used adfuller package in Python, right?&lt;/P&gt;
&lt;P class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;Link to the Python documentation:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;&lt;A class="csA2BF052F" href="https://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html" target="_blank"&gt;&lt;SPAN class="csE3DD5F11"&gt;https://www.statsmodels.org/dev/generated/statsmodels.tsa.stattools.adfuller.html&lt;/SPAN&gt;&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="cs95E872D0"&gt;&amp;nbsp;&lt;/P&gt;
&lt;UL&gt;
&lt;LI class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;What's your setting for the regression parameter? &lt;BR /&gt;Note that SAS does not support the ctt setting (&lt;SPAN&gt;“ctt” : constant, and linear and quadratic trend).&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;What's your setting for maxlag parameter?&lt;BR /&gt;Note: the number of&amp;nbsp;&lt;SPAN&gt;augmenting lags in the underlying regression model is probably different between SAS PROC ARIMA and Python.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;&lt;SPAN&gt;I think it's best you put your Python code AND your SAS code in the reply.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL class="lia-list-style-type-square"&gt;
&lt;LI class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;&lt;SPAN&gt;For your Python code, you can use the "Insert Code" icon ( &amp;lt;/&amp;gt; ) in the header with icons.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;&lt;SPAN&gt;For your SAS code, you can use the "Insert SAS Code" icon ( the little running man to the right of &amp;lt;/&amp;gt; ) in the header with icons.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P class="cs95E872D0"&gt;&lt;SPAN class="cs53F207AF"&gt;&lt;SPAN&gt;BR, Koen&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 08:15:23 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984925#M5030</guid>
      <dc:creator>sbxkoenk</dc:creator>
      <dc:date>2026-03-18T08:15:23Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984926#M5031</link>
      <description>&lt;P&gt;Thank you for your response.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Further details on this:&lt;/P&gt;&lt;P&gt;1. Data (as per Excel attachment)&lt;/P&gt;&lt;P&gt;2. You may find the code as below:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;a. SAS code:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;/*stationarity test*/&lt;BR /&gt;proc arima data=WORK.QUARTERLY;&lt;BR /&gt;identify var= &amp;amp;&amp;amp;var&amp;amp;i stationarity=(adf=(0));&lt;BR /&gt;ods output stationaritytests=stationary_data;&lt;BR /&gt;run;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;b. Python code:&lt;BR /&gt;adf_result_var1 = adfuller(raw_ln_hfa[combo[0]], maxlag=0, regression='c', autolag=None)&lt;BR /&gt;adf_result_var2 = adfuller(raw_ln_hfa[combo[1]], maxlag=0, regression='c', autolag=None)&lt;BR /&gt;adf_var1 = adf_result_var1[0] # ADF statistic&lt;BR /&gt;adf_var2 = adf_result_var2[0]&lt;BR /&gt;adf_pval_var1 = adf_result_var1[1] # p-value&lt;BR /&gt;adf_pval_var2 = adf_result_var2[1]&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;3. Output:&lt;/P&gt;&lt;P&gt;a. SAS p-value (from pr &amp;lt; tau): 0.51376756197933&lt;/P&gt;&lt;P&gt;b. Python p-value:&amp;nbsp;0.521899648105788&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;4. Research question: to reject or accept whether data is stationary.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 08:26:24 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984926#M5031</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-18T08:26:24Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984927#M5032</link>
      <description>&lt;PRE&gt;adf_result_var1 = adfuller(raw_ln_hfa[combo[0]], maxlag=0, regression='c', autolag=None)
adf_result_var2 = adfuller(raw_ln_hfa[combo[1]], maxlag=0, regression='c', autolag=None)
adf_var1 = adf_result_var1[0] # ADF statistic
adf_var2 = adf_result_var2[0]
adf_pval_var1 = adf_result_var1[1] # p-value
adf_pval_var2 = adf_result_var2[1]&lt;/PRE&gt;&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;/*stationarity test*/
proc arima data=WORK.QUARTERLY;
identify var= &amp;amp;&amp;amp;var&amp;amp;i stationarity=(adf=(0));
ods output stationaritytests=stationary_data;
run;&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;Hi,&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is my code for your reference.&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 08:30:05 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984927#M5032</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-18T08:30:05Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984934#M5033</link>
      <description>&lt;P&gt;&lt;SPAN&gt;An Augmented Dickey-Fuller (ADF) test with zero augmenting lags is equivalent to the original Dickey-Fuller (DF) test.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;The ADF test performed in PROC ARIMA is based on the description of this test in Hamilton (1994).&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Hamilton, J.&amp;nbsp;D. (1994). &lt;/SPAN&gt;&lt;SPAN&gt;&lt;EM&gt;Time Series Analysis&lt;/EM&gt;&lt;/SPAN&gt;&lt;SPAN&gt;. Princeton, NJ: Princeton University Press.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Note there are different ways in SAS to perform ADF:&lt;/SPAN&gt;&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;&lt;SPAN&gt;PROC ARIMA&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;PROC AUTOREG&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;A href="https://go.documentation.sas.com/doc/fr/pgmsascdc/v_067/etsug/etsug_macros_sect019.htm" target="_blank"&gt;SAS Help Center: PROBDF Function for Dickey-Fuller Tests&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN&gt;&lt;A href="https://go.documentation.sas.com/doc/fr/pgmsascdc/v_067/etsug/etsug_macros_sect010.htm" target="_blank"&gt;SAS Help Center: DFTEST Macro&lt;/A&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;I guess they all provide the same p-values.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;BR /&gt;No idea why you notice a difference between SAS and Python in your p-values for DF.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; a. SAS p-value (from pr &amp;lt; tau): 0.51376756197933&lt;/P&gt;
&lt;P&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; b. Python p-value:&amp;nbsp;0.521899648105788&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Both p-values are very close to each other and there is no doubt about the conclusion (the same conclusion for both), but they may still be far enough apart to be odd.&lt;BR /&gt;It could be due to all sorts of things. Please note that the p-values are derived from a huge number of simulation replications.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Maybe&amp;nbsp;&lt;a href="https://communities.sas.com/t5/user/viewprofilepage/user-id/82879"&gt;@SASCom1&lt;/a&gt;&amp;nbsp;can help further?&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;If you want to open a Technical Support ticket,&lt;BR /&gt;&lt;/SPAN&gt;&lt;SPAN&gt;here is a link for your convenience: &lt;A href="https://support.sas.com/en/technical-support.html#contact" target="_blank"&gt;https://support.sas.com/en/technical-support.html#contact&lt;/A&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;(&lt;SPAN&gt;SAS Technical Support)&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;BR, Koen&lt;/P&gt;</description>
      <pubDate>Wed, 18 Mar 2026 11:06:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/984934#M5033</guid>
      <dc:creator>sbxkoenk</dc:creator>
      <dc:date>2026-03-18T11:06:14Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985580#M5034</link>
      <description>&lt;P&gt;Sorry for my late response.&lt;/P&gt;
&lt;P&gt;Here is the documentation on the computation of ADF test p values in SAS:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect019.htm#etsug_macros000931" target="_blank"&gt;SAS Help Center: PROBDF Function for Dickey-Fuller Tests&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;**********************************************************************************************************&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;The PROBDF function is calculated from approximating functions fit to empirical quantiles that are produced by a Monte Carlo simulation that employs&lt;SPAN&gt;&amp;nbsp;10^8&amp;nbsp;&lt;/SPAN&gt;replications for each simulation. Separate simulations were performed for selected values of&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and for&lt;SPAN&gt;&amp;nbsp;d = 1, 2, 4, 6, 12&lt;/SPAN&gt;&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;(where&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;n&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;and&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;are the second and third arguments to the PROBDF function).&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P class="xisDoc-paragraph"&gt;The maximum error of the PROBDF function is approximately&lt;SPAN&gt;&amp;nbsp;+-10^-3&amp;nbsp;&lt;/SPAN&gt;for&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;in the set (1,2,4,6,12) and can be slightly larger for other&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class="aa-mathtext"&gt;d&lt;/SPAN&gt;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;values. Because the number of simulation replications used to produce the PROBDF function is much greater than the 60,000 replications used by Dickey and colleagues (Dickey and Fuller&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A tabindex="0" href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect025.htm#etsug_macrosdick_d79" target="_blank"&gt;1979&lt;/A&gt;; Dickey, Hasza, and Fuller&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;A tabindex="0" href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_macros_sect025.htm#etsug_macrosdick_d84" target="_blank"&gt;1984&lt;/A&gt;), the PROBDF function can be expected to produce results that are substantially more accurate than the critical values reported in those papers.&lt;/P&gt;
&lt;P&gt;**************************************************************************************************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Different software packages may implement different algorithms, and even with similar algorithms there can still be minor differences in the implementation details, so the computed p-values may not be identical, as in the case you observe.&lt;/P&gt;
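As an illustration of where the Python side's numbers come from (assuming the statsmodels adfuller function was used): statsmodels maps the tau statistic to a p-value via MacKinnon's response-surface approximation, while PROBDF interpolates quantiles estimated by Monte Carlo simulation, which is one plausible source of the small gap.

```python
from statsmodels.tsa.adfvalues import mackinnonp

# statsmodels derives its ADF p-values from MacKinnon's response-surface
# approximation; SAS's PROBDF instead uses quantiles estimated by its own
# Monte Carlo simulations, so the same tau can map to slightly different
# p-values in the two packages.
tau = -2.88491230787933  # a tau value reported later in this thread
p = mackinnonp(tau, regression='c', N=1)
print(p)  # close to the 0.0471 p-value Python reports for this tau
```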
&lt;P&gt;I hope this helps.&lt;/P&gt;</description>
      <pubDate>Fri, 27 Mar 2026 23:21:38 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985580#M5034</guid>
      <dc:creator>SASCom1</dc:creator>
      <dc:date>2026-03-27T23:21:38Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985595#M5035</link>
      <description>I see. If we want to verify SAS calculation independently through other&lt;BR /&gt;means e.g. R/Python/Excel, how can we do that? Is it through Tau statistic?&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Mar 2026 00:06:43 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985595#M5035</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-30T00:06:43Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985603#M5036</link>
      <description>&lt;P&gt;There are three kinds of tests under the ADF tests: rho test, tau test, and F test.&lt;/P&gt;
&lt;UL&gt;
&lt;LI&gt;The rho test is the regression coefficient test, which is also called the normalized bias test.&lt;/LI&gt;
&lt;LI&gt;The tau test is the studentized test.&lt;/LI&gt;
&lt;LI&gt;The F test is a joint test for unit root.&lt;/LI&gt;
&lt;/UL&gt;
&lt;P&gt;&lt;SPAN&gt;For more information about test statistics under the ADF tests, see &lt;/SPAN&gt;&lt;SPAN&gt;the section&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://go.documentation.sas.com/doc/en/pgmsascdc/v_073/etsug/etsug_arima_details07.htm" target="_blank"&gt;SAS Help Center: Stationarity Tests&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;And here's a paper by&amp;nbsp;David A. Dickey himself.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt; SAS Global Forum 2016 proceedings&lt;BR /&gt;&amp;gt;&amp;gt; Paper 7080-2016 &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt; What’s the Difference? &lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt; David A. Dickey, NC State University&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&amp;gt;&amp;gt;&amp;nbsp;&lt;A href="https://support.sas.com/resources/papers/proceedings16/7080-2016.pdf" target="_blank"&gt;https://support.sas.com/resources/papers/proceedings16/7080-2016.pdf&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Prof. Dickey claims on p.14:&amp;nbsp;&amp;nbsp;"The taus and their associated pvalues are the most commonly used of these tests."&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;BR, Koen&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 03:53:41 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985603#M5036</guid>
      <dc:creator>sbxkoenk</dc:creator>
      <dc:date>2026-03-30T03:53:41Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985606#M5037</link>
      <description>&lt;P&gt;Thank you for your response. Since we cannot compare p-value directly between the two platforms, how do we at least determine the consistency of pass/fail stationary test between the two platforms? Is there a way to extract critical value from SAS so I can determine whether tau statistic passes or fails the 1%, 5% and 10% of critical value thresholds and can I extract the coefficients used to approximate p value from SAS?&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 08:59:57 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985606#M5037</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-30T08:59:57Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985656#M5038</link>
      <description>&lt;P&gt;SAS only outputs the rho, tau, and F test statistics together with their corresponding p values, it does not print the 1%,5%,or 10% critical values. You make conclusions to reject or not reject the null using the computed p values and compare with your desired significance level.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Mar 2026 22:45:14 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985656#M5038</guid>
      <dc:creator>SASCom1</dc:creator>
      <dc:date>2026-03-30T22:45:14Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985657#M5039</link>
      <description>&lt;P&gt;Noted on this response, but my issue arises when the p-value results lead&lt;BR /&gt;to different conclusion (e.g. Python rejects null but SAS accepts null). How can I reconcile these differences?&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 01:39:08 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985657#M5039</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-03-31T01:39:08Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985670#M5040</link>
      <description>&lt;P&gt;I hope it happens extremely rarely, but of course – at some point – such situations (different conclusions) will arise.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;You could say that you’re in a sort of meta-analysis scenario (a bit of a stretch, I agree) and you can opt for a weighted p-value. With equal weights, you get an average p-value.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It’s a tricky topic, ... but you’ll have to choose a reconciliation scenario if you continue to work with both SAS and Python to investigate stationarity.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Good luck,&lt;/P&gt;
&lt;P&gt;Koen&lt;/P&gt;</description>
      <pubDate>Tue, 31 Mar 2026 07:39:53 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985670#M5040</guid>
      <dc:creator>sbxkoenk</dc:creator>
      <dc:date>2026-03-31T07:39:53Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985728#M5041</link>
      <description>&lt;P&gt;What p values did you get from the two software packages for this case, when they lead to different conclusions? Are they using the same codes as you provided earlier in this thread? Can you provide more details on the output and the example data?&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 00:41:52 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985728#M5041</guid>
      <dc:creator>SASCom1</dc:creator>
      <dc:date>2026-04-01T00:41:52Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985732#M5042</link>
      <description>&lt;P&gt;Yes, I used the same codes as shown earlier in the thread. I have attached the data and added the output for your reference.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;Result&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;&lt;P&gt;Tau (Python):&amp;nbsp;-2.88491230787933&lt;/P&gt;&lt;P&gt;Tau (SAS):&amp;nbsp; -2.88&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The p-value at 0.05 significance level:&lt;/P&gt;&lt;P&gt;Python: 0.0471308321494495&lt;/P&gt;&lt;P&gt;SAS:&amp;nbsp;0.0551742457504444&lt;/P&gt;</description>
      <pubDate>Wed, 01 Apr 2026 04:41:31 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985732#M5042</guid>
      <dc:creator>asiahty</dc:creator>
      <dc:date>2026-04-01T04:41:31Z</dc:date>
    </item>
    <item>
      <title>Re: Different ADF test results between SAS and Python</title>
      <link>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985809#M5043</link>
      <description>&lt;P&gt;Although algorithm differences in software packages could lead to slightly different p values for the ADF test, it is important that you choose the appropriate model and appropriate number of augmenting lags when performing the ADF test and comparing between software packages, since these factors can greatly impact the ADF test results.&amp;nbsp; &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I experimented a bit with the data you provided. Following the method described in Dr. Dickey's paper (in Section 7)&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://support.sas.com/resources/papers/proceedings/proceedings/sugi30/192-30.pdf" target="_blank"&gt;https://support.sas.com/resources/papers/proceedings/proceedings/sugi30/192-30.pdf&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I experimented with the choice of the number of augmenting lags:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;PRE&gt;&lt;CODE class=" language-sas"&gt;data b;
set yourdata;
lagy = lag(y);
det  = dif(y);
det1 = lag(det);
det2 = lag(det1);
det3 = lag(det2);
det4 = lag(det3);
run;

proc reg data=b outest=d2_lag2;
model det = lagy det1 det2 det3 det4;
test det1 = 0, det2 = 0, det3 = 0, det4 = 0;
run;&lt;/CODE&gt;&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The parameter estimates:&lt;/P&gt;
&lt;DIV class="branch"&gt;
&lt;DIV&gt;
&lt;DIV align="center"&gt;
&lt;TABLE class="table" summary="Procedure Reg: Parameter Estimates" frame="box" cellspacing="0" cellpadding="5"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH class="c b header" colspan="6" scope="colgroup"&gt;Parameter Estimates&lt;/TH&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l b header" scope="col"&gt;Variable&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;DF&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;Parameter&lt;BR /&gt;Estimate&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;Standard&lt;BR /&gt;Error&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;t&amp;nbsp;Value&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;Pr&amp;nbsp;&amp;gt;&amp;nbsp;|t|&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;Intercept&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;1.01485&lt;/TD&gt;
&lt;TD class="r data"&gt;1.11803&lt;/TD&gt;
&lt;TD class="r data"&gt;0.91&lt;/TD&gt;
&lt;TD class="r data"&gt;0.3706&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;lagy&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;-0.25648&lt;/TD&gt;
&lt;TD class="r data"&gt;0.16629&lt;/TD&gt;
&lt;TD class="r data"&gt;-1.54&lt;/TD&gt;
&lt;TD class="r data"&gt;0.1325&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;det1&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;0.13969&lt;/TD&gt;
&lt;TD class="r data"&gt;0.16457&lt;/TD&gt;
&lt;TD class="r data"&gt;0.85&lt;/TD&gt;
&lt;TD class="r data"&gt;0.4021&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;det2&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;-0.09814&lt;/TD&gt;
&lt;TD class="r data"&gt;0.16308&lt;/TD&gt;
&lt;TD class="r data"&gt;-0.60&lt;/TD&gt;
&lt;TD class="r data"&gt;0.5514&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;det3&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;0.07849&lt;/TD&gt;
&lt;TD class="r data"&gt;0.15074&lt;/TD&gt;
&lt;TD class="r data"&gt;0.52&lt;/TD&gt;
&lt;TD class="r data"&gt;0.6061&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;det4&lt;/TH&gt;
&lt;TH class="r data"&gt;1&lt;/TH&gt;
&lt;TD class="r data"&gt;-0.50851&lt;/TD&gt;
&lt;TD class="r data"&gt;0.15089&lt;/TD&gt;
&lt;TD class="r data"&gt;-3.37&lt;/TD&gt;
&lt;TD class="r data"&gt;
&lt;P&gt;0.0019&lt;/P&gt;
&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Notice the strong significance of the det4 parameter.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;The F test results:&lt;/P&gt;
&lt;DIV class="branch"&gt;
&lt;DIV&gt;
&lt;DIV align="center"&gt;
&lt;TABLE class="table" summary="Procedure Reg: Results" frame="box" cellspacing="0" cellpadding="5"&gt;
&lt;THEAD&gt;
&lt;TR&gt;
&lt;TH class="c b header" colspan="5" scope="colgroup"&gt;Test 1 Results for Dependent Variable det&lt;/TH&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l b header" scope="col"&gt;Source&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;DF&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;Mean&lt;BR /&gt;Square&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;F Value&lt;/TH&gt;
&lt;TH class="r b header" scope="col"&gt;Pr&amp;nbsp;&amp;gt;&amp;nbsp;F&lt;/TH&gt;
&lt;/TR&gt;
&lt;/THEAD&gt;
&lt;TBODY&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;Numerator&lt;/TH&gt;
&lt;TD class="r data"&gt;4&lt;/TD&gt;
&lt;TD class="r data"&gt;120.04426&lt;/TD&gt;
&lt;TD class="r data"&gt;3.97&lt;/TD&gt;
&lt;TD class="r data"&gt;0.0097&lt;/TD&gt;
&lt;/TR&gt;
&lt;TR&gt;
&lt;TH class="l rowheader" scope="row"&gt;Denominator&lt;/TH&gt;
&lt;TD class="r data"&gt;33&lt;/TD&gt;
&lt;TD class="r data"&gt;30.21161&lt;/TD&gt;
&lt;TD class="r data"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;TD class="r data"&gt;&amp;nbsp;&lt;/TD&gt;
&lt;/TR&gt;
&lt;/TBODY&gt;
&lt;/TABLE&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;/DIV&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We strongly reject the null hypothesis that all four lagged differences have zero coefficients. I also ran additional F tests on subsets of det1 through det4; the results all suggest that 4 augmenting lags may be needed.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I also experimented with the autolag = 'AIC' option in your Python code; it likewise selected 4 lags, confirming the choice above.&lt;/P&gt;
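&lt;P&gt;As a sketch of what autolag = 'AIC' is doing (statsmodels is not assumed available here, so this numpy-only illustration re-implements the idea on a synthetic series; the exact AIC constant differs across packages): every candidate lag order from 0 to maxlag is fit on a common sample trimmed to maxlag observations, and the order with the smallest AIC wins.&lt;/P&gt;

```python
import numpy as np

def select_adf_lags(y, maxlag):
    # AIC-based choice of the augmenting lag order, in the spirit of
    # statsmodels' adfuller(..., autolag='AIC'): fit every candidate
    # order on the same sample (trimmed to maxlag) and keep the minimum.
    dy = np.diff(y)
    Y = dy[maxlag:]
    n = len(Y)
    scored = []
    for p in range(maxlag + 1):
        cols = [np.ones_like(Y), y[maxlag:-1]]                # intercept, lagged level
        cols += [dy[maxlag - i:-i] for i in range(1, p + 1)]  # lagged differences
        X = np.column_stack(cols)
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        rss = np.sum((Y - X @ beta) ** 2)
        aic = n * np.log(rss / n) + 2 * X.shape[1]            # AIC up to a constant
        scored.append((aic, p))
    return min(scored)[1]

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=200))     # synthetic random walk
best_p = select_adf_lags(y, maxlag=8)
```

&lt;P&gt;On real data one would then rerun the ADF test with the selected order; in this thread that selection was 4, matching the F test evidence above.&lt;/P&gt;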
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So, if we choose 4 augmenting lags instead of 0 in your original code and then compare the ADF test results, the tau statistic is -1.54, with a p value of 0.5021 in PROC ARIMA and 0.51246 in Python. Neither package comes close to rejecting the unit-root null, so both point to non-stationarity. The slight difference in the p values between the packages is of no importance in this case.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I hope this helps.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
      <pubDate>Wed, 01 Apr 2026 23:57:24 GMT</pubDate>
      <guid>https://communities.sas.com/t5/SAS-Forecasting-and-Econometrics/Different-ADF-test-results-between-SAS-and-Python/m-p/985809#M5043</guid>
      <dc:creator>SASCom1</dc:creator>
      <dc:date>2026-04-01T23:57:24Z</dc:date>
    </item>
  </channel>
</rss>

