


Posted 08-29-2023 01:36 PM (203 views)

Hello

My code is supposed to simulate two samples from an exponential distribution under progressive Type-II censoring. Then I must modify the samples by truncating each one at a particular time and replacing the truncated values with new ones. The final samples are used to estimate the overlap function between the two samples. I have three truncation choices: after 30%, 80%, or 0% of the data.

My problem is that all results are almost identical, whether I truncate the data or not!

I also found that if I fix the truncation percentage and change the censoring scheme (i.e., censor the data at the beginning, at the end, or in the middle), the results remain unchanged, which also doesn't make any sense!

I would greatly appreciate any help in resolving this matter.

```
proc iml;
*Results for T1, T2 and T3 are strikingly similar; I checked everything and cannot find the problem.
Results for Ti, i=1,2,3 are also strikingly similar when the censoring scheme is changed!;
**********************************************************************************************************************;
seed=0; NN=10000;*NN is the number of simulations;
theta1=1; theta2=5;
case=17; n1=80; m1=50; k=J(m1,1,0); K[m1]=n1-m1; L=K; n2=n1; m2=m1;
/**********************************************************************************************************************;
case=16; n1=80; m1=50; k=J(m1,1,0); K[1]=n1-m1; L=K; n2=n1; m2=m1;
case=17; n1=80; m1=50; k=J(m1,1,0); K[m1]=n1-m1; L=K; n2=n1; m2=m1;
case=18; n1=80; m1=50; k=J(m1,1,0); k[41:50]=3; L=K; n2=n1; m2=m1;
*/
*********************** Exact OVL *******************************;
R=(theta1/theta2);
R_star=((m2-1)/m2)*R;
Exact_rho=2*sqrt(R_star)/(1+R_star);
Exact_lambda=(Exact_rho)**2;
Exact_delta=1-R_star**(1/(1-R_star))*abs(1-1/R_star);
******************************************************************************;
start Uniform_p(m,R) global(seed);
** Generate the required progressive Type-II censored sample from U(0,1);
V=J(m,1,0); U=J(m,1,0);W=J(m,1);
Call randseed(seed);
Call randgen(W, "Uniform");
Do i=1 to m;
Sum=0;
Do j=m-i+1 to m;
Sum=Sum+R[j];
End;
Sum=Sum+i;
V[i]=W[i]**(1/Sum);
End;
Do i=1 to m;
U[i]=1-prod ( V[m: (m-i+1)] );
End;
Call sort (U);
return U;
Finish;
********************************************************************;
Start Adaptive(X, T, n,m,R,theta) ;
Do idx=1 to m-1;
If (( X[idx] < T) & (T <= X[idx+1] )) then j=idx;
End;
If X[m]>T then
Do;
W=Uniform_p(m-j,R);
X_m_j = T - theta*log(1-W);
X_Adptive= X[1:j]//X_m_j;
Rm=n-m-sum(R[1:J]);
R2=J(m-j,1,0);R2[m-J]=Rm;
newR=R[1:j]//R2;
End;
else
Do;
j= m;
newR=R;
X_Adptive=X;
End;
P1=[X_Adptive,j,newR];
return P1;
Finish;
********** Subroutine to compute the MLEs **************;
*********************************************************;
start MLE(X,n,m,R,Rm,J);
Theta_MLE=( Rm*X[m]+X[+]+ sum( (R#X)[1:J] ) )/m;
return(Theta_MLE);
finish;
******************************************************************************************;
theta1_Adp=J(NN,1,0); theta2_Adp=J(NN,1,0);H_Adp=J(NN,1,0);
R_star_Adp=J(NN,1,0);
rho_star_Adp=J(NN,1,0);Var_rho_Adp=J(NN,1,0);Bias_rho_Adp=J(NN,1,0);
lambda_star_Adp=J(NN,1,0);Var_lambda_Adp=J(NN,1,0);Bias_lambda_Adp=J(NN,1,0);
delta_star_Adp=J(NN,1,0);Var_delta_Adp=J(NN,1,0);Bias_delta_Adp=J(NN,1,0);
MSE_rho_Adp=J(NN,1,0);MSE_lambda_Adp=J(NN,1,0);MSE_delta_Adp=J(NN,1,0);
Lower_rho_Adp=J(NN,1);Upper_rho_Adp=J(NN,1);Length_rho_Adp=J(NN,1);
Lower_lambda_Adp=J(NN,1);Upper_lambda_Adp=J(NN,1);Length_lambda_Adp=J(NN,1);
Lower_delta_Adp=J(NN,1);Upper_delta_Adp=J(NN,1);Length_delta_Adp=J(NN,1);
COV1_Adp=J(NN,1,0);COV2_Adp=J(NN,1,0);COV3_Adp=J(NN,1,0);***COVi= stands for coverage of OVLi, i=1,2,3**;
var_rho_Adp=J(NN,1,0);var_lambda_Adp=J(NN,1,0);var_delta_Adp=J(NN,1,0);
Do JJ=1 to NN;
X=J(m1,1,0);Y=J(m2,1,0); X_ADP=J(m1,1,0);Y_ADP=J(m2,1,0);
**** Data set I;
***************;
X = - theta1 * log (1- uniform_p(m1,K)); * Create Progressive type II data;
*T1=X[ceil(0.3*m1)]; * Truncate after 30% of the data and replace the remaining 70% using the Adaptive module;
*T1=X[ceil(0.8*m1)]; * Truncate after 80% of the data and replace the remaining 20% using the Adaptive module;
T1=X[m1]+2; * Do not truncate the data;
AdaptiveResX=Adaptive(X, T1,n1,m1,K,theta1);**Create Adaptive hybrid progressive type II;
X_ADP=AdaptiveResX$1;* This is my Adaptive IIPH that I will use to find MLEs;
J1=AdaptiveResX$2;
newK=AdaptiveResX$3;
K=newK; Km=K[m1];
**** Data set II;
****************;
Y = - theta2 * log (1- uniform_p(m2,L));
*T2=Y[ceil(0.3*m2)];
*T2=Y[ceil(0.8*m2)];
T2=Y[m2]+2;
AdaptiveResY=Adaptive(Y, T2,n2,m2,L,theta2);
Y_ADP=AdaptiveResY$1;* This is my Adaptive IIPH that I will use to find MLEs;
J2=AdaptiveResY$2;
newL=AdaptiveResY$3;
L=newL; Lm=L[m2];
*print J1 T1 J2 T2 k newk l newl;
*print X T1 J1 X_adp " " Y T2 J2 Y_adp;
*********************************************************************;
*Theta1_Adp[JJ]=( Km* X_ADP[m1]+X_ADP[+] + sum( ( K#X_ADP )[1:J1] ) )/m1;
*Theta2_Adp[JJ]=( Lm* Y_ADP[m2]+Y_ADP[+] + sum( ( L#Y_ADP )[1:J2] ) )/m2;
Theta1_Adp[JJ]=MLE(X_ADP,n1,m1,K,Km,J1);
Theta2_Adp[JJ] =MLE(Y_ADP,n2,m2,L,Lm,J2);
R_star_Adp[JJ]=(m2-1)/m2*(theta1_Adp[JJ]/theta2_Adp[JJ]);
End;
** The estimated values of the OVL **;
rho_star_Adp=2#sqrt(R_star_Adp)#(1+R_star_Adp)##(-1);
lambda_star_Adp=rho_star_Adp##2;
delta_star_Adp=1-R_star_Adp##(1/(1-R_star_Adp))#abs(1-1/R_star_Adp);
****** Asymptotic Variance of the Overlap ********;
var_rho_Adp=R_star_Adp#(1-R_star_Adp)##2/(1+R_star_Adp)##4#(m1+m2-1)/(m1*(m2-2));
var_lambda_Adp=16#R_star_Adp##2#(1-R_star_Adp)##2/(1+R_star_Adp)##6#(m1+m2-1)/(m1*(m2-2));
var_delta_Adp=R_star_Adp##(2/(1-R_star_Adp))#(log(R_star_Adp))##2/(1-R_star_Adp)##2#(m1+m2-1)/(m1*(m2-2));
****** Asymptotic Bias of the Overlap ********;
Bias_rho_Adp=(m1+m2-1)/( m1*(m2-2))# sqrt(R_star_Adp)# ( 3#R_star_Adp##2-6#R_star_Adp-1)/( 4#(1+R_star_Adp)##3 );
Bias_lambda_Adp=(m1+m2-1)/(m1*(m2-2))#4#R_star_Adp##2#(R_star_Adp-2)/(1+R_star_Adp)##4;
H_Adp= R_star_Adp##(1/(1-R_star_Adp)) # ( R_star_Adp#(2#R_star_Adp-log(R_star_Adp)-2)#log(R_star_Adp) - (R_star_Adp-1)##2 )/(R_star_Adp-1)##3;
Do JJ=1 to NN;
if (R_star_Adp[JJ] > 1) then Bias_delta_Adp[JJ]=(m1+m2-1)/(2*m1*(m2-2))* H_Adp[JJ];
if (R_star_Adp[JJ] < 1) then Bias_delta_Adp[JJ]=-(m1+m2-1)/(2*m1*(m2-2))* H_Adp[JJ];
end;
Theta1_h=theta1_Adp[:];
Theta2_h=theta2_Adp[:];
Bias_rho=Bias_rho_Adp[:]; Bias_lambda=Bias_lambda_Adp[:]; Bias_delta=Bias_delta_Adp[:];
**** Interval estimation using Asymptotic technique ****;
Do JJ=1 to NN;
*************************** OVL1 *****************************;
Lower_rho_Adp[JJ] = rho_star_Adp[JJ] - Bias_rho_Adp[JJ] - 1.96*sqrt(Var_rho_Adp[JJ]);
Upper_rho_Adp[JJ] = rho_star_Adp[JJ] - Bias_rho_Adp[JJ] + 1.96*sqrt(Var_rho_Adp[JJ]);
Length_rho_Adp[JJ]=Upper_rho_Adp[JJ]-Lower_rho_Adp[JJ];
If ( (Exact_rho >= Lower_rho_Adp[JJ])& (Exact_rho <= upper_rho_Adp[JJ]) ) then COV1_Adp[JJ]=1;* COV is the coverage;
*************************************************************************************;
*************************** OVL2 *****************************;
Lower_lambda_Adp[JJ] = lambda_star_Adp[JJ] - Bias_lambda_Adp[JJ] - 1.96*sqrt(Var_lambda_Adp[JJ]);
Upper_lambda_Adp[JJ] = lambda_star_Adp[JJ] - Bias_lambda_Adp[JJ] + 1.96*sqrt(Var_lambda_Adp[JJ]);
Length_lambda_Adp[JJ]= Upper_lambda_Adp[JJ] - Lower_lambda_Adp[JJ];
If ( (Exact_lambda >= Lower_lambda_Adp[JJ])& (Exact_lambda <= upper_lambda_Adp[JJ]) )
then COV2_Adp[JJ]=1;
**************************************************************************************;
*************************** OVL3 ******************************;
Lower_delta_Adp[JJ] = delta_star_Adp[JJ] - Bias_delta_Adp[JJ] - 1.96*sqrt(Var_delta_Adp[JJ]);
Upper_delta_Adp[JJ] = delta_star_Adp[JJ] - Bias_delta_Adp[JJ] + 1.96*sqrt(Var_delta_Adp[JJ]);
Length_delta_Adp[JJ]= Upper_delta_Adp[JJ] -Lower_delta_Adp[JJ];
If ( (Exact_delta >= Lower_delta_Adp[JJ])& (Exact_delta <= upper_delta_Adp[JJ]) )
then COV3_Adp[JJ]=1;
end;
***************** END Of Simulations ************************;
********************************************************************************************************;
BiasOVL1_Adp=ABS(Bias_rho_Adp[:]);
BiasOVL2_Adp=ABS(Bias_lambda_Adp[:]);
BiasOVL3_Adp=ABS(Bias_delta_Adp[:]);
varOVL1=VAR_rho_Adp[:];
VAROVL2=VAR_lambda_Adp[:];
VAROVL3=VAR_delta_Adp[:];
length_OVL1_Adp=Length_rho_Adp[:];
length_OVL2_Adp=Length_lambda_Adp[:];
length_OVL3_Adp=Length_delta_Adp[:];
coverage1_Adp=COV1_Adp[:];
coverage2_Adp=COV2_Adp[:];
coverage3_Adp=COV3_Adp[:];
**** Formatting the output *******;
*print Lower_rho_RSS exact_rho upper_rho_RSS;
Bias=J(3,1);Length=J(3,1);coverage=J(3,1); Var=J(3,1,0);
bias[1,1]=Round(BiasOVL1_Adp,0.0001 ); bias[2,1]=Round(BiasOVL2_Adp,0.0001 ); bias[3,1]=Round(BiasOVL3_Adp,0.0001 );
Var[1,1]=Round(VarOVL1,0.0001 ); Var[2,1]=Round(VarOVL2,0.0001 ); Var[3,1]=Round(VarOVL3,0.0001 );
Length[1,1]=Round(length_OVL1_Adp,0.0001 ); Length[2,1]=Round(length_OVL2_Adp,0.0001 ); Length[3,1]=Round(length_OVL3_Adp,0.0001 );
coverage[1,1]=Round(COV1_Adp[:],0.001 ); coverage[2,1]=Round(COV2_Adp[:],0.001 ); coverage[3,1]=Round(COV3_Adp[:],0.001 );
names={rho,lambda,delta};
print case R n1 m1;
print names bias " " Var " " Length " " coverage ;
quit;
```
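For reference, the `Uniform_p` module above implements the Balakrishnan–Sandhu algorithm for generating a progressive Type-II censored sample from U(0,1). A minimal Python cross-check of that single step (illustrative only; the function name and the small `m`, `n` values here are hypothetical, not part of the SAS program) could look like this:

```python
import numpy as np

def uniform_progressive(m, R, rng):
    """Progressive Type-II censored U(0,1) sample via the
    Balakrishnan-Sandhu algorithm (mirrors the IML Uniform_p module)."""
    W = rng.uniform(size=m)
    V = np.empty(m)
    for i in range(1, m + 1):
        # Sum = i + R[m-i+1] + ... + R[m]  (1-based, as in the IML loop)
        s = i + np.sum(R[m - i:])
        V[i - 1] = W[i - 1] ** (1.0 / s)
    # U[i] = 1 - prod(V[m] * ... * V[m-i+1]) = 1 - cumulative product of the last i V's
    U = 1.0 - np.cumprod(V[::-1])
    return np.sort(U)

rng = np.random.default_rng(1)
m, n = 5, 10
R = np.zeros(m, dtype=int)
R[-1] = n - m                       # case-17 style: all censoring at the last failure
U = uniform_progressive(m, R, rng)
X = -1.0 * np.log(1.0 - U)          # transform to Exponential(theta=1), as in the IML code
```

Validating this one module in isolation (sorted output, values strictly inside (0,1)) before plugging it into the full simulation is exactly the kind of incremental check suggested in the replies below.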

1 ACCEPTED SOLUTION


I think I figured out what was wrong with the results. I was restricting the output to 3 or 4 digits; since the true outcomes are very close, the rounding made all the results appear identical. I validated the outcomes with Mathematica code.

Thank you again for inspiring me Rick!!
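To illustrate the rounding effect described above (a hypothetical Python sketch with made-up values, not the original IML output): two estimates that differ only in the fifth decimal place collapse to the same displayed value once rounded to four digits, which is how the different truncation choices ended up looking identical.

```python
# Two hypothetical OVL estimates from two different truncation choices
# that differ only beyond the 4th decimal place:
rho_T1 = 0.917342
rho_T3 = 0.917318

# Rounded to 4 digits (as the ROUND(..., 0.0001) calls in the program do),
# both collapse to the same printed value:
print(round(rho_T1, 4), round(rho_T3, 4))  # → 0.9173 0.9173
```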

5 REPLIES


Is there a specific reason you do not want to use CALL RANDGEN with the exponential distribution?

My IML is so rusty and limited that I am not going to attempt to follow all that code, but if I were asked to generate samples from an exponential distribution, RANDGEN with the exponential distribution would be a strong starting point instead of trying to rewrite the exponential code.
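For an ordinary (uncensored) sample the two approaches do coincide: the inverse-CDF transform used in the posted code, X = -theta*log(1-U), has the same Exponential(theta) distribution that direct exponential sampling (the RANDGEN approach) produces. A quick sketch of the equivalence in Python (illustrative only; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2023)
theta = 5.0
N = 100_000

# Inverse-CDF transform, as in the posted IML code:
U = rng.uniform(size=N)
x_inverse = -theta * np.log(1.0 - U)

# Direct exponential sampling (the RANDGEN-style approach):
x_direct = rng.exponential(scale=theta, size=N)

# Both sample means should be close to theta = 5
print(x_inverse.mean(), x_direct.mean())
```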


Hello ballardw

Thank you for your reply. I am simulating progressive Type-II censored data from an exponential distribution, not a simple random sample from an exponential distribution.


Hi Salah,

I'm sorry to hear that you are still having problems. Personally, I find it difficult to debug other people's code when I do not know what they are doing or what the correct answer should be.

However, in my 30+ years as a programmer, I have adopted a few habits that serve me well when I program. I wrote down some of the tips in the context of optimization, but a few of the tips apply to simulation as well:

1. Start with a simple program. Completely debug it and ensure that it works before attempting anything complicated.

2. Break your big program into smaller function modules that you can create/debug/reuse.

3. Always test the program at each step of development. Don't wait until the program is completely written to test it.

4. Do not attempt a sample of size 10,000 until you have verified that the program works for much smaller sample sizes.

5. Do not attempt multiple truncation choices until you have verified that one works.

6. Are your two samples independent? If so, you can debug/validate each process one at a time.

I wish I could offer more helpful advice, but I think the best way for you to proceed is to implement a highly disciplined approach to writing the program. Believe me, I have spent days trying to debug programs, so I know it is very frustrating. Sometimes I succeed only after I decide to rewrite portions of the code, testing as I go, until I discover my initial mistakes.

Best wishes!


Thank you Rick for the tips.

As a matter of fact, this is exactly what I have done.

Nevertheless, I will start again from scratch with fresh eyes.

