vomer
Obsidian | Level 7

Hi there,

I am wondering what the best way is to do this kind of analysis.

Let's assume my data looks like this:

ClientID | Transaction
001 | A1
001 | A1
001 | A2
001 | A3
002 | A4
002 | A5
003 | A6
003 | A6

The result would be:

Client 001 has 3 unique transactions
Client 002 has 2 unique transactions
Client 003 has 1 unique transaction
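
(For reproducibility, here is a minimal DATA step that builds this sample; the dataset name have and the variable names client_id and transactions are assumptions chosen to match the answers below.)

data have;
   input client_id $ transactions $;   /* both read as character to keep the leading zeros */
   datalines;
001 A1
001 A1
001 A2
001 A3
002 A4
002 A5
003 A6
003 A6
;
run;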

Thanks!

1 ACCEPTED SOLUTION

UrvishShah
Fluorite | Level 6

proc sql;
   create table want as
   select client_id, count(distinct transactions) as tot_trans
   from have
   group by client_id
   order by client_id;
quit;

If your data set is very large, the code above may take a long time, because count(distinct transactions) forces the SQL processor to make two passes through the data in order to eliminate the duplicates.

In that case, try the following instead:

proc sql;
   create table want as
   select distinct client_id, transactions
   from have;
quit;

proc freq data=want noprint;
   table client_id / out=want(drop=percent);
run;

The choice is yours, based on the size of your data.
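
Another single-pass option over sorted data, just as a sketch (the dataset names have_sorted and uniq_counts are made up; it assumes the same have table with client_id and transactions columns): remove the duplicate pairs first, then count rows per client with BY-group processing.

proc sort data=have out=have_sorted nodupkey;
   by client_id transactions;              /* keep one row per client/transaction pair */
run;

data uniq_counts;
   set have_sorted;
   by client_id;
   if first.client_id then tot_trans = 0;  /* reset the counter for each client */
   tot_trans + 1;                          /* sum statement retains the counter across rows */
   if last.client_id then output;          /* one row per client with its unique count */
   keep client_id tot_trans;
run;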

-Urvish


3 REPLIES
GPatel
Pyrite | Level 9

data a;
   input id $ Trans $;
   cards;
...
;
run;

proc sql;
   select id, count(distinct Trans) as UnqTrans
   from a
   group by id;
quit;

DR_Majeti
Quartz | Level 8

Hi Vomer,

Here is another approach using PROC SORT and PROC FREQ: sorting with NODUP removes the duplicate id/transaction pairs, so PROC FREQ then counts only the unique transactions per id.

data want;
   input id $ trans $;   /* id read as character to keep the leading zeros */
   datalines;
001 A1
001 A1
001 A2
001 A3
002 A4
002 A5
003 A6
003 A6
;
run;

proc sort data=want nodup;
   by id trans;
run;

proc freq data=want;
   table id / nopct nocum;
run;
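
If the literal sentences from the question are wanted, the counts can be pushed into a dataset and printed with a DATA _NULL_ step; this is only a sketch and assumes adding an out=counts option to the PROC FREQ above:

proc freq data=want noprint;
   table id / out=counts(drop=percent);   /* counts gets one row per id with a COUNT variable */
run;

data _null_;
   set counts;
   put "Client " id "has " count "unique transactions";
run;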


