_Dan_
Quartz | Level 8

Hi,

 

I'm trying to report on concurrent groups of events. We capture data every 15 minutes across 7 servers / 14 workspaces. I'm tracking where there's 100% utilisation of any of those 14 workspaces.

 

The Concurrency flag is done: I have a '1' wherever an event existed for two or more rows (two or more 15-minute periods). It's simple SQL - there's most likely a lag/lead way to do it, but this works. Basically, the _Get dataset has the original RunDtTm plus two more columns, where RunDtTm has been shifted forward by 15 minutes (RunDtTm15) or backwards by 15 minutes (RunDtTm15_) - see the sketch after the query below.

proc sql;
	create table _Stage as
		select
			datepart(o.RunDtTm) as RunDt format DDMMYY10.
			,o.RunDtTm
			,case when coalesce(b.RunDtTm,f.RunDtTm) is not null then 1 else 0 end as Concurrent
			,o.Server
			,o.WorkFolder
			,o.Use_pct
		from
			_Get		o	left join
			_Get		f	on o.RunDtTm = f.RunDtTm15  and o.Server = f.Server and o.WorkFolder = f.WorkFolder left join	/* f: the reading from the previous 15-minute slot */
			_Get		b	on o.RunDtTm = b.RunDtTm15_ and o.Server = b.Server and o.WorkFolder = b.WorkFolder			/* b: the reading from the next 15-minute slot     */
;quit;
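
For completeness, here's a minimal sketch of how the shifted RunDtTm15 / RunDtTm15_ columns in _Get could be built, assuming the raw 15-minute readings sit in a table called _Raw (that table name is an assumption; only the column names come from the post):

/* Hypothetical sketch: the source table name _Raw is assumed.           */
/* Shift each reading's datetime forward and backward by 15 minutes so   */
/* the query above can self-join _Get on adjacent 15-minute slots.       */
data _Get;
    set _Raw;
    RunDtTm15  = intnx('minute', RunDtTm,  15, 'same'); /* 15 minutes later   */
    RunDtTm15_ = intnx('minute', RunDtTm, -15, 'same'); /* 15 minutes earlier */
    format RunDtTm15 RunDtTm15_ datetime18.;
run;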

 

 

 

What I now need is a flag that counts how many times per day a record exists, but counts concurrent records as one. Here's the data I have right now. The best example is 22/04/2020, Server03, work2 - there were three events, two of which were multi-row concurrent episodes.

Event 1: 22APR2020:09:00:00

Event 2: 22APR2020:10:30:00 - 22APR2020:11:00:00

Event 3: 22APR2020:11:15:00 - 22APR2020:12:00:00

 

And thus the new column would show 1, 2, 2, 2, 3, 3, 3, 3.

 

Here are the contents of the _Stage dataset from the above query.

DATA WORK._STAGE;
    LENGTH
        RunDt              8
        RunDtTm            8
        Concurrent         8
        Server           $ 8
        WorkFolder       $ 5
        Use_pct          $ 4 ;
    FORMAT
        RunDt            DDMMYY10.
        RunDtTm          DATETIME18.
        Concurrent       BEST1.
        Server           $CHAR8.
        WorkFolder       $CHAR5.
        Use_pct          $CHAR4. ;
    INFORMAT
        RunDt            DDMMYY10.
        RunDtTm          DATETIME18.
        Concurrent       BEST1.
        Server           $CHAR8.
        WorkFolder       $CHAR5.
        Use_pct          $CHAR4. ;
    INFILE DATALINES4
        DLM='|'
        MISSOVER
        DSD ;
    INPUT
        RunDt            : ?? DDMMYY10.
        RunDtTm          : ?? ANYDTDTM18.
        Concurrent       : ?? BEST1.
        Server           : $CHAR8.
        WorkFolder       : $CHAR5.
        Use_pct          : $CHAR4. ;
DATALINES4;
02/04/2020|02APR2020:16:00:00|0|server06|work2|100%
03/04/2020|03APR2020:09:30:00|0|server07|work2|100%
07/04/2020|07APR2020:09:45:00|0|server03|work2|100%
07/04/2020|07APR2020:18:45:00|0|server06|work1|100%
08/04/2020|08APR2020:11:00:00|0|server06|work2|100%
08/04/2020|08APR2020:11:30:00|0|server06|work2|100%
09/04/2020|09APR2020:09:30:00|0|server02|work2|100%
09/04/2020|09APR2020:19:45:00|0|server06|work2|100%
13/04/2020|13APR2020:14:00:00|0|server02|work2|100%
14/04/2020|14APR2020:15:45:00|0|server02|work2|100%
14/04/2020|14APR2020:15:45:00|0|server05|work2|100%
15/04/2020|15APR2020:07:45:00|0|server03|work2|100%
15/04/2020|15APR2020:09:15:00|0|server02|work2|100%
15/04/2020|15APR2020:11:15:00|0|server03|work2|100%
16/04/2020|16APR2020:09:00:00|1|server03|work2|100%
16/04/2020|16APR2020:09:15:00|1|server03|work2|100%
16/04/2020|16APR2020:09:30:00|1|server03|work2|100%
16/04/2020|16APR2020:09:45:00|1|server03|work2|100%
16/04/2020|16APR2020:11:15:00|0|server03|work2|100%
16/04/2020|16APR2020:12:00:00|0|server03|work1|100%
17/04/2020|17APR2020:09:00:00|0|server02|work2|100%
20/04/2020|20APR2020:09:15:00|0|server05|work1|100%
20/04/2020|20APR2020:12:00:00|0|server06|work2|100%
22/04/2020|22APR2020:09:00:00|0|server03|work2|100%
22/04/2020|22APR2020:10:30:00|1|server03|work2|100%
22/04/2020|22APR2020:10:45:00|1|server03|work2|100%
22/04/2020|22APR2020:11:00:00|1|server03|work2|100%
22/04/2020|22APR2020:11:15:00|0|server02|work2|100%
22/04/2020|22APR2020:11:15:00|1|server03|work2|100%
22/04/2020|22APR2020:11:30:00|1|server03|work2|100%
22/04/2020|22APR2020:11:45:00|1|server03|work2|100%
22/04/2020|22APR2020:12:00:00|1|server03|work2|100%
23/04/2020|23APR2020:04:45:00|0|server02|work1|100%
23/04/2020|23APR2020:14:15:00|0|server04|work2|100%
24/04/2020|24APR2020:12:30:00|0|server06|work1|100%
27/04/2020|27APR2020:04:30:00|0|server03|work2|100%
28/04/2020|28APR2020:08:15:00|0|server03|work1|100%
28/04/2020|28APR2020:10:30:00|1|server05|work2|100%
28/04/2020|28APR2020:10:45:00|1|server05|work2|100%
28/04/2020|28APR2020:13:00:00|0|server04|work2|100%
28/04/2020|28APR2020:17:15:00|0|server07|work2|100%
29/04/2020|29APR2020:07:15:00|0|server06|work2|100%
29/04/2020|29APR2020:14:30:00|0|server05|work2|100%
29/04/2020|29APR2020:15:15:00|0|server03|work2|100%
30/04/2020|30APR2020:11:30:00|0|server06|work2|100%
30/04/2020|30APR2020:14:15:00|0|server06|work1|100%
30/04/2020|30APR2020:16:00:00|0|server03|work1|100%
;;;;

 

 

Here's what I want it to look like (take note of entry 5, which is a different Server and hence has a Concurrent_Count of 1):

 

RunDt       RunDtTm             Concurrent  Server    WorkFolder  Use_pct  Concurrent_Count
22/04/2020  22APR2020:09:00:00  0           server03  work2       100%     1
22/04/2020  22APR2020:10:30:00  1           server03  work2       100%     2
22/04/2020  22APR2020:10:45:00  1           server03  work2       100%     2
22/04/2020  22APR2020:11:00:00  1           server03  work2       100%     2
22/04/2020  22APR2020:11:15:00  0           server02  work2       100%     1
22/04/2020  22APR2020:11:15:00  1           server03  work2       100%     3
22/04/2020  22APR2020:11:30:00  1           server03  work2       100%     3
22/04/2020  22APR2020:11:45:00  1           server03  work2       100%     3
22/04/2020  22APR2020:12:00:00  1           server03  work2       100%     3

 

 

 

1 ACCEPTED SOLUTION

Accepted Solutions
RichardDeVen
Barite | Level 11

You have different concurrent_count values tracked for different servers.  Your explanation shows the count increasing, so in this answer that value will be called seq, for sequence.  Server is a character value, and tracking by value can easily be done using a HASH object.  The key is the `server` and the data is the `seq`, which may or may not get incremented according to your rules.

 

The hash should be cleared at the start of each date, which is when the sequence counting resets.

 

Example:

 

The data is presumed to be in timestamp order within each day, which is essential to the sequence count computation.
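
For example, a sort along these lines would establish that order (assuming ties at the same timestamp can be broken by Server, as in the example data):

/* Assumed pre-step: order rows by day, then timestamp, then Server,  */
/* so the BY RunDt / DO UNTIL(last.rundt) loop below sees each day's  */
/* readings in chronological order.                                   */
proc sort data=_STAGE;
    by RunDt RunDtTm Server;
run;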

data want;
  if 0 then set _STAGE;

  if _n_ = 1 then do;
    declare hash cc();
    cc.defineKey('server');
    cc.defineData('seq');
    cc.defineDone();
    call missing(seq);  * prevent warning;
    pserver = server;   * prep pdv;
  end; 

  cc.clear();

  pserver = '';  * act as a LAG var for server;
  pflag = .;     * act as a LAG var for concurrent flag;

  * one day at a time;

  do until (last.rundt);
    set _STAGE;
    by rundt;

    rc = cc.find();  * side effect is that seq for server retrieved from hash;

    if pserver ne server then do;   * server changed from prior row;
      if rc ne 0 
        then seq = 1;  * first time for server;
        else seq + 1;  * first time for server in a subsequent run, seq+1 regardless of concurrent;
    end;
    else do;
      /* same server as prior row */
      /* a hash entry MUST exist if code flow reaches here */
      /* which means seq is value for current server       */
      if concurrent = 0 then 
        seq + 1;            * not concurrent always increases seq;
      else
        seq + (pflag = 0);  * concurrent run of 0 1 becomes seq+1, and 1 1 gets seq+0;
    end;

    cc.replace(); * save seq count for server;

    OUTPUT;

    pflag = concurrent;  * update prior value tracking variables (lag emulation);
    pserver = server;

    drop rc pflag pserver;
  end;
run;
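
As a quick sanity check, something along these lines should reproduce the 22APR2020 example from the question (the new column is named seq here rather than Concurrent_Count):

/* Optional check: print the 22APR2020 rows of WANT; seq should match  */
/* the Concurrent_Count values shown in the question.                  */
proc print data=want noobs;
    where RunDt = '22APR2020'd;
    var RunDt RunDtTm Concurrent Server WorkFolder Use_pct seq;
run;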

 

Happy counting!


2 REPLIES 2

_Dan_
Quartz | Level 8

Good lord, you've put just as much effort into that reply as I put into creating and debugging the process on Friday. I ran it and it's perfect, thank you.

 

Now all I have to do is learn how it works and keep it in mind for future usage.

