(This article is part of a growing collection of articles on feature engineering for data science. Find more links at the end of this article.)
Feature engineering is an important tool for (supervised) machine learning. Model accuracy and interpretability benefit from variables that precisely describe the behavior of the analysis subjects. In many cases these features are derived from transactional data that are recorded, e.g., over time. Sometimes simple descriptive measures like the mean or the sum provide a very good picture of the analysis subjects; often, however, you need to dig deeper to adequately describe the behavior. Analytic methods help to calculate KPIs that measure the trend over time per customer, the accordance with predefined patterns, or the correlation of an individual customer with the average customer.
Transposing longitudinal data to "WIDE" format also results in a table with one record per analysis subject. Of course, SAS offers a large and powerful toolset to change between different data structures, e.g. the TRANSPOSE procedure or the DATA step. Many SAS users have also programmed and published macros for that purpose.
For powerful analytical models it is, however, not enough to "replicate" the original data in another format. You want to describe the behavior of your analysis subjects. You can do this by calculating different analytic measures.
This article focuses on correlation analysis between individual customers and the average customer, and shows the rationale, the coding, and the interpretation of the results. A link to a YouTube recording with a presentation on this topic can be found below.
Let's have a look at a line chart with usage curves for 4 selected customers (orange, green, violet, blue) and the average usage curve over all 50,000 customers. You can see that some customers have a usage curve that is very similar to the average customer, e.g. the orange one, while other customers show a different pattern over time. You can describe the usage curves, for example, with the following three parameters: 1) Average Level, 2) Variability, 3) Similarity with the "average customer".
The orange customer has a very similar, slightly lower LEVEL than the average customer. His VARIABILITY over the six time periods is rather large (++). And his SIMILARITY with the average customer is very high (++): when the average customer raises their usage, the orange customer raises his usage as well.
You can document the measures for the orange customer as shown in the first line of the following table.
When you look at the green customer, you see that his average level is almost the same as that of the average customer, so you can set a "=" sign. His variability is very high. His similarity with the average customer is not only low, it is negative: when the average goes up, the green customer decreases his usage, and vice versa. So you can set "--" here.
Another remarkable behavior you can see from the chart is that the violet customer has almost no variability; his usage stays at the same level over all periods.
And the blue customer has a remarkably low level, with some variability and some similarity with the average customer.
You have now described the different usage of the customers with a set of features. You are no longer looking at the original individual data, but at features that allow you to infer behavior, study causality between factors, or define segments.
In order to represent the features LEVEL, VARIABILITY, and SIMILARITY with statistical measures, it is straightforward to use the MEAN for level and the STANDARD DEVIATION for variability. (Pearson) CORRELATION can be used to describe the similarity between two vectors of measures. The CORR procedure in SAS not only allows you to calculate Pearson or Spearman correlations, it also calculates the mean and the standard deviation and outputs them into a dataset. The following three steps show how to code the example with SAS.
In the first step, you calculate the average usage per time interval. The code below shows how you can do this with an SQL query. You could also use SAS procedures such as MEANS or SUMMARY for aggregation.
proc sql;
  create table monthly_average as
  select month,
         mean(usage) as MonthlyAverage format=8.2
  from usage
  group by month
  order by month;
quit;
This code generates a dataset with one entry per time interval:
Month   MonthlyAverage
  1          59.01
  2          62.79
  3          55.11
  4          62.68
  5          54.59
  6          58.49
In the next step, you join this (lookup) table to the original data. You need to do this because the CORR procedure, which will be used to calculate the correlations per CUSTID, needs the two variables as columns in the same dataset.
The code uses an SQL query to join the tables. A session describing different methods in SAS to join lookup tables to a master table will be published soon in my YouTube webinar playlist.
proc sql;
  create table usage_enh as
  select u.custid,
         u.month,
         u.usage,
         m.MonthlyAverage
  from usage as u,
       monthly_average as m
  where u.month = m.month;
quit;
The resulting table looks as follows. You see that the six monthly average values for months 1-6 are replicated for each CustID.
In the next step, you use the CORR procedure to calculate the (Spearman) correlation between the usage values of each customer and the monthly average over all customers.
PROC CORR DATA = usage_enh
OUTS = Corr_Usage NOPRINT;
BY CustID;
VAR Usage;
WITH MonthlyAverage;
RUN;
Note the following from the code: the BY statement calculates the statistics separately for each CustID (the input dataset must be sorted by CustID); the OUTS= option writes the Spearman statistics, including the mean and the standard deviation, to the dataset Corr_Usage; the NOPRINT option suppresses the printed output; and the VAR and WITH statements restrict the calculation to the correlation between Usage and MonthlyAverage.
The resulting output dataset looks as follows:
You see that the relevant statistics (mean, standard deviation, and correlation) are available in the output table. You just need to rearrange the structure to a one-row-per-subject data structure to use it for machine learning purposes.
You can use the TRANSPOSE procedure to arrange the data with one row per subject.
proc transpose data=Corr_Usage out=customer_ABT(drop= _name_);
by custid;
id _type_;
var usage;
run;
Finally, you can perform some formatting on the table: rename the columns and specify formats.
data customer_ABT;
  set customer_ABT;
  drop n;
  rename mean=Level std=Variability corr=Similarity;
  /* formats refer to the original variable names; the rename is applied at output */
  format mean 8.1 std 8.1 corr 8.2;
run;
The resulting table shows that you have derived three features, LEVEL, VARIABILITY, and SIMILARITY, from the original transactional table. You can now use this table as additional input variables in supervised machine learning. Or you can use the variables to segment your customers into groups with similar behavior, e.g. AVERAGE LEVEL / LOW VARIABILITY / HIGH SIMILARITY.
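For readers who want to cross-check the logic outside SAS, the whole three-step pipeline can be sketched in Python with pandas. The column names mirror the SAS example, but the data and the helper function `features` are invented for illustration:

```python
import pandas as pd

# Synthetic long-format usage data: one row per customer and month.
usage = pd.DataFrame({
    "custid": [1, 1, 1, 2, 2, 2],
    "month":  [1, 2, 3, 1, 2, 3],
    "usage":  [10.0, 20.0, 30.0, 20.0, 30.0, 10.0],
})

# Step 1: average usage per month (the "average customer").
monthly_average = (usage.groupby("month", as_index=False)["usage"]
                        .mean()
                        .rename(columns={"usage": "MonthlyAverage"}))

# Step 2: join the lookup table back to the original data.
usage_enh = usage.merge(monthly_average, on="month")

# Step 3: per-customer LEVEL, VARIABILITY, and SIMILARITY.
def features(g: pd.DataFrame) -> pd.Series:
    return pd.Series({
        "Level":       g["usage"].mean(),
        "Variability": g["usage"].std(),                      # sample std, as in PROC CORR
        "Similarity":  g["usage"].corr(g["MonthlyAverage"]),  # Pearson correlation
    })

customer_abt = (usage_enh.groupby("custid")[["usage", "MonthlyAverage"]]
                         .apply(features))
print(customer_abt)
```

The result is one row per custid with the three feature columns, i.e. the same one-row-per-subject structure that the TRANSPOSE step produces in the SAS example.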
The idea of this article is to illustrate how you can derive features from longitudinal data and use them for other data science tasks. You have seen how easy it is to create three powerful features with SAS code.
The article should also encourage you to be creative when it comes to using longitudinal data in your machine learning analyses. In my example, the SIMILARITY feature has been a good indicator of whether a customer is close to the average customer or, for whatever reason, behaves rather in the opposite way. This indicator can be a good predictor for product upselling or cancellation-risk analyses.
Your data might require other forms of aggregation or calculation - be creative! The analytic toolbox is very large, and you can find the appropriate measure for your use case.
There is a webinar on the same topic in my Data Preparation for Data Science Playlist. It explains the case as a software demo.
There are more SAS Communities articles that show examples of feature engineering:
This example is taken from my SAS Press book Data Preparation for Analytics Using SAS; see chapter 18.3, page 184pp.
Further books of the author in SAS Press: