
Manipulating Data in SAS Studio Flows Part 3: Deduplicating Data


Have you ever needed to cut down a large data set but weren't sure how? Chances are you'll need to remove duplicate rows or values in your data, whether that's to eliminate redundancy or simply to analyze rows with unique field values. Much like other common data manipulation tasks, SAS Studio Flows has multiple steps that can remove duplicates in a few mouse clicks!

 

In this post, I'll show you three steps that each perform a different type of deduplication: Remove Duplicates, Sort, and Query.

 

Remove Duplicates step

 

As the name suggests, the Remove Duplicates step will remove duplicate rows from your data. While other steps have some flexibility regarding how duplicates are removed, this step will only remove complete duplicate rows. In other words, this step eliminates rows that match the values of another row for every field. Visit the documentation for a full description of step capabilities.

 

Say you’re working with an augmented version of the SASHELP.CLASS table, CLASSDUPS. This table includes duplicates of some student records.

 

01_grbarn_dedup_1.png


 

To use the Remove Duplicates step, simply connect your source data to the node. The only required setting is a simple check box to remove duplicates across all columns.

 

02_grbarn_dedup_2.png

 

If desired, you can adjust the output settings. By default, the step replaces any existing output table of the same name and writes the duplicate rows to the log, but you can choose to store the duplicates in a separate table instead.
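If you're curious what this looks like as code, the step's behavior can be approximated with PROC SORT. This is my own sketch, not necessarily the exact code the step generates; the output table names are assumptions, and the DUPOUT= option corresponds to storing duplicates in a separate table.

```sas
/* Approximate equivalent of the Remove Duplicates step.
   Sorting by every column makes identical rows adjacent,
   and NODUPRECS then deletes each repeated row. */
proc sort data=work.classdups
          out=work.classdups_clean    /* deduplicated output (assumed name) */
          dupout=work.classdups_dups  /* removed duplicates (optional) */
          noduprecs;
    by _all_;                         /* compare rows across all columns */
run;
```

If you omit DUPOUT=, the duplicates are simply dropped, and PROC SORT notes the number of deleted observations in the log.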

 

03_grbarn_dedup_3.png

 

After running the step, all duplicate rows are removed.

 

04_grbarn_dedup_4.png

 

Duplicates are also written to the log due to the default output settings.

 

05_grbarn_dedup_5.png

 

Sort step

 

The Sort step was the main subject of my last post. Though its primary capability is to sort data, this step can also be used to remove duplicate rows. Additionally, this step allows more control over how to handle duplicates – you can choose to remove duplicate rows based only on the sorting columns, or by all columns. Visit the documentation for a full description of step capabilities.

 

Say you’re analyzing the SASHELP.FISH table, which contains measurements of fish caught in Lake Laengelmaevesi. Measurements include weight, three different lengths (nose to beginning of tail, nose to notch of tail, and nose to end of tail), height, and width.

 

06_grbarn_dedup_6.png

 

You’d like to quickly extract the first appearance of each species in the data. This data has been pre-sorted to list each specimen by increasing length within each species, meaning that the first occurrence of each species in the data will have the shortest length measurement.

 

First, configure the step to sort the source data by Species in ascending order.

 

07_grbarn_dedup_7.png

 

Notice the Options button on the tab's toolbar. Selecting it lets you configure duplicate-row behavior with three options: Keep all rows, Remove rows with duplicate values in sort columns, or Remove duplicate rows. The final option yields results identical to the Remove Duplicates step. For the desired result, select Remove rows with duplicate values in sort columns.
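The selected option corresponds to PROC SORT's NODUPKEY option. Here's a sketch of roughly what the step runs; the output table name is my own choice.

```sas
/* Keep only the first row for each Species.
   NODUPKEY deletes any row whose BY-column values match a
   previously kept row. Because PROC SORT preserves the original
   order of rows with equal BY values (the default EQUALS
   behavior), the first -- here shortest -- specimen per
   species survives. */
proc sort data=sashelp.fish
          out=work.fish_firsts  /* assumed output name */
          nodupkey;
    by Species;
run;
```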

 

08_grbarn_dedup_8.png

 

Once you run this step, the output data will show only one row for each species, since all duplicate values in the species column were removed. The remaining entries represent the shortest specimens in each species.

 

09_grbarn_dedup_9.png

 

Query step

 

The Query step is powerful and multi-faceted, capable of performing many different actions on your data. Not only can it sort data, but it can also remove duplicates with the same flexibility as the Sort step by using the option to select distinct rows only for the results. Visit the documentation for a full description of step capabilities.

 

Say you’re analyzing SASHELP.ORSALES, a table containing data about the fictional sports store Orion Star.

 

10_grbarn_dedup_10.png

 

ORSALES has 912 rows, but we can use the Query step to quickly find all of the product lines and categories that Orion Star carries. After adding and connecting the appropriate nodes, start by selecting Product_Line and Product_Category for the query output.

 

11_grbarn_dedup_11.png

 

Then, navigate to the column options > More options and select Select distinct rows only. With this option toggled, the output will contain only the distinct combinations of Product_Line and Product_Category values from the source data.
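Behind the scenes, the Query step builds a PROC SQL query, and the distinct-rows option maps to SQL's DISTINCT keyword. A sketch with an assumed output table name:

```sas
/* Return only the unique Product_Line / Product_Category pairs */
proc sql;
    create table work.product_mix as  /* assumed output name */
    select distinct Product_Line, Product_Category
    from sashelp.orsales;
quit;
```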

 

12_grbarn_dedup_12.png

 

The output has 12 rows, representing four product lines: Children, Clothes & Shoes, Outdoors, and Sports. Most lines have only one or two product categories; the exception is the Sports line, which has eight.

 

13_grbarn_dedup_13.png

 

Notice that this deduplication method differs from the Sort step method. The Sort step deduplicated on the sort columns only, but its output contained all source columns. In contrast, the Query step deduplicates on the selected columns, and its output contains only those columns. The option to select distinct rows applies to every selected column, so if all columns are selected for output, the distinct rows option behaves like the Remove Duplicates step.

 

Considerations & Summary

 

These deduplication options are available with every SAS Studio license (much like the sorting options shown in my previous post).

 

As I’ve demonstrated, each step mentioned in this post has slightly different capabilities. You’ll need to choose your deduplication step based on exactly which fields you want to remove duplicates from and which fields you want to include in the output.

 

In this post, I’ve discussed three methods for deduplicating data in SAS Studio Flows. Want to know more about working with data in Flows? Check out my previous posts in this series on appending and sorting data. Look out for future posts on aggregating data and performing table lookups!

 

Which of these deduplication methods is most useful for your work? Will you use these methods for improving data quality, exploring data, or both? Let me know!

 

 

Find more articles from SAS Global Enablement and Learning here.
