Jade_SAS
Pyrite | Level 9

Dear all,

 

   I am working on a project with extremely unbalanced data: the event incidence is less than 1% (800 events out of 114,600 observations). What is the best method to deal with this kind of data? Since the goal of the project is to provide rules for identifying the rare bad event in the future, I am using a decision tree right now. But the performance is really poor; the tree will not split any further and stays at the root node. Any suggestions on dealing with this kind of problem are welcome!

 

   BTW: I am using SAS Enterprise Miner 14.2 on Linux.

   Thank you!

 

Jade

4 REPLIES
Reeza
Super User
Without any more information, my suggestion would be to switch to a case-control approach: take your 800 events and match them 1:N against the big data set, with N controls per case.

I would then bootstrap it, repeating that draw X times, to see whether the model is stable. I don't have access to Enterprise Miner and have no idea how to do this there, but technically it could be done in Base SAS.
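
A minimal Base SAS sketch of that idea, assuming a hypothetical dataset WORK.FULL with a binary target EVENT (1 = bad event) and 1:4 matching; every dataset and variable name here is a placeholder, not something from the original post:

/* Split the data into the 800 cases and the pool of potential controls */
data cases controls;
   set work.full;
   if event = 1 then output cases;
   else output controls;
run;

/* Simple random sample of 4 controls per case: 4 x 800 = 3,200 */
proc surveyselect data=controls out=sampled_controls
                  method=srs sampsize=3200 seed=20240101;
run;

/* Stack cases and sampled controls into one 1:4 case-control training set */
data case_control;
   set cases sampled_controls;
run;

To repeat the draw X times, as suggested above, add REPS=X to the PROC SURVEYSELECT statement (it writes a REPLICATE variable to the output data set) and fit the model BY REPLICATE to see how stable the resulting rules are across samples.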
Jade_SAS
Pyrite | Level 9

Thank you Reeza!

Is there a reference paper for this procedure?  Thank you!

 

Jade

 

 

Reeza
Super User
Not really. You can look at PROC PSMATCH to get the matches and then PROC PHREG for conditional logistic regression; the documentation has examples for each.
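
A rough sketch of that pairing, assuming the same hypothetical WORK.FULL data with binary target EVENT and covariates X1-X3 (all names are placeholders); the PSMATCH and PHREG documentation examples cover the details:

/* Propensity-score match each event to 4 controls (greedy 1:4 matching) */
proc psmatch data=work.full region=allobs;
   class event;
   psmodel event(treated='1') = x1 x2 x3;
   match method=greedy(k=4) distance=lps;
   output out(obs=match)=matched matchid=set_id;
run;

/* Conditional logistic regression via PROC PHREG:
   cases get time 1, controls time 2, and each matched set is a stratum */
data matched2;
   set matched;
   time = 2 - event;
run;

proc phreg data=matched2;
   model time*event(0) = x1 x2 x3 / ties=discrete;
   strata set_id;
run;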

