06-22-2014 03:52 PM
I have a quick question on how to efficiently create a test data set.
I am working remotely with confidential data, and I would like to create a fake data set with the same structure so that I can do some programming on my local computer. Since potentially many tables are affected, I would like to automate this procedure as far as possible. Basically, I would like to do the following:
1. Generate code that creates a data set with the same variables and data types (a simple PROC SQL DESCRIBE TABLE may do?).
2. Save the mean, standard deviation, and correlations of the numeric variables, and detect the data types of the character variables.
3. Create random data with the same statistical attributes as the original data.
4. The code should be flexible so that I can easily adjust it for different tables.
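For context, steps 1 and 2 might look something like this in SAS (here `have` and `stats` are placeholder data set names, not anything from a real project):

```sas
/* Step 1: print a CREATE TABLE statement for the table's structure
   to the log, which can be copied and reused. */
proc sql;
   describe table have;
quit;

/* Step 2: summary statistics and correlations of the numeric
   variables, saved to a data set for later use. */
proc means data=have mean std;
run;

proc corr data=have outp=stats noprint;
   var _numeric_;
run;
```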
Any help is highly appreciated!!
06-23-2014 10:54 AM
The first part is easy:
data want;
   set have (obs=0);   /* copies the structure, but no rows */
run;
Creating fake data with a given mean and std is quite possible. Correlation I'm not so sure of, especially if you want to maintain the correlations from variable x to y1, y2, y3, etc. Once you create the dummy x, you have a lot of constraints on the other variables, which will be an interesting challenge.
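One way around that challenge, if a multivariate normal approximation is acceptable, is to let PROC SIMNORMAL (SAS/STAT) simulate from an estimated covariance matrix. A sketch, with `have`, `stats`, `fake` as placeholder data set names and `x y1 y2` standing in for the real numeric variables:

```sas
/* Estimate means and covariances; the COV option makes the
   OUTP= data set a TYPE=COV data set. */
proc corr data=have cov outp=stats noprint;
   var x y1 y2;
run;

/* Draw 1000 multivariate normal observations whose means, stds
   and correlations match the estimates (in expectation). */
proc simnormal data=stats out=fake numreal=1000 seed=12345;
   var x y1 y2;
run;
```

This only reproduces the first two moments, so skewed or discrete variables won't look realistic, but the pairwise correlations are preserved.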
06-23-2014 11:00 AM
Could you not just anonymize the data you have, so that anything identifiable is replaced with some code plus a random number?
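Something along these lines, where `have`, `anon`, and `real_id` are placeholder names:

```sas
/* Replace the identifying field with a code plus a random number. */
data anon;
   set have;
   fake_id = cats("ID", put(ceil(999999 * rand("uniform")), z6.));
   drop real_id;
run;
```

Note that if the ID has to link several tables, you would instead build one lookup table mapping each real ID to a single fake ID and apply it everywhere, so the same subject gets the same code in every table.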