08-08-2013 02:52 AM
Hope you are doing well. I am planning to track users' login and logout times in Enterprise Miner (version 12.1). There are some audit tables that are supposed to contain these details.
Can you please help me with the steps? If someone has worked on this earlier, please let me know how to proceed.
08-08-2013 03:36 AM
There is an APM package that could serve as a starting point.
You need to set this up like a business project: doing data analysis and reporting the same way the business does.
It sounds weird, but some of the biggest data analysis is done on technical data: access logs, security logs, etc. Like Prism.
08-08-2013 05:34 AM
Thanks Jaap!! Sharing a web link that has some instructions regarding the installation of the APM package in SAS 9.3.
Is this the one you are talking about? Please share your views on this.
08-08-2013 05:57 AM
Yes, that is the one (the 9.3 version goes with Miner 12.1).
Read it; you can download it and run/use it. It is a collection of SAS sources and SAS samples.
With the logging configuration on the servers you could add more events to the logs. It will not monitor just EM but all users; EM uses the workspace servers.
If you have a well-secured environment, then you need to place this in its own "sand-boxed" part.
08-08-2013 06:44 AM
I have downloaded APM93.unx.tar to my local machine. I can see many configuration changes suggested in the PDF. Do you have any idea how these changes will impact the current set-up? Also, please let me know how to create my own sand-boxed part. Thanks in advance.
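For reference, unpacking the tar into its own directory on the Unix side would look roughly like this (a sketch; the stand-in archive built below only exists to make the commands runnable — in practice you would point tar at the downloaded APM93.unx.tar):

```shell
set -eu
# Work in a scratch directory so nothing else on the system is touched.
work="$(mktemp -d)"
cd "$work"

# Stand-in archive (replace with the real APM93.unx.tar you downloaded).
mkdir -p src && echo '* sample;' > src/sample.sas
tar -cf apm_demo.tar src

# Extract into a dedicated directory so the APM files stay together.
mkdir -p apm
tar -xf apm_demo.tar -C apm
ls apm/src
```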
08-08-2013 08:18 AM
You have the code and documentation. Your last question confuses me, as it suggests you have little experience with Unix.
- The root account is the holy grail; you do not want to use it without a good reason. Root-kits belong to the bad guys, the blackhats.
- You have mountpoints defining the amount of storage from a storage pool. That needs "root" access and (SAN) storage management.
- Security is set up by the "owner", that is, the account creating the files/directories. Within a hierarchical file system you have to define a well-organized structure of accounts and groups for the logic and the security associated with it. It is complicated and confusing when you have never faced it. Once you know the special behaviors, like the setgid bit on directories, it becomes "business as usual".
Some special versions of Unix exist that add security approaches from the (in this aspect) far more advanced MS-Windows. Hadoop is based on a Unix approach; never stop learning.
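To illustrate that setgid behavior on directories, a minimal sketch (paths are made up; at a real site you would also set a shared group as the directory's owner group):

```shell
set -eu
# A shared directory with the setgid bit: files created inside it
# inherit the directory's group instead of the creator's primary group.
shared="$(mktemp -d)/apm_shared"
mkdir -p "$shared"
chmod 2770 "$shared"    # rwx for owner and group, nothing for others, setgid on
stat -c '%a' "$shared"  # prints 2770
```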
The instructions are about:
- Placing the sources. You need to store them somewhere.
- Placing all aggregated data and reports. You need to run the code and update tables to do that.
- Being able to read the source data (the log files).
As long as you are the only person involved, you can use your personal account. Place it somewhere with enough storage.
If this account has limited access, it will keep you inside this environment (the sand-box).
If you are fine using a DMS session, possibly via SAS/CONNECT, you could bypass the metadata BI/DI processing.
Not very sensible choices are:
- sasinst, as it could unintentionally change your SAS installation.
- sassrv, as it is a kind of root account to all SAS data and SAS customers and .....; it could unintentionally change, delete, or access something from those parts.
So you need to define more accounts, one for each dedicated, segregated purpose, where strict security is required.
This is also something to add as an additional application-server context or a dedicated metadata server (another level) on your machine.
You do not want to have access to other parts of your system, or do you?
By a sand-box I mean setting up OS storage and security in a way that you cannot access anything you do not need.
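A minimal sketch of such a sand-box at the OS level (directory names are made up for illustration; a real site would add dedicated accounts and groups on top of this):

```shell
set -eu
# One private root for the APM work: only the owning account can
# traverse it, so the work cannot spill into other parts of the system.
apmroot="$(mktemp -d)/apm"
mkdir -p "$apmroot/source" "$apmroot/data" "$apmroot/reports"
chmod -R u=rwX,go= "$apmroot"   # owner full access, group/others locked out
stat -c '%a' "$apmroot"         # prints 700
```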