In discussion with a few users on the benefits of Stored Processes, the idea of an internal library of Stored Processes came up. Has anyone had experience with setting up such a library within a health organization? What are some of the benefits and challenges in setting up this environment?
When you talk about "web" applications, it's useful (in my opinion) to distinguish between the differing technologies available to you.
If you have SAS/IntrNet (and the Application Broker/Application Dispatcher servers/library) set up, then as soon as you start storing programs in the Dispatcher library, you DO have a set of SAS programs that can be shared and dynamically invoked using SAS/IntrNet. Or to put it another way -- you have a set of programs that can all be executed by sending a URL request from a web browser to a web server like this:
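For the record, a Dispatcher request URL generally looks something like this (the server name, service name, and two-level program name here are placeholders, not real values):

```
http://yourserver/cgi-bin/broker?_service=default&_program=mylib.myreport.sas
```

The broker CGI receives the request, and the _program parameter tells the Application Dispatcher which program in its library to run.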
SAS/IntrNet uses CGI (Common Gateway Interface) technology. A "broker" program sits on the web server and receives HTTP requests from client machines. Of course, that means that somebody has to code the HTML interfaces (forms or web pages) that will dynamically invoke the programs sitting in the Application Dispatcher library. Not many of your users are going to want to type the above URL into their web browser in order to run a SAS program from a web browser.
The programs in the Application Dispatcher library are the same as your basic SAS programs, but because the programs are running on a web server, you can't, for example, do something like this:
ODS HTML FILE='c:\temp\wombat.html';
because, let's face it, if you have a UNIX or Apple web server, you don't ever have a C drive. Instead, you code something like this in your program:

ODS HTML FILE=_webout;

This means that your results follow the path or pipeline back from the web server to the machine that initiated the request. The reserved fileref for that "pipeline" in SAS/IntrNet Dispatcher programs is _webout.
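So a minimal Dispatcher-style program might look like this (using SASHELP.CLASS purely as sample data):

```sas
ods html file=_webout;   /* route the ODS output back through the broker */
proc print data=sashelp.class;
  title 'Hello from the Application Dispatcher';
run;
ods html close;          /* close the destination so the page is complete */
```
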
But THEN, you had to move the changed program from your local machine to a special library (the Application Dispatcher Library) that was defined to the Application Dispatcher. Then, your request (via URL) would come to the "broker" program and if necessary, the "broker" program would hand off the program to the Application Dispatcher to service the request (aka run the program) and return the results to the requesting client machine.
OK, now move forward in time to the present and Stored Processes. Conceptually, Stored Processes in the SAS Intelligence Platform architecture are very similar to Application Dispatcher programs. Stored Process programs also have to live in a special place -- only, instead of living in a special Application Dispatcher library, they live in a stored process repository.
The advantage of having your stored processes in a repository is that they can be "shared" and executed from many of the SAS Intelligence Platform client applications. What does that mean? It means that the same Stored Process can be executed from within EG, from within the SAS Add-In for Microsoft Office, from within SAS Web Report Studio, or from within a custom web page that executes the stored process using the Stored Process Web Application.
The technology change that makes it possible to execute a Stored Process from multiple clients is the fact that information for the stored process is defined in the METADATA server that is the "boss" of the SAS Intelligence Platform applications.
So, let's say I want to create a program that allows the end-user to select a REGION for their request. With SAS/IntrNet, I am responsible for coding an HTML form that shows the user the list of possible REGIONS. And, then my form action kicks off the Application Dispatcher program.
But, if I am using a Stored Process, I can define the "REGION" parameter in the SAS Intelligence Platform metadata. That means that each one of the client applications will use the appropriate interface to prompt the user for the REGION value when they go to run that stored process. I can even make the REGION value required in the metadata definition, and THEN the client interfaces will NOT execute the stored process until the user supplies a value. I can also set a default value for the parameter.
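Inside the stored process code itself, a registered parameter simply arrives as a macro variable. A sketch (SALESLIB.SALES is a made-up table; %STPBEGIN/%STPEND are the standard wrapper macros that set up and close the ODS destinations for a stored process):

```sas
*ProcessBody;            /* marks where execution begins (9.1.3 convention) */
%stpbegin;               /* initialize ODS output for the requesting client */
proc print data=saleslib.sales;
  where region = "&region";   /* the REGION prompt value arrives as &region */
run;
%stpend;                 /* close the ODS output */
```
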
And, even better, if I don't want the end users to use EG or Microsoft Word to surface their results, I can still use the Stored Process Web Application to kick off the stored processes via URL -- in pretty much the same fashion that I could do it with SAS/IntrNet.
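Such a request to the Stored Process Web Application looks something like this (the server, port, metadata folder path, and parameter value are all placeholders):

```
http://yourserver:8080/SASStoredProcess/do?_program=/MyFolder/SalesReport&region=West
```
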
This is getting to be a quite long post. Sorry 'bout that. I thought SAS/IntrNet was pretty cool -- and having a central Dispatcher library of shareable programs was cool. And I still do think that it's cool. But the metadata wins out over everything else for me...because I just have to write the program once, register the parameters once in the metadata, register what server I want the stored process to run on -- then I'm done...that stored process can run from a variety of client applications without my intervention.
I think one of the immediate challenges with setting up stored processes in a Pharma company would be the fact that Stored Processes are difficult to write and register if your preferred SAS tool is good ole SAS DMS. And my experience is that SAS programmers in Pharma prefer the more traditional SAS interface.
Stored Process creation and registration is much easier in Enterprise Guide than by other means (I guess you could call those other means manual registration). Getting the typical Pharma SAS programming team to convert to using Enterprise Guide is a definite challenge.
Most SAS programmers at Pharma companies are writing lots of macro, data step, and proc report code. They are heavily invested in massive macro libraries that they have developed in house to customize every last option on gplot or proc glm output. They know the syntax for the procs that they use. You show them Enterprise Guide and they simply don't see anything there that interests them.
Personally, I like Enterprise Guide a lot. But then I am not spending 40 hours a week writing proc report code to prove that the next new drug is safe and effective.
Now, if we expand health from just a Pharma discussion to medical customers and maybe health insurance providers -- then I see some real nice traction for Enterprise Guide and Stored Processes. It is a very natural fit for those types of users. Insurance providers have massive claims databases and they have very few people who know how to accurately and efficiently report on these massive data. I could definitely see a group inside one of these organizations using Enterprise Guide to author a series of Stored Processes and expose them to a larger audience within the organization, who could run them and view the results.
EG is way cool for registering stored processes using the Stored Process Wizard -- but I consider EG to be the "automated" way to create your SP from either a task or an entire project or even from existing code.
When I create stored processes for the Stored Process class, however, I use good, old-fashioned DMS and then I manually register the SPs with SAS Management Console. Almost all of the SPs that I write for class are PROC REPORT examples, since that is currently something that you cannot do with EG.
In a regular BI configuration, there is already a location built in for your SAS macro library and your "code snippet" collection and your format library. Macro programmers do have to make a few changes to their macro code -- but the changes are minor and they should be able to convert their legacy programs to SPs fairly easily. This is the subject of one of my SGF presentations this year. Of course, my macro program won't be as complex as those used by most Pharmas, because I only have 50 minutes to talk.
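To give a feel for how minor those changes are: the usual conversion is to swap the hard-coded ODS statements for the stored process wrapper macros. A sketch, where %myreport stands in for any existing in-house macro:

```sas
/* Legacy batch version */
ods html file='c:\temp\report.html';
%myreport(region=West);
ods html close;

/* Stored process version -- %STPBEGIN/%STPEND handle the ODS setup,
   and the REGION value now arrives from the client as a macro variable */
*ProcessBody;
%stpbegin;
%myreport(region=&region);
%stpend;
```
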
At any rate, I don't think that Stored Processes are difficult to write. What was difficult for me was the "paradigm shift" from the DMS way of doing things to the BI way of doing things. But since I'd already gone through that same kind of shift with SAS/IntrNet, it wasn't such a huge change.
One of the notes in my SGF paper is the observation that "Stored Process Consumers Don't F3" -- and I'm just hoping that there are enough folks in the audience who actually use F3 enough to get the lighthearted joke. Of course, Stored Process AUTHORS might still use F3 -- but there's a real difference between how you test and debug the SP, as an author, and how the end user or SP consumer will experience and execute the SP.
For the really heavy duty SAS programmers, I agree that they may not do more than use EG in order to use the Stored Process Wizard. In class, we tell students that they don't actually have to open SAS to modify their programs and sometimes, I challenge them to make the required code changes just using Notepad -- which is an interesting exercise because it reinforces that in order to test their code, they have to break out of their dependency on DMS and actually test in the client applications.
But SAS in batch (via DMS or via command programs) still has a place in the BI platform -- there's no reason to put all of the nightly number-crunching and file updating into Stored Processes. Those processes might be better turned into DI Studio jobs or even kept as batch SAS jobs to update the appropriate files. Then you can let your SPs zip in and out of the data to do the on-demand reporting against the updated files.
So that's my 2 cents plus your 2 cents and we still can't go to Starbucks on that!