Hi, thanks for the replies so far. Perhaps it will help to explain what I'm trying to do.

I'm trying to develop a generic approach to creating an SQL query against a dataset using a stored process. In most cases it will be a straight query against a fact table. In some instances I will then join dimension data to the fact table subset using a hash object join. For now, assume that the subsetting (WHERE clause) only occurs against the fact table. In some instances the end user will select the libref and dataset; in others I will simply set default values for these parameters and make the prompts hidden.

So, in the user interface (prompt manager), the end user selects: the data source (libref and dataset) (FROM clause), the desired output columns (SELECT clause), the subset criteria (WHERE clause), and the output sort order (ORDER BY clause - for now only ascending sort order is supported).

The best (albeit clunky) design I've been able to come up with so far is:

- libref: text, dynamic list, single value
- dataset: text, dynamic list, single value
- columns: text, dynamic list, multiple ordered values
- filter name/value pairs (5 pairs of prompts):
  - name: text, dynamic list, single value
  - value: text, user enters values, multiple values
- additional filtering: text, user enters values, single value
- order: text, dynamic list, multiple ordered values

The dataset list is dependent on the libref selection, and all downstream column selections are dependent on the libref and dataset selections. The source datasets for the dynamic lists are SASHELP.VLIBNAM, SASHELP.VTABLE, and SASHELP.VCOLUMN. I created the metadata for these tables by creating a pre-assigned library definition for SASHELP, then importing selected tables, writing the objects to /Shared Data.

The user selects the desired libref, then the desired table, then the desired output columns. For the filter, there are 5 "built-in" prompts where the user selects the desired variable, then enters the desired subsetting values. Some of these variables have about 1.5 - 3M distinct values, so "Get Values" from a table simply won't perform acceptably - the user will have to type or paste the values into the prompt. The variable data type is checked in the stored process, and the values are quoted as appropriate. In almost all cases 5 prompts (variables) are sufficient; otherwise an advanced user can augment the filter (WHERE clause) using the additional filtering prompt. If 5 prompts are more than I need, I can simply hide the unnecessary prompts without having to change the underlying STP code. If I want to "bind" a particular variable to a prompt, I can set a default value and either protect or hide the variable selection dropdown. This approach (hopefully) allows flexibility for the end user and fast stored process development for the developer (sketches of the query assembly and the hash join are at the end of this post).

This all works fine...EXCEPT it performs horribly - like 1-3 minutes for the prompt manager to display the UI. So I tried making a "snapshot" of VLIBNAM, VTABLE, and VCOLUMN for a single library, but still got very poor performance. My guess is that all these dynamic, dependent prompts, plus having to query the workspace server, are the cause of the poor UI performance. So I switched to using the data source and data source item prompts, which perform great - they read from metadata instead of dynamic views on the workspace server - but these prompts IMO set really stupid values for the macro variables!!!
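For context, here is roughly how the stored process assembles the query from the prompt values. This is only a simplified sketch: the macro variable names (libref, dataset, columns_count/columns1-N, filtvar1, filtval1_count/filtval1_1-N, addfilter, order_count/order1-N) are placeholders rather than the actual prompt names, and only the first filter pair is shown - the real code loops over all 5 pairs and does more validation.

/* Sketch only: prompt macro variable names below are placeholders */
%macro build_query;
  %local i select_list order_list where_clause vtype vals qval;

  /* SELECT clause from the ordered multi-value "columns" prompt */
  %do i = 1 %to &columns_count;
    %if &i = 1 %then %let select_list = &&columns&i;
    %else %let select_list = &select_list, &&columns&i;
  %end;

  /* Look up the filter variable's data type so its values can be */
  /* quoted correctly (pair 1 only shown here)                    */
  proc sql noprint;
    select type into :vtype trimmed
      from sashelp.vcolumn
      where libname = upcase("&libref")
        and memname = upcase("&dataset")
        and upcase(name) = upcase("&filtvar1");
  quit;

  /* Build an IN () list, quoting character values */
  %do i = 1 %to &filtval1_count;
    %if %lowcase(&vtype) = char %then %let qval = "&&filtval1_&i";
    %else %let qval = &&filtval1_&i;
    %let vals = &vals &qval;
  %end;
  %let where_clause = &filtvar1 in (&vals);

  /* Advanced users can append free-text filtering */
  %if %length(&addfilter) %then
    %let where_clause = (&where_clause) and (&addfilter);

  /* ORDER BY clause (ascending only for now) */
  %do i = 1 %to &order_count;
    %if &i = 1 %then %let order_list = &&order&i;
    %else %let order_list = &order_list, &&order&i;
  %end;

  proc sql;
    create table work.subset as
      select &select_list
        from &libref..&dataset
        where &where_clause
        %if %length(&order_list) %then order by &order_list;
    ;
  quit;
%mend build_query;

%build_query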
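And the optional dimension join is just a hash lookup against the fact subset, along these lines (WORK.SUBSET, DIMLIB.PRODUCT_DIM, PRODUCT_KEY, and PRODUCT_NAME are made-up example names):

data work.subset_dim;
  /* define the PDV host variables for the dimension columns */
  if 0 then set dimlib.product_dim(keep=product_key product_name);
  if _n_ = 1 then do;
    declare hash dim(dataset: "dimlib.product_dim(keep=product_key product_name)");
    dim.defineKey("product_key");
    dim.defineData("product_name");
    dim.defineDone();
  end;
  set work.subset;
  /* look up the dimension attributes; keep the fact row even if no match */
  if dim.find() ne 0 then call missing(product_name);
run;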
1) What I meant by "derive" is: can I use PROC METADATA or the metadata DATA step interfaces to derive the libref from a metadata table object, if I have the path to the object? The SASHELP table metadata objects are in Shared Data only because I wrote them there. (A sketch of what I mean is at the end of this reply.)

2) I understand that DATASOURCE is the metadata location - that was in fact my point, and I wasn't just "griping". Most (all?) of my stored processes are "old school" - I'm not doing much with, say, PROC METADATA, PROC STP, etc. I truly don't understand why any stored process developer, or stored process end user, would be interested in the metadata path over the libref.dataset name of the table object. I'm only working with Tables, not, say, Information Maps. Would the metadata path be of interest for an Information Map data source? If anyone can provide an example of a stored process where the metadata path is useful to the stored process, I'd love to see it; I'm sure I would learn something from the example.

As for "DATASOURCEITEM allows user to select data items from either tables" - I disagree, since DATASOURCEITEM does not contain sufficient information to derive the source libref and dataset. If I have SASHELP.SHOES, SASUSER.SHOES, and SOMELIB.SHOES, which SHOES dataset did the end user select? The generated macro variables don't give me that information.

3) I can't hide the DATASOURCEITEM GUI from the user, since I need it to be visible so they can select the desired columns. What I want to suppress is the DATASOURCE path in the GUI, since it is of no interest (and in fact confusing) to my end users.

4) IMO this is a bit short-sighted. Setting the available list of downstream DATASOURCEITEMs based on the selection of an upstream DATASOURCE is obvious, isn't it? Surely R&D would get this??? We were doing this in SAS/AF for decades! In DMS, select Tools --> Query to see what I mean!

5) OK, thanks - so COLUMN0 is deprecated, but still around for legacy functionality. I would open a TS track, but I think this is more a case of stating a desired use case and requesting feedback from the community. If others are in agreement, I'll formalize this as an enhancement request (although IMO a few of these are not enhancements but design bugs worthy of a hotfix). I understand the evolutionary history of SAS, but the subject matter in this post is very recent code/functionality in SAS. Like I said, maybe I'm missing something, but I think the prompt macro variables should return libref.tablename.columnname rather than a metadata path. I wonder whether the prompt manager got much usability testing by end users before being released to the market?
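To make (1) concrete, here is the kind of thing I have in mind, using the metadata DATA step functions. I may not have every detail right - the path value is a made-up example of what the DATASOURCE prompt returns, and I'm assuming the PhysicalTable object's SASTableName attribute and its TablePackage association to the owning SASLibrary (which carries the Libref attribute):

/* Sketch only: derive libref.member from the metadata folder path returned */
/* by the data source prompt. The path below is a made-up example.          */
%let datasource = /Shared Data/SHOES(Table);

data _null_;
  length type taburi liburi $256 id $20 tabname $32 libref $8;

  /* resolve the folder path to the table's metadata object */
  rc = metadata_pathobj("", "&datasource", "Table", type, id);
  if rc <= 0 then do;
    put "NOTE: could not resolve &datasource";
    stop;
  end;
  taburi = cats("omsobj:", type, "/", id);

  /* physical member name of the table */
  rc = metadata_getattr(taburi, "SASTableName", tabname);

  /* walk the TablePackage association to the owning SASLibrary */
  rc = metadata_getnasn(taburi, "TablePackage", 1, liburi);
  rc = metadata_getattr(liburi, "Libref", libref);

  call symputx("src_table", catx(".", libref, tabname));
run;

%put NOTE: selected table is &src_table;

If the macro variables simply gave me libref.tablename in the first place, none of this extra work would be necessary.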