🔒 This topic is solved and locked.

[Not getting any responses on the Visual Analytics board, so trying here..]

 

SAS 9.4 & VA 7.3 (distributed, co-located with Cloudera): I'm trying to understand all the ways SAS users (especially programmers) can utilise the platform, so I can set up best practices and explain the worst-case scenarios if they program with an "old SAS" mindset - in particular, what processing takes place in the LASR server and what processing is fed back to a compute server, because I don't want them writing code that drags massive amounts of Hadoop or LASR data back to a compute server.

 

I've reviewed the VA admin guide, the IMS ref guide, the VA installation guide, etc.

 

One question I haven't been able to definitively answer: the processing behind VA Data Preparation (e.g., when setting up a star schema) is essentially SQL execution, but where is that processing executed - on the LASR server, or on a compute server?

 

Thanks.


4 REPLIES 4
LinusH
Tourmaline | Level 20

To my understanding, all data preparation/builder logic is executed by a compute server. Only the loading part into LASR naturally involves LASR.

And with SAS code, especially SQL, you could probably benefit from implicit pass-through if your data source is in Hadoop or another RDBMS.
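
For what it's worth, a minimal sketch of that pass-through pattern - the libref, server, schema, user and table names are all made-up placeholders, so adjust for your own environment:

/* Hypothetical SAS/ACCESS to Hadoop library: server, schema and user are placeholders */
libname hivelib hadoop server="hive-node.example.com" port=10000 schema=sales user=myuser;

/* With implicit pass-through, PROC SQL pushes the filtering and aggregation down to
   Hive where it can, so only the summarised result lands on the workspace (compute)
   server rather than the whole fact table                                            */
proc sql;
   create table work.sales_summary as
   select region, sum(amount) as total_amount
   from hivelib.sales_fact
   group by region;
quit;

The point is to filter and aggregate in the source system rather than pulling entire tables back to the compute tier first.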

Data never sleeps
SimonDawson
SAS Employee

The Visual Data Builder (VDB) builds SAS programs that run on the workspace server. Some of the buttons in the interfaces of the VA applications, such as some of the data import features, spawn pooled workspace servers.
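
To give a feel for the shape of such a program - this is not the code VDB actually generates, just a hand-written sketch, and every libref, host, port and tag below is made up:

/* The data preparation (e.g. flattening a star schema) runs here, on the workspace server */
proc sql;
   create table work.star_flat as
   select f.*, d.customer_name, d.region
   from source.fact_sales as f
        inner join source.dim_customer as d
        on f.customer_id = d.customer_id;
quit;

/* Only this last step involves LASR: the SASIOLA engine streams the prepared table
   into the LASR Analytic Server                                                     */
libname valasr sasiola host="lasr-head.example.com" port=10010 tag="vapublic";

data valasr.star_flat;
   set work.star_flat;
run;

Either way, the heavy joins run on the workspace server, and only the finished table crosses over into LASR.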

SimonDawson
SAS Employee
When you have co-located Hadoop and everything is set up nicely, you should be able to do a lot of the lifting of data into LASR in parallel if they are Hive tables, and if you have SASHDAT tables in Hadoop you can do memory-mapped I/O for even better throughput.
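
Roughly, the SASHDAT route looks something like the sketch below - the grid host, HDFS path, install path and port are placeholders, and the exact engine options may differ by release, so check the SAS LASR Analytic Server Reference Guide:

/* Hypothetical SASHDAT library on co-located HDFS */
libname hdat sashdat path="/user/lasr/sales" server="grid-head.example.com" install="/opt/TKGrid";

/* Because the SASHDAT blocks sit on the same nodes as the LASR workers, the load runs
   in parallel with memory-mapped I/O instead of funnelling the data through a single
   compute server                                                                      */
proc lasr add data=hdat.sales_fact port=10010;
   performance host="grid-head.example.com" nodes=all;
run;

The Hive-table route can also load in parallel (via the SAS Embedded Process) when the deployment supports it; either way the data moves node-to-node into LASR rather than through the workspace server.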

Call tech support when you have a moment free and ask for me - let's have a chat about it.
AndrewHowell
Moderator

Thanks, Simon - will do. Chat soon.

