racrown
Calcite | Level 5

We have Visual Analytics 7.3 installed in a distributed environment: 4 worker nodes with 128 GB RAM each (no co-located data store). I am loading data from Hive (a separate Hadoop cluster with no SAS components) into LASR. My users want to search raw data sets that are approximately 200 GB or larger. I have no problem loading smaller sets of up to about 80 GB into LASR, but the 200 GB+ sets fail to load. When the users apply their filters to get to the data they need, the result should be less than 1% of the 200 GB, so pulling everything into the LASR memory structures seems very cumbersome for this use case. I am thinking a different SAS tool would better serve their needs, and that Data Miner, Web Reports, or SAS/ACCESS Interface to Hadoop might map well. Advice is appreciated.
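
A rough sketch of the kind of filtered load I have in mind is below, assuming SAS/ACCESS Interface to Hadoop is licensed. The server, port, schema, table, and column names are made up, credentials are omitted, and how much of the WHERE clause actually gets pushed down to Hive would need to be verified:

   /* Hive source (placeholder connection details, credentials omitted) */
   libname hivelib hadoop server="hive-server.example.com" port=10000
           schema=rawdata;

   /* LASR target for VA (placeholder host/port/tag) */
   libname valasr sasiola host="lasr-head.example.com" port=10010 tag="hps";

   /* Filter at read time; SAS/ACCESS attempts to pass the WHERE clause
      to Hive, so ideally only the small subset travels into LASR */
   data valasr.raw_subset;
      set hivelib.big_raw_table;
      where region = 'EMEA' and event_year >= 2017;
   run;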

1 REPLY
SASKiwi
PROC Star

I would suggest that running SQL queries against the data in Hadoop would be a far better option. LASR servers should be reserved for reporting and exploring through the VA front end only.
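
For example, an explicit pass-through query along these lines would run the filter inside Hive and return only the matching rows to SAS (connection options, schema, table, and column names below are illustrative only):

   proc sql;
      connect to hadoop (server="hive-server.example.com" port=10000 schema=rawdata);
      /* The inner query is sent to Hive as-is, so the WHERE clause
         executes on the cluster and only the result comes back to SAS */
      create table work.filtered as
         select * from connection to hadoop
         (  select *
            from   big_raw_table
            where  region = 'EMEA'
              and  event_year >= 2017
         );
      disconnect from hadoop;
   quit;

If the users still need to explore that subset in VA, only the small filtered result would then be loaded into LASR, which at well under 1% of 200 GB should be a quick load.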
