Hi everyone, I'm facing the same problem. If I understand the architecture of the SAS/ACCESS Interface to Hadoop connector correctly, it connects to a Hive server, which then submits the processing to a Spark cluster. I configured it following this article, setting Spark as the default execution engine: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started

But I get exactly this error when I use certain WHERE clauses.

@AndrewHowell, did you succeed in solving this issue? If so, how did you get jobs to run on the Spark engine? Can you share the configuration or properties you adjusted?

Thanks in advance,
Regards
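For context, here is a minimal sketch of the kind of hive-site.xml settings that article describes for Hive on Spark. The property names come from the Hive wiki; the master URL, log directory, and memory value below are placeholders, not my actual configuration:

```xml
<!-- Sketch of Hive-on-Spark settings per the "Hive on Spark: Getting Started" wiki.
     Values here are placeholders; adjust for your cluster. -->
<property>
  <name>hive.execution.engine</name>
  <value>spark</value>
</property>
<property>
  <name>spark.master</name>
  <value>yarn</value> <!-- or spark://your-master-host:7077 -->
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>/tmp/spark-events</value> <!-- placeholder path -->
</property>
<property>
  <name>spark.executor.memory</name>
  <value>2g</value> <!-- placeholder; size for your workload -->
</property>
```

With this in place, simple queries run on Spark for me; the failure only shows up once certain WHERE clauses are added.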