Since we did not have SAS/ACCESS to Spark licensed, I ended up implementing the load myself, and performance has been great compared to row-by-row inserts. In brief:
- create an empty target table using implicit SQL passthrough
- export the data to CSV, formatting datetime and date columns to Spark's default formats
- upload the CSV to S3
- execute a COPY INTO statement on the Databricks warehouse using explicit SQL passthrough
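The steps above can be sketched roughly as follows. This is an illustrative outline, not the exact code: the `dbx` libref/connection, the `work.source` dataset, the column names, the bucket path, and the macro variables holding AWS credentials are all placeholders, PROC S3 availability depends on your SAS release, and the date/datetime formats may need adjusting to match what your Spark version actually parses (Spark's defaults are `yyyy-MM-dd` and `yyyy-MM-dd HH:mm:ss`).

```sas
/* 1. Create the empty target table through the Databricks libref
      (implicit passthrough); WHERE 0=1 keeps it empty */
proc sql;
   create table dbx.mytable as
   select * from work.source
   where 0=1;
quit;

/* 2. Render date/datetime columns in Spark-friendly text.
      Note E8601DT19. emits a 'T' separator (yyyy-MM-ddTHH:mm:ss);
      use a custom picture format if Spark expects a space instead. */
data export_ready;
   set work.source;
   format order_date yymmdd10.      /* yyyy-MM-dd */
          created_at e8601dt19.;
run;

proc export data=export_ready
   outfile="/tmp/mytable.csv"
   dbms=csv replace;
run;

/* 3. Upload the CSV to S3 (PROC S3 ships with recent SAS 9.4 releases) */
proc s3 keyid="&aws_key" secret="&aws_secret" region="useast";
   put "/tmp/mytable.csv" "/my-bucket/staging/mytable.csv";
run;

/* 4. Bulk-load on the warehouse side via explicit passthrough */
proc sql;
   connect using dbx;
   execute by dbx (
      COPY INTO mytable
      FROM 's3://my-bucket/staging/'
      FILEFORMAT = CSV
      FORMAT_OPTIONS ('header' = 'true')
   );
   disconnect from dbx;
quit;
```

The key design point is that only the tiny DDL and the final `COPY INTO` go over the SAS/Databricks connection; the bulk data moves as a flat file through S3, which is why this is so much faster than pushing rows through the engine.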