I've been dabbling with this, and hit the fundamental issue that JSON is inherently schema-less, unlike XML. Essentially, you are never sure what you're going to get, in terms of field structure, from any one REST call (or from any other expression of data in JSON, like an "export to file").
What I tried was creating "template" datasets/tables that I could use to conform any instance of the JSON-derived tables (from the same source or endpoint) to the virtual schema that the data (more or less) complies with.
So in the code I was working on, once I've got the various tables back from the JSON library (often only one, really), I run them through an append into an empty "target instance" of the template dataset, freshly created to be the output of my user transform. That way the output table from my call to the REST endpoint is crudely conformed to a static structure/schema, along the lines of the sketch below.
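A minimal sketch of that pattern, assuming the REST response has already been written to a file and that WORK.TEMPLATE is the hand-built template dataset; the libref, paths and the ROOT member name are illustrative and depend on the shape of the JSON:

filename resp "/tmp/response.json";      /* the saved REST response          */
libname jsonlib json fileref=resp;       /* JSON engine exposes it as tables */

/* Fresh, empty "target instance" of the template for this run */
data work.conformed;
   set work.template(obs=0);
run;

/* Brute-force the JSON-derived table into the template structure.
   FORCE appends despite differing variables; NOWARN is supposed to
   quieten the messages, though some warnings still get through. */
proc append base=work.conformed
            data=jsonlib.root
            force nowarn;
run;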
I ended up with a User Transform that took the "template" dataset as an input, and the REST endpoint details and options/parameters as options, and coughed out a dataset with a fixed format. So in DIS, I could use registered tables as the output of the UT that called the REST endpoint, with a stable data structure.
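For what it's worth, here is a hypothetical sketch of what the generated code behind such a UT might boil down to, with the template and output as table parameters and the URL as an option; the macro name, parameters and the ROOT member name are assumptions, not the actual transform:

%macro rest_to_template(url=, template=, out=);
   filename _resp temp;

   /* Call the REST endpoint */
   proc http url="&url" method="GET" out=_resp;
   run;

   /* Read the response with the JSON libname engine */
   libname _json json fileref=_resp;

   /* Empty instance of the template, then force-append the JSON table */
   data &out;
      set &template(obs=0);
   run;

   proc append base=&out data=_json.root force nowarn;
   run;

   libname _json clear;
   filename _resp clear;
%mend rest_to_template;

/* Example call */
%rest_to_template(url=https://example.com/api/items,
                  template=work.template,
                  out=work.items);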
PROC APPEND is a bit low-tech/brute-force for doing this; it would probably be better done with a more in-depth macro. There are options to suppress the squawking about the JSON-sourced table not matching the template-derived BASE, but I couldn't get it to stop issuing warnings, and "probably benign warnings" are absolutely not what I want in a DIS UT.
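One possible alternative, sketched here with illustrative names and assuming JSONLIB.ROOT is the JSON-derived table from above, is to conform in a DATA step instead, which stays quiet as long as like-named variables have the same type (template variables missing from the JSON come through as missing values; a character/numeric clash is still an error):

/* Pick up the list of template variables to KEEP */
proc sql noprint;
   select name into :tmpl_vars separated by ' '
      from dictionary.columns
      where libname = 'WORK' and memname = 'TEMPLATE';
quit;

data work.conformed;
   if 0 then set work.template;   /* adopt the template's attributes; never executes */
   set jsonlib.root;              /* the JSON-derived table */
   keep &tmpl_vars;               /* drop anything the template doesn't define */
run;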
XML is much preferable, as the XML Map mechanism, and XML itself, tackles the issue of how to handle parts of a schema not being present in a particular set of records. In JSON, the absence of a schema, and the absence of any sort of placeholder for "not present" data items/fields, means that it's actually an undesirable format for "ETL-esque" data transport, but it's not going away.
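For comparison, a minimal illustration of the XML side, where the map supplies the fixed schema up front (paths and the table name are placeholders):

libname xin xmlv2 "/tmp/response.xml" xmlmap="/tmp/response.map";

data work.from_xml;
   set xin.records;   /* table name as defined in the XMLMap */
run;

libname xin clear;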