I think he hits the nail on the head when he says: "To obtain this flexibility, the data has to be pre-summarised for all crossings (n-way), and then summarised again for each subpopulation split. This means that the size of the data set can increase considerably compared to the original detail data. This procedure defeats the main purpose of VA, which is to load detail data in memory and to derive aggregations on the fly as the user navigates the report."

Besides which, his approach might work well enough when displaying counts, but as soon as you start using percentages (e.g. percent of column subtotal), the Sum _ByGroup_ function starts behaving really strangely when filters are applied. I suspect we're asking more of VA than it was designed to do, but even so, it's something we could work around if VA were able to perform aggregations on aggregated measures. Which, as your mate suggests, shouldn't be too difficult for a product that's supposed to let you do this sort of thing.

I guess I'm just curious to know why VA can't do it, given that it's a relatively simple extension of what it can already do:

Calculation A - fine.
Calculation B - fine.
Calculation A, then factor that into Calculation B - not fine.
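To make the "aggregation on an aggregated measure" pattern concrete, here's a minimal sketch outside of VA, using made-up region/product sales data. Calculation A summarises detail rows up to each crossing; Calculation B then summarises that summary (the region subtotal) and expresses each crossing as a percent of it. This is exactly the two-step dependency VA won't do on the fly:

```python
from collections import defaultdict

# Hypothetical detail data: one row per transaction (region, product, sales).
detail = [
    ("East", "A", 10), ("East", "A", 20), ("East", "B", 30),
    ("West", "A", 40), ("West", "B", 50),
]

# Calculation A: summarise detail rows up to each region/product crossing.
crossing = defaultdict(float)
for region, product, sales in detail:
    crossing[(region, product)] += sales

# Calculation B: summarise the summary -- each region's subtotal --
# then express each crossing as a percent of that subtotal.
subtotal = defaultdict(float)
for (region, _), total in crossing.items():
    subtotal[region] += total

pct_of_region = {
    key: 100 * total / subtotal[key[0]] for key, total in crossing.items()
}
```

Note that Calculation B consumes the *output* of Calculation A, not the detail rows; any filter applied after step A has already run is what trips up the _ByGroup_ percentages.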