The simplest workaround would be to just change the pre-aggregation name. In Cube Cloud, we have a warmup feature that builds all pending pre-aggregations before the changed version is deployed. If the cost of rebuilding is the main concern, you can invalidate pre-aggregations by dropping the table and resetting the Redis instance.
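For reference, renaming can be as simple as changing the key inside the `preAggregations` block of the data model. A minimal sketch — the cube and member names here are hypothetical, not from this thread:

```javascript
// Hypothetical cube definition. Renaming the pre-aggregation key
// (e.g. ordersRollup -> ordersRollupV2) makes Cube treat it as a brand
// new pre-aggregation, so it is built from scratch and the old table
// is no longer used.
cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  preAggregations: {
    ordersRollupV2: { // was: ordersRollup
      measures: [CUBE.count],
      dimensions: [CUBE.status],
      timeDimension: CUBE.createdAt,
      granularity: `day`,
    },
  },

  measures: {
    count: { type: `count` },
  },

  dimensions: {
    status: { sql: `status`, type: `string` },
    createdAt: { sql: `created_at`, type: `time` },
  },
});
```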
Thanks for getting back to me.
What I ended up doing was deleting the table and restarting Redis. It did, however, rebuild almost everything, so I guess something unexpected happened as a result of the data in Redis no longer being available. It was not ideal, to say the least.
As you mention, cost is the main issue here, as the primary datastore is Athena, which charges per GB of data scanned. Cube Cloud's current feature offering isn't sufficient to bypass this, and it would also (presumably) have prevented me from deleting the pre-aggregation table and clearing the data from Redis, so I'd have been stuck.
Rescanning and rebuilding pre-aggregations for every possible date in every possible scenario just isn't viable going forward, so may I suggest a feature to rebuild data by pre-aggregation and date/refresh key, be it in Cube Cloud or otherwise?
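The closest existing mechanism I'm aware of is partitioned pre-aggregations: with a partition granularity plus an incremental refresh key, Cube builds one table per partition and only rescans partitions inside the update window, rather than all history. A hedged sketch under those assumptions (cube and member names are hypothetical):

```javascript
// Hypothetical partitioned pre-aggregation. `partitionGranularity`
// makes Cube build one table per month; `incremental: true` with an
// `updateWindow` restricts refreshes to recent partitions only, so a
// refresh rescans the last 7 days of data instead of every date.
cube(`Orders`, {
  sql: `SELECT * FROM orders`,

  preAggregations: {
    ordersByDay: {
      measures: [CUBE.count],
      timeDimension: CUBE.createdAt,
      granularity: `day`,
      partitionGranularity: `month`,
      refreshKey: {
        every: `1 hour`,
        incremental: true,
        updateWindow: `7 day`,
      },
    },
  },

  measures: {
    count: { type: `count` },
  },

  dimensions: {
    createdAt: { sql: `created_at`, type: `time` },
  },
});
```

This doesn't give on-demand rebuild of a single named partition, but it does bound how much Athena data a routine refresh scans.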
The black-box nature of Cube (and, I suppose, even more so Cube Cloud) makes me hesitant about the access to Athena. So another feature request, more likely for Cube Cloud than Cubejs itself, I suppose, would be a query/rebuild plan.
For example, if I were to, say, rename a pre-aggregation per your suggestion: what queries would it run? What data would I be able to reuse? What is the estimated time, based on the last time it ran these things? That sort of information would be invaluable.
If I could see that a change I'm making is going to cause a monumental number of queries, I might think twice about doing it.
Hey @benswinburne! This is indeed a big problem. Most of the features and tools you're referring to are already in Cube Cloud. We're working on some of the others right now, like single-partition invalidation and warmup invalidation progress.