Multiple issues with pre-aggregations

Hi there,

We are facing multiple issues trying to set up pre-aggregations. Our data store is BigQuery. We have tried both external pre-aggregations on Postgres and Cube Store, to no avail. For now, we are just trying to use a simple “originalSql” pre-aggregation (i.e., copying the whole table).
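For context, the setup looks roughly like the sketch below (cube and table names are placeholders, not our real model):

```javascript
// Hypothetical Cube.js data model illustrating the setup described above.
cube(`Events`, {
  sql: `SELECT * FROM my_dataset.events`,

  preAggregations: {
    // `originalSql` materializes the result of the cube's `sql`
    // as-is into the pre-aggregation store (Postgres or Cube Store
    // in our tests), rather than aggregating it.
    main: {
      type: `originalSql`,
    },
  },
});
```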

Here is a sample of the issues we hit:

  • when a cube dimension is defined with a custom SQL statement (in this case, BigQuery SQL), Cube seems to reuse the same syntax in the pre-aggregation store, which fails because of incompatible SQL dialects.
  • if a BigQuery table contains repeated fields, the pre-aggregation fails with various errors.
  • various syntax errors related to extra brackets or parentheses, which are a little hard to debug because of terse error messages.
  • “Invalid identifier xxx for schema” (only with Cube Store)
  • “Invalid function xxx” (only with Cube Store)
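To illustrate the first point, here is a hypothetical example (column name made up) of the kind of dimension that triggers the dialect mismatch:

```javascript
cube(`Events`, {
  sql: `SELECT * FROM my_dataset.events`,

  dimensions: {
    // SAFE_CAST and FLOAT64 are BigQuery-specific; when Cube replays
    // this expression against the external pre-aggregation store
    // (e.g. Postgres), the query fails with a syntax/function error.
    amount: {
      sql: `SAFE_CAST(amount AS FLOAT64)`,
      type: `number`,
    },
  },
});
```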

We’d love to use pre-aggregations, and we’d appreciate some help debugging these errors.


If the data is too large, originalSql will fail. Maybe try rollup or another pre-aggregation type. originalSql fetches all the data, and Cube.js’s current query orchestrator doesn’t handle large datasets well with originalSql (whether the target is a SQL database or Cube Store).

Hi @Antoine_Boulanger,

Thank you for posting it. Have you tried using rollup pre-aggregations?
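For reference, a rollup pre-aggregation looks something like this (measure, dimension, and table names are hypothetical):

```javascript
cube(`Events`, {
  sql: `SELECT * FROM my_dataset.events`,

  measures: {
    count: { type: `count` },
  },

  dimensions: {
    status: { sql: `status`, type: `string` },
    createdAt: { sql: `created_at`, type: `time` },
  },

  preAggregations: {
    // Unlike originalSql, a rollup materializes only the listed
    // measures and dimensions at the given granularity, so it stays
    // small and avoids replaying source-dialect SQL verbatim.
    statusRollup: {
      type: `rollup`,
      measures: [CUBE.count],
      dimensions: [CUBE.status],
      timeDimension: CUBE.createdAt,
      granularity: `day`,
    },
  },
});
```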

The table we are using to test is ~100MB - would that be considered large? We can try rollup as a comparison to see if it unblocks things, but the reason we wanted originalSql is because our measures are not additive.

Are those measures distinct counts? You can consider approximation through the countDistinctApprox measure type to make them additive.
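For example, a distinct-count measure could be declared like this (column name hypothetical):

```javascript
cube(`Events`, {
  sql: `SELECT * FROM my_dataset.events`,

  measures: {
    // countDistinctApprox delegates to the warehouse's approximate
    // distinct-count function (APPROX_COUNT_DISTINCT on BigQuery),
    // which makes the measure usable inside rollup pre-aggregations.
    uniqueUsers: {
      sql: `user_id`,
      type: `countDistinctApprox`,
    },
  },
});
```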

originalSql has various known limitations when used with external pre-aggregations, mostly due to SQL dialect incompatibilities.

They are quantiles. Are there any plans to improve originalSql? In our case, I think we mainly need to cache the whole BQ tables into another relational store with lower read latency.