@Bruce_SW: Does anyone have any intuition about how our Cube deployment would behave if we shut down our Redis server and deleted all of its data? We’re running Cube on GCP Cloud Run with a GCP Memorystore for Redis instance as our cache driver, and we need to scale the Redis instance, which will involve exactly that.
I’m hoping Cube can handle this reasonably gracefully: failing while the Redis server is down, then continuing afterwards with an empty query cache.
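For reference, our setup looks roughly like this (a minimal Terraform sketch, not our actual config: the resource names, region, and sizes are made up, but CUBEJS_CACHE_AND_QUEUE_DRIVER and CUBEJS_REDIS_URL are the Cube settings in play):
```hcl
# Hypothetical Memorystore instance backing Cube's cache and queue.
resource "google_redis_instance" "cube_cache" {
  name           = "cube-cache"
  tier           = "BASIC"
  memory_size_gb = 1
  region         = "us-central1"
}

# Cube on Cloud Run, pointed at the Redis instance above.
resource "google_cloud_run_service" "cube" {
  name     = "cube"
  location = "us-central1"

  template {
    spec {
      containers {
        image = "cubejs/cube:latest"

        env {
          name  = "CUBEJS_CACHE_AND_QUEUE_DRIVER"
          value = "redis"
        }
        env {
          name  = "CUBEJS_REDIS_URL"
          value = "redis://${google_redis_instance.cube_cache.host}:${google_redis_instance.cube_cache.port}"
        }
      }
    }
  }
}
```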
@Jon_Sherrard: I’ve run this exact same setup, deployed via Terraform, and scaled a Redis instance, and I barely noticed a thing, though our GCP queries weren’t too heavy.
I’d expect some lag in your application while all the GCP queries re-run, and a bit of extra GCP cost if you have a tonne of large queries too.
I’d also add that we spun up the new Redis instance, swapped out the Redis environment variable, and then did another deployment to bring down the old Redis server.
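Something like this, with hypothetical names (a sketch of the idea, not our real Terraform):
```hcl
# Step 1: spin up the replacement instance alongside the old one.
resource "google_redis_instance" "cube_cache_v2" {
  name           = "cube-cache-v2"
  tier           = "STANDARD_HA"  # or "BASIC"
  memory_size_gb = 2
  region         = "us-central1"
}

# Step 2: repoint Cube at the new instance and redeploy, e.g.
#   CUBEJS_REDIS_URL = "redis://${google_redis_instance.cube_cache_v2.host}:${google_redis_instance.cube_cache_v2.port}"

# Step 3: once the new revision is serving traffic, remove the old
# google_redis_instance resource and apply again to tear it down.
```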
@Bruce_SW: Thanks, Jon, for chiming in. I appreciate it.
I ended up applying this change via Terraform to our production instance, and the upgrade took about five minutes. I was the only one who saw any impact, since I intentionally loaded the page during the upgrade: it lagged for a bit and eventually failed.
I like your strategy of spinning up the new Redis instance and swapping out the environment variable. We’ll do something similar next time if we have more active users.
Might be worth upgrading from a Basic Tier to a Standard Tier instance, as Google advertises “almost no downtime during the scaling process”.