February 20, 2019 Marie H.

Migrating MySQL to GCP Cloud SQL

Photo by Paul Rigel on Unsplash

When Privia Health moved its infrastructure off Rackspace and onto GCP, one of the bigger pieces was migrating the MySQL databases that backed the Rails applications. We were running MySQL 5.7 on Rackspace VMs with manual backup scripts and a homegrown HA setup. Cloud SQL was the obvious target. Here's what the migration actually looked like.

Why Cloud SQL Over Self-Managed MySQL on GCE

The honest answer is operational overhead. Running MySQL on a GCE VM means you own the OS, the MySQL version upgrades, the backup schedule, the HA configuration (typically MySQL Group Replication or MHA), and the monitoring. Cloud SQL handles all of that:

  • Automated backups with point-in-time recovery
  • Automatic failover to a standby replica (High Availability configuration)
  • Automated minor version updates (with a configurable maintenance window)
  • Integrated IAM-based access controls
  • No OS-level patching

The tradeoffs are real: you lose superuser access, some MySQL features are restricted (no LOAD DATA INFILE from local paths, limited GRANT syntax), and you're paying a Cloud SQL premium over raw GCE costs. For our use case, the operational savings were worth it.

Provisioning with Terraform

We managed the Cloud SQL instances with Terraform from the start rather than clicking through the console, because we had multiple environments (staging, production) and wanted reproducible configs.

resource "google_sql_database_instance" "main" {
  name             = "privia-mysql-prod"
  database_version = "MYSQL_5_7"
  region           = "us-east4"

  settings {
    tier              = "db-n1-standard-2"
    availability_type = "REGIONAL"  # enables HA

    backup_configuration {
      enabled            = true
      binary_log_enabled = true
      start_time         = "03:00"
    }

    ip_configuration {
      ipv4_enabled    = false
      private_network = google_compute_network.vpc.id
    }

    maintenance_window {
      day          = 7  # Sunday
      hour         = 4
      update_track = "stable"
    }
  }

  deletion_protection = true
}

resource "google_sql_database" "app_db" {
  name      = "privia_production"
  instance  = google_sql_database_instance.main.name
  charset   = "utf8mb4"
  collation = "utf8mb4_unicode_ci"
}

resource "google_sql_user" "app_user" {
  name     = "privia_app"
  instance = google_sql_database_instance.main.name
  password = var.db_password
  host     = "%"
}

Setting ipv4_enabled = false and using private_network means the instance has no public IP — connections go through the VPC or through the Cloud SQL Auth Proxy. That was a hard requirement from our security review.
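After `terraform apply`, it's worth confirming that the instance really came up without a public address. One way to check is to pull just the IP types with a gcloud format projection (the projection syntax here is an assumption; `--format=json` works if your gcloud version disagrees):

```shell
# List the IP address types Cloud SQL assigned; with ipv4_enabled = false
# and a private_network set, only PRIVATE should appear.
gcloud sql instances describe privia-mysql-prod \
  --format="value(ipAddresses[].type)"
```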

Exporting from Rackspace MySQL

The export command we used:

mysqldump \
  --single-transaction \
  --routines \
  --triggers \
  --set-gtid-purged=OFF \
  --databases privia_production \
  -h 10.x.x.x \
  -u root \
  -p \
  > privia_production_$(date +%Y%m%d).sql

--single-transaction is important for InnoDB tables — it starts a transaction before the dump so you get a consistent snapshot without locking tables. --routines and --triggers export stored procedures and triggers. --set-gtid-purged=OFF is required when the source uses GTID replication and you're not doing a replication-based cutover; without it the dump includes SET @@GLOBAL.gtid_purged statements that Cloud SQL rejects.
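One more thing worth doing before uploading the dump: routines and triggers come across with DEFINER clauses naming source-side users, and the import fails when those users don't exist on the Cloud SQL instance. A sed pass over the dump file is one way to deal with it (shown as a sketch; the exact rewrite you want may differ):

```shell
# Rewrite DEFINER=`user`@`host` clauses so routines and triggers import
# under the invoking user instead of a nonexistent source-side account.
sed -i.bak -E 's/DEFINER=`[^`]+`@`[^`]+`/DEFINER=CURRENT_USER/g' \
  privia_production_20190218.sql
```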

Upload the dump to GCS:

gsutil cp privia_production_20190218.sql gs://privia-db-migrations/
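The import runs as the instance's own service account, which needs read access to the bucket. Something along these lines grants it (the `serviceAccountEmailAddress` field name is from the Cloud SQL API; double-check against your gcloud version):

```shell
# Look up the instance's service account, then grant it read on the bucket.
SA=$(gcloud sql instances describe privia-mysql-prod \
  --format="value(serviceAccountEmailAddress)")
gsutil iam ch "serviceAccount:${SA}:objectViewer" gs://privia-db-migrations
```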

Importing to Cloud SQL

gcloud sql import sql privia-mysql-prod \
  gs://privia-db-migrations/privia_production_20190218.sql \
  --database=privia_production \
  --project=privia-health-prod

Cloud SQL runs the import asynchronously. Check status:

gcloud sql operations list --instance=privia-mysql-prod --limit=5

The import took about 40 minutes for our largest database (~15GB). Cloud SQL will report the operation as DONE when finished.
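If you'd rather block until the import finishes instead of re-running the list command, gcloud can wait on a specific operation (take the operation ID from the list output):

```shell
# Block until the named Cloud SQL operation completes.
gcloud sql operations wait OPERATION_ID --timeout=unlimited
```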

The Charset Gotcha

This bit us. Our Rackspace MySQL instance had been configured with utf8 as the default character set, which in MySQL is actually a 3-byte variant that can't store 4-byte Unicode characters (emoji, some CJK characters). The correct charset for full Unicode support is utf8mb4.

Check your source before migrating:

SHOW VARIABLES LIKE 'character_set%';
SHOW VARIABLES LIKE 'collation%';

If your data lives in columns explicitly defined as utf8mb4, mysqldump preserves the 4-byte characters correctly even when the instance default is utf8; the instance default only determines the charset of new tables and columns that don't declare one.

We explicitly set charset = "utf8mb4" and collation = "utf8mb4_unicode_ci" in the Terraform google_sql_database resource, and added charset=utf8mb4 to the Rails database.yml connection config. Verify after import:

SELECT table_name, table_collation
FROM information_schema.tables
WHERE table_schema = 'privia_production';
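If that query turns up tables that came across with the 3-byte utf8 charset, they can be converted in place. A sketch, with a hypothetical appointments table (run per affected table, during low traffic, since CONVERT TO rebuilds the table):

```shell
# Convert one table to utf8mb4 in place. The table name is hypothetical;
# this rewrites the whole table, so schedule it accordingly.
mysql -h 127.0.0.1 -u privia_app -p privia_production -e \
  "ALTER TABLE appointments CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
```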

Rails Configuration

The Cloud SQL Auth Proxy handles authentication and encryption, so the Rails app connects to localhost:3306 (where the proxy is listening) rather than directly to Cloud SQL.

# config/database.yml
production:
  adapter: mysql2
  encoding: utf8mb4
  host: 127.0.0.1
  port: 3306
  database: privia_production
  username: privia_app
  password: <%= ENV['DB_PASSWORD'] %>
  pool: 10
  connect_timeout: 10

For local development, run the proxy:

cloud_sql_proxy -instances=privia-health-prod:us-east4:privia-mysql-prod=tcp:3306

The proxy uses your local Application Default Credentials (or a service account JSON key) to authenticate to Cloud SQL. No firewall rules, no public IPs, no SSH tunnels.
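On servers where gcloud-based Application Default Credentials aren't set up, the same proxy can be pointed at a service account key explicitly (flag name from the v1 proxy; the key path is a placeholder):

```shell
# Run the proxy with an explicit service account key instead of ADC.
cloud_sql_proxy \
  -instances=privia-health-prod:us-east4:privia-mysql-prod=tcp:3306 \
  -credential_file=/etc/secrets/cloudsql-proxy.json
```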

Replication Lag During Cutover

We did a live cutover with a brief maintenance window. The process:

  1. Put the Rails apps into maintenance mode
  2. Run a final incremental mysqldump of any tables with high write volume
  3. Import the incremental dump to Cloud SQL
  4. Update DNS / application config to point at Cloud SQL via the proxy
  5. Bring apps back up
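The "incremental" dump in step 2 was really a targeted re-dump of the hot tables, filtered by timestamp. Roughly like this (table names and the updated_at column are hypothetical; --replace emits REPLACE statements so re-imported rows overwrite instead of colliding):

```shell
# Re-dump only rows touched since the initial export; assumes the hot
# tables carry an indexed updated_at column.
mysqldump \
  --single-transaction \
  --no-create-info \
  --replace \
  --where="updated_at >= '2019-02-18 00:00:00'" \
  -h 10.x.x.x -u root -p \
  privia_production appointments messages \
  > privia_incremental_$(date +%Y%m%d).sql
```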

During the pre-cutover window, we monitored replication lag on the Rackspace side (we had a read replica there) with:

SHOW SLAVE STATUS\G

The Seconds_Behind_Master field tells you if your replica is caught up. We waited for that to reach 0 before starting the cutover steps.
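Watching that field by hand gets old during a long pre-cutover window; a small polling loop does the same thing (host and credentials are placeholders):

```shell
# Poll the replica until Seconds_Behind_Master reaches 0.
while true; do
  lag=$(mysql -h replica.internal -u root -p"$DB_PASSWORD" \
        -e 'SHOW SLAVE STATUS\G' | awk '/Seconds_Behind_Master/ {print $2}')
  [ "$lag" = "0" ] && break
  echo "replica lag: ${lag}s"
  sleep 5
done
```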

Total downtime was about 12 minutes. Not zero, but acceptable given the scope of the change.


Updated March 2026: Cloud SQL has matured significantly since this migration. AlloyDB is now the recommended option for PostgreSQL-heavy workloads that need higher IOPS and analytical query performance. For MySQL specifically, Cloud SQL still makes sense. One thing worth noting: the Cloud SQL Auth Proxy has been superseded by the Cloud SQL Connector libraries (available for Python, Java, Go), which handle connection pooling and IAM authentication more cleanly in application code.