This template provides a starting point for AWS to GCP migrations using Terraform. It includes mappings for several common services:
- EC2 instances → Google Compute Engine VMs
- VPC/Subnets → GCP VPC Networks and Subnets
- Security Groups → Firewall Rules
- S3 Buckets → Cloud Storage Buckets
- IAM Roles → Service Accounts and IAM Permissions
- RDS → Cloud SQL
- ELB → GCP Load Balancer
- CloudWatch → Cloud Monitoring
For an actual migration, you'd need to:
- Identify all AWS resources in your environment
- Map each to the appropriate GCP equivalent
- Use Terraform data sources to extract AWS configuration
- Create matching GCP resources with appropriate configurations
- Implement data migration strategies for databases, storage, etc.
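Once resources have been created or copied on the GCP side, they still need to be brought under Terraform management. A hedged sketch of one way to do that, using the `import` block available in Terraform 1.5+ (the project, zone, and instance name below are placeholders for your environment):

```hcl
# Hypothetical example: adopt an already-migrated Compute Engine VM into state.
# Terraform will generate the resource's state on the next plan/apply.
import {
  to = google_compute_instance.target_instance
  id = "projects/my-gcp-project/zones/us-central1-a/instances/migration-project-vm-dev"
}
```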
==============================================================
# AWS to GCP Migration Terraform Template
# This template provides examples of how to map common AWS resources to their GCP equivalents
# Configure the AWS provider (source resources)
provider "aws" {
  region = "us-west-2"
  # Authentication typically via environment variables or shared credentials file
}

# Configure the Google Cloud provider (target resources)
provider "google" {
  project = "my-gcp-project"
  region  = "us-central1"
  # Authentication typically via GOOGLE_APPLICATION_CREDENTIALS environment variable
}

provider "google-beta" {
  project = "my-gcp-project"
  region  = "us-central1"
}
# Variables
variable "project_name" {
  description = "Name of the project being migrated"
  type        = string
  default     = "migration-project"
}

variable "environment" {
  description = "Deployment environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}
# Example: EC2 to Compute Engine migration
# AWS: EC2 Instance
data "aws_instance" "source_instance" {
  # You might specify an instance ID or use filters
  instance_id = "i-0123456789abcdef0"
}

# GCP: Compute Engine Instance
resource "google_compute_instance" "target_instance" {
  name         = "${var.project_name}-vm-${var.environment}"
  machine_type = "n1-standard-2" # Map AWS instance type to GCP machine type
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12" # Choose a currently supported OS image (Debian 10 is EOL)
      size  = 20 # Size in GB
    }
  }

  network_interface {
    network    = google_compute_network.vpc_network.self_link
    subnetwork = google_compute_subnetwork.subnet.self_link

    access_config {
      # Include this block to give the instance a public IP
    }
  }

  metadata = {
    ssh-keys = "username:${file("~/.ssh/id_rsa.pub")}"
  }

  tags = ["web", "allow-http"]

  service_account {
    email  = google_service_account.instance_sa.email
    scopes = ["cloud-platform"]
  }
}
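
# A literal one-to-one instance-type mapping rarely exists. One approach is an
# illustrative lookup table; the pairings below are rough suggestions only,
# so verify vCPU and memory counts against your actual workloads.
locals {
  instance_type_map = {
    "t3.micro"  = "e2-micro"
    "t3.small"  = "e2-small"
    "m5.large"  = "e2-standard-2"
    "m5.xlarge" = "e2-standard-4"
    "r5.large"  = "n2-highmem-2"
  }
}
# Usage, falling back to a default when the source type is unmapped:
# machine_type = lookup(local.instance_type_map, data.aws_instance.source_instance.instance_type, "e2-standard-2")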
# Example: VPC migration
# AWS: VPC
data "aws_vpc" "source_vpc" {
  id = "vpc-0123456789abcdef0"
}

# GCP: VPC Network
resource "google_compute_network" "vpc_network" {
  name                    = "${var.project_name}-network-${var.environment}"
  auto_create_subnetworks = false
}

# AWS: Subnet
data "aws_subnet" "source_subnet" {
  id = "subnet-0123456789abcdef0"
}

# GCP: Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = "${var.project_name}-subnet-${var.environment}"
  ip_cidr_range = "10.0.1.0/24" # Match or adapt CIDR from AWS
  region        = "us-central1"
  network       = google_compute_network.vpc_network.id
}
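
# Rather than hardcoding values, the AWS data sources above can feed the GCP
# resources directly. This hypothetical variant (left commented out) mirrors
# the source subnet's CIDR, which is valid only if the range does not conflict
# with other subnets in the VPC network:
# resource "google_compute_subnetwork" "subnet_mirrored" {
#   name          = "${var.project_name}-subnet-mirrored-${var.environment}"
#   ip_cidr_range = data.aws_subnet.source_subnet.cidr_block
#   region        = "us-central1"
#   network       = google_compute_network.vpc_network.id
# }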
# Example: Security Group to Firewall Rules migration
# AWS: Security Group
data "aws_security_group" "web_sg" {
  id = "sg-0123456789abcdef0"
}

# GCP: Firewall Rules
resource "google_compute_firewall" "allow_http" {
  name    = "${var.project_name}-allow-http"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["80", "443"]
  }

  source_ranges = ["0.0.0.0/0"]
  target_tags   = ["allow-http"]
}

resource "google_compute_firewall" "allow_ssh" {
  name    = "${var.project_name}-allow-ssh"
  network = google_compute_network.vpc_network.name

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }

  source_ranges = ["0.0.0.0/0"] # Best practice: restrict to known IPs
}
# Example: S3 to Cloud Storage migration
# AWS: S3 Bucket
data "aws_s3_bucket" "source_bucket" {
  bucket = "my-aws-source-bucket"
}

# GCP: Cloud Storage Bucket
resource "google_storage_bucket" "target_bucket" {
  name     = "${var.project_name}-bucket-${var.environment}"
  location = "US" # Choose appropriate location

  versioning {
    enabled = true # If S3 bucket has versioning enabled
  }

  uniform_bucket_level_access = true
}
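
# Creating the bucket does not copy the objects. One option is a Storage
# Transfer Service job, sketched below; the AWS credential variables are
# hypothetical and would need to be declared (and supplied securely) elsewhere.
resource "google_storage_transfer_job" "s3_to_gcs" {
  description = "Copy objects from the source S3 bucket into the target bucket"
  project     = "my-gcp-project"

  transfer_spec {
    aws_s3_data_source {
      bucket_name = data.aws_s3_bucket.source_bucket.bucket

      aws_access_key {
        access_key_id     = var.aws_access_key_id     # hypothetical variable
        secret_access_key = var.aws_secret_access_key # hypothetical variable
      }
    }

    gcs_data_sink {
      bucket_name = google_storage_bucket.target_bucket.name
    }
  }

  schedule {
    schedule_start_date {
      year  = 2024
      month = 1
      day   = 1
    }
  }
}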
# Example: IAM roles and service accounts
# GCP: Service Account for Compute Instances
resource "google_service_account" "instance_sa" {
  account_id   = "${var.project_name}-sa-${var.environment}"
  display_name = "Service Account for ${var.project_name} instances"
}

resource "google_project_iam_member" "instance_sa_role" {
  project = "my-gcp-project"
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:${google_service_account.instance_sa.email}"
}
# Example: RDS to Cloud SQL migration
# AWS: RDS Instance
data "aws_db_instance" "source_db" {
  db_instance_identifier = "my-aws-db"
}

# GCP: Cloud SQL Instance
resource "google_sql_database_instance" "target_db" {
  name             = "${var.project_name}-db-${var.environment}"
  database_version = "POSTGRES_13" # Or "MYSQL_8_0" - match RDS engine
  region           = "us-central1"

  settings {
    tier = "db-f1-micro" # Adjust based on RDS instance type

    backup_configuration {
      enabled = true
    }

    ip_configuration {
      ipv4_enabled = true

      authorized_networks {
        name  = "all"
        value = "0.0.0.0/0" # Production: restrict to app IP ranges
      }
    }
  }

  deletion_protection = true # Set to false if you need to delete via Terraform
}

resource "google_sql_database" "database" {
  name     = "application-db"
  instance = google_sql_database_instance.target_db.name
}
# Example: ELB to GCP Load Balancer
# GCP: Global HTTP Load Balancer
resource "google_compute_global_address" "lb_ip" {
  name = "${var.project_name}-lb-ip-${var.environment}"
}

resource "google_compute_backend_service" "backend_service" {
  name        = "${var.project_name}-backend-${var.environment}"
  protocol    = "HTTP"
  timeout_sec = 10
  enable_cdn  = false

  backend {
    group = google_compute_instance_group.webservers.self_link
  }

  health_checks = [google_compute_health_check.http_health_check.self_link]
}

resource "google_compute_instance_group" "webservers" {
  name        = "${var.project_name}-instance-group-${var.environment}"
  description = "Web servers instance group"
  zone        = "us-central1-a"

  instances = [
    google_compute_instance.target_instance.self_link
  ]

  named_port {
    name = "http"
    port = 80
  }
}

resource "google_compute_url_map" "url_map" {
  name            = "${var.project_name}-url-map-${var.environment}"
  default_service = google_compute_backend_service.backend_service.self_link
}

resource "google_compute_target_http_proxy" "http_proxy" {
  name    = "${var.project_name}-http-proxy-${var.environment}"
  url_map = google_compute_url_map.url_map.self_link
}

resource "google_compute_global_forwarding_rule" "forwarding_rule" {
  name       = "${var.project_name}-http-rule-${var.environment}"
  target     = google_compute_target_http_proxy.http_proxy.self_link
  port_range = "80"
  ip_address = google_compute_global_address.lb_ip.address
}

resource "google_compute_health_check" "http_health_check" {
  name               = "${var.project_name}-http-health-check-${var.environment}"
  check_interval_sec = 5
  timeout_sec        = 5

  http_health_check {
    port         = 80
    request_path = "/"
  }
}
# Example: CloudWatch to Cloud Monitoring
resource "google_monitoring_alert_policy" "cpu_alert" {
  display_name = "${var.project_name}-cpu-alert-${var.environment}"
  combiner     = "OR"

  conditions {
    display_name = "High CPU utilization"

    condition_threshold {
      filter          = "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\""
      duration        = "60s"
      comparison      = "COMPARISON_GT"
      threshold_value = 0.8

      trigger {
        count = 1
      }

      aggregations {
        alignment_period   = "60s"
        per_series_aligner = "ALIGN_MEAN"
      }
    }
  }

  notification_channels = [
    google_monitoring_notification_channel.email.name
  ]
}

resource "google_monitoring_notification_channel" "email" {
  display_name = "Email Notification Channel"
  type         = "email"

  labels = {
    email_address = "alerts@example.com"
  }
}
# Output important values
output "gcp_instance_ip" {
  value = google_compute_instance.target_instance.network_interface[0].access_config[0].nat_ip
}

output "gcp_load_balancer_ip" {
  value = google_compute_global_address.lb_ip.address
}

output "gcp_storage_bucket" {
  value = google_storage_bucket.target_bucket.url
}

output "gcp_database_connection" {
  value     = google_sql_database_instance.target_db.connection_name
  sensitive = true
}
==============================================================
Based on this Terraform-driven AWS to GCP migration template, here are known issues, their solutions, and typical timeframes:
Known Issues & Solutions for AWS to GCP Migration
1. Resource Mapping Differences
- Issue: AWS and GCP services don't have perfect 1:1 equivalents
- Solution: Create service mapping documentation specific to your workloads; sometimes requires architectural changes
- Timeframe: 1-2 weeks for analysis, ongoing throughout migration
2. IAM & Authentication Models
- Issue: Completely different permission models between AWS (roles/policies) and GCP (service accounts)
- Solution: Recreate security posture using GCP's native IAM; use least privilege approach
- Timeframe: 2-4 weeks depending on complexity
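As a hedged illustration of the IAM point above: an AWS managed policy such as read-only S3 access has no direct GCP counterpart, but a custom role can approximate it. The role ID and permission list below are an example translation, not a complete one:

```hcl
# Hypothetical custom role approximating a read-only S3 bucket policy.
resource "google_project_iam_custom_role" "migrated_object_reader" {
  role_id = "migratedObjectReader"
  title   = "Migrated S3 read-only policy"
  permissions = [
    "storage.objects.get",
    "storage.objects.list",
  ]
}
```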