# Google Cloud VMware Engine (GCVE)
## Scope
Google Cloud VMware Engine (GCVE): node types, private cloud sizing, Private Service Access networking, HCX migration, committed use discounts, GCP service integration, NSX-T design, and DNS configuration.
Google-operated VMware private cloud running on dedicated bare-metal nodes in Google Cloud. Provides full VMware stack (vSphere, vSAN, NSX-T, vCenter, HCX) with native GCP service integration.
## Checklist
- [Critical] Node type selection: standard-72 (72 vCPU, 768GB RAM), standard-36 (36 vCPU, 256GB RAM), or memory-optimized (72 vCPU, 1.5TB RAM)?
- [Critical] Private cloud sizing: minimum 3 nodes per cluster, cluster expansion in single-node increments?
- [Critical] Networking: VMware Engine network with Private Service Access to GCP VPC, Cloud Interconnect or VPN for on-prem connectivity?
- [Critical] Region selection: verify GCVE availability in target regions (more limited than general GCP regions)?
- [Critical] Pricing model: on-demand (per-node hourly) or 1yr/3yr committed use discounts (CUDs)?
- [Critical] Migration strategy: HCX for bulk migration and vMotion, or third-party migration tools?
- [Recommended] GCP service integration: which services (BigQuery, Cloud SQL, GKE, Cloud Functions, Pub/Sub) will VMware VMs consume?
- [Recommended] Storage expansion: vSAN only, or add NetApp Cloud Volumes for NFS datastores?
- [Recommended] NSX-T network design: segments, distributed firewall rules, T0/T1 gateway topology for workload isolation?
- [Recommended] Identity integration: Google Cloud IAM for GCVE management, vCenter SSO with LDAP/AD?
- [Recommended] Monitoring strategy: Google Cloud Operations Suite (Cloud Monitoring, Cloud Logging) integration, or VMware Aria?
- [Recommended] DNS configuration: Cloud DNS forwarding to NSX-T DNS, or on-prem DNS integration?
- [Optional] External IP addresses for VMware VMs: allocate public IPs via GCVE service for direct internet access?
- [Optional] Subnet allocation strategy for VMware Engine network: management, vMotion, vSAN, workload subnet planning?
- [Optional] Multi-private-cloud topology: separate private clouds per environment or shared clusters with resource pools?
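The sizing questions above are mostly arithmetic, so they can be sanity-checked in code. A minimal sketch, assuming the node specs listed in the checklist, an illustrative 4:1 vCPU oversubscription ratio, and ~75% usable RAM after vSAN and management overhead — the ratios are planning assumptions, not Google-published figures:

```python
import math

# Node specs as listed in the checklist above; verify current values in GCVE docs.
NODE_TYPES = {
    "standard-72":      {"vcpu": 72, "ram_gb": 768},
    "standard-36":      {"vcpu": 36, "ram_gb": 256},
    "memory-optimized": {"vcpu": 72, "ram_gb": 1536},
}

def nodes_needed(workload_vcpu, workload_ram_gb, node_type,
                 vcpu_oversub=4.0, ha_spare=1, usable_ratio=0.75):
    """Rough cluster size: workload demand divided by per-node capacity,
    plus an N+1 HA spare, floored at the 3-node GCVE minimum.
    vcpu_oversub and usable_ratio are illustrative planning assumptions."""
    spec = NODE_TYPES[node_type]
    by_cpu = workload_vcpu / (spec["vcpu"] * vcpu_oversub)
    by_ram = workload_ram_gb / (spec["ram_gb"] * usable_ratio)
    base = math.ceil(max(by_cpu, by_ram))
    return max(base + ha_spare, 3)

# Example: a 600 vCPU / 4 TB RAM estate on standard-72 nodes
print(nodes_needed(600, 4096, "standard-72"))  # RAM-bound: 9 nodes
```

Note that RAM, not vCPU, is the binding constraint in this example — a common outcome that can make memory-optimized nodes cheaper overall despite a higher per-node price.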
## Why This Matters
GCVE is the natural path to GCP for organizations with VMware estates that need proximity to GCP data and analytics services — particularly BigQuery. The 3-node minimum sets a cost floor comparable to Azure VMware Solution (AVS). Private Service Access networking differs from standard VPC peering, and misunderstanding it leads to routing and connectivity failures. GCVE is available in fewer regions than general GCP services, so region selection must be validated early. Skipping committed use discounts for predictable workloads forfeits roughly 30-50% of potential compute savings.
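To make the discount math concrete, a sketch comparing three-year spend under on-demand versus committed-use pricing. The discount rates below are illustrative values at the edges of the 30-50% band; actual CUD rates vary by node type and region, so check the pricing page:

```python
# Illustrative only: discount rates are assumptions, not published GCVE prices.
def three_year_cost(monthly_on_demand, discount_1yr=0.30, discount_3yr=0.50):
    """Total 36-month spend for on-demand, 1-year CUD, and 3-year CUD."""
    od   = monthly_on_demand * 36
    cud1 = monthly_on_demand * (1 - discount_1yr) * 36
    cud3 = monthly_on_demand * (1 - discount_3yr) * 36
    return od, cud1, cud3

od, cud1, cud3 = three_year_cost(10_000)
print(f"on-demand ${od:,.0f}  1yr CUD ${cud1:,.0f}  3yr CUD ${cud3:,.0f}")
# → on-demand $360,000  1yr CUD $252,000  3yr CUD $180,000
```

The trade-off the ADR table below flags: the 3-year figure halves spend but locks in capacity for workloads that must still exist in year three.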
## Common Decisions (ADR Triggers)
| Decision | When to Create ADR |
|---|---|
| Node type selection | Always — standard-72 vs. standard-36 vs. memory-optimized affects cost and workload density |
| On-demand vs. committed use discounts | Always — CUDs provide significant savings but lock in capacity |
| Private Service Access topology | Always — determines how VMware VMs reach GCP services |
| On-prem connectivity method | When hybrid — Cloud Interconnect (dedicated or partner) vs. HA VPN |
| vSAN vs. external storage | When storage exceeds vSAN capacity — NetApp Cloud Volumes adds NFS without more nodes |
| GCP service integration scope | When VMs consume GCP services — affects networking, IAM, and billing |
| Monitoring approach | When observability is scoped — Cloud Operations Suite vs. VMware Aria vs. both |
| DNS architecture | When VMs need name resolution — Cloud DNS forwarding vs. on-prem DNS replication |
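For the DNS architecture decision, the Cloud DNS forwarding option amounts to a private forwarding zone whose target name servers are the NSX-T DNS forwarder addresses. A hedged sketch of the zone body in the Cloud DNS REST API v1 shape — the zone name, domain, forwarder IPs, and network URL are hypothetical examples, and field names should be verified against the current API reference:

```python
def forwarding_zone(name, dns_name, target_ips, vpc_network_url):
    """Build a private forwarding-zone body (Cloud DNS API v1 shape, assumed)."""
    return {
        "name": name,
        "dnsName": dns_name,  # must end with a trailing dot
        "visibility": "private",
        "privateVisibilityConfig": {
            "networks": [{"networkUrl": vpc_network_url}],
        },
        "forwardingConfig": {
            "targetNameServers": [
                {"ipv4Address": ip, "forwardingPath": "private"}
                for ip in target_ips
            ],
        },
    }

zone = forwarding_zone(
    "gcve-workloads",                    # hypothetical zone name
    "corp.internal.",                    # hypothetical private domain
    ["192.168.10.5", "192.168.10.6"],    # example NSX-T DNS forwarder IPs
    "projects/my-project/global/networks/shared-vpc",
)
print(zone["forwardingConfig"]["targetNameServers"][0])
```

`forwardingPath: private` keeps queries on the VPC path rather than egressing to the internet — the behavior usually wanted when the targets sit behind Private Service Access.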
## Reference Architectures
- Analytics Hybrid: VMware VMs running application tier + BigQuery for analytics, data flows from VMs to BigQuery via Private Service Access, Looker for visualization
- Oracle on GCVE: Oracle databases on VMware VMs (preserving licensing), GCP-native services for application and integration layers, Cloud Interconnect for on-prem connectivity
- GCP Hybrid Landing Zone: on-prem vSphere + GCVE connected via Cloud Interconnect, shared services in GCP VPC (Cloud DNS, Cloud NAT), phased migration with HCX
- DR to GCP: on-prem primary site, GCVE as DR target, HCX for replication, scaled-down private cloud expanded during failover
- Dev/Test on GCP: GCVE private cloud for VMware-dependent dev/test workloads, GKE for cloud-native development, shared VPC for networking
## Key Constraints
- Minimum 3 nodes per private cloud for production use (single-node private clouds, where offered, are time-limited evaluation deployments)
- Google manages the underlying infrastructure — no direct ESXi host access
- GCVE regional availability is more limited than general GCP services — verify before committing
- HCX is available but may require separate activation depending on the deployment
- Private Service Access is required for GCP VPC connectivity — standard VPC peering does not work
- vCenter access uses CloudAdmin role (not root); some operations require support requests
- Node types and pricing vary by region
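Subnet allocation for the VMware Engine network (checklist above) is CIDR arithmetic and is easy to sanity-check in code. A sketch using Python's `ipaddress` module; the /22 parent range and segment names are illustrative, and the assumption that the management range wants at least a /24 should be confirmed against current GCVE sizing guidance:

```python
import ipaddress

# Carve a reserved range into GCVE-adjacent subnets. The parent CIDR,
# segment names, and /24 management minimum are illustrative assumptions.
parent = ipaddress.ip_network("10.200.0.0/22")
mgmt, *rest = parent.subnets(new_prefix=24)   # four /24s
workload_a, workload_b, reserve = rest

plan = {
    "management (vCenter/NSX/HCX)": str(mgmt),
    "workload segment A": str(workload_a),
    "workload segment B": str(workload_b),
    "held in reserve": str(reserve),
}
for purpose, cidr in plan.items():
    print(f"{cidr:>16}  {purpose}")
```

The same module's `overlaps()` method is a cheap pre-flight check that the chosen range does not collide with on-prem or VPC CIDRs — overlap here is one of the routing failures the Private Service Access constraint above makes hard to unwind.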
## Reference Links
- Google Cloud VMware Engine documentation -- official GCVE deployment, networking, and private cloud management
- GCVE pricing -- node pricing, committed use discounts, and cost estimation
- GCVE network architecture -- Private Service Access, VPC peering, and connectivity design
## See Also
- providers/vmware/infrastructure.md -- VMware vSphere and VCF infrastructure
- providers/vmware/networking.md -- NSX-T networking design patterns
- providers/gcp/compute.md -- GCP compute for hybrid GCVE workloads
- providers/vmware/vmc-aws.md -- VMware Cloud on AWS as an alternative cloud VMware platform