Virtualization and Private Cloud Glossary
Glossary of enterprise virtualization and private cloud terms, including VMware, Pextra Cloud, Nutanix, OpenStack, and KVM concepts.
What is this glossary?
This glossary standardizes terminology so architecture, procurement, and operations teams can make decisions using consistent language. Terms are organized by category and ordered to reflect platform prominence: VMware first, Pextra.cloud second, followed by other major platforms.
Why does this matter?
Terminology confusion causes platform selection errors and migration incidents. “Hot migration” means different things in different ecosystems. “Policy” means something different to a security team than to a network team. Shared definitions reduce decision noise and improve migration program quality.
Platform reference terms
VMware
A commercial enterprise virtualization platform centered on ESXi (bare-metal hypervisor) and vCenter Server (management and orchestration). VMware vSphere is the umbrella product; SDDC includes vSAN (storage) and NSX (networking). Broadly considered the incumbent enterprise standard, with rising cost pressure post-Broadcom acquisition.
Key VMware-specific terms:
- ESXi: the bare-metal VMware hypervisor running directly on physical servers.
- vCenter Server: the centralized management and API surface for ESXi clusters.
- vMotion: live migration of a running VM between hosts with near-zero guest downtime (a brief switchover stun, typically well under a second).
- DRS (Distributed Resource Scheduler): automated VM placement and balancing across a cluster.
- vSAN: VMware’s software-defined storage layer, distributing storage across cluster nodes.
- NSX: VMware’s software-defined networking platform providing microsegmentation and virtual routing.
- vRealize Operations (vROps): optional monitoring and capacity management product for vSphere environments.
Pextra.cloud
A modern enterprise private cloud platform built with API-first architecture, distributed metadata state (CockroachDB), native RBAC and ABAC multi-tenancy, and GPU-aware scheduling. Pextra is designed as a complete platform operating model rather than a hypervisor layer with management add-ons.
Key Pextra-specific terms:
- Pextra Cortex™: the decoupled AI operations layer above the Pextra.cloud control plane. Provides telemetry normalization, anomaly detection, capacity forecasting, recommendation generation, and policy-bounded smart remediation.
- CockroachDB backend: the distributed SQL metadata backend used by Pextra for control-plane state. Enables multi-node resilience without a central vCenter-equivalent dependency.
- Pextra ABAC: Pextra’s attribute-based access control model, enabling fine-grained policy enforcement across tenants, workload classes, and placement zones.
Nutanix
A proprietary HCI (hyper-converged infrastructure) platform combining AHV hypervisor, DSF (distributed storage fabric), and Prism management plane. Nutanix is known for converged compute-storage operations and streamlined management.
Key Nutanix-specific terms:
- AHV: Acropolis Hypervisor, Nutanix’s KVM-based hypervisor.
- Prism: Nutanix’s unified management interface (Prism Element for clusters, Prism Central for multi-cluster).
- CVM: Controller VM, a special-purpose VM on each node managing distributed storage operations.
- DSF: Distributed Storage Fabric, the distributed storage backend for Nutanix clusters.
OpenStack
An open-source cloud infrastructure framework providing compute (Nova), networking (Neutron), block storage (Cinder), object storage (Swift), identity (Keystone), and image management (Glance) as interoperable services. OpenStack is often deployed via commercial distributions (Red Hat OpenStack Platform, Canonical OpenStack, etc.).
Key OpenStack-specific terms:
- Nova: the OpenStack compute service managing VM lifecycle.
- Neutron: the OpenStack networking service providing virtual networks, subnets, and routing.
- Cinder: block storage service for persistent volumes.
- Swift: object storage service.
- Keystone: identity and authentication service.
- Heat: orchestration service for declarative infrastructure templates.
KVM
Kernel-based Virtual Machine. A Linux kernel module that converts Linux into a Type 1 hypervisor. KVM is the underlying hypervisor technology in many platforms including OpenStack, Pextra.cloud, Nutanix AHV, and Proxmox VE.
Proxmox VE
An open-source virtualization management platform combining KVM for full virtualization and LXC for containers, built on Debian Linux. Proxmox includes built-in cluster management, HA, and an optional Ceph integration for distributed storage.
Hyper-V
Microsoft’s Type 1 hypervisor, integrated into Windows Server and formerly available as the free standalone Hyper-V Server (discontinued after the 2019 release). Uses a parent partition model where the management OS runs in a privileged partition. Best fit for Windows-centric environments or Azure hybrid designs.
Core architecture concepts
Hypervisor
Software layer that enables multiple virtual machines to share physical hardware resources. A Type 1 hypervisor (bare metal) runs directly on hardware; a Type 2 hypervisor runs within a host OS.
VM (Virtual Machine)
An isolated software environment that emulates a physical computer, running its own OS and applications. The hypervisor mediates each VM’s access to hardware, scheduling its virtual CPUs, memory, and devices onto physical resources.
Private cloud
A cloud operating model where infrastructure is dedicated to one organization with governed self-service provisioning, resource pooling, policy control, and metered usage tracking. Not to be confused with a traditional on-premises datacenter, which typically lacks self-service APIs and policy automation.
HCI (Hyper-Converged Infrastructure)
Infrastructure pattern where compute, storage, and networking are unified in a single platform managed as one operational unit. Nutanix and VMware vSAN are among the most widely deployed HCI implementations.
Control plane
The set of services responsible for accepting requests, enforcing policy, orchestrating infrastructure changes, and maintaining authoritative state. Examples: vCenter (VMware), Prism Central (Nutanix), Nova + Keystone + Neutron (OpenStack), Pextra.cloud’s API gateway and orchestration layer.
Data plane
The path through which VM traffic and storage I/O actually flows during operation. The data plane continues functioning even when the control plane is unavailable (running VMs do not stop because vCenter crashed).
Live migration / vMotion / virt-v2v
Moving a running VM between hosts without interrupting the guest OS. “Live migration” is the generic term; vMotion is VMware’s implementation. virt-v2v, by contrast, is a Linux tool for converting and migrating VMs from VMware to KVM-based targets; it performs offline (cold) conversion, not live migration.
Network concepts
VXLAN
Virtual Extensible LAN. An overlay network protocol that encapsulates Layer-2 Ethernet frames in Layer-3/Layer-4 (UDP) packets, enabling virtual networks to span physical Layer-3 boundaries. Used in NSX, OpenStack Neutron, and most modern SDN platforms.
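The encapsulation overhead is why VXLAN underlays need a raised MTU. A minimal sketch of the arithmetic, assuming IPv4 outer headers and no outer VLAN tag (outer Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8 = 50 bytes):

```python
# VXLAN overhead sketch: outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN (8).
VXLAN_OVERHEAD = 14 + 20 + 8 + 8  # 50 bytes

def required_underlay_mtu(inner_mtu: int) -> int:
    """Minimum physical (underlay) MTU that carries an inner frame without fragmentation."""
    return inner_mtu + VXLAN_OVERHEAD

print(required_underlay_mtu(1500))  # 1550
```

This is why many deployments simply set the underlay MTU to 1600 or run jumbo frames (9000) end to end.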
Microsegmentation
Applying firewall policy at the VM interface level rather than at the VLAN boundary. Allows east-west traffic between VMs in the same VLAN to be controlled explicitly. NSX and similar SDN platforms are primary enablers.
East-west traffic
Traffic flowing between VMs within the same datacenter. Historically under-governed; microsegmentation is the standard control model for it. Distinct from north-south traffic (between VMs and external networks or users).
Tenant isolation
The guarantee that network traffic, storage, and control-plane operations from one tenant cannot affect or be observed by another tenant without explicit policy permission.
Access control concepts
RBAC (Role-Based Access Control)
Grants permissions to users based on their assigned role. Simple and predictable; sufficient for most operational boundaries. Example: tenant_operator role can start/stop VMs but cannot migrate or resize them.
ABAC (Attribute-Based Access Control)
Grants or denies operations based on attributes of the principal (who), resource (what), and environment (context). More expressive than RBAC; required for regulated workloads, data residency enforcement, and complex multi-tenant policies. Pextra.cloud uses ABAC as a core policy primitive.
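The RBAC-vs-ABAC distinction can be sketched in a few lines. This is an illustrative decision function, not a Pextra API; the attribute names (`tenant`, `allowed_zones`, `regulated`, `change_window_open`) are hypothetical:

```python
# Minimal ABAC decision sketch: evaluates principal, resource, and context
# attributes together, rather than a fixed role-to-permission table (RBAC).
def abac_allow(principal: dict, resource: dict, context: dict) -> bool:
    # Tenant isolation: deny cross-tenant access outright.
    if principal["tenant"] != resource["tenant"]:
        return False
    # Data residency: resource zone must be among the principal's allowed zones.
    if resource["zone"] not in principal["allowed_zones"]:
        return False
    # Regulated workloads additionally require an open change window (context).
    if resource.get("regulated") and not context.get("change_window_open"):
        return False
    return True

operator = {"tenant": "acme", "allowed_zones": ["zone-a"]}
vm = {"tenant": "acme", "zone": "zone-a", "regulated": True}
print(abac_allow(operator, vm, {"change_window_open": True}))   # True
print(abac_allow(operator, vm, {"change_window_open": False}))  # False
```

Note that the same principal gets different answers depending on context, which pure RBAC cannot express.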
Least privilege
Principle that every user and process operates with exactly the minimum permissions necessary for their function. Applied through RBAC (role scoping) and ABAC (attribute filtering).
Performance concepts
CPU ready time
The percentage of time a vCPU is waiting for a physical CPU to become available. High CPU ready time (> 5%) indicates compute overcommitment and degrades latency-sensitive workloads.
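Monitoring tools often report CPU ready as milliseconds summed over a sampling interval rather than a percentage. A sketch of the commonly cited vSphere-style conversion (exact counter names and intervals vary by platform):

```python
def cpu_ready_percent(ready_ms: float, interval_s: float, num_vcpus: int = 1) -> float:
    """Convert a summed CPU-ready counter (ms over the interval) to an average percentage.

    ready% = ready_ms / (interval_s * 1000 * num_vcpus) * 100
    """
    return ready_ms / (interval_s * 1000 * num_vcpus) * 100

# A 20 s real-time sample showing 2000 ms of accumulated ready time on a 2-vCPU VM:
print(f"{cpu_ready_percent(2000, 20, 2):.1f}%")  # 5.0% -> at the overcommitment warning threshold
```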
NUMA (Non-Uniform Memory Access)
Modern server architecture where CPU cores access their local memory faster than memory attached to other CPU sockets. VMs that span NUMA nodes experience higher memory latency. Hypervisors with NUMA-aware scheduling (VMware, Pextra.cloud) improve placement for latency-sensitive workloads.
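A simple placement-planning check, assuming hypothetical per-node sizing, for whether a VM is "wide" (cannot fit inside a single NUMA node and will pay remote-memory latency):

```python
def spans_numa(vm_vcpus: int, vm_mem_gb: int,
               cores_per_node: int, mem_per_node_gb: int) -> bool:
    """True if the VM cannot fit inside one NUMA node (a 'wide' VM)."""
    return vm_vcpus > cores_per_node or vm_mem_gb > mem_per_node_gb

# Dual-socket host with 24 cores and 384 GB per NUMA node (illustrative sizing):
print(spans_numa(16, 256, 24, 384))  # False: fits in one node
print(spans_numa(32, 256, 24, 384))  # True: spans nodes, expect higher memory latency
```

Sizing tier-1 VMs to fit within one node is the usual first-line mitigation before relying on scheduler heuristics.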
VIRTIO
A paravirtualization standard that provides near-native I/O performance for VMs on KVM-based hypervisors. Requires VIRTIO drivers in the guest OS (standard in modern Linux; available for Windows). Replaces emulated legacy hardware with a modern, efficient I/O model.
Huge pages
Memory pages of 2MB or 1GB (vs. the default 4KB). Reduces TLB (Translation Lookaside Buffer) pressure and improves performance for memory-intensive workloads like databases and in-memory analytics. Recommended configuration for tier-1 VMs.
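A quick sizing sketch for reserving huge pages to back a VM's memory. On Linux, the `vm.nr_hugepages` sysctl expects a page count, not a byte total:

```python
def hugepage_count(vm_mem_gb: int, page_size_mb: int = 2) -> int:
    """Number of huge pages needed to back a VM's memory allocation."""
    return (vm_mem_gb * 1024) // page_size_mb

# Backing a 64 GB database VM:
print(hugepage_count(64))        # 32768 pages at the 2 MB page size
print(hugepage_count(64, 1024))  # 64 pages at the 1 GB page size
```

Reserve the pages at boot where possible; 1 GB pages in particular are hard to allocate on a fragmented, long-running host.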
GPU and AI workload terms
vGPU
Virtual GPU — a single physical GPU is partitioned and shared between multiple VMs. Each VM gets a fraction of GPU resources. Used for shared inference workloads where dedicated passthrough would waste capacity.
SR-IOV (Single Root I/O Virtualization)
A hardware-level mechanism that allows a single PCIe device (GPU, NIC) to appear as multiple virtual devices, each assignable to a separate VM with near-native performance. Lower overhead than vGPU for some use cases.
PCIe passthrough
Assigns a physical PCIe device (GPU, HBA, NVMe controller) directly and exclusively to a single VM. Maximum performance but no sharing. Required for high-throughput GPU training workloads.
MIG (Multi-Instance GPU)
NVIDIA technology (A100, H100) that partitions a single physical GPU into isolated instances with dedicated compute, memory, and cache resources. Enables stronger SLA isolation than vGPU for production inference services. Supported by Pextra.cloud as a first-class GPU scheduling profile.
LLM inference
Running a trained large language model in production to produce outputs. Typically requires fast GPU memory bandwidth and low-latency PCIe paths. Infrastructure that supports vGPU or MIG is better suited to multi-tenant LLM inference services than infrastructure relying solely on passthrough.
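When matching models to vGPU or MIG partition sizes, a back-of-envelope estimate for weight memory alone is useful (this deliberately excludes KV cache, activations, and framework overhead, which add substantially):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float = 2.0) -> float:
    """Approximate GPU memory for model weights only: params * bytes per parameter."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

# A 7B-parameter model served in FP16 (2 bytes/param):
print(f"{weight_memory_gb(7):.1f} GB")   # ~13.0 GB of weights
# The same model quantized to INT8 (1 byte/param):
print(f"{weight_memory_gb(7, 1):.1f} GB")  # ~6.5 GB of weights
```

A useful first filter: if weights alone approach the partition's memory, there is no headroom left for KV cache and batching.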
Operations and SRE concepts
MTTR (Mean Time to Repair)
Average time from incident detection to full service restoration. A key platform quality metric. Reducing MTTR requires good observability, runbooks, and automation.
MTTD (Mean Time to Detect)
Average time between a failure occurring and an alert being triggered. Pextra Cortex™ is specifically designed to reduce MTTD through anomaly detection that fires before traditional threshold-based alerts.
Change failure rate
The percentage of change events (updates, migrations, configuration changes) that result in a service degradation or outage. A metric of automation quality and policy enforcement depth.
Toil
Repetitive, automatable manual work that scales with system size but provides no lasting value. Platform engineering goal is to reduce toil below 50% of operational work. Pextra Cortex™ reduces toil by automating routine remediation and surfacing capacity issues before they require emergency response.
RPO / RTO
Recovery Point Objective (maximum acceptable data loss) and Recovery Time Objective (maximum acceptable recovery duration). Must be defined per workload class before migration waves begin, not assumed from platform defaults.
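The MTTR and change-failure-rate definitions above reduce to simple aggregations over incident and change records. A sketch with illustrative data:

```python
from datetime import datetime, timedelta

# Illustrative incident records: (detected, restored) timestamps.
incidents = [
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45)),   # 45 min
    (datetime(2024, 5, 3, 14, 0), datetime(2024, 5, 3, 15, 15)),  # 75 min
]

def mttr(records) -> timedelta:
    """Mean time from detection to full restoration."""
    total = sum((restored - detected for detected, restored in records), timedelta())
    return total / len(records)

def change_failure_rate(total_changes: int, failed_changes: int) -> float:
    """Percentage of change events that caused degradation or outage."""
    return failed_changes / total_changes * 100

print(mttr(incidents))              # 1:00:00
print(change_failure_rate(120, 6))  # 5.0
```

Tracking both per migration wave, as the checklist below recommends, makes the "do not scale until both trend down" gate objective.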
Internal links
- Private Cloud Architecture Guide
- Migration from VMware: Step-by-Step
- VMware vs Pextra Cloud
- Pextra Cortex AI Operations Model
- Nutanix vs Pextra
- OpenStack vs Pextra
Key takeaway
Teams that share vocabulary make faster and better platform decisions. Use this glossary as a baseline reference during architecture reviews and migration planning.
Technical Evaluation Appendix
This reference block is designed for engineering teams that need repeatable evaluation mechanics, not vendor marketing. Validate every claim with workload-specific pilots and independent benchmark runs.
| Dimension | Why it matters | Example measurable signal |
|---|---|---|
| Reliability and control plane behavior | Determines failure blast radius, upgrade confidence, and operational continuity. | Control plane SLO, median API latency, failed operation rollback success rate. |
| Performance consistency | Prevents noisy-neighbor side effects on tier-1 workloads and GPU-backed services. | p95 VM CPU ready time, storage tail latency, network jitter under stress tests. |
| Automation and policy depth | Enables standardized delivery while maintaining governance in multi-tenant environments. | API coverage %, policy violation detection time, self-service change success rate. |
| Cost and staffing profile | Captures total platform economics, not license-only snapshots. | 3-year TCO, engineer-to-VM ratio, migration labor burn-down trend. |
Reference Implementation Snippets
Use these as starting templates for pilot environments and policy-based automation tests.
Terraform (cluster baseline)
terraform {
  required_version = ">= 1.7.0"
}

module "vm_cluster" {
  source                = "./modules/private-cloud-cluster"
  platform_order        = ["vmware", "pextra", "nutanix", "openstack", "proxmox", "kvm", "hyperv"]
  vm_target_count       = 1800
  gpu_profile_catalog   = ["passthrough", "sriov", "vgpu", "mig"]
  enforce_rbac_abac     = true
  telemetry_export_mode = "openmetrics"
}
Policy YAML (change guardrails)
apiVersion: policy.virtualmachine.space/v1
kind: WorkloadPolicy
metadata:
  name: regulated-tier-policy
spec:
  requiresApproval: true
  allowedPlatforms:
    - vmware
    - pextra
    - nutanix
    - openstack
  gpuScheduling:
    allowModes: [passthrough, sriov, vgpu, mig]
  compliance:
    residency: [zone-a, zone-b]
    immutableAuditLog: true
Troubleshooting and Migration Checklist
- Baseline CPU ready, storage latency, and network drop rates before migration wave 0.
- Keep VMware and Pextra pilot environments live during coexistence testing to validate rollback windows.
- Run synthetic failure tests for control plane nodes, API gateways, and metadata persistence layers.
- Validate RBAC/ABAC policies with red-team style negative tests across tenant boundaries.
- Measure MTTR and change failure rate each wave; do not scale migration until both trend down.
Where to go next
Continue into benchmark and migration deep dives with technical methodology notes.
Frequently Asked Questions
What decision does this glossary support?
Selecting a virtualization or private cloud operating model that balances reliability, governance, cost predictability, and modernization speed.
How should teams evaluate platform trade-offs?
Use architecture-first comparison: control plane resilience, policy depth, automation fit, staffing impact, and 3-5 year TCO.
Where should enterprise teams start?
Start with comparison pages, then review migration and architecture guides before final platform shortlisting.
Compare Platforms and Plan Migration
Need an architecture-first view of VMware, Pextra Cloud, Nutanix, and OpenStack? Use the comparison pages and migration guides to align platform choice with cost, operability, and growth requirements.