Server virtualization—breaking one physical server up into a bunch of virtual machines—is one of the most significant changes in server management in the past 10 years. (We wrote "server virtualization" in lowercase because it's not a Windows-only idea; it's used not just in Windows Server but in various flavors of Linux, Unix, Sun Solaris, and so on.) Being able to buy one big, powerful, reliable piece of hardware, fool it into believing that it's actually 10 or 20 smaller separate pieces of computer hardware, and then install separate server OSes on those bits of "virtual server hardware" has greatly simplified server management for operations big and small. Furthermore, it has solved a server management problem that has bedeviled server room planners for years: underutilized hardware. The tool that fools the computer into thinking that it is actually many separate computers is generically called a virtual machine manager (VMM).

You see, ever since the start of server computing, most organizations have preferred to put each server function—email, AD domain controller, file server, web server, database server—on its own separate physical server. Thus, if you needed a domain controller, a web server, and an email server for your domain, you would commonly buy three separate server computers, put a copy of Windows Server on each one, and make one a DC, one a web server (by enabling Internet Information Services, R2's built-in web server software, on the server), and one an Exchange Server. The downside was that each of those servers would probably run at fairly low load levels: it wouldn't be surprising to learn that the DC ran at about 5 percent of the CPU's maximum capacity, the web server a bit more, and the email server a bit more than that. Running a bunch of pieces of physical server hardware below their capacity meant wasting electricity, and that's just not green thinking, y'know?
In contrast, buying one big physical server and using a VMM to chop it up into (for example) three virtual servers would probably yield a physical server that's working near capacity, cutting electricity use and cooling needs.
First, let’s cover the new technology added in this version. Since there are so many improvements to Hyper-V, we’ll just briefly touch on each one:
◆ Client Hyper-V brings Hyper-V technology to desktop Windows without the need to install a server OS.
◆ A Hyper-V module for Windows PowerShell provides more than 160 cmdlets to manage Hyper-V.
◆ Hyper-V Replica allows you to replicate virtual machines between storage systems, clusters, and datacenters in two sites. This helps provide business continuity and disaster recovery.
◆ Resource metering helps track and collect data about network usage and resources on specific virtual machines.
◆ Simplified authorization groups Hyper-V administrators into a local security group; as a result, fewer individual user accounts need to be created just to manage Hyper-V.
◆ Single-root I/O virtualization (SR-IOV) is a new feature that allows you to assign a network adapter directly to a virtual machine.
◆ Storage migration allows you to move a virtual machine’s virtual hard disks to different physical storage while the virtual machine is running.
◆ SMB 3.0 file share is a new feature that provides virtual machines with shared storage without the use of a storage area network (SAN).
◆ The virtual Fibre Channel allows you to virtualize workloads and applications that require direct access to Fibre Channel-based storage. It also makes it possible to configure clustering directly within the guest operating system (sometimes referred to as guest clustering).
◆ Virtual Non-Uniform Memory Access (NUMA) allows certain high-performance applications running in the virtual machine to use NUMA topology to help optimize performance.
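To give you a feel for the PowerShell module mentioned above, here is a quick sketch using a few of the Hyper-V cmdlets (the VM name, paths, and sizes are made up for illustration; run this in an elevated PowerShell session on a Hyper-V host):

```powershell
# List everything the Hyper-V module gives you (more than 160 cmdlets)
Get-Command -Module Hyper-V | Measure-Object

# Create a new VM with a VHDX disk (example name and paths)
New-VM -Name "Web01" -MemoryStartupBytes 1GB `
       -NewVHDPath "D:\VMs\Web01.vhdx" -NewVHDSizeBytes 60GB

# Turn on resource metering for the VM, then collect the data
Enable-VMResourceMetering -VMName "Web01"
Measure-VM -VMName "Web01"

# Storage migration: move the VM's storage while it runs
Move-VMStorage -VMName "Web01" -DestinationStoragePath "E:\VMs\Web01"
```

This is only a sampler, not a recipe; each of the features in the list above (replica, SR-IOV, virtual Fibre Channel, and so on) has its own set of cmdlets you can discover with `Get-Command -Module Hyper-V`.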
Now let’s briefly talk about some of the enhancements made to existing Hyper-V technology that many administrators will find useful.
◆ Dynamic memory allows you to configure Smart Paging so your virtual machines can restart more reliably. If a virtual machine has less startup memory available than it needs, Smart Paging can be configured to bridge the gap during the restart.
◆ Importing virtual machines has received a tune-up to better handle configuration problems that would normally prevent an import. Until now, the import process simply copied a virtual machine and never checked for configuration issues.
◆ Live migration can now be performed in a nonclustered environment, making it easier to move a running virtual machine between hosts.
◆ Larger storage resources, increased scale, and better hardware error-handling are offered in this version. The intention is to help you configure large, high-performance virtual machines with the ability to scale.
◆ Virtual Hard Disk Format (VHDX) increases the maximum storage size of each virtual hard disk. The new format supports up to 64 terabytes of storage. It also comes with built-in protection against data corruption during power failures, and it prevents performance falloff on large-sector physical disks.
◆ You no longer need to shut down a live virtual machine to recover storage space: when a virtual machine snapshot is deleted, the space it consumed is now freed up while the virtual machine keeps running.
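The VHDX improvements above are easy to try from PowerShell. The following sketch (paths and sizes are made-up examples) creates a dynamically expanding VHDX at the new 64 TB maximum and converts an older VHD to the new format:

```powershell
# Create a dynamically expanding VHDX at the new 64 TB ceiling
# (the file starts small and grows as data is written)
New-VHD -Path "D:\VMs\BigData.vhdx" -SizeBytes 64TB -Dynamic

# Convert an existing VHD to VHDX to pick up the new format's
# larger size limit and power-failure resiliency
Convert-VHD -Path "D:\VMs\Old.vhd" -DestinationPath "D:\VMs\Old.vhdx"
```

Note that the older VHD format tops out at 2 TB, so any disk bigger than that has to be VHDX.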