
September 3, 2014

In Which I’m Late to the Party – VMware EVO

by Steve


If anything could wash the taste of the vRealize rebranding out of my mouth, it was VMware’s announcement of the EVO family.

VMware has realized that for all the pomp and circumstance they’ve built up around the Software-Defined Data Center, it remains a tough nut to crack. There are considerable challenges around initial setup, provisioning, and ongoing lifecycle management and support. The way they see it, there are three approaches to this today:

  1. Build your own. Separate procurement processes for all components—software, storage, compute, networking, etc—often based on a vendor’s reference architecture.
  2. Converged Infrastructure. Same traditional components from your major compute, network, and storage vendors, but sold as a single bundle, with some level of pre-integration. Your Vblocks, and to a somewhat lesser extent, FlexPods.
  3. Hyper-Converged Appliances. Scale-out devices comprising integrated compute and storage, with some networking tying them together.  The SimpliVities and Nutanices of the world.

EVO:RAIL is VMware’s initial swing at option 3. Chris Wahl has an excellent overview of EVO:RAIL here. The gist is that these are Hyper-Converged Infrastructure Appliances (HCIAs): 2U, 4-node appliances, with each node containing two Intel E5-2620 v2 CPUs, up to 192GB RAM, a PCI-E disk controller, a 146GB SAS or 32GB SATADOM device for ESXi boot, a single up-to-400GB SSD, three 1.2TB 10K SAS HDDs, two 10GbE RJ45 or SFP+ ports for data traffic, and one 100/1000 Mb NIC for management. Each HCIA includes licensing for vSphere Enterprise Plus, Virtual SAN, Log Insight, and the EVO Engine.
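Some quick back-of-the-envelope math on what a single appliance adds up to (my own arithmetic based on the spec above, not a VMware-published figure; it ignores slack space, witness components, and boot-device overhead):

```python
# Rough totals for one 2U/4-node EVO:RAIL appliance, using the per-node
# spec above. Illustrative arithmetic only.
NODES = 4

cpu_sockets = 2 * NODES          # 8 sockets of E5-2620 v2
max_ram_gb  = 192 * NODES        # 768 GB at the top configuration
ssd_tb      = 0.4 * NODES        # up to 1.6 TB of flash (cache tier)
hdd_raw_tb  = 3 * 1.2 * NODES    # 14.4 TB raw HDD (capacity tier)

# With Virtual SAN's default FTT=1 (data mirrored once), usable capacity
# is roughly half the raw capacity tier.
usable_ftt1_tb = hdd_raw_tb / 2

print(f"{cpu_sockets} sockets, {max_ram_gb} GB RAM, "
      f"{ssd_tb:.1f} TB SSD cache, {hdd_raw_tb:.1f} TB raw / "
      f"~{usable_ftt1_tb:.1f} TB usable at FTT=1")
```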

Installation is ridiculously easy—rack it, cable it, and access the EVO:RAIL management console to launch a wizard that does the rest for you. The wizard walks you through hostnames, network configuration, and passwords, then cranks away for about fifteen minutes before presenting you with a happy “Hooray!” message and a URL for the EVO:RAIL dashboard. During that fifteen minutes, EVO:RAIL is deploying a vCenter appliance, installing ESXi, and configuring everything based on your inputs in the setup wizard. Crazy, huh?
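For a sense of how little you actually have to provide, here is roughly that input set sketched as a Python dict. The real wizard collects these through a web form, and the field names here are mine, not VMware’s schema—treat it as illustrative only:

```python
# Illustrative only: roughly the set of inputs the EVO:RAIL setup wizard
# collects. Field names and values are made up, not VMware's schema.
evo_rail_inputs = {
    "esxi_hostname_prefix": "evorail-esxi",        # becomes evorail-esxi01..04
    "vcenter_hostname": "evorail-vcenter.lab.local",
    "management_network": {"vlan": 10, "subnet": "10.0.10.0/24",
                           "gateway": "10.0.10.1"},
    "vmotion_ip_range":   ("10.0.20.10", "10.0.20.13"),
    "vsan_ip_range":      ("10.0.30.10", "10.0.30.13"),
    "dns_servers": ["10.0.10.5"],
    "ntp_servers": ["pool.ntp.org"],
    "passwords": {"esxi_root": "********", "vcenter_admin": "********"},
}
```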

The EVO:RAIL dashboard is something like training wheels for the vSphere Web Client. You can deploy new VMs, view system health, and perform updates. The traditional clients are available, too, if you prefer things to be harder.
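And since the full vSphere API comes along for the ride, you can also poke at the freshly built environment programmatically. A minimal pyVmomi sketch, assuming the vCenter appliance the wizard just deployed is reachable; the hostname and credentials are placeholders for whatever you entered during setup:

```python
# Connect to the vCenter appliance EVO:RAIL deployed and list the four
# nodes and their connection state. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip cert verification
si = SmartConnect(host="evorail-vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name, host.runtime.connectionState)
finally:
    Disconnect(si)
```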

Note that VMware itself is not in the hardware business. EVO:RAIL is something of a reference architecture, and VMware is partnering with OEMs to produce actual EVO:RAIL devices. Launch partners are Dell, EMC, Fujitsu, Inspur, Net One Systems, and Supermicro. You have to hand it to Dell, playing arms manufacturer in the hyper-converged appliance wars. They now OEM for VMware, SimpliVity, and Nutanix.

EVO:RAIL is neat, and I want one in a tower form-factor to sit under my desk and serve as a respectable homelab. I want a pony, too. But, neat though it may be, it doesn’t hold a candle to what’s ahead. VMworld also offered a tech preview of EVO:RAIL’s big brother, EVO:RACK.

EVO:RACK is the full SDDC in a box. Well, crate. The idea here is that these are large-scale deployments, with half-rack and full-rack configurations available, scaling to a large but unspecified number of racks. What’s in the racks will vary—the tech preview featured 2U, 2-Node appliances with Virtual SAN, but it’s somewhat open-ended. VMware will work with partners to qualify configurations for EVO:RACK. Software-wise, it’ll come with everything–vRealize Suite, Virtual SAN, NSX, and the new EVO:RACK Manager.

EVO:RACK Manager is where the magic happens. Here’s the experience VMware is shooting for:

  1. Pre-configured rack(s) arrive at customer site
  2. Customer connects power and network uplinks
  3. Customer walks through a quick wizard to define network settings, IP ranges, DNS, hostnames, and tenant information–the usual basic stuff.
  4. EVO:RACK Manager takes over, auto-provisioning internal and uplink networking, ESXi installation, and VCAC configuration.
  5. After provisioning, EVO:RACK Manager becomes a single pane of glass. Like, for real. As you grow, you continue to manage the environment as a single logical rack. New racks are auto-discovered and auto-provisioned based on Customer-defined SLA policies. Customers can request capacity, and EVO:RACK Manager auto-provisions pools based on those SLAs, and creates VCAC reservations as appropriate. Customers can request applications, and EVO will deploy logical networks and security automatically, again based on defined policies.

Total time, from wheeling in the rack to having a self-service portal? They’re shooting for under two hours.
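That last step in the list is doing a lot of work, so here is a toy sketch of the kind of SLA-driven placement logic such an engine implies. To be clear, none of these classes, fields, or policies are real VMware APIs; everything below is my own invention to make the idea concrete:

```python
# Purely illustrative: a toy model of the SLA-driven capacity flow the
# EVO:RACK Manager demo implies. Every name here is made up.
from dataclasses import dataclass

@dataclass
class SlaPolicy:
    name: str
    min_free_cpu_ghz: float   # headroom a rack must keep in reserve
    replicas: int             # extra data copies, loosely VSAN's FTT

@dataclass
class Rack:
    name: str
    free_cpu_ghz: float
    free_storage_tb: float

def place_request(racks, policy, cpu_ghz, storage_tb):
    """Pick the first rack that can absorb the request and still meet the
    policy's headroom; the real product would also carve out the pool and
    create the corresponding VCAC reservation."""
    needed_storage = storage_tb * (policy.replicas + 1)
    for rack in racks:
        if (rack.free_cpu_ghz - cpu_ghz >= policy.min_free_cpu_ghz
                and rack.free_storage_tb >= needed_storage):
            rack.free_cpu_ghz -= cpu_ghz
            rack.free_storage_tb -= needed_storage
            return rack.name
    return None  # no fit: the cue to auto-provision another rack

racks = [Rack("rack-01", free_cpu_ghz=180.0, free_storage_tb=40.0)]
gold = SlaPolicy("gold", min_free_cpu_ghz=20.0, replicas=1)
print(place_request(racks, gold, cpu_ghz=16.0, storage_tb=2.0))
```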

I spent about half an hour staring slack-jawed at the EVO:RACK Manager UI demo, just blown away. That degree of orchestration is hard. Just whisper “UIM” around anyone who’s been working with Vblocks for a while and watch the involuntary shudder. My initial impression was that this was a thumb-nosing at VCE, and at what Vision is still so far from delivering. But in the Tech Preview session, VCE was called out by name as being on the integration roadmap. So maybe we’ll see a Vblock 900 someday based on EVO:RACK.

Yes, it’s a tech preview. Yes, forward-looking-statements and all. I don’t care–it’s nice to be excited about hardware again.
