Windows Azure Pack – Failed to Register Virtual Machine Cloud Provider

I was installing Windows Azure Pack in a different lab environment (having not had a problem on previous occasions) and everything was fine until connecting to Virtual Machine Manager. Every time I tried, I got the error “Failed to register virtual machine cloud provider”:

WAP-VMM_FailedToRegister

Clicking on ‘Details’ only gave “An error occurred while processing this request.” I did some troubleshooting, including working through the great blog article from Microsoft, Troubleshooting Windows Azure Pack, SPF & VMM, none of which pointed to the problem.

I went to the Event Viewer on the server running Windows Azure Pack (only an express installation in this lab) and navigated to ‘Applications and Services Logs > Microsoft > WindowsAzurePack > MgmtSvc-AdminAPI > Operational’. There were several long errors in there; they were all slightly different in content, but all started the same way:

  • Resource provider unexpected exception for proxy request with verb ‘GET’, operation name ‘Outgoing admin proxy call’
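If you would rather pull these errors out with PowerShell than click through Event Viewer, something along these lines works. Note that the log name below is my assumption based on the Event Viewer path, so confirm it first with Get-WinEvent -ListLog:

  # List the most recent errors from the MgmtSvc-AdminAPI operational log.
  # The log name is assumed from the Event Viewer path above; check it with:
  #   Get-WinEvent -ListLog *MgmtSvc-AdminAPI*
  Get-WinEvent -LogName 'Microsoft-WindowsAzurePack-MgmtSvc-AdminAPI/Operational' -MaxEvents 20 |
      Where-Object { $_.LevelDisplayName -eq 'Error' } |
      Format-List TimeCreated, Id, Message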

Sounds like a PowerShell issue…

Reading further into each error, I spotted the following:

  • Invoking method GetSupportedQueryOptions of type Microsoft.SystemCenter.Foundation.Psws.Spf.SpfOperationManager failed. Cause of the problem: Windows PowerShell updated your execution policy successfully, but the setting is overridden by a policy defined at a more specific scope.  Due to the override, your shell will retain its current effective execution policy of Unrestricted.

Having a look at the Execution Policy, it was indeed set to Unrestricted. I found in Group Policy that the setting ‘Turn on Script Execution’ was enabled. I removed this, ran a gpupdate and checked the Execution Policy again, which had now reverted to Restricted. I also ran gpupdate on the Service Provider Foundation server and checked that it was now set to Restricted there too.
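For anyone hitting the same thing, this is roughly how to check where the setting is coming from and to confirm the change has taken effect. The MachinePolicy scope is the one written by the computer-level ‘Turn on Script Execution’ Group Policy setting:

  # Show the execution policy at every scope; a value under MachinePolicy
  # (or UserPolicy) is being enforced by Group Policy.
  Get-ExecutionPolicy -List

  # After removing the GPO setting, refresh Group Policy and re-check.
  gpupdate /force
  Get-ExecutionPolicy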

Connecting to Virtual Machine Manager now worked as expected.

Hyper-V Host SMB NIC Selection

When I was creating a simple proof of concept with some new DataOn DNS-1640D JBODs the other day (before several of these go into production), I was seeing some unexpected behaviour with which network was being used by a Hyper-V host for SMB Multichannel. Very simply, the host had three network adapters: one management and two storage, all on separate subnets (as is required for SMB Multichannel). The management NIC was onboard and the two storage NICs were on an additional card. Monitoring the networks when reading/writing data from/to the storage showed only the management NIC was being used. I checked DNS, connections, the client access point, etc. and everything seemed configured correctly on my SOFS cluster, so I configured SMB Multichannel constraints to select only my two storage NICs. Still no joy.

So I reached out to Aidan Finn (www.aidanfinn.com), who is a Microsoft MVP and knows this stuff very well, to see if he could shed some light on the problem. Despite being on holiday, he replied within 5 hours with his thoughts about what was going on. It turns out there is a specific order in which the Hyper-V server selects a network card for SMB. He said:

It’s a waterfall decision:

  1. If a NIC with RDMA is found: use it/them
  2. If a NIC with RSS enabled is found: use it/them
  3. The highest speed NIC: use it/them

The decision will go through that list in order and jump out at the first satisfied one.

He was spot on. Sure enough, by running Get-SmbClientNetworkInterface on the Hyper-V host, I could see that my management NIC had Receive Side Scaling (RSS) enabled and my two storage NICs didn’t, so selection stopped at the management NIC and never looked any further. I’m sure Aidan will elaborate on this in one of his future blog posts.
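If you want to check the same thing on your own host, the relevant details are easy to pull out of that cmdlet. This is just a sketch, and the property names are the ones I recall from Windows Server 2012 R2:

  # Show the NICs the SMB client considers, with the RSS/RDMA capability
  # and speed that feed into the selection order above.
  Get-SmbClientNetworkInterface |
      Select-Object FriendlyName, InterfaceIndex, RssCapable, RdmaCapable, Speed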

To solve this, I had to do two things: disable RSS on the management NIC and then configure an SMB Multichannel constraint for the two storage NICs, since all three NICs were now equal as far as SMB Multichannel was concerned. The following two PowerShell cmdlets sorted it (a worked sketch follows the list):

  • Disable-NetAdapterRss
  • New-SmbMultichannelConstraint
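Roughly what I ran is below. ‘Management’ is a placeholder for whatever Get-NetAdapter shows as the name of your onboard management NIC, and the Client Access Point name and interface indexes will be specific to your own environment:

  # Disable RSS on the onboard management NIC so it no longer wins the
  # SMB Multichannel selection ('Management' is a placeholder adapter name).
  Disable-NetAdapterRss -Name 'Management'

  # Constrain SMB traffic to the SOFS Client Access Point to the two storage
  # NICs ('SOFS-CAP', 21 and 22 are placeholder values).
  New-SmbMultichannelConstraint -ServerName 'SOFS-CAP' -InterfaceIndex 21,22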

I have already blogged about SMB Multichannel Constraints, so for more information, see here: http://www.knappett.net/index.php/2014/07/10/configuring-smb-multichannel-contraints-to-a-scale-out-file-server-cluster-hyper-v-and-virtual-machine-manager/. I won’t have to use this for the production clusters, as those will have different types of network adapters and a different setup, but it should prove helpful for a proof of concept. This would also need to be done on each Hyper-V host with this issue.

Thanks, Aidan!

Configuring SMB Multichannel Constraints to a Scale-Out File Server Cluster – Hyper-V and Virtual Machine Manager

With a Scale-Out File Server (SOFS) cluster using SMB Multichannel in Windows Server 2012 and Windows Server 2012 R2, in certain scenarios it may be necessary to restrict which networks can be used for the SMB 3.0 traffic.  This can be done and managed with the handy PowerShell SMB Share Cmdlets (http://technet.microsoft.com/en-us/library/jj635726.aspx):

  • Get-SmbMultichannelConstraint
  • New-SmbMultichannelConstraint
  • Remove-SmbMultichannelConstraint

These are configured on the client, not on the SOFS nodes, and you are setting the constraint on the connection to the Client Access Point (CAP) of the SOFS role in the Failover Cluster, rather than on each SOFS node individually. It will also only affect network traffic for connections to this Client Access Point, so other shares or SOFS clusters will behave as before and will need to be configured separately if that traffic also needs to be constrained.

Important
It’s important to note that when System Center Virtual Machine Manager (VMM) deploys VMs to this Scale-Out File Server, it uses the Fully Qualified Domain Name (FQDN) of the Client Access Point rather than the NetBIOS name that you might use manually, and the two are treated separately. Therefore, as you will see below, we have to add two entries when configuring the SMB Multichannel constraints.

As an example, I have a Failover Cluster configured with a Scale-Out File Server role called LAB12R2-SOFSCAP and two networks (on different subnets of course) for SMB Multichannel.  I have two SOFS nodes, LAB12R2-SOFS01 and LAB12R2-SOFS02, and one Hyper-V host, LAB12R2-HYPV01.  These screenshots are from the configuration in Failover Cluster Manager:

SOFS-CAP

SOFS-Network

On my Hyper-V host, I also have two storage networks and a Production network to correspond to the SOFS nodes. If the network adapters are all identical on the Hyper-V host, it will use all three of them for SMB Multichannel, as it grabs whatever meets the requirements. To restrict this to just the two storage networks, on the Hyper-V host (and on every Hyper-V host in the cluster where this is required), run the following in PowerShell (as an Administrator):

  • Get-NetAdapter
    Note down the number(s) from the ifIndex column of the network adapter(s) you want to use.

Get-NetAdapter
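If you would rather not read the numbers off the screen, something like this pulls the indexes out directly. The ‘Storage*’ name filter is purely an assumption about how your storage adapters are named:

  # Grab the ifIndex values of the storage adapters by name
  # (adjust the 'Storage*' filter to match your own naming).
  (Get-NetAdapter -Name 'Storage*').ifIndex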

  • New-SmbMultichannelConstraint -ServerName LAB12R2-SOFSCAP -InterfaceIndex 29,32
    New-SmbMultichannelConstraint -ServerName LAB12R2-SOFSCAP.lab.local -InterfaceIndex 29,32
    where LAB12R2-SOFSCAP is the name of the Client Access Point for the Scale-Out File Server role, and 29 and 32 are the interface indexes I want to use (to add more, just separate them with additional commas). The command is run twice, once for the NetBIOS name and once for the FQDN, for the VMM reason described above.

  • Get-SmbMultichannelConstraint

Get-SmbMultichannelContraint

If you no longer need these constraints, or you make a mistake and want to reset the configuration, run the following:

  • Get-SmbMultichannelConstraint | Remove-SmbMultichannelConstraint