Software vendor VMware has released a new version of vCenter, 7.0 Update 3. The latest release extends the Fault Tolerance feature, which, according to the vendor, can now deliver zero downtime and zero data loss for mission-critical virtual machines. The release fixes a bug, introduced with the update to vCenter Server 7.0 Update 2, that caused all I/O filter storage providers to display in the vSphere Client as Offline or Disconnected. It also resolves an issue with vSphere Lifecycle Manager, which returned a "500" error and no results when checking the VMware Tools compliance status. Read on for more details.
- vCenter Server 7.0 Update 3 contains all security fixes from vCenter Server 7.0 Update 2d and covers all vulnerabilities documented in VMSA-2021-0020.
- vSphere Memory Monitoring and Remediation, and support for snapshots of PMem VMs: vSphere Memory Monitoring and Remediation collects data and provides visibility of performance statistics to help you determine if your application workload is regressed due to Memory Mode. vSphere 7.0 Update 3 also adds support for snapshots of PMem VMs.
- Extended support for disk drives types: Starting with vSphere 7.0 Update 3, vSphere Lifecycle Manager validates the following types of disk drives and storage device configurations:
• HDD (SAS/SATA)
• SSD (SAS/SATA)
• SAS/SATA disk drives behind single-disk RAID-0 logical volumes
- Use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host: Starting with vSphere 7.0 Update 3, you can use vSphere Lifecycle Manager images to manage a vSAN stretched cluster and its witness host.
- vSphere Cluster Services (vCLS) enhancements: With vSphere 7.0 Update 3, vSphere admins can configure vCLS virtual machines to run on specific datastores by configuring the vCLS VM datastore preference per cluster. Admins can also define compute policies to specify how the vSphere Distributed Resource Scheduler (DRS) should place vCLS agent virtual machines (vCLS VMs) and other groups of workload VMs.
- Improved interoperability between vCenter Server and ESXi versions: Starting with vSphere 7.0 Update 3, vCenter Server can manage ESXi hosts from the previous two major releases and any ESXi host from version 7.0 and 7.0 updates. For example, vCenter Server 7.0 Update 3 can manage ESXi hosts of versions 6.5, 6.7 and 7.0, all 7.0 update releases, including later than Update 3, and a mixture of hosts between major and update versions.
- MTU size greater than 9000 bytes: With vCenter Server 7.0 Update 3, you can set the size of the maximum transmission unit (MTU) on a vSphere Distributed Switch to up to 9190 bytes to support switches with larger packet sizes.
- Zero downtime, zero data loss for mission critical VMs in case of Machine Check Exception (MCE) hardware failure: With vSphere 7.0 Update 3, mission critical VMs protected by VMware vSphere Fault Tolerance can achieve zero downtime and zero data loss in case of a Machine Check Exception (MCE) hardware failure, because VMs fall back to the secondary VM instead of failing.
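The host-interoperability rule above can be sketched as a small compatibility check. This is an illustration only: the helper function and the version-string encoding are assumptions for this sketch, not a VMware API.

```python
# Illustrative sketch of the interoperability rule from the notes above:
# vCenter Server 7.0 Update 3 can manage ESXi hosts from the previous two
# major releases (6.5, 6.7) and any 7.0 or 7.0 Update release.
# The version-string format ('6.5', '7.0u2', ...) is an assumption made
# for this example, not a VMware API.
def vcenter_70u3_can_manage(esxi_version: str) -> bool:
    """Return True if vCenter Server 7.0 Update 3 can manage the host."""
    major = esxi_version.split("u", 1)[0]  # '7.0u2' -> '7.0'
    return major in ("6.5", "6.7", "7.0")

# A mixed inventory of major and update versions is supported:
inventory = ["6.5", "6.7", "7.0", "7.0u2", "7.0u3"]
print(all(vcenter_70u3_can_manage(v) for v in inventory))  # True
```

An ESXi 6.0 host, being more than two major releases behind, would fail this check.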
vSphere Lifecycle Manager Issues
- When you try to check VMware Tools or VM Hardware compliance status, you see a status 500 error and the check returns no results
In the vSphere Client, when you navigate to the Updates tab of a container object (host, cluster, data center, or vCenter Server instance) to check VMware Tools or VM Hardware compliance status, you might see a status 500 error. The check works only if you navigate to the Updates tab of a virtual machine.
This issue is resolved in this release.
- SNMP dynamic firewall ruleset is modified by Host Profiles during a remediation process
The SNMP firewall ruleset is a dynamic state handled at runtime. When a host profile is applied, the configuration of the ruleset is managed simultaneously by Host Profiles and SNMP, which can modify the firewall settings unexpectedly.
This issue is resolved in this release.
- Import Host Profile task fails with a reference host error
The NoAccess or NoCryptoAdmin roles might be modified during export of a host profile in a 7.0.x vCenter Server system, and the import of such a host profile might fail with a reference host error. In the vSphere Client, you see a message such as There is no suitable host in the inventory as reference host for the profile Host Profile.
This issue is resolved in this release. However, for versions earlier than vCenter Server 7.0 Update 3, you must edit the host profile XML file and remove the privileges in the NoAccess or NoCryptoAdmin roles before an import operation.
- A CNS query with the compliance status filter set might take an unusually long time to complete
The CNS QueryVolume API enables you to obtain information about CNS volumes, such as volume health and compliance status. When you check the compliance status of individual volumes, the results are obtained quickly. However, when you invoke the CNS QueryVolume API to check the compliance status of multiple volumes, several tens or hundreds, the query might perform slowly.
This issue is resolved in this release.
- All I/O filter storage providers are offline after upgrade to vCenter Server 7.0 Update 2
After patching or upgrading your system to vCenter Server 7.0 Update 2, all I/O filter storage providers might display with status Offline or Disconnected in the vSphere Client. vCenter Server 7.0 Update 2 supports the Federal Information Processing Standards (FIPS), and certain environments might face the issue due to certificates signed with the SHA-1 hashing algorithm, which is not FIPS-compliant.
This issue is resolved in this release.
- You do not see progress on vSphere Lifecycle Manager and vSphere with VMware Tanzu tasks in the vSphere Client
In a mixed-version vCenter Server 7.0 system, such as a transitional environment with vCenter Server 7.0 Update 1 and Update 2 and Enhanced Linked Mode enabled, tasks such as image, host, or hardware compliance checks that you trigger from the vSphere Client might show no progress, while the tasks actually run.
This issue is resolved in this release.
- If DRS Awareness of vSAN Stretched Cluster is enabled on a stretched cluster managing ESXi hosts of versions earlier than 7.0 Update 2, vSphere DRS might suggest wrong virtual machine placement
Prior to vSphere 7.0 Update 2, vSphere DRS has no awareness of read locality for vSAN stretched clusters, and the DRS Awareness of vSAN Stretched Cluster feature requires all hosts in a vCenter Server system to be of version ESXi 7.0 Update 2 to work as expected. If you manage ESXi hosts of versions earlier than 7.0 Update 2 in a vCenter Server 7.0 Update 2 system, some read locality stats might be read incorrectly and result in improper placements.
This issue is resolved in this release. The fix ensures that if ESXi hosts of versions earlier than 7.0 Update 2 are detected in a vSAN stretched cluster, read locality stats are ignored and vSphere DRS uses the default load-balancing algorithm for initial placement and load balancing of workloads.
- You see vCenter Server High Availability health degradation alarms reporting an rsync failure
If you use both vSphere Auto Deploy and vCenter Server High Availability in your environment, rsync might not sync some short-lived temporary files created by Auto Deploy quickly enough. As a result, in the vSphere Client you might see vCenter Server High Availability health degradation alarms. In the /var/log/vmware/vcha file, you see errors such as rsync failure for /etc/vmware-rbd/ssl. The issue does not affect the normal operation of any service.
This issue is resolved in this release. vSphere Auto Deploy now creates the temporary files outside the vCenter Server High Availability replication folders.
- Deployment of virtual machines fails with an error Could not power on virtual machine: No space left on device
In rare cases, vSphere Storage DRS might over-recommend some datastores, overloading them and unbalancing datastore clusters. In extreme cases, power-on of virtual machines might fail due to swap file creation failure. In the vSphere Client, you see an error such as Could not power on virtual machine: No space left on device. You can find a backtrace of the error in the /var/log/vmware/vpxd/drmdump directory.
This issue is resolved in this release.
- Boot sequence for ESXi hosts that are provisioned with Auto Deploy stops at /vmw/rbd/host-register
ESXi hosts that are provisioned with Auto Deploy might fail to boot after you update your vCenter Server system to 7.0 Update 2 and later. In the logs, you see a message such as:
FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/rbd/cache/f2/0154d902a1ebb121bac89040df90d1/README.b0f08dea872690a93c4b5bc5e14148d1'
This issue is resolved in this release.
- NEW If the NT LAN Manager (NTLM) is disabled on Active Directory, configuration of the vSphere Authentication Proxy service might fail
You cannot configure the vSphere Authentication Proxy service on an Active Directory when NTLM is disabled, because by default the vSphere Authentication Proxy uses NTLMv1 for initial communication.
This issue is resolved in this release. The fix changes the default protocol for the initial communication of the vSphere Authentication Proxy to NTLMv2.
- NEW Configuration of the vSphere Authentication Proxy service might fail when NTLMv2 response is explicitly enabled on vCenter Server
Configuration of the vSphere Authentication Proxy service might fail when NTLMv2 response is explicitly enabled on vCenter Server, with the generation of a core.lsassd file under the /storage/core directory.
vSphere Cluster Services Issues
You see compatibility issues in new vCLS VMs deployed in a vSphere 7.0 Update 3 environment
The default name for new vCLS VMs deployed in a vSphere 7.0 Update 3 environment uses the pattern vCLS-UUID. vCLS VMs created in earlier vCenter Server versions continue to use the pattern vCLS (n). Since the use of parentheses () is not supported by many solutions that interoperate with vSphere, you might see compatibility issues.
Workaround: Reconfigure vCLS by using retreat mode after updating to vSphere 7.0 Update 3.
You see errors in the vSphere Client when the HTTP Reverse Proxy (rhttpproxy) service is set on ports other than 80 and 443
If you configure vCenter Enhanced Linked Mode and change the rhttpproxy settings from the default ports (80 for HTTP and 443 for HTTPS), you might see an error such as You have no privileges to view object when you first log in to the vSphere Client.
Backup and Restore Issues
When monitoring task status in a vSphere with Tanzu environment, you see an error that a specified parameter is not correct
In the vSphere Client, when you navigate to Monitor > Tasks, you see an error such as vslm.vcenter.VStorageObjectManager.deleteVStorageObjectEx.label – A specified parameter was not correct: in the Status field. The issue occurs in vSphere with Tanzu environments when you deploy a backup solution that uses snapshots. If the snapshots are not cleaned up, some operations in Tanzu Kubernetes clusters might not complete and cause the error.
Workaround: Delete snapshots from the backup solution endpoint by using vendor instructions and retry the Tanzu Kubernetes cluster operation.
You cannot delete services from supervisor clusters in your vSphere environment
In rare cases, you might not be able to delete services such as NGINX and MinIO from supervisor clusters in your vSphere environment from the vSphere Client. After you deactivate the services, the Delete modal stays in a processing state indefinitely.
Workaround: Close and reopen the Delete modal.
You cannot enable or reconfigure a vSphere Trust Authority cluster on a vCenter Server system of version 7.0 Update 3 with ESXi hosts of earlier versions
If you try to enable or reconfigure a vSphere Trust Authority cluster on a vCenter Server system of version 7.0 Update 3 with ESXi hosts of earlier versions, encryption of virtual machines on such hosts fails.
Workaround: Keep your existing Trusted Cluster configuration unchanged until you upgrade your ESXi hosts to version 7.0 Update 3.
vSphere Lifecycle Manager Issues
You cannot upload an NSX depot to a vSphere Lifecycle Manager depot when vCenter Server services are deployed on a custom port
If you create a vSphere Lifecycle Manager cluster and configure NSX-T Data Center on that cluster by using the NSX Manager user interface, the configuration might fail as the upload of an NSX depot to the vSphere Lifecycle Manager depot fails. In the NSX Manager user interface, you see an error such as 26195: Setting NSX depot(s) on Compute Manager: 253b644a-4ea5-4025-9c47-6cd00af1d75f failed with error: Unable to connect ComputeManager. Retry Transport Node Collection at cluster. The issue occurs when you use a custom port to configure the vCenter Server that is associated with the NSX-T Data Center as a compute manager in the NSX Manager.
Installation, Upgrade, and Migration Issues
After upgrade to vCenter Server 7.0 Update 3, some plug-ins might fail due to incompatibility with Spring 5
After you upgrade your system to vCenter Server 7.0 Update 3, the vSphere Client is upgraded to use Spring Framework version 5, because Spring 4 reached end of life on December 31, 2020. However, some plug-ins that use Spring 4 APIs might fail due to incompatibility with Spring 5, for example plug-ins for VMware NSX Data Center for vSphere of version 6.4.10 or earlier. You see an error such as HTTP Status 500 – Internal Server Error.
Workaround: Update the plug-ins to use Spring 5. Alternatively, downgrade the vSphere Client to use Spring 4 by uncommenting the line //-DuseOldSpring=true in the /etc/vmware/vmware-vmon/svcCfgfiles/vsphere-ui.json file and restarting the vSphere Client. For more information, see VMware knowledge base article 85632.
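The uncommenting step in the workaround above amounts to removing the leading "//" from the JVM flag. A minimal sketch of that edit, operating on a sample string rather than the real vsphere-ui.json (the sample line's surrounding JSON formatting is an assumption; follow KB 85632 on a live system, then restart the vSphere Client service):

```python
# Sketch of the edit described in the workaround: stripping the "//" prefix
# from the -DuseOldSpring=true JVM option. This works on a sample string;
# on a real vCenter the target file is
# /etc/vmware/vmware-vmon/svcCfgfiles/vsphere-ui.json, and the vsphere-ui
# service must be restarted afterwards (see KB 85632).
sample_line = '"//-DuseOldSpring=true",'  # hypothetical line from the file
patched = sample_line.replace("//-DuseOldSpring=true", "-DuseOldSpring=true")
print(patched)  # prints "-DuseOldSpring=true",
```

Note that this only reverts the vSphere Client to Spring 4 behavior; updating the affected plug-ins to Spring 5 remains the long-term fix.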
vSphere Pod Service might fail after a vCenter Server upgrade while waiting for a vCenter Server reboot
If vSphere Pod Service fails for some reason during stage 1 of a vCenter Server upgrade while waiting for a vCenter Server reboot, the service does not complete the upgrade.
Workaround: Continue or retry the upgrade operation after vSphere Pod Service recovers.
vCenter Server and vSphere Client Issues
Skyline Health page displays garbage characters
In the vSphere Client, when you navigate to vCenter Server or select an ESXi host in the vSphere Client navigator and click Monitor > Skyline Health, the page displays garbage characters in the following locales: Korean, Japanese, German and French.
Workaround: Switch to English locale.
If vCenter Server services are deployed on custom ports, remediation of ESXi hosts in a vSphere Lifecycle Manager cluster with vSAN enabled fails
If vCenter Server services are deployed on custom ports in an environment with vSAN, vSphere DRS, and vSphere HA enabled, remediation of vSphere Lifecycle Manager clusters might fail due to a vSAN resource check task error. The vSAN health check also prevents ESXi hosts from entering maintenance mode, which causes remediation tasks to fail.
Workaround: For more information, see VMware knowledge base article 85890.
You see Certificate Status alarm in the vSphere Client for expiring certificates in the vSphere Certificate Manager Utility backup store
The VMware Certificate Manager uses the vSphere Certificate Manager Utility backup store (BACKUP_STORE) to support certificate revert, keeping only the most recent state. However, the vpxd service throws a Certificate Status error when monitoring the BACKUP_STORE if it contains any expired certificates, even though expired certificates in the backup store are expected.
Workaround: Delete the certificate entries in BACKUP_STORE by using the following vecs-cli commands:
Get expired certificate alias in BACKUP_STORE:
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store BACKUP_STORE --text
Delete certificate in BACKUP_STORE:
/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store BACKUP_STORE --alias <alias>
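To decide which aliases to delete, the text output of the entry list command can be filtered for expired certificates. A minimal sketch, assuming the listing contains "Alias :" and "Not After :" lines with OpenSSL-style dates (the exact output layout varies and should be verified against your vCenter version):

```python
# Hedged sketch: extract aliases of expired certificates from the text output
# of "vecs-cli entry list --store BACKUP_STORE --text". The "Alias :" /
# "Not After :" line layout is an assumption about the output format; check
# it against your actual output before relying on the parsing.
import re
from datetime import datetime, timezone

def expired_aliases(listing: str, now: datetime) -> list:
    """Return aliases whose 'Not After' timestamp is earlier than now."""
    aliases, current = [], None
    for line in listing.splitlines():
        m = re.match(r"\s*Alias\s*:\s*(\S+)", line)
        if m:
            current = m.group(1)
            continue
        m = re.match(r"\s*Not After\s*:\s*(.+)", line)
        if m and current:
            # OpenSSL-style date, e.g. "Oct  2 08:00:00 2021 GMT"
            expiry = datetime.strptime(m.group(1).strip(), "%b %d %H:%M:%S %Y %Z")
            if expiry.replace(tzinfo=timezone.utc) < now:
                aliases.append(current)
    return aliases
```

Each alias the helper returns can then be removed with the vecs-cli entry delete command shown above.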
In the vSphere Client dark theme, in the OVF deployment wizard, you cannot see the Virtual machine name field
If you use the vSphere Client dark theme, in the OVF deployment wizard, after you provide a virtual machine name and open the tree view to select a location, the Virtual machine name field turns into solid white color and hides your input.
Workaround: Click on the white space that hides your input in the Virtual machine name field to restore the correct view.
If the deployment location is an NSX Distributed Virtual port group, deployments by using an OVF file or template might fail
If the following two conditions exist in your environment, deployments by using an OVF file or template might fail:
• The deployment location is an NSX Distributed Virtual port group
• The deployment location is a vSphere cluster with a mixed transport node of a vSphere Distributed Switch (VDS) and NSX Virtual Distributed Switch (N-VDS), and the N-VDS has the same logical switch as the OVF deployment location.
Workaround: Select the OVF deployment location to be on an opaque network, not on an NSX Distributed Virtual port group, or retry the deployment. In a mixed transport node, the target is randomly selected, and a retry of the deployment succeeds when the location is on the VDS.
Vendor notes: VMware vCenter Server 7.0 Update 3