Acropolis Advanced
Setup Guide
December 16, 2020
Contents
Copyright
  License
  Conventions
  Version
1. MANUAL CLUSTER CONFIGURATION METHODS
Verifying IPv6 Link-Local Connectivity
About this task
The automated IP address and cluster configuration utilities depend on IPv6 link-local
addresses, which are enabled on most networks. Use this procedure to verify that IPv6 link-local
is enabled.
Procedure
» Linux
$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0c:29:dd:e3:0b
inet addr:10.2.100.180 Bcast:10.2.103.255 Mask:255.255.252.0
inet6 addr: fe80::20c:29ff:fedd:e30b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2895385616 errors:0 dropped:0 overruns:0 frame:0
TX packets:3063794864 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2569454555254 (2.5 TB) TX bytes:2795005996728 (2.7 TB)
» Mac OS
$ ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 70:56:81:ae:a7:47
inet6 fe80::7256:81ff:feae:a747 en0 prefixlen 64 scopeid 0x4
inet 172.16.21.208 netmask 0xfff00000 broadcast 172.31.255.255
media: autoselect
Note the IPv6 link-local addresses, which always begin with fe80. Omit the / character and
anything following.
» Windows
> ping -6 ipv6_linklocal_addr%interface
» Linux/Mac OS
$ ping6 ipv6_linklocal_addr%interface
• Replace ipv6_linklocal_addr with the IPv6 link-local address of the other laptop.
• Replace interface with the interface identifier on the local laptop from which you are
pinging (for example, 12 for Windows, eth0 for Linux, or en0 for Mac OS).
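For example, to ping the Linux laptop (fe80::20c:29ff:fedd:e30b in the sample output
above) from the Mac OS laptop over its local en0 interface:
$ ping6 fe80::20c:29ff:fedd:e30b%en0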
If the ping packets are answered by the remote host, IPv6 link-local is enabled on the subnet.
If the ping packets are not answered, ensure that firewalls are disabled on both laptops and
try again before concluding that IPv6 link-local is not enabled.
5. Reenable the firewalls on the laptops and disconnect them from the network.
Results
• If IPv6 link-local is enabled on the subnet, you can use the automated IP address and
cluster configuration utilities.
• If IPv6 link-local is not enabled on the subnet, you have to manually set IP addresses and
create the cluster.
Note: An IPv6 connectivity issue might occur because of a VLAN tagging mismatch. ESXi as
shipped from the factory has no VLAN tagging configured (VLAN tag 0), while the workstation
(laptop) might be connected to an access port that uses a different VLAN tag. Ensure that the
ESXi port is in trunking mode.
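If you need to inspect or change the VLAN setting on the ESXi side, you can do so from the
ESXi shell. The following is a minimal sketch that assumes a standard vSwitch and the default
Management Network port group name; VLAN ID 4095 configures the port group to pass all VLANs
(trunking):
root@esx# esxcli network vswitch standard portgroup list
root@esx# esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=4095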
Procedure
• A cluster must have at least five nodes, blocks, or racks for redundancy factor 3 to be
enabled.
• For guest VMs to tolerate the simultaneous failure of two nodes or drives in different
blocks, the data must be stored on storage containers with replication factor 3.
For example, if the new cluster should comprise all four nodes in a block, include the IP
addresses of all four Controller VMs.
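As an illustration, the create command for such a four-node cluster might look like the
following (hypothetical CVM IP addresses; the -s flag supplies the Controller VM IP list):
nutanix@cvm$ cluster -s 10.1.64.60,10.1.64.61,10.1.64.62,10.1.64.63 create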
If the cluster starts properly, output similar to the following is displayed for each node in the
cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
Replace cluster_name with a name for the cluster chosen by the customer.
b. Configure the DNS servers.
nutanix@cvm$ ncli cluster add-to-name-servers servers="dns_server"
Replace dns_server with the IP address of a single DNS server or with a comma-separated
list of DNS server IP addresses.
c. Configure the NTP servers.
nutanix@cvm$ ncli cluster add-to-ntp-servers servers="ntp_server"
Replace ntp_server with the IP address or host name of a single NTP server or with a
comma-separated list of NTP server IP addresses or host names.
d. Configure an external IP address for the cluster.
nutanix@cvm$ ncli cluster set-external-ip-address \
external-ip-address="cluster_ip_address"
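Taken together, a completed configuration pass using the commands above might look like this
(all values hypothetical):
nutanix@cvm$ ncli cluster add-to-name-servers servers="10.1.64.2,10.1.64.3"
nutanix@cvm$ ncli cluster add-to-ntp-servers servers="0.pool.ntp.org,1.pool.ntp.org"
nutanix@cvm$ ncli cluster set-external-ip-address external-ip-address="10.1.64.99"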
Procedure
1. Log on to any Controller VM on the same subnet as the Controller VMs that you want to
include in the new cluster.
This can be a Controller VM that is already part of a cluster. Connect to the IPv6 address
because an IPv4 connection is lost during configuration.
2. Create a JSON file that defines the networking configuration for the new cluster.
{
"Subnet Mask": {
"Controller": "Subnet mask",
"Hypervisor": "Subnet mask",
"IPMI": "Subnet mask"
},
"Default Gateway": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"IP Addresses": {
"block_serial_number/A": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/B": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/C": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
},
"block_serial_number/D": {
"Controller": "IPv4 address",
"Hypervisor": "IPv4 address",
"IPMI": "IPv4 address"
}
}
}
The IP Addresses block requires one entry for each node that you want to include in the
cluster (minimum 3 nodes). Each node is identified by the block serial number and the node
position (A, B, C, or D).
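As an illustration, a filled-in file for three nodes in one block might look like the
following (the block serial number and all addresses are hypothetical):
{
  "Subnet Mask": {
    "Controller": "255.255.252.0",
    "Hypervisor": "255.255.252.0",
    "IPMI": "255.255.252.0"
  },
  "Default Gateway": {
    "Controller": "10.1.64.1",
    "Hypervisor": "10.1.64.1",
    "IPMI": "10.1.64.1"
  },
  "IP Addresses": {
    "16SM12345678/A": {
      "Controller": "10.1.64.60",
      "Hypervisor": "10.1.64.70",
      "IPMI": "10.1.64.80"
    },
    "16SM12345678/B": {
      "Controller": "10.1.64.61",
      "Hypervisor": "10.1.64.71",
      "IPMI": "10.1.64.81"
    },
    "16SM12345678/C": {
      "Controller": "10.1.64.62",
      "Hypervisor": "10.1.64.72",
      "IPMI": "10.1.64.82"
    }
  }
}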
Replace cluster_config_json_file with the name of the JSON file that defines the networking
configuration for the new cluster.
If you want to configure redundancy factor 3, add the parameter --redundancy_factor=3
before create.
Redundancy factor 3 has the following requirements:
• A cluster must have at least five nodes, blocks, or racks for redundancy factor 3 to be
enabled.
• For guest VMs to tolerate the simultaneous failure of two nodes or drives in different
blocks, the data must be stored on storage containers with replication factor 3.
If the cluster can be created, a success message for each Controller VM is displayed and the
cluster starts. If the cluster cannot be created, ensure the JSON file is correct and attempt
the creation again.
If the cluster is running properly, output similar to the following is displayed for each node in
the cluster:
CVM: 10.1.64.60 Up
Zeus UP [5362, 5391, 5392, 10848, 10977, 10992]
Scavenger UP [6174, 6215, 6216, 6217]
SSLTerminator UP [7705, 7742, 7743, 7744]
SecureFileSync UP [7710, 7761, 7762, 7763]
Medusa UP [8029, 8073, 8074, 8176, 8221]
DynamicRingChanger UP [8324, 8366, 8367, 8426]
Pithos UP [8328, 8399, 8400, 8418]
Hera UP [8347, 8408, 8409, 8410]
Stargate UP [8742, 8771, 8772, 9037, 9045]
InsightsDB UP [8774, 8805, 8806, 8939]
InsightsDataTransfer UP [8785, 8840, 8841, 8886, 8888, 8889, 8890]
Ergon UP [8814, 8862, 8863, 8864]
Cerebro UP [8850, 8914, 8915, 9288]
Chronos UP [8870, 8975, 8976, 9031]
Curator UP [8885, 8931, 8932, 9243]
Prism UP [3545, 3572, 3573, 3627, 4004, 4076]
CIM UP [8990, 9042, 9043, 9084]
AlertManager UP [9017, 9081, 9082, 9324]
Arithmos UP [9055, 9217, 9218, 9353]
Catalog UP [9110, 9178, 9179, 9180]
Acropolis UP [9201, 9321, 9322, 9323]
Atlas UP [9221, 9316, 9317, 9318]
Uhura UP [9390, 9447, 9448, 9449]
Snmp UP [9418, 9513, 9514, 9516]
SysStatCollector UP [9451, 9510, 9511, 9518]
Tunnel UP [9480, 9543, 9544]
ClusterHealth UP [9521, 9619, 9620, 9947, 9976, 9977, 10301]
Janus UP [9532, 9624, 9625]
NutanixGuestTools UP [9572, 9650, 9651, 9674]
MinervaCVM UP [10174, 10200, 10201, 10202, 10371]
ClusterConfig UP [10205, 10233, 10234, 10236]
APLOSEngine UP [10231, 10261, 10262, 10263]
Replace cluster_name with a name for the cluster chosen by the customer.
b. Configure the DNS servers.
nutanix@cvm$ ncli cluster add-to-name-servers servers="dns_server"
Replace dns_server with the IP address of a single DNS server or with a comma-separated
list of DNS server IP addresses.
c. Configure the NTP servers.
nutanix@cvm$ ncli cluster add-to-ntp-servers servers="ntp_server"
Replace ntp_server with the IP address or host name of a single NTP server or with a
comma-separated list of NTP server IP addresses or host names.
d. Configure an external IP address for the cluster.
nutanix@cvm$ ncli cluster set-external-ip-address \
external-ip-address="cluster_ip_address"
Procedure
2. Restart the node and press Delete to enter the BIOS setup utility.
There is a limited amount of time to enter BIOS before the host completes the restart
process.
4. Press the down arrow key until BMC network configuration is highlighted and then press
Enter.
5. Press the down arrow key until Update IPMI LAN Configuration is highlighted and press
Enter to select Yes.
9. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS
setup utility.
The node restarts.
Note: If you are reconfiguring the IPMI address on a node because you have replaced the
motherboard, restart Genesis on the Controller VM only for that node.
Procedure
1. Log on to the hypervisor host with SSH (vSphere or AHV) or remote desktop connection
(Hyper-V).
» vSphere
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
» Hyper-V
> ipmiutil lan -e -I mgmt_interface_ip_addr -G mgmt_interface_gateway
-S mgmt_interface_subnet_addr -U ADMIN -P ADMIN
» AHV
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
• Replace mgmt_interface_ip_addr with the new IP address for the remote console.
• Replace mgmt_interface_gateway with the gateway IP address.
• Replace mgmt_interface_subnet_addr with the subnet mask for the new IP address.
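For example, on an AHV node with hypothetical values substituted for the placeholders:
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr 10.1.64.80
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 netmask 255.255.252.0
root@ahv# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr 10.1.64.1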
» vSphere
root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1
» Hyper-V
> ipmiutil lan -r -U ADMIN -P ADMIN
» AHV
root@ahv# ipmitool -v -U ADMIN -P ADMIN lan print 1
Note: If you are reconfiguring the IPMI address on a node because you have replaced the
motherboard, restart Genesis on the Controller VM only for that node.
You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to
the node.
Procedure
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then
press Enter.
Note: Do not add any other device, including guest VMs, to the VLAN to which the
Controller VM and hypervisor host are assigned. Isolate guest VMs on one or more separate
VLANs.
7. If necessary, highlight the Set static IP address and network configuration option and
press Space to update the setting.
8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option
and press Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
Procedure
1. Log on to the Hyper-V host with Remote Desktop Connection and start PowerShell.
Name : Ethernet
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
LinkSpeed : 10 Gbps
Name : Ethernet 3
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
LinkSpeed : 10 Gbps
Name : Ethernet 4
InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
LinkSpeed : 0 bps
Name : Ethernet 2
InterfaceDescription : Intel(R) I350 Gigabit Network Connection
LinkSpeed : 1 Gbps
Make a note of the Name of the 1 GbE interfaces you want to enable.
If you want to configure the interface as a standby for the 10 GbE interfaces, include the
parameter -AdministrativeMode Standby, as shown in the sketch below.
Perform these steps once for each 1 GbE interface you want to enable.
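The enabling cmdlet itself is not preserved in this extract. On Windows Server, the standard
cmdlet for adding an adapter to an existing team in standby mode is Add-NetLbfoTeamMember;
the following sketch assumes the NetAdapterTeam team name that appears later in this guide
and the Ethernet 2 adapter from the sample output:
> Add-NetLbfoTeamMember -Name "Ethernet 2" -Team "NetAdapterTeam" -AdministrativeMode Standby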
Note: Do not add any other device, including guest VMs, to the VLAN to which the Controller VM
and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
Procedure
1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
Name : Ethernet
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection
LinkSpeed : 10 Gbps
Name : Ethernet 3
InterfaceDescription : Intel(R) 82599 10 Gigabit Dual Port Network Connection #2
Name : NetAdapterTeam
InterfaceDescription : Microsoft Network Adapter Multiplexor Driver
LinkSpeed : 20 Gbps
Name : Ethernet 4
InterfaceDescription : Intel(R) I350 Gigabit Network Connection #2
LinkSpeed : 0 bps
Name : Ethernet 2
InterfaceDescription : Intel(R) I350 Gigabit Network Connection
LinkSpeed : 0 bps
Make a note of the InterfaceDescription for the vEthernet adapter that links to the physical
interface you want to modify.
a. Select a network adapter by typing the Index number of the adapter you want to change
(refer to the InterfaceDescription you found in step 2) and pressing Enter.
Warning: Do not select the network adapter with the IP address 192.168.5.1. This IP
address is required for the Controller VM to communicate with the host.
b. Enter the primary and secondary DNS servers and press Enter.
The DNS servers are updated.
7. Exit the Server Configuration utility by typing 4 and pressing Enter, and then typing 15
and pressing Enter.
1. Log on to the Hyper-V host with the IPMI remote console and start a PowerShell prompt.
• Replace domain_name with the name of the domain for the host to join.
• Replace node_name with a new name for the host.
• Replace domain_admin_user with the domain administrator username.
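The join command itself is not preserved in this extract. A common way to perform this step
in PowerShell is the built-in Add-Computer cmdlet, sketched here with the placeholders
described above:
> Add-Computer -DomainName domain_name -NewName node_name -Credential domain_name\domain_admin_user -Restart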
The host restarts and joins the domain.
Procedure
1. Edit the settings of port br0, which is the internal port on the default bridge br0.
b. Update the values of the nameserver parameter and then save and close the file.
c. Restart the networking service.
root@ahv# service network restart
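The file paths are not preserved in this extract. On AHV, which is CentOS-based, the br0
interface settings and the nameserver entries typically live in the following files (an
assumption, not the guide's exact list):
root@ahv# vi /etc/sysconfig/network-scripts/ifcfg-br0
root@ahv# vi /etc/resolv.conf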
3. Assign the AHV host to a VLAN. For information about how to add the AHV host to a VLAN,
see Assigning an Acropolis Host to a VLAN in the Acropolis Hypervisor Administration
Guide.
Note: Do not add any other device, including guest VMs, to the VLAN to which the Controller
VM and hypervisor host are assigned. Isolate guest VMs on one or more separate VLANs.
Procedure
Output similar to the following indicates that interfaces eth2 and eth3 are bonded and that
the bonded port is assigned to the br0 switch. The output also indicates that the interfaces
eth0 and eth1 are not on bridge br0.
Uplink ports: bond0
Uplink ifaces: eth2 eth3
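The command that produces this output is not preserved here. On a Controller VM, uplink
configuration is typically inspected with the manage_ovs utility; a sketch assuming the
default bridge br0:
nutanix@cvm$ manage_ovs --bridge_name br0 show_uplinks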
Procedure
• Enter the desired netmask value in the NETMASK field (replace xxx.xxx.xxx.xxx with the
appropriate value).
• Enter the appropriate static IP address (assigned by your IT department) for the
Controller VM in the IPADDR field.
• Enter none as the value in the BOOTPROTO field. (If you employ DHCP, you must change the
value from dhcp to none. Only a static address is allowed; DHCP is not supported.)
• Enter the IP address for your gateway in the GATEWAY field.
Note: Carefully check the file to ensure that there are no syntax errors, whitespace at the
end of lines, or blank lines in the file.
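Putting these fields together, the relevant portion of the file might read as follows
(hypothetical values):
BOOTPROTO="none"
IPADDR="10.2.100.50"
NETMASK="255.255.252.0"
GATEWAY="10.2.100.1"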
• Before you decide to change the CVM, hypervisor host, and IPMI IP addresses, consider the
possibility of incorporating the existing IP address schema into the new infrastructure by
reconfiguring your routers and switches instead of Nutanix nodes and CVMs. If that is not
possible and you must change the IP addresses of CVMs and hypervisor hosts, proceed with
the procedure described in this document.
Note the following if you are using the network segmentation feature.
• The network segmentation feature enables the backplane network for CVMs in your
cluster (eth2 interface). The backplane network is always a non-routable subnet or
VLAN that is distinct from the one used by the external interfaces (eth0) of your
CVMs and by the management network on your hypervisor. Typically, you do not need to
change the IP addresses of the backplane interface (eth2) when you update the CVM or
host IP addresses.
• If you have enabled network segmentation on your cluster, verify that the VLAN and
subnet in use by the backplane network will still be valid after you move to the new
IP scheme. If not, change the subnet or VLAN. See the Prism Web Console Guide for your
version of AOS for instructions on disabling the network segmentation feature (see the
Disabling Network Segmentation topic) before you change the CVM and host IP addresses.
After you have updated the CVM and host IP addresses by following the steps outlined
later in this document, you can re-enable network segmentation. Follow the instructions
in the Prism Web Console Guide, which describes how to designate the new VLAN or subnet
for the backplane network.
• If you have configured remote sites for data protection, either wait until any ongoing
replications are complete or stop them. After you successfully reconfigure the IP addresses,
update the reconfigured IP addresses at the remote sites before you resume the replications.
• Nutanix recommends that you prepare a spreadsheet that includes the existing and new
CVM, hypervisor host, and IPMI IP addresses, subnet masks, default gateway, and cluster
virtual IP addresses and VLANs (download the IP Address Change Worksheet Template).
• You can change the virtual IP address of the cluster either before or after you change the
CVM IP address. The virtual IP address of the cluster is required to configure certain data
protection features. Do the following to change the virtual IP address of the cluster.
CAUTION: All the features that use the cluster virtual IP address will be impacted if you
change that address. See the "Virtual IP Address Impact" section in the Prism Web Console
Guide for more information.
Replace insert_new_external_ip_address with the new virtual IP address for the cluster.
Replace prism_admin_user_password with the password of the Prism admin account.
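The command itself is not preserved in this extract; it likely resembles the
set-external-ip-address command shown earlier in this guide, sketched here with ncli
credentials supplied on the command line:
nutanix@cvm$ ncli -u admin -p 'prism_admin_user_password' cluster set-external-ip-address \
external-ip-address="insert_new_external_ip_address"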
• Ensure that the cluster NTP and DNS servers are reachable from the new Controller VM IP
addresses. If you are using different NTP and DNS servers, remove the existing NTP and DNS
servers from the cluster configuration and add the new ones. In the web console, in the
gear icon pull-down list, click Name Servers to update the DNS servers or NTP Servers to
update the NTP servers.
• Log on to a Controller VM in the cluster and check that all hosts are part of the metadata
store.
nutanix@cvm$ ncli host ls | grep "Metadata store status"
For every host in the cluster, Metadata store enabled on the node is displayed.
Warning: If Node marked to be removed from metadata store is displayed, do not proceed
with the IP address reconfiguration, and contact Nutanix Support to resolve the issue.
Warning: If you are using distributed switches in your ESXi clusters, migrate the distributed
switches to standard switches before you perform any IP address reconfiguration procedures
that involve changing the management VMkernel port of the ESXi host to a different distributed
port group that has a different VLAN.
CAUTION:
Do not use the external IP address reconfiguration script (external_ip_reconfig) if you
are using the network segmentation feature on your cluster and you want to change
the IP addresses of the backplane (eth2) interface. See the Reconfiguring the Backplane
Network topic in the Prism Web Console Guide for instructions about how to change
the IP addresses of the backplane (eth2) interface.
The following is a summary of the steps that you must perform to change the IP addresses on a
Nutanix cluster.
Note: Before you perform step 4, check the connectivity between CVMs and hosts; that is, all
hosts must be reachable from all CVMs and vice versa. If any CVM or host is not reachable,
contact Nutanix Support for assistance.
Warning: If you are changing the Controller VM IP addresses to another subnet, network, IP
address range, or VLAN, you must also change the hypervisor management IP addresses to the
same subnet, network, IP address range, or VLAN.
See the Changing the IP Address of an Acropolis Host topic in the AHV Administration Guide for
instructions about how to change the IP address of an AHV host.
See the Changing a Host IP Address topic in the vSphere Administration Guide for Acropolis for
instructions about how to change the IP address of an ESXi host.
See the Changing a Host IP Address topic in the Hyper-V Administration for Acropolis guide for
instructions about how to change the IP address of a Hyper-V host.
Procedure
1. Log on to the hypervisor with SSH (vSphere or AHV), remote desktop connection (Hyper-V),
or the IPMI remote console.
If you are unable to reach the IPMI IP addresses, reconfigure them by using the BIOS or the
hypervisor command line.
For using BIOS, see the Configuring the Remote Console IP Address (BIOS) topic in the
Acropolis Advanced Setup Guide.
For using the hypervisor command line, see the Configuring the Remote Console IP Address
(Command Line) topic in the Acropolis Advanced Setup Guide.
Warning: This step affects the operation of a Nutanix cluster. Schedule downtime before
performing this step.
If you are using VLAN tags on your CVMs and on the management network for your
hypervisors and you want to change the VLAN tags, make these changes after the cluster is
stopped.
For information about assigning VLANs to hosts and the Controller VM, see the indicated
documentation:
• AHV: See the Assigning an Acropolis Host to a VLAN and Assigning the Controller VM to a
VLAN topics in the AHV Administration Guide.
• ESXi: For instructions about tagging a VLAN on an ESXi host by using DCUI, see the
Configuring Host Networking (ESXi) topic in the vSphere Administration Guide for
Acropolis (using vSphere HTML5 Client).
Note: If you are relocating the cluster to a new site, the external_ip_reconfig script works only
if all the CVMs are up and accessible with their old IP addresses. Otherwise, contact Nutanix
Support to manually change the IP addresses.
After you have stopped the cluster, shut down the CVMs and hosts and move the cluster.
Proceed with step 4 only after you start the cluster at the desired site and have
confirmed that all CVMs and hosts can SSH to one another. As a best practice, ensure that
the out-of-band management remote console (IPMI, iDRAC, or iLO) is accessible on each
node before you proceed further.
Verify that upstream networking is configured to support the changes to the IP address
schema.
For example, check the network load balancing or LACP configuration to verify that it
supports the seamless transition from one IP address schema to another.
4. Run the external IP address reconfiguration script (external_ip_reconfig) from any one
Controller VM in the cluster.
nutanix@cvm$ external_ip_reconfig
5. Follow the prompts to type the new netmask, gateway, and external IP addresses.
A message similar to the following is displayed after the reconfiguration is successfully
completed:
External IP reconfig finished successfully. Restart all the CVMs and start the cluster.
7. After you turn on every CVM, log on to each CVM and verify that the IP address has been
changed successfully. Note that it might take up to 10 minutes for the CVMs to show the new
IP addresses after they are turned on.
Note: If you see any of the old IP addresses in the output of the following commands, or if
the commands fail to run, stop and contact Nutanix Support for assistance.
c. From any one CVM in the cluster, verify that the following outputs show the new IP
address scheme and that the Zookeeper IDs are mapped correctly.
Note: Never edit the following files manually. Contact Nutanix Support for assistance.
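The command list itself is not preserved in this extract. Typical read-only spot checks of
this kind (an assumption, not the guide's exact list) are:
nutanix@cvm$ svmips
nutanix@cvm$ cat /etc/hosts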
If the cluster starts properly, output similar to the following is displayed for each node in the
cluster:
CVM: 10.1.64.60 Up
Zeus UP [3704, 3727, 3728, 3729, 3807, 3821]
Scavenger UP [4937, 4960, 4961, 4990]
SSLTerminator UP [5034, 5056, 5057, 5139]
Hyperint UP [5059, 5082, 5083, 5086, 5099, 5108]
Medusa UP [5534, 5559, 5560, 5563, 5752]
DynamicRingChanger UP [5852, 5874, 5875, 5954]
Pithos UP [5877, 5899, 5900, 5962]
Stargate UP [5902, 5927, 5928, 6103, 6108]
Cerebro UP [5930, 5952, 5953, 6106]
Chronos UP [5960, 6004, 6006, 6075]
Curator UP [5987, 6017, 6018, 6261]
Prism UP [6020, 6042, 6043, 6111, 6818]
AlertManager UP [6070, 6099, 6100, 6296]
Arithmos UP [6107, 6175, 6176, 6344]
SysStatCollector UP [6196, 6259, 6260, 6497]
Tunnel UP [6263, 6312, 6313]
What to do next
• Run the following NCC checks to verify the health of the Zeus configuration. If any of these
checks report a failure or you encounter issues, contact Nutanix Support.
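The specific check names are not preserved in this extract. As a broad substitute, the full
NCC health check suite, which includes the Zeus-related checks, can be run as follows:
nutanix@cvm$ ncc health_checks run_all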
• If you have configured remote sites for data protection, you must update the new IP
addresses on both the sites by using the Prism Element web console.
• Configure the network settings on the cluster such as DNS, DHCP, NTP, SMTP, and so on.
• Power on the guest VMs and configure the network settings in the new network domain.
• After you verify that the cluster services are up and that there are no alerts informing that
the services are restarting, you can change the IPMI IP addresses at this stage, if necessary.
For instructions about how to change the IPMI addresses, see the Configuring the Remote
Console IP Address (Command Line) topic in the Acropolis Advanced Setup Guide.
License
The provision of this software to you does not grant any licenses or other rights under any
Microsoft patents with respect to anything other than the file server implementation portion of
the binaries for this software, including no licenses or any other rights in any hardware or any
devices or software that are used to communicate with or in connection with this software.
Conventions
Convention            Description
root@host# command    The commands are executed as the root user in the vSphere or Acropolis host shell.
> command             The commands are executed in the Hyper-V host shell.
Version
Last modified: December 16, 2020 (2020-12-16T10:32:51+05:30)