This allows the data to persist as a container is power-cycled or moved.

The following shows the device interaction using user space drivers:

ncli cluster edit-params enable-shadow-clones=true

The following figure shows the environment after EC has run, with the storage savings:

The platform supports a wide range of user data input formats; I've identified a few of the key ones below:

A common question is what happens when a local node's SSD becomes full? As mentioned in the Disk Balancing section, a key concept is trying to keep uniform utilization of devices within disk tiers. In the case where a local node's SSD utilization is high, disk balancing will kick in to move the coldest data on the local SSDs to the other SSDs throughout the cluster. This frees up space on the local SSD so the local node can continue to write to SSD locally instead of going over the network. A key point to mention is that all CVMs and SSDs are used for this remote I/O to eliminate any potential bottlenecks and to soften the hit of performing I/O over the network (a rough sketch of this behavior follows below).

For connecting VPCs (in the same or different regions), you can use VPC peering, which allows you to tunnel between VPCs.

In a 50-node cluster, each CVM will handle 2% of the metadata scan and data rebuild (a quick illustration follows below).

At power-on, ADS will balance initial VM placement throughout the cluster.

Core includes the foundational Nutanix products facilitating the migration from complex 3-tier infrastructure to a simple HCI platform.

Once the nodes are powered up, they will be discoverable by the current cluster using mDNS.

Given this mechanism, client-side multipathing (MPIO) is no longer necessary for path HA.

While those are relatively low-touch solutions, there are certain features, such as protection domains, that can be tedious to set up for a large environment.
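As a rough illustration of the disk-balancing behavior described above, the sketch below moves the coldest extents off an over-utilized local SSD onto the least-utilized SSDs elsewhere in the cluster. The class, threshold, and field names are illustrative assumptions, not the actual Curator/Stargate implementation:

```python
# Illustrative sketch of SSD-tier disk balancing (not the actual DSF/Curator code).
# Assumption: when the local SSD crosses a utilization threshold, its coldest
# extents are moved to the least-utilized SSDs elsewhere in the cluster, freeing
# local space so new writes can keep landing on the local SSD.

from dataclasses import dataclass, field

@dataclass
class Ssd:
    node: str
    capacity_gb: float
    used_gb: float = 0.0
    # Each extent: (extent_id, last_access_ts, size_gb); older ts == colder data.
    extents: list = field(default_factory=list)

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def balance(local: Ssd, peers: list, target_util: float = 0.75) -> None:
    """Migrate the coldest extents off the local SSD until it drops below target_util."""
    local.extents.sort(key=lambda e: e[1])               # coldest first
    while local.utilization > target_util and local.extents:
        extent_id, last_access, size_gb = local.extents.pop(0)
        dest = min(peers, key=lambda s: s.utilization)   # least-utilized peer SSD
        dest.extents.append((extent_id, last_access, size_gb))
        dest.used_gb += size_gb
        local.used_gb -= size_gb
        print(f"moved {extent_id} ({size_gb} GB) {local.node} -> {dest.node}")
```

In practice the cost functions, thresholds, and scheduling are far more involved; the point is only that balancing frees local SSD space so writes can stay local.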
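The even fan-out of the metadata scan and rebuild work mentioned above is simple arithmetic: each CVM takes roughly a 1/N share, so the per-CVM load shrinks as the cluster grows. A quick illustration (the node counts are arbitrary examples):

```python
# Scan/rebuild work is split evenly across all CVMs, so each handles ~1/N of it.
for nodes in (4, 16, 50):
    share_pct = 100.0 / nodes
    print(f"{nodes:>2}-node cluster: each CVM handles ~{share_pct:.1f}% of the scan/rebuild")
```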

This allows for a single namespace where users can store home directories and files.

These categories can then be leveraged by policies to determine what rules / actions to apply (also leveraged outside of the Flow context).

vdisk_config_printer | grep '#'

Data is also consistently monitored to ensure integrity even when active I/O isn't occurring.

This eliminates any computation overhead on reads once the strips have been rebuilt (automated via Curator).

When reading old data (stored on the now remote node/CVM), the I/O will be forwarded by the local CVM to the remote CVM. All write I/Os will occur locally right away. DSF will detect that the I/Os are occurring from a different node and will migrate the data locally in the background, allowing all read I/Os to then be served locally. The data will only be migrated on a read, so as not to flood the network (a rough sketch of this behavior appears at the end of this section).

Categories are used to define groups of entities to which policies and enforcement are applied.

Any limits below this value would be due to limitations on the client side, such as the maximum vmdk size on ESXi.

The hosts running on bare metal in AWS are traditional AHV hosts, and thus leverage the same OVS-based network stack.

Typical main memory latency is ~100ns (it will vary), so we can perform the following calculations:

Local memory read latency = ~100ns + [OS / hypervisor overhead]
Network memory read latency = ~100ns + NW RTT latency + [2 x OS / hypervisor overhead]

Assuming a typical network RTT of ~0.5ms (~500,000ns), a remote memory read is orders of magnitude slower than a local one, which is why data locality matters.

You can perform a silent installation of the Nutanix Guest Tools by running the following command (from the CD-ROM location):

Clicking on the 'Execution id' will bring you to the job details page, which displays various job stats as well as generated tasks.

The following figure provides an example of what a typical node logically looks like:

This allows for the best of both worlds: the goodness of the OpenStack Portal and APIs without the complex OpenStack infrastructure and associated management.
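To make the read-triggered locality migration described above concrete, here is a minimal sketch of the behavior using a simple in-memory model; the class and method names are hypothetical and not the DSF implementation:

```python
# Illustrative sketch of read-triggered data locality (not the actual DSF code).
# Writes always land locally; reads of data still held by the remote CVM are
# forwarded, and only the extents actually read are migrated in the background.

class LocalCvm:
    def __init__(self, remote_cvm):
        self.local_store = {}          # extent_id -> data now held on this node
        self.remote_cvm = remote_cvm   # CVM that still holds the VM's old data
        self.migration_queue = []      # extents queued for background localization

    def write(self, extent_id, data):
        # All write I/Os occur locally right away.
        self.local_store[extent_id] = data

    def read(self, extent_id):
        if extent_id in self.local_store:
            return self.local_store[extent_id]
        # Old data: forward the I/O to the remote CVM ...
        data = self.remote_cvm.read(extent_id)
        # ... and queue that extent for background migration. Untouched data is
        # never migrated, so the network isn't flooded.
        self.migration_queue.append(extent_id)
        return data

    def background_migrate(self):
        # Background task: localize only the extents that were read remotely.
        while self.migration_queue:
            extent_id = self.migration_queue.pop()
            self.local_store[extent_id] = self.remote_cvm.read(extent_id)
```

The design point is that writes are never penalized and only data a VM actually reads gets pulled across the network, so cold data stays where it is.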