I read this document yesterday while installing Nutanix CE. It is quite detailed, so here is a rough translation covering mainly the installation and configuration process. The author, Michael Webster, is the father of the world's youngest holder of the Nutanix NPP certification (8 years old), and he wrote the article while building a practice lab for his son. The author himself is a well-known industry expert; just look at his list of certifications.
Condensed translation of the installation process:
1.Start with an ESXi 6.0 host that has enough storage.
2.Create a virtual switch port group for Nutanix CE.
3.Download the Nutanix CE image.
4.Unzip the image and rename it ce-flat.vmdk.
5.Upload ce-flat.vmdk to the ESXi 6.0 host.
6.Upload the ce.vmdk descriptor file to the ESXi host (its contents are listed below).
The ce.vmdk file contents are as follows (you can create ce.txt first and then change the file extension):
#Disk DescriptorFile
version=4
encoding="UTF-8"
CID=4470c879
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 14540800 VMFS "ce-2015.06.08-beta-flat.vmdk"
# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "ac2a7619c0a01d87476fe8124470c879"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10"
7.Create a new VM using the vSphere Web Client via vCenter (creating it from Workstation connected to the ESXi host also works; the key is to enable hardware-assisted CPU virtualization for the VM).
8.Set the VM compatibility to ESXi 6.0 and later.
9.Set the guest OS version to CentOS Linux 4/5/6/7 64-bit.
10.Assign the VM at least 4 vCPUs.
11.Configure the VM's CPU as in the screenshot in the original article below.
Configure at least 16GB of RAM (so the VM can run the Acropolis hypervisor).
12.Set the network adapter to E1000.
13.Change the SCSI controller to PVSCSI, and delete the 16GB disk listed by default.
14.Add the previously uploaded ce.vmdk image as an existing disk on SCSI 0:0.
15.Add a new 500GB hard disk, which will be emulated as an SSD, and attach it to SCSI 0:1.
16.Add another new 500GB hard disk to act as the HDD and attach it to SCSI 0:2 (Nutanix CE requires at least two disks: a 200GB+ SSD and a 500GB+ HDD).
17.In the VM options, flag the SCSI 0:1 disk as an SSD.
18.Finish the configuration.
19.Clone the VM so it can be used to build a 3-node cluster.
20.Power on the VM and start configuring Nutanix CE.
(In real use Nutanix CE requires the SSD to deliver at least 5000 IOPS; in the lab we can lower the values in a configuration file to pass this check. The file is /home/install/phx_iso/phoenix/sysUtil.py; change the values to SSD_rdIOPS_thresh = 50 and SSD_wrIOPS_thresh = 50. You need to log in as root; the password is nutanix/4u.)
21.Repeat the steps above to deploy the remaining Nutanix CE nodes. When the installation completes, the address of the CVM login page is displayed.
22.The first login to the CVM controller requires a Nutanix account, and at that point DNS resolution may fail. In that case, configure a DNS server with the following commands (Google's 8.8.8.8 is recommended):
$ ncli
ncli> cluster add-to-name-servers servers=8.8.8.8
Then log in again and you will see the Nutanix interface. Enjoy the learning journey!
Original English article:
A great way to learn Nutanix technology is by using the Nutanix Community Edition, which is a community supported, free, but fully functional version of the Nutanix software and Acropolis Hypervisor. If you don't have spare hardware lying around your house or lab then a great way to use this initially is nested on top of ESXi. Recently my 8-year-old son Sebastian passed the Nutanix Platform Professional Certification and I decided to give him his own home lab as a reward and also as a late birthday present. I had some Dell T710 servers from my VMware lab prior to Nutanix that I thought would make a great first home lab, and they're connected into 10G switches. This article will cover how I put this environment together and give a high level overview. It was super easy, and anyone can do it. You can build very powerful demo or learning environments this way.
Firstly I would like to acknowledge three great resources that I used to help get this up and running. In no particular order:
JOEP PISCAER of Virtual Lifestyle – #NextConf running Nutanix Community Edition nested on Fusion
Albert Chen – How to install Nutanix CE into ESXi 6.0
William Lam – Emulating an SSD Virtual Disk in a VMware Environment
I took the work that Joep and Albert did and mixed in a bit of William Lam's magic to fool Community Edition into thinking that a plain virtual disk was an SSD, so that I could get the environment up and running on VMDKs on a normal datastore. This was for functional testing only and just for a home lab for my son, so it's not going to be breaking any performance records anyway. What I am about to describe assumes you are setting the environment up for a minimum of 3 Nutanix CE VMs.
Here is the high level process I went through:
1.Start with a running ESXi 6.0 Host that has some storage attached
2.Create a Portgroup on a Virtual Switch to attach the Nutanix Community Edition VM to and ensure that the security settings allow Promiscuous Mode (I called this NXCVM)
Note: If you wish to trunk multiple VLANs to the Nutanix CE VMs you can use VLAN 4095 on a standard switch or Virtual Port Trunking on a Distributed Switch. The Default or Native VLAN that Nutanix CE is connected to should have DHCP on the network (or use static addresses).
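As a sketch only (the vSwitch name here is an example, not from the original article), the port group and its promiscuous mode policy can also be created from the ESXi shell:
esxcli network vswitch standard portgroup add --portgroup-name=NXCVM --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup policy security set --portgroup-name=NXCVM --allow-promiscuous=true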
3.Download the Nutanix Community Edition Image File
4.Unzip the image file and rename it <image file name>-flat.vmdk
5.Upload the flat.vmdk file using the datastore browser to the datastore on the ESXi 6.0 host
6.Upload the ce.vmdk descriptor file using the datastore browser to the ESXi 6.0 host (descriptor I used included below)
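If SSH is enabled on the host, steps 4 to 6 can also be done from a Linux or macOS workstation instead of the datastore browser. A rough sketch, assuming the downloaded image is gzipped and named to match the descriptor below, with an example datastore path:
gunzip ce-2015.06.08-beta.img.gz
mv ce-2015.06.08-beta.img ce-2015.06.08-beta-flat.vmdk
scp ce-2015.06.08-beta-flat.vmdk ce.vmdk root@esxi01:/vmfs/volumes/datastore1/ce/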
7.Using the vSphere Web Client via vCenter, create a new VM (you will see why it's important to use the web client in a second)
8.Configure Compatibility as ESXi 6.0 and later
9.Configure the new VM as a CentOS Linux 4/5/6/7 64Bit VM
10.Configure at least 4 vCPUs and expose hardware-assisted virtualization to the guest OS
11.Nutanix Community Edition Nested CPU Config (screenshot in the original post)
Configure at least 16GB RAM (this allows you to run VMs on top of the Acropolis Hypervisor as well as the Nutanix software, etc.)
12.Change the Network Adapter to E1000
13.Change the SCSI Controller to PVSCSI, i.e. VMware Paravirtual (yes, this works just fine), and delete the 16GB virtual disk that is listed by default
14.Add an Existing Hard Disk and use the ce.vmdk image that you previously uploaded attached to SCSI 0:0
15.Add a New Hard Disk, I used 500GB thin provisioned, this will act as the virtual SSD, and attach it to SCSI 0:1 on the PVSCSI Adapter
16.Add a New Hard Disk, I used 500GB thin provisioned, this will act as the HDD, and attach it to SCSI 0:2 on the PVSCSI Adapter
Note: when using thin provisioned VMDKs they will not grow until real data is written to them. This means you can run lots of virtual Nutanix Community Edition clusters without taking up much if any storage space. All of the Nutanix data optimization and reduction features also work on top of this, reducing the real data footprint even further :). In my case however I'm only running one CE node per ESXi host, but you could run many for a training or demo environment for example.
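Not from the original article, but as an aside: the two thin disks can also be pre-created on the host with vmkfstools and then attached as existing disks (the paths and names are examples):
vmkfstools -c 500G -d thin /vmfs/volumes/datastore1/ce/ce-ssd.vmdk
vmkfstools -c 500G -d thin /vmfs/volumes/datastore1/ce/ce-hdd.vmdk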
17.Click VM Options, expand Advanced, edit the Configuration Parameters, and add a new row with the following:
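The row itself was a screenshot in the original post; based on William Lam's SSD emulation article referenced above, it should be the parameter that flags the SCSI 0:1 disk as an SSD to the guest:
scsi0:1.virtualSSD = 1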
18.Click Next and then Finish
19.Clone the VM you’ve just created so that you have a template that can be used later
20.Power on the Nutanix CE VM you’ve just created
Note: Depending on how powerful your hardware is and whether or not you are using a real SSD, you may want to modify the /home/install/phx_iso/phoenix/sysUtil.py file and change SSD_rdIOPS_thresh and SSD_wrIOPS_thresh to 50 using vi, logging in as root. Then you can exit and run through the install process.
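The same edit can be made non-interactively; a minimal sketch, assuming sed is available in the installer environment (it is a standard Linux userland):
sed -i -e 's/SSD_rdIOPS_thresh = .*/SSD_rdIOPS_thresh = 50/' -e 's/SSD_wrIOPS_thresh = .*/SSD_wrIOPS_thresh = 50/' /home/install/phx_iso/phoenix/sysUtil.py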
21.Deploy your additional Nutanix CE nodes by cloning from the template you created. No need to run through the above steps again.
Here is the ce.vmdk descriptor file that I used:
#Disk DescriptorFile
version=4
encoding="UTF-8"
CID=4470c879
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"
# Extent description
RW 14540800 VMFS "ce-2015.06.08-beta-flat.vmdk"
# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "ac2a7619c0a01d87476fe8124470c879"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "10"
Now that you have deployed your Nutanix CE VMs (assuming more than one), you can create a Nutanix cluster. This is done simply by logging into the Nutanix CVM on one of the VMs using either the DHCP IP address or the static IP address you used. Use the username nutanix and password nutanix/4u. Run cluster -s <cvmip>,<cvmip>,<cvmip> create and it will create your cluster. Add DNS servers to the cluster using ncli cluster add-to-name-servers servers=<dns server>,<dns server>.
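For example, with three CVMs that picked up DHCP addresses (the IP addresses below are placeholders):
ssh nutanix@192.168.1.101
cluster -s 192.168.1.101,192.168.1.102,192.168.1.103 create
ncli cluster add-to-name-servers servers=8.8.8.8,8.8.4.4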
You're done. Log into the CVM IP address of any of the nodes on HTTPS port 9440 and you can update the Prism admin user password and begin to create VMs.
After creating a cluster, to ensure everything is working successfully, I recommend running a diagnostics test. To do this, log out of the CVM and back into the CVM again using the nutanix user. Issue the following command:
diagnostics/diagnostics.py --display_latency_stats --run_iperf run; diagnostics/diagnostics.py cleanup
The output will include network performance, latency stats, and IO stats for random and sequential reads and writes. Your performance will be completely dependent on the hardware you deploy on, and as this is meant for functional rather than performance testing, expect it to be lower than you would get from a real production / enterprise-class Nutanix system.
Once everything is complete you should have something similar to the following to use and learn from (screenshot in the original post):
Final Word
TIP: If you want better network performance and fewer dropped packets in your nested environment you should install the VMware Fling Mac Learning Filter Driver, as described in this article and further explained by William Lam in this article.
Nutanix Community Edition is a great tool to use to try out Nutanix software and become familiar with the interface and the power and simplicity of Nutanix solutions. It's free, supported by the community, and runs on a wide range of hardware or nested on ESXi as I've shown here. It's suitable for home labs, demos, API integration testing, and training environments. It is a bit easier to get running if you're using bare metal, as you can just dump the image on a USB stick, boot into the installer, and be up and running a bit quicker, but running nested gives you a lot more flexibility and the ability to overprovision hardware for dev/test/training/demo environments. It's meant for experimentation.