This is Part 1 of my “832 TB – ZFS on Linux” series – if you’re looking for Part 2 or Part 3, follow the links below:

- 832 TB – ZFS on Linux – Setting Up Ubuntu: Part 2
- 832 TB – ZFS on Linux – Configuring Storage: Part 3
When looking to store, say, 800 terabytes of slow-tier/archival data, my first instinct is to leverage AWS S3 (and/or Glacier). It’s hard – if not impossible – to beat the $/GB and durability that Amazon is able to provide with their object storage offering. In fact, with the AWS Storage Gateway you can get “block storage” access to AWS for a decent price within your data center.
Sometimes, though, the data simply cannot live in the cloud. This could be due to the application not knowing what to do with AWS API calls, or maybe there is some legal or regulatory reason that the data cannot sit there. After ruling out cloud storage options, your next thought might be to add as much capacity as required, with overhead, to your existing storage infrastructure. Hundreds of terabytes, however, can result in $500k – $1M+ of expense depending on what system you’re using. In fact, a lot of the big players in the storage arena who support this kind of scale do so by licensing per terabyte (think Compellent, NetApp, EMC, etc.). So while the initial hardware purchase from EMC or NetApp may seem acceptable, the licensing fees will surely add up. In this example, however, the requirement is literally “as much storage as possible, with some redundancy, for as little cost as possible…” Let’s do it!

Choosing the OS/filesystem

If you follow my blog you may know that I experiment with different storage technology. I have played around with different solutions such as Windows Storage Spaces, Nexenta, FreeNAS, Nutanix, unRAID, ZFS on several different operating systems, Btrfs, Gluster, Ceph, and others.
Because of the budget for this project, the first thing that popped into my head was ZFS on Linux. The reason ZFS stood out to me was because of its redundancy and flexibility in storage pool configuration, its inherent (sane) support for large disk rebuilding, its price, and the performance it can offer. Today, you can run ZFS on Ubuntu 16.04.2 LTS with standard repositories and Canonical’s Ubuntu Advantage Advanced Support. You could also build this on Solaris with the necessary licensing if you wanted to go that route, but it’d be more expensive. Unfortunately, Red Hat Enterprise Linux does not support ZFS (yet) and so that option was not in the running, though I’d have gladly gone that route as well. ZFS on Linux (ZoL) will also run on CentOS, Fedora, etc.
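To make the “standard repositories” point concrete, here’s a minimal sketch of what getting ZFS running on Ubuntu 16.04 looks like. The pool name (tank), the RAIDZ2 layout, and the device names below are placeholders of my own for illustration – they are not the configuration used in this build (that comes in Part 3).

```bash
# Install ZFS straight from Ubuntu's stock repositories (16.04 package name)
sudo apt update
sudo apt install zfsutils-linux

# Example only: build a double-parity (RAIDZ2) pool named "tank" from six disks.
# On a real system you'd reference disks via /dev/disk/by-id/ so the pool
# survives device renames across reboots and controller changes.
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Verify the pool layout and health
sudo zpool status tank
```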
Hardware selection

After determining how I’d approach this solution from a software perspective, I needed to figure out the hardware component. The only requirements I have for this project are that it needs to hold as many disks as possible, support SAS2 or better (for large disks), present the disks directly to the server (no hardware RAID), and it must be affordable. So, we’ve pretty much ruled out building a storage node using Dell, IBM, Cisco, HPE, etc., since the hardware will be at a premium combined with maintenance plans to match. So what’s left? There are a couple of whitebox-type solutions out there, but Supermicro is obviously the “industry standard” for when you don’t want to pay a big name for a server/box. In fact, more often than not, Supermicro is building the physical boxes that the other manufacturers are selling, anyway.
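As an aside – and this is my own illustration rather than anything from the build itself – once the disks sit behind an HBA in IT/JBOD mode, it’s easy to confirm from Linux that they really are presented individually rather than hidden behind a hardware RAID volume:

```bash
# Each physical disk should show up as its own block device with its real model and serial
lsblk -o NAME,SIZE,MODEL,SERIAL

# SMART data should be readable directly, since no RAID controller sits in the way
sudo apt install smartmontools
sudo smartctl -a /dev/sda
```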
I spent some time browsing the offerings from Supermicro and came across two solutions that would work for my situation. I ended up deciding between the Supermicro SSG-6048R-E1CR60L and the SSG-6048R-E1CR90L – the E1CR60L is a 60-bay 4U chassis while the E1CR90L is a 90-bay 4U chassis. The nice part is that no matter which platform you choose, Supermicro sells this only as a pre-configured machine – this means that their engineers are going to make sure that the hardware you choose to put in it is all from a known compatibility list. Basically, you cannot buy this chassis empty and jam your own parts in (boo, hiss, I know, but this is for your own good). For this build I went with two of the SSG-6048R-E1CR60L machines so that I have one in a production environment and one in a second environment that can be used for replication purposes.
The reason for choosing the 60-bay device over the 90-bay is that the 90-bay does not have any PCIe slots available. This means that if you outgrow the 90-bay chassis you’ll need to build another, whereas with the 60-bay unit I could add a PCIe HBA with external connections (such as a Broadcom SAS 9305-16e) and cable it up to an expansion chassis with another 60 disks, etc. With the chassis selected, there were only a few other configuration items I needed to decide on.