I'm reconfiguring my home lab from Windows Storage Spaces to hardware-based RAID.

Previously everything was connected directly to the motherboard, but now, with a RAID module and SAS expander in the mix, I'm considering how lane selection will affect performance.

Hardware

Desired RAID Config

  1. RAID 1: 2x 120GB SSD (Windows Server 2016 host)
  2. RAID 1: 2x 120GB SSD (priority VM)
  3. RAID 10: 4x 4TB SATA (secondary VMs) + mirrored 2x 256GB SSD cache
  4. RAID 10: 4x 2TB SATA (storage) + mirrored 2x 120GB SSD cache

All of the hardware is rated for SATA 6Gb/s (obviously the HDDs won't get near that), but I'm unclear how that translates once SAS expanders are involved: is that speed shared between the lanes, or per physical port?
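To put rough numbers on the links involved (assuming SAS2 throughout, i.e. 4 lanes per MiniSAS connector at 6Gb/s per lane; 8b/10b line encoding and protocol overhead mean real throughput is lower than these raw figures):

```python
# Raw link bandwidth arithmetic, assuming SAS2: each MiniSAS (SFF-8087)
# connector carries 4 lanes, each lane signalling at 6 Gb/s.
LANE_GBPS = 6
LANES_PER_MINISAS = 4

def minisas_bandwidth_gbps(connectors):
    """Raw aggregate bandwidth for a given number of MiniSAS connectors."""
    return connectors * LANES_PER_MINISAS * LANE_GBPS

# RAID module -> expander uplink uses 2x MiniSAS:
print(minisas_bandwidth_gbps(2))  # 48 Gb/s raw
```

My understanding is that any single SATA device still tops out at its own 6Gb/s link, but everything downstream of the expander shares that uplink.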
At a minimum, in order to utilise the SSD cache through the RAID controller (which is the main point of this exercise), I need to connect the 8x SATA drives and the 4x SSD cache drives (#3 & #4 above) to the RAID module as below:
  • SAS RAID Module -> 2x MiniSAS -> SAS Expander
  • SAS Expander -> 2x MiniSAS -> 8x 3.5" SAS Backplane -> 8x SATA Storage
  • SAS Expander -> MiniSAS-to-4xSATA -> 4x 2.5" SAS Backplane w/ keylock -> 4x SSD Cache
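As a rough check on whether that shared uplink becomes a bottleneck, here is a sum of ballpark peak per-drive rates (assumed figures, not measurements: ~1.5Gb/s sustained sequential for a SATA HDD, ~4Gb/s for a SATA SSD) against the 2x MiniSAS uplink:

```python
# Ballpark peak demand behind the expander vs. the controller uplink.
# Per-drive rates are rough assumptions, not measurements:
HDD_GBPS = 1.5   # ~190 MB/s sustained sequential for a 7200rpm SATA disk
SSD_GBPS = 4.0   # SATA SSD pushing toward the 6 Gb/s link limit
UPLINK_GBPS = 2 * 4 * 6  # 2x MiniSAS uplink, 4 lanes each, 6 Gb/s per lane

demand = 8 * HDD_GBPS + 4 * SSD_GBPS  # 8x SATA storage + 4x SSD cache
print(demand, UPLINK_GBPS)  # 28.0 vs 48 -> comfortable headroom
```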

The main question is whether I should go ahead and also connect the additional OS & VM SSDs (#1 & #2 above) through the RAID module, as follows:

  • SAS Expander -> MiniSAS-to-4xSATA -> 4x 2.5" SAS Backplane -> 4x OS & VM SSDs

Or would I be better off freeing up bandwidth on the RAID module and connecting the remaining SSDs directly to the motherboard's SATA 6Gb/s ports (either using the on-board LSI / RSTe for RAID 1, or going without it if it would conflict with the hardware RAID)?
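For comparison, repeating the same ballpark arithmetic (assumed rates, not measurements: ~1.5Gb/s per SATA HDD, ~4Gb/s per SATA SSD) with the four extra OS & VM SSDs also hung off the expander:

```python
# Worst-case peak demand if all 16 drives sit behind the expander.
HDD_GBPS = 1.5
SSD_GBPS = 4.0
UPLINK_GBPS = 2 * 4 * 6  # 48 Gb/s raw across the 2x MiniSAS uplink

demand = 8 * HDD_GBPS + (4 + 4) * SSD_GBPS  # storage + cache + OS/VM SSDs
print(demand, UPLINK_GBPS)  # 44.0 vs 48 -> much closer to saturating the uplink
```

That worst case assumes every drive peaking simultaneously, which is unlikely in practice, but it's part of why I'm weighing the motherboard ports.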