Geekzone: technology news, blogs, forums


fritzman

127 posts

Master Geek


#237807 19-Jun-2018 02:33

Noob to 10Gb stuff, but think I understand the basics.

 

I have some questions regarding compatibility of the PCIe cards needed for the server and workstations, if anyone has had a bit of experience with this stuff, please...

 

I look after a Church that does quite a bit of video production, and am looking at getting the two main workstations decent access to the RAID5 SAS array that is on the server, some 25m away in the rack.

 

With some help, I think I've sussed out the switch hardware, as follows...

 

Core/Distribution – 2x Aruba OC1950 12XGT 4SFP+ switches, each with 12x 10GbE RJ45 copper ports & 4x 10GbE SFP+ fibre ports (2x SFP+ DAC cables to form them up as an aggregated pair).

The plan from there is to use this pair to connect the two other switches, the server and the two workstations.

2x Aruba OC1850 48G 4XGT switches, each with 4x 10GbE RJ45 copper ports & 48x 1GbE RJ45 copper ports (2x RJ45 10G connections to core - one to each OC1950, link aggregated to 20Gb).

For the server and the workstations, these dual-port 10Gb PCIe adapters have been suggested, and I'm just not sure about their compatibility, never actually having used 10Gb stuff before... hoping someone might have.

SuperMicro 7048R-C1R4+ server - 1x StarTech PEX20000SFPI with 2x 10GbE SFP+ ports (DAC connected to each OC1950 using SFP+, link aggregated to 20Gb).

 

Workstations (2) - 2x StarTech ST20000SPEXI with 2x 10GbE RJ45 copper ports (Cat6 connected and link aggregated to 20Gb, one connection to each OC1950).
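
(A rough back-of-the-envelope check of what those aggregated links give, sketched in Python; the figures just restate the plan above, and the per-flow note assumes standard LACP behaviour, where any single flow rides one member link:)

```python
# Rough capacity check on the proposed links. Assumption: LACP-style
# aggregation, where each individual flow is pinned to one member link.
LINK_GBPS = 10

endpoints = {
    "server (2x SFP+ DAC)": 2,
    "workstation 1 (2x RJ45)": 2,
    "workstation 2 (2x RJ45)": 2,
}

for name, links in endpoints.items():
    print(f"{name}: {links * LINK_GBPS} Gb/s aggregate, "
          f"{LINK_GBPS} Gb/s ceiling per single flow")

# Both workstations pulling from the server at once could ask for up to
# 40 Gb/s against the server's 20 Gb/s aggregate; in practice the disks
# are likely to be the ceiling long before the network is.
```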

I've had a look at some Dell X540-T2 cards on eBay and (while second-hand) they would save me about $700, but I'm not sure about compatibility. I see Intel lists Win10 drivers for its version of the card, and hope that the Dell one would accept the Intel driver.


cyril7
7846 posts

Uber Geek

Trusted
Subscriber

  #2040140 19-Jun-2018 06:40

Hi, personally I always select Mellanox-based 10G cards; they have the largest feature set (i.e. more likely to support the transfer tech your application will use) and are far cheaper than Intel-based solutions.

 

All the main server vendors have Mellanox-based cards in their inventory so you can maintain warranty, although looking at the hardware you are using on your workstations, that is not an issue.

 

I have linked PBTech's web site above simply as an easy supply source, although Ingrams carry Mellanox also.

 

Not too sure how you are moving files about, but these cards will support all the latest SMB3 higher-speed features, and RDMA, which I imagine (no experience of modern video-manipulation software) would be used to manage large files sitting on other machines.

 

Oh, and finally: Mellanox is the NIC used to build Azure, so it has Microsoft's preference, as I understand it.

 

Cyril


freakngeek
350 posts

Ultimate Geek


  #2040151 19-Jun-2018 07:33

You'll probably find your RAID array will be the bottleneck.

 

I bought a couple of Intel 10GbE NICs a few years back to play around with.
I set up a couple of computers and linked them via the 10GbE NICs directly (no switch).
I had to set up RAM drives at both ends to be able to get 1200MB/s between the machines.
These days with NVMe this is not that hard, but you need serious gear to take advantage of 10GbE.
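
(If anyone wants to repeat that sort of test, a minimal raw-TCP throughput check looks something like the sketch below - a rough stand-in for a proper tool like iperf3; the port and transfer sizes are arbitrary placeholders:)

```python
# Minimal raw-TCP throughput test between two machines -- a rough stand-in
# for a tool like iperf3. Run "python tput.py server" on one host and
# "python tput.py client <server-ip>" on the other. Port and sizes are
# arbitrary placeholder choices.
import socket
import sys
import time

PORT = 5201
CHUNK = 1 << 20            # 1 MiB per send/recv
TOTAL = 10 * (1 << 30)     # push 10 GiB in total

def server() -> None:
    with socket.create_server(("", PORT)) as srv:
        conn, addr = srv.accept()
        received, start = 0, time.monotonic()
        with conn:
            while data := conn.recv(CHUNK):
                received += len(data)
        elapsed = time.monotonic() - start
        print(f"{received / elapsed / 1e6:.0f} MB/s from {addr[0]}")

def client(host: str) -> None:
    payload = b"\0" * CHUNK
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.monotonic()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += CHUNK
    print(f"sent {sent / 1e9:.1f} GB in {time.monotonic() - start:.1f} s")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```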

 

Getting to 1200MB/s in RAID with spindle drives will take a serious number of drives and a serious RAID controller.
I get about 400MB/s from an 8x 2TB RAID6 array on an LSI controller with 1GB of cache.
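
(The back-of-the-envelope spindle maths, as a sketch - the per-drive sequential rates below are assumed ballpark figures, not measurements:)

```python
# Back-of-the-envelope: spindles needed to fill a 10GbE pipe with
# sequential reads. Per-drive rates are assumed ballpark figures.
import math

LINE_RATE_MBPS = 10_000 / 8      # 10 Gb/s is ~1250 MB/s before overhead
DRIVES = {"7200rpm SATA": 150, "10k SAS": 200}   # MB/s sequential, assumed

for name, rate in DRIVES.items():
    # RAID5/6 reads stripe across the data drives, so throughput scales
    # roughly with drive count until the controller runs out of steam.
    drives = math.ceil(LINE_RATE_MBPS / rate)
    print(f"{name}: ~{drives} drives to saturate 10GbE")
```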

 

But as @cyril7 says, Mellanox is the cheapest way to go; Intel costs a bomb.
But then you already have 10GbE at both ends and all you need is the switch in between.

 

But that is seriously nice gear for a Church. I'm envious.


mentalinc
2044 posts

Uber Geek

Trusted
Subscriber

  #2040173 19-Jun-2018 08:27

Do you even need a switch?

Dual-port card on the server (ConnectX-3).

 

Single-port cards on the two workstations.

 

Buy the cards second-hand from eBay.

 

Buy some fibre cable and transceivers from fs.com.

 

All up should cost you circa $600-800 tops!

CPU: Intel 3770K | RAM: F3-2400C10D-16GTX G.Skill Trident X | MB: Gigabyte Z77X-UD5H-WB | GFX: GV-N660OC-2GD GeForce GTX 660 | Monitor: Qnix 27" 2560x1440

cyril7
7846 posts

Uber Geek

Trusted
Subscriber

  #2040176 19-Jun-2018 08:39

Ahhhh, fs.com... porn site for network engineers.


rphenix
901 posts

Ultimate Geek


  #2040228 19-Jun-2018 09:34

cyril7:

 

Hi, personally I always select Mellanox-based 10G cards; they have the largest feature set (i.e. more likely to support the transfer tech your application will use) and are far cheaper than Intel-based solutions.

All the main server vendors have Mellanox-based cards in their inventory so you can maintain warranty, although looking at the hardware you are using on your workstations, that is not an issue.

 

 

I also love Mellanox but prefer to avoid OEM-branded cards so that I can always get the latest stock Mellanox firmware without jumping through hoops cross-flashing. I have had the odd compatibility issue with certain types of modules, always sorted by either updating the switch or updating the Mellanox card (usually it's the switch).

 

If you're starting out it might pay to get one or two DACs; they tend to be a little more likely to be accepted by the switch, and you can of course skip the need for a switch at all by using a DAC between two servers.

 

I've had good luck with the Ubiquiti 10G multimode modules - they work well with the Mellanox cards and the Ubiquiti EdgeSwitch 16 XG, which gives you 12x SFP+ ports and 4x 10G copper Ethernet ports.

 

For fibre leads I had to purchase from overseas; I simply couldn't get them at the lengths I wanted here, and it also worked out a heck of a lot cheaper.


Beccara
1287 posts

Uber Geek


  #2040264 19-Jun-2018 09:51

We run a Mellanox network. Have a chat to PBTech's server guys - you can pick up a Mellanox switch and cards new for a really good price, and even their 40G kit is super cheap for plain switching/VLAN feature sets. I would farm this out to either your server vendor or someone like PBTech; it's easy to make a costly mistake in ordering.





Most problems are the result of previous solutions...

All comments I make are my own personal opinion and do not in any way, shape or form reflect the views of current or former employers unless specifically stated.

hio77
'That VDSL Cat'
12612 posts

Uber Geek

Trusted
Subscriber

  #2040289 19-Jun-2018 10:14

freakngeek:

 

Getting to 1200MB/s in RAID with spindle drives will take a serious number of drives and a serious RAID controller.
I get about 400MB/s from an 8x 2TB RAID6 array on an LSI controller with 1GB of cache.

 

 

When I was doing servers, on our 10G machines getting the right RAID controller to handle the load was the biggest difficulty.

 

It's easy to hit the 700Mbit mark, but pushing harder, it's the controller that does the work. As pointed out, you need a 1GB-cached controller :)

freakngeek:

 

But that is seriously nice gear for a Church. I'm envious.

 

 

I like to look at it as just me being not so outgoing, rather than the snide comments I could otherwise default to...

 

But some of the churches I've visited with the partner... I have honestly sat there and gone WTF at the amount of video production gear they run. After the last one I visited, nothing surprises me...

#include <std_disclaimer>

 

Any comments made are personal opinion and do not directly reflect the position of my current or past employers.


fritzman

127 posts

Master Geek


  #2040291 19-Jun-2018 10:15

Thanks for all the help guys...

 

I have a quote from Ingrams using the HPE gear, but they want to use AT or StarTech NICs (which I thought were pretty budget stuff). I will also reach out to PBTech, specifically ask for the Mellanox gear, and see how it compares.

 

As mentioned, I've seen the Dell cards based on the Intel X540 chipset and they were cheap (like sub-$200 each).

I ran a fresh bench on the RAID array (7x 10k SAS drives) this morning and it sat pretty constantly around 800MB/s, so if I can get anything like that through the 20Gb network, that would be adequate, I think. I also have the option of a cheap Dell MD1200 with 12x 7200rpm SAS drives, but I haven't received that yet, so I'm not sure how it would compare in speed terms.
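
(A quick unit check on that figure, since Mb/s and MB/s are easy to mix up - this takes the bench result as megabytes per second:)

```python
# 800 MB/s of array throughput, expressed in network units.
array_mbytes_per_s = 800     # MB/s, from the bench above
gbits_per_s = array_mbytes_per_s * 8 / 1000
print(f"{array_mbytes_per_s} MB/s = {gbits_per_s:.1f} Gb/s")
# ~6.4 Gb/s -- comfortably inside a single 10GbE link, let alone a 20Gb pair.
```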

 
Also, re the video stuff being done in churches today... yeah, they are in the main pretty professional setups (in this case with two full-time staff creating their content), and often have dedicated people doing stuff that is fairly up there.


fritzman

127 posts

Master Geek


  #2040385 19-Jun-2018 13:01

freakngeek:

 

You'll probably find your RAID array will be the bottleneck.

I bought a couple of Intel 10GbE NICs a few years back to play around with.
I set up a couple of computers and linked them via the 10GbE NICs directly (no switch).
I had to set up RAM drives at both ends to be able to get 1200MB/s between the machines.
These days with NVMe this is not that hard, but you need serious gear to take advantage of 10GbE.

Getting to 1200MB/s in RAID with spindle drives will take a serious number of drives and a serious RAID controller.
I get about 400MB/s from an 8x 2TB RAID6 array on an LSI controller with 1GB of cache.

But as @cyril7 says, Mellanox is the cheapest way to go; Intel costs a bomb.
But then you already have 10GbE at both ends and all you need is the switch in between.

But that is seriously nice gear for a Church. I'm envious.

Thanks... had a look at their cards and this one (MCX4121A-XCAT) looks good for the link to the server, but I don't see any dual RJ45 10GbE ones.

Also, I don't see any Mellanox switches there, other than a couple that are about twice the cost of the whole project.


freakngeek
350 posts

Ultimate Geek


  #2040399 19-Jun-2018 13:22

I have had very little to do with Mellanox over the years, having always been a staunch RJ45 man, but I do appreciate they are superior in some ways.

 

https://www.pp.co.nz/products.php?pp_id=AA77248&ref=pricespy
Not a badly priced switch to get you started.


rphenix
901 posts

Ultimate Geek


  #2040402 19-Jun-2018 13:29

freakngeek:

 

I have had very little to do with Mellanox over the years, having always been a staunch RJ45 man, but I do appreciate they are superior in some ways.

https://www.pp.co.nz/products.php?pp_id=AA77248&ref=pricespy
Not a badly priced switch to get you started.

 

 

Yep - that is the UniFi version of the Ubiquiti EdgeSwitch 16 XG I mentioned earlier; it works fine with Mellanox cards.


fritzman

127 posts

Master Geek


  #2040493 19-Jun-2018 14:14

Thanks guys.

It occurs to me, having re-read the very helpful posts, that maybe aggregating pairs of 10Gb connections everywhere (in the switches, server and workstations) to achieve a 20Gb backbone is actually unnecessary?

 

If there is no way they are going to saturate a single 10Gb setup, do I need the additional cost involved in making it a 20Gb setup?


cyril7
7846 posts

Uber Geek

Trusted
Subscriber

  #2040499 19-Jun-2018 14:24

I am assuming there are only a small number of devices, so even with L3+L4 hashing you probably still will not get much benefit from a LAG over a single link, and that is not even considering disk-related bottlenecks.
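
(To make the hashing point concrete, a toy sketch - real switches use vendor-specific hash functions, so this is purely illustrative, and the addresses and ports are made up:)

```python
# Toy L3+L4 LAG hashing: every packet of a given flow (same IPs and ports)
# hashes to the same member link, so one file copy can never exceed a single
# 10 Gb/s link, however many links the LAG has. Addresses are made up.
import hashlib

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_links: int = 2) -> int:
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % num_links

# One SMB session from a workstation to the server: always the same link.
print(lag_member("10.0.0.21", "10.0.0.5", 49732, 445))
print(lag_member("10.0.0.21", "10.0.0.5", 49732, 445))   # identical result
# A second workstation's flow may land on the other member, which is where
# a LAG actually helps: multiple simultaneous flows.
print(lag_member("10.0.0.22", "10.0.0.5", 50110, 445))
```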

 

As for switching, I must say the UniFi/UBNT 16XG looks pretty damn good value. I would probably go for the UBNT EdgeSwitch version as opposed to the UniFi one, which probably requires a management instance, whereas the UBNT versions are typically standalone. That all said, I have never used UBNT switching gear - lots of their wireless, but not a lot of their switching.

 

Another option would be an AT XS916MXS, albeit at 3x the price of the EdgeSwitch. I have installed a number of these where we wanted to expand 10G switching on existing x610s as supplied under MoE SNUP; they work well.

 

Cyril


fritzman

127 posts

Master Geek


  #2040511 19-Jun-2018 14:46

cyril7:

 

I am assuming there are only a small number of devices, so even with L3+L4 hashing you probably still will not get much benefit from a LAG over a single link, and that is not even considering disk-related bottlenecks.

As for switching, I must say the UniFi/UBNT 16XG looks pretty damn good value. I would probably go for the UBNT EdgeSwitch version as opposed to the UniFi one, which probably requires a management instance, whereas the UBNT versions are typically standalone. That all said, I have never used UBNT switching gear - lots of their wireless, but not a lot of their switching.

Another option would be an AT XS916MXS, albeit at 3x the price of the EdgeSwitch. I have installed a number of these where we wanted to expand 10G switching on existing x610s as supplied under MoE SNUP; they work well.

Cyril

Thanks...

There are about 15-25 users at any one time, including the two video-encoding workstations that will be accessing video files on the server and editing them using Adobe Creative Cloud.

 

I had a quick look at the AT switch and it seems to be about double the cost of the HP one in my OP (HPE ARUBA 1950 12XGT).

 

Because the building is already cabled with new Cat6 (wish I had known about 6A/7 before the sparky did his thing), I need to run a 10GbE link (or two, if I'm aggregating the dual ports at each end) across these to each video workstation.

I favour the simplicity of the HP 1950 switches: when linked, the two of them appear as a single switch, and with one 10Gb cable from each device plugged into each of the pair, you get aggregation to 20Gb and redundancy down to 10Gb.

cyril7
7846 posts

Uber Geek

Trusted
Subscriber

  #2040515 19-Jun-2018 14:54

Cat6 should be good for 10G if installed correctly up to 37m, with variable potential beyond that out to 50-odd metres.

 

Sorry, I should have checked your current switch spec before commenting on that. As for aggregation, it would make sense between the switch and server(s), but 10G to each workstation is surely sufficient.

 

Edit: Further, 10GBASE-T is in my view not a nice connection; it's power-hungry and has a serialisation delay that makes it unsuitable for some applications, though probably not a biggie for yours. I would still do DAC between servers and switch.

 

Cyril






