Geekzone: technology news, blogs, forums


bignose
143 posts

Master Geek
+1 received by user: 22


  #2610589 25-Nov-2020 15:12

cyril7:

 

Not quite. Yes, that is correct; however, if you are using a multi-stream transfer method (SMB3, for example) and L4 is included in the hashing, you then have the ability for a file transfer to be split across multiple LAG members. Obviously this requires both the sender and receiver to have similar LAG capability, or one of them to have a 10G NIC, to realise the aggregated potential.

 

 

SMB3 multi-channel not only doesn't require LAG (let alone LAG with L4 hashing), it requires that you do NOT run it over a LAG link (it wants totally separate interfaces).

 

It can require that your client NIC supports RSS, though, if you're using a setup like dual GbE server-side feeding a single multi-gig NIC client-side.

 

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-r2-and-2012/dn610980(v=ws.11)

 

 

 

Basically, it replaces all the expense and hassle of a smart switch and manual link aggregation with software smarts.




cyril7
9075 posts

Uber Geek
+1 received by user: 2499

ID Verified
Trusted
Subscriber

  #2610593 25-Nov-2020 15:20

Hi. Reading the document you posted, one of the methods (in fact the one the document describes as preferred) is to use NIC Teaming, in particular MS Dynamic teaming mode, which is in fact L4 hashing with a Microsoft elaboration. Clearly this can only work on traffic from the server to the network, as that is all the server's TCP stack has control of. If you want traffic from the network to the server to achieve a similar ability, then L3/4 hashing is the closest thing to replicate that load balancing towards the team. Some switches now also describe a Dynamic hashing algorithm that uses similar elaborations to the MS design; we have a recent Dell stack at work that supports that method. Regardless, it's just an L2/3/4 hash.
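To make the hashing point concrete, here's a rough Python sketch of flow-based member selection. This is purely illustrative (real switches use simple XOR/CRC hardware hashes, not SHA-1, and the IPs/ports here are made up); the key property is the same: the hash is deterministic per flow, so one TCP stream always rides one link, while many streams spread out.

```python
import hashlib

def lag_member(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               num_links: int) -> int:
    """Pick an egress LAG member from an L3/L4 flow hash.

    Deterministic for a given 4-tuple: the same flow always maps to
    the same member, which is why LAG never speeds up a single stream.
    """
    key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# One TCP flow always lands on the same member -> no speedup for a
# single transfer.
a = lag_member("10.0.0.5", "10.0.0.9", 50123, 445, 2)
b = lag_member("10.0.0.5", "10.0.0.9", 50123, 445, 2)
assert a == b

# ...but flows with different source ports (e.g. SMB3 multichannel
# streams) can hash to different members and use both links.
members = {lag_member("10.0.0.5", "10.0.0.9", p, 445, 2)
           for p in range(50000, 50016)}
print(members)  # typically both members appear
```

An L2-only hash (MACs alone) would map *all* traffic between two hosts to one member; including L3/L4 fields is what lets multiple flows between the same pair of hosts spread out.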

 

Cyril


bignose
143 posts

Master Geek
+1 received by user: 22


  #2610594 25-Nov-2020 15:21

cyril7:

 

Hi. Reading the document you posted, one of the methods (in fact the one the document describes as preferred) is to use NIC Teaming, in particular MS Dynamic teaming mode, which is in fact L4 hashing with a Microsoft elaboration. Clearly this can only work on traffic from the server to the network, as that is all the server's TCP stack has control of. If you want traffic from the network to the server to achieve a similar ability, then L3/4 hashing is the closest thing to replicate that load balancing towards the team. Some switches now also describe a Dynamic hashing algorithm that uses similar elaborations to the MS design; we have a recent Dell stack at work that supports that method. Regardless, it's just an L2/3/4 hash.

 

 

That's software teaming; it's done at the endpoints, not on the switch.

 

Also, teaming is (now) limited to Windows Server (and possibly Windows 10 Workstation).




cyril7
9075 posts

Uber Geek
+1 received by user: 2499

ID Verified
Trusted
Subscriber

  #2610597 25-Nov-2020 15:30

OK, interesting. I have not had to deal with server-side networks for a while.

 

Cyril


nitro
761 posts

Ultimate Geek
+1 received by user: 339


  #2610601 25-Nov-2020 15:39

We've gone astray a bit. Apologies, OP.

 

I think LAG will screw up SMB3 because it will present itself as a single logical interface.

 

 


Tinkerisk
4810 posts

Uber Geek
+1 received by user: 3673


  #2610604 25-Nov-2020 15:54

Stu:

 

Any help is much appreciated.

 

Cheers, and thanks.

 

 

Go with the QNAP TS-453D (since it can be upgraded to a real ~500-600 MB/s later on, if needed). Dual LAG will provide you 'up to' double bandwidth (but not double speed, and it does NO load balancing). An additional feature is that the NVMe slots can be used for cache or pool storage (i.e. for VMs). That's (officially) not the case with DSM (cache only).





     

  • Qui nihil scit, omnia credere debet. - He who knows nothing must believe everything.
  • Firewalls do NOT stop dragons!
  • I avoid Big Tech, they try hard to dictate technology and culture across borders.
  • In effect we have everything to hide from someone, and no idea who someone is.

 
 
 
 


Stu

Hammered
8751 posts

Uber Geek
+1 received by user: 2414

Moderator
ID Verified
Trusted
Lifetime subscriber

  #2610821 25-Nov-2020 20:51

Thanks for the rather interesting and informative replies. It certainly doesn't seem like I'll benefit from Link Aggregation, at least not any time soon. I'll put the money into RAM and maybe an NVMe SSD or two.

 

As for those recommending particular brands, I'll just make a couple of comments on where I'm at in the decision process: 

 

The main thing putting me off the Synology DS920+ is that the base RAM is soldered in. I kinda get why they may have gone with this approach, but what if the soldered memory develops a fault outside of warranty? RAM does that. Built-in apps seem to be better than the other brands', though. Also, Synology don't, and apparently won't, natively allow the NVMe SSD to be used for anything other than caching.

 

The QNAP TS-453D doesn't have built-in NVMe capability and requires a PCIe card to add this facility. The box is already priced higher than the DS920+ and the AS6604T.

 

The ASUSTOR AS6604T's software doesn't seem to be as polished, but 2 x 2.5GbE ports (like the QNAP), built-in NVMe slots (like the Synology), and 3 decent USB ports give this box a slight leg up.

 

 

 

I still don't know!

 

Thanks again folks.





People often mistake me for an adult because of my age.

 

Keep calm, and carry on posting.

 

Referral Links: Sharesies

 

Are you happy with what you get from Geekzone? If so, please consider supporting us by subscribing.

 

No matter where you go, there you are.


richms
29114 posts

Uber Geek
+1 received by user: 10229

Trusted
Lifetime subscriber

  #2610838 25-Nov-2020 22:11

bignose:

 

10GbE, whilst nice, is overkill for a cheap 4-bay NAS like that; there's simply no way it'll saturate a 10GbE link (4 bays in RAID5 will probably give sub-500 megabytes/sec sustained r/w).

 

Maybe if you were stuffing the NAS with SSDs rather than HDDs, but a lot of cheaper units run out of CPU grunt to push data at 10GbE rates even when running an all-SSD array.

 

802.3bz is cheaper at the NAS end (two of the units the OP listed have dual 2.5GbE built in), and much cheaper at the switch (the $200 2.5GbE QNAP switch is at least half the price of anything with at least two 10GbE ports).

 

 

You don't have to saturate a 10 gig link for it to be worthwhile, just do better than 1 gig, which is a piece of cake on almost anything. Sure, there are 2.5 and 5 gig solutions, but they're still more costly than a cheap Mellanox card and DAC cables are.
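As a sanity check on the quoted sub-500 MB/s RAID5 figure, here's rough back-of-envelope arithmetic. The ~160 MB/s per-disk number is an assumption (typical for modern 7200 rpm NAS drives on sequential reads), not something from the thread:

```python
def raid5_sequential_estimate(num_disks: int, per_disk_mb_s: float) -> float:
    """Conservative sequential throughput ceiling for RAID5.

    Data is striped over the array; using n-1 disks' worth of
    throughput is a common conservative estimate, since writes pay
    parity overhead and real arrays rarely hit the raw spindle sum.
    """
    return (num_disks - 1) * per_disk_mb_s

est = raid5_sequential_estimate(4, 160)  # ~160 MB/s per HDD assumed
print(f"~{est:.0f} MB/s")  # ~480 MB/s, i.e. under the 500 MB/s quoted
```

So a 4-bay HDD array sits well under 10GbE's ~1250 MB/s line rate, but comfortably above both 1GbE (~125 MB/s) and 2.5GbE (~312 MB/s), which supports both posters' points.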





Richard rich.ms

bignose
143 posts

Master Geek
+1 received by user: 22


  #2610840 25-Nov-2020 22:25

richms:

Sure, there are 2.5 and 5 gig solutions, but they're still more costly than a cheap Mellanox card and DAC cables are.



Any decent 10GbE switch with a couple of DAC ports is going to be close to $500 (something like the unmanaged QNAP), and even then it means you're going to need both the switch and the NAS relatively close to the client computers to be able to use DAC (or that the house is wired with Cat 6), plus of course you'll need a 10GbE card for the NAS as well as for the PCs (even if you use 'cheap' Mellanoxes).

Meanwhile, a QNAP 5-port 2.5GbE switch is $200 and will run over boring old Cat 5e system wiring, you can get PCIe 2.5GbE cards for the PC for under $100... and the QNAP has dual 2.5GbE ports already built in.

I'm not really seeing how your 'cheap' solution is cheap?
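Putting the rough numbers from this exchange side by side (all figures are the approximate prices mentioned in the thread; the DAC cable and used-Mellanox prices are assumptions added for completeness):

```python
# Rough per-item costs, NZ$ (figures from the thread where given,
# otherwise assumed ballpark values - adjust to your local prices).
cost_10g = {
    "unmanaged 10GbE switch with DAC ports": 500,
    "10GbE add-on card for the NAS": 300,
    "used Mellanox 10GbE NIC for the PC": 100,  # assumed
    "two DAC cables": 60,                       # assumed
}
cost_25g = {
    "QNAP 5-port 2.5GbE switch": 200,
    "PCIe 2.5GbE card for the PC": 100,
    # NAS side: dual 2.5GbE already built in -> $0
}

total_10g = sum(cost_10g.values())
total_25g = sum(cost_25g.values())
print(total_10g, total_25g)  # the 10G path costs roughly 3x more
```

Under these assumptions the 2.5GbE route comes in around a third of the price, and reuses existing Cat 5e, which is the crux of the argument above.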


Tinkerisk
4810 posts

Uber Geek
+1 received by user: 3673


  #2610855 25-Nov-2020 23:49

You don't need a complete 10G network to use 10G. And 10G doesn't have to reach every client, unless you need direct access to fast storage for online video editing.

 

My config, for instance, is just 2x 1G from the NAS to the 1G network for file services, and 1x 1G from the virtualisation servers for the VM applications. So nothing special compared to 1G-only networks.

 

One of the two 10G ports of the NAS is linked to a static (no DHCP from a slow 1G router) 10G network via a 10G switch for further 10G backup and VM storage services. Even the main 10G switch could be omitted if there were only 2 parties involved talking to each other (like a NAS and its backup server).

 

My personal opinion is that 2.5 and 5 GbE are a transitional effect coming from the gamer scene. It makes more sense to go straight to 10GbE. Don't forget that a 10GbE RJ45 interface easily draws ~7W per interface (better to use DAC cabling, or SFP+ for greater distances).

 

Conclusion for my use case (this need not apply to you):

 

1) 10GbE for storage services for virtualisation servers (DAC)

 

2) for fast data backups to other storage (RJ45 when forced to use it; better DAC or SFP+)

 

3) and a backbone to each distribution switch on each floor/area (SFP+, 1x or 2x 10G uplink/s, multiple 1G to the clients).

 

Just my 2 cents.

 

 





     


bignose
143 posts

Master Geek
+1 received by user: 22


  #2610881 26-Nov-2020 07:54
Send private message

Tinkerisk:

My personal opinion is that 2.5 and 5 GbE are a transitional effect coming from the gamer scene. It makes more sense to go straight to 10GbE. Don't forget that a 10GbE RJ45 interface easily draws ~7W per interface (better to use DAC cabling, or SFP+ for greater distances).




People who say this never seem to think of larger deployments. When you've got a site with several km of Cat 5e in the building already, you don't just wake up one day and go 'you know what, let's rip it all out and rewire with 6a so we can deploy 10GbE to the desktops'. Doubly so given the range of 10GBASE-T, as it'd also involve upping the switch density massively.

Being able to eke better speeds out of existing wiring IS a big deal, and is NOT a 'gamer' thing (do you think all the work that went into 802.3bz was just so gamers can convince themselves things are faster when their actual link to the internet is still sub-1-gig?).

 
 
 

Tinkerisk
4810 posts

Uber Geek
+1 received by user: 3673


  #2610894 26-Nov-2020 08:14

bignose:
Tinkerisk:

 

My personal opinion is that 2.5 and 5 GbE are a transitional effect coming from the gamer scene. It makes more sense to go straight to 10GbE. Don't forget that a 10GbE RJ45 interface easily draws ~7W per interface (better to use DAC cabling, or SFP+ for greater distances).

 




people who say this never seem to think of larger deployments ...

 

Sure. Evolution never cares about Cat 5e wiring :-)

 

We'll see that even 10GbE is only a temporary step.





     


Tinkerisk
4810 posts

Uber Geek
+1 received by user: 3673


  #2612215 28-Nov-2020 13:01

Stu:

 

I still don't know!

 

Thanks again folks.

 

 

There is no 100% match in that price range - fully agreed.

 

But I had to decide as well. I grabbed the TS-453D for 399€ (679 NZ$). The 10GbE+2NVMe add-on card is another 180€ (300 NZ$), taxes included.

 

Yes, I'm aware that the board has only a shared PCIe 2.0 x2 link for the 10GbE and the 2 NVMe drives, which limits the bus bandwidth. On the other hand, up to 32GB RAM (unofficially), 2x 1(/2.5)GbE, 1x 10GbE, 2 NVMe cache/pool slots, 8 camera licences out of the box, and a more capable QTS are things you hardly get from the competitors in that range.
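The shared-lane limit is easy to quantify. A quick sketch using nominal PCIe figures (8b/10b line coding already factored in; further protocol overhead ignored):

```python
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> ~500 MB/s usable per lane.
lanes = 2
pcie2_mb_s_per_lane = 500
bus_mb_s = lanes * pcie2_mb_s_per_lane   # total for the x2 link

# 10GbE line rate in MB/s (10,000 Mbit/s over 8 bits per byte).
nic_10gbe_mb_s = 10_000 / 8

print(bus_mb_s, nic_10gbe_mb_s)  # 1000 1250.0
# The x2 link alone can't even feed the 10GbE port at line rate,
# let alone the two NVMe drives sharing the same lanes.
```

So on this card the practical ceiling is ~1 GB/s shared between the NIC and both SSDs, which is the trade-off being acknowledged above.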





     


Stu

Hammered
8751 posts

Uber Geek
+1 received by user: 2414

Moderator
ID Verified
Trusted
Lifetime subscriber

  #2612445 28-Nov-2020 19:09

I was close to pushing the button on the QNAP TS-453D myself, until PBTech dropped the price on the ASUSTOR AS6604T to $848. At that price, it's worth having a play. Also picked up a couple of rather well priced 512GB Samsung 970 Pro NVMe sticks, which I'm not sure if I'll use for cache or storage. I'll upgrade the RAM once I've seen more test results of 16GB and 32GB with the ASUSTOR.






Tinkerisk
4810 posts

Uber Geek
+1 received by user: 3673


  #2612512 29-Nov-2020 05:07

Stu: I was close to pushing the button on the QNAP TS-453D myself, until PBTech dropped the price on the ASUSTOR AS6604T to $848. At that price, it's worth having a play. Also picked up a couple of rather well priced 512GB Samsung 970 Pro NVMe sticks, which I'm not sure if I'll use for cache or storage. I'll upgrade the RAM once I've seen more test results of 16GB and 32GB with the ASUSTOR.

 

So go ahead, have fun. Careful with the SSD cache: for write cache, 'power-fail secured' SSDs are recommended (unless you have the whole thing on a UPS); for read cache it's not that important. In general, the cache SSDs 'should' have a TBW rating starting at 400-500 TB under heavy loads.
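To put a 400-500 TB TBW rating in perspective, rough endurance arithmetic (the daily write volume is an assumed example, not a measurement):

```python
def cache_lifetime_years(tbw_tb: float, daily_writes_gb: float) -> float:
    """Years until a drive's rated TBW is consumed at a steady write rate.

    TBW (terabytes written) is the endurance figure from the SSD spec
    sheet; 1 TB taken as 1000 GB to match vendor marketing units.
    """
    return (tbw_tb * 1000) / daily_writes_gb / 365

# A 500 TBW drive under an assumed heavy 500 GB/day cache workload:
print(round(cache_lifetime_years(500, 500), 1))  # 2.7 (years)
```

At a lighter 100 GB/day the same drive would last over a decade, so whether a consumer drive like the 970 Pro is adequate really depends on how write-heavy the cache workload turns out to be.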





     









