Hi 2Degrees team,
The situation:
A contradiction between what our Teams monitoring reports and what users actually experience.
In effect:
On one hand - our deeply integrated monitoring tools are flagging a poor Teams experience for colleagues, with latency reports indicating poor response times across the board for our region.
On the other hand - our colleagues have no concerns with Teams, and have neither reported nor experienced any degraded performance on any platform or network interface, whether in the office or at home.
Given the relative geographic isolation of this matter, I've turned my procrastination attention to how 2Degrees explicitly handles Teams traffic. I'm wondering whether QoS is differentiating between the emulated traffic from the monitoring tools and genuine Teams traffic, so the two end up routed differently, which would create a disparity between the 'observed' Teams latency and the path real Teams packets actually take.
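To make the hypothesis concrete (and please correct me if I'm off base), this is roughly how I'd check whether the real traffic and the emulated traffic carry different DSCP markings on the wire. The port range is Microsoft's default client media range for Teams and may differ per tenant; the script is just an illustrative sketch assuming scapy and capture privileges:

```python
# Sketch: print DSCP markings on Teams-media-range UDP packets, so you can
# eyeball whether the monitoring tool's probes are marked the same way as
# genuine Teams calls. Port ranges are Microsoft's defaults (50000-50019
# audio, 50020-50039 video, 50040-50059 sharing) and may be customised.
from scapy.all import IP, UDP, sniff

TEAMS_MEDIA_FILTER = "udp portrange 50000-50059"

def show_dscp(pkt):
    if IP in pkt and UDP in pkt:
        dscp = pkt[IP].tos >> 2  # DSCP is the top 6 bits of the ToS byte
        # Microsoft's QoS guidance marks Teams audio EF (46), video AF41 (34),
        # sharing AF21 (18); unmarked probe traffic would show DSCP 0.
        print(f"{pkt[IP].src}:{pkt[UDP].sport} -> "
              f"{pkt[IP].dst}:{pkt[UDP].dport}  DSCP={dscp}")

sniff(filter=TEAMS_MEDIA_FILTER, prn=show_dscp, store=False)
```

If the probes come out DSCP 0 while real calls come out EF/AF41, that alone could explain different queueing and therefore different latency figures.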
So I ask: how does 2Degrees explicitly handle Teams traffic for a large corporation with QoS enabled?
In your experience, is it common for Teams monitoring tools to show a disparity in latency between real-world and emulated/test traffic?
Are we hamstringing ourselves by having latency as our only canary in the coal mine for Teams degradation? Should we be using a more robust multi-metric approach (jitter, packet loss, etc.), if that's even possible?
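For what it's worth, this is the sort of multi-metric thing I mean, a quick stdlib sketch. The hostname is a placeholder, not a real Teams endpoint, and TCP connect time is only a crude stand-in for media-path RTT:

```python
# Sketch: probe an endpoint and report latency, jitter, and loss together,
# rather than latency alone. Placeholder host; point it at whatever path
# your Teams traffic actually traverses.
import socket
import statistics
import time

HOST, PORT = "example-teams-path.invalid", 443  # placeholder endpoint
SAMPLES = 20

rtts = []
failures = 0
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            rtts.append((time.monotonic() - start) * 1000)  # RTT proxy in ms
    except OSError:
        failures += 1  # count timeouts/refusals as loss
    time.sleep(0.5)

if rtts:
    print(f"latency: mean {statistics.mean(rtts):.1f} ms, max {max(rtts):.1f} ms")
    # Jitter as the mean absolute difference between consecutive samples
    # (same spirit as RFC 3550's interarrival jitter).
    diffs = [abs(a - b) for a, b in zip(rtts, rtts[1:])]
    if diffs:
        print(f"jitter:  mean {statistics.mean(diffs):.1f} ms")
print(f"loss:    {failures}/{SAMPLES} probes failed")
```

No idea if that's how the pros do it, but a call can feel terrible with fine average latency if jitter and loss are bad, which latency-only monitoring would never show.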
My background is generic IT support; I'm not a dedicated network engineer whatsoever, so sorry if these are stupid questions. But the huge disparity between what this very popular and expensive software is reporting and what our users are experiencing really has me wondering how this all works and why it might be happening.
Appreciate any banter or corrections! :)
Cheers.
