Geekzone: technology news, blogs, forums


kiwirock

694 posts

Ultimate Geek
+1 received by user: 141


#312758 14-May-2024 23:10
Send private message

I've been trying to update some Mikrotik devices tonight, but they are going extremely slow. 

 

On doing a traceroute I'm seeing high pings starting inside Mercury's network at hop 10, and it's worse all the way to Mikrotik since they're hosted in Latvia; the latency is almost as bad as a geostationary satellite Internet connection, with throughput of about 10 KB/s.

 

Can anyone confirm Mercury customers in the South Island are seeing similar results?

 

  4    16 ms    13 ms    13 ms  lo0-2.avo-bng-2.as55850.net [103.241.56.242]
  5    19 ms    18 ms    19 ms  et-0-0-16.avo-p-1.as55850.net [116.251.185.52]
  6    23 ms    19 ms    18 ms  et-0-0-8.ch-p-1.as55850.net [116.251.185.158]
  7    18 ms    18 ms    18 ms  et-0-0-0.ch-p-2.as55850.net [116.251.184.129]
  8    19 ms    19 ms    26 ms  et-0-0-8.cpc-p-2.as55850.net [116.251.184.250]
  9    18 ms    15 ms    19 ms  lo0-1.cpc-ir-1.as55850.net [103.241.56.8]
 10   155 ms   155 ms   159 ms  et-0-0-3.cpc-p-1.as55850.net [103.26.202.19]
 11   158 ms   158 ms   162 ms  et-0-1-5.ote-p-4.as55850.net [116.251.184.96]
 12   165 ms   160 ms   159 ms  palo-b24-link.ip.twelve99.net [62.115.115.216]
 13     *        *        *     Request timed out.
 14   547 ms   547 ms   544 ms  ash-bb2-link.ip.twelve99.net [62.115.136.201]
 15   530 ms     *        *     prs-bb1-link.ip.twelve99.net [62.115.112.243]
 16   543 ms   543 ms   542 ms  adm-bb1-link.ip.twelve99.net [62.115.134.96]
 17   541 ms   540 ms   548 ms  hbg-bb3-link.ip.twelve99.net [80.91.252.43]
 18     *        *      543 ms  kbn-bb5-link.ip.twelve99.net [62.115.134.76]
 19   543 ms     *      540 ms  kbn-b1-link.ip.twelve99.net [62.115.143.7]
 20   528 ms     *      527 ms  kbn-bb6-link.ip.twelve99.net [62.115.138.112]
 21   531 ms   528 ms   528 ms  sto-bb2-link.ip.twelve99.net [62.115.139.172]
 22   395 ms   396 ms   396 ms  riga-b3-link.ip.twelve99.net [62.115.139.199]
 23   539 ms   536 ms   542 ms  siatet-ic-332270.ip.twelve99-cust.net [213.248.84.33]
 24     *        *        *     Request timed out.
 25     *        *        *     Request timed out.
 26     *        *        *     Request timed out.
 27   550 ms   545 ms   548 ms  forum.mikrotik.com [159.148.147.239]

 

 

 

edit: similar results to their download balancer @ 159.148.147.251
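If anyone wants to eyeball where the latency steps up in a paste like the one above, a quick parse of the per-hop best-case RTTs makes the jump obvious (`hop_latency_jumps` is just an illustrative helper I wrote for this post, not a real tool; shown on a three-hop excerpt of the trace):

```python
import re

def hop_latency_jumps(tracert_lines):
    """Parse Windows `tracert` output lines into (hop, best_rtt_ms) pairs
    and return the hop where the best-case RTT increases the most."""
    hops = []
    for line in tracert_lines:
        m = re.match(r"\s*(\d+)\s+(.*)", line)
        if not m:
            continue
        hop = int(m.group(1))
        # Collect the RTT samples on this line; timed-out probes ('*') have none.
        rtts = [int(x) for x in re.findall(r"(\d+) ms", m.group(2))]
        if rtts:
            hops.append((hop, min(rtts)))
    # Biggest increase in best-case RTT between consecutive responding hops.
    worst = max(range(1, len(hops)), key=lambda i: hops[i][1] - hops[i - 1][1])
    return hops, hops[worst]

trace = """\
  9    18 ms    15 ms    19 ms  lo0-1.cpc-ir-1.as55850.net [103.241.56.8]
 10   155 ms   155 ms   159 ms  et-0-0-3.cpc-p-1.as55850.net [103.26.202.19]
 11   158 ms   158 ms   162 ms  et-0-1-5.ote-p-4.as55850.net [116.251.184.96]
""".splitlines()

hops, jump = hop_latency_jumps(trace)
print(jump)  # → (10, 155): best-case RTT jumps from 15 ms to 155 ms at hop 10
```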


michaelmurfy
meow
13579 posts

Uber Geek
+1 received by user: 10910

Moderator
ID Verified
Trusted
Lifetime subscriber

  #3230554 15-May-2024 00:45
Send private message

Unfortunately it's likely the same issue that's affecting almost every other provider in this part of the world. I'll quote a post by @saf written about another provider, but it's likely the same cause:

 

saf: There are a significant number of submarine cables currently cut between Asia and Europe. As a result, carriers and providers are being forced onto paths with less capacity, or onto temporary measures to keep packets flowing, which congests the remaining working paths - but hey, moving most packets is better than no packets!

 

Fixing these cables is proving especially challenging, more so than normal, due to the instability in the region: repair ships need military escorts and other protection.

 

Doing some digging, it does look like RETN and/or their upstream are hitting this problem, which matches both the traffic path and the time-of-day variation, since the working path they're using is congested at peak times.

 

We already completed some traffic engineering changes a month or so ago to route around the points of significant congestion we have control over, but we can't fix every path to every destination, as we don't control the whole path to the remote end.
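For anyone wondering what "traffic engineering changes" usually means in practice, it mostly comes down to BGP policy. A minimal, generic sketch (Cisco-style syntax; the ASN, neighbour address, and route-map name are all made up for illustration) of preferring one transit over a congested one by raising local-preference:

```
! Prefer routes learned from a hypothetical "transit A" neighbour by
! giving them a higher local-preference than the default of 100.
route-map TRANSIT-A-IN permit 10
 set local-preference 200
!
router bgp 64500
 neighbor 192.0.2.1 route-map TRANSIT-A-IN in
```

Note this only influences the AS's own outbound path choice, which is exactly why a provider can't fix the whole path to a remote destination.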

 

For example, on my provider I'm noticing IPv6 traffic currently going via the US (a really weird route) while IPv4 goes via the EU on another congested route. Same problem, and it's not quick: I've seen this congestion drop down to literally dialup speeds (around 7 kb/sec download rate) at one point towards the part of the world where Mikrotik host their stuff (Latvia).





Michael Murphy | https://murfy.nz
Opinions are my own and not the views of my employer.




saf

saf
221 posts

Master Geek
+1 received by user: 533

ID Verified
Trusted
Vetta Group
Subscriber

  #3230639 15-May-2024 09:44
Send private message

Comparing your path with the "normal" path via Arelion (AS1299) as below:

 

     (mtr columns: Loss%   Snt   Last   Avg  Best  Wrst  StDev)
 9. sjo-b23-link.ip.twelve99.net                                             87.9%   175  160.8 153.1 150.3 160.8   3.4
10. palo-b24-link.ip.twelve99.net                                             0.0%   175  153.1 153.8 151.2 166.2   3.3
11. nyk-bb2-link.ip.twelve99.net                                              6.9%   175  229.6 223.9 220.0 248.6   4.2
12. kbn-bb6-link.ip.twelve99.net                                             11.0%   174  305.2 304.8 302.1 359.5   5.2
13. sto-bb2-link.ip.twelve99.net                                              0.6%   174  309.9 309.6 306.7 332.6   3.8
14. riga-b3-link.ip.twelve99.net                                              0.0%   174  316.3 316.7 314.1 340.4   3.8
15. siatet-ic-332270.ip.twelve99-cust.net                                     1.1%   174  324.0 324.9 322.2 343.8   3.5
16. ???
17. ???
18. ???
19. forum.mikrotik.com                                                        0.0%   174  317.5 319.3 316.2 335.0   4.1

 

It looks pretty clear that something went bang on at least the AS1299 path between "palo" (Palo Alto) and "nyk" (New York).

 

Based on your trace, traffic looks to have been re-routed via Asia-Europe subsea cable paths, which, as @michaelmurfy has said, is a terrible path for traffic currently: multiple fibre cuts throughout the region and in the Red Sea are impacting several major cables between Marseille and Singapore/Hong Kong, with no ETRs due to the instability and risk in the region.

 

With the alternate path going via these cables, this explains the increased latency (longer path to Europe) and loss (constrained capacity on Asia-Europe subsea cables) overnight.
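For anyone curious how the ~540 ms stacks up against physics, here's a back-of-envelope sketch. The distances are rough assumptions on my part, not measured cable lengths, and these are propagation-delay floors only (no queueing), but they show why the OP's geostationary comparison is apt:

```python
# Back-of-envelope RTT floors from propagation delay alone (no queueing).
C_VACUUM_KM_S = 299_792               # speed of light in vacuum
C_FIBRE_KM_S = C_VACUUM_KM_S / 1.47   # light is roughly c/1.47 in optical fibre

def fibre_rtt_ms(path_km):
    """Minimum round-trip time over a fibre path of the given one-way length."""
    return 2 * path_km / C_FIBRE_KM_S * 1000

def geo_satellite_rtt_ms(altitude_km=35_786):
    """Round trip via a geostationary satellite: up and down in each direction."""
    return 4 * altitude_km / C_VACUUM_KM_S * 1000

# Rough one-way cable distances NZ -> Latvia (assumed figures):
print(round(fibre_rtt_ms(20_000)))    # ~196 ms via the usual trans-Pacific/Atlantic path
print(round(fibre_rtt_ms(28_000)))    # ~275 ms via a longer Asia-Europe detour
print(round(geo_satellite_rtt_ms()))  # ~477 ms via geostationary satellite
```

The observed ~540 ms is well above even the longer detour's floor, so the remainder is queueing delay on the congested Asia-Europe segments, on top of the extra distance.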

 

Just goes to show the internet really is quite literally a series of tubes which can get cut, congested and otherwise constrained!





My views are as unique as a unicorn riding a unicycle. They do not reflect the opinions of my employer, my cat, or the sentient coffee machine in the break room.


mercutio
1392 posts

Uber Geek
+1 received by user: 134


  #3241819 28-May-2024 16:11
Send private message

I'm seeing high latency to Germany with Mercury over IPv6, but it's fine over IPv4. I can trace back from the German host and it's taking a path via he.net/Telstra, which appears to be where the high latency is coming from. Maybe related, maybe not. Back when New York had flooding many years ago and there were capacity issues between the US and Europe, I found he.net had close to 50% packet loss for a very long time, while other carriers' latency was only up 20 msec or so.
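When v4 and v6 take completely different routes like this, it's worth confirming a target actually publishes both address families before comparing paths. A small sketch using the standard library (localhost is used here only so it resolves without network access; swap in the real target, e.g. forum.mikrotik.com):

```python
import socket

def addresses(host, family):
    """Resolve `host` for one address family; empty list if unsupported."""
    try:
        infos = socket.getaddrinfo(host, None, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return []
    return sorted({info[4][0] for info in infos})

# A host with entries in both families can be pinged/traced over each
# separately (ping -4 / ping -6) to see which path is the slow one.
print(addresses("localhost", socket.AF_INET))   # e.g. ['127.0.0.1']
print(addresses("localhost", socket.AF_INET6))  # e.g. ['::1']
```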

