Thanks Talkiet for hunting that down; I had checked it a month ago when doing the tests, but not last night.
The problem we all have with testing is that we are looking for a proxy for "peak" performance. Speedtest does a 7MB file download; we do a 300KB file download for obvious reasons (24 hours x 30 days = 720 tests/month). To get a reliable test result we studied the impact of sampling only parts of a test: Speedtest samples 20, we tried sampling 10 and 4 and found the difference minor, so we settled on 4, and we use the fastest. Sampling part of a file sometimes produces an erroneous result due to rounding of small numbers, worse with 10 samples than with 4, but that is not sufficient reason to change our practice; we discard those tests, just as you would when you see a strange Speedtest result.
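To make the per-test approach concrete, here is a minimal sketch of the kind of logic described above. This is my own illustration, not the actual test code: function names, the samples format, and the rounding-error threshold are all assumptions. It times each sampled part of the download, treats a sample whose elapsed time has rounded down to almost nothing as erroneous (discarding the whole test), and otherwise keeps the fastest sample.

```python
def pick_fastest(samples, min_elapsed=0.001):
    """samples: list of (bytes_transferred, elapsed_seconds) tuples,
    one per sampled part of the download (4 parts in our case).
    Returns the fastest throughput in bytes/sec, or None if any
    sample's timing is too small to trust (rounding of small numbers)."""
    rates = []
    for nbytes, elapsed in samples:
        if elapsed < min_elapsed:   # timer rounded toward zero: unreliable
            return None             # discard the whole test
        rates.append(nbytes / elapsed)
    return max(rates)               # use the fastest of the samples

# Illustrative: four 75KB samples of a 300KB download
samples = [(75_000, 0.030), (75_000, 0.012), (75_000, 0.011), (75_000, 0.010)]
fastest = pick_fastest(samples)
```

Using the fastest sample biases each test toward the "peak" rate the line actually achieved, which is the point of the proxy.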
Speedtest would discard the slowest 30%, because the slowest results are very likely to be the first 2 out of 20 due to ramp-up, and discarding the two top and two bottom results is more likely to drop the rounding errors. Our results never use the first quartile, and we publish averages or medians in the main, which removes the need for high accuracy on every single test; we can be within, say, 5% of the true result on each individual test and still provide good information in the aggregate.
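As a sketch of that aggregation step (again my own illustration, and assuming "first quartile" means the slowest 25% of the month's tests): sort the results, drop the bottom quarter, and report the mean and median of the rest, so a ~5% error on any one test washes out.

```python
from statistics import mean, median

def monthly_summary(results):
    """results: the month's per-test speeds (up to 720 values).
    Drops the slowest quartile, then returns (mean, median) of the rest."""
    ordered = sorted(results)
    kept = ordered[len(ordered) // 4:]  # never use the first (slowest) quartile
    return mean(kept), median(kept)

speeds = [8.0, 9.5, 10.0, 10.2, 10.4, 10.5, 10.6, 10.7]  # Mbps, illustrative
avg, med = monthly_summary(speeds)
```

With the two slowest values dropped, one low outlier (the 8.0 here, say a ramp-up-affected test) never reaches the published figures at all.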