[LINK] Net firms quizzed on speed limits - BBC News

Karl Schaffarczyk karl at karl.net.au
Fri Oct 12 08:28:20 AEST 2007


...and equally, one of the tools which I have observed using up *all*
available capacity is BitTorrent.

From my days running an ISP: you could have 500+ DSL users on a
4 Mbps pipe, all happily co-existing. Throw BitTorrent into the mix,
and within a month or two the result is dozens of complaints:
"I am only geting(sic) 400Ksec out of my 1.5Mb service when I use 
bittorent(sic)"
Analysis of traffic usage showed that the top 1% of users, those
running peer-to-peer apps such as BitTorrent, eMule, LimeWire and the
like, were consuming 97% of the available bandwidth.
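
To see why a handful of peer-to-peer users can swamp that model,
here is a back-of-envelope sketch in Python (the pipe size and user
count are the figures above; the three always-on seeders are my own
illustrative assumption):

    # Hundreds of bursty users sharing one pipe works because they
    # rarely transmit at once; a few always-on BitTorrent users at
    # full line rate do not share that property. Illustrative only.
    pipe_kbps = 4000                   # shared 4 Mbps pipe
    users = 500                        # subscribers on that pipe
    print(pipe_kbps / users)           # 8.0 kbps average per user

    p2p_users = 3                      # hypothetical always-on seeders
    line_rate_kbps = 1500              # each on a 1.5 Mbps service
    print(p2p_users * line_rate_kbps)  # 4500 kbps: more than the pipe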

One such user managed to pull 479 GB in one month over a 1.5 Mbps
connection. Now that's what I call reaching a theoretical limit ;)
Much of the slowness blamed on ISPs is likely due to poor management
of peer-to-peer traffic across their networks.
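
The arithmetic bears the joke out. A quick sanity check (assuming a
30-day month and ignoring protocol overhead):

    # Theoretical maximum transfer on a 1.5 Mbps link running flat
    # out for a 30-day month, ignoring protocol overhead.
    link_bps = 1.5e6                       # 1.5 Mbps in bits/second
    seconds = 30 * 24 * 60 * 60            # seconds in a 30-day month
    max_gb = link_bps * seconds / 8 / 1e9  # bits -> bytes -> gigabytes
    print(round(max_gb))                   # 486 GB; 479 GB is ~98.5%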



The other factor to keep in mind is the speed tests themselves.
There is a particular speed test which uses download files hosted by
volunteers, and most of these files sit at the end of DSL links. Not
surprisingly, when one of these hosts is chosen, the test concludes
that only 128k or 256k of network speed is available.
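
In other words, a single-stream test reports the slowest link in the
end-to-end path, not the access service on its own: it is really
measuring the volunteer's upload capacity, not the tester's downlink.
A minimal sketch, with made-up figures:

    # A speed test can only measure the bottleneck of the whole path.
    # All figures are assumptions for illustration.
    your_downlink_kbps = 1500    # the service being tested
    volunteer_uplink_kbps = 256  # test file sits behind a DSL uplink
    backbone_kbps = 100000       # everything in between
    print(min(your_downlink_kbps, volunteer_uplink_kbps, backbone_kbps))
    # -> 256: the volunteer's uplink, not the tester's downlink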


Finally, we should remember that many services are still hosted in
DIY-style environments, with websites sitting behind 256k of upload
bandwidth, and until recently some websites were still hosted at the
end of one or two dialup (yes, 33.6k) modems!


Richard: I don't think this thread is about buying something capable
of more and the user simply not using it, as per your example of a
car which can do 160 km/h but doesn't. The point is that the user is
*not asking* their car to do 160 km/h, whereas with the 'net
connection the user *is* asking for high speed.


Regards
Karl


>
>
>About the only tool I've seen in recent times that can measure available
>capacity is BitTorrent. Most of the reasons why network transactions are
>slow have not all that much to do with bandwidth per se and have a lot to
>do with latency, jitter, application design and the (woeful) quality of
>many protocol stacks in use in popularly deployed clients and servers.
>
>But then again why should mundane aspects of technology get in the way
>of a good "consumers are being duped" story in the popular press?
>
>:-)
>
>     Geoff
>
>
>
>Richard Chirgwin wrote:
>>  Ahh, that old saw again.
>>
>>  Things that don't run to their maximum rated capacity all the time (off
>>  the top of my head):
>>  Cars (can do 160 km/h, don't)
>>  Computers
>>  Ethernet (can run gigabits/second, but most connections are idle most of
>>  the time)
>>  Wireless Ethernet (sold on line speed, perform by "maximum throughput",
>>  mostly idle most of the time)
>>  ...and so on.
>>
>>  The problem is: what *do* you sell broadband on? The "minimum" speed may
>>  be zero. Or realistically, the average per-user throughput is in tens of
>>  Kbps.
>>
>>  Further: the "speed test" performance may not even reflect badly on your
>>  service provider; there may well be congestion between you and the speed
>>  test; or the speed tester site is under heavy load, or...
>>
>>  For example: on the office ADSL2+, the maximum sync speed when I had a
>>  service provider technician visit during installation was near 20 Mbps.
>>  The provider's own Web pages loaded at roughly 16 Mbps. Stuff from
>>  people who peered with that provider, roughly 10 Mbps. From non-peers,
>>  anything down to 3 Mbps.
>>
>>  So what's a reasonable measure for broadband "speed"?
>>
>>  RC
>>
>



