[LINK] National Filter Scheme

Glen Turner glen.turner at aarnet.edu.au
Tue Jan 16 04:42:28 AEDT 2007


brd at iimetro.com.au wrote:

> The scheme will be administered by the Department of Communications, Information
> Technology and the Arts with the support of NetAlert, and will accredit a panel
> of filters for distribution that have been tested and approved by the
> Australian Communications and Media Authority (ACMA) for efficacy and minimum
> filter standards.

Certainly the efficacy of filters needs to be improved.  Most filters produce
a large number of false positives. A policy of universal filter use implies
that serious work on reducing false positives needs to occur, as those false
positives threaten some major public health projects: the last serious look
at these filters showed they caught HIV information, breastfeeding information,
and sex education (including the information distributed by the SA govt's
Child and Youth Health).

I wonder whether ACMA will make the test plan public so that public health and
other agencies get an opportunity to state their requirements for filters.

> The Government will keep ISP-level Internet content filtering technology under
> regular review and will conduct another trial of ISP-level filtering technology
> in Tasmania.

This is rigging the results. Tasmania is the simple case for ISP
filtering: speeds off the island are slow, average load is low,
and the exit points are geographically close, so there need not
be much replication of filtering equipment.

All the filtering equipment I've seen has been totally unsuited
to use in a large ISP environment. It is basically no more
than a PC. And being implemented in software rather than hardware,
it is open to a huge range of complexity-related attacks. [1]

I'd really like to see ACMA's evaluation criteria, as I suspect
they are a tad naive.

> ACMA will be required to provide an annual report on international trends
> in ISP-level filtering and will work closely with NetAlert to investigate
> technological improvements in filtering technology.

Just bloody marvellous. There are a few startups in this space,
mainly hardware firewall firms looking to widen their market.
I really hope the government isn't going to require us to buy
equipment from the leading two of these, since they both seem
to be headed for bankruptcy. I've already been stuffed about by
one failed startup and have no desire to go through all that
again. There's a huge gulf between "technologically feasible",
"deployable" and "financially prudent".  Again, I'd like to see
how ACMA's evaluation criteria address this.

Cheers, Glen.


  [1] Every search algorithm has a worst case, usually 10-1000 times worse.
      So we can send various URLs through the system and observe the
      changing jitter.  We collect together the worst cases and use them
      to derive information about the implementation -- such as the
      number of hash buckets, or the URLs at the bottom of the search
      tree. We then use that information to simply request a set of
      URLs from the ISP, maybe 10Mbps or so of GET requests.  The
      filtering box overloads (seeing the equivalent of 100Mbps to
      100Gbps of requests because of the algorithmic complexity in
      dealing with our carefully chosen 10Mbps), dropping the ISP.
      The attack can be launched from outside the ISP since the ISP
      is going to need to filter both incoming and outgoing URLs.
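      The attack above can be sketched in miniature. This is a toy model
      (assuming a naive chained hash table as the filter's URL blocklist,
      not any vendor's actual implementation): carefully chosen colliding
      URLs force every lookup to scan one long chain, so a small request
      stream produces a disproportionate amount of work.

```python
import itertools
import time

class NaiveUrlTable:
    """Fixed-size chained hash table with a deliberately weak hash
    -- a toy stand-in for a filter's URL blocklist."""
    def __init__(self, nbuckets=4096):
        self.buckets = [[] for _ in range(nbuckets)]

    def _index(self, url):
        # Weak hash: just the sum of byte values, so strings with the
        # same byte sum all collide into the same bucket.
        return sum(url.encode()) % len(self.buckets)

    def add(self, url):
        self.buckets[self._index(url)].append(url)

    def contains(self, url):
        # Cost is proportional to the chain length in the target bucket.
        return url in self.buckets[self._index(url)]

table = NaiveUrlTable()
# Average case: ordinary URLs spread roughly evenly across buckets.
for i in range(20000):
    table.add("www.example%d.com" % i)
# Attack case: "ac" and "bb" have the same byte sum (97+99 == 98+98),
# so every word over this two-letter alphabet hashes identically.
attack_urls = ["".join(p) for p in itertools.product(("ac", "bb"), repeat=11)]
for url in attack_urls:
    table.add(url)  # 2048 URLs, all appended to one chain

def probe(url, n=200):
    start = time.perf_counter()
    for _ in range(n):
        table.contains(url)
    return time.perf_counter() - start

fast = probe("www.example1.com")  # short chain: near O(1)
slow = probe(attack_urls[-1])     # 2048-entry chain: O(n) scan
print("worst case is %.0fx slower than average" % (slow / fast))
```

      Production hash tables mitigate this with keyed or randomised
      hashing so an outsider cannot predict which inputs collide --
      exactly the kind of hardening discussed below.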

      This isn't at all theoretical. There was a lot of work done on
      Linux about a year ago to reduce as far as possible its
      vulnerability to algorithmic complexity attacks after some
      devastating demos (including dropping a box using a 10Kbps
      stream). Similarly the Snort intrusion detector has had a
      run of algorithmic attacks by people wanting to drop the
      Snort box before doing nasty stuff.

      That's why ISPs are keen only to have hardware forwarding devices
      in the packet's path. The hardware runs at the worst case speed
      (although very quickly) so there's no way to make it run worse
      by fiddling with traffic contents.

      Where we do have PCs near the forwarding path they are there for
      monitoring and are attached using a passive optical splitter.
      The PC sees one copy of the data and the router the other.
      The router forwards its copy, so the router does not care
      whether the PC is running or not.


