[LINK] Scott Morrison said hate content on social media could be automatically screened out by algorithms. Is he correct? - ABC News (Australian Broadcasting Corporation)
Roger Clarke
Roger.Clarke at xamax.com.au
Fri Apr 19 10:16:28 AEST 2019
> On 18/4/19 10:34 am, Antony Broughton Barry wrote:
>> https://www.abc.net.au/news/2019-04-18/fact-check-can-algorithms-screen-out-hate-content-social-media/10979770
On 19/4/19 9:30 am, Tom Worthington wrote:
> An algorithm which automatically screened out hate content on social
> media would likely also filter out many statements by Australian
> mainstream politicians.
It's an ill wind, etc. Maybe we'd then be spared not only a lot
of pollie-rave, but also a slice of the Question Time farce.
> We need any such mechanism to be imperfect.
We're pretty safe there.
Has anyone seen any attempts to operationally define 'hate speech'?
Brandis made a complete hash of attempts to reconsider the awkward s.18C
of the Racial Discrimination Act (Cth).
The key criteria for unlawful speech are to 'offend, insult, humiliate
or intimidate'.
Brandis proposed:
http://www.rogerclarke.com/DV/PFS-1408.html#CC
- the removal of 'offend', 'insult' and 'humiliate'
- a definition of 'intimidation' as 'a reasonable likelihood of
  causing fear of physical harm'
- the addition of 'vilification' defined as 'reasonable likelihood
of inciting hatred'
In almost the only instance in which I ever agreed with Brandis about
*anything*, all of his proposals appeared to me to be well worth
considering. (Once I'd analysed the package as a whole, however, I felt
that the excessively permissive interpretation and saving provision
undermined the intimidation and vilification protections).
Another existing criterion, along the lines of 'incitement of violence',
is tricky, but at least you could draft and test some rules for scoring
passages against that criterion (e.g. multiple occurrences of active
words associated with violence, such as 'kill' and 'attack'; and
multiple occurrences of pejoratives relating to a category of people).
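A rule of that rough shape could be sketched as follows. All of the word
lists, the threshold, and the function names here are invented purely for
illustration; a real system would need far richer lexicons and context
handling, which is exactly where such rules start to break down:

```python
import re

# Illustrative only: tiny invented word lists and an arbitrary threshold,
# not a real or recommended classifier.
VIOLENCE_WORDS = {"kill", "attack"}   # active words associated with violence
PEJORATIVES = {"vermin", "scum"}      # pejoratives aimed at a category of people

def incitement_score(text):
    """Count occurrences of each word class in the passage."""
    words = re.findall(r"[a-z']+", text.lower())
    violence = sum(w in VIOLENCE_WORDS for w in words)
    pejorative = sum(w in PEJORATIVES for w in words)
    return violence, pejorative

def flags_passage(text, threshold=2):
    """Flag only when BOTH classes occur multiple times, per the
    'multiple occurrences' criterion sketched above."""
    violence, pejorative = incitement_score(text)
    return violence >= threshold and pejorative >= threshold
```

Even this toy version shows the limits: a news report quoting a violent
threat scores exactly as the threat itself does.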
But I really can't see things like 'a reasonable likelihood of causing
fear of physical harm' and 'a reasonable likelihood of inciting hatred'
being amenable to any vaguely sensible automated application.
> ... With an effective social
> media filtering mechanism it will be very tempting for a future
> government to use it to suppress free speech in Australia.
Absolutely. Every proposal for censorship has to be considered very
carefully, and kneejerk reactions vigorously fought.
--
Roger Clarke mailto:Roger.Clarke at xamax.com.au
T: +61 2 6288 6916 http://www.xamax.com.au http://www.rogerclarke.com
Xamax Consultancy Pty Ltd 78 Sidaway St, Chapman ACT 2611 AUSTRALIA
Visiting Professor in the Faculty of Law University of N.S.W.
Visiting Professor in Computer Science Australian National University