The way forward: Can Big Tech be controlled?

As most people are aware, Facebook’s services were used by the Christchurch shooter to live stream his attack. The website 8chan was used to publish his manifesto, and both the video and the manifesto were subsequently republished on Facebook.


While Facebook did remove the live video, it was viewed more than 1.2 million times in the first 24 hours before removal was achieved.

This raises the question of what responsibility we should require from social media companies and, given the borderless nature of the internet, what governments can do to encourage and enforce it.

Public debate makes it apparent there are many misconceptions about social media. We address these here, before offering possible solutions.

  • Social media as a “publisher”
    Public opinion indicates that Facebook should have been able to prevent the live video from being posted.

    This ignores the fact that Facebook is not a publisher in the sense of traditional media, or a vetting publishing service which can choose what it publishes by holding an item or article back for review.

    Social media, by its very operation, is designed to be a fast, instant post-and-publish product. Given the high volume of posts and the instant turnaround, it is arguably not in a position to monitor and review every item before publication, so a more viable option for handling offensive material needs to be found.
  • Automated systems to flag objectionable material
    Privacy Commissioner John Edwards criticises what he sees as Facebook’s lack of systems to flag the Christchurch shootings video.

    More inquiry would be beneficial as it is likely Facebook did have automated systems in place. In any event, automating this process is clearly preferable to hiring additional staff to review posts.

    A method of self-reporting and flagging is already implemented by Facebook, but it relies on the personal judgment of the viewer.

    As an international company, Facebook has a very large number of users, some of whom will slip material through before it becomes significant enough to attract the attention of a takedown auditor.

    Further, non-members cannot report material, which leaves reporting to the very people who may be posting it, or who may restrict it to closed groups until the day of an event, when it is published to the public.
  • Social media providers and the nature of companies
    Facebook provides a service in exchange for the ability to advertise to its users.

    While it must comply with local laws, its first priority is to its shareholders: increasing any dividend and/or increasing the company’s value.

    Typically, morality is a ‘nice to have’, not a primary focus for companies.

    Despite this, the public debate seems to imply an expectation that social media companies will act in the public interest first.

    But given the statutory obligations for directors to act in the best interests of the company (and ultimately shareholders), a company will not typically prioritise the best interests of the public.
  • Should we expect a product to prevent itself from being used illegally?
    While we could require car manufacturers to modify their vehicles to restrict or prevent people from driving while intoxicated, we haven’t done this.

    An objective standard for drink driving is more easily defined than a threshold for objectionable material.

    So it seems a stretch to require social media companies to prevent themselves from being used illegally when other sectors, with clearer and more objective standards, are failing to do so.
  • Freedom of speech applied against a contractual arrangement
    There have also been demands for free speech from some users who maintain social media companies are denying them this right.

    This arises where the social media company blocks or bans certain groups or activities that fail to comply with its terms and conditions.

    Such people seem to confuse freedom of speech, protected through New Zealand legislation (the New Zealand Bill of Rights Act 1990 and the Human Rights Act 1993), with the principles of freedom of contract.

    If social media companies comply with New Zealand law, they can dictate the terms and conditions for using their services, including a right not to host groups inciting hatred and to remove undesirable posts.
  • Restrict live streaming for those who have interacted with objectionable groups
    Quite simply, this is not practical to implement.

    Anyone determined to live stream could simply create a new, “clean” account or move to another provider.

    Further, live streaming cannot be defined as objectionable until the objectionable activity has occurred. By then, the harm may already be done.
  • Require prompt action by social media companies when an objectionable post is reported
    Some have asked why Facebook took so long to remove the original video and didn’t act while it was first being live streamed.

    But we understand Facebook acted quickly to remove the post and copies as soon as it was notified by police. We also understand Facebook did not receive a single complaint or report about the video from the people who watched it live before that time.

    Many members of the public may not have known how to lay a complaint, or may not have found the video concerning until they realised it was real, and not faked.

    So, requiring Facebook and social media companies to clearly and prominently display contact details for take-down notices might be desirable.

    Other websites did not act so quickly, or refused to act at all. 8chan, where the original Facebook video link appeared alongside the manifesto, has a policy of not responding to takedown requests.

    On that basis, most of New Zealand’s largest ISPs blocked access to 8chan soon after the shootings occurred. There does not appear to be any plan to restore access to the site.

    But the ability to monitor and control access may be limited for ISPs without direct access to the uplink channels. So, we recommend the wholesale channels provide a means for ISPs to block services; this could be added to the Commerce Commission’s mandate.

Across the Ditch

Australia’s recently enacted Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 applies solely to material that is recorded or streamed.

So it covers audio, visual and audio-visual material (photos, audio and video) displaying ‘abhorrent violent conduct’ such as a terrorist act, murder, attempted murder, rape, torture or kidnapping.

This would address the Christchurch video but not the shooter’s manifesto.

The first part of the Australian Bill requires any social media company that becomes aware of any of the above acts to notify the police within a reasonable time.

While well intentioned, the Australian legislation has alarming inadequacies such as the requirements in sections 474.35-36. Under these provisions, the eSafety Commissioner can issue a written notice to a content host about abhorrent violent material it is hosting.

If the content host is prosecuted and the material was available when the notice was issued, the host, or a person (including the host’s senior staff), is automatically presumed to have been reckless unless they can prove otherwise.

There is no requirement for the Australian government to prove recklessness and no opportunity for the parties to remove the material on being given notice (ie, guilt arises through the inaction of the content host, or its senior staff, in relation to material they may not yet have been aware of, and guilt is presumed unless proved otherwise).

With fines of up to 10% of revenue, and the possibility of custodial sentences for senior executives, the consequences for breaching this law seem incommensurate with failing to notice and act quickly enough on the crimes of another.

There is also inconsistency in its application, with government departments being exempt from prosecution for similar failures to act.

We hope New Zealand will not adopt similar legislation, as it would be counterproductive. It would merely drive social media companies out of New Zealand because the revenue to be made here would be greatly outweighed by the potential fines.

Where to from here?
Despite the difficulties of regulation, monitoring, and jurisdiction discussed above, we believe there is a way forward and measures can be taken to create significant improvements, especially if social media providers engage.

These include:

  • Creating incentives for social media companies to invest in R&D to improve AI technology that recognises objectionable, dangerous or abhorrent activity

    Governments typically use taxation as a tool to influence behaviour.

    Here, they could offer tax incentives or other tools to encourage social media companies to carry out research and development into AI tools that detect objectionable or harmful material.

    In this way, company directors can satisfy their shareholders that it is in the best interests of the company to invest in such research.
  • Influencing social media companies to become proactive in social responsibility measures by enacting legislation requiring them to adopt a social responsibility code

    Essentially, there is a contractual relationship between social media companies and their users for the provision of services.

    As such, the companies are in a position of strength to dictate the terms of use, provided they comply with laws in the jurisdiction within which they operate. 

    A real opportunity exists to influence social media use, with nations and social media companies formulating and agreeing a social responsibility code between them.

    To have teeth, the legislation would need to be enacted in the relevant countries and enforceable across borders through international agreement.

    We believe this would be achievable if a standardised code were agreed to.
  • Review existing national legislation to strengthen its reach and effect
    Quite likely, a further review of the Privacy Act 1993 will be necessary. And the Harmful Digital Communications Act 2015 is of limited assistance in its current form because the final legislation failed to adopt many of the Law Commission’s recommendations. New legislation, or at least a review of the current law, is urgently required.
  • Implementing steps to lift the shield of anonymity that emboldens certain users
    Many users posting objectionable material do so in the comfort of remaining anonymous.

    With the benefit of this shield, they make statements and post material that they would not otherwise say or do in a public place.

    If users are made aware that their identity may be revealed to a monitoring agency where there are concerns they may be breaching a social responsibility code, or the terms and conditions of use, many behaviours currently tolerated online (but not on the street) might be eliminated or reduced.

    This step would require a user register.

    A user signup page would need to include an additional verification step, similar to the way Australia verifies access to Digital IDs (VoIP phone numbers).

    This way, the user can still have anonymity online, but the pseudonym would be traceable to their account.

    Many online stores already do this, protecting privacy while tracking the user information needed for credit card purchases.

    So, we do not see this as a difficult step, as identifiable information is already kept by most social media companies.

    In conjunction with internet service providers, including mobile network companies, a user’s identity could then be determined in almost all cases.

  • Educating users and creating more awareness about terms of use with social media providers
    Create awareness among users of the social responsibility code and ensure they understand their use of the platform is subject to contractual terms which they must abide by.

We are heartened by the steps taken to start a dialogue on this with other nations.
