What happens when justice is blind to technology?

It is not uncommon to worry about what happens when law enforcement has access to too much technology. There are, of course, benefits to ethical, politically neutral, and responsible use of technology within law enforcement – though where the line is drawn between adequate capability and too much capability is also a common concern. (For example, use of video evidence submitted by a member of the public who witnessed an assault would certainly be adequate capability – installing CCTV on every corner in case of a future assault, on the other hand, might be seen as giving too much technical capability.)

But what happens when justice is blind to technology? That, too, poses serious issues.

Blind Justice

In recent days, news has emerged of the case of Danny Kay, who has been freed from gaol and had his rape conviction quashed, based on a tiny application of technology – one which the investigating police should have been able to employ themselves. From News.com.au1:

A MAN who was wrongly jailed for rape has spoken out of the years of hell that only ended when his family managed to find deleted Facebook messages that proved his innocence.

Danny Kay, of Derby, UK, spent more than two years behind bars after police relied on an “edited and misleading” conversation between himself and his accuser — with cops now reviewing just how they got it so wrong.

By any reasonable definition, rape is a serious crime – and accusations of rape must therefore be seriously investigated. While law enforcement in many countries has struggled to prove it takes rape claims seriously, the seriousness of an alleged crime should, by all means, result in a serious investigation of both the claims of the accuser and any rebuttals from the accused. In the case in question, there was a clear and substantial difference between the Facebook conversation used by police as evidence – which led to Danny being found guilty – and the actual Facebook conversation that took place.

While comprehensive details have not been published, it would seem that, for one reason or another, the police were provided with an edited version of the conversation that took place (whether this editing was deliberate or accidental is not a topic for consideration here). That editing appeared to create an implicit confession of misdeeds, whereas the unedited conversation, retrieved after almost two years, revealed that no such admission – implicit or otherwise – had taken place, and that the context surrounding the supposed admission was in fact entirely different.
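
As a purely illustrative aside, a discrepancy of this kind is mechanically detectable once the full record is available. The Python sketch below – using a hypothetical message structure and entirely made-up data, not the actual Facebook export format – shows how comparing a tendered excerpt against a full export can flag messages that have been omitted or altered.

```python
# Minimal sketch: comparing a tendered excerpt of a conversation against a
# full platform export to flag messages that were omitted or altered.
# The Message fields and the sample data are illustrative assumptions only;
# they do not reflect the export format of any real platform.

from dataclasses import dataclass


@dataclass(frozen=True)
class Message:
    msg_id: str
    timestamp: str   # ISO-8601 string, kept simple for the sketch
    sender: str
    text: str


def find_discrepancies(full_export, tendered_excerpt):
    """Return messages present in the full export but missing from,
    or textually different in, the tendered excerpt."""
    excerpt_by_id = {m.msg_id: m for m in tendered_excerpt}
    missing, altered = [], []
    for original in full_export:
        candidate = excerpt_by_id.get(original.msg_id)
        if candidate is None:
            missing.append(original)
        elif candidate.text != original.text:
            altered.append((original, candidate))
    return missing, altered


# Made-up data: the excerpt silently drops the second message.
full = [
    Message("1", "2013-07-01T20:01:00", "A", "hello"),
    Message("2", "2013-07-01T20:02:00", "B", "what do you mean?"),
    Message("3", "2013-07-01T20:03:00", "A", "never mind"),
]
excerpt = [full[0], full[2]]

missing, altered = find_discrepancies(full, excerpt)
print([m.msg_id for m in missing])   # -> ['2']
print(altered)                       # -> []
```

The hard part in Danny Kay's case, of course, was not the comparison itself but obtaining the unedited record at all – which is ultimately what his family managed to do.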

The same article also notes that the quashing of Danny’s conviction2:

…comes after three high-profile rape cases collapsed in the same week, including two where bungling cops did not disclose crucial texts sent by the alleged victims.

The average law-enforcement officer can’t be expected to be a complete geek, conversant in every aspect of technology, mobile and online communication and so on – much as many school teachers are expected to be generalists across a variety of topics without necessarily having deep expertise in any of them.

Yet when reflecting on the case described above, one has to wonder why the records were not properly examined while evidence was being assembled for the charges and subsequent trial. In fact, the case as described does not even appear to represent a new style of issue – forgery and other manipulation of evidence has a long history. Preventing tampering is such a high-profile concern in many countries, at least for financial records, that businesses can find themselves subject to archival, retention-lock and write-once laws – ensuring that what is recorded stays recorded and cannot be altered. (Equally, one wonders why the accused’s legal team did not make more of an effort to find the unedited conversation.)
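
To make the write-once idea a little more concrete, the sketch below shows one common technique for making a record store tamper-evident: each entry’s hash incorporates the previous entry’s hash, so altering or removing an earlier record breaks the chain for everything that follows. It is a minimal illustration under assumed structures and field names, not a description of any particular retention or archiving product.

```python
# Minimal sketch of a tamper-evident, append-only record log.
# Each entry's hash covers its content plus the previous entry's hash,
# so editing or deleting any earlier entry invalidates the whole chain.

import hashlib
import json


def _entry_hash(prev_hash: str, content: str) -> str:
    return hashlib.sha256((prev_hash + content).encode("utf-8")).hexdigest()


class AppendOnlyLog:
    def __init__(self):
        self._entries = []  # list of (content, hash) tuples

    def append(self, content: str) -> None:
        prev_hash = self._entries[-1][1] if self._entries else "0" * 64
        self._entries.append((content, _entry_hash(prev_hash, content)))

    def verify(self) -> bool:
        """Recompute the chain; any alteration to earlier entries is detected."""
        prev_hash = "0" * 64
        for content, stored_hash in self._entries:
            if _entry_hash(prev_hash, content) != stored_hash:
                return False
            prev_hash = stored_hash
        return True


log = AppendOnlyLog()
log.append(json.dumps({"event": "record created"}))
log.append(json.dumps({"event": "record amended"}))
print(log.verify())   # True

# Simulate tampering with the first entry: verification now fails.
log._entries[0] = (json.dumps({"event": "record deleted"}), log._entries[0][1])
print(log.verify())   # False
```

Real retention-lock systems typically enforce this at the storage layer rather than in application code, but the underlying principle – make alteration detectable, or impossible – is the same.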

The reliability and integrity of evidence gathered from computer systems has been an ongoing concern in various countries. In Australia, for instance, the Australian Law Reform Commission published, under Reliability and accuracy of computer-produced evidence3, findings and recommendations relating to how we can deem evidence from computer systems to be trustworthy. It notes, for instance, that:

6.16 | Section 146 of the uniform Evidence Acts creates a rebuttable presumption that, where a party tenders a document or thing that has been produced by a process or device, if the device or process is one that, if properly used, ordinarily produces a particular outcome, then in producing the document or thing on this occasion, the device or process has produced that outcome. For example, it would not be necessary to call evidence to prove that a photocopier normally produced complete copies of documents and that it was working properly when it was used to photocopy the relevant document.

The challenge, of course, is that as technology improves, so does the ability to manipulate the evidence it produces. The photocopier argument may seem straightforward, until we consider that careful and judicious use of a photocopier, a scanner and manipulated sections of documents may result in ‘evidence’ that bears no accurate relationship to the original document from which it was derived.

In the ALRC report, two submissions in particular are referenced in favour of stronger attention to the veracity of computer records tendered as evidence. The Office of the Victorian Privacy Commissioner is noted as observing4:

…in numerous instances in Victoria, technology-generated evidence (particularly speed camera evidence) has been shown to be less than reliable. It submits that it is critical to maintain public confidence in the judicial process and that this can be eroded by even isolated instances of the admission of inaccurate computer evidence. It submits that the public’s confidence in the accuracy and reliability of some technologies has already been shaken. It is therefore important to subject these technologies to scrutiny and maintain the highest standards of testing computer evidence, particularly as computer systems become more sophisticated and complex.

The Law Society of New South Wales, it is noted, expresses concern about the lack of a clear parallel between photocopied evidence (per the earlier example) and purely electronic evidence5:

‘in an age of computer hacking and viruses the rebuttable presumption in s 146 of the uniform Evidence Acts is of concern’. It points out that s 146 envisages application to machine-produced evidence such as photocopies (this is the example given in the legislation), but simple data copying is considerably different from computer-produced data, which can be stored and manipulated. It submits that the existence of quality control or internal control systems should be sufficient for computer-produced evidence to be considered prima facie accurate and reliable. However, it questions what the standard of quality control should be and suggests that there may have to be different standards for different litigants. (Compare, for example, the computer records and systems of a sole trader with those of a multinational corporation.) It also submits that the concerns raised about the accuracy and reliability of computer-produced evidence apply to other electronic communications such as SMSs.

Effectively, the submissions and concerns cited above fall into two distinct categories. The first is the accuracy of automatically generated computer data relative to its intended function – e.g., whether speed camera data is reliably accurate. The second is the integrity of computer-retrieved data – where data might be altered, and to what extent we can trust that it has not been. The risk of either situation affecting a legal case is compounded by the risk of humans erroneously assuming, or deciding, that because data or information has come from a computer system, no further vetting is required; this almost becomes reminiscent of the Abraham Lincoln ‘quote’ circulated via Internet memes:

Meme image – Abraham Lincoln: “Don’t believe everything you read on the Internet, just because there’s a picture with a quote next to it.”

Yet, despite submissions indicating concern over both the accuracy and the integrity of computer-generated records, the ALRC report noted the Commissions’ response was6:

6.40 | The Commissions have made it clear in this Inquiry that a major overhaul of the legislation is neither warranted nor desirable.

Effectively, the response notes there is insufficient evidence of failures in relation to tendered evidence to warrant legislative change.

One wonders, referring back to Danny Kay’s case, how many examples may already exist, hidden from view (other than to those directly impacted), thanks to an inadequate understanding of technology in the first place.

Footnotes

  1. Man cleared of rape after Facebook message proves his innocence, Brittany Vonow, January 1 2018, news.com.au
  2. Ibid.
  3. Reliability and accuracy of computer-produced evidence, ALRC
  4. Ibid, 6.37.
  5. Ibid, 6.39.
  3. Ibid, 6.40.
