Trust me, I’m from the coalition

Stuart Robert and the coalition government are asking us to trust them with a pervasive and potentially privacy-invading app, together with the data it collects. They’ve implied, but haven’t actually said, that without the app it’s unlikely current isolation restrictions will be lifted—some not very subtle blackmail.

[Edit 2020-04-27: some things have changed since I first wrote this, and most things haven’t. Rather than change the original article I have created an addendum to address the changes and the various false claims and objections that have been floating around since.]

So should we trust them, or believe the blackmail? First off, let’s consider their track record on privacy protection, data protection, and IT competence in general. The coalition has given us:

  • Encryption breaking legislation
  • Metadata exposing legislation
  • Unlawful Robodebt
  • Secrecy provisions so draconian that warrant, arrest, trial, judgment and sentence served were all hidden from the public
  • MyHealth data breaches
  • Parliament House hacks
  • BOM hacks
  • DoD hacks
  • Two very public and in one case unlawful AFP searches of journalists in an attempt to discover whistleblower sources
  • The AFP lying about their use of Clearview AI, a global facial-recognition surveillance tool, followed by news of a massive Clearview data breach
  • MyGov meltdown

That’s just a selection. So the coalition is coming off a pretty low base when it comes to public trust about transparency, privacy and IT.

Despite the obviously corrosive effect that years of bungling and deceit must have had on public confidence, Minister Robert has led with his chin and asked us to trust him. Just a reminder, this is also the Minister who has overseen the latest (unlawful) version of Robodebt, and the Minister who blamed the MyGov meltdown on an “external attack” which subsequently turned out to be Australians legitimately trying to use the system.

Robert’s explanation and apology?

“My bad.”

So Minister Robert is asking us all to install and run an app that, he assures us, we can trust, just like him. Why? His big selling points:

  • The code is available
  • The Commonwealth doesn’t “get” the data, the States do
  • The data is encrypted
  • When the pandemic is “done”, Robert will “blow away the national datastore”
  • The app “only” swaps names and phone numbers
  • Team Australia

Let’s consider those in turn. The code is available – excellent; let’s face it, without that guarantee there wouldn’t be a snowball’s chance anyone could trust it.

The Commonwealth doesn’t “get” the data. This is an interesting use of the verb “get”. Let’s discuss:

  • Who receives the data? The Commonwealth.
  • Who stores the data? The Commonwealth.
  • Who distributes the data? The Commonwealth.
  • Who then receives it? The States.
  • Who retains the data? The Commonwealth and the States.
  • Who decides when to delete the Commonwealth data? The Commonwealth, in the person of Minister Robert.
  • Who decides how to use, store and delete the States’ copies of the data? Who knows. Minister Robert hasn’t specified, and isn’t telling us.

But wait a minute, Robert said the data is encrypted. We’re safe, right?

Well, the States have to be able to decode the data, obviously. So they must have keys. But who owns the keys? That would be whoever distributes the app. The Commonwealth.

But it’s irrelevant, anyway, because remember the first piece of legislation I listed earlier? The Commonwealth can break any encryption it chooses. So the Commonwealth has the entire database which it can read, if it wishes. An interesting reading of Robert’s claim “The Commonwealth doesn’t get the data”. My trust levels are now dropping rapidly, from a starting point of zero.

Next point – Robert assures us that he will “blow away” the data when the pandemic is “done”. Very quickly – there is no legislative guarantee of this whatsoever. There is no independent oversight whatsoever. Finally, the definition of “done” hasn’t been given, and is entirely at the discretion of the Minister, the one who has overseen Robodebt and MyGov.

In the meantime, he’ll have a database of the names and phone numbers of every person who has been in contact with someone who tested positive over the last three weeks. So will the States, and Robert hasn’t made any stipulations as to how they’ll protect the data, how long they’ll keep it, or what they’ll do with it.

Trust levels? Subterranean.

Which is very disappointing. On paper, the case for a contact tracking app looks good. It’s also relatively simple for the government, if it’s serious, to do a few straightforward things to assure us:

  • Publish and escrow the code
  • Legislate the conditions for its use, with significant penalties for misuse
  • Provide real oversight removed from political interference
  • Sunset clauses
  • A minimum of centralised data, and real anonymity

None of that is hard to do or would take any longer than the time it will take to create the app and see what the uptake is.

Doing those things would significantly allay people’s reasonable fears.
Not doing those things, which are simple, reasonable and not time-consuming, doesn’t just leave us fearful; it pretty much confirms our worst suspicions about, at best, the ineptitude of the rollout, or at worst, the longer-term intentions for the data.

No matter how useful the app might be, if I can get much the same protection from staying home and washing my hands, and the government doesn’t provide those simple assurances, there is no way I’m using that app.

Let’s compare Robert’s announcement against my wish list:

  • Publish and escrow the code ✖️ However, we already know that the code is going to exchange names and phone numbers
  • Legislate the conditions for its use, with significant penalties for misuse ✖️ “Trust me, I oversaw Robodebt, and when MyGov crashed I wrongly claimed a cyber-attack and then said ‘My bad’”
  • Provide real oversight removed from political interference ✖️ The Minister and the States are going to decide what happens
  • Sunset clauses ✖️ “Trust me”, see above
  • A minimum of centralised data, and real anonymity ✖️

Well, that didn’t go well, did it. Faced with a long list of untrustworthy acts and errors, the government has decided to go with “trust me”, together with the confidence-boosting assurance that although they’ll own all the data, it will only be given to the States to use as they see fit, and Robert pinkie-promises to delete it “when it’s all over”, where “over” is undefined.

Compare that proposed solution with another real, published design that the government could adopt: one that could meet all the technical requirements and, with some goodwill from the government, could be made subject to the necessary oversight and sunset provisions.

This one: https://github.com/DP-3T/documents

So what’s the probability I’ll accept this government’s assurances about this app? Zero.

Probability that, knowing how this app is engineered and who is handling the data, I’ll use the app? Zero.

Probability that a real, useful application exists to do the job, but our government isn’t using it? 100%.

Faced with a real opportunity to make a significant difference the government has instead chosen a poor solution, badly implemented, and instead of providing real safeguards has resorted to cheap salesmanship using “Team Australia”, “Trust us” and “We’ll keep you locked up if you don’t use it”.

Disappointing, but unsurprising.

[Edit, 2020-04-27]

Since I originally wrote this piece there have been some developments; most particularly the app has been published, yet the source still isn’t available. Since its publication I’ve seen a wide variety of opinions and arguments both from official sources and by journalists and the general public.

Very little that has transpired, however, has actually altered any of my initial concerns or answered any of the questions. Rather than repeat ad nauseam the various questions and arguments on Facebook, Twitter and elsewhere I’ll gather them up here.

First of all, using this app is, for the moment, voluntary. I personally won’t be using it in its current incarnation with the protections as provided by the government. That doesn’t mean that you shouldn’t or can’t use it, so let’s dispose of that furphy immediately. I would strongly urge you not to use it, for all the reasons I’ll outline, but in the end it’s your choice. What I do object to is others telling me either that I have no valid reasons, or refusing to provide proof for various unsubstantiated assertions they’re making, or suggesting that I’m endangering others by not using the app.

That last allegation is a particularly unpleasant one, based as it must be on total ignorance of who I am or my circumstances or behaviour, so it has to go down as a particularly low form of emotional blackmail and the last resort of people who can’t provide any better arguments. Appeals to “the common good” are trivially easy to make, yet for some reason (for example) we’ve failed to outlaw tobacco smoking or poker machines, probably responsible for more deaths and misery on an annual basis than Covid-19.

Please understand I’m not advancing an argument of the form “well ‘flu kills more people than Covid-19”, because that’s seriously flawed as well. What I’m saying is that an appeal to “the common good” as the overwhelming argument for doing anything is immensely vague and unquantifiable. For example, apparently corporate tax cuts are going to be “for the common good”, but there’s certainly a lack of hard data or consensus around that. If, in turn, it appears that the solution has considerable scope for public harm, then it’s fair to weigh the clearly identified problems against the unquantified “public good”. In particular, the “public good” argument is in fact a false dichotomy, since it presumes that there’s only one solution to the problem—the government’s—and it’s all or nothing. We know that’s not true, so it’s fair not only to weigh up the problems but also to compare the government’s solution with others that might achieve the “public good” that’s apparently so important.

So, my concerns about the tracking app fall into three categories.

Trust

The government, despite making various promises which it so far hasn’t kept, and which I’ll describe, is basically asking us to trust it. It’s a reasonable question, then, as to whether that trust is easily given, or whether in fact the government has very little trustworthiness to draw on.

Security

A gathering of personal data as intrusive and extensive as this app performs would ordinarily be cause for significant concern, not only about its supposedly legitimate use but also about the very real risks of its use for other purposes: official use by parts of the federal or state governments, unofficial use by individuals with official access, or access by illegitimate actors without approval. It’s reasonable therefore to investigate what mechanisms have been put in place, or have been proposed or promised, to define legitimate use and to prevent illegitimate use.

Architecture

The basic purpose of the app is beguilingly simple—use smartphone technology to detect and log people with whom you’re in close proximity for more than fifteen minutes; save that information, and then if you’re diagnosed with Covid-19, at your discretion provide it to “officials” to allow your contacts to be traced and notified.

The devil is in the myriad architectural details: how are users identified, how is data protected from snooping, how is aggregated data protected, who gets to know who the users are, how are they notified, how can they be assured data will be deleted, and so on. It turns out there are quite a large number of different ways this can be achieved, with varying degrees of anonymity, protection and transparency, and it’s fair to ask how the government’s solution rates.
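To make that concrete, here’s a very rough sketch, in Python and purely for illustration (the actual source code still hasn’t been published, so every name and threshold here is my assumption), of the kind of local record any such app has to keep: an identifier heard over Bluetooth, when it was heard, and for how long.

```python
# Illustrative only: the sort of encounter record a proximity-tracing app
# keeps on the phone. Names and thresholds are assumptions, not the real app.
from dataclasses import dataclass
from datetime import datetime, timedelta

CLOSE_CONTACT_THRESHOLD = timedelta(minutes=15)  # the "more than fifteen minutes" rule

@dataclass
class Encounter:
    broadcast_id: str      # whatever identifier the other phone broadcast over Bluetooth
    first_seen: datetime   # start of the contact window
    last_seen: datetime    # most recent sighting
    median_rssi: int       # signal strength, a rough proxy for distance

    def is_close_contact(self) -> bool:
        """True if the encounter lasted long enough to count as a close contact."""
        return (self.last_seen - self.first_seen) >= CLOSE_CONTACT_THRESHOLD
```

Whether that broadcast identifier is a stable token that can be tied back to a name and phone number, and whether the log stays on the phone or is shipped to a central server, are exactly the architectural details that matter.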

The Concerns

First of all, the overall trustworthiness of the government. I’ve outlined a number of significant concerns earlier in the article, but it’s worth detailing several recent and significant events to be added to the list.

Source code

The government very clearly promised that the application source code would be published. It hasn’t been, despite the fact that the application itself has now been released and the government is encouraging its use, based only on “trust me”. This is, prima facie, a total betrayal of trust, and even by itself a sufficient reason not to use the application. Rather than say up-front that they wouldn’t or couldn’t publish the source code (with the obvious corollary for trust), they promised something and then didn’t deliver. On an issue as delicate and important as this, if it’s as important as proponents keep saying it is, how could they possibly fall at this first hurdle, and how could you retain any trust in anything they now promise? “Trust me”? No, I don’t.

Data security

There are three very obvious threats to the data the government collects and stores: malicious intrusion by third parties; force majeure, either open or covert, if the data is stored outside Australia; and accidents and incompetence by the data managers.

It is clearly imperative that the data is not only under Australian government control, but that it physically resides in Australia so that it’s not open to legal access by foreign states. Yet the government tender, which excluded Australian storage providers, was let to an overseas company. After a massive outcry the government changed its arrangements and now claims that the data will be physically housed in Australia, removing one of the three threats. But the simple fact that this wasn’t originally a clear condition yet again indicates that the government simply isn’t competent to manage this data.

Both the storage provider (Amazon) and the government itself have suffered numerous significant data breaches, even in the relatively recent past, and those breaches have in many cases been attributed to hostile state actors or overseas agents. So even though the data is physically in Australia there’s still very little reason to be confident it will be protected, particularly from foreign state actors who would be very curious to know who is meeting whom, and when.

You’d have to be very naive not to realise what a tempting prize this app and its data are for all kinds of malicious actors. There’s absolutely no doubt that both the app and the databases will be a number one target for all kinds of hackers, and our government’s track record there is very poor.

All of this also ignores the significant issue that the data will also be stored and used by the states, which if anything are even less competent than the Commonwealth.

This of course also totally ignores the threat of covert but possibly legal access by our own intelligence services in pursuit of journalists, whistleblowers, terrorists, and so on. Guarantees? “Trust me on our track record”.

Legal protections

Despite considerable concern about the very vague “trust me” promises made initially, the only protections currently in place are ministerial regulations. These were established at the stroke of a ministerial pen, and could be removed or altered just as easily, with no other controls possible. This is the same kind of ministerial discretion that gave us au pairs and sports grants.

Despite promises that real legislation will be enacted when parliament next sits (after the app’s release), there aren’t even drafts of the proposed legislation available. So the government has been able to build and release the app, but hasn’t been able to whip up the legislation that would provide the guarantees it’s promising against abuse, or to expose what it’s actually going to put in place.

The same goes for vague promises that it’s bulletproof and nobody can access the data for any other reason. If the government can arrest, try, sentence and jail a whistleblower in total secrecy, what makes you think it doesn’t have the power to break encryption and issue “national security” warrants against this data? It has both powers. Do you have any legal opinion that they couldn’t be applied here? No? “Trust me”.

State controls and sunset clauses

Again, despite repeated verbal assurances that “only state health officials” will be able to see and use the data, we’ve not seen any actual legislation guaranteeing this.

After a lot of song and dance about the Commonwealth not “getting” the data, which is demonstrably rubbish, there has been almost total silence about what guarantees there are about the equivalent protections and controls over the per-state copies of the data, now being stored, controlled and accessed by state governments. Where are they storing it? How are they protecting it? Likewise the often-repeated promises about the data being deleted “when it’s all over”.

There has still been no definition of when that will be, except at the minister’s discretion, and the deletion promise covers only the Commonwealth copy. That would be Stuart “My bad” Robert. “Trust me”.

Those four concerns remain despite the fact that the app is now live and in use. It’s a very clear indication of the government’s real level of care and concern about the issues around the app that these undertakings have either been explicitly broken, or still not fulfilled. In the light of all that, my confidence that government is either competent or trustworthy?

Absolutely none.

The questions around the yet-to-be-seen draft legislation cover most of the second concern. It’s simply not possible to determine whether or not the government will actually put appropriate and stringent controls in place without seeing any details. Just as importantly, we don’t know whether appropriate policing and oversight will be provided, because a law that’s not policed or enforced is worse than useless, since it gives a false sense of security. What’s clear from the government’s track record is that it is unlikely to put any of this into the hands of an independent, arm’s-length body. Again, this is something that is entirely possible to do and would increase public trust and confidence. The fact that the government has so far not even suggested it might do it tells you whether they care about substance or appearance.

The last concern is around the gory technical details of the app’s architecture and implementation. For a start, even if the app’s source code were available, which it isn’t, this would only tell us how the initial data gathering and storage would operate. It wouldn’t tell us anything about what happened once the data was uploaded to central servers, or at any time after that, so the app architecture is only part of the puzzle.

However, without going into mind-numbing detail, it’s fair to say that there are several basically different ways of achieving the fundamental goal of knowing whether you’ve been exposed to a possible infection, or who should be warned if you’re infectious.

The first and really basic architectural decision is whether the data remains distributed and controlled by individual phone owners, or whether it’s aggregated centrally. If it’s distributed, there’s no possibility for many of the subsequent abuses. If it’s central, there is. Australia has chosen central, even though distributed solutions exist.

The next decision is how to protect the identities of users. There are solutions that anonymise users at the phone level, and keep regularly changing encrypted identities so that even if data is stolen it won’t be possible to link the transmitted identities to actual people. The government hasn’t chosen one of those. It has chosen a solution that makes central re-identification possible.
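For comparison, here’s roughly how the rotating-identifier approach works in decentralised designs such as DP-3T. This is a simplified sketch of the idea only; the key size, the rotation interval and the derivation function are my placeholders, not the published specification.

```python
# Sketch of rotating ephemeral IDs: the phone keeps a secret daily key and
# broadcasts only short-lived identifiers derived from it. Anyone capturing an
# ID can't link it to the phone, or to the phone's other IDs, without the key.
import hashlib
import hmac
import os

def new_daily_key() -> bytes:
    """A fresh secret generated on the phone each day; it never leaves the
    phone unless the user chooses to report a positive test."""
    return os.urandom(32)

def ephemeral_ids(daily_key: bytes, per_day: int = 96) -> list[bytes]:
    """Derive a day's worth of short-lived broadcast IDs (here, one per
    fifteen-minute epoch) from the daily key."""
    return [
        hmac.new(daily_key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(per_day)
    ]
```

The government’s design instead ties the broadcast identifier back to a name and phone number held centrally, which is precisely what makes re-identification possible.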

The last decision is how users inform authorities or each other of possible infection. The distributed models allow all the alerts to happen without central intervention. A person who tests positive is, prima facie, already known to authorities as the result of that test; however, whether that person’s contacts also need to be known to central authorities, or whether it’s sufficient for them simply to be informed and make their own decisions, is an architectural choice. The Australian government has chosen the big-brother central model, again.
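Again purely as a sketch of the decentralised alternative, mirroring the hypothetical derivation above: someone who tests positive publishes, with consent, only their daily keys, and every other phone re-derives the ephemeral IDs those keys produced and checks them against its own local log. No central authority ever learns who met whom.

```python
# Sketch of decentralised exposure matching: the check runs entirely on the
# phone, against its own local encounter log. Derivation details mirror the
# sketch above and are illustrative only.
import hashlib
import hmac

def derive_ids(daily_key: bytes, per_day: int = 96) -> list[bytes]:
    """Re-derive the broadcast IDs a published daily key would have produced."""
    return [
        hmac.new(daily_key, f"epoch-{i}".encode(), hashlib.sha256).digest()[:16]
        for i in range(per_day)
    ]

def exposed(published_keys: list[bytes], my_encounter_log: set[bytes]) -> bool:
    """True if any locally logged encounter matches an ID derived from a key
    published by someone who tested positive."""
    return any(
        eph in my_encounter_log
        for key in published_keys
        for eph in derive_ids(key)
    )
```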

So none of the choices that would ensure anonymity and confidentiality and guard against abuse of the data has been taken; instead, the more dangerous, less private and more abuse-prone methods have been chosen.

It’s worth pointing out that most European countries making these same decisions, most recently Germany, are opting for non-centralised, anonymous architectures: clearly they not only want the benefits of having the app, they also care about the concerns of their citizens and want the best chance that citizens will see and believe there’s little risk in using it.

Not Australia. “Trust us”. Except I don’t, and for many good reasons.

So, finally, what are some of the arguments being mounted to dispute all of the problems I’ve described above?

  • “The code has been published”. It hasn’t. Next.
  • “The code will be published, it just takes a little time.” Seriously? The app is complete and has been released; the code was promised and still hasn’t been delivered. You’d have to be really gullible to believe this. Perhaps it will be published, eventually, but as I’ve observed, even if it shows there are no malicious back doors or other problems, all the governmental and data-management problems remain. In the meantime it’s yet another breach of trust, and a pretty major one.
  • “The code can’t be published because it would allow hackers in”. Again, seriously? Security through obscurity is what gives you weekly security patches from Microsoft. The most robust and hardest-to-hack code is open source, because it’s had hundreds or thousands of eyes on it. Encryption algorithms are published, and they’re not hacked (see the short example after this list). This is a giant red herring, and anybody trailing it is either extremely ignorant or deliberately malicious.
  • “You’re not serious about privacy otherwise you wouldn’t be on Facebook/Twitter”. Apart from being irrelevant to my concerns about the app’s privacy, this is a ridiculous confusion of concerns. If Facebook knows things about me, it uses them to make money from advertisers. If our government or other governments know things about me, there may be serious legal consequences. I might be a journalist, a whistleblower, a victim of domestic violence, in witness protection, or doing sensitive commercial negotiations; there are myriad reasons why the comparison with Facebook and Twitter is just laughable, quite apart from being an attempt to avoid discussing the real concerns by whataboutism.
  • “There is so legislation”. No, there isn’t. There is a ministerial regulation that was created without debate, oversight or any other controls or discussion, and just as it was created in a day it could be just as easily changed or removed in a day, without anybody to stop it. The only reliable protections here are legislated ones, and for preference ones with significant penalties, managed by an independent body, and provided with real policing powers. Otherwise it’s just window-dressing.
  • “It’s proof even against a court order”. Says who? Apart from the fact that this “protection” could disappear as quickly as it appeared, the reality is that Australia has a large array of very powerful and very secretive legal processes that give police and intelligence agencies significant and largely unobserved powers. The AFP raids on journalists demonstrated the government’s intentions. The existence of a prisoner who was arrested, tried, sentenced, jailed, and ultimately released without any public knowledge or scrutiny should be some indication of the legislative over-reach of this government. The minister can say “nothing can touch this”, but the problem is that the police have powers to serve warrants that nobody is allowed to talk about. So nobody would ever know, and until actual legislation is exposed and debated there’s no way to really know whether what the minister says is true, or whether it’s another version of “Robodebt is legal, trust me”.
  • Any and all claims about what the current biosecurity regulation does or doesn’t do. Seriously? Do you have an independent legal opinion on them? Have they actually been tested in the courts, like Robodebt or the AFP warrants? In reality all of this is just a more complicated version of “trust me”, because until there’s real independent scrutiny we’re relying on the government’s word, and its track record on matters like this is lamentable.
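On the “publishing the code lets hackers in” furphy, here’s the short example promised above. Cryptographic algorithms are completely public, right down to their official test vectors, and they stay secure because protection comes from the keys, not from hiding the code.

```python
# SHA-256 is fully published, as is this official FIPS test vector for the
# input "abc", yet the algorithm remains unbroken: secrecy of the code is not
# what protects anyone.
import hashlib

FIPS_TEST_VECTOR = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"

assert hashlib.sha256(b"abc").hexdigest() == FIPS_TEST_VECTOR
print("Published algorithm, published test vector, still secure.")
```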

So, that’s the longer up-to-date version. No doubt more rubbish arguments will be proposed, and if necessary I’ll add them to this article, but the bottom line remains that the government had every opportunity to do this right, and it has blown it at every step and continues to blow it. Despite really wishing we had a technology-assisted tracking app, there is no way I’ll be using this one, and I’d recommend everyone else say the same and demand they do it properly.

We need it, it exists, the protections are available, we just have to get the government to do the right thing. If few people use the current app it will make the argument for the proper implementation so much stronger.
