How science fiction can inform a generally staid profession about the legal issues of the future.
In a darkened seminar room at the University of Sussex, Lilian Edwards is telling her audience about Captain America 2. It is, she says, a must-see movie. It’s an odd claim for a Professor of Internet Law. It’s even odder that the audience, most of whom are also lawyers, are furiously scribbling notes on what they should be watching at the weekend. Welcome to the marvellous mayhem that is the Gikii conference.
Every year for the last decade, a group of tech-savvy, sci-fi-loving lawyers have gathered to examine some of the strangest ideas ever dreamed up. Gikii (pronounced Geeky) is a two-day meeting focussed on geek law – but most of its presentations are unashamedly derived from science fiction.
In the inaugural 2006 conference, for instance, Adrian de Groot of Radboud University talked the crowd through the legal ramifications of developing open-source killer robots. At the same meeting, Judith Rauhofer of the Edinburgh Law School interpreted JK Rowling’s Harry Potter and the Half-Blood Prince as a parody of the British response to the terrorist threat.
Coming right up to date, this year’s conference included an exposé of Disney’s Frozen as an example of the “chilling effect” created by enforced secrecy and self-censorship (see Not chilled, but frozen, at right).
It might seem light-hearted – perhaps even self-indulgent, given these lawyers’ love of all things geeky. But these legal minds are among the best in Europe. Gikii, they insist, is a bona fide attempt to find ways to tackle issues of national and international importance.
Writers, artists and filmmakers harness their creative powers to successfully imagine the technological future; why not take advantage of their prescience to help the law keep up?
Science fiction and the law
In the imagination of sci-fi writers, the future is often dystopian. This is why these viewpoints are so useful, Edwards reckons: many of the pitfalls in the uptake of technology have been imagined beforehand, helping us avoid them.
“The Gikii crowd is a really influential bunch in legal circles, and lots of these discussions have turned out to be useful to our work,” says Edwards, who teaches at Strathclyde University in Scotland.
That’s why, for instance, Yung Shin Van Der Sype’s dissection of Dave Eggers’ novel The Circle was so well received at Gikii this year. Co-presented with her Katholieke Universiteit Leuven colleague Jef Ausloos, the analysis offers some salient lessons.
Mae Holland, the main character in The Circle, gets a job at a whizzy hi-tech company not dissimilar to Facebook, Google and other Silicon Valley enterprises. Once settled, she proceeds to sink deeper and deeper into a life lived almost entirely online. The company’s various mantras – “Secrets are Lies”; “Privacy is Theft”; “Sharing is Caring” – give us pause and make us reflect on our privacy laws and habits, Van Der Sype points out. “We are heading toward this kind of society,” she says.
Fortunately, we are in a better position to head off the kinds of trouble Mae encounters. This is one of the strengths of looking at fiction, says Van Der Sype; while Eggers has no “privacy knights” fighting against the pressure to live an utterly public life, they do exist in our world, and we can use the contrast between art and reality to highlight – and promote – the value of their role.
“We have people who are fighting for better data protection, but I’m not sure their voices are strong enough yet,” Van Der Sype says.
The rapid growth of the online world, and the myriad opportunities it presents, mean that many of its legal aspects are under-explored. It’s not that the law can’t cope with what happens online, it’s simply that applying existing law to these new situations takes thought, time and creativity – and that can be accelerated by letting existing stories and ideas fire up everyone’s imagination.
Van Der Sype would like to see more people reading fiction like The Circle so that they think about the issues it raises. That goes double for those on the technical side – software engineers and developers – so they become more involved with legal and ethical issues from the start.
“I think there needs to be more collaboration between computer scientists and technical people and lawyers and policy makers,” she says.
The robot judge
Take the possibility of robot judges, presented at Gikii this year by intellectual property lawyer and entrepreneur Anna Ronkainnen of the Danish company Trademark Now. Her analysis led her to the conclusion that the much-feared “robotisation” of the law won’t happen – the law is too nuanced for artificial intelligence software to make appropriate judgements, she reckons. However, thinking about how future technology might affect legal decision-making does prepare us for what might be coming.
Not that putting algorithms in charge would be all bad. Earlier this year, legal scholars in the U.S. proposed that algorithms might be better than human law-enforcement officials at determining what is an appropriate level of intrusion for the purposes of surveillance. The basic idea is that a human detective on the trail of a suspected criminal will always want to go one step further, driven by the sunk-cost urge to make past effort pay off. An algorithm has no such motivation, and will therefore be in a better position to call off the hunt when it is right to do so.
Then there are the upcoming problems of direct interaction between humans and robots. Raymond Cuijpers of the Eindhoven University of Technology works on the development of robot caregivers that will meet the needs of elderly and other vulnerable people.
The KSERA project researched how to achieve successful, effective interaction between humans and mobile robots, and part of this work involves thinking about the limits of legal responsibility. “You cannot put responsibility on the robot, it’s just a device,” he says.
Preliminary thinking is that liability lands on whoever sells the system, but Cuijpers admits that it’s not crystal clear. Imagine a scenario in which a patient suffers after following bad instructions from a robot. “What if the patient says, ‘the robot told me to do this’?” Cuijpers asks. The manufacturer might claim the patient was naïve to have followed the robot’s instructions, while the patient could claim the robot was too persuasive to ignore.
That is particularly plausible given the trend toward equipping robots with ever more human-like capabilities. “If you can convince a person that the program is no longer distinguishable from a real person, you could have a serious legal problem,” Cuijpers points out.
The advent of artificial intelligence will only make things more difficult in legal terms. If people want robots that clean their homes and empty their dishwashers, the robots will have to learn the layout of the house and where the crockery is kept.
“If a robot is able to learn, the program it executes is not the same as the program it was initially installed with,” says Thomas Bolander of the Technical University of Denmark. “If it then injures a human, the manufacturer might say you didn’t teach it to empty the dishwasher in the right way, so this is not our responsibility.”
When the internet can kill
The problem is potentially more acute if remote-surgery protocols go wrong. Remote surgery, too, has its origins in sci-fi: Robert Heinlein wrote about it in a 1942 short story, and we are over a decade past the point where a surgeon in the U.S. operated a robot that performed an operation on a patient in France.
Who would be legally responsible in the event of a breakdown in the connection to the robot?
Strict testing and certification measures are already in place to prevent breakdowns in the technology itself, but bullet-proofing the long-distance communication protocols is still a work-in-progress, according to Sandra Hirche, professor of control engineering at the Technische Universität München.
“The development of ‘safe-by-design’ protocols for communication over larger distances in robotic and other control applications is an active area of research,” she says.
The question of who gets sued if something goes wrong will depend on how this research plays out – and how well the robot engineers and data service providers comply with agreed standards.
Another pitfall highlighted by sci-fi is the fact that we might find ourselves loving our new technologies too much. The recent Hollywood movie Her highlighted the attachment disorders that might arise when computers become better friends than humans.
We might not even need sci-fi examples for long. The MIT Media Lab has spawned Jibo, a cute domestic robot that can make shopping lists, dial phones and even tell children a bedtime story. It is clearly designed so that people will enjoy being in its company. But what if we become too dependent on it?
“We can easily end up being addicted to technology, and I think we should be made aware of the potential dangers,” Bolander says. “We spend a lot of resources warning about the dangers of addictions like smoking and drugs. Perhaps we should think about technology addiction in the same way.”
As the tobacco industry knows only too well, such issues can land manufacturers in court. With technology, it’s not yet clear what the law has to say. Hopefully, the Gikii lawyers will be, like Captain America, just prescient enough to save the day.
By Michael Brooks