What Questions Should A Computer Actually Ask?

AI (artificial intelligence) seems to know us better than we know ourselves. Depending on how you look at it, this means technology is either our best friend or a psychopathic stalker, a friendly ear or a private investigator. Assistants like trusty Siri, Google Assistant and Amazon’s Alexa are all listening in, collating your data. Just this March, Amazon surrendered some of its Echo recordings to police in connection with a murder investigation.

So, AI can literally be used as a witness. But just how helpful should a computer be? What questions should it actually be asking?

This is the big ethical conundrum we need to think about. Amid all the questions about whose jobs AI will replace, it’s this one that should be plaguing our dreams.

In theory, a computer shouldn’t ask a question it already knows the answer to. But computers are created by humans, and humans never seem to understand where to draw the line. So if AI is manufactured by perpetually curious individuals, how will it know when to stop?

As Stephen Hawking points out, it’s not AI that’s dangerous, but the goals we set it. AI doesn’t have a moral compass. It has a goal. If I programme it to take over the world, it’ll try its damnedest – it won’t care about its carbon footprint or the emotional collateral damage along the way.

It’s already gone past the Igor stage, skipping the perfunctory ‘Yes, master’ waffle and jumping straight to the solution: browsers scour your data history, anticipating what you’re actually thinking and serving up an answer before you’ve properly formulated the question.

Of all the major players, Facebook has always been seen as this omnipresent data lord – a bright blue beacon slurping up personal details like emoji milkshake and dishing out cold, calculated ads.

But if you’ve always seen Facebook as being in the ‘scary zone’ with your entire profile to hand, the University of Cambridge’s ‘Apply Magic Sauce’ tool took scariness to a whole new level in 2015. Apply Magic Sauce estimated your gender, intelligence, life satisfaction, sexual preference, political and religious preferences, education and relationship status – all determined from what you’d clicked ‘like’ on. That’s it.
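For a sense of how little machinery that takes, here’s a toy sketch in Python – synthetic data and an off-the-shelf classifier, not the Cambridge team’s actual model – showing how a pile of likes becomes a trait predictor:

    # Toy illustration only: invented 'likes', nothing like Apply Magic Sauce's real model.
    # Each user is a binary vector over pages; a stock classifier learns a trait from it.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_users, n_pages = 1000, 50
    likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked that page

    # Pretend a few pages happen to correlate with some trait (say, political lean).
    trait = (likes[:, [3, 17, 42]].sum(axis=1) + rng.random(n_users) > 2).astype(int)

    # Train on 800 users, test on the remaining 200.
    model = LogisticRegression().fit(likes[:800], trait[:800])
    print(f"Held-out accuracy: {model.score(likes[800:], trait[800:]):.2f}")

A dozen lines, no dark arts – and the real tools have vastly more likes, and vastly more users, to learn from.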

If that’s what a nifty research tool can harvest, just imagine what a global institution can muster.

Take the myriad election ads you were undoubtedly served over the past few months. Many of us hold our political views close – we don’t want just anyone knowing our business. So to have a faceless algorithm decide who would best run the country and advise us on how to vote is plain intrusive.

Take that one step further, as Cambridge Analytica did. It inferred your political views from as few as 30 Facebook likes before micro-messaging you, feeding you key policies and buzzwords. It arguably knows your political opinions better than your family does – maybe even better than you do. Given that the hung parliament we found ourselves with last month was apparently decided by around 450 votes – which is nothing – and given the undeniable power micro-messaging now holds, we’re in the scary zone proper.
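To see how blunt the mechanics can be, here’s another toy sketch – invented names, invented slogans, no connection to any real campaign – of what serving a message off the back of a predicted lean might look like:

    # Toy sketch of micro-messaging: score a user's likely lean,
    # then serve the ad variant that plays to it.
    MESSAGES = {
        "leans_left": "Candidate A will protect the NHS.",
        "leans_right": "Candidate A will cut your taxes.",
        "undecided": "Candidate A: a fresh start.",
    }

    def pick_message(predicted_lean, confidence):
        # Below a confidence threshold, fall back to the generic pitch.
        if confidence < 0.6:
            return MESSAGES["undecided"]
        return MESSAGES.get(predicted_lean, MESSAGES["undecided"])

    print(pick_message("leans_right", 0.83))  # -> the tax-cut line

Crude as that looks, scale it across an electorate and those 450 votes start to look very reachable.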

Unless you go full-on Stig of the Dump and completely remove yourself from tech, there’s no way of escaping data collation. Surveillance is the internet’s business model – it’s how Google and Facebook make their revenue. As depressing as it is, it’s out of our hands and in those of the developers.

It’s their job to create innovative, exciting tech that’ll advance the human race. Of course that tech needs to work commercially, but it needs to work ethically too. AI has to be on our side. Developers need to ensure AI protects our data – and with tech advancing this fast, they need to question a project’s ethics every single day.

If not, we risk falling into the Uncanny Valley. The term usually describes the unease or revulsion stirred by a computer-generated figure or robot that almost, but not quite, resembles a human – but the Valley is just as apt when discussing AI.

We’ve come so far. We let our phones organise our diaries. We trust them, we let them into our lives. And then something like Air Canada’s AI selectively emailing customers sends us tumbling straight back into the Valley and scurrying for the hills.

There’s one simple thing developers need to bear in mind if they don’t want to fall off the edge again: I am not a product. Yet I’m being treated as a monetary asset, just another number in advertisers’ spreadsheets. Maybe I’d be less frosty if they sweetened the deal a little: they’re using my data to better target me, so I should either be paid for that data or be able to turn it all off. The current one-sided arrangement isn’t sustainable.

Computers and AI are advancing, and that’s fine. Actually, that’s better than fine – that’s amazing. But there has to be some leeway. Advertising tech is already perceived as invasive, and if it doesn’t give us any insight, any control, it runs the risk of scaring off the people it’s catering to. Extrapolate that to life in general, and it brings us back to the central question: what questions should a computer actually ask?
