The Future Of AI Is Important, But Scaremongering Is Not Productive

While the apocalyptic warnings from Russian President Vladimir Putin and Tesla CEO Elon Musk that Artificial Intelligence (AI) could spark WWIII and threaten our very existence are interesting, we must recognise that scaremongering does not provide decision makers with a productive route forward, and could plausibly do more harm than good.

In recent years there have been considerable advancements in the development of AI, and over the summer the technology has been headline news following dire warnings about its potential threat to global security.

First the Russian President, Vladimir Putin, proclaimed in a televised address that whoever leads the development and application of Artificial Intelligence will ‘become the ruler of the world’. Then SpaceX and Tesla CEO, Elon Musk, tweeted that he thought that ‘competition for AI superiority at national level (is the) most likely cause of WW3’.

However, any doomsday predictions need to be viewed within a wider context. While AI could certainly revolutionise military technology and open up a new frontier for international competition, we have seen from our work that it will help save lives and improve quality of life through applications such as better medical diagnosis and improved government services.

As we move closer towards developing workable AI technologies, it is vital that developers and policy makers proceed with care. It was therefore welcome to see the House of Lords Select Committee on Artificial Intelligence launch an inquiry into the implications of AI. The Committee is examining the potential economic, ethical, and social impact of advances in artificial intelligence, and plans to publish a report and recommendations for the UK Government in March 2018.

Regulation is a vital tool in ensuring the safe, responsible and beneficial application of AI, but it is also important that we strike the right balance to maximise the benefits of regulation while minimising any potential disadvantages. A better understanding of the issues, opportunities, and risks surrounding AI amongst policy makers and business leaders is the first step to ensuring the responsible application of this revolutionary technology, so any efforts which assist in this should be welcomed.

In that vein, back in June ASI published, in partnership with leading international law firm Slaughter and May, a white paper on the responsible deployment of AI in business, entitled Superhuman Resources: Responsible Deployment of AI in Business.

Our report argued that AI could become the most transformative workplace technology of the 21st century, but that it was vital for business leaders to deploy AI both effectively and responsibly. While the growth of AI is no different to that of other fields of innovation in respect of the need to develop ethics and laws simultaneously, it offers a real opportunity for businesses to shape and influence legal and regulatory frameworks as they begin to adapt to machine-based processes, products and services.

The development and deployment of Artificial Intelligence holds near-limitless potential applications across all sectors and areas of life, but it is important to recognise that it also undoubtedly poses considerable risks.

AI isn’t just another new technology. It has the ability to alter, and in some cases completely replace, human processes. AI’s virtually limitless potential to transform businesses, workplaces, militaries, and people’s lives across the world is incredibly exciting, but it is vital that any organisation that deploys AI ensures it does so responsibly.

Marc Warner is CEO at ASI Data Science
