In other words, if you think fake news is a problem now, wait until it can be manufactured in bulk by machines and pumped into our information ecosystem on an industrial scale.
Unlike deepfake videos, which can be exposed, undetectable textfakes masquerading as ordinary chat on social media platforms like Twitter or Facebook have the potential to influence us in subtler and more dangerous ways.
By weaving an elaborate web of lies designed to deceive and manipulate, their creators will aim to shift the way we think, immersing us in a soup of pervasive misinformation.
Toss in a plethora of other synthetic material – fake videos, images and audio – and it becomes increasingly difficult to trust anything on the Internet at all.
The technology also poses a new kind of challenge for social media companies, of course.
As neutral ‘platform operators’ rather than publishers, they have always argued that it is not their job to judge whether people are using their services to tell the truth or not.
What happens if the lies are being produced, spread and commented upon entirely by algorithms? Do the platforms have a responsibility to shut them down?
Hawking’s concern about AI revolved around the lack of rules or any kind of governance over a powerful new technology – and the urgent need to set standards to supervise its use.
“We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance,” Hawking said.
There can be few more pressing areas where rules and standards need to be applied than here.