Hydrogen, given enough time, turns into people…
Edward Robert Harrison, 1995
You know it is concerning when someone like Scarlett Johansson has her voice (allegedly) used without her consent, and she can’t seem to do much about it.
While we all rejoice and salivate at the prospect of AI making our lives better, many have offered words of caution about the dangers of venturing into the unknown without some ground rules in place.
In generative AI, models are trained to recognize patterns in data and then use these patterns to generate new, similar data. If a model is trained on English sentences, it learns the statistical likelihood of one word following another, allowing it to generate coherent sentences.
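To make that idea concrete, here is a minimal, purely illustrative Python sketch of the same statistical intuition: a toy bigram model that counts which word tends to follow which in a tiny corpus and then samples new text from those counts. Real generative models are vastly larger and rely on neural networks, but the underlying principle of learning how likely one element is to follow another is the same; the corpus and function names below are invented for the example.

```python
# A toy illustration of "learning which word follows which" and sampling new text.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the rug"

# Count word-to-next-word transitions (a simple bigram table).
transitions = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

def generate(start="the", length=8):
    """Generate text by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the cat slept on the mat and the rug"; output varies per run
```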
That explanation sounds innocent enough, until you prod deeper. Most generative AI is trained on voluminous datasets that, of course, don't appear out of thin air. So does that training violate copyright?
The legal argument advanced by generative AI companies is that training on copyrighted works is not infringement, since these models do not copy the training data; rather, they are designed to learn the associations between the elements of writings and images, such as words and pixels.
The end user, however, is a different (social) animal. If someone types in prompts to generate images strikingly similar to Banksy's art, is the AI in the wrong, the end user, or neither?
Human creators (we hope) know to decline requests to produce content that violates copyright. Can AI companies build similar guardrails into generative AI? Or are we entering a phase in which all news, web content and the like are generated by computers while we stand by and gleefully type in 'creative' prompts to arrive at the next mind-blowing but clearly artificially generated image?
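What might such a guardrail look like? Below is a deliberately naive, hypothetical sketch: a prompt-level check that refuses requests matching a small blocklist. Production systems use trained classifiers and layered policies rather than keyword lists, and the phrases and function names here are invented purely for illustration.

```python
# A purely illustrative prompt-level guardrail; real systems use trained
# classifiers and policy engines, not hand-written keyword lists.

BLOCKED_PHRASES = [
    "in the style of banksy",   # hypothetical stand-in for a protected artistic style
    "clone the voice of",       # hypothetical stand-in for a consent-related request
]

def should_refuse(prompt: str) -> bool:
    """Return True if the prompt matches the simple blocklist above."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

if should_refuse("Generate street art in the style of Banksy"):
    print("Request declined: this prompt may infringe on an artist's rights.")
```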
Some ecosystems are already working towards a solution. Closest to home is the Dubai AI Campus, recently inaugurated by His Highness Sheikh Hamdan bin Mohammed. Based in the DIFC, the campus offers licenses to build and scale AI companies. But that is just the beginning.
What is needed is a holistic approach to AI: learning, deliberating, and understanding where the technology is headed; preparing for disruption across multiple sectors, for the possible loss of jobs, and for the possible creation of many more; and upgrading skills along the way.
And yes, arriving at a way to regulate these technologies: setting some ground rules so that AI remains beneficial and does not morph into something we discover, too late, is too big to control.