AN image-generating artificial intelligence program has been spotted creating its own vocabulary, leading to recent claims it has turned "demonic".
The creators of the DALL-E 2 AI were surprised to discover its hidden language but have exclusively told The Sun that other AI poses a more serious danger.
Genius physicist Stephen Hawking once revealed fears that thinking machines could one day take over the world.
Lots of people share this view, so it's no surprise that news of a system called DALL-E 2 writing its own language sparked concern.
Giannis Daras, a computer science Ph.D. student at the University of Texas, published a thread detailing DALL-E 2's unexplained new language.
Daras told DALL-E 2 to create an image of "farmers talking about vegetables" and the program did so, but the farmers' speech read "vicootes" - an unknown word made up by the AI.
Daras fed "vicootes" back into the DALL-E 2 system and got back pictures of vegetables.
"We then feed the words: 'Apoploe vesrreaitars' and we get birds," Daras wrote on Twitter.
"It seems that the farmers are talking about birds, messing with their vegetables!"
Daras and his co-researcher Alexandros G. Dimakis have written a paper on DALL-E 2's "hidden vocabulary" and spoke to The Sun about their research.
They told The Sun: "What we found is that there seem to be some special gibberish words that make DALL-E 2 generate objects, with some consistency. For example, when we ask the image generator to create 'Apoploe vesrreaitais' (a gibberish text) it consistently gives images of birds.
"Also DALL-E 2 can sometimes produce images of these gibberish words, and this is how we found them.
"One likely explanation is that these are mashups of real words that mean species of birds, e.g. 'Apoploe' could be a mashup of Apodidae and Ploceidae, which are real bird families."
They added: "We want to emphasize that this is an adversarial attack and hence does not need to work all the time.
"If a system behaves in a wildly unpredictable way, even if that happens one out of ten times and under an unusual input, that is still a massive security and interpretability issue."
Despite several reports claiming that this is an example of AI thinking for itself and disobeying humans, Daras and Dimakis aren't too concerned.
They told The Sun: "We do not consider AI becoming independent as something plausible with the current techniques."
However, they do think AI still poses some very serious risks.
The researchers explained: "There are numerous very serious risks from AI at the moment that are much more immediate.
"Fairness, transparency, robustness and interpretability of AI models, as well as the concentration of data and computing power in very few organizations, come to mind."
They added: "AI models, like any other technology, are not inherently good or evil.
"However, AI models can already facilitate evil actions like discriminating against certain populations, amplifying biases, polarizing social groups and increasing monopoly barriers to entry."
So the good news is that you don't have to worry about demonic AI machines taking over the world any time soon.
That doesn't mean AI is perfect, though, and there are issues that need to be addressed right now.
The researchers told The Sun: "We think that there are a lot of problems to be addressed before we should worry about AI consciousness.
"Our research finding adds one more item to the long list of AI security and interpretability challenges that need to be addressed."