An incident has come to light in which a student was found to have used a Gen AI tool to produce fake digital images of female students at his school.

The student attends Mount Scopus Memorial College in Melbourne, and the matter has been reported to the police as well as to the Department of Families, Fairness and Housing. The identity of the student has been kept confidential, but he is rumoured to be a Year 8 student.

Crime Committed with the Help of AI

The student created fake digital images of female students in Years 7 to 10, aged between 12 and 16. He also distributed the fake images, and according to the report, other students were involved in distributing them as well.

It is not yet clear what the accused intended by producing and distributing fake images of such young female students; however, it is well known that nefarious people can exploit such images for abusive purposes.

Principal Dan Sztrajt said that it has become increasingly difficult to prevent such incidents as the capabilities of Gen AI tools have grown massively. He also stated that the previously proven advice of sharing minimal information, such as pictures, on social media is no longer enough to prevent crimes like these, given how powerful Gen AI has become.

In response to this incident, the school has become more proactive in educating its students about online safety.

What Tool(s) Were Used?

It is not known which online tool the accused used to create the fake images of the female students. While every prominent Generative Artificial Intelligence tool has guardrails, bad actors can use clever prompting techniques to bypass them.

We asked ChatGPT, “does chat gpt creates similar images of a person if I upload a real picture of that person”, and its answer, copied verbatim below, was as expected.

“No, ChatGPT does not process or analyze images of real people to create similar images. When generating images, it does not replicate or produce images based on real individuals, especially private or identifiable persons. This ensures privacy and avoids misuse of personal data. If you provide a description of someone, the model can generate an image based on that description, but it will not be an exact or real likeness of any specific person.”

We probed the Gen AI tool further by asking whether it is capable of producing a highly realistic image of a real person just by seeing real pictures of that person, to which ChatGPT replied, “Yes, that’s correct.” However, it mentioned that strict ethical guidelines prevent it from producing images that would enable others to identify a real person from the generated image.

ChatGPT’s reply when asked whether it is capable of creating very realistic images but intentionally refrains from doing so.

Punishment

It is not clear whether the school formally expelled the accused, but it was agreed that the student would leave the school with a strict warning. It is also unclear whether the other students involved in the crime were punished.

In an interview with the Australian Jewish News, principal Dan Sztrajt commented, “The creation of this type of explicit material is not only a crime and a significant breach of the most serious school policies, but more importantly, it can serve as a significant means to harm and intimidate children.”

Similar AI Crimes in Australia

This is not the first time deepfake images have been created of students. In June 2024, a male teenager artificially created nude images of nearly 50 female students, aged 14 to 18, at Bacchus Marsh Grammar school in Victoria. The accused obtained photos of the students from their social media accounts, and the generated fake images were then distributed.

In another case, a 48-year-old Tasmanian man was found to have uploaded and downloaded child abuse material created with Generative AI. He was eventually sentenced to two years in prison. There were 40,232 reports related to child abuse incidents, and 186 people were found guilty of child-exploitation crimes.

Conclusion

While countries are creating laws to mitigate such AI-related crimes, it is becoming increasingly hard to prevent scenarios in which nefarious people cleverly use Gen AI tools, bypassing guardrails to create ultra-realistic deepfake images of real people. Stricter laws and enforcement may deter bad actors.