With the rapid evolution of AI technology, content generation has become far easier, or at the very least AI is massively aiding humans in creating content.
With content production skyrocketing, one might forecast that humans will no longer be required to produce it at all. According to one AI industry observer, generative AI could produce as much as 90% of digital content by 2025.
The evolution of AI has started to severely affect the workforce in many industries. The tech industry seems to have suffered the most: nearly 250,000 people were laid off in 2023 from roughly 2,000 tech companies, according to the website layoffs.fyi. However, there may be some relief for people working in content generation, especially for digital media.
Reports suggest that AI models may not produce optimal results if they continue to be trained on synthesized data, especially output from other AI products. Prompted about this, ChatGPT itself points to the same explanation: the theory of "Model Collapse". According to it, AI models need human-created content to capture the richness, diversity and creativity required to generate quality content.
As more and more AI-generated content enters the training pool, issues such as bias amplification, performance degradation, loss of creativity and reliability problems are likely to occur.
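The loss-of-diversity effect can be sketched in a few lines. The following is an illustrative toy simulation (not the cited research): each "generation" trains on synthetic samples from the previous generation's model, and we mimic the known tendency of generative models to under-sample rare events by discarding tail samples. The spread of the fitted distribution shrinks generation after generation.

```python
import random
import statistics

# Toy "model collapse" simulation. Assumption for illustration: each
# generation fits a normal distribution to synthetic data drawn from the
# previous generation, with rare (tail) samples under-represented.
rng = random.Random(42)

mu, sigma = 0.0, 1.0                 # generation 0: "human" data
sigmas = [sigma]
for generation in range(10):
    # Draw synthetic data, dropping samples beyond ~1.65 std devs
    # to mimic a model that rarely produces unusual outputs.
    data = [x for x in (rng.gauss(mu, sigma) for _ in range(2000))
            if abs(x - mu) < 1.65 * sigma]
    mu = statistics.fmean(data)      # "train" the next-generation model
    sigma = statistics.stdev(data)
    sigmas.append(sigma)

# sigma shrinks every generation: diversity steadily disappears.
print(sigmas[0], sigmas[-1])
```

The exact numbers are arbitrary; the point is the direction of travel, which matches the bias-amplification and creativity-loss concerns above.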
While the AI industry is keen to strengthen the guardrails in its products, strict detection of AI-generated content may become the need of the hour in the coming years, so that synthesized data can be kept out of the training pool.
AI bills that arrive in the future may mandate watermarking any content produced by AI, so that viewers know it isn't human-generated. For those who don't know, AI watermarking means embedding a hidden pattern in the output that allows algorithms to detect that the content was not created by a human.
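To make "hidden pattern" concrete, here is a minimal sketch of one well-known family of text watermarks, heavily simplified: the generator secretly prefers words from a "greenlist" derived from the previous word, and a detector that knows the scheme recomputes the greenlists and counts hits. All names and the toy vocabulary below are invented for illustration; real schemes bias a language model's sampling rather than choosing exclusively from the list.

```python
import hashlib
import random

def greenlist(prev_word, vocab, fraction=0.5):
    """Deterministically split the vocabulary based on the previous word,
    so a detector that knows the scheme can recompute the same split."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate_watermarked(start, vocab, length=20):
    """Stand-in for a language model: always pick a word from the
    greenlist of the previous word (a real model would merely bias
    its sampling toward the greenlist)."""
    words = [start]
    for _ in range(length):
        words.append(min(greenlist(words[-1], vocab)))
    return " ".join(words)

def detect(text, vocab):
    """Score = fraction of words falling in their predecessor's
    greenlist; unwatermarked text scores near 0.5 by chance."""
    words = text.split()
    hits = sum(w in greenlist(p, vocab) for p, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)

vocab = [f"word{i}" for i in range(50)]
text = generate_watermarked("seed", vocab)
score = detect(text, vocab)   # 1.0 for this fully watermarked toy text
```

The key property is that the pattern is invisible to a casual reader but statistically obvious to anyone who knows the seeding scheme, which is exactly what a watermark mandate would rely on.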
Things that Generative AI cannot do (Yet!)
Gen AI has caught the attention of the world like no other software. OpenAI's ChatGPT reached 1 million users in just 5 days, a pace since beaten only by Threads. The things the likes of ChatGPT and Midjourney can do were simply unimaginable a few years ago. However, there are some notable things that Gen AI cannot do yet.
Firstly, the technology cannot generate a purely novel idea. It may produce very appealing results, but all of them are inspired by the training data in one way or another. A line in the Harvard Business Review that caught my attention made the same point.
For example, although a music composition is ultimately a permutation and combination of notes, Gen AI may not be able to write new music that isn't influenced by the music data it was trained on.
Secondly, Gen AI cannot assess whether its next output is factually correct. It chooses that output through probability, which is why these models may suffer from hallucinations at times.
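A tiny sketch makes the point. The probabilities below are invented for illustration (a real model computes a softmax over its vocabulary): the model ranks continuations by how often they appeared in training text, not by whether they are true, so a plausible-sounding wrong answer can dominate.

```python
import random

# Invented, illustrative next-token probabilities for the prompt
# "The capital of Australia is ...". Frequency in training text,
# not truth, drives the numbers.
next_token_probs = {
    "Sydney": 0.55,    # frequent in text, factually wrong
    "Canberra": 0.40,  # the correct answer
    "Melbourne": 0.05,
}

def sample_next(probs, rng):
    """Draw one token in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_next(next_token_probs, rng) for _ in range(1000)]
# The wrong-but-plausible "Sydney" is the most common completion:
# hallucination falls straight out of probability-based generation.
```

Nothing in the sampling step consults a fact base; accuracy only improves if the probabilities themselves are shaped by better training data, which loops back to why human-created content matters.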