AI to the Rescue: OpenAI’s New Model Could Help Social Media Platforms Tackle Harmful Content
OpenAI, the creator of ChatGPT, has made a strong case for using AI in content moderation on social media platforms. The approach could transform operational efficiency by accelerating one of the industry's most labour-intensive tasks.
Despite heavy investment in generative AI by big businesses such as Microsoft and Alphabet, concrete improvements across a variety of industries have yet to be realised. OpenAI, which is backed by Microsoft, recently announced that its GPT-4 model can cut the content moderation process from months to hours while providing greater consistency in content labelling.
The significance of this innovation becomes clear when considering the demanding nature of content moderation, particularly for social media giants like Meta (Facebook’s parent company). Meta relies on a global network of moderators to shield users from harmful content such as child exploitation and graphic violence.
OpenAI acknowledges that content moderation is often slow and stressful for human moderators, with a real psychological toll. Its approach aims to cut the time needed to develop and adapt content policies from months to hours.
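In broad strokes, this kind of workflow pairs a written policy with each piece of content and asks the model for a label. The sketch below is illustrative only: the policy wording, label set, and helper names are assumptions for this example, not OpenAI's published moderation interface, and the actual API call is shown only as a comment.

```python
# A minimal sketch of policy-based content labelling with a GPT-4-style model.
# POLICY, the label set, and the helper names are illustrative assumptions.

POLICY = (
    "Label the user content as ALLOW, REVIEW, or REMOVE.\n"
    "REMOVE: child exploitation, graphic violence, credible threats.\n"
    "REVIEW: borderline or ambiguous material needing a human decision.\n"
    "ALLOW: everything else.\n"
    "Respond with the label on the first line, then a one-line rationale."
)

VALID_LABELS = {"ALLOW", "REVIEW", "REMOVE"}


def build_messages(policy: str, content: str) -> list[dict]:
    """Assemble a chat request pairing the moderation policy with one item."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]


def parse_label(model_reply: str) -> str:
    """Read the label from the reply's first line; default to human REVIEW."""
    text = model_reply.strip()
    first_line = text.splitlines()[0].strip().upper() if text else ""
    return first_line if first_line in VALID_LABELS else "REVIEW"


# The real call would go through the OpenAI client, roughly:
#   reply = client.chat.completions.create(
#       model="gpt-4", messages=build_messages(POLICY, post))
# Here we exercise only the local plumbing:
messages = build_messages(POLICY, "Check out my vacation photos!")
print(len(messages))                                   # 2
print(parse_label("ALLOW\nHarmless personal post."))   # ALLOW
print(parse_label("something unexpected"))             # REVIEW
```

Defaulting unparseable replies to REVIEW keeps a human in the loop, which matches the consistency-plus-oversight framing above: the model accelerates triage, and ambiguous cases still reach a moderator.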
OpenAI CEO Sam Altman has stated that the company does not train its AI models on user-generated data, underscoring its commitment to ethical AI development and data ethics.
However, a report in Analytics India Magazine suggests OpenAI may face financial difficulties: the company’s fiscal stability could be at risk, potentially leading to insolvency by the end of 2024. Running ChatGPT alone reportedly costs OpenAI almost $700,000 per day.