ChatGPT-4o: Experts See Data Privacy and Cybercrime as Potential Threats

By Consultants Review Team Friday, 31 May 2024

As OpenAI's multimodal generative AI platform ChatGPT-4o gains popularity among users, technology experts have raised concerns about data privacy and the possibility of cybercrime. Other concerns they identified include assigning responsibility for misuse and hallucinations, intellectual property and copyright conflicts, voice cloning, and a lack of transparency in how data is used.

This is significant because India currently has no specific regulation addressing the downsides of generative AI models such as GPT-4o. The Digital India Act, which is expected to provide a framework for AI laws, is still in the works.

Recently, the government issued an advisory directing generative AI businesses to make their platforms public only after thoroughly evaluating their models for bias and discrimination.

"In terms of GPT-4o, this application is now capable of creating hitherto novel cybersecurity consequences, viruses, malware, and computer contaminants. We will now see more scam calls and messages," stated Pavan Duggal, a Supreme Court lawyer and cyber law specialist.

"When people tend to trust whatever they perceive as true, disinformation, misinformation, and propaganda will become critical challenges. Deepfake content will be spreading left, right, and center with GPT-4o," Duggal said, adding that a regulation to govern artificial intelligence technology is necessary.

GPT-4o accepts inputs across multiple modalities, including text, audio, and images, and produces a mix of text, audio, and image outputs. It functions as an artificial intelligence assistant, generating responses to user prompts. On Thursday, OpenAI announced that advanced GPT-4o features will be available to users for free.

One of the most recent concerns to arise following the debut of GPT-4o was an intellectual property dispute, in which Hollywood actress Scarlett Johansson accused OpenAI of using a voice resembling hers in one of its models without her authorization.

"If generative AI creates data tomorrow, who is responsible for it, and who will be held liable if there is a hallucination? Today's challenges include deepfakes and voice cloning; tomorrow's issues will be different, so we need rigorous regulations to govern AI," said Saakshar Duggal, an advocate practicing in the Delhi High Court.

AI hallucinations refer to erroneous or fabricated information generated by large language models (LLMs), the technology underpinning AI chatbots.

According to Saakshar, GPT-4o requires access to users' cameras for its vision capabilities, meaning the model can see users and their surroundings in order to respond to certain prompts, which may pose a privacy risk. He likewise called for transparency in how generative AI businesses use data.
