ChatGPT-4o Sparks Concerns Over Potential Use in Voice-Based Scam Automation
by Justin Erickson
Security experts are raising concerns about the potential misuse of OpenAI’s ChatGPT-4o model in autonomous voice-based scams. Because ChatGPT-4o can simulate natural spoken conversation, it could be exploited to impersonate trusted contacts or to automate phishing calls, allowing scammers to run fraudulent interactions at scale with no human on the line.

Experts advise individuals to remain cautious of unsolicited voice calls and to verify a caller’s identity through a separate channel before sharing sensitive information. In response, OpenAI pointed to its newer o1 model, which it says was designed with stronger safeguards against misuse, including restricted voice generation and greater resistance to producing unsafe content.
