ChatGPT accounts on Dark Web Marketplace

Posted: Sun Jan 05, 2025 7:13 am
by rifat28dddd
Time is running out while governments and technology communities around the world discuss AI policy. The main concern is keeping humanity safe from misinformation and all the risks it involves.

And the discussion is heating up now that fears turn to data privacy. Have you ever thought about the risks of sharing your information with ChatGPT, Bard, or other AI chatbots?

If you haven’t, you may not yet know that technology giants have been taking serious measures to prevent information leakage.

In early May, Samsung notified its staff of a new internal policy restricting AI tools on devices running on its networks, after sensitive data was accidentally leaked to ChatGPT.

“The company is reviewing measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” said a Samsung spokesperson to TechCrunch.

They also explained that the company will temporarily restrict the use of generative AI on company devices until the security measures are ready.

Another giant that took similar action was Apple. According to the WSJ, Samsung’s rival is also concerned about confidential data leaking out, so its restrictions cover ChatGPT as well as some AI coding tools, while it develops similar technology of its own.

Even earlier this year, an Amazon lawyer urged employees not to share any information or code with AI chatbots, after the company found ChatGPT responses that resembled internal Amazon data.

In addition to Big Tech, banks such as Bank of America and Deutsche Bank are also implementing internal restrictions to prevent the leakage of financial information.

And the list keeps growing. Guess what! Even Google joined in.

Even you, Google?
According to Reuters’ anonymous sources, last week Alphabet Inc. (Google parent) advised their employees not to enter confidential information into the AI chatbots. Ironically, this includes their own AI, Bard, which was launched in the US last March and is in the process of rolling out to another 180 countries in 40 languages.

Google’s decision follows researchers’ discovery that chatbots can reproduce the data entered through millions of prompts, making it available to human reviewers.

Alphabet warned its engineers to avoid inserting code into the chatbots, as AI can reproduce it, potentially leaking confidential details of the company’s technology. Not to mention favoring its AI competitor, ChatGPT.

Google confirms it intends to be transparent about the limitations of its technology and has updated its privacy notice, urging users “not to include confidential or sensitive information in their conversations with Bard.”

Another factor that could lead to sensitive data exposure: as AI chatbots become more and more popular, employees around the world are adopting them to optimize their routines, most of the time without any caution or supervision.

Yesterday Group-IB, a Singapore-based global cybersecurity solutions leader, reported finding more than 100,000 compromised ChatGPT accounts, with saved credentials appearing in the logs of info-stealing malware. This stolen information has been traded on illicit dark web marketplaces since last year. Group-IB highlighted that, by default, ChatGPT stores the history of queries and AI responses, and this lack of essential care is exposing many companies and their employees.