
Friday, 7 April 2023

What do AI chatbots know about us, and who are they sharing it with?

AI chatbots are relatively old by tech standards, but the newest crop — led by OpenAI's ChatGPT and Google's Bard — is vastly more capable than its ancestors, and not always for positive reasons. The recent explosion in AI development has already created concerns around misinformation, disinformation, plagiarism and machine-generated malware. What problems might generative AI pose for the privacy of the average internet user? The answer, according to experts, is largely a matter of how these bots are trained and how much we plan to interact with them.

In order to replicate human-like interactions, AI chatbots are trained on massive amounts of data, a significant portion of which is derived from repositories like Common Crawl. As the name suggests, Common Crawl has amassed petabytes of data over years of crawling and scraping the open web. “These models are training on large data sets of publicly available data on the internet,” Megha Srivastava, PhD student at Stanford's computer science department and former AI resident with Microsoft Research, said. Even though ChatGPT and Bard use what they call a “filtered” portion of Common Crawl's data, the sheer size of the model makes it “impossible for anyone to kind of look through the data and sanitize it,” according to Srivastava.
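Neither company has published the details of its filtering, but at that scale any sanitizing has to be automated — typically pattern-based detection and redaction of things like email addresses and phone numbers. Here is a minimal, purely illustrative Python sketch of that kind of PII scrubbing; the patterns and the function name are assumptions for illustration, not anyone's actual pipeline:

```python
import re

# Illustrative patterns only; real pipelines use far more robust detectors.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Redact obvious emails and phone numbers from a scraped document."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

sample = "Reach Dave at dave@example.com or +1 (555) 867-5309."
print(scrub_pii(sample))  # Reach Dave at [EMAIL] or [PHONE].
```

Even generous patterns like these miss context-dependent identifiers — a name sitting next to a street address, say — which is part of why experts consider fully sanitizing web-scale training data effectively impossible.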

Information you've put online — whether through your own carelessness or the poor security practices of a third party — could be in some far-flung corner of the internet right now. Even though it might be difficult for the average user to access, it's possible that information was scraped into a training set and could be regurgitated by a chatbot down the line. And a bot spitting out someone's actual contact information is in no way a theoretical concern. Bloomberg columnist Dave Lee posted on Twitter that, when someone asked ChatGPT to chat on the encrypted messaging platform Signal, it provided his exact phone number. This sort of interaction is likely an edge case, but the information these learning models have access to is still worth considering. “It's unlikely that OpenAI would want to collect specific information like healthcare data and attribute it to individuals in order to train its models,” David Hoelzer, a fellow at security organization the SANS Institute, told Engadget. “But could it inadvertently be in there? Absolutely.”

OpenAI, the company behind ChatGPT, did not respond when we asked what measures it takes to protect data privacy, or how it handles personally identifiable information that may be scraped into its training sets. So we did the next best thing and asked ChatGPT itself. It told us that it is “programmed to follow ethical and legal standards that protect users' privacy and personal information” and that it doesn't “have access to personal information unless it is provided to me.” Google, for its part, told Engadget it programmed similar guardrails into Bard to prevent the sharing of personally identifiable information during conversations.

Helpfully, ChatGPT brought up the second major vector by which generative AI might pose a privacy risk: usage of the software itself — either via information shared directly in chat logs or device and user information captured by the service during use. OpenAI's privacy policy cites several categories of standard information it collects on users, which could be identifiable, and upon starting it up, ChatGPT does caution that conversations may be reviewed by its AI trainers to improve systems.

Google's Bard, meanwhile, does not have a standalone privacy policy; it instead uses the blanket privacy document shared by other Google products (which happens to be tremendously broad). Conversations with Bard don't have to be saved to the user's Google account, and users can delete the conversations via Google, the company told Engadget. “In order to build and sustain user trust, they're going to have to be very transparent around privacy policies and data protection procedures at the front end,” Rishi Jaitly, professor and distinguished humanities fellow at Virginia Tech, told Engadget.

ChatGPT does have a "clear conversations" action, but pressing it does not actually delete your data, according to the service's FAQ page, nor is OpenAI able to delete specific prompts. While the company discourages users from sharing anything sensitive, seemingly the only way to remove personally identifying information provided to ChatGPT is to delete your account, which the company says will permanently remove all associated data.

Hoelzer told Engadget he's not worried that ChatGPT is ingesting individual conversations in order to learn. But that conversation data is being stored somewhere, so its security is a reasonable concern. Notably, ChatGPT was taken offline briefly in March after a programming error revealed information about users' chat histories. It's unclear, this early in their broad deployment, whether chat logs from these sorts of AI will become valuable targets for malicious actors.

For the foreseeable future, it's best to treat these sorts of chatbots with the same suspicion users should apply to any other tech product. “A user playing with these models should enter with expectation that any interaction they're having with the model,” Srivastava told Engadget, “it's fair game for OpenAI or any of these other companies to use for their benefit.”

This article originally appeared on Engadget at https://ift.tt/Xn0ZvG7

