Scammers Can Exploit Personal Data from Just One ChatGPT Search: How to Protect Yourself

Cybersecurity expert Kurt “CyberGuy” Knutsson warns that AI tools can expose your personal data and offers tips to safeguard your privacy.

ChatGPT and other large language models (LLMs) have quickly become integral tools for tackling a wide range of everyday problems. From summarizing dense information to drafting creative projects or even planning home layouts, their intuitive interface and powerful capabilities make them both accessible and appealing. Yet, as with most emerging technologies, their convenience comes paired with privacy risks that are not always obvious.

LLMs like ChatGPT function as advanced conversational agents: users simply type questions or requests in natural language, and the model responds accordingly, often with in-depth and articulate answers. While this has democratized access to complex information, it also means anyone could potentially retrieve sensitive or personal information with just the right query. Despite built-in safeguards designed to restrict misuse, determined individuals may sometimes circumvent these protections through clever rewording.

A major concern is how easy it has become for others to collect detailed profiles on individuals using LLMs, leveraging publicly available information that was previously more cumbersome to compile. Although these AI systems do not "know" anything beyond their training data and real-time web sources, they excel at quickly aggregating information. Much of this data comes from people-search sites, social media platforms like Facebook and LinkedIn, and public records databases. Of these, people-search sites are often the most invasive, listing names, addresses, relatives, and more with little oversight.

To protect your privacy, it is crucial to reduce your online exposure. While hundreds of people-search sites operate in the U.S., combing through every one individually is unrealistic. Instead, you can use AI tools to assemble a targeted list of sites that might contain your data. Although these tools may not produce a comprehensive list in a single search, repeated queries can help you identify most of the key sites that hold your information.
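
As a rough illustration of that query-and-aggregate approach, the Python sketch below asks an LLM the same question several times and merges the answers into one deduplicated list. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY environment variable; the model name and prompt wording are illustrative choices, not recommendations.

```python
# Minimal sketch: repeatedly ask an LLM for people-search sites and
# merge the answers into one deduplicated list. Assumes the official
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "List the domain names of people-search websites that operate in "
    "the U.S., one domain per line, with no commentary."
)

def collect_sites(rounds: int = 5) -> set[str]:
    sites: set[str] = set()
    for _ in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        for line in response.choices[0].message.content.splitlines():
            domain = line.strip().lstrip("-*• ").lower()
            if "." in domain:  # crude filter for domain-like lines
                sites.add(domain)
    return sites

if __name__ == "__main__":
    for site in sorted(collect_sites()):
        print(site)
```

Because each run of the model can surface different sites, several rounds catch more than any single query would.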

Once you've identified these sites, you should submit opt-out requests to each. Most people-search platforms offer a process—often found in the website's footer under terms like "Opt-Out" or "Do Not Sell My Info"—to remove your information from their searchable databases. This process can be time-consuming and tedious, but it is an essential defense against unwanted exposure. For a less labor-intensive approach, automated data removal services are available. These services send removal requests on your behalf to a vast array of brokers and sites, monitoring your data’s presence over time.
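
Because opt-out requests can stall or simply be ignored, it helps to log each one and revisit it later. The sketch below shows one simple way to track requests in Python using only the standard library; the file name, field names, and the example domain are all hypothetical.

```python
# Minimal sketch: a CSV log for tracking opt-out requests so that
# slow or ignored removals can be followed up on. File name, field
# names, and the example domain are hypothetical.
import csv
from datetime import date
from pathlib import Path

LOG = Path("optout_log.csv")
FIELDS = ["site", "optout_url", "requested_on", "status"]

def record_request(site: str, optout_url: str) -> None:
    """Append a newly submitted opt-out request to the log."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "site": site,
            "optout_url": optout_url,
            "requested_on": date.today().isoformat(),
            "status": "pending",
        })

def pending_requests() -> list[dict]:
    """Return requests that still need a follow-up check."""
    if not LOG.exists():
        return []
    with LOG.open(newline="") as f:
        return [row for row in csv.DictReader(f) if row["status"] == "pending"]

if __name__ == "__main__":
    record_request("examplesearchsite.com", "https://examplesearchsite.com/optout")
    print(pending_requests())
```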

However, data brokers extend beyond people-search websites. Marketing, financial, health, and risk analytics companies all trade in personal data, turning individual details into commodities bought and sold without explicit consent. Data removal services now target these broader brokers as well and often only take minutes to set up. Some of the more advanced services will even support custom removals from sites not covered by their standard routines, provided you supply them with the necessary links.

Still, no solution is perfect. Removal services typically have coverage limitations, as not all data brokers are cooperative. In such cases, you may need to manually point out which websites hold your data, but this is a relatively minor effort compared to the benefits of enhanced privacy and security.

Here are some additional ways to safeguard your information:

  • Be cautious with AI prompts: Never share sensitive details—such as full names, addresses, or account credentials—when interacting with LLMs.
  • Secure your accounts: Use strong, unique passwords and enable multifactor authentication; a password manager can simplify the process and improve overall security (a minimal password-generation sketch follows this list).
  • Review social media settings: Limit what you make public and regularly audit your privacy options to minimize what is shared with third parties and data brokers.
  • Install reliable antivirus software: Protect devices from malware and phishing attempts that can lead to broader data exposure.
  • Use dedicated email addresses: Reserve specific emails for opt-outs and online registrations, keeping your primary account safer from spam and breaches.
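
On the password point above, strong unique passwords are easy to generate programmatically. The following minimal Python sketch uses the standard library's secrets module; the length and character set are arbitrary choices, and a reputable password manager does the same job automatically.

```python
# Minimal sketch: generate a strong random password with Python's
# standard-library secrets module. Length and character set are
# arbitrary choices; a password manager handles this for you.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```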

Ultimately, large language models bring enormous promise but also place real responsibility on their users. Being proactive about your digital privacy, understanding where your data lives, and taking advantage of available tools can help mitigate these new risks. As AI grows increasingly powerful, vigilance remains the best line of defense.

The debate continues: Should technology providers like OpenAI bear legal responsibility if their platforms are used to collect or expose private data without consent? Ongoing feedback from users and experts alike will shape the boundaries of privacy in the age of artificial intelligence.