The ChatGPT macOS app was a privacy nightmare

Like oil and water, some things simply don’t mix, and AI and privacy are two of them. Plenty of AI tools and features scream privacy nightmare, and that turned out to be true for a popular application that recently hit the market: the ChatGPT macOS app stored conversations in plain text.

If this sounds familiar, Microsoft was just put through the wood chipper over something similar. Its impressive and rather creepy Recall feature was found storing the text it captured from users’ screens in an unencrypted database on their computers. That means anyone who gained access to a user’s machine could extract that data and get a debriefing of EVERYTHING they’d been doing. Microsoft walked the feature back and has since put it on indefinite hiatus.

The ChatGPT macOS app was storing conversations in plain text

Consumer-facing AI is slowly moving away from data centers and onto devices. You can’t argue with the convenience of having powerful AI on your phone or computer without needing to connect to the internet. This is why there’s been such a big push for on-device AI.

However, the companies pushing these technologies aren’t being upfront about what they have to do to keep that data on-device. Microsoft’s move was reckless, and it seems OpenAI didn’t learn from its benefactor’s mistake. Developer Pedro José Pereira Vieito made a post on Threads showing the ChatGPT macOS app storing conversations in plain text on disk, in a location other apps could freely read.

Pereira Vieito built an app that could access those conversations and surface them, and Jay Peters from The Verge reproduced the result, confirming it was real. After The Verge contacted OpenAI, the company shipped an update that encrypts the stored conversations.
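To appreciate how low the bar was, consider how little code it takes to read another app’s unencrypted files. The following is a minimal Swift sketch, not Pereira Vieito’s actual tool; the “com.example.chatapp” directory is a hypothetical stand-in for wherever a chat app might keep its logs. On macOS, any non-sandboxed process running as the user can do this.

```swift
import Foundation

// Minimal sketch: a non-sandboxed macOS process can read any file the
// current user can read, including another app's Application Support data.
// "com.example.chatapp" is a hypothetical bundle ID used for illustration.
let supportDir = FileManager.default
    .homeDirectoryForCurrentUser
    .appendingPathComponent("Library/Application Support/com.example.chatapp")

let files = (try? FileManager.default.contentsOfDirectory(
    at: supportDir,
    includingPropertiesForKeys: nil
)) ?? []

for file in files where file.pathExtension == "json" {
    // If conversations are stored in plain text, a simple read exposes them.
    if let contents = try? String(contentsOf: file, encoding: .utf8) {
        print("\(file.lastPathComponent): \(contents.prefix(80))…")
    }
}
```

Encrypting the files at rest is what closes this hole: the read still succeeds, but the bytes are useless without the key.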

So, problem solved… right?

Okay, OpenAI fixed the problem and saved the day. Great, but the fundamental problem hasn’t been fixed. Both Microsoft and OpenAI were caught being completely reckless with their users’ data, and we only found out because third parties dug into their systems and reported what they found. Until then, these companies were apparently content to let these security issues sit on our devices.

All we can say is that we’re lucky the people finding these vulnerabilities are good Samaritans and not hackers. Again, Pereira Vieito built an app that could extract that data. What’s stopping someone from building an app that extracts the same data and surreptitiously sends it to a remote server? This sort of thing happens.

The main issue is that major corporations keep launching products with blatant security flaws and don’t act until someone else discovers them. Aren’t these the same companies that brag about top-of-the-line security in keynotes? We trust them implicitly because it’s Microsoft, or Google, or Apple. These are multi-trillion-dollar companies, so they ostensibly know a thing or two about keeping our data safe. Yet they keep proving that they don’t.

Average users shouldn’t be the ones finding these security flaws; they shouldn’t be stumbling onto logs of conversations stored in plain text. Until major corporations stop being so reckless with the data behind their flashy new AI tools, the problem will never truly be fixed.
