Six Already Creepy Ways to Use ChatGPT AI
Artificial intelligence is making a lot of noise these days, especially ChatGPT. But this chatbot can also be misused. Here are six use cases that are unsettling, to say the least.
ChatGPT, OpenAI’s text generator, doesn’t always produce precise or engaging writing, but it can generate reasonably appropriate text on just about any topic almost instantly. That is quite remarkable. Yet even with many safeguards in place, the system can be dangerous.
We are only beginning to discover its less-than-ideal applications. Anything that can create content out of thin air can create something dangerous; it is only a matter of time. Below are six scary, or at least dubious, uses that have already been spotted, and all this while ChatGPT is still in its infancy and only just becoming popular.
1. Creating malware
ChatGPT creating malware is rightfully intimidating, not because the malware itself is new, but because ChatGPT can keep producing it indefinitely. AI doesn’t sleep. As Infosecurity Magazine explained, “cybersecurity researchers have managed to create a polymorphic program that is very complex and hard to detect.” Researchers can use ChatGPT to generate malware and then have it churn out variations of that code, making it harder to detect or stop.
2. Cheating at school
Less scary, but perhaps more predictable. A tool that can generate text on any topic is a gift to a student who wants to cheat at school. Teachers have already reported catching students in the act, and some schools have banned the app. The trend is unlikely to slow down; if anything, quite the opposite. AI is sure to become yet another tool that children master at school.
3. Spam on dating apps
Spam might not be the right word, but people are using ChatGPT to chat with their Tinder matches. This is not necessarily scary, but knowing that you might be exchanging messages with a computer program rather than a potential partner can be unsettling.
4. Taking the jobs of journalists and other writers
Should I worry about my job?
5. Phishing and other types of fraud
It is hard to prove it is already happening, but a tool like ChatGPT is perfect for phishing. Phishing messages are often easy to recognize precisely because their language is clumsy; with ChatGPT, that will no longer be the case. In any event, experts have already warned that this is an obvious use case.
6. Fooling recruiters
Everyone knows how hard it is to find a job. It is a long, often demoralizing process, with, at best, a good position at the end. But if you are job hunting without AI, you may be missing an opportunity. A company recently found that applications written by ChatGPT performed better than those of 80% of human candidates. ChatGPT is simply better at using the keywords recruiters expect, and therefore at getting through the various filters set up by HR.