Lloyd Tripp

Artificial Intelligence Advocate

ChatGPT is improving scammers' text messages

2023-03-09 illicit Lloyd Tripp

Update: Links to security articles

Below are three compelling articles that delve deeper into the security implications of chatbots like ChatGPT. While information security professionals are likely aware of this threat, it is essential to educate everyone on the matter, so please help spread the word.

opwnai-cybercriminals-starting-to-use-chatgpt

chatgpt-cybercriminal-malware-female-chatbots

russian-hackers-attempting-to-use-chatgpt-for-malicious-use

Improving scammers’ text messages

As with many powerful tools, ChatGPT can be put to illicit use. In this case, it can correct the grammar, punctuation, and spelling in a scammer’s text message and rephrase it to read as if it came from a legitimate source. We have long been taught that poor grammar, punctuation, and misspellings are clues that a text message did not come from a legitimate source. But what happens when ChatGPT removes those clues? Most savvy users will spot other red flags, but people are already falling for these scams as written, so texts that read as if they came from legitimate sources may produce an incremental increase in victims.
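To make the concern concrete, here is a minimal sketch (entirely hypothetical — the word list, function names, and threshold are invented for illustration, not taken from any real filter) of a naive "bad spelling" heuristic. A typo-ridden scam message trips it, but the same message after an LLM cleans up the grammar sails through:

```python
# Hypothetical heuristic for illustration only: flag messages with many
# words that fall outside a small known-word list.

# Tiny stand-in for a real dictionary (assumption for this sketch).
KNOWN_WORDS = {
    "your", "package", "is", "on", "hold", "please", "confirm",
    "delivery", "details", "to", "release", "it",
}

def misspelling_score(message: str) -> float:
    """Fraction of words not found in the known-word list."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    words = [w for w in words if w]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in KNOWN_WORDS)
    return unknown / len(words)

def looks_like_scam(message: str, threshold: float = 0.25) -> bool:
    """Naive rule: many unrecognized words -> probably a scam."""
    return misspelling_score(message) >= threshold

# The typo-ridden original trips the filter...
raw = "Youre pakage is on hold pls confrim delivry details"
# ...but the grammatically cleaned-up version does not.
cleaned = "Your package is on hold. Please confirm delivery details to release it."

print(looks_like_scam(raw))      # True
print(looks_like_scam(cleaned))  # False
```

The point is not that real spam filters are this simple, but that any defense (human or automated) leaning on spelling and grammar errors loses signal once those errors are gone.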

Take action

If you create training for your employees or customers to identify scam text messages, you need to be aware of this development and adjust your training materials appropriately.

Actual example

For instance, here is the actual wording from a scammer’s text message (without the link) and how ChatGPT fixed it:
[Image: side-by-side of the scammer’s original text message and ChatGPT’s corrected version]