Artificial intelligence chatbots have taken the world by storm in recent months. We had fun asking ChatGPT questions, trying to figure out how much of our work it could handle, and even getting it to tell us jokes.
But while many people were having fun, cybercriminals were busy finding ways to use AI for more sinister purposes.
They discovered that artificial intelligence can make their phishing scams harder to spot, and that makes them more successful.
Our advice has always been to be careful with emails. Read them carefully. Watch out for spelling and grammar mistakes. Before clicking any link, make sure it’s the real deal.
And it’s still great advice.
But ironically, phishing emails created by chatbots seem more human than ever, putting you and your identity at greater risk of fraud. So we all have to be even more careful.
Criminals use artificial intelligence to create unique variations of the same phishing lure. They use it to remove spelling and grammar mistakes and even create entire email threads to make the scam more believable.
Security tools that detect messages written by artificial intelligence are in development, but reliable detection is still a long way off. That means you have to be extra careful when opening emails, especially ones you're not expecting. Always check the address the message was sent from, and verify with the sender (not by replying to the email!) if you have the slightest doubt.
If you need more advice or team training on phishing scams, please get in touch.
Published with permission from Your Tech Updates.