ChatGPT is the fastest-growing app in history, reaching 100 million active users within two months of launch. It can write essays, draft contracts, solve complex math problems, and handle customer service requests. But as Robert Massey notes on the Cyber Security Intelligence website (https://www.cybersecurityintelligence.com/blog/the-security.-risks-of-chatgpt-6974.html), ChatGPT can also be exploited by cybercriminals.
A key security risk of ChatGPT is that criminals no longer need coding experience to create malicious software. They are already using it to replicate well-known malware strains and techniques. Phishing emails were once easier to detect because of spelling and grammar errors; ChatGPT can polish them into far more credible lures.
OpenAI, the artificial intelligence lab that created ChatGPT, claims to have policies and technical measures in place to protect user data and privacy. This may not be enough, since ChatGPT scrapes data from the Internet, which can include sensitive information. Any business that uses the Internet, no matter how small, is at risk of being scammed.
Massey suggests a number of protective measures:
- Use penetration testing to identify and fix vulnerabilities.
- Have a data resilience strategy and a data protection plan.
- Use immutable data storage to create backups that are secure, accessible and recoverable.
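As one illustration of the last point, immutable backups are commonly implemented with an object store that enforces write-once-read-many (WORM) retention. The sketch below uses AWS S3 Object Lock as an example; the bucket name and the 30-day retention period are placeholders, and other storage vendors offer equivalent features.

```shell
# Create a bucket with Object Lock enabled (it must be enabled at creation
# time). "example-backup-bucket" is a placeholder name.
aws s3api create-bucket \
    --bucket example-backup-bucket \
    --object-lock-enabled-for-bucket

# Apply a default WORM retention rule: objects cannot be overwritten or
# deleted for 30 days, even by privileged users (COMPLIANCE mode).
aws s3api put-object-lock-configuration \
    --bucket example-backup-bucket \
    --object-lock-configuration '{
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}}
    }'
```

Backups written to such a bucket stay recoverable even if an attacker obtains valid credentials, because the retention rule blocks deletion and modification until it expires.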