What is ChatGPT? What can it do? Can anyone use it? OpenAI's ChatGPT and chatbots like it are widely popular and used by millions of professionals around the world. ChatGPT is the fastest-growing software in history: it took only two months to reach 100 million users, compared to TikTok, which took nine months to reach a similar number. This was despite the fact that ChatGPT was not always giving perfect, expected responses.
It is just a matter of time before any technology falls prey to cyberattacks and breaches. It did not take long for threat actors to find a breach point in ChatGPT, and recently a breach happened. The threat analysts had anticipated was that AI tools would be used to write malicious code.
What happened was that the AI tool, ChatGPT, was breached. OpenAI, the company that developed ChatGPT, confirmed the data breach and took the service down. On the backend, an open-source library in use had vulnerable code. However, the breach was contained and fixed soon after.
What is ChatGPT?
ChatGPT is an AI tool developed by OpenAI, a company based in the USA. It generates responses using NLP and other machine learning techniques to understand the query. The responses can be anything, ranging from essays to blogs to code and everything in between. The tool became so popular because it hyper-boosted productivity for employees, businesses, and students alike.
Since its release, ChatGPT has not seen a downturn; in fact, a paid and more capable version was released recently, namely GPT-4. GPT-4 is a supercharged version that is better at everything, from writing to understanding queries to generating code, acting on the latest data available.
Is ChatGPT safe to use?
Regardless of the technology you are using, sooner or later it will face breaches and cybersecurity threats. However, compared to other software and technologies, ChatGPT's phase of cyberthreats came early.
The fact that the vulnerability was in the code of an open-source library used by the AI tool, and not in the code of ChatGPT itself, is somewhat reassuring. On top of that, as soon as OpenAI learned about the breach, it took the software down and repaired it. This shows that there are teams working actively on ChatGPT. If this practice continues, users can rest assured that issues will be handled responsibly.
Developers worldwide use open-source libraries to build software of every kind. The library ChatGPT uses is called Redis. Redis keeps a cache of users' information and queries, which enables fast responses, recalls, and access.
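To illustrate why such a cache must be strictly keyed per user, here is a minimal Python sketch of the pattern (a plain dictionary stands in for a real Redis connection, and all names are hypothetical, not ChatGPT's actual code):

```python
# Minimal illustration of per-user response caching.
# A plain dict stands in for a real Redis connection; names are hypothetical.
cache = {}

def cache_key(user_id: str, query: str) -> str:
    # Keying every entry by user is what keeps one user's cached
    # data from being served to another.
    return f"{user_id}:{query}"

def get_response(user_id: str, query: str) -> str:
    key = cache_key(user_id, query)
    if key in cache:
        return cache[key]                    # fast path: cached answer
    response = f"answer for {query!r}"       # stand-in for a model call
    cache[key] = response
    return response
```

A bug that mixes up these keys, or returns a cached entry to the wrong connection, is exactly how one user can end up seeing another user's data.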
When thousands of developers are working on something, and open-source libraries and pre-written code templates are in use, vulnerabilities happen.
If we compare this security incident to others over the past years, the breach was a minor one, and the bug was patched quickly, within a few days. But as we can see, even a minor breach or cyberthreat can cause major damage to a product's reputation.
What was the breach about?
What happened was that some users could see other users' information: email address, chat titles, user name, payment address, the last digits of a payment card, and so on. But this affected only a small number of users who were online at the same time.
As OpenAI reported: “Same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window.”
Is ChatGPT a Threat to Your Privacy?
Now that you know what ChatGPT is, the question is whether it is safe to use. It is too soon to jump to conclusions: firstly, because the breach was a minor one; secondly, because it was fixed; and thirdly, and most importantly, because card numbers were never fully visible at any time, only the last digits were exposed.
Users have to understand the difference between security and privacy. Though they are correlated and closely bound, with a thin line between them, they are different things.
Moving onward after the breach, ChatGPT is very clear about its policies regarding data and user information; you can read about that here.
However, many companies have banned the use of ChatGPT for official work, and you should be careful as well. Sensitive information shared with or processed by AI tools cannot be taken back.
As a general rule of thumb, do not share any information that you would not want made public or known to your company.
ChatGPT & Related Security Risks
Right now it is difficult to regulate AI tools like these, mainly because the technology is at an early stage and we do not yet know what policies to make. But like any other technology, it will be abused, and its users may face threats in the future.
The future where chatbots are integrated into everything we use is not far off. Microsoft Bing and VS Code are big names that have integrated ChatGPT, while Google is working on its own chatbot assistant. This is the near future of tech, and other industries will follow soon.
Is ChatGPT banned anywhere?
Why was ChatGPT banned? Where was it banned? Some countries rushed to ban the use of ChatGPT as well as other AI-based chatbots and similar tools due to security concerns. Where some are rushing to take advantage of AI, others are leaning towards regulations and bans. Italy is one of the countries that banned its usage after the security breach. Here you can find a list of countries that have banned ChatGPT and a list of countries where OpenAI has blocked access.
In addition to countries, some companies have also banned the use of ChatGPT because they consider it a security risk. Samsung in particular has warned its employees against using the tool; those who do may be fired.
Should ChatGPT be banned over security concerns?
An AI tool like ChatGPT is used for various purposes, including aid in work such as programming, problem solving, and writing, and even in fields like education and business. Whether to impose or lift a ban depends on the type of usage, the type of work, and the type of company or industry. If the goal of using an AI chatbot assistant is to speed up work, and no sensitive information is being shared with it, then the usage is fine and actually helpful.
Where it can provide valuable information in seconds, it can also be used for harm and can pose a security threat. For instance, scammers can use ChatGPT to make phishing emails look realistic. Poor grammar and odd sentence structure used to give scammers away, but no more: with AI, they can generate realistic, culturally appropriate scam messages and scripts.
The potential for misuse of any technology is inevitable. The first step should be to put forward policies and regulations for the use and development of AI tools like ChatGPT. Restricting or regulating its usage within a company is entirely up to the authorities there. Companies would be wise to assess and study all that has happened before imposing a ban, especially where data privacy, user information, and security are a concern. A ban by educational institutes is harder to justify, as the tool provides valuable information for learning.
After weighing the pros and cons, the benefits and risks, one should decide whether to impose a ban or keep using the tool. Usage never has to be completely open; in certain cases and under certain conditions, restrictions should be imposed.
Customer service, or after-sales service, is the most important factor in a product's success. In the case of digital products, maintenance and support decide whether it survives. If we study the case of ChatGPT and the breach that happened, we can see that the maintenance and support team resolved the issue swiftly. This tells us that if any such thing were to happen in the future, users can rely on the team.
In addition to solving the immediate problem, they also revisited the entire system and found other potential issues, which were also fixed. Even though the issue was minor and could have been ignored or hidden, they communicated clearly.
OpenAI has written a complete article about the issue and how they addressed it, with open communication.
They notified the affected users about the breach and ensured that those users are no longer at risk. Alongside an apology, the OpenAI ChatGPT team is working to protect its users' privacy and digital safety.
Here is the list of actions OpenAI took to maintain security and safety:
- Extensively tested the bug fix.
- Stress-tested to ensure data is returned to the intended user.
- Examined the system to ensure data and queries are only visible to the correct user.
- Correlated data sources to identify affected users and notify them.
- Added logging to identify the issue and confirm it has stopped.
- Improved code robustness and scaled the Redis cluster to reduce connection errors.
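In spirit, the second and third checks above resemble a regression test like this hypothetical Python sketch (assumed names, not OpenAI's actual test suite):

```python
# Hypothetical regression check: a cached entry must only be
# visible to the user whose key it was stored under.
# (Illustrative sketch; not OpenAI's actual code.)
def fetch_cached(cache: dict, user_id: str, query: str):
    # Look up the entry under a per-user key; a miss returns None
    # rather than falling back to some other user's data.
    return cache.get(f"{user_id}:{query}")

cache = {"alice:billing": "card ending in 4242"}

# Alice can read her own cached billing info...
assert fetch_cached(cache, "alice", "billing") == "card ending in 4242"
# ...but Bob, issuing the same query, must get nothing back.
assert fetch_cached(cache, "bob", "billing") is None
```

The point of such a test is to make the cross-user leak of the original bug impossible to reintroduce silently.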
What are the next steps from OpenAI?
OpenAI is collaborating with Redis's open-source developers and maintainers; together they are addressing the issue and making sure that such leaks and breaches do not happen in the future.