What NOT To Tell ChatGPT
The AI race is well under way, and it seems like development teams, consumers, and businesses alike are scrambling to see what artificial intelligence chat apps like ChatGPT are all about, from how they work to how they can be put to work finding money and business solutions.
For some, however, the explosion of AI into the mainstream is less exciting and more anxiety inducing. Hollywood has done little to reassure us about technology that gets a little too “smart,” after all. Fortunately, most of the reluctance around ChatGPT and similar technology stems from misunderstandings of what the technology is or what it is capable of. That said, this doesn’t mean caution should be thrown to the wind entirely, and there absolutely are things that you should not be telling ChatGPT.
Telling Secrets
We’ll talk more about this limitation in a bit, but while ChatGPT doesn’t currently retain the information you tell it from one conversation to the next, it’s good practice to be careful with your secrets, private or professional. As AI apps continue to become more sophisticated, an inevitable feature will be software that learns and remembers information from its interactions with individual users.
Many businesses are looking to see how ChatGPT’s processing, analytical, and communication powers can be put to work for strategic and financial decisions. It might seem like a good idea to feed the app secret or private information so the AI has a better picture of what it’s working with. However, if the AI is able to take your information, learn it, and store it, you might inadvertently expose private information to someone else who is also talking to the bot. A simple precaution is to scrub obvious secrets from a prompt before it is ever sent, as in the sketch below.
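To make this concrete, here is a minimal sketch, in Python, of what scrubbing a prompt might look like. This is not a ChatGPT feature; the patterns and the redact_secrets helper are hypothetical stand-ins, and real secrets take far more forms than a short list of regular expressions can catch.

import re

# Hypothetical patterns for a few common kinds of sensitive text.
# A real deployment would need a much more thorough list than this.
SECRET_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),         # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),             # US Social Security numbers
    (re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"), "[API KEY]"),  # "api_key=..." style credentials
]

def redact_secrets(prompt: str) -> str:
    # Replace anything matching a known pattern before the text leaves your machine.
    for pattern, placeholder in SECRET_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact_secrets("Q3 margins are tight. Contact jane@example.com; api_key=sk-12345"))
# Prints: Q3 margins are tight. Contact [EMAIL]; [API KEY]

The specific patterns matter less than the habit: treat the chat prompt like any other channel that carries text out of your organization.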
If you ask ChatGPT for strategic advice against competitors, where do you think the information about competitors comes from? As things stand, the AI is trained on a closed data set with a cutoff date a few years in the past, so ChatGPT is not exactly picking up trade secrets left and right. However, as soon as real-time learning is implemented and rolled out, it will be a good idea to be careful about what information goes into that chat prompt.
Can ChatGPT Act Maliciously?
One common fear about advancing AI, and about ChatGPT specifically, is the idea that the technology can act maliciously. One reason this fear is misplaced is simple: ChatGPT doesn’t work on its own. It’s a prompt-based chat app, meaning that before it can do anything at all, you have to type a request and hit enter on your keyboard.
That said, ChatGPT being a tool rather than an autonomous entity does not mean the tool cannot be used for harm. For example, ChatGPT isn’t going to install malware onto your device on its own, but somebody might prompt ChatGPT to write malware, or to describe what that process looks like. There are safeguards in place to help make sure things like this don’t happen, but it is still potentially possible; a crude version of such a safeguard is sketched below.
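As a rough illustration, assuming Python, here is what the simplest form of a prompt safeguard can look like. The DENYLIST terms and both helper functions are hypothetical; real services rely on trained moderation models rather than keyword matching, but the control flow is similar: check the request before anything acts on it.

# Hypothetical denylist; real moderation uses trained models, not keywords.
DENYLIST = ("write malware", "build a keylogger", "create a virus")

def forward_to_model(prompt: str) -> str:
    # Stand-in for the real call to a chat model's API.
    return f"(model response to: {prompt})"

def handle_prompt(prompt: str) -> str:
    # Refuse anything that matches the denylist; forward everything else.
    lowered = prompt.lower()
    if any(term in lowered for term in DENYLIST):
        return "This request appears to ask for harmful material and was blocked."
    return forward_to_model(prompt)

print(handle_prompt("Please write malware that steals passwords."))
# Prints the blocked message instead of forwarding the prompt.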
Powerful tools can be used by bad people to cause harm, which is why it is important to hold ourselves and others accountable for malicious use. Prompting ChatGPT to prepare harmful material or to assist in harmful acts is something you should not be doing.
Current AI Bot Limitations
Luckily, some of these risks are more hypothetical than real at the moment. GPT-4 shows strong promise of becoming the powerful AI assistant that ChatGPT gets us excited about, but until that iteration is available to the public, the true capabilities of ChatGPT are far more limited. Being trained on a fixed data set is one constraint, as mentioned. This means that, even if you were to tell ChatGPT corporate secrets, it does not currently have the capacity to learn that information and share it with others.
Another limitation is the shaky accuracy of responses. GPT-4 promises stronger, more robust, and more correct answers to prompts, but for now that is just a promise of what is to come. As things stand, even if you managed to convince ChatGPT to give you malicious information, or to produce malicious software, there is a good chance that what you received would simply be wrong anyway.
For these reasons, the same limitations that keep ChatGPT from being a full-fledged and trustworthy AI assistant are the limitations that make it less of a threat to anybody. The biggest danger, for now, is over-trusting the app and making risky decisions based on responses that are algorithmically generated but of ambiguous accuracy.
Living Pono is dedicated to communicating business management concepts with Hawaiian values. Founded by Kevin May, an established and successful leader and mentor, Living Pono is your destination to learn about how to live your life righteously and how that can have positive effects in your career. If you have any questions, please leave a comment below or contact us here. Also, join our mailing list below, so you can be alerted when a new article is released.
Finally, consider following the Living Pono Podcast to listen to episodes about living righteously, business management concepts, and interviews with business leaders.