When working with generative AI, there is no denying that better prompts help models understand the results we are seeking.
Yet people often make common mistakes when talking to AI models through prompts, which can lead to a frustrating experience of sub-optimal or irrelevant outputs.
We asked ChatGPT itself about the mistakes people often make while talking to it. The reasons include lack of clarity, overloading information, neglecting context, excessive jargon, open-ended prompts, imbalanced prompts, and forgetting to specify the desired output.
Anthropic, one of ChatGPT's major rivals, recently shared a podcast with tips to keep in mind when writing prompts for its generative AI model, Claude.
Willingness to iterate and identify misinterpretations
One of the engineers shared her view that if people are willing to go back and forth with the model and can spot what the model has misinterpreted, the model can finally produce the outcome they need.
David Hershey, who works in Applied AI at Anthropic, mostly with customers, says, “People think that you write one thing and then you are done. But to get a semi-decent prompt, when I sit down with the model — like earlier when I was prompting the model — in a 15-minute span I’ll be sending hundreds of prompts to the model. It’s just back and forth, back and forth. So I think it’s this willingness to iterate and to look and think, ‘What is it that was misinterpreted here, if anything?’ And then fix that thing. So that ability to iterate. So: clear communication, that ability to iterate, and also thinking about ways in which your prompt might go wrong.”
As AI models continue to refine themselves, they may be able to grasp the correct intent and produce optimal results even from casually written prompts without straightforward instructions.
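The iterate-and-fix loop Hershey describes can be sketched in code. This is a minimal illustration, not Anthropic's workflow: `run_model` is a hypothetical stand-in for a real API call to a model like Claude, and here it simply reports which required details the prompt is still missing.

```python
# A minimal sketch of the back-and-forth prompting loop described above.
# `run_model` and REQUIRED_DETAILS are illustrative assumptions, not a real API.

REQUIRED_DETAILS = {
    "audience": "Audience: non-technical readers",
    "length": "Length: under 200 words",
    "format": "Format: bulleted list",
}

def run_model(prompt: str) -> list[str]:
    """Hypothetical stand-in for a model call: report what the prompt lacks."""
    return [key for key, line in REQUIRED_DETAILS.items() if line not in prompt]

def iterate_prompt(prompt: str, max_rounds: int = 10) -> tuple[str, int]:
    """Send the prompt, see what was 'misinterpreted', fix it, and repeat."""
    for round_no in range(1, max_rounds + 1):
        missing = run_model(prompt)
        if not missing:
            return prompt, round_no  # the model finally has what it needs
        # Fix exactly the thing that went wrong this round, then try again.
        prompt += "\n" + REQUIRED_DETAILS[missing[0]]
    return prompt, max_rounds

final_prompt, rounds = iterate_prompt("Summarize the attached report.")
print(rounds)  # → 4: three rounds of fixes plus one clean pass
```

The point is the shape of the loop: each round inspects the outcome, repairs one misinterpretation, and resends, rather than expecting a perfect prompt on the first try.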
Not writing clear, well-defined prompts
A prompt written as crystal-clear, step-by-step instructions that even a 5-year-old could understand will often generate optimal results. However, people tend to write prompts half-heartedly, which can be difficult for the AI model to interpret correctly.
David Hershey says, “A lot of people will just write down the things they know. But they don’t really take the time to systematically break out what the actual full set of information is that you need to know to understand this task. And that’s a very clear thing I see a lot: prompts where it’s just conditioned. The prompt that someone wrote is so conditioned on their prior understanding of a task that when they show it to me I’m like, ‘This makes no sense.’ None of the words you wrote makes any sense because I don’t know anything about your interesting use case.”
Amanda Askell, who leads one of the fine-tuning teams at Anthropic, shares the same sentiment: “Yeah, the amount of times I have seen someone’s prompt and then been like, ‘I can’t do the task based on this prompt. I’m human-level, and you are giving this to something that is worse than me and expecting it to do better.’”
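One way to apply this advice is to force every prompt to carry the full set of information the model needs, instead of only what the writer already has in their head. The helper below is a hypothetical sketch of that idea; the field names (task, context, steps, output format) mirror the mistakes listed earlier in this article, not any official template.

```python
# A hypothetical prompt builder that makes the writer spell out everything:
# the task, background context a stranger would need, explicit steps,
# and the desired output format.

def build_clear_prompt(task: str, context: str,
                       steps: list[str], output_format: str) -> str:
    """Assemble a prompt with an explicit task, context for a reader who
    knows nothing about the use case, numbered steps, and an output spec."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
    return (
        f"Task: {task}\n\n"
        f"Context (assume you know nothing about this project): {context}\n\n"
        f"Follow these steps:\n{numbered}\n\n"
        f"Output format: {output_format}"
    )

prompt = build_clear_prompt(
    task="Summarize customer feedback from last quarter",
    context="We sell a budgeting app; 'churn' means a user cancelling",
    steps=["Group the feedback by theme",
           "Pick the three most common themes",
           "Quote one example per theme"],
    output_format="A bulleted list, under 150 words",
)
print(prompt)
```

A prompt assembled this way passes Askell's test: a human with no prior knowledge of the project could do the task from the text alone.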