Prompt Engineering is the discipline of crafting prompts to elicit the desired results from an LLM. The name is a bit of a misnomer because prompt creation is more of an art than engineering. This article will give you the fundamental techniques that cover 80% of the discipline.
Creating good prompts requires experimentation and practice. There is no "one" way of doing things. Two prompt engineers can accomplish the same task with dramatically different prompts, hyperparameters, and techniques.
The typical design cycle of a prompt includes: experimenting with 2-3 techniques, running through test cases, making slight adjustments, more testing, and so on.
Remember, LLMs are just auto-complete machines. Tasks that may seem trivial to humans can be incalculably hard for LLMs.
A simple anecdotal example: a client wanted us to group similar articles together. The goal of the project was to cluster without knowing the groupings in advance. We attempted to use an LLM, but it struggled to determine what counted as similar. Similar in terms of words, sentiment, style, content? As humans, we can weigh all these categories and classify with ease. After 20+ hours of experimentation, we switched to a traditional ML approach. LLMs just weren't the right tool.
Zero Shot means the LLM is given no prior examples of the task you are asking it to do: you directly ask a question or request a task. Given the vast corpus of data LLMs are trained on, this will be your primary strategy. It is the most human-like way to interact.
Write a recipe on how to cook Mac and Cheese
The word "learning" in “Zero Shot Learning” does not mean we are training or teaching the LLM through our prompt. The actual training is done during model creation. Rather, the LLM is “learning” to understand what you are trying to request and answering accordingly.
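A zero-shot request really is just the bare task wrapped in a single message. The sketch below shows this in Python; the `build_zero_shot_prompt` helper and the chat-message format are illustrative assumptions, not a specific vendor's API.

```python
def build_zero_shot_prompt(task: str) -> list[dict]:
    """A zero-shot prompt is just the task itself: no examples,
    no demonstrations, a single user message."""
    return [{"role": "user", "content": task}]

# Illustrative usage; pass this message list to whichever chat client you use.
messages = build_zero_shot_prompt("Write a recipe on how to cook Mac and Cheese")
```

The point of the sketch is that nothing else is needed: the model's pretraining carries the recipe-writing knowledge, and the prompt only names the task.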
To get a more templated response, opt for Few-Shot Learning. This is where you provide prior examples of what you want the LLM to mimic. It is extremely powerful when you require output in a certain format. The following example asks the LLM to classify the type of review a customer left:
Classify whether the customer is "Satisfied", "Unsatisfied", or "Unsure" for the reviews below.
Customer Review: It was a pleasure working with these guys. Very professional and helped me save 10K annually!
Sentiment: Satisfied
Customer Review: I didn't hear back from them even though I emailed 10 times!
Sentiment: Unsatisfied
Customer Review: Concise, professional, and directly helped me implement what I needed. Thanks!
LLM's Response
Sentiment: Satisfied
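The few-shot pattern above is mechanical enough to template. Here is a minimal sketch in Python; `build_few_shot_prompt` is a hypothetical helper, and the instruction/example strings are taken from the review-classification example.

```python
def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Assemble a few-shot prompt: instruction, labeled examples,
    then the new input with its label left blank for the LLM to fill."""
    parts = [instruction, ""]
    for review, label in examples:
        parts.append(f"Customer Review: {review}")
        parts.append(f"Sentiment: {label}")
        parts.append("")
    parts.append(f"Customer Review: {query}")
    parts.append("Sentiment:")  # trailing label cues the model's answer format
    return "\n".join(parts)

# Illustrative usage with the reviews from the example above.
prompt = build_few_shot_prompt(
    'Classify whether the customer is "Satisfied", "Unsatisfied", or "Unsure" '
    "for the reviews below.",
    [
        ("It was a pleasure working with these guys. Very professional and "
         "helped me save 10K annually!", "Satisfied"),
        ("I didn't hear back from them even though I emailed 10 times!",
         "Unsatisfied"),
    ],
    "Concise, professional, and directly helped me implement what I needed. Thanks!",
)
```

Ending the prompt with the bare `Sentiment:` label is the key design choice: the model completes the pattern the examples established, which is what keeps the output format consistent.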
LLMs understand your prompt best when it is direct and to the point (much like business writing). The general rule of thumb is to be concise without losing meaning.
// Bad Example
I am starting a startup and need some help with the scoping. It is a SaaS Marketing platform
// Good Example
Write a business plan for a SaaS Marketing startup. Include a competitor analysis and a brief financials overview.
Pretty simple, right?
LLMs are smart. Still, you want to make it as easy as possible for the LLM to understand you. Break tasks down into their simplest form.
// Bad Example
Compose a narrative chronicling the journeys of an animal in exploration
// Good Example
Write a short poem about a talking animal who goes on an adventure
For GPT-3.5 and open-source models, these techniques make a huge difference in output quality. For models like GPT-4, prompt engineering is starting to matter less because of how capable the LLMs are becoming. For production scenarios, it is certainly best practice to optimize your prompts. For the everyday user, though, treating the LLM like a human works just fine.
Writing good prompts is a matter of practice and putting the work in. Even as an executive or manager, it is vitally important to understand and master this tool. Not only will it supercharge your productivity, but it will also help you uncover new business use cases along the way.