
Introduction
When thinking about a large language model's input and output, a text prompt (sometimes
accompanied by other modalities such as images) is the input the model uses to predict
a specific output. You don't need to be a data scientist or a machine learning
engineer – everyone can write a prompt. However, crafting the most effective prompt can be
complicated. Many aspects of your prompt affect its efficacy: the model you use, the model's
training data, the model configurations, your word choice, style and tone, structure, and
context all matter. Prompt engineering is therefore an iterative process. Inadequate prompts
can lead to ambiguous or inaccurate responses, and can hinder the model's ability to provide
meaningful output.
When you chat with the Gemini chatbot,¹ you are essentially writing prompts. However, this
whitepaper focuses on writing prompts for the Gemini model within Vertex AI or through
the API, because prompting the model directly gives you access to configuration settings
such as temperature.
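As a minimal sketch of what prompting through the API looks like, assuming the Vertex AI
Python SDK (google-cloud-aiplatform) with placeholder project, region, and model values:

    import vertexai
    from vertexai.generative_models import GenerationConfig, GenerativeModel

    # Placeholders: substitute your own project ID, region, and model name.
    vertexai.init(project="your-project-id", location="us-central1")
    model = GenerativeModel("gemini-1.5-flash")

    # Calling the model directly exposes configuration that the consumer
    # chatbot UI does not, such as temperature and output length.
    response = model.generate_content(
        "Explain prompt engineering in one sentence.",
        generation_config=GenerationConfig(
            temperature=0.2,        # lower values give more deterministic output
            max_output_tokens=128,  # cap the length of the response
        ),
    )
    print(response.text)

Lowering the temperature narrows the distribution the model samples from; raising it
increases variety in the output.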
This whitepaper discusses prompt engineering in detail. We will look into the various
prompting techniques to help you get started, and we share tips and best practices to
help you become a prompting expert. We will also discuss some of the challenges you can
face while crafting prompts.
Prompt engineering
Remember how an LLM works: it's a prediction engine. The model takes sequential text as
input and then predicts what the following token should be, based on the data it was
trained on. The LLM does this over and over, appending each newly predicted token to the
end of the sequence and then predicting the token that follows. Each prediction is based
on the relationship between the preceding tokens and what the LLM saw during its training.
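To make this loop concrete, here is a toy sketch of autoregressive generation in Python.
The lookup table stands in for a trained model's weights (its probabilities are invented
for illustration), and unlike this sketch, a real LLM conditions each prediction on the
entire preceding sequence rather than only the last token:

    import random

    # Toy stand-in for a trained model: next-token probabilities keyed by
    # the previous token. These values are invented for illustration.
    toy_model = {
        "the": {"cat": 0.6, "dog": 0.4},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"ran": 1.0},
        "sat": {"<eos>": 1.0},
        "ran": {"<eos>": 1.0},
    }

    def predict_next(token: str) -> str:
        """Sample the next token from the model's predicted distribution."""
        dist = toy_model.get(token, {"<eos>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    sequence = ["the"]  # the prompt, as tokens
    while sequence[-1] != "<eos>":
        # Append the predicted token and feed the sequence back in.
        sequence.append(predict_next(sequence[-1]))

    print(" ".join(sequence[:-1]))  # e.g. "the cat sat"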
When you write a prompt, you are attempting to set up the LLM to predict the right sequence
of tokens. Prompt engineering is the process of designing high-quality prompts that guide
LLMs to produce accurate outputs. This process involves tinkering to find the best prompt,
optimizing prompt length, and evaluating a prompt’s writing style and structure in relation
to the task. In the context of natural language processing and LLMs, a prompt is an input
provided to the model to generate a response or prediction.