QuestionsLeaderboardAppendixBlogPracticeProfile
Back to Repository
Generative AI & LLMsEasy

Why does telling a model "Don't use the word 'delve'" often fail? What is a more effective way to rewrite a prompt to avoid specific behaviors?

Practice Your Response

Similar Questions in Generative AI & LLMs

Medium

How does an LLM decide which tool to call? If you give a model 50 tools, what happens to its accuracy and context window?

View
Medium

In a few-shot prompt (giving examples), does the order or diversity of the examples matter more for the model’s performance?

View
Easy

Why are delimiters like ###, """, or <xml> tags important in long prompts? How do they help prevent the model from getting confused between instructions and data?

View

Built for the AI Engineering community.

BlogPrivacyTermsContact