This principle focuses on guiding the AI with clarity so it understands exactly what is expected. Clear prompts reduce confusion and improve accuracy. One important tactic is using delimiters, such as triple quotes, brackets, or phrases like “Text:”, to clearly separate instructions from the content. Another tactic is asking for structured output, for example requesting answers in bullet points, tables, or numbered steps, which makes responses organized and easy to use. Prompts should also state conditions and constraints, such as word limits, grade level, tone, or format, so the response matches expectations. Few-shot prompting is another useful tactic, where you provide examples of input and output to show the AI what a good answer looks like. Overall, being specific does not mean being complicated; it means being precise, focused, and intentional so the model delivers exactly what you need. Some tactics are:
Delimiters help separate instructions from the text the model must work on. Using symbols like triple quotes, brackets, or labels such as “Text:” makes it clear what content the AI should focus on, reducing confusion and improving accuracy (e.g., triple backticks ```).
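As a rough sketch, the delimiter tactic can be expressed as a small helper that wraps the content in triple backticks (the function name and prompt wording here are just for illustration):

```python
def build_delimited_prompt(instruction: str, text: str) -> str:
    """Separate the instruction from the content with triple backticks
    so it is unambiguous which part the model should operate on."""
    return f"{instruction}\n\nText: ```{text}```"

prompt = build_delimited_prompt(
    "Summarize the text below in one sentence.",
    "Prompt engineering is the practice of writing clear, specific instructions for AI models.",
)
print(prompt)
```

Because the content is fenced off, even text that itself looks like an instruction stays clearly marked as material to process, not commands to follow.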
Requesting output in a specific format such as bullet points, tables, or numbered steps helps organize information clearly. This makes responses easier to read, compare, and directly use in tasks like reports or presentations.
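A minimal sketch of the structured-output tactic, assuming a hypothetical helper that appends an explicit format request to a task:

```python
def request_structured_output(task: str, fmt: str) -> str:
    """Append an explicit format request so the response is easy to scan."""
    return f"{task} Present your answer as {fmt}."

prompt = request_structured_output(
    "List three benefits of clear prompts.",
    "a numbered list with one short sentence per item",
)
print(prompt)
```

The same helper works for tables or bullet points by changing the `fmt` argument.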
Stating limits such as word count, tone, grade level, or style guides the model toward the exact type of response needed. Clear constraints prevent overly long, off-topic, or inappropriate outputs.
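The constraints tactic can be sketched the same way; the parameter names below (`max_words`, `tone`, `audience`) are illustrative, not a standard API:

```python
def add_constraints(task: str, *, max_words: int, tone: str, audience: str) -> str:
    """Attach explicit limits so the response matches expectations
    in length, tone, and reading level."""
    return (
        f"{task}\n"
        f"Constraints: at most {max_words} words, {tone} tone, "
        f"written for {audience}."
    )

prompt = add_constraints(
    "Explain photosynthesis.",
    max_words=100,
    tone="friendly",
    audience="a 5th-grade student",
)
print(prompt)
```

Stating the limits in one dedicated line makes them easy for the model to honor and easy for you to adjust later.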
Giving examples of correct input and output shows the model what a good response looks like. This tactic is especially useful for complex tasks, as it sets clear expectations and improves consistency.
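A few-shot prompt is just example input/output pairs followed by the new input in the same shape; a minimal sketch:

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Show input/output pairs first, then pose the new input in the
    same pattern so the model imitates the demonstrated format."""
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    [("happy", "sad"), ("hot", "cold")],
    "fast",
)
print(prompt)
```

Ending the prompt at `Output:` invites the model to complete the pattern, here by producing the antonym of the final word.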
This principle helps improve reasoning and accuracy by encouraging the AI to think step by step instead of rushing to an answer. A key tactic is asking the model to break the task into steps, such as analyzing the problem first, then solving it, and finally explaining the result. Another tactic is to explicitly request reasoning, for example by saying “explain your thinking” or “show the steps before the final answer.” This is especially useful for math, logic, or complex questions. You can also ask the model to work out its own solution before giving the final response, which reduces mistakes and improves depth. For comparison or evaluation tasks, instructing the model to consider multiple options before choosing one leads to more balanced and thoughtful answers. Giving the model space to think turns quick guesses into well-reasoned solutions.
Instructing the model to break a problem into smaller steps improves reasoning and reduces errors. This is especially useful for math, logic, and multi-part questions.
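One way to sketch this tactic is a helper that spells out the intermediate steps inside the prompt (names and wording here are illustrative):

```python
def step_by_step_prompt(problem: str, steps: list[str]) -> str:
    """State the problem, then enumerate the steps the model
    should walk through in order before answering."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, start=1))
    return f"{problem}\n\nWork through these steps in order:\n{numbered}"

prompt = step_by_step_prompt(
    "A train travels 120 km in 2 hours. What is its average speed?",
    [
        "Identify the distance and the time.",
        "Apply speed = distance / time.",
        "State the final answer with units.",
    ],
)
print(prompt)
```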
Asking the model to explain its reasoning before giving the final output encourages deeper thinking and more accurate responses.
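This request can be bolted onto any existing prompt; a minimal sketch, with the suffix wording purely as an example:

```python
def with_reasoning_request(prompt: str) -> str:
    """Ask the model to show its thinking before committing to an answer,
    and to mark the final answer so it is easy to extract."""
    return (
        f"{prompt}\n\nExplain your reasoning step by step first, "
        "then give the final answer on its own line prefixed with 'Answer:'."
    )

prompt = with_reasoning_request("What is 17 * 24?")
print(prompt)
```

Asking for a marked `Answer:` line also makes the final result easy to parse out of a long reasoning trace.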
Dividing a large or complicated task into smaller parts helps the model focus on one step at a time, leading to clearer and more reliable results.
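Task splitting can be sketched as turning one large request into a sequence of focused prompts, each sent separately (the helper below is illustrative):

```python
def split_into_subtasks(topic: str, subtasks: list[str]) -> list[str]:
    """Turn one large request into a sequence of focused prompts,
    each handling a single part of the overall task."""
    return [
        f"Step {i} on the topic '{topic}': {sub}"
        for i, sub in enumerate(subtasks, start=1)
    ]

prompts = split_into_subtasks(
    "climate change",
    [
        "Summarize the main causes.",
        "Summarize the main effects.",
        "Suggest three mitigation strategies.",
    ],
)
for p in prompts:
    print(p)
```

Each prompt in the list can then be issued on its own, so the model concentrates on one step at a time.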
Requesting the model to consider multiple solutions or viewpoints before choosing one leads to balanced and well-thought-out answers.
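The steps above can be sketched as a prompt that lists the options and asks for an evaluation before a recommendation (wording is illustrative):

```python
def compare_options_prompt(question: str, options: list[str]) -> str:
    """List the candidate options and ask the model to weigh
    each one before committing to a recommendation."""
    listed = "\n".join(f"- {o}" for o in options)
    return (
        f"{question}\n\nOptions:\n{listed}\n\n"
        "Evaluate the pros and cons of each option first, "
        "then recommend one and justify your choice."
    )

prompt = compare_options_prompt(
    "Which data store fits a small read-heavy app?",
    ["SQLite", "PostgreSQL", "Redis"],
)
print(prompt)
```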
Good prompts come in many shapes depending on the situation. In school or coding contexts, they are usually clear, instructional, or analytical, helping you think carefully or solve a problem 🟦. With friends, prompts can be fun and creative 🟩, while social media and discussions often use open-ended or reflective prompts to engage people 🟨. A strong prompt gives enough guidance but still leaves room to be creative or thoughtful. Overall, knowing the type of situation helps you craft prompts that work best for learning, coding, or sharing ideas.
🟦 = Formal (School / Coding), 🟩 = Informal, 🟨 = Social / Social Media