
Kuldeep Singh Sidhu

singhsidhukuldeep

AI & ML interests

Seeking contributors for a completely open-source 🚀 Data Science platform! singhsidhukuldeep.github.io

Posts 60

Good folks from VILA Lab at Mohamed bin Zayed University of AI have introduced 26 guiding principles for optimizing prompts when interacting with large language models (LLMs) like LLaMA and GPT.

These principles aim to enhance LLM response quality, accuracy, and task alignment across various scales of models.

1. Be direct and concise, avoiding unnecessary politeness.
2. Specify the intended audience.
3. Break complex tasks into simpler steps.
4. Use affirmative directives instead of negative language.
5. Request explanations in simple terms for clarity.
6. Mention a potential reward for better solutions.
7. Provide examples to guide responses.
8. Use consistent formatting and structure.
9. Clearly state tasks and requirements.
10. Mention potential penalties for incorrect responses.
11. Request natural, human-like answers.
12. Encourage step-by-step thinking.
13. Ask for unbiased responses without stereotypes.
14. Allow the model to ask clarifying questions.
15. Request explanations with self-tests.
16. Assign specific roles to the model.
17. Use delimiters to separate sections.
18. Repeat key words or phrases for emphasis.
19. Combine chain-of-thought with few-shot prompts.
20. Use output primers to guide responses.
21. Request detailed responses on specific topics.
22. Specify how to revise or improve text.
23. Provide instructions for generating multi-file code.
24. Give specific starting points for text generation.
25. Clearly state content requirements and guidelines.
26. Request responses similar to provided examples.
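Several of these principles can be combined in a single prompt. Below is a minimal Python sketch (the function name, delimiters, and example text are illustrative assumptions, not taken from the paper) that applies role assignment (#16), audience specification (#2), few-shot examples with chain-of-thought (#7, #19), delimiters (#17), step-by-step thinking (#12), and an output primer (#20):

```python
def build_prompt(question: str) -> str:
    """Assemble a prompt applying several of the 26 guiding principles.

    Illustrative sketch only; the section markers and wording are
    assumptions, not prescribed by the paper.
    """
    role = "You are an expert science tutor."                 # 16: assign a role
    audience = "Explain for a curious high-school student."   # 2: intended audience
    examples = (                                              # 7/19: few-shot + CoT
        "###Example###\n"
        "Q: Why is the sky blue?\n"
        "A: Let's think step by step. Sunlight scatters off air molecules; "
        "shorter (blue) wavelengths scatter the most, so the sky looks blue."
    )
    task = f"###Question###\nQ: {question}"                   # 17: delimiters
    primer = "A: Let's think step by step."                   # 12/20: CoT + output primer
    return "\n\n".join([role, audience, examples, task, primer])

prompt = build_prompt("Why do ice cubes float in water?")
print(prompt)
```

Ending the prompt with the primer "A: Let's think step by step." nudges the model to continue directly with a reasoned answer rather than restating the question.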

Results show significant improvements in both "boosting" (response quality) and "correctness" (accuracy) across different model scales. On the ATLAS benchmark, the specialized prompts improved response quality by an average of 57.7% and accuracy by 67.3% when applied to GPT-4.
OpenAI's latest model, "o1", has demonstrated remarkable performance on the Norway Mensa IQ test, scoring an estimated IQ of 120.

Everyone should think before answering!

Key findings:

• o1 correctly answered 25 out of 35 IQ questions, surpassing average human performance
• The model excelled at pattern recognition and logical reasoning tasks
• Performance was validated on both public and private test sets to rule out training data bias

Technical details:

• o1 utilizes advanced natural language processing and visual reasoning capabilities
• The model likely employs transformer architecture with billions of parameters
• Improved few-shot learning allows o1 to tackle novel problem types

Implications:

• This represents a significant leap in AI reasoning abilities
• We may see AIs surpassing 140 IQ by 2026 if the trend continues
• Raises important questions about the nature of intelligence and cognition
