OpenAI has released model o1, which significantly outperforms GPT-4o on reasoning-heavy tasks.
Most large language models (LLMs) can answer your questions in a zero-shot manner. Zero-shot prompting means that you don't need to provide examples of appropriate answers.
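As a minimal sketch (the review and labels below are our own illustrative choices, not from the source), a zero-shot prompt contains only the task and the input, with no worked examples:

```python
# Zero-shot prompting: the prompt states the task and the input directly.
# No example question/answer pairs are included; the model must rely on
# what it learned during training. The review text here is illustrative.
zero_shot_prompt = (
    "Classify the sentiment of this review as positive or negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

print(zero_shot_prompt)
```

The trailing "Sentiment:" cue nudges the model to complete the prompt with a label rather than a free-form reply.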
For more complex requests, you can use few-shot prompting. This involves including examples of questions and answers in the prompt, which condition the model's answers.
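A few-shot prompt can be built by prepending labelled examples to the question, so the model can infer the expected answer format. This is a sketch; the helper name and example reviews are hypothetical:

```python
# Few-shot prompting: labelled examples are placed before the real
# question, conditioning the model's answer format and style.
examples = [
    ("The screen is gorgeous.", "positive"),
    ("It stopped working in a week.", "negative"),
]

def build_few_shot_prompt(question: str) -> str:
    # Render each example as a Review/Sentiment pair, then append
    # the real question with the label left blank for the model.
    shots = "\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    return f"{shots}\nReview: {question}\nSentiment:"

prompt = build_few_shot_prompt("Great sound, terrible battery life.")
print(prompt)
```

The examples do the conditioning: two or three well-chosen pairs are often enough to fix the output format.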
The next level involves chain-of-thought (CoT) prompting, in which you ask the LLM to think step-by-step. You can combine CoT with few-shot prompting to get better results on more complex tasks.
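In its simplest form, CoT prompting just appends an instruction to reason step by step. The wrapper function and example question below are our own illustration, and the same wrapper can be applied to a few-shot prompt to combine the two techniques:

```python
# Chain-of-thought prompting: ask the model to show its reasoning
# before the final answer, which improves results on multi-step tasks.
COT_TRIGGER = "Let's think step by step."

def add_cot(prompt: str) -> str:
    # Append the step-by-step instruction to any existing prompt,
    # whether zero-shot or few-shot.
    return f"{prompt}\n{COT_TRIGGER}"

base = "A shop sells pens at 3 for $2. How much do 12 pens cost?"
cot_prompt = add_cot(base)
print(cot_prompt)
```

For few-shot CoT, each example answer would itself contain worked reasoning, not just the final label.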
We covered these and more advanced techniques in this article.
By optimising their latest model for CoT, OpenAI has achieved a step-change improvement in performance compared to GPT-4o.

[Figure: benchmark results for the o1 model. Source: OpenAI]
As with other models, fine-tuning for your specific use case further improves o1's performance.
By combining the recent advances in CoT with voice-optimised models, we expect several customer-facing AI use cases to become viable in 2025.