Automate 90% of manual prompt engineering with Hamming's self-improving prompt optimizer. The fastest way to make your prompts, RAG pipelines, and AI agents more reliable. Iterate faster during development and prevent regressions in production.
Are you spending a lot of time optimizing prompts by hand? Hamming AI is launching their Prompt Optimizer (a new feature, currently in beta) to automate prompt engineering. It's completely free for 7 days!
Thought experiment: What if LLMs were used to optimize prompts for other LLMs?
Problem: Writing prompts by hand is tedious
Writing high-quality, performant prompts by hand requires enormous trial and error. Here's the usual workflow (sketched in code after the list):
Write an initial prompt.
Measure how well it performs on a few examples in a prompt playground. Bonus points if you use an experimentation platform like Hamming to automate this flow.
Tweak the prompt by hand to handle cases where it's failing.
Repeat steps 2 & 3 until you get tired of wordsmithing.
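To make the tedium concrete, here's roughly what steps 2 and 3 look like when done by hand. This is an illustrative sketch, not Hamming's API: the labeled examples, model choice, and exact-match scorer are toy stand-ins.

```python
# A hand-rolled version of steps 2 & 3: score one prompt against a few
# labeled examples, then tweak the prompt string and rerun.
from openai import OpenAI

client = OpenAI()

# Hypothetical labeled examples: (input, expected output).
EXAMPLES = [
    ("I loved this product!", "positive"),
    ("Terrible. Never buying again.", "negative"),
]

def score_prompt(system_prompt: str) -> float:
    """Return the fraction of examples the prompt handles correctly."""
    correct = 0
    for text, expected in EXAMPLES:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model for illustration
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": text},
            ],
        )
        if resp.choices[0].message.content.strip().lower() == expected:
            correct += 1
    return correct / len(EXAMPLES)

# Step 3 is entirely manual: read the failures, reword the prompt, rerun.
print(score_prompt("Classify the sentiment as 'positive' or 'negative'."))
```

Every edit to the prompt means another full pass over the examples, and another round of eyeballing the failures.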
What's worse, new model versions often break previously working prompts. And if you want to switch from OpenAI's GPT-3.5 Turbo to Llama 3, you have to re-optimize your prompts by hand. ❌
Hamming's take: use LLMs to write optimized prompts
Describe your task, add examples (or let Hamming synthesize some), and click run.
Behind the scenes, Hamming uses LLMs to generate different prompt variants. Their LLM judge measures how well a particular prompt solves the task. They capture outlier examples and use them to improve the few-shot examples in the prompt. They run several "trials" to refine the prompts iteratively.
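Hamming hasn't published its internals, but the pattern described above maps roughly onto a loop like the one below. This is a sketch of the general optimize-with-LLMs technique, not Hamming's implementation; the model name, judge prompt, variant count, and trial structure are all assumptions for illustration.

```python
# A rough sketch of LLM-driven prompt optimization: propose variants,
# score them with an LLM judge, keep the best, repeat for several trials.
from openai import OpenAI

client = OpenAI()

def complete(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model; Hamming's choice is not public
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

def generate_variants(task: str, current: str, n: int = 3) -> list[str]:
    """Ask an LLM to propose n rewrites of the current prompt."""
    return [
        complete("You rewrite prompts to better solve the given task. "
                 "Return only the rewritten prompt.",
                 f"Task: {task}\nCurrent prompt:\n{current}")
        for _ in range(n)
    ]

def judge(task: str, prompt: str, example: str) -> float:
    """LLM-as-judge: grade one output on a 0-to-1 scale."""
    output = complete(prompt, example)
    verdict = complete(
        "Grade how well the answer solves the task. Reply with a single "
        "number between 0 and 1.",
        f"Task: {task}\nInput: {example}\nAnswer: {output}")
    try:
        return float(verdict.strip())
    except ValueError:
        return 0.0

def optimize(task: str, seed_prompt: str, examples: list[str],
             trials: int = 3) -> str:
    """Run several trials, keeping the best-scoring prompt each round."""
    best, best_score = seed_prompt, -1.0
    for _ in range(trials):
        for candidate in [best] + generate_variants(task, best):
            score = sum(judge(task, candidate, ex) for ex in examples)
            if score > best_score:
                best, best_score = candidate, score
        # Outlier-mining and few-shot injection would slot in here.
    return best
```

The outlier-capture step Hamming describes would extend this loop: the lowest-scoring examples get folded back into the prompt as few-shot demonstrations before the next trial.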
Benefits:
No more tedious wordsmithing.
No more scoring outputs by hand.
No need to remember to tip your LLM or ask it to think carefully step-by-step.