Future of Prompt Engineering: Prompt LLMs

This is the"meta-learning" with few-shot. LLM can "learn" on whatever you manage to cram into the context window with "prompt LLMs"

Future of Prompt Engineering: Prompt LLMs
Where are LLMs going?

The next frontier of prompt engineering is probably going to be "Prompt LLMs": prompts that act as the brain 🧠 of an LLM and turn it into an autonomous agent.

LLMs are unaware of their own strengths and limitations. They have a finite context window, they can barely do mental math, and an unlucky prompt can send them off the rails 🤦‍♀️

"Prompt LLMs"  can string prompts together in loops that create agents that can perceive, think, and act, their goals defined in English in prompts.

Feedback is incorporated by adding "reflect prompts" that evaluate outcomes and feed the critique back into the next iteration.
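Continuing the sketch above, a reflect prompt is just another LLM call whose output gets appended to the agent's history (again assuming the hypothetical `call_llm` placeholder):

```python
def reflect(goal: str, action: str, outcome: str) -> str:
    # Ask the LLM to evaluate the outcome of the last action; the critique
    # is then appended to the history so later prompts can build on it.
    return call_llm(
        f"Goal: {goal}\n"
        f"Action taken: {action}\n"
        f"Observed outcome: {outcome}\n"
        "Did this move us toward the goal? Name one mistake and one improvement."
    )
```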

This is the"meta-learning" with few-shot. LLM can "learn" on whatever you manage to cram into the context window with "prompt LLMs"

What do you think?