The LLM forgets details in the middle of the prompt. Rather than dumping all text into one prompt, we should do something more intelligent: structure the information search or summarize prior information. This "lost in the middle" weakness has been pointed out in the research: https://arxiv.org/html/2404.02060v1
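One way to avoid relying on mid-prompt recall is a map-reduce style summarization pass: summarize each chunk of the context independently, then combine the partial summaries into a short final prompt. A minimal sketch, where `summarize` is a stand-in for a real LLM call (here it just truncates):

```python
def summarize(text: str, max_chars: int = 200) -> str:
    """Placeholder for an LLM summarization call; here it just truncates."""
    return text[:max_chars]

def chunk(text: str, size: int) -> list[str]:
    """Split the context into fixed-size pieces."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_prompt(long_context: str, question: str, chunk_size: int = 1000) -> str:
    # Map: summarize each chunk independently so no detail sits
    # in the middle of a very long prompt.
    partials = [summarize(c) for c in chunk(long_context, chunk_size)]
    # Reduce: combine partial summaries into one short context block.
    combined = summarize("\n".join(partials), max_chars=800)
    return f"Context summary:\n{combined}\n\nTask:\n{question}"
```

The chunk size and summary lengths here are arbitrary; in practice they would be tuned to the model's context window.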
One possible solution would be to optimize prompt engineering. Another possible solution is a multi-agent system (https://arxiv.org/pdf/2306.03314), in which we have specialized submodels, e.g. a `cog.yaml` specialist, a `predict.py` specialist, a debugging specialist, etc. Possible multi-agent frameworks include:
- AutoGen: https://github.com/microsoft/autogen
- CrewAI: https://github.com/crewAIInc/crewAI
- WorkGPT: https://github.com/team-openpm/workgpt
A proposed multi-agent structure is:
- file agent - reads files, orders them by importance, and extracts relevant parts for the `cog.yaml` and `predict.py` agents
- `cog.yaml` agent - generates `cog.yaml` using extracted files, working examples, and PyPI grounding. Will test by trying to run `cog run`.
- `predict.py` agent - generates `predict.py` using extracted files and working examples. Will test by trying to run `cog predict`.
- critic agent (optional, costly but could save development time) - reviews generated `cog.yaml` and `predict.py` before they're run
- debugging agent - reviews errors from `cog run` or `cog predict` and synthesizes feedback back to the respective agent
- weights agent
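The proposed structure can be sketched as a plain-Python pipeline, independent of any particular framework. All interfaces here are hypothetical: `file_agent` stands in for importance ranking, `specialist_agent` for an LLM-backed generator, and `debugging_agent` for error-feedback synthesis.

```python
from dataclasses import dataclass, field

@dataclass
class Draft:
    name: str                 # e.g. "cog.yaml" or "predict.py"
    content: str
    errors: list[str] = field(default_factory=list)

def file_agent(files: dict[str, str]) -> str:
    # Stand-in for importance ranking: prefer README-like files first.
    ranked = sorted(files, key=lambda f: ("readme" not in f.lower(), f))
    return "\n".join(files[f] for f in ranked)

def specialist_agent(name: str, context: str) -> Draft:
    # Placeholder for an LLM-backed cog.yaml / predict.py generator.
    return Draft(name=name, content=f"# generated from {len(context)} chars of context")

def debugging_agent(draft: Draft, error: str) -> Draft:
    # Synthesizes tool errors into feedback; here it just records them.
    draft.errors.append(error)
    return draft

def run_pipeline(files: dict[str, str]) -> dict[str, Draft]:
    context = file_agent(files)
    return {name: specialist_agent(name, context)
            for name in ("cog.yaml", "predict.py")}
```

In a real system each function would wrap a model call and the drafts would be validated with `cog run` / `cog predict`; the sketch only shows how context flows from the file agent to the specialists.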
Concerns with using multi-agent systems include getting stuck in loops and the potential for high costs.