# MemP: Exploring Agent Procedural Memory

📄 arXiv · 🤗 HF Paper

License: MIT

## 🌻 Acknowledgement

Our code is referenced and adapted from LangChain and ETO. Thanks to ETO for providing the trajectories on the training set.

## 🌟 Overview

Large Language Model-based agents excel at diverse tasks, yet they suffer from brittle procedural memory that is manually engineered or entangled in static parameters. In this work, we investigate strategies to endow agents with a learnable, updatable, and lifelong procedural memory. We propose MemP, which distills past agent trajectories into both fine-grained, step-by-step instructions and higher-level, script-like abstractions, and we explore the impact of different strategies for the Build, Retrieval, and Update of procedural memory. Coupled with a dynamic regimen that continuously updates, corrects, and deprecates its contents, this memory repository evolves in lockstep with new experience. Empirical evaluation on TravelPlanner and ALFWorld shows that, as the memory repository is refined, agents achieve steadily higher success rates and greater efficiency on analogous tasks. Moreover, procedural memory built from a stronger model retains its value: migrating it to a weaker model can also yield substantial performance gains.

In MemP, we support two strategies for building procedural memory: one constructs procedural memory offline using existing trajectories, and the other adopts a self-learning approach, starting from scratch to execute agent tasks online while actively learning procedural memory.
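The build/retrieve/update cycle described above can be sketched as a small in-memory store. Note that the class and method names below are illustrative assumptions, not the actual MemP API, and word-overlap retrieval stands in for the embedding-based retrieval a real system would use:

```python
# Minimal sketch of a procedural-memory store. All names here are
# hypothetical; see the MemP paper/code for the real implementation.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Procedure:
    task: str                 # task description the procedure was distilled from
    steps: list[str] = field(default_factory=list)  # fine-grained instructions
    script: str = ""          # higher-level script-like abstraction
    successes: int = 0        # usage statistic driving update/deprecation

class ProceduralMemory:
    def __init__(self) -> None:
        self.entries: list[Procedure] = []

    def build(self, task: str, trajectory: list[str]) -> Procedure:
        """Distill a finished trajectory into a memory entry."""
        proc = Procedure(task=task, steps=list(trajectory))
        self.entries.append(proc)
        return proc

    def retrieve(self, query: str) -> Procedure | None:
        """Return the entry whose task shares the most words with the query
        (a toy stand-in for embedding-similarity retrieval)."""
        def overlap(p: Procedure) -> int:
            return len(set(p.task.lower().split()) & set(query.lower().split()))
        best = max(self.entries, key=overlap, default=None)
        return best if best is not None and overlap(best) > 0 else None

    def update(self, proc: Procedure, success: bool) -> None:
        """Reinforce useful procedures; deprecate ones that keep failing."""
        if success:
            proc.successes += 1
        else:
            proc.successes -= 1
            if proc.successes < -2:
                self.entries.remove(proc)

mem = ProceduralMemory()
mem.build("book a flight to Paris", ["search flights", "compare prices", "book ticket"])
hit = mem.retrieve("book a cheap flight")
print(hit.steps if hit else "no match")
```

The offline mode corresponds to calling `build` over a batch of existing trajectories before deployment; the online mode interleaves `build`, `retrieve`, and `update` while the agent acts.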

## 🔧 Installation

```shell
git clone https://github.com/zjunlp/MemP
cd MemP
pip install -r requirements.txt
cd ProceduralMem
```

## ✏️ Offline Running

```shell
python run_memp_offline.py \
    --model your_model_name \
    --split dev_or_test \
    --batch_size concurrency_num \
    --max_steps n \
    --exp_name save_name \
    --few_shot \
    --use_memory
```

## 📝 Online Running

```shell
python run_memp_online.py \
    --model your_model_name \
    --split dev_or_test \
    --batch_size concurrency_num \
    --max_steps n \
    --exp_name save_name \
    --few_shot \
    --use_memory \
    --overwrite
```

## 🚩 Citation

If this work is helpful, please kindly cite it as:

```bibtex
@article{DBLP:journals/corr/abs-2508-06433,
  author     = {Runnan Fang and
                Yuan Liang and
                Xiaobin Wang and
                Jialong Wu and
                Shuofei Qiao and
                Pengjun Xie and
                Fei Huang and
                Huajun Chen and
                Ningyu Zhang},
  title      = {Memp: Exploring Agent Procedural Memory},
  journal    = {CoRR},
  volume     = {abs/2508.06433},
  year       = {2025},
  url        = {https://doi.org/10.48550/arXiv.2508.06433},
  doi        = {10.48550/ARXIV.2508.06433},
  eprinttype = {arXiv},
  eprint     = {2508.06433},
  timestamp  = {Sat, 13 Sep 2025 14:46:20 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2508-06433.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
