Our paper, titled “PPM,” has been accepted for presentation at ESEC/FSE 2024. PPM introduces a novel approach to benchmarking large language models (LLMs) by advocating for dynamically generated datasets instead of static ones. Specifically, PPM is designed to benchmark the programming capability of LLMs: it proposes merging two existing programming problems to create a new one that serves as a benchmark problem.
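To illustrate the idea of merging two programming problems, here is a minimal, hypothetical sketch; the names (`merge_problems`, the dictionary layout) are illustrative assumptions and do not reflect the actual algorithm described in the paper. It composes two toy problems so that the merged problem's solution feeds the first problem's output into the second.

```python
def merge_problems(p1, p2):
    """Build a new problem whose solution applies p2's solution to p1's output.

    This is an illustrative composition, not the paper's actual method.
    """
    return {
        "prompt": f"{p1['prompt']} Then, {p2['prompt'].lower()}",
        "solution": lambda x: p2["solution"](p1["solution"](x)),
    }

# Two toy "seed" problems with reference solutions.
p1 = {"prompt": "Return the sum of a list of integers.", "solution": sum}
p2 = {"prompt": "Return the square of an integer.", "solution": lambda n: n * n}

merged = merge_problems(p1, p2)
print(merged["prompt"])
print(merged["solution"]([1, 2, 3]))  # sum([1, 2, 3]) = 6, then 6 * 6 = 36
```

Because the merged problem combines two prompts the model may have seen separately, its exact wording and reference solution are unlikely to appear verbatim in training data, which is the motivation for dynamic benchmark generation.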