1. "BDPL: A Boundary Differentially Private Layer Against Machine Learning Model Extraction Attacks"

This is the most probable match. Published at ESORICS (European Symposium on Research in Computer Security), this paper introduces a security layer designed to protect machine learning models from being "stolen" (extracted) by adversaries. It uses differential privacy to obfuscate responses for queries that fall near the model's decision boundary. You can find the full text through the official Springer link or IEEE Xplore.

If you have a file named bdplarchive.rar from a security repository, it likely contains the implementation of the boundary differentially private layer and the experimental scripts used to verify its effectiveness against extraction attacks.

2. "Black-box Discrete Prompt Learning" (BDPL)

A more recent 2023 paper from TMLR uses the same acronym. This research focuses on optimizing discrete prompts for large language models (LLMs) without needing access to the model's internal weights or gradients.
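To make the first paper's core idea concrete, here is a minimal sketch of a boundary differentially private response for a binary classifier: confident predictions are answered truthfully, while queries near the decision boundary get a label perturbed by randomized response. The function name, the fixed 0.5 boundary, and the `boundary_margin` rule are illustrative assumptions, not the paper's exact definitions.

```python
import math
import random

def bdpl_response(prob_class1: float, epsilon: float = 1.0,
                  boundary_margin: float = 0.1) -> int:
    """Return a (possibly perturbed) binary label for one query.

    Queries whose predicted probability lies within `boundary_margin`
    of the 0.5 decision boundary are answered via randomized response
    calibrated to `epsilon`; all other queries are answered honestly.
    """
    label = 1 if prob_class1 >= 0.5 else 0
    if abs(prob_class1 - 0.5) >= boundary_margin:
        return label  # far from the boundary: no perturbation needed
    # Randomized response: keep the true label with probability
    # e^eps / (e^eps + 1), otherwise flip it.
    keep_prob = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return label if random.random() < keep_prob else 1 - label
```

The intuition matching the paper's threat model: extraction attacks gain the most information from queries close to the decision boundary, so noise is concentrated there, preserving accuracy on confident predictions.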