Update app.py
app.py
CHANGED
@@ -17,7 +17,45 @@ tokenizer = AutoTokenizer.from_pretrained(checkpoint, use_auth_token=True)
 df = pd.read_csv("samples.csv")
 df = df[["content"]].iloc[:50]
 
-
+title = "<h1 style='text-align: center; color: #333333; font-size: 40px;'> 🤔 StarCoder Memorization Checker"
+
+description = """
+The ability of LLMs to learn their training set by heart can pose huge privacy issues, as many large-scale conversational AIs available commercially collect user data at scale and fine-tune their models on it.
+This means that if sensitive data is sent to and memorized by an AI, other users can willingly or unwillingly prompt the AI to spit out this sensitive data.
+
+To raise awareness of this issue, we show in this demo how much [StarCoder](https://huggingface.co/bigcode/starcoder), an LLM specialized in coding tasks, has memorized its training set, [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup).
+
+To evaluate memorization of the training set, we prompt StarCoder with the first tokens of an example from the training set. If StarCoder completes the prompt with an output that looks very similar to the original sample, we consider this sample to be memorized by the LLM.
+"""
+
+memorization_definition = """
+## Definition of memorization
+
+Several definitions of LLM memorization have been proposed. We will look at two: verbatim memorization and approximate memorization.
+
+### Verbatim memorization
+
+A definition of verbatim memorization is proposed in [Quantifying Memorization Across Neural Language Models
+](https://arxiv.org/abs/2202.07646):
+
+A string $s$ is *extractable* with $k$ tokens of context from a model $f$ if there exists a (length-$k$) string $p$ such that the concatenation $[p \, || \, s]$ is contained in the training data for $f$, and $f$ produces $s$ when prompted with $p$ using greedy decoding.
+
+For example, if a model's training dataset contains the sequence `My phone number is 555-6789`, and, given the length $k = 4$ prefix `My phone number is`, the most likely output is `555-6789`, then this sequence is extractable (with 4 words of context).
+
+This means that an LLM performs verbatim memorization if parts of its training set are extractable. While easy to check, this definition is too restrictive, as an LLM might retain facts in a slightly different syntax but keep the same semantics.
+
+### Approximate memorization
+
+Therefore, a definition of approximate memorization was proposed in [Preventing Verbatim Memorization in Language
+Models Gives a False Sense of Privacy](https://arxiv.org/abs/2210.17546):
+
+A training sentence is approximately memorized if the [BLEU score](https://huggingface.co/spaces/evaluate-metric/bleu) of the completed sentence and the original training sentence is above a specific threshold.
+
+**For this demo, we will focus on approximate memorization, with a threshold set at 0.75.**
+
+The researchers found that a threshold of 0.75 provided good empirical results in terms of semantic and syntactic similarity.
+"""
+
 high_bleu_examples = {
     "Example 1": """from django.contrib import admin
 from .models import SearchResult
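The verbatim-memorization definition added in this hunk translates directly into a check: greedily decode from the first $k$ tokens of a training sample and test whether the model reproduces the remaining tokens exactly. A minimal sketch of that check, assuming a generic `transformers` causal LM (the model choice and the helper name are illustrative, not part of this commit):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative small model; the demo itself targets bigcode/starcoder.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def is_extractable(sample: str, k: int = 50) -> bool:
    """Verbatim extractability: does greedy decoding from the first
    k tokens of `sample` reproduce the rest of it token for token?"""
    ids = tokenizer(sample, return_tensors="pt").input_ids
    prefix, reference = ids[:, :k], ids[:, k:]
    output = model.generate(
        prefix,
        max_new_tokens=reference.shape[1],
        do_sample=False,  # greedy decoding, as the definition requires
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = output[:, prefix.shape[1]:]  # strip the prompt tokens
    if completion.shape[1] < reference.shape[1]:  # stopped early at EOS
        return False
    return bool((completion == reference).all())
```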
@@ -195,13 +233,19 @@ def df_select(evt: gr.SelectData):
 
 with gr.Blocks() as demo:
     with gr.Column():
-        gr.Markdown(
+        gr.Markdown(title)
+        with gr.Row():
+            with gr.Column():
+                gr.Markdown(description)
+                with gr.Accordion("Learn more about memorization definition", open=False):
+                    gr.Markdown(memorization_definition)
         with gr.Row():
             with gr.Column():
                 instruction = gr.Textbox(
                     placeholder="Enter your code here",
                     lines=5,
                     label="Original",
+                    value=high_bleu_examples["Example 1"]
                 )
 
             with gr.Column():
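For readers following the layout change above: in Gradio, components declared inside `gr.Blocks` are wired to callbacks through events on other components. A self-contained sketch of the pattern this layout builds toward (the `complete` function and the button are illustrative; the actual wiring lives further down in app.py, outside this diff):

```python
import gradio as gr

def complete(code: str) -> str:
    # Placeholder: the real app would query StarCoder here.
    return code

with gr.Blocks() as demo:
    with gr.Column():
        gr.Markdown("<h1>🤔 StarCoder Memorization Checker</h1>")
        with gr.Row():
            with gr.Column():
                original = gr.Textbox(placeholder="Enter your code here",
                                      lines=5, label="Original")
            with gr.Column():
                completion = gr.Textbox(lines=5, label="Completion")
        check = gr.Button("Check")
        # Clicking the button runs `complete` on the left textbox and
        # writes the result into the right one.
        check.click(fn=complete, inputs=original, outputs=completion)

demo.launch()
```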
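Putting the pieces together, the criterion the description commits to (approximate memorization at a BLEU threshold of 0.75) can be computed with the `evaluate` library that the definition links to. A sketch, assuming the completion has already been generated (the function name is ours, not the app's):

```python
import evaluate

bleu = evaluate.load("bleu")

def is_approximately_memorized(completion: str, original: str,
                               threshold: float = 0.75) -> bool:
    """Flag a sample as memorized if BLEU(completion, original)
    meets the demo's 0.75 threshold."""
    result = bleu.compute(predictions=[completion], references=[[original]])
    return result["bleu"] >= threshold

# An exact reproduction scores BLEU = 1.0 and is flagged as memorized.
print(is_approximately_memorized(
    "from django.contrib import admin",
    "from django.contrib import admin",
))  # True
```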