scherrmann committed
Commit 5e8fce8
Parent: 65a7495

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -28,7 +28,7 @@ decay of 1e − 5 and a maximal learning of 1e − 4. I train the model using a
 To fine-tune the model, I use several datasets, including:
 - A manually labeled [multi-label database of German ad-hoc announcements](https://arxiv.org/pdf/2311.07598.pdf) containing 31,771 sentences, each associated with up to 20 possible topics.
 - An extractive question-answering dataset based on the SQuAD format, which was created using 3,044 ad-hoc announcements processed by OpenAI's ChatGPT to generate and answer questions (see [here](https://huggingface.co/datasets/scherrmann/adhoc_quad)).
- - The [financial phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013) for sentiment classification, translated to German using [DeepL](https://www.deepl.com/translator)
+ - The [financial phrase bank](https://arxiv.org/abs/1307.5336) of Malo et al. (2013) for sentiment classification, translated to German using [DeepL](https://www.deepl.com/translator) (see [here](https://huggingface.co/datasets/scherrmann/financial_phrasebank_75agree_german)).
 
 ### Benchmark Results
 The further pre-trained German FinBERT model demonstrated the following performances on finance-specific downstream tasks:
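Both fine-tuning datasets linked above are hosted on the Hugging Face Hub. The following is a minimal sketch of pulling them with the `datasets` library; it assumes each repository loads with its default configuration, and the available split names may differ from the usual ones.

```python
# Minimal sketch: load the two fine-tuning datasets referenced in the README diff
# from the Hugging Face Hub. Assumes each repository exposes a default configuration.
from datasets import load_dataset

# German translation of the financial phrase bank (75% agreement subset)
phrasebank = load_dataset("scherrmann/financial_phrasebank_75agree_german")

# SQuAD-style extractive QA dataset built from German ad-hoc announcements
adhoc_quad = load_dataset("scherrmann/adhoc_quad")

# Inspect the available splits and features of each dataset
print(phrasebank)
print(adhoc_quad)
```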