Update README.md
README.md CHANGED
@@ -17,10 +17,11 @@ pipeline_tag: text-classification
 QA-Evaluation-Metrics is a fast and lightweight Python package for evaluating question-answering models and prompting black-box and open-source large language models. It provides a range of basic, efficient metrics for assessing the performance of QA models.
 
 ### Updates
-- Updated to version 0.2.
+- Updated to version 0.2.17.
 - Supports prompting OpenAI GPT-series and Claude-series models (assuming an OpenAI SDK version > 1.0).
 - Supports prompting various open-source models, such as LLaMA-2-70B-chat and LLaVA-1.5, by calling the API from [deepinfra](https://deepinfra.com/models).
-
+- Added a trained tiny-bert model for QA evaluation; the model size is 18 MB.
+- Pass a Hugging Face repository name to download a model directly for TransformerMatcher.
 
 ## Installation
 * Python version >= 3.6
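For readers skimming the diff, the package's core metrics are used along these lines. This is a minimal sketch: the `qa_metrics` import paths and the `em_match` / `f1_score_with_precision_recall` names follow the package's documentation of this era and may differ in other versions.

```python
# pip install qa-metrics
from qa_metrics.em import em_match
from qa_metrics.f1 import f1_score_with_precision_recall

reference_answer = "Bill Medley and Jennifer Warnes"
candidate_answer = "The song was sung by Bill Medley and Jennifer Warnes."

# Exact match: True only when the candidate matches the reference verbatim
print(em_match(reference_answer, candidate_answer))

# Token-level F1, with precision and recall included in the result
print(f1_score_with_precision_recall(reference_answer, candidate_answer))
```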
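The GPT- and Claude-series prompting noted in the updates could look roughly like this. The `CloseLLM` wrapper, its key setter, and `prompt_gpt` are taken from the package's documented usage; treat them as assumptions rather than a definitive API.

```python
from qa_metrics.prompt_llm import CloseLLM  # wrapper name assumed from package docs

model = CloseLLM()
model.set_openai_api_key("YOUR_OPENAI_API_KEY")  # requires the OpenAI SDK > 1.0

prompt = (
    "Question: who sang 'Time of My Life'?\n"
    "Reference answer: Bill Medley and Jennifer Warnes\n"
    "Candidate answer: Bill Medley sang it.\n"
    "Is the candidate answer correct? Answer yes or no."
)

result = model.prompt_gpt(
    prompt=prompt,
    model_engine="gpt-3.5-turbo",
    temperature=0.1,
    max_tokens=10,
)
print(result)
```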
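Prompting the open-source models served by deepinfra follows the same pattern. Here `OpenLLM`, its import path, and `set_deepinfra_key` are likewise assumptions based on the package's documentation; the model id comes from deepinfra's catalog.

```python
from qa_metrics.prompt_open_llm import OpenLLM  # import path assumed

model = OpenLLM()
model.set_deepinfra_key("YOUR_DEEPINFRA_API_KEY")

prompt = (
    "Is 'Bill Medley sang it' a correct answer to "
    "'who sang Time of My Life'? Answer yes or no."
)

result = model.prompt(
    message=prompt,
    model_engine="meta-llama/Llama-2-70b-chat-hf",  # LLaMA-2-70B-chat on deepinfra
    temperature=0.1,
    max_tokens=10,
)
print(result)
```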
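Finally, the tiny-bert evaluator and the Hugging Face download path added in this commit could be exercised as follows. The `"tiny_bert"` selector string and the repository id in the comment are placeholders, not confirmed identifiers.

```python
from qa_metrics.transformerMatcher import TransformerMatcher  # import path assumed

# Select the trained 18 MB tiny-bert model mentioned in the update
# (the "tiny_bert" key is an assumed selector string).
tm = TransformerMatcher("tiny_bert")

# Alternatively, per this commit, pass a Hugging Face repository name to
# download a model directly (placeholder repo id, not a real identifier):
# tm = TransformerMatcher("your-org/your-qa-eval-model")

question = "Who wrote 'Pride and Prejudice'?"
reference_answer = "Jane Austen"
candidate_answer = "The novel was written by Jane Austen."

# transformer_match returns whether the candidate is judged equivalent
# to the reference, conditioned on the question.
print(tm.transformer_match(reference_answer, candidate_answer, question))
```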