Update README.md
README.md CHANGED
@@ -1,19 +1,15 @@
---
-license:
+license: apache-2.0
inference: false
---

-# SLIM-EXTRACT
+# SLIM-EXTRACT-PHI-3

-<!-- Provide a quick summary of what the model is/does. -->

-**slim-extract** implements a specialized function-calling customizable 'extract' capability that takes as an input a context passage, a customized key, and outputs a python dictionary with key that corresponds to the customized key, with a value consisting of a list of items extracted from the text corresponding to that key, e.g.,
+**slim-extract-phi-3** implements a specialized, customizable function-calling 'extract' capability: given a context passage and a custom key, it outputs a Python dictionary whose key is the custom key and whose value is a list of items extracted from the text for that key, e.g.,

`{'universities': ['Berkeley', 'Stanford', 'Yale', 'University of Florida', ...]}`

-This model is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.
-
-For fast inference use, we would recommend the 'quantized tool' version, e.g., [**'slim-extract-tool'**](https://huggingface.co/llmware/slim-extract-tool).


## Prompt format:
@@ -27,8 +23,8 @@ For fast inference use, we would recommend the 'quantized tool' version, e.g.,
<details>
<summary>Transformers Script</summary>

-model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract")
-tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract")
+model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract-phi-3")
+tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract-phi-3")

function = "extract"
params = "company"
@@ -70,7 +66,7 @@ For fast inference use, we would recommend the 'quantized tool' version, e.g.,
<summary>Using as Function Call in LLMWare</summary>

from llmware.models import ModelCatalog
-slim_model = ModelCatalog().load_model("llmware/slim-extract")
+slim_model = ModelCatalog().load_model("llmware/slim-extract-phi-3")
response = slim_model.function_call(text, params=["company"], function="extract")

print("llmware - llm_response: ", response)
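The hunks above show only the lines the commit touches; the rest of the Transformers script falls outside the diff context. Below is a minimal, self-contained sketch of how the renamed fragment is typically wired up end to end. The prompt wrapper, the sample passage, and the generation settings are assumptions in the style of llmware's other SLIM model cards, not lines from this commit:

```python
# Minimal sketch only - the prompt format below is an assumption modeled on
# llmware's other SLIM model cards, not something stated in this diff.
import ast

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("llmware/slim-extract-phi-3")
tokenizer = AutoTokenizer.from_pretrained("llmware/slim-extract-phi-3")

function = "extract"
params = "company"
text = "Tesla reported record deliveries this quarter."  # hypothetical sample passage

# assumed SLIM-style wrapper: passage + <extract> key </extract> + generation cue
prompt = "<human>: " + text + "\n<" + function + "> " + params + " </" + function + ">\n<bot>:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=100)

# decode only the tokens generated after the prompt
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=True)

# the card says the model emits a python dictionary, e.g. {'company': ['Tesla']};
# ast.literal_eval parses that string safely, unlike eval()
try:
    extracted = ast.literal_eval(response.strip())
except (ValueError, SyntaxError):
    extracted = {}

print(extracted)
```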
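For the LLMWare call in the last hunk, `text` is defined earlier in the card's full script, outside the diff context. A hedged sketch of consuming the response follows; the `llm_response` envelope and the extra keys in the loop are assumptions about llmware's return format, not part of this commit:

```python
# Hedged sketch: assumes function_call returns a dict whose 'llm_response'
# entry holds the extracted {key: [values]} mapping.
from llmware.models import ModelCatalog

text = "Tesla reported record deliveries this quarter."  # hypothetical sample passage
slim_model = ModelCatalog().load_model("llmware/slim-extract-phi-3")

# the key is customizable, so the same call pattern works for any field of interest
for key in ["company", "revenue"]:  # hypothetical keys
    response = slim_model.function_call(text, params=[key], function="extract")
    extracted = response.get("llm_response", {}) if isinstance(response, dict) else {}
    print(key, "->", extracted.get(key, []))
```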