Update README.md
README.md CHANGED
@@ -3,7 +3,7 @@ license: apache-2.0
 inference: false
 ---
 
-#
+# SLIM-TOPICS
 
 <!-- Provide a quick summary of what the model is/does. -->
 
@@ -11,10 +11,10 @@ inference: false
 
 slim-topics has been fine-tuned for **topic analysis** function calls, generating output consisting of a python dictionary corresponding to specified keys, e.g.:
 
-`{"
+`{"topics": ["..."]}`
 
 
-SLIM models are designed to
+SLIM models are designed to generate structured outputs that can be used programmatically as part of a multi-step, multi-model LLM-based automation workflow.
 
 Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](https://huggingface.co/llmware/slim-topics-tool).
 
@@ -22,7 +22,7 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](
 ## Prompt format:
 
 `function = "classify"`
-`params = "
+`params = "topics"`
 `prompt = "<human> " + {text} + "\n" + `
 `"<{function}> " + {params} + "</{function}>" + "\n<bot>:"`
 
@@ -74,7 +74,7 @@ Each slim model has a 'quantized tool' version, e.g., [**'slim-topics-tool'**](
 
 from llmware.models import ModelCatalog
 slim_model = ModelCatalog().load_model("llmware/slim-topics")
-response = slim_model.function_call(text,params=["
+response = slim_model.function_call(text, params=["topics"], function="classify")
 
 print("llmware - llm_response: ", response)
 
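For quick reference, below is a minimal end-to-end sketch of the usage pattern the updated card describes. The `ModelCatalog().load_model` and `function_call` lines mirror the snippet in the diff; the sample input text, the manual prompt assembly, and the illustrative output in the comments are assumptions rather than output from an actual run.

```python
from llmware.models import ModelCatalog

# Hypothetical input text, used only for illustration.
text = ("The company reported strong quarterly earnings, citing growth in "
        "cloud services and new AI product launches.")

# Manual prompt assembly, following the documented prompt format:
#   "<human> " + {text} + "\n" + "<{function}> " + {params} + "</{function}>" + "\n<bot>:"
function = "classify"
params = "topics"
prompt = "<human> " + text + "\n" + f"<{function}> " + params + f"</{function}>" + "\n<bot>:"

# Load the model and make the function call, as shown in the card.
slim_model = ModelCatalog().load_model("llmware/slim-topics")
response = slim_model.function_call(text, params=["topics"], function="classify")

# The model is fine-tuned to return a python dictionary keyed by the requested
# parameter, e.g. {"topics": ["earnings", "cloud services", "AI products"]}
# (values here are illustrative; exact output depends on the model run).
print("llmware - llm_response: ", response)
```

The `prompt` string is shown only to make the documented template concrete; when going through `function_call`, prompt construction is presumably handled for you.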