doberst committed on
Commit
1177311
1 Parent(s): 3b46af1

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -1,13 +1,13 @@
 ---
-license: cc-by-sa-4.0
+license: apache-2.0
 inference: false
 ---
 
-# SLIM-SUMMARY
+# SLIM-SUMMARY-TINY
 
 <!-- Provide a quick summary of what the model is/does. -->
 
-**slim-summary** is a small, specialized model fine-tuned for summarization function-calls, generating output consisting of a Python list of distinct summary points.
+**slim-summary-tiny** is a small, specialized model fine-tuned for summarization function-calls, generating output consisting of a Python list of distinct summary points.
 
 As an experimental feature in the model, there is an optional list size that can be passed with the parameters in invoking the model to guide the model to a specific number of response elements.
 
@@ -15,9 +15,9 @@ Input is a text passage, and output is a list of the form:
 
 &nbsp;&nbsp;&nbsp;&nbsp;`['summary_point1', 'summary_point2', 'summary_point3']`
 
-This model is 2.7B parameters, small enough to run on a CPU, and is fine-tuned on top of [**llmware/bling-stable-lm-3b-4e1t-v0**](https://huggingface.co/llmware/bling-stable-lm-3b-4e1t-v0), which in turn is a fine-tune of stabilityai/stablelm-3b-4e1t.
+This model is 1.1B parameters, small enough to run on a CPU, and is fine-tuned on top of a tiny-llama base.
 
-For fast inference use of this model, we would recommend using the 'quantized tool' version, e.g., [**'slim-summary-tool'**](https://huggingface.co/llmware/slim-summary-tool).
+For fast inference use of this model, we would recommend using the 'quantized tool' version, e.g., [**'slim-summary-tiny-tool'**](https://huggingface.co/llmware/slim-summary-tiny-tool).
 
 ## Usage Tips
 
@@ -41,8 +41,8 @@ For fast inference use of this model, we would recommend using the 'quantized to
 <details>
 <summary>Transformers Script </summary>
 
-model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary")
-tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary")
+model = AutoModelForCausalLM.from_pretrained("llmware/slim-summary-tiny")
+tokenizer = AutoTokenizer.from_pretrained("llmware/slim-summary-tiny")
 
 function = "summarize"
 params = "key points (3)"
@@ -87,7 +87,7 @@ For fast inference use of this model, we would recommend using the 'quantized to
 <summary>Using as Function Call in LLMWare</summary>
 
 from llmware.models import ModelCatalog
-slim_model = ModelCatalog().load_model("llmware/slim-summary")
+slim_model = ModelCatalog().load_model("llmware/slim-summary-tiny")
 response = slim_model.function_call(text, params=["key points (3)"], function="summarize")
 
 print("llmware - llm_response: ", response)
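The README above states that the model's output is a Python list of distinct summary points, rendered as a string. A minimal sketch of parsing that output follows; the helper name `parse_summary_points` and the fallback-to-single-item behavior are illustrative assumptions, not part of the model card:

```python
import ast

def parse_summary_points(raw: str) -> list:
    """Parse model output like "['a', 'b', 'c']" into a list of strings.

    Hypothetical helper: if the text is not a valid Python list literal,
    fall back to wrapping the raw text in a single-item list.
    """
    try:
        parsed = ast.literal_eval(raw.strip())
        if isinstance(parsed, list):
            return [str(p) for p in parsed]
    except (ValueError, SyntaxError):
        pass
    return [raw.strip()]

points = parse_summary_points("['summary_point1', 'summary_point2', 'summary_point3']")
print(points)  # → ['summary_point1', 'summary_point2', 'summary_point3']
```

Using `ast.literal_eval` rather than `eval` keeps parsing safe: it only accepts Python literals, so arbitrary code in a malformed model response cannot execute.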