doberst committed on
Commit 308eafb
1 Parent(s): 5743d87

Update README.md

Files changed (1)
  1. README.md +8 -15
README.md CHANGED
@@ -6,9 +6,13 @@ tags: [green, p1, llmware-fx, ov, emerald]
 
 # slim-extract-tiny-ov
 
- **slim-extract-tiny-ov** is a specialized function calling model with a single mission to look for values in a text, based on an "extract" key that is passed as a parameter. No other instructions are required except to pass the context passage, and the target key, and the model will generate a python dictionary consisting of the extract key and a list of the values found in the text, including an 'empty list' if the text does not provide an answer for the value of the selected key.
-
- This is an OpenVino int4 quantized version of slim-extract-tiny, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
+ **slim-extract-tiny-ov** is a specialized function calling model that implements a generative 'question' (e.g., 'q-gen') function, which takes a context passage as an input, and then generates as an output a python dictionary consisting of one key:
+
+ `{'question': ['What was the amount of revenue in the quarter?']}`
+
+ The model has been designed to accept one of three different parameters to guide the type of question-answer created: 'question' (generates a standard question), 'boolean' (generates a 'yes-no' question), and 'multiple choice' (generates a multiple choice question).
+
+ This is an OpenVino int4 quantized version of slim-q-gen, providing a very fast, very small inference implementation, optimized for AI PCs using Intel GPU, CPU and NPU.
 
 
 ### Model Description
@@ -16,23 +20,12 @@ This is an OpenVino int4 quantized version of slim-extract-tiny, providing a ver
 - **Developed by:** llmware
 - **Model type:** tinyllama
 - **Parameters:** 1.1 billion
- - **Model Parent:** llmware/slim-extract-tiny
+ - **Model Parent:** llmware/slim-q-gen
 - **Language(s) (NLP):** English
 - **License:** Apache 2.0
- - **Uses:** Extraction of values from complex business documents
+ - **Uses:** Question generation from a context passage
 - **RAG Benchmark Accuracy Score:** NA
 - **Quantization:** int4
-
- ### Example Usage
-
- from llmware.models import ModelCatalog
-
- text_passage = "The company announced that for the current quarter the total revenue increased by 9% to $125 million."
- model = ModelCatalog().load_model("slim-extract-tiny-ov")
- llm_response = model.function_call(text_passage, function="extract", params=["revenue"])
-
- Output: `llm_response = {"revenue": ["$125 million"]}`
-
 
 ## Model Card Contact
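For reference, below is a minimal usage sketch for the question-generation behavior described in the updated README, following the same llmware `ModelCatalog` / `function_call` pattern used in the "Example Usage" block removed above. The catalog model name, the `function="question"` keyword, and the `params` value are assumptions inferred from the README text, not confirmed by this commit.

    # Sketch only: the names below mirror the removed "Example Usage" block;
    # the exact catalog name and the function/params keywords for the q-gen
    # behavior are assumptions, not confirmed by this commit.
    from llmware.models import ModelCatalog

    text_passage = ("The company announced that for the current quarter "
                    "the total revenue increased by 9% to $125 million.")

    # Assumed catalog name, taken from the README heading above.
    model = ModelCatalog().load_model("slim-extract-tiny-ov")

    # Per the README, one of three guiding parameters may be passed:
    # "question", "boolean", or "multiple choice".
    llm_response = model.function_call(text_passage, function="question", params=["question"])

    # Expected output shape per the README, e.g.:
    # {'question': ['What was the amount of revenue in the quarter?']}
    print(llm_response)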