# my_model/config/kbvqa_config.py
import os
# Model and Tokenizer Settings
KBVQA_MODEL_NAME_7b = "m7mdal7aj/fine_tuned_llama_2_7b_chat_OKVQA"
KBVQA_MODEL_NAME_13b = "m7mdal7aj/fine_tuned_llama_2_13b_chat_OKVQA"
QUANTIZATION = '4bit' # 8bit can be used as well.
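# Illustrative sketch (an assumption, not part of the original config): one way the
# QUANTIZATION flag above could be mapped to `from_pretrained()` loading kwargs.
# The helper name `quantization_kwargs` is hypothetical; the actual model loader
# in this project may consume the setting differently.
```python
def quantization_kwargs(quantization: str) -> dict:
    """Map the QUANTIZATION string to model-loading keyword arguments."""
    if quantization == '4bit':
        return {'load_in_4bit': True}
    if quantization == '8bit':
        return {'load_in_8bit': True}
    raise ValueError(f"Unsupported quantization setting: {quantization!r}")
```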
MAX_CONTEXT_WINDOW = 4000  # LLaMA-2 supports 4096 tokens; keep a 96-token safety margin.
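# Illustrative sketch (hypothetical helper, not from the original file): how the
# 96-token margin might be enforced before generation, assuming token counts are
# already available from the tokenizer. The default literal mirrors
# MAX_CONTEXT_WINDOW above to keep the sketch self-contained.
```python
def fits_context(prompt_tokens: int, max_new_tokens: int,
                 limit: int = 4000) -> bool:
    """Check that the prompt plus the generation budget stays within the window."""
    return prompt_tokens + max_new_tokens <= limit
```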
ADD_EOS_TOKEN = False  # We do not need the tokenizer to add the default special tokens, because they are already added in the prompt engineering module along with new extra ones.
TRUST_REMOTE = False
USE_FAST = True
LOW_CPU_MEM_USAGE = True
# Access Token
HUGGINGFACE_TOKEN = os.getenv("HUGGINGFACE_TOKEN")
# Designed System Prompt
SYSTEM_PROMPT = "You are a helpful, respectful and honest assistant for visual question answering. You are provided with a caption of an image and a list of objects detected in the image along with their bounding boxes and levels of certainty, and you will output an answer to the given question in no more than one sentence. Use logical reasoning to reach the answer, but do not output your reasoning process unless asked for it. If provided, you will use the [CAP] and [/CAP] tags to indicate the beginning and end of the caption respectively. If provided, you will use the [OBJ] and [/OBJ] tags to indicate the beginning and end of the list of detected objects in the image along with their bounding boxes respectively. If provided, you will use the [QES] and [/QES] tags to indicate the beginning and end of the question respectively."
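# Illustrative sketch (hypothetical helper, not from the original file): assembling
# the tagged user message that SYSTEM_PROMPT describes. The real prompt engineering
# module mentioned above may format the message differently; this only shows the
# [CAP]/[OBJ]/[QES] tag convention.
```python
def build_user_prompt(caption: str = None, objects: str = None,
                      question: str = None) -> str:
    """Wrap each provided component in its delimiting tags and join them."""
    parts = []
    if caption:
        parts.append(f"[CAP]{caption}[/CAP]")
    if objects:
        parts.append(f"[OBJ]{objects}[/OBJ]")
    if question:
        parts.append(f"[QES]{question}[/QES]")
    return " ".join(parts)
```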