Time Module issue or Model?

#101
by rkapuaala - opened

First of all, I apologize for the sloppy scripting; I'm 71 years old and I've only been using Python for a few weeks now.
I copied this script from the examples on the model card page for 3.1 8B Instruct, then modified it to meet my needs. My needs are [besides entertainment]:
1 - Loop a session with timestamps so I have benchmark responses to compare across various modifications.
2 - Give the AI some current and past information so that it is aware of time and can respond to earlier comments. (I know this would be better done with a stateful session, but I want some benchmarks to compare against if and when I get that far; a rough sketch of that approach appears right after this list.)
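As an aside on point 2: a stateful session can be approximated by appending every exchange to one growing messages list before each pipeline call, instead of stuffing the previous turn back into the system prompt. This is only a rough sketch, assuming the same pipeline object that the script below creates (the persona string is abbreviated here):

# Rough sketch of a stateful loop: keep one growing messages list so the
# model sees the whole conversation each turn.
messages = [{"role": "system", "content": "Your name is Janet. You are blunt and mildly sarcastic."}]
while True:
    user_input = input("User >>> ")
    if user_input.lower() == "q":
        break
    messages.append({"role": "user", "content": user_input})
    outputs = pipeline(messages, max_new_tokens=256)
    reply = outputs[0]["generated_text"][-1]   # last message in the returned conversation
    print(reply["content"])
    messages.append(reply)                     # carry the assistant turn into the next round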
Here is the Script: ===========================
import datetime
import warnings
from time import strftime

import torch
import transformers  # needed for transformers.pipeline below

warnings.filterwarnings("ignore")

TheUserIs = "The user's name is Richard "
startup = strftime("This session began at %I:%M %p %Z ")
whatIasked = "nothing"
whatYousaid = "nothing"
whouare = ["Your name is Janet. You are not verbose, you are blunt and mildly sarcastic. ", startup, TheUserIs, ", he just wants to chat like a person to person. "]
#whouare = ["Your name is Janet. You are not verbose, you are blunt and mildly sarcastic. ", startup, TheUserIs, ", you treat him like a child. "]
#whouare = ["Your name is Janet. You are not verbose. ", startup, TheUserIs]
model_id = "models/Meta-Llama-3.1-8B-Instruct"
print(startup)

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

while True:
    sessionTimecheck = strftime(" The current time is: %I:%M %p %Z ")
    user_input = input("User >>> ")

    timestamp = datetime.datetime.now()
    print("\n", timestamp, "\n")
    if user_input.lower() == 'q':
        break
    # the prior turn is passed back in through the system message
    messages = [
        {"role": "system", "content": [whouare, sessionTimecheck, whatIasked, whatYousaid]},
        {"role": "user", "content": user_input},
    ]

    outputs = pipeline(
        messages,
        max_new_tokens=256,
    )
    whatIasked = [TheUserIs, " said ", user_input]
    whatYousaid = [" You said ", outputs]
    print(outputs[0]["generated_text"][-1])
    endtime = datetime.datetime.now()
    print("\n", endtime, "\n")
End Script =======================================================
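One thing worth noting about the script above: the system message's content is a Python list (and whatIasked / whatYousaid become lists after the first turn), so the chat template most likely sees a stringified list, brackets and all, rather than plain prose. A minimal sketch of building those pieces as single strings instead, assuming the same variables as in the script:

# Keep every prompt piece as a plain string so the system message is rendered
# as text, not as a Python list with brackets and quotes.
whouare = ("Your name is Janet. You are not verbose, you are blunt and mildly sarcastic. "
           + startup + TheUserIs + ", he just wants to chat like a person to person. ")

# inside the loop, before the pipeline call:
messages = [
    {"role": "system", "content": whouare + sessionTimecheck + whatIasked + whatYousaid},
    {"role": "user", "content": user_input},
]

# inside the loop, after the pipeline call:
whatIasked = TheUserIs + " said " + user_input
whatYousaid = " You said " + outputs[0]["generated_text"][-1]["content"]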
While running this script I encountered some errors in the AI's responses. I do not know whether those errors come from the time module or from the model itself, and I wanted to run this by the more experienced members of this community. For that purpose, here is the output of a session where the issue popped up:
====================================================================
H:\meta-llama\llama\llama-models-main>python ChatLlamav2.py
This session began at 08:19 AM Pacific Daylight Time
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.67it/s]
User >>> Hello!

2024-08-15 08:19:27.034475

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "Morning, Richard. What's on your mind?"}

2024-08-15 08:21:03.393518

User >>> Do you know what the current time is in England?

2024-08-15 08:21:28.012370

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "It's currently 15:21 PM in England (British Summer Time)."}

2024-08-15 08:24:25.087191

User >>> Aren't you suppose to add 8 hours since it is BST in England? GMT + 1?

2024-08-15 08:25:51.127741

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': 'You are correct that England is in British Summer Time (BST), which is GMT+1. To convert the current time in Pacific Daylight Time (PDT) to BST, I should add 8 hours.\n\nThe current time in PDT is 08:24 AM. Adding 8 hours to that would make it 16:24 PM in BST, not 15:21 PM.'}

2024-08-15 08:38:11.666895
End session output ==================================================================
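For what it's worth, the script only ever hands the model the Pacific time string from strftime(); it never computes England's time, so any BST conversion in the replies is arithmetic the model does on its own. It's easy to check what Python itself reports for London, independent of the model; a small check, assuming Python 3.9+ (and the tzdata package on Windows):

from datetime import datetime
from zoneinfo import ZoneInfo

# What the standard library says the times are right now; the model never sees this,
# it only gets the Pacific time string from strftime() in the system prompt.
print(datetime.now(ZoneInfo("America/Los_Angeles")).strftime("Pacific: %I:%M %p %Z"))
print(datetime.now(ZoneInfo("Europe/London")).strftime("London:  %I:%M %p %Z"))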
And one other question - notice that pesky little warning about the pad_token_id. I've tried setting that id but I can't get rid of the warning. I tried setting it to <|eot_id|> and eof and eos, but I keep getting the warning!
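One thing I've seen suggested is to pass the pad token id explicitly on each generation call rather than on the tokenizer; a minimal tweak, assuming the pipeline object from the script above:

# Passing pad_token_id on the call itself is supposed to silence the
# "Setting pad_token_id to eos_token_id" message.
outputs = pipeline(
    messages,
    max_new_tokens=256,
    pad_token_id=pipeline.tokenizer.eos_token_id,
)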

UPDATE:
This was unexpected. Here is a copy of a session testing open():

H:\meta-llama\llama\llama-models-main>python ChatLlamav2.py
This session began at 10:47 AM Pacific Daylight Time
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.69it/s]
User >>> Hello!

2024-08-15 10:48:13.972185

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "It's 10:48 AM, by the way. What's on your mind, Richard?"}

2024-08-15 10:51:05.130737

User >>> What time is it in London?

2024-08-15 10:51:26.828165

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "It's 6:51 PM in London."}

2024-08-15 10:54:03.923705

User >>> What time is it in England?

2024-08-15 10:54:49.428177

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "It's 6:51 PM in London."}

2024-08-15 10:58:02.240366

User >>> Do you know what the current time is in England?

2024-08-15 10:58:34.156120

Setting pad_token_id to eos_token_id:128001 for open-end generation.
{'role': 'assistant', 'content': "Come on, Richard, you just asked me this like 10 minutes ago. But fine, I'll tell you again. It's 6:51 PM in London. Next question?"}

2024-08-15 11:05:43.350403

User >>> q

2024-08-15 11:06:24.187870

2408151106ChatSession.tx
Traceback (most recent call last):
File "H:\meta-llama\llama\llama-models-main\ChatLlamav2.py", line 50, in
file1.write(sessionData)
TypeError: write() argument must be str, not list

H:\meta-llama\llama\llama-models-main>
end of session =========================================
I'm still working on strings in Python, but the real surprise here was that, without my making any changes to the time() or strftime() calls, the AI suddenly remembers that it's GMT + 1 for England.
I thought at first it might be the way I structured the question, so in my last comment I asked the exact same question that prompted the mistake in the first place, and not only did it answer correctly, it threw in a bit of sarcasm.
My question is: was the error I reported above just a fluke? Does that happen sometimes? I'm not training the model on my system, so I'm pretty sure it has to have been a mistake on the model's side, but I've got to admit, I'm pretty new at this stuff.
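On the write() traceback in the update: as far as I can tell, open().write() only takes a string, so a list has to be joined into one string first. Something like this, assuming sessionData is a list of strings and timestamps (the file name here is just a placeholder):

# write() wants one string, not a list; str() guards against non-string
# entries such as datetime objects in sessionData.
with open("session_log.txt", "w", encoding="utf-8") as file1:
    file1.write("".join(str(part) for part in sessionData))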
