Why do I get charged for errors!?
All I get for my requests are errors, and my OpenAI account still gets charged!!
@kehsani You will be charged for the number of tokens used in the demo, irrespective of its success or failure. Occasionally, an error may occur in one of the stages; however, by that point, you would have already utilized some tokens (for interpreting your input, selecting the appropriate model, and so on). Hope this helps.
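If you want a rough idea of how many tokens (and therefore how much money) a request will consume before you submit it, here is a minimal sketch using tiktoken. The model name is an assumption; swap in whatever model your Space is actually configured with.

```python
import tiktoken

# Rough, client-side token count for a prompt before sending it.
# The model name here is an assumption; use the one your Space is configured with.
MODEL = "gpt-3.5-turbo"

def count_tokens(text: str, model: str = MODEL) -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

prompt = "Describe the objects in this image and then draw a similar one."
n = count_tokens(prompt)
print(f"~{n} prompt tokens (completion tokens are billed on top of this)")
```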
@taesiri I am using an OpenAI key for GPT-3 while I wait for GPT-4 access; can that cause the error? Thanks
@microsoft Are you considering changing the default model to gpt-3.5-turbo? That would reduce costs tenfold without the hassle of duplicating the Space.
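The tenfold figure comes from the per-1K-token prices at the time of this thread (assuming the default is a davinci-class completion model; check OpenAI's pricing page for current rates). A quick back-of-the-envelope comparison:

```python
# Back-of-the-envelope cost comparison behind the "10x cheaper" point.
# Rates are assumed per-1K-token prices from the time of this thread;
# check OpenAI's pricing page for current numbers.
PRICE_PER_1K = {
    "text-davinci-003": 0.020,  # USD per 1K tokens (assumed rate)
    "gpt-3.5-turbo": 0.002,     # USD per 1K tokens (assumed rate)
}

def cost(model: str, tokens: int) -> float:
    return PRICE_PER_1K[model] * tokens / 1000

tokens_per_request = 3000  # hypothetical: prompt + planning + final answer
for model in PRICE_PER_1K:
    print(f"{model}: ${cost(model, tokens_per_request):.4f} per request")
```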
@taesiri When I try to duplicate HuggingGPT, it says my hardware will be downgraded to a free CPU and this might break the Space. Is this what you were referring to, or is there a way to stay on a GPU and pay a bit for it? I went ahead and duplicated anyway, but got a runtime error saying the 16Gi limit was reached. There seem to be a few of these around.
@kehsani The role of the LLM (GPT-3.5/4) in this project is to parse the natural-language input and use the available models to answer the query and produce an output. Typical tasks such as object detection or image captioning can be performed on CPU-only Spaces (free or paid); however, Text2Image models require powerful GPUs. You can enable or disable the available models here, depending on your use case. If you disable a few models, it might run under 16 GB of RAM.
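As a rough sketch of what "disabling a few models" could look like, assuming the Space keeps its task/model list in a YAML config (the file name and keys below are assumptions; check the Space's Files tab for the real ones):

```python
import yaml

# Hypothetical config path and keys; adapt to the Space's actual layout.
CONFIG_PATH = "config.yaml"
GPU_HEAVY_TASKS = {"text-to-image", "image-to-image", "text-to-video"}

with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f)

# Drop the GPU-heavy entries so the remaining models fit in a 16 GB CPU Space.
config["enabled_tasks"] = [
    task for task in config.get("enabled_tasks", []) if task not in GPU_HEAVY_TASKS
]

with open(CONFIG_PATH, "w") as f:
    yaml.safe_dump(config, f)
```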
@taesiri Thanks for the feedback. I would be happy to run it on a faster machine and pay, perhaps not an A10G, but duplicating a Space does not give me an option for which GPU to run on. Also, now that the duplication has failed, if I try to duplicate again it says I have already duplicated this Space. I do not see anywhere I can delete that Space, nor is it listed; there is nothing under my Spaces, only the option to create a new one!?
@taesiri A couple of questions. To disable a model, do I go to the link you provided and simply comment out some of the libraries? And if a duplication fails due to the size limit, I guess restarting the duplication will not help, since the imported libraries will always exceed the 16Gi CPU limit!? I am trying a factory reboot, but I get this error: "RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx". I do have a GPU and I do use it, so I am not sure this is really the problem. The Space is supposed to run on CPU anyway!?
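For context, that RuntimeError is raised when code tries to initialize CUDA on a machine with no NVIDIA driver, which is exactly what a CPU-only Space is; the GPU on your local machine does not matter, because the code runs on Hugging Face's hardware. A minimal sketch of the usual guard, assuming PyTorch is what is throwing the error here:

```python
import torch

# On a CPU-only Space there is no NVIDIA driver, so check before using CUDA.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical example of loading a model onto whichever device is available:
# model = SomeModel.from_pretrained("some/checkpoint").to(device)
print(f"Running on: {device}")
```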
@kehsani Here you are https://huggingface.co/spaces/taesiri/HuggingGPT-Lite
Hi, I just used the link, but the model keeps on returning {"error": {"message": "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?", "type": "invalid_request_error", "param": "model", "code": None}}. Not sure what's going on.
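That error normally means a completion-only (non-chat) model name is being sent to the chat endpoint. A minimal sketch of the two calls with the pre-1.0 openai Python library (the model names are examples, not necessarily what the Space uses):

```python
import openai

openai.api_key = "sk-..."  # your key

# Chat models (gpt-3.5-turbo, gpt-4, ...) go through v1/chat/completions:
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)

# Completion-only models (e.g. text-davinci-003) go through v1/completions;
# sending one of them to ChatCompletion.create raises the error quoted above.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Hello",
)
```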
I have tested this, but I am unable to replicate the problem you are experiencing (both on Spaces and on my local machine). Could you please provide more details so that we can assist you better?
Hi, all I did yesterday was duplicate HuggingGPT-Lite to my space and run it; nothing was changed. I tried on both Mac and Windows. I've made the Space I duplicated public if you want to check it.
Here's the result I got when running the example.
@Vito99 This is strange. Are you using the gpt-3.5-turbo model? Additionally, were you able to access this model on OpenAI's playground?
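One quick way to check which models your key can actually access is to list them from Python (pre-1.0 openai library assumed):

```python
import openai

openai.api_key = "sk-..."  # your key

# List the models this key can access and check for the chat model.
models = [m["id"] for m in openai.Model.list()["data"]]
print("gpt-3.5-turbo available:", "gpt-3.5-turbo" in models)
```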
@Vito99 You are receiving the same error message here, which is interesting. To fix this, you should reach out to OpenAI or, alternatively, ask about this error on https://community.openai.com/.
Sure! Thanks a LOT!!!!!