💡 Future features

#176
by victor (HF staff, Hugging Chat org) • opened • edited Jun 12, 2023

Ask about and discuss future (big) features here 🔥

Planned features:

  • Web search ✅
  • Customize parameters
  • Add more models
  • Conversation trees (maybe)
  • Mobile app? (maybe)
victor pinned discussion
Hugging Chat org

A preview of the upcoming web search:


Possibility to add plugins? This is already available in Open Assistant, but only for generic, no-authentication plugins like https://www.klarna.com/.well-known/ai-plugin.json

Editing past prompts, similar to ChatGPT. For instance, I ask a question, and it gives a response. Then I ask another question, and it gives a response. I can then go back and re-word my second question, in an attempt to get the AI to respond the way I wanted.

A continue feature, so that if the AI is cut off in the middle of a response, you can continue it. For instance, I ask it to write some code. The AI writes it, but gets cut off, and I am unable to get the AI to continue the code. It also seems that the AI has very little ability to build on past questions and responses, so asking it "Please continue" ends up with the AI changing topic; in my case, from TensorFlow code to a Flask webpage.

Being able to edit prompts and reduce the max length would be useful.

The main issue I am seeing with this model is that it often gives very verbose answers to simple questions, and the more its memory fills with its own word vomit, the more unhinged it gets.

Ability for the user to change the generation parameters?

Hugging Chat org

> Possibility to add plugins? This is already available in Open Assistant, but only for generic, no-authentication plugins like https://www.klarna.com/.well-known/ai-plugin.json

From my tests, plugins don't work very well with current open models, so I wouldn't rush into it. But I am open to your feedback on plugins.

> Being able to edit prompts and reduce the max length would be useful.

> Ability for the user to change the generation parameters?

Yes, I think it makes sense if it's not directly visible - maybe in the settings menu.

Ability to summarise a webpage and provide its key points from a URL provided by the user.

Ability to connect to the user's own vector store of documents and provide answers?

Might be overkill or irrelevant, but just saying.

  1. Be able to edit a previous prompt.
  2. Be able to customize the model parameters: have presets (creative, standard, precise) and also a custom one so you can put in whatever you want (see the sketch after this post).

Until you have used a UI that has these features, you might not realize how great they are; they become kind of essential (example: https://open-assistant.io/chat).
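
For illustration only, a preset system like the one suggested above could be a small mapping from preset names to sampling values, plus a custom override. The names and numbers below are assumptions for this sketch, not HuggingChat's actual defaults.

```typescript
// Hypothetical preset table; the parameter values are illustrative,
// not HuggingChat's real defaults.
type GenerationParams = {
  temperature: number;
  top_p: number;
  repetition_penalty: number;
};

const presets: Record<"creative" | "standard" | "precise", GenerationParams> = {
  creative: { temperature: 1.0, top_p: 0.95, repetition_penalty: 1.0 },
  standard: { temperature: 0.7, top_p: 0.9, repetition_penalty: 1.1 },
  precise: { temperature: 0.2, top_p: 0.8, repetition_penalty: 1.2 },
};

// A "custom" preset would simply let the user override any field.
function resolveParams(
  preset: keyof typeof presets,
  custom?: Partial<GenerationParams>
): GenerationParams {
  return { ...presets[preset], ...custom };
}
```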

> A preview of the upcoming web search:


I think web search is the most important point: a new UI with a web search activation button so the user can decide whether to scrape the web or not. Some plugin extensions already allow this with ChatGPT. Once web search is done, a simple way to connect an API to the interface would also be useful.

Hugging Chat org • edited May 30, 2023

> I think web search is the most important point: a new UI with a web search activation button so the user can decide whether to scrape the web or not. Some plugin extensions already allow this with ChatGPT. Once web search is done, a simple way to connect an API to the interface would also be useful.

Great to hear, we are releasing it this week! https://github.com/huggingface/chat-ui/pull/237 cc @nsarrazin

> I think web search is the most important point: a new UI with a web search activation button so the user can decide whether to scrape the web or not. Some plugin extensions already allow this with ChatGPT. Once web search is done, a simple way to connect an API to the interface would also be useful.

> Great to hear, we are releasing it this week! https://github.com/huggingface/chat-ui/pull/237 cc @nsarrazin

Great to hear. We are about to build something based on HuggingChat, so the fact that you already integrate web search will honestly save weeks of work 😅. If it's out this week, we'd better simply wait.

It can't give any complete answers; I've never received a complete answer yet! 5/10 marks.

I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

I am wondering if there's a simple Figma file available for chat-ui. Having it released that way, and not only on GitHub, would really help development.

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

Great! Do you decouple the logic from the UI? I would like to see additional, chat-ui-independent frontends (like avatars) that utilize it in the future.

Hugging Chat org

> I am wondering if there's a simple Figma file available for chat-ui. Having it released that way, and not only on GitHub, would really help development.

Hi @willyninja30, I made a Figma export for you (https://www.figma.com/file/nDCvAdyWUUyKgY8HOc7sJk/chat-ui?type=design&node-id=0%3A1&t=h4GkYwoab0e1L5H7-1). Hope it helps.

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

> Great! Do you decouple the logic from the UI? I would like to see additional, chat-ui-independent frontends (like avatars) that utilize it in the future.

At this stage, not yet, but that's an idea.

> I am wondering if there's a simple Figma file available for chat-ui. Having it released that way, and not only on GitHub, would really help development.

> Hi @willyninja30, I made a Figma export for you (https://www.figma.com/file/nDCvAdyWUUyKgY8HOc7sJk/chat-ui?type=design&node-id=0%3A1&t=h4GkYwoab0e1L5H7-1). Hope it helps.

You rock, Victor 🙏🙏🙏🔥🔥😭

Hugging Chat org

The Figma file is awesome, thanks for sharing! 🔥

> The Figma file is awesome, thanks for sharing! 🔥

It's really useful. That's why it's always important to think about no-code options in the open-source community. Tech people often forget that most human beings don't know how to code.

> I think web search is the most important point: a new UI with a web search activation button so the user can decide whether to scrape the web or not. Some plugin extensions already allow this with ChatGPT. Once web search is done, a simple way to connect an API to the interface would also be useful.

> Great to hear, we are releasing it this week! https://github.com/huggingface/chat-ui/pull/237 cc @nsarrazin

Hi Victor, I just noticed on HuggingChat that it's finally available. Amazing work. We are taking it as the starting point for our AI solution's UI. We will share it next week; are you open to trying it with the features we added? 😄

When it comes to my own HuggingChat experience and desirable future features:

I'm honestly excited about the spectrum of diverse answers given to more or less the same queries that don't require correct or exclusive information in return. The model appears to be able to set, or otherwise understand, the conversation mood with varying degrees of precision. It's entertaining not only to generate answers anew, but also to ask the same questions in another chat session, i.e. in the middle of an entirely different conversation. Of course, the model adapts and learns from user interaction, only adding to the thrill of discovery.

Having mentioned the mood and tone, I've been curious enough to repeatedly ask the model to explain to me how it "remembers" our conversation. I wanted to know how good, or long, its memory was for the purposes of a single session; I've received answers ranging from concerningly vague to shockingly technical, so I was only able to grasp that this memory isn't really too big. I've also approached this from another angle, asking the model to try to remember the first thing addressed, exactly as I typed it at the very beginning of the session. The threshold is indeed very low, and the answers received were completely bogus. :)

Finding this out compelled me to think about ways to remedy what appears to be "memory loss". I thought about simple tasks people might need help with, some of which suit AI chat best. To keep the model from steering the conversation astray, it would be a good idea to have a set of information describing the conversation that sits atop (and separate from) the emptying stack that refreshes with the more recent conversation. For example, if I needed the AI chat to help me generate questions for some sort of movie trivia, I'd want a handful of concise rules, paired with things to consider and omit, applied before the AI chat receives the query for another question generation. When these instructions are buried underneath the "conversation pile", the quality of answers sharply drops.

I only hope the available resources aren't too modest for something like what I've described to be put to the test.
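
As a rough sketch of the idea above (assuming a simple message list and a character budget; all names here are made up for illustration), pinned instructions could always be prepended while only the rolling history gets truncated:

```typescript
// Illustrative sketch only: pinned instructions always stay at the top of the
// prompt, while the rolling history is truncated from the oldest side so the
// rules never get buried under the "conversation pile".
type Message = { role: "user" | "assistant"; content: string };

function buildPrompt(
  pinnedInstructions: string,
  history: Message[],
  budgetChars: number
): string {
  let used = pinnedInstructions.length;
  const kept: Message[] = [];
  // Walk the history backwards so the most recent turns survive truncation.
  for (let i = history.length - 1; i >= 0; i--) {
    if (used + history[i].content.length > budgetChars) break;
    used += history[i].content.length;
    kept.unshift(history[i]);
  }
  const turns = kept.map((m) => `${m.role}: ${m.content}`).join("\n");
  return `${pinnedInstructions}\n\n${turns}`;
}
```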

Hello, great job building the web search module. Just a few things I noticed while using it over the past hours.
1. It does connect to the web perfectly.
2. It tends to take only the first result and doesn't contextualize the data enough; it tries to mix it with the model's own knowledge and ends up degrading the final output. Maybe it should take the first 3 results and summarize them (see the sketch after this post).
3. It takes time. Maybe that's OK, but making it faster would be good; it's not critical at this stage.
4. Varied output from the SERP API: since the SERP API returns not only text results but also videos and maps, it would be cool to let the end user prompt, for example, "give me the best yoga video tutorials" and get a reply with shortcuts and/or small previews of maybe 3 YouTube videos. The best real-world example of this is Perplexity AI; you can check it with a request.
5. Maps could work the same way: "what is the best itinerary from location x to y" could be answered using a Google Maps query, and the same goes for air tickets with Google Flights.

  • Also, the web search could really be improved by going deeper than just the first links.

Just a few suggestions and recommendations from a fan. Great job again, I know you have already done a lot.
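
To illustrate point 2, a sketch of taking the top three results and condensing them before they reach the model might look like the following. `searchWeb` and `summarize` are hypothetical helpers supplied by the caller (e.g. a SERP API client and a summarization call to the model), not existing chat-ui functions.

```typescript
// Rough sketch, not chat-ui's actual pipeline.
type SearchResult = { title: string; url: string };

async function buildSearchContext(
  query: string,
  searchWeb: (q: string) => Promise<SearchResult[]>,
  summarize: (text: string) => Promise<string>
): Promise<string> {
  // Use the first three results instead of only the first one.
  const topThree = (await searchWeb(query)).slice(0, 3);
  const pages = await Promise.all(
    topThree.map((r) => fetch(r.url).then((res) => res.text()))
  );
  // Condense each page so the combined context fits the model's window.
  const summaries = await Promise.all(pages.map((text) => summarize(text)));
  return summaries.join("\n\n");
}
```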

Being able to give a description of what the model did well (or not) when giving a thumbs up or down might help speed up the learning process.

I'd like to get the sudden mid-text name changes out of the model's artificial brain without having to send another prompt and risk confusing the AI.

Also, it seems that on mobile, pressing Enter sends the message instead of inserting a line break. Line breaks would be nice for formatting.
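
A minimal sketch of the usual textarea convention (Enter sends on desktop; Shift+Enter, or plain Enter on mobile, inserts a line break). The user-agent check is a rough assumption for illustration, not how chat-ui actually decides.

```typescript
// Minimal sketch of the Enter/Shift+Enter convention described above.
function handleKeydown(event: KeyboardEvent, send: () => void): void {
  const isMobile = /Mobi|Android/i.test(navigator.userAgent); // rough heuristic
  if (event.key === "Enter" && !event.shiftKey && !isMobile) {
    event.preventDefault(); // keep the newline out of the textarea
    send();
  }
  // Otherwise fall through and let the default behaviour insert a line break.
}
```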

> I think web search is the most important point: a new UI with a web search activation button so the user can decide whether to scrape the web or not. Some plugin extensions already allow this with ChatGPT. Once web search is done, a simple way to connect an API to the interface would also be useful.

> Great to hear, we are releasing it this week! https://github.com/huggingface/chat-ui/pull/237 cc @nsarrazin

Hi Victor, hope you are safe. When we build based on the HuggingChat UI, everything is fine except that the 'character by character' text generation just stops working once we build and deploy on a web server. We don't get why.

Hugging Chat org

> Hi Victor, hope you are safe. When we build based on the HuggingChat UI, everything is fine except that the 'character by character' text generation just stops working once we build and deploy on a web server. We don't get why.

Hi Willy, hope you are well. Sorry about this problem; can you give us more details? cc @nsarrazin

@victor Hi Victor, thanks for the feedback. (It's actually our beta phase, and some buttons/features are not active or visible yet.) We tried to deploy using Netlify, https://polite-semifreddo-d532a0.netlify.app/ (you can check): you'll notice that the reply comes out, but there's no character-by-character streaming, so the end user might feel like there's a technical issue while waiting for the answer to pop up. We used the same code on Vercel, https://ares-chat-v2.vercel.app/, and there character-by-character works; we don't get why.

> Hi Victor, hope you are safe. When we build based on the HuggingChat UI, everything is fine except that the 'character by character' text generation just stops working once we build and deploy on a web server. We don't get why.

> Hi Willy, hope you are well. Sorry about this problem; can you give us more details? cc @nsarrazin

Side note: we added Guanaco and StarChat Beta 😄. We are working on optimizing the web search with cited sources.

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

Feature added 😁. Just be patient for the release. You can have a look at the UI, with the 'file' button ready, at https://openareslab.com

Hugging Chat org

> Feature added 😁. Just be patient for the release. You can have a look at the UI, with the 'file' button ready, at https://openareslab.com

Awesome addition 🔥🔥


> Ask about and discuss future (big) features here 🔥
>
> Planned features:
>
> • Web search ✅
> • Customize parameters
> • Add more models
> • Conversation trees (maybe)
> • Mobile app? (maybe)

Should you expose a developer API, I would like to create a Flutter app around it.

Can we please add a copy/save response feature? This would be a nice addition.
Is there any possibility to create/export results in CSV/Excel format directly?

Adding more models would be nice.

I would like a way to edit prompts, and I would love it if you could add the OpenAssistant version of LLaMA-2 when it comes out.

Hugging Chat org

> I would like a way to edit prompts, and I would love it if you could add the OpenAssistant version of LLaMA-2 when it comes out.

@nsarrazin is currently working on it!

Ability to remove parts of the conversation (your own messages or AI responses), even from the middle of the thread.

It would free up the context and let you make the AI forget parts that you no longer want it to reference.

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

May I ask what status this feature is in?

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

> May I ask what status this feature is in?

It's live now. Check it out at https://openares.net

Just click the 'file' button and upload your PDF.

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

> May I ask what status this feature is in?

> It's live now. Check it out at https://openares.net

Thanks for the quick response. I just tested it with a simple PDF file. Unfortunately, it seems that it cannot answer any questions. I asked a simple question but always get something like "I apologize, but I cannot answer your question as there is no relevant information provided on pages 4 and 5."
Furthermore, it uses a separate UI for asking questions.
Personally, I think the killer feature for companies and organizations would be the possibility to talk to all of their documents (in their intranet / private cloud) directly from the HuggingChat UI, rather than uploading single files to the internet.

Thank you for the feedback. Please keep in mind that it's a beta :). On a side note, it does work; I think it's your file that's not in the right format. And based on the reply, the LLM simply had issues extracting the specific answer to your question. That happens with all LLMs. Feel free to try another PDF or another question. Regarding the separate UI, you are right, but some users prefer to have it separate. We will unify both interfaces later on :)

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

> May I ask what status this feature is in?

> It's live now. Check it out at https://openares.net

> Thanks for the quick response. I just tested it with a simple PDF file. Unfortunately, it seems that it cannot answer any questions. I asked a simple question but always get something like "I apologize, but I cannot answer your question as there is no relevant information provided on pages 4 and 5."
>
> Furthermore, it uses a separate UI for asking questions.
>
> Personally, I think the killer feature for companies and organizations would be the possibility to talk to all of their documents (in their intranet / private cloud) directly from the HuggingChat UI, rather than uploading single files to the internet.

Replied

> I would appreciate a feature that enables us to ask questions about our own documents, like we have seen with PrivateGPT. Is someone working on such a feature yet?

> That's something my team is actually doing. We are just waiting for the release of the new HuggingChat UI to connect the feature to it and release our own version. I think the community needs it a lot.

> May I ask what status this feature is in?

> It's live now. Check it out at https://openares.net

> Thanks for the quick response. I just tested it with a simple PDF file. Unfortunately, it seems that it cannot answer any questions. I asked a simple question but always get something like "I apologize, but I cannot answer your question as there is no relevant information provided on pages 4 and 5."
>
> Furthermore, it uses a separate UI for asking questions.
>
> Personally, I think the killer feature for companies and organizations would be the possibility to talk to all of their documents (in their intranet / private cloud) directly from the HuggingChat UI, rather than uploading single files to the internet.

> Replied

Well, I tested it with the EU State of the Union Letter of Intent, but it did not work. It's an official document, no strange format. You can find it here: https://state-of-the-union.ec.europa.eu/document/download/1f472511-5019-4811-9189-a4611528782c_en?filename=SOTEU_2023_Letter_of_Intent_EN_0.pdf
I also tried other documents but got no meaningful results.

For the use case I described (intranet / private cloud), the data must not be uploaded to your remote API for privacy reasons. Your solution is not open source, is it?

From a functional perspective, we would need to be able to include (and exclude) directories and files (not just PDFs) and to embed and vectorize the content identified in this way, potentially in a private cloud (see the sketch after this post). It's definitely a long way to go, but a lot of it is already available as open source.

Therefore, I'm wondering if we shouldn't pursue a fully open-source solution in this regard, potentially initiated by Hugging Face.
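
As a sketch of the include-and-vectorize idea (not an existing implementation): `embed` stands in for whatever embedding service would run in the private cloud, and the "vector store" is just an in-memory array; a real setup would persist vectors and handle PDF/Office text extraction.

```typescript
// Sketch under assumptions: embed() is a hypothetical embedding call,
// and the vector store is a plain in-memory array.
import { readdir, readFile } from "node:fs/promises";
import { join, extname } from "node:path";

type Chunk = { file: string; text: string; vector: number[] };

async function indexDirectory(
  dir: string,
  embed: (text: string) => Promise<number[]>,
  include = [".txt", ".md"], // extend once PDF/Office text extraction is wired in
  store: Chunk[] = []
): Promise<Chunk[]> {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) {
      await indexDirectory(path, embed, include, store); // recurse into subfolders
    } else if (include.includes(extname(entry.name))) {
      const text = await readFile(path, "utf8");
      // Naive fixed-size chunking; a real pipeline would split on structure.
      for (let i = 0; i < text.length; i += 1000) {
        const chunk = text.slice(i, i + 1000);
        store.push({ file: path, text: chunk, vector: await embed(chunk) });
      }
    }
  }
  return store;
}
```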

I've created a seamless HuggingChat integration that allows you to talk with your documents and that covers my stated requirement / use case: no data leaves the private cloud. Feel free to get an overview of the solution in my new YouTube video: https://youtu.be/n63SDeQzwHc
@willyninja30 @julien-c @coyotte508 @victor


A way to edit past prompts and parameters. Right now, using it is quite tedious and requires creating multiple conversations for the same topic. The regenerate button is nearly useless for Falcon 180B, as it repeatedly spits out the same output with no variation.

Will we ever see a feature roadmap? That would significantly help me decide what to implement myself and plan my own roadmap.

If not, it would be nice to at least know that no such roadmap exists.

Things that I would love to see:

  • Voice input
  • File upload to help with chat. This seems to be the most frequently requested feature, and many people above showed interest; what is the latest plan for it?

Hugging Chat org

Hi, here are the big items on the roadmap:

  • Image upload support (already shipped). Chat-ui now supports vision models!
  • Assistants: share a custom prompt with a custom model directly from HuggingChat. This is our focus right now.
  • RAG support on text files (starting with PDF) cc @mishig
  • Agents & tools: continue to explore what's possible with function-calling agents (that can call other models from HuggingChat).
  • Better UX on mobile (more to come soon).

Features I would love to see!

  • Multiple drafts for a prompt, so we can pick the best one and write feedback. This would be a really useful and easy way to collect real-world RLHF data, and if every single vendor is doing it, it must be important data to collect.
  • Maybe even let us give pointers on why an answer was wrong, and collect that data too.
  • Exporting in Markdown would also be really useful!
  • A 'Fact Check' button


Would it be possible to add SillyTavern support, or something else to add support for characters?

Hugging Chat org

> Would it be possible to add SillyTavern support, or something else to add support for characters?

Coming soon!

Please add sampling parameters, thank you!

Holy! The Assistants feature just dropped! Thanks, guys.

Let us use our own models as assistants, and then you'll really have something!

Being able to edit prompts and see conversation trees sounds amazing.

Hugging Chat org • edited Feb 13

> Being able to edit prompts and see conversation trees sounds amazing.

Should be the next feature we release!

Awesome! I found the pull request here: feature/conv_branching!
And please don't forget the delete feature (prompt and response from trees); ref: discussion #358.

Hello Victor! Are you planning a RAG implementation soon? I would love that! Anyway, keep going, I love HuggingChat :)

@willyninja30 hello! I've tried the sites and unfortunately, most of the features weren't working (except OA and image generation), but even the image generation isn't working now. Could you please tell us a little bit about the image generator? The images were kinda cool.


Perhaps we could add the possibility to search across all chat histories? The left side panel gets overcrowded pretty fast with heavy usage, and it becomes very difficult to find previous chats.

A TTS mode would be appreciated.

> Should be the next feature we release!

I would love to see per-message or per-chat sampler settings. Especially with repetition-heavy models like LLaMA 3 and Command R Plus (on longer context windows), the lack of a repetition penalty and an often too-low temperature can cause issues. Per-message is preferable because then, if something is a bit off, retrying the message with different sampler params can help get things back on track.

I know it's available for Assistants right now, but I'd really love to see it available in general, per-message, and easily editable. Different models have very different tolerances, so I'd love to see this to make HuggingChat really useful on longer contexts.
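
For illustration, per-message overrides could simply be merged into the parameters of a request to a text-generation-inference-style `/generate` endpoint. The endpoint URL and the default values below are assumptions for this sketch, not HuggingChat's actual configuration.

```typescript
// Illustrative only: merging per-message sampler overrides into a
// text-generation-inference style /generate request.
type SamplerOverrides = {
  temperature?: number;
  repetition_penalty?: number;
  top_p?: number;
};

async function generate(prompt: string, overrides: SamplerOverrides = {}): Promise<string> {
  const res = await fetch("http://localhost:8080/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      inputs: prompt,
      parameters: {
        max_new_tokens: 512,
        temperature: 0.7,
        repetition_penalty: 1.1,
        ...overrides, // per-message values win over the defaults
      },
    }),
  });
  const json = await res.json();
  return json.generated_text;
}
```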

Web search needs to be more detailed and in-depth: it should analyze more reference information (up to 50 links) across several search engines (Yandex, Google, DuckDuckGo, Bing). For example, I wanted to get people's feedback on the OnePlus 11 phone and was not satisfied with the LLM's answers.

@snombler were you saying, in reply to my own suggestion (the TTS mode), that it should be the next feature released?

I'm saying it should come next after TTS (or at the same time, since they already have support for custom temperature/repetition penalty/etc. in the frontend, just not per-message).

Thanks for adding the delete feature in chats! 🎉🎉🎉

Chats used to get so messy with no option to clear them out! The chat/branch delete feature is a game changer. Truly appreciated! It even cleans up the subsequent branches following the deleted prompt/response.
But there's one thing I have noticed (please don't hate me 🙏 🥺): we can't delete a prompt/response if there are no other branches. Allowing that would serve as a clean step back and be a bit more efficient.
An improvement could be to relocate the delete button near the Edit prompt and retry/+-1/copy buttons (it can be spaced out and colored red, given the potential misclick tragedy).
Otherwise, it really is fine! Kudos to the devs!
[Low Priority]

  1. Deleting a branch seems to refresh the entire chat-ui, thus resetting the chat back to branch 1 (as a page reload does).
     Possible solution (if feasible): add route/chat-order memory to the user/assistant prompt-response rotation. E.g., just like the system prompt stored with each chat, the order of the last selected route would be stored as well; for example, the sequence p1.r2.p1.r1.p5.r3 (see the sketch after this list). Since there should already be a numbering system for re-ordering after branch deletion, this could be merged with that.
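
A minimal sketch of the route-memory idea from point 1, with made-up names; a real fix would persist this next to the conversation (e.g. alongside the stored system prompt) rather than in memory.

```typescript
// Hypothetical sketch of remembering the last selected branch per conversation.
type BranchPath = number[]; // e.g. [1, 2, 1, 1, 5, 3] for the p1.r2.p1.r1.p5.r3 sequence

const lastSelectedPath = new Map<string, BranchPath>();

function rememberPath(conversationId: string, path: BranchPath): void {
  lastSelectedPath.set(conversationId, path);
}

function restorePath(conversationId: string): BranchPath {
  // Fall back to the first branch at every level when nothing was stored,
  // which is what the UI effectively does today after a refresh.
  return lastSelectedPath.get(conversationId) ?? [];
}
```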

I would like to have a feature to upload knowledge files for a custom assistant. That would actually be insane!

When I reopen a chat, the old answers are displayed; the OpenAI chat, however, shows the latest answers. Can this be toggled in the HuggingChat settings?

Having control over model parameters would be super helpful. Sometimes I need exact, repetitive responses (like Instruct format), while other times I need creative ideas but get stuck with similar results due to pre-defined settings. Long conversations can also get repetitive.

It would be interesting if the AI could analyze applications and view images, and if we could also delete messages in the chat.

To do:
Add the ability to edit the AI's response.

https://aistudio.google.com/ did it, why not us?


One thing I'd like to see is the AI being able to see the system time, or a multiple of it, so you can simulate a faster or slower time frame.

And maybe a tool so that it could store and retrieve information it might need later in the chat, with the ability to update it on each reply.

Also, when your reply fails, it'd be nice if the chat entry field weren't cleared, so you could just try again. It got to the point where I'd copy what I was going to send, if it was long enough that it would be a pain to retype, and then just paste it in again if it failed. A retry or an undo, maybe.

Hugging Chat org

> One thing I'd like to see is the AI being able to see the system time, or a multiple of it, so you can simulate a faster or slower time frame.

By the way, with a dynamic prompt you could inject the current time (in the timezone you want) at inference time.

I tried to inject the time in LM Studio using Jinja but couldn't get it to work; I figured I'd probably have to make a tool for it. After looking, it doesn't seem like it would be that difficult on Hugging Face. It's built into KoboldAI Lite, though.
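
As a sketch of the dynamic-prompt suggestion above: replace a placeholder in the system prompt right before each inference call. The `{{current_time}}` placeholder, the `buildPrompt` helper, and the timezone are assumptions for illustration, not part of chat-ui or LM Studio.

```typescript
// Sketch only: substitute the current time into the system prompt at query time.
function buildPrompt(preprompt: string, userMessage: string): string {
  const now = new Date().toLocaleString("en-US", { timeZone: "Europe/Paris" });
  const system = preprompt.replace("{{current_time}}", now);
  return `${system}\n\nUser: ${userMessage}\nAssistant:`;
}
```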

> To do:
> Add the ability to edit the AI's response.
>
> https://aistudio.google.com/ did it, why not us?

Any news on this?

Hugging Chat org • edited 24 days ago

> Add the ability to edit the AI's response.

This is possible in the playground: https://hf.co/playground (not sure we'll add it to HuggingChat)

> Add the ability to edit the AI's response.

> This is possible in the playground: https://hf.co/playground (not sure we'll add it to HuggingChat)

I hope so.
