import gradio as gr
def add_text(history, text):
    # Append the user's message to the chat history; the bot's reply is filled in later.
    history = history + [(text, None)]
    return history, ""


def add_file(history, file):
    # Append an uploaded file as a (filepath,) tuple so the Chatbot renders it as media.
    history = history + [((file.name,), None)]
    return history


def bot(history):
    # Placeholder response; swap this out for a real model call.
    response = "**That's cool!**"
    history[-1][1] = response
    return history

with gr.Blocks() as demo:
    gr.Markdown("""
    Hey there genius!
    Welcome to our demo! We've trained Meta's Llama on almost 200k data entries in the question/answer format.
    In the future, we are looking to expand our model's capabilities further to assist with a range of IP-related tasks.
    If you are interested in using a more powerful model that we have trained, please get in touch!
    As far as data is concerned, you have nothing to worry about! We don't store any of your inputs to use for further training; we're not OpenAI 👀. We'd just like to know whether this is something people would be interested in using!
    """)
with gr.Tab("Text Drafter"):
gr.Markdown("""
You can use this tool to expand your idea using Claim Language.
Example input: A device to help the visually impaired using proprioception.
Output:
""")
text_input = gr.Textbox()
text_output = gr.Textbox()
text_button = gr.Button("")
        with gr.Row():
            gr.Markdown("""
            You can use this tool to expand your idea using Claim Language.
            Example input: A device to help the visually impaired using proprioception.
            Output:
            """)
            with gr.Column(scale=0.85):
                # This textbox is not wired to an event; the chat input at the bottom
                # of the page (also named `txt`) shadows it.
                txt = gr.Textbox(
                    show_label=False,
                    placeholder="Enter text and press enter, or upload an image",
                ).style(container=False)
with gr.Tab("Description Generator"):
gr.Markdown("""
You can use this tool to turn a claim into a
Example input: A device to help the visually impaired using proprioception.
Output:
""")
with gr.Row(scale=1, min_width=600):
text1 = gr.Textbox(label="Input",
placeholder='Type in your idea here!')
text2 = gr.Textbox(label="Output")
with gr.Tab("Knowledge Graph"):
gr.Markdown("""
Are you more of a visual type? Use this tool to generate graphical representations of your ideas and how their features interlink.
Example input: A device to help the visually impaired using proprioception.
Output:
""")
with gr.Row(scale=1, min_width=600):
text1 = gr.Textbox(label="Input",
placeholder='Type in your idea here!')
text2 = gr.Textbox(label="Output")
with gr.Tab("Prosecution Ideator"):
gr.Markdown("""
Below is our
Example input: A device to help the visually impaired using proprioception.
Output:
""")
with gr.Row(scale=1, min_width=600):
text1 = gr.Textbox(label="Input",
placeholder='Type in your idea here!')
text2 = gr.Textbox(label="Output")
with gr.Tab("Claimed Infill"):
gr.Markdown("""
Below is our
Example input: A device to help the visually impaired using proprioception.
Output:
""")
with gr.Row(scale=1, min_width=600):
text1 = gr.Textbox(label="Input",
placeholder='Type in your idea here!')
text2 = gr.Textbox(label="Output")
    # Chat assistant shared underneath the tool tabs.
    chatbot = gr.Chatbot([], elem_id="Claimed Assistant").style(height=500)
    with gr.Row():
        with gr.Column(scale=0.85):
            txt = gr.Textbox(
                show_label=False,
                placeholder="Enter text and press enter, or upload an image",
            ).style(container=False)
        with gr.Column(scale=0.15, min_width=0):
            btn = gr.UploadButton("📁", file_types=["image", "video", "audio"])

    txt.submit(add_text, [chatbot, txt], [chatbot, txt]).then(
        bot, chatbot, chatbot
    )
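    # Assumed wiring, not present in the original file: hook the upload button to
    # add_file (and then bot), mirroring the txt.submit chain above, so the
    # otherwise-unused add_file helper and the 📁 button actually update the chat.
    btn.upload(add_file, [chatbot, btn], [chatbot]).then(
        bot, chatbot, chatbot
    )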
demo.launch()