import guidance
import gradio as gr
import pandas as pd
from dotenv import load_dotenv
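# Read a local .env file so OPENAI_API_KEY (needed by guidance's OpenAI LLM
# below) is present in the environment before any API calls are made.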
load_dotenv()

def finalGradingPrompt(resume_summary, role, exp, ires):
    # Initialize the guidance model
    model = guidance.llms.OpenAI('gpt-3.5-turbo')
    # Define the final grading prompt using the guidance template
    finalRes = guidance('''
    {{#system~}}
    You are now the Final Decision Interview Result Grading Expert. You are provided with the evaluation details of an interview.
    You need to evaluate the interview scenario and provide an overall score and a set of scope-of-improvement statements for the interviewee.
    {{~/system}}
    {{#user~}}
    The interview has been completed and its results will be provided to you. You need to evaluate the case,
    provide an overall score for the interviewee's performance, and, based on that score, suggest further improvements where required.
    Here is the interviewee's extracted JSON resume summary:
    {{resume_summary}}
    {{~/user}}
    {{#user~}}
    The interviewee applied for the following role:
    {{role}}
    and has the following experience in that role:
    {{exp}}
    Here is a list of CSV records of the questions answered, each graded under the appropriate rubrics. Each record also
    contains the start and end timestamps of the interviewee answering the question within a 2-minute time constraint.
    Finally, each record contains a float plagiarism score; any answer scoring above the 0.96 threshold is considered plagiarized.
    The CSV records are as follows:
    {{ires}}
    {{~/user}}
    {{#user~}}
    Based on the above interview inputs, generate an overall performance score and the corresponding scope of improvements.
    {{~/user}}
    {{#assistant~}}
    {{gen 'final_evaluation' temperature=0.5 max_tokens=1000}}
    {{~/assistant}}
    ''', llm=model)
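
    # In this (pre-0.1) guidance template syntax, the {{#system}}/{{#user}}/{{#assistant}}
    # blocks map to OpenAI chat roles, {{gen 'final_evaluation' ...}} generates text into
    # a variable of that name, and the ~ modifiers trim surrounding whitespace.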
    # Call the final grading prompt with the provided inputs
    res = finalRes(resume_summary=resume_summary, role=role, exp=exp, ires=ires)
    # Return the generated final evaluation from the executed program
    return res['final_evaluation']
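
# Example call (hypothetical literals, for illustration only):
#   evaluation = finalGradingPrompt(
#       resume_summary='{"name": "Jane Doe", "skills": ["SQL", "Python"]}',
#       role='Data Analyst',
#       exp='2 years',
#       ires='question,grade,start_ts,end_ts,plagiarism_score\nQ1,8,00:00,01:45,0.12')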

def get_shape(csv_file, resume_summary, role, experience):
    # Read the uploaded CSV of graded answers and drop incomplete rows.
    with open(csv_file.name, "r") as f:
        k = pd.read_csv(f)
    # dropna() returns a new DataFrame, so the result must be assigned back.
    k = k.dropna()
    # Render the records as CSV text; interpolating the DataFrame itself would
    # embed its truncated repr for large frames.
    l = finalGradingPrompt(resume_summary=resume_summary, role=role,
                           exp=experience, ires=k.to_csv(index=False))
    return l

gr.Interface(fn=get_shape, inputs=['file', 'text', 'text', 'text'], outputs='text').launch()
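# On Hugging Face Spaces the interface is served automatically; when running
# locally, launch(share=True) can be used to obtain a temporary public URL.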