SujanMidatani committed on
Commit
2884436
1 Parent(s): b37708b

create app.py

Files changed (1)
  1. app.py +61 -0
app.py ADDED
@@ -0,0 +1,61 @@
+ import guidance
+ import gradio as gr
+ import pandas as pd  # needed by get_shape below; missing from the original file
+ from dotenv import load_dotenv
+
+ # Load credentials (e.g. OPENAI_API_KEY) from a local .env file
+ load_dotenv()
+
+ def finalGradingPrompt(resume_summary, role, exp, ires):
+     # Initialize the guidance LLM wrapper
+     model = guidance.llms.OpenAI('gpt-3.5-turbo')
+
+     # Define the final grading prompt using a guidance template
+     finalRes = guidance('''
+     {{#system~}}
+     You are now the Final Decision Interview Result Grading Expert. You are provided with an interview's evaluation details.
+     You need to evaluate the interview scenario and provide an overall score and a set of scope-of-improvement statements for the interviewee.
+     {{~/system}}
+     {{#user~}}
+     The interview has been completed and its results will be provided to you. You need to evaluate the case and
+     provide an overall score for the interviewee's performance and, based on that score, suggestions for further improvement if required.
+
+     Here is the interviewee's extracted JSON summary:
+
+     {{resume_summary}}
+
+     {{~/user}}
+     {{#user~}}
+     The interviewee applied to the following role:
+
+     {{role}}
+
+     and has the following experience in that role:
+
+     {{exp}}
+
+     Here is the list of CSV records made from the questions answered, with grades under the appropriate rubrics. These records also
+     contain the start and end timestamps of the interviewee answering each question within a 2-minute time constraint.
+     Finally, the records contain a float plagiarism score. We have set a threshold of 0.96 for an answer to be considered plagiarized.
+
+     The CSV records are as follows:
+
+     {{ires}}
+
+     {{~/user}}
+     {{#user~}}
+     Based on the above inputs of the interview, generate an overall performance score and the scope of improvements based on it.
+     {{~/user}}
+     {{#assistant~}}
+     {{gen 'final_evaluation' temperature=0.5 max_tokens=1000}}
+     {{~/assistant}}
+     ''', llm=model)
+
+     # Execute the prompt with the provided inputs
+     res = finalRes(resume_summary=resume_summary, role=role, exp=exp, ires=ires)
+
+     # Return the generated evaluation text
+     return res['final_evaluation']
+
+ def get_shape(csv_file, resume_summary, role, experience):
+     # Parse the uploaded CSV of graded answers into a DataFrame
+     with open(csv_file.name, "rb") as f:
+         k = pd.read_csv(f)
+     # Grade the interview from the parsed records
+     l = finalGradingPrompt(resume_summary=resume_summary, role=role, exp=experience, ires=k)
+     return l
+
+ gr.Interface(fn=get_shape, inputs=['file', 'json', 'text', 'text'], outputs='text').launch()
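
For reference, a minimal sketch of how the grading function could be smoke-tested in the same session as the definitions above. The DataFrame column names and resume summary fields are illustrative assumptions, not taken from the commit; note also that importing app.py directly would immediately call .launch(), since the Gradio interface is created at module level.

# Hypothetical smoke test; run alongside the definitions above.
# All field and column names here are illustrative assumptions.
import pandas as pd

records = pd.DataFrame([
    {"question": "Explain REST vs. RPC.", "grade": 7,
     "start": "00:00:05", "end": "00:01:42", "plagiarism_score": 0.12},
    {"question": "Describe how a hash map works.", "grade": 9,
     "start": "00:02:10", "end": "00:03:55", "plagiarism_score": 0.08},
])

summary = {"name": "A. Candidate", "skills": ["Python", "SQL"]}

print(finalGradingPrompt(
    resume_summary=summary,
    role="Backend Engineer",
    exp="2 years",
    ires=records,
))

Guarding the last line of app.py with if __name__ == "__main__": would keep the interface from launching on import, which makes this kind of direct testing easier.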