Update README.md
README.md CHANGED
```diff
@@ -78,7 +78,6 @@ summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
 
 print("Generated Summary:", summary)
 
-
 ## Training Details
 
 ### Training Data
@@ -98,12 +97,6 @@ ROUGE Score
 | **ROUGE 2** | 4.96 | 8.64 |
 | **ROUGE L** | 17.24 | 22.50 |
 
-
-## Technical Specifications
-
-### Model Architecture and Objective
-In post-disaster humanitarian assistance scenarios, the efficiency of digital help desks and the quality of the information they collect are crucial for providing effective and timely support to the people affected. This model leverages parameter-efficient fine-tuning techniques, including Low-Rank Adaptation (LoRA) and Prefix Tuning, to generate summaries and reduce the time spent on manually writing high-quality summaries. The results indicate that the adjusted LLMs not only improve the speed and quality of text summarization but also ensure adaptability to sensitive contexts. Potential challenges and recommendations for implementing the model in practice are also discussed.
-
 ## Citation
 
 Base model: https://huggingface.co/knkarthick/MEETING_SUMMARY
```
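The first hunk's context shows only the tail of the README's inference snippet (`summary = tokenizer.decode(...)` and the final `print`). A minimal end-to-end sketch of that usage with the `transformers` API, assuming the checkpoint is loaded the same way as the base `knkarthick/MEETING_SUMMARY` model; the model name and input text below are illustrative placeholders, not taken from the README:

```python
# Minimal usage sketch. "knkarthick/MEETING_SUMMARY" is the base checkpoint;
# swap in the fine-tuned checkpoint as appropriate. The example text is illustrative.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "knkarthick/MEETING_SUMMARY"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = (
    "Caller reports that their shelter was damaged in the storm and asks "
    "when the next food distribution is scheduled at the community center."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(**inputs, max_length=128, num_beams=4)

summary = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Summary:", summary)
```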
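The removed Technical Specifications paragraph describes parameter-efficient fine-tuning with LoRA and Prefix Tuning. A minimal sketch of how a LoRA setup might look with the Hugging Face `peft` library, assuming the BART-based base checkpoint; the hyperparameters and target modules are illustrative, not the values used to train this model, and Prefix Tuning would follow the same pattern with `PrefixTuningConfig`:

```python
# Sketch of a LoRA setup with the Hugging Face peft library. Assumption: this
# mirrors, but is not necessarily identical to, the actual training setup.
# Hyperparameters and target modules are illustrative.
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForSeq2SeqLM

base_model = AutoModelForSeq2SeqLM.from_pretrained("knkarthick/MEETING_SUMMARY")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # BART attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

With the wrapped model, training can then proceed with a standard `Seq2SeqTrainer` loop, since only the adapter parameters require gradients.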