Upload tokenizer
Browse files

- README.md +60 -23
- tokenizer.json +1 -6
- tokenizer_config.json +1 -1
README.md
CHANGED
@@ -6,29 +6,66 @@ tags:
 - NLP
 pipeline_tag: summarization
 widget:
-- text:
-    [22 further removed lines, lost in extraction; only the fragments "Sam", "Elon", and "Sam Altman:" survive]
+- text: ' Moderator: Welcome, everyone, to this exciting panel discussion. Today,
+    we have Elon Musk and Sam Altman, two of the most influential figures in the tech
+    industry. We’re here to discuss the future of artificial intelligence and its
+    impact on society. Elon, Sam, thank you for joining us. Elon Musk: Happy to be
+    here. Sam Altman: Looking forward to the discussion. Moderator: Let’s dive right
+    in. Elon, you’ve been very vocal about your concerns regarding AI. Could you elaborate
+    on why you believe AI poses such a significant risk to humanity? Elon Musk: Certainly.
+    AI has the potential to become more intelligent than humans, which could be extremely
+    dangerous if it goes unchecked. The existential threat is real. If we don’t implement
+    strict regulations and oversight, we risk creating something that could outsmart
+    us and act against our interests. It’s a ticking time bomb. Sam Altman: I respect
+    Elon’s concerns, but I think he’s overestimating the threat. The focus should
+    be on leveraging AI to solve some of humanity’s biggest problems. With proper
+    ethical frameworks and robust safety measures, we can ensure AI benefits everyone.
+    The fear-mongering is unproductive and could hinder technological progress. Elon
+    Musk: It’s not fear-mongering, Sam. It’s being cautious. We need to ensure that
+    we have control mechanisms in place. Without these, we’re playing with fire. You
+    can’t possibly believe that AI will always remain benevolent or under our control.
+    Sam Altman: Control mechanisms are essential, I agree, but what you’re suggesting
+    sounds like stifling innovation out of fear. We need a balanced approach. Overregulation
+    could slow down advancements that could otherwise save lives and improve quality
+    of life globally. We must foster innovation while ensuring safety, not let fear
+    dictate our actions. Elon Musk: Balancing innovation and safety is easier said
+    than done. When you’re dealing with something as unpredictable and powerful as
+    AI, the risks far outweigh the potential benefits if we don’t tread carefully.
+    History has shown us the dangers of underestimating new technologies. Sam Altman:
+    And history has also shown us the incredible benefits of technological advancement.
+    If we had been overly cautious, we might not have the medical, communication,
+    or energy technologies we have today. It’s about finding that middle ground where
+    innovation thrives safely. We can’t just halt progress because of hypothetical
+    risks. Elon Musk: It’s not hypothetical, Sam. Look at how quickly AI capabilities
+    are advancing. We’re already seeing issues with bias, decision-making, and unintended
+    consequences. Imagine this on a larger scale. We can’t afford to be complacent.
+    Sam Altman: Bias and unintended consequences are exactly why we need to invest
+    in research and development to address these issues head-on. By building AI responsibly
+    and learning from each iteration, we can mitigate these risks. Shutting down or
+    heavily regulating AI development out of fear isn’t the solution. Moderator: Both
+    of you make compelling points. Let’s fast forward a bit. Say, ten years from now,
+    we have stringent regulations in place, as Elon suggests, or a more flexible framework,
+    as Sam proposes. What does the world look like? Elon Musk: With stringent regulations,
+    we would have a more controlled and safer AI development environment. This would
+    prevent any catastrophic events and ensure that AI works for us, not against us.
+    We’d be able to avoid many potential disasters that an unchecked AI might cause.
+    Sam Altman: On the other hand, with a more flexible framework, we’d see rapid
+    advancements in AI applications across various sectors, from healthcare to education,
+    bringing significant improvements to quality of life and solving problems that
+    seem insurmountable today. The world would be a much better place with these innovations.
+    Moderator: And what if both of you are wrong? Elon Musk: Wrong? Sam Altman: How
+    so? Moderator: Suppose the future shows that neither stringent regulations nor
+    a flexible framework were the key factors. Instead, what if the major breakthroughs
+    and safety measures came from unexpected areas like quantum computing advancements
+    or new forms of human-computer symbiosis, rendering this entire debate moot? Elon
+    Musk: Well, that’s a possibility. If breakthroughs in quantum computing or other
+    technologies overshadow our current AI concerns, it could change the entire landscape.
+    It’s difficult to predict all variables. Sam Altman: Agreed. Technology often
+    takes unexpected turns. If future advancements make our current debate irrelevant,
+    it just goes to show how unpredictable and fast-moving the tech world is. The
+    key takeaway would be the importance of adaptability and continuous learning.
+    Moderator: Fascinating. It appears that the only certainty in the tech world is
+    uncertainty itself. Thank you both for this engaging discussion.'
   example_title: Sample 1
 ---
 # Arc of the Conversation Model
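The new widget entry feeds the hosted inference widget a full multi-speaker dialogue to summarize. A minimal sketch of running the same kind of input locally with transformers, assuming the model is a seq2seq summarizer; the repo id `your-org/arc-of-the-conversation` is a placeholder, not the real Hub path:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub path of this model.
summarizer = pipeline("summarization", model="your-org/arc-of-the-conversation")

dialogue = (
    "Moderator: Welcome, everyone, to this exciting panel discussion. Today, "
    "we have Elon Musk and Sam Altman, two of the most influential figures in "
    "the tech industry. Elon Musk: Happy to be here. "
    "Sam Altman: Looking forward to the discussion."
)

# max_length / min_length bound the generated summary, not the input;
# truncation=True clips inputs longer than the tokenizer's model_max_length.
result = summarizer(dialogue, max_length=128, min_length=16, truncation=True)
print(result[0]["summary_text"])
```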
tokenizer.json
CHANGED
@@ -1,11 +1,6 @@
 {
   "version": "1.0",
-  "truncation": {
-    "direction": "Right",
-    "max_length": 1024,
-    "strategy": "LongestFirst",
-    "stride": 0
-  },
+  "truncation": null,
   "padding": null,
   "added_tokens": [
     {
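With `truncation` set to `null`, the serialized fast tokenizer no longer clips inputs at 1024 tokens on its own; callers who relied on that must opt back in. A quick sketch with the tokenizers library, assuming a local copy of the updated tokenizer.json:

```python
from tokenizers import Tokenizer

# Load the updated file from a local checkout (path assumed).
tok = Tokenizer.from_file("tokenizer.json")

long_text = "hello world " * 2000
print(len(tok.encode(long_text).ids))  # full length; nothing is truncated now

# Re-enable what the removed block used to hard-code:
tok.enable_truncation(max_length=1024, stride=0,
                      strategy="longest_first", direction="right")
print(len(tok.encode(long_text).ids))  # 1024
```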
tokenizer_config.json
CHANGED
@@ -930,7 +930,7 @@
   "clean_up_tokenization_spaces": true,
   "eos_token": "</s>",
   "extra_ids": 100,
-  "max_length":
+  "max_length": 2048,
   "model_max_length": 512,
   "pad_token": "<pad>",
   "stride": 0,
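For context, `model_max_length` (512) is what transformers falls back to when `truncation=True` is passed without an explicit length; the bumped `max_length` of 2048 appears to be a stored default rather than a hard limit, and a per-call `max_length` still wins. A sketch, assuming a local directory containing these tokenizer files:

```python
from transformers import AutoTokenizer

# Local directory with tokenizer.json / tokenizer_config.json (path assumed).
tok = AutoTokenizer.from_pretrained("./arc-of-the-conversation")

long_text = "word " * 3000

# No explicit length: truncation falls back to model_max_length.
print(len(tok(long_text, truncation=True)["input_ids"]))  # 512

# An explicit per-call max_length overrides it (may warn beyond 512).
print(len(tok(long_text, truncation=True, max_length=2048)["input_ids"]))  # 2048
```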