text (stringlengths 4-378) | likes (stringlengths 0-4) | reply (stringlengths 0-309) |
---|---|---|
I do acknowledge risks. *BUT* 1. Yes, open research and open source are the best ways to understand and mitigate them. 2. AI is not something that just happens. *We* build it, *we* have agency in what it becomes. Hence *we* control the risks. It's not some sort of natural… | 681 | My followers might hate this idea, but I have to say it: There's a bunch of excellent LLM interpretability work coming out from AI safety folks (links below, from Max Tegmark, Dan Hendrycks, Owain Evans et al) studying open source models including Llama-2. Without open source,… |
Selfied after talking with students in front of the Meta booth at ICCV - Paris | 867 | |
David Donoho says other fields should adopt the openness of ML/AI research that has made it progress so fast by enabling "frictionless reproducibility". Also, we should ignore the fear-mongers. We agree. | 225 | David Donoho nails it: |
Nicely consistent and variable generations from a world model. Video generation is done in representation space. Pixel generation is the final step, only useful for visualization and data generation. | 233 | What's exciting about |
World models FTW. | 265 | Today we're announcing |
IBM, HuggingFace, and Mistral are in the green category. Google is turning red. Inflection is a yellowish question mark. Now let's do governments.... | 1.1K | Funny how the more overvalued a company is, the more alarmist about AI. |