Video ID: string
Channel ID: string
Title: string
Time Created: string
Time Published: string
Duration: string
Description: string
Category: string
Like Count: float64
Dislike Count: float64
Jk1YP4Y_U_0
UCv83tO5cePwHMt1952IVVHw
Stoic Philosophy Text Generation with TensorFlow
2020-04-19 11:33:45 UTC
2020-04-19 13:52:43 UTC
1859 seconds
Explanation of the key parts of an RNN text generator built in TensorFlow with Python. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 I've written a couple of Medium articles on this project; if you're interested, check them out here: Stoic Philosophy - Built by Algorithms https://towardsdatascience.com/stoic-philosophy-built-by-algorithms-9cff7b91dcbd Supercharged Prediction with Ensemble Learning https://towardsdatascience.com/recurrent-ensemble-learning-caffdcd94092 Music by Lakey Inspired. 1 - Blue Boi 2 - Falling https://www.youtube.com/channel/UCOmy8wuTpC95lefU5d1dt2Q
People & Blogs
10
0
gXqHd6-NKBo
UCv83tO5cePwHMt1952IVVHw
How to Build TensorFlow Pipelines with tf.data.Dataset
2020-11-02 08:23:38 UTC
2020-11-02 08:57:48 UTC
1853 seconds
Link to updated version (without video freeze): https://youtu.be/f6XVfgJTbp4 An introduction to building better input pipelines for Machine Learning in TF2. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Link to tf.data API docs: https://www.tensorflow.org/guide/data
People & Blogs
46
9
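A minimal sketch of the shuffle-batch-prefetch pattern the entry above introduces; the random arrays are stand-ins for real training data:

```python
import numpy as np
import tensorflow as tf

# Dummy features/labels standing in for a real dataset
features = np.random.rand(1000, 10).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))

# Build the pipeline: slice -> shuffle -> batch -> prefetch
dataset = (
    tf.data.Dataset.from_tensor_slices((features, labels))
    .shuffle(buffer_size=1000)   # reshuffle the full set each epoch
    .batch(32)                   # mini-batches of 32
    .prefetch(tf.data.AUTOTUNE)  # overlap data prep with training
)

for batch_features, batch_labels in dataset.take(1):
    print(batch_features.shape, batch_labels.shape)  # (32, 10) (32,)
```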
yYEPNla4tlQ
UCv83tO5cePwHMt1952IVVHw
Every New Feature in Python 3.10.0a2
2020-11-08 18:09:49 UTC
2020-11-10 16:44:05 UTC
883 seconds
Every new feature in the early-release alpha 2 preview of Python 3.10. There is video lag from 5:00 to 9:55, covering the Type Alias section (sorry!), but the audio is okay. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
People & Blogs
88
5
GYDFBfx8Ts8
UCv83tO5cePwHMt1952IVVHw
How-to Build a Transformer for Language Classification in TensorFlow
2020-11-19 09:57:27 UTC
2020-11-19 12:20:35 UTC
2299 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp How to build a transformer model for sentiment analysis (language classification) using HuggingFace's Transformers library in TensorFlow 2 with Python. We cover the full process from downloading data all the way through to building and training the transformer model. This is a multi-class classification problem using both TensorFlow and Transformers to build a multiclass sentiment classifier. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Article version is here: https://betterprogramming.pub/build-a-natural-language-classifier-with-bert-and-tensorflow-4770d4442d41 Or here (free link if you don't have Medium membership): https://betterprogramming.pub/build-a-natural-language-classifier-with-bert-and-tensorflow-4770d4442d41?sk=346cd4ce5ee019c400835588b56d8574 Article extract: "High-performance transformer models like BERT and GPT-3 are transforming a huge array of previously menial, language-based tasks, into the work of a few clicks, saving a lot of time. In most industries, the newest wave of language optimization is just getting started β€” taking their first baby steps. But these seedlings are widespread, and sprouting quickly. Much of this adoption is thanks to the incredibly low barrier-to-entry. If you know the basics of TensorFlow or PyTorch, and take a little time to get to grips with the Transformers library β€” you’re already halfway there. With the Transformers library, it takes just three lines of code to initialize a cutting-edge ML model β€” a model built from the billions of research dollars spent by the likes of Google, Facebook, and OpenAI. This article will take you through the steps to build a classification model that leverages the power of transformers, using Google’s BERT. Transformers - Finding Models - Initializing - Bert Inputs and Outputs Classification - The Data - Tokenization - Data Prep - Train-Validation Split - Model Definition - Train"
People & Blogs
384
12
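As a taste of the "three lines of code" the extract above mentions, here is a hedged sketch of initializing BERT in TensorFlow with the Transformers library; the checkpoint name and sequence length are illustrative, and the classification head that sits on top is left out:

```python
from transformers import AutoTokenizer, TFAutoModel

# The "three lines": import, tokenizer, pre-trained model
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
bert = TFAutoModel.from_pretrained("bert-base-cased")

# Encode one example; a softmax classification layer would consume these outputs
tokens = tokenizer("Transformers make NLP easy", max_length=512,
                   truncation=True, padding="max_length", return_tensors="tf")
outputs = bert(tokens["input_ids"], attention_mask=tokens["attention_mask"])
print(outputs.last_hidden_state.shape)  # (1, 512, 768)
```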
DgGFhQmfxHo
UCv83tO5cePwHMt1952IVVHw
How-to use the Kaggle API in Python
2020-11-22 20:19:30 UTC
2020-11-22 20:29:27 UTC
462 seconds
Simple step-by-step tutorial covering the setup and use of the Kaggle API for downloading datasets using the Kaggle library in Python. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
People & Blogs
121
6
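A short sketch of the setup the entry above walks through, assuming you have saved your API token to ~/.kaggle/kaggle.json; the dataset slug is only an example:

```python
# pip install kaggle  (requires ~/.kaggle/kaggle.json with your API token)
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads credentials from kaggle.json

# Download and unzip a dataset by its <owner>/<dataset> slug
api.dataset_download_files("zynicide/wine-reviews", path="data/", unzip=True)
```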
YvVQgvAz9dY
UCv83tO5cePwHMt1952IVVHw
Language Generation with OpenAI's GPT-2 in Python
2020-11-23 12:36:44 UTC
2020-11-24 14:22:46 UTC
498 seconds
Easy natural language generation with Transformers and PyTorch. We apply OpenAI's GPT-2 model to generate text in just a few lines of Python code. Language generation is one of those natural language tasks that can really produce an incredible feeling of awe at how far the fields of machine learning and artificial intelligence have come. GPT-1, 2, and 3 are OpenAI’s top language models β€” well known for their ability to produce incredibly natural, coherent, and genuinely interesting language. In this article, we will take a small snippet of text and learn how to feed that into a pre-trained GPT-2 model using PyTorch and Transformers to produce high-quality language generation in just eight lines of code. We cover: PyTorch and Transformers - Data Building the Model - Initialization - Tokenization - Generation - Decoding Results πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium Article: https://towardsdatascience.com/text-generation-with-python-and-gpt-2-1fecbff1635b Friend Link (free access): https://towardsdatascience.com/text-generation-with-python-and-gpt-2-1fecbff1635b?sk=930367d835f15abb4ef3164f7791e1b1 Thumbnail background by gustavo centurion on Unsplash https://unsplash.com/photos/O6fs4ablxw8
People & Blogs
133
1
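Roughly the few-line recipe the description above refers to; a sketch assuming the base "gpt2" checkpoint, with illustrative sampling settings:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Tokenize a prompt, generate a continuation, decode it back to text
inputs = tokenizer.encode("Machine learning is", return_tensors="pt")
outputs = model.generate(inputs, max_length=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```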
egDIqQIjDCI
UCv83tO5cePwHMt1952IVVHw
Text Summarization with Google AI's T5 in Python
2020-11-24 21:26:27 UTC
2020-11-27 06:00:07 UTC
419 seconds
Easy text summarization with Google AI's T5 model, using HuggingFace transformers and PyTorch in Python. Automatic text summarization allows us to shorten long pieces of text into easy-to-read, short snippets that still convey the most important and relevant information of the original text. In this video, we’ll build a simple but incredibly powerful text summarizer using Google’s T5. We’ll be using the PyTorch and HuggingFace Transformers frameworks. This is split into three parts: 1. Import and Initialization 2. Data and Tokenization 3. Summary Generation πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 You can read the article version of this on Medium here: https://betterprogramming.pub/how-to-summarize-text-with-googles-t5-4dd1ae6238b6 (And for those of you without Medium membership, here's a free link): https://betterprogramming.pub/how-to-summarize-text-with-googles-t5-4dd1ae6238b6?sk=740d3009282cb2c4f7478a0c073dedb3 Thumbnail background by gustavo centurion on Unsplash https://unsplash.com/photos/O6fs4ablxw8
People & Blogs
115
1
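A hedged sketch of the three steps listed above (import and initialization, data and tokenization, summary generation); "t5-base" and the generation parameters are illustrative choices, and the text is a placeholder:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# 1. Import and initialization
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# 2. Data and tokenization - T5 selects its task via a text prefix
text = "..."  # the long passage you want to shorten
inputs = tokenizer.encode("summarize: " + text, return_tensors="pt",
                          max_length=512, truncation=True)

# 3. Summary generation
summary_ids = model.generate(inputs, max_length=150, min_length=40,
                             num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```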
DFtP1THE8fE
UCv83tO5cePwHMt1952IVVHw
How-to do Sentiment Analysis with Flair in Python
2020-12-04 11:15:10 UTC
2020-12-04 14:00:03 UTC
848 seconds
Learn how to perform powerful sentiment analysis with no fine-tuning or pre-training required using the Flair NLP library in Python. With the real-time information available to us on massive social media platforms like Twitter, we have all the data we could ever need to create these accurate and up-to-date sentiment metrics for different companies. But then comes the question, how can our computer understand what this unstructured text data means? That is where sentiment analysis comes in. Sentiment analysis is a particularly interesting branch of Natural Language Processing (NLP), which is used to rate the language used in a body of text. Through sentiment analysis, we can take thousands of tweets about a company and judge whether they are generally positive or negative (the sentiment) in real-time! πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/sentiment-analysis-for-stock-price-prediction-in-python-bed40c65d178 (Free link if you don't have Medium membership): https://towardsdatascience.com/sentiment-analysis-for-stock-price-prediction-in-python-bed40c65d178?sk=1cbf33a5d1fd2ed841f9487972c1cbed Thumbnail photo by Alexander London on Unsplash https://unsplash.com/@alxndr_london
People & Blogs
64
2
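A minimal sketch of the zero-training workflow described above, using Flair's pre-trained English sentiment model; the tweet text is a toy example:

```python
# pip install flair
from flair.data import Sentence
from flair.models import TextClassifier

# Pre-trained English sentiment model - no fine-tuning required
classifier = TextClassifier.load("en-sentiment")

sentence = Sentence("Tesla stock is looking incredibly strong this week")
classifier.predict(sentence)
print(sentence.labels)  # e.g. POSITIVE with a confidence score
```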
8o3jvkK2GGU
UCv83tO5cePwHMt1952IVVHw
Python Environment Setup for Machine Learning
2020-12-23 13:50:07 UTC
2020-12-23 13:53:02 UTC
754 seconds
Everything you need for a Python environment set up for Machine Learning and Data Science! πŸ“• Article: https://towardsdatascience.com/how-to-setup-python-for-machine-learning-173cb25f0206 πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Thumbnail background by Christian Wiediger on Unsplash https://unsplash.com/@christianw
People & Blogs
38
1
BYbJ_HH788U
UCv83tO5cePwHMt1952IVVHw
Functional API - TensorFlow Essentials #2
2020-12-28 16:41:11 UTC
2020-12-29 10:04:40 UTC
341 seconds
A look at the functional API method for building models in TensorFlow 2 for Python. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Thumbnail background by Darius Bashar on Unsplash https://unsplash.com/@dariusbashar?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
Education
20
0
_8Bydxud1XU
UCv83tO5cePwHMt1952IVVHw
Training Parameters - TensorFlow Essentials #3
2020-12-28 19:30:23 UTC
2020-12-29 23:37:57 UTC
450 seconds
Learn how to set up model training parameters and compile the model before training. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Thumbnail background by Alex McCarthy on Unsplash https://unsplash.com/@4lexmccarthy?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
Education
17
0
f6XVfgJTbp4
UCv83tO5cePwHMt1952IVVHw
Input Data Pipelines - TensorFlow Essentials #4
2020-12-28 23:25:54 UTC
2020-12-30 11:30:02 UTC
751 seconds
Learn how to set up efficient and clean input data pipelines using tf.data.Dataset. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Thumbnail background by Daria Nepriakhina on Unsplash https://unsplash.com/@epicantus?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
Education
54
0
MQD1yMnZ_jk
UCv83tO5cePwHMt1952IVVHw
Sequential Model - TensorFlow Essentials #1
2020-12-29 09:46:00 UTC
2020-12-29 09:50:23 UTC
391 seconds
Learn how to use the sequential model building approach in TensorFlow 2. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Background thumbnail by Aryan Dhiman on Unsplash https://unsplash.com/@mylifeasaryan_?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
Education
84
1
KTFWNI0qL28
UCv83tO5cePwHMt1952IVVHw
6 of Python's Newest and Best Features (3.7-3.9)
2021-01-12 23:31:26 UTC
2021-01-12 23:58:12 UTC
1084 seconds
A rundown of the six most recent and coolest features added to Python in the past few years! 2018 brought us a plethora of new features with the release of Python 3.7, followed by 3.8 in 2019 and 3.9 in 2020. Many of those changes were behind the scenes: optimizations and upgrades that the vast majority of us will never notice, despite their benefits. Others are more obvious: additions to syntax or functionality that can change how we write our code. But even these visible changes can be hard to keep up with. In this video, we will run through the more apparent upgrades to provide a brief but hopefully invaluable refresher on everything new to Python from the past few years. - Python 3.7 - Breakpoints - Python 3.8 - Walrus Operator - F-string '=' Specifier - Positional-only Parameters - Python 3.9 - More Type Hinting - Dictionary Unions Medium Article: https://towardsdatascience.com/amazing-features-added-to-python-from-3-7-to-now-4f35f0bb1ea6 (Free access link): https://towardsdatascience.com/amazing-features-added-to-python-from-3-7-to-now-4f35f0bb1ea6?sk=bda3cb7717caa969b81619f85191f241 Thumbnail background by Martin Sanchez on Unsplash: https://unsplash.com/photos/4PDPLw1flgE πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
15
2
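A compact demo of the visible changes listed above; each snippet notes the release it needs:

```python
# Python 3.7 - breakpoint(): drop into the debugger without importing pdb
# breakpoint()  # uncommented, this pauses execution here

# Python 3.8 - walrus operator: assign inside an expression
if (n := len("hello")) > 3:
    print(f"length is {n}")

# Python 3.8 - f-string '=' specifier: prints "n=5"
print(f"{n=}")

# Python 3.8 - positional-only parameters: arguments before / are positional
def greet(name, /, greeting="hi"):
    return f"{greeting} {name}"

# Python 3.9 - more type hinting: builtin generics like list[int]
def total(values: list[int]) -> int:
    return sum(values)

# Python 3.9 - dictionary unions
defaults = {"color": "red", "size": 10}
merged = defaults | {"size": 20}  # {'color': 'red', 'size': 20}
```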
GyJtxd14DTc
UCv83tO5cePwHMt1952IVVHw
Novice to Advanced RegEx in Less-than 30 Minutes + Python
2021-01-27 09:06:42 UTC
2021-01-27 09:51:32 UTC
1769 seconds
A full tutorial covering everything you need to know about Regular Expressions - an essential for anyone learning to code - and even more so for anyone interested in Natural Language Processing. This video includes: - metacharacters - quantifiers - capture groups - using capture groups in Python - character sets - look-ahead and look-behind assertions - negative look-ahead and look-behind assertions - inline modifiers - passing modifiers as function parameters in Python - conditionals (if-else statements for RegEx) - re.match - re.search - re.findall We cover all of this in-depth in this tutorial, incl. examples all the way through on RegEx101 (an interactive debugging/regex building tool) and also in Python. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
239
8
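A quick sketch touching several of the topics listed above (metacharacters, quantifiers, capture groups, look-ahead, and the re functions); the pattern and text are toy examples:

```python
import re

text = "Contact: alice@example.com, bob@example.org"

# Capture groups: () captures, \w+ combines a metacharacter and a quantifier
pattern = r"(\w+)@(\w+)\.(\w+)"

match = re.search(pattern, text)   # first match anywhere in the string
print(match.group(0))              # alice@example.com (whole match)
print(match.group(1))              # alice (first capture group)

print(re.findall(pattern, text))   # all matches, as tuples of groups

# Look-ahead assertion: a word only if it is followed by '@'
print(re.findall(r"\w+(?=@)", text))  # ['alice', 'bob']

# re.match only matches at the start of the string
print(re.match(pattern, text))     # None - text starts with 'Contact:'
```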
1ZcXmjZtJJ8
UCv83tO5cePwHMt1952IVVHw
Building a PlotLy $GME Chart in Python
2021-02-02 13:38:16 UTC
2021-02-07 13:24:45 UTC
4492 seconds
A code-along video covering the coding process from imagination to Python. Something a little different: I'm not overly keen on this format - it's pretty long - but I've recorded it, and I think it may be useful for a few of you. I haven't prepared anything beforehand; this is just going into the coding process with a rough outline of wanting to build a stock chart for GME (GameStop) and adding a few technical indicators, to get more familiar with PlotLy and the AlphaVantage API. So, it's a weird one, but I hope a few of you enjoy it - thanks :) πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
10
0
ZIRmXkHp0-c
UCv83tO5cePwHMt1952IVVHw
How to Build Custom Q&A Transformer Models in Python
2021-02-09 20:42:56 UTC
2021-02-12 13:30:03 UTC
4216 seconds
In this video, we will learn how to take a pre-trained transformer model and train it for question-and-answering. We will be using the HuggingFace transformers library with the PyTorch implementation of models in Python. Transformers are one of the biggest developments in Natural Language Processing (NLP), and learning how to use them properly is basically a data science superpower - they're genuinely amazing, I promise! I hope you enjoy the video :) πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/the-ultimate-performance-metric-in-nlp-111df6c64460 (Free link): https://towardsdatascience.com/how-to-fine-tune-a-q-a-transformer-86f91ec92997?sk=9344fd51afe71a0905db833d0183d436 Code: https://gist.github.com/jamescalam/55daf50c8da9eb3a7c18de058bc139a3 Photo in thumbnail by Lorenzo Herrera on Unsplash https://unsplash.com/@lorenzoherrera
Education
163
5
FdjVoOf9HN4
UCv83tO5cePwHMt1952IVVHw
How-to Use The Reddit API in Python
2021-02-12 11:36:48 UTC
2021-02-12 12:02:48 UTC
1401 seconds
Learn how to use the Reddit API in Python, including setup, authorization, and pulling data from subreddits. Reddit API docs: https://www.reddit.com/dev/api/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c πŸ“– Free link: https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c?sk=0295f297c1365bee7cc7a32bdff21b61 Extract from article: "Reddit is a huge ecosystem brimming with data that is readily available at our very fingertips. As a data-minded person, I wanted to take advantage of this and perform some analysis using this vast repository of open-source data. Initially, it turned out that getting to grips with Reddit’s API wasn’t as clear-cut as expected β€” despite being a straightforward process; it can be a little confusing at first. So, after figuring everything out, I wrote this article β€” which I hope will help a few of you to get familiar with using the Reddit API in Python. We will cover: Getting Access Making Requests - Reading the Data - Streaming New Posts Parameters Getting Access First, we need access. Unlike most popular services, the Reddit API was somewhat difficult to figure out initially. There are several steps: 1. Go to App Preferences and click create another app… at the bottom. 2. Fill out the required details, make sure to select script β€” and click create app. 3. Make a note of the personal use script and secret tokens. 4. Request a temporary OAuth token from Reddit. We need our username and password for this. 5. Add headers=headers to every request. The OAuth token will expire after ~2 hours, and a new one will need to be requested. " And so on, check it out if you're interested in reading (rather than watching). πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
627
11
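A sketch of the token-request flow from the extract above (steps 4 and 5), using the requests library; all credentials are placeholders you must fill in yourself:

```python
import requests
from requests.auth import HTTPBasicAuth

# Credentials from your Reddit app (script type) - placeholders here
CLIENT_ID = "<personal use script token>"
SECRET = "<secret token>"

auth = HTTPBasicAuth(CLIENT_ID, SECRET)
data = {"grant_type": "password",
        "username": "<reddit username>",
        "password": "<reddit password>"}
headers = {"User-Agent": "myapp/0.0.1"}

# Step 4: request a temporary OAuth token (expires after ~2 hours)
res = requests.post("https://www.reddit.com/api/v1/access_token",
                    auth=auth, data=data, headers=headers)
token = res.json()["access_token"]

# Step 5: add the token to the headers of every subsequent request
headers["Authorization"] = f"bearer {token}"
res = requests.get("https://oauth.reddit.com/r/python/new",
                   headers=headers, params={"limit": 5})
print(res.json()["data"]["children"][0]["data"]["title"])
```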
scJsty_DR3o
UCv83tO5cePwHMt1952IVVHw
How to Build Q&A Models in Python (Transformers)
2021-02-17 21:03:29 UTC
2021-02-19 15:00:21 UTC
1189 seconds
In this video we'll cover how to build a question-answering model in Python using HuggingFace's Transformers. You will need to install the transformers library with: pip install transformers Alongside either TensorFlow or PyTorch (to follow this video exactly you will need PyTorch). To install TensorFlow just type: pip install tensorflow OR conda install tensorflow And for PyTorch follow the instructions under 'Install PyTorch' here: https://pytorch.org/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Link to Q&A fine-tuning video: https://youtu.be/ZIRmXkHp0-c You can find the Medium article link below here: https://towardsdatascience.com/question-and-answering-with-bert-6ef89a78dac
Education
151
1
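For a quick start, a minimal sketch using the Transformers pipeline API; the pipeline downloads a default extractive Q&A checkpoint, and the context text is illustrative:

```python
from transformers import pipeline

# One-liner Q&A model (uses PyTorch or TensorFlow, whichever is installed)
qa = pipeline("question-answering")

context = ("HuggingFace's Transformers library gives access to thousands of "
           "pre-trained models for tasks like question answering.")
result = qa(question="What does the Transformers library provide?",
            context=context)
print(result["answer"], result["score"])
```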
QJq9RTp_OVE
UCv83tO5cePwHMt1952IVVHw
How-to Decode Outputs From NLP Models (Python)
2021-02-21 18:02:42 UTC
2021-02-24 15:00:10 UTC
577 seconds
In this video, we will cover three ways to decode the output probabilities from NLP models - greedy search, random sampling, and beam search. Learning how to decode outputs can make a huge difference in diagnosing model issues and improving text output quality - and as an added bonus it's super easy. One of the often-overlooked parts of sequence generation in natural language processing (NLP) is how we select our output tokens β€” otherwise known as decoding. You may be thinking β€” we select a token/word/character based on the probability of each token assigned by our model. This is half-true β€” in language-based tasks, we typically build a model which outputs a set of probabilities to an array where each value in that array represents the probability of a specific word/token. At this point, it might seem logical to select the token with the highest probability? Well, not really β€” this can create some unforeseen consequences β€” as we will see soon. When we are selecting a token in machine-generated text, we have a few alternative methods for performing this decode β€” and options for modifying the exact behavior too. In this video we will explore three different methods for selecting our output token, these are: - Greedy Decoding - Random Sampling - Beam Search πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Link to the article version on Medium: https://towardsdatascience.com/the-three-decoding-methods-for-nlp-23ca59cb1e9d Free link (if you don't have membership): https://towardsdatascience.com/the-three-decoding-methods-for-nlp-23ca59cb1e9d?sk=64fbb0204c174dc520af027a69f88030
Education
28
0
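A hedged sketch contrasting the three decoding methods above via generate(); GPT-2 and the parameter values are illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer.encode("The three decoding methods are", return_tensors="pt")

# Greedy decoding - always take the single highest-probability token
greedy = model.generate(inputs, max_length=30)

# Random sampling - draw from the distribution (temperature reshapes it)
sampled = model.generate(inputs, max_length=30, do_sample=True, temperature=0.7)

# Beam search - track the num_beams most probable sequences at each step
beam = model.generate(inputs, max_length=30, num_beams=5, early_stopping=True)

for output in (greedy, sampled, beam):
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```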
TCZgXFPNnbc
UCv83tO5cePwHMt1952IVVHw
Identify Stocks on Reddit with SpaCy (NER in Python)
2021-03-01 21:47:29 UTC
2021-03-03 14:27:48 UTC
1307 seconds
We will learn how to process unstructured text data from Reddit and extract organization names so that any further analysis is automatically classified and results assigned to the correct stocks. Organizations are mentioned in each subreddit in a variety of formats. Typically we will find two formats: - Organization name, eg Tesla/Tesla Motors - Ticker symbol, eg TSLA, tsla, or $TSLA We also need to be able to differentiate between tickers and other abbreviations/slang - some of these are unclear, like AI (AI can mean both artificial intelligence and refer to the ticker symbol for C3.ai). So, we need a reasonably competent NER process to accurately classify our data. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Reddit API video: https://youtu.be/FdjVoOf9HN4 /r/investing data: https://github.com/jamescalam/transformers/blob/main/course/named_entity_recognition/data/reddit_investing.csv Medium article: https://towardsdatascience.com/ner-for-extracting-stock-mentions-on-reddit-aa604e577be (Free version if you don't have Medium membership): https://towardsdatascience.com/ner-for-extracting-stock-mentions-on-reddit-aa604e577be?sk=d16305d40b18e7955a0665633182d2b4 Thanks for watching!
Education
33
0
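A minimal sketch of the extraction idea above using spaCy; which entities the small model actually tags will vary, and the blacklist is a toy example of filtering ambiguous slang:

```python
import spacy

# Small English pipeline: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

post = "Thinking of moving my TSLA gains into Apple before earnings."
doc = nlp(post)

# Keep only organization entities - our candidate stock mentions
orgs = [ent.text for ent in doc.ents if ent.label_ == "ORG"]
print(orgs)

# A blacklist helps drop slang/abbreviations that aren't really tickers
BLACKLIST = {"ai", "eod", "yolo"}
orgs = [org for org in orgs if org.lower() not in BLACKLIST]
```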
yDGo9z_RlnE
UCv83tO5cePwHMt1952IVVHw
Sentiment Analysis on ANY Length of Text With Transformers (Python)
2021-03-10 08:15:21 UTC
2021-03-10 13:15:03 UTC
1630 seconds
The de-facto standard in many natural language processing (NLP) tasks nowadays is to use a transformer. Text generation? Transformer. Question-and-answering? Transformer. Language classification? Transformer! However, one of the problems with many of these models (a problem that is not just restricted to transformer models) is that we cannot process long pieces of text. Almost every article I write on Medium contains 1000+ words, which, when tokenized for a transformer model like BERT, will produce 1000+ tokens. BERT (and many other transformer models) will consume 512 tokens maxβ€Š-β€Štruncating anything beyond this length. Although I think you may struggle to find value in processing my Medium articles, the same applies to many useful data sourcesβ€Š-β€Šlike news articles or Reddit posts. We will take a look at how we can work around this limitation. In this article, we will find the sentiment for long posts from the /r/investing subreddit. This video will cover: High-Level Approach Getting Started - Data - Initialization Tokenization Preparing The Chunks - Split - CLS and SEP - Padding - Reshaping For BERT Making Predictions πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Here's a link to the Medium article: https://towardsdatascience.com/how-to-apply-transformers-to-any-length-of-text-a5601410af7f And a free access link if you don't have Medium membership: https://towardsdatascience.com/how-to-apply-transformers-to-any-length-of-text-a5601410af7f?sk=d4e717eb2ff31fb27ea67019bbb63ad6
Education
111
2
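A sketch of the chunking approach outlined above (split, re-add [CLS]/[SEP], pad, reshape, predict); the FinBERT checkpoint is an illustrative sentiment model, and long_text is a placeholder:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

model_name = "ProsusAI/finbert"  # illustrative sentiment checkpoint
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)

long_text = "..."  # a post far longer than 512 tokens
tokens = tokenizer.encode_plus(long_text, add_special_tokens=False,
                               return_tensors="pt")

# 510 content tokens + [CLS] + [SEP] = BERT's 512-token limit
chunks = tokens["input_ids"][0].split(510)
probs = []
for chunk in chunks:
    ids = torch.cat([torch.tensor([tokenizer.cls_token_id]), chunk,
                     torch.tensor([tokenizer.sep_token_id])])
    # Pad the final (short) chunk up to length 512
    ids = torch.nn.functional.pad(ids, (0, 512 - len(ids)),
                                  value=tokenizer.pad_token_id)
    mask = (ids != tokenizer.pad_token_id).long()
    with torch.no_grad():
        logits = model(ids.unsqueeze(0),
                       attention_mask=mask.unsqueeze(0)).logits
    probs.append(torch.softmax(logits, dim=-1))

# Mean probability across chunks = sentiment for the whole text
print(torch.stack(probs).mean(dim=0))
```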
9Od9-DV9kd8
UCv83tO5cePwHMt1952IVVHw
Unicode Normalization for NLP in Python
2021-03-16 09:27:24 UTC
2021-03-17 13:30:00 UTC
927 seconds
ℕ𝕠-π• π•Ÿπ•– π•šπ•Ÿ π•₯π•™π•–π•šπ•£ π•£π•šπ•˜π•™π•₯ π•žπ•šπ•Ÿπ•• 𝕨𝕠𝕦𝕝𝕕 𝕖𝕧𝕖𝕣 𝕦𝕀𝕖 π•₯𝕙𝕖𝕀𝕖 π•’π•Ÿπ•Ÿπ• π•ͺπ•šπ•Ÿπ•˜ π•—π• π•Ÿπ•₯ π•§π•’π•£π•šπ•’π•Ÿπ•₯𝕀. 𝕋𝕙𝕖 𝕨𝕠𝕣𝕀π•₯ π•₯π•™π•šπ•Ÿπ•˜, π•šπ•€ π•šπ•— π•ͺ𝕠𝕦 𝕕𝕠 π•’π•Ÿπ•ͺ π•—π• π•£π•ž 𝕠𝕗 ℕ𝕃ℙ π•’π•Ÿπ•• π•ͺ𝕠𝕦 𝕙𝕒𝕧𝕖 𝕔𝕙𝕒𝕣𝕒𝕔π•₯𝕖𝕣𝕀 π•π•šπ•œπ•– π•₯π•™π•šπ•€ π•šπ•Ÿ π•ͺ𝕠𝕦𝕣 π•šπ•Ÿπ•‘π•¦π•₯, π•ͺ𝕠𝕦𝕣 π•₯𝕖𝕩π•₯ π•“π•–π•”π• π•žπ•–π•€ π•”π• π•žπ•‘π•π•–π•₯𝕖𝕝π•ͺ π•¦π•Ÿπ•£π•–π•’π••π•’π•“π•π•–. We also find that text like this is incredibly commonβ€Š-β€Šparticularly on social media. Another pain-point comes from diacritics (the little glyphs in Γ‡, Γ©, Γ…) that you'll find in almost every European language. These characters have a hidden property that can trip up any NLP modelβ€Š-β€Štake a look at the Unicode for two versions of Γ‡: Latin capital letter C with cedilla: \u00C7 Latin capital letter C + combining cedilla: \u0043\u0327 Both are completely different, despite rendering as the same character. To deal with all of these text variants we need to use Unicode normalization - which we will cover in this video. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/what-on-earth-is-unicode-normalization-56c005c55ad0 Friend link (free access): https://towardsdatascience.com/what-on-earth-is-unicode-normalization-56c005c55ad0?sk=0cd19a9ad9f5d948b33179bab3c3b7cd
Education
43
0
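A small sketch of the normalization fix, reusing the exact Ç example from the description above:

```python
import unicodedata

# Two visually identical characters from the description
c_single = "\u00C7"          # Latin capital letter C with cedilla
c_combined = "\u0043\u0327"  # Latin capital letter C + combining cedilla

print(c_single == c_combined)  # False - different code point sequences

# NFD decomposes the single code point, unifying the two encodings
print(unicodedata.normalize("NFD", c_single) == c_combined)  # True

# NFKD also folds "compatibility" variants like double-struck fonts to ASCII
fancy = "β„•π•ƒβ„™"
print(unicodedata.normalize("NFKD", fancy))  # NLP
```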
2qJavL-VX9Y
UCv83tO5cePwHMt1952IVVHw
The NEW Match-Case Statement in Python 3.10
2021-03-17 20:37:52 UTC
2021-03-19 16:00:03 UTC
1088 seconds
Python 3.10 is beginning to fill out with plenty of fascinating new features. One of those, in particular, caught my attention - structural pattern matching - or as most of us will know it, switch/case statements. Switch-statements have been absent from Python despite being a common feature of most languages. Python is leapfrogging ahead of those languages by introducing the match-case statement as a switch-case v2.0. Back in 2006, PEP 3103 was raised, recommending the implementation of a switch-case statement. However, after a poll at PyCon 2007 received no support for the feature, the Python devs dropped it. Fast-forward to 2020, and Guido van Rossum, the creator of Python, committed the first documentation showing the new match-statements, which have been named Structural Pattern Matching, as found in PEP 634. Let's take a look at how this new logic works. Medium Article: https://towardsdatascience.com/switch-case-statements-are-coming-to-python-d0caf7b2bfd3 Friend Link (free access): https://towardsdatascience.com/switch-case-statements-are-coming-to-python-d0caf7b2bfd3?sk=363e0f7696502647e007f91910b4c817 πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:58 Switch-Case 02:37 Flow of Logic 03:21 Second Example (Tuples) 05:00 Final Example Setup 11:30 Final Example If-Else Version 15:22 Final Example Match-Case Version
Education
310
11
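A minimal taste of the new syntax (requires Python 3.10+); the command-handling example is illustrative, not the one from the video:

```python
def handle(command):
    match command.split():
        case ["go", direction]:
            return f"moving {direction}"
        case ["buy", *items]:
            return f"buying {len(items)} item(s)"
        case ["quit" | "exit"]:
            return "bye"
        case _:
            return "unknown command"

print(handle("go north"))     # moving north
print(handle("buy gme amc"))  # buying 2 item(s)
print(handle("exit"))         # bye
```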
pjtnkCGElcE
UCv83tO5cePwHMt1952IVVHw
Multi-Class Language Classification With BERT in TensorFlow
2021-03-24 17:51:01 UTC
2021-03-25 16:00:15 UTC
2604 seconds
Chapters for each section of the video (preprocessing, model build, prediction) are in the video timeline. Transformers have been described as the fourth pillar of deep learning [1], alongside the three big neural net architectures of CNNs, RNNs, and MLPs. However, from the perspective of natural language processingβ€Š-β€Štransformers are much more than that. Since their introduction in 2017, they've come to dominate a majority of NLP benchmarksβ€Š-β€Šand continue to impress daily. What I'm saying is, transformers are damn cool. And with libraries like HuggingFace's transformersβ€Š-β€Šit has become too easy to build incredible solutions with them. So, what's not to love? Incredible performance paired with the ultimate ease-of-use. In this video, we'll work through building a multi-class classification model using transformersβ€Š-β€Šfrom start-to-finish. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/multi-class-classification-with-transformers-6cf7b59a033a Free access: https://towardsdatascience.com/multi-class-classification-with-transformers-6cf7b59a033a?sk=544872025c2283c54cf4294814b8cae3 Link to Kaggle video: https://youtu.be/DgGFhQmfxHo [1] Fourth Pillar of AI: https://ark-invest.com/articles/analyst-research/transformers-comprise-the-fourth-pillar-of-deep-learning/ 00:00 Intro 01:21 Pulling Data 01:47 Preprocessing 14:33 Data Input Pipeline 24:14 Defining Model 33:29 Model Training 35:36 Saving and Loading Models 37:37 Making Predictions
Education
264
1
JkeNVaiUq_c
UCv83tO5cePwHMt1952IVVHw
How to Build Python Packages for Pip
2021-04-02 14:51:14 UTC
2021-04-02 15:19:32 UTC
1267 seconds
The most powerful feature of Python is its community. Almost every use-case out there has a package built specifically for it. Need to send mobile/email alerts? pip install knockknock β€Š- β€ŠBuild ML apps? pip install streamlit β€Š- β€ŠBored of your terminal? pip install coloramaβ€Š - β€ŠIt's too easy! I know this is obvious, but those libraries didn't magically appear. For each package, there is a person, or many personsβ€Š-β€Šthat actively developed and deployed that package. Every single one. All 300K+ of them. That is why Python is Python, the level of support is phenomenalβ€Š-β€Šmindblowing. In this video, we will learn how to build our own packages. And add them to the Python Package Index (PyPI). Afterward, we will be able to install our packages using pip install! GitHub Repo: https://github.com/jamescalam/aesthetic_ascii πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium Article: https://towardsdatascience.com/how-to-package-your-python-code-df5a7739ab2e πŸ“– Here's a free link: https://towardsdatascience.com/how-to-package-your-python-code-df5a7739ab2e?sk=04d9f67c0654445bbcbbf6825f535900
Education
390
11
4Jmq28RQ3hU
UCv83tO5cePwHMt1952IVVHw
How-to Structure a Q&A ML App
2021-04-09 15:02:44 UTC
2021-04-09 15:22:50 UTC
585 seconds
▢️ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB I'm planning on doing something different: a series of videos where we work through the steps - from start to finish - of attempting to build a Q&A web app that answers our questions with Stoic answers. In this video, I'm outlining the idea and describing the high-level setup that I think we'll need to put together. It should be cool! We'll be using the Haystack framework for 'Q&A at scale', which uses HuggingFace transformers under the hood, and the Elasticsearch document store. Find the repo here: https://github.com/jamescalam/aurelius πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
46
0
Vwq7Ucp9UCw
UCv83tO5cePwHMt1952IVVHw
How to Index Q&A Data With Haystack and Elasticsearch
2021-04-11 21:30:32 UTC
2021-04-12 15:00:11 UTC
807 seconds
▢️ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB The second video in 'Building a Stoic Q&A App' - here we're setting up Elasticsearch and Haystack to store the data (Meditations) ready for retrieval when we ask our app questions. Find the code here: https://github.com/jamescalam/aurelius/tree/main/code/labs πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
79
3
DBsxUSUhfRg
UCv83tO5cePwHMt1952IVVHw
Q&A Document Retrieval With DPR
2021-04-12 14:44:59 UTC
2021-04-15 15:00:10 UTC
890 seconds
▢️ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB The third video in building our Stoic Q&A app. In open-domain question answering, we typically design a model architecture that contains a data source, retriever, and reader/generator. The first of these components is typically a document store. The two most popular stores we use here are Elasticsearch and FAISS. Next up is our retriever β€” the topic of this video. The job of the retriever is to filter through our document store for relevant chunks of information (the documents) and pass them to the reader/generator model. DPR (dense passage retriever) is a dense vector retriever that is trained on question-context pairs, encoding both accordingly and enabling highly accurate similarity indexing. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 If you're interested in learning more about DPR, I wrote about it on Medium here: https://towardsdatascience.com/how-to-create-an-answer-from-a-question-with-dpr-d76e29cc5d60 (Free link): https://towardsdatascience.com/how-to-create-an-answer-from-a-question-with-dpr-d76e29cc5d60?sk=1bdd7c1bff80bf51410962691c690c69 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
57
0
QrzHImDEq_w
UCv83tO5cePwHMt1952IVVHw
How to Use Type Annotations in Python
2021-04-23 21:44:38 UTC
2021-04-27 14:53:25 UTC
907 seconds
Type annotationsβ€Š-β€Šalso known as type signaturesβ€Š-β€Šare used to indicate the datatypes of variables and input/outputs of functions and methods. In many languages, datatypes are explicitly stated. In these languages, if you don't declare your datatypeβ€Š-β€Šthe code will not run. Type annotations have a long and convoluted history with Python, going all the way back to the first release of Python 3 with the initial implementation of function annotations. Type annotations in Python are not make-or-break like in other languages (like C). They're optional chunks of syntax that we can add to make our code more explicit. Erroneous type annotations will do nothing more than highlight the incorrect annotation in our code editorβ€Š-β€Šno errors are ever raised due to annotations. So, if type annotations are not enforced, why use them? Well, as we touched upon alreadyβ€Š-β€Šdeclaring types makes our code more explicit, and if done well, easier to readβ€Š-β€Šboth for ourselves and others. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Read the Medium article here: https://towardsdatascience.com/type-annotations-in-python-d90990b172dc πŸ“– Here's a free link: https://towardsdatascience.com/type-annotations-in-python-d90990b172dc?sk=29bc29ab5478a842363963b421781b47 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:55 Datatypes Example in C 2:53 Static and Dynamic Typed Languages 3:47 Type Annotations in Python 4:25 How to Define Simple Types 6:04 IDE Warnings 8:20 More Complex Types 9:53 dict[str, int] 11.07 Multiple Types 11:38 Union Operator (Py 3.9) 12:34 Union Operator (Py 3.10) 13:21 Optional Operator
Education
132
3
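A short sketch of the annotation styles discussed above, from simple types through Union and Optional; note the bare | union syntax needs Python 3.10 at runtime:

```python
from typing import Optional, Union

# Simple variable and function annotations - hints only, never enforced
name: str = "James"

def mean(values: list[float]) -> float:  # list[float] needs Python 3.9+
    return sum(values) / len(values)

# Multiple accepted types, pre-3.10 style...
def scale(x: Union[int, float], factor: float = 1.0) -> float:
    return x * factor

# ...and the Python 3.10 union operator equivalent
def scale310(x: int | float, factor: float = 1.0) -> float:
    return x * factor

# Optional[str] is shorthand for Union[str, None]
def find_user(user_id: int) -> Optional[str]:
    return None  # nothing found - the annotation warns callers about None
```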
2tdLYIKPafc
UCv83tO5cePwHMt1952IVVHw
Extractive Q&A With Haystack and FastAPI in Python
2021-04-26 22:03:55 UTC
2021-04-29 15:00:04 UTC
1058 seconds
▢️ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB In this video we work through building an extractive Q&A stack using Haystack, and embedding it within a FastAPI instance in Python. We use the BERT transformer for our reader model, alongside Elasticsearch and the BM25 retriever algorithm. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
71
1
jVPd7lEvjtg
UCv83tO5cePwHMt1952IVVHw
Sentence Similarity With Transformers and PyTorch (Python)
2021-05-04 15:25:17 UTC
2021-05-05 15:00:20 UTC
1270 seconds
Easy mode: https://youtu.be/Ey81KfQ3PQU All we ever seem to talk about nowadays are BERT this, BERT that. I want to talk about something else, but BERT is just too good β€Š- β€Šso this video will be about BERT for sentence similarity. A big part of NLP relies on similarity in highly-dimensional spaces. Typically an NLP solution will take some text, process it to create a big vector/array representing said textβ€Š-β€Šthen perform several transformations. It's highly-dimensional magic. Sentence similarity is one of the clearest examples of how powerful highly-dimensional magic can be. The logic is this: - Take a sentence, convert it into a vector. - Take many other sentences, and convert them into vectors. - Find sentences that have the smallest distance (Euclidean) or smallest angle (cosine similarity) between themβ€Š-β€Šmore on that here. - We now have a measure of semantic similarity between sentencesβ€Š-β€Šeasy! At a high level, there's not much else to it. But of course, we want to understand what is happening in a little more detail and implement this in Python too. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1?sk=c0f2990b4660210b447e52d55bd0f4e5 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:16 BERT Base Network 1:11 Sentence Vectors and Similarity 1:47 The Data and Model 3:01 Two Approaches 3:16 Tokenizing Sentences 9:11 Creating last_hidden_state Tensor 11:08 Creating Sentence Vectors 17:53 Cosine Similarity
Education
233
2
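A condensed sketch of the steps listed above (tokenize, take last_hidden_state, mean-pool into sentence vectors, compare with cosine similarity); the checkpoint is an illustrative sentence-embedding BERT:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "sentence-transformers/bert-base-nli-mean-tokens"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["The cat sat on the mat", "A feline rested on the rug"]
tokens = tokenizer(sentences, padding=True, truncation=True,
                   return_tensors="pt")

with torch.no_grad():
    hidden = model(**tokens).last_hidden_state  # (2, seq_len, 768)

# Mean pooling: average token embeddings, ignoring padding positions
mask = tokens["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity between the two sentence vectors
sim = torch.nn.functional.cosine_similarity(embeddings[0:1], embeddings[1:2])
print(sim.item())
```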
Ey81KfQ3PQU
UCv83tO5cePwHMt1952IVVHw
Sentence Similarity With Sentence-Transformers in Python
2021-05-04 19:55:42 UTC
2021-05-05 15:00:09 UTC
370 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp Hard mode: https://youtu.be/jVPd7lEvjtg All we ever seem to talk about nowadays are BERT this, BERT that. I want to talk about something else, but BERT is just too good β€Š- β€Šso this video will be about BERT for sentence similarity. A big part of NLP relies on similarity in highly-dimensional spaces. Typically an NLP solution will take some text, process it to create a big vector/array representing said textβ€Š-β€Šthen perform several transformations. It's highly-dimensional magic. Sentence similarity is one of the clearest examples of how powerful highly-dimensional magic can be. The logic is this: - Take a sentence, convert it into a vector. - Take many other sentences, and convert them into vectors. - Find sentences that have the smallest distance (Euclidean) or smallest angle (cosine similarity) between themβ€Š-β€Šmore on that here. - We now have a measure of semantic similarity between sentencesβ€Š-β€Šeasy! At a high level, there's not much else to it. But of course, we want to understand what is happening in a little more detail and implement this in Python too. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/bert-for-measuring-text-similarity-eec91c6bf9e1?sk=c0f2990b4660210b447e52d55bd0f4e5 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
371
4
W8ZPQOcHnlE
UCv83tO5cePwHMt1952IVVHw
NER With Transformers and spaCy (Python)
2021-05-09 20:57:10 UTC
2021-05-11 15:00:28 UTC
567 seconds
Named entity recognition (NER) consists of extracting 'entities' from text - what we mean by that is given the sentence: "Apple reached an all-time high stock price of 143 dollars this January." We might want to extract the key pieces of information - or 'entities' - and categorize each of those entities. Like so: - Apple: Organization - 143 dollars: Monetary Value - this January: Date For us humans, this is easy. But how can we teach a machine to distinguish between a granny smith apple and the Apple we trade on NASDAQ? (No, we can't rely on the 'A' being capitalized…) This is where NER comes in - using NER, we can extract keywords like apple and identify that it is, in fact, an organization - not a fruit. The go-to library for NER is spaCy, which is incredible. But what if we added transformers to spaCy? Even better - we'll cover exactly that in this video. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5
Education
120
2
q9NS5WpfkrU
UCv83tO5cePwHMt1952IVVHw
Training BERT #1 - Masked-Language Modeling (MLM)
2021-05-19 09:31:26 UTC
2021-05-19 14:51:39 UTC
984 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp BERT, everyone's favorite transformer, costs Google ~$7K to train (and who knows how much in R&D costs). From there, we write a couple of lines of code to use the same model - all for free. BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches: masked-language modeling (MLM) and next sentence prediction (NSP). MLM consists of giving BERT a sentence and optimizing the weights inside BERT to output the same sentence on the other side. So we input a sentence and ask that BERT outputs the same sentence. However, before we actually give BERT that input sentence - we mask a few tokens. So we're actually inputting an incomplete sentence and asking BERT to complete it for us. How to train BERT with MLM: https://youtu.be/R6hcxMMOrPE πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 Medium article: https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c?sk=17a19eca8dc8280bea4138802580ffe0 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
277
3
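A hedged sketch of the MLM idea described above: mask a token, then ask BERT to recover it. The mask position is hard-coded for clarity; real training masks ~15% of tokens at random:

```python
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "the capital of france is paris."
inputs = tokenizer(text, return_tensors="pt")
labels = inputs["input_ids"].clone()  # targets = the original sentence

# Mask the token for 'paris' (index 6 after [CLS])
inputs["input_ids"][0, 6] = tokenizer.mask_token_id

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # MLM loss: predict the original token at [MASK]

# BERT's top guess for the masked position
pred_id = outputs.logits[0, 6].argmax().item()
print(tokenizer.decode([pred_id]))
```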
R6hcxMMOrPE
UCv83tO5cePwHMt1952IVVHw
Training BERT #2 - Train With Masked-Language Modeling (MLM)
2021-05-19 11:38:10 UTC
2021-05-19 14:51:49 UTC
1666 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp BERT has enjoyed unparalleled success in NLP thanks to two unique training approaches, masked-language modeling (MLM), and next sentence prediction (NSP). In many cases, we might be able to take the pre-trained BERT model out-of-the-box and apply it successfully to our own language tasks. But often, we might need to pre-train the model for a specific use case even further. Further training with MLM allows us to tune BERT to better understand the particular use of language in a more specific domain. Out-of-the-box BERTβ€Š-β€Šgreat for general purpose use. Fine-tuned with MLM BERTβ€Š-β€Šgreat for domain-specific use. In this video, we'll cover exactly how to fine-tune BERT models using MLM in PyTorch. πŸ‘Ύ Code: https://github.com/jamescalam/transformers/blob/main/course/training/03_mlm_training.ipynb Meditations data: https://github.com/jamescalam/transformers/blob/main/data/text/meditations/clean.txt Understanding MLM: https://youtu.be/q9NS5WpfkrU πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/masked-language-modelling-with-bert-7d49793e5d2c?sk=17a19eca8dc8280bea4138802580ffe0 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
223
1
1gN1snKBLP0
UCv83tO5cePwHMt1952IVVHw
Training BERT #3 - Next Sentence Prediction (NSP)
2021-05-23 18:14:04 UTC
2021-05-25 14:56:47 UTC
823 seconds
Next sentence prediction (NSP) is one-half of the training process behind the BERT model (the other being masked-language modelingβ€Š-β€ŠMLM). Where MLM teaches BERT to understand relationships between wordsβ€Š-β€ŠNSP teaches BERT to understand relationships between sentences. In the original BERT paper, it was found that without NSP, BERT performed worse on every single metric - β€Šso it's important. Now, when we use a pre-trained BERT model, training with NSP and MLM has already been done, so why do we need to know about it? Well, we can actually further pre-train these pre-trained BERT models so that they better understand the language used in our specific use-cases. To do that, we can use both MLM and NSP. So, in this video, we'll go into depth on what NSP is, how it works, and how we can implement it in code. Training with NSP: https://youtu.be/x1lAcT3xl5M πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f?sk=3595968413abde1c5833e1a96e449673 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
94
6
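A minimal sketch of NSP with the pre-trained head; the sentence pair is a toy example, and label 0 means "B really follows A":

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sentence_a = "I went to the shop."
sentence_b = "I bought some bread while I was there."

# Tokenizer builds [CLS] A [SEP] B [SEP], with token_type_ids marking A vs B
inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")

# label 0 = IsNext, label 1 = NotNext (a random, unrelated sentence)
outputs = model(**inputs, labels=torch.LongTensor([0]))
print(outputs.loss)
print(outputs.logits.argmax().item())  # 0 if BERT thinks B follows A
```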
x1lAcT3xl5M
UCv83tO5cePwHMt1952IVVHw
Training BERT #4 - Train With Next Sentence Prediction (NSP)
2021-05-27 15:52:57 UTC
2021-05-27 16:15:39 UTC
2205 seconds
Next sentence prediction (NSP) is one-half of the training process behind the BERT model (the other being masked-language modelingβ€Š-β€ŠMLM). Although NSP (and MLM) are used to pre-train BERT models, we can use these exact methods to further pre-train our models to better understand the specific style of language in our own use cases. So, in this video, we'll cover exactly how we take an unstructured body of text, and use it to pre-train a BERT model using NSP. Meditations data: https://github.com/jamescalam/transformers/blob/main/data/text/meditations/clean.txt Jupyter Notebook https://github.com/jamescalam/transformers/blob/main/course/training/06_nsp_training.ipynb πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/bert-for-next-sentence-prediction-466b67f8226f?sk=3595968413abde1c5833e1a96e449673 πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
95
1
5-A435hIYio
UCv83tO5cePwHMt1952IVVHw
New Features in Python 3.10
2021-06-03 16:41:56 UTC
2021-06-08 15:00:02 UTC
800 seconds
The Python 3.10 release has several new features like structural pattern matching, a new typing Union operator, and parenthesized context managers! Python 3.10 has now been released, here we test all of the best new features introduced. We'll cover some of the most interesting additions to Pythonβ€Š-β€Šstructural pattern matching, parenthesized context managers, more typing, and the new and improved error messages. Download the latest release: https://www.python.org/downloads/release/python-3100/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/whats-new-in-python-3-10-a757c6c69342 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/whats-new-in-python-3-10-a757c6c69342?sk=648ae12c1025a83affba4eecec0d46c6 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:45 Type Annotations in Python 01:10 Typing Union Operator 02:07 Parenthesized Context Managers 05:07 Structural Pattern Matching 09:31 Better Error Messages
Education
375
2
IC9FaVPKlYc
UCv83tO5cePwHMt1952IVVHw
Training BERT #5 - Training With BertForPretraining
2021-06-04 05:13:06 UTC
2021-06-15 15:00:19 UTC
1306 seconds
NSP Logic https://youtu.be/1gN1snKBLP0 MLM Logic https://youtu.be/q9NS5WpfkrU πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/how-to-train-bert-aaad00533168 πŸ“– Here's a free link: https://towardsdatascience.com/how-to-train-bert-aaad00533168?sk=5ad4e5e44a6c573b3be1967c9abdcc35 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
128
1
fA0dFQacmic
UCv83tO5cePwHMt1952IVVHw
FREE 11 Hour NLP Transformers Course (Next 3 Days Only)
2021-06-04 07:56:44 UTC
2021-06-04 13:00:19 UTC
267 seconds
The offer has now expired! You can find the final 70% discount here: https://bit.ly/3DFvvY5 In total, 10823 people redeemed the code - which is incredible, I'm very happy so many of you were interested in the course and I hope it will help many of you in learning about transformers and NLP where it may have been too expensive to otherwise - so thank you all! πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
51
0
GhGUZrcB-WM
UCv83tO5cePwHMt1952IVVHw
How-to Use HuggingFace's Datasets - Transformers From Scratch #1
2021-06-21 21:56:31 UTC
2021-06-22 13:00:07 UTC
861 seconds
How can we build our own custom transformer models? Maybe we'd like our model to understand a less common language - how many transformer models out there have been trained on Piemontese or the Nahuatl languages? In that case, we need to do something different. We need to build our own model - from scratch. In this video, we'll learn how to use HuggingFace's datasets library to download multilingual data and prepare it for training our custom transformer tokenizer and model. --- Part 2: https://youtu.be/JIeAB8vvBQo Part 3: https://youtu.be/heTYbpr9mD8 Part 4: https://youtu.be/35Pdoyi6ZoQ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403?sk=aea909609f41be43bdb2dbbd75a801f2 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
147
3
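A minimal sketch of pulling multilingual data with the datasets library, assuming the Italian OSCAR corpus the series works with (it is large, so the slice here is only for experimentation):

```python
from datasets import load_dataset

# Italian portion of the OSCAR corpus - huge, so slice a small sample
dataset = load_dataset("oscar", "unshuffled_deduplicated_it",
                       split="train[:2000]")

print(dataset)               # features and number of rows
print(dataset[0]["text"][:200])  # first 200 characters of the first sample
```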
JIeAB8vvBQo
UCv83tO5cePwHMt1952IVVHw
Build a Custom Transformer Tokenizer - Transformers From Scratch #2
2021-06-22 20:07:37 UTC
2021-06-24 14:00:06 UTC
857 seconds
How can we build our own custom transformer models? Maybe we'd like our model to understand a less common language - how many transformer models out there have been trained on Piemontese or the Nahuatl languages? In that case, we need to do something different. We need to build our own model - from scratch. In this video, we'll learn how to use HuggingFace's tokenizers library to build our own custom transformer tokenizer. Part 1: https://youtu.be/GhGUZrcB-WM --- Part 3: https://youtu.be/heTYbpr9mD8 Part 4: https://youtu.be/35Pdoyi6ZoQ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ“™ Medium article: https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403 πŸ“– If membership is too expensive - here's a free link: https://towardsdatascience.com/transformers-from-scratch-creating-a-tokenizer-7d7418adb403?sk=aea909609f41be43bdb2dbbd75a801f2 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
80
3
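A hedged sketch of training a byte-level BPE tokenizer with the tokenizers library; the file paths, output directory name, and special-token set are illustrative (RoBERTa-style):

```python
import os
from pathlib import Path
from tokenizers import ByteLevelBPETokenizer

# Plain-text training files, one sample per line (paths are illustrative)
paths = [str(p) for p in Path("./oscar_it").glob("*.txt")]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(files=paths, vocab_size=30_522, min_frequency=2,
                special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"])

# Saves vocab.json and merges.txt, loadable later with RobertaTokenizer
os.makedirs("filiberto", exist_ok=True)
tokenizer.save_model("filiberto")
```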
ziiF1eFM3_4
UCv83tO5cePwHMt1952IVVHw
3 Vector-based Methods for Similarity Search (TF-IDF, BM25, SBERT)
2021-06-28 13:25:28 UTC
2021-06-29 13:00:23 UTC
1764 seconds
Vector similarity search is one of the fastest-growing domains in AI and machine learning. At its core, it is the process of matching relevant pieces of information together. Similarity search is a complex topic and there are countless techniques for building effective search engines. In this video, we'll cover three vector-based approaches for comparing languages and identifying similar 'documents', covering both vector similarity search and semantic search: - TF-IDF - BM25 - Sentence-BERT πŸ“° Original article: https://www.pinecone.io/learn/semantic-search/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership Mining Massive Datasets Book (Similarity Search): πŸ“š https://amzn.to/3CC0zrc (3rd ed) πŸ“š https://amzn.to/3AtHSnV (1st ed, cheaper) πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 01:37 TF-IDF 11:44 BM25 20:30 SBERT
Education
416
1
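Of the three methods above, TF-IDF is the quickest to sketch; here with scikit-learn and toy documents:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["purple is the best city in the forest",
        "there is an art to getting your way",
        "it is not often you find soggy bananas"]
query = ["best city in the forest"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)  # sparse TF-IDF matrix
query_vector = vectorizer.transform(query)

# Rank documents by cosine similarity to the query vector
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(scores)  # the first document should score highest
```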
AY62z7HrghY
UCv83tO5cePwHMt1952IVVHw
3 Traditional Methods for Similarity Search (Jaccard, w-shingling, Levenshtein)
2021-06-28 17:44:01 UTC
2021-06-29 12:00:04 UTC
1520 seconds
Similarity search is one of the fastest-growing domains in AI and machine learning. At its core, it is the process of matching relevant pieces of information together. Similarity search is a complex topic and there are countless techniques for building effective search engines. In this video, we'll cover three traditional approaches for comparing languages and identifying similar 'documents': - Jaccard Similarity - w-shingling - Levenshtein distance πŸ“° Original article: https://www.pinecone.io/learn/semantic-search/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership Mining Massive Datasets Book (Similarity Search): πŸ“š https://amzn.to/3CC0zrc (3rd ed) πŸ“š https://amzn.to/3AtHSnV (1st ed, cheaper) πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:23 Jaccard Similarity 02:39 w-shingling 07:17 Levenshtein Distance
Education
86
0
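The first two methods above fit in a few lines of plain Python (Levenshtein needs a small dynamic-programming table, omitted here); the sentences are toy examples:

```python
def jaccard(a: set, b: set) -> float:
    # Intersection over union of the two sets
    return len(a & b) / len(a | b)

s1 = "his thought process was on so many levels"
s2 = "the process of his thought was on many levels"

# Jaccard similarity over word sets
print(jaccard(set(s1.split()), set(s2.split())))

# w-shingling: compare sets of w consecutive words (here w=2, bigrams)
def shingles(text: str, w: int = 2) -> set:
    words = text.split()
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

print(jaccard(shingles(s1), shingles(s2)))
```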
heTYbpr9mD8
UCv83tO5cePwHMt1952IVVHw
Building MLM Training Input Pipeline - Transformers From Scratch #3
2021-07-02 15:28:46 UTC
2021-07-05 14:00:30 UTC
1392 seconds
The input pipeline of our training process is the more complex part of the entire transformer build. It consists of us taking our raw OSCAR training data, transforming it, and preparing it for Masked-Language Modeling (MLM). Finally, we load our data into a DataLoader ready for training! Part 1: https://youtu.be/GhGUZrcB-WM Part 2: https://youtu.be/JIeAB8vvBQo --- Part 4: https://youtu.be/35Pdoyi6ZoQ πŸ“™ Medium article: https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6 πŸ“– Free link: https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6?sk=9db6224efbd4ec6fd407a80b528e69b0 πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP πŸ•ΉοΈ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Education
69
0
ee71R4Cqb5o
UCv83tO5cePwHMt1952IVVHw
Angular App Setup With Material - Stoic Q&A #5
2021-07-05 08:50:04 UTC
2021-07-20 14:00:28 UTC
814 seconds
▢️ Stoic Q&A App Playlist: https://www.youtube.com/playlist?list=PLIUOU7oqGTLixb-CatMxNCO-mJioMmZEB The fifth video in our Stoic Q&A series - setting up our Angular app with Angular Material. Prerequisites: Installation of Node.js and NPM - https://nodejs.org/en/ Angular - https://angular.io/guide/setup-local πŸ‘Ύ Discord https://discord.gg/c5QtDB9RAP
Science & Technology
17
0
35Pdoyi6ZoQ
UCv83tO5cePwHMt1952IVVHw
Training and Testing an Italian BERT - Transformers From Scratch #4
2021-07-05 18:22:41 UTC
2021-07-06 13:00:03 UTC
1838 seconds
We need two things for training: our DataLoader and a model. The DataLoader we have - but no model. For training, we need a raw (not pre-trained) RobertaForMaskedLM. To create that, we first need to create a RoBERTa config object to describe the parameters we'd like to initialize FiliBERTo with. Once we have our model, we set up our training loop and train! Post-training, we'll test the model with Laura, who is Italian - and hope for the best. Part 1: https://youtu.be/GhGUZrcB-WM Part 2: https://youtu.be/JIeAB8vvBQo Part 3: https://youtu.be/heTYbpr9mD8 --- 📙 Medium article: https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6 📖 If membership is too expensive - here's a free link: https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6?sk=9db6224efbd4ec6fd407a80b528e69b0 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 👾 Discord https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 00:35 Review of Code 02:02 Config Object 06:28 Setup For Training 10:30 Training Loop 14:57 Dealing With CUDA Errors 16:17 Training Results 19:52 Loss 21:18 Fill-mask Pipeline For Testing 21:54 Testing With Laura
Science & Technology
94
1
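Creating the config and the raw model takes only a few lines; the parameter values below (vocab size, layer count, and so on) are illustrative assumptions rather than FiliBERTo's exact settings.

```python
# Initializing a raw (untrained) RoBERTa MLM model from a config.
from transformers import RobertaConfig, RobertaForMaskedLM

config = RobertaConfig(
    vocab_size=30_522,            # must match the tokenizer's vocab (assumed)
    max_position_embeddings=514,
    hidden_size=768,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)  # random weights, ready for pre-training
print(model.num_parameters())
```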
sKyvsdEv6rk
UCv83tO5cePwHMt1952IVVHw
Faiss - Introduction to Similarity Search
2021-07-09 13:47:26 UTC
2021-07-13 15:00:19 UTC
1896 seconds
Full Similarity Search Playlist: https://www.youtube.com/watch?v=AY62z7HrghY&list=PLIUOU7oqGTLhlWpTz4NnuT3FekouIVlqc&index=1 Facebook AI Similarity Search (FAISS) is one of the most popular implementations of efficient similarity search, but what is it - and how can we use it? What is it that makes FAISS special? How do we make the best use of this incredible tool? Fortunately, it's a brilliantly simple process to get started with. And in this video, we'll explore some of the options FAISS provides, how they work, and - most importantly - how FAISS can make our semantic search faster. 🌲 Pinecone Article: https://www.pinecone.io/learn/faiss-tutorial/ 📊 Data: https://github.com/jamescalam/data/tree/main/sentence_embeddings_15K Notebook: https://gist.github.com/jamescalam/7117aa92235a7f52141ad0654795aa48 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord https://discord.gg/c5QtDB9RAP Mining Massive Datasets Book (Similarity Search): 📚 https://amzn.to/3CC0zrc (3rd ed) 📚 https://amzn.to/3AtHSnV (1st ed, cheaper) 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
354
5
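Getting started really is simple: a flat (exhaustive) index needs no training and only a few lines of code. A sketch with random vectors standing in for the sentence embeddings used in the video:

```python
# Exhaustive (flat) nearest-neighbor search with Faiss (toy vectors).
import numpy as np
import faiss

d = 128                                               # vector dimensionality
xb = np.random.random((10_000, d)).astype("float32")  # vectors to index
xq = np.random.random((5, d)).astype("float32")       # query vectors

index = faiss.IndexFlatL2(d)   # exact L2 search, no training required
index.add(xb)
D, I = index.search(xq, 4)     # distances and IDs of the 4 nearest vectors
print(I)
```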
bWLvGGJLzF8
UCv83tO5cePwHMt1952IVVHw
Why are there so many Tokenization methods in HF Transformers?
2021-07-27 07:12:07 UTC
2021-07-27 14:00:10 UTC
1080 seconds
HuggingFace's transformers library is the de-facto standard for NLP - used by practitioners worldwide, it's powerful, flexible, and easy to use. It achieves this through a fairly large (and complex) code-base, which has resulted in the question: "Why are there so many tokenization methods in HuggingFace transformers?" Tokenization is the process of encoding a string of text into transformer-readable token ID integers. In this video we cover five different methods for this - do these all produce the same output, or is there a difference between them? 📙 Medium article: https://towardsdatascience.com/why-are-there-so-many-tokenization-methods-for-transformers-a340e493b3a8 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 📖 If membership is too expensive - here's a free link: https://towardsdatascience.com/why-are-there-so-many-tokenization-methods-for-transformers-a340e493b3a8?sk=4a7e8c88d331aef9103e153b5b799ff5 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 👾 Discord https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
51
0
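For reference, several of the tokenization routes compared in the video can be reproduced roughly like this (BERT used as an example checkpoint; whether special tokens get added is the main difference to watch for):

```python
# Different HF tokenization routes that (mostly) converge on the same IDs.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text = "hello world"

# 1. Two-step: string -> tokens -> IDs (no [CLS]/[SEP] added).
ids1 = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(text))
# 2. encode: token IDs with [CLS] and [SEP] added.
ids2 = tokenizer.encode(text)
# 3. encode_plus: dict of input_ids, token_type_ids, attention_mask.
enc1 = tokenizer.encode_plus(text)
# 4. __call__: the modern equivalent of encode_plus (handles batches too).
enc2 = tokenizer(text)

print(ids1, ids2, enc1["input_ids"], enc2["input_ids"], sep="\n")
```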
B7wmo_NImgM
UCv83tO5cePwHMt1952IVVHw
Choosing Indexes for Similarity Search (Faiss in Python)
2021-08-09 14:33:47 UTC
2021-08-09 15:04:10 UTC
1893 seconds
Facebook AI Similarity Search (Faiss) is a game-changer in the world of search. It allows us to efficiently search a huge range of media, from GIFs to articles - with incredible accuracy in sub-second timescales for billion+ size datasets. The success of Faiss is due to many reasons. One of those, in particular, is its flexibility. Faiss recognizes that there is no 'one-size-fits-all' in similarity search. Instead, Faiss comes with a wide range of search indexes - which we can mix and match to our choosing. However, this great flexibility produces a question - how do we know which size fits our use case? Which index do we choose? Should we use multiple indexes, or is one enough? This video will explore the pros and cons of some of the most important indexes - Flat, LSH, HNSW, and IVF. We will learn how we decide which to use and the impact of parameters in each index to build some of the best indexes for semantic search. 🌲 Pinecone Article: https://www.pinecone.io/learn/vector-indexes/ 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership Download script for Sift1M dataset: https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf Similarity Search Series: https://www.youtube.com/playlist?list=PLIUOU7oqGTLhlWpTz4NnuT3FekouIVlqc 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 👾 Discord https://discord.gg/c5QtDB9RAP Mining Massive Datasets Book (Similarity Search): 📚 https://amzn.to/3CC0zrc (3rd ed) 📚 https://amzn.to/3AtHSnV (1st ed, cheaper) 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
122
1
e_SBq3s20M8
UCv83tO5cePwHMt1952IVVHw
Locality Sensitive Hashing (LSH) for Search with Shingling + MinHashing (Python)
2021-08-19 16:53:50 UTC
2021-08-20 16:00:16 UTC
1627 seconds
Locality sensitive hashing (LSH) is a widely popular technique used in approximate nearest neighbor (ANN) search. The solution to efficient similarity search is a profitable one - it is at the core of several billion (and even trillion) dollar companies. LSH consists of a variety of different methods. In this video, we'll be covering the traditional approach - which consists of multiple steps - shingling, MinHashing, and the final banded LSH function. 🌲 Pinecone article: https://www.pinecone.io/learn/locality-sensitive-hashing/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff 00:00 Intro 01:21 Overview 05:58 Shingling 08:45 Vocab 09:27 One-hot Encoding 11:10 MinHash 15:51 Signature Info 18:08 LSH 22:20 Tuning LSH
Science & Technology
208
19
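A compact NumPy sketch of the shingling-to-MinHash step: the tiny vocabulary and documents are invented, and real implementations use hash functions rather than explicit permutations, but the estimate-Jaccard-from-signatures idea is the same.

```python
# MinHash signatures over one-hot shingle vectors (NumPy sketch).
import numpy as np

vocab = ["the cat", "cat sat", "sat on", "on the", "the mat", "a dog"]

def one_hot(shingles):
    # 1 where the vocab shingle appears in the document, else 0.
    return np.array([1 if s in shingles else 0 for s in vocab])

a = one_hot({"the cat", "cat sat", "sat on"})
b = one_hot({"the cat", "a dog"})

rng = np.random.default_rng(0)
n_hashes = 20
# The same random permutations ('hash functions') are shared by all documents.
perms = [rng.permutation(len(vocab)) for _ in range(n_hashes)]

def minhash(vec):
    # For each permutation, take the minimum permuted index of a 1-entry.
    return [int(perm[vec == 1].min()) for perm in perms]

sig_a, sig_b = minhash(a), minhash(b)
# Fraction of matching signature positions approximates Jaccard similarity.
print(sum(x == y for x, y in zip(sig_a, sig_b)) / n_hashes)
```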
8bOrMqEdfiQ
UCv83tO5cePwHMt1952IVVHw
How LSH Random Projection works in search (+Python)
2021-08-24 05:09:11 UTC
2021-08-24 16:00:04 UTC
1148 seconds
Locality sensitive hashing (LSH) is a widely popular technique used in approximate similarity search. The solution to efficient similarity search is a profitable one - it is at the core of several billion (and even trillion) dollar companies. The problem with similarity search is scale. Many companies deal with millions-to-billions of data points every single day. Given a billion data points, is it feasible to compare all of them with every search? Further, many companies are not performing single searches - Google deals with more than 3.8 million searches every minute. Billions of data points combined with high-frequency searches are problematic - and we haven't considered the dimensionality nor the similarity function itself. Clearly, an exhaustive search across all data points is unrealistic for larger datasets. The solution to searching impossibly huge datasets? Approximate search. Rather than exhaustively comparing every pair, we approximate - restricting the search scope only to high probability matches. 🌲 Pinecone article: https://www.pinecone.io/learn/locality-sensitive-hashing-random-projection/ Download Sift1M: https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf IndexLSH for Fast Similarity Search in Faiss: https://youtu.be/ZLfdQq_u7Eo 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
66
3
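The random projection idea reduces to checking which side of a set of random hyperplanes a vector falls on; vectors that share most of their bits land in nearby buckets. A toy NumPy sketch (dimensionality and bit count are arbitrary choices):

```python
# LSH via random hyperplane projections (NumPy sketch).
import numpy as np

d, n_bits = 64, 8
rng = np.random.default_rng(0)
planes = rng.normal(size=(n_bits, d))   # n_bits random hyperplanes

def hash_vector(v):
    # Sign of the dot product with each hyperplane -> one bit per plane.
    return "".join("1" if x > 0 else "0" for x in planes @ v)

v1 = rng.normal(size=d)
v2 = v1 + 0.1 * rng.normal(size=d)      # a near-duplicate of v1
v3 = rng.normal(size=d)                 # an unrelated vector

print(hash_vector(v1))  # v1 and v2 should share most (often all) bits
print(hash_vector(v2))
print(hash_vector(v3))
```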
ZLfdQq_u7Eo
UCv83tO5cePwHMt1952IVVHw
IndexLSH for Fast Similarity Search in Faiss
2021-08-24 05:25:21 UTC
2021-08-24 16:00:12 UTC
1119 seconds
Faiss - or Facebook AI Similarity Search - is an open-source framework built for enabling similarity search. Faiss has many super-efficient implementations of different indexes that we can use in similarity search. That long list of indexes includes IndexLSH - an easy-to-use implementation of everything we have covered so far in LSH. 🌲 Pinecone article: https://www.pinecone.io/learn/locality-sensitive-hashing-random-projection/ Download Sift1M: https://gist.github.com/jamescalam/a09a16c17b677f2cf9c019114711f3bf How LSH Random Projection works in search (+Python): https://youtu.be/8bOrMqEdfiQ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
27
0
BMYBwbkbVec
UCv83tO5cePwHMt1952IVVHw
Faiss - Vector Compression with PQ and IVFPQ (in Python)
2021-08-30 14:35:01 UTC
2021-08-30 15:30:04 UTC
1161 seconds
So far we've worked through the logic behind a simple, readable implementation of product quantization (PQ) in Python for semantic search. Realistically we wouldn't use this because it is not optimized and we already have excellent implementations elsewhere. Instead, we would use a library like Faiss (Facebook AI Similarity Search) - or a production-ready service like Pinecone. We'll take a look at how we can build a PQ index in Faiss, and we'll even take a look at combining PQ with an Inverted File (IVF) step to improve search speed. Before we start, we need to get data. We will be using the Sift1M dataset. It can be downloaded and opened using this script: https://gist.github.com/jamescalam/928a374b85daffa49a565f3dc18d059c#file-get_sift1m-ipynb 🌲 Pinecone article: https://www.pinecone.io/learn/product-quantization/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
36
1
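The IVF+PQ build follows the usual Faiss train-add-search pattern; the parameter values below are illustrative rather than tuned, and random vectors stand in for Sift1M.

```python
# IVF+PQ composite index in Faiss (toy data in place of Sift1M).
import numpy as np
import faiss

d = 128
xb = np.random.random((50_000, d)).astype("float32")

nlist, m, nbits = 256, 8, 8       # IVF cells, PQ subvectors, bits per code
quantizer = faiss.IndexFlatL2(d)  # coarse quantizer for the IVF step
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

index.train(xb)                   # both IVF and PQ need a training pass
index.add(xb)
index.nprobe = 8                  # visit 8 of the 256 cells at query time
D, I = index.search(xb[:5], 4)
print(I)
```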
t9mRf2S5vDI
UCv83tO5cePwHMt1952IVVHw
Product Quantization for Vector Similarity Search (+ Python)
2021-08-30 15:22:47 UTC
2021-08-30 15:37:46 UTC
1777 seconds
Vector similarity search can require huge amounts of memory. Indexes containing 1M dense vectors (a small dataset in today's world) will often require several GBs of memory to store. When building recommendation systems or semantic search engines, this is not acceptable. The problem of excessive memory usage is exacerbated by high-dimensional data, and with ever-increasing dataset sizes, this can very quickly become unmanageable. Product quantization (PQ) is a popular method for dramatically compressing high-dimensional vectors to use 97% less memory, and for making nearest-neighbor search speeds 5.5x faster in our tests. A composite IVF+PQ index speeds up the search by another 16.5x without affecting accuracy, for a whopping total speed increase of 92x compared to non-quantized indexes. 🌲 Pinecone article: https://www.pinecone.io/learn/product-quantization/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free AI-Powered Code Refactoring with Sourcery: https://sourcery.ai/?utm_source=YouTub&utm_campaign=JBriggs&utm_medium=aff
Science & Technology
116
2
GEhmmcx1lvM
UCv83tO5cePwHMt1952IVVHw
Composite Indexes and the Faiss Index Factory
2021-09-11 17:27:12 UTC
2021-09-24 12:53:58 UTC
1063 seconds
In the world of vector search, there are many indexing methods and vector processing techniques that allow us to prioritize between recall, latency, and memory usage. Using specific methods such as IVF, PQ, or HNSW, we can often return good results. But for best performance we will usually want to use composite indexes. We can view a composite index as a step-by-step process of vector transformations and one or more indexing methods, allowing us to place multiple indexes and/or processing steps together to create our 'ideal' index. For example, we can use an inverted file (IVF) index to reduce the scope of our search (increasing search speed), and then add a compression technique such as product quantization (PQ) to keep larger indexes within a reasonable size limit. Where there is the ability to customize indexes, there is the risk of producing indexes with unnecessarily poor recall, latency, or memory usage. We must know how composite indexes work if we want to build robust and high-performance vector similarity search applications. It is essential to understand where different indexes or vector transformations can be used - and when they are not needed. Part 2: https://youtu.be/3Wqh4iUupbM 🌲 Pinecone article: https://www.pinecone.io/learn/composite-indexes/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://jamescalam.medium.com/subscribe (it's free!) https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:54 Composite Indexes 06:43 Faiss Index Factory 11:34 Why we use Index Factory 17:11 Outro
Science & Technology
21
0
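The index factory expresses a composite index as a comma-separated recipe string; a minimal sketch (the "IVF256,PQ8" recipe is just one example, not a recommendation):

```python
# Building a composite index with the Faiss index factory.
import numpy as np
import faiss

d = 128
xb = np.random.random((50_000, d)).astype("float32")

# "IVF256,PQ8" = inverted file with 256 cells, then 8-subvector PQ compression.
index = faiss.index_factory(d, "IVF256,PQ8")
index.train(xb)
index.add(xb)

# Pull out the IVF layer to set search-time parameters such as nprobe.
faiss.extract_index_ivf(index).nprobe = 8
D, I = index.search(xb[:5], 4)
print(I)
```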
3Wqh4iUupbM
UCv83tO5cePwHMt1952IVVHw
Best Indexes for Similarity Search in Faiss
2021-09-12 07:02:26 UTC
2021-09-24 12:54:07 UTC
1582 seconds
In the world of vector search, there are many indexing methods and vector processing techniques that allow us to prioritize between recall, latency, and memory usage. Using specific methods such as IVF, PQ, or HNSW, we can often return good results. But for best performance we will usually want to use composite indexes. We can view a composite index as a step-by-step process of vector transformations and one or more indexing methods, allowing us to place multiple indexes and/or processing steps together to create our 'ideal' index. For example, we can use an inverted file (IVF) index to reduce the scope of our search (increasing search speed), and then add a compression technique such as product quantization (PQ) to keep larger indexes within a reasonable size limit. Where there is the ability to customize indexes, there is the risk of producing indexes with unnecessarily poor recall, latency, or memory usage. We must know how composite indexes work if we want to build robust and high-performance vector similarity search applications. It is essential to understand where different indexes or vector transformations can be used - and when they are not needed. Part 1: https://youtu.be/GEhmmcx1lvM 🌲 Pinecone article: https://www.pinecone.io/learn/composite-indexes/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://jamescalam.medium.com/subscribe (it's free!) https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:30 IVFADC 03:30 IVFADC in Faiss 07:29 Multi-D-ADC 09:17 Multi-D-ADC in Faiss 14:43 IVF-HNSW 21:39 IVF-HNSW in Faiss 25:58 Outro
Science & Technology
31
0
cR4qMSIvX28
UCv83tO5cePwHMt1952IVVHw
How to Build a Bert WordPiece Tokenizer in Python and HuggingFace
2021-09-13 20:13:08 UTC
2021-09-14 13:30:06 UTC
1880 seconds
Building a transformer model from scratch can often be the only option for many more specific use cases. Although BERT and other transformer models have been pre-trained for a vast number of languages and domains, they do not cover everything. Often, it is these less common use cases that stand to gain the most from having someone come along and build a specific transformer model. It could be for an uncommon language or less tech-savvy domain. BERT is the most popular transformer for a wide range of language-based machine learning - from sentiment analysis to question answering, BERT has enabled a diverse range of innovation across many borders and industries. The first step for many in designing a new BERT model is the tokenizer. In this article, we'll take a look at the WordPiece tokenizer used by BERT - and see how we can build our own from scratch. 📕 Medium article: https://towardsdatascience.com/how-to-build-a-wordpiece-tokenizer-for-bert-f505d97dddbb 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🕹️ Free Article link (if you don't have Medium membership): https://towardsdatascience.com/how-to-build-a-wordpiece-tokenizer-for-bert-f505d97dddbb?sk=eea06e01c9faecd939e10589e9de1291
Science & Technology
95
1
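Training a WordPiece tokenizer with HuggingFace's tokenizers library is a short script; 'corpus.txt' below is a placeholder path, and the settings are typical BERT-style assumptions rather than the video's exact values.

```python
# Training a BERT-style WordPiece tokenizer from plain-text files.
import os
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(
    clean_text=True, handle_chinese_chars=False,
    strip_accents=False, lowercase=True,
)
# 'corpus.txt' is a placeholder - one plain-text sample per line.
tokenizer.train(files=["corpus.txt"], vocab_size=30_000, min_frequency=2,
                special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"])

os.makedirs("bert-tokenizer", exist_ok=True)
tokenizer.save_model("bert-tokenizer")  # writes vocab.txt for transformers
```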
H_kJDHvu-v8
UCv83tO5cePwHMt1952IVVHw
Metadata Filtering for Vector Search + Latest Filter Tech
2021-09-20 12:23:11 UTC
2021-09-20 14:04:27 UTC
2054 seconds
Vector similarity search makes massive datasets searchable in fractions of a second. Yet despite the brilliance and utility of this technology, often what seem to be the most straightforward problems are the most difficult to solve. Such as filtering. Filtering takes the top place in being seemingly simple - but actually incredibly complex. Applying fast-but-accurate filters when performing a vector search (i.e., nearest-neighbor search) on massive datasets is a surprisingly stubborn problem. This article explains the two common methods for adding filters to vector search, and their serious limitations. Then we will explore Pinecone's solution to filtering in vector search. 📣 Get the API key! https://www.pinecone.io/start/ 🌲 Pinecone article: https://www.pinecone.io/learn/vector-search-filtering/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:24 Vector Search Recap 02:03 Why Filter? 02:56 Metadata Filtering 101 07:48 Pre-filtering 09:37 Post-filtering 11:30 Single-Stage Filtering 12:22 Vectors and Metadata Code 13:58 Connecting to Pinecone 14:55 Building Query Vector 16:47 Querying 21:37 First Filter 24:40 Adding More Conditions 27:03 Filtering with Numbers 30:55 Search Speed and Filtering 33:44 Outro
Science & Technology
20
0
r-zQQ16wTCA
UCv83tO5cePwHMt1952IVVHw
Build NLP Pipelines with HuggingFace Datasets
2021-09-20 14:58:03 UTC
2021-09-23 13:30:07 UTC
2030 seconds
HF Datasets is an essential tool for NLP practitioners - hosting over 1.4K (mostly) high-quality language-focused datasets, and an easy-to-use treasure trove of functions for building efficient pre-processing pipelines. In this article, we will take a look at the massive repository of datasets available, and explore some of the library's brilliant data processing capabilities. 📕 Medium article: https://towardsdatascience.com/build-nlp-pipelines-with-huggingface-datasets-d597ff5f68ad 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 📖 Free Article Access (if you don't have Medium membership!): https://towardsdatascience.com/build-nlp-pipelines-with-huggingface-datasets-d597ff5f68ad?sk=948106e47e64bc3e9e8a1358b0568d48
Science & Technology
53
1
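The core pattern of the library is load_dataset followed by map; a small sketch using the public IMDB dataset as a stand-in:

```python
# Loading a dataset and mapping a tokenizer over it with HF Datasets.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("imdb", split="train")   # any hub dataset works
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True,
                        padding="max_length", max_length=128),
    batched=True,                               # tokenize in batches
)
print(dataset[0].keys())  # now includes input_ids, attention_mask, ...
```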
QvKMwLjdK-s
UCv83tO5cePwHMt1952IVVHw
HNSW for Vector Search Explained and Implemented with Faiss (Python)
2021-09-29 08:13:49 UTC
2021-10-05 13:00:23 UTC
2075 seconds
Hierarchical Navigable Small World (HNSW) graphs are among the top-performing indexes for vector similarity search. HNSW is a hugely popular technology that time and time again produces state-of-the-art performance with super-fast search speeds and flawless recall - HNSW is not to be missed. Despite being a popular and robust algorithm for approximate nearest neighbors (ANN) searches, understanding how it works is far from easy. This video helps demystify HNSW and explains this intelligent algorithm in an easy-to-understand way. Towards the end of the video, we'll look at how to implement HNSW using Faiss and which parameter settings give us the performance we need. 🌲 Pinecone article: https://www.pinecone.io/learn/hnsw/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://jamescalam.medium.com/subscribe (it's free!) https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:41 Foundations of HNSW 08:41 How HNSW Works 16:38 The Basics of HNSW in Faiss 21:40 How Faiss Builds an HNSW Graph 26:49 Building the Best HNSW Index 33:33 Fine-tuning HNSW 34:30 Outro
Science & Technology
131
3
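In Faiss, the HNSW parameters discussed here (M, efConstruction, efSearch) map directly onto the index object; a sketch with random vectors and illustrative, untuned values:

```python
# HNSW index in Faiss with its key build- and query-time parameters.
import numpy as np
import faiss

d, M = 128, 32                     # M = graph neighbors per node
xb = np.random.random((10_000, d)).astype("float32")

index = faiss.IndexHNSWFlat(d, M)
index.hnsw.efConstruction = 64     # build-time search depth
index.add(xb)                      # no training step required
index.hnsw.efSearch = 32           # query-time search depth
D, I = index.search(xb[:5], 4)
print(I)
```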
g_yMowQikOE
UCv83tO5cePwHMt1952IVVHw
Intro to APIs in Python - API Series #1
2021-09-29 12:21:47 UTC
2021-09-29 14:00:18 UTC
1704 seconds
Taking those first steps into interacting with the web using Python can seem daunting - but it need not be. It is a surprisingly simple process, with well-established rules and guidelines. We'll cover the absolute essentials for getting started, including: - Application Program Interfaces (APIs) - Javascript Object Notation (JSON) - Requests with Python - Real world use-cases 📕 Article: https://towardsdatascience.com/quick-fire-guide-to-apis-in-python-891dd98c8877 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Sign-up For New Articles Every Week on Medium! https://jamescalam.medium.com/subscribe (it's free!) https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 📖 Free Access Link (if you don't have Medium membership): https://towardsdatascience.com/quick-fire-guide-to-apis-in-python-891dd98c8877?sk=7c159ba45154db23abcc6a7f9de4f910 Geocoding Docs: https://developers.google.com/maps/documentation/geocoding/cloud-setup GitHub Docs: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token 00:00 Intro 00:20 What is an API? 01:47 RESTful APIs 05:26 API Methods 07:20 HTTP Codes (200s) 08:14 HTTP Codes (400s) 10:00 JSON Format 11:21 Talking to APIs in Python 14:30 Google Geocoding API 22:08 GitHub API 27:48 Outro
Science & Technology
119
0
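The essentials boil down to sending a request and reading the status code and JSON body; a minimal sketch against GitHub's public API:

```python
# A first GET request with the requests library.
import requests

res = requests.get("https://api.github.com/users/octocat")
print(res.status_code)   # 200 on success, 4xx on client errors
data = res.json()        # parse the JSON response body into a dict
print(data["login"], data["public_repos"])
```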
bVZJ_O_-0RE
UCv83tO5cePwHMt1952IVVHw
Intro to Dense Vectors for NLP and Vision
2021-10-04 08:28:38 UTC
2021-10-12 17:47:15 UTC
2629 seconds
There is perhaps no greater component to the success of modern Natural Language Processing (NLP) technology than vector representations of language. The meteoric early 2010s rise of NLP was ignited with the introduction of word2vec by a team led by Tomáš Mikolov in 2013. Word2vec is one of the most iconic and earliest examples of dense vectors representing text. But since the days of word2vec, developments in representing language have advanced at ludicrous speeds. This video will explore *why* we use dense vectors - and some of the best approaches to building dense vectors available today. 🌲 Pinecone article: https://www.pinecone.io/learn/dense-vector-embeddings-nlp/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:50 Why Dense Vectors? 03:55 Word2vec and Representing Meaning 08:40 Sentence Transformers 09:58 Sentence Transformers in Python 15:08 Question-Answering 18:18 DPR in Python 29:55 Vision Transformers 33:22 OpenAI's CLIP in Python 42:49 Review and What's Next
Science & Technology
92
0
MF75aNH3Gjs
UCv83tO5cePwHMt1952IVVHw
API Series #2 - Building an API with Flask in Python
2021-10-05 07:01:25 UTC
2021-10-07 14:52:32 UTC
1902 seconds
Next video - how to deploy to the cloud: https://youtu.be/3fsIcMgUOY8 How can we set up a way to communicate from one software instance to another? It sounds simple, and - to be completely honest - it is. All we need is an API. An API (Application Programming Interface) is a simple interface that defines the types of requests (demands/questions, etc.) that can be made, how they are made, and how they are processed. In our case, we will be building an API that allows us to send a range of GET/POST/PUT/PATCH/DELETE requests (more on this later), to different endpoints, and return or modify data connected to our API. We will be using the Flask framework to create our API and Insomnia to test it. 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🕹️ Medium article: https://towardsdatascience.com/the-right-way-to-build-an-api-with-python-cd08ab285f8f 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP Free article link: https://towardsdatascience.com/the-right-way-to-build-an-api-with-python-cd08ab285f8f?sk=6e2dda4c8b6012767114e12ff34b1464 Download Insomnia: https://insomnia.rest/download
Science & Technology
117
2
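A minimal Flask endpoint handling GET and POST looks roughly like this; the route and toy in-memory data are invented for illustration, not the video's exact app:

```python
# Minimal Flask API sketch with one GET/POST endpoint.
from flask import Flask, request, jsonify

app = Flask(__name__)
users = {"alexander": {"age": 2396}}       # toy in-memory 'database'

@app.route("/users/<name>", methods=["GET", "POST"])
def user(name):
    if request.method == "POST":
        users[name] = request.get_json()   # create or replace the record
    return jsonify(users.get(name, {}))

if __name__ == "__main__":
    app.run(debug=True)  # test with Insomnia, curl, or requests
```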
WS1uVMGhlWQ
UCv83tO5cePwHMt1952IVVHw
Intro to Sentence Embeddings with Transformers
2021-10-19 09:44:58 UTC
2021-10-20 17:06:20 UTC
1866 seconds
Transformers have wholly rebuilt the landscape of natural language processing (NLP). Before transformers, we had okay translation and language classification thanks to recurrent neural nets (RNNs) - their language comprehension was limited and led to many minor mistakes, and coherence over larger chunks of text was practically impossible. Since the introduction of the first transformer model in the 2017 paper 'Attention is all you need', NLP has moved from RNNs to models like BERT and GPT. These new models can answer questions, write articles (maybe GPT-3 wrote this), enable incredibly intuitive semantic search - and much more. In this video, we will explore how these embeddings have been adapted and applied to a range of semantic similarity applications by using a new breed of transformers called 'sentence transformers'. 🌲 Pinecone article: https://www.pinecone.io/learn/sentence-embeddings/ Vectors in ML: https://www.youtube.com/playlist?list=PLIUOU7oqGTLgz-BI8bNMVGwQxIMuQddJO 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP
Science & Technology
188
1
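Producing and comparing sentence embeddings takes only a few lines; a sketch assuming the sentence-transformers library and a small public checkpoint:

```python
# Encoding sentences and comparing them with cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode([
    "it caught him off guard that space smelled of seared steak",
    "the dog barked at the passing mailman",
])  # one dense vector per sentence

cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(cos)  # higher = more semantically similar
```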
aSx0jg9ZILo
UCv83tO5cePwHMt1952IVVHw
Fine-tune Sentence Transformers the OG Way (with NLI Softmax loss)
2021-10-22 14:16:49 UTC
2021-10-22 14:39:46 UTC
2223 seconds
Sentence embeddings with transformers can be used across a range of applications, such as semantic textual similarity (STS), semantic clustering, or information retrieval (IR) using concepts rather than words. This video dives deeper into the training process of the first sentence transformer, sentence-BERT, more commonly known as SBERT. We will explore the Natural Language Inference (NLI) training approach of softmax loss to fine-tune models for producing sentence embeddings. Be aware that softmax loss is no longer the preferred approach to training sentence transformers and has been superseded by other methods such as MSE margin and multiple negatives ranking loss. But we're covering this training method as an important milestone in the development of ever-improving sentence embeddings. 🌲 Pinecone article: https://www.pinecone.io/learn/train-sentence-transformers-softmax/ Check out the Sentence Transformers library: https://github.com/UKPLab/sentence-transformers Talk by Nils Reimers (one of the SBERT creators) on training: https://www.youtube.com/watch?v=RHXZKUr8qOY He does more NLP vids too: https://www.youtube.com/channel/UC1zCuTrfpjT6Sv2kJk-JkvA 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:42 NLI Fine-tuning 01:44 Softmax Loss Training Overview 05:47 Preprocessing NLI Data 12:48 PyTorch Process 19:48 Using Sentence-Transformers 30:45 Results 35:49 Outro
Science & Technology
83
0
or5ew7dqA-c
UCv83tO5cePwHMt1952IVVHw
Fine-tune High Performance Sentence Transformers (with Multiple Negatives Ranking)
2021-10-25 20:18:30 UTC
2021-10-26 13:00:22 UTC
2213 seconds
Transformer-produced sentence embeddings have come a long way in a very short time. Starting with the slow but accurate similarity prediction of BERT cross-encoders, the world of sentence embeddings was ignited with the introduction of SBERT in 2019. Since then, many more sentence transformers have been introduced. These models quickly made the original SBERT obsolete. How did these newer sentence transformers manage to outperform SBERT so quickly? The answer is multiple negatives ranking (MNR) loss. This video will cover what MNR loss is, the data it requires, and how to implement it to fine-tune our own high-quality sentence transformers. Implementation will cover two approaches. The first is more involved, and outlines the exact steps to fine-tune the model (we'll just run over it quickly). The second approach makes use of the sentence-transformers library's excellent utilities for fine-tuning. 🌲 Pinecone article: https://www.pinecone.io/learn/fine-tune-sentence-transformers-mnr/ Check out the Sentence Transformers library: https://github.com/UKPLab/sentence-transformers Talk by Nils Reimers (one of the SBERT creators) on training: https://www.youtube.com/watch?v=RHXZKUr8qOY He does more NLP vids too: https://www.youtube.com/channel/UC1zCuTrfpjT6Sv2kJk-JkvA 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:02 NLI Training Data 02:56 Preprocessing 10:11 SBERT Finetuning Visuals 14:14 MNR Loss Visual 16:37 MNR in PyTorch 23:04 MNR in Sentence Transformers 34:20 Results 36:14 Outro
Science & Technology
86
0
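The 'easy' route with the sentence-transformers utilities looks roughly like this; the two training pairs are toy stand-ins for (anchor, positive) data, since with MNR loss the other pairs in a batch act as the negatives:

```python
# Fine-tuning with multiple negatives ranking (MNR) loss (sketch).
from torch.utils.data import DataLoader
from sentence_transformers import (SentenceTransformer, InputExample,
                                   models, losses)

bert = models.Transformer("bert-base-uncased")
pooling = models.Pooling(bert.get_word_embedding_dimension(),
                         pooling_mode="mean")
model = SentenceTransformer(modules=[bert, pooling])

train = [  # toy (anchor, positive) pairs - real data would be NLI-scale
    InputExample(texts=["how do planes fly",
                        "lift from the wings keeps planes in the air"]),
    InputExample(texts=["what is faiss",
                        "faiss is a library for efficient similarity search"]),
]
loader = DataLoader(train, shuffle=True, batch_size=2)
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=0)
```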
iCkftKsnQgg
UCv83tO5cePwHMt1952IVVHw
Hybrid Search Walkthrough in Pinecone
2021-10-29 01:44:06 UTC
2021-10-29 15:05:00 UTC
1040 seconds
Pinecone offers a production-ready vector database for high performance and reliable *semantic search* at scale. But did you know Pinecone's semantic search can be paired with the more traditional keyword search? Semantic search is a compelling technology allowing us to search using abstract concepts and *meaning* rather than relying on specific words. However, sometimes a simple keyword search can be just as valuable - especially if we know the exact wording of what we're searching for. In this video, we will explore these features through a start-to-finish example of basic keyword search in Pinecone. 🌲 Check the docs: https://www.pinecone.io/docs/examples/basic-hybrid-search/ 🔑 Free API key: https://app.pinecone.io 00:52 How Hybrid Search Works 01:25 Preprocessing 03:01 Creating Keywords 05:34 Creating an Index 06:50 Data Upsert 08:33 Query Setup 10:52 Keyword Search 12:31 OR Logic 14:49 AND Logic 15:10 Negation 17:04 Outro
Science & Technology
17
1
3fsIcMgUOY8
UCv83tO5cePwHMt1952IVVHw
API Series #3 - How to Deploy Flask APIs to the Cloud (GCP)
2021-11-01 23:16:31 UTC
2021-11-02 14:30:00 UTC
806 seconds
Building that first API is, for many of us, a significant step towards creating impactful tools that may one day be used by many developers. But often those APIs don't make it out of our local machines. Fortunately, it's incredibly easy to deploy APIs. Assuming you have no idea what you're doing right now - you will probably be deploying your first API in around ten minutes. I'm not joking, it's super easy. Let's get started. 📕 Article: https://towardsdatascience.com/how-to-deploy-a-flask-api-8d54dd8d8b8a 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 📖 Free article link: TO ADD
Science & Technology
75
2
NNS5pOpjvAQ
UCv83tO5cePwHMt1952IVVHw
All You Need to Know on Multilingual Sentence Vectors (1 Model, 50+ Languages)
2021-11-04 11:27:18 UTC
2021-11-04 13:00:10 UTC
2392 seconds
We've learned about how sentence transformers can be used to create high-quality vector representations of text. We can then use these vectors to find similar vectors, which can be used for many applications such as semantic search or topic modeling. These models are very good at producing meaningful, information-dense vectors. But they don't allow us to compare sentences across different languages. Often this may not be a problem. However, the world is becoming increasingly interconnected, and many companies span multiple borders and languages. Naturally, there is a need for sentence vectors that are language agnostic. Unfortunately, very few textual similarity datasets span multiple languages, particularly for less common languages. And the standard training methods used for sentence transformers would require these types of datasets. Different approaches need to be used. Fortunately, some techniques allow us to extend models to other languages using more easily obtained language translations. In this video, we will cover how multilingual models work and are built. We'll learn how to develop our own multilingual sentence transformers, the datasets to look for, and how to use high-performing pretrained multilingual models. 🌲 Pinecone article: https://www.pinecone.io/learn/multilingual-transformers/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:19 Multilingual Vectors 05:55 Multi-task Training (mUSE) 09:36 Multilingual Knowledge Distillation 11:13 Knowledge Distillation Training 13:43 Visual Walkthrough 14:53 Parallel Data Prep 20:23 Choosing a Student Model 24:55 Initializing the Models 30:05 ParallelSentencesDataset 33:54 Loss and Fine-tuning 36:59 Model Evaluation 39:23 Outro
Science & Technology
30
0
-td57YvJdHc
UCv83tO5cePwHMt1952IVVHw
Question-Answering in NLP (Extractive QA and Abstractive QA)
2021-11-13 19:09:02 UTC
2021-11-16 12:06:13 UTC
2886 seconds
Search is a crucial functionality in many applications and companies globally. Whether in manufacturing, finance, healthcare, or *almost* any other industry, organizations have vast internal information and document repositories. Unfortunately, the scale of many companies' data means that the organization and accessibility of information can become incredibly inefficient. The problem is exacerbated for language-based information. Language is a tool for people to communicate often abstract ideas and concepts. Naturally, ideas and concepts are harder for a computer to comprehend and store in a meaningful way. How do we minimize this problem? The answer lies with *semantic search*, specifically with the question-answering (QA) flavor of semantic search. This article will introduce the different forms of QA, the components of these 'QA stacks', and where we might use them. 🌲 Pinecone article: https://www.pinecone.io/learn/question-answering/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Meaningful Search 01:23 Use-case 02:22 Open Domain QA (ODQA) 06:41 SQuAD Format 10:45 Quick Preprocessing 15:18 Creating Context Vectors Database 23:24 Open-book Extractive QA 32:50 Open-book Abstractive QA 41:53 Closed-book Abstractive QA 47:27 Final Thoughts
Science & Technology
72
0
pNvujJ1XyeQ
UCv83tO5cePwHMt1952IVVHw
Today Unsupervised Sentence Transformers, Tomorrow Skynet (how TSDAE works)
2021-11-24 14:20:20 UTC
2021-11-24 16:24:24 UTC
2661 seconds
To adapt a pretrained transformer to produce meaningful sentence vectors, we typically need a more supervised fine-tuning approach. We can use datasets like natural language inference (NLI) pairs, labeled semantic textual similarity (STS) data, or parallel data (pairs of translations). For some domains and languages, such as finance and English, this data is fairly easy to find or gather. But many domains and many languages have very little labeled data. If you can find semantic similarity pairs for the agriculture industry, please let me know. There are many languages, such as Dhivehi, where unlabeled data is hard to find and labeled data practically non-existent. This means you either spend a very long time gathering tens of thousands of labeled samples or you can try an unsupervised fine-tuning approach. Unsupervised training methods for sentence transformers are not as effective as their supervised counterparts, but they do work. And if you have no other choice, why not? In this video, we will introduce the concept of unsupervised fine-tuning for sentence transformers. We will learn to train these models using the unsupervised Transformer-based Sequential Denoising Auto-Encoder (TSDAE) approach. 🌲 Pinecone article: https://www.pinecone.io/learn/unsupervised-training-sentence-transformers/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Why Language Embedding Matters 05:12 Supervised Methods 05:29 Natural Language Inference 07:15 Semantic Textual Similarity 07:43 Multilingual Training 10:00 TSDAE (Unsupervised) 18:50 Data Preparation 29:05 Initialize Model 32:39 Model Training 36:25 NLTK Error 37:15 Evaluation 41:01 TSDAE vs Supervised Methods 42:42 Why TSDAE is Cool
Science & Technology
70
0
3IPCEeh4xTg
UCv83tO5cePwHMt1952IVVHw
Making The Most of Data: Augmented SBERT
2021-12-16 15:46:03 UTC
2021-12-17 14:24:40 UTC
3310 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp ML models are data-hungry. They consume massive amounts of data to identify generalized patterns and apply those learned patterns to new data. As models get bigger, so do datasets. And although we have seen an explosion of data in the past decade, it is often not accessible or in an ML-friendly format, especially in niche domains. For many niche, low-resource domains, finding or annotating a substantial dataset manually is practically impossible. Fortunately, we don't need to label (or even find) this new data. Instead, we can automatically generate or label data using one or more *data augmentation* techniques. In this video, we will introduce data augmentation and its application to the field of NLP. We will focus on the 'in-domain' flavor of a particular data-augmentation strategy named augmented SBERT (AugSBERT). 🌲 Pinecone article: https://www.pinecone.io/learn/data-augmentation/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP
Science & Technology
42
0
mjKqP3kRxbQ
UCv83tO5cePwHMt1952IVVHw
Building Transformer Tokenizers (Dhivehi NLP #1)
2021-12-28 15:02:22 UTC
2021-12-28 15:45:03 UTC
1982 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp Get in touch with Ashraq: https://www.linkedin.com/in/ismailashraq/ The language of Dhivehi (or Maldivian) is fascinating. It uses a complex writing system known as Thaana, and I absolutely cannot comprehend any of it. It is so wildly different from anything I know - but, like the archipelago, it looks wonderful. Ashraq described the difficulty of applying NLP to his native tongue of Dhivehi. There are several reasons for this, which we will explore in this video, and learn how to build an effective Dhivehi WordPiece tokenizer. 📕 Article: https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Article Friend Link (Free Access): https://towardsdatascience.com/designing-tokenizers-for-low-resource-languages-7faa4ab30ef4?sk=c0c16de9eea7dbe1d2a9c106abf38e1a 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:06 Dhivehi Project 02:28 Hurdles for Low Resource Domains 04:21 Dhivehi Dataset 04:52 Download Dhivehi Corpus 08:25 Tokenizer Components 08:44 Normalizer Component 11:55 Pre-tokenization Component 14:59 Post-tokenization Component 16:26 Decoder Component 17:41 Tokenizer Implementation 21:04 Tokenizer Training 24:22 Post-processing Implementation 27:12 Decoder Implementation 28:07 Saving for Transformers 30:33 Tokenizer Test and Usage 31:36 Download Dhivehi Models 32:21 First Steps
Science & Technology
49
0
a8jyue22SJM
UCv83tO5cePwHMt1952IVVHw
AugSBERT: Domain Transfer for Sentence Transformers
2022-01-04 05:14:16 UTC
2022-01-04 14:59:50 UTC
1750 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp When building language models, we can spend months optimizing training and model parameters, but it's useless if we don't have the correct data. The success of our language models relies first and foremost on data. The augmented SBERT training strategy can help us. Given this scenario, we can transfer information from an out-of-domain (or *source*) dataset to our target domain. We will learn how to do this here. First, we will learn how to quickly assess which source datasets align best with our target domain. Then we will explain and work through the AugSBERT domain-transfer training strategy. 🌲 Pinecone article: https://www.pinecone.io/learn/augsbert-domain-transfer/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 🔗 n-gram Similarity Script: https://gist.github.com/jamescalam/b73f37017ae32bd6094747c4b0fca94a 🔗 AugSBERT In-Domain Article: https://www.pinecone.io/learn/data-augmentation/ 00:00 Why Use Domain Transfer 04:08 Strategy Outline 06:05 Train Source Cross-Encoder 12:44 Cross-Encoder Outcome 15:12 Labeling Target Data 20:31 Training Bi-encoder 23:58 Evaluator Bi-encoder Performance 28:08 Final Points
Science & Technology
41
0
w1dMEWm7jBc
UCv83tO5cePwHMt1952IVVHw
How to build a Q&A AI in Python (Open-domain Question-Answering)
2022-01-10 07:19:13 UTC
2022-01-11 14:00:20 UTC
2364 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp How can we design these natural, human-like Q&A interfaces? The answer is open-domain question-answering (ODQA). ODQA allows us to use natural language to query a database. That means that, given a dataset like a set of internal company documents, online documentation, or as is the case with Google, everything on the world's internet, we can retrieve relevant information in a natural, more human way. 🌲 Pinecone article: https://www.pinecone.io/learn/retriever-models/ 🔗 Nils YT Talk: https://youtu.be/XNJThigyvos?t=118 🔗 MNR Loss Article: 🔗 Free Pinecone API Key: https://app.pinecone.io/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Why QA 04:05 Open Domain QA 08:24 Do we need to fine-tune? 11:44 How Retriever Training Works 12:59 SQuAD Training Data 16:29 Retriever Fine-tuning 19:32 IR Evaluation 25:58 Vector Database Setup 33:42 Querying 37:41 Final Notes
Science & Technology
66
1
-fzCSPsfMic
UCv83tO5cePwHMt1952IVVHw
How to build a Q&A Reader Model in Python (Open-domain QA)
2022-01-18 12:17:09 UTC
2022-01-18 16:37:37 UTC
1504 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp Open-domain question-answering (ODQA) is a wildly popular *pipeline* of databases and language models that allows us to ask a machine human-like questions and return comprehensible and even intelligent answers. Despite the outward guise of simplicity, ODQA requires a reasonably advanced set of components placed together to enable the *extractive* Q&A functionality. We call this *extractive* Q&A because the models are not generating an answer. Instead, the answer already exists but is hidden somewhere within potentially thousands, millions, or even more data sources. By enabling extractive Q&A, we enable a more *intelligent* and *efficient* way to retrieve information from what can be massive stores of data. 🌲 Pinecone article: https://www.pinecone.io/learn/reader-models/ 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:13 ODQA Components 03:09 Data Preprocessing 22:35 Fine-tuning
Science & Technology
26
0
JLKUV-LiXjk
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #1 - Installation and API
2022-01-25 12:04:00 UTC
2022-01-25 16:00:09 UTC
735 seconds
▶️ Streamlit for ML Part 2: https://www.youtube.com/watch?v=U0EoaFFGyTg&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=2 Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations. All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need! In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app. 📕 Article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Friend link to article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:39 App Outline 03:36 Streamlit Installation 06:15 Streamlit API Basics
Science & Technology
32
0
U0EoaFFGyTg
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #2 - ML Models and APIs
2022-01-26 16:07:51 UTC
2022-01-26 16:30:36 UTC
911 seconds
▶️ Streamlit for ML Part 3: https://www.youtube.com/watch?v=lYDiSCDcxmc&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=3 Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations. All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need! In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app. 🔗 Code to Create Index: https://gist.github.com/jamescalam/2123ce0bb8a871f48a151a023a7ece67 📕 Article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Friend link to article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:47 Creating the Vector DB 08:56 Implementing Retrieval
Science & Technology
19
0
lYDiSCDcxmc
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #3 - Make Apps Fast with Caching
2022-01-27 13:13:14 UTC
2022-01-27 15:00:36 UTC
584 seconds
▶️ Streamlit for ML Part 4: https://www.youtube.com/watch?v=XdxeKiY2UXg&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=4 Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations. All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need! In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app. ▶️ Streamlit for ML Playlist: https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1 📕 Article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Friend link to article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:35 Streamlit Caching 06:56 Experimental Caching Primitives
Science & Technology
24
0
XdxeKiY2UXg
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #4 - Adding Bootstrap Components
2022-01-28 10:05:43 UTC
2022-01-28 15:11:42 UTC
590 seconds
▶️ Streamlit for ML Part 5.1: https://www.youtube.com/watch?v=SGazDb8o-to&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=5 Streamlit has proven itself as an incredibly popular tool for quickly putting together high-quality ML-oriented web apps. More recently, it has seen wider adoption in production environments by ever-larger organizations. All of this means that there is no better time to pick up some experience with Streamlit. Fortunately, the basics of Streamlit are incredibly easy to learn, and for most tools, this will be more than you need! In this series, we will introduce Streamlit by building a general knowledge Q&A interface. We will learn about key Streamlit components like write, text_input, and container; how to use external libraries like Bootstrap to quickly create new app components; and how to use caching to speed up our app. ▶️ Streamlit for ML Playlist: https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1 📕 Article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Friend link to article: https://towardsdatascience.com/getting-started-with-streamlit-for-nlp-75fe463821ec?sk=ac5e0b7c39938f52162862411a66a58b 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:35 Streamlit Caching 06:56 Experimental Caching Primitives
Science & Technology
38
1
JydpRavoJqI
UCv83tO5cePwHMt1952IVVHw
Adding New Doc Stores to Haystack
2022-02-15 04:56:36 UTC
2022-03-15 15:00:14 UTC
1825 seconds
🥳 Released with Haystack v1.3! Install direct from PyPI with: pip install 'farm-haystack[pinecone]' PR: https://github.com/deepset-ai/haystack/pull/2254 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:15 Contributing or Testing 03:31 ODQA 06:20 What is Haystack? 08:13 Haystack QA Workflow 14:52 Contributing to Open Source 22:54 Haystack Doc Stores 26:09 Doc Store Core Methods 29:31 Final Notes, Contribute/Test
Science & Technology
14
0
SGazDb8o-to
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #5.1 - Custom React Components in Streamlit Setup
2022-02-17 15:24:47 UTC
2022-02-17 15:45:58 UTC
1158 seconds
▶️ Streamlit for ML Part 5.2: https://www.youtube.com/watch?v=mxm8ihWoVbk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=6 There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components. If you're still stuck, and there is just no component that covers what you need, we can build our own custom components. To do this we do need to start playing with the lower-level web technologies that Streamlit itself is built upon. So it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component. In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements. ▶️ Streamlit for ML Playlist: https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1 📕 Article: Coming soon 🤖 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 🎉 Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership 📖 Friend link to article: Coming soon 👾 Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:19 Environment Setup 03:42 Starting with a Template 07:41 Naming for Card Component 11:31 Installing Node Packages 15:12 Running the Component
Science & Technology
26
1
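The Python side of the setup described above boils down to declaring the component against the template's dev server. A minimal sketch, assuming the React dev server from the official component template is running on its default port 3001; the component name and arguments are illustrative:

```python
import streamlit.components.v1 as components

# during development, point at the dev server; for a release build, use path=
card_component = components.declare_component(
    "card_component",
    url="http://localhost:3001",
)

# invoke like any Streamlit element; kwargs are passed through to the React side
clicked = card_component(title="Hello", text="A custom card component")
```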
mxm8ihWoVbk
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #5.2 - MUI Card Component Build
2022-02-20 15:25:56 UTC
2022-02-21 14:00:31 UTC
1619 seconds
▢️ Streamlit for ML Part 5.3: https://www.youtube.com/watch?v=lZ2EaPUnV7k&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=7 There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components. If you're still stuck and there is just no component that covers what you need, you can build your own custom components. To do this, you need to start playing with the lower-level web technologies that Streamlit itself is built upon, so it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component. In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements. ▢️ Streamlit for ML Playlist: https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1 πŸ“• Article: Coming soon πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ“– Friend link to article: Coming soon πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:59 Clearing Card Component 04:59 Building the Component 14:22 Pulling in MUI Code 24:08 Adding Roboto Font 26:05 Final Points
Science & Technology
16
1
lZ2EaPUnV7k
UCv83tO5cePwHMt1952IVVHw
Streamlit for ML #5.3 - Publishing Components to Pip
2022-02-27 16:28:49 UTC
2022-02-28 17:00:29 UTC
858 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp There are plenty of prebuilt components designed by Streamlit themselves, and if you can't find what you need, there are even community-built components. If you're still stuck and there is just no component that covers what you need, you can build your own custom components. To do this, you need to start playing with the lower-level web technologies that Streamlit itself is built upon, so it isn't as simple as using a prebuilt component. However, thanks to pre-made templates, it isn't too hard to create a new component. In this sub-series, we'll learn exactly how to create custom components. We'll focus on designing an interactive card component using Material UI design elements. ❗ Python Packaging Video: https://youtu.be/JkeNVaiUq_c ▢️ Streamlit for ML Playlist: https://www.youtube.com/watch?v=JLKUV-LiXjk&list=PLIUOU7oqGTLg5ssYxPGWaci6695wtosGw&index=1 πŸ“• Article: Coming soon πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ“– Friend link to article: Coming soon πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:09 PyPI 02:41 Preparing for Distribution 05:43 Build React Component 06:39 Create Python Package 11:57 Pip Install 13:58 Ending
Science & Technology
10
0
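The packaging step described above roughly amounts to a small setup.py that bundles the compiled React assets into the Python package. A minimal sketch under the template layout, where the build output sits inside the package directory; the package name and version are placeholders:

```python
import setuptools

setuptools.setup(
    name="st-card-component",          # placeholder package name
    version="0.0.1",
    packages=setuptools.find_packages(),
    include_package_data=True,         # ship the compiled frontend (listed via MANIFEST.in)
    install_requires=["streamlit>=1.0"],
)
```

After building a wheel, it can be uploaded with twine and pip-installed like any other package.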
J0cntjLKpmU
UCv83tO5cePwHMt1952IVVHw
Train Sentence Transformers by Generating Queries (GenQ)
2022-03-08 03:10:28 UTC
2022-03-08 14:52:23 UTC
1634 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp Fine-tuning effective dense retrieval models is challenging. Bi-encoders (sentence transformers) are the current best models for dense retrieval in semantic search. Unfortunately, they're also notoriously data-hungry models that typically require a particular type of labeled training data. Hard problems like this attract attention, and as expected there is plenty of research into ever-better techniques for training retrievers. One of the most impressive is GenQ. This approach to building bi-encoder retrievers uses the latest text generation techniques to synthetically generate training data. In short, all we need are passages of text. The generation model then augments these passages with synthetic queries, giving us the exact format we need to train an effective bi-encoder model. 🌲 Pinecone article: https://www.pinecone.io/learn/genq/ πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:32 Why GenQ? 02:23 GenQ Overview 04:28 Training Data 06:48 Asymmetric Semantic Search 07:54 T5 Query Generation 13:52 Finetuning Bi-encoders 16:02 GenQ Code Walkthrough 21:40 Finetuning Bi-encoder Walkthrough 26:48 Final Points
Science & Technology
39
0
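A minimal sketch of the T5 query-generation step at the heart of GenQ, using the public BeIR checkpoint (an assumption; any suitable query-generation model works):

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

name = "BeIR/query-gen-msmarco-t5-base-v1"
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

passage = "Bi-encoders map queries and passages into the same vector space."
inputs = tokenizer(passage, return_tensors="pt")
outputs = model.generate(
    **inputs, max_length=64, do_sample=True, top_p=0.95, num_return_sequences=3
)
for ids in outputs:
    # each decoded query pairs with `passage` as a synthetic training example
    print(tokenizer.decode(ids, skip_special_tokens=True))
```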
Dn8OYkatiU0
UCv83tO5cePwHMt1952IVVHw
Testing the New Haystack Doc Store
2022-03-22 17:15:10 UTC
2022-03-22 19:26:00 UTC
1399 seconds
πŸ₯³ Released with Haystack v1.3! Install direct from PyPI with: pip install 'farm-haystack[pinecone]' PR: https://github.com/deepset-ai/haystack/pull/2254 πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:19 Demo Start and Install 03:25 Initialization 06:30 Download and Write Documents 10:55 Extractive QA Pipeline 11:23 Fetch by ID 19:01 Metadata Filtering 22:24 Get All Documents
Science & Technology
5
0
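A minimal sketch of the operations demoed above (write embeddings, get all documents, fetch by ID), assuming the store was created as in the earlier entry; the retriever model is an assumption, and its 384 dimensions must match the store:

```python
from haystack.document_stores import PineconeDocumentStore
from haystack.nodes import EmbeddingRetriever

document_store = PineconeDocumentStore(
    api_key="YOUR_PINECONE_API_KEY",  # placeholder
    index="haystack-demo",
    embedding_dim=384,                # matches the MiniLM retriever below
)
retriever = EmbeddingRetriever(
    document_store=document_store,
    embedding_model="sentence-transformers/multi-qa-MiniLM-L6-cos-v1",
    model_format="sentence_transformers",
)
document_store.update_embeddings(retriever)

docs = document_store.get_all_documents()                    # get all documents
same = document_store.get_documents_by_id(ids=[docs[0].id])  # fetch by ID
```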
uEbCXwInnPs
UCv83tO5cePwHMt1952IVVHw
Is GPL the Future of Sentence Transformers? | Generative Pseudo-Labeling Deep Dive
2022-03-29 10:46:39 UTC
2022-03-30 12:52:39 UTC
3175 seconds
🎁 Free NLP for Semantic Search Course: https://www.pinecone.io/learn/nlp Training sentence transformers is hard; they need vast amounts of labeled data. On one hand, the internet is full of data, and, on the other, this data is *not* in the format we need. We usually need to use a supervised training method to train a high-performance bi-encoder (sentence transformer) model. Research keeps producing techniques that bring us ever closer to fine-tuning high-performance bi-encoder models with unlabeled text data. One of the most promising is GPL. At its core, GPL allows us to take unstructured text data and use it to build models that can understand this text. These models can then intelligently respond to natural language queries regarding this same text data. It is a fascinating approach, with massive potential across innumerable use cases spanning all industries and borders. With that in mind, let's dive into the details of GPL and how we can implement it to build high-performance LMs with nothing more than plain text. 🌲 Pinecone article: https://www.pinecone.io/learn/gpl/ πŸ”— Notebooks: https://github.com/pinecone-io/examples/tree/master/learn/nlp_course/gpl πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:08 Semantic Web and Other Uses 04:36 Why GPL? 07:31 How GPL Works 10:37 Query Generation 12:08 CORD-19 Dataset and Download 13:27 Query Generation Code 21:53 Query Generation is Not Perfect 22:39 Negative Mining 26:28 Negative Mining Implementation 27:21 Negative Mining Code 35:19 Pseudo-Labeling 35:55 Pseudo-Labeling Code 37:01 Importance of Pseudo-Labeling 41:20 Margin MSE Loss 43:40 MarginMSE Fine-tune Code 46:30 Choosing Number of Steps 48:54 Fast Evaluation 51:43 What's Next for Sentence Transformers?
Science & Technology
76
2
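A minimal sketch of GPL's pseudo-labeling step: a cross-encoder teacher scores (query, positive) and (query, negative) pairs, and the score margin becomes the target for MarginMSE loss. The teacher checkpoint and toy triplet are assumptions:

```python
from sentence_transformers import CrossEncoder

teacher = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "what is generative pseudo-labeling"
positive = "GPL generates queries for passages and labels pairs with a cross-encoder."
negative = "The weather in London is often rainy."

scores = teacher.predict([(query, positive), (query, negative)])
margin = scores[0] - scores[1]  # pseudo-label for this (query, pos, neg) triplet
print(margin)
```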
j3psNM5y-eA
UCv83tO5cePwHMt1952IVVHw
Implementing Filters in the New Haystack Doc Store
2022-04-06 15:53:46 UTC
2022-04-06 16:26:54 UTC
1695 seconds
πŸ₯³ Released with Haystack v1.3! Install direct from PyPI with: pip install 'farm-haystack[pinecone]' Join me as I work through the final few PR issues on the latest Haystack document store, and figure out how Haystack's filter_utils work. PR: https://github.com/deepset-ai/haystack/pull/2254 πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:41 Filtering 05:36 Testing Existing Filter Utils 07:57 Making Sense of Filter Utils 10:35 Writing the First Filter 16:26 First Working Filter 18:24 Testing New Filters 21:27 Implementing in the Doc Store 24:02 Testing Pipeline Filters 27:11 Final Issue and Outro
Science & Technology
3
0
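For reference, a minimal sketch of the Haystack filter syntax that the filter_utils work above implements for the Pinecone store; it assumes a store initialized as in the earlier entries, with "category" and "year" fields in the document metadata:

```python
from haystack.document_stores import PineconeDocumentStore

document_store = PineconeDocumentStore(
    api_key="YOUR_PINECONE_API_KEY",  # placeholder
    index="haystack-demo",
)

# simple equality filter: keep documents whose category is "news"
docs = document_store.get_all_documents(filters={"category": ["news"]})

# extended syntax with logical and comparison operators
docs = document_store.get_all_documents(
    filters={"$and": {"category": {"$eq": "news"}, "year": {"$gte": 2020}}}
)
```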
ok0SDdXdat8
UCv83tO5cePwHMt1952IVVHw
Spotify's Podcast Search Explained
2022-04-13 15:02:31 UTC
2022-04-14 13:14:50 UTC
2998 seconds
The market for podcasts has grown tremendously in recent years. Driving the charge in podcast adoption is Spotify. In a few short years, they have become the undisputed leaders in podcasting. Despite only entering the game in 2018, by late 2021, Spotify had already usurped Apple, the long-reigning leader in podcasts, with more than 28M monthly podcast listeners. To back their podcast investments, Spotify has worked on making the podcast experience as seamless and accessible as possible, from their all-in-one podcast creation app (Anchor) to podcast APIs and their latest natural-language-enabled podcast search. Spotify’s natural language search for podcasts is a fascinating use case. In the past, users had to rely on keyword/term matching to find the podcast episodes they wanted. Now, they can search in natural language, in much the same way we might ask a real person where to find something. In this video, we will take a look under the hood of Spotify's podcast search, and learn how to implement a similar system ourselves. 🌲 Pinecone article: https://www.pinecone.io/learn/spotify-podcast-search πŸ”— Code and tests: https://github.com/pinecone-io/examples/tree/spotify-podcast-search/learn/search-in-wild/spotify-podcast-search πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 04:16 NLP in Semantic Search 08:35 Why Now? 09:29 Transformer Models 11:52 Sentence Transformers 13:12 Vector Search 15:56 How Spotify Built Podcast Search 17:35 Data Source, Fine-tuning, and Eval 22:58 Code Implementation, Dataset 24:44 Data Preparation 26:39 Query Generation 29:54 Fine-tuning a Podcast Model 41:40 Evaluation 48:05 Does it Scale? 49:00 Sharing Your Work
Science & Technology
58
1
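A minimal sketch of the fine-tuning step described above, training a bi-encoder on (query, episode description) pairs with multiple negatives ranking loss; the base model and toy pairs are assumptions:

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

pairs = [
    InputExample(texts=["episodes about startup failure",
                        "We discuss why most startups fail within a year..."]),
    InputExample(texts=["history of the roman empire",
                        "A deep dive into Rome, from republic to empire..."]),
]
loader = DataLoader(pairs, batch_size=2, shuffle=True)
loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```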
gVAJ_l_S7uQ
UCv83tO5cePwHMt1952IVVHw
How to learn NLP for free
2022-04-24 16:41:28 UTC
2022-04-26 13:05:48 UTC
1402 seconds
Knowing what to learn is one of the hardest parts about self-learning. Imagine being thrown into the wilderness and being told to find a specific landmark. Without a map, you will end up wandering the wilderness with no better option than taking one step after another. I spent a long time wandering step-by-step and eventually found my way into working with deep learning and NLP full-time. Here I will share many of the resources I used or wish I had used in the past. You can use this "curriculum" as a rough guideline in self-learning ML and working towards a full-time position. ALL LINKS in article/friend link below: πŸ“• Medium article: https://jamescalam.medium.com/the-self-taught-nlp-engineer-curriculum-c425c3fc3ff6 πŸ“– Friend link: https://jamescalam.medium.com/the-self-taught-nlp-engineer-curriculum-c425c3fc3ff6?sk=986263c644d9b36699d800713faa478a --- πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:53 ML 101 + Prerequisites 04:58 Sentdex + Neural Nets from Scratch 07:32 ML Coursera 09:31 100 Page ML Book 11:14 Applied ML + Daniel Bourke 13:17 Origin of Modern NLP 13:41 CS224N 14:44 NLP Specialization Coursera 15:57 Modern NLP + Transformers Intro 16:54 Transformer Courses 18:14 Doing Projects 19:18 Semantic + Vector Search 19:54 NLP for Semantic Search 20:44 Mining of Massive Datasets 22:27 Final Points
Science & Technology
165
1
fb7LENb9eag
UCv83tO5cePwHMt1952IVVHw
BERTopic Explained
2022-05-10 14:13:06 UTC
2022-05-11 15:10:23 UTC
2714 seconds
90% of the world's data is unstructured. It is built by humans, for humans. That's great for human consumption, but it is *very* hard to organize when we begin dealing with the massive amounts of data abundant in today's information age. Organization is complicated because unstructured text data is not intended to be understood by machines, and having humans process this abundance of data is wildly expensive and *very slow*. Fortunately, there is light at the end of the tunnel. More and more of this unstructured text is becoming accessible and understood by machines. We can now search text based on *meaning*, identify the sentiment of text, extract entities, and much more. Transformers are behind much of this. These transformers are (unfortunately) not Michael Bay's Autobots and Decepticons and (fortunately) not buzzing electrical boxes. Our NLP transformers lie somewhere in the middle, they're not sentient Autobots (yet), but they can understand language in a way that existed only in sci-fi until a short few years ago. Machines with a human-like comprehension of language are pretty helpful for organizing masses of unstructured text data. In machine learning, we refer to this task as *topic modeling*, the automatic clustering of data into particular topics. BERTopic takes advantage of the superior language capabilities of these (not yet sentient) transformer models and uses some other ML magic like UMAP and HDBSCAN (more on these later) to produce what is one of the most advanced techniques in language topic modeling today. 🌲 Pinecone article: https://www.pinecone.io/learn/bertopic πŸ”— Code notebooks: https://github.com/pinecone-io/examples/tree/master/learn/algos-and-libraries/bertopic πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:40 In this video 02:58 BERTopic Getting Started 08:48 BERTopic Components 15:21 Transformer Embedding 18:33 Dimensionality Reduction 25:07 UMAP 31:48 Clustering 37:22 c-TF-IDF 40:49 Custom BERTopic 44:04 Final Thoughts
Science & Technology
153
3
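A minimal sketch of the default BERTopic pipeline described above (embed, reduce with UMAP, cluster with HDBSCAN, describe with c-TF-IDF); the 20 Newsgroups data is an illustrative stand-in:

```python
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())  # topic sizes and top terms
```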
O9lrWt15wH8
UCv83tO5cePwHMt1952IVVHw
Long Form Question Answering (LFQA) in Haystack
2022-05-17 15:22:17 UTC
2022-05-17 15:46:21 UTC
2159 seconds
Question-Answering (QA) has exploded as a subdomain of Natural Language Processing (NLP) in the last few years. QA is a widely applicable use case in NLP yet was out of reach until the introduction of transformer models in 2017. Without transformer models, the level of language comprehension required to make something as complex as QA work simply was not possible. Although QA is a complex topic, it comes from a simple idea: the automatic retrieval of information via a more human-like interaction. The task of information retrieval (IR) is performed by almost every organization in the world. Without other options, organizations rely on person-to-person IR and rigid keyword search tools. This haphazard approach to IR generates a lot of friction, particularly for larger organizations. QA offers a solution to this problem. Rather than these documents being lost in an abyss, they can be stored within a space where an intelligent QA agent can access them. Unlike humans, our QA agent can scan millions of documents in seconds and return answers from these documents almost instantly. With QA tools, employees can stop wasting time searching for snippets of information and focus on their *real*, value-adding tasks. A small investment in QA is, for most organizations, a no-brainer. 🌲 Pinecone article: https://www.pinecone.io/learn/haystack-lfqa πŸ”— Code notebooks: https://github.com/pinecone-io/examples/blob/master/integrations/haystack/haystack_lfqa.ipynb πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 04:20 Approaches to Question Answering 05:43 Components of QA Pipeline 08:58 LFQA Generator 09:40 Haystack Setup 10:32 Initialize Document Store 13:02 Getting Data 17:53 Indexing Embeddings 21:51 Initialize Generator 24:10 Asking Questions 26:12 Common Problems 29:32 Generator Memory 31:30 Few More Questions 34:54 Outro
Science & Technology
55
1
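A minimal sketch of the generator step in an LFQA pipeline, assuming documents have already been retrieved; vblagoje/bart_lfqa is a commonly used LFQA checkpoint for Haystack and is an assumption here:

```python
from haystack import Document
from haystack.nodes import Seq2SeqGenerator

generator = Seq2SeqGenerator(model_name_or_path="vblagoje/bart_lfqa")

docs = [Document(content="Dreams may help the brain consolidate memories "
                         "and process emotions during sleep.")]
result = generator.predict(query="Why do we dream?", documents=docs, top_k=1)
print(result["answers"][0].answer)  # a full-sentence, generated answer
```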
uYas6ysyjgY
UCv83tO5cePwHMt1952IVVHw
New GPU-Acceleration for PyTorch on M1 Macs! + using with BERT
2022-05-22 16:37:37 UTC
2022-05-24 13:00:34 UTC
1140 seconds
GPU-acceleration on Mac is finally here! Today's deep learning models owe a great deal of their exponential performance gains to ever-increasing model sizes. Those larger models require more computations to train and run. These models are simply too big to be run on CPU hardware, which performs large step-by-step computations. Instead, they need massively parallel computations. That leaves us with either GPU or TPU hardware. Our home PCs aren't coming with TPUs anytime soon, so we're left with the GPU option. GPUs use a highly parallel structure, originally designed to process images for visual heavy processes. They became essential components in gaming for rendering real-time 3D images. GPUs are essential for the scale of today's models. Using CPUs makes many of these models too slow to be useful, which can make deep learning on M1 machines rather disappointing. Fortunately, this is changing, with GPU support arriving on M1 machines in PyTorch v1.12. In this video we will explain the new integration and how to implement it yourself. πŸ“• Article: https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1 πŸ“– Friend Link (free access): https://towardsdatascience.com/gpu-acceleration-comes-to-pytorch-on-m1-macs-195c399efcc1?sk=a88acd35f600858093c177b97d690b03 πŸ”— Code notebooks: https://github.com/jamescalam/pytorch-mps πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:34 PyTorch MPS 04:57 Installing ARM Python 09:09 Using PyTorch with GPU 12:14 BERT on PyTorch GPU 13:51 Best way to train LLMs on Mac 16:01 Buffer Size Bug 17:24 When we would use Mac M1 GPU
Science & Technology
115
3
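The integration itself is essentially a one-line device change. A minimal sketch, assuming PyTorch >= 1.12 on an Apple-silicon Mac:

```python
import torch

# fall back to CPU when the MPS backend isn't available
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.randn(8, 512, device=device)
w = torch.randn(512, 512, device=device)
y = x @ w  # executed on the M1 GPU when device is "mps"
print(y.device)
```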
FzLIIwiaXSU
UCv83tO5cePwHMt1952IVVHw
How to Build an AI-Powered Video Search App
2022-06-01 12:37:21 UTC
2022-06-01 16:29:43 UTC
1343 seconds
Technology and culture have advanced and become ever more entangled. Some of the most significant technological breakthroughs are integrated so tightly into our culture that we never even notice they’re there. One of those is AI-powered search. It powers your Google results, Netflix recommendations, and ads you see everywhere. It is rapidly being woven throughout all aspects of our lives. Further, this is a new technology; its full potential is unknown. This technology weaves directly into the cultural phenomenon of YouTube. Imagine a search engine like Google that allows you to rapidly access the billions of hours of YouTube content. No other source in the world offers that level of highly engaging video content. 🌲 Pinecone article: https://www.pinecone.io/learn/youtube-search πŸ”— Code: https://github.com/pinecone-io/examples/tree/master/learn/projects/yt-search πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 02:56 YouTube Search App 04:43 Getting Data 07:58 Enhancing the Data 12:45 Scraping Other Metadata 14:52 Loading Data from Hugging Face 15:42 Index and Query the Data 20:43 Streamlit App Code
Science & Technology
58
0
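A minimal sketch of the index-and-query step, using the pinecone-client API of this era; the API key, environment, index name, and dummy 384-dimensional vectors are placeholders:

```python
import pinecone

pinecone.init(api_key="YOUR_API_KEY", environment="us-west1-gcp")
if "yt-search" not in pinecone.list_indexes():
    pinecone.create_index("yt-search", dimension=384, metric="cosine")
index = pinecone.Index("yt-search")

# upsert (id, vector, metadata) tuples, then query with a vector
index.upsert([("video-chunk-0", [0.1] * 384, {"title": "demo video"})])
res = index.query([0.1] * 384, top_k=3, include_metadata=True)
print(res["matches"])
```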
xXsDIK9z_fg
UCv83tO5cePwHMt1952IVVHw
Using Semantic Search to Find GIFs
2022-06-06 09:17:01 UTC
2022-06-07 12:05:40 UTC
1050 seconds
Vector search powers some of the most popular services in the world. It serves your Google results, delivers the best podcasts on Spotify, and accounts for at least 35% of consumer purchases on Amazon. In this article, we will use vector search applied to language, called semantic search, to build a GIF search engine. Unlike more traditional search where we rely on keyword matching, semantic search enables search based on the human meaning behind text and images. That means we can find highly relevant GIFs with natural language prompts. The pipeline for a project like this is simple, yet powerful. It can easily be adapted to tasks as diverse as video search or answering Super Bowl questions, or as we’ll see, finding GIFs. 🌲 Pinecone article: https://www.pinecone.io/learn/gif-search πŸ”— Code: https://github.com/pinecone-io/examples/tree/master/learn/projects/gif-search πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:17 GIF Search Demo 01:56 Pipeline Overview 05:33 Data Preparation 08:17 Vector Database and Retriever 12:37 Querying 15:42 Streamlit App Code
Science & Technology
20
1
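The core of the pipeline described above fits in a few lines: encode captions and a query with a sentence transformer, then rank by cosine similarity. The model and captions here are illustrative assumptions:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

gif_captions = ["a cat typing furiously on a keyboard",
                "a dog falling off a sofa"]
caption_embeds = model.encode(gif_captions, convert_to_tensor=True)

query_embed = model.encode("funny cat at a computer", convert_to_tensor=True)
scores = util.cos_sim(query_embed, caption_embeds)[0]
print(gif_captions[int(scores.argmax())])  # best-matching GIF caption
```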
_OAU1kQdmgE
UCv83tO5cePwHMt1952IVVHw
How to Learn Data Science | ML | Programming
2022-06-15 10:37:57 UTC
2022-06-15 13:11:47 UTC
992 seconds
In this video I share five of the approaches/thoughts I have regarding learning, in particular for learning data science, machine learning, or programming. πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 01:33 Scale of Theory vs. Applied 02:55 Shape of Learning 05:52 Courses vs. Projects 08:37 Open Source 10:44 Writing 12:44 Following Interests 15:42 Final Notes
Education
24
0
BD9TkvEsKwM
UCv83tO5cePwHMt1952IVVHw
Evaluation Measures for Search and Recommender Systems
2022-06-25 14:35:27 UTC
2022-06-28 15:06:40 UTC
1885 seconds
In this video you will learn about popular offline metrics (evaluation measures) like Recall@K, Mean Reciprocal Rank (MRR), Mean Average Precision@K (MAP@K), and Normalized Discounted Cumulative Gain (NDCG@K). We will also demonstrate how each of these metrics can be replicated in Python. Evaluation of information retrieval (IR) systems is critical to making well-informed design decisions. From search to recommendations, evaluation measures are paramount to understanding what does and does not work in retrieval. Many big tech companies attribute much of their success to well-built IR systems. One of Amazon’s earliest iterations of the technology was reportedly driving more than 35% of their sales. Google attributes 70% of YouTube views to their IR recommender systems. IR systems power some of the greatest companies in the world, and behind every successful IR system is a set of evaluation measures. 🌲 Pinecone article: https://www.pinecone.io/learn/offline-evaluation πŸ”— Code notebooks: https://github.com/pinecone-io/examples/tree/master/learn/algos-and-libraries/offline-evaluation πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP 00:00 Intro 00:51 Offline Metrics 02:38 Dataset and Retrieval 101 06:08 Recall@K 07:57 Recall@K in Python 09:03 Disadvantages of Recall@K 10:21 MRR 13:32 MRR in Python 14:18 MAP@K 18:17 MAP@K in Python 19:27 NDCG@K 29:26 Pros and Cons of NDCG@K 29:48 Final Thoughts
Science & Technology
49
0
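A minimal sketch of two of the metrics above, Recall@K and MRR, on toy data (the other measures follow the same pattern):

```python
def recall_at_k(retrieved, relevant, k):
    """Fraction of the relevant items that appear in the top-k results."""
    return len(set(retrieved[:k]) & set(relevant)) / len(relevant)

def mean_reciprocal_rank(all_retrieved, all_relevant):
    """Average over queries of 1/rank of the first relevant result."""
    total = 0.0
    for retrieved, relevant in zip(all_retrieved, all_relevant):
        for rank, doc in enumerate(retrieved, start=1):
            if doc in relevant:
                total += 1.0 / rank
                break
    return total / len(all_retrieved)

retrieved = ["d3", "d1", "d7"]
relevant = ["d1", "d9"]
print(recall_at_k(retrieved, relevant, k=3))          # 0.5
print(mean_reciprocal_rank([retrieved], [relevant]))  # 0.5 (first hit at rank 2)
```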
coaaSxys5so
UCv83tO5cePwHMt1952IVVHw
How to build next-level Q&A with OpenAI
2022-07-06 19:48:54 UTC
2022-07-07 13:24:35 UTC
1168 seconds
Walkthrough of the OpenAI x Pinecone Q&A app I built for a webinar with OpenAI. This is the coolest Q&A app I've ever built thanks to Pinecone vector search and OpenAI's incredible embeddings and generation endpoints. LINKS: πŸ•Ή App: https://pinecone-io-playground-beyond-search-openaisrcserver-h65vzl.streamlitapp.com πŸ‘¨β€πŸ’» Code and Data: https://github.com/pinecone-io/examples/tree/master/integrations/openai/beyond_search_webinar OpenAI x Pinecone Webinar: ▢️ https://www.youtube.com/watch?v=HtI9easWtAA πŸ€– 70% Discount on the NLP With Transformers in Python course: https://bit.ly/3DFvvY5 πŸŽ‰ Subscribe for Article and Video Updates! https://jamescalam.medium.com/subscribe https://medium.com/@jamescalam/membership πŸ‘Ύ Discord: https://discord.gg/c5QtDB9RAP
Science & Technology
36
0
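A minimal sketch of the retrieve-then-generate pattern behind the app, written against the openai-python API of this era; the API key, model names, and stubbed context are assumptions (the real app retrieves contexts from a Pinecone index):

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

query = "How does positional encoding work in transformers?"
emb = openai.Embedding.create(input=[query], engine="text-embedding-ada-002")
xq = emb["data"][0]["embedding"]  # this vector would be the Pinecone query

# stub: in the real app these come back from vector search with xq
contexts = ["Positional encodings add token-order information to embeddings."]

prompt = ("Answer the question based on the context below.\n\n"
          + "\n".join(contexts)
          + f"\n\nQuestion: {query}\nAnswer:")
res = openai.Completion.create(engine="text-davinci-002",
                               prompt=prompt, max_tokens=100)
print(res["choices"][0]["text"].strip())
```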

Dataset containing video metadata from a few tech channels, i.e.
