{ "nbformat": 4, "nbformat_minor": 0, "metadata": { "accelerator": "GPU", "colab": { "name": "Copy of efi_en_starter_notebook.ipynb", "provenance": [], "collapsed_sections": [], "toc_visible": true }, "kernelspec": { "name": "python3", "display_name": "Python 3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.5.6" } }, "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Igc5itf-xMGj" }, "source": [ "# Masakhane - Machine Translation for African Languages (Using JoeyNMT)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "x4fXCKCf36IK" }, "source": [ "## Note before beginning:\n", "### - The idea is that you should be able to make minimal changes to this in order to get SOME result for your own translation corpus. \n", "\n", "### - The tl;dr: Go to the **\"TODO\"** comments which will tell you what to update to get up and running\n", "\n", "### - If you actually want to have a clue what you're doing, read the text and peek at the links\n", "\n", "### - With 100 epochs, it should take around 7 hours to run in Google Colab\n", "\n", "### - Once you've gotten a result for your language, please attach and email your notebook that generated it to masakhanetranslation@gmail.com\n", "\n", "### - If you care enough and get a chance, doing a brief background on your language would be amazing. See examples in [(Martinus, 2019)](https://arxiv.org/abs/1906.05685)" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l929HimrxS0a" }, "source": [ "## Retrieve your data & make a parallel corpus\n", "\n", "If you are wanting to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.\n", "\n", "Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. 
" ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "oGRmDELn7Az0", "colab": { "base_uri": "https://localhost:8080/", "height": 122 }, "outputId": "61acdb19-5c3b-4937-beb9-2f5f6ebed4c1" }, "source": [ "from google.colab import drive\n", "drive.mount('/content/drive')" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n", "\n", "Enter your authorization code:\n", "··········\n", "Mounted at /content/drive\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "Cn3tgQLzUxwn", "colab": {} }, "source": [ "# TODO: Set your source and target languages. Keep in mind, these traditionally use language codes as found here:\n", "# These will also become the suffix's of all vocab and corpus files used throughout\n", "import os\n", "source_language = \"en\"\n", "target_language = \"nya\" \n", "lc = False # If True, lowercase the data.\n", "seed = 42 # Random seed for shuffling.\n", "tag = \"baseline\" # Give a unique name to your folder - this is to ensure you don't rewrite any models you've already submitted\n", "\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "os.environ[\"tag\"] = tag\n", "\n", "# This will save it to a folder in our gdrive instead! \n", "!mkdir -p \"/content/drive/My Drive/masakhane/$src-$tgt-$tag\"\n", "g_drive_path = \"/content/drive/My Drive/masakhane/%s-%s-%s\" % (source_language, target_language, tag)\n", "os.environ[\"gdrive_path\"] = g_drive_path\n", "models_path = '%s/models/%s%s_transformer'% (g_drive_path, source_language, target_language)\n", "# model temporary directory for training\n", "model_temp_dir = \"/content/drive/My Drive/masakhane/model-temp\"\n", "# model permanent storage on the drive\n", "!mkdir -p \"$gdrive_path/models/${src}${tgt}_transformer/\"" ], "execution_count": null, "outputs": [] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "kBSgJHEw7Nvx", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "61688e5e-5c17-4baa-8854-958ae4f04c71" }, "source": [ "!echo $gdrive_path" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "/content/drive/My Drive/masakhane/en-nya-baseline\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "gA75Fs9ys8Y9", "colab": { "base_uri": "https://localhost:8080/", "height": 102 }, "outputId": "925edfb6-3c75-4601-ae6c-b38e1c50941e" }, "source": [ "#TODO: Skip for retrain\n", "# Install opus-tools\n", "! 
pip install opustools-pkg " ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "Collecting opustools-pkg\n", "  Downloading https://files.pythonhosted.org/packages/6c/9f/e829a0cceccc603450cd18e1ff80807b6237a88d9a8df2c0bb320796e900/opustools_pkg-0.0.52-py3-none-any.whl (80kB)\n", "Installing collected packages: opustools-pkg\n", "Successfully installed opustools-pkg-0.0.52\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "xq-tDZVks7ZD", "colab": { "base_uri": "https://localhost:8080/", "height": 204 }, "outputId": "5dbe00c9-1177-44c9-9fbb-fc282c40a960" }, "source": [ "#TODO: Skip for retrain\n", "# Downloading our corpus\n", "! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q\n", "\n", "# extract the corpus file\n", "! gunzip JW300_latest_xml_$src-$tgt.xml.gz" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "\n", "Alignment file /proj/nlpl/data/OPUS/JW300/latest/xml/en-nya.xml.gz not found. The following files are available for downloading:\n", "\n", " ./JW300_latest_xml_en.zip already exists\n", " ./JW300_latest_xml_nya.zip already exists\n", " 572 KB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en-nya.xml.gz\n", "\n", " 572 KB Total size\n", "./JW300_latest_xml_en-nya.xml.gz ... 100% of 572 KB\n", "gzip: JW300_latest_xml_en-nya.xml already exists; do you wish to overwrite (y or n)? n\n", "\tnot overwritten\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "j2K6QK2NOaUX", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "f10dd54e-7fb9-44a7-8c36-38e2a1db1555" }, "source": [ "# extract the corpus file\n", "! gunzip JW300_latest_xml_$tgt-$src.xml.gz" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "gzip: JW300_latest_xml_nya-en.xml.gz: No such file or directory\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "n48GDRnP8y2G", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 578 }, "outputId": "32880d12-76cb-446d-8b95-112a5877508c" }, "source": [ "#TODO: Skip for retrain\n", "# Download the global test set.\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", " \n", "# And the specific test set for this language pair.\n", "os.environ[\"trg\"] = target_language \n", "os.environ[\"src\"] = source_language \n", "\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en \n", "! mv test.en-$trg.en test.en\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg \n", "! mv test.en-$trg.$trg test.$trg" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "--2020-07-12 20:08:28-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 
151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 277791 (271K) [text/plain]\n", "Saving to: ‘test.en-any.en.1’\n", "\n", "test.en-any.en.1 100%[===================>] 271.28K --.-KB/s in 0.02s \n", "\n", "2020-07-12 20:08:28 (11.8 MB/s) - ‘test.en-any.en.1’ saved [277791/277791]\n", "\n", "--2020-07-12 20:08:30-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-nya.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 203330 (199K) [text/plain]\n", "Saving to: ‘test.en-nya.en’\n", "\n", "test.en-nya.en 100%[===================>] 198.56K --.-KB/s in 0.01s \n", "\n", "2020-07-12 20:08:30 (13.0 MB/s) - ‘test.en-nya.en’ saved [203330/203330]\n", "\n", "--2020-07-12 20:08:32-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-nya.nya\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 226404 (221K) [text/plain]\n", "Saving to: ‘test.en-nya.nya’\n", "\n", "test.en-nya.nya 100%[===================>] 221.10K --.-KB/s in 0.02s \n", "\n", "2020-07-12 20:08:33 (11.1 MB/s) - ‘test.en-nya.nya’ saved [226404/226404]\n", "\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "id": "NqDG-CI28y2L", "colab_type": "code", "colab": { "base_uri": "https://localhost:8080/", "height": 34 }, "outputId": "5265c307-2d30-4133-efb9-e968d324db2d" }, "source": [ "#TODO: Skip for retrain\n", "# Read the test data to filter from train and dev splits.\n", "# Store the English portion in a set for quick filtering checks.\n", "en_test_sents = set()\n", "filter_test_sents = \"test.en-any.en\"\n", "j = 0\n", "with open(filter_test_sents) as f:\n", "  for line in f:\n", "    en_test_sents.add(line.strip())\n", "    j += 1\n", "print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "Loaded 3571 global test sentences to filter from the training/dev data.\n" ], "name": "stdout" } ] }, { "cell_type": "code", "metadata": { "colab_type": "code", "id": "3CNdwLBCfSIl", "colab": { "base_uri": "https://localhost:8080/", "height": 376 }, "outputId": "4df3192b-b6be-43a8-af23-21e65a3e3b10" }, "source": [ "#TODO: Skip for retrain\n", "import pandas as pd\n", "\n", "# Parallel corpus files to a dataframe\n", "source_file = 'jw300.' + source_language\n", "target_file = 'jw300.' 
+ target_language\n", "\n", "source = []\n", "target = []\n", "skip_lines = []  # Collect the line numbers of the source portion to skip the same lines for the target portion.\n", "with open(source_file) as f:\n", "  for i, line in enumerate(f):\n", "    # Skip sentences that are contained in the test set.\n", "    if line.strip() not in en_test_sents:\n", "      source.append(line.strip())\n", "    else:\n", "      skip_lines.append(i)\n", "with open(target_file) as f:\n", "  for j, line in enumerate(f):\n", "    # Only add to the corpus if the corresponding source was not skipped.\n", "    if j not in skip_lines:\n", "      target.append(line.strip())\n", "\n", "print('Loaded data and skipped {}/{} lines since they were contained in the test set.'.format(len(skip_lines), i + 1))\n", "\n", "df = pd.DataFrame(zip(source, target), columns=['source_sentence', 'target_sentence'])\n", "# If you get \"TypeError: data argument can't be an iterator\", your pandas version can't take iterators; run the line below instead.\n", "#df = pd.DataFrame(list(zip(source, target)), columns=['source_sentence', 'target_sentence'])\n", "df.head(10)" ], "execution_count": null, "outputs": [ { "output_type": "stream", "text": [ "Loaded data and skipped 4429/60567 lines since they were contained in the test set.\n" ], "name": "stdout" }, { "output_type": "execute_result", "data": { "text/html": [ "
\n", " | source_sentence | \n", "target_sentence | \n", "
---|---|---|
0 | \n", "This publication is not for sale . | \n", "Magazini ino si yogulitsa . | \n", "
1 | \n", "\n", " | Colinga cake ni kuthandiza pa nchito yophunzit... | \n", "
2 | \n", "3 Finding the Way | \n", "3 Mmene Mungaipezele | \n", "
3 | \n", "4 Contentment and Generosity | \n", "4 Kukhutila Komanso Kupatsa | \n", "
4 | \n", "6 Physical Health and Resilience | \n", "6 Thanzi Labwino na Kupilila | \n", "
5 | \n", "8 Love | \n", "8 Cikondi | \n", "
6 | \n", "10 Forgiveness | \n", "10 Kukhululuka | \n", "
7 | \n", "12 Purpose in Life | \n", "12 Colinga ca Moyo | \n", "
8 | \n", "14 Hope | \n", "14 Ciyembekezo | \n", "
9 | \n", "16 Learn More | \n", "16 Dziŵani Zambili | \n", "