{ "cells": [ { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "view-in-github" }, "source": [ "" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "Igc5itf-xMGj" }, "source": [ "# Masakhane - Machine Translation for African Languages (Using JoeyNMT)\n", "\n", "### Languages: English-Tshiluba\n", "\n", "### Author: Salomon KABONGO KABENAMUALU" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text", "id": "l929HimrxS0a" }, "source": [ "## Retrieve your data & make a parallel corpus\n", "\n", "If you are wanting to use the JW300 data referenced on the Masakhane website or in our GitHub repo, you can use `opus-tools` to convert the data into a convenient format. `opus_read` from that package provides a convenient tool for reading the native aligned XML files and to convert them to TMX format. The tool can also be used to fetch relevant files from OPUS on the fly and to filter the data as necessary. [Read the documentation](https://pypi.org/project/opustools-pkg/) for more details.\n", "\n", "Once you have your corpus files in TMX format (an xml structure which will include the sentences in your target language and your source language in a single file), we recommend reading them into a pandas dataframe. Thankfully, Jade wrote a silly `tmx2dataframe` package which converts your tmx file to a pandas dataframe. " ] }, { "cell_type": "code", "execution_count": 1, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 120 }, "colab_type": "code", "id": "oGRmDELn7Az0", "outputId": "cab1986b-af47-430b-c5b1-9639e3807e2c" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n", "\n", "Enter your authorization code:\n", "··········\n", "Mounted at /content/drive\n" ] } ], "source": [ "from google.colab import drive\n", "drive.mount('/content/drive')" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code", "id": "Cn3tgQLzUxwn" }, "outputs": [], "source": [ "# TODO: Set your source and target languages. 
Keep in mind, these traditionally use standard (ISO 639) language codes.\n", "# These will also become the suffixes of all vocab and corpus files used throughout\n", "import os\n", "source_language = \"en\"\n", "target_language = \"lua\"\n", "lc = False # If True, lowercase the data.\n", "seed = 42 # Random seed for shuffling.\n", "tag = \"baseline\" # Give a unique name to your folder - this is to ensure you don't overwrite any models you've already submitted\n", "\n", "os.environ[\"src\"] = source_language # Sets them in bash as well, since we often use bash scripts\n", "os.environ[\"tgt\"] = target_language\n", "os.environ[\"tag\"] = tag\n", "\n", "# This will save the files to a folder in our gdrive instead!\n", "!mkdir -p \"/content/drive/My Drive/masakhane/$src-$tgt-$tag\"\n", "os.environ[\"gdrive_path\"] = \"/content/drive/My Drive/masakhane/%s-%s-%s\" % (source_language, target_language, tag)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 33 }, "colab_type": "code", "id": "kBSgJHEw7Nvx", "outputId": "a090e69e-9174-4790-a695-853b97c11c2e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "/content/drive/My Drive/masakhane/en-lua-baseline\n" ] } ], "source": [ "!echo $gdrive_path" ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 100 }, "colab_type": "code", "id": "gA75Fs9ys8Y9", "outputId": "f4c18fa7-9452-4fd1-b32c-edd84645ee00" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Collecting opustools-pkg\n", "\u001b[?25l Downloading https://files.pythonhosted.org/packages/6c/9f/e829a0cceccc603450cd18e1ff80807b6237a88d9a8df2c0bb320796e900/opustools_pkg-0.0.52-py3-none-any.whl (80kB)\n", "\u001b[K |████████████████████████████████| 81kB 5.6MB/s eta 0:00:011\n", "\u001b[?25hInstalling collected packages: opustools-pkg\n", "Successfully installed opustools-pkg-0.0.52\n" ] } ], "source": [ "# Install opus-tools\n", "! pip install opustools-pkg" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 200 }, "colab_type": "code", "id": "xq-tDZVks7ZD", "outputId": "b2838aa0-05c3-40d5-c26d-190f9366ff43" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "Alignment file /proj/nlpl/data/OPUS/JW300/latest/xml/en-lua.xml.gz not found. The following files are available for downloading:\n", "\n", " 3 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en-lua.xml.gz\n", " 263 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/en.zip\n", " 32 MB https://object.pouta.csc.fi/OPUS-JW300/v1/xml/lua.zip\n", "\n", " 298 MB Total size\n", "./JW300_latest_xml_en-lua.xml.gz ... 100% of 3 MB\n", "./JW300_latest_xml_en.zip ... 100% of 263 MB\n", "./JW300_latest_xml_lua.zip ... 100% of 32 MB\n" ] } ], "source": [ "# Download our corpus in moses format.\n", "! opus_read -d JW300 -s $src -t $tgt -wm moses -w jw300.$src jw300.$tgt -q\n", "\n", "# Extract the alignment file.\n", "! 
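gunzip JW300_latest_xml_$src-$tgt.xml.gz" ] }, { "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "Optional sanity check: in moses format the two sides of the corpus are plain-text files aligned line by line, so `jw300.en` and `jw300.lua` should report the same number of lines." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code" }, "outputs": [], "source": [ "# Sanity check: both sides of a moses-format parallel corpus should have equal line counts.\n", "! wc -l jw300.$src jw300.$tgt" ] },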
{ "cell_type": "code", "execution_count": 6, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 566 }, "colab_type": "code", "id": "n48GDRnP8y2G", "outputId": "b7bdafa9-2756-4df4-ad09-bf371126b9af" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "--2020-01-02 08:48:18-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 277791 (271K) [text/plain]\n", "Saving to: ‘test.en-any.en’\n", "\n", "test.en-any.en 100%[===================>] 271.28K --.-KB/s in 0.02s \n", "\n", "2020-01-02 08:48:18 (16.5 MB/s) - ‘test.en-any.en’ saved [277791/277791]\n", "\n", "--2020-01-02 08:48:22-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-lua.en\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 204380 (200K) [text/plain]\n", "Saving to: ‘test.en-lua.en’\n", "\n", "test.en-lua.en 100%[===================>] 199.59K --.-KB/s in 0.02s \n", "\n", "2020-01-02 08:48:22 (10.9 MB/s) - ‘test.en-lua.en’ saved [204380/204380]\n", "\n", "--2020-01-02 08:48:28-- https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-lua.lua\n", "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...\n", "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.\n", "HTTP request sent, awaiting response... 200 OK\n", "Length: 236174 (231K) [text/plain]\n", "Saving to: ‘test.en-lua.lua’\n", "\n", "test.en-lua.lua 100%[===================>] 230.64K --.-KB/s in 0.02s \n", "\n", "2020-01-02 08:48:29 (14.7 MB/s) - ‘test.en-lua.lua’ saved [236174/236174]\n", "\n" ] } ], "source": [ "# Download the global test set.\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-any.en\n", "\n", "# And the specific test set for this language pair.\n", "os.environ[\"trg\"] = target_language\n", "os.environ[\"src\"] = source_language\n", "\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.en\n", "! mv test.en-$trg.en test.en\n", "! wget https://raw.githubusercontent.com/juliakreutzer/masakhane/master/jw300_utils/test/test.en-$trg.$trg\n", "! 
mv test.en-$trg.$trg test.$trg" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 33 }, "colab_type": "code", "id": "NqDG-CI28y2L", "outputId": "54c30eff-00d1-4c6c-954a-a9e86d73bc05" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loaded 3571 global test sentences to filter from the training/dev data.\n" ] } ], "source": [ "# Read the test data to filter from train and dev splits.\n", "# Store the English portion in a set for quick membership checks.\n", "en_test_sents = set()\n", "filter_test_sents = \"test.en-any.en\"\n", "j = 0\n", "with open(filter_test_sents) as f:\n", "  for line in f:\n", "    en_test_sents.add(line.strip())\n", "    j += 1\n", "print('Loaded {} global test sentences to filter from the training/dev data.'.format(j))" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 66 }, "colab_type": "code", "id": "6D-_PqdXGJiB", "outputId": "dde6c0b8-d957-46fa-90ac-f65f11eaeb7a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "drive\t\t\t JW300_latest_xml_en.zip sample_data test.lua\n", "jw300.en\t\t JW300_latest_xml_lua.zip test.en\n", "JW300_latest_xml_en-lua.xml jw300.lua\t\t test.en-any.en\n" ] } ], "source": [ "!ls" ] },
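{ "cell_type": "markdown", "metadata": { "colab_type": "text" }, "source": [ "The next cell builds the parallel corpus from `jw300.en` and `jw300.lua`, skipping every pair whose English side appears in the global test set. Here is a minimal sketch of that filtering pattern, under the same file-name assumptions (the real cell below also builds the dataframe shown in its output):" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": {}, "colab_type": "code" }, "outputs": [], "source": [ "# Illustrative sketch of the test-set filtering performed below.\n", "src_sents, trg_sents = [], []\n", "skipped = 0\n", "with open(\"jw300.\" + source_language) as fsrc, open(\"jw300.\" + target_language) as ftrg:\n", "  for s, t in zip(fsrc, ftrg):\n", "    if s.strip() in en_test_sents:  # set membership check is O(1)\n", "      skipped += 1\n", "      continue\n", "    src_sents.append(s.strip())\n", "    trg_sents.append(t.strip())\n", "print('Kept {} pairs, skipped {}.'.format(len(src_sents), skipped))" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 364 }, "colab_type": "code", "id": "3CNdwLBCfSIl", "outputId": "c27b04a0-217d-41c2-8b0d-6834b31e3aef" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Loaded data and skipped 5835/330019 lines since contained in test set.\n" ] }, { "data": { "text/html": [ "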
\n", " | source_sentence | \n", "target_sentence | \n", "
---|---|---|
0 | \n", "Our Planet — What Is Its Future ? | \n", "Nditekemena kayi didiku bua buloba buetu ebu ? | \n", "
1 | \n", "“ While humans for millennia have feared ‘ act... | \n", "Bilondeshile tshikandakanda kampanda tshia mu ... | \n", "
2 | \n", "The United Nations Environment Programme ( UNE... | \n", "( Globe and Mail ) Tshibambalu tshia Ndongamu ... | \n", "
3 | \n", "Executive director of UNEP , Klaus Toepfer , s... | \n", "Klaus Toepfer , mulombodi wa ndongamu eu udi w... | \n", "
4 | \n", "Some environmental progress has been made sinc... | \n", "Katshia benza Tshibambalu etshi mu 1972 bua ku... | \n", "
5 | \n", "As reported in The Toronto Star , “ the qualit... | \n", "Nunku , anu mudi tshinga tshikandakanda tshile... | \n", "
6 | \n", "Also , forest management programs , such as th... | \n", "Kabidi , ndongamu ya malu a mêtu bu mudi ya mu... | \n", "
7 | \n", "Even so , the UNEP report says that if economi... | \n", "Nansha nanku , luapolo lua Tshibambalu etshi l... | \n", "
8 | \n", "The Globe stated : “ About half the world’s ri... | \n", "( Toronto Star ) Tshinga tshikandakanda tshiak... | \n", "
9 | \n", "Eighty countries holding 40 per cent of the wo... | \n", "Mu matunga 80 mudi bia pa lukama 40 bia bantu ... | \n", "