PyTorch: Language Translation with TorchText

Updated: 2020-09-07 17:25
Source: https://pytorch.org/tutorials/beginner/torchtext_translation_tutorial.html

This tutorial shows how to use several convenience classes of torchtext to preprocess data from a well-known dataset containing English and German sentences, and to use it to train a sequence-to-sequence model with attention that can translate German sentences into English.

It is based on this tutorial from PyTorch community member Ben Trevett and was created by Seth Weidman with Ben's permission.

By the end of this tutorial, you will be able to:

  • Preprocess sentences into a commonly-used format for NLP modeling using the following torchtext convenience classes: TranslationDataset, Field, and BucketIterator

Field and TranslationDataset

torchtext has utilities for creating datasets that can be easily iterated over for the purposes of building a language translation model. One key class is Field, which specifies the way each sentence should be preprocessed; another is TranslationDataset. torchtext has several such datasets; in this tutorial we use the Multi30k dataset, which contains about 30,000 sentences (averaging about 13 words in length) in both English and German.

Note: the tokenization in this tutorial requires Spacy. We use Spacy because it provides strong support for tokenization in languages other than English. torchtext provides a basic_english tokenizer and supports other tokenizers for English (e.g. Moses), but for language translation, where multiple languages are required, Spacy is your best bet.

To run this tutorial, first install spacy using pip or conda. Next, download the raw data for the English and German Spacy tokenizers:

python -m spacy download en
python -m spacy download de
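
Note: on Spacy 3.0 and later, the en and de shortcut links used above were removed; if those commands fail, downloading the full model names should work instead (depending on your torchtext version, the tokenizer_language arguments below may also need these full names):

python -m spacy download en_core_web_sm
python -m spacy download de_core_news_sm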

With Spacy installed, the following code will tokenize each of the sentences in the TranslationDataset based on the tokenizer defined in the Field.

from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator


SRC = Field(tokenize = "spacy",
            tokenizer_language="de",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)


TRG = Field(tokenize = "spacy",
            tokenizer_language="en",
            init_token = '<sos>',
            eos_token = '<eos>',
            lower = True)


train_data, valid_data, test_data = Multi30k.splits(exts = ('.de', '.en'),
                                                    fields = (SRC, TRG))

Out:

downloading training.tar.gz
downloading validation.tar.gz
downloading mmt_task1_test2016.tar.gz

Now that we've defined train_data, we can see an extremely useful feature of torchtext's Field: the build_vocab method now allows us to create the vocabulary associated with each language.

SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)

Once these lines of code have been run, SRC.vocab.stoi will be a dictionary with the tokens in the vocabulary as keys and their corresponding indices as values; SRC.vocab.itos will be a list with the same mapping reversed, from indices back to tokens. We won't make extensive use of this fact in this tutorial, but it will likely be useful in other NLP tasks you encounter.
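For example, a quick sanity check of these mappings (a minimal sketch; the token chosen here is arbitrary and the actual indices depend on the training data):

token = SRC.vocab.itos[10]               # an arbitrary in-vocabulary token
assert SRC.vocab.stoi[token] == 10       # stoi inverts itos
print(len(SRC.vocab), len(TRG.vocab))    # vocabulary sizes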

BucketIterator

The last torchtext-specific feature we'll use is the BucketIterator, which is easy to use since it takes a TranslationDataset as its first argument. Specifically, as the docs say: it defines an iterator that batches examples of similar lengths together, minimizing the amount of padding needed while producing freshly shuffled batches for each new epoch. See pool for the bucketing procedure used.

import torch


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')


BATCH_SIZE = 128


train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
    (train_data, valid_data, test_data),
    batch_size = BATCH_SIZE,
    device = device)

These iterators can be called just like DataLoaders; below, in the train and evaluate functions, they are called simply with:

for i, batch in enumerate(iterator):

Each batch then has src and trg attributes:

src = batch.src
trg = batch.trg
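
With the Field defaults used above (batch_first=False), each of these is a tensor of token indices with one padded sentence per column:

# src: [src_len, batch_size]  (German)
# trg: [trg_len, batch_size]  (English)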

Defining our nn.Module and Optimizer

That's mostly it from a torchtext perspective: with the dataset built and the iterators defined, the rest of this tutorial simply defines our model as an nn.Module, along with an Optimizer, and then trains it.

Specifically, our model follows the architecture described here (you can find a significantly more commented version here).

Note: this model is just an example model that can be used for language translation; we chose it because it is a standard model for the task, not because it is the recommended model to use for translation. As you're likely aware, state-of-the-art models are currently based on Transformers; you can see PyTorch's capabilities for implementing Transformer layers here. In particular, the "attention" used in the model below is different from the multi-headed self-attention present in a Transformer model.

import random
from typing import Tuple


import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch import Tensor


class Encoder(nn.Module):
    def __init__(self,
                 input_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: float):
        super().__init__()


        self.input_dim = input_dim
        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.dropout = dropout


        self.embedding = nn.Embedding(input_dim, emb_dim)


        self.rnn = nn.GRU(emb_dim, enc_hid_dim, bidirectional = True)


        self.fc = nn.Linear(enc_hid_dim * 2, dec_hid_dim)


        self.dropout = nn.Dropout(dropout)


    def forward(self,
                src: Tensor) -> Tuple[Tensor]:


        embedded = self.dropout(self.embedding(src))


        outputs, hidden = self.rnn(embedded)


        # concatenate the final hidden states of the forward (hidden[-2,:,:]) and
        # backward (hidden[-1,:,:]) GRU directions and project to the decoder size
        hidden = torch.tanh(self.fc(torch.cat((hidden[-2,:,:], hidden[-1,:,:]), dim = 1)))


        return outputs, hidden


class Attention(nn.Module):
    def __init__(self,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 attn_dim: int):
        super().__init__()


        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim


        self.attn_in = (enc_hid_dim * 2) + dec_hid_dim


        self.attn = nn.Linear(self.attn_in, attn_dim)


    def forward(self,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tensor:


        src_len = encoder_outputs.shape[0]


        # repeat the decoder hidden state src_len times so it can be scored
        # against every encoder output position
        repeated_decoder_hidden = decoder_hidden.unsqueeze(1).repeat(1, src_len, 1)


        encoder_outputs = encoder_outputs.permute(1, 0, 2)


        energy = torch.tanh(self.attn(torch.cat((
            repeated_decoder_hidden,
            encoder_outputs),
            dim = 2)))


        attention = torch.sum(energy, dim=2)


        return F.softmax(attention, dim=1)


class Decoder(nn.Module):
    def __init__(self,
                 output_dim: int,
                 emb_dim: int,
                 enc_hid_dim: int,
                 dec_hid_dim: int,
                 dropout: int,
                 attention: nn.Module):
        super().__init__()


        self.emb_dim = emb_dim
        self.enc_hid_dim = enc_hid_dim
        self.dec_hid_dim = dec_hid_dim
        self.output_dim = output_dim
        self.dropout = dropout
        self.attention = attention


        self.embedding = nn.Embedding(output_dim, emb_dim)


        self.rnn = nn.GRU((enc_hid_dim * 2) + emb_dim, dec_hid_dim)


        self.out = nn.Linear(self.attention.attn_in + emb_dim, output_dim)


        self.dropout = nn.Dropout(dropout)


    def _weighted_encoder_rep(self,
                              decoder_hidden: Tensor,
                              encoder_outputs: Tensor) -> Tensor:


        a = self.attention(decoder_hidden, encoder_outputs)


        a = a.unsqueeze(1)


        encoder_outputs = encoder_outputs.permute(1, 0, 2)


        # batched matrix multiply of the attention weights with the encoder
        # outputs yields one weighted context vector per example
        weighted_encoder_rep = torch.bmm(a, encoder_outputs)


        weighted_encoder_rep = weighted_encoder_rep.permute(1, 0, 2)


        return weighted_encoder_rep


    def forward(self,
                input: Tensor,
                decoder_hidden: Tensor,
                encoder_outputs: Tensor) -> Tuple[Tensor]:


        input = input.unsqueeze(0)


        embedded = self.dropout(self.embedding(input))


        weighted_encoder_rep = self._weighted_encoder_rep(decoder_hidden,
                                                          encoder_outputs)


        rnn_input = torch.cat((embedded, weighted_encoder_rep), dim = 2)


        output, decoder_hidden = self.rnn(rnn_input, decoder_hidden.unsqueeze(0))


        embedded = embedded.squeeze(0)
        output = output.squeeze(0)
        weighted_encoder_rep = weighted_encoder_rep.squeeze(0)


        output = self.out(torch.cat((output,
                                     weighted_encoder_rep,
                                     embedded), dim = 1))


        return output, decoder_hidden.squeeze(0)


class Seq2Seq(nn.Module):
    def __init__(self,
                 encoder: nn.Module,
                 decoder: nn.Module,
                 device: torch.device):
        super().__init__()


        self.encoder = encoder
        self.decoder = decoder
        self.device = device


    def forward(self,
                src: Tensor,
                trg: Tensor,
                teacher_forcing_ratio: float = 0.5) -> Tensor:


        batch_size = src.shape[1]
        max_len = trg.shape[0]
        trg_vocab_size = self.decoder.output_dim


        outputs = torch.zeros(max_len, batch_size, trg_vocab_size).to(self.device)


        encoder_outputs, hidden = self.encoder(src)


        # first input to the decoder is the <sos> token
        output = trg[0,:]


        for t in range(1, max_len):
            output, hidden = self.decoder(output, hidden, encoder_outputs)
            outputs[t] = output
            # teacher forcing: with probability teacher_forcing_ratio, feed the
            # ground-truth token as the next input; otherwise feed the model's
            # own highest-scoring prediction
            teacher_force = random.random() < teacher_forcing_ratio
            top1 = output.max(1)[1]
            output = (trg[t] if teacher_force else top1)


        return outputs


INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
# Larger hyperparameters (commented out here to keep training time short):
# ENC_EMB_DIM = 256
# DEC_EMB_DIM = 256
# ENC_HID_DIM = 512
# DEC_HID_DIM = 512
# ATTN_DIM = 64
# ENC_DROPOUT = 0.5
# DEC_DROPOUT = 0.5


ENC_EMB_DIM = 32
DEC_EMB_DIM = 32
ENC_HID_DIM = 64
DEC_HID_DIM = 64
ATTN_DIM = 8
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5


enc = Encoder(INPUT_DIM, ENC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, ENC_DROPOUT)


attn = Attention(ENC_HID_DIM, DEC_HID_DIM, ATTN_DIM)


dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, ENC_HID_DIM, DEC_HID_DIM, DEC_DROPOUT, attn)


model = Seq2Seq(enc, dec, device).to(device)


def init_weights(m: nn.Module):
    for name, param in m.named_parameters():
        if 'weight' in name:
            nn.init.normal_(param.data, mean=0, std=0.01)
        else:
            nn.init.constant_(param.data, 0)


model.apply(init_weights)


optimizer = optim.Adam(model.parameters())


def count_parameters(model: nn.Module):
    return sum(p.numel() for p in model.parameters() if p.requires_grad)


print(f'The model has {count_parameters(model):,} trainable parameters')

Out:

The model has 1,856,685 trainable parameters

Note: when scoring the performance of a language translation model in particular, we have to tell the nn.CrossEntropyLoss function to ignore the indices where the target is simply padding.

PAD_IDX = TRG.vocab.stoi['<pad>']


criterion = nn.CrossEntropyLoss(ignore_index=PAD_IDX)

Finally, we can train and evaluate this model:

import math
import time


def train(model: nn.Module,
          iterator: BucketIterator,
          optimizer: optim.Optimizer,
          criterion: nn.Module,
          clip: float):


    model.train()


    epoch_loss = 0


    for _, batch in enumerate(iterator):


        src = batch.src
        trg = batch.trg


        optimizer.zero_grad()


        output = model(src, trg)


        # drop the first (<sos>) position, which is never predicted, then flatten
        # so output is [(trg_len - 1) * batch_size, output_dim] and trg is a 1-D
        # vector of target indices, as CrossEntropyLoss expects
        output = output[1:].view(-1, output.shape[-1])
        trg = trg[1:].view(-1)


        loss = criterion(output, trg)


        loss.backward()


        torch.nn.utils.clip_grad_norm_(model.parameters(), clip)


        optimizer.step()


        epoch_loss += loss.item()


    return epoch_loss / len(iterator)


def evaluate(model: nn.Module,
             iterator: BucketIterator,
             criterion: nn.Module):


    model.eval()


    epoch_loss = 0


    with torch.no_grad():


        for _, batch in enumerate(iterator):


            src = batch.src
            trg = batch.trg


            output = model(src, trg, 0) #turn off teacher forcing


            output = output[1:].view(-1, output.shape[-1])
            trg = trg[1:].view(-1)


            loss = criterion(output, trg)


            epoch_loss += loss.item()


    return epoch_loss / len(iterator)


def epoch_time(start_time: int,
               end_time: int):
    elapsed_time = end_time - start_time
    elapsed_mins = int(elapsed_time / 60)
    elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
    return elapsed_mins, elapsed_secs


N_EPOCHS = 10
CLIP = 1


best_valid_loss = float('inf')


for epoch in range(N_EPOCHS):


    start_time = time.time()


    train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
    valid_loss = evaluate(model, valid_iterator, criterion)


    end_time = time.time()


    epoch_mins, epoch_secs = epoch_time(start_time, end_time)


    print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
    print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
    print(f'\t Val. Loss: {valid_loss:.3f} |  Val. PPL: {math.exp(valid_loss):7.3f}')
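
    # (Optional) best_valid_loss is defined above but otherwise unused; Ben
    # Trevett's original tutorial checkpoints the best weights like this
    # ('best-model.pt' is an assumed filename):
    if valid_loss < best_valid_loss:
        best_valid_loss = valid_loss
        torch.save(model.state_dict(), 'best-model.pt')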


test_loss = evaluate(model, test_iterator, criterion)


print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')

Out:

Epoch: 01 | Time: 0m 35s
        Train Loss: 5.667 | Train PPL: 289.080
         Val. Loss: 5.201 |  Val. PPL: 181.371
Epoch: 02 | Time: 0m 35s
        Train Loss: 4.968 | Train PPL: 143.728
         Val. Loss: 5.096 |  Val. PPL: 163.375
Epoch: 03 | Time: 0m 35s
        Train Loss: 4.720 | Train PPL: 112.221
         Val. Loss: 4.989 |  Val. PPL: 146.781
Epoch: 04 | Time: 0m 35s
        Train Loss: 4.586 | Train PPL:  98.094
         Val. Loss: 4.841 |  Val. PPL: 126.612
Epoch: 05 | Time: 0m 35s
        Train Loss: 4.430 | Train PPL:  83.897
         Val. Loss: 4.809 |  Val. PPL: 122.637
Epoch: 06 | Time: 0m 35s
        Train Loss: 4.331 | Train PPL:  75.997
         Val. Loss: 4.797 |  Val. PPL: 121.168
Epoch: 07 | Time: 0m 35s
        Train Loss: 4.240 | Train PPL:  69.434
         Val. Loss: 4.694 |  Val. PPL: 109.337
Epoch: 08 | Time: 0m 35s
        Train Loss: 4.116 | Train PPL:  61.326
         Val. Loss: 4.714 |  Val. PPL: 111.452
Epoch: 09 | Time: 0m 35s
        Train Loss: 4.004 | Train PPL:  54.815
         Val. Loss: 4.563 |  Val. PPL:  95.835
Epoch: 10 | Time: 0m 36s
        Train Loss: 3.922 | Train PPL:  50.519
         Val. Loss: 4.452 |  Val. PPL:  85.761
| Test Loss: 4.456 | Test PPL:  86.155 |
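
The tutorial stops at loss and perplexity numbers; to actually translate with the trained model, a minimal greedy-decoding helper can be built on top of the model, SRC, TRG, and device defined above. This is a sketch, not part of the original tutorial: translate_sentence is a hypothetical helper, and a real version should tokenize with the same Spacy pipeline the Fields use rather than str.split:

def translate_sentence(sentence: str, max_len: int = 50):
    model.eval()
    # naive whitespace tokenization -- swap in the Spacy 'de' tokenizer for real use
    tokens = [SRC.init_token] + sentence.lower().split() + [SRC.eos_token]
    src_indices = [SRC.vocab.stoi[token] for token in tokens]
    src_tensor = torch.LongTensor(src_indices).unsqueeze(1).to(device)  # [src_len, 1]

    with torch.no_grad():
        encoder_outputs, hidden = model.encoder(src_tensor)

    trg_indices = [TRG.vocab.stoi[TRG.init_token]]
    for _ in range(max_len):
        trg_tensor = torch.LongTensor([trg_indices[-1]]).to(device)
        with torch.no_grad():
            output, hidden = model.decoder(trg_tensor, hidden, encoder_outputs)
        pred_token = output.argmax(1).item()  # greedy: take the most likely token
        trg_indices.append(pred_token)
        if pred_token == TRG.vocab.stoi[TRG.eos_token]:
            break

    return [TRG.vocab.itos[i] for i in trg_indices[1:]]

print(translate_sentence('ein mann geht die stra?e entlang .'))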

Next steps

  • Check out the rest of Ben Trevett's tutorials using torchtext here.
  • Stay tuned for a tutorial using other torchtext features along with nn.Transformer for language modeling via next-word prediction!

Total running time of the script: (6 minutes 10.266 seconds)

Download Python source code: torchtext_translation_tutorial.py Download Jupyter notebook: torchtext_translation_tutorial.ipynb

Gallery generated by Sphinx-Gallery
