The Seq2Seq (Sequence-to-Sequence) model is a neural network architecture for processing sequential data, widely used in natural language processing (NLP) tasks such as machine translation, text generation, and dialogue systems.
Through an encoder-decoder architecture, it maps an input sequence (e.g., a sentence) to an output sequence (another sentence or sequence).
A Seq2Seq model consists of two key components.
The encoder is a recurrent neural network (RNN) or one of its variants, such as an LSTM or GRU, that receives the input sequence and converts it into a fixed-size context vector.
The encoder processes the input sequence one time step at a time, continually updating its representation of the input through its hidden state, until it reaches the end of the input sequence.
The final hidden state of this process is usually regarded as a summary of the entire input sequence and is passed to the decoder.
class Encoder(nn.Module):
    def __init__(self, input_dim, embedding_dim, hidden_size, num_layers, dropout):
        super(Encoder, self).__init__()
        # note hidden size and num layers
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        # create a dropout layer
        self.dropout = nn.Dropout(dropout)
        # embedding to convert input tokens into dense vectors
        self.embedding = nn.Embedding(input_dim, embedding_dim)
        # bilstm layer
        self.lstm = nn.LSTM(embedding_dim, hidden_size, num_layers=num_layers,
                            bidirectional=True, dropout=dropout)

    def forward(self, src):
        # src: (src_len, batch)
        embedded = self.dropout(self.embedding(src))
        out, (hidden, cell) = self.lstm(embedded)
        # return the final hidden and cell states as the context for the decoder
        return hidden, cell
The decoder is also an RNN; it takes the context vector produced by the encoder and generates the target sequence.
At each step the decoder produces an output token and feeds the previous step's output in as the next step's input, until a special end-of-sequence token is generated.
The decoder's initial state comes from the encoder's final hidden state, so the decoder can be understood as predicting the output sequence based on the global information produced by the encoder.
class Decoder(nn.Module):
    def __init__(self, output_dim, embedding_dim, hidden_size, num_layers, dropout):
        super(Decoder, self).__init__()
        self.output_dim = output_dim
        # note hidden size and num layers for the seq2seq class
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.dropout = nn.Dropout(dropout)
        # note inputs of the embedding layer
        self.embedding = nn.Embedding(output_dim, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_size, num_layers=num_layers,
                            bidirectional=True, dropout=dropout)
        # we apply softmax over the target vocab size
        self.fc = nn.Linear(hidden_size * 2, output_dim)

    def forward(self, input_token, hidden, cell):
        # adjust dimensions of the input token: (batch,) -> (1, batch)
        input_token = input_token.unsqueeze(0)
        emb = self.embedding(input_token)
        emb = self.dropout(emb)
        # note hidden and cell along with the output
        out, (hidden, cell) = self.lstm(emb, (hidden, cell))
        out = out.squeeze(0)
        pred = self.fc(out)
        return pred, hidden, cell
The basic workflow of a Seq2Seq model is as follows: the encoder reads the input sequence and compresses it into a context vector; the decoder takes that vector as its initial state and generates the output sequence one token at a time until it emits an end-of-sequence token.
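To make this workflow concrete, here is a minimal sketch of how the Encoder and Decoder above can be tied together into a single model. The Seq2Seq class name, the teacher_forcing_ratio parameter, and the use of teacher forcing are illustrative choices of ours, not something specified by the original code.

import random
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, encoder, decoder, device):
        super(Seq2Seq, self).__init__()
        self.encoder = encoder
        self.decoder = decoder
        self.device = device

    def forward(self, src, trg, teacher_forcing_ratio=0.5):
        # src: (src_len, batch), trg: (trg_len, batch)
        trg_len, batch_size = trg.shape
        trg_vocab_size = self.decoder.output_dim
        outputs = torch.zeros(trg_len, batch_size, trg_vocab_size, device=self.device)
        # encode the whole source sequence into the final hidden/cell states
        hidden, cell = self.encoder(src)
        # the first decoder input is the <sos> token
        input_token = trg[0, :]
        for t in range(1, trg_len):
            # run one decoding step and store the prediction
            pred, hidden, cell = self.decoder(input_token, hidden, cell)
            outputs[t] = pred
            # teacher forcing: sometimes feed the ground-truth token,
            # otherwise feed the model's own highest-scoring prediction
            use_teacher_forcing = random.random() < teacher_forcing_ratio
            input_token = trg[t] if use_teacher_forcing else pred.argmax(1)
        return outputs

During training, teacher forcing feeds the ground-truth token at a fraction of the steps, which tends to stabilize learning; at inference time the ratio would simply be set to 0 so the model always consumes its own predictions.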
Below is example code that uses Seq2Seq for machine translation.
First, we load the dataset from HuggingFace and split it into training, validation, and test sets.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
# imports needed by the preprocessing code further down
from torch.nn.utils.rnn import pad_sequence
from torchtext.vocab import build_vocab_from_iterator
import spacy
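The loading step described in the text is not shown in the code above; the following is a hedged sketch of it, assuming the data comes from the bentrevett/multi30k English-German dataset on the HuggingFace Hub, which exposes the "en" and "de" fields used by the preprocessing code below. Swap in your own dataset name if it differs.

from datasets import load_dataset

# assumption: an English-German parallel corpus with "en" and "de" fields
dataset = load_dataset("bentrevett/multi30k")
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]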
Next, load the spaCy models for the source and target languages.
spaCy is a powerful, production-ready advanced natural language processing library for Python.
Unlike many other NLP libraries, spaCy is designed for practical use rather than research experimentation.
It excels at efficient text processing with pretrained models and can perform tasks such as tokenization, part-of-speech tagging, named entity recognition, and dependency parsing.
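As a quick, self-contained illustration of spaCy's tokenization (the sample sentence and the printed output are ours, assuming en_core_web_sm is installed):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Two men are standing by the stove preparing food.")
print([token.text for token in doc])
# ['Two', 'men', 'are', 'standing', 'by', 'the', 'stove', 'preparing', 'food', '.']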
en_nlp = spacy.load('en_core_web_sm')
de_nlp = spacy.load('de_core_news_sm')

# tokenizer
def sample_tokenizer(sample, en_nlp, de_nlp, lower, max_length, sos_token, eos_token):
    en_tokens = [token.text for token in en_nlp.tokenizer(sample["en"])][:max_length]
    de_tokens = [token.text for token in de_nlp.tokenizer(sample["de"])][:max_length]
    if lower:
        en_tokens = [token.lower() for token in en_tokens]
        de_tokens = [token.lower() for token in de_tokens]
    en_tokens = [sos_token] + en_tokens + [eos_token]
    de_tokens = [sos_token] + de_tokens + [eos_token]
    return {"en_tokens": en_tokens, "de_tokens": de_tokens}

fn_kwargs = {
    "en_nlp": en_nlp,
    "de_nlp": de_nlp,
    "lower": True,
    "max_length": 1000,
    "sos_token": '<sos>',
    "eos_token": '<eos>',
}
train_data = train_data.map(sample_tokenizer, fn_kwargs=fn_kwargs)
val_data = val_data.map(sample_tokenizer, fn_kwargs=fn_kwargs)
test_data = test_data.map(sample_tokenizer, fn_kwargs=fn_kwargs)

# build vocabularies from the training tokens
min_freq = 2
specials = ['<unk>', '<pad>', '<sos>', '<eos>']
en_vocab = build_vocab_from_iterator(train_data['en_tokens'], specials=specials, min_freq=min_freq)
de_vocab = build_vocab_from_iterator(train_data['de_tokens'], specials=specials, min_freq=min_freq)

assert en_vocab['<unk>'] == de_vocab['<unk>']
assert en_vocab['<pad>'] == de_vocab['<pad>']
unk_index = en_vocab['<unk>']
pad_index = en_vocab['<pad>']
en_vocab.set_default_index(unk_index)
de_vocab.set_default_index(unk_index)

# convert tokens into vocabulary indices
def sample_num(sample, en_vocab, de_vocab):
    en_ids = en_vocab.lookup_indices(sample["en_tokens"])
    de_ids = de_vocab.lookup_indices(sample["de_tokens"])
    return {"en_ids": en_ids, "de_ids": de_ids}

fn_kwargs = {"en_vocab": en_vocab, "de_vocab": de_vocab}
train_data = train_data.map(sample_num, fn_kwargs=fn_kwargs)
val_data = val_data.map(sample_num, fn_kwargs=fn_kwargs)
test_data = test_data.map(sample_num, fn_kwargs=fn_kwargs)

train_data = train_data.with_format(type="torch", columns=['en_ids', 'de_ids'], output_all_columns=True)
val_data = val_data.with_format(type="torch", columns=['en_ids', 'de_ids'], output_all_columns=True)
test_data = test_data.with_format(type="torch", columns=['en_ids', 'de_ids'], output_all_columns=True)

def get_collate_fn(pad_index):
    def collate_fn(batch):
        batch_en_ids = [sample["en_ids"] for sample in batch]
        batch_de_ids = [sample["de_ids"] for sample in batch]
        # pad every sequence in the batch to the same length
        batch_en_ids = pad_sequence(batch_en_ids, padding_value=pad_index)
        batch_de_ids = pad_sequence(batch_de_ids, padding_value=pad_index)
        batch = {"en_ids": batch_en_ids, "de_ids": batch_de_ids}
        return batch
    return collate_fn

def get_dataloader(dataset, batch_size, shuffle, pad_index):
    collate_fn = get_collate_fn(pad_index)
    # the tail of this function was cut off in the original post; the DataLoader
    # construction below is restored from context
    dataloader = DataLoader(dataset, batch_size=batch_size, collate_fn=collate_fn, shuffle=shuffle)
    return dataloader
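Finally, a hedged sketch of how the pieces could be assembled into a single training step, using the Seq2Seq wrapper sketched earlier and treating German as the source language and English as the target. Every hyperparameter value below (batch size, embedding size, hidden size, number of layers, dropout) is an illustrative assumption, not a value from the original post.

# all sizes below are illustrative assumptions
batch_size = 128
train_loader = get_dataloader(train_data, batch_size, shuffle=True, pad_index=pad_index)
val_loader = get_dataloader(val_data, batch_size, shuffle=False, pad_index=pad_index)
test_loader = get_dataloader(test_data, batch_size, shuffle=False, pad_index=pad_index)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# German (de) as the source, English (en) as the target
encoder = Encoder(input_dim=len(de_vocab), embedding_dim=256,
                  hidden_size=512, num_layers=2, dropout=0.5)
decoder = Decoder(output_dim=len(en_vocab), embedding_dim=256,
                  hidden_size=512, num_layers=2, dropout=0.5)
model = Seq2Seq(encoder, decoder, device).to(device)

optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss(ignore_index=pad_index)

# one illustrative training step
batch = next(iter(train_loader))
src = batch["de_ids"].to(device)   # (src_len, batch)
trg = batch["en_ids"].to(device)   # (trg_len, batch)

output = model(src, trg)           # (trg_len, batch, target_vocab_size)
# skip the <sos> position when computing the loss
loss = criterion(output[1:].reshape(-1, output.shape[-1]), trg[1:].reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()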