# 14.4. Pretraining word2vec

In this section, we will train a skip-gram model defined in Section 14.1.

First, import the packages and modules required for the experiment, and load the PTB dataset.

from d2l import mxnet as d2l
from mxnet import autograd, gluon, np, npx
from mxnet.gluon import nn
npx.set_np()

batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                     num_noise_words)
from d2l import torch as d2l
import torch
from torch import nn

batch_size, max_window_size, num_noise_words = 512, 5, 5
data_iter, vocab = d2l.load_data_ptb(batch_size, max_window_size,
                                     num_noise_words)
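
Before building the model, it can help to sanity-check one minibatch. The following is a minimal sketch (not part of the original notebook) that assumes the Section 14.3 data pipeline, whose minibatches contain the central target words, the concatenated contexts and negatives, the masks, and the labels:

names = ['centers', 'contexts_negatives', 'masks', 'labels']
for batch in data_iter:
    # Print the shape of each component of the first minibatch, then stop
    for name, data in zip(names, batch):
        print(name, 'shape:', data.shape)
    break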

## 14.4.1. The Skip-Gram Model

We will implement the skip-gram model by using embedding layers and minibatch multiplication. These methods are also often used to implement other natural language processing applications.

### 14.4.1.1. Embedding Layer

As described in Section 9.7, the layer that maps a word index to its feature vector is called the embedding layer, which can be obtained by creating an nn.Embedding instance in high-level APIs. The weight of the embedding layer is a matrix whose number of rows is the dictionary size (input_dim) and whose number of columns is the dimension of each word vector (output_dim). We set the dictionary size to $$20$$ and the word vector dimension to $$4$$.

embed = nn.Embedding(input_dim=20, output_dim=4)
embed.initialize()
embed.weight
Parameter embedding0_weight (shape=(20, 4), dtype=float32)
embed = nn.Embedding(num_embeddings=20, embedding_dim=4)
print(f'Parameter embedding_weight ({embed.weight.shape}, '
      f'dtype={embed.weight.dtype})')
Parameter embedding_weight (torch.Size([20, 4]), dtype=torch.float32)

The input of the embedding layer is the index of a word. When we enter the index $$i$$ of a word, the embedding layer returns the $$i^\mathrm{th}$$ row of the weight matrix as its word vector. Below we enter indices of shape ($$2$$, $$3$$) into the embedding layer. Because the dimension of the word vector is 4, we obtain word vectors of shape ($$2$$, $$3$$, $$4$$).

x = np.array([[1, 2, 3], [4, 5, 6]])
embed(x)
array([[[ 0.01438687,  0.05011239,  0.00628365,  0.04861524],
        [-0.01068833,  0.01729892,  0.02042518, -0.01618656],
        [-0.00873779, -0.02834515,  0.05484822, -0.06206018]],

       [[ 0.06491279, -0.03182812, -0.01631819, -0.00312688],
        [ 0.0408415 ,  0.04370362,  0.00404529, -0.0028032 ],
        [ 0.00952624, -0.01501013,  0.05958354,  0.04705103]]])
x = torch.tensor([[1, 2, 3], [4, 5, 6]])
embed(x)
tensor([[[ 1.1437,  0.5280, -0.8023, -1.0549],
         [ 1.3125,  0.2065, -0.6631,  0.0770],
         [-0.1221,  0.0322,  0.5749,  0.5369]],

        [[ 0.1518, -1.2389, -1.2231, -1.0293],
         [-0.7240, -0.0219,  0.4400, -1.3385],
         [ 0.3204,  0.8878,  0.7034, -1.1366]]], grad_fn=<EmbeddingBackward>)
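
To make the table-lookup behavior concrete, we can check that each returned vector is exactly the corresponding row of the weight matrix. This is a small sketch (not in the original notebook) using the PyTorch embed and x defined above:

# Each output vector equals the row of the weight matrix named by the input index
assert torch.equal(embed(x)[0, 0], embed.weight[1])  # index 1 -> row 1
assert torch.equal(embed(x)[1, 2], embed.weight[6])  # index 6 -> row 6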

### 14.4.1.2. Skip-gram Model Forward Calculation

In the forward calculation, the input of the skip-gram model contains the central target word index center and the concatenated context and noise word indices contexts_and_negatives. The center variable has the shape (batch size, 1), while the contexts_and_negatives variable has the shape (batch size, max_len). These two variables are first transformed from word indices to word vectors by the word embedding layers, and then minibatch multiplication yields an output of shape (batch size, 1, max_len). Each element in the output is the inner product of the central target word vector and a context word vector or noise word vector.

def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
    v = embed_v(center)
    u = embed_u(contexts_and_negatives)
    pred = npx.batch_dot(v, u.swapaxes(1, 2))
    return pred
def skip_gram(center, contexts_and_negatives, embed_v, embed_u):
    v = embed_v(center)
    u = embed_u(contexts_and_negatives)
    pred = torch.bmm(v, u.permute(0, 2, 1))
    return pred

Let us verify that the output shape is (batch size, 1, max_len).

skip_gram(np.ones((2, 1)), np.ones((2, 4)), embed, embed).shape
(2, 1, 4)
skip_gram(torch.ones((2, 1), dtype=torch.long),
          torch.ones((2, 4), dtype=torch.long), embed, embed).shape
torch.Size([2, 1, 4])
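
As a further check of the forward calculation (a hypothetical sketch, not from the original notebook), each entry of the output should equal the inner product of the center word vector with one context or noise word vector. Using the PyTorch skip_gram and embed from above:

center = torch.tensor([[0]])
contexts = torch.tensor([[1, 2, 3]])
out = skip_gram(center, contexts, embed, embed)  # shape (1, 1, 3)
# Recompute the same inner products directly from the embedding vectors
manual = embed(center)[0, 0] @ embed(contexts)[0].T
assert torch.allclose(out[0, 0], manual)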

## 14.4.2. Training

Before training the word embedding model, we need to define the loss function of the model.

### 14.4.2.1. Binary Cross Entropy Loss Function

According to the definition of the loss function in negative sampling, we can directly use the binary cross-entropy loss function from high-level APIs.
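
For reference, the binary cross-entropy loss with logits is applied elementwise. For a predicted logit $$z$$ and a label $$y \in \{0, 1\}$$, with $$\sigma$$ denoting the sigmoid function,

$$\ell(z, y) = -\left[\, y \log \sigma(z) + (1 - y) \log\bigl(1 - \sigma(z)\bigr) \right].$$

For a positive (context) pair the label is 1 and the term reduces to $$-\log \sigma(z)$$; for a sampled noise pair the label is 0 and it reduces to $$-\log\bigl(1 - \sigma(z)\bigr)$$, matching the negative sampling loss described earlier.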

loss = gluon.loss.SigmoidBCELoss()
class SigmoidBCELoss(nn.Module):
    # Binary cross-entropy loss with masking
    def __init__(self):
        super().__init__()

    def forward(self, inputs, target, mask=None):
        out = nn.functional.binary_cross_entropy_with_logits(
            inputs, target, weight=mask, reduction="none")
        return out.mean(dim=1)

loss = SigmoidBCELoss()

It is worth mentioning that we can use the mask variable to specify which predicted values and labels in the minibatch participate in the loss calculation: when the mask is 1, the predicted value and label at the corresponding position participate in the calculation; when the mask is 0, they do not. As we mentioned earlier, mask variables can be used to avoid the effect of padding on the loss calculation.

pred = np.array([[.5]*4]*2)
label = np.array([[1., 0., 1., 0.]]*2)
mask = np.array([[1, 1, 1, 1], [1, 1, 0, 0]])
loss(pred, label, mask)
array([0.724077 , 0.3620385])
pred = torch.tensor([[.5]*4]*2)
label = torch.tensor([[1., 0., 1., 0.]]*2)
mask = torch.tensor([[1, 1, 1, 1], [1, 1, 0, 0]])
loss(pred, label, mask)
tensor([0.7241, 0.3620])
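
As a sanity check (a quick sketch in plain Python, not part of the training code), we can recompute these values by hand from the binary cross-entropy formula above:

import math

def bce_with_logits(logit, label):
    p = 1 / (1 + math.exp(-logit))  # sigmoid
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# First example: all four positions are valid (mask = [1, 1, 1, 1])
print(sum(bce_with_logits(0.5, y) for y in [1., 0., 1., 0.]) / 4)  # ~0.7241
# Second example: only two positions are valid (mask = [1, 1, 0, 0]),
# but the mean is still taken over all four positions
print(sum(bce_with_logits(0.5, y) for y in [1., 0.]) / 4)          # ~0.3620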

Because different examples may contain different numbers of valid (non-padded) positions, we can normalize the loss in each example by its number of valid positions. For instance, the second example above has only two valid positions, so its loss of 0.3620 is rescaled by $$4/2$$ to match the first example's 0.7241.

loss(pred, label, mask) / mask.sum(axis=1) * mask.shape[1]
array([0.724077, 0.724077])
loss(pred, label, mask) / mask.sum(axis=1) * mask.shape[1]
tensor([0.7241, 0.7241])

### 14.4.2.2. Initializing Model Parameters

We construct two embedding layers, one for the central target words and one for the context words, and set the hyperparameter word vector dimension embed_size to 100.

embed_size = 100
net = nn.Sequential()
net.add(nn.Embedding(input_dim=len(vocab), output_dim=embed_size),
        nn.Embedding(input_dim=len(vocab), output_dim=embed_size))
embed_size = 100
net = nn.Sequential(nn.Embedding(num_embeddings=len(vocab),
                                 embedding_dim=embed_size),
                    nn.Embedding(num_embeddings=len(vocab),
                                 embedding_dim=embed_size))
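
As a quick check (a hypothetical sketch, shown for the PyTorch net), each of the two embedding layers should hold a weight matrix with len(vocab) rows and embed_size columns:

# Both layers map vocabulary indices to 100-dimensional vectors
print(net[0].weight.shape)  # torch.Size([len(vocab), 100])
print(net[1].weight.shape)  # torch.Size([len(vocab), 100])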

### 14.4.2.3. Training

The training function is defined below. Because of padding, its loss calculation differs slightly from that in the previous training functions.

def train(net, data_iter, lr, num_epochs, device=d2l.try_gpu()):
    net.initialize(ctx=device, force_reinit=True)
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': lr})
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs])
    metric = d2l.Accumulator(2)  # Sum of losses, no. of tokens
    for epoch in range(num_epochs):
        timer, num_batches = d2l.Timer(), len(data_iter)
        for i, batch in enumerate(data_iter):
            center, context_negative, mask, label = [
                data.as_in_ctx(device) for data in batch]
            with autograd.record():
                pred = skip_gram(center, context_negative, net[0], net[1])
                # Normalize the masked loss by the number of valid positions
                l = (loss(pred.reshape(label.shape), label, mask) /
                     mask.sum(axis=1) * mask.shape[1])
            l.backward()
            trainer.step(batch_size)
            metric.add(l.sum(), l.size)
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, '
          f'{metric[1] / timer.stop():.1f} tokens/sec on {str(device)}')
def train(net, data_iter, lr, num_epochs, device=d2l.try_gpu()):
    def init_weights(m):
        if type(m) == nn.Embedding:
            nn.init.xavier_uniform_(m.weight)
    net.apply(init_weights)
    net = net.to(device)
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs])
    metric = d2l.Accumulator(2)  # Sum of losses, no. of tokens
    for epoch in range(num_epochs):
        timer, num_batches = d2l.Timer(), len(data_iter)
        for i, batch in enumerate(data_iter):
            optimizer.zero_grad()
            center, context_negative, mask, label = [
                data.to(device) for data in batch]

            pred = skip_gram(center, context_negative, net[0], net[1])
            # Normalize the masked loss by the number of valid positions
            l = (loss(pred.reshape(label.shape).float(), label.float(), mask)
                 / mask.sum(axis=1) * mask.shape[1])
            l.sum().backward()
            optimizer.step()
            metric.add(l.sum(), l.numel())
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1],))
    print(f'loss {metric[0] / metric[1]:.3f}, '
          f'{metric[1] / timer.stop():.1f} tokens/sec on {str(device)}')

Now, we can train a skip-gram model using negative sampling.

lr, num_epochs = 0.01, 5
train(net, data_iter, lr, num_epochs)
loss 0.373, 107559.5 tokens/sec on gpu(0)
lr, num_epochs = 0.01, 5
train(net, data_iter, lr, num_epochs)
loss 0.373, 439322.6 tokens/sec on cuda:0

## 14.4.3. Applying the Word Embedding Model

After training the word embedding model, we can measure similarity in meaning between words based on the cosine similarity of their word vectors. As we can see below, when using the trained word embedding model, the words closest in meaning to the word “chip” are mostly related to chips.
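
Recall that the cosine similarity between two vectors $$\mathbf{x}$$ and $$\mathbf{y}$$ is

$$\cos(\mathbf{x}, \mathbf{y}) = \frac{\mathbf{x}^\top \mathbf{y}}{\|\mathbf{x}\|\,\|\mathbf{y}\|},$$

which ranges from $$-1$$ to $$1$$ and is larger when the two vectors point in similar directions. The functions below compute it between the query word vector and every row of the embedding weight matrix; the small constant 1e-9 avoids division by zero.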

def get_similar_tokens(query_token, k, embed):
    W = embed.weight.data()
    x = W[vocab[query_token]]
    # Compute the cosine similarity. Add 1e-9 for numerical stability
    cos = np.dot(W, x) / np.sqrt(np.sum(W * W, axis=1) * np.sum(x * x) + 1e-9)
    topk = npx.topk(cos, k=k+1, ret_typ='indices').asnumpy().astype('int32')
    for i in topk[1:]:  # Remove the input words
        print(f'cosine sim={float(cos[i]):.3f}: {vocab.idx_to_token[i]}')

get_similar_tokens('chip', 3, net[0])
cosine sim=0.594: microprocessor
cosine sim=0.494: intel
cosine sim=0.478: desktop
def get_similar_tokens(query_token, k, embed):
    W = embed.weight.data
    x = W[vocab[query_token]]
    # Compute the cosine similarity. Add 1e-9 for numerical stability
    cos = torch.mv(W, x) / torch.sqrt(torch.sum(W * W, dim=1) *
                                      torch.sum(x * x) + 1e-9)
    topk = torch.topk(cos, k=k+1)[1].cpu().numpy().astype('int32')
    for i in topk[1:]:  # Remove the input words
        print(f'cosine sim={float(cos[i]):.3f}: {vocab.idx_to_token[i]}')

get_similar_tokens('chip', 3, net[0])
cosine sim=0.517: intel
cosine sim=0.441: chips
cosine sim=0.412: microprocessor

## 14.4.4. Summary

• We can pretrain a skip-gram model through negative sampling.

## 14.4.5. Exercises

1. Set sparse_grad=True when creating an instance of nn.Embedding. Does it accelerate training? Look up MXNet documentation to learn the meaning of this argument.

2. Try to find synonyms for other words.

3. Tune the hyperparameters and observe and analyze the experimental results.

4. When the dataset is large, we usually sample the context words and the noise words for the central target word in the current minibatch only when updating the model parameters. In other words, the same central target word may have different context words or noise words in different epochs. What are the benefits of this sort of training? Try to implement this training method.