A deep learning model - BERT from Google AI Research - has yielded state-of-the-art results in a wide variety of Natural Language Processing (NLP) tasks. In this tutorial, we will show how to load and train the BERT model from R, using Keras.
Today, we’re happy to feature a guest post written by Turgut Abdullayev, showing how to use BERT from R. Turgut is a data scientist at AccessBank Azerbaijan. Currently, he is pursuing a Ph.D. in economics at Baku State University, Azerbaijan.
In the previous post, Sigrid Keydana explained the logic behind the reticulate package and how it enables interoperability between Python and R. So, this time we will build a classification model with BERT, taking into account one of the powerful capabilities of the reticulate package – calling Python from R via importing Python modules.
Before we start, make sure that the Python version used is 3, as Python 2 can introduce lots of difficulties while working with BERT, such as Unicode issues related to the input text.
Note: the R interface to Keras runs on top of TF (tf.keras), while by default, keras-bert does not use tf.keras. Setting the environment variable below makes keras-bert use it as well.
Sys.setenv(TF_KERAS=1)
# make sure we use python 3
reticulate::use_python('C:/Users/turgut.abdullayev/AppData/Local/Continuum/anaconda3/python.exe',
required=T)
# to see python version
reticulate::py_config()
python:         C:/Users/turgut.abdullayev/AppData/Local/Continuum/anaconda3/python.exe
libpython:      C:/Users/turgut.abdullayev/AppData/Local/Continuum/anaconda3/python37.dll
pythonhome:     C:\Users\TURGUT~1.ABD\AppData\Local\CONTIN~1\ANACON~1
version:        3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)]
Architecture:   64bit
numpy:          C:\Users\TURGUT~1.ABD\AppData\Local\CONTIN~1\ANACON~1\lib\site-packages\numpy
numpy_version:  1.16.4

NOTE: Python version was forced by use_python function
Luckily for us, a convenient way of importing BERT with Keras was created by Zhao HG: it is called keras-bert. For us, this means that importing that same Python library with reticulate will allow us to build a popular state-of-the-art model within R.
There are several methods to install keras-bert in Python.
pip install keras-bert
python3 -m pip install keras-bert
conda install keras-bert
After this procedure, you can check whether keras-bert is installed or not.
reticulate::py_module_available('keras_bert')
[1] TRUE
Finally, the TensorFlow version used should be 1.14 or 1.15. You can check it as follows:
tensorflow::tf_version()
[1] ‘1.14’
In a nutshell:
pip install keras-bert
tensorflow::install_tensorflow(version = "1.15")
BERT1 is a pre-trained deep learning model introduced by Google AI Research which has been trained on Wikipedia and BooksCorpus. It has a unique way to understand the structure of a given text. Instead of reading the text from left to right or from right to left, BERT, using an attention mechanism called the Transformer encoder2, reads the entire word sequence at once. This allows it to understand a word based on its surroundings. There are different kinds of pre-trained BERT models, the main difference between them being the number of trained parameters. In our case, BERT with 12 encoder layers (Transformer blocks), 768 hidden units, 12 attention heads3, and 110M parameters will be used to create a text classification model.
Loading a pre-trained BERT model is straightforward. The downloaded zip file contains:
- bert_config.json, the model configuration
- vocab.txt, the WordPiece vocabulary
- bert_model.ckpt, the pre-trained weights
Let’s load keras-bert via reticulate and prepare a tokenizer object. The BERT tokenizer will help us turn words into indices.
library(reticulate)
k_bert = import('keras_bert')
# path to vocab.txt from the unzipped pre-trained model (adjust to your download location)
vocab_path = 'uncased_L-12_H-768_A-12/vocab.txt'
token_dict = k_bert$load_vocabulary(vocab_path)
tokenizer = k_bert$Tokenizer(token_dict)
BERT uses a WordPiece tokenization strategy. If a word is out-of-vocabulary (OOV), BERT will break it down into subwords (eating => eat, ##ing).
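To see the tokenizer in action, we can encode a short example. This is a minimal sketch: the input string is arbitrary, and the exact indices depend on the vocabulary file.

# encode one string, padded/truncated to 10 tokens
enc = tokenizer$encode('unaffable weather', max_len = 10L)
enc[[1]]  # token indices, zero-padded to length 10 ([CLS] and [SEP] are added automatically)
enc[[2]]  # segment ids, all 0 for a single-sentence input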
There are 3 types of embedding layers in BERT:
- token embeddings, one vector per (sub)word index
- segment embeddings, distinguishing the first and second sentence of a pair
- position embeddings, encoding each token's position in the sequence
As usual with Keras, the batch size, number of epochs, and learning rate should be defined for training BERT. Additionally, the sequence length is needed.
seq_length = 50L
bch_size = 70
epochs = 1
learning_rate = 1e-4
DATA_COLUMN = 'comment_text'
LABEL_COLUMN = 'target'
Note: the max input length is 512, and the model is extremely compute intensive even on GPU.
We can load the BERT model and automatically pad sequences to the desired length via the seq_len argument. keras-bert4 makes the loading process very easy and comfortable.
# paths to the config and checkpoint files from the same archive
# (adjust to your download location)
config_path = 'uncased_L-12_H-768_A-12/bert_config.json'
checkpoint_path = 'uncased_L-12_H-768_A-12/bert_model.ckpt'

model = k_bert$load_trained_model_from_checkpoint(
  config_path,
  checkpoint_path,
  training = TRUE,
  trainable = TRUE,
  seq_len = seq_length)
The dataset for this post is taken from the Kaggle Jigsaw Unintended Bias in Toxicity Classification competition.
In order to prepare the dataset, we write a preprocessing function which reads and tokenizes the data simultaneously. Then, we feed the outputs of the function as input to the BERT model.
# tokenize text: turn each comment into (token indices, segment ids)
# and collect the labels
tokenize_fun = function(dataset) {
  c(indices, target, segments) %<-% list(list(), list(), list())
  for (i in 1:nrow(dataset)) {
    c(indices_tok, segments_tok) %<-% tokenizer$encode(dataset[[DATA_COLUMN]][i],
                                                       max_len = seq_length)
    indices = indices %>% append(list(as.matrix(indices_tok)))
    target = target %>% append(dataset[[LABEL_COLUMN]][i])
    segments = segments %>% append(list(as.matrix(segments_tok)))
  }
  return(list(indices, segments, target))
}
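The call below also assumes a small reading helper, dt_data(), which this extract of the post does not show. A minimal sketch, assuming data.table is available for fast CSV reading:

# read the first rows_to_read rows of the csv, then tokenize them
dt_data = function(dir, rows_to_read) {
  data = data.table::fread(dir, nrows = rows_to_read)
  c(x_train, x_segment, y_train) %<-% tokenize_fun(data)
  return(list(x_train, x_segment, y_train))
}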
The way we have written the preprocessing functions, they first read the data, then pad with zeros and encode words into indices. Hence, we get 3 outputs:
c(x_train,x_segment, y_train) %<-%
dt_data('~/Downloads/jigsaw-unintended-bias-in-toxicity-classification/train.csv',2000)
The input data are in list format. They need to be extracted and transposed; then, the token and segment matrices are placed into a list, as sketched below.
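A hedged sketch of that conversion; concat and targets are the names the fit() call below expects:

# bind the per-observation column vectors into matrices and transpose,
# so that rows correspond to observations
train = do.call(cbind, x_train) %>% t()
segments = do.call(cbind, x_segment) %>% t()
targets = do.call(cbind, y_train) %>% t()

# the model takes two inputs: token indices and segment ids
concat = c(list(train), list(segments))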
The Adam optimizer with warmup starts training with a low learning rate, gradually increases it over the first (warmup) training steps, and then decays it. Fine-tuning BERT on new data without warmup can negatively affect the model.
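The compile() call below uses decay_steps and warmup_steps, which this extract of the post never defines. keras-bert provides calc_train_steps() for deriving them, so a plausible sketch (assuming the default warmup proportion) is:

# derive step counts from the number of examples, batch size and epochs;
# by default a fraction of the total steps is reserved for warmup
c(decay_steps, warmup_steps) %<-% k_bert$calc_train_steps(
  targets %>% length(),
  batch_size = bch_size,
  epochs = epochs
)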
In order to build a binary classification model, the output of the model should contain 1 unit. Therefore, first of all, we get the input and output layers of the loaded BERT model. Then, adding a dense layer on top of the output meets our needs.
library(keras)

input_1 = get_layer(model, name = 'Input-Token')$input
input_2 = get_layer(model, name = 'Input-Segment')$input
inputs = list(input_1, input_2)

dense = get_layer(model, name = 'NSP-Dense')$output

outputs = dense %>% layer_dense(units = 1L, activation = 'sigmoid',
                                kernel_initializer = initializer_truncated_normal(stddev = 0.02),
                                name = 'output')

model = keras_model(inputs = inputs, outputs = outputs)
This is how the model architecture looks after adding the dense layer and padding the input sequences.
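The post does not show the call that produced the printout below; in R Keras it would ordinarily be:

summary(model)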
Model
___________________________________________________________________________________________
Layer (type)                            Output Shape          Param #   Connected to
===========================================================================================
Input-Token (InputLayer)                (None, 50)            0
___________________________________________________________________________________________
Input-Segment (InputLayer)              (None, 50)            0
___________________________________________________________________________________________
Embedding-Token (TokenEmbedding)        [(None, 50, 768), (   23440896  Input-Token[0][0]
___________________________________________________________________________________________
Embedding-Segment (Embedding)           (None, 50, 768)       1536      Input-Segment[0][0]
___________________________________________________________________________________________
Embedding-Token-Segment (Add)           (None, 50, 768)       0         Embedding-Token[0][0]
                                                                        Embedding-Segment[0][0]
___________________________________________________________________________________________
Embedding-Position (PositionEmbedding)  (None, 50, 768)       38400     Embedding-Token-Segment[0][0]
___________________________________________________________________________________________
Embedding-Dropout (Dropout)             (None, 50, 768)       0         Embedding-Position[0][0]
___________________________________________________________________________________________
Embedding-Norm (LayerNormalization)     (None, 50, 768)       1536      Embedding-Dropout[0][0]
___________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttention        (None, 50, 768)       2362368   Embedding-Norm[0][0]
___________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttention-       (None, 50, 768)       0         Encoder-1-MultiHeadSelfAttention[0][0]
Dropout (Dropout)
___________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttention-       (None, 50, 768)       0         Embedding-Norm[0][0]
Add (Add)                                                               Encoder-1-MultiHeadSelfAttention-Dropout[0][0]
___________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttention-       (None, 50, 768)       1536      Encoder-1-MultiHeadSelfAttention-Add[0][0]
Norm (LayerNormalization)
___________________________________________________________________________________________
Encoder-1-FeedForward (FeedForward)     (None, 50, 768)       4722432   Encoder-1-MultiHeadSelfAttention-Norm[0][0]
___________________________________________________________________________________________
Encoder-1-FeedForward-Dropout (Dropout) (None, 50, 768)       0         Encoder-1-FeedForward[0][0]
___________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add)         (None, 50, 768)       0         Encoder-1-MultiHeadSelfAttention-Norm[0][0]
                                                                        Encoder-1-FeedForward-Dropout[0][0]
___________________________________________________________________________________________
Encoder-1-FeedForward-Norm              (None, 50, 768)       1536      Encoder-1-FeedForward-Add[0][0]
(LayerNormalization)
___________________________________________________________________________________________

   ... Encoder-2 through Encoder-12 repeat the same block of eight layers, each encoder
       taking the previous encoder's FeedForward-Norm output as input ...

___________________________________________________________________________________________
Extract (Extract)                       (None, 768)           0         Encoder-12-FeedForward-Norm[0][0]
___________________________________________________________________________________________
NSP-Dense (Dense)                       (None, 768)           590592    Extract[0][0]
___________________________________________________________________________________________
output (Dense)                          (None, 1)             769       NSP-Dense[0][0]
===========================================================================================
Total params: 109,128,193
Trainable params: 109,128,193
Non-trainable params: 0
___________________________________________________________________________________________
As usual with Keras, before training the model, we need to compile it. Then, using fit(), we feed it the R arrays.
model %>% compile(
  optimizer = k_bert$AdamWarmup(decay_steps = decay_steps,
                                warmup_steps = warmup_steps,
                                lr = learning_rate),
  loss = 'binary_crossentropy',
  metrics = 'accuracy'
)
model %>% fit(
  concat,
  targets,
  epochs = epochs,
  batch_size = bch_size,
  validation_split = 0.2)
In this post, we’ve shown how we can use Keras to conveniently load, configure, and train a BERT model.
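As a natural follow-up (a sketch, not shown in the post itself), the fine-tuned model can score new comments, as long as they go through the same tokenization and padding pipeline:

# predicted toxicity probabilities for the tokenized inputs
preds = model %>% predict(concat, batch_size = bch_size)
head(preds)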
1. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding↩︎
2. Attention focuses on salient parts of the input by taking a weighted average of them.↩︎
3. The 768 hidden units are divided into 12 chunks (heads) of 64 output dimensions each; afterwards, the results from the chunks are concatenated and forwarded to the next layer.↩︎
4. The keras-bert implementation of BERT; official pre-trained models can be loaded for feature extraction and prediction.↩︎
Text and figures are licensed under Creative Commons Attribution CC BY 4.0. Source code is available at https://github.com/henry090/BERT-from-R, unless otherwise noted. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".
For attribution, please cite this work as
Abdullayev (2019, Sept. 30). Posit AI Blog: BERT from R. Retrieved from https://blogs.rstudio.com/tensorflow/posts/2019-09-30-bert-r/
BibTeX citation
@misc{abdullayev2019bert,
  author = {Abdullayev, Turgut},
  title = {Posit AI Blog: BERT from R},
  url = {https://blogs.rstudio.com/tensorflow/posts/2019-09-30-bert-r/},
  year = {2019}
}