Text Classification With Happy Transformer and Hugging Face's Evaluate Library
In this article, we'll discuss how to train a Transformer model called BERT to perform sentiment analysis on Amazon reviews using Happy Transformer, a Python package I'm the lead maintainer of. We'll also use Hugging Face's new Python library called Evaluate, which makes it easier than ever to evaluate AI models.
Happy Transformer is built on top of Hugging Face's Transformers library to simplify creating and training NLP Transformer models. We'll use the library to train a model to perform text classification with just a few lines of code for the AI aspects of the project.
The process of training a model for text classification with Happy Transformer is more or less the same regardless of the dataset you're using. For this tutorial, we'll use a dataset from Hugging Face's dataset distribution hub called "amazon_polarity." This dataset contains Amazon reviews, along with whether each review was positive or negative. You can find other text classification datasets on Hugging Face's Hub here.
Hugging Face released a Python library called Evaluate just a few days ago. This library allows programmers to create their own metrics to evaluate models and upload them for others to use. At launch, it included 43 metrics, among them accuracy, precision, and recall, which are the three we'll cover in this article. Within just a few days, the library has already gained 526 stars on GitHub. Although this is far fewer than the 64.2k stars the Hugging Face Transformers library has, it's still a great start, and I'm confident the library will grow to become very popular.
Check out the code for this tutorial within Google Colab.
Installation
We'll use four packages: Happy Transformer, Evaluate, Datasets and TQDM. Let's start by installing Happy Transformer:
pip install happytransformer
Datasets and TQDM are dependencies of Happy Transformer, so we do not need to pip install them. We just need to install Hugging Face's Evaluate package.
pip install evaluate
Model
We'll import a class from Happy Transformer called HappyTextClassification, which we'll use to download, train and run the model.
from happytransformer import HappyTextClassification
There are various models we can use, such as BERT, DistilBERT, ALBERT and RoBERTa. For this tutorial, however, we'll use BERT, which in my opinion is the most popular of the bunch. This page contains examples of how to instantiate the other models.
There are three arguments we need to provide to instantiate a model. For the first positional argument, we need to provide the model type, which is "BERT." For the second positional argument, we'll provide the model name, which is found on its Hugging Face Model Hub page and in this case is "bert-base-uncased." Finally, we need to provide the number of labels to the "num_labels" parameter, which is the number of classes we'll classify the text into. It is 2 here since we're dealing with two classes: positive and negative.
happy_tc = HappyTextClassification("BERT", "bert-base-uncased", num_labels=2)
Note: you can also use a larger model called "bert-large-uncased" which may result in better performance.
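For example, here's how instantiating the larger model would look; it's the same call as above with the model name swapped in:
happy_tc = HappyTextClassification("BERT", "bert-large-uncased", num_labels=2)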
Data
We can download the dataset with Hugging Face's Datasets library, using a function called "load_dataset."
from datasets import load_dataset
We can now download the dataset by providing its name as the only positional input to the load_dataset function.
data = load_dataset("amazon_polarity")
The dataset contains 3,600,000 training cases and 400,000 test cases. This is far more than we need for this tutorial, so we'll use a portion of each. The code below selects the content and labels for 1,000 training examples and 1,000 testing examples.
train_cases = data["train"]["content"][:1000]
train_labels = data["train"]["label"][:1000]
test_cases = data["test"]["content"][:1000]
test_labels = data["test"]["label"][:1000]
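As a quick sanity check, we can print the first training case along with its label (0 for negative, 1 for positive):
print(train_labels[0])
print(train_cases[0])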
Happy Transformer requires that our training data be stored within a CSV file with two columns, text and label, as explained on this webpage.
import csv

with open("train.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(['text', 'label'])
    writer.writerows(zip(train_cases, train_labels))
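To verify the file was written correctly, we can print its first few rows, a quick sanity check using Python's built-in csv module:
with open("train.csv") as f:
    reader = csv.reader(f)
    for row in list(reader)[:3]:
        print(row)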
Train
We can now train the model! But first, let's import a class called TCTrainArgs from Happy Transformer which we can use to adjust the training parameters.
from happytransformer import TCTrainArgs
There are many different learning parameters we can adjust, such as the learning rate and number of epochs, as described on this webpage. For this tutorial, though, we'll just adjust the batch size, which is important for reducing training time. By default it's 1, and we'll increase it to 8. If you experience an out-of-memory error, I suggest you decrease the batch size.
args = TCTrainArgs(batch_size=8)
We can now train the model by calling the "train()" method of our HappyTextClassification object. We'll provide the path to the training file as the first positional parameter and the arguments to the "args" parameter.
happy_tc.train("train.csv", args=args)
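Once training finishes, you may want to keep the fine-tuned model around. If I recall the Happy Transformer API correctly, each Happy object exposes a save() method, and a saved model can be reloaded by passing its directory as the model name; double-check the library's documentation for your version:
# Save the fine-tuned model to a local directory
happy_tc.save("model/")
# Reload it later by passing the directory as the model name
happy_tc = HappyTextClassification("BERT", "model/", num_labels=2)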
Inference
We can use happy_tc's "classify_text()" method to perform inference and produce classifications. We simply need to provide the text we wish to classify as the only positional parameter.
output_1 = happy_tc.classify_text("What a bad product!")
print(output_1)
Output: TextClassificationResult(label='LABEL_0', score=0.9972155094146729)
The output is a dataclass object with two variables: label and score. The label indicates which class the model assigned the text to. It is "LABEL_0" when the text is negative and "LABEL_1" when the text is positive, since during training the value "0" was assigned to negative and "1" to positive. The score indicates the model's confidence in the prediction, from 0 to 1, where values closer to 1 indicate higher certainty. We can isolate the values as shown below.
print(output_1.label)
print(output_1.score)
Output:
LABEL_0
0.9972155094146729
Here's an example of a positive case.
output_2 = happy_tc.classify_text("What a great product!")
print(output_2.label)
print(output_2.score)
Output:
LABEL_1
0.995507001876831
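The raw label strings aren't very readable, so it can help to map them to human-friendly names. The label_names dictionary below is our own convenience mapping, not part of Happy Transformer:
# Our own mapping from Happy Transformer's label strings to readable names
label_names = {"LABEL_0": "negative", "LABEL_1": "positive"}
print(label_names[output_2.label])  # positive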
Produce Predictions
First off, let's import TQDM to track the progress as our model makes predictions.
from tqdm import tqdm
Now we'll iterate over all of the test cases and save the results in a variable called predictions. We'll convert the output labels from "LABEL_0" and "LABEL_1" to "0" and "1" respectively.
predictions = []
for case in tqdm(test_cases):
    output = happy_tc.classify_text(case).label
    if output == "LABEL_0":
        predictions.append(0)
    else:
        predictions.append(1)
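Equivalently, since every label follows the "LABEL_<n>" pattern, we could parse the class number directly. This compact sketch assumes that pattern holds for every output:
# Parse the trailing number out of each "LABEL_<n>" string
predictions = [int(happy_tc.classify_text(case).label.split("_")[-1])
               for case in tqdm(test_cases)]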
Evaluate
Initialization
As discussed, we'll use Hugging Face's Evaluate library to compute the accuracy, precision and recall. Let's import it.
import evaluate
We can now load the three metrics by calling "evaluate.load()" and providing the name of the metric as the only positional parameter.
accuracy_metric = evaluate.load("accuracy")
precision_metric = evaluate.load("precision")
recall_metric = evaluate.load("recall")
Accuracy
To use a metric we must call its "compute" method and provide the labels to the "references" parameter and the outputs of our model for the test cases to the "predictions" parameter.
accuracy_output = accuracy_metric.compute(references=test_labels, predictions=predictions)
print(accuracy_output)
Output: {'accuracy': 0.895}
The output is a dictionary with a single key called accuracy. We can isolate the result as shown below.
print(accuracy_output["accuracy"])
Output: 0.895
Precision
The process for producing outputs for the precision and recall metrics is the same as for the accuracy metric.
precision_output = precision_metric.compute(references=test_labels, predictions=predictions)
print(precision_output)
Output: {'precision': 0.9287257019438445}
Recall
recall_output = recall_metric.compute(references=test_labels, predictions=predictions)
print(recall_output)
Output: {'recall': 0.8565737051792829}
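As with accuracy, each result is a dictionary keyed by the metric's name, so the individual values can be isolated the same way:
print(precision_output["precision"])
print(recall_output["recall"])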
Conclusion
You just learned how to create and train text classification models with Happy Transformer. Be sure to subscribe to Vennify's YouTube channel and newsletter. Also, check out this article that describes a technique I came up with that allows you to label text classification training data with AI.
Stay happy everyone!
Here's the code for this tutorial within Google Colab.
🌟🌟🌟 Support Happy Transformer by giving it a star 🌟🌟🌟
Book a Call
We may be able to help you or your company with your next NLP project. Feel free to book a free 15 minute call with us.