NLP tweets

In this homework, you’ll be working with a collection of tweets. The task is to predict the geolocation (country) where the tweet comes from. This homework involves writing code to preprocess data and perform text classification.

Preprocessing (4 marks)

Instructions: Download the data (as1-data.json) from Canvas and put it in the same directory as this IPython notebook. Run the code below to load the json data. This produces two objects, x and y, which contain a list of tweets and a list of the corresponding country labels (standard 2-letter country codes), respectively. No implementation is needed.

import json

x = []  #list of tweets
y = []  #list of country labels
data = json.load(open("as1-data.json"))
for k, v in data.items():
    x.append(k)
    y.append(v)

print("Number of tweets =", len(x))
print("Number of labels =", len(y))
print("\nSamples of data:")
for i in range(10):
    print("Country =", y[i], "\tTweet =", x[i])

assert(len(x) == 943)
assert(len(y) == 943)

Question 1 (1.0 mark)
Instructions: Next we need to preprocess the collected tweets to create a bag-of-words representation (based on frequency). The preprocessing steps required here are: (1) tokenize each tweet into individual word tokens (using NLTK TweetTokenizer); (2) lowercase all words; (3) remove any word that does not contain at least one English alphabet letter (e.g. {hello, #okay, abc123} would be kept, but not {123, !!}); and (4) remove stopwords (based on NLTK stopwords). An empty tweet (after preprocessing) and its country label should be excluded from the output (x_processed and y_processed).

Task: Complete the preprocess_data(data, labels) function. The function takes a list of tweets and a corresponding list of country labels as input, and returns two lists. For the first list, each element is a bag-of-words representation of a tweet (represented using a python dictionary). For the second list, each element is a corresponding country label. Note that while we do not need to preprocess the country labels (y), we need to produce a new output list (y_processed) because some tweets may be removed during preprocessing (their bag-of-words becomes empty).

Check: Use the assertion statements in “For your testing” below for the expected output.
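For reference, here is a minimal sketch of the per-tweet filtering logic applied to a single made-up example string (it assumes NLTK and its stopwords corpus are available; it is not a complete preprocess_data implementation, which must loop over all tweets and drop empty ones):

import re
from collections import Counter
import nltk
nltk.download('stopwords')
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords

tt = TweetTokenizer()
stop = set(stopwords.words('english'))

example = "Hello WORLD!! Loving the #sunshine in 2023 :)"   #made-up tweet for illustration only

tokens = [t.lower() for t in tt.tokenize(example)]       #tokenize, then lowercase
tokens = [t for t in tokens if re.search(r'[a-z]', t)]   #keep tokens containing at least one English letter
tokens = [t for t in tokens if t not in stop]            #remove stopwords
bow = dict(Counter(tokens))                              #frequency-based bag of words
print(bow)   #e.g. {'hello': 1, 'world': 1, 'loving': 1, '#sunshine': 1}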

import nltk
nltk.download('stopwords')
from nltk.tokenize import TweetTokenizer
from nltk.corpus import stopwords

tt = TweetTokenizer()
stopwords = set(stopwords.words('english')) #note: stopwords are all in lowercase

def preprocess_data(data, labels):
    # Your answer BEGINS HERE

    # Your answer ENDS HERE

x_processed, y_processed = preprocess_data(x, y)

print("Number of preprocessed tweets =", len(x_processed))
print("Number of preprocessed labels =", len(y_processed))
print("\nSamples of preprocessed data:")
for i in range(10):
    print("Country =", y_processed[i], "\tTweet =", x_processed[i])

For your testing:

assert(len(x_processed) == len(y_processed))
assert(len(x_processed) > 800)

Instructions: Hashtags (i.e. topic tags which start with #) pose an interesting tokenisation problem because they often include multiple words written without spaces or capitalization. Run the code below to collect all unique hashtags in the preprocessed data. No implementation is needed.

def get_all_hashtags(data):
    hashtags = set([])
    for d in data:
        for word, frequency in d.items():
            if word.startswith("#") and len(word) > 1:
                hashtags.add(word)
    return hashtags

hashtags = get_all_hashtags(x_processed)
print("Number of hashtags =", len(hashtags))
print(sorted(hashtags))

Question 2 (1.0 mark)
Instructions: Our task here is to tokenize the hashtags by implementing the MaxMatch algorithm discussed in class.

NLTK has a list of words that you can use for matching; see the starter code below (words). Be careful about efficiency with respect to word lookups. One extra challenge is that the provided word list (words) contains only lemmas: your MaxMatch algorithm should handle inflected forms by converting them into lemmas (using the provided lemmatize(word) function) before matching. Note that the word list (words) is the only source you should use for matching (i.e. you do not need to find other external word lists). If you are unable to make any longer match, your code should default to matching a single letter.

For example, given “#newrecords”, the algorithm should produce: [“#”, “new”, “records”].

Task: Complete the tokenize_hashtags(hashtags) function by implementing the MaxMatch algorithm. The function takes as input a set of hashtags, and returns a dictionary where key=”hashtag” and value=”a list of tokenised words”.

Check: Use the assertion statements in “For your testing” below for the expected output.
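To illustrate the greedy longest-first idea, here is a minimal sketch of MaxMatch on a single string against a small toy lexicon (max_match and toy_lexicon are illustrative names only; your solution should match against the words set defined below, also try lemmatize(candidate), and default to a single character when no longer match exists):

def max_match(text, lexicon):
    #greedy left-to-right longest match: repeatedly take the longest
    #prefix of the remaining text that appears in the lexicon
    tokens = []
    while text:
        match = None
        for end in range(len(text), 0, -1):   #longest prefix first, shrinking by one character
            candidate = text[:end]
            if candidate in lexicon:          #a full solution would also test lemmatize(candidate)
                match = candidate
                break
        if match is None:                     #no match at all: default to a single character
            match = text[0]
        tokens.append(match)
        text = text[len(match):]
    return tokens

toy_lexicon = {"new", "newt", "no", "record", "records"}   #illustrative, not the NLTK word list
print(max_match("#newrecords", toy_lexicon))   #['#', 'new', 'records']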

from nltk.corpus import wordnet
nltk.download('words')
nltk.download('wordnet')

lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()
words = set(nltk.corpus.words.words()) #a list of words provided by NLTK
words = set([ word.lower() for word in words ]) #lowercase all the words for better matching

def lemmatize(word):
    lemma = lemmatizer.lemmatize(word,'v')
    if lemma == word:
        lemma = lemmatizer.lemmatize(word,'n')
    return lemma

def tokenize_hashtags(hashtags):
    # Your answer BEGINS HERE

    # Your answer ENDS HERE

#tokenise hashtags with MaxMatch
tokenized_hashtags = tokenize_hashtags(hashtags)

#print results
for k, v in sorted(tokenized_hashtags.items())[-30:]:
    print(k, v)

For your testing:

assert(len(tokenized_hashtags) == len(hashtags))
assert(tokenized_hashtags["#newrecord"] == ["#", "new", "record"])

Question 3 (1.0 mark)
Instructions: Our next task is to tokenize the hashtags again, but this time using a reversed version of the MaxMatch algorithm, where matching begins at the end of the hashtag and progresses backwards (e.g. for #helloworld, we would process it right to left, starting from the last character d). Just like before, you should use the provided word list (words) for word matching.

Task: Complete the tokenize_hashtags_rev(hashtags) function by implementing the reversed MaxMatch algorithm. The function takes as input a set of hashtags, and returns a dictionary where key=”hashtag” and value=”a list of tokenised words”.

Check: Use the assertion statements in “For your testing” below for the expected output.
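The reversed variant changes only the direction of matching: take the longest suffix of the remaining text that is in the lexicon and prepend it to the output. A minimal sketch, mirroring the toy example above (again with an illustrative lexicon rather than the NLTK word list):

def max_match_rev(text, lexicon):
    #greedy right-to-left longest match: repeatedly take the longest
    #suffix of the remaining text that appears in the lexicon
    tokens = []
    while text:
        match = None
        for start in range(0, len(text)):     #longest suffix first, shrinking by one character
            candidate = text[start:]
            if candidate in lexicon:          #a full solution would also test lemmatize(candidate)
                match = candidate
                break
        if match is None:                     #no match at all: default to a single character
            match = text[-1]
        tokens.insert(0, match)               #prepend so the final list reads left to right
        text = text[:len(text) - len(match)]
    return tokens

toy_lexicon = {"new", "record", "records", "cords"}   #illustrative only
print(max_match_rev("#newrecords", toy_lexicon))   #['#', 'new', 'records']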

def tokenize_hashtags_rev(hashtags):
    # Your answer BEGINS HERE

    # Your answer ENDS HERE

#tokenise hashtags with the reversed version of MaxMatch
tokenized_hashtags_rev = tokenize_hashtags_rev(hashtags)

#print results
for k, v in sorted(tokenized_hashtags_rev.items())[-30:]:
    print(k, v)

For your testing:

assert(len(tokenized_hashtags_rev) == len(hashtags))
assert(tokenized_hashtags_rev["#newrecord"] == ["#", "new", "record"])

Question 4 (1.0 mark)
Instructions: The two versions of MaxMatch will produce different results for some of the hashtags. For each hashtag where the results differ, our task here is to use a unigram language model (lecture 3) to score both segmentations and see which one is better. Recall that in a unigram language model we compute P(#, hello, world) = P(#) * P(hello) * P(world).

You should: (1) use NLTK’s Brown corpus (brown_words) for collecting word frequencies (note: the corpus is already tokenised so no further tokenisation is needed); (2) lowercase all words in the corpus; (3) use add-one smoothing when computing the unigram probabilities; and (4) work in log space to prevent numerical underflow.

Task: Build a unigram language model with add-one smoothing using the word counts from the Brown corpus. Iterate through the hashtags, and for each hashtag where MaxMatch and reversed MaxMatch produce different results, print the following: (1) the hashtag; (2) the results produced by MaxMatch and reversed MaxMatch; and (3) the log probability of each result as given by the unigram language model. Note: you do not need to print the hashtags where MaxMatch and reversed MaxMatch produce the same results.

An example output:

MaxMatch = [#, a, bc, d]; LogProb = -2.3
Reversed MaxMatch = [#, a, b, cd]; LogProb = -3.5

MaxMatch = [#, ef, g, h]; LogProb = -4.2
Reversed MaxMatch = [#, e, fgh]; LogProb = -3.1

Have a look at the output, and see if the sequences with better language model scores (i.e. less negative) are generally more coherent.
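As a reference for the scoring step, here is a minimal sketch of an add-one-smoothed unigram model in log space, built from a made-up token list (make_log_prob, score_segmentation and toy_corpus are illustrative names; your solution should build the counts from brown_words below and score the actual MaxMatch and reversed MaxMatch outputs):

import math
from collections import Counter

def make_log_prob(corpus_tokens):
    #return a function giving add-one-smoothed unigram log probabilities:
    #log P(w) = log((count(w) + 1) / (N + V)), with N = total tokens, V = vocabulary size
    counts = Counter(w.lower() for w in corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    def log_prob(word):
        return math.log((counts[word.lower()] + 1) / (total + vocab))
    return log_prob

def score_segmentation(tokens, log_prob):
    #log probability of a token sequence under the unigram model
    return sum(log_prob(t) for t in tokens)

toy_corpus = ["new", "record", "new", "world", "records", "cord"]   #illustrative; use brown_words in your answer
lp = make_log_prob(toy_corpus)
print(score_segmentation(["#", "new", "records"], lp))
print(score_segmentation(["#", "new", "record", "s"], lp))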

from nltk.corpus import brown

#words from brown corpus
brown_words = brown.words()

# Your answer BEGINS HERE

# Your answer ENDS HERE

Text Classification (4 marks)

Question 5 (1.0 mark)
Instructions: Here we are interested in doing text classification: predicting the country of origin of a given tweet. The task here is to create training, development and test partitions from the preprocessed data (x_processed) and convert the bag-of-words representation into feature vectors.

Task: Create training, development and test partitions with a 70%/15%/15% ratio. Remember to preserve the ratio of the classes for all your partitions. That is, say we have only 2 classes and 70% of instances are labelled class A and 30% of instances are labelled class B, then the instances in training, development and test partitions should also preserve this 7:3 ratio. You may use sklearn’s builtin functions for doing data partitioning.

Next, turn the bag-of-words dictionary of each tweet into a feature vector. You may also use sklearn’s builtin functions for doing this (but if you don’t want to use sklearn that’s fine).

You should produce 6 objects: x_train, x_dev, x_test which contain the input feature vectors, and y_train, y_dev and y_test which contain the labels.
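One possible sketch of the stratified 70%/15%/15% split and the vectorisation step, assuming x_processed and y_processed from Question 1 and that every country appears often enough for stratification (the random_state values are arbitrary):

from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split

#first carve off 70% for training, preserving the class ratios (stratify)
x_train_bow, x_rest_bow, y_train, y_rest = train_test_split(
    x_processed, y_processed, test_size=0.3, stratify=y_processed, random_state=1)

#then split the remaining 30% in half for development and test
x_dev_bow, x_test_bow, y_dev, y_test = train_test_split(
    x_rest_bow, y_rest, test_size=0.5, stratify=y_rest, random_state=1)

#turn the bag-of-words dictionaries into sparse feature vectors;
#fit the vocabulary on the training data only, then reuse it for dev/test
vectorizer = DictVectorizer()
x_train = vectorizer.fit_transform(x_train_bow)
x_dev = vectorizer.transform(x_dev_bow)
x_test = vectorizer.transform(x_test_bow)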

from sklearn.feature_extraction import DictVectorizer

x_train, x_dev, x_test = None, None, None
y_train, y_dev, y_test = None, None, None

# Your answer BEGINS HERE

# Your answer ENDS HERE

Question 6 (1.0 mark)
Instructions: Now, let’s build some classifiers. Here, we’ll be comparing Naive Bayes and Logistic Regression. For each, you first need to find a good value for its main regularisation hyper-parameter, which you should identify using the scikit-learn docs or other resources. Use the development set you created for this tuning process; do not use cross-validation on the training set, and do not involve the test set in any way. You don’t need to show all your work, but you do need to print out the accuracy for enough different settings to strongly suggest you have found an optimal or near-optimal choice. We should not need to look at your code to interpret the output.

Task: Implement two text classifiers: Naive Bayes and Logistic Regression. Tune the hyper-parameters of these classifiers and print the task performance (accuracy) for different hyper-parameter settings.
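A minimal sketch of the tuning loop, sweeping the main regularisation hyper-parameters (alpha for MultinomialNB, C for LogisticRegression) over an illustrative grid and reporting development-set accuracy (the grid values below are examples, not required settings):

from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

#smoothing strength for Naive Bayes (illustrative grid)
for alpha in [0.001, 0.01, 0.1, 0.5, 1.0, 2.0]:
    nb = MultinomialNB(alpha=alpha).fit(x_train, y_train)
    print("MultinomialNB alpha =", alpha,
          "dev accuracy =", accuracy_score(y_dev, nb.predict(x_dev)))

#inverse regularisation strength for Logistic Regression (illustrative grid)
for C in [0.01, 0.1, 1.0, 10.0, 100.0]:
    lr = LogisticRegression(C=C, max_iter=1000).fit(x_train, y_train)
    print("LogisticRegression C =", C,
          "dev accuracy =", accuracy_score(y_dev, lr.predict(x_dev)))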

from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

# Your answer BEGINS HERE

# Your answer ENDS HERE

Question 7 (1.0 mark)
Instructions: Using the best settings you have found, compare the two classifiers based on their performance on the test set. Print out both accuracy and macro-averaged F-score for each classifier. Be sure to label your output. You may use sklearn’s inbuilt functions.

Task: Compute test performance in terms of accuracy and macro-averaged F-score for both Naive Bayes and Logistic Regression, using their optimal hyper-parameter settings based on their development performance.
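A minimal sketch of the test-set comparison, where best_alpha and best_C are placeholders for the values chosen on the development set in the previous question:

from sklearn.metrics import accuracy_score, f1_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

best_alpha = 1.0   #placeholder: use the value tuned on the development set
best_C = 1.0       #placeholder: use the value tuned on the development set

nb = MultinomialNB(alpha=best_alpha).fit(x_train, y_train)
lr = LogisticRegression(C=best_C, max_iter=1000).fit(x_train, y_train)

for name, model in [("Naive Bayes", nb), ("Logistic Regression", lr)]:
    pred = model.predict(x_test)
    print(name,
          "\taccuracy =", accuracy_score(y_test, pred),
          "\tmacro F1 =", f1_score(y_test, pred, average="macro"))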

# Your answer BEGINS HERE

# Your answer ENDS HERE

Question 8 (1.0 mark)
Instructions: Print the most important features and their weights for each class for the two classifiers.

Task: For each of the classifiers (Logistic Regression and Naive Bayes) you’ve built in the previous question, print out the top-20 features (words) with the highest weight for each class (country).

An example output:

Classifier = Logistic Regression

Country = au
aaa (0.999) bbb (0.888) ccc (0.777) …

Country = ca
aaa (0.999) bbb (0.888) ccc (0.777) …

Classifier = Naive Bayes

Country = au
aaa (-1.0) bbb (-2.0) ccc (-3.0) …

Country = ca
aaa (-1.0) bbb (-2.0) ccc (-3.0) …

Have a look at the output, and see if you notice any trend/pattern in the words for each country.
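A minimal sketch of how the per-class weights can be read off the fitted models (coef_ for Logistic Regression, feature_log_prob_ for multinomial Naive Bayes), assuming lr, nb and vectorizer are the fitted objects from the previous questions (on older scikit-learn versions, get_feature_names_out may be get_feature_names):

import numpy as np

feature_names = np.array(vectorizer.get_feature_names_out())

print("Classifier = Logistic Regression")
for i, country in enumerate(lr.classes_):
    top = np.argsort(lr.coef_[i])[::-1][:20]   #indices of the 20 largest weights for this class
    print("\nCountry =", country)
    print(" ".join("%s (%.3f)" % (feature_names[j], lr.coef_[i][j]) for j in top))

print("\nClassifier = Naive Bayes")
for i, country in enumerate(nb.classes_):
    top = np.argsort(nb.feature_log_prob_[i])[::-1][:20]   #largest per-class log probabilities
    print("\nCountry =", country)
    print(" ".join("%s (%.3f)" % (feature_names[j], nb.feature_log_prob_[i][j]) for j in top))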

# Your answer BEGINS HERE

# Your answer ENDS HERE