How To Annotate Entities With spaCy PhraseMatcher

By the end of this article, you will be able to write a name matching pipeline using spaCy’s PhraseMatcher. It will find and label organization names and their stock ticker symbols.

DEMO

GIF of the app that demonstrates the PhraseMatcher text annotator we build in the article

You can find the accompanying web app here.

In this tutorial, we'll:

  • Learn about the spaCy PhraseMatcher class
  • Use PhraseMatcher to create a text annotation pipeline that labels organization names and stock tickers
  • Learn a quick-and-easy way to scrape Wikipedia tables directly into a pandas DataFrame
  • Use spaCy's displacy class to visualize custom entities

The Problem We're Trying To Solve In This Article

While working on the named-entity recognition (NER) pipeline for one of our previous articles, we ran into some issues with the default spaCy NER model. The pipeline aimed to detect significant events like acquisitions. For this, we used spaCy’s built-in NER model to detect organization names in news headlines. The problem is that the model is far from perfect, so it doesn’t necessarily detect all organizations. And this is to be expected for entities like organization names, which have many variations and constant new additions.

There are two ways to solve this issue. We can either train a better statistical NER model on an updated custom dataset or use a rule-based approach to make the detections. The funny thing about this choice is that it’s not really a choice.  You see, to train a better NER model we would need to label text data. And the best way to optimize the data annotation process is to automate it (or parts of it) using a rule-based method. 

So, that’s exactly what we’re going to do in this article. We’ll learn about spaCy’s PhraseMatcher and use it to annotate organization names and stock ticker symbols.

Introduction To spaCy PhraseMatcher

The PhraseMatcher class enables us to efficiently match large lists of phrases. It accepts match patterns in the form of Doc objects that can contain many tokens. 

First, let's install spaCy and download the small English pipeline we'll use throughout:
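pip install spacy
python -m spacy download en_core_web_sm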

The next step is to initialize the PhraseMatcher with a vocabulary. Like Matcher, the PhraseMatcher object must share the same vocabulary with the documents it will operate on.

import spacy
from spacy.matcher import PhraseMatcher
nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)

After initializing the PhraseMatcher object with a vocab, we can add the patterns using the .add() method.

# List of Patterns To Match For
phrases = ["Sergio Mattarella", "Mario Draghi", "president", "prime minister"]
# Create Doc Objects For The Phrases
patterns = [nlp(text) for text in phrases]
matcher.add("PatternList", patterns)

NOTE: Each phrase string has to be processed with the nlp object to create the Doc objects for the pattern list. Running the full pipeline on every phrase like this can easily become inefficient and slow. If we only need to match on the token text (and not other attributes), we can use nlp.make_doc instead, which only runs the tokenizer. We can also use nlp.tokenizer.pipe for an extra speed boost, as it processes the texts as a stream.

Two options for creating pattern Docs more efficiently: nlp.make_doc and nlp.tokenizer.pipe
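In code, the two faster options look roughly like this (reusing the phrases list from above):

# Option 1: nlp.make_doc only runs the tokenizer
patterns = [nlp.make_doc(text) for text in phrases]
# Option 2: nlp.tokenizer.pipe tokenizes the phrases as a stream
patterns = list(nlp.tokenizer.pipe(phrases))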

Now that we have added the desired phrases, we can use the matcher object to find them in some news text.

doc = nlp("A joint session of Italian parliament and some regional delegates, \
known as “great electors,” began a secret ballot on Monday to elect the next \
Italian president to replace the current officeholder, Sergio Mattarella. \
It is a focus of special attention because a top contender for the job is \
the prime minister, Mario Draghi, a titan of Europe who in just a year in \
power has stabilized Italy’s politics and initiated long-overdue overhauls.")
# Find Matches
matches = matcher(doc)
for match_id, start, end in matches:
    span = doc[start:end]
    print(span.text)
Output of the PhraseMatcher we just created for labeling names and political positions of power

Matching On Other Token Attributes

By default, the PhraseMatcher matches on the verbatim token text, i.e., .text attribute. Using the attr argument on initialization, we can change the token attribute the PhraseMatcher uses when comparing the patterns to the matched Doc.

Let’s say we wanted to create case-insensitive match patterns. We can do so by passing attr="LOWER" on initialization, which makes the matcher compare on Token.lower.

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
names = ["Sergio Mattarella", "Mario Draghi"]
# Only run nlp.make_doc to speed things up
patterns = [nlp.make_doc(name) for name in names]
matcher.add("Names", patterns)
matches = matcher(doc)
for match_id, start, end in matches:
    span = doc[start:end]
    print(span.text)
Output of our PhraseMatcher that matches for lowercase version of the text

Although nlp.make_doc helps us create pattern Doc objects as efficiently as possible, it can cause issues if we want to match on other attributes. It only runs the tokenizer and none of the other pipeline components. So, we need to ensure that the required pipeline components run when we create the pattern Docs.

For example, to match on POS or LEMMA, the pattern Doc objects need to have part-of-speech tags set by the tagger or morphologizer. We either call the nlp object on the pattern texts, or use nlp.select_pipes to run components selectively.
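Here's a minimal sketch of matching on lemmas with select_pipes; the component names assume the en_core_web_sm pipeline, and the phrases are just examples:

matcher = PhraseMatcher(nlp.vocab, attr="LEMMA")
# Only run the components that set POS tags and lemmas
# (component names assume the en_core_web_sm pipeline)
with nlp.select_pipes(enable=["tok2vec", "tagger", "attribute_ruler", "lemmatizer"]):
    patterns = list(nlp.pipe(["prime minister", "secret ballot"]))
matcher.add("TitleLemmas", patterns)
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)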

Why PhraseMatcher?

Okay, that’s cool and everything, but why are we choosing PhraseMatcher in the first place? Wouldn’t Python’s built-in find() method do just fine? What about spaCy’s Matcher class?

Yeah, both of these approaches will work, but they are rather inefficient.

The time complexity of the find() function is O(N*L), where N is the size of the string in which we search, and L is the size of the phrase (pattern) we are searching for. And this is just for one phrase, so the cost also grows linearly with the number of phrases.
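For illustration, a naive find()-based matcher might look like the sketch below; the naive_find helper is hypothetical, not part of any library:

# Hypothetical naive baseline: each str.find scan is O(N*L),
# and we repeat it for every one of the M phrases
def naive_find(text, phrases):
    hits = []
    for phrase in phrases:
        idx = text.find(phrase)
        while idx != -1:
            hits.append((phrase, idx))
            idx = text.find(phrase, idx + 1)
    return hits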

Matcher vs PhraseMatcher

There are two main reasons for choosing PhraseMatcher over the Matcher class when working with large lists of patterns:

  • computational efficiency
  • ease of implementation

Matcher runs in O(N*M) time for a document of N words and a pattern list of M entries. On the other hand, PhraseMatcher's average-case complexity is much better. The specifics depend on the pattern list, but it's generally somewhere between O(N) and O(N*log(M)). This has a big impact when the phrase dictionaries are very big, i.e., when we have a large M. For instance, let's say we want to match against a dictionary of every drug manufactured by a pharmaceutical company. PhraseMatcher will massively outperform its Matcher equivalent.
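If you want to verify this yourself, here's a rough micro-benchmark sketch; the ten thousand synthetic drug names are made up purely for illustration:

import timeit
import spacy
from spacy.matcher import Matcher, PhraseMatcher

nlp = spacy.blank("en")
terms = ["drug%d" % i for i in range(10000)]  # synthetic phrase list

token_matcher = Matcher(nlp.vocab)
token_matcher.add("TERMS", [[{"TEXT": t}] for t in terms])

phrase_matcher = PhraseMatcher(nlp.vocab)
phrase_matcher.add("TERMS", list(nlp.tokenizer.pipe(terms)))

doc = nlp("the patient was prescribed drug42 and drug9000")
print(timeit.timeit(lambda: token_matcher(doc), number=100))
print(timeit.timeit(lambda: phrase_matcher(doc), number=100))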

Just take a look at this time complexity plot. See how quickly the upper bound for Matcher (purple) grows in comparison to PhraseMatcher (blue) for the same increase in M.

time complexity plot of Matcher vs PhraseMatcher that shows Matcher's cost increasing substantially faster than PhraseMatcher's

The other advantage of PhraseMatcher is its ease of use. With PhraseMatcher, we can simply loop over the list of phrases and create Doc objects to pass as patterns. The process is a bit more complex for a Matcher object, as it works on the token level. So we’ll have to accommodate phrases of different token lengths.

For instance, to match the phrase “Washington, D.C.” using Matcher, we would have to create a complex pattern like the following two:

Two possible Matcher pattern options for matching “Washington, D.C.”
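The exact patterns aren't critical; assuming the default tokenizer splits the phrase into "Washington", ",", and "D.C.", they could look something like this:

# Option 1: match the exact token texts
pattern1 = [{"TEXT": "Washington"}, {"TEXT": ","}, {"TEXT": "D.C."}]
# Option 2: a case-insensitive variant
pattern2 = [{"LOWER": "washington"}, {"IS_PUNCT": True}, {"LOWER": "d.c."}]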

Whereas with PhraseMatcher, we can simply pass in nlp("Washington, D.C.") and don’t have to write complex token patterns.

Using PhraseMatcher To Label News Data

Besides spaCy and pandas, we’ll be using a few other Python libraries:

  • cleanco - To clean and normalize company names
  • BeautifulSoup and requests - To fetch the list of S&P500 companies from Wikipedia and extract relevant data.

To install these run:

pip install cleanco requests beautifulsoup4

Getting Organization Names And Ticker Symbols

We’ll be using the list of all Nasdaq-listed companies available on DataHub.

!wget https://datahub.io/core/nasdaq-listings/r/nasdaq-listed-symbols.csv
import pandas as pd
data = pd.read_csv("nasdaq-listed-symbols.csv")
first five rows of the DataHub Nasdaq list of company names and tickers

This list was last updated three years ago, so we’ll supplement it using the list of S&P500 companies available on Wikipedia.

Get the webpage with the table of S&P500 companies.

import requests # library to handle requests
from bs4 import BeautifulSoup # library to parse HTML documents
url = "https://en.wikipedia.org/wiki/List_of_S%26P_500_companies"
response = requests.get(url)

Parse the HTML using BeautifulSoup and select the required table.

screenshot of the Wikipedia table's HTML with the wikitable class highlighted in the browser's inspect-element panel
soup = BeautifulSoup(response.text, 'html.parser')
table = soup.find('table', {'class': "wikitable"})

Convert the HTML table into a pandas DataFrame using the read_html() function.

df = pd.read_html(str(table))
# read_html returns a list of DataFrames, so take the first one
df = pd.DataFrame(df[0])
first five rows of Wikipedia S&P500 table we just converted into a dataframe

There are some extra columns that we don’t need, and the column holding the organization names has a different name than in the other DataFrame. We’ll drop the extra columns and rename the ‘Security’ column.

SP500 = df[['Symbol', 'Security']]
SP500 = SP500.rename(columns={"Security": "Company Name"})

Now we can combine the two tables to get a more comprehensive list of names and tickers.

combined_list = pd.concat([data, SP500], ignore_index = True).drop_duplicates()

Cleaning Organization Names

Examples of extra information enclosed within parentheses in organization names

Some of the organization names have additional information in parentheses, such as the state of registration or the class of the share. These details don’t matter for our aim of labeling organization names. So let’s create a function that removes anything enclosed in a pair of parentheses.

def remove_parenthesis(name):
    if "(" in name:
        l_paren_idx = name.index("(")
        r_paren_idx = name.index(")")
        return name[:l_paren_idx] + name[r_paren_idx + 1:]
    else:
        return name

combined_list['Company Name'] = combined_list['Company Name'].apply(remove_parenthesis)

Besides that, organization names can have multiple forms. For example, Activision Blizzard can be written as just ‘Activision Blizzard’ or as ‘Activision Blizzard, Inc.’. Currently, our list contains the longer form of the names; to create a robust annotation system, we’ll need to include both forms.

To obtain the cleaner, shorter versions of the organization names, we’ll use cleanco.

from cleanco import basename
combined_list['Cleaned Name'] = combined_list['Company Name'].apply(basename)
# apply basename a second time to strip names that carry more than one legal suffix
combined_list['Cleaned Name'] = combined_list['Cleaned Name'].apply(basename)
names = pd.concat([combined_list['Company Name'], combined_list['Cleaned Name']], ignore_index=True).drop_duplicates()

There are a few empty entries, so let’s deal with that before we move further.

names = [name for name in names if name != " " and len(name) > 0]

Moreover, there are some incomplete/wrong organization names that will cause false positives if they're not dealt with.

name_corrections = {
    "A": "A-Mark", "Federal": "Federal-Mogul",
    "Global": "Global-Tech Advanced Innovations",
    "G": "G-III Apparel", "Heritage": "Heritage Crystal Clean",
    "II": "II-VI", "Mid": "Microchip Technology",
    "Pro": "Pro-Dex", "Perma": "Perma-Fix Environmental Services",
    "Park": "Park-Ohio Holdings", "Bio": "Bio-Techne",
    "ROBO": "ROBO Global Robotics and Automation Index ETF",
    "United": "United-Guardian", "Uni": "Uni-Pixel",
    "Popular": "Banco Popular", "News": "News Corp",
}
names = [name_corrections.get(name, name) for name in names]

Create The PhraseMatcher Labeller

Now that we have all the data we need, we can create our PhraseMatcher object.

import spacy
from spacy.matcher import PhraseMatcher
nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab)

PhraseMatcher supports adding multiple rules containing several patterns, and assigning IDs to each matcher rule. This means we can use the same matcher object for the names and tickers.

patterns = [nlp.make_doc(name) for name in names]
matcher.add("COMPANY", patterns)
patterns = [nlp.make_doc(symbol) for symbol in data['Symbol']]
matcher.add("SYMBOL", patterns)

And that’s it! Now we can use this to label text data. Let’s try it out on some news data and see how it performs.

Text = "Microsoft (MSFT) dipped 2.4% after announcing the software giant will \
buy video game company Activision Blizzard, Inc (ATVI) in an all-cash transaction \
valued at $68.7 billion. \nThe shortened trading week will feature quarterly \
reports from 35 companies in the S&P 500, including Bank of America (BAC), \
UnitedHealth Group(UNH), and Netflix (NFLX). General Motors (GM) said it \
will invest roughly $6.6 billion in its home state of Michigan through \
2024. GM has projected it will overtake Tesla (TSLA) as the \
top U.S.-based seller of electric vehicles by mid-decade. Retailer Gap (GPS) \
shares fell 6.7% after Morgan Stanley downgraded the retailer."
doc = nlp(Text)
matches = matcher(doc)
for match_id, start, end in matches:
    rule_id = nlp.vocab.strings[match_id]  # get the string ID, i.e. 'COMPANY'
    span = doc[start:end]  # get the matched slice of the doc
    print(rule_id, span.text)
company names and ticker matches found by our phrasematcher annotator

Hmm, there seems to be some duplication. The problem is that our PhraseMatcher finds both forms of a company name: the longer, complete version and the shorter, cleaner version. There’s a fairly straightforward (greedy) solution to this issue: we count the occurrences of each matched name within all the matched names.

The idea is that shorter names like “Activision Blizzard” will have a count of more than one, because they also appear as substrings in longer names like “Activision Blizzard, Inc”. But the longer version will occur only once. Let’s add this de-duplication and visualize the results using spaCy’s displacy.

from spacy import displacy

# displacy options
colors = {"COMPANY": "#F67DE3", "SYMBOL": "#7DF6D9"}
options = {"colors": colors}
plot_data = {
    "text": doc.text,
    "ents": [],
    "title": None
}
matches_with_dup = {"COMPANY": {}, "SYMBOL": {}}
for match_id, span_start, span_end in matches:
    rule_id = nlp.vocab.strings[match_id]
    text = doc[span_start:span_end].text
    start_idx = doc[span_start].idx  # character offset of the first matched token
    end_idx = start_idx + len(text)
    matches_with_dup[rule_id][text] = {"start": start_idx, "end": end_idx, "label": rule_id}

# substring names will appear multiple times but the expanded
# names will appear only once
for ent_type in matches_with_dup.keys():
    ent_matches = matches_with_dup[ent_type]
    keys = ent_matches.keys()
    counts = {text: 0 for text in keys}
    for text in keys:
        for key in keys:
            if text in key:
                counts[text] += 1
    for text, count in counts.items():
        if count == 1:
            plot_data['ents'].append(ent_matches[text])

# sort the matches by start index
plot_data['ents'] = sorted(plot_data['ents'], key=lambda ent: ent["start"])
displacy.render(plot_data, style="ent", options=options, manual=True, jupyter=True)
displacy visualization of labelled news data after duplication fix

Conclusion

In this article, you learned about spaCy’s PhraseMatcher and used it to create a rudimentary text annotation pipeline that labels organization names and their ticker symbols. Depending on your needs, this pipeline can be easily extended to offer near-perfect results. You can even supplement it with a statistical NER model.

Generally speaking, statistical models are useful if your application needs to generalize based on the context. That being said, you should always carefully consider whether your use case really needs a model. Sometimes you might be better off with a rule-based system.

Two main criteria for choosing a rule-based approach:

  1. You already have a large dictionary of terms to match
  2. There’s a more or less finite number of instances of the given category (publicly traded organization names and cities are good examples)

Combining a rule-based system with a statistical NER model might help you get better results if misspellings and spelling variations are important for your application. spaCy’s entity recognizer also respects pre-defined entities (e.g., those set by previous pipeline components) and will use them as constraints for its predictions. So, if you train a NER model to detect publicly traded companies and add it after the rules, it will only predict entities for spans that haven’t already been labeled. This can potentially help you find entities the rules might miss.
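Here's a minimal sketch of that setup using spaCy's built-in entity_ruler component, with the pretrained pipeline's ner as a stand-in for a custom model; the patterns shown are just examples:

# Add a rule-based component before the statistical NER;
# the NER will then only predict over spans the ruler left unlabeled
ruler = nlp.add_pipe("entity_ruler", before="ner")
ruler.add_patterns([
    {"label": "COMPANY", "pattern": "Activision Blizzard"},
    {"label": "SYMBOL", "pattern": "ATVI"},
])
doc = nlp("Microsoft will buy Activision Blizzard (ATVI).")
print([(ent.text, ent.label_) for ent in doc.ents])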
