Let's get our nerd on

Research has gotten the wheels spinning lately, and I've been diving into ever-more technical papers after finishing off the book. Speaking of papers, did I mention I'm published? Next up, we're focusing on what my research project is going to be.

What it's looking like right now is that I will be working on a system that infers relations between objects in a text (see below...) and then further infers what sorts of things we can learn from those. For instance, if we learn from some text that (excuse the weird style here; see the bit on First Order Logic below) Promotes(milk, bone_strength) and Promotes(bone_strength, health), what can we say about milk and health?
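
Intuitively, if promoting is transitive, the two facts chain into Promotes(milk, health). Here's a minimal forward-chaining sketch in Python; the facts and the transitivity assumption are mine for illustration, not anything out of the papers below.

# A tiny forward-chaining sketch: given Promotes(x, y) facts,
# repeatedly add Promotes(x, z) whenever Promotes(x, y) and
# Promotes(y, z) both hold, until nothing new appears.
facts = {("milk", "bone_strength"), ("bone_strength", "health")}

changed = True
while changed:
    changed = False
    for (a, b) in list(facts):
        for (c, d) in list(facts):
            if b == c and (a, d) not in facts:
                facts.add((a, d))  # e.g. Promotes(milk, health)
                changed = True

print(facts)  # includes ('milk', 'health')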

So, to get a better view of the background of this branch of semantics, I've charged through a few papers. Here's a sampling of the nerdery that I've been reading. Yeah, I'm going to write about academic papers. Yes, this is where you want to stop reading.

Identifying Relations for Open Information Extraction

Fader et al.

This paper comes from a bunch of guys at the University of Washington, a school with which I have a peculiar history and am keeping tabs on for grad school (especially after reading this paper). The work presented in the paper resulted in a neat NLP program called ReVerb, an open system you can play with over on their site.

Anyways, I plan on starting these reviews off by just explaining their titles, so let's get started.

Identifying Relations

I'm assuming we all understand the word "identifying", so in this context, what are relations?

Well, they're just what they sound like. A relation is the way in which two or more things in a text are related to each other. Relations are usually represented with First Order Logic, so that

Matt will open the door

can be represented by Opening(e), Opener(e, Matt), Openee(e, door), where we let e be some abstract representation of the event itself. If we get wacky enough, pretty much anything can be written this way, but we have to be careful not to be too specific or too abstract with our "functions" (Opening, etc).
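
To make that concrete, here's one toy way to store those event facts as plain Python tuples and query them. The encoding is entirely my own; the paper doesn't prescribe any particular data structure.

# Represent the event e as facts about an event id.
# "Matt will open the door" becomes three facts about event e1.
event = [
    ("Opening", "e1"),
    ("Opener", "e1", "Matt"),
    ("Openee", "e1", "door"),
]

# Querying is just filtering: who is the opener in event e1?
openers = [f[2] for f in event if f[0] == "Opener" and f[1] == "e1"]
print(openers)  # ['Matt']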

Information Extraction

According to the authors, previous Information Extraction (IE) systems focus on narrow fields and work with labeled training examples. That means a human went through and wrote down the correct relations to be extracted for a large set of sentences, which are then presented to the AI to help it learn. Labeling is a rather laborious practice; some of the students in my lab earn cash on the side doing it for the research of the higher-ups.

Usually with IE, the possible sorts of relations are specified in advance. This means that a system made to learn about sports is usually explicitly taught that there exists a relationship called Coaches(coach_name, team). The goal is to have the computer intuitively recognize that this sentence

Sugiura led the Inui team to victory in the championships with his superior coaching skills.

has embedded within it the fact Coaches(Sugiura, Inui team), and then to have it add that fact to what it "knows".
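
As a caricature of that specified-in-advance style, here's a toy extractor where the Coaches relation and its trigger pattern are hard-coded. Real systems learn patterns like this from the labeled examples mentioned above; the regex here is purely mine.

import re

# Closed IE in miniature: the relation Coaches(coach, team) is
# declared ahead of time, along with one pattern that signals it.
COACHES = re.compile(r"(\w+) led the (\w+) team")

sentence = ("Sugiura led the Inui team to victory in the "
            "championships with his superior coaching skills.")

m = COACHES.search(sentence)
if m:
    coach, team = m.group(1), m.group(2)
    print(("Coaches", coach, f"{team} team"))
    # ('Coaches', 'Sugiura', 'Inui team')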

Open IE

Open IE might not know these relations exist ahead of time. Instead, the goal of Fader's paper is to show a way to build a system that not only finds instances of relations it already knows, but also finds new relations it doesn't know and then builds upon them. That's the heart of Open IE.

The big problem here is getting false hits, like seeing the sentence

The guide contains dead links and omits sites.

and deciding that "contains omits" is a relationship. Another problem is uninformative relations, like "is" rather than "is an album by" or "gave" rather than "gave birth to". A third kind of error would be taking the sentence

Faust made a deal with the devil.

and returning the triplet (Faust, made, a deal), rather than (Faust, made a deal with, the devil). The problem here is drawing the boundaries of the relation phrase in the wrong place.
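
To see how that last error happens, picture a naive extractor that grabs just the main verb as the relation and the noun phrase right after it as the second argument. This is a toy of my own to show the failure mode, not any particular system:

# Naive extraction: relation = the first verb, arg2 = whatever
# noun phrase immediately follows it. On this sentence it stops
# too early and yields (Faust, made, a deal).
sentence = "Faust made a deal with the devil"
tokens = sentence.split()

arg1 = tokens[0]              # "Faust"
relation = tokens[1]          # "made" -- cut off too early
arg2 = " ".join(tokens[2:4])  # "a deal" -- the wrong argument

print((arg1, relation, arg2))
# ('Faust', 'made', 'a deal')
# The informative extraction keeps the whole relation phrase:
# ('Faust', 'made a deal with', 'the devil')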

Hopefully at this point, the title makes a bit of sense! So what's going on in the paper?

Their Innovation

Just kidding. You'll have to read the paper if you want to know what happens here. Essentially, they come up with new restrictions to place on the system that, woohoo, make it work better than previous systems. Ka-ching, improvement!
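
Without giving too much away: one of those restrictions is a syntactic constraint saying a relation phrase has to match a simple part-of-speech pattern, roughly a verb, or a verb plus a preposition, or a verb plus some words plus a preposition (there's also a lexical constraint about how many distinct arguments a phrase takes, which I won't sketch). Here's a rough sketch of the syntactic idea over pre-tagged tokens; the one-character tags and the exact regex are my simplification, so read the paper for the real constraint.

import re

# Rough sketch of ReVerb's syntactic constraint: a relation phrase
# should match (roughly) V | V P | V W* P over POS tags, where
# V = verb, W = noun/adj/adv/det, P = preposition.
RELATION = re.compile(r"V(W*P)?")

def ok_relation(tags):
    """tags: string of coarse POS tags, one character per token."""
    return RELATION.fullmatch(tags) is not None

print(ok_relation("V"))     # True:  "gave"
print(ok_relation("VWWP"))  # True:  "made a deal with"
print(ok_relation("VV"))    # False: "contains omits" -- rejected

Notice how it keeps "made a deal with" intact while throwing out garbage like "contains omits", which is exactly the kind of error we saw above.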
