White Paper, Part I -- Intraspexion Is the New Sonar Machine

By Nick Brestoff, Founder and CEO, and Jagannath Rajagopal, Chief Data Scientist

On January 29, 2018, in Artificial Lawyer, the blog published by the UK's Richard Tromans, Ari Weinstein (CEO of Confident Contracts) reported on a panel at LTNY's Legal AI Bootcamp. "The 'kumbaya moment' of the session," he reported, "was when all agreed on the panel that AI not only helps legal teams to be more efficient, but it helps you to do what you could not do before." (Italics added.)

So let's start off by describing what it is that, as corporate counsel, you can't do now. 

Consider your daily flow of internal communications. It's an ocean of data, and you're closer to it than outside counsel or eDiscovery vendors.

Wouldn't you like to see the risky or "smoking gun" emails lurking below the surface before you have to manage the lawsuit? Of course you would. If you could do that, you could be proactive about the risks.

But you can't be proactive now, can you? You can't see these risks.

That's where we come in. Our system is like Sonar, a device that emits sound waves and lets users see below the surface of the water.

Without Sonar, you can't see below the surface. But with Sonar, you can locate fish and shipwrecks.

For the Wikipedia article about Sonar, click here.


And with Sonar, you can also find threats like underwater mines, except that your underwater mines are the risky emails.

Intraspexion works the same way. It's a new tool for corporate law departments: a litigation "early warning system."

NEEDLES IN A HAYSTACK

The form of Artificial Intelligence we use is called Deep Learning.  

In this White Paper, and without getting into the math of Deep Learning, we'll explain how our patented software system works.

But to better explain why Intraspexion is the New Sonar Machine, we'll also use the analogy of finding the needles in the haystack. We know you've heard that one.

FIRST LIGHT

For starters, here's a little bit of our history. The term "first light" refers, generally, to the first use of a new instrument. It's then that you see what needs further attention. In Q4 of 2017, we completed a pilot project with a company whose identity and confidential information we’re not permitted to disclose.

However, in a non-confidential telephone communication, a company attorney reported that our system had found, in a now-closed discrimination case, a risky email that the company already knew about. That was good news, but it was not exciting news.

But we were also told that our system had found a risky email that the company later determined was material, and that, previously, the company had not known about it.

Now that was compelling. How’d that happen?

TRAINING DATA

Deep Learning (for text) requires two basic ingredients: a classification of data (i.e., a label) and lots of examples -- text that fits the classification (a "positive" set) and, for contrast, text we wouldn't expect to see in that classification (a "negative" set).

When we put the pieces together, we've created a "model."

Our first classification is “employment discrimination.”

Now for "positive" and "negative" examples.

First, we created a "positive" set of examples from the factual allegations in hundreds of previously filed discrimination complaints in the federal court litigation database called PACER. Our extractions came from the classification for "Civil Rights-Jobs," which has another, less formal name: "employment discrimination."

Second, we created a “negative” set of examples that was “unrelated” to “employment discrimination.” This negative set consisted of newspaper and Wikipedia articles and other text.

To the best of our knowledge, there were no emails in these sets of examples.
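To make the recipe concrete, here is a minimal sketch of the two ingredients in code. A TF-IDF-plus-logistic-regression classifier stands in for our Deep Learning engine, and the texts are hypothetical placeholders, not actual PACER extractions.

```python
# Minimal sketch of the two training ingredients. A TF-IDF plus
# logistic-regression classifier stands in for the Deep Learning
# engine; all texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "Positive" set: factual allegations from complaints filed under the
# "Civil Rights-Jobs" classification (placeholder text shown).
positives = [
    "Plaintiff alleges she was passed over for promotion because of her age.",
    "Plaintiff alleges he was terminated after reporting harassment.",
]

# "Negative" set: text unrelated to employment discrimination, such as
# newspaper and Wikipedia articles (placeholder text shown).
negatives = [
    "The council approved funding for the new bridge project.",
    "The expedition reached the summit after a two-week climb.",
]

texts = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)  # 1 = discrimination

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the fitted pipeline is the "model"
```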

After that, we looked at Enron emails and, to make a long story short, found four (4) examples of true risks for employment discrimination. We found them in the subsets for Lay, Kenneth (Ken Lay was the Chairman and CEO of Enron) and Derrick, J.

To our knowledge, no one had surfaced them previously. Our Deep Learning model had successfully "learned" the pattern for "discrimination."

Our third step was the pilot project we can't discuss.

Then, after that "first light" pilot project, we added 10,000 Enron emails to the unrelated set, so the model could “understand” English in the context of emails.

Then we looked at 20,401 Enron emails the system had never seen before.

Result: Our "model" for discrimination called out 25 emails as being "related" to discrimination, and our 4 "needles" were among the 25.

That's 25 out of 20,401 emails, a fraction of 0.001225, which is a little less than one-eighth of one percent.

Pretty good. We had a functional "model" for employment discrimination. (Note: we can do this with any business-relevant PACER classification.)  

Now we can help you visualize our model by using the technological marvel called "t-distributed stochastic neighbor embedding," which is abbreviated "t-SNE" and is pronounced "tee-snee."

In the image below, you'll see this visualization. Here, the white points are training documents unrelated to discrimination, while the red points (lower left-hand corner) are training documents related to discrimination.

[Figure: t-SNE visualization of the discrimination training documents]

See the separation between the whites and the reds? That's a clear decision boundary. There are no red documents in the cluster of whites, and no white documents in the cluster of reds. If the colors are mixed, the Deep Learning "engine" needs tuning.
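For readers who want to reproduce a plot like this, here is a minimal sketch using scikit-learn's t-SNE implementation. The document vectors are synthetic stand-ins (two offset Gaussian clusters), not our actual training embeddings.

```python
# Minimal t-SNE sketch; synthetic document vectors stand in for the
# model's training embeddings.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
unrelated = rng.normal(0.0, 1.0, size=(100, 50))  # "white" documents
related = rng.normal(3.0, 1.0, size=(40, 50))     # "red" documents
vectors = np.vstack([unrelated, related])
labels = np.array([0] * 100 + [1] * 40)

coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(vectors)

plt.figure(figsize=(6, 6), facecolor="lightgray")
plt.scatter(coords[labels == 0, 0], coords[labels == 0, 1],
            c="white", edgecolors="black", label="unrelated")
plt.scatter(coords[labels == 1, 0], coords[labels == 1, 1],
            c="red", edgecolors="black", label="related to discrimination")
plt.legend()
plt.title("t-SNE of training documents (synthetic data)")
plt.show()
```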

Next, when the model is asked to assess emails the system has never seen before, it "reads" the text of each one and indicates whether, and to what degree, that email matches the pattern of the reds, i.e., the documents related to discrimination.
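Here is a minimal, self-contained sketch of that scoring step. A simple TF-IDF classifier again stands in for the Deep Learning model, and all emails shown are hypothetical; the predicted probability serves as the "degree" of the match.

```python
# Minimal scoring sketch: a simple classifier stands in for the Deep
# Learning model; all texts are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Plaintiff alleges she was denied promotion because of her age.",  # related
    "Plaintiff alleges he was fired after reporting harassment.",      # related
    "The city council approved funding for the new bridge project.",   # unrelated
    "Quarterly revenue grew three percent over the prior period.",     # unrelated
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score emails the model has never seen; the probability is the "degree."
new_emails = [
    "He said he would fire her because she is too old for the job.",
    "Please review the attached budget spreadsheet before Friday.",
]
for email, score in zip(new_emails, model.predict_proba(new_emails)[:, 1]):
    label = "related" if score >= 0.5 else "unrelated"
    print(f"{score:.2f}  {label}: {email}")
```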

White Paper, Part II -- Intraspexion Is the New Sonar Machine

WHAT'S GOING ON?

The "reds" are documents consisting of factual allegations that were drawn from hundreds of discrimination complaints after they were filed in PACER. It didn't matter who the defendant was. We think of this level of training now as "general."

Later we realized that we can augment the "general" training by using the factual allegations in discrimination complaints that have been previously filed against a specific company. When we do that, the level of training is "company-specific." If you're a potential customer, we can augment our model with publicly available data about your company.

In addition, our system includes a patented feedback feature. Our software allows a user to accept a "related to the risk" email as a True Positive or reject it as a False Positive.

After there's enough company-specific feedback data like that, we can augment both the positive and negative training sets.
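To illustrate, here is a minimal sketch of how such a feedback store might look; the record structure and the retraining threshold are illustrative assumptions on our part, not a description of the patented implementation.

```python
# Minimal sketch of a reviewer-feedback store; the structure and the
# retraining threshold are illustrative assumptions, not the patented
# implementation.
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    true_positives: list = field(default_factory=list)   # accepted as risky
    false_positives: list = field(default_factory=list)  # rejected by reviewer

    def record(self, email_text: str, accepted: bool) -> None:
        """A reviewer accepts an email as a True Positive or rejects it."""
        if accepted:
            self.true_positives.append(email_text)   # augments the positive set
        else:
            self.false_positives.append(email_text)  # augments the negative set

    def ready_to_retrain(self, minimum: int = 100) -> bool:
        """Retrain once enough company-specific feedback has accumulated."""
        return len(self.true_positives) + len(self.false_positives) >= minimum

store = FeedbackStore()
store.record("Re: complaint about a supervisor's remarks...", accepted=True)
store.record("Re: quarterly budget review...", accepted=False)
```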

So, currently (and as we go to market), our best model for employment discrimination is, in essence, a binary filter. It splits out the emails "related" to the risk from the "unrelated" emails, and shows you only the small number of emails related to the risk for which the model has been trained.

This filtering makes it possible for a human reviewer to see a relatively small subset of emails "related" to the risk, and then that person splits out the True Positives from the False Positives.

So the human reviewer--a corporate paralegal or attorney--is the Gold Standard. That person decides which high-scoring emails to escalate to a second reviewer, and the second reviewer decides whether an internal investigation should take place.

And that is why AI here is not frightening in any way. It means Augmented Intelligence.

Moreover, with this new tool at their disposal, corporate counsel will be more valuable to the company than ever before.

So, returning to our analogy of mines below the surface of your ocean of data, now you can see that, with our patented Deep Learning assist, corporate counsel can identify the underwater mines in time for the captain of the ship to take evasive action.

HYPOTHETICAL

Besides the cost of our software, what other costs are there?

To see if we can answer that question, let’s consider a much larger set of emails.

The attorney for the company-who-must-not-be-named also mentioned (again, in a non-confidential telephone call) that it was typical for their system to handle two million emails per month.

We remembered the number. We were going to report "early warnings" daily, so we did the math:

Assume 2,000,000 emails per month. Then:

  1. dividing by 4.3 weeks per month gives 465,116 emails per week; and

  2. dividing 465,116 emails per week by 5 days per week gives 93,023 emails per day.

At that point, we realized that we were being asked to look at 93,023 emails per day.
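Here is that arithmetic as a quick check in code:

```python
# The volume arithmetic above, as a quick check.
monthly_emails = 2_000_000
weekly_emails = monthly_emails / 4.3  # ≈ 465,116 emails per week
daily_emails = weekly_emails / 5      # ≈ 93,023 emails per business day
print(f"{weekly_emails:,.0f} per week; {daily_emails:,.0f} per day")
```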

OK, back to the haystack analogy. If that's the size of the daily haystack, a call for volunteers will be unavailing.

So, understandably, without a way to surface the needles from a haystack that large, no one even bothers to look.

Let's continue the calculation, but only in an informal way. Remember that when we ran our discrimination model against a held-out set of 20,401 Enron emails, it surfaced 25 emails related to the risk, a fraction of about one-eighth of one percent, i.e., 0.0012.

So the number of emails the system would surface as "related to the risk," when presented with 93,023 emails per day, is:

93,023 multiplied by 0.0012 = about 112 emails per day.

That's doable. We know this because, in an "Email Statistics Report, 2011-2015," the Radicati Group reported (at p. 3) that business users sent and received 121 emails per day in 2014 (on average), and expected the number to grow to 140 emails per day in 2018. 

So, for a reviewer, 112 emails per day is a slightly below-average amount, and, assuming a 7-hour workday, turns out to be about 16 “related” emails per hour, which is one email about every four (4) minutes.

And if the company is at the projected 2018 level of 140 emails per day, that's 20 emails per hour during a 7-hour workday, which would give each reviewer three (3) minutes per email.
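Collected in one place, the review-rate arithmetic looks like this:

```python
# The review-rate arithmetic above, as a quick check.
daily_emails = 93_023
related_rate = 0.0012                          # ≈ 25 / 20,401 from the Enron test
flagged_per_day = daily_emails * related_rate  # ≈ 112 flagged emails per day
per_hour = flagged_per_day / 7                 # ≈ 16 per hour over a 7-hour day
minutes_each = 60 / per_hour                   # ≈ 3.75 minutes per email
print(f"{flagged_per_day:.0f} flagged/day; {per_hour:.0f}/hour; "
      f"{minutes_each:.1f} minutes each")
```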

But we can tell you from experience that a reviewer can spot a False Positive in only a few seconds. We also provide a "help" here: from the general training set, we built a database of words that are subject-matter related to the risk. We pass the email output through this database, and it highlights those words for the reviewers. (See our Pilot Program vimeo.)
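A minimal sketch of that highlighting step follows; the term list is a hypothetical stand-in for the database we build from the general training set.

```python
# Minimal sketch of the reviewer "help": highlight risk-related words.
# The term list is a hypothetical stand-in for the database built from
# the general training set.
import re

RISK_TERMS = ["discrimination", "harassment", "retaliation", "terminated"]

def highlight(email_text: str) -> str:
    """Wrap risk-related words in asterisks so reviewers spot them fast."""
    pattern = re.compile(r"\b(" + "|".join(RISK_TERMS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: "**" + m.group(0) + "**", email_text)

print(highlight("She says she was terminated after reporting harassment."))
# -> She says she was **terminated** after reporting **harassment**.
```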

Accordingly, for companies generating two million emails per month, it may take only one (1) reviewer per day to decide which emails (from the day before) to escalate to a second reviewer.

Thus, with Intraspexion, a risky email might rise to the surface, and be visible to reviewers, only a day or so after it was written.

But the number of reviewers will also depend on how many types of litigation risks a company wants to model.

So the answer to the "other costs" question is a familiar one: it depends. 

BUT WHY IS OUR INNOVATION FOR THE LEGAL PROFESSION SUCH GOOD NEWS?

Well, let's start by admitting that neural networks have been around for decades. (For that history, click on "decades.") To make another long story short, there were winters (disappointments) and springs (hope and the hype that goes with it).

But in 2012, Deep Learning (the "street name" for multi-layer neural networks) matured and started producing extraordinary results.

The results were so strong that Andrew Ng--whose resume includes teaching computer science at Stanford, co-founding Coursera, leading Deep Learning teams at Google and Baidu, and more--has said that Deep Learning “is the new electricity,” and that, as such, Deep Learning will be as broadly impactful today as electricity was during the Industrial Revolution. 

(Prof. Ng was quoted in an October 2016 Fortune cover story, "Why Deep Learning Is Suddenly Changing Your Life," which you can access by clicking here.)

Thus, with Intraspexion, we’re working with today’s new electricity, and we're the first to use it for the legal profession, which is why we have patents.

Intraspexion is an early warning system for litigation risk, designed to help you avoid litigation. And who doesn't want less litigation?

BOTTOM LINE

Once an underwater mine is identified and our system provides an early warning, then there’s a realistic hope of avoiding that mine.

And that's not hype. For the legal profession, our New Sonar Machine is a new hope.

 

"Should Prevention Be a Core Principle of AI?"

On October 11, 2016, AI Trends posted an article by Nick Brestoff, Intraspexion’s CEO, and Larry W. Bridgesmith, one of Intraspexion’s co-founders and an Adjunct Professor of Law at the Vanderbilt School of Law. The article offers an ethics proposal for the entire AI community. 

 

"Building a Litigation Risk Profile That Could Save Your Company Money"

On January 29, 2016, Corporate Counsel posted an article by Nick Brestoff in which he shows companies how to build a litigation risk profile using data from PACER, the federal litigation database. The online version is behind a paywall, but you can get the idea by reading Nick's blog article dated December 6, 2017.

 

"Preventing Litigation: An Early Warning System to Get Big Value Out of Big Data"

On August 28, 2015, Business Expert Press published Preventing Litigation: An Early Warning System to Get Big Value Out of Big Data. Nick Brestoff is the primary author, having written 20 out of 25 chapters. His book was endorsed by Richard Susskind, one of the legal profession's most respected thought leaders. For select excerpts, see the Tab for Book.

 

"data Lawyers and Preventive Law"

On October 25, 2012, Legaltech News published Nick Brestoff's article asserting that, in the coming days, technology would enable corporate counsel to prevent litigation, not just manage it. (The Legaltech News online version is behind a paywall.)

 

"CAN ARTIFICIAL INTELLIGENCE EASE THE EDD BURDEN?"

On January 20, 2011, Law Technology News published Nick Brestoff's article, as titled above. He wrote, "Welcome to the age of legal informatics," and, after describing the upcoming battle between IBM Watson and two human champions at Jeopardy!, wrote, "I'm betting on Watson." (Boldface in the original.) (The Law Technology News online version is also behind a paywall.)