Intraspexion Is the New Sonar Machine

By Nick Brestoff, CEO and Jagannath Rajagopal, Chief Data Scientist

Looking for mines below the surface of your ocean of data?

Of course you are. But you can't see below the surface. You currently have no way to do that, so you must manage the litigation that comes up. Can you be proactive? Not really. After the disaster, risk managers present seminars about what just happened and how you can avoid the next one. Fine. But why couldn't they see that one coming?

Our system is different from anything that's ever been available for corporate legal departments. Ever.

It's why, in the Prediction Technology sector of the LawGeex Legal AI Landscape, we alone have patents. Not patents pending. Patents. Why?

We use a form of AI called Deep Learning to surface the risky emails for you to see, investigate, and then advise control group executives on how to handle them.

Using Deep Learning to see these risks is much better than a smoke alarm. Does a smoke alarm tell you where the fire is located? No. Our system is more like using sonar to see below the surface of the ocean of data that's created every day. With sonar, you can locate fish, find threats like underwater mines (see our Pilot Program page), and locate shipwrecks, like the one in the image below.

Sonar image of the shipwreck of the Soviet Navy minesweeper T-297, formerly the Latvian Virsaitis, in Estonian waters 20 km from Keri island. Source: https://en.wikipedia.org/wiki/Sonar.

NEEDLES IN A HAYSTACK

So, in this White Paper, and without getting into the math of Deep Learning, we'll explain how our patented software system works. But to better explain why Intraspexion is the New Sonar Machine, we'll switch the analogy from underwater mines to needles in a haystack.

As a former attorney, I know you've heard about email haystacks. Email risks are the needles, and needles are hard to see.

FIRST LIGHT

For starters, here's a little history. The term "first light" refers, generally, to the first use of a new instrument. It's then that you see what needs further attention. In 2016, we completed our first pilot with a company whose identity and confidential information we’re not permitted to disclose.

However, in a non-confidential telephone communication, a company attorney reported that our system had found, in a now-closed discrimination case, a risky email that the company already knew about. That was good news, but it was not exciting news.

But we were also told that our system had found another risky email, one the company had determined was material and had not previously known about.

Now that was compelling. How’d that happen?

TRAINING DATA

First, we trained our Deep Learning model for “employment discrimination.”

We created a “positive” set of examples that was “related” to the use case of “discrimination.” For this set, we used the factual allegations from hundreds of previously filed discrimination complaints in the federal court litigation database called PACER. Our extractions were from the category of litigation labeled “Civil Rights-Jobs,” which is Nature of Suit code 442. "Civil Rights-Jobs" has another, less formal name: "employment discrimination."

(To the best of our knowledge, there were no emails in the training examples.)

Second, we created a “negative” set of examples that was “unrelated” to “employment discrimination.” This negative set consisted of newspaper and Wikipedia articles and other text.
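
To make this two-set design concrete, here's a minimal sketch in Python. The folder names are hypothetical, and because this White Paper skips the math, a simple off-the-shelf text classifier from scikit-learn (TF-IDF features plus logistic regression) stands in for our Deep Learning model; the point is the "related" versus "unrelated" setup, not the particular algorithm.

```python
# A minimal sketch of the two training sets; the stand-in classifier
# and folder names are illustrative, not our production system.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def load_texts(folder):
    """Read every .txt file in a folder into a list of strings."""
    return [p.read_text(errors="ignore") for p in Path(folder).glob("*.txt")]

# "Related" set: factual allegations extracted from PACER complaints
# filed under Nature of Suit code 442 ("Civil Rights-Jobs").
positives = load_texts("pacer_nos442_allegations/")

# "Unrelated" set: newspaper articles, Wikipedia articles, other text.
negatives = load_texts("unrelated_corpus/")

texts = positives + negatives
labels = [1] * len(positives) + [0] * len(negatives)

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
```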

After that, we looked at Enron emails and found four (4) examples of true risks for employment discrimination. We found them in the subsets for Lay, Kenneth (Ken Lay was the Chairman and CEO of Enron) and Derrick, J.

To our knowledge, no one had surfaced them previously. So we had successfully trained our Deep Learning "model" to "learn" the pattern for "discrimination."

Third, after that "first light" pilot project, we added 10,000 Enron emails to the unrelated set, so the model could “understand” English in the context of emails. Then we processed 20,401 Enron emails as a held-out set.

Result: Our "model" for discrimination called out 25 emails as being "related" to discrimination, and our 4 "needles" were among the 25. Our Sonar Machine had worked.
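
Continuing the sketch above, the held-out step reduces to scoring each unseen email and keeping the ones above a threshold (the 0.5 cutoff here is an illustrative assumption, not our production setting):

```python
# Score a held-out set of emails and surface the "related" ones.
held_out = load_texts("enron_held_out/")      # e.g., 20,401 emails
scores = model.predict_proba(held_out)[:, 1]  # estimated P(related)

flagged = [(s, e) for s, e in zip(scores, held_out) if s >= 0.5]
flagged.sort(key=lambda pair: pair[0], reverse=True)  # riskiest first

# In the run described here, 25 of 20,401 were surfaced, and the
# 4 known "needles" were among the 25.
print(f"{len(flagged)} of {len(held_out)} emails surfaced as related")
```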

Thus, we had a functional "model," and we can help you visualize it by using the technological marvel called “t-distributed stochastic neighbor embedding,” which is abbreviated “t-SNE” and is pronounced "tee-snee."

In the image below, you’ll see this visualization. Here, the white points are training documents unrelated to discrimination, while the red points (in the lower left-hand corner) are training documents related to discrimination.

See the separation? That's a clear decision boundary. There are no red documents in the cluster of whites, and no white documents in the cluster of reds. If the colors are mixed, the algorithm needs tuning.

Next, when the "model" is asked to assess text in emails, which the system has never seen before, it can "read" the text in each email and indicate whether that email matches the pattern of the reds (the documents related to discrimination), and to what degree.

[Figure: t-SNE visualization of the training documents (whites = unrelated; reds = related)]
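
For readers who want to reproduce this kind of picture, here's a minimal t-SNE sketch that reuses the stand-in pipeline from above; scikit-learn's TSNE does the projection, and the colors follow the convention in the image (the styling is illustrative):

```python
# Project the training documents into 2-D with t-SNE and plot them.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Reuse the fitted vectorizer to embed the training documents.
vectors = model.named_steps["tfidfvectorizer"].transform(texts)
coords = TSNE(n_components=2, random_state=0).fit_transform(vectors.toarray())

colors = ["red" if y == 1 else "white" for y in labels]
fig, ax = plt.subplots(figsize=(6, 6))
ax.set_facecolor("dimgray")  # so the white points stay visible
ax.scatter(coords[:, 0], coords[:, 1], c=colors, edgecolors="black", s=12)
ax.set_title("Related (red) vs. unrelated (white) training documents")
plt.show()
```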

WHAT'S GOING ON?

The "reds" are documents consisting of factual allegations that were drawn from hundreds of discrimination complaints after they were filed in PACER. This level of training is general in nature. We aren't yet focusing in any way on the litigation experience of a potential customer.

But we can also train a "model" using factual allegations in discrimination complaints aimed at a potential customer.

And, of course, we can train a model in any PACER Civil litigation category of interest (e.g., breach of contract, fraud, antitrust) using both general and company-specific data.

In addition to company-specific data that's public information (because it's in PACER), our system includes a patented feedback feature. Our software allows a user to accept an email as a True Positive or reject it as a False Positive. After there's enough feedback data like this, we can augment both the positive and negative training sets. It's another way of making the training data more company-specific. 
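
Reduced to its data flow, that feedback feature looks something like the sketch below; the function name and in-memory lists are hypothetical illustrations of the accept/reject loop, not our implementation:

```python
# A reviewer's verdict routes the email into one of the training sets.
def record_feedback(email_text, is_true_positive, positives, negatives):
    """Accepting a True Positive strengthens the "related" set;
    rejecting a False Positive strengthens the "unrelated" set."""
    if is_true_positive:
        positives.append(email_text)
    else:
        negatives.append(email_text)

# After enough feedback accumulates, retrain on the augmented sets:
# model.fit(positives + negatives,
#           [1] * len(positives) + [0] * len(negatives))
```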

So, currently (and as we go to market), our best discrimination model is a super binary filter. It splits out the emails "related" to a specific classification from the "unrelated" emails. This filtering makes it possible for a human reviewer to see a relatively small subset of emails "related" to the risk, and then that person splits out the True Positives from the False Positives.

So the human reviewer here is the Gold Standard. That person decides which high-scoring emails to escalate to a second reviewer, and the second reviewer decides whether the email warrants an internal investigation.

And that is why AI here means Augmented human Intelligence.

Accordingly, our system won't replace corporate counsel; in fact, they'll be more valuable to the company.

Why? Because with our patented Deep Learning computer-assist, corporate counsel can identify the underwater mines in time for the captain of the ship to take evasive action. Avoiding a mine is far better than dealing with the carnage it may cause.

HYPOTHETICAL

Let's shift to costs. Besides the cost of our software, what other costs are there?

To see if we can answer that question, let’s consider a much larger set of emails.

The attorney for the company-who-must-not-be-named also mentioned (again, in a non-confidential telephone call) that it was typical for their system to handle two million emails per month.

We remembered this challenge. We were going to report "early warnings" daily, so we did the math:

Assume 2,000,000 emails per month; now, when

  1. divided by 4.3 weeks per month, the result is 465,116 emails per week;

  2. and when 465,116 emails per week is divided by 5 days per week, the result is 93,023 emails per day.

At that point, we realized that we were being asked to look at 93,023 emails per day.

That's the size of the daily haystack! Large!
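
For anyone who wants to check the arithmetic, here it is as a short Python sketch (the 4.3 weeks per month and 5 workdays per week are the assumptions stated above):

```python
# Back-of-envelope: from monthly volume to the daily haystack.
emails_per_month = 2_000_000
weeks_per_month = 4.3
workdays_per_week = 5

per_week = emails_per_month / weeks_per_month  # ~465,116 emails/week
per_day = per_week / workdays_per_week         # ~93,023 emails/day
print(f"{per_week:,.0f} emails/week; {per_day:,.0f} emails/day")
```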

Understandably, without a way to surface the needles from a haystack that large, no one even bothers to look. So corporate counsel has no choice but to manage the lawsuits that come through the door.

Let's continue the calculation, but only in an informal way that's not statistically valid. Remember that when we ran our discrimination model against a held-out set of 20,401 Enron emails, it surfaced 25 emails containing the 4 discrimination-risky emails we knew about. What's 25 "related" emails divided by 20,401? It's about one-eighth of one percent, i.e., 0.0012.

So the number of emails the system would surface as "related to the risk," when presented with 93,023 emails per day, is:

93,023 multiplied by 0.0012 = about 112 emails per day.

How can we evaluate that number? Is it small, average, or large? Well, in an "Email Statistics Report, 2011-2015," the Radicati Group reported (at p. 3) that business users sent and received 121 emails per day in 2014 (on average), and expected the number to grow to 140 emails per day in 2018. 

So, for a reviewer, 112 emails per day is a slightly below-average amount, and, assuming a 7-hour workday, is just 16 “related” emails per hour, which is one email about every four (4) minutes.

And if the company is at the projected 2018 level of 140 emails per day, that's 20 emails per hour during a 7-hour workday, which would give each reviewer three (3) minutes per email.
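
Continuing the back-of-envelope sketch, the pilot's informal flag rate (25 out of 20,401, rounded to 0.0012 as above) turns the daily haystack into a reviewer's workload; per_day carries over from the snippet above:

```python
# From the daily haystack to a single reviewer's workload.
flag_rate = round(25 / 20_401, 4)        # ~0.0012, as rounded above
surfaced_per_day = per_day * flag_rate   # ~112 emails/day
per_hour = surfaced_per_day / 7          # ~16 emails/hour (7-hour day)
minutes_each = 60 / per_hour             # ~4 minutes per email

print(f"~{surfaced_per_day:.0f} emails/day, ~{per_hour:.0f}/hour, "
      f"~{minutes_each:.0f} minutes each")
```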

But we can tell you from experience that a reviewer can spot a False Positive in only a few seconds. We provide a "help" here: from the general training set, we built a database of words that are subject-matter related to the risk. We pass the email output through this database, and it highlights those words for the reviewers. (See the video on our Pilot Program page.)
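
In sketch form, that highlighting "help" is a simple term lookup; the word list below is illustrative only, not our actual risk-word database:

```python
# Highlight risk-related words so a reviewer can triage in seconds.
import re

RISK_TERMS = ["discrimination", "discriminate", "harassment",
              "retaliation", "hostile", "termination"]  # illustrative

_pattern = re.compile(r"\b(" + "|".join(RISK_TERMS) + r")\b", re.IGNORECASE)

def highlight(email_text):
    """Wrap each risk-related word in **...** for the reviewer's display."""
    return _pattern.sub(lambda m: f"**{m.group(0)}**", email_text)

print(highlight("HR ignored her complaint about a hostile workplace."))
```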

Accordingly, for companies generating two million emails per month, it may take only one (1) paralegal a day to decide which emails (from the day before) to escalate to a reviewing attorney as a True Positive worthy of further investigation.

Thus, with Intraspexion, a risky email might rise to the surface, and be visible to a reviewer, only a day or so after it was written. But the number of reviewers will depend on how many types of litigation risks a company wants to model.

So the answer to that question is: it depends. 

Now let's address the "other" costs besides the cost of Intraspexion. If a reviewer escalates an email to the next level, and an investigation is undertaken, there’s the time and cost associated with the next level of review and the ensuing investigation. And if the investigation warrants it, there's the time and cost of escalating the situation to a control group executive, deciding what to do, and being proactive in order to avoid the risk.

So the answer to the question about the "other" costs is: it's too soon to tell.

WHY IS THE ANSWER UNSATISFACTORY?

At this point, we can't pin down the other, ancillary costs of our system, and so we deem this initial answer unsatisfactory for several reasons:

  1. The hypo is based on two million emails per month, and your company's volume may be higher or lower;

  2. We scaled down to a daily figure in a linear fashion, but email traffic doesn't arrive in a straight line. There will be days when our system reports far more, or far fewer, "early warnings" than 16 per hour. A reviewer could go days without seeing a single "needle," and then the system might surface, say, 36 in a day, which would require more staff time; and

  3. There are other costs unknown to us that a company may incur when a risk is surfaced and confirmed as worthy of an investigation. It would be great if, after deployment, our customers would tell us the costs associated with this data, but they may not do so, even if the data is anonymized. We may never know these details.

A more accurate answer is simply to admit that Intraspexion's patented system, though innovative and deserving of its issued patents, is new, and no one has yet aggregated data "from the field."

BUT WHY IS OUR INNOVATION FOR THE LEGAL PROFESSION SUCH GOOD NEWS?

Well, let's start by admitting that neural networks have been around for decades. (For some history, click on "decades.") To make a long story short, there were winters (disappointments) and springs (hope and the hype that goes with it).

But in 2012, Deep Learning (the "street name" for multi-layer neural networks) matured. As recounted in a Forbes article, Deep Learning had started producing extraordinary results. (For the article, click on "extraordinary results.")

The results were so strong that Andrew Ng--whose resume includes teaching computer science at Stanford, co-founding Coursera, leading Deep Learning teams at Google and Baidu, and more--has said that Deep Learning "is the new electricity." When he said that, he explained that Deep Learning will be as broadly impactful now as electricity was during the industrial revolution; it will change everything.

(Prof. Ng was quoted in an October 2016 Fortune cover story, "Why Deep Learning Is Suddenly Changing Your Life," which you can access by clicking here.)

Thus, with Intraspexion, we’re working with today’s new electricity for the legal profession. We didn’t invent Deep Learning, but Intraspexion is applying it in a way that is not unlike the application of sonar to see below the surface of the ocean. In your ocean of data, Intraspexion is an early warning system for risk. (And that's why the ROI is high.)

THE ORIGIN OF OUR NAME

We invented and patented a software system that uses Deep Learning to accomplish a practical result: to enable corporate counsel to use the company's own intranet in a way that allows for introspection. Hence, "Intraspexion."

BOTTOM LINE

Once an underwater mine is identified and our system provides an early warning, then there’s a realistic hope of avoiding that mine.

And that's not hype. For the legal profession, our New Sonar Machine is a new hope.

 

"Should Prevention Be a Core Principle of AI?"

On October 11, 2016, AI Trends posted an article by Nick Brestoff, Intraspexion’s CEO, and Larry W. Bridgesmith, one of Intraspexion’s co-founders and an Adjunct Professor of Law at the Vanderbilt School of Law. The article offers an ethics proposal for the entire AI community. 

 

"Building a Litigation Risk Profile That Could Save Your Company Money"

On January 29, 2016, Corporate Counsel posted an article by Nick Brestoff in which he shows companies how to build a litigation risk profile using data from PACER, the federal litigation database. The online version is behind a paywall, but you can get the idea by reading Nick's blog article dated December 6, 2017.

 

"Preventing Litigation: An Early Warning System to Get Big Value Out of Big Data"

On August 28, 2015, Business Expert Press published Preventing Litigation: An Early Warning System to Get Big Value Out of Big Data. Nick Brestoff is the primary author, having written 20 of its 25 chapters. The book was endorsed by Richard Susskind, one of the legal profession's most respected thought leaders. For select excerpts, see the Book tab.

 

"data Lawyers and Preventive Law"

On October 25, 2012, Legaltech News published Nick Brestoff's article asserting that, in the coming days, technology would enable corporate counsel to prevent litigation, not just manage it. (The Legaltech News online version is behind a paywall.)

 

"CAN ARTIFICIAL INTELLIGENCE EASE THE EDD BURDEN?"

On January 20, 2011, Law Technology News published Nick Brestoff's article, as titled above. He wrote, "Welcome to the age of legal informatics," and, after describing the upcoming battle between IBM Watson and two human champions at Jeopardy!, wrote, "I'm betting on Watson." (Boldface in the original.) (The Law Technology News online version is also behind a paywall.)