Analytics are fabulous but are we forgetting the human factor?

It may seem like ‘back to the future’ at first blush, but a recent case study by Anne Kershaw Esq., titled ILAS v. Linear Review, presents some genuinely fresh thinking about the way we approach discovery during litigation.

Here’s our take on Anne’s case study.

There is no doubt that sophisticated analytics, and predictive coding in particular, can deliver impressive results. However, it’s also important to remember that such technologies are not suited to every ediscovery case. Technologies like predictive coding are sophisticated and complex, so they are often difficult to explain to clients and lawyers, who tend to steer clear of statistical analysis discussions. Furthermore, they generally involve an additional expense that needs to be justified over and above processing, user and hosting costs.

So, it can require a leap of faith – a leap more people will, no doubt, be comfortable taking as the technology matures and becomes more pervasive.

In the meantime, it’s important not to lose sight of the fact that intelligent, creative lawyers or legal technologists can conduct powerful human analysis with easy-to-understand functionality: ‘find similar’ searches, email threading, domain and name normalization, and simple, iterative refinement of keywords alongside other metadata filters. These are all out-of-the-box features of any modern litigation software package.
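
To make that concrete, here is a minimal sketch in Python of two of those features: domain normalization and keyword filtering against metadata. The Document fields, the alias map and the function names are entirely hypothetical, and no particular product’s API is implied.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    custodian: str
    sender_domain: str
    text: str

# Hypothetical alias map: several raw domains collapse to one party,
# so a single filter on "acme.com" catches every variant.
DOMAIN_ALIASES = {
    "acme-corp.com": "acme.com",
    "mail.acme.com": "acme.com",
}

def normalize_domain(raw: str) -> str:
    """Collapse known aliases to one canonical domain."""
    return DOMAIN_ALIASES.get(raw.lower(), raw.lower())

def keyword_filter(docs, keywords, domains=None):
    """Return documents matching any keyword, optionally restricted
    to a set of normalized sender domains."""
    hits = []
    for doc in docs:
        if domains and normalize_domain(doc.sender_domain) not in domains:
            continue
        text = doc.text.lower()
        if any(kw.lower() in text for kw in keywords):
            hits.append(doc)
    return hits
```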

This is particularly the case when the human analyst can dive into the results and circle back, in real time, to tweak their criteria until they are happy with both the quantity and the quality of the documents they have found. When all of that analysis is available within the same platform as the processing and review functions, there’s no shuffling of data from one system to another and no need to process a load file prior to review. It also means you can go ‘back to the well’ at no cost, even after the review is underway, should you need to change and re-apply the filter criteria (and that never happens, right?).
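
Continuing the hypothetical sketch above (reusing Document and keyword_filter), the circle-back loop might look like this: run a broad pass, eyeball a small sample, then narrow the criteria and re-run against the same corpus, with no re-processing step in between.

```python
import random

def sample_for_review(hits, k=5, seed=1):
    """Pull a small random sample so the analyst can judge quality,
    not just quantity."""
    rng = random.Random(seed)
    return rng.sample(hits, min(k, len(hits)))

# Toy corpus, purely illustrative.
corpus = [
    Document("001", "Jones", "mail.acme.com", "Draft valuation for the merger."),
    Document("002", "Smith", "acme-corp.com", "Lunch on Friday?"),
    Document("003", "Jones", "other.com", "Merger timetable attached."),
]

# Pass 1 is deliberately broad; after reviewing the sample, the analyst
# narrows to the opposing party's normalized domain and simply re-runs,
# because the data never left the platform.
for keywords, domains in [
    (["merger"], None),
    (["merger", "valuation"], {"acme.com"}),
]:
    hits = keyword_filter(corpus, keywords, domains)
    sample = sample_for_review(hits)
    print(f"{len(hits)} hits; sample: {[d.doc_id for d in sample]}")
```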

Anne’s case study is largely about breathing life back into the lost art of early case assessment (ECA), and the methodology and technology it presents should be within the comfort zone of all lawyers and their clients, once they know it’s possible.

In the study to which the case study relates, impressive outcomes were delivered through some intelligent legal analysis up front, instead of the traditional workflow that is still (even seven years after the advent of predictive coding) overwhelmingly prevalent in the industry. That workflow runs like this: pay to pre-process the data; develop metadata filters, search criteria and keywords ‘in the dark’ (without any real insight into the document contents); tweak the terms until the quantity of responsive documents is ‘proportionate’ (without any assessment of the quality of those documents); then pay yet another fee to process them into a load file for a review platform, where a linear review of the filtered subset is undertaken or more analytics are applied. Invariably, once the review is underway, it becomes patently clear that the initial filters and search terms used for culling were ineffective, but by then it’s too late, too difficult and too costly to go back.

That’s not ECA. It’s Too Late Case Assessment.

Thanks, Anne, for sparking some renewed interest in a lost art and for reminding us of the importance of human expertise alongside great technology. You can download the research paper from our documentation page here.