Electronic discovery

Electronic discovery (also ediscovery or e-discovery) refers to discovery in legal proceedings such as litigation, government investigations, or Freedom of Information Act requests, where the information sought is in electronic format (often referred to as electronically stored information or ESI).[1] Electronic discovery is subject to rules of civil procedure and agreed-upon processes, often involving review for privilege and relevance before data are turned over to the requesting party.

Electronic information is considered different from paper information because of its intangible form, volume, transience and persistence. Electronic information is usually accompanied by metadata that is not found in paper documents and that can play an important part as evidence (e.g. the date and time a document was written could be useful in a copyright case). The preservation of metadata from electronic documents creates special challenges to prevent spoliation.

In the United States, at the federal level, electronic discovery is governed by common law, case law and specific statutes, but primarily by the Federal Rules of Civil Procedure (FRCP), including amendments effective December 1, 2006, and December 1, 2015.[2][3] In addition, state law and regulatory agencies increasingly also address issues relating to electronic discovery. In England and Wales, Part 31 of the Civil Procedure Rules[4] and Practice Direction 31B on Disclosure of Electronic Documents apply.[5] Other jurisdictions around the world also have rules relating to electronic discovery.

Stages of process

The Electronic Discovery Reference Model (EDRM) is a ubiquitous diagram that represents a conceptual view of the stages involved in the ediscovery process.

Identification

The identification phase is when potentially responsive documents are identified for further analysis and review. In the United States, in Zubulake v. UBS Warburg, Hon. Shira Scheindlin ruled that failure to issue a written legal hold notice whenever litigation is reasonably anticipated will be deemed grossly negligent. This holding brought additional focus to the concepts of legal holds, eDiscovery, and electronic preservation.[6] Custodians who are in possession of potentially relevant information or documents are identified. Data mapping techniques are often employed to ensure a complete identification of data sources. Since the scope of data can be overwhelming or uncertain at this stage, attempts are made to reasonably reduce the overall scope, such as limiting the identification of documents to a certain date range or set of custodians.

Preservation

A duty to preserve begins upon the reasonable anticipation of litigation. Data identified as potentially relevant is placed on legal hold, which ensures that it cannot be destroyed. Care is taken to ensure this process is defensible; the end goal is to reduce the possibility of data spoliation or destruction. Failure to preserve can lead to sanctions. Even if a court does not rule that the failure to preserve was negligent, it can require the responsible party to pay fines if the lost data puts the other side "at an undue disadvantage in establishing their defense."[7]

Collection

Once documents have been preserved, collection can begin. Collection is the transfer of data from a company to its legal counsel, who will determine the relevance and disposition of the data. Some companies that deal with frequent litigation have software in place to quickly place legal holds on certain custodians when an event (such as a legal notice) is triggered and to begin the collection process immediately.[8] Other companies may need to call in a digital forensics expert to prevent the spoliation of data. The size and scale of this collection are determined during the identification phase.
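
A minimal sketch of such event-triggered hold logic, with entirely hypothetical event names and custodians; real legal-hold software also handles notification, acknowledgement tracking, and release of holds:

    from datetime import datetime, timezone

    # Hypothetical mapping of litigation events to the custodians they affect.
    EVENT_CUSTODIANS = {"legal_notice_acme": ["j.doe", "r.roe"]}

    legal_holds = {}  # custodian -> hold metadata; presence of a key suspends deletion

    def start_collection(custodian: str) -> None:
        print(f"Collection started for {custodian}")

    def trigger_event(event: str) -> None:
        # Place a legal hold on each affected custodian and begin collection at once.
        for custodian in EVENT_CUSTODIANS.get(event, []):
            legal_holds[custodian] = {"event": event,
                                      "held_since": datetime.now(timezone.utc)}
            start_collection(custodian)

    trigger_event("legal_notice_acme")
    assert "j.doe" in legal_holds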

Processing

During the processing phase, native files are prepared to be loaded into a document review platform. Often, this phase also involves the extraction of text and metadata from the native files. Various data culling techniques, such as deduplication and de-NISTing, are employed at this stage. Sometimes native files will be converted to a petrified, paper-like format (such as PDF or TIFF) to allow for easier redaction and Bates labeling.
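
As a rough illustration of the culling step, the following minimal sketch (with hypothetical file and directory names) removes exact duplicates by hashing file contents and performs a simplified form of de-NISTing against a plain-text list of known-file hashes; production tools are considerably more sophisticated:

    import hashlib
    from pathlib import Path

    def sha1_of(path: Path) -> str:
        # Hash the file contents so exact duplicates can be detected.
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def cull(collection_dir: str, nist_hash_file: str) -> list[Path]:
        # De-NISTing: drop known system files whose hashes appear on the NIST/NSRL
        # list (assumed here to be one hexadecimal hash per line in a text file).
        known_system_hashes = set(Path(nist_hash_file).read_text().split())
        seen_hashes = set()  # hashes of documents already kept (deduplication)
        survivors = []
        for path in Path(collection_dir).rglob("*"):
            if not path.is_file():
                continue
            digest = sha1_of(path)
            if digest in known_system_hashes or digest in seen_hashes:
                continue
            seen_hashes.add(digest)
            survivors.append(path)
        return survivors

    documents_for_review = cull("collected_data", "nsrl_hashes.txt")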

Modern processing tools can also employ advanced analytics to help document review attorneys more accurately identify potentially relevant documents.

Review

During the review phase, documents are reviewed for responsiveness to discovery requests and for privilege. Different document review platforms and services can assist in many tasks related to this process, including rapidly identifying potentially relevant documents and culling documents according to various criteria (such as keyword, date range, etc.). Most review tools also make it easy for large groups of document review attorneys to work on cases, featuring collaborative tools and batches to speed up the review process and eliminate work duplication.
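
A minimal sketch of criteria-based culling as described above, assuming documents have already been processed into records with extracted text and a date field; the records, keywords, and date range are purely illustrative:

    from datetime import date

    # Hypothetical records as they might come out of a processing tool.
    documents = [
        {"id": "DOC-001", "date": date(2015, 3, 2), "text": "Quarterly earnings forecast attached."},
        {"id": "DOC-002", "date": date(2019, 7, 9), "text": "Lunch on Friday?"},
    ]

    keywords = {"earnings", "forecast"}
    start, end = date(2014, 1, 1), date(2016, 12, 31)

    # Keep only documents inside the date range that contain at least one keyword.
    potentially_responsive = [
        d for d in documents
        if start <= d["date"] <= end and any(k in d["text"].lower() for k in keywords)
    ]
    print([d["id"] for d in potentially_responsive])  # ['DOC-001']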

Analysis

During the analysis phase, the content gathered in the collection phase and reduced during processing is examined qualitatively, and the evidence is looked at in context. Correlation or contextual analysis is used to extract structured information relevant to the case, and the material can be organized through techniques such as timelining or clustering into topics. One example is analysis from a client-based perspective, in which each investigator examines one agent (person) represented in the evidence; additional patterns, such as discussions or network analysis around people, can also be explored.

Production

Documents are turned over to opposing counsel based on agreed-upon specifications. Often this production is accompanied by a load file, which is used to load documents into a document review platform. Documents can be produced either as native files or in a petrified format (such as PDF or TIFF) alongside metadata.
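
As a rough sketch of what accompanies a production, the following example writes a simple delimited load file pairing each produced image with its Bates number and selected metadata; the field names and layout are illustrative rather than any particular review platform's specification:

    import csv

    # Illustrative production records: Bates number, original file name,
    # selected metadata, and the path of the produced TIFF image.
    produced = [
        {"BegBates": "ABC000001", "FileName": "contract.docx", "Custodian": "J. Doe",
         "DateSent": "2016-05-04", "ImagePath": "IMAGES/ABC000001.tif"},
        {"BegBates": "ABC000002", "FileName": "budget.xlsx", "Custodian": "J. Doe",
         "DateSent": "2016-05-06", "ImagePath": "IMAGES/ABC000002.tif"},
    ]

    with open("production_loadfile.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(produced[0].keys()))
        writer.writeheader()       # column headers for the review platform to map
        writer.writerows(produced)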

Presentation

During the presentation phase, evidence is displayed and explained before audiences at depositions, hearings, trials, and similar proceedings. The aim is for the audience, including non-professionals, to understand the presentation and follow the interpretation; clarity and ease of understanding are the focus. The native form of the data must be abstracted, visualized, and placed in context for the presentation, and the results of the analysis should be its subject. Clear documentation should make the results reproducible.

Types of electronically stored information

Any data that is stored in an electronic form may be subject to production under common eDiscovery rules. This type of data has historically included email and office documents (spreadsheets, presentations, documents, PDFs, etc.) but can also include photos, video, instant messaging, collaboration tools, text (SMS), messaging apps, social media, ephemeral messaging, Internet of things devices (such as Fitbits, smart watches, Amazon Alexa, Apple Siri, and Nest), databases, and other file types.

Also included in ediscovery is "raw data", which forensic investigators can review for hidden evidence. The original file format is known as the "native" format. Litigators may review material from ediscovery in one of several formats: printed paper, "native file", or a petrified, paper-like format, such as PDF files or TIFF images. Modern document review platforms accommodate the use of native files and allow for them to be converted to TIFF and Bates-stamped for use in court.

Electronic messages

In 2006, the U.S. Supreme Court's amendments to the Federal Rules of Civil Procedure created a category for electronic records that, for the first time, explicitly named emails and instant message chats as likely records to be archived and produced when relevant.

One type of preservation problem arose during the Zubulake v. UBS Warburg LLC lawsuit. Throughout the case, the plaintiff claimed that the evidence needed to prove the case existed in emails stored on UBS' own computer systems. Although the requested emails were never found or had been destroyed, the court found that it was more likely than not that they had existed. The court also found that, while the corporation's counsel had directed that all potential discovery evidence, including emails, be preserved, the staff to whom the directive applied did not follow through. This resulted in significant sanctions against UBS.

To establish authenticity, some archiving systems apply a unique code to each archived message or chat. The systems prevent alterations to original messages, block their deletion, and keep unauthorized persons from accessing them.
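
A minimal sketch of the digest idea, assuming the archive stores each message together with a cryptographic hash computed at ingestion; recomputing and comparing the hash later shows whether the stored message has been altered (the field names are hypothetical):

    import hashlib
    import json

    def archive_record(message: dict) -> dict:
        # Serialize the message deterministically and store it with its digest.
        canonical = json.dumps(message, sort_keys=True).encode("utf-8")
        return {"message": message, "sha256": hashlib.sha256(canonical).hexdigest()}

    def is_unaltered(record: dict) -> bool:
        # Recompute the digest and compare it with the one stored at archiving time.
        canonical = json.dumps(record["message"], sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest() == record["sha256"]

    record = archive_record({"from": "a@example.com", "to": "b@example.com",
                             "sent": "2006-01-15T09:30:00Z", "body": "See attached."})
    assert is_unaltered(record)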

The formalized changes to the Federal Rules of Civil Procedure in December 2006 and 2007 effectively forced civil litigants into a compliance mode with respect to their proper retention and management of electronically stored information (ESI). Improper management of ESI can result in a finding of spoliation of evidence and the imposition of one or more sanctions, including adverse inference jury instructions, summary judgment, monetary fines, and other sanctions. In some cases, such as Qualcomm v. Broadcom, attorneys can be brought before the bar.[9]

Databases and other structured data

Structured data typically resides in databases or datasets. It is organized in tables with columns, rows, and defined data types. The most common are Relational Database Management Systems (RDBMS) that are capable of handling large volumes of data such as Oracle, IBM Db2, Microsoft SQL Server, Sybase, and Teradata. The structured data domain also includes spreadsheets (not all spreadsheets contain structured data, but those that have data organized in database-like tables), desktop databases like FileMaker Pro and Microsoft Access, structured flat files, XML files, data marts, data warehouses, etc.

Audio

Voicemail is often discoverable under electronic discovery rules. Employers may have a duty to retain voicemail if there is an anticipation of litigation involving that employee. Data from voice assistants like Amazon Alexa and Siri have been used in criminal cases.[10]

Reporting formats

Although petrifying documents to static image formats (TIFF and JPEG) was the standard document review method for almost two decades, native-format review has increased in popularity since around 2004. Because it requires the review of documents in their original file formats, applications and toolkits capable of opening multiple file formats have also become popular. This is also true in the Enterprise Content Management (ECM) storage market, which is converging quickly with ESI technologies.

Petrification involves the conversion of native files into an image format that does not require the use of native applications. This is useful for the redaction of privileged or sensitive information, since redaction tools for images are traditionally more mature and easier for non-technical people to apply to uniform image types. However, attempts to redact petrified PDF files in a similar way by untrained personnel have resulted in the redaction layers being removed, exposing redacted information such as Social Security numbers and other private data.[11][12]

Traditionally, electronic discovery vendors were contracted to convert native files into TIFF images (for example, 10 images for a 10-page Microsoft Word document) with a load file for use in image-based discovery review database applications. Increasingly, database review applications have embedded native file viewers with TIFF capabilities. Supporting both native and image files can either increase or decrease the total storage required, since multiple formats and files may be associated with each individual native file, so deployment, storage, and best practices are becoming especially critical to maintaining cost-effective strategies.

Structured data are most often produced in delimited text format. When the number of tables subject to discovery is large, or the relationships among the tables are essential, the data are produced in native database format or as a database backup file.[13]
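
A minimal sketch of a delimited-text production from a database, assuming a SQLite database whose file, table, and output names are chosen purely for illustration:

    import csv
    import sqlite3

    def export_table(db_path: str, table: str, out_path: str) -> None:
        # Dump one table, headers included, to a tab-delimited text file.
        with sqlite3.connect(db_path) as conn:
            cursor = conn.execute(f"SELECT * FROM {table}")  # table name is trusted here
            headers = [column[0] for column in cursor.description]
            with open(out_path, "w", newline="", encoding="utf-8") as f:
                writer = csv.writer(f, delimiter="\t")
                writer.writerow(headers)
                writer.writerows(cursor)

    export_table("accounts.db", "transactions", "transactions.txt")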

Common issues

A number of different people may be involved in an electronic discovery project: lawyers for both parties, forensic specialists, IT managers, and records managers, amongst others. Forensic examination often uses specialized terminology (for example, "image" refers to the acquisition of digital media), which can lead to confusion.[1]

While attorneys involved in litigation try their best to understand the companies and organizations they represent, they may fail to understand the policies and practices of the company's IT department. As a result, some data may be destroyed after a legal hold has been issued by unknowing technicians performing their regular duties. To combat this trend, many companies are deploying software that properly preserves data across the network, preventing inadvertent data spoliation.

Given the complexities of modern litigation and the wide variety of information systems on the market, electronic discovery often requires IT professionals from both the attorney's office (or vendor) and the parties to the litigation to communicate directly to address technology incompatibilities and agree on production formats. Failure to get expert advice from knowledgeable personnel often leads to additional time and unforeseen costs in acquiring new technology or adapting existing technologies to accommodate the collected data.

Emerging trends

Alternative collection methods

Currently, the two main approaches for identifying responsive material on custodian machines are:

(1) where physical access to the organization's network is possible, agents are installed on each custodian machine; these push large amounts of data across the network for indexing to one or more servers attached to that network; or

(2) where it is impossible or impractical to attend the physical location of the custodian system, storage devices are attached to custodian machines (or company servers) and each collection instance is then deployed manually.

In relation to the first approach, there are several issues:

  • In a typical collection process large volumes of data are transmitted across the network for indexing and this impacts normal business operations
  • The indexing process is not 100% reliable in finding responsive material
  • IT administrators are generally unhappy with the installation of agents on custodian machines
  • The number of concurrent custodian machines that can be processed is severely limited due to the network bandwidth required

New technology is able to address problems created by the first approach by running an application entirely in memory on each custodian machine and only pushing responsive data across the network. This process has been patented[14] and embodied in a tool that has been the subject of a conference paper.[15]
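
The following is a highly simplified sketch of that general idea, not of the patented tool itself: the search criteria are evaluated locally on the custodian machine and only matching documents are pushed across the network. The keyword list and collection endpoint are hypothetical.

    import urllib.request
    from pathlib import Path

    KEYWORDS = {"project zephyr", "side letter"}                   # hypothetical search terms
    COLLECTION_ENDPOINT = "https://collection.example.com/upload"  # hypothetical server

    def is_responsive(path: Path) -> bool:
        # The document is examined locally; nothing leaves the machine at this stage.
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            return False
        return any(keyword in text for keyword in KEYWORDS)

    def collect(root: str) -> None:
        for path in Path(root).rglob("*.txt"):
            if is_responsive(path):
                # Only responsive material crosses the network.
                request = urllib.request.Request(
                    COLLECTION_ENDPOINT, data=path.read_bytes(), method="POST")
                urllib.request.urlopen(request)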

In relation to the second approach, although self-collection is a hot topic in eDiscovery, concerns are being addressed by limiting the involvement of the custodian to simply plugging in a device and running an application to create an encrypted container of responsive documents.[16]

Regardless of the method adopted to collect and process data, there are few resources available for practitioners to evaluate the different tools, which is a problem given the significant cost of eDiscovery solutions. In addition to the limited options for obtaining trial licences, a significant barrier to evaluation is creating a suitable environment in which to test such tools. Adams suggests the use of the Microsoft Deployment Lab, which automatically creates a small virtual network running under Hyper-V.[17]

Technology-assisted review

Technology-assisted review (TAR)—also known as computer-assisted review or predictive coding—involves the application of supervised machine learning or rule-based approaches to infer the relevance (or responsiveness, privilege, or other categories of interest) of ESI.[18] Technology-assisted review has evolved rapidly since its inception circa 2005.[19][20]
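
A minimal sketch of the supervised machine-learning form of TAR, assuming the scikit-learn library is available; the seed documents, labels, and uncoded collection are purely illustrative:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Seed set: documents a subject-matter expert has coded as relevant (1) or not (0).
    seed_docs = ["merger pricing discussion", "fantasy football picks",
                 "draft term sheet for the merger", "holiday party logistics"]
    seed_labels = [1, 0, 1, 0]

    # The remaining, uncoded document collection.
    collection = ["revised merger valuation model", "parking garage closure notice"]

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

    # Extrapolate the expert's judgments: score and rank the uncoded documents.
    scores = model.predict_proba(vectorizer.transform(collection))[:, 1]
    for doc, score in sorted(zip(collection, scores), key=lambda p: p[1], reverse=True):
        print(f"{score:.2f}  {doc}")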

Following research studies indicating its effectiveness,[21][22] TAR was first recognized by a U.S. court in 2012,[23] by an Irish court in 2015,[24] and by a U.K. court in 2016.[25]

Recently, a U.S. court has declared that it is "black letter law that where the producing party wants to utilize TAR for document review, courts will permit it."[26] In a subsequent matter,[27] the same court stated:

To be clear, the Court believes that for most cases today, TAR is the best and most efficient search tool. That is particularly so, according to research studies (cited in Rio Tinto[26]), where the TAR methodology uses continuous active learning ("CAL")[28] which eliminates issues about the seed set and stabilizing the TAR tool. The Court would have liked the City to use TAR in this case. But the Court cannot, and will not, force the City to do so. There may come a time when TAR is so widely used that it might be unreasonable for a party to decline to use TAR. We are not there yet. Thus, despite what the Court might want a responding party to do, Sedona Principle 6[29] controls. Hyles' application to force the City to use TAR is DENIED.

Grossman and Cormack define TAR in Federal Courts Law Review as:

A process for Prioritizing or Coding a Collection of Documents using a computerized system that harnesses human judgments of one or more Subject Matter Expert(s) on a smaller set of Documents and then extrapolates those judgments to the remaining Document Collection. Some TAR methods use Machine Learning Algorithms to distinguish Relevant from Non-Relevant Documents, based on Training Examples Coded as Relevant or Non-Relevant by the Subject Matter Experts(s), while other TAR methods derive systematic Rules that emulate the expert(s)’ decision-making process. TAR processes generally incorporate Statistical Models and/or Sampling techniques to guide the process and to measure overall system effectiveness.[30]

Convergence with information governance

Anecdotal evidence for this emerging trend points to the business value of information governance (IG), defined by Gartner as "the specification of decision rights and an accountability framework to encourage desirable behavior in the valuation, creation, storage, use, archival, and deletion of information. It includes the processes, roles, standards, and metrics that ensure the effective and efficient use of information in enabling an organization to achieve its goals."

Compared to eDiscovery, information governance as a discipline is relatively new, yet there is traction for convergence. eDiscovery, a multi-billion-dollar industry, is rapidly evolving and ready to embrace optimized solutions that strengthen cybersecurity, particularly for cloud computing. Since the early 2000s, eDiscovery practitioners have developed skills and techniques that can be applied to information governance, and organizations can apply those lessons learned to accelerate their path to a sophisticated information governance framework.

The Information Governance Reference Model (IGRM) illustrates the relationship between key stakeholders and the Information Lifecycle and highlights the transparency required to enable effective governance. The updated IGRM v3.0 emphasizes that Privacy & Security Officers are essential stakeholders. This topic is addressed in an article entitled "Better Ediscovery: Unified Governance and the IGRM," published by the American Bar Association.[31]

References

  1. ^ a b Various (2009). Eoghan Casey (ed.). Handbook of Digital Forensics and Investigation. Academic Press. p. 567. ISBN 978-0-12-374267-4. Retrieved 27 August 2010.
  2. ^ "Federal Rules of Civil Procedure". LII / Legal Information Institute.
  3. ^ "2015 Amendments" (PDF). Archived from the original (PDF) on 2017-06-12. Retrieved 2017-06-27.
  4. ^ Ministry of Justice, PART 31 - DISCLOSURE AND INSPECTION OF DOCUMENTS, accessed 11 September 2022
  5. ^ Ministry of Justice, PRACTICE DIRECTION 31B – DISCLOSURE OF ELECTRONIC DOCUMENTS, last updated 1 October 2020, accessed 11 September 2022
  6. ^ "Judge Scheindlin Brought Great Insight and Leadership". March 28, 2016.
  7. ^ "Case Law AJ Holdings v. IP Holdings". January 13, 2015.
  8. ^ Logikcull. "Legal Hold and Data Preservation | Ultimate Guide to eDiscovery | Logikcull". Logikcull. Retrieved 2018-06-08.
  9. ^ Qualcomm v. Broadcom: Implications for Electronic Discovery. Retrieved 2014-10-19.
  10. ^ Sullivan, Casey C. "How the IoT Is Solving Murders and Reshaping Discovery". Retrieved 2018-06-08.
  11. ^ Kincaid, Jason (February 11, 2009). "The AP Reveals Details of Facebook/ConnectU Settlement With Greatest Hack Ever". TechCrunch.
  12. ^ Schneier, Bruce (June 26, 2006). "Yet Another Redacting Failure". Schneier on Security. Schneier.com.
  13. ^ "The Sedona Conference®". thesedonaconference.org.
  14. ^ "Method and system for searching for, and collecting, electronically-stored information". Elliot Spencer, Samuel J. Baker, Erik Andersen, Perlustro LP. 2009-11-25. {{cite journal}}: Cite journal requires |journal= (help)CS1 maint: others (link)
  15. ^ Richard, Adams; Graham, Mann; Valerie, Hobbs (2017). "ISEEK, a tool for high speed, concurrent, distributed forensic data acquisition". Research Online. doi:10.4225/75/5a838d3b1d27f.
  16. ^ "Digital Forensics Services". www.ricoh-usa.com.
  17. ^ https://espace.curtin.edu.au/bitstream/handle/20.500.11937/93974/Adams%20RB%202023%20Public.pdf?sequence=1&isAllowed=y
  18. ^ Grossman, Maura R.; Cormack, Gordon V. (January 2013). "Grossman-Cormack glossary of technology-assisted review with foreword by John M. Facciola, U.S. Magistrate Judge" (PDF). Federal Courts Law Review. 7 (1). Stannardsville, Virginia: Federal Magistrate Judges Association: 6. Retrieved August 14, 2016.
  19. ^ Gricks, Thomas C. III; Ambrogi, Robert J. (November 17, 2015). "A brief history of technology assisted review". Law Technology Today. Chicago, Illinois: American Bar Association. Retrieved August 14, 2016.
  20. ^ Sedona Conference. TAR Case Law Primer Public Comment Version August 2016 Retrieved August 17, 2016
  21. ^ Roitblat, Herbert; Kershaw, Anne. "Document categorization in legal electronic discovery: Computer classification vs. manual review". Journal of the Association for Information Science and Technology. 61 (1). Hoboken, New Jersey: Wiley-Blackwell: 1–10. Retrieved August 14, 2016.
  22. ^ Richmond Journal of Law & Technology Technology-assisted review in ediscovery can be more effective and more efficient than manual review Retrieved August 14, 2016
  23. ^ S.D.N.Y. (2012). Moore v. Publicis Archived 2013-03-10 at the Wayback Machine Retrieved August 13, 2016.
  24. ^ High Court, Ireland (2015). Irish Bank Resolution Corporation Limited v. Sean Quinn Retrieved August 13, 2016
  25. ^ High Court of Justice Chancery Division, U.K. (2016). Pyrrho Investments Ltd v. MWB Property Ltd Retrieved August 13, 2016
  26. ^ a b S.D.N.Y (2015). Rio Tinto v. Vale Retrieved August 14, 2016
  27. ^ S.D.N.Y. (2016). Hyles v. New York City Retrieved August 14, 2016
  28. ^ Practical Law Journal (2016). Continuous Active Learning for TAR Retrieved August 14, 2016
  29. ^ Sedona Conference (2007). Best practices recommendations & principles for addressing electronic document production Archived 2016-07-06 at the Wayback Machine
  30. ^ Grossman, Maura R., and Gordon V. Cormack. "A tour of technology-assisted review." Perspectives on Predictive Coding and Other Advanced Search and Review Technologies for the Legal Practitioner (ABA 2016) (2016).
  31. ^ Ledergerber, Marcus, ed. (2012). "Better Ediscovery: Unified Governance and the IGRM". American Bar Association. Archived from the original on 2016-10-11. Retrieved 2016-08-21.
