Simply put, data processing is the preparation of collected documents for import into a review platform. Given the variety of data types and volumes and the vast number of review platforms and methodologies, data processing can be anything but simple. SkylineOmega has created proprietary processes and workflows to process any data set efficiently and in the most cost-effective manner.
Analysis and reporting are the first steps in formulating a processing plan. SkylineOmega technicians analyze your data to verify the makeup of its content. We can provide the following reports to give you an immediate understanding of your data and help you make informed decisions.
- Comprehensive and summary Data Discovery Reports
- Custodian-level reporting
- File type and size
- Deduplication and DeNIST
- Password-protected files
- Encrypted files
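The deduplication and DeNIST items above both rest on file hashing: each file is hashed, duplicates share a digest, and digests found on the NIST National Software Reference Library (NSRL) list of known system files can be culled. A minimal sketch of that idea (the helper names and report layout are illustrative, not SkylineOmega's actual workflow):

```python
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    """Compute the MD5 digest used for deduplication and DeNIST matching."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def discovery_report(files, nsrl_hashes):
    """Summarize a collection: per-extension counts and sizes, duplicate
    files, and files matching the NSRL known-file list (DeNIST)."""
    seen = {}
    report = {"by_type": {}, "duplicates": [], "denist": []}
    for path in files:
        digest = file_md5(path)
        ext = path.suffix.lower() or "(none)"
        stats = report["by_type"].setdefault(ext, {"count": 0, "bytes": 0})
        stats["count"] += 1
        stats["bytes"] += path.stat().st_size
        if digest in nsrl_hashes:
            report["denist"].append(path)      # known system file, cull it
        elif digest in seen:
            report["duplicates"].append(path)  # duplicate of seen[digest]
        else:
            seen[digest] = path
    return report
```

In practice a processing platform hashes at both the family level (an email plus its attachments) and the item level, and may use SHA-1 or SHA-256 rather than MD5; the structure is the same.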
Data culling, or filtering, is the process of removing non-relevant data from the data set before processing is complete. Culling reduces both the time and cost of a project, not only during processing but also during review: there is no need to host or review more data than necessary. We can filter using any of the methods below, alone or in combination.
- Date filtering
- Keyword filtering
- Metadata fields, such as email sender and recipient
- File type
- Email threading
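Conceptually, most of the culling criteria above compose as simple predicates applied per document. A hypothetical sketch (the document fields and function name are assumptions for illustration, not a real platform API):

```python
from datetime import date

def cull(documents, start=None, end=None, keywords=None, file_types=None):
    """Keep only documents inside the date range, containing at least one
    keyword, and of an allowed file type. Each criterion is optional, so
    the filters can be used individually or in combination."""
    kept = []
    for doc in documents:
        if start is not None and doc["date"] < start:
            continue
        if end is not None and doc["date"] > end:
            continue
        if keywords and not any(k.lower() in doc["text"].lower()
                                for k in keywords):
            continue
        if file_types and doc["type"] not in file_types:
            continue
        kept.append(doc)
    return kept
```

Email threading is the one method that does not fit this per-document shape: it groups related messages so only the most inclusive copy of a conversation needs review.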
Our ability to scale our resources means we can meet any deadline for either full or native data processing. During Native Processing, we extract all relevant metadata fields and extractable text, then prepare supplemental OCR for any file without extractable text. Full Processing builds on Native Processing by preparing images for all original native files. Through our system, we can export directly into our review tool or prepare deliverables for a platform of your choice, such as Relativity, Concordance, Summation, Ringtail, and more.
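The native-processing step described above — use the file's own text layer when one exists, otherwise fall back to OCR — can be sketched as follows. This is a simplified illustration: the `ocr` callable stands in for a real OCR engine (such as Tesseract), and real text extraction handles many formats, not just plain bytes.

```python
def extract_text(raw: bytes) -> str:
    """Try to pull extractable text from a native file's bytes.
    Returns "" when no usable text layer is present (e.g. a scanned image)."""
    try:
        return raw.decode("utf-8").strip()
    except UnicodeDecodeError:
        return ""

def process_native(docs, ocr):
    """Native-processing pass over (filename, bytes) pairs: keep extracted
    text where it exists, and route image-only files through the supplied
    `ocr` callable for supplemental OCR."""
    processed = []
    for name, raw in docs:
        text = extract_text(raw)
        if not text:
            text = ocr(raw)  # supplemental OCR for files without a text layer
        processed.append({"file": name, "text": text})
    return processed
```

Full Processing would add an imaging step to the same loop, rendering each native file to TIFF or PDF before export.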