Genomics pipelines and data integration: challenges and opportunities in the research setting

The emergence and mass utilization of high-throughput (HT) technologies, including sequencing technologies (genomics) and mass spectrometry (proteomics, metabolomics, lipidomics), have allowed geneticists, biologists, and biostatisticians to bridge the gap between genotype and phenotype on a massive scale. These new technologies have brought rapid advances in our understanding of cell biology, evolutionary history, and microbial environments, and are increasingly providing new insights and applications for clinical care and personalized medicine.

Areas covered: The very success of these technologies also translates into daunting big-data challenges for researchers and institutions, challenges that extend beyond the traditional academic focus on algorithms and tools. The main obstacles revolve around analysis provenance, management of massive datasets, ease of use of software, and the interpretability and reproducibility of results. Expert Commentary: The authors review the challenges associated with implementing bioinformatics best practices at large scale, and highlight the opportunity to establish bioinformatics pipelines that incorporate data tracking and auditing, enabling greater consistency and reproducibility in basic research, translational, and clinical settings.
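The provenance and auditing idea the authors highlight can be made concrete with a small sketch. The snippet below is illustrative, not the paper's implementation: it assumes a Python-based pipeline and records, for each hypothetical pipeline step, the input file checksums, run parameters, timestamp, and software environment in an append-only JSON audit log, so that any result can later be traced back to the exact inputs and settings that produced it.

```python
import hashlib
import json
import platform
import sys
from datetime import datetime, timezone

def sha256_of_file(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_provenance(step_name, inputs, params, log_path="provenance.json"):
    """Append one pipeline step's audit record to a JSON log.

    step_name and params are illustrative placeholders; a real pipeline
    would also capture tool versions (e.g. aligner release) here.
    """
    record = {
        "step": step_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "python_version": sys.version.split()[0],
        "platform": platform.platform(),
        "parameters": params,
        # Checksumming inputs lets an auditor verify the data were unchanged.
        "inputs": {p: sha256_of_file(p) for p in inputs},
    }
    try:
        with open(log_path) as fh:
            log = json.load(fh)
    except FileNotFoundError:
        log = []
    log.append(record)
    with open(log_path, "w") as fh:
        json.dump(log, fh, indent=2)
    return record
```

In practice, workflow managers provide this kind of bookkeeping out of the box; the point of the sketch is only that consistent, machine-readable audit records are what make a large-scale analysis reproducible and reviewable.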


Journal Reference:

  1. Davis-Turak J, Courtney SM, Hazard ES, Glen WB Jr, da Silveira WA, Wesselman T, Harbin LP, Wolf BJ, Chung D, Hardiman G. Genomics pipelines and data integration: challenges and opportunities in the research setting. 2017 Mar;17(3):225-237. doi: 10.1080/14737159.2017.1282822. Epub 2017 Jan 25.

Written by Jeremy Davis-Turak

Jeremy earned his Ph.D. in Bioinformatics and Systems Biology in the lab of Alexander Hoffmann at UCSD, researching kinetic models of co-transcriptional splicing. In his studies he developed analyses for RNA-seq, nascent RNA-seq, GRO-seq, and MNase-seq that were intimately linked with mechanistic models. Jeremy set up the Bioinformatics Core at the San Diego Center for Systems Biology, optimizing pipelines for RNA-seq and ChIP-seq. He also has extensive experience analyzing gene expression data from his time in the Neurogenetics Laboratory at UCLA, where he became an expert in the analysis of microarrays, Weighted Gene Coexpression Network Analysis, pathway analyses, gene set enrichment, and motif analysis. His ambitious goal of enabling researchers without programming experience to ask quantitative questions led to the development of web portals featuring tools to query relational databases of expression data (microarray and sequencing) and perform on-the-fly computational analyses.