|Run||Spots||Bases||Size||GC content||Published||Access Type|
This run has 2 reads per spot:
|Read||Length||Spots containing this read|
|Technical read||L=4||100%|
|Application read||average L=165, σ=92.8||66%|
|SAMN01120800 (SRS358648)||RNA was extracted from untransformed stem cell lines derived from adipose stromal cells from the animal Clint, the reference chimpanzee (Pan troglodytes), in the laboratory of Dr. Rob Norgren. The cell lines were obtained from Coriell. Sequencing was performed at the University of Nebraska Medical Center.||Pan troglodytes|
|PRJNA173089||SRP014910||Pan troglodytes Genome sequencing and Transcriptome or Gene expression|
The purpose of this project is to sequence the transcriptome of the reference chimpanzee, Clint. This information will be used to improve the annotations of the chimpanzee genome.
You need the SRA Toolkit to work with SRA runs.
The default toolkit configuration lets the tools find and retrieve SRA runs by accession. They also download (and cache) only the portion of the data you actually need. For example, quality scores account for most of the data volume, and you may not need them if you dump FASTA only (versus FASTQ). Or, if you are looking at a particular gene, you may not need reads aligned to other regions, or unaligned reads at all. Likewise, if you use GATK with SRA support enabled, you only need SRA run accessions to start your analysis.
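As a concrete illustration of fetching only the slice you need, sam-dump can restrict output to reads aligned to one region. This is a sketch: the accession and region below are placeholders, not the run on this page, and the run must contain alignments for a region query to return anything.

```shell
# Sketch: dump only reads aligned to a region of interest, so only that
# slice of the run is fetched and cached over remote access.
# SRR000001 and chr1:1-100000 are placeholder values, not this page's run.
sam-dump --aligned-region chr1:1-100000 SRR000001 > region.sam
```

The same accession-based access works for the other dump tools, so no up-front bulk download is required unless you want one.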
fastq-dump will dump reads in a number of "standard" fastq and fasta formats.
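A few typical fastq-dump invocations, as a sketch; the accession SRR000001 is a placeholder rather than the run shown on this page:

```shell
# SRR000001 is a placeholder accession; substitute the run you need.
fastq-dump -X 5 -Z SRR000001          # dump only the first 5 spots to stdout, for a quick look
fastq-dump --fasta 0 SRR000001        # FASTA only (0 = no line wrapping); skips quality scores
fastq-dump --split-files SRR000001    # one output file per read in the spot
```

With the 2-reads-per-spot layout above, --split-files separates the technical and application reads into distinct files.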
vdb-dump can also produce FASTA and FASTQ (among other formats). It dumps data much faster than fastq-dump, but the ordering of reads may differ and it does not produce split-read, multi-file output.
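The vdb-dump equivalents of the FASTQ/FASTA dumps look like this (a sketch; the accession is again a placeholder):

```shell
# SRR000001 is a placeholder accession. Output is a single stream per
# invocation; unlike fastq-dump there is no split-read multi-file mode.
vdb-dump -f fastq SRR000001 > reads.fastq
vdb-dump -f fasta SRR000001 > reads.fasta
```

Because vdb-dump writes reads in table order rather than fastq-dump's order, outputs from the two tools should not be assumed to be line-for-line comparable.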
The prefetch tool will help you cache all the data in advance if you plan to run your analysis in an environment where fetching data from NCBI at run time is not feasible.
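A minimal prefetch workflow might look like the following (the accession is a placeholder):

```shell
# SRR000001 is a placeholder accession.
prefetch SRR000001                    # download the run into the local cache
fastq-dump --split-files SRR000001    # subsequent dumps read from the cache, not the network
```

This is the usual pattern for cluster jobs whose compute nodes have no outbound network access: prefetch on a login node, then dump and analyze offline.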
Read more in the SRA Knowledge Base about how to download SRA data using command-line utilities.