This run has 2 reads per spot:
- Technical read: L=4; 100% of spots contain this read
- Application read: average L=165, σ=92.8; 66% of spots contain this read
Study SRP004490: Plant genome sequencing data
Illumina paired-end sequencing data
You need the SRA Toolkit to operate on SRA runs.
The default toolkit configuration enables it to find and retrieve SRA runs by accession. It also downloads (and caches) only the part of the data you actually need. For example, quality scores represent the majority of the data volume, and you may not need them if you dump FASTA only (versus FASTQ). Likewise, if you are looking at a particular gene, you may not need reads aligned to other regions, or reads that are not aligned at all. In the same way, if you use GATK with SRA support enabled, you need only SRA run accessions to start your process.
fastq-dump dumps reads in a number of "standard" FASTQ and FASTA formats.
vdb-dump is also capable of producing FASTA and FASTQ (among other formats). It dumps data much faster than fastq-dump, but the ordering of reads may differ and it does not produce split-read, multi-file output.
The prefetch tool helps you cache all data in advance if you plan to run your analysis in an environment where fetching data from NCBI at run time is not feasible.
Read more at the SRA Knowledge Base on how to download SRA data using command-line utilities.
The sections below show the results of analysis run by software that is still at an experimental stage. Please take the provided results with a boatload of salt and let us know what you think.
-- SRA team
- Unidentified reads: 79.32%
- Identified reads: 20.68%
|Viruses||Enterobacteria phage phiX174 sensu lato||species||0.0||713||71.3|
|Viruses||Enterobacteria phage S13||0.0||76||7.6|
|Viruses||Enterobacteria phage NC56||0.0||57||5.7|
|Viruses||Enterobacteria phage WA10||0.0||39||3.9|
|Viruses||Enterobacteria phage WA4||0.0||30||3.0|
|Viruses||Enterobacteria phage MED1||0.0||28||2.8|
|Viruses||Enterobacteria phage ID45||0.0||27||2.7|
|Viruses||Enterobacteria phage ID22||0.0||26||2.6|
|Viruses||Enterobacteria phage WA11||0.0||14||1.4|
Results show the distribution of reads mapping to specific taxonomy nodes as a percentage of total reads within the analyzed run. In cases where a read maps to more than one related taxonomy node, the read is reported as originating from the lowest shared taxonomic node. For example, when a read maps to two species belonging to the same genus, it is reported as having originated from their common genus. Under typical conditions, where a single organism has been sequenced, the expectation is that reads will map to several taxonomy nodes across the organism's lineage, and that the number of reads mapping to higher-level nodes will be greater than the number mapping to terminal nodes.
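The lowest-shared-node rule can be sketched with a toy parent map; the taxonomy names and tree below are illustrative, not the NCBI taxonomy:

```python
# Toy taxonomy: child -> parent (illustrative names, not real NCBI nodes).
PARENT = {
    "phage_A": "genus_X",
    "phage_B": "genus_X",
    "genus_X": "family_Y",
    "family_Y": "root",
}

def lineage(node):
    """Path from a node up to the root."""
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def lowest_common_node(nodes):
    """Deepest taxonomy node shared by all candidate mappings."""
    paths = [lineage(n) for n in nodes]
    common = set(paths[0])
    for p in paths[1:]:
        common &= set(p)
    # Walk the first lineage from the leaf upward; the first node that is
    # common to all lineages is the lowest shared node.
    for n in paths[0]:
        if n in common:
            return n

# A read matching two species in the same genus is reported at the genus:
print(lowest_common_node(["phage_A", "phage_B"]))  # genus_X
```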
STAT results are proportional to the size of sequenced genomes. So given a mixed sample containing several organisms at equal copy number, one expects proportionally more reads to originate from the larger genomes. This means that the percentages reported by STAT will reflect genome size and must be considered against the genomic complexity of the sequenced sample.
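As a back-of-the-envelope illustration of this genome-size effect (the organism names and genome sizes below are made up):

```python
# Two organisms at equal copy number: the expected share of reads from each
# scales with its genome size, not its abundance. Sizes are illustrative.
genome_sizes = {"organism_A": 5_000_000, "organism_B": 1_000_000}  # nt

total = sum(genome_sizes.values())
expected_share = {org: size / total for org, size in genome_sizes.items()}

for org, share in expected_share.items():
    print(f"{org}: {share:.1%} of reads expected")
# organism_A dominates the read counts even though both are equally abundant.
```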
The NCBI SRA Taxonomy Analysis Tool (STAT) calculates the taxonomic distribution of reads from next generation sequencing runs. This analysis maps individual sequencing reads to a taxonomic hierarchy and reports the taxonomic composition of reads within a sequencing run.
STAT maps sequencing reads to a taxonomic hierarchy using a two-step strategy based on exact matches between query reads and precomputed k-mer dictionary databases. In the first pass, a small “coarse” reference dictionary database is used to identify organisms matching a read set. In the second pass, organism-specific slices from a “fine” reference dictionary database are used to compute the distribution of reads among the identified taxonomy classes (species and higher-order taxonomy nodes). When multiple tax nodes are mapped for a single spot, the lowest non-ambiguous mapping is used.
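A highly simplified sketch of the two-pass idea, with tiny in-memory dictionaries standing in for the real precomputed databases (the k-mer size, dictionary contents, taxonomy names, and example reads are all illustrative):

```python
# Toy two-pass k-mer classification, not real STAT data or code.
K = 8  # toy k-mer size; STAT itself uses 32-mers

# "Coarse" dictionary: small, flags which organisms are present at all.
COARSE = {"ACGTACGT": "genus_X"}
# "Fine" dictionary slice for the flagged organisms: more k-mers, finer nodes.
FINE = {"ACGTACGT": "genus_X", "TTGGCCAA": "phage_A"}

def kmers(read, k=K):
    """All k-mers of a read, as a set (exact-match lookup keys)."""
    return {read[i:i + k] for i in range(len(read) - k + 1)}

def classify(read):
    ks = kmers(read)
    # Pass 1: the coarse dictionary decides whether any organism matches.
    if not any(km in COARSE for km in ks):
        return "unidentified"
    # Pass 2: the fine dictionary slice assigns the read to taxonomy nodes.
    hits = {FINE[km] for km in ks if km in FINE}
    # With several hits, the real tool reports the lowest non-ambiguous node;
    # this toy version simply returns the set of matched nodes.
    return hits or "unidentified"

print(classify("ACGTACGTTTGGCCAA"))  # matches both genus_X and phage_A
```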
STAT k-mer dictionaries are built using an iterative minhash-based approach against reference genomic databases. For every fixed-length segment of incoming reference nucleotide sequence, the k-mer representing that segment is selected based on the minimum value of an FNV-1 hash function. Several strategies were used to enhance the specificity and accuracy of STAT results. Low-complexity k-mers composed of >50% homopolymer or dinucleotide repeats (e.g. AAAAAA or ACACACACACA) were filtered from the dictionaries, and discrete k-mers belonging to multiple taxonomic references were “merged” at the lowest common taxonomic node shared between the references. Finally, the specificity of representative k-mers was determined by searching against the source reference genomic database. When representative k-mers were found in multiple taxonomic reference nodes, they were merged at the lowest common taxonomic node as above.
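The per-segment min-hash selection and low-complexity filter might be sketched roughly as follows. The FNV-1 constants are the standard 64-bit ones, but the repeat-detection heuristic is a simplified stand-in for STAT's actual filtering rules:

```python
# Sketch of min-hash k-mer selection per segment, with a low-complexity
# filter. Parameters follow the text; implementation details are guesses.
K = 32
SEGMENT = 64  # "fine" dictionary segment length from the text

def fnv1(kmer):
    """64-bit FNV-1 hash of a k-mer string."""
    h = 0xcbf29ce484222325
    for byte in kmer.encode():
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
        h ^= byte
    return h

def low_complexity(kmer, frac=0.5):
    """True if >frac of the k-mer is one homopolymer or dinucleotide repeat."""
    for period in (1, 2):
        best = run = period
        for i in range(period, len(kmer)):
            run = run + 1 if kmer[i] == kmer[i - period] else period
            best = max(best, run)
        if best > len(kmer) * frac:
            return True
    return False

def select_kmer(segment, k=K):
    """Pick the minimum-FNV-1 k-mer of a segment, skipping low-complexity ones."""
    candidates = [segment[i:i + k] for i in range(len(segment) - k + 1)]
    candidates = [c for c in candidates if not low_complexity(c)]
    return min(candidates, key=fnv1) if candidates else None

def build_fine_dictionary(genome, taxnode):
    """Map one representative 32-mer per 64-nt segment to its taxonomy node."""
    d = {}
    for start in range(0, len(genome) - SEGMENT + 1, SEGMENT):
        km = select_kmer(genome[start:start + SEGMENT])
        if km is not None:
            d[km] = taxnode
    return d

print(build_fine_dictionary("ACGT" * 32, "phage_A"))
```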
Reference sequences were mapped to the taxonomy hierarchy using the NCBI taxonomy database, which contained 48,180 taxonomy nodes as of January 2017.
Segment sizes and K-mer selection
K-mer dictionaries were built by computationally slicing reference genomes into sequential segments and selecting 32-mers to represent each segment. The “coarse” k-mer dictionary uses variable segment lengths, proportional to genome size and ranging from 200 to 8,000 nt. The “fine” k-mer dictionary uses a constant 64 nt segment length for all genomes (for a 32-mer index this gives a 32x reduction in space and I/O, at the cost of expecting at least one error-free 64-mer for every spot).
Can I get the software?
Yes, on GitHub:
```
git clone https://github.com/ncbi/ngs-tools.git --branch tax
cd ./ngs-tools/tools/tax
make
```

In the ./examples folder you can find helper *.sh scripts.
How can I cite you?
No publication yet. We intend to post a preprint soon.