This run has 2 reads per spot:
|Read||Average length||Std. dev.||Spots containing read|
|1||79||12.1||100%|
|2||80||12.8||100%|
|Technical||4||–||100%|
|Application||165||92.8||66%|
|BioProject||Study||Title|
|PRJEB7759||ERP008710||A Catalogue of the Mouse Gut Metagenome|
The importance of the gut microbiota for regulation of whole body metabolism, energy homeostasis, development of the immune system and even complex behavioural traits is well documented. The acquisition of comprehensive gene catalogues of the human gut metagenome using “next generation sequencing” has immensely advanced our insight into the complex metagenome-host genome interaction and the connection between common human diseases and the gut microbiota. Causal links are difficult to establish in humans, and therefore, mice still serve as important models for functional studies. We used HiSeq2000-based whole metagenome sequencing to establish a catalogue of 2.6 million non-redundant microbial genes from faecal samples of 184 mice representing different strains, fed different diets, obtained from different providers, and kept in different housing laboratories to secure high diversity and representation. Similar to the human gut microbiome, more than 99% of the genes in the catalogue were bacterial, suggesting that the mouse microbiota overall comprises between 800 and 900 bacterial species. A core mouse gut microbiome was defined at the genus level comprising 60 genera, 25 of which were shared with the core genera in the human gut microbiome. Although the mouse gut microbiome was functionally similar to its human counterpart, sharing 79.9% of its KEGG orthologous groups, only 4.0% of the mouse gut microbial genes were shared with those identified in human gut microbiome, emphasising the need for a specific mouse catalogue. We observed marked differences regarding provider, housing laboratory, strains, gender and feed, emphasizing the need for carefully controlled experimental conditions and caution comparing data from different laboratories. As we have accounted for these factors in creating the catalogue, it provides a useful reference for future studies, including studies using short reads for comprehensive analyses of the mouse gut microbiome.
You need SRA Toolkit to operate on SRA runs.
The default toolkit configuration enables it to find and retrieve SRA runs by accession. It also downloads (and caches) only the part of the data you actually need. For example, quality scores account for a majority of the data volume, and you do not need them if you dump FASTA only (as opposed to FASTQ). Likewise, if you are looking at a particular gene, you may not need reads aligned to other regions, or reads that are not aligned at all. In the same way, if you use GATK with SRA support enabled, you need only SRA run accessions to start your pipeline.
fastq-dump will dump reads in a number of "standard" fastq and fasta formats.
vdb-dump is also capable of producing FASTA and FASTQ (besides other formats). It dumps data much faster than fastq-dump, but the ordering of reads may differ and it does not produce split-read, multi-file output.
The prefetch tool will cache all data in advance, which helps if you plan to run your analysis in an environment where retrieving data from NCBI at run time is not feasible.
Read more at SRA Knowledge Base on how to download SRA data using command line utilities.
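As a concrete sketch of the workflow above (ERR1234567 is a placeholder accession, and the commands assume the SRA Toolkit binaries are on your PATH):

```shell
# Placeholder accession -- substitute your run of interest.
ACC=ERR1234567

# Cache the run locally in advance (e.g. for offline compute nodes).
prefetch "$ACC"

# Dump FASTQ with one file per read of the spot.
fastq-dump --split-files "$ACC"

# Dump FASTA only (quality scores are skipped, reducing data volume).
fastq-dump --fasta 0 "$ACC"

# Alternatively, dump FASTQ with vdb-dump (faster, but read order may differ).
vdb-dump --format fastq "$ACC"
```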
In addition, you can download the following data:
|Type||Size||Name||Free Egress||Access Type|
The sections below show results of an analysis run by software that is still at an experimental stage. Please take the provided results with a boatload of salt and let us know what you think.
-- SRA team
- Unidentified reads: 93.72%
- Identified reads: 6.28%
|Domain||Organism||Rank||% of total reads||Read count||Reads (×1,000)|
|Bacteria||Lachnospiraceae bacterium A4||species||1.3||52,410||52.4|
|Bacteria||Lachnospiraceae bacterium A2||species||0.9||36,037||36.0|
|Bacteria||Lachnospiraceae bacterium 28-4||species||0.5||19,473||19.5|
|Bacteria||Oscillibacter sp. 1-3||species||0.4||13,731||13.7|
|Bacteria||Lachnospiraceae bacterium COE1||species||0.2||8,304||8.3|
|Bacteria||Lachnospiraceae bacterium 10-1||species||0.2||7,521||7.5|
|Bacteria||Firmicutes bacterium ASF500||species||0.2||6,673||6.7|
|Bacteria||Anaerotruncus sp. G3(2012)||species||0.2||5,933||5.9|
|Bacteria||Lachnospiraceae bacterium 3-1||species||0.1||5,355||5.4|
|Bacteria||Dorea sp. 5-2||species||0.1||4,886||4.9|
|Bacteria||Clostridium sp. ASF502||species||0.1||4,591||4.6|
|Bacteria||Eubacterium plexicaudatum ASF492||–||0.1||4,491||4.5|
|Bacteria||Alistipes senegalensis JC50||–||0.1||2,732||2.7|
|Bacteria||Lachnospiraceae bacterium 3-2||species||0.1||2,584||2.6|
|Bacteria||Enterorhabdus caecimuris B7||–||0.1||2,427||2.4|
|Bacteria||Eubacterium sp. 14-2||species||0.0||1,806||1.8|
|Bacteria||Alistipes shahii WAL 8301||–||0.0||1,696||1.7|
|Bacteria||Alistipes timonensis JC136||–||0.0||1,585||1.6|
|Bacteria||Enterorhabdus mucosicola DSM 19490||–||0.0||1,389||1.4|
|Bacteria||Lachnospiraceae bacterium M18-1||species||0.0||1,015||1.0|
Results show the distribution of reads mapping to specific taxonomy nodes as a percentage of total reads within the analyzed run. When a read maps to more than one related taxonomy node, it is reported as originating from the lowest shared taxonomic node: for example, when a read maps to two species belonging to the same genus, it is reported as having originated from their common genus. Under typical conditions, where a single organism has been sequenced, the expectation is that reads will map to several taxonomy nodes across the organism's lineage, and that more reads will map to higher-level nodes than to terminal nodes.
STAT results are proportional to the size of sequenced genomes. So given a mixed sample containing several organisms at equal copy number, one expects proportionally more reads to originate from the larger genomes. This means that the percentages reported by STAT will reflect genome size and must be considered against the genomic complexity of the sequenced sample.
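A toy calculation makes the genome-size bias concrete (the organism names and sizes below are invented for illustration):

```python
# Two organisms present at equal copy number, but with different genome sizes.
# Reads are drawn roughly in proportion to genome size, so the larger genome
# contributes a larger share of reads despite equal abundance.
sizes = {"organism_A": 2_000_000, "organism_B": 4_000_000}  # genome sizes, bp
total = sum(sizes.values())
share = {name: size / total for name, size in sizes.items()}
# organism_B ends up with ~67% of the reads, organism_A with ~33%.
```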
The NCBI SRA Taxonomy Analysis Tool (STAT) calculates the taxonomic distribution of reads from next generation sequencing runs. This analysis maps individual sequencing reads to a taxonomic hierarchy and reports the taxonomic composition of reads within a sequencing run.
STAT maps sequencing reads to a taxonomic hierarchy using a two-step strategy based on exact matches between query reads and precomputed k-mer dictionary databases. In the first pass, a small “coarse” reference dictionary database is used to identify organisms matching a read set. In the second pass, organism-specific slices of a “fine” reference dictionary database are used to compute the distribution of reads among the identified taxonomy classes (species and higher-order taxonomy nodes). When multiple taxonomy nodes are mapped for a single spot, the lowest unambiguous mapping is used.
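The last step, resolving a spot that matches several taxonomy nodes to their lowest shared node, can be sketched as follows (the tiny hard-coded taxonomy and function names are illustrative, not part of STAT):

```python
# Sketch of lowest-common-node resolution for a spot matching several
# taxonomy nodes. The toy parent map below stands in for the NCBI taxonomy.
parent = {
    "Lachnospiraceae bacterium A4": "Lachnospiraceae",
    "Lachnospiraceae bacterium A2": "Lachnospiraceae",
    "Lachnospiraceae": "Clostridiales",
    "Oscillibacter sp. 1-3": "Clostridiales",
    "Clostridiales": "Bacteria",
    "Bacteria": None,
}

def lineage(node):
    """Path from a node up to the root, starting with the node itself."""
    path = []
    while node is not None:
        path.append(node)
        node = parent.get(node)
    return path

def lowest_common_node(hits):
    """Deepest taxonomy node shared by the lineages of all hit nodes."""
    common = set(lineage(hits[0]))
    for h in hits[1:]:
        common &= set(lineage(h))
    # Walking up from any hit, the first node in the common set is the lowest.
    for node in lineage(hits[0]):
        if node in common:
            return node

# A spot matching two species of the same family is reported at the family.
print(lowest_common_node(["Lachnospiraceae bacterium A4",
                          "Lachnospiraceae bacterium A2"]))  # Lachnospiraceae
```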
STAT k-mer dictionaries are built using an iterative minhash-based approach against reference genomic databases. For every fixed-length segment of incoming reference nucleotide sequence, the k-mer representing that segment is selected as the one minimizing the FNV-1 hash function. Several strategies were used to enhance the specificity and accuracy of STAT results. Low-complexity k-mers composed of >50% homopolymer or dinucleotide repeats (e.g. AAAAAA or ACACACACACA) were filtered from the dictionaries, and discrete k-mers belonging to multiple taxonomic references were “merged” at the lowest common taxonomic node shared between the references. Finally, the specificity of representative k-mers was determined by searching against the source reference genomic database. When representative k-mers were found in multiple taxonomic reference nodes, they were merged at the lowest common taxonomic node as above.
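The low-complexity filter can be sketched as below. The >50% thresholds come from the text; the exact repeat-detection rule used by STAT is not published, so counting the longest run repeating with period 1 or 2 is an assumption made for illustration:

```python
# Sketch of the low-complexity filter: drop k-mers in which more than half of
# the positions lie in a homopolymer (period 1) or dinucleotide (period 2)
# repeat. The detection heuristic here is illustrative, not STAT's own.

def longest_repeat_run(kmer, period):
    """Length of the longest substring repeating with the given period."""
    best, run = 0, period
    for i in range(period, len(kmer)):
        if kmer[i] == kmer[i - period]:
            run += 1
        else:
            best = max(best, run)
            run = period
    return max(best, run)

def is_low_complexity(kmer):
    half = len(kmer) / 2
    return (longest_repeat_run(kmer, 1) > half or
            longest_repeat_run(kmer, 2) > half)

# "AAAAAA..." and "ACACAC..." are rejected; a mixed 32-mer passes.
```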
Reference sequences were mapped to the taxonomy hierarchy using the NCBI taxonomy database, which contained 48,180 taxonomy nodes as of January 2017.
Segment sizes and K-mer selection
K-mer dictionaries were built by computationally slicing reference genomes into sequential segments and selecting 32-mers to represent each segment. The “coarse” k-mer dictionary uses variable segment lengths, proportional to genome size and ranging from 200 to 8,000 nt. The “fine” k-mer dictionary uses a constant 64-nt segment length for all genomes (for a 32-mer index, this gives a 32× reduction in space and I/O, at the cost of the expectation that every spot contains at least one error-free 64-mer).
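The segment-to-k-mer selection for the "fine" dictionary can be sketched as follows. The FNV-1 constants are the standard 64-bit ones; the segmenting and selection details are an illustration of the scheme described, not the exact STAT implementation:

```python
# Sketch of "fine" dictionary construction: slice the reference into 64-nt
# segments and keep, per segment, the 32-mer with the smallest FNV-1 hash.
FNV1_OFFSET = 0xcbf29ce484222325   # standard 64-bit FNV-1 offset basis
FNV1_PRIME = 0x100000001b3         # standard 64-bit FNV-1 prime

def fnv1(data):
    """64-bit FNV-1: multiply by the prime, then XOR each byte."""
    h = FNV1_OFFSET
    for byte in data.encode():
        h = (h * FNV1_PRIME) & 0xFFFFFFFFFFFFFFFF
        h ^= byte
    return h

def min_hash_kmer(segment, k=32):
    """The k-mer of the segment with the minimum FNV-1 hash value."""
    kmers = (segment[i:i + k] for i in range(len(segment) - k + 1))
    return min(kmers, key=fnv1)

def fine_dictionary_kmers(reference, segment=64, k=32):
    """One representative 32-mer per 64-nt segment of the reference."""
    return [min_hash_kmer(reference[i:i + segment], k)
            for i in range(0, len(reference) - segment + 1, segment)]
```

Keeping one 32-mer out of the ~33 overlapping 32-mers in each 64-nt segment is what yields the roughly 32× space and I/O reduction mentioned above.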
Can I get the software?
Yes, on GitHub:

```shell
git clone https://github.com/ncbi/ngs-tools.git --branch tax
cd ./ngs-tools/tools/tax
make
```

Helper *.sh scripts can be found in the ./examples folder.
How can I cite you?
No publication yet. We intend to post a preprint soon.