NGS Meeting on 16-4-2010
Time & Location
9:00-10:30, 16 April 2010
Hojel City Center, gebouw D (5th floor)
Graadt van Roggenweg 340
3531 AH Utrecht
Agenda
|9:00-9:15|Leon Mei|NGS updates|
|9:15-9:45|Jan van Haarst|De novo assembly tools evaluation|
|9:45-10:00|Frans Paul Ruzius|Update on mapping/alignment tool evaluation|
|10:00-10:20|Martien Groenen|Animal Breeding and Genetics at WUR|
|10:20-10:30|Leon Mei|Update from participants & parking lot|
Participants
- Martien Groenen (WUR)
- Jan van Haarst
- Frans Paul Ruzius
- Freerk van Dijk
- Morris Swertz (alternating with proteomics meeting)
- Alex Bossers (CVI-WUR, Lelystad)
- Victor de Jager
- Leon Mei
- Rutger Brouwer
- Yanju Zhang (LIACS, Leiden University)
Discussion on De Novo Software Evaluation
- How about de novo assembly of viruses? The conclusion is that viral genomes are often very simple, so we will not include them in our evaluation. However, the evaluation protocol can certainly be reused for virus assembly in the future.
- How about de novo assembly using SOLiD data? We decided to leave it out for the moment since no NBIC members are working on this.
- A coverage parameter is missing from the current evaluation. Is this important?
- The k-mer setting influences the results significantly. How do we handle this in the evaluation?
- The count of unused reads can serve as a performance measure.
- How about a weighted utility function to rank the tools?
- It would be interesting to identify the performance measures that best discriminate between the tools.
- The evaluation script needs to be shared on gForge so that others can perform the evaluation as well. This will also enable performance comparisons across different hardware settings.
- Currently, unfiltered raw data is used. Since filtering functions are often built into de novo assembly software, the results might change with filtering. How do we tackle this?
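The weighted utility function raised in the discussion could look like the minimal sketch below. The tool names, measure values, and weights are all hypothetical placeholders, not results from the actual evaluation; the idea is simply to min-max normalise each measure, invert the ones where lower is better, and rank tools by the weighted sum.

```python
# Hypothetical sketch of a weighted utility function for ranking assemblers.
# All tool names, measure values, and weights below are made-up examples.

# Raw performance measures per tool.
measures = {
    "tool_a": {"n50": 45000, "n_count": 1200, "unused_reads": 0.08},
    "tool_b": {"n50": 38000, "n_count": 300,  "unused_reads": 0.15},
}

# Weights are assumptions; they would come out of the group's priorities.
weights = {"n50": 0.5, "n_count": 0.3, "unused_reads": 0.2}
higher_is_better = {"n50": True, "n_count": False, "unused_reads": False}

def utility(scores: dict) -> float:
    """Weighted sum of min-max normalised measures (each scaled to 0..1)."""
    total = 0.0
    for m, w in weights.items():
        values = [t[m] for t in measures.values()]
        lo, hi = min(values), max(values)
        norm = (scores[m] - lo) / (hi - lo) if hi > lo else 0.5
        if not higher_is_better[m]:
            norm = 1.0 - norm  # invert measures where lower is better
        total += w * norm
    return total

# Rank tools from best to worst by utility.
ranking = sorted(measures, key=lambda t: utility(measures[t]), reverse=True)
```

One design point worth noting: normalising before weighting keeps measures on different scales (bases, counts, fractions) from dominating each other.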
To-do list De Novo Evaluation
- Coverage, the number of "N" bases in the resulting assembly, and the count of unused reads will be used as performance measures and included in the evaluation table.
- We will examine the difference in assembly quality caused by changing the k-mer value, and then decide on the best way to include the k-mer setting in the evaluation framework.
- We will run several comparisons between unfiltered raw data and filtered data. It is still possible that unfiltered data is useful in some cases; we will provide recommendations on this.
- The script will be added to the gForge: https://gforge.nbic.nl/projects/ngstools/
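The agreed performance measures (coverage, "N" count, unused reads) could be computed from an assembly along the lines of the sketch below. The FASTA parser, example sequences, and read counts are illustrative assumptions only, not part of the actual evaluation script on gForge.

```python
# Sketch of the agreed performance measures computed from a FASTA assembly.
# The example data and read counts are hypothetical.

def parse_fasta(text: str) -> dict:
    """Return {header: sequence} from FASTA-formatted text."""
    chunks, name = {}, None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(">"):
            name = line[1:]
            chunks[name] = []
        elif name is not None:
            chunks[name].append(line)
    return {h: "".join(parts) for h, parts in chunks.items()}

def assembly_measures(seqs: dict, total_reads: int, unused_reads: int,
                      read_length: int) -> dict:
    """Compute assembly size, 'N' count, unused reads, and mean coverage."""
    assembly_size = sum(len(s) for s in seqs.values())
    n_count = sum(s.upper().count("N") for s in seqs.values())
    used_bases = (total_reads - unused_reads) * read_length
    return {
        "assembly_size": assembly_size,
        "n_count": n_count,
        "unused_reads": unused_reads,
        "coverage": used_bases / assembly_size,
    }

# Tiny worked example with two contigs and made-up read counts.
example = ">contig1\nACGTNNACGT\n>contig2\nACGTACGTAC"
m = assembly_measures(parse_fasta(example), total_reads=10,
                      unused_reads=2, read_length=35)
```

In the example, the 20 bp assembly contains two "N" bases, and coverage is the used read bases divided by the assembly size.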