Reshaping DIA-based Proteomics with Spectronaut 14

“What’s New in Spectronaut 14?”

Maximilian Helf (Product Manager, Bioinformatics - Biognosys)


In the past year, our software engineers have built new features and improved many of the existing workflows in Spectronaut, our comprehensive solution for DIA proteomics. We are proud to present Spectronaut 14 as a major upgrade over previous generations of the software. It comes with deep-learning-driven identification enhancements as well as new options for data exploration and statistical analysis. Key improvements include support for ion mobility technologies, a game-changing breakthrough in library-free DIA analysis, and the ability to distribute large-scale analyses with tens of thousands of DIA files across multiple computers.


“Direct Searching of DIA Data Catches Up with Sample-Specific Libraries”

Lukas Reiter (CTO - Biognosys)


Label-free data-independent acquisition (DIA) is increasingly used for large-scale proteome profiling. Typically, as the first step in this workflow, a project-specific library is generated using data-dependent acquisition (DDA). This library generation step significantly complicates the DIA workflow. Alternatively, DIA data can be searched directly using a protein sequence (FASTA) file. In the past, however, proteome coverage has lagged behind that achieved with libraries. Here, we present a deep-learning-enhanced algorithm, directDIA 2.0, to directly search DIA data using a FASTA file. Fragmentation and retention time are predicted on the fly and used for scoring, thereby significantly improving proteome coverage. These improvements render project-specific DDA libraries dispensable and transform single-shot DIA into a greatly simplified yet highly powerful quantitative proteomics workflow.
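To make the scoring idea concrete, here is a minimal sketch of how predicted fragment intensities and retention times could contribute to a candidate score. This is not the actual directDIA 2.0 algorithm; the function names, the cosine-similarity measure, and the weighting scheme are illustrative assumptions only.

```python
import math

def cosine_similarity(predicted, observed):
    """Spectral similarity between predicted and observed fragment intensities."""
    dot = sum(p * o for p, o in zip(predicted, observed))
    norm = (math.sqrt(sum(p * p for p in predicted))
            * math.sqrt(sum(o * o for o in observed)))
    return dot / norm if norm else 0.0

def score_candidate(pred_intensities, obs_intensities,
                    pred_rt, obs_rt, rt_tolerance=2.0):
    """Combine spectral match and retention-time agreement into one score.

    The 0.5 weighting of the RT penalty is a hypothetical choice for
    illustration, not a published parameter.
    """
    spectral = cosine_similarity(pred_intensities, obs_intensities)
    rt_penalty = min(abs(pred_rt - obs_rt) / rt_tolerance, 1.0)
    return spectral * (1.0 - 0.5 * rt_penalty)

# A perfect spectral match eluting exactly at the predicted time scores 1.0.
print(score_candidate([1.0, 0.5, 0.2], [1.0, 0.5, 0.2], 10.0, 10.0))
```

In a real search engine, such a score would be one of many features fed into a statistical model that controls the false discovery rate.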


“Scalability Redefined: A New Workflow in Spectronaut to Analyze 10’000+ Raw Files on a Desktop Computer”

Oliver M. Bernhardt (Principal Scientist, Bioinformatics - Biognosys)


Since the emergence of DIA as the go-to method for high-throughput quantitative proteomics, the meaning of what constitutes a large experiment has shifted with each new generation of instruments and analysis software. Today, large experiments exceed 1'000 raw files, with even larger studies already on the horizon. This poses an immense challenge to the memory consumption and overall runtime of processing pipelines that try to analyze these data in an experiment-wide context rather than on a run-by-run basis.

Here, we present a new data analysis pipeline that increases scalability by allowing analysis of files in batches while new measurements are still being acquired. The results can then be combined into a single, controlled experiment while keeping system requirements to a minimum.
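The batch-then-merge pattern described above can be sketched as follows. This is a generic illustration, not Spectronaut's implementation; `analyze_batch` is a hypothetical stand-in for the per-batch search, and the merge simply unions per-run results.

```python
from collections import defaultdict

def analyze_batch(batch):
    """Hypothetical per-batch analysis: returns {run_name: {peptide: quantity}}.

    In practice this would run the full identification/quantification
    pipeline on a small group of raw files, keeping memory use bounded.
    """
    return {run: {"EXAMPLEPEPTIDE": 1.0} for run in batch}  # stand-in result

def merge_batches(batch_results):
    """Combine per-batch results into one experiment-wide quantity matrix."""
    experiment = defaultdict(dict)
    for result in batch_results:
        for run, quants in result.items():
            experiment[run].update(quants)
    return dict(experiment)

# Process 250 files in batches of 100, e.g. as they come off the instrument,
# then combine everything into a single experiment.
all_files = [f"run_{i:05d}.raw" for i in range(250)]
batches = [all_files[i:i + 100] for i in range(0, len(all_files), 100)]
combined = merge_batches(analyze_batch(b) for b in batches)
```

Because each batch is analyzed independently, new measurements can be processed while acquisition continues, and only the comparatively small per-batch summaries need to be held together at merge time.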





