Discovery shotgun proteomics often yields large, noisy datasets that researchers must organise and filter before quality results can be produced. Our laboratory employs a method of replicate summing with spectral counting to filter out low-quality and non-reproducible peptide matches, improving the quality of downstream quantitative analysis. This is especially important for our research, where samples range in complexity from ancient skin samples to grape cell culture to mouse retinal tissue. To facilitate these data handling processes, we have developed several pieces of software that are freely available for use.
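The replicate-summing idea can be sketched as follows: peptide-to-protein matches seen in too few replicates are discarded as non-reproducible, and the surviving spectral counts are summed per protein. The data layout, function name, and threshold below are illustrative assumptions, not the laboratory's published pipeline.

```python
MIN_REPLICATES = 2  # assumed reproducibility threshold, for illustration only

def sum_replicates(replicate_counts, min_replicates=MIN_REPLICATES):
    """replicate_counts: dict mapping protein ID -> list of per-replicate
    spectral counts. Returns summed counts for reproducible proteins only."""
    summed = {}
    for protein, counts in replicate_counts.items():
        observed = sum(1 for c in counts if c > 0)
        if observed >= min_replicates:  # keep only reproducibly observed IDs
            summed[protein] = sum(counts)
    return summed

counts = {
    "P12345": [4, 5, 0],   # seen in 2 of 3 replicates -> kept, total 9
    "Q67890": [1, 0, 0],   # seen in 1 replicate -> filtered out
}
print(sum_replicates(counts))  # {'P12345': 9}
```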
Same-Same analysis helps to correct overly conservative multiple testing corrections. Using permutation analysis, Same-Same takes six replicates of PSM-searched data and calculates a modified Benjamini-Hochberg cut-off value. With this method, researchers can make more informed decisions at the discovery stage about which proteins are suited to quantitation.
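For context, a minimal sketch of the standard Benjamini-Hochberg cut-off that Same-Same modifies: the largest ranked p-value satisfying p(i) ≤ α·i/m becomes the significance threshold. Same-Same's permutation step, which shuffles replicate labels to build an empirical null, is not reproduced here; this is a generic textbook procedure, not the tool's code.

```python
def bh_cutoff(pvalues, alpha=0.05):
    """Return the largest p-value passing the Benjamini-Hochberg
    criterion p_(i) <= alpha * i / m, or None if nothing passes."""
    ranked = sorted(pvalues)
    m = len(ranked)
    cutoff = None
    for i, p in enumerate(ranked, start=1):
        if p <= alpha * i / m:
            cutoff = p  # track the largest qualifying p-value
    return cutoff

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
print(bh_cutoff(pvals))  # 0.008
```

With six replicates per state, Same-Same replaces the fixed α line with one calibrated against label-permuted data, which is what relaxes an overzealous correction.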
PeptideWitch produces high-stringency data from PSM-searched results. In addition to conducting the Same-Same process, PeptideWitch performs control vs treatment analysis on input data. Data are filtered into highly stringent, reproducible results before quantitation, with outputs presented as heatmaps, PCA charts, Venn diagrams and Excel spreadsheets.
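The control vs treatment step can be illustrated with a log2 fold change over summed spectral counts, using a pseudocount to avoid division by zero. This is a generic spectral-counting comparison; the exact statistic PeptideWitch reports may differ.

```python
import math

def log2_fold_changes(control, treatment, pseudocount=1):
    """control/treatment: dict protein ID -> summed spectral count.
    Returns log2(treatment/control) for proteins present in both states."""
    shared = control.keys() & treatment.keys()
    return {
        p: math.log2((treatment[p] + pseudocount) / (control[p] + pseudocount))
        for p in shared
    }

ctrl = {"P12345": 15, "Q67890": 7}
trt  = {"P12345": 31, "Q67890": 7}
fc = log2_fold_changes(ctrl, trt)
print(fc["P12345"])  # log2(32/16) = 1.0
```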
PeptideMind helps validate control vs treatment quantitation using machine-learning-assisted protocols. Users upload two states with six replicates each, and PeptideMind performs a 400-permutation analysis of differentially regulated protein IDs. The program detects outliers and highlights reproducible results using a consensus of four different machine learning algorithms, acting as a form of statistical validation for quantitation procedures.
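The consensus logic can be sketched as a majority vote: a protein is flagged as reliably differentially regulated only if enough classifiers agree. The vote data below are mock values and the threshold is an assumption; the real tool applies four ML algorithms across the 400 replicate permutations.

```python
def consensus(votes, required=3):
    """votes: dict protein ID -> list of booleans, one per classifier.
    Returns the set of proteins flagged by at least `required` classifiers."""
    return {p for p, v in votes.items() if sum(v) >= required}

votes = {
    "P12345": [True, True, True, False],    # 3 of 4 agree -> flagged
    "Q67890": [True, False, False, False],  # lone outlier call -> rejected
}
print(sorted(consensus(votes)))  # ['P12345']
```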