To maintain the high quality of the papers published in the Yearbook of Music Psychology/Jahrbuch Musikpsychologie (JBDGM), we would like to support your work with the following guidelines for the review process. These guidelines will help you and the authors optimize both the submission and the final product. Attention to these aspects is of central importance for future researchers (in particular those conducting meta-analyses) and for the planning of new research projects. The guidelines treat standardized effect sizes and the complete reporting of statistical values as the most important elements of reporting quantitative empirical results. These routines are part of the quality management of the JBDGM.

  1. Keywords play an important role in database searches for literature. Without them, a study published in the JBDGM will not be found in bibliographic databases such as PsycINFO. Please check the adequacy of the selected keywords; they are an important prerequisite for successful information retrieval, for instance, in meta-analyses.
  2. Is the relationship between empirical hypotheses and statistical hypotheses clearly articulated? If more than one statistical test is used to confirm a single empirical hypothesis, is there a risk of alpha-level inflation? (A sketch of a standard correction procedure follows this list.)
  3. Sampling: Is the sampling procedure transparent, critically reflected upon, and justified? Non-random sampling usually prohibits strong claims about the generalizability of significant results. Have subjects been randomly assigned to experimental conditions? Deviations from random assignment usually reduce internal validity. Have the authors described the measures taken to secure internal validity?
    Sampling has to be documented carefully: Complete information on study samples is mandatory (N, mean age, SD). Indicating the min./max. age range alone is not sufficient. Moreover, information on socio-economic background or musical experience should be provided, using standardized indicators such as ISCED 2011 (see https://en.wikipedia.org/wiki/International_Standard_Classification_of_Education).
  4. Information on the study design is required. For example, did the authors choose an experimental or a quasi-experimental design? Information on the study design should be given in the following form: “Our study was based on a 2 × 2 × 2 repeated-measures design with the independent variables xxx (strong, weak), yyy (early, late), and the degree of expertise (high, low; between-subjects variable).”
  5. Relevant publications from the JBDGM should be cited in the manuscript. This is a prerequisite for the journal’s ranking and visibility.
  6. All experimental studies should include complete descriptive statistics (means, standard deviations, number of subjects, etc.). All the pertinent information must be present in the text or in a table.
  7. Correlations among all variables should be reported as a matrix (see the correlation-matrix sketch following this list).
  8. ANOVA designs. The authors should: a. report all within-cell information, including means, standard deviations, and sample sizes (row and column marginals are also helpful); b. report sums of squares and mean squares for all effects, including non-significant effects (reporting p values alone is insufficient and not acceptable); c. report F values, degrees of freedom, and exact p values to three decimal places; smaller values should be indicated as p < .001 (see the ANOVA-table sketch following this list).
  9. Repeated measures or correlated observation designs. The authors should: a. report all values mentioned in no. 8 (ANOVA designs) above; b. report all measure-to-measure correlations.
  10. ANCOVA designs. The authors should: a. report all values mentioned in no. 8 (ANOVA designs) above; b. report all summary statistics for all variables including the covariate; c. report covariances (or correlations) between the dependent variable and each covariate; d. report covariances (or correlations) among all covariates; e. report whether the covariate and the experimental factors can be considered independent.
  11. Information on the prospective (a priori) test power (1−β) used to calculate the sample size should be provided. The free statistical software G*Power (https://www.gpower.hhu.de) can be used for this (see also the power-analysis sketch following this list).
  12. Directional hypotheses should be tested with directional statistical tests (e.g., planned contrasts) instead of omnibus tests such as ANOVA (see the directional-test sketch following this list).
  13. According to APA guidelines, reporting effect sizes as standardized indicators (e.g., Cohen’s d, Hedges’ g, r, r², etc.) is mandatory. All the pertinent information on this topic must be present in the text (see the effect-size sketch following this list).
  14. Reporting should be meta-analysis friendly; e.g., it should include exact p values to three decimal places (smaller values indicated as p < .001) as well as F and t values with their degrees of freedom, including for non-significant results.
  15. If the data are not normally distributed (e.g., in the case of reaction times), non-parametric descriptive statistics should be reported and subsequent tests should be chosen accordingly (i.e., Spearman’s or Kendall’s correlation coefficients instead of Pearson’s), or a method of data normalization should be applied (see the non-parametric sketch following this list).
  16. In the case of qualitative research methods (e.g., interview-based studies), interrater reliability for coding should be reported (see the interrater-reliability sketch following this list). Analysis of verbal data should follow established best-practice standards, such as the method of constant comparison between codes and data (for the analysis of qualitative data such as conversation analysis or phenomenological analysis, see the guidelines by Braun & Clarke, 2006). Have all data collection and analysis steps been reported transparently, so that readers can objectively and critically evaluate the results?
  17. In the case of web-based studies, Reips’ (2002, 2012) best practice rules (e.g., high-hurdle technique) should be considered.
  18. Please check whether the data set and stimuli (copyright permitting) should be made available. Technical facilities for Supplementary Online Material are available from the publisher PsychOpen; alternatively, online repositories such as OSF (https://osf.io) can be used. Given the limited number of pages in each issue, it is more economical for authors to place tables, score examples, or additional figures in the online supplemental section. Exact times for audio or video excerpts taken from complete tracks should be indicated.
  19. Non-significant results are not a reason for rejecting a submission. However, such results should be supported by sufficient a priori test power and an explicit assumption about the expected effect size; they can then be interpreted as effects smaller than expected.
  20. To increase test power, one-tailed (directional) tests should be used instead of two-tailed (non-directional) tests whenever the hypothesis is directional (see the directional-test sketch following this list).
  21. All submissions should be in line with the Transparency and Openness Promotion (TOP) Guidelines as suggested by Nosek et al. (2015).
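
Some of the quantitative points above can be illustrated with short code sketches. The following Python snippets are minimal, hypothetical illustrations (all data, variable names, and thresholds are invented for demonstration), not prescribed procedures. First, for point 2, a familywise correction such as Holm’s procedure is one standard way to keep the overall alpha level at its nominal value when several tests address a single empirical hypothesis:

```python
# Hypothetical example for point 2: controlling alpha-level inflation
# when one empirical hypothesis is tested with several statistical tests.
from statsmodels.stats.multitest import multipletests

# Hypothetical p values from four tests addressing the same hypothesis
p_values = [0.012, 0.034, 0.041, 0.220]

# Holm's step-down procedure keeps the familywise error rate at .05
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(reject)      # which tests remain significant after correction
print(p_adjusted)  # adjusted p values to report alongside the raw ones
```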
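
For point 7, a correlation matrix for all variables can be produced directly from the raw data, for instance with pandas (the variables below are hypothetical):

```python
# Hypothetical example for point 7: reporting correlations as a matrix.
import pandas as pd

# Hypothetical data set with three variables
df = pd.DataFrame({
    "tempo_preference": [98, 120, 105, 130, 110, 125],
    "years_of_training": [2, 10, 5, 12, 7, 9],
    "liking_rating": [3.1, 4.5, 3.8, 4.9, 4.0, 4.6],
})

# Full correlation matrix (Pearson by default; see point 15 for alternatives)
print(df.corr().round(2))
```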
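
For point 8, an ANOVA table containing sums of squares, degrees of freedom, F values, and exact p values (rather than p values alone) can be obtained, for example, with statsmodels; mean squares follow as SS/df. Factors and data are again hypothetical:

```python
# Hypothetical example for point 8: a complete ANOVA table with sums of
# squares, degrees of freedom, F values, and exact p values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "rating": rng.normal(4.0, 1.0, 40),
    "expertise": np.repeat(["high", "low"], 20),           # hypothetical factor
    "tempo": np.tile(np.repeat(["fast", "slow"], 10), 2),  # hypothetical factor
})

model = smf.ols("rating ~ C(expertise) * C(tempo)", data=df).fit()
table = anova_lm(model, typ=2)                     # sum_sq, df, F, PR(>F)
table["mean_sq"] = table["sum_sq"] / table["df"]   # mean squares = SS / df
print(table.round(3))                              # p values to three decimals
```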
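
For point 11, the text recommends G*Power; the same prospective power analysis can also be scripted, as in this sketch with statsmodels (the expected effect size and power target are hypothetical assumptions):

```python
# Hypothetical example for point 11: a priori sample-size calculation
# for an independent-samples t test (the same analysis G*Power performs).
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # expected Cohen's d (hypothetical assumption)
    alpha=0.05,        # significance level
    power=0.80,        # desired test power (1 - beta)
    alternative="two-sided",
)
print(round(n_per_group))  # required sample size per group (about 64)
```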
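
For points 12 and 20, a directional hypothesis can be tested directly with a one-tailed test rather than an omnibus F test; a minimal scipy sketch with hypothetical data:

```python
# Hypothetical example for points 12 and 20: a directional (one-tailed)
# test for the planned comparison "experts rate higher than novices".
from scipy import stats

experts = [4.6, 4.2, 4.8, 4.5, 4.9, 4.3]   # hypothetical ratings
novices = [3.9, 4.1, 3.7, 4.0, 3.8, 4.2]

# alternative="greater" tests the directional hypothesis
# mean(experts) > mean(novices)
result = stats.ttest_ind(experts, novices, alternative="greater")
print(f"t({len(experts) + len(novices) - 2}) = {result.statistic:.2f}, "
      f"p = {result.pvalue:.3f}")
```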
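
For point 13, Cohen’s d and Hedges’ g can be computed from the group data with the textbook formulas; a sketch using hypothetical ratings:

```python
# Hypothetical example for point 13: computing Cohen's d and Hedges' g
# for two independent groups from the raw data.
import numpy as np

group1 = np.array([4.6, 4.2, 4.8, 4.5, 4.9, 4.3])  # hypothetical ratings
group2 = np.array([3.9, 4.1, 3.7, 4.0, 3.8, 4.2])

n1, n2 = len(group1), len(group2)
# Pooled standard deviation (ddof=1 gives the sample SD)
s_pooled = np.sqrt(((n1 - 1) * group1.var(ddof=1) +
                    (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2))

d = (group1.mean() - group2.mean()) / s_pooled  # Cohen's d
g = d * (1 - 3 / (4 * (n1 + n2) - 9))           # Hedges' small-sample correction
print(f"d = {d:.2f}, g = {g:.2f}")
```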
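
For point 15, a normality check can guide the choice between Pearson’s coefficient and a rank-based alternative; a minimal scipy sketch (the skewed reaction times are hypothetical):

```python
# Hypothetical example for point 15: checking normality and switching to a
# rank-based (non-parametric) correlation when the assumption fails.
from scipy import stats

reaction_times = [412, 388, 455, 1250, 402, 397, 430, 980]  # skewed, hypothetical
accuracy = [0.91, 0.95, 0.88, 0.62, 0.93, 0.94, 0.90, 0.70]

# Shapiro-Wilk test of normality; a small p value suggests non-normal data
w, p_normal = stats.shapiro(reaction_times)

if p_normal < .05:
    rho, p = stats.spearmanr(reaction_times, accuracy)  # rank-based alternative
    print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")
else:
    r, p = stats.pearsonr(reaction_times, accuracy)
    print(f"Pearson's r = {r:.2f}, p = {p:.3f}")
```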
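
For point 16, interrater reliability for categorical codings is commonly quantified with Cohen’s kappa, here sketched with scikit-learn (the codes are hypothetical):

```python
# Hypothetical example for point 16: interrater reliability (Cohen's kappa)
# for two raters assigning the same categorical codes to interview segments.
from sklearn.metrics import cohen_kappa_score

rater_a = ["emotion", "structure", "emotion", "memory", "structure", "emotion"]
rater_b = ["emotion", "structure", "memory", "memory", "structure", "emotion"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa = {kappa:.2f}")  # chance-corrected agreement between raters
```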

Some of these points might be unfamiliar to you, so we would like to recommend the following book for further reading (Ellis, 2010).

If you have any further questions, please check the guidelines given by the APA publication manual or contact the JBDGM Editor-in-Chief (editors@jbdgm.psychopen.eu).

References

  • Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge University Press.
  • Hancock, G. R., Stapleton, L. M., & Mueller, R. O. (2019). The reviewer’s guide to quantitative methods in the social sciences (2nd ed.). Routledge. https://doi.org/10.4324/9780203861554
  • American Psychological Association. (n.d.). Journal Article Reporting Standards (JARS). APA Style. https://apastyle.apa.org/jars
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., ...Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
  • Reips, U.-D. (2002). Standards for internet-based experimenting. Experimental Psychology, 49(4), 243–256. https://doi.org/10.1026//1618-3169.49.4.243
  • Reips, U.-D. (2012). Using the Internet to collect data. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 291–310). American Psychological Association. https://doi.org/10.1037/13620-017
  • Sedlmeier, P. (2009). Beyond the significance test ritual: What is there? Zeitschrift für Psychologie, 217(1), 1–5. https://doi.org/10.1027/0044-3409.217.1.1