Objective
The aim of this study was to investigate the psychometric properties of a multisource review survey tool for medical consultants in an Australian healthcare setting.

Methods
Two sets of survey data from a convenience sample of medical consultants were analysed using SPSS, comprising self-assessment data from 73 consultants and data from 734 peer reviewers. The 20-question survey consisted of three subscales, plus an additional global question for reviewers. Analyses included internal consistency (Cronbach's α) of the full scale and the three subscales, Pearson correlations between the single global question, the total performance score and the three survey subfactors, interrater agreement (rWG(J)), the optimal number of peer reviewers required, model-based reliability (ρ) and the validity of the model.

Results
The global question, total performance score and the three subfactors were strongly correlated (general scale r=0.81, clinical subscale r=0.78, humanistic subscale r=0.74, management subscale r=0.75; two-tailed P<0.01 for all). The scale showed very good internal consistency, except for the five-question management subscale. Model-based reliability was excellent (ρ=0.93). Confirmatory factor analysis showed that the model fit of the 20-item scale was not satisfactory (minimum discrepancy/d.f.=7.70; root mean square error of approximation=0.10; comparative fit index=0.79; Tucker–Lewis index=0.76), whereas a modified 13-item model provided a good fit. Using the 20-item scale, a 99% level of agreement could be achieved with eight to 10 peer reviewers; for the same level of agreement, the number of reviewers increased to >10 using the revised 13-item scale.
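For readers unfamiliar with the indices reported above, the following is a minimal Python sketch of how they are typically computed: Cronbach's α for internal consistency, the rWG(J) agreement index (James et al.) against a uniform null distribution, and a Spearman–Brown projection of reliability as the number of raters grows, which is one standard way to reason about how many peer reviewers are required. The data below are synthetic and purely illustrative; the study's own analyses were run in SPSS, and whether it used exactly these formulas (e.g. Spearman–Brown for the reviewer-number estimate) is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ratings: 30 peer reviewers x 5 items on a 5-point scale,
# correlated through a shared "true performance" component.
# Illustrative only -- NOT the study's dataset.
base = rng.normal(4.0, 0.6, size=(30, 1))
ratings = np.clip(np.round(base + rng.normal(0.0, 0.5, size=(30, 5))), 1, 5)

def cronbach_alpha(data):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total-score variance)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def rwg_j(target_ratings, n_options=5):
    """rWG(J) agreement for J items on an A-point scale, against a uniform null.

    sigma_EU^2 = (A^2 - 1) / 12 is the variance of the rectangular (no-agreement)
    null distribution over the A response options.
    """
    J = target_ratings.shape[1]
    s2_mean = target_ratings.var(axis=0, ddof=1).mean()
    sigma_eu2 = (n_options ** 2 - 1) / 12
    rel = 1 - s2_mean / sigma_eu2
    return (J * rel) / (J * rel + s2_mean / sigma_eu2)

def spearman_brown(r_single, m):
    """Projected reliability of the mean of m raters, given single-rater reliability."""
    return m * r_single / (1 + (m - 1) * r_single)

print(f"alpha  = {cronbach_alpha(ratings):.2f}")
print(f"rWG(J) = {rwg_j(ratings):.2f}")
# How reliability of the averaged rating grows with the number of reviewers,
# assuming (hypothetically) a single-rater reliability of 0.40:
for m in (4, 8, 10, 12):
    print(f"{m:>2} reviewers -> projected reliability {spearman_brown(0.40, m):.2f}")
```

The Spearman–Brown curve flattens as m grows, which is why adding reviewers beyond a certain point (here, the eight to 10 range discussed in the Results) yields diminishing gains in agreement.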
Conclusions
Overall, the 20-item multisource review survey tool showed good internal consistency reliability for both self and peer ratings; however, further investigation using a larger dataset is needed to analyse the robustness of the model and to clarify the role that a single global question may play in future multisource review processes.

What is known about the topic?
Defining and measuring the skills and behaviours that reflect competence in the health setting have proven complex, and this has resulted in the development of specific multisource feedback surveys for individual medical specialities. Because little literature exists on multisource reviews in an Australian context, a pilot study of a revised survey tool was undertaken at an Australian tertiary hospital.

What does this paper add?
This study investigated the psychometric properties of a generic tool (used across specialities) by assessing the validity, reliability and interrater reliability of the scale, and considered the contribution of a single global question to the overall multisource feedback process. The study provides evidence of the validity and reliability of the survey tool under investigation. The strong correlation between the global item, the total performance score and the three subfactors suggests that further investigation is needed to determine the role that a robust single global question may play in future multisource review surveys. Our five-question management skills subscale addresses questions relevant to the specific organisation surveyed, and we anticipate that it may stimulate further exploration in this area.

What are the implications for practitioners?
The survey tool may provide a valid and reliable basis for performance review of medical consultants in an Australian healthcare setting.