
Saturday Afternoon Pre-Conference Workshops

Using Modern Test Theory for standard setting in Medical Education 

  • Associate Professor Boaz Shulruf and Professor Philip Jones, University of New South Wales, Sydney, Australia

 

Introduction:  The decision to pass or fail a medical student is a ‘high stakes’ one. The literature offers a range of quality standard setting methods, yet all have major limitations. Among these challenges are the need to recruit a panel of experts to set the standards; the need to employ a statistician or psychometrician able to undertake complex statistical analysis; the need to communicate the results to the affected students in a simple way; and the need to provide robust justification for pass/fail decisions should they be legally challenged. This workshop will introduce the Objective Borderline Method (OBM), a new standard setting method derived from the principles of Modern Test Theory. The OBM is a probability-based model that can be applied to most types of examination, yet is mathematically simple enough to be used by those with no statistical background.

 

Content and structure:

  1. Introduction to standard setting methods, what purposes they serve and a brief overview of the most commonly used methods

  2. Introduction to Modern Test Theory and its relevance to standard setting

  3. Introduction to the Objective Borderline Method (OBM), its theoretical foundation and application, using examples from different types of examinations.

  4. Applying the OBM: setting standards for OSCE and MCQ (guided self-practice)

  5. In-depth critical appraisal and comparison of the OBM with other methods

 

Intended outcomes:  Participants will be able to use the Objective Borderline Method (OBM) to set objective Pass/Fail standards for clinical and other examinations within their own clinical and educational context.  Hand-outs with guidelines and illustrations will be provided to participants.

 

Who should attend:  Medical educators with a strong interest in assessment and standard setting.

 

Level:  Intermediate and advanced 

Designing and evaluating situational judgement tests to assess non-academic attributes in postgraduate selection

  • Fiona Patterson, University of Cambridge & Work Psychology Group

  • Máire Kerrin, Work Psychology Group

  • Chris Roberts, University of Sydney

  • Marcia Reid & Robert Hale, Australian General Practice Education & Training (AGPET)

 

Introduction:  Research shows that an array of non-cognitive professional attributes, such as integrity, empathy, resilience and team awareness, are critically important predictors of job performance and training outcomes. Until recently, international selection practices have tended to focus primarily on assessing academic ability. A key challenge for recruiters is how best to assess a broad range of non-academic attributes reliably, since large-scale interviewing is costly and there is limited research evidence to support the use of personality tests, for example, especially in high-stakes settings.

 

Building on international research and the Ottawa consensus statement regarding selection practices, Prideaux et al. (2011) asked whether situational judgement tests (SJTs) may be a valid method for assessing a broad range of non-academic attributes in high-volume selection.  This workshop explores the research evidence underpinning the reliability and validity of SJTs in selection in medicine, and how best to develop SJT items for selection purposes.

 

Content & Structure of workshop: Presenters will share their experience of developing and evaluating SJTs as a selection methodology. They will illustrate how SJTs are delivered in combination with other methods (e.g., interviews, knowledge tests) for postgraduate selection across various specialties and settings. We will draw upon work conducted on GP selection in Australia using an SJT and MMIs, and on selection into specialty training in the UK.

 

Participants will be invited to practise item development and will have the opportunity to review SJT items. The session will consist of several short presentations on aspects of using SJTs, with a taster session on item writing, lively discussion and some interactive small group work.

 

Intended outcomes: By the end of the session, participants will: (1) understand the research evidence on the reliability and validity of SJTs for medical selection; (2) understand the features important in developing an SJT (e.g., designing items and response formats); and (3) recognise the advantages and limitations of using an SJT for selection into medical education and training.

 

Who should attend: All those interested in selection into medical training, undergraduate or postgraduate.

 

Level: Introductory

Research in Medical Education: Making Strange with Culture(s)

  • Dr Brian Hodges, Wilson Centre for Research in Education, University Health Network & Department of Psychiatry, University of Toronto, Toronto General Hospital, Toronto, Canada

  • Dr Ming-Jung Ho, Department of Social Medicine, National Taiwan University, College of Medicine, Taipei, Taiwan

  • Dr Ayelet Kuper, Wilson Centre for Research in Education, Sunnybrook Health Sciences Centre & Department of Medicine, University of Toronto, Toronto General Hospital, Toronto, Canada

  • Dr Cynthia Whitehead, Wilson Centre for Research in Education, Women’s College Hospital & Department of Family & Community Medicine, University of Toronto, Toronto, Canada

 

Introduction:  There is growing awareness that practices in medical education around the world are “constructed” – that is, they can be very different across historical time periods and in different cultural settings. Far from there being a universal concept of what medical education is or should be, there are fascinating debates and divergences. This workshop will focus on the dimension of culture in medical education, examining specific examples of research that take up culture(s) using anthropological, sociological and discursive lenses. The purpose is to shed light on things that might appear to be “true” or “natural” about medical education and to show that some of our practices are, in fact, rather strange.

 

Structure:  Introductory presentation, small group work/case study, discussion.

 

Intended Outcomes:  Greater awareness of the constructed nature of medical education practices. Introduction to research methods from the social sciences that explore culture and medical education.

 

Who should attend:   Anyone with a curiosity to understand medical education in its diversity and variations and an appreciation for social science research methods. No research experience is necessary.

 

Level:  Introductory / Intermediate

Improving your OSCE: Measurement, Recognition and Remediation of Station Level Problems

  • Dr Richard Fuller, Leeds Institute of Medical Education, School of Medicine, University of Leeds, UK

  • Dr Godfrey Pell, Leeds Institute of Medical Education, School of Medicine, University of Leeds, UK

  • Dr Matthew Homer, Leeds Institute of Medical Education, School of Medicine, University of Leeds, UK

  • Professor Trudie Roberts, Leeds Institute of Medical Education, School of Medicine, University of Leeds, UK

 

Introduction:  OSCEs are one of the major performance test formats in healthcare education, but they are complex to design and deliver, and methods of assessment and standard setting must be defensible when subjected to detailed scrutiny.  This workshop provides an overview of how psychometric indicators at ‘whole exam’ and ‘station level’ can be used to test assumptions about quality, identify problems and model solutions within an overall framework of quality improvement.

 

Content and structure:  The workshop will begin with an overview of the use of borderline methods of standard setting in OSCEs, and will discuss the generation, use and interpretation of a variety of ‘whole exam’ and ‘station level’ psychometric indicators.  A range of ‘diagnostic’ exercises will allow participants to gain confidence in interpreting station-level metrics and identifying problems ranging across station/checklist design issues, errors that arise during the delivery of the OSCE, and the impact of aberrant assessor behaviour.  Participants will then focus on ‘treatments’ – proposing solutions, and subsequent monitoring, that can be applied to their own OSCE assessments.
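
To ground the ‘diagnostic’ exercises, the sketch below (Python, with invented data) illustrates the core computation of one widely used borderline method, the borderline regression method: candidates’ checklist scores at a station are regressed on examiners’ global grades, and the pass mark is the predicted checklist score at the ‘borderline’ grade. The grade scale, data and quality metric here are illustrative assumptions, not the presenters’ own materials.

```python
# Illustrative borderline regression for a single OSCE station.
# All data are invented; the 0-4 global grade scale (2 = borderline)
# is an assumption made for this example.
import numpy as np

checklist = np.array([8, 10, 11, 12, 13, 14, 15, 16, 17, 18], dtype=float)
global_grade = np.array([0, 1, 1, 2, 2, 2, 3, 3, 4, 4], dtype=float)

# Ordinary least squares: checklist score as a function of global grade.
slope, intercept = np.polyfit(global_grade, checklist, deg=1)

# Cut score = predicted checklist score at the borderline grade.
BORDERLINE = 2
cut_score = intercept + slope * BORDERLINE
print(f"Station pass mark: {cut_score:.1f} / 20")

# One simple station-level quality indicator: R^2 of the regression.
predicted = intercept + slope * global_grade
ss_res = float(np.sum((checklist - predicted) ** 2))
ss_tot = float(np.sum((checklist - checklist.mean()) ** 2))
print(f"R^2 (checklist vs global grade): {1 - ss_res / ss_tot:.2f}")
```

A low R², a flat or negative slope, or a cut score that drifts between parallel circuits are exactly the kinds of station-level red flags the workshop’s exercises are designed to surface.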

 

Intended outcomes:  At the end of the workshop, participants will

  • Be better informed about the use of borderline methods to generate quality metrics

  • Have developed and improved their skills in the analysis of performance tests

  • Be able to recognise common ‘station level’ problems and propose remedial action

 

Participants will also be encouraged to generate 'take home lessons' to implement in their own institutions.

 

Level:  Intermediate

Effecting Effective Feedback

  • Dr Janet MacDonald, School of Postgraduate Medical and Dental Education, Cardiff University, Wales, UK

  • Dr Lynne Allery, School of Postgraduate Medical and Dental Education, Cardiff University, Wales, UK

  • Dr Lesley Pugsley, School of Postgraduate Medical and Dental Education, Cardiff University, Wales, UK

 

Introduction:  Formative assessment plays an integral part in facilitating learning; however, the ways in which feedback is given, received and interpreted are multifaceted (1). A number of studies have explored the quality of feedback provided to students (2,3) to determine principles for formative assessment, whilst others have explored how feedback-seeking behaviours can be encouraged (4). Since the educational value of feedback can be highly variable, developing coding systems to analyse the feedback that is provided can be a useful way for tutors to explore the nature of the feedback given and to reflect on how these comments might enhance or impede learning.

 

Content and structure:  In this highly interactive workshop, participants will be provided with the opportunity to engage with some of the coding frames that have been developed and to apply them to feedback transcripts in order to analyse the depth of feedback provided. The group will explore the educational value of this feedback for learners and reflect on the ways in which this approach might be usefully applied to peer review as a staff development tool in their own settings.

 

Intended outcomes:

  • Participants will have experienced the application of the coding systems for analysing feedback.

  • Participants will be able to apply these tools in their own settings to enhance the quality of the formative feedback provided to learners.

  • Participants will be able to utilise the format as part of peer review and standard setting processes.

 

Who should attend:  Anyone involved in teaching, assessing learning, and providing formative and summative feedback.

 

Level:  Introductory and intermediate

 

References:

  1. Eva, KW, Armson, H, Holmboe, E, Lockyer, J, Loney, E, Mann, K, Sargeant, J. (2012). Factors influencing responsiveness to feedback: on the interplay between fear, confidence and reasoning processes. Advances in Health Sciences Education. 17:15-26.

  2. Higher Education Academy (2004). Student Enhanced Learning through Effective Feedback (SENLEF). York: HEA.

  3. Assessment Reform Group (2002). Assessment for Learning: 10 research-based principles to guide classroom practice.

  4. Crommelinck, M. & Anseel, F. (2013). Understanding and encouraging feedback seeking behavior: a literature review. Medical Education. 47:232-241.

  5. Brown, E. & Glover, C. (2006). Evaluating Written Feedback. In C. Bryan & K. Clegg (Eds), Innovative Assessment in Higher Education. London: Routledge.

Improving MCQs: Response and scoring systems

  • Dr Mike Tweed and Dr Tim Wilkinson, University of Otago, New Zealand

 

Introduction:  MCQs are used in many health-professional assessments. A common response system is choosing one from a list of n possibilities, usually 2 (true/false), 5 (best of five) or more. Common scoring systems award +1 for a correct response, with any incorrect response scoring 0 (number correct) or -1/(n-1) (formula scoring). Although easy to implement and understand, these systems are limited when extrapolating to clinical practice. Issues include: partial knowledge; misinformation; constrained responses; differential incorrect responses; clinical uncertainty; scope of practice; self-awareness; and unrealistic responses to practice.
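
To make the two scoring rules concrete, here is a minimal sketch (Python; the answer key and responses are invented) that scores the same answer sheet under number correct and formula scoring. With n = 5 options, each wrong answer costs 1/(n-1) = 0.25 under formula scoring; omitted items score zero under both rules.

```python
# Number-correct versus formula scoring for a 5-option MCQ paper.
# The key and the candidate's responses are invented for illustration.
N_OPTIONS = 5
key       = ["A", "C", "B", "D", "E", "A", "C", "B"]
responses = ["A", "C", "D", "D", None, "A", "B", "B"]  # None = omitted

correct = sum(r == k for r, k in zip(responses, key))
wrong = sum(r is not None and r != k for r, k in zip(responses, key))

number_correct = correct                            # +1 correct, 0 otherwise
formula_score = correct - wrong / (N_OPTIONS - 1)   # -1/(n-1) per wrong answer

print(f"Number correct : {number_correct}")   # 5
print(f"Formula scoring: {formula_score}")    # 5 - 2/4 = 4.5
```

Note that both rules treat every distractor alike: a near-miss costs the same as a dangerously wrong choice, which is one facet of the ‘differential incorrect responses’ limitation above.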

 

Content and structure:  Using content provided by, and therefore relevant to, participants, small groups will consider the benefits and limitations of commonly used response and scoring systems. The benefits and limitations of other response and scoring systems in use by the participants will also be considered.  Following this, the means participants propose for overcoming these limitations will be linked to the methods available.

 

Intended outcomes:  Participants will be able to return to their place of work and:

  • Consider how MCQ response and scoring may be developed to better meet the purpose of their assessments

  • Discuss benefits and limitations of commonly used and currently used response and scoring systems

  • Discuss how these limitations might be overcome

  • Increase awareness of response and scoring systems including: script concordance; subset selection; confidence/certainty response (one worked example follows this list); weighted response; respond until correct; ranking responses; and safe responding.
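
As a worked example of a confidence/certainty response system, the sketch below (Python, invented responses) implements certainty-based marking along the lines of Gardner-Medwin’s published C1/C2/C3 scheme, in which correct answers earn 1, 2 or 3 marks according to the stated certainty and wrong answers score 0, -2 or -6. It is offered as one illustration, not as the workshop’s recommended system.

```python
# Certainty-based marking (after Gardner-Medwin's C1/C2/C3 scheme).
# Candidates rate certainty 1-3 per item; confident errors cost dearly.
# All responses below are invented for illustration.
MARKS = {  # certainty level -> (mark if correct, mark if wrong)
    1: (1, 0),
    2: (2, -2),
    3: (3, -6),
}

def cbm_score(answers):
    """Score a paper given (is_correct, certainty) pairs."""
    total = 0
    for is_correct, certainty in answers:
        correct_mark, wrong_mark = MARKS[certainty]
        total += correct_mark if is_correct else wrong_mark
    return total

# Two candidates with the same number correct (3 of 4) but very
# different self-awareness about what they actually know:
well_calibrated = [(True, 3), (True, 3), (True, 2), (False, 1)]
overconfident   = [(True, 1), (True, 1), (True, 1), (False, 3)]
print(cbm_score(well_calibrated))  # 3 + 3 + 2 + 0 = 8
print(cbm_score(overconfident))    # 1 + 1 + 1 - 6 = -3
```

Scoring of this kind speaks directly to the self-awareness and safe-responding issues listed in the introduction: an acknowledged guess is penalised far less than a confidently wrong answer.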

 

Who should attend:  Anyone interested in exploring alternatives and developing MCQ formats.

 

Level:  Beginner

Use of short film vignettes in OSCEs to assess medical ethics and law 

  • Dr Carolyn Johnston, King’s College London School of Medicine, London, UK

  • Dr Tushar Vince, King’s College London School of Medicine, London, UK

 

Introduction:  Medical ethics and law (MEL) is part of the core curriculum in UK medical schools. MEL at King’s College London School of Medicine (KCLSM) is integrated across all years of teaching and assessed by short written examination and OSCE. The use of film vignettes in OSCE stations aims to provide an effective method to assess applied medical ethics – students are faced with realistic scenarios and required to demonstrate knowledge and an ability to identify and balance competing ethical issues. Four film vignettes have been made and used in OSCE stations for years 2 and 4:

 

  • The role of the family in decision-making for an elderly man who lacks capacity

  • Informing the relevant authority about a patient with epilepsy who is continuing to drive against medical advice

  • Dealing with an aggressive and racist patient who needs treatment in hospital

  • A seventeen-year-old refuses ongoing chemotherapy which has a predicted even chance of remission

 

Content and structure:

  • Demonstration of the process, and discussion of the cost, of making short film vignettes for the assessment of MEL

  • Sharing experience of drafting questions and standardised mark sheets

  • Demonstration of films already used in assessment at KCLSM, with the opportunity for those attending to ‘trial’ the OSCE stations

  • Sharing performance data showing that film vignettes work as a valid assessment tool

  • Discussion of ideas for other film vignettes for assessment

 

Intended outcomes: Enthusiasm to try novel methods of assessing MEL; increased knowledge of the use of technology in OSCEs; confidence in approaching the making of short films for the assessment (or teaching) of MEL.

 

Who should attend: Those who are interested in:

  • adopting a novel approach to assessment

  • assessing medical ethics and law

  • technology-based assessment

 

Level: All
