
Friday Afternoon Pre-Conference Workshops

Optimal implementation of progress testing consortia: recent developments

On behalf of the Committee of Interuniversity Progress Test Medicine, the Netherlands.

 

  • Dr EA Dubois, Leiden University Medical Center, Leiden, the Netherlands

  • Dr C Krommenhoek, Leiden University Medical Center, Leiden, the Netherlands

  • Dr CF Collares, Maastricht University, Maastricht, the Netherlands

  • Dr A Freeman, University of Exeter Medical School, Exeter, UK

  • Dr AA Meiboom, VU University Medical Center Amsterdam, Amsterdam, the Netherlands

  • Dr AJA Bremers, University Medical Center St Radboud, Nijmegen, the Netherlands

  • Dr B Schutte, Maastricht University, Maastricht, the Netherlands

  • Dr AEJ Dubois, University Medical Center Groningen, Groningen, the Netherlands

  • Dr RA Tio, University Medical Center Groningen, Groningen, the Netherlands

 

Introduction:  Initiatives for national and international large-scale progress testing consortia are increasing. In this workshop we will address the most recent developments in the key aspects of designing and executing collaborative progress testing projects.

 

Content and structure:  The workshop starts with a short introduction on the structure of the progress testing consortium in the Netherlands and the first steps towards international collaboration. The participants will be split up into groups to elaborate on specific topics. Each participant can join two of the following groups:

  1. Score comparisons and standard setting between different populations: how to ensure test fairness and validity?

  2. Translation and review process: how to deal with test adaptation, an international blueprint and a review panel?

  3. Logistics and test safety: how to ensure test safety in large-scale consortia?

  4. Collaboration: how to tackle legal issues, contracts and organisational structure?

  5. Giving feedback: how to use progress testing to enhance learning within a collaborative project?

  6. Production of questions: how to deal with relevance and aims/objectives?

 

After the small-group discussions, participants will reconvene for plenary feedback on all topics, during which key take-home messages will be addressed.

 

Intended outcomes:  To help participants benefit from existing experience in progress testing, raising awareness of the issues that need to be accounted for before starting a collaboration in progress testing.  To enable participants to tackle possible pitfalls in collaborative progress testing efforts.

 

Who should attend:  Participants with ideas for setting up an international progress testing initiative will benefit from this workshop, whether or not they are already involved in a national collaborative project for progress testing. The workshop is also intended for participants from countries where there are doubts about international collaboration on progress testing because of differences in ethnic groups.

 

Level:  Introductory, intermediate

Are our assessments really valid? Using validity paradigms to design and evaluate programmes of assessment

  • Prof Trudie Roberts and Dr Richard Fuller, Leeds Institute of Medical Education, School of Medicine, University of Leeds, UK

  • Dr Kathy Boursicot, St George’s, University of London, UK

 

Introduction: The shift in validity towards a more argument-based, inferential approach provides a new lens through which to review how we design and evaluate programmes of assessment.  Validity is increasingly recognised as a continuum rather than an absolute, with consensus on the importance of the construct validity of assessment and the value-laden nature of validity evidence.  This workshop overviews the tensions between ‘traditional’ (psychometric) and more ‘contemporary’ interpretivist views of validity, and how these can assist in the selection and design of test formats and identify further scholarship opportunities.

 

Content and Structure: The workshop will explore how the concept of validity has shaped the development of assessments over the last three decades. Participants will work through a typical programme of assessments covering knowledge, skills and attitudes, identifying the strengths and weaknesses of the individual components. Participants will be introduced to the joint publication from the American Educational Research Association (AERA), the American Psychological Association (APA) and the National Council on Measurement in Education (NCME) – ‘The Standards for Educational and Psychological Testing’ – as a way of analysing the utility of common types of assessment.

 

Intended outcomes:  At the end of the workshop, participants will

  • Be better informed about the changing face of validity evidence and assumptions

  • Have developed and improved their skills in the analysis of assessment formats against a validity framework

  • Be able to identify sources of evidence used to generate validity inferences

 

Participants will also be encouraged to generate 'take home lessons' to implement in their own institutions.

 

Level:  Intermediate

Preparing simulated/standardized patients for high stakes assessments

  • Cathy Smith, PhD, National SP Training Consultant, Pharmacy Examining Board of Canada; Lecturer, Faculty of Medicine, University of Toronto

  • Carol O’Byrne, BSP, RPEBC, RPh, OSCE Manager, Associate Registrar, Pharmacy Examining Board of Canada

  • Debra Nestel, PhD, Professor of Simulation Education in Healthcare, School of Rural Health, Faculty of Medicine, Nursing and Health Sciences, Monash University, Victoria, Australia        

 

Introduction:  Simulated/standardized patients (SPs) are, in large part, the exam question in high stakes assessments, in particular the Objective Structured Clinical Examination (OSCE).  SPs need to present the question, or patient portrayal, in a standardized manner to provide the opportunity for reliable assessment inferences, ensuring the defensibility of the OSCE.  Standardization refers to the consistency and accuracy of SP performance over time and between learners (Adamo, 2003; Wallace, 2002).  This is a complex, nuanced and demanding task, compounded by the diverse characteristics of SPs, SP trainers and high stakes assessment contexts.  There is a lack of detailed information regarding the training and support that SPs and SP trainers receive before, during and after they carry out this work (Cleland, 2009; Watson, 2006).  However, many sources of variance can be remedied with improved training and ongoing monitoring for quality assurance (Beaulieu, 2003; Cleland, 2009; Tamblyn, 1991; Watson, 2006).

 

Content:  In this workshop, we share our experiences of preparing SPs for high stakes assessments in Australia, Canada and the United Kingdom in medicine and pharmacy.  We provide a rigorous and systematic experiential approach to train and support SPs, based on the concept of deliberate practice (Ericsson, 1993).  Participants will work with tools that support standardization of SP performance including an explicit training protocol, case training DVDs, and an exam readiness evaluation form.

 

Structure:  Interactive exercises including large group discussions, training simulations using a ‘fish bowl’ approach, small group conversation circles and opportunities for individual reflection.

 

Intended outcomes:  By the end of this session, participants will be able to: discuss key considerations for standardizing SP performance for high stakes assessments; identify specific training strategies and tools to standardize SP performance; apply these strategies and tools through interactive role-play; reflect on applications to their own practice.

 

Who should attend:  Clinical educators and others responsible for training SPs for high stakes assessments

 

Level:  Intermediate

 

References

  1. Adamo G (2003) Simulated and standardized patients in OSCEs: achievements and challenges 1992–2003. Medical Teacher, 25(3):262-270.

  2. Beaulieu MD, Rivard M, et al. (2003) Using standardized patients to measure professional performance of physicians. International Journal for Quality in Health Care, 15(3):251-259.

  3. Cleland JA, Keiko A, et al. (2009) The use of simulated patients in medical education: AMEE Guide 42. Medical Teacher, 31(6):477-486.

  4. Ericsson KA, Krampe R, Tesch-Roemer C (1993) The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100:363-406.

  5. Tamblyn RM, Klass DJ, et al. (1991) The accuracy of standardized patient presentation. Medical Education, 25(2):100-109.

  6. Watson MC, Norris P, et al. (2006) A systematic review of the use of simulated patients and pharmacy practice research. International Journal of Pharmacy Practice, 14:83-93.

The ‘What’ as well as the ‘How’: Towards more effective feedback in formative assessment of clinical skills for patient encounters 

  • Dr. J Lefroy, Dr. SP Gay, Dr. MH Bartlett, Professor RK McKinley, Keele University School of Medicine, UK

 

Introduction:  Specific and timely feedback from a trusted assessor is one of the most effective interventions for improving skills (1). To be effective, feedback must be tailored to the learner's needs and be sufficiently specific to scaffold learning. This is challenging for busy clinical supervisors, but support materials may make the task simpler and more effective. We will explore the expectations of formative workplace-based assessment and the challenges assessors face in meeting them. Participants will discuss and share best practice in giving bespoke feedback.

 

Content and structure:

  1. Introductory Plenary – to orientate delegates to formative assessment of clinical skills used in patient encounters.

  2. Constructing feedback – delegates will observe videos of student-patient encounters and individually decide on the feedback they would give. This will stimulate small group discussions of the content of feedback.

  3. A suite of generic clinical encounter skills assessment and feedback tools will be introduced and used to construct feedback for the same observed encounters.

  4. The closing discussion will bring together the reflections of the attendees and focus on the practicalities of formative assessment in real-time clinical practice.

 

Intended outcomes:  Participants will be able to:

  • Identify the aims of formative assessment

  • Share best practice on developing the content of feedback

  • Construct effective tailored feedback a) freestyle and b) using a purpose-built system

  • Reflect on personal experience of what has and has not worked in giving feedback

 

Intended audience:  Clinicians wanting to develop their skills of conducting formative assessment of skills for patient encounters.

 

Level: Introductory to intermediate

Friday All Day Pre-Conference Workshops

Continued from Morning...

FLAME – Fundamentals in Leadership and Management for Educators: Assessing Leadership and Professionalism 

  • Judy McKimm and Paul Jones, College of Medicine, Swansea University, Wales, UK

 

Introduction: It is widely recognised that non-technical skills, including leadership, are vital for effective and safe professional practice. Educators internationally are focussing on establishing robust ways of assessing professional practice. Regulatory and professional bodies increasingly require learners to demonstrate competence in leadership, yet many educators are struggling to teach and assess leadership competence. Many tools exist to assess leadership, but with crowded curricula and large numbers of students/trainees, how can educators implement effective leadership development programmes and assess leadership skills and behaviours? This workshop explores how leadership theory, practice and assessment can help inform our understanding of both assessing professionalism and embedding leadership development. 

 

Intended outcomes: By the end of the workshop participants will (1) demonstrate understanding of leadership theory in relation to assessing leadership; (2) understand how leadership theory and practice can be used in assessing professionalism; (3) be familiar with methods for teaching and assessing leadership; (4) have shared practice on challenges and solutions; and (5) have identified strategies for introducing/developing leadership programmes.

 

Content and Structure: Interactive small and large group activities and short presentations designed to facilitate discussion and participation and meet individual and group needs. 

 

Who should attend: Undergraduate and postgraduate medical and health professions’ educators who run leadership and management courses or plan to do so or have an interest in assessing professional behaviours and practice. 

 

Level of workshop: Intermediate/advanced
