
Saturday Morning Pre-Conference Workshops

Setting Standards for Performance-Based Assessments

  • John (Jack) R. Boulet Ph.D., Foundation for Advancement of International Medical Education and Research

  • André F. De Champlain, Ph.D., Medical Council of Canada

 

Introduction:  For most traditional assessments, including multiple-choice examinations and other selected-response formats, standard setting techniques are well developed and widely used.  With the recent adoption of high-stakes performance-based assessments in medicine and other healthcare professions, including those used for credentialing, there has been a need to modify existing standard setting methodologies and to develop new techniques that can reliably delimit the point, or points, that separate adequate from inadequate performance.

 

Content:  The workshop will consist of the following parts:

 

  1. Introduction to standard setting techniques. The participants will be provided with a brief synopsis of the main issues, including the need for standard setting, the methods and processes that are currently used, and the techniques that can be employed to evaluate the adequacy of the standards.

  2. Standard setting activities.  As part of this workshop, the participants will act as a large standard-setting panel.

  3. Deriving the standard/discussion.  The summary judgments from the panelists (audience) will be analyzed to yield performance standards, as illustrated in the sketch below.
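
To make the derivation step concrete, here is a minimal Python sketch of how a panel's summary judgments can be turned into a performance standard under a modified Angoff approach, one common standard-setting method. The ratings below are invented for illustration, and the workshop is not tied to this particular method or code.

    # Modified Angoff sketch: each panelist estimates, for every item or
    # station, the probability that a borderline examinee would succeed.
    import statistics

    # ratings[p][i] = panelist p's estimated probability that a borderline
    # candidate performs item/station i adequately (invented numbers)
    ratings = [
        [0.60, 0.45, 0.70, 0.55, 0.65],  # panelist 1
        [0.55, 0.50, 0.75, 0.50, 0.60],  # panelist 2
        [0.65, 0.40, 0.70, 0.60, 0.70],  # panelist 3
    ]

    # Each panelist's implied standard is the mean of his or her item ratings.
    panelist_cuts = [sum(r) / len(r) for r in ratings]

    # The panel's cut score is conventionally the mean across panelists; the
    # spread of the panelists' cuts is one simple check on the adequacy of
    # the resulting standard.
    cut_score = statistics.mean(panelist_cuts)
    spread = statistics.stdev(panelist_cuts)

    print(f"Cut score (proportion of maximum score): {cut_score:.3f}")
    print(f"SD across panelists: {spread:.3f}")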

 

Intended outcomes:  After attending this workshop, the learner will be able to:

 

  1. Choose an appropriate standard setting methodology for his/her particular needs,

  2. Design a basic standard setting study,

  3. Understand and evaluate the process of setting standards for performance-based assessments.

 

Who should attend:  Individuals who are responsible for developing, administering, or scoring performance-based assessments.

 

Level:  Intermediate

Defining, assessing and predicting professionalism of medical students and doctors

  • Professor David Powis, University of Newcastle, New South Wales, Australia

  • Associate Professor Don Munro, University of Newcastle, New South Wales, Australia

  • Professor Brian Kelly, University of Newcastle, New South Wales, Australia

 

Content:  The workshop will comprise three distinct components, each initiated by a short presentation by one of the facilitators and followed by focused discussion involving all participants. The workshop will be informed by the findings of ongoing research conducted by the facilitators and others.  A short reading list (annotated, or with abstracts attached) will be prepared for intended participants in advance of the workshop.  The aim is to find ways to improve the professionalism of doctors by

 

  • Identifying the essential competencies (knowledge, skills and personal qualities) that comprise professionalism;

  • Determining how these could be taught, encouraged and assessed in medical schools, and

  • Specifying the core personal qualities underpinning professionalism, the presence or absence of which should be identified at the point of selection for medical school.

 

Desired outcome: To achieve consensus on the essential components of professionalism in the context of medical education and practice.  The workshop will attempt to devise practical strategies for how these components may be taught, fostered and assessed within the medical curriculum, and for how they may be measured by admissions personnel at the point of student selection and used to inform admissions decisions.

 

Who should attend:  Medical educators concerned with delivering the curriculum and with its associated summative assessment procedures, and those involved with selecting the students for health professional programs.

 

Level:  Intermediate

From OSCE to OSTE: Using Objective Structured Teaching Encounters for Educators’ Deliberate Practice

  • Alice Fornari, EdD, North Shore LIJ Health System, Great Neck, NY, USA

  • Barrett Fromme, MD, North Shore LIJ Health System, Great Neck, NY, USA

  • Krista Johnson, MD, North Shore LIJ Health System, Great Neck, NY, USA

  • Don Scott, MD MHS, North Shore LIJ Health System, Great Neck, NY, USA

  • Deborah Simpson, PhD, North Shore LIJ Health System, Great Neck, NY, USA

 

Introduction: The best learning occurs in the context of good teaching, yet most teachers receive neither teaching skills training nor observation-based feedback, despite the ACGME and LCME mandates for teacher development. A key component in improving any complex skill like teaching is deliberate practice: sequenced task repetition with timely and behavior-specific feedback.  OSTEs are performance-based teaching exercises, which, like OSCEs, use scripted “actors” to portray common/difficult educational scenarios. OSTEs provide teachers an opportunity to “deliberately practice” skills in a low-threat simulation environment. In addition, formative feedback focused on teaching skills can provide education leaders with outcome measures for program evaluation. Faculty developers must use strategies that go beyond knowledge interventions to provide deliberate practice opportunities with feedback – like OSTEs – to advance the skills we value as educators.

   

Content and Structure:

  1. Presenters will describe OSTE methodology and relevant literature, and will analyze short OSTE videos from teacher development programs.

  2. Participants will (a) develop OSTEs in facilitated small groups using OSTE worksheets and (b) enact and debrief their OSTEs using a volunteer from another small group as teacher.

  3. The session concludes with a discussion of OSTE assessment checklists to support learning outcomes/program evaluation, and of opportunities for resource sharing (OSTE cases), possible collaboration, and scholarship.

 

Intended outcomes:  This session will advance OSTEs as a strategy to meet today’s pressing needs for efficient, effective, and observation-based teacher development.  Participants will:

  1. receive an OSTE case set;

  2. be prepared to use OSTEs; and

  3. appreciate OSTEs’ benefits and challenges, including the value added for trainees who serve as “standardized learners”.

 

Who should attend:  Faculty/administrators involved in faculty development efforts to improve teaching and learning; Clinical teachers/supervisors.

 

Level: Intermediate 

Good questions, good answers – construct alignment in judgement-based assessment

  • Dr Jim Crossley, University of Sheffield School of Medicine, Sheffield, UK

  • Professor Brian Jolly, University of Newcastle, NSW, Australia

  • Professor Robert McKinley, Keele University School of Medicine, Keele, UK

  • Professor Shiphra Ginsburg, Mt Sinai Hospital, Toronto, Canada

 

Introduction:  Many of the most important components of clinical performance cannot be reduced to their component parts for ‘objective’ assessment; they depend instead on judgements made by appropriately experienced assessors. New evidence makes it clear that such assessors produce much more reliable judgements if the response format or scale that they are working with is well aligned to the way they inherently understand progression or merit.  For example, surgeons are both discriminating and consistent in their independent views of how ready a trainee is to operate independently; the inherent construct is readiness for independence or ‘entrustability’.

This observation has a profound impact on how we should design the response scales of all judgement-based assessment instruments.

 

Content and structure:

  • Keynote: evidence for the value of construct aligned scales (JC)

  • Q&A: exploring the implications as a large group (all facilitators)

  • Small group work: designing aligned scales for different contexts (undergraduate/postgraduate, medical/nursing/allied professions, craft specialities/non-craft specialities) (facilitated groups)

  • Group presentations and mutual critique

  • Plenary: a summary of findings and planning for implementation and investigation (RM)
     

Intended outcomes: 

Attendees will:

  • be able to describe the concept of construct alignment as applied to assessment scales

  • have the opportunity to design such a scale in their own context and to receive feedback on that scale

  • be encouraged to formulate an implementation plan including a plan for evaluation

 

Who should attend:  Assessment leads or those interested in developing assessments (undergraduate or postgraduate, medical or non-medical)

 

Level:  Intermediate

Using Classical Test Theory with Excel® to quality assure assessments

  • John Patterson, PhD, Barts and The London, Queen Mary University of London, UK

 

Introduction:  Although more complex theories, such as item response theory, Rasch modelling and generalizability theory, are now available, classical test theory (CTT) provides the simplest approach for analysing the performance of assessments.  CTT gives measures of item facility, item discrimination and inter-item correlation, as well as estimates of overall assessment reliability and the contribution of each test item to that reliability. Such information is valuable in making examination decisions, in reviewing test and item performance and in question bank management.
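
For readers who want to see what these quantities look like in code, the minimal Python sketch below computes item facility, item discrimination and Cronbach’s α for a small, invented set of 0/1 item responses. The workshop itself distributes Excel workbooks that perform these calculations; the sketch simply illustrates the same arithmetic (statistics.correlation requires Python 3.10 or later).

    # Classical test theory item analysis on an invented 0/1 response matrix.
    import statistics

    # responses[e][i] = 1 if examinee e answered item i correctly, else 0
    responses = [
        [1, 1, 1, 1, 1],
        [1, 1, 1, 0, 1],
        [1, 1, 1, 0, 0],
        [1, 1, 0, 1, 0],
        [1, 0, 1, 0, 0],
        [0, 1, 0, 0, 0],
    ]
    n_items = len(responses[0])
    totals = [sum(row) for row in responses]

    for i in range(n_items):
        item = [row[i] for row in responses]
        # Facility: proportion of examinees answering the item correctly.
        facility = statistics.mean(item)
        # Discrimination: correlation between the item score and the
        # rest-of-test score (item excluded to avoid self-correlation).
        rest = [t - x for t, x in zip(totals, item)]
        discrimination = statistics.correlation(item, rest)
        print(f"Item {i + 1}: facility={facility:.2f}, "
              f"discrimination={discrimination:.2f}")

    # Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / total variance).
    k = n_items
    item_vars = [statistics.pvariance([row[i] for row in responses])
                 for i in range(k)]
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / statistics.pvariance(totals))
    print(f"Cronbach's alpha: {alpha:.2f}")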

 

Content and structure:  Delegates will use case studies from single best answer (SBA) and OSCE assessments to gain experience of the practical application of CTT and will role play how an examination board may consider statistical reports.  The workshop will not involve calculations, although those attending will be able to obtain Excel workbooks that perform all of the calculations covered in the workshop.  

 

Intended outcomes:  Those attending this workshop will be able to:

  • Apply CTT concepts to the calculation and interpretation of item statistics.

  • Define ‘reliability’ in CTT terms; explain the calculation of Cronbach’s α; discuss the factors in test design (number of items and inter-item correlation) that influence reliability.

  • Interpret values of test metrics generated by single best answer (SBA) and OSCE assessments and consider how deletion of items or stations may improve the balance between content validity and overall reliability.

 

Who should attend:  Anyone with assessment interests or responsibilities wishing to know how to evaluate test items using psychometrics. No prior knowledge of psychometrics is required and mathematical concepts will be kept to a minimum.

 

Level:  Introductory

Practical and trustworthy competency-based assessment in residency: Lessons learned from four years of implementation of the Competency-Based Achievement System (CBAS)

  • Dr Shelley Ross, University of Alberta, Edmonton, Canada

  • Dr Mike Donoff, University of Alberta, Edmonton, Canada

  • Dr Shirley Schipper, University of Alberta, Edmonton, Canada

 

Introduction:  As medical education moves globally towards competency-based assessment, programs need good strategies to track progress towards competency. Our approach was to develop the Competency-Based Achievement System (CBAS), a competency-based assessment framework that uses formative feedback to inform summative evaluation. For learners, we wanted a system that offered a way to guide their learning using formative feedback. For advisors and preceptors, we wanted a system that would be learner-driven, so that learners would a) recognize when they were being given feedback, b) be able to act upon that feedback, and c) progress towards competence by soliciting feedback in areas where they needed it. Our Family Medicine program has been using CBAS since July 2008. Since implementation, an average of 5000 FieldNotes (documentation of formative feedback from workplace-based observations) has been entered annually into eCBAS, the electronic workbook used to track progress.

 

Content and structure:  This session will offer participants some experience in using CBAS tools through demonstration and group discussion; particular emphasis will be placed on applying the tools of CBAS to unique cases within participants’ programs. This interactive workshop is for anyone with questions about how to implement workable competency-based assessment, and for those who are already carrying out competency-based assessment and would like to share their experiences – positive and negative – with others. Input and sharing of experiences from all participants is strongly encouraged.

 

Intended outcomes:  By the end of this session, participants will be able to: 1) Describe how workable competency-based assessment can be done; 2) Identify the ways in which CBAS tools allow for more effective use and recording of feedback in Residency training; and 3) Evaluate how the CBAS system may work in their own programs. 

 

Who should attend:  Members of programs at any stage of competency-based assessment implementation.

 

Level: All levels

Assessment for and of learning: a framework for implementing student patient portfolios

  • Associate Professor Susan van Schalkwyk, Centre for Health Professions Education, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa

  • Associate Professor Julia Blitz, Division of Family Medicine, Faculty of Medicine and Health Sciences, Stellenbosch University, South Africa

 

Introduction:  There is a growing body of evidence that attests to the value inherent in adopting portfolios as an integral part of students’ learning in health professions education. This includes the use of student patient portfolios as part of formative and summative assessment practice in the clinical domain. However, introducing such portfolios into a traditional and established curriculum at undergraduate level can present significant challenges for programme coordinators as well as for faculty themselves. Drawing on our experience in implementing student patient portfolios for final-year medical students at a rural clinical school, this interactive workshop will give participants the opportunity to engage with a model for implementation that they can tailor to fit their own contexts.

 

Content and structure:  The workshop will begin by establishing definitional clarity, eliciting participant inputs and aligning these with the prevailing literature on portfolios in support of student learning, specifically in the clinical domain. The benefits and challenges associated with this approach will then be explored in groups. This will be followed by a focus on the role and function of the patient portfolio, where participants will have the opportunity to debate its potential to promote assessment for and of learning. Using structured templates, participants will then engage with the process of curriculum design to identify suitable ‘spaces’ where patient portfolios might make a meaningful contribution to student learning. Finally, the implementation framework will be introduced. Participants will experience some of the practical components of the framework through role play. The workshop will end with a synthesis of the key concepts that have emerged during the session.

 

Intended outcomes:  By the end of this workshop participants will be able to:

  • Define what a student patient portfolio is

  • Describe the potential role and function of these portfolios in student learning and assessment

  • Map out how patient portfolios can be included in students’ current curricula – effectively and seamlessly

  • Apply a framework to guide the implementation of patient portfolios in their context.

 

Who should attend:  Faculty involved in curriculum planning; clinician educators, faculty development practitioners

 

Level:  Introductory - Intermediate
