Sunday Symposia

  • Dr Kevin Eva, University of British Columbia, Canada

  • Dr Larry Gruppen, University of Michigan, USA

  • Dr Maxine Papadakis, UCSF School of Medicine, USA

  • Dr Lewis First, Chair, NBME (Moderator)

 

Symposium organised by the National Board of Medical Examiners

 

2015 marks the 100th anniversary of the NBME. The mission of the NBME is improving healthcare around the world through assessment, and that mission directly supports the theme of the 2014 Ottawa Conference, “Transforming Healthcare through Excellence in Assessment and Evaluation”.

 

One mechanism the NBME uses to transform healthcare through excellence in assessment is the awarding of grant funds to support research in assessment through the Stemmler Fund. The goal of the Stemmler Fund is to support research on, or development of, innovative assessment approaches that will enhance the evaluation of those preparing to, or continuing to, practice medicine. Expected outcomes include advances in the theory, knowledge, or practice of assessment at any point along the continuum of medical education, from undergraduate and graduate education and training through practice. Recipients of Stemmler Fund grants have made significant contributions to our understanding of assessment and evaluation and the tools used for both, and have explored qualitative aspects of assessment and evaluation, including professionalism and admissions.

The symposium will highlight the work of three Stemmler recipients, consider the impact their work has had on the assessment and evaluation of medical professionals, and offer suggestions for continued research in assessment and evaluation.

NBME Stemmler Grants: Demonstrating Excellence in Assessment and Evaluation

10:30 - 12:00  Sunday, April 27

  • Dr Amar Rughani

  • Dr Jane Mamelok

  • Dr Jill Edwards

  • Dr Simon Street

 

Changes are happening in the NHS and in medical education in the UK, with inevitable repercussions in general practice. There are new clinical and administrative responsibilities for GPs. Training for general practice must now address an expanded curriculum, and that expansion is reflected in new demands on the programme of assessment. We have 5 years’ experience of running a national internet-based electronic portfolio, with more than 3000 trainees completing their training each year. The four presenters have all been at the centre of the implementation and development of workplace-based assessment since its inception. Their presentations will give brief overviews of:
 

  1. The quality management of the programme of workplace-based learning and assessment in the UK since 2007.

  2. The developing GP curriculum and the use of “blueprinting” in developing the assessments.

  3. The changes in the assessment of psychomotor skills in general practice.

  4. The regulatory constraints on developments in the assessment programme.

Workplace-Based Assessment in UK general practice –

How do we build on our 5-year experience?

10:30 - 12:00  Sunday, April 27

  • Katharine Boursicot, SGUL, UK

  • Richard Fuller, Leeds, UK

  • Marjan Govaerts, Maastricht, Netherlands

  • Saskia Wools, CITO, Netherlands

  • Trudie Roberts, Leeds, UK (Chair) 

 

The objectives of the session are:

 

  • To explore the validity issues in a range of assessments used in medical education.

  • To examine some examples of medical education assessments and evaluate to what extent their claims for validity match the criteria described in the Standards for Educational and Psychological Testing.

  • To raise awareness of modern validity concepts among medical educators.

  • To encourage debate and discussion between proponents of the ‘psychometric’ and ‘interpretation and argument’ perspectives of validity.

 

Overview: The session brings together different researchers to provide an international perspective of how far the modern views of validity have impacted on medical education testing.

 

The scholarly significance of this symposium is that it brings medical education testing under scrutiny in relation to more modern argument-based approaches to validity. While the traditional psychometric discourse has been, and is still, dominant in medical education assessment, there are growing concerns that there are limitations to this view, especially in the context of newer assessment tools, such as workplace-based assessments. It is our intention to highlight the wider outlook provided by the unitary concept of validity, with its requirement to consider a range of different factors/evidence when making interpretations of test results, especially in high-stakes situations.

Validity Issues in Medical Education Assessment

14:00 - 15:30  Sunday, April 27

  • Panellists: Ayelet Kuper, Cynthia Whitehead, and Rachel Ellaway

  • Discussant: Brian Hodges

 

The widespread adoption of role-based competency frameworks, such as CanMEDS, has highlighted the importance of assessing physician roles (often called “non-Medical Expert” or “Intrinsic” roles) that go beyond the performance of medical knowledge and technical skills. This in turn requires a reappraisal of the means by which we facilitate learners’ adoption of those roles, not least because they require heterogeneous and broad-ranging activities and outcomes that can extend far beyond the traditional purview of medical schools and the healthcare system (the legitimacy of which is contested by both practicing physicians and medical educators).

 

Although there have been multiple attempts to define assessment methods for the non-Medical Expert Roles, educators continue to struggle to apply psychometric-based assessment methods to the social constructs represented by these roles. Current models of assessment and evaluation do not take into account the interactions between roles and a more holistic physician identity, nor do they address the somewhat uneven ontological, epistemological and theoretical grounding for the non-Medical Expert roles. A growing emphasis on evaluation models based on readily-measurable proximal health outcomes also threatens to undermine a systematic evaluation of the changes we make to curricula (and the learning, teaching and assessment processes within them) to accommodate the many demands of role-based competency frameworks.

 

This symposium will provide a range of contrasting theoretically-grounded non-psychometric perspectives that challenge concepts such as authenticity and identity that are bound up with the non-Medical Expert roles. We will explore novel approaches to the assessment of these roles and the evaluation of the curricula that support them. Our aim is to draw the audience into a robust and constructive conversation about the assessment of the non-Medical Expert roles in order to explore theoretical, methodological and practical directions for medical educators and researchers to employ in their own practices.

The Non-Medical Expert Roles: Methodological Challenges to Assessment and Evaluation

14:00 - 15:30  Sunday, April 27

16:00 - 17:30  Sunday, April 27

What is best practice in the selection of medical students?

  • Professor Jennifer Cleland, University of Aberdeen, UK

  • Dr Sandra Nicholson, Barts and the London, UK

  • Professor Fiona Patterson, Cambridge University, UK

  • Dr Jonathan Dowell, University of Dundee, UK

 

Selection can be seen as the first assessment in the medical education and training pathway. Admission to medical school has traditionally used educational attainment as a primary hurdle, increasingly in conjunction with aptitude tests of some sort. Non-academic abilities are then usually considered via interview and/or other sources such as personal statements or even personality tests. However, these approaches have been heavily criticised for poor reliability as well as dubious validity, and it is also clear they are not infallible, with regulators concerned about some of those entering medicine. And rarely is the major influence of self-selection considered.

 

This symposium will explore the question "What is best practice in the selection of medical students?” from a number of angles including “evidential weight” and supporting so-called Widening Participation.  It will draw heavily on the presenters’ review for the UK General Medical Council (GMC, weblink to report), focusing on areas of innovation.  The format will include four short presentations, each with Q&A, to stimulate discussion regarding how available tools can contribute to the selection of the best medical students and what additional research is now required. 

 

Widening Participation refers to the policy that people from certain social, ethnic and cultural groups should be better represented in the medical workforce. How systems can achieve this will be explored. Emerging selection tools and their evidence base will be reviewed, including Situational Judgement Tests and Multiple Mini Interviews. Finally, the view of the regulator will be considered. The symposium will finish with recommendations for policy and practice, and where additional research should be focused to plug key gaps in knowledge and understanding.

16:00 - 17:30  Sunday, April 27

Exploring Rater Cognition in Workplace-Based Assessment from Three Different Research Perspectives

  • Eric Holmboe, American Board of Internal Medicine, Pennsylvania, USA

  • Andrea Gingerich, Northern Medical Program (UBC Medicine), BC, Canada

  • Jennifer Kogan, Perelman School of Medicine, University of Pennsylvania, USA

  • Peter Yeates, University of Manchester, United Kingdom   

  • Marjan Govaerts, Maastricht University, Netherlands

 

Workplace-based assessments are an integral part of our assessment systems. In efforts to improve the defensibility of assessment decisions and our accountability to patient safety, researchers have begun investigating raters’ cognitive processes. Although this is a relatively new domain of inquiry, there appear to be three distinct (though not mutually exclusive) perspectives on rater cognition. One considers raters’ cognitive processing to be conscious and controllable, and seeks tangible training solutions. A second acknowledges the automatic and unavoidable limitations of human cognition and seeks design solutions to minimize those weaknesses. The third casts the rater as a valuable source of information whose expertise is squandered in current practices but could be harnessed in radically different assessment approaches. This symposium features a group of international rater cognition researchers representing the current understanding of the field. We see this symposium as an important tool for stimulating discussion about prevailing assumptions and conceptual gaps, as well as potential implications for improving assessment.

Monday Symposia

10:30 - 12:00  Monday, April 28

Faculty Development and Learner Assessment:  The Missing Link

  • Yvonne Steinert and Colleagues, Centre for Medical Education, Faculty of Medicine, McGill University, Montreal, Canada

 

The assessment of learners at all levels of the educational continuum is the focus of much debate and research, as are specific aspects of assessment including standard setting, psychometric properties of assessment methods, and the value of an assessment program. However, the role of clinical teachers in assessing students and residents, and the need to prepare faculty members to observe critically, question effectively, and judge appropriately, is often neglected. The goal of this symposium is to highlight the role that faculty development can play in promoting reliable, valid, and fair learner assessments. Following a large group exercise in which participants will be asked to “assess” a learner and respond to a teacher’s and a resident’s perspective on the challenges of learner assessment, we will review the rationale for faculty development in this area. We will also highlight common approaches to preparing faculty for their role as assessors as well as the proposed content of a faculty development curriculum that includes the goals and principles of learner assessment, an overview of diverse assessment methods (including their strengths and limitations), standard setting and ‘inter-rater’ reliability, and the role of contextual factors in assessment. It has been said that the lack of agreement among faculty members – and the difficulty of assessing learners in a meaningful way – is a threat to the reliability and validity of decisions made about learner competence. The goal of this symposium is to address how faculty development can help to overcome this challenge.

  • Trudie Roberts

  • David Wilkinson

  • Ronald Harden

 

 

What is Excellence in Assessment?

10:30 - 12:00  Monday, April 28

  • André F. De Champlain, PhD, Medical Council of Canada

  • Kevin Eva, PhD, University of British Columbia

  • Brownell Anderson, MEd, National Board of Medical Examiners

  • Professor Dame Lesley Southgate, DBE, FRCP, FRCGP, St George's Hospital Medical School

  • Ian Bowmer, MD, FRCPC, Medical Council of Canada (Discussant)

 

Where We’ve Been, Where We Are, and Where We Need to Go

A widened perspective on assessment has been advocated to better reflect the systemic nature of medical education. The aim of this symposium is to outline how measurement scientists and medical educators can better collaborate to meet this need. In the introduction to the session, Dr. De Champlain will focus on past successful collaborative models between measurement science and medical education that might serve as a platform for moving forward.

 

Overcoming the Unintended Consequences of Competency-based Assessment – A Medical School Perspective

The drive towards competency-based frameworks is beneficial in that it offers explicit and broadly focused models to guide assessment practices. However, implicit in these common frameworks are a variety of assumptions that may have unintended and undesirable consequences. Dr. Eva will present some of these challenges and discuss the impact they have on the culture and practice of medical education.

 

Integrating Assessment Data and Educational Experiences Across the Continuum

At nearly all levels of training, medical educators are exploring methodologies to assess learner knowledge and competence to provide relevant, integrated assessments that can support lifelong learning and individual growth.  As such, this represents a shift from assessment of learning to assessment for learning.  Ms. Anderson will outline alternatives that better integrate assessment methods and education to reinforce lifelong learning.

 

Workplace Assessment: Has Measurement Killed Judgement?

Workplace assessment (WA) has become an important part of medical education programmes and typically entails observation, recording and judgment of a wide range of an individual’s activities in different settings. But these important contributions to assessment can be undermined if they do not comply with the psychometric paradigm. Professor Southgate will outline and discuss attempts to combine these approaches in her presentation.  

Bridging the Gap: How Medical Education and Measurement Science can Better Collaborate to Meet Growing and Broadening Assessment Needs

14:00 - 15:30  Monday, April 28

  • Professor Val Wass (Chair)

  • Dr Katie Petty-Saphon

  • Veronica Davids

  • Professor Simon Maxwell

  • Siobhan Fitzpatrick

  • Professor Fiona Patterson

 

All 33 UK medical schools have formed an Assessment Alliance, working together to share good practice and resources and to address issues of clinical competency standards. Individual medical school examinations are maintained, monitored by their regulator, the General Medical Council. This symposium opens for discussion the challenges of shared test formats and the compatibility of standards across this national initiative.

 

Introduction and overview

Scene setting for those not familiar with UK processes; Role of the MSC/MSC-AA and its relation to the regulator; Academic freedom of medical schools vs. external accountability to stakeholders; and Pros and cons of a national examination/need for comparability between schools.

 

  • The development and utility of a shared question bank: History and buy-in by schools; Development of good practice in assessment; IT issues; Practical issues.

  • Comparison of passing standards using Rasch modeling:  Conceptual issues; Application and initial results 

  • The development and utility of a national prescribing skills assessment: The problem of prescribing errors; Prescribing in relation to pharmacology and therapeutics; Experience with pilot online national assessments.

  • The place of Situational Judgement Tests for entry into residency (UK Foundation Programme): Lessons from industry and selection into general practice; Identification of the key roles of F1 doctors; Applicability to F1 selection; Experience with the SJT in selection.

 

Medical Schools Council (MSC) assessment initiatives  

14:00 - 15:30  Monday, April 28

  • Jocelyn Lockyer PhD, University of Calgary, Canada

  • Joan Sargeant PhD, Dalhousie University, Halifax, Nova Scotia, Canada

  • John Campbell MBBS, University of Exeter Medical School, Exeter, UK

  • Marianne Xhignesse MD, University of Sherbrooke, Sherbrooke, PQ, Canada

  • Karen Mann PhD, Dalhousie University, Halifax, Nova Scotia, Canada

 

Multisource feedback (MSF) is increasingly being used as part of revalidation to assess physician performance across a range of competencies, with particular emphasis on collaboration, communication, and professionalism. Both quantitative and qualitative data may be collected. Feedback from medical colleagues, co-workers (e.g., nurses, pharmacists, technicians), and patients is aggregated and forms the basis of the data. MSF approaches have been extensively examined for evidence of validity, reliability, feasibility, acceptability, equivalence, and educational/catalytic effect. This research has identified areas of concern and opportunities to enhance the potential of MSF to support physician learning and change. Four questions emerging from the literature will direct this symposium:

 

  1. Rater selection. What is the optimal approach to selecting raters?  Should the physician select the professionals who assess his/her competence? (John Campbell)

  2. Data presentation. How are MSF data optimally presented to participating physicians? What is the value in collecting qualitative data? (Jocelyn Lockyer)

  3. Feedback delivery and action plan development. What are the optimal approaches to feedback delivery? (Joan Sargeant)

  4. Coaching and mentoring. What potential benefits would ‘coaching’ with a certified coach offer to the MSF process? (Marianne Xhignesse)

Multisource Feedback: Its controversies and challenges in providing feedback to practicing physicians

16:00 - 17:30  Monday, April 28

Presenters:

  • Dr. Matthew Lineberry, Assistant Professor of Medical Education, University of Illinois at Chicago

  • Dr. Clare Kreiter, Professor of Family Medicine, University of Iowa

  • Dr. Georges Bordage, Professor of Medical Education, University of Illinois at Chicago

Discussant:

  • Dr. Jack Boulet, Associate Vice President, Foundation for Advancement of International Medical Education and Research (FAIMER)

 

Sound diagnostic reasoning during clinical encounters is a key competency of the effective clinician. However, the inherent complexity of such reasoning makes it challenging to assess, whether for formative or summative purposes. In this session, we discuss one type of clinical reasoning assessment, the Script Concordance Test (SCT).

 

Two recently published reviews of SCTs’ psychometric properties have claimed that the methodology generally produces reliable and valid assessments of clinical reasoning, and that such tests may soon be suitable for high-stakes testing. Through a review of published SCT reports and a re-analysis of a previously published SCT report, we have identified three critical threats to the valid interpretation and use of SCT scores that were not identified in previous reviews. These threats consist of logical inconsistencies in the scoring procedures, unexamined sources of measurement error, and construct confounding with examinees’ response tendencies on Likert-type assessment items. This third issue risks bias against racial or ethnic groups with certain response tendencies; it also makes the test susceptible to score inflation through coached test-taking strategies. Our research shows that examinees could drastically inflate their scores by never endorsing the two extreme scale points on the tests’ 5-point scale. Even examinees who simply endorse “0” for every item could still outperform most examinees who responded as the test intended.

 

In this symposium, we present our research on these validity threats and seek to stimulate discussion of alternative methodologies for assessment of clinical reasoning moving forward. 

Some Promise and Pitfalls of Clinical Reasoning Assessment: A Critical Examination of the Script Concordance Test 

16:00 - 17:30  Monday, April 28

Tuesday Symposia

  • Dr. Kenneth Locke, Faculty of Medicine, University of Toronto, Toronto, ON, Canada

  • Dr. Anthony Donato, Reading Health System, Reading, PA, USA

  • Dr. Pippa Hall, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada

  • Dr. Margaret McKenzie, Cleveland Clinic Lerner College of Medicine, Cleveland, OH, USA

  • Dr. Hedy Wald, Warren Alpert Medical School of Brown University, Providence, RI, USA

 

Portfolios are increasingly used in health professional education to complement, or in some cases replace, other forms of competency assessment. In many cases, they are used solely to promote reflective practice skill development in learners; in others, portfolios play a significant role in progress decisions. The literature emphasizes the importance of structure, coaching, and assessment for portfolios to be effective.  Required portfolio focus, content, and assessment criteria vary widely amongst programs; such variation presents challenges in articulating common principles and strategies, because of differing assessment protocols and standards for success, as well as the relative role (formative or summative) of portfolio assessment. In addition, when assessing learners' work in portfolios, many institutions do not provide standard training for assessors, and thereby undermine the learning value of portfolio assessment, whether summative or formative. In this symposium, we will examine underlying theoretical principles, and explore issues and dilemmas that can arise, when using portfolios for formative and/or summative assessment. In doing so, we will discuss assessment strategies in light of Schuwirth and Van der Vleuten’s conceptual framework of assessment tools as instruments of learning, with emphasis on systems of assessment for learning, rather than reliance on individual assessments of learning. We aim to clarify and justify common elements of successful portfolio assessment systems. Portfolio implementation strategies and assessment systems within presenters’ undergraduate and graduate medical education programs, with associated practical and institutional issues, will serve as exemplars.

Issues and Controversies in the Use of Portfolios for Assessment in Undergraduate and Graduate Medical Education

08:30 - 10:00  Tuesday, April 29

  • Janice L. Hanson, PhD, EdS, University of Colorado School of Medicine, Aurora, Colorado USA

  • Lindsey Lane, BM BCh, University of Colorado School of Medicine, Aurora, Colorado, USA

  • TJ Jirasevijinda, MD, Weill Cornell Medical College, New York, New York, USA

  • Paul Hemmer, MD, Uniformed Services University of the Health Sciences, Bethesda, MD, USA 

 

This symposium will confront the implicit assumption that “measurement” is preferable to “description” when assessing and evaluating learners in medical education. Symposium presenters will discuss why written narrative descriptions of learners’ performance provide a more useful and valid foundation for assessment and evaluation than ratings and scores from scales, checklists and examinations. Presentations will address the challenges of changing a culture of evaluation that has relied on numbers for most evaluation data; methods for building shared understanding among faculty and learners; and the challenges of relying on narrative data when faculty come from different cultural backgrounds.

 

Dr. Hanson will introduce the symposium with an overview of the culture change required when shifting evaluation from rating scales to written descriptions of learner performance, then frame the topics of speakers that follow.

 

Dr. Lane will explain why written descriptions of learner performance provide a useful and valid foundation for evaluation decisions.

 

When multicultural faculty write narrative evaluations: Dr. Jirasevijinda will discuss experience with narrative evaluation when teachers have attended medical schools in many countries with differing traditions.

 

Building a shared vocabulary: Dr. Hemmer will focus on building a shared understanding among teachers of the meaning of words used to describe learner performance and on building consensus about evaluation decisions for individual learners.

 

Dr. Hanson will close with a summary of the group’s conversation about changing a program’s culture of evaluation toward narrative description of learner performance.

Narrative description as evaluation data in health professional education

08:30 - 10:00  Tuesday, April 29