Validation of a Novel Cutaneous Neoplasm Diagnostic Self-Efficacy Instrument (CNDSEI) for Evaluating User-Perceived Confidence With Dermoscopy

Background: Accurate medical image interpretation is an essential proficiency for practitioners in multiple medical specialties, including dermatology and primary care. A dermatoscope, a ×10–×20 magnifying lens paired with a light source, enables enhanced visualization of skin cancer structures beyond standard visual inspection. Skilled interpretation of dermoscopic images improves diagnostic accuracy for skin cancer.

Objective: To design and validate the Cutaneous Neoplasm Diagnostic Self-Efficacy Instrument (CNDSEI), a new tool to assess dermatology residents' confidence in the dermoscopic diagnosis of skin tumors.

Methods: In the 2018–2019 academic year, the authors administered the CNDSEI and the Long Dermoscopy Assessment (LDA), which measures dermoscopic image interpretation accuracy, to residents in 9 dermatology residency programs before exposure to a dermoscopy educational intervention. The authors conducted CNDSEI item analysis by inspecting response distribution histograms, assessed internal reliability using Cronbach's coefficient alpha (α), and assessed construct validity by comparing baseline CNDSEI and LDA results for corresponding lesions with one-way analysis of variance (ANOVA).

Results: At baseline, residents demonstrated significantly higher CNDSEI scores for lesions they diagnosed correctly on the LDA than for lesions they diagnosed incorrectly (P = 0.001). Internal consistency reliability of CNDSEI responses was excellent (α ≥ 0.9) or good (0.8 ≤ α < 0.9) for the majority (13/15) of lesion types.

Conclusions: The CNDSEI pilot established that the tool reliably measures user confidence in dermoscopic image interpretation and that self-efficacy correlates with diagnostic accuracy. Precise alignment of medical image diagnostic performance with self-efficacy instrument content offers an opportunity for construct validation of novel medical image interpretation self-efficacy instruments.


Methods
As a quality improvement initiative, the curriculum and instrument development efforts did not, per institutional policy, require formal supervision by our institutional review board, but they did receive Quality Improvement Advisory Board oversight.

… (3) How comfortable are you with distinguishing between the following (two) diagnoses with naked-eye examination? (n=11); and (4) How comfortable are you with distinguishing between the following (two) diagnoses with dermoscopic examination? (n=11). Answer options comprised a Likert scale ranging from 1 to 10, with 1 labeled "low comfort" and 10 labeled "high comfort"; questions left unanswered by a resident were scored 0 (no response). The LDA included 30 dermoscopic images that residents were asked to classify in 3 ways: (1) whether the lesion is

Introduction
Accurate medical image interpretation is a diagnostic proficiency in almost every area of medical education. The validation of metrics that quantify confidence and skill in image interpretation is necessary to measure the impact of educational efforts.
A dermatoscope is a medical device that pairs a 10× magnifier with polarized light, enabling more complete visualization of skin structures not readily visible on standard clinical examination. Skilled interpretation of dermoscopic images has been shown to reduce both false positives and false negatives during melanoma screening examinations compared with clinical (naked-eye) examination alone [1,2]. Nevertheless, dermoscopy education in dermatology residency programs leaves room for improvement: 38% of US dermatology residents receive no dermoscopy training, and those who do receive training average only 2 hours of educational exposure [3,4]. Our group leveraged Project ECHO (Extension for Community Healthcare Outcomes), a telementoring framework, to deliver the DERMatology Early Melanoma Diagnosis (DERM:END) educational intervention [5].
To evaluate our educational intervention, we developed the Cutaneous Neoplasm Diagnostic Self-Efficacy Instrument (CNDSEI).

Results
At baseline, residents demonstrated significantly higher CNDSEI scores for lesions they diagnosed correctly on the LDA than for lesions they diagnosed incorrectly (P = 0.001). Internal consistency reliability of CNDSEI responses was excellent (α ≥ 0.9) or good (0.8 ≤ α < 0.9) for the majority (13/15) of lesion types.
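The paper reports Cronbach's α per lesion type but does not show the computation. As a minimal illustration of how α could be derived from a respondents × items matrix of Likert responses, the following sketch uses the standard formula; the function name and demo data are hypothetical, not the study's code or data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's coefficient alpha for an (n_respondents, n_items) matrix
    of Likert responses. Hypothetical helper; not the study's code."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]                          # number of items
    item_vars = x.var(axis=0, ddof=1)       # variance of each item
    total_var = x.sum(axis=1).var(ddof=1)   # variance of respondents' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative 1-10 confidence ratings: 4 respondents x 3 items for one lesion type
demo = [[7, 8, 7], [4, 5, 4], [9, 9, 8], [3, 4, 3]]
print(round(cronbach_alpha(demo), 3))
```

By the conventional thresholds used in the Results, a value of 0.9 or above would be read as excellent and 0.8 to below 0.9 as good internal consistency.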

Conclusions
The CNDSEI pilot established that the tool reliably measures user confidence in dermoscopic image interpretation and that self-efficacy correlates with diagnostic accuracy. Precise alignment of medical image diagnostic performance with self-efficacy instrument content offers an opportunity for construct validation of novel medical image interpretation self-efficacy instruments.

[Figure, panels A–D: CNDSEI scores for the lesion types classified correctly versus incorrectly by participants on the LDA (F = 6.91, P = 0.001).]
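The correct-versus-incorrect comparison reported above used one-way ANOVA. The sketch below shows how such a test could be run with SciPy on hypothetical confidence ratings; the data are illustrative only and do not reproduce the study's F statistic.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-10 confidence ratings; illustrative only, not the study's data.
correct_dx = np.array([8, 7, 9, 6, 8, 7, 9, 8])    # lesions diagnosed correctly on the LDA
incorrect_dx = np.array([5, 4, 6, 3, 5, 4, 6, 5])  # lesions diagnosed incorrectly

f_stat, p_val = stats.f_oneway(correct_dx, incorrect_dx)
print(f"F = {f_stat:.2f}, P = {p_val:.4f}")
```

With only two groups, one-way ANOVA is equivalent to an independent-samples t test (F equals t squared), so either test would support the same conclusion here.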

Discussion
… subjects are less likely to answer questions that do not seem relevant. Content validity addresses whether the instrument covers most or all dimensions of the concept to be measured [10]. The analysis establishing content validity is theoretical, based on expert opinion, systematic review of the literature, and factor analysis. Factor analysis groups questions into subtypes corresponding to each desired domain. In our tool, questions were grouped by lesion type and, more broadly, into benign and malignant types. Finally, inter-rater and test-retest reliability were not assessed. Test-retest reliability was not feasible to assess, as we tested only at baseline and after the intervention, with the intention of observing a change. Inter-rater reliability is not applicable to the construct in this context, as we are measuring self-perceived confidence, which cannot be quantified by a separate rater.

The growing emphasis on competency-based medical education necessitates the validation of metrics that can quantify skill acquisition. Medical image interpretation skills are critical proficiencies in almost all fields of medical practice.
In radiology there have been efforts to develop and validate simulation-based assessments that mirror real-life clinical decision-making [7] and to quantify radiographic image interpretation skills of non-radiologists [8]. Both tools relied on comparison of non-experts (medical students/interns) to more experienced users (senior residents) for construct validation of medical image interpretation accuracy [7,8].
While medical image interpretation accuracy is important, one's perceived ability to achieve certain attainments [9], or self-efficacy, is an important determinant of practice change. Instruments addressing self-efficacy must be carefully validated to demonstrate the ability to appropriately capture user confidence in the construct it intends to measure [10]. We present a unique model for construct validation of medical image interpretation self-efficacy instruments by precisely aligning medical image interpretation accuracy and self-efficacy instrument content.
In validating an educational metric, it is important to