Face, Content, and Construct Validation of the da Vinci Skills Simulator

Bibliographic Details
Published in: Urology (Ridgewood, N.J.), 2012-05, Vol.79 (5), p.1068-1072
Main Authors: Kelly, Douglas C, Margules, Andrew C, Kundavaram, Chandan R, Narins, Hadley, Gomella, Leonard G, Trabulsi, Edouard J, Lallas, Costas D
Format: Article
Language:English
Description
Summary: Objective To report on assessments of face, content, and construct validity for the commercially available da Vinci Skills Simulator (dVSS). Methods A total of 38 subjects participated in this prospective study. Participants were classified as novice (0 robotic cases performed), intermediate (1-74 robotic cases), or expert (≥75 robotic cases). Each subject completed 5 exercises. Using the metrics available in the simulator software, the performances of each group were compared to evaluate construct validation. Immediately after completion of the exercises, each subject completed a questionnaire to evaluate face and content validation. Results The novice group consisted of 18 medical students and 1 resident. The intermediate group included 6 residents, 1 fellow, and 2 faculty urologists. The expert group consisted of 2 residents, 1 fellow, and 7 faculty surgeons. The mean number of robotic cases performed by the intermediate and expert groups was 29.2 and 233.4, respectively. An overall significant difference was observed in favor of the more experienced groups in 4 skill sets. When intermediates and experts were combined into a single “experienced” group, they significantly outperformed novices in all 5 exercises. Intermediates and experts rated various elements of the simulator's realism at an average of 4.1/5 and 4.3/5, respectively. All intermediate and expert participants rated the simulator's value as a training tool as 4/5 or 5/5. Conclusion Our study supports the face, content, and construct validation attributed to the dVSS. These results indicate that the simulator may be most useful to novice surgeons seeking basic robotic skills acquisition.
ISSN:0090-4295
1527-9995
DOI:10.1016/j.urology.2012.01.028