Overview:
• Principles of evaluating automation systems in the context of regulated software
• Awareness of the biggest sources of uncertainty for medical device vulnerability assessment
• Ability to apply the principles to current systems, including manual ones
Automating vulnerability assessments is a hot topic; there is simply too much work for human experts to perform consistently with high quality. Software tools, data standards, and decision models are being developed to close the gap. In past talks, we have presented progress in automating parts of the process.
How do we know whether automation is doing a good job? What are the impacts of automation getting this wrong? Drawing on lessons from systems engineering, decision science, research on human bias, and the measurement of classification performance, we present a simple framework for evaluating automation for vulnerability assessment.
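As a minimal illustration of the "measuring classification performance" angle (not the framework presented in the talk), one can treat an automated assessment step as a binary classifier and score it against human expert judgments. The labels and counts below are hypothetical.

```python
from collections import Counter

# Hypothetical labels: 1 = "vulnerability needs action", 0 = "not applicable".
# Expert judgments serve as the reference ("ground truth") for the comparison.
expert_labels    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
automated_labels = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

counts = Counter(zip(expert_labels, automated_labels))
tp = counts[(1, 1)]  # automation flags a real issue
fp = counts[(0, 1)]  # automation raises a non-issue (wasted analyst time)
fn = counts[(1, 0)]  # automation misses a real issue (the costly error here)
tn = counts[(0, 0)]  # automation correctly dismisses a non-issue

precision = tp / (tp + fp) if (tp + fp) else 0.0
recall    = tp / (tp + fn) if (tp + fn) else 0.0  # sensitivity
miss_rate = fn / (fn + tp) if (fn + tp) else 0.0  # false-negative rate

print(f"precision={precision:.2f} recall={recall:.2f} miss rate={miss_rate:.2f}")
```

Even this toy comparison shows why a single accuracy number is not enough: for medical device vulnerability assessment, a missed real issue and a spurious alert carry very different costs, so the error types must be weighed separately.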