Background: Cervical cancer poses a significant threat in resource-limited regions, where screening and human papillomavirus (HPV) vaccination coverage are often inadequate. Visual inspection with acetic acid, although practical in such settings, is limited by poor reproducibility and accuracy. This study aimed to develop a deep learning algorithm that automatically identifies cervical precancer and cancer from visual evaluation of cervical images.
Methods: A 7-year longitudinal study of 9406 women aged 18-94 years was conducted in Costa Rica from 1993 to 2000, combining multiple cervical screening methods with histopathologic confirmation of precancer. Cancer cases were ascertained for up to 18 years through tumor registry linkage. Digitized cervical images (cervigrams) obtained by cervicography during screening were used to train and validate a deep learning algorithm. The algorithm produced a continuous prediction score (0-1) that could be categorized at a cut point to balance sensitivity and specificity for detecting precancer and cancer. All statistical tests were two-sided.
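The abstract does not specify the network architecture, training pipeline, or scoring threshold; the sketch below is a minimal illustration only, assuming a generic ImageNet-pretrained convolutional backbone (ResNet-50) with a single sigmoid output, a hypothetical image file name, and a placeholder cut point. It shows how a deep learning model can map a digitized cervical image to a 0-1 score that is then categorized to trade sensitivity against specificity.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    class CervigramScorer(nn.Module):
        """Maps a digitized cervical image to a precancer/cancer score in [0, 1]."""
        def __init__(self):
            super().__init__()
            backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
            backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # single-logit head
            self.backbone = backbone

        def forward(self, x):
            return torch.sigmoid(self.backbone(x))  # continuous score in (0, 1)

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    model = CervigramScorer().eval()
    img = preprocess(Image.open("cervigram.jpg").convert("RGB")).unsqueeze(0)  # hypothetical file
    with torch.no_grad():
        score = model(img).item()

    # The continuous score is categorized at a cut point chosen to balance
    # sensitivity against specificity (i.e., referral burden); 0.5 is a placeholder.
    refer_for_evaluation = score >= 0.5

In practice the cut point would be selected on held-out data so that the fraction of women referred stays within the capacity of the local health system.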
Results: At enrollment, automated visual evaluation of cervigrams identified cumulative precancer and cancer cases more accurately (AUC = 0.91, 95% CI = 0.89-0.93) than either the original cervigram interpretation (AUC = 0.69, 95% CI = 0.63-0.74; P < .001) or conventional cytology (AUC = 0.71, 95% CI = 0.65-0.77; P < .001). A single round of automated visual screening of women aged 25-49 years could have detected 127 (55.7%) of the 228 precancers (cervical intraepithelial neoplasia grade 2 [CIN2], CIN3, or adenocarcinoma in situ [AIS]) diagnosed in the entire adult population (ages 18-94 years), while referring only 11.0% of women.
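The operating-point figure above follows directly from the reported counts; the short sketch below restates that arithmetic and shows, in general form, how AUCs for competing screening tests are computed against histopathology-confirmed outcomes. The labels and scores used for the AUC calls are placeholders, not study data.

    from sklearn.metrics import roc_auc_score

    # Reported operating point, restated from the counts in the Results.
    precancers_total = 228       # CIN2/CIN3/AIS diagnosed at ages 18-94
    precancers_detected = 127    # detected by one screen of women aged 25-49
    sensitivity = precancers_detected / precancers_total
    print(f"sensitivity = {sensitivity:.1%}")   # 55.7%
    referral_fraction = 0.110                   # share of screened women referred

    # AUC comparison in general: each test assigns a score per woman, and the
    # scores are evaluated against confirmed outcomes (1 = precancer/cancer).
    # Placeholder values only; individual-level study data are not shown here.
    outcomes         = [0, 0, 1, 0, 1, 1, 0, 1]
    automated_scores = [0.1, 0.2, 0.8, 0.3, 0.9, 0.7, 0.2, 0.6]
    cytology_scores  = [0.2, 0.4, 0.5, 0.3, 0.7, 0.4, 0.6, 0.8]
    print(roc_auc_score(outcomes, automated_scores))  # AUC for the automated score
    print(roc_auc_score(outcomes, cytology_scores))   # AUC for the comparator
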
Conclusions: These findings suggest that automated visual evaluation of cervical images, captured with contemporary digital cameras and scored by deep learning, holds significant promise. Successful implementation could enable widespread, effective point-of-care cervical screening, particularly in resource-limited settings.