Human visual attention-inspired knowledge distillation underlying interpretable computational pathology
Abstract
Computational pathology leverages advanced deep-learning techniques to analyze high-resolution medical images. However, in such real-world scenarios, a trade-off exists among model compactness, interpretability, and task performance. Knowledge distillation (KD) is widely applied to compress deep-learning models while preserving high performance. However, deep learning-based KD often lacks interpretable design, leading to inaccurate attention over images. Inspired by human visual processing, we developed a human visual attention-inspired knowledge distillation (HVisKD) strategy that captures local and global patch relations to construct differentiated features. We employed it in pathological analysis to balance this trade-off. HVisKD improves performance across various lightweight models in segmentation tasks. More importantly, the attention maps of HVisKD showed improved consistency with human expert-labeled segmentations. Furthermore, we evaluated HVisKD in a real-world intraoperative pathological diagnosis scenario and achieved interpretable and fast analysis. Together, HVisKD offers a lightweight and interpretable strategy for computational pathology, aligning deep learning with brain-like information processing for more dependable output.
Authors (13)
Muzhou Yu
Zihan Zhong
Xingang Zhou
Yuekun Wang
Tingyu Liang
Jiamin Chen
Hongmin Huang
Junhan Zhou
Dachun Zhao
Bo Lei
Yu Wang
Wenbin Ma
Kaisheng Ma
Quick Access
- Publication Year
- 2025
- Source Database
- DOAJ
- DOI
- 10.1038/s41598-025-26004-1
- Access
- Open Access ✓