A research team at POSTECH has developed a technology that overcomes the constraints of traditional imaging methods, providing stable and highly accurate cell visualization. Their findings are published in Nature Communications.
In life sciences, confocal fluorescence microscopy (CFM) is widely regarded for producing high-resolution cellular images. However, it requires fluorescent staining, which poses risks of photobleaching and phototoxicity, potentially damaging the cells under study. Conversely, mid-infrared photoacoustic microscopy (MIR-PAM) allows for label-free imaging, preserving cell integrity. Yet, its reliance on longer wavelengths limits spatial resolution, making it difficult to visualize fine cellular structures with precision.
To bridge these gaps, the POSTECH team developed an innovative imaging method powered by explainable deep learning (XDL). This approach transforms low-resolution, label-free MIR-PAM images into high-resolution, virtually stained images resembling those generated by CFM. Unlike conventional AI models, XDL offers enhanced transparency by visualizing the transformation process, ensuring both reliability and accuracy.
The team implemented a single-wavelength MIR-PAM system and designed a two-phase imaging process. The first phase, resolution enhancement, converts low-resolution MIR-PAM images into high-resolution ones, clearly distinguishing intricate cellular structures such as nuclei and filamentous actin. The second phase, virtual staining, produces virtually stained images without fluorescent dyes, eliminating the risks associated with staining while maintaining CFM-quality imaging.
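To make the two-phase structure concrete, the sketch below chains two image-to-image networks: one for resolution enhancement, one for virtual staining. It is only a minimal conceptual illustration assuming PyTorch; the network definitions, names, and channel counts are placeholders, and it does not reproduce the paper's unsupervised training or explainability components.

```python
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """Placeholder generator: a few convolutional layers standing in for the
    actual networks described in the paper (hypothetical architecture)."""
    def __init__(self, in_ch=1, out_ch=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Stage 1: resolution enhancement (low-res MIR-PAM -> high-res MIR-PAM)
resolution_enhancer = ConvStage(in_ch=1, out_ch=1)

# Stage 2: virtual staining (high-res MIR-PAM -> CFM-like channels,
# e.g. one channel each for nuclei and filamentous actin)
virtual_stainer = ConvStage(in_ch=1, out_ch=2)

def transform(mir_pam_image: torch.Tensor) -> torch.Tensor:
    """Run the two-phase pipeline on a (N, 1, H, W) label-free image batch."""
    high_res = resolution_enhancer(mir_pam_image)
    virtually_stained = virtual_stainer(high_res)
    return virtually_stained

# Example: a dummy 256x256 single-channel MIR-PAM frame
dummy = torch.rand(1, 1, 256, 256)
print(transform(dummy).shape)  # torch.Size([1, 2, 256, 256])
```

In the published method the two stages are trained without paired ground-truth images, which is why the explainable deep-learning framing matters: it lets the researchers inspect how each transformation was produced rather than trusting a black box.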
This innovative technology delivers high-resolution, virtually stained cellular imaging without compromising cell health, offering a powerful new tool for live-cell analysis and advanced biological research.
Professor Chulhong Kim remarked, “We have developed a cross-domain image transformation technology that bridges the physical limitations of different imaging modalities, offering complementary benefits. The XDL approach has significantly enhanced the stability and reliability of unsupervised learning.”
Professor Jinah Jang added, “This research unlocks new possibilities for multiplexed, high-resolution cellular imaging without labeling. It holds immense potential for applications in live-cell analysis and disease model studies.”
The study was led by Professors Chulhong Kim (Department of Electrical Engineering, Department of Convergence IT Engineering, Department of Mechanical Engineering, Department of Medical Science and Engineering, Graduate School of Artificial Intelligence) and Jinah Jang (Department of Mechanical Engineering, Department of Convergence IT Engineering, Department of Medical Science and Engineering), alongside doctoral candidate Eunwoo Park, Dr. Sampa Misra (Department of Convergence IT Engineering), and Dr. Dong Gyu Hwang (Center for 3D Organ Printing and Stem Cells).
More information: Eunwoo Park et al, Unsupervised inter-domain transformation for virtually stained high-resolution mid-infrared photoacoustic microscopy using explainable deep learning, Nature Communications (2024). DOI: 10.1038/s41467-024-55262-2
Provided by Pohang University of Science and Technology