Keywords
Correlative Microscopy, Image Registration, In-silico labeling, Deep Learning
Correlative Light and Electron Microscopy (CLEM) combines the high resolution of electron microscopy (EM) with the molecular specificity of fluorescence microscopy. In super-resolution array tomography (srAT) for example, serial sections are imaged first under the fluorescence microscope using super-resolution techniques such as structured illumination microscopy (SIM), and then in the electron microscope1. With this technique, it is possible to identify and assign molecular identities to subcellular structures such as electrical synapses1,2 or microdomains in bacterial membranes3 that cannot be resolved by EM due to insufficient contrast.
To visualize and interpret the results of CLEM, the fluorescent images must be registered to the EM images with high accuracy and precision. Due to the different contrasts of EM and fluorescence images, automated correlation-based image alignment, as used e.g. for aligning EM serial sections4, is not directly possible. Registration is often done by hand using a fluorescent chromatin stain2, or semi-automatically with fiducial markers using tools such as eC-CLEM5. Further improvement and automation of the registration process is of great interest to make CLEM scalable to larger datasets.
Deep Learning using convolutional neural networks (CNNs) has become a powerful tool for various tasks in microscopy, including denoising and deconvolution as well as classification and segmentation, reviewed in 6 and 7. One interesting application of CNNs is the prediction of fluorescent labels from transmitted light images of cells, also called “in silico labeling”8,9.
We show here that this approach can be used to predict the fluorescent chromatin stain in electron microscopy images of cell nuclei. The predicted “in silico” chromatin images are sufficiently similar to real experimental chromatin images acquired with SIM to use them for automated correlation-based registration of CLEM images. Based on this observation, we developed “DeepCLEM”, a fully automated CLEM registration workflow implemented in Fiji10 and based on CNNs.
We used previously acquired imaging data of Caenorhabditis elegans and of human skin samples from healthy subjects. Sample preparation and image acquisition have been described previously in detail1,2,11. Briefly, C. elegans worms were cryo-immobilized via high-pressure freezing and subsequently processed by freeze substitution. All samples were embedded in methacrylate resin and sectioned at 100 nm. Ribbons of consecutive sections were attached to glass slides and labeled with fluorophores. Hoechst 33342 was used to stain chromatin, and immunolabeling was used to visualize molecular identities. The sections were then imaged with SIM super-resolution microscopy. Next, they were processed for electron microscopy by heavy metal contrasting and carbon coating. The regions of interest previously imaged with SIM were then imaged again on the same sections with scanning electron microscopy, resulting in pairs of images that needed to be correlated.
To prepare ground truth for network training, we manually registered the chromatin channel to the EM images as described in 2. We selected 30 subimages and superimposed them in the software Inkscape. With the opacity of the chromatin images reduced, they could be manually resized, rotated, and dragged until the Hoechst signal coincided with the electron-dense heterochromatin puncta in the underlying EM images.
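The manual overlay corresponds to a similarity transform (uniform scaling, rotation, and translation). As an illustration only, the following sketch applies such a transform with scikit-image to warp a chromatin image into the EM coordinate system; the file names and parameter values are hypothetical placeholders, not values taken from our data.

```python
# Illustration only: the manual overlay (resize, rotate, drag) expressed as a
# similarity transform. Parameter values and file names are hypothetical.
import numpy as np
from skimage import io, transform

chromatin = io.imread("sim_chromatin_crop.tif")   # SIM chromatin channel
em = io.imread("em_crop.tif")                     # corresponding EM subimage

# Scale, rotation, and translation read off from the manual alignment
tform = transform.SimilarityTransform(scale=0.85,
                                      rotation=np.deg2rad(2.0),
                                      translation=(12.5, -7.0))

# Warp the chromatin image into the EM coordinate system for use as a
# registered training target
registered = transform.warp(chromatin, tform.inverse,
                            output_shape=em.shape, order=1)
io.imsave("chromatin_registered.tif", registered.astype(np.float32))
```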
We implemented DeepCLEM as a Fiji10 plugin, using CSBDeep12 for network prediction. Preprocessing of the images as well as network training were performed in Python using scikit-image13 and TensorFlow14. First, a neural network trained on manually registered image pairs predicts the fluorescent chromatin signal from previously unseen EM images (Figure 1A). This “virtual” fluorescent chromatin image is then automatically registered to the experimentally measured chromatin signal from the sample using correlation-based alignment in Fiji (Figure 1B). The transformation parameters from this automated alignment are finally used to register the other SIM images that contain the signals of interest to the EM image (Figure 1C).
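The correlation step (Figure 1B) can be illustrated with a minimal, translation-only sketch in Python using scikit-image; the actual plugin runs this step inside Fiji and may additionally account for rotation and scaling, and the file names below are placeholders.

```python
# Minimal, translation-only illustration of the correlation-based alignment;
# the Fiji plugin performs this step internally. File names are placeholders.
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

predicted = io.imread("predicted_chromatin.tif").astype(np.float32)  # network output
measured = io.imread("sim_chromatin.tif").astype(np.float32)         # measured SIM chromatin

# Shift that registers the measured chromatin image to the prediction
offset, error, _ = phase_cross_correlation(predicted, measured,
                                           upsample_factor=10)

# Apply the same shift to the SIM channel carrying the signal of interest
signal = io.imread("sim_signal.tif").astype(np.float32)
registered_signal = nd_shift(signal, offset)
io.imsave("registered_signal.tif", registered_signal)
```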
DeepCLEM requires Fiji10 with CSBDeep12 to run. The paths to the images and to the model file are entered in a user dialog (Figure 2). After running DeepCLEM, the correlated images and an XML file containing the transformation parameters are written to the output directory. The workflow is summarized in Figure 1; instructions for installing and running DeepCLEM and for training custom networks are included in the repository.
We trained DeepCLEM on correlative EM and SIM images of C. elegans and of human skin tissue, and compared prediction and registration results for different network architectures and preprocessing routines. A generative adversarial network (pix2pix) showed promising results for some images from the skin dataset, but overall performance was best with the ProjectionCARE network from CSBDeep12.
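For orientation, training a CARE-type network on registered EM/chromatin pairs with the CSBDeep Python API follows the pattern sketched below; the directory names, patch size, and epoch count are placeholders rather than the exact settings used for DeepCLEM.

```python
# Sketch of training with the CSBDeep Python API; directory names, patch size,
# and epoch count are placeholders, not the exact DeepCLEM settings.
from csbdeep.data import RawData, create_patches
from csbdeep.io import load_training_data
from csbdeep.models import Config, CARE

# EM images as network input, manually registered chromatin images as target
raw_data = RawData.from_folder(basepath="training_data",
                               source_dirs=["EM"],
                               target_dir="chromatin",
                               axes="YX")

# Extract training patches and save them to disk
create_patches(raw_data, patch_size=(128, 128),
               n_patches_per_image=64, save_file="patches.npz")

# Reload with a validation split, configure, and train
(X, Y), (X_val, Y_val), axes = load_training_data("patches.npz",
                                                  validation_split=0.1)
config = Config(axes, n_channel_in=1, n_channel_out=1, train_epochs=100)
model = CARE(config, "deepclem_chromatin", basedir="models")
model.train(X, Y, validation_data=(X_val, Y_val))
model.export_TF()  # export the trained model for the CSBDeep Fiji plugin
```

The exported model can then be loaded by the CSBDeep Fiji plugin and used for prediction within the DeepCLEM workflow.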
EM images had large differences in contrast even when acquired in the same laboratory. We compared different preprocessing routines, including normalization and histogram equalization, and found that histogram equalization alone resulted in the best performance on our data. The best combination of preprocessing steps for optimizing contrast may however depend on the data.
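As an example of such a preprocessing step, global histogram equalization of an EM section can be performed with scikit-image as sketched below; the file names are placeholders, and adaptive equalization (CLAHE) is shown as an alternative that may suit other datasets better.

```python
# Example preprocessing: global histogram equalization of an EM section with
# scikit-image; file names are placeholders. CLAHE is shown as an alternative.
import numpy as np
from skimage import io, exposure

em = io.imread("em_section.tif").astype(np.float32)
em = (em - em.min()) / (em.max() - em.min())    # rescale to [0, 1]

em_eq = exposure.equalize_hist(em)              # global histogram equalization
# em_eq = exposure.equalize_adapthist(em)       # adaptive (CLAHE) alternative

io.imsave("em_section_eq.tif", em_eq.astype(np.float32))
```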
We developed “DeepCLEM”, a fully automated CLEM registration workflow implemented in Fiji10 based on prediction of the chromatin stain from EM images using CNNs. Our registration workflow can easily be included in existing CLEM routines or adapted for other imaging modalities as well as for 3D stacks.
While we found that "DeepCLEM" performs well under various conditions, it has some limitations: using chromatin staining for correlation requires the presence of nuclei in the field of view. This limitation could be overcome by using e.g. propidium iodide to label the overall structure of the tissue.
Source code, pretrained networks, example data, and documentation are available online:
Source code available from: https://github.com/CIA-CCTB/Deep_CLEM.
Archived source code at time of publication: https://doi.org/10.5281/zenodo.4095247 (ref. 15)
License: MIT License.
This publication was supported by COST Action NEUBIAS (CA15124), funded by COST (European Cooperation in Science and Technology).
Open peer review summary (reviewer responses to the standard questions):

| Question | Reviewer 1 | Reviewer 2 |
|---|---|---|
| Is the rationale for developing the new software tool clearly explained? | No | Yes |
| Is the description of the software tool technically sound? | Partly | Partly |
| Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others? | Partly | Partly |
| Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool? | Yes | Yes |
| Are the conclusions about the tool and its performance adequately supported by the findings presented in the article? | Partly | Partly |

Reviewer 1 competing interests: No competing interests were disclosed. Reviewer expertise: TEM, STEM, tomography, CLEM; microbiology, cell biology.
Reviewer 2 competing interests: No competing interests were disclosed. Reviewer expertise: Image Analysis, CLEM, Machine Learning, Electron microscopy.