Software Tool Article

COverlap: a Fiji toolset for the 3D co-localization of two fluorescent nuclear markers in confocal images

[version 1; peer review: 3 approved with reservations]
PUBLISHED 03 Jan 2024

This article is included in the NEUBIAS - the Bioimage Analysts Network gateway.

Abstract

With the increasing complexity and throughput of microscopy experiments, it has become essential for biologists to navigate computational means of analysis to produce automated and reproducible workflows. However, because bioimage analysis workflows are largely underreported in the methods sections of articles, it is difficult to find practical examples of documented scripts to support beginner programmers in biology. Here, we introduce COverlap, a Fiji toolset composed of four macros, for the 3D segmentation and co-localization of fluorescent nuclear markers in confocal images. The toolset accepts batches of multichannel z-stack images, segments objects in two channels of interest, and outputs object counts and labels, as well as co-localization results based on the physical overlap of objects. The first macro is a preparatory step that produces maximum intensity projections of images for visualization purposes. The second macro assists users in selecting batch-suitable segmentation parameters by testing them on small portions of the images. The third macro performs automated segmentation and co-localization analysis, and saves the parameters used, the results table, the 3D regions of interest (ROIs) of co-localizing objects, and two types of verification images with segmentation and co-localization masks for each image of the batch. The fourth macro allows users to review the verification images displaying segmentation masks and the location of co-localization events, and to perform corrections such as ROI adjustment, z-stack reslicing, and volume estimation correction in an automatically documented manner. To illustrate how COverlap operates, we present an experiment in which we identified rare endothelial proliferation events in adult rat brain slices on more than 350 large tiled z-stacks. We conclude by discussing the reproducibility and generalizability of the toolset, its limitations for different datasets, and its potential use as a template that is adaptable to other types of analyses.

Keywords

3D segmentation, co-localization, confocal microscopy, angiogenesis, bioimage analysis, endothelial cell proliferation, ImageJ, Fiji toolset

Introduction

In recent years, the complexity of bioimage analysis workflows has significantly increased because of the extraordinary volume of data generated from microscopy experiments.1 Although a plethora of user-friendly bioimage analysis methods and tools are now available to life scientists,2 there is no single turnkey solution, owing to the variety of existing imaging problems.3 Additionally, performing complex analysis workflows without computational skills has become increasingly challenging or even impossible.4 In cell biology, computer vision has the dual advantage of automating the analysis process and enabling complete data extraction from images.5 Perhaps just as importantly, the creation of macros or scripts for quantifying images addresses the need to report reproducible bioimage analysis workflows, because code and its documentation serve as traceable entities that can be conveniently reviewed and reused as necessary.4,6 Despite the availability of excellent teaching materials for life scientists wanting to learn programming,7–9 there seems to be a lack of practical and accessible published examples (such as Refs. 10–13) that can inspire programming novices.

We introduce COverlap, a Fiji toolset consisting of four macros designed to segment and detect the co-occurrence of two nuclear markers in confocal multichannel z-stack images. A testing step on small subsamples of images first facilitates the selection of batch-suitable parameters for the segmentation workflow, which consists of image enhancement processes (normalization, filtering, and background subtraction), followed by thresholding and connected component analysis to identify and count objects in two image channels. The toolset combines pre-existing plugins to perform a fully automated segmentation and co-localization analysis based on the percentage of volume overlap between objects. Finally, the toolset provides support for the visualization, manual review, and annotation of results while giving users the opportunity to implement a documented set of corrections without having to perform the entire analysis again.

After providing a detailed description of the toolset’s requirements, installation procedure, and operation, we present an experiment in which we used COverlap to detect newly formed endothelial cells in the anterior cingulate cortex (ACC) of adult rats, a cerebral region involved in the long-term storage of memories. Because this area is widely spread in the brain, and endothelial proliferation events are quite rare outside of development, aging, disease, or injury,14,15 this experiment required the acquisition of a substantial number of 20× multichannel tiled z-stack images spanning the region of interest on several 50 μm-thick coronal brain sections per animal. To reveal endothelial proliferation, we labeled newly synthesized DNA in the nuclei of proliferating cells using an EdU (5-ethynyl-2′-deoxyuridine) assay, performed immunofluorescence staining of ERG, a transcription factor specific to endothelial nuclei in the brain, and used COverlap to detect their co-occurrence. The toolset organization, with a clear separation of interactive from automated steps, greatly facilitated the analysis of more than 350 multidimensional images, while the automated recording of all parameters and results ensured the reproducibility of the experiment.

We conclude this article by addressing how the object segmentation and co-localization analysis components of the workflow were selected and how they may not be applicable to all comparable analysis challenges. We then discuss how COverlap can be flexibly used either as a parameterizable solution for non-programmers with a compatible experiment or as an adaptable template, which we believe exemplifies good practices in reporting a bioimage analysis workflow.

Methods

Implementation

Software

Our toolset is written in the ImageJ macro language and is designed to be installed in Fiji16 as a set of clickable action buttons that trigger four individual macros. The version of Fiji used to write and run this toolset was 1.54f. The toolset relies on classical segmentation algorithms (filtering, background subtraction, thresholding, watershed) to extract objects from images and on a user-defined minimum volume overlap ratio to quantify co-localizing objects.

Toolset installation and setting up

The toolset can be installed by downloading the COverlap_Toolset.ijm file on GitHub and placing it in the macros/toolsets/ directory of Fiji. Additionally, the toolset requires the installation of the plugins MorphoLibJ17 and 3D ImageJ Suite.18 Users must enable the following update sites in Fiji: IJPB-plugins, 3D ImageJ Suite, Java8, and ImageScience. The GitHub README.md file19 provides a detailed tutorial for the installation and use of COverlap.

Prerequisites

The toolset requires multichannel z-stack images (at least two and up to four channels), where the two channels of interest ideally contain nuclear or blob-like markers. The toolset was tested using .tif and .nd files but may work with other file types that the Bio-formats Importer can open.

To run, the toolset requires an Image folder containing the experiment and an initially empty Results folder. Two types of Image folder and naming organization are possible (Figure 1): either a master folder contains one subfolder per image (such as is required for .nd files where a given subfolder contains the .nd file and one .tif image per channel), or a single folder contains all the images. In the former case, the name of each subfolder needs to include: the identifier of the sample (such as unique identifying code) and the name of the region analyzed (such as “ACC” for anterior cingulate cortex or “PC” for parietal cortex). In the latter case, the name of the image must include this information.
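To illustrate how such a naming convention can be exploited, here is a minimal ImageJ-macro sketch (illustrative only, not COverlap’s actual code) that scans an Image folder and recovers the region name from subfolder names; the region labels follow the examples above, and all variable names are hypothetical:

```
// Hypothetical sketch: recover the region name ("ACC" or "PC") from the
// name of each image subfolder, as the naming convention above permits.
imageDir = getDirectory("Choose the Image folder");
entries = getFileList(imageDir);                        // subfolders end with "/"
for (i = 0; i < entries.length; i++) {
    name = entries[i];
    if (endsWith(name, "/"))
        name = substring(name, 0, lengthOf(name) - 1);  // strip trailing slash
    region = "";
    if (indexOf(name, "ACC") >= 0) region = "ACC";
    else if (indexOf(name, "PC") >= 0) region = "PC";
    if (region != "")
        print(name + " -> region: " + region);          // sample ID stays in the name
}
```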


Figure 1. Examples of suitable folder organization and naming.

Operation

Minimal system requirements

The toolset was run on a workstation with an Intel® Core™ i9-10900 CPU @ 2.80GHz, 128 GB of RAM, and Windows 10 64-bit.

Overview of the workflow

COverlap comprises four macros that can be individually triggered by four action buttons. Each macro performs a specific step (Figure 2):

  • 1. The preliminary creation of a maximum intensity projection (MIP) for each image.

  • 2. A testing step that allows the workflow to be tried on small portions of images in order to select the best filtering, background subtraction, and thresholding parameters for a given batch.

  • 3. The manual drawing of regions of interest (ROIs) for all images in the batch followed by automatic segmentation and 3D co-localization of the two nuclear markers.

  • 4. A review step, where results and segmentations/co-localization masks can be verified, and corrections applied if needed (ROI adjustment, z-stack reslicing, and volume estimation correction), with appropriate documentation of any such correction.


Figure 2. Complete COverlap workflow.

Red bubbles: macros. Blue: manual steps. Grey: automated steps. Green: output elements.

Macro 1: creation of maximum intensity projection images

The first macro of this toolset allows for the creation of an MIP for each image of the batch and saves it as a TIFF file in the corresponding image folder.

The MIPs generated in this preliminary step are used as a visualization tool: they support drawing the ROI during the parameter-testing phase (Macro 2) and the batch analysis (Macro 3), and they serve as the basis of verification images on which the positions of co-localization events found during the batch analysis are displayed, making it easy for users to locate these events and appreciate their distribution (Macro 4).

A click on the “1” icon of the toolset launches the first macro, and users are invited to specify the folder organization of their images (Figure 1), a substring (sequence of characters) contained in the file name of all the images, and the extension of images that need to be retrieved from the folder. If no substring is specified, then all images of the specified extension type are processed. Users can then pick a different Lookup Table (LUT) for each channel they want to be part of the MIP or the option “None” for non-existing channels or channels they do not wish to project. An option for disabling the automatic display of all channels as a composite image is also available.
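The core of this step can be reproduced with a few standard ImageJ-macro calls. The following minimal sketch (illustrative, not COverlap’s actual code) assumes a multichannel z-stack is open, projects it, assigns example LUTs, and saves the MIP alongside the original image:

```
// Minimal MIP sketch, assuming a multichannel z-stack is currently open.
originalTitle = getTitle();
outputDir = getDirectory("image");                 // folder of the original image
run("Z Project...", "projection=[Max Intensity]"); // creates "MAX_<title>"
Stack.setChannel(1); run("Green");                 // example LUT choices
Stack.setChannel(2); run("Magenta");
Stack.setDisplayMode("composite");                 // display all channels overlaid
saveAs("Tiff", outputDir + "MAX_" + originalTitle);
```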

Macro 2: testing of parameters for nuclear marker segmentation

The second macro allows users to test various segmentation parameters (filtering, object minimum size, background subtraction, and threshold) on small portions of images.

Users are invited to point to their Images and Results folders and to specify the names of their target markers and the channels on which they are located. Users can also indicate one or two region names. These region names should be included in the names of the images that contain them for the macro to properly use their respective associated parameters. If region names are misspelled or left unspecified, the left-column parameters are used for all images. This feature allows region-specific thresholds to be chosen, for example, when the imaged brain regions systematically differ in content. Notably, the current implementation of the macro only works with one ROI per image.

As done previously, users must also provide a substring contained in all image file names, as well as their extension, for the macro to retrieve them in the image folder. These parameters can be saved for retrieval during the subsequent steps of the toolset or later use of the macro.
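Saving parameters for later retrieval is a simple file-based pattern in the ImageJ macro language. The sketch below (hypothetical file name and keys, not COverlap’s actual format) shows one way such a parameter file can be written and read back:

```
// Hedged sketch of saving and retrieving parameters as a plain text file.
paramPath = getDirectory("home") + "COverlap_params_example.txt";
File.saveString("substring=Scan\nextension=.nd", paramPath);   // save step
if (File.exists(paramPath)) {                                  // retrieval step
    lines = split(File.openAsString(paramPath), "\n");
    for (i = 0; i < lines.length; i++) {
        kv = split(lines[i], "=");
        print("parameter " + kv[0] + " = " + kv[1]);
    }
}
```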

The macro then displays a list of paths on which users are invited to click to open an MIP from the batch. Once an MIP is opened, users fill in various segmentation parameters in the interface. After they select a representative portion of the image and add it to the ROI Manager, the macro will crop the original image around this test ROI and perform the segmentation test for both channels automatically.

The segmentation of the two nuclear markers of interest performed by this second macro consists of the same steps as those that will be performed during the batch analysis with the third macro (Figure 3). Both macros duplicate the working channels and close the original image to preserve its integrity. Each subsequent step of the segmentation is sequentially performed on each duplicated channel.


Figure 3. Object-segmentation workflow.

Contrast enhancement is first performed on each slice of the z-stack by way of the histogram stretching method (also called normalization; Process > Enhance Contrast… in ImageJ), in which pixel values are recalculated so that their range is linearly “stretched” to the maximum range of the data type (for example, 0–65535 for 16-bit images), with 0.35% of pixels allowed to become saturated (to prevent a few outlying pixels from defeating the stretch). Each channel is then cropped around the ROI previously drawn by the user and subsequently converted to an 8-bit image by linearly scaling from min–max to 0–255, where the min and max values are the “Display range” in Image > Show Info, or the values displayed in the Image > Adjust > Brightness/Contrast tool.
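In macro form, these two operations correspond to two standard commands; the sketch below mirrors the description above (the option string follows the macro recorder’s usual output, with illustrative values):

```
// Per-slice histogram stretching with 0.35% saturated pixels
run("Enhance Contrast...", "saturated=0.35 normalize process_all");
// "8-bit" linearly rescales the current display range to 0-255
run("8-bit");
```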

Image enhancement processes are then applied: first, denoising is achieved using a 3D median filter18 (Plugins > 3D Suite > Filters > 3D Fast Filters), chosen for its edge-preserving qualities, followed by a 3D Gaussian filter (Process > Filters > Gaussian Blur 3D) to further smooth images. 3D filtering takes possible voxel anisotropy into account: if the voxels are non-cubic, the ratio between the x/y filter parameters and the z parameter should approximate the ratio between the x/y dimensions and the z dimension of the voxel. The rolling ball background subtraction is then performed (Process > Subtract Background…) on each slice of the stack in order to remove uneven background fluorescence, which would otherwise prevent a global threshold from working. As a rule of thumb, the selected radius of the rolling ball must be at least as large as the radius of the largest object to be detected in the image. Setting the parameter of any enhancement process to 0 causes the macro to skip that particular step.
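A hedged macro sketch of this enhancement chain follows. The Gaussian and rolling-ball commands are standard ImageJ calls, whereas the 3D Fast Filters option string follows what the macro recorder typically produces for the 3D Suite and should be verified against your installed version; all parameter values are illustrative (cf. Table 1):

```
medXY = 4; medZ = 2; gauss = 1; ball = 10;  // illustrative values (cf. Table 1)
// 3D median filter (3D Suite); option names follow the macro recorder's output
if (medXY > 0) run("3D Fast Filters", "filter=Median radius_x_pix=" + medXY + " radius_y_pix=" + medXY + " radius_z_pix=" + medZ);
// 3D Gaussian smoothing
if (gauss > 0) run("Gaussian Blur 3D...", "x=" + gauss + " y=" + gauss + " z=" + gauss);
// rolling-ball background subtraction applied slice by slice
if (ball > 0) run("Subtract Background...", "rolling=" + ball + " stack");
// setting a parameter to 0 skips that step, as in the toolset
```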

The images are then ready for segmentation. They are processed using the user’s numerical threshold parameter for each channel and the Plugins > 3D Suite > Segmentation > 3D Simple Segmentation plugin. The plugin is based on connected-component analysis, which clusters voxels above an intensity threshold based on their connectivity (here, using 26-connectivity, in which all 26 neighboring voxels, including diagonal neighbors, are considered connected) and assigns a value to each cluster (see Chapters 2 and 9 of Ref. 20). This method generates a labeled image in which each segmented object is attributed a gray level, and it has the advantage of including a minimum size filter that allows the exclusion of smaller particles falling below the size range of the nuclei of interest.
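For reference, a hedged sketch of this call is shown below; the option names follow the macro-recorded form of the 3D Simple Segmentation plugin but may differ between 3D Suite versions, so recording the call yourself is advisable:

```
threshold = 50;  // user-chosen numerical threshold (illustrative, cf. Table 1)
minSize = 250;   // minimum object size in voxels; max_size=-1 disables the upper bound
run("3D Simple Segmentation", "low_threshold=" + threshold + " min_size=" + minSize + " max_size=-1");
// output: a labeled image where each connected component gets its own gray level
```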

Users have the option to apply a watershed algorithm to segmented images using the Plugins > 3D Suite > Segmentation > 3D Watershed Split plugin, chosen for its ability to split merged objects based on the local maxima of their Euclidean distance map (i.e., the farthest points from the boundaries of objects, corresponding to their centers). The watershed radius should be approximately the radius of the objects of interest in a given channel.

Users can optionally exclude objects that touch the edges of the ROI. For this, the labeled image is binarized (objects appear in white and the background in black), and the ROI is drawn in white as a two-pixel line on each slice of the z-stack. The image is then converted back into a labeled image, where the ROI and any object that it touches are labeled as a single object. The largest object in the image is then removed, thereby removing the drawn ROI and the touching objects. Users should be advised that this algorithm cannot function properly if the image contains a very large object that comprises more voxels than the object composed of the ROI and the objects it touches. However, this is unlikely in the case of images used for nuclei segmentation and co-localization. The ROI should also be drawn in a single line, so that its border is continuous: any hollow in the ROI created with the “Alt” key will result in the object exclusion feature not functioning. As this feature is mostly useful to avoid detecting spurious co-localization events on the edge of the ROI due to partially segmented objects, we also advise users to disable it during Macro 2 to save processing time.
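The drawing part of this edge-exclusion trick can be sketched with standard macro calls (illustrative only; the subsequent relabeling and removal of the largest object rely on the 3D tools and are summarized as a comment):

```
// Draw the ROI border in white as a two-pixel line on every slice of the
// binarized stack, so that it fuses with any object it touches.
setForegroundColor(255, 255, 255);
run("Line Width...", "line=2");
for (s = 1; s <= nSlices; s++) {
    setSlice(s);
    run("Draw", "slice");          // draws the current ROI outline on this slice
}
// ...then relabel connected components and delete the largest label,
// which now comprises the ROI outline plus all touching objects.
```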

The result of this automated segmentation test can then be visualized. For both channels, users can compare the original (contrast-enhanced) image, the enhanced (filtered/background subtracted) image, and the composite visualization image displaying the test ROI and the outline of the segmented nuclei (green LUT) on the original image (red LUT). Users can browse the z-stack and toggle the segmentation outline on and off to assess whether their marker of interest has been properly segmented using the chosen parameters.

Parameters can be repeatedly tested until a suitable set is reached without having to reopen the image each time, and users can switch to another image without exiting the macro to test these parameters on a different sample. When a given set of parameters is deemed applicable to the majority of images in the batch, a “Save parameters” box can be ticked before running the test, which generates a file that the third macro retrieves to fill in the graphical user interface (GUI) for parameters automatically. Users can then proceed to batch analysis using Macro 3.

Macro 3: segmentation and 3D co-localization of two nuclear fluorescent markers

The third macro performs segmentation and co-localization analysis on the entire batch of images. Users indicate the batch folder containing all the images and MIPs generated with Macro 1 (or image subfolders where each holds an original image and its MIP), as well as a separate output folder (which may contain saved parameters from Macro 2) to store the results, ROIs, and verification images.

Users are prompted to check the targets, channels, and regions parameters that are retrieved from the file saved in Macro 2 and modify them if needed. Similarly, the macro retrieves the segmentation parameters saved in Macro 2 to fill the GUI, and users can check and modify them as needed. If the “Save parameters” box is ticked, the previously saved parameters are overwritten. In any case, when the analysis is started, all the detection parameters used for the batch, as well as the date and time of processing, are saved in the Results folder.

The macro then opens each MIP in turn, and users are prompted to draw one ROI per image, on which the analysis will be performed. As this process may take a long time, proportional to the number of images in the batch and the complexity of the region to be drawn, we advise users to plan ahead and divide their batch into several smaller, more manageable batches if they have a limited amount of time to dedicate to ROI drawing. If users wish to use the edge-exclusion option (see Macro 2), the ROI edges must be continuous (as opposed to having holes drawn in with the Alt key). Users can skip drawing an ROI on images that they do not wish to analyze, as the macro will later ignore images without a matching ROI. Once ROIs have been drawn for all images, the segmentation and co-localization analyses are performed. Alternatively, users can place an already created set of ROIs in the Results folder and uncheck the “Ask for ROI” option in the segmentation GUI. Each ROI must be named after its corresponding image (following the pattern “ImageName_ROI.zip”). This technique can also be used when ROIs have already been drawn once and the analysis must be run again.
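The “Ask for ROI” bypass boils down to a file-existence check. A minimal sketch (hypothetical names and paths, not COverlap’s actual code) could look like this:

```
// Hedged sketch: look for a pre-drawn ROI matching the image name and skip
// the image when none is found. All names below are hypothetical.
resultsDir = getDirectory("Choose the Results folder");
imageName = "Rat12_ACC_slice3";
roiPath = resultsDir + imageName + "_ROI.zip";
if (File.exists(roiPath)) {
    roiManager("reset");
    roiManager("open", roiPath);   // load the ROI for this image
} else {
    print("No ROI found for " + imageName + " - image skipped");
}
```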

For each image, both channels are first segmented using the same process as that described for Macro 2, before the object-based co-localization analysis is performed.

Our method initially defines objects that possess overlapping voxels as co-localized and allows the setting of a minimum percentage of voxels (relative to the total number of voxels in the object, i.e., a minimum volume overlap ratio) that need to be overlapping to be considered as co-localization. For two labeled images obtained from two channels A and B, the MultiColoc plugin of the 3D Suite computes all co-localizations (i.e., all intersections of voxels) between every possible A-B pair of objects. The macro then selects only pairs of objects for which, for at least one of the two objects, the volume overlap ratio meets or exceeds the overlap threshold set in the parameters. If the threshold is set at 50%, only pairs comprising at least one object for which at least 50% of its total volume overlaps the other object will be retained. Notably, this method allows multiple co-localizations to be preserved for a single object.
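Expressed as a plain macro function, the pair-retention rule reads as follows (a minimal sketch of the logic described above, not the MultiColoc plugin itself):

```
// A pair is kept when at least one of its two objects has an overlap
// ratio at or above the user-defined threshold.
function pairColocalizes(overlapVoxels, volumeA, volumeB, thresholdPercent) {
    ratioA = 100 * overlapVoxels / volumeA;  // % of object A's volume inside B
    ratioB = 100 * overlapVoxels / volumeB;  // % of object B's volume inside A
    return (ratioA >= thresholdPercent) || (ratioB >= thresholdPercent);
}
// example: 40 overlapping voxels, objects of 100 and 500 voxels, threshold 30%
print(pairColocalizes(40, 100, 500, 30));    // prints 1 (true): 40% of A overlaps B
```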

Depending on the hardware specifications, especially for large batches and/or large images, the analysis can take a long time. The progress is displayed in the Log window (image being processed/total number of images), while the macro runs in the background (images are not displayed). Each time an image is processed, the Results folder is updated, such that if the macro is interrupted before its completion, the results for images that have been successfully analyzed are not lost. Specifically, for each image analyzed, the results table is updated, a .zip file containing the set of 3D ROIs of each co-localizing object is saved, and two types of verification images are created.

The results table summarizes, for each analyzed sample: the parent folder it belonged to (which can be named after a relevant group or batch name), the sample’s name, its target region, its number of z-slices, the area of the drawn ROI, the total area and the volume analyzed, the number of segmented objects for channels A and B, the number of objects A co-localizing in B, the number of objects B co-localizing in A for a given overlap threshold, and the date and time of processing.

The first type of verification image is a .jpg image based on the MIP from Macro 1, which displays the outline of the analyzed ROI, as well as enlarged outlines around the found co-localization events. Its purpose is to facilitate the localization of these events for users, who can also obtain a general idea of their distribution in the analyzed ROI. The other type of verification image is a .tif image consisting of a z-stack with four binary channels: two (red and green) corresponding to each marker segmentation, one (blue) to the co-localization events detected by the macro, and one (grey) to the overlap of objects that were excluded by the co-localization threshold. The first three channels of this image are displayed as a composite, such that any detected co-localization conveniently appears in white through the superposition of the red, green, and blue LUTs. The purpose of this image is to enable users to verify both the segmentation and the co-localization analysis in the analyzed volume: major segmentation problems (such as an inappropriate threshold resulting in noisy segmentation) and spurious co-localization events should be apparent in this image. Users can also review the initially hidden fourth channel to assess the appropriateness of their overlap threshold.
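Assembling such a composite from four binary masks is essentially a one-line merge in macro form; the sketch below uses placeholder image titles (c1 red, c2 green, c3 blue, and c4 gray follow the Merge Channels conventions):

```
// Hedged sketch: merge four binary masks into the composite verification
// stack; titles in brackets are placeholders for the open mask images.
run("Merge Channels...", "c1=[maskA] c2=[maskB] c3=[coloc] c4=[excluded] create");
Stack.setActiveChannels("1110");  // show channels 1-3; keep channel 4 hidden
// voxels positive in the red, green and blue channels render as white
```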

In addition to these verification images, a set of 3D ROIs for co-localizing objects is also saved for each image containing co-localization events. Using the 3D Manager plugin,18 users can perform additional measurements on co-localized objects, such as intensity measurements on the original image and measurement of volume or other geometric properties.

Once the analysis is completed, a message is displayed in the Log window, and the text file containing the detection parameters is appended with the date and time of completion of the analysis. Users can then proceed to the review step at any time they deem convenient.

Macro 4: verification and correction of images

The fourth macro allows users to review their analyzed images and perform corrections, such as trimming of the ROI (e.g., to exclude an air bubble trapped in the mounting medium between the slide and coverslip), reslicing of the z-stack (to exclude out-of-focus slices), changing the overlap threshold for co-localizations, or correcting the volume estimation.

After inputting their Images and Results folders, users indicate the name of the targets and their respective channels, as well as the original overlap threshold with which the images have been analyzed. The macro then retrieves and displays the list of verification images from the Results folder. Users are invited to open a composite image from the list to review it. They can open the corresponding MIP visualization image at the same time to check where the detected co-localization events are located, but must close it before the next step. When they are ready to perform corrections, users click on the OK button and are presented with a list of corrections that may be applied to the image. If they confirm that they wish to perform corrections, the “Correction Options” GUI is displayed.

Users can perform several actions:

  • They can reshape the ROI to exclude parts of it, such as a bubble, tear in the tissue, or part of the ROI that is too close to the edge of the sample. Note that if users have failed to encompass everything they wished to analyze in the original ROI, this cannot be fixed at this point, and the image should be reprocessed using Macro 3.

  • They can reslice the z-stack by excluding the slices at the beginning and/or end of the stack (a minimal sketch of this operation follows this list). This is useful when the initial z-stack comprises out-of-focus slices that either generate spurious segmentation or are devoid of objects.

  • They can review overlapping objects: those above the chosen overlap threshold that are detected as co-localized appear on channel 3, and those under the threshold are located on channel 4. Users can change the overlap threshold if they are not satisfied (we advise, however, that this should be performed on all images).
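As referenced in the reslicing bullet above, this correction amounts to duplicating a substack between the first and last in-focus slices; a minimal sketch (with illustrative slice numbers) is:

```
// Keep only slices 3-18 of the open stack; channels are preserved.
firstSlice = 3; lastSlice = 18;      // illustrative user inputs
run("Duplicate...", "duplicate slices=" + firstSlice + "-" + lastSlice);
```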

Once the options are set, the macro performs all the above-mentioned corrections. Objects are quantified again with these new parameters, from the corrected composite image directly, without having to reopen the original image, and co-localizations are analyzed again for these objects. An important feature of this correction step is that it corrects the estimated analyzed volume.

The initially calculated volume corresponds to the area of the ROI multiplied by the number of z-slices and by the size of the z-step. In contrast, this algorithm wraps, for each slice, a convex hull selection around all detected cells regardless of the channel, and uses the sum of these selections’ areas multiplied by the size of the z-step in place of the original formula. This allows users to base the analysis only on the volume that contains objects and is especially useful when the biological sample is not perfectly flat on the microscope slide and does not fill the totality of the ROI in the first and/or last few slices of the z-stack (Figure 4). Users can ignore this option and rely on the “cookie-cutter” ROI-based volume estimation, which is also provided and considers the new ROI shape and the new number of slices (e.g., in the case of very sparse labeling, or if the exact same ROI is used for all images).
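A hedged macro sketch of this convex-hull volume estimate follows, assuming a binary mask of all detected objects is open (per-slice selection, convex hull, and area accumulation):

```
// Sum, over slices, the convex-hull area of all foreground objects,
// then multiply by the physical z-step to estimate the analyzed volume.
getVoxelSize(vw, vh, vd, unit);        // vd = z-step in calibrated units
totalVolume = 0;
for (s = 1; s <= nSlices; s++) {
    setSlice(s);
    run("Create Selection");           // selection around the foreground pixels
    if (selectionType() != -1) {
        run("Convex Hull");            // Edit > Selection > Convex Hull
        totalVolume += getValue("Area") * vd;
    }
    run("Select None");
}
print("Estimated volume: " + totalVolume + " " + unit + "^3");
```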


Figure 4. Correction of estimated volume with Convex Hull Algorithm.

Following corrections, new files containing “Adjusted” in their name are saved in the Results folder: the new ROI if it has been modified, the new visualization images (composite and MIP) showing the corrected analysis, and the new set of 3D ROIs for co-localizing events that users can later review or use for measurements.

One advantage of this review step is that although the whole process may take time for large batches of images, users do not have to review all images at once and can run the macro any number of times to review parts of the batch. Because the macro appends, retrieves, and displays the original Results file, users can easily see where they left off and continue their review work at a later time.

Use cases

There are two ways immunohistochemical analysis can help reveal the presence of an angiogenic phenomenon in situ. The first is by showing changes in the architecture of the capillary network, such as an increase in the number of branches or the length of its segments.21 The second is by detecting endothelial cell proliferation, which is necessary for the creation of new blood vessels. In this example, we used COverlap to identify proliferating endothelial cells in the ACC of rats that performed a memory task versus control animals. The goal of the experiment was to examine the survival kinetics of proliferating endothelial cells at various delays after the encoding of an associative olfactory memory.

Origin of samples

In accordance with the principles of the European community, the experimental protocols were validated by the local ethics committee (CEEA-50, APAFIS n°20108), the animal welfare committee of IMS (Integration from Material to System lab, UMR5218 CNRS/Université de Bordeaux, Talence, France, agreement n°A-33-522-5) and the French Ministry of Research.

Forty-eight two-month-old male Sprague-Dawley rats (Janvier Labs, Saint Berthevin, France) were assigned an identifying number and acclimated for one week before being randomly allocated to experimental and control groups (verifying that no weight difference existed between groups) and moved from group housing to single housing according to protocol recommendations.22 Animals were given unrestricted access to water and food pellets (A04, Safe). Animals were handled daily for at least 5 days after the first week of acclimation to minimize experimenter-induced stress. To reduce potential stress and neophobic responses, animals were then habituated to consume powdered chow (A04, Safe) from cups for 3 days before the experiment.22 The general health of the animals, as well as food and water intake, was monitored daily and scored throughout the experiment. The experiment was conducted during the light period (7 a.m. to 7 p.m., 100 lux) of the light-dark cycle.

Four groups of rats (3-6 months old), each divided into one experimental and one control group of six animals each, performed the initial phases of the Social Transmission of Food Preference task (as described in Ref. 22), in which rodents learn about the safety of a new food by smelling it on a conspecific’s breath. Group sizes were determined according to the protocol article.22 Rats were intraperitoneally injected three times with the proliferation marker 5-ethynyl-2’-deoxyuridine (EdU, CAS 61135-33-9, Boc Sciences, 60 mg/mL in saline solution with 9 g/L NaCl; 60 mg/kg of body weight per injection): once immediately after the encoding phase of the memory (morning), once in the evening, and once the following morning (injections were repeated to maximize the chances of detecting endothelial proliferation linked to memory encoding). Depending on their group, rats were euthanized 1, 3, 6, or 30 days after the last injection of EdU to assess cell proliferation and survival at various post-encoding delays. The euthanasia protocol consisted of a lethal intraperitoneal injection of sodium pentobarbital (EXAGON®, 200 mg/kg of weight) and lidocaine (LUROCAÏNE®, 20 mg/kg).

Rats were perfused intracardially at a slow rate (13 mL/min) to preserve the cerebrovascular endothelium, with 300 mL of heparinized saline solution (2.5 mL/L heparin (5000 UI/mL, Choay, Cheplapharm, France), 9 g/L NaCl, in 18 MΩ water), followed by 350 mL of cold fixative solution (40 g/L paraformaldehyde (Merck, 158127) in phosphate buffer (PB, containing 4.8 g/L monosodium phosphate (Merck, S0751) and 22.72 g/L disodium phosphate (Merck, S0876) in 18 MΩ water)). After extraction, the brains were left overnight in the fixative solution at 4°C before being sliced into 50 μm-thick sections with a vibratome. According to local regulations, bodies were placed in leak-proof sealed bags in a dedicated freezer before retrieval by a biological waste disposal company.

Histology

Four histology batches were generated, in which the different experimental conditions (post-encoding Delay (1, 3, 6, 30 days) × Group (Experimental, Control)) were evenly represented in each batch. All subsequent steps of the experiment (immunohistochemistry, image acquisition, and analysis) were performed blind to the experimental group of the animals (Experimental or Control). Eight brain slices per animal, with four spanning the ACC (a brain region involved in associative olfactory memory consolidation) and four spanning the parietal cortex (PC, not involved in this consolidation process), were stained for nuclei (DAPI), blood vessels (Tomato lectin), proliferating nuclei (EdU), and endothelial nuclei (ERG transcription factor). The detailed immunofluorescence protocol is available in the associated Zenodo repository.23

Image acquisition

Using a spinning-disk confocal microscope, we acquired one multichannel z-stack mosaic per slice, covering either the ACC or one side of the PC. Out of a possible 384 images (Delay (4) × Group (2) × Animals (6) × Brain region (2) × Brain slices (4)), we acquired a total of 360, because some slices were either too damaged or inadequately mounted on the slide. Image acquisition specifications are available in the Zenodo repository,23 in which we completed Rebecca Senft’s microscopy checklist24 and provided a link to FPBase25 displaying the spectra viewer for our experiment.

Results

Macro 1

In the parameters GUI of Macro 1 (Figure 5A), we indicated that our Image folder contained one subfolder per image, in which a file always named “Scan1.nd” was stored with its three corresponding .tif channel images. An example of a resulting MIP with the three chosen LUTs is shown in Figure 5B. Although only the first two channels (ERG and EdU labeling) were used for quantification, we chose to include the third channel with the Lectin staining on MIPs to facilitate the visualization of the vascular network.


Figure 5. A. Macro 1 graphical user interface. B. Example of maximum intensity projection (MIP) with zoomed-in insets. Red outline: anterior cingulate cortex (ACC) region of interest.

Macro 2

We used Macro 2 to determine the best set of parameters for the segmentation of ERG+ and EdU+ nuclei. Due to slight batch-dependent differences in background fluorescence, we chose batch-specific thresholds but kept the minimum size, filtering, and background subtraction parameters consistent across batches (Figure 6A). This was made possible by the fact that our experimental conditions were evenly distributed among batches, and we advise against such a choice if it introduces a potential bias in the experimental results. We assume that, despite our efforts to perform the staining and imaging protocol identically for all batches, a number of slight variations may have occurred that could explain this difference, such as pipetting error, temperature variations during heat-induced antigen retrieval, or the duration of mounting medium curing (7-10 days before the first day of imaging for one batch).


Figure 6. A. Segmentation graphical user interface for Macros 2 and 3. B. Output of a test on a small region of interest (cyan outline): Top panels: ERG channel, Bottom panels: EdU (5-ethynyl-2’-deoxyuridine) channel. Left: original, contrast-enhanced image. Middle: Original, contrast-enhanced image with Red Lookup Table and yellow outlines of segmented nuclei. Right: Filtered image with cyan outlines of segmented nuclei.

We performed multiple rounds of testing on multiple images. For each image, various small representative ROIs were selected to test the segmentation parameters. The output of such a test is presented in Figure 6B: toggling the outline on and off and examining the filtered image output helped us determine the optimal parameters for the segmentation of our targets. When we found the best compromise for the majority of images in a given batch, we ticked the “Save parameters” box before starting the last test. The parameters selected for each batch are listed in Table 1.

Table 1. Segmentation parameters for each histology batch.

| Batch | Marker | Threshold | Minimum size | Median filter (x, y, z) | Gaussian filter (x, y, z) | Background subtraction radius | Watershed |
|---|---|---|---|---|---|---|---|
| Batch 1 | ERG | 8 | 120 | 5, 5, 2 | 1, 1, 1 | 5 | No |
| Batch 1 | EdU | 50 | 250 | 4, 4, 2 | 1, 1, 1 | 10 | Yes |
| Batch 2 | ERG | 9 | 120 | 5, 5, 2 | 1, 1, 1 | 5 | No |
| Batch 2 | EdU | 50 | 250 | 4, 4, 2 | 1, 1, 1 | 10 | Yes |
| Batch 3 | ERG | 8 | 120 | 5, 5, 2 | 1, 1, 1 | 5 | No |
| Batch 3 | EdU | 55 | 250 | 4, 4, 2 | 1, 1, 1 | 10 | Yes |
| Batch 4 | ERG | 6 | 120 | 5, 5, 2 | 1, 1, 1 | 5 | No |
| Batch 4 | EdU | 50 | 250 | 4, 4, 2 | 1, 1, 1 | 10 | Yes |

Macro 3

We launched Macro 3 separately for each batch, with the set of segmentation parameters determined with Macro 2 (Table 1) and with an overlap threshold set at 30%, chosen to reliably exclude false positives while still accounting for size and shape differences between EdU and ERG labeling.

We drew an ROI for only 359 of the 360 images because the MIP inspection revealed a problem with the tile-stitching of one image in Batch 3. The processing time, number of images, and data size for each batch are listed in Table 2. One GB of images required a mean of 3 min 35 s of processing time, which corresponded to a mean of 11 min 19 s per image.

Table 2. Processing time, number of images and total data size of each histology batch.

| Batch | Processing time (hh:mm:ss) | Number of images | Size (GB) |
|---|---|---|---|
| Batch 1 | 16:02:38 | 92 | 271.49 |
| Batch 2 | 17:36:42 | 89 | 300.46 |
| Batch 3 | 18:13:48 | 87 | 293.93 |
| Batch 4 | 15:51:44 | 91 | 266.92 |
| Total | 67:44:52 | 359 | 1132.8 |
| Mean | 16:56:13 | 89.75 | 283.2 |

Macro 4

All 359 verification images were inspected using Macro 4. We filled in the corrections GUI (Figure 7A) for each image requiring adjustments and chose to perform the volume estimate correction (Figure 4) for all images regardless. Examples of adjusted verification images (.jpg MIP and .tif composite files) produced by this correction step are shown in Figure 7B.


Figure 7. A. Left: Example of composite image opened by Macro 4 (a sample region of interest (ROI) was used for better visibility, ERG objects appear in green, EdU (5-ethynyl-2’-deoxyuridine) in red and co-localizing objects in white). Right: Corrections graphical user interface. B. Example of adjusted verification images after corrections have been performed. Left: maximum intensity projection (MIP) with red overlay showing modified ROI excluding a tear and location of co-localization events (inset: middle top). Right: Z-slice 14 of the composite verification image (inset: middle bottom).

The “Comment” section was particularly useful for reporting problems with images. We also systematically used it to describe the reason for an ROI modification. We tried to be consistent in our qualification of identical issues or justifications across images in order to classify them after the batch was fully reviewed (e.g., using “Sparse” to describe sparsely labeled ERG or “Removed corpus callosum” for an ROI modification).

Discarded images

The review revealed that two animals presented a complete absence of EdU labeling, indicative of injection issues since other animals in the same histology batch were normally labeled; all acquired images for these animals (four of the ACC and two of the PC each) were excluded from further analysis.

Of the 347 remaining images, 15, all from the PC region, were also discarded upon first inspection: one because of an error in which the same slice had been acquired twice, and the others because the tissue was too damaged and/or improperly mounted to produce accurate segmentation. In general, tissue from the PC was more damaged than that from the ACC.

Inspection of the appended results file revealed that, due to the rather weak labeling of ERG, some of the remaining 332 images were annotated as “Sparsely labeled” in the optional comments section. To exclude sparsely labeled images in a reproducible way, we set an object-density threshold (objects/mm3) and excluded 31 images (23 PC, 9 ACC) for which the density of labeled ERG cells was less than 3500 ERG objects per mm3.

Corrections performed

Of the 301 z-stacks left, 292 (97%) were resliced to exclude out-of-focus optical slices, indicating that the range of the z-stack acquisition was consistently too generous. This type of information is crucial for planning future experiments, in which care will be taken to acquire a shorter range in z.

Eighteen (≈6%) ROIs were adjusted to exclude air bubbles in the mounting medium (6), damaged or torn tissue (5, as illustrated in Figure 7B), or part of the corpus callosum that was unduly encompassed in the initial ROI drawing (7).

The co-localization overlap threshold was deemed satisfactory and maintained at 30% for all images.

Experimental results

For the two targets (number of ERG+ and number of EdU+ objects) and for the two possible types of co-localization events (number of ERG+ objects in EdU+ objects and number of EdU+ objects in ERG+ objects), we calculated the density of objects per mm3 of analyzed tissue (using the corrected volume estimate) for all valid slices, and averaged them to obtain a mean density per region for each animal. The results are shown in Figure 8.


Figure 8. Mean densities of objects per mm3 with standard error of the mean (SEM).

Individual data points correspond to mean of slices per animal.

We performed Šídák’s multiple comparison test using GraphPad Prism for each brain region to determine whether, for each post-encoding delay, the experimental and control groups had significantly different object densities. None of the comparisons revealed significant differences between the groups (Table 3).

Table 3. Multiple comparisons test for densities of objects per mm3.

| Target | Comparison | Delay | Predicted (LS) mean diff. | 95.00% CI of diff. | Below threshold? | Summary | Adjusted P value | SE of diff. | N1 | N2 | t | DF |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ERG means | EXP ACC - CTRL ACC | D1 | 285.4 | -1125 to 1696 | No | ns | 0.9744 | 539.4 | 5 | 5 | 0.5291 | 38 |
| | | D3 | -353.1 | -1641 to 934.3 | No | ns | 0.9256 | 492.4 | 6 | 6 | 0.7171 | 38 |
| | | D6 | -34.91 | -1322 to 1253 | No | ns | >0.9999 | 492.4 | 6 | 6 | 0.07090 | 38 |
| | | D30 | 37.39 | -1250 to 1325 | No | ns | >0.9999 | 492.4 | 6 | 6 | 0.07592 | 38 |
| | EXP PC - CTRL PC | D1 | 200.2 | -860.0 to 1260 | No | ns | 0.9797 | 403.2 | 5 | 5 | 0.4966 | 34 |
| | | D3 | 266.2 | -748.9 to 1281 | No | ns | 0.9350 | 386.0 | 5 | 6 | 0.6897 | 34 |
| | | D6 | 300.8 | -759.4 to 1361 | No | ns | 0.9154 | 403.2 | 5 | 5 | 0.7461 | 34 |
| | | D30 | 592.6 | -422.5 to 1608 | No | ns | 0.4375 | 386.0 | 6 | 5 | 1.535 | 34 |
| EdU means | EXP ACC - CTRL ACC | D1 | 10.29 | -20.64 to 41.22 | No | ns | 0.8614 | 11.83 | 5 | 5 | 0.8698 | 38 |
| | | D3 | 25.04 | -3.197 to 53.27 | No | ns | 0.0997 | 10.80 | 6 | 6 | 2.318 | 38 |
| | | D6 | -5.001 | -33.23 to 23.23 | No | ns | 0.9843 | 10.80 | 6 | 6 | 0.4631 | 38 |
| | | D30 | 1.278 | -26.95 to 29.51 | No | ns | >0.9999 | 10.80 | 6 | 6 | 0.1184 | 38 |
| | EXP PC - CTRL PC | D1 | 21.94 | -26.59 to 70.47 | No | ns | 0.6712 | 18.45 | 5 | 5 | 1.189 | 34 |
| | | D3 | 31.63 | -14.84 to 78.09 | No | ns | 0.2909 | 17.67 | 5 | 6 | 1.790 | 34 |
| | | D6 | -21.15 | -69.68 to 27.39 | No | ns | 0.6999 | 18.45 | 5 | 5 | 1.146 | 34 |
| | | D30 | 7.938 | -38.53 to 54.40 | No | ns | 0.9860 | 17.67 | 6 | 5 | 0.4493 | 34 |
| ERG in EdU means | EXP ACC - CTRL ACC | D1 | 1.67 | -0.703 to 4.04 | No | ns | 0.2640 | 0.906 | 5 | 5 | 1.84 | 38 |
| | | D3 | 0.600 | -1.56 to 2.76 | No | ns | 0.9227 | 0.827 | 6 | 6 | 0.725 | 38 |
| | | D6 | -0.306 | -2.47 to 1.86 | No | ns | 0.9933 | 0.827 | 6 | 6 | 0.370 | 38 |
| | | D30 | -0.382 | -2.55 to 1.78 | No | ns | 0.9845 | 0.827 | 6 | 6 | 0.462 | 38 |
| | EXP PC - CTRL PC | D1 | 1.585 | -3.224 to 6.394 | No | ns | 0.8636 | 1.829 | 5 | 5 | 0.8666 | 34 |
| | | D3 | 0.7648 | -3.840 to 5.369 | No | ns | 0.9874 | 1.751 | 5 | 6 | 0.4368 | 34 |
| | | D6 | 0.4022 | -4.407 to 5.211 | No | ns | 0.9991 | 1.829 | 5 | 5 | 0.2200 | 34 |
| | | D30 | 0.3432 | -4.261 to 4.947 | No | ns | 0.9994 | 1.751 | 6 | 5 | 0.1960 | 34 |
| EdU in ERG means | EXP ACC - CTRL ACC | D1 | 2.097 | -0.2390 to 4.433 | No | ns | 0.0935 | 0.8936 | 5 | 5 | 2.347 | 38 |
| | | D3 | 0.6703 | -1.462 to 2.803 | No | ns | 0.8840 | 0.8157 | 6 | 6 | 0.8217 | 38 |
| | | D6 | -0.3061 | -2.439 to 1.827 | No | ns | 0.9929 | 0.8157 | 6 | 6 | 0.3753 | 38 |
| | | D30 | -0.3820 | -2.515 to 1.751 | No | ns | 0.9836 | 0.8157 | 6 | 6 | 0.4683 | 38 |
| | EXP PC - CTRL PC | D1 | 1.431 | -3.447 to 6.310 | No | ns | 0.9056 | 1.855 | 5 | 5 | 0.7716 | 34 |
| | | D3 | 0.7648 | -3.906 to 5.435 | No | ns | 0.9881 | 1.776 | 5 | 6 | 0.4306 | 34 |
| | | D6 | 0.4022 | -4.476 to 5.280 | No | ns | 0.9992 | 1.855 | 5 | 5 | 0.2168 | 34 |
| | | D30 | 0.3907 | -4.280 to 5.061 | No | ns | 0.9991 | 1.776 | 6 | 5 | 0.2200 | 34 |

Discussion

While every component of a bioimage analysis workflow is crucial, every choice made to select one option over another contributes to making the analysis overly tailored to the specific problem that it attempts to solve. For both segmentation and co-localization analysis, we briefly discuss such choices as well as some possible alternatives. We then suggest that the code can also be used as a template or source of inspiration, and the inadequate parts adjusted to suit the different requirements of other datasets.

Segmentation

Image enhancement

Normalization is a critical choice and may not be appropriate for all types of analysis. First, care must be taken that the relative intensity distribution is nearly identical for all images in the experiment. Second, it should be noted that while normalization may be convenient for visualization and segmentation purposes, it represents a loss of information (in our case, 0.35% of pixels are allowed to become saturated) and may alter results if further processing steps are based on absolute gray values.26 Here, normalization facilitates nuclei segmentation despite varying illumination along the depth of the samples.26 Likewise, users may not want to downscale images acquired with a higher bit-depth to 8-bit, as this also results in a loss of information. In our case, however, it allows us to work with lighter images, which speeds up processing without compromising segmentation quality. Additionally, the narrower range of gray values simplifies the testing and selection of a suitable intensity threshold.

We have included three steps in the workflow that are classically used to enhance images for the purpose of segmentation (see Chapter 3 in Ref. 20): two spatial nonlinear (median) and linear (Gaussian) filters to denoise images and smooth out irrelevant details, and Fiji’s rolling ball algorithm (based on Ref. 27) to eliminate background fluorescence. Taking advantage of the parameter GUI, a step can be omitted by setting its parameter to zero, making it possible to apply each step alone or in combination with one or two other steps. Although users can choose the parameters, the order in which the steps are performed cannot be modified without modifying the code. If this workflow proves ineffective in enhancing more complex images, users may want to consider alternative denoising techniques (reviewed in Refs. 28, 29) and illumination correction methods.30–32

Segmentation method

Nucleus segmentation in 3D presents a much greater challenge than in 2D33 but provides several advantages. In contrast to segmenting the MIP of the stack, it prevents false positives that can occur in the co-localization analysis when nuclei are located atop one another in z. Compared to limiting the analysis to a single z-plane, it maximizes the quantity of information analyzed by allowing the segmentation of at least two layers of nuclei along the z-axis (Figure 9). This has the added benefit of preserving the three-dimensional geometry of the nuclei and better describing their spatial relationship in situ.


Figure 9. 3D representation of segmented objects and co-localization from the example region of interest in Figure 7A.

We implemented the 3D Simple Segmentation of the 3D Suite,18 which consists of thresholding pre-processed images, performing connected component analysis, and filtering out objects by size. When necessary, the 3D watershed algorithm of the 3D Suite was applied to separate the touching objects. To avoid excluding touching nuclei that the watershed algorithm might not have separated, we solely implemented the minimum size and excluded the maximum-size filter. We employed a numerical threshold, as suggested by the plugin’s interface, which we empirically found performed satisfactorily on our preprocessed images. Nevertheless, if an automatic threshold is preferred, minor modifications to the code would permit it, because the plugin can operate on a previously obtained binary mask by setting the threshold at 1. If both numerical and automatic thresholds fail to consistently extract objects, users can attempt to implement the iterative thresholding algorithm of the 3D Suite.18

Our segmentation method would certainly exhibit limitations for densely packed or overlapping nuclei that represent a difficult segmentation case, even for more advanced tools.33 In our experiment, EdU+ nuclei were sparsely distributed and often appeared in doublets which the watershed algorithm was able to properly split when they were touching. In addition, the endothelial nuclei were sufficiently spaced to be segmented satisfactorily. However, owing to the elongated shape of ERG+ objects, we abstained from applying the watershed algorithm to prevent their over-segmentation. We recognize that our result for the density of ERG+ objects might be slightly underestimated because some touching objects were counted as a single entity. While this does not hinder the co-localization analysis, it makes the density of EdU+ in ERG+ objects a more dependable indicator than the density of ERG+ in EdU+, because co-localizing EdU+ objects are still expected to exceed the overlap threshold, while touching ERG+ objects counted as a single object may not.

Deep learning (DL) methods have revolutionized the field of bioimage analysis and are quickly becoming the gold standard for classification, denoising or segmentation.34 For the latter, some DL solutions are now accessible to biologists with limited computational skills (listed in recent reviews33,35,36), but most are difficult to set up and use, especially for 3D segmentation.33 Moreover, user-friendly platforms such as deepImageJ37 rely on pretrained models, which are currently not widely available in 3D. Training new models from scratch requires extensive computing power in the case of 3D data as well as large annotated datasets that are time-consuming and difficult to produce.38 While we wager that these challenges will soon be overcome given the rapid expansion of the field, we chose to use a classical method that does not have such heavy hardware or annotation requirements.

Co-localization

Two types of strategies are used to assess co-localization. Pixel-based approaches consider the image as a whole and evaluate the correlation between the signal intensities of two channels. Their main drawback is that they are not informative about the location of co-localization events. Conversely, object-based approaches rely on segmentation and allow quantification of the degree of co-localization between objects. Several reviews39–41 can inform users on the various metrics and tools associated with different types of co-localization analysis, and Cordelières and Zhang13 offer a guideline decision tree to help determine which type is most suited to a particular experiment.

In object-based methods, segmented objects can be described by two types of centers: the centroid or geometrical center, which relates to the shape of the object, and the intensity center or center of mass, which considers the distribution of fluorescence intensities within the object. When objects have sizes close to the optical resolution, their centers are commonly used to assess co-localization, such as is possible with the popular Fiji plugin JACoP.42 For example, two objects can be considered co-localized if the distance between their centers is smaller than the optical resolution (distance between centers approach).40 Another approach makes it possible to deal with size heterogeneity when resolution-limited objects in channel A co-localize with larger objects in channel B: in this case, one can quantify the number of centers from A that fall inside the volume of objects in B (centers-particles coincidence approach).43

Reducing objects larger than the optical resolution to the coordinates of their centers may lead to underestimation of the co-localization events and does not preserve information about the geometry or intensity of objects or the extent of co-localization. A more straightforward approach in this case is to compute the overlap (physical or intensity-based) between the objects. DiAna,44 another comprehensive Fiji plugin, uses physical overlap to assess co-localization and offers several segmentation algorithms as well as various measurements of the properties of objects (such as volume or mean intensity) and distances between them. Despite its extensive functionalities, this plugin is not fully macro recordable in Fiji, making it suboptimal for automating our batch analysis and the creation of verification images.

Therefore, we used the MultiColoc plugin of the 3D Suite18 in our workflow. A notable advantage of this plugin is that it allows multiple co-localizations to be preserved for a single object (e.g., several subcellular components per cell). Similar to Zhang and Cordelières,12 who used the volume overlap method in their demonstration, we implemented an overlap threshold. Owing to the filtering and segmentation process, the labels can be slightly dilated compared with real objects. With an appropriately set overlap threshold, false positives for objects that are in close proximity but that slightly overlap owing to label expansion can be excluded. Contrary to Zhang and Cordelières’s workflow, ours does not assume that objects in one channel are consistently smaller than those in the other, and all pairs of objects where at least one meets the volume overlap criteria are counted as co-localized.

The co-localization metric extracted by our workflow is the number of co-localizing objects (for A in B and B in A). We used it to compute densities of co-localizing objects per volume of tissue analyzed, but users can also take advantage of the object quantification for each channel to compare the ratio of co-localizing objects over the total number of objects instead. Overall, we designed our workflow to be as conservative as possible to suit various measurement requirements without having to perform the entire analysis again. 3D ROIs for co-localizing objects are saved for users to perform measurements as needed, using the 3D Manager and the original image. In addition, while the composite image is primarily a visualization tool, it is also two clicks (or lines of code) away from being split into binary masks and turned again into label images by the 3D Manager, for each channel and the co-localized and discarded overlapping objects. Thus, the possibility of performing any kind of supplementary measurement on the properties of objects or their relationships is preserved.

Reproducibility and generalizability

With COverlap, we have focused our efforts on several points that contribute to creating a reproducible bioimage analysis workflow.

First, we strived to make our workflow user-friendly. We kept it contained in a single software (or “collection”45) to simplify its use and enhance its accessibility,2 used GUIs whenever possible to collect user input, and wrote detailed documentation to support its implementation. We also provided the code itself, which we organized and commented to improve its readability.

Second, we attempted to make the organization of the workflow time-efficient. We divided it into interactive and automated steps by isolating the parameter-testing and reviewing phases from the batch analysis. This ensures that no manual input alters the reproducibility of the main analysis and allows users to perform other tasks while it runs. It also gives users the freedom to organize the time they allocate to the testing and reviewing steps as they need.

Third, we aimed to achieve traceability and transparency. We attempted to produce reproducible results by automatically recording all parameters used for a given analysis. We also implemented a method for users to review the analysis and perform corrections in an automatically documented (and optionally user-commented) fashion. The conservative manner of recording results, ROIs, and visualization images also provides a measure of scalability to the toolset: not only can users verify results again at any time after they have been obtained, but they can also reuse the produced files to perform new measurements (such as measuring intensities on the original image, or performing spatial statistics on the segmented objects) without having to perform the whole analysis again.

Finally, we sought to make the toolset flexible. At a lower level, it is fairly parameterizable, so that absolute non-coders can try it in a compatible experiment; for example, it may be used to assess the proliferation of other cell types with nuclear markers, such as Olig2 for oligodendrocytes and their precursors,46 or to investigate neuronal activation with markers for NeuN (RBFOX3) and the transcription factor c-Fos. At a higher level, the code can serve as a modifiable template or even as a skeleton, from which users (and especially scripting beginners) can reuse the general organization or alter specific functions to suit their needs, such as implementing a different segmentation style or type of co-localization analysis; a minimal example of such a skeleton is sketched below.
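To give a flavor of what such a skeleton might look like, the sketch below shows a generic batch loop in the ImageJ macro language with the segmentation step isolated in a single swappable function. It is a hypothetical outline, not COverlap's actual code; segmentChannel() uses a simple Otsu threshold purely as a placeholder.

```
// Minimal sketch of a batch "skeleton": the loop stays fixed while the
// segmentation function can be swapped out. Not the toolset's actual code.
macro "Batch Skeleton" {
    dir = getDirectory("Choose the image folder");
    files = getFileList(dir);
    for (i = 0; i < files.length; i++) {
        if (endsWith(files[i], ".tif")) {
            open(dir + files[i]);
            segmentChannel();   // replace this call to change the segmentation style
            close("*");         // close all images before the next iteration
        }
    }
}

function segmentChannel() {
    // placeholder segmentation: a global Otsu threshold on the whole stack
    setAutoThreshold("Otsu dark stack");
    run("Convert to Mask", "method=Otsu background=Dark black");
}
```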

Software availability

Source code is available from: https://github.com/mambroset/COverlap-a-Fiji-co-localization-toolset

Archived source code available from: https://zenodo.org/doi/10.5281/zenodo.10161141.19

License: GPL 3.0

Fiji (RRID:SCR_002285)16 and the MorphoLibJ17 and 3D ImageJ Suite (RRID:SCR_024534)18 plugins required to run the toolset are freely available through the provided links. To install these plugins, users must enable the following update sites in Fiji: IJPB-plugins, 3D ImageJ Suite, Java8, and ImageScience.

Any question of potential interest to other users can be posted on the image.sc forum and linked to the author's username, @Melow.

Figures were made with Adobe Illustrator (RRID:SCR_010279), with Draw.io (RRID:SCR_022939) used for the diagrams in Figure 1, Figure 2, and Figure 3. Data were formatted with Microsoft Excel 2019 (RRID:SCR_016137), and statistical analysis was performed using GraphPad Prism 9 (RRID:SCR_002798).
