Keywords
Image Publication, FIJI, Good principles of figure design, Beginner's workflow, Image processing, open source, Visualization, Image analysis
This article is included in the Research on Research, Policy & Culture gateway.
This article is included in the NEUBIAS - the Bioimage Analysts Network gateway.
This article is included in the Bioinformatics gateway.
Every day, around 2000 biomedical articles appear, 500 of which contain images. These published images provide new insights, but the number of problematic images also grows daily. While intentionally manipulated images are rare1,2, erroneous handling of images is more common. A further problem is that methods sections often report insufficiently on image acquisition and processing3. Lastly, images frequently have low legibility, as only 10–20% of published images provide all key information (annotation of color/inset/scale/specimen)4. In the long run, problematic images may undermine trust in scientific data and, when published in emerging image archives, reduce the value of such repositories5,6.
Today’s scientists face rapidly evolving technologies and employ many methodologies, with microscopy and image analysis7 just one among many. Problematic images thus partially arise from: 1) a lack of training, as ethical and legible processing of microscopy data is not systematically taught3; 2) a lack of local expertise, as imaging facilities are restricted to a few research hubs; and 3) a lack of guidance, as publishers have established guidelines for handling image forgeries8–10, but actionable and clear instructions for legible image publishing are missing.
Here, we introduce an image processing workflow for presenting images effectively and ethically. The step-by-step workflow enables novice users with no image processing experience, as well as occasional microscopists with no intention of specializing in image processing, to take the first steps towards publishing truthful and legible images.
Obtaining high-quality bioimages starts with optimal microscope settings, which must be adapted to the subsequent quantitative or qualitative analyses11–16. After acquisition, bioimages can be processed and prepared for publication using the workflow below (Figure 1), which is visually summarized in cheat-sheet style (Figure 3 and Figure 4). Both are based on Fiji17, an open-source, free image analysis program for bioimages.
Load images into Fiji and make sure the metadata (see Table 1: Glossary), such as the scale, are correct. Save images under a new name to keep the raw images untouched. After processing, save images in TIFF format, which preserves all information and enables measurements. For presentation, save images in PNG format, which irreversibly merges the image with annotations and saves multichannel images as 24-bit RGB. Beware of incorrect or unintentional bit-depth conversions18.
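Why bit-depth conversions matter can be sketched in plain Python (hypothetical pixel values, not Fiji code; Fiji performs such conversions via Image > Type): naive truncation of 16-bit values to 8 bits wraps the values and destroys the data, whereas a proper conversion rescales the full range.

```python
# Hypothetical 16-bit gray values (range 0-65535)
pixels16 = [100, 5000, 30000, 65535]

# Naive truncation to 8 bits wraps values modulo 256 -- data is destroyed
naive8 = [p % 256 for p in pixels16]

# Rescaling maps the full 16-bit range onto 0-255 instead
scaled8 = [round(p / 65535 * 255) for p in pixels16]
```

Note how the brightest pixel (65535) and a dark pixel can collide after naive truncation, while rescaling preserves their order.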
After opening, images with a large gray-value range may appear black11. To properly display such data19, adjust the brightness/contrast by pressing the auto button or using the sliders. When comparing images, we recommend applying the same fixed values via the set button. This linear intensity adjustment is acceptable as long as key features are not obscured. Pressing apply, or saving images as PNG, changes the intensity range irreversibly and makes images unsuitable for intensity measurements. Non-linear adjustments, e.g. histogram equalization or gamma correction, must be explained20,21.
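Conceptually, such a linear display adjustment maps a chosen gray-value window onto the display range without altering the underlying data. A minimal sketch in plain Python (illustrative values; `adjust_display` is a hypothetical helper, not part of Fiji):

```python
def adjust_display(pixels, lo, hi):
    """Linearly map gray values in [lo, hi] to the display range [0, 255]."""
    out = []
    for p in pixels:
        p = min(max(p, lo), hi)                       # clip to the chosen window
        out.append(round((p - lo) / (hi - lo) * 255))
    return out

# Use the SAME fixed lo/hi for every image in a comparison
display = adjust_display([10, 50, 100, 200], lo=10, hi=200)
```

Because the mapping is linear, relative intensity differences within the window are preserved, which is why this adjustment is acceptable for display.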
Often, further processing is necessary. Be familiar with these methods in order to decide whether subsequent image intensity quantification is still truthful. A maximum intensity projection is acceptable for visualizing a 3D stack, but intensity measurements should use ‘sum’ or ‘average’ projections. Similarly, noise is problematic for visualizations and is reduced with linear filters such as a Gaussian blur. Clearly state the image processing methods used12,20.
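The difference between projection types can be illustrated with a toy z-stack in plain Python (in Fiji, projections are available under Image > Stacks > Z Project; the values here are made up):

```python
# Toy z-stack: 3 z-planes, each a flat list of pixel values
stack = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 0, 1, 2],
]

# Maximum intensity projection: acceptable for visualization only
max_proj = [max(col) for col in zip(*stack)]

# Sum and average projections preserve intensity relations for measurement
sum_proj = [sum(col) for col in zip(*stack)]
avg_proj = [s / len(stack) for s in sum_proj]
```

The maximum projection keeps only the brightest plane per pixel, so total intensity is no longer comparable across pixels; sum and average projections retain it.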
Image rotation sometimes helps to improve comparisons, reduce unnecessary information, or align specimens. Rotation, but also decreasing or increasing the size of images in pixels, may degrade image quality through interpolation. Such loss of information may be acceptable for visualization, but quantification and measurements must be performed beforehand20,21.
Often, larger fields of view are captured than are required. Cropping is then not only permissible but necessary to focus the reader on the relevant result. In contrast, it is not ethical to crop out data that would change the interpretation of the experiment, or to “cherry-pick” data20,21. When a larger field of view and a magnified detail (‘inset’) need to be shown side by side, indicate the inset’s position in the original image.
Scientific cameras capture each wavelength (channel) as a grayscale image. If a single channel is shown, grayscale display is favorable, as it offers the best contrast on a black background. To visualize several channels of a specimen (e.g. in colocalization studies), encode the channels with different colors. A look-up table (LUT) determines how gray values are translated into color values. When selecting colors, always consider visibility for color-blind readers and the conventions of the scientific field. Apply color schemes consistently.
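How color-coded channels combine into one image can be sketched as additive RGB compositing in plain Python (hypothetical channel values already scaled to 0–255; cyan/magenta is one color-blind-friendly pairing, and `composite` is an illustrative helper, not a Fiji function):

```python
def composite(ch1, ch2, color1=(0, 255, 255), color2=(255, 0, 255)):
    """Blend two grayscale channels into RGB: ch1 as cyan, ch2 as magenta."""
    rgb = []
    for v1, v2 in zip(ch1, ch2):
        pixel = tuple(min(255, round(v1 * c1 / 255 + v2 * c2 / 255))
                      for c1, c2 in zip(color1, color2))
        rgb.append(pixel)
    return rgb

# Pixels with signal in only one channel keep that channel's color;
# pixels with signal in both channels blend towards white
pixels = composite([0, 255, 128], [255, 0, 128])
```

With a cyan/magenta scheme, colocalization appears as a bright, desaturated blend rather than relying on red/green, which many color-blind readers cannot distinguish.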
Images represent physical dimensions and can depict scales ranging from nanometers to millimeters, which is often not obvious22; providing scale information is therefore essential. Further, annotate what each color and symbol in an image represents.
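Scale information derives directly from the pixel size stored in the image metadata. As a small worked example (the pixel size here is an assumed value, not from the article's data):

```python
pixel_size_um = 0.1   # microns per pixel, read from image metadata (assumed value)
scalebar_um = 10      # desired physical length of the scale bar

# Length of the scale bar in pixels
scalebar_px = round(scalebar_um / pixel_size_um)
```

This is also why checking the metadata in the first workflow step matters: a wrong pixel size silently produces a wrong scale bar.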
We tested the workflow on fluorescently-stained microscope images of Drosophila egg chambers (RRID:BDSC_5905;23) and the HeLa (RRID:CVCL_0030) ImageJ sample image24. To generate a “poor” image example, we processed the raw microscope images minimally, only converting the bit depth from 16-bit to 8-bit and retaining the default color schemes. We added no annotations and performed no cropping, rotation, or specific brightness/contrast adjustments, as these are often lacking in poorly visualized images4. We thus simulated images as they are typically “processed” in the majority of current publications4. For a qualitative assessment, we tested image visibility for color-blind (deuteranopia) audiences using the color blindness simulator (RRID: SCR_018400;25).
Using our example microscope images, we qualitatively compared the readability of images processed with or without the workflow described. Images for which the steps of the workflow were implemented contained the key information, were cropped to maximize focus, and were sufficiently annotated (color channels, scale, organism), while the minimally processed images that did not follow the workflow had “poor” readability (Figure 2A, B). Furthermore, we demonstrated that images processed according to our workflow (‘color’) are still accessible to color-blind readers (Figure 2C).
A. Schematic of typical errors in published bioimages and improved version of an exemplary image without compression artifacts, and with accessible color-code, annotation, and scale. B. Example images poorly visualized and after processing with the workflow presented here. C. Color blind (deuteranopia) rendering of the images shown in B. Poorly visualized images are inaccessible to color blind readers.
Our workflow is based on the open-source software Fiji, but its principles are applicable to other software. The workflow steps and accompanying suggestions for image presentation are available as accessible “cheat sheets” (Figure 3 and Figure 4) for wide distribution and adaptation to more specific needs.
After completing the workflow, images may be assembled into figures and legends added26. Laying out images on a page can be done with design software or with Fiji plugins27,28. Consider the final dimensions and orientation (landscape/portrait) and save figures for print at 300 dots per inch (DPI). This workflow is iterative, and feedback from colleagues helps to identify possible hurdles.
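The required pixel dimensions follow directly from the target print size. For instance, a figure intended to print 3.5 inches wide (a hypothetical single-column width, chosen for illustration) at 300 DPI needs:

```python
width_inch = 3.5   # hypothetical single-column print width
dpi = 300          # print resolution commonly requested by journals

# Minimum image width in pixels for crisp print reproduction
required_width_px = round(width_inch * dpi)
```

Checking this before layout avoids upsampling images later, which would only interpolate and not add detail.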
If followed, the workflow helps to avoid common problems of published 2D images, and its principles are also applicable to 3D stacks and movies. Indeed, lack of truthful scientific communication and reproducibility are among the biggest problems faced by science today29, and considering that an estimated 500 publications with images appear daily, improving image quality could have a profound impact in tackling this issue.
HeLa cell test images are available at: https://imagej.nih.gov/ij/images/hela-cells.zip. D. melanogaster egg chamber images are available on Open Science Framework.
Open Science Framework: Effective image visualization for publications – a workflow using open access tools and concepts. https://doi.org/10.17605/OSF.IO/SDPZK30.
This project contains the following extended data:
- Processing_images_cheatsheet_SchmiedJambor.png (printable image of cheat sheet 1)
- SchmiedJambor_Figures3_Cheatsheet1.eps (modifiable version of cheat sheet 1)
- Publishing_images_cheatsheet_SchmiedJambor.png (printable image of cheat sheet 2)
- SchmiedJambor_Figures4_Cheatsheet2.eps (modifiable version of cheat sheet 2)
Data are available under the terms of the Creative Commons Zero "No rights reserved" data waiver (CC0 1.0 Public domain dedication).
We thank Robert Haase, Hella Hartmann, Florian Jug, Anna Klemm, and Pavel Tomancak for constructive comments on the manuscript and cheat sheets. Font Awesome icons were used in preparing the figures.
This publication was supported by COST Action NEUBIAS (CA15124), funded by COST (European Cooperation in Science and Technology).
Is the rationale for developing the new software tool clearly explained?
Yes
Is the description of the software tool technically sound?
Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others?
Partly
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?
Partly
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article?
Yes
References
1. Jonkman J, Brown CM, Wright GD, Anderson KI, et al.: Tutorial: guidance for quantitative confocal microscopy. Nat Protoc. 15(5): 1585–1611.

Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Bioimage analysis (David Barry) and cell and developmental biology (Georgina Fletcher)
Is the rationale for developing the new software tool clearly explained?
Yes
Is the description of the software tool technically sound?
Yes
Are sufficient details of the code, methods and analysis (if applicable) provided to allow replication of the software development and its use by others?
Yes
Is sufficient information provided to allow interpretation of the expected output datasets and any results generated using the tool?
Yes
Are the conclusions about the tool and its performance adequately supported by the findings presented in the article?
Yes
Competing Interests: No competing interests were disclosed.
Reviewer Expertise: Cell biology
Version 2 (revision): 18 Feb 2021. Version 1: 26 Nov 2020.