The advent of accessible, easy-to-use high-content screening systems has brought many benefits to life science. However, this increased access to large amounts of images also brings challenges, for example in data processing, statistical analysis and automation. Addressing these challenges is essential to turn large image sets into reliable quantified data suitable for robust statistical analyses.
Automated image analysis and quantitation are a key step in translating images into numbers for statistical analysis. AI-based methods can benefit a range of life science applications that require data from microscope images and in which image analysis plays a critical role. However, they are still perceived to be complex and time-consuming to set up. Developments in image analysis software, such as deep learning based on convolutional neural networks, are now making automated analyses easier. They allow researchers to quickly train systems, for example to automatically capture the critical information needed for improved segmentation analysis, thus providing a fast route to powerful data.
New possibilities in image quantification by AI
A central element of achieving powerful quantifications is analyzing large image sets to extract the data. As these analyses need to be fast, reliable and unbiased, many of them lend themselves well to automation. Although automated image analysis has been around for many years, simple thresholding-based methods often struggle to detect objects that are clearly visible to the human eye. The introduction of artificial intelligence (AI) has enabled analysis software to take on increasingly difficult tasks that were previously impossible using thresholding methods, thereby giving researchers more options for quantifying images in many areas of research.
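To make the conventional baseline concrete, the sketch below implements classic global Otsu thresholding from scratch in NumPy on a synthetic image. It works when an object is much brighter than its background, which is exactly the assumption that fails in the difficult cases described above. The function name and the test image are invented for this illustration.

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu's method: choose the intensity cut that maximizes
    the between-class variance of the two resulting pixel groups."""
    hist, bin_edges = np.histogram(image, bins=256)
    hist = hist.astype(float)
    total = hist.sum()
    bin_mids = (bin_edges[:-1] + bin_edges[1:]) / 2
    sum_all = (hist * bin_mids).sum()
    best_t, best_var = bin_mids[0], 0.0
    w0 = 0.0   # weight (pixel count) of the lower class
    sum0 = 0.0 # intensity sum of the lower class
    for i in range(255):
        w0 += hist[i]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += hist[i] * bin_mids[i]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, bin_mids[i]
    return best_t

# Synthetic "easy" image: one bright square object on a dark background.
rng = np.random.default_rng(0)
img = rng.normal(20, 5, (64, 64))
img[20:40, 20:40] += 100           # bright object, clearly separable
mask = img > otsu_threshold(img)   # simple global threshold
```

On this clean bimodal image the threshold lands cleanly between background and object; on low-contrast brightfield images of the kind discussed below, no single global cut separates the classes, which is where AI-based methods take over.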
AI has undoubtedly revolutionized the field of image analysis with expanded capabilities and better data for stronger experimental evidence. Nevertheless, automated image analysis continues to be perceived by some as challenging when starting from scratch. One reason for this is the expertise required to set up the software to carry out the analyses. Image analysis is often thought of as easy to use once up and running, but demanding to set up. Another frequently encountered concern is the time it takes to train and optimize the software to make sure it can handle the images in question and avoid artifacts that can affect the outcome.
Training neural networks
Fig. 1 A) Schematic showing the training process of the neural network, B) Schematic showing the application (inference) of the trained neural network (© OSIS)
Responding to the need for AI-based methods to become faster and easier to set up, the latest image analysis software can now go beyond conventional AI and use deep convolutional neural networks. This type of neural network architecture has been described as the most powerful object segmentation technology [1]. Neural networks of this kind offer unrivaled adaptability to various challenging image analysis tasks, making them an ideal choice for a range of applications in life science imaging.
AI-based software usually needs to be provided with images and annotated object masks for training, also known as “ground truth” data [2]. These annotations (e.g. the boundaries of cells) traditionally have to be made manually, which can be a time-consuming step because of the large amount of training data required. Using a deep learning approach, however, the microscope software can automatically generate the ground truth required to train the neural network by acquiring reference images during the training phase (Fig. 1). Once the network has been trained, it can be applied to new images and predict the object masks with a high level of precision.
Since this approach to ground truth generation requires little human interaction, large amounts of training image pairs can be acquired in a short amount of time. This allows the neural network to adapt to all kinds of variations and distortions during training, resulting in a model that is robust against these challenges.
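The train-then-infer loop of Fig. 1 can be illustrated with a deliberately minimal stand-in: instead of a deep CNN, a per-pixel logistic classifier is fitted by gradient descent on simulated image–mask pairs and then applied to an unseen image. The data, model and parameters are all invented for this sketch and are far simpler than what the actual software uses; only the workflow (many automatically generated pairs in, a predictive model out) is the point.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_pair():
    """One simulated training pair: an input image plus its automatically
    generated ground-truth mask (here, foreground is simply brighter)."""
    img = rng.normal(0.2, 0.05, (32, 32))
    mask = np.zeros((32, 32), bool)
    mask[:, :16] = True                # left half is the "object"
    img[mask] += 0.5
    return img, mask

# Toy stand-in for a segmentation network: a per-pixel logistic
# classifier p = sigmoid(w*x + b), fitted by gradient descent on
# binary cross-entropy against the auto-generated masks.
w, b = 0.0, 0.0
lr = 2.0
for _ in range(500):
    img, mask = make_pair()
    x, y = img.ravel(), mask.ravel().astype(float)
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    grad = p - y                       # d(cross-entropy)/d(logit)
    w -= lr * np.mean(grad * x)
    b -= lr * np.mean(grad)

# Inference: apply the trained model to a fresh, unseen image
img, mask = make_pair()
pred = 1.0 / (1.0 + np.exp(-(w * img + b))) > 0.5
accuracy = (pred == mask).mean()
```

Because each training pair is generated automatically, the loop can consume arbitrarily many pairs, which is what makes the resulting model robust to variation, as described above.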
The capabilities of CNN-based deep learning technology in automated image analysis are demonstrated below using common application examples.
Fig. 2 From left to right: AI prediction of nuclei positions (blue), green fluorescent protein (GFP) histone 2B labels showing nuclei (green) and raw brightfield transmission image (gray) (© OSIS)
Label-free nuclei detection
Label-free observation has recently seen a significant resurgence in importance owing to machine-learning methods and dramatic improvements in image analysis [3]. A good example of improving capabilities by automated acquisition of ground truth information is the detection and segmentation of nuclei without using staining or fluorescent labels. This method is useful in many high-throughput imaging studies because it saves time, avoids cellular changes as a result of the fluorescent label and saves a fluorescent channel for other labels. In order to train the software in detecting nuclei in unstained brightfield images, the nuclei in the training samples can be labeled with a fluorescent marker. The microscope can then automatically acquire a large number of image pairs (brightfield and fluorescence) and detect the nuclei by automated thresholding in the fluorescence images. These objects become the ground truth to train the neural network, which can then find the nuclei in other samples using only brightfield images (Fig. 2).
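The pair-acquisition workflow described above can be sketched as follows: the fluorescence channel is thresholded automatically to yield a ground-truth nucleus mask, which is paired with the corresponding brightfield image as training data. All images here are simulated, and the simple mean-plus-two-sigma threshold is an assumption for illustration, not the software's actual method.

```python
import numpy as np

rng = np.random.default_rng(2)

def acquire_pair():
    """Simulate one automatically acquired image pair: a low-contrast
    brightfield image and a fluorescence image with bright nuclei."""
    nuclei = np.zeros((64, 64), bool)
    rr, cc = np.ogrid[:64, :64]
    for _ in range(3):                 # three round "nuclei"
        r, c = rng.integers(8, 56, size=2)
        nuclei |= (rr - r) ** 2 + (cc - c) ** 2 < 25
    fluor = rng.normal(10, 3, (64, 64))
    fluor[nuclei] += 80                # nuclei strongly labeled
    bright = rng.normal(100, 5, (64, 64))
    bright[nuclei] -= 8                # only faint brightfield contrast
    return bright, fluor

def auto_ground_truth(fluor):
    """Automated thresholding of the fluorescence channel produces the
    ground-truth nucleus mask without any manual annotation."""
    return fluor > fluor.mean() + 2 * fluor.std()

# Each acquired pair becomes (brightfield input, auto-generated mask);
# a network trained on many such pairs can then find nuclei in
# brightfield images alone.
training_set = [(b, auto_ground_truth(f)) for b, f in
                (acquire_pair() for _ in range(20))]
```

The key point is that the fluorescent label is only needed during training; at inference time the trained network works on unlabeled brightfield images, freeing the fluorescence channel for other markers.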
Fig. 3 Deep learning-based predictions of the location and shape of cell nuclei (red lines) at 100 % (a), 2 % (b), 0.2 % (c) and 0.05 % (d) of optimal conditions. The contours derived from the lowest SNR (d) deviate significantly from the correct contours, indicating that the limit of the technique for quantitative analysis at ultra-low exposure levels is between 0.2 % and 0.05 % of the usual light exposure. Contrast optimized per SNR for visualization only. (© OSIS)
Quantitative analysis at ultra-low exposure
A self-learning microscopy approach can also be used for training deep learning software to detect fluorescently labeled nuclei in ultra-low light. Low-light fluorescence makes long-term live cell imaging possible, because it minimizes phototoxicity and photobleaching. In this approach the deep learning software can be trained to detect labeled nuclei at a very low signal-to-noise ratio (SNR) by using image pairs of the same samples imaged in normal and ultra-low light conditions. Figure 3 shows how, after the training phase, the deep learning software can accurately predict the locations of nuclei, even at a light intensity of only 0.2 % of optimal conditions. The software can even extract information beyond simple contours – classifying cells as either dividing or non-dividing by looking at the difference in signal intensity between cells with single and double DNA content.
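The idea of pairing normal- and ultra-low-exposure acquisitions of the same sample can be illustrated with simulated photon (shot) noise, which dominates at low light. The scene, exposure fractions and the contrast-based SNR measure below are illustrative assumptions, not measured values from the figure.

```python
import numpy as np

rng = np.random.default_rng(3)

# "Scene": expected photon counts per pixel at full (normal) exposure
scene = np.full((64, 64), 50.0)
scene[24:40, 24:40] = 2000.0          # one brightly labeled nucleus

def acquire(exposure_fraction):
    """At low light, photon shot noise dominates, so an acquisition is
    modeled as a Poisson sample of the scaled scene."""
    return rng.poisson(scene * exposure_fraction).astype(float)

full = acquire(1.0)                   # normal-light reference image
low = acquire(0.002)                  # 0.2 % of the usual light exposure

fg = scene > 100                      # true nucleus pixels
bg = ~fg

def snr(img):
    """Contrast-to-noise style SNR: nucleus signal over background noise."""
    return (img[fg].mean() - img[bg].mean()) / (img[bg].std() + 1e-9)

s_full = snr(full)
s_low = snr(low)
# A training pair for the network would be (low-light image,
# ground-truth mask derived from the normal-light image).
```

The low-exposure image retains far less signal relative to noise, which is why a network trained on such pairs is needed to recover nucleus contours reliably, down to the limit described in Fig. 3.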
Fig. 4 Prediction of glomeruli positions on a mouse kidney section using Olympus TruAI (blue) (© OSIS)
Analysis of tissue sections
AI-based software can apply the same deep learning approach to speed up analysis of tissue sections, for example kidney sections. Kidney glomeruli fulfill an essential role in filtering waste products from the blood. In kidney research, quantifying glomeruli can provide important information about the functioning of the kidney as a whole. However, in fluorescently labeled tissue sections, glomeruli can be hard to automatically discriminate from the surrounding tissue – making reliable quantification challenging. In Figure 4, a fluorescence image of a mouse kidney is analyzed using deep learning software. After training, the software was able to reliably identify the locations of the glomeruli within the tissue, demonstrating its capabilities in images where the color and shape of the objects of interest are often only marginally different from other features in the image.
Summary
The introduction of AI in microscopy has already made a lasting impact on the way automated image analysis and quantification are carried out in many areas of life science research. Software setup and training are still perceived as hurdles to adopting AI, but new advances, such as deep learning based on convolutional neural networks, have made it faster and easier than ever to obtain reliable data from large sets of images.
________________________________________________________________________________________
Category: Application | Microscopy
Literature:
[1] Long, J., Shelhamer, E., Darrell, T. (2015) Fully convolutional networks for semantic segmentation, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 3431-3440, DOI: 10.1109/CVPR.2015.7298965
[2] Ljosa, V., Sokolnicki, K.L., Carpenter, A.E. (2012) Annotated high-throughput microscopy image sets for validation, Nat. Methods, 9, 637, DOI: 10.1038/nmeth.2083
[3] Christiansen, E.M., Yang, S.J., Ando, D.M., Javaherian, A. et al. (2018) In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images, Cell, 173, 792-803.e19, DOI: 10.1016/j.cell.2018.03.040
Header image: © OSIS
Date of publication:
27-Oct-2020