Description of the Lusca algorithm
The image analysis by Lusca includes the following steps: (1) setting the input parameters, (2) segmenting and fine-tuning the image to obtain the masked region of interest (ROI), and (3) quantifying recognized objects via particle, skeleton, and colocalization analyses.
Setting up the input parameters for image analysis
Lusca’s wizard guides users through the selection of input parameters: (a) choosing the image folder, (b) selecting the image type (e.g., channel/single, 2D/3D), and (c) optionally setting the scale, cropping the image for analysis, and choosing whether to quantify further morphological parameters with the interactive approach, without it, or with already segmented images. For further analysis without the interactive approach, the user provides (d) the location of the folder containing the classifier(s), (e) the name of the image segmentation classifier, and (f) the intensity, area/volume, and circularity thresholds for fine-tuning, and selects (g) the type of morphological analysis (neural projections; soma and nuclei; area, number, and intensity; length and branching; width; and colocalization of segments). Additional input settings for specific analyses include (h) histogram parameters (number of bins, minimum and maximum width) for neural projection or width analysis, (i) parameters (d) to (f) for the nuclei image in soma analysis, and (j) parameters (d) to (f) for the colocalizing image in colocalization of segments.
If the user chooses the interactive approach for quantification of other morphological parameters, the wizard guides them through the image analysis process to create and set the unknown variables (d) to (j). A sequence of “while” loops repeats each step until the user is satisfied with the obtained image. A detailed breakdown of the macro’s architecture can be found in Fig. S1. If, on the other hand, morphological analysis with already segmented images is chosen, the user only provides the location of those segmented images, and the analysis starts from step 3 in Fig. 1.
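The wizard’s repeat-until-satisfied behaviour can be sketched in Python (a hypothetical stand-in for the macro’s “while” loops; `process` and `accept` are stub callbacks, not Lusca functions):

```python
def interactive_tune(process, accept, params):
    """Sketch of the wizard's interactive loop: apply a processing step
    with the current parameters, show the result to the user, and repeat
    with adjusted parameters until the result is accepted."""
    while True:
        result = process(params)             # e.g. apply current thresholds
        ok, params = accept(result, params)  # user inspects and adjusts
        if ok:
            return result, params
```

In the actual macro the callbacks correspond to applying the segmentation and thresholding steps and to dialog prompts asking whether the obtained image is satisfactory.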
Pipeline for image analysis of neural projections and bodies. Blue squares represent image segmentation, while grey, orange, green, red, violet, and yellow squares represent different options for image analysis: colocalization, neural bodies, neural projections, length and branching, width, and area, number, and intensity, respectively. Step 1 shows the enlarged input images of neurons and nuclei stained with MAP2 (red), SMI312 (green), and DAPI (cyan). ROIs are segmented from the background with TWS in step 2, followed by intensity, area, and circularity thresholding to obtain the segmented image in step 3. The “neural bodies” analysis includes the calculation of area, number, and intensity for both nuclei and soma images. The neural bodies image is acquired by applying the Boolean operator “AND” to the previously dilated nuclei image and the neuron image from step 3 to avoid false signals. The area, number, and intensity of both neural bodies and nuclei are calculated with the particle analyser after redirection to the corresponding input images (orange square, step 4). The “neural projections” analysis includes the calculation of area, number, intensity, length, branching, and width. Particle analysis is performed again to obtain area, number, and intensity results (green square, step 4). Length and branching are calculated after forming skeletons in step 5, while width calculation further involves the transformation of the step 5 image into a 32-bit image. Following thresholding and subtraction of 255, values of NaN for the background pixels and 0 for the skeleton pixels are obtained. Simultaneously, applying Local Thickness to the image from step 3 yields accurate width dimensions (step 6). The images from steps 5 and 6 are added to obtain the image in step 7, from which the width results are calculated.
The separation of these options into distinct squares (red, violet, and yellow) facilitates better user comprehension and easier application, especially considering the versatility of Lusca in analysing various biological objects beyond neurons.
By the end of this step, the images from the folder are saved in the LIST data type, while the input data for each image type are saved into arrays. This approach allows for automated loading of images and provides the user with a sorted array of a vast amount of data for image analysis. Additionally, a “Results” folder is created within the image directory, giving users access to the images, graphs, and tables displaying the analysis results.
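The bookkeeping described here can be illustrated with a short Python sketch (hypothetical helper and names; the actual macro stores images in ImageJ’s LIST type and input data in macro arrays):

```python
from pathlib import Path

def prepare_analysis(image_dir, extensions=(".tif", ".tiff")):
    """Collect image paths into a sorted list and create a 'Results'
    folder inside the image directory, mirroring Lusca's bookkeeping
    step (illustrative helper, not the macro's actual code)."""
    image_dir = Path(image_dir)
    images = sorted(p for p in image_dir.iterdir()
                    if p.suffix.lower() in extensions)
    (image_dir / "Results").mkdir(exist_ok=True)  # output folder for tables, graphs, images
    return images
```

Sorting the file list up front is what allows every downstream measurement to be matched back to its source image by index.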
Image segmentation and fine-tuning
For image segmentation, Lusca relies on Trainable Weka Segmentation (TWS), a machine-learning plugin13. This process involves forming a machine-learning classifier by defining classes that represent the number of objects the user wishes to distinguish within the image. Initially, a minimum of two classes is required, with the option to add more through “Create new class”, focusing on shared intensity and shape characteristics. Once a classifier of satisfactory precision is created, it should be saved using the “Save classifier” command, enabling its unlimited reuse in automated processing of similar types of data. The classifiers used for quantification were random forest models. Because the decision-tree structure is determined by the input pixels, precise determination of ROIs is crucial. Equally important is the selection of suitable training features, which depends on the characteristics of the object intended for segmentation. These affect the learning process, optimizing both image segmentation and analysis speed.
As output, TWS generates probability maps, a stack with channels matching the segmentation classes. For instance, in Fig. 1 step 1, an image of neurons was segmented into projections, somas, and the background (step 2). In each channel, the whiter a pixel is, the greater the probability that it belongs to the selected class. Additional fine-tuning is performed by applying intensity, area/volume, and circularity thresholds to obtain a segmented image, represented by black masks on a white background (Fig. 1, step 3).
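The fine-tuning step amounts to filtering candidate objects against the user-set thresholds, which can be sketched as follows (illustrative Python; object properties are given here as plain dicts rather than measured image regions):

```python
def fine_tune(objects, min_intensity, min_area, circularity=(0.0, 1.0)):
    """Apply the intensity, area/volume, and circularity thresholds used
    for fine-tuning a probability map: keep only objects that pass all
    three user-set criteria (illustrative sketch)."""
    lo, hi = circularity
    return [o for o in objects
            if o["intensity"] >= min_intensity
            and o["area"] >= min_area
            and lo <= o["circularity"] <= hi]
```

Objects failing any threshold are discarded, leaving the black-mask segmented image described above.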
Image quantification procedure
To analyse segmented objects, the segmented image is redirected to the input image, replacing black masks with shaded masks based on the intensity of the input image (Fig. 1, step 4). The Particle Analyzer measures 2D image parameters such as number, area, circularity, and intensity, while the 3D Objects Counter quantifies number, volume, surface, sphericity, and intensity for 3D images14,15.
Although designed for the analysis of neuronal projections, Lusca can also analyse neuronal bodies (Fig. 1, orange square). During initial testing, the image segmentation pipeline showed inaccuracies on images with thicker neurites, increased background noise, and stain accumulation, which were mistaken for neuron bodies. This happened because TWS and the particle analyser could not eliminate these false positive signals, owing to their similarity in characteristics, size, or shape to actual neuron bodies. Since nuclei reside within the cell bodies and seldom intersect with the aforementioned inaccuracies, extraction of the overlapping signal allowed for the separation of neuronal bodies. Because nuclei are smaller than somas, the nuclei’s surface area was enlarged before applying the “AND” operator (Fig. 1, steps 8–9). This ensured accurate cell body recognition, facilitating calculations of area/volume, surface, number, intensity, and shape.
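The dilation-plus-AND step can be sketched in Python on binary masks stored as coordinate sets (an illustrative stand-in for the macro’s binary image operations):

```python
def dilate(mask, radius=1):
    """Square-structuring-element dilation of a binary mask stored as a
    set of (x, y) pixel coordinates."""
    return {(x + dx, y + dy)
            for (x, y) in mask
            for dx in range(-radius, radius + 1)
            for dy in range(-radius, radius + 1)}

def neural_bodies(nuclei, neuron, radius=1):
    """Enlarge the nuclei mask, then keep only pixels present in BOTH
    masks (Boolean "AND"), suppressing false-positive soma signals such
    as stain accumulations that do not overlap any nucleus."""
    return dilate(nuclei, radius) & neuron
```

Because every retained soma pixel must overlap a (dilated) nucleus, thick neurites and stain accumulations far from any nucleus are excluded automatically.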
To quantify the length of projections, the objects from step 3, derived from the segmented image, are converted into one-pixel-wide lines, i.e. skeletons, using the Skeletonize (2D/3D) plugin (Fig. 1, step 5)16. Subsequent skeleton analysis yields data on endpoint, junction, and slab voxels, as well as the maximum, mean, and sum of branch lengths, offering insights into the neuronal culture’s condition.
The width of the projections is determined by calculating the width per increment of length. This is achieved by applying the Local Thickness plugin to the image from step 317. The width corresponds to the largest circle (2D) or sphere (3D) that fits within the projection. Simultaneously, the skeleton image (step 5) is converted into a 32-bit image, so that, after thresholding, the background’s value is converted to Not a Number (NaN) while the skeleton pixels remain at a value of 255. To establish a direct connection between the thickness in step 6 and the corresponding pixel in step 5, 255 is subtracted from the whole image, leaving the background pixels at NaN and the skeleton pixels at 0. Merging these images creates a final image where pixel values directly indicate section thickness (step 7). From this image, the histogram, mean, and median width are calculated.
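The NaN arithmetic described above can be reproduced in a few lines of pure Python (illustrative sketch; in Lusca the same operations run on 32-bit ImageJ images):

```python
import math

def mean_width_along_skeleton(skeleton, thickness):
    """Skeleton pixels (255) become 0 and background becomes NaN, so that
    adding the local-thickness map leaves width values only along the
    skeleton; NaN background pixels drop out of the statistics."""
    combined = [[(s - 255 if s == 255 else math.nan) + t
                 for s, t in zip(s_row, t_row)]
                for s_row, t_row in zip(skeleton, thickness)]
    widths = [v for row in combined for v in row if not math.isnan(v)]
    return sum(widths) / len(widths)
```

Since NaN propagates through addition, every off-skeleton pixel stays NaN in the merged image, which is exactly why the step 7 image measures width per increment of skeleton length.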
Colocalization analysis is performed on segmented objects within the selected images (Fig. 1, step 3). Manders’ colocalization coefficient is determined as the quotient of the colocalized area/volume between the two images and the total area/volume of one of the images18. To obtain the pixels with positive signals in both images, Lusca uses the Boolean operator “AND” with the particle analyser for area measurements, while the 3D Objects Counter is used for volume measurements. The total area/volume of the image objects is acquired after segmentation, as mentioned earlier in this paragraph, with the particle analyser or 3D Objects Counter.
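Under this definition, Manders’ coefficient reduces to a set computation on segmented pixels (illustrative Python sketch; masks are sets of pixel coordinates):

```python
def manders_coefficient(mask_a, mask_b):
    """Manders' colocalization coefficient for image A: colocalized
    pixels (A AND B) divided by the total positive pixels of A
    (illustrative; Lusca measures areas/volumes with the particle
    analyser or 3D Objects Counter)."""
    return len(mask_a & mask_b) / len(mask_a) if mask_a else 0.0
```

Note the coefficient is asymmetric: swapping the two masks normalizes by the other image’s total area/volume.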
Validation of Lusca and comparison to other algorithms
To validate the results obtained with Lusca, manual analysis served as the golden standard.
The comparison of Lusca’s results primarily centred on open-source scripts such as NeuronJ, NeuriteTracer, NeurphologyJ, and CellProfiler.
The comparison criteria included parameters readily accessible in these programs, excluding those requiring additional user input. Key parameters defining Lusca’s performance were established: projection length and width, soma and nuclei counts, neuron and nuclei area/volume, and analysis time.
Additionally, an execution speed parameter was introduced, calculated as the quotient of the number of measurements and the analysis time measured with PowerShell. This facilitated a comparative analysis of the efficiency and speed of the different programs in executing varied functions within a defined time frame.
Thirty images of neurons and nuclei were used for validation and comparison, categorized into three groups: 2D high-quality-stained, 2D low-quality-stained, and 3D high-quality-stained images.
High-quality-stained images exhibited minimal noise between neuronal or nuclear areas and their boundaries, with a ratio of area intensity to boundary intensity greater than 5. Low-quality-stained images, on the other hand, had a high amount of background noise, so the aforementioned ratio was lower than 519. To assess program performance across diverse cell cultures capturing various morphological features, neurons were cultured for 5–10 days.
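The quality classification is a simple intensity-ratio test, sketched here in Python (hypothetical helper; in practice the intensities would be measured from the stained areas and their boundaries):

```python
def stain_quality(area_intensity, boundary_intensity, cutoff=5.0):
    """Classify stain quality by the ratio of area intensity to boundary
    intensity, using the cutoff of 5 described in the text."""
    return "high" if area_intensity / boundary_intensity > cutoff else "low"
```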
Measurement of neuronal projections reveals that Lusca achieves the same precision as manual analysis at 230 times the execution speed
To evaluate the accuracy, precision, and speed of Lusca, it was first compared to the “golden standard”—manual analyses. A detailed breakdown of manual validation can be found in the Methods section. No statistically significant differences were found between manual analyses and Lusca for both 2D high and low-quality images, along with 3D images for neuronal body and nuclei counts, neuron and nuclei area/volume, as well as projection length and width. A detailed description of data, shown as mean and standard deviation (SD), can be found in Supplements (Tables S1–S3).
Comparison of Lusca with other image analysis algorithms reveals that Lusca offers the largest number of measurements with the highest execution speed
Various tools facilitate analysis of neuronal morphologies, prompting a comparison of the results obtained with NeuronJ, NeuriteTracer, NeurphologyJ, and CellProfiler to those generated by both Lusca and manual measurements, the golden standard. Images of neurons were first analysed with ImageJ/FIJI scripts and compared accordingly (Fig. 2a). No notable difference was observed in length, soma and nuclei count, or neuron and nuclei surface area in both high and low-quality-stained images.
Qualitative and quantitative comparison of different scripts for neuron analysis with Lusca and manual tracing on 2D high and low-quality-stained and 3D images. The comparison between open-source programs, Lusca, and manual tracing was performed on MAP2 high and low-quality-stained input images. (a) Qualitative comparison of Lusca, NeurphologyJ, and NeuriteTracer final output images for each image stain quality. Datasets (neuronal body and nuclear count, neurite length and width, and neuron area/volume) generated by each open-source program, Lusca, and manual measurements were subjected to linear regression for 2D (b) high and (c) low-quality-stained images, as well as (d) 3D images. The equations for the lines of best fit and the coefficients of determination are presented in the figure key.
Further comparison included the calculation of execution speed (Table 1). For ten analysed images, NeuronJ measured 5 parameters in 222.76 min, NeuriteTracer assessed 2 parameters in 1.63 min, NeurphologyJ evaluated 15 parameters in 9.12 min, and Lusca calculated 29 parameters in 6 min.
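Using the figures quoted above, the execution speeds follow directly from the quotient defined earlier (Python sketch):

```python
def execution_speed(n_parameters, minutes):
    """Execution speed as defined in the comparison: number of measured
    parameters divided by analysis time (parameters per minute)."""
    return n_parameters / minutes

# Figures quoted in the text for ten analysed images:
speeds = {
    "NeuronJ":       execution_speed(5, 222.76),
    "NeuriteTracer": execution_speed(2, 1.63),
    "NeurphologyJ":  execution_speed(15, 9.12),
    "Lusca":         execution_speed(29, 6.0),
}
```

On these numbers Lusca attains the highest speed of the four programs.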
Additionally, linear regression was performed to compare the results of the aforementioned programs with manual measurements (Fig. 2b–d). Lusca was the only program to consistently mirror manual measurements throughout both 2D high and low-quality-stained images and 3D images.
For each ImageJ macro (Lusca, NeuriteTracer, and NeurphologyJ), false positive and false negative measurements were calculated as well. Following the study by Encarnacion-Rivera et al., a false negative length was defined as a visible neurite signal unrecognized by the macro, while a false positive was a signal identified by the macro that lacked an actual neurite19. A false negative neuronal body count was defined as an uncounted body with a present signal, while a false positive represented a macro-recognized body lacking a signal. False positive and negative rates were defined as the corresponding measurements of neurite length or neuronal bodies divided by the manual measurements. Recognised false positive signals included background noise noted as a soma or neurite signal and a thicker neurite segment recognised as a neural body. False negative signals, on the other hand, included merged neural bodies not recognised separately, a neurite or neural body recognised as background due to low contrast, and a neurite misclassified as a neural body (Fig. 3a–g). Figure 3 shows that Lusca had significantly fewer false positive and negative length measurements compared to NeuriteTracer and NeurphologyJ in both high and low-quality-stained images. Similarly, Lusca showed fewer false positive and negative counts for neuronal bodies, a significant difference when compared to NeuriteTracer and NeurphologyJ. However, no difference was found for the false positive count in high-quality-stained NeurphologyJ images (Fig. 3j–k).
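The rate definitions reduce to simple quotients against the manual measurement (illustrative Python; the input values here are hypothetical):

```python
def error_rates(macro_false_positive, macro_false_negative, manual_total):
    """False positive and false negative rates as defined in the
    comparison: spurious and missed signal (length or count) divided by
    the manual measurement (illustrative helper)."""
    return (macro_false_positive / manual_total,
            macro_false_negative / manual_total)
```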
False positive and negative operational classifications and measurement rate comparison between NeurphologyJ, NeuriteTracer, and Lusca. A false negative signal is defined as a visible signal unrecognised by the macro; a false positive signal is identified by the macro but lacks an actual signal. Recognised false positive signals for neural projections: (a) background noise mistaken for a neurite; for neural bodies, these include (b) a thick neurite and (c) background noise misplaced as a neural body. False negative signals for neural projections: (d) neurites recognised as background and (e) a thick neurite misassigned to neural bodies; for neural bodies, (f) two connected neural bodies and (g) a neural body recognised as background. For further comparison of the ImageJ/FIJI macros, false positive and negative signals were measured for each script on high and low-quality-stained MAP2 neuron images: false positive neurite length rate (h), false negative neurite length rate (i), false positive count rate of neuronal bodies (j), false negative count rate of neuronal bodies (k).
Lusca and other cellular and subcellular structures
Besides neural morphometric analysis, Lusca successfully quantified various morphological parameters in images of blood vessels and mitochondria.
Application of Lusca in morphological analysis of blood vessels
Since both neural projections and blood vessels are tubular objects, Lusca’s applicability to analysing 3D magnetic resonance angiography (MRA) image stacks was also tested20. Before analysis, anatomical landmarks were established to standardize the volume and position of the maximum intensity projection ROIs (Fig. 4, optional step). Though optional, this step ensures a more controlled analysis, since analysing the entire stack without such standardization might increase false positive areas due to MR coil-induced intensity variance. Volumetric vessel analysis was conducted using a specialized 3D classifier to distinguish vessels from brain tissue and the background. Following the application of intensity and volume thresholds, results were obtained as measurements of vessel volume in the measured hemisphere (Fig. 4, steps 1–4). This confirmed Lusca’s ability to quantitatively assess high-blood-flow-velocity cerebral vasculature in a longitudinal manner using MRA.
Lusca pipeline applied to morphological 3D magnetic resonance angiography (MRA) stack analysis. For 3D perception, the anatomical planes (a) coronal, (b) sagittal, and (c) transversal are shown. Blue, yellow, red, and violet squares represent image segmentation; area, number, and intensity; length and branching; and width measurements, respectively. The anatomical landmarks of the MRA stacks were used to standardize the volume and position of the maximum intensity projection ROIs (optional step, grey square). Vessels in the input images in step 1 are segmented from the background with TWS in step 2 and, after area and intensity thresholding, the step 3 segmented image is acquired. In step 4, after redirection of the segmented image to the input image, the area, number, and intensity are obtained. For length and branching measurements, the step 3 image undergoes skeletonization and analysis. Width is calculated by applying the Local Thickness mask to the image from step 3 to obtain the step 6 image. Simultaneously, a threshold is applied to the 32-bit image from step 5, and after subtracting 255, values of NaN for the background and 0 for the skeleton pixels are obtained. The images from steps 5 and 6 are added to obtain the step 7 image, which serves for the width calculations.
Application of Lusca in morphological analysis of mitochondrial shapes
Subcellular structures like mitochondria compose distinct cellular networks, and morphological analysis of these networks provides insights into mitochondrial dynamics, quality, and function. Jagečić et al. demonstrated Lusca’s efficacy in comparing mitochondrial networks under varying cell conditions (Fig. 5a, b)21. A classifier was made to differentiate the mitochondrial network from the background (Fig. 5, steps 1–2). Following Ahmad et al., who described tubular, intermediate, and punctate mitochondrial shapes linked to functional changes, Lusca successfully segmented these shapes based on differences in circularity, using the categories 0.00–0.33, 0.33–0.66, and 0.66–1.00 (Fig. 5, step 3)22. The analysis also included the calculation of area, number, intensity, length, and branching of the mitochondria (Fig. 5, steps 4–5).
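The circularity binning can be sketched as follows (Python; the mapping of bins to shape names assumes that elongated tubular mitochondria have low circularity and round punctate ones high, which the text implies but does not state explicitly):

```python
def mitochondrial_shape(circularity):
    """Bin a mitochondrion into the circularity categories used in the
    analysis (0.00-0.33, 0.33-0.66, 0.66-1.00); the bin-to-name mapping
    is an assumption, not stated in the source."""
    if circularity < 0.33:
        return "tubular"        # elongated, low circularity (assumed)
    if circularity < 0.66:
        return "intermediate"
    return "punctate"           # round, high circularity (assumed)
```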
Lusca pipeline for morphological analysis of mitochondria. Blue, red, and yellow squares represent image segmentation; length and branching; and area, count, and intensity measurements, respectively. Mitochondria immunocytochemistry images in (a) normoxic conditions and (b) after oxygen–glucose deprivation treatment were stained with the Tomm20 antibody. Mitochondria (step 1) are segmented from the background with TWS in step 2 and, after area, intensity, and circularity thresholding, the step 3 image is acquired. In step 4, after redirection to the input image, the area, count, and intensity results are obtained. For further length and branching measurements, the step 3 image is skeletonized and analysed (step 5).
- Source: https://www.nature.com/articles/s41598-024-57650-6