
Efficient 3D reconstruction of Whole Slide Images in Melanoma
J. Arslana, M. Ounissia, H. Luoa, M. Lacroixb, P. Dupréb, P. Kumarc, A. Hodgkinsond, S. Dandouc, R. Lariveb, C. Pignodelb, L. Le Camb, O. Radulescuc, and D. Racoceanua
aSorbonne Université, Institut du Cerveau – Paris Brain Institute – ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
bInstitut de Recherche en Cancérologie de Montpellier (IRCM), INSERM, Université de Montpellier, Institut régional du Cancer de Montpellier, Montpellier, France
cLaboratory of Pathogen Host Interactions, Université de Montpellier, CNRS, Montpellier, France
dQuantitative Biology & Medicine, Living Systems Institute, University of Exeter, UK
Cutaneous melanoma is an invasive cancer with a worldwide annual death toll of 57,000 (Arnold et al., JAMA Dermatol 2022). In a metastatic state, surgical interventions are not curative and must be coupled with targeted therapy or immunotherapy. However, resistance appears almost systematically and late-stage prognosis can remain poor. The difficulty of eradicating melanoma stems from its plasticity: these cancer cells continually adapt to the tumor microenvironment, which leads to resistance to treatment. Our primary assumption is that therapeutic resistance relies in part on a series of non-genetic transitions, including changes in the metabolic states of these cancer cells. The 3D spatial distribution of the blood vessels that supply the nutrients and oxygen driving this metabolic status is an important variable for understanding the zoning aspects of this adaptation process. Using Whole Slide Images (WSI) of melanoma tumors from Patient-Derived Xenograft (PDX) mouse models, we build 3D vascular models to help predict and understand the metabolic states of cancer cells within the tumor. Our 3D reconstruction pipeline, applied to PDX tumor samples sectioned over a 2 mm depth and stained with Hematoxylin and Eosin (H&E), involves three primary steps: 2D vessel segmentation using deep learning, intensity- and affine-based image registration, and 3D reconstruction using interpolation and 3D rendering (allowing better interaction with biologists, pathologists, and clinicians). The originality of our computer-assisted pipeline is its capability to (a) deal with sparse data (i.e., not all tissue sections were readily available), and (b) adapt to a multitude of WSI-related challenges (e.g., epistemic uncertainty, extended processing times due to WSI scale, etc.). We present our 3D reconstruction pipeline, quantitative results for the major stages of the process, and a detailed illustration of the challenges faced, together with the resolutions adopted to improve the pipeline's efficiency.
Keywords: cutaneous melanoma, whole slide images, 3D reconstruction, vascular reconstruction, personalized medicine
1. INTRODUCTION
Historically a rare cancer, cutaneous melanoma has rapidly evolved to become one of the most fatal forms of cancer today, accounting for approximately 57,000 deaths worldwide annually as of 2020.1,2 Early stages of the disease have increased survival rates with surgical interventions. However, treatment of late-stage metastatic melanoma is far more challenging, relying on targeted therapy or immunotherapy for an improved line of defense.3 Despite these advanced therapeutic strategies, the clinical outcome of patients with metastatic disease remains poor because of major resistance that appears nearly systematically.
Further author information: (Send correspondence to Janan Arslan)
J.A.: E-mail: Telephone: +33 (0)6 75 22 39 53

The infiltration of resident host tissues by the cancer cells can stimulate molecular, cellular, and physical changes within the host tissue, creating a microenvironment that is conducive to the survival and proliferation of melanoma.4 Furthermore, melanoma cancer cells are plastic cells that have the ability to rewire their metabolism to adapt to changing environmental conditions (e.g., a poorly vascularized and hypoxic tumor microenvironment, or drug exposure), and thus thrive within their complex microenvironment.5
In this study, our primary assumption is that resistance to treatment stems at least in part from a series of non-genetic transitions and changed metabolic states. In a previous mathematical modeling work, we showed that the spatial heterogeneity of the blood vessels, which are sources of nutrition and drug compounds for the tumor, may generate zoning, with resistant cells distributed preferentially in highly vascularized regions.6 We propose here a pipeline to reconstruct the 3D architecture of the blood vessel distribution in a melanoma tumor using artificial intelligence (AI) algorithms. The pipeline comprises three primary steps: 2D blood vessel segmentation using deep learning, intensity- and affine-based image registration, and 3D reconstruction using interpolation and rendering. The proposed pipeline has a multi-function purpose: in addition to understanding the characteristics of the vascular network through reconstruction, we also automate previously human-run and manual pathology grading tasks that are time-consuming and arduous. The pipeline was designed for Whole Slide Images (WSIs) of Hematoxylin and Eosin (H&E)-stained pathology sections from melanoma tumors derived from Patient-Derived Xenograft (PDX) mouse models. In this study, we elucidate the steps of our pipeline, provide qualitative and quantitative assessments, and discuss WSI-related challenges (notorious and plentiful owing to the size and depth of information in WSIs), together with the solutions we have employed to improve the efficiency and adaptability of our 3D reconstruction. This 3D vascular reconstruction can be used to explicate how characteristics such as vascular shape, size, and bifurcations differ in alternate metabolic states, and thus can be used in predicting treatment efficacy.
2. METHODOLOGY
2.1 3D Reconstruction Pipeline
2.1.1 Overall Pipeline
Fig. 1 provides an overall outline of the 3D reconstruction pipeline. Images first underwent preliminary pre-processing steps, including being (A) exported in Tagged Image File (TIF) format, (B) split into top and bottom tissues, and (C) crop-centered, inpainted, and cleaned via the removal of the noisy background. Once images were pre-processed, they were (D) registered sequentially in a pairwise fashion using an optimized intensity- and affine-based registration method. This was followed by patch-level segmentation (E & F) to produce complete, segmented WSIs (G). All segmented WSIs were rendered and interpolated to produce a final 3D vascular model (H & I).
2.1.2 Image Pre-processing
The pre-processing stage is designed to normalize the WSIs in order to ease subsequent processes. In essence, the following steps are taken to create WSIs that are more uniform and almost mimic standardized radiological images, such as those of computed tomography (CT) scans. To create this uniformity, several artifacts and variabilities in WSIs need to be accounted for (Fig. 2), with the most predominant concern being tissue shift. WSI acquisition is littered with epistemic uncertainty (i.e., uncertainty due to incomplete knowledge of a disease or process, or reducible errors, such as subjective, measurement, or human-related errors).7 This is mostly due to the process being heavily reliant on humans manually carrying out each task of WSI acquisition (i.e., the slicing of the tissue, its placement on a glass slide, etc.). To minimize tissue shift, images were crop-centered. Crop-centering in this pipeline includes: establishing an image size with a width of 6000 and a height of 5000 pixels (chosen based on the size of the baseline, largest tissue section); identifying the central point of the WSI; and, from the identified center, cutting a perimeter based on the pre-set width and height. This process ensures all tissue sections are centrally located while retaining their original tissue size. The maintenance of the original tissue size takes priority: cutting directly around the perimeter of each tissue would have made small tissue sections appear erroneously large, not reflecting their true size relative to the whole tissue, and the resulting inconsistency in image size could have impacted the rendering and interpolation techniques later in the pipeline (refer to Section 2.1.6). An additional step was required prior to crop-centering: a border 2000 pixels wide was added around each tissue section after its split into top and bottom images. This addition accounts for the disparity in the location of some tissue sections. Some tissue samples were close to the edge of the slide, as shown in Fig. 3; when crop-centering was attempted, these images were not correctly centered, as there was insufficient space to cut a perimeter of 6000 × 5000. The addition of a 2000-pixel border granted sufficient space to ensure the correct, central placement of our tissue samples.

Figure 1: Overall pipeline for 3D vascular reconstruction from H&E-stained WSIs.
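A minimal sketch of the crop-centering step described above, assuming an OpenCV/NumPy implementation (the helper name and the Otsu-based tissue mask are illustrative choices, not necessarily those of the production pipeline):

import cv2
import numpy as np

CROP_W, CROP_H = 6000, 5000      # pre-set crop perimeter (width, height)
BORDER = 2000                    # padding so edge tissue still fits the window

def crop_center(image_bgr):
    # Pad with white so tissue close to the slide edge can still be centered.
    padded = cv2.copyMakeBorder(image_bgr, BORDER, BORDER, BORDER, BORDER,
                                cv2.BORDER_CONSTANT, value=(255, 255, 255))
    # Rough tissue mask: anything darker than the white background.
    gray = cv2.cvtColor(padded, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())          # tissue centroid
    # Cut a fixed 6000 x 5000 window around the centroid (tissue scale unchanged).
    x0 = max(cx - CROP_W // 2, 0)
    y0 = max(cy - CROP_H // 2, 0)
    return padded[y0:y0 + CROP_H, x0:x0 + CROP_W]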
Noisy backgrounds were also removed (Fig. 4). Noisy backgrounds in WSIs consist of staining artifacts and pixelation caused by the glass slide covers. Noisy background removal involved isolating the tissue samples from their original background and placing them on a clean background. This was achieved using foreground extraction. The original images were binarized using a simple Otsu thresholding technique, creating a foreground mask. To remove small, erroneous regions, such as extraneous stains around the tissue, the largest contour (i.e., the tissue section) was selected, eliminating any small and unnecessary regions from the binary mask. Using a bitwise operation, the white portions of the mask were used to identify the region of interest within the original image, so that only foreground information was extracted. The final step was to simply add a white background, resulting in a clean tissue section for processing.
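A minimal sketch of this foreground-extraction sequence, assuming an OpenCV implementation (the function name and details are illustrative):

import cv2
import numpy as np

def remove_background(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold; the tissue is darker than the bright slide background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Keep only the largest contour (the tissue), dropping stains and debris.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    mask = np.zeros_like(binary)
    cv2.drawContours(mask, [largest], -1, 255, thickness=cv2.FILLED)
    # Bitwise operation: keep original pixels only inside the mask.
    foreground = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    # Paint everything outside the mask white for a clean background.
    foreground[mask == 0] = (255, 255, 255)
    return foreground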
Finally, the epidermis was inpainted using a whole-image-level trained U-Net model (Fig. 5). Initial registration attempts showed that the epidermis hindered the performance of the registration, and its presence provided little value for the final vascular reconstruction. The epidermis U-Net model had mostly similar hyperparameters to our patch-level blood vessel segmentation model (refer to Section 2.1.4), including: batch size=32, the ADAM optimizer, learning rate=3 × 10−5, epochs=300, and the Binary Cross Entropy Dice loss function (BCE DICE). Given the (almost) consistent nature of the epidermis (i.e., shape, location, patterns, etc.), a smaller number of epochs proved to be more than adequate for inpainting. BCE DICE was chosen for its ability to better discern segmentation boundaries compared to other loss functions.
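For clarity, a sketch of a combined BCE + Dice (BCE DICE) loss; the paper does not state the deep learning framework, so a PyTorch-style formulation is assumed here:

import torch
import torch.nn.functional as F

def bce_dice_loss(logits, targets, smooth=1.0):
    """logits: raw model output (N, 1, H, W); targets: float binary masks of the same shape."""
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    intersection = (probs * targets).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + targets.sum(dim=(1, 2, 3))
    dice = (2.0 * intersection + smooth) / (union + smooth)
    return bce + (1.0 - dice.mean())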

Figure 2: Sample image artifacts impacting pipeline flow. Due to the mostly manual acquisition of tissue sections in WSI processing, these imaging modalities are highly prone to several artifacts, such as erroneous staining and tissue shifts. There are also pixelations (shown as horizontal bars) that appear around the perimeter of tissue sections.
Figure 3: Demonstration of proper and improper crop-centering. In some cases, tissue sections would appear at the edge of a slide, resulting in a lack of space to cut the pre-set perimeter of 6000 × 5000. This often resulted in images that were not centered and had smaller image sizes compared to other tissue sections. To correct this anomaly, an additional step was taken to first add a 2000-pixel border before crop-centering.

Figure 4: Noisy background removal through foreground extraction. WSI artifacts were removed using a foreground extraction process. This involved creating a binary mask using Otsu thresholding, selecting the largest contour (i.e., the centralized WSI, eliminating all smaller artifacts that may surround the tissue), and then using this binary mask as a reference for which regions of the original image to extract. Finally, a clean white background was re-added, leading to a cleaned tissue slide.
Figure 5: Whole-image-level epidermis model trained on the U-Net architecture. To remove the epidermis, which provided little value in terms of vascular reconstruction but hindered the registration process, a U-Net model was trained to detect and inpaint the epidermis.

2.1.3 Image Registration
Histopathological image registration poses several challenges: feature-based image registration methods are limited in their ability to identify distinct points throughout time-series images, while intensity-based methods alone result in a high number of local optima when the data are noisy. In this study, for the registration of the WSI stack, we propose an improved version of the symmetric, intensity-based affine registration framework based on a family of symmetric distance measures, introduced by Öfverstedt et al.8 More specifically, the authors combined intensity-based registration with either fuzzy point-to-set bidirectional or fuzzy point-to-set inwards distances into an asymmetric average minimal distance. The limitation of this method is that, compared to traditional similarity measures (i.e., Sum of Squared Differences, Pearson Correlation Coefficient, and Mutual Information), it requires substantial memory to store auxiliary data structures (e.g., a single paired-image registration may require up to 4GB of working memory). This results in extensive computation times. For example, the initial version of this algorithm took approximately one week to finalize the registration of our entire WSI dataset, even when executed on a high-performance cluster (HPC).
In order to optimize the execution time, we profiled this approach and deduced the following bottlenecks/flaws:
• Sequential image registration is slow to complete for all slides of a given tumor because slide s+1 requires the registered slide s (itself the result of registering slides s−1 and s) to initiate the subsequent registration. Thus, we cannot take advantage of the parallelization offered by the HPC. This dependency limits the scalability of the algorithm.
• Registration at the native resolution is unjustifiably costly for such a registration task, which uses a global approach estimating a single affine transformation applicable to all slide pixels. For instance, given a pair of RGB slides of 6000 × 5000 pixels each, re-scaling to 90% of the original size yields a registration time of 1961.59 sec with a mean absolute error (MAE) of 45.72, whereas re-scaling to 10% of the original size takes only 16.30 sec with an MAE of 45.53.
• The tumors analyzed in this study are ball-like; thus, the mid-section slides contain more tissue than the top/bottom slides. This registration approach does not account for this, since it stretches the slide tissue (affine transformation) to minimize the global error.
We propose a generic optimization scheme for WSI registration (Fig. 6): (i) choose a down-scaling percentage that does not degrade registration performance; (ii) register each pair of consecutive WSIs on a separate CPU (a job array in the case of an HPC); and (iii) apply a correction ratio based on the stretched and original tissue areas. To quantify the added value of the optimization scheme, we registered 220 consecutive H&E WSIs: the registration takes less than 10 minutes, compared to one week previously (without the optimization). The same HPC resources were used in both tests.
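A schematic sketch of this scheme in Python (multiprocessing stands in for the HPC job array, and OpenCV's ECC maximization is used as an illustrative affine estimator in place of the symmetric intensity-based method of Ref. 8; the area-based ratio correction is only indicated by a comment):

from multiprocessing import Pool
import numpy as np
import cv2

SCALE = 0.10  # down-scaling fraction found not to degrade registration accuracy

def estimate_affine(fixed_gray, moving_gray):
    # Illustrative affine estimator (ECC maximization); the pipeline itself uses
    # the symmetric intensity- and affine-based method of Ofverstedt et al.
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(fixed_gray.astype(np.float32),
                                   moving_gray.astype(np.float32),
                                   warp, cv2.MOTION_AFFINE, criteria, None, 5)
    return warp

def register_pair(pair):
    fixed, moving = pair                      # grayscale sections s and s+1
    small_f = cv2.resize(fixed, None, fx=SCALE, fy=SCALE)
    small_m = cv2.resize(moving, None, fx=SCALE, fy=SCALE)
    A = estimate_affine(small_f, small_m)
    A[:, 2] /= SCALE                          # rescale translation to native resolution
    # The area-based ratio correction (stretched vs. original tissue area) is
    # applied at this point in the full pipeline to avoid over-stretching small sections.
    return A

def register_stack(slides):
    pairs = list(zip(slides[:-1], slides[1:]))
    with Pool() as pool:                      # one worker per consecutive pair
        return pool.map(register_pair, pairs) # affine matrices, composed downstream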
Figure 6: Optimization scheme for whole slide image registration: affine transformation matrices are estimated for down-scaled pairs of consecutive slides (registered slide 1 through registered slide s), followed by a ratio correction.
2.1.4 2D Blood Vessel Segmentation
Similar to the epidermis inpainting discussed in Section 2.1.2, our blood vessel segmentation model was trained using the U-Net architecture, with the primary differences being that (i) the blood vessel model was trained at the patch level, with patch sizes of 512 × 512, and (ii) due to class imbalances and the variety of shapes and locations (even within patches), training was extended to 1000 epochs (Fig. 7). Furthermore, the data were trained and tested using a 5-fold cross-validation approach, with each fold split into 80% training and 20% test. The final hyperparameters included: batch size=32, the ADAM optimizer, learning rate=3 × 10−5, epochs=1000, and BCE DICE. As stated earlier, BCE DICE was used for its ability to better discern boundaries. This is particularly imperative in the case of vascular network segmentation: our hypothesis rests on the evaluation of non-genetic transitions that may impact chemotherapeutic response, and capturing the branching and size of blood vessel networks shapes our understanding of nutrient and oxygen diffusion, so we need the most accurate representation of the network possible.
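A minimal sketch of the 5-fold split (each fold: 80% training, 20% test) over the 1000 annotated patches; the U-Net training itself is omitted, and scikit-learn is assumed here purely for illustration:

import numpy as np
from sklearn.model_selection import KFold

patch_ids = np.arange(1000)                 # 1000 annotated 512x512 patches
kfold = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(patch_ids)):
    # Each fold: 800 training patches and 200 test patches.
    print(f"fold {fold}: {len(train_idx)} training patches, {len(test_idx)} test patches")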
Figure 7: Patch-level blood vessel segmentation model trained on U-Net architecture.
2.1.5 Pipeline Evaluation
The evaluation metrics used to assess 2D segmentation of the blood vessels included: BCE DICE, MAE, accuracy (ACC = (TP + TN)/(TP + TN + FP + FN)), Dice coefficient (DICE = 2TP/(2TP + FP + FN)), Dice loss (DICE LOSS = 1 − DICE), the F1-score (F1 = 2 × (PREC × REC)/(PREC + REC)), precision (PREC = TP/(TP + FP)), recall (REC = TP/(TP + FN)), specificity (SPEC = TN/(TN + FP)), and the Matthews Correlation Coefficient (MCOR = ((TP × TN) − (FP × FN)) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))). Image registration performance was validated using the Structural Similarity Index (SSIM).
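For clarity, the confusion-matrix-based metrics written out as a small NumPy helper (an illustrative implementation, not the authors' evaluation code; BCE DICE, MAE, and SSIM are computed separately):

import numpy as np

def segmentation_metrics(pred, truth, eps=1e-8):
    """pred, truth: binary (0/1) arrays of the same shape."""
    tp = int(np.sum((pred == 1) & (truth == 1)))
    tn = int(np.sum((pred == 0) & (truth == 0)))
    fp = int(np.sum((pred == 1) & (truth == 0)))
    fn = int(np.sum((pred == 0) & (truth == 1)))
    acc  = (tp + tn) / (tp + tn + fp + fn + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    prec = tp / (tp + fp + eps)
    rec  = tp / (tp + fn + eps)
    spec = tn / (tn + fp + eps)
    f1   = 2 * prec * rec / (prec + rec + eps)
    mcor = (tp * tn - fp * fn) / (np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) + eps)
    return {"ACC": acc, "DICE": dice, "DICE_LOSS": 1 - dice, "PREC": prec,
            "REC": rec, "SPEC": spec, "F1": f1, "MCOR": mcor}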
2.1.6 3D Rendering and Interpolation
For a complete 3D reconstruction, a hybrid solution was created using rendering and interpolation, where rendering stacks existing data to create a preliminary vascular volume while interpolation fills the gaps between the rendered H&E tissue sections. Rendering was achieved with marching cubes. The shape- and distance-based method proposed by Schenk et al. was used for interpolation.9 Their proposed interpolation involves the following steps (a code sketch follows the list):
• First, binary scenes for the i and i+t images are established. While in their publication these are user-defined contours, in our application the binary scenes are the segmented and sequentially stacked WSIs.
• Gray-level distance maps are generated for the i and i+t binary scenes, with distance being relative to the binary scene boundaries. Distances within the boundaries are given positive values, while those outside of the boundaries are negative.
• Gray-level distance maps are interpolated using conventional gray-scale interpolation techniques, such as linear interpolation.
• The interpolated gray-level distance maps are converted back into interpolated binary scenes.
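A minimal sketch of this shape-based interpolation (signed distance maps and linear blending, after Schenk et al.) followed by marching-cubes surface extraction, assuming SciPy and scikit-image; not the exact implementation used in the pipeline:

import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import marching_cubes

def signed_distance(mask):
    """Signed distance of a binary (0/1) mask: positive inside, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(1 - mask)
    return inside - outside

def interpolate_sections(mask_a, mask_b, n_slices):
    """Generate n_slices intermediate binary sections between mask_a and mask_b."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    slices = []
    for k in range(1, n_slices + 1):
        t = k / (n_slices + 1)
        d = (1 - t) * da + t * db                 # linear gray-level interpolation
        slices.append((d > 0).astype(np.uint8))   # back to an interpolated binary scene
    return slices

# Stacking original + interpolated sections into a volume, then rendering:
# volume = np.stack([mask_a, *interpolate_sections(mask_a, mask_b, 2), mask_b])
# verts, faces, _, _ = marching_cubes(volume.astype(float), level=0.5)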
2D-level blood vessel model training was based on tiles extracted at the highest resolution. For the purposes of 3D rendering and interpolation, the whole slides at their highest resolution would be too large to export (requiring inordinate amounts of memory) and to process in a timely manner. Therefore, in the final phase of the pipeline, whole tissue sections were exported with a downsampling factor of 8. The highest resolution has a magnification of ×40, which translates to 0.25 μm per pixel. At a downsampling factor of 8, we have a magnification level of ×5 and a resolution of 2 μm per pixel. Given that the training of the U-Net (described in Section 2.1.4) was based on patch sizes of 512 × 512 at a resolution of 0.25 μm per pixel, the equivalent patch size at a downsampling factor of 8 is 64 × 64. As illustrated in Fig. 8, 512 × 512 patches at ×40 magnification (A) and 64 × 64 patches at ×5 magnification (C) capture the same contextual and spatial information within WSIs. If we were, hypothetically, to run patch-level segmentation on 512 × 512 patches at ×5 magnification (B), we would capture a larger spatial zone, which would lead to inefficient blood vessel segmentation, as our model was trained to see blood vessels 'close up' within these patches.
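As a quick check of this equivalence, the physical field of view (in micrometres) is preserved when both the patch size and the pixel size are scaled by the same downsampling factor:

full_res_um_per_px = 0.25                                  # x40 magnification
downsample = 8
low_res_um_per_px = full_res_um_per_px * downsample        # 2.0 um/px (x5 magnification)

full_patch_px = 512
equivalent_patch_px = full_patch_px // downsample          # 64 px
# Both patches cover 128 um x 128 um of tissue.
assert full_patch_px * full_res_um_per_px == equivalent_patch_px * low_res_um_per_px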
Figure 8: Patch sizes, magnifications, and resolutions. A patch size of 512 × 512 at a magnification of ×40 captures the same spatial and contextual information as a patch size of 64 × 64 at a magnification of ×5.
3.1 Data Preparation
3.1.1 Raw Data
The PDX model is illustrated in Fig. 9a. Cutaneous melanoma tumors were extracted from human patients during biopsy, then implanted onto immunodeficient mice for amplification. PDX samples underwent serial sectioning every 12 μm over a depth of 2 mm and were stained with H&E. Each tissue section had a thickness of 4 μm. This sectioning protocol resulted in gaps between tissue sections, which were imputed using interpolation methods (refer to Section 2.1.6). Slides were digitized into MIRAX format. Each slide contained two tissue sections (i.e., top and bottom), as shown in Fig. 9b; this approach was used to expedite data acquisition and reduce production costs. A total of 120 slides (240 tissue sections) were available. Blood vessels were annotated by an expert pathologist (co-author C.P.) using the open-source software QuPath.10
3.1.2 Sampling
In preparation for training using deep learning methods, WSIs along with their annotated counterparts were exported as tiles/patches from QuPath. Patches were sized 512×512 (with size selection based on preliminary experimentation with various patch sizes) and were exported at their highest resolution (0.25 μm per pixel). A total of 1000 patches with positive (i.e., presence of blood vessels) and negative (i.e., background/other molecular markers) cases were captured.

(a) Complete PDX model illustration. (b) Sample H&E slide with two tissue sections.
Figure 9: Human tumors are engrafted onto immunodeficient mice and grown. Samples harvested from the mice then undergo serial sectioning. Slides stained with hematoxylin and eosin (H&E) are digitized into whole slide images (WSIs).
4. RESULTS
2D segmentation performs generally well, as demonstrated in Table 1, with DICE > 80% in both training and test results. The 3D reconstruction illustrated in part (I) of Fig. 1 further validates this, as we can qualitatively see the majority of the blood vessels (particularly the larger blood vessels, which are suggested to play an important role in understanding chemotherapeutic response). Registration performance was qualitatively and quantitatively assessed, as shown in Table 2 and Fig. 10. We noticed that larger tissue sections had lower SSIMs compared to smaller sections, despite the alignment being near-perfect visually (Fig. 10). We assume the lower SSIM reflects the greater heterogeneity of blood vessels in the larger sections (i.e., many small blood vessels that do not appear across all large tissue sections). Smaller tissues, however, have larger blood vessels that recur across sections, and thus demonstrate higher performance given the similarity in features (SSIM > 80%). The qualitative and quantitative results from this study demonstrate an overall good performance. The final reconstructed 3D model (Fig. 1) demonstrates a clear vascular network consisting of large and small (capillary) blood vessels. Quantifying these vascular networks based on shape and spatial distribution can mark a stepping stone in understanding how non-genetic metabolic states could impact chemotherapeutic response in melanoma patients.
5. NOVELTY OF WORK
We present an end-to-end approach for reconstructing a 3D vascular model from WSIs. An original feature of our pipeline is its ability to handle sparse data, given that not all tissue sections were readily available. This is achieved through rendering and interpolation. Furthermore, the pipeline has been built to accommodate the multitude of challenges faced when processing WSIs. These include: conducting patch-level training and segmentation; creating uniformity in the available data; and improving registration by using inpainting, crop-centering, and artifact removal techniques to minimize registration-related errors. Other notable components of our project include the consideration of epistemic uncertainty: understanding human-related or reducible errors, and using statistical techniques to detect and minimize the impact of uncertainty on the pipeline. Our work further includes optimization via parallel processing to improve the efficiency and speed of our reconstruction process.

Table 1: Average training and test results for 2D-level blood vessel segmentation based on U-Net architecture across 5-fold cross-validation.
Metric      Training Mean (SD)    Testing Mean (SD)
BCE DICE    0.081 (0.019)         0.183 (0.083)
MAE         0.009 (0.001)         0.027 (0.017)
ACC         0.992 (0.001)         0.973 (0.016)
DICE        0.886 (0.037)         0.813 (0.086)
DICE LOSS   0.114 (0.037)         0.188 (0.085)
REC         0.836 (0.052)         0.767 (0.129)
PREC        0.842 (0.049)         0.847 (0.082)
F1          0.832 (0.054)         0.795 (0.112)
SPEC        0.996 (0.001)         0.985 (0.015)
MCOR        0.830 (0.053)         0.784 (0.105)
SD = Standard Deviation
Table 2: Quantitative assessment of registered images using SSIM.
        Mean     SE       Median   SD       Min.     Max.
SSIM    0.864    0.0032   0.861    0.048    0.768    0.964

SE = Standard Error; SD = Standard Deviation
(a) Overlap between large tissue sections following registration. (b) Overlap between small tissue sections following registration.
Figure 10: Qualitative assessment of registered images. While visually near-perfect, the variability in SSIM between larger and smaller tissues is attributable to the greater heterogeneity of blood vessels in larger tissues.
6. CONCLUSIONS
In this study, we have demonstrated that AI can be used to develop automated and clinically operational pipelines for WSIs. Using a combination of deep learning, image processing, registration, statistical, and parallel processing techniques, we can develop 3D models that adapt to WSI challenges and produce results that elucidate vascular networks, which could help predict chemotherapy efficacy in melanoma patients. This body of work can be readily extended to other biomarkers and cancer types and is not limited in its scope or application.

ACKNOWLEDGMENTS
This study was supported by The Cancer ITMO of the French National Alliance for Life and Health Sci- ences (AVIESAN): “MIC 2020” – project ref. C20051DS ”MALMO” (Mathematical Approaches to Modelling Metabolic Plasticity and Heterogeneity in Melanoma).
REFERENCES
[1] Arnold, M., Singh, D., Laversanne, M., Vignat, J., Vaccarella, S., Meheus, F., Cust, A. E., de Vries, E., Whiteman, D. C., and Bray, F., “Global burden of cutaneous melanoma in 2020 and projections to 2040,” JAMA Dermatol. 158, 495–503 (May 2022).
[2] Erdei, E. and Torres, S. M., “A new understanding in the epidemiology of melanoma,” Expert Rev. Anti- cancer Ther. 10, 1811–1823 (Nov. 2010).
[3] Davis, L. E., Shalin, S. C., and Tackett, A. J., “Current state of melanoma diagnosis and treatment,” Cancer Biol. Ther. 20, 1366–1379 (Aug. 2019).
[4] Anderson, N. M. and Simon, M. C., “The tumor microenvironment,” Curr. Biol. 30, R921–R925 (Aug. 2020).
[5] Ratnikov, B. I., Scott, D. A., Osterman, A. L., Smith, J. W., and Ronai, Z. A., “Metabolic rewiring in melanoma,” Oncogene 36, 147–157 (Jan. 2017).
[6] Hodgkinson, A., Trucu, D., Lacroix, M., Le Cam, L., and Radulescu, O., “Computational model of het- erogeneity in melanoma: Designing therapies and predicting outcomes,” Frontiers in Oncology 12, 857572 (2022).
[7] Arslan, J. and Benke, K. K., “Progression of geographic atrophy: Epistemic uncertainties affecting mathematical models and machine learning,” Translational Vision Science & Technology 10, 3 (Nov. 2021).
[8] Öfverstedt, J., Lindblad, J., and Sladoje, N., “Fast and robust symmetric image registration based on distances combining intensity and spatial information,” IEEE Trans. Image Process. 28, 3584–3597 (2019).
[9] Schenk, A., Prause, G., and Peitgen, H.-O., “Efficient semiautomatic segmentation of 3D objects in medical images,” in [Medical Image Computing and Computer-Assisted Intervention – MICCAI 2000], Lecture notes in computer science, 186–195, Springer Berlin Heidelberg, Berlin, Heidelberg (2000).
[10] Bankhead, P., Loughrey, M. B., Fernández, J. A., Dombrowski, Y., McArt, D. G., Dunne, P. D., McQuaid, S., Gray, R. T., Murray, L. J., Coleman, H. G., James, J. A., Salto-Tellez, M., and Hamilton, P. W., “QuPath: Open source software for digital pathology image analysis,” Sci. Rep. 7, 16878 (Dec. 2017).