Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning

Abstract

A principal challenge in the analysis of tissue imaging data is cell segmentation, the task of identifying the precise boundary of every cell in an image. To address this problem we constructed TissueNet, a dataset for training segmentation models that contains more than 1 million manually labeled cells, an order of magnitude more than all previously published segmentation training datasets. We used TissueNet to train Mesmer, a deep-learning-enabled segmentation algorithm. We demonstrated that Mesmer is more accurate than previous methods, generalizes to the full diversity of tissue types and imaging platforms in TissueNet, and achieves human-level performance. Mesmer enabled the automated extraction of key cellular features, such as subcellular localization of protein signal, which was challenging with previous approaches. We then adapted Mesmer to harness cell lineage information in highly multiplexed datasets and used this enhanced version to quantify cell morphology changes during human pregnancy. All code, data and models are released as a community resource.
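
As an illustration of the workflow described above, the sketch below shows how the released Mesmer model can be applied to a two-channel (nuclear plus membrane) image with the deepcell-tf package, and how a simple subcellular feature, the fraction of a marker's signal falling inside the nucleus, can then be computed per cell. The API names (deepcell.applications.Mesmer, app.predict with its image_mpp and compartment arguments) follow the public deepcell-tf documentation, but exact signatures may vary between versions; treat this as a minimal sketch rather than the authors' exact analysis code.

```python
import numpy as np
from deepcell.applications import Mesmer

# Input: (batch, rows, cols, 2) with channel 0 = nuclear stain, channel 1 = membrane/cytoplasm.
# Random arrays stand in for real data here.
img = np.random.rand(1, 256, 256, 2).astype(np.float32)
marker = np.random.rand(256, 256)  # stand-in for a protein channel to quantify

app = Mesmer()  # downloads pretrained weights on first use
# 'both' is assumed to return whole-cell and nuclear masks; image_mpp is microns per pixel
masks = app.predict(img, image_mpp=0.5, compartment='both')
whole_cell, nuclear = masks[0, ..., 0], masks[0, ..., 1]

# Per-cell subcellular localization: fraction of the marker's signal inside the nucleus
nuclear_fraction = {}
for cell_id in np.unique(whole_cell):
    if cell_id == 0:  # 0 is background
        continue
    cell_px = whole_cell == cell_id
    total = marker[cell_px].sum()
    if total > 0:
        nuclear_fraction[cell_id] = marker[cell_px & (nuclear > 0)].sum() / total
```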

Data availability

The TissueNet dataset is available at https://datasets.deepcell.org/ for non-commercial use.

Code availability

All software for dataset construction, model training, deployment and analysis is available on our GitHub page, https://github.com/vanvalenlab/intro-to-deepcell. All code to generate the figures in this paper is available at https://github.com/vanvalenlab/publication-figures/tree/master/2021-Greenwald_Miller_et_al-Mesmer.

Acknowledgements

We thank K. Borner, L. Cai, M. Covert, A. Karpathy, S. Quake and M. Thomson for interesting discussions; D. Glass and E. McCaffrey for feedback on the manuscript; T. Vora for copy editing; R. Angoshtari, G. Barlow, B. Bodenmiller, C. Carey, R. Coffey, A. Delmastro, C. Egelston, M. Hoppe, H. Jackson, A. Jeyasekharan, S. Jiang, Y. Kim, E. McCaffrey, E. McKinley, M. Nelson, S.-B. Ng, G. Nolan, S. Patel, Y. Peng, D. Philips, R. Rashid, S. Rodig, S. Santagata, C. Schuerch, D. Schulz, Di. Simons, P. Sorger, J. Weirather and Y. Yuan for providing imaging data for TissueNet; the crowd annotators who powered our human-in-the-loop pipeline; and all patients who contributed samples for this study. This work was supported by grants from the Shurl and Kay Curci Foundation, the Rita Allen Foundation, the Susan E. Riley Foundation, the Pew Heritage Trust, the Alexander and Margaret Stewart Trust, the Heritage Medical Research Institute, the Paul Allen Family Foundation through the Allen Discovery Centers at Stanford and Caltech, the Rosen Center for Bioengineering at Caltech and the Center for Environmental and Microbial Interactions at Caltech (D.V.V.). This work was also supported by 5U54CA20997105, 5DP5OD01982205, 1R01CA24063801A1, 5R01AG06827902, 5UH3CA24663303, 5R01CA22952904, 1U24CA22430901, 5R01AG05791504 and 5R01AG05628705 from the NIH, W81XWH2110143 from the DOD, and other funding from the Bill and Melinda Gates Foundation, the Cancer Research Institute, the Parker Institute for Cancer Immunotherapy and the Breast Cancer Research Foundation (M.A.). N.F.G. was supported by NCI CA246880-01 and the Stanford Graduate Fellowship. B.J.M. was supported by the Stanford Graduate Fellowship and the Stanford Interdisciplinary Graduate Fellowship. T.D. was supported by the Schmidt Academy for Software Engineering at Caltech.

Author details

Author notes

  1. Cole Pavelchek

    Present address: Washington University School of Medicine in St. Louis, St. Louis, MO, USA

  2. Sunny Cui

    Present address: Department of Computer Science, Princeton University, Princeton, NJ, USA

  3. These authors contributed equally: Noah F. Greenwald, Geneva Miller.

Affiliations

  1. Cancer Biology Program, Stanford University, Stanford, CA, USA

    Noah F. Greenwald, Brianna J. McIntosh & Ke Xuan Leow

  2. Department of Pathology, Stanford University, Stanford, CA, USA

    Noah F. Greenwald, Alex Kong, Adam Kagel, Christine Camacho Fullaway, Ke Xuan Leow, Jaiveer Singh, Mara Fong, Gautam Chaudhry, Zion Abraham, Jackson Moseley, Shiri Warshawsky, Erin Soon, Shirley Greenbaum, Tyler Risom, Sean C. Bendall & Michael Angelo

  3. Division of Biology and Bioengineering, California Institute of Technology, Pasadena, CA, USA

    Geneva Miller, Erick Moen, Thomas Dougherty, Morgan Sarah Schwartz, Cole Pavelchek, Isabella Camplisson, William Graf & David Van Valen

  4. Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, USA

    Sunny Cui

  5. Department of Molecular Cell Biology, Weizmann Institute of Science, Rehovot, Israel

    Omer Bar-Tal & Leeat Keren

  6. Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA

    Mara Fong

  7. Immunology Program, Stanford University, Stanford, CA, USA

    Erin Soon

  8. Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA

    Travis Hollmann

Contributions

N.F.G., L.K., M.A. and D.V.V. conceived the project. E.M. and D.V.V. developed the human-in-the-loop approach. L.K. and M.A. developed the whole-cell segmentation approach. G.M., T.D., E.M., W.G. and D.V.V. developed DeepCell Label. G.M., N.F.G., E.M., I.C., W.G. and D.V.V. developed the human-in-the-loop pipeline. M.S.S., C.P., W.G. and D.V.V. developed Mesmer’s deep learning architecture. W.G., N.F.G. and D.V.V. developed model training software. C.P. and W.G. developed cloud deployment. M.S.S., S.C., W.G. and D.V.V. developed metrics software. W.G. developed plugins. N.F.G., A. Kong, A. Kagel, J.S. and O.B.-T. developed the multiplex image analysis pipeline. A. Kagel and G.M. developed the pathologist evaluation software. N.F.G., G.M. and T.H. supervised training data creation. N.F.G., C.C.F., B.J.M., K.X.L., M.F., G.C., Z.A., J.M. and S.W. performed quality control on the training data. E.S., S.G. and T.R. generated MIBI-TOF data for morphological analyses. S.C.B. assisted with experimental design. N.F.G., W.G. and D.V.V. trained the models. N.F.G., W.G., G.M. and D.V.V. performed data analysis. N.F.G., G.M., M.A. and D.V.V. wrote the manuscript. M.A. and D.V.V. supervised the project. All authors provided feedback on the manuscript.

Corresponding authors

Correspondence to Michael Angelo or David Van Valen.

Ethics statements

Competing interests

M.A. is an inventor on patent US20150287578A1. M.A. is a board member and investor in IonPath Inc. T.R. has previously consulted for IonPath Inc. D.V.V. and E.M. have filed a provisional patent for this work. The remaining authors declare no competing interests.

Additional information

Peer review information Nature Biotechnology thanks the anonymous reviewers for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 DeepCell Label annotation workflow.

a, How multichannel images are represented and edited in DeepCell Label. b, Scalable backend for DeepCell Label that dynamically adjusts the required resources based on usage, allowing concurrent annotators to work in parallel. c, Human-in-the-loop workflow diagram. Images are uploaded to the server, run through Mesmer to generate predictions, and cropped to facilitate error correction. These crops are sent to the crowd to be corrected, stitched back together, passed through quality control to ensure accuracy, and used to train an updated model.
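
The control flow of this human-in-the-loop cycle can be summarized in a few lines of Python. Every function below is a hypothetical placeholder (the real steps are DeepCell Label, the crowd-annotation service, expert quality control and model retraining); only the loop structure is meant to mirror the workflow in panel c.

```python
def predict(model, images):            # current model segments the new images
    return [model(im) for im in images]

def crop(images, predictions):         # split large images into crops for correction
    return list(zip(images, predictions))

def crowd_correct(crops):              # annotators fix model errors in DeepCell Label
    return [pred for _, pred in crops]

def quality_control(corrected):        # expert review before data enters the dataset
    return corrected

def stitch(corrected):                 # reassemble corrected crops into full label images
    return corrected

def train(model, images, labels):      # retrain/fine-tune on the enlarged dataset
    return model

def human_in_the_loop(model, unlabeled_batches):
    images, labels = [], []
    for batch in unlabeled_batches:
        predictions = predict(model, batch)       # the model does the bulk of the labeling
        corrected = crowd_correct(crop(batch, predictions))
        labels.extend(stitch(quality_control(corrected)))
        images.extend(batch)
        model = train(model, images, labels)      # each round yields a better model
    return model, images, labels
```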

Extended Data Fig. 2 Mesmer benchmarking.

a, PanopticNet architecture. Images are fed into a ResNet50 backbone coupled to a feature pyramid network. Two semantic heads produce pixel-level predictions. The first head predicts whether each pixel belongs to the interior, boundary or background of a cell, while the second head predicts the center of each cell. b, Relative proportion of preprocessing, inference and postprocessing time in the PanopticNet architecture. c, Evaluation of precision, recall and Jaccard index for Mesmer and previously published models (right) and models trained on TissueNet (left). d, Summary of TissueNet accuracy for Mesmer and selected models to facilitate future benchmarking efforts. e,f, Breakdown of the most prevalent error types (e) and less common error types (f) for Mesmer and previously published models demonstrates Mesmer’s advantages over previous approaches. g, Comparison of the size distribution of prediction errors for Mesmer (left) with nuclear segmentation followed by expansion (right) shows that Mesmer’s predictions are unbiased.
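
The architecture pattern in panel a can be sketched in a few dozen lines of Keras: a ResNet50 backbone, a small feature-pyramid-style decoder and two semantic heads, one classifying each pixel as cell interior, boundary or background and one regressing a map that peaks at cell centers. This is an illustrative approximation written for this summary, not the PanopticNet implementation released in deepcell-tf, and the layer widths are arbitrary.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_segmentation_model(input_shape=(256, 256, 2)):
    inputs = layers.Input(shape=input_shape)
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights=None, input_tensor=inputs)

    # Feature maps at several scales, as a feature pyramid network would use
    c3 = backbone.get_layer('conv3_block4_out').output   # 1/8 resolution
    c4 = backbone.get_layer('conv4_block6_out').output   # 1/16 resolution
    c5 = backbone.get_layer('conv5_block3_out').output   # 1/32 resolution

    # Minimal top-down pathway with lateral 1x1 convolutions
    p5 = layers.Conv2D(128, 1, padding='same')(c5)
    p4 = layers.Add()([layers.UpSampling2D()(p5),
                       layers.Conv2D(128, 1, padding='same')(c4)])
    p3 = layers.Add()([layers.UpSampling2D()(p4),
                       layers.Conv2D(128, 1, padding='same')(c3)])
    feat = layers.UpSampling2D(size=8, interpolation='bilinear')(p3)  # back to input size

    # Head 1: interior / boundary / background per pixel
    pixelwise = layers.Conv2D(64, 3, padding='same', activation='relu')(feat)
    pixelwise = layers.Conv2D(3, 1, activation='softmax', name='pixelwise')(pixelwise)

    # Head 2: a map that peaks at cell centers
    centers = layers.Conv2D(64, 3, padding='same', activation='relu')(feat)
    centers = layers.Conv2D(1, 1, activation='relu', name='centers')(centers)

    return Model(inputs, [pixelwise, centers])

model = build_segmentation_model()
```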

Extended Data Fig. 3 TissueNet accuracy comparisons.

a, Accuracy of specialist models trained on each platform type (rows) and evaluated on data from other platform types (columns) indicates good agreement within immunofluorescence- and mass spectrometry-based approaches, but not across distinct approaches. b, Accuracy of specialist models trained on each tissue type (rows) and evaluated on data from other tissue types (columns) demonstrates that models trained on only a single tissue type do not generalize as well to other tissue types. c, Quantification of F1 score as a function of the size of the dataset used for training. d-h, Quantification of individual error types as a function of the size of the dataset used for training. i, Representative images where Mesmer accuracy was poor, as determined by the image-specific F1 score. j, Impact of image blurring on model accuracy. k, Impact of downsampling and then upsampling the image on model accuracy. l, Impact of adding random noise to the image on model accuracy. All scale bars are 50 μm.
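
For reference, an object-level F1 score of the kind plotted in panel c can be illustrated with a simplified matching procedure: a predicted cell counts as a true positive when it overlaps a ground-truth cell with intersection-over-union above a threshold. The published benchmarking code further decomposes errors into the subtypes shown in panels d-h; the sketch below only covers the basic metric and is not the authors' evaluation code.

```python
import numpy as np

def object_f1(y_true, y_pred, iou_threshold=0.5):
    """y_true, y_pred: 2D integer label images (0 = background)."""
    true_ids = [i for i in np.unique(y_true) if i != 0]
    pred_ids = [i for i in np.unique(y_pred) if i != 0]
    matched_pred, tp = set(), 0
    for t in true_ids:
        t_mask = y_true == t
        best_iou, best_p = 0.0, None
        # only predictions overlapping this true cell are candidates
        for p in np.unique(y_pred[t_mask]):
            if p == 0 or p in matched_pred:
                continue
            p_mask = y_pred == p
            iou = np.logical_and(t_mask, p_mask).sum() / np.logical_or(t_mask, p_mask).sum()
            if iou > best_iou:
                best_iou, best_p = iou, p
        if best_iou >= iou_threshold:
            tp += 1
            matched_pred.add(best_p)
    precision = tp / len(pred_ids) if pred_ids else 0.0
    recall = tp / len(true_ids) if true_ids else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```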

Extended Data Fig. 4 3D segmentation.

Proof of concept for using Mesmer’s segmentation predictions to generate 3D segmentations. A z-stack of 3D data is fed to Mesmer, which produces separate 2D predictions for each slice. We computationally link the segmentation predictions from each slice to form 3D objects. This approach can form the basis for human-in-the-loop construction of training data for 3D models.
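
A minimal version of this slice-linking step can be written directly in NumPy: labels in consecutive 2D slices are joined into the same 3D object when a sufficient fraction of a cell's pixels overlap a single label in the slice below. The specific overlap rule and threshold here are assumptions made for illustration; the authors' implementation may use different criteria, such as Hungarian matching on pairwise IoU.

```python
import numpy as np

def link_slices(slices, min_overlap=0.5):
    """slices: list of 2D integer label images (0 = background), ordered by z."""
    linked = [slices[0].astype(np.int64)]
    next_id = int(linked[0].max()) + 1
    for z in range(1, len(slices)):
        prev, curr = linked[-1], slices[z]
        out = np.zeros_like(prev)
        for label in np.unique(curr):
            if label == 0:
                continue
            mask = curr == label
            below = prev[mask]
            below = below[below > 0]                # 3D labels under this cell in the slice below
            if below.size:
                counts = np.bincount(below)
                best = counts.argmax()              # most frequent label underneath
                if counts[best] / mask.sum() >= min_overlap:
                    out[mask] = best                # continue the existing 3D object
                    continue
            out[mask] = next_id                     # otherwise start a new 3D object
            next_id += 1
        linked.append(out)
    return np.stack(linked)                         # (z, rows, cols) volume of 3D labels
```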

Supplementary information

About this article

Cite this article

Greenwald, N.F., Miller, G., Moen, E. et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning.
Nat. Biotechnol. (2021). https://doi.org/10.1038/s41587-021-01094-0
