Commit 5f5c8f79 authored by Nassim Bouteldja's avatar Nassim Bouteldja

Update README.md

parent f7750903
# **FLASH**: **F**ramework for **LA**rge-**S**cale **H**istomorphometry
This repository provides a Python framework to train, evaluate, and apply segmentation networks for renal histological analysis. In particular, we trained an [nnUnet](https://github.com/MIC-DKFZ/nnUNet) for kidney tissue segmentation, followed by training another [U-net-like](https://arxiv.org/pdf/1505.04597.pdf) CNN for the segmentation of several renal structures including ![#ff0000](https://via.placeholder.com/15/ff0000/000000?text=+) tubulus, ![#00ff00](https://via.placeholder.com/15/00ff00/000000?text=+) glomerulus, ![#0000ff](https://via.placeholder.com/15/0000ff/000000?text=+) glomerular tuft, ![#00ffff](https://via.placeholder.com/15/00ffff/000000?text=+) non-tissue background (including veins and renal pelvis), ![#ff00ff](https://via.placeholder.com/15/ff00ff/000000?text=+) artery, and ![#ffff00](https://via.placeholder.com/15/ffff00/000000?text=+) arterial lumen from PAS-stained histopathology data. In our experiments, we utilized human tissue data sampled from several cohorts: in-house biopsies (UKA_B) and nephrectomies (UKA_N), the *Human BioMolecular Atlas Program* cohort (HuBMAP), the *Kidney Precision Medicine Project* cohort (KPMP), and the *Validation of the Oxford classification of IgA Nephropathy* cohort (VALIGA).
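The color coding above can be summarized as a simple lookup table. A minimal sketch, assuming one entry per class; the mapping of class names to label indices here is illustrative and not taken from the repository:

```python
# Hypothetical mapping of segmentation classes to the RGB colors shown above.
# The class order is illustrative; the actual label indices in the repo may differ.
CLASS_COLORS = {
    "tubulus": (255, 0, 0),                   # #ff0000
    "glomerulus": (0, 255, 0),                # #00ff00
    "glomerular tuft": (0, 0, 255),           # #0000ff
    "non-tissue background": (0, 255, 255),   # #00ffff (veins, renal pelvis)
    "artery": (255, 0, 255),                  # #ff00ff
    "arterial lumen": (255, 255, 0),          # #ffff00
}

def color_of(class_name: str) -> tuple:
    """Return the RGB triple used to visualize a segmentation class."""
    return CLASS_COLORS[class_name]
```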
# Installation
1. Clone this repo using [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git):<br>
```
training.py --model --setting --epochs --batchSize --lrate --weightDecay
```
- We trained the prior tissue segmentation network using the [nnUnet repo](https://github.com/MIC-DKFZ/nnUNet).
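The training options listed above might be parsed along the following lines. This is only a sketch: the option types and the default values are assumptions, not taken from *training.py*:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of a CLI matching the options shown above; defaults are assumptions.
    p = argparse.ArgumentParser(
        description="Train a renal structure segmentation network")
    p.add_argument("--model", type=str, help="network architecture to train")
    p.add_argument("--setting", type=str, help="experimental setting / data split")
    p.add_argument("--epochs", type=int, default=500)
    p.add_argument("--batchSize", type=int, default=6)
    p.add_argument("--lrate", type=float, default=0.001)
    p.add_argument("--weightDecay", type=float, default=0.00001)
    return p

# Example invocation with hypothetical argument values:
args = build_parser().parse_args(
    ["--model", "custom", "--setting", "train_val_test", "--epochs", "500"]
)
```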
# Application
Use *segment_WSI.py* to apply the trained networks for tissue and histopathological renal structure segmentation to data of your choice.
```
python ./FLASH/segment_WSI.py
```
Note: Before running the script, you need to specify the path to the image folder (variable: *WSIpath*), the paths to both trained networks (variable: *modelpath*), and the path to a results folder (variable: *resultspath*).<br>
In particular, the script will segment a specified patch from the given WSI using the network. Determine the position of the patch of interest by providing the raw coordinates (e.g. the coordinates shown in QuPath) of its upper left corner (variable: *patchCenterCoordinatesRaw*), and determine its size by modifying *patchGridCellTimes*. The latter variable specifies how many 516x516 patches are segmented row-wise as well as column-wise.<br>
<br>
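The relationship between the upper-left coordinates and *patchGridCellTimes* can be sketched as follows. The variable names mirror the README, but the arithmetic is an illustration, not the script's exact code:

```python
PATCH_SIZE = 516  # each grid cell is one 516x516 patch

def segmented_region(upper_left_xy, grid_cell_times):
    """Compute the pixel bounding box (x0, y0, x1, y1) of the segmented region.

    upper_left_xy:   raw (x, y) coordinates of the region's upper-left corner,
                     e.g. as read off in QuPath.
    grid_cell_times: number of 516x516 patches segmented row- and column-wise.
    """
    x, y = upper_left_xy
    extent = PATCH_SIZE * grid_cell_times
    return (x, y, x + extent, y + extent)
```

For instance, `segmented_region((1000, 2000), 3)` covers a 1548x1548-pixel region starting at (1000, 2000).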
You can also apply the trained network to our provided exemplary image patches contained in the folder *exemplaryData*. These patches show various pathologies associated with different murine disease models and are listed below together with our ground-truth annotations: