FLASH: Framework for LArge-Scale Histomorphometry

This repository provides a Python framework to train, evaluate and apply segmentation networks for renal histological analysis. In particular, we trained a neural network based on the U-net architecture to segment several renal structures, including tubule (red), glomerulus (green), glomerular tuft (blue), non-tissue background (cyan; including veins and renal pelvis), artery (magenta), and arterial lumen (yellow), from PAS-stained histopathology data. In our experiments, we utilized human tissue data sampled from different cohorts, including in-house biopsies (UKA_B) and nephrectomies (UKA_N), the Human BioMolecular Atlas Program cohort (HuBMAP), the Kidney Precision Medicine Project cohort (KPMP), and the Validation of the Oxford classification of IgA Nephropathy cohort (VALIGA).

Installation

  1. Clone this repo using git:
git clone https://git-ce.rwth-aachen.de/labooratory-ai/flash.git/
  2. Install Miniconda and use conda to create a suitable Python environment from environment.yml, which lists all library dependencies:
conda env create -f ./environment.yml
  3. Activate the installed Python environment:
source activate python37
  4. Install PyTorch depending on your system:
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch
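You can verify the PyTorch installation and GPU visibility with, for example:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"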

Training

Train a network, e.g. using the following command:

python ./FLASH/training.py -m custom -s train_val_test -e 500 -b 6 -r 0.001 -w 0.00001

Note:

  • Beforehand, you need to specify the path to the results folder (variable: resultsPath) in training.py and the path to your dataset folder (variable: image_dir_base) in dataset.py; see the sketch after this list.
  • training.py is parameterized as follows:
training.py --model --setting --epochs --batchSize --lrate --weightDecay 
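
Both paths are set directly in the source files. A minimal sketch of the two edits, with placeholder paths (adjust them to your setup):

# In training.py: folder where training results (e.g. checkpoints, logs) are written
resultsPath = '/path/to/results'

# In dataset.py: root folder containing your dataset
image_dir_base = '/path/to/dataset'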

Application

Use getPredictionForBigPatch.py to apply the trained network for histopathological renal structure segmentation to data of your choice.

python ./FLASH/getPredictionForBigPatch.py

Note: Before running the script, you need to specify the path to the WSI (variable: WSIpath), the network path (variable: modelpath), and the path to a results folder (variable: resultspath).
The script segments a specified patch from the given WSI using the network. Determine the position of the patch of interest by providing the raw coordinates (e.g. as shown in QuPath) of its upper left corner (variable: patchCenterCoordinatesRaw), and determine its size via patchGridCellTimes, which specifies how many 516x516 patches are segmented row-wise as well as column-wise; see the sketch below.
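
A minimal configuration sketch using the variable names from the note above; all values are illustrative placeholders:

# In getPredictionForBigPatch.py (placeholder values, adjust to your data):
WSIpath = '/path/to/slide.svs'              # whole-slide image to segment
modelpath = '/path/to/trained_model.pt'     # trained network weights
resultspath = '/path/to/results'            # output folder for the segmentation

patchCenterCoordinatesRaw = (31000, 12000)  # raw upper-left corner coordinates (e.g. read off in QuPath)
patchGridCellTimes = 4                      # 4x4 grid of 516x516 patches, i.e. a 2064x2064 pixel region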

You can also apply the trained network to the exemplary image patches provided in the folder exemplaryData. These patches show various renal pathologies from the different cohorts and are listed below together with our ground-truth annotations:

  • UKA_B (exemplary patch with ground-truth annotation)
  • UKA_N (exemplary patch with ground-truth annotation)
  • HuBMAP (exemplary patch with ground-truth annotation)
  • KPMP (exemplary patch with ground-truth annotation)

Contact

Peter Boor, MD, PhD
Institute of Pathology
RWTH Aachen University Hospital
Pauwelsstrasse 30
52074 Aachen, Germany
Phone: +49 241 80 85227
Fax: +49 241 80 82446
E-mail: pboor@ukaachen.de

/**************************************************************************
*                                                                         *
*   Copyright (C) 2022 by RWTH Aachen University                          *
*   http://www.rwth-aachen.de                                             *
*                                                                         *
*   License:                                                              *
*                                                                         *
*   This software is dual-licensed under:                                 *
*   • Commercial license (please contact: pboor@ukaachen.de)              *
*   • AGPL (GNU Affero General Public License) open source license        *
*                                                                         *
***************************************************************************/