Reference

Pavkovic, M., Pantano, L., Gerlach, C.V. et al. Multi omics analysis of fibrotic kidneys in two mouse models. Sci Data 6, 92 (2019).

Samples

Samples from two mouse models were collected. The first is a reversible, chemically induced injury model (folic acid (FA)-induced nephropathy); the second is an irreversible, surgically induced fibrosis model (unilateral ureteral obstruction, UUO). mRNA and small RNA sequencing, as well as 10-plex tandem mass tag (TMT) proteomics, were performed on kidney samples from different time points over the course of fibrosis development. In the FA model, mice were sacrificed before the treatment (day 0) and 1, 2, 7, and 14 days after a single injection of folic acid. In the UUO model, mice were sacrificed before obstruction (day 0) and 3, 7, and 14 days after the ureter of the left kidney was obstructed by ligation. In both studies, kidneys were collected at each time point for total RNA isolation and protein sample preparation.

We will first explore the UUO transcriptome data.

Data sources

Doc                                                                               URL
Total RNA for the experiment on the unilateral ureteral obstruction (UUO) model   https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE118339
Supplementary material of the article with all the data tables (zip archive)      https://zenodo.org/record/2592516

Build metadata table

We build a rough, minimal metadata table from the information found in the data table. It should be improved by importing dataset-specific information for a more relevant analysis.
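As an illustration, the table shown below could be built from the sample (column) names of the count table along the following lines. This is only a sketch: the object name rawValues and the regular expressions are assumptions about the naming conventions, and the colours are the arbitrary choices listed in the table.

#### Build a rough metadata table (sketch) ####
sampleNames <- colnames(rawValues)                       # e.g. "normal_1", "day3_2", ...
metadata <- data.frame(
  dataType     = "transcriptome",
  sampleName   = sampleNames,
  condition    = sub("_\\d+$", "", sampleNames),         # strip the replicate suffix
  sampleNumber = as.integer(sub("^.*_", "", sampleNames)),
  stringsAsFactors = FALSE)

## Assign one colour per condition (arbitrary palette)
conditionColors <- c(normal = "#BBFFBB", day3 = "#FFBB44",
                     day7 = "#FF8800", day14 = "#FF4400")
metadata$color <- conditionColors[metadata$condition]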

Rough metadata table. Data type = transcriptome. Dataset = uuo.
dataType       sampleName  condition  sampleNumber  color
transcriptome  normal_1    normal     1             #BBFFBB
transcriptome  normal_2    normal     2             #BBFFBB
transcriptome  day3_1      day3       1             #FFBB44
transcriptome  day3_2      day3       2             #FFBB44
transcriptome  day3_3      day3       3             #FFBB44
transcriptome  day7_1      day7       1             #FF8800
transcriptome  day7_2      day7       2             #FF8800
transcriptome  day7_3      day7       3             #FF8800
transcriptome  day14_1     day14      1             #FF4400
transcriptome  day14_2     day14      2             #FF4400

Compute sample-wise statistics

We describe the data by computing sample-wise descriptors of central tendency (mean, median), dispersion (standard deviation, inter-quartile range) and selected percentiles (5, 25, 50, 75 and 95).

dataType       sampleName  condition  sampleNumber  color    mean    median  sd      IQR     min      perc05  Q1      Q3      perc95  max
transcriptome  normal_1    normal     1             #BBFFBB  1881.3  731.14  3756.4  1652.0   1.4976  48.588  239.83  1891.8  7463.8  84183
transcriptome  normal_2    normal     2             #BBFFBB  1675.7  675.93  3266.2  1483.4   1.1899  46.894  225.55  1709.0  6559.9  73210
transcriptome  day3_1      day3       1             #FFBB44  1715.6  843.60  2858.5  1652.1   6.7569  76.704  318.55  1970.7  6050.8  61663
transcriptome  day3_2      day3       2             #FFBB44  1912.4  943.17  3169.4  1870.2   9.5006  82.463  353.51  2223.7  6702.8  67802
transcriptome  day3_3      day3       3             #FFBB44  1788.7  879.47  3032.2  1716.6   8.2186  78.684  329.64  2046.2  6258.7  66225
transcriptome  day7_1      day7       1             #FF8800  1535.7  772.21  2589.1  1503.8  10.1767  71.455  297.81  1801.6  5254.4  64460
transcriptome  day7_2      day7       2             #FF8800  1767.2  865.20  3045.3  1747.5   4.3223  79.410  334.24  2081.7  5972.5  70714
transcriptome  day7_3      day7       3             #FF8800  1604.9  803.62  2762.1  1593.5   7.2826  71.727  304.52  1898.0  5375.4  65294
transcriptome  day14_1     day14      1             #FF4400  1566.7  760.19  3055.8  1474.5  12.6918  72.337  297.53  1772.1  5247.2  78074
transcriptome  day14_2     day14      2             #FF4400  1482.4  714.26  2961.8  1399.4   4.1563  64.983  277.37  1676.7  4951.8  82035
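These descriptors could be computed along the following lines (a sketch, assuming rawValues is the numeric count table with one column per sample and metadata the table built above):

#### Sample-wise statistics (sketch) ####
sampleStats <- data.frame(
  metadata,
  mean   = apply(rawValues, 2, mean),
  median = apply(rawValues, 2, median),
  sd     = apply(rawValues, 2, sd),
  IQR    = apply(rawValues, 2, IQR),
  min    = apply(rawValues, 2, min),
  perc05 = apply(rawValues, 2, quantile, probs = 0.05),
  Q1     = apply(rawValues, 2, quantile, probs = 0.25),
  Q3     = apply(rawValues, 2, quantile, probs = 0.75),
  perc95 = apply(rawValues, 2, quantile, probs = 0.95),
  max    = apply(rawValues, 2, max))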

Distribution of raw values

Distribution of raw counts

The distribution of raw counts is not very informative, because the range is driven by a few outliers, i.e. features with huge values relative to the rest of the distribution.

This problem is particularly acute for RNA-seq data: even with a strong zoom on the abscissa range from 0 to 500, the histogram shows a steep drop in the first bins.

Distribution of raw counts - two-sided trimming

For visualization, it is sometimes convenient to truncate both tails of the distribution, in order to avoid both the outliers (right tail) and the sometimes large number of zero values (left tail). This is particularly useful for single-cell RNA-seq data, which can have a “zero-inflated” distribution, but it also helps for other data types.

Distribution of raw counts
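The trimmed histogram can be obtained along these lines (a sketch; the 5% and 95% cut-offs are arbitrary choices for illustration):

#### Two-sided trimming for visualization (sketch) ####
allCounts <- as.vector(as.matrix(rawValues))                  # pool all samples
trimLimits <- quantile(allCounts, probs = c(0.05, 0.95))      # arbitrary cut-offs
trimmedCounts <- allCounts[allCounts >= trimLimits[1] & allCounts <= trimLimits[2]]
hist(trimmedCounts, breaks = 100,
     main = "Distribution of raw counts (two-sided trimming)",
     xlab = "Raw counts")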

Log2-transformed values

A typical normalising approach is to apply a log2 transformation.

For RNA-seq data, this may however create a problem when the count of a given gene in a given sample is 0. To circumvent this, we can add an epsilon (\(\epsilon = 0.1\)) before the log2 transformation.

Note: Pavkovic’s “raw counts” are actually decimal numbers and their minimal value is higher than 0. The epsilon is thus not required here, but we keep it in this script in order to enable re-using it with other data sources.
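A minimal sketch of this transformation, assuming rawValues as above:

#### Log2 transformation (sketch) ####
epsilon <- 0.1                            # avoids log2(0) for zero counts
log2Values <- log2(rawValues + epsilon)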

Feature filtering

We filter out the features having very low values across all the samples, which we consider undetected.

Box plot of the log2-transformed values before (left) and after (right) feature filtering.

We filtered out all the features whose maximal count value across all samples was lower than 100. Among the 8508 genes from the raw count table, 0 were considered undetected according to this criterion. We use the remaining 8508 genes for the subsequent analyses.
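A minimal sketch of this filter, using the object names introduced above:

#### Feature filtering (sketch) ####
detectionThreshold <- 100                                    # max count below this -> undetected
detectedFeatures <- apply(rawValues, 1, max) >= detectionThreshold
log2Filtered <- log2Values[detectedFeatures, ]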

Standardization: centering

Before going any further, it is important to ensure some standardization of the samples, in order to correct for biases due to inter-sample differences in sequencing depth (RNA-seq) or other sample-specific biases.

For the sake of simplicity, we will use here a very simple criterion: median-based centering.

When we start from the raw values, the principle is to divide the counts of each sample by a sample-specific scaling factor in order to bring each sample to the same median count.

Since we are working with log2-transformed values, we subtract a sample-specific offset instead, which is equivalent to applying a ratio to the raw values.
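A minimal sketch of median-based centering of the log2-transformed, filtered values; adding back the overall median is one possible convention to keep the values on a comparable scale:

#### Median-based centering (sketch) ####
sampleMedians <- apply(log2Filtered, 2, median)
log2MedianCentered <- sweep(log2Filtered, 2, sampleMedians, FUN = "-")
## Optionally add back the overall median so the values keep their original range
log2MedianCentered <- log2MedianCentered + median(sampleMedians)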

Box plot of the log2-transformed feature-filtered values before (left) and after (right) median-based standardization.

As expected, after centering all the sample medians appear aligned on the box plot.

Standardization: scaling

The second step of standardization is to ensure homogeneous dispersion between samples. This relies on some biological assumptions, so it has to be evaluated on a case-by-case basis.

My trick: for high-throughput data, I generally use the inter-quartile range (IQR) rather than the standard deviation as the scaling factor. This ensures robustness to outliers, which can have a very strong impact for some data types (e.g. RNA-seq).
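A minimal sketch of IQR-based scaling applied to the centered values, using the object names above:

#### IQR-based scaling (sketch) ####
sampleIQRs <- apply(log2MedianCentered, 2, IQR)
log2Standardized <- sweep(log2MedianCentered, 2, sampleIQRs, FUN = "/")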

Box plot of the log2-transformed feature-filtered values before (left) and after (right) IQR-based scaling.

Raw versus standardized values

At the end of this journey, we can compare the raw and the standardized values. Needless to say, the preceding steps substantially transformed the data. Since this will likely impact all the subsequent analyses, it is worth making sure we made the right choices at each step (filtering, log2 transformation, centering, scaling). This should be evaluated based on your knowledge of your data.

Box plot of the raw values (left) and the final data after feature filtering, normalization by log2 transformation, centering and scaling (right).

Violin plots

We can also inspect the distribution of counts per sample with the vioplot() function.
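A minimal sketch using the formula interface of vioplot (supported in recent versions of the package); the long-format reshaping is only there to control the sample order:

#### Violin plots per sample (sketch) ####
library(vioplot)

longValues <- data.frame(
  value  = as.vector(as.matrix(log2Standardized)),
  sample = factor(rep(colnames(log2Standardized),
                      each = nrow(log2Standardized)),
                  levels = colnames(log2Standardized)))      # keep sample order

vioplot(value ~ sample, data = longValues,
        col = metadata$color,
        main = "Standardized values per sample")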

Violin plots of the raw values (left) and the final data after feature filtering, normalization by log2 transformation, centering and scaling (right).

Scatter plots

We can also compare the counts between pairs of samples with scatter plots drawn with the plot() function.
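A minimal sketch comparing two samples (here the first two columns, an arbitrary choice) on the standardized scale:

#### Sample-versus-sample scatter plot (sketch) ####
plot(log2Standardized[, 1], log2Standardized[, 2],
     xlab = colnames(log2Standardized)[1],
     ylab = colnames(log2Standardized)[2],
     col = "grey40")
abline(a = 0, b = 1, col = "blue")   # diagonal: identical values in both samples

All pairwise comparisons can be drawn at once with pairs(log2Standardized), at the cost of a denser figure.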

Exporting the result

We export the pre-processed data in separate tables for further reuse.

#### Save tables  ####

outfiles <- vector()

## Raw counts, all the variables
outfiles["raw"] <- file.path(outdirs$dataset, 
                             paste0(parameters$filePrefix, 
                                    "raw.tsv.gz"))
write.table(x = format(digits = 3, big.mark   = "", decimal.mark = ".", rawValues), 
            dec = ".", 
            file = gzfile(outfiles["raw"], "w"), 
            quote = FALSE, 
            sep = "\t")

## Log2-transformed counts, all the features
outfiles["log2"] <- file.path(outdirs$dataset, 
                              paste0(parameters$filePrefix, 
                                     "log2.tsv.gz"))
write.table(x = format(digits = 3, big.mark   = "", decimal.mark = ".", log2Values), 
            dec = ".", 
            file = gzfile(outfiles["log2"], "w"), 
            quote = FALSE, sep = "\t")

## Filtered features only, log2-transformed values
outfiles["filtered"] <- file.path(outdirs$dataset, 
                                  paste0(parameters$filePrefix, 
                                         "log2_filtered.tsv.gz"))
write.table(x = format(digits = 3, big.mark   = "", decimal.mark = ".", log2Filtered), 
            dec = ".", 
            file = gzfile(outfiles["filtered"], "w"), 
            quote = FALSE, sep = "\t")

## Filtered genes only, log2-transformed and centered but not scaled data
outfiles["centered"] <- file.path(outdirs$dataset, 
                                  paste0(parameters$filePrefix, 
                                         "log2_median-centered.tsv.gz"))
write.table(x = format(digits = 3, big.mark   = "", decimal.mark = ".", log2MedianCentered), 
            dec = ".", 
            file = gzfile(outfiles["centered"], "w"), 
            quote = FALSE, sep = "\t")


## Filtered genes only, log2-transformed and standardized counts
outfiles["standardized"] <- file.path(outdirs$dataset, 
                                      paste0(parameters$filePrefix, 
                                             "log2_standardized.tsv.gz"))
write.table(x = format(digits = 3, big.mark   = "", decimal.mark = ".", log2Standardized), 
            dec = ".", 
            file = gzfile(outfiles["standardized"], "w"), 
            quote = FALSE, sep = "\t")

## Metadata
outfiles["metadata"] <- file.path(outdirs$dataset, 
                                  paste0(parameters$filePrefix, 
                                         "metadata.tsv"))
write.table(x = metadata,
            file = outfiles["metadata"],
            quote = FALSE, sep = "\t")

Memory image

We now save a memory image in order to be able to reload the data later exactly in the state we have reached, without having to redo all the steps.
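A minimal sketch of how the image could be saved; the file name is built here from the same parameters as the exported tables above, and the exact suffix is an assumption:

#### Memory image (sketch) ####
memImageFile <- file.path(outdirs$dataset,
                          paste0(parameters$filePrefix, "memimage.Rdata"))
save.image(file = memImageFile)      # saves all objects of the current session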

The memory image can be reloaded with a single R command: load("./results/pavkovic2019_proteome_uuo_memimage.Rdata")

Session info

R version 3.6.3 (2020-02-29)
Platform: x86_64-conda_cos6-linux-gnu (64-bit)
Running under: CentOS Linux 7 (Core)

Matrix products: default
BLAS/LAPACK: /shared/mfs/data/software/miniconda/envs/r-3.6.3/lib/libopenblasp-r0.3.9.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8    LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] vioplot_0.3.4 zoo_1.8-8     sm_2.2-5.6    stringr_1.4.0 knitr_1.28   

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.4.6    lattice_0.20-41 digest_0.6.25   grid_3.6.3      magrittr_1.5    evaluate_0.14   highr_0.8       rlang_0.4.6     stringi_1.4.6   rmarkdown_2.1   tools_3.6.3     xfun_0.14       yaml_2.2.1      compiler_3.6.3  tcltk_3.6.3     htmltools_0.4.0