Last week in a laboratory meeting my supervisor gave me two pieces of advice regarding my calculations to predict how long and how strong the uptake motif has to be in order to be detectable by the technique I am going to use.
1. I should generate an equation that calculates how the expected distribution of the number of motifs in synthetic degenerate fragments changes after uptake. Differences between the input DNA and the fragments taken up arise because fragments carrying the motif are preferentially taken up.
2. I should start with one simple scenario, a motif of 4 bases, and finish the calculations for that motif before working with more complex scenarios.
Following this advice, I calculated the frequency distribution of a 4-base motif (AAAA) in 30-base degenerate fragments. At first I did this manually, using formulas described in previous posts; this time, however, I found an R function (dbinom) that did the job without the pain of manual calculation.
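The dbinom call amounts to evaluating a binomial distribution with 27 possible start positions (30 − 4 + 1) and a per-position match probability of (1/4)^4 = 1/256. A quick sketch of the same calculation in Python, using scipy.stats.binom.pmf as a stand-in for R's dbinom (treating positions as independent trials is an approximation, since occurrences of AAAA can overlap):

```python
from scipy.stats import binom

positions = 30 - 4 + 1   # 27 possible start positions for a 4-base motif
p_match = (1 / 4) ** 4   # 1/256 chance that 4 degenerate bases spell AAAA

# Frequency distribution of motifs per fragment (R: dbinom(0:27, 27, 1/256))
f_input = binom.pmf(range(positions + 1), positions, p_match)

print(f_input[:3])  # ~0.900 fragments with 0 motifs, ~0.095 with 1, ~0.005 with 2
```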
After I generated the frequency distribution in degenerate fragments, I generated two equations to calculate:
1. The frequency of fragments with no motifs in a 30-base fragment after uptake
n = number of occurrences of the motif in a fragment
Finput (n) = matrix of frequency distribution of n in the input pool
Fuptake (n) = matrix of frequency distribution of n in the uptake pool
i = vector of strength of bias favoring uptake of fragments (i.e., 1,2,2,2,2,2)
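Written out with the definitions above, the relationship between the two distributions is just a reweighting followed by renormalization. A minimal sketch, assuming the equation takes the form Fuptake(n) = i(n)·Finput(n) / Σ i(n)·Finput(n) (Python stand-in for the R calculation, with the 2-fold bias used below):

```python
import numpy as np
from scipy.stats import binom

positions, p_match = 27, 1 / 256
f_input = binom.pmf(np.arange(positions + 1), positions, p_match)

# Bias vector: fragments with 1 or more motifs are twice as likely to be taken up
i = np.ones(positions + 1)
i[1:] = 2

# F_uptake(n) = i(n) * F_input(n) / sum over n of i(n) * F_input(n)
f_uptake = i * f_input / np.sum(i * f_input)

print(round(f_uptake[0], 3), round(f_uptake[1], 3))  # ~0.818 vs 0.900 input; ~0.173 vs 0.095
```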
With these equations I was able to calculate the frequency distribution when having 1 or more motifs doubles the chance of being taken up (see graph below). I could also calculate it for increases of 1.5-fold, 1.1-fold, and even 1.01-fold.
In this graph, for fragments with 0 motifs the frequency drops by approximately 10% between the input (grey) and the fragments taken up (white). For fragments with one motif the frequency nearly doubles between input and uptake. I expected to see the same rate of increase between Finput and Fuptake for every n greater than or equal to 1, and this expectation was met: 0.173/0.095 = 1.82 for n = 1, as was 0.00882791/0.00485660 = 1.82 for n = 2.
It is important to clarify that the two-fold increase refers to uptake (i.e., the chance of being taken up), not to the increase in frequency. For instance, using the example from a previous post:
if my input is:
n          0      1      2      3      4
Finput     100    200    300    200    100
and 10% of the 900 fragments are taken up (90/900):
# taken up 10     20     30     20     10
Fuptake    0.111  0.222  0.333  0.222  0.111
now what if having n >= 1 increases uptake 2-fold:
# taken up 10     40     60     40     20
Fuptake    0.059  0.235  0.353  0.235  0.118
(These frequencies were calculated by dividing each count by the new total, e.g. 10/170. When I used my formula, the frequencies matched.)
The ratio between Fuptake and Finput for n >= 1 is ~1.06 (e.g. 0.235/0.222), not 2-fold.
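The worked example above can be sketched numerically (Python stand-in for the R calculation; the counts and the 10% base uptake rate are taken from the example):

```python
import numpy as np

counts = np.array([100, 200, 300, 200, 100])  # input fragments with n = 0..4 motifs
bias = np.array([1, 2, 2, 2, 2])              # n >= 1 doubles the chance of uptake

taken_up = 0.10 * bias * counts               # [10, 40, 60, 40, 20], total 170
f_uptake = taken_up / taken_up.sum()
f_input = counts / counts.sum()

print(np.round(f_uptake / f_input, 2))        # ratio is ~1.06 for every n >= 1, not 2
```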
Given that the ratio between Finput and Fuptake is the same for every n >= 1, and that the frequencies increase in proportion to their Finput, I can be confident that the calculations really reflect what I am looking for (the expected frequency of fragments taken up).
In the same way I was able to calculate the frequency distribution when the chance of being taken up increases with the number of motifs; the difference in the equation is that i is a vector of increasing values (e.g. 1.5, 2, 2.5, 3, 3.5) instead of a single repeated number.
In this graph, for fragments with 0 motifs the frequency drops by approximately 5% between the input (grey) and the fragments taken up (white). For fragments with one motif the frequency increases by 43%, while for fragments with two motifs the increase is 90%. In this example I expected to see a gradual increase between Finput and Fuptake as the number of motifs increases. This expectation was also met, since the increase grows steadily (90% − 43% = 47%) with each additional motif.
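Reading the example vector as a bias of i(n) = 1 + 0.5·n (i.e. 1 for no motifs, then 1.5, 2, 2.5, …), the same reweighting reproduces these percentages. This is a sketch under that assumption about the vector behind the graph:

```python
import numpy as np
from scipy.stats import binom

n = np.arange(28)                              # 0..27 possible motifs per fragment
f_input = binom.pmf(n, 27, 1 / 256)

# Assumed incremental bias: each extra motif adds 0.5 to the uptake weight
i = 1 + 0.5 * n                                # 1, 1.5, 2, 2.5, ...
f_uptake = i * f_input / np.sum(i * f_input)

print(round(1 - f_uptake[0] / f_input[0], 3))  # ~5% drop for n = 0
print(round(f_uptake[1] / f_input[1], 2))      # ~1.43x for n = 1
print(round(f_uptake[2] / f_input[2], 2))      # ~1.90x for n = 2
```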
Now that I have accomplished this, I need to determine whether the differences in frequency between the input and the fragments taken up will be statistically detectable.
I first used a goodness-of-fit chi-square test. Unfortunately, this test does not work with relative frequencies (e.g. 0.2, 0.04, 0.5), only with absolute counts (e.g. 6, 8, 10, 100), and the expected counts have to be higher than 5. So I multiplied my frequencies by 1e+6 to simulate absolute counts. The results showed what is expected of p-value-based tests when numbers are very large: the test considers even minimal deviations between frequencies significant. For example, even a 1.01-fold uptake bias generated a significant difference between the input and uptake distributions (p-value: 1e-16), because that 1.01-fold difference corresponds to roughly 1e+4 fragments. If instead I multiplied my relative frequencies by 1000, the 1.01-fold increase was no longer significant (p-value: 0.9569).
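The dependence on total count can be demonstrated with a toy version of the comparison (scipy.stats.chisquare in place of R's chisq.test; the 5-bin distribution and depths here are illustrative, so the exact p-values differ from the ones quoted above):

```python
import numpy as np
from scipy.stats import chisquare

f_input = np.array([1, 2, 3, 2, 1]) / 9        # toy input distribution for n = 0..4
bias = np.array([1, 1.01, 1.01, 1.01, 1.01])   # 1.01-fold uptake bias for n >= 1
f_uptake = bias * f_input / np.sum(bias * f_input)

# The same relative deviation tested at two simulated sequencing depths
ps = {}
for total in (1_000, 1_000_000):
    stat, p = chisquare(f_obs=f_uptake * total, f_exp=f_input * total)
    ps[total] = p

print(ps)  # the deviation is invisible at 1e3 counts but detectable at 1e6
```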
The results of this preliminary analysis show that the power to detect significant differences in the synthetic fragments depends on the read depth to which the fragments will be sequenced. Higher read coverage provides more discriminating power, detecting significant differences even for marginal effects such as a 1.01-fold increase in uptake. However, I have to be careful about false positives, since at high depth the test has too much power.
In an attempt to find an alternative way of calculating the p-values (one that deals better with the stochastic changes in the expected number of fragments when large numbers are compared), I calculated goodness-of-fit chi-square p-values using Monte Carlo simulations, with the expected frequencies used to estimate the null distribution (via the option simulate.p.value = TRUE of the chisq.test function).
When a fold increase of 1.01 was tested, the chi-square test with simulated p-values gave me a p-value of 0.02, which is still significant but more conservative than the 1e-16 I calculated before.
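R's simulate.p.value option works by resampling tables under the null hypothesis; a minimal hand-rolled version of the same idea (Python stand-in, using the toy numbers from the sketch above, so the p-value is illustrative rather than the 0.02 quoted here):

```python
import numpy as np

rng = np.random.default_rng(0)

f_input = np.array([1, 2, 3, 2, 1]) / 9
bias = np.array([1, 1.01, 1.01, 1.01, 1.01])
f_uptake = bias * f_input / np.sum(bias * f_input)

total = 1_000_000
expected = f_input * total
observed = f_uptake * total
stat = np.sum((observed - expected) ** 2 / expected)  # chi-square statistic

# Monte Carlo null: resample counts from the input frequencies, recompute the statistic
sims = rng.multinomial(total, f_input, size=1999)
sim_stats = np.sum((sims - expected) ** 2 / expected, axis=1)

# Simulated p-value, as in chisq.test(..., simulate.p.value = TRUE)
p = (np.sum(sim_stats >= stat) + 1) / (len(sim_stats) + 1)
print(round(p, 3))
```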
I personally think that simulating p-values deals better with the stochasticity expected given the high power of the test at a high read depth. However, I will expand on this in a separate blog post.