Hi, I am moving to my very own web page.
This new page is part of my New Year's goals of learning new skills, including learning HTML and CSS, improving my R, and tuning up my git knowledge.
Please follow this link to my new blog
I am back from vacation, and what better way to shake up my mind than a blog post!
Before my vacation break, I was trying to make a double-stranded 200 bp fragment with a randomized region, using two different long oligos: one 135 bases long, with 70 randomized bases and a 30-base region overlapping the other, 95-base oligo.
The 200 bp fragment is made by annealing the two oligos and then extending/amplifying them with a modified PCR protocol.
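As a quick sanity check on the design, the expected product length follows from the two oligo lengths minus the shared overlap (numbers taken from the description above; this is just illustrative arithmetic, and the variable names are my own):

```python
# Expected length of the product of two overlapping oligos:
# each oligo contributes its full length, but the overlap is shared.
oligo_a = 135  # bp, the oligo carrying the randomized region
oligo_b = 95   # bp, the other oligo
overlap = 30   # bp shared between the two oligos

expected_length = oligo_a + oligo_b - overlap
print(expected_length)  # 200
```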
In my last post, I described all the different strategies that I tried in order to obtain that fragment. However, despite my several attempts, I always obtained a 250 bp product instead of the expected 200 bp product.
After analyzing the oligo sequences in detail, I realized that unspecific annealing between regions of the oligos could be responsible for the extra 50 bases.
In order to test whether this mispriming event was responsible for the extra bases, I sequenced the purified 250 bp fragment using both left and right primers.
Analysis of the sequence traces from the forward primer and from the reverse primer showed no evidence of mispriming at all.
In fact, the sequenced fragment matched the expected 200 bp fragment sequence, and not the product expected if the mispriming event had taken place.
In conclusion, I have no idea why the synthesized fragment migrated as a 250 bp fragment in the agarose gel electrophoresis. However, I was able to show that the mispriming event did not happen. Two possibilities come to mind to explain this problem:
It is possible that some extra bases were added at the 3' end of the fragment. These extra bases might not be detected in the sequenced fragment, since the sequence trace loses resolution near the end of the fragment. However, even if this is the case, the extra bases are not going to have negative consequences for the Illumina barcode attachment step.
In this step, a set of primers containing a barcode and Illumina flow-cell adapter sequences will anneal to both ends of the fragment. Then, the fragments will be amplified using a short-cycle PCR protocol. Since the sequencing results show that the priming sites for these barcoding oligos are intact, the extra bases are not expected to have any effect.
Since no evidence of mispriming was detected, I consider it safe to try some uptake experiments using the synthesized 250 bp fragment and then quantify how much I can recover from wild-type Acinetobacter baylyi BD413 and from the no-uptake ADP239 comP::nptII mutant.
Lately I have been working on generating a 200 bp fragment that will contain a 69 bp randomized region. I will do this by using two large oligos with a 30 bp overlapping region. As shown in the figure below, the right oligo has an Illumina sequencing primer site, extra bases, and the overlapping region. The left oligo has an Illumina sequencing primer site, 3 fixed bases, 69 randomized bases, and the overlapping region with the right oligo.
The Illumina Nextera sequencing priming sites on the flanks (see figure below) will allow the fragment to be sequenced in both directions, if needed. Additionally, this region will act as a template for the primers used by Nextera to attach the barcodes and the adapters to the fragments to be sequenced (see figure below; barcodes are shown in grey and light blue, adapters in yellow and dark blue). Generating this 200 bp fragment is an essential part of my thesis, since I will use it as input DNA for uptake experiments with competent Acinetobacter baylyi and Thermus thermophilus. By comparing the sequences of the fragments that were taken up with those of the input DNA fragments, I will try to determine if these bacteria have an uptake bias for a particular sequence motif.
In a first experiment, I annealed the two oligos using 1.1 ug each of the right (38 pmol) and left (26.5 pmol) oligos, plus Tris and water, in a 20 ul reaction. The oligos were denatured at 95 degrees, and then the temperature was decreased at 0.1 degree/second until it reached 65 degrees, where it was held for 5 minutes.
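For reference, a quick back-of-the-envelope check (a Python sketch using the numbers above) shows that this slow ramp takes five minutes to get from denaturation to the annealing hold:

```python
# Time for the annealing ramp: 95 degrees down to 65 degrees at 0.1 degree/second
start_temp = 95.0  # denaturation temperature
end_temp = 65.0    # annealing hold temperature
ramp_rate = 0.1    # degrees per second

ramp_seconds = (start_temp - end_temp) / ramp_rate
print(ramp_seconds / 60)  # 5.0 minutes
```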
Then, I used 2 ul of this annealed-oligo reaction in each of 6 PCR reactions, using primers that amplify the 200 bp fragment.
Besides the 2 ul of annealed oligos, 4 of the 6 reactions used: 200 uM dNTPs, 0.2 uM primers, 1X buffer, and 2 units of OneTaq (NEB).
One reaction (called "extension only") used: 2 ul of annealed oligos, 200 uM dNTPs, 1X buffer, and 2 units of OneTaq (NEB), but no PCR primers.
One reaction ("no taq") used: 2 ul of annealed oligos, 200 uM dNTPs, 1X buffer, and 0.2 uM primers, but no Taq polymerase.
The amplification program included an initial extension step:
65 degrees 10 min
Followed by a standard PCR protocol:
95 degrees 2 min
95 degrees 30 sec ]
55 degrees 30 sec ] 1, 5, 10, 20 cycles
68 degrees 30 sec ]
95 degrees 5 min
The "no taq" and "extension only" samples were taken out of the thermocycler after the extension step. The remaining 4 reactions were amplified for 1, 5, 10, or 20 cycles.
Results showed that:
In the next experiment, I wanted to determine why I am getting this 250 bp band. My hypothesis was that not-fully-synthesized "intermediate" sequences, which are common in long oligo synthesis, were responsible for the unspecific band. With that in mind, I tested whether denaturing, re-annealing, and re-extending the oligos several times (without any PCR amplification, only extension) would remove, at least partially, these "intermediate" oligos that could be responsible for the 250 bp band observed.
In this experiment, I used ~220 ug of the right (38 pmol) and left (26.5 pmol) oligos, plus Tris and water, in a 20 ul reaction.
The oligos were then annealed and extended using three distinct protocols:
Results showed that:
At this point I still believed that the oligo mispriming originated from not-fully-synthesized oligo intermediates, so I gel-purified the annealed/extended product of the second reaction. Next, I did a PCR using a 1/100 dilution of the gel-purified and the non-purified annealed/extended products, at 4 different annealing temperatures each (62, 60, 58, and 55.7 degrees).
Results showed a 250 bp band with a very faint 200 bp band. It is surprising that even after gel purification I still get a 250 bp band, which is even more intense than the expected 200 bp band.
At this point, it seemed that there was some problem with the oligo sequences themselves, so I looked at the oligo design carefully to discover whether any region of the oligos is prone to mispriming.
By looking at the sequences of the oligos, I realized that a 16-base stretch of the overlapping region of the left primer (third row of the figure below) anneals perfectly (region in the black rectangle) with the extra bases of the extended forward strand (second row of the figure below).
This mispriming would leave a 5' overhang that could easily get extended from the 3' end of the forward strand and the 3' end of the reverse strand, generating a 245 bp sequence.
Additionally, the sequence located in the extra bases of the left primer is partially self-complementary (ccgcatcAGGTGGCACGAGgatgcgg).
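This partial self-complementarity can be checked programmatically. A minimal Python sketch (the sequence is copied from above; the function name is my own): the 7-base flanks of the stretch are exact reverse complements of each other, so the region can fold back on itself.

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence (case-insensitive)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

extra = "ccgcatcAGGTGGCACGAGgatgcgg"
left_flank = extra[:7]    # ccgcatc
right_flank = extra[-7:]  # gatgcgg

# The two flanks are reverse complements of each other,
# so this region can anneal to itself (hairpin-like structure).
print(revcomp(right_flank) == left_flank.upper())  # True
```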
This mispriming is consistent with the results of the second experiment, in which I tested three annealing/extension protocols. The first and second protocols gave different results, even though their first part is identical (94-degree denaturation followed by a ramp down of 0.1 degree/sec to a 58-degree annealing). However, when I denature and re-anneal/extend the oligos in the first protocol, then I see the ~250 bp band. Maybe the slow ramp gave enough time at higher temperatures (~65 degrees) for the oligos to anneal correctly.
Eventually I will have to re-design and re-order at least one of the two oligos, since even if I succeed in amplifying the 200 bp fragment, I do not want any strange mispriming effects when barcodes and adapters are introduced later, during the library prep step prior to sequencing the fragments.
As discussed extensively in previous posts, our analysis of DNA uptake data in Haemophilus influenzae is going very well. So far it seems that most of the variation in DNA uptake across the genome is explained by proximity to an uptake signal sequence (USS). We have scored the genome using two distinct uptake motif models. The first motif model, or "genomic uptake motif," is based on the USS sequences enriched in the genome, while the second model, or "uptake bias motif," is based on experiments characterizing the biases of the DNA uptake machinery.
Scoring the genome with both of these models gives a score per genomic position that allowed us to identify regions of the genome with USS or USS-like sequences. After several distinct analyses, which I am not going to discuss in this post, we were able to determine: 1. the most adequate cutoff score to decide which sequences could be classified as USSs; 2. which uptake motif model performed better, generating fewer false positives. Additionally, we now have a good idea of how well the proximity of a genomic region to a USS sequence can explain DNA uptake (at least for small donor DNA fragment sizes).
The next step in our analysis is to find an independent way to call uptake peaks, seen as positions with local peaks in uptake ratio across the genome. Once we identify these peaks, we will be able to compare them with the positions that we predicted as USSs based on our scoring method (genomic motif or uptake bias motif) and on our chosen cutoff. I predict that this comparison will be able to determine:
The strategy that I will apply in this analysis is to first find all the peaks in the data using a peak finder function written in R. This step is already done, and it was not easy to implement. The function depends on two parameters, "w" and "span." As explained here, the function works by smoothing the data and then finding local peaks by comparing local maxima with the smoothed points. By tuning these parameters (w = 200, span = 0.02), I was able to get the function to find each local peak within the first 10 kb of the genome.
However, as can be seen, there were some false positives: positions with low depth were called as peaks.
Note: x represents genomic position, while y represents uptake ratio. Red dots are peaks identified by the peak finder function. The black line is the smoothed line made by the function. Grey dots are the actual data points, showing the uptake ratio at each position.
To eliminate these false positives, I introduced another parameter called "cut." This parameter allowed me to filter out all peaks at positions with uptake ratios lower than a certain threshold (cut = 0.5). Next, the entire genome had to be screened in order to generate a list of positions with uptake peaks. This was done by breaking the genome into small pieces (10 kb each) and then screening each piece for local peaks. The genome was split because the function performed much better when smaller sections of the genome were used. In other words, when larger sections of the genome were used, the peaks found were farther away from the real peaks.
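The overall logic (smooth, compare local maxima to the smoothed curve, filter with "cut," run per chunk) can be sketched roughly like this. This is an illustrative Python re-implementation of the idea, not the actual R function I used; the parameter names mirror mine, and the moving-average smoother is a crude stand-in for the loess fit:

```python
import numpy as np

def find_peaks(y, w=200, span=0.02, cut=0.5):
    """Sketch of the peak-finder idea: a position is a peak if it is
    the maximum within a window of half-width w, sits above a smoothed
    baseline, and its uptake ratio exceeds the "cut" threshold."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    k = max(1, int(span * n))  # smoothing window, standing in for loess
    smooth = np.convolve(y, np.ones(k) / k, mode="same")
    peaks = []
    for i in range(n):
        window = y[max(0, i - w):i + w + 1]
        if y[i] == window.max() and y[i] > smooth[i] and y[i] > cut:
            peaks.append(i)
    return peaks

def peaks_by_chunk(y, chunk=10_000, **kw):
    """Screen the genome in pieces (10 kb worked best for me) and
    translate chunk-local peak indices back to genomic coordinates."""
    return [start + p
            for start in range(0, len(y), chunk)
            for p in find_peaks(y[start:start + chunk], **kw)]
```

On real data the loess smoothing in the R function behaves better than this simple moving average, but the structure of the computation is the same.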
This could be seen both graphically and by examining the peak lists generated.
For instance, the two figures above show the DNA uptake profile of a section of the genome. Blue points represent positions with more than 10 reads in the input; red points are sites with fewer than 10 reads in the input. Black dots represent positions identified as peaks when the genome was split into 10 kb pieces (first figure) and into 100 kb pieces (second figure).
It is clear from both figures that when the genome was split into larger sections, the peak finder function was less precise.
The table below lists the first 7 positions of the genome classified as USSs by our uptake bias model, aligned by focal position (column 1) and by the central C of the motif core (column 2), as well as the first 6-7 positions identified as uptake peaks when the genome was split into 10 kb fractions (column 3), into 100 kb fractions (column 4), and left as the entire non-split genome (column 5).
As can be seen, the smaller the genome fractions fed to the function, the closer the positions are to either the USS focal or the central C positions. For instance, when the genome was analyzed in 10 kb pieces, most uptake peaks were just a few bases away from the USS focal or central C position. On the other hand, when the genome was analyzed in 100 kb pieces, one position around 1023 bases was not recognized as a peak, and the rest of the peaks were farther away from the USS positions. At the extreme, running the function on the entire genome (without fractioning it first) resulted in a list with many missing peaks (no peaks in the first 80 kb of the genome).
The table above also tells us that the uptake peak list generated by breaking the genome into 10 kb fractions correlates very well with the USS position list. Now we only need a graphical way to visualize how well these two lists match.
One way to analyze the uptake peak data graphically is simply to plot the distribution of mean uptake ratios of the uptake peak list. Of course, this analysis does not tell us how well uptake peaks match the USS list, but it does give us an idea of the distribution of uptake ratios at peak positions.
I predict that the distribution should look something like the figure below, where most peaks sit between 3 and 4, with a few peaks with higher or lower uptake ratios (red arrows). Positions with lower uptake ratios (less than 2) could represent peaks with either a non-optimal USS sequence or artefacts of the peak finder that were not removed by the "cut" argument (of 0.5). On the other hand, peaks with uptake ratios higher than 5 could represent positions with either an extremely good USS sequence or an overestimated uptake ratio due to positions with low sequencing depth in the input.
The actual distribution of mean uptake ratios of uptake peaks, shown below, looked somewhat similar to what I predicted. Now, by simply extracting the positions with the highest and lowest uptake ratios, I can look carefully at the sequences of these positions to determine why they have those uptake ratios.
However, this analysis still does not tell me directly how well the uptake peak list correlates with the USS list. To see graphically how the two lists match, I will calculate the distance from each uptake peak in the list to the nearest USS (using the central C position). Then, I can plot this distance against the uptake ratios of the uptake peaks.
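The distance calculation itself is simple. A Python sketch of the idea, using a sorted list of USS positions and a binary search (the positions here are toy examples, not my actual lists):

```python
import bisect

def nearest_uss_distance(peak_pos, uss_positions):
    """Distance from one uptake peak to the closest USS (central C).
    uss_positions must be sorted in ascending order."""
    i = bisect.bisect_left(uss_positions, peak_pos)
    candidates = []
    if i > 0:                        # nearest USS to the left
        candidates.append(abs(peak_pos - uss_positions[i - 1]))
    if i < len(uss_positions):       # nearest USS to the right
        candidates.append(abs(peak_pos - uss_positions[i]))
    return min(candidates)

# toy example: USSs at 1030 and 5120, a peak called at 1042
print(nearest_uss_distance(1042, [1030, 5120]))  # 12
```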
I expect that this plot will look like the figure below:
I expect that most peaks will be just a few bases from the nearest USS (1-40 bases). Artefacts that were classified as peaks but that are not real peaks are expected to be farther away from the nearest USS (red arrows) and to have low uptake ratios, since they might not have been eliminated by the cutoff argument chosen (0.5). If there were peaks not explained by a USS sequence, we would expect them to be farther away from a USS but to have a high uptake ratio (green arrows). This analysis would tell us right away:
Finally, to answer the question of whether a higher uptake score also means a higher uptake peak, I could fit a simple regression of both parameters and assess the slope of the regression. Based on what we have seen so far, it seems that increasing the USS score beyond a certain point does not increase the uptake ratio any further. Of course, this analysis would give us only a rough idea, since other factors, such as interactions between more than one USS or interactions between positions within the USS motif, are not included in the analysis.
The plot of distance to the nearest USS against the uptake ratios of the uptake peak list is shown below.
By looking at the figure, I realized that there are a few things I had not taken into account.
As a result, I can see that 80.2% of the peaks identified are closer than 200 bases to a USS position, and 77.7% of the uptake peaks are closer than 100 bases to a USS position. These results show that USSs explain a large percentage of the uptake peaks seen in the data. Now the question is what happens with the remaining 20%. Are they artefacts of the analysis, or are they really far away from a USS?
The other analysis that I did was the regression of uptake scores against the uptake ratios of the uptake peak list. The regression clearly shows no association. In other words, among peaks, increasing the USS score further does not increase uptake.
Note: In this figure I used the re-calculated uptake ratios (adding 1 read to the input to avoid missing values) in order to make the regression work.
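The regression itself is just a degree-1 least-squares fit of score against uptake ratio among peaks. A Python sketch with made-up numbers (my actual analysis was in R with `lm`); a slope near zero is what "no association" looks like:

```python
import numpy as np

# toy data: USS scores and uptake ratios at peak positions (made up)
scores = np.array([19.5, 20.1, 21.0, 22.3, 23.8, 24.6])
ratios = np.array([3.4, 3.9, 3.1, 3.6, 3.3, 3.8])

# degree-1 polynomial fit = simple linear regression
slope, intercept = np.polyfit(scores, ratios, deg=1)

# a slope near zero means that, among peaks, a higher USS score
# does not buy a higher uptake ratio
print(slope)
```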
In my last post, I explained the plan that I had for analyzing the data from the project I am working on. This project aims to determine why some regions of the Haemophilus influenzae genome are taken up more than others by competent cells in DNA uptake experiments.
As explained in that post, the first step in my analysis was to determine which positions correspond to USS or USS-like sequences. This is done by scoring the genome using a position-specific scoring matrix (PSSM), using an approach similar to the one used by various programs to create sequence logos (such as the sequence logo of a genomic USS region shown below).
However, as explained in a post by my supervisor, the USS scoring can be done in multiple ways (using the genome-enriched motif or the uptake motif, taking position interactions into account, weighting positions within the motif according to how important they are for uptake, or taking GC content into account).
In the analysis shown in this post, I used the most simplistic approach to score the genome: scoring with the genome-enriched motif and giving all positions the same relative weight in the overall score (no position is more important than another). Of course, this approach is not realistic, but it gives a first glimpse at the data and will allow comparisons with more realistic and complicated scoring models.
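Mechanically, this kind of scoring slides a window the width of the motif along the genome and, at each position, sums a per-column score for the observed base. A minimal Python sketch with a toy 3-column matrix (not our actual USS matrix; equal weighting of positions, as in the simplistic approach above):

```python
import math

def score_window(seq, pssm):
    """Sum of log2-odds scores for one window; pssm is a list of
    dicts mapping base -> probability, one dict per motif column."""
    background = 0.25  # uniform background; no column weighted more
    return sum(math.log2(col[base] / background)
               for col, base in zip(pssm, seq))

def score_genome(genome, pssm):
    """Score every window of motif width along the genome."""
    w = len(pssm)
    return [score_window(genome[i:i + w], pssm)
            for i in range(len(genome) - w + 1)]

# toy motif strongly preferring "AAG"
toy_pssm = [{"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},
            {"A": 0.85, "C": 0.05, "G": 0.05, "T": 0.05},
            {"A": 0.05, "C": 0.05, "G": 0.85, "T": 0.05}]

window_scores = score_genome("TTAAGTT", toy_pssm)
# the window starting at the "AAG" (index 2) scores highest
print(window_scores.index(max(window_scores)))  # 2
```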
The second step in my data analysis plan was to plot the scores against the uptake ratios. However, as explained in that post, this analysis is not very useful, so I will not discuss it any further. Instead, I will focus on the third analysis described in my post, which is to plot the uptake ratios against the distance to the nearest USS for each position in the genome.
My supervisor made a figure (included below) that explains what we expected to see in this plot:
As explained in my supervisor's post, in this figure we can see a band of points that goes down from an uptake ratio of 4 (enriched, presumably because of a USS nearby) to an uptake ratio < 1 when distances from a USS are more than 500 bases. It is important to remember that the donor DNA in this dataset was sheared to an average size of 250 bp. A wide band of points would mean that there are a lot of points where the USS score does not capture all aspects of the influence of USSs on uptake. She stated on her blog that there are three reasons why this could happen:
(Cited directly from her blog post)
As part of this analysis, I have to not only calculate the distance from each position to a USS in the genome, but also choose a threshold determining the lowest score that can be identified as a USS. The figure below shows the distribution of scores in the genome. It is evident that only a small fraction of positions (see second figure below) have scores higher than 15.
The threshold (red line) chosen to discriminate what is a USS and what is not could have an impact on the data, because true USSs with a score just below the threshold would be identified as non-USS sequences. These points will appear as outliers to the right of the curve (below) and can be confused with non-USS sequence factors that influence uptake. This effect is exactly what I saw in the data from the figure below:
In contrast to the expected figure made by my supervisor, we can see a very broad band that goes down from an uptake ratio of 6 (with some points higher) to a distance of ~500 bp. Additionally, there are a few peaks (red arrows) with increased uptake ratios at distances > 500 bp. Even though these peaks' uptake ratios are not very high (an uptake ratio of 1 represents the same coverage as the input), they are considerably higher than the background. The threshold determining the lowest score that could be considered a USS in this analysis was 19.04. This score was chosen because it was the lowest score among the sequences used to build the PSSM, but there is no stronger reason to have chosen that number. When I evaluated the positions belonging to those peaks more closely, I saw that they belong to discrete narrow regions (~200 bp), each with one position, centered around the middle of the range, whose score was slightly lower than the chosen threshold. When the same data were re-analyzed with a lower threshold (18), most of these peaks were eliminated, as the figure below shows:
It can also be seen that when the threshold of 19.04 was chosen, the points farthest from a USS were at > 6000 bases, while with the threshold of 18, the points farthest from a USS were at < 4000 bases. However, there are still a couple more peaks visible closer to a distance of 1000 bp.
As a conclusion from this analysis, I can see that despite the simplistic scoring methodology I chose to identify USSs, the distances to the closest USS are apparently able to explain a lot of the variation in uptake. However, many questions remain, such as: what are the effects of interactions between USSs? Why do some USSs have a higher influence on uptake than others? And what percentage of the variance can be explained by USS-related variables? But from this analysis, there is one question that needs to be answered first:
To answer this question, I need to determine how low the uptake ratios are in the regions depleted of uptake. In other words, how low are the valleys shown when uptake ratios are plotted against genomic position?
In red, the mean uptake ratios from donor DNA sheared to an average size of 250 bp are shown. Peaks represent positions enriched in DNA uptake, while valleys are positions depleted in DNA uptake. To calculate the mean uptake ratio of the valleys, I took all positions with a ratio lower than 0.3 (depleted more than 3-fold) and calculated the mean uptake ratio, which was 0.044. With this in mind, I wanted to determine whether there are positions in this range that have high USS scores (at least higher than the threshold).
In the next figure I explain the expectations of this analysis. First, I calculated the mean uptake ratio of positions with an uptake ratio below 0.3, which was 0.044. Then I wondered: what would I see if I plotted the uptake ratios of positions with ratios lower than 0.3 against their USS scores? Since the mean of these points is 0.044, I expect most points to be located close to 0. Additionally, I expect most points to have a low score (below the threshold), and points above this threshold (red arrows) should be either positions with low uptake despite having a true USS, or positions with a sequence incorrectly classified as a USS.
The actual data is shown below:
The red line indicates the threshold of 18. The figure shows that there are still some points with USS scores above the threshold. These points were identified as USSs, yet they clearly have low uptake ratios. As explained earlier, these points could very well be positions with USS-like sequences that have mutations in the core of the motif that decrease uptake; alternatively, they could be true USSs with low uptake (for an unknown reason). One way to answer this question is to repeat the analysis using a more complex scoring model.
However, before jumping into that, there is one more analysis I need to make. The former post-doc of the lab suggested that I calculate, for each position, the maximum score of any USS within different distances (e.g. 100 bp, 200 bp, 500 bp). Then, I could plot that max score against the uptake ratios. Before analyzing the data, I expected to see most points increase in uptake ratio as the maximum score within 200 bp increases.
Arrows show positions that either have high uptake ratios despite being close to a USS (two arrows on the left) or have extremely high uptake ratios (arrow on the right). I expect the band of points to be broader close to a max score of 10 and narrower as the max score increases. I also expect to see this pattern reversed when I plot the max score within 500 bp instead of 100 or 200 bp (more points should then cluster close to a higher max score).
When we evaluate the data using this approach we can see:
At this point, the smartest thing to do would be to reanalyze the data with a better scoring model. I could keep writing about strategies for doing this, but I am starting to get tired and this post is getting too long.
Finally, I got the sequencing data from a project I had been working on. This project tries to determine the factors explaining why some regions of the Haemophilus influenzae genome are taken up more than others by competent cells in DNA uptake experiments. To answer this question, I did a series of uptake experiments using donor DNA from two distinct strains, 86-028NP and PittGG, sheared to average sizes of 0.25 kb and 6 kb. These donor DNAs were incubated with competent Rd cells. Donor DNA was later recovered using an organic extraction method that allowed us to isolate periplasmic DNA from (most) chromosomal DNA. The recovered donor DNAs, as well as the inputs from each experiment, were deep sequenced using Illumina technology.
The former post-doc of the lab, who is now a PI, together with one of his students, has already aligned the reads to the donor and recipient genomes and normalized the reads by sequencing depth. Reads aligned to the recipient genome represent chromosomal contamination that co-extracted (despite the agarose gel purification clean-up used). Additionally, they divided the normalized reads at each position by the normalized reads of the input to generate an uptake ratio, where 1 indicates no enrichment or depletion of the position in the periplasm relative to the controls. A number > 1 indicates enrichment and < 1 indicates depletion. When plotted over genome positions, these uptake ratios show the positions of the genome that are taken up more than others and that are enriched in the samples compared with the input control.
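The uptake ratio computation reduces to a per-position division of depth-normalized coverages. A Python sketch with toy numbers (illustrative only; the real pipeline works on genome-wide coverage tracks):

```python
import numpy as np

def uptake_ratio(periplasm_reads, input_reads):
    """Per-position uptake ratio: normalized periplasmic coverage over
    normalized input coverage. 1 = no enrichment, >1 enriched, <1 depleted."""
    peri = np.asarray(periplasm_reads, dtype=float)
    inp = np.asarray(input_reads, dtype=float)
    # normalize each sample by its own total sequencing depth
    peri = peri / peri.sum()
    inp = inp / inp.sum()
    return peri / inp

# toy example: the middle position is enriched in the periplasmic sample
print(uptake_ratio([10, 40, 10], [20, 20, 20]))
```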
In red, mean uptake ratios for each genomic position are shown for the 86-028NP donor DNA sheared to an average of 0.25 kb. In blue, mean uptake ratios for each genomic position are shown for the 86-028NP donor DNA sheared to an average of 6 kb.
My objective is to determine whether the variance in uptake across regions of the genome can be explained by the presence of an uptake signal sequence (USS), by the distance from a given point in the genome to the nearest USS, or even by interactions between several USSs (when these are close together). This analysis will also help us find regions that have high uptake but no nearby USS, as well as regions with low uptake despite having a USS close by. These regions, where uptake is not explained by USS presence, can be studied in more detail to try to identify non-USS motifs that might explain low or high uptake.
As shown in the figure, uptake ratios of 0.25 kb donor DNA (in red) are sharper and more spaced than those of 6 kb donor DNA (blue). For this reason, I will start my analysis using the uptake ratios from the 0.25 kb 86-028NP donor DNA.
My first task in this analysis is to determine whether USS-dependent variables (e.g. presence of a USS, distance to the closest USS, interactions between more than one USS) can explain the regions of the genome that are enriched (peaks) or depleted (valleys) in the uptake genomic maps.
Problems and aspects to consider:
This first set of analyses will give us just a preliminary idea of how well USS presence/distance correlates with uptake ratios. It will also allow us to find genomic positions with high uptake that are not explained by USS presence/distance. However, several aspects have to be considered:
I could work on this text forever, since there are so many things that can be done with the data, but I have been working on this for a long time and it is time to start putting the plan into action.
My supervisor asked me to calculate my radioactive exposure, after working with P32, for our annual radioactivity report. My first thought was that I didn't remember very well how to do that calculation, so I went back to the radioactivity manual I got during my training. I looked for a formula to calculate exposure, and the manual says:
The theoretical dose to an individual in the vicinity of a point source of radioactivity is defined as:
Perfect! Now I only need the specific Gamma Ray Constant. Wait a minute!! Does P32 emit gamma rays? The manual gives me a list of isotopes and their gamma ray constants, but P32 is not in the list!
How am I supposed to calculate my exposure then?
OK, so after hours of reading the manual carefully, I decided to look for answers at the place where most people would start looking... Wikipedia.
Well, it turns out P32 does not emit gamma rays. Perfect, that is a start, but HOW DO I CALCULATE MY EXPOSURE THEN?
Wait! The manual uses the terms exposure and dose, but I do not understand the difference (at least from reading the manual). So, I decided to look for other manuals.
After several more hours, I found the best manual explaining the difference between exposure and dose. So, "Dose is a measure of energy deposited by radiation in a material, or of the relative biological damage produced by that amount of energy given the nature of the radiation", while "Exposure is a measure of the ionizations produced in air by x-ray or gamma radiation. The term exposure (with its 'normal' definition) is sometimes used to mean dose. (e.g. 'He received a radiation exposure to his hand.')"
Ahhh! That is why I was confused: people use "exposure" to mean "dose." So, I do not have to calculate my exposure but my dose from P32.
Now, how can I calculate my dose?
Most of the radiation dose calculated will be to the skin (hands), since the rest of my body was protected.
Looking at the Radionuclide Safety Data Sheet for P32, I see that the dose rate for P32 is 348 rad/hour per 1 mCi at 1 cm.
Rad is a unit of absorbed dose.
Rem is a unit of dose equivalent (since a dose of alpha particles does not have the same biological effect as a dose of beta or gamma radiation).
For a beta emitter, 1 rad = 1 rem.
OK, now I can use the formula above. I do not have a gamma ray constant, but I have the P32 dose rate, which will take its place in the formula. The average activity (A) used per experiment was 0.74 MBq (0.02 mCi). The exposure time (t), let's say, was 10 hours (probably an overestimation). The distance (d) from the source was approximately 30 cm.
((348 rem/hour per 1 mCi at 1 cm) * 0.02 mCi * 10 hours) / (20 cm) = 3.48 rem
Converting 3.48 rem to Sv:
1Sv/100rem * 3.48 rem/year = 0.0348 Sv/year = 34.8 mSv/year
OK, that dose is very low compared with the maximum dose per year for UBC nuclear energy workers, and even for the general public.
Let's also consider the time I worked with the whole source, which has 9.25 MBq of activity.
Then the time (t) would be a marginal 0.25 hours, and the activity (A) increases to 0.25 mCi.
((348 rem/hour per 1 mCi at 1 cm) * 0.25 mCi * 0.25 hours) / (20 cm) ≈ 1 rem
1Sv/100rem * 1 rem/year = 0.01 Sv/year = 10 mSv/year
So my total dose is 34.8 mSv/year + 10 mSv/year = 44.8 mSv/year
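For the record, here is the arithmetic above as a small Python sketch. Note that, as stated in the note below, the safety committee flagged this calculation as incorrect, so this only records what I did, not a valid dosimetry method; also, the ~1.09 rem for handling the stock was rounded to 1 rem above, so the unrounded total comes out slightly higher than 44.8 mSv.

```python
def dose_rem(dose_rate, activity_mci, hours, distance_cm):
    """Back-of-the-envelope dose: dose-rate constant * activity * time,
    divided by distance. (This reproduces my calculation; the safety
    committee later said this is not the right way to do it.)"""
    return dose_rate * activity_mci * hours / distance_cm

P32_DOSE_RATE = 348  # rem/hour per 1 mCi at 1 cm (beta emitter: 1 rad = 1 rem)
REM_PER_SV = 100     # 1 Sv = 100 rem

experiments = dose_rem(P32_DOSE_RATE, 0.02, 10, 20)     # 3.48 rem
stock = dose_rem(P32_DOSE_RATE, 0.25, 0.25, 20)         # 1.0875 rem (~1 rem)

total_msv = (experiments + stock) / REM_PER_SV * 1000
print(total_msv)  # total dose in mSv
```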
Note: These calculations are wrong. The radiation safety committee told me they will publish a guideline on how to do this calculation soon.