++ work finishing up first rough draft of Ab Stat section 11-25-13 work on dissertation 11-171-3d1221
I should email the people at Spain quickly: Juan R. Molina, Susa Alcami ^Actually, I'll hold off on this until I've had my defense.

A general method for characterization of humoral immunity induced by a vaccine or infection

When sera antibodies react with an array of random peptides, each peptide will exhibit a different level of binding, quantified as a peptide intensity. The distribution of peptide intensities will depend on the number of different antibodies and on their affinities and avidities for various peptide sequences. These, in turn, will depend on the state of the organism: healthy or diseased, young or old. Can the information in the peptide intensity distribution that results when sera antibodies react with an array of random-sequence peptides be used to distinguish healthy from disease states? Can this information be compressed into a single quantified variable?

ELISA in which antigens are immobilized in wells of a microtiter plate and serum serially diluted across the plate.

----Effects of age on antibody affinity maturation.
-----http://www.ncbi.nlm.nih.gov/pubmed/12653658
----Antibody quality in old age.
-----http://www.ncbi.nlm.nih.gov/pubmed/16608408
----Effect of age on humoral immunity, selection of the B-cell repertoire and B-cell development.
-----http://www.ncbi.nlm.nih.gov/pubmed/9476670

This would explain why the immune system responds less effectively to vaccines with age (82): Apoptosis and other immune biomarkers predict influenza vaccine responsiveness.
Autoimmunity increases with age (83): The aging immune system: primary and secondary alterations of immune reactivity in the elderly.
Increase of inflammation with age: Mitochondrial dysfunction and oxidative stress activate inflammasomes: impact on the aging process and age-related diseases.
Age-related changes in immune function: effect on airway inflammation.
Age-related inflammation: the contribution of different organs, tissues and systems. How to face it for therapeutic approaches. ^this looks like a good one ("inflamm-ageing")
Low grade inflammation as a common pathogenetic denominator in age-related diseases: novel drug targets for anti-ageing strategies and successful ageing achievement. Candore G, Caruso C, Jirillo E, Magrone T, Vasto S.

gave thermocycle touchdown conditions to Felicia
95 C 2 min
98 C 10 s, 65 C 10 s, 72 C 60 s
98 C 10 s, 64 C 10 s, 72 C 60 s
98 C 10 s, 63 C 10 s, 72 C 60 s
98 C 10 s, 62 C 10 s, 72 C 60 s
98 C 10 s, 61 C 10 s, 72 C 60 s
98 C 10 s, 60 C 10 s, 72 C 60 s
98 C 10 s, 59 C 10 s, 72 C 60 s
98 C 10 s, 58 C 10 s, 72 C 60 s
98 C 10 s, 57 C 10 s, 72 C 60 s
98 C 10 s, 56 C 10 s, 72 C 60 s
(98 C 10 s, 55 C 10 s, 72 C 60 s) X25
4 C hold

entropy norm_entropy min cv stdev mean median 5th_perc 95th_perc max_norm min_norm stdev_norm mean_norm 5th_perc_norm 95th_perc_norm kurtosis skew dyn_range

--Single number measures
---characteristics of single number measures: entropy, kurtosis, skew, etc.
---Paper: Entropy Measures Quantify Global Splicing Disorders in Cancer
---min and max of entropy, normalized entropy; entropy doesn't change with normalized values
---How are physics entropy, information entropy, and peptide intensity entropy related?

The kurtosis is a measure of the "peakedness" of a distribution; higher values indicate tighter peaks. The skewness measures the extent to which a distribution "leans" to one side of the mean. Skewness can be positive or negative, with 0 skew indicating that the distribution is symmetric about the mean without leaning to one side; larger magnitudes of skew indicate greater leaning. The mathematical concept of entropy was first developed during the 1800s, as scientists found that in combustion engines some energy was always lost rather than transformed into useful work.
The first mathematical formulation of entropy was put forth by Clausius in English in 1856, and this representation consisted of bulk quantities such as the transfer of heat and temperature. Ludwig Boltzmann later defined entropy from a thermodynamic perspective, in terms of the distribution of microstates of the system. In 1948, Claude Shannon characterized the statistical nature of "lost information" in phone-line signals at Bell Telephone Laboratories.

M. Tribus, E.C. McIrvine, "Energy and information", Scientific American, 224 (September 1971).

website with good energy level distribution graph and explanation http://entropysite.oxy.edu/entropy_is_simple/

S = -k_B \sum_{i=1}^{k} p_i \ln p_i, where k_B is the Boltzmann constant and p_i is the probability of a microstate.

H = -\sum_{i=1}^{k} p(x_i) \log p(x_i), where p(x_i) is the probability of outcome x_i.

C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\kwhittem\Presentations\2013\Spring 2013 Committee Meeting\Spring 2013 Committee Meeting Presentation 5-20-13.pptx

An Intuitive Explanation of the Information Entropy of a Random Variable http://daniel-wilkerson.appspot.com/entropy.html

------ 11-19-13

Here's a dissertation that looks like it has a lot of useful information about networks.
ANALYZING THE IMPACT OF LOCAL PERTURBATIONS OF NETWORK TOPOLOGIES AT THE APPLICATION-LEVEL
"C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\DR\2013\11-19-13\ANALYZING THE IMPACT OF LOCAL PERTURBATIONS OF NETWORK TOPOLOGIES AT THE APPLICATION-LEVEL DISSERTATION.pdf"
One paper he references is "Hide and seek on complex networks"
"C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\DR\2013\11-19-13\Hide and Seek on Complex Networks 11-19-13.pdf"
Formatted Entropy equations
"C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\DR\2013\11-18-13\formatted entropy equations shannon and boltzmann 11-18-13.docx"
Note that in statistical mechanics a microstate is a specific microscopic configuration of a thermodynamic system.

Kurtosis (List version; mean and stdev are computed beforehand):

double dInterim = 0;
double dCount = Double.valueOf(list.size()).doubleValue();
double dMultiplier = (dCount * (dCount + 1)) / ((dCount - 1) * (dCount - 2) * (dCount - 3));
double dSubtractor = 3 * Math.pow(dCount - 1, 2) / ((dCount - 2) * (dCount - 3));
for (int i = 1; i <= list.size(); i++) {
    double current_data = Double.valueOf(list.get(i - 1).toString()).doubleValue();
    dInterim = dInterim + Math.pow((current_data - mean) / stdev, 4);
}
return (dMultiplier * dInterim - dSubtractor);

equation for kurtosis http://office.microsoft.com/en-gb/excel/kurt-HP005209150.aspx
Formatted equation for kurtosis

11-19-13d1352 skew

public double Skew(int[] list, double mean, double stdev) {
    double dInterim = 0;
    double dCount = (double) list.length;
    double dMultiplier = dCount / ((dCount - 1) * (dCount - 2));
    for (int i = 1; i <= list.length; i++) {
        double current_data = (double) list[i - 1];
        dInterim = dInterim + Math.pow((current_data - mean) / stdev, 3);
    }
    double dSkewness = dMultiplier * dInterim;
    return (dSkewness);
}

public double kurtosis(int[] list, double mean, double stdev) {
    double dInterim = 0;
    double dCount = (double) list.length;
    double dMultiplier = (dCount * (dCount + 1)) / ((dCount - 1) * (dCount - 2) * (dCount - 3));
    double dSubtractor = 3 * Math.pow(dCount - 1, 2) / ((dCount - 2) * (dCount - 3));
    for (int i = 1; i <= list.length; i++) {
        double current_data = (double) list[i - 1];
        dInterim = dInterim + Math.pow((current_data - mean) / stdev, 4);
    }
    return (dMultiplier * dInterim - dSubtractor);
}

Percentile interpolation:

int N = list.length;
double n = (N + 1) * percentile; // Another method: double n = (N - 1) * percentile + 1;
if (n == 1.0) {
    return_value = list[0];
} else if (n == N) {
    return_value = list[N - 1];
} else {
    int k = Double.valueOf(n).intValue();
    double d = n - k;
    return_value = list[k - 1] + d * (list[k] - list[k - 1]);
}

Percentile Code:

// the percentile number should be a value between 0 and 1
// I think all of the values in the original list need to be positive for this function to work
public double percentile(double[] original_list, double percentile) {
    double return_value = 0;
    // first copy the list so that the original is not modified when sorting
    double[] list = new double[original_list.length];
    for (int i = 0; i < original_list.length; i++) {
        list[i] = original_list[i];
    }
    java.util.Arrays.sort(list);
    int N = list.length;
    double n = (N + 1) * percentile;
    if (n <= 1.0) {
        return_value = list[0];
    } else if (n >= N) {
        return_value = list[N - 1];
    } else {
        int k = Double.valueOf(n).intValue();
        double d = n - k;
        return_value = list[k - 1] + d * (list[k] - list[k - 1]);
    }
    return return_value;
}

A branch of the J48graft tree reads "cv > 1.466364: D (46.0/12.0)". This indicates that at this level of the tree, if the sample has a cv greater than the indicated value it is assigned the class of disease. In this case the algorithm made this assignment 46 times and was incorrect 12 times.

Algorithm         Correctly Classified Instances (%)   Kappa Statistic   ROC Area
SVM               62.1                                 0.234             0.616
J48graft          78.8                                 0.5706            0.744
SVM Random        48.8                                 -0.0351           0.483
J48graft Random   54.5                                 0.066             0.504

R: A language and environment for statistical computing. version 15.0.4551.1003

Here we demonstrate a reproducible and scalable immunosignature platform based on fabrication of 330,000-peptide microarrays on a silicon wafer using equipment and protocols common to the electronics industry.

Binding of antibodies to the array. Deprotected arrays were soaked overnight in DMF and stepwise transitioned to aqueous (20% steps, 30 min each).
Residual DMF was removed by two 5 min washes in distilled water. Arrays were equilibrated in PBS for 30 min and blocked in incubation buffer (3% BSA in Phosphate Buffered Saline, 0.05% Tween 20 (PBST)). Arrays were washed and briefly spun dry prior to loading into the multi-well gasket (Array-It). Incubation buffer was added to each well (100 ul) and 100 ul of 1:2500 diluted sera was added for a final concentration of 1:5000. Arrays were incubated for 1 hr at RT with rocking then washed extensively with PBST and 1% BSA in PBST using a BioTek 405TS plate washer. Anti-human IgG-DyLight 549 (KPL, Gaithersburg, MD) was added in the same manner as the sera to a final concentration of 5.0 nM. Following 1 hr at RT with rocking, unbound secondary was removed by extensive washing in PBST followed by distilled water. The arrays were removed from the gasket while submerged, dunked in isopropanol and centrifuged dry (800xG, 5 min).

The 330K methods came from here
"C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\DR\2013\11-19-13\Scalable High-Density Peptide Arrays for Comprehensive Health Monitoring Chip paper 7 (2) jbl 13AUG2013.docx"
The 10K methods came from here
"C:\Users\kurtw_000\Documents\kurt\storage\CIM Research Folder\DR\2013\11-1-13\Normals paper (2012)\Normals paper 2012 - formatted to journal specifications.dotx"

We have published the general assay conditions and analytical methods previously2-8. Briefly, blood is separated into serum, distributed into aliquots and frozen. The microarrays are pre-washed in 10% acetonitrile, 1% BSA to remove unbound peptides. Slides are then blocked in 1X PBS pH 7.3, 3% BSA, 0.05% Tween 20, 0.014% B-mercaptohexanol for 1 hour at 23°C. Without drying, slides are immersed in sample buffer (3% BSA, 1X PBS, 0.05% Tween 20 pH 7.2) and serum is diluted 1:500 and allowed to bind to the peptides for 1 hour at 37°C.
For saliva, the dilution is 1:10 following 5’ RT centrifugation at 10,000g; other steps are identical to serum processing. Slides are washed 3x5’ each in 1X Tris-buffered saline with 0.05% Tween 20 (TBST) pH 7.2, then introduced to a direct-labeled Alexa-555 fluorescent mouse anti-human secondary (Novus, Rockford, IL). After the secondary has bound, the slides are washed 3x5’ each in 1X TBST, then 3x5’ washes in distilled water. Slides are dried by centrifugation and scanned in an Agilent ‘C’ scanner, 100% laser power, 70% PMT at 543 nm. TIFF images are then converted to values using GenePix 8.0 (Molecular Devices, Santa Clara, CA). Values are median normalized and log10 normalized. Correlation coefficients across technical replicates must exceed 0.90 or the slides are re-processed. Subsequent analysis steps varied depending on the experiment.

We have recently demonstrated that it is possible to apply less than a microliter of diluted blood to an array of 10,000 non-natural sequence peptides covalently bound to a glass slide and detect specific, disease-dependent changes in the profile of circulating antibodies (reviewed in Sykes, Legutki & Stafford)1.

Exploring antibody recognition of sequence space through random-sequence peptide microarrays refs 2,3,4,5

Peptides were designed with random sequences, except for glycine-serine-cysteine linkers at the carboxyl (peptide library 1) or amino (library 2) terminus. Library 1 peptides were synthesized by Alta Biosciences (Birmingham, UK) and spotted in duplicate using a NanoPrint LM60 microarray printer (Arrayit, Sunnyvale, CA). Library 2 peptides were synthesized by Sigma Genosys (St. Louis, MO) and printed by Applied Microarrays (Tempe, AZ) using a piezo non-contact printer in a two-up design.

Immunosignaturing microarrays distinguish antibody profiles of related pancreatic diseases

I think I've basically finished the materials and methods.
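The median/log10 normalization step mentioned above can be sketched in a few lines of Java. This is a minimal sketch under my own assumptions (class and method names are mine; the exact normalization applied after GenePix quantification may differ in detail):

```java
import java.util.Arrays;

public class NormalizeSketch {

    // Median of the values, computed on a sorted copy so the input is untouched.
    public static double median(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int n = sorted.length;
        return (n % 2 == 1) ? sorted[n / 2]
                            : (sorted[n / 2 - 1] + sorted[n / 2]) / 2.0;
    }

    // Divide each intensity by the array median, then take log10, so arrays
    // with different overall brightness land on a comparable scale.
    public static double[] medianLog10(double[] intensities) {
        double med = median(intensities);
        double[] out = new double[intensities.length];
        for (int i = 0; i < intensities.length; i++) {
            out[i] = Math.log10(intensities[i] / med);
        }
        return out;
    }
}
```

Dividing by the median rather than the mean keeps a handful of saturated peptides from shifting the whole array.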
Now I can flesh out the results and discussion more.

All of the diseases had a higher entropy average than the entropy average of the normal group, with the exception of BPE (Figure 33). The heatmap of all of the measures reveals that many of the normal samples cluster together with a low entropy and a higher cv, normalized standard deviation, normalized mean, normalized 95th percentile, dynamic range, and normalized minimum than the infectious disease samples. Many of the West Nile virus samples also cluster together with values characteristic of that group for the same measures and a very low kurtosis and skew (Figure 34). The entropy and kurtosis distinguished disease from normal the best by a t-test, and the normalized maximum and entropy were weighted the most by SVM (Figure 35 and Figure 36). The J48graft tree reveals that 12 samples were correctly classified as normal based on the criterion that their entropy was less than or equal to 7.63 (Figure 37). The machine learning statistics illustrate that the machine learning algorithms can classify better than chance with information from the measures (about 80% accuracy for J48graft), but these algorithms can classify no better than chance when the class assignments are randomly assigned (Table 1).

WNV Syph mal HB Bordetella pertussis

The box and dotplot of the entropy shows that the chronic disease groups have a significantly higher entropy than the normal group. The buffer and monoclonal antibody groups, on the other hand, have a lower entropy (Figure 38). The heatmap of all of the measures with each sample shows that samples of the same group generally cluster together (Figure 39). The multiple myeloma samples have a very high relative entropy and a low normalized 5th percentile. The breast cancer samples have a relatively high entropy, mean, median, 5th percentile, min, standard deviation, and 95th percentile. The monoclonal antibodies and the normals are relatively low for all of these measures.
The statistical significance from a t-test reveals that entropy and the normalized maximum are the best measures (Figure 40), and the normalized 5th percentile and entropy have the greatest SVM weights (Figure 41). The J48graft tree reveals that 24 samples were classified as normal if the entropy was less than or equal to 9.22, and only 2 of these samples were misclassified. The machine learning algorithms demonstrate that these measures allow about 80% of the samples to be correctly classified with an SVM, and the algorithms perform no better than chance when random class assignments are made (Table 2).

West Nile virus (WNV), syphilis (S), HP, hepatitis B virus (HBV), dengue (DEN), Bordetella pertussis (BPE), and Borrelia (BORR)

A box and dotplot of the entropy of the different groups (Figure 43), a heatmap of the measures for all of the samples (Figure 44), the statistical significance of the measures from a t-test (Figure 45), the SVM weight of the measures (Figure 46), the J48graft tree of the measures (Figure 47), and a table of the machine learning statistics (Table 3) are presented below.

Machine learning statistics for 10K

Every group had a higher entropy than the normal group with the exception of BPE (Figure 43). The heatmap of the measures for all of the samples revealed that many of the groups clustered together, indicating that a group has similar values for the different measures (Figure 44). The statistical significance from a t-test showed that the median and 5th percentile measures were the best at distinguishing the groups, and the SVM weights of the measures show that the kurtosis and the normalized 95th percentile had the most weight (Figure 46). The J48graft tree indicates that 13 samples were classified as disease if the median was greater than 1614, and none of these samples were misclassified.
The machine learning statistics demonstrate that an SVM can classify about 80% of the instances correctly, and the classification algorithms perform no better than chance when the class of each sample is assigned randomly (Table 3).

10010346_bot_hp7_07102012.gpr
10010346_top_hp8_07102012.gpr
10010449_top_hp12_07102012.gpr
10010454_bot_hp14_07102012.gpr
10010454_top_hp13_07102012.gpr
10010984_bot_hp10_07102012.gpr
10010984_top_hp9_07102012.gpr
10010985_bot_hp9_07102012.gpr
10010985_top_hp10_07102012.gpr
10010987_bot_hp18_07132012.gpr
10010987_top_hp16_07132012.gpr
10010988_bot_hp11_07132012.gpr
10010988_top_hp15_07132012.gpr
10010992_bot__hp8_07102012.gpr
10010992_top_hp7_07102012.gpr
10010996_top_hp17_07102012.gpr
10010997_bot_hp17_07102012.gpr
10010999_bot_hp13_07102012.gpr
10010999_top_hp14_07102012.gpr
10011001_bot_hp12_07102012.gpr
10011014_bot_hp15_07132012.gpr
10011014_top_hp11_07132012.gpr
10011595_bot_hp16_07132012.gpr
10011595_top_hp18_07132012.gpr

A box and dotplot of the entropy for the NC and AD groups (Figure 49), a heatmap of the measures for all of the samples (Figure 50), the statistical significance of the measures from a t-test (Figure 51), and the machine learning statistics (Table 4) are presented. The heatmap reveals that the Alzheimer samples do not cluster together and are evenly spread out through the more abundant normal samples (Figure 50). The p-values obtained for this dataset (Figure 51) were not as significant as those obtained for the other human disease datasets (Figure 35, Figure 40, and Figure 45). The machine learning algorithms were also incapable of classifying the samples any better than expected by chance (Table 4).

Machine learning statistics for wafer 46

A box and dotplot of the entropy of young live attenuated PR8 influenza infected mice. The 6-7 aged mice were one year and 2 months old. The 10 young mice were 6-8 weeks old. 1*10^4 pfu/dose. Blood collected 40 days after infection.
The aged group samples were pooled and applied to the array with four replicates. The young mice were pooled and applied in four replicates. The young and aged samples also cluster together in a heatmap, and the aged samples have generally higher values for most of the measures (Figure 54). The most significant measures that distinguish the two groups were the normalized mean and normalized 95th percentile (Figure 55). These trends reflect the broader peptide intensity distribution in the aged samples compared to the young samples, as illustrated with one aged and one young example sample (Figure 56). The sample number is too small to apply any machine learning techniques.

All of the normal samples with recorded ages in the 10K experiment, and from 330K wafers 20, 22, 25, and 46, were used to analyze the effects of age in humans. Across these four datasets there were 132 samples. A sample was placed into the young category "Y" if the age was less than or equal to 40 years old, and a sample was placed into the aged category "A" if the age was greater than 40 years old. The average entropy of the aged group was higher than the average entropy of the young group (Figure 57). A heatmap reveals that most of the aged samples cluster together with high entropy values (Figure 58). Subsets of normals also group together, since there is a fairly large group of normals with relatively low values for all of the measures, with the exception of relatively high values for the normalized maximum, normalized 5th percentile, kurtosis, and skew. The most statistically significant measures from a t-test were entropy and minimum (Figure 59), and the measures with the greatest SVM weight were the entropy and normalized entropy (Figure 60). Note that the normalized entropy was included for this dataset since some of the samples were run on 10K arrays and some other samples were run on 330K arrays. Machine learning was performed on a dataset with an equal number of young and aged samples.
The young samples to include were chosen randomly. The machine learning statistics demonstrate that an SVM can correctly classify a sample as young or aged 70.7% of the time, and the algorithm performs no better than chance when the samples are randomly assigned to the young or aged groups (Table 5). 70.7% correctly classified by SVM, and 68.0% correctly classified by J48graft.

SVM weight (Figure 58), J48graft tree (Figure 59), table (Table 5), table for nationality (Table 6)

The normalized maximum, skew, and kurtosis perform well in p-value based ranks, and the normalized 5th percentile, dynamic range, and normalized minimum perform well by SVM rank.

A box and dotplot with the mean plus or minus one or two standard deviations for normal and disease samples is presented in Figure 64. This plot was constructed using data from four different datasets: 10K, the 1st 330K, the wafer 46 330K, and the young/aged dataset. The range of normal entropy values is around 7.75 +/- 1. The disease entropy values have less variation, ranging around 8.10 +/- 0.65. Therefore, the disease/aged range is enclosed within the normal range. Given the knowledge acquired from these datasets, the higher the entropy of a normal sample, the more likely it is that the sample is actually a disease/aged sample or is progressing towards that state. A normal sample with a very low entropy, below the mean minus one standard deviation, could indicate a variety of different situations: 1. the individual is healthy and normal; 2. the individual has recently been vaccinated; 3. the individual has an acute infection such as a common cold.

5 breast cancer samples, 24 normal samples

Analysis was performed to identify the types of peptides in the distribution which contributed the most to the ability of the entropy measure to distinguish between healthy and disease groups (Section 3.10). The data for this analysis were 5 breast cancer samples and 24 normal samples.
The entropy of each sample was calculated, and a t-test was performed with the two groups. Before the entropy was calculated, however, various numbers of peptides of a given type were removed. If peptides specific to the disease are more important than random peptides, then fewer specific peptides could be removed before the significance of the p-value starts to decrease. Here, specific peptides refers to peptides that have the most significant p-value in a t-test comparing the intensities of that peptide in breast cancer and normal samples. The results show that specific peptides are indeed more important than random peptides, since the p-value with entropy between the two groups starts to become less significant after about 5,000 specific peptides have been removed (Figure 65). With random peptides, on the other hand, about 230,173 peptides must be removed before the signal starts to drop off. A targeted removal of the peptides that are most different between breast cancer and normal causes the entropy measure to lose the ability to distinguish between the groups faster than removal of random peptides.

What is the actual p-value of the most significant peptide when the p-value with the entropy between the groups begins to become less significant? The p-value with the entropy becomes less significant than a p-value of 0.1 after 100,000 peptides have been removed. At this point the p-value of the most significant remaining peptide is about 0.15 (Figure 67). Therefore, in this dataset, peptides with a p-value more significant than 0.15 seem to contribute the most to the ability of entropy to distinguish between groups.

Removal of the highest intensity peptides yielded some interesting results as well. Here, a different set of peptides was removed from each sample, since peptides were removed in order of the highest intensities unique to each sample. Naturally, there will be some similarities.
Many of the top 1,000 peptides in one sample will be in the top 1,000 peptides of another sample, but there will not be a 100% overlap, due to natural variation. In the analysis, when the top 5,000 peptides were removed, the p-value actually became more significant than when all of the peptides were present. This result makes sense because many of the top several thousand peptides were maxed out at 65,535 in this dataset. Therefore, these peptides, which had exactly the same value in both groups, were diluting the calculations so that there was less of a difference in entropy between the two groups. Once these maxed out peptides are removed, the p-value becomes less significant than it was with the full 330K. Next, the p-value between groups with entropy becomes extremely significant with the remaining very low intensity peptides. This makes sense, since a fairly large group of peptides will have intensity values near zero, and the distribution of these near-zero peptides will have a different shape in normal and breast cancer samples. Figure 66 shows the graph zoomed into the portion with 5,000 or fewer peptides remaining.

In another analysis, the least significant peptides, rather than the most significant, were removed from the distribution. In this experiment, the p-value of the entropy between the groups becomes more significant after the 100,000 least significant peptides have been removed, and then the p-value drops off to become less significant (Figure 68). The increase in significance observed was not as great as that obtained when removing the highest intensity peptides. What is the actual p-value of the least significant peptide when there is an increase in significance with entropy? The p-value of the least significant remaining peptide when the 100,000 least significant peptides have been removed is about 0.25. Therefore, removing peptides with a p-value greater than 0.25 improved the ability of entropy to distinguish between the two groups.
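The removal step in the analyses above can be sketched as follows: given per-peptide p-values (from a t-test of each peptide between the two groups, not computed here), drop the k most significant peptides from a sample and recompute the entropy of what remains. This is a minimal sketch under my own assumptions; the class and method names are mine, and treating each peptide's share of total intensity as a probability with log base 2 is my reading of the entropy measure:

```java
import java.util.Arrays;
import java.util.Comparator;

public class RemovalSketch {

    // Shannon entropy H = -sum p_i log2 p_i, where p_i is a peptide's
    // fraction of the sample's total intensity.
    public static double entropy(double[] intensities) {
        double total = 0;
        for (double v : intensities) total += v;
        double h = 0;
        for (double v : intensities) {
            double p = v / total;
            if (p > 0) h -= p * (Math.log(p) / Math.log(2.0));
        }
        return h;
    }

    // Return the sample's intensities with the k peptides having the
    // smallest (most significant) p-values removed.
    public static double[] removeMostSignificant(double[] intensities, double[] pValues, int k) {
        Integer[] order = new Integer[pValues.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> pValues[i]));
        boolean[] removed = new boolean[pValues.length];
        for (int i = 0; i < k; i++) removed[order[i]] = true;
        double[] kept = new double[intensities.length - k];
        int j = 0;
        for (int i = 0; i < intensities.length; i++) {
            if (!removed[i]) kept[j++] = intensities[i];
        }
        return kept;
    }
}
```

Repeating this for increasing k, recomputing each sample's entropy, and re-running the group t-test gives curves of significance versus number of peptides removed; sorting ascending by intensity instead of p-value gives the highest-intensity-removal variant.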
In summary, peptides whose intensities differ between the groups by a t-test contribute the most to the ability of the entropy to distinguish between the two groups. The highest, maxed out peptides do not contribute much to the ability of entropy to distinguish. Also, the shape of the group of peptides in the very low range is very important for the ability of entropy to distinguish between groups.

Many questions can be asked and investigated from these results. Would the set of low peptides that allow entropy to distinguish well be mostly the same or different for different diseases, such as breast cancer and lung cancer or syphilis? If another breast cancer dataset were analyzed, would the low peptides be mostly the same as or different from the original breast cancer dataset? In other words, is this distribution of low peptides caused by a type of specific low binding unique to a disease or set of diseases, or is this distribution of low peptides mostly random? If it is random, then non-specific antibodies may be binding randomly to peptides on the array based on their location at the time and whether they happen to interact with a certain peptide.

entropy of airport connections 5.02502383975844
entropy of random airport connections 5.76
pg. 445

Mitigating Age-Related Immune Dysfunction Heightens the Efficacy of Tumor Immunotherapy in Aged Mice

"C:\Users\kurtw_000\Box Sync\DocDR\2013\11-22-13\Development of peptide microarrays for epitope mapping of antibodies against the human TSH receptor.pdf"
"C:\Users\kurtw_000\Box Sync\DocDR\2013\11-22-13\Development of peptide microarrays for epitope mapping of antibodies against the human TSH receptor (annotated).pdf"

The transgenic mice were FVB/N neuT transgenic mice, which begin to develop spontaneous mammary adenocarcinomas around 33 weeks old. 2 refs: Interleukin 12-mediated prevention of spontaneous mammary adenocarcinomas in two lines of Her-2-neu transgenic.
Myeloid-derived suppressor cells in mammary tumor progression in FVB Neu transgenic mice

Application of Immunosignatures to the Assessment of Alzheimer’s Disease
Paper: Application of immunosignatures to the assessment of Alzheimer's disease
"Plasma from 12 patients with probable AD and 12 age-matched controls without cognitive derangement were provided by Alex Roher (Cohort A; Banner’s Sun Health Research Institute, Phoenix, AZ). All patients were enrolled into a brain-bank program. Postmortem examination was performed by a neuropathologist on 9 patients (5 with and 4 without dementia). Samples were acquired after written consent and approval of the Banner Institutional Review Board (IRB). Plasma from a second cohort of elderly patients (Cohort B) was provided by Roger N. Rosenberg (UT Southwest Medical Center, Dallas, TX). Profiling studies were approved by ASU’s IRB (protocol #0912004625)."

change the Department of Defense part

Did I discuss why the entropy does not change when it is normalized? ^now I did
What species was the antibody against HIV? human, and the protein was gp120 ^answered
I think I'll go ahead and remove the machine learning multiple mouse immunization data.

Wed Nov 27: Phoenix to Salt Lake 2:33 -> 4:10; Salt Lake to Cedar City 5:00 -> 5:50
Sat Nov 30: Cedar City to Salt Lake 9:30 -> 10:30; Salt Lake to Phoenix 11:26 -> 1:02

------------ 11-23-13

I would like to make the first chip disease dataset and 10K more consistent. This means I need to make some changes to the boxplot, heatmap, and text in the 10K section.
Name of diseases in first chip set: WNV SYPH MAL HBV DEN BPE BORR
Name of diseases in 10K: WNV S N HP HBV DEN BPE BORR
Changes that need to be made: S->SYPH, just one to change
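The S->SYPH relabeling (and any future label harmonization between the two datasets) can be done with a small lookup sketch. The class and method names here are mine, and the map covers only the one change noted above; unknown labels pass through unchanged:

```java
import java.util.HashMap;
import java.util.Map;

public class RenameSketch {

    // Map 10K disease labels to the first-chip-set naming convention.
    // Only S -> SYPH differs; everything else is returned as-is.
    private static final Map<String, String> RENAME = new HashMap<>();
    static {
        RENAME.put("S", "SYPH");
    }

    public static String harmonize(String label) {
        return RENAME.getOrDefault(label, label);
    }
}
```

Routing every label through one function like this means the boxplot, heatmap, and text can all be regenerated from the same harmonized names instead of being edited separately.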