and this isn't mixed deeper. The thermal gradient in the skin thus restricts heat loss from the bulk of the ocean below. So the visible sunlight that does penetrate through the skin and warms the ocean to depth is restricted from leaving the ocean again. 2. The 'skin' idea isn't valid. Downwelling radiation is absorbed in the top millimeter but then mixing transfers this heat deeper. As a result the 'skin' doesn't have a temperature gradient that restricts heat loss from the ocean. So the broader ocean absorbs more heat but is more easily able to lose heat as well. Sounds like swings and roundabouts to me, with the same net result.

24. arncliffe: Roy Spencer is playing a bit fast and loose with the available data. There are three research groups that analyse the data from the satellites. Spencer is one of the principals of one team, at UAH. The other teams are at RSS and STAR/NESDIS. UAH have been showing a trend of 0.04 C/decade, RSS a trend of 0.078 C/decade. Roy averages these two and shows that average. He doesn't mention STAR/NESDIS, who are showing a trend of 0.124 C/decade, so if one averaged all three satellite results the result would be higher. In fact it is more accurate to say that the correct value for the satellite data lies somewhere between UAH and STAR. If STAR is closer to the mark, the models are spot on. He doesn't mention that the handling by UAH of the troubled transition between the satellites NOAA-9 and NOAA-10 has been identified as possibly flawed, leading to a cool bias in the UAH data since then. He doesn't mention that the radiosonde datasets are regarded as questionable for climatological purposes at higher altitudes due to radiative heating/cooling effects on the instrument packages. And then he compares them to model runs based on one of the scenarios with the highest rates of CO2 accumulation. Finally, he doesn't mention that models don't predict exactly what will happen year by year but rather the broad trends; the models aren't good enough to capture all the causes of short-term variability. And if you look at his graph, the satellite data still falls within the range of model prediction up to around 1998-2000, so the models may be missing events over the last decade plus. And this isn't an exceptional result: shorter-term climate variability is harder to model, even when longer-term trends can be modelled, and a decade is short term in climate terms. It is a couple of decades too soon to claim that the models are wrong.

25. John and Dana: After thinking about this some more, I understand that you are using the analogy of 4 Hiroshima bombs per second to represent the amount of heat gained by the earth as a whole. I don't think it helps the vast majority of even the educated population to understand what is happening. We all know that an atomic bomb is a bad thing. But, on a global scale, is it really? When you take the surface area of the earth (5.1×10⁸ km²) and the amount of energy received from the Sun, each square meter of area facing the Sun receives about 1,380 joules per second (otherwise known as the Solar Constant). Once you look at numbers on this scale, the numbers produced by an atomic bomb (even 4 per second) aren't scary anymore. But everyone knows how bad an atomic bomb is, so your use of this analogy is an effective tactic.

26. Glenn and Dana, will y'all please add Glenn's comment as a new section in The Big Picture post?
It fills a gap in the logical flow that I've had to fill when pointing people to that post for them to get a very quick basic understanding. I think it is well worth the tradeoff of making that post slightly longer. It would be a new section after the section "Global Warming Continues" and before the section "Humans are Increasing Atmospheric Greenhouse Gases." The section title should be something like "Increased Greenhouse Gases are Causing the Warming."

27. If people don't like Hiroshima bombs you could go with something like 'global energy consumption'. As in, 'global warming is causing the planet to accumulate heat at a rate equal to our monthly global energy consumption every 2 days'. Most people probably don't have a good handle on just how much energy we use, but it is one of the few other values in the right ballpark to be a useful comparison point. It would also help to dispel the 'global warming is caused by waste heat' myth.

28. Hi all. I do not understand this obsession here with "science communicating". Science is not communicated (not like political propaganda); it is disseminated and/or popularized, and as far as possible only after it has passed the rigors of the scientific method (empirical validation at least). Glenn Tamblyn #24 says "It is a couple of decades too soon to claim that the models are wrong." Yep. However, apparently it is not too early to say that the models are correct... I think this is a rather asymmetric version of scientific validation, naive at best. Last but not least: using the Hiroshima bomb (more than one hundred thousand dead) gives people an idea of the ethical level, and of the balance between popularizing (or explaining) on the one hand and convincing (terrifying, rather) on the other, on this side of the debate. From my point of view it is a wrong strategy, to the point that it can only work with the more illiterate part of society. But hey, I'm aware that my opinion is worthless here. It's your choice.

29. I do believe that many of us convinced of the seriousness of AGW really are trying to figure out how we can explain this to people so they are able to act upon it. It's really one of humanity's biggest challenges, one that could have consequences on a planetary scale if not handled in time. Many believe we are really running out of time. Talking about CO2 emission cuts of a certain % by 2050 isn't going to cut it. What is needed is for people to wake up and understand that our CO2 emissions, through our addiction to fossil fuel burning, are really turning up the thermostat of the planet, causing a shift that the planet has likely only experienced in rather more cataclysmic events like the great Siberian Traps volcanism or an asteroid impact. The data show a dramatic rise in CO2 concentrations in both the air and the seas, at rates 10x those of past extinction events. I think it's really cause for concern even though we don't really see the big consequences right this second. A tipping point can be passed (and most likely several have been passed now, with the Arctic melting fast) where the changes happen so fast that global average temperatures could rise rapidly. I do believe there is enough evidence now that the planet is absorbing more of the Sun's energy than it has in the past, and that this amount is significant even on a planetary scale (some are trying to make the 4 A-bombs per second sound like a small number, just as CO2 is only a small part of the atmosphere; this is a dangerous way of thinking, as it only takes 0.25 g of arsenic to kill a person).
Looking at the broad picture, it's really fantastic that life even exists on the planet, and while humans might feel like small gods wielding the power of fossil fuels, we really are quite insignificant and vulnerable, like all living things on this planet. I do believe we should treat our lucky position in this galaxy with some respect and at least acknowledge what we have discovered about simple physics. Sometimes one does not need proof in order to know something is right: if I fall out of a 10-story building, I will likely die, but I don't really need to watch someone fall to their death to understand this is a physical fact. In the same way, we do know CO2 is a greenhouse gas and that it traps heat. Glenn Tamblyn's line of reasoning perfectly explains a valid reason for the extra heat stored, and in my opinion it shouldn't be hard for people to grasp this if they are willing to listen.

30. Very eloquent! IMHO, the rate should be more important. You should say that by 1998 we were at a rate of ~2 Hiroshima bomb detonations per second, and since that so-called pause we have moved to 4 Hiroshima bomb detonations per second. Or put it in step with time: in 198x we were at 1, by 199x at 2... and today at 4. How many more Hiroshima bomb detonations per second does it need to reach before we start taking this seriously?

31. Some of the heat goes into the melting of Arctic ice. If we apply gentle heat to a beaker of water containing a lump of ice, the water temperature will not increase until the ice has finished melting. So how much ice has melted? The figure here shows that the September mean volume for _PHONE_ stands at 12,000 cubic km of ice
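As a rough back-of-the-envelope companion to the ice question above, here is a small calculation converting melted ice volume into the Hiroshima-bomb heat unit used in the thread. The constants (latent heat of fusion, ice density, bomb yield) are standard approximate values, not figures taken from the comments:

```python
# Rough sketch: how much heat does melting Arctic ice represent, expressed
# in the "Hiroshima bombs" unit used above? All constants are standard
# approximations, not values from the comment thread.
LATENT_HEAT_FUSION = 3.34e5   # J per kg of ice melted
ICE_DENSITY = 917.0           # kg per cubic metre
HIROSHIMA_YIELD = 6.3e13      # J, roughly 15 kilotons of TNT

def bombs_to_melt(volume_km3):
    """Energy needed to melt a given volume of ice, in Hiroshima-bomb units."""
    volume_m3 = volume_km3 * 1e9
    energy_j = volume_m3 * ICE_DENSITY * LATENT_HEAT_FUSION
    return energy_j / HIROSHIMA_YIELD

# Melting 1,000 cubic km of ice takes roughly 5 million bombs' worth of heat,
# which is about two weeks of warming at 4 bombs per second.
print(f"{bombs_to_melt(1000):.2e} bombs")
print(f"{bombs_to_melt(1000) / 4 / 86400:.1f} days at 4 bombs per second")
```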
By Mary Plumb | July 13, 2015 In the early 90s, Kt Boehrer became an influential player in a renewal of interest in declination when she published her book, Declinations, The Other Dimension, which presented 20 years of research on the subject. Leigh Westin's very precisely elucidated book, Beyond the Solstice by Declination, followed in 1999. Currently, declination and out-of-bounds planets are quite commonly used by practicing astrologers. To those who may be unfamiliar with the terms, I'll recap a bit. Essentially, as the Sun travels around the zodiac on the ecliptic, it reaches a maximum northern declination (23N27) at the tropic of Cancer on the summer solstice. The Sun reaches maximum southern declination (23S27) at the tropic of Capricorn on the winter solstice. Declination measures a body's angular distance north or south of the celestial equator; the Sun's ±23°27′ range corresponds to the tilt of the earth's axis as it orbits the Sun. The planets normally travel within that band of degrees. A planet is considered out-of-bounds when it travels beyond the Sun's maximum declination, that is, beyond 23°27′ to the north or south. A basic notion of interpretation is that the out-of-bounds planet is no longer under the guidance, or rule, of the Sun, i.e., the organizing principle. The OOB planet deviates from normal expression; it becomes exaggerated, somehow unusual, or unconventional. This can manifest in many ways, sometimes as a remarkable talent or gift, sometimes as inappropriate or extreme behavior. We're just winding down from a period of Mars being out-of-bounds. On June 9 Mars reached 23N35, just moving out of the range of the Sun's declination. The last week in June, Mars reached its maximum OOB declination in this cycle (24N08), and on July 16, Mars moves back into the natural border of the guidance of the Sun at 23N27. Mars can be obvious in world events, especially when it is OOB. Here are a few recent events that echoed an exaggerated Mars: One of the world's top drug lords, Joaquin Guzman "El Chapo," escaped from maximum-security prison; the murders in a South Carolina church (and the news that a background check error at the FBI allowed the shooter to purchase a gun); media reports that psychologists secretly aided a CIA torture program; same-sex marriage was guaranteed by the Constitution; two inmates escaped from a New York prison (overtaking the news cycle for weeks). (On September 11, 2001, Mars, at 1°26′ Capricorn in longitude, was at an extreme out-of-bounds declination (26S48). Mars will not reach this extreme south declination again until 2033.) Mars is in the news (as usual), but I've been thinking a lot about the natal Moon OOB. In a list of familiar descriptors for an OOB planet (i.e., unrestrained, extreme, independent, wild, awkward, privileged, magnified), I found a keyword I had not thought of before that seemed very true: vulnerable. (1) If a planet is out there on its own, beyond the conventional boundaries, the experience of vulnerability can be present as well. The Moon has its own rhythm: for about nine or ten years it will stay inside the Sun's boundary, never going out-of-bounds. Then, in nine- or ten-year cycles, it will go OOB several times a month. (The Moon will stay in bounds from 2011-2020.) There is a further interesting distinction within the OOB cycle, that is, peak years when the Moon will travel to its extreme OOB declination. This is when the nodes are near 0° Aries and Libra. So, people born with the nodes in Aries and Libra may possibly have the natal Moon at this extreme OOB.
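For readers who want to check out-of-bounds status directly rather than from a printed table, here is a minimal sketch of the ephemeris lookup described above. It assumes the third-party Skyfield library and the JPL DE421 ephemeris file, and uses the fixed 23°27′ boundary quoted in this article:

```python
# Minimal sketch: is the Moon out-of-bounds (declination beyond 23°27')
# on a given date? Assumes the Skyfield library and the DE421 ephemeris.
from skyfield.api import load

OOB_LIMIT = 23 + 27 / 60           # 23°27', the boundary used in this article

ts = load.timescale()
eph = load('de421.bsp')            # downloaded automatically on first use
earth, moon = eph['earth'], eph['moon']

def moon_declination(year, month, day):
    """Geocentric declination of the Moon, in degrees."""
    t = ts.utc(year, month, day)
    _, dec, _ = earth.at(t).observe(moon).radec()
    return dec.degrees

dec = moon_declination(2006, 3, 1)   # a date within one of the peak years noted in the article
print(f"Moon declination: {dec:+.2f} deg; out of bounds: {abs(dec) > OOB_LIMIT}")
```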
This can be a very helpful thing to note for your friends and clients, not to mention being enormously useful to recognize if you are in that category yourself. The years when the extreme OOB Moon (almost 29° N or S declination) is possible are around 1913, 1931-32, 1950, 1968-9, 1987-88, and 2005-6. The OOB Moon does not happen every month during those years; you'll need to check the ephemeris. Those with an OOB natal Moon can be very intuitive; have transparent emotional boundaries; be connected to the subconscious; live a highly emotional life; and have an empathetic nature. Other lunar themes with a natal OOB Moon are exceptional circumstances in giving birth; unusual relationships with the mother or a marked impact from early childhood; food and security issues; and an acutely emotional response to life. All of these themes may be more pronounced when the Moon reaches its maximum declination, as in the years noted above. Oprah, Joan Rivers, Tracy Morgan, Gwyneth Paltrow, Christopher Reeve, Clint Eastwood, Kurt Cobain, Louis CK, and Jennifer Aniston are some celebrities with a natal Moon OOB. I know a young girl born with the Moon in Capricorn near maximum southern declination (born in 2005). She is a very happy and outgoing child, but enormously sensitive, something that is not especially obvious. She occasionally has moods of tremendous loneliness and vulnerability. When she was very young, her mother, with a natal Moon nicely organized within the borders of the Sun, thought she was a "drama queen." I talked to her about the extreme sensitivity of the OOB Moon and gave her a different way to understand her daughter's nature. This week I talked with someone who has natal Mercury OOB. She said that sometimes, when she's speaking about something that makes complete sense to her, she suddenly realizes that the other person has no idea what she's talking about. She related this to that notion of vulnerability. I spoke with someone else who has natal Mars OOB (28S05). She is beginning to understand that her directness and outspokenness can sometimes be experienced as aggression, leaving others feeling alienated, and she, vulnerable. (2) Many people will never have a personal experience of the Moon OOB, either in the natal chart or by progression. This may suggest a more emotionally stable life or the ability to navigate emotions more easily. Those people born with the Moon close to maximum declination can give new meaning to "over-the-top" emotions. It can be a big help in self-understanding (hopefully leading to increased peace of mind) if they can understand the imagery of the OOB Moon. Here's a monthly calendar with a graph of the planets' declinations; a graphic ephemeris is particularly useful in seeing declination. Not all printed ephemerides include declination tables (software does, of course), but if you prefer a paper copy, here's a Declination Ephemeris spanning 1930 to 2025, courtesy of Café Astrology. The tables include the Sun through Chiron, but not the Moon. (1) Declination: The Steps of the Sun, Paul Newman, Wessex, 2006. (2) Here's a blog I wrote about an athlete with an extreme OOB Mars in the natal chart: Usain Bolt, known as "the lightning bolt." "Mars is also way out-of-bounds, at 28°S10'. (The nodes are in Aries/Libra and planets go to the most extreme out-of-bounds when the nodal axis is in these signs.)"

By Gary P. Caton | May 4, 2015 "In all chaos there is a cosmos, in all disorder a secret order." C.G.
Jung (1) It has become common for modern astrologers to think of Mercury retrograde in terms of "shadow." The basic idea seems to be that there are two areas or zones of shadow, one before and one after actual retrograde motion. When in these zones, even though Mercury is traveling direct, we are still (at least potentially) subject to retrograde kinds of experiences. So weeks before the retrograde, after Mercury passes the degree of its eventual station direct, Mercury enters a zone of sky that is eventually crossed three times, and it doesn't leave this zone for weeks after turning direct — until passing the degree of the initial station retrograde. However, when we consider not just the motion but also the speed of Mercury, it seems these zones before and after the retrograde could be better described as slow zones. So we can imagine that when Mercury comes to the station degree, we are experiencing a sort of "cosmic speed bump," alerting us that we are in a new kind of zone where new rules and/or codes of conduct may apply, similar to when passing through a school or construction zone while driving. Nevertheless, even when adding the dimension and nuance of speed, I find this view of Mercury retrograde still wanting. Although taking responsibility for the speed with which we navigate our lives should help to assuage some of the dualistic thinking that seems to accompany the commonly held view of Mercury retrograde (direct = good, retrograde = bad), it remains in my eyes a relatively impoverished view. This is because it is taking into account only two dimensions, those of zodiacal longitude and time/speed. And yet, we live in (
ization, for this variable we counted the total duration of vocalizations regardless of the emitter. To calculate frequencies and percentages of time, we used the duration of collective arousals for the 48- and 2-h separation conditions, and the 10 min of the videotaped phase in the control condition. We calculated the percentage of contact-sitting and social grooming over the total number of scans for each condition of arousal and postarousal periods. With respect to the analysis of partner preferences, however, the number of scans occurring during collective arousal remained limited, so we relied on the exact durations measured from videotape footage. To compare different conditions, and arousal and postarousal periods, we applied the Kruskal–Wallis, Mann–Whitney, and Wilcoxon signed-rank tests, exact procedure (Siegel and Castellan, 1988), using the SPSS software version 16 (SPSS, Chicago, IL). All probabilities were two-tailed. The significance level was set at 0.05.

RESULTS

Duration of collective arousal

Collective arousal systematically occurred in both groups after a 48-h separation. It also occurred in all cases in Group A and in seven out of eight cases in Group B after a 2-h separation. The number of individuals involved in affiliative interactions at each 10-s interval decreased during the 10-min recording period (see Fig. 2). No collective arousal was observed in the control period. Comparisons of the duration of collective arousal in the 48- and 2-h conditions showed that its mean duration was significantly longer following a 48-h separation both in Group A (Mann–Whitney test, n1 = 8, n2 = 8, U = 11.0, P = 0.026; 48 h: 8.5 ± 1.6 min, 2 h: 6.3 ± 2.1 min) and Group B (U = 10.5, n1 = 8, n2 = 8, P = 0.023; 48 h: 8.0 ± 1.8 min, 2 h: 4.7 ± 3.2 min). It is worth noting that collective arousal periods usually began and ended quite abruptly (Supporting Information Fig. S1).

Social interactions occurring during collective arousal

We compared the mean rates per minute of behaviors between the three different conditions (Table 2). In both groups, rates significantly differed across conditions except for conflict and scratching in Group B, and yawning in both groups; affiliative behaviors appeared more frequent in the separation-reunion conditions. We additionally performed pairwise tests to compare the effects of the 2- and 48-h separation conditions. This showed that the second condition yielded higher rates for several behavior patterns (Kruskal–Wallis test, P < 0.05): clasp, facial display, expressive run, social play, and conflict in Group A, and mount and interference in Group B; other differences were not statistically significant.

TABLE 2. Comparisons of behavioral rates after reunion across the separation (48 h, 2 h) and control conditions, for Groups A and B. For each behavior (clasp, mount, interference, facial display, scratch, yawn, conflict, vocalization, expressive run, social play), the table reports the mean, SD, χ², and P value per condition (Kruskal–Wallis test, n1 = n2 = n3 = 8, d.f. = 2). Means are given per test and per individual (except for conflicts and vocalizations, which are given per group); frequencies are per minute and durations in seconds per minute.

Contact behaviors occurring during arousal vs. postarousal periods

We compared the percentage scans of social grooming and contact-sitting which occurred during arousal and the hour following the 10-min videotaped period in the 48- and 2-h separation conditions (Table 4). In both groups, contact-sitting increased significantly during the postarousal period for the 48- and 2-h conditions. Levels of social grooming also rose during the postarousal period, except for the 48-h condition in Group A.

Partner preferences during collective arousal and postcollective arousal

We compared the mean rates per minute of affiliative interactions occurring during collective arousal in individuals belonging to previously separated subgroups and individuals remaining in the same subgroup (Table 3). After a 48-h separation, both groups showed higher rates of all behaviors between partners from different subgroups. After a 2-h separation, we found similar trends but differences were statistically significant only for clasps, mounts, and contact-sitting in Group B, and for clasps and social grooming in Group A. No significant partner preferences appeared in control periods. We compared partner preferences during the postarousal period from the percentages of scans of social grooming and contact-sitting (Table 3). We did not find statistically significant preferences for contact-sitting between partners regardless of their subgroup membership, whereas individuals in both groups exchanged significantly more grooming with partners from which they had been separated for 48 h. The difference was also significant after a 2-h separation for Group B but not Group A. The comparison of partner preferences in the control period did not yield significant differences.

DISCUSSION

This is the first experimental study demonstrating that it is possible to reproducibly induce bursts of affiliative interactions in a monkey species, as stated in our first prediction.
After a period of separation, Tonkean macaques welcome each other through collective arousal; all individuals run around, embrace or grasp one another, while displaying many affiliative facial expressions and uttering noisy vocalizations. Based on the proportion of group members engaged in affiliation per time unit, the event lasted between a few and ten minutes. Collective arousal should not, however, be reduced to this operational definition; for instance, it is also characterized by the occurrence of simultaneous affiliative interactions, including polyadic ones (see Supporting Information Video).

TABLE 3. Comparisons of behaviors during arousal and post-arousal periods according to partner preferences (same vs. different subgroup membership) in the different experimental conditions (48 h, 2 h, control), for Groups A and B. For each behavior (clasp, mount, social play, contact-sitting, social grooming, and the post-arousal contact-sitting and social grooming scans), the table reports the mean, SD, T, and P value.
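As a minimal illustration of the kind of nonparametric comparisons reported in these tables, assuming SciPy and wholly invented per-individual rates (n = 8 per condition, as in the study), the two main tests could be run like this:

```python
# Illustrative only: the rates below are invented, not the study's data.
# Mann-Whitney U compares two independent conditions (e.g. 48-h vs. 2-h
# separation); the Wilcoxon signed-rank test compares paired arousal vs.
# post-arousal scores from the same individuals.
from scipy.stats import mannwhitneyu, wilcoxon

rates_48h = [0.61, 0.47, 0.55, 0.72, 0.38, 0.66, 0.50, 0.44]    # hypothetical
rates_2h = [0.31, 0.22, 0.40, 0.28, 0.35, 0.19, 0.33, 0.26]     # hypothetical
u, p = mannwhitneyu(rates_48h, rates_2h, alternative='two-sided')
print(f"Mann-Whitney: U = {u:.1f}, P = {p:.3f}")

arousal = [0.10, 0.08, 0.15, 0.12, 0.09, 0.11, 0.07, 0.13]      # hypothetical
postarousal = [0.22, 0.18, 0.25, 0.20, 0.17, 0.24, 0.19, 0.21]  # hypothetical
w, p = wilcoxon(arousal, postarousal)
print(f"Wilcoxon signed-rank: W = {w:.1f}, P = {p:.3f}")
```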
accurate diagnosis, including P-wave signal. The number of codes and the lower and upper threshold ECG values of the codes are determined to achieve both efficient encoding and sufficient data reconstruction, especially for P-wave signals. The number of codes and the lower and upper threshold ECG values of the codes are flexible and can be adjusted to adapt to the ECG data input and the available storage space. In one embodiment, the number of bins is chosen from 2³ to 2¹⁰. A higher number of bins usually results in less ECG data error loss but more storage space and battery power use. The proper demarcation of upper and lower thresholds also reduces error loss and contributes to accurate reconstruction of the ECG values and graph shape. The number of bins and the thresholds for these bins are carefully selected to keep essential information of the ECG signals and filter away non-essential information, with a special emphasis on accurately representing the P-wave. Normally, each successive bin continues forward from the previous bin so as to cover a contiguous range of electrocardiography values. In one embodiment, the sizes of the bins, i.e., the intervals between the higher threshold ECG value and the lower threshold ECG value, are not equal throughout the contiguous range; instead, areas of high frequency call for smaller bins. The size of a bin is partly determined by the frequency of the ECG values falling into that bin. In one embodiment, 2⁴ = 16 bins are used, as described with reference to FIG. 24, where the lower threshold ECG value and upper threshold ECG value for each bin are also provided. This setup provides minimum error loss and a significant compression ratio, among other considerations. The first, second, and third columns represent the lower threshold ECG value, the upper threshold ECG value, and the coding of the bins. The bin that an ECG data value will fall into depends on the difference between the raw ECG data value and the corresponding serial accumulator, compared to the range that the bin covers. If a raw ECG data value falls into a particular bin, it can be represented by the code of the bin. In this example, the codes are encoded with a four-bit storage space, with one bit to encode sign and three bits to encode magnitude. Similarly, up to 32 codes can be encoded with a five-bit storage space, with one bit to encode sign and four bits to encode magnitude. The minimum (Min) and maximum (Max) values in FIG. 24 define an inclusive range of ECG values for each ECG code. An input ECG value that falls within the range defined by the Min and Max values will be encoded by the code in the third column in FIG. 24. The Min and Max ranges can be the same for all of the bins or can be tailored to specific ranges of ECG values, to emphasize higher or lower density. For example, the Min and Max values 5,001-50,000, corresponding to code +7, are low density and reflect the expectation that few actual ECG values exceeding 5,001 μV will occur. The density of the Min and Max values can be adjusted to enhance ECG signal detection, such as P-wave signal detection. As a further example, the Min and Max ECG value ranges can be evenly defined throughout, or be doubled for each successive bin. In one embodiment, the number of bins is selected to be a power of two, although a power of two is not strictly required, particularly when a second-stage compression is used, as further described below with reference to FIG. 26. FIG.
25 is an example illustrating the encoding and compression scheme in accordance with the method and parameters described with reference to FIGS. 23 and 24. The first three ECG values of an ECG datastream, 12000, 11904, and 12537, are shown in column I to show a recursive process. The remaining values are omitted since they are processed through the same recursive process. The initial ECG value, 12000, is equivalent to the center value of the ECG recorder. The initial serial accumulator is assigned the center value of the ECG recorder, 12000. The difference between the initial ECG value and the initial serial accumulator is 0, which falls within the lower and upper thresholds of bin 0. Thus the initial ECG value is encoded with the code 0. 12000 is transferred to the next row as the serial accumulator for the next ECG value. The next ECG value is 11904. The difference between the next ECG value and the serial accumulator for the second value is 11904 − 12000 = −96. The difference of −96 falls into the bin with the code of −3, where the lower threshold of the bin is −41 and the upper threshold of the bin is −150. Thus, the second ECG value is encoded with the code of −3, which is the bin identification. For the purpose of decoding the second value, the encoder first refers to the assigned bin, which is bin −3; the encoder then reads the lower threshold ECG value of the assigned bin −3, which is −41; and the encoder finally adds the lower threshold ECG value of the assigned bin to the decoded value of the first ECG value, which is 12000, to arrive at a decoded value of 11959. The decoded value 11959 in turn serves as the serial accumulator for the next ECG value, in this case the third one, 12537. The difference between the third value and its corresponding serial accumulator is 12537 − 11959 = 578. This difference, 578, falls into the bin with a code of +5, which has a lower threshold ECG value of 301 and an upper threshold ECG value of 1500. Thus the third ECG value is encoded with the code of +5. The third ECG value is decoded by adding the lower threshold ECG value of the assigned bin +5, which is 301, to the decoded value of the second ECG value, which is 11959, to arrive at the decoded value of 12260. The decoded value of 12260 in turn will serve as the serial accumulator for the next ECG value. The encoding process continues until the last reading is taken. The encoder keeps track of the accumulated encoded value as the encoding process progresses. The encoding process described above is also a lossy compression process that encodes raw ECG signals with a finite number of codes. This process captures essential information while achieving significant data compression. In one embodiment, another compression step is performed. The other compression step may be performed independently. The other compression step may also be performed on top of the encoding process described above to achieve a higher level of compression than either step alone. The second compression step can be a lossless compression performed on the codes from the first step. In one embodiment, the compression ratio of the second compression is in the range of 1.4 to 1.6, increasing the data storage capacity of a non-volatile memory by 41-66%. In another embodiment, the compression ratio of the second compression is in excess of 1.6, increasing the data storage capacity of a non-volatile memory by more than 66%.
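The recursive accumulator logic above can be sketched in a few lines of Python. The bin table here is only a guess assembled from values quoted in the text (bin −3 spanning −150 to −41, bin +5 spanning 301 to 1500, code ±7 topping out at ±50,000); the actual FIG. 24 table may differ.

```python
# Sketch of the lossy first stage: bin-based delta encoding with a serial
# accumulator. The bin table is a guess assembled from values quoted in the
# text; the actual FIG. 24 table may differ.
BINS = {
    0: (0, 0),
    1: (1, 10),        -1: (-10, -1),
    2: (11, 40),       -2: (-40, -11),
    3: (41, 150),      -3: (-150, -41),
    4: (151, 300),     -4: (-300, -151),
    5: (301, 1500),    -5: (-1500, -301),
    6: (1501, 5000),   -6: (-5000, -1501),
    7: (5001, 50000),  -7: (-50000, -5001),
}

def nearest_edge(code):
    """The bin boundary nearest zero (the 'lower threshold' in the text)."""
    lo, hi = BINS[code]
    return lo if code >= 0 else hi

def encode(samples, center=12000):
    """Encode each sample as the bin containing its difference from the
    running reconstructed value (the serial accumulator)."""
    acc, codes = center, []
    for value in samples:
        diff = value - acc
        code = next(c for c, (lo, hi) in BINS.items() if lo <= diff <= hi)
        codes.append(code)
        acc += nearest_edge(code)   # track the value the decoder will see
    return codes

def decode(codes, center=12000):
    acc, values = center, []
    for code in codes:
        acc += nearest_edge(code)
        values.append(acc)
    return values

print(encode([12000, 11904, 12537]))   # [0, -3, 5]
print(decode([0, -3, 5]))              # [12000, 11959, 12260]
```

Running this on the three sample values 12000, 11904, and 12537 reproduces the codes 0, −3, and +5 and the reconstructed values 12000, 11959, and 12260 from the worked example above.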
Thus, the combination of the lossy compression and the lossless compression serves to achieve both high fidelity of ECG signal preservation and a high compression ratio, which translate into increased data storage capacity and reduced power consumption for the ambulatory electrocardiography monitor, resulting in extended wear time of the monitor. In one embodiment, the second compression is effected by encoding a sequence of codes obtained from the first compression into a single number between 0 and 1, with frequently used codes using fewer bits and not-so-frequently occurring codes using more bits, resulting in reduced storage space use in total. FIG. 26 is a flow diagram showing a monitor recorder-implemented method for further compressing the codes. A sequence of the codes corresponding to the series of ECG values is provided to the compressing module 134. The compressing module 134 sets a range of 0 to 1 for the initial sequence of codes (step 231). The compressing module 134 further performs recursive steps of assigning each successive code to a sub-range within the previous range according to the probabilities of the codes appearing after a code (steps 232-239). In order to do so, the compressing module 134 obtains an estimate of the probabilities of the next codes, given the current code (step 233). Several variations of calculating and adjusting the probabilities of the next codes will be described infra. The compressing module 134 divides the range of the current code into sub-ranges, each sub-range representing a fraction of the range proportional to the probabilities of the next codes (step 234). These sub-ranges are contiguous and sequential. The compressing module 134 reads the next code (step 235) and selects the sub-range corresponding to the read next code (step 236). The read next code is represented, or encoded, by the corresponding sub-range (step 237). The sub-range corresponding to the read next code is assigned to be the range for the code next to the read next code (step 238), and the range is further divided into sub-ranges with each sub-range representing a fraction of the range proportional to the probabilities of codes next to the read next code (step 239).
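A toy version of this second stage in Python, assuming a static symbol model rather than the context-dependent probabilities the patent describes, and plain floating point (which limits it to short sequences), could look like this:

```python
# Toy range coder over the first-stage codes. Illustrative only: a real
# implementation would use the context-conditioned probabilities described
# above and integer arithmetic with renormalisation instead of floats.
from collections import Counter

def build_model(codes):
    """Static cumulative-probability table from symbol frequencies."""
    counts = Counter(codes)
    total = len(codes)
    model, cum = {}, 0.0
    for sym in sorted(counts):
        p = counts[sym] / total
        model[sym] = (cum, cum + p)
        cum += p
    return model

def range_encode(codes, model):
    """Narrow [0, 1) to a sub-interval for each successive code."""
    lo, hi = 0.0, 1.0
    for sym in codes:
        s_lo, s_hi = model[sym]
        span = hi - lo
        lo, hi = lo + span * s_lo, lo + span * s_hi
    return (lo + hi) / 2        # any number in the final interval identifies the sequence

def range_decode(x, model, n):
    out = []
    for _ in range(n):
        for sym, (s_lo, s_hi) in model.items():
            if s_lo <= x < s_hi:
                out.append(sym)
                x = (x - s_lo) / (s_hi - s_lo)
                break
    return out

codes = [0, -3, 5, 0, 0, -3, 1, 0]          # hypothetical first-stage output
model = build_model(codes)
x = range_encode(codes, model)
assert range_decode(x, model, len(codes)) == codes
```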
, we performed 2 complementary screens to identify FDA-approved drugs and drug-like small molecules with activity against T-ALL. We developed a zebrafish system to screen small molecules for toxic activity toward MYC-overexpressing thymocytes and used a human T-ALL cell line to screen for small molecules that synergize with Notch inhibitors. We identified the antipsychotic drug perphenazine in both screens due to its ability to induce apoptosis in fish, mouse, and human T-ALL cells. Using ligand-affinity chromatography coupled with mass spectrometry, we identified protein phosphatase 2A (PP2A) as a perphenazine target. T-ALL cell lines treated with perphenazine exhibited rapid dephosphorylation of multiple PP2A substrates and subsequent apoptosis. Moreover, shRNA knockdown of specific PP2A subunits attenuated perphenazine activity, indicating that PP2A mediates the drug's antileukemic activity. Finally, human T-ALLs treated with perphenazine exhibited suppressed cell growth and dephosphorylation of PP2A targets in vitro and in vivo. Our findings provide a mechanistic explanation for the recurring identification of phenothiazines as a class of drugs with anticancer effects. Furthermore, these data suggest that pharmacologic PP2A activation in T-ALL and other cancers driven by hyperphosphorylated PP2A substrates has therapeutic potential. Alejandro Gutierrez, Li Pan, Richard W.J. Groen, Frederic Baleydier, Alex Kentsis, Jason Marineau, Ruta Grebliunaite, Elena Kozakewich, Casie Reed, Francoise Pflumio, Sandrine Poglio, Benjamin Uzan, Paul Clemons, Lynn VerPlank, Frank An, Jason Burbank, Stephanie Norton, Nicola Tolliday, Hanno Steen, Andrew P. Weng, Huipin Yuan, James E. Bradner, Constantine Mitsiades, A. Thomas Look, Jon C. Aster Interaction of the chemokine CXCL12 with its receptor CXCR4 promotes neuronal function and survival during embryonic development and throughout adulthood. Previous studies indicated that μ-opioid agonists specifically elevate neuronal levels of the protein ferritin heavy chain (FHC), which negatively regulates CXCR4 signaling and affects the neuroprotective function of the CXCL12/CXCR4 axis. Here, we determined that CXCL12/CXCR4 activity increased dendritic spine density, and also examined FHC expression and CXCR4 status in opiate abusers and patients with HIV-associated neurocognitive disorders (HAND), which is typically exacerbated by illicit drug use. Drug abusers and HIV patients with HAND had increased levels of FHC, which correlated with reduced CXCR4 activation, within cortical neurons. We confirmed these findings in a nonhuman primate model of SIV infection with morphine administration. Transfection of a CXCR4-expressing human cell line with an iron-deficient FHC mutant confirmed that increased FHC expression deregulated CXCR4 signaling and that this function of FHC was independent of iron binding. Furthermore, examination of morphine-treated rodents and isolated neurons expressing FHC shRNA revealed that FHC contributed to morphine-induced dendritic spine loss. Together, these data implicate FHC-dependent deregulation of CXCL12/CXCR4 as a contributing factor to cognitive dysfunction in neuroAIDS. 
Jonathan Pitcher, Anna Abt, Jaclyn Myers, Rachel Han, Melissa Snyder, Alessandro Graziano, Lindsay Festa, Michele Kutzler, Fernando Garcia, Wen-Jun Gao, Tracy Fischer-Smith, Jay Rappaport, Olimpia Meucci Children with focal hyperinsulinism of infancy display a dramatic, non-neoplastic clonal expansion of β cells that have undergone mitotic recombination, resulting in paternal disomy of part of chromosome 11. This disomic region contains imprinted genes, including the gene encoding the cell cycle inhibitor p57Kip2 ( Dana Avrahami, Changhong Li, Ming Yu, Yang Jiao, Jia Zhang, Ali Naji, Seyed Ziaie, Benjamin Glaser, Klaus H. Kaestner High blood pressure is the leading risk factor for death worldwide. One of the hallmarks is a rise in peripheral vascular resistance, which largely depends on arteriole tone. Ca2+-activated chloride currents (CaCCs) in vascular smooth muscle cells (VSMCs) are candidates for increasing vascular contractility. We analyzed the vascular tree and identified substantial CaCCs in VSMCs of the aorta and carotid arteries. CaCCs were small or absent in VSMCs of medium-sized vessels such as mesenteric arteries and larger retinal arterioles. In small vessels of the retina, brain, and skeletal muscle, where contractile intermediate cells or pericytes gradually replace VSMCs, CaCCs were particularly large. Targeted disruption of the calcium-activated chloride channel TMEM16A, also known as ANO1, in VSMCs, intermediate cells, and pericytes eliminated CaCCs in all vessels studied. Mice lacking vascular TMEM16A had lower systemic blood pressure and a decreased hypertensive response following vasoconstrictor treatment. There was no difference in contractility of medium-sized mesenteric arteries; however, responsiveness of the aorta and small retinal arterioles to the vasoconstriction-inducing drug U46619 was reduced. TMEM16A also was required for peripheral blood vessel contractility, as the response to U46619 was attenuated in isolated perfused hind limbs from mutant mice. Our data suggest that TMEM16A plays a general role in arteriolar and capillary blood flow and is a promising target for the treatment of hypertension. Christoph Heinze, Anika Seniuk, Maxim V. Sokolov, Antje K. Huebner, Agnieszka E. Klementowicz, István A. Szijártó, Johanna Schleifenbaum, Helga Vitzthum, Maik Gollasch, Heimo Ehmke, Björn C. Schroeder, Christian A. Hübner High-dose ionizing irradiation (IR) results in direct tumor cell death and augments tumor-specific immunity, which enhances tumor control both locally and distantly. Unfortunately, local relapses often occur following IR treatment, indicating that IR-induced responses are inadequate to maintain antitumor immunity. Therapeutic blockade of the T cell negative regulator programmed death–ligand 1 (PD-L1, also called B7-H1) can enhance T cell effector function when PD-L1 is expressed in chronically inflamed tissues and tumors. Here, we demonstrate that PD-L1 was upregulated in the tumor microenvironment after IR. Administration of anti–PD-L1 enhanced the efficacy of IR through a cytotoxic T cell–dependent mechanism. Concomitant with IR-mediated tumor regression, we observed that IR and anti–PD-L1 synergistically reduced the local accumulation of tumor-infiltrating myeloid-derived suppressor cells (MDSCs), which suppress T cells and alter the tumor immune microenvironment. Furthermore, activation of cytotoxic T cells with combination therapy mediated the reduction of MDSCs in tumors through the cytotoxic actions of TNF.
Our data provide evidence for a close interaction between IR, T cells, and the PD-L1/PD-1 axis and establish a basis for the rational design of combination therapy with immune modulators and radiotherapy. Liufu Deng, Hua Liang, Byron Burnette, Michael Beckett, Thomas Darga, Ralph R. Weichselbaum, Yang-Xin Fu The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type–specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type–specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease–causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms. Aline Winkelmann, Nicola Maggio, Joanna Eller, Gürsel Caliskan, Marcus Semtner, Ute Häussler, René Jüttner, Tamar Dugladze, Birthe Smolinsky, Sarah Kowalczyk, Ewa Chronowska, Günter Schwarz, Fritz G. Rathjen, Gideon Rechavi, Carola A. Haas, Akos Kulik, Tengis Gloveli, Uwe Heinemann, Jochen C. Meier Patients with the autoimmune rheumatic disease systemic lupus erythematosus (
). Connected to the opposite end of rod 202 is the wiper of a coil potentiometer 206 which is thus positioned within the potentiometer in accordance with the position of link 124(b). The coil of the potentiometer 206 is connected by leads 208 to an appropriate electrical circuit (not shown). The coil potentiometer 206 acts as a voltage divider and, depending upon the position of the wiper within the coil, produces a current which is fed to an indicator 210. One indicator which can be conveniently used is a trim tab indicator type IP 10100 manufactured by Bennett Marine Corp., 550 NW 12th Avenue, Deerfield Beach, Fla. 33441. Depending upon the polarity and value of the applied signal, indicator 210 will be illuminated above or below a central point 212. The indicator segment 214 shows upward motion, the angle of the ascent being indicated by the position of the lamp segment lit with respect to the central point 212. Similarly, the indicator segment 216 shows downward motion. The signals from the two vertical coil potentiometers 206 are both employed to drive the indicator 210. Although not shown, the links controlling left and right steering are also fitted to coil potentiometers which transmit their signals to indicator 218. Similarly, segment 222 shows rightward movement and segment 224 shows leftward movement, the degree of turn being indicated by the particular segment illuminated. The correspondence between the boring unit 32 movement and the indicated directions of the joy sticks 194, 196 in FIG. 12 only holds true if the orientation of the boring unit 32 is as shown by the indicator lamps 188(c), 188(d) and 188(e). If the boring unit 32 rotates 90° counter-clockwise, as is shown by the indicator lights 188(a), 188(b) and 188(c) in FIG. 13, the result of using the joy sticks 194, 196 as indicated by the arrows would be incorrect movement. The movement of joy stick 194 in the direction of arrow 1 would not be to aim the boring unit 32 toward the surface but rather to direct it to move leftwardly. Similarly, down, as with arrow 2, is rightward movement, whereas right, with arrow 3 on joy stick 196, would cause upward movement, and left, in the direction of arrow 4, is downward. To eliminate the possible confusion, the central portion 226 of the control panel 192 is rotatable. Portion 226 is rotated so as to align central pointer 228 with the middle one of the three illuminated indicator lights 188, that is 188(b), as is shown in FIG. 14. Now movement of joy sticks 194, 196 will be correctly visually displayed to the operator. A further rotation of the boring unit 32 90° counter-clockwise will result in the one direction being down or a 180° reversal of the initial position. Turning now to FIGS. 15 and 16, the hydraulic cylinder 227 used to provide the fluid to inlet port 144 of the steering portion 38 is shown. Although only one cylinder 227 is shown, it should be understood that there are four cylinders 227 arranged in a circle in housing 41 to the left of indicator portion 40. The two cylinders 227 for the vertical-direction movement are shown in FIG. 4, but the horizontal-movement control cylinders have been omitted for the sake of clarity. Hydraulic fluid is fed to inlet 229 and passes via filter 230 to passage 232 and a further filter 234 to passages 236 and 238 and into chamber 242 via port 240. From chamber 242, the fluid flows via port 244 through passages 246, 248 to inlet port 144.
This flow is possible because the valve plunger 254 is in the retracted position to the left of port 240 as is shown in FIG. 15. In response to an electrical signal on lines 256, solenoid 250 is operated to advance valve plunger 254 to the right of port 240, blocking all further flow of hydraulic fluid to inlet port 144 as shown in FIG. 16. Individual ones of the pairs of cylinders 227 will be operated in accordance with the desired direction of movement of boring unit 32. After boring unit 32 has arrived at its desired location, it can be used to draw a new utility item through the newly created bore by installing the pulling eye 114 of FIG. 6 and re-reeling the service cable 42. If it is desirable or necessary to increase the diameter of the bore, this can be done on the return by use of a back reamer as shown in FIG. 18. Nose portion 34 and hammer portion 36 are removed by unscrewing the hammer portion 36 from the threaded stud 88 of the steering portion 38. Back reamer 260 is now threadedly engaged to threaded stud 88 by means of internally-threaded anvil 262. A hammer portion 264 is arranged for reciprocating movement concentrically along support tube 266 to apply force to surfaces 268 of anvil 262 at the end of its forward stroke and to approach collar 270 at the end of its rearward stroke. A compression spring 271 serves to return hammer portion 264 to its rearward position adjacent collar 270 and apply the force of hammer portion 264 to surfaces 268 of anvil 262. Fluid is fed via tube 272 to the passages 274 and interspaces 276 between hammer portion 264 and collar 270, forcing hammer portion 264 forward to strike surfaces 268 of anvil 262 and moving the entire assembly to the left in FIG. 18. Since the diameter of hammer portion 264 is greater than that of nose body 60, the bore is enlarged. The fluid escaping the interspaces 276 is available to lubricate the passage of the back reamer 260. The trailing utility item that is fastened to pulling eye 278 of collar 270 acts as a brake for the returning hammer portion 264 to prevent an impact with collar 270 which could drive the back reamer 260 in the wrong direction. All essential fluid, hydraulic and electrical conductors are housed in the service cable 42 fastened to the housing 41. Beyond the location of the hydraulic cylinders 227, housing 41 is decreased in outside diameter as at 280 and the outer surface is formed with a series of ridges 282. At the end of the body portion, a plate 284 is fixed across the opening, dividing same in half so that the various lines, tubes and conductors can pass over either face of plate 284 and enter the boring unit housing 41. An aperture 286 is placed in plate 284 for purposes to be described below. Service cable 42 is prepared so that the various lines, tubes and conductors are separated and extend beyond the outer jacket 290 of service cable 42 so that they can pass along plate 284 into the boring unit 32 for attachment to their respective components. The end of outer jacket 290 is brought up against end 288 of housing 41 over the ridges 282 in the reduced-diameter portion 280 and clamped thereto by use of a stainless steel hose clamp 292 of a construction well known in the art. The makeup of service cable 42 is best appreciated from a consideration of FIG. 17. At the center of service cable 42 is a steel wire 294 which is attached to plate 284 by means of aperture 286.
The wire 294, having a diameter of about 0.250 inches, can be used to pretension the service cable 42 and thus reduce the tendency of the boring unit 32 to rotate by providing a more rigid trailing cable, and to provide the main pulling line for drawing the boring unit 32 or back reamer 260 back through the newly-created bore, whether by themselves or with a utility item fastened thereto, to decrease the forces otherwise applied directly to the weaker service cable alone. Steel wire 294 is surrounded by six fiberglass rods 296 also having a diameter of about 0.250 inches. These rods are applied with a slight twist (1 wrap per 9 lineal feet) rather than extending in parallel with steel wire 294. These rods provide crush support and, when used with further fiberglass rods having a reverse or opposite twist, tend to keep service cable 42 from rotating. Steel wire 294 and fiberglass rods 296 are surrounded by an extruded jacket 298. Along the outer surface of jacket 298 are arranged the 2,000-pounds-per-square-inch working pressure drill mud line 300 which couples to line 98; four 2,000-pounds-per-square-inch working pressure hydraulic lines 302 which are coupled to inlets 229 of hydraulic cylinders 227; two 5,000-pounds-per-square-inch working pressure air lines 304, one of which couples to line 100, the other remaining as a spare; two electrical cables 306, each composed of six pair of 22-gauge stranded conductors--four pair used for the solenoids 250 of the hydraulic cylinders 227, two pair coupled to conductors 208 of the coil potentiometers 206, two further pair for the conductors of the horizontal coil potentiometers (not shown) and four pair used to couple the mercury switches 170 to the indicator lamps 188; and two electrical conductors 308 of number 12 wire rated at 600 volts. Surrounding these hoses and conductors is a second ply of ten 0.250-inch fiberglass rods 310 applied with a twist direction opposite to that of fiberglass rods 296 and of a greater twist, being one wrap in 4.5 lineal feet. The net effect of these two counter-twist plies of fiberglass rods is to support and strengthen the cable 42 and to resist any tendency to rotate in either direction. Also, as stated above, the steel wire 294 can be tensioned before any tension is applied to the overall cable 42, and this pre-tensioning tends to make the cable 42 more rigid, also preventing rotation during reeling or unreeling. The cable 42 is further protected and reinforced by pressure extruding a polyethylene interior jacket 312 and a polyurethane wear jacket 290 over the cable core and components. The unreeling of the supply cable 42 is generally controlled by the boring unit 32. As it advances, it pulls the supply cable
and be controlled remotely from a single device". What is it that can't be done today with fixed network access and Wi-Fi? (hoping that the device that remotely controls our home belongs to us…) Are we sure that the complexity of Narrowband IoT (NB-IoT) and its 5G version will prevail over Wi-Fi and Bluetooth? It seems to me that the probability that this will happen in the short to medium term is quite low, unless some black swan gives the necessary impulse. - "An ultra-performing network will be fundamental for the transition to the Internet of Things, i.e. to ensure the development of applications and services for the Smart City based on sensors (for example for traffic control, waste collection, urban lighting, logistics)". In 20 milliseconds, how many cars pass through a street intersection, and how much waste gets thrown away? And if a street lamp burns out, do we really need to know about it that fast? - "The new, very fast 5G mobile networks will also help improve safety on the roads. In addition to allowing more data to be transferred in the same unit of time, they have a much lower latency and a reduced error rate: if one data packet in a thousand is "lost" with 4G, with 5G you get up to one in a million. It is these last two aspects that will make the electronic security systems of cars more effective, which will "dialogue" with each other in real time and with the certainty that the information arrives". Latency I have already discussed above; as for the loss of packets, as people who deal with networks know, network protocols (TCP) have control mechanisms that assure applications that all transmitted packets arrive. This aspect, therefore, is already covered by current technologies. - "The productive world will be revolutionized through the full digitalization of production facilities – the so-called Industry 4.0 – and the development of precision agriculture". Indeed, in the business market there could be an interest in 5G for IIoT (Industrial IoT), unlike in the residential market. 5G will allow a higher communication density, up to one sensor per square meter, that is, one million sensors per square kilometer, compared to the 50 thousand allowed by the best NB-IoT technologies. We must also bear in mind that in the industrial field wired network technologies are very expensive (PROFINET cables and switches cost one order of magnitude more than the non-industrial equivalents) and introduce rigidity in installations that could be more flexibly realized and reconfigured using wireless connections. (examples 1, 2, 3, 4) Be careful! I'm not saying that 5G doesn't help, but that the "killer applications" that are proposed today (which often include "life saving" arguments, because on those you cannot skimp on investment), with the notable exception of the business market and IIoT, are generally trivial and almost certainly destined not to materialize.

Figure 1 – Piazza Maggi, Milan

This does not mean that a new network infrastructure should not be built; on the contrary! The first motivation for a new infrastructure is that it must precede demand and enable it: the new infrastructure will serve a demand that is not there today. Einaudi, a very famous Italian economist and politician, said that markets express demands, not needs, meaning they express immediate requirements and not long-term needs. Also from this point of view, the idea of pooling investments to co-invest in a 5G network infrastructure seems reasonable.
The networks they are a changin'

The "mobile" networks (which are not mobile: it is people who are mobile, not the networks) work by emitting signals that fade as you move away from the antenna, so that phones and antennas have to transmit with increasing power to be able to communicate, like two people trying to keep talking while moving away from each other. Another solution is to place many more antennas, much closer to the users, so that they can communicate with lower power; the lower the power, the larger the number of antennas that are needed. This is what has happened with mobile telephony: at every stage of evolution from GSM to UMTS (or 3G) to HSDPA (or 3.5G) to LTE (or 4G), emissions decrease and antenna density increases. Consequently, the wireless access segment (the part from the user to the antenna) tends to become shorter: from the many kilometers of GSM (with low bandwidth) to a few tens of meters with Wi-Fi and 5G (which provide a lot of bandwidth). With 5G, in Italy we will have hundreds of thousands of small antennas, with very low emissions (also because we have the lowest emission limits in the world). In the future we will find them at the base of many buildings and they will ubiquitously provide performance similar to Wi-Fi, wherever we go. Each of these antennas will need to be fed some bandwidth through a network connection hookup (usually fixed). For this reason the importance of fixed networks is obvious, whether they run over the "old cable TV" plant as in some European countries or over telephone networks with optical fibers. But it is good to move beyond this conceptual distinction as well. The network is always and only one: it serves to carry data of any type. It does not matter if it was born for the telephone (copper pair) or for TV (cable networks) or if it is made with optical fibers. Such a high density of antennas means that each antenna will be used by fewer people. If an antenna covers a radius of two kilometers, all users in a small town will connect to it. If an antenna covers a twenty-meter radius, only the few people in that building will connect to it. The transmission capacity available from an antenna is shared among the people who connect to it. In the case of the small town, it will be shared among many people; in the second case it will be shared among very few people, so that each individual in the second scenario will have much more bandwidth available than their peers in the first scenario. Increasing the capillarity of the antennas helps to reduce emissions and increase the performance available to users.

Lower emissions thanks to 5G

It seems counter-intuitive: lower electromagnetic emissions with more antennas, and higher capacity with lower emissions. An example helps to clarify: imagine a room with forty people and two of them talking with a megaphone. The noise pollution will be very high and the overall capacity of the room will be just one conversation. If everyone whispers, the noise pollution will be minimal and the overall capacity will be twenty simultaneous conversations. Ultimately, the network is not what we are used to thinking it is: a single object managed by an operator. Instead, it is a mix of different types of routes, with different technologies, much like a road network. From the point of view of the user, in the future there will not be a great distinction between the fixed network and the wireless network.
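The cell-splitting argument above is easy to make concrete with some rough arithmetic; the per-cell capacity and the population density used below are illustrative assumptions, not figures from the article:

```python
import math

# Rough sketch of why smaller cells mean more bandwidth per user. The shared
# capacity per cell and the population density are illustrative guesses,
# not figures from the article.
def per_user_mbps(cell_radius_m, users_per_km2, cell_capacity_mbps=1000):
    """Capacity of one cell divided among the users inside its footprint."""
    area_km2 = math.pi * (cell_radius_m / 1000) ** 2
    users = max(1, users_per_km2 * area_km2)
    return cell_capacity_mbps / users

# A 2 km macro cell over a small town vs. a 20 m small cell over one building,
# both assuming 2,000 inhabitants per square kilometer:
print(f"{per_user_mbps(2000, 2000):.3f} Mbit/s per user (2 km cell)")
print(f"{per_user_mbps(20, 2000):.1f} Mbit/s per user (20 m cell)")
```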
The fixed network will have a very high capillarity, and at its edge there will be an antenna. If it is located inside a house, it will still be a Wi-Fi access point, managed independently by the most sophisticated users; if it is located outside the house, it will instead be a 5G antenna managed by an operator. A mental image of networks Let's picture circles to imagine the service boundaries of an operator. Today there is a circle that reaches the users' homes, the fixed network, to which a Wi-Fi access point is connected; that access point can sit just on either side of the circumference, depending on whether it is provided and managed by the operator (inside the circle) or installed directly by the user (outside the circle). Outside this circle there are the user's devices, from PCs to smart refrigerators (!?) to TVs connected to disks that store documents, photos and movies (with users managing their complexity). Then there is another circle, that of the "mobile network", whose radius is smaller. It is the network that feeds the cellular radio stations located just inside the edge of the circle, inside the operator's perimeter, to which users connect with their mobile devices. Over time this second circle has expanded, and with 5G it will come so close to the border of the first circle that many users will find it more convenient to connect their devices directly and entrust the operator with the custody of their documents, photos and movies (without having to manage the complexity themselves), reducing their management burden, often increasing the level of security, and being able to have everywhere what would otherwise only be available at home or at the office. These two circles, the fixed network and the mobile network, have grown closer over time and will continue to do so with 5G; there is a large overlap between the surface areas of the two circles. The more widespread the "fixed" network becomes, the more it will also contribute to the infrastructure for 5G. The telephonist's drama Let's go back to the use cases: many of those being told, especially those presented as useful for saving lives, suffer from what we could call "the telephonist's drama". I call it this way because I often perceive it when talking to many friends, very competent people, who work for telco operators which, having been born as telephone companies, keep that imprinting in their DNA; these friends seem to me to still be a bit conditioned by that way of thinking. A typical example of the "telephone" way of thinking is that of value-added services. Let's imagine a system to improve driving safety, based on a myriad of temperature and humidity sensors scattered along a mountain road, so that a car arriving behind the blind curve can be informed that there is
1.07813
Zyphra/Zyda-2
. 5B was manufactured with gas source 22 providing 20 SCCM of the 80% argon/20% H2 mixture. (In other words, 4 SCCM of H2 was introduced into chamber 5.) The friction exhibited by the resulting carbon film rose from a value of 0.2 to 1.0 in less than 8 minutes. In FIG. 5C, 40 SCCM of the 80% argon/20% H2 mixture was introduced into sputtering chamber 5 by source 22 while source 11 was off. The friction coefficient exhibited by the resulting carbon film rose to 1.0 in 12 minutes. In FIG. 5D, 60 SCCM of the 80% argon/20% H2 mixture was provided by gas source 22 while gas source 11 was off. The friction coefficient from the resulting carbon film rose to a level of about 0.7 and then stopped rising, even after 66 minutes, and the test was terminated. Although during the above experiments gas source 11 was off, gas source 11 can be used to vary the hydrogen concentration adjacent targets 8a, 8b. FIG. 4 illustrates the relationship between gas flow from source 22 (20% H2 /80% argon) and the concentration of hydrogen at targets 8a, 8b. Because of diffusion and gas flow between the different target areas, the hydrogen concentration at targets 8a, 8b does not equal exactly 20%. The curve of FIG. 4 was estimated taking into account the geometry of the sputtering system and flow pattern of gases in the system. (The hydrogen concentration at targets 8a, 8b is substantially equal to the hydrogen concentration at the substrate when the substrate is between targets 8a and 8b.) Typically, magnetic disks are unacceptable if the friction coefficient is greater than 1.0. Accordingly, the disks of FIGS. 5A, 5B and 5C wore out and became unacceptable relatively quickly. However, as mentioned above, the disk of FIG. 5D remained acceptable, even after 66 minutes. Accordingly, it is seen in FIGS. 5A-5D that the greater the hydrogen concentration in the sputtering chamber, the greater the carbon film performance. FIG. 6 illustrates the time required during the drag tests for a disk to exceed a friction coefficient of 1.0. The disks produced under gas flows of 0, 20, 40 and 60 SCCM of the 80% argon/20% H2 mixture in FIG. 6 were generated under the same gas flow conditions as FIGS. 5A, 5B, 5C and 5D, respectively. The Y axis of FIG. 6 is logarithmic. As can be seen, the lifetime of the carbon film is increased by more than an order of magnitude by introducing 60 SCCM of a 20% H2 /80% argon gas flow hydrogen into the sputtering chamber. The data points for disks manufactured when gas source 22 provided 60 SCCM of the argon/H2 mixture were estimated, based on slope of the friction vs. time curves from drag tests. The plot in FIG. 6 shows, for a group of samples prepared under different hydrogen concentrations, that a small amount of hydrogen will have almost no effect on the mechanical characteristics of the carbon film, whereas a large amount of hydrogen will have a very dramatic effect on the carbon. The reason hydrogen affects the friction exhibited by the disks is not completely understood. I have three theories concerning why this result is achieved. According to the article entitled "Evidence for Tribochemical Wear on Amorphous Carbon Thin Films" by Bruno Marchon et al., published at the proceedings of the MRM Conference in Rimini, Italy in 1989 (incorporated herein by reference), carbon wears out primarily through an oxidation phenomenon. When a read/write head strikes a magnetic disk, a great amount of force is exerted on a small portion of the carbon film by the read/write head. 
This causes localized heating and oxidation of the carbon film. Thus, Marchon reported that carbon wear was prevented or drastically reduced by conducting contact-start-stop tests in a nitrogen (oxygen-free) atmosphere. It is possible that hydrogen doping the carbon film also drastically reduces localized oxidation. Another possible reason why introduction of hydrogen into a carbon film retards the increase in friction is that as the read/write head and the carbon film wear, the amount of contact area between the read/write head and the disk increases. The presence of hydrogen in the carbon film reduces an attractive force between the read/write head and the carbon, and thus retards the increase in the friction coefficient even when the contact area between the read/write head and carbon increases due to wear. A third theory as to why hydrogen in a carbon film retards the increase in friction is that hydrogen-doped films exhibit a greater degree of elasticity. (Experimental data pertaining to this effect is provided below.) Thus, the carbon film is more compliant (elastic), and may be able to absorb the shock loading of the film by the read/write head, thereby allowing the film to last longer. The hydrogen introduced at targets 8a, 8b is actually incorporated into the sputtered carbon film. This was demonstrated by using a sampling gas mass spectrometer or residual gas analyzer (RGA) to monitor the consumption rate of hydrogen near the carbon sputtering targets. A plot of the hydrogen mass peak intensity versus calculated hydrogen concentration with the plasma on and off (i.e. when sputtering is taking place and not taking place, respectively) is shown in FIG. 7. The RGA output is in arbitrary units, but is proportional to the amount of hydrogen in the sputtering chamber near targets 8a, 8b. From this data, it can be determined that plasma at the carbon targets consumes approximately one half of the hydrogen introduced at the carbon cathode area, indicating that the plasma causes reaction of input hydrogen and results in incorporation of hydrogen into the carbon film. (Unless otherwise stated, hydrogen concentrations elsewhere in this specification and claims refer to concentrations calculated as if there were no hydrogen consumption during sputtering. It is believed, however, that the hydrogen concentration is about 50% of this calculated value at targets 8a, 8b when the plasma is on.) Raman spectroscopy is a useful technique for obtaining information regarding the bonding characteristics of carbon atoms within the deposited film. See D. S. Knight, et al., "Characterization of Diamond Films", J. Mater. Res. Vol 4, No. 2, March/April 1989, and Willard et al., Instrumental Methods of Analysis, 6th Edition, published by Wadsworth Publishing Co. in 1981, incorporated herein by reference. Typical spectra of a carbon film with no hydrogen is shown in FIG. 8A. Typically the spectra is characterized by broad overlapping peaks around 1310/cm (generally known as the D-peak) and 1550/cm (generally known as the G-peak). The peaks can be deconvoluted to obtain more accurate peak position and intensity values. The deconvoluted spectra is shown in FIG. 8B. The Raman spectra of a film produced using 80 SCCM of the 20% H2 /80% argon mixture is shown in FIG. 8C. There is a change in the ratio of the D to G peaks, as well as a slight shift in the peak positions as seen in the deconvoluted spectra of FIG. 8C, shown in FIG. 8D. The G and D peaks shift to lower frequencies as hydrogen is added. 
The change in peak ratio, expressed in terms of height and area ratios as a function of the amount of hydrogen present during sputtering, is plotted in FIG. 9, and height position is plotted in FIG. 10. The Raman spectra show a clear indication of the changes in the chemistry of the carbon atoms within the film as more hydrogen is added. Based on changes in the SP3/SP2 peak intensity ratios, it is apparent that the carbon becomes more amorphous. Typically, a carbon film lacking hydrogen has a brown to greyish color at a thickness of about 300 Å. The sheet resistance at this thickness is about 0.5 MΩ/square, using a four point probe measurement. Resistivity of a 300 Å carbon film made with 20 SCCM of the 20% hydrogen/80% argon mixture was measured using a four point probe. The resistance was greater than 20 MΩ/square. Further, the carbon film sputtered with 20 SCCM of 20% H2 /80% argon was yellow when formed on a metallic alloy, and colorless when formed on glass. This indicates that hydrogen in the sputtering chamber introduces chemical and structural changes in the resulting carbon film. A specially made 2000 Å thick carbon coating was prepared so that micro-hardness measurements could be taken of the carbon coating with various amounts of hydrogen. The method used for the hardness and elastic constant determination is described by M. F. Doerner et al. in "A Method For Interpreting The Data From Depth-Sensing Indentation Instruments", published in J. Mater. Res., July/August 1986, p. 601. Table 1 below lists the values which were obtained.

Table 1
Flow Rate of the 20% Hydrogen/80% Ar Mixture    Hardness    Elasticity
 0 SCCM                                         8 GPa       140 GPa
60 SCCM                                         8 GPa       92
2.162489
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
Principal component analysis From Wikipedia, the free encyclopedia Figure: PCA of a multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.866, 0.5) direction and of 1 in the orthogonal direction. The vectors shown are the eigenvectors of the covariance matrix scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean. Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables. PCA was invented in 1901 by Karl Pearson,[1] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[2] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (Golub and Van Loan, 1983), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[3] Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science, empirical eigenfunction decomposition (Sirovich, 1987), empirical component analysis (Lorenz, 1956), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics. PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA can be done by eigenvalue decomposition of a data covariance (or correlation) matrix or singular value decomposition of a data matrix, usually after a normalization step of the initial data. The normalization of each attribute consists of mean centering – subtracting each variable's measured mean from its data values so that the variable's empirical mean (average) is zero – and, possibly, normalizing each variable's variance to make it equal to 1; see Z-scores.[4] The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score).[5] If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues).
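The covariance-eigendecomposition route described above (mean-center the data, form the covariance matrix, take its eigenvectors and eigenvalues) can be written in a few lines of NumPy. Below is a minimal sketch on made-up data; the distribution parameters are only loosely inspired by the figure caption and are not meant to reproduce it.

```python
import numpy as np

# Minimal sketch: center the data, take the covariance matrix, and use its
# (orthonormal) eigenvectors as the principal axes. Data are made up.
rng = np.random.default_rng(1)
X = rng.multivariate_normal(mean=[1.0, 3.0],
                            cov=[[7.0, 3.0], [3.0, 2.0]], size=500)

X_centered = X - X.mean(axis=0)                  # mean centering
cov = np.cov(X_centered, rowvar=False)           # p x p covariance matrix

eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: symmetric matrix, ascending order
order = np.argsort(eigenvalues)[::-1]            # largest variance first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

explained = eigenvalues / eigenvalues.sum()      # proportion of variance per axis
print("principal axes (columns):\n", eigenvectors)
print("explained variance ratios:", explained)
```

The columns of `eigenvectors` are the principal axes, and `explained` gives the proportion of total variance carried by each component.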
If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled, ("normalized") and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back. PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint[citation needed]. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced. PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix. PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[6][7][8][9] Robust and L1-norm-based variants of standard PCA have also been proposed.[10][11][9] Intuition PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small, and by omitting that axis and its corresponding principal component from our representation of the dataset, we lose only an equally small amount of information. To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to become unit vectors. Once this is done, each of the mutually orthogonal, unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform our covariance matrix into a diagonalised form with the diagonal elements representing the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues. This procedure is sensitive to the scaling of the data, and there is no consensus as to how to best scale the data to obtain optimal results.[citation needed] Details PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[3] Consider a data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of p-dimensional vectors of weights or coefficients $\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)}$ that map each row vector $\mathbf{x}_{(i)}$ of X to a new vector of principal component scores $\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}$, given by $t_{k(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}$ for $i = 1, \dots, n$ and $k = 1, \dots, l$, in such a way that the individual variables of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where l is usually selected to be less than p to reduce dimensionality). First component: In order to maximize variance, the first weight vector w(1) thus has to satisfy
$$\mathbf{w}_{(1)} = \arg\max_{\|\mathbf{w}\|=1} \sum_i \left(t_1\right)_{(i)}^2 = \arg\max_{\|\mathbf{w}\|=1} \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w}\right)^2.$$
Equivalently, writing this in matrix form gives
$$\mathbf{w}_{(1)} = \arg\max_{\|\mathbf{w}\|=1} \|X\mathbf{w}\|^2 = \arg\max_{\|\mathbf{w}\|=1} \mathbf{w}^{\mathsf{T}} X^{\mathsf{T}} X \mathbf{w}.$$
Since w(1) has been defined to be a unit vector, it equivalently also satisfies
$$\mathbf{w}_{(1)} = \arg\max \frac{\mathbf{w}^{\mathsf{T}} X^{\mathsf{T}} X \mathbf{w}}{\mathbf{w}^{\mathsf{T}} \mathbf{w}}.$$
The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as XTX is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector. With w(1) found, the first principal component of a data vector x(i) can then be given as a score t1(i) = x(i) · w(1) in the transformed co-ordinates, or as the corresponding vector in the original variables, {x(i) · w(1)} w(1). Further components: The kth component can be found by subtracting the first k − 1 principal components from X:
$$\hat{X}_k = X - \sum_{s=1}^{k-1} X \mathbf{w}_{(s)} \mathbf{w}_{(s)}^{\mathsf{T}},$$
and then finding the weight vector which extracts the maximum variance from this new data matrix,
$$\mathbf{w}_{(k)} = \arg\max_{\|\mathbf{w}\|=1} \|\hat{X}_k \mathbf{w}\|^2 = \arg\max \frac{\mathbf{w}^{\mathsf{T}} \hat{X}_k^{\mathsf{T}} \hat{X}_k \mathbf{w}}{\mathbf{w}^{\mathsf{T}} \mathbf{w}}.$$
It turns out that this gives the remaining eigenvectors of XTX, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of XTX. The kth principal component of a data vector x(i) can therefore be given as a score tk(i) = x(i) · w(k) in the transformed co-ordinates, or as the corresponding vector in the space of the original variables, {x(i) · w(k)} w(k), where w(k) is the kth eigenvector of XTX. The full principal components decomposition of X can therefore be given as
$$T = XW,$$
where W is a p-by-p matrix of weights whose columns are the eigenvectors of XTX. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by
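The construction above (find the unit vector w maximizing ||Xw||^2, remove the explained component from the data, and repeat) can be mirrored directly with power iteration on X^T X plus deflation. Here is a minimal sketch, assuming a mean-centered X; in practice one would use an SVD routine or a library implementation such as sklearn.decomposition.PCA rather than hand-rolled iteration.

```python
import numpy as np

def principal_components(X: np.ndarray, k: int, iters: int = 500) -> np.ndarray:
    """Mirror the construction in the text: find w maximizing ||X w||^2 via
    power iteration on X^T X, subtract the explained component (deflation),
    and repeat. X is assumed mean-centered; returns a (p, k) weight matrix."""
    X = X.copy()
    p = X.shape[1]
    W = np.zeros((p, k))
    for j in range(k):
        w = np.random.default_rng(j).normal(size=p)   # random start
        w /= np.linalg.norm(w)
        for _ in range(iters):                        # power iteration on X^T X
            w = X.T @ (X @ w)
            w /= np.linalg.norm(w)
        W[:, j] = w
        X = X - np.outer(X @ w, w)                    # deflation: remove explained part
    return W

# Usage on made-up, mean-centered data; scores are t = X @ W.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
X -= X.mean(axis=0)
W = principal_components(X, k=2)
T = X @ W   # principal component scores
```

The columns of W approximate the leading eigenvectors of X^T X, and T = XW gives the scores, matching the decomposition described above.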
2.545063
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
top of page Publications Electronic copies of publications can be provided upon request 2024 Bodnar TS, Ainsworth-Cruickshank GRJ, Billy V, Parfrey LW, Weinberg J, Raineki C (2024). Alcohol consumption during pregnancy differentially affects the fecal microbiota of dams and offspringScientific Reports, 14: 16121. Wilson DA, Sullivan RM, Smiley JF, Saito M, Raineki C (2024). Developmental alcohol exposure is exhausting: Sleep and the enduring consequences of alcohol exposure during development. Neuroscience and Biobehavioral Reviews, 158: 105567. 2023 Holman PJ, Raineki C (2023). Prenatal alcohol exposure and early life adversity: A translational perspective for dissecting compounding impacts. Alcohol: Clinical and Experimental Research, 47: 2227-2230. *Invited commentary Bodnar TS, Chao A, Holman PJ, Ellis L, Raineki C, Weinberg J (2023). Impact of COVID-19 pandemic on adults with Fetal Alcohol Spectrum Disorder: Linking immune function to mental health status. Frontiers in Neuroscience, 17: _PHONE_. 2021 Opendak M, Raineki C, Perry RE, Rincón-Cortés M, Song SC, Zanca RM, Wood E, Packard K, Hu S, Woo J, Martinez K, Vinod KY, Brown RW, Deehan GA Jr., Froemke RC, Serrano PA, Wilson DA, Sullivan RM (2021). Bidirectional control of infant social behavior by dopaminergic innervation of the basolateral amygdala. Neuron, 109: 4018-4035.e7. Holman PJ, Raineki C, Chao A, Grewal R, Haghighat S, Fung C, Morgan E, Ellis L, Yu W, Weinberg J (2021). Altered social recognition memory and hypothalamic neuropeptide expression in adolescent male and female rats following prenatal alcohol exposure and/or early-life adversity. Psychoneuroendocrinology, 126: 105146. 2020 Bodnar TS, Raineki C, Wertelecki W, Yevtushok L, Plotka L, Granovska I, Zymak-Zakutnya N, Pashtepa A, Wells A, Honerkamp-Smith G, Coles CD, Kable JA, Chambers CD, Weinberg J, the CIFASD (2020). Immune network dysregulation associated with child neurodevelopmental delay: Modulatory role of prenatal alcohol exposure. Journal of Neuroinflammation, 17: 39. 2019 Raineki C*, Opendak M*, Sarro E, Showler A, Bui K, McEwen BS, Wilson DA, Sullivan RM (2019). During infant maltreatment, stress targets hippocampus but stress with mother present targets amygdala and social behaviorProceedings of the National Academy of Sciences of the United States of America, 116: 22821-22832. *authors contributed equally Raineki C, Morgan EJ, Ellis L, Weinberg J (2019). Glucocorticoid receptor expression in the stress-limbic circuitry is differentially affected by prenatal alcohol exposure and adolescent stressBrain Research, 1718: 242-251. Lam VYY, Raineki C, Wang LY, Chiu M, Lee G, Ellis L, Yu W, Weinberg J (2019). Role of corticosterone in anxiety- and depressive-like behavior and HPA regulation following prenatal alcohol exposureProgress in Neuro-psychopharmacology and Biological Psychiatry, 90: 1-15. 2018 Sagae SC, Zanardini B, Ribeiro-Paz ED, Amaral AC, Bronczek GA, Lubaczeuski C, Grassiolli S, Koehler-Santos P, de Oliveira JR, Donadio MVF, Raineki C (2018). Metabolic dysfunction in a rat model of early-life scarcity-adversity: Modulatory role of cafeteria dietExperimental Physiology, 103: _PHONE_. Bodnar TS, Raineki C, Wertelecki W, Yevtushok L, Plotka L, Zymak-Zakutnya N, Honerkamp-Smith G, Wells A, Rolland M, Woodward TS, Coles CD, Kable JA, Chambers CD, Weinberg J, the CIFASD (2018). Altered maternal immune networks are associated with adverse child neurodevelopment: Impact of alcohol consumption during pregnancyBrain, Behavior, and Immunity, 73: 205-215. 
Lam VYY, Raineki C, Ellis L, Yu W, Weinberg J (2018). Interactive effects of prenatal alcohol exposure and chronic stress in adulthood on anxiety-like behavior and central stress-related receptor mRNA expression: Sex- and time-dependent effectsPsychoneuroendocrinology, 97: 8-19. Lam VYY, Raineki C, Takeuchi LE, Ellis L, Woodward TS, Weinberg J (2018). Chronic stress alters behavior in the forced swim test and underlying neural activity in animals exposed to alcohol prenatally: Sex- and time-dependent effectsFrontiers in Behavioral Neuroscience, 12: 42. Raineki C, Ellis L, Weinberg J (2018). Impact of adolescent stress on the expression of stress-related receptors in the hippocampus of animals exposed to alcohol prenatallyHippocampus, 28: 201-216. 2017 Raineki C, Bodnar TS, Holman PJ, Baglot SL, Lan N, Weinberg J (2017). Effects of early-life adversity on immune function are mediated by prenatal environment: Role of prenatal alcohol exposureBrain, Behavior, and Immunity, 66: 210-220. Walker C-D, Bath KG, Joëls M, Korosi A, Larauche M, Lucassen PJ, Morris MJ, Raineki C, Roth TL, Sullivan RM, Taché Y, Baram TZ (2017). Chronic early life stress induced by limited bedding and nesting (LBN) material in rodents: Critical considerations of methodology, outcomes and translational potentialStress, 20: 421-448. Yan C-G, Rincón-Cortés M, Raineki C, Sarro E, Colcombe S, Guilfoyle DN, Yang Z, Gerum S, Biswal BB, Milham MP, Sullivan RM, Castellanos FX (2017). Aberrant development of intrinsic brain activity in a rat model of caregiver maltreatment of offspring. Translational Psychiatry, 7: e1005. Sullivan RM, Sullivan-Wilson T, Raineki C (2017). Neurobiology of Infant attachment: Nurturing and abusive relationships. Reference Module in Neuroscience and Biobehavioral Psychology, 1-10. 2016 Raineki C, Chew L, Mok P, Ellis L, Weinberg J (2016). Short- and long-term effects of stress during adolescence on emotionality and HPA function of animals exposed to alcohol prenatallyPsychoneuroendocrinology, 74: 13-23. Perry RE, Al Aïn S, Raineki C, Sullivan RM, Wilson DA (2016). Development of odor hedonics: Experience-dependent ontogeny of circuits supporting maternal and predator odor responses in rats. Journal of Neuroscience, 36: 6634-6650. 2015 Workman JL*, Raineki C*, Weinberg J, Galea LAM (2015). Alcohol and pregnancy: Effects on maternal care, HPA axis function, and hippocampal neurogenesis in adult femalesPsychoneuroendocrinology, 57: 37-50. *authors contributed equally Raineki C, Sarro E, Rincón-Cortés M, Perry R, Boggs J, Holman CJ, Wilson DA, Sullivan RM (2015). Paradoxical neurobehavioral rescue by memories of early-life abuse: The safety signal value of odors learned during abusive attachment. Neuropsychopharmacology, 40: 906-914. 2014 Raineki C, Hellemans KGC, Bodnar T, Lavigne KM, Ellis L, Woodward TS, Weinberg J (2014). Neurocircuitry underlying stress and emotional regulation in animals prenatally exposed to alcohol and subjected to chronic mild stress in adulthoodFrontiers in Endocrinology, 5: 5. ^Invited manuscript Raineki C, Lucion AB, Weinberg J (2014). Neonatal handling: An overview of the positive and negative outcomesDevelopmental Psychobiology, 56: _PHONE_. *Invited review 2013 Roth TL, Raineki C, Salstein L, Perry R, Sullivan-Wilson T, Sloan A, Lalji B, Hammock E, Wilson DA, Levitt P, Okutani F, Kaba H, Sullivan RM (2013). Neurobiology of secure infant attachment and attachment despite adversity: A mouse modelGenes, Brain and Behavior, 12: 673-680. 
Raineki C, Lutz ML, Sebben V, Ribeiro RA, Lucion AB (2013)
1.17551
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
8 Different Types of Lawn Sprinklers Explained (with Pictures) by Max - last update on Types of Lawn Sprinklers Lawn sprinklers provide an efficient and easy solution for keeping your lawn watered and healthy during dry months, without you having to stand out in your yard manually spraying a hose around the property. There are many different types of lawn sprinklers available, and the one most suitable for your property and situation will be dependent on many variables. Some of the things you need to consider when looking for a sprinkler are the size of your lawn, the shape of your lawn, and the water pressure at your property. You will also want to think about how you want your sprinkler to look, and if you want it to be fitted underground or above soil level. Some sprinklers can be quite noisy, which is another factor you should consider before fitting a sprinkler system at the family home. To learn all about the many different types of lawn sprinklers, and that lawns they are best suited to, read on. Automatic Vs. Manual Sprinklers Automatic Vs. Manual Sprinklers Sprinklers can be set on an automatic system to turn on and off at specified points in the day, or they can be operated manually. The obvious advantage of automatic sprinklers, or those set on a timer, is that you can set up your sprinklers and then forget about them for the remainder of the season. They will turn on at a set time, and turn off at a set time, with no manual intervention needed. As the best time to water a lawn is early in the morning, an automatic sprinkler can be set to operate before you have even woken up. If you want to water the lawn at this time of day with a manual sprinkler, this will probably mean an interruption to your sleep! Instead, if you have manually operated sprinklers, it's more likely you'll turn them on during the day or in the evening, which are not times that are as beneficial for the health of the grass as watering it in the morning. The other great thing about automatic sprinklers is that they will come on even if you're away from your property. You can go on vacation for several weeks, knowing that you'll return to a lush lawn. By comparison, if your sprinkler has to be manually operated, you'll need to ask a neighbor to turn them on in your absence or hire somebody to do this job. Otherwise, you risk returning from vacation or a business trip to find a dry and dead lawn. Similarly, even if you are home, a manually operated sprinkler requires that you remember to turn them on and off every day to keep your lawn in good health. If you remember to turn the sprinkler on, but then get distracted with another task and forget to turn it off, you risk drowning your lawn and also wasting water and racking up a large water bill. Or, if you forget to operate the sprinkler altogether, the lawn could dry out. An automatic sprinkler, by comparison, will run for a set time to ensure your lawn gets exactly the right amount of water each day. There is a drawback to this, though; if you have had rain or storms, your automatic sprinkler will still turn on and water the lawn even when it may not need the additional moisture. This leads us to the one main advantage of manually operating a sprinkler; that you can judge each day based on the weather how long to leave your sprinklers on for. Most people find that automatic sprinklers represent the most convenient method for irrigating their lawn, but these also tend to be the most expensive to buy. Types of Lawn Sprinklers 1. 
Rotary Sprinklers Rotary Sprinklers Rotary sprinklers mechanically rotate while spraying streams of water over a lawn, going back and forth to create specific angles or circles of dispersed water droplets. Rotary sprinklers are popular in gardens of all sizes, but they particularly excel when used on large lawns, because they are able to throw water further than any other type of stationary sprinkler. Some models can send water across huge distances, as far as 90 feet in one direction, therefore covering a diameter of up to 180 feet. They are also great for lawns with soils that water does not infiltrate quickly, such as clay or compacted soils. This is because rotary sprinklers have a lower rate of precipitation than other sprinkler types, such as pop-ups, meaning that it takes them longer to spread the same amount of water than a pop-up sprinkler. This is helpful for some soil types where run-off or pooling of water would occur if too much water was dispersed at once. A slower precipitation rate will allow a slower draining soil to absorb the water more evenly. Rotary sprinklers are also the best type of sprinkler for distributing the water evenly. They operate in a way that ensures the same amount of water hits every patch of lawn. There are many different types of rotary sprinklers that have various different spraying patterns. They are typically designed to be adjusted in quarter-circle increments, but some work on smaller angles. For best performance, you will need a psi water pressure of between 40 and 50 to be compatible with a rotary sprinkler. If you have a lower water pressure than a rotary sprinkler may not be the best option for you, though the one exception to this is stream rotary sprinklers, as some are able to operate on lower water pressure. There are three main types of rotary sprinklers. These are impact sprinklers, gear-driven sprinklers, and stream sprinklers. 1.1. Impact This is the type of sprinkler that many people will immediately think of when they think of rotary sprinklers. These types of sprinklers were first invented by Rain Bird, and so they are now mistakenly called 'rainbird sprinklers' by some people, but in fact, Rain Bird is a brand and not a type of sprinkler. Impact sprinklers can easily be identified by the noise they make while they are running. The head of these sprinklers rotates as the pressure from the nozzles water stream hits the spring-loaded arm, and it is this action that makes the sprinkler very noisy. Impact sprinkler heads are usually constructed from metal, namely brass or bronze. This makes them hardwearing and better able to withstand the elements and wear and tear than their plastic counterparts. They tend to have a long lifespan, but the noise they make can put a lot of people off from using them. They also require regular maintenance to keep them running properly, as they have many intricate parts that can wear down and need attention. They are considered to be a little old fashioned, as other rotary sprinklers that operate more quietly and require less maintenance are starting to replace impact sprinklers. The best time of day to irrigate a lawn is early in the morning, and this is why people living in residential properties will choose to avoid impact sprinklers, as the sound of them going off in the early hours of the morning can disturb sleep. For this reason, impact sprinklers are best suited to commercial properties, such as the grounds of golf courses, where their loud noise isn't going to be a problem. 1.2. 
Gear-driven Gear-driven rotors have rotating heads that spray a stream of water in a continually turning pattern. The head rotates due to a series of gears, which operates as a result of the pressure from the flowing water. Like impact sprinklers, these rotary sprinklers can spread wide streams of water, at a distance of anywhere between 18 and 60 feet in standard models, making them ideally suited to medium and large-sized lawns. The bodies of gear-driven rotors are encased in plastic, which hides the moving parts. This, teamed with the way they operate, makes them much quieter than impact sprinklers. They are well suited for use in both residential and commercial settings, being almost silent when in operation. Gear-driven sprinklers are also less expensive to buy compared to impact sprinklers, and they are low maintenance. 1.3. Stream Stream rotor sprinklers, also known as multi-stream sprinklers, spray rotating streams of water in multiple directions all at once. These are interesting to watch, and put on a spectacular display of moving water fountains that can be mesmerizing. The benefit of these is that, like gear-driven sprinklers, they operate quietly. They also have low precipitation rates compared to some other sprinkler types, which makes them ideal for soils that absorb water more slowly, or sloped and uneven ground where too much water dumped at once can slide down the slope instead of being taken into the soil as it should. The main drawback of this type of sprinkler is that the head is prone to getting clogged up if the water flowing through it is not filtered. 'Dirty' water can block up the mechanism and cause operational issues. 2. Fixed Spray Sprinklers These types of sprinklers spray a pattern of streaming water that does not move. It can fan out in full circles or patterns of various angles as designated by the user. Standard models might have the option to spray quarter circles, half-circles, and full circles, while more advanced models can have specific selections that allow the user to set the sprinkler in increments of 40 degrees. One of the benefits of this type of sprinkler is that, if you have a straight-edged lawn, you can position the sprinkler against the edge with it facing towards the lawn. When set at a half-circle, this will be able to adequately spray the whole edge of the lawn without wasting water on the sidewalk. These sprinklers aren't as strong as rotary sprinklers and are not able to throw water as far. They typically spray streams of water ranging between 3 and
1.040431
Zyphra/Zyda-2
Gisle Aas NAME LWP - Library for WWW access in Perl SYNOPSIS use LWP; print "This is libwww-perl-$LWP::VERSION\n"; DESCRIPTION Libwww-perl is a collection of Perl modules which provides a simple and consistent programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients, thus libwww-perl is said to be a WWW client library. The library also contains modules that are of more general use. The main architecture of the library is object oriented. The user agent, requests sent and responses received from the WWW server are all represented by objects. This makes for a simple and powerful interface to these services. The interface should be easy to extend and customize for your needs. The main features of the library are: - Contains various reusable components (modules) that can be used separately or together. - Provides an object oriented model of HTTP-style communication. Within this framework we currently support access to http, gopher, ftp, news, file, and mailto resources. - The library can be used through the full object oriented interface or through a very simple procedural interface. - Supports the basic and digest authorization schemes. - Transparent redirect handling. - Supports access through proxy servers. - URL handling (both absolute and relative URLs are supported). - A parser for robots.txt files and a framework for constructing robots. - An experimental HTML parser and formatters (for PostScript and plain text). - The library can cooperate with Tk. A simple Tk-based GUI browser called 'tkweb' is distributed with the Tk extension for perl. - An implementation of the HTTP content negotiation algorithm that can be used both in protocol modules and in server scripts (like CGI scripts). - A simple command line client application called lwp-request. HTTP STYLE COMMUNICATION The libwww-perl library is based on HTTP style communication. This section tries to describe what that means. Let us start with this quote from the HTTP specification document <URL:_URL_ - The HTTP protocol is based on a request/response paradigm. A client establishes a connection with a server and sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity meta-information, and possible body content. What this means to libwww-perl is that communication always takes place through these steps: First a request object is created and configured. This object is then passed to a server and we get a response object in return that we can examine. A request is always independent of any previous requests, i.e. the service is stateless. The same simple model is used for any kind of service we want to access. For example, if we want to fetch a document from a remote file server, then we send it a request that contains a name for that document and the response will contain the document itself. If we access a search engine, then the content of the request will contain the query parameters and the response will contain the query result.
If we want to send a mail message to somebody then we send a request object which contains our message to the mail server and the response object will contain an acknowledgment that tells us that the message has been accepted and will be forwarded to the recipient(s). It is as simple as that! The Request Object The request object has the class name HTTP::Request in libwww-perl. The fact that the class name uses HTTP:: as a name prefix only implies that we use the HTTP model of communication. It does not limit the kind of services we can try to pass this request to. For instance, we will send HTTP::Requests both to ftp and gopher servers, as well as to the local file system. The main attributes of the request objects are: - The method is a short string that tells what kind of request this is. The most used methods are GET, PUT, POST and HEAD. - The url is a string denoting the protocol, server and the name of the "document" we want to access. The url might also encode various other parameters. - The headers contain additional information about the request and can also be used to describe the content. The headers are a set of keyword/value pairs. - The content is an arbitrary amount of data. The Response Object The response object has the class name HTTP::Response in libwww-perl. The main attributes of objects of this class are: - The code is a numerical value that encodes the overall outcome of the request. - The message is a short (human readable) string that corresponds to the code. - The headers contain additional information about the response and they also describe the content. - The content is an arbitrary amount of data. Since we don't want to handle all possible code values directly in our programs, the libwww-perl response object has methods that can be used to query what kind of response this is. The most commonly used response classification methods are: is_success() The request was successfully received, understood or accepted. is_error() The request failed. The server or the resource might not be available, access to the resource might be denied or other things might have failed for some reason. The User Agent Let us assume that we have created a request object. What do we actually do with it in order to receive a response? The answer is that you pass it on to a user agent object and this object will take care of all the things that need to be done (low-level communication and error handling). The user agent will give you back a response object. The user agent represents your application on the network and it provides you with an interface that can accept requests and will return responses. You should think about the user agent as an interface layer between your application code and the network. Through this interface you are able to access the various servers on the network. The libwww-perl class name for the user agent is LWP::UserAgent. Every libwww-perl application that wants to communicate should create at least one object of this kind. The main method provided by this object is request(). This method takes an HTTP::Request object as argument and will (eventually) return a HTTP::Response object. The user agent has many other attributes that let you configure how it will interact with the network and with your application code. - The timeout specifies how much time we give remote servers to create responses before the library disconnects and creates an internal timeout response.
- The agent specifies the name that your application should use when it presents itself on the network. - The from attribute can be set to the e-mail address of the person responsible for running the application. If this is set, then the address will be sent to the servers with every request. - The use_alarm specifies whether it is OK for the user agent to use the alarm(2) system call to implement timeouts. - The use_eval specifies whether the agent should raise an exception (die in perl) if an error condition occurs. - The parse_head specifies whether we should initialize response headers from the <head> section of HTML documents. - The proxy and no_proxy specify if and when communication should go through a proxy server. <URL:_URL_ - The credentials provide a way to set up user names and passwords that are needed to access certain services. Many applications would want even more control over how they interact with the network and they get this by specializing the LWP::UserAgent by sub-classing. The library provides a specialization called LWP::RobotUA that is used by robot applications. An Example This example shows how the user agent, a request and a response are represented in actual perl code:

# Create a user agent object
use LWP::UserAgent;
$ua = new LWP::UserAgent;
$ua->agent("AgentName/0.1 " . $ua->agent);

# Create a request
my $req = new HTTP::Request POST => '_URL_';
$req->content_type('application/x-www-form-urlencoded');
$req->content('match=www&errors=0');

# Pass request to the user agent and get a response back
my $res = $ua->request($req);

# Check the outcome of the response
if ($res->is_success) {
    print $res->content;
} else {
    print "Bad luck this time\n";
}

The $ua is created once when the application starts up. New request objects are normally created for each request sent. NETWORK SUPPORT This section goes through the various protocol schemes and describes the HTTP style methods that are supported and the headers that might
1.217378
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
March 25, 2014 by Richard Elliott In his influential book Capturing Sound: How Technology Has Changed Music, Mark Katz identifies seven traits of sound recording technology that are 'distinctive and defining': tangibility, portability, (in)visibility, repeatability, temporality, receptivity, and manipulability (Katz, Capturing Sound, pp. 9-47). While the first of these would seem the most obvious to discuss in relation to musical materiality, all seven of Katz's traits are relevant to such a discussion. Tangibility relates to the way in which, with the onset of recorded sound, music becomes a thing. Recorded sound did not invent music as an object, and it is important to remember previous examples of the attempt to 'fix' or 'keep' music, such as notation, written description, ballad sheets, sheet music and even the collecting of musical instruments. But the objects associated with recorded sound–from early cylinders through records and on to CDs–could be seen (and felt and heard) to 'contain' music with far greater fidelity and far greater proximity to 'the thing itself' than earlier, non-sonic objects. As many people have shown, this thing-ness would have considerable implications for the association between music and commodification (Eisenberg 2008 is particularly good on this subject). Again, recordings did not invent the connection between music and capital but they did fundamentally reconstitute the capitalist machinery of what would come to be known as the culture industry. This refiguring of music's potential as commodity means that this particular form of tangibility has survived the recent decline in the prevalence of the recorded object; the digitalizing and virtualizing of culture may have brought about a significant change in what we think of as cultural 'objects', but the relationship between culture and capital seems to be as firm in the present as at any point during the last century. Katz's second trait, portability, refers to a quality that becomes ever more apparent in the evolution of music-as-thing. Walter Benjamin's famous analysis of mechanical reproduction highlights portability as one of the ways in which the 'aura' of the original artwork is lost. It was the artwork's location in a particular time and space that contributed to its aura. The mechanically-reproduced object, however, has no need of such an aura given that a vital part of its raison d'être is the freedom it allows its users to have it wherever and whenever they wish. Recorded music can be disseminated to diverse audiences who are severed from the time and space of the original musical moment. As Katz says, with reference to the travelling picò sound systems of Cartagena, Colombia, 'while recorded music is often decoupled from its origins in space and time, this "loss" begets a contextual promiscuity that allows music to accrue new, rich, and unexpected meanings' (Katz, p. 15). By visibility and invisibility, Katz is highlighting the fact that, with recorded sound, there is no need for performers and audiences to see each other. This means that a certain element of expressive communication is withdrawn as there can no longer be a reliance on facial expression or bodily gesture, factors which are important to performers and audience members alike. Audiences cannot read the musicians for clues, nor can performers gauge the response of their audiences to what they are playing. 
Repeatability is perhaps one of the more obvious facets of recorded sound, yet it can be conceptually complex, as a number of philosophical works that have taken repetition as their subject have shown. In terms of what Katz would call 'phonographic effects'–the changes wrought upon musical performance and consumption inaugurated by recorded sound–repetition comes to the fore in the ways in which listeners come to expect certain things from the music they hear. Repetition brings familiarity and, while this can be of great value–for example, in the possibility of studying a particular piece of music, by which we generally mean, in the phonographic era, a particular recording–it can also lead to a sense of disturbance, such as that experienced when a live performance of a recording with which we are intimately familiar does not seem to live up to its recorded counterpart. What we witness here is a process whereby a previously understood precedence–the precedence of the performance to the recording–is seemingly reversed. Musicians, of course, are listeners too and, as Katz suggests, their listening practices and phonographic awareness will inevitably affect their performance. Until the introduction of the long-playing disc at the end of the 1940s, listeners were not able to hear more than four and a half minutes of continuous recorded sound. One of the most obvious phonograph effects, then, relates to temporality. Any kind of recording, be it writing, painting, photography or phonography, is a reduction of the complexity of the phenomenal world (but also, paradoxically, an addition to what there is in the world). As a time-based phenomenon, recorded sound inevitably placed an emphasis on the reduction of musical time to what could be fitted into the grooves of the record. A number of writers have attended to the changes undergone by specific musical forms due to the three-minute limit of 78 rpm recordings (see, for example, Vieira Nery 2004: 204; Holst 2006: 54; Conte 1995; Nagoski 2007), though it seems safe to conclude that such compromises were necessary for all musical styles and genres. As Katz relates, the three-minute standard came to be associated with the 'formula' for successful pop, though it should also be noted that the subsequent ability to record extended works did not discourage such temporal formulas from being maintained in many genres. The transcriptive nature of most recording up to the 1950s had less to do with a desire for authenticity than with necessity. The rather cumbersome process of recording during this early period necessitated a large amount of manipulation, from the placing of singers and musicians at various points in relation to the microphone to the elimination of certain instruments and styles and their replacement with more phonograph-friendly alternatives. However, while these factors could be defined in terms of the manipulation necessary to secure good recordings, Katz chooses to categorise them as examples of 'receptivity', his least well-defined category. 'Manipulability', by contrast, is used to refer to the ability, with the advent of tape recording in the 1940s and digital recording at the end of the 1970s, to manipulate the recorded object itself. This might involve the splicing together of different recordings to create a single recorded 'text', such as happened with Teo Macero's recordings of Miles Davis in the 1960s and George Martin's of the Beatles in the same decade.
Or it might involve the sampling of recordings to use as a background texture or foregrounded, featured element in new recordings, as became the practice in hip-hop and musics influenced by it. The role of producers, studio engineers and, in the case of hip hop and dance music, deejays became far more prominent due to these kinds of sonic manipulation. Before the 'turntablism' of hip hop deejays, playback devices and records had been used as instruments or instrumental textures by a number of artists, including John Cage in his Imaginary Landscape No. 1 (1939). The use and deliberate misuse of recordings and playback technology constitute, for Caleb Kelly, a notable strand of subversive art in the second half of the twentieth century (Kelly 2009). With reference to the criticisms levelled against recording technology by thinkers such as Theodor Adorno and Jacques Attali, Kelly suggests that practitioners of what he calls 'cracked media' challenge the intended uses of technology and substitute alternative forms of labour via subversive acts of misuse and abuse. Korean-born Nam June Paik, for example, modified turntables to allow numerous randomly accessed records to be played on them simultaneously, the end results working as both quirky sculptures and manifestations of sound art. Czech artist Milan Knižak put on ritualistic happenings in the streets of Prague before moving to sound-based work that included the physical alteration and mutilation of vinyl records. Works such as Broken Music applied cut-up techniques to records, cracking, dismantling, and rebuilding them so that they played reconfigured music. US-based Christian Marclay also worked with broken and cut-up records on projects which complemented his work in the visual arts while simultaneously performing as a radical turntablist. Japan's Yasunao Tone performed similar mutilations on compact discs, "wounding" them in order to change and distort their data. The German band Oval, meanwhile, was instrumental in turning the sound of CD malfunction into aesthetically pleasing music, using glitch as a texture in pop songs. There are numerous aspects that Katz does not attend to in the initial presentation of his seven traits. One is the way in which some of them work together. Temporality and repeatability, for example, should be thought of as necessarily connected. Of particular importance to our project are the ways in which time and repetition shape and are shaped by memory acts, whether voluntary or involuntary. Music performs its evocation work by tapping into the ability to render the past in the present to a seemingly infinite degree. Temporality here refers not only to the length of a recording but to the duration of the remembered experience that the recording summons up. Length of recording is still important, however, because one of the many magical qualities of the recording is its very brevity in comparison to the experience it evokes. Michael Pickering and Emily Keightley attend to this issue when they write, 'It is obvious that one piece of music did not play throughout our memories of
2.125903
openbmb/Ultra-FineWeb
Why CFOs should have artificial intelligence on their minds has been saved Why CFOs should have artificial intelligence on their minds - Getting smart about AI and ML - Around the blocks - Artificial intelligentsia: Humans working with machines - Contact us Smart CFOs now have to give serious thought to artificial intelligence (AI). The technology, which enables computers to be taught to analyze data, identify patterns, and predict outcomes, has evolved from aspirational to mainstream, opening a potential knowledge gap among some finance leaders. In fact, "AI's 'early adopter' phase is ending," according to the recently published third edition of Deloitte's State of AI in the Enterprise report.1 The survey, which collected responses from nearly 2,750 executives at companies that have adopted AI, found that about half of respondents (47%) were "skilled" in their AI efforts, meaning that their companies had launched multiple AI systems, but lagged in terms of their number of implementations or their AI expertise—or both. Another 26% were categorized as "seasoned," given that they had already built multiple AI systems and shown a high level of maturity in selecting, managing, and integrating AI technologies. For their part, CFOs seem eager to explore AI's potential, despite the pandemic-forced focus on cash. In fact, in Deloitte's North American CFO Signals™ survey for the third quarter of 2020, "accelerated business digitization," including AI, was one of the top strategic shifts CFOs said their companies were making in response to the turbulent economic environment.2 What many finance leaders recognize is that AI is more than another cutting-edge tool. By unleashing its full capabilities in finance and throughout the business, companies can turn it into a driver of differentiation that not only increases productivity, but also boosts growth. Within the finance function, for example, AI can be applied to replacing repetitive and labor-intensive tasks, performing such transactional work with increased speed and accuracy. Moreover, with its capacity to learn from large data sets, the technology can also be used to improve accuracy in such areas as budgeting and forecasting to enhance companywide decision-making. In this issue of CFO Insights, we'll discuss how finance leaders can incorporate AI (particularly the pattern-recognition skills of its machine learning application) into their operations. In addition, we'll explore some of the decisions involved with implementing AI: What kinds of projects should you start with (hint: think visibility)? Where can you source hard-to-find talent that aligns with the mission (hint: look around you)? Should you now become a data scientist—or at least learn how to sound like one (hint: apply humility liberally)? Getting smart about AI and ML The term "artificial intelligence" refers to the concept of making machines smarter, enabling them to mimic human cognitive functions. Machine learning (ML) is an algorithmic branch of AI that involves allowing the machine to teach itself based on the data it receives. As the algorithms derive predictions or insights from the data, and get feedback on the outcomes, they keep refining their capabilities to achieve higher degrees of certainty. Such an approach is clearly well-suited to the finance function, which routinely relies on large and complex volumes of data, both financial and operational, to fuel its many processes. 
In the State of AI survey, 67% of respondents reported that they are currently using machine learning, and almost all (97%) plan to use it in the near future. Among executives whose companies have adopted AI, many envision it transforming not only businesses, but also entire industries in the next five years. Using ML to optimize and transform processes, CFOs can automate tasks that typically require manual intervention, improving the accuracy of the accrual process or speeding up account reconciliation and, ideally, eliminating any traces of human bias. But ML's place isn't just in back-office applications, where it can boost efficiencies. By partnering with the commercial side of the business, ML can produce insights and boost predictability, providing increasingly accurate predictions as to, for instance, which customers are likely to make repeat purchases or which suppliers are likely to default. The algorithms, of course, only know what they absorb from the data—which is based on countless human decisions and a vast array of systems. As such, their knowledge base reflects and projects flaws, ranging from inconsistent data quality to potential human bias. Identifying and eliminating such deficiencies requires ongoing maintenance and testing, subjecting the algorithms to quality control so that, for instance, a bank doesn't unfairly reject the lending application of a creditworthy individual. ML would know better. Around the blocks The technology's capacity for learning depends on not only the volume and quality of data it receives, but also how well it is aligned with the problem. To lay down a solid foundation for the technology, companies need to assess and mitigate any quality issues involving data, undertaking data-cleansing initiatives to boost integrity and accuracy. Companies that set their expectations high, and find the availability of relevant data low, are setting themselves up for disappointment. To support AI, any data governance issues also need to be addressed beforehand. Internal wrangling over data (some data owners may be risk-averse about sharing theirs, while others want to keep control over data because of its value) can result in needless delays. But CFOs who remain focused on realizing the ultimate benefits of ML sooner rather than later—aware that it can free up their teams to spend more time on strategic issues—can see past the initial questions they may have, including: - How can we fund our AI projects? Taking a cross-functional, integrated approach to ML will likely produce the most value for the enterprise, resulting in a shared decision-making tool. But companies can start with point solutions, aiming the technology at a specific problem rather than investing in a more costly enterprisewide solution. Barriers to entry for AI have dropped significantly, as platforms offering ready-made infrastructure and algorithms have become available.3 If necessary, finance leaders can explore creative funding sources, such as vendor subsidy and ecosystem programs, coinvestment strategies, and other models, to provide funding for technology innovation within finance. Teams can also explore venture capital models to fund AI use cases and to use the outcomes as proof points for further investment. - Which early use cases are likely to yield a financial return? ML's self-learning capabilities mean it gains value over time. But identifying a specific problem, and defining the desired outcome, can enable CFOs to measure the technology's impact early on.
The greatest opportunity for short-term ROI may lie with streamlining back-office activities, including transaction processing (particularly in shared services). Decreasing labor-intensive, repetitive tasks will quickly and clearly justify long-term investment in AI technology. In the State of AI survey, respondents cited improved process efficiency as the top benefit that AI has enabled them to achieve. The best use cases tend to be function-specific, but should also offer broad visibility, if possible. - Should we build or buy AI? Finance leaders may want to collaborate with their technology counterparts (CIO, CTO) to determine whether to partner with third-party AI providers, develop solutions internally, or pursue a hybrid approach. In making this decision, finance and IT should investigate other use cases being implemented in the organization and leverage home-grown experience and talent to understand what suits the current environment. Organizations frequently mix bought capabilities and home-grown models. When evaluating whether to expand partnerships with cloud vendors and other providers or to foster new ones, consider if the problem is shared across other areas of the enterprise, and ensure alignment of the organization's AI ambitions. Is the process you are solving for specific to finance (e.g., revenue forecasting)? Or is it a solution that could benefit other areas as well (e.g., invoice matching)? - How can we quickly develop in-house expertise? Assessing off-the-shelf solutions and designing realistic use cases requires deep competency in AI. One option is to outsource the technical end to a provider of managed AI services, enabling finance to focus on excavating data out of functional silos. Developing in-house expertise can begin with prioritizing AI-related skills in recruitment and training. It may be helpful to stage a hackathon to solve a specific business problem, using it to identify a group of developers who are interested in becoming ML engineers. By making it part of their job to do so, the company can build a knowledgeable team. - Why should we trust AI? The notion of intelligent machines replacing—and overpowering—humans is strictly the province of blockbuster movies. CFOs should evaluate their current operations and identify opportunities to update the operating model, ensuring that the right people and capabilities are in place to maintain an AI-fueled finance function. Employees embedded in transactional tasks may need to retool themselves, developing stronger analytical skills. The finance function's overall skill set may shift, but not necessarily its population of FTEs. Humans with the right training need to manage AI systems, leveraging the appropriate capabilities. Artificial intelligentsia: Humans working with machines In the past, many CFOs have led campaigns to incorporate analytics, particularly in FP&A. But the stakes and rewards involved in championing ML are much higher, given its potential use across the enterprise. And while uncertainty surrounding the pandemic continues, the fact that ML is not rules-based (there's no limit to its ability to evolve) means it can change and adapt as the next normal takes shape. Similarly, finance leaders can lay the groundwork for the technology by taking piecemeal steps, such as: - Inventory internal capabilities. AI technology, including ML, is function-agnostic, so before CFOs plant their flag, they ought to check to make sure other functions haven't preceded them. 
Marketing may have begun using it to understand how to better retain customers in a virtual environment. Supply chain may already be relying on it to
1.423164
openbmb/Ultra-FineWeb
In the circuit 66, the drain 80 and source 82 are disposed in series between the pre-drain resistor 70 and the post-source resistor 72. The gate 76 is connected to WL3. The pre-drain resistor 70, the drain 80, the source 82, and the post-source resistor 72 are disposed in series on the bit-line BL0. The capacitor 68, which models the capacitance of the bit-line, has one plate connected to ground 74 and another plate connected to the bit-line BL0, in parallel with the memory elements 64. Several of the components of the circuit 66 represent phenomena affecting the memory elements 64 when they are sensed. The pre-drain resistor 70 generally represents the drain-to-bitline resistance of the memory elements 64 connected to the bit-line above (i.e., up current from) WL3 when these memory elements 64 are turned on (e.g., during a read operation). Similarly, the post-source resistor 72 generally corresponds to the source-to-ground resistance of the memory elements 64 connected to the bit-line below WL3 when the memory element 64 is sensed. The circuit 66 models electrical phenomena associated with reading the memory elements 64 at the intersection of WL3 and BL0. The operation of the memory elements 64 will now be briefly described with reference to FIGS. 4 and 5. FIG. 5 illustrates one potential relationship between the bit-line current (IBIT), the word-line voltage (VWL), and the voltage of the floating gate 78 (VFG). As illustrated by FIG. 5, VFG affects the response of the memory element 64 to a given VWL. Decreasing the voltage of the floating gate shifts the I-V curve of the memory elements 64 to the right. That is, the relationship between the bit-line current and a word-line voltage depends on the voltage of the floating gate 78. The memory elements 64 may store data by exploiting this effect. To write data to the memory elements 64, a charge corresponding to the data may be stored on the floating gate 78. The charge of the floating gate 78 may be modified by applying voltages to the source 82, drain 80, and/or gate 76 such that the resulting electric fields produce phenomena like Fowler-Nordheim tunneling and/or hot-electron injection near the floating gate 78. Initially, the memory elements 64 may be erased by applying a word-line voltage designed to drive electrons off of the floating gate 78. In some embodiments, an entire column or block of memory elements 64 may be erased generally simultaneously. Once the memory elements 64 are erased, the gate 76 voltage may be manipulated to drive a charge onto the floating gate 78 that is indicative of a data value. After the write operation ends, the stored charge may remain on the floating gate 78 (i.e., the memory elements 64 may store data in a nonvolatile fashion). As illustrated by FIG. 5, the value stored by the memory element 64 may be read by applying a voltage, VWL, to the gate 76 and quantizing (e.g., categorizing) a resulting bit-line current, IBIT. Each of the I-V traces depicted by FIG. 5 corresponds to a different charge stored on the floating gate, VFG, which should not be confused with the voltage that is applied to the gate, VWL. The difference in floating gate 78 voltage, VFG, between each I-V trace is an arbitrarily selected scaling factor "x." The illustrated I-V traces correspond to eight different data values stored by the memory element 64, with a VFG of 0x representing a binary data value of 000, a VFG of 1x representing a binary data value of 001, and so on through VFG of 7x, which represents a binary data value of 111.
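As a rough numerical sketch (not the patent's circuit, and with arbitrary placeholder numbers rather than values from FIG. 5), the eight-level mapping just described, and the way noise can push a read across a level boundary, can be illustrated as follows.

```python
# Illustrative sketch of the eight-level mapping described above.
# The current model and noise value are arbitrary placeholders, not FIG. 5 data.

LEVELS = 8      # eight stored states, 000 through 111
X = 1.0         # arbitrary scaling factor "x" between adjacent floating-gate levels

def stored_bits(level: int) -> str:
    """Three-bit pattern represented by floating-gate level 0..7."""
    return format(level, "03b")

def ideal_current(level: int) -> float:
    """Toy monotonic model of bit-line current vs. stored level (arbitrary units)."""
    return 1.0 + level * X

def quantize(i_bit: float) -> int:
    """Choose the level whose ideal current is closest to the sensed current."""
    return min(range(LEVELS), key=lambda lvl: abs(ideal_current(lvl) - i_bit))

# A clean read of level 3 recovers the stored value 011...
clean = ideal_current(3)
print(stored_bits(quantize(clean)))        # -> 011

# ...but a noise excursion larger than half the level spacing flips the read to 100,
# the kind of error the passage attributes to bit-line noise.
noisy = clean + 0.6 * X
print(stored_bits(quantize(noisy)))        # -> 100
```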
Thus, by applying a voltage to the gate 76 and measuring the resulting bit-line current, the charge stored on the floating gate 78 may be sensed, and the stored data may be read. The accuracy with which the bit-line current is quantized may affect the amount of data that a designer attempts to store in each memory element 64. For example, in a system with a low sensitivity, a single bit may be stored on each memory element 64. In such a system, a floating gate voltage VFG of 0x may represent a binary value of 0, and a floating gate voltage VFG of −7x may represent a binary value of 1. Thus, the difference in floating gate voltages VFG corresponding to different data values may be relatively large, and the resulting differences in bit-line currents for different data values may also be relatively large. As a result, even low-sensitivity sensing circuitry may quantize (e.g., discern) these large differences in bit-line current during a read operation. In contrast, high-sensitivity sensing circuitry may facilitate storing more data in each memory element 64. For instance, if the sensing circuitry can distinguish between the eight different I-V traces depicted by FIG. 5, then the memory elements 64 may store three bits. That is, each of the eight different charges stored on the floating gate 78 may represent a different three-bit value: 000, 001, 010, 011, 100, 101, 110, or 111. Thus, circuitry that precisely quantizes the bit-line current IBIT may allow a designer to increase the amount of data stored in each memory element 64. However, as mentioned above, a variety of effects may interfere with accurate measurement of the bit-line current. For instance, the position of the memory elements 64 along a bit-line may affect RPD and RPS, which may affect the relationship between the word-line voltage VWL and the bit-line current IBIT. To illustrate these effects, FIG. 6 depicts noise on the bit-line while reading from the memory element 64. As illustrated, noise in the bit-line current IBIT may cause the bit-line current IBIT to fluctuate. Occasionally, the fluctuation may be large enough to cause the bit-line current IBIT to reach a level that represents a different stored data value, which could cause the wrong value to be read from the memory elements 64. For instance, if the bit-line current is sensed at time 84, corresponding to an arbitrarily selected peak, a data value of 100 may be read rather than the correct data value of 011. Similarly, if the bit-line current is sensed at time 86, corresponding to an arbitrarily selected local minimum, a data value of 010 may be read rather than a data value of 011. Thus, noise on the bit-line may cause erroneous readings from memory elements 64. FIG. 7 depicts a quantizing circuit 16 that may tend to reduce the likelihood of an erroneous reading. The illustrated quantizing circuit 16 includes an analog-to-digital converter 88 and a digital filter 90 connected to each of the bit-lines 38, 40, 42, 44, and 46, respectively. Each bit-line 38, 40, 42, 44, and 46 may connect to a different analog-to-digital converter 88 and digital filter 90. The digital filters 90, in turn, may connect to an input/output bus 92, which may connect to a column decoder 18, a column address latch 20, and/or control circuitry 28 (see FIG. 2). In operation, the quantizing circuit 16 may quantize (e.g., digitize) analog signals from the memory elements 64 in a manner that is relatively robust to noise.
As explained below, the quantizing circuit 16 may do this by converting the analog signals into a bit-stream and digitally filtering high-frequency components from the bit-stream. The analog-to-digital converter 88 may be a one-bit, analog-to-digital converter or a multi-bit, analog-to-digital converter. In the present embodiment, an analog-to-digital converter 88 receives an analog signal from the memory element 64, e.g., a bit-line current IBIT or a bit-line voltage VBL, and outputs a bit-stream that represents the analog signal. The bit-stream may be a one-bit, serial signal with a time-averaged value that generally represents the time-averaged value of the analog signal from the memory element 64. That is, the bit-stream may fluctuate between values of zero and one, but its average value, over a sufficiently large period of time, may be proportional to the average value of the analog signal from the memory element 64. In certain embodiments, the bit-stream from the analog-to-digital converter 88 may be a pulse-density modulated (PDM) version of the analog signal. The analog-to-digital converter 88 may transmit the bit-stream to the digital filter 90 on a bit-stream signal path 94. The digital filter 90 may digitally filter high-frequency noise from the bit-stream. To this end, the digital filter 90 may be a low-pass filter, such as a counter, configured to average (e.g., integrate and divide by the sensing time) the bit-stream over a sensing time, i.e., the time period over which the memory element 64 is read. (Alternatively, in some embodiments, the digital filter
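The description above of a one-bit stream whose time-averaged value tracks the analog signal, followed by a counter acting as a digital low-pass filter, resembles a first-order sigma-delta scheme. The sketch below is a generic, hedged illustration of that idea with invented signal values; it is not a model of the patent's actual converter.

```python
# Hedged sketch: pulse-density modulation plus a counter-style low-pass filter.
# A generic first-order modulator with invented numbers, not the patent's circuit.
import random

def pdm_bitstream(samples, full_scale=1.0):
    """Emit a 1-bit stream whose density of ones tracks the input signal."""
    acc = 0.0
    bits = []
    for x in samples:
        acc += x
        if acc >= full_scale:
            bits.append(1)
            acc -= full_scale
        else:
            bits.append(0)
    return bits

def counter_filter(bits):
    """Average the bit-stream over the sensing time (integrate and divide)."""
    return sum(bits) / len(bits)

random.seed(1)
# Noisy analog signal (e.g., a normalized bit-line current) with a true mean of 0.4.
signal = [0.4 + random.uniform(-0.05, 0.05) for _ in range(1024)]

bits = pdm_bitstream(signal)
print("time-averaged value after filtering:", round(counter_filter(bits), 3))  # ~0.4
```

Averaging over the sensing time discards the high-frequency fluctuation in the bit-stream, which is why a scheme of this general kind is comparatively robust to the bit-line noise described earlier.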
2.008805
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
described by Satake et al., using casein (2 % in 200 mM Tris–HCl buffer, pH 7.0) as substrate. Briefly, 0.4 ml of casein was incubated with different concentrations of PgSE (0-300 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. About 1.5 ml of trichloroacetic acid (TCA) (0.44 M) was added to terminate the reaction and allowed to stand for 30 min at room temperature. The mixture was centrifuged at 3000 rpm for 5 min and the supernatant (1 ml) was mixed with 0.4 M sodium carbonate (2.5 ml) and 1:2 diluted Folin reagent (0.5 ml). The colour developed was read at 660 nm. One unit of enzyme activity was defined as the amount of enzyme required to produce an increase in absorbance of 0.01 at 660 nm. Protease activity was expressed as units/min/mg. Fibrinogenolytic activity was assayed as described by Gubbiveeranna et al. Briefly, human fibrinogen (50 µg) was treated with different concentrations of PgSE (0.5–20 µg) in 25 µl of 10 mM sodium phosphate buffer (pH 7.0) and incubated at 37° for 2.5 h. Reducing sample buffer (10 µl) containing 1 M urea, 4 % SDS and 4 % beta (β)-mercaptoethanol was added to terminate the reaction and kept in a boiling water bath for 3 min. The hydrolyzed products were analyzed in 12 % SDS-PAGE and visualized by staining with Coomassie brilliant blue R-250. Fibrinolytic activity was assessed according to the method of Shivaiah and Kempaiah. Briefly, blood treated with trisodium citrate (3.2 %) in the ratio 1:9 was centrifuged at 3000 rpm for 5-10 min. The supernatant obtained was separated and used as Platelet Poor Plasma (PPP). Equal volumes of PPP (100 µl) and 25 mM CaCl2 (100 µl) were incubated at 37° to obtain a fibrin clot. The clot formed was thoroughly washed 5-6 times with 10 mM sodium phosphate buffer (pH 7.0). The washed fibrin clot was incubated with different concentrations of PgSE (15 to 120 µg) in a total reaction volume of 40 μl of 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. Reducing sample buffer (20 µl) containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol was added to terminate the reaction and kept in a boiling water bath for 3 min. An aliquot (20 μl) of the supernatant was subjected to 10 % SDS-PAGE to analyze the fibrin hydrolyzing pattern. Recalcification time (RT): Plasma recalcification time was determined according to the method described by Gubbiveeranna et al. Briefly, PPP (100 µl) was pre-warmed to 37° before use and incubated with different concentrations of PgSE (0-80 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 5 min. Later, 100 µl of 25 mM CaCl2 was added and the clotting time was recorded. The 10 mM sodium phosphate buffer (pH 7.0) alone without PgSE was considered the negative control. The protein concentration was estimated as described by Lowry et al. Briefly, bovine serum albumin (BSA) was used as the standard and the protein concentration of PgSE was determined by comparing with known concentrations of BSA. The experiments were performed in triplicate and the data obtained from the experiments were expressed as mean±standard error of mean (SEM). The results were statistically analysed using one-way analysis of variance (ANOVA) followed by Tukey's Multiple Comparison Test. The data were considered significant at p<0.05. Results and Discussion The PgSE (160 µg) was treated with non-reducing sample buffer and incubated in a boiling water bath for 3 min. The sample was then loaded onto 12 % SDS-PAGE under non-reducing conditions.
After electrophoresis, the gel was stained with Coomassie Brilliant Blue R-250 to visualise the protein bands. PgSE exhibited dense bands in the high molecular weight region between 122 kDa and 47 kDa. Two distinct bands were seen in the lower molecular weight region at ~16.6 kDa and ~14.3 kDa (fig. 1). PgSE was evaluated for proteolytic activity using 2 % casein as substrate. It showed a specific activity of 0.8 U/mg/ml. Casein (0.2 %) and gelatin (0.2 %) were copolymerized with the polyacrylamide gel separately for the detection of proteolytic activity. PgSE (80 µg) was incubated with non-reducing sample buffer at 37° for 1.5 h and loaded onto 12 % SDS-PAGE under non-reducing conditions. After electrophoresis, gels were washed with 2.5 % Triton X-100 for 1 h to remove SDS. The gels were incubated overnight in incubation buffer containing Tris–HCl (50 mM, pH 7.6, 10 mM CaCl2 and 150 mM NaCl). Gels were then stained with 0.25 % Coomassie brilliant blue R-250 to visualize the activity bands. The bands were observed in the molecular weight region of ~97 kDa and between 34 kDa and 18.2 kDa for caseinolytic activity. Similarly, translucent activity bands in the molecular weight region ~96.7 kDa and between 52.3 kDa and 24.9 kDa were seen for the gelatinolytic activity (fig. 2). PgSE was studied for fibrinogenolytic activity using human fibrinogen, which is a 340 kDa soluble plasma glycoprotein, composed of three subunits (Aα, Bβ and γ). Fibrinogen plays a crucial role in arresting bleeding during vascular injury by getting converted to fibrin upon the action of thrombin. The fibrin is subsequently converted to a fibrin-based blood clot. PgSE hydrolysed all the chains (Aα, Bβ and γ) of fibrinogen in a dose-dependent manner. The fibrinogen (50 µg) was incubated with different concentrations of PgSE (0.5 µg to 20 μg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. The reaction was terminated by adding denaturing sample buffer containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol and kept in a boiling water bath for 3 min. SDS-PAGE (12 %) was performed under reducing conditions to visualize the degradation pattern. After electrophoresis, the gel was stained with 0.25 % Coomassie brilliant blue R-250 (fig. 3). The fibrinolytic activity of PgSE was studied using human fibrin. PgSE hydrolyzed the fibrin clot in a dose-dependent manner. α-polymer and α-chain subunits of fibrin were hydrolysed at 90 µg followed by partial degradation of γ-dimer and β-chain subunits at a concentration of 120 µg. Equal volumes of PPP (100 µl) and 25 mM CaCl2 (100 µl) were mixed and incubated to obtain a fibrin clot. The fibrin clot was washed in 10 mM sodium phosphate buffer (pH 7.0) and incubated with different concentrations of PgSE (15 µg to 120 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. The reaction was terminated by adding denaturing sample buffer containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol and kept in a boiling water bath for 3 min. The sample was loaded onto 10 % SDS-PAGE under reducing conditions. After electrophoresis, the gel was stained with Coomassie Brilliant blue R-250 to visualize the bands (fig. 4). PgSE, which was found to contain a protease associated with fibrinogenolytic and fibrinolytic activity, showed anticoagulant activity upon incubation with PPP. PgSE increased the recalcification time in a dose-dependent manner, indicating its anticoagulant property. PgSE increased the clotting time from the control value of 161 s to 487 s at a concentration of 80 µg. The PgSE increased the clotting time by 3.03-fold (fig.
5). Fig. 5: RT of PgSE. PPP (100 μl) was incubated with different concentrations of PgSE (20, 40, 60 and 80 μg) in a total volume of 20 μl of sodium phosphate buffer (10 mM, pH 7.0) at 37
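As a quick arithmetic illustration of the definitions quoted above: the 0.01-absorbance unit definition and the 161 s control versus 487 s treated clotting times come from the text, while the absorbance reading and the amount of enzyme used in the specific-activity example are invented placeholders.

```python
# Arithmetic sketch based on definitions quoted in the text; the absorbance
# reading and enzyme amount are invented placeholders, not study data.

# One unit = amount of enzyme producing a 0.01 increase in absorbance at 660 nm.
def protease_units(delta_a660: float) -> float:
    return delta_a660 / 0.01

delta_a660 = 0.24          # hypothetical absorbance increase over the incubation
enzyme_mg = 0.03           # hypothetical amount of PgSE used (30 ug)
minutes = 2.5 * 60         # 2.5 h incubation, as in the assay

units = protease_units(delta_a660)
print(f"specific activity ~ {units / minutes / enzyme_mg:.2f} units/min/mg")

# Fold increase in plasma recalcification (clotting) time reported in the text:
control_s, treated_s = 161, 487
print(f"clotting time increased ~ {treated_s / control_s:.2f}-fold")  # close to the reported 3.03-fold
```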
1.098801
m-a-p/FineFineWeb
Research | Open | Published: Tissue-specific regulation of Igf2r/Airn imprinting during gastrulation Abstract Background Appropriate epigenetic regulation of gene expression during lineage allocation and tissue differentiation is required for normal development. One example is genomic imprinting, which is defined as parent-of-origin mono-allelic gene expression. Imprinting is established largely due to epigenetic differences arriving in the zygote from sperm and egg haploid genomes. In the mouse, there are approximately 150 known imprinted genes, many of which occur in imprinted gene clusters that are regulated together. One imprinted cluster includes the maternally expressed Igf2r, Slc22a2, and Slc22a3 genes and the paternally expressed long non-coding RNA (lncRNA) Airn. Although it is known that Igf2r and Airn are reciprocally imprinted, the timing of imprinted expression and accompanying epigenetic changes have not been well characterized in vivo. Results Here we show lineage- and temporal-specific regulation of DNA methylation and histone modifications at the Igf2r/Airn locus correlating with differential establishment of imprinted expression during gastrulation. Our results show that Igf2r is expressed from both alleles in the E6.5 epiblast. After gastrulation commences, the locus becomes imprinted in the embryonic lineage with the lncRNA Airn expressed from the paternal allele and Igf2r restricted to maternal allele expression. We document differentially enriched allele-specific histone modifications in extraembryonic and embryonic tissues. We also document for the first time allele-specific spreading of DNA methylation during gastrulation concurrent with establishment of imprinted expression of Igf2r. Importantly, we show that imprinted expression does not change in the extraembryonic lineage even though maternal DMR2 methylation spreading does occur, suggesting distinct mechanisms at play in embryonic and extraembryonic lineages. Conclusions These results indicate that similar to preimplantation, gastrulation represents a window of dynamic lineage-specific epigenetic regulation in vivo. Background Genomic imprinting is an epigenetic phenomenon that results in mono-allelic gene expression in a parent-of-origin manner. Imprinted expression has been identified at approximately 150 mouse genes, which often occurs in clusters containing multiple imprinted transcripts [1,2]. Expression of imprinted genes is thought to be established in cis by allele-specific DNA methylation established at imprinting control regions (ICRs) in the gametes, thus arriving in the zygote as maternal and paternal specific information. A regulatory theme has emerged at many imprinted clusters in which a single long non-coding RNA (lncRNA) is thought to repressively regulate genes in cis through direct transcriptional blocking and/or recruitment of repressive chromatin remodeling complexes such as G9a and PRC2, resulting in differential allele-specific histone modifications [3,4]. One cluster on mouse chromosome 17 includes the maternally expressed Igf2r, Slc22a2, and Slc22a3 genes and the paternally expressed lncRNA Airn [5], and several non-imprinted genes (Slc22a1, Mas, and Plg). The Airn promoter lies in the second intron of Igf2r, and Airn transcription occurs from the opposite strand overlapping Igf2r exons 1 and 2 [5-7].
Paternal Airn expression may participate in imprinting of the maternally expressed genes by blocking access of the transcriptional machinery to the Igf2r start site [8], and transcription of Airn has been shown to be required for silencing of Igf2r [8,9]. Paternal allele silencing of the other imprinted genes in the cluster only occurs in extraembryonic lineages and may be a result of Airn recruitment of repressive complexes such as G9a to their promoters [4]. Igf2r is expressed biallelically in ES cells and only becomes imprinted upon differentiation in vitro [4]. Although the expression of Igf2r and Airn has been documented in preimplantation and late-stage embryos [10-12], lineage-specific expression dynamics have not been observed during gastrulation. Recent studies have focused on mechanisms in ES cell models [4,8,13], but the precise timing and mechanisms responsible for imprinting at Igf2r/Airn in vivo remain unknown. Here we characterize tissue-specific dynamics of expression and epigenetic modifications that occur at Igf2r/Airn during normal gastrulation. We show that significant epigenetic regulation occurs at imprinted loci during epiblast differentiation in vivo. Results and discussion Imprinted expression of Igf2r and Airn during gastrulation The Igf2r/Airn imprinted cluster contains the maternally expressed Igf2r, Slc22a2, and Slc22a3, the paternally expressed Airn, and the non-imprinted Plg, Slc22a1, and Mas1 genes (Figure 1A). We determined the expression of the genes within the cluster in embryonic and extraembryonic tissues from C57BL/6JxPWD/PhJ-F1 embryos at embryonic days E6.5 and E7.5 by RT-PCR (Figure 1B). Of the genes in the cluster, Igf2r is expressed in both the epiblast (EPI) and visceral endoderm (VE) at E6.5 and the embryonic (EM) and extraembryonic (EX) tissues of E7.5 embryos (Figure 1B). Airn is expressed in the VE at E6.5 and in both tissues at E7.5. However, no Airn was detected in the epiblast at E6.5 (Figure 1B). Figure 1. Expression analysis. (A) Schematic of gene locations at the mouse Igf2r/Airn locus: transcription start sites (bent arrows), Igf2r (light grey), and Airn (dark grey). (B) RT-PCR analysis of genes in the cluster shows expression of Igf2r at E6.5 and E7.5, as well as Airn expression in the E6.5 VE and E7.5 EM and EX. The other genes in the cluster are not expressed at appreciable levels during gastrulation. (C) SSCP analysis of Igf2r expression shows biallelic expression in the E6.5 EPI, while the paternal allele is silent (imprinted) in all other tissues and stages examined. (D) RT-PCR demonstrates that Airn is not expressed in the epiblast but is paternally expressed (E) in all other samples. Two embryos (one per lane) shown for each tissue/stage for each assay. EPI, epiblast; VE, visceral endoderm; EM, embryonic portion of E7.5 embryo; EX, extraembryonic portion of E7.5 embryo. Red box highlights the non-imprinted status of Igf2r and lack of Airn expression in the E6.5 EPI. B, B6 allele; P, PWD allele. +, pooled adult kidney, liver, brain, and heart cDNA. Parental tissue used in (C-E) is adult kidney. To further understand the imprinted expression of Igf2r and Airn during gastrulation, we carried out allele-specific expression analysis of C57BL/6JxPWD/PhJ-F1 and C57BL/6J-Chr 17PWD/Ph/ForeJxC57BL/6J-F1 embryos (hereafter referred to as B × P and P × B F1 embryos, respectively). Single-strand conformation polymorphism (SSCP) revealed that Igf2r is expressed from both alleles in the EPI of E6.5 embryos (Figure 1C, red box).
In E6.5 VE, Igf2r is maternally expressed and paternally imprinted (Figure 1C). At E7.5, Igf2r is imprinted in both tissues (Figure 1C). Our results show that in the multipotent epiblast, Igf2r is expressed from both alleles, but once embryonic cells have adopted defined lineages at E7.5, Igf2r expression becomes imprinted. This correlation suggests a relationship between relative differentiation state in vivo and imprinted expression at the locus - consistent with ES cell models. Since Airn is thought to establish imprinting of Igf2r [8], we also examined allele-specific Airn expression. In E6.5 EPI, Airn is not expressed (Figure 1B,D, red box), corresponding with biallelic Igf2r expression (Figure 1C). In the VE at E6.5, where Igf2r is imprinted, we observe reciprocal imprinting (paternal expression) of Airn (Figure 1
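Although the study scores imprinting by SSCP band patterns rather than by sequencing reads, the underlying logic, classifying a locus as biallelic or imprinted from the relative maternal and paternal allele signal, can be sketched as follows. The counts and the 90 % threshold are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: calling imprinting status from allele-specific signal.
# Counts and the 0.9 threshold are invented; the study itself used SSCP band
# patterns, not read counts.

def imprinting_call(maternal: float, paternal: float, threshold: float = 0.9) -> str:
    total = maternal + paternal
    if total == 0:
        return "not expressed"
    maternal_fraction = maternal / total
    if maternal_fraction >= threshold:
        return "imprinted (maternal expression)"
    if maternal_fraction <= 1 - threshold:
        return "imprinted (paternal expression)"
    return "biallelic"

# Invented example values mirroring the qualitative result in Figure 1C:
samples = {
    "E6.5 EPI Igf2r": (480, 430),   # both alleles detected -> biallelic
    "E6.5 VE  Igf2r": (870, 40),    # maternal allele dominates -> imprinted
    "E7.5 EM  Airn":  (25, 610),    # paternal allele dominates -> imprinted
}
for name, (mat, pat) in samples.items():
    print(name, "->", imprinting_call(mat, pat))
```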
1.818146
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
Why is It Essential to Invest in Bird Proofing? People must watch pesky birds to protect their health and property. Here are a few reasons why: - Wellbeing Threats – Bird droppings and nesting materials have been linked to several diseases that can infect humans and animals. Their droppings contain various parasites and infectious diseases such as encephalitis and salmonella. - Investing Costs – Besides affecting people's well-being, bird droppings and other debris can damage your property. If left untouched, birds can completely ruin your property or solar panel after weeks, months and years of damage. - Equipment Defects – Pigeon droppings can cause costly damage to solar panels and other equipment. The acidic nature of their faeces can discolour surfaces and oxidise metals. - Clogged Drains – Bird droppings and nesting materials such as twigs, grass, and moss clog the drains and gutters. If not cleaned properly, it can cause flooding, especially during the rainy season. - Falling Chances – Slippery bird droppings in the environment can increase the chances of slipping and falling. - Food safety is essential – Birds find their way into restaurants, storage facilities, processing and packaging areas, and other places and contaminate food. Not monitoring bird nests can substantially impact food safety scrutiny. Solar Panel Pigeon Proofing in Melbourne One of the common issues faced by our clients is the damage caused by pigeons. Their nesting materials and droppings can accumulate on the panels, reducing their efficiency and potentially causing long-term damages. Additionally, bird droppings can be unsettling and unhygienic, posing health risks to you and your family. If you are facing issues with pigeon proofing in Melbourne, Tom's Pest Control Melbourne is here to help you. Our professional pigeon control service team in Melbourne specialises in providing effective solar panel pigeon proofing solutions. How We Implement Pigeon Control Solutions on Your Solar Panels? At Tom's Pest Control, we offer reliable and humane pigeon proofing methods tailored to suit your needs. Here's how we can assist you: Many of us worry about the methods bird removal companies use for bird control. At Tom's Pest Control, we believe in humane pigeon proofing methods to ensure the birds are not harmed. Here are some ways we can assist you in bird control. - Assessment: Our expert team will thoroughly inspect your solar panels and surrounding areas to identify potential entry points and areas where pigeons may nest. - Customised Solutions: Based on our assessment, we will recommend the most suitable pigeon proofing techniques for your solar panels. These may include: - Installing Mesh or Wire Barriers: We will secure your solar panels with durable solar panel bird proofing mesh or wire barriers, preventing pigeons from accessing the area. - Implementing deterrents: We can install effective visual or audio bird deterrents, such as reflective devices, bird spikes, or ultrasonic devices, to discourage pigeons from roosting on your solar panels. - Professional Installation: Our experienced technicians will skillfully install the chosen solar panel pigeon proofing measures to ensure their effectiveness and durability. - Maintenance and Follow-up: We offer ongoing maintenance services to ensure that the pigeon proofing measures remain intact and continue to protect your solar panels. Our team will always be a phone call away if you encounter any future issues. 
Our solar panel pigeon proofing methods are safe for the birds and comply with ethical standards. We strive to deliver long-lasting results and provide a pest-free environment for your solar panels. Despite being a crucial part of our ecology, birds may seriously harm humans and property. Damage may compound and cost you money, whether you have a bird nesting in your house or too many birds living on your land. Birds typically construct nests within rain gutters and other drainage system components when they settle on your roof. When the drainage system is obstructed, snow, ice, and rain can accumulate on your roof and cause significant problems. Roofs under too much stress may cave in, but not before they fracture and require expensive repairs. Also, the acidity in bird droppings can damage HVAC systems, roofing, vehicles, and more. When the droppings corrode HVAC wiring and systems, they will inevitably malfunction or break down. When the droppings corrode rooftops, leaks and holes can appear. Solar panels are becoming more and more popular among homeowners to reduce their power costs. However, when pigeons build a nest below your solar panels, they severely damage the panels and wiring, creating an unattractive mess on your roof. Consequently, it is necessary to pigeon-proof solar panels. Choosing our specialists to install your solar bird proofing means putting your faith in a business with years of experience. Our pest control specialists in your area are happy to discuss your concerns about pest issues. We provide quality treatments while keeping our costs low. In addition, your solar panel pigeon proofing will be backed by Tom's Pest Control Melbourne Service Guarantee to provide you peace of mind. If the treatment is unsuccessful, we'll fix it for free. Waiting for pigeons to ruin your solar panels is a bad idea. The following steps are covered in a bird control inspection of your property: - Identification of the bird species, as the appropriate course of action will depend on it. - It's crucial to comprehend that the term "infestation level" often refers to the flock or population size. - Detect the area where the birds have taken over. It is typical to locate several places on the property. - Determination of the access equipment needed by correctly evaluating the site. Ladders, lifts, cranes, and scaffolding are examples of such equipment. - Recognise any limitations that can cause your project to stop or be delayed. - You could be involved in initiatives that need specific permits, such as public access or even renovating historic structures. - Your pricing estimate, which will include labour time, travel costs, equipment rentals, material costs, and other ancillary costs, will be broken down with your site evaluation. Your solar panel investment might be compromised if pigeons destroy the inside and outside components of the panels. In addition, high amounts of acidity found in pigeon droppings can harm the surface of your solar panels and erode cables. So, it's crucial to use a solar panel bird control service. Our skilled and experienced bird control professionals at Tom's Pest Control Melbourne successfully bird-proof solar panels. Our standardised processes for bird-proofing solar panels aim to prevent birds from landing and roosting on the panels. We also provide routine maintenance services for further protection against bird poop and other debris on your solar panels. Don't hesitate to contact Tom's Pest Control if you are worried about birds harming your solar panels.
The worst that can happen from bird mites is a few nights of restless itching. However, it's preferable to keep them hidden. Before removing any nesting birds from your property, keep an eye out for them and verify local wildlife rules. Physical eradication is the best line of action when attempting to get rid of mites in your house. You might want to try vacuuming or wet them with a cloth. To permanently rid your home of mites, you must remove the vacuum cleaner bag immediately. A safe insecticide that works well against mites is permethrin. It is safe for pets to be exposed to and is used for localised spot therapy. For their food and that of their young, birds eat insects, including mosquitoes, beetles, and moths. Insect larvae are highly protein-rich and are caught in large quantities by birds. Due to the prevalence of insects that prey on plants and animals, life would have been terrible without birds (from grains to human blood). This means that in natural systems, birds are essential for both lowering and sustaining insect populations. Being insectivorous, birds consume beetles, weevils, caterpillars, grasshoppers, and other insects. By bringing bluebirds to your property, you may effectively manage pests without the use of pesticides or additional expenses. Even though using harsh pesticides to manage insect infestations may have solved one issue, they ultimately had terrible effects, and endangered birds were killed. However, with precise scientific technologies, we can now utilise medicines that target that particular insect while harmless to all other animals. When battling infestations, this is essential since, while you want to get rid of the bug, you also don't want deadly and hazardous chemicals near your kids, pets, and the wonders of nature, like birds. However, you may use Control Bug Spray with confidence, knowing that you're using a natural solution that eliminates pests and won't harm your bird. Remember that using natural insecticides is beneficial for you, your pets, and not only your birds. Invest today. Bird mesh is one of the best methods for protecting residential solar systems from birds. Bird mesh wraps around the perimeter of the entire array and is designed to seal the space beneath your solar panels. It is attached directly to the panels. The least appealing solution for deterring birds is spiked, yet they are quite effective. Spikes discourage birds from perching on or near your solar panels long enough to build a nest or create a major mess. Although they appear old, plastic birds of prey are functional. For example, a fake owl can move convincingly and often enough to drive birds away if you invest in one with a head that swings in the wind. For solar panels, they make excellent pigeon guards. You may have to spend between $200 and $1500 to get your property pigeon proofed. However, the price may increase based on the number of solar panels in your system, the slope of your roof, the height of your home'
2.007509
m-a-p/FineFineWeb
NP TaqMan assay; Leo Kenefic for interpretation of the genotypes from the 2006 outbreak in Scotland; and Jodi Beaudry, Molly Matthews, James Cook, and Christine Clark-Friedman for assistance with aspects of the TEA phylogeny. Thanks also to Jane Burton, Suzanna Hawkey, and the Special Pathogens Reference Unit team for assistance with preparation of nucleic acid samples and to John Gillece and Remy Hilsabeck for performing the Illumina whole genome sequencing of Ba4599 and for assistance with bioinformatic analyses. This work was supported by the US Department of Homeland Security Science and Technology Directorate through award HSHQDC-10-C-00139 and the UK Department of Health. This work was also funded, in part, under Agreement no. HSHQDC-07-C-00020 awarded by the US Department of Homeland Security for the management and operation of the National Biodefense Analysis and Countermeasures Center, a federally funded research and development center. Biography Dr Price is a postdoctoral research fellow at the Center for Microbial Genetics and Genomics at Northern Arizona University in Flagstaff. Her research interests focus on Burkholderia pseudomallei and Bacillus anthracis evolution, genetics, and genomics, particularly in vivo evolution of these pathogens. Footnotes Suggested citation for this article: Price EP, Seymour ML, Sarovich DS, Latham J, Wolken SR, Mason J, et al. Molecular epidemiologic investigation of an anthrax outbreak among heroin users, Europe. Emerg Infect Dis [serial on the Internet]. 2012 Aug [date cited]. _URL_ 1Current affiliation: Menzies School of Health Research, Casuarina, Northern Territory, Australia. 1. Van Ert MN, Easterday WR, Huynh LY, Okinaka RT, Hugh-Jones ME, Ravel J, et al. Global genetic population structure of Bacillus anthracis. PLoS ONE. 2007;2:e461. doi: 10.1371/journal.pone._PHONE_. [PMC free article] [PubMed] [Cross Ref] 2. Keim P, Smith KL Bacillus anthracis evolution and epidemiology. Curr Top Microbiol Immunol. 2002;271:21–32. [PubMed] 3. Pearson T, Busch JD, Ravel J, Read TD, Rhoton SD, U'Ren JM, et al. Phylogenetic discovery bias in Bacillus anthracis using single-nucleotide polymorphisms from whole-genome sequencing. Proc Natl Acad Sci U S A. 2004;101:13536–41. doi: 10.1073/pnas._PHONE_. [PubMed] [Cross Ref] 4. Kenefic LJ, Pearson T, Okinaka RT, Schupp JM, Wagner DM, Hoffmaster AR, et al. Pre-Columbian origins for North American anthrax. PLoS ONE. 2009;4:e4813. doi: 10.1371/journal.pone._PHONE_. [PMC free article] [PubMed] [Cross Ref] 5. Mayer TA, Bersoff-Matcha S, Murphy C, Earls J, Harper S, Pauze D, et al. Clinical presentation of inhalational anthrax following bioterrorism exposure: report of 2 surviving patients. JAMA. 2001;286:2549–53. doi: 10.1001/jama.286.20.2549. [PubMed] [Cross Ref] 6. Ringertz SH, Hoiby EA, Jensenius M, Maehlen J, Caugant DA, Myklebust A, et al. Injectional anthrax in a heroin skin-popper. Lancet. 2000;356:1574–5. doi: 10.1016/S0140-6736(00)03133-0. [PubMed] [Cross Ref] 7. Knox D, Murray G, Millar M, Hamilton D, Connor M, Ferdinand RD, et al. Subcutaneous anthrax in three intravenous drug users: a new clinical diagnosis. J Bone Joint Surg Br. 2011;93-B:414–7. doi: 10.1302/0301-620X.93B3.25976. [PubMed] [Cross Ref] 8. Ramsay CN, Stirling A, Smith J, Hawkins G, Brooks T, Hood J, et al. An outbreak of infection with Bacillus anthracis in injecting drug users in Scotland. Euro Surveill. 2010;15:pii:19465. [PubMed] 9. 
National Health Service An outbreak of anthrax among drug users in Scotland, December 2009 to December 2010: a report on behalf of the National Anthrax Outbreak Control Team. December 2011 [cited 2012 Jan 24]. _URL_ 10. European Centre for Disease Prevention and Control Annual epidemiological report on communicable diseases in Europe 2010. [cited 2011 Jul 19]. _URL_ 11. Powell AG, Crozier JE, Hodgson H, Galloway DJ A case of septicaemic anthrax in an intravenous drug user. BMC Infect Dis. 2011;11:21. doi: 10.1186/_PHONE_-21. [PMC free article] [PubMed] [Cross Ref] 12. Booth MG, Hood J, Brooks TJ, Hart A Anthrax infection in drug users. Lancet. 2010;375:1345–6. doi: 10.1016/S0140-6736(10)60573-9. [PubMed] [Cross Ref] 13. Keim P, Van Ert MN, Pearson T, Vogler AJ, Huynh LY, Wagner DM Anthrax molecular epidemiology and forensics: using the appropriate marker for different evolutionary scales. Infect Genet Evol. 2004;4:205–13. doi: 10.1016/j.meegid.2004.02.005. [PubMed] [Cross Ref] 14. Radun D, Bernard H, Altmann M, Schoneberg I, Bochat V, van Treeck U, et al. Preliminary case report of fatal anthrax in an injecting drug user in North-Rhine-Westphalia, Germany, December 2009. Euro Surveill. 2010;15:pii: 19464. [PubMed] 15. Okinaka RT, Henrie M, Hill KK, Lowery KS, Van Ert M, Pearson T, et al. Single nucleotide polymorphism typing of Bacillus anthracis from Sverdlovsk tissue. Emerg Infect Dis. 2008;14:653–6. doi: 10.3201/eid1404.070984. [PMC free article] [PubMed] [Cross Ref] 16. de Lamballerie X, Zandotti C, Vignoli C, Bollet C, de Micco P A one-step microbial DNA extraction method using "Chelex 100" suitable for gene amplification. Res Microbiol. 1992;143:785–90. doi: 10.1016/0923-2508(92)90107-Y. [PubMed] [Cross Ref] 17. Marston CK, Allen CA, Beaudry J, Price EP, Wolken SR, Pearson T, et al. Molecular epidemiology of anthrax cases associated with recreational use of animal hides and yarn in the United States. PLoS ONE. 2011;6:e28274. doi: 10.1371/journal.pone._PHONE_. [PMC free article] [PubMed] [Cross Ref] 18. Newton CR, Graham A, Heptinstall LE, Powell SJ, Summers C, Kalsheker N, et al. Analysis of any point mutation in DNA. The amplification refractory mutation system (ARMS). Nucleic Acids Res. 1989;17:2503–16. doi: 10.1093/nar/17.7.2503. [PMC free article] [PubMed] [Cross Ref] 19. Rhodes RB, Lewis K, Shultz J, Huber S, Voelkerding KV, Leonard DG, et al. Analysis of the factor V Leiden mutation using the READIT Assay. Mol Diagn. 2001;6:55–61. doi: 10.2165/00066982-200106010-00007. [PubMed] [Cross Ref] 20. Van Ert MN, Easterday WR, Simonson TS, U'Ren JM, Pearson T, Kenefic LJ, et al. Strain-specific single-nucleotide polymorphism assays for the Bacillus anthracis Ames strain. J Clin Microbiol. 2007;45:47–53. doi: 10.1128/JCM.01233-06. [PMC free article] [PubMed] [Cross Ref] 21. Miller JR, Delcher AL, Koren S, Venter E, Walenz
1.003473
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
New Tactic for Weight Management: Blood Sugar Control Nutritional Outlook, Volume 17, Issue 5 Controlling blood sugar is a newer tactic for weight management. Of all the neologisms that shape and mirror our age-locavore, crowdsource, twerking-none may resonate more profoundly, or regrettably, than diabesity. It's a term that pediatric endocrinologist and researcher Francine R. Kaufman, MD, applied to the conflation of type 2 diabetes and obesity. Alas, the word all too aptly describes the collection of ills that characterize both epidemics: excess abdominal fat, high blood sugar, generalized inflammation, etc. These truly are epidemics. The American Diabetes Association says that close to 10% of the U.S. population is diabetic, a diagnosis given to those with fasting blood glucose levels topping 126 mg/dL in at least two tests. The toll on our economy associated with diagnosed cases, to say nothing of undiagnosed cases, amounts to fully $245 billion annually. Meanwhile, nearly 35% of adult Americans meet the Centers for Disease Control and Prevention's definition of obesity-an adult with a body mass index of 30 or above-with repercussions ranging from conditions like heart disease, stroke, and some forms of cancer to-not coincidentally-type 2 diabetes. The hybrid nature of diabesity underscores a fundamental link between the two conditions it describes-a link that not only reflects similarities in their causes, but suggests novel interventions for mitigating them. The more we learn about the relationship between the disordered glucose metabolism at the heart of diabetes and the weight gain that leads to obesity, the more it appears that supplements designed to control the former may help control the latter, as well. Complex Puzzle Of course, researchers and clinicians have long known that obesity, overweight, and physical inactivity are risk factors for type 2 diabetes, which accounts for almost 95% of the nation's cases, including in children, as more young people edge over into overweight territory. These associations notwithstanding, it's worth noting that not all overweight or obese people are diabetic-nor are all diabetics, type 2 or otherwise, overweight or obese. But about 80% of type 2 diabetics are overweight or obese, which raises the question: What is it about excess weight and diabetes that bind the two in a sort of chicken-and-egg interdependence? Researchers are still sorting out all the puzzle pieces, but we have some ideas as to where and how they fit. To bring that picture into better resolution, it helps to understand a bit about energy, its metabolism, and the hormones that make energy. A healthy body harvests the energy it needs from the foods and drinks it eats. More specifically, the body breaks down fats, proteins, carbohydrates, and even alcohol to the simple sugar glucose, which is an easily convertible "energy currency" for the cells. The body does this with the help of the hormone insulin, which the pancreas continually secretes but really starts pumping out once glucose levels in the blood rise, such as after a meal or snack. Because simple sugars and starches break down easily into glucose-that is, they have a high glycemic index-the more sugary or starchy the meal or snack, the more rapidly glucose floods the bloodstream in a state known as postprandial hyperglycemia. When we're in that state, says Mitch Skop, senior director, new product development, Pharmachem Laboratories Inc. 
(Kearny, NJ), "the pancreas releases more insulin to escort glucose into the cells" where the glucose supplies immediate energy or goes into storage as fat in the liver. Given glucose's ability to trigger this response, those of us fond of sweetened beverages and an abundance of starchy and sugary foods run the risk of drowning our cells in glucose-and insulin-every time we eat. And, sure enough, cells that have bathed too long in insulin soon grow desensitized to the hormone's signals. This desensitization more properly goes by the name insulin resistance. It means that much of that glucose in the blood winds up stuck outside the cells' doors "with essentially nowhere else to go," Skop says. This deprives the cells of valuable energy while causing a host of other problems down the line. "So it stands to reason," he continues, "that a poor diet high in starch and sugar calories will, over time, cause insulin resistance, which itself may lead to diabetes." Overlapping Pathophysiologies So, to recap: Excess sugar in the blood leads to insulin resistance, which leads to diabetes. And excess dietary energy (calories)-these days, that often means sugar-leads to overweight and obesity. Overweight and obesity themselves promote diabetes by making it harder for cells to pay attention to insulin's glucose-management cues. And now we're learning that simply having diabetes can make it harder for diabetics to manage their weight, as the insulin they take to treat the disease allows more glucose to enter the cells where-absent sufficient energy demand-it's stored as fat. This complicated relationship doesn't surprise Mohammad Rafi, president of Bioactive American Corp. (Highland Park, NJ), who points out that "healthy blood sugar can help maintain normal body weight in healthy adults, and diabetic patients will often lose or gain weight based upon the antidiabetic drugs they're taking." Further, he notes that both diabetes and obesity share an oxidative and inflammatory component in the sense that high blood sugar can promote inflammation. "There is a direct correlation to inflammation and issues that cause weight gain, such as blood sugar," he says. In the words of a recent study in The Lancet1, all these elements are part of an "overlapping pathophysiology" that underlies both excess weight and poor glucose control. An Open Door Granted, there's nothing inevitable about being overweight or diabetic, as both conditions are largely preventable-even reversible-through sensible diet and exercise. Pharmaceuticals have also stepped in to combat the diseases' progression, but we've already seen the unintended consequence insulin has on weight gain. Diabetes medications may present other drawbacks, as well, including their cost: the American Diabetes Association estimates that 18% of diabetes' $245 billion price tag goes toward prescriptions to treat its complications. As for pharmaceutical attempts at addressing obesity, they have an even poorer track record, with the safety-driven withdrawals of fenfluramine/phentermine (better known as fen-phen), sibutramine, and others still fresh in mind. Does this open a door for supplementary interventions? Rafi thinks it may. "The supplement industry needs to be proactive, advocating for healthy lifestyles that include a good diet, exercise, and stress reduction," he says. But also "there is data that certain ingredients can prevent excess absorption of carbohydrates from foods to curb obesity, diabetes, and other chronic disorders." 
Indeed, Skop says, "Action has been taken for quite some time in the research and formulation of products for healthy blood sugar support"-products that could also encourage healthy weight loss. And though blood sugar management would be a new weight-control angle for the industry, it may be one whose time has come. Building Blockers "Thermogenic fat burners, satiety enhancers, and metabolism boosters have all worked to a certain extent in weight loss or weight management and have been accepted by consumers," says Xiaoming "Sandy" Chien, PhD, vice president, innovative products, HORN Nutraceuticals (La Mirada, CA). Blood sugar management may be "a less direct approach," she says, but "it addresses the bottom line of weight management: excess glucose being stored as fat." Among the ingredients recognized for managing blood sugar are the South Asian botanical Gymnema sylvestre, banaba leaf extract (popular in the Philippines), bitter melon extract, cinnamon extract, and others. "Many of these products, such as the cinnamon extracts currently on the market, work with the beta cells in the pancreas to increase insulin production, and therefore help clear out blood glucose," Chien says. "Some of the products can also increase insulin sensitivity, which helps cells take up blood glucose." Some of the ingredients attracting the most attention, both for their blood glucose and weight management potential, fall within the category of starch or carb blockers. Historically made from proteinaceous extracts of the white bean, Phaseolus vulgaris, they suppress the enzymes that break down starches and other polysaccharides into the simpler sugars that cells use for energy or store as fat. More specifically, they target the enzymes alpha-amylase, which turns starches into oligosaccharides, and alpha-glucosidase, which breaks oligosaccharides further into monosaccharides like glucose. The theory is that by inhibiting the enzymatic release of these simple sugars, starch blockers make it harder to transform calories into pounds. But there's more to it than just that. "Technically," Chien says, "products that block starches actually delay their digestion and absorption, which allows the body fully to process the resulting glucose in a more sustained way, leaving little excess to be stored as fat." Even better, she
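The enzyme-suppression mechanism described above can be illustrated with a standard competitive-inhibition form of the Michaelis-Menten equation. Treating the blocker as a competitive inhibitor is an assumption made here for illustration, and the kinetic constants and concentrations below are arbitrary placeholders, not measured values for alpha-amylase or any commercial extract.

```python
# Hedged sketch: how a competitive inhibitor (e.g., a starch-blocker extract)
# slows enzymatic glucose release. All constants are arbitrary placeholders.

def rate_competitive(s, vmax, km, i=0.0, ki=1.0):
    """Michaelis-Menten rate with competitive inhibition: v = Vmax*S / (Km*(1 + I/Ki) + S)."""
    return vmax * s / (km * (1.0 + i / ki) + s)

VMAX, KM, KI = 10.0, 2.0, 0.5   # arbitrary units
S = 5.0                          # substrate (starch) concentration, arbitrary units

uninhibited = rate_competitive(S, VMAX, KM)
inhibited = rate_competitive(S, VMAX, KM, i=2.0, ki=KI)

print(f"glucose release rate without blocker: {uninhibited:.2f}")
print(f"glucose release rate with blocker:    {inhibited:.2f}")
print(f"relative slowdown: {(1 - inhibited / uninhibited):.0%}")
```

In this toy calculation the inhibitor does not abolish the reaction; it raises the apparent Km so glucose is released more slowly, which is consistent with the text's point that digestion is delayed rather than blocked outright.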
2.052726
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
Battle Drill #1: Conduct Platoon Attack (7-3-D101) TASK. Conduct Platoon Attack (7-3-D101). CONDITIONS. An enemy squad has occupied defensive positions or is moving to the platoon front. The enemy has indirect fire and CAS capabilities. The platoon is attacking separately or as part of a larger unit. Plans, preparation, and movement to the objective have been accomplished. The platoon is directed to attack the enemy. 1. The platoon main body is not surprised or fixed by the enemy. 2. The platoon accomplishes its assigned task within the commander's intent. The platoon kills, captures, or forces the withdrawal of the enemy. 3. The platoon maintains a sufficient fighting force to defeat the enemy's counterattack and continue operations. 1. Action on Enemy Contact. a. The platoon initiates contact. The platoon leader directs when and how his base of fire element will establish a base of fire. The element must be in position and briefed before it initiates contact. The base of fire squad leader (normally the weapons squad leader), upon the signal from the platoon leader, initiates contact with a high casualty-producing weapon. The squad marks the engagement area with ir illumination (MTETT dependent), while the squad leader uses his hand-held laser pointer and AN/PAQ-4 to designate enemy positions, crew-served weapons, and vehicles. Soldiers focus on the squad leader's laser as well as the team leader's tracers and AN/PAQ-4 to engage targets. If the platoon has not been detected, steps 1 and 2 consist of positioning the support element and identifying the enemy's positions. b. If the enemy initiates contact, the platoon takes the following actions: (1) The squad in contact reacts to contact (Battle Drill No. 2, React to Contact Platoon/Squad, 7-3/4-D103). It attempts to achieve suppressive fires with one fire team and maneuvers the other team to attack the enemy in the flank. The team providing suppressive fires marks its flanks by throwing ir chemlight bundles or ir flares and continues to use its AN/PVS-7B and AN/PAQ-4 to place well-aimed, accurate fires on the enemy. The squad employs M203 and hand-held ir smoke to screen the assaulting teams movement. The squad leader notifies the platoon leader of his actions. (2) The platoon leader, his RTO, the platoon FO, the squad leader of the next squad, and one machine gun team move forward to link up with the squad leader of the squad in contact. (3) The squad leader of the trail squad moves to the front of his lead fire team. (4) The platoon sergeant moves forward with the second machine gun team and the weapons squad leader and links up with the platoon leader. If directed, he assumes control of the base of fire element and positions the machine guns to add suppressive fire against the enemy. The platoon sergeant uses his hand-held laser to designate the left and right limits of fires while the weapons squad leader uses the pointer to designate targets. (5) The platoon leader assesses the situation. He follows the success of the squad's flank attack by leading the trail squads along the covered and concealed route taken by the assaulting fire team of the squad in contact. The base of fire element uses the AN/PVS-7B to monitor the movement of the assaulting element. c. If the squad in contact cannot achieve suppressive fire, the squad leader reports to the platoon leader. (1) The squad in contact establishes a base of fire. (a) The squad leader deploys his squad to provide effective, sustained fires on the enemy position. 
The squad leader continues to designate targets using the hand-held laser pointer and AN/PAQ-4 while soldiers SEE through their AN/PVS-7B and place accurate fires on the enemy with the AN/PAQ-4. (b) The squad leader reports his final position to the platoon leader. (2) The remaining squad (not in contact) takes up covered and concealed positions in place and uses the AN/PVS-7B to observe the flanks and rear of the platoon. (3) The platoon leader moves forward with his RTO, the platoon FO, the squad leader of the nearest squad, and one machine gun team. 2. Locate the Enemy. a. The squad leader of the squad in contact reports the enemy size, location, and any other information to the platoon leader. The platoon leader completes the squad leader's assessment of the situation. b. The squad continues to engage the enemy positions and mark the engagement area with ground ir flares, tracers, and AN/PAQ-4. c. The platoon sergeant moves forward with the weapons squad leader and the second machine gun team and links up with the platoon leader. 3. Suppress the Enemy. a. The platoon leader determines if the squad in contact can gain suppressive fire against the enemy, based on the volume and accuracy of the enemy's return fire. He SEEs through the AN/PVS-7B and makes the assessment by looking at the enemy's muzzle flashes and the strike of their rounds and tracers. b. If YES, he directs the squad (with one or both machine guns) to continue suppressing the enemy: (1) The squad in contact destroys or suppresses enemy weapons that are firing most effectively against it, normally crew-served weapons. The squad leader identifies the enemy crew-served by its muzzle flashes and rate of fire. He uses his hand-held laser pointer to designate priority targets for his squad. (2) In addition, the squad in contact continues to place ir screening smoke (if enemy has NODs) to prevent the enemy from seeing the maneuver element. c. If NO, the platoon leader deploys another squad and the machine gun team to suppress the enemy position. The second squad lead elements SEE the base of fire squad flank element's ir chemlights or flares through the AN/PVS-7B and links up either to the left or right flank of the base of fire squad as directed by the platoon leader. (The platoon leader may direct the platoon sergeant to position this squad and one or both of the machine gun teams in a better support-by-fire position.) d. The platoon leader again determines if the platoon can gain suppressive fire over the enemy. e. If YES, he continues to suppress the enemy with two squads and two machine guns. (1) The platoon sergeant assumes control of the base-of-fire element (squad in contact, the machine gun teams, and any other squad designated by the platoon leader). He uses his hand-held laser pointer to designate sectors of fire for the squads. (2) The machine gun team occupies a covered and concealed position and suppresses the enemy position. The gunners SEE through the AN/PVS-4 and identify the targets designated by the weapons squad leader's laser. f. The platoon FO calls for and adjusts fires, based on the platoon leader's directions. (The platoon leader does not wait for indirect fires before continuing with his actions.) g. If still NO, the platoon leader deploys the last squad to provide flank and rear security and guide the rest of the platoon forward as necessary, and reports the situation to the company commander. 
Normally, the platoon will become the base of fire element for the company and may deploy the last squad for suppressive fires. The platoon continues to suppress/fix the enemy with direct and indirect fire, and responds to orders from the company commander. a. If the squad(s) in contact together with the machine gun can suppress the enemy, the platoon leader determines if the remaining squad(s) not in contact can maneuver. He makes the following assessment using his AN/PVS-7: (1) Location of enemy positions and obstacles. (2) Size of enemy force. (The number of enemy automatic weapons, presence of any vehicles, and employment of indirect fire are indicators of enemy strength.) (3) Vulnerable flank. (4) Covered and concealed flanking route to the enemy position. b. If yes, the squad leader maneuvers the squad(s) into the assault: (1) Once the platoon leader has ensured the base of fire squad is in position and providing suppressive fires, he leads the assaulting squad(s) to the assault position. (2) Once in position, the platoon leader gives the prearranged signal for the base of fire squad to lift or shift direct fires to the opposite flank of the enemy position. The signal is normally FM or an ir signaling device. The assault squad leader identifies the targets (enemy positions) that have been designated by the support by fire squad leader through his AN/PVS-7B. Simultaneously, at the platoon leader's command for the support by fire squad to lift or shift, the assault squad leader uses his hand-held laser pointer to point out the targets. Team leaders use AN/PAQ-4 to control fires. The assault squads MUST pick up and maintain effective fire throughout the assault. Handover of responsibility for direct fires from the base of fire squad to the assault squad is critical to prevent fratricide. (3) The platoon FO shifts indirect fires (including smoke) to suppress the enemy position. (4) The assaulting squad(s) fight through enemy positions using fire and maneuver. (5) The platoon leader controls the movement of his squads. He uses his hand-held laser pointer to assign specific objectives for each squad and designates the main effort or base maneuver element. (The base of fire squad must
1.251687
HuggingFaceFW/fineweb-edu
The New York University (NYU) Medical School has revealed the acceptance rate for this year. In this article, MyStudyExtra provides additional details on the acceptance rate, requirements, MCAT score, GPA, and application deadline for the NYU Medical School. The NYU School of Medicine is renowned for being a pioneer in fields like psychiatry, surgery, and forensic medicine. The university, which is located in the Langone Medical Center on Manhattan's East Side, offers dual-degree, M.D., and Ph.D. programs. It is also well-known for its programs that treat AIDS and drug addiction. Students at NYU School of Medicine who have completed one semester of courses and are in good academic standing are encouraged to apply to the School of Medicine Honors Program to further their education. About New York University (NYU) Medical School The New York University (NYU) School of Medicine was established in 1841 as the University Medical College and is renowned for developing fields including pediatrics, forensic medicine, surgery, and psychiatry. The institution, which is situated on the East Side of Manhattan's Langone Medical Center, provides dual-degree M.D. and Ph.D. programs in addition to being well-known for its AIDS and drug abuse treatment programs. Every MD student is eligible for a ground-breaking full-tuition scholarship (valued at $56,272) as of 2018, regardless of their academic record or financial situation. Students are urged to apply for federal or institutional loans to assist pay for extra living costs, the price of books, and other educational costs. The NYU School of Medicine has over 70 student groups, including organizations like Chamber Musicians in Medicine, Classical Arts Appreciation, and Physicians for Human Rights, which will please students looking for a deep and rich student experience. Students are encouraged to start their clubs or extracurricular activities. A few national organizations, such as the American Medical Students Association, the American Medical Women's Association, and the NYU Physicians for a National Health Program, have branches at the university that students might choose to join. Students at the NYU School of Medicine who have finished one semester of courses and are in excellent academic standing are encouraged to apply to the School of Medicine Honors Program to further their studies. Through this program, students are paired with faculty mentors who collaborate with them while they create an honors research topic, prepare a thesis, and defend it. The Global Health Initiatives program at NYU Medical School sends students abroad to conduct research and take part in clinical education and public health activities. NYU medical school Requirements The NYU Grossman School of Medicine is pleased with how well-rounded and knowledgeable the medical students are on subjects outside the humanities and sciences. They think their diversity will help them be better able to address the most difficult problems in healthcare. To be eligible, applicants must meet several criteria: - Must be a U.S. citizen, permanent resident, or have Deferred Action for Childhood Arrivals (DACA) status. - Must have a bachelor's degree from an accredited college or university in the United States or Canada. - We also accept applications from international students who have a bachelor's degree from an accredited college or university in the United States or Canada. - The Medical College Admission Test® (MCAT®) is required for all applicants. 
NYU Medical School Courses The following are the courses that NYU requires applicants to have completed: - Chemistry of Inorganic Compounds. - Organic Chemistry is the study of organic compounds. - Science of the Social. - College-level English and math. NYU Med School Admission Process The NYU medical school acceptance rate is just 2.2%, NYU Grossman is one of the most competitive medical schools in the country. An example is statistics for the class of 2025: - Applications: 9,635 - Interviews: 820 - Matriculants: 108 - Median MCAT score: 522. - MCAT range: 512–527 - Median GPA: 3.96 - GPA range: 3.64–4.0 NYU medical school Requirements, tuition, International students How difficult is it to get into NYU Medical School? Admission to NYU Medical School is competitive. They have an admissions rate of 2.5 percent, which puts them exactly between Brown and Washington. Out of 9,243 applications, 999 potential students were interviewed, and 102 were accepted. Is NYU Medical School free? The NYU Medical School is indeed free. The choice to provide free tuition "recognizes a moral obligation that must be addressed, as institutions lay an increasing debt load on young people who wish to become physicians," according to Robert Grossman, dean of NYU Health. Is NYU medical school free for international students? The School of Medicine at New York University (NYU) has declared that all previous and current students would no longer be charged tuition, regardless of academic standing or financial need. With full tuition scholarships available to all students, including International students, NYU is the only top-ranked US medical school to do so. NYU medical school GPA and NYU medical school MCAT score Students accepted to NYU Medical School had a median undergraduate the NYU Medical School GPA of 3.96 (with a range of 3.47-4.0) and a median NYU medical school MCAT score of 522 (with a range of 510-527). NYU Med School Tuition & Financial Aid No matter their academic standing or financial situation, all students at the NYU School of Medicine are eligible for a full scholarship. This equates to a $56,272 annual award for each student. Students are required by the college to have health insurance, as well as to cover additional costs and living expenses. NYU Medical School Curriculum The USMLE Step 1 exam is taken by students after their clerkship year, except for MD/Ph.D. candidates who take it before starting their Ph.D. studies. Additionally, the curriculum includes PLACE and NYU3T (a collaboration program with the New York University College of Nursing) (Patient-Based Longitudinal Ambulatory Care Experience). The Grossman School of Medicine at New York University also has 5-year dual degree programs, some of which can be completed in as little as four years: - Biomedical Informatics MD/MS. - Translational Research MD/MS. - General Management MD/MBA (with the New York University Stern School of Business). - Health Policy and Management MD/MPA. - Global Health MD/MPH. The 3-year program is only open to students who have been accepted into the 4-year stream. A residency spot at NYU Langone Health in the chosen specialty is guaranteed to students enrolled in the three-year program. They finish preclinical training at the same time as four-year students, but they start clinical rotations six weeks earlier and complete a summer fellowship in the department of their choice after their first year. 
NYU Medical School MD Programs NYU breaks down its main four-year MD program into four parts: Pre-clerkship curriculum: This occurs during the first year and a half of your education. The clerkship: During the second half of the second year and the first half of the third, students participate in four rotations for twelve weeks each. Individualized exploration: During the second half of the third year, students take time to study for the U.S. Medical Licensure Exam and take elective courses or begin a concentration. Career preparation: Students complete a second clerkship and sub-internships in the fourth and final year. Additionally, the Grossman School of Medicine provides a three-year expedited MD degree. Students can cut their living expenditures and other charges while hastening their time in medical school. This program compresses vocational preparation and individualized inquiry, and it necessitates additional summertime commitments. NYU med school program completion timeline NYU also offers other dual degree MD programs, including: MD/Masters of Public Administration in Health Policy and Administration: This program lasts five years, with a break in the usual MD program in the fourth year for the MPA curriculum. Students complete a capstone project in the final year, preparing them for careers at consulting firms, government agencies, nonprofits, and more. MD/Masters of Public Health in Global Health: An MPH in Global Health prepares students for careers in epidemiology, community health, policy management, and more. Students can apply for this program until their junior year. MD/Masters of Science in Translational Research: This degree helps you prepare for research-related careers and their application to medical therapies. The program lasts five years, with the fourth year focusing on the MS. MD/Masters of Arts in Bioethics: A degree in Bioethics can help you work as a bioethicist in hospitals, medical schools, and labs. The program lasts five years, with the fourth year focused on Bioethics. MD/MBA in General Management: You enroll in NYU's Stern School of Business during your fourth year. There, you take classes to prepare you for careers in business ownership, healthcare management, and pharmaceutical consulting, among others. NOTE: A dual degree program may be right for you if you seek interdisciplinary medical education. Is NYU Med School Tuition Free? The NYU program will begin immediately and pay $55,000 in annual tuition for 443 current students. However, students will be responsible for paying for their lodging, meals, and other costs, which add up to about $27,000 a year. Bachelor's Degree and GPA A bachelor's degree from an approved institution or university in the United States or Canada is required of candidates for admission to NYU Grossman School of Medicine. Students in our most recent incoming class had undergraduate GPAs that were, respectively, 3.96 and 3.92 on average. Letters of Evaluation Your application to NYU Grossman must include
1.109161
m-a-p/FineFineWeb
End of preview.

High Quality Text (Longer) Dataset

This is agentlans/high-quality-text, except that only chunks between 1750 and 2250 Meta Llama 3.1 tokens long were kept.
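A minimal sketch of that length filter is shown below. It is not the exact pipeline used here: the upstream split name, the text column, and the use of the gated meta-llama/Llama-3.1-8B checkpoint as the source of the Llama 3.1 tokenizer are all assumptions, not details taken from this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumptions: the upstream dataset has a "train" split with a "text"
# column, and any Llama 3.1 checkpoint supplies the reference tokenizer.
ds = load_dataset("agentlans/high-quality-text", split="train")
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

def llama_token_count(text: str) -> int:
    """Count Meta Llama 3.1 tokens, excluding special tokens."""
    return len(tok.encode(text, add_special_tokens=False))

# Keep only chunks whose token count falls in the 1750-2250 window.
kept = ds.filter(lambda row: 1750 <= llama_token_count(row["text"]) <= 2250)
print(f"{len(kept)} of {len(ds)} chunks kept")
```

Whether special tokens are counted shifts each chunk's length by only a token or two, but it can move borderline chunks in or out of the window, so the exact convention matters for reproducing the filter.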

The kept chunks were then embedded using MongoDB/mdbr-leaf-mt and hierarchically clustered.
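A similarly hedged sketch of the embedding and clustering step follows. It assumes MongoDB/mdbr-leaf-mt loads as a sentence-transformers checkpoint, and it uses SciPy's Ward linkage as a stand-in since the card does not say which linkage, distance, or cut was actually used; the toy input list and the cluster count are illustrative only.

```python
from scipy.cluster.hierarchy import fcluster, linkage
from sentence_transformers import SentenceTransformer

# Stand-ins for the token-filtered chunks from the previous step.
texts = [
    "first kept chunk ...",
    "second kept chunk ...",
    "third kept chunk ...",
]

# Embed with the model named in the card (assumed to be loadable
# as a sentence-transformers checkpoint).
model = SentenceTransformer("MongoDB/mdbr-leaf-mt")
embeddings = model.encode(texts, normalize_embeddings=True)

# Hierarchical (agglomerative) clustering: build the dendrogram with
# Ward linkage, then cut it into a fixed number of clusters.
tree = linkage(embeddings, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")  # cluster count is arbitrary here
print(list(zip(labels, texts)))
```

Ward linkage on L2-normalized embeddings is a common default for this kind of grouping, but any agglomerative scheme (average or complete linkage, cosine distance, a distance-threshold cut instead of a fixed cluster count) would fit the one-line description in the card equally well.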
