and this isn't mixed deeper. The thermal gradient in the skin thus restricts heat loss from the bulk of the ocean below. So the visible sunlight that does penetrate through the skin and warms the ocean to depth is restricted from leaving the ocean again.
2. The 'skin' idea isn't valid. Downwelling radiation is absorbed in the top millimeter, but mixing then transfers this heat deeper. As a result the 'skin' doesn't have a temperature gradient that restricts heat loss from the ocean. So the broader ocean absorbs more heat but is also more easily able to lose it.
Sounds like swings and roundabouts to me, with the same net result.
0 0
24. arncliffe
Roy Spencer is playing a bit fast and loose with the available data. There are three research groups that analyse the data from the satellites. Spencer is one of the principals of the team at UAH. The other teams are at RSS and Star/NESDIS. UAH show a trend of 0.04 °C/decade, RSS a trend of 0.078 °C/decade. Roy averages these two and shows that average. He doesn't mention Star/NESDIS, who show a trend of 0.124 °C/decade. So if one averaged all three satellite results, the result would be higher. In fact it is more accurate to say that the correct value for the satellite data lies somewhere between UAH and Star. If Star is closer to the mark, the models are spot on.
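For what it's worth, the arithmetic is easy to check with the trend values quoted above (a quick sketch in Python; the numbers are the ones given in this comment, in °C per decade):

```python
# Satellite trend values quoted above, in degrees C per decade.
uah, rss, star = 0.04, 0.078, 0.124

two_way_mean = (uah + rss) / 2            # the average Spencer shows
three_way_mean = (uah + rss + star) / 3   # average including Star/NESDIS

print(f"UAH + RSS mean:       {two_way_mean:.3f} C/decade")   # 0.059
print(f"All three teams mean: {three_way_mean:.3f} C/decade") # 0.081
```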
He doesn't mention that UAH's handling of the troubled transition between the NOAA-9 and NOAA-10 satellites has been identified as possibly flawed, leading to a cool bias in the UAH data since then.
He doesn't mention that the radiosonde datasets are regarded as questionable for climatological purposes at higher altitude due to radiative heating/cooling effects on the instrument packages.
And then he compares them to model runs based on one of the scenarios with the highest rates of CO2 accumulation.
Finally, he doesn't mention that models don't predict exactly what will happen year by year, but rather the broad trends. The models aren't good enough to capture all the causes of short-term variability. And if you look at his graph, the data from the sensors still fall within the range of model predictions up to around 1998-2000. So the models may be missing events over the last decade-plus. And this isn't an exceptional result: shorter-term climate variability is harder to model, even when longer-term trends can be modelled. And a decade is short term in climate terms.
It is a couple of decades too soon to claim that the models are wrong.
0 0
25. John and Dana:
After thinking about this some more, I understand that you are using the analogy of 4 Hiroshima bombs per second to represent the amount of heat gained by the earth as a whole. I don't think it helps the vast majority of even the educated population to understand what is happening. We all know that an atomic bomb is a bad thing. But, on a global scale, is it really?
When you take the surface area of the earth (5.1×10⁸ km²) and the amount of energy received from the Sun, the result is that each square meter of area facing the Sun receives about 1,380 joules per second (otherwise known as the Solar Constant). Once you look at numbers on this scale, the numbers produced by an atomic bomb (even 4 per second) aren't scary anymore.
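As a rough sketch of that comparison (assuming a Hiroshima yield of roughly 6.3×10^13 J, about 15 kt of TNT, which is not stated in the comment above), the total solar power intercepted by the Earth dwarfs the heat-accumulation figure:

```python
import math

# Rough comparison of total solar input vs. the "4 Hiroshima bombs per second" figure.
# Assumed value (not from the comment above): Hiroshima yield ~6.3e13 J (~15 kt TNT).
solar_constant = 1380.0          # W/m^2, value quoted in the comment
earth_radius = 6.371e6           # m
hiroshima_joules = 6.3e13        # J, assumed yield

# Sunlight is intercepted over the Earth's cross-sectional disc, not its full surface.
cross_section = math.pi * earth_radius ** 2          # m^2
solar_power = solar_constant * cross_section         # W (J/s)

heating_rate = 4 * hiroshima_joules                  # J/s, the analogy's figure

print(f"Total solar input:     {solar_power:.2e} J/s "
      f"(~{solar_power / hiroshima_joules:.0f} bombs per second)")
print(f"Heat accumulation:     {heating_rate:.2e} J/s")
print(f"Ratio (solar/heating): {solar_power / heating_rate:.0f}x")
```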
But, everyone knows how bad an atomic bomb is. So, your use of this analogy is an effective tactic.
0 0
26. Glenn and Dana, will y'all please add Glenn's comment as a new section in The Big Picture post? It fills a gap in the logical flow that I've had to fill when pointing people to that post for them to get a very quick basic understanding. I think it is well worth the tradeoff of making that post slightly longer. It would be a new section after the section "Global Warming Continues" and before the section "Humans are Increasing Atmospheric Greenhouse Gases." The section title should be something like "Increased Greenhouse Gases are Causing the Warming."
0 0
27. If people don't like Hiroshima bombs you could go with something like 'global energy consumption'. As in, 'global warming is causing the planet to accumulate heat at a rate equal to our monthly global energy consumption every 2 days'.
Most people probably don't have a good handle on just how much energy we use, but it is one of the few other values in the right ballpark to be a useful comparison point. It would also help to dispel the 'global warming is caused by waste heat' myth.
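A quick sanity check of that comparison, assuming (these figures are not in the comment) a global primary energy consumption of roughly 5.8×10^20 J per year and a Hiroshima yield of roughly 6.3×10^13 J:

```python
# Sanity check of the "monthly energy consumption every 2 days" comparison.
# Assumed values: ~5.8e20 J/yr global primary energy use, ~6.3e13 J per bomb.
heating_rate = 4 * 6.3e13                 # J/s, "4 bombs per second"
global_energy_per_month = 5.8e20 / 12     # J, assumed annual consumption / 12

seconds = global_energy_per_month / heating_rate
print(f"Heat gained in {seconds / 86400:.1f} days "
      "matches one month of global energy consumption")
```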
0 0
28. Hi all
I do not understand this obsession here with "science communication". Science is not communicated (at least not like political propaganda); it is disseminated and/or popularized, and as far as possible only once it has already passed the rigors of the scientific method (empirical validation, at least).
Glenn Tamblyn #24 says "It is a couple of decades too soon to claim that the models are wrong." Yep. However, apparently it is not too early to say that the models are correct... That strikes me as a rather asymmetric version of scientific validation, naive at best.
Last but not least: using the Hiroshima bomb (more than one hundred thousand dead) gives people an idea of the ethical level, and of the balance struck between popularizing (or explaining) on the one hand and convincing (terrifying, rather) on the other, on this side of the debate. From my point of view it is the wrong strategy, to the point that it can only work on the most scientifically illiterate part of society. But hey, I'm aware that my opinion is worthless here. It's your choice.
0 0
I do believe that many of us who are convinced of the seriousness of AGW really are trying to figure out how we can explain this to people so they are able to act upon it. It's really one of humanity's biggest challenges, one that could have consequences on a planetary scale if not handled in time. Many believe we are really running out of time. Talking about CO2 emission cuts of a certain % by 2050 isn't going to cut it. What is needed is for people to wake up and understand that our CO2 emissions, through our addiction to fossil fuel burning, are really turning up the thermostat of the planet, causing a shift the planet has likely only experienced in rather more cataclysmic events like the great Siberian Traps volcanism or an asteroid impact. The data show CO2 concentrations rising in both the air and the seas at rates 10x those of past extinction events. I think that is real cause for concern even though we don't really see the big consequences right this second. A tipping point can be passed (and most likely several have been passed now, with the Arctic melting fast) where the changes happen so fast that global average temperatures could rise rapidly.
I do believe there is enough evidence now that the planet is absorbing more of the Sun's energy than it has in the past, and that this amount is significant even on a planetary scale (some are trying to make the 4 A-bombs per second sound like a small number, just like CO2 is only a small part of the atmosphere - this is a dangerous way of thinking - it only takes 0.25 g of arsenic to kill a person). Looking at the broad picture, it's really fantastic that life even exists on the planet, and while humans might feel like small gods wielding the power of fossil fuels, we really are quite insignificant and vulnerable, like all living things on this planet. I do believe we should treat our lucky position in this galaxy with some respect and at least acknowledge what we have discovered about simple physics. Sometimes one does not need proof in order to know something is right: if I fall out of a 10-story building, I will likely die - but I don't really need to watch someone fall to their death to understand this is a physical fact. In the same way, we know CO2 is a greenhouse gas and that it traps heat. Glenn Tamblyn's line of reasoning perfectly explains a valid reason for the extra heat stored, and in my opinion it shouldn't be hard for people to grasp this if they are willing to listen.
0 0
30. Very eloquent!
IMHO the rate should be more important. You should say that by 1998 we were at a rate of ~2 Hiroshima bomb detonations per second, and since that so-called pause we have moved to 4 Hiroshima bomb detonations per second. Or put it in step with time: in 198x we were at 1, by 199x at 2... and today at 4. How many more Hiroshima bomb detonations per second does it need to reach before we start taking this seriously?
0 0
31. Some of the heat goes into the melting of Arctic Ice. If we apply gentle heat to a beaker of water containing a lump of ice, the water temperature will not increase until the ice has finished melting.
So how much ice has melted?
The figure here shows that the September mean volume for _PHONE_ stands at 12,000 cubic km of ice.
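As a rough, hedged estimate of what melting that much ice costs in energy terms (the ice density, latent heat of fusion, and bomb yield below are standard textbook values, not figures from the comment):

```python
# Rough estimate of the energy needed to melt sea ice, to put the heat figures
# in context. Assumed physical constants: ice density ~917 kg/m^3,
# latent heat of fusion ~3.34e5 J/kg, Hiroshima yield ~6.3e13 J.
ice_density = 917.0          # kg/m^3
latent_heat = 3.34e5         # J/kg
hiroshima_joules = 6.3e13    # J

volume_km3 = 1000.0                              # melt 1,000 cubic km as an example
mass = volume_km3 * 1e9 * ice_density            # kg (1 km^3 = 1e9 m^3)
energy = mass * latent_heat                      # J

heating_rate = 4 * hiroshima_joules              # J/s, "4 bombs per second"
print(f"Melting {volume_km3:.0f} km^3 of ice takes ~{energy:.2e} J,")
print(f"about {energy / heating_rate / 86400:.0f} days of the planet's "
      "current heat gain if all of it went into the ice.")
```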
By Mary Plumb | July 13, 2015
In the early 90s, Kt Boehrer became an influential player in a renewal of interest in declination when she published her book, Declinations, The Other Dimension, which presented 20 years of research on the subject. Leigh Westin's very precisely elucidated book, Beyond the Solstice by Declination, followed in 1999. Currently, declination and out-of-bounds planets are quite commonly used by practicing astrologers.
To those who may be unfamiliar with the terms, I'll recap a bit. Essentially, as the Sun travels around the zodiac on the ecliptic, it reaches a maximum northern declination (23N27) at the tropic of Cancer on the summer solstice. The Sun reaches maximum southern declination (23S27) at the tropic of Capricorn on the winter solstice. Declination is measured north or south of the celestial equator, and the Sun's range reflects the earth's tilt on its axis as it orbits the Sun. The planets normally travel within that band of degrees. A planet is considered out-of-bounds when it travels beyond that band, i.e., beyond 23°27′ to the north or south.
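For readers who want to check a planet's declination themselves, here is a minimal sketch using the Skyfield astronomy library (an assumption of this example; any ephemeris tool would do). The 23°27′ limit is the value quoted above:

```python
# Minimal sketch: check whether Mars is out-of-bounds by declination.
# Assumes the skyfield package is installed; de421.bsp is downloaded on first use.
from skyfield.api import load

ts = load.timescale()
eph = load('de421.bsp')
earth, mars = eph['earth'], eph['mars']

t = ts.utc(2015, 6, 25)                      # late June 2015, around Mars's OOB peak
ra, dec, _ = earth.at(t).observe(mars).apparent().radec(epoch='date')

limit = 23 + 27 / 60                         # 23°27' in decimal degrees
print(f"Mars declination: {dec.degrees:+.2f} deg, OOB: {abs(dec.degrees) > limit}")
```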
A basic notion of interpretation is that the out-of-bounds planet is no longer under the guidance, or rule, of the Sun, i.e., the organizing principle. The OOB planet deviates from normal expression; it becomes exaggerated, somehow unusual, or unconventional. This can manifest in many ways, sometimes as a remarkable talent or gift, sometimes as inappropriate or extreme behavior.
We're just winding down from a period of Mars being out-of-bounds. On June 9 Mars reached 23N35, just moving out of the range of the Sun's declination. The last week in June, Mars reached its maximum OOB declination in this cycle (24N08), and on July 16, Mars moves back into the natural border of the guidance of the Sun at 23N27.
Mars can be obvious in world events, especially when it is OOB. Here are a few recent events that echoed an exaggerated Mars: One of the world's top drug lords, Joaquin Guzman "El Chapo," escaped from maximum-security prison; the murders in a South Carolina church (and the news that a background check error at the FBI allowed the shooter to purchase a gun); media reports that psychologists secretly aided a CIA torture program; same-sex marriage was guaranteed by the Constitution; two inmates escaped from a New York prison (overtaking the news cycle for weeks).
(On September 11, 2001, Mars, at 1°26′ Capricorn in longitude, was at an extreme out-of-bounds declination (26S48). Mars will not reach this extreme south declination again until 2033.)
Mars is in the news (as usual), but I've been thinking a lot about the natal Moon OOB.
In a list of familiar descriptors for an OOB planet (i.e., unrestrained, extreme, independent, wild, awkward, privileged, magnified), I found a keyword I had not thought of before that seemed very true: vulnerable. (1) If a planet is out there on its own, beyond the conventional boundaries, the experience of vulnerability can be present as well.
The Moon has its own rhythm — for about nine or ten years it will stay inside the Sun's boundary, never going out-of-bounds. Then, in nine- or ten-year cycles, it will go OOB several times a month. (The Moon will stay in bounds from 2011-2020.)
There is a further interesting distinction within the OOB cycle, that is, peak years when the Moon will travel to its extreme OOB declination. This is when the nodes are near 0° Aries and Libra. So, people born with the nodes in Aries and Libra may possibly have the natal Moon at this extreme OOB. This can be a very helpful thing to note for your friends and clients, not to mention being enormously useful to recognize if you are in that category yourself. The years when the extreme OOB Moon (almost 29° N or S declination) is possible are around 1913, 1931-32, 1950, 1968-9, 1987-88, and 2005-6. The OOB Moon does not happen every month during those years; you'll need to check the ephemeris.
Those with an OOB natal Moon can be very intuitive; have transparent emotional boundaries; be connected to the subconscious; live a highly emotional life; and have an empathetic nature. Other lunar themes with a natal OOB Moon are exceptional circumstances in giving birth; unusual relationships with the mother or a marked impact from early childhood; food and security issues; and an acutely emotional response to life. All of these themes may be more pronounced when the Moon reaches its maximum declination, as in the years noted above.
Oprah, Joan Rivers, Tracy Morgan, Gwyneth Paltrow, Christopher Reeve, Clint Eastwood, Kurt Cobain, Louis CK, and Jennifer Aniston are some celebrities with the natal Moon OOB.
I know a young girl born with the Moon in Capricorn near maximum southern declination (born in 2005). She is a very happy and outgoing child, but enormously sensitive, something that is not especially obvious. She occasionally has moods of tremendous loneliness and vulnerability. When she was very young, her mother, with a natal Moon nicely organized within the borders of the Sun, thought she was a "drama queen." I talked to her about the extreme sensitivity of the OOB Moon and gave her a different way to understand her daughter's nature.
This week I talked with someone who has natal Mercury OOB. She said that sometimes, when she's speaking about something that makes complete sense to her, she suddenly realizes that the other person has no idea what she's talking about. She related this to that notion of vulnerability.
I spoke with someone else who has natal Mars OOB (28S05). She is beginning to understand that her directness and outspokenness can sometimes be experienced as aggression, leaving others feeling alienated, and she, vulnerable. (2)
Many people will never have a personal experience of the Moon OOB, either in the natal, or by progression. This may suggest a more emotionally stable life or the ability to navigate emotions more easily.
Those people born with the Moon close to maximum declination can give new meaning to "over-the-top" emotions. It can be a big help in self-understanding (hopefully leading to increased peace of mind) if they can understand the imagery of the OOB Moon.
Here's a monthly calendar with a graph of the planets' declinations; a graphic ephemeris is particularly useful in seeing declination.
Not all printed ephemerides include declination tables (software does, of course), but if you prefer a paper copy, here's a Declination Ephemeris spanning 1930 to 2025, courtesy of Café Astrology. The tables include the Sun through Chiron, but not the Moon.
(1) Declination: The Steps of the Sun, Paul Newman, Wessex, 2006.
(2) Here's a blog I wrote about an athlete with an extreme OOB Mars in the natal chart: Usain Bolt, known as "the lightning bolt." "Mars is also way out-of-bounds, at 28°S10'. (The nodes are in Aries/Libra and planets go to the most extreme out-of-bounds when the nodal axis is in these signs.)"
Like what you see? Subscribe to The Mountain Astrologer
By Gary P. Caton | May 4, 2015
"In all chaos there is a cosmos, in all disorder a secret order."
C.G. Jung (1)
It has become common for modern astrologers to think of Mercury retrograde in terms of "shadow." The basic idea seems to be that there are two areas or zones of shadow, one before and one after actual retrograde motion. When in these zones, even though Mercury is traveling direct, we are still (at least potentially) subject to retrograde kinds of experiences. So weeks before the retrograde, after Mercury passes the degree of its eventual station direct, Mercury enters a zone of sky that is eventually crossed three times, and it doesn't leave this zone for weeks after turning direct — until passing the degree of the initial station retrograde.
However, when we consider not just the motion but also the speed of Mercury, it seems these zones before and after the retrograde could be better described as slow zones. So we can imagine that when Mercury comes to the station degree, we are experiencing a sort of "cosmic speed bump," alerting us that we are in a new kind of zone where new rules and/or codes of conduct may apply, similar to when passing through a school or construction zone while driving.
Nevertheless, even when adding the dimension and nuance of speed, I find this view of Mercury retrograde still wanting. Although taking responsibility for the speed with which we navigate our lives should help to assuage some of the dualistic thinking that seems to accompany the commonly held view of Mercury retrograde (direct = good, retrograde = bad), it remains in my eyes a relatively impoverished view. This is because it is taking into account only two dimensions, those of zodiacal longitude and time/speed. And yet, we live in (
| 1.811979
|
m-a-p/FineFineWeb
|
ization, for this variable we counted the total duration of vocalizations regardless of the emitter. To calculate frequencies and percentages of time, we used the duration of collective arousals for the 48- and 2-h separation conditions, and the 10 min of the videotaped phase in the control condition. We calculated the percentage of contact-sitting and social grooming over the total number of scans for each condition of arousal and postarousal periods. With respect to the analysis of partner preferences, however, the number of scans occurring during collective arousal remained limited, so we relied on the exact durations measured from videotape footage. To compare different conditions, and arousal and postarousal periods, we applied the Kruskal–Wallis, Mann–Whitney, and Wilcoxon signed-rank tests, exact procedure (Siegel and Castellan, 1988), using SPSS software version 16 (SPSS, Chicago, IL). All probabilities were two-tailed. The significance level was set at 0.05.

RESULTS

Duration of collective arousal

Collective arousal systematically occurred in both groups after a 48-h separation. It also occurred in all cases in Group A and in seven out of eight cases in Group B after a 2-h separation. The number of individuals involved in affiliative interactions at each 10-s interval decreased during the 10-min recording period (see Fig. 2). No collective arousal was observed in the control period. Comparisons of the duration of collective arousal in the 48- and 2-h conditions showed that its mean duration was significantly longer following a 48-h separation both in Group A (Mann–Whitney test, n1 = 8, n2 = 8, U = 11.0, P = 0.026; 48 h: 8.5 min ± 1.6, 2 h: 6.3 ± 2.1) and Group B (U = 10.5, n1 = 8, n2 = 8, P = 0.023; 48 h: 8.0 ± 1.8, 2 h: 4.7 ± 3.2). It is worth noting that collective arousal periods usually began and ended quite abruptly (Supporting Information Fig. S1).

Social interactions occurring during collective arousal

We compared the mean rates per minute of behaviors between the three different conditions (Table 2). In both groups, rates significantly differed across conditions except for conflict and scratching in Group B, and yawning in both groups; affiliative behaviors appeared more frequent in the separation–reunion conditions. We additionally performed pairwise tests to compare the effects of the 2- and 48-h separation conditions. This showed that the second condition yielded higher rates for several behavior patterns (Kruskal–Wallis test, P < 0.05): clasp, facial display, expressive run, social play, and conflict in Group A, and mount and interference in Group B; other differences were not statistically significant.

[TABLE 2. Comparisons of behavioral rates after reunion across the separation and control conditions. Kruskal–Wallis test, n1 = 8, n2 = 8, n3 = 8, d.f. = 2; means are given per test and per individual (except for conflicts and vocalizations, which are given per group); behaviors are reported as frequency per minute or duration in seconds per minute. Table values not reproduced here.]

Contact behaviors occurring during arousal vs. postarousal periods

We compared the percentage of scans of social grooming and contact-sitting which occurred during arousal and the hour following the 10-min videotaped period in the 48- and 2-h separation conditions (Table 4). In both groups, contact-sitting increased significantly during the postarousal period for the 48- and 2-h conditions. Levels of social grooming also rose during the postarousal period, except for the 48-h condition in Group A.

Partner preferences during collective arousal and post-collective arousal

We compared the mean rates per minute of affiliative interactions occurring during collective arousal in individuals belonging to previously separated subgroups and individuals remaining in the same subgroup (Table 3). After a 48-h separation, both groups showed higher rates of all behaviors between partners from different subgroups. After a 2-h separation, we found similar trends but differences were statistically significant only for clasps, mounts, and contact-sitting in Group B, and for clasps and social grooming in Group A. No significant partner preferences appeared in control periods. We compared partner preferences during the postarousal period from the percentages of scans of social grooming and contact-sitting (Table 3). We did not find statistically significant preferences for contact-sitting between partners regardless of their subgroup membership, whereas individuals in both groups exchanged significantly more grooming with partners from which they had been separated for 48 h. The difference was also significant after a 2-h separation for Group B but not Group A. The comparison of partner preferences in the control period did not yield significant differences.

DISCUSSION

This is the first experimental study demonstrating that it is possible to reproducibly induce bursts of affiliative interactions in a monkey species, as stated in our first prediction.
After a period of separation, Tonkean macaques welcome each other through collective arousal; all individuals run around, embrace or grasp one another, while displaying many affiliative facial expressions and uttering noisy vocalizations. Based on the proportion of group members engaged in affiliation per time unit, the event lasted between a few and ten minutes. Collective arousal should not, however, be reduced to this operational definition; for instance, it is also characterized by the occurrence of simultaneous affiliative interactions, including polyadic ones (see Supporting Information Video).

[TABLE 3. Comparisons of behaviors during arousal and post-arousal periods according to partner preferences (same vs. different subgroup membership) in the different experimental conditions. Table values not reproduced here; text truncated.]
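As an aside for readers without SPSS, the same kind of non-parametric comparison reported above can be run with SciPy; the sketch below uses made-up duration lists, not the study's data:

```python
# Illustrative Mann-Whitney U comparison of arousal durations (minutes).
# The two lists are hypothetical placeholders, not values from the paper.
from scipy.stats import mannwhitneyu

durations_48h = [8.5, 9.1, 7.2, 10.0, 8.8, 6.9, 9.5, 8.0]   # hypothetical
durations_2h = [6.3, 5.1, 7.0, 4.9, 6.8, 5.5, 7.4, 6.0]     # hypothetical

u, p = mannwhitneyu(durations_48h, durations_2h, alternative='two-sided')
print(f"U = {u:.1f}, two-tailed P = {p:.3f}")
```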
accurate diagnosis, including P-wave signal. The number of codes and the lower and upper threshold ECG values of the codes are determined to achieve both efficient encoding and sufficient data reconstruction, especially for P-wave signals. The number of codes and the lower and upper threshold ECG values of the codes are flexible and can be adjusted to adapt to the ECG data input and the storage space. In one embodiment, the number of bins is chosen from 2³ to 2¹⁰. A higher number of bins usually results in less ECG data error loss but more storage space and battery power use.
The proper demarcation of upper and lower thresholds also reduces error loss and contributes to accurate reconstruction of ECG values and graph shape. The number of bins and the thresholds for these bins are carefully selected to keep essential information of the ECG signals and filter away non-essential information, with a special emphasis on accurately representing the P-wave. Normally, each successive bin continues forward from a previous bin so as to cover a contiguous range of electrocardiography values. In one embodiment, the sizes of the bins, i.e., the intervals between the higher threshold ECG value and the lower threshold ECG value, are not equal throughout the contiguous range; instead, areas of high frequency call for a smaller bin size. The size of a bin is partly determined by the frequency of the ECG values falling into the bin.
In one embodiment, 2⁴ = 16 bins are used, as described with reference to FIG. 24, where the lower threshold ECG value and upper threshold ECG value for each bin are also provided. This setup provides minimum error loss and a significant compression ratio, among other considerations. The first, second, and third columns represent the lower threshold ECG value, the upper threshold ECG value, and the coding of the bins. The bin that an ECG datum falls into depends on the difference between the raw ECG data value and the corresponding serial accumulator, compared to the range that the bin covers. If a raw ECG datum falls into a particular bin, the raw ECG data can be represented by the code of the bin. In this example, the codes are encoded with a four-bit storage space, with one bit to encode sign and three bits to encode magnitude. Similarly, up to 32 codes can be encoded with a five-bit storage space, with one bit to encode sign and four bits to encode magnitude.
The minimum (Min) and maximum (Max) values in FIG. 24 define an inclusive range of ECG values for each ECG code. An input ECG value that falls within the range defined by the Min and Max values will be encoded by the code in the third column in FIG. 24. The Min and Max ranges can be the same for all of the bins or can be tailored to specific ranges of ECG values, to emphasize higher or lower density. For example, the Min and Max values 5,001-50,000, corresponding to code +7, are low density and reflect the expectation that few actual ECG values exceeding 5,001 μV will occur. The density of the Min and Max values can be adjusted to enhance ECG signal detection, such as the P-wave signal. As a further example, the Min and Max ECG value ranges can be evenly defined throughout, or doubled for each successive bin.
In one embodiment, the number of bins is selected to be a power of two, although a power of two is not strictly required, particularly when a second-stage compression is used, as further described below with reference to FIG. 26.
FIG. 25 is an example illustrating the encoding and compression scheme in accordance with the method and parameters described with reference to FIGS. 23 and 24. The first three ECG values of an ECG datastream, 12000, 11904, and 12537, are shown in column I to show a recursive process. Remaining values are omitted since they are processed through the same recursive process. The initial ECG value, 12000, is equivalent to the center value of the ECG recorder. The initial serial accumulator is assigned to the center value of the ECG recorder, 12000. The difference between the initial ECG value and the initial serial accumulator is 0, which falls within the lower and upper thresholds of bin 0. Thus the initial ECG value is encoded with the code 0. 12000 is transferred to the next row as the serial accumulator for the next ECG value. The next ECG value is 11904. The difference between the next ECG value and the serial accumulator for the second value is 11904−12000 = −96. The difference of −96 falls into the bin with the code of −3, where the lower threshold of the bin is −41 and the upper threshold of the bin is −150. Thus, the second ECG value is encoded with the code of −3, which is the bin identification. For the purpose of decoding the second value, an encoder first refers to the assigned bin, which is bin −3; the encoder then reads the lower threshold ECG value of the assigned bin −3, which is −41; and the encoder finally adds the lower threshold ECG value of the assigned bin to the decoded value of the first ECG value, which is 12000, to arrive at a decoded value of 11959. The decoded value 11959 in turn serves as the serial accumulator for the next ECG value, in this case the third value of 12537. The difference between the third value and its corresponding serial accumulator is 12537−11959 = 578. This difference, 578, falls into the bin with a code of +5, which has a lower threshold ECG value of 301 and an upper threshold ECG value of 1500. Thus the third ECG value is encoded with the code of +5. The third ECG value is decoded by adding the lower threshold ECG value of the assigned bin +5, which is 301, to the decoded value of the second ECG value, which is 11959, to arrive at the decoded value of 12260. The decoded value of 12260 in turn will serve as the serial accumulator for the next ECG value. The encoding process continues until the last reading is taken. The encoder keeps track of the accumulated encoded value as the encoding process progresses along.
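The worked example above translates almost directly into code. The sketch below is illustrative only: it uses the bin thresholds actually mentioned in the text (bins 0, ±3, ±5, and the 5,001-50,000 range for ±7) and fills the remaining bins with made-up placeholder ranges so that it runs:

```python
# Sketch of the delta/bin encoding walked through above. Only bins 0, 3, 5, 7 use
# thresholds taken from the text; the other entries are placeholders.
BINS = {
    0: (0, 5),         # placeholder span around zero difference
    1: (6, 15),        # placeholder
    2: (16, 40),       # placeholder
    3: (41, 150),      # from the example: |diff| 41..150
    4: (151, 300),     # placeholder
    5: (301, 1500),    # from the example: |diff| 301..1500
    6: (1501, 5000),   # placeholder
    7: (5001, 50000),  # from the FIG. 24 description
}

def encode(values, center=12000):
    """Encode raw ECG values as signed bin codes, tracking a serial accumulator."""
    codes, accumulator = [], center
    for v in values:
        diff = v - accumulator
        sign = -1 if diff < 0 else 1
        mag = abs(diff)
        code = next(c for c, (lo, hi) in BINS.items() if lo <= mag <= hi)
        codes.append(sign * code)
        # The encoder reconstructs the decoded value so both sides stay in sync.
        accumulator += sign * BINS[code][0]
    return codes

def decode(codes, center=12000):
    """Rebuild approximate ECG values by adding each bin's lower threshold."""
    values, accumulator = [], center
    for code in codes:
        sign = -1 if code < 0 else 1
        accumulator += sign * BINS[abs(code)][0]
        values.append(accumulator)
    return values

codes = encode([12000, 11904, 12537])
print(codes)           # [0, -3, 5]
print(decode(codes))   # [12000, 11959, 12260]
```

Running it reproduces the codes 0, −3, +5 and the decoded values 12000, 11959, and 12260 from the FIG. 25 walkthrough.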
The encoding process described above is also a lossy compression process that encodes raw ECG signals with a finite number of codes. This process captures essential information while achieving significant data compression. In one embodiment, another compression step is performed. The other compression step may be performed independently. The other compression step may also be performed on top of the encoding process described above to achieve higher compression than either step alone. The second compression step can be a lossless compression performed on the codes from the first step. In one embodiment, the compression ratio of the second compression is in the range of 1.4 to 1.6, increasing the data storage capacity of a non-volatile memory by 41-66%. In another embodiment, the compression ratio of the second compression is in excess of 1.6, increasing the data storage capacity of a non-volatile memory by more than 66%. Thus, the combination of the lossy compression and the lossless compression serves to achieve both high fidelity of ECG signal preservation and a high compression ratio, which translate into increased data storage capacity and reduced power consumption for the ambulatory electrocardiography monitor, resulting in extended wear time of the monitor.
In one embodiment, the second compression is effected by encoding a sequence of codes obtained from the first compression into a single number between 0 and 1, with frequently used codes using fewer bits and less-frequently occurring codes using more bits, resulting in reduced total storage space. FIG. 26 is a flow diagram showing a monitor-recorder-implemented method for further compressing the codes. A sequence of the codes corresponding to the series of ECG values is provided to the compressing module 134. The compressing module 134 sets a range of 0 to 1 for the initial sequence of the codes (step 231). The compressing module 134 further performs recursive steps of assigning each successive code to a sub-range within a previous range according to the probabilities of the codes appearing after a code (steps 232-239). In order to do so, the compressing module 134 obtains an estimation of the probabilities of the next codes, given a current code (step 233). Several variations of calculating and adjusting the probabilities of the next codes will be described infra. The compressing module 134 divides the range of the current code into sub-ranges, each sub-range representing a fraction of the range proportional to the probabilities of the next codes (step 234). These sub-ranges are contiguous and sequential. The compressing module 134 reads the next code (step 235) and selects the sub-range corresponding to the read next code (step 236). The read next code is represented, or encoded, by the corresponding sub-range (step 237). The sub-range corresponding to the read next code is assigned to be the range for the code next to the read next code (step 238), and the range is further divided into sub-ranges with each sub-range representing a fraction of the range proportional to the probabilities of the codes next to the read next code (step 39
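The second-stage scheme described here is essentially arithmetic (range) coding. The following is a highly simplified sketch of the idea, not the patent's method: it uses a single static frequency table rather than probabilities conditioned on the previous code, and exact fractions instead of the integer renormalization a real implementation would use.

```python
# Simplified arithmetic coding sketch: squeeze a code sequence into one number
# in [0, 1), with probable codes consuming less of the interval.
from collections import Counter
from fractions import Fraction

def arithmetic_encode(codes):
    freq = Counter(codes)
    total = sum(freq.values())
    # Cumulative probability table: symbol -> (low, high) within [0, 1).
    table, cum = {}, Fraction(0)
    for sym, n in sorted(freq.items()):
        table[sym] = (cum, cum + Fraction(n, total))
        cum += Fraction(n, total)

    low, high = Fraction(0), Fraction(1)
    for sym in codes:
        span = high - low
        sym_low, sym_high = table[sym]
        low, high = low + span * sym_low, low + span * sym_high
    return (low + high) / 2, table, len(codes)

def arithmetic_decode(number, table, length):
    out = []
    low, high = Fraction(0), Fraction(1)
    for _ in range(length):
        span = high - low
        point = (number - low) / span
        for sym, (sym_low, sym_high) in table.items():
            if sym_low <= point < sym_high:
                out.append(sym)
                low, high = low + span * sym_low, low + span * sym_high
                break
    return out

codes = [0, -3, 5, 0, 0, -3, 0, 5, 0, 0]   # hypothetical first-stage output
number, table, n = arithmetic_encode(codes)
assert arithmetic_decode(number, table, n) == codes
print(float(number))
```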
, we performed 2 complementary screens to identify FDA-approved drugs and drug-like small molecules with activity against T-ALL. We developed a zebrafish system to screen small molecules for toxic activity toward MYC-overexpressing thymocytes and used a human T-ALL cell line to screen for small molecules that synergize with Notch inhibitors. We identified the antipsychotic drug perphenazine in both screens due to its ability to induce apoptosis in fish, mouse, and human T-ALL cells. Using ligand-affinity chromatography coupled with mass spectrometry, we identified protein phosphatase 2A (PP2A) as a perphenazine target. T-ALL cell lines treated with perphenazine exhibited rapid dephosphorylation of multiple PP2A substrates and subsequent apoptosis. Moreover, shRNA knockdown of specific PP2A subunits attenuated perphenazine activity, indicating that PP2A mediates the drug's antileukemic activity. Finally, human T-ALLs treated with perphenazine exhibited suppressed cell growth and dephosphorylation of PP2A targets in vitro and in vivo. Our findings provide a mechanistic explanation for the recurring identification of phenothiazines as a class of drugs with anticancer effects. Furthermore, these data suggest that pharmacologic PP2A activation in T-ALL and other cancers driven by hyperphosphorylated PP2A substrates has therapeutic potential.
Alejandro Gutierrez, Li Pan, Richard W.J. Groen, Frederic Baleydier, Alex Kentsis, Jason Marineau, Ruta Grebliunaite, Elena Kozakewich, Casie Reed, Francoise Pflumio, Sandrine Poglio, Benjamin Uzan, Paul Clemons, Lynn VerPlank, Frank An, Jason Burbank, Stephanie Norton, Nicola Tolliday, Hanno Steen, Andrew P. Weng, Huipin Yuan, James E. Bradner, Constantine Mitsiades, A. Thomas Look, Jon C. Aster
Interaction of the chemokine CXCL12 with its receptor CXCR4 promotes neuronal function and survival during embryonic development and throughout adulthood. Previous studies indicated that μ-opioid agonists specifically elevate neuronal levels of the protein ferritin heavy chain (FHC), which negatively regulates CXCR4 signaling and affects the neuroprotective function of the CXCL12/CXCR4 axis. Here, we determined that CXCL12/CXCR4 activity increased dendritic spine density, and also examined FHC expression and CXCR4 status in opiate abusers and patients with HIV-associated neurocognitive disorders (HAND), which is typically exacerbated by illicit drug use. Drug abusers and HIV patients with HAND had increased levels of FHC, which correlated with reduced CXCR4 activation, within cortical neurons. We confirmed these findings in a nonhuman primate model of SIV infection with morphine administration. Transfection of a CXCR4-expressing human cell line with an iron-deficient FHC mutant confirmed that increased FHC expression deregulated CXCR4 signaling and that this function of FHC was independent of iron binding. Furthermore, examination of morphine-treated rodents and isolated neurons expressing FHC shRNA revealed that FHC contributed to morphine-induced dendritic spine loss. Together, these data implicate FHC-dependent deregulation of CXCL12/CXCR4 as a contributing factor to cognitive dysfunction in neuroAIDS.
Jonathan Pitcher, Anna Abt, Jaclyn Myers, Rachel Han, Melissa Snyder, Alessandro Graziano, Lindsay Festa, Michele Kutzler, Fernando Garcia, Wen-Jun Gao, Tracy Fischer-Smith, Jay Rappaport, Olimpia Meucci
Children with focal hyperinsulinism of infancy display a dramatic, non-neoplastic clonal expansion of β cells that have undergone mitotic recombination, resulting in paternal disomy of part of chromosome 11. This disomic region contains imprinted genes, including the gene encoding the cell cycle inhibitor p57Kip2 (
Dana Avrahami, Changhong Li, Ming Yu, Yang Jiao, Jia Zhang, Ali Naji, Seyed Ziaie, Benjamin Glaser, Klaus H. Kaestner
High blood pressure is the leading risk factor for death worldwide. One of the hallmarks is a rise of peripheral vascular resistance, which largely depends on arteriole tone. Ca2+-activated chloride currents (CaCCs) in vascular smooth muscle cells (VSMCs) are candidates for increasing vascular contractility. We analyzed the vascular tree and identified substantial CaCCs in VSMCs of the aorta and carotid arteries. CaCCs were small or absent in VSMCs of medium-sized vessels such as mesenteric arteries and larger retinal arterioles. In small vessels of the retina, brain, and skeletal muscle, where contractile intermediate cells or pericytes gradually replace VSMCs, CaCCs were particularly large. Targeted disruption of the calcium-activated chloride channel TMEM16A, also known as ANO1, in VSMCs, intermediate cells, and pericytes eliminated CaCCs in all vessels studied. Mice lacking vascular TMEM16A had lower systemic blood pressure and a decreased hypertensive response following vasoconstrictor treatment. There was no difference in contractility of medium-sized mesenteric arteries; however, responsiveness of the aorta and small retinal arterioles to the vasoconstriction-inducing drug U46619 was reduced. TMEM16A also was required for peripheral blood vessel contractility, as the response to U46619 was attenuated in isolated perfused hind limbs from mutant mice. Our data suggest that TMEM16A plays a general role in arteriolar and capillary blood flow and is a promising target for the treatment of hypertension.
Christoph Heinze, Anika Seniuk, Maxim V. Sokolov, Antje K. Huebner, Agnieszka E. Klementowicz, István A. Szijártó, Johanna Schleifenbaum, Helga Vitzthum, Maik Gollasch, Heimo Ehmke, Björn C. Schroeder, Christian A. Hübner
High-dose ionizing irradiation (IR) results in direct tumor cell death and augments tumor-specific immunity, which enhances tumor control both locally and distantly. Unfortunately, local relapses often occur following IR treatment, indicating that IR-induced responses are inadequate to maintain antitumor immunity. Therapeutic blockade of the T cell negative regulator programmed death–ligand 1 (PD-L1, also called B7-H1) can enhance T cell effector function when PD-L1 is expressed in chronically inflamed tissues and tumors. Here, we demonstrate that PD-L1 was upregulated in the tumor microenvironment after IR. Administration of anti–PD-L1 enhanced the efficacy of IR through a cytotoxic T cell–dependent mechanism. Concomitant with IR-mediated tumor regression, we observed that IR and anti–PD-L1 synergistically reduced the local accumulation of tumor-infiltrating myeloid-derived suppressor cells (MDSCs), which suppress T cells and alter the tumor immune microenvironment. Furthermore, activation of cytotoxic T cells with combination therapy mediated the reduction of MDSCs in tumors through the cytotoxic actions of TNF. Our data provide evidence for a close interaction between IR, T cells, and the PD-L1/PD-1 axis and establish a basis for the rational design of combination therapy with immune modulators and radiotherapy.
Liufu Deng, Hua Liang, Byron Burnette, Michael Beckett, Thomas Darga, Ralph R. Weichselbaum, Yang-Xin Fu
The mechanisms that regulate the strength of synaptic transmission and intrinsic neuronal excitability are well characterized; however, the mechanisms that promote disease-causing neural network dysfunction are poorly defined. We generated mice with targeted neuron type–specific expression of a gain-of-function variant of the neurotransmitter receptor for glycine (GlyR) that is found in hippocampectomies from patients with temporal lobe epilepsy. In this mouse model, targeted expression of gain-of-function GlyR in terminals of glutamatergic cells or in parvalbumin-positive interneurons persistently altered neural network excitability. The increased network excitability associated with gain-of-function GlyR expression in glutamatergic neurons resulted in recurrent epileptiform discharge, which provoked cognitive dysfunction and memory deficits without affecting bidirectional synaptic plasticity. In contrast, decreased network excitability due to gain-of-function GlyR expression in parvalbumin-positive interneurons resulted in an anxiety phenotype, but did not affect cognitive performance or discriminative associative memory. Our animal model unveils neuron type–specific effects on cognition, formation of discriminative associative memory, and emotional behavior in vivo. Furthermore, our data identify a presynaptic disease–causing molecular mechanism that impairs homeostatic regulation of neural network excitability and triggers neuropsychiatric symptoms.
Aline Winkelmann, Nicola Maggio, Joanna Eller, Gürsel Caliskan, Marcus Semtner, Ute Häussler, René Jüttner, Tamar Dugladze, Birthe Smolinsky, Sarah Kowalczyk, Ewa Chronowska, Günter Schwarz, Fritz G. Rathjen, Gideon Rechavi, Carola A. Haas, Akos Kulik, Tengis Gloveli, Uwe Heinemann, Jochen C. Meier
Patients with the autoimmune rheumatic disease systemic lupus erythematosus (
). Connected to the opposite end of rod 202 is the wiper of a coil potentiometer 206 which is thus positioned within the potentiometer in accordance with the position of link 124(b). The coil of the potentiometer 206 is connected by leads 208 to an appropriate electrical circuit (not shown). The coil potentiometer 206 acts as a voltage divider and, depending upon the position of the wiper within the coil, produces a current which is fed to an indicator 210. One indicator which can be conveniently used is a trim tab indicator type IP 10100 manufactured by Bennett Marine Corp. 550 NW 12th Avenue, Deerfield Beach, Fla. 33441. Depending upon the polarity and value of the applied signal, indicator 210 will be illuminated above or below a central point 212. The indicator segment 214 shows upward motion, the angle of the ascent being indicated by the position of the lamp segment lit with respect to the central point 212. Similarly the indicator segment 216 shows downward motion. The signals from the two vertical coil potentiometers 206 are both employed to drive the indicator 210. Although not shown, the links controlling left and right steering are also fitted to coil potentiometers which transmit their signals to indicator 218. Similarly, segment 222 shows rightward movement and segment 224 shows leftward movement, the degree of turn being indicated by the particular segment illuminated.
The correspondence between the boring unit 32 movement and the indicated directions of the joy sticks 194, 196 in FIG. 12 only hold true if the orientation of the boring unit 32 is as shown by the indicator lamps 188(c), 188(d) and 188(e). If the boring unit 32 rotates 90° counter-clockwise as is shown by the indicator lights 188(a), 188(b) and 188(c) in FIG. 13, the result of using the joy sticks 194, 196 as indicated by the arrows would be incorrect movement. The movement of joy stick 194 in the direction of arrow 1 would not be to aim the boring unit 32 toward the surface but rather to direct it to move leftwardly. Similarly down, as with arrow 2, is rightward movement whereas right with arrow 3 by joy stick 196 would cause upward movement and left in the direction of arrow 4 is downward.
To eliminate the possible confusion, the central portion 226 of the control panel 192 is rotatable. Portion 226 is rotated so as to align central pointer 228 with the middle one of the three illuminated indicator lights 188, that is 188(b) as is shown in FIG. 14. Now movement of joy sticks 194, 196 will be correctly visually displayed to the operator. A further rotation of the boring unit 32 90° counter-clockwise will result in the one direction being down or a 180° reversal of the initial position.
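The underlying bookkeeping is simple to express in code. The sketch below is purely illustrative (it is not part of the patent) and shows how an up/down/left/right command would have to be re-interpreted according to how many 90° counter-clockwise rolls the orientation lights report:

```python
# Illustrative remapping of joystick commands based on the boring unit's roll.
def remap_command(command, roll_quarter_turns):
    """Map an operator command to the unit's actual movement.

    roll_quarter_turns: number of 90-degree counter-clockwise rolls of the unit,
    as reported by the orientation indicator lights.
    """
    # Ordered so each counter-clockwise quarter turn shifts the mapping by one.
    ring = ["up", "left", "down", "right"]
    idx = ring.index(command)
    return ring[(idx + roll_quarter_turns) % 4]

# With the unit upright, "up" means up; after one 90-degree counter-clockwise
# roll, pushing "up" actually steers the unit to the left, as described above.
print(remap_command("up", 0))    # up
print(remap_command("up", 1))    # left
print(remap_command("down", 1))  # right
```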
Turning now to FIGS. 15 and 16, the hydraulic cylinder 227 used to provide the fluid to inlet port 144 of the steering portion 38 is shown. Although only one cylinder 227 is shown, it should be understood that there are four cylinders 227 arranged in a circle in housing 41 to the left of indicator portion 40. The two cylinders 227 for the vertical-direction movement are shown in FIG. 4, but the horizontal-movement control cylinders have been omitted for the sake of clarity. Hydraulic fluid is fed to inlet 229 and passes via filter 230 to passage 232 and a further filter 234 to passages 236 and 238 and into chamber 242 via port 240. From chamber 242, the fluid flows via port 244 through passages 246, 248 to inlet port 144. This flow is possible because the valve plunger 254 is in the retracted position to the left of port 240 as is shown in FIG. 15. In response to an electrical signal on lines 256, solenoid 250 is operated to advance valve plunger 254 to the right of port 240 blocking all further flow of hydraulic fluid to inlet port 144 as shown in FIG. 16. Individual ones of the pairs of cylinders 227 will be operated in accordance with the desired direction of movement of boring unit 32.
After boring unit 32 has arrived at its desired location, it can be used to draw a new utility item through the newly created bore by installing the pulling eye 114 of FIG. 6 and re-reeling the service cable 42. If it is desirable or necessary to increase the diameter of the bore this can be done on the return by use of a back reamer as shown in FIG. 18. Nose portion 34 and hammer portion 36 are removed by unscrewing the hammer portion 36 from the threaded stud 88 of the steering portion 38. Back reamer 260 is now threadedly engaged to threaded stud 88 by means of internally-threaded anvil 262. A hammer portion 264 is arranged for reciprocating movement concentrically along support tube 266 to apply force to surfaces 268 of anvil 262 at the end of its forward stroke and to approach collar 270 at the end of its rearward stroke.
A compression spring 271 serves to return hammer portion 264 to its rearward position adjacent collar 270 and apply the force of hammer portion 264 to surfaces 268 of anvil 262. Fluid is fed via tube 272 to the passages 274 and interspaces 276 between hammer portion 264 and collar 270 forcing hammer portion 264 forward to strike surfaces 268 of anvil 262 and moving the entire assembly to the left in FIG. 18. Since the diameter of hammer portion 264 is greater than that of nose body 60, the bore is enlarged. The fluid escaping the interspaces 276 is available to lubricate the passage of the back reamer 260. The trailing utility item that is fastened to pulling eye 278 of collar 270 acts as a brake for the returning hammer portion 264 to prevent an impact with collar 270 which could drive the back reamer 260 in the wrong direction.
All essential fluid, hydraulic and electrical conductors are housed in the service cable 42 fastened to the housing 41. Beyond the location of the hydraulic cylinders 227, housing 41 is decreased in outside diameter as at 280 and the outer surface formed with a series of ridges 282. At the end of the body portion, a plate 284 is fixed across the opening dividing same in half so that the various lines, tubes and conductors can pass over either face of plate 284 and enter the boring unit housing 41.
An aperture 286 is placed in plate 284 for purposes to be described below. Service cable 42 is prepared so that the various lines, tubes and conductors are separated and extend beyond the outer jacket 290 of service cable 42 so that they can pass along plate 284 into the boring unit 32 for attachment to their respective components. The end of outer jacket 290 is brought up against end 288 of housing 41 over the ridges 282 in the reduced-diameter portion 280 and clamped thereto by use of a stainless steel hose clamp 292 of a construction well known in the art.
The makeup of service cable 42 is best appreciated from a consideration of FIG. 17. At the center of service cable 42 is a steel wire 294 which is attached to plate 284 by means of aperture 286. The wire 294, having a diameter of about 0.250 inches, can be used to pretension the service cable 42 and thus reduce the tendency of the boring unit 32 to rotate by providing a more rigid trailing cable, and to provide the main pulling line for drawing the boring unit 32 or back reamer 260 back through the newly-created bore, whether by themselves or with a utility item fastened thereto, to decrease the forces otherwise applied directly to the weaker service cable alone. Steel wire 294 is surrounded by six fiberglass rods also having a diameter of about 0.250 inches. These rods are applied with a slight twist (1 wrap per 9 lineal feet) rather than extending in parallel with steel wire 294. These rods provide crush support and, when used with further fiberglass rods having a reverse or opposite twist, tend to keep service cable 42 from rotating. Steel wire 294 and fiberglass rods 296 are surrounded by an extruded jacket 298. Along the outer surface of jacket 298 are arranged the 2,000-pounds-per-square-inch working pressure drill mud line 300 which couples to line 98; four 2,000-pounds-per-square-inch working pressure hydraulic lines 302 which are coupled to inlets 229 of hydraulic cylinders 227; two 5,000-pounds-per-square-inch working pressure air lines 304, one of which couples to line 100, the other remaining as a spare; two electrical cables 306, each composed of six pairs of 22-gauge stranded conductors--four pairs used for the solenoids 250 of the hydraulic cylinders 227, two pairs coupled to conductors 208 of the coil potentiometers 206, two further pairs for the conductors of the horizontal coil potentiometers (not shown) and four pairs used to couple the mercury switches 170 to the indicator lamps 188; and two electrical conductors 308 of number 12 wire rated at 600 volts. Surrounding these hoses and conductors is a second ply of ten 0.250-inch fiberglass rods 310 applied with a twist direction opposite to that of fiberglass rods 296 and with a greater twist of one wrap in 4.5 lineal feet. The net effect of these two counter-twisted plies of fiberglass rods is to support and strengthen the cable 42 and to resist any tendency to rotate in either direction. Also, as stated above, the steel wire 294 can be tensioned before any tension is applied to the overall cable 42, and this pre-tensioning tends to make the cable 42 more rigid, also preventing rotation during reeling or unreeling.
The cable 42 is further protected and reinforced by pressure extruding a polyethylene interior jacket 312 and a polyurethane wear jacket 290 over the cable core and components.
The unreeling of the supply cable 42 is generally controlled by the boring unit 32. As it advances, it pulls the supply cable
and be controlled remotely from a single device".
What is it that can't be done today with fixed network access and Wi-Fi? (hoping that the device that remotely controls our home belongs to us…) Are we sure that the complexity of Narrowband-IoT (NB-IoT) and its 5G version will prevail over Wi-Fi and Bluetooth? It seems to me that the probability of this happening in the short to medium term is quite low, unless some black swan provides the necessary impulse.
- "An ultra-performing network, will be fundamental for the transition to the Internet of things, i.e. to ensure the development of applications and services for the Smart City based on sensors (for example for traffic control, waste collection, urban lighting, logistics)".
As if many cars pass through street intersections, or much waste gets thrown away, in 20 milliseconds. And if a streetlamp burns out, we really must be the first to know!
- "The new, very fast 5G mobile networks will also help improve safety on the roads. In addition to allowing more data to be transferred in the same unit of time, they have a much lower latency and a reduced error rate: if one data packet in a thousand is "lost" with 4G, with 5G you get up to one in a million. It is these last two aspects that will make the electronic security systems of cars more effective, which will "dialogue" with each other in real time and with the certainty that the information arrives".
Regarding latency, see my comment above; regarding packet loss, as people who deal with networks know, network protocols (TCP) have control mechanisms that assure applications that all transmitted packets arrive. This aspect, therefore, is already covered by current technologies.
- "The productive world will be revolutionized through the full digitalization of production facilities – the so-called Industry 4.0 – and the development of precision agriculture".
Indeed in the business market there could be an interest in 5G for IIoT (Industrial IoT), unlike the residential market. The 5G will allow a higher communication density up to one sensor per square meter, that is one million sensors per square kilometer compared to the 50 thousand allowed by the best NB IoT technologies. We must also bear in mind that in the industrial field wired network technologies are very expensive (PROFINET cables and switches cost one order of magnitude more than the non-industrial equivalents) and introduce rigidity in installations that could be more flexibly realized and reconfigured using wireless connections. (examples 1, 2, 3, 4)
Be careful! I'm not saying that 5G doesn't help, but that the "killer applications" proposed today (which often lean on "life-saving" arguments, because no one can skimp on investments for those), with the notable exception of the business market and IIoT, are generally trivial and almost certainly destined not to materialize.
Figure 1 – Piazza Maggi, Milan
This does not mean that a new network infrastructure should not be built; on the contrary! The first motivation for a new infrastructure is that it must precede demand and enable it: the new infrastructure will serve a demand that is not there today. Einaudi, a very famous Italian economist and politician, said that markets express demands, not needs, meaning they express immediate requirements and not long-term needs.
Also from this point of view the idea of pooling investments to co-invest in a 5G network infrastructure seems reasonable.
The networks they are a changin'
The "mobile" networks (which are not mobile: it is people who are mobile, not the networks) work by emitting signals that fade as you move away from the antenna, so that phones and antennas have to transmit with increasing power to be able to communicate, like two people talking to each other by moving away from each other. Or, another solution, is to place many more antennas that will be much closer to the user so that they will be able to communicate with lower power; the lower the power, the the larger the number of antennas that are needed. This is what has happened with mobile telephony: at every stage of evolution from GSM to UMTS (or 3G) to HSDPA (or 3.5G) to LTE (or 4G) emissions decrease and antenna density increases.
Consequently, the wireless access segment (the part from the user to the antenna) tends to become shorter: from the many kilometers of GSM (with low bandwidth) to a few tens of meters with Wi-Fi and 5G (which provide a lot of bandwidth).
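As a rough illustration of why shorter links need so much less power, the free-space path-loss formula can be evaluated for a kilometers-long macro-cell link and a tens-of-meters small-cell link. This is only a sketch: the two distances and the 3.5 GHz carrier are assumed example values, and the same frequency is used for both links so that only the distance effect shows.

# Minimal sketch: free-space path loss (FSPL) for two assumed link lengths.
# FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    # Free-space path loss in dB for distance d (meters) and frequency f (hertz).
    c = 299_792_458.0  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

for label, d in [("macro cell (GSM-like reach)", 5_000), ("small cell (5G-like reach)", 30)]:
    print(f"{label}: {fspl_db(d, 3.5e9):.0f} dB path loss at {d} m")

The roughly 44 dB gap between the two links corresponds to more than four orders of magnitude in required transmit power, which is the whole point of densifying the antennas.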
With 5G, in Italy we will have hundreds of thousands of small antennas with very low emissions (also because we have the strictest emission limits in the world). In the future we will find them at the base of many buildings, and they will provide us, ubiquitously, with performance similar to Wi-Fi wherever we go.
Each of these antennas will need to be fed bandwidth through a network connection (usually a fixed one). For this reason the importance of fixed networks is obvious, whether they are the "old cable TV" networks, as in some European countries, or telephone networks with optical fiber. But it is good to move past this conceptual distinction as well. The network is always one and only one: it serves to carry data of any kind. It does not matter whether it was born for the telephone (copper pair) or for TV (cable networks), or whether it is made with optical fiber.
Such a high density of antennas means that each antenna will be used by fewer people. If an antenna covers a radius of two kilometers, all users in a small town will connect to it. If an antenna covers a twenty meter radius, only the few people in that building will connect to it.
The transmission capacity available from an antenna is shared with the people who connect to it. In the case of the small town, it will be shared among many people; in the second case it will be shared among very few people so that each individual in the second scenario will have much more bandwidth available than their peers in the first scenario. Increasing the capillarity of the antennas helps to reduce emissions and increase the performance available to users.
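A toy calculation makes the sharing argument concrete. The per-antenna capacity and the user counts below are assumed round numbers chosen only for illustration, not figures for any real network.

# Minimal sketch: capacity of one antenna divided equally among its active users.
def per_user_mbps(cell_capacity_mbps: float, active_users: int) -> float:
    return cell_capacity_mbps / active_users

macro_cell = per_user_mbps(cell_capacity_mbps=1000, active_users=500)  # one antenna, a small town
small_cell = per_user_mbps(cell_capacity_mbps=1000, active_users=5)    # one antenna, one building
print(f"macro cell: {macro_cell:.0f} Mbit/s per user, small cell: {small_cell:.0f} Mbit/s per user")

Same total capacity, two orders of magnitude more bandwidth per user in the small cell.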
Lower emissions thanks to 5G
It seems a counter-intuitive thing: lower electromagnetic emissions with more antennas; higher capacity with lower emissions. An example helps to clarify: imagine a room with forty people and two of them talking with a megaphone. The noise pollution will be very high and the overall capacity of the room will be just one conversation. If everyone whispers, the noise pollution will be minimal and the overall capacity will be twenty simultaneous conversations.
Ultimately, the network is not what we are used to thinking it is: a single object managed by an operator. Instead, it is a mix of different types of routes, built with different technologies, much like a road network.
From the point of view of the user, in the future there will not be a great distinction between the fixed network and the wireless network. The fixed network will have very high capillarity, and at its edge there will be an antenna. If located inside a house, it will still be a Wi-Fi access point, managed independently by the most sophisticated users; if located outside the house, it will instead be a 5G antenna managed by an operator.
A mental image of networks
Let's picture circles representing the service boundaries of an operator.
Today there is a circle that reaches the users' homes, the fixed network, to which a Wi-Fi access point is connected; the access point can sit just inside or just outside the circumference, depending on whether it is provided and managed by the operator (inside the circle) or installed directly by the user (outside the circle). Outside this circle there are the user's devices, from PCs to smart refrigerators (!?) to TVs connected to disks that store documents, photos and movies (with the user managing their complexity).
Then there is another circle, the "mobile network" one, whose radius is smaller. It is the network that feeds the cellular radio stations located right inside the edge of the circle, inside the operator's perimeter, to which users connect with their mobile devices.
Over time this second circle has expanded, and with 5G it will come so close to the border of the first circle that many users will find it more convenient to connect their devices directly and to entrust the operator with the custody of their documents, photos and movies (without having to manage that complexity themselves), reducing their management burden, often increasing the level of security, and being able to access everywhere what would otherwise only be available at home or in the office.
These two circles, the fixed network and the mobile network have grown closer over time and will continue to do so with 5G; there is a large overlap in the surface area of the two circles. The more widespread the "fixed" network is developed, the more it will also contribute to the infrastructure for 5G.
The telephonist's drama
Let's go back to the use cases: many of those told, especially those presented as useful for saving lives, suffer from what we could call "the telephonist's drama". I call it that because I often perceive it when talking with many friends, very competent people, who work for telco operators which, having been born as telephone companies, keep that imprinting in their DNA; they still seem to me a bit conditioned by that way of thinking.
An example of a typical "telephone" way of thinking is that of value added services.
Let's imagine a system to improve driving safety, based on a myriad of temperature and humidity sensors scattered along a mountain road, so that an approaching car can be informed that behind the blind curve there is
The disk of FIG. 5B was manufactured with gas source 22 providing 20 SCCM of the 80% argon/20% H2 mixture. (In other words, 4 SCCM of H2 was introduced into chamber 5.) The friction exhibited by the resulting carbon film rose from a value of 0.2 to 1.0 in less than 8 minutes.
In FIG. 5C, 40 SCCM of the 80% argon/20% H2 mixture was introduced into sputtering chamber 5 by source 22 while source 11 was off. The friction coefficient exhibited by the resulting carbon film rose to 1.0 in 12 minutes.
In FIG. 5D, 60 SCCM of the 80% argon/20% H2 mixture was provided by gas source 22 while gas source 11 was off. The friction coefficient from the resulting carbon film rose to a level of about 0.7 and then stopped rising, even after 66 minutes, and the test was terminated.
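As a quick arithmetic check of the flows quoted above (a sketch only, assuming the stated 20% H2 / 80% argon mixture), the hydrogen portion of each total flow is simply one fifth of it:

# H2 flow entering chamber 5 for each mixture flow used in FIGS. 5A-5D.
H2_FRACTION = 0.20
for mixture_sccm in (0, 20, 40, 60):
    print(f"{mixture_sccm:2d} SCCM mixture -> {mixture_sccm * H2_FRACTION:.0f} SCCM H2")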
Although during the above experiments gas source 11 was off, gas source 11 can be used to vary the hydrogen concentration adjacent targets 8a, 8b.
FIG. 4 illustrates the relationship between gas flow from source 22 (20% H2 /80% argon) and the concentration of hydrogen at targets 8a, 8b. Because of diffusion and gas flow between the different target areas, the hydrogen concentration at targets 8a, 8b does not equal exactly 20%. The curve of FIG. 4 was estimated taking into account the geometry of the sputtering system and flow pattern of gases in the system. (The hydrogen concentration at targets 8a, 8b is substantially equal to the hydrogen concentration at the substrate when the substrate is between targets 8a and 8b.)
Typically, magnetic disks are unacceptable if the friction coefficient is greater than 1.0. Accordingly, the disks of FIGS. 5A, 5B and 5C wore out and became unacceptable relatively quickly. However, as mentioned above, the disk of FIG. 5D remained acceptable, even after 66 minutes. Accordingly, it is seen in FIGS. 5A-5D that the greater the hydrogen concentration in the sputtering chamber, the greater the carbon film performance.
FIG. 6 illustrates the time required during the drag tests for a disk to exceed a friction coefficient of 1.0. The disks produced under gas flows of 0, 20, 40 and 60 SCCM of the 80% argon/20% H2 mixture in FIG. 6 were generated under the same gas flow conditions as FIGS. 5A, 5B, 5C and 5D, respectively. The Y axis of FIG. 6 is logarithmic. As can be seen, the lifetime of the carbon film is increased by more than an order of magnitude by introducing a 60 SCCM flow of the 20% H2/80% argon mixture into the sputtering chamber. The data points for disks manufactured when gas source 22 provided 60 SCCM of the argon/H2 mixture were estimated, based on the slope of the friction vs. time curves from drag tests. The plot in FIG. 6 shows, for a group of samples prepared under different hydrogen concentrations, that a small amount of hydrogen will have almost no effect on the mechanical characteristics of the carbon film, whereas a large amount of hydrogen will have a very dramatic effect on the carbon.
The reason hydrogen affects the friction exhibited by the disks is not completely understood. I have three theories concerning why this result is achieved. According to the article entitled "Evidence for Tribochemical Wear on Amorphous Carbon Thin Films" by Bruno Marchon et al., published at the proceedings of the MRM Conference in Rimini, Italy in 1989 (incorporated herein by reference), carbon wears out primarily through an oxidation phenomenon. When a read/write head strikes a magnetic disk, a great amount of force is exerted on a small portion of the carbon film by the read/write head. This causes localized heating and oxidation of the carbon film. Thus, Marchon reported that carbon wear was prevented or drastically reduced by conducting contact-start-stop tests in a nitrogen (oxygen-free) atmosphere. It is possible that hydrogen doping the carbon film also drastically reduces localized oxidation.
Another possible reason why introduction of hydrogen into a carbon film retards the increase in friction is that as the read/write head and the carbon film wear, the amount of contact area between the read/write head and the disk increases. The presence of hydrogen in the carbon film reduces an attractive force between the read/write head and the carbon, and thus retards the increase in the friction coefficient even when the contact area between the read/write head and carbon increases due to wear.
A third theory as to why hydrogen in a carbon film retards the increase in friction is that hydrogen-doped films exhibit a greater degree of elasticity. (Experimental data pertaining to this effect is provided below.) Thus, the carbon film is more compliant (elastic), and may be able to absorb the shock loading of the film by the read/write head, thereby allowing the film to last longer.
The hydrogen introduced at targets 8a, 8b is actually incorporated into the sputtered carbon film. This was demonstrated by using a sampling gas mass spectrometer or residual gas analyzer (RGA) to monitor the consumption rate of hydrogen near the carbon sputtering targets. A plot of the hydrogen mass peak intensity versus calculated hydrogen concentration with the plasma on and off (i.e. when sputtering is taking place and not taking place, respectively) is shown in FIG. 7. The RGA output is in arbitrary units, but is proportional to the amount of hydrogen in the sputtering chamber near targets 8a, 8b. From this data, it can be determined that plasma at the carbon targets consumes approximately one half of the hydrogen introduced at the carbon cathode area, indicating that the plasma causes reaction of input hydrogen and results in incorporation of hydrogen into the carbon film. (Unless otherwise stated, hydrogen concentrations elsewhere in this specification and claims refer to concentrations calculated as if there were no hydrogen consumption during sputtering. It is believed, however, that the hydrogen concentration is about 50% of this calculated value at targets 8a, 8b when the plasma is on.)
Raman spectroscopy is a useful technique for obtaining information regarding the bonding characteristics of carbon atoms within the deposited film. See D. S. Knight, et al., "Characterization of Diamond Films", J. Mater. Res. Vol 4, No. 2, March/April 1989, and Willard et al., Instrumental Methods of Analysis, 6th Edition, published by Wadsworth Publishing Co. in 1981, incorporated herein by reference. Typical spectra of a carbon film with no hydrogen is shown in FIG. 8A. Typically the spectra is characterized by broad overlapping peaks around 1310/cm (generally known as the D-peak) and 1550/cm (generally known as the G-peak). The peaks can be deconvoluted to obtain more accurate peak position and intensity values. The deconvoluted spectra is shown in FIG. 8B. The Raman spectra of a film produced using 80 SCCM of the 20% H2 /80% argon mixture is shown in FIG. 8C. There is a change in the ratio of the D to G peaks, as well as a slight shift in the peak positions as seen in the deconvoluted spectra of FIG. 8C, shown in FIG. 8D. The G and D peaks shift to lower frequencies as hydrogen is added. The change in peak ratio expressed in terms of height and area ratios as a function of the amount of hydrogen present during sputtering is plotted in FIG. 9, and height position is plotted in FIG. 10. Raman spectra shows a clear indication of the changes in chemistry of the carbon atoms within the film as more hydrogen is added. Based on changes in the SP3/SP2 peak intensity ratios, it is apparent that the carbon becomes more amorphous.
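The deconvolution step mentioned above is, in essence, a two-peak curve fit. The sketch below shows one plausible way to do it with SciPy on a synthetic spectrum; the Gaussian peak shape, the peak positions, widths and amplitudes, and the noise level are all illustrative assumptions, not values taken from this specification.

# Minimal sketch: deconvolute overlapping D (~1310/cm) and G (~1550/cm) bands
# by fitting a sum of two Gaussians to a synthetic Raman spectrum.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a_d, mu_d, sig_d, a_g, mu_g, sig_g):
    # Sum of two Gaussian peaks modeling the D and G bands.
    return (a_d * np.exp(-((x - mu_d) ** 2) / (2 * sig_d ** 2))
            + a_g * np.exp(-((x - mu_g) ** 2) / (2 * sig_g ** 2)))

shift = np.linspace(1000, 1800, 400)                        # Raman shift, 1/cm
clean = two_gaussians(shift, 1.0, 1310, 80, 1.4, 1550, 70)  # assumed "true" spectrum
intensity = clean + np.random.default_rng(0).normal(0, 0.02, shift.size)

p0 = [1.0, 1310, 80, 1.0, 1550, 80]                         # initial guesses near D and G
params, _ = curve_fit(two_gaussians, shift, intensity, p0=p0)
a_d, mu_d, sig_d, a_g, mu_g, sig_g = params

height_ratio = a_d / a_g                                    # D/G height ratio
area_ratio = (a_d * sig_d) / (a_g * sig_g)                  # D/G area ratio
print(f"D at {mu_d:.0f}/cm, G at {mu_g:.0f}/cm, "
      f"height ratio {height_ratio:.2f}, area ratio {area_ratio:.2f}")

Peak positions and ratios extracted in this way are the kinds of quantities plotted against hydrogen flow in FIGS. 9 and 10.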
Typically, a carbon film lacking hydrogen has a brown to greyish color at a thickness of about 300 Å. The sheet resistance at this thickness is about 0.5MΩ/square, using a four point probe measurement. Resistivity of a 300 Å carbon film made with 20 SCCM of the 20% hydrogen/80% argon mixture was measured using a four point probe. The resistance was greater than 20MΩ/square. Further, the carbon film sputtered with 20 SCCM 20% H2 /80% argon was yellow when formed on a metallic alloy, and colorless if formed on glass. This indicates that hydrogen in the sputtering chamber introduces chemical and structural changes in the resulting carbon film.
A specially made 2000 Å thick carbon coating was made in order that micro-hardness measurements can be taken of the carbon coating with various amounts of hydrogen. The method used for the hardness and elastic constant determination is described by M. F. Doerner et al. in "A Method For Interpreting The Data From Depth-Sensing Indentation Instruments", published in J. Mater. Res., July/August 1986, p. 601. Table 1 below lists the values which were obtained.
Flow rate of the 20% hydrogen/80% Ar mixture    Hardness    Elasticity
0 SCCM                                          8 GPa       140 GPa
60 SCCM                                         8 GPa       92 GPa
Principal component analysis
PCA of a multivariate Gaussian distribution centered at (1,3) with a standard deviation of 3 in roughly the (0.866, 0.5) direction and of 1 in the orthogonal direction. The vectors shown are the eigenvectors of the covariance matrix scaled by the square root of the corresponding eigenvalue, and shifted so their tails are at the mean.
Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. This transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components. The resulting vectors (each being a linear combination of the variables and containing n observations) are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
PCA was invented in 1901 by Karl Pearson,[1] as an analogue of the principal axis theorem in mechanics; it was later independently developed and named by Harold Hotelling in the 1930s.[2] Depending on the field of application, it is also named the discrete Karhunen–Loève transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of X (Golub and Van Loan, 1983), eigenvalue decomposition (EVD) of XTX in linear algebra, factor analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of Jolliffe's Principal Component Analysis),[3] Eckart–Young theorem (Harman, 1960), or empirical orthogonal functions (EOF) in meteorological science, empirical eigenfunction decomposition (Sirovich, 1987), empirical component analysis (Lorenz, 1956), quasiharmonic modes (Brooks et al., 1988), spectral decomposition in noise and vibration, and empirical modal analysis in structural dynamics.
PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations. PCA can be done by eigenvalue decomposition of a data covariance (or correlation) matrix or singular value decomposition of a data matrix, usually after a normalization step of the initial data. The normalization of each attribute consists of mean centering – subtracting its variable's measured mean from each data value so that its empirical mean (average) is zero – and, possibly, normalizing each variable's variance to make it equal to 1; see Z-scores.[4] The results of a PCA are usually discussed in terms of component scores, sometimes called factor scores (the transformed variable values corresponding to a particular data point), and loadings (the weight by which each standardized original variable should be multiplied to get the component score).[5] If component scores are standardized to unit variance, loadings must contain the data variance in them (and that is the magnitude of eigenvalues). If component scores are not standardized (therefore they contain the data variance) then loadings must be unit-scaled ("normalized"), and these weights are called eigenvectors; they are the cosines of orthogonal rotation of variables into principal components or back.
PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way that best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA can supply the user with a lower-dimensional picture, a projection of this object when viewed from its most informative viewpoint. This is done by using only the first few principal components so that the dimensionality of the transformed data is reduced.
PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.
PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal coordinate system that optimally describes variance in a single dataset.[6][7][8][9]
Robust and L1-norm-based variants of standard PCA have also been proposed.[10][11][9]
Intuition
PCA can be thought of as fitting a p-dimensional ellipsoid to the data, where each axis of the ellipsoid represents a principal component. If some axis of the ellipsoid is small, then the variance along that axis is also small, and by omitting that axis and its corresponding principal component from our representation of the dataset, we lose only an equally small amount of information.
To find the axes of the ellipsoid, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data and calculate the eigenvalues and corresponding eigenvectors of this covariance matrix. Then we must normalize each of the orthogonal eigenvectors to become unit vectors. Once this is done, each of the mutually orthogonal, unit eigenvectors can be interpreted as an axis of the ellipsoid fitted to the data. This choice of basis will transform our covariance matrix into a diagonalised form with the diagonal elements representing the variance of each axis. The proportion of the variance that each eigenvector represents can be calculated by dividing the eigenvalue corresponding to that eigenvector by the sum of all eigenvalues.
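As a concrete illustration of this procedure, here is a minimal NumPy sketch; the data are randomly generated, and the mean and covariance used to generate them are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean=[1, 3], cov=[[7, 3], [3, 2]], size=500)

Xc = X - X.mean(axis=0)                 # 1. center each variable on zero mean
C = np.cov(Xc, rowvar=False)            # 2. covariance matrix of the centered data
eigvals, eigvecs = np.linalg.eigh(C)    # 3. eigenvalues and unit eigenvectors (C is symmetric)
order = np.argsort(eigvals)[::-1]       #    sort axes by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()     # 4. proportion of variance per principal axis
scores = Xc @ eigvecs                   # 5. project the data onto the principal components
print(explained)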
This procedure is sensitive to the scaling of the data, and there is no consensus as to how to best scale the data to obtain optimal results.
Details
PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.[3]
Consider a data matrix, X, with column-wise zero empirical mean (the sample mean of each column has been shifted to zero), where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature (say, the results from a particular sensor).
Mathematically, the transformation is defined by a set of $l$ p-dimensional vectors of weights or coefficients $\mathbf{w}_{(k)} = (w_1, \dots, w_p)_{(k)}$ that map each row vector $\mathbf{x}_{(i)}$ of X to a new vector of principal component scores $\mathbf{t}_{(i)} = (t_1, \dots, t_l)_{(i)}$, given by
$$t_{k(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}, \qquad i = 1, \dots, n, \quad k = 1, \dots, l,$$
in such a way that the individual variables $t_1, \dots, t_l$ of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector (where $l$ is usually selected to be less than $p$ to reduce dimensionality).
First component
In order to maximize variance, the first weight vector w(1) thus has to satisfy
$$\mathbf{w}_{(1)} = \arg\max_{\|\mathbf{w}\|=1} \sum_i \left(t_1\right)_{(i)}^2 = \arg\max_{\|\mathbf{w}\|=1} \sum_i \left(\mathbf{x}_{(i)} \cdot \mathbf{w}\right)^2.$$
Equivalently, writing this in matrix form gives
$$\mathbf{w}_{(1)} = \arg\max_{\|\mathbf{w}\|=1} \|X\mathbf{w}\|^2 = \arg\max_{\|\mathbf{w}\|=1} \mathbf{w}^T X^T X \mathbf{w}.$$
Since w(1) has been defined to be a unit vector, it equivalently also satisfies
$$\mathbf{w}_{(1)} = \arg\max \frac{\mathbf{w}^T X^T X \mathbf{w}}{\mathbf{w}^T \mathbf{w}}.$$
The quantity to be maximised can be recognised as a Rayleigh quotient. A standard result for a positive semidefinite matrix such as $X^T X$ is that the quotient's maximum possible value is the largest eigenvalue of the matrix, which occurs when w is the corresponding eigenvector.
With w(1) found, the first principal component of a data vector x(i) can then be given as a score $t_{1(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)}$ in the transformed co-ordinates, or as the corresponding vector in the original variables, $\{\mathbf{x}_{(i)} \cdot \mathbf{w}_{(1)}\}\,\mathbf{w}_{(1)}$.
Further components
The kth component can be found by subtracting the first k − 1 principal components from X:
$$\hat{X}_k = X - \sum_{s=1}^{k-1} X \mathbf{w}_{(s)} \mathbf{w}_{(s)}^T,$$
and then finding the weight vector which extracts the maximum variance from this new data matrix:
$$\mathbf{w}_{(k)} = \arg\max_{\|\mathbf{w}\|=1} \|\hat{X}_k \mathbf{w}\|^2 = \arg\max \frac{\mathbf{w}^T \hat{X}_k^T \hat{X}_k \mathbf{w}}{\mathbf{w}^T \mathbf{w}}.$$
It turns out that this gives the remaining eigenvectors of $X^T X$, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. Thus the weight vectors are eigenvectors of $X^T X$.
The kth principal component of a data vector $\mathbf{x}_{(i)}$ can therefore be given as a score $t_{k(i)} = \mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}$ in the transformed co-ordinates, or as the corresponding vector in the space of the original variables, $\{\mathbf{x}_{(i)} \cdot \mathbf{w}_{(k)}\}\,\mathbf{w}_{(k)}$, where $\mathbf{w}_{(k)}$ is the kth eigenvector of $X^T X$.
The full principal components decomposition of X can therefore be given as
$$T = X W,$$
where W is a p-by-p matrix of weights whose columns are the eigenvectors of $X^T X$. The transpose of W is sometimes called the whitening or sphering transformation. Columns of W multiplied by the square root of corresponding eigenvalues, that is, eigenvectors scaled up by the variances, are called loadings in PCA or in factor analysis.
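A short sketch can also make the decomposition T = XW tangible and show its well-known equivalence with the singular value decomposition of the centered data matrix; the data below are random and purely illustrative.

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X = X - X.mean(axis=0)                      # column-wise zero empirical mean

eigvals, W = np.linalg.eigh(X.T @ X)        # weight vectors = eigenvectors of X^T X
order = np.argsort(eigvals)[::-1]
eigvals, W = eigvals[order], W[:, order]
T = X @ W                                   # principal component scores

U, S, Vt = np.linalg.svd(X, full_matrices=False)   # X = U S V^T
print(np.allclose(np.abs(Vt.T), np.abs(W)))        # columns of V match W up to sign
print(np.allclose(np.abs(U * S), np.abs(T)))       # U*S matches the scores up to sign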
Publications
Electronic copies of publications can be provided upon request
2024
Bodnar TS, Ainsworth-Cruickshank GRJ, Billy V, Parfrey LW, Weinberg J, Raineki C (2024). Alcohol consumption during pregnancy differentially affects the fecal microbiota of dams and offspring. Scientific Reports, 14: 16121.
Wilson DA, Sullivan RM, Smiley JF, Saito M, Raineki C (2024). Developmental alcohol exposure is exhausting: Sleep and the enduring consequences of alcohol exposure during development. Neuroscience and Biobehavioral Reviews, 158: 105567.
2023
Holman PJ, Raineki C (2023). Prenatal alcohol exposure and early life adversity: A translational perspective for dissecting compounding impacts. Alcohol: Clinical and Experimental Research, 47: 2227-2230.
*Invited commentary
Bodnar TS, Chao A, Holman PJ, Ellis L, Raineki C, Weinberg J (2023). Impact of COVID-19 pandemic on adults with Fetal Alcohol Spectrum Disorder: Linking immune function to mental health status. Frontiers in Neuroscience, 17: _PHONE_.
2021
Opendak M, Raineki C, Perry RE, Rincón-Cortés M, Song SC, Zanca RM, Wood E, Packard K, Hu S, Woo J, Martinez K, Vinod KY, Brown RW, Deehan GA Jr., Froemke RC, Serrano PA, Wilson DA, Sullivan RM (2021). Bidirectional control of infant social behavior by dopaminergic innervation of the basolateral amygdala. Neuron, 109: 4018-4035.e7.
Holman PJ, Raineki C, Chao A, Grewal R, Haghighat S, Fung C, Morgan E, Ellis L, Yu W, Weinberg J (2021). Altered social recognition memory and hypothalamic neuropeptide expression in adolescent male and female rats following prenatal alcohol exposure and/or early-life adversity. Psychoneuroendocrinology, 126: 105146.
2020
Bodnar TS, Raineki C, Wertelecki W, Yevtushok L, Plotka L, Granovska I, Zymak-Zakutnya N, Pashtepa A, Wells A, Honerkamp-Smith G, Coles CD, Kable JA, Chambers CD, Weinberg J, the CIFASD (2020). Immune network dysregulation associated with child neurodevelopmental delay: Modulatory role of prenatal alcohol exposure. Journal of Neuroinflammation, 17: 39.
2019
Raineki C*, Opendak M*, Sarro E, Showler A, Bui K, McEwen BS, Wilson DA, Sullivan RM (2019). During infant maltreatment, stress targets hippocampus but stress with mother present targets amygdala and social behavior. Proceedings of the National Academy of Sciences of the United States of America, 116: 22821-22832.
*authors contributed equally
Raineki C, Morgan EJ, Ellis L, Weinberg J (2019). Glucocorticoid receptor expression in the stress-limbic circuitry is differentially affected by prenatal alcohol exposure and adolescent stress. Brain Research, 1718: 242-251.
Lam VYY, Raineki C, Wang LY, Chiu M, Lee G, Ellis L, Yu W, Weinberg J (2019). Role of corticosterone in anxiety- and depressive-like behavior and HPA regulation following prenatal alcohol exposure. Progress in Neuro-psychopharmacology and Biological Psychiatry, 90: 1-15.
2018
Sagae SC, Zanardini B, Ribeiro-Paz ED, Amaral AC, Bronczek GA, Lubaczeuski C, Grassiolli S, Koehler-Santos P, de Oliveira JR, Donadio MVF, Raineki C (2018). Metabolic dysfunction in a rat model of early-life scarcity-adversity: Modulatory role of cafeteria diet. Experimental Physiology, 103: _PHONE_.
Bodnar TS, Raineki C, Wertelecki W, Yevtushok L, Plotka L, Zymak-Zakutnya N, Honerkamp-Smith G, Wells A, Rolland M, Woodward TS, Coles CD, Kable JA, Chambers CD, Weinberg J, the CIFASD (2018). Altered maternal immune networks are associated with adverse child neurodevelopment: Impact of alcohol consumption during pregnancy. Brain, Behavior, and Immunity, 73: 205-215.
Lam VYY, Raineki C, Ellis L, Yu W, Weinberg J (2018). Interactive effects of prenatal alcohol exposure and chronic stress in adulthood on anxiety-like behavior and central stress-related receptor mRNA expression: Sex- and time-dependent effects. Psychoneuroendocrinology, 97: 8-19.
Lam VYY, Raineki C, Takeuchi LE, Ellis L, Woodward TS, Weinberg J (2018). Chronic stress alters behavior in the forced swim test and underlying neural activity in animals exposed to alcohol prenatally: Sex- and time-dependent effects. Frontiers in Behavioral Neuroscience, 12: 42.
Raineki C, Ellis L, Weinberg J (2018). Impact of adolescent stress on the expression of stress-related receptors in the hippocampus of animals exposed to alcohol prenatally. Hippocampus, 28: 201-216.
2017
Raineki C, Bodnar TS, Holman PJ, Baglot SL, Lan N, Weinberg J (2017). Effects of early-life adversity on immune function are mediated by prenatal environment: Role of prenatal alcohol exposure. Brain, Behavior, and Immunity, 66: 210-220.
Walker C-D, Bath KG, Joëls M, Korosi A, Larauche M, Lucassen PJ, Morris MJ, Raineki C, Roth TL, Sullivan RM, Taché Y, Baram TZ (2017). Chronic early life stress induced by limited bedding and nesting (LBN) material in rodents: Critical considerations of methodology, outcomes and translational potential. Stress, 20: 421-448.
Yan C-G, Rincón-Cortés M, Raineki C, Sarro E, Colcombe S, Guilfoyle DN, Yang Z, Gerum S, Biswal BB, Milham MP, Sullivan RM, Castellanos FX (2017). Aberrant development of intrinsic brain activity in a rat model of caregiver maltreatment of offspring. Translational Psychiatry, 7: e1005.
Sullivan RM, Sullivan-Wilson T, Raineki C (2017). Neurobiology of Infant attachment: Nurturing and abusive relationships. Reference Module in Neuroscience and Biobehavioral Psychology, 1-10.
2016
Raineki C, Chew L, Mok P, Ellis L, Weinberg J (2016). Short- and long-term effects of stress during adolescence on emotionality and HPA function of animals exposed to alcohol prenatally. Psychoneuroendocrinology, 74: 13-23.
Perry RE, Al Aïn S, Raineki C, Sullivan RM, Wilson DA (2016). Development of odor hedonics: Experience-dependent ontogeny of circuits supporting maternal and predator odor responses in rats. Journal of Neuroscience, 36: 6634-6650.
2015
Workman JL*, Raineki C*, Weinberg J, Galea LAM (2015). Alcohol and pregnancy: Effects on maternal care, HPA axis function, and hippocampal neurogenesis in adult females. Psychoneuroendocrinology, 57: 37-50.
*authors contributed equally
Raineki C, Sarro E, Rincón-Cortés M, Perry R, Boggs J, Holman CJ, Wilson DA, Sullivan RM (2015). Paradoxical neurobehavioral rescue by memories of early-life abuse: The safety signal value of odors learned during abusive attachment. Neuropsychopharmacology, 40: 906-914.
2014
Raineki C, Hellemans KGC, Bodnar T, Lavigne KM, Ellis L, Woodward TS, Weinberg J (2014). Neurocircuitry underlying stress and emotional regulation in animals prenatally exposed to alcohol and subjected to chronic mild stress in adulthood. Frontiers in Endocrinology, 5: 5.
*Invited manuscript
Raineki C, Lucion AB, Weinberg J (2014). Neonatal handling: An overview of the positive and negative outcomes. Developmental Psychobiology, 56: _PHONE_.
*Invited review
2013
Roth TL, Raineki C, Salstein L, Perry R, Sullivan-Wilson T, Sloan A, Lalji B, Hammock E, Wilson DA, Levitt P, Okutani F, Kaba H, Sullivan RM (2013). Neurobiology of secure infant attachment and attachment despite adversity: A mouse modelGenes, Brain and Behavior, 12: 673-680.
Raineki C, Lutz ML, Sebben V, Ribeiro RA, Lucion AB (2013)
8 Different Types of Lawn Sprinklers Explained (with Pictures)
by Max - last update on
Lawn sprinklers provide an efficient and easy solution for keeping your lawn watered and healthy during dry months, without you having to stand out in your yard manually spraying a hose around the property. There are many different types of lawn sprinklers available, and the one most suitable for your property and situation will be dependent on many variables. Some of the things you need to consider when looking for a sprinkler are the size of your lawn, the shape of your lawn, and the water pressure at your property.
You will also want to think about how you want your sprinkler to look, and whether you want it to be fitted underground or above soil level. Some sprinklers can be quite noisy, which is another factor you should consider before fitting a sprinkler system at the family home. To learn all about the many different types of lawn sprinklers, and the lawns they are best suited to, read on.
Automatic Vs. Manual Sprinklers
Sprinklers can be set on an automatic system to turn on and off at specified points in the day, or they can be operated manually. The obvious advantage of automatic sprinklers, or those set on a timer, is that you can set up your sprinklers and then forget about them for the remainder of the season. They will turn on at a set time, and turn off at a set time, with no manual intervention needed. As the best time to water a lawn is early in the morning, an automatic sprinkler can be set to operate before you have even woken up. If you want to water the lawn at this time of day with a manual sprinkler, this will probably mean an interruption to your sleep! Instead, if you have manually operated sprinklers, it's more likely you'll turn them on during the day or in the evening, which are not times that are as beneficial for the health of the grass as watering it in the morning.
The other great thing about automatic sprinklers is that they will come on even if you're away from your property. You can go on vacation for several weeks, knowing that you'll return to a lush lawn. By comparison, if your sprinkler has to be manually operated, you'll need to ask a neighbor to turn them on in your absence or hire somebody to do this job. Otherwise, you risk returning from vacation or a business trip to find a dry and dead lawn. Similarly, even if you are home, a manually operated sprinkler requires that you remember to turn them on and off every day to keep your lawn in good health. If you remember to turn the sprinkler on, but then get distracted with another task and forget to turn it off, you risk drowning your lawn and also wasting water and racking up a large water bill. Or, if you forget to operate the sprinkler altogether, the lawn could dry out.
An automatic sprinkler, by comparison, will run for a set time to ensure your lawn gets exactly the right amount of water each day. There is a drawback to this, though; if you have had rain or storms, your automatic sprinkler will still turn on and water the lawn even when it may not need the additional moisture. This leads us to the one main advantage of manually operating a sprinkler; that you can judge each day based on the weather how long to leave your sprinklers on for. Most people find that automatic sprinklers represent the most convenient method for irrigating their lawn, but these also tend to be the most expensive to buy.
Types of Lawn Sprinklers
1. Rotary Sprinklers
Rotary sprinklers mechanically rotate while spraying streams of water over a lawn, going back and forth to create specific angles or circles of dispersed water droplets. Rotary sprinklers are popular in gardens of all sizes, but they particularly excel when used on large lawns, because they are able to throw water further than any other type of stationary sprinkler. Some models can send water across huge distances, as far as 90 feet in one direction, therefore covering a diameter of up to 180 feet.
They are also great for lawns with soils that water does not infiltrate quickly, such as clay or compacted soils. This is because rotary sprinklers have a lower rate of precipitation than other sprinkler types, such as pop-ups, meaning that they take longer than a pop-up sprinkler to deliver the same amount of water. This is helpful for some soil types where run-off or pooling of water would occur if too much water was dispersed at once. A slower precipitation rate will allow a slower-draining soil to absorb the water more evenly.
Rotary sprinklers are also the best type of sprinkler for distributing the water evenly. They operate in a way that ensures the same amount of water hits every patch of lawn. There are many different types of rotary sprinklers that have various different spraying patterns. They are typically designed to be adjusted in quarter-circle increments, but some work on smaller angles. For best performance, you will need a water pressure of between 40 and 50 psi to be compatible with a rotary sprinkler. If you have lower water pressure, then a rotary sprinkler may not be the best option for you, though the one exception to this is stream rotary sprinklers, as some are able to operate on lower water pressure.
There are three main types of rotary sprinklers. These are impact sprinklers, gear-driven sprinklers, and stream sprinklers.
1.1. Impact
This is the type of sprinkler that many people will immediately think of when they think of rotary sprinklers. These types of sprinklers were first invented by Rain Bird, and so they are now mistakenly called 'rainbird sprinklers' by some people, but in fact, Rain Bird is a brand and not a type of sprinkler. Impact sprinklers can easily be identified by the noise they make while they are running. The head of these sprinklers rotates as the pressure from the nozzle's water stream hits the spring-loaded arm, and it is this action that makes the sprinkler very noisy. Impact sprinkler heads are usually constructed from metal, namely brass or bronze. This makes them hardwearing and better able to withstand the elements and wear and tear than their plastic counterparts. They tend to have a long lifespan, but the noise they make can put a lot of people off from using them. They also require regular maintenance to keep them running properly, as they have many intricate parts that can wear down and need attention. They are considered to be a little old-fashioned, as other rotary sprinklers that operate more quietly and require less maintenance are starting to replace impact sprinklers. The best time of day to irrigate a lawn is early in the morning, and this is why people living in residential properties will choose to avoid impact sprinklers, as the sound of them going off in the early hours of the morning can disturb sleep. For this reason, impact sprinklers are best suited to commercial properties, such as the grounds of golf courses, where their loud noise isn't going to be a problem.
1.2. Gear-driven
Gear-driven rotors have rotating heads that spray a stream of water in a continually turning pattern. The head rotates due to a series of gears, which operate as a result of the pressure from the flowing water. Like impact sprinklers, these rotary sprinklers can spread wide streams of water, at distances of anywhere from 18 to 60 feet in standard models, making them ideally suited to medium and large-sized lawns. The bodies of gear-driven rotors are encased in plastic, which hides the moving parts. This, teamed with the way they operate, makes them much quieter than impact sprinklers. They are well suited for use in both residential and commercial settings, being almost silent when in operation. Gear-driven sprinklers are also less expensive to buy compared to impact sprinklers, and they are low maintenance.
1.3. Stream
Stream rotor sprinklers, also known as multi-stream sprinklers, spray rotating streams of water in multiple directions all at once. These are interesting to watch, and put on a spectacular display of moving water fountains that can be mesmerizing. The benefit of these is that, like gear-driven sprinklers, they operate quietly. They also have low precipitation rates compared to some other sprinkler types, which makes them ideal for soils that absorb water more slowly, or sloped and uneven ground where too much water dumped at once can slide down the slope instead of being taken into the soil as it should. The main drawback of this type of sprinkler is that the head is prone to getting clogged up if the water flowing through it is not filtered. 'Dirty' water can block up the mechanism and cause operational issues.
2. Fixed Spray Sprinklers
These types of sprinklers spray a pattern of streaming water that does not move. It can fan out in full circles or in patterns of various angles as designated by the user. Standard models might have the option to spray quarter circles, half-circles, and full circles, while more advanced models can have specific selections that allow the user to set the sprinkler in increments of 40 degrees. One of the benefits of this type of sprinkler is that, if you have a straight-edged lawn, you can position the sprinkler against the edge with it facing towards the lawn. When set at a half-circle, this will be able to adequately spray the whole edge of the lawn without wasting water on the sidewalk.
These sprinklers aren't as strong as rotary sprinklers and are not able to throw water as far. They typically spray streams of water ranging between 3 and
NAME
LWP - Library for WWW access in Perl
SYNOPSIS
use LWP;
print "This is libwww-perl-$LWP::VERSION\n";
DESCRIPTION
Libwww-perl is a collection of Perl modules which provides a simple and consistent programming interface (API) to the World-Wide Web. The main focus of the library is to provide classes and functions that allow you to write WWW clients; thus libwww-perl is said to be a WWW client library. The library also contains modules that are of more general use.
The main architecture of the library is object oriented. The user agent, requests sent and responses received from the WWW server are all represented by objects. This makes a simple and powerful interface to these services. The interface should be easy to extend and customize for your needs.
The main features of the library are:
- Contains various reusable components (modules) that can be used separately or together.
- Provides an object oriented model of HTTP-style communication. Within this framework we currently support access to http, gopher, ftp, news, file, and mailto resources.
- The library can be used through the full object oriented interface or through a very simple procedural interface.
- Supports the basic and digest authorization schemes.
- Transparent redirect handling.
- Supports access through proxy servers.
- URL handling (both absolute and relative URLs are supported).
- A parser for robots.txt files and a framework for constructing robots.
- An experimental HTML parser and formatters (for PostScript and plain text).
- The library can cooperate with Tk. A simple Tk-based GUI browser called 'tkweb' is distributed with the Tk extension for perl.
- An implementation of the HTTP content negotiation algorithm that can be used both in protocol modules and in server scripts (like CGI scripts).
- A simple command line client application called lwp-request.
HTTP STYLE COMMUNICATION
The libwww-perl library is based on HTTP style communication. This section tries to describe what that means.
Let us start with this quote from the HTTP specification document <URL:_URL_
- The HTTP protocol is based on a request/response paradigm. A client establishes a connection with a server and sends a request to the server in the form of a request method, URI, and protocol version, followed by a MIME-like message containing request modifiers, client information, and possible body content. The server responds with a status line, including the message's protocol version and a success or error code, followed by a MIME-like message containing server information, entity meta-information, and possible body content.
What this means to libwww-perl is that communication always takes place through these steps: First a request object is created and configured. This object is then passed to a server and we get a response object in return that we can examine. A request is always independent of any previous requests, i.e. the service is stateless. The same simple model is used for any kind of service we want to access.
For example, if we want to fetch a document from a remote file server, then we send it a request that contains a name for that document and the response will contain the document itself. If we access a search engine, then the content of the request will contain the query parameters and the response will contain the query result. If we want to send a mail message to somebody then we send a request object which contains our message to the mail server and the response object will contain an acknowledgment that tells us that the message has been accepted and will be forwarded to the recipient(s).
It is as simple as that!
The Request Object
The request object has the class name HTTP::Request in libwww-perl. The fact that the class name uses HTTP:: as a name prefix only implies that we use the HTTP model of communication. It does not limit the kind of services we can try to pass this request to. For instance, we will send HTTP::Requests both to ftp and gopher servers, as well as to the local file system.
The main attributes of the request objects are:
- The method is a short string that tells what kind of request this is. The most used methods are GET, PUT, POST and HEAD.
- The url is a string denoting the protocol, server and the name of the "document" we want to access. The url might also encode various other parameters.
- The headers contain additional information about the request and can also be used to describe the content. The headers are a set of keyword/value pairs.
- The content is an arbitrary amount of data.
The Response Object
The response object has the class name HTTP::Response in libwww-perl. The main attributes of objects of this class are:
- The code is a numerical value that encodes the overall outcome of the request.
- The message is a short (human readable) string that corresponds to the code.
- The headers contain additional information about the response and they also describe the content.
- The content is an arbitrary amount of data.
Since we don't want to handle all possible code values directly in our programs, the libwww-perl response object has methods that can be used to query what kind of response this is. The most commonly used response classification methods are:
is_success()
The request was successfully received, understood or accepted.
is_error()
The request failed. The server or the resource might not be available, access to the resource might be denied or other things might have failed for some reason.
The User Agent
Let us assume that we have created a request object. What do we actually do with it in order to receive a response?
The answer is that you pass it on to a user agent object and this object will take care of all the things that need to be done (low-level communication and error handling). The user agent will give you back a response object. The user agent represents your application on the network and it provides you with an interface that can accept requests and will return responses.
You should think about the user agent as an interface layer between your application code and the network. Through this interface you are able to access the various servers on the network.
The libwww-perl class name for the user agent is LWP::UserAgent. Every libwww-perl application that wants to communicate should create at least one object of this kind. The main method provided by this object is request(). This method takes an HTTP::Request object as argument and will (eventually) return a HTTP::Response object.
The user agent has many other attributes that let you configure how it will interact with the network and with your application code.
- The timeout specifies how much time we give remote servers to create responses before the library disconnects and creates an internal timeout response.
- The agent specifies the name that your application should use when it presents itself on the network.
- The from attribute can be set to the e-mail address of the person responsible for running the application. If this is set, then the address will be sent to the servers with every request.
- The use_alarm specifies whether it is OK for the user agent to use the alarm(2) system call to implement timeouts.
- The use_eval specifies whether the agent should raise an exception (die in Perl) if an error condition occurs.
- The parse_head specifies whether we should initialize response headers from the <head> section of HTML documents.
- The proxy and no_proxy specify if and when communication should go through a proxy server. <URL:_URL_
- The credentials provide a way to set up the user names and passwords that are needed to access certain services.
Many applications would want even more control over how they interact with the network, and they get this by specializing LWP::UserAgent through sub-classing. The library provides a specialization called LWP::RobotUA that is used by robot applications.
An Example
This example shows how the user agent, a request and a response are represented in actual perl code:
# Create a user agent object
use LWP::UserAgent;
$ua = new LWP::UserAgent;
$ua->agent("AgentName/0.1 " . $ua->agent);
# Create a request
my $req = new HTTP::Request POST => '_URL_';
$req->content_type('application/x-www-form-urlencoded');
$req->content('match=www&errors=0');
# Pass request to the user agent and get a response back
my $res = $ua->request($req);
# Check the outcome of the response
if ($res->is_success) {
print $res->content;
} else {
print "Bad luck this time\n";
}
The $ua is created once when the application starts up. New request objects are normally created for each request sent.
NETWORK SUPPORT
This section goes through the various protocol schemes and describe the HTTP style methods that are supported and the headers that might
March 25, 2014 by Richard Elliott
In his influential book Capturing Sound: How Technology Has Changed Music, Mark Katz identifies seven traits of sound recording technology that are 'distinctive and defining': tangibility, portability, (in)visibility, repeatability, temporality, receptivity, and manipulability (Katz, Capturing Sound, pp. 9-47). While the first of these would seem the most obvious to discuss in relation to musical materiality, all seven of Katz's traits are relevant to such a discussion.
Tangibility relates to the way in which, with the onset of recorded sound, music becomes a thing. Recorded sound did not invent music as an object, and it is important to remember previous examples of the attempt to 'fix' or 'keep' music, such as notation, written description, ballad sheets, sheet music and even the collecting of musical instruments. But the objects associated with recorded sound–from early cylinders through records and on to CDs–could be seen (and felt and heard) to 'contain' music with far greater fidelity and far greater proximity to 'the thing itself' than earlier, non-sonic objects.
As many people have shown, this thing-ness would have considerable implications for the association between music and commodification (Eisenberg 2008 is particularly good on this subject). Again, recordings did not invent the connection between music and capital but they did fundamentally reconstitute the capitalist machinery of what would come to be known as the culture industry. This refiguring of music's potential as commodity means that this particular form of tangibility has survived the recent decline in the prevalence of the recorded object; the digitalizing and virtualizing of culture may have brought about a significant change in what we think of as cultural 'objects', but the relationship between culture and capital seems to be as firm in the present as at any point during the last century.
Katz's second trait, portability, refers to a quality that becomes ever more apparent in the evolution of music-as-thing. Walter Benjamin's famous analysis of mechanical reproduction highlights portability as one of the ways in which the 'aura' of the original artwork is lost. It was the artwork's location in a particular time and space that contributed to its aura. The mechanically-reproduced object, however, has no need of such an aura given that a vital part of its raison d'être is the freedom it allows its users to have it wherever and whenever they wish. Recorded music can be disseminated to diverse audiences who are severed from the time and space of the original musical moment. As Katz says, with reference to the travelling picò sound systems of Cartagena, Colombia, 'while recorded music is often decoupled from its origins in space and time, this "loss" begets a contextual promiscuity that allows music to accrue new, rich, and unexpected meanings' (Katz, p. 15).
By visibility and invisibility, Katz is highlighting the fact that, with recorded sound, there is no need for performers and audiences to see each other. This means that a certain element of expressive communication is withdrawn as there can no longer be a reliance on facial expression or bodily gesture, factors which are important to performers and audience members alike. Audiences cannot read the musicians for clues, nor can performers gauge the response of their audiences to what they are playing.
Repeatability is perhaps one of the more obvious facets of recorded sound, yet it can be conceptually complex, as a number of philosophical works that have taken repetition as their subject have shown. In terms of what Katz would call 'phonographic effects'–the changes wrought upon musical performance and consumption inaugurated by recorded sound–repetition comes to the fore in the ways in which listeners come to expect certain things from the music they hear. Repetition brings familiarity and, while this can be of great value–for example, in the possibility of studying a particular piece of music, by which we generally mean, in the phonographic era, a particular recording–it can also lead to a sense of disturbance, such as that experienced when a live performance of a recording with which we are intimately familiar does not seem to live up to its recorded counterpart. What we witness here is a process whereby a previously understood precedence–the precedence of the performance to the recording–is seemingly reversed. Musicians, of course, are listeners too and, as Katz suggests, their listening practices and phonographic awareness will inevitably affect their performance.
Until the introduction of the long-playing disc at the end of the 1940s, listeners were not able to hear more than four and a half minutes of continuous recorded sound. One of the most obvious phonograph effects, then, relates to temporality. Any kind of recording, be it writing, painting, photography or phonography, is a reduction of the complexity of the phenomenal world (but also, paradoxically, an addition to what there is in the world). As a time-based phenomenon, recorded sound inevitably placed an emphasis on the reduction of musical time to what could be fitted into the grooves of the record. A number of writers have attended to the changes undergone by specific musical forms due to the three-minute limit of 78 rpm recordings (see, for example, Vieira Nery 2004: 204; Holst 2006: 54; Conte 1995; Nagoski 2007), though it seems safe to conclude that such compromises were necessary for all musical styles and genres. As Katz relates, the three-minute standard came to be associated with the 'formula' for successful pop, though it should also be noted that the subsequent ability to record extended works did not discourage such temporal formulas from being maintained in many genres.
The transcriptive nature of most recording up to the 1950s was less to do with a desire for authenticity than a necessity. The rather cumbersome process of recording during this early period necessitated a large amount of manipulation, from the placing of singers and musicians at various points in relation to the microphone to the elimination of certain instruments and styles and their replacement with more phonograph-friendly alternatives. However, while these factors could be defined in terms of the manipulation necessary to secure good recordings, Katz chooses to categorise them as examples of 'receptivity', his least well-defined category. 'Manipulability', by contrast, is used to refer to the ability, with the advent of tape recording in the 1940s and digital recording at the end of the 1970s, to manipulate the recorded object itself. This might involve the splicing together of different recordings to create a single recorded 'text', such as happened with Teo Macero's recordings of Miles Davis in the 1960s and George Martin's of the Beatles in the same decade. Or it might involve the sampling of recordings to use as a background texture or foregrounded, featured element in new recordings, as became the practice in hip-hop and musics influenced by it.
The role of producers, studio engineers and, in the case of hip hop and dance music, deejays became far more prominent due to these kinds of sonic manipulation. Before the 'turntablism' of hip hop deejays, playback devices and records had been used as instruments or instrumental textures by a number of artists, including John Cage in his Imaginary Landscape No. 1 (1939). The use and deliberate misuse of recordings and playback technology constitute, for Caleb Kelly, a notable strand of subversive art in the second half of the twentieth century (Kelly 2009). With reference to the criticisms levelled against recording technology by thinkers such as Theodor Adorno and Jacques Attali, Kelly suggests that practitioners of what he calls 'cracked media' challenge the intended uses of technology and substitute alternative forms of labour via subversive acts of misuse and abuse.
Korean-born Nam June Paik, for example, modified turntables to allow numerous randomly accessed records to be played on them simultaneously, the end results working as both quirky sculptures and manifestations of sound art. Czech artist Milan Knižak put on ritualistic happenings in the streets of Prague before moving to sound-based work that included the physical alteration and mutilation of vinyl records. Works such as Broken Music applied cut-up techniques to records, cracking, dismantling, and rebuilding them so that they played reconfigured music. US-based Christian Marclay also worked with broken and cut-up records on projects which complemented his work in the visual arts while simultaneously performing as a radical turntablist. Japan's Yasunao Tone performed similar mutilations on compact discs, "wounding" them in order to change and distort their data. The German band Oval, meanwhile, was instrumental in turning the sound of CD malfunction into aesthetically pleasing music, using glitch as a texture in pop songs.
There are numerous aspects that Katz does not attend to in the initial presentation of his seven traits. One is the way in which some of them work together. Temporality and repeatability, for example, should be thought of as necessarily connected. Of particular importance to our project are the ways in which time and repetition shape and are shaped by memory acts, whether voluntary or involuntary. Music performs its evocation work by tapping into the ability to render the past in the present to a seemingly infinite degree. Temporality here refers not only to the length of a recording but to the duration of the remembered experience that the recording summons up. Length of recording is still important, however, because one of the many magical qualities of the recording is its very brevity in comparison to the experience it evokes. Michael Pickering and Emily Keightley attend to this issue when they write, 'It is obvious that one piece of music did not play throughout our memories of
Why CFOs should have artificial intelligence on their minds
- Getting smart about AI and ML
- Around the blocks
- Artificial intelligentsia: Humans working with machines
Smart CFOs now have to give serious thought to artificial intelligence (AI). The technology, which enables computers to be taught to analyze data, identify patterns, and predict outcomes, has evolved from aspirational to mainstream, opening a potential knowledge gap among some finance leaders.
In fact, "AI's 'early adopter' phase is ending," according to the recently published third edition of Deloitte's State of AI in the Enterprise report.1 The survey, which collected responses from nearly 2,750 executives at companies that have adopted AI, found that about half of respondents (47%) were "skilled" in their AI efforts, meaning that their companies had launched multiple AI systems, but lagged in terms of their number of implementations or their AI expertise—or both. Another 26% were categorized as "seasoned," given that they had already built multiple AI systems and shown a high level of maturity in selecting, managing, and integrating AI technologies.
For their part, CFOs seem eager to explore AI's potential, despite the pandemic-forced focus on cash. In fact, in Deloitte's North American CFO Signals™ survey for the third quarter of 2020, "accelerated business digitization," including AI, was one of the top strategic shifts CFOs said their companies were making in response to the turbulent economic environment.2
What many finance leaders recognize is that AI is more than another cutting-edge tool. By unleashing its full capabilities in finance and throughout the business, companies can turn it into a driver of differentiation that not only increases productivity, but also boosts growth. Within the finance function, for example, AI can be applied to replacing repetitive and labor-intensive tasks, performing such transactional work with increased speed and accuracy. Moreover, with its capacity to learn from large data sets, the technology can also be used to improve accuracy in such areas as budgeting and forecasting to enhance companywide decision-making.
In this issue of CFO Insights, we'll discuss how finance leaders can incorporate AI (particularly the pattern-recognition skills of its machine learning application) into their operations. In addition, we'll explore some of the decisions involved with implementing AI: What kinds of projects should you start with (hint: think visibility)? Where can you source hard-to-find talent that aligns with the mission (hint: look around you)? Should you now become a data scientist—or at least learn how to sound like one (hint: apply humility liberally)?
Getting smart about AI and ML
The term "artificial intelligence" refers to the concept of making machines smarter, enabling them to mimic human cognitive functions. Machine learning (ML) is an algorithmic branch of AI that involves allowing the machine to teach itself based on the data it receives. As the algorithms derive predictions or insights from the data, and get feedback on the outcomes, they keep refining their capabilities to achieve higher degrees of certainty.
Such an approach is clearly well-suited to the finance function, which routinely relies on large and complex volumes of data, both financial and operational, to fuel its many processes. In the State of AI survey, 67% of respondents reported that they are currently using machine learning, and almost all (97%) plan to use it in the near future. Among executives whose companies have adopted AI, many envision it transforming not only businesses, but also entire industries in the next five years. Using ML to optimize and transform processes, CFOs can automate tasks that typically require manual intervention, improving the accuracy of the accrual process or speeding up account reconciliation and, ideally, eliminating any traces of human bias.
But ML's place isn't just in back-office applications, where it can boost efficiencies. By partnering with the commercial side of the business, finance can use ML to produce insights and boost predictability, providing increasingly accurate predictions as to, for instance, which customers are likely to make repeat purchases or which suppliers are likely to default.
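To make that concrete, here is a minimal sketch of the kind of classifier such a prediction implies. The feature names, data, and choice of scikit-learn are illustrative assumptions for this example only, not a description of any specific implementation referenced above.

```python
# Hypothetical example: flagging customers likely to make repeat purchases.
# Features and labels are synthetic stand-ins for a company's own data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy feature matrix: [order_count, days_since_last_order, avg_order_value]
X = rng.normal(size=(1000, 3))
# Toy label: 1 = a repeat purchase was observed in the following quarter
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]  # probability of a repeat purchase
print(f"Holdout ROC AUC: {roc_auc_score(y_test, scores):.2f}")
```

The same pattern (train on labeled history, score new records, monitor the results) applies equally to supplier-default prediction.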
The algorithms, of course, only know what they absorb from the data—which is based on countless human decisions and a vast array of systems. As such, their knowledge base reflects and projects flaws, ranging from inconsistent data quality to potential human bias. Identifying and eliminating such deficiencies requires ongoing maintenance and testing, subjecting the algorithms to quality control so that, for instance, a bank doesn't unfairly reject the lending application of a creditworthy individual. ML would know better.
Around the blocks
The technology's capacity for learning depends on not only the volume and quality of data it receives, but also how well it is aligned with the problem. To lay down a solid foundation for the technology, companies need to assess and mitigate any quality issues involving data, undertaking data-cleansing initiatives to boost integrity and accuracy. Companies that set their expectations high, and find the availability of relevant data low, are setting themselves up for disappointment.
To support AI, any data governance issues also need to be addressed beforehand. Internal wrangling over data (some data owners may be risk-averse about sharing theirs, while others want to keep control over data because of its value) can result in needless delays.
But CFOs who remain focused on realizing the ultimate benefits of ML sooner rather than later—aware that it can free up their teams to spend more time on strategic issues—can see past the initial questions they may have, including:
- How can we fund our AI projects? Taking a cross-functional, integrated approach to ML will likely produce the most value for the enterprise, resulting in a shared decision-making tool. But companies can start with point solutions, aiming the technology at a specific problem rather than investing in a more costly enterprisewide solution. Barriers to entry for AI have dropped significantly, as platforms offering ready-made infrastructure and algorithms have become available.3 If necessary, finance leaders can explore creative funding sources, such as vendor subsidy and ecosystems programs, coinvestment strategies, and other models, to provide funding for technology innovation within finance. Teams can also explore venture capital models to fund AI use cases and to use the outcomes as proof points for further investment.
- Which early use cases are likely to yield a financial return? ML's self-learning capabilities mean it gains value over time. But identifying a specific problem, and defining the desired outcome, can enable CFOs to measure the technology's impact early on. The greatest opportunity for short-term ROI may lie with streamlining back-office activities, including transaction processing (particularly in shared services). Decreasing labor-intensive, repetitive tasks will quickly and clearly justify long-term investment in AI technology. In the State of AI survey, respondents cited improved process efficiency as the top benefit that AI has enabled them to achieve. The best use cases tend to be function-specific, but should also offer broad visibility, if possible.
- Should we build or buy AI? Finance leaders may want to collaborate with their technology counterparts (CIO, CTO) to determine whether to partner with third-party AI providers, develop solutions internally, or pursue a hybrid approach. In making this decision, finance and IT should investigate other use cases being implemented in the organization and leverage home-grown experience and talent to understand what suits the current environment. Organizations frequently mix bought capabilities and home-grown models. When evaluating whether to expand partnerships with cloud vendors and other providers or to foster new ones, consider if the problem is shared across other areas of the enterprise, and ensure alignment of the organization's AI ambitions. Is the process you are solving for specific to finance (e.g., revenue forecasting)? Or is it a solution that could benefit other areas as well (e.g., invoice matching)?
- How can we quickly develop in-house expertise? Assessing off-the-shelf solutions and designing realistic use cases requires deep competency in AI. One option is to outsource the technical end to a provider of managed AI services, enabling finance to focus on excavating data out of functional silos. Developing in-house expertise can begin with prioritizing AI-related skills in recruitment and training. It may be helpful to stage a hackathon to solve a specific business problem, using it to identify a group of developers who are interested in becoming ML engineers. By making it part of their job to do so, the company can build a knowledgeable team.
- Why should we trust AI? The notion of intelligent machines replacing—and overpowering—humans is strictly the province of blockbuster movies. CFOs should evaluate their current operations and identify opportunities to update the operating model, ensuring that the right people and capabilities are in place to maintain an AI-fueled finance function. Employees embedded in transactional tasks may need to retool themselves, developing stronger analytical skills. The finance function's overall skill set may shift, but not necessarily its population of FTEs. Humans with the right training need to manage AI systems, leveraging the appropriate capabilities.
Artificial intelligentsia: Humans working with machines
In the past, many CFOs have led campaigns to incorporate analytics, particularly in FP&A. But the stakes and rewards involved in championing ML are much higher, given its potential use across the enterprise. And while uncertainty surrounding the pandemic continues, the fact that ML is not rules-based (there's no limit to its ability to evolve) means it can change and adapt as the next normal takes shape. Similarly, finance leaders can lay the groundwork for the technology by taking piecemeal steps, such as:
- Inventory internal capabilities. AI technology, including ML, is function-agnostic, so before CFOs plant their flag, they ought to check to make sure other functions haven't preceded them. Marketing may have begun using it to understand how to better retain customers in a virtual environment. Supply chain may already be relying on it to
In the circuit 66, the drain 80 and source 82 are disposed in series between the pre-drain resistor 70 and the post-source resistor 72. The gate 76 is connected to WL3. The pre-drain resistor 70, the drain 80, the source 82, and the post-source resistor 72 are disposed in series on the bit-line BL0. The capacitor 68, which models the capacitance of the bit-line, has one plate connected to ground 74 and another plate connected to the bit-line BL0, in parallel with the memory elements 64.
Several of the components of the circuit 66 represent phenomena affecting the memory elements 64 when they are sensed. The pre-drain resistor 70 generally represents the drain-to-bitline resistance of the memory elements 64 connected to the bit-line above (i.e., up current from) WL3 when these memory elements 64 are turned on (e.g., during a read operation). Similarly, the post-source resistor 72 generally corresponds to the source-to-ground resistance of the memory elements 64 connected to the bit-line below WL3 when the memory element 64 is sensed. The circuit 66 models electrical phenomena associated with reading the memory elements 64 at the intersection of WL3 and BL0.
The operation of the memory elements 64 will now be briefly described with reference to FIGS. 4 and 5. FIG. 5 illustrates one potential relationship between the bit-line current (IBIT), the word-line voltage (VWL), and the voltage of the floating gate 78 (VFG). As illustrated by FIG. 5, VFG affects the response of the memory element 64 to a given VWL. Decreasing the voltage of the floating gate shifts the I-V curve of the memory elements 64 to the right. That is, the relationship between the bit-line current and a word-line voltage depends on the voltage of the floating gate 78. The memory elements 64 may store data by exploiting this effect.
To write data to the memory elements 64, a charge corresponding to the data may be stored on the floating gate 78. The charge of the floating gate 78 may be modified by applying voltages to the source 82, drain 80, and/or gate 76 such that the resulting electric fields produce phenomena like Fowler-Nordheim tunneling and/or hot-electron injection near the floating gate 78. Initially, the memory elements 64 may be erased by applying a word-line voltage designed to drive electrons off of the floating gate 78. In some embodiments, an entire column or block of memory elements 64 may be erased generally simultaneously. Once the memory elements 64 are erased, the gate 76 voltage may be manipulated to drive a charge onto the floating gate 78 that is indicative of a data value. After the write operation ends, the stored charge may remain on the floating gate 78 (i.e., the memory elements 64 may store data in a nonvolatile fashion).
As illustrated by FIG. 5, the value stored by the memory element 64 may be read by applying a voltage, VWL, to the gate 76 and quantizing (e.g., categorizing) a resulting bit-line current, IBIT. Each of the I-V traces depicted by FIG. 5 corresponds to a different charge stored on the floating gate, VFG, which should not be confused with the voltage that is applied to the gate, VWL. The difference in floating gate 78 voltage, VFG, between each I-V trace is an arbitrarily selected scaling factor "x." The illustrated I-V traces correspond to eight different data values stored by the memory element 64, with a VFG of 0x representing a binary data value of 000, a VFG of 1x representing a binary data value of 001, and so on through VFG of 7x, which represents a binary data value of 111. Thus, by applying a voltage to the gate 76 and measuring the resulting bit-line current, the charge stored on the floating gate 78 may be sensed, and the stored data may be read.
The accuracy with which the bit-line current is quantized may affect the amount of data that a designer attempts to store in each memory element 64. For example, in a system with a low sensitivity, a single bit may be stored on each memory element 64. In such a system, a floating gate voltage VFG of 0x may represent a binary value of 0, and a floating gate voltage VFG of −7x may represent a binary value of one. Thus, the difference in floating gate voltages VFG corresponding to different data values may be relatively large, and the resulting differences in bit-line currents for different data values may also be relatively large. As a result, even low-sensitivity sensing circuitry may quantize (e.g., discern) these large differences in bit-line current during a read operation. In contrast, high-sensitivity sensing circuitry may facilitate storing more data in each memory element 64. For instance, if the sensing circuitry can distinguish between the eight different I-V traces depicted by FIG. 5, then the memory elements 64 may store three bits. That is, each of the eight different charges stored on the floating gate 78 may represent a different three-bit value: 000, 001, 010, 011, 100, 101, 110, or 111. Thus, circuitry that precisely quantizes the bit-line current IBIT may allow a designer to increase the amount of data stored in each memory element 64.
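Purely as an illustration of the quantization idea described above (this sketch is not part of the described embodiments, and the current levels are arbitrary placeholders), a nearest-level quantizer mapping a sensed bit-line current to one of eight 3-bit codes might look like this:

```python
# Illustrative nearest-level quantizer for a sensed bit-line current.
# The nominal current levels below are invented values, not figures from FIG. 5.
NUM_LEVELS = 8                                    # eight floating-gate states -> 3 bits per cell
LEVELS_UA = [5.0 * k for k in range(NUM_LEVELS)]  # hypothetical nominal currents in microamps

def quantize(i_bit_ua: float) -> int:
    """Return the 3-bit code whose nominal current is closest to the reading."""
    return min(range(NUM_LEVELS), key=lambda k: abs(LEVELS_UA[k] - i_bit_ua))

print(format(quantize(16.2), "03b"))  # '011' -- closest to the 15 uA level
print(format(quantize(18.0), "03b"))  # '100' -- a noisier reading crosses into the next level
```

The second call also hints at the noise problem discussed next: a fluctuation of a few microamps is enough to push the reading into the adjacent code.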
However, as mentioned above, a variety of effects may interfere with accurate measurement of the bit-line current. For instance, the position of the memory elements 64 along a bit-line may affect RPD and RPS, which may affect the relationship between the word-line voltage VWL and the bit-line current IBIT. To illustrate these effects, FIG. 6 depicts noise on the bit-line while reading from the memory element 64. As illustrated, noise in the bit-line current IBIT may cause the bit-line current IBIT to fluctuate. Occasionally, the fluctuation may be large enough to cause the bit-line current IBIT to reach a level that represents a different stored data value, which could cause the wrong value to be read from the memory elements 64. For instance, if the bit-line current is sensed at time 84, corresponding to an arbitrarily selected peak, a data value of 100 may be read rather than the correct data value of 011. Similarly, if the bit-line current is sensed at time 86, corresponding to an arbitrarily selected local minimum, a data value of 010 may be read rather than a data value of 011. Thus, noise on the bit-line may cause erroneous readings from memory elements 64.
FIG. 7 depicts a quantizing circuit 16 that may tend to reduce the likelihood of an erroneous reading. The illustrated quantizing circuit 16 includes an analog-to-digital converter 88 and a digital filter 90 connected to each of the bit-lines 38, 40, 42, 44, and 46, respectively. Each bit-line 38, 40, 42, 44, and 46 may connect to a different analog-to-digital converter 88 and digital filter 90. The digital filters 90, in turn, may connect to an input/output bus 92, which may connect to a column decoder 18, a column address latch 20, and/or control circuitry 28 (see FIG. 2).
In operation, the quantizing circuit 16 may quantize (e.g., digitize) analog signals from the memory elements 64 in a manner that is relatively robust to noise. As explained below, the quantizing circuit 16 may do this by converting the analog signals into a bit-stream and digitally filtering high-frequency components from the bit-stream.
The analog-to-digital converter 88 may be a one-bit, analog-to-digital converter or a multi-bit, analog-to-digital converter. In the present embodiment, an analog-to-digital converter 88 receives an analog signal from the memory element 64, e.g., a bit-line current IBIT or a bit-line voltage VBL, and outputs a bit-stream that represents the analog signal. The bit-stream may be a one-bit, serial signal with a time-averaged value that generally represents the time-averaged value of the analog signal from the memory element 64. That is, the bit-stream may fluctuate between values of zero and one, but its average value, over a sufficiently large period of time, may be proportional to the average value of the analog signal from the memory element 64. In certain embodiments, the bit-stream from the analog-to-digital converter 88 may be a pulse-density modulated (PDM) version of the analog signal. The analog-to-digital converter 88 may transmit the bit-stream to the digital filter 90 on a bit-stream signal path 94.
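As a software analogy only (not the converter circuit itself, and with arbitrary parameters), the following sketch mimics that behavior: a first-order pulse-density modulator turns a normalized analog level into a one-bit stream, and a counter averages the stream over the sensing time.

```python
# Behavioral sketch of pulse-density modulation followed by a counting filter.
def pdm_bitstream(level, n_samples):
    """Modulate a level in [0, 1] into a stream of 0s and 1s whose average tracks the level."""
    acc, bits = 0.0, []
    for _ in range(n_samples):
        acc += level
        bit = 1 if acc >= 1.0 else 0   # emit a 1 whenever the accumulator crosses the threshold
        acc -= bit
        bits.append(bit)
    return bits

def counter_filter(bits):
    """Low-pass filter: count the ones and divide by the sensing time (number of samples)."""
    return sum(bits) / len(bits)

stream = pdm_bitstream(0.625, n_samples=512)
print(counter_filter(stream))  # ~0.625, i.e., the time-averaged value of the input
```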
The digital filter 90 may digitally filter high-frequency noise from the bit-stream. To this end, the digital filter 90 may be a low-pass filter, such as a counter, configured to average (e.g., integrate and divide by the sensing time) the bit-stream over a sensing time, i.e., the time period over which the memory element 64 is read. (Alternatively, in some embodiments, the digital filter
described by Satake et al., using casein (2 % in 200 mM Tris–HCl buffer, pH 7.0) as substrate. Briefly, 0.4 ml of casein was incubated with different concentrations of PgSE (0-300 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. About 1.5 ml of trichloroacetic acid (TCA) (0.44 M) was added to terminate the reaction, and the mixture was allowed to stand for 30 min at room temperature. The mixture was centrifuged at 3000 rpm for 5 min and the supernatant (1 ml) was mixed with 0.4 M sodium carbonate (2.5 ml) and 1:2 diluted Folin reagent (0.5 ml). The colour developed was read at 660 nm. One unit of enzyme activity was defined as the amount of enzyme required to increase the absorbance by 0.01 at 660 nm. Protease activity was expressed as units/min/mg.
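As a rough illustration of the unit arithmetic defined above (the absorbance change, incubation time, and protein amount below are placeholders, not assay data), specific activity could be computed as follows:

```python
# One unit = the amount of enzyme that increases A660 by 0.01;
# activity is then normalized to incubation time (min) and protein (mg).
def specific_activity(delta_a660, incubation_min, protein_mg):
    """Return protease activity in units/min/mg."""
    units = delta_a660 / 0.01
    return units / (incubation_min * protein_mg)

# Placeholder example: delta A660 of 0.30 after 2.5 h (150 min) with 0.1 mg protein
print(specific_activity(0.30, incubation_min=150, protein_mg=0.1))  # 2.0 units/min/mg
```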
Fibrinogenolytic activity was performed as described by Gubbiveeranna et al. Briefly, human fibrinogen (50 µg) was treated with different concentrations of PgSE (0.5–20 µg) in 25 µl of 10 mM sodium phosphate buffer (pH 7.0) and incubated at 37° for 2.5 h. Reducing sample buffer (10 µl) containing 1 M urea, 4 % SDS and 4 % beta (β)-mercaptoethanol was added to terminate the reaction, and the mixture was kept in a boiling water bath for 3 min. The hydrolyzed products were analyzed on 12 % SDS-PAGE and visualized by staining with Coomassie brilliant blue R-250.
Fibrinolytic activity was carried out according to the method of Shivaiah and Kempaiah. Briefly, trisodium citrate (3.2 %)-treated blood in the ratio 1:9 was centrifuged at 3000 rpm for 5-10 min. The supernatant obtained was separated and used as platelet-poor plasma (PPP). Equal volumes of PPP (100 µl) and 25 mM CaCl2 (100 µl) were incubated at 37° to obtain a fibrin clot. The clot formed was thoroughly washed with 10 mM sodium phosphate buffer (pH 7.0) 5-6 times. The washed fibrin clot was incubated with different concentrations of PgSE (15 to 120 µg) in a total reaction volume of 40 μl of 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. Reducing sample buffer (20 µl) containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol was added to terminate the reaction, and the mixture was kept in a boiling water bath for 3 min. An aliquot (20 μl) of the supernatant was subjected to 10 % SDS-PAGE to analyze the fibrin-hydrolyzing pattern.
Recalcification time (RT):
Plasma recalcification time was determined according to the method described by Gubbiveeranna et al. Briefly, PPP (100 µl) was pre-warmed to 37° before use and incubated with different concentrations of PgSE (0-80 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 5 min. Later, 100 µl of 25 mM CaCl2 was added and the clotting time was recorded. The 10 mM sodium phosphate buffer (pH 7.0) alone, without PgSE, was used as the negative control.
The protein concentration was estimated as described by Lowry et al. Briefly, bovine serum albumin (BSA) was used as the standard, and the protein concentration of PgSE was determined by comparison with known concentrations of BSA.
The experiments were performed in triplicate and the data obtained were expressed as mean±standard error of mean (SEM). The results were statistically analysed using one-way analysis of variance (ANOVA) followed by Tukey's multiple comparison test. The data were considered significant at p<0.05.
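For illustration, the analysis described above could be reproduced along these lines with SciPy and statsmodels; the clotting-time values below are invented for the example and are not experimental data:

```python
# Mean ± SEM per group, one-way ANOVA, then Tukey's multiple comparison test.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

groups = {
    "control":   np.array([161.0, 158.0, 165.0]),  # hypothetical clotting times (s), n = 3
    "PgSE_40ug": np.array([300.0, 310.0, 295.0]),
    "PgSE_80ug": np.array([487.0, 470.0, 495.0]),
}

for name, vals in groups.items():
    print(name, f"mean = {vals.mean():.1f}", f"SEM = {stats.sem(vals):.1f}")

f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.1f}, p = {p_val:.4g}")

values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))  # pairwise differences at p < 0.05
```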
Results and Discussion
The PgSE (160 µg) was treated with non-reducing sample buffer and incubated in a boiling water bath for 3 min. The sample was then loaded onto 12 % SDS-PAGE under non-reducing conditions. After electrophoresis, the gel was stained with Coomassie Brilliant Blue R-250 to visualise the protein bands. PgSE exhibited dense bands in the high molecular weight region between 122 kDa and 47 kDa. Two distinct bands were seen in the lower molecular weight region at ~16.6 kDa and ~14.3 kDa (fig. 1).
PgSE was evaluated for proteolytic activity using 2 % casein as substrate. It showed a specific activity of 0.8 U/mg/ml. Casein (0.2 %) and gelatin (0.2 %) were copolymerized with the polyacrylamide gel separately for the detection of proteolytic activity. PgSE (80 µg) was incubated with non-reducing sample buffer at 37° for 1.5 h and loaded onto 12 % SDS-PAGE under non-reducing conditions. After electrophoresis, gels were washed with 2.5 % Triton X-100 for 1 h to remove SDS. The gels were incubated overnight in incubation buffer containing Tris–HCl (50 mM, pH 7.6, 10 mM CaCl2 and 150 mM NaCl). Gels were then stained with 0.25 % Coomassie brilliant blue R-250 to visualize the activity bands. The bands were observed in the molecular weight region ~97 kDa and between 34 kDa and 18.2 kDa for caseinolytic activity. Similarly, translucent activity bands in the molecular weight region ~96.7 kDa and between 52.3 kDa and 24.9 kDa were seen for the gelatinolytic activity (fig. 2).
PgSE was studied for fibrinogenolytic activity using human fibrinogen, which is a 340 kDa soluble plasma glycoprotein composed of three subunits (Aα, Bβ and γ). Fibrinogen plays a crucial role in arresting bleeding during vascular injury by getting converted to fibrin upon the action of thrombin. The fibrin is subsequently converted to a fibrin-based blood clot. PgSE hydrolysed all the chains (Aα, Bβ and γ) of fibrinogen in a dose-dependent manner. The fibrinogen (50 µg) was incubated with different concentrations of PgSE (0.5 µg to 20 μg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. The reaction was terminated by adding denaturing sample buffer containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol, and the mixture was kept in a boiling water bath for 3 min. SDS-PAGE (12 %) was performed under reducing conditions to visualize the degradation pattern. After electrophoresis, the gel was stained with 0.25 % Coomassie brilliant blue R-250 (fig. 3).
The fibrinolytic activity of PgSE was studied using human fibrin. PgSE hydrolyzed the fibrin clot in a dose-dependent manner. The α-polymer and α-chain subunits of fibrin were hydrolysed at 90 µg, followed by partial degradation of the γ-dimer and β-chain subunits at a concentration of 120 µg. Equal volumes of PPP (100 µl) and 25 mM CaCl2 (100 µl) were mixed and incubated to obtain a fibrin clot. The fibrin clot was washed in 10 mM sodium phosphate buffer (pH 7.0) and incubated with different concentrations of PgSE (15 µg to 120 µg) in 10 mM sodium phosphate buffer (pH 7.0) at 37° for 2.5 h. The reaction was terminated by adding denaturing sample buffer containing 1 M urea, 4 % SDS and 4 % β-mercaptoethanol, and the mixture was kept in a boiling water bath for 3 min. The sample was loaded onto 10 % SDS-PAGE under reducing conditions. After electrophoresis, the gel was stained with Coomassie Brilliant blue R-250 to visualize the bands (fig. 4).
PgSE, which was found to contain protease activity associated with fibrinogenolytic and fibrinolytic activity, showed anticoagulant activity upon incubation with PPP. PgSE increased the recalcification time in a dose-dependent manner, indicating its anticoagulant property. PgSE increased the clotting time from 161 s (control) to 487 s at a concentration of 80 µg, a 3.03-fold increase (fig. 5).
Fig. 5: RT of PgSE. PPP (100 μl) was incubated with different concentrations of PgSE (20, 40, 60 and 80 μg) in a total volume of 20 μl of sodium phosphate buffer (10 mM, pH 7.0) at 37
Tissue-specific regulation of Igf2r/Airn imprinting during gastrulation
Abstract
Background
Appropriate epigenetic regulation of gene expression during lineage allocation and tissue differentiation is required for normal development. One example is genomic imprinting, which is defined as parent-of-origin mono-allelic gene expression. Imprinting is established largely due to epigenetic differences arriving in the zygote from sperm and egg haploid genomes. In the mouse, there are approximately 150 known imprinted genes, many of which occur in imprinted gene clusters that are regulated together. One imprinted cluster includes the maternally expressed Igf2r, Slc22a2, and Slc22a3 genes and the paternally expressed long non-coding RNA (lncRNA) Airn. Although it is known that Igf2r and Airn are reciprocally imprinted, the timing of imprinted expression and accompanying epigenetic changes have not been well characterized in vivo.
Results
Here we show lineage- and temporal-specific regulation of DNA methylation and histone modifications at the Igf2r/Airn locus correlating with differential establishment of imprinted expression during gastrulation. Our results show that Igf2r is expressed from both alleles in the E6.5 epiblast. After gastrulation commences, the locus becomes imprinted in the embryonic lineage with the lncRNA Airn expressed from the paternal allele and Igf2r restricted to maternal allele expression. We document differentially enriched allele-specific histone modifications in extraembryonic and embryonic tissues. We also document for the first time allele-specific spreading of DNA methylation during gastrulation concurrent with establishment of imprinted expression of Igf2r. Importantly, we show that imprinted expression does not change in the extraembryonic lineage even though maternal DMR2 methylation spreading does occur, suggesting distinct mechanisms at play in embryonic and extraembryonic lineages.
Conclusions
These results indicate that similar to preimplantation, gastrulation represents a window of dynamic lineage-specific epigenetic regulation in vivo.
Background
Genomic imprinting is an epigenetic phenomenon that results in mono-allelic gene expression in a parent-of-origin manner. Imprinted expression has been identified at approximately 150 mouse genes, which often occurs in clusters containing multiple imprinted transcripts [1,2]. Expression of imprinted genes is thought to be established in cis by allele-specific DNA methylation established at imprinting control regions (ICRs) in the gametes, thus arriving in the zygote as maternal and paternal specific information. A regulatory theme has emerged at many imprinted clusters in which a single long non-coding RNA (lncRNA) is thought to repressively regulate genes in cis through direct transcriptional blocking and/or recruitment of repressive chromatin remodeling complexes such as G9a and PRC2, resulting in differential allele-specific histone modifications [3,4].
One cluster on mouse chromosome 17 includes the maternally expressed Igf2r, Slc22a2, and Slc22a3 genes and the paternally expressed lncRNA Airn [5], and several non-imprinted genes (Slc22a1, Mas, and Plg). The Airn promoter lies in the second intron of Igf2r, and Airn transcription occurs from the opposite strand overlapping Igf2r exons 1 and 2 [5-7]. Paternal Airn expression may participate in imprinting of the maternally expressed genes by blocking access of the transcriptional machinery to the Igf2r start site [8], and transcription of Airn has been shown to be required for silencing of Igf2r [8,9]. Paternal allele silencing of the other imprinted genes in the cluster only occurs in extraembryonic lineages and may be a result of Airn recruitment of repressive complexes such as G9a to their promoters [4]. Biallelic expression of Igf2r is observed in ES cells and only becomes imprinted upon differentiation in vitro [4]. Although the expression of Igf2r and Airn has been documented in preimplantation and late stage embryos [10-12], lineage-specific expression dynamics have not been observed during gastrulation. Recent studies have focused on mechanisms in ES cell models [4,8,13], but the precise timing and mechanisms responsible for imprinting at Igf2r/Airn in vivo remain unknown. Here we characterize tissue-specific dynamics of expression and epigenetic modifications that occur at Igf2r/Airn during normal gastrulation. We show that significant epigenetic regulation occurs at imprinted loci during epiblast differentiation in vivo.
Results and discussion
Imprinted expression of Igf2r and Airn during gastrulation
The Igf2r/Airn imprinted cluster contains the maternally expressed Igf2r, Slc22a2, and Slc22a3, the paternally expressed Airn, and the non-imprinted Plg, Slc22a1, and Mas1 genes (Figure 1A). We determined the expression of the genes within the cluster in embryonic and extraembryonic tissues from C57BL/6JxPWD/PhJ-F1 embryos at embryonic days E6.5 and E7.5 by RT-PCR (Figure 1B). Of the genes in the cluster, Igf2r is expressed in both the epiblast (EPI) and visceral endoderm (VE) at E6.5 and the embryonic (EM) and extraembryonic (EX) tissues of E7.5 embryos (Figure 1B). Airn is expressed in the VE at E6.5 and in both tissues at E7.5. However, no Airn was detected in the epiblast at E6.5 (Figure 1B).
Figure 1. Expression analysis. (A) Schematic of gene locations at the mouse Igf2r/Airn locus: transcription start sites (bent arrows), Igf2r (light grey), and Airn (dark grey). (B) RT-PCR analysis of genes in the cluster shows expression of Igf2r at E6.5 and E7.5, as well as Airn expression in the E6.5 VE and E7.5 EM and EX. The other genes in the cluster are not expressed at appreciable levels during gastrulation. (C) SSCP analysis of Igf2r expression shows biallelic expression in the E6.5 EPI, while the paternal allele is silent (imprinted) in all other tissues and stages examined. (D) RT-PCR demonstrates that Airn is not expressed in epiblast but is paternally expressed (E) in all other samples. Two embryos (one per lane) shown for each tissue/stage for each assay. EPI, epiblast; VE, visceral endoderm; EM, embryonic portion of E7.5 embryo; EX, extraembryonic portion of E7.5 embryo. Red box highlights the non-imprinted status of Igf2r and lack of Airn expression in the E6.5 EPI. B, B6 allele; P, PWD allele. +, pooled adult kidney, liver, brain, and heart cDNA. Parental tissue used in (C-E) is adult kidney.
To further understand the imprinted expression of Igf2r and Airn during gastrulation, we carried out allele-specific expression analysis of C57BL/6JxPWD/PhJ-F1 and C57BL/6J-Chr 17PWD/Ph/ForeJxC57BL/6J-F1 embryos (hereafter referred to as B × P and P × B F1 embryos, respectively). Single-strand conformation polymorphism (SSCP) analysis revealed that Igf2r is expressed from both alleles in the EPI of E6.5 embryos (Figure 1C, red box). In E6.5 VE, Igf2r is maternally expressed and paternally imprinted (Figure 1C). At E7.5, Igf2r is imprinted in both tissues (Figure 1C). Our results show that in the multipotent epiblast, Igf2r is expressed from both alleles, but once embryonic cells have adopted defined lineages at E7.5, Igf2r expression becomes imprinted. This correlation suggests a relationship between relative differentiation state in vivo and imprinted expression at the locus - consistent with ES cell models.
Since Airn is thought to establish imprinting of Igf2r [8], we also examined allele-specific Airn expression. In E6.5 EPI, Airn is not expressed (Figure 1B,D, red box), corresponding with biallelic Igf2r expression (Figure 1C). In the VE at E6.5, where Igf2r is imprinted, we observe reciprocal imprinting (paternal expression) of Airn (Figure 1
Why is It Essential to Invest in Bird Proofing?
People must watch pesky birds to protect their health and property. Here are a few reasons why:
- Wellbeing Threats – Bird droppings and nesting materials have been linked to several diseases that can infect humans and animals. Droppings can carry parasites and pathogens responsible for infections such as encephalitis and salmonellosis.
- Investing Costs – Besides affecting people's well-being, bird droppings and other debris can damage your property. If left untouched, birds can completely ruin your property or solar panel after weeks, months and years of damage.
- Equipment Defects – Pigeon droppings can cause costly damage to solar panels and other equipment. The acidic nature of their faeces can discolour surfaces and oxidise metals.
- Clogged Drains – Bird droppings and nesting materials such as twigs, grass, and moss clog the drains and gutters. If not cleaned properly, it can cause flooding, especially during the rainy season.
- Falling Chances – Slippery bird droppings in the environment can increase the chances of slipping and falling.
- Food Safety – Birds find their way into restaurants, storage facilities, processing and packaging areas, and other places and contaminate food. Failing to monitor bird nests can seriously compromise food safety compliance.
Solar Panel Pigeon Proofing in Melbourne
One of the common issues faced by our clients is the damage caused by pigeons. Their nesting materials and droppings can accumulate on the panels, reducing their efficiency and potentially causing long-term damage. Additionally, bird droppings can be unsightly and unhygienic, posing health risks to you and your family.
If you are facing issues with pigeon proofing in Melbourne, Tom's Pest Control Melbourne is here to help you. Our professional pigeon control service team in Melbourne specialises in providing effective solar panel pigeon proofing solutions.
How We Implement Pigeon Control Solutions on Your Solar Panels?
At Tom's Pest Control, we offer reliable and humane pigeon proofing methods tailored to suit your needs. Many of us worry about the methods bird removal companies use for bird control; we believe in humane pigeon proofing that ensures the birds are not harmed. Here's how we can assist you:
- Assessment: Our expert team will thoroughly inspect your solar panels and surrounding areas to identify potential entry points and areas where pigeons may nest.
- Customised Solutions: Based on our assessment, we will recommend the most suitable pigeon proofing techniques for your solar panels. These may include:
- Installing Mesh or Wire Barriers: We will secure your solar panels with durable solar panel bird proofing mesh or wire barriers, preventing pigeons from accessing the area.
- Implementing deterrents: We can install effective visual or audio bird deterrents, such as reflective devices, bird spikes, or ultrasonic devices, to discourage pigeons from roosting on your solar panels.
- Professional Installation: Our experienced technicians will skillfully install the chosen solar panel pigeon proofing measures to ensure their effectiveness and durability.
- Maintenance and Follow-up: We offer ongoing maintenance services to ensure that the pigeon proofing measures remain intact and continue to protect your solar panels. Our team will always be a phone call away if you encounter any future issues.
Our solar panel pigeon proofing methods are safe for the birds and comply with ethical standards. We strive to deliver long-lasting results and provide a pest-free environment for your solar panels.
Despite being a crucial part of our ecology, birds may seriously harm humans and property. Damages may compound and cost you money, whether you have a bird nesting in your house or too many birds living on your land.
Birds typically construct nests within rain gutters and other drainage system components when they settle on your roof. When the drainage system is obstructed, snow, ice, and rain can accumulate on your roof and cause significant problems.
Roofs under too much stress may cave in, but not before they fracture and require expensive repairs. Also, the acidity in bird droppings can damage HVAC systems, roofing, vehicles, and more. When the droppings corrode HVAC wiring and systems, they will inevitably malfunction or break down. When the droppings corrode rooftops, leaks and holes can appear.
Solar panels are becoming more and more popular among homeowners to reduce their power costs. However, when pigeons build a nest below your solar panels, they severely damage the panels and wiring, creating an unattractive mess on your roof. Consequently, it is necessary to pigeon-proof solar panels.
Choosing our specialists to install your solar bird proofing means putting your faith in a business with years of experience. Our pest control specialists in your area are happy to help you discuss and share your concern about your pest issues.
We can provide quality treatments while keeping our costs low. In addition, your solar panel pigeon proofing will be backed by the Tom's Pest Control Melbourne Service Guarantee to provide you peace of mind. If the treatment is unsuccessful, we'll fix it for free. Waiting for pigeons to ruin your solar panels is a bad idea.
Following are the steps covered in bird control inspection on your property:
- Identification of the bird species involved, which will determine the appropriate course of action.
- It's crucial to comprehend that the term "infestation level" often refers to the flock or population size.
- Detect the area where the birds have taken over. It is typical to locate several places on the property.
- Determine the access equipment needed by correctly evaluating the site. Ladders, lifts, cranes, and scaffolding are examples of such equipment.
- Recognise any limitations that can cause your project to stop or be delayed.
- You could be involved in initiatives that need specific permits, such as public access or even renovating historic structures.
- Your pricing estimate, which will include labour time, travel costs, equipment rentals, material costs, and other ancillary costs, will be broken down with your site evaluation.
Your solar panel investment might be compromised if pigeons destroy the inside and outside components of the panels. In addition, high amounts of acidity found in pigeon droppings can harm the surface of your solar panels and erode cables.
So, it's crucial to use a solar panel bird control service. Our skilled and experienced bird control professionals at Tom's Pest Control Melbourne successfully bird-proof solar panels. Our standardised processes for bird-proofing solar panels aim to prevent birds from landing and roosting on the panels.
We also provide routine maintenance services for further protection against bird poop and other debris on your solar panels. Don't hesitate to contact Tom's Pest Control if you are worried about birds harming your solar panels.
The worst that can happen from bird mites is a few nights of restless itching. Even so, it's preferable to keep them at bay. Before removing any nesting birds from your property, keep an eye out for them and verify local wildlife rules.
Physical eradication is the best line of action when attempting to get rid of mites in your house. You might want to try vacuuming them up or wiping them away with a damp cloth. To permanently rid your home of mites, you must dispose of the vacuum cleaner bag immediately.
A safe insecticide that works well against mites is permethrin. It is safe for pets to be exposed to and is used for localised spot therapy.
For their food and that of their young, birds eat insects, including mosquitoes, beetles, and moths. Insect larvae are highly protein-rich and are caught in large quantities by birds.
Due to the prevalence of insects that prey on plants and animals (from grains to human blood), life would have been terrible without birds. This means that in natural systems, birds are essential for both lowering and sustaining insect populations.
Being insectivorous, birds consume beetles, weevils, caterpillars, grasshoppers, and other insects. By bringing bluebirds to your property, you may effectively manage pests without the use of pesticides or additional expenses.
Even though using harsh pesticides to manage insect infestations may have solved one issue, they ultimately had terrible effects, and endangered birds were killed. However, with precise scientific technologies, we can now utilise medicines that target that particular insect while harmless to all other animals.
When battling infestations, this is essential since, while you want to get rid of the bug, you also don't want deadly and hazardous chemicals near your kids, pets, and the wonders of nature, like birds. However, you may use Control Bug Spray with confidence, knowing that you're using a natural solution that eliminates pests and won't harm your bird.
Remember that using natural insecticides is beneficial for you, your pets, and not only your birds. Invest today.
Bird mesh is one of the best methods for protecting residential solar systems from birds. Bird mesh wraps around the perimeter of the entire array and is designed to seal the space beneath your solar panels. It is attached directly to the panels.
The least appealing solution for deterring birds is spikes, yet they are quite effective. Spikes discourage birds from perching on or near your solar panels long enough to build a nest or create a major mess.
Although they may seem old-fashioned, plastic birds of prey are functional. For example, a fake owl can move convincingly and often enough to drive birds away if you invest in one with a head that swings in the wind. For solar panels, they make excellent pigeon guards.
You may have to spend between $200 and $1500 to get your property pigeon proofed. However, the price may increase based on the number of solar panels in your system, the slope of your roof, the height of your home'
NP TaqMan assay; Leo Kenefic for interpretation of the genotypes from the 2006 outbreak in Scotland; and Jodi Beaudry, Molly Matthews, James Cook, and Christine Clark-Friedman for assistance with aspects of the TEA phylogeny. Thanks also to Jane Burton, Suzanna Hawkey, and the Special Pathogens Reference Unit team for assistance with preparation of nucleic acid samples and to John Gillece and Remy Hilsabeck for performing the Illumina whole genome sequencing of Ba4599 and for assistance with bioinformatic analyses.
This work was supported by the US Department of Homeland Security Science and Technology Directorate through award HSHQDC-10-C-00139 and the UK Department of Health. This work was also funded, in part, under Agreement no. HSHQDC-07-C-00020 awarded by the US Department of Homeland Security for the management and operation of the National Biodefense Analysis and Countermeasures Center, a federally funded research and development center.
Biography
Dr Price is a postdoctoral research fellow at the Center for Microbial Genetics and Genomics at Northern Arizona University in Flagstaff. Her research interests focus on Burkholderia pseudomallei and Bacillus anthracis evolution, genetics, and genomics, particularly in vivo evolution of these pathogens.
Footnotes
Suggested citation for this article: Price EP, Seymour ML, Sarovich DS, Latham J, Wolken SR, Mason J, et al. Molecular epidemiologic investigation of an anthrax outbreak among heroin users, Europe. Emerg Infect Dis [serial on the Internet]. 2012 Aug [date cited]. _URL_
1Current affiliation: Menzies School of Health Research, Casuarina, Northern Territory, Australia.
New Tactic for Weight Management: Blood Sugar Control
Nutritional Outlook, Volume 17, Issue 5
Controlling blood sugar is a newer tactic for weight management.
Of all the neologisms that shape and mirror our age–locavore, crowdsource, twerking–none may resonate more profoundly, or regrettably, than diabesity. It's a term that pediatric endocrinologist and researcher Francine R. Kaufman, MD, applied to the conflation of type 2 diabetes and obesity. Alas, the word all too aptly describes the collection of ills that characterize both epidemics: excess abdominal fat, high blood sugar, generalized inflammation, etc.
These truly are epidemics. The American Diabetes Association says that close to 10% of the U.S. population is diabetic, a diagnosis given to those with fasting blood glucose levels topping 126 mg/dL in at least two tests. The toll on our economy associated with diagnosed cases, to say nothing of undiagnosed cases, amounts to fully $245 billion annually.
Meanwhile, nearly 35% of adult Americans meet the Centers for Disease Control and Prevention's definition of obesity–an adult with a body mass index of 30 or above–with repercussions ranging from conditions like heart disease, stroke, and some forms of cancer to–not coincidentally–type 2 diabetes.
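For reference, the CDC threshold cited above boils down to a one-line calculation; the height and weight in this sketch are arbitrary examples, not figures from the article:

```python
# Body mass index: weight in kilograms divided by height in meters squared.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

value = bmi(weight_kg=95.0, height_m=1.75)
print(f"BMI = {value:.1f}", "(obese)" if value >= 30 else "(below the obesity threshold)")
```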
The hybrid nature of diabesity underscores a fundamental link between the two conditions it describes–a link that not only reflects similarities in their causes, but suggests novel interventions for mitigating them. The more we learn about the relationship between the disordered glucose metabolism at the heart of diabetes and the weight gain that leads to obesity, the more it appears that supplements designed to control the former may help control the latter, as well.
Complex Puzzle
Of course, researchers and clinicians have long known that obesity, overweight, and physical inactivity are risk factors for type 2 diabetes, which accounts for almost 95% of the nation's cases, including in children, as more young people edge over into overweight territory. These associations notwithstanding, it's worth noting that not all overweight or obese people are diabetic–nor are all diabetics, type 2 or otherwise, overweight or obese. But about 80% of type 2 diabetics are overweight or obese, which raises the question: What is it about excess weight and diabetes that binds the two in a sort of chicken-and-egg interdependence?
Researchers are still sorting out all the puzzle pieces, but we have some ideas as to where and how they fit. To bring that picture into better resolution, it helps to understand a bit about energy, its metabolism, and the hormones that regulate it.
A healthy body harvests the energy it needs from the foods and drinks it eats. More specifically, the body breaks down fats, proteins, carbohydrates, and even alcohol to the simple sugar glucose, which is an easily convertible "energy currency" for the cells. The body does this with the help of the hormone insulin, which the pancreas continually secretes but really starts pumping out once glucose levels in the blood rise, such as after a meal or snack.
Because simple sugars and starches break down easily into glucose–that is, they have a high glycemic index–the more sugary or starchy the meal or snack, the more rapidly glucose floods the bloodstream in a state known as postprandial hyperglycemia. When we're in that state, says Mitch Skop, senior director, new product development, Pharmachem Laboratories Inc. (Kearny, NJ), "the pancreas releases more insulin to escort glucose into the cells" where the glucose supplies immediate energy or goes into storage as fat in the liver. Given glucose's ability to trigger this response, those of us fond of sweetened beverages and an abundance of starchy and sugary foods run the risk of drowning our cells in glucose–and insulin–every time we eat. And, sure enough, cells that have bathed too long in insulin soon grow desensitized to the hormone's signals.
This desensitization more properly goes by the name insulin resistance. It means that much of that glucose in the blood winds up stuck outside the cells' doors "with essentially nowhere else to go," Skop says. This deprives the cells of valuable energy while causing a host of other problems down the line. "So it stands to reason," he continues, "that a poor diet high in starch and sugar calories will, over time, cause insulin resistance, which itself may lead to diabetes."
Overlapping Pathophysiologies
So, to recap: Excess sugar in the blood leads to insulin resistance, which leads to diabetes. And excess dietary energy (calories), which these days often means sugar, leads to overweight and obesity. Overweight and obesity themselves promote diabetes by making it harder for cells to pay attention to insulin's glucose-management cues. And now we're learning that simply having diabetes can make it harder for diabetics to manage their weight, as the insulin they take to treat the disease allows more glucose to enter the cells where, absent sufficient energy demand, it's stored as fat.
This complicated relationship doesn't surprise Mohammad Rafi, president of Bioactive American Corp. (Highland Park, NJ), who points out that "healthy blood sugar can help maintain normal body weight in healthy adults, and diabetic patients will often lose or gain weight based upon the antidiabetic drugs they're taking."
Further, he notes that both diabetes and obesity share an oxidative and inflammatory component in the sense that high blood sugar can promote inflammation. "There is a direct correlation to inflammation and issues that cause weight gain, such as blood sugar," he says. In the words of a recent study in The Lancet1, all these elements are part of an "overlapping pathophysiology" that underlies both excess weight and poor glucose control.
An Open Door
Granted, there's nothing inevitable about being overweight or diabetic, as both conditions are largely preventable, even reversible, through sensible diet and exercise. Pharmaceuticals have also stepped in to combat the diseases' progression, but we've already seen the unintended consequence insulin has on weight gain. Diabetes medications may present other drawbacks, as well, including their cost: the American Diabetes Association estimates that 18% of diabetes' $245 billion price tag goes toward prescriptions to treat its complications. As for pharmaceutical attempts at addressing obesity, they have an even poorer track record, with the safety-driven withdrawals of fenfluramine/phentermine (better known as fen-phen), sibutramine, and others still fresh in mind.
Does this open a door for supplementary interventions? Rafi thinks it may. "The supplement industry needs to be proactive, advocating for healthy lifestyles that include a good diet, exercise, and stress reduction," he says. But also "there is data that certain ingredients can prevent excess absorption of carbohydrates from foods to curb obesity, diabetes, and other chronic disorders."
Indeed, Skop says, "Action has been taken for quite some time in the research and formulation of products for healthy blood sugar support," products that could also encourage healthy weight loss. And though blood sugar management would be a new weight-control angle for the industry, it may be one whose time has come.
Building Blockers
"Thermogenic fat burners, satiety enhancers, and metabolism boosters have all worked to a certain extent in weight loss or weight management and have been accepted by consumers," says Xiaoming "Sandy" Chien, PhD, vice president, innovative products, HORN Nutraceuticals (La Mirada, CA). Blood sugar management may be "a less direct approach," she says, but "it addresses the bottom line of weight management: excess glucose being stored as fat."
Among the ingredients recognized for managing blood sugar are the South Asian botanical Gymnema sylvestre, banaba leaf extract (popular in the Philippines), bitter melon extract, cinnamon extract, and others. "Many of these products, such as the cinnamon extracts currently on the market, work with the beta cells in the pancreas to increase insulin production, and therefore help clear out blood glucose," Chien says. "Some of the products can also increase insulin sensitivity, which helps cells take up blood glucose."
Some of the ingredients attracting the most attention, both for their blood glucose and weight management potential, fall within the category of starch or carb blockers. Historically made from proteinaceous extracts of the white bean, Phaseolus vulgaris, they suppress the enzymes that break down starches and other polysaccharides into the simpler sugars that cells use for energy or store as fat.
More specifically, they target the enzymes alpha-amylase, which turns starches into oligosaccharides, and alpha-glucosidase, which breaks oligosaccharides further into monosaccharides like glucose. The theory is that by inhibiting the enzymatic release of these simple sugars, starch blockers make it harder to transform calories into pounds.
But there's more to it than just that.
"Technically," Chien says, "products that block starches actually delay their digestion and absorption, which allows the body fully to process the resulting glucose in a more sustained way, leaving little excess to be stored as fat." Even better, she
Battle Drill #1: Conduct Platoon Attack (7-3-D101)
TASK. Conduct Platoon Attack (7-3-D101).
CONDITIONS. An enemy squad has occupied defensive positions or is moving to the platoon front. The enemy has indirect fire and CAS capabilities. The platoon is attacking separately or as part of a larger unit. Plans, preparation, and movement to the objective have been accomplished. The platoon is directed to attack the enemy.
STANDARDS.
1. The platoon main body is not surprised or fixed by the enemy.
2. The platoon accomplishes its assigned task within the commander's intent. The platoon kills, captures, or forces the withdrawal of the enemy.
3. The platoon maintains a sufficient fighting force to defeat the enemy's counterattack and continue operations.
1. Action on Enemy Contact.
a. The platoon initiates contact. The platoon leader directs when and how his base of fire element will establish a base of fire. The element must be in position and briefed before it initiates contact. The base of fire squad leader (normally the weapons squad leader), upon the signal from the platoon leader, initiates contact with a high casualty-producing weapon. The squad marks the engagement area with ir illumination (METT-T dependent), while the squad leader uses his hand-held laser pointer and AN/PAQ-4 to designate enemy positions, crew-served weapons, and vehicles. Soldiers focus on the squad leader's laser as well as the team leader's tracers and AN/PAQ-4 to engage targets. If the platoon has not been detected, steps 1 and 2 consist of positioning the support element and identifying the enemy's positions.
b. If the enemy initiates contact, the platoon takes the following actions:
(1) The squad in contact reacts to contact (Battle Drill No. 2, React to Contact Platoon/Squad, 7-3/4-D103). It attempts to achieve suppressive fires with one fire team and maneuvers the other team to attack the enemy in the flank. The team providing suppressive fires marks its flanks by throwing ir chemlight bundles or ir flares and continues to use its AN/PVS-7B and AN/PAQ-4 to place well-aimed, accurate fires on the enemy. The squad employs M203 and hand-held ir smoke to screen the assaulting team's movement. The squad leader notifies the platoon leader of his actions.
(2) The platoon leader, his RTO, the platoon FO, the squad leader of the next squad, and one machine gun team move forward to link up with the squad leader of the squad in contact.
(3) The squad leader of the trail squad moves to the front of his lead fire team.
(4) The platoon sergeant moves forward with the second machine gun team and the weapons squad leader and links up with the platoon leader. If directed, he assumes control of the base of fire element and positions the machine guns to add suppressive fire against the enemy. The platoon sergeant uses his hand-held laser to designate the left and right limits of fires while the weapons squad leader uses the pointer to designate targets.
(5) The platoon leader assesses the situation. He follows the success of the squad's flank attack by leading the trail squads along the covered and concealed route taken by the assaulting fire team of the squad in contact. The base of fire element uses the AN/PVS-7B to monitor the movement of the assaulting element.
c. If the squad in contact cannot achieve suppressive fire, the squad leader reports to the platoon leader.
(1) The squad in contact establishes a base of fire.
(a) The squad leader deploys his squad to provide effective, sustained fires on the enemy position. The squad leader continues to designate targets using the hand-held laser pointer and AN/PAQ-4 while soldiers SEE through their AN/PVS-7B and place accurate fires on the enemy with the AN/PAQ-4.
(b) The squad leader reports his final position to the platoon leader.
(2) The remaining squad (not in contact) takes up covered and concealed positions in place and uses the AN/PVS-7B to observe the flanks and rear of the platoon.
(3) The platoon leader moves forward with his RTO, the platoon FO, the squad leader of the nearest squad, and one machine gun team.
2. Locate the Enemy.
a. The squad leader of the squad in contact reports the enemy size, location, and any other information to the platoon leader. The platoon leader completes the squad leader's assessment of the situation.
b. The squad continues to engage the enemy positions and mark the engagement area with ground ir flares, tracers, and AN/PAQ-4.
c. The platoon sergeant moves forward with the weapons squad leader and the second machine gun team and links up with the platoon leader.
3. Suppress the Enemy.
a. The platoon leader determines if the squad in contact can gain suppressive fire against the enemy, based on the volume and accuracy of the enemy's return fire. He SEEs through the AN/PVS-7B and makes the assessment by looking at the enemy's muzzle flashes and the strike of their rounds and tracers.
b. If YES, he directs the squad (with one or both machine guns) to continue suppressing the enemy:
(1) The squad in contact destroys or suppresses enemy weapons that are firing most effectively against it, normally crew-served weapons. The squad leader identifies the enemy crew-served by its muzzle flashes and rate of fire. He uses his hand-held laser pointer to designate priority targets for his squad.
(2) In addition, the squad in contact continues to place ir screening smoke (if enemy has NODs) to prevent the enemy from seeing the maneuver element.
c. If NO, the platoon leader deploys another squad and the machine gun team to suppress the enemy position. The second squad lead elements SEE the base of fire squad flank element's ir chemlights or flares through the AN/PVS-7B and links up either to the left or right flank of the base of fire squad as directed by the platoon leader. (The platoon leader may direct the platoon sergeant to position this squad and one or both of the machine gun teams in a better support-by-fire position.)
d. The platoon leader again determines if the platoon can gain suppressive fire over the enemy.
e. If YES, he continues to suppress the enemy with two squads and two machine guns.
(1) The platoon sergeant assumes control of the base-of-fire element (squad in contact, the machine gun teams, and any other squad designated by the platoon leader). He uses his hand-held laser pointer to designate sectors of fire for the squads.
(2) The machine gun team occupies a covered and concealed position and suppresses the enemy position. The gunners SEE through the AN/PVS-4 and identify the targets designated by the weapons squad leader's laser.
f. The platoon FO calls for and adjusts fires, based on the platoon leader's directions. (The platoon leader does not wait for indirect fires before continuing with his actions.)
g. If still NO, the platoon leader deploys the last squad to provide flank and rear security and guide the rest of the platoon forward as necessary, and reports the situation to the company commander. Normally, the platoon will become the base of fire element for the company and may deploy the last squad for suppressive fires. The platoon continues to suppress/fix the enemy with direct and indirect fire, and responds to orders from the company commander.
a. If the squad(s) in contact together with the machine gun can suppress the enemy, the platoon leader determines if the remaining squad(s) not in contact can maneuver. He makes the following assessment using his AN/PVS-7:
(1) Location of enemy positions and obstacles.
(2) Size of enemy force. (The number of enemy automatic weapons, presence of any vehicles, and employment of indirect fire are indicators of enemy strength.)
(3) Vulnerable flank.
(4) Covered and concealed flanking route to the enemy position.
b. If yes, the squad leader maneuvers the squad(s) into the assault:
(1) Once the platoon leader has ensured the base of fire squad is in position and providing suppressive fires, he leads the assaulting squad(s) to the assault position.
(2) Once in position, the platoon leader gives the prearranged signal for the base of fire squad to lift or shift direct fires to the opposite flank of the enemy position. The signal is normally FM or an ir signaling device. The assault squad leader identifies the targets (enemy positions) that have been designated by the support by fire squad leader through his AN/PVS-7B. Simultaneously, at the platoon leader's command for the support by fire squad to lift or shift, the assault squad leader uses his hand-held laser pointer to point out the targets. Team leaders use AN/PAQ-4 to control fires. The assault squads MUST pick up and maintain effective fire throughout the assault. Handover of responsibility for direct fires from the base of fire squad to the assault squad is critical to prevent fratricide.
(3) The platoon FO shifts indirect fires (including smoke) to suppress the enemy position.
(4) The assaulting squad(s) fight through enemy positions using fire and maneuver.
(5) The platoon leader controls the movement of his squads. He uses his hand-held laser pointer to assign specific objectives for each squad and designates the main effort or base maneuver element. (The base of fire squad must
The New York University (NYU) Medical School has revealed the acceptance rate for this year. In this article, MyStudyExtra provides additional details on the acceptance rate, requirements, MCAT score, GPA, and application deadline for the NYU Medical School.
The NYU School of Medicine is renowned for being a pioneer in fields like psychiatry, surgery, and forensic medicine. The university, which is located in the Langone Medical Center on Manhattan's East Side, offers dual-degree, M.D., and Ph.D. programs. It is also well-known for its programs that treat AIDS and drug addiction.
Students at NYU School of Medicine who have completed one semester of courses and are in good academic standing are encouraged to apply to the School of Medicine Honors Program to further their education.
About New York University (NYU) Medical School
The New York University (NYU) School of Medicine was established in 1841 as the University Medical College and is renowned for developing fields including pediatrics, forensic medicine, surgery, and psychiatry.
The institution, which is situated on the East Side of Manhattan's Langone Medical Center, provides dual-degree M.D. and Ph.D. programs in addition to being well-known for its AIDS and drug abuse treatment programs.
Every MD student is eligible for a ground-breaking full-tuition scholarship (valued at $56,272) as of 2018, regardless of their academic record or financial situation. Students are urged to apply for federal or institutional loans to help pay for extra living costs, the price of books, and other educational expenses.
The NYU School of Medicine has over 70 student groups, including organizations like Chamber Musicians in Medicine, Classical Arts Appreciation, and Physicians for Human Rights, which will please students looking for a deep and rich student experience.
Students are encouraged to start their clubs or extracurricular activities. A few national organizations, such as the American Medical Students Association, the American Medical Women's Association, and the NYU Physicians for a National Health Program, have branches at the university that students might choose to join.
Students at the NYU School of Medicine who have finished one semester of courses and are in excellent academic standing are encouraged to apply to the School of Medicine Honors Program to further their studies.
Through this program, students are paired with faculty mentors who collaborate with them while they create an honors research topic, prepare a thesis, and defend it. The Global Health Initiatives program at NYU Medical School sends students abroad to conduct research and take part in clinical education and public health activities.
NYU medical school Requirements
The NYU Grossman School of Medicine is pleased with how well-rounded and knowledgeable the medical students are on subjects outside the humanities and sciences. They think their diversity will help them be better able to address the most difficult problems in healthcare.
To be eligible, applicants must meet several criteria:
- Must be a U.S. citizen, permanent resident, or have Deferred Action for Childhood Arrivals (DACA) status.
- Must have a bachelor's degree from an accredited college or university in the United States or Canada.
- Applications are also accepted from international students who have a bachelor's degree from an accredited college or university in the United States or Canada.
- The Medical College Admission Test® (MCAT®) is required for all applicants.
NYU Medical School Courses
The following are the courses that NYU requires applicants to have completed:
- Inorganic chemistry.
- Organic chemistry.
- Social sciences.
- College-level English and mathematics.
NYU Med School Admission Process
With an acceptance rate of just 2.2%, NYU Grossman is one of the most competitive medical schools in the country. Consider the statistics for the class of 2025:
- Applications: 9,635
- Interviews: 820
- Matriculants: 108
- Median MCAT score: 522.
- MCAT range: 512–527
- Median GPA: 3.96
- GPA range: 3.64–4.0
NYU medical school Requirements, tuition, International students
How difficult is it to get into NYU Medical School?
Admission to NYU Medical School is competitive. They have an admissions rate of 2.5 percent, which puts them exactly between Brown and Washington. Out of 9,243 applications, 999 potential students were interviewed, and 102 were accepted.
Is NYU Medical School free?
The NYU Medical School is indeed free. The choice to provide free tuition "recognizes a moral obligation that must be addressed, as institutions lay an increasing debt load on young people who wish to become physicians," according to Robert Grossman, dean of NYU Health.
Is NYU medical school free for international students?
The School of Medicine at New York University (NYU) has declared that all previous and current students would no longer be charged tuition, regardless of academic standing or financial need. With full tuition scholarships available to all students, including International students, NYU is the only top-ranked US medical school to do so.
NYU medical school GPA and NYU medical school MCAT score
Students accepted to NYU Medical School had a median undergraduate GPA of 3.96 (with a range of 3.47–4.0) and a median MCAT score of 522 (with a range of 510–527).
NYU Med School Tuition & Financial Aid
No matter their academic standing or financial situation, all students at the NYU School of Medicine are eligible for a full scholarship. This equates to a $56,272 annual award for each student. Students are required by the college to have health insurance, as well as to cover additional costs and living expenses.
NYU Medical School Curriculum
The USMLE Step 1 exam is taken by students after their clerkship year, except for MD/Ph.D. candidates who take it before starting their Ph.D. studies.
Additionally, the curriculum includes PLACE (Patient-Based Longitudinal Ambulatory Care Experience) and NYU3T (a collaboration program with the New York University College of Nursing).
The Grossman School of Medicine at New York University also has 5-year dual degree programs, some of which can be completed in as little as four years:
- Biomedical Informatics MD/MS.
- Translational Research MD/MS.
- General Management MD/MBA (with the New York University Stern School of Business).
- Health Policy and Management MD/MPA.
- Global Health MD/MPH.
The 3-year program is only open to students who have been accepted into the 4-year stream. A residency spot at NYU Langone Health in the chosen specialty is guaranteed to students enrolled in the three-year program.
They finish preclinical training at the same time as four-year students, but they start clinical rotations six weeks earlier and complete a summer fellowship in the department of their choice after their first year.
NYU Medical School MD Programs
NYU breaks down its main four-year MD program into four parts:
Pre-clerkship curriculum: This occurs during the first year and a half of your education.
The clerkship: During the second half of the second year and the first half of the third, students participate in four rotations for twelve weeks each.
Individualized exploration: During the second half of the third year, students take time to study for the U.S. Medical Licensure Exam and take elective courses or begin a concentration.
Career preparation: Students complete a second clerkship and sub-internships in the fourth and final year.
Additionally, the Grossman School of Medicine provides a three-year accelerated MD degree. Students can cut their living expenses and other costs by shortening their time in medical school. This program compresses career preparation and individualized exploration, and it requires additional summer commitments.
NYU med school program completion timeline
NYU also offers other dual degree MD programs, including:
MD/Masters of Public Administration in Health Policy and Administration: This program lasts five years, with a break in the usual MD program in the fourth year for the MPA curriculum. Students complete a capstone project in the final year, preparing them for careers at consulting firms, government agencies, nonprofits, and more.
MD/Masters of Public Health in Global Health: An MPH in Global Health prepares students for careers in epidemiology, community health, policy management, and more. Students can apply for this program until their junior year.
MD/Masters of Science in Translational Research: This degree helps you prepare for research-related careers and their application to medical therapies. The program lasts five years, with the fourth year focusing on the MS.
MD/Masters of Arts in Bioethics: A degree in Bioethics can help you work as a bioethicist in hospitals, medical schools, and labs. The program lasts five years, with the fourth year focused on Bioethics.
MD/MBA in General Management: You enroll in NYU's Stern School of Business during your fourth year. There, you take classes to prepare you for careers in business ownership, healthcare management, and pharmaceutical consulting, among others.
NOTE: A dual degree program may be right for you if you seek interdisciplinary medical education.
Is NYU Med School Tuition Free?
The NYU program will begin immediately and pay $55,000 in annual tuition for 443 current students. However, students will be responsible for paying for their lodging, meals, and other costs, which add up to about $27,000 a year.
Bachelor's Degree and GPA
A bachelor's degree from an accredited college or university in the United States or Canada is required of candidates for admission to NYU Grossman School of Medicine. Students in our most recent incoming class had average undergraduate GPAs of 3.96 and 3.92.
Letters of Evaluation
Your application to NYU Grossman must include
Science, technology and communication have changed dramatically since Barrie Tornado
Cellphone alerts and on-the-ground reports on social media can now get the word out early when there's severe weather
A wall cloud forced down torrents of rain, hail fell in spots, and there were reports of a weak mesocyclone moving northeasterly across southern Ontario last Sunday.
It followed a path similar to a series of deadly tornadoes known as the Barrie Tornado just a week shy of its 35th anniversary.
This time, though, advanced technology and science allowed severe weather experts to issue a tornado warning.
With advance notice, many eyes turned to the skies and storm chasers raced out in search of the possibility of that meteorological wonder where unstable air in a low-pressure area rises rapidly, creating a column of rotating air that, when touching the ground, can rip apart anything in its way.
A Twitter storm soon erupted, resulting in some stunning photos and videos of the unique cloud formations. Any twisters that did develop last Sunday, however, remained aloft and the only damage was caused by rain and flooding.
MAY 31, 1985
Thirty-five years ago today, in 1985, there was no social media allowing strangers to share information from different geographic locations.
There wasn't even much advance warning as an unrelenting storm moved north from the United States, going northeasterly though Ontario where it created a series of tornadoes and separate suction vortices spinning out from the main twisters as they skipped across southern Ontario, ripping through small towns and rural areas and sometimes jumping aloft and then hitting the surface again.
By about 5 p.m. that Friday afternoon, four people had already been killed and the centre of Grand Valley was a mess. A tornado ensconced in rain then took a firm hold of the ground, working its way into Barrie from the southwest, cutting a destructive path across the city and taking another eight lives.
"We received no reports of any damage (or) any big hail. We really did not have any real confirmation of what was going on on the ground," recalls Mike Leduc, who was heading up Environment Canada's severe weather desk that day.
"I think the first reports that we got that something was really bad was from Barrie at 5 o'clock. Before that, we didn't know what was going on on the ground," even though Grand Valley had been severely hit an hour earlier.
"We didn't know how bad it was."
A weather watcher network would be developed soon after the Barrie Tornado, along with groups of storm watchers.
In 1985, the best officials could hope for was hearing from people on the ground experiencing the storms.
But there were no calls.
The scientific knowledge and the technological tools were also limited back then. There was no real ability to discern a typical severe thunderstorm situation from a tornado. And there was no way to see the potential for a major outbreak.
Forecasters now have the tools and knowledge to see the development of wind shear and how the winds change with height in the atmosphere, something that was only starting to develop in the 1980s.
Even just a year after Barrie, with some updated techniques, forecasters were able to issue a tornado watch a little farther west in Minden.
But in 1985, the best forecasters could do was issue severe thunderstorm watches for Barrie. That morning, there had already been some heavier storms with wind, rain and some hail. At about 1 p.m., the main line came in.
"Watches for severe thunderstorms that included Barrie were in effect even in midday that day. So the storms got going at about one o'clock and it was pretty obvious right away that they were severe thunderstorms, so there were warnings issued for severe thunderstorms at one o'clock for the Bruce Peninsula and eventually moving east," says Leduc.
"Barrie probably happened right at the end of a period when we were not very good at forecasting tornadoes at all. Unless we had a report, we didn't really know when there were tornadoes on the ground."
One year earlier, a Doppler radar was installed in King City, but it hadn't yet been fully implemented to be of use on May 31, 1985. A year earlier, Leduc also started to see research on wind shear, when the wind changes with height, which is now a regular consideration in severe weather watching.
"Doppler radar is used extensively now to detect thunderstorms that may produce a tornado because it detects movement of the rain," explains Dave Sills, executive director of the Northern Tornadoes Project at Western University in London, whose goal is to document all tornadic activity in Canada, even where there is no population base.
"Back then, it was thought that as long you knew the storm was rotating, there was a good chance that it was also producing a tornado," Sills adds. "Now that we have more experience with Doppler radar, we know there are quite a few storms that rotate but that do not produce tornadoes."
While the Great Lakes tend to suppress thunderstorms and tornadoes, that strip of land between them extending from Windsor to Barrie creates a corridor for more focused severe weather, allowing for tornadoes.
"A lot of new things came in because of the tornadoes in Barrie and then in Edmonton a couple of years later," says Leduc.
Those tragic tornadoes provided the impetus to get better weather radio, develop the storm watchers' network and then getting Doppler radar set up across the country, he adds.
If a storm is rotating, weather experts can determine the velocity pattern through Doppler radar and can infer through the type of air movement detected if there's rotation with the storm.
Satellite images have also been vastly improved, partly thanks to weather satellites launched by the U.S. National Aeronautics and Space Administration and the National Oceanic and Atmospheric Administration.
Previously, low-resolution satellite data would come in every half hour and were printed to create some basic animation of the movement of the air.
They now have far higher resolution with new images coming in more frequently and available in seconds on the computer screen, helping the forecaster to determine how the storms occur and how they will behave in the next hour or two.
Weather balloons, adds Sills, are still launched twice daily and are an important source of data for computer models. Weather modelling has historically used some of the largest computers to calculate formulas, allowing weather experts to see the ingredients of a storm come together to develop a picture of what is anticipated in coming minutes, hours and days based on the known science of how air will move.
New dual-polarization radar in King City also shows different shapes which allow experts to tell the difference between raindrop, versus hail, versus snow and not just precipitation, says Environment Canada's Gerald Cheng, the warning preparedness meteorologist for southern Ontario.
And he points to cellphone technology, which allows automatic notification to people who may be affected.
"The dissemination process, that is a big change compared to what we had 30 years ago," he said.
Canada's emergency Alert Ready System disseminates warnings through cellphones in a specific geographic area; it has been most frequently used for Amber Alerts. As long as the phone is compliant with the system, the warnings cannot be bypassed. That was used in the fall of 2018 to warn residents in the Gatineau-Ottawa area of a tornado.
There are also several apps available designed to send weather notifications.
And social media is replacing those older networks in the hierarchy of communications tools.
"Social media has been a game changer, because everybody can share their experience of weather. And now we have pictures and videos instantaneously posted on social media," Cheng says.
Cheng and his colleagues at the Environment and Climate Change Canada's weather offices sift through the noise to monitor on-the-ground reports identified through _TAG_ about specific weather events.
That real-time information, often including photos, videos and descriptions, can serve as the eyes and ears in relaying what's happening under the storm, beyond what radar allows.
"That really increases the number of sets of eyes across the province," he says. "In the past, it was phone conversation. And you can imagine it was very labour intensive" to answer every call. "Now we just look up at these different tweets and we get a sense of what is out there."
Weather specialists do carefully sort through that intelligence, ensuring anything of interest is substantiated through their own data sources. There have been occasions where old photos were posted in new conversation streams, for instance.
Sills says the science now exists to provide 10 to 15 minutes warning for events like the rotating super cell storms that created catastrophic damage on parts of Barrie and claimed a total of 12 lives in central Ontario 35 years ago.
But there are still hybrids or activity that aren't as obvious on radar and continue to be difficult to detect.
"Sometimes you get no lead time on those," he says. "All you do is hear the call come in and have a bad feeling that you missed it.
"It depends on the kind of storm."
About the Author: Marg. Bruineman, Local Journalism Initiative
Marg. Bruineman is an award-winning journalist covering justice issues and human interest stories for BarrieToday.
0 seconds. The second moving average computed is a long-term energy average which uses a window duration of about 4 seconds. These two moving averages are similar in magnitude when the variations in the audio signal are small and deviate from one another when the variance of the signal energy increases. If the ratio of the short-term energy divided by the long-term energy is greater than about 2, then speech signals are considered present. This is representative of a 6 dB gain in the short-term energy over a floating background energy. This method is referred to herein as ST/LT (short-term/long-term) or sophisticated classification.
ST/LT provides an instantaneous classification of the received signals. Once this ratio drops below the value 2, the classifier declares the signal frames as non-speech. In this way it provides a very 'raw' classification of audio packets. More sophisticated methods can be added to this approach starting with zero crossings analysis and moving up in complexity to pitch detection and unvoiced speech discrimination methods. These additions can reduce classification errors during continuous audio reception, but are not sufficient when lost packets are occurring due to transport or remote endpoint characteristics. It will be appreciated that the present invention is not limited to the exact time and ratio values described. Using the present disclosure, other values for ST/LT can be developed without departing from the present invention. FIG. 3 provides an illustration of three example audio energy contours superimposed on a sample audio signal. The short term energy contour and the long term energy contour are illustrated.
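As a rough illustration of the ST/LT comparison described above, the following sketch implements a two-moving-average classifier in Python. The frame length, the exponential-smoothing implementation of the two windows, and all identifiers are assumptions made for illustration; only the roughly 4-second long-term window and the ratio threshold of about 2 (a 6 dB gain) come from the description above.

```python
# Minimal sketch of the ST/LT (short-term/long-term) energy classifier.
# Frame size, smoothing constants, and names are illustrative assumptions;
# the ratio threshold of 2 (about 6 dB) follows the description above.

def frame_energy(samples):
    """Mean squared amplitude of one audio frame."""
    return sum(s * s for s in samples) / max(len(samples), 1)

class StLtClassifier:
    def __init__(self, frame_sec=0.02, short_sec=0.25, long_sec=4.0, ratio=2.0):
        # One-pole moving averages whose time constants approximate the
        # short (sub-second) and long (~4 second) energy windows.
        self.a_short = frame_sec / short_sec
        self.a_long = frame_sec / long_sec
        self.short_e = None
        self.long_e = None
        self.ratio = ratio

    def classify(self, frame):
        e = frame_energy(frame)
        if self.short_e is None:          # seed both averages on the first frame
            self.short_e = self.long_e = e
        self.short_e += self.a_short * (e - self.short_e)
        self.long_e += self.a_long * (e - self.long_e)
        # Speech is declared while short-term energy exceeds roughly twice the
        # long-term (background) energy; otherwise the frame is non-speech.
        return "SPEECH" if self.short_e > self.ratio * self.long_e else "NON-SPEECH"
```

Feeding successive frames of samples to classify() yields an instantaneous SPEECH/NON-SPEECH decision per frame, mirroring the "raw" classification described above.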
Speech is classified by the VAD through a comparison of the short term energy to a short term energy minimum tracked over an approximately 24-second period. A minimum observed short-term energy is latched once per second, and the minimum value is maintained for the entire 24 second sliding window. Outliers, or extraneous data points, are discarded by a single pole smoothing filter. The process of acquiring an energy minimum is as follows:
1SecLatchedMinimum = 1SecLatchedMinimum, where 1SecLatchedMinimum ≤ ShortTermEnergy;
1SecLatchedMinimum = ShortTermEnergy, where 1SecLatchedMinimum > ShortTermEnergy; and
24SecSmoothedMinimum = 24SecSmoothedMinimum * β + 1SecLatchedMinimum * (1 − β),
Where β is chosen as a function of the short-term window duration. In one embodiment this variable is approximately 0.98. This process maintains a smoothed minimum energy over the last 24 seconds of audio.
If the current short-term energy divided by the short-term minimum is greater than about 2.8 (9 dB) then it is determined that the packet contains speech. Otherwise, the packet is considered non-speech. This method is referred to herein as a Minimum Energy classification, or 'simple classifier' classification. Like ST/LT, the simple classifier provides an instantaneous decision without any onset and decay considerations.
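A compact sketch of this Minimum Energy classifier follows. The once-per-second latching, the single-pole smoothing with β ≈ 0.98, and the 2.8 (about 9 dB) threshold come from the passage above; the frame rate, the behavior before a smoothed minimum exists, and all identifiers are assumptions made for illustration.

```python
# Sketch of the Minimum Energy ("simple") classifier described above.
# The 1-second latching, beta = 0.98 smoothing, and 2.8 (~9 dB) threshold
# follow the text; frame timing and structure are assumptions.

class MinimumEnergyClassifier:
    def __init__(self, beta=0.98, threshold=2.8, frames_per_sec=50):
        self.beta = beta
        self.threshold = threshold
        self.frames_per_sec = frames_per_sec
        self.frame_count = 0
        self.latched_min = float("inf")   # minimum observed in the current second
        self.smoothed_min = None          # ~24-second smoothed minimum

    def classify(self, short_term_energy):
        # Track the minimum short-term energy within the current one-second block.
        self.latched_min = min(self.latched_min, short_term_energy)
        self.frame_count += 1
        if self.frame_count >= self.frames_per_sec:
            # Once per second, fold the latched minimum into the smoothed minimum
            # (the single-pole filter that discards outliers).
            if self.smoothed_min is None:
                self.smoothed_min = self.latched_min
            else:
                self.smoothed_min = (self.smoothed_min * self.beta
                                     + self.latched_min * (1.0 - self.beta))
            self.latched_min = float("inf")
            self.frame_count = 0
        floor = self.smoothed_min if self.smoothed_min is not None else short_term_energy
        return "SPEECH" if short_term_energy > self.threshold * floor else "NON-SPEECH"
```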
Reference is now made to the packet classification parser 208 of FIG. 2. In general, the packet classification parser extracts a remote speech/noise classification from each packet, if it is present. The packet classification parser also provides an output which indicates that the received packet is either SPEECH, SILENCE, or UNKNOWN (if no classification information exists in the packet).
The packet classification parser simply tallies the occurrences of Silent Packet information being provided from the remote endpoint. This is a somewhat minor task and is broken out herein as a separate process for modularity. Often, but not always, remote endpoints provide an indication that they have detected silence and will be stopping the transmission of audio until they detect the onset of new speech. This indication is usually contained in external packet header information. The parser tallies the number of times this information indicates Silence over a predetermined time, for example the last 12 seconds, excluding the current 0.500 seconds. This is referred to as a Silence Detection Sum (SD Sum) and is used by the packet classifier in conjunction with audio density characteristics to better determine the true classification, as described below.
Also, for each connection with a remote endpoint, any single observation of a Silence Classification is latched to assist in the general operation of the audio classifier. If the remote endpoint has transmitted a silence indication during the current connection, this indicator is set to TRUE. Otherwise, the indicator remains FALSE.
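The parser's bookkeeping is simple enough to sketch directly. In the snippet below, the packet structure, the exact treatment of the 12-second window boundaries, and the names are assumptions; only the idea of tallying remote Silence indications over roughly the last 12 seconds (excluding the current half second) and latching whether silence was ever signaled comes from the description above.

```python
# Sketch of the packet classification parser's tallying, assuming each packet
# carries an arrival time and an optional remote classification field.

from collections import deque

class PacketClassificationParser:
    def __init__(self, window_sec=12.0, exclude_sec=0.5):
        self.window_sec = window_sec
        self.exclude_sec = exclude_sec
        self.silence_times = deque()      # arrival times of packets flagged SILENCE
        self.silence_ever_seen = False    # latched for the life of the connection

    def parse(self, arrival_time, remote_classification=None):
        """Return SPEECH, SILENCE, or UNKNOWN for one received packet."""
        if remote_classification == "SILENCE":
            self.silence_times.append(arrival_time)
            self.silence_ever_seen = True
        return remote_classification if remote_classification else "UNKNOWN"

    def sd_sum(self, now):
        """Silence Detection Sum: silence indications over the last 12 seconds,
        excluding the (still accumulating) most recent 0.5 seconds."""
        newest = now - self.exclude_sec
        oldest = now - self.window_sec
        while self.silence_times and self.silence_times[0] < oldest:
            self.silence_times.popleft()
        return sum(1 for t in self.silence_times if t <= newest)
```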
The audio density module 206 provides a measurement of received audio density, as explained in greater detail below. The audio density, or the amount of audio data received in a given period of time, is measured by monitoring when each audio packet is received and incorporating the receipt of the packet into a numerical value which indicates a level of continuousness of streaming. For example, a higher density figure indicates that streaming is more continuous, and a lower figure indicates that the streaming is more bursty. Both short-term and long-term density measurements can be taken, as explained above.
The short-term density measurement uses a short time window in which the ratio of the duration of audio received relative to the total window time is calculated as a percentage. The duration of audio received is equivalent to the playback time span of the audio packet. The resultant figure indicates the duty cycle of audio during the short window. The long-term density is measured in the same fashion, except that the fixed window is on the order of 10 times longer than the short window. The combination of these two values determines the audio density. The short-term density describes the distribution of delivery, while the long-term density describes the overall average density. Patterns of density behavior can be examined to determine whether any burstiness in the audio streaming may be caused by network problems, or by the remote transmitter/receiver dropping non-speech audio packets. Both the voice activity detector and the packet classifier use the audio density measurements to perform their tasks.
The audio density measurement provides a rough indication of the arrival characteristics of the audio packets. A histogram is provided for the audio playback 'duration' of all packets arriving over the past 12.5 seconds. FIG. 4 illustrates one embodiment of a histogram. The histogram is established by each packet's time-of-arrivals (TOA) into the system. The time resolution is about 0.500 seconds, thus creating 25 bins of 0.500 second duration. Packets arriving into the system are 'stamped' with a local system time (this is their TOA). Their audio playback 'duration' is summed into the appropriate bin in the histogram.
It is important to note that the histogram is a sliding 12.5 seconds window. New TOA bins are created on the right-hand side of the histogram as the system time progresses from 'now' into 'infinity', while bins are dropped on the left-hand side of the histogram as they become 'older' than 'now minus 12.5 seconds'. Because this is being presented as an event-driven process and not a schedule-driven process, new packet arrivals do not occur at regular time intervals. They arrive into the system based on particular characteristics of the remote endpoint and communications link. This behavior makes the sliding window 'jump' and 'pause' as packets arrive at random TOAs.
One example arises when a packet arrives after 13 seconds of no packet arrivals. In this case a new 0.500 second bin for the new packet is 'created' and the bins for the previous 12 seconds of time are set to zero. All audio histograms older than the 12.5 seconds are thus dropped. The other extreme occurs when a packet arrives in less than 0.500 seconds after the last packet. In this case the previous packet TOA has already been used to create a new 0.500 second bin. The previous packets 'duration' has already been added to that bin. When the new packet arrives (for example 0.100 seconds later) its audio duration time is added to the previously created bin. In this way all 0.500 seconds audio bursts are summed into one bin, then the histogram moves to the next bin. This is segmented on 0.500 seconds boundaries of the system timer. This means that in the above example, if the second packet arrives 0.100 seconds after the previous packet, but the system timer has moved from 2.450 seconds to 2.550 seconds, then the second packet's playback duration is summed into a new bin.
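The following sketch shows one way the sliding 12.5-second histogram could be kept: 25 half-second bins keyed to system-timer boundaries, so that packets arriving within the same 0.5-second segment are summed into one bin and bins older than the window slide off. The dictionary representation and method names are assumptions for illustration.

```python
# Sketch of the sliding time-of-arrival histogram: 25 bins of 0.5 s covering
# the last 12.5 seconds, with each packet's playback duration summed into the
# bin for the 0.5 s boundary of its arrival time. Data structures are assumed.

import math

class ArrivalHistogram:
    BIN_SEC = 0.5
    NUM_BINS = 25                       # 25 * 0.5 s = 12.5 s sliding window

    def __init__(self):
        self.bins = {}                  # bin index (int) -> summed playback duration

    def add_packet(self, arrival_time, playback_duration):
        # Packets arriving within the same 0.5 s system-timer segment share a bin.
        index = math.floor(arrival_time / self.BIN_SEC)
        self.bins[index] = self.bins.get(index, 0.0) + playback_duration
        self._drop_old(index)

    def _drop_old(self, newest_index):
        # Bins older than the 12.5 s window are dropped from the left-hand side.
        oldest_allowed = newest_index - (self.NUM_BINS - 1)
        for idx in [i for i in self.bins if i < oldest_allowed]:
            del self.bins[idx]

    def window(self, now):
        """Return the last 24 full bins (12 s), excluding the current 0.5 s bin."""
        current = math.floor(now / self.BIN_SEC)
        return [self.bins.get(current - 1 - k, 0.0) for k in range(self.NUM_BINS - 1)][::-1]
```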
After creation of the sliding window histogram, the audio density measurement updates its running statistics by interrogating (but not interpreting) the past 12 seconds of audio arrival. It excludes the current 0.500 seconds because this data is still being acquired, and passes measured values to the packet classifier for interpretation. The measurements are:
Density = (1/D) * Σ[n=0..N−1] Bin(n), where N is the number of bins and D is the total bin duration (12 sec).
Standard Deviation = sqrt( (1/N) * Σ[n=0..N−1] (Bin(n) − Avg(Bin(n)))² ), where N is the number of bins (24) and Bin(n) is the individual bin summation.
MaxGap=Maximum consecutive bins with zero sums * Bin Duration (0.500 seconds).
MaxBin=Maximum bin sum plus greatest adjacent bin sum.
SumGap=Number of gaps exceeding 0.250 seconds in the last 12 seconds.
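The five measurements above can be computed directly from the 24 half-second bins of the excluded window. The sketch below follows the formulas as reconstructed above; the interpretation of SumGap (bins whose idle time exceeds 0.25 seconds) is an assumption, since the text is terse on that point, and all names are illustrative.

```python
# Sketch of the running statistics computed over the 24 half-second bins
# (the last 12 seconds, excluding the current 0.5 s). Names are assumptions.

import math

def density_statistics(bins, bin_sec=0.5, gap_sec=0.25):
    n = len(bins)                       # normally 24 bins
    total_duration = n * bin_sec        # normally 12 seconds
    density = sum(bins) / total_duration
    avg = sum(bins) / n
    std_dev = math.sqrt(sum((b - avg) ** 2 for b in bins) / n)

    # MaxGap: longest run of consecutive empty bins, expressed in seconds.
    longest = run = 0
    for b in bins:
        run = run + 1 if b == 0 else 0
        longest = max(longest, run)
    max_gap = longest * bin_sec

    # MaxBin: largest bin sum plus its greatest adjacent bin sum.
    max_bin = 0.0
    for i, b in enumerate(bins):
        neighbors = [bins[j] for j in (i - 1, i + 1) if 0 <= j < n]
        max_bin = max(max_bin, b + (max(neighbors) if neighbors else 0.0))

    # SumGap: bins whose idle time exceeds 0.25 s (one reading of the "gaps
    # exceeding 0.250 seconds" measurement; the source text is terse here).
    sum_gap = sum(1 for b in bins if (bin_sec - min(b, bin_sec)) > gap_sec)

    return {"Density": density, "StdDev": std_dev, "MaxGap": max_gap,
            "MaxBin": max_bin, "SumGap": sum_gap}
```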
A flow chart 300 of the audio density operation is illustrated in FIG. 5. After new data has been received at 302, the TOA is set as the current system time at 304. The
Causes Of Sudden Loss Of Vision in India: Diagnosis And Treatment
Can COVID-19 lead to vision loss?
While several factors can be held responsible for a sudden vision loss, there is not much clinical evidence demonstrating whether COVID-19 can cause sudden blindness.
Symptoms of COVID-19 vary from person to person. Some people might be asymptomatic, while others can show mild symptoms of COVID-19, such as
- dry cough
Difficulty in breathing or shortness of breath and loss of taste or smell are the prominent symptoms of COVID-19 in the second wave that many people are battling. Also, cases of a mysterious infection, "black fungus" (medically referred to as "mucormycosis"), affecting COVID-19 patients are rising across states in India and causing many more complications. This infection has been seen to primarily affect diabetic patients as a result of the excessive steroid medication used to combat COVID-19. In severe cases, doctors have had to operate on the jaw bone and eyes of mucormycosis patients in order to prevent the fungus from spreading to the brain.
Early detection has always helped prevent serious complications; therefore, it is advised to seek medical care at the earliest if you are facing any of the above-mentioned symptoms, to prevent further infections from arising.
Sudden blindness refers to a medical condition in which a person suddenly experiences difficulty in seeing. As the condition develops, people may start experiencing blurry vision or sudden loss of peripheral or central vision.
People often confuse sudden blindness in both eyes with complete blindness, but the two conditions are very different from each other. However, if the issue is not addressed in time, sudden blindness may progress, leading to a complete loss of vision.
Sudden blindness may persist for varying durations. For some affected individuals, sudden blindness may ease within a few seconds; for others, it may be hours before their eyesight returns.
The process of vision involves light passing into the eye and then the retina transmitting the image to the brain in the form of electrical impulses. Problems in any part of the process may lead to the onset of vision loss. Certain medical conditions such as migraines, retinal vasospasm, closed-angle glaucoma may also lead to a sudden loss of vision in one or both eyes.
Q1. What is sudden blindness?
Sudden blindness refers to a condition characterized by blurry vision or loss of peripheral or central vision.
Q2. What are the causes of sudden blindness?
Several factors can be held responsible for the development of sudden blindness. However, the root causes of sudden loss of vision may include:
- External light sources do not reach your retina.
- The retina is unable to sense light.
- The brain is unable to interpret the impulses sent from the retina
Q3. What are the characteristics of sudden vision loss?
Some of the main characteristics of sudden vision loss include:
- Sudden blindness in one or both eyes.
- Temporary loss of peripheral vision in one eye / both eyes
- Temporary loss of central vision in one or both eyes.
- Sudden onset of blurry vision.
Q4. Is sudden sight loss temporary or permanent?
Sudden vision loss can persist for seconds in some individuals, whereas in others, it can last for hours! However, if not treated on time, this condition may progress into complete blindness.
Q5. How is sudden blindness different from complete blindness?
Sudden blindness in both eyes is often associated with blurry vision or partial vision loss in one or both eyes. However, complete blindness, on the other hand, means that you lose your ability of sight indefinitely. Seeking treatment immediately can help restore your vision in both scenarios.
Q6. What are the symptoms of sudden blindness in adults?
Common symptoms of sudden blindness in adults are:
- Inability to differentiate between shapes in surroundings.
- Tunnel vision, a condition where peripheral vision is hampered while the central vision is fine.
- Cloudy vision
- Discomfort in the eye, accompanied by any kind of discharge.
Q7. What is an eye stroke?
An eye stroke, medically known as anterior ischemic optic neuropathy, is a dangerous condition that may occur as a result of reduced blood flow to the tissues in front of the optic nerve.
Sudden blindness may be caused due to several external and internal factors. External factors impacting the vision may include issues with the image formation in the retina and its transmission to the brain. Internal factors may include underlying conditions such as migraines, retinal vasospasm, blood clots and more.
Book an online consultation with an Ophthalmologist on Mfine, if you have sudden loss of vision!
There are several causes of sudden loss of vision. Certain conditions can cause either sudden partial blindness (in which you can detect light and shapes) or sudden complete blindness (when you cannot detect light and cannot see anything). Many of these factors are temporary and can be resolved through timely medical intervention. Some conditions, however, are more serious and are potentially irreversible.
Causes of Sudden Loss of Vision: Partial Blindness
There are several factors that trigger the onset of sudden vision loss in one or both eyes. Here are some of the causes of sudden loss of vision:
- Migraines (reversible) – These throbbing headaches have time and again been reported to cause acute painful vision loss in one or both eyes. They are one of the major causes of temporary blindness in both eyes. The loss of vision caused by migraines is characterized by sensing flashing lights or seeing blind spots.
- Retinal Migraine (reversible)- A type of migraine called retinal migraine is sometimes associated with the onset of sudden blindness in one eye. This is called an 'aura' and usually affects only one eye. This also causes acute painful vision loss. Once it occurs, the affected individual may lose sight in one of their eyes for about 20 minutes. This can occur right before or even during a migraine.
- Closed-Angle Glaucoma (reversible)- Yet another leading cause of sudden blindness/ temporary blindness in both eyes is closed-angle glaucoma. This condition is mostly associated with loss of vision in just one eye due to excessive pressure caused by a bulging iris. The pressure from the iris prevents drainage of the eye fluid, which, in turn, can lead to a host of different problems like pain in the eye and loss of vision. Permanent vision loss is a possibility if closed-angle glaucoma isn't diagnosed and treated on time.
- Retinal Vasospasm (reversible)- This condition negatively impacts the flow of blood to the retina. As a result, it may cause a temporary partial loss of vision. Retinal vasospasm causes the blood vessels in your retina to tighten.
- Severe Preeclampsia (reversible) – Sudden vision loss or changes in the vision occur in very serious cases of preeclampsia. Sudden loss of vision can be a sign of something more serious like swelling of the brain or an issue with the central nervous system.
If you experience sudden vision loss during your pregnancy (especially if you have already been diagnosed with preeclampsia), make sure you get medical help immediately.
- Giant Cell Arteritis (potentially irreversible)- Another common cause of temporary vision loss is giant cell arteritis. This condition is a common cause of loss of vision in adults over 50 years of age. If this condition is left untreated, it is known to cause permanent or chronic sudden blindness.
- Retinal Detachment (potentially irreversible) – Retinal detachment is a condition in which the retina moves away from its normal position. When this happens, blood flow to the retinal tissues is blocked, thus, if the condition isn't treated immediately, it can result in permanent blindness. There are a number of causes of retinal detachment. In diabetic patients, the creation of scar tissue can cause the retina to detach. In some cases, fluid accumulation due to injury can lead to retinal detachment.
Causes of Sudden Loss of Vision: Complete Blindness
- Vitreous Haemorrhage (reversible) – Sudden blindness can also be caused by vitreous hemorrhage where blood leakage occurs. This leakage blocks light and doesn't allow it to enter the eye, making it difficult for the affected individual to see anything.
- Retinal Vein Occlusion (potentially irreversible) – When the flow of blood to the vein is blocked due to a blood clot or any other factor, it is known as retinal vein occlusion. The blockage of blood supply to the retina can starve the tissues of oxygen and nutrients and is a very serious condition. If left untreated for too long, it can result in permanent blindness.
The conditions mentioned above were some of the most common causes of sudden vision loss. Let us now have a look at some rare causes that may contribute to the onset of sudden blindness. They include:
- Epileptic Seizures (reversible) – While epileptic seizures are commonly associated with physical convulsions, you may be surprised to know that they can also contribute to sudden blindness in rare cases. About 10% of people suffering from this condition can have their occipital lobe impacted. This impact can cause loss of vision during or after the seizure.
- Uhthoff Phenomenon (reversible) – This is another rare cause of sudden vision loss. It often affects those who have multiple sclerosis. Some individuals may experience an increase in their body temperature due to multiple sclerosis
| 2.375334
|
m-a-p/FineFineWeb
|
the inner tube to expand and conform to the inside surface and diameter of the outer tube, with a continuous helical protrusion forming which mates with the internal helical groove of the outer tube, but not entirely filling the internal groove. A helical passageway between the inner and outer tubes is thereby formed. Preferably the inner and outer tubes are made of copper. Also preferably the inner and outer tubes, after being combined and before the rolling step, are annealed in a furnace. The production process provides reduced cost of manufacture, improved heat transfer and the safety feature required by the various state and local codes.
Description
This is a division of application Ser. No. 200,598, filed Oct. 24, 1980 and now U.S. Pat. No. 4,337,824 issued July 6, 1982.
BACKGROUND OF THIS INVENTION
1. Field of This Invention
This invention relates to double wall heat exchangers and methods of preparing such double wall heat exchangers.
2. Prior Art
Due to the possible toxicity of solar fluids, several codes of state and local governments have been enacted which require the heat exchanger tube coil to have two separate walls. The design of such double wall heat exchangers can be of two basic types, namely, vented and unvented. With the vented design, a failure of the inner coil will cause leakage at the terminal ends of the coil at a specified pressure (about 10 psig) between the tubes. With the unvented design, the terminal ends of the coil are sealed. The placing of one tube inside of another has been done in the past; however, in such cases, there is little or no metal contact surface between the tube walls resulting in poor heat transfer. The art has tried more elaborate schemes which have also been unsatisfactory.
U.S. Pat. No. 2,586,653 (Hill) produces a composite tube which has an outer tube with a helical outer fin. A matching internal helical groove is present on the outer tube. An inner tube has a helical outer rib that mates with the internal helical groove of the outer tube. There is no space between the inner tube and the outer tube (including the mating groove and rib) after they are formed. Hill forms the composite tube by using an outer tube that has a slightly larger inner diameter than the outer diameter of the inner tube. (A mandrel is usually inserted into the inner tube.) The mating rib of the inner tube is rolled up at the same time the material of the outer rib is extruded to form the mating fin. The mating rib of the inner tube is caused by the rolling pressure which formed the ribs of the outer tube. The outer tube is reduced in internal diameter and brought in complete contact with the inner tube.
U.S. Pat. No. 3,750,444 (Bittner), in FIG. 3, shows an externally helically finned outer tube 1 and an internally helically finned inner tube 3. The helical fins (ribs) have mating paths. The result is a helical passageway between the inner and outer tubes. The internal fins should cause quite a fluid flow pressure drop, etc.
FIG. 2 of U.S. Pat. No. 3,730,229 (D'Onofrio) shows an outer tube having internal helical grooves and an inner tube having mating internal helical grooves, the external protrusions of which fit in the helical grooves of the outer tube. Helical pathways are thereby formed between the inner and outer tubes. The inner helical tube is formed by twisting--see FIGS. 5 to 9. The internal helical groove of the outer tube is formed by deformation pressure when the internal helical tube is formed--see col. 4, lines 47 to 60. U.S. Pat. No. 4,111,402 (Barbini) shows two tubes, one inside of the other, which each have at least one spiral corrugation (fin) in opposite twist to the other. The spiral corrugations are each formed by the twist method. U.S. Pat. No. 2,913,009 (Kuthe) shrinks an outer tube around an inner helical tube. U.S. Pat. No. 2,724,979 (Cross) shows an inner tube inside of an outer helical tube.
U.S. Pat. No. 3,724,537 (Johnson) involves expanding an inner tube into the internal grooves of an outer finned tube by means of internal in-situ high pressure. The internal grooves of the outer tube are completely filled. U.S. Pat. No. 3,467,180 (Pensotti) shows an outer finned tube which has a series of internal longitudinal grooves. An inner tube is expanded into the longitudinal grooves. Pensotti also expands an inner tube having a series of external grooves against the smooth interior wall of the outer tube--a series of longitudinal passageways result. U.S. Pat. No. 4,031,602 (Cunningham et al.) teaches a method of making finned heat transfer tubes. U.S. Pat. Nos. 3,267,563 (Keyes I), 3,267,564 (Keyes II) show an internally finned tube telescoped in outer tube.
See also U.S. Pat. Nos. 3,887,004, 3,868,754, 3,878,593, 3,100,930, 3,267,563, 3,267,564, 2,693,026, 4,031,602, 1,970,481, 1,646,384 and 1,813,096.
BROAD DESCRIPTION OF THIS INVENTION
An object of this invention is to provide a double wall heat exchanger which has a helical passageway between the inner tube and outer tube thereof. Another object of this invention is to provide a double wall heat exchanger which has excellent heat transfer between the inner tube and outer tube thereof. A further object of this invention is to provide a double wall heat exchanger which has a path of leakage between the tubes at a pressure differential of 10 p.s.i.g. Another object of this invention is to provide a process for the preparation of such double wall heat exchangers, such process having reduced cost of manufacture. Other objects and advantages of this invention are set out herein or are obvious herefrom to one ordinarily skilled in the art.
The objects and advantages of this invention are achieved by the double wall heat exchanger and processes of this invention.
This invention includes a double wall heat exchanger for solar heaters and the like. The heat exchanger includes an outer tube having an outer helical fin and a small helical groove on the inside of the outer tube. The helical groove follows the helical path of the outer helical fin. There is an inner tube having a slightly-raised continuous helical protrusion which matches the path of the inner helical groove of the outer tube. A narrow helical passageway between the inner tube and outer tube is formed by the mating small helical groove and the slightly-raised continuous helical protrusion. The inner surface of the outer tube, except in the region of the inner small helical groove thereof, contacts the outer surface of the inner tube, except in the region of the slightly-raised continuous helical protrusion.
There is excellent heat exchange between the inner and outer tubes of this invention. The double wall heat exchanger has at least 98 percent metal-to-metal surface contact between the inner and outer tubes.
The helical continuous passageway between the inner and outer tubes typically has a height of 0.002 to 0.003 inch. The height of the helical passageway can be varied by the amount of prior annealing of the inner tube (or by using a softer metal for the inner tube). The more the prior annealing, the greater the height of the passageway. The helical passageway takes up 2 percent or less of the surface area of the outer surface of the inner tube (or of the outer tube). If the percentage is more than 2 percent, heat transfer capacity is lost--dead air space means poor heat transfer. Typically the helical passageway has the following cross-section: . The size of the channel can be varied by varying the clearance between the inner and outer tubes. A tighter clearance means a smaller sized channel.
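The 2 percent surface-area limit and the 0.002 to 0.003 inch channel height lend themselves to a quick estimate. The sketch below is illustrative only; the groove width, helix pitch, and tube diameter are assumed values for the example, not figures taken from this disclosure.

```python
import math

# Illustrative check of the helical-passageway geometry described above.
# All dimensional inputs are assumptions for the example, not patent values.
tube_od_in = 0.625          # outer diameter of the inner tube, inches (assumed)
groove_width_in = 0.040     # axial width of the helical groove, inches (assumed)
helix_pitch_in = 2.5        # axial advance per turn of the helix, inches (assumed)
channel_height_in = 0.0025  # mid-range of the 0.002-0.003 inch height stated above

# For a single-start helix, each pitch length of tube carries one wrap of the
# groove, so the groove covers roughly (width / pitch) of the tube surface.
surface_fraction = groove_width_in / helix_pitch_in
print(f"Approximate surface area taken by the passageway: {surface_fraction:.1%}")

# Approximate cross-sectional flow area of the channel, treated as a thin
# rectangle of the stated height and the assumed groove width.
flow_area_sq_in = groove_width_in * channel_height_in
print(f"Approximate channel cross-section: {flow_area_sq_in:.6f} sq in")
```

With these assumed numbers the passageway covers about 1.6 percent of the tube surface, under the 2 percent ceiling; a narrower groove or a longer pitch reduces the figure further, at the cost of a smaller leak path.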
Preferably the double wall heat exchanger is prepared from previously annealed copper. Any other suitable materials can be used--any metal can be rolled to form the fins, etc., as long as the rolls are harder than the rolled material, particularly the outer tube. For example, double wall heat exchangers for nuclear reactors can be prepared using a copper outer tube and a stainless steel inner tube. The inner and outer tubes can be made of the same or different metals provided the metal or metals are ductile enough to be rolled or finned.
The double wall heat exchangers of this invention preferably exclude internal fins since such internal fins cause a major fluid flow pressure drop, etc., in the passageway of the inner tube.
The process of this invention for preparing the double wall heat exchanger broadly includes placing an outer tube of predetermined thickness and inside diameter over a second tube also of a predetermined thickness and outside diameter.
| 1.313955
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
Can B.C. Employers Require That New or Existing Employees Be Vaccinated?
A resident of British Columbia receiving the COVID-19 Vaccine
British Columbia has commenced the largest mass vaccinations in provincial history, beginning with healthcare workers and seniors over the age of 85. By April 19th, more than 300,000 front-line workers, including first responders, grocery store employees, teachers, and child-care workers, will be eligible to receive a vaccine.
While we are still in the early stages of vaccine rollouts, many employers in B.C. are mulling what their workplaces will look like once a vaccine is available to the general public. Specifically, many employers are wondering whether they can require that new and existing employees be vaccinated.
Nathan Rayan, an associate in our business law and employment law practice groups, recently shared his insights on these questions with News 1130. This article expands on that interview.
Is there a Legal Precedent for Requiring Employees to be Vaccinated?
The COVID-19 pandemic is an expansive, global health crisis without equal in the modern era. As such, there is no one-to-one legal precedent for our present situation.
Pre-Covid, there had been some limited case law regarding whether employers could implement mandatory flu vaccinations. Generally, these cases involved unionized healthcare employees. Unions have tended to challenge mandatory vaccination policies as an overreach of management power. In 2019, the B.C. Nurses' Union (BCNU) reached a new agreement with the Health Employers Association of B.C., in which nurses would no longer be required to receive a flu vaccine.
COVID-19, of course, presents a far greater threat than the common flu, and would likely receive different consideration in courts and tribunals.
What Are the Significant Issues at Play for Employers?
In considering vaccination policies, employers will need to balance their legal requirement to provide a safe workplace against human rights considerations—including prohibitions against discriminating based on disability or religion—as well as protecting the privacy rights of employees.
To date, the Government of British Columbia has not passed any legislation that would mandate vaccines for employees, nor has it released clear policy guidelines that would help employers to navigate this difficult and evolving landscape.
Businesses and organizations should consider seeking legal advice to navigate these complex issues and develop sound and legally enforceable policies. Employers may also need to develop different approaches for existing employees and prospective employees, as there is a significant difference between forcing an employee to get vaccinated compared to making it a condition of a job offer.
Can B.C. Employers Require Their Existing Employees to Get Vaccinated?
In British Columbia, most employers will have a difficult time enforcing a mandatory vaccine policy for existing employees. The exception will be some healthcare and residential care workers, or some unionized workplaces where the union is supportive of a mandatory vaccine requirement for its members.
The two routes an employer could attempt to use to enforce a mandatory vaccination policy are occupational health and safety, and a legitimate business requirement (for example, if a job requires regular flights and airlines are refusing to allow non-vaccinated passengers to board that might be considered a bona fide requirement). An employer would have to clear significant hurdles along either route.
Is Vaccination Necessary for Workplace Safety?
All employers have a legal obligation to maintain a safe workplace for their employees. In a factory environment, for example, an employer needs to ensure that workers are protected from physical injury, such as exposure to harmful chemicals or dangerous machinery.
Whether a particular safety policy is reasonable depends on what is reasonable for that particular workplace. On a construction site, of course, hardhats would be an enforceable safety policy; in a bookstore, not so much, even if hardcover novels could occasionally fall off shelves.
To be enforceable, a mandatory COVID-19 vaccination policy would need to be deemed necessary based on the workplace. In most workplaces, the employer's requirement to provide a safe job site could be met by lesser means than mandatory vaccination, such as by having employees work from home, or through distancing, sanitization, temperature checks, and mandatory mask-wearing. It would likely be difficult for a business to claim that they require an employee to get vaccinated and return to the office, if that employee has been successfully working remotely for the past year, for example.
Who Decides Whether a Workplace is Safe or Unsafe?
The Workers' Compensation Board of British Columbia (WorkSafeBC) is ultimately responsible for determining whether a workplace is safe. The saying, "no plaintiff, no judge" applies here. For a workplace to be deemed unsafe, an individual or organization would first need to raise concerns with WorkSafeBC, which would then conduct an inspection and make a determination. A COVID-19 related safety inspection would follow similar processes to any other type of safety inspection.
Could Employers Argue that Vaccination is an Occupational Requirement?
The other route an employer might take to try to implement a vaccination mandate for existing employees would be to argue that it is an integral requirement of the job. Employers in hospitality, retail, travel, food processing, personal services, and other industries necessitating close physical proximity are likely to have more success in arguing that a mandatory vaccination policy is necessary, compared to employers in industries where close contact is not required.
Because these vaccines are so new, this argument has not been considered by Canadian courts or tribunals. We should expect to see some decisions considering these claims within the next 6-12 months, as vaccines become widely available to working-age Canadians.
Requiring Existing Employees to Be Vaccinated Will Likely be a Losing Fight for Most Employers
While there are compelling public health arguments for why a frontline worker such as a grocery store clerk, hairdresser or server should be vaccinated, forcing vaccination on someone resistant to the procedure would be an invasive infringement on their bodily autonomy.
The legal test for whether a mandatory medical procedure could be required is not, "what is the best possible safety measure", but rather "what is necessary to preserve a safe workplace". While a customer might wish the person serving them dinner was vaccinated, it is likely that a court or tribunal would find that less intrusive measures, such as wearing a mask and hand hygiene, would be sufficient.
Legislation would offer welcome clarification to both employers and employees about which occupations and workplaces merit mandatory vaccinations and which do not. To date, this has not been the B.C. Government's approach; rather, the government has instead focused on education and encouragement to do the right thing. In a free society such as ours, this is arguably the right approach, although it will have the side effect of decreased certainty and, likely, disputes between well-intentioned employers and employees who don't see eye-to-eye.
Can Employers Require that New Employees be Vaccinated?
The situation for new hires is different than for existing employees. In most cases, employers will likely be within their rights to require that new employees be vaccinated as a condition of employment. Although such a requirement would be unrealistic at the moment, since most working-age Canadians are not yet eligible to receive a vaccine, this will change before the end of 2021, as vaccines become available to younger people in British Columbia.
Businesses and organizations have tremendous latitude in hiring whomever they want. Employers can hire based on perceived fit; for example, a business could choose to hire someone because of their charisma, sense of humour, or skill in solving Sudoku puzzles if they wanted.
Employers can use whatever reason they like in deciding whom to hire, except for those reasons which are specifically banned, such as the prohibited grounds of discrimination as defined in the BC Human Rights Code.
What Human Rights Principles do Employers Need to Be Aware of?
The BC Human Rights Code protects employees and potential employees from discrimination based on certain protected characteristics. There are two protected grounds that are likely to be relevant to COVID-19 vaccination policies:
1. Medical Disability: In the employment context, it is illegal to discriminate on the basis of medical disability. This would include a disability that prevents an employee from receiving a COVID-19 vaccination. If an employer refuses to hire a candidate because they are unvaccinated, and that individual has valid medical reasons why they cannot get vaccinated, then that employer would be discriminating based on disability.
However, if being vaccinated were a bona fide occupational requirement, then this discrimination would be excused. This would depend on the specific workplace, and the employer's ability to accommodate the employee.
For example, an employer might not be obligated to hire an un-vaccinated care home worker who worked with vulnerable adults, but it likely would be required to accommodate a typical office worker by instituting other measures to protect workplace health and safety, such as requiring the unvaccinated employee to wear a mask and maintain social distancing.
2. Freedom of Religion: Although most religions do not object to vaccines, some groups or individuals may be opposed to vaccination for religious reasons.
In such a case, the same considerations as a medical disability would arise, and the employer would need to assess its legal obligation to accommodate the potential or current employee.
The issue of freedom of religion could emerge as a contentious area for employment litigation. We can easily imagine situations where it will be difficult to reconcile an individual's legally protected freedom of religion versus an employer's concern over a safe workplace and the stigma of having unvaccinated workers. I expect to see one or more motivated plaintiffs escalate this issue to the B.C. Human Rights Tribunal within the coming months. While we cannot anticipate the Tribunal's decision, the significance of this global health crisis is likely to be taken into account. At this time, more than 900,000 Canadians have been infected and more than 20,000 have died as a result of COVID-19 and related complications. We must wonder what the Tribunal's appetite
| 1.728607
|
Zyphra/Zyda-2
|
BROWN'S GAS FACTS
EXCERPTS FROM BROWN'S GAS BOOK TWO
One of the design features of the gas generator is that its electrodes consist of inexpensive, ordinary mild steel,
unlike many conventional electrodes that employ expensive, exotic noble metals. (Noble metals are chemically
inert or inactive, especially toward oxygen). There are extraordinary characteristics of Brown's Gas when
produced by the patented generator. The properties of the gas have been the subject of considerable international
interest and various research projects and third party reports. The first Australian Patent 590309 was granted to
Yull Brown in 1977. U.S. Patent _PHONE_ was granted in 1977 and U.S. Patent _PHONE_ was granted in 1978. Yull
Brown has the intellectual rights to Brown's Gas. These rights are his for fifty years and are far from expired.
While a person could build a machine from the original patent, they would not be allowed to produce Brown's Gas
without Yull Brown's written approval, which has been exclusively granted to Better World Technology for the
United States. Many of the larger industrialized countries have granted patents or offer protection through
international treaties. (A strategy of patent enhancement that may further extend the life of U.S. patents is
presently in development.) With the initial granting of patents, emphasis was placed on documentation and
analysis of the properties, behavior and safety of the gas. Commercial development of the widespread potential
applications was not a principal focus until 1986.
BROWN'S GAS PROPERTIES
The raw materials for the production of Brown's Gas are water and electricity. One kwh of electricity produces
approximately 340 liters of gas.
Virtually any amount of Brown's Gas can be produced in any volume through cells in series, cells
miniaturized, or cells enlarged. One unit of water yields 1,860 units of gas. The inverse applies as well. Upon
ignition, Brown's Gas implodes. When implosion of the gas mixture occurs, the result is a 1,859 unit vacuum with
one unit of water. Tests have demonstrated various potential applications for pumps and motors operating as a
result of the vacuum created by igniting the gas in a closed chamber. The end result of the implosion is always
water. The effect of the gas's self implosion is to create a nearly perfect vacuum, almost instantaneously. The
vacuum can be generated in a device without moving parts. A standard torch, such as used in oxy/acetylene welding, can be used to burn Brown's Gas. Ignition is achieved with a hot spark. There are remarkable properties to the flame that are considerably different from a flame produced by mechanically combining oxygen and hydrogen gases. It appears that the unique nature of the extreme thermal energy produced by Brown's Gas is from interactive effects with the particular material being heated. Hydrogen burning in an oxygen environment should theoretically reach a temperature of between 2210 and 2900 degrees centigrade. Tungsten was vaporized (sublimated), which requires a temperature of 5900 degrees centigrade, considerably above the flame temperature. A section of tungsten rod (1/8 inch in diameter) was sublimated in about 30 seconds. The flame's properties are different from those of conventional welding gases. For example, the flame is exceedingly pure and the flame results from the burning of the gas without the addition of oxygen, as required for acetylene. When the gas flame is directed to a fire brick, the contact area quickly reaches a condition of white heat and then begins to melt. Such results are not observable with conventional welding gases. In various demonstrations of the burning of Brown's Gas, holes were thermally bored through bricks, bricks were welded together with the material melting to an igneous rock similar to volcanic material, ceramic tiles were pierced by the flame, and steel was welded to brick. An observable characteristic of the implosive flame is that it concentrates heat into a small area. Various independent consultants have tested this aspect by holding a piece of mild steel (six inches long) in one bare hand, and using the flame, cutting an inch or more from the other end. The cutting operation is completed before heat is significantly conducted through the metal. Welders familiar with conventional welding devices would assume the absolute requirement of asbestos gloves for such an experiment.
The intense heat concentration of the flame is immensely important in welding certain metals where the conducted overflow heat can weaken the metal adjacent to a weld. A typical example would involve aluminum welding. With Brown's Gas, the heat energy is concentrated into a small area where it performs its function without a wide dispersal of the applied heat. In applications which involve roll cutting steel plate, the smoothness of the cut is significant, in part because of this characteristic of greater heat concentration.
THIRD PARTY STUDIES AND OBSERVATIONS
There have been supportive studies and conclusions reached about Professor Brown's discovery by various independent authorities and consultants. Some of those are summarized below.
Dr. Ellyett, Emeritus Professor of Physics, Newcastle University, N.S.W., Australia, prepared an independent technical assessment of Professor Brown's technology in 1986. Dr. Ellyett has a double masters in chemistry and physics and a Ph.D. in physics. He was Foundation Professor of Physics at the University of Newcastle from 1964 to 1980, specializing in geophysics. He is the author of 64 articles in various professional journals and has been consultant to the U.S. government in upper atmospheric and space physics. He has lectured in the USA, Sweden, Antarctica, South Africa, Holland, Canada, Singapore, the Philippines, West Germany and Finland. In preparing his observations of the Brown's Gas Generator, Dr. Ellyett went through several demonstrations of the technology, reviewed the U.S. patent, and also reviewed three reports prepared by Dr. John Bockris.
"It should be stated here that the study of uses and applications has gone ahead of a complete scientific understanding
of all facets of this process. Such a study will probably take several years to complete. However, enough has now
been demonstrated to justify the techniques being commercialized immediately, and the first country to do so may in
the long run achieve an enormous advantage in many technical processes.Brown's Gas is burnt in a blowtorch. The
oxygen combines with the hydrogen itself, and as much energy is released on recombination as was needed to
produce the initial dissociation. This results in a high temperature.The product is water. Surrounding air and its
associated moisture is excluded so the temperature is not lowered due to this factor, as it can be with other flames.
However, at the high temperatures reached, some of the resulting water
is again probably dissociated and hydrogen will interact to varying degrees with the material being heated.
The extent of creation of atomic hydrogen and oxygen is presently unknown, but complex reactions occur within
the flame and between the flame and the solid, so that some materials being heated reach greater temperatures than
Dr. Ellyett, in reporting on various welding applications of the gas wrote: "Metal welding, including aluminum, become simple processes. Metal welds are particularly clean, due to the correct hydrogen/oxygen balance". Regarding the stability of the gas, he wrote: "It is stable and non-explosive at any reasonable pressure, and has been approved for manufacture and use by the New South Wales Department of Explosives. Any simple mixture of the two gases would probably explode if it was significantly compressed, but Brown's Gas is stable in this regard". Regarding the self-implosion characteristic of the gas, Dr. Ellyett noted, "If a spark plug is inserted in the gas and a spark passed, the gas immediately collapses to water with an 1860 to 1 reduction in volume. This creates a near vacuum and opens the way to many interesting, practical uses, such as the pumping of water or other fluids in emergency situations, using the surrounding atmosphere to move the fluid". Dr. Ellyett also wrote: "The gas generator can produce the gas rapidly as it is required. This obviates the need for heavy storage cylinders, with all the cost factors of transport to and from the working site. "Alternatively, if storage is required, it can be achieved quite simply in the field. Noncontinuous generators of electricity, such as solar photovoltaic cells or windmills, could create the Brown's Gas by electrolysis of water, and the gas could be stored as a source of energy for use at any future time. This would be particularly advantageous in remote areas, and would eliminate the use of storage batteries.
"Study of the calculations for the production of Brown's Gas by electrolysis indicates a highly efficient process; 1 kwh of electricity producing 340 liters of gas. Direct current is used for the electrolysis, so there is a small energy loss in convert ing from alternating current. The electrolysis itself is considered to be approximately 95% efficient, so the overall efficiency from an alternating current source is calculated to be in excess of 90%. The cost of Brown's Gas appears from observation a nd calculation to be many times cheaper than the cost of obtaining a similar quantity of bottled oxyacetylene or oxyhydrogen gases on-site.(see George Wiseman's Observations) Dr. John Bockris prepared comments following a comprehensive review and de monstration of Professor Brown's generator and the various heat and implosion characteristics of the gas.
| 2.138947
|
HuggingFaceFW/fineweb-edu
|
Chapter 2. File System Structure and Maintenance
- Shareable and unsharable files
- Shareable files can be accessed locally and by remote hosts. Unsharable files are only available locally.
- Variable and static files
- Variable files, such as documents, can be changed at any time. Static files, such as binaries, do not change without an action from the system administrator.
2.1. Overview of Filesystem Hierarchy Standard (FHS)
- Compatibility with other FHS-compliant systems
- The ability to mount a /usr/ partition as read-only. This is crucial, since /usr/ contains common executables and should not be changed by users. In addition, since /usr/ is mounted as read-only, it should be mountable from the CD-ROM drive or from another machine via a read-only NFS mount.
2.1.1. FHS Organization
2.1.1.1. Gathering File System Information
The df command reports the system's disk space usage. Its output looks similar to the following:

df Command Output

Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 _PHONE_ _PHONE_ _PHONE_ 57% /
/dev/sda1 _PHONE_ 86211 10% /boot
none 322856 0 322856 0% /dev/shm

df shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The -h argument stands for "human-readable" format. The output for df -h looks similar to the following:

df -h Command Output

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 12G 6.0G 4.6G 57% /
/dev/sda1 99M 9.1M 85M 10% /boot
none 316M 0 316M 0% /dev/shm

/dev/shm represents the system's virtual memory file system.

The du command displays the estimated amount of space being used by files in a directory, displaying the disk usage of each subdirectory. The last line in the output of du shows the total disk usage of the directory. To see only the total disk usage of a directory in human-readable format, use du -hs. For more options, see the du manual page.
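The same checks that df and du perform interactively can also be scripted. The snippet below is a generic illustration and is not part of the Red Hat documentation; the paths passed in are placeholders, and the du-style walker reports apparent file sizes rather than block-rounded usage.

```python
import os
import shutil

def report_mount(path="/"):
    """Print total/used/free space for the filesystem containing *path*,
    similar in spirit to one line of df -h output."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3
    print(f"{path}: total {usage.total / gib:.1f} GiB, "
          f"used {usage.used / gib:.1f} GiB, free {usage.free / gib:.1f} GiB")

def directory_size(path="."):
    """Sum the apparent size of all regular files below *path*, similar in
    spirit to du -s (symlinks are skipped, sizes are not block-rounded)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if not os.path.islink(full):
                total += os.path.getsize(full)
    return total

if __name__ == "__main__":
    report_mount("/")
    print(f"./: {directory_size('.') / (1024 ** 2):.1f} MiB in files")
```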
Gnome System Monitor
Partition and disk usage information can also be viewed graphically by running gnome-system-monitor. Select the File Systems tab to view the system's partitions. The following figure illustrates the File Systems tab.
Figure 2.1. File Systems Tab in GNOME System Monitor
The /boot/ directory contains static files required to boot the system, for example, the Linux kernel. These files are essential for the system to boot properly. Do not remove the /boot/ directory; doing so renders the system unbootable.

The /dev/ directory contains device nodes that represent the following device types:
- devices attached to the system;
- virtual devices provided by the kernel.
The udevd daemon creates and removes device nodes in /dev/ as needed. Devices in the /dev/ directory and subdirectories are defined as either character (providing only a serial stream of input and output, for example, mouse or keyboard) or block (accessible randomly, such as a hard drive or a floppy drive). If GNOME or KDE is installed, some storage devices are automatically detected when connected (such as with USB) or inserted (such as a CD or DVD drive), and a pop-up window displaying the contents appears.
Table 2.1. Examples of Common Files in the /dev Directory

| File | Description |
| /dev/hda | The master device on the primary IDE channel. |
| /dev/hdb | The slave device on the primary IDE channel. |
| /dev/tty0 | The first virtual console. |
| /dev/tty1 | The second virtual console. |
| /dev/sda | The first device on the primary SCSI or SATA channel. |
| /dev/lp0 | The first parallel port. |
There are two types of device nodes:
- Mapped device: A logical volume in a volume group, for example, /dev/mapper/VolGroup00-LogVol00.
- Static device: A traditional storage volume, for example, /dev/sdbX, where sdb is a storage device name and X is the partition number. /dev/sdbX can also be /dev/disk/by-uuid/UUID. For more information, see Section 25.8, "Persistent Naming".
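Persistent names of the /dev/disk/by-uuid/ form are ordinary symbolic links back to the kernel device node, so they can be enumerated from a script. The snippet below is a small illustration, not taken from the Red Hat documentation; it assumes a Linux host where that directory exists.

```python
import os

BY_UUID = "/dev/disk/by-uuid"

def uuid_to_device():
    """Map each filesystem UUID to the device node it currently points at,
    for example {'6d3e...': '/dev/sdb1'}.  The symlink target can change
    across reboots; the UUID itself is the stable, persistent name."""
    mapping = {}
    if os.path.isdir(BY_UUID):
        for entry in os.scandir(BY_UUID):
            mapping[entry.name] = os.path.realpath(entry.path)
    return mapping

if __name__ == "__main__":
    for uuid, device in sorted(uuid_to_device().items()):
        print(f"{uuid} -> {device}")
```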
The /etc/ directory is reserved for configuration files that are local to the machine. It should not contain any binaries; if there are any binaries, move them to /usr/bin/ or /usr/sbin/. The /etc/skel/ directory stores "skeleton" user files, which are used to populate a home directory when a user is first created. Applications also store their configuration files in this directory and may reference them when executed. The /etc/exports file controls which file systems export to remote hosts.

The /mnt/ directory is reserved for temporarily mounted file systems, such as NFS file system mounts. For all removable storage media, use the /media/ directory. Automatically detected removable media is mounted in the /media directory. The /mnt directory must not be used by installation programs.
The /opt/ directory is normally reserved for software and add-on packages that are not part of the default installation. A package that installs to /opt/ creates a directory bearing its name, for example, /opt/packagename/. In most cases, such packages follow a predictable subdirectory structure; most store their binaries in /opt/packagename/bin/.

The /proc/ directory contains special files that either extract information from the kernel or send information to it. Examples of such information include system memory, CPU information, and hardware configuration. For more information about /proc/, see Section 2.3, "The /proc Virtual File System".
The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory.

The /sys/ directory utilizes the new sysfs virtual file system specific to the kernel. With the increased support for hot plug hardware devices in the kernel, the /sys/ directory contains information similar to that held by /proc/, but displays a hierarchical view of device information specific to hot plug devices.
The /usr/ directory is for files that can be shared across multiple machines. The /usr/ directory is often on its own partition and is mounted read-only. At a minimum, /usr/ should contain the following subdirectories (a quick existence check is sketched after the list):
- bin: This directory is used for binaries.
- etc: This directory is used for system-wide configuration files.
- games: This directory stores games.
- include: This directory is used for C header files.
- kerberos: This directory is used for Kerberos-related binaries and files.
- lib: This directory is used for object files and libraries that are not designed to be directly utilized by shell scripts or users. As of Red Hat Enterprise Linux 7.0, the /lib/ directory has been merged with /usr/lib. Now it also contains libraries needed to execute the binaries in /usr/sbin/. These shared library images are used to boot the system or execute commands within the root file system.
- libexec: This directory contains small helper programs called by other programs.
- sbin: As of Red Hat Enterprise Linux 7.0, /sbin has been moved to /usr/sbin. This means that it contains all system administration binaries, including those essential for booting, restoring, recovering, or repairing the system. The binaries in /usr/sbin/ require root privileges to use.
- share: This directory stores files that are not architecture-specific.
- src: This directory stores source code.
- tmp: This directory stores temporary files.
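A quick way to see which of these subdirectories are actually present on a given machine is to test each path directly. The snippet below is purely illustrative; the list mirrors the directories named in the text and nothing else.

```python
import os

# Check which of the /usr subdirectories described above exist on this host.
USR_SUBDIRS = ["bin", "etc", "games", "include", "kerberos", "lib",
               "libexec", "sbin", "share", "src", "tmp"]

for name in USR_SUBDIRS:
    path = os.path.join("/usr", name)
    status = "present" if os.path.isdir(path) else "absent"
    print(f"{path}: {status}")
```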
The /usr/ directory should also contain a /local/ subdirectory. As per the FHS, this subdirectory is used by the system administrator when installing software locally, and should be safe from being overwritten during system updates. The /usr/local directory has a structure similar to /usr/, and contains a similar set of subdirectories.

Usage of /usr/local/ differs slightly from the FHS. The FHS states that /usr/local/ should be used to store software that should remain safe from system software upgrades. Since the RPM Package Manager can perform software upgrades safely, it is not necessary to protect files by storing them in /usr/local/. Instead, use /usr/local/ for software local to the machine. For instance, if the /usr/ directory is mounted as a read-only NFS share from a remote host, it is still possible to install a package or program under the /usr/local/ directory.

With /usr/ mounted as read-only, any programs that write log files or need lock/ directories should write them to the /var/ directory. The FHS states /var/ is for variable data, which includes spool directories and files, logging data, transient and temporary files.
The /var/run/media/user directory contains subdirectories used as mount points for removable media such as USB storage media, DVDs, CD-ROMs, and Zip disks. Note that previously, the /media/ directory was used for this purpose.

System log files, such as lastlog, go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases. Lock files go in the /var/lock/ directory, usually in directories for the program using the file. The /var/spool/ directory has subdirectories that store data files for some programs. These subdirectories include:
| 1.119046
|
openbmb/Ultra-FineWeb
|
Northern Italy – Turin And Susa
The Taurin were a tribe of Kelts from beyond the Alps who came to settle in the upper valley of the Po at the base of the Cottian Alps. They are said to have joined in the Keltic war of 225 against Rome, but to have sided with Rome against Hannibal when he invaded Italy in 218 after crossing the Alps and passing through their land. He is said to have spent three days in besieging the capital town of the tribe, which probably stood on the site of Turin, though no traces of it have been found.
We must suppose that the Keltic city was not surrounded by stone or brick ramparts but by palisades and ditch, after the fashion of the terremare. Its native name was Taurasia. It is not heard of again until Augustus sent there his colony, and made of it the northernmost city of Roman brickwork, as he also made Aosta the northernmost city of Roman stonework. It was a self-indicated site, at the intersection of two rivers, on the great northern trade thoroughfare from east to west, between the two seas, and the national center of the west section of the Po valley, just where the great river begins to be navigable.
Turin was called Augusta Taurinorum from the emperor and became the center of a territory extending about one hundred and fifty kilometers from north to south and over a hundred kilometers from west to east, in a great arc whose southern base rested on the river Po and was bounded by the great Alpine curve. It was intersected by the sphere of Genua (Genoa) on the southwest, and by that of Ticinum (Pavia), and Mediolanum (Milan), on the southeast. Even then there was a premonition of the economic rivalry which is fiercely raging in modern Italy, where Genoa and Milan are crowding out Turin!
At the left end of this radius, as we face the Alps, was the territory of the tribe of the Vagienni, reaching to the Maritime Alps and the Riviera. Quite recently their capital city, Augusta Bagiennorum, has been excavated at Bene Vagienna. The Augustan plan was partly uncovered with some imposing fortified double gates, with round flanking towers, similar to those of Turin, showing that it also was a fortress.
It is not easy to remember that Turin, most modern and modern-looking of large Italian cities, owes its aspect to its faithfulness to Roman antiquity; and that under the present streets in the core of the town, at a level of between only one and two meters under the pavement, the sewers and pavements, the house-walls, and even the cellars of the city of Augustus are so well preserved that they prove just how closely the lines of the old Roman streets have been followed. In fact, the city kept within its ancient limits and was surrounded by its Augustan brick walls with its colossal gateways until the sudden expansion of the seventeenth century led to their destruction. Before then, in 1536, the French had destroyed the west gate, or Porta Marmorea, probably the principal entrance to the Roman city, as well as the amphitheater which was not far beyond it, in the usual relation to the city walls.
We can trace its limits in the heart of the modern city, in the west at the Via del Consolato; on the south at the Via S. Teresa; on the east at the Piazza Castello and Piazza Carignano, and on the north at the Via Giulis. The great thoroughfare of the Corso Garibaldi follows the exact course of the ancient decumanus street.
The plan of the Roman colony was almost exactly the "classic" norm of camp and city given by the well-known gromatic writer Hyginus. He describes the ideal plan as 2,400 feet long and two-thirds of this (1,600 feet) wide. Any greater length, he said, endangered defensive operations, as signals and alarms could not be as distinctly heard. Turin fulfils exactly the length measurement and if its width was greater than the normal (2,220 feet in place of 1,600 feet), this was not important; it was only here that it surpassed Aosta in size. Apparently the city fluctuated but little in the course of imperial history, and we may conjecture that its time of greatest importance was under its founder, Augustus, and before the surge of the Roman advance had passed permanently northward. Together with Susa it guarded the main commercial road to Arles (Arelate) and the rest of the Provincia of Southern Gaul (Provence).
In the office of the Regional Department for the Preservation of Monuments, in the ducal Palazzo Madama, there hangs a plan of the ancient city to which every now and then some detail is added, as bits of the old streets are casually found, and which ought to be published without delay in its present form, as few archaeologists, even, are aware of its existence and depend on what Promis gives in his superb but slightly antiquated book of 1869. It is true that a comparatively small subsidy would enable the department practically to complete the plan, but the Government seems unable to furnish it. My special interest lay in the study of the Roman gates and in the place held by Turin under Augustus as one of the keys to Italy.
By a curious coincidence one can pass directly from the office of the Department of Ancient Monuments at Turin, by narrow subterranean stairs and passages, among the sub-structures of the medieval Palazzo Madama, to the considerable remains of the principal gateway of the Augustan city, the Porta Praetoria, over and around which the mediaeval dukes of Savoy built their castle and palace. It can be studied to a height of about six feet. It has four openings, two large central arcades for incoming and out-going vehicles, flanked by two narrow passages for foot-passengers, corresponding to the side-walks. All four of the Roman gates, one on each side of the city, were of the same style and size; and all, like the walls, faced with brick-work. The plan was of the usual Augustan type, even deeper than wide, with a central court and two huge flanking towers.
How deep the court was in the Turin gates has not yet been exactly determined, but it can be determined as soon as the Government provides the funds for completing the restoration and uncovering of the so-called Palazzo delle Torri, or Porta Palatina, the ancient Porta Principalis Sinistra of Roman Turin, which was also turned into a fortress in the Middle Ages. On its outer face this colossal gateway, with its high sixteen-sided towers and double-arched gallery, is the only one of the four to remain in almost perfect preservation, so perfect that its Augustan date was not until quite recently admitted. Since 1905 it has been in process of restoration: the mediaeval battlements removed, the windows and galleries opened up, the ancient level, two meters below the street, laid bare, and the later constructions attached to the face removed. If the ancient foundations and walls in the rear should be wholly uncovered (which the Government has not yet provided the funds to do) the plan of the fortress-like structure would be evident. Already we may conjecture it to be similar to the well-preserved Augustan gate at Nîmes, which also has four openings and a central court, though the Turin gate is on a larger scale, for its width with the towers exceeds 100 feet (36 meters). The unique preservation of the flanking towers helps to give us something of the original stateliness, when it was connected with the long stretch of city walls in the same colossal proportions. It makes one think of the Porta Nigra at Trier in its arrangement, though not in its material; at Trier it is bossed stonework. The two other city gates have disappeared, but they have been located and part of their foundations examined. That on the west was called in the Middle Ages "Porta Marmorea," perhaps owing to a marble facing which originally covered the lower part of the brickwork of these gates, but may have long previously been torn away from the others.
To none of them, however, does the design seem to provide a place for sculptured decoration in relief, so that I was led to attribute to some unknown and destroyed arch the sculptured frieze of arms and armor and some fragments of military scenes in the museum, and perhaps also a fine fragment of a praetorian soldier and a horse now in the office of the Direzione, though I confess that the style of the latter is less Augustan than Trajanic. I am much tempted to conjure up, as having once borne these sculptures, the Colony Arch of the Augustan Turin, which, if my reckoning holds good, must have stood at a short distance outside the principal city gate (Porta Praetoria) across the highway, as it turns to approach the river. I may be allowed to refer here to the theory which I laid before the International Archaeological Congress of 1905 at Athens, which has been quite commonly accepted, French archaeologists having tested its accuracy in connection with the numerous African arches. This theory is that when a Roman colony was founded it was the general custom to build an arch across the main approach, on the sacred boundary line or pomerium. This
| 1.953505
|
Zyphra/Zyda-2
|
Violence in sports
Violence in sports refers to physical acts committed in contact sports such as American football, ice hockey, rugby football, lacrosse, soccer, boxing, mixed martial arts, wrestling, and water polo beyond the normal levels of contact expected while playing the sport. These acts of violence can include intentional attempts to injure a player by another player or coach, but can also include threats of physical harm or actual physical harm sustained by players or coaches by those engaging in spectating of sports.
"Intermittent explosive disorder" may be a cause of violence. Some athletes may be genetically predisposed to violence or (particularly male athletes) have unusually high testosterone levels. Animal behavior ethology studies may also lend a clue, as athletes may resort to violence to establish territory.
Violence by athletes
Athletes sometimes resort to violence, in hopes of injuring and intimidating opponents. Such incidents may be part of a strategy developed by coaches or players.
In boxing, unruly or extremely violent behavior by one of the contestants often results in the fighter breaking the rules being penalized with a points reduction, or, in extreme cases, disqualification. Outlawed tactics in boxing include hitting the opponent on the back of the head, under the belly during clinching, and to the back. Other tactics that are outlawed, but less seen, are pushing an opponent extremely hard to the floor, kicking, or hitting repeatedly after the round has ended. Similar actions have also happened in ice hockey and Australian Football League matches.
Violence may also be related to nationalism or as an outlet for underlying social tensions. It is often alcohol-related.
Violence by supporters of sports teams dates back to Roman times, when supporters of chariot racing teams were frequently involved in major riots. Usually, underlying political and/or theological issues helped fuel riots related to sporting events in the Roman era. The Nika riots of 532 were especially deadly, with tens of thousands reportedly killed.
In periods when theatre was considered a form of mass entertainment, there were phenomena of rival fans supporting rival actors or theatrical teams, occasionally leading to violent outbursts having many similarities to present-day violence of sports fans – the Astor Place Riot in 1849 New York City being a conspicuous example.
The actions of English football hooligans and firms in the 1980s caused English teams to be banned from European competition for six years after the Heysel Stadium disaster in 1985. Although the level of football-related violence was significantly reduced in England after this event, in the recent Euro 2004 tournament, England were publicly warned that any violence by supporters at matches could result in their ejection from the tournament. Many known hooligans were prevented from traveling to the tournament in Portugal. There was a collective sigh of relief from security experts in the USA when England failed to qualify for the 1994 FIFA World Cup. Alan Rothenberg (chairman of the World Cup organizing committee in the United States in 1994) said:
|"||There were three countries in the world whose presence would have created logistical and security problems, so we're very pleased they won't be coming: Iraq, Iran and England.||"|
- In 532, the rivalry between supporters of the Blue and Green chariot-racing teams in Constantinople led to 30,000 deaths in the Nika riots.
- The first meeting in the American football rivalry between Brigham Young University and the University of Utah took place in April 1896, when BYU was known as Brigham Young Academy. The two schools disagree to this day as to whether this game was official, but it mattered greatly to the spectators—at the end of the game, the two sets of fans fought one another.
- In the second edition of the Tour de France in 1904, as the riders climbed the Col de la République in the Loire department, supporters of regional favorite Antoine Faure physically attacked several of his opponents. The repercussions of this incident continue to this day—the Tour did not return to Loire until 1950, and although the Tour has returned to the République (the first pass of 1,000 metres ever climbed in the Tour) 11 times since then, its appearances in the 1903 and 1904 Tours are no longer officially recognized as Tour climbs.
- In 1972, Oregon pummeled Oregon State 30–3 in their annual "Civil War" football rivalry game, held this season at Oregon State's Parker Stadium. After the game, jubilant Oregon fans rushed the field and tore down the south goal post. They then turned to do the same to the north goal post, but were met by Oregon State fans who had come on the field, resulting in a major brawl.
- In 1975, cyclist Eddy Merckx was viciously punched by a spectator as he climbed the Puy de Dôme in the Tour de France. Merckx, who had won the Tour de France five times previously and at the time was in the leader's yellow jersey, finished the stage barely able to breathe, and went on to finish the tour in second place overall.
- The 1980 Scottish Cup Final between bitter Old Firm rivals Celtic and Rangers was marred by an on-pitch riot between rival fans. The result was the banning of alcohol from Scottish football and rugby matches.
- After Marvin Hagler knocked out Alan Minter in three rounds to win boxing's world middleweight title at Wembley Arena in 1980, many of Minter's supporters began to throw beer cans, bottles and other objects into the ring. Both Hagler and Minter, along with their respective handlers, had to be escorted out by Scotland Yard.
- On August 12, 1984, during a game between the Atlanta Braves and San Diego Padres that degenerated into a beanball war:
- At least five fans were dragged from the field at Atlanta–Fulton County Stadium in handcuffs after participating in a bench-clearing brawl.
- One of the fans was charged with assault for throwing a full beer mug at the Padres' Kurt Bevacqua, hitting him in the head, as he was returning to the dugout.
- The game ended with police riot squads on top of both dugouts in an obvious attempt to keep fans away from the players.
- At the end of the same season, violence erupted outside of Tiger Stadium in Detroit after the Detroit Tigers defeated the Padres in the World Series. A well known photo from the riot shows a Tigers fan holding a World Series pennant in front of an overturned burning Detroit Police car.
- Heysel Stadium disaster – 39 people died when a wall collapsed under pressure of Juventus supporters fleeing from 'football hooligans' supporting Liverpool during the 1985 European Cup Final.
- In 1990, a football match between Red Star Belgrade and Dinamo Zagreb was abandoned after ten minutes with thousands of fans fighting each other and the police. One of the Zagreb players, Zvonimir Boban, was seen to kick a policeman, and after an hour long riot, the stadium was set on fire. Dinamo fans see this riot as the beginning of the Croatian War of Independence.
- In 1993, Monica Seles was stabbed by a Steffi Graf fan during a changeover at a tennis match in Germany.
- In 1994, Vancouver Canucks fans rioted in the streets of Vancouver after their team lost in the Stanley Cup finals.
- During the 1994 FIFA World Cup, Colombian football (soccer) player Andrés Escobar accidentally scored an own goal in a match against the United States, a match which Colombia lost 2–1. On his return to Colombia, Escobar was confronted outside a bar in Medellín by a gunman who shot the player six times, killing him. The gunman reportedly shouted "¡Gol!" ("Goal!") for each bullet fired.
- Rioting Indian fans at the Eden Gardens stadium in Calcutta forced the end of the semi-final match between India and Sri Lanka during the 1996 Cricket World Cup. Fans started rioting when the home team, seemingly on the way to victory, underwent a dramatic batting collapse. Match referee Clive Lloyd brought the teams off the ground for their safety, then attempted to restart the match. When the fans remained throwing projectiles and damaging stadium facilities, the match was called off and awarded to Sri Lanka (who went on to win the World Cup).
- In 1996 during a night Australian Football League match at Waverley Park in Melbourne between Essendon and St Kilda, a pitch invasion occurred when the lights went out during the third quarter. Initially, a serious car crash into power lines in the nearby area was reported to have caused the blackout, although it was later confirmed to be a major electrical fault. In the midst of the chaos, fans rioted and stormed the ground, some lighting bonfires in the centre square, and removing two of the behind posts. The incidents were filmed on Network Seven, and the remaining quarter and a half was played three nights later.
- In 1998, Denver Broncos fans rioted in the streets of Denver after their team won Super Bowl XXXII. Near-riots happened when the team won the Super Bowl again the following year and after the Colorado Avalanche's Stanley Cup wins in 1996 and 2001.
- A similar incident occurred in Oakland, California in 2003 when fans rioted and destroyed property after the Oakland Raiders lost to the Tampa Bay Buccaneers in Super Bowl XXXVII.
- In July 2000, 13 people were trampled to death in a riot at a 2002 FIFA World Cup qualifying match in Harare, Zimbabwe after South Africa took a 2–0 lead over Zimbabwe.
- In June 2000, Los Angeles
| 1.945829
|
HuggingFaceFW/fineweb-edu
|
Sarojini Naidu in Bombay (now Mumbai), 1946
Born: 13 February 1879, Hyderabad, Hyderabad State, India
Died: 2 March 1949, Lucknow, Uttar Pradesh, India
Occupation: Poet, writer, social activist
Alma mater: King's College London; Girton College, Cambridge
Spouse: Dr. Muthyala Govindarajulu
Children: Jayasurya, Padmaja, Randheer, Nilawar and Leelamani
Sarojini Naidu, also known by the sobriquet The Nightingale of India, was a child prodigy, Indian independence activist and poet. Naidu was one of the framers of the Indian Constitution. Naidu was the first Indian woman to become the President of the Indian National Congress and the first woman to become the Governor of Uttar Pradesh state. Her birthday is celebrated as Women's Day all over India.
Early Life
Naidu was born in Hyderabad to Agorenath Chattopadhyay and Barada Sundari Devi, a Bengali Hindu Kulin Brahmin family, on 13 February 1879. Her father was a doctor of science from Edinburgh University, settled in Hyderabad State, where he founded and administered the Hyderabad College, which later became the Nizam's College in Hyderabad. Her mother was a poetess and used to write poetry in Bengali. Sarojini Naidu was the eldest among eight siblings. One of her brothers, Birendranath, was a revolutionary, and her other brother, Harindranath, was a poet, dramatist, and actor.
Sarojini Naidu passed her Matriculation examination from the University of Madras. She then took a four-year break from her studies and concentrated on reading various subjects. In 1895, she travelled to England to study first at King's College London and later at Girton College, Cambridge. She fell in love with Govindarajulu and married him in 1898. They had five children: Jayasurya, Padmaja, Randheer, Nilawar and Leelamani; her daughter Padmaja later became the Governor of West Bengal.
Indian Freedom Fighter
Sarojini Naidu joined the Indian national movement in the wake of partition of Bengal in 1905. She came into contact with Gopal Krishna Gokhale, Rabindranath Tagore, Muhammad Ali Jinnah, Annie Besant, C. P. Ramaswami Iyer, Mahatma Gandhi and Jawaharlal Nehru.
During _PHONE_, she traveled to different regions in India delivering lectures on social welfare, women empowerment and nationalism. She awakened the women of India and brought them out of the kitchen. She also helped to establish the Women's Indian Association (WIA) in 1917. She was sent to London along with Annie Besant, President of WIA, to present the case for the women's vote to the Joint Select Committee.
President of the Congress
In 1925, Sarojini Naidu presided over the annual session of the Indian National Congress at Cawnpore. In 1929, she presided over the East African Indian Congress in South Africa. She was awarded the Kaisar-i-Hind Medal by the British government for her work during the plague epidemic in India. In 1931, she participated in the Round Table Conference with Gandhiji and Madan Mohan Malaviya. Sarojini Naidu played a leading role during the Civil Disobedience Movement and was jailed along with Gandhiji and other leaders. In 1942, Sarojini Naidu was arrested during the "Quit India" movement. She was a great freedom fighter and an equally great poet.
Literary career
Sarojini Naidu began writing at the age of 12. Her play, Maher Muneer, impressed the Nawab of Hyderabad. In 1905, her collection of poems, named "The Golden Threshold", was published. Her poems were admired by many prominent Indian politicians like Gopal Krishna Gokhale.
Golden Threshold
Named "Golden Threshold" after Sarojini Naidu's much celebrated collection of poems, this premise has a long and wider history. This was the residence of her father, Dr. Aghornath Chattopadhyay, the first Principal of Hyderabad College, later named Nizam College. This was the home of many reformist ideas in Hyderabad - in areas ranging from marriage, education, women's empowerment, literature and nationalism –apart from being the home of brilliant, radical and creative members of the Chattopadhyay family, which included the anti-imperialist revolutionary Birendranath; maverick poet, actor and connoisseur of music and dance Harindranath; dancer and film actress Sunalini Devi; communist leader Suhasini Devi –and of course the poet, crusader for women's rights, nationalist leader and 'Nightingale of India' Sarojini Devi. Harindranath Chattopadhyay said about this house, where anyone and any ideas were welcome for discussion, "a museum of wisdom and culture,a zoo crowded with a medley of strange types – some even verging on the mystique". Golden Threshold now houses Theatre Outreach Unit an initiative of University of Hyderabad started in August 2012.
During her stay in England, Sarojini had met Dr. Govindarajulu Naidu, a non-Brahmin and a doctor by profession, and fallen in love with him. After finishing her studies, at the age of 19, she married him, at a time when inter-caste marriages were frowned upon. Her father approved of the match, and the marriage was a very happy one.
The couple had five children: Jayasurya, Padmaja, Randheer, Nilawar and Leelamani. Her daughter Padmaja followed in her footsteps and became the Governor of West Bengal. In 1961, Padmaja edited and published a posthumous collection of her mother's poems entitled The Feather of the Dawn.
In 1949 she fell ill. Her physician gave her a sleeping pill to help her rest; she smiled and said, "Not eternal sleep, I hope." That night, on 2 March 1949, she died in her sleep.
- 1905: The Golden Threshold, published in the United Kingdom (text available online)
- 1912: The Bird of Time: Songs of Life, Death & the Spring, published in London
- 1916: Muhammad Jinnah: An Ambassador of Unity
- 1917: The Broken Wing: Songs of Love, Death and the Spring, including "The Gift of India" (first read in public in 1915)
- 1943: The Sceptred Flute: Songs of India, Allahabad: Kitabistan, posthumously published
- 1961: The Feather of the Dawn, posthumously published, edited by her daughter, Padmaja Naidu
- 1971: The Indian Weavers
Famous Poems
- Damayante to Nala in the Hour of Exile
- Indian Dancers
- The Indian Gypsy
- Indian Love-Song
- Indian Weavers
- In Salutation to the Eternal Peace
- In the Forest
- In the Bazaars of Hyderabad
- Nightfall in the City of Hyderabad
- Palanquin Bearers
- The Pardah Nashin
- Past and Future
- The Queen's Rival
- The Royal Tombs of Golconda
- The Snake-Charmer
- Song of a Dream
- The Soul's Prayer
- To a Buddha Seated on a Lotus
- To the God of Pain
- Wandering Singers
- Street Cries
- Autumn Song
- Bangle Sellers
- The Coromandel Fishers
- "Shall hope to prevail where clamorous hate is rife,
- Shall sweet love prosper or high dreams have place
- Amid the tumult of reverberant strife
- 'Twixt ancient creeds, 'twixt race and ancient race,
- That mars the grave, glad purposes of life,
- Leaving no refuge save thy succoring face?"
Naidu said, "When there is oppression, the only self-respecting thing is to rise and say this shall cease today, because my right is justice." She adds, "If you are stronger, you have to help the weaker boy or girl both in play and in the work."
- "Colors of India". First Woman Governor of a State in India. Retrieved 25 March 2012.
- editor; Ramchandani, vice president Dale Hoiberg; editor South Asia, Indu (2000). A to C (Abd Allah ibn al-Abbas to Cypress).. New Delhi: Encyclopædia Britannica (India). ISBN 978-0-85229-760-5.
- "SRIMATI SAROJINI NAIDU, Governor of UP". National Informatics Centre, UP State Union. Retrieved 25 March 2012.
- "Live India".
- "Biography of Naidu".
- compiled; Agrawal, edited by Lion M.G. (2008). Freedom fighters of India (in four volumes). Delhi: Isha Books. p. 142. ISBN 978-81-8205-468-4.
- Pasricha, Ashu (2009). The political thought of
himself a "Saviour" who freely saves others
by His personal salvation. The Buddha exhorts His disciples
to depend on themselves for their salvation, for both defilement
and purity depend on oneself. "You yourselves should make
the exertion. The Tathaagatas are only teachers," says the Buddha.
The Buddhas point out the path, and it is left for us to follow
that path to save ourselves: "To depend on others for salvation
is negative, but to depend on oneself is positive." Dependence
on others means a surrender of one's effort. Furthermore, the
Buddha does not claim a monopoly of Buddhahood, which as a matter
of fact is not the prerogative of any specially graced, chosen
person. He reached the highest possible state of perfection
any person could aspire to; and without the closed fist of a
teacher, He revealed the only straight path that leads thereto.
According to the teachings of the Buddha anybody may aspire
to that supreme state of perfection if he makes the necessary
aspiring determination and necessary exertion.
As a man He attained Buddhahood and proclaimed to the world the
latent possibilities and the creative power of man. Instead
of placing an unseen almighty God over man, and making him subservient
to such a belief, He raised the worth of mankind. It was He
who taught that man could obtain his Deliverance from sorrow
by his own exertion, without depending on a God and mediating
priests, or on sacrifices and prayers. It was He who taught
the egocentric world the noble ideal of selfless service. It
was He who revolted against the degrading caste system and taught
the equality of mankind. He declared that the gates of success
and prosperity were open to all, in every condition of life,
high and low, saint and sinner, who would care to turn over
a new leaf and aspire to Perfection.
Irrespective of caste, colour or rank, He established for both deserving
men and women a celibate order which was "democratic in
constitution and communistic in distribution." He gave
complete freedom of thought and wanted us to open our eyes to
see things as they truly are. He comforted the bereaved by His
consoling words. He ministered to the sick that were deserted.
He helped the poor who were neglected. He ennobled the lives
of sinners and purified the corrupted lives of criminals. He
encouraged the feeble, united the divided, enlightened the ignorant,
clarified the mystic, guided the deluded, elevated the base,
and dignified the noble. Rich and poor, saint and sinner, loved
Him alike. Despotic and righteous kings, glorious and obscure
princes and nobles, generous and miserly millionaires, haughty
and humble scholars, destitute paupers, downtrodden scavengers,
wicked murderers, despised courtesans - all benefited by His
words of wisdom and compassion.
His noble example was a source of inspiration to all. His Message
of Peace was hailed by all with indescribable joy, and was of
eternal benefit to everyone who had the fortune to come under
its benign influence.
The Buddha's Greatness
The Buddha was a unique Being. He was the profoundest of thinkers,
the most persuasive of speakers, the most energetic of workers, the
most successful of reformers, the most compassionate and tolerant
of teachers, the most efficient of administrators, and above
all - the Holiest of Holies.
In the early period of His renunciation He sought the advice of
distinguished religious teachers, but He could not obtain what
He sought from outside sources. Circumstances compelled Him
to think for Himself and seek within. He sought, He thought,
He reflected; ultimately He found His goal of life. Having discovered
the Truth, He opened the gates of Immortality to all who wish
to hear Him and seek their Deliverance from this ever-recurring
cycle of births and deaths. He attained this through His own striving,
not because He was an infant prodigy in the ordinary accepted sense.
As He knew everything that ought to be known and obtained
the key to all knowledge, He is called Sabba~n~nu, the Omniscient.
This knowledge He acquired by His own efforts as the result
of a countless series of births. What He taught was merely an
infinitesimal part of what He knew. He taught only what was necessary
for our Deliverance. On one occasion while the Buddha was residing
in a forest He took a handful of leaves and said:
Bhikkhus, what I have taught you is comparable to the leaves
in my hand, what I have not taught you is comparable to
the number of leaves in the forest.
He preached His Doctrine to both the Sangha (ordained disciples)
and the laity. In the forenoon He goes in search of individuals
who need His advice. Immediately after His noon meal He exhorts
and instructs His ordained disciples. In the evening for about
an hour He preaches to the layfolk who flock to hear Him. During
the first watch of the night He again preaches to His ordained
disciples. Throughout the middle watch He receives the Devas
and other invisible beings and explains the doctrine to them.
Practising what He preached, He worked incessantly for forty-five
long years for the good and happiness of all to His last moment.
The Buddha and the Caste System
Wisely and very effectively He laboured to eradicate the social
evils that prevailed in His day. He vehemently protested against
the caste system that blocked the progress of mankind. In His
own words:
"Birth makes no Brahman, nor non-Brahman makes;
'tis life and doing that mould the Brahman true.
Their lives mould farmers, tradesmen, merchants, serfs;
their lives mould robbers, soldiers, chaplains, kings.
By birth is not one an outcast,
by birth is not one a Brahman.
By deeds is one an outcast,
by deeds is one a Brahman."
According to the Buddha, caste or colour does not preclude one from becoming
a Buddhist or entering the Order. Fishermen, scavengers, courtesans,
together with warriors and Brahmins were freely admitted into
the Order and enjoyed equal privileges and were equally given
positions of rank.
Upaali the barber, for instance, was made, in preference to all others,
the chief in matters pertaining to the Vinaya. The timid Suniita,
the scavenger, was admitted by the Buddha Himself into the Order.
The courtesan Ambapaali entered the Order and attained Arahantship.
Saati, the monk who maintained a deadly heresy, was the son
of a fisherman. Subhaa was the daughter of a smith, Punnaa was
a slave girl. Caapaa was the daughter of a deer-stalker. Such
instances could be multiplied to show that the portals of Buddhism
were wide open to all without any distinction. It was also the
Buddha who attempted to abolish slavery for the first time in
the known history of the world.
The Buddha and Women
The Buddha raised the status of women and brought them to a realization
of their importance to society. He did not humiliate women,
but only regarded them as weak by nature. He saw the innate
good of both men and women and assigned to them their due place
in His Teaching. Sex is no obstacle to attaining Sainthood.
The Pali term used to denote women is "Maatugaama,"
which means 'mother-folk' or 'society of mothers'. As a
mother, woman holds an honourable place in Buddhism. The wife
is regarded as 'the best friend' (paramasakhaa) of the husband.
Though at first the Buddha refused to admit women into the Order,
later He was persuaded by the entreaties of the Venerable Ananda
and founded the Order of Bhikkhunis (Nuns).
Just as the Arahants Saariputta and Moggallaana were made the two
chief disciples in the Order of Monks, even so the Arahants
Kheemaa and Uppalavannaa were made the two chief female disciples
in the Order of Nuns. Many other female disciples too were named
by the Buddha Himself as amongst His most distinguished and pious disciples.
Women were placed under unfavourable circumstances before the advent
of the Buddha, and this new Order was certainly a great Blessing.
In this Order queens, princesses, daughters of noble families,
widows, bereaved mothers, helpless women, courtesans - all despite
their caste or rank - met on a common platform, enjoyed perfect
consolation and peace, and breathed that free atmosphere which
is denied to those confined in cottages and palatial mansions.
Many who otherwise would have fallen into oblivion distinguished
themselves in various ways and gained their emancipation by
seeking refuge in the Order.
His Tolerance towards Animals
The tolerance of the Buddha was extended not only to men and women
but to dumb animals as well. For it was the Buddha who banned
the sacrifice of poor beasts and admonished His followers to
extend their Loving-Kindness (Maitri) to all living beings.
No man has the right or power to destroy the life of another
living animal even for the sake of one's stomach as life is
precious to all.
The efficient way in which He maintained the discipline of His numerous
followers, especially His Orders of Bhikkhus and Bhikkhunis, further
testifies to His greatness as a teacher and administrator.
History of the Microscope
What is a microscope?
First a definition.
"If we consider a microscope to be an instrument by which
we can observe objects or parts of objects which are too minute to be visible to
the naked eye, and which can be used to investigate minute structures of plants
or animals, and thus bring to our knowledge facts not otherwise
ascertainable, then the microscope is a comparatively modern invention and dates
back only to about the end of the sixteenth century."
(Clay and Court, 1975)
If, however, we are to include the ability to magnify items that are already visible
to the naked eye, then we would have to go back to the Egyptians, who knew and
practiced the art of cutting and polishing stones.
From the Egyptians this art was extended to Greece and Italy.
Surviving artifacts include rock crystals in the form of convex lenses (~2600 B.C.E.).
The Greeks and Romans continued with these types of lenses up to the end of the
Roman Empire (~31 C.E.).
The Romans knew and practiced the art of glass blowing, and observed
that objects placed in a bulb filled with water appeared magnified.
The earliest written description of the action of the lens appears in the
writings of the Arabian scholar Alhazen (_PHONE_), in his "Optics Thesaurus
Alhazeni Arabius Basil." It
was not until much later, in the 13th century, that references to lenses began to
appear on a regular basis.
Roger Bacon (_PHONE_), a monk in Oxford who wrote on the topic of Nature, often
referenced the concepts and use of lenses.
Bacon knew of lenses and their abilities.
However, the type of lenses he talked about would be laid on books or
held in the hand just above the printed text.
He may even have known of, or predicted, the invention of the telescope, based on
his quote below:
"...it is easily seen that the greatest may appear the least and the contrary, and
far distant things may appear very near and conversely. So we may even make
the sun, moon and stars descend lower in appearance, and to be visible over the
heads of our enemies, and many things
of the like sort which persons unacquainted with such things would refuse to believe."
Eye glasses (~1285)
As with much of the history of optics, there are many reports about spectacles, but most
put the invention around 1285.
- The grave of Salvano d'Aramento degli Amati, a nobleman of Florence, bears a
statement that he invented spectacles but kept the process a secret.
- Alessandro della Spina of Pisa, who died in 1317, had an inscription on his tomb stating that he
had discovered how to make spectacles (possibly from Amati) and had made them himself.
- Yet another monk, Giordino da Rivalta, who died in 1305, said that making
glasses was one of the most useful arts and that it was only 20 years since
its invention. He also said he knew the inventor (1285).
Why such a long delay between the development of lenses and the invention
and use of spectacles?
One answer is that church doctrine did not allow for man to alter what
God had created.
The Telescope (1608)
Again, there are various accounts of people inventing the telescope.
Bacon's famous quote could lead one to believe that such an instrument was
available in the 1200's. In fact, there are several names that are listed as
possible inventors, Leonard Digges and William Bourne to name a few.
- The usually accepted date for the invention of the telescope is 1608.
It is based on a letter from Jacob Adriaanzoon (also called James
Metius) to the States General, in which he asked for exclusive rights to
sell an instrument he had invented by
which distant objects appeared larger and more distinct.
This date is further documented by Descartes in the first chapter of his
"Lumine" (1637). He writes about Metius and his instrument, which used convex
and concave glass positioned at the right distance at the ends of a tube to
make distant objects appear larger.
What about Galileo?
Many people think of Galileo when talking about telescopes. In fact, in
1609 Galileo did hear about and designed his own telescope that included both
convex and concave lenses.
- Galileo made many astronomical discoveries including the moons of Jupiter
and the phases of Venus.
- Helped the Medici family to even more wealth with the use of his
telescope? Fact or fiction?
- Also got in trouble with the Church. It was to Christina of Lorraine,
the granddaughter of Catherine de' Medici, that Galileo wrote his letter on science and scripture, the "Letter to the Grand Duchess Christina."
The Microscope (~1600)
Here the accounts are less hazy, but a cloud of controversy does still
exist. Based on the letters of William Boreel (the Dutch envoy to the
Court of France), the father and son team of Hans and Zacharias Jansen are the
inventors of the microscope. At least they are the first to have any
documentation that substantiates such a claim. Their microscope design had limitations:
- It could only be used for opaque objects
- Had a magnification of about 20X
Bioimaging Takes off
- 1665 - Robert Hooke (the Secretary of the Royal Society) publishes "Micrographia",
a folio of thirty-eight copper-plate illustrations of objects drawn with the
aid of his microscope. First to describe and coin the phrase
"cell" when observing a slice of cork (bark from an oak tree)
using a microscope power of 30X.
- 1673 - Antony van Leeuwenhoek a tradesman of Delft, Holland with no formal
training made some of the most important discoveries in biology. He
discovered bacteria, free-living and parasitic microscopic protists, sperm cells, blood cells
and more. All of this from a very simple device that could magnify up to about 270 times.
It took until 1839 before cells were finally acknowledged as the basic
units of life. Nearly two centuries!
- 1823 - Achromatic lenses introduced, now providing resolution of 1 micron or better.
- 1839 - Theodor Schwann and Matthias Schleiden formally propose the cell theory.
- 1840 - Donné publishes the first micrographs in France.
This touches off a debate over the merit of micrographs versus microscope drawings.
- ~1880 - Microscope lamp with filters. August Kohler had worked
out light source and condenser position to obtain the best image
- 1873 - Ernst Abbe published his work on the theory of the
microscope. Up to this point, much of the design of microscopes had
been trial and error. He made clear the difference between
magnification and resolution and criticized the practice of using eye pieces
with too high a magnification as "empty magnification." His
widely used formula to calculate resolution is based on his wave theory of
light (a standard statement of it is sketched after this timeline).
It was at this time Abbe began to collaborate with Carl Zeiss, a partnership which lasted
until his death. Abbe also started one of the first medical insurance funds and
pension funds, and introduced the 8-hour work day.
- 1873 - Ernst Leitz microscope is introduced with a revolving mount (turret)
for 5 objectives.
- 1879 - Walther Flemming discovers mitosis.
- 1878 - The oil immersion lens (cedar oil) is introduced, resulting in a
homogeneous optical path.
- ~1880 - Microtomes begin to appear for sample preparation. Up to this
point specimens were sectioned by hand with sharp knives.
- 1886 - Ernst Abbe designs apochromatic objective that brings red, yellow
and blue into one focus. These lenses required 10 or more elements.
With this final advancement the theoretic limit of resolution for visible light
microscopes had been reached (2000 angstroms).
- _PHONE_ - Louis Pasteur and Robert Koch both become engaged in
microscopy and the study of bacteria.
- 1904 - The first commercial UV microscope by Zeiss. The resolution
based on Abbe's formula is twice that of a visible light microscope.
- 1930 - Fritz Zernike discovered he could view unstained cells using the
phase angle of rays. It took until 1941 to bring a commercial
microscope to market.
Zernike visited the Zeiss factory in 1932 to present his method.
After reviewing Zernike's method an older scientist said; "If this really
had any practical value, then we would have invented it a long time
ago." In 1953 Zernike was awarded the Nobel Prize for his phase
The Electron Microscope
- 1931 - Max Knoll and Ernst Ruska construct the first electron microscope.
- 1933 - Ruska builds the first electron microscope that exceeds the
resolution of the light microscope. It has an accelerating voltage of
- 1934 - First electron micrograph of a biological sample: long-leaved
sundew fixed with osmium.
- 1937 - First scanning electron microscope is built.
- 1939 - Siemens supplies the first commercially available electron microscope.
Ultimately the power of the electron microscope was not realized until the
1950's when ultra-microtomes were built.
- 1951 - First ultra-microtome built by Porter and Blum.
- 1954 - First diamond knife for ultramicrotomy.
already available, which raises the larger question of when it makes sense to go with the commercial option. Commercial content often comes with support, whereas open materials may lack a sufficient user community to keep them up-to-date and bug-free. At the same time, commercial materials can be expensive, or may only be used for part of the course. Open educational resources can be particularly useful for reviewing core concepts, since students may not have textbooks covering prerequisite material.
Permalink for this paragraph 0 Ultimately faculty will use what works best for their course—or even produce it themselves. Most faculty begin by using already available content (whether open or commercial), in part because they do not necessarily have the time or technical expertise to create their own materials. But content isn't available to support some courses, especially interdisciplinary, thematic courses, such as the course on cystic fibrosis. Thus some faculty create their own blended learning resources. Developing such materials requires a "significant up-front time investment," so it is most efficient if course resources can be re-used. Some of these resources are so focused on the particular needs of one course that they may be difficult for other faculty to customize for their own courses. These examples suggest a range of approaches to OER, from using OLI as a cornerstone of the course to choosing course content from multiple sources to producing educational resources independently.
Permalink for this paragraph 1 Although results of Bryn Mawr's program are still preliminary, they are on the whole positive. Students report appreciating being able to practice until they understand a concept or approach, which enables them to improve without risking their grades. They also like getting immediate feedback on their performance. However, students see a computer-based resource as being a "waste of time" if they need to invest too much time in figuring out the interface, have to wait for content to load, or struggle to input data in an appropriate format. In particular, entering chemical symbols and mathematical formulas on a web browser has proven to be a hassle, a problem that Bryn Mawr is addressing by loading a WYSIWYG equation editor into Moodle.
Permalink for this paragraph 0 Likewise, instructors are on balance enthusiastic about their experiences with blended learning. Instructors most appreciate the dashboards provided by OLI and some other systems that allow them to monitor how either individual students or the entire class perform on particular projects. By examining data on student performance, instructors can tailor their lectures and assignments to meet student needs, practicing "agile teaching." Instructors say that students ask more sophisticated questions and can be more specific in pointing to where they are confused. In selecting OER, faculty may want to consider the extent of support for tracking the learner's progress. According to Spohrer, Moodle provides performance data on quizzes and lessons that are equivalent to OLI and many commercial systems, and Moodle developers continue to make improvements. Unfortunately, dashboards aren't as robust for older OLI courses, and with the Open University, the instructor does not have access to data about the learner's progress.
Permalink for this paragraph 0 Ultimately, according to Spohrer, blended learning can support several pedagogical innovations. OLI and other systems support formative assessment, so that students can detect gaps in their knowledge and instructors can draw on the learner data to improve teaching. Likewise, blended learning materials like OLI can promote mastery, providing low-stakes testing that enables students to practice until they master material without fearing that they will get a bad grade for making multiple attempts. As Spohrer says, liberal education aims to help students learn how to learn and direct their own education. OLI and other assessment-driven learning materials help students reflect on their own knowledge (metacognition), see where they have weaknesses, practice key skills, and grow in their knowledge and abilities. Likewise, these technologies can help faculty deliver the support that students need, eliminating some of the guesswork while retaining, perhaps even deepening, the human connection.
Permalink for this paragraph 0 In addition to Bryn Mawr, faculty members at other liberal arts colleges have been experimenting with OLI. At the Bryn Mawr workshop, Lisa Dierker of Wesleyan University spoke about how OLI helped her bring her "best stuff" to her statistics class. By providing customized hints and feedback, OLI delivers whatever support individuals need to make progress. The Learning Dashboards make it easier for faculty to track what students are learning, particularly in larger classes, and intervene where necessary. The statistics tutor engages students and taps into their curiosity, so that they are not just performing calculations but thinking about the data they are working with. Dierker did identify some weaknesses with OLI. For instance, she would like to customize an OLI course more easily, cutting out particular sections and pulling in components from other OLI courses. She also wonders how to incorporate social learning into OLI. Integrating OLI into existing courses can also be a challenge, since doing so requires changing the syllabus and instructional approach. OLI faces challenges such as scaling up its approach (given the significant expense of developing new courses); providing more flexibility and modularity so that instructors can more easily adapt OLI courses; and supporting different formats for reading. As Thille noted at the Bryn Mawr workshop, new authoring and course builder tools may help address these challenges, as well as a partnership with Knowledge Forum to explore integrating OLI with social spaces.
8.3 Open education and the MOOC: University of Mary Washington
Permalink for this paragraph 2 The massively open online course (MOOC) can be seen as a way of combining open education with online learning. Jim Groom has pioneered a different form of that emerging class structure at the University of Mary Washington, through his innovative digital storytelling class, DS106. Like other MOOCs, DS106 is open to interested learners without formal campus registration. However, it differs in several important ways. First, it is anchored firmly on a face-to-face, registrar-enrolled UMW class. That component matters crucially to the class, impacting even distance-only learners and audiences. In fact, the interpenetration of online and offline ways of learning helps structure the class. Second, it is not as "massive" as others. Third, instructor and learners collaboratively create class content. Students actually help design the class itself. Fourth, there is no single, stable DS 106 model. It changes with each iteration. Other classes at other institutions can use and remix DS106 components, in the spirit of openness. Groom reports being inspired by the "Looking for Whitman" inter-institutional class, which linked together courses on Whitman at four different institution through a shared curricular focus and the use of BuddyPress, a plug-in that brings social networking features to the blogging platform WordPress.
Permalink for this paragraph 0 Moreover, DS106 not only performs online teaching, but also encourages students to reflect on what it means to teach and learn in the digital world. This last point connects DS106 in logistical, operational, and content levels, as students use technologies to learn, then to socially respond to that learning. For Groom, the class allows students (and instructors) to ask major questions for liberal education: what does it mean to teach interactively? How can we use the online experience to extend the face-to-face? How can we more fully use the Web in teaching and learning? If we are still in the early days of online learning, and MOOCs are a transition stage from classical university learning, to what new pedagogical forms do they point?
Permalink for this paragraph 0 That next stage could be a liberal arts approach to distance learning. Groom argues that DS106 is structured in ways consonant with the liberal education tradition: high-level student-learner interaction, a focus on community relationships, and a civic emphasis, using tools while reflecting critically on them. The class ethos of creativity, intimacy, and high-touch experience fully translates the liberal arts online. For Groom, open education is essential to this way of thinking. For example, since classes are not housed in digital siloes, inter-class commenting occurs, which expands class discussion while building a larger community. Similarly former students and alumni can reconnect with the class and current students, with alumni serving as mentors. At a curricular level DS106's openness makes it easier for classes to connect across disciplinary boundaries. Groom notes that humanities and social sciences tend to be more fully represented in this than STEM fields, but has hopes for connections across the entire curriculum. "The DS106 experience makes the life of the mind… more transparent."
8.4 Faculty production of open textbooks: Southwestern University, Washington and Lee University, and DePauw University
Permalink for this paragraph 0 A group of classicists is collaboratively producing an open ebook for Greek courses. The Cyropaedia is based on one text (so far), Xenophon's Education of Cyrus. In a sense the project resembles a familiar primary source documentary presentation: a document or collection of documents, complemented by student-oriented annotations, not unlike a Norton Critical Edition. But the Cyropaedia differs by taking advantage of two elements of open education. First, an open source platform, the WordPress blogging engine, hosts the documentation, offering the classic advantages of open source: greater control over format, freedom from external policy changes, and no licensing cost. Second, at some point Cyropaedia will be released to the world for open commentary. Any user will be able to add questions, translations, reflections, or answers to any item within Cyrus' text, or in response to other commentators. This is also an example of open source's versatility, as the software enabling that commentary is another piece of open source software, the CommentPress plugin (developed by the Institute for the
yank the text into the buffer (copy), or you may use d to cut (c also cuts, but leaves you in insert mode). p pastes after the cursor, P pastes before. V, Visual Line mode, is the same for entire lines. Ctrl+v is for blocks of text.
Note: Whenever you delete something, that something is placed inside a buffer and is available for pasting.
Search and Replace
To search for a word or character in the file, simply type / followed by the characters you are searching for and press Enter. To jump to the next match press n; press N for the previous match.
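For example (the search term "error" is an arbitrary illustration):
/error     jump to the next occurrence of "error"
?error     jump to the previous occurrence of "error"
n          repeat the search in the same direction
N          repeat the search in the opposite direction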
To search and replace, use the substitute command :s. The syntax is: [range]s/{pattern}/{replacement}/[arguments]. For example:
Command Outcome
:s/xxx/yyy/ Replace the first occurrence of xxx with yyy on the current line
:s/xxx/yyy/g Replace all occurrences of xxx with yyy on the current line
:s/xxx/yyy/gc Replace all occurrences on the current line, asking for confirmation
:%s/xxx/yyy/g Replace all occurrences of xxx with yyy in the whole file
You can use the global command :g to search for a pattern and execute a command on each matching line. The syntax is: [range]g/{pattern}/[cmd].
Command Outcome
:g/^#/d Delete all lines that begin with #
:g/^$/d Delete all lines that are empty
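The global and substitute commands can also be combined; the following line (the pattern names are arbitrary examples) runs a substitution only on lines that match the :g pattern:
:g/TODO/s/foo/bar/g Replace all occurrences of foo with bar, but only on lines containing TODO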
Saving and Quitting
To save and/or quit, you will need to use Ex mode. Ex mode commands are preceded by a :. To write a file use :w or if the file doesn't have a name :w filename. Quitting is done with :q. If you choose not to save your changes, use :q!. To save and quit :x.
Additional Commands
1. Pressing s will erase the current letter under the cursor, and place you in insert mode. S will erase the whole line, and place you in insert mode.
2. o will create a newline below the line and put you insert mode, O will create a newline above the line and put you in insert mode.
3. yy will yank an entire line
4. cc will delete the current line and place you in insert mode.
5. * will search forward for the word under the cursor; n will jump to the next match
Configuration
Vim's user-specific configuration file is located in the home directory, ~/.vimrc, and its files are located inside ~/.vim/. The global configuration file is located at /etc/vimrc, with global files inside /usr/share/vim/.
The Vim global configuration in Arch Linux is very basic and differs from many other distributions' default Vim configuration file. To get some commonly expected behaviors (such as syntax highlighting, returning to the last known cursor position), consider using Vim's example configuration file:
# mv /etc/vimrc /etc/vimrc.bak
# cp /usr/share/vim/vim74/vimrc_example.vim /etc/vimrc
Search Wrapping
With this option, searching for the next match jumps back to the beginning of the file when the end is reached. Similarly, searching for the previous match jumps to the end of the file when the start is reached.
set wrapscan
Spell Checking
set spell
With this setting, Vim will highlight incorrectly spelled words. Place the cursor on a misspelled word and enter z= to view spelling suggestions.
Only English language dictionaries are installed by default, more can be found in the official repositories. To get the list of available languages type:
# pacman -Ss vim-spell
Language dictionaries can also be found at the Vim FTP archive. Put the downloaded dictionary (or dictionaries) into the ~/.vim/spell folder and set the dictionary by typing: :setlocal spell spelllang=LL
Tip:
- To enable spell checking for LaTeX (or TeX) documents only, add autocmd FileType tex setlocal spell spelllang=en_us into your ~/.vimrc or /etc/vimrc, and then restart Vim. For spell checking of languages other than English, simply replace en_us with the value appropriate for your language.
- To enable spelling in two languages (for instance English and German), add set spelllang=en,de into your ~/.vimrc or /etc/vimrc, and then restart Vim.
- You can enable spell checking for arbitrary file types (e.g. *.txt) by using the FileType plugin and a custom rule for file type detection. To enable spell checking for any file ending in *.txt, create the file /usr/share/vim/vimfiles/ftdetect/plaintext.vim, and insert the line autocmd BufRead,BufNewFile *.txt setfiletype plaintext into that file. Next, insert the line autocmd FileType plaintext setlocal spell spelllang=en_us into your ~/.vimrc or /etc/vimrc, and then restart Vim.
Syntax Highlighting
To enable syntax highlighting (Vim supports a huge list of programming languages):
:filetype plugin on
:syntax on
Using the Mouse
Vim has the ability to make use of the mouse, but requires xterm's mouse reporting feature.
1. See the example .vimrc below to enable the mouse.
2. Use xterm. In your console: export TERM=xterm-256color or export TERM=xterm
Notes:
- This even works in PuTTY over SSH.
- In PuTTY, the normal highlight/copy behaviour is changed because Vim enters visual mode when the mouse is used. To select text with the mouse normally, hold down the Shift key while selecting text.
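The example .vimrc referred to above is not included in this extract; the usual option for enabling the mouse (an assumption based on standard Vim behaviour, not a quotation from the original article) is:
set mouse=a
With mouse=a the mouse works in all modes; a more restrictive value such as mouse=n limits it to Normal mode.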
Making the Arrow Keys Traverse Line Breaks
By default, pressing the Left arrow key at the beginning of a line, or the Right arrow key at the end of a line, will not let the cursor traverse to the previous, or following, line.
The default behavior can be changed by adding set whichwrap=b,s,<,>,[,] to your ~/.vimrc file.
Example ~/.vimrc
An example Vim configuration:
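The original example file is not reproduced in this extract. The following is a minimal sketch assembled only from the settings discussed above; adjust to taste:
" ~/.vimrc - minimal example assembled from the options covered in this article
filetype plugin on          " enable filetype-specific plugins
syntax on                   " enable syntax highlighting
set wrapscan                " searches wrap around the end of the file
set spell spelllang=en_us   " highlight misspelled words
set whichwrap=b,s,<,>,[,]   " let movement keys traverse line breaks
set mouse=a                 " enable the mouse (requires a terminal with mouse reporting)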
Plugins
Adding plugins to Vim can increase your productivity. The vim-plugins group has many plugins to choose from (there are more in the repositories, e.g. vim-supertab).
pacman -Ss vim-plugins
cscope
Cscope is a tool for browsing a project. By navigating to a word/symbol/function and calling cscope (usually with shortcut keys) it can find: functions calling the function, the function definition, and more. Several setup steps are required before a code base can be searched.
Install the cscope package.
Copy the cscope default file where it will be automatically read by vim:
mkdir -p ~/.vim/plugin
wget -P ~/.vim/plugin _URL_
Create a file which contains the files you wish cscope to index (cscope can handle many languages, but this example finds .c, .cpp, and .h files):
cd /path/to/projectfolder/
find . -type f -print | grep -E '\.(c(pp)?|h)$' > cscope.files
Create database files that cscope will read:
cscope -bq
Note: You must browse your project files from this location or set and export the $CSCOPE_DB variable, pointing it to the cscope.out file.
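For example (the project path here is hypothetical):
export CSCOPE_DB=/path/to/projectfolder/cscope.out
Setting and exporting this variable lets the cscope plugin find the database regardless of the directory Vim is started from.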
Default keyboard shortcuts
Ctrl-\ followed by one of:
c: Find functions calling this function
d: Find functions called by this function
e: Find this egrep pattern
f: Find this file
g: Find this definition
i: Find files #including this file
s: Find this C symbol
t: Find assignments to
Feel free to change the shortcuts.
" Maps Ctrl-c to find functions calling the function under the cursor
nnoremap <C-c> :cs find c <C-R>=expand("<cword>")<CR><CR>
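The same pattern can be reused for the other queries; for instance, a hypothetical mapping for "find this definition" (the g query) could look like:
" Maps Ctrl-g to jump to the definition of the symbol under the cursor
nnoremap <C-g> :cs find g <C-R>=expand("<cword>")<CR><CR>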
Taglist
Taglist provides an overview of the structure of source code files and allows you to efficiently browse through source code files in different programming languages.
Install the vim-taglist package.
Useful options to put in ~/.vimrc:
let Tlist_Compact_Format = 1
let Tlist_GainFocus_On_ToggleOpen = 1
let Tlist_Close_On_Select = 1
nnoremap <C-l> :TlistToggle<CR>
Merging Files (vimdiff)
Vim includes a diff editor, a program that aids the merging of differences between two (or more, with limited usefulness) files. vimdiff opens a horizontally multi-paned view that colorfully highlights differences, each pane containing one of the files to be examined/edited. Vim has several modes, two important ones being Insert mode, which lets text be edited, and Command mode, which lets the cursor be moved and commands be issued.
Stage 2 in the Design Thinking Process: Define the Problem
Once you've empathized with your users, you can move on to the second stage of the Design Thinking process and define the problem your users need you to solve.
If you've read our introduction to User Experience (UX) Design, you'll know that UX is essentially about solving the problems that prevent users from accomplishing what they want to do with our product.
Before you can go into problem-solving mode, however, there is one very crucial step that you need to complete—one that will shape your entire design project from start to finish. In the Design Thinking process, this step is what's known as the "define" stage.
As the second step in the Design Thinking process, the define stage is where you'll establish a clear idea of exactly which problem you will solve for the user. You'll then shape this into a problem statement which will act as your northern star throughout the design process.
In this guide, we'll tell you everything you need to know about this stage in the Design Thinking process, as well as how to define a meaningful problem statement.
Here's what we'll cover:
1. What is the define stage and why is it necessary?
2. What is a problem statement?
3. How to define a meaningful problem statement
4. What comes after the define phase?
1. What is the define stage and why is it necessary?
As the second step in the Design Thinking process, the define stage is dedicated to defining the problem: what user problem will you be trying to solve? In other words, what is your design challenge?
The define stage is preceded by the empathize phase, where you'll have learned as much about your users as possible, conducting interviews and using a variety of immersion and observation techniques. Once you have a good idea of who your users are and, most importantly, their wants, needs, and pain-points, you're ready to turn this empathy into an actionable problem statement.
The relationship between the empathize and define stages can best be described in terms of analysis and synthesis. In the empathize phase, we use analysis to break down everything we observe and discover about our users into smaller, more manageable components—dividing their actions and behaviour into "what", "why" and "how" categories, for example. In the define stage, we piece these components back together, synthesising our findings to create a detailed overall picture.
Why is the define stage so important?
The define stage ensures you fully understand the goal of your design project. It helps you to articulate your design problem, and provides a clear-cut objective to work towards. A meaningful, actionable problem statement will steer you in the right direction, helping you to kick-start the ideation process (see Stage Three of the Design Thinking process) and work your way towards a solution.
Without a well-defined problem statement, it's hard to know what you're aiming for. Your work will lack focus, and the final design will suffer. Not only that: in the absence of a clear problem statement, it's extremely difficult to explain to stakeholders and team members exactly what you are trying to achieve.
With this in mind, let's take a closer look at problem statements and how you can go about defining them.
2. What is a problem statement?
A problem statement identifies the gap between the current state (i.e. the problem) and the desired state (i.e. the goal) of a process or product. Within the design context, you can think of the user problem as an unmet need. By designing a solution that meets this need, you can satisfy the user and ensure a pleasant user experience.
A problem statement, or point of view (POV) statement, frames this problem (or need) in a way that is actionable for designers. It provides a clear description of the issue that the designer seeks to address, keeping the focus on the user at all times.
Problem or POV statements can take various formats, but the end goal is always the same: to guide the design team towards a feasible solution. Let's take a look at some of the ways you might frame your design problem:
- From the user's perspective: "I am a young working professional trying to eat healthily, but I'm struggling because I work long hours and don't always have time to go grocery shopping and prepare my meals. This makes me feel frustrated and bad about myself."
- From a user research perspective: "Busy working professionals need an easy, time-efficient way to eat healthily because they often work long hours and don't have time to shop and meal prep."
- Based on the four Ws—who, what, where, and why: "Our young working professional struggles to eat healthily during the week because she is working long hours. Our solution should deliver a quick and easy way for her to procure ingredients and prepare healthy meals that she can take to work."
As you can see, each of these statements addresses the same issue—just in a slightly different way. As long as you focus on the user, what they need and why, it's up to you how you choose to present and frame your design problem.
We'll look at how to form your problem statement a little later on. Before we do, let's consider some problem statement "do"s and "don't"s.
What makes a good problem statement?
A good problem statement is human-centered and user-focused. Based on the insights you gathered in the empathize phase, it focuses on the users and their needs—not on product specifications or business outcomes. Here are some pointers that will help you create a meaningful problem statement:
- Focus on the user: The user and their needs should be front and center of your problem statement. Avoid statements that start with "we need to…" or "the product should", instead concentrating on the user's perspective: "Young working professionals need…", as in the examples above.
- Keep it broad: A good problem statement leaves room for innovation and creative freedom. It's important to keep it broad enough to invite a range of different ideas; avoid any references to specific solutions or technical requirements, for example.
- Make it manageable: At the same time, your problem statement should guide you and provide direction. If it's too broad in terms of the user's needs and goals, you'll struggle to hone in on a suitable solution. So, don't try to address too many user needs in one problem statement; prioritize and frame your problem accordingly.
Bearing these things in mind, let's explore some useful methods for creating a meaningful problem statement.
3. How to write a meaningful problem statement
Writing a meaningful problem statement can be extremely challenging. How do you condense all the complexities of the user's conscious and unconscious desires into one simple, actionable statement? Fortunately, there are some tried-and-tested methods that will help you do just that.
Space saturation and group
One of the first steps in defining a problem statement is to organize your findings from the empathize phase. Space saturation and group is a popular method used by design thinkers to collect and visually present all observations made in the empathize phase in one space. As the name suggests, you will literally "saturate" a wall or whiteboard with Post-It notes and images, resulting in a collage of artifacts from your user research.
As the Stanford d.school explains: "You space saturate to help you unpack thoughts and experiences into tangible and visual pieces of information that you surround yourself with to inform and inspire the design team. You group these findings to explore what themes and patterns emerge, and strive to move toward identifying meaningful needs of people and insights that will inform your design solutions."
This method should involve anyone who took part in the empathize stage of the design project, and should take no longer than 20-30 minutes.
The four Ws
Asking the right questions will help you put your finger on the right problem statement. With all your findings from the empathize phase in one place, ask yourself the four Ws: Who, what, where, and why?
- Who is experiencing the problem? In other words, who is your target user; who will be the focus of your problem statement?
- What is the problem? Based on the observations you made during the empathize phase, what are the problems and pain-points that frequently came up? What task is the user trying to accomplish, and what's standing in their way?
- Where does the problem present itself? In what space (physical or digital), situation or context is the user when they face this problem? Are there any other people involved?
- Why does it matter? Why is it important that this problem be solved? What value would a solution bring to the user, and to the business?
Approaching your observations with these four questions in mind will help you to identify patterns within your user research. In identifying the most prevalent issues, you'll be one step closer to formulating a meaningful problem statement.
The five whys
Another question-based strategy, the five whys technique can help you delve deeper into the problem and drill down to the root cause. Once you've identified the root cause, you have something that you can act upon; somewhere specific to focus your problem-solving efforts.
Let's take our previous example of the young working professional who wants to eat healthily, but finds it difficult to do so. Here's how you might use the five whys to break the problem down and get to the root cause:
1. Why is she not eating healthily? She orders takeaway everyday.
2. Why does she order takeaway everyday? Her fridge and cupboards are empty.
3. Why are the fridge and
| 1.837296
|
Zyphra/Zyda-2
|
As mentioned above, in the case of ELMs, metal cations act not only as a simple connecting node for the architecture, but also as fine tuning of pore structures.
3.4. Effect of Ligand
In general, the length of the ligands is a key factor in tuning the coordination space of PCPs/MOFs [143–150]. Yaghi's group reported the archetypal study of the relationship between ligand length and the amount of gas adsorption using a series of various ligands [43]. According to their studies, PCPs/MOFs with longer ligands tend to have a larger coordination space: when the ligand is changed from terephthalic acid (IRMOF-1, one phenyl (Ph) ring) and 4,4′-biphenyldicarboxylic acid (IRMOF-10, two Ph rings) to 4,4′-terphenyldicarboxylic acid (IRMOF-16, three Ph rings), the calculated percentage of free volume increases from 79.2% (one Ph) to 87.0% (two Ph) to 91.1% (three Ph).
In contrast to the Yaghi's rigid PCPs/MOFs series, flexible ELMs show a reverse tendency. Although the extended ligand, 4,4′-bis(4-pyridyl)benzene (bpb) (11.4 Å) is 63% longer than bpy (7.0 Å) [151], ELM-31b ([Ni(bpb)2(BF4)2]) shows a 40% smaller adsorbed amount (W0(N2) = 212 mg/g at 77 K) compared to that of ELM-31 ([Ni(bpy)2(BF4)2])(W0(N2) = 350 mg/g at 77 K). This reverse tendency should be understood from the unique adsorption mechanism of ELMs. In the case of "hard" PCPs/MOFs, which show type I physisorption isotherms, there is a tendency for the larger free volume to accommodate more gas. On the other hand, "flexible" ELMs adsorb gas through clathrate formation; the adsorption depends on the stability of the gas-CP/MOF clathrates. Since the clathrates of larger square grids with longer ligands (bpb ligand, 15 × 15 Å) are unstable because of the weak interaction between guests and hosts, compared to small square grids (bpy ligand, 11 × 11 Å), the amount of adsorbed gas tends to decrease for the longer ligand system. Fujita et al. reported the example of flexible two-dimensional layer stacking-type PCP/MOF with extended ligands. This PCP/MOF varies its structure with solvent exchange [152]. To our best knowledge, the ELM-31b is the only example of two-dimensional stacking PCP/MOF with extended ligands, which can change its structure by gas molecules, which interact with host framework by weaker interaction.
4. Various Two-Dimensional Square Grid Stacking-Type (2DSG) CPs/MOFs 4.1. Two-Dimensional Square Grid Stacking-Type CPs/MOFs: Structure and Functions
An aromatic compound containing nitrogen such as pyridine is one of the most popular coordinative functional groups, and hence many CPs/MOFs have been synthesized using exobidentate ligands bearing two pyridyl groups, such as bpy [153,154]. There are quite a few examples of metal-organic square networks with linear bifunctional spacer ligands [155]. To synthesize such kinds of CPs/MOFs, various ligands have been used: short or long [156161], rigid or flexible [162,163], linear or inflectional [164,165], with functional group(s) [159,166170], rotaxane-type [171], chiral-type [172], and so on. An example of a typical linear, rigid, exobidentate ligand would be 4,4′-bipyridine. A number of two-dimensional square grid stacking-type CPs/MOFs (2DSG-CP/MOF) with this ligand has been reported [173176]. Because of the neutral nature of bpy, 2DSG-CPs/MOFs necessarily contain counter anions to compensate the positive charges of metal ions, and these negatively charged counterparts increase the diversity of 2DSG-CPs/MOFs. Various 2DSG-CPs/MOFs constructed with bpy and unidentate coordination anion are listed in Table 4.
Since almost all the listed 2DSG-CPs/MOFs include guest molecules, this means the 2DSG-CPs/MOFs are potentially acting as porous materials. From the standpoint of host/guest chemistry, the synthesis of hybrid-type nonlinear optical materials was attempted with the combination of 2DSG-PCPs/MOFs host and guest molecules, such as p-nitroaniline [185]. On the other hand, there is no report on the gas adsorption or structural transformation of the CPs/MOFs listed in Table 4, except for ELMs. In addition, we cannot synthesize ELMs family with any counter anions other than BF4, OTf, and CF3BF3. If the scope of ligands widens from bpy to pyrazine, 1,4-bis(4-pyridyl)benzene, and 4,4′-bis(4-pyridyl)biphenyl, to the best of our knowledge, there is no report on the gate gas adsorption of 2DSG-CPs/MOFs or the structural transformation of 2DSG-CPs/MOFs caused by gas molecules. In the case of longer ligand, it is reported that the structural transformation of 2DSG-CPs/MOFs constructed with 4,4′-bis(4-pyridil)biphenyl ligand. However, the structural change was not caused by gas molecules but by the exchange of aromatic guest molecules [160]. The reason why all of the listed 2DSG-CPs/MOFs except the ELMs do not show the gas adsorption is considered to be as follows: (1) the cavity of the metal-organic square does not act as an open pore because of the close packing of the layers and the difficulty of structural transformation of the layers; (2) the cavity is already occupied by non-removable guest molecules and there is no space for gas adsorption; and (3) the 2DSG structure collapses when the guest is released. The necessary requirement for gas adsorption is conservation of the framework structure after the guest release. The collapse of the CP/MOF structure with the guest release can be seen as a common behavior, and in the case of 2D CP/MOFs [217], sometimes a turbostratic disorder occurs in the case of 2DSG CP/MOFs [218]. The present collapse assumption must be reconsidered, based on the fact that ELM-22 (Co) retains its 2DSG structure without any guest.
Hereafter, let us consider the role of counter anions. It is well known that these play a significant role in regulating the structure of CPs/MOFs [122,159,219221]. In addition, the counter anions sometimes play a significant role in the function of CPs/MOFs. However, in the case of flexible PCPs/MOFs [222], the role of counter anions in structural transformation is not necessarily clear. The common features of the counter anions of ELMs are: (1) monodentate; (2) mono-valent; (3) weak coordination ability; (4) occupation of the apical positions; and (5) participation of fluorine atoms. In the case of non-ELM 2DSG-CPs/MOFs, NCS, which have relatively strong interaction with metal cations, tend to occupy the apical positions. Imamoto reported the synthesis of [Ni(bpy)2(NCS)2] [188] of a fundamental structure is similar to ELM-22 (Co-OTf) having no guest and precise layer stacking. Therefore, the non-porous character of the Imamoto's Ni-CP/MOF may be attributed not to structural friability but to the difficulty in the structural transformation for the generation of micropore. Jacobson reported the Co version of NCS-2DSG-CP/MOF, [Co(bpy)2(NCS)2]·2Et2O [179]. Although this compound easily releases two ether molecules, the non-guest state does not induce gas adsorption. In this case, the initial porous state is supposed to transform into a non-porous form (a compact layer stacking form) with the guest release. Imamoto and Jacobson did not mention the gas adsorption ability of their CPs/MOFs. From our adsorption experiment (N2 at 77 K and CO2 at 273 K, after the pretreatment at 363 K for three hours under reduced pressure), Ni-CPs/MOFs did not show any gas adsorption ability. We also checked Fujita's [Cd(bpy)2(NO3)2], but this CP did not show N2 gas adsorption ability at 77 K.
In the case of other anions, such as NO3, ClO
5 editions of Outlines & Highlights for Organic Chemistry by Brown found in the catalog.
August 29, 2007
Written in English
The Physical Object: 464 pages
Never HIGHLIGHT a Book Again! Includes all testable terms, concepts, persons, places, and events. Cram Just the FACTS studyguides give all of the outlines, highlights, and quizzes for your textbook, with optional online comprehensive practice tests. Title: Studyguide for Organic Chemistry, Enhanced Edition by Brown, William H.
"[The] structural theory is of extreme simplicity. It assumes that the molecule is held together by links between one atom and the next: that every kind of atom can form a definite small number of such links: that these can be single, double or triple: that the groups may take up any position possible by rotation round the line of a single but not round that of a double link: finally that. Rent Organic Chemistry 8th edition () today, or search our site for other textbooks by William H. Brown. Every textbook comes with a day "Any Reason" guarantee. Published by Brooks Cole. Organic Chemistry 8th edition solutions are available for this textbook. Need help ASAP? We have you covered with 24/7 instant online : $
Outlines & Highlights for Organic Chemistry by McMurry, ISBN ( publication) on *FREE* shipping on qualifying cturer: AIPI. Outlines & Highlights For Organic Chemistry (With Organic Chemistrynow) By William H. Brown, Christopher S.
Foote, Brent L. Iverson, Isbn by Cram Textbook Reviews Cram Textbook ReviewsPages: Find many great new & used options and get the best deals for Outlines and Highlights for Organic Chemistry by William H Brown, Isbn: by Cram Textbook Reviews Staff (, Paperback, New Edition) at the best online prices at eBay.
Free shipping for many products. Outlines and Highlights for Organic Chemistry by Brown (Perfect, New Edition, Study Guide). Be the first to write a review.
Studyguide for Organic Chemistry by Foote, Brown &, ISBN Item Condition: New. Will be clean, not soiled or. Outlines & Highlights for Organic Chemistry by Brown strengths of the Brown book are, a reasonable organization, a fairly good mechanistic approach to organic chemistry, the biological highlights are very strong, very good selection of problems in chapter and at the end, the reaction summary at end of chapter is very useful, and the mechanisms within colored areas in the text help them stand /5(21).
It has been 47 years since I took introductory organic chemistry, and this book is an excellent re-introduction to the subject. My only problem with the book is that none of the answers to the exercises are given in the book.
They are, however, answered in considerable detail in the separate, but costly, study guide/solutions manual/5(25). This item: Schaum's Outline of General, Organic, and Biochemistry for Nursing and Allied Health, Second Edition by George Odian Paperback $ In stock on June 4, Order it now/5(6).
ORGANIC CHEMISTRY is a student-friendly, cutting edge introduction for chemistry, health, and the biological sciences majors. The Eighth Edition builds on unified mechanistic themes, focused problem-solving, applied pharmaceutical problems, and biological examples.
This is an electronic version of the print textbook. Due to electronic rights restrictions, some third party content may be suppressed.
Editorial review has deemed that any suppressed content does not materially affect the overall learning experience. The publisher reserves the right to remove. Need homework and test-taking help in Organic Chemistry.
These articles can enhance your knowledge of advanced chemistry topics. Free Practice Questions. Algebra I: + FREE practice questions Removing #book# from your Reading List will also remove any bookmarked pages associated with this title.
Find many great new & used options and get the best deals for Outlines and Highlights for Introduction to Organic Chemistry by Brown, William H / Poon, Thomas, Isbn: by Cram Textbook Reviews Staff (, Paperback, New Edition) at the best online prices at eBay. Free shipping for many products.
ORGANIC CHEMISTRY is a student-friendly, cutting edge introduction for chemistry, health, and the biological sciences majors. In the Eighth Edition, award-winning authors build on unified mechanistic themes, focused problem-solving, applied pharmaceutical problems and biological examples.
Find many great new & used options and get the best deals for Outlines and Highlights for Organic Chemistry by William H Brown, Christopher S Foote, Brent L Iverson, Isbn: by Cram Textbook Reviews Staff (, Paperback, New Edition) at the best online prices at eBay.
Free shipping for many products. Loose leaf. Condition: New. 6th edition. Introduction to Organic Chemistry, 6th Edition provides an introduction to organic chemistry for students who require the fundamentals of organic chemistry. Shipping may be from multiple locations in the US or from the UK, depending on stock availability.
pages. It is not the best Organic Chemistry textbook out there, but it's not the worst, either. The first half of the text makes the transition from general to organic MUCH easier, but after the first half of the book (about Chapter 10 or so), then the book changes form and becomes difficult to follow at times/5.
Organic Chemistry By Brown, Foote, Iverson and Anslyn (sixth edition) is written by William H. Brown (Beloit College), Christopher S. Foote (University of California, Los Angeles), Brent L. Iverson (University of Texas, Austin), Eric V. Anslyn (University of Texas, Austin) and Chapter 29 was originally contributed by Bruce M.
Novak (North Carolina State University) and it is published by. I had to read this book for Organic Chemistry. As far as textbooks go, this one was easy to understand and very well laid out. It is also one of the books that the Organic Chemistry section of the Dental Admissions Test is taken from, so I was lucky to use it in my class/5.
Providing a modern introduction to organic chemistry for students majoring in chemistry, health, and the biological sciences, ORGANIC CHEMISTRY, Sixth Edition, is both student-friendly and cutting-edge and incorporates the latest advances in the field.
Professors Brown, Iverson, and Anslyn have all won teaching awards at their respective schools, and they use their skills to build upon the 4/5(3). Advanced Organic Chemistry Francis A. Carey, Richard A. Sundberg Paperback, Pages 5th Edition, ISBN Springer. In all probability probably the most trusted and biggest-selling textual content material for pure chemistry merely obtained larger.
Updated with additional protection of nuclear magnetic resonance spectroscopy, expanded with new end-of-chapter mechanism points and Comply with Your Scientific Reasoning and Analysis questions, and enhanced with OWLv2, the most recent mannequin of the primary on.
Confusing Textbooks? Missed Lectures? Not Enough Time? Fortunately for you, there's Schaum's Outlines. More than 40 million students have trusted Schaum's to help them succeed in the classroom and on exams. Schaum's is the key to faster learning and higher grades in every subject.
Each Outline presents all the essential course information in an easy-to-follow, topic-by-topic format. Covering the groundwork of organic chemistry, a dynamic and expanding area of science, the second edition of this introductory text shows the interrelation between organic chemistry and the biological and health sciences clearly and concisely, making it easy to understand by students in a one-semester organic chemistry course. Overhead Transparency Acetates for Organic Chemistry by William H. Brown and a great selection of related books, art and collectibles available now at
| 1.69571 | m-a-p/FineFineWeb |
The Debye expansion integrals obtained by application of the Modified Watson Transformation and Debye series expansion to the Mie series for the high frequency plane wave transmitted into a double-negative (DNG) cylinder are solved in the geometrically lit regions of the corresponding Debye series terms. The Debye series expansion is made up to the possible maximum term after which double-ray field formation is first observed. Using the steepest descent method and the geometrical optics approximation, the role of the lower ray in the double-ray field formation is pointed out. For refractive indices satisfying |n| ≥ 10, it is shown that the maximum Debye series term index up to which simple single-ray tracing can be performed is larger for a DNG cylinder than for a DPS cylinder, and the difference between the term indices increases as |n| increases.
This paper discusses the characterization of landmines by using the electromagnetic induction (EMI) technique. The proposed approach is based on the identification of the physical and geometrical properties of a landmine from the sensor response. In such an identification, solving an inverse problem is unavoidable. First, we simulate the landmine signature by solving a direct problem using the finite element method, which constitutes the direct model. After that, we determine the landmine characteristics by using an inverse model based on a cost function optimization. This model is based on an iterative process that couples finite element analysis and Particle Swarm Optimization (PSO). In this step, we apply two PSO techniques, the Standard PSO (SPSO) and the Improved PSO (IPSO), and discuss the problem of local minima of the cost function. The proposed iterative model is applied to determine the conductivity, geometry, and depth of a metallic landmine from its signature measured by EMI. The numerical solution gives good results for the identification of the landmine.
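The inversion loop described above pairs a forward (direct) model with a population-based optimizer. As a rough illustration of how a standard PSO could drive such a fit, here is a minimal sketch in Python; the forward_model below is a toy stand-in for the paper's finite element direct model, and all parameter ranges and PSO constants are assumptions for illustration only.

```python
import numpy as np

def forward_model(params, freqs=np.linspace(1e3, 100e3, 20)):
    # Hypothetical stand-in for the FEM direct model: maps (conductivity, radius, depth)
    # to a simulated EMI response vector. Not the real model from the paper.
    sigma, radius, depth = params
    return (sigma * radius**3 / (depth**3 + 1e-9)) * freqs / (freqs + 1e4)

def cost(params, measured):
    return np.sum((forward_model(params) - measured) ** 2)

def standard_pso(measured, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p, measured) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # SPSO velocity update
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p, measured) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Usage: recover assumed "true" parameters from a synthetic measurement
measured = forward_model((5.8e6, 0.05, 0.10))            # conductivity (S/m), radius (m), depth (m)
best, err = standard_pso(measured, bounds=[(1e5, 1e7), (0.01, 0.15), (0.02, 0.5)])
```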
This paper develops a more precise analytical model for calculating salient pole synchronous machine (SPSM) inductances in case of general eccentricity including static, dynamic and mixed eccentricities. The developed method is based on the modified winding function approach (MWFA) which accurately considers variable air gap function and leads to pure analytical expressions of inductances. Available analytical techniques, based on MWFA, approximate the air gap function and simplify the geometrical model of SPSM, whereas, in this study, the influence of the openings between the rotor salient poles has been taken into account by using an effective form of rotor pole shoes. Using this technique, flux fringing effect is considered. By taking into account machine geometry, type of windings connection and flux fringing effect, this method is able to model most of the important features of an eccentric SPSM. The developed analytical expressions can calculate time varying inductances of SPSMs with any eccentricity type and degree in the frame of a single program. Simulation results for static eccentricity are compared with experimental tests on a laboratory generator to verify accuracy of the proposed model.
A unit cell based numerical approach to model the metal powders and metal-dielectric composites at microwave frequencies is proposed. The unit cell based numerical modeling helps to compute the equivalent reflection and transmission coefficients of these materials, which are commonly used measured parameters at RF and microwave frequencies. The computation of the reflection and transmission coefficients of these artificial dielectric samples also facilitates the determination of their effective constitutive properties, defined in terms of the effective permittivity and permeability, using the reflection transmission approach. The applicability of the proposed unit cell method is first verified for some mixed dielectrics using the classical mixing formulas, and the standard waveguide approach. Once the validity of the proposed approach is ascertained, the effective constitutive properties of copper powder is determined. A detailed parametric analysis is also carried out in order to study the effect of various parameters such as the packing fraction, the grain size and the gap between adjacent spherical shaped metal particles, on the effective constitutive properties of the copper powder compact. This detailed analysis is quite helpful in order to optimize various parameters of the microwave sintering of metal powders and metal-dielectric composites before the actual start of the sintering process using microwaves.
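For the validation against classical mixing formulas mentioned above, the Maxwell Garnett rule for spherical inclusions is a convenient reference point. A small sketch follows; the permittivities and packing fraction are arbitrary illustrative values, not the paper's data.

```python
def maxwell_garnett(eps_inclusion, eps_host, fill_fraction):
    """Classical Maxwell Garnett mixing rule for spherical inclusions in a host medium."""
    num = eps_inclusion + 2 * eps_host + 2 * fill_fraction * (eps_inclusion - eps_host)
    den = eps_inclusion + 2 * eps_host - fill_fraction * (eps_inclusion - eps_host)
    return eps_host * num / den

# e.g. dielectric spheres (eps_r = 10) in air at a 30% packing fraction
print(maxwell_garnett(10.0, 1.0, 0.30))
```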
The capability of Ground Penetrating Radar (GPR) systems of accurately reconstructing the geometrical features of buried objects, when working in critical conditions, is investigated. A customized microwave tomographic approach is used to tackle the imaging through the processing of comparative experimental and synthetic GPR data. The first ones have been gathered in laboratory controlled conditions, while the second ones have been obtained by exploiting an ad-hoc implementation of a CAD tool. Attention is paid to the significant case of 'strong' scatterers having size comparable to the wavelengths of the probing signal, and possibly located close to the interface where the GPR antennas move. The results from imaging point out the potential of the proposed approach, showing in particular to which extent, in challenging operational settings, it is possible to recover also the information about the shape of metallic targets in addition to their correct location and size.
Few works on symmetric and asymmetric dielectrics have been published, specifically for the case of chiral and bi-isotropic media. For this reason, and taking into account the complexity of the studied environment, this paper treats the asymmetrical effects on the resonant frequency and the bandwidth of a rectangular microstrip patch antenna in a complex bi-anisotropic substrate-superstrate configuration. This structure is studied theoretically, and the obtained results are discussed and commented on. The numerical analysis used in this paper is mainly employed in order to obtain original results. The originality of this work lies in the bianisotropic chiral asymmetry and the combined effect of the substrate and the superstrate.
It is well known that, using proper signal compression techniques, the range resolution of radar systems can be enhanced without having to increase the peak transmit power. Because range resolution is inversely proportional to the bandwidth of the scanning signal, many suitable wideband signals have recently been designed and analyzed for their performance in the radar systems literature. However, for the large majority of these signals, the compression filter response contains significant sidelobes, which may cause difficulties in target detection and range estimation. Consequently, in radar signal processing theory, sidelobe reduction techniques based on the synthesis of proper nonlinear FM (NLFM) laws represent a major research direction. To ensure sidelobe suppression, the main objective of this paper is to present a suitable synthesis algorithm for NLFM laws based on the stationary phase principle. The experimental results confirm a significant sidelobe reduction (i.e., more than -40 dB) without the need to apply weighting techniques. Finally, the analysis of the synthesized NLFM laws with the ambiguity function tool is also discussed.
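To make the stationary-phase synthesis route concrete, the sketch below builds an NLFM pulse whose group delay follows the cumulative integral of a desired (here Hamming-shaped) spectrum. The bandwidth, pulse length and window are assumptions for illustration, not the paper's design.

```python
import numpy as np

B, T, N = 20e6, 50e-6, 4096                           # assumed bandwidth, pulse length, samples
f = np.linspace(-B / 2, B / 2, N)
W = 0.54 + 0.46 * np.cos(2 * np.pi * f / B)           # desired (Hamming-shaped) spectral weighting

# Stationary phase principle: group delay is proportional to the cumulative integral of W(f)
Tg = np.cumsum(W)
Tg = (Tg - Tg[0]) / (Tg[-1] - Tg[0]) * T              # map group delay onto [0, T]

# Invert t = Tg(f) to get the instantaneous-frequency law f_i(t)
t = np.linspace(0, T, N)
f_inst = np.interp(t, Tg, f)

# Integrate the frequency law to obtain the phase and the complex NLFM pulse
phase = 2 * np.pi * np.cumsum(f_inst) * (t[1] - t[0])
s = np.exp(1j * phase)
```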
In this paper, an algorithm based on a penalty cost function for synthesizing flat-top patterns is proposed, together with a descent algorithm (DA) as its optimization approach. The efficiency of the whole algorithm depends essentially on the DA. Unlike traditional descent methods, the DA defines the step length by solving an inequality instead of a Wolfe or Armijo-type search rule; simulation results indicate that this improves computational efficiency. Under mild conditions, we prove that the DA has strong convergence properties. Several numerical examples are presented to illustrate the effectiveness of the proposed algorithm. The results indicate that the approach shapes the pattern precisely in both the mainlobe and sidelobe regions for arbitrary linear arrays.
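A compressed sketch of the penalty-cost idea follows. It is not the paper's exact descent rule (whose step length comes from solving an inequality); the array geometry, masks, fixed step size and finite-difference gradient are all illustrative assumptions.

```python
import numpy as np

N, d = 16, 0.5                                        # elements and spacing in wavelengths (assumed)
theta = np.radians(np.linspace(-90, 90, 361))
u = np.sin(theta)
A = np.exp(1j * 2 * np.pi * d * np.outer(u, np.arange(N)))   # steering matrix

# Assumed masks: ~0 dB flat top for |u| < 0.2, sidelobes below -30 dB for |u| > 0.35
upper = np.where(np.abs(u) > 0.35, 10 ** (-30 / 20), 1.0)
lower = np.where(np.abs(u) < 0.20, 10 ** (-0.5 / 20), 0.0)

def penalty(w):
    p = np.abs(A @ w)
    p = p / p.max()                                   # normalized pattern magnitude
    return np.sum(np.maximum(p - upper, 0) ** 2 + np.maximum(lower - p, 0) ** 2)

w, step = np.ones(N, dtype=complex), 0.05
for _ in range(500):                                  # plain fixed-step descent (illustrative only)
    g = np.zeros(N, dtype=complex)
    for n in range(N):
        for unit in (1.0, 1j):                        # finite differences over real and imaginary parts
            e = np.zeros(N, dtype=complex)
            e[n] = 1e-6 * unit
            g[n] += (penalty(w + e) - penalty(w - e)) / 2e-6 * unit
    w = w - step * g
```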
Electrical cables of all types are subject to aggressive environments that can create defects or accelerate aging. Many application domains require diagnosis methods and tools. Among many methods, reflectometry has proven to be the best candidate and can be easily applied to the detection and localization of hard defects, while only requiring one access point to the wire. But soft defects are more difficult to track and require new powerful methods. This paper presents a review of the recent state of the art in the field of wired network diagnosis and shows the evolution of future activities in this domain. It provides new perspectives and new research domains are proposed.
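For the hard-defect case the review calls easy, the classic time-domain reflectometry estimate is simply distance = v·Δt/2 between the injected pulse and its echo. A toy sketch with a synthetic trace follows; the cable velocity factor, pulse shape and fault position are all assumed.

```python
import numpy as np

v_p = 0.66 * 3e8                                   # assumed propagation velocity on the cable (m/s)
fs = 5e9                                           # sampling rate (Hz)
t = np.arange(0, 2e-6, 1 / fs)

incident = np.exp(-((t - 20e-9) / 2e-9) ** 2)                          # injected pulse
echo = -0.8 * np.exp(-((t - 20e-9 - 2 * 75 / v_p) / 2e-9) ** 2)        # echo from a fault at 75 m
trace = incident + echo

i0 = np.argmax(np.abs(incident))                   # incident pulse peak
i1 = i0 + 50 + np.argmax(np.abs(trace[i0 + 50:]))  # echo peak, skipping the incident pulse
distance = v_p * (t[i1] - t[i0]) / 2               # one-way distance from the round-trip delay
print(f"estimated fault at {distance:.1f} m")
```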
A problem of electromagnetic fields excitation by a system of finite-dimensional material bodies in two arbitrary electrodynamic volumes coupled by holes, cut in a common boundary of the volumes, is defined in a rigorous formulation. For the system containing two material bodies and one coupling hole, the problem is reduced to a system of two-dimensional integral equations relative to surface electric currents on the material bodies and equivalent magnetic current in the coupling hole. The resulting integral equations are correctly transformed to a system of one-dimensional equations for currents in a narrow slot and on thin impedance vibrators, which may have irregular electrophysical and geometrical parameters. The resulting equations system for a transverse slot in a broad wall of a rectangular waveguide and impedance vibrators with variable surface impedance is solved by a generalized method of induced electro-magneto-motive forces (EMMF) under assumption that interaction between the vibrators and the slot is absent. Calculated and experimental plots of electrodynamic characteristics for this vibrator-slot structure are presented.
We introduce a data-driven unsupervised classification algorithm that uses polarimetric and interferometric synthetic aperture radar (PolInSAR) data. The proposed algorithm uses a classification method that preserves scattering characteristics. Our contribution is twofold. First, the method applies adaptive model-based decomposition (AMD) to represent the scattering mechanism, which overcomes the flaws introduced by Freeman decomposition. Second, a new class initialization scheme using
| 1.559013 | m-a-p/FineFineWeb |
the difference will be: . . . f) (a · 1)/2 − 2/a
4.5. Connect each property of the operation with its expression in letters using an arrow:
Commutative property of addition: a · (x + y) = a · x + a · y
Associative property of multiplication: (a · b) · c = a · (b · c)
Distributive property of the product over the sum: a + b = b + a
4.6. Express the commutative property of multiplication using letters.
Solution: "given any two numbers a and b, the commutative property is expressed by the expression . . . . . . ; that is . . . . . . . . ."
Figure 4.1: Exercise 4.1. Figure 4.2: Exercise 4.10.
4.7. Write the formula for computing the area of a trapezoid with larger base B = 5 cm, smaller base b = 2 cm and height h = 4 cm.
4.8. Write the formula for computing the side of a square with perimeter l.
4.9. Determine the height h relative to the hypotenuse BC of the right triangle ABC. Numerical case: AB = 8 m, AC = 15 m. General case: denote the measures of the legs by x and y and find the formula for computing h.
4.10. The volume of the box (Figure 4.2) with dimensions 7 cm, 10 cm, 2 cm is . . . Generalize the question, denoting the measures of its dimensions by a, b, c . . . . . . If we double each dimension, the volume becomes
a) 2 · a · b · c
b) a² · b² · c²
c) 6 · a · b · c
d) 8 · a · b · c
4.11. Write the following sentences as literal expressions:
a) multiply a by the opposite of the cube of a;
b) add the double of the square of b to the triple of a;
c) multiply the inverse of b by the square of the inverse of a;
d) add the square of the sum of a and b to the cube of a;
e) divide the square of a by the triple of the cube of b;
f) multiply the square of b by the inverse of the cube of a;
g) the cube of a number, increased by 2, equals the square of the difference between that same number and one;
h) the reciprocal of the sum of the squares of a and b;
i) the cube of the difference between 1 and the cube of a;
j) the sum of the squares of a and b multiplied by the square of the difference between a and b.
4.1.4 Numerical value of a literal expression
4.12. Consider the literal expression E = −3 · a + 2 · (−a + 1).
Note that only one variable appears in it, the letter a; suppose that E represents a computation scheme over the integers. Let us determine the value of the expression for a few values of the variable:
a = −2 → E = −3 · (−2) + 2 · (−(−2) + 1) = 6 + 2 · (2 + 1) = 6 + 6 = 12
a = +1 → E = −3 · (1) + 2 · (−(1) + 1) = −3 + 2 · (−1 + 1) = −3 + 0 = −3
a = −1 → E = −3 · (. . .) + 2 · (. . . + 1) = . . . . . . . . .
Complete the following table.
a: −2, 1, −1, 0.1, 4/5, −7/5, −11, 0
E = −3a + 2(−a + 1): 12, −3, . . .
4.13. Compute the numerical value of the expression a/(a − 3) + b/(3 − b) for a = −1, b = 0.
Solution: (−1)/(−1 − 3) + 0/(3 − 0) = . . . . . .
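A quick computational check of exercise 4.12, reproducing the table values for E = −3a + 2(−a + 1) (the fractional entries 4/5 and −7/5 are as read from the reconstructed table, so treat them as assumptions):

```python
from fractions import Fraction

def E(a):
    # literal expression from exercise 4.12
    return -3 * a + 2 * (-a + 1)

values = [-2, 1, -1, Fraction(1, 10), Fraction(4, 5), Fraction(-7, 5), -11, 0]
for a in values:
    print(f"a = {a}: E = {E(a)}")      # e.g. a = -2 gives 12, a = 1 gives -3
```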
4.14. Compute the value of the expression E = (x − y)/(3x), built with the variables x and y, which stand for rational numbers. The given literal expression translates the following computation scheme: "the division between the difference of two numbers and the triple of the first number". Complete the following table:
x: −4, 3/4, 19/3, 3/4, . . ., . . .
y: −2, 0, 0, −2, . . ., . . .
E = (x − y)/(3x): . . .
You will have noticed that the same value of E appears in some cells: why do you think this happens?
In your opinion, are there other pairs that make E take that same value?
4.15. Express the following expressions in words:
a) 2b − 5a
b) a · (1/a)
c) (a + b)²
d) (3x + y)/(2x²)
4.16. Complete the table by substituting the indicated values of x in the expression in the first column.
Values of x: 1, −1, 0, 2, 1/2, −1/2, 0.1, 1/10
Expressions: 2x + 1; −(3x − 2); x² + 2x + 2; x² − x; −x² + x − 1; x³ − 1; x³ + 3x²; −x³ + x² − x; −(x + 1)²
4.17. Compute the numerical value of the following algebraic expressions:
a) 3x² − (1/4)x² for x = 1/2. Solution: 3 · (1/2)² − (1/4) · (1/2)² = . . . . . . . . . = 11/16
b) 5a²b for a = −1/2, b = 3/5. Solution: 5 · (−1/2)² · (3/5) = . . . . . . . . . . . .
c) (3/2) · a² + (1/2) · a − 1 for a = 0, for a = −1 and for a = 2
d) (2/5) · x − 8 · x⁴ + 3 · x³ + 2 · x² − 7 · x + 8 for x = +1 and x = −1
4.18. Compute the numerical value of the following algebraic expressions:
a) (x − 1) · (x − 2) · (x + 3) for x = 0, x = −1 and x = 2
b) x² + 2x + 1 for x = 0, x = −1 and x = 1
c) −a² · b · c³ for a = 1, b = −1, c = −2 and for a = −1, b = 9/16, c = 4/3
d) −(3/2) · a + 2b² + 11 for a = −20, b = −1/2 and for a = 2/3, b = 0
e) −a² + 1/a − 3 · a³ for a = 1/3, a = −1 and a = +1 [11/4]
4.19 (∗). Compute the numerical value of the following algebraic expressions:
a) 3xy − 2x² + 3y² for x = 1/2, y = 2 and x = 2, y = 1/2 [29/2]
b) 2a · (a² − b²)/3 for a = −3, b = −1 and a = 1/3, b = 0 [−16]
c) xy/. . . + 3xy³ for x = 2
| 1.088459 | EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample |
' kind of person and he does exactly what Guildenstern does, even though he may not like it because he knows that he is incapable of making his decisions himself. At the end of the play, Rosencrantz is completely fed up with the whole concept of life and says that he is "…relieved."
Guildenstern uses many syllogisms and comes up with many reasons as to why the coins keep landing on heads. He comes up with the craziest examples of 'monkeys' and 'children of Israel'. But he doesn't want to believe in luck or chance. "Though it can be done by luck alone, if that's the word I'm after."
"This is not the first time we have spun coins!", This line shows that Rosencrantz and Guildenstern have been together for so long. They've been with each other through ups and downs but this could also have a dual meaning of them being together as they are two sides to the same coin. This is to show the reader that even though they may have different personalities, they belong together as they are two extremes that co-ordinate perfectly with one another, which goes on to prove the theory of opposites attracting, as showed by 'Ying and Yang', two opposites, perfectly living in harmony with each other.
Overall, the language used is very informative and has a deeper meaning than just the literal one. It reflects the play as at first glance the play is not understandable, but when you look at the deeper meaning, the play has a beautiful moral, "it is not the destiny that matters, it's the journey and the chances you are willing to take on that journey."
Poem Analysis – Amends
The title of the poem is very simple and straightforward which might mean that making amends is a very easy thing to do but also because it stands alone, it means that you have to take the decision yourself and not let others decide for you.
"Nights like this", the opening line of the poem emphasizes on the point 'this' which means that this is a special night (probably a full moon) and not just any normal night.
"On the cold apple bough…out of the bark.", here these lines that describe the apple tree, reflect the sky at night as it says words like 'cold' which could define the night sky as cold, distant and alone. Nonetheless, at the same time, poem says, "a white star…exploding" meaning even in the distant, alone sky there is a star of hope and if you keep looking then there will be more, "then another…"
The poem then starts mentioning the job of the moon; "moonlight picking at small stones": here, 'small stones' shows that the moon amends not only the bigger things in life but also the small things, which we take for granted at times. The poet starts with 'small stones', which tells us that we should begin by amending the small things in life before we move on to the bigger things, just like the moon, who "…picks at greater stones…"
The moonlight starts with amending all things of nature first: 'surf', 'stones', 'ledge'. "Lays its cheek for moments…": 'moments' emphasizes that the moon is very busy and cannot afford to use more than a few moments, and because of this its movements are very quick: 'licks', 'flicks', 'flows'. As the moon starts to move away from nature and towards mankind and man-made objects, its work becomes harder. Now the moon has to use more effort: 'pours' instead of 'flicks', 'leans' instead of 'flows' and 'soaks' instead of 'licks'. This is because human mistakes are harder to amend, as the human mind is very stubborn and it mostly takes years for mankind to forgive, forget and move on. "Unavailingly pours into the gash…" means the moon is very determined to try to heal the gash but is unsuccessful, as it is a very deep gash, and the moon has to move on.
"Soaks through cracks…with sleep…", 'soaks' personifies the trailers as a sponge which absorbs the moonlight. "Tremulous with sleep" shows that each trailer is filled as if it's being washed clean by the moonlight and because of that it seems as if they were quivering. "Dwells", shows that the moonlight couldn't just "lay its cheek for moments" as humans needed the most attention and care over a whole night to make amends.
Finally, from the 5th to the 15th line, repetition occurs ("as it"), which confirms the moon is making amends; but in the last and final line of the poem the poet writes "as if", telling us that every human wishes and fantasizes about the moon (aka someone) solving their problems, but in reality it is they who have to do it. While the moon doesn't make amends, it motivates them by shining brightly in the dark, cold sky as a glimmer of hope.
The poet uses highly effective descriptive imagery and figurative language by using many describing words and phrases like, "exploding out of the bark" and 'flicks the broken ledge". I think she uses that to show that at night there are many beautiful sights to enjoy.
She personifies the moon to make it sound like a loving elderly person (probably a mother) that heals everyone. "Licks the broken ledge": here 'licks' makes it sound as if the moon were a cat licking its kittens' wounds. It creates a very beautiful and caring image in a reader's mind.
She also uses a lot of repetition, "as it" to show that it is a pattern and the moon travels 'round the whole earth while making amends. She uses lots of onomatopoeic words as well to show movement and constancy of the moonlight, Words like 'licks, 'flicks', 'flows', 'rises', create a sense of fast movement while words like 'pours', 'leans', 'soaks', create a more slow movement. Sibilance is used in line 4, 'small stones', to show that making small amends can be very smooth while assonance is used in line 12, "though…the trailers" gives a harder, rougher edge to the stanza because the trailers are manmade.
The rhyme and meter are quite fast but smooth in the first two stanzas, as the moon moves at a quick pace there, but when it reaches the last two stanzas it is more of an effort, and that slows the moon down and, with it, the rhyme and meter as well.
Finally, I think the poet has beautifully compared and contrasted between the amends that are needed to be made by nature and humans, It clearly shows that mankind has to work a lot harder in order to make amends as they have done more harm to nature than nature itself, "…pours into the gash.." while nature only has "the broken ledge…" but instead of criticizing them, she tells them to work for the one 'exploding white star…" exploding out of the bark and then she tells us that we will find the other automatically. She also tells us to work first at the smaller things in life and then continue to amend the greater and bigger things.
The Tramp – Analysis
The Tramp"
The title of the poem "the tramp" is very straight forward and simple. This might be done on purpose as the poet wrote " And he liked he said the, ordinary things…" and to prove his point he kept the title short and simple. The title also contains alliteration "T and T" which gives the title a heavy and deep sound, as to tramp also means to walk with a heavy step.
The Tramp is written in the third person, as it says, "he said…", which means it is told from a reporter's view. "…rainbows and the sky…" shows the tramp liked simple and ordinary things, and when he saw these things he became happy, just like a child would. It also shows that he lived in a world of fantasies, as he says, "roses in snow…", which suggests he was happy in the dream world and didn't want to face the real world.
The poem might give us a hint that he used to have a home before, when it states, "the first time, the first time he, really smelt the, rain on, a green hillside, back home…": here, "back home" shows that he once had a real home where he also had a family. He repeats the phrase "first time" to put emphasis on his point, to tell us that the first time he smelt the rain on the green hillside he fell in love with nature. "Back home" might also mean that he had a home before and it was taken away from him by force, maybe because of mankind itself, and that's why he loves nature more than mankind.
"…just before the sun died…" is a very intense metaphor as it may mean all hope is gone as light and brightness come to an end after the sun goes down, i.e. the sun dies. The poet sys, "dies" instead of "goes down" which could possibly mean that his family is dead and that's why all brightness and hope from his life is gone.
Even though he was a tramp he liked to imagine the luxuries the world could have given him if he had a family as the poet writes, "… thinking about, who slept beneath, the red,
| 1.363647 | Zyphra/Zyda-2 |
Journal of Nanomaterials
Volume 2014 (2014), Article ID 310716, 9 pages
Synthesis of GeSe2 Nanobelts Using Thermal Evaporation and Their Photoelectrical Properties
1National Laboratory for Infrared Physics, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, China
2Nanomaterials and Chemistry Key Laboratory, Wenzhou University, Wenzhou 325027, China
3University of Missouri-Kansas City, Kansas City, MO 64110, USA
Received 12 March 2014; Accepted 14 April 2014; Published 17 June 2014
Academic Editor: Li Li
Copyright © 2014 Lijie Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
GeSe2 nanobelts were synthesized via a simple thermal-evaporation process by using gold particles as catalyst and GeSe2 flakes as starting materials. The morphology, crystal structure, and composition were characterized with scanning electron microscopy (SEM), high-resolution transmission electron microscopy (TEM), X-ray diffraction spectroscopy (XRD), X-ray photoelectron spectroscopy (XPS), and energy-dispersive X-ray spectroscopy (EDS). SEM micrographs show that most of GeSe2 nanobelts have distinct segmented structures (wide belt, zigzag belt, and narrow belt). A possible mechanism was proposed for the growth of segmented nanobelts. It is possible that the growth of the segmented nanobelts is dominated by both vapor-liquid-solid and vapor-solid mechanisms. Devices made of single GeSe2 nanobelt have been fabricated and their photoelectrical property has been investigated. Results indicate that these nanobelt devices are potential building blocks for optoelectronic applications.
Recently, one-dimensional semiconductor nanostructures, such as nanowires and nanobelts or nanoribbons, have attracted much attention because of their excellent physical properties and their unique structure for device applications. They are considered to be excellent building blocks for devices such as field effect transistors, field emitters [3–5], photodetectors [6–8], solar cells [9–11], nanolasers, and chemical and biological sensors. These devices demonstrate improved features in comparison to those using bulk materials. For instance, photodetectors fabricated using nanowire or nanobelt always exhibit high sensitivity to light because of their high ratio of surface to volume.
IV–VI chalcogenides with layered crystal structure have been widely employed in construction of devices used in optics, thermoelectrics, and optoelectronics [15–17]. As an important member of IV–VI semiconductors, GeSe2 with a wide band gap (~2.7 eV) was found to have promising applications in electronics, optoelectronics, and renewable energy devices. Therefore, much attention has been drawn towards the preparation and the study of property of GeSe2 nanostructures (nanowires, nanobelts, etc.). For instance, three-dimensional GeSe2 nanostructures composed of nanobelts were synthesized with chemical vapor deposition (CVD) and have demonstrated very impressive performance in supercapacitor applications [15, 18]. GeSe2 nanowires also could be synthesized through decomposing organic ammonium precursor. Stepped-surfaced GeSe2 nanobelts were obtained via CVD process and showed high photoresponsivity and gain. Single GeSe2 nanobelt two-terminal device exhibited high electronic transport properties, photoconductive characteristics, and temperature-dependent electronic characteristics.
In most cases, one-dimensional GeSe2 nanostructures were synthesized via a wet-chemical route or a CVD process by using Ge and Se powders as starting materials [15, 18–20]. It is also known that both GeSe and GeSe2 have high stability [17, 18]. As a result, both GeSe and GeSe2 phases probably form simultaneously in the CVD process when Se and Ge are used as precursors. Besides, the melting point of Ge (938.25°C) is much higher than that of Se (217°C), resulting in different evaporation rates of the precursors and a complex growth process.
We synthesized GeSe2 nanobelts via a simple thermal evaporation method by using GeSe2 as starting materials and gold films as catalysts. XRD and TEM results demonstrate that the products are pure phase GeSe2. Besides, vapor-solid (VS) mechanism might contribute to the formation of GeSe2 nanobelts. Devices made of a single GeSe2 nanobelt were fabricated and their photoelectrical property was investigated. Results indicate that these prepared nanobelts have good photoelectrical properties and potential application in optoelectronic devices. The detailed results and the discussion about the synthesized nanobelts and their photoelectrical properties are given in the following sections.
2. Experimental Section
The experimental setup used for nanobelt synthesis consists of a horizontal tube furnace, a quartz tube, a gas supply, and a control system (Figure 1). Commercial GeSe2 flakes (20 mg, purity 99.99%, J&K Scientific Ltd.) used as source materials were positioned in the center of the furnace. The Au-coated (~20 nm) Si substrates were placed at the downstream zone to collect products. After the furnace was fully flushed with high-purity argon gas for 30 min, the system was heated up to 650°C at a rate of 30°C/min. The Ar flux was kept at 100 standard cubic centimeters per minute (sccm) and the deposition temperature was ~450°C. After 1 h of growth, the system was cooled to room temperature. The yellow products on the substrates were GeSe2 nanobelts.
The morphology, crystal structure, and composition of as-synthesized products were characterized using SEM (FEI Nova NanoSEM200), XRD (Bruker D8 Advanced X-ray diffractometer, Cu Kα radiation with λ = 0.15418 nm), TEM (JEOL 2100F, 200 kV), and EDS. The binding energy of the samples was examined by XPS (Kratos AXIS Ultra DLD).
To evaluate the photoelectrical properties of the nanobelts, a two-terminal device made of a single nanobelt was fabricated. The as-synthesized nanobelts were dispersed on a p+-Si wafer with marks and then treated with electron beam lithography (30 kV, 110 pA, Nanometer Pattern Generation System installed in the SEM), metallization, and a lift-off process to define the Cr (10 nm)/Au (100 nm) contacts with the nanobelt. The electrodes were used for interconnecting and the marks were used for aligning in the following electron beam lithography. The current-voltage (I-V) characteristics of the device were investigated in air and at room temperature with a semiconductor characterization system (4200 SCS, Keithley Instruments Inc., USA). The power density of the incident light was calibrated by using a standard silicon photodetector.
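The photoresponse of such a two-terminal device is typically reduced to a responsivity figure, R = I_ph/P. The following sketch shows that arithmetic with placeholder numbers; the currents, power density and belt area below are hypothetical, not values measured in this work.

```python
def responsivity(i_light, i_dark, power_density, active_area):
    """R = I_ph / P, with P the optical power falling on the nanobelt (W)."""
    i_ph = i_light - i_dark                 # photocurrent (A)
    power = power_density * active_area     # incident power (W)
    return i_ph / power

# assumed example: 50 nA photocurrent rise, 10 W/m^2 (= 1 mW/cm^2), 1 um x 10 um belt
print(responsivity(60e-9, 10e-9, 10.0, 1e-6 * 10e-6), "A/W")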
3. Results and Discussion
Figure 2 shows the SEM images of the as-prepared nanobelts. As can be seen from the SEM images, most of the nanobelts have a segmented A-B-C structure (A = wide belt, B = zigzag belt, and C = narrow belt). The length of the A-B-C structured nanobelts varies from hundreds of micrometers to millimeters (Figures 2(a) and 2(b)). Sections A and C are commonly seen belt-like structures (Figures 2(c)–2(e)). The thickness of most segmented nanobelts ranged from 60 to 200 nm (Figure 2(e)). Besides, section C is narrower than section A. In Figure 2(d), a particle (catalyst) can be clearly observed due to the contrast difference, indicating that the VLS mechanism probably dominates the growth of the A-B-C structured nanobelts. Interestingly, section B has a zigzag structure. Figure 2(f) clearly reveals that the zigzag structure (indicated by arrows) serves as a transition region connecting sections A and C. Figures 2(g) and 2(h) show enlarged views of section B, in which the zigzag structure is clearly observed. It is well known that there is a good correlation between the nanobelt diameter and the catalyst size during VLS growth. Although the size of section C and the particle on the tip match well, section B has a different shape and size. Therefore, the VLS mechanism possibly contributes only in part to the nanobelt growth. As can be seen from Figures 2(g) and 2(h), the growth of flake-like branches on the backbone might be governed by the vapor-solid (VS) rather than the VLS mechanism. Besides, some small branches were also observed, possibly due to different growth stages. As a layer-structured chalcogenide, GeSe2 consists of Se-Ge-Se layers stacked together via van
| 1.398768 | openbmb/Ultra-FineWeb |
prior year. Resolve as far as practicable issues relating to the legal definitions of the institutions that are being tracked as potential members of the top and decide how these definitions relate to the many institutional aliases that will have been used as pointers to these new data. Scan official data sets when published to obtain the relevant institutional FTE (full-time equivalent) faculty numbers used in the calculation of the PCP (per-capita performance) indicator.
Assess and determine exceptions
Once per decade, rescale indicators for historical Nobel prize winners and Fields medalists according to the decadal aging procedure. Allocate institutions specialized in the humanities and the social sciences to the class that uses special weights to exclude Nature & Science publications. Decide which entire countries and which special institutions will be excluded from the determination of PCP using published FTE.
Combine raw data and apply scaling
Sum and/or average raw data over the relevant time windows according to the published algorithms. For every indicator other than PCP, multiply each value of the processed raw data by a fixed scaling factor so that Harvard University has a scaled raw score of. Calculate an intermediate quantity for PCP by dividing the weighted sum of the scaled raw scores by the FTE for the California Institute of Technology, and apply a scaling factor so that this intermediate quantity for CalTech is.
Compress the scaled raw data
For each indicator, including PCP, compress the dynamic range of the scaled raw data by taking its square root to form the indicator score.
Calculate the total score and determine ranking/banding
Combine the indicator scores using the relevant indicator weights, and linearly re-scale so that the total score for Harvard University is. Rank institutions by total score, and determine for publication the ranking sequence, the membership of bands, and the excluded institutions.
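Read as an algorithm, the scale-compress-combine pipeline above can be summarised in a few lines. The raw numbers, the indicator weights and the choice of reference university in this sketch are placeholders, not the official ARWU values, and the PCP special case is ignored.

```python
import numpy as np

def total_scores(raw, weights, ref):
    scaled = raw / raw[ref]            # fixed scaling factor per indicator (reference row = 1)
    compressed = np.sqrt(scaled)       # square-root compression of the dynamic range
    totals = compressed @ weights      # weighted combination of indicator scores
    return 100 * totals / totals[ref]  # linear re-scale so the reference total is 100

raw = np.array([[120.0, 90.0, 60.0],   # rows: universities, columns: indicators (invented)
                [ 80.0, 70.0, 90.0],
                [ 40.0, 20.0, 30.0]])
weights = np.array([0.4, 0.4, 0.2])    # assumed weights summing to 1
print(total_scores(raw, weights, ref=0))
```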
We return to discuss the dynamics of these procedures following a review of ranking taxonomies.
3 Ranking Taxonomies
In generic terms, rankings and league tables that sequence an overall performance measure of separate contestants in different events can be classified into two major types, rank-driven or score-driven, depending on whether they combine the ranking positions or actual marks across events. Both methods find practical application in a wide variety of settings, and both have strengths and weaknesses that tend to favour one or the other in a particular setting.
3.1 Rank-driven sequences
Rank-driven sequencing aggregates the ranked positions of the different contestants over the various event indicators to form a single overall performance indicator. A simple example is the calculation of the team score in a cross-country race as the sum of the finishing positions of a fixed number of team members. If the event indicators do not induce reliable integer sequences (for example, because of subjective dimensions of event marking, or measurement error, or truncated marks in some events) non-integer event indicators may be sequenced, clustered or banded into ordered sequences. This procedure is used by the World Bank and similar organisations in a number of socio-economic assessments. Dasgupta and Weale (1992) use the approach to measure the quality of life in nations and WBI (2012) to measure the ease of doing business. Ranking and clustering algorithms used in rank-driven sequencing are designed to harmonise disparate measures and distributions of marks before aggregation by converting them to quasi-uniform distributions that retain no information about the original measured values beyond their sequence. Although the measures are originally quantitative and disparate, a rank-driven sequence becomes agnostic about the actual measured values (Sawyer et al, 2013).
Rank-driven sequencing relates to the Borda rule and its variants (Myerson and Weber, 1993). These are methods of rank-order voting or sequencing widely used in the political and social sciences in settings such as the resolution of polls or the reporting of aggregated public opinion over a range of topics. Borda rule rankings suffer from a number of drawbacks, including the peculiar feature known as Arrow's paradox - in which the ranking of two contestants (A ahead of B) can be reversed merely by the inclusion of an apparently irrelevant third contestant (Arrow, 1963; Saari, 2001; Hammond, 2007). Dependence on irrelevant results can be avoided by alternative ways of resolving the overall sequence, as in the 'count gold first' rule used to generate the rank-driven sequencing of national medal counts in Olympic games.
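Two of the rank-driven rules mentioned above, in miniature: the rank-sum used in cross-country team scoring and the 'count gold first' lexicographic ordering of medal tallies (all tallies invented for illustration).

```python
# (1) Cross-country team score: sum of the finishing positions of the team members
finishing_positions = {"team A": [1, 4, 6], "team B": [2, 3, 7]}
team_scores = {team: sum(pos) for team, pos in finishing_positions.items()}
print(sorted(team_scores, key=team_scores.get))       # lower total ranks first

# (2) Olympic 'count gold first' rule: lexicographic ordering of (gold, silver, bronze)
medals = {"country X": (10, 5, 2), "country Y": (9, 20, 15)}
print(sorted(medals, key=medals.get, reverse=True))   # X precedes Y despite fewer total medals
```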
Variants of the Borda Rule could be used in the ARWU. One important choice would be the treatment of the abundant zero counts for Nobel prize winners and Fields medalists: choosing between assigning zero count or the score prescribed by the rule. To illustrate, let us indicate what would happen if the World Bank Knowledge Assessment Methodology (KAM) (WBI, 2012) were used in ARWU: First, universities would be ranked in order from 'best' to 'worst' on each indicator using their actual scores. Their scores would then be normalized on a scale of to following the KAM procedure:
For each indicator, and for a particular university, identify the number of institutions with higher rank.
Compute the score on the indicator following the normalization formula:
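A plausible form of the KAM normalization implied by the two steps above is given below; the 0-10 scale and the symbols are assumptions rather than quotations from this text.

```latex
% Assumed reconstruction of the KAM normalization step
\mathrm{score}_u \;=\; 10 \times \frac{N_w - N_h(u)}{N_w}
```

where N_w is the number of institutions ranked on the indicator and N_h(u) is the number of institutions ranked above university u.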
Figure 1 shows how the first tier of ARWU ranked universities would be shuffled were the KAM procedure used. According to the KAM procedure the institutions with a score for the indicator Alumni would be assigned a score of ; the institutions scoring on the Award indicator would be assigned a score of ; the institutions with no Highly Cited researchers among their faculty would be assigned a score of, and the institutions with no papers in Science or Nature between 2007 and 2011 would be assigned a score of for this indicator.
The procedure leads to a relative advantage for those institutions with no previous ARWU score in an indicator. Small differences in the indicator PUB could also be amplified, a side effect of the dependence on irrelevant results. For institutions currently ranked outside the top the average of the changes in ranking (either up or down) would be, with a maximum of. These shifts could be reduced by rearranging the KAM rule in the following way:
For each indicator, count the number of institutions,, with a score different from.
Then for a particular university, identify the number of institutions with higher rank.
Compute the score on the indicator following the normalization formula:
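A correspondingly hedged form of the rearranged rule, restricted to the N' institutions with a non-zero score (again an assumption rather than a quoted formula), would be:

```latex
% Assumed form of the modified normalization; institutions with a zero raw score keep a score of 0
\mathrm{score}'_u \;=\; 10 \times \frac{N' - N_h(u)}{N'} \quad \text{if the raw score of } u \text{ is non-zero}, \qquad \mathrm{score}'_u = 0 \ \text{otherwise.}
```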
Although the (true) zeros have been restored and the perturbation introduced by the KAM versions of the Borda rule mitigated, the reshuffling would still be quite noticeable, as Figure 2 shows. Institutions ranked outside the top exhibit an average change in ranking (either up or down) of, with a maximum shift of. In essence, the difference between ARWU rank and a rank generated by rank-based sequencing reflects the importance of the actual distribution functions of each ARWU indicator in the calculation of the total score.
A KAM-like scoring system operates on a defined and complete population. However, other institutions might find a place in the ranking with either KAM-like scoring systems, adding more side effects and further reshuffling the results of the ranking.
3.2 Score-driven sequences
Score-driven sequencings aggregate indicator scores rather than the ranking of such scores. Scores of different events must be converted to a common currency or numéraire, and the rules for combining converted scores carefully constructed. In combined event athletics, for example, conversion attempts to ensure that similar performances in different events are rewarded alike, whereas in ARWU different indicators receive different weights in an attempt to reflect their respective importance in the final ranking. Dynamical operations such as gain, offset and power laws can yield modified indicators that are combined to yield an accepted score-driven ranking. Provided that the 'exchange rate' among the different events is perceived as fair, particularly by the ranked subjects, the success of the ranking is essentially guaranteed.
A gold standard in the realm of score-driven sequencing of individual achievement is the ranking system used for combined events in track and field competition. Let us consider specifically the combined events scoring system adopted by the International Association of Athletics Federations in 1984, since it exemplifies most of the elements that may appear in score-driven tables (IAAF, 2004). Over more than a century of negotiations and investigations, the athletes themselves, officials, statisticians and sports scientists have arrived at a scheme for calculating a single combined score from the results obtained in a set of quite different events. The decathlon scoring system has been designed so that "results in different disciplines with approximately the same point value should be comparable as far as the quality and difficulty of achieving those results are concerned" (Trkal, 2003). To achieve that equalization (in much the same way as an equalizer works with recorded music) the scoring procedure for combined events uses two linear operators (offsets and gains) and a non-linear element (powers), indexed by the events of the decathlon.
Let and represent respectively the mark (unmodified result in a discipline) and the score of athlete in event. Figure 3 shows the transformations the mark must go through to produce the score.
In the first summation of Figure 3, the sign is positive for the mark and negative for the offset in jumping and throwing events where a larger mark is superior. The signs are reversed in running events where a smaller mark is superior. After the offset is applied higher scores correspond to better performances. The offsets in the decathlon are also the threshold for scoring points i.e. the value that would receive points in each discipline. For instance, in the 100 m race the IAAF assigns points to any performance in excess of seconds and in the long jump any mark below m would receive points (IAAF, 2004). Although the offset introduces non-linearity by assigning zero points to all the results outside the threshold, there is no practical consequence since no athlete will perform outside the event threshold.
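The transformation chain of Figure 3 (offset, sign flip for timed events, gain, power law) has the familiar IAAF form points = INT(gain · (±(mark − offset))^power). A sketch follows; the constants are the 1984 IAAF values for two events as remembered, so treat them as illustrative rather than authoritative.

```python
def event_points(mark, offset, gain, power, smaller_is_better):
    # offset and sign flip first, then gain and power law, then truncation to an integer
    diff = (offset - mark) if smaller_is_better else (mark - offset)
    if diff <= 0:
        return 0                      # performance at or outside the scoring threshold
    return int(gain * diff ** power)

print(event_points(10.5, 18.0, 25.4347, 1.81, smaller_is_better=True))   # 100 m run, time in s
print(event_points(720, 220, 0.14354, 1.40, smaller_is_better=False))    # long jump, mark in cm
```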
The power-law and gain elements in Figure 3
| 1.685836 | openbmb/Ultra-FineWeb |
Recently, in anticipation of National Child's Mental Health Awareness Week, the Directors of Pathways Institute interviewed a colleague who has a daughter with learning and attention differences. Pathways Institute works therapeutically with people who have learning and attention differences and their families, and helps them to navigate the system to get services.
When did you notice your daughter's learning differences?
I was lucky: when my daughter was quite young – a little over one year old – I noticed she didn't have hand dominance. I remembered reading an article about hand dominance as a possible early indicator of learning disabilities while I was in graduate school. At that point I started to pay closer attention to how she was developing.
What happened next?
In preschool they did a "ready for kindergarten" screening and the occupational therapist conducting the screening noticed a couple of things. She noticed the lack of hand dominance and the inability to cross the midline, meaning she would start drawing on the left side of the paper with her left hand get to the middle and switch to her right hand. She alerted us that we might want to have her assessed. We set that up and discovered a couple of things that surprised us: our daughter had weak core strength and no hand dominance. The recommendation was she do occupational therapy. We started it and it was helpful. She spent two years in OT and at the end was able to cross the midline and the OT felt she chose to be right handed. At this point she is still quite ambidextrous.
How did she do when she started kindergarten?
This was the beginning of a very difficult time for her and us. She had a great deal of difficulty learning to write her letters and she was already identified as a child that was unable to pick up the first pieces of reading. It was recommended to us she do some afterschool work with her kindergarten teacher on phonics. We signed her up and she did two rounds of 6 weeks. At the end very little progress was made and when we asked the teacher, who had a masters level degree as a reading specialist, if she thought we were dealing with dyslexia, she wouldn't answer the question. I learned later that she wasn't allowed by the school district to answer. This will remain forever immoral and unconscionable to me. Here we had a child who needed help and they refused to tell us, for political reasons. But in spite of this lack of information, I knew that learning disabilities are genetic, and I have a sister who is dyslexic, so I was now "on the case," so to speak.
Was the school supportive?
We had to push and push for testing, and finally at the end of second grade she was tested. But she still somehow managed to fall in the low, low averages which would not qualify her as learning disabled. The testing showed just a couple areas of significant deficit: in particular, she scored at 2% for short term visual memory processing. Because she was strong in other areas, she didn't qualify.
There is an undeniable conflict of interest if the same institutions that have to provide the services are doing the testing. I know most parents with dyslexic kids are shocked to discover – and the general public should also know – that the acceptable averages to qualify as learning disabled are so low that kids have to be several grades behind in school before they qualify for services in California. And even then some don't. My kid didn't. Most kids that fall that far behind experience depression and anxiety. This should be unacceptable and people should be outraged. In essence we didn't get anywhere with the school district; we had incredible problems trying to get help for our daughter, but that's a story for another day. Suffice it to say that sadly schools and educational administrators don't appear to be interested in why a child isn't learning. This should be the mission and mandate of special education, not "does a child simply qualify." It is a broken system where everyone ends up demoralized.
What was it like for you to be denied services for your daughter?
Traumatic. My wife and I were incredibly confused, distraught and angry. We really didn't know where to turn or what to do. And we were both well-educated professionals. We couldn't understand why no one seemed to want to talk to us, or how it could be that all these education professionals, who must have evaluated hundreds of kids, were acting like they didn't have a clue. Our daughter's principal told us, "Why don't you take her to one of the private schools that specialize in learning disabilities?" It is hard to know if she was asking us to leave or telling us she couldn't help, or both.
Luckily we had a friend who told us about an amazing organization: Parents Education Network (PEN). We were introduced to several PEN members, and talking with them was like walking out into the sunlight after being in the dark for years. They told us to stop talking to parents of kids without learning disorders, stop talking to educators who don't care or aren't interested or able to think about your child, and surround yourselves with supportive friends, family and the parents of LD kids who are a few steps ahead, join PEN and get more information and education. Finally, they said be prepared to sacrifice and invest in your child. We were lucky that it was possible for us. It isn't for most people and that is one of the great educational crimes of our time. PEN saved our sanity and in ways ultimately our family.
What was it like for your daughter?
It's very painful to recall. During her K-3 years in school she just went down, down, down emotionally. Every day she went to school and was frustrated and failing in making the kind of progress her peers were making. She was a good kid and so she never acted-out in school, although some kids do. But at home she was angry and in a dark place. She hated school and didn't want to be there but went anyway. She was an incredible trooper as we were intervening with reading specialists and math tutors. She'd go to school all day and then 4 days a week go to tutoring; she was exhausted. She was sad. She wasn't well received socially; she did have a few friends but I think because she was so insecure and frankly exhausted she could be controlling and inflexible. It was hard on her friends and hard on her. Her insecurity stemmed from living in body with neurophysiology that was failing her in school. I think she was riddled with fear and quite anxious.
What help and treatment did you seek?
We went to a very wise child psychologist, who told us to get her out of the school, change her environment and put her in a school that specialized in teaching kids who are dyslexic. He told us this wasn't a parenting problem – we had been told it was a parenting problem at different points by school administrators and unkind people who saw an unhappy kid and blamed us, the parents. He recommended we get her neuropsychological testing, which was informative and verified what we knew in our gut: that our daughter had learning and attention disorders, although they failed to give her a diagnosis of dyslexia. Once she was at the new school with experts they all said, "Your kid is dyslexic". As an aside, we were so glad to learn that finally, in the upcoming DSM-V, dyslexia will be included with understandable and research-based criteria. You have no idea how relieving this will be for us and the millions of parents and kids who are dyslexic. Actual criteria worked on by the Shaywitzes, the leading researchers and experts in the field.
Did you feel relief after the neuropsychological evaluation and diagnosis?
Recently, I saw a clip of an interview with James Redford about his new movie, The D Word. It is about his son's journey with dyslexia. He was asked the same thing. His answer was something like, "No, I wasn't relieved once there was a diagnosis; my son was functionally illiterate and I was still caught in the fear of wondering how this kid is going to make it in life." I nearly broke down in tears when I heard that clip – another parent, a father, who really understands. I wasn't relieved either. I didn't know how my daughter was going to do. Would she ever get to a place of acceptance, would she learn to read, would she have the chance to go to college? Would she plateau at a very low level, would she ever feel secure? A child's world and job is school, and when they start out failing the psychological impact is huge. I knew that kids with LD and ADHD are at high risk for dropping out of school, drugs and other impulse disorders. I wasn't relieved; I was still terrified.
You have seen a change in your daughter over the past few years. What's different now since she has been at the school these last 4 years?
My sister is a psychoanalyst, and I remember talking to her about my daughter when my daughter was about 7 years old. She said, "You know, some day she will need to tell you what it was really like for her." I was puzzled; I didn't really get it, because I thought I knew what it was like since I had been through it with her. My sister said, "She will need to tell you how painful it has been and likely how angry she has been at you, because she is dyslexic and in that way different from you and Lori (my wife). You and Lori didn't struggle in school and you don't struggle with learning now." At that time, my daughter would just express anger and shut down; she never was able to talk about what was happening. She was obviously young so I couldn't expect
| 1.891993
|
HuggingFaceFW/fineweb-edu
|
the identification of residues in Gag that affect replication. More importantly, the implementation of this technique has facilitated a deeper understanding of how viral replication defines parameters of early HIV-1 pathogenesis such as set point viral load and longitudinal CD4+ T cell decline.
Infectious Diseases, Issue 90, HIV-1, Gag, viral replication, replication capacity, viral fitness, MJ4, CEM, GXR25
51506
Play Button
Metabolomic Analysis of Rat Brain by High Resolution Nuclear Magnetic Resonance Spectroscopy of Tissue Extracts
Authors: Norbert W. Lutz, Evelyne Béraud, Patrick J. Cozzone.
Institutions: Aix-Marseille Université, Aix-Marseille Université.
Studies of gene expression on the RNA and protein levels have long been used to explore biological processes underlying disease. More recently, genomics and proteomics have been complemented by comprehensive quantitative analysis of the metabolite pool present in biological systems. This strategy, termed metabolomics, strives to provide a global characterization of the small-molecule complement involved in metabolism. While the genome and the proteome define the tasks cells can perform, the metabolome is part of the actual phenotype. Among the methods currently used in metabolomics, spectroscopic techniques are of special interest because they allow one to simultaneously analyze a large number of metabolites without prior selection for specific biochemical pathways, thus enabling a broad unbiased approach. Here, an optimized experimental protocol for metabolomic analysis by high-resolution NMR spectroscopy is presented, which is the method of choice for efficient quantification of tissue metabolites. Important strengths of this method are (i) the use of crude extracts, without the need to purify the sample and/or separate metabolites; (ii) the intrinsically quantitative nature of NMR, permitting quantitation of all metabolites represented by an NMR spectrum with one reference compound only; and (iii) the nondestructive nature of NMR enabling repeated use of the same sample for multiple measurements. The dynamic range of metabolite concentrations that can be covered is considerable due to the linear response of NMR signals, although metabolites occurring at extremely low concentrations may be difficult to detect. For the least abundant compounds, the highly sensitive mass spectrometry method may be advantageous although this technique requires more intricate sample preparation and quantification procedures than NMR spectroscopy. We present here an NMR protocol adjusted to rat brain analysis; however, the same protocol can be applied to other tissues with minor modifications.
Neuroscience, Issue 91, metabolomics, brain tissue, rodents, neurochemistry, tissue extracts, NMR spectroscopy, quantitative metabolite analysis, cerebral metabolism, metabolic profile
51829
Play Button
Training Synesthetic Letter-color Associations by Reading in Color
Authors: Olympia Colizoli, Jaap M. J. Murre, Romke Rouw.
Institutions: University of Amsterdam.
Synesthesia is a rare condition in which a stimulus from one modality automatically and consistently triggers unusual sensations in the same and/or other modalities. A relatively common and well-studied type is grapheme-color synesthesia, defined as the consistent experience of color when viewing, hearing and thinking about letters, words and numbers. We describe our method for investigating to what extent synesthetic associations between letters and colors can be learned by reading in color in nonsynesthetes. Reading in color is a special method for training associations in the sense that the associations are learned implicitly while the reader reads text as he or she normally would and it does not require explicit computer-directed training methods. In this protocol, participants are given specially prepared books to read in which four high-frequency letters are paired with four high-frequency colors. Participants receive unique sets of letter-color pairs based on their pre-existing preferences for colored letters. A modified Stroop task is administered before and after reading in order to test for learned letter-color associations and changes in brain activation. In addition to objective testing, a reading experience questionnaire is administered that is designed to probe for differences in subjective experience. A subset of questions may predict how well an individual learned the associations from reading in color. Importantly, we are not claiming that this method will cause each individual to develop grapheme-color synesthesia, only that it is possible for certain individuals to form letter-color associations by reading in color and these associations are similar in some aspects to those seen in developmental grapheme-color synesthetes. The method is quite flexible and can be used to investigate different aspects and outcomes of training synesthetic associations, including learning-induced changes in brain function and structure.
Behavior, Issue 84, synesthesia, training, learning, reading, vision, memory, cognition
50893
Play Button
Community-based Adapted Tango Dancing for Individuals with Parkinson's Disease and Older Adults
Authors: Madeleine E. Hackney, Kathleen McKee.
Institutions: Emory University School of Medicine, Brigham and Woman's Hospital and Massachusetts General Hospital.
Adapted tango dancing improves mobility and balance in older adults and additional populations with balance impairments. It is composed of very simple step elements. Adapted tango involves movement initiation and cessation, multi-directional perturbations, varied speeds and rhythms. Focus on foot placement, whole body coordination, and attention to partner, path of movement, and aesthetics likely underlie adapted tango's demonstrated efficacy for improving mobility and balance. In this paper, we describe the methodology to disseminate the adapted tango teaching methods to dance instructor trainees and to implement the adapted tango by the trainees in the community for older adults and individuals with Parkinson's Disease (PD). Efficacy in improving mobility (measured with the Timed Up and Go, Tandem stance, Berg Balance Scale, Gait Speed and 30 sec chair stand), safety and fidelity of the program is maximized through targeted instructor and volunteer training and a structured detailed syllabus outlining class practices and progression.
Behavior, Issue 94, Dance, tango, balance, pedagogy, dissemination, exercise, older adults, Parkinson's Disease, mobility impairments, falls
52066
Play Button
TMS: Using the Theta-Burst Protocol to Explore Mechanism of Plasticity in Individuals with Fragile X Syndrome and Autism
Authors: Lindsay M. Oberman, Jared C. Horvath, Alvaro Pascual-Leone.
Institutions: Beth Israel Deaconess Medical Center.
Fragile X Syndrome (FXS), also known as Martin-Bell Syndrome, is a genetic abnormality found on the X chromosome.1,2 Individuals suffering from FXS display abnormalities in the expression of FMR1 - a protein required for typical, healthy neural development.3 Recent data has suggested that the loss of this protein can cause the cortex to be hyperexcitable thereby affecting overall patterns of neural plasticity.4,5 In addition, Fragile X shows a strong comorbidity with autism: in fact, 30% of children with FXS are diagnosed with autism, and 2 - 5% of autistic children suffer from FXS.6 Transcranial Magnetic Stimulation (a non-invasive neurostimulatory and neuromodulatory technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses 7,8) represents a unique method of exploring plasticity and the manifestations of FXS within affected individuals. More specifically, Theta-Burst Stimulation (TBS), a specific stimulatory protocol shown to modulate cortical plasticity for a duration up to 30 minutes after stimulation cessation in healthy populations, has already proven an efficacious tool in the exploration of abnormal plasticity.9,10 Recent studies have shown the effects of TBS last considerably longer in individuals on the autistic spectrum - up to 90 minutes.11 This extended effect-duration suggests an underlying abnormality in the brain's natural plasticity state in autistic individuals - similar to the hyperexcitability induced by Fragile X Syndrome. In this experiment, utilizing single-pulse motor-evoked potentials (MEPs) as our benchmark, we will explore the effects of both intermittent and continuous TBS on cortical plasticity in individuals suffering from FXS and individuals on the Autistic Spectrum.
Neuroscience, Issue 46, Transcranial Magnetic Stimulation, Theta-Burst Stimulation, Neural Plasticity, Fragile X, Autism
2272
Play Button
Detection of Rare Genomic Variants from Pooled Sequencing Using SPLINTER
Authors: Francesco Vallania, Enrique Ramos, Sharon Cresci, Robi D. Mitra, Todd E. Druley.
Institutions: Washington University School of Medicine, Washington University School of Medicine, Washington University School of Medicine.
As DNA sequencing technology has markedly advanced in recent years2, it has become increasingly evident that the amount of genetic variation between any two individuals is greater than previously thought3. In contrast, array-based genotyping has failed to identify a significant contribution of common sequence variants to the phenotypic variability of common disease4,5. Taken together, these observations have led to the evolution of the Common Disease / Rare Variant hypothesis suggesting that the majority of the "missing heritability" in common and complex phenotypes is instead due to an individual's personal profile of rare or private DNA variants6-8. However, characterizing how rare variation impacts complex phenotypes requires the analysis of many affected individuals at many genomic loci, and is ideally compared to a similar survey in an unaffected cohort. Despite the sequencing power offered by today's platforms, a population-based survey of many genomic loci and the subsequent computational analysis required remains prohibitive for many investigators. To address this need, we have developed a
| 2.016581
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
woman's constitutional right to control (in consultation with her physician) the use of her own body for reproductive purposes. This right was held to follow from the Court's previous decisions recognizing a fundamental right to ''privacy'' or personal autonomy. Because a ''fundamental'' right was involved, a state could not simply prohibit abortion on any terms it chose; it would have to adduce ''compelling'' reasons for overriding a woman's right to procreative choice. Since early abortion is safer than normal childbirth, concern for the mother's health would not provide a sufficiently compelling reason for restrictions on abortion during the first trimester, other than a requirement that it be performed by a licensed physician. Concern for the fetus could not be used to preempt a woman's right to elect abortion before ''viability''—the point near the beginning of the third trimester at which a fetus is capable of surviving outside the womb, albeit only with artificial aid. After viability, concern for the fetus as ''potential life'' was held to be sufficiently compelling to permit a state to regulate or even prohibit abortion, unless continued pregnancy threatened the mother's life or health. In other words, Roe invalidated almost all restrictions on abortion during the first six months of pregnancy except for those designed to protect maternal health in the second trimester, but permitted any and all restrictions during the third trimester except where abortion was necessary to preserve maternal health or life.
The Roe decision sparked enormous controversy. Opposition to Roe turned abortion into a central issue in national politics. Efforts to overrule Roe by constitutional amendment, or by packing the Supreme Court, so far have failed. The Court did depart from Roe and nearly overruled it in Webster v. Reproductive Health Services, 492 U.S. 490 (1989). Subsequently, however, the controlling opinion in Planned Parenthood of Southeastern Pennsylvania v. Casey, 505 U.S. 833 (1992), jointly delivered by Justices Sandra Day O'Connor, Anthony Kennedy, and David Souter, reaffirmed Roe's ''essential holding,'' although it significantly qualified Roe by allowing states to invoke both maternal health and concern for the life of the fetus as bases for restrictions that inhibit access to abortion at any stage of pregnancy, so long as those restrictions do not amount to an ''undue burden'' posing a ''substantial obstacle'' to the abortion of a nonviable fetus.
Since 1973 about two-thirds of the states have enacted new abortion laws designed to test the limits of Roe. These statutes curtail the availability of abortion in various ways: by denying the use of public funds or facilities for abortion; by requiring special precautions to prevent the abortion of a possibly viable fetus; by banning particular methods of abortion; and by imposing waiting periods and notification and consent requirements designed to discourage the choice of abortion.
- Laws denying the use of public funds or facilities for abortion consistently have been upheld by the Supreme Court (e.g., Maher v. Roe, 432 U.S. 464 (1977); Harris v. McRae, 448 U.S. 297 (1980)), as was a Bush administration rule forbidding clinics that receive federal funds from counseling or even mentioning abortion (Rust v. Sullivan, 500 U.S. 173 (1991)). Indeed, it was in the abortion-funding cases that the distinction first emerged between ''undue burdens'' on procreative choice and constitutionally permissible expressions of a legislative policy favoring maternity.
- Laws prohibiting the abortion of a viable fetus are common and generally valid, provided they make exception for abortions necessary to preserve the mother's life or health.
Planned Parenthood Association of Kansas City v. Ashcroft, 462 U.S. 476 (1983), narrowly upheld a Missouri statute mandating that a second doctor be present to look out for the fetus during post-viability abortions; Thornburg v. American College of Obstetricians and Gynecologists, 476 U.S. 747 (1986), struck down a similar Pennsylvania requirement that did not except situations where waiting for the second doctor would put the mother's life or health at risk. Thornburg also invalidated a requirement that post-viability abortions be performed in a way that would allow the unborn child to survive the procedure, if it could be done without significantly greater risk to the mother; this was read as impermissibly demanding that the mother bear an increased medical risk in order to save the fetus. Webster v. Reproductive Health Services, supra, upheld another Missouri statute prohibiting doctors from performing abortions on any woman believed to be twenty weeks pregnant or more without first undertaking tests to determine fetal viability.
- Laws banning particular methods of abortion generally have been found to be invalid. Planned Parenthood of Central Missouri v. Danforth, 428 U.S. 52 (1976), struck down a prohibition of saline amniocentesis, at the time the usual and safest method for second trimester abortions. Most of the lower courts that passed on the spate of state laws prohibiting so-called partial-birth abortions found them to be invalid, as did the Supreme Court in Stenberg v. Carhart, 120 S. Ct. 2597 (2000). These laws criminalize abortions where ''the person performing the abortion partially delivers vaginally a living unborn child before killing the unborn child and completing the delivery.'' This appears to refer to the procedure known as ''intact dilation and extraction,'' in which, in order to minimize damage to the uterus and cervix, the fetus is partly moved into the birth canal before being destroyed. But no exception for maternal health is made in these laws; it is not required that the fetus be viable; and the statutory language is said to be vague enough to cover other permissible abortion procedures as well.
- Laws imposing relatively minor impediments to abortion such as record-keeping requirements and a requirement of the patient's written consent generally have been upheld. Requirements that doctors make certain specified statements to a woman seeking abortion, so that her consent will be ''informed,'' and mandatory twenty-four-hour waiting periods before the abortion can be performed, were struck down in City of Akron v. Akron Center for Reproductive Health, 462 U.S. 416 (1983), and in Thornburg, supra, on the ground that they were designed to intimidate women into forgoing abortion; such requirements were upheld, however, under the new standard adopted by the plurality opinion in Planned Parenthood of Southeastern Pennsylvania v. Casey, supra. A requirement of spousal consent was invalidated in Danforth, supra, on the ground that it effectively gave the husband a veto over his wife's exercise of a constitutional right; Casey similarly found that, for many women, even a requirement of spousal notification would pose a substantial obstacle to abortion and therefore was impermissible. A requirement of parental consent when an unmarried pregnant minor seeks an abortion was invalidated in Danforth; but such requirements generally have been upheld in subsequent cases, including Casey, when accompanied by alternative provision for a judge to approve the abortion in lieu of a parent. A law requiring that both of a minor's parents be notified of the abortion would be invalid without a similar provision for so-called judicial bypass (Hodgson v. Minnesota, 497 U.S. 417 (1990)); the validity of a requirement that only one parent be notified, without provision for judicial bypass, was left open in Ohio v. Akron Center for Reproductive Health, 497 U.S. 502 (1990), which upheld a law requiring notification to at least one parent, with a judicial bypass option.
Efforts to limit the availability of abortion have been relentless, an indication of the intensity of opposition to Roe. The anti-abortion ''pro-life'' position is rooted partly in the belief that the fetus is already a human person whose destruction constitutes a form of homicide and should be punished as such. But it is not based exclusively on this belief. There are different strands of ''pro-life'' sentiment. Willingness to make exceptions for cases of medical necessity or of rape, and reluctance to classify abortion as first degree murder, suggest varying degrees of commitment to the premise that abortion is in no way different from any other form of homicide. In any event, opposition to abortion appears to be bound up as well with views about sexual morality and the nature of the relationship between men and women. Roe v. Wade is the outstanding symbol of the prevalence of an antithetical set of views that have, since the 1960s, subverted ''traditional'' family and religious values; taking up arms (in some cases quite literally) against abortion serves to reassert the importance of those values in an increasingly secular world. For the ''pro-choice'' side, Roe also has considerable symbolic significance, as well as the practical and liberating effect of giving women control over their fertility. For both sides, every millimeter of ground gained or lost in the struggle to preserve or curtail the right to abortion established in Roe is a signal victory or defeat in a continuing clash between deeply held beliefs about the proper role and responsibility of women in the family and in society.
- Byrn, Robert M. ''An American Tragedy: The Supreme Court on Abortion.'' Fordham Law Review 41 (1973): 807–862.
- Colker, Ruth. Abortion & Dialogue: Pro-Choice, Pro-Life, & American Law. Bloomington and Indianapolis: Indiana University Press, 1992.
- Dellapenna, Joseph W. ''The History of Abortion: Technology, Morality, and Law.'' University of Pittsburgh Law Review 40 (1979): 359– 428.
- Dworkin, Ronald. Life's Dominion: An Argument about Abortion, Euth
| 1.57591
|
openbmb/Ultra-FineWeb
|
the same class, would be dealt with much later).
The first big break into the classification problem was made by AlexNet [13] in 2012, when it won the ILSVRC challenge with a score of 84.6% in the top-5 accuracy test, while the next best score was only 73.8% (based on classic techniques). AlexNet has since become a known standard and a default network architecture for testing problems, as it is actually not very deep or complex (see Figure 4). It presents five convolutional layers, with max-pooling after the first two, three fully connected layers, and ReLU activations to deal with non-linearities. This clear victory of the CNN-based approaches was confirmed the following year by Oxford's VGG16 [16], one of several architectures presented, which achieved a 92.7% score in the ILSVRC challenge.
Figure 4.
Diagram of the AlexNet architecture, showcasing its pioneering use of convolutional layers.
While several other networks with deeper architectures have been presented, the relevant development focused on introducing new types of structures into the networks. GoogLeNet [17], the 2014 ILSVRC winner, achieved victory thanks to the novel contribution of the inception module, which validated the concept that the CNN layers of a network could operate in orders different from the classic sequential approach. Another relevant contribution produced by the technology giants was ResNet [18], which scored a win for Microsoft in 2016. The introduction of residual blocks allowed them to increase the depth to 152 layers while keeping the initial data meaningful for training the deeper layers. The residual-block architecture essentially forwards a copy of a layer's inputs; thus, later layers receive both the outputs and the original inputs of prior layers and can learn from the residuals.
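In pseudocode terms, the residual idea reduces to adding a block's input back onto the output of whatever transform the block learns. The following minimal Julia sketch is illustrative only: the transform F is an arbitrary placeholder, not the convolutional layers of an actual ResNet block.

# Residual connection: the block learns only the residual F(x); the input x
# is forwarded unchanged and added back onto the block's output.
residual_block(x, F) = F(x) .+ x

# Toy usage with an arbitrary element-wise transform standing in for the
# convolution stack of a real residual block.
x = rand(4)
F = v -> 0.1 .* tanh.(v)
residual_block(x, F)   # approximately x, plus a small learned correction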
More recently, ReNet [19] architecture was used to extend recurrent neural networks (RNNs) to multidimensional inputs.
The jump from the classification problem with some spatial data to pixel-level labeling (refining inference from the image/region level to the pixel level) was presented by Long [20] with the fully convolutional network (FCN). The method proposed was based on reusing full classifier networks (like the ones just discussed) as convolutional layers within the segmentation architecture. The FCN architecture and its derivatives, like U-Net [21], are among the best solutions to semantic segmentation for most domains. These derivatives may incorporate classic methods, such as DeepLab's [22] conditional random fields [23], which reinforce the inference of spatially distant dependencies usually lost due to CNN spatial invariance. The latest promising contributions to the semantic segmentation problem are based on the encoder-decoder architecture, known as autoencoders, for example SegNet [24].
For the works discussed in this chapter, an FCN16 model built on AlexNet was used as the semantic segmentation model. The main innovation introduced by the general FCN was exploiting the classification power of a common DL network via convolution while, at the same time, reversing the downsampling effect of the convolution operation itself. Taking AlexNet as an example (see Figure 4), convolutional layers apply a filter-like operation while reducing the size of the data forwarded to the next layer. This process produces more discriminative "deep features" but at the same time removes the spatial information describing the relation between the features found. Thus, in order to exploit the features from the deep layers while keeping the spatial information, data from multiple layers have to be fused (with element-wise summation). To produce this fusion, data from the deeper layers are upsampled using deconvolution. Notice that data from shallow layers are coarser in feature terms but contain more spatial information. Thus, up to three different levels can be processed through the FCN, depending on the number of layers deconvolved and fused, as seen in Figure 5.
Figure 5.
Detail of the skip architectures (FCN32, FCN16, and FCN8) used to produce results with data from several layers to recover both deep features and spatial information from shallow layers (courtesy of [25]).
More information on the detailed working of the different FCN models can be found in [25]. It is still worth noting that the more shallow layers are fused, the more accurate the model becomes, but according to the literature, the gain from FCN16 to FCN8 is minimal (below 2%).
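To make the fusion step concrete, the following minimal Julia sketch illustrates the skip-connection idea just described: a coarse score map derived from a deep layer is upsampled back to the resolution of a shallower map and the two are combined by element-wise summation. The array sizes, the random data and the nearest-neighbor upsampling are illustrative assumptions; the real FCN uses learned deconvolution filters.

# FCN-style skip fusion (illustrative only).
# Nearest-neighbor upsampling by an integer factor, standing in for the
# learned deconvolution of the actual FCN16/FCN8 models.
upsample(x::AbstractMatrix, factor::Int) = repeat(x, inner = (factor, factor))

deep_scores    = rand(8, 8)    # coarse map from a deep layer (hypothetical)
shallow_scores = rand(16, 16)  # finer map from a shallower layer (hypothetical)

fused = upsample(deep_scores, 2) .+ shallow_scores   # element-wise summation
@assert size(fused) == size(shallow_scores)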
4. Automated ground truth labeling for multimodal UAV perception dataset
Classic methods using trained classifiers would pick hand-designed features (based on several metrics and detectors, as discussed earlier) to parametrize a given sample and assign a label. This would allow creating small, specific datasets, which could be used to infer the knowledge needed to create bigger datasets in a later step. The high specificity of the chosen features (generally with expert domain knowledge applied implicitly) with respect to the task generally made them unsuitable for exporting learning to other domains.
By contrast, deep learning offers several transfer learning options. That is, as was proven by Yosinski [26], features trained on a distant-domain dataset are generally useful for different domains and usually better than training from a random initial state. Notice that the transferability of features decreases with the difference between the previously trained task and the target one, and that transfer requires the network architecture to be the same at least up to the transferred layers.
With this concept in mind, we decided to build a dataset to train an outdoor industrial pipe detector with pixel-level annotation, able to determine the position of the pipe. Transfer learning allows us to skip building a dataset with several tens of thousands of images; instead, the authors work with a few thousand images, which are used to fine-tune the network. Orders of magnitude of this size are still required because even a "shallow" deep network such as AlexNet already presents 60 million parameters.
Capturing and labeling a dataset is a cumbersome task, so we also set out to automate this task with minimal human supervision/interaction, exploiting the capabilities of the sensing architecture proposed in the earlier works described in Section 2.
This framework (see Figure 6) uses the images captured by the UAV camera sensor, the data processed by the chosen localization approach (see Section 2) to obtain the UAV odometry, and pipe detection seeds from the RANSAC technique applied to the LiDAR point cloud data. When a pipe (or, generally, a cylinder) is detected and segmented in the sensor data provided by the LiDAR, this detection is used to produce a label for the temporally near images, identifying the region of the image (the set of pixels) containing the detected pipe or cylinder and its pose w.r.t. the camera. Notice that, even while running the perception part, the camera works at a higher rate than the LiDAR, so the full odometric estimation is used to interpolate between pipe detections and estimate where the label should be projected into the in-between images (just as described for the pipe prediction in Section 2).
Figure 6.
The framework proposed to automatically produce labeled datasets with the multimodal perception UAV.
This methodology was used to create an initial labeled dataset with actual data captured in real industrial scenarios during test and development flights, as it will be discussed in the next section.
5. Experimental evaluation
To evaluate the viability of the proposed automated dataset generation methodology, we apply it to capture a dataset and train several semantic segmentation networks with it. To provide a quantitative quality measurement for the solutions produced, we use standard state-of-the-art deep learning metrics, modified to account for the fact that in our problem we are dealing with only one semantic class:
- PA (pixel accuracy): base metric, defined as the ratio between the properly classified pixels TP and the total number of pixels in an image, pix_total:
PA = TP / pix_total (1)
Notice that usually, besides the PA the mean pixel accuracy (MPA) is provided, but in our case, it reduces to the same value of PA, thus it will not be provided.
- IoU (intersection over union): standard metric in segmentation. The ratio is computed between the intersection and the union of two sets, namely the found segmentation and the labeled ground truth. Conceptually, it equals the ratio between the number of correct positives (i.e., the intersection of the sets) TP and the sum of the correct positives, spurious positives FP and false negatives FN (i.e., the union of the ground truth and the segmentation provided). Usually it is reported as the mean IoU (MIU), averaging the same ratio over all classes; a short code sketch of both metrics follows after this list.
IoU = TP / (TP + FP + FN) (2)
An additional metric usually computed along with the MIU is the frequency-weighted MIU, which simply weighs the per-class IoU values according to the relative frequency of each class. The MIU (in our case simply the IoU) is the most relevant metric and the most widely used when reporting segmentation results (semantic or otherwise).
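As a minimal sketch of both metrics for this single-class setting, assuming the prediction and the ground truth are available as Boolean pixel masks (all names and the toy data below are illustrative, not part of the original work):

# PA and IoU for a single foreground class, following the definitions above.
function segmentation_metrics(pred::AbstractArray{Bool}, truth::AbstractArray{Bool})
    tp = count(pred .& truth)      # correctly labeled foreground pixels
    fp = count(pred .& .!truth)    # spurious positives
    fn = count(.!pred .& truth)    # false negatives
    pa  = tp / length(truth)       # pixel accuracy: TP over all pixels
    iou = tp / (tp + fp + fn)      # intersection over union
    return (PA = pa, IoU = iou)
end

# Hypothetical 2x3 masks, just to show the call:
pred  = Bool[1 1 0; 0 1 0]
truth = Bool[1 0 0; 0 1 1]
segmentation_metrics(pred, truth)   # PA = 2/6 ≈ 0.33, IoU = 2/4 = 0.5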
5.1 Dataset generation results
The system proposed was implemented over the ROS meta-operating system, just as in previous works [6], where the UAV system used to capture the data is described. A set of real flights in simulated industry environments was performed, where flights
| 1.571778
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
You and your EMT partner are dispatched in your BLS ambulance to a quiet residential neighborhood for an adult male in cardiac arrest. A paramedic unit has also been sent, but it's several minutes behind you. Dispatch reports that they are coaching the patient's wife on how to perform hands-only CPR. On your arrival, a police officer directs you into the living room where you find that another officer has taken over chest compressions.
Your partner switches with the officer while you prepare the automated external defibrillator (AED), turning the device on and then exposing your patient's chest. Without interrupting your partner's chest compressions you place the pads on the patient. The AED chirps, "Stand clear of patient; analyzing rhythm." The room is quiet. You trade places with your partner to minimize hands-off time after the AED finishes analysis.
The AED continues, "Shock advised, charging." You and your partner exchange glances, wondering why it takes so long for an AED to charge. "Stand clear of patient; press shock now." Your partner presses the shock button and the patient's body jerks as the shock is delivered. It then says, "It is safe to touch the patient."
After immediately resuming chest compressions you think about how long it took to analyze, charge and shock the patient. You're concerned because you are aware that your patient wasn't receiving life-saving compressions during those valuable seconds of delay.
Sudden cardiac arrest (SCA) is a leading cause of death among adults in the U.S. and elsewhere. For people suffering SCA who present with a shockable rhythm presumably due to a cardiac etiology, survival to hospital discharge rates vary widely across the country—from close to zero in some areas up to better than 50% in others.(1)
Although high-quality CPR is the cornerstone of resuscitation efforts, unnecessary pauses and delays in CPR play a critical role in reducing rates of survival. One specific area of focus involves the pauses during the rhythm analysis and charging of the AED, also known as the peri-shock pause.
Automated external defibrillators, regardless of manufacturer, operate under the same general principles.(2) After the unit is turned on, the rescuer is given prompts to perform actions such as application of electrode pads or pressing a button to analyze the rhythm. Once this analysis period begins, the devices ask that the patient not be touched or moved so the computer algorithm can analyze the ECG. The AED will then classify the rhythm as either shockable or non-shockable.
Each manufacturer has different strategies for the analysis and classification of the rhythm. However, all of the devices exhibit high sensitivity and specificity for shockable and non-shockable rhythms. Modern analysis strategies even attempt to classify shockable rhythms by their likelihood for successful defibrillation.
Once analysis has determined a rhythm is shockable, an AED will direct the user to stand clear and deliver a shock to the patient after the unit has charged. Modern, biphasic devices will determine the patient's transthoracic impedance and adjust the energy level and duration of application prior to the shock. This strategy enables a higher first chance success over older monophasic devices which delivered fixed energy levels. For subsequent shocks, some biphasic devices even feature escalating energy levels.
This high level overview represents how most devices work, with little variation. How and when an AED charges, however, differs from device to device. The "peri-shock pause" that occurs while an AED charges plays a large role in the hands-off time.(3) More importantly, the less hands-off time, the higher the compression fraction.
Compression fraction is the proportion of time during which chest compressions are performed in each minute of CPR. This takes into account the pauses for breaths, the pauses for AED analysis and charging, and all other unnecessary pauses. This proportion can be measured by machine or by hand and is considered a modifiable factor of quality.
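As a concrete illustration, the fraction can be computed as hands-on time divided by total time. The timings in this short Julia sketch are purely hypothetical, not data from any study:

# Hypothetical minute of CPR with a 12-second peri-shock pause and two
# 4-second ventilation pauses.
total_time  = 60.0                  # seconds in the window
pause_times = [12.0, 4.0, 4.0]      # hands-off intervals (assumed)
hands_on    = total_time - sum(pause_times)
compression_fraction = hands_on / total_time   # 40/60 ≈ 0.67, i.e., about 67%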
How Important Is the Compression Fraction?
Minimally interrupted CPR is a relatively new, but critically important component of resuscitation. Animal studies in 2002 and 2003 that focused on hands-off time found that pauses greater than 15 seconds drastically reduced rates of return of circulation.(4,5) Any increase in the number or duration of these interruptions also decreased shock success.(6) Researchers then began evaluating each component of resuscitation in terms of its impact on hands-off time.
Interruptions for ventilations, pulse checks, advanced airway placement and rhythm analysis all were featured in numerous studies.(7) The animal trials that found associations with lower survival rates and poor outcomes paved the way for clinical trials in humans.(8)
With a critical eye on our standard of care, studies were designed to evaluate protocols that minimized hands-off and no-flow time.(9) The quality of CPR was benchmarked not only in terms of rate and depth, but also in terms of its interruptions. A high compression fraction quickly became the standard of care.
Researchers from the Resuscitation Outcomes Consortium concluded that, "increasing chest compression fraction (hands-on time) during out-of-hospital resuscitation of patients with ventricular fibrillation is an independent determinant of survival to hospital discharge. Devising CPR protocols that take advantage of this simple fact can save thousands of lives each year and are extremely inexpensive to implement."(10) Therefore, the effect that a high compression fraction has on survival to discharge shouldn't be underestimated.
Simply put, the more time we have our hands on the chest doing compressions, the more people we will send home from the hospital neurologically intact. Seattle-King County, Wash., is recognized internationally for having survival rates for witnessed cardiac arrest due to ventricular fibrillation exceeding 50%.
Peter Kudenchuk, MD, a professor of medicine at the University of Washington and associate medical director for King County EMS, relates the effect of peri-shock pauses on compression fractions: "Both animal and clinical studies have shown the adverse effects on outcome from even brief pauses in CPR immediately before defibrillation. If using a manual defibrillator, one way to minimize such pauses is to pre-charge the device during the final seconds of CPR immediately before the next rhythm analysis. A 'quick look' to confirm [ventricular fibrillation] VF, followed immediately by shock from a pre-charged manual defibrillator can save precious wasted seconds of 'no flow' that would otherwise be taken up by charging.
"Even seemingly brief seconds of interrupted CPR can add up quickly during a resuscitation. I'm convinced that keeping those interruptions to a minimum is among the ingredients that contributes to resuscitation success in Seattle-King County, as it can anywhere."
What About Our AED?
As we related in our scenario, providers who have used an AED during a cardiac arrest often feel like they have waited an eternity to resume chest compressions after the unit advises to "stand clear." Seconds tick away as the machine analyzes the rhythm. If a shockable rhythm is detected, more seconds tick away as the AED charges. Finally, the shock is delivered, but yet more seconds go by before the unit advises to, "resume CPR." During the peri-shock pause, which can approach 20 seconds or more, no chest compressions are delivered to the patient.
We know that interruptions of compressions for any reason have negative effects on survival.(11) Coronary perfusion pressure takes time to build up, and drops off swiftly and dramatically when chest compressions are stopped.(12) Forward flow remains inadequate during these interruptions, and perfusion pressures must be built up again after each pause in CPR.(13) When this happens just before defibrillation, the heart is no longer "primed" and in the best possible state to receive the shock.(14)
When studied in AEDs, peri-shock pauses have a definite effect on survival. A 2011 article in Circulation stated: "In patients with cardiac arrest presenting in a shockable rhythm, longer peri-shock and pre-shock pauses were independently associated with a decrease in survival to hospital discharge. The impact of pre-shock pause on survival suggests that refinement of automatic defibrillator software and paramedic education to minimize pre-shock pause delays may have a significant impact on survival."(15)
Peri-shock pauses are an area that must be targeted for improvement in automated external defibrillator design. Wide variability exists in the length of time for both pre- and post-shock pauses in AEDs.(16) And, of the seven commercially available designs studied, only one was capable of delivering a shock within 10 seconds from the end of CPR. One study, which compared automated with manual defibrillation, found a decrease in survival rates with AEDs.(5)
We asked three of the major manufacturers of AEDs—Philips, Physio-Control and ZOLL—to explain when and how their devices charge and how laypersons and rescuers can minimize the peri-shock pause.
Philips Healthcare AEDs begin charging during the analysis period using a technology they call Quick Shock. Specifically, their FR3, HS1 and FRx devices use the hands-off time for analysis and to also charge the device. Philips reports that these AEDs are typically capable of delivering a shock as soon as analysis determines whether the rhythm is shockable.
They note that during this hands-off period, it's important that rescuers don't touch the patient because Philips Healthcare devices continually reassess the rhythm for changes and will disarm their device if they detect a non-shockable rhythm. Their latest AED—the FR3—boasts hands-off times of eight seconds, below the recommended window of 10 seconds for interruptions in compressions.
Physio-Control takes a different approach, with their LIFE
| 1.3804
|
openbmb/Ultra-FineWeb
|
Sugars and CKs in seeds
Plants adjust their growth and development through a sophisticated regulatory system integrating endogenous and exogenous cues. Many of them rely on intricate crosstalk between nutrients and hormones, an effective way of coupling nutritional and developmental information and ensuring plant survival. Sugars in their different forms such as sucrose, glucose, fructose and trehalose-6-P and the hormone family of cytokinins (CKs) are major regulators of the shoot and root functioning throughout the plant life cycle.
The regulation of plant growth and development is crucial for yield and resistance to abiotic and biotic constraints, and it relies on fine-tuned interactions between nutrients and hormones, influenced by environmental inputs. Among these central regulators, sugars and cytokinins (CKs) play predominant roles while operating synergistically, antagonistically and sometimes independently to shape the final reaction of the plant. Sugars act both as substrates fueling growth-related metabolic activity and as signaling entities that drive a wide array of mechanisms throughout the plant life cycle. Briefly, sugar signaling is intimately linked to developmental stages, hormonal signaling and environmental conditions, and is thereby an integrative part of plant growth control. Plants can sense a diversity of soluble sugars such as sucrose, glucose, fructose and trehalose-6-phosphate (T6P). Sophisticated sugar sensing networks have been identified, including hexokinase (HXK), Regulator of G-protein signaling (RGS1), and two main sensors of nutrient and energy status: sucrose-nonfermentation1-related protein kinase1 (SnRK1) and target of rapamycin (TOR) kinase.
CKs are a group of adenine derivatives involved in many central processes in plants, such as the development of vasculature, differentiation of embryonic cells, maintenance of meristematic cells, shoot formation and the delay of leaf senescence. There are two types of CKs: adenine-type cytokinins, represented by kinetin, zeatin and 6-benzylaminopurine, and phenylurea-type cytokinins, like diphenylurea and thidiazuron. Most adenine-type cytokinins are synthesized in roots. Cambium and other actively dividing tissues also synthesize CKs. CKs are viewed as one of the major long-distance root-to-shoot messengers. Their biosynthesis depends on the activity of adenosine phosphate-isopentenyltransferases (IPTs). Trans-zeatin is the most abundant form of CK in plants. Initially identified in rice, Lonely Guy (LOG) enzymes, cytokinin nucleoside 5′-monophosphate phosphoribohydrolases, are involved in direct CK production. CKs primarily regulate gene expression through a phosphotransfer signaling cascade. This cascade is initiated by the histidine kinase cytokinin receptors Arabidopsis Histidine Kinase2 (AHK2), AHK3 and AHK4, which are located in the endoplasmic reticulum membrane, and completed by cytosolic histidine phosphotransfer proteins (AHPs). AHPs shuttle between the cytosol and the nucleus and transfer phosphate to nuclear response regulators (Arabidopsis Response Regulators, ARRs) that fall into two classes: type-A and type-B ARRs are negative and positive regulators of CK signaling, respectively.
Sugars and CKs are individually viewed as major players in many aspects of plant biology. Yet their crosstalk has not been systematically investigated, hence the many gaps in current knowledge. Moreover, the available results underline that the crosstalk is very complex and varies at least according to the nature of the organ and the physiological process. This review aims to underline the interactions between sugars and CKs based on their individual and combined roles in the regulation of key developmental processes throughout the plant life cycle. Based on results derived from different plant species, sugars and CKs seem to act synergistically to drive seedling emergence, shoot meristem activity, shoot branching and flowering, while acting antagonistically, as strongly suggested for seed germination and root meristematic activity, and even demonstrated for root branching and leaf senescence (Figure 1). Here, the main results are discussed, potential integrators of this crosstalk are proposed, and further lines of research are highlighted.
Figure 1. Relationship between sugars and cytokinins (CKs) in the main plant developmental processes, including seed development, germination, seedling establishment, root and shoot branching, leaf senescence, and flowering. The black arrows indicate stimulation or positive effect, and the red lines mean repression or negative effect. This model results from a compilation of studies carried out on different model plants (see references and description in the text).
2. Seed Development, Germination and Seedling Establishment
Seed formation, as well as the seed-to-young-seedling transition through germination, involves sugar and hormone signaling. Even though common key players have been identified in the seed response to sugars and CKs, their molecular interaction remains speculative.
2.1. Seed Development
Seed development covers morphogenesis phases characterized by active cell division and embryonic organ formation and a maturation phase during which storage nutrients accumulate in cotyledons and/or endosperm tissues, with a transfer of reserves between these two compartments. In this latter phase, the embryo acquires tolerance to desiccation and a dormancy state before dispersal in the environment. Dormancy allows the seed to cope with its adverse environment and secures the transition to a new life cycle. Previous works have reported the contribution of sugars and CKs in the control of seed development. In cotyledons of Vicia faba, a high glucose-to-sucrose ratio is correlated with cell division during the morphogenesis phase, whereas an increasing sucrose-to-glucose ratio marks the sink–source transition to the storage phase. The high glucose gradient is related to both high cell-wall-bound invertase (CWINV) expression in the maternal seed coat and hexose transporter (VfSTP1) expression in the embryonic epidermal cells. Analyses of the CWINV-deficient mutant miniature1 (mn1), impaired in endosperm development in maize caryopses, provide evidence that CWINV also contributes to CK-dependent cell proliferation during the developmental transition to the storage phase. Such a CK effect may operate directly on cell cycle-related genes (CycD3) and indirectly through (CWINV2)-mediated sugar signaling. Nevertheless, the seemingly contradictory phenotype of the CK-receptor-defective triple mutant ahk2 ahk3 cre1 exhibiting greater seed size points to the complexity of the regulatory network. Understanding how CKs contribute to seed development will require considering the different levels of regulation of CK metabolism, such as the spatiotemporal accumulation and transport of CKs in seed tissues, the dynamics of their biosynthesis (IPT) and inactivation (CKX), and their perception. The transition from cell division and expansion (seed morphogenesis) to storage activity (seed maturation phase) is associated with downregulated CWINV and IPT expression. At this stage, sugars serve for seed storage accumulation by mediating sucrose synthase induction for starch biosynthesis in maize kernels or gibberellic acid (GA)-dependent alpha-amylase induction for storage remobilization in barley embryos. Such sugar-dependent regulation takes place at the transcriptional and post-transcriptional levels. The role of sugars in seed maturation could be complex and partially mediated through T6P, considered as a proxy for sucrose availability in plants, and SnRK1. Sucrose positively regulates T6P accumulation in wheat at the seed pre-filling stage, and its exogenous application stimulates seed filling and yield. Accordingly, Arabidopsis seeds of the mutant tps1 (Trehalose-6-phosphate synthase 1) fail to proceed towards the maturation phase. In pea, SnRK1 deficiency hinders the maturation and storage activity. Accordingly, SnRK1 induces abscisic acid (ABA) synthesis and signaling and the C/S1-group bZIP signaling pathways associated with carbon starvation. This regulation is mediated by pFUS3 (the Arabidopsis B3-domain transcription factor FUSCA3) phosphorylation, known to control ABA responses during seed maturation and dormancy. Transcriptomic comparison of CK metabolism and signaling in dormant and non-dormant wheat seeds highlights that CK controls the activity of many genes involved in seed dormancy.
The interactions of CKs with ABA metabolism and signaling during seed maturation need to be further investigated and compared with sugar signaling mediated at least by the T6P and SnRK1 pathways.
2.2. Seed Germination and Seedling Establishment
The carbon stored in the mature seed will be remobilized during germination to ensure seedling establishment before the seedling becomes autotrophic. Seed germination is accomplished when the radicle protrudes through the outer layers of the embryo, i.e., the endosperm and the teguments. The related cellular and metabolic events are orchestrated by complex signaling crosstalk involving the hormones ABA and GA, well known for their role in
| 2.498131
|
m-a-p/FineFineWeb
|
If almost all ODE solvers fail, how can we trust the results from the other solvers?
I want to introduce Julia to one course. We have been using MATLAB in this course for many years, and its solvers are very robust. For these special problems, I tried Python and it seems not as robust as MATLAB (I am also a beginner of Python), but I managed to solve them all. I heard from friends about Julia, and I tried it, but apparently I can hardly solve the same problems (the warning/error is "Instability detected. Aborting"). Here below is the source code, and it seems only the solver Rosenbrock23 gives the expected results. This situation makes it difficult for me to explain to others (in MATLAB almost all solvers work, just the efficiency matters).
using DifferentialEquations, Plots
function dyth_dtau(dyth, yth, par, t)
# nt = int(length(yth)/2)
nt = par[1] # this can be calculated from the line given above
dz = par[2] # this can be calculated from n (nt)
gamma = par[3]
beta = par[4]
phi = par[5]
Le = par[6]
theta = yth[1:nt]
y = yth[nt+1:2*nt]
rhs = [phi^2 * y[i] * exp(gamma*(1.0 - 1.0/theta[i])) for i in 1:nt]
# dtheta/dtau
dyth[1] = 2 * (theta[2] - theta[1])/dz^2 + beta * rhs[1]
for i in 2:nt-1
dyth[i] = (theta[i+1] - 2 * theta[i] + theta[i-1])/dz^2 + beta * rhs[i]
end
dyth[nt] = (1 - 2 * theta[nt] + theta[nt-1])/dz^2 + beta * rhs[nt]
dyth[1:nt] = dyth[1:nt] / Le
# dy/dtau
dyth[nt+1] = 2 * (y[2] - y[1])/dz^2 - rhs[1]
for i in 2:nt-1
dyth[nt+i] = (y[i+1] - 2 * y[i] + y[i-1])/dz^2 - rhs[i]
end
dyth[2*nt] = (1 - 2 * y[nt] + y[nt-1])/dz^2 - rhs[nt]
end
n = 99
dz = 1/(n+1)
gamma = 30.0
beta = 0.15
phi = 1.1
Le_arr = [1.0, 0.5, 0.25, 0.15, 0.1]
t_span = (0.0, 1.0)
yth0 = ones(2*n)
for i in 1:length(Le_arr)
Le = Le_arr[i]
println(Le)
par = (n, dz, gamma, beta, phi, Le)
prob = ODEProblem(dyth_dtau, yth0, t_span, par)
sol = solve(prob)
if (i == 1)
global plt=plot(sol.t, sol[1, :])
else
global plt=plot!(sol.t, sol[1, :])
end
end
display(plt)
Here is the status of the package: " [0c46a032] DifferentialEquations v6.16.0 ".
It is very strange that not all ODE solvers are available. For example: CVODE_BDF() and RadauIIA() are missing.
Could you please give me some advices? Thank you.
I've had very good experience with the DifferentialEquations.jl solvers – once I learned to use them – so I'm a little bit surprised by your experience.
Your code would be much easier on the eye if you insert the code within a start line consisting of triple backticks followed by "julia", and a closing line consisting of triple backticks. Like…
using DifferentialEquations, Plots
function dyth_dtau(dyth, yth, par, t)
# nt = int(length(yth)/2)
nt = par[1] # this can be calculated from the line given above
dz = par[2] # this can be calculated from n (nt)
gamma = par[3]
beta = par[4]
phi = par[5]
Le = par[6]
theta = yth[1:nt]
y = yth[nt+1:2*nt]
rhs = [phi^2 * y[i] * exp(gamma*(1.0 - 1.0/theta[i])) for i in 1:nt]
# dtheta/dtau
dyth[1] = 2 * (theta[2] - theta[1])/dz^2 + beta * rhs[1]
for i in 2:nt-1
dyth[i] = (theta[i+1] - 2 * theta[i] + theta[i-1])/dz^2 + beta * rhs[i]
end
dyth[nt] = (1 - 2 * theta[nt] + theta[nt-1])/dz^2 + beta * rhs[nt]
dyth[1:nt] = dyth[1:nt] / Le
# dy/dtau
dyth[nt+1] = 2 * (y[2] - y[1])/dz^2 - rhs[1]
for i in 2:nt-1
dyth[nt+i] = (y[i+1] - 2 * y[i] + y[i-1])/dz^2 - rhs[i]
end
dyth[2*nt] = (1 - 2 * y[nt] + y[nt-1])/dz^2 - rhs[nt]
end
[Your model looks like a discretized set of 2 PDEs, e.g., a simultaneous temperature distribution (theta) and possibly a concentration (y), with diffusion and some reaction term.]
Looks like it's just a tolerance thing. In MATLAB as well (and in general), tolerance can affect stability. Here, the exp definitely amplifies that. But with the right tolerances it looks like every algorithm is fine.
This thing takes about 0.3 seconds to solve. Simplest seems to be CVODE, swap it to GMRES, 0.4 seconds. And then optimizing the RHS a bit brings it to 0.3 seconds.
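As a concrete illustration of the tolerance and solver choices just mentioned, the snippet below shows how one might call solve on the prob defined in the original post; the specific tolerance values are my own illustrative assumptions, not numbers quoted in the thread.

# Tighter tolerances with the default algorithm choice:
sol = solve(prob; abstol = 1e-8, reltol = 1e-8)

# Or an explicit stiff solver; CVODE_BDF with a Krylov (GMRES) linear solver
# scales well for this kind of discretized reaction-diffusion system:
using Sundials
sol = solve(prob, CVODE_BDF(linear_solver = :GMRES); abstol = 1e-8, reltol = 1e-8)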
using DifferentialEquations, Plots, Sundials
function dyth_dtau(dyth, yth, par, t)
nt = Int(length(yth)/2)
dz = par[1] # this can be calculated from n (nt)
gamma = par[2]
beta = par[3]
phi = par[4]
Le = par[5]
theta = yth[1:nt]
y = yth[nt+1:2nt]
rhs = [phi^2 * y[i] * exp(gamma*(1.0 - 1.0/theta[i])) for i in 1:nt]
# dtheta/dtau
dyth[1] = 2 * (theta[2] - theta[1])/dz^2 + beta * rhs[1]
for i in 2:nt-1
dyth[i] = (theta[i+1] - 2 * theta[i] + theta[i-1])/dz^2 + beta * rhs[i]
end
dyth[nt] = (1 - 2 * theta[nt] + theta[nt-1])/dz^2 + beta * rhs[nt]
dyth[1:nt] ./= Le
# dy/dtau
dyth[nt+1] = 2 * (y[2] - y[1])/dz^2 - rhs[1]
for i in 2:nt-1
dyth[nt+i] = (y[i+1] - 2 * y[i] + y[i-1])/dz^2 - rhs[i]
end
dyth[2*nt] = (1 - 2 * y[nt] + y[nt-1])/dz^2 - rhs[nt]
end
n = 99
dz = 1/(n+1)
gamma = 30.0
beta = 0.15
phi = 1.1
Le_arr = [1.0, 0.5, 0.25, 0.15, 0.1]
t_span
| 1.113337
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
is said to be drawing out just how much the meanings in our language are dependent upon our "forms of life", and he is suggesting that a "form of life" (its concerns and practices) may be so divergent from our own that we could not understand whatever such a lion would mean. Quite unlike Anselm's notion of "if you understand me, you will agree with me," does it not rely upon a particularly opaque use of the word "understand"? For instance, we would say that an ethologist in Africa "understands" lions in a way that the average urban person does not (as, perhaps, does an experienced lion tamer). So certainly the use of the word "understand," even when there is no strict "talking" present, has meaning in the context of lions. One would imagine that, given the addition of speech, such specialists would only "understand" them better. Now why, on principle, would a lion imagined to speak be incapable of being understood? What would the measure of "understanding" be? Fluency?
My suspicion is that there is something in this analogy which is confessional, that is, Wittgenstein as a brilliant, German Jew, was a kind of un-understandable "lion" among the carefully kept minds of the English at Cambridge, as can be seen in the paragraph preceding the citation, where he speaks of "transparency" in the context of coming to the customs of a foreign country:
We also say of people that they are transparent to us. It is, however, important as regards this observation that one human being can be a complete enigma to another. We learn this when we come into a strange country with entirely strange traditions; and, what is more, even given the mastery of the country's language. We do not understand (versteht) the people. (And not because of not knowing what they are saying to themselves.) We cannot find our feet with them (ibid).
Taken together, it would seem that Wittgenstein would like to say, "We would not be able to understand a lion if she could speak, that is, we would not be able to 'find our feet with her', but we would be able to know what she is saying to herself". There would be a degree of separability which would not be conceptual. We may be able to read someone's thoughts, but not their intentions (seen in the paragraph that follows).
When considering "real" lions (and does not Wittgenstein always want us to return to the concrete circumstances of communication), this seems like an odd thing to conclude, for it is really that ethnologists (and lion tamers), indeed do well predicting intentions (actions), and one would imagine that this would only be facilitated by the capacity for speech. After a few brief observations which speak to the community of knowledges, Wittgenstein ends up making a point about the instrumental "foundation" of predictions of behaviors. A third person pov has a different foundation for his prediction of my behavior, than I do:
Two points, however, are important: one, that in many cases someone else cannot predict my actions, whereas I foresee them in my intentions; the other, that my prediction (in my expression of intention) has not the same foundation as his prediction of what I shall do, and the conclusions to be drawn from these predictions are quite different.
I can be as certain of someone else's sensations as of any fact. But this does not make the propositions "He is much depressed", "25 x 25 =625" and "I am sixty years old" into similar instruments. The explanation suggests itself that the certainty is of a different kind.–This seems to point to a psychological difference. But the difference is logical (logischer) (ibid).
What began as a meditation on cultural (and perhaps even species) estrangement, the lack of understanding, has dissolved into a "logical" point, the differing of "instruments". What has happened to the claim that if a lion could talk, we could not understand him? If we accept Wittgenstein's story of alienation and of not "finding our feet" with someone (anyone?), understanding itself, as it is experienced, remains intact. The instruments work together. Transparency, as he named it, exists, if only in degrees. And no one who can be said to communicate is utterly opaque, or what Wittgenstein calls a "complete enigma". Really, it is against the backdrop of understanding that any failure to understand takes place.
I believe here, in the very experience of understanding, of coherence, as it presents itself, that Anselm's experiential proof lies.
Donald Davidson's Unquestionable Belief
The last stop in the tour of understanding is philosopher Donald Davidson's rebuttal of the skeptic who claims that it is possible that all our beliefs are false. Davidson's reply, in short, is much like Anselm's. If indeed the skeptic, or anyone else, understands what a belief is, he must understand that beliefs are by their very nature both veridical and holistic. Any doubt of a particular belief precludes the doubt of all beliefs.
This is the core of his argumentation:
What is needed to answer the skeptic is to show that someone with a (more or less) coherent set of beliefs has a reason to suppose his beliefs are not mistaken in the main. What we have shown is that it is absurd to look for a justifying ground for the totality of beliefs, something outside this totality which we can use to test or compare with our beliefs. The answer to our problem must be to find a reason for supposing most of our beliefs are true that is not a form of evidence.
My argument has two parts. First I urge that a correct understanding of the speech, beliefs, desires, intentions, and other propositional attitudes of a person leads to the conclusion that most of a person's beliefs are true, and so there is a legitimate presumption that any one of them, if it coheres with most of the rest, is true. Then I go on to claim that anyone with thoughts, and so in particular anyone who wonders whether he has any reason to suppose he is generally right about the nature of his environment, must know what a belief is, and how in general beliefs are to be detected and interpreted. These being perfectly general facts we cannot fail to use when we communicate with others, or when we try to communicate with others, or even when we merely think we are communicating with others, there is a pretty strong sense in which we can be said to know that there is a presumption in favor of the overall truthfulness of anyone's beliefs, including our own. So it is bootless for anyone to ask for some further reassurance; that can only add to his stock of beliefs. All that is needed is that he recognize that belief is inherently veridical…
…Take for example the interdependence of belief and meaning. What a sentence means depends partly on the external circumstances that cause it to win some degree of conviction; and partly on the relations, grammatical or logical, that the sentence has to other sentences held true with varying degrees of conviction…it is impossible for an interpreter to understand a speaker and at the same time to discover the speaker to be largely wrong about the world. For the interpreter interprets sentences held true (which is not to be distinguished from attributing beliefs) according to the events and objects in the outside world that cause the sentences to be held true….
…What stands in the way of global skepticism of the senses is, in my view, the fact that we must, in the plainest and methodologically most basic cases, take the objects of a belief to be the causes of that belief. And what we, as interpreters, must take them to be is what they in fact are. Communication begins where causes converge: your utterance means what mine does if belief in its truth is systematically caused by the same events and objects…
…All beliefs are justified in this sense: they are supported by numerous other beliefs (otherwise they wouldn't be the beliefs that they are), and have a presumption in favor of their truth. The presumption increases the larger and more significant the body of beliefs with which a belief coheres and, there being no such thing as an isolated belief, there is no belief without a presumption in its favor.
"A Coherence Theory of Truth and Knowledge"
Much like Anselm's notion of understanding, Davidson argues, though in a different fashion, for the logical coherence of beliefs, such that the very nature of beliefs precludes their global falsity. Because beliefs are an explanatory attribution (we attribute them to explain the rationality of behavior) and do not exist apart from any such explanatory apparatus, it makes no sense to say that all our beliefs might be false. To put it another way, with each "might be false" that is added to the last "might be false", when attending to beliefs, there is a geometric decrease in the likelihood of falsity. At the lower end of any such approach to a "completely false" totality of beliefs, these simply are no longer beliefs. They would no longer be functioning as the explanation of behavior taken to be rational.
The skeptical problem arises in thinking that one belief can be held up to the world, and compared, in isolation from all other beliefs, and still remain a belief. It arises from thinking of correspondence as the determination of the "truth" of belief. Beliefs are something we do, and not merely have.
In a certain sense, to use a mechanical analogy, any
| 1.704142
|
Zyphra/Zyda-2
|
version 1.0 (Life Technologies, Tokyo).
Results
Variant filtering, family-based analysis, and validation by Sanger sequencing
After the removal of variants that did not affect protein amino acid sequences or were common (≥1.0% MAF in HapMap-JPT or 1000 Genomes ASN databases) [20, 21], we analyzed the patient with the VACTERL association and his parents to identify genetically functional variants using the following classifications: (1) de novo mutations that were non-inherited or unique; (2) homozygous recessive mutations that were derived from both parents; and (3) compound heterozygous mutations for which the patient had at least two mutations derived separately from parents in the same gene. Two de novo mutations in two genes, one homozygous recessive mutation in one gene, and four compound heterozygote mutations in one gene in the patient were identified and validated by subsequent Sanger sequencing (Additional file 1: Table S1). These mutations were not found in our in-house database of previously sequenced WES data obtained from 19 healthy Japanese individuals.
The identified two de novo mutations included a nonsense mutation in the NT5C3L (5′-nucleotidase, cytosolic III-like) gene and a missense mutation in the TTLL9 (tubulin tyrosine ligase-like family, member 9) gene. However, since both genes are functionally unknown, we could not conclusively determine their contribution to the VACTERL association.
WES identified a novel frameshift mutation in the PCSK5 gene
Among the candidates for recessive mutations, we focused on the PCSK5 gene, which has been reported as one of the causative genes for the VACTERL association [9, 10]. We identified a novel frameshift mutation and three missense mutations as a compound heterozygote in the PCSK5 gene in the patient that were validated by Sanger sequencing (Additional file 2: Table S2). The PCSK5 gene has five transcript variants (splice variants/isoforms) according to the Ensembl database [17], and we called the mutations according to the longest transcript, PCSK5-201 (ENST00000545128). The frameshift mutation was c.4870_4871delTG (p.C1624fs) and a missense mutation was c.4876C>A (p.Q1626K) (Figure 1). The other two missense mutations only appeared in transcript PCSK5-003 (ENST00000376767) and we called them c.2027G>C (p.W676S) and c.2032G>C (p.G678R), respectively.
Figure 1
figure 1
Exon and protein domain structures of the PCSK5 gene. The transcripts and their exon structures are based on the Ensembl database (ENSG00000099139). Ensembl transcript IDs are shown in brackets. The protein domain structures are based on the InterPro database. InterPro domain IDs are shown in the legend with notes. The locations of the detected mutations are indicated by black arrows.
Two missense mutations in exon 14 of the PCSK5-003 transcript (p.W676S and p.G678R) were also present in the patient's mother. The genomic region of these elements was located within a short simple repeat of (GAATG)n (Chromosome 9: 78790113-78790255, 143 bp) according to the RepeatMasker of the UCSC genome browser (_URL_). These missense mutations were found to occur due to repeat number variations of 5-nucleotide repeats since there were nucleotide variations in each repeat unit (Figure 2a). The 5-nucleotide repeats were transcribed only in the PCSK5-003 isoform. The evolutionary conservation of the elements in which the mutations were detected was analyzed by multiple alignments using the UCSC genome browser [17], and this region demonstrated no conservation across species (Figure 3a). Taken together, these mutations are unlikely to affect the function of the gene product. Whereas the dbSNP database contains a single submission for these mutations (rs62556590, rs77249767) [20], those of HapMap-JPT, 1000 Genomes (including ASN) [21], and HGVD have no such entries.
Figure 2
figure 2
Novel PCSK5 frameshift mutation in the patient as demonstrated by Sanger sequencing. a Two novel missense mutations are validated and show inheritance from the maternal allele. Note that another missense mutation just downstream of the two missense mutations is observed in the child, but not in the mother. The mutations are located in exon 14 of transcript PCSK5-003 (ENST00000376767). b A frameshift mutation is validated and shows inheritance from the paternal allele. The mutation is located in exon 34 of transcript PCSK5-201 (ENST00000545128) or exon 28 of transcript PCSK5-002 (ENST00000424854). Just after the frameshift, a cysteine codon is changed to a stop codon, which suggests a functional impact on this gene. Another missense mutation (c.4876C > A [p.Q1626K], c.3976C > A [p.Q1326K]) is validated in exon 34 but is presumed to be functionally irrelevant due to the upstream stop codon.
Figure 3
figure 3
Evolutionary conservation of the region containing the frameshift mutation in the PCSK5 gene. a Two novel missense mutations in exon 14 of PCSK5-003 inherited from the maternal allele have no conservation across species. b The frameshift mutation in exon 34 (PCSK5-201)/28 (PCSK5-002) derived from the paternal allele affects highly conserved residues. The p.Q1626K (PCSK5-201)/p.Q1326K (PCSK5-002) mutation exhibits conservation among primates, but may be functionally irrelevant due to an upstream frameshift mutation.
A frameshift mutation and the other missense mutation were also present in the boy's father (Figure 2b). Although the dbSNP and 1000 Genomes databases do not contain records of the frameshift or the missense mutations in exons 34 (PCSK5-201) or 28 (PCSK5-002), the HGVD has entries for 4 and 2 identical mutant alleles, respectively, among a total of 858 alleles (allele frequency: 0.5 and 0.2%, respectively). Both mutations were located in the PCSK5-201 and PCSK5-002 isoforms. The frameshift mutation results in a truncated protein that disrupts a region highly conserved across species (Figure 3b), suggesting that it might cause a functional alteration of PCSK5. The position of the missense mutation (p.Q1626K) may not be significant because it is located downstream of the frameshift mutation (Figure 2b).
PCSK5 proteins are composed of 3 or 4 domains according to predictions by the InterPro database version 46.0 [22] (Figure 1). The protein domain that included the frameshift mutation was that of an insulin-like growth factor binding protein (IPR009030) containing a cysteine-rich motif (CRM). CRMs are found in a variety of eukaryotic proteins that are involved in receptor tyrosine kinase signaling. CRMs are also considered to be responsible for interactions with cell surface heparan sulfate proteoglycans (HSPGs) via tissue inhibitors of metalloproteases (TIMPs) [5]. The frameshift mutation converts a cysteine residue to a stop codon to produce a truncated protein with the loss of the C-terminal half of the CRM domain or a loss of protein production due to nonsense-mediated mRNA decay.
Since we identified the p.C1624fs (PCSK5-201) frameshift mutation in the database for the general population (HGVD), we next searched for other possible deleterious mutations in the PCSK5 gene (Additional file 2: Table S2). We identified two nonsense mutations, c.3052C>T (p.Q1018X, rs373172614) and c.4615C>T (p.R1539X, rs140709872), in the PCSK5 gene in the variant database from the general population. The frequency of these variants is 0.02%, which is several-fold higher than that of the VACTERL association. We also identified five frameshift mutations, c.1490_1491insACAC (p.N497fs rs138544337), c.1496_1497insC (p.R500fs, rs372197834), c.3915_3916insG (p.V1307fs,
| 1.600676
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
to clients. At the very same time, however, one must be cautious that the massage being delivered is always performed by experienced professionals in order to reap the benefits of Swedish massage therapy.
Myofascial trigger points — muscle knots — are a ubiquitous muscular dysfunction, causing most of the aches, pains and stiffness in the world, and complicating virtually every other injury and disease process. A lot of massage is focused on them, directly or indirectly. Massage may be helpful because it relieves the symptoms of muscle knots, or even unties them. (No, not literally.)
Myofascial trigger points — muscle knots — are a ubiquitous muscular dysfunction, causing most of the aches, pains and stiffness in the world, and complicating virtually every other injury and disease process. A lot of massage is focused on them, directly or indirectly. Massage may be helpful because it relieves the symptoms of muscle knots, or even unties them. (No, not literally.)
Sheets and wrappings of connective tissue called fascia are considered an exciting frontier in massage therapy. Supposedly fascia can get tight and needs to be "released." However, key examples of research either fail to support fascial therapy or actually undermine it — for instance, fascia is too tough to actually change. Fascia enthusiasm seems to be a fad. For more information, see Does Fascia Matter? A detailed critical analysis of the clinical relevance of fascia science and fascia properties.
Many people confuse reflexology with massage, Reiki, or acupuncture, but there are essential differences between these therapies. Massage therapists manipulate larger areas of soft tissue in the body while reflexologists apply pressure to specific points on the feet, hands, and ears. Unlike either massage or reflexology, Reiki does not involve any physical manipulation or pressure, but instead uses light touch to work with the subtle vibrational field thought to surround the body. Finally, while acupuncture and acupressure, like reflexology, use reflex points on the body to influence other parts of the body, the points are not the same and acupuncture uses points over the entire body.
Swedish massage techniques are different from other massage techniques in that they are quite specific in the order in which the massage is done. These techniques apply deeper pressure than other kinds of massage and they are also known to increase oxygenation of the blood and release metabolic waste such as lactic and uric acids from the tissues of the muscles.
The main purpose of Swedish massage is to increase the oxygen flow in the blood and release toxins from the muscles. Swedish massage shortens recovery time from muscular strain by flushing the tissues of lactic acid, uric acid, and other metabolic wastes. It increases circulation without increasing heart load. It stretches the ligaments and tendons keeping them supple and pliable.
In the US, licensure is the highest level of regulation and this restricts anyone without a license from practicing massage therapy or by calling themselves that protected title. Certification allows only those who meet certain educational criteria to use the protected title and registration only requires a listing of therapists who apply and meet an educational requirement.[123] It is important to note that a massage therapist may be certified, but not licensed. Licensing requirements vary per state, and often require additional criteria be met in addition to attending an accredited massage therapy school and passing a required state specified exam (basically the certification requirements in many states). In the US, most certifications are locally based. However, as of March 2014, some states still do not require a license or a certification.[citation needed] However, this is thought to change eventually as more regulatory bodies governing the profession of massage are established in each state. Furthermore, some states allow license reciprocity where massage therapists who relocate can relatively easily obtain a license in their new state. Not all states provide this option.[124]
A study in the Journal of Alternative and Complementary Medicine found that people's blood pressure fell after a single 45 to 60 minute deep tissue massage. Additionally, a 2010 meta-analysis in the Journal of Clinical Psychiatry found that massage modalities like deep tissue reduce stress hormone levels and heart rate while boosting mood and relaxation by triggering the release of oxytocin and serotonin.
Deep tissue massage is best suited for people who engage in highly physical activities, such as running, or those who have an injury or chronic pain. If you have a low pain threshold or are looking for relief of tense muscles, Swedish massage is gentler and may be a better option. Speak to your doctor before trying deep tissue massage if you have an underlying medical condition.
Deep tissue massage stretches out the fascia, the connective tissue covering muscles, allowing therapists to directly affect long-standing muscle knots. (If you're suffering from muscle aches, you can try a DIY office massage in between Zeel Massage appointments – just watch this video.) When you have specific back pain or muscle pain or neck pain, probably due to your terrible office posture, a deep tissue massage can help.
Swedish massages utilize variations of strokes, and the body responds to the pressure, duration, and speed of those strokes in different ways. Long, quick strokes stimulate blood flow to tissue. Gentle, slow, topical strokes promote general relaxation. The technique of delivering the massage largely depends on what the massage is intended to address.
Pre-event sports massage is done to help prevent serious athletic injury. It helps to warm up the muscles, stretching them and making them flexible for optimal athletic performance. A pre-event massage stimulates the flow of blood and nutrients to the muscles, reduces muscle tension, loosens the muscles, and produces a feeling of psychological readiness.
Information on this website is provided for informational purposes only and is not intended as a substitute for the advice provided by your physician or other healthcare professional. You should not use the information on this website for diagnosing or treating a health problem or disease, or prescribing any medication or other treatment. Any third party offering or advertising on this website does not constitute an endorsement by Andrew Weil, M.D. or Healthy Lifestyle Brands.
When you're hungry, you need to eat. You'll feel fine for a while, but you're going to get hungry again later. Massage works on a similar principle. You'll feel the effects from a massage almost immediately. The rush of positive endorphins and the newness of your extended range of motion will likely carry on for a few days. Like with most things, you're going to need to come back to maintain the positive after-effects.
Want more treatment choices for you and your partner-in-crime? Check out Faina, where there are 15 options for the two of you to choose from. We suggest the Decadent Duet ($279), which includes a European manicure for her, a European pedicure and foot scrub for him and a 50-minute joint aromatherapy massage, complete with bubbly and decadent chocolate. But if you're looking for the ultimate pampering sesh, the Champagne Extravaganza for Two ($229) includes a steam shower, a boozy body scrub and an aromatherapy body massage (30mins) followed by champagne and chocolate.
"Spa Recruit your bestie, boyfriend or even that chick on the street who looks like she could use some stress relief, because this spa, which was recently featured on cheesy MTV dating show Gamekillers, is designed for shared services. Snag one of their five private couples rooms for spa treatments. If the thought of chatting about the newest episode of work drama throughout your relaxation session stresses you out, then the spa offerings can be enjoyed solo."
The pulsational motions of a classical massage are performed in the direction of the heart, with the purpose of relaxing tense muscles and helping to eliminate any toxins trapped within. A build-up of toxins can have severe consequences for the body, particularly the abdomen, if they are not excreted on a daily basis.
You will soon discover the extraordinary health effects of our massage treatments. Stress is a major factor in poor health, and we know how to reduce it in your life. Do you suffer from poor circulation? Our therapeutic massages will improve blood flow through your body. Our extended hours let you enjoy one of our special services on your schedule.
I've worked in a variety of exciting environments, including the Salt Lake City Winter Olympics, the Greece Paralympic Summer Games and on the road with the U.S. National Powerlifting Team. Plus, I have worked with collegiate, ABL and WNBA athletes. Currently, I travel with the WTA (Women's Tennis Association) as part of the sports science and medicine team. In my private clinic, I specialize in orthopedic massage.
Do not use massage therapy to replace conventional care or to postpone seeing a health care provider about a medical problem. If you have a medical condition and are unsure whether massage therapy would be right for you, discuss your concerns with your health care provider. Ask about the training, experience, and credentials of the massage therapist you are considering. Also ask about the number of treatments that might be needed, the cost, and insurance coverage. For more tips on finding a massage therapist, see the National Center for Complementary and Integrative Health's (NCCIH) webpage (How to Find a Complementary Health Practitioner) or ask a friend or your physician for a referral.
In short, yes. An athlete's medical condition and history should not be discussed with anyone except other trainers or coaches. There is nothing the media likes more than to hear that a high-profile athlete is sick or injured, so those discussions don't happen outside closed doors.
| 1.292557
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
and type of SLAM. For this simulation, we will be doing 2D slam with simulated IMU and laser scan data as inputs. The Cartographer ROS settings are configured in the racer_2d_cartographer.launch file.
The os1_sensor.urdf file is loaded and the robot_state_publisher node publishes the state defined in the URDF file to the /TF topic.
The cartographer_ros node is also initialized, configured by loading the racer_2d.lua file. We also remap the IMU and laser scan data from their default topics to the topics output by the simulated robot and OS-1 sensor.
Lastly, the cartographer_occupancy_grid_node is run. The occupancy_grid_node listens to the submaps published by SLAM, builds a ROS occupancy_grid out of them and publishes it. Generating the map is expensive and slow, so map updates are in the order of seconds.
This launch file is then integrated into the broader system simulation launch file.
Running the Simulation
The entire simulation is defined by the rc_laser_map.launch file. This file loads the simulation as well as all the relevant supporting ROS nodes to produce the required data for Cartographer to function.
The first portion of the file defines default values for several parameters that can be used to customize the simulation. These values can be overwritten via the command line when launching the simulation.
Next, the Gazebo simulation is started by loading the appropriate world file and spawning the robot with its integrated sensors. Then several nodes are loaded that allow the user to control the robot via the keyboard or joystick. The pointcloud_to_laserscan node is then loaded to convert the 3D pointcloud to a 2D laser scan. This makes computation simpler and also increases the flexibility of the system. Some ROS nodes only support LaserScan inputs and not PointCloud2 inputs. Lastly, the racer_2d_cartographer.launch file is executed which launches Cartographer with its specific configurations described earlier.
The video below shows the vehicle navigating the environment. Cartographer provides an RViz plugin which allows the visualization of submaps, the trajectory, and constraints.
OS1 RC Car Cartographer ROS Gazebo Simulation
Once the environment is sufficiently mapped, the map file can be saved and loaded again later. This is done with the map_server ROS package:
rosrun map_server map_saver -f /tmp/my_map
This creates two files. The YAML file describes the map metadata, and names the image file. The image file encodes the occupancy data. An example image of the occupancy grid map is shown below:
OS1 RC Car ROS Simulation Cartographer Map
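For reference, the YAML metadata file written by map_saver typically looks something like the following (the field values here are illustrative, not taken from this run):

image: my_map.pgm           # occupancy image saved alongside this file
resolution: 0.050000        # metres per pixel
origin: [-25.0, -25.0, 0.0] # pose of the lower-left pixel: x, y, yaw
negate: 0
occupied_thresh: 0.65       # cells darker than this are treated as occupied
free_thresh: 0.196          # cells lighter than this are treated as free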
Running on Real-world Data
The next step is to move from the simulated environment to the real world environment. Since the Cartographer system setup was validated in simulation, it's fairly straightforward to develop a working implementation on data collected in the real world. There will be some minor changes and parameter tuning to reflect the real world environment. We will run Cartographer in both 2D and 3D modes on data collected from an OS-1-64 mounted on a push cart and also from an OS-1-64 mounted on an RC car.
Indoor Data Collection
The first step is to collect data from your environment. Before running everything on the integrated RC car, we will collect data in a simpler and more controlled environment. In the first example, an OS-1-64 is mounted to a cart with a laptop as shown in the image below.
OS1 Cart Mount for Data Collection
The cart was manually pushed around the office. The standard ouster_ros sensor client package was used to configure the sensor and interface with ROS. The os1_cloud_node/points and os1_cloud_node/imu topics were recorded.
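The recording itself would have been done with the standard rosbag tool, along these lines (the output name cart.bag matches the bag replayed later):

rosbag record -O cart.bag /os1_cloud_node/points /os1_cloud_node/imu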
Validate the ROS Bag File
Cartographer ROS provides a tool named cartographer_rosbag_validate to automatically analyze data present in your bag. It is generally a good idea to run this tool before trying to tune Cartographer for incorrect data.
It benefits from the experience of the Cartographer authors and can detect a variety of mistakes commonly found in bags. The tool can also provide tips on how to improve the quality of your data.
The tool can be run with the following command:
rosrun cartographer_ros cartographer_rosbag_validate -bag_filename
This produces the following output:
Cartographer ROSbag Validation Output
Run Cartographer
We will use the demo_cart_2d.launch file to replay the collected data and run Cartographer. This file loads Cartographer via the cart_2d.launch file, launches RViz preconfigured with the demo_2d.rviz settings, and replays the bag file passed via the bag_filename parameter on the command line.
The cart_2d.launch file loads the os1_sensor.urdf file to get the transformations between the IMU and sensor components required by Cartographer. It then launches the cartographer_node loading the cart_2d.lua configuration file. There are a couple of changes from the configuration files used in simulation:
- There is no source of odometry on the cart:
  - use_odometry = false
- We are using the full PointCloud2 message and not a LaserScan message:
  - num_laser_scans = 0,
  - num_point_clouds = 1,
As before, the default input topics are remapped to reflect the topic names output by the system in the cart_2d.launch file. Since we are reading PointCloud2 topics, the "points" topic is remapped instead of the "scan" topic.
Lastly, the cartographer_occupancy_grid_node is launched to produce the map.
The complete process is executed with the following command:
$ roslaunch rover_2dnav demo_cart_2d.launch bag_filename:=cart.bag
You can monitor the progress of the map generation via RViz as shown in the image below:
OS1 Cartographer Map Generation in RViz
The complete, final map is shown below:
OS1 Map Generated by Cartographer
Running Cartographer on an RC Car
Now that the Cartographer can run on real-world data, we can integrate Cartographer into a complete robotic system. For this example, we will mount the OS1-64 to an RC car platform. The system also has an OpenMV camera mounted, but this isn't used by the Cartographer system. The vehicle can be operated remotely with an Xbox controller. All processing is done on a fitlet2 with an Arduino Uno providing commands to the motors for steering and throttle. This set-up closely mimics the simulated system used earlier.
The complete system looks like this:
RC Car with Ouster OS1 and OpenMV Cam
When working with the RC car, the diy_driverless_car_ROS repository is used, specifically the rover_2dnav package. When manually operating the RC car, the rc_dbw_cam.launch launch file is used to start the system.
Run 2D Cartographer Online
The rc_dbw_cam.launch loads the joystick node to process steering and throttle commands from the Xbox controller. These are then sent to the L298N_node which communicates with the Arduino Uno. The rc_control.launch file is loaded which handles some control message conversions and launches an Extended Kalman Filter via the robot_localization package. The EKF node computes the odometry information that is later sent to Cartographer. The OpenMV camera driver node is launched as well as the OS-1 ROS client.
Cartographer can be run by setting the map argument to True via the command line when launching the rc_dbw_cam.launch file. This launches the pointcloud_to_laserscan node as well as Cartographer via the racer_2d_cartographer.launch file.
As described earlier, the os1_sensor.urdf file loads the component transforms and configurations from the racer_2d.lua file.
The system is launched by executing the following command:
$ roslaunch rover_2dnav racer_map.launch rviz:=true map:=true cartographer:=true
An image of the online map generation while manually controlling the RC car is shown below:
OS1 RC Car Cartographer Map Generation in RViz
Run Cartographer 3D Pipeline Offline
When operating the RC car, we want to limit the amount of processing resources used by mapping so those resources can be used for other perception, path planning, or control functions that may also be running. However, if we process the collected data offline, we are far less bound by real-time constraints.
The offline_node is the fastest way of SLAMing a bag of sensor data. It does not listen on any topics; instead, it reads TF and sensor data out of a set of bags provided on the command line. In all other regards, it behaves like the cartographer_node.
Cartographer operates in both a 2D and 3D mode. The 2D pipeline estimates a trajectory of 3DoF (x,y, yaw) poses by matching 2D scans in 2D submaps. In contrast, the 3D pipeline estimates a trajectory of 6DoF (x,y,z, roll, pitch, yaw) poses by matching 3D
| 1.531684
|
Zyphra/Zyda-2
|
How Nvidia's AI Security system saved supercomputer data centre operator $300,000 an hour and stopped it being hacked by bitcoin desperadoes
Downtime is so expensive to a super computer operator that a network failure can cost it a million dollars in lost productivity in half a day.
So a new level of artificial intelligence that predicts operational issues, prevents network failures and catches hackers early can instantly pay for itself.
Processing giant Nvidia has unveiled a new artificial intelligence (AI) driven security system, which aims to minimise downtime in InfiniBand data centres by using analytics to detect and anticipate problems.
The NVIDIA Mellanox UFM has been used to manage InfiniBand systems for a decade. It applies AI to learn a data center's operational cadence and network workload patterns. Drawing on this knowledge of both real-time and historic telemetry and workload data it can create a baseline of what is normal and acceptable. It then tracks the system's health and network modifications, and detects performance degradations, usage and profile changes.
In June Nvidia added a third element to the UFM family, the UFM Telemetry platform. This tool captures real-time network telemetry data, which is streamed to an on-premises or cloud-based database to monitor network performance and validate the network configuration.
This means the new system can spot abnormal system and application behaviour. It can also predict potential system failures and nip these threats in the bud by taking corrective action.
Supercomputers are often targets of high-value system hacking by sophisticated crooks attempting to host undesired applications, such as cryptocurrency mining. The result of catching such intrusions early is reduced data center downtime — downtime that typically costs more than $300,000 an hour, according to research by ITIC.
The UFM Cyber-AI system allows system administrators to instantly detect and respond to potential security threats and prevent failures. This saves a fortune and provides the continuity of service that keeps everyone in a job, according to Gilad Shainer, senior vice president of marketing for Mellanox networking at NVIDIA.
'It determines a data centre's unique vital signs and uses them to identify performance degradation, component failures and abnormal usage patterns,' said Shainer.
Douglas Johnson, associate director of the Ohio Supercomputer Center, has used the UFM platform for years in his employer's InfiniBand data centres. 'UFM and the expertise from the Mellanox networking team have been fundamental ingredients in the management of our network and the stability we've achieved,' said Johnson.
The UFM Cyber-AI platform complements the UFM Enterprise platform, which manages networks, performance and security.
D-Link introduces new fever-screener technology to identify overheating staff in the COVID-19 crisis
Camera maker D-Link has launched a new spy-cam which can take the temperature of everyone in the office and report on who looks a bit peaky.
The new Group Temperature Screening Camera, DCS-9500T, is an all-in-one intelligent fever screening kit with a dual-lens thermographic picture taker, blackbody calibrator and management software.
It uses artificial intelligence (AI) to analyse data from the thermographic camera and can raise an alarm automatically if an unusually high body temperature is detected in any of the subjects.
The system was developed by South Korean vendor D-Link for monitoring large, busy areas and gives fast skin-surface temperature detection for up to 30 people at once with accuracy to the nearest 0.3°C.
D-Link says it is intended for schools, factories, office buildings, airports or hospitals.
The fever screener has a high accuracy camera with a wide-angle thermal lens and an uncooled IRFPA 400×300 microbolometer high-resolution thermal sensor. Together these create razor-sharp thermal images and precise results when identifying those with a temperature.
A full high definition (HD) optical imaging sensor allows the kit to create high-quality footage that overlays thermal and optical images into one.
Facial recognition technology in the management software means that the kit can identify staff members who are falling ill.
The fever screening system is compatible with open industry forum ONVIF, so it can be integrated into existing systems.
The management software can manage up to 32 cameras, so up to 900 people could be monitored at once.
What made the South Koreans develop this particular application of the technology?
Is it in use already in South Korea?
The technology identifies people who are overheating. How does it raise an alarm to management: through a discreet email to a manager, does it use public shaming methods such as social media, or does it employ direct digital intervention such as shutting down the individual's technology and despatching a robot to eject them from the premises?
Has anyone raised any queries about the use of this technology?
How Blockchain will give us control back over our bodies
Within two years all medical data will be globally available – including your genomes – unless it can be blockchained by ethical start ups like Shivom.
Well, we say ethical start ups. Who knows? The CEO and founder, Dr Axel Schumacher, sounded very nice on the phone. And the 94 page white paper on Precision Medicine looks pretty convincing.
Then again, so did the likes of Facebook, Twitter and Google. They were into precision marketing.
Dr Axel Schumacher, CEO of Shivom
Dr Axel Schumacher – I don't think I'd mind if he examined my genomes – he is a proper doctor after all
We didn't know that. We all assumed they were philanthropists who let us use their expensive data centres to store all our private thoughts because they really wanted to 'democratise the world'.
They told us they were levelling the playing field. Actually, they were looting our private lives.
It turns out they were selling every bit of information they could glean, to everyone from pyramid sellers, to debt collectors to oppressive government snooping agencies.
Still, we're all a bit older and wiser now? We won't get spooked again, will we? Sadly, it turns out we will and this time the data intrusions go even deeper. As if having our privacy ravished by cyber spivs wasn't bad enough, the next generation of digital confidence gainers have taken snooping to the next level. They've devised a way to ransack our medical histories and taken privacy pillaging to a molecular level.
Did you know that all these wearable gadget companies, like Fitbit, sell the information you upload to them? You might not think your ECGs are all that exciting but pharmaceutical companies have an endless appetite for this sort of stuff. They've got absolute buckets of money, like futures and options traders on Friday night in a lap dancing club. Those rapacious pharmas will keep stuffing notes into our private places in the hope that we'll show them more of our biology.
But who is profiting from this Medical Peep Show? Fitbit, that's who. All you get is a useless wearable placebo which doesn't seem to tell you much at all. You wonder how long people wear a Fitbit until they get bored with them and put them in the cupboard with the other fad equipment, between the Espresso Machine and the Sandwich Toaster.
Genome companies like 23&Me have even more detailed information about our DNA. Why? Because we're mug enough to send them our blood samples, in the vain hope that they'll tell us something about our genetic make up. Which one of the 12 tribes were we descended from? It's the modern version of Astrology. I've got typical DNA of a Sagittarean, on the cusp of a privacy invasion.
Surely, if someone is going to pay to ogle at our double helix, we should be the ones getting paid. We shouldn't be giving our bodies away free so that some wearable pimp can take all the profit. If I'm going to get my genomes out for the labs, I want a piece of the action.
This is where Blockchain can help us. It can put us back in control of our data. We can use wearable gadgets to measure and monitor us. But new blockchain companies like Shivom will put a lock on our data, so we only give it up by consent. You wanna see my Genomes, Mr SmithKlineGlaxo? Well put ten dollars into my bitcoin!
New invention cuts the job of sorting AI Tools from 7 years to a month
There are so many AI tools for chemical engineering jobs that it takes years to find the right one. So researchers at North Carolina State University have trained a 'virtual lab' to choose the best AI tools for each job.
Having the right tool for the job makes the job a lot easier, less expensive and faster.
They simulated 600,000 experiments, assessed 150 AI-guided decision-making strategies and claim to have cut 7.5 years of continuous robotic operation into one month's work.
Chemical engineering researchers have now developed a virtual laboratory that can be used to determine the artificial intelligence (AI) tools best suited for addressing various chemical synthesis challenges in flow chemistry systems.
Autonomous systems can accelerate chemical R&D and manufacturing, but they are not in widespread use for two reasons. It's hard to select the right hardware for automated synthesis – and it's impossible to find the right AI-guided decision-making algorithm, says Milad Abolhasani, assistant professor of chemical engineering at North Carolina State University.
There are three reasons why: One, there's a huge choice of AI tools available. Two, there's little information about them – which is an odd omission for artificial intelligence – so it's hard to decide which tool is best for each material synthesis problem. Three, the admin
| 1.40675
|
Zyphra/Zyda-2
|
melting temperature of the matrix that consolidation to nearly 100% of theoretical density occurs.
It is important to avoid melting of the matrix since this may cause alloying between the matrix metal and the tungsten carbide particles. This is particularly true when the matrix metal is steel which has a strong affinity for the carbide. Such alloying reduces the quantity of carbide left in the rod for forming a hard facing and reduces the toughness of the matrix. Thus it is preferred to sinter the rod only enough to provide strength for handling preparatory to using the welding rod. The residual porosity is in the range of from 5 to 20%.
As previously mentioned, there are many types of tungsten carbide particles available for use in practice of this invention. Thus the carbide particles in the welding rod may be cemented tungsten carbide (WC cemented with cobalt, for example), macrocrystalline tungsten carbide, WC particles, or cast carbide. It can be desirable to employ "spherical" cast carbide particles and cemented carbide pellets for wear resistant hard facing. When the green mixture of particles is extruded through a die, the tungsten carbide particles tend to scratch and erode the die. Spherical particles cause less of such abrasion and less die wear.
For hard facing rock bit teeth, high melting point, hard, wear resistant matrix metals are preferred. Steel is particularly preferred for alloying with the steel substrate of the teeth for maximum resistance to chipping or spalling. The matrix powder may be plain carbon steel or any of a broad variety of alloy steels. Such alloys can be formed by mixing powders of different composition although it is preferred to employ alloy powders. Virtually any alloy can be atomized to form spherical particles of uniform size appropriate for use in practice of this invention. Many such alloys are commercially available.
Alternatively high melting point brazing alloys may be employed. For example, the American Welding Society BNi series of nickel base alloy filler metals having brazing temperatures from about 900 to 1,200° C. are particularly suitable.
A suitable composition comprises plain carbon steel particles commingled with about 4% of deoxidizer or "flux". A suitable deoxidizer is silico-manganese obtained from Kennametal, Inc., Fallon, Nev. The nominal composition of the silico-manganese is 65 to 68% manganese, 15 to 18% silicon, a maximum of 2% carbon, a maximum of 0.05% sulfur, a maximum of 0.35% phosphorus, and a balance of iron. Upon melting the welding rod the silico-manganese alloys with the plain carbon steel to form an alloy steel. Sintering of such a rod is at a temperature less than the eutectic temperature of a manganese/silicon/iron alloy.
It is desirable for obtaining a high density of carbide in the metal matrix, to employ a mixture of particle sizes for high packing density. Two or three sizes of particles may be mixed together. The mixture may comprise particles of a single type of carbide or may be of different types. For example, a mixture of relatively larger particles of cemented tungsten carbide and relatively smaller particles of single crystalline monotungsten carbide can provide excellent wear resistance on the teeth of a rock bit.
An exemplary composition for hard facing teeth on a rock bit employs as one type of carbide, 20 to 30 mesh cemented tungsten carbide. The grain size of the tungsten carbide grains in the particles of cemented tungsten carbide are in the range of from about one to fifteen microns. The binder content in such a cemented tungsten carbide is preferably cobalt in the range of from 6% to 8% by weight.
The cemented tungsten carbide is commingled with single crystal WC, preferably in the range of from 40 to 80 mesh.
The ratio of particle size of the larger particles of cemented tungsten carbide to smaller monocrystalline carbide can be in the range of from two to five. A larger ratio is less desirable since the smaller particles can be so small that excessive solution in the alloy steel matrix may occur. A size ratio of three is preferred.
Another exemplary composition for hard facing teeth on a rock bit employs 80 to 200 mesh cemented tungsten carbide mixed with single crystal monotungsten carbide in the range of from 200 to 325 mesh. Generally speaking, the hard facing with larger particles is tougher and more resistant to breakage, whereas the smaller particles result in a more wear resistant hard facing.
The weight ratio of the larger particle size cemented tungsten carbide to the smaller particle size single crystal WC is in the range of from 35:65 to 80:20, and preferably in the range of from 60:40 to 80:20. In a particularly preferred embodiment, the proportion of larger size cemented tungsten carbide is 75% by weight and the smaller particle size single crystal WC is 25%. A substantial proportion of the cemented carbide is preferred for enhanced toughness of the hard facing.
The high packing density of the relatively larger cemented tungsten carbide particles and relatively smaller single crystal carbide particles is appropriate for resisting hypothesized wear mechanisms for hard facing material. One postulated wear mechanism comprises "extrusion" or yielding and consequent wear of the binder phase securing the carbide particles to the substrate. Wear of the matrix leaves carbide particles exposed and unsupported for possible fracture. One way of enhancing wear resistance of the binder is to make it stronger and harder. An alloy steel binder provides such hardness and strength while retaining sufficient toughness to keep the hard facing intact.
Another way of enhancing wear resistance of the binder is to reduce the mean distance between particles so that the binder layer is thinner. This can be done by having smaller particles, but this may diminish the cutting ability of the teeth on the cutter cone. The high packing density and high proportion of carbide to binder possible in the extruded rods also reduce the mean distance between particles or thickness of the binder phase which may be subject to deformation and wear.
Generally speaking, the proportion of carbide to steel in the hard facing should be maximized for best wear resistance. For example, the carbide should be in the range of from 60 to 80% of the composition with the steel forming the other 20 to 40%. A preferred range is from 70 to 75% carbide. This desideratum is promoted in the extruded rods since the proportion of carbide can be higher than with tube-rods, while still maintaining adequate strength for handling.
The particles of binder or matrix metal are preferably about 1/3 the size of the carbide particles. Exemplary particle size is in the order of 100 to 200 mesh, or even smaller.
The temporary organic binder can be any of a variety of compositions that can be vaporized from the mixture before sintering to avoid residual contamination. A variety of paraffin waxes may be used. Polyethylene glycol with a molecular weight of about 1,000 is appropriate. Other hydrocarbon lubricants conventionally used for pressing or extruding powder metallurgy mixtures can be used. Solvents such as hexane, heptane, or the like may also be incorporated in the composition for uniformity of mixing.
Conventional mixing techniques in a Hobart mixer, ball mill, or the like are quite suitable. Typically the mixing is conducted at an elevated temperature so that the organic binder is melted and contacts all surfaces of the powders to give the green compact reasonable strength. For example, a mixture using polyethylene glycol as the binder may be mixed at about 120° C. It is desirable to cool the mixture below 40° C. before extruding so that the polyethylene glycol is solid and a reasonable green strength is obtained in the rod.
The quantity of organic binder is not particularly critical. Something in the order of 2 to 5% binder is satisfactory. The amount used may depend on the particular binder chosen and the parameters of the extruding press.
Extrusion does not appear to have critical parameters. All that is required is sufficient pressure to obtain a straight rod. The diameter of the orifice on the extrusion machine determines the size of the completed rod. Thus a green rod extruded at a diameter of about 4.4 millimeters has a finished diameter of about 4 millimeters after sintering.
The best parameters for extruding a given mixture are somewhat a matter of trial and error. The composition of the mixture makes a difference. As suggested above, spherical particles tend to extrude easier than angular particles. Thus, the nature of the carbides employed may make a difference in the extrusion. Particle size may also have an influence, as well as the choice and concentration of lubricant. Other parameters subject to variation in the extrusion include pressure and temperature.
Routine experimentation can determine the appropriate parameters. If the mixture is too "stiff", it may not be feasible to extrude it at reasonable pressures. Conversely if the mixture is too "soft", the extruded rods may crack. It might be noted that minor surface cracks which do not affect their performance are sometimes seen on the rods.
The rod diameter and length are not critical, and any conventional dimensions are suitable. From four to ten
| 1.660265
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
to any of the Sendai viral proteins. On the basis of the N-terminal sequence, oligonucleotides were designed corresponding to the sense and antisense DNA sequences. Dot blot hybridization and primer extension with these oligonucleotides with the viral and the host genome confirmed the host origin of this protein. Further, the limited proteolytic digestion of the target membrane resulted in significant inhibition of viral fusion with it. On the basis of these results, we postulate a model for the molecular mechanism of F protein-induced membrane fusion, which may provide a rationale for other paramyxoviruses. PMID:_PHONE_
12. Efficacy of Favipiravir Alone and in Combination With Ribavirin in a Lethal, Immunocompetent Mouse Model of Lassa Fever
PubMed Central
Oestereich, Lisa; Rieger, Toni; Lüdtke, Anja; Ruibal, Paula; Wurr, Stephanie; Pallasch, Elisa; Bockholt, Sabrina; Krasemann, Susanne; Muñoz-Fontela, César; Günther, Stephan
2016-01-01
We studied the therapeutic potential of favipiravir (T-705) for Lassa fever, both alone and in combination with ribavirin. Favipiravir suppressed Lassa virus replication in cell culture by 5 log10 units. In a novel lethal mouse model, it lowered the viremia level and the virus load in organs and normalized levels of cell-damage markers. Treatment with 300 mg/kg per day, commenced 4 days after infection, when the viremia level had reached 4 log10 virus particles/mL, rescued 100% of Lassa virus–infected mice. We found a synergistic interaction between favipiravir and ribavirin in vitro and an increased survival rate and extended survival time when combining suboptimal doses in vivo. PMID:26531247
13. Disulfide-bonded discontinuous epitopes on the glycoprotein of vesicular stomatitis virus (New Jersey serotype).
PubMed
Grigera, P R; Keil, W; Wagner, R R
1992-06-01
Intrachain disulfide bonds between paired cysteines in the glycoprotein (G) of vesicular stomatitis virus (VSV) are required for the recognition of discontinuous epitopes by specific monoclonal antibodies (MAbs) (W. Keil and R. R. Wagner, Virology 170:392-407, 1989). Cleavage by Staphylococcus aureus V8 protease of the 517-amino-acid VSV-New Jersey G protein, limited to the glutamic acid at residue 110, resulted in a protein (designated GV8) with greatly retarded migration by polyacrylamide gel electrophoresis (PAGE) under nonreducing conditions. By Western blot (immunoblot) analysis, protein GV8 was found to lose discontinuous epitope IV, which maps within the first 193 NH2-terminal amino acids. These data, coupled with those obtained by PAGE migration of a vector-expressed, truncated protein (G1-193) under reducing and nonreducing conditions, lead us to postulate the existence of a major loop structure within the first 193 NH2-terminal amino acids of the G protein, possibly anchored by a disulfide bond between cysteine 108 and cysteine 169, encompassing epitope IV. Site-directed mutants in which 10 of the 12 cysteines were individually converted to serines in vaccinia virus-based vectors expressing these single-site mutant G proteins were also constructed, each of which was then tested by immunoprecipitation for its capacity to recognize epitope-specific MAbs. These results showed that mutations in NH2-terminal cysteines 130, 174, and, to a lesser extent, 193 all resulted in the loss of neutralization epitope VIII. A mutation at NH2-terminal cysteine 130 also resulted in the loss of neutralization epitope VII, as did a mutation at cysteine 108 to a lesser extent. Both epitopes VII and VIII disappeared when mutations were made in COOH-distal cysteine 235, 240, or 273, the general map locations of epitopes VII and VIII. These studies also reveal that distal, as well as proximal, cysteine residues markedly influence the disulfide-bond secondary structure, which
14. Evaluation of swinepox virus as a vaccine vector in pigs using an Aujeszky's disease (pseudorabies) virus gene insert coding for glycoproteins gp50 and gp63.
PubMed
van der Leek, M L; Feller, J A; Sorensen, G; Isaacson, W; Adams, C L; Borde, D J; Pfeiffer, N; Tran, T; Moyer, R W; Gibbs, E P
1994-01-01
Pigs were vaccinated by scarification or intramuscular injection with a swinepox virus-Aujeszky's disease (pseudorabies) recombinant (rSPV-AD) constructed by inserting the linked Aujeszky's disease virus genes coding for glycoproteins gp50 and gp63, attached to a vaccinia virus p7.5 promoter, into the thymidine kinase gene of swinepox virus. By 21 days after vaccination, 90 and 100 per cent of the animals vaccinated by scarification or intramuscular injection, respectively, had developed serum neutralising antibodies to Aujeszky's disease virus. Upon challenge with virulent virus, significantly fewer vaccinated pigs developed clinical Aujeszky's disease, nasal shedding of challenge virus was markedly reduced, and the vaccinated groups of pigs maintained or gained weight during the week after challenge whereas the unvaccinated control group lost weight. No transmission of rSPV-AD to in-contact controls was detected during the three weeks before challenge. In a second experiment, serum neutralising antibodies to Aujeszky's disease virus persisted for 150 days after the pigs were vaccinated with rSPV-AD by scarification or intramuscular injection and all the pigs showed an anamnestic response when they were revaccinated. PMID:_PHONE_
15. Mutations in the putative HR-C region of the measles virus F2 glycoprotein modulate syncytium formation.
PubMed
Plemper, Richard K; Compans, Richard W
2003-04-01
The fusion (F) glycoproteins of measles virus strains Edmonston (MV-Edm) and wtF (MV-wtF) confer distinct cytopathic effects and strengths of hemagglutinin (H) interaction on a recombinant MV-Edm virus. They differ in just two amino acids, V94 and V101 in F-Edm versus M94 and F101 in F-wtF, both of which lie in the relatively uncharacterized F(2) domain. By comparing the sequence of MV F with those of the parainfluenza virus SV5 and Newcastle disease virus (NDV) F proteins, the structures of which are known, we show that MV F(2) also possesses a potential heptad repeat (HR) C domain. In NDV, the N-terminal half of HR-C interacts with HR-A in F(1) while the C-terminal half is induced to kink outward by a central proline residue. We found that this proline is part of an LXP motif conserved in all three viruses. Folding and transport of MV F require this motif to be intact and also require covalent interaction of cysteine residues that probably support the potential HR-A-HR-C interaction. Amino acids 94 and 101, both located in "d" positions of the HR-C helical wheel, lie in the potentially outwardly kinked region. We demonstrate that their effect on MV fusogenicity and glycoprotein interaction is mediated solely by amino acid 94. Substitutions at position 94 with polar or charged amino acids are tolerated poorly or not at all, while changes to smaller and more hydrophilic amino acids are tolerated in both transiently expressed F protein and recombinant virus. MV F V94A and MV F V94G viruses induce extensive syncytium formation and are relatively, or almost completely, resistant to a known inhibitor of MV glycoprotein-induced fusion. We propose that the conformational changes in MV F protein required to expose the fusion peptide involve the C-terminal half of the HR-C helix, specifically amino acid 94. PMID:_PHONE_
16. The catalytic triad of the influenza C virus glycoprotein HEF esterase: characterization by site-directed mutagenesis and functional analysis.
PubMed
Pleschka, S; Klenk, H D; Herrler, G
1995-10-01
Influenza C virus is able to inactivate its own cellular receptors by virtue of a sialate 9-O-acetylesterase that releases the acetyl residue at position C-9 of N-acetyl-9-
| 1.283293
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
mode, power is automatically removed from nonessential electrical equipment in the user's home or factory.
At the end of the timed cycle, the receiver is automatically reset to a standard mode of operation and is ready to receive the next peak load alarm signal.
The present invention provides a system designed to reduce consumer power usages during peak load use times. The system is simple, efficient and constitutes a fair and equitable means to control the power usage only when such control is absolutely necessary.
Other and further advantageous features of the present invention will hereinafter more fully appear in connection with a detailed description of the drawings, in which:
FIG. 1 is a schematic block diagram of an alarm system embodying the present invention.
FIG. 2 is a schematic diagram of a power supply for the receiver portion of the system of FIG. 1.
FIG. 3 is a schematic diagram of the decoder portion of the alarm system.
FIG. 4 is a schematic diagram of the timer means and visual alarm of the alarm system.
FIG. 5 is a schematic diagram of the alarm mode latching relay means and audio alarm system.
Referring to the drawings in detail and particularly FIG. 1, reference character 10 generally indicates a power use peak load alarm system utilized in conjunction with a cooperating local AM or FM broadcast station generally indicated by reference character 12. The alarm system 10 generally comprises a receiver 14 and associated receiving antenna 16 for receiving a coded alarm signal from the broadcast station 12. The system also comprises a state variable active filter 18 with an audio amplifier on its input. The high band tone is filtered through the HP (high pass) section and is presented to a PLL (phase lock loop) detector 20. Likewise the low band tone is filtered through the LP (low pass) section and is presented to a PLL detector 20. For added security, both tones need to be present to ensure a suitable output of tone decoder 20 to initiate a fixed time duration timer 22. The output of the timer 22 is connected to a visual alarm display 24, an audible alarm in conjunction with the receiver 14 through a speaker reset function 26, and a relay unit 28. The relay unit 28 is provided with a plurality of contacts which can be utilized to control the power usage equipment and/or rate metering generally indicated by reference character 30.
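The two-tone security requirement amounts to a logical AND of the two tone detector outputs before the timer 22 is started. As a purely illustrative sketch (not the analog PLL circuitry the patent describes), the same check can be expressed digitally with a Goertzel detector; the tone frequencies and threshold below are hypothetical:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Return the power of target_freq in samples using the Goertzel algorithm."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def alarm_signal_present(samples, sample_rate, low_tone, high_tone, threshold):
    """Both tones must exceed the threshold, mirroring the two-tone gating of the timer."""
    return (goertzel_power(samples, sample_rate, low_tone) > threshold and
            goertzel_power(samples, sample_rate, high_tone) > threshold)

# Hypothetical test signal containing both tones.
fs = 8000
sig = [math.sin(2 * math.pi * 750 * i / fs) + math.sin(2 * math.pi * 1500 * i / fs)
       for i in range(2048)]
print(alarm_signal_present(sig, fs, 750, 1500, threshold=1e5))  # True for this signal
```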
Referring now to FIG. 2, reference character 32 generally indicates a power supply for the system which is operably connected to the user's AC power source indicated by reference character 34. This AC source may be ordinary 110 volt 60 cycle house power. The AC power is stepped down to an appropriate voltage level by the transformer 36 and then passed through a full wave rectifier generally indicated by reference character 38. The output of the rectifier 38 is filtered by a capacitor 40 and the ground thereof is made common with the ground for the AC power source 34 by the connection 42. The positive side of the output of the rectifier 38 is connected to one side of a resistor 42 which is tied to the ground through a second capacitor 44. Between the resistor 42 and the full wave rectifier 38 is a first voltage takeoff + V1. A second voltage takeoff + V2 is attached to the positive side of the circuit between the resistor 42 and the capacitor 44. A controlled rectifier or zener diode or the like 46 is operably connected between the resistor 42 and the capacitor 44 with the negative side thereof being attached to ground. The power supply 32 therefore provides two controlled positive DC output voltages + V1 and + V2 as clearly shown in FIG. 2.
Referring now to FIG. 5, the receiver 14 can be of standard AM or FM design and is modified in the following manner: the audio output of the receiver 14 is connected to the volume control high side indicated by reference character 45 through the relay means 28 in a manner that will be hereinafter set forth. The volume control for the receiver is shown as a potentiometer 48 having the volume high side 45 of the potentiometer connected to the relay means 28 and also to the input of the state variable active filter 18 for a purpose that will be hereinafter set forth.
The output of the high side 45 of the volume control 48 is connected to the ordinary receiver output amplifier and speaker 50 through the potentiometer and volume control center tap 46. Referring to FIG. 3, the state variable active filter 18 is provided with an audio amplifier 52 which comprises an operational amplifier 54 having its input connected to the audio output of the receiver through the resistor R1 and capacitor C1 in series therewith. DC power from + V1 is also applied to the operational amplifier 54 directly and also applied to the second input of the operational amplifier 54 through the voltage divider made up of resistors R2 and R3.
A load resistor R4 is connected between the said voltage divider and the second input of the operational amplifier 54. The juncture between the voltage divider resistors R2 and R3 is connected to ground through a capacitor C2. The output of the operational amplifier 54 is then connected back to the input through a resistor R5.
The output of the audio amplifier 52 is then connected to the input of a plurality of operational amplifiers connected as a state variable active filter. The first operational amplifier of the state variable active filter is designated by reference character 56 and receives its input from the output of the audio amplifier 52 through a capacitor C3 and resistor R6 in series therewith. The second input of the operational amplifier 56 is connected to voltage + V1 through resistor R7 and the previously described resistor R2. The output of the operational amplifier 56 is connected back to its first input through a resistor R8.
The output of the operational amplifier 56 is also connected to the input of a first phase lock tone detector 58 through a capacitor C4. The output of the operational amplifier 56 is also connected to a first input of an operational amplifier 60 through a resistor R9. The second input of the operational amplifier 60 is connected to power V1 through a resistor R10 and the previously described resistor R2. The output of the operational amplifier 60 is connected to its first input through a capacitor C5 and is connected to the second input of the operational amplifier 56 through a resistor R11.
The output of the operational amplifier 60 is also connected to a first input of an operational amplifier 62 through a resistor R12, the second input of the operational amplifier 62 being connected to voltage through a resistor R13 and the previously mentioned resistor R2. The output of the operational amplifier 62 is connected back to its first input through a capacitor C6. The output of the operational amplifier 62 is also connected back to the first input of the operational amplifier 56 through a resistor R14. The output of the operational amplifier 62 is also connected to the input of a second phase lock tone detector 64 through a capacitor C7.
Although reference characters 56, 60 and 62 are in fact operational amplifiers, they are connected as a state variable active filter in such a way that the output of operational amplifier 56 represents the high band of an audio tone which is provided as an input to the phase lock tone detector 58. Likewise, the output of the operational amplifier 62 provides a low band audio tone as an input to the second phase lock tone detector 64. It has been found that the audio amplifier and the state variable active filter may be constructed from an off-the-shelf quad operational amplifier integrated circuit which is designated herein as IC1.
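The three op-amps wired this way form a classic analog state-variable filter, which produces high-pass, band-pass and low-pass outputs of the same input simultaneously. As a purely illustrative digital counterpart (the Chamberlin discrete-time form, not the op-amp circuit in the patent, and with an arbitrary cutoff), the same band-splitting idea looks like this:

```python
import math

def state_variable_filter(samples, sample_rate, cutoff_hz, damping=1.0):
    """Chamberlin digital state-variable filter: returns lowpass, highpass and
    bandpass output lists computed in a single pass; damping is roughly 1/Q."""
    f = 2.0 * math.sin(math.pi * cutoff_hz / sample_rate)  # frequency coefficient
    low = band = 0.0
    lows, highs, bands = [], [], []
    for x in samples:
        low += f * band                 # integrate band-pass into low-pass
        high = x - low - damping * band # high-pass is the remaining residual
        band += f * high                # integrate high-pass into band-pass
        lows.append(low)
        highs.append(high)
        bands.append(band)
    return lows, highs, bands
```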
The first phase lock tone detector 58 as hereinbefore set forth receives its high band audio tone from the output of the state variable active filter into an off-the-shelf purchasable chip designated by reference character 66. The input is received at pin 3 of the chip 66.
Power is provided to pin 4 of the phase lock tone detector 58 from + V2 through a voltage divider made up of resistors R15 and R16. Capacitors C9 and C10 connect pins 2 and 1 respectively to ground and serve as filters for setting the response time of the phase lock tone detector. Pin 5 is connected to ground through parallel resistors R17 and R18 and capacitor C11, all of which may be adjusted to set the operational frequency of the phase lock tone detector.
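The patent does not name the detector chips, so the exact frequency-setting relationship is not given. If chips 66 and 68 are similar to the widely used NE567 PLL tone decoder, the centre frequency set by the external timing resistor R and capacitor C is approximately f0 = 1 / (1.1 * R * C); the snippet below is a small sketch under that assumption, with hypothetical component values:

```python
def pll_tone_decoder_center_freq(r_ohms, c_farads):
    """Approximate centre frequency of a 567-type PLL tone decoder, f0 ~ 1/(1.1*R*C).
    Component values are hypothetical; the patent does not specify the chip."""
    return 1.0 / (1.1 * r_ohms * c_farads)

# Example: a 10 kOhm timing resistor with a 100 nF capacitor gives roughly 909 Hz.
print(round(pll_tone_decoder_center_freq(10_000, 100e-9)))
```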
When the proper frequency is provided at pin 3 of the phase lock tone detector 58, the output voltage at pin 8 thereof goes low. When the proper frequency is not provided at pin 3, the output voltage at pin 8 is positive. The second phase lock tone detector 64 is substantially identical to the detector 58 and utilizes a chip 68. The chip 68 accepts its input frequency from the output of the state variable active filter operational amplifier 62 through the capacitor C7 to pin 3. Voltage is provided at pin 4 of the detector 68 from + V2 through the voltage divider made up of resistors R19 and R20, pin 4 also being connected to ground through the capacitor C12. Again pins 2 and 1 of the detector chip 68 are connected to ground through the capacitors C13 and C14 respectively and may be adjusted in value to set the response time for the detector 64. Pin 5 of the detector chip 68 is connected to ground through parallel resistors R21 and R22 and the capacitor C15, all of which may be adjusted to set the operational frequency of the detector 64. The output
| 1.777481
|
openbmb/Ultra-FineWeb
|
approach, treating all interactions as equally important, with more specific extraction tasks. To this end, it is important to create specialized corpora, such as those for the extraction of regulation events or for protein complex formation. The more specific a question is, the simpler it is to create representative corpora, leading to better models, often higher extraction performance, and better comparability of methods. For instance, works like [67] on extraction of gene regulation or [68] on extraction of phosphorylation events report much higher accuracies than those currently achievable in the general PPI task. Third, there is a severe lack of studies measuring the real-life performance of PPI extraction methods, circumventing the usage of gold standards by, for instance, user surveys with biological experts. Last but not least, our results also show that rule-based methods still hold up remarkably well compared to machine-learning-based approaches as soon as the specific evaluation settings are left behind.
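To make the cross-corpus (CC) protocol referred to above concrete, the sketch below shows its skeleton with scikit-learn. It is only an illustration: it uses a generic RBF-kernel SVM on placeholder feature vectors rather than any of the nine kernels evaluated here, and the corpus dictionary is a hypothetical stand-in for AIMed, BioInfer and the other corpora.

```python
from sklearn.svm import SVC
from sklearn.metrics import f1_score, roc_auc_score

def cross_corpus_eval(corpora):
    """corpora: dict mapping corpus name -> (X, y), where X is a feature matrix and
    y the binary interaction labels. Trains on one corpus and tests on every other
    one (the CC setting), returning F-score and AUC per train/test pair."""
    results = {}
    for train_name, (X_train, y_train) in corpora.items():
        clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
        for test_name, (X_test, y_test) in corpora.items():
            if test_name == train_name:
                continue
            scores = clf.predict_proba(X_test)[:, 1]
            preds = clf.predict(X_test)
            results[(train_name, test_name)] = (f1_score(y_test, preds),
                                                roc_auc_score(y_test, scores))
    return results
```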
Supporting Information
Table S1
Overview of the evaluated kernels. Overview of the nine kernels evaluated in the paper.
(0.07 MB PDF)
Table S2
Other kernels considered. Overview of other kernel-based methods in the literature that we did not test in the paper.
(0.06 MB PDF)
Table S3
Overview of the usability of the different kernels. Some details on the nine evaluated kernels: availability of the algorithm and documentation, type of learning software.
(0.06 MB PDF)
Table S4
Overview of our parameter selection strategy. Overview of our parameter selection strategy used in the paper. We provide a coarse description of parameter ranges and best parameters for each kernel and evaluation setting.
(0.07 MB PDF)
Table S5
Cross-corpus results. Full table of cross-corpus results trained on all 5 corpora and evaluated on all nine kernels.
(0.08 MB PDF)
Table S6
Ranking of corpora at CC evaluation based on their AUC and F-score values. We ranked the corpora from the generality perspective, i.e., how well systems trained on a specific corpus generalize. The evaluation is based on their AUC and F-score values at CC evaluation.
(0.07 MB PDF)
Table S7
Cross-learning experiments with some selected kernels performed on 4 corpora (all but AIMed). Cross-learning experiments with some selected kernels performed on 4 corpora (all but AIMed). Classifiers are trained on the ensemble of three corpora and tested on the fourth one.
(0.07 MB PDF)
Table S8
CV results with transductive SVM for kBSPS, edit, cosine kernels. Results with the transductive learning strategy for some selected kernels.
(0.06 MB PDF)
Table S9
Average runtime of training and test processes, and runtime estimates on entire Medline. Average runtime of training and test processes per corpus measured over all cross-validation experiments for each kernel (not including the parsing time at pre-processing), and rough runtime estimates on the entire Medline.
(0.06 MB PDF)
Text S1
Similarity function in kBSPS kernel. Definition of similarity score used in kBSPS kernel.
(0.07 MB PDF)
Text S2
Additional experiments. We provide here details of two additional experiments. (1) Cross-learning (CL) without AIMed, that is systems are trained on 3 corpora and tested on the fourth one. (2) Models trained with transductive SVM.
(0.05 MB PDF)
Text S3
Theoretical complexity of kernels. We provide here details on the computational complexity of kernels.
(0.04 MB PDF)
Acknowledgments
We thank all authors of the kernel methods we discuss for providing code and numerous hints on how to install and use the systems. We particularly thank Antti Airola for intensive discussions on benchmarking PPI extraction in general and in numerous special cases. We also thank the anonymous reviewers for their valuable comments.
Footnotes
The authors have declared that no competing interests exist.
DT is supported by the Alexander-von-Humboldt Foundation (_URL_ PT is supported by the Federal Ministry of Education and Research, Germany (BMBF, _URL_ grant no. 0315417B. JH acknowledges support by Arizona State University (_URL_ and Science Foundation Arizona (_URL_ PP was supported by the Max-Planck-Gesellschaft (_URL_ under project TM-REG. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
References
1. Hoffmann R, Krallinger M, Andres E, Tamames J, Blaschke C, et al. Text mining for metabolic pathways, signaling cascades, and protein networks. Sci STKE 2005. 2005:pe21. [PubMed]
2. Jaeger S, Gaudan S, Leser U, Rebholz-Schuhmann D. Integrating protein-protein interactions and text mining for protein function prediction. BMC Bioinformatics. 2008;9(Suppl 8):S2. [PMC free article] [PubMed]
3. Jiang X, Nariai N, Steffen M, Kasif S, Kolaczyk ED. Integration of relational and hierarchical network information for protein function prediction. BMC Bioinformatics. 2008;9:350. [PMC free article] [PubMed]
4. Spirin V, Mirny LA. Protein complexes and functional modules in molecular networks. Proc Natl Acad Sci U S A. 2003;100:12123–8. [PMC free article] [PubMed]
5. Ideker T, Sharan R. Protein networks in disease. Genome Res. 2008;18:644–652. [PMC free article] [PubMed]
6. Lalonde S, Ehrhardt DW, Loqué D, Chen J, Rhee SY, et al. Molecular and cellular approaches for the detection of protein-protein interactions: latest techniques and current limitations. Plant J. 2008;53:610–35. [PubMed]
7. Sprinzak E, Sattath S, Margalit H. How reliable are experimental protein-protein interaction data? J Mol Biol. 2003;327:919–923. [PubMed]
8. Miernyk JA, Thelen JJ. Biochemical approaches for discovering protein-protein interactions. Plant J. 2008;53:597–609. [PubMed]
9. Chatr-aryamontri A, Ceol A, Palazzi LM, Nardelli G, Schneider MV, et al. MINT: the Molecular INTeraction database. Nucleic Acids Res. 2007;35:D572–D574. [PMC free article] [PubMed]
10. Winnenburg R, Wächter T, Plake C, Doms A, Schroeder M. Facts from text: can text mining help to scale-up high-quality manual curation of gene products with ontologies? Brief Bioinform. 2008;9:466–478. [PubMed]
11. Özgür A, Vu T, Erkan G, Radev DR. Identifying gene-disease associations using centrality on a literature mined gene-interaction network. Bioinformatics. 2008;24:i277–285. [PMC free article] [PubMed]
12. Lage K, Karlberg EO, Størling ZM, Ólason PI, Pedersen AG, et al. A human phenome-interactome network of protein complexes implicated in genetic disorders. Nat Biotechnol. 2007;25:309–16. [PubMed]
13. Proux D, Rechenmann F, Julliard L. A pragmatic information extraction strategy for gathering data on genetic interactions. Proc Int Conf Intell Syst Mol Biol. 2000;8:279–85. [PubMed]
14. Leitner F, Hirschman L. Proc BioCreative II.5. Madrid, Spain: 2009. Biocreative ii.5: Evaluation and ensemble system performance.
15. Kabiljo R, Clegg A, Shepherd A. A realistic assessment of methods for extracting gene/protein interactions from free text. BMC Bioinformatics. 2009;10:233. [PMC free article] [PubMed]
16. Giles C, Wren J. Large-scale directional relationship extraction and resolution. BMC Bioinformatics. 2008;9:S11. [PMC free article] [PubMed]
17. Airola A, Pyysalo S, Björne J, Pahikkala T, Ginter F, et al. All-paths graph kernel for protein-protein interaction extraction with evaluation of cross-corpus learning. BMC Bioinformatics. 2008;9:S2. [PMC free article] [PubMed]
18. Bunescu R, Ge R, Kate RJ, Marcotte EM, Mooney RJ, et al. Comparative experiments on learning information extractors for proteins and their interactions. Artif Intell Med. 2005;33:139–155. [PubMed]
19. Bunescu R, Mooney R. Subsequence kernels for relation extraction. In: Weiss Y, Schölkopf B, Platt J, editors. Advances in Neural Information Processing Systems 18. Cambridge, MA: MIT Press;
| 1.434418
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
The Evolution of Safety Auditing
There are many ways that safety programs are audited and evaluated. Some are internal to the organization or site, and others are used externally. Some companies use intra-site auditing, where safety people from other sites perform a documented audit on another site, with rotations among all the sites year over year. Alternatively, the organization may hire an external auditor on contract to perform these evaluations. There are also opportunities to leverage the organization's loss prevention or insurance company to assist with performing or coordinating audits.
As a safety professional, it is easy to enter a site and find multiple unsafe behaviors or conditions. From a strictly technical standpoint, there are always opportunities for improvement. The reason an audit should be conducted is to get an idea of where the total compliance attitude sits on the organizational scale. Getting lost in the trees and forgetting that the forest exists does not create benefit.
Regardless of how an audit is performed, there are some basic items about an audit that give indications about the performance of the audit team, the site behavior, and the organizational culture. I have created a scaled list of how an audit should give insight into organizational compliance.
Poor performance = few findings, high complexity
When a site is still developing, the audit should be focused on big-ticket items like creating a lockout program, training employees on hazard communication, performing personal protective equipment surveys, and creating written programs. Inundating the site with lists and lists of detailed items is not helpful in this phase. They should be focused on simply developing programs. It is the idea that something is better than nothing. The natural cycle of continuous improvement will ensure the details get addressed.
Medium Performance = high findings, low complexity
When a site has become a typical performing organization, the transition begins toward more punch-list-style items. The overall performance of the site will drive the number of those items. The major items of program creation are gone. In their place is a list of items that need to be completed to enhance compliance, such as labeling specific bottles, updating placards, and
Good performance = Few findings, low complexity
One of the best auditors I know has three categories of findings that he creates as part of his process:
Nonconformities are findings where the program is not implemented or not followed
Deficiencies are where the program is in place but there are elements that are not up to the standard
Opportunities for Improvement are where the program is fully in compliance and the auditor finds ways that it can still be improved.
A good performing plant will be mostly focused on the opportunities for improvement. The complexity will be low, there will be minimal findings, and the goal is to keep the momentum rolling. The site has many good aspects of the program, but even a good program can go bad if it does not seek continuous improvement.
Overall, the process of auditing adds value when it is properly scoped, controlled, and helps create improvement in the process. Auditing for auditing's sake is a losing prospect. The audit program should have a governing policy and process that is followed. There should be a defined outcome and mission statement for the audit. It is through planning and a focus on improvement that the audit program brings true value to a safety organization.
Safety: Behavior or Motivation
Nature and Nurture for Safety Part 3
When it comes to behaviors, the idea of nature and nurture always becomes a debatable position. In some ways, managers and companies like the idea of blaming nature for workplace injuries. I hate the saying "can't fix stupid." Too many times in my career, I have heard that from supervisors and managers who feel this is the end-all, be-all explanation for their poor departmental safety performance.
The reality is that safety behavior is a much more complex issue than simply blaming the individual for everything.
To better illustrate the point that the culture of the organization is a significant factor, consider someone who has a good safety nature but is affected by a climate of negative nurture.
Imagine a new employee at a company. This employee has generally strong safety knowledge and comes from a company that truly valued safe behaviors. The employee has joined a company that not only lacks a safety culture; the culture is actually negative. This is the culture that case studies are made of: "Get it done and get it done yesterday." "No matter what, never shut the equipment down." "You don't need tools, your hands are tools enough."
This individual may first think that they can influence the culture of the site. What happens, though, once that does not work? Broadly, there are three possible outcomes: the employee becomes a whistleblower, the employee leaves the organization, or the employee watches out for self and becomes defensive. The first outcome is not really a behavior that can be evaluated, but it is a reasonable option.
In the next two options, the employee will feel out of place. There is little that demoralizes a workforce more than a blatant disregard for employee safety. Maslow's hierarchy of needs states that safety is one of the necessary needs people must have in order to grow. If the company denies this fundamental right, the employee will seek other opportunities that will meet that need. Ultimately, the company loses a valuable resource.
The next option is where the company will get the bare minimum. There is no desire to contribute. There is no desire to make it the best it can be. There is no desire to find better methodology. This erosion affects not just safety but productivity and quality. This is a situation where the company has made a choice to say the employees are not really part of the team. Imagine a sports coach believing that he can win a championship without his players. That is unbelievable, right? Well, this is in principle saying the same thing: "we don't need our employees to be successful."
The culture of a company is just as much a factor for behaviors as that of the individual. They have a relationship that works with or against one another. The complexity of blame should not be the go-to choice for safety behaviors and culture. There has to be a total evaluation of how the culture and the inherent behaviors are working systemically.
Nature and Nurture for Safety: Part 2
Overall, the debate of nurture vs. nature is not one that I am willing to address. There are, though, some aspects of nature and nurture in the way safety becomes behavioral and organizational.
For the sake of simplicity, nature will be defined as someone's general safety philosophy before entering the workplace. Nurture will be defined as the way the company or organization creates safety, or how it influences employees in regard to safety.
When nature is positive and nurture is positive, the outcome is a total safety experience. The individual comes to an organization with an innate ability and consciousness of how to work safely and avoid unnecessary risk. The organization has also created a culture where safety is a top priority and the systems are in place to keep safety in the forefront. When these two items come together, it is nothing short of safety magic!
There is an individual that has a strong desire to see risk and find ways to mitigate that risk, all the while the organization is seeking ways to be more self-diagnosing and culturally open to continuous improvement. These two build a process in which they feed off each other.
As the individual's nature leads to better ways to be safe, the nurture of the organization takes those methods and makes them systemic. The best methodology is found and then spread as a best practice. Since the nurturing organization is positive, they give the credit to the individual. Not only does this make the individual seek more opportunities, it invigorates others that may not have a natural sense of risk avoidance to seek new ways to overcome safety issues. The cycle self-perpetuates and creates an entire team seeking new and better ways to engage in keeping people safe.
This is a best-case scenario. It creates a negatively skewed bell curve in which the measurement is safety behaviors per person. This creates an organization in which more people than average are exhibiting safety behaviors.
Nature and Nurture for Safety: Part 1
There is plenty of debate of the exact science, implications, and magnitude of nature and nurture.
To summarize for the sake of time and sanity, there are certain traits that people are born with that can hold some influence over who they are. Nurture comes into play in whether or not a person chooses to go with or against their nature.
Nature is not a bad thing. Sometimes the traits we are born with are something we should nurture and use for the purpose of being better. Someone who is born with a naturally athletic build and then uses nurture to improve to become great at their talent should be encouraged.
For the discussion of safety, some may have a natural tendency to weigh risk and adopt a healthy approach to that risk. Or someone may be completely prone to high risk taking with little thought. This is where a robust safety attitude in an organization makes the impact.
There are many ways that safety, at the organizational level and at a very personal level, can make a big difference. An organization should be aware of the implications of not having a consistent and positive safety system in place. Do not confuse a positive safety system with "warm fuzzy." A good safety system is a proper balance of rights, responsibilities, training, education, accountability, ownership, consistency, and compassion.
So in other words the simplistic terms of "positive" and "negative" are much more robust in connotation through this set of discussions. A negative aspect
| 1.358157
|
Zyphra/Zyda-2
|
Your GPA is one of the most important things in high school and college when it comes to the rest of your academic career. It can mean bigger and better opportunities, leading to more money, better jobs, and ultimately a better life. But don't fret – a low GPA can still be rectified if you start right now.
Part 1 of 3:
Setting Yourself Up for Success
1.
Get organized. If your locker or desk looks like a natural disaster went unreported in your area, you can't exactly expect your GPA to seem any different. The more organized you are, the easier it will be to study, to get good grades, to improve your GPA, to focus, and to be on top of your game.
- Buy a planner. Write down your homework every night, project deadlines, and anything else that's on your to-do list. Cross it off as you go, keeping an eye on what you need for tomorrow. This way your mind is allowed to not worry about what's happening next Tuesday, because you already have it written down.
- Invest in some folders and binders. Keep your syllabi at the ready for each subject for easy access later. You can also keep old homework and readings available, too, for when you need them for study reference come test time.
- Keep a pocket or bag for your studying tools, like highlighters, white out, pens, pencils, rulers, and scissors. The less time you spend searching around needlessly, the better.
2.
Take the right classes. Let's face it: you are not superman (or superwoman). You cannot take every AP class ever offered, 4 language classes at a time, some college classes, and get all straight As. While you may feel the need to be uber-competitive, don't burn yourself out. Only take the classes you can handle. If that means 3 AP classes instead of 4, good. Your GPA will thank you for it.
- If every class of yours is difficult, you'll get exhausted. Don't begrudge yourself the ability to take a study hall or even gym. Everyone needs a bit of down time, and it'll let you concentrate on the classes you really need to concentrate on.
3.
Retake classes if need be. Plenty of schools have "retake" options. If you get a grade you're not happy with and you have the room in your schedule (be sure to think long-term, here) to retake it, consider this as a second chance. That C, D, or F could get wiped from your record permanently.[1] And it'll definitely be easier the second time around.
- Find out what any of your options are, not just retaking the class. Can you retake a specific test? Complete another project? Take another class that's related in lieu of a different course? Most schools want to see their students succeed – and there's certainly no harm in asking. (A quick sketch below shows how much replacing a single bad grade can move your GPA.)
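As a rough, hypothetical illustration of why a retake matters (a generic 4.0-scale calculation, not any particular school's policy), replacing one low grade in a small schedule moves the overall number noticeably:

```python
# Hypothetical 4.0-scale GPA calculation; grade values and courses are made up.
GRADE_POINTS = {"A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(courses):
    """courses: list of (letter_grade, credit_hours); returns the credit-weighted GPA."""
    points = sum(GRADE_POINTS[grade] * hours for grade, hours in courses)
    hours = sum(hours for _, hours in courses)
    return points / hours

before = [("A", 3), ("B", 3), ("F", 3), ("B+", 3)]   # one failed class
after  = [("A", 3), ("B", 3), ("B", 3), ("B+", 3)]   # same schedule after retaking the F as a B
print(f"{gpa(before):.2f} -> {gpa(after):.2f}")       # roughly 2.6 -> 3.3
```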
4.
Go to class. You'd think it'd be simple, but so many students don't do it: just go to class. Even if you're just there in body and not mind, go to class. Many teachers offer points just for attending. Some even give out the answers to bonus questions to reward the students that show up.
- And when you get there, sit in front. You'll be more likely to pay attention and your teacher will be more likely to know your face. While that may seem awkward, it's going to be very helpful if and when you need help later (or when they're thinking about nudging you up to an A- from that B+).
5.
Participate in class. Imagine if you were a teacher and you had a class full of silent duds. No one talked, no one looked interested, and no one really seemed to care. How would that feel? Pretty terrible. Now think if you had a kid who paid attention to you, listened to what you said, and participated – even if they were wrong. How much better would that be? Teachers don't care if you're right – they care if you care.
- Show them you care by participating. Why? Well, for starters, they'll like you more. You'll be a student who tries and deserves the benefit of the doubt. And apart from that, participating means the information is actually processing in your mind and it'll be harder to forget later on.
Part 2 of 3:
Studying Smart
1.
Find a way to study that you love. In the same way that no two people have the same results on the same diet plan, no two people have the same results on the same study plan. You need to find a way that works for you. Does that mean recording lectures and listening to them over and over? Does that mean turning your notes into pictures and charts? Does that mean typing out your notes into a book you can review later on? Does it mean quizzing with friends? Everyone is different – what helps you remember?
- How do you learn? Odds are you probably know how you remember things. Is it by hearing? Seeing? Using your hands? Whatever helps you, do it. Find a friend and relay facts to them. Create your own mnemonic devices and draw pictures to help your brain remember. Anything to engage you will do the job.
2.
Do a weekly review. From now on, your Sunday nights are time for Sunday weekly review (SWR). This is when you sit down at your super clean, well-organized desk, bust out your folders and binders, and review everything you covered in your classes from Monday through Friday. Whatever you don't remember deserves extra time and whatever you do remember you can gloss over. This way you and your GPA are up to snuff at all times.
- And at the end of your SWR, take a quick look at your syllabus. What will you be covering next week? Do you have any tests or project deadlines? If there's anything you should write in your planner, write it now.
3.
Take breaks while you study. Research shows that the mind easily gets saturated and stops processing information at 100% if you don't give it a rest. Ideally, you should study for about 50 minutes and then take a 10-minute break. This allows your brain to recharge and also gives the information a second to sink in.
- Turn your phone off while you're studying. Just do it. Then, when you're on your break, turn it back on and do everything you've been dying to do for the past 50 minutes. Your break time should be the only time you're "multitasking" and getting distracted from the topic matter at hand.
- Break down larger projects into hour or so chunks. This way you have clear stopping points where you can stop, take a breather, grab a bite to eat, and get back to it ready to go.
4.
Grab your (smart and focused) friends and form a study group. Research shows that studying in groups is a highly effective way to study – so long as the group is around four people and they're actually focused. Why? It's because talking about the subject cements it in your mind, forcing you to listen, think about, and speak about it all at once. Using all these skills together makes the concept process in your brain at a deeper level.[2]
- Designate a leader for the group to keep everyone on track. Bring some snacks and have some questions ready. Cover all your material, and then be sure to circle back to what stumped the group. Make sure to utilize individual's strengths as much as possible, too.
- And don't mess around. Study groups are not beneficial if you're just sitting and gabbing, gossiping about your friends and munching on snacks. That's why a leader is so beneficial – sometimes you'll need someone to wrangle you back to the correct path.
5.
Don't pull an all-nighter. The fact of the matter is that cramming does you no good. Studies show that students who study the night before and don't sleep actually do worse on tests than those who study less and actually get some shut eye.[3] This is because the brain needs sleep to get all gears functioning properly – if you don't get any sleep, that study session won't do you much good.
- If a test is coming up and you're not ready for it, all you can do is study for a bit the night before, get a decent night's rest, get up, study a little bit more, eat a protein-packed breakfast, and do your best. During the test, pop a piece of peppermint or gum into your mouth for an awakening blast – studies show that it could improve academic performance.[4]
6.
Find a study spot you love. Sitting in the middle of your dorm while your roommate is watching TV and eating nachos isn't going to do your GPA any favors. You need a spot that makes you feel calm and is enjoyable enough that you can spend hours there without consistently looking at the clock.
- Find a couple of study spots you love. Research shows that studying in multiple locations actually solidifies the information in your brain. It's thought that in a new environment, the brain has to take in more stimuli – and the information comes along with it
| 1.29577
|
Zyphra/Zyda-2
|
of European legal order and its Directive 98/44/EC, making room for a farmers' privilege in domestic patent systems in its rather unusual Article 11§1, allowing farmers to retain material grown on their own farms for subsequent years (Nenow ). While this instrument has no direct effect in Member States' national legal orders and allows for restrictions of these rights awarded to farmers, it still acknowledges the specificity of the socio-technological innovation system that is mass selection. Even though the so-called "Doha round" and its Ministerial Conventions have seemingly failed to fashion a viable consensus on the terms of a new World Trade order, they have strengthened regulatory determination to include such privilege within domestic patent regulation, leading for instance to the 2007 amendment of the Swiss Federal Patents Act so as to include a farmers' privilege, limited to uses of the patented material within the same farm (Pires De Carvalho ). The relatively rare recourse to the exception within patent legislation could be entrenched in a textual TRIPS interpretation, in accord with which the recognition of such a privilege to farmers might prejudice the legitimate interests of the monopoly holders under Article 27 (Watal ). The feasibility and conformity of such exceptions in patent legislation has yet to be tested before the judiciary or the WTO dispute-settlement mechanisms, but we believe that the flexibilities inherent in the Agreement and its rationale allow for the recognition of the farmers' exception. Other commentators have in this regard highlighted the possibility that the existence of compensation in return for the right to use, save and exchange the protected material might actually encourage the doctrine of compulsory licensing, viewed as a "statutory license", rather than as a classical exception to IPR protection as grounded in Article 30 of TRIPS (Garrison ). Furthermore, the reluctance to adopt formal farmers' privileges in patent laws themselves can be overturned through jurisprudential liability thresholds, especially if plant-breeders' rights recognise growers' right to save and exchange seeds, as established before Canadian courts. Even though the patent-infringing canola farmer could not benefit from the privilege enshrined in PVP legislation to save the seed, monetary compensation deriving from the infringement was overturned on the grounds that no financial or other benefit was generated by the technology (Phillips, analysing Monsanto vs. Schmeiser). This argument could fuel the debate on the liability thresholds that might be introduced for re-use conditions.
B. Reward regime for uncertified seeds and the legal status of exchange platforms
The outlines of protection regimes for mass selection and landrace revival networks should be assessed in general terms but also with due regard to national specificities, in order to safeguard centuries-long seed-saving and exchange practices, distancing oneself from the reductionist perception that farmers merely cultivate biodiversity developed off-farm by breeders. National regulatory frameworks should therefore consider innovation stemming from mass selection as a parallel yet different (and not necessarily derogatory), seed-production scheme, raising different predicaments than dominant vertically integrated molecular plant breeding, and requiring incentives within a dual sui generis system for both modern and farmers' varieties. Such system, taking due account of national or regional stages of rural and economic development, may counter the trend to limit mass selectors' activities to the realm of exceptions, reducing farmer's privileges to a "basic trickle of rights" (Cullet ). At this stage, the different options lie within seed marketing legislation on the one hand, where DUS requirements in certified seeds could either be relaxed, or derogatory ad hoc regimes for local varieties may be established; and within IP legislation on the other, where the UPOV system may be redesigned to allow for landrace protection and use, or where a set of minimalistic double tier liability rules may be further put to use in order to compensate farmers for the use of their varieties in commercial breeding programmes, relying perhaps on a "light registry" to track varieties down, as developed below.
While intended to standardise crop names, protect consumers and foster investment in breeding, existing mainstream certification legislation and market regulation have had "the unintended consequence of drastically reducing the numbers of cultivars grown and impinging on the ability of farmers to grow older varieties or landraces" that do not fit within the formal seed market (Vetelainen et al.). Alongside the rather weighty choice of relaxing the formal system so as to include a wider array of plant varieties and actors, the establishment of derogatory ad hoc regimes in the form of book logs or flexible national (or regional) registers of uncertified seed could be options worth considering. In Brazil, the recognition of mass selection operated for instance solely through amendments of the seed marketing legislation in 2003, where landraces or traditional varieties have found new legroom, notably through the possibility that "family farmers" have been granted to register landraces in the National System of Plants and Seeds. This registration includes specific criteria taking the cultural and traditional aspects of the varieties into account, without prejudice to the exchange possibilities in the absence of such registration, since an official double exemption from registration has also been foreseen (Santilli 2012). On the other hand, the establishment of a derogatory "light catalogue" has been the way forward in the European legal order, through Directives 2008/62 and 2009/145 on conservation varieties, to the dismay of both commentators and politically active farmers' associations (Anvar ). Several research projects were funded through the FP6 European Research Framework, known as Farm Seed Opportunities, in order to overcome the inherent difficulty of uniformly regulating quite diverse farm-innovation systems. Targeted to support the implementation of seed regulations on conservation varieties, these projects also proposed complementary seed-regulation scenarios, the utility and effect of which may need further consideration. To all intents and purposes, opening the Catalogue to conservation varieties remains a means of reducing genetic erosion and preserving varietal heritage, even though critics have argued that such a move entails the risk of undermining the main commercial system, and may potentially block completely open marketing possibilities for non-industrial models of agriculture such as organic farming or bio-dynamics in light of additional administrative obligations (Bocci ).
In the absence of such "light catalogue" or a general exemption from certification, and thus in absence of a legal recognition of seed exchange platforms, litigation between formal and informal seed market actors will prosper, as shown by the aforementioned French case opposing Kokopelli to Graines Baumaux. Referenced by the Court of Nancy in February 2011 (Case C-59/11), the European Court of Justice needed to assess whether seed catalogues violated principles of the acquis communautaire related to the liberty of trade, free movement of goods, proportionality, equality and non-discrimination, as well as the Union's obligations under international law, especially with regards to the Convention on Biological Diversity and the FAO International Treaty. The opinion of Attorney General Kokott, issued on 19th January 2012, seemed to indicate that the International Treaty did "not include any provisions which are unconditional and sufficiently precise as to challenge the validity of EU legislation on the marketing of seeds". However, in the light of the proportionality principle, "the disadvantages of the marketing prohibition, [which include a negative impact on the freedom to conduct a business and agricultural biodiversity] manifestly outweigh its advantages", a disadvantage that is not sufficiently attenuated by the derogations carved out by Directive 2009/145. Indeed, the advocate general argues that the conservation varieties Directive, by not giving "sufficient consideration to the interests of economic operators and consumers", does not allow for sufficient scope vis-à-vis the use of old varieties and those products of mass selection, thereby concluding that "the prohibition on the sale of seed of varieties that are not demonstrably distinct, stable and sufficiently uniform […] invalid as it infringes the principle of proportionality, the freedom to conduct a business […], the free movement of goods […] and the principle of equal treatment". The judgment of the Court, issued on 12th July 2012, ran counter to the initial conclusions set out by Attorney Kokott, by taking a rather positivist approach to the principle of proportionality within the European acquis communautaire. In this regard, the Court assessed whether the exclusion of non-distinct, stable and uniform varieties from the formal seed market was appropriate for attaining the legitimate objectives pursued by official catalogue legislation. These objectives were identified as the increase of agricultural productivity and the reliability of the characteristics of the seed, which were adequately pursued by the litigious measures, setting the grounds of an efficient market without completely ruling out the marketing of old varieties. The decision should be analysed in light of the fact that the conservation varieties Directive was not in force at the beginning of the proceedings, although the national Court has been invited to take account of such legislative development. Indeed, the Luxembourg based Court seems to hint that the forced move of Kokopelli into illegality and unfair competition has been remedied by the enactment of this ad hoc regime. Were it to be the case, the imposition of geographical, quantitative and packaging restrictions by the derogatory rules of the "conservation varieties" catalogue would seemingly not undermine the recognition of mass selection efforts, according to the ECJ's approach to the issue, although it might actually minimise their impact in practice.
Even though the illicit or uncertain legal status of exchange platforms ought to be remedied, a wider regulatory debate is also needed on the form of a reward regime for the development and maintenance of farmers' varieties, which would satisfy both mass selectors and methodical breeders. The International Law Association Committee on the International Law on Biotechnology suggests "examining whether the UPOV system should be partly adapted and relaxed to allow protection of improved farmers' varieties that result from controlled on-farm breeding processes" (ILA, International Law Association ). However, amending principles related to protectable subject matter in the
| 1.343543
|
openbmb/Ultra-FineWeb
|
a challenge for active inclusion
1. DIGNITY is a HUMAN RIGHT; when this right is not respected, what are the consequences for MENTAL HEALTH?
Poverty is not a fatalistic social phenomenon but the consequence of injustice in the redistribution of resources. To overcome poverty is not a gesture of charity. It is first of all an act of justice. It is the protection of a fundamental human right, the right to dignity and a decent life.
Everyone has the right to a standard of living adequate... and the right to security in the event of unemployment, sickness, disability, widowhood, old age or other lack of livelihood in circumstances beyond his control. (Universal Declaration of Human Rights)
Everyone has the right to respect for his or her physical and mental integrity. (Art. ...) In order to combat social exclusion and poverty, the Union recognises and respects the right to social and housing assistance so as to ensure a decent existence for all those who lack sufficient resources. (Art. 31, 1... 2... 3)
Everyone has the right of access to preventive health care and the right to benefit from medical treatment under the conditions established by national laws and practices. (Charter of Fundamental Rights of the European Union)
========================== cf. notes down page
La démocratie contre les pauvres,
Poverty as a Human Rights Violation
Inhumanity or Injustice?...
as a Form of Oppression...
what Justice demands today; Responsibility
& Severe Poverty
Poverty and Exclusion February – March 2007
Freedom from Poverty
as A Human Right
has the RIGHT
a standard of living adequate
of himself, of
on poverty & social exclusion
Reference: MEMO/09/480, Date: 27/10/2009
POVERTY and MENTAL ILLNESS: poverty and psychic suffering are often both at the beginning of the process of marginalisation and abandonment; reinforcing each other reciprocally, they transform one another into chronic situations until exclusion and the rupture of social belonging become strictly intertwined in a vicious circle of marginalization leading to a total loss of social links.
Loneliness, sickness, mental illness, drug abuse and/or alcoholism may be the cause or the result of this state of alienation and exclusion from society, which in some cases may be irreversible and which could eventually lead to a complete break with society.
Involvement of all civil society: it is possible to ensure concrete progress in this fight against poverty and exclusion when all of civil society is involved, sharing and participating.
European challenge: despite national differences in both the nature and severity of the problem, social exclusion is a structural problem in European society and for this reason must be placed – as an absolute priority – at the centre of an urgent call for awareness and deeper consideration, particularly with respect to the RIGHT and ACCESS to citizen services, developing an adequate political programme at European level: the building of the European Community must take place within a framework of citizenship, in respect of rights and solidarity.
SMES-Europa has from the beginning operated at the interface and intersection of mental health and social exclusion, addressing the appalling neglect of abandoned homeless people living in extremely poor health and social conditions and suffering from mental health problems, in order to improve mental, physical and social well-being, and to promote inclusion, citizenship and solidarity in European countries for people living in extreme social and health precariousness.
Combating Stigma and Social Exclusion: the eradication of stigma and discrimination, but also of stereotypes, against people and families suffering from poverty and mental illness is still a remote goal for our societies, and the consequences are extreme poverty, marginalisation, abandonment and loss of social belonging.
The 5 points of the Pact for Mental Health proposed by the European Commission:
1. Prevention of Suicide and Depression
2. Mental Health in Youth and Education
3. Mental Health in Workplace Settings
4. Mental Health in Older People
5. Combating Stigma and Social Exclusion
Mental health at European level does not appear to be considered a priority in health policy, and in Romania the situation seems to be critical (cf. the Commission's Green Paper on Mental Health and many documents available online concerning mental health in Romania).
Poverty and mental illness are each either the beginning or the consequence of the same process of segregation and discrimination: one reinforces and worsens the other until complete exclusion from society.
Everywhere in Europe, and particularly in Romania, reforms are urgently needed in order to protect and give dignity to the life of the most vulnerable people. The European Union, at the time of the accession of Romania to the Union, placed at its disposal meaningful resources to guarantee the right of patients to receive treatment as much as possible within the community.
In response to the European challenge of Lisbon, Europe set out to make a decisive impact on the eradication of poverty and social exclusion. BUT until today, 2010, poverty has not been significantly reduced, and the gap – a consequence of the unjust redistribution of resources – between rich and poor people remains.
In 2010 the Commission and the Parliament decided to make it the European Year of the fight against poverty and social exclusion, so as to have "a radical impact on the eradication of poverty", based on three inseparable and complementary pillars:
1. sufficient minimum income,
2. inclusive labour markets,
3. access to quality services.
A 4th pillar (?): we believe that participation is itself a fundamental pillar of personal and active inclusion. The other three belong to the register of assistance and of responding to needs. Participation stands in a strict relationship with empowerment and citizenship. For this reason the two key words are SHARING (of resources, against poverty) and PARTICIPATING.
Even with the worsening financial and economic crisis, this situation is rapidly deteriorating.
EXTREME POVERTY & EXCLUSION: 79–80 million people in Europe live in poverty, and more and more are at risk of social exclusion. In Romania "absolute" poverty is widespread and many people are affected by material deprivation and social marginalisation, which is intolerable.
Poverty, more than a question of statistics and numbers, is a question of persons.
Poverty in Europe: 17% of the European population are poor. The rate ranges from 11% in the Netherlands to 21% in Bulgaria and 23% in Romania.
Poverty in Romania: the highest rates are found in Eastern Europe, but several large European countries – the United Kingdom, Italy, Spain and Greece – hardly do better, with poverty rates of around 20%.
Everywhere in Europe, and particularly in Romania, reforms of the mental health system are urgently needed in order to protect and dignify the lives of the most vulnerable people. At the time of Romania's accession to the EU, the European Union placed important resources at its disposal to improve mental health services by guaranteeing people's right to receive treatment and care adapted to their condition, as much as possible within the community in which they live. BUT what is the situation today?
5. AIM OF THE CONFERENCE
The aim is not to pity poor people or dispense charity, but on the one hand to recall the fundamental right of all to live a life in dignity and, on the other, the duty and responsibility of all civil society to redistribute resources with justice and solidarity.
With this conference SMES-Europa pursues the following aims:
- Reaffirming personal dignity and human rights
The outrageous scenes of people abandoned on the street, frequently with severe mental health problems, are urgent appeals to enforce respect for the basic human rights of these people.
- Preferring innovative and alternative projects
The scale of the problem and the long-run ineffectiveness of mere public and private "assistance" call for medium- and long-term innovative and alternative projects. Individuals must be seen as people having rights and needs, living in a context where the social and health sectors, private and public, interact.
- Promoting urgent prevention measures
Unless there is prompt intervention in the causes from which the process originates, the present crisis in the welfare system will inevitably deepen, with disastrous consequences for the young and all those in greatest need.
6. THEME OF THE CONFERENCE
THEME & TOPICS: "SHARING: redistribution of resources for the dignity and well-being of all people. PARTICIPATING: active inclusion against every form of segregation and discrimination."
"Sharing and participating is a challenge for active inclusion, promoting dignity and health, and increasing access to health and social services for the most deprived people."
The participants will focus their reflections, experiences and evaluations, and will present priorities and proposals, in order to humanise this struggle against poverty and disease, around three topics:
1) Deinstitutionalization and mental health in the city: from cure in an institution to care in the community. This is first of all a humanistic question, requiring new creativity in daily practices.
2) Outreach – going out to meet people where they are: aptitude before method.
3) Empowerment – building capacity and participation with respect for difference.
NB: your suggestions are very welcome.
member 12.
Therefore, as described, the automated liquid filling system 1 may be calibrated in relation to local atmospheric pressure and the frictional forces related to the movement of the piston 14. If during the calibration procedure a measured parameter (e.g., average current supplied to the motor 36 a) falls outside of acceptable values, an alarm may sound or an alert may be displayed to inform a user of the unacceptable results. For example, if the average current supplied to the motor 36 a while moving from the first reference point to the second reference point falls below a predetermined value, it may be inferred that either the valve stem 24, the piston 14, or a combination thereof are not sealing properly. An alert may then be displayed for the user to replace the disposable 10. Similarly, if the average current supplied is above a predetermined value, it may be inferred that frictional forces are higher than acceptable and the user may be alerted to replace the disposable 10. In another example, if the average current supplied to the motor 36 a while moving the piston 14 during the frictional force calibration falls below a predetermined value, it may be inferred that the piston resilient member 15 is not adequately engaged with the wall of the tubular member 12 and therefore the user may be alerted to replace the disposable 10.
The frictional and atmospheric conditions determined in the above described calibration steps may be used to partly determine operational parameters of the automatic liquid filling system 1. For example, it may be desired to maximize the speed of the piston 14 while drawing medical liquid from the medical liquid source(s) 60. However, it may also be desirable to avoid foaming, bubbles or cavitation within the tubular member 12 that may be caused by the relatively low pressure within the tubular member 12 while the piston 14 is being drawn back. Since cavitation is partially determined by pressure and the medical liquid source(s) 60 may be under local atmospheric pressure, the point at which a particular medical liquid may cavitate may be related to the local atmospheric pressure. Since the above-described procedure may determine local atmospheric pressure, the automatic liquid filling system 1 may be operable to determine a maximum piston velocity that can be attained without significant cavitation within the medical liquid.
The automatic liquid filling system 1 may also possess other diagnostic capabilities. The automatic liquid filling system 1 may be operable to detect leaks within the disposable 10 and attached tubing. For example, prior to fluidly interconnecting the first fluid line 80 with a medical liquid source(s) 60 and prior to fluidly interconnecting the second fluid line 82 with a medical liquid receptacle(s) 62, the first and second fluid lines 80, 82 may be capped or sealed. The valve stem 24 may be positioned to fluidly interconnect the tubular member 12 with the first fluid line 80. The piston 14 may then be moved within the tubular member 12 and the current required by the motor 36 a to move the piston 14 may be measured. Since the first fluid line 80 is sealed, there should be a resistance to movement of the piston 14 due to a vacuum (if the piston 14 is drawn back) or a pressure build up (if the piston 14 is moved forward) that should be reflected in the amount of current required by the motor 36 a. This procedure may be repeated with the valve stem 24 positioned to fluidly interconnect the tubular member 12 with the second fluid line 82. This procedure may be repeated with the valve stem 24 positioned in the neutral position. Any detected errors and when those errors were detected may be used to determine possible locations of system leaks. For example, if an error is detected when the first fluid line 80 is fluidly interconnected to the tubular member 12 but not when the second fluid line 82 is fluidly interconnected to the tubular member 12, it may be inferred that the leak is related to the first fluid line 80. Conversely, if an error is detected when the second fluid line 82 is fluidly interconnected to the tubular member 12 but not when the first fluid line 80 is fluidly interconnected to the tubular member 12, it may be inferred that the leak is related to the second fluid line 82. If an error occurs under all situations, it may be inferred that the error is related to the tubular member 12 or valve 20. The above procedures may be used to determine likely locations of system leaks. However, other system leaks or error sources may be present. When any error is detected or any reading is outside of a predetermined range, an alarm or alert may be provided to inform a user of a system fault. Furthermore, the system may direct the user to the most likely location of the problem. The alarm or alert may be presented at the automatic liquid filling system 1 and may be in the form of a visual display, an audible signal, or a combination thereof. The alarm or alert may also be presented in any device, such as a PC (including remote PCs), interconnected to the automatic liquid filling system 1.
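As a rough illustration of the leak-localization reasoning described above, the sketch below (Python) maps the outcome of the three capped-line tests to a probable leak location. The function name, the boolean "error detected" flags and the returned labels are illustrative assumptions; the source text does not specify any particular implementation.

```python
# Hypothetical sketch of the leak-localization logic described above.
# Each flag is True when the motor current measured during piston movement
# fell outside the expected range for a sealed system in that valve position.
def locate_leak(error_line1: bool, error_line2: bool, error_neutral: bool) -> str:
    if error_line1 and error_line2 and error_neutral:
        # A fault in every valve position points at the shared components.
        return "tubular member or valve"
    if error_line1 and not error_line2:
        return "first fluid line"
    if error_line2 and not error_line1:
        return "second fluid line"
    if not (error_line1 or error_line2 or error_neutral):
        return "no leak detected"
    return "indeterminate - inspect system"

# Example: only the test against the first fluid line showed low resistance.
print(locate_leak(error_line1=True, error_line2=False, error_neutral=False))
# -> first fluid line
```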
Additionally, system diagnostics may also be operative during automatic liquid filling system 1 operation. For example, during operation the automatic liquid filling system 1 may monitor the current required by motor 36 a to move the piston 14 during various drawing and dispensing operations. If a gradual increase in the current required to draw back the piston 14 is detected, but no change in the current required to move the piston 14 forward is detected, it may be inferred that a clog may be developing in the first fluid line 80 or in a member, such as the filter 201, interconnected to the first fluid line 80. Once such a situation is detected, an alarm or alert may be generated to inform the user of the potential clog. Similarly, if a gradual increase in the current required to move the piston 14 forward is detected but no change in the current required to draw back the piston 14 is detected, it may be inferred that a clog may have developed in the second fluid line 82 or in a member interconnected to the second fluid line 82.
A sudden increase in the current required to move the piston 14 within the first portion 12 a of the tubular member 12 may indicate a sudden blockage, such as a kink in a fluid line. The probable location of the sudden blockage may be determined in a fashion similar to that described above and an alarm or alert may be issued to inform the user of the situation and of the probable location of the problem.
A gradual decrease in the current required to move the piston 14 within the first portion 12 a of the tubular member 12 may indicate a growing leak in the automated liquid filling system 1. A sudden decrease in the current required to move the piston 14 may indicate a fast developing leak or that a component has become disconnected. The probable location of the problem may be determined and communicated to a user in a fashion similar to that described above.
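The run-time diagnostics described in the preceding paragraphs could be summarised in code along the following lines. This is a speculative sketch only: the slope test, the thresholds and the function names are assumptions made for illustration and are not taken from the source text.

```python
# Illustrative classification of a developing fault from trends in the motor
# current recorded during draw-back and dispense strokes.
def slope(samples):
    """Average change per stroke over a series of current readings (amps)."""
    if len(samples) < 2:
        return 0.0
    return (samples[-1] - samples[0]) / (len(samples) - 1)

def classify_trend(draw_currents, dispense_currents, gradual=0.02, sudden=0.5):
    draw, disp = slope(draw_currents), slope(dispense_currents)
    # Sudden changes dominate: a large jump suggests a kinked line,
    # a large drop suggests a fast leak or a disconnected component.
    if max(draw, disp) > sudden:
        return "sudden blockage (e.g. kinked line) - alert user"
    if min(draw, disp) < -sudden:
        return "fast leak or disconnected component - alert user"
    # Gradual one-sided increases point to a clog on the corresponding side.
    if draw > gradual and abs(disp) <= gradual:
        return "possible clog in first fluid line or inlet filter"
    if disp > gradual and abs(draw) <= gradual:
        return "possible clog in second fluid line"
    if draw < -gradual and disp < -gradual:
        return "possible growing leak"
    return "no fault trend detected"

# Example: draw-back current creeps up while dispense current stays steady.
print(classify_trend([1.00, 1.05, 1.12, 1.20], [0.90, 0.90, 0.91, 0.90]))
```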
One filling operations method is now described in relation to FIG. 14. After a physical interconnection between the disposable 10 and mount 40 has been established, as described above, the medical liquid source(s) 60 may be interconnected to the disposable 10, as described above (e.g., via the first fluid line 80). The piston 14 may then be interconnected with the piston drive member 32, after which the system may be primed, such as described above. After the system is primed, the disposable 10 may be interconnected to a second dispensing apparatus, as described above (e.g., a needle). After the second dispensing apparatus is interconnected to the disposable 10, it may be primed, such as described above. Next a first receptacle of the receptacle(s) 62 (e.g., a vial containing a dry powder) may be interconnected to the second dispensing apparatus, and thus fluidly interconnectable to the disposable 10, after which the medical liquid (e.g., pharmaceutical quality water) contained in the tubular member 12 may be dispensed to the first receptacle, such as described above (e.g., via rotation of the valve to the second position and advancement of the piston 14). Prior to dispensing medical liquid to the first receptacle, the tubular member 12 may be refilled, such as described above (e.g., via rotation of the valve 20 to the first position and retraction of the piston 14).
After the first receptacle is filled, the piston 14 may be retracted (not illustrated) in accordance with the drawback parameter(s), to draw non-dispensed liquid back into the filling system 1, as described above. Additionally, the piston 14 may be retracted and advanced (e.g., utilizing the piston drive member 32) any number of times (not illustrated) to mix the solution contained in the first receptacle. Subsequently, a second receptacle may be interconnected to the second dispensing apparatus, and thus fluidly interconnectable to the disposable 10, after which a medical liquid may be dispensed to the second receptacle. Prior to dispensing medical liquid to the second receptacle, the tubular member 12 may be refilled, such as described above. These processes may be repeated, as desired, for any number of receptacles to provide receptacles containing a desired volume of a medical
Getting Full Performance from Trickling Filters and CO2 Strippers with the Right Water Distribution System
Trickling filters and CO2 strippers are common devices in recirculating aquaculture systems. They are often referred to as packed towers. The performance of these towers is controlled by:
The distribution of water is the most commonly neglected and least understood aspect of the design of packed towers. Poor water distribution is often the primary cause for poor performance and high maintenance costs for packed towers. A well designed tower will have an even water distribution across the entire packing surface. For a biofiltration system, this gives all of the microorganisms an equal and consistent supply of nutrients and oxygen. For a stripping tower, even water distribution provides a uniform water film on all of the packing for maximum gas transfer.
Poor water distribution can lead to the following problems:
If the water distribution is not done correctly, no other changes will bring the tower up to full performance. On the other hand, improving the water distribution in a tower is a very cost effective way to get more performance out of a system. Although the water distribution system is a key element in the design of a tower, it is a relatively small part of the system cost. Usually, it is less than 10% of the cost. If a distribution system only wets half of the media, the tower must be twice as large to get the same performance as a system that gives complete coverage. It cannot be overemphasized that an even distribution of water is essential to obtain full performance from a packed tower.
Water distribution designs that are less than optimum include the use of the following methods:
Here is a worst-case example. Water is dumped from a pipe at a single point into a vessel.
Here is an actual example of a bad water distribution system that we noticed during a tour of a public aquarium. Notice that none of the media is being wetted. Why bother building it?
Spreading the water out evenly doesn't happen by accident or magic. A good water distribution system will typically consist of one of the following three designs:
Design 1. - A piping system with square pattern, solid cone spray nozzles. The spray patterns touch at the edges.
Design 2. - A piping system with round or square hollow cone spray nozzles. The nozzles will be spaced to provide overlapping patterns.
Design 3. - A flat pan or tray holding a few inches of water with target nozzles installed in the bottom. The spray patterns from adjacent nozzles will overlap.
There are advantages and disadvantages for each type of system.
Design 1: This system is simple to design and relatively easy to construct. It can provide even coverage depending on the quality of the spray pattern. The use of a pipe system allows for unrestricted air movement and relatively easy access to the packing media. The main disadvantage is that these systems typically require higher head pressures to operate well. The pressure drop through the nozzles is usually around 5 – 15 ft. of H2O.
Design 2: This system is also simple to design and relatively easy to construct. If it is done properly, it will provide a very even water distribution. Head loss will be equal to or less than Design 1. The disadvantage of this system is that a larger fraction of the water will impinge on the wall of the vessel. If the vessel is large then this wall fraction will be relatively small. If the vessel is small, however, this fraction can be more substantial.
Design 3: This system can be a little more complicated to design due to the structural requirements. The weight of the water in the pan must be well supported. The big advantage to this design is the very low head requirement. Pan type systems can operate with 12" of total head loss. Other disadvantages of this system are the restriction of air flow for counter flow towers and the difficulty of observing or accessing the media.
The ideal nozzle for our applications will have:
1. An even water distribution pattern
2. An open, clog resistant flow path
3. A low head loss
4. A low cost
5. Standard pipe thread attachment
6. Good impact resistance
7. Corrosion resistance to salt water
8. Good performance under a wide range of flow rates
A nozzle should not:
1. Provide a fine mist or fog of small droplets
2. Have small, easily clogged flow passages.
3. Throw the water a long distance.
4. Have moving parts.
This is an example of a nozzle that fits the Design #1 system. This picture shows the bottom of the nozzle. The cost of this nozzle is about $15.00. It has a 2" male NPT connection. It is injection molded from ABS or polypropylene.
Here is a round pattern nozzle for Design #2. The cost of this nozzle is about $4.00. It has a 1.5" male NPT connection. It is injection molded from polypropylene.
This is a square pattern nozzle for Design #2. The cost of this nozzle is about $4.00. It has a 1.5" male NPT connection or a 1.5" self taping, tapered thread. It is injection molded from polypropylene.
Here is a nozzle for Design #3. This nozzle fits in a hole in a pan. It has a two piece design that locks itself into the pan. Typical spacing is 4 – 12 inches on center, depending on water loading. The cost of this nozzle is about $4.00. It is injection molded from polypropylene.
In order to design a good system, one needs good data. Here is a typical flow vs. pressure chart. This chart allows the designer to choose the number of nozzles that are needed for a given flow at the available pressure. If the flow per nozzle is known, one can determine the head loss through the nozzle. Although the pressure is shown on the X axis, it is the dependent variable.
After the number of nozzles, operating pressure and flow is known, the correct spacing above the media must be determined. A diagram showing water spread is very helpful. Here is a typical example.
Design Example 1
Using the above charts we can design a distribution system for a typical CO2 stripper. The proposed tower has a 4 ft. x 4 ft. plan area and a flow rate of 232 gpm. If we divide the tower into 4 equal squares of 2 ft. x 2 ft. then the flow per nozzle will be 58 gpm. Using the chart for flow vs. pressure we see that a nozzle with an orifice of 1.375 in. will have a pressure loss of 4 ft. H2O. Since the pattern is 2' x 2', the distance from the center of the nozzle to the edge of the pattern is 12". Using the spread chart for the 1.375" orifice we see that the bottom of the nozzle must be about 9" from the media to get the 12" horizontal spread.
Design Example 2
For this example let us design a pan distribution system. The tower is 4 ft. x 6 ft. plan area. The water flow rate is 200 gpm. If we space the nozzles 8" on center and 8" from the walls there will be 5 rows of 8 nozzles for a total of 40 nozzles. The flow per nozzle will be 5 gpm. Using the chart below there is a choice between two nozzles. The 0.75 in. orifice will require a head of 6 in. of water in the pan. The 0.875 in. orifice will require 3.5 in. of H2O in the pan. The usual vertical spacing for these types of target nozzles is 2 – 4 inches from the bottom of the nozzle to the media surface.
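For readers who want to script this kind of layout arithmetic, here is a minimal sketch of the calculation behind Design Example 2. The function names and the simple even-spacing rule are assumptions for illustration; a real design should still be checked against the nozzle manufacturer's flow, pressure and spread charts.

```python
# Pan-nozzle layout arithmetic: nozzle count and flow per nozzle.
def nozzles_per_side(side_in, spacing_in, wall_offset_in):
    """Nozzle positions along one side, starting one wall offset from the edge."""
    usable = side_in - 2 * wall_offset_in
    return int(usable // spacing_in) + 1

def pan_layout(length_ft, width_ft, flow_gpm, spacing_in=8, wall_offset_in=8):
    rows = nozzles_per_side(width_ft * 12, spacing_in, wall_offset_in)
    cols = nozzles_per_side(length_ft * 12, spacing_in, wall_offset_in)
    count = rows * cols
    return rows, cols, count, flow_gpm / count

rows, cols, count, gpm = pan_layout(length_ft=6, width_ft=4, flow_gpm=200)
print(rows, cols, count, gpm)  # 5 rows x 8 nozzles = 40 nozzles at 5 gpm each
```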
Good water distribution is a critical component of packed tower design. Fortunately, it is relatively easy and inexpensive to do it right. In most cases, it is also possible to retrofit a good system into an existing tower. Consult with your tower or media supplier to improve the performance of your trickling filters and CO2 strippers.
©2006 by L. S. Enterprises. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.
Published by L. S. Enterprises
Author: Matt Smith
Folic acid (folate). Folic acid, another B vitamin, helps assist in the creation of many neurotransmitters. It is also essential to the production of hemoglobin, the oxygen-bearing substance in red blood cells, so deficiencies often lead to anemia. Studies have shown abnormally low levels of this vitamin in from a quarter to a third of all depressed persons. Other symptoms include fatigue, lower-extremity problems, and dementia. Orthomolecular psychiatrists have used folic acid supplements for many years to reduce the frequency of relapses in their patients. Poor dietary habits contribute to folic acid deficiencies, as do illness, alcoholism, and various drugs, including aspirin, birth control pills, barbiturates, and anticonvulsants. It is usually administered along with vitamin B12, since a B12 deficiency can mask a folic acid deficiency. The usual dose is 800 mcg. Higher doses, though safe, require a prescription.
Vitamin C (ascorbic acid). Vitamin C, widely known for its antioxidant abilities, is also important for mental health. Subclinical deficiencies can produce depression, which requires the use of supplements. One study showed that a single 3-gram dose of vitamin C reduced symptoms by 40 percent in eleven manic and twelve depressed patients after only four hours. Supplementation is particularly important if you have had surgery or an inflammatory disease. Stress, pregnancy, and lactation also increase the body's need for vitamin C, while aspirin, tetracycline, and birth control pills can deplete the body's supply. A good maintenance dose is 1 to 3 grams daily, with more for depressed people, smokers, and those exposed to toxins of various kinds.
There are at least fifteen minerals that are essential to health. Either inadequate or excessive dietary intake can lead to mental and behavioral problems, including depression, often before any physical symptoms appear.
Minerals important to mental health include:
- Sodium and potassium. These minerals are considered together because they determine the body's electrolyte balance, which regulates water levels. Eating a lot of salty food (sodium) disrupts this balance. This not only produces high blood pressure, but also affects neurotransmitter levels, producing depression and PMS. In addition, the misuse of diuretics, or "water pills," can lead to potassium deficiency, which in turn can manifest itself as depression. A good daily dose is from 200 to 400 mg.
- Iron. Iron deficiency can result in anemia, which can produce symptoms such as depression, irritability, fatigue, loss of attention span, and insomnia. One study found that nearly half of all premenopausal women and a third of all children do not get enough iron, so supplementation in these groups could have a significant impact on the frequency of depression and other disorders. From 15 to 30 mg a day is a good maintenance dose. On the other hand, excessive iron can lead to toxicity, especially in men, who are not losing the mineral regularly through menstruation. Therefore, men shouldn't supplement with iron unless under a doctor's direction.
- Magnesium. This mineral assists in all of the body's energy reactions.
Deficiency can result in depressive symptoms, along with confusion, agitation, anxiety, and hallucinations, as well as a variety of physical problems. Most diets do not include enough magnesium, and stress also contributes to magnesium depletion. Other possible reasons for a deficiency include kidney or parathyroid disease, high blood pressure, chronic fluid loss, alcoholism, and malabsorption disorders. Several studies have shown that magnesium injections can bring relief from symptoms such as fatigue, aches and pains, weakness, and lethargy. I frequently give magnesium shots for migraine headaches, PMS, and allergies. A daily maintenance dose is 400 to 800 mg, with more needed to correct deficiencies.
- Calcium. Depressed individuals often have excessive calcium levels, particularly those with bipolar disorder (see Chapter 2). When these patients recover, their calcium levels usually return to normal. Depression can also occur in cases of calcium deficiency, long before the appearance of physical deficiency symptoms. In addition, calcium works with magnesium to maintain balance, or homeostasis, in the body, much as sodium and potassium work together to achieve balance in water levels. If you are supplementing with calcium, you will need to take one-half as much magnesium, sometimes even more, to keep the two properly balanced. This includes women who are taking calcium supplements to prevent osteoporosis. A good daily dose is 800 to 1,000 mg.
- Zinc. Zinc deficiencies frequently lead to depression, since this mineral is essential to many processes related to brain function. In addition to irritability, mental slowness, and emotional disorders, zinc deficiency can produce changes in taste and smell sensations, a loss of appetite, reduced immune function, and rough skin. These symptoms are particularly common among older people and in women, especially those with eating disorders. An excellent treatment for anorexia and bulimia uses high doses of zinc, beyond the recommended 15 to 30 mg daily.
There are two other nutrients that are important to mental health. S-adenosylmethionine (SAM), a natural, active form of the amino acid methionine, helps process a wide variety of neurotransmitters, including norepinephrine, dopamine, phosphatidylcholine, serotonin, and melatonin. In Europe, SAM is sold as an antidepressant, where it performs as well or better than synthetic drugs without the side effects.
Phosphatidylserine, another substance that is particularly plentiful in the brain, helps to ensure proper nerve function by keeping the membrane surrounding each brain cell fluid and flexible. Proper membrane fluidity affects nerve signal transmission, the binding of neurotransmitters to receptor sites, and the activity of monoamine oxidase, or MAO. Phosphatidylserine can also enhance mood, behavior, and mental function by increasing the accumulation, storage, and release of several neurotransmitters. I have used it successfully in my practice as an adjunct in treating depression and to improve cognitive functions such as thinking and memory, in doses of 100 mg two to three times a day. It should not be combined with an antidepressant without a doctor's supervision, nor is it recommended for bipolar disorder.
Other Possible Causes of Depression
There are several illnesses that can mimic depression, including chronic fatigue syndrome, systemic candidiasis, and hypoglycemia, or low blood sugar. St. John's wort can be especially useful in treating the first two disorders; both are related to immune dysfunction, and St. John's wort can fight depression and strengthen the immune system (see St. John's Wort - The Versatile Herb). Two additional factors to consider in depression are hormonal imbalances and the effects of pollution. (See Solving the Puzzle of Chronic Fatigue Syndrome by Michael Rosenbaum, M.D. and Murray Susser, M.D.)
Chronic Fatigue Syndrome
Chronic fatigue syndrome (CFS) is an often-baffling disorder that can thoroughly disrupt someone's life, as Melissa discovered.
Melissa, a successful 38-year-old professional, described herself in desperate terms: "I feel like I'm losing my mind. I'm absent-minded, simple tasks overwhelm me, and I'm in tears at the drop of a hat. I'm exhausted most of the time. I can barely get up in the morning. All day, there's a constant struggle to stay awake. I feel my life is over!"
Melissa's medical history revealed unaddressed explanations for her desperation. Six months earlier, she'd had the flu. Despite her apparent recovery after about a month, she never fully regained her former strength and energy. She could no longer exercise as before, and found that it depleted rather than energized her. She drank coffee to boost her energy, but after a while even that did not work.
I ordered several blood tests. They revealed a number of problems, any one of which could cause fatigue, anxiety, and depression: iron-deficiency anemia, elevated Epstein-Barr viral antibodies-which indicated both a past and presently active viral infection-and hypoglycemia. I prescribed an iron supplement for the anemia, a standard medical diagnosis that is often overlooked. However, in contrast to standard treatment, I also prescribed immune-boosting and energizing herbs, including astragalus, echinacea, goldenseal, licorice, and Siberian ginseng. In addition, Melissa took St. John's wort, mega-doses of vitamins and minerals, and specific amino acids, especially lysine and cysteine, for viral defense. Within three months she was feeling like herself again-active, enthusiastic, optimistic, and no longer depressed.
Melissa's flu turned out to be Epstein-Barr virus, a chronic, relapsing form of infectious mononucleosis that is part of CFS. CFS appears to be caused by any one of a group of viruses that can lie dormant for months or years at a time, then be reactivated by physical or emotional stress. Symptoms include depression, extreme fatigue, nonrestorative sleep, impaired memory and concentration, anxiety attacks, intermittent low-grade fevers, sore throat, swollen lymph glands, muscle aches and pains, and allergies. One feature that distinguishes CFS from depression or other causes of fatigue is a negative response to exercise. While most individuals feel energized after exercise, chronic fatigue patients feel worse.
Systemic Candidiasis (Candida Infection)
Systemic candidiasis is caused by the fungus Candida albicans, an organism normally found in the intestines and elsewhere that can grow out of control under certain conditions. Joni is a good
was related to the difficulty in generating product ideas and to time management.
Finally, the study centered on the approximately 50% of eligible talent pool students who did not choose to do a project. Overall, it was concluded that the regression results were of negligible value in uncovering the correlates of creative production. However, the results of the questionnaire and interview data were significant in revealing factors critical to the implementation of a model focusing on the creative/productive behavior of gifted students. Of particular interest in this study was Gubbins' analysis of students who did not develop products (345 of 775) or those students who started products but did not complete them (7 of 430). Sixty-one percent of the students who did not develop a product indicated that they did not have an idea for what they might study. Forty-five percent of the students indicated that they would have to make up classwork that they missed and a nearly identical percentage (44%) indicated that they had a full schedule in school. The remarkably low percentage of non-completers is also indicative of the enjoyment most students had in the completion of their products. A review of the trends and patterns in the response data disclosed four factors that interfered with the product completion: interest level, task commitment, time commitment, and human and material resources (Gubbins, 1982).
Reis (1981) and Gubbins (1982) found that approximately 40-50% of identified talent pool students in new SEM programs do not choose to participate in the Type III investigations described earlier. Although personal variables can account for some of the variation in students' decisions to begin such projects, these two researchers have speculated that programming practices may account for a larger portion of this variance. Both researchers suggested two practices that might increase student participation in the Type III component of the Enrichment Triad Model. First, they suggested that more teachers should provide curriculum compacting within the regular classroom to provide more time for Type III projects. Second, they suggested that teachers of the gifted might provide above average ability students with Type II training units that were specifically designed to teach students how to identify their interests, find problems, and develop a research design or problem solving paradigm.
The Effects of Training on Type III Products
Burns (1987) and Newman (1991) investigated the use of different training programs on participation in Type III studies. Burns (1986) compared the effects of Type II training (in how to focus and manage a Type III project) and additional personal and environmental variables on 515 students' decisions to initiate problem solving investigations (Type IIIs) in new SEM programs. Forty-eight groups of above average ability students in grades 3-8 were randomly assigned to either comparison or experimental groups. Students in the treatment group received seven Type II lessons in how to organize a Type III investigation. Students in the comparison group received Type I experiences or Type II training in one of the other sets of skills within Renzulli's Type II taxonomy. The initiation of a Type III investigation was used as the dependent variable in the study. Personal variables and participation in either the treatment or the control group were entered into a hierarchical discriminant function analysis to identify the strength of the treatment beyond the personal variables of grade, gender, self-efficacy, learning style preferences, academic achievement, and academic aptitude. The discriminant function equation proved to be significant (X²=121.69, p<.00001). All eight predictor variables proved to be significant and accounted for 22 percent of the variance between groups.
As a group, the students who received the Type II training were 64 per cent more likely to initiate a Type III investigation than the students who did not receive the training. Participation in group was about three times more important than grade, and more than three times as important as gender, achievement and prior involvement in creative projects in predicting which students would initiate Type III investigations. Learning style preferences for independent study and projects were relatively unimportant and pre self-efficacy scores were the second most powerful predictors of student initiation of Type III projects. The success of the experimental lessons that were developed for this study (Burns, 1987) suggests that teachers in programs that stress real world problem solving might consider spending more class time teaching students how to initiate and plan such projects. Burns concluded that teaching these skills prior to the initiation of the projects, would increase the number of students who undertake these investigations during the academic year.
Newman (1991) investigated the integration of a set of Talents Unlimited (Schlichter, 1986) training lessons (in creativity, planning, decision making, forecasting, and communicating) with teacher guidance in how to plan, manage, and complete a Type III investigation in order to examine the effects of these lessons on the quality of products and number of students who chose not to complete products. Talents Unlimited is often used as a Type II training component in SEM programs. Subjects included 147 Talent Pool students in grades three through six, from three school systems which implement the SEM and the Talents Unlimited model. Students in the treatment group received training in applying the Talents Unlimited model to steps of investigating a real problem. Students in the comparison group continued to follow guidelines described in the Schoolwide Enrichment Model (Renzulli & Reis, 1985) as they pursued their investigations. Data collection included tallies of the number of Type III investigations initiated, the number actually completed, and the number of students who did not complete Type III studies. Student products were evaluated by two independent raters using the Student Product Assessment Form (Reis, 1981). In addition, logs and conferences were used to provide an internal check on the consistency of procedures, as well as to determine student and teacher perceptions, attitudes, and reactions to the treatment lessons. When examined in relation to the comparison group, the treatment group had significantly fewer students who did not complete products, as measured by chi-square analysis, X²(1, N=160) = 20.198, p < .05. Results of analysis of variance procedures also showed a significant difference in the quality of products completed by students in the treatment group. Finally, qualitative analysis supported the statistical analyses and indicated favorable reactions from students and teachers toward the treatment.
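For readers unfamiliar with the statistic reported above, the sketch below shows how a 2x2 chi-square test of independence of this kind is typically computed. The counts are invented purely for illustration and do not reproduce Newman's data.

```python
# Hypothetical 2x2 completion analysis (treatment vs. comparison group).
from scipy.stats import chi2_contingency

#            completed  not completed
table = [[78, 4],    # treatment group (invented counts)
         [58, 20]]   # comparison group (invented counts)

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"X^2({dof}, N={sum(map(sum, table))}) = {chi2:.3f}, p = {p:.4f}")
```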
Investigations of Student Creative Productive Behaviors
Delcourt (1988) and Starko (1986) investigated student creative productivity. Delcourt (1988) investigated characteristics related to creative/product behavior in adolescents who consistently engaged in first-hand research of self-selected topics. The topics were related to activities both within or outside of school. Selection of students for this study was based upon the quantity and quality of their projects. Therefore, giftedness was viewed as being manifested in performances. In contrast to a static perspective of the gifted individual, this conception of giftedness focused upon the dynamic nature of gifted behavior (Renzulli, 1986). The sample consisted of 18 students in grades 9 through 12 from four sites in the Northeast. All sites were located in typical high schools, as opposed to special schools for the gifted and talented. These schools conducted programs for the gifted and talented, focusing upon the development of creative/productive behavior in students. Programming included advanced placement courses, honors classes, special seminars, and mentorships.
A qualitative analysis of student interviews, questionnaires, and documents was conducted. To provide checks for both reliability and validity of collected data (Smith, 1975), triangulation was sought from three sources: the school, the student, and the parents. A microcomputer program, The Ethnograph (Seidel, Kjolseth, & Seymour, 1988), was employed for sorting and retrieving coded text data. Responses to the following question were analyzed: "Having developed several products, how do you think your ability to work on these projects has changed over time?" These responses were separated into the following groups: (a) changes related to improvements in products and changes related to the skills necessary for product completion (e.g., writing, research methodology); (b) changes in personal characteristics (e.g., patience, self-satisfaction); and (c) changes related to career choices. Results concerning the family, the school, and the individual revealed the following: (a) targeted students do exhibit characteristics similar to those of creative/productive adults; (b) these students can be producers of information as well as consumers; and (c) the learning processes of these students merit closer attention if their abilities are to be better understood by themselves, their parents, and their teachers.
Starko (1986) also examined the effects of the Schoolwide Enrichment Model on student creative productivity. This research compared students who participated in SEM programs for at least four years with students who qualified for such programs but received no services. Questionnaires were used to determine the number of creative products produced by both groups, both within school programs and in dependent activities outside of school, as well as to gather information about attitudes and skills associated with creative productivity. Hierarchical multiple regression, as well as qualitative analysis of open ended questionnaire items, was used for data analysis. Results indicated that students who became involved in independent study projects in the SEM more often initiated their own creative products both in and outside of school than did students in the comparison group. A total of 58 students in the program when compared to 45 students in the comparison group participated in the study. The group in the enrichment program reported over twice as many creative projects per student (3.37) as the comparison group (1.4). The group that participated in the enrichment program also reported doing over twice as many creative products outside of school on their own time (1.03) than the comparison group (.50).
Additionally, students who had participated in the enrichment program showed greater diversity in projects and more sophistication in
Modulating the unfolded protein response with ONC201 to impact on radiation response in prostate cancer cells
Abstract
Prostate cancer (PCa) is the most common non-cutaneous cancer in men and a notable cause of cancer mortality when it metastasises. The unfolded protein response (UPR) can be cytoprotective but when acutely activated can lead to cell death. In this study, we sought to enhance the acute activation of the UPR using radiation and ONC201, an UPR activator. Treating PCa cells with ONC201 quickly increased the expression of all the key regulators of the UPR and reduced the oxidative phosphorylation, with cell death occurring 72 h later. We exploited this time lag to sensitize prostate cancer cells to radiation through short-term treatment with ONC201. To understand how priming occurred, we performed RNA-Seq analysis and found that ONC201 suppressed the expression of cell cycle and DNA repair factors. In conclusion, we have shown that ONC201 can prime enhanced radiation response.
Introduction
Prostate cancer (PCa) is the most common cancer diagnosed in men and the second most common cause of cancer death after lung cancer. According to recent projections, prostate cancer incidence rates are predicted to rise by 12% in the UK between 2014 and 2035, to 233 cases per 100,000 males by 20351. Clinically localised PCa is treated using radical prostatectomy or radiotherapy to remove or destroy the cancer cells confined within the prostate capsule. However, 10–15% of the patients are diagnosed after their cancer has spread and present with advanced or inoperable disease2.
The prostate is a specialized accessory gland with a high secretory capacity. During cancer progression, cells experience mitogenic pressure and intracellular stress (e.g., metabolic pressure to rapidly grow and divide), detected by the endoplasmic reticulum (ER) as an accumulation of misfolded proteins. When the cells are not able to cope with the overload, the unfolded proteins accumulated in the ER, trigger an adaptive response called the Unfolded Protein Response (UPR)3. Attempting to clear the unfolded proteins and increase the capacity of the ER, the UPR activates several molecular pathways. Here, the so-called ER stress sensors PERK, IRE1alpha and ATF6 play a central role in the initiation and regulation of the UPR4,5,6,7. Previously, several studies have reported the activation of the UPR during tumour transformation and progression, leading to the acquisition of adaptive phenotypes to restricted nutrient supplies and therapies8. Upstream elements like XBP1 and ATF6 are upregulated in hepatocellular carcinomas9, in a range of breast cancer cell lines10, colon cancer and melanoma8. Although the UPR is generally viewed as a cytoprotective response, prolonged ER stress can directly regulate the cell death machinery through the activation of CHOP11,12. One of the mechanisms by which CHOP promotes apoptosis involves its ability to decrease anti-apoptotic Bcl-2 levels and stimulate the release of cytochrome C into the cytosol, resulting in the activation of apoptotic caspase 313. Being a major secretory organ, the prostate is particularly reliant on the proper functioning of the ER and is vulnerable to agents or conditions that cause ER stress. Several studies have pointed to a positive association between ER/UPR markers and the development of prostate cancer14. Here, the activation of the IRE1alpha-XBP1 axis of the UPR contributes to tumorigenesis in contexts in which the driver is the androgen receptor15. Moreover in prostate cancers characterised by c-Myc overexpression and PTEN mutations, the ATF4 axis of the UPR plays a pro-survival role16. Even though the activation of the UPR is known to be the result of the accumulation of unfolded proteins in the ER14, treatments such as radiotherapy also increase stress and perturb general cellular homeostasis.
Radiotherapy causes double-strand DNA damage17,18,19. If unresolved, the radiation-induced DNA damage can lead to the production and the accumulation of unfolded and/or misfolded proteins in the ER20. Recent literature has shown that radiation exposure of glioblastoma stem cells activates key components of the UPR, culminating in autophagosome formation21. Moreover, the overexpression of UPR genes encoding GRP78 (BiP) and GRP94 has been extensively associated with radio-resistance in multiple cancer types, including breast, pancreatic and gastric cancers22,23,24. Recently, Drake and co-workers demonstrated that therapeutic doses of radiotherapy led to an upregulation of GRP78 in 72% of colorectal cancer cases receiving treatment25. Taking all this evidence together, targeting the UPR may provide an opportunity to enhance responses to radiotherapy.
ONC201 is an inhibitor of the dopamine receptors DRD2/3 and has previously been reported to induce apoptosis in haematological malignancies and solid tumours26. Pre-clinical studies have shown that ONC201 has an excellent safety profile at doses that exceed effective doses by tenfold and specificity for tumour versus normal cells in vitro27. The lack of cytotoxicity in normal cells was also confirmed in a panel of normal human bone marrow samples28. ONC201 was also shown to cause apoptosis in stem and progenitor AML cells and abrogated the engraftment of leukemic stem cells in vivo while sparing normal bone marrow cells29. In these models, ONC201 promoted apoptosis by increasing the translation of the transcription factor ATF4 in a non-canonical way, through an increase in the phosphorylation of eIF2α29. However, more recently Graves et al., described a cytostatic effect mediated in Triple Negative Breast Cancer (TNBC) cells by the direct binding of ONC201 to the mitochondrial protease ClpP, which resolves into an alternative modulation of the integrated stress response pathway30. ONC201 has been also shown to be effective in early stage clinical trials in a number of cancer types31,32. Here we assess the impact of the ONC201 on all axes of the UPR to determine whether it can be used to enhance radiation response in PCa models.
Results
Chronic activation of the Unfolded Protein Response with imipridone ONC201 induces cell death by targeting the mitochondria
The activation of the Unfolded Protein Response (UPR) is known to be the result of the accumulation of unfolded or misfolded proteins in the ER14. Among all the components of the UPR, CHOP is known to be a pro-apoptotic transcription factor expressed in response to acute stress, including the genotoxic stress that can be induced by therapies in cancer cells. In order to further evaluate the role played by the UPR in modulating the cellular response to therapy-induced stress in prostate cancer cells, we assessed the impact of ONC201, a small molecule known to activate the UPR in other models26,33. The treatment of PC3 cells, a cell line known to be radiation-resistant34, with ONC201 (5, 10 and 15 µM35) significantly induced cell death at 72 h (Fig. 1a). Interestingly, the cytotoxic effect observed with ONC201 at 72 h occurred after an early increase in the expression of all the main components of the UPR at 24 h (Fig. 1b; Supplementary Fig. 11a). This occurred in the absence of any significant inhibition of Akt phosphorylation and despite the fact that PC3 cells express very low levels of DRD2, a reported molecular target of ONC201 (Supplementary Fig. 10b,c—associated full-length blots in Supplementary Fig. 12). It has been recently reported that ONC201 can induce cellular stress by disrupting mitochondrial function30,35. To further test this in the PC3 cell line, we depleted the mitochondria from our PC3-WT to generate a new cell line (mt depleted PC3 in the figures, Supplementary Fig. 1a–c). The mitochondrially depleted PC3 cells we generated showed a 95% reduction in mitochondrial DNA content, accompanied by lower basal and maximal respiration rates—by about 80%—and reduced ATP production (green bars versus red bars). Using the Seahorse Assay, we observed that the activation of the UPR in response to ONC201 treatment at 24 h was accompanied by the significant inhibition of mitochondrial respiration (Fig. 1c,d and Supplementary Fig. 1d). ONC201 (10 µM) treatment reduced the oxygen consumption rate (OCR) by about 50% (Fig. 1c—blue bar versus red bar) and the principal respiratory parameters (basal OCR, the proton leak and the ATP production) were all reduced by about 60% (Fig. 1d
How To Identify Mobile Phone Parts/functions (Small Parts)
The Small Parts On A Mobile Phone's PCB
In this article on "How To Identify Mobile Phone Parts/functions" we shall be focusing on "The Small Parts" found on mobile phones generally before digging into "The Big Parts".
Most technicians who do not appreciate how important it is to identify mobile phone parts and the functions performed by each of them may not achieve much, and might even cause more damage or make avoidable mistakes by neglecting such an indispensable aspect of the art. It is sometimes funny to see a GSM technician running a jumper between two points, say from A to B, where a shorted capacitor or diode was removed. Such a thing should rather be done for a fuse in the power section, or wherever else it is appropriate.
When you run a jumper in place of a capacitor, you are creating an artificial short and adding more salt to the injury! You should rather do that for an open circuit, where you are sure there is a broken path. Alright, that was just by the way. At this point, we will begin to consider how to identify mobile phone parts and the functions they perform.
How To Identify Mobile Phone Parts/functions They Perform
Resistors: A solid-state electronic component that resists or limits the flow of current in a circuit to a measured value. It opposes the flow of current and is measured in ohms using the multimeter. When a resistor is open, a phone can go dead. When the resistance value of a resistor drops, say from 1.5 kilo-ohms to 700 ohms, more current will flow into the circuit board, and vice versa. Get a schematic or block diagram to find out the exact value the manufacturer assigned, so you can measure it on your PCB and check whether it corresponds or the part needs to be changed. You can check the measurement using the multimeter in ohms mode. A resistor with 0 resistance will act like a fuse, so it will give a sound in buzzer mode, while a 1 kilo-ohm resistor will not give a buzzer sound in the same mode. The values of resistors are written on them, but where they are too small the values are not written and the body is left plain. However, one can still determine the resistance value using the ohms mode on the multimeter.
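To make the effect of a drifting resistance concrete, here is a small worked example using Ohm's law (I = V / R). The 3.7 V supply is an assumed, typical phone battery voltage used only for illustration; it is not taken from the article.

```python
# Ohm's law: lower resistance on the same rail means more current flows.
V = 3.7                    # volts (assumed battery voltage)
for R in (1500, 700):      # ohms: nominal value vs. degraded value
    I_mA = V / R * 1000
    print(f"R = {R} ohm -> I = {I_mA:.1f} mA")
# 1500 ohm gives about 2.5 mA; 700 ohm gives about 5.3 mA,
# i.e. roughly twice as much current through the degraded part.
```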
Capacitors: An electronic component that acts as a filter by filtering electric current from either an AC or DC source, creating stability and eliminating noise in a circuit. Capacitors can be either polarized or non-polarized. When working with a polarized capacitor, you have to take the positive and negative terminals into consideration by making sure they are placed according to the positive and negative sides of the circuit. A non-polarized capacitor can be placed in any direction; these can be found in grey, brown and orange-like colours. SMT polarized capacitors are found in two colours on mobile phones: black with a white strip representing the negative side, and orange with a brown strip representing the negative side. To know the value of a capacitor, its capacitance is measured using a capacitance meter. Depending on where the capacitor sits in a circuit, it can affect the proper functioning of the hardware components it is connected to. If a charging capacitor is shorted, for example, the phone won't charge, chargers may get burnt and, among other things, the phone can go dead too. A faulty capacitor will give a sound in buzzer mode while a good one will not. You can also check whether it is good in battery mode: while the meter is set to 20 DCV, if the reading keeps dropping it is a healthy sign; while set to ohms mode, say 2000k, if the reading keeps increasing it is a good sign.
Diode: An electronic component that allows current to flow in a particular direction, either forward bias or reverse bias, depending on the circumstances of the flow of current. The diodes you will become familiar with in GSM repairs include the following: the Zener diode, the rectifier diode, the photo diode and the light-emitting diode, known as the LED.
The Zener diode is found in the charging section and is responsible for filtering, protecting, minimizing and regulating current and passing it forward; it comes in different strengths and values, i.e. 4 volts, 6 volts, 8 volts, 10 volts, etc.
The rectifier diode, on the other hand, is found in the power section and converts AC current into DC. It allows current to flow in the forward-bias direction only, unlike the Zener diode, which is operated in reverse bias.
The photo diode mainly senses and captures light rays for infrared functionality; by this means it is able to transfer data for a required user function. This application is found in sensors, remote controls and some high-tech medical equipment.
The LEDs (Light Emitting Diodes) primarily emit or produce light. An LED is a crystal device that produces light when current flows through it; the light colour is determined by the crystal colour combination. LEDs illuminate the phone's screen display. You can test whether an LED is good by using both test leads of your multimeter to check whether it glows in buzzer mode. Aside from the diodes discussed here, there are several other diodes, such as the tunnel diode, the Schottky diode, etc.
Normally, a diode is tested in buzzer mode. If it gives a sound in both directions, that is a bad sign that it is shorted, and it should be removed so that it does not affect the smooth working of any part of the phone. A good diode should give a reading in one direction and show an open ('1' or OL) in the other direction when you test it with a multimeter. A bad diode giving a buzzer sound can affect any part of a phone: it can affect power, it can affect charging, etc.
Transistor: A semiconductor solid-state electronic device. It is a combination of two or more diode junctions acting as a switching mechanism in an electronic circuit. Generally speaking, there are two main types of transistors, namely NPN transistors and PNP transistors. A transistor consists of the base (B), the collector (C) and the emitter (E). Aside from acting as a switch, it also performs amplification of electronic signals. The more commonly used of the two is the NPN, probably because it is easily made from silicon, which is cheap and readily available. A good example of how it amplifies electronic signals is when you are about to snap a picture and notice a sudden flash of LED light at its full strength; that is the amplification function at work. There are lots of them on the PCB, and they can be found in the charging section, the power IC section, the display IC section, etc.
Coils: Coils may be either boost or step-down coils depending on the function they perform, just like what you see in AC-to-DC transformers. Boost coils can be found in any part of a phone's PCB; they are required to step up the current or voltage needed for a specific function to a given value. Step-down coils, on the other hand, help to reduce and regulate the flow of current to a determined value. Step-down coils usually supply current to ICs that operate at low voltage levels, which is why they are spread all across a phone's PCB like the resistors and caps. A faulty coil can create an open circuit, interrupting the current expected to flow through a circuit on a mobile phone's PCB.
Backlight Driver: The backlight driver in a mobile phone supplies the regulated amount of electric current needed for the phone's LCD (Liquid Crystal Display) to be illuminated. The backlight driver has eight legs or pins, comes in different sizes and shapes, and can be found in the power section of a mobile phone, mostly close to the battery terminal. When the backlight driver is faulty the phone screen will go black and nothing will be visible on the display.
Questions related to Environmental Science
I have read some articles on this, but reagents to increase the floatability of microplastics are not commonly used. Is it feasible to use biodegradable organic reagents to separate microplastics from soils by flotation? What would be the drawbacks or disadvantages?
Ph.D. students have performed exceptional research. When it comes to publication, however, many have trouble submitting their manuscripts, and many scholars are conducting research without any funding. This question is for those who want to publish their work without paying an article processing fee. Can anyone list the journals published by Elsevier, Springer, or any other publisher that do not charge an APC? It will be extremely beneficial to the research community. Thank you ahead of time.
As I am a final-year undergraduate student with interests in fields like ecological restoration, GIS, and urban ecology, I would appreciate suggestions on how I can narrow down a research topic in these fields.
Any suggestion would be highly appreciated!
There are several definitions and/or descriptions of metals/metalloids (e.g., Arsenic, Cadmium, and Lead) according to different authors. Authors mostly lean toward their different academic backgrounds, for example, chemistry, ecotoxicology, environmental science, health, etc., yet they are discussing them in a similar context. But what are the most appropriate terms to define/describe metals/metalloids in the environment, and why should we use them uniformly in this context?
1. Heavy metals
2. Potentially toxic metals
3. Potentially toxic elements
4. Toxic metals/elements
Fire is an important part of many ecosystems, as it is a driver of change, renewal, or maintenance of their balance. Whether natural or anthropogenic, with a short-term or a long-term cycle, fire is an integral part of their survival. But for how many of them? If you had to give the proportion of vegetated ecosystems in the world that are dependent on fire, what would it be?
I want to conduct rainfall disaggregation using the Hyetos package in R. Does anyone have experience with Hyetos in R? I am confused about how to find the parameters (lambda, phi, kappa, alpha, v, mx, sx) that I need to enter in the DisagSimul function. Are they obtained through the Excel sheet or through R code?
Hello, everyone. I am a freshman in environmental science. I plan to dig into the field of electrochemical advanced oxidation processes (EAOP). I would like to read some books on electrode materials used in the EAOP. Could you please recommend some useful books?
Or if someone knows a good book about electrochemical processes in environmental science, I would be very happy. I tried to read "Electrochemical Methods: Fundamentals and Applications" by Bard, but it was too difficult for me.
While many businesses are aiming for net-zero goals, do we have sound evidence that net-zero farming is possible or has already been achieved?
In the preparation of g-C3N4/BiFeO3 photocatalyst, a number of papers mention words like 'certain/appropriate' amount for chemical agents. Can anyone tell what this amount actually is?
1. Wang, X., Mao, W., Zhang, J., Han, Y., Quan, C., & Zhang, Q. et al. (2015). Facile fabrication of highly efficient g-C3N4/BiFeO3 nanocomposites with enhanced visible light photocatalytic activities. Journal Of Colloid And Interface Science, 448, 17-23. doi: 10.1016/j.jcis.2015.01.090
2. An, J., Zhang, G., Zheng, R., & Wang, P. (2016). Removing lignin model pollutants with BiFeO3–g-C3N4 compound as an efficient visible-light-heterogeneous Fenton-like catalyst. Journal of Environmental Sciences, 48, 218-229. doi: 10.1016/j.jes.2016.01.024
Since our targeted species is found only within a 2 km region of the study site, we are planning to use 30 m spatial resolution climate data in our Species Distribution Model (SDM). The problem is that my local weather station can only provide 20 km resolution data, while the alternative, WorldClim, is only available at 1 km.
My questions are:
1. Can I use downscaled data (from 1 km or 20 km) in my local SDM study, which will be at 30 m resolution?
2. If I downscale, will there be any variational changes in the climate data? Is it acceptable to do so?
Please note that I'm new to this field.
Thank you for your valuable time.
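For anyone wondering what the mechanics of such a resampling step look like in practice, the following is a minimal sketch only, not an endorsement of downscaling as a substitute for genuine 30 m data. It assumes a hypothetical 1 km GeoTIFF layer named bio1_1km.tif and uses the rasterio library:

```python
# Minimal sketch: resample a coarse climate raster (~1 km) onto a finer grid (~30 m)
# using bilinear interpolation with rasterio. "bio1_1km.tif" is a hypothetical file name.
# Note: interpolation only smooths the coarse values; it does not add real 30 m detail.
import rasterio
from rasterio.enums import Resampling

scale = 1000 / 30  # approximate ratio of source cell size to target cell size

with rasterio.open("bio1_1km.tif") as src:
    data = src.read(
        out_shape=(src.count, int(src.height * scale), int(src.width * scale)),
        resampling=Resampling.bilinear,
    )
    # Rescale the affine transform so the resampled grid stays georeferenced
    transform = src.transform * src.transform.scale(
        src.width / data.shape[-1], src.height / data.shape[-2]
    )
    profile = src.profile
    profile.update(height=data.shape[-2], width=data.shape[-1], transform=transform)

with rasterio.open("bio1_30m.tif", "w", **profile) as dst:
    dst.write(data)
```

Bilinear resampling like this only spreads the coarse grid over finer cells; genuine 30 m variation would require statistical downscaling against fine-scale covariates such as elevation or terrain.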
My study site is relatively small and the targeted species is found as continuous patches. Do I need to consider Patch size/area in the MaxEnt model?
Does patch size have any meaningful measurable values that can be included in the MaxEnt model?
I'm curious about knowing the potential to reduce GHG / increase C sequestration across different cropland uses (for example, dairy, wheat, corn).
Have you heard of reliable and attainable targets for different agricultural commodities and locations?
And to anticipate some answers, I know that it depends on a lot of factors.
Hello everyone, I recently received an email from what appears to be the organizers of the "2nd International Conference on Environmental Science & Green Energy", which is a section of the Pride Conferences. Now, I noticed that another section of the Pride Conferences, relating to microbiology and infectious diseases, has been flagged by the ResearchGate community.
What struck me initially is the fact that they have quoted one of my papers, which is not even remotely related to Green Energy and the topics listed in the conference website. Has anyone else received this email?
Nowadays we are overflowing with data thanks to the internet and social media, and researchers are combining this data availability with Artificial Intelligence (AI) tools in their work. In Environmental Science, what are the cutting-edge research areas where AI can be implemented, especially in South Asian countries such as Bangladesh where the latest technologies are not yet widely practised?
I'm writing a paper for an EIP-AGRI Focus Group on the value of organic soil improvers for soil (e.g. compost).
One of the topics is the ability of compost to sequester organic matter in soil and in this way reduce CO2 emissions and mitigate global warming.
There are ongoing discussions about whether this actually happens, and opinions differ. Are there any overview/review papers that address this topic?
And of course, what is your opinion on this?
We are at the beginning of making a predictive model of an invasive plant species using MaxEnt. The species is found as patches over the study area. I am new to using this model and have limited knowledge of it. I have reviewed several papers in which only point locations of present occurrences had been used.
Since my target species occurs as patches, how can I use the polygonal area where the species occurs, instead of point location data?
Or are there any other methods to cover the whole patch of the species into SDM?
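MaxEnt itself expects point occurrence records (typically a CSV of species, longitude, latitude), so one common workaround for patch data is to convert each polygon into a set of sample points. The sketch below illustrates that idea with the geopandas and shapely libraries; the file name, the one-point-per-hectare sampling density, and the assumption of a projected CRS in metres are all illustrative choices, not a prescribed workflow:

```python
# Minimal sketch: convert polygon patches into occurrence points for MaxEnt by
# drawing random points inside each patch. "patches.shp" is a hypothetical file.
import random
import geopandas as gpd
from shapely.geometry import Point

def random_points_in_polygon(polygon, n):
    """Rejection-sample n random points that fall inside the polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    points = []
    while len(points) < n:
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if polygon.contains(p):
            points.append(p)
    return points

patches = gpd.read_file("patches.shp")  # polygon layer of species patches
records = []
for _, row in patches.iterrows():
    # Density-based sampling: roughly 1 point per hectare.
    # Assumes the layer CRS is projected with units of metres.
    n = max(1, int(row.geometry.area / 10_000))
    records.extend(random_points_in_polygon(row.geometry, n))

occ = gpd.GeoDataFrame(geometry=records, crs=patches.crs).to_crs(epsg=4326)
out = occ.assign(species="target_species", lon=occ.geometry.x, lat=occ.geometry.y)
out[["species", "lon", "lat"]].to_csv("occurrences_for_maxent.csv", index=False)
```

Thinning the resulting points to roughly one per climate-grid cell is often advisable afterwards, to limit pseudo-replication from densely sampled patches.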
I have just been accepted into a PhD program in Environmental Science. After the first meeting with the program coordinator, I was informed that I have to complete an 'experiential learning' placement for one semester in order to 'learn how to do research'. Based on the explanation I received, it is a kind of internship, like the compulsory academic internship I did some years ago at the undergraduate level, but here it will be in a research institution or lab. What is very surprising to me is that what I will be doing during this internship-like semester should not have any link to my research interests or to my PhD project. So dear friends, my questions are:
- Has anyone of you ever heard about this kind of "internship" at PhD level?
- Do you think I could get any benefit from this kind of "internship", or is it a pure waste of time?
I'm now wondering whether I should change program or university. Please, I need your comments to help make this issue clear.
I am looking for collaborations in environmental sciences. Especially in Water technologies (Water quality, Water treatment, Water management....)
There are numerous books available in the market for cracking the UGC NET in environmental science. Which book is best for cracking the UGC NET in environmental science? Let's have a discussion.
Research within environmental science is pretty interesting and practical in nature. The journals focusing on environmental studies often have very high impact factors compared to other journals in the social sciences. What is the secret?
I want to know how to assign scores to different variables and what the basis is for giving scores when assessing intangible benefits.
I'm looking for data (mainly related to management: growth rate, canopy size, soil and climate preferences, etc.) about tropical trees used in tropical agroforestry.
Have you ever heard about a database or a source of technical information available to agroforest managers?
That would really facilitate land management and field experiments.
As always, I am trying to use these questions to centralize information from different sources. RG questions tend to be well indexed in Google for different users. Thank you for your contributions!
I am starting to run a biogas experiment, but finding problem ordering the best lab equipment as most of what i see on the market is Chinese, and i seem
Scheme designed to lead to program to inject sulphur into upper atmosphere to 'prevent global warming'
Paul Joseph Watson
Friday, September 16, 2011
Despite the pseudo-science of global warming being discredited with each passing day, scientists are preparing to field test an "artificial volcano" which is eventually intended to lead to mammoth geoengineering programs that would inject sulfur particles into the atmosphere at high altitudes, a process that other scientists have warned will cause widespread droughts and other drastic consequences.
"Next month, researchers in the U.K. will start to pump water nearly a kilometer up into the atmosphere, by way of a suspended hose," reports Scientific American.
"The experiment is the first major test of a piping system that could one day spew sulfate particles into the stratosphere at an altitude of 20 kilometers, supported by a stadium-size hydrogen balloon. The goal is geoengineering, or the "deliberate, large-scale manipulation of the planetary environment" in the words of the Royal Society of London, which provides scientific advice to policymakers. In this case, researchers are attempting to re-create the effects of volcanic eruptions to artificially cool Earth."
Never mind the fact that the science behind global warming is only becoming more contentious, with Norwegian physicist and Nobel laureate Ivar Giaever this week quitting the American Physical Society over its advocacy of the man-made climate change thesis. Allowing scientists driven by the political agenda that global warming alarmism has become to conduct such dangerous experiments on the ecosystem at such a massive scale is nothing short of insane.
As we have previously documented at length, this latest experiment is just one of a series of similar tests designed to investigate the feasibility of injecting aerosols into the atmosphere that have been carried out by scientific bodies under the control of both the British and American governments.
The proposal to disperse sulphur dioxide in an attempt to reflect sunlight was discussed in a September 2008 London Guardian article entitled, Geoengineering: The radical ideas to combat global warming, in which Ken Caldeira, a leading climate scientist based at the Carnegie Institution in Stanford, California, promoted the idea of injecting the atmosphere with aerosols.
"One approach is to insert "scatterers" into the stratosphere," states the article. "Caldeira cites an idea to deploy jumbo jets into the upper atmosphere and deposit clouds of tiny particles there, such as sulphur dioxide. Dispersing around 1m tonnes of sulphur dioxide per year across 10m square kilometres of the atmosphere would be enough to reflect away sufficient amounts of sunlight."
Experiments similar to Caldeira's proposal are already being carried out by U.S. government -backed scientists, such as those at the U.S. Department of Energy's (DOE) Savannah River National Laboratory in Aiken, S.C, who in 2009 began conducting studies which involved shooting huge amounts of particulate matter, in this case "porous-walled glass microspheres," into the stratosphere.
The project is closely tied to an idea by Nobel Prize winner Paul Crutzen, who "proposed sending aircraft 747s to dump huge quantities of sulfur particles into the far-reaches of the stratosphere to cool down the atmosphere."
Such programs merely scratch the surface of what is likely to be a gargantuan and overarching black-budget funded project to geo-engineer the planet, with little or no care for the unknown environmental consequences this could engender.
What is known about what happens when the environment is loaded with sulphur dioxide is bad enough, since the compound is the main component of acid rain, which according to the EPA "Causes acidification of lakes and streams and contributes to the damage of trees at high elevations (for example, red spruce trees above 2,000 feet) and many sensitive forest soils. In addition, acid rain accelerates the decay of building materials and paints, including irreplaceable buildings, statues, and sculptures that are part of our nation's cultural heritage."
The health effects of bombarding the skies with sulphur dioxide alone are enough to raise serious questions about whether such programs should even be allowed to proceed.
The following health effects are linked with exposure to sulphur.
- Neurological effects and behavioral changes
- Disturbance of blood circulation
- Heart damage
- Effects on eyes and eyesight
- Reproductive failure
- Damage to immune systems
- Stomach and gastrointestinal disorder
- Damage to liver and kidney functions
- Hearing defects
- Disturbance of the hormonal metabolism
- Dermatological effects
- Suffocation and lung embolism
According to the LennTech website, "Laboratory tests with test animals have indicated that sulfur can cause serious vascular damage in veins of the brains, the heart and the kidneys. These tests have also indicated that certain forms of sulfur can cause foetal damage and congenital effects. Mothers can even carry sulfur poisoning over to their children through mother milk. Finally, sulfur can damage the internal enzyme systems of animals."
Even the lead scientist heading up the latest experiment in the UK, Mark Watson, admits that injecting sulphur into the atmosphere could lead to "acid rain, ozone depletion or weather pattern disruption."
The Canada-based Action Group on Erosion, Technology and Concentration (ETC) responded to the announcement of the test by calling on the British government to shut down the research. "This experiment is only phase one of a much bigger plan that could have devastating consequences, including large changes in weather patterns such as deadly droughts," the group said in a written statement.
Rutgers University meteorologist Alan Robock has also "created computer simulations indicating that sulfate clouds could potentially weaken the Asian and African summer monsoons, reducing rain that irrigates the food crops of billions of people."
Of course, killing millions of people in the third world through unintended consequences of geoengineering is probably seen as a price worth paying for the likes of John P. Holdren, White House science czar and strong geoengineering proponent, given that he and other luminaries in the global warming movement want to see global population drastically reduced by means of a "planetary regime" carrying out forced sterilization and other draconian population control measures.
Of course, many would argue that geoengineering projects involving spraying chemicals at high altitudes are already underway through chemtrails. Indeed, last year scientists admitted that emissions from aircraft are forming artificial clouds that block out the sun, but ludicrously claimed that the effect was a natural phenomenon.
In 2008, a KSLA news investigation found that a substance that fell to earth from a high altitude chemtrail contained high levels of Barium (6.8 ppm) and Lead (8.2 ppm) as well as trace amounts of other chemicals including arsenic, chromium, cadmium, selenium and silver. Of these, all but one are metals, some are toxic while several are rarely or never found in nature.
The newscast focuses on Barium, which its research shows is a "hallmark of chemtrails." KSLA found Barium levels in its samples at 6.8 ppm or "more than six times the toxic level set by the EPA." The Louisiana Department of Environmental Quality confirmed that the high levels of Barium were "very unusual," but commented that "proving the source was a whole other matter" in its discussion with KSLA.
KSLA also asked Mark Ryan, Director of the Poison Control Center, about the effects of Barium on the human body. Ryan commented that "short term exposure can lead to anything from stomach to chest pains and that long term exposure causes blood pressure problems." The Poison Control Center further reported that long-term exposure, as with any harmful substance, would contribute to weakening the immune system, which many speculate is the purpose of such man-made chemical trails.
Indeed, barium oxide has cropped up repeatedly as a contaminant from suspected geoengineering experimentation.
Geoengineering programs conducted by politically-driven scientists who have proven adept at covering up and ignoring contradictory evidence in the global warming debate should not be empowered with the authority and funding to conduct such dangerous tests without a major public debate about the potential consequences.
The fact that individuals who embrace the dogma of global warming alarmism, a movement which has already killed more people in the third world than man-made climate change through starvation linked to food price hikes caused by biofuel production, are now preparing to launch government-approved programs that could drastically alter weather patterns and cause droughts is frightening.
More sober minds in the scientific community need to become far more public in their opposition to geoengineering to prevent a very real form of man-made climate change that could irrevocably damage our planet for decades to come.
Paul Joseph Watson is the editor and writer for Prison Planet.com. He is the author of Order Out Of Chaos. Watson is also a regular fill-in host for The Alex Jones Show.
This article was posted: Friday, September 16, 2011 at 4:49 am
Soonchunhyang Med Sci, Volume 27(2), 2021
Cho: Torsion of a Wandering Spleen Treated by Laparoscopic Surgery: A Case Report
ABSTRACT
The spleen is an organ located in the upper left portion of the abdomen. A wandering spleen is a spleen that has shifted to another part of the abdomen rather than remaining in the left upper quadrant. Wandering spleen is a rare clinical condition and can lead to hilar torsion and subsequent infarction requiring emergency surgery. The author presents a case of torsion of a wandering spleen in a 34-year-old female presenting with abdominal pain. The patient underwent emergent laparoscopic splenectomy. She had an uncomplicated postoperative course and recovered well.
INTRODUCTION
A wandering spleen is a rare variation in which the spleen is located in a part of the abdomen other than the left upper quadrant [1]. A normal spleen is attached to several ligaments that fix it to the region below the left dome of the diaphragm. As a result of an absence or loosening of these splenic ligaments, the spleen can move to other parts of the abdomen and rotate on its own pedicle. This condition is very rare, with an incidence of less than 0.2% [2], and makes the spleen susceptible to acute torsion and infarction, with splenic vein obstruction leading to the formation of gastric varices [3]. Although many conservative treatment options have been reported for the treatment of wandering spleen, the most effective and safest option is accepted to be surgery [4].
Herein, we present a 34-year-old female with a wandering spleen with splenic torsion that was successfully treated by laparoscopic splenectomy.
CASE REPORT
A 34-year-old female presented to the emergency department with a 2-week history of abdominal pain with nausea. She reported 10 days of mild left upper quadrant abdominal pain, with a sudden increase in intensity over the last few days. The pain was initially intermittent and then became continuous and severe before the patient was admitted to the hospital. There was no history of fever or trauma. She had undergone laparoscopic appendectomy for acute appendicitis 4 years earlier and had given birth to a child 2 months earlier.
On arrival at the emergency department, the patient was afebrile (36.4°C), pulse rate of 98 beats/min, blood pressure of 120/85 mm Hg, respiratory rate of 20 breaths/min, and pulse oximetry of 100% on room air. She appeared in moderate distress from pain; however, no pallor, jaundice, finger clubbing, or lymphadenopathy was noticed. On abdominal physical examination, abdominal distension was noted. There was diffuse abdominal tenderness more prominent in the epigastric area, but no definite palpable mass was detected.
Laboratory parameters showed a white blood cell count of 6.83×10³/μL, hemoglobin level of 13.5 g/dL, platelet count of 193×10³/μL, C-reactive protein level of 0.39 mg/L (reference, 0.0–0.5 mg/L), erythrocyte sedimentation rate of 6 mm/hr (reference, 0–27 mm/hr), serum amylase level of 102 U/L (reference, 28–100 U/L), and serum lipase level of 31 U/L (reference, 7–60 U/L). Liver function tests and electrolytes were normal.
An urgent computed tomography (CT) scan of the abdomen and pelvis with contrast was performed, which demonstrated findings consistent with splenomegaly and developed splenic infarction, with swirling of the splenic vascular pedicle (Fig. 1). She was taken emergently to the operating room for laparoscopic exploration with splenectomy. A total of three trocars were used in the operation. Intraoperatively, her spleen was noted to be congested and twisted in the left abdomen (Fig. 2). Laparoscopic splenectomy was performed successfully, and pathologic examination of the specimen showed no diagnostic abnormalities. Her postoperative course was uneventful, and she was discharged on the 5th postoperative day. Six months after the operation, she visited the outpatient clinic for routine examination. She was healthy and performing the usual activities of daily living. The incision scars 6 months after surgery had no specific features (Fig. 3).
The patient provided oral informed consent for the publication of clinical details and images.
DISCUSSION
The wandering spleen is a rare entity that was first described in 1854 by the Polish physician Jozef Dietl [5]. The spleen is normally located in the left upper quadrant of the abdomen, fixed by several ligaments. These ligaments are derived from the dorsal mesogastrium. Among them, the splenorenal ligament anchors the spleen to the left kidney. The gastrosplenic ligament and splenocolic ligament hold the spleen to the posterior aspect of the stomach and the left colic flexure, respectively. The phrenicosplenic ligament anchors the spleen to the left diaphragm [6]. If these ligaments have not developed or fail to perform their function, the spleen wanders into the abdominal cavity.
The etiology of wandering spleen includes congenital causes, such as inadequate fusion between the abdominal wall and dorsal mesogastrium during embryogenesis, and acquired causes such as trauma, ligament laxity, and pregnancy [7]. Wandering spleen from congenital causes is common in children between 3 months and 10 years of age. The acquired wandering spleen usually occurs in young women aged 20–40 years and is 10 times more frequent in multiparous women due to laxity of the abdominal wall and hormonal changes resulting from pregnancy [8].
When diagnosing patients suspected of wandering spleen with torsion, a contrast-enhanced CT scan is considered as the modality of choice [6,9]. CT findings could give clinicians various information about the size and position of the spleen, the appearance of the splenic pedicle, and the viability of the spleen. This information is essential for making a decision about performing the emergency operation.
In young and asymptomatic patients, preservation of the spleen is recommended; therefore, splenopexy is considered a reasonable surgical option for these patients [10]. On the contrary, emergency splenectomy should be performed when there is obvious evidence of splenic infarction. Currently, minimally invasive approaches such as laparoscopic splenopexy and splenectomy are widely performed owing to advances in laparoscopic instruments, devices, and techniques. Therefore, in the current case, splenectomy was performed because splenic infarction was revealed on the CT scan, and a laparoscopic approach was chosen to take advantage of minimally invasive surgery.
Torsion of a wandering spleen is rare among patients presenting with acute abdominal pain and is extremely difficult to detect early. A contrast-enhanced CT scan is crucial in making an accurate and timely diagnosis. When torsion and infarction of a wandering spleen are diagnosed, splenectomy is the treatment of choice. Laparoscopic splenectomy is feasible and less invasive than open splenectomy. We report a rare case of a wandering spleen with splenic torsion that required an emergency operation and was successfully treated by laparoscopic splenectomy.
CONFLICT OF INTEREST
No potential conflict of interest relevant to this article was reported.
Fig. 1
Computed tomography scan showed swirling of the splenic vascular pedicle (A, white arrow) and splenomegaly and infarction of spleen (B, white arrow). Coronal image (C, D)
Fig. 2
Intraoperative image showing the torsion of splenic vascular pedicle and consequent several engorged veins (A) and enlarged spleen (B).
Fig. 3
Incision scars 6 months after the operation. a) 12-mm trocar site for camera, b) 5-mm trocar site, c) 12-mm trocar site, and d) previous incision scar from laparoscopic appendectomy (4 years ago).
REFERENCES
1. Goksu M, Baykan AH. Torsion of wandering spleen: a case report. J Emerg Med 2020;58: e189-92.
2. Viana C, Cristino H, Veiga C, Leao P. Splenic torsion, a challenging diagnosis: case report and review of literature. Int J Surg Case Rep 2018;44: 212-6.
3. Chue KM, Tan JKH, Pang NQ, Kow AWC. Laparoscopic splenectomy for a wandering spleen with resultant splenomegaly and gastric varices. ANZ J Surg 2020;90: 2124-5.
4. Termos S, Redha
How Might Google Extract Entity Relationship Information from Q&A Pages?
Posted in: Semantic Web
How helpful might question and answer websites be in providing a search engine with information about entities, and entity-relationship information about those entities and other entities and properties of entities?
A recently granted patent from Google looks at such potential sources of information and tells us more.
One of the inventors of this patent, Evgeniy Gabrilovich, worked on Google's knowledge vault project which talks about things such as extracting relationship information from text on the web about entities. It's worth looking at a presentation that was prepared during the development of the knowledge vault project to see what it says about extracting entity-relationship information from the Web. That can be found at: Constructing and Mining Web-scale Knowledge Graphs
Candidate Relationships Between Entities
That patent, granted to Google on October 22, 2019, tells us how such sites can be used as resources for information about relationships between entities. A page may ask, for example, "Who is Barack Obama married to?" and may also include the answer, "Michelle Obama," on it as well.
The patent points out that such pages may identify entity relationships by looking at the question involved:
A relationship type is determined based on the question text, for example, by determining that the terms "married to" in the question text likely indicate a spousal relationship between an entity indicated in the question text and an entity indicated in the answer text. Entities are also identified from the question text and the answer text. For example, the computer system can identify the entity "Barack Obama" from the question text, and the entity "Michelle Obama" from the answer text.
Having identified a relationship type and the two entities identified by the question and answer text, a candidate relationship is determined. For example, the determined candidate relationship may be a spousal relationship between the entities "Barack Obama" and "Michelle Obama."
Moving from Possible Answers to Candidate Answers
The patent tells us that a Q&A site may possibly indicate a number of potential answers to a question about a spousal relationship with Barack Obama, which could include "Michelle Obama," "Hillary Clinton," or "Laura Bush."
How might Google decide which candidate answer is most likely?
Google may score each of the candidate relationships based upon a "frequency with which the candidate relationship was determined from the webpages of Q&A Websites." The patent tells us that:
The candidate relationship having the highest score is selected as the most likely valid relationship for the particular relationship type and entity. For example, based on determining that the candidate spousal relationship between "Barack Obama" and "Michelle Obama" is the most frequently occurring spousal relationship for the entity "Barack Obama," the computer system determines that a spousal relationship exists between "Barack Obama" and "Michelle Obama." The computer system can then establish, in an entity-relationship model, a spousal relationship between the entity "Barack Obama" and the entity "Michelle Obama."
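To make the scoring idea concrete, here is a small illustrative sketch in Python of counting candidate triples and keeping the most frequent one; the example triples and the plain counter are assumptions for illustration, not the patent's actual implementation:

```python
# Minimal sketch of frequency-based scoring: count how often each
# (subject, relation, object) candidate was extracted from Q&A pages and keep the
# most frequent object for each (subject, relation) pair. Candidates are illustrative.
from collections import Counter

candidates = [
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Barack Obama", "spouse", "Hillary Clinton"),
    ("Barack Obama", "spouse", "Michelle Obama"),
    ("Barack Obama", "spouse", "Laura Bush"),
]

scores = Counter(candidates)

def best_relationship(subject, relation):
    """Return the highest-scoring candidate triple for a (subject, relation) pair."""
    matching = {triple: n for triple, n in scores.items()
                if triple[0] == subject and triple[1] == relation}
    return max(matching, key=matching.get) if matching else None

print(best_relationship("Barack Obama", "spouse"))
# ('Barack Obama', 'spouse', 'Michelle Obama')
```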
What is innovative about the process described in this patent? It tells us that these steps are:
1. It involves the actions of obtaining a resource
2. Identifying the first portion of text of the resource that is characterized as a question
3. The second part of text of the resource that is characterized as an answer to the question
4. Identifying an entity that is referenced by one or more terms of the first portion of text that is characterized as the question
5. A relationship type that is referenced by one or more other terms of the first portion of the text that is characterized as the question
6. An entity that is referenced by the second portion of text that is characterized as the answer to the question
7. Adjusting a score associated with a relationship of the relationship type for the entity that is referenced by the one or more terms of the first portion of text that is characterized as the question and the entity that is referenced by the second portion of text that is characterized as the answer to the question
Entity Relationship Model Process
This process uses question and answer (Q&A) websites
It looks at questions as templates to identify the first entity and the relationship type displayed in the question, where each template on the Q&A site may be associated with a particular relationship type.
This entity relationship information patent can be found at:
Information extraction from question and answer websites
Inventors: Wei Lwun Lu, Denis Savenkov, Amarnag Subramanya, Jeffrey Dalton, Evgeniy Gabrilovich, Eugene Agichtein
Assignee: Google LLC
US Patent: 10,452,694
Granted: October 22, 2019
Filed: December 20, 2017
Abstract
Methods, systems, and apparatus for obtaining a resource, identifying a first portion of text of the resource that is characterized as a question, and a second part of text of the resource that is characterized as an answer to the question, identifying an entity that is referenced by one or more terms of the text that is characterized as the question, a relationship type that is referenced by one or more other terms of the text that is characterized as the question, and an entity that is referenced by the text that is characterized as the answer to the question, and adjusting a score for a relationship of the relationship type for the entity that is referenced by the one or more terms of the text that is characterized as the question and the entity that is referenced by the text that is characterized as the answer to the question.
Entity Relationship Information Models
The focus of this patent is on building an entity-relationship model that specifies relationships determined from Q&A website resources.
This system includes:
A Q&A resource database
A Q&A resource selector
A Q&A classifier
A sentence parser
An entity identifier
A relationship identifier
An aggregator
A database of candidate relationships
A relationship selector
An entity-relationship model.
Entities in the entity-relationship model may be represented as nodes, with relationships between entities represented as edges. The confidence score attached to each relationship is an indication of the likely accuracy of that relationship being true.
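As a toy illustration of such a model, the entities, relationship edges, and confidence scores could be held in something as simple as a nested dictionary; the structure below is purely illustrative, since the patent does not commit to any concrete data structure:

```python
# Toy representation of the entity-relationship model described above: entities as
# nodes, relationships as edges, each edge carrying a confidence score.
from collections import defaultdict

graph = defaultdict(dict)  # graph[subject][(relation, object)] = confidence

def add_relationship(subject, relation, obj, confidence):
    """Record an edge from subject to obj, labeled with relation and a confidence."""
    graph[subject][(relation, obj)] = confidence

add_relationship("Barack Obama", "spouse", "Michelle Obama", 0.97)
add_relationship("Barack Obama", "born_in", "Honolulu", 0.91)

print(graph["Barack Obama"])
# {('spouse', 'Michelle Obama'): 0.97, ('born_in', 'Honolulu'): 0.91}
```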
When extracting entity-relationship information from Q&A website resources, this system may look at a Q&A resource database that includes multiple resources from Q&A websites.
Those resources can include:
- A number of webpages from Q&A websites, such as archived versions of the webpages from Q&A websites
- Metadata relating to webpages of Q&A websites
- Documents accessible at Q&A websites
- Images accessible at Q&A websites
- Videos accessible at Q&A websites
- Audio accessible at Q&A websites
- Other resources associated with or accessible at Q&A websites
The Q&A resource database can also include resources from sources other than Q&A websites, such as:
- One or more resources from forum websites
- Social network platforms
- Frequently asked questions (FAQ) websites or FAQ webpages
- Informational websites
- Other sources where questions and answers are available
When this question identifier is looking for questions and answers that identify entities and relationships between them, it may start parsing text on a Q&A page to find the presence of certain characters or strings of characters, such as a question mark. It may also look for words or phrases that indicate question text, such as:
- "I was wondering"
- "I am asking"
- "question"
- "who"
- "what"
- "where"
- "when"
- "why"
- "how"
- etc.
In the same way, when answers are looked for, text on pages may be parsed to find words that might indicate answer text, such as the following (a rough sketch of this kind of indicator-based classification appears after the list):
- "I know"
- "I believe"
- "I think"
- "The answer is"
- "answer"
- etc.
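A rough sketch of how indicator words like these could drive a first-pass question/answer classifier is shown below; the cue lists, the precedence given to the question check, and the word-boundary matching are simplifying assumptions, not the patent's classifier:

```python
# Rough sketch: label a block of text as a likely question or answer using the
# indicator words listed above. The word lists and rules are illustrative
# simplifications, not the patent's actual classifier.
import re

QUESTION_CUES = ["i was wondering", "i am asking", "question",
                 "who", "what", "where", "when", "why", "how"]
ANSWER_CUES = ["i know", "i believe", "i think", "the answer is", "answer"]

def classify(text: str) -> str:
    t = text.lower()
    if "?" in t or any(re.search(r"\b" + re.escape(cue) + r"\b", t) for cue in QUESTION_CUES):
        return "question"
    if any(cue in t for cue in ANSWER_CUES):
        return "answer"
    return "other"

print(classify("Who is Barack Obama married to?"))           # question
print(classify("I believe the answer is Michelle Obama."))   # answer
```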
The part of this process that involves parsing text on a page relies on a natural language processing approach that tags parts of speech:
As an example, the sentence parser may receive the question text, "Who is Barack Obama married to?" and may annotate the question text as "WHO/pronoun IS/verb BARACK OBAMA/noun MARRIED/adjective TO/verb?" Similarly, the sentence parser may receive the answer text "Michelle Obama" and may annotate the answer text as "MICHELLE OBAMA/noun." The sentence parser may further determine a class or hypernym of one or more grammatical units in the annotated texts, for example, to determine that the terms "Barack Obama" constitute a "person" noun class and that the terms "Michelle Obama" also constitute a "person" noun class.
Having parsed the question and answer texts, the sentence parser provides the annotated question and answer texts to the entity identifier and relationship identifier. In alternate implementations, the question text and/or answer text may be provided to the entity identifier and relationship identifier without processing by the sentence parser. In such implementations, the entity identifier and/or relationship identifier may perform operations similar to those performed by the sentence parser or may identify entities or relationships from the question text and/or answer text without the question text or answer text being annotated. In such instances, the Q&A classifier can provide the question and answer texts to the entity identifier and relationship identifier.
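As an illustration of the kind of tagging and entity spotting the quoted passage describes, the sketch below uses the off-the-shelf spaCy library as a stand-in for the patent's sentence parser; the relationship cue table is an invented example, and the exact entity labels depend on the model used:

```python
# Sketch of the parsing step using spaCy as a stand-in for the patent's sentence
# parser: tag tokens, pull out named entities, and look for relationship cue words.
# The cue-word table is an illustrative assumption.
import spacy

# requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

RELATION_CUES = {"married": "spouse", "born": "place_of_birth", "founded": "founder"}

question = nlp("Who is Barack Obama married to?")
answer = nlp("Michelle Obama")

question_entities = [(ent.text, ent.label_) for ent in question.ents]
answer_entities = [(ent.text, ent.label_) for ent in answer.ents]
relation = next((RELATION_CUES[tok.lower_] for tok in question
                 if tok.lower_ in RELATION_CUES), None)

print(question_entities, relation, answer_entities)
# e.g. [('Barack Obama', 'PERSON')] spouse [('Michelle Obama', 'PERSON')]
```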
The question text and the answer text that are identified can be used to determine the type of entity relationship being asked about and answered on a Q&A page.
Another example of how an answer might be parsed from question text and answer text:
For example, the entity identifier may receive the question text "Who is Barack Obama married
The total number of hair follicles for an adult human is estimated
at 5 million, with 1 million on the head, of which 100,000 alone are on
the scalp. In humans, the only external regions of
skin devoid of hair follicles are the palms of the hands and soles
of the feet. The basic hair follicle structure remains essentially
the same throughout the range of mammalian species with modifications
for specialized functions. The hair follicle can be recognized as
a separate entity within the skin, with formation and maintenance
based on interaction between dermal and epidermal components.
Under the influence of the DP, epidermal cell differentiation
during anagen produces a keratinized hair fiber and associated products.
The source epidermal cells, called matrix cells, that lie in the
immediate vicinity of the dermal papilla are a living, actively
proliferating group of cells which differentiate and become keratinized
to form the hair cortex (Co) and surrounding hair cuticle (Hc) of
the hair shaft at the center of which is situated the medulla (M).
Cells around the hair shaft comprise the inner root sheath (IRS),
which can be divided into three layers, the cuticle (Cu), Huxley
layer (Hu) and Henle layer (He), based on structure, patterns of
keratinization and incorporation of a product called trichohyalin.
The IRS breaks down at the level of the sebaceous gland, leaving
only the hair cortex and surrounding cuticle to protrude above the skin;
these are the main differentiated layers in a mature anagen hair follicle.
The hair follicle penetrates the dermal layer of the skin
composed of fibroblast cells and collagen connective tissue
interspersed with blood vessels, sweat glands and sensory
nerves. The bulb region sits in the subcutaneous (adipose
fat) tissue layer.
Image showing the layers in a hair follicle from the outside (left) to the center.
It is the dermal papilla (DP) which directs and dictates the embryonic
generation of a hair follicle and it also retains this instructive
ability throughout the life of the hair follicle. The DP presents
as a healthy "pear" shape in normal hair follicles. As the name
suggests, derived from the dermis mesenchyme the DP consists of
a highly active group of cells shown to be capable of inducing follicle
development from the epidermis and production of hair fiber (Oliver
1966a, Oliver 1966b, Oliver 1967).
The DP consists of a small group of fibroblast cells derived from
the mesoderm. The cells are held close to the base of the epidermal
derived cells that produce the hair fiber and root sheaths but there
is a thin layer, called the basement membrane (or basement lamina,
or glassy membrane) that separates the DP cells from the hair fiber/sheath
cells. In other words, the basement membrane provides a physical
dividing line between cells descendant from embryonic ectoderm (epidermis)
and embryonic mesoderm (dermis). This physical barrier has a role
to play in our immunological protection. Holding the DP cells in
place is a capsule that surrounds the DP cells in a cup and extends
up the sides of the hair follicle to the epidermis. The whole follicle
structure sits on a pad of fibrous tissue called the Arao-Perkins
body. Nerve fibers and blood vessels penetrate through small gaps
in the base of the hair capsule and invade into the DP area.
The bigger the DP, the more cells it has, then the thicker the
hair fiber that the hair follicle produces. The DP cells are very
active with lots of cytoplasm when the hair follicle is producing
a hair fiber although the DP cells do not multiply and proliferate
unlike the hair producing cells above the DP. When a hair follicle
is not producing a fiber the DP cells lose much of their cytoplasm
and become inactive.
Oliver et al revealed that the removal of the DP stops
hair growth but that the lower third of the dermal sheath is capable
of supplying new cells for regeneration of a new DP by infiltrating
and transforming at the site of the original DP with subsequent
hair follicle regrowth (Oliver 1966a, Oliver 1966b, Oliver 1966c,
Jahoda 1992). With removal of more than the lower third of a hair
follicle, reformation of a DP is unable to occur and the hair follicle
is effectively permanently destroyed. The DP cells retain their
embryonic functional abilities and are able to induce new hair fiber
growth in mature, adult skin when implanted into previously deactivated
hair follicles and in close association with ORS epidermal cells
(Oliver 1967, Horne 1986).
DP cells can also interact with adult epidermis to induce the
development of new hair follicles (Jahoda 1990). In the established
hair follicle the DP cells act in conjunction with epidermal cells
via mechanisms similar to those in embryogenesis to permit hair
follicle cycling through hair production and resting phases. DP
cells are almost unique in maintaining their embryogenic regenerative
properties in adults making them potentially attractive for investigation
with a view to gaining an insight into organ/limb regeneration and
into the bulb region of a hair follicle.
The hair fiber is the core part of any hair follicle. Epidermal
derived cells close to the DP remain undifferentiated cells, called
matrix cells, that focus on multiplying and proliferating to produce
more cells. Those cells made in the center of the hair follicle
are destined to become part of the hair fiber and are called cortex
(cortical) cells. As the cells multiply the constant stream of production
pushes the cells upwards towards the skin surface. As they move
up the hair follicle they begin to differentiate into particular
cell types. The cortex cells change from a round into a flattened
appearance. They are squeezed together into layers (lamella). If
the hair follicle contains melanocyte cells then melanin pigment
is incorporated into the cortex cells. These cortex cells become
keratinized and harden. As they do so it becomes impossible for
the cells to function properly and the cells die. The keratinized
cells are then pushed away from the hair bulb region and upwards
as new cells come in behind. The cortex cells are now part of the
dead keratinized fiber.
Some large hair follicles have a central strand of cells that
are loosely organized and not packed together. This tube in the
very center of the hair fiber is called the medulla.
Around the outside of hair fiber we see a cuticle. The cuticle
is made up of more keratinized cells but they arrange themselves
in a slightly different way to cortex cells. As the cuticle cells
are produced, they lay over the cortex cells and flatten into an
overlapping roof tile fashion. Cuticle cells become progressively
flatter as they get older. As with cortex cells, when they keratinize
the cell can no longer function properly and dies.
The Outer Root Sheath (ORS) is distinct from other epidermal components
of the hair follicle being continuous with the epidermis. The "bulge"
region in the ORS is the site at which the arrector pili muscle
is attached. The arrector pili muscle is connected to the epidermis
at the other end. This is the muscle that makes hair stand erect
and produces goose bumps in your skin when you are cold. The contraction
of the muscle pulls on both the hair to make it erect and pulls
on the skin making a bumpy surface.
The bulge region is believed to be the storage area for hair follicle
stem cells. Hair follicles go through a cycle of growth and rest
(below). With each renewed attempt to produce hair fiber, the hair
follicle must obtain a source of cells to form the matrix cell population
that make hair fibers. The source of these cells is believed by
some dermatologists to be the bulge region. Other dermatologists
suggest that stem cells are not present in the bulge region at all
and that new matrix cells are obtained from the root sheath.
Also extending from the ORS is the sebaceous gland. It consists
of a few cells focused on production of oils (lipids). These cells
are large with their cytoplasm filled with vacuoles containing lipid.
The cells are often divided into several lobes of the sebaceous
gland connected together by a sebaceous duct. The duct has a single
opening into the tube where the hair fiber sits.
The ORS surrounds the hair fiber and inner root sheath until deep
into the dermis. Just above the bulb region containing the dermal
papilla the ORS tapers and ends so the ORS does not entirely cover
the hair fiber and inner root sheath. The ORS consists of several
layers of cells that can be identified by their unique ultrastructural characteristics.
The inner root sheath (IRS) is produced by matrix cells sitting
above the hair follicle. While those matrix cells in the center
of a hair follicle proliferate and produce the hair fiber and cuticle,
the matrix cells towards the periphery of a hair follicle proliferate
and produce the IRS. As with
Climate of the Past (Clim. Past), 15(2), 405–421, 2019. https://doi.org/10.5194/cp-15-405-2019. Copernicus Publications, Göttingen, Germany.
Late Miocene–Pliocene climate evolution recorded by the red clay cover on the Xiaoshuizi planation surface, NE Tibetan Plateau
Xiaomiao Li, Tingjiang Peng, Zhenhua Ma, Meng Li, Zhantao Feng, Benhong Guo, Hao Yu, Xiyan Ye, Zhengchuang Hui, Chunhui Song, and Jijun Li
Affiliations: MOE Key Laboratory of Western China's Environmental Systems, College of Earth and Environmental Sciences, Lanzhou University, Lanzhou 730000, China; School of Earth Sciences, Key Laboratory of Western China's Mineral Resources of Gansu Province, Lanzhou University, Lanzhou 730000, China; College of Geography Science, Nanjing Normal University, Nanjing 210023, China
Correspondence: Jijun Li (_EMAIL_)
Received: 20 June 2018; Discussion started: 4 July 2018; Revised: 26 January 2019; Accepted: 17 February 2019; Published: 11 March 2019
Copyright: © 2019 Xiaomiao Li et al. This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit _URL_. The article is available from _URL_, and the full-text article is available as a PDF file from _URL_.
Abstract
The Pliocene climate and its driving mechanisms have attracted substantial scientific interest because of their potential as an analog for near-future climates. The late Miocene–Pliocene red clay sequence of the main Chinese Loess Plateau (CLP) has been widely used to reconstruct the history of interior Asian aridification and the Asian monsoon. However, red clay sequences deposited on the planation surface of the Tibetan Plateau (TP) are rare. A continuous red clay sequence was recently discovered on the uplifted Xiaoshuizi (XSZ) planation surface in the Maxian Mountains, northeastern (NE) TP. In this study, we analyzed multiple climatic proxies from the XSZ red clay sequence with the aim of reconstructing the late Miocene–early Pliocene climate history of the NE TP and to assess regional climatic differences between the central and western CLP. Our results demonstrate the occurrence of minimal weathering and pedogenesis during the late Miocene, which indicates that the climate was arid. We speculate that precipitation delivered by the paleo East Asian summer monsoon (EASM) was limited during this period and that the intensification of the circulation of the westerlies resulted in arid conditions in the study region. Subsequently, enhanced weathering and pedogenesis occurred intermittently during 4.7–3.9 Ma, which attests to an increase in effective moisture. We ascribe the arid–humid climatic transition near 4.7 Ma to the expansion of the paleo-EASM. The warming of the high northern latitudes in response to the closure of the Panama Seaway may have been responsible for the thermodynamical enhancement of the paleo-EASM system, which permitted more moisture to be transported to the NE TP.
Introduction
The Pliocene, including the Zanclean (5.33–3.60 Ma) and Piacenzian (3.60–2.58 Ma) stages, is one of the most intensively studied intervals of the pre-Quaternary in climate change research. The Zanclean climate was generally warm and wet and is often used as an analog for near-future climate conditions in terms of carbon dioxide levels, ranging from 280–415 ppm (Tripati et al., 2009; Pagani et al., 2010), and comparable temperatures in the tropical region (Herbert et al., 2010, 2016). On the other hand, the Zanclean was markedly different from today, although several critical changes in thermohaline and atmospheric circulation towards modern conditions were occurring (Haug et al., 2005; Lawrence et al., 2006; Chaisson and Ravelo, 2000). For example, the early Pliocene global mean temperature was approximately 4 C warmer (Brierley and Fedorov, 2010), and the sea levels are estimated to have been 25 m higher, than today (Dowsett et al., 2010). Temperatures at high northern latitudes were considerably higher and therefore continental glaciers were almost absent from the Northern Hemisphere (Ballantyne et al., 2010; Dowsett et al., 2010). The zonal and meridional sea surface temperature gradients in the Northern Hemisphere were weak but gradually became more intensified, changing towards the modern state, which has a much more pronounced spatial temperature contrast (Fedorov et al., 2013; Brierley et al., 2009; Brierley and Fedorov, 2010). The low meridional surface temperature gradient resulted in weaker meridional circulation during this interval (Fedorov et al., 2013; Brierley et al., 2009), and the minor east–west sea surface temperature contrast in the tropical Pacific during this interval is believed to have given rise to a permanent El Niño Southern Oscillation (Lawrence et al., 2006); however, whether permanent El Niño–like conditions were sustained during the Pliocene is controversial (Wara et al., 2005; Watanabe et al., 2011; Zhang et al., 2014). In addition, the episodic uplift of the TP (Li et al., 2015; Zheng et al., 2000; Fang et al., 2005a, b) and gradual closure of the Panama Seaway (Keigwin et al., 1978; O'Dea et al., 2016) were underway. The former had a substantial climatic impact (An et al., 2001; Ding et al., 2001; Liu et al., 2014) and the latter resulted in the reorganization of the global thermohaline circulation system (Haug et al., 1998, 2001). These features imply a spatial change in the organization of the global climate system from the early Pliocene to the present. In this context, it is important to characterize the response of regional climates to these major global climatic and tectonic changes.
East Asia is one of the key regions for studying the aridification of the Asian interior and the Asian monsoon evolution, which are tightly linked to the uplift of the TP, regional climate change, and the evolution of global temperature and ice volume (An et al., 2001; Ding et al., 2001; Li et al., 2008; Clift et al., 2008; Nie et al., 2014; Ao et al., 2016; Sun and Liu, 2006; Sun et al., 2017; Chang et al., 2013; Liu et al., 2014). Previous research has revealed that red clay was widely deposited across the CLP since the late Miocene, indicating that Asian aridification was enhanced (Guo et al., 2001; Song et al., 2007; An et al., 2014; Ao et al., 2016; Li et al., 2017). In the eastern and central CLP, where the climate is dominated by the East Asian monsoon, paleontological evidence, mineral magnetic parameters, and geochemical records from the red clay indicate dry climatic conditions during the late Miocene but generally wet climatic conditions during the early Pliocene (Wang et al., 2006; Guo et al., 2001; Wu et al., 2006; Song et al., 2007; Sun et al., 2010; An et al., 2014; Ao et al., 2016). The most controversial climatic change occurred during the interval from 4.8–4.1 Ma, for which climate reconstructions using different proxies indicate conflicting paleoenvironmental trends. For example, field observations and pollen records indicate an intensified summer monsoon intensity, but low magnetic susceptibility values are more consistent with arid rather than wet climatic conditions (Ding et al., 2001; Ma et al., 2005; Song et al., 2007; Sun et al., 2010). It is thought that dissolution of ferrimagnetic minerals and iron reduction resulting from high precipitation significantly affected the climatic significance of magnetic susceptibility records during this period (Ding et al., 2001). In addition to the East Asian monsoon, the westerlies also had an impact on the climate of East Asia; however, the patterns of climate change in the westerlies-dominated regions were different from the eastern and central CLP during the early Pliocene. Geochemical, stratigraphic, and pollen evidence from the Qaidam and Tarim basins has demonstrated that aridification intensified since the early Pliocene (Fang et al., 2008; Sun and Liu, 2006; Sun et al., 2017; Chang et al., 2013; Liu et al., 2014). Although the general climatic trends of the main CLP and central Asia during this period are well-recorded, paleoclimatic changes in the NE TP, which is at the junction of the zones of westerly and monsoonal influences, remain unclear. Therefore, determining the climatic conditions of the NE TP during the early Pliocene not only improves our understanding of the pattern of regional climate change, but may also provide insights into the responses of the paleo-EASM and the westerlies to TP uplift and changes in the global climate system.
A continuous red clay sequence was recently discovered on the uplifted XSZ planation surface in the NE TP and
There are eight ecoregions in Algeria:
- Mediterranean woodlands and forests
- Mediterranean conifer and mixed forests
- Mediterranean dry woodlands and steppe
- Saharan halophytics
- North Saharan steppe and woodlands
- Sahara desert
- West Saharan montane xeric woodlands
- South Saharan steppe and woodlands
The Mediterranean woodlands and forests ecoregion stretches from the coastal plains to the hills of northern Morocco, Algeria and Tunisia, and eventually surrounds the Atlas Mountains. To the north is the Alboran Sea, the westernmost element of the Mediterranean Sea. The variety of substrates and climates leads to a diverse mix of vegetation including holm oak forests, cork oak forests, wild olive and carob woodlands, as well as extensive Berber thuya forest. This old, endemic North African conifer species is representative of the great diversity and endemism of both flora and fauna in this ecoregion. Reptile diversity is high and the region harbors charismatic large mammals, including the rare and endangered Barbary leopard. Unfortunately, this region contains high human populations and widespread deforestation.
Scattered through North Africa (and southernmost mountains of Cádiz and Málaga in Spain), Mediterranean conifer and mixed forests occur on high elevations of major mountain massifs. Relict stands of these fir and pine forests are endemic, and survive in isolated locations in areas of limited areal extent. The flowering plants on these ranges have high rates of endemism, with over 450 strictly endemic plant species, as a result of the long isolation of Holarctic taxa at these high elevations. The faunal diversity is the greatest in the Palearctic realm, with a mixture of Palearctic, Afrotropical and more restricted North African taxa. Mammals found in this region include the only African endemic species of deer, Cervus elaphus barbarus, the Barbary leopard (Panthera pardus panthera), the Barbary macaque (Macaca sylvanus), and before its extinction in the 1930s, the Atlas lion (Panthera leo leo). One strict endemic bird species, the Algerian nuthatch (Sitta ledanti), is also found. These forests have been utilized since ancient Roman times, but deforestation has increased dramatically in the past century. Human impact is significant, compounded by the socio-economic instability of the Maghreb countries.
The Mediterranean dry woodlands and steppe ecoregion forms a buffer between the Mediterranean forest ecoregions and the Sahara Desert farther south. The ecoregion may have been partially forested in prehistory, but today scrub vegetation predominates. A number of narrowly endemic species of plants are found here, although there are few endemic vertebrates. The ecoregion is currently highly threatened by the change from nomadic pastoralism to settled agriculture and grazing. The overgrazing widely practised here is considered to be leading to increased desertification in the area.
Scattered across the Sahara, "Sebkhas" or "Chotts" are saline depressions in the desert that remain predominantly dry. Although flooding is rare, some of these areas flood annually. They are important habitats for small rodents and are important for grazing camels during dry seasons. The habitats of this ecoregion are not particularly threatened, as human populations are very small and most of the areas are too saline to be used for farming. Large mammals, however, have been hunted out from these areas. Woody resources, where available, are used by desert people. The only permanently populated part of the ecoregion is the Siwa Oasis in Egypt, which has permanent freshwater sources from large springs.
This ecoregion is restricted to some suitable semi-desert and desert locations in the northern portion of Africa, in Morocco, Algeria, Tunisia, Libya, Egypt, Western Sahara and Mauritania. The most extensive of the areas are Chott Melghir, Oued Rir Vally, Hodna, Sebkhat Tidikelet, Sebkha of Timimoun, Tefedest, Chott El Djerid and the Qattara Depression and Siwa Depression in Egypt.
This ecoregion forms the north and western border of the greater Sahara Desert region. Rainfall occurs during the cooler winter, nourishing a variety of plants that flower before the hot, dry summer. Compared to the South Saharan Steppe and Woodland, this ecoregion harbors a significant number of plant and small animal endemics. In the past the ecoregion also supported large numbers of desert-adapted African mammals, but many have been extirpated from the area due to decades (in some cases centuries) of over-hunting. Some of the remaining desert adapted species, such as the Dama gazelle and Houbara and Nubian bustards are still facing extreme hunting pressure, and in some areas they too have been extirpated.
This ecoregion extends across northern Africa and covers parts of Western Sahara, Mauritania, Morocco, Algeria, Tunisia, Libya, and Egypt. It is generally found inland of the coast, but stretches to the shore in areas where there is low rainfall. In Morocco, Algeria and Tunisia, this ecoregion forms a transition between the Mediterranean domain towards the north and the true desert in the south. The Saharan Halophytics ecoregion is also found scattered through this ecoregion in areas of suitable saline conditions.
The Sahara Desert is the largest hot desert in the world and occupies approximately ten percent of the African continent. The ecoregion includes the hyper-arid central portion of the Sahara where rainfall is minimal and sporadic. Although species richness and endemism are low, some highly adapted species do survive with notable adaptations. Only a few thousand years ago the Sahara was significantly wetter, and a significant large mammal fauna resided in this area. Climatic desiccation over the past 5000 years, and intense human hunting over the past 100 years, has obliterated most of these fauna. Now, in vast portions of the Sahara, merely rock, sand and sparse vegetation are found. The remnant large mammal fauna is highly threatened by ongoing over-hunting. An alternative name of the Sahara is the Great Desert.
From the Atlantic Ocean in the west, the greater Sahara stretches across Africa to the Red Sea and down to the highlands of Ethiopia, encompassing an area 9,100,000 square kilometres (km2). This ecoregion covers the central Sahara Desert, between 18° and 30° N, and has an area of 4,619,260 km2. The northern and southern margins of the Sahara, which receive greater rainfall and have greater vegetation cover, are described separately.
The flora of the central Sahara Desert is quite depauperate, and is estimated to include only 500 species. This is extremely low considering the huge extent of the area. It mainly consists of xerophytes and ephemeral plants (also known locally as Acheb), with halophytes in moister areas. The flora has one near-endemic family, a number of isolated monotypic genera of both wide and narrow distribution, and perhaps as many as 162 endemic species. The monotypic genera suggest a Tertiary origin with probable extinction of linking forms. Vegetation is very contracted along the wadis and the dayas with Acacia sp., Tamarix sp., and Calotropis procera. Where there is sufficient groundwater, hammadas are covered by Antirrhinum ramosissimum and Ononis angustissima.
The Sahara is a vast area of largely undisturbed habitat, principally sand and rock, but with small areas of permanent vegetation. The most degradation is found where water (oases, etc) is present. Here, habitats may be heavily altered by human activities. Previously existing tree cover has often been removed for fuel and fodder by nomadic pastoralists and traders.
Tassili-n-Ajjer, found largely in southeastern Algeria, comprises a majority of the ecoregion. Three smaller outliers also occur, including the Air ou Azbine in northern Niger (with Monts Bagouezane reaching over 2000 metres), Dhar Adrar in Mauritania (reaching almost 700 m), and Adrar des Iforas in Mali and Algeria (reaching 900 m).
The largest intact blocks of habitat found in Algeria lie within the huge protected areas of Ahaggar National Park and Parc National de Tassili N'Ajjer. In these areas the flora and fauna are actively protected. Tourism has become a means of income for the area, so local people have come to realize the economic importance of the animals and plants. Park managers are attempting to end poaching and woodcutting. The other portions of the ecoregion are less well protected, but their relative lack of human inhabitants may reduce threats.
This ecoregion covers a narrow band on the southern edge of the Sahara Desert, stretching from central Mauritania to the Red Sea. Annual grazing after rainfall is a key feature, in former times attracting large herds of arid-adapted migratory ungulates such as gazelles, addax, and scimitar-horned oryx. Much of the ecoregion is now overgrazed by herds of domestic livestock, and habitat degradation is widespread. Motorized hunters have decimated the wild ungulate herds, and the ecoregion's few protected areas have suffered from civil and international wars. Continued and increased external support is required to protect the ecoregion and provide alternate livelihood
| 2.506764
|
HuggingFaceFW/fineweb-edu
|
Global Warming Basics: Loaded Dice
— Tallulah Bankhead
People have been playing with dice for at least 5,000 years. They come in all shapes and sizes, but have a common purpose: to generate randomness. When dice are rolled (or "thrown" or "cast" or "tossed" or "shot," depending on your verb of choice) many results are possible, but what actually happens isn't known until it happens. Dice (or one die, to use the singular) are the world's original random-number generator. We may not know what they'll give but at least we can figure out what to expect.
If they're fair, that is.
The most common form is the six-sided die, a cube with dots on it, a different number for each face, ranging from 1 to 6. They're often rolled in pairs (as in the gambling game craps), but sometimes singly or in groups of five (Yahtzee!) or more. If you roll a single 6-sided die (what gamers call a "d6"), ideally the probability for each possible result will follow the uniform distribution, meaning simply that all possible results are equally likely.
If you're shooting craps and roll two dice, then add up the numbers on both, not all results are equally likely (as Tallulah Bankhead well knew). You have a 1-out-of-6 chance of "lucky 7," but only a 1-out-of-36 chance of rolling a 2 ("snake-eyes") or a 12 ("boxcars").
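Those craps odds are easy to verify by brute force. Here's a minimal Python sketch (mine, not part of the original post) that enumerates all 36 equally likely outcomes of a pair of fair dice and tallies the probability of each sum:

from itertools import product
from fractions import Fraction
from collections import Counter

# Enumerate all 36 equally likely (die1, die2) outcomes and tally their sums.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))

for total in range(2, 13):
    prob = Fraction(counts[total], 36)
    print(f"sum {total:2d}: {prob}  (~{float(prob):.4f})")

It prints 1/6 for lucky 7 and 1/36 each for snake-eyes and boxcars, exactly as described.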
Mathematicians easily calculate the chance for each possible result (again, assuming the dice are fair), and can also compute "summary statistics" which encode what we expect more compactly. One of the most important such quantities is called the mean, also known as the expected value. If you roll a die many times — many many times — theoretically at least an infinite number of times, then compute the average of all those results, the mean value for a single die is 3.5. You can't actually get the mean value for a single roll; there's no "3.5" face. But the average of a large number of rolls will be close to 3.5, and the more you roll the closer it will get. If, of course, the die is fair.
As with all random events, results may vary. In fact, results will vary (or it wouldn't be random). By how much? That is often summarized with a statistic called the variance. If you subtract the mean from each result, you'll know the deviation for each. If you then compute the average squared deviation, you'll have estimated the variance. [It's a wee bit more complex, because often, instead of adding up the squared deviations and dividing by how many there are, we divide by one less than how many there are.] It can also be called the "mean square deviation." But it's more customary to take the square root of the variance, giving a quantity called the standard deviation (a.k.a. root-mean-square deviation).
Those two quantities, mean and standard deviation, are the most basic and most common summary statistics for a random process like rolling dice. There are plenty of other choices, and we need more than just two quantities if we want to specify things completely, but they're actually quite a useful characterization. For rolling a single die, the mean is 3.5 and the standard deviation is 1.707825. If, and only if, the die is fair.
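Both numbers are easy to check. The sketch below (again mine, not the post's) computes the exact mean and standard deviation of one fair d6 and compares them with a million simulated rolls:

import random
import statistics

faces = range(1, 7)

# Exact values for a fair six-sided die.
mean_exact = sum(faces) / 6                       # 3.5
var_exact = sum((f - mean_exact) ** 2 for f in faces) / 6
sd_exact = var_exact ** 0.5                       # ~1.707825

# Simulated values from a million rolls.
rolls = [random.randint(1, 6) for _ in range(1_000_000)]
print(f"exact:     mean={mean_exact}, sd={sd_exact:.6f}")
print(f"simulated: mean={statistics.mean(rolls):.4f}, sd={statistics.pstdev(rolls):.4f}")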
We've already mentioned that a pair of dice gives a number for which not all results are equally likely, i.e. it no longer follows the uniform distribution. Here's the probability of each possible result of a pair of dice; the minimum is 2 (a "1" on each die), the maximum is 12 (a "6" on each die), but the most likely result — which also happens to equal the mean — is lucky 7.
If we roll 5 dice the probabilities are different still:
Also shown (as a dashed red line) is what statisticians call the normal distribution, the classic "bell curve" shape you may have seen before, quite often. This particular normal distribution has the same mean value and the same standard deviation as the roll of five dice. Note that the probabilities for a five-dice roll tend to follow that normal distribution quite well; not perfectly, but the match is excellent.
And that's no accident. If we roll 100 dice and add them up, the probabilities for each possible result (from 100 to 600) will match the normal distribution, not just "quite well" but stunningly so.
This means we can use dice as a random-number generator which will, at least approximately, follow the normal distribution. That particular distribution is uncannily common in nature; few natural phenomena follow it exactly, but so many follow it approximately that it's downright spooky. [Until you find out why, one of the deepest and most beautiful theorems in statistics, but that is another topic entirely.]
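If you'd like to see that convergence for yourself, here's a short sketch (an illustration of the idea, not code from the post) that sums 100 fair dice many times and compares the empirical mean and standard deviation with the theoretical values of 350 and sqrt(100 × 35/12) ≈ 17.08:

import random
import statistics

def roll_sum(n_dice: int) -> int:
    """Sum of n_dice independent fair six-sided dice."""
    return sum(random.randint(1, 6) for _ in range(n_dice))

n_dice, n_trials = 100, 50_000
sums = [roll_sum(n_dice) for _ in range(n_trials)]

print("empirical mean:", statistics.mean(sums))    # close to 350
print("empirical sd:  ", statistics.pstdev(sums))  # close to 17.08
# A histogram of `sums` traces out the familiar bell curve.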
The Weather Game
Let's invent a game we'll call weather. We'll roll 12 dice — one for each month of the year — and add them up, the result indicating what the weather was like for that year. The mean value for rolling 12 dice is 42 (if they're fair, that is), which will represent weather so good we have a great year; for a 42 we'll award ourselves 6 points. If we're one off, with a 41 or 43, we only get 5 points, two off we get 4 points, etc., all the way to five off (37 or 47) which is only one point. Those are the "good weather year" results.
If we're off (from the 42 average) by anywhere from 6 to 12 (if we roll between 30 and 36, or between 48 and 54) then we'll say the weather is just "OK" so we don't gain any points, but we don't lose any either.
But if our roll is too extreme, 29 and below or 55 and above, we're in the bad region and we lose points. We'll pretend it's extreme weather such as brings flood, hurricane, tornado, killer heat wave, drought, all spelling trouble. The least extreme bad rolls cost only one point. But, as is the case with trouble, as it gets more severe the damage rises rapidly, so we'll square the amount into the bad region to get the point loss. The most we can be into the bad region is 18 (on either the high or low end), so if we roll a 72 ("6" on all 12 dice) or 12 ("1" on all 12 dice) we're penalized 324 points.
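Those scoring rules fit in a few lines of code. Here's a minimal sketch (my own rendering of the rules above, not the author's program; the function name is mine) of the payoff for a single year's roll:

def yearly_score(roll: int, mean: int = 42) -> int:
    """Points for one year's 12-dice roll under the weather-game rules."""
    off = abs(roll - mean)
    if off <= 5:             # "good" weather: 6 points at 42, down to 1 at 37 or 47
        return 6 - off
    if off <= 12:            # "OK" weather: no gain, no loss
        return 0
    return -(off - 12) ** 2  # "bad" weather: squared penalty, up to -324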
It may seem like a losing proposition to have the maximum reward for any one year be only 6, while the maximum loss is 324; we would need 54 years of best possible weather to make up for a single year of worst possible weather. Isn't that just a recipe for misery? Not at all, because despite the extreme cost of the worst possible weather, the chance of that happening is small. The probability of rolling either 12 or 72 on a dozen dice is merely 1 out of 1,088,391,168, less than one chance out of a billion. 'Tain't likely.
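That "one out of 1,088,391,168" figure is just counting: only two of the 6^12 equally likely outcomes are all ones or all sixes. A quick check (mine):

from fractions import Fraction

total_outcomes = 6 ** 12                 # 2,176,782,336 possible 12-dice rolls
p_extreme = Fraction(2, total_outcomes)  # all ones OR all sixes
print(p_extreme)                         # 1/1088391168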
Here are the probabilities for each possible roll, and I've marked out the regions we've called "good," "OK," and "bad." It's plain to see that the good region has all the most-likely rolls and the bad region all the least likely. If, that is, the dice are fair.
I used my computer to roll the dice, ran it for 1,950 years, and got these rolls:
The biggest single-year gain was 6 points, which happened 144 times, while the biggest single-year loss was 49 points, but such disaster only struck once. The most common score (584 times, just about 30%), was 0 points for "OK" weather. The running score is shown here:
As you can see, it's easy to win this game. Despite random ups and downs, over the long haul we gained an average of about 2 points per year. The game is rigged — for you to win.
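For readers who want to replay the whole game, the following sketch (again mine, not the post's actual code) reuses the yearly_score function from the earlier sketch and runs 1,950 simulated years with fair dice. The particular random history will differ from the one above, but the long-run average should land near +2 points per year:

import random

def roll_year(n_dice: int = 12) -> int:
    """Sum of twelve fair six-sided dice -- one simulated year of weather."""
    return sum(random.randint(1, 6) for _ in range(n_dice))

total = 0
for _ in range(1950):
    total += yearly_score(roll_year())   # yearly_score defined in the sketch above

print("final score:", total)
print("average per year:", total / 1950)   # roughly +2 with fair dice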
Loaded Dice
It has been said that climate is what we expect, weather is what we get. Let me re-phrase that:
Climate is the rules of the game; weather is the roll of the dice.
There's an unspoken rule of the game I made up: the dice have to be fair. What if we change that rule? It's just as easy for my computer to simulate loaded dice as fair dice, so starting in 1951 I changed how the dice behave. I did so slowly at first, building up so they're more and more loaded as time passes. By year 2000 they're loaded noticeably, but not extremely; the year-2000 probabilities for a 12-dice roll are shown (in red, compared to the old probabilities in blue) here:
The probabilities are different, but the good rolls are still the most likely — just not as likely as they used to be. Bad ones are still the least likely, but more common than they used to be. That means we can expect to gain fewer points — on average — while losing more. But that's the climate, the rules of the game, which now include progressively more loaded dice. The weather is still the roll, still random, so we know to expect less but we won't
| 1.94734
|
Zyphra/Zyda-2
|
X-ray spectral components reflected from the inner accretion disk have been reported. The high spectral resolution capabilities of the focusing X-ray telescopes may therefore make possible to differentiate between the potential interpretations of the X-ray bursts spectral features.
9. Industrial opportunities on the International Thermonuclear Experimental Reactor (ITER) project
International Nuclear Information System (INIS)
Ellis, W.R.
1996-01-01
Industry has been a long-term contributor to the magnetic fusion program, playing a variety of important roles over the years. Manufacturing firms, engineering-construction companies, and the electric utility industry should all be regarded as legitimate stakeholders in the fusion energy program. In a program focused primarily on energy production, industry's future roles should follow in a natural way, leading to the commercialization of the technology. In a program focused primarily on science and technology, industry's roles, in the near term, should be, in addition to operating existing research facilities, largely devoted to providing industrial support to the International Thermonuclear Experimental Reactor (ITER) Project. Industrial opportunities on the ITER Project will be guided by the amount of funding available to magnetic fusion generally, since ITER is funded as a component of that program. The ITER Project can conveniently be discussed in terms of its phases, namely, the present Engineering Design Activities (EDA) phase, and the future (as yet not approved) construction phase. 2 refs., 3 tabs
10. DOUBLE BOSS SCULPTURED DIAPHRAGM EMPLOYED PIEZORESISTIVE MEMS PRESSURE SENSOR WITH SILICON-ON-INSULATOR (SOI)
Directory of Open Access Journals (Sweden)
D. SINDHANAISELVI
2017-07-01
This paper presents a detailed study of a low-pressure sensor that uses a piezoresistive double boss sculptured diaphragm fabricated with MEMS technology for flash flood level measurement. Very thin MEMS diaphragms for sensing low pressure are analyzed, with supports introduced to achieve linearity. Simulation results obtained from the Intellisuite MEMS CAD design tool show that very thin diaphragms with a rigid centre or boss give acceptable linearity. Further investigations on very thin diaphragms embedded with piezoresistors for low-pressure measurement show that it is essential to analyse piezoresistor placement and size to achieve good sensitivity. A modified analytical model developed in this study for the double boss sculptured diaphragm was compared with simulated results. The enhancement of sensitivity is further analyzed using a non-uniform thickness diaphragm and the Silicon-On-Insulator (SOI) technique. The simulation results indicate that the double boss square sculptured diaphragm with an SOI layer of 0.85 μm thickness yields the highest voltage sensitivity and acceptable linearity with small-scale deflection.
11. Convergence of pattern generator outputs on a common mechanism of diaphragm motor unit recruitment.
Science.gov (United States)
Mantilla, Carlos B; Seven, Yasin B; Sieck, Gary C
2014-01-01
Motor units are the final element of neuromotor control. In manner analogous to the organization of neuromotor control in other skeletal muscles, diaphragm motor units comprise phrenic motoneurons located in the cervical spinal cord that innervate the diaphragm muscle, the main inspiratory muscle in mammals. Diaphragm motor units play a primary role in sustaining ventilation but are also active in other nonventilatory behaviors, including coughing, sneezing, vomiting, defecation, and parturition. Diaphragm muscle fibers comprise all fiber types. Thus, diaphragm motor units display substantial differences in contractile and fatigue properties, but importantly, properties of the motoneuron and muscle fibers within a motor unit are matched. As in other skeletal muscles, diaphragm motor units are recruited in order such that motor units that display greater fatigue resistance are recruited earlier and more often than more fatigable motor units. The properties of the motor unit population are critical determinants of the function of a skeletal muscle across the range of possible motor tasks. Accordingly, fatigue-resistant motor units are sufficient to generate the forces necessary for ventilatory behaviors, whereas more fatigable units are only activated during expulsive behaviors important for airway clearance. Neuromotor control of diaphragm motor units may reflect selective inputs from distinct pattern generators distributed according to the motor unit properties necessary to accomplish these different motor tasks. In contrast, widely distributed inputs to phrenic motoneurons from various pattern generators (e.g., for breathing, coughing, or vocalization) would dictate recruitment order based on intrinsic electrophysiological properties. © 2014 Elsevier B.V. All rights reserved.
12. Analytical investigation of bidirectional ductile diaphragms in multi-span bridges
Science.gov (United States)
Wei, Xiaone; Bruneau, Michel
2018-04-01
In the AASHTO Guide Specifications for Seismic Bridge Design Provisions, ductile diaphragms are identified as Permissible Earthquake-Resisting Elements (EREs), designed to help resist seismic loads applied in the transverse direction of bridges. When adding longitudinal ductile diaphragms, a bidirectional ductile diaphragm system is created that can address seismic excitations acting along both the bridge's longitudinal and transverse axes. This paper investigates bidirectional ductile diaphragms with Buckling Restrained Braces (BRBs) in straight multi-span bridge with simply supported floating spans. The flexibility of the substructures in the transverse and longitudinal direction of the bridge is considered. Design procedures for the bidirectional ductile diaphragms are first proposed. An analytical model of the example bridge with bidirectional ductile diaphragms, designed based on the proposed methodology, is then built in SAP2000. Pushover and nonlinear time history analyses are performed on the bridge model, and corresponding results are presented. The effect of changing the longitudinal stiffness of the bidirectional ductile diaphragms in the end spans connecting to the abutment is also investigated, in order to better understand the impact on the bridge's dynamic performance.
13. First reported experience with intramuscular diaphragm pacing in replacing positive pressure mechanical ventilators in children.
Science.gov (United States)
Onders, Raymond P; Ponsky, Todd A; Elmo, MaryJo; Lidsky, Karen; Barksdale, Edward
2011-01-01
Diaphragm pacing (DP) has been shown to successfully replace mechanical ventilators for adult tetraplegic patients with chronic respiratory insufficiency. This is the first report of DP in ventilator-dependent children. This was a prospective interventional experience under institutional review board approval. Diaphragm pacing involves outpatient laparoscopic diaphragm motor point mapping to identify the site where stimulation causes maximum diaphragm contraction with implantation of 4 percutaneous intramuscular electrodes. Diaphragm conditioning ensues to wean the child from the ventilator. Six children were successfully implanted ranging from 5 to 17 years old with the smallest 15 kg in weight. Length of time on mechanical ventilation ranged from 11 days to 7.6 years with an average of 3.2 years. In all patients, DP provided tidal volumes above basal needs. Five of the patients underwent a home-based weaning program, whereas one patient who was implanted only 11 days post spinal cord injury never returned to the ventilator with DP use. Another patient was weaned from the ventilator full time but died of complications of his underlying brain stem tumor. The remaining patients weaned from the ventilator for over 14 hours a day and/or are actively conditioning their diaphragms. Diaphragm pacing successfully replaced mechanical ventilators, which improves quality of life. Copyright © 2011 Elsevier Inc. All rights reserved.
14. Investigation of the Durability of a Diaphragm for a Total Artificial Heart.
Science.gov (United States)
Gräf, Felix; Rossbroich, Ralf; Finocchiaro, Thomas; Steinseifer, Ulrich
2016-10-01
One of the most critical components regarding the durability of the ReinHeart total artificial heart (TAH) is its biocompatible diaphragm, which separates the drive unit from the ventricles. Hence, a durability tester was designed to investigate its required 5-year lifetime. The aim of this study was to prove the validity of accelerated testing of the polyurethane diaphragm. The durability tester allows simultaneous testing of 12 diaphragms and mimics physiological conditions. To accelerate the time of testing, it operates with an increased speed at a frequency of 8 Hz. To prove the correctness of this acceleration, a servo-hydraulic testing machine was used to study the effect of different frequencies and their corresponding loads. Thereby the viscoelastic behavior of the polyurethane was investigated. Additionally, high-speed video measurements were performed. The force against frequency and the high-speed video measurements showed constant behavior. In the range of 1-10 Hz, the maximum resulting forces varied by 3%, and the diaphragm movement was identical. Frequencies below 10 Hz allow a valid statement of the diaphragm's mechanical durability. Viscoelasticity of the polyurethane in the considered
| 1.890369
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
The reason for this is that a long line will build up when random arrivals occur faster than average and service times run longer than average. Managers can use the service concept to create organizational alignment and develop new services. It provides a means for describing the service business from an operations point of view. Period costs in accounting refer to the method of recording non-production expenses within the period in which the costs are incurred. Learn about the definition, discover how income and expenses relate to profit, and see real-world examples of period costs vs. product costs. Are you buying a new shirt or picking up a shirt from the dry cleaners?
Each week, therefore, every BK store manager schedules employees to cover not only the peak periods of breakfast, lunch, and dinner, but also the slower periods in between. If he or she staffs too many people, labor cost per sales dollar will be too high.
This approach is particularly appropriate for standardized goods ranging from processed foods to electronic appliances. Finally, the operations manager is directly involved in efforts to ensure that goods are produced according to specifications and that quality standards are maintained. Define operations management, and discuss the role of the operations manager in a manufacturing company.
What Is Inbound Logistics & Manufacturing?
This was accomplished by adhering to their system of delivering the goods and the service to the customers at the lowest possible cost. The operations system included careful selection of merchandise, low cost sourcing, ownership of transportation, cross-docking, efficient location of stores and friendly home-town service to the customer. One of the key insights of this management system was the distinction between dependent demand and independent demand. Orlicky wrote "Materials Requirement Planning" in 1975, the first hard cover book on the subject. MRP II was developed by Gene Thomas at IBM, and expanded the original MRP software to include additional production functions.
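To make the dependent/independent-demand distinction concrete, here is a tiny sketch (illustrative only, with made-up product data, not an example from the text): independent demand for a finished bicycle is forecast from the market, while dependent demand for wheels and spokes is simply computed from the bill of materials, which is the core idea behind MRP.

# Bill of materials: parent item -> quantity of each component per unit (illustrative).
bom = {
    "bicycle": {"wheel": 2, "frame": 1},
    "wheel": {"spoke": 32, "rim": 1},
}

def explode(item: str, quantity: float, requirements: dict) -> dict:
    """Accumulate dependent demand for all components of `quantity` units of `item`."""
    for component, per_unit in bom.get(item, {}).items():
        needed = quantity * per_unit
        requirements[component] = requirements.get(component, 0) + needed
        explode(component, needed, requirements)   # components may have their own BOMs
    return requirements

# Independent demand: a forecast of 100 bicycles.
print(explode("bicycle", 100, {}))
# {'wheel': 200, 'spoke': 6400, 'rim': 200, 'frame': 100}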
- In capital intensive services the focus is more on technology and automation, while in people intensive services the focus is more on managing service employees that deliver the service.
- But if you were a waitress, you'd interact with customers every day.
- Location of facilities must be near the customers and scale economics can be lacking.
- To measure this, use our real-time dashboard that automatically monitors six key project metrics and displays them in colorful, intuitive graphs.
- Inventory management and control is needed in service operations with facilitating goods.
- In other words, operations managers manage the process that transforms inputs into outputs.
In capital intensive services the focus is more on technology and automation, while in people intensive services the focus is more on managing service employees that deliver the service. First, let's look at the seven lean manufacturing waste types developed by Taiichi Ohno, chief engineer at Toyota, for the Toyota production system .
It starts with a high level of internal quality leading to employee satisfaction and productivity to deliver superior external customer service leading to customer satisfaction, customer loyalty and finally high revenues and profits. Once the service package is specified, operations is ready to make decisions concerning the process, quality, capacity, inventory, supply chain and information systems. These are the six decision responsibilities of service operations. Other decision responsibilities such as market choice, product positioning, pricing, advertising and channels belong to the marketing function.
They're both businesses dealing with the same product but in different ways. In this lesson, you'll learn the difference between trade and service businesses. Manufacturing operations are classified into process manufacturing and discrete manufacturing.
Manufacturing And Service: Relationship, Similarities And Difference
Unfortunately, failing to balance capacity and projected demand can be seriously detrimental to your bottom line. If you set capacity too low, you won't be able to meet demand, and you'll lose sales and customers. If you set capacity too high, you'll waste resources and inflate operating costs. This approach requires that a company interact with the customer to find out exactly what the customer wants and then manufacture the good, using efficient production methods to hold down costs.
Since the product cannot be stored, the service facility must be managed to meet peak demand, which requires more flexibility than manufacturing. Facilities must be located near customers, and economies of scale can be lacking. Queueing theory has been devised to assist in the design of service facility waiting lines. Revenue management is important for service operations, since empty seats on an airplane are lost revenue when the plane departs and cannot be stored for future use.
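As a concrete illustration of how queueing theory informs capacity decisions, here is a small sketch (not from the textbook) of the standard M/M/1 single-server model, which assumes random (Poisson) arrivals and exponential service times; the arrival and service rates are made-up numbers.

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state metrics for an M/M/1 queue: one server, Poisson arrivals,
    exponential service times (rates are customers per hour)."""
    if arrival_rate >= service_rate:
        raise ValueError("Queue is unstable: arrivals must be slower than service.")
    rho = arrival_rate / service_rate            # server utilization
    l_q = rho ** 2 / (1 - rho)                   # average number waiting in line
    w_q = l_q / arrival_rate                     # average wait in line (Little's law)
    return {"utilization": rho, "avg_in_line": l_q, "avg_wait_hours": w_q}

# Illustrative example: 10 customers/hour arrive, the counter serves 12/hour on average.
print(mm1_metrics(arrival_rate=10, service_rate=12))
# Even at ~83% utilization the average line is ~4.2 customers and the average
# wait is ~25 minutes -- randomness alone is enough to create long lines.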
Process manufacturing is an operational method that produces goods by following a specified sequence of steps or a predefined formula. Discrete manufacturing emphasizes producing individual finished goods that are distinct from one another. While pharmaceutical and food and beverage industries adopt the process manufacturing method, automobiles and smartphone manufacturers adopt a discrete manufacturing method.
To improve processes, another fundamental for lean, you need data. To measure this, use our real-time dashboard that automatically monitors six key project metrics and displays them in colorful, intuitive graphs. See at a glance how you're performing in real time—and with single click reporting, share key data points to keep the team informed. Overseeing a service organization puts special demands on managers, especially services requiring a high degree of contact with customers. Scheduling is made easier by information provided by a point-of-sale device built into every BK cash register. The register keeps track of every sandwich, beverage, and side order sold by the hour, every hour of the day, every day of the week.
To be successful in a service industry, you need to be accessible to your customers. Some service businesses, such as cable-TV providers, package-delivery services, and e-retailers, go to their customers. Many others, however—hotels, restaurants, stores, hospitals, and airports—have to attract customers to their facilities. These businesses must locate where there's a high volume of available customers. The decisions made in the planning stage have long-range implications and are crucial to a firm's success. Before making decisions about the operations process, managers must consider the goals set by marketing managers. Does the company intend to be a low-cost producer and to compete on the basis of price?
The role of operations managers in the manufacturing sector includes production planning, production control, and quality control. With these factors in mind, let's look at the specific types of decisions that have to be made in the production planning process. We've divided these decisions into those dealing with production methods, site selection, facility layout, and components and materials management. Operations management textbooks usually cover demand forecasting, even though it is not strictly speaking an operations problem, because demand is related to some production system variables. For example, a classic approach in dimensioning safety stocks requires calculating the standard deviation of forecast errors. Demand forecasting is also a critical part of push systems, since order releases have to be planned ahead of actual clients' orders. Also, any serious discussion of capacity planning involves adjusting company output to market demand.
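To make that safety-stock remark concrete, here is one common textbook formulation (a sketch with illustrative numbers, not the book's own example): safety stock is a service-level factor z times the standard deviation of forecast errors, scaled by the square root of the lead time.

import math
import statistics

def safety_stock(forecast_errors: list[float], lead_time_periods: float,
                 z: float = 1.65) -> float:
    """Safety stock = z * sigma(forecast error) * sqrt(lead time).
    z = 1.65 corresponds to roughly a 95% cycle service level."""
    sigma = statistics.stdev(forecast_errors)
    return z * sigma * math.sqrt(lead_time_periods)

# Illustrative weekly forecast errors (actual demand minus forecast), lead time 2 weeks.
errors = [12, -8, 5, -15, 9, 3, -6, 11]
print(round(safety_stock(errors, lead_time_periods=2), 1))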
2 Manufacturing Versus Service Operations
Organizations that engage in hospitality, travel, media, sports, health care and entertainment are service-providing organizations. Service-providing operations send employees to their customers' locations or meet the customers at the company's premises to facilitate the service provision. Other ways to approach lean manufacturing are just-in-time manufacturing or the Toyota Way, as it was developed by the company for its famous Toyota production system . Here the focus is on improving workflow to remove unevenness as opposed to wastefulness. For example, quality management approaches used in manufacturing such as the Baldrige Award, and Six Sigma have been widely applied to services. Likewise, lean service principles and practices have also been applied in service operations.
They focus on improving the processes that underlie production of the service. Design of a service system must consider the degree of customer contact. The importance of customer contact was first noted by Chase and Tansik. They argued that high customer contact processes should be designed and managed differently from low-contact processes. High-contact processes have the customer in the system while providing the service. This can lead to difficulties in standardizing the service or inefficiencies when the customer makes demands or expects unique services. On the other hand, high-contact also provides the possibility of self-service where customers provide part of the service themselves (e.g. filling your own gas tank, or packing your own groceries).
Manufacturing plants are located on the basis of low costs, whereas service facilities are located on the basis of high revenues and profits. These quality approaches begin with defining and measuring the customer's needs (e.g. using SERVQUAL). Any service that does not meet a customer's need is considered a defect. Then these approaches seek to reduce defects through statistical methods, cause-and-effect analysis, problem solving teams, and involvement of employees.
Planning The Production Process
As a result, manufacturing uses a Materials Requirements Planning System, while services do not. Services use Replenishment inventory control systems such as order-point and periodic-review systems.
In the traditional pull approach to inventory control, a number of techniques have been developed based on the work of Ford W. Harris, which came to be known as the economic order quantity (EOQ) model. This model marks the beginning of inventory theory, which includes the Wagner-Whitin procedure, the newsvendor model, the base stock model, and the fixed time period model.
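The economic order quantity itself has a simple closed form, Q* = sqrt(2DS/H), where D is annual demand, S is the fixed cost per order, and H is the annual holding cost per unit. A minimal sketch with made-up numbers (not an example from the text):

import math

def economic_order_quantity(annual_demand: float, order_cost: float,
                            holding_cost_per_unit: float) -> float:
    """Harris's EOQ: the order size that balances ordering and holding costs."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Illustrative numbers: 12,000 units/year, $50 per order, $4 per unit per year to hold.
q_star = economic_order_quantity(12_000, 50, 4)
print(round(q_star))   # ~548 units per order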
If there aren't enough employees, customers have to wait in lines. Some get discouraged, and even leave, and many may never come back. The term "manufacturing operations" refers to a framework in which man, machine and material come together to produce a tangible product. It deals with all the supply chain activities such as gathering requirements from customers, procuring raw materials, allocating resources, scheduling the production, maintaining the inventory, and delivering end products to customers. Forecasting demand is a prerequisite for managing capacity and scheduling. Forecasting demand often uses big data to predict
| 1.360203
|
m-a-p/FineFineWeb
|
Common ground in Groovy, Scala, and Clojure, Part 2
Learn how the languages reduce boilerplate and complexity
Common complaints about the Java language concern excessive ceremony for simple tasks and defaults that are sometimes confusing. All three of the languages take more sensible approaches in those areas. This installment shows how Groovy, Scala, and Clojure smooth out the Java language's rough edges.
06 May 2013 - Per author request, added information about syntactic sugar in Listing 9 and its introductory information.
14 May 2013 - Added a link to "Common ground in Groovy, Scala, and Clojure, Part 3" in Resources.
14 May 2013 (First published 16 April 2013)
The Java programming language emerged under constraints that are different from the conditions that developers face today. In particular, the primitive types in the Java language exist because of the performance and memory constraints of mid-1990s hardware. Since then the Java language evolved to autobox much of the pain away, but the languages — Groovy, Scala, and Clojure — go a step further toward removing inconsistencies and friction throughout each language.
In this installment, I show how the languages do away with some common Java limitations, both in syntax and in default behavior. The first limitation to go is the presence of primitive data types.
The death of primitives
The Java language began with eight pairs of primitives and corresponding type-wrapper classes — originally to address performance and memory constraints — and gradually blurred the distinction with autoboxing. The languages go much further, making it seem to developers as if a difference no longer exists.
About this series
The Java legacy will be the platform, not the language. More than 200 languages run on the JVM, each bringing interesting new capabilities beyond those of the Java language. This series explores three next-generation JVM languages — Groovy, Scala, and Clojure — comparing and contrasting new capabilities and paradigms. The series aims to give Java developers a glimpse into their own near future — and help them make educated choices about the time they devote to new-language learning.
Groovy completely hides the existence of the primitive types from you. For example, int always stands in for Integer, and Groovy automatically handles upconverting numeric types to prevent numeric-overflow errors. For example, see the Groovy shell interaction in Listing 1:
Listing 1. Groovy's automatic treatment of primitives
groovy:000> 1.class
===> class java.lang.Integer
groovy:000> 1e12.class
===> class java.math.BigDecimal
In Listing 1, the Groovy shell shows that even constants are represented by underlying classes. Because all numbers (and other ostensible primitives) are really classes, you can use metaprogramming techniques. The techniques include adding methods to numbers — often used in building domain-specific languages (DSLs). I'll cover this capability more fully in an upcoming installment about extensibility.
As in Groovy, Clojure automatically masks the difference between primitives and wrappers, allowing method invocations against all types and handling type conversions for capacity automatically. Clojure encapsulates a huge number of optimizations underneath, which is well-covered in the language documentation (see Resources). In many cases, you can provide type hints to enable the compiler to generate faster code. For example, rather than define a method with (defn sum[x]... ), you can add a type hint such as (defn sum[^float x]... ), which generates more-efficient code for critical sections.
Scala also masks the difference, generally using primitives underneath for time-critical parts of code. It also allows method invocations on constants, as in 2.toString. With its ability to mix and match primitives and wrappers like Integer, Scala is more transparent than Java autoboxing. For example, the == operator in Scala works correctly (comparing the values, not the references) across primitive and object references, unlike the Java version of the same operator. Scala also includes an eq method (with a symmetrical ne method) that always compares equality (or inequality) of the underlying reference type. Basically, Scala intelligently switched the default behavior. In the Java language, == compares the references, which you almost never need, whereas the less intuitive equals() compares values. In Scala, == does the correct thing (compares values) regardless of underlying implementation, and provides a method for the less-common reference-equality check.
This feature of Scala illustrates that one key benefit of the languages lies in offloading low-level details to the language and run time, freeing developers to think more about higher-level problems.
Simplifying defaults
With near-universal consensus, most Java developers agree that common operations require too much syntax in the Java language. For example, property definitions and other boilerplate code clutter class definitions, obscuring the important methods. All the languages provide ways to streamline creation and access.
Classes and case classes in Scala
Scala already simplifies class definitions by automatically creating accessors, mutators, and constructors for you. For example, consider the Java class in Listing 2:
Listing 2. Simple Person class in Java
class Person {
    private String name;
    private int age;
    Person(String name, int age) {
 = name;
        this.age = age;
    }
    public String getName() {
        return name;
    }
    public int getAge() {
        return age;
    }
    public void setAge(int age) {
        this.age = age;
    }
    public String toString() {
        return name + " is " + age + " years old.";
    }
}
The only nonboilerplate code in Listing 2 is the overridden toString() method. The constructor and all methods were generated by the IDE. Producing code quickly isn't as valuable as understanding it easily later. Needless syntax adds to the volume of code that you must use before you understand the underlying meaning.
Scala Person class
Stunningly, the simple three-line definition in Scala in Listing 3 creates an equivalent class:
Listing 3. Equivalent class in Scala
class Person(val name: String, var age: Int) {
  override def toString = name + " is " + age + " years old."
}
The Person class in Listing 3 boils down to a mutable age property, an immutable name property, and a two-parameter constructor, plus my overridden toString() method. You can easily see what's unique about this class because the interesting parts aren't drowning in syntax.
Scala's design emphasizes the ability to create code with minimal excess syntax, and it makes much syntax optional. The simple class in Listing 4 illustrates a verbose class that changes a string to uppercase:
Listing 4. Verbose class
class UpperVerbose {
  def upper(strings: String*) : Seq[String] = { => s.toUpperCase()) }
}
Much of the code in Listing 4 is optional. Listing 5 shows the same code, now an object rather than a class:
Listing 5. A simpler to-uppercase object
object Up {
  def upper(strings: String*) =
}
For the Scala equivalent to Java static methods, you create an object — Scala's built-in equivalent of a singleton instance — rather than a class. The method's return type, the braces that delimit the single-line method body, and the needless s parameter from Listing 4 all disappear in Listing 5. This "collapsible syntax" is both a blessing and curse in Scala. With collapsible syntax, you can write in extraordinarily idiomatic ways, which can make your code difficult for the uninitiated to read.
Case classes
Simple classes that act as data holders are common in object-oriented systems, particularly ones that must communicate with dissimilar systems. The prevalence of this type of class encouraged the Scala project to go even further and create case classes. Case classes automatically provide several syntactic conveniences:
- You can create a factory method that is based on the name of the class. For example, you can construct a new instance without bothering with the new keyword: val bob = Person("Bob", 42).
- All arguments in the class's parameter list are automatically vals, meaning they are maintained as immutable internal fields.
- The compiler generates reasonable default equals(), hashCode(), and toString() methods for your class.
- The compiler adds a copy() method to your class so you can make mutating changes by returning a new copy.
The languages don't merely fix syntactic warts but also manifest a better understanding of how modern software works, molding their facilities toward it.
Groovy's autogenerated properties
Of the languages, Groovy adheres most closely to Java syntax, providing syntactic-sugar code generation for common cases. Consider the simple Person class in Groovy in Listing 6:
Listing 6. Groovy Person class
class Person {
    private name
    def age
    def getName() {
        name
    }
    String toString() {
        "${name} is ${age} years old."
    }
}
def bob = new Person(name: "Bob", age:42)
In the Groovy code in Listing 6, defining a field with def yields both an accessor and a mutator. If you prefer only one or the other, you can define it yourself, as I do with the name property. Even though the method is named getName(), I can still access it through the more intuitive property syntax ().
If you want Groovy to generate the equals() and hashCode() pair of methods for you automatically, add the @EqualsAndHashCode annotation to your class. This annotation uses Groovy's Abstract Syntax Tree (AST) Transformations to generate methods that are based on your properties (see Resources). By default, this annotation takes only properties (not fields) into account; it considers fields too if you add the includeFields=true modifier.
Clojure's map-like records
You can create the same Person class in Clojure just as you can in the other
| 1.773835
|
Zyphra/Zyda-2
|
We have, in the next place, to treat of Memory and Remembering, considering its nature, its cause, and the part of the soul to which this experience, as well as that of Recollecting, belongs. For the persons who possess a retentive memory are not identical with those who excel in power of recollection; indeed, as a rule, slow people have a good memory, whereas those who are quick-witted and clever are better at recollecting.
We must first form a true conception of these objects of memory, a point on which mistakes are often made. Now to remember the future is not possible, but this is an object of opinion or expectation (and indeed there might be actually a science of expectation, like that of divination, in which some believe); nor is there memory of the present, but only sense-perception. For by the latter we know not the future, nor the past, but the present only. But memory relates to the past. No one would say that he remembers the present, when it is present, e.g. a given white object at the moment when he sees it; nor would one say that he remembers an object of scientific contemplation at the moment when he is actually contemplating it, and has it full before his mind;-of the former he would say only that he perceives it, of the latter only that he knows it. But when one has scientific knowledge, or perception, apart from the actualizations of the faculty concerned, he thus'remembers' (that the angles of a triangle are together equal to two right angles); as to the former, that he learned it, or thought it out for himself, as to the latter, that he heard, or saw, it, or had some such sensible experience of it. For whenever one exercises the faculty of remembering, he must say within himself, 'I formerly heard (or otherwise perceived) this,' or 'I formerly had this thought'.
Memory is, therefore, neither Perception nor Conception, but a state or affection of one of these, conditioned by lapse of time. As already observed, there is no such thing as memory of the present while present, for the present is object only of perception, and the future, of expectation, but the object of memory is the past. All memory, therefore, implies a time elapsed; consequently only those animals which perceive time remember, and the organ whereby they perceive time is also that whereby they remember.
The subject of 'presentation' has been already considered in our work On the Soul. Without a presentation intellectual activity is impossible. For there is in such activity an incidental affection identical with one also incidental in geometrical demonstrations. For in the latter case, though we do not for the purpose of the proof make any use of the fact that the quantity in the triangle (for example, which we have drawn) is determinate, we nevertheless draw it determinate in quantity. So likewise when one exerts the intellect (e.g. on the subject of first principles), although the object may not be quantitative, one envisages it as quantitative, though he thinks it in abstraction from quantity; while, on the other hand, if the object of the intellect is essentially of the class of things that are quantitative, but indeterminate, one envisages it as if it had determinate quantity, though subsequently, in thinking it, he abstracts from its determinateness. Why we cannot exercise the intellect on any object absolutely apart from the continuous, or apply it even to non-temporal things unless in connexion with time, is another question. Now, one must cognize magnitude and motion by means of the same faculty by which one cognizes time (i.e. by that which is also the faculty of memory), and the presentation (involved in such cognition) is an affection of the sensus communis; whence this follows, viz. that the cognition of these objects (magnitude, motion time) is effected by the (said sensus communis, i.e. the) primary faculty of perception. Accordingly, memory (not merely of sensible, but) even of intellectual objects involves a presentation: hence we may conclude that it belongs to the faculty of intelligence only incidentally, while directly and essentially it belongs to the primary faculty of sense-perception.
Hence not only human beings and the beings which possess opinion or intelligence, but also certain other animals, possess memory. If memory were a function of (pure) intellect, it would not have been as it is an attribute of many of the lower animals, but probably, in that case, no mortal beings would have had memory; since, even as the case stands, it is not an attribute of them all, just because all have not the faculty of perceiving time. Whenever one actually remembers having seen or heard, or learned, something, he includes in this act (as we have already observed) the consciousness of 'formerly'; and the distinction of 'former' and 'latter' is a distinction in time.
Accordingly if asked, of which among the parts of the soul memory is a function, we reply: manifestly of that part to which 'presentation' appertains; and all objects capable of being presented (viz. aistheta) are immediately and properly objects of memory, while those (viz. noeta) which necessarily involve (but only involve) presentation are objects of memory incidentally.
One might ask how it is possible that though the affection (the presentation) alone is present, and the (related) fact absent, the latter-that which is not present-is remembered. (The question arises), because it is clear that we must conceive that which is generated through sense-perception in the sentient soul, and in the part of the body which is its seat-viz. that affection the state whereof we call memory-to be some such thing as a picture. The process of movement (sensory stimulation) involved the act of perception stamps in, as it were, a sort of impression of the percept, just as persons do who make an impression with a seal. This explains why, in those who are strongly moved owing to passion, or time of life, no mnemonic impression is formed; just as no impression would be formed if the movement of the seal were to impinge on running water; while there are others in whom, owing to the receiving surface being frayed, as happens to (the stucco on) old (chamber) walls, or owing to the hardness of the receiving surface, the requisite impression is not implanted at all. Hence both very young and very old persons are defective in memory; they are in a state of flux, the former because of their growth, the latter, owing to their decay. In like manner, also, both those who are too quick and those who are too slow have bad memories. The former are too soft, the latter too hard (in the texture of their receiving organs), so that in the case of the former the presented image (though imprinted) does not remain in the soul, while on the latter it is not imprinted at all.
But then, if this truly describes what happens in the genesis of memory, (the question stated above arises:) when one remembers, is it this impressed affection that he remembers, or is it the objective thing from which this was derived? If the former, it would follow that we remember nothing which is absent; if the latter, how is it possible that, though perceiving directly only the impression, we remember that absent thing which we do not perceive? Granted that there is in us something like an impression or picture, why should the perception of the mere impression be memory of something else, instead of being related to this impression alone? For when one actually remembers, this impression is what he contemplates, and this is what he perceives. How then does he remember what is not present? One might as well suppose it possible also to see or hear that which is not present. In reply, we suggest that this very thing is quite conceivable, nay, actually occurs in experience. A picture painted on a panel is at once a picture and a likeness: that is, while one and the same, it is both of these, although the 'being' of both is not the same, and one may contemplate it either as a picture, or as a likeness. Just in the same way we have to conceive that the mnemonic presentation within us is something which by itself is merely an object of contemplation, while, in-relation to something else, it is also a presentation of that other thing. In so far as it is regarded in itself, it is only an object of contemplation, or a presentation; but when considered as relative to something else, e.g. as its likeness, it is also a mnemonic token. Hence, whenever the residual sensory process implied by it is actualized in consciousness, if the soul perceives this in so far as it is something absolute, it appears to occur as a mere thought or presentation; but if the soul perceives it qua related to something else, then,-just as when one contemplates the painting in the picture as being a likeness, and without having (at the moment) seen the actual Koriskos, contemplates it as a likeness of Koriskos, and in that case the experience involved in this contemplation of it (as relative) is different from what one has when he contemplates it simply as a painted figure-(so in the case of memory we have the analogous difference for), of the objects in the soul, the one (the unrelated object) presents itself simply as a thought, but the other (the related object) just because, as in the painting, it is a likeness, presents itself as a mnemonic token.
We can now understand why it is that
| 2.222455
|
HuggingFaceFW/fineweb-edu
|
entrant topography. This must be achieved without compromising the stiffness or stability of the tip. Reproducibly making such high aspect ratio probes is one of the most difficult tasks in conventional scanning force microscopy.
As shown in FIGS. 1A-C, distortions in measurement are caused by a typical probe-sample interaction. In FIG. 1A, a point 102 of a probe 100 may be too blunt or have a diameter too large to reach the bottom of a trench in an object 106 to bescanned. The true depth of the trench cannot be extracted from the scan 104. (Drawings 1A-C are adapted from Journal of Applied Physics 74(9) Nov. 1, 1993, pp. R83-R109.)
A serious problem in conventional scanning force microscopy arises when a surface to be imaged has regions with steep slopes. On lithographically patterned surfaces, the problem is especially acute. Deep, narrow trenches and holes with undercut sidewalls are common. The ability of a scanning force microscope probe to accurately image surface topography depends strongly on the size and shape of the probe.
FIG. 1B shows the desirability of a high cone angle for a scanning force microscope probe. The angle .alpha. is greater than .beta. so that the tip with such a high cone angle .alpha. faithfully follows the left side of the trench. Such a probe is limited only in its ability to image the slope of the right side of the trench.
FIG. 1C also shows a probe with a high cone angle as in FIG. 1B but having a tip point with a large radius of curvature. In FIG. 1C, the scan line 104 produced by a blunt probe 102 encountering a sudden step creates a distortion in the image between 1 and 2. The region between 1 and 2 may be referred to as a "dead zone" because it cannot be reached by this probe. As shown in FIG. 1D and in the previous figures, the distortions caused by the size and shape of the tip or point can result in major inaccuracies in the imaging of a surface 112 and are a major problem in conventional scanning force microscopy.
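To make the distortion mechanism concrete, the recorded profile can be modeled as a morphological dilation of the true surface by the tip shape; the short sketch below is illustrative only (the tip radius, step height and grid spacing are hypothetical values, not taken from the figures).

import numpy as np

# Illustrative sketch (not from this document): the standard "tip dilation" model of
# probe-sample interaction. The recorded scan line is the morphological dilation of
# the true surface by the tip shape, which is why a blunt tip produces the "dead
# zone" between points 1 and 2 of FIG. 1C.
def dilate(surface, tip):
    # Recorded height at x_i = max over tip offsets u of surface(x_i + u) - tip(u).
    half = len(tip) // 2
    padded = np.pad(surface, half, mode="edge")
    return np.array([np.max(padded[i:i + len(tip)] - tip) for i in range(len(surface))])

# Hypothetical geometry: 1 nm grid, 20 nm tip apex radius, 50 nm downward step.
x = np.arange(400.0)
surface = np.where(x < 200, 50.0, 0.0)
r = 20.0
u = np.arange(-r, r + 1.0)
tip = r - np.sqrt(r**2 - u**2)               # spherical apex profile, height below the apex

scan = dilate(surface, tip)
dead_zone_nm = np.sum(scan > surface + 0.5)  # samples the tip cannot follow faithfully
print(f"apparent dead zone ~ {dead_zone_nm} nm for a {r:.0f} nm tip radius")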
Accordingly, in view of the foregoing requirements, what is needed is the ability to independently control the dimensions of the tip, its supporting column and cantilever.
4) The probe must also have a rigidity that is sufficient to maintain a constant dimensional relationship between the mounting block, cantilever and tip assembly. A lack of stiffness in the probe induces strong distortions in the measurement signal. External vibrations easily superimpose themselves on the cantilever vibrations used in imaging the surface when the tip assembly has insufficient stiffness or rigidity. Also, the probe itself may deform as it is scanned, resulting in noise and false images. A rigid probe and tip assembly are therefore an essential requirement for consistent imaging.
5) The probe must have consistent and reproducible dimensions. In order to provide consistently accurate images of a surface, a probe used for, e.g. AFM, must have the same dimensions as another probe made weeks or months later. The probe must have substantially identical mounting blocks, cantilever structures, resonant frequencies, and tip size and geometry. Few or no imaging instrument adjustments should be necessary to compensate for probe variations in geometry.
Conventional methods of manufacturing a cantilever and associated sensor tip are inadequate to reproducibly form tips capable of operating at the high frequency which may be required for adequate imaging. In order to achieve the objective of atomic resolution, the tip size must be comparable to atomic dimensions. Because the resonant frequency of the cantilever beam also plays a critical role in imaging as previously explained, the probe must not only be reproducible but must be made with constant dimensional parameters in order to control the mechanical resonant properties of the cantilever beam. Presently, tips and cantilever beams produced by typical wet chemical etching techniques are not adequately reproducible. Conventional scanning probe fabrication processes, when applied to making a plurality of tips across a single wafer, may result in tips which are of highly nonuniform shape (U.S. Pat. No. 5,201,992, column 5, lines 53-54). Thus, what is needed is a method for batch fabrication of an array of AFM probes which are highly uniform and which are capable of being fabricated with controlled geometry. Also, it is presently not possible to achieve fabrication of an array of SPM tip assemblies with any degree of reproducibility. Thus, probes presently are not interchangeable with a sufficient degree of predictability. The dimensions of the plurality of SPM probes in an array cannot be maintained with uniformity across a single wafer.
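The sensitivity of the cantilever's mechanical resonance to its dimensions, which is why constant dimensional parameters matter, can be estimated from standard rectangular-beam formulas; the sketch below uses hypothetical silicon dimensions and textbook approximations rather than values from this document.

import math

# Rough sketch using textbook rectangular-beam formulas (all dimensions hypothetical):
# spring constant and first resonant frequency of a cantilever of length L, width w,
# thickness t.
def cantilever(E, rho, L, w, t):
    k = E * w * t**3 / (4 * L**3)        # static spring constant [N/m]
    m_eff = 0.2427 * rho * w * t * L     # effective mass of the first bending mode [kg]
    f0 = math.sqrt(k / m_eff) / (2 * math.pi)
    return k, f0

E, rho = 169e9, 2330.0                   # single-crystal silicon, approximate values
k, f0 = cantilever(E, rho, L=125e-6, w=30e-6, t=4e-6)
print(f"k ~ {k:.0f} N/m, f0 ~ {f0 / 1e3:.0f} kHz")
# Since k scales with t^3 and f0 with t, thickness variations from wet etching
# translate directly into scatter in the resonant properties.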
6) The method of manufacturing must achieve a high volume production of probes. Due to the tremendous increase in diverse SPM and AFM applications, there is a need for easily reproducible, mass produced probes. During routine use, tips easily can become contaminated or damaged and need to be replaced.
Thus, what is needed is a process for reproducibly fabricating rigid probes of increasing geometric complexity such as tips for biomolecular imaging, CD measurement, or for storage applications. Such a process should be capable of reproducibly fabricating a variety of geometries depending upon the particular application (e.g. for AFM, MFM, STM, or other applications). The tips should be characterized by substantially atomic sharpness which easily can be reproduced and fabricated in large quantities with a high degree of precision.
What is also needed is a process for reproducibly manufacturing a scanning force microscope tip with a high cone angle, a high degree of stiffness, and with predictable reproducibility and high yield. This would be a great advantage over conventional methods of producing scanning force microscope tips. Such a tip assembly would be capable of producing accurate images of surfaces having a slope less than that of the cone angle of the tip.
Conventional scanning probe microscopy tip assemblies have critical problems concerning lack of reproducibility and inability to make rigid tip assemblies to tight manufacturing tolerances. The development and
improvement of scanning probe microscopy techniques is strongly dependent upon the development of appropriate probes. Suitable tips must be capable of being fabricated easily and reproducibly in large quantities with substantially atomic sharpness. What is needed is a method for making probes by mass production techniques. What is also needed is a method for making a plurality of probes having consistent or substantially uniform properties from a single wafer. This is necessary because sensor tip damage or contamination for an atomically sharp tip easily occurs in SPM experiments due to the extremely small separation between the sensor and the sample surface.
It is also required that the fabrication process allows for largely independent control over mounting block, cantilever, and tip assembly dimensions such that the probe can be tailored to meet the requirements of specific SPM applications. For example, in one application the AFM is mounted in a scanning electron microscope (SEM) such that the tip and the area being scanned by the tip can be imaged by the SEM while the AFM is in operation. This application requires that the tip assembly have a height greater than one-half the width of the cantilever such that when the probe is mounted above the sample, the distal end of the tip is visible to the scanning electron beam. To facilitate this application it is possible to create a cantilever which has a reduced width, e.g. is triangular in plan view, in the vicinity of the tip assembly.
Up to now, basically several techniques have been used for the production of tips and cantilevers. In one technique, a thin wire or piece of metallic foil is bent and etched electrochemically. As known from the production of STM tips, a radius of curvature of less than 1,000 .ANG. can be prepared by this method. However, tips formed by this method are difficult to prepare and are not easily reproduced at critically small dimensions. This method is also not easily adapted to making large numbers of tips concurrently and to high accuracy. Therefore, this method has been largely abandoned for all but a few applications.
Another method for cantilever preparation involves producing SiO.sub.2 cantilevers which are rectangular or triangular in shape by standard etching techniques of an oxidized Si wafer. Standard photo masks are used to define the shape of the cantilevers, so that the geometry is known and spring constants can be calculated. A probing tip is provided by tilting a corner of the cantilever toward the sample. The sharpness of such tips is not well controlled and as a consequence, multi-tip effects can become a severe problem.
Some progress has been made in the use of Si.sub.3 N.sub.4 instead of SiO.sub.2 as a cantilever material. See U.S. Pat. No. 5,066,358 as an example. Si.sub.3 N.sub.4 cantilevers are less fragile, and the thickness can be reduced from approximately 1.5 to 0.3 .mu.m. However, such cantilevers have low stiffness and low resonant frequency.
One conventional method of making a tip involves etching a pyramidal pit into a silicon wafer. See U.S. Pat. No. 5,116,462 as an example. Afterwards, a film of Si.sub.3 N.sub.4 is deposited which follows the contours of the silicon.
| 2.093593
|
EssentialAI/eai-taxonomy-stem-w-dclm-100b-sample
|
|Publication number||US4731553 A|
|Application number||US 06/913,783|
|Publication date||Mar 15, 1988|
|Filing date||Sep 30, 1986|
|Priority date||Sep 30, 1986|
|Publication number||06913783, 913783, US 4731553 A, US 4731553A, US-A-4731553, US4731553 A, US4731553A|
|Inventors||David A. Van Lehn, Edward H. Flaherty|
|Original Assignee||Texas Instruments Incorporated|
|Export Citation||BiBTeX, EndNote, RefMan|
|Patent Citations (5), Non-Patent Citations (2), Referenced by (100), Classifications (13), Legal Events (5)|
|External Links: USPTO, USPTO Assignment, Espacenet|
The invention is directed to integrated circuits, and more specifically to improved output driver circuits in integrated circuits manufactured using CMOS technology.
Integrated circuits which operate as logic circuits, including such LSI and VLSI circuits as microprocessors, microcomputers, and memories, present digital logic signals at their output. The desired output waveform of the transition of such digital logic signals going from a "0" logic state to a "1" logic state, or vice versa, is a step function, i.e., a square transition occurring in zero time. A rapid transition providing sufficient sink or source current is desired, of course, since the faster that an output circuit can provide a transition, the faster the receiving circuit can respond to the transition. However, the realization of such a logic transition in actual circuits provides a waveform having a non-zero rise or fall time, and having varying degrees of overshoot and undershoot, such overshoot and undershoot appearing not only at the output terminal, but also as noise on the power supply nodes and buses. Generally, as the desired speed of operation of circuits performing digital logic functions increases, the amplitude of such overshoot and undershoot increases, causing the circuit designer to make a tradeoff between the switching speed (and drive capability) of an output buffer circuit versus the amplitude of the noise generated by the output buffer circuit.
It is further useful, especially in VLSI logic circuits, to provide digital output in a parallel manner, i.e., a plurality of digital output signals occurring at approximately the same time. In such a parallel arrangement, it is convenient to share power supply buses among the individual output buffers in order to conserve chip area. However, the sharing of power supply buses among output buffer circuits allows the overshoot and undershoot at the power supply node of a switching output buffer to couple to the power supply node of a neighboring output buffer which is not switching. If the amplitude of the switching noise is sufficient, disruption of the output of the non-switching buffer may occur. In addition, circuits other than output buffers may also be sharing the power supply buses, and therefore may be disrupted by the switching noise of an output buffer. As a result, the trend of incorporating more and more logic functions into single integrated circuits, said functions necessarily being closer together, increases the sensitivity of integrated circuits to switching noise.
In addition, many such circuits are preferably constructed using CMOS (complementary metal-oxide-semiconductor) technology, because of the reduced power consumption and increased switching speed of digital logic circuits using CMOS technology. The use of CMOS in the construction of logic circuits further provides the ability to incorporate more logic functions into a semiconductor device of a given size, further increasing the amplitude of, and sensitivity to, such output overshoot and undershoot.
Prior techniques for dealing with this problem have included the use of separate ground nodes in an n-channel push-pull MOS output buffer, one of said ground nodes for sinking current during the initial transition from a high output state to a low output state, and the other for sinking DC current during the steady state condition of a low output state. Such an output buffer is shown and described in "A LAN System Interface Chip with Selectable Bus Protocols", by Donald Walters, Jr. et al., 1985 Digest of Technical Papers, 1985 IEEE International Solid-State Circuits Conference (IEEE, 1985), pp. 190-191. The separate ground nodes allow the DC ground node to sink output current without having the undershoot associated with the original transition coupled to it. However, the use of n-channel technology inherently precludes the circuit described in this paper from driving a "1" logic state to the full level of the biasing power supply, since the n-channel pull-up transistor will turn off as the output terminal of the circuit reaches a voltage greater than the biasing power supply voltage less a transistor threshold voltage. While further prior techniques have provided the use of a transistor in series between the output terminal and the drain of the pull-down transistor to damp the undershoot generated by a high-to-low transition, such a technique is ineffective for low-to-high transitions; if used in the pull-up side of an NMOS output buffer, the n-channel pull-up transistor would switch off at an even lower voltage, further reducing the drive level and increasing the switching time.
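The drive-level limitation mentioned above follows from simple arithmetic: an n-channel pull-up stops conducting once the output rises to within a threshold voltage of its gate. The numbers below are illustrative process values, not figures from the cited paper.

# Back-of-envelope sketch with illustrative values (not from the cited paper): an
# n-channel pull-up can only drive the output to about Vdd - Vt, because the
# transistor cuts off once its gate-to-source voltage drops to the threshold.
vdd = 5.0      # biasing supply voltage [V]
vtn = 0.8      # n-channel threshold voltage [V], hypothetical process value
v_out_high = vdd - vtn   # body effect ignored; in practice the loss is even larger
print(f"NMOS pull-up high level ~ {v_out_high:.1f} V versus {vdd:.1f} V for a p-channel (CMOS) pull-up")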
It is therefore an object of this invention to provide an output buffer circuit, useful in integrated circuits performing digital logic functions, which provides a fast logic transition in both directions, with overshoot and undershoot greatly reduced over prior output buffer circuits.
It is a further object of this invention to provide an output buffer which is capable of driving the output terminal fully to the voltage of the biasing power supply, but with reduced overshoot.
It is a further object of this invention to provide an output buffer which, when used in a parallel arrangement with other such output buffers, will isolate the power supply nodes of those output buffers which remain in a given output state from the noise created by neighboring output buffers which are making a logic transition.
It is a further object of this invention to provide an output buffer circuit manufacturable using CMOS technology.
It is a further object of this invention to provide an output buffer circuit which allows for the output waveform to be shaped in such a manner as to reduce the radio frequency interference (RFI) emanating from an integrated circuit using said output buffer.
Other objects and benefits of this invention will be apparent to those skilled in the art, based upon the description to follow herein.
FIG. 1 is an electrical diagram, in schematic form, of an output buffer circuit constructed according to the invention.
FIG. 2a is a timing diagram, representative of voltage over time, illustrating the operation of the output buffer circuit of FIG. 1 in the case where the output goes from a low logic state to a high logic state.
FIG. 2b is a timing diagram, representative of voltage over time, illustrating the operation of the output buffer circuit of FIG. 1 in the case where the output goes from a high logic state to a low logic state.
The invention may be incorporated in an output buffer circuit, constructed according to CMOS technology, where the bias power supply is available at two separate nodes, and where the reference supply is also available at two separate nodes. The data input signal is received by the gates of a first pair of driver transistors, having their source-to-drain paths connected in series between a first bias power supply node and a first reference node, the output of the circuit being at the point between the source-to-drain paths of the two transistors. One of these two driver transistors will conduct, depending upon the logic state of the input, which will in turn draw the output node to either the first bias power supply node or to the first reference supply node. Further incorporated into the output buffer circuit is a second pair of driver transistors, which are of different conductivity-types, which have their source-to-drain paths connected between the second bias power supply node and the second reference supply node, and which have their gates coupled to the input node through delay stages. The operation of the second pair of driver transistors is delayed so that the output node is quickly charged by the appropriate one of the driver transistors; once the second pair of driver transistors operates, though, the driver transistor which was conducting is cut off either by the inherent operation of one of the inverter transistors or by way of a feedback circuit. Since the bias power supply nodes and reference supply nodes for the first pair of driver transistors are isolated from the second pair of driver transistors, the overshoot or undershoot of the logic transition is minimized as the second pair of driver transistors drives the output node of the circuit.
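A simplified way to see the benefit of the isolated supply and reference nodes and the delayed second driver pair is the bus-noise relation V ≈ L·di/dt; the sketch below uses hypothetical package inductance and current values and only illustrates the general trade-off, not the actual circuit of FIG. 1.

# Simplified sketch, not the circuit of FIG. 1: supply-bus noise is roughly
# V = L * di/dt. Giving the fast initial drivers their own supply/reference nodes
# keeps their di/dt off the bus shared by quiet neighboring buffers, and splitting
# the drive lowers the peak di/dt seen on any one bus.
L_pkg = 10e-9        # hypothetical bus/package inductance [H]
i_peak = 50e-3       # hypothetical peak driver current [A]
t_edge = 1e-9        # hypothetical current rise time [s]

bounce_single_bus = L_pkg * i_peak / t_edge           # all drive current on one bus
bounce_split_bus = L_pkg * (i_peak / 2) / t_edge      # drive split across isolated buses
print(f"shared-bus bounce ~ {bounce_single_bus:.2f} V, per-bus bounce with split drivers ~ {bounce_split_bus:.2f} V")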
Referring to FIG. 1, a CMOS output buffer constructed according to the invention will be described. NOR gate 2 receives at its inputs data input signal DATA-- and enable signal ENABLE-- (the -- designator after the signal name representing a negative logic signal). Data signal DATA-- is shown as having negative logic values, as will be evident below, as the signal at output terminal 10 will correspond to the logical complement of data signal DATA--. The output of NOR gate 2 is connected to the gate of n-channel transistor 8, which has its drain connected to power supply node DVdd. Node DVdd is biased to supply voltage Vdd, which is a positive voltage such as +5 volts, in the embodiment of FIG. 1.
| 1.412521
|
m-a-p/FineFineWeb
|
Take a trip to digital culture art museums in London art galleries. Museum and digital culture: new perspectives and research
I wrote about Digital art exhibitions, 3D Art, Animation, software, technological art rendering, artwork and exhibitions, digital wall art and digital Art Prints. Take a trip to digital culture art museums in London art galleries. Read about museum and digital culture: new perspectives and research.
The OBJECTIVE of the section on digital Art exhibitions was to explain :
- The different types of digital exhibitions from a historical perspective.
- How audiences engage with digital art, and explore the ways we encode information, on a daily basis, through digital media
- To highlight technology based art work in art galleries.
Make your own digital prints
This section offers the user an intellectually interactive perspective.
It will share the historical experience and aesthetics of digital art and different galleries, especially in the US, UK and Germany.
Digital art exhibitions make us think about relationships in art and design and, in particular, the relationships between analogue and digital art.
Digital engagement with the aesthetics of arts forums is rising and a new relationship is developing between art and science. Last year at the Barbican's Digital Revolution Exhibition artists, filmmakers, musicians, architects, designers and videogame developers took part in a sell-out event.
The vision of the exhibition organisers is to create a culturally diverse platform by understanding computational media as core to the contemporary culture.
These exhibitions allow us to question identity in the physical world. The exhibitions apparently are interesting, and visitors who go to them happily spend 3 to 4 hours enjoying the work and the digital transformation of mindset it brings.
Digital Art for sale at London Galleries where you can meet digital artists from around the world
Why are London's art galleries Barbican, Tate Modern, ICA, Victoria & Albert museums moving towards Digital Art Exhibitions? What is the significance of this shift in the global economy?
The aim of this blog is to describe how computer and digital based art is gaining in popularity. This chapter traces the historical origins of digital art. It has been an interesting exercise to find out in particular how the different galleries in London are collaborating internationally with digital artists and curators. This page will cover the topics in brief, as digital art exhibitions are an exhaustive topic. It is mainly a taster for anybody interested in digital art. For enthusiasts there is a book called White Heat Cold Logic devoted to digital art. The blog is laid out to explain New Media Art. What is happening today is that technology is doing a lot of creative work. Artists can work from their laptops, mountaintops, office spaces, or by the seaside, and as society becomes digitized so does the art. Digital art exhibitions can be meeting grounds for intellectual people. Although I neither recommend a visit to the exhibitions nor endorse them, all the same they allow new forms of interaction, an amusing experience and intellectual stimulation.
There is varied information on this blog for different types of audiences. Digital Art has been evolving for the last fifty years despite backlash from various groups, including anti-technology groups. The exhibitions are gaining momentum with packed houses and sell-out events. There is a lot of material on the web. However, there was no other blog like this, therefore it may be an interesting and hopefully informative read.
I interviewed Dr Joel McKim, Lecturer in Media and Cultural Studies at Birkbeck College:
- The podcast explains how digital art works are bridging the gaps between fields and systems of knowledge.
What is the difference between New Media Art and Traditional Art exhibitions? What is New Media Art?
Why are London's art galleries Barbican, Tate Modern, ICA, Victoria & Albert museums moving towards Digital Art Exhibitions? What is the significance of this shift in the global economy?
The fight between corporate interests, governmental interests, and public interests gave birth to the web today. This led to the inter-connectivity and interactivity of the internet since the 1980s and is largely responsible for new media art. Scientists, architects, mathematicians, software engineers, artists and musicians who have varied interests in their respective fields inspire a lot of current new media art. The artists continue to work with many emerging media – from virtual reality to mobile phones.
There is much contemplation on how to define new media art? At present, art is stepping out of its traditional form and allowing people to build their own experiences with it. Non-linearity describes a project that escapes from the conventional linear narrative coming from novels, theatre plays and movies. Non-linear art usually requires audience participation or at least, the fact that the "visitor" is taken into consideration by the representation thus, altering the displayed content. The participatory aspect of new media art, which for some artists has become integral, emerged from Allan Kaprow's Happenings and became with the growth of the Internet, a significant component of contemporary art. New media art includes algorithms, mathematics, telecommunications, digital information systems, computers, software and data visualisation. The graphic image shows that a fine line separates fine art and computer art. EXHIBIT 3
New media art is not a new concept and has been evolving over time. The success of digital art exhibitions in the last ten years in galleries and museums across the globe shows that the digital era has set in.
Next I will consider the transformation of exhibitions in galleries and museums that have occurred in the digital era.
The information in this post is based on the (guest) lecture by Dr Lambert.
Computer Arts Society (CAS): Pioneering contributions through research at Birkbeck College to the Victoria and Albert museums for digital archives
Special thanks to :
CAS Chairman, Dr Nicholas Lambert
CACHe (Computer Arts, Contexts, histories etc) research project undertaken at University of London Birkbeck College.
History of Computer Art Exhibitions
In the 1950s and 60s access to computers and computer aided technology was very limited as there were no graphical interfaces. Many of the early practitioners, computer scientists and mathematicians programmed their own computers. Thus ensuring sole use of computer creativity. During this era only research laboratories and corporations could afford computing technologies. The artists were programming the computers themselves so they had the opportunity to experiment. They considered the computer as an autonomous machine that would enable them to carry out visual experiments in an objective manner. More information on the history of computer art can be found at the Victoria and Albert museum's institution's website.
Experimental Arts Technology Exhibitions
In 1966 Experimental Arts Technology (E.A.T) and Bell Labs organised a series of performances enabling ten contemporary artists and thirty engineers and scientists to use the new technologies. The nine evenings of art and technology were a huge collaborative show in New York. The show was not a success but it paved the way for collaborative groups to work together. In those days Bell Labs became the hub for artists and musicians to use the equipment out of hours. The event is recognised in the art world for shaping the pivotal relationship between art and technology. This led to the development of early computer-generated animation, such as the microfilm printer that was able to expose letters and shapes onto 35mm film. The seeds of this were planted at the E.A.T exhibitions.
Much has changed since then – the emergence of 3D printing we know today has introduced a new bridge to new media art, joining the virtual and the physical worlds. This was not possible in the Bell Labs era. The rise of this technology has allowed artists to blend the computational base of new media art with the traditional physical form of sculpture. A pioneer in this field was artist Jonty Hurwitz, who created the first known anamorphosis sculpture using this technique.
Cybernetic Digital Art Exhibition
Cybernetic Art uses computer networks as a medium. Cybernetic art was introduced to contemporary art exhibitions in 1968 by the Institute of Contemporary Arts (ICA) in London. Cybernetic Serendipity was the landmark exhibition curated by Jasia Reichardt in 1968. This exhibition at the ICA became the focus of attention and attracted national and international press. Cybernetic Serendipity was the first international exhibition in the UK between the arts and new technology. This exhibition was designed by Franciszka Themerson and presented the work of over 130 participants including composers, engineers, artists, mathematicians and poets. The exhibition ran from 2 August – 20 October 1968 and was seen by some 60,000 visitors.
Its aim was to present an area of activity which manifested artists' involvement with science, and scientists' involvement with the arts: in particular to show the links between the random systems employed by artists, composers and poets, and those involved with the making and the use of cybernetic devices.
Cybernetic Serendipity dealt with possibilities rather than achievements, especially since in 1968 computers had not yet revolutionised music, art, or poetry, in the same way that they had revolutionised science.
The biggest dilemma for this project was that funding was scarce as cooperative funding was from technological companies such as IBM. IBM diverted funds to the military at the height of the Vietnam War and worked on reduced profits.
Despite lack of funding the exhibition unearthed some of the tensions and underlined the importance of technological art in galleries. It also highlighted how these works are presented for example the idea of feedback and things that would show transformational changes.
The exhibition itself was popular and engaging, however the critics were sceptical about how machines would turn out drawings. This exhibition was also important because it laid the foundation blocks which are practiced even till today in digital art galleries.
After the exhibition in 1968 there was no other exhibition like it for many years, and it has gained historical importance because it proved to be a landmark exhibition of its time. It has attracted a lot of media attention recently due to its pioneering contribution to digital art.
Los Angeles County Museum of Art Exhibition
In 1970 Maurice Tuchman curator of modern art at Los Angeles County Museum of Art (LACMA) had a massive art and
| 1.488794
|
m-a-p/FineFineWeb
|
ated the man. The utmost he could do was to instruct its members regarding the seriousness of the situation and seek to admonish them to remove the wicked person from their midst. (1 Cor. 5:13). If the church was right spiritually they would pay attention to Paul, but if they disregarded his exhortations, while they would be wrong spiritually, they would be legally right.
An apostle can deal with the disorders of a church whenever his advice and counsel are sought, as was the case with Paul and the church in Corinth. It was because of their enquiries that he could say to them, "And the rest will I set in order when I come." (1 Cor. 11:34). But the point to note here is that "the rest" of the matters which Paul intended to "set in order" on his arrival in Corinth were to be attended to in the same way as those he had dealt with in his Epistle, and they were dealt with doctrinally. In like manner as he had instructed them concerning certain affairs there, so he would instruct them concerning the remaining matters on his arrival; but the Corinthians themselves, not Paul, were the ones who would have to deal with the situation.
Since Peter and John were apostles, how did it come about that they were elders of the church in Jerusalem? (1 Pet. 5:1; 2 Jn. 1; 3 Jn. 1). They were elders as well as apostles because they were not only responsible for the work in different places, but also for the church in their own place. When they went out they ministered in the capacity of apostles, bearing the responsibility for the work in other parts. When they returned home they performed the duties of elders, bearing the responsibility of the local church. It was not on the ground of their being apostles that they were elders in Jerusalem; they were elders there solely on the ground of their being local men of greater spiritual maturity than their brethren.
Paul was sent out from Antioch and he founded a church in Ephesus. We know he did not hold the office of elder in any church, but it would have been possible for him to be an elder in Antioch, not in Ephesus. He spent three years in Ephesus, but he worked there in the capacity of an apostle, not an elder: that is, he assumed no responsibility and exercised no authority in local affairs, but simply devoted himself to his apostolic ministry. Let us note carefully that there are no elders in the universal Church and no apostles in the local church.
It is the responsibility of every saved man to serve the Lord according to his capacity and in his own sphere. God did not appoint elders to do the work on behalf of their brethren. After the appointment of elders, as before, it is still the brethren's duty and privilege to serve the Lord. Elders are also called "bishops" (Acts 20:28; Tit. 1:5,7). The term "elder" relates to their person; the term "bishop" to their work. "Bishop" means "overseer," and an overseer is not one who works instead of others, but one who supervises others as they work. God intended that every Christian should be a "Christian worker," and He appointed some to take the oversight of the work so that it might be carried on efficiently. It was never His thought that the majority of believers should devote themselves exclusively to secular affairs and leave the church matters to a group of spiritual specialists. This point cannot be over-emphasized. Elders are not a group of men who contract to do the church work on behalf of its members; they are only the head-men who superintend affairs. It is their business to encourage the backward and restrain the forward ones, never doing the work instead of them, but simply directing them in the doing of it.
The responsibility of an elder relates to matters temporal and spiritual. They are appointed to "rule," and also to "instruct" and "shepherd." "Let the elders that rule well be counted worthy of double honor, especially those who labor in the word and in teaching" (1 Tim. 5:17). "Tend the flock of God which is among you, exercising the oversight, not by constraint, but willingly, according unto God; nor yet for dishonest gain, but of a ready mind; neither as lording it over the charge allotted to you, but making yourselves examples to the flock " (1 Pet. 5:2,3).
The Word of God uses the term "rule" in connection with the responsibilities of an elder. The ordering of church government, the management of business affairs and the care of material things, are all under their control. But we must remember that a Scriptural church does not consist of an active and a passive group of brethren, the former controlling the latter and the latter simply submitting to their control, or the former bearing all the burden while the latter settle down in ease to enjoy the benefit of their labors. "That the members should...care one for another" is God's purpose for His Church (1 Cor. 12:24). Every church after God's own heart bears the stamp of "one another" in all its life and activity. Mutuality is its outstanding characteristic. If the elders lose sight of that, then their ruling the church will soon be changed to lording it over the church. They were not appointed to be "lords" of their brethren, but to be their "examples." What is an example? It is a pattern for others to follow. For the elders to be a pattern to the brethren implied that the brethren worked and the elders worked as well. It implied that the elders worked with special diligence and care, so that the brethren should have a good example to follow. Such is the Scriptural conception of the rule of the elders.
But their responsibility does not merely relate to the material side of church affairs. If God has equipped them with spiritual gifts, then they should also bear spiritual responsibility. Paul wrote to Timothy, "Let the elders that rule well be counted worthy of double honor, especially those who labor in the word and in teaching" (1 Tim. 5:17). It is the responsibility of all elders to control the affairs of the church, but such as have special gifts (as of prophecy or teaching) are free to exercise these for the spiritual edification of the church. Paul wrote to Titus that an elder should "be able both to exhort in the sound doctrine, and to convict those who contradict" (Tit. 1:9). The preaching and teaching in the local church is not the business of apostles but of local brethren who are in the ministry, especially if they are elders.
On the spiritual side of the work the elders help to build up the church not only by teaching and preaching but by pastoral work. To shepherd the flock is peculiarly the work of elders. Paul said to the Ephesian elders: "Take heed unto yourselves, and to all the flock, in the which the Holy Spirit has made you overseers, to feed the church of God" (Acts 20:28). And Peter wrote in the same strain to the elders among the saints of the Dispersion, "Tend the flock of God which is among you" (1 Pet. 5:2). The present-day conception of pastors is far removed from the thought of God. God's thought was that men chosen from among the local brethren should pastor the flock, not that men coming from other parts should preach the Gospel, found churches, and then settle down to care for those churches.
This work of ruling, teaching and shepherding the flock, which we have seen to be the special duty of the elders, does not devolve upon one man only in any place. In Scripture we see that there was always more than one elder or bishop in a local church. If the management of the entire church rests upon one man, how easy it is for him to become conceited, esteeming himself above measure and suppressing the other brethren (3 Jn.). God has ordained that several elders together share the work of the church, so that no one individual should be able to run things according to his own pleasure, treating the church as his own peculiar property and leaving the impress of his personality upon all its life and work. To place the responsibility in the hands of several brethren rather than in the hands of one individual, is God's way of safeguarding His Church against the evils that result from the domination of a strong personality. God has purposed that several brothers should unitedly bear responsibility in the church, so that even in controlling its affairs they have to depend one upon the other and submit one to the other. Thus in an experimental way they will have opportunity to give practical expression to the truth of the Body of Christ. As they honor one another and trust one another to the leading of the Spirit, none taking the place of the Head but each regarding the others as fellow-members, the element of "mutuality," which is the peculiar feature of the Church, will be preserved.
| 1.285093
|
HuggingFaceFW/fineweb-edu
|
IMPACT OF INFORMATION COMMUNICATION TECHNOLOGY IN THE DISSEMINATION OF INFORMATION IN SPECIAL LIBRARIES
1.1. BACKGROUND OF STUDY
Information technology has been evolving rapidly during the last half of the 20th century, particularly since the 1960s and 1970s. It has revolutionized the media and its modes of computing, storing and communicating information. The changes in the collection and distribution of information have affected society in many ways. The use of information communication technologies in libraries has tremendously increased because it provides enhanced user satisfaction and cost-effectiveness and ensures faster and simpler programmes, resulting in rapid responses and easier operational procedures. Generally, the use of ICT in libraries includes on-line access to the library collection, the use of bibliographic databases, on-line literature searching and the use of personal computers. Information communication technologies (ICT) make use of computing and communications facilities in support of teaching and learning. Information, communication and technology entail the acquisition, processing, storage and dissemination of information by means of computers, office machines and telecommunication (Ehikhamena F. A. 1993). In other words, computers provide the processing, storage and retrieval capabilities while telecommunication provides the capability for the transfer and communication of data from one work-station to another. Information communication technology is a term which encompasses the notion of the application of technologies to information handling, generation, storage, processing, retrieval, dissemination etc. (Marghalani M. A. 1987).
In an increasingly competitive environment the concern of managers of library and information services must be not only with the survival of our library and information services, but also with their development within a clear and coherent policy framework. Just as in any other sector of the economy, library services develop. There needs to be a clear view of the direction that development should take to best meet the developing needs of users. The new information age has permeated all aspects of human existence (Spies, 1998). It has brought certain challenges to the academic world (Pinfield, 2001).
In Nigeria, this challenge is even more critical today. It is apparent that any attempt to have meaningful academic communication can only be successful through information and communication technology, which is the application of computers and their peripherals in communicating data within the shortest available time and over geographical spaces. This study is based on the availability and the utilization of ICT facilities used in libraries. Since the library is the crux of academic activities, it is seen as an instrument of social change, public enlightenment and national progress (Aboyade B. O 1982). It serves as a repository of information and knowledge that provides the vital underpinning in the socio-economic, political and cultural spheres. In relating this function of libraries to information technologies, (Tamuno, O. G, 1997) comments that it is one thing to generate as much information as possible; it is quite another thing, however, to make such information accessible and available when needed, which requires proper storage, processing, accessibility and retrieval. The relevance of information communication technologies in libraries, particularly in special libraries, is in activities concerned with information storage and retrieval. These activities are concerned with in-house housekeeping routines such as acquisition, cataloguing, serials control, circulation of library material and the collection of management statistics. In libraries, information networking has the potential for relieving humankind of many of its burdens. Networking refers to a broad system of computer terminals, video display units, telephones etc. which are used for data communication services. Therefore, to network, in the special libraries parlance, involves the interconnection of computer terminals etc. The heart of information networking or connectivity is the issue of telecommunication, linking together libraries, librarians and users in a
LOCAL AREA NETWORK (LAN) or WIDE AREA NETWORK (WAN)
They endeavor to eliminate many of the repetitive and boring tasks related to the processing and communication of information. The volume and rate of the generation of information, as well as the demand for it, have made its conservation, storage and manipulation by electronic means imperative in an information-conscious society. Advances in modern technology have naturally led to the de-emphasizing of the traditional methods of attaining various information processing and transfer objectives, hence the introduction of computers and other modern information technologies in libraries.
THE EFFECT / IMPACT OF INFORMATION COMMUNICATION AND TECHNOLOGIES ON DISSEMINATION OF INFORMATION IN SPECIAL LIBRARIES
The presence and usage of the technological infrastructure in information retrieval and dissemination have helped to:
- Provide better access to information.
- Encourage information and resource sharing among information institutions.
- Contribute to industrial bibliographic control.
- Facilitate access to international databases.
- Deliver information services efficiently and effectively.
- Facilitate the generation and dissemination of management reports.
- Give accurate and up-to-date information.
- Enhance the status of information professionals.
1.2 HISTORICAL BACKGROUND OF NATIONAL MATHEMATICAL CENTRE, ABUJA.
The National Mathematical Centre came into being on January 1, 1988, although the decree (Decree No. 40) giving it legal existence was not promulgated until December 12, 1989. The creation of the centre was a result of the mounting concern shown by the nation as a whole about the decline in the number of mathematical scientists and in the interest shown by students in the study of mathematics, theoretical physics, and computer science generally. The National Mathematical Centre was established by the Federal Government to develop appropriate initiatives and resources of international standing for re-awakening and sustaining interest in the mathematical sciences at all levels in Nigeria, and also as an adequate response to the dramatic decline in the production of teachers and specialists in the mathematical sciences at all levels. Presently, the centre considers itself the nation's apex research coordinating institute in the mathematical sciences. We wish to state that we are committed to strategic planning as a way of making the centre fully realize the vision of its founding fathers as a centre of excellence in the mathematical sciences in the West Africa sub-region and, more importantly, of creating an enabling environment favorable to speedy technological and industrial revolution by promoting manpower capacity building appropriate to jumpstart the growth process. See Decree No. 40 of 1989, establishing the National Mathematical Centre, Abuja. The general aims and objectives of the National Mathematical Centre, as spelt out in the law establishing it (Decree 40 of 1989), are as follows:
1. To train and develop high level personnel in mathematical sciences, including mathematics, statistics, computer science and theoretical physics for Nigerian and African institutions.
2. To create a Resource centre to serve National and international communities as a focal point for advanced research and training in mathematical sciences and applications.
3. To enhance collaborations among mathematical scientists especially between young Nigerian scientists and other advanced and experienced scientists from within and outside Nigeria.
4. To stimulate enthusiasm for the physical sciences in young Nigeria students and scholars.
5. To prepare Nigeria for a leading role in mathematical sciences.
6. To attract good mathematical scientists from all over the world into services of Nigerians.
7. To encourage and support activities leading to the improvement of the teaching and learning of mathematical sciences at all levels.
8. To conduct series of specialized lectures or courses for the purpose of upgrading postgraduate students in the field of mathematical sciences to a level where they can begin to understand research papers and seminars.
9. To conduct series of specialized lectures for advanced post-graduate as well as post-doctoral and other participants based on a set of pre-assigned research papers, with the objective of generating "questions" that would be collected, discussed and used to determine new research directions for the participants.
10. To conduct seminars, workshops and symposia in such areas as the Academic Board of Centre may, from time to time, determine or plan. The National Mathematical centre is one of the 10 institutes selected as the first nodes of the network of international science and technology centers for sustainable development in the south, and its mandate is computer software research, development and application.
1.3 STATEMENT OF THE PROBLEM
Nigeria, being one of the few countries of the world which began to adopt information technologies in the 1990s, is backward but somewhat fortunate because, generally, the world of ICT-based libraries is still in its infancy. A number of problems contribute to this situation.
1. Administration barriers: This problem is much more acute in public university libraries than private university libraries and special libraries. Administrators, policy makers, and government executives do not take seriously the importance of ICT. Moreover, library administrators have failed to make its importance clear. Lack of knowledge of technological development has created a significant barrier to the installation or development of ICT facilities in libraries.
2. Lack of shared initiatives: Inadequate shared initiatives have made plans of ICT application in libraries very low.
3. Lack of skilled manpower: Library professionals in Nigeria over the past few years have not had adequate knowledge regarding computer applications and automation.
4. Lack of financial support: Inadequate financial support has made possibility of ICT application in libraries more complex.
5. Lack of infrastructure: Inadequate physical facilities hamper the growth of ICT. Telecommunications infrastructure and an uninterrupted power supply are crucial needs for Nigeria.
6. Lack of ICT resources: ICT means more than the use of computer. Less attention has been paid to other communication and related technologies. Some libraries have no internet connection. Most have a manual circulation operation. They have no barcode readers for use in automated circulation. Most libraries are using microcomputers only with no server in most of the libraries.
7. Absence of local resources: Most of the libraries use the CDS/ISIS in developing other software.
8. Psychological barriers:
| 2.00751
|
openbmb/Ultra-FineWeb
|
Muscle Building
Simple Approaches On How You Can Get Larger Muscles
Building muscle is frequently done for enjoyment, as a hobby, or out of necessity. Whatever your reason for wanting to build muscle, you can find information here that will be useful. Read the section that follows to gain some helpful insights. How To Get A Six Pack is not hard with a good exercise plan and diet.
Know exactly where your limit is, and push yourself to it. For every set you do, you should push until you are genuinely unable to lift any more. Try to force yourself to your limits. You can shorten your sets once you start to tire, but do not stop until you have no strength left to continue.
Meet your dietary needs by eating protein in even portions at each meal and snack throughout the day. Spreading out your protein intake in this way helps ensure you eat enough of this important nutrient to build muscle efficiently. If you eat six meals supplying 30 grams each, you will meet a protein requirement of 180 grams for the day.
Squats, presses and dead lifts are proven, effective exercises for increasing muscle. Combining the three can help you get in shape rapidly and will consistently build muscle. These three are the primary focus, but there can also be other exercises in your routine.
A good idea when trying to build muscle mass is to eat protein-rich foods before and after your workout. For example, eat around 15 grams half an hour before you train, then take in another 15 when you're finished. You can get that amount of protein from a couple of tall glasses of milk.
If you are new to muscle building, work on your form before working on your strength. The slightest error in a repetition can make you perform the exercise more and more incorrectly as you add weight, which raises the risk of the injuries you want to avoid.
Plyometric exercises are a good idea. They work the smaller, "fast-twitch" muscle fibers, stimulating muscle growth. Because acceleration is essential, plyometric exercises are not unlike ballistic moves. For example, when performing plyometric push-ups, you let your hands spring off the floor and explode as high as you can.
Adapt your diet to how much you exercise. You should eat enough to gain about a pound of body weight per week. Find ways to take in more calories, and if you do not see any weight gain after fourteen days, consider eating even more.
Give the farmer's walk a try. Do this by holding dumbbells at each side of your body and walking as far as possible. Keep your strides long and your abs tight. When you can't keep going, take a minute and a half break, then start again. Do this several times a day.
Have your questions about muscle building been answered? If you wish to learn more, do more research online. Muscle building advice is not fixed; new things are being discovered about it all the time, so keep up with the learning process for ongoing success. Read more…
Muscle Building Secrets The Pros Don't Want You To Know
Body building is a healthy activity for people of all ages. This article includes many tips on maximizing the benefits from your workouts for an effective program in muscle building. Continue reading for more information.
Photograph your progress regularly so that you can easily see the gains you are making. It can be difficult to gauge your progress simply by checking yourself out in the mirror. When you have snapshots in time to compare, you'll realize just how much growth you've developed.
A lot of people fail to use proper technique when lifting weights because they are too focused on speed. You'll always get better results if you complete repetitions slowly and correctly, rather than if you try to get your reps done too fast. Be patient and make sure that your routines are executed in the proper way.
Make your muscle building goals realistic and reasonable. Results take a long time to appear. Cheating by using steroids, stimulants, and other substances can harm your body in both the short term and the long term, and may lead to chronic health problems.
Always do compound exercises so you can have the most muscle growth possible. These exercises use many muscle groups in the same lift. For instance, a bench press uses your shoulders, triceps, and chest at the same time.
Drink a protein shake 30 minutes before you begin lifting weights. This is an effective way to boost your workout's efficiency; protein shakes won't over-fill you, but will give extra energy to your muscles. Consider adding some dairy to your shake, such as low fat yogurt or milk, for an extra energy punch.
Make sure to use creatine for your muscle building routine. Creatine is able to help you gain muscle by up to ten pounds the first several weeks of your workout due to the fact that it helps increase the amount of reps you are able to do. Eating around three or five grams prior to working out and then consuming that amount after working out can help you get the best results.
You must consume a sufficient amount of protein if you are serious about building muscle mass. Protein is what builds strong muscles and what they are made from. If you do not eat enough of it, your body cannot create new muscle tissue. Make sure that two or more of your larger meals, as well as a couple of your daily snacks, contain protein.
If you are having problems staying motivated, you may find it helpful to establish short-terms goals for yourself. Once you have met your goals, reward yourself. Motivation plays a key role in any long-term commitment. Why not pick rewards that will help your muscle building efforts? As an example, get a massage, which will help increase your blood flow and benefits muscle growth.
Perhaps you were willing to work hard at building your muscles before you read this article. Now you should know more about how to effectively build your muscles. Use the tips you just read to help you reach your muscle-building goals.
Find more suggestions about how to build muscle fast on our Youtube channel. Read more…
Build Muscle And Shed Fat With These Tips From The Pros
If you are like most anyone, you have dreamed of having a stronger body with firm, lean muscle mass. Yet, achieving a toned and cut physique is a goal that eludes many. Keep reading into this article for a selection of suggestions that you can apply towards the body you want.
Build Muscle
For the best results when trying to build muscle, change your routine often. Doing the same exercises over and over for weeks on end will cause your results to plateau, so find ways to mix it up and work every muscle group by altering your routine. You might change the number of reps, the exercises you perform, or the intensity of each exercise.
You should consider getting a personal trainer. A personal trainer is trained in what specific exercises will help you build muscle. Your personal trainer will also help you with a variety of tips including things like what you should be eating as well as supplement advice. In addition to this, your personal trainer will push you when you need to be pushed to go that extra mile to help you build your muscles.
Keep going. Because building muscle does not give you the instant gratification of other forms of activity, it can be tempting to give up. If you stay dedicated, you will see those results, but the trick is to stay dedicated. Keep thinking about why you want to build muscle, and keep that in mind every day. You will be able to do it.
Make sure you are eating enough. Even if you are trying to lose weight while you build muscle, it is important that you are consuming sufficient calories. When your body is deprived of its fuel, it will be difficult to build muscle. An ideal diet for muscle gain is high in protein and low in fat and refined (processed) carbohydrates.
Focus on working out your largest muscle groups. Concentrating your efforts on large muscle groups such as the back, chest and legs will help you to build muscle faster. Exercises such as squats, pull-ups, bench presses, and dips are ideal for this. These kinds of exercises are generally more intense, and will help boost your protein synthesis.
You should ensure you are getting enough protein late at night. Your growth-hormone levels actually peak during the night. This means that your body is ready to build muscle. In order to prevent muscle from being cannibalized, consume casein protein immediately before going to bed. Casein protein slowly digests throughout the night and provides your muscles with needed amino acids.
Make sure you are eating enough food to support new muscle growth. Many people struggle with not eating enough to support the kind of growth they are trying to achieve. If you are trying to lose weight and build muscle at the same time, make sure you are eating protein rich foods to help with muscle growth.
You will be able to build muscle faster if you take breaks between workout days, in contrast to working out every day. The reason for this is that muscles heal and grow while you are resting, not while you are exercising. Create a workout routine that alternates between workout and rest days.
Hopefully you have found the tips contained in this article to be highly beneficial to your muscle building efforts.
1. A circuit for receiving a spread spectrum signal, comprising:
generating means for generating a spreading code;
despreading means for despreading the spread spectrum signal by using a despreading code to produce a despread signal;
phase control means for controlling a phase of the spreading code for tracking synchronization at a phase control rate to produce the despreading code, the phase control rate being variable according to a rate control value;
power detecting means for detecting a signal power and an interference power based on the despread signal;
rate control means for producing the rate control value to control the phase control rate of the phase control means based on a magnitude of the signal power relative to the interference power; and
means for recovering received information from the despread signal.
2. The circuit according to claim 1, wherein the rate control means comprises:
setting means for setting a predetermined number of power levels based on the interference power, the power levels being sorted from a lowest power level to a highest power level;
comparing means for comparing the signal power with the power levels to determine the magnitude of the signal power;
means for producing the rate control value which is in proportion to the magnitude of the signal power.
3. The circuit according to claim 2, wherein the setting means comprises the predetermined number of multipliers for multiplying the interference power by different values which are sorted in ascending numeric order to produce the power levels, respectively.
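Read together, claims 2 and 3 amount to a threshold comparison: the interference power is multiplied by an ascending set of values to form sorted power levels, and the rate control value is proportional to how many of those levels the measured signal power exceeds. A minimal Python sketch of that comparison is given below; the multiplier values and the function name are illustrative assumptions, not taken from the patent.

    def rate_control_value(signal_power, interference_power,
                           multipliers=(1.0, 2.0, 4.0, 8.0)):
        # Power levels sorted from lowest to highest (claim 2), produced by
        # multiplying the interference power by ascending values (claim 3).
        levels = [interference_power * m for m in sorted(multipliers)]
        # The "magnitude" of the signal power relative to the interference
        # power is taken here as the number of levels it exceeds; the rate
        # control value is proportional to that magnitude.
        return sum(1 for level in levels if signal_power > level)

    # Example: with interference power 1.0, a signal power of 5.0 exceeds the
    # levels 1, 2 and 4 but not 8, giving a rate control value of 3.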
4. The circuit according to claim 1, wherein the phase control rate becomes higher as the magnitude of the signal power relative to the interference power is larger.
5. The circuit according to claim 1, wherein the phase control rate becomes lower as the magnitude of the signal power relative to the interference power is smaller.
6. The circuit according to claim 1, wherein the phase control means controls the phase of the spreading code for tracking synchronization at intervals of a time period to produce the despreading code, the time period being variable according to the rate control value.
7. The circuit according to claim 6, wherein the time period becomes shorter as the magnitude of the signal power relative to the interference power is larger.
8. The circuit according to claim 6, wherein the time period becomes longer as the magnitude of the signal power relative to the interference power is smaller.
9. The circuit according to claim 1, wherein the phase control means comprises:
phase shift means for shifting the phase of the spreading code by a first predetermined fraction of a chip duration of the spreading code in a direction determined by a shift direction signal to produce the despreading code;
difference detecting means for detecting a correlation difference between an advance cross-correlation value and a retard cross-correlation value;
integral means for integrating the correlation difference with an integration time constant variable according to the rate control value to produce an integral value; and
shift control means for producing the shift direction signal determined by sign of the integral value when a magnitude of the integral value exceeds a predetermined value.
10. The circuit according to claim 9, wherein the difference detecting means comprises:
first phase shift means for shifting the phase of the despreading code by a second predetermined fraction of the chip duration in an advance direction to produce a first phase-shifted code;
second phase shift means for shifting the phase of the despreading code by the second predetermined fraction of the chip duration in a retard direction to produce a second phase-shifted code;
first means performing a cross-correlation between the spread spectrum signal and the first phase-shifted code to produce the advance cross-correlation value;
second means performing a cross-correlation between the spread spectrum signal and the second phase-shifted code to produce the retard cross-correlation value; and
means for producing the correlation difference between the advance cross-correlation value and the retard cross-correlation value.
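Claims 9 and 10 together describe an early-late tracking loop: advance and retard copies of the despreading code are each correlated with the received signal, their difference is integrated with a time constant governed by the rate control value, and the code phase is nudged by the sign of the integral once its magnitude exceeds a threshold. The sketch below is a simplified, real-valued illustration; the two-samples-per-chip assumption (so a one-sample roll is a half-chip offset), the loop constants and the function names are mine, not the patent's.

    import numpy as np

    def correlate(signal, code):
        # Normalised cross-correlation at zero lag.
        return float(np.dot(signal, code)) / len(code)

    def tracking_step(signal, code, integral, rate_control_value, threshold=0.1):
        # Early and late codes: a one-sample advance/retard, i.e. half a chip
        # if the code is sampled at two samples per chip (claim 10).
        early = np.roll(code, -1)
        late = np.roll(code, +1)
        diff = correlate(signal, early) - correlate(signal, late)
        # The integration gain grows (the time constant shortens) with the rate
        # control value, so tracking reacts faster when signal power dominates
        # interference, mirroring claims 4 and 7.
        gain = 0.1 * max(rate_control_value, 1)
        integral += gain * diff
        shift = 0
        if abs(integral) > threshold:
            # Claim 9: the shift direction is the sign of the integral.
            shift = 1 if integral > 0 else -1
            integral = 0.0
        return shift, integral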
11. A method for receiving a spread spectrum signal, comprising the steps of:
a) generating a spreading code;
b) despreading the spread spectrum signal by using a despreading code to produce a despread signal;
c) controlling a phase of the spreading code for tracking synchronization at a phase control rate to produce the despreading code, the phase control rate being variable according to a rate control value;
d) detecting a signal power and an interference power based on the despread signal;
e) producing the rate control value to control the phase control rate based on a magnitude of the signal power relative to the interference power; and
f) recovering received information from the despread signal.
12. The method according to claim 11, wherein the step e) comprises the steps of:
setting a predetermined number of power levels based on the interference power, the power levels being sorted from a lowest power level to a highest power level;
comparing the signal power with the power levels to determine the magnitude of the signal power;
producing the rate control value which is in proportion to the magnitude of the signal power.
13. The method according to claim 12, wherein the interference power is multiplied by different values which are sorted in ascending numeric order to produce the power levels, respectively.
14. The method according to claim 11, wherein the phase control rate becomes lower as the magnitude of the signal power relative to the interference power is smaller.
15. The method according to claim 11, wherein the phase of the spreading code for tracking synchronization is controlled at intervals of a time period to produce the despreading code, the time period being variable according to the rate control value.
16. The method according to claim 15, wherein the time period becomes longer as the magnitude of the signal power relative to the interference power is smaller.
17. The circuit according to claim 1, further comprising:
detecting means for detecting symbol-synchronization from the despread signal to produce a symbol-synchronization signal; and
control means for controlling synchronization acquisition.
18. The circuit according to claim 17, wherein the phase control means further comprises:
third means performing a cross-correlation between the spread spectrum signal and the despread code to produce a cross-correlation value; and
flag means for producing a flag when the cross-correlation value is greater than each of the advance cross-correlation value and the retard cross-correlation value,
wherein the control means controls the phase control means such that the phase of the spreading code is sequentially shifted for the synchronization acquisition until the flag is in synchronization with the symbol synchronization signal, and then switches to the synchronization tracking when the flag is in synchronization with the symbol synchronization signal.
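Claims 17 and 18 describe the hand-off from acquisition to tracking: the code phase is stepped sequentially, a flag is raised whenever the punctual cross-correlation exceeds both the advance and retard correlations, and the controller switches to tracking once that flag coincides with the symbol-synchronization signal. A rough sketch of that decision logic follows, with the correlation values and the synchronization input assumed to be computed elsewhere.

    def acquisition_control(punctual, advance, retard, symbol_sync, state):
        # Flag means of claim 18: punctual correlation beats both early and late.
        flag = punctual > advance and punctual > retard
        if state == "acquisition":
            if flag and symbol_sync:
                # The flag is in synchronization with the symbol sync signal,
                # so stop stepping the phase and switch to tracking (claim 18).
                return "tracking", False
            return "acquisition", True      # keep shifting the code phase
        return "tracking", False            # already tracking

    # The second return value indicates whether the code phase should be
    # stepped again for the next trial during acquisition.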
19. The circuit according to claim 17, wherein the power detecting means detects the signal power based on a unique word which is read from the despread signal according to the symbol synchronization signal, and detects the interference power based on the unique word.
20. The circuit according to claim 19, wherein the signal power is obtained by inputting a plurality of symbols of the unique word according to the symbol synchronization signal, converting the symbols into a predetermined quadrant of a signal constellation plane, adding and averaging powers of the symbols.
21. The circuit according to claim 20, wherein the interference power is obtained by inputting the symbols of the unique word according to the symbol synchronization signal, calculating and averaging a difference between the signal power and each power of the symbols.
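Claims 19 to 21 derive both power estimates from the known unique-word symbols: each symbol is converted into one quadrant of the constellation plane, the folded symbols are averaged to obtain the signal power, and the average difference between individual symbol powers and that signal power is taken as the interference power. One plausible reading is sketched below in Python with numpy; the folding convention and the coherent averaging step are assumptions on my part.

    import numpy as np

    def powers_from_unique_word(symbols):
        symbols = np.asarray(symbols, dtype=complex)
        # Convert every symbol into the first quadrant of the constellation
        # plane (claim 20).
        folded = np.abs(symbols.real) + 1j * np.abs(symbols.imag)
        # Add and average the folded symbols; the squared magnitude of the
        # average is used as the signal power.
        signal_power = float(np.abs(np.mean(folded)) ** 2)
        # Average the difference between each symbol's power and the signal
        # power to obtain the interference power (claim 21).
        interference_power = float(np.mean(np.abs(folded) ** 2) - signal_power)
        return signal_power, interference_power

    # For noise-free QPSK symbols the interference power comes out as zero;
    # added noise or multipath raises it and so lowers the rate control value.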
22. A CDMA receiver comprising the circuit according to claim 1.
23. A CDMA receiver comprising the circuit according to claim 9.
24. A CDMA receiver comprising the circuit according to claim 18.
25. A CDMA receiver comprising the circuit according to claim 19.
US08746131 1995-11-07 1996-11-06 CDMA Receiver Expired - Fee Related US5844935A (en)
Priority Applications (2)
Application Number Priority Date Filing Date Title
JP28827395A JP2723094B2 (en) 1995-11-07 1995-11-07 Cdma receiving device
JP7-_PHONE_-07
Publications (1)
Publication Number Publication Date
US5844935A true US5844935A (en) 1998-12-01
Family
ID=_PHONE_
Family Applications (1)
Application Number Title Priority Date Filing Date
US08746131 Expired - Fee Related US5844935A (en) 1995-11-07 1996-11-06 CDMA Receiver
Country Status (2)
Country Link
US (1) US5844935A (en)
JP (1) JP2723094B2 (en)
Cited By (26)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0924875A2 (en) * 1997-12-17 1999-06-23 Nec Corporation Diversity reception method and apparatus in a CDMA system
US5940435A (en) * 1996-11-21 1999-08-17 Dsp Group, Inc. Method for compensating filtering delays in a spread-spectrum receiver
US5966416A (en) * 1996-11-21 1999-10-12 Dsp Group, Inc. Verification of PN synchronization in a spread-spectrum communications receiver
US6233454B1 (en) * 1996-10-18 2001-05-15 Matsushita Electric Industrial Co., Ltd. Mobile station
US6236672B1 (en) 1996-05-24 2001-05-22 Matsushita Electric Industrial Co., Ltd. Transmitting/receiving apparatus using a plurality
downregulation of the human EGF receptor [19]. The presence of the Clk2 kinase domain in the fused transcript might imply auto-regulation of the Scamp3 fragment of the chimeric protein by self-phosphorylation, a regulatory context distinct from conventional post-translational processing of Scamp3.
Isoform B had a longer ORF of 517 aa (nt 107 to nt 1660). However, this 517-aa longest ORF of the Isoform B mRNA only contained the STKc domain and was solely homologous to the Clk2 protein, because the Scamp3 protein sequence in this isoform was encoded in a different reading frame. Therefore, despite intergenic splicing, isoform B does not encode a chimeric protein.
TU54 (PET 48740): 1110034A24Rik-Rpl36al intergenic splicing
Intergenically spliced TU54 [GenBank: EU599049] connected the 1110034A24Rik and Rpl36al genes. The TU, 1147 bp long, has four exons with canonical splice junctions (Figure 1E). There were numerous cDNAs and ESTs supporting each gene separately, but none bridged the two. This TU overlapped in the reverse (cis-antisense) orientation the Mgat2 gene between 1110034A24Rik and Rpl36al.
The longest ORF of this TU was 166 aa (nt 278 to 778). It contained no conserved domains and was identical to the standalone 1110034A24Rik ORF. After the stop codon at nt 778 there was another 100-aa ORF identical to Rpl36al. Therefore, this intergenically spliced TU was potentially bicistronic, containing the complete ORFs of both component genes, but did not encode a fusion protein.
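The ORF analysis described in this and the preceding paragraphs boils down to scanning each reading frame for the longest start-to-stop stretch. A simplified forward-strand sketch in Python is shown below; the real analysis would also consider the reverse strand and annotation context, so this is only illustrative.

    def longest_orf(seq):
        # Return (start, end, length_aa) of the longest ORF across the three
        # forward frames of a nucleotide sequence (0-based, end-exclusive).
        stops = {"TAA", "TAG", "TGA"}
        best = (0, 0, 0)
        seq = seq.upper()
        for frame in range(3):
            start = None
            for i in range(frame, len(seq) - 2, 3):
                codon = seq[i:i + 3]
                if start is None and codon == "ATG":
                    start = i
                elif start is not None and codon in stops:
                    length_aa = (i - start) // 3   # amino acids, excluding the stop
                    if length_aa > best[2]:
                        best = (start, i + 3, length_aa)
                    start = None
        return best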
The cis-antisense overlap with Mgat2 was 58 nt, comprising the entire third exon of this TU which overlaps part of the coding sequence of Mgat2. Expression of the 1110034A24Rik-Rpl36al bicistronic TU may exert regulatory effect on Mgat through the cis-antisense pairing, as genes in other cis-antisense gene pairs can affect each other's expression [20].
We then investigated whether these new members of the ES cell transcriptome are regulated by known ES transcription factors. To accomplish this, we searched two datasets of experimentally supported transcription factor binding events for Oct4, Nanog, and Sox2 binding sites in the vicinity of the TUs: chromatin immunoprecipitation paired-end diTags (ChIP-PET) [3] and ChIP-sequencing by Solexa technology [H. H. Ng et al., unpublished]. In brief, all TUs had evidence of Oct4, Nanog, and/or Sox2 binding events within 100 kb of the TU in one or both types of ChIP datasets (See Additional file 3). Most notably, TU7 had Oct4 and Nanog binding sites 27.5 kb downstream of its 3' end, while TU52 had a Sox2 binding site 42.5 kb downstream of its 3' end supported by both Sox2 ChIP datasets. Only one binding event – a ChIP-seq Nanog binding site – was localized directly at the transcription start site, for TU4.
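The binding-site search described here reduces to an interval test: for each TU, check whether any Oct4, Nanog or Sox2 ChIP peak on the same chromosome lies within 100 kb of the TU's span. A small illustrative sketch follows; the coordinate conventions and data structures are assumptions, not taken from the paper.

    def sites_near_tu(tu, sites, window=100_000):
        # tu: (chrom, start, end); sites: iterable of (chrom, position, factor).
        chrom, start, end = tu
        nearby = []
        for site_chrom, position, factor in sites:
            if site_chrom != chrom:
                continue
            # Distance is zero if the site falls inside the TU itself.
            distance = max(start - position, position - end, 0)
            if distance <= window:
                nearby.append((factor, distance))
        return nearby

    # Example: a Nanog site 27,500 bp downstream of a TU end would be reported
    # with distance 27500, well inside the 100 kb window.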
Additional file 3. Oct4, Nanog, and Sox2 binding sites in the 100 kb vicinity of the TUs identified from chromatin immunoprecipitation paired-end diTags (ChIP-PET) [3] and ChIP-seq [H. H. Ng et al., unpublished].
Qualitative and quantitative expression profiling of novel TUs
Since the TUs were newly recognized components of the mouse ES cell transcriptome, the fact that they had never been detected before in any other cell type suggested that they might be preferentially expressed in ES cells. We thus hypothesized that they may be involved in the maintenance of pluripotency in these cells and accordingly, checked their expression in stem cells and a panel of embryonic and adult tissues, because higher expression in undifferentiated cells would be consistent with a function specific to such cells. An example of this pattern of gene expression is LIN28 which was found by SAGE to be expressed only in human ES cells [5] and showed downregulation upon ES cell differentiation. In fact, LIN28 has only been recently implicated in human somatic cell reprogramming back to the pluripotent state [21], indicating its importance in ES cells.
Conventional RTPCR and quantitative real-time PCR (QRTPCR) were used to profile expression of the novel TUs. The results, discussed further, showed that all five TUs were either ES-specific or were expressed most highly in ES cells despite being also transcribed elsewhere.
To further confirm the pluripotency association of the TUs, we also measured the changes in expression levels of the TUs in ES cells upon chemical induction of differentiation. ES cells were differentiated using retinoic acid (RA), dimethyl sulfoxide (DMSO), and hexamethylene bisacetamide (HMBA) treatments, in separate experiments. The three treatments served also to eliminate the possibility of chemical-specific expression responses.
To validate that the three chemical treatments reduced pluripotency marker levels and led the cells to differentiate, we tested expression levels of four principal pluripotency markers (Oct4, Sox2, Nanog and Klf4) by TaqMan QRTPCR. The results clearly indicated that all pluripotency marker mRNA levels decreased over time in all three treatment timecourses, proving that the treatments led to the expected differentiation outcomes (See Additional file 4).
Additional file 4. Expression levels of the pluripotency markers Oct4, Sox2, Nanog, and Klf4 upon RA-, DMSO-, and HMBA-induced differentiation.
QRTPCR was performed to determine the effect of differentiation on the RNA level of the novel TUs. We demonstrated that most of the TUs were consistently downregulated upon differentiation, regardless of the differentiation agents used.
TU4 (PET 1339): a novel stem cell gene with PRY and SPRY domains
Qualitative RTPCR on the multitissue panel suggested that TU4 had an ES-cell-specific expression pattern (Figure 2A and 2B). We did not detect expression of this TU outside of ES cells by RTPCR (data not shown). QRTPCR demonstrated that expression of this TU was low though not absent in non-ES samples. The second-highest expression level was in adult skeletal muscle, at 30% of the ES-cell level; in the few other tissues where the TU was expressed, its RNA level was less than 20% of the ES-cell level (Figure 2B). Consistent downregulation of this TU upon ES cell differentiation was evident in all three differentiation timecourses, although it was most pronounced with DMSO treatment (Figure 2C).
Figure 2. Expression profile of TU4. A. Qualitative RTPCR (samples testing negative are not shown). B. Quantitative RTPCR: whole-embryo and tissue panel. C. Quantitative RTPCR: retinoic acid (RA), dimethyl sulfoxide (DO), and hexamethylene bisacetamide (HM)-induced differentiation time courses of E14 ES (left to right: untreated; days 2, 4, 6).
This novel TU, TU7, showed a similar expression pattern to TU4. Qualitative RTPCR indicated that besides expression in ES cells this TU had barely detectable expression in the embryo at days seven and eleven (Figure 3A). QRTPCR confirmed the ES specificity of TU7 (Figure 3B).
The expression level of TU7, similarly to TU4, showed a decreasing trend upon ES cell differentiation (Figure 3C). This gradual effect was observed in all three treatments but the decrease was not as rapid as that of TU4. A significant reduction in TU7 expression was observed after four days of RA treatment whereas for TU4 the effect was observed as early as day two. In the other differentiation treatments, the effect was similar between the two genes, with DMSO treatment yielding the strongest response. This expression profile argues in favor of potential relevance of TU7 to ES cell pluripotency.
TU11 (PET 31704): Slc10a3-Ubl4 intergenic splicing
Qualitative RTPCR suggested that the intergenically spliced Slc10a3-Ubl4 TU had a close-to-ubiquitous expression profile (Figure 4A), although it nevertheless was preferentially expressed in embryonic rather than adult tissues. One adult sample with higher expression of Slc10a3-Ubl4 than ES cells was testes (Figure 4B). This is intriguing because adult testes reportedly contain multipotent cells capable of forming derivatives of the three embryonic layers [22].
Figure 4. Expression profiles of TU11, an intergenically spliced product of the Slc10a3-Ubl4 locus, and its component genes. A. Qualitative RTPCR. B. Quantitative RTPCR: whole-embryo and tissue panel. C. Quantitative RTPCR: retinoic acid (RA), dimethyl sulfoxide (DO), and hexamethylene bisacetamide (HM)-induced differentiation time courses.
A few days after the long flight from Bangkok to Birmingham, two pairs of feet had sufficiently recovered to venture out again — my own, and those of my Thai daughter. Thailand is fortunate to have beautiful islands, beaches, and delicious food, which combine well with the hundreds of beautiful golden temples and other exotic architecture and lifestyle to attract between 30 and 40 million tourists a year, from all parts of the world. But the reverse is also true … while the West does not have the wonderful weather of the tropics, what it does have is history, artefacts, and ancient buildings in abundance – something that travellers from Thailand and the Far East find fascinating.
On the flight over, my daughter, Anchalee Sara, told me that one of her friends had visited Royal Bath on a trip to England, and Sara begged me to take her there on our trip. Bath is a place I had missed out on whilst I was living in the UK, and I was more than happy to agree.
A town grew up on the site of Bath during Roman times, in about 60 to 70 AD, and was then known as Aquae Sulis (the waters of Sul, a Celtic deity). In the late 2nd Century a ditch was dug around the Roman Baths, which had been built a century before to take advantage of the natural hot springs, and an earth rampart was erected. It probably had a wooden palisade on top. In the 3rd Century, this was replaced by a stone wall and other ornate constructions, during the height of the Roman Civilisation. This, then, was the ancient origin, but let's jump forward in time to the 18th Century.
In the 18th Century Bath became a much more genteel and fashionable place, and it boomed in size. This was largely due to the efforts of Richard 'Beau' Nash (1674–1762), who was granted the title Royal Master of Ceremonies. Many of the finest buildings in Britain were erected in Bath during the 18th Century. A Pump Room was built in 1706 (although the present one was built in 1795).
Architect John Wood the Elder (1704–1754) built Queen Square in 1728–1739 and The Circus in 1754–60. His son John Wood the Younger, born in 1727, built Royal Crescent in 1767–1774 and the Assembly Rooms in 1769–71. The Octagon was built in 1767 and Margaret Chapel in 1773. Pulteney Bridge was built in 1774; it was named after William Pulteney, the first Earl of Bath, and designed by Robert Adam. The legacy of the Wood family is what now draws tourists to Bath from around the world.
In the 21st Century, wellness has become a fashionable buzzword and preoccupation, but ancient Bath has been doing it for nearly two millennia. Constructed in around 70 AD as a grand bathing and socialising complex, the Roman Baths is one of the best-preserved Roman remains in the world, where 1,170,000 litres of steaming spring water, reaching 46°C, still fill the bathing site every single day.
The Roman Baths is the site of extensive ruins and an interactive museum, that is filled with many treasures and visual snippets that transport you back to Roman times and the lives of the Aquae Sulis people. You can walk on the same ancient pavements as the Romans did 2,000 years ago, and explore ancient chambers, changing rooms, and warm plunge pools. While my daughter and I were visiting the Roman Baths (on a somewhat cooler day than we are used to in Bangkok) we were both awed by the beauty of the ancient building, and fascinated by the lucid green waters, with steam gently rising from the surface. Sara also felt quite at home, as she had soon found a bunch of people visiting from Thailand … in fact, I was quite surprised to find I was one of the few western tourists … well outnumbered by the Chinese, Thai, Korean and Japanese visitors!
Head down to the water to see Pulteney Bridge. This covered bridge was built in the 18th Century for William Pulteney to connect Bath to land on the other side of the River Avon and help drive up land prices. It's one of the few bridges in the world which have shops built into the sides, and the facade is still really well preserved. Though if you take a look around the back it's a bit more DIY, as extra windows and extensions have been added over the years.
Bath definitely has a deserved reputation for history, architecture and literature – but there's more to it than that. There are gorgeous views everywhere for your Instagram selfies, plenty of great foodie spots, cool boutiques and plenty more, as you walk through the historic streets and 18th Century shopping arcades.
A good way to end this wee introduction to Bath is with the way my daughter and myself also ended our trip there – with a traditional English afternoon cream tea. There are many places in Bath to have such … the Regal Pump Room of the Roman Baths, for example, but we chose to go more intimate, and found a little place about 10 minutes walk from the train station, Sweet Little Things. This place was chosen by my daughter, as she loved the look of the place, and its pink flowers everywhere you looked. I am more of a black or brown guy myself, but as it turned out her choice was perfect. The delicious tea came in an ornate teapot, with china teacups, but the accompanying freshly made, still warm crumbly scones with sweet strawberry jam, topped with clotted cream were to die for. It was the perfect end to a day we will long remember. You could do a lot worse than to follow in our footsteps.
Royal Bath — upcoming in 2020
Great Western Railway super-fast London to Bath train route will be added to off-peak hours from 15th December 2019
The new route will make rail travel for visitors to Bath quicker than ever in the new year. On certain peak-hour trains it will take 1 hour 11 minutes to travel from London Paddington to Bath Spa, calling at Chippenham only.
New Attractions, Projects and Restaurants
The Roman Baths' Archway Project 2020
The Archway Project will convert former Victorian spa buildings close to the Roman Baths into a new learning and visitor centre with cutting-edge facilities. Due to open in 2020, the project will transform the learning programme, offering more activities and events for visitors, and create stimulating spaces for schools and groups, providing a hands-on and accessible learning experience set among Roman remains.
Airing in autumn 2019 / early 2020
The TV adaptation of Jane Austen's unfinished final novel will air on the BBC in autumn 2019 in the UK and early 2020 on PBS in the US.
20th anniversary of Bath Christmas Market
In addition to the usual festivities, there will be lots of celebratory activities taking place to mark Bath Christmas Market's 20th year. November – December 2020.
Bath Bach Festival (March and May 2020)
Bath is joining in with the global celebrations in 2020 for the 250th anniversary of Beethoven's birth.
Bath Festivals will present the entire cycle of all 16 of Beethoven's string quartets.
There will be 3 concerts in March (27 – 29th March) and 3 in May (22nd – 24th May) put on by two quartets.
Bath Comedy Festival (31st March–19th April 2020)
Bath Comedy Festival presents household names in the world of comedy and promotes the big names of tomorrow while encouraging young hopefuls in the form of the New Act Competition, plus events involving children, women's comedy and disabled performers.
The Bath Festival (15th–24th May 2020)
Multi-arts festival bringing together some of the world's leading writers, musicians and cultural figures.
An evening with Michael Bublé at The Royal Crescent – 24th and 25th July 2020
Jane Austen Festival, Bath (11th–20th September 2020)
Celebrating all things Jane Austen including a Regency costumed promenade.
Bathscape Walking Festival (September 2020)
It offers 50 free walks covering a variety of themes and distances, led by knowledgeable leaders and supported by volunteers. Walks start at various locations around the city and explore Bath and its surrounding countryside.
Bath Children's Literature Festival (September 2020)
The largest dedicated children's literature festival in Europe, always offers a vibrant array of talks and activities for children.
Bath Digital Festival (October 2020)
The Bath Digital Festival brings people and technology together, allowing everyone to explore the thriving digital scene in Bath and get hands on experience with the latest projects.
FilmBath Festival (November 2020)
Previews, special events and question and answer sessions with filmmakers, plus screenings of world cinema, LGBT films and documentaries, make up the diverse and vibrant programme.
Bath Mozart Festival (November 2020)
Bath 'Bachfest's' counterpart, 'Mozartfest' showcases the eminent Austrian composer through nine days of classical concerts, often recognised as one of the country's best celebrations of classical music.
Bath Christmas Market (beginning of Dec 2020) 20th Year anniversary!
The award-winning Bath Christmas Market will be back for 18 days of magical merriment! With over 180 twinkling chalets spread out across Bath's picturesque Georgian streets, it's the perfect place to do a spot of Christmas shopping.
Peter Brown: Bath Is It
30th Nov 2019–9th Feb 2020
The Victoria Art Gallery
Sally Muir: The Dog Show
30th Nov 2019–9th Feb 2020
The Victoria Art Gallery
Glove Stories
2nd Mar 2019–1st Mar 2020
The Fashion Museum Bath
Browse one of the best collections of gloves in the world. Woven throughout A History of Fashion in 100 Objects, Glove Stories will showcase exquisite examples of historical gloves from the past 400 years, many of which have never been displayed to the public before.
Grayson Perry: The Pre-Therapy Years
24th Jan–25th May 2020
The Holburne Museum
The first exhibition to
course... --John Hedley Brooke, 1991. In his Science and Religion; Some Historical Perspectives (Cambridge University Press): 315.
Wallace's scientific case rested on his conclusion that the human brain, including that of the most primitive peoples, was more powerful than was necessary for survival. For a large part of his early life Wallace had lived among primitive peoples in South America and Southeast Asia, an experience that convinced him that these people, simple as they have appeared in mind and action, were equal in intelligence to Europeans. As the modern anthropologist Loren Eiseley remarked, Wallace displayed "scarcely a trace of the racial superiority so frequently manifested in nineteenth-century scientific circles," in which were included Darwin and Thomas Huxley. If human beings possessed brain capacities beyond what was needed for survival, Wallace reasoned, then how could natural selection bring about its evolution? Where was the "survival value" of that capacity if that capacity was not fully used? After all, natural selection improved an organ only through its adaptation to the pressure of environment. In the case of the human brain, however, the capacity was greater than human beings really required or that the pressure of environment could account for. Wallace logically concluded on those grounds that "some higher intelligence directed the process by which the human race was developed."... --Carl N. Degler, 1991. In his In Search of Human Nature; The Decline and Revival of Darwinism in American Social Thought (Oxford University Press): 60.
... It will be recalled that Darwin could find no useful value in the physical (racial) differences among human groups. Thus he could not account for those differences through the operation of natural selection. He did, however, accept the common anthropological view of the time that the differences in levels of culture or civilization which occurred among the diverse peoples of the world derived from differences in their biological capacities. Some cultures were higher than others because the people in those societies were biologically superior. That was the opening in his theory of human evolution through which racism entered. It was that opening which Wallace closed with his conception of the intellectual equality and therefore the equal cultural capacity of all peoples. As things turned out, Wallace looked to other ways and matters in his effort to make evolution less competitive and threatening. He did not develop any further his assertion of the mental equality of all peoples, or at least few took notice of its relevance. Yet that was the precise argument, elaborated and tirelessly defended, that undermined in time the concept of racism in America. Its elaboration and defense underpinned the concept of culture, an idea that in the twentieth century became not only an alternative to a racial explanation for human behavioral differences but also a central concept in social science... --Carl N. Degler, 1991. In his In Search of Human Nature; The Decline and Revival of Darwinism in American Social Thought (Oxford University Press): 61.
... Wallace's supernatural explanation gained few followers among social scientists in the second half of the twentieth century, but his assertion of the special, indeed unique, nature of man, because of his brain, continued to influence many, directly or indirectly. The eminent modern American anthropologist Loren Eiseley, for example was among them. His sympathetic response to Wallace reflects the views of many other American social scientists today. Eiseley did not doubt that Wallace has a better understanding of the roots of human nature than Darwin. In his book Darwin's Century, Eiseley contrasted Darwin's conception with that of Wallace. "The mind of man, by indetermination, by the power of choice and cultural communication," he wrote, "is on the verge of escape from the blind control of that deterministic world with which the Darwinists had unconsciously shackled man. The inborn characteristics laid upon him by the biological extremists have crumbled away," he was relieved to report. In Eiseley's judgement, Wallace stood out among evolutionists of his own time because he recognized even then that human beings had escaped from biological evolution. "Wallace saw and saw correctly, that with the rise of man the evolution of parts was to a marked degree outmoded, that mind was now the arbiter of human destiny."... --Carl N. Degler, 1991. In his In Search of Human Nature; The Decline and Revival of Darwinism in American Social Thought (Oxford University Press): 330.
Wallace (1865) hypothesized that sex-limited mimicry, in which palatable females are the only sex to mimic unpalatable butterflies, arises because females fly more slowly than males and hence are more vulnerable to predation. Our results from the within-lineage analyses are in agreement with Wallace's hypothesis. Evolutionarily, palatable males have larger thoraces, maximizing flight muscle, and smaller abdomens, minimizing load on the wings, probably to maximize flight speed; whereas females have retained large abdomens, probably to maximize egg load. Counter-selection for fecundity may operate against faster flight speeds, and females may be reproductively constrained to evolve alternative means of avoiding predation, such as mimicry. If females fly more slowly, they may be predisposed to fly like an unpalatable model... --Robert Srygley & Peng Chai, 11 October 1990. Oecologia 84(4): 498.
... there simply weren't any lists of Darwinian tenets that would have been accepted by all the leading Darwinians and rejected by all the main non-Darwinians in the first decade or so following the publication of the Origin. Both the Darwinians Charles Lyell and Asa Gray and the non-Darwinians the Duke of Argyll and St. George Mivart, for example, thought that natural selection must be supplemented by some sort of "directing force" in order to account for the relevant phenomena, while Darwin consistently denied the need for such an additional mechanism (Argyll 1877; Gray 1884; Mivart 1871). Conversely, neither the Darwinians Alfred Russel Wallace and Charles Lyell nor the non-Darwinians St. George Mivart and William Whewell thought that human beings could be included under the same explanatory scheme (whatever this might be) that was used to account for the history and behavior of "lower" animals, while Darwin maintained that they could... --Doren Recker, September 1990. Philosophy of Science 57(3): 463.
The efforts to denigrate Darwin serve only to conceal the real differences between the two naturalists' approach to transmutation. Careful reading of Wallace's paper reveals that in several important respects his theory failed to duplicate the essence of Darwin's thinking. Wallace had no interest in artificial selection and refused to treat it as analogous to the natural process even in later years. His mechanism did not even address the basic question of how selection acts on individual differences to change a population, because he was interested in how one well-marked variety (what we now call a subspecies) could replace others. Once it is recognized that in writing of natural selection acting on varieties Wallace was thinking of subspecies rather than individual variations, it can be seen that his paper does not contain a description of what Darwin saw as the basic mechanism of change. Wallace simply assumed that species split into varieties--he did not seek to explain how this all-important first step occurs. It has also been suggested that Wallace failed to appreciate the full power of selection because he treated the varieties as struggling against nature, not struggling against each other... --Peter J. Bowler, 1990. In his Charles Darwin; The Man and His Influence (Basil Blackwell): 113.
... Wallace's Darwinism of 1889 provided a clear and comprehensive survey of the theory and of the relevant areas of biology. Except in the case of the origin of the human mind, Wallace was an extreme selectionist; unlike Darwin, he would have nothing to do with any other mechanism of evolution. This position soon became known as 'neo-Darwinism' to distinguish it from the more flexible form of the theory which Darwin himself had advocated and which had gained support precisely because it allowed selection to be relegated to the status of a secondary mechanism... --Peter J. Bowler, 1990. In his Charles Darwin; The Man and His Influence (Basil Blackwell): 210.
The good parent process is a mechanism for the evolution of epigamic traits that is distinct from the Fisherian process and the good genes process. In the good parent process, direct selection on females to discriminate among males on the basis of male parental quality leads to the evolution of a trait that provides female with honest (accurate and precise) information regarding the non-heritable component of parental quality in a potential mate. Wallace (1891, 1901) recognized the potential of such a mechanism, but he had no way to consider rigorously the effects of inheritance. The good parent process is also different from Darwinian sexual selection (Darwin 1871), because females are not necessarily attracted by a good parent trait. A trait that evolves via the good parent process only enhances the attractiveness of high-quality males... --Guy A. Hoelzer, December 1989. Animal Behaviour 38(6): 1075.
Since Wallace (1889), a number of authors have argued that isolating barriers could be positively selected for their isolating property to prevent the formation of hybrids and to actively promote divergence and speciation. However, being a second order effect, the selective forces are likely to be weak, and, as Levin points out, in practice it is going
Saturday, January 26, 2008
Is God the God of the gaps? A rebuttal to Samuel Skinner.
In the comments section of the previous post Sam wrote,
God contradicts logic, so I don't see how one would buttress the other. Additionally this is the classic how do you know it exists if you can't see it, aka if you doubt god, why do you believe in love? The basic flaw is saying that god is an answer, instead of admitting that it is simply a gap in knowledge. In short this is a god of gaps argument. (Atheists can't explain x, but theism can. Therefore atheism is false) It is the same as saying history doesn't teach you how to play the flute so history is false.
This deserves a reply.
The Christian position is not that we simply refer to God to explain gaps in knowledge. To say that this is the Christian position is to argue against a straw man. Rather, the Christian position is that all knowledge presupposes the existence of God. The Christian position is that nothing whatsoever can be explained unless God exists. God is the God of the gaps and the non-gaps because He is the God of everything. Both the explained and the unexplained cannot be accounted for unless God exists. That's the position the atheist will have to contend with if he wants to argue against Christianity (though we hope repentance and submission to Jesus Christ will be preferred -- and I don't say that lightly).
Let me add two clarifications. The Christian position is not that the atheist can't explain things or that he doesn't believe things which are true, rather, we say that the extent to which the atheist has any successes in his reasoning and explanation of things is only possible to the extent that he assumes the truth of the Christian worldview. The other clarification is this: there are no gaps in knowledge, not for God. There are gaps for human beings due to our finitude and the noetic effects of sin, but not for God.
Now that we've set forth the Christian position, let's turn our attention to Sam's position. In response to the previous post he admits that the atheist cannot arrive at a proper explanation of the laws of logic: "... it is simply a gap in knowledge." This is a devastating concession of cosmic proportions. One must eventually ask the atheist whether the laws of logic apply to that quote itself. How can laws of logic, which for the atheist are unexplainable, be applied to a gap in knowledge about the laws of logic which can't be explained by the atheist? Sam's position is self-defeating. In fact, Sam can't know whether anything he says whatsoever is true because it can't be known to him whether the laws of logic have been properly applied (since for him the laws of logic are a gap in knowledge). Thus, Sam has no basis for saying, "God contradicts logic...."
He also wants us to think that the following is faulty reasoning: "Atheists can't explain x, but theism can. Therefore atheism is false." Well, it depends what you mean by "explain." I've said that atheists can explain things but only to the extent that they assume the truth of the Christian explanation of things. If the atheist doesn't assume the existence of God, he can't explain anything whatsoever. Therefore, because of that, atheism is false. Therefore, because of that, Christianity is true.
One final point about gaps in knowledge. The atheist believes that there are things unknown. This poses a problem of cosmic proportions for the atheist. If that which is not known can be known though it isn't yet, we have to wonder whether that unknown knowledge will affect the very process of knowing itself, in which case the atheist can't be certain whether he rightly knows anything now. And if that unknown is to become known, can we know it from our current method of knowing? These aren't silly philosophical games here, rather, it highlights the absurdity of the atheist position. The atheist is left not being able to know anything for certain because he can't eliminate the possibility that the unknown will change the very foundation or basis of knowing. But the Christian worldview can make sense out of knowledge and gaps in knowledge because everything is known by God. Everything is interpreted. Everything is analyzed. Everything is explained. Everything is accounted for because God is omniscient. Human beings can't know everything but they can have knowledge since God reveals knowledge through general and special revelation. In the Christian worldview, the human unknowns can't change the preconditions of knowing because God always is who He is. But for the atheist to admit there are gaps in knowledge is to admit his own defeat.
Anonymous said...
Samuel Skinner
None of this proves God exists, much less your God. Do you want me to recount the errors?
First off, most of the post is nonsense. What is the Christian worldview? If the Christian worldview is necessary for knowledge, how have non-Christians achieved knowledge?
I didn't say atheists can't explain logic, I said I can't. I also can't explain computer technology, but I am using this computer pretty well.
I can actually see doublethink here. You say that science isn't reliable because it can be completely overturned, but Christianity is constant. Then you qualify: Christianity is constant, but there are gaps in human knowledge. Well, knowing about God is knowledge, so...
Finally, science doesn't constantly change. It constantly expands. Theories change. Data is constant. So feel free to doubt theories that don't have a century of backing, but don't use that as an excuse to doubt the scientific method or raw data.
If I can't explain logic, I can't argue it contradicts God? I don't have to, because "statements are valid or invalid independent of who the source is". The very principles of logic you claim require God contradict him and make him impossible.
Finally, your biggest flaw: you don't say how logic requires God. You say that you need God to provide a basis for everything. I have news for you: just because something is desirable doesn't mean it exists. Also, if God is needed to provide the grounding for evidence and logic, how do you know he exists in the first place? You can't use personal experience or the Bible, because under your definition they are also evidence. So you seem to be completely disconnected from reality. Bummer.
In the interest of fairness I am going to list all the propositions that are unanswerable:
1. Are other people conscious?
2. Does reality exist, or is it only in my mind?
3. Is reality random, and have we just gotten an impossibly lucky roll so far?
4. Are there things that exist but don't interact with our world?
Most people don't sacrifice their mind to have certainty about these questions. They don't care because, by definition, they don't affect the real world.
PS It is nice you took out an entire post to rebut me. Complete garbage, but hey, it is the thought that counts.
Peter said...
Hello Sam.
I hope you don't mind I used an entire post to reply to your comments. But I take challenges to Christianity very seriously. Christ demands that we give care and attention to people who want to know more about the Christian faith and to give an answer to people who call belief in God irrational. Maybe that sounds weird to you given your outlook on life, but in terms of the Christian worldview it makes sense. And I hope you don't take anything I said personally. If I sounded like I was being arrogant then I take responsibility for that.
You ask some valuable questions and raise some very important issues, so if it's ok with you I'd like to reply in a new post. I'll probably have time to do that tonight, I think. In the meantime, feel free to peruse the rest of this blog and maybe you'll find some answers.
Anonymous said...
Samuel Skinner
Nah, I'm good with your post. On the subject of arrogance, I think religion is inherently arrogant, but that doesn't bother me. Either you're (generic you in this case) arrogant and you're right (annoying) or you're arrogant and you're wrong (funny).
However, a response would be better. I will make people think even if I have to wreck my wrists in the attempt! (or throat, as the case may be)
ybtolerant said...
To be quite honest - I think what Samuel Skinner is doing is pathetic. He is going around online searching for blogs about God just to mock them and say "Atheism exists because God doesn't. It's simple really." He is so bored and empty with his own life, he spends his time on the computer just to say negative things about people who are actually happy unlike him.
Peter – I don't know you, and I don't know your entire view on life, but you are entitled. From the looks of you, you believe in God, therefore I presume you love Him. You may not be able to prove he exists but you still stand up for him. I admire you. I however think that by allowing Samuel's comments to post you are wasting your time by replying to it. Reading it is one thing to respect what he has to say, but since all he is doing is trying to provoke you – it won't do you any good. So far not only is he dissing you on your beliefs, he is also making a fool of you by answering him. He wants you to feel bad about your faith, and feel bad you can't prove it. I think you are doing enough by writing about it – and at least writing what you feel is real. Don't let him push you around
as an anode and a second, dissimilar reservoir serving as a cathode. Each galvanic cell may store energy in the form of chemical potential energy. When a conductive material is located proximate to a cell such that the material may provide electrical and/or ionic communication between the cell elements, the chemical potential energy may be released as electrical energy. Accordingly, each set of adjacent, dissimilar reservoirs may function as a single-cell battery, and the distribution of multiple sets of adjacent, dissimilar reservoirs within the apparatus may function as a field of single-cell batteries, which in the aggregate forms a multiple-cell battery distributed across a surface.
- When a first reservoir includes a reducing agent, and a second reservoir includes an oxidizing agent, or vice versa, a potential difference may exist between the first reservoir and the second reservoir. In a first state, an apparatus is electrically quiescent (e.g., current flow between reservoirs is substantially zero). In an embodiment, an apparatus may be "activated" when a conductive material is brought into proximity with the first reservoir and the second reservoir, enabling a current flow to occur between the reservoirs, via electrical communication and/or ionic communication. Such a conductive material may be referred to herein as an "activation material." A magnitude of the current, I, substantially is a function of the potential difference, V, between the reservoirs, and the conductance or resistance, R, of the conductive material. In other words, the current I between the reservoirs approximately equals the voltage potential, V, between reservoirs divided by the resistance, R, of the conductive material, or I=V/R.
- Said another way, the magnitudes of currents 210 producible between adjacent, dissimilar reservoirs may be affected by one or more factors, including but not limited to, the distance between adjacent dissimilar reservoirs, the potential difference between the reservoirs (e.g., the quantity of electrons that a reducing agent may have available to donate to an oxidizing agent, the quantity of electrons that an oxidizing agent may be able to accept), resistance of the conductive material, and other factors. In addition, a current between reservoirs may change as a function of time, as the above factors change. Voltage potential differences in a range from approximately 0.05 Volts (V) to approximately 5.0 V may be present between dissimilar reservoirs, in an embodiment. In other embodiments, higher and/or lower voltage differences between dissimilar reservoirs may be present. Further, currents in a range from approximately 1 microampere (µA) to approximately 100 µA may be producible between dissimilar reservoirs, in an embodiment. In other embodiments, higher and/or lower currents may be producible. Resistances of conductive materials may vary significantly from near zero resistance to near infinite resistance.
- When an apparatus is applied to an area of tissue and activated (or activated and then applied), a total current, I(TOTAL), between dissimilar reservoirs may be described as I(TOTAL) = I(TISSUE) + I(CONDUCTIVE MATERIAL). When the resistance of the tissue is greater than the resistance of the conductive material, then proportionally more current may flow through the conductive material than through the tissue. Accordingly, in various embodiments, a conductive material may be selected which has a resistance that may be greater or less than the anticipated resistance of a type of target tissue, depending on whether more or less current is desired to flow through the target tissue.
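Since each path obeys I = V/R, the split of the total current between tissue and activation material is ordinary current division between two parallel resistances. A small worked example follows; the voltage and resistance values are invented for illustration and are not taken from the specification.

    def current_split(voltage, r_tissue, r_material):
        # Tissue and conductive activation material treated as parallel paths
        # across the same galvanic cell voltage.
        i_tissue = voltage / r_tissue
        i_material = voltage / r_material
        return i_tissue, i_material, i_tissue + i_material

    # Example: a 0.9 V cell with a 50 kOhm tissue path and a 10 kOhm activation
    # material gives 18 uA through tissue, 90 uA through the material, and a
    # total of 108 uA; most of the current takes the lower-resistance path, as
    # the text above notes.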
- In various embodiments, an apparatus may be used to apply electricity to tissue (e.g., skin or other tissue) in need of treatment. The electricity may be generated by a first reservoir (e.g., a first conductive electrode) in electrical communication with a second, dissimilar reservoir (e.g., a second conductive electrode), and the first reservoir and the second reservoir may be in ionic communication with the tissue. The term "electrical communication" may be defined, in some embodiments, as passage of electrons between elements (e.g., first and second reservoirs) through direct contact and/or through a conductive material. The term "ionic communication" may be defined, in some embodiments, as passage of electrons between elements (e.g., first and second reservoirs, a conductive material, and/or tissue) through migration of ions as "electron movers" in contact with the elements (e.g., electrons may pass between a reservoir and tissue via ionic transport of electrolytes in contact with a reservoir and the tissue).
- In various embodiments, the difference of the standard potentials of the first and second reservoirs may be in a range from 0.05 V to approximately 5.0 V. In a particular embodiment, the difference of the standard potentials of the first and second reservoirs may be at least 0.2 V. In embodiments that include very small reservoirs (e.g., on the nanometer scale), the difference of the standard potentials may be substantially less. The electrons that pass between the first reservoir and the second reservoir may be generated as a result of the difference of the standard potentials.
- FIG. 3 is a cross-sectional, side view of a portion of the medical battery of FIG. 2 along lines 3-3, in accordance with an example embodiment. The illustrated portion includes a first reservoir 209 and a second reservoir 205 joined with a substrate 302. In the illustrated embodiment, a first reservoir surface 304 and a second reservoir surface 306 extend above a top surface 308 of substrate 302. In other embodiments, either or both reservoir surfaces 304, 306 may be substantially flush with top surface 308 and/or below top surface 308. Further, in the illustrated embodiment, first reservoir surface 304 and second reservoir surface 306 are shown to have a rounded or dome-like shape. In other embodiments, the shape of either or both reservoir surfaces 304, 306 may be substantially flat, disk-like, cylindrical, conical, concave, or otherwise shaped.
- In an embodiment, a reservoir may have a height or thickness (e.g., height 309) in a range from approximately 1000 Angstroms to approximately 5 millimeters. In other embodiments, a reservoir may have a height or thickness that is greater than or smaller than the above-given range.
- In an embodiment, a current 314 may be produced when a conductive material 316 (e.g., an activation material) is brought into proximity to all or portions of both the first reservoir surface 304 and the second reservoir surface 306, thus enabling electrical communication and/or ionic communication between the surfaces 304, 306. The conductive material 316 may include, but is not limited to, one or more liquid, solid, semi-solid, or gaseous materials, as will be described in more detail later.
- In an embodiment, a current 314 may penetrate into the conductive material 316 by a penetration height 318 above the top surface 308 of the substrate. Accordingly, in certain circumstances, current 314 may penetrate into an area of target tissue.
- The penetration height 318 of a current may be a function of one or more of various factors, including but not limited to the spacing 320 between reservoir surfaces 304, 306 and other factors. Penetration heights 318 may be substantially uniform across an active surface, or may vary. Currents having penetration heights in a range from approximately 0.05 mm to approximately 2.0 mm are producible, in an embodiment. In other embodiments, currents having higher and/or lower penetration heights may be producible.
- In various embodiments, a reservoir may be formed from a single material or a relatively homogenous combination of materials. In other embodiments, a reservoir may be formed from two or more material compositions. Such a reservoir may be referred to herein as a "composite" reservoir.
- FIG. 4 is a top view of a portion of a medical battery 400, which includes at least one "composite" reservoir, in accordance with an example embodiment. First reservoirs 401, 402, 403, 404, and 405 may function as first portions of galvanic cells, and second reservoirs 406, 407, 408, and 409 may function as second portions of galvanic cells. An electrically conductive material (e.g., an activation material, not illustrated) may be dispersed between some or all of the first reservoirs 401-405 and the second reservoirs 406-409, providing for the production of currents between the first and second reservoirs.
- In an embodiment, selected ones of first and/or second reservoirs may be formed from two or more material compositions. For purposes of example, a composite reservoir 409 is shown as being formed from a first composition 410 and a second composition 412. Although certain reservoirs in FIG. 4 are illustrated as being composite reservoirs, it is to be understood that other reservoirs also or alternatively could be composite reservoirs.
- First composition 410, in an embodiment, may form a peripheral or outer portion of reservoir 409, and second composition 412 may form an interior or central portion of reservoir 409. In alternative embodiments, a first composition and a second composition may be alternatively arranged, with respect to each other. For example, but not by way of limitation, a first and second composition may be adjacent to each other
illustrations in a specially commissioned edition of Shakespeare's plays. Boydell's impressive profits enabled him to pay hefty fees to the artists who produced the paintings. Even Sir Joshua Reynolds, who as President of the Royal Academy strongly disapproved of Boydell's gallery and who thought that prints based on paintings were vulgar, put aside his objections to the scheme when he was offered 1000 guineas for a single painting of the three witches from Macbeth. According to Thornbury, George Stevens, the editor of the special edition of Shakespeare, took it upon himself to persuade Sir Joshua to accept the commission; "taking a bank-bill of five hundred pounds in his hand, he had an interview with Sir Joshua, when, using all his eloquence in argument, he, in the meantime, slipped the bank-bill into his hand; he then soon found that his mode of reasoning was not to be resisted, and a picture was promised." The painter's scruples, in fact, collapsed completely, and he soon produced three paintings for Boydell's gallery. The most famous of these was a Puck or Robin Goodfellow painted in 1789. Walpole described it as depicting 'an ugly little imp (but with some character) sitting on a mushroom half as big as a mile-stone.' Thornbury, again, describes the birth of this picture: "Mr. Nicholls, of the British Institution, related to Mr. Cotton that the alderman and his grandfather were with Sir Joshua when painting the death of Cardinal Beaufort. Boydell was much taken with the portrait of a naked child, and wished it could be brought into the Shakespeare. Sir Joshua said it was painted from a little child he found sitting on his steps in Leicester Square. Nicholls' grandfather then said, 'Well, Mr. Alderman, it can very easily come into the Shakespeare if Sir Joshua will kindly place him upon a mushroom, give him fawn's ears, and make a Puck of him.' Sir Joshua liked the notion, and painted the picture accordingly…. The merry boy, whom Sir Joshua found upon his door-step, subsequently became a porter at Elliot's brewery, in Pimlico." Gillray satirised Boydell and the Shakespeare Gallery ironically, using the very medium Boydell had done so much to popularise, print-making. In 'Shakespeare sacrificed or the offering to avarice' Boydell is shown in his alderman's robes burning the bard's works, producing thick clouds of smoke which support travestied figures from the paintings commissioned for the gallery, watched by an ancient goblin-like figure clutching two large money bags.
Monday, 18 March 2019
Career Opportunities; the Attorney General & the Brazilian Slavers; Robert Porrett Collier, 1st Lord Monkswell (_PHONE_), Brompton Cemetery
In the summer of 1845 Robert Porrett Collier was a fiercely ambitious 28 year old junior barrister on the Western Circuit, desperately searching for a big case that would show the world what he could do and set him up in his career. His marriage the previous year to Isabella and the birth in the spring of their first child, a son named Robert after his father, only added to his determination to prove himself. His own father, who ran a flourishing business in Plymouth, had been generous in helping him set up home with his new bride and even more liberal on the birth of his first grandson. But Robert was determined to make his own way in the world and his father's open-handedness was yet another spur driving him on. The brief he was offered in early July didn't immediately look like it had the makings of a great case, not least because it seemed so open and shut; the trial of a dozen Brazilians who had been taken prisoner by the West Africa Squadron of the Royal Navy on one of their routine anti-slavery patrols off the coast of Nigeria. They were accused of piracy and murder; having taken control of the ship in which they were being detained they murdered the British crew before trying to make their escape back to Brazil. Senior colleagues had already taken on the cases of those accused where the evidence admitted to at least some doubt as to their culpability. Probably because no one else wanted them, Robert had been left with the four, Janus Majaval, Francisco Ferreira de Santo Serva, Manuel José Alves, and Sebastião de Santos, whose guilt seemed most well established and almost certain to lead them to the gallows. Not only did the case seem virtually unwinnable but it was also morally distasteful; Robert had been brought up a Quaker and an abolitionist and he could not quite suppress his repugnance for the cut throat, slave trading foreigners who were accused of killing 10 members of the Royal Navy.
Number 11 is Janus Majaval, the man who stabbed Midshipman Palmer; No 8 is Serva, captain of the Echo; No 6 is Alves, the drinker of spilt blood; and No 10 is Antonio Joaquim, who cut off Mullins' fingers. 1 is Lieutenant Stupart, 2 Lieutenant Wilson and 3 Captain Cerqueira of the Felicidade
The trial commenced on Thursday 24th July at Exeter Assizes. "At an early hour the Court was besieged with eager crowds," said the Exeter & Plymouth Gazette, "anxious to gain admittance to the interior, so as to witness the proceedings." At 9.00am prompt the judge, Mr Baron Platt, took his seat and ordered the prisoners to be placed in the dock. The indictments were then read out; 22 year old Janus Majaval was accused of the murder of Thomas Palmer "on the 2nd day of March on the high seas, on board a vessel called the Felicidade, by striking him with a knife upon the belly, giving him a mortal wound of which he died..." or "by throwing the said Thomas Palmer out of the vessel, and drowning him." The other prisoners were indicted for being feloniously present and aiding and abetting Majaval. The next thirty minutes were taken up by legal arguments by Serjeant Manning, one of Robert's senior colleagues, with interjections from Robert (who had drafted the argument in the first place). The eager crowd who had besieged the courtroom for a place early that morning must have grown restive as the two lawyers and the judge argued the form of the indictment and whether it should have been set out contra formam statuti. The Judge asked what statute makes murder on the high seas a felony. The 28th of Henry VIII, said Serjeant Manning. "But that only altered the mode of trial to that of the common law," the Judge observed. Serjeant Manning tried to explain, but Robert interrupted to argue that the common law jurisdiction arose out of an offence committed in some county of the realm, and this offence, not having been committed in any county but on the high seas, was not cognisable in a common law Court. The two lawyers wanted the trial transferred to the Court of Admiralty, but Judge Platt was having none of it and ordered the trial to start. The interpreter was sworn in and Robert interrupted again to object that the interpreter was speaking Spanish and his clients were Brazilian and spoke Portuguese. The interpreter told the judge that the accused understood everything he said, and ordered them all to stand up; when they did, this was considered sufficient evidence of the interpreter's fluency. The jury, unusually constituted de medietate linguae (half English and half foreigners), was sworn in, and to the relief of the public the trial proper started. Mr Godson, Queen's Counsel, began by addressing the jury and explaining the events that had led up to the murder of Midshipman Palmer and his shipmates and had brought the twelve accused to Exeter to stand trial for their lives.
The Portuguese slave ship Dignidade captured by the navy in 1834
In January that year (1845) a vessel called the Felicidade (Happiness!) was fitted out at Salvador de Bahia in Brazil for a slave trading mission to the west coast of Africa under the captaincy of Joaquim Antonio Cerqueira. It was not the first slaving mission for either the ship or the captain; Capitão Cerqueira was well aware that he had to avoid the coastal patrols of the British navy as he made his way through the lagoons and sandbanks of the treacherous West African coast en route to Angola but on this journey he ran out of luck and found himself taken by HMS Wasp not far from Lagos. He didn't have any slaves on board at the time but the manacles, fetters and chains that were everywhere you turned on the schooner made it pretty clear what his business was. The crew of the Felicidade were transferred as prisoners to the Wasp leaving just Cerqueira and his cook, 22 year old Janus Majaval, on board ship, their places taken by 16 ordinary British seamen, a midshipman called Thomas Palmer and Lieutenant Stupart. The Lieutenant's orders were to take the Felicidade to Freetown in Sierra Leone where the Prize commissioners would adjudicate on her (as a slaver she would be sold and the proceeds divided up amongst the navy personnel who had taken her). On the morning of 1st March another slaver was spotted by the men aboard the Felicidade. Lieutenant Stupart gave the command to pursue the other ship, a much bigger Brazilian slaver, a 70 ton brigantine called the Echo. It took a couple of days to catch up with her and discover that she had a human cargo of 430 sequestered Africans on board, destined for a
corona, which consists of the chromosphere and transition region. Plasma heating and mass supply from the photosphere to the corona take place in this interface region, which is still not well understood. We study this region with the help of data-driven modelling based on observations from the balloon-borne mission SUNRISE and the small explorer mission IRIS. Analyzing data from these two new missions (and additionally from SDO and Hinode), together with sophisticated magnetic field extrapolation and plasma modelling techniques developed in the SOCo3D-group at MPS, gives the opportunity to investigate the mass and energy supply of the solar atmosphere.
"The Solar Interface Region" project will reside in the solar department of the MPS, one of the largest groups in solar physics worldwide. The institute is located in Göttingen (Germany), a lively and scenic university town.
Applicants must hold a Ph.D. in physics, astrophysics or a closely related field. They should have an outstanding research record. Relevant experience in numerical computing and solar plasma physics is an advantage. Applications including a CV, a statement of research experience and a publication list should be sent to _EMAIL_. In addition, applicants should arrange to have three letters of reference sent separately to the same address.
The position is offered for a period of three years. The exact starting date is negotiable. Salary will be according to grade E13 of the TVöD scale of the German public services. Review of applications will start 15 April 2017 and continue until the position is filled.
The Max Planck Society is an equal opportunity employer and particularly encourages applications from women and persons with disabilities.
The Institute for Solar Physics (ISP) currently has 19 people employed, including five PhD students and four postdocs. It operates the Swedish 1-m Solar Telescope (SST) on La Palma, which is currently one of the leading solar telescopes in the world. The SST has advanced instrumentation consisting of two Fabry–Perot imaging-spectropolarimeters that allow observations of the solar photosphere and chromosphere at the diffraction limit of the telescope with excellent cadence and spectral resolution.
We seek candidates that are interested in working for the project "Fundamental magnetic processes in the solar chromosphere".
The objective of this project is to describe and explain the magnetic-field regulated processes at play in the solar chromosphere. The chromosphere is the interface between the interior of the Sun, where its magnetic field is generated, and the hot outer corona that drives the solar wind and causes space weather.
The main science questions that the project aims to answer are:
As a part of this project the Max Planck Institute for Solar System Research (MPS) in Germany will build a microlens-based imaging spectropolarimeter called HeSP, that will observe the He I 1083 nm line. This line is an excellent probe of the magnetic field high up in the chromosphere of the sun. Together with the existing two instruments, the unique HeSP instrument will allow a 3D tomographic view of the solar photosphere and chromosphere. We plan to install HeSP at the SST in 2019.
The PhD student is expected to work on taking data and analysing data taken with the SST, including HeSP, and/or perform theoretical work using radiation-MHD simulations of the solar atmosphere and radiative transfer calculations of the spectral lines observed with the SST. As the project is done in collaboration with MPS, we expect that the PhD student will have a number of shorter and/or longer exchange visits to MPS.
The application deadline is May 2, 2017. Please apply at _URL_
The International Study of Earth-Affecting Solar Transients (ISEST) Workshop is aimed at bringing together scientists from different countries to interact and establish collaborative links that can effectively address the physical mechanisms regarding the origin, propagation, and Earth impact of coronal mass ejections (CMEs) and other transient events. The ISEST-2017 workshop is the latest of a series of workshops organized by the ISEST project, which is one of the four projects of SCOSTEP's VarSITI program (2014-2018). The ultimate goal of the ISEST project is to develop the capability to predict the arrival and geoeffectiveness and other space-weather consequences of solar transients. The ISEST project, involving a truly global network of scientists, consists of seven active working groups: (1) data, (2) theory, (3) simulation, (4) event campaign, (5) Bs challenge, (6) Solar Energetic Particles, and (7) MiniMax campaign. The project provides a standing website for hosting events catalogs, data and presentations and offers a forum for discussion available at solar.gmu.edu/heliophysics/index.php/
If you are interested in attending the ISEST-2017 workshop, please register by June 31, 2017. Registration and other information about the workshop can be found at "kswrc.kasi.re.kr/Workshop/isest2017". A limited amount of funding is available for supporting young scholars. Please send an email along with your CV and tentative abstract to Kyungsuk Cho (_EMAIL_) before June 15, 2017 for financial support. Note that the ISEST-2017 workshop shares the same time and venue with the 3rd COSPAR Symposium on Small Satellites for Space Research, but runs as an independent program.
SOC: Jie Zhang (Co-Chair, USA), Kyungsuk Cho (Co-Chair, South Korea), Nat Gopalswamy (Co-Chair, USA), Manuela Temmer (Co-Chair, Austria), Ayumi Asai (Japan), Mario Bisi (UK), Peter Gallagher (Ireland), Manolis Georgoulis (Greece), Alejandro Lara (Mexico), Noé Lugaz (USA), Alexis Rouillard (France), Nandita Srivastava (India), Bojan Vršnak (Croatia), Yu-Ming Wang (China), David Webb (USA) and Yuri Yermolaev (Russia)
LOC: Kyungsuk Cho (Chair, KASI), Sujin Kim (KASI), Eunkyung Lim (KASI), Roksoon Kim (KASI), Ji-Hye Baek (KASI) and Young-Jae Moon (KHU).
We would like to invite you to submit an abstract for a contributed oral and/or poster presentation for Session 3, Ground-based Instruments for Advanced Space Weather Projects, to be held at the Fourteenth European Space Weather Week which will take place from November 27 – December 1 in Belgium.
The ESWW is the main annual event in the European Space Weather calendar. The agenda will be composed of plenary/parallel sessions, working meetings and dedicated events for service end-users. The ESWW has the central aim of bringing together the diverse groups in Europe working on different aspects of Space Weather.
SESSION 3 - GROUND-BASED INSTRUMENTS FOR ADVANCED SPACE WEATHER PROJECTS WILL BE HELD ON MONDAY 27 NOVEMBER, FROM 14:15 TO 17:15.
RATIONALE: Eruptive events on the Sun can affect the near Earth environment and ground-based critical infrastructures. As such, the ability to monitor and forecast these sources of space weather, is of paramount importance. This session provides a forum for the presentation of state-of-the-art and novel instruments for the observation and prediction of solar eruptive events. Authors are invited to submit abstracts dealing with topics applied to observations of the solar photosphere and chromosphere, as well as of the solar magnetic field, which can be acquired by the new generation of ground-based instruments.
The session will comprise invited and proposed talks and posters. For more details on aims, scope and registration of our session, please see _URL_
A session can be attended by all registered ESWW participants and has two parts: the oral part and a poster programme. The fixed program and abstracts of the oral and poster contributions will be made available online.
The oral part is a sequence of talks and takes place in a room or auditorium with a seated audience. Interaction between the audience and the presenter is possible through a short Q&A at the end of a presentation. The poster session takes place in the coffee area. All posters are put simultaneously on display. Discussion and interaction between authors and attendants is emphasised during the poster session.
DEADLINE TO SUBMIT AN ABSTRACT TO A SESSION
Early contributions: May 31, 2017 included,
Invited orals: September 30, 2017, included,
Late poster contributions: November 1, 2017, included.
The Convenors of ESWW 2017 Session 3: Francesca Zuccarello (University of Catania), Francesco Berrilli (University of Roma Tor Vergata), Paola De Michelis (Istituto Nazionale di Geofisica e Vulcanologia), Stuart Jefferies (Georgia State University)
Jean-Louis Steinberg has been one of the major pioneers of radioastronomy. Co-founder of the Nançay Observatory, he actively participated in, and inspired, a large number of radio instruments on many international space missions. Jean-Louis Steinberg founded the Space Radioastronomy laboratory of the Paris Observatory in 1963. Later on, this laboratory widened its science interests and became DESPA (1971) and then the current LESIA (2002), which is one of the major space sciences laboratories in France. The aim of this workshop is to cover the science topics which Jean-Louis Steinberg has promoted during his career, focusing on Solar, Heliospheric
Issued by the Scientific Leadership Council
This information is for general guidance only and may not apply to your individual situation. You should rely on the information and instructions given specifically to you by your PH specialist and/or the nurses at your PH Center. It is not intended as legal, medical or other professional advice, and should not be relied upon as a substitute for consultations with qualified professionals who are familiar with your individual needs.
A brief description of the disease and genetic testing is provided here, and sources for more extensive information are cited at the end.
What is familial PAH?
In idiopathic pulmonary arterial hypertension (IPAH), formerly called primary pulmonary hypertension (PPH), there is blockage to blood flow through the small arteries in the lungs. The disease occurs more often in women and may begin at any age. Most IPAH patients have no known affected relatives, and are said to have sporadic IPAH. IPAH patients who have one or more blood relatives with IPAH are said to have familial PAH (FPAH). It is estimated that a few hundred families in the US have FPAH. Sometimes it is difficult to recognize that PAH has a familial basis, because the disease can skip generations, which happens when the parents or grandparents of a patient do not have PAH.
What causes familial PAH?
In most families, FPAH is caused by an inherited change (mutation) in the genetic directions for making a protein called bone morphogenetic protein receptor 2 (BMPR2). The BMPR2 protein helps regulate the growth of cells in the walls of the small arteries of the lungs. Other factors, probably genetic or environmental, are also needed to produce disease because only about 20% of individuals with a BMPR2 mutation ever develop IPAH. FPAH can occur at any age and affects women almost 3 times more often than men. Some individuals in families with a different genetic condition called Hereditary Hemorrhagic Telangiectasia (HHT) may also develop IPAH, due to a mutation in a different gene, called ALK1. Knowledge about genes that cause IPAH is still growing, so it is possible that other genes may contribute and will be discovered in the future.
What is a gene?
Genes are units of genetic information that are passed from parents to children. Each gene contains the directions to make one or more proteins that the body needs. Genes control everything about us, including the way that our body grows and functions. All of us receive a full set of about 30,000 genes from each of our parents. Therefore, we have a pair of genes, one from each parent, to make each protein, including the BMPR2 protein.
How is familial PAH inherited?
Each person normally has a pair of BMPR2 genes in every cell of the body. One copy is inherited from the father and the other from the mother. Which of a parent's two copies is passed on to a child is determined by random chance, like flipping a coin. A mutation in only one copy of the pair of BMPR2 genes (whether from the mother or the father) is enough to cause FPAH in a child.
By simple observation it can be seen that any person in the bloodline of a family with FPAH has an overall risk of about 1 in 10, or 10%, of developing FPAH during their lifetime. When a parent has a BMPR2 gene mutation, each child has a 50% chance to inherit the abnormal gene, and a 50% chance to inherit the normal gene. If a child inherits the normal gene, then that child's risk is similar to that of the general population, which is about one in a million for developing PAH. If a child inherits the abnormal disease gene, that does not necessarily mean they will develop FPAH. The likelihood for a person with a BMPR2 mutation to develop FPAH is estimated to be about 20%, though the actual risk may be different in each family. In other words, 80 out of 100 people who inherit a BMPR2 mutation will never develop IPAH.
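A quick way to see where the "about 1 in 10" figure comes from is to multiply the chance of inheriting the mutation by the chance of developing disease once it is inherited. The sketch below uses the approximate figures quoted above (50% transmission, roughly 20% penetrance); actual penetrance varies between families.

```python
# Minimal sketch of the lifetime-risk arithmetic described above.
# Figures are the approximate ones quoted in the text, not family-specific values.
p_inherit_mutation = 0.5    # chance a child inherits a carrier parent's BMPR2 mutation
penetrance = 0.2            # ~20% of mutation carriers ever develop FPAH
background_risk = 1e-6      # roughly one in a million in the general population

risk_if_parent_carrier = p_inherit_mutation * penetrance
print(f"Risk for a child of a known carrier: {risk_if_parent_carrier:.0%}")   # 10%
print(f"Risk if the child did NOT inherit the mutation: ~{background_risk:.0e}")
```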
Identification of a genetic mutation in a patient who already has PAH does not affect their medical care, so this result has importance only to their family.
The gene for BMPR2 is very large, and many different mutations (>100) have been found in it. In each FPAH family, one specific mutation in BMPR2 is the cause of FPAH in every patient in that family, and every patient within that family has that same specific mutation. Different families have different BMPR2 mutations. Knowing which specific mutation is present in a family is important because it makes it much easier to perform genetic testing for any person in that family. Testing one part of the large BMPR2 gene for a known mutation is far easier than testing for changes in the entire gene. In other words, searching for a mistake in an entire phone book would take a very long time and the mistake could be missed, but looking up the spelling of a specific name (testing for a known mutation) is accurate and easy to do.
What is the cause of sporadic IPAH?
The cause of most sporadic IPAH is not known, but BMPR2 mutations have been found to cause sporadic IPAH in 10% to 40% of IPAH patients. Children of IPAH patients with BMPR2 mutations have the same risks as the children of individuals with familial PAH. So far, most people with sporadic IPAH do not have a detectable BMPR2 mutation.
What testing is available for people at risk for familial PAH?
Medical testing shows whether a person has signs of PAH at the time of testing. One simple test, an echocardiogram, is a noninvasive and painless sound wave test of the heart that is often used to screen for PAH. However, it may be expensive, is not always accurate, and does not predict whether a person will develop IPAH in the future.
Genetic testing is laboratory testing of DNA, usually from a blood specimen. It searches for a mutation in a gene. The results of genetic testing can better define the actual risk for another family member to develop FPAH, especially when a BMPR2 mutation has been identified in a PAH patient in the same family. Genetic testing does not tell whether a person has any signs of PAH.
At present, BMPR2 mutations have been identified in about 80% of the families with FPAH. Information about which specific mutation is present in each family may be available from the research team. If information is not available about which particular BMPR2 mutation causes disease in a specific family, then DNA from a patient with FPAH in that family is needed to try to identify a specific mutation for their family.
My family is involved in a research project on familial PAH. Can I get my genetic test result from the research laboratory?
By law, diagnostic testing for genetic mutations can be provided only by specially licensed clinical laboratories (CLIA approval). These regulations assure rigorous quality control at all stages of sample analysis and ensure that the test is performed by fully trained personnel. At present, most university institutional review boards (IRB) prohibit disclosure of results obtained in a research lab to unaffected family members. Thus, most research laboratories cannot reveal genetic test results for specific individuals, but the labs may provide information which they discovered about the location and type of BMPR2 mutation in a specific family.
How can I get genetic testing for familial PAH?
Because there can be unexpected risks, counseling by experts (genetics counselors) is necessary to be fully informed. Counselors will discuss all of the benefits, drawbacks, and limitations before a person makes a decision about genetic testing.
If a family has participated in a research study, they may want to contact the coordinator or the director of the research study to determine whether a BMPR2 mutation has been identified in their family.
If a BMPR2 gene mutation has been identified, the research study coordinator can help to arrange genetic counseling. The genetics counselor can help contact a clinical laboratory that provides genetic testing for BMPR2 mutations. The cost of testing will vary. A blood sample from a relative with IPAH or FPAH may be needed. The accuracy of testing will usually be greater than 99%.
If a BMPR2 gene mutation has not been identified, the laboratory can examine the entire gene and try to find a mutation. If a mutation is found, then this information can be used to test any family member. If a BMPR2 gene mutation cannot be found in a specific family, then genetic testing will not provide any information for unaffected family members. Another gene that has not yet been found could be responsible for PAH in that family.
If a family is not involved with a research group, they may wish to contact their primary care provider or a genetics counselor (see list below or the National Society of Genetic Counselors website). Information about IPAH and laboratories which provide testing can be found online.
Who should have genetic testing?
This decision is very personal. After counseling, each person should decide what is in their own best interest. Some people may find it helpful to read over the "pros and cons" of testing that are listed below. These will be explained further and discussed in detail during genetic counseling.
Some possible benefits of genetic testing for familial PAH
- The risk for a person to develop FPAH is more accurate, which may decrease uncertainty about their health. Their children's risk estimates are also more accurate.
- If a person is found to not have the BMPR2 mutation which is known to cause FPAH in their family, they may
Candidate Fluorophores     Absorption Maxima
----------------------     -----------------
Fluorescein                488 nm
Dichloro-fluorescein       525 nm
Hexachloro-fluorescein     529 nm
Tetramethylrhodamine       550 nm
Rhodamine X                575 nm
Cy3™                       550 nm
Cy5™                       650 nm
Cy7™                       750 nm
IRD40                      785 nm
The excitation source directs the light through excitation optics 1200, which focus the light at the sample. The excitation optics transform the light into a "line" sufficient to illuminate a row of the sample. Although the Figure illustrates a system that images one vertical row of the sample at a time, it can easily be configured to image the sample horizontally or to employ other detection schemes. In this manner, a row of the sample (i.e., multiple pixels) may be imaged simultaneously, increasing the throughput of the imaging system dramatically.
Generally, the excitation source generates a beam with a Gaussian profile. In other words, the excitation energy of the line peaks flatly near the center and diminishes therefrom (i.e., non-uniform energy profile). Illuminating the sample with a non-uniform energy profile will produce undesirable results. For example, the edge of the sample that is illuminated by less energetic radiation would appear more dim relative to the center. This problem is resolved by expanding the line to permit the central portion of the Gaussian profile to illuminate the sample.
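As a rough illustration of why the line is expanded, the sketch below evaluates a Gaussian profile, I(x) = exp(-2x²/w²), and shows how the intensity at the edge of the sample approaches the centre value as the beam waist grows. The waist and sample width used here are illustrative assumptions, not values from this description.

```python
import numpy as np

def gaussian_intensity(x_mm: np.ndarray, waist_mm: float) -> np.ndarray:
    """Relative intensity of a Gaussian line profile: I(x) = exp(-2 x^2 / w^2)."""
    return np.exp(-2.0 * (x_mm / waist_mm) ** 2)

sample_half_width = 6.4          # mm, illustrative half-width of the illuminated row
for waist in (8.0, 16.0, 32.0):  # mm, progressively stronger beam expansion
    edge = gaussian_intensity(np.array([sample_half_width]), waist)[0]
    print(f"waist {waist:5.1f} mm -> edge intensity {edge:.2f} of centre")
# Expanding the line (larger waist) flattens the illumination across the sample,
# at the cost of discarding the energy in the unused wings of the beam.
```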
The width of the line (or the slit aperture) determines the spatial resolution of the image. The narrower the line, the more resolved the image. Typically, the line width is dictated by the feature size of the sample. For example, if each probe sequence occupies a region of about 50 µm, then the minimum width is about 50 µm. Preferably, the width should be several times less than the feature size to allow for oversampling.
Excitation optics may comprise various optical elements to achieve the desired excitation geometry, including but not limited to microscope objectives, optical telescopes, cylindrical lenses, cylindrical telescopes, line generator lenses, anamorphic prisms, combinations of lenses, and/or optical masks. The excitation optics may be configured to illuminate the sample at an angle so as to decouple the excitation and collection paths. As a result, the burden of separating the light paths from each other with expensive dichroic mirrors or other filters is essentially eliminated. In one embodiment, the excitation radiation illuminates the sample at an incidence of about 45°. This configuration substantially improves the system's depth discrimination since emission from out-of-focus planes is virtually undetected. This point will subsequently be discussed in more detail in connection with FIG. 2.
As the incident light is reflected from the sample, it passes through focusing optics 1400, which focus the reflected illumination line to a point. A vertical spatial slit 1405 and light detector 1410 are located behind the focusing optics. Various light detectors may be used, including photodiodes, avalanche photodiodes, phototransistors, vacuum photodiodes, photomultiplier tubes, and other light detectors. The focusing optics, spatial slit, and light detector serve to focus the sample in the focal plane of the excitation light. In one embodiment, the light is focused at about the center of the slit when the sample is located in the focal plane of the incident light. Using the light detector to sense the energy, the system can determine when the sample is in focus. In some applications, the slit may be eliminated by employing a split photodiode (bi-cell or quadrant detector), position-sensitive photodiode, or position-sensitive photomultiplier.
The line illumination technique presents certain concerns such as maintaining the plane of the sample perpendicular to the optical axis of the collection optics. If the sample is not aligned properly, image distortion and intensity variation may occur. Various methods, including shims, tilt stage, gimbal mount, goniometer, air pressure or pneumatic bearings or other technique may be employed to maintain the sample in the correct orientation. In one embodiment, a beam splitter 1420 may be strategically located to direct a portion of the beam reflected from the sample. A horizontal spatial slit 1425 and light detector 1430, similar to those employed in the auto-focusing technique, may be used to sense when the plane of the sample is perpendicular to the optical axis of the collection optics.
In response to the excitation light, the labeled targets fluoresce (i.e., emit secondary radiation). The emission is collected by collection optics 1300 and imaged onto detector 1800. A variety of lenses or combinations of lenses may serve as the collection optics, such as camera lenses or microscope objectives. The detector may be an array of light detectors used for imaging, such as charge-coupled devices (CCD) or charge-injection devices (CID). Other applicable detectors may include image-intensifier tubes, image orthicon tubes, vidicon cameras, image dissector tubes, or other imaging devices. Generally, the length of the CCD array is chosen to sufficiently detect the image produced by the collection optics. The magnification power of the collection optics dictates the dimension of the image. For instance, 2× collection optics produce an image equal to about twice the height of the sample.
The magnification of the collection optics and the sensitivity of detector 1800 play an important role in determining the spatial resolution capabilities of the system. Generally, the spatial resolution of the image is restricted by the pixel size of detector 1800. For example, if the size of each pixel in the detector is 25 µm², then the best image resolution at 1× magnification is about 25 µm. However, by increasing the magnification power of the collection optics, a higher spatial resolution may be achieved with a concomitant reduction of field of view. As an illustration, increasing the magnification of the collection optics to 5× would increase the resolution by a factor of 5 (from 25 µm to 5 µm).
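A minimal sketch of that scaling, using the figures quoted above (roughly 25 µm detector pixels and up to 5× collection optics); the helper function below is illustrative, not part of the patent.

```python
def image_resolution_um(pixel_size_um: float, magnification: float) -> float:
    """Object-plane resolution limited by the detector pixel size."""
    return pixel_size_um / magnification

for mag in (1.0, 2.0, 5.0):
    print(f"{mag:.0f}x magnification -> ~{image_resolution_um(25.0, mag):.1f} um resolution")
# 1x -> 25.0 um, 2x -> 12.5 um, 5x -> 5.0 um (with a correspondingly smaller field of view)
```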
A filter, such as a long pass glass filter, long pass or band pass dielectric filter, may be located in front of detector 1800 to prevent imaging of unwanted emission, such as incident light scattered by the substrate. Preferably, the filter transmits emission having a wavelength at or greater than the fluorescence and blocks emission having shorter wavelengths (i.e., blocking emission at or near the excitation wavelength).
Once a row of fluorescent data has been collected (or integrated), the system begins to image a subsequent row. This may be achieved by mounting the sample on a translation stage and moving it across the excitation light. Alternatively, Galvo scanners or rotating polyhedral mirrors may be employed to scan the excitation light across the sample. A complete 2-dimensional image of the sample is generated by combining the rows together.
The amount of time required to obtain the 2-dimensional image depends on several factors, such as the intensity of the laser, the type of labels used, the detector sensitivity, noise level, and resolution desired. In one embodiment, a typical integration period of a single row may be about 40 msec. Given that, a 14 µm resolution image of a 12.8 mm² sample can be acquired in less than 40 seconds.
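As a rough check on that figure, the sketch below estimates the acquisition time as the number of scanned rows multiplied by the per-row integration period. The 12.8 mm scan length is an assumption (reading the "12.8 mm²" sample as roughly 12.8 mm on a side), chosen only because it is consistent with the stated figure of under 40 seconds.

```python
def acquisition_time_s(scan_length_mm: float, line_width_um: float,
                       integration_s: float) -> float:
    """Total time to build a 2-D image row by row."""
    rows = scan_length_mm * 1000.0 / line_width_um   # number of illumination lines
    return rows * integration_s

t = acquisition_time_s(scan_length_mm=12.8, line_width_um=14.0, integration_s=0.040)
print(f"~{t:.0f} s for a 14 um resolution scan")   # ~37 s, i.e. under 40 seconds
```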
Thus, the present invention acquires images as fast as conventional confocal microscope while achieving the same resolution, but with a much larger field of view. In one dimension, the field of view is dictated by the translation stage and can be arbitrarily large (determined by the distance it translates during one integration period). In the other dimension, the field of view is limited by the objective lens. However, this limitation may be eliminated employing a translation stage for that dimension.
FIG. 2 is a simplified illustration exhibiting how the imaging system achieves good depth discrimination. As shown, a focal plane 200 is located between planes 210 and 220. Planes 210 and 220 both represent planes that are out of focus. In response to the incident light 250, all three planes fluoresce. This emission is transmitted through collection optics 261. However, emission originating from out-of-focus planes 210 and 220 is displaced sideways at 211 and 221, respectively, in relation to the collection optics' optical axis 280. Since the active area of the light detector array 260 is about 14 µm wide, nearly all of the emission from any plane that is more than slightly out of focus is not detected.
III. Detailed Description of One Embodiment of the Imaging System.
a. Detection Device
FIG. 3 schematically illustrates a particular system for imaging a sample. The system includes a body 3220 for holding a support 130 containing the sample on a surface 131. In some embodiments,
Geoarchaeology is a multi-disciplinary approach which uses the techniques and subject matter of geography, geology and other Earth sciences to examine topics which inform archaeological knowledge and thought. Geoarchaeologists study the natural physical processes that affect archaeological sites, such as geomorphology, the formation of sites through geological processes and the effects on buried sites and artifacts post-deposition. Geoarchaeologists' work frequently involves studying soil and sediments, as well as other geographical concepts, to contribute to an archaeological study.
Geoarchaeology is a recent field of research that uses computer cartography, geographic information systems (GIS) and digital elevation models (DEM) in combination with disciplines from the human and social sciences and the earth sciences.[1]
Column sampling is a technique of collecting samples from a section for analyzing and detecting the buried processes down the profile of the section. Narrow metal tins are hammered into the section in a series to collect the complete profile for study. If more than one tin is needed they are arranged offset and overlapping to one side so the complete profile can be rebuilt offsite in laboratory conditions.
Loss on ignition testing for soil organic content – a technique for measuring organic content in soil samples. Samples taken from a known place in the profile collected by column sampling are weighed, then placed in a high-temperature furnace which burns off the organic content. The ignited sample is weighed again, and the loss in weight is an indicator of organic content in the profile at a certain depth. These readings are often used to detect buried soil horizons. A buried soil's horizons may not be visible in section, and such a horizon is an indicator of possible occupation levels. Ancient land surfaces, especially from the prehistoric era, can be difficult to discern, so this technique is useful for evaluating an area's potential for prehistoric surfaces and archaeological evidence. Comparative measurements down the profile are made, and a sudden rise in organic content at some point in the profile, combined with other indicators, is strong evidence for buried surfaces.
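The measurement itself reduces to a simple percentage, as in the sketch below; the column-sample masses are invented purely for illustration.

```python
def loss_on_ignition_percent(dry_mass_g: float, ignited_mass_g: float) -> float:
    """Percentage of sample mass lost on ignition, a proxy for organic content."""
    return (dry_mass_g - ignited_mass_g) / dry_mass_g * 100.0

# Hypothetical column samples, shallowest to deepest (depth in cm, masses in g)
profile = [(10, 20.0, 19.2), (30, 20.0, 19.4), (50, 20.0, 18.1), (70, 20.0, 19.5)]
for depth, dry, ignited in profile:
    print(f"{depth:3d} cm: LOI = {loss_on_ignition_percent(dry, ignited):.1f} %")
# A sharp rise (here at 50 cm) can flag a possible buried soil horizon worth closer inspection.
```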
The magnetic susceptibility of a material is a measure of its ability to become magnetised by an external magnetic field (Dearing, 1999). The magnetic susceptibility of a soil reflects the presence of magnetic iron-oxide minerals such as maghemite; just because a soil contains a lot of iron does not mean that it will have high magnetic susceptibility. Magnetic forms of iron can be formed by burning and microbial activity such as occurs in topsoils and some anaerobic deposits. Magnetic iron compounds can also be found in igneous and metamorphic rocks.
The relationship between iron and burning means that magnetic susceptibility is often used for:
- Site prospection, to identify areas of archaeological potential prior to excavation.
- Identifying hearth areas and the presence of burning residues in deposits.[2]
- Explaining whether areas of reddening are due to burning or other natural processes such as gleying (waterlogging).
The relationship between soil formation and magnetic susceptibility means that it can also be used to:
- Identify buried soils in depositional sequences.
- Identify redeposited soil materials in peat, lake sediments etc.
Phosphate in man-made soils derives from people, their animals, rubbish and bones. One hundred people excrete about 62 kg of phosphate annually, with about the same again coming from their rubbish; their animals excrete even more. A human body contains about 650 g of PO4 (500 g, or roughly 80%, in the skeleton), which results in elevated levels in burial sites. Most is quickly immobilised on the clay of the soil and 'fixed', where it can persist for thousands of years. For a 1 ha site this corresponds to about 150 kg PO4 ha⁻¹ yr⁻¹, about 0.5% to 10% of that already present in most soils. Therefore it doesn't take long for human occupation to make orders-of-magnitude differences to the phosphate concentration in soil. Phosphorus exists in different 'pools' in the soil: 1) organic (available), 2) occluded (adsorbed), 3) bound (chemically bound). Each of these pools can be extracted using progressively more aggressive chemicals. Some workers (Eidt especially) think that the ratios between these pools can give information about past land use, and perhaps even dating.
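A back-of-the-envelope version of that argument, using the approximate figures above; the length of occupation and the soil background value are illustrative assumptions (the background is taken within the 0.5%–10% range implied by the text).

```python
# Rough sketch of how quickly occupation enriches soil phosphate.
# Annual-input figures come from the text; occupation length and background are assumptions.
people_kg_per_yr = 62.0          # excreted by ~100 people per year
rubbish_kg_per_yr = 62.0         # roughly the same again from their rubbish
annual_input = people_kg_per_yr + rubbish_kg_per_yr   # animals would add still more

site_area_ha = 1.0
soil_background_kg = 3000.0      # assumed PO4 already present per hectare
years_of_occupation = 200

added = annual_input * years_of_occupation / site_area_ha
print(f"Added after {years_of_occupation} years: {added:.0f} kg PO4/ha")
print(f"That is about {added / soil_background_kg:.1f}x the assumed background")
```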
Whatever the method of getting the phosphorus from the soil into solution, the method of detecting it is usually the same. This uses the 'molybdate blue' reaction, where the depth of the colour is proportional to phosphorus concentration. In the lab, this is measured using a colorimeter, where light shining through a standard cell produces an electrical current proportional to the light attenuation. In the field, the same reaction is used on detector sticks, which are compared to a colour chart.
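In practice the colorimeter readings are converted to concentrations via a calibration line fitted to known standards, roughly as sketched below; the standard concentrations and absorbances are invented for illustration.

```python
import numpy as np

# Hypothetical calibration standards: known P concentrations (mg/L) vs measured absorbance
conc_std = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
abs_std = np.array([0.01, 0.11, 0.22, 0.43, 0.85])

slope, intercept = np.polyfit(conc_std, abs_std, 1)   # absorbance ≈ slope*conc + intercept

def absorbance_to_conc(a: float) -> float:
    """Invert the calibration line to estimate concentration from absorbance."""
    return (a - intercept) / slope

print(f"Sample with absorbance 0.30 -> ~{absorbance_to_conc(0.30):.2f} mg P/L")
```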
Phosphate concentrations can be plotted on archaeological plans to show former activity areas, and is also used to prospect for sites in the wider landscape.
The particle size distribution of a soil sample may indicate the conditions under which the strata or sediment were deposited. Particle sizes are generally separated by means of dry or wet sieving (coarse samples such as till, gravel and sands, sometimes coarser silts) or by measuring changes in the density of a dispersed solution of the sample (in sodium pyrophosphate, for example) (finer silts, clays). A rotating clock-glass with a very fine-grained dispersed sample under a heat lamp is useful in separating particles.
The results are plotted on curves which can be analyzed with statistical methods for particle distribution and other parameters.
The fractions received can be further investigated for cultural indicators, macro- and microfossils and other interesting features, so particle size analysis is in fact the first thing to do when handling these samples.
Trace element geochemistry is the study of the abundances of elements in geological materials that do not occur in a large quantity in these materials. Because these trace element concentrations are determined by the particular conditions under which a given geological material forms, they are usually unique between two locations which contain the same type of rock or other geological material.
Geoarchaeologists use this uniqueness in trace element geochemistry to trace ancient patterns of resource-acquisition and trade. For example, researchers can look at the trace element composition of obsidian artifacts in order to "fingerprint" those artifacts. They can then study the trace element composition of obsidian outcrops in order to determine the original source of the raw material used to make the artifact.
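Conceptually, the "fingerprinting" step amounts to comparing an artifact's trace-element profile with profiles measured at candidate outcrops and selecting the closest match. The toy sketch below uses invented element values; real studies rely on many more elements and multivariate statistics.

```python
import math

# Hypothetical trace-element profiles (ppm) for two obsidian sources and one artifact
sources = {
    "Source A": {"Zr": 95.0, "Rb": 140.0, "Sr": 60.0},
    "Source B": {"Zr": 180.0, "Rb": 110.0, "Sr": 15.0},
}
artifact = {"Zr": 176.0, "Rb": 108.0, "Sr": 17.0}

def distance(a: dict, b: dict) -> float:
    """Euclidean distance between two element profiles with the same keys."""
    return math.sqrt(sum((a[e] - b[e]) ** 2 for e in a))

best = min(sources, key=lambda name: distance(artifact, sources[name]))
print("Closest geochemical match:", best)    # "Source B" in this made-up example
```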
Geoarchaeologists study the mineralogical characteristics of pots through macroscopic and microscopic analyses. They can use these characteristics to understand the various manufacturing techniques used to make the pots, and through this, to know which production centers likely made these pots. They can also use the mineralogy to trace the raw materials used to make the pots to specific clay deposits.[3]
Naturally occurring ostracods in freshwater bodies are affected by changes in salinity and pH due to human activities. Analysis of ostracod shells in sediment columns shows the changes brought about by farming and habitation activities. This record can be correlated with age-dating techniques to help identify changes in human habitation patterns and population migrations.[4]
Over the last decades, archaeologists and historians have faced the necessity to reconstruct ancient settlement history not only through the study of the material excavated, but also with the use of palaeo-environmental parameters.
- Ghilardi, M. and Desruelles, S. (2008) "Geoarchaeology: where human, social and earth sciences meet with technology". S.A.P.I.EN.S. 1 (2)
- Tite, M.S., and Mullins, C. (1971). "Enhancement of magnetic susceptibility of soils on archaeological sites.". Archaeometry 13: 209–219.
- Druc, I. C. and Q. H. J. Gwyn (1997), From Clay to Pots: A Petrographical Analysis of Ceramic Production in the Callejón de Huaylas, North-Central Andes, Peru, Journal of Archaeological Science, 25, 707-718.
- Manuel R. Palacios-Fest, "Nonmarine ostracode shell chemistry from ancient Hohokam irrigation canals in central Arizona: A paleohydrochemical tool for the interpretation of prehistoric human occupation in the North American Southwest", Geoarchaeology, Volume 9, Issue 1, Pages 1–29, Published Online: 9 Jan 2007
- Slinger, A., Janse, H.. and Berends, G. 1980. Natuursteen in monumenten. Zeist / Baarn Rijksdienst voor de Monumentenzorg.
- Kasig, Werner 1980. Zur Geologie des Aachener Unterkarbons (Linksrheinisches Schiefergebirge, Deutschland) — Stratigraphie, Sedimentologie und Palaeogeographie des Aachener Kohlenkalks und seine Bedeutung fuer die Entwicklung der Kulturlandschaft im Aachener Raum Aachen RWTH Fak Bergbau… "zur Erlangung…" =. Aachen RWTH.
- Jonghe, Sabine de -, Tourneur, Francis, Ducarme, Pierre, Groessens, Eric e.a. 1996. Pierres à bâtir traditionnelles de la Wallonie - manuel de terrain. Jambes / Louvain la Neuve ucl,chab / dgrne / region wallonne
- Dreesen, Roland, Dusar, M. and Doperé, F., 2001. Atlas Natuursteen in Limburgse monumenten – 2nd print, 320 pp.
comprises a coil 20 and a conductor 30. Conductor 30 threads through coil 20. The two are fastened together but are insulated from each other. The relative position between them is such that the electrically charged particles in motion in conductor 30 are in the magnetic field generated by coil 20 and that the direction of the motion of the electrically charged particles is not parallel to the direction of the magnetic field.
- When a time varying electric current flows through coil 20, coil 20 will generate an induced electric field. During a steady state, the changing induced electric field will generate a magnetic field independent of coil 20. At this time, when an electric current also flows through conductor 30, conductor 30 will be subject to an Ampere force in the magnetic field generated by the induced electric field (the macroscopic expression of the Lorentz force). The direction of this force is indicated by 60, as shown in FIG. 2. This Ampere force is the propulsive force generated by system 10. Since coil 20 and conductor 30 are fastened together, when conductor 30 moves under the action of this Ampere force, coil 20 will move together with it; thus system 10 achieves propulsion.
- Since the direction of the magnetic field generated by the induced electric field is time varying, to achieve propulsion, the electric current in conductor 30 should also be time varying. The two should have the same frequency, with a phase difference that is an integer multiple of a ½ cycle.
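A quick numerical way to see why that phase condition matters is to treat the force as proportional to the product of two sinusoids, B(t)·I(t), and average it over a cycle. In the sketch below (arbitrary amplitudes, not values from the disclosure), the time-averaged product is largest when the phase difference is a whole or half cycle and vanishes at a quarter cycle.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10000, endpoint=False)   # one cycle at unit frequency
b = np.sin(2 * np.pi * t)                           # field driven by the induced E-field (arb. units)

for phase_cycles in (0.0, 0.25, 0.5):
    i = np.sin(2 * np.pi * (t + phase_cycles))      # conductor current, phase-shifted
    mean_force = np.mean(b * i)                     # time-averaged force ~ <B(t) I(t)>
    print(f"phase = {phase_cycles:4.2f} cycle -> mean force proportional to {mean_force:+.3f}")
# 0 and 0.5 cycle give the largest magnitude (opposite signs); 0.25 cycle averages to zero.
```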
- Although the configuration shown in FIG. 1 and FIG. 2 has been depicted as a preferred embodiment, the present invention can be carried out using conductive coils having any of a variety of shapes and sizes.
- Detailed Description of the Optimized Preferred Device
- The present disclosure provides an optimized preferred device. Refer to FIG. 3. As shown in FIG. 3, this device is basically identical to the device described in FIG. 1 and FIG. 2. The difference is that it uses a closed annular coil 40 and a series of conductors 50 a, 50 b, 50 c, 50 d, 50 e and 50 f. Conductors 50 a, 50 b, 50 c, 50 d, 50 e and 50 f respectively thread through coil 40 and are each fastened to coil 40.
- Coil 20 as shown in FIG. 1 and FIG. 2 generates magnetic fields that are distributed inside and outside the coil. Annular coil 40 as shown in FIG. 3 can eliminate possible adverse impact generated by magnetic fields outside the coils.
- As shown in FIG. 3, at any time, the direction of magnetic field where conductors 50 a, 50 b, 50 c exist and the direction of magnetic field where conductors 50 d, 50 e, 50 f exist are opposite. Therefore, the electric current in conductors 50 a, 50 b, 50 c and the electric current in conductors 50 d, 50 e and 50 f have a ½ cycle phase difference.
- To enhance the magnetic induction intensity of the radiation magnetic field in the coils, coil 40 can be made of multiple coils; coils that make up coil 40 are connected in parallel in the electric circuit.
- The present disclosure includes that contained in the appended claims, as well as that of the foregoing description. Although this invention has been described in its preferred form with a certain degree of particularity, it is understood that the present disclosure of the preferred form has been made only by way of example and that numerous changes in the details of construction and the combination and arrangement of parts may be resorted to without departing from the spirit and scope of the invention.
Claims (6)
1. A system that uses the Lorentz Force to achieve propulsion and that does not generate a reacting force comprising:
at least one device that generates a magnetic field by generating a changing electric field;
at least one carrier of electrically charged particles in motion;
the device that generates a magnetic field by generating a changing electric field and the carrier of electrically charged particles in motion are fastened together, the relative position between them is such that the electrically charged particles in motion are in the magnetic field generated by changing electric field and that the direction of the motion of the electrically charged particles is not parallel to the direction of the magnetic field generated by changing electric field;
the Lorentz Force that the electrically charged particles in motion are subject to in the magnetic field generated by changing electric field is the propulsive force of the system.
2. The system described in claim 1 wherein a coil is used as the device that generates a magnetic field by generating a changing electric field;
and wherein a conductor is used as the carrier for electrically charged particles in motion.
3. The system described in claim 2 wherein an annular structure is used for the coil.
4. A system for generating propulsion using Lorentz forces comprising:
a winding which carries a time varying electric current, the time varying electric current generating an induced electric field and a magnetic field, the magnetic field being independent of the winding;
a conductor that is threaded through, connected to, but insulated from the winding, the conductor carrying an electric current, wherein the conductor is subject to an Ampere force in the magnetic field generated by the induced electric field, and wherein the Ampere force is the propulsive force of the system.
5. The system of claim 4 wherein the winding is a closed annular coil.
6. The system of claim 4 wherein multiple conductors are threaded through the winding.
US12/192,_PHONE_-15 2008-08-15 Propulsion Device Using Lorentz Force Abandoned US20090085411A1 (en)
Priority Applications (1)
Application Number Priority Date Filing Date Title
US12/192,379 US20090085411A1 (en) 2008-08-15 2008-08-15 Propulsion Device Using Lorentz Force
Applications Claiming Priority (2)
Application Number Priority Date Filing Date Title
US12/192,379 US20090085411A1 (en) 2008-08-15 2008-08-15 Propulsion Device Using Lorentz Force
PCT/IB2009/054507 WO2010018558A2 (en) 2008-08-15 2009-10-13 Propulsion device using lorentz force
Publications (1)
Publication Number Publication Date
US20090085411A1 true US20090085411A1 (en) 2009-04-02
Family
ID=40507379
Family Applications (1)
Application Number Title Priority Date Filing Date
US12/192,379 Abandoned US20090085411A1 (en) 2008-08-15 2008-08-15 Propulsion Device Using Lorentz Force
Country Status (2)
Country Link
US (1) US20090085411A1 (en)
WO (1) WO2010018558A2 (en)
Cited By (4)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010018558A2 (en) * 2008-08-15 2010-02-18 Qiang Zhu Propulsion device using lorentz force
EP3358728A1 (en) * 2017-02-01 2018-08-08 Banduric, Richard Complex electric fields and static electric fields to effect motion with conduction currents and magnetic materials
US10084395B2 (en) 2012-07-06 2018-09-25 Richard Banduric Complex electric fields and static electric fields to effect motion with conduction currents
US10320312B2 (en) 2012-07-06 2019-06-11 Richard Banduric Complex electric fields and static electric fields to effect motion with conduction currents and magnetic materials
Citations (9)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US1974483A (en) * 1930-02-07 1934-09-25 Brown Thomas Townsend Electrostatic motor
US2949550A (en) * 1957-07-03 1960-08-16 Whitehall Rand Inc Electrokinetic apparatus
US3018394A (en) * 1957-07-03 1962-01-23 Whitehall Rand Inc Electrokinetic transducer
US3187206A (en) * 1958-05-09 1965-06-01 Electrokinetics Inc Electrokinetic apparatus
US3227901A (en) * 1961-09-15 1966-01-04 Jr Agnew H Bahnson Electrical thrust producing device
US3997131A (en) * 1973-12-12 1976-12-14 Alberto Kling Rotor means for an aircraft
US5197279A (en) * 1990-01-02 1993-03-30 Taylor James R Electromagnetic energy propulsion engine
US6492784B1 (en) * 1999-03-05 2002-12-10 Gravitec, Inc. Propulsion device and method employing electric fields for producing thrust
US6889420B2 (en) * 2000-06-28 2005-05-10 Robert M. Jones Method for making a stator for an electric machine
Family Cites Families (4)
* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1761132A (en) * 2004-06-24 2006-04-19 高森伟 Dynamic system of electromagnetic propulsion
US20120137652A1 (en) * 2005-10-07 2012-06-07 Asprey Margaret W Electromagnetic Thrust System
CN100470378C (en) * 2007-04-27 2009-03-18 清华大学 Ultrathin triple-freedom inching work table
US20090085411A1 (en) * 2008-08-15 2009-04-02 Zhu Qiang Propulsion Device Using Lorentz Force