200
Jean-Marc Lévy-Leblond - Google Scholar
===============

Jean-Marc Lévy-Leblond
professeur émérite, Université de Nice
Verified email at unice.fr
Physique théorique, Epistémologie, culture scientifique

Cited by:

| | All | Since 2020 |
| --- | --- | --- |
| Citations | 8353 | 1904 |
| h-index | 40 | 19 |
| i10-index | 97 | 38 |

[Bar chart of citations per year, 1986-2025; the per-year values are not cleanly recoverable from the page text.]

Articles (sorted by citations):

| Title | Cited by | Year |
| --- | --- | --- |
| Nonrelativistic particles and wave equations. JM Lévy-Leblond. Communications in Mathematical Physics 6 (4), 286-311, 1967 | 701 | 1967 |
| Possible kinematics. H Bacry, JM Lévy-Leblond. Journal of Mathematical Physics 9 (10), 1605-1614, 1968 | 666 | 1968 |
| Une nouvelle limite non-relativiste du groupe de Poincaré. JM Lévy-Leblond. Annales de l'institut Henri Poincaré, Section A, Physique Théorique 3 (1), 1-12, 1965 | 480 | 1965 |
| Galilean electromagnetism. M Le Bellac, JM Lévy-Leblond. Il Nuovo Cimento B (1971-1996) 14 (2), 217-234, 1973 | 405 | 1973 |
| Galilei group and Galilean invariance. JM Lévy-Leblond. Group Theory and Its Applications, 221-299, 1971 | 362 | 1971 |
| One more derivation of the Lorentz transformation. JM Lévy-Leblond. American Journal of Physics 44 (3), 271-277, 1976 | 341 | 1976 |
| Galilei group and nonrelativistic quantum mechanics. JM Lévy-Leblond. Journal of Mathematical Physics 4 (6), 776-788, 1963 | 324 | 1963 |
| Position-dependent effective mass and Galilean invariance. JM Lévy-Leblond. Physical Review A 52 (3), 1845, 1995 | 301 | 1995 |
| Electron capture by polar molecules. JM Lévy-Leblond. Physical Review 153 (1), 1, 1967 | 297 | 1967 |
| Who is afraid of nonhermitian operators? A quantum description of angle and phase. JM Lévy-Leblond. Annals of Physics 101 (1), 319-341, 1976 | 249 | 1976 |
| About misunderstandings about misunderstandings. JM Lévy-Leblond. Public Understanding of Science 1 (1), 17-21, 1992 | 191 | 1992 |
| La pierre de touche: la science à l'épreuve... JM Lévy-Leblond. Gallimard, 1996 | 182 | 1996 |
| L'esprit de sel: science, culture, politique. JM Lévy-Leblond. Fayard, 2014 | 145 | 2014 |
| Group-theoretical foundations of classical mechanics: the Lagrangian gauge problem. JM Lévy-Leblond | 138 | 1969 |
| Quantics. JM Lévy-Leblond, F Balibar | 135 | 1990 |
| Galilean quantum field theories and a ghostless Lee model. JM Lévy-Leblond. Communications in Mathematical Physics 4 (3), 157-176, 1967 | 130 | 1967 |
| Nonsaturation of gravitational forces. JM Lévy-Leblond. Journal of Mathematical Physics 10 (5), 806-812, 1969 | 124 | 1969 |
| The pedagogical role and epistemological significance of group theory in quantum mechanics. JM Lévy-Leblond. La Rivista del Nuovo Cimento (1971-1977) 4 (1), 99-143, 1974 | 122 | 1974 |
| Aux contraires: l'exercice de la pensée et la pratique de la science. JM Lévy-Leblond | 99 | 1996 |
| Symmetrical coupling of three angular momenta. JM Lévy-Leblond, M Lévy-Nahas. Journal of Mathematical Physics 6 (9), 1372-1380, 1965 | 98 | 1965 |

Articles 1-20.
201
ETHOGRAM DEVELOPMENT READING
RMC Mate Compatibility Workshop

ETHOGRAMS

An ethogram is a catalog or dictionary of the discrete behaviors typically employed by a species. The included behaviors are sufficiently defined that an observer may record the number of acts, or the amount of time engaged in the behaviors. The ethogram may include drawings or video in addition to a written description of each discrete behavior. In an ethogram, behaviors are described without explicit reference to their purpose. For example, although a specific movement may represent a putative threat display, it should be given an objective name such as "head forward" or "bracing display", and not "head forward threat" or "bracing threat". Complete ethograms are often used by zoos & aquariums to describe normal behavior and to monitor behavior in order to identify pathology due to illness or poor animal care. The ethogram usually describes only the small portion of the complete behavioral repertoire that is important for asking the question at hand.

HYPOTHESIS

Remember, an ethogram is a living document. Not all behaviors will be important to the mate compatibility hypothesis you developed last week during class. Decide which behaviors will be important to record in order to address your question or help to solve your issue. If you keep a master copy of your ethogram at your facility you can grow it with each project. For example, it is not uncommon to have a "master ethogram" that encompasses all known behaviors and then subset it into a mating behavior ethogram, a maternal care ethogram and a welfare ethogram, each of which may have behaviors specific to the question at hand. You may also have a "pilot" version of your study where you test your ethogram and cut behaviors you are not observing frequently after the first couple of sessions or after one season of data collection. After the completion of your study it is wise to statistically analyze your ethogram to determine which behaviors are most predictive of mate compatibility and/or reproductive success and which can be trimmed, to keep data collection efficient (e.g., behaviors that score all 0s can be eliminated easily, as can behaviors that show no variability in relation to reproductive success).

For this week, you will be focusing on courtship and mating behaviors. As you develop your ethogram, keep the following points in mind:

- The type of data collection and the specifics of your ethogram will vary depending on whether you are studying a group (e.g., herd, pack, troop) or a pair. For example, group research is often done with scan sampling, whereas solitary carnivore pairs usually call for focal sampling (see below).
- The stage of breeding you are observing will also determine which behaviors to include. For example, if you are researching at the howdy introduction stage and your species seems too aggressive for introductions, you will not be including copulation or many physical contact behaviors in your ethogram, whereas a study at the introduction phase will have these behaviors. If you are studying across both howdy and breeding introduction periods, it is a good idea to include as many behaviors as possible that occur during the two periods (e.g., aggressive and affiliative behaviors) and add a few breeding behaviors during introductions (e.g., mounting, contact aggression behaviors, copulation).
- Sometimes your species may be particularly shy, or their behaviors may be too affected by the presence of observers. In these cases, it is probably most productive to use video monitoring and train researchers to score the video at a later, convenient time.
- Remember, all species have evolved to be most active at a certain time of day, to either avoid predation or increase their efficiency, so some species will not be most active at the times observers are available. A large number of species are nocturnal, so problems in breeding may be occurring because mating introductions are performed at suboptimal times. While developing your ethogram, make sure you attempt a few observations at different times of day and/or record the animals 24/7 with video for a period of time, to ensure that you are capturing all courtship and breeding behavior.
- This course is likely taking place outside of your breeding season, so you will need to think back to the breeding season and incorporate behaviors that may not be observed during this workshop but that are present during the breeding season. You can also take advantage of nature shows and YouTube to find species-typical breeding behaviors, if available.

The Behavior Scientific Advisory Group has a great tutorial on animal behavior research methods that we have included as a resource for this workshop. It is very helpful if any of the following information is confusing or you need clarification (or if you just want to learn more about the topic!). Specifically, the section on ethogram development will be useful moving forward.

MEASURING ANIMAL BEHAVIOR

1. Description of behavior

Behavior can be described in terms of its structure, i.e., its appearance and physical form (e.g., "run bill along wing feathers"); in terms of the consequences/effect of the behavior on the environment, other individuals or the subject itself (e.g., "preen"); or in terms of the spatial relationship of the subject to the environment or other individuals (e.g., "approach the mate" or "leave the nest"). Description of structure is usually preferable, because it can be more accurate and repeatable and because it is neutral.

2. Measures

Whatever the level of description (element, pattern or sequence), behaviors can be quantified using four basic types of measure:

1. Latency: time from some specified event (e.g., start of recording) to the onset of the first occurrence of the behavior. This is not often used for most behaviors in an ethogram for mate compatibility research, but specific latencies may provide a great deal of information. Below are the ones we most typically incorporate into our mate compatibility trials:
   i. Latency to first howdy barrier contact with the opposite sex - useful in determining relative "motivation" for breeding, particularly if one individual of the pairing seems more motivated than the other.
   ii. Latency to first affiliative proximity event - useful in determining relative motivation and affiliation for potential breeding introductions.
   iii. Latency to first mounting attempt - useful in determining relative motivation and affiliation for breeding.
2. Frequency: number of occurrences of the behavior per unit time. This measure has been shown to be the most useful in quickly and accurately quantifying mate preference. It is also the easiest measure on which to achieve observer reliability across multiple observers, which further increases its usefulness.
3. Duration: length of time the behavior lasts (total, average, etc.).
4. Intensity: defined according to the behavior (e.g., loudness of the long-call in the black-headed gull; partly or fully raised head feathers in a jay; low versus high intensity of fighting according to absence or presence of physical contact), usually expressed as defined levels (e.g., high, medium or low) or on a defined scale.

Behaviors can be classified into two types which lie at the extremes of a duration scale: Events and States.
- Events: behaviors of instantaneous to short duration (<5 s), which are best described by their frequency (e.g., vocalizations). Events are often repeated rapidly in bouts, which can be measured both by their duration and their frequency.
- States: behaviors of relatively long duration (>5 s), which are best described by their total or average duration (e.g., body postures or position in the environment).
These categories are not strict: Events can also be described by duration if we are looking at a small time scale (e.g., duration of different vocalization types), while States can be measured by frequency over a long time scale (e.g., number of feeding bouts in a day).

3. Recording methods

How you collect the data you need to answer your research question is affected by the research question itself and by the logistics of doing behavioral observations. We have only briefly described the different data collection methods, to give you an idea of the time demands and differing levels of effort required for each. Certain recording methods are better suited for one purpose than another and result in different kinds of information. In general, we suggest using the focal continuous method for mate compatibility research.

One of the first decisions to make is how many animals to observe at one time.

- Focal animal: At any one time, the observer focuses their observations and records behavior for a single individual. Typically used in studies based on ethograms that contain more than just a few behaviors and/or a combination of States and Events, in studies of complex behavioral patterns, or in species that change behavior regularly and quickly.
- Group: The observer simultaneously records the behavior of all individuals in the focal group. This approach is used when the ethogram is simpler, when the animals can be seen clearly and reliably, when the identity of each animal being studied is less important, or when behavior does not change often. It is not used very often unless the animals are very sedentary and the group is small, because as behavior becomes more dynamic and/or as the number of animals to watch goes up, the quality of the data collected tends to go down.

Once you have decided on your focal approach, it is time to choose a data collection method. We will cover the basics of the most common methods used in behavioral research, starting with the methods that are the least intensive but also yield the least useful information, and moving on to the most intensive but information-rich methods.

1. Ad libitum sampling: This approach is basically informal note-taking on what the animals are doing. It is unstructured, in that there are no strict rules about when to write something down or how to express it, and thus it is largely not useful for actually quantifying behavior.

2. One-zero sampling: In this method, the observation period (e.g., one hour) is divided into increments (e.g., 5-minute periods). At the end of each increment, the observer simply writes down a "1" next to any behavior that occurred during the previous five minutes, or a "0" next to behaviors that did not occur during that increment. This is one of the easiest methods and works for both States and Events, but it does not provide accurate estimates of duration or frequency.

3. All-occurrences sampling: In this approach, the observer simply keeps a count or tally of the number of occurrences of the ethogram behaviors performed by the focal animal(s) during the observation session. This is generally an easy method to use, unless you are tracking a large number of behaviors, many individuals, or behaviors that occur very frequently. It provides only a rate or frequency of behavior and no information on duration.

4. Scan sampling: Similar to one-zero sampling, this method involves dividing the observation session into timed intervals (e.g., one-minute intervals). Each time an interval elapses (usually signaled by a timer or stopwatch), the observer records only what the animal(s) is (are) doing at that moment. Think of it as taking a photograph every minute and writing down only what the animal is doing in the photograph. Behaviors that occur between the intervals are not recorded (unless you are tracking them with one of the previously mentioned methods). This approach works well for focal animals, or for focal groups when the group size is not very large. It is not recommended for capturing rare behaviors or events.

5. Continuous sampling: This is the most information-rich and potentially challenging method to use, but it is also the method we most recommend for mate compatibility, as courtship and breeding behaviors tend to be quick, intense, and information-rich. During the observation session, the observer writes down the start and stop time of every behavior whenever it occurs. An exception is behavioral Events, which are usually so quick that their duration is difficult to capture; these are instead captured with a tally whenever they occur, and an attempt is made to record when they occurred. This method provides the most accurate estimates of the durations of behavioral States and provides estimates of the frequency of behavioral Events. It also provides data on sequences of behaviors. Because of the rigor of this method, in most cases it is only used with focal individuals and not groups, although certain sedentary and/or less behaviorally dynamic species may be studied in focal groups using continuous sampling. Behavioral observation software, or collection of data via video or voice dictation, can be advantageous when using this method. There are currently two popular apps used in zoos that make this method much easier for volunteers and interns: Animal Behaviour Pro and Zoo Monitor.
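To make the measures concrete, here is a minimal Python sketch of how continuous focal-sampling records could be reduced to latencies, frequencies, and durations. The behavior codes, timestamps, and record layout are hypothetical illustrations, not a prescribed format.

```python
from collections import defaultdict

# Continuous-sampling records: (behavior, start_s, stop_s), in seconds
# from session start; Events are logged with stop_s == start_s.
records = [
    ("howdy_barrier_contact", 12.0, 12.0),    # Event
    ("affiliative_proximity", 15.0, 94.0),    # State
    ("mount_attempt", 40.0, 40.0),            # Event
    ("affiliative_proximity", 120.0, 180.0),  # State
]
session_minutes = 60.0

counts = defaultdict(int)       # occurrences per behavior (frequency)
durations = defaultdict(float)  # total seconds per behavior (States)

for behavior, start, stop in records:
    counts[behavior] += 1
    durations[behavior] += stop - start

# Latency: time from session start to the first occurrence of a behavior.
latency_to_mount = min(start for b, start, _ in records if b == "mount_attempt")

for b in sorted(counts):
    print(f"{b}: n={counts[b]}, rate/min={counts[b] / session_minutes:.2f}, "
          f"total duration s={durations[b]:.1f}")
print(f"latency to first mount attempt: {latency_to_mount:.1f} s")
```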
202
Journal of Guidance, Control, and Dynamics, Volume 40, Issue 9
Special Issue: The Kalman Filter and Its Aerospace Applications

Covariance Correction Step for Kalman Filtering with an Attitude

Mark W. Mueller (University of California, Berkeley, Mechanical Engineering Department, Berkeley, California 94720), Markus Hehn and Raffaello D'Andrea (Swiss Federal Institute of Technology, Institute for Dynamic Systems and Control, 8092 Zurich, Switzerland)

Published online: 4 November 2016

Abstract

Redundant attitude representations are often used in Kalman filters for estimating dynamic states that include an attitude. A minimal three-element attitude deviation is combined with a reference attitude, where the deviation is included in the filter state and has an associated covariance estimate. This paper derives a reset step that adjusts the covariance matrix when information is moved from the attitude deviation to the reference attitude. When combined with the extended or unscented Kalman filter prediction and measurement steps, the reset allows one to easily construct a Kalman filter for a system whose state includes an attitude. This algorithm is closely related to (and a correction to) the multiplicative extended Kalman filter or the unscented quaternion estimator, depending on whether the reset is combined with an extended or unscented Kalman filter. In comparison to the multiplicative extended Kalman filter, it is more general and includes a reset after the measurement update, as well as a reset after both the prediction and measurement update steps of the unscented quaternion estimator. This reset step is derived by tracking the mean and covariance through a linearization, similar to an extended Kalman filter prediction step. The reset step is validated using Monte Carlo sampling.
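The abstract describes the reset as a linearized propagation of the mean and covariance when the deviation is folded into the reference attitude. Below is a minimal Python sketch of that structure, assuming a scalar-first Hamilton quaternion and a rotation-vector deviation; the first-order Jacobian T used here is a common small-angle choice and is purely illustrative, not the paper's exact correction.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def quat_mult(p, q):
    """Hamilton product of scalar-first quaternions p * q."""
    pw, pv = p[0], p[1:]
    qw, qv = q[0], q[1:]
    return np.concatenate(([pw*qw - pv @ qv],
                           pw*qv + qw*pv + np.cross(pv, qv)))

def quat_from_rotvec(delta):
    """Unit quaternion for a rotation-vector (axis * angle) deviation."""
    angle = np.linalg.norm(delta)
    if angle < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = delta / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def attitude_reset(q_ref, delta, P):
    """Fold the attitude deviation into the reference quaternion, zero the
    deviation, and correct its 3x3 covariance through the linearized reset."""
    q_ref = quat_mult(q_ref, quat_from_rotvec(delta))
    T = np.eye(3) - 0.5 * skew(delta)   # illustrative reset Jacobian (assumption)
    P = T @ P @ T.T
    return q_ref, np.zeros(3), P
```

In a full filter, one such reset would follow each measurement update (and, per the abstract, the prediction step as well), so that the filter always linearizes about a zero deviation.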
Acknowledgments

This research was supported by the Swiss National Science Foundation, under grant application 138112. The authors would like to thank Mark Psiaki and Landis Markley for their feedback and helpful comments on this paper.

Received 25 August 2016; accepted 13 October 2016; published online 4 November 2016.

Copyright © 2016 by the authors. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.
203
arXiv:1005.0288v1 [math.AG] 3 May 2010

Computing preimages of points and curves under polynomial maps

Michiel de Bondt, Stefan Maubach*
[email protected], [email protected]
Radboud University Nijmegen, Toernooiveld 1, The Netherlands

* Funded by a Veni grant of the council for the physical sciences, Netherlands Organisation for Scientific Research (NWO).

Abstract. In this paper, we give two algorithms to compute preimages of curves under polynomial endomorphisms. In particular, this gives an efficient way of computing preimages of points. Furthermore, we explain the abstract setting under which one can iteratively compute the inverse of a polynomial automorphism.

1 Notations

Let $R$ be a commutative ring with one. We write $\mathrm{MA}_n(R)$ for the set of polynomial maps $R^n \to R^n$, and $\mathrm{GA}_n(R)$ for the subset of $\mathrm{MA}_n(R)$ of invertible polynomial maps. Define $A := R^{[n]} := R[X_1, \ldots, X_n]$. Write $I$ for the identity map on $R^n$. We will use the notation $k$ for any field.

2 Introduction and motivation

If $F \in \mathrm{GA}_n(k)$ then there are several algorithms to compute the inverse. Van den Essen's algorithm (see ) uses Groebner bases to compute the inverse directly. This algorithm is in general not very efficient, except in low dimension and for $F$ of low degree. In dimension $n = 2$ there exist several other algorithms [1, 3], which are actual algorithms in the sense that they decide in finite time whether the map is invertible. These algorithms rest on the fact that in dimension $n = 2$ the automorphism group is understood, by the Jung-van der Kulk theorem, and they are rather efficient.

An ad-hoc way of computing the inverse of a map is to compute its formal power series inverse step by step. For this, bring the map into the form $F = I + H$, where $H$ has no linear or affine part. Any such endomorphism has an inverse $G$ in the formal power series ring $k[[X_1, \ldots, X_n]]$ of the form $G = I - K$, where $K$ has no linear or affine part. One can then compute the coefficients of $G$ from the lowest degree up: if the coefficients of $G$ are known up to degree $d$, then the coefficients of degree $d+1$ can be computed, since $F(G)$ is the identity up to degree $d$, and the part of degree $d+1$ fixes the coefficients of $G$ of degree $d+1$. In case $F$ is invertible, this procedure yields the polynomial inverse at some degree (the computation may continue, but will only produce zero coefficients from that degree on). The efficiency of this approach is sometimes better and sometimes worse than the Essen algorithm (depending on implementation, size and degree of the automorphism, etc.).

Another way of computing the inverse of a map is to decompose the map into simpler invertible maps. So far the only case where this works is in dimension two over a field, though the non-field case saw some progress through the recent work of Umirbaev-Shestakov [10, 11]. (This work provides an algorithm to check if $F \in \mathrm{TA}_2(\mathbb{C}[Z])$, and gives a decomposition in this case.)

The obstructions in the above algorithms are obvious. There is one that we would like to mention, which is that the inverse of $F \in \mathrm{GA}_n(k)$ may have much larger degree than $F$ and may contain a huge number of nonzero coefficients. If $\deg(F) = d$, then $\deg(F^{-1}) \le d^{n-1}$, and this bound can be attained easily (possibly the bound is even attained by a generic automorphism). This means that an inverse might not fit in any computer in explicit form, and one would actually require a decomposition into simpler automorphisms.
Our results: In this paper we address some of the above issues, mainly by focusing on computing preimages of points, instead of computing the inverse directly. First, a variation on the above power-series computation of the inverse is given in section 3. In section 4 we show how this viewpoint can be used to efficiently compute preimages under polynomial automorphisms, or even endomorphisms, without actually computing the inverse. We give two algorithms to do this. First, the already known algorithm of van den Essen, which uses Groebner bases, can be used; its efficiency may still be an issue, as it comes down to solving a system of $n$ equations in $n$ variables. The second method, which is a specialisation of the before-mentioned iterative computation of the inverse, seems to be rather efficient. This algorithm computes a parametrized preimage curve (if it exists) to a given parametrized curve $g$, i.e. if $g(t) : k \to k^n$ is given, the algorithm computes $f(t)$ such that $F(f(t)) = g(t)$. The advantage of the Groebner bases algorithm is that it gives a criterion to decide if there is no preimage curve. We point out how this might affect cryptographic systems like the TTM method (positively or negatively).

3 Iterative computation of the inverse

Examples and background

Suppose that $F = I - H \in \mathrm{MA}_n(R)$ where $H \in (\mathfrak{a}A)^n \subset \mathrm{MA}_n(R)$ and $\mathfrak{a}$ is an ideal of $A$. The following is well known:

Proposition 3.1. Let $\mathfrak{a}$ be an ideal of $A$ such that $\bigcap \mathfrak{a}^n = 0$. Let $F = I - H$ where $H \in (\mathfrak{a}A)^n$. Then $F$ has an inverse in the $\mathfrak{a}$-adic completion of $A$.

The above proposition is often applied in the case $A = k[X_1, \ldots, X_n]$, $\mathfrak{a} = (X_1, \ldots, X_n)A$, and $F = I - H$ where $H \in (\mathfrak{a}^2 A)^n$. For completeness' sake, we indicate how this $\mathfrak{a}$-adic inverse is computed and how it can yield the inverse if it exists. If $I + K$ is the inverse, then it is clear that $K \in (\mathfrak{a}^2 A)^n$. Now inductively, if the coefficients of $K$ up to and including degree $d$ are known, giving a map $K_d$ which matches $K$ up to and including degree $d$, then $(I - H)(I + K_d) \equiv I \bmod \mathfrak{a}^{d+1}$. Putting the coefficients of $K$ of degree $d+1$ as variables, and computing $(I - H)(I + K)$ modulo $\mathfrak{a}^{d+2}$, yields a system of linear equations over $R$ that is always solvable. In particular, if $I - H$ has a polynomial inverse, then at some point in this process one will have the inverse.

The interesting thing is that one can also apply such a technique for other ideals $\mathfrak{a} \subset A$; for example, if $H = (2X_2 + X_2^2,\ 0)$ and $R = \mathbb{Z}$, then one can compute the inverse in the $(2, X_1, X_2)$-adic completion, which in this case again describes the actual inverse of $I - H$. However, things get tricky: what are the requirements on $H$ to make this work? If an inverse exists, can one just approximate it, or actually give an inverse? For example, if $R = \mathbb{C}$, $F = X_1 - tX_1$, and one starts to compute coefficients of an inverse in the $t$-adic completion, then there will be no point in the computation where the coefficient is known (the coefficient is $(1-t)^{-1}$, while after $m$ steps one has $1 + t + t^2 + \cdots + t^m$).

In this section, we describe a slightly different method to iteratively compute the power series inverse, which at first is not necessarily more efficient, but which has some conceptual value that we will see later on. Next to that, we give the abstract setting in which this (and the power-series method) works for ideals other than $(X_1, \ldots, X_n)$. The very rough, unpolished, basic algorithm (which never stops in this form) is the following:

Algorithm 1: Suppose $I - H \in \mathrm{MA}_n(R)$ is given.
(1) Let $d = 0$ and choose $K_0 \in \mathrm{MA}_n(R)$ arbitrary (standard choice is $K_0 = 0$).
(2) Define $K_{d+1} := H(I + K_d)$.
(3) Increase $d$, go to (2).

We will discuss later under what conditions this algorithm makes sense and works; the idea is that $K_d$ converges to $K$ such that $I + K$ is the inverse. A working example for later reference:

Example 3.2. Define $A_i := (X_1, \ldots, X_n)^{i+1} A$ and assume $H \in A_1$. Let $I + K$ be the formal power series inverse of $I - H$. Choose $K_0 = 0$ and define $K_i$ as above. Then $K \bmod A_i = K_i \bmod A_i$. In particular, if $I - H$ is indeed invertible, then $K_i \bmod A_i$ "equals" $K$, where this "equals" means that the element of $\mathrm{MA}_n(R)$ which has the same coefficients as $K_i$ up to degree $i$, and zeros from degree $i+1$ on, is equal to $K$.

When does iteration lead to an inverse in finitely many steps?

This section is more abstract than section 4 and beyond; the reader interested in the more applicable aspects of this paper can skip forward to section 4. Also, it may be helpful to keep the (most important) example 3.2, where $\mathfrak{a} = (X_1, \ldots, X_n)A$, $A_0 = \mathfrak{a}A$, $A_1 = \mathfrak{a}^2 A$, $A_2 = \mathfrak{a}^3 A, \ldots$, in mind when reading the definitions below.

Suppose $A \supseteq A_0 \supseteq A_1 \supseteq \ldots$ is a descending chain of ideals such that $\bigcap A_i = (0)$. We denote the projections $\pi_d : A \to A/A_d$ as well as $\pi^{d+e}_d : A/A_{d+e} \to A/A_d$. We assume that for each $d$ we have a section $s_d : A/A_d \to A$, i.e. $\pi_d s_d(a) = a$ for all $a \in A/A_d$.

Definition 3.3. We call $A \supseteq A_0 \supseteq \ldots$ a composition-filtration if for any $H \in (A_1)^n$ and $G, \tilde G \in (A_0)^n$ we have:
$$\pi_d(G) = \pi_d(\tilde G) \implies \pi_{d+1}(H(G)) = \pi_{d+1}(H(\tilde G)).$$
We say that the $s_d$ form a converging system of sections (the authors did not find an already existing term in the literature) if for all $a \in A$ there exists $D \in \mathbb{N}$ such that if $d \ge D$, then $s_d \pi_d(a) = a$.

Let us explain how the above definition appears in example 3.2. Here, $A_i := (X_1, \ldots, X_n)^{i+1} A$. This is indeed a composition-filtration, as can easily be verified (substituting something having no terms below degree $d > 0$ into something having no terms below degree 2 yields something having only terms of degree $2d$ or higher). The sections $s_d$ here are the obvious canonical bijective maps sending $A/A_d$ to the elements of $A$ of degree $\le d$. Indeed, given $a \in A$, one can take $D := \deg a$, showing that this set of sections is a converging system of sections.

We define the following abbreviation of assumptions:

(P) stands for the following list of assumptions: $A_0 \supseteq A_1 \supseteq \ldots$ is a composition-filtration, and we have a converging system of sections $s_i : A/A_i \to A$. Let $F = I - H$ and $F^{-1} = I + K$. Assume $H \in (A_1)^n$ and that the identity map $I$ lies in $(A_0)^n$.

The iterative inverse algorithm

Definition 3.4. Define $\varphi : \mathrm{MA}_n(R) \to \mathrm{MA}_n(R)$ by $\varphi(K) := H(I + K)$.

Lemma 3.5. Assume (P). Let $\tilde K \in (A_0)^n \subseteq \mathrm{MA}_n(R)$. If $\pi_d \tilde K = \pi_d K$, then $\pi_{d+1} \varphi \tilde K = \pi_{d+1} K$.

Proof. Because we have a composition-filtration, $\pi_d(I + \tilde K) = \pi_d(I + K)$ implies $\pi_{d+1}(H(I + \tilde K)) = \pi_{d+1}(H(I + K))$. We claim that the latter equals $\pi_{d+1}(K)$: since $I = (I - H)(I + K) = I + K - H(I + K)$, we have $H(I + K) = K$.

Corollary 3.6. Assuming (P), the chain $K_0 = 0$, $K_{d+1} := s_{d+1}\pi_{d+1}\varphi K_d$ stabilises.

Proof. First we give a proof by induction on $d$ to show that
$$\pi_d K_d = \pi_d K. \tag{*}$$
This statement is obviously true for $d = 0$. Assuming $\pi_d K_d = \pi_d K$, we get by lemma 3.5 that $\pi_{d+1} K = \pi_{d+1}\varphi K_d$. Since $\pi_{d+1} s_{d+1} \pi_{d+1} = \pi_{d+1}$ for every $d$,
$$\pi_{d+1} K = \pi_{d+1}\varphi K_d = \pi_{d+1} s_{d+1} \pi_{d+1} \varphi K_d = \pi_{d+1} K_{d+1}$$
(end induction). Now $s_d\pi_d K_d = s_d\pi_d s_d\pi_d \varphi K_{d-1}$, and since $\pi_d s_d$ is the identity this equals $s_d\pi_d\varphi K_{d-1} = K_d$; thus
$$s_d\pi_d K_d = K_d. \tag{**}$$
Since we have a converging system of sections, there is some $D \in \mathbb{N}$ such that if $d \ge D$ then
$$s_d \pi_d K = K. \tag{***}$$
Thus
$$K_d \overset{(**)}{=} s_d \pi_d K_d \overset{(*)}{=} s_d \pi_d K \overset{(***)}{=} K$$
whenever $d \ge D$ ($d \ge D$ is needed only to ensure (***)). $\square$

The above corollary thus gives an algorithm, which we now state separately:

Algorithm 2: Assume (P). Input $H \in A_1$.
(1) Let $d = 0$ and $K_0 = 0 \in A^n$.
(2) Define $K_{d+1} := s_{d+1}\pi_{d+1} H(I + K_d)$.
(3) If $K_d = K_{d+1}$ and $K_{d-1} \ne K_d$, then check if $H(I + K_d) = K_d$. If YES then STOP; output $I + K_d$.
(4) Increase $d$, go to (2).

More examples

Example 3.7. $A := \mathbb{Z}[X_1, \ldots, X_n]$, and $A_i := 2^i A$. Let $F = (x + 2y + 4x^2,\ y + 2x^2)$, and thus $H = (-2y - 4x^2,\ -2x^2) \in (A_1)^2$, so that $F = I - H$. One can check that this is indeed a composition-filtration. The sections $s_d : A/A_d \to A$ must be chosen a bit carefully: we know that the inverse of $F$ will have coefficients that are "not far from zero", i.e. there is a bound $D$ such that the coefficients must lie in the interval $[-D, D]$. Therefore, we take the section map $s$ that sends elements of $\mathbb{Z}/2^d\mathbb{Z}$ into the interval $[-2^{d-1}, 2^{d-1} - 1]$, which is indeed a converging section. If one chooses the interval $[0, 2^d - 1]$, as is customary, it is not a converging section. Now the iteration process yields $K_0 = (0, 0)$, $K_1 = K_0$, $K_2 = (-2y, -2x^2)$, $K_3 = K_2$. The algorithm in step (3) now checks whether $I + K_3$ is the inverse of $I - H$, but it is not, so we continue: $K_4 = (-2y,\ -2x^2 - 8xy - 8y^2)$, $K_5 = (-2y,\ -2x^2 + 8xy - 8y^2)$, $K_6 = K_5$, and $I + K_5$ turns out to be the inverse.

Example 3.8. Let $F \in \mathrm{GA}_n(k)$ be such that the linear part of $F$ is $I$. For example, let $F = (X + Y^2 + 2X^2Y + X^4,\ Y + X^2)$. Let $H := F - I$. We define $A_d := (X, Y)^{d+1} k[X, Y] \subset k[X, Y]$. Now $K_0 = (0, 0) = K_1 = K_2$, $K_3 = (Y^2, X^2)$, $K_4 = (Y^2,\ X^2 - 2XY^2)$, $K_5 = (Y^2,\ X^2 - 2XY^2 + Y^4)$, and since $K_6 = K_5$ it is time to check whether this might be the inverse (otherwise one has to continue). Indeed, $(I - K_5)F = I$.

In the case that $A = R^{[n]}$, where $R$ is a reduced $k$-algebra, and $A_d = (X_1, \ldots, X_n)^{d+1} A$, the algorithm is effective in deciding whether a map is invertible. This is due to the theorem that $\deg(F^{-1}) \le \deg(F)^{n-1}$ if $R$ is a reduced ring (corollary 2.3.4 in ).
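As a concrete illustration of Algorithm 2 in the setting of Example 3.2 (where $s_d \pi_d$ is truncation at total degree $d$), here is a minimal SymPy sketch of ours; the function names and the stopping bound `max_deg` are our own choices, and the sketch keeps the $F = I - H$ convention of section 3 (note that Example 3.8 itself uses $H := F - I$).

```python
import sympy as sp

def truncate(expr, gens, deg):
    """s_d(pi_d(expr)): drop all monomials of total degree > deg."""
    if expr == 0:
        return sp.Integer(0)
    poly = sp.Poly(sp.expand(expr), *gens)
    return sp.Add(*(c * sp.prod(g**e for g, e in zip(gens, m))
                    for m, c in poly.terms() if sum(m) <= deg))

def iterative_inverse(H, gens, max_deg=50):
    """For F = I - H (H without linear/affine part), iterate
    K_{d+1} = s_{d+1} pi_{d+1} H(I + K_d); once the chain stabilises,
    verify H(I + K) = K exactly and return the inverse I + K."""
    K = [sp.Integer(0)] * len(gens)
    for d in range(1, max_deg + 1):
        shift = {g: g + k for g, k in zip(gens, K)}
        K_new = [truncate(h.subs(shift, simultaneous=True), gens, d)
                 for h in H]
        if K_new == K:
            # candidate found: exactness check of step (3)
            if all(sp.expand(h.subs(shift, simultaneous=True) - k) == 0
                   for h, k in zip(H, K)):
                return [g + k for g, k in zip(gens, K)]
        K = K_new
    return None  # no polynomial inverse found up to max_deg

# The F of Example 3.8, rewritten as F = I - H:
x, y = sp.symbols('x y')
F = (x + y**2 + 2*x**2*y + x**4, y + x**2)
H = (x - F[0], y - F[1])
print(iterative_inverse(H, (x, y)))
```

For this $F$ the sketch prints $[x - y^2,\ y - x^2 + 2xy^2 - y^4]$, which is the inverse of Example 3.8 written as $I + K$.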
4 Injective morphisms

Iterative preimage algorithm

In this section, we will assume that $F : R^n \to R^n$ is a polynomial endomorphism of the form $F = I - H$ where $H$ has affine part zero. Suppose $g(t) := (g_1(t), \ldots, g_n(t)) \in (R[t])^n$ is a nonzero curve satisfying $g(0) = 0$, and $f(t) := F(g(t))$, which hence is a curve contained in the image of $F$. (Note that $f(0) = F(g(0)) = F(0) = 0$.) Since $F$ is of the described form, its extension $F : R[[t]]^n \to R[[t]]^n$ is an automorphism. Hence, there is at most one parametrized curve $\tilde g(t)$ satisfying $\tilde g(0) = 0$ such that $F(\tilde g(t)) = f(t)$. (Note: being the image of such a parametrized curve may be something stronger than being a curve which is contained in the image of $F$!) We will describe a method to compute the curve $g(t)$ given $f(t)$ and $F$.

Remark 4.1. Given $F = I - H$ where the affine part of $H$ is zero, and $f(t) \in R[t]^n$ such that $f(0) = 0$, there exists at most one $\tilde g(t) \in R[t]^n$ satisfying $\tilde g(0) = 0$ such that $F(\tilde g) = f$.

Proof. Since $F$ is of the form $I - H$ where $H$ has affine part zero, it has a power series inverse $G$. If $f \in R[[t]]^n$ is such that $f(0) = 0$, then $g := G(f)$ is a well-defined element of $R[[t]]^n$. Since in this case $g = G(f) = G(F(\tilde g)) = \tilde g$, $\tilde g$ is unique. In case $\tilde g \in R[t]^n$ there is one (polynomial) solution; if $\tilde g \in R[[t]]^n \setminus R[t]^n$ there is none.

Algorithm 3: $F, f$ as above.
(1) Let $d = 1$ and $K_1 = 0 \in R^n$.
(2) Define $K_{d+1} := H(f + K_d) \bmod (t^{d+1})$.
(3) If $K_d = K_{d+1}$ and $K_{d-1} \ne K_d$, then check if $H(f + K_d) = K_d$. If YES then STOP; output $f + K_d$.
(4) Increase $d$, go to (2).

Proposition 4.2. If $f \in R[t]^n$ satisfies $f(0) = 0$ and there is some $g \in R[t]^n$, $g(0) = 0$, such that $F(g) = f$, then the above process terminates, and the output equals $g$. Furthermore, $g$ is unique.

Proof. Uniqueness follows from remark 4.1. We will prove that $K_d \equiv g - f \bmod t^d$. The case $d = 1$ is trivial. Assume $K_d \equiv g - f \bmod t^d$; then $K_{d+1} = H(f + K_d) \bmod (t^{d+1})$. Now remark that since $H$ has affine part zero, for any $p, q \in R[t]^n$ satisfying $p(0) = q(0) = 0$ we have $p \equiv q \bmod t^d \Rightarrow H(p) \equiv H(q) \bmod t^{d+1}$. Note that $f + K_d \equiv g \bmod t^d$, hence $H(f + K_d) \equiv H(g) \bmod t^{d+1}$. Since $f = F(g) = (I - H)(g) = g - H(g)$, we have $H(g) = g - f$. Concluding, $K_{d+1} = H(f + K_d) \equiv g - f \bmod t^{d+1}$. The proposition now follows.

In case $F$ is an automorphism, there is obviously no need to require that $f = F(g)$ for some $g$; one only needs to assume that $f(0) = 0$, for then $g := F^{-1}(f)$ satisfies $g(0) = 0$.

Remark 4.3. If $F$ is an automorphism, then a preimage of $c \in R^n$ can be computed by computing the preimage curve $g(t)$ of $ct := (c_1 t, \ldots, c_n t)$; then $g(1)$ is the preimage of $c$ (since $F(g(t)) = ct$). (One could take any curve $f$ through $c$ satisfying $f(0) = 0$, though.) Our experiments have shown this setting to be quite efficient.

Groebner bases preimage algorithm

In this section we give another method to compute preimages of points and curves under polynomial automorphisms. We stick to the case where $R = k$, a field. In theorem 3.2.1/3.2.3 (page 64) an algorithm is given to compute the inverse (and to effectively decide if an endomorphism is an automorphism). We quote the case we need here:

Theorem 4.4 (van den Essen). Let $F \in (k[X_1, \ldots, X_n])^n$ be a polynomial endomorphism. Let $I = (Y_1 - F_1, \ldots, Y_n - F_n)$ be an ideal in $k[X_1, \ldots, X_n, Y_1, \ldots, Y_n]$. Let $B$ be the reduced Groebner basis of $I$ with respect to an ordering where $Y^\alpha < X_i$ for each $\alpha \in \mathbb{N}^n$, $1 \le i \le n$. Then $F$ is invertible if and only if $B$ is of the form $(X_1 - G_1(Y), \ldots, X_n - G_n(Y))$, and in that case $G := (G_1, \ldots, G_n)$ is the inverse of $F$.

The following is straightforward:

Corollary 4.5. Let $F \in (k[X_1, \ldots, X_n])^n$ be a polynomial endomorphism. Let $I = (c_1 - F_1, \ldots, c_n - F_n)$, where $c_i \in k$, be an ideal in $k[X_1, \ldots, X_n]$. Let $B$ be the reduced Groebner basis of $I$. Then:
(1) $B = (X_1 - b_1, \ldots, X_n - b_n)$ if and only if $F(X_1, \ldots, X_n) = c$ has exactly one solution, $X_i = b_i$.
(2) If $B$ is not of the form in (1), then $F$ is not an automorphism.
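Corollary 4.5 can be tried directly in any computer algebra system with Groebner bases. Here is a minimal SymPy sketch; the map $F$ and the point $c$ are hypothetical choices of ours, not examples from the paper.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = (x + y**2, y + (x + y**2)**2)   # a tame automorphism (our own example)
c = (1, 2)                          # the point whose preimage we want

# Reduced Groebner basis of (c1 - F1, c2 - F2) in k[x, y] (Corollary 4.5)
B = sp.groebner([c[0] - F[0], c[1] - F[1]], x, y, order='lex')
print(list(B))   # [x, y - 1]
```

Here the reduced basis is $(x,\ y - 1)$, so the unique preimage of $(1, 2)$ is $(0, 1)$; indeed $F(0, 1) = (1, 2)$.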
We will give a modified version of the above theorem of van den Essen to find preimages of curves.

Definition 4.6. Let $F = I - H \in k[X_1, \ldots, X_n]^n$ where $H$ has affine part zero, and $f, g \in k[t]^n$ such that $f(0) = g(0) = 0$. Then we define the following ideals in $k[t][X_1, \ldots, X_n]$: $(F - f) := (F_1 - f_1, \ldots, F_n - f_n)$ and $(X - g) := (X_1 - g_1, \ldots, X_n - g_n)$.

The only reason we assume that $F$ is of the described form $I - H$ is that we can then use remark 4.1.

Theorem 4.7. Let $F \in (k[X_1, \ldots, X_n])^n$ be a polynomial endomorphism, and let $f(t)$ be a curve. Let $B$ be the reduced Groebner basis of $(F - f)$ with respect to an ordering where $t^m < X_i$ for each $m \in \mathbb{N}$, $1 \le i \le n$. Now:
(1) $B = (X_1 - g_1, \ldots, X_n - g_n)$ if and only if $(F - f) = (X - g)$. In particular, if $F$ is an automorphism, then $B$ is of the said form.
(2) If $F(g) = f$, then $B \subseteq (X - g)$.

Hence, a curve $g$ such that $F(g) = f$ can, if it exists, be found by finding an ideal $(X - g) \supseteq B$. Note that in part (2), finding such a $g$ may have become much easier because of the simpler form of $B$ compared to $(F - f)$. Theorem 4.7 is based on the following lemma:

Lemma 4.8.
(1) $(F - f) \subseteq (X - g) \iff F(g) = f$.
(2) In case $F \in \mathrm{GA}_n(k)$, we have $(F - f) \subseteq (X - g) \iff (F - f) = (X - g) \iff F(g) = f$.

Proof. $(F - f) \subseteq (X - g) \iff (F - f) \equiv 0 \bmod (X - g) \iff F_i(X) - f_i \equiv 0 \bmod (X - g)$ for all $1 \le i \le n$ $\iff F_i(g) - f_i = 0$ for all $1 \le i \le n$ $\iff F(g) = f$, proving (1). If $F$ is invertible, let $G$ be the inverse of $F$. By (1), $(F - f) \subseteq (X - g) \iff F(g) = f$; but then also $G(f) = g$, hence $(G - g) \subseteq (X - f)$. Substituting $X := F$ in the latter yields $(X - g) \subseteq (F - f)$, proving (2).

Proof of theorem 4.7. (1) Suppose $(F - f) = (X - g)$. Then, since $(X_1 - g_1, \ldots, X_n - g_n) = (X - g)$ is a reduced basis of $(F - f)$, this must be the result of the algorithm. The other way around, if $B = (X_1 - g_1, \ldots, X_n - g_n)$, then of course $(F - f) = (X_1 - g_1, \ldots, X_n - g_n) = (X - g)$, since it is the same ideal, only with a different basis. (2) is just a reformulation of lemma 4.8 part (1).

Maple files of algorithms: If you are interested in Maple files using the iterative preimage algorithm, contact the authors.

References

[1] K. Adjamagbo, A. van den Essen, A new inversion formula for a polynomial map in two variables. J. Pure Appl. Algebra 76 (1991), no. 2, 119-120.
[2] L. Goubin, N. Courtois, Cryptanalysis of the TTM cryptosystem. Advances in cryptology - ASIACRYPT 2000 (Kyoto), 44-57, Lecture Notes in Comput. Sci. 1976, Springer, Berlin, 2000.
[3] M. Dickerson, The inverse of an automorphism in polynomial time. J. Symbolic Comput. 13 (1992), no. 2, 209-220.
[4] A. van den Essen, A criterion to decide if a polynomial map is invertible and to compute the inverse. Comm. Algebra 18 (1990), no. 10, 3183-3186.
[5] A. van den Essen, Polynomial Automorphisms and the Jacobian Conjecture. Progress in Mathematics, vol. 190, Birkhäuser, 2000.
[6] J. Ding, T. Hodges, Cryptanalysis of an implementation scheme of the tamed transformation method cryptosystem. J. Algebra Appl. 3 (2004), no. 3, 273-282.
[7] T. Moh, A public key system with signature and master key functions. Comm. Algebra 27 (1999), no. 5, 2207-2222.
[8] T. Moh, An application of algebraic geometry to encryption: tame transformation method. Proceedings of the International Conference on Algebraic Geometry and Singularities (Sevilla, 2001), Rev. Mat. Iberoamericana 19 (2003), no. 2, 667-685.
[9] T. Moh, On the signature scheme TTMs. Affine algebraic geometry, 379-401, Osaka Univ. Press, Osaka, 2007.
[10] I. Shestakov, U. Umirbaev, The tame and the wild automorphisms of polynomial rings in three variables. J. Amer. Math. Soc. 17 (2004), no. 1, 197-227.
[11] I. Shestakov, U. Umirbaev, Poisson brackets and two-generated subalgebras of rings of polynomials. J. Amer. Math. Soc. 17 (2004), no. 1, 181-196.
204
An application of Chebyshev association inequality?

Let $X$ be a r.v. and let $f \geq 0$ be a nonincreasing function, $g$ be a nondecreasing real-valued function. Suppose $h \geq 0$ is a function such that $h(X)$ has finite expectation with $E[h(X)f(X)] \leq E[h(X)]$. Assuming all expectations are finite, prove that $$E[f(X)g(X)h(X)] \leq E[h(X)g(X)].$$

What I know: If $Y \geq 0$ then we have $$E[Y]\,E[Yf(X)g(X)] \leq E[Yg(X)]\,E[Yf(X)].$$ Clearly, if $g(X) \geq 0$, the claim is immediate since we can take $Y = h(X)$. However, if this is not the case, then $E[h(X)f(X)]E[h(X)g(X)] \leq E[h(X)]E[h(X)g(X)]$ may not be true anymore. I couldn't escape this...

1 Answer

I think there is a problem with this question (hope I'm not wrong). Suppose your statement holds. For any $k > 0$ the function $g - k$ is still nondecreasing, so it satisfies $$E[f(X)(g(X)-k)h(X)] \leq E[h(X)(g(X)-k)],$$ i.e., after expanding by linearity, $$E[f(X)g(X)h(X)] - E[h(X)g(X)] \leq k\,\big(E[f(X)h(X)] - E[h(X)]\big).$$ So if $E[f(X)h(X)] - E[h(X)] < 0$, letting $k \to +\infty$ drives the right-hand side to $-\infty$ while the left-hand side is a fixed finite number, a contradiction. Hence, if the statement is true, we must in fact have $E[h(X)f(X)] = E[h(X)]$. The converse is clear, since we just need to take $Y = h(X)$ as you pointed out. Or maybe we should have something like $g \geq 0$.
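(A quick empirical sanity check of the weighted association inequality quoted under "What I know"; the particular $f$, $g$, $Y$ below are arbitrary choices of ours satisfying the monotonicity and positivity assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal(200_000)

f = np.exp(-X)          # f >= 0, nonincreasing
g = X**3                # nondecreasing (not sign-restricted)
Y = np.abs(X) + 1.0     # Y >= 0, playing the role of h(X)

lhs = Y.mean() * (Y * f * g).mean()
rhs = (Y * g).mean() * (Y * f).mean()
print(lhs <= rhs)       # expected: True (up to Monte Carlo error)
```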
205
I. The Hölder Inequality

Hölder: $\|fg\|_1 \le \|f\|_p \|g\|_q$ for $\frac1p + \frac1q = 1$.

What does it give us? Hölder $\Rightarrow (L^p)^* = L^q$ (Riesz Representation); also: relations between $L^p$ spaces.

I.1. How to prove the Hölder inequality.

(1) Prove Young's Inequality: $ab \le \frac{a^p}{p} + \frac{b^q}{q}$.

(2) Then put $A = \|f\|_p$, $B = \|g\|_q$. Note: $A, B \ne 0$, or else the inequality is trivial. Then let $a = \frac{|f(x)|}{A}$, $b = \frac{|g(x)|}{B}$ and apply Young's:
$$ab = \frac{|f(x)g(x)|}{AB} \le \frac{|f(x)|^p}{pA^p} + \frac{|g(x)|^q}{qB^q} = \frac{a^p}{p} + \frac{b^q}{q},$$
$$\frac{1}{AB}\int |f(x)g(x)|\,d\mu \le \frac{1}{pA^p}\int |f|^p\,d\mu + \frac{1}{qB^q}\int |g|^q\,d\mu,$$
but $A^p = \int |f|^p\,d\mu$ and $B^q = \int |g|^q\,d\mu$, so this is
$$\frac{\|fg\|_1}{\|f\|_p\|g\|_q} \le \frac1p + \frac1q = 1 \quad\Longrightarrow\quad \|fg\|_1 \le \|f\|_p\|g\|_q.$$

I.1.1. How to prove Young's inequality. There are many ways.

1. Use Math 9A. [Lapidus] Wlog let $a, b < \infty$ (otherwise trivial). Define $f(x) = \frac{x^p}{p} + \frac1q - x$ on $[0, \infty)$ and use the first derivative test: $f'(x) = x^{p-1} - 1$, so $f'(x) = 0 \iff x^{p-1} = 1 \iff x = 1$. So $f$ attains its min on $[0, \infty)$ at $x = 1$ ($f'' \ge 0$). Note $f(1) = \frac1p + \frac1q - 1 = 0$ (conjugate exponents!). So $f(x) \ge f(1) = 0$, i.e.
$$\frac{x^p}{p} + \frac1q \ge x.$$
Ansatz: $x = ab^{1/(1-p)}$. Then $x^p = a^p b^{p/(1-p)} = a^p b^{-q}$ (using $\frac{p}{1-p} = -q$):
$$ab^{1/(1-p)} \le \frac{a^p b^{-q}}{p} + \frac1q,$$
and multiplying both sides by $b^q$ (note $b^{1/(1-p)}\,b^q = b^{q - q/p} = b$):
$$ab \le \frac{a^p}{p} + \frac{b^q}{q}.$$

2. Use Math 9B. [Cohn] Consider the graph of $t = s^{p-1}$.

[Figure 1: the graph of $t = s^{p-1}$ in the $(s, t)$-plane; region (1) lies between the curve and the $s$-axis for $0 \le s \le a$, and region (2) between the curve and the $t$-axis for $0 \le t \le b$.]

Since $\frac1p + \frac1q = 1 \Rightarrow \frac1p = 1 - \frac1q = \frac{q-1}{q} \Rightarrow p = \frac{q}{q-1} \Rightarrow p - 1 = \frac{1}{q-1}$, this is also the graph of $s = t^{q-1}$. Now
$$(1) = \int_0^a s^{p-1}\,ds = \frac{s^p}{p}\Big|_0^a = \frac{a^p}{p}, \qquad (2) = \int_0^b t^{q-1}\,dt = \frac{t^q}{q}\Big|_0^b = \frac{b^q}{q}.$$
Thus the area of the entire shaded region is $(1) + (2) = \frac{a^p}{p} + \frac{b^q}{q}$, which is clearly always at least as large as the box of area $ab$.

[Figure 2: the rectangle of area $ab$ is covered by regions (1) and (2); the excess area is the "extra" in the inequality.]

I.1.2. A proof without Young's inequality. Use convexity [Rudin]:
$$\varphi((1-\lambda)x + \lambda y) \le (1-\lambda)\varphi(x) + \lambda\varphi(y).$$
Since $f \in L^p$, $g \in L^q$, we have $0 < \|f\|_p, \|g\|_q < \infty$, wlog. Define
$$F(x) = \frac{|f(x)|}{\|f\|_p} \quad\text{and}\quad G(x) = \frac{|g(x)|}{\|g\|_q},$$
so that $\int F^p\,d\mu = \int \frac{|f(x)|^p}{\|f\|_p^p}\,d\mu = \frac{\int |f|^p\,d\mu}{\int |f|^p\,d\mu} = 1$, and $\int G^q = 1$ similarly. Now define
$$s(x) = \log \frac{|f(x)|^p}{\|f\|_p^p}, \qquad t(x) = \log \frac{|g(x)|^q}{\|g\|_q^q},$$
so that $F(x) = e^{s(x)/p}$ and $G(x) = e^{t(x)/q}$. Since $e^x$ is a convex function, put $\lambda = \frac1q$ and get
$$e^{\frac{s(x)}{p} + \frac{t(x)}{q}} \le \frac1p e^{s(x)} + \frac1q e^{t(x)} \quad\Longrightarrow\quad F(x)G(x) \le \frac{F(x)^p}{p} + \frac{G(x)^q}{q}.$$
Now integrate the left side:
$$\|FG\|_1 = \int |FG|\,d\mu = \int \frac{|fg|}{\|f\|_p\|g\|_q}\,d\mu = \frac{\|fg\|_1}{\|f\|_p\|g\|_q},$$
and integrate the right side:
$$\int \Big(\frac{F^p}{p} + \frac{G^q}{q}\Big)\,d\mu = \frac1p \int F^p\,d\mu + \frac1q \int G^q\,d\mu = \frac1p + \frac1q = 1.$$
Thus $\frac{\|fg\|_1}{\|f\|_p\|g\|_q} \le 1 \Rightarrow \|fg\|_1 \le \|f\|_p\|g\|_q$. Advantage of this method? No need for Young's!

I.1.3. Recap: 3 good ways to prove a functional inequality $a(x) \le b(x)$.
1. Use basic calculus on a difference function: define $f(x) := a(x) - b(x)$ and use calculus (compute $f'$, etc.) to show $f(x) \le 0$.
2. Use geometry.
3. Exploit another inequality. E.g., for any convex function $\varphi(x)$, $\varphi((1-\lambda)x + \lambda y) \le (1-\lambda)\varphi(x) + \lambda\varphi(y)$. Candidates for $\varphi$: $e^x$, $x^p$, ...

I.1.4. What did we not do yet? The cases $p = 1, \infty$. For $p = 1$, $q = \infty$:
$$|g(x)| \le \|g\|_\infty \text{ a.e.} \;\Longrightarrow\; |f(x)g(x)| \le |f(x)|\,\|g\|_\infty \text{ a.e.} \;\Longrightarrow\; \|fg\|_1 \le \|f\|_1\,\|g\|_\infty.$$
The case $p = \infty$ is exactly the same.
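(Aside, added for orientation: the special case $p = q = 2$ of Hölder is the Cauchy-Schwarz inequality in $L^2$, the model case to keep in mind:)
$$\int |fg|\,d\mu \;\le\; \Big(\int |f|^2\,d\mu\Big)^{1/2}\Big(\int |g|^2\,d\mu\Big)^{1/2}, \qquad\text{i.e. } \|fg\|_1 \le \|f\|_2\,\|g\|_2.$$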
I.2. How to use the Hölder inequality.

Assume $(X, \mathcal M, \mu)$ is a measure space, $f : X \to \mathbb R$ is measurable, and $L^p = L^p(X,\mu)$.

1. For $1 \le p \le q < \infty$, if $\mu X \le 1$, then $\|f\|_p \le \|f\|_q$.
(Aside: if instead $|f(x)| \ge 1$, the pointwise bound $|f|^p \le |f|^q$ gives
$$\int_X |f|^p\,d\mu \le \int_X |f|^q\,d\mu, \tag{I.1}$$
which compares integrals, not norms.)
If $\int_X |f|^q\,d\mu = \infty$, the claim is trivial, so assume not; then $f \in L^q$. Now $p = q$ gives $\|f\|_p = \|f\|_q$ and we are done trivially, so let $p < q$. We would like to use Hölder with $g(x) = 1$ and some conjugate exponents $\alpha, \beta$ with $\frac1\alpha + \frac1\beta = 1$. Ansatz: let $\alpha = \frac qp$ and $\beta = \frac{q}{q-p}$, so $\frac1\alpha + \frac1\beta = \frac pq + \frac{q-p}{q} = \frac qq = 1$. Now use Hölder on $|f|^p \cdot g$:
$$\big\||f|^p g\big\|_1 \le \big\||f|^p\big\|_\alpha\,\|g\|_\beta. \tag{I.2}$$
Remembering that $g = 1$, we have
$$\big\||f|^p g\big\|_1 = \int_X |f|^p = \|f\|_p^p, \qquad \big\||f|^p\big\|_\alpha = \Big(\int_X |f|^q\Big)^{p/q} = \|f\|_q^p, \qquad \|g\|_\beta = \Big(\int_X 1\Big)^{1/\beta} = (\mu X)^{1/\beta}.$$
So (I.2) becomes
$$\|f\|_p^p \le \|f\|_q^p\,(\mu X)^{(q-p)/q} \implies \|f\|_p \le \|f\|_q\,(\mu X)^{(q-p)/pq} \le \|f\|_q,$$
the last step because $\mu X \le 1$.

2. For $1 \le p \le q < \infty$: $|f(x)| \le 1$ for all $x \in X$ $\implies \|f\|_p \ge \|f\|_q^{q/p}$.
$p = q$ is trivial, so take $p < q$. Then $|f| \le 1 \implies |f|^p \ge |f|^q$, so
$$\int |f|^p \ge \int |f|^q \implies \Big(\int |f|^p\Big)^{1/p} \ge \Big(\int |f|^q\Big)^{1/p} = \|f\|_q^{q/p}.$$

3. Show $p \le q \le r$ and $f \in L^p$, $f \in L^r$ $\implies f \in L^q$.
Let $A = \{|f| \ge 1\}$ and $B = \{|f| < 1\} = \widetilde A$. Then
$$f \in L^p \implies \int_X |f|^p = \int_A |f|^p + \int_B |f|^p < \infty \implies \int_B |f|^p < \infty, \tag{I.3}$$
$$f \in L^r \implies \int_X |f|^r = \int_A |f|^r + \int_B |f|^r < \infty \implies \int_A |f|^r < \infty. \tag{I.4}$$
On $A$, $|f|^q \le |f|^r$, so $\int_A |f|^q \le \int_A |f|^r$; on $B$, $|f|^q \le |f|^p$, so $\int_B |f|^q \le \int_B |f|^p$. So
$$\int_X |f|^q = \int_A |f|^q + \int_B |f|^q \le \int_A |f|^r + \int_B |f|^p < \infty$$
by (I.3) and (I.4), which shows $f \in L^q$. Moral: to show $\int_X f \le \int_X g$, try splitting $X$.

4. Show there is a bounded linear operator $\phi : L^q \to (L^p)^*$ given by
$$\phi(f)(g) = \int_X fg\,d\mu \qquad \forall f \in L^q,\ \forall g \in L^p,$$
so that $\phi(f) : g \mapsto \int fg\,d\mu$ is the functional "integration against $f$". By the definitions of $\|\phi\|$ and $\phi(f)(g)$, then $|\int fg| \le \int|fg|$, then Hölder:
$$\|\phi\| = \sup_{f\ne0} \frac{\|\phi(f)\|}{\|f\|_q} = \sup_{f\ne0}\sup_{g\ne0} \frac{|\int_X fg\,d\mu|}{\|f\|_q\|g\|_p} \le \sup_{f\ne0}\sup_{g\ne0} \frac{\|fg\|_1}{\|f\|_q\|g\|_p} \le \sup_{f\ne0}\sup_{g\ne0} \frac{\|f\|_q\|g\|_p}{\|f\|_q\|g\|_p} = 1,$$
so $\|\phi\| \le 1$ and $\phi$ is bounded. To see $\phi$ is linear, let $f_1, f_2, f \in L^q$, $g \in L^p$, and $\alpha \in \mathbb R$; we show two elements of $(L^p)^*$ are equal by showing that they act the same way on any $g \in L^p$:
$$\phi(f_1+f_2)(g) = \int (f_1+f_2)\,g\,d\mu = \int f_1 g\,d\mu + \int f_2 g\,d\mu = \big(\phi(f_1)+\phi(f_2)\big)(g),$$
$$\phi(\alpha f)(g) = \int \alpha fg\,d\mu = \alpha\int fg\,d\mu = \alpha\,\phi(f)(g).$$
Hence (by the linearity of the integral), $\phi$ is linear.

II. The Dual of $L^p$

Proposition II.1. $\phi : L^q \to (L^p)^*$ given by $\phi(f) : g \mapsto \int fg\,d\mu$ is an isometry.

Proof. We must show $\|\phi(f)\| = \|f\|_q$ for all $f \in L^q$. Let $1 < p, q < \infty$. Then, as above,
$$\|\phi(f)\| = \sup_{g\ne0} \frac{|\int_X fg\,d\mu|}{\|g\|_p} \le \sup_{g\ne0} \frac{\|fg\|_1}{\|g\|_p} \le \|f\|_q$$
by Hölder, so $\|\phi(f)\| \le \|f\|_q$. For $\|\phi(f)\| \ge \|f\|_q$, use the fact that $\|\phi(f)\|$ is defined as a supremum: it is the smallest number such that $|\phi(f)(g)| \le \|\phi(f)\|\cdot\|g\|_p$ holds for all $g \ne 0$. In other words, if we can find one $g$ for which $\frac{|\phi(f)(g)|}{\|g\|_p} \ge \|f\|_q$, then $\|\phi(f)\| \ge \|f\|_q$.
Ansatz: let $g = |f|^{q/p}\operatorname{sgn} f$. Then $|g|^p = |f|^q = fg$. (Footnote: $\frac1p + \frac1q = 1 \Rightarrow \frac qp + 1 = q$, so $fg = f\cdot|f|^{q/p}\operatorname{sgn} f = |f|^{1+q/p} = |f|^q$.) Thus $f \in L^q \implies g \in L^p$. Now $\int|g|^p = \int|f|^q$ gives
$$\|f\|_q\,\|g\|_p = \Big(\int|f|^q\Big)^{1/q}\Big(\int|g|^p\Big)^{1/p} = \Big(\int|f|^q\Big)^{1/q+1/p} = \int|f|^q = \|f\|_q^q.$$
Thus $\phi(f)(g) = \int fg\,d\mu = \int|f|^q = \|f\|_q^q = \|f\|_q\|g\|_p$, so $\frac{|\phi(f)(g)|}{\|g\|_p} = \|f\|_q$.

Now suppose $p = 1$, $q = \infty$. We have
$$\|\phi(f)\| = \sup \frac{|\int fg\,d\mu|}{\|g\|_1} \le \sup \frac{\|g\|_1\,\|f\|_\infty}{\|g\|_1} = \|f\|_\infty$$
as before. It remains to produce $g \in L^1$ making $\int fg\,d\mu$ as close to $\big(\int|g|\,d\mu\big)\|f\|_\infty$ as we like. We have $f \in L^\infty$, so note $\|f\|_\infty < \infty$. Fix $\varepsilon > 0$ and define $B = \{|f| \ge \|f\|_\infty - \varepsilon\}$, and let $A$ be any measurable subset of $B$ such that $0 < \mu A < \infty$. (Footnote: $\mu B > 0$ by definition of $\|f\|_\infty$, but note that $\mu B$ could be $\infty$, e.g. for a constant $f$!) Define $g_\varepsilon := \chi_A\operatorname{sgn} f$. (Footnote: include the $\operatorname{sgn} f$ so that $fg_\varepsilon = |f|\chi_A$ instead of just $f\chi_A$ in the following derivation.) Then $\int |g_\varepsilon|\,d\mu = \mu A$ and
$$\int fg_\varepsilon\,d\mu = \int_A |f|\,d\mu \ge (\|f\|_\infty - \varepsilon)\,\mu A \implies \frac{\int|fg_\varepsilon|\,d\mu}{\int|g_\varepsilon|\,d\mu} \ge \|f\|_\infty - \varepsilon.$$
Since this is true for any $\varepsilon$, let $\varepsilon \to 0$ and obtain $\sup_g\big\{\frac{\int|fg|\,d\mu}{\int|g|\,d\mu}\big\} \ge \|f\|_\infty$.

Now suppose $p = \infty$, $q = 1$. Again, $\|\phi(f)\| = \sup\frac{|\int fg\,d\mu|}{\|g\|_\infty} \le \|f\|_1$ as before. Now find $g \in L^\infty$ such that $\int fg\,d\mu \ge \|g\|_\infty\int|f|\,d\mu$. Let $\alpha > 0$ and define $g = \alpha\operatorname{sgn} f$ (a constant multiple of $\operatorname{sgn} f$, so $\|g\|_\infty = \alpha$). Then
$$\int fg\,d\mu = \alpha\int|f|\,d\mu = \|g\|_\infty\int|f|\,d\mu. \qquad\Box$$
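The extremal function in the ansatz is easy to test numerically; the following sketch (assuming numpy; counting measure on 5 points, so sums play the role of integrals) confirms that $g = |f|^{q/p}\operatorname{sgn} f$ attains $\|f\|_q$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 1.5
q = p / (p - 1)
f = rng.normal(size=5)
g = np.abs(f)**(q / p) * np.sign(f)       # the extremal g
ratio = np.abs(np.sum(f * g)) / np.sum(np.abs(g)**p)**(1/p)
print(ratio, np.sum(np.abs(f)**q)**(1/q)) # the two numbers agree: ||f||_q
```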
II.1. The Riesz Representation Theorem.

Theorem II.2. Let $F$ be a bounded linear functional on $L^p$, $1 \le p < \infty$. Then there exists $g \in L^q$ such that $F(f) = \int fg\,d\mu$ for all $f \in L^p$, and $\|F\| = \|g\|_q$.

There are two common proofs for this theorem. One uses step functions and absolute continuity of functions; the other uses simple functions and absolute continuity of measures. Both follow a similar strategy:
(1) Show $F(\chi_A) = \int g\chi_A\,d\mu = \int_A g\,d\mu$:
  - use absolute continuity of $\Phi : [0,1] \to \mathbb R$ defined by $\Phi(s) = F(\chi_s)$; or
  - use absolute continuity of $\nu E = F(\chi_E)$.
(2) Extend to a dense subspace of $L^p$: step functions for $\Phi$; simple functions for $\nu$.
(3) Establish $\|F\| = \|g\|_q$: extend $F$ to bounded measurable functions and use Royden & Hölder; or define $G(f) = \int fg\,d\mu$ on $L^p$ and use density and continuity.
(4) Extend to $L^p$: approximate by step functions; or use $G$.
(5) Show uniqueness of $g$.

Method I: uses step functions, so only applies to $L^p([a,b],\lambda)$; requires reference to 3 theorems of Royden; nice use of the DCT and boundedness; absolute continuity of a function is a bit more concrete.
Method II: uses simple functions, so applies to $L^p(X)$; requires reference to 1 theorem of Royden; uses the Radon-Nikodym Theorem; smooth use of general topology; works for any σ-finite $\mu$.

Important note: one must add "$\mu$ is σ-finite" in order to do the case $p = 1$!

Method I (Royden).
Proof.
1. For $s \in [0,1]$, let $\chi_s := \chi_{[0,s]}$. Define $\Phi : [0,1] \to \mathbb R$ by $\Phi(s) = F(\chi_s)$.
Claim: $\Phi$ is absolutely continuous. Fix $\varepsilon > 0$ and let $\{(a_i,b_i)\}_{i=1}^n$ be any finite collection of disjoint subintervals of $[0,1]$ such that $\sum(b_i - a_i) < \delta$. Then $\sum|\Phi(b_i) - \Phi(a_i)| = F(f)$ for
$$f = \sum_{i=1}^n (\chi_{b_i} - \chi_{a_i})\operatorname{sgn}\big(\Phi(b_i) - \Phi(a_i)\big).$$
(Footnote: $|\Phi(b_i) - \Phi(a_i)| = (\Phi(b_i) - \Phi(a_i))\operatorname{sgn}(\Phi(b_i) - \Phi(a_i))$, and $F$ is linear.) Since $\chi_{b_i} - \chi_{a_i} = \chi_{(a_i,b_i]}$ and $|\operatorname{sgn} h| = 1$ a.e.,
$$\int |f|^p = \sum_i \int_{a_i}^{b_i} 1\,d\lambda = \sum_i (b_i - a_i) < \delta,$$
so
$$\sum|\Phi(b_i) - \Phi(a_i)| = F(f) \le \|F\|\cdot\|f\|_p < \|F\|\,\delta^{1/p}.$$
Thus the total variation of $\Phi$ over any finite collection of disjoint intervals is less than $\varepsilon$, as long as the total length of these intervals is less than $\delta = \varepsilon^p/\|F\|^p$, which shows that $\Phi$ is absolutely continuous. Then $\Phi$ is the integral of its derivative, by Royden, Lemma 5.14: there is an integrable $g$ with
$$\Phi(s) = \int_0^s g\,d\lambda, \qquad \text{i.e.} \qquad F(\chi_s) = \int_0^1 g\chi_s\,d\lambda.$$
2. Since every step function $\varphi$ on $[0,1]$ is a linear combination $\varphi = \sum c_i\chi_{s_i}$ (λ-a.e.), we get $F(\varphi) = \int_0^1 g\varphi$ by the linearity of $F$ and $\int$.
3. Now extend to the bounded measurable functions $f$ on $[0,1]$. Let $f$ be such a function, and find a bounded sequence $\{\varphi_n\} \subseteq \mathrm{Step}$ which converges λ-a.e. to $f$; this is possible by Royden, Prop. 3.22. Then $\{|f - \varphi_n|^p\}$ is uniformly bounded ($\exists M$ such that $|f - \varphi_n|^p \le M^p$ for all $n, x$) and tends to 0 λ-a.e. This bound allows us to use the DCT and get
$$\lim_{n\to\infty}\int |f - \varphi_n|^p = 0 \implies \|f - \varphi_n\|_p \xrightarrow{\,n\to\infty\,} 0.$$
Then the boundedness of $F$ and the fact that $|F(f) - F(\varphi_n)| = |F(f - \varphi_n)| \le \|F\|\cdot\|f - \varphi_n\|_p$ together imply that $F(f) = \lim_n F(\varphi_n)$. Also, $|g\varphi_n| \le |g|\cdot M_\varphi$ (where $M_\varphi$ is the uniform bound on $\{\varphi_n\}$) gives $\int fg = \lim\int g\varphi_n$ by the DCT again. Putting this together,
$$F(f) = \lim F(\varphi_n) = \lim\int g\varphi_n = \int fg \qquad \text{for all bounded measurable } f.$$
Then, since $|\int fg\,d\lambda| = |F(f)| \le \|F\|\cdot\|f\|_p$ for all such $f$, Royden's Proposition 5.12 (p. 131) gives $g \in L^q$ with $\|g\|_q \le \|F\|$; then by Proposition II.1, $\|F\| = \|g\|_q$.
4. Extend to $f \in L^p$: for every $\varepsilon$ there is $\psi \in \mathrm{Step}$ with $\|f - \psi\|_p < \varepsilon$. Since $\psi$ is bounded, $F(\psi) = \int\psi g$ by (3). Hence
$$\Big|F(f) - \int fg\Big| \le |F(f) - F(\psi)| + \Big|\int\psi g - \int fg\Big| \le \|F\|\cdot\|f-\psi\|_p + \|f-\psi\|_p\cdot\|g\|_q < (\|F\| + \|g\|_q)\,\varepsilon.$$
Since $\varepsilon$ is arbitrary, this shows $F(f) = \int fg\,d\lambda$.
5. If $g_1, g_2$ determine the same $F$ in this way, then $\int fg_1\,d\lambda - \int fg_2\,d\lambda = \int f(g_1 - g_2)\,d\lambda$ gives the zero functional, and $\|g_1 - g_2\|_q = 0 \implies g_1 = g_2$ a.e. □

Method II (Royden). Again, given a bounded linear functional $F$ on $L^p$, we must find a $g \in L^q$ such that $F(f) = \int fg\,d\mu$.

Proof. First, consider a finite measure space $(X, \mathcal A, \mu)$. Then $f$ bounded $\implies f \in L^p(\mu)$.
1. Define $\nu$ on $\mathcal A$ by $\nu E = F(\chi_E)$. Then $\nu$ is a signed measure: if $E$ is the disjoint union of $\{E_n\} \subseteq \mathcal A$, let $\alpha_n = \operatorname{sgn} F(\chi_{E_n})$ and define $f = \sum\alpha_n\chi_{E_n}$. Then $F$ is bounded, so a lemma gives
$$\sum_{n=1}^\infty |\nu E_n| = F(f) < \infty \qquad \text{and} \qquad \sum_{n=1}^\infty \nu E_n = F(\chi_E) = \nu E,$$
so $\nu$ is a signed measure. Also, $\mu E = 0 \implies \chi_E = 0$ in $L^p \implies F(\chi_E) = 0$, so $\nu \ll \mu$. Then Radon-Nikodym applies and there is a measurable $g$ such that $\nu E = \int_E g\,d\mu$. Note: $F$ bounded $\implies \nu X = F(\chi_X) < \infty$, so $\nu$ is finite; then $\nu X = \int_X g\,d\mu < \infty \implies g \in L^1(\mu)$.
2. Let $\varphi$ be a simple function. Then by linearity of $F$ and $\int$, we have $F(\varphi) = \int\varphi g\,d\mu$. Now $|F(\varphi)| \le \|F\|\cdot\|\varphi\|_p \implies g \in L^q$, by the lemma on p. 283 of Royden.
3. (&4.) Define $G(f) = \int fg\,d\mu$ for $f \in L^p$, so $G - F$ is a bounded linear functional on $L^p$ which vanishes on the (dense) simple functions. Bounded is equivalent to continuous, so $G - F = 0$ on $L^p$. Hence for all $f \in L^p$, $F(f) = \int fg\,d\mu$ and $\|F\| = \|G\| = \|g\|_q$.
5. Uniqueness of $g$: exactly as in Method I.

Now consider $\mu$ σ-finite. Choose $\{X_n\}$ such that
$$X = \bigcup_{n=1}^\infty X_n, \qquad X_n \subseteq X_{n+1}, \qquad \mu X_n < \infty \ \forall n.$$
Then the previous case gives a $g_n$ for each $X_n$ such that $g_n$ vanishes outside $X_n$ and $F(f) = \int fg_n\,d\mu$ for all $f \in L^p$ that vanish outside $X_n$. By construction, the uniqueness of $g_n$ on $X_n$ gives $g_{n+1}|_{X_n} = g_n$ (up to sets of measure 0, whose discrepancies may be safely ignored), so for $x \in X$, define $g(x) := g_n(x)$ where $x \in X_n$; $g$ is well-defined. Moreover, $|g_n|$ increases pointwise to $|g|$. By the MCT,
$$\int |g|^q\,d\mu = \lim\int |g_n|^q\,d\mu \le \|F\|^q,$$
so $g \in L^q$. For general $f \in L^p$, define $f_n = f$ on $X_n$ and $f_n = 0$ on $\widetilde{X_n}$. Then $f_n \to f$ pointwise and in $L^p$. Then $|f_n g| \le |fg| \in L^1$, so the DCT gives
$$\int fg\,d\mu = \lim\int f_n g\,d\mu = \lim\int f_n g_n\,d\mu = \lim F(f_n) = F(f). \qquad\Box$$

Side note: if $\lambda$ is Lebesgue measure and $\nu$ is the point mass at 0 (on $(\mathbb R, \mathcal B(\mathbb R))$), then what would $\frac{d\nu}{d\lambda}$ be (if we had $\nu \ll \lambda$)?
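On a finite measure space, Method II's recipe is concrete enough to run: the "measure" $\nu(E) = F(\chi_E)$ has density $g = \nu(\{i\})/\mu(\{i\})$, the discrete Radon-Nikodym derivative. A sketch (assuming numpy; the weights and the true $g$ are arbitrary test data):

```python
import numpy as np

# Finite measure space {1,...,5} with weights mu. A linear functional F on
# L^p is integration against some g; Method II recovers g from nu(E) = F(chi_E).
rng = np.random.default_rng(4)
mu = rng.uniform(0.5, 2.0, 5)
g_true = rng.normal(size=5)
F = lambda f: np.sum(f * g_true * mu)                 # "integration against g_true"

nu = np.array([F(np.eye(5)[i]) for i in range(5)])    # nu({i}) = F(chi_{i})
g = nu / mu                                           # dnu/dmu
print(np.allclose(g, g_true))                         # True: g recovered
```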
II.2. $(L^1)^* = L^\infty$, but $(L^\infty)^* \ne L^1$.

Let $X = [0,1]$, so that we can safely consider $C(X)$ as a subspace of $L^\infty(X)$. Define $\zeta : C(X) \to \mathbb R$ by $\zeta(f) = f(0)$, so $\zeta \in (C(X))^*$. By the HBT, there is $\varphi \in (L^\infty)^*$ such that $\varphi(f) = f(0)$ for all $f \in C(X)$. To see that $\varphi$ cannot be given by integration against a function in $L^1$, consider $\{f_n\} \subseteq C(X)$ defined by $f_n(x) = \max\{1 - nx, 0\}$.

[Figure 3. The functions $f_n$: triangular spikes of height 1 supported on $[0, 1/n]$.]

Then $\varphi(f_n) = f_n(0) = 1$ for all $n$. But $f_n(x) \to 0$ for all $x > 0$, so $f_n g \to 0$ a.e. for every $g \in L^1$. If we had $\varphi(f_n) = \int f_n g$ for some $g \in L^1$, then, since $|f_n g| \le |g| \in L^1$, the DCT would give
$$1 = \lim_{n\to\infty}\varphi(f_n) = \lim_{n\to\infty}\int f_n g = \int \lim_{n\to\infty} f_n g = \int 0 = 0,$$
a contradiction. Slightly fancier version: use $\zeta(f) = f(p)$ and $f_n(x) = \max\{0, 1 - n|x - p|\}$.
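A numerical illustration (a sketch assuming numpy; Riemann sums via np.trapz stand in for the integral, with $g \equiv 1$ as a representative $L^1$ function):

```python
import numpy as np

# phi(f_n) = f_n(0) = 1 for every n, yet the integral of f_n against g = 1
# tends to 0, so phi cannot be "integration against a g in L^1".
x = np.linspace(0, 1, 1_000_001)
for n in [1, 10, 100, 1000]:
    fn = np.maximum(1 - n * x, 0)
    print(n, fn[0], np.trapz(fn, x))   # value at 0 stays 1, integral ~ 1/(2n) -> 0
```

The value at 0 stays 1 while the integrals vanish, which is exactly the contradiction above.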
III. The Minkowski Inequality

$(X, \mathcal M, \mu)$, $1 \le p \le \infty$. Then $\|f+g\|_p \le \|f\|_p + \|g\|_p$.

What does it give us? $L^p$ is a normed vector space (triangle inequality); $L^p$ is Banach.

III.1. How to prove the Minkowski inequality.

III.1.1. Use the Hölder inequality.

case i) $1 < p < \infty$. For $f, g \in L^p$, define $q$ by $\frac1p + \frac1q = 1$, so that $p + q = pq$:
$$\frac1q = 1 - \frac1p = \frac{p-1}{p} \implies q = \frac{p}{p-1} \implies p + q = \frac{p^2-p}{p-1} + \frac{p}{p-1} = \frac{p^2}{p-1} = p\Big(\frac{p}{p-1}\Big) = pq.$$
Now $\big(|f+g|^{p-1}\big)^q = |f+g|^{(p-1)\frac{p}{p-1}} = |f+g|^p$. Since $L^p$ is a vector space, we have $f + g \in L^p$ and hence $|f+g|^{p-1} \in L^q$. We set up for Hölder:
$$|f+g|^p = |f+g|^{p-1}|f+g| \le |f+g|^{p-1}(|f| + |g|) = |f+g|^{p-1}|f| + |f+g|^{p-1}|g|,$$
$$\int |f+g|^p\,d\mu \le \int |f+g|^{p-1}|f|\,d\mu + \int |f+g|^{p-1}|g|\,d\mu.$$
Now Hölder gives
$$\int |f|\,|f+g|^{p-1}\,d\mu \le \|f\|_p\,\big\||f+g|^{p-1}\big\|_q \qquad \text{and} \qquad \int |g|\,|f+g|^{p-1}\,d\mu \le \|g\|_p\,\big\||f+g|^{p-1}\big\|_q.$$
Thus
$$\int |f+g|^p\,d\mu \le \big(\|f\|_p + \|g\|_p\big)\Big(\int |f+g|^{(p-1)q}\,d\mu\Big)^{1/q}.$$
If $\int|f+g|^p\,d\mu = 0$, then Minkowski is trivial, so assume not. Then we may divide by
$$\Big(\int |f+g|^{(p-1)q}\,d\mu\Big)^{1/q} = \Big(\int |f+g|^p\,d\mu\Big)^{1/q}$$
to get
$$\Big(\int |f+g|^p\,d\mu\Big)^{1 - 1/q} \le \|f\|_p + \|g\|_p, \qquad \text{i.e.} \qquad \|f+g\|_p \le \|f\|_p + \|g\|_p.$$

case ii) $p = 1$. Then
$$|f+g| \le |f| + |g| \quad (\triangle\text{-ineq}) \implies \int|f+g|\,d\mu \le \int|f|\,d\mu + \int|g|\,d\mu \implies \|f+g\|_1 \le \|f\|_1 + \|g\|_1.$$

case iii) $p = \infty$. Define the null sets $N_1 := \{|f(x)| > \|f\|_\infty\}$ and $N_2 := \{|g(x)| > \|g\|_\infty\}$. Then $f, g \in L^\infty \implies \mu N_1 = \mu N_2 = 0$, so $\mu(N_1 \cup N_2) = 0$ also. On the complement of $N_1 \cup N_2$ we have $|f(x) + g(x)| \le |f(x)| + |g(x)|$. Then taking (essential) suprema gives $\|f+g\|_\infty \le \|f\|_\infty + \|g\|_\infty$.

III.1.2. Use convexity. Let $\alpha = \|f\|_p$ and $\beta = \|g\|_p$. Note: $\alpha, \beta \ne 0$, or else trivial. Then define $f_0 := \frac1\alpha|f|$ and $g_0 := \frac1\beta|g|$, so that these functions satisfy $|f| = \alpha f_0$, $|g| = \beta g_0$, and
$$\|f_0\|_p = \|g_0\|_p = 1, \quad \text{which implies} \quad \|f_0\|_p^p = \|g_0\|_p^p = 1. \tag{III.1}$$
Set $\lambda := \frac{\alpha}{\alpha+\beta}$, so $1 - \lambda = \frac{\beta}{\alpha+\beta}$. Then we have
$$|f+g|^p \le (|f|+|g|)^p = (\alpha f_0 + \beta g_0)^p = (\alpha+\beta)^p\big(\lambda f_0 + (1-\lambda)g_0\big)^p \le (\alpha+\beta)^p\big(\lambda f_0^p + (1-\lambda)g_0^p\big)$$
by the convexity of $t^p$. Integrating, and using linearity with (III.1),
$$\|f+g\|_p^p \le (\alpha+\beta)^p\big(\lambda\|f_0\|_p^p + (1-\lambda)\|g_0\|_p^p\big) = (\alpha+\beta)^p\big(\lambda + (1-\lambda)\big) = (\alpha+\beta)^p;$$
taking $p$th roots, $\|f+g\|_p \le \alpha + \beta$.
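A quick check of the triangle inequality for several exponents (a sketch assuming numpy, counting measure, arbitrary data):

```python
import numpy as np

rng = np.random.default_rng(5)
f, g = rng.normal(size=1000), rng.normal(size=1000)
for p in [1.0, 1.5, 2.0, 7.0]:
    norm = lambda h: np.sum(np.abs(h)**p)**(1/p)
    assert norm(f + g) <= norm(f) + norm(g) + 1e-9
print("Minkowski holds for the sampled p")
```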
III.2. The Riesz-Fischer Theorem: $L^p(X, \mathcal M, \mu)$ is complete.

Road map: split into the two cases $1 \le p < \infty$ and $p = \infty$. For each case:
(1) Invoke the Banach characterization lemma (Lemma III.1 below).
(2) Define $f(x) = \sum f_n(x)$ where the series behaves, and 0 elsewhere. Use $g(x) = \sum_k |f_k(x)|$ for $1 \le p < \infty$; use $N_k = \{x : |f_k(x)| > \|f_k\|_\infty\}$ for $p = \infty$.
(3) Use Minkowski to show $f \in L^p$ and $\sum_{k=1}^n f_k \to f$ in $L^p$.

case i) $1 \le p < \infty$.
1. By the lemma, it suffices to show that every series which converges absolutely also converges in $L^p$, $p \in [1,\infty)$.
2. Let $\sum_{k=1}^\infty \|f_k\|_p < \infty$ for some $\{f_k\}_{k=1}^\infty \subseteq L^p$. NTS: $\|f - \sum_{k=1}^n f_k\|_p \to 0$ for some $f \in L^p$, since this is what $\sum_{k=1}^n f_k \to f$ in $L^p$ means. Define $g(x) = \sum_{k=1}^\infty |f_k(x)|$, so that $g \ge 0$ ($g$ may take the value $\infty$). Note
$$\Big(\sum_{k=1}^n |f_k|\Big)^p \ge 0 \tag{III.2}$$
and, since positive exponents preserve order, we also have
$$\Big(\sum_{k=1}^n |f_k|\Big)^p \le \Big(\sum_{k=1}^{n+1} |f_k|\Big)^p. \tag{III.3}$$
Then we have
$$\|g\|_p = \Big(\int \lim_n \Big(\sum_{k=1}^n |f_k|\Big)^p d\mu\Big)^{1/p} = \lim_n \Big(\int \Big(\sum_{k=1}^n |f_k|\Big)^p d\mu\Big)^{1/p} = \lim_n \Big\|\sum_{k=1}^n |f_k|\Big\|_p \le \lim_n \sum_{k=1}^n \|f_k\|_p = \sum_{k=1}^\infty \|f_k\|_p,$$
using the MCT (justified by (III.2) and (III.3)) and then Minkowski; this is finite by hypothesis. Thus $g \in L^p$, so $g < \infty$ a.e. Hence we may define
$$f(x) = \begin{cases} \sum_{k=1}^\infty f_k(x), & g(x) < \infty, \\ 0, & g(x) = \infty, \end{cases}$$
so that $f$ is measurable and $|f|^p \le g^p \implies f \in L^p$.
3. Since $\lim_n |f(x) - \sum_{k=1}^n f_k(x)| = 0$ and $|f(x) - \sum_{k=1}^n f_k(x)|^p \le g(x)^p$ are both true a.e., the DCT gives $\|f - \sum_{k=1}^n f_k\|_p \to 0$.

case ii) $p = \infty$.
1. By the lemma, it suffices to show that every absolutely convergent series converges in $L^\infty$.
2. Let $\sum_{k=1}^\infty \|f_k\|_\infty < \infty$ for some $\{f_k\} \subseteq L^\infty$. NTS: $\|f - \sum_{k=1}^n f_k\|_\infty \to 0$ for some $f \in L^\infty$. For each $k$, define $N_k := \{x : |f_k(x)| > \|f_k\|_\infty\}$, so that $\mu N_k = 0$ for all $k$, whence $\mu(\cup_k N_k) = 0$. Then if $x \notin \cup_k N_k$,
$$\sum_k |f_k(x)| \le \sum_k \|f_k\|_\infty < \infty \implies \sum_k f_k(x) \ \text{converges},$$
by what we know of $\mathbb R$. Now we may define $f(x) = \sum_k f_k(x)$ for $x \notin \cup_k N_k$ and $f(x) = 0$ for $x \in \cup_k N_k$, so that $f$ is $\mu$-measurable and bounded, i.e. $f \in L^\infty$.
3. Since $\mu(\cup_k N_k) = 0$,
$$\Big\|f - \sum_{k=1}^n f_k\Big\|_\infty \le \Big\|\sum_{k=n+1}^\infty f_k\Big\|_\infty \le \sum_{k=n+1}^\infty \|f_k\|_\infty \xrightarrow{\,n\to\infty\,} 0$$
by Minkowski and the convergence of $\sum\|f_k\|_\infty$. Thus $\|f - \sum_{k=1}^n f_k\|_\infty \to 0$.

Lemma III.1 (Banach Characterization Lemma). Suppose $(X, \|\cdot\|)$ is a normed vector space. Then $X$ is Banach $\iff$ every absolutely convergent series in $X$ is convergent.

Proof. (⇒) Suppose every Cauchy sequence converges, and let $\{x_k\}$ be such that $\sum\|x_k\| < \infty$ (i.e. $\sum x_k$ is absolutely convergent). Let $s_n := \sum_{k=1}^n x_k$. NTS: $\{s_n\}$ is Cauchy. Wlog let $n < m$:
$$\|s_m - s_n\| = \Big\|\sum_{k=n+1}^m x_k\Big\| \le \sum_{k=n+1}^m \|x_k\| \le \sum_{k=n+1}^\infty \|x_k\| \xrightarrow{\,n\to\infty\,} 0$$
(triangle inequality, then the tail of a convergent series). Hence $\{s_n\}$ is Cauchy and $s_n \to s := \sum_{k=1}^\infty x_k \in X$.
(⇐) Suppose that every absolutely convergent series is convergent, and let $\{x_n\}$ be a Cauchy sequence. NTS: $x_n \to x \in X$. Since $\{x_n\}$ is Cauchy, we can find a subsequence $\{x_{n_k}\}$ which satisfies $\|x_{n_{k+1}} - x_{n_k}\| < \frac{1}{2^k}$. Define $v_1 = x_{n_1}$ and $v_k = x_{n_k} - x_{n_{k-1}}$ for $k \ge 2$. Then we have a telescoping sum:
$$\sum_{k=1}^N v_k = x_{n_1} + (x_{n_2} - x_{n_1}) + \dots + (x_{n_N} - x_{n_{N-1}}) = x_{n_N},$$
and
$$\sum_{k=1}^\infty \|v_k\| \le \|x_{n_1}\| + \sum_{k=1}^\infty \frac{1}{2^k} = \|x_{n_1}\| + 1 < \infty$$
shows $\sum\|v_k\|$ converges. Hence $\sum_{k=1}^\infty v_k$ converges by hypothesis to some $v \in X$. Then
$$\sum_{k=1}^\infty v_k = v = \lim_{N\to\infty}\sum_{k=1}^N v_k = \lim_{N\to\infty} x_{n_N}$$
shows $x_{n_N} \to v$. Now $\|v - x_n\| \le \|v - x_{n_k}\| + \|x_{n_k} - x_n\|$ shows that $x_n \to v$ also. □
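Riesz-Fischer in action, numerically (a sketch assuming numpy; the series $f_k(x) = \sin(\pi k x)/k^2$ is an arbitrary absolutely summable choice in $L^2[0,1]$):

```python
import numpy as np

# sum_k ||f_k||_2 <= sum 1/k^2 < infinity, so the partial sums S_n form a
# Cauchy (hence convergent) sequence in L^2[0,1].
x = np.linspace(0, 1, 10_000)
l2 = lambda h: np.sqrt(np.mean(h**2))            # L^2 norm, mu[0,1] = 1
S, snaps = np.zeros_like(x), {}
for k in range(1, 1001):
    S = S + np.sin(np.pi * k * x) / k**2
    if k in (10, 100, 1000):
        snaps[k] = S.copy()
print(l2(snaps[100] - snaps[10]), l2(snaps[1000] - snaps[100]))  # small, smaller
```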
IV. Hilbert Space Review

Most material in this talk is from Reed & Simon.

Definition IV.1. A complex vector space with a map $\langle\cdot,\cdot\rangle$ is called an inner product space (IPS) when
(i) $\langle x,x\rangle \ge 0$, and $\langle x,x\rangle = 0$ iff $x = 0$;
(ii) $\langle x, y+z\rangle = \langle x,y\rangle + \langle x,z\rangle$;
(iii) $\langle x, \alpha y\rangle = \alpha\langle x,y\rangle$;
(iv) $\langle x,y\rangle = \overline{\langle y,x\rangle}$.
An inner product space is a Hilbert space iff it is complete under the norm $\|x\| = \sqrt{\langle x,x\rangle}$.

Definition IV.2. Two vectors $x \ne y$ are orthogonal iff $\langle x,y\rangle = 0$. A collection $\{x_i\}$ is an orthonormal set iff $\langle x_i,x_i\rangle = 1$ and $\langle x_i,x_j\rangle = 0$ for all $i \ne j$.

Proposition IV.3 (Pythagorean Theorem). Let $\{x_n\}_{n=1}^N$ be an orthogonal set in an IPS. Then
$$\Big\|\sum_{n=1}^N x_n\Big\|^2 = \sum_{n=1}^N \|x_n\|^2.$$
Proof. $\|\sum x_n\|^2 = \langle\sum x_n, \sum x_m\rangle = \sum_{n,m=1}^N \langle x_n,x_m\rangle$. All the terms with $n \ne m$ are 0 because of orthogonality, leaving only $\sum_{n=1}^N \langle x_n,x_n\rangle = \sum_{n=1}^N \|x_n\|^2$. □

Proposition IV.4 (Bessel's Inequality). If $\{x_\alpha\}_{\alpha\in A}$ is an orthonormal set in an IPS, then for any $x$,
$$\sum_{\alpha\in A} |\langle x,x_\alpha\rangle|^2 \le \|x\|^2.$$
Proof. It suffices to show that $\sum_{\alpha\in F} |\langle x,x_\alpha\rangle|^2 \le \|x\|^2$ for any finite $F \subseteq A$:
$$0 \le \Big\|x - \sum_{\alpha\in F}\langle x,x_\alpha\rangle x_\alpha\Big\|^2 = \|x\|^2 - 2\operatorname{Re}\Big\langle x, \sum_{\alpha\in F}\langle x,x_\alpha\rangle x_\alpha\Big\rangle + \Big\|\sum_{\alpha\in F}\langle x,x_\alpha\rangle x_\alpha\Big\|^2$$
$$= \|x\|^2 - 2\sum_{\alpha\in F}|\langle x,x_\alpha\rangle|^2 + \sum_{\alpha\in F}|\langle x,x_\alpha\rangle|^2 \tag{IV.1}$$
$$= \|x\|^2 - \sum_{\alpha\in F}|\langle x,x_\alpha\rangle|^2,$$
where (IV.1) comes by the Pythagorean Theorem. □
Note that this proposition indicates $\{\alpha : \langle x,x_\alpha\rangle \ne 0\}$ is countable.

Proposition IV.5 (Schwarz Inequality). If $x$ and $y$ are vectors in an IPS, then $\|x\|\cdot\|y\| \ge |\langle x,y\rangle|$.
Proof. The case $y = 0$ is trivial, so suppose $y \ne 0$. The vector $\frac{y}{\|y\|}$ by itself forms an orthonormal set, so applying Bessel's inequality to any $x$ gives
$$\|x\|^2 \ge \Big|\Big\langle x, \frac{y}{\|y\|}\Big\rangle\Big|^2 = \frac{|\langle x,y\rangle|^2}{\|y\|^2} \implies \|x\|^2\|y\|^2 \ge |\langle x,y\rangle|^2. \qquad\Box$$

Proposition IV.6. $\|x\| = \sqrt{\langle x,x\rangle}$ really is a norm.
Proof. The first two properties of a norm are clearly satisfied: $\|x\| = 0 \iff x = 0$, $\|x\| \ge 0$, $\|\alpha x\| = |\alpha|\cdot\|x\|$. To see the triangle inequality,
$$\|x+y\|^2 = \langle x+y,x\rangle + \langle x+y,y\rangle = \|x\|^2 + 2\operatorname{Re}\langle x,y\rangle + \|y\|^2 \le \|x\|^2 + 2|\langle x,y\rangle| + \|y\|^2 \le \|x\|^2 + 2\|x\|\,\|y\| + \|y\|^2 = (\|x\|+\|y\|)^2,$$
using linearity, $\frac{z+\bar z}{2} = \operatorname{Re} z \le |z|$, and the Schwarz inequality. □

Proposition IV.7 (Parallelogram Identity). $\|x+y\|^2 + \|x-y\|^2 = 2\big(\|x\|^2 + \|y\|^2\big)$.
Proof. Add the two formulae $\|x \pm y\|^2 = \|x\|^2 \pm 2\operatorname{Re}\langle x,y\rangle + \|y\|^2$. □

Example. $\ell^2 := \big\{\{x_n\}_{n=1}^\infty : \sum_{n=1}^\infty |x_n|^2 < \infty\big\}$ with the inner product $\big\langle\{x_n\}, \{y_n\}\big\rangle := \sum_{n=1}^\infty \overline{x_n}\,y_n$.
Example. $L^2 := \big\{f : X \to \mathbb C : \int_X |f|^2\,d\mu < \infty\big\}$ with the inner product $\langle f,g\rangle := \int_X \bar f g\,d\mu$.
Example. $L^2_H := \big\{f : X \to H : \int_X \|f(x)\|_H^2\,d\mu < \infty\big\}$ with the inner product $\langle f,g\rangle := \int_X \langle f(x),g(x)\rangle_H\,d\mu$.
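Bessel's inequality is easy to test in $\mathbb R^{10}$ (a sketch assuming numpy; QR of a random matrix supplies an orthonormal set):

```python
import numpy as np

# Bessel: for an orthonormal set {x_a} and any x, sum |<x, x_a>|^2 <= ||x||^2.
rng = np.random.default_rng(6)
Q, _ = np.linalg.qr(rng.normal(size=(10, 3)))   # 3 orthonormal columns in R^10
x = rng.normal(size=10)
coeffs = Q.T @ x
print(np.sum(coeffs**2), np.dot(x, x))          # first number <= second
```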
IV.1. Bases.

Definition IV.8. An orthonormal basis of a Hilbert space $H$ is a maximal orthonormal set $S$ (i.e., no other orthonormal set contains $S$ as a proper subset).

Theorem IV.9. Every Hilbert space has an orthonormal basis.
Proof. Let $\mathcal C$ be the collection of orthonormal subsets of $H$, ordered by inclusion: $S_1 \prec S_2 \iff S_1 \subseteq S_2$. Then $(\mathcal C, \prec)$ is a poset. It is also nonempty, since $\{x/\|x\|\}$ is an orthonormal set for any $x \in H$. Now let $\{S_\alpha\}_{\alpha\in A}$ be any linearly ordered subset of $\mathcal C$. Then $\cup_{\alpha\in A} S_\alpha$ is an orthonormal set which contains each $S_\alpha$ and is thus an upper bound for $\{S_\alpha\}_{\alpha\in A}$. Since every linearly ordered subset of $\mathcal C$ has an upper bound, apply Zorn's Lemma and conclude that $\mathcal C$ has a maximal element: an orthonormal set not properly contained in any other orthonormal set. □

Theorem IV.10 (Orthogonal Decomposition and Parseval's Relation). Let $S = \{x_\alpha\}_{\alpha\in A}$ be an orthonormal basis for a Hilbert space $H$. Then for all $y \in H$:
$$y = \sum_{\alpha\in A} \langle x_\alpha, y\rangle x_\alpha \qquad \text{and} \qquad \|y\|^2 = \sum_{\alpha\in A} |\langle x_\alpha,y\rangle|^2.$$
Proof. Proving Bessel's inequality, we saw that $\sum_{\alpha\in A} |\langle x_\alpha,y\rangle|^2 \le \|y\|^2$ and that there are at most countably many nonzero summands. Collect the $\alpha$'s for which $\langle x_\alpha,y\rangle \ne 0$ into a sequence $\{\alpha_j\}_{j=1}^\infty$. As a positive-term series, $\sum_{j=1}^N |\langle x_{\alpha_j},y\rangle|^2$ is monotone increasing; it is also bounded above by $\|y\|^2$, so it converges to a finite limit as $N \to \infty$. Define
$$y_n := \sum_{j=1}^n \langle x_{\alpha_j}, y\rangle x_{\alpha_j}.$$
We want to show $\lim y_n = y$. For $n > m$,
$$\|y_n - y_m\|^2 = \Big\|\sum_{j=m+1}^n \langle x_{\alpha_j},y\rangle x_{\alpha_j}\Big\|^2 = \sum_{j=m+1}^n |\langle x_{\alpha_j},y\rangle|^2$$
by the Pythagorean Theorem. Letting $n, m \to \infty$ shows $\{y_n\}$ is Cauchy. Since $H$ is Hilbert, it is complete, and $\{y_n\}$ must converge to some $y' \in H$. Let $x_\alpha$ be any element of $S$. If $\alpha = \alpha_\ell$ for some $\ell$, then by the continuity of the inner product,
$$\langle y - y', x_{\alpha_\ell}\rangle = \lim_{n\to\infty}\Big\langle y - \sum_{j=1}^n \langle x_{\alpha_j},y\rangle x_{\alpha_j},\ x_{\alpha_\ell}\Big\rangle = \langle y, x_{\alpha_\ell}\rangle - \overline{\langle x_{\alpha_\ell},y\rangle} = 0,$$
using (iv); and if $\alpha \ne \alpha_\ell$ for every $\ell$, then $\langle x_\alpha,y\rangle = 0$ and $\langle x_{\alpha_j},x_\alpha\rangle = 0$ for all $j$, so
$$\langle y - y', x_\alpha\rangle = \langle y, x_\alpha\rangle - \lim_{n\to\infty}\Big\langle\sum_{j=1}^n \langle x_{\alpha_j},y\rangle x_{\alpha_j},\ x_\alpha\Big\rangle = 0 - 0 = 0.$$
So $y - y'$ is orthogonal to every $x_\alpha$ in $S$. Since $S$ is an orthonormal basis (maximal!), this means we must have $y - y' = 0$: otherwise $S \cup \{(y-y')/\|y-y'\|\}$ would be a strictly larger orthonormal set. Thus
$$y = \lim_{n\to\infty}\sum_{j=1}^n \langle x_{\alpha_j},y\rangle x_{\alpha_j},$$
and we have shown the first part. Finally, expanding the square exactly as in the proof of Bessel,
$$0 = \lim_{n\to\infty}\Big\|y - \sum_{j=1}^n \langle x_{\alpha_j},y\rangle x_{\alpha_j}\Big\|^2 = \lim_{n\to\infty}\Big(\|y\|^2 - \sum_{j=1}^n |\langle x_{\alpha_j},y\rangle|^2\Big) = \|y\|^2 - \sum_{\alpha\in A}|\langle x_\alpha,y\rangle|^2,$$
which gives Parseval's Relation: $\|y\|^2 = \sum_{\alpha\in A} |\langle x_\alpha,y\rangle|^2$. □

Definition IV.11. The coefficients $\langle x_\alpha,y\rangle$ are the Fourier coefficients of $y$ with respect to the basis $\{x_\alpha\}$.

IV.2. The Riesz Representation Theorem Again.

Definition IV.12. Let $M$ be a closed subspace of $H$. Then $M$ is a Hilbert space under the inner product it inherits as a subspace of $H$. Define the orthogonal complement of $M$ to be
$$M^\perp := \{x \in H : \langle x,y\rangle = 0\ \forall y \in M\}.$$

Theorem IV.13 (Projection Theorem). If $M$ is a closed subspace of $H$, then $H = M \oplus M^\perp$. That is, every $x \in H$ can be uniquely expressed as $x = y + z$, where $y \in M$, $z \in M^\perp$. Moreover, $y, z$ are the unique elements of $M$ and $M^\perp$ whose distance to $x$ is minimal.

If $y \in H$, then $\varphi_y(x) = \langle y,x\rangle$ defines a functional on $H$; it is linear by the linearity of the inner product in its second slot. By the Schwarz inequality (recall: the Schwarz inequality is just the Hölder inequality when $p = 2$),
$$\|\varphi_y\| = \sup_{\|x\|\le1} |\varphi_y(x)| = \sup_{\|x\|\le1} |\langle y,x\rangle| \le \sup_{\|x\|\le1} \|x\|\,\|y\| \le \|y\|,$$
which shows that this functional is bounded / continuous.

Theorem IV.14 (Riesz Representation Theorem for Hilbert Spaces). If $\varphi \in H^*$, then there is a unique $y \in H$ such that $\varphi(x) = \langle y,x\rangle$ for all $x \in H$. Also, $\|\varphi\| = \|y\|$.
Proof. If $\varphi$ is the zero functional, then $y = 0$ and we're done. Otherwise, consider the nullspace $M := \{x \in H : \varphi(x) = 0\}$. $M$ is a proper closed subspace of $H$, and $M^\perp \ne \{0\}$ by the Projection Theorem. Thus we can find $z \in M^\perp$ with $\|z\| = 1$, and define $u := \varphi(x)z - \varphi(z)x$. Then
$$\varphi(u) = \varphi\big(\varphi(x)z - \varphi(z)x\big) = \varphi(x)\varphi(z) - \varphi(z)\varphi(x) = 0$$
shows that $u \in M$ and hence that $u \perp z$. Thus,
$$0 = \langle z,u\rangle = \langle z, \varphi(x)z - \varphi(z)x\rangle = \varphi(x)\|z\|^2 - \varphi(z)\langle z,x\rangle = \varphi(x) - \varphi(z)\langle z,x\rangle$$
(linearity in the second slot; $\langle z,z\rangle = \|z\|^2 = 1$). Thus $\varphi(x) = \varphi(z)\langle z,x\rangle = \langle y,x\rangle$ where $y = \overline{\varphi(z)}\,z$. As for uniqueness, if $\langle y,x\rangle = \langle y',x\rangle$ for all $x$, take $x = y - y'$ and get
$$\|y-y'\|^2 = \langle y-y', y-y'\rangle = 0 \implies y = y'. \qquad\Box$$
This shows that $y \mapsto \varphi_y$ is a conjugate-linear isometry of $H$ onto $H^*$.

Definition IV.15. Isomorphisms of Hilbert spaces are those transformations $U : H_1 \to H_2$ which preserve the inner product: $\langle Ux, Uy\rangle_{H_2} = \langle x,y\rangle_{H_1}$ for all $x,y \in H_1$. Such operators are called unitary. For $U : H \to H$, unitary operators are also characterized by $U^* = U^{-1}$, where $T^*$ is the Hilbert space adjoint of $T \in \mathcal L(H)$, defined by $\langle T^*x, y\rangle = \langle x, Ty\rangle$.
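With a full orthonormal basis, Bessel becomes Parseval and the decomposition recovers $y$; a finite-dimensional sketch (assuming numpy):

```python
import numpy as np

rng = np.random.default_rng(7)
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))  # columns: an o.n. basis of R^10
y = rng.normal(size=10)
c = Q.T @ y                                     # Fourier coefficients <x_a, y>
print(np.allclose(np.sum(c**2), y @ y))         # Parseval: True
print(np.allclose(Q @ c, y))                    # orthogonal decomposition: True
```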
V. A Practical Guide to Integral Problems

This talk covers the relation between Riemann and Lebesgue integration, when you can differentiate under an integral, and other practical applications of Lebesgue theory to standard integral problems.

V.1. Some related theorems.

Theorem V.1. Let $f : [a,\infty) \to \mathbb R$ be locally Riemann-integrable. Then
$$f \in L^1[a,\infty) \iff \mathcal R\!\int_a^\infty |f|\,dx < \infty, \qquad \text{in which case} \qquad \mathcal R\!\int_a^\infty f\,dx = \mathcal L\!\int_{[a,\infty)} f\,d\mu.$$
Proof. (⇒) $f \in L^1 \implies f^+, f^- \in L^1$. Define $f_n = f^+\chi_{[a,a+n)}$, so that $f_n \le f_{n+1}$, $f_n \to f^+$, and $f_n \in L^1$. Now for $A := [a,\infty)$,
$$\mathcal R\!\int_a^\infty f^+dx = \lim_{n\to\infty}\mathcal R\!\int_a^{a+n} f^+dx \ \ (\text{def of improper integral}) = \lim_{n\to\infty}\mathcal L\!\int_{[a,a+n)} f^+dx \ \ (\mathcal R = \mathcal L \text{ on bounded intervals})$$
$$= \lim_{n\to\infty}\mathcal L\!\int_A f_n\,d\mu \ \ (\text{def of } f_n) = \mathcal L\!\int_A \lim_{n\to\infty} f_n\,d\mu \ \ (\text{MCT}) = \mathcal L\!\int_A f^+d\mu.$$
Similarly, $\mathcal R\int_a^\infty f^-dx = \mathcal L\int_A f^-d\mu$, so
$$\mathcal L\!\int_A f\,d\mu = \mathcal L\!\int_A (f^+ - f^-)\,d\mu = \mathcal R\!\int_a^\infty (f^+ - f^-)\,dx = \mathcal R\!\int_a^\infty f\,dx.$$
(⇐) Define $f_n(x) = |f|\chi_{[a,a+n]}$, so that $f_n \nearrow |f|$ and each $f_n$ is Riemann-integrable (compact support). Then
$$\mathcal L\!\int_A |f|\,d\mu = \lim_{n\to\infty}\mathcal L\!\int_A f_n\,d\mu \ \ (\text{MCT}) = \lim_{n\to\infty}\mathcal R\!\int_a^{a+n} |f|\,dx \ \ (\mathcal R = \mathcal L \text{ on bounded}) = \mathcal R\!\int_a^\infty |f|\,dx < \infty$$
by the definition of the improper integral and the hypothesis, which shows $f \in L^1$. □

Theorem V.2. Define $F(t) = \int_X f(x,t)\,d\mu(x)$ for $f : X \times [a,b] \to \mathbb C$.
(1) What is sufficient for $F$ to be continuous, i.e. $\lim_{t\to t_0} F(t) = F(t_0)$ for all $t_0$?
(2) What is sufficient for $F$ to be differentiable, with $F'(t) = \int_X \frac{\partial f}{\partial t}(x,t)\,d\mu(x)$?

Royden's conditions:
(1) (i) $f_t(x) = f(x,t)$ is a measurable function of $x$ for each fixed $t$; (ii) $|f(x,t)| \le g(x) \in L^1(X)$ for all $t$; (iii) $\lim_{t\to t_0} f(x,t) = f(x,t_0)$ for each $x$ (i.e., $f(x,t)$ is continuous in $t$ for each $x$). The proof follows by applying the DCT to $f(x,t_n)$, where $t_n \to t_0$.
(2) (i) $\frac{\partial f}{\partial t}$ exists on $X \times [a,b]$; (ii) $\frac{\partial f}{\partial t}$ is bounded on $X \times [a,b]$; (iii) $f$ is bounded on $X \times [a,b]$; (iv) for each fixed $t$, $f$ is a measurable function of $x$.
(3) Alternatively: (i) $\frac{\partial f}{\partial t}$ exists on $X \times [a,b]$; (ii) $|\frac{\partial f}{\partial t}(x,t)| \le g(x) \in L^1(X)$ on $X \times [a,b]$. Advantages: $f$, $\frac{\partial f}{\partial t}$ need not be bounded. Disadvantage: need $f \in L^1$.

Proof (of the second version). Pick any sequence $\{t_n\} \subseteq [a,b]$ with $t_n \to t_0$. Then define
$$h_n(x) := \frac{f(x,t_n) - f(x,t_0)}{t_n - t_0}.$$
Then $\frac{\partial f}{\partial t}(x,t_0) = \lim_n h_n(x)$, so $\frac{\partial f}{\partial t}(x,t_0)$ is measurable as a limit of measurable functions; it follows that $\frac{\partial f}{\partial t}(x,t)$ is measurable. By the mean value theorem, there is a $\bar t$ between $t_n$ and $t_0$ for which
$$f(x,t_n) - f(x,t_0) = (t_n - t_0)\,\frac{\partial f}{\partial t}(x,\bar t).$$
Then
$$|h_n(x)| \le \sup_{t\in[a,b]}\Big|\frac{\partial f}{\partial t}(x,t)\Big| \le g(x),$$
since taking the supremum can only make it larger. Invoke the dominated convergence theorem again and get
$$F'(t_0) = \lim \frac{F(t_n) - F(t_0)}{t_n - t_0} = \lim\int h_n(x)\,d\mu(x) = \int \frac{\partial f}{\partial t}(x,t_0)\,d\mu(x).$$
Finally, exploit the sequential characterization of limits [Rudin 4.2]: $\lim_{x\to t} g(x) = g(t) \iff \lim_{n\to\infty} g(x_n) = g(t)$ for every sequence $x_n \to t$. □
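A numerical check of differentiating under the integral sign (a sketch assuming numpy and scipy; the integrand $\sin(xt)$ is an arbitrary choice whose $t$-derivative is dominated by $g(x) = 1$ on $[0,1]$):

```python
import numpy as np
from scipy.integrate import quad

# F(t) = int_0^1 sin(x t) dx; |d/dt sin(xt)| = |x cos(xt)| <= 1 on [0,1],
# an L^1 dominating bound, so F'(t) = int_0^1 x cos(x t) dx.
t, h = 0.7, 1e-5
F = lambda t: quad(lambda x: np.sin(x * t), 0, 1)[0]
Fprime = quad(lambda x: x * np.cos(x * t), 0, 1)[0]
print(Fprime, (F(t + h) - F(t - h)) / (2 * h))   # agree to high precision
```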
V.2. Solutions to the Nasty Integrals.

1. $f : X \to [0,\infty]$ is measurable and $\int_X f\,d\mu = c$ where $0 < c < \infty$. Let $\alpha \in \mathbb R$ be a constant. Show that
$$\lim_{n\to\infty}\int_X n\log\Big(1 + \Big(\frac{f(x)}{n}\Big)^\alpha\Big)\,d\mu = \begin{cases} \infty, & 0 < \alpha < 1, \\ c, & \alpha = 1, \\ 0, & \alpha > 1. \end{cases}$$

Proof. case i) $\alpha = 1$. By basic calculus, $\big(1 + \frac{f(x)}{n}\big)^n$ increases to $e^{f(x)}$ for each $x$, so
$$g_n(x) = \log\Big(1 + \frac{f(x)}{n}\Big)^n \xrightarrow{\,n\to\infty\,} f(x) \in L^1 \quad (\text{increasing}).$$
Then $\lim_n \int_X g_n = \int_X f = c$ by the MCT.

case ii) $\alpha > 1$. Note that $f \ge 0$ and $\int_X f\,d\mu = c < \infty$ show $f$ is finite $\mu$-a.e.: there are $M < \infty$ and a null set $E$ with $f \le M$ off $E$. Since we can always find $N$ such that $n \ge N \implies \frac Mn < 1$, we have $\frac{f(x)}{n} < 1$ a.e. for the relevant (large) $n$. Hence $\alpha > 1 \implies \alpha - 1 > 0$ implies
$$0 \le \Big(\frac{f(x)}{n}\Big)^{\alpha-1} < 1. \tag{V.1}$$
We need $n\log\big(1 + (\frac{f(x)}{n})^\alpha\big) \le \alpha f(x)$, so define
$$G(t) = n\log\Big(1 + \frac{t^\alpha}{n^\alpha}\Big) - \alpha t.$$
Then $G(0) = 0$. Also,
$$G'(t) = \alpha\Big(\frac{n\,t^{\alpha-1}}{n^\alpha + t^\alpha} - 1\Big) \le \alpha\Big(\frac{n\,t^{\alpha-1}}{n^\alpha} - 1\Big) = \alpha\Big(\Big(\frac tn\Big)^{\alpha-1} - 1\Big) < 0$$
(differentiate; drop the $t^\alpha$ from the denominator; then put $t = f(x)$ and use (V.1)). So $G$ is decreasing and $G(t) \le 0$ for $t \ge 0$ in the relevant range, i.e.
$$g_n(x) := n\log\Big(1 + \Big(\frac{f(x)}{n}\Big)^\alpha\Big) \le \alpha f(x).$$
Since this is bounded by $\alpha f$ and $f \in L^1$ by hypothesis, the DCT gives $\lim_n \int_X g_n\,d\mu = \int_X \lim_n g_n\,d\mu$. Now split off the leading $n$ and match the denominator:
$$g_n(x) = n^{1-\alpha}\,\log\Big(1 + \frac{f^\alpha}{n^\alpha}\Big)^{n^\alpha}.$$
Since $\lim_n \big(1 + \frac{f^\alpha}{n^\alpha}\big)^{n^\alpha} = e^{f^\alpha} \le e^{M^\alpha}$ a.e., the log factor stays bounded while $n^{1-\alpha} \to 0$ (as $1 - \alpha < 0$), so $g_n \to 0$ a.e. and the limit of the integrals is 0.

case iii) $\alpha < 1$. $f > 0 \implies \log\big(1 + \frac{f^\alpha}{n^\alpha}\big) > 0$, and if we define $A := \{f > 0\}$, then $\mu A > 0$ because $\int_X f\,d\mu > 0$; also $\int_A f\,d\mu = \int_X f\,d\mu$. On $A$, the same splitting gives
$$g_n = n^{1-\alpha}\log\Big(1 + \frac{f^\alpha}{n^\alpha}\Big)^{n^\alpha} \xrightarrow{\,n\to\infty\,} \infty,$$
since the log factor tends to $f^\alpha > 0$ while $n^{1-\alpha} \to \infty$ ($\alpha < 1 \implies 1 - \alpha > 0$). Thus $\lim g_n = \infty$ on $A$, so by Fatou's Lemma,
$$\liminf\int g_n\,d\mu \ge \int \liminf g_n\,d\mu = \infty \implies \lim_{n\to\infty}\int g_n\,d\mu = \infty. \qquad\Box$$

2. Define $F(t) = \int_0^\infty \frac{e^{-xt}}{1+x^2}\,dx$, for $t > 0$.

a) Show that $F$ is well-defined as an improper Riemann integral and as a Lebesgue integral.
Riemann: $\frac{e^{-xt}}{1+x^2}$ is continuous for every $t$, so it is R-integrable on any bounded interval. It only remains to show the convergence of $\lim_{a\to\infty}\int_0^a \frac{e^{-xt}}{1+x^2}\,dx$. Since the integrand is nonnegative (if $f \ge 0$, then $g(x) = \int_0^x f\,dt$ is increasing), $b \ge a \implies \int_0^a \le \int_0^b$, so it suffices to find a dominating bound:
$$\frac{e^{-xt}}{1+x^2} \le \frac{1}{1+x^2} \le \min\Big\{1, \frac{1}{x^2}\Big\} \qquad \forall t > 0.$$
Since $\int_1^a \frac{dx}{x^2} = \big[-\frac1x\big]_1^a = 1 - \frac1a \to 1$ and the integrand is bounded by 1 on $[0,1]$, we have $\int_0^a \frac{e^{-xt}}{1+x^2}\,dx \le 2$ for all $a$, and thus the improper integral converges. Lebesgue: $\int_{\mathbb R^+} \frac{e^{-xt}}{1+x^2}\,d\mu$ exists because we can bound the integrand as above.

b) Show $F''(t)$ exists on $(0,\infty)$. We have $\varphi(x,t) = \frac{e^{-xt}}{1+x^2} \in L^1$, so by Theorem V.2(3) we just need $\frac{\partial\varphi}{\partial t} = \frac{-xe^{-xt}}{1+x^2} \in L^1$ in order to get
$$F'(t) = -\int_0^\infty \frac{xe^{-xt}}{1+x^2}\,dx.$$
Fix $t > 0$ and pick $\varepsilon > 0$ such that $t - \varepsilon > 0$. Observe: $x \le e^{\varepsilon x}$ for large enough $x$. Thus we pick $M$ large enough that $x \ge M \implies x \le e^{\varepsilon x}$, and split the integral:
$$\int_0^\infty \frac{xe^{-xt}}{1+x^2}\,dx = \int_0^M \frac{xe^{-xt}}{1+x^2}\,dx + \int_M^\infty \frac{xe^{-xt}}{1+x^2}\,dx.$$
The first piece is at most $\int_0^M xe^{-xt}\,dx$, which, as the integral of a continuous function over a compact interval, is clearly finite. The second piece satisfies
$$\int_M^\infty \frac{xe^{-xt}}{1+x^2}\,dx \le \int_M^\infty \frac{e^{\varepsilon x}e^{-xt}}{1+x^2}\,dx = \int_M^\infty \frac{e^{(\varepsilon-t)x}}{1+x^2}\,dx \le \int_0^\infty \frac{e^{(\varepsilon-t)x}}{1+x^2}\,dx,$$
finite by (a) since $\varepsilon - t < 0$. Thus $\frac{-xe^{-tx}}{1+x^2} \in L^1$. Similarly, finding $N$ such that $x \ge N \implies x^2 \le e^{\varepsilon x}$ shows $\frac{\partial}{\partial t}\big(\frac{-xe^{-tx}}{1+x^2}\big) = \frac{x^2e^{-tx}}{1+x^2} \in L^1$, and hence
$$F''(t) = \int_0^\infty \frac{x^2e^{-tx}}{1+x^2}\,dx.$$

c) (Extra credit) Show $F$ satisfies $F''(t) + F(t) = \frac1t$; compute $F(t)$. From (b),
$$F''(t) + F(t) = \int_0^\infty \frac{x^2e^{-tx} + e^{-tx}}{1+x^2}\,dx = \int_0^\infty \frac{(1+x^2)e^{-tx}}{1+x^2}\,dx = \int_0^\infty e^{-tx}\,dx = \Big[-\frac1t e^{-tx}\Big]_0^\infty = 0 - \Big(-\frac1t\Big) = \frac1t.$$
Now solve the differential equation $F''(t) + F(t) = \frac1t$.
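Part (c) can be checked numerically (a sketch assuming numpy and scipy):

```python
import numpy as np
from scipy.integrate import quad

# Check F''(t) + F(t) = 1/t for F(t) = int_0^inf e^{-xt}/(1+x^2) dx.
t = 1.3
F  = quad(lambda x: np.exp(-x*t) / (1 + x**2), 0, np.inf)[0]
F2 = quad(lambda x: x**2 * np.exp(-x*t) / (1 + x**2), 0, np.inf)[0]
print(F2 + F, 1 / t)    # both ~0.769
```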
3. Let $I$ be an open interval of $\mathbb R$ and suppose $f : \mathbb R \to \mathbb R$ is such that $x \mapsto e^{xt}f(x)$ is integrable for each fixed $t \in I$. Define $F : I \to \mathbb R$ by $F(t) = \int_{\mathbb R} e^{xt}f(x)\,dx$. Show that $F$ is differentiable with derivative $F'(t) = \int_{\mathbb R} xe^{xt}f(x)\,dx$ at each $t \in I$.
Note: $xe^{tx}f(x)$ is not obviously in $L^1$! We would like to compute $F'$ by
$$F'(t_0) = \lim_{n\to\infty}\int \Big(\frac{e^{t_nx} - e^{t_0x}}{t_n - t_0}\Big)f(x)\,dx, \tag{V.2}$$
where $\{t_n\}$ is a sequence in $I$ with $t_n \to t_0$. To use the DCT, we need to find $g \in L^1$ such that
$$\Big|\frac{e^{t_nx} - e^{t_0x}}{t_n - t_0}\,f(x)\Big| \le g(x) \qquad \forall n.$$
Choose $t' \in I$ such that $t_n < t'$ for all $n$. This is possible, since otherwise there would be a subsequence of $\{t_n\}$ converging to $\sup I$, contradicting $t_n \to t_0 \in I$ ($I$ is open). By the MVT,
$$|e^{t_nx} - e^{t_0x}| \le |xe^{sx}|\cdot|t_n - t_0| \qquad \text{for some } s \text{ between } t_n \text{ and } t_0.$$
Since $s < t'$, for $x \ge 0$ this gives
$$\Big|\frac{e^{t_nx} - e^{t_0x}}{t_n - t_0}\Big| \le |xe^{t'x}| \implies \Big|\frac{e^{t_nx} - e^{t_0x}}{t_n - t_0}\,f(x)\Big| \le |xe^{t'x}f(x)|, \tag{V.3}$$
and we have a bound which no longer depends on $n$. To see that $g(x) = |xe^{t'x}f(x)|$ is integrable on $[0,\infty)$, split the integral: pick $\varepsilon > 0$ such that $t' + \varepsilon \in I$ and choose $M$ such that $x \ge M \implies x \le e^{\varepsilon x}$. Now
$$\int_0^M |xe^{t'x}f(x)|\,dx \le M\int_0^M |e^{t'x}f(x)|\,dx < \infty \qquad (t' \in I),$$
$$\int_M^\infty |xe^{t'x}f(x)|\,dx \le \int_M^\infty |e^{(t'+\varepsilon)x}f(x)|\,dx < \infty \qquad (t'+\varepsilon \in I).$$
Thus $\int_0^\infty g\,dx < \infty$. For $\int_{-\infty}^0$, the roles reverse: choose $t'' \in I$ with $t'' < t_n$ for all $n$ (possible by the same openness argument), so that $e^{sx} \le e^{t''x}$ for $x < 0$; then pick $\varepsilon > 0$ with $t'' - \varepsilon \in I$ and proceed as for $\int_0^\infty$. Together, this gives $g \in L^1$. By (V.3), we can use the DCT in (V.2) to obtain the result.

4. Let $f$ be a bounded measurable function on $[0,\infty)$. Show that
$$F(t) = \int_0^\infty \frac{f(x)e^{-xt}}{\sqrt x}\,dx, \qquad t > 0,$$
is continuously differentiable on $(0,\infty)$.
Let us denote the integrand by $\varphi(x,t) := f(x)e^{-xt}x^{-1/2}$. We would like to find $F'(t_0)$ by choosing any sequence $\{t_n\}$ with $t_n \to t_0$ and computing
$$F'(t_0) = \lim_{n\to\infty}\int_0^\infty f(x)\,\frac{e^{-xt_n} - e^{-xt_0}}{t_n - t_0}\,x^{-1/2}\,dx.$$
Since $f$ is bounded, $|f| \le M$. Since $t$ ranges over $(0,\infty)$ and $t_n \to t_0 < \infty$, we can certainly pick a strict lower bound $\tau$ of $\{t_n\}$, i.e. a number $\tau$ with $0 < \tau < \inf\{t_n\}$. By the MVT,
$$|e^{-t_nx} - e^{-t_0x}| \le |xe^{-sx}|\cdot|t_n - t_0| \qquad \text{for some } s \text{ between } t_n \text{ and } t_0.$$
Since $s > \tau$,
$$\Big|\frac{e^{-t_nx} - e^{-t_0x}}{t_n - t_0}\Big|\,x^{-1/2} \le xe^{-sx}x^{-1/2} \le e^{-\tau x}x^{1/2} \in L^1.$$
To verify the integrability of the dominating function, integrate by parts with $u = x^{1/2}$, $dv = e^{-x}dx$ (so $du = \frac12 x^{-1/2}dx$, $v = -e^{-x}$):
$$\int_0^\infty e^{-x}x^{1/2}\,dx = \big[-e^{-x}x^{1/2}\big]_0^\infty + \frac12\int_0^\infty e^{-x}x^{-1/2}\,dx = 0 + \frac12\int_0^\infty e^{-x}x^{-1/2}\,dx,$$
and the substitution $u = \sqrt x$ gives $\int_0^\infty e^{-x}x^{-1/2}\,dx = 2\int_0^\infty e^{-u^2}\,du = \sqrt\pi$. Hence
$$\int_0^\infty e^{-x}x^{1/2}\,dx = \frac{\sqrt\pi}{2}, \qquad \int_0^\infty e^{-\tau x}x^{1/2}\,dx = \frac{\sqrt\pi}{2\tau^{3/2}} < \infty$$
(rescale $x \mapsto x/\tau$). Thus the DCT applies:
$$F'(t_0) = \int_0^\infty f(x)\,\lim_{t\to t_0}\frac{e^{-xt} - e^{-xt_0}}{t - t_0}\,x^{-1/2}\,dx = \int_0^\infty f(x)\big(-xe^{-xt_0}\big)x^{-1/2}\,dx = -\int_0^\infty f(x)e^{-xt_0}x^{1/2}\,dx.$$
Let us denote this function by
$$G(t) = -\int_0^\infty f(x)e^{-xt}x^{1/2}\,dx.$$
Since we are required to show that $F$ is continuously differentiable, we must show that $G$ is continuous. This follows from the same domination: if $t_n \to t_0 > 0$, pick $\tau$ with $0 < \tau < \inf\{t_n\}$; then $|f(x)e^{-xt_n}x^{1/2}| \le Me^{-\tau x}x^{1/2} \in L^1$, and $e^{-xt}$ is continuous in $t$, so the DCT gives $G(t_n) \to G(t_0)$. In particular $G(t)$ is finite for each $t$: $|G(t)| \le M\frac{\sqrt\pi}{2t^{3/2}}$, which is as finite as it gets.
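For problem 4 with the test choice $f \equiv 1$, both $F$ and the claimed derivative $G$ have closed forms to compare against (a sketch assuming numpy and scipy; the first integral is split at 1 so quad can handle the integrable singularity of $x^{-1/2}$ at 0 separately):

```python
import numpy as np
from scipy.integrate import quad

# f = 1: F(t) = int_0^inf e^{-xt}/sqrt(x) dx = sqrt(pi/t), and
# G(t) = -int_0^inf e^{-xt} sqrt(x) dx = -sqrt(pi)/(2 t^{3/2}).
t = 2.0
F_num = (quad(lambda x: np.exp(-x*t)/np.sqrt(x), 0, 1)[0]
         + quad(lambda x: np.exp(-x*t)/np.sqrt(x), 1, np.inf)[0])
G_num = -quad(lambda x: np.exp(-x*t)*np.sqrt(x), 0, np.inf)[0]
print(F_num, np.sqrt(np.pi / t))                   # ~1.2533 both
print(G_num, -np.sqrt(np.pi) / (2 * t**1.5))       # ~-0.3133 both
```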
5. Show $F(t) = \int_{-\infty}^\infty \frac{\sin(x^2t)}{1+x^2}\,dx$ is continuous on $\mathbb R$.
First, note that $|\sin x| \le 1$ gives
$$\Big|\frac{\sin(x^2t)}{1+x^2}\Big| \le \frac{1}{1+x^2} \in L^1, \qquad \text{where the inclusion is clear from} \quad \int_{-\infty}^\infty \frac{dx}{1+x^2} = [\arctan x]_{-\infty}^\infty = \frac\pi2 - \Big(-\frac\pi2\Big) = \pi.$$
Pick any $\{t_n\} \subseteq \mathbb R$ with $t_n \to t_0$. Then
$$\lim_{n\to\infty} F(t_n) = \lim_{n\to\infty}\int_{-\infty}^\infty \frac{\sin(x^2t_n)}{1+x^2}\,dx = \int_{-\infty}^\infty \lim_{n\to\infty}\frac{\sin(x^2t_n)}{1+x^2}\,dx \ \ (\text{DCT}) = \int_{-\infty}^\infty \frac{\sin(x^2t_0)}{1+x^2}\,dx = F(t_0),$$
using the continuity of $\sin x$ (and of multiplication by $x^2$) and $t_n \to t_0$. Since this is true for all sequences $t_n \to t_0$, we have $\lim_{t\to t_0} F(t) = F(t_0)$.

6. $f \in C[0,1]$ is such that $\int_0^1 x^nf(x)\,dx = 0$ for $n = 0, 1, 2, \dots$. Show that $f \equiv 0$.
By the Stone-Weierstraß Theorem, there is a sequence of polynomials $\{P_k(x)\}$ such that $P_k \to f$ uniformly. Then
$$\int P_k(x)f(x)\,dx \xrightarrow{\,k\to\infty\,} \int [f(x)]^2\,dx$$
by uniformity. But since any polynomial may be written $P(x) = \sum_{i=0}^m a_ix^i = a_0 + a_1x + \dots + a_mx^m$, the linearity of the integral and the hypothesis $\int_0^1 x^nf(x)\,dx = 0$ give
$$\int_0^1 P(x)f(x)\,dx = a_0\int_0^1 f\,dx + a_1\int_0^1 xf\,dx + \dots + a_m\int_0^1 x^mf\,dx = a_0\cdot0 + a_1\cdot0 + \dots + a_m\cdot0 = 0.$$
So $\int P_kf\,dx = 0$ for all $k$, whence $\int f^2\,dx = 0$; since $f^2 \ge 0$ (and continuous), $f^2 \equiv 0$, and so $f \equiv 0$.

7. Compute the limits:
a) $\lim_{n\to\infty}\int_0^\infty \big(1 + \frac xn\big)^{-n}\sin\big(\frac xn\big)\,dx$.
Note that $|\sin(\frac xn)| \le 1$ and $\big(1 + \frac xn\big)^{-n} \to e^{-x}$. Since $\big(1+\frac xn\big)^n$ increases in $n$, we have $\big(1 + \frac xn\big)^{-n} \le \big(1 + \frac x2\big)^{-2}$ for all $n \ge 2$, and $\big(1+\frac x2\big)^{-2} \in L^1[0,\infty)$, so the DCT gives
$$\lim_{n\to\infty}\int_0^\infty \Big(1+\frac xn\Big)^{-n}\sin\Big(\frac xn\Big)dx = \int_0^\infty \lim_{n\to\infty}\Big(1+\frac xn\Big)^{-n}\sin\Big(\frac xn\Big)dx = \int_0^\infty e^{-x}\sin(0)\,dx = 0.$$
b) $\lim_{n\to\infty}\int_a^\infty n(1+n^2x^2)^{-1}\,dx$.
We do a $u$-substitution with $u = nx$, $du = n\,dx$:
$$\lim_{n\to\infty}\int_a^\infty \frac{n\,dx}{1+n^2x^2} = \lim_{n\to\infty}\int_{na}^\infty \frac{du}{1+u^2} = \lim_{n\to\infty}\big[\arctan u\big]_{na}^\infty = \lim_{n\to\infty}\Big(\frac\pi2 - \arctan(na)\Big) = \begin{cases} \frac\pi2, & a = 0, \\ 0, & a > 0, \\ \pi, & a < 0. \end{cases}$$

8. a) Find the smallest constant $c$ such that $\log(1+e^t) < c + t$ for $0 < t < \infty$.
First, observe that $\log(1+e^t) < c + t \iff 1 + e^t < e^ce^t \iff \frac{1+e^t}{e^t} < e^c$. Note that
$$\lim_{t\to0}\frac{1+e^t}{e^t} = 2 \qquad \text{and} \qquad \lim_{t\to\infty}\frac{1+e^t}{e^t} = 1,$$
and $\frac{1+e^t}{e^t}$ is monotonically decreasing:
$$\frac{d}{dt}\Big(\frac{1+e^t}{e^t}\Big) = \frac{d}{dt}\big(e^{-t} + 1\big) = -e^{-t} < 0.$$
Thus the supremum is 2, approached as $t \to 0^+$; set $e^c = 2$, i.e. $c = \log 2$.
b) Does $\lim_{n\to\infty}\frac1n\int_0^1 \log\big(1 + e^{nf(x)}\big)\,dx$ exist for every real $f \in L^1[0,1]$ with $f > 0$?
From part (a) we get $1 < \frac{1+e^{nf(x)}}{e^{nf(x)}} < 2$, which gives
$$e^{nf(x)} < 1 + e^{nf(x)} < 2e^{nf(x)},$$
$$nf(x) < \log\big(1 + e^{nf(x)}\big) < nf(x) + c \qquad (c = \log 2),$$
$$n\int_0^1 f(x)\,dx < \int_0^1 \log\big(1 + e^{nf(x)}\big)\,dx < n\int_0^1 f(x)\,dx + c$$
(since integration is a positive linear functional and $f \in L^1$), so
$$\int_0^1 f(x)\,dx < \frac1n\int_0^1 \log\big(1 + e^{nf(x)}\big)\,dx < \int_0^1 f(x)\,dx + \frac cn.$$
Then taking the limit as $n \to \infty$, since $f$ is integrable by hypothesis, the Sandwich Theorem gives
$$\lim_{n\to\infty}\frac1n\int_0^1 \log\big(1 + e^{nf(x)}\big)\,dx = \int_0^1 f(x)\,dx.$$
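Problem 8b is easy to watch converge (a sketch assuming numpy, with the arbitrary test choice $f(x) = x$; np.logaddexp(0, u) computes $\log(1+e^u)$ stably for large $u$):

```python
import numpy as np

# (1/n) int_0^1 log(1 + e^{n x}) dx should tend to int_0^1 x dx = 1/2.
x = np.linspace(0, 1, 200_001)
for n in [1, 10, 100, 1000]:
    vals = np.logaddexp(0, n * x) / n
    print(n, np.trapz(vals, x))     # decreasing toward 0.5, gap ~ (log 2)/n
```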
VI. The Hahn-Banach Theorem and Applications

[Folland] "It is not obvious that there are any nonzero bounded functionals on an arbitrary normed vector space. That such functionals exist in great abundance is one of the fundamental theorems of functional analysis."

[Reed & Simon] "In dealing with Banach spaces, one often needs to construct linear functionals with certain properties. This is usually done in two steps: first one defines the linear functional on a subspace of the Banach space where it is easy to verify the desired properties; second, one appeals to (or proves) a general theorem which says that any such functional can be extended to the whole space while retaining the desired properties. One of the basic tools of the second step is the following theorem."

Theorem VI.1. Let $X$ be a vector space and $p : X \to \mathbb R$ such that
(i) $p(\alpha x) = \alpha p(x)$ for all $\alpha \ge 0$, and
(ii) $p(x+y) \le p(x) + p(y)$ for all $x, y \in X$.
If $S$ is a subspace of $X$ and there is a linear functional $f : S \to \mathbb R$ such that $f(s) \le p(s)$ for all $s \in S$, then $f$ may be extended to $F : X \to \mathbb R$ with $F(x) \le p(x)$ for all $x \in X$, and $F(s) = f(s)$ for all $s \in S$.

Proof. The idea of the proof is to first show that if $x \in X$ but $x \notin S$, then we can extend $f$ to a functional having all the right properties on the space spanned by $x$ and $S$. We then use a Zorn's Lemma / Hausdorff maximality argument to show that this process can be continued to extend $f$ to the whole space $X$. (Sketch:)
1. Consider the family
$$\mathcal G := \{g : D \to \mathbb R \ :\ g \text{ is linear};\ g(x) \le p(x)\ \forall x \in D;\ g(s) = f(s)\ \forall s \in S\},$$
where $D$ is any subspace of $X$ which contains $S$. So $\mathcal G$ is roughly the collection of "all linear extensions of $f$ which are bounded by $p$". Now $\mathcal G$ is a poset under
$$g_1 \prec g_2 \iff \mathrm{Dom}(g_1) \subseteq \mathrm{Dom}(g_2) \ \text{ and } \ g_2|_{\mathrm{Dom}(g_1)} = g_1.$$
2. Use the Hausdorff Maximality Principle (or Zorn) to get a maximal linearly ordered subset $\{g_\alpha\} \subseteq \mathcal G$ which contains $f$. Define $F$ on the union of the domains of the $\{g_\alpha\}$ by $F(x) = g_\alpha(x)$ for $x \in \mathrm{Dom}(g_\alpha)$.
3. Show that this makes $F$ a well-defined linear functional which extends $f$, and that $F$ is maximal in that $F \prec G \implies F = G$.
4. Show $F$ is defined on all of $X$, using the fact that $F$ is maximal. Do this by showing that a linear functional defined on a proper subspace has a proper extension. (Hence $F$ must be defined on all of $X$, or it wouldn't be maximal.) □

Proposition VI.2 (Hausdorff Maximality Principle). If $(A, \prec)$ is a poset, then there exists $B \subseteq A$ such that $B$ is a maximal linearly ordered subset: if $C$ is linearly ordered and $B \subseteq C \subseteq A$, then $C = B$.

Of course, the HBT is also readily extendable to the complex case:

Theorem VI.3. Let $X$ be a complex vector space and $p$ a real-valued function defined on $X$ satisfying
$$p(\alpha x + \beta y) \le |\alpha|p(x) + |\beta|p(y) \qquad \forall x,y \in X,\ \forall \alpha,\beta \in \mathbb C \text{ with } |\alpha| + |\beta| = 1.$$
If $S$ is a subspace of $X$ and there is a complex linear functional $f$ on $S$ such that $|f(s)| \le p(s)$ for all $s \in S$, then $f$ may be extended to a complex linear functional $F$ on $X$ with $|F(x)| \le p(x)$ for all $x \in X$, and $F(s) = f(s)$ for all $s \in S$.

VI.1. Principal Applications of the HBT.

Most often, $p(x)$ is taken to be the norm of the Banach space in question.

1. $M$ is a closed subspace of $X$ and $x \in X\setminus M$ $\implies$ there is $f \in X^*$ such that $f(x) \ne 0$ and $f|_M = 0$. In fact, if $\delta = \inf_{y\in M}\|x - y\|$, $f$ can be taken to satisfy $\|f\| = 1$ and $f(x) = \delta$.
Define $f$ on $M + \mathbb Cx$ by $f(y + \lambda x) = \lambda\delta$ for $y \in M$, $\lambda \in \mathbb C$. Then $f(x) = f(0 + 1\cdot x) = 1\cdot\delta = \delta$, but for $m \in M$, $f(m) = f(m + 0\cdot x) = 0\cdot\delta = 0$. For $\lambda \ne 0$, we have
$$|f(y + \lambda x)| = |\lambda|\delta \le |\lambda|\cdot\|\lambda^{-1}y + x\| = \|y + \lambda x\|,$$
because $\delta = \inf_{y\in M}\|y + x\| \le \|\lambda^{-1}y + x\|$ (putting $\lambda^{-1}y$ in for $y$; $M$ is a subspace). Using $p(x) = \|x\|$, apply the HBT to extend $f$ from $M + \mathbb Cx$ to all of $X$.

2. If $x \ne 0$, $x \in X$, then there is $f \in X^*$ such that $\|f\| = 1$ and $f(x) = \|x\|$.
$M = \{0\}$ is trivially a closed subspace, so apply (1) with $\delta = \|x\|$.

3. The bounded linear functionals on $X$ separate points. If $x \ne y$, then (2) shows there is $f \in X^*$ such that $f(x-y) \ne 0$, i.e. $f(x) \ne f(y)$. This result indicates that $X^*$ is BIG.

4. If $x \in X$, define $\hat x : X^* \to \mathbb C$ by $\hat x(f) = f(x)$, so $\hat x \in X^{**}$. Then $\varphi : x \mapsto \hat x$ is a linear isometry from $X$ into $X^{**}$.
$$\hat x(\alpha f + \beta g) = (\alpha f + \beta g)(x) = \alpha f(x) + \beta g(x) = \alpha\hat x(f) + \beta\hat x(g),$$
so $\hat x$ is linear; this verifies that $\hat x \in X^{**}$. For $\varphi(x) = \hat x$, the functional $\varphi(ax+by)$ is defined by
$$\widehat{ax+by}(f) = f(ax+by) = af(x) + bf(y) = a\hat x(f) + b\hat y(f),$$
so $\varphi(ax+by) = \widehat{ax+by} = a\hat x + b\hat y = a\varphi(x) + b\varphi(y)$, which shows that $\varphi : x \mapsto \hat x$ is linear. Finally,
$$|\hat x(f)| = |f(x)| \le \|f\|\cdot\|x\|$$
shows that
$$\|\hat x\| = \sup_{f\ne0}\frac{|\hat x(f)|}{\|f\|} = \sup_{f\ne0}\frac{|f(x)|}{\|f\|} \le \sup_{f\ne0}\frac{\|f\|\cdot\|x\|}{\|f\|} = \|x\|.$$
To get the reverse inequality, note that (2) provides a functional $f_0$ for which $|\hat x(f_0)| = |f_0(x)| = \|x\|$ and $\|f_0\| = 1$. Then
$$\|\hat x\| = \sup_{\|f\|=1}|\hat x(f)| \ge |\hat x(f_0)| = \|x\|.$$
We saw this for Hilbert spaces, but this example is applicable to general Banach spaces and requires none of the Hilbert space machinery (orthonormal bases, projection theorem, etc.), as the HBT takes care of a lot.

VI.2. Corollaries to the HBT.

1. If $X$ is a normed linear space, $Y$ a subspace of $X$, and $f \in Y^*$, then there exists $F \in X^*$ extending $f$ and satisfying $\|F\|_{X^*} = \|f\|_{Y^*}$.
Proof. Apply the HBT with $p(x) = \|f\|_{Y^*}\,\|x\|_X$. □

2. Let $X$ be a Banach space. If $X^*$ is separable, then $X$ is separable.
Proof. Let $\{f_n\}$ be a dense set in $X^*$. Choose $x_n \in X$ with $\|x_n\| = 1$ so that $|f_n(x_n)| \ge \|f_n\|/2$. Let $D$ be the set of all finite linear combinations of the $x_n$ with rational coefficients. Since $D$ is countable, we just need to show that $D$ is dense in $X$. If $D$ is not dense in $X$, then there is a $y \in X\setminus\bar D$ and a linear functional $f \in X^*$ such that $f(y) \ne 0$ but $f(x) = 0$ for all $x \in \bar D$, by application (1). Let $\{f_{n_k}\}$ be a subsequence of $\{f_n\}$ which converges to $f$. Then
$$\|f - f_{n_k}\| \ge |(f - f_{n_k})(x_{n_k})| = |f_{n_k}(x_{n_k})| \ge \|f_{n_k}\|/2,$$
which implies $\|f_{n_k}\| \to 0$ as $k \to \infty$. Thus $f = 0$, a contradiction ($f(y) \ne 0$). Therefore $D$ is dense and $X$ is separable. □

The example of $\ell^1$ and $\ell^\infty$ shows that the converse of this corollary doesn't hold. In fact, this corollary offers a proof that $\ell^1$ is not the dual of $\ell^\infty$, provided you can show $\ell^\infty$ is not separable.
VII. The Baire Theorem and Consequences

Definition VII.1. $D$ is dense in $X$ iff $\bar D = X$; equivalently, iff $U \cap D \ne \emptyset$ for every nonempty open $U \subseteq X$.
Definition VII.2. $E$ is nowhere dense iff $\widetilde{\bar E}$ (the complement of its closure) is dense in $X$.
Definition VII.3. $X$ is meager iff $X$ is a countable union of nowhere dense sets. (Formerly called "of first category".)

Theorem VII.4. Let $(X,d)$ be a complete metric space. (Note: one can substitute locally compact Hausdorff for complete, using the finite-intersection-property definition of compactness.) Then:
a) If $\{O_n\}_{n=1}^\infty$ are open dense subsets of $X$, then $\bigcap_{n=1}^\infty O_n$ is dense in $X$.
b) No nonempty open subset of $X$ is a countable union of nowhere dense sets. In particular, $X$ is not.

Proof. The idea of the proof is straightforward: suppose that $X$ is a complete metric space and $X = \bigcup_{n=1}^\infty A_n$ with each $A_n$ nowhere dense. We will construct a Cauchy sequence $\{x_m\}$ which stays away from each $A_n$, so that its limit point $x$ (which is in $X$ by completeness) is in no $A_n$, thereby contradicting the statement $X = \bigcup_{n=1}^\infty A_n$.
Since $A_1$ is nowhere dense, we can find $x_1 \notin \bar A_1$ and an open ball $B_1$ about $x_1$ such that $B_1 \cap A_1 = \emptyset$, with the radius of $B_1$ smaller than 1. Since $A_2$ is nowhere dense, we can find $x_2 \in B_1\setminus\bar A_2$; let $B_2$ be an open ball about $x_2$ such that $B_2 \cap A_2 = \emptyset$ and $\bar B_2 \subseteq B_1$, with the radius of $B_2$ smaller than $\frac12$. Proceeding inductively, we obtain a sequence $\{x_n\}$ where
$$x_n \in B_{n-1}\setminus\bar A_n, \qquad \bar B_n \subseteq B_{n-1}, \qquad B_n \cap A_n = \emptyset,$$
with the radius of $B_n$ smaller than $2^{1-n}$. This sequence is Cauchy, because $n, m \ge N$ implies that $x_n, x_m \in B_N$, so
$$\rho(x_n,x_m) \le 2^{1-N} + 2^{1-N} = 2^{2-N} \xrightarrow{\,N\to\infty\,} 0.$$
Let $x = \lim_n x_n$. Since $x_n \in B_N$ for $n \ge N$, we have $x \in \bar B_N \subseteq B_{N-1}$. Thus $x \notin A_{N-1}$ for any $N$, which contradicts $X = \bigcup_{n=1}^\infty A_n$. □

VII.1. Applications of Baire.

1. $\mathbb Q$ is NOT a $G_\delta$.
Proof. Suppose it were. Then $\mathbb Q = \bigcap_{n=1}^\infty O_n$, where the $O_n$ are open. Note: $\mathbb Q \subseteq \bigcap O_n \implies \mathbb Q \subseteq O_n$ for all $n$, so $\mathbb Q$ dense in $\mathbb R$ $\implies$ each $O_n$ is dense in $\mathbb R$. Let $\{q_k\}_{k=1}^\infty$ be an enumeration of $\mathbb Q$. Consider the singleton set $\{q_n\}$ (not the sequence!). Then $\{q_n\}$ closed $\implies \widetilde{\{q_n\}}$ open. Also, $\widetilde{\{q_n\}}$ is dense in $\mathbb R$: it contains all but one of the rationals, and e.g. the sequence $\{q_n - \frac1m\}_{m=1}^\infty \subseteq \widetilde{\{q_n\}}$ has $q_n$ as a limit point. Then $O_n \cap \widetilde{\{q_n\}}$ is open and dense (to see dense, note that $O_n \cap \widetilde{\{q_n\}} \supseteq \mathbb Q\setminus\{q_n\}$). But
$$\bigcap_n\big(O_n \cap \widetilde{\{q_n\}}\big) = \Big(\bigcap_n O_n\Big) \cap \bigcap_n\widetilde{\{q_n\}} = \mathbb Q \cap (\mathbb R\setminus\mathbb Q) = \emptyset,$$
so $\emptyset$ would be dense in $\mathbb R$ by Baire's Theorem, a contradiction. □
However, $\mathbb Q$ is an $F_\sigma$: $\mathbb Q = \bigcup_{n=1}^\infty\{q_n\}$.

2. A meager set with Lebesgue measure 1.
We start by constructing a nowhere dense set with positive measure. Let $C_\alpha$ be the Cantor set formed by removing, at stage $n$, an open interval of length $\frac{\alpha}{3^{n+1}}$ from each of the remaining $2^n$ pieces, where $0 < \alpha < 1$. Then $C_\alpha = \bigcap_{n=0}^\infty C_\alpha^n$ is closed as an intersection of closed sets, and $C_\alpha = \bar C_\alpha$.

[Figure 4. Construction of the Cantor set with measure $1 - \alpha$: stages $C^0$ through $C^4$.]

To show $C_\alpha$ is nowhere dense, it suffices to show that $C_\alpha$ contains no nonempty open set. (If a closed set $F$ contains no nonempty open set, then it is nowhere dense; contrapositive: suppose $\widetilde{\bar F}$ is not dense. Then there is $x \in F$ which is not a limit point of $\widetilde F$; since no sequence in $\widetilde F$ converges to $x$, every point of $\widetilde F$ must be at least $\varepsilon > 0$ away from $x$, so $F$ contains $B_\varepsilon(x)$. The conditions are actually equivalent.) If $O \ne \emptyset$ is open, then it contains an interval $A$ of positive length $\varepsilon > 0$. Each of the $2^n$ intervals of $C_\alpha^n$ (the $n$th step of the construction) has length $\ell < 2^{-n}$, so for large enough $N$, we have $\ell < 2^{-N} < \varepsilon$. Thus $A$ is longer than any component of $C_\alpha^N$ and hence cannot be contained in $C_\alpha^N$, nor in $C_\alpha$. So $C_\alpha$ is nowhere dense.
Now we use the measure lemmas below to compute the measure of $C_\alpha$:
$$\mu C_\alpha = 1 - \sum_{n=0}^\infty \frac{\alpha\,2^n}{3^{n+1}} = 1 - \frac\alpha3\sum_{n=0}^\infty\Big(\frac23\Big)^n = 1 - \frac\alpha3\Big(\frac{1}{1-2/3}\Big) = 1 - \alpha.$$
Thus every singleton $\{C_\alpha\}$ is trivially a countable union of nowhere dense sets which has positive Lebesgue measure. To complete this in the manner suggested by Royden, we now consider $P = \bigcup_{k=1}^\infty C_{1/k}$:
$$\mu P = \mu\Big(\bigcup_{k=1}^\infty C_{1/k}\Big) = \lim_{k\to\infty}\Big(1 - \frac1k\Big) = 1.$$
So $P$ is a union of a countably infinite collection of nowhere dense sets, and $P$ has Lebesgue measure 1.

The Measure Lemmas:
Proposition VII.5. Let $(X, \mathcal A, \mu)$ be a measure space.
a) If $\{A_k\}$ is an increasing sequence ($A_k \subseteq A_{k+1}$) of sets of $\mathcal A$, then $\mu(\cup A_k) = \lim\mu A_k$.
b) If $\{A_k\}$ is a decreasing sequence ($A_{k+1} \subseteq A_k$) of sets of $\mathcal A$, and $\mu A_n < \infty$ for some $n$, then $\mu(\cap A_k) = \lim\mu A_k$. ($\mu A_n < \infty$ is necessary; else let $A_k = (k,\infty)$.)
Proposition VII.6. Let $(X, \mathcal A)$ be a measurable space and let $\mu : \mathcal A \to [0,\infty]$ be finitely additive with $\mu(\emptyset) = 0$. Then $\mu$ is a measure if either
a) $\lim\mu A_k = \mu(\cup A_k)$ for each increasing sequence $\{A_k\} \subseteq \mathcal A$; or
b) $\lim\mu A_k = 0$ for each decreasing sequence $\{A_k\} \subseteq \mathcal A$ with $\cap A_k = \emptyset$.
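The removed-length series for the fat Cantor set can be summed numerically (a short sketch in Python):

```python
# At stage n we remove 2^n middle intervals of length alpha/3^{n+1};
# the removed total is alpha * (1/3) * sum (2/3)^n = alpha, so mu(C_alpha) = 1 - alpha.
alpha = 0.4
removed = sum(alpha * 2**n / 3**(n + 1) for n in range(200))
print(1 - removed)     # ~0.6 = 1 - alpha
```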
VII.2. Consequences of Baire.

Theorem VII.7 (Open Mapping Theorem). $X, Y$ are Banach spaces, $T \in \mathcal L(X,Y)$. Then $T$ surjective $\implies T$ open.
Proof. Nasty (technical and longish). □

Corollary VII.8. If $X, Y$ are Banach and $T \in \mathcal L(X,Y)$ is bijective, then $T$ is an isomorphism.
Proof.
$$T \text{ is an isomorphism} \iff T^{-1} \in \mathcal L(Y,X) \iff T^{-1} \text{ is continuous} \iff T \text{ is open}.$$
But this last condition is exactly the result of the OMT: $T$ bijective $\implies T$ surjective $\implies T$ open. □

Definition VII.9. The graph of $T$ is the set $\Gamma(T) = \{(x,y) \in X \times Y : y = Tx\}$.
Definition VII.10. A linear map $T$ is closed iff $\Gamma(T)$ is a closed subspace of $X \times Y$.

Theorem VII.11 (Closed Graph Theorem). If $X, Y$ are Banach spaces, and the linear map $T : X \to Y$ is closed, then $T$ is bounded.
Proof. Let $\pi_1, \pi_2$ be the projections of $\Gamma(T)$ to $X$ and $Y$ respectively: $\pi_1(x,Tx) = x$ and $\pi_2(x,Tx) = Tx$. Obviously, $\pi_1 \in \mathcal L(\Gamma(T),X)$ and $\pi_2 \in \mathcal L(\Gamma(T),Y)$. Since $X, Y$ are complete, so is $X \times Y$. Hence $T$ closed implies $\Gamma(T)$ is a closed subspace (by definition), and a closed subspace of a complete space is complete, so $\Gamma(T)$ is also complete. Now $\pi_1 : \Gamma(T) \to X$ is a bijection, hence an isomorphism by the corollary to the OMT, i.e. $\pi_1^{-1}$ is bounded. But then $T = \pi_2 \circ \pi_1^{-1}$ is bounded. □

[Figure 5. $T$ as the composition $\pi_2 \circ \pi_1^{-1}$ of projections.]

CGT restated: let $X, Y$ be Banach spaces, and $T : X \to Y$ be linear.
(1) $T$ is bounded $\iff \Gamma(T)$ is closed.
(2) $T$ is continuous $\iff$ ($x_n \to x$ and $Tx_n \to y \implies y = Tx$).

Note. For $X, Y$ Banach spaces and $S : X \to Y$ unbounded:
(a) $\Gamma(S)$ is not complete;
(b) $T : X \to \Gamma(S)$, $x \mapsto (x, Sx)$, is closed but not bounded; and
(c) $T^{-1} : \Gamma(S) \to X$ is bounded and surjective but not open.

Theorem VII.12 (Uniform Boundedness Principle, aka Banach-Steinhaus Theorem). Suppose $X, Y$ are normed vector spaces and $\mathcal A \subseteq \mathcal L(X,Y)$.
a) If $\sup_{T\in\mathcal A}\|Tx\| < \infty$ for all $x$ in some nonmeager set $D \subseteq X$, then $\sup_{T\in\mathcal A}\|T\| < \infty$.
b) If $X$ is Banach and $\sup_{T\in\mathcal A}\|Tx\| < \infty$ for all $x \in X$, then $\sup_{T\in\mathcal A}\|T\| < \infty$.

Proof of (a). Let
$$E_n = \{x \in X : \sup_{T\in\mathcal A}\|Tx\| \le n\} = \bigcap_{T\in\mathcal A}\{x \in X : \|Tx\| \le n\}.$$
Thus the $E_n$ are closed, as intersections of preimages of closed sets under continuous maps. By hypothesis, every $x \in D$ lies in some $E_N$, i.e. $D \subseteq \bigcup_N E_N$; since $D$ is nonmeager, some $E_N$ is not nowhere dense, and being closed, it contains a nontrivial closed ball $\bar B(x_0, r)$, $r > 0$. But then $\|x\| \le r \implies x_0 + x \in E_N$, so
$$\|Tx\| \le \|T(x+x_0)\| + \|Tx_0\| \le 2N \qquad \forall T \in \mathcal A,$$
which implies that $\|T\| \le \frac{2N}{r} < \infty$ for all $T \in \mathcal A$. □
Proof of (b). $X$ is a nonempty Banach space, so $X$ is nonmeager by Baire. (Baire's Theorem says that every complete metric space is nonmeager in itself.) Then just apply (a). □

Rephrase of the UBP: either there is $M < \infty$ such that $\|T\| \le M$ for all $T \in \mathcal A$, or else $\sup_{T\in\mathcal A}\|Tx\| = \infty$ for all $x$ in some dense $G_\delta \subseteq X$. Geometrically: either there is a ball $B \subseteq Y$ (with radius $M$, center 0) such that every $T \in \mathcal A$ maps the unit ball of $X$ into $B$, or there exists an $x \in X$ (in fact, a whole dense $G_\delta$ of them) such that no ball in $Y$ contains $Tx$ for all $T \in \mathcal A$ simultaneously.

VII.3. Related Problems.

1. Find a Banach space $X$, a normed linear space $Y$, and a continuous linear bijection $f : X \to Y$ such that $f^{-1}$ is not continuous. (Note: $Y$ better not be Banach!)
Let $X := (L^2[0,1], \|\cdot\|_2)$ and $Y := (L^2[0,1], \|\cdot\|_1)$, and define $f : X \to Y$ by $f(x) = x$ (the identity). $f$ is clearly bijective and linear. To see $f$ is continuous, it suffices to show $f$ is continuous at 0: let $\varphi \in X$ and fix $\varepsilon > 0$. Since the underlying measure space is finite (with $\mu[0,1] = 1$), $p < q \implies \|\varphi\|_p \le \|\varphi\|_q$; in particular $\|\varphi\|_1 \le \|\varphi\|_2$, so $\|\varphi\|_2 \le \varepsilon \implies \|f(\varphi)\|_1 = \|\varphi\|_1 \le \varepsilon$.
To see $f^{-1}$ is not continuous, consider $\{\varphi_n\} \subseteq Y$ defined by
$$\varphi_n(x) = \frac{1}{\sqrt{x + 1/n}}.$$
With $\varphi(x) = \frac{1}{\sqrt x}$ we have $\varphi_n(x) \le \varphi_{n+1}(x) \le \varphi(x)$ for all $n, x$. Since $\int_0^1 \frac{dx}{\sqrt x} = 2 \implies \varphi \in L^1$, the MCT gives
$$\lim\|\varphi_n\|_1 = \lim\int_0^1 \frac{dx}{\sqrt{x + 1/n}} = \int_0^1 \frac{dx}{\sqrt x} = 2 = \|\varphi\|_1,$$
so the numbers $\|\varphi_n\|_1$ stay bounded. However,
$$\|\varphi_n\|_2 = \Big(\int_0^1 \frac{dx}{x + 1/n}\Big)^{1/2} = \big(\log(1+n)\big)^{1/2} \xrightarrow{\,n\to\infty\,} \infty$$
(indeed, $\int_0^1 \varphi^2\,dx = \int_0^1 \frac{dx}{x} = \infty$). If $f^{-1}$ were continuous, there would be a constant $C$ with $\|\psi\|_2 = \|f^{-1}(\psi)\|_2 \le C\|\psi\|_1$ for all $\psi \in Y$; the $\varphi_n$ violate this. Hence $f^{-1}$ is not continuous.

2. Let $V$ be a Banach space.
a) If $V$ is infinite-dimensional, construct an unbounded linear operator $f : V \to V$.
b) If $V$ is finite-dimensional, show that every linear operator on it is bounded.
Since $V$ is a vector space, it has some (Hamel) basis $\{e_\lambda\}_{\lambda\in\Lambda}$; normalize so that $\|e_\lambda\| = 1$. Then if $\{c_\lambda\}_{\lambda\in\Lambda}$ is any collection of scalars, there is a unique linear $f : V \to V$ such that for every element $v = \sum v_\lambda e_\lambda$ of $V$ (a finite sum), we have
$$f(v) = \sum c_\lambda v_\lambda e_\lambda.$$
(Just define $f(e_\lambda) = c_\lambda e_\lambda$.)
a) For $V$ infinite-dimensional, choose $\{c_\lambda\}$ unbounded; then $\|f(e_\lambda)\| = |c_\lambda|$ is unbounded on unit vectors, so $f$ is unbounded.
b) For $V$ finite-dimensional, every linear operator $f$ satisfies $\|f\| \le k\cdot\max_\lambda\|f(e_\lambda)\|$ for some constant $k$. (Basically, $\Lambda$ finite $\implies$ the max exists, and all norms on a finite-dimensional space are equivalent.)
See any book on quantum dynamics or functional analysis for the examples of the position and momentum operators (which are unbounded), e.g., Reed & Simon.
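Problem 1's witness functions have closed-form norms, so the failure of any bound $\|\cdot\|_2 \le C\|\cdot\|_1$ is easy to display (a sketch assuming numpy):

```python
import numpy as np

# phi_n(x) = (x + 1/n)^{-1/2} on [0,1]:
# ||phi_n||_1 = 2(sqrt(1 + 1/n) - sqrt(1/n)) -> 2   (bounded),
# ||phi_n||_2 = sqrt(log(1 + n))             -> inf (unbounded).
for n in [1, 10, 100, 10_000]:
    l1 = 2 * (np.sqrt(1 + 1/n) - np.sqrt(1/n))   # exact antiderivative
    l2 = np.sqrt(np.log(1 + n))
    print(n, l1, l2)
```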
206
arXiv:math/0309309 [math.NT]

Title: A Database of Local Fields
Authors: John W. Jones, David P. Roberts
[Submitted on 18 Sep 2003]

Abstract: We describe our online database of finite extensions of the p-adic numbers, and how it can be used to facilitate local analysis of number fields.

Subjects: Number Theory (math.NT)
Journal reference: J. Symbolic Comput. 41, 80-97 (2006)
Submission history: [v1] Thu, 18 Sep 2003 20:37:22 UTC (20 KB), from John Jones
207
Pierre Robin Syndrome
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.
Authors: Diana Baxter (Indiana University); Anthony L. Shanks. Last update: August 7, 2023.

Continuing Education Activity

Pierre Robin sequence is a constellation of micrognathia, glossoptosis, and upper airway obstruction. It is a clinical diagnosis that can range in severity and may be associated with genetic syndromes. This activity describes the evaluation and management of Pierre Robin sequence and reviews the role of the interprofessional team in managing patients with this condition.

Objectives:
- Summarize the etiology of Pierre Robin sequence.
- Describe the classic physical exam findings associated with Pierre Robin sequence.
- Outline the treatment considerations for patients with Pierre Robin sequence.
- Review interprofessional team strategies for improving care coordination and communication during prenatal parental counseling of Pierre Robin sequence to enhance the care of patients with this condition.

Introduction

Pierre Robin sequence (PRS) is characterized by the clinical triad of micrognathia (mandibular hypoplasia), glossoptosis (downward displacement of the tongue), and upper airway obstruction. Clinically, this results in a small, underdeveloped mandible that causes the base of the tongue to fall back into the throat and can ultimately lead to upper airway compromise. It is commonly associated with cleft palate. These findings may be part of a syndrome or isolated. The sequence was first described in 1891; however, Pierre Robin first published a case of an infant with these characteristics in 1923.

Etiology

The etiology of PRS is typically separated into isolated and syndromic PRS. Non-syndromic PRS has been associated with mutations on chromosomes 2, 4, 11, or 17. Some evidence suggests SOX9 or KCNJ2 mutations (on chromosome 17) may affect the development of facial structures and cartilage, leading to PRS. Syndromic PRS has recently been reported to account for 60% of PRS cases. Thirty-four syndromes have been associated with syndromic PRS, the most common being Stickler syndrome.

Associated Syndromes

Stickler syndrome is the most common syndrome associated with PRS; in one study, 47% of syndromic PRS patients were diagnosed with Stickler syndrome. It is an autosomal dominant condition caused by mutations of COL genes, affecting collagen formation.
Clinical features include flat midface, epicanthal folds, retinal detachments, cataracts, joint hypermobility, and sensorineural hearing loss, in addition to the typical signs of PRS.

Velocardiofacial syndrome is caused by a 22q11 deletion or a TBX1 gene mutation, resulting in abnormalities in heart, parathyroid, thymus, and facial development. Clinical features include long upper lip and philtrum, elongated face, slender digits, hypothyroidism, immune dysfunction, hearing loss, pulmonary atresia, ventricular septal defects (VSDs), and hypoplastic pulmonary arteries.

Treacher Collins syndrome is caused by mutations in the TCOF1, POLR1C, and POLR1D genes. Clinical features include hypoplasia of the zygomatic, maxillary, and mandibular bones, and anomalies of the temporomandibular joint (TMJ), cleft palate, and external ear.

Epidemiology

Epidemiological data is sparse, but available evidence suggests that PRS affects approximately 1 in 8,500 to 1 in 14,000 newborns a year.

Pathophysiology

Pierre Robin sequence (previously named Pierre Robin syndrome) is now correctly named a sequence because one initial malformation leads to a sequential chain of events causing the other anomalies. A syndrome, in contrast, is a set of anomalies that arise separately due to one underlying pathogenesis. In PRS, micrognathia is the first abnormality; it leads to glossoptosis and, ultimately, airway obstruction and/or a cleft palate. The following are common theories behind the hypoplastic mandibular growth.

Mechanical Theory

In the early first trimester, around the 7th week of gestation, the mandible typically grows ventrally and inferiorly. If mandibular growth is abnormal, the tongue is not able to follow the normal trajectory of growth and blocks the closure of the palatal cleft in the 11th week of gestation. The glossoptosis, due to abnormal mandibular growth, ultimately leads to upper airway obstruction.

Neurological Maturation Theory

This theory proposes a delay in the neuromuscular development of the tongue, which in turn does not stimulate the mandible to grow appropriately or the palatal shelves to fuse.

Mandible Compression Theory

This theory proposes that external forces cause the fetal head to become flexed, compressing the mandible against the chest and rendering it unable to grow appropriately. This may be caused by multifetal gestation, uterine anomalies, or oligohydramnios. The tongue continues to grow normally and is ultimately displaced backward, causing potential obstruction of the upper airway. The tip of the tongue may also impede the fusion of the palatal shelves, leading to cleft palate.

History and Physical

Micrognathia

This is a clinical diagnosis of an underdeveloped mandible, typically with a shorter mandibular body length and a greater mandibular angle. Indexes for diagnosing micrognathia have been developed in the past, but none have proved useful in predicting which patients will experience respiratory problems.

Glossoptosis

This is defined as displacement of the base of the tongue towards the pharynx. There is a wide range of severity and therefore of associated respiratory distress. There is no standard for diagnosing glossoptosis, but endoscopy and computed tomography (CT) imaging may be helpful in quantifying the level of obstruction present.

Airway Obstruction

There is a wide range of clinical severity of airway obstruction, ranging from severe ventilatory compromise requiring intubation immediately after birth to mild dysfunction not requiring any intervention.
Clinical signs of airway obstruction may include abnormal breathing sounds, increased respiratory accessory muscle use, desaturations, difficulty feeding/swallowing, reflux, and aspiration. Long-term signs of airway obstruction may include reduced weight gain, difficulty speaking, neurological deficits, and ultimately pulmonary hypertension and cor pulmonale.

Cleft Palate

While not one of the required triad of PRS, cleft palate is present in most cases. The most common type is a U-shaped cleft palate, though V-shaped clefts have also been reported.

Evaluation

During pregnancy, ultrasound findings of micrognathia raise concern for PRS. Polyhydramnios may also be present if glossoptosis is significant enough to cause swallowing impairment. Prenatal diagnosis is important for appropriate counseling of parents before delivery about the expected short- and long-term complications and treatments for PRS. This also allows all team members to be prepared for intervention at delivery if needed, such as maternal-fetal medicine providers, neonatologists, pediatric anesthesia, and/or pediatric otolaryngologists. Alternatively, it allows the parents the option of terminating the pregnancy. Typically, presentations apparent on ultrasound during the prenatal period are more severe. Amniocentesis with microarray should be offered to women with findings concerning for PRS, given its common association with genetic syndromes. Referral to a genetic counselor is also advised.

After birth, an initial evaluation in the delivery room is important to determine whether the patient needs immediate airway intervention such as intubation or positive pressure ventilation. There is no gold standard for evaluation and diagnosis of PRS. Continuous oxygen saturation monitoring helps detect spontaneous desaturations as well as desaturations with feeding, sleep, and different positions. Polysomnography can be helpful in establishing the severity of obstructive apneic events and the need for and timing of intervention. Nasoendoscopy or bronchoscopy can assess points of airway obstruction in the upper and lower respiratory tract. In newborns, feeding should be assessed. Growth charts and weight gain should be utilized to determine whether the infant needs nutritional/feeding support with a nasogastric tube.

Treatment / Management

Historically, cases of PRS have been classified into groups or grades based on severity. The most common classification system was developed by Cole et al. and reports grades 1-3 from mild to severe. A newer classification system by Li et al. describes a 4-group system to assist in the choice of intervention; this system includes grade 0, a very mild form without respiratory or feeding dysfunction.

Mild disease can often be treated using conservative management without surgery. This entails prone and lateral positioning to allow gravity to pull the tongue anteriorly and improve airway obstruction, resolving approximately 70% of cases of PRS. Nasopharyngeal stenting has also been used as a temporary measure to keep the airway open, though parents must pay special attention to complications such as aspiration and obstruction of the tube. Continuous positive airway pressure (CPAP) is a useful intervention that has shown great benefit, but compliance is very difficult in this patient population. Preepiglottic baton plates (PEBP) have also been noted to decrease apneic events and improve weight gain in one study.

The literature is very conflicted regarding the potential for catch-up growth in the postnatal period.
Some studies have shown that isolated PRS may reach normal mandibular growth in the years following birth, but others have reported persistently small mandibles at up to 5 years of age. There is good consensus, however, that syndromic PRS does not experience catch-up growth of the mandible.

More severe cases of PRS require surgical management, and syndromic PRS will typically require this treatment. It has been estimated that only approximately 10% of isolated PRS requires surgical management. Surgical options include tongue-lip adhesion, mandibular distraction osteogenesis, and tracheostomy.

Tongue-lip adhesion is a procedure where the tongue is sutured to the mucous membrane and muscle of the lower lip to hold the tongue in an anterior position in an attempt to reduce the amount of airway obstruction. This is often employed in isolated PRS as a temporary measure while the mandible grows during the first few years of life. Complications include lacerations, dehiscence, Wharton's duct injury, infection, and aspiration.

Mandibular distraction osteogenesis is another surgical intervention that produces longer-term results. This procedure advances and elongates the jaw in 3 phases: latency, activation, and consolidation. Latency is the period of time after osteotomy and before distraction begins; during activation, an external device is rotated to elongate the distance between the bones at the site of the break; consolidation is the time allowed for bone formation and healing at the site of the osteotomy. Complications include infection, osteomyelitis of the mandible, damage to the inferior alveolar nerve, bite deformities, and permanent dentition loss.

Tracheostomy is a procedure where the trachea is directly cannulated through the anterior aspect of the neck. This is more likely to be performed on patients with syndromic PRS and is also a good option for patients with airway obstruction in multiple areas. It remains the gold standard of treatment for airway protection. Complications include infection, damage to the esophagus, damage to the recurrent laryngeal nerve, blockage of the cannula, pneumothorax, pneumomediastinum, and longer intensive care unit (ICU) stays than other interventions.

Differential Diagnosis

Many of the following conditions are associated with Pierre Robin sequence and may be present concurrently.

- Velocardiofacial syndrome
- DiGeorge syndrome
- Stickler syndrome
- Treacher Collins syndrome
- CHARGE syndrome (coloboma, heart defects, atresia choanae, growth retardation, genital abnormalities, and ear abnormalities)
- Fetal alcohol syndrome
- Pediatric cleft lip and palate
- Childhood sleep apnea

Prognosis

Obstructive episodes can lead to hypoxemia, hypoventilation, malnutrition, asphyxia, cor pulmonale, or even death. Patients with other anomalies or syndromic PRS have a higher mortality rate. One retrospective review of 181 infants found an overall mortality rate of 16.6%, but no deaths in isolated PRS. Mortality was associated with cardiac and central nervous system (CNS) anomalies or anomalies of two or more organ systems.

Complications

The complications of Pierre Robin sequence are due to airway obstruction. Short-term complications from airway obstruction include desaturations, difficulty feeding, and aspiration events. Long-term complications may be attributed to hypoxic injury and inability to feed well.
This may include cerebral impairment, pulmonary hypertension, cor pulmonale, and failure to thrive. Procedural complications relating to PRS are discussed in the treatment/management section. Early detection of PRS can help prevent long-term complications of airway obstruction and hypoxemia.

Deterrence and Patient Education

Early prenatal diagnosis allows parents the option of pregnancy termination if they desire. Postnatal education should be given to parents about signs of respiratory distress and feeding difficulties. Appropriate counseling about potential complications of procedures should be discussed as well.

Enhancing Healthcare Team Outcomes

Management of a patient with Pierre Robin sequence requires an interprofessional team including maternal-fetal medicine (MFM) specialists, family planning specialists, neonatologists, plastic surgeons, otolaryngologists, anesthesiologists, oral maxillofacial surgeons, dentists, dieticians, speech pathologists, and geneticists. Prenatal diagnosis is important in coordinating interprofessional team planning even before delivery. Prenatal diagnosis is typically made from ultrasound findings by a maternal-fetal medicine specialist, and diagnostic genetic testing is conducted with MFM as well. After diagnosis, referral to a geneticist may be made for further counseling. It is important to educate parents on expectations and management options in the prenatal period.

One way to enhance this coordination between teams is to hold prenatal care conferences with representatives from each department to share their concerns and plans with the rest of the care team, as well as the parents. Neonatologists can discuss the immediate assessment and care of the newborn in the first weeks to months of life, including airway management and typical feeding/growth difficulties. Otolaryngologists, plastic surgeons, and/or oral maxillofacial surgeons can discuss potential surgical options and complications. Dieticians are an important part of the care team, as many children with PRS have feeding difficulties and suffer from failure to thrive. Speech pathologists may later assist with speech and swallowing training. Nursing will be instrumental from delivery through postnatal hospitalizations.

References

1. Hsieh ST, Woo AS. Pierre Robin Sequence. Clin Plast Surg. 2019 Apr;46(2):249-259. [PubMed: 30851756]
2. Gangopadhyay N, Mendonca DA, Woo AS. Pierre robin sequence. Semin Plast Surg. 2012 May;26(2):76-82. [PMC free article: PMC3424697] [PubMed: 23633934]
3. Izumi K, Konczal LL, Mitchell AL, Jones MC. Underlying genetic diagnosis of Pierre Robin sequence: retrospective chart review at two children's hospitals and a systematic literature review. J Pediatr. 2012 Apr;160(4):645-650.e2. [PubMed: 22048048]
4. Karempelis P, Hagen M, Morrell N, Roby BB. Associated syndromes in patients with Pierre Robin Sequence. Int J Pediatr Otorhinolaryngol. 2020 Apr;131:109842. [PubMed: 31927149]
5. Giudice A, Barone S, Belhous K, Morice A, Soupre V, Bennardo F, Boddaert N, Vazquez MP, Abadie V, Picard A. Pierre Robin sequence: A comprehensive narrative review of the literature over time. J Stomatol Oral Maxillofac Surg. 2018 Nov;119(5):419-428. [PubMed: 29777780]
6. Cohen SM, Greathouse ST, Rabbani CC, O'Neil J, Kardatzke MA, Hall TE, Bennett WE, Daftary AS, Matt BH, Tholpady SS.
Robin sequence: what the multidisciplinary approach can do. J Multidiscip Healthc. 2017;10:121-132. [PMC free article: PMC5375645] [PubMed: 28392703]
7. Cole A, Lynch P, Slator R. A new grading of Pierre Robin sequence. Cleft Palate Craniofac J. 2008 Nov;45(6):603-6. [PubMed: 18956939]
8. Li WY, Poon A, Courtemanche D, Verchere C, Robertson S, Bucevska M, Malic C, Arneja JS. Airway Management in Pierre Robin Sequence: The Vancouver Classification. Plast Surg (Oakv). 2017 Feb;25(1):14-20. [PMC free article: PMC5626186] [PubMed: 29026807]
9. Mackay DR. Controversies in the diagnosis and management of the Robin sequence. J Craniofac Surg. 2011 Mar;22(2):415-20. [PubMed: 21403570]
10. Linz A, Bacher M, Kagan KO, Buchenau W, Arand J, Poets CF. [Pierre Robin Sequence: interdisciplinary treatment after prenatal diagnosis]. Z Geburtshilfe Neonatol. 2011 Jun;215(3):105-8. [PubMed: 21755482]
11. Kam K, McKay M, MacLean J, Witmans M, Spier S, Mitchell I. Surgical versus nonsurgical interventions to relieve upper airway obstruction in children with Pierre Robin sequence. Can Respir J. 2015 May-Jun;22(3):171-5. [PMC free article: PMC4470552] [PubMed: 25848803]
12. Costa MA, Tu MM, Murage KP, Tholpady SS, Engle WA, Flores RL. Robin sequence: mortality, causes of death, and clinical outcomes. Plast Reconstr Surg. 2014 Oct;134(4):738-745. [PubMed: 25357033]

Disclosure: Diana Baxter declares no relevant financial relationships with ineligible companies.
Disclosure: Anthony Shanks declares no relevant financial relationships with ineligible companies.

Copyright © 2025, StatPearls Publishing LLC. Distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license. Bookshelf ID: NBK562213. PMID: 32965884.
208
14.4 Logical operators … | Introduction to Quantum Information Science
===============

Here we are going to use the abstract group theory that we
developed back in Sections 7.5 and 7.6, but there are other ways of explaining this material. In Section 14.6 ("Logical operators (a different approach)") we tell the same story from a different point of view, so if you find this section confusing then don't worry — you can always come back to it after reading the other one! It's always a good idea to have multiple viewpoints.

We have been slowly building up towards constructing quantum error-correction codes using the stabiliser formalism, but there is one major detail that we have yet to mention. You will perhaps have noticed that we haven't written out what the stabiliser states actually are, nor what the encoding circuits look like. There is a simple reason for this: at this point, we don't actually know! There's a little more work to be done — the stabilisers have provided us with a two-dimensional space, but if we have $|0\rangle$ and $|1\rangle$ to encode, how are they mapped within the space? So far, it's undefined, and there is a lot of freedom to choose, but the structures provided by group theory are quite helpful here in providing some natural choices. Furthermore, better understanding these structures is the first step towards figuring out how to upgrade from simple error correction to fault-tolerant computation.

We're going to turn back all the way to Sections 7.5 and 7.6, where we discovered how to think about normalisers of stabiliser groups inside the Pauli group. Let's start with a brief recap. The $n$-qubit Pauli group $\mathcal{P}_n$ consists of all $n$-fold tensor products of Pauli matrices $\mathbf{1}$, $X$, $Y$, and $Z$, with possible global phase factors $\pm 1$ and $\pm i$. Given an operator $s \in \mathcal{P}_n$, we say that it stabilises a (non-zero) $n$-qubit state $|\psi\rangle$ if $s|\psi\rangle = |\psi\rangle$, i.e. if it admits $|\psi\rangle$ as an eigenstate with eigenvalue $+1$. We showed that the set of all operators that stabilise every state in a given subspace $V$ form a group, called the stabiliser group; using a little bit of group theory, we characterised all possible stabiliser groups by showing that they are exactly the abelian subgroups of $\mathcal{P}_n$ that do not contain $-\mathbf{1}$.

Then we looked at the group structure of the Pauli group, and how any stabiliser group $\mathcal{S}$ sits inside it. It turned out that the normaliser
$$N(\mathcal{S}) = \{ g \in \mathcal{P}_n \mid g s g^{-1} \in \mathcal{S} \text{ for all } s \in \mathcal{S} \}$$
of $\mathcal{S}$ in $\mathcal{P}_n$ and the centraliser
$$Z(\mathcal{S}) = \{ g \in \mathcal{P}_n \mid g s g^{-1} = s \text{ for all } s \in \mathcal{S} \}$$
of $\mathcal{S}$ in $\mathcal{P}_n$ actually agree, because of some elementary properties of the Pauli group. Furthermore, we showed that the normaliser (or centraliser) was itself normal inside the Pauli group, giving us a chain of normal subgroups
$$\mathcal{S} \triangleleft N(\mathcal{S}) \triangleleft \mathcal{P}_n.$$
This lets us arrange the elements of $\mathcal{P}_n$ into cosets by using the two quotient groups
$$N(\mathcal{S})/\mathcal{S} \qquad\text{and}\qquad \mathcal{P}_n/N(\mathcal{S}).$$

How does this help us with our stabiliser error-correction codes? Let's look first at the former: cosets of $\mathcal{S}$ inside its normaliser $N(\mathcal{S})$.
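To make the recap concrete, here is a small numerical sketch (our own illustration, not from the book, assuming only numpy) that brute-forces the claim that the normaliser and the centraliser agree, for the stabiliser group $\mathcal{S} = \langle ZZ\mathbf{1}, \mathbf{1}ZZ \rangle$ of the three-qubit code discussed below. It enumerates the 64 unsigned elements of $\mathcal{P}_3$, since global phases do not affect whether $gsg^{-1}$ lands in $\mathcal{S}$:

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"1": I, "X": X, "Y": Y, "Z": Z}

def tensor(labels):
    """Tensor product of single-qubit Paulis, e.g. tensor('ZZ1')."""
    m = np.array([[1.0 + 0j]])
    for c in labels:
        m = np.kron(m, PAULIS[c])
    return m

# The stabiliser group S = <ZZ1, 1ZZ> of the three-qubit code,
# written out in full: it has four elements.
S = [tensor(s) for s in ["111", "ZZ1", "1ZZ", "Z1Z"]]

def in_group(m, group):
    return any(np.allclose(m, g) for g in group)

# Enumerate the 64 unsigned elements of P_3 (a global phase on g
# cancels in g s g^{-1}, so we may ignore phases here).
normaliser, centraliser = [], []
for labels in itertools.product("1XYZ", repeat=3):
    g = tensor(labels)
    ginv = g.conj().T  # Paulis are unitary
    conj = [g @ s @ ginv for s in S]
    if all(in_group(c, S) for c in conj):
        normaliser.append("".join(labels))
    if all(np.allclose(c, s) for c, s in zip(conj, S)):
        centraliser.append("".join(labels))

# Two Paulis either commute or anticommute, so g s g^{-1} = ±s, and
# -s can only lie in S if -1 does -- which it doesn't. Hence the
# two lists must coincide.
assert normaliser == centraliser
print(len(normaliser), "unsigned elements in N(S) = Z(S)")
```

The count it prints, 16, is consistent with what we find below: four cosets of the four-element group $\mathcal{S}$.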
If $|\psi\rangle \in V_\mathcal{S}$ is a state in the stabilised subspace,[293] then any element $g \in \mathcal{S}$ always satisfies
$$g|\psi\rangle = |\psi\rangle$$
whereas any element $g \in N(\mathcal{S}) \setminus \mathcal{S}$ merely satisfies
$$g|\psi\rangle \in V_\mathcal{S}$$
and, for any such $g$, there are always states in $V_\mathcal{S}$ that are not mapped to themselves. However, if we look at cosets of $\mathcal{S}$ inside $N(\mathcal{S})$ then we discover an incredibly useful fact: all elements of a given coset act on $|\psi\rangle$ in the same way. To see this, take two representatives for a coset, say $g\mathcal{S} = g'\mathcal{S}$ for $g, g' \in N(\mathcal{S})$. By the definition of cosets, this means that there exist $s, s' \in \mathcal{S}$ such that $gs = g's'$. In particular then,
$$gs|\psi\rangle = g's'|\psi\rangle$$
but since $s, s' \in \mathcal{S}$ and $|\psi\rangle \in V_\mathcal{S}$, this says that
$$g|\psi\rangle = g'|\psi\rangle$$
as claimed.

[293]: For us here, the stabilised subspace $V_\mathcal{S}$ is exactly the codespace, and the stabilisers generating $\mathcal{S}$ are exactly the elements $G_r \in \mathcal{P}_n$ constructed from the rows of $H$ as at the start of Section 14.3.

Since the cosets of $\mathcal{S}$ inside $N(\mathcal{S})$ give well-defined actions on stabiliser states, preserving the codespace, we can treat them as operators in their own right. The cosets of $\mathcal{S}$ inside $N(\mathcal{S})$ are called logical operators, and any representative of a coset is an implementation of that logical operator.

Let's try to understand this in the context of an example: the three-qubit code from Section 13.1. The diagram from Section 7.2 was useful in describing this example, so we repeat it as Figure 14.5 below.

Figure 14.5: The stabiliser group $\mathcal{S} = \langle ZZ\mathbf{1}, \mathbf{1}ZZ \rangle$ splits the Hilbert space of three qubits into four equal parts, and gives the stabilised subspace $V_\mathcal{S}$, which is spanned by $|000\rangle$ and $|111\rangle$.

To use the terminology of error-correction codes, we are taking our codespace to be[294]
$$\mathcal{C} = \langle |000\rangle, |111\rangle \rangle$$
which is exactly the stabiliser space $V_\mathcal{S}$ of the stabiliser group
$$\mathcal{S} = \langle ZZ\mathbf{1}, \mathbf{1}ZZ \rangle$$
and the total eight-dimensional Hilbert space of three qubits is decomposed into four mutually orthogonal two-dimensional subspaces $\mathcal{C} \oplus \mathcal{C}_1 \oplus \mathcal{C}_2 \oplus \mathcal{C}_3$, as shown in Figure 14.5. Since we have chosen a specific basis for each of these subspaces, we should give things a name.

[294]: We want to encode a single qubit, which lives in a two-dimensional space (spanned by $|0\rangle$ and $|1\rangle$), so it makes sense that we want our codespace to also be two-dimensional.

The (orthogonal) basis vectors of the codespace $\mathcal{C} = V_\mathcal{S}$ are called logical states, and are usually taken to be the encodings of $|0\rangle$ and $|1\rangle$. In general,[295] the logical states will be superpositions of states, but we still sometimes refer to them as codewords.
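As a quick sanity check of the coset argument above, the following sketch (again our own numpy-based illustration, not from the book) verifies that the generators fix an arbitrary state of the codespace, and that two representatives of the same coset, $XXX$ and $XXX \cdot ZZ\mathbf{1}$, act identically on it despite being different matrices:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def tensor(*ms):
    out = np.array([[1.0 + 0j]])
    for m in ms:
        out = np.kron(out, m)
    return out

ZZ1, OneZZ = tensor(Z, Z, I), tensor(I, Z, Z)
XXX = tensor(X, X, X)

# Basis states of the codespace V_S, and a generic state in it.
ket000 = np.zeros(8, dtype=complex); ket000[0] = 1
ket111 = np.zeros(8, dtype=complex); ket111[7] = 1
psi = 0.6 * ket000 + 0.8 * ket111

# The generators really do stabilise the codespace...
for s in (ZZ1, OneZZ):
    assert np.allclose(s @ psi, psi)

# ...and two representatives of the same coset act identically on
# every codespace state, even though they differ as matrices.
g, g2 = XXX, XXX @ ZZ1
assert not np.allclose(g, g2)
assert np.allclose(g @ psi, g2 @ psi)
print("coset representatives agree on the codespace")
```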
[295]: Note that the logical states for the three-qubit code are actually not superpositions. This reflects the fact that this code is really just a classical repetition code — it only protects against one type of error — embedded into the quantum world.

In our example of the three-qubit code, we have the two logical states logical $0$ and logical $1$, which we denote by
$$|0\rangle_L \coloneqq |000\rangle \qquad |1\rangle_L \coloneqq |111\rangle.$$
The justification for these names is twofold: firstly, $|0\rangle_L$ is exactly the encoding of $|0\rangle$, the "actual" zero state; and secondly, this state $|0\rangle_L$ will behave exactly as the zero state should when acted upon by the logical operators. For example, the operator $X$ sends $|0\rangle$ to $|1\rangle$, so the logical $X$ should send the logical $|0\rangle$ to the logical $|1\rangle$. Let's make this happen!

The normaliser of $\mathcal{S}$ inside $\mathcal{P}_3$ is
$$N(\mathcal{S}) = \{\mathbf{1},\; XXX,\; -YYY,\; ZZZ\} \times \mathcal{S}$$
which we have written in such a way that we can just read off the cosets: there are four of them, and they are represented by $\mathbf{1}$, $XXX$, $-YYY$, and $ZZZ$. These four (implementations of) logical operators all get given the obvious names:
$$\mathbf{1}_L \coloneqq \mathbf{1} \qquad X_L \coloneqq XXX \qquad Y_L \coloneqq -YYY \qquad Z_L \coloneqq ZZZ$$
But note that these are not necessarily the smallest-weight implementations! For example, any single $Z_i$ (i.e. a $Z$ acting on the $i$-th qubit) will have the same logical effect as $ZZZ$, as we can see by looking at how it acts on the logical states:
$$Z_1|1\rangle_L = Z\mathbf{1}\mathbf{1}|111\rangle = -|111\rangle = ZZZ|111\rangle = ZZZ|1\rangle_L.$$
In contrast, $XXX$ is the smallest-weight logical $X$ implementation.

The natural question to ask is then how to find all the implementations, but this is answered by going back to the very definition of them as coset representatives: if $P$ is some implementation of a logical operator, then so too is $SP$ for any $S \in \mathcal{S}$. In the example above, we see that $Z_1 = Z\mathbf{1}\mathbf{1}$ is exactly $\mathbf{1}ZZ \cdot ZZZ$. Because of this, we should really write something like $Z_L = ZZZ\,\mathcal{S}$, or $Z_L = [ZZZ]$, to make clear that $ZZZ$ is just one specific representative of $Z_L$, but you will find that people often conflate implementations with the logical operators themselves and simply write $Z_L = ZZZ$.

Generally, for any CSS code encoding a single qubit into $n$ qubits, we define the logical $X$ and logical $Z$ operators to be the (equivalence classes of) the tensor products of all $X$ operators or all $Z$ operators (respectively), i.e.
$$X_L \coloneqq X^{\otimes n} \qquad Z_L \coloneqq Z^{\otimes n}.$$
Even more generally, for any code constructed from a stabiliser $\mathcal{S}$ encoding $k$ logical qubits, it will be the case that[296] $N(\mathcal{S})/\mathcal{S} \cong \mathcal{P}_k$.
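The calculations above are easy to replicate numerically. This sketch (our own illustration, assuming numpy) checks that $X_L = XXX$ exchanges the logical states, and that the weight-1 operator $Z\mathbf{1}\mathbf{1}$ has the same logical action as $ZZZ$:

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def tensor(*ms):
    out = np.array([[1.0 + 0j]])
    for m in ms:
        out = np.kron(out, m)
    return out

ket0L = np.zeros(8, dtype=complex); ket0L[0] = 1  # |0>_L = |000>
ket1L = np.zeros(8, dtype=complex); ket1L[7] = 1  # |1>_L = |111>

X_L = tensor(X, X, X)  # smallest-weight logical X implementation
Z_L = tensor(Z, Z, Z)  # one implementation of logical Z
Z_1 = tensor(Z, I, I)  # weight-1 implementation of logical Z

# X_L acts on the logical states just as X acts on |0> and |1>.
assert np.allclose(X_L @ ket0L, ket1L)
assert np.allclose(X_L @ ket1L, ket0L)

# Z11 and ZZZ implement the same logical operator: both fix |0>_L
# and flip the sign of |1>_L. Indeed Z11 = (1ZZ)(ZZZ), a stabiliser
# times ZZZ, so they lie in the same coset of S.
for op in (Z_1, Z_L):
    assert np.allclose(op @ ket0L, ket0L)
    assert np.allclose(op @ ket1L, -ket1L)
print("logical X and Z behave as expected on the codespace")
```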
[296]: Proving this is a bit of a task.

Just a warning before moving on: this discussion might make logical states sound pointlessly simple — logical $0$ is just given by three copies of $|0\rangle$, so what's the point? But this apparent simplicity is due to the fact that the three-qubit code is somehow not very quantum at all (these logical states are not superpositions), and in general things get a lot more complicated. For example, even with the three-qubit code, we shall see in Section ?? that
$$|+\rangle_L \neq |{+}{+}{+}\rangle$$
where, as per usual, $|+\rangle = H|0\rangle = (|0\rangle + |1\rangle)/\sqrt{2}$.
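This last inequality is also easy to check numerically. The sketch below (our own illustration, assuming numpy, and assuming that the encoding is linear, so that $|+\rangle_L = (|0\rangle_L + |1\rangle_L)/\sqrt{2}$) compares the two states directly:

```python
import numpy as np

ket000 = np.zeros(8); ket000[0] = 1
ket111 = np.zeros(8); ket111[7] = 1

# Assuming a linear encoding: |+>_L = (|0>_L + |1>_L)/sqrt(2).
plus_L = (ket000 + ket111) / np.sqrt(2)

# |+++> = H|0> ⊗ H|0> ⊗ H|0>: the uniform superposition over all
# eight computational basis states.
plus = np.array([1, 1]) / np.sqrt(2)
plus3 = np.kron(np.kron(plus, plus), plus)

assert not np.allclose(plus_L, plus3)
print("|+>_L and |+++> are different states")
```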
209
[2107.06889] Counting list homomorphisms from graphs of bounded treewidth: tight complexity bounds
===============

Computer Science > Computational Complexity
arXiv:2107.06889 (cs) [Submitted on 14 Jul 2021 (v1), last revised 29 Oct 2021 (this version, v2)]

Title: Counting list homomorphisms from graphs of bounded treewidth: tight complexity bounds
Authors: Jacob Focke, Dániel Marx, Paweł Rzążewski

Abstract: The goal of this work is to give precise bounds on the counting complexity of a family of generalized coloring problems (list homomorphisms) on bounded-treewidth graphs. Given graphs G, H, and lists L(v) ⊆ V(H) for every v ∈ V(G), a list homomorphism is a function f: V(G) → V(H) that preserves the edges (i.e., uv ∈ E(G) implies f(u)f(v) ∈ E(H)) and respects the lists (i.e., f(v) ∈ L(v)). Standard techniques show that if G is given with a tree decomposition of width t, then the number of list homomorphisms can be counted in time |V(H)|^t · n^O(1). Our main result is determining, for every fixed graph H, how much the base |V(H)| in the running time can be improved. For a connected graph H we define irr(H) the following way: if H has a loop or is nonbipartite, then irr(H) is the maximum size of a set S ⊆ V(H) where any two vertices have different neighborhoods; if H is bipartite, then irr(H) is the maximum size of such a set that is fully in one of the bipartition classes. For disconnected H, we define irr(H) as the maximum of irr(C) over every connected component C of H. We show that, for every fixed graph H, the number of list homomorphisms from (G, L) to H can be counted in time irr(H)^t · n^O(1) if a tree decomposition of G having width at most t is given in the input, and cannot be counted in time (irr(H) − ε)^t · n^O(1) for any ε > 0, even if a tree decomposition of G having width at most t is given in the input, unless the #SETH fails. Thereby we give a precise and complete complexity classification featuring matching upper and lower bounds for all target graphs with or without loops.

Subjects: Computational Complexity (cs.CC); Data Structures and Algorithms (cs.DS)
Cite as: arXiv:2107.06889 [cs.CC] (or arXiv:2107.06889v2 [cs.CC] for this version)
Submission history: From: Paweł Rzążewski. [v1] Wed, 14 Jul 2021 12:19:57 UTC (72 KB). [v2] Fri, 29 Oct 2021 16:08:42 UTC (82 KB)
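Since the abstract gives a complete definition of irr(H), a short sketch can make it concrete. The following is our own illustration, not code from the paper; it handles the connected case, takes H as a vertex set and a set of edges (with frozenset({v}) encoding a loop), and adopts the convention that a loop at v puts v into its own neighborhood:

```python
def bipartition(vertices, nbr):
    """2-colour a connected graph; return the two classes, or None."""
    colour = {}
    start = next(iter(vertices))
    stack, colour[start] = [start], 0
    while stack:
        v = stack.pop()
        for u in nbr[v]:
            if u not in colour:
                colour[u] = 1 - colour[v]
                stack.append(u)
            elif colour[u] == colour[v]:
                return None  # odd cycle: not bipartite
    return ({v for v in colour if colour[v] == 0},
            {v for v in colour if colour[v] == 1})

def irr(vertices, edges):
    """irr(H) as defined in the abstract, for connected H.

    A loop at v (frozenset({v}) in `edges`) puts v into N(v); this
    neighborhood convention is our assumption.
    """
    nbr = {v: frozenset(u for u in vertices
                        if frozenset({u, v}) in edges)
           for v in vertices}
    has_loop = any(v in nbr[v] for v in vertices)

    def max_distinct(candidates):
        # The largest subset with pairwise distinct neighborhoods is
        # found by keeping one vertex per distinct neighborhood.
        return len({nbr[v] for v in candidates})

    classes = bipartition(vertices, nbr)
    if has_loop or classes is None:
        return max_distinct(vertices)
    # Bipartite and loopless: restrict to one bipartition class.
    left, right = classes
    return max(max_distinct(left), max_distinct(right))

# Example: H = path on 3 vertices (bipartite). In the class {0, 2}
# both vertices have neighborhood {1}, so irr(H) = 1.
print(irr({0, 1, 2}, {frozenset({0, 1}), frozenset({1, 2})}))
```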
210
Linear logic - Wikipedia
===============

System of resource-aware logic. "⅋" redirects here. For the EP, see & (The Moth & The Flame EP).

Linear logic is a substructural logic proposed by French logician Jean-Yves Girard as a refinement of classical and intuitionistic logic, joining the dualities of the former with many of the constructive properties of the latter. Although the logic has also been studied for its own sake, more broadly, ideas from linear logic have been influential in fields such as programming languages, game semantics, and quantum physics (because linear logic can be seen as the logic of quantum information theory), as well as linguistics, particularly because of its emphasis on resource-boundedness, duality, and interaction.

Linear logic lends itself to many different presentations, explanations, and intuitions. Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of the structural rules contraction and weakening are carefully controlled. Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will.
In terms of simple denotational models, linear logic may be seen as refining the interpretation of intuitionistic logic by replacing cartesian (closed) categories by symmetric monoidal (closed) categories, or the interpretation of classical logic by replacing Boolean algebras by C*-algebras.[citation needed]

Connectives, duality, and polarity

Syntax

The language of classical linear logic (CLL) is defined inductively by the BNF notation

A ::= p | p⊥ | A ⊗ A | A ⊕ A | A & A | A ⅋ A | 1 | 0 | ⊤ | ⊥ | !A | ?A

Here p and p⊥ range over logical atoms. For reasons to be explained below, the connectives ⊗, ⅋, 1, and ⊥ are called multiplicatives, the connectives &, ⊕, ⊤, and 0 are called additives, and the connectives ! and ? are called exponentials. We can further employ the following terminology:

| Symbol | Name |
| --- | --- |
| ⊗ | multiplicative conjunction; times; tensor |
| ⊕ | additive disjunction; plus |
| & | additive conjunction; with |
| ⅋ | multiplicative disjunction; par |
| ! | of course; bang |
| ? | why not; quest |

Binary connectives ⊗, ⊕, & and ⅋ are associative and commutative; 1 is the unit for ⊗, 0 is the unit for ⊕, ⊥ is the unit for ⅋, and ⊤ is the unit for &.

Every proposition A in CLL has a dual A⊥, defined as follows:

(p)⊥ = p⊥            (p⊥)⊥ = p
(A ⊗ B)⊥ = A⊥ ⅋ B⊥   (A ⅋ B)⊥ = A⊥ ⊗ B⊥
(A ⊕ B)⊥ = A⊥ & B⊥   (A & B)⊥ = A⊥ ⊕ B⊥
(1)⊥ = ⊥             (⊥)⊥ = 1
(0)⊥ = ⊤             (⊤)⊥ = 0
(!A)⊥ = ?(A⊥)        (?A)⊥ = !(A⊥)

Observe that (-)⊥ is an involution, i.e., A⊥⊥ = A for all propositions. A⊥ is also called the linear negation of A.

Classification of connectives:

| | add | mul | exp |
| --- | --- | --- | --- |
| pos | ⊕ 0 | ⊗ 1 | ! |
| neg | & ⊤ | ⅋ ⊥ | ? |

The columns of this table suggest another way of classifying the connectives of linear logic, termed polarity: the connectives ⊗, ⊕, 1, 0, and ! are called positive, while their duals ⅋, &, ⊥, ⊤, and ? are called negative.

Linear implication is not included in the grammar of connectives, but is definable in CLL using linear negation and multiplicative disjunction, by A ⊸ B := A⊥ ⅋ B. The connective ⊸ is sometimes pronounced "lollipop", owing to its shape.

Sequent calculus presentation

One way of defining linear logic is as a sequent calculus. We use the letters Γ and Δ to range over lists of propositions A₁, ..., Aₙ, also called contexts. A sequent places a context to the left and the right of the turnstile, written Γ ⊢ Δ. Intuitively, the sequent asserts that the conjunction of Γ entails the disjunction of Δ (though we mean the "multiplicative" conjunction and disjunction, as explained below). Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow that more economical presentation here. This is possible because any premises to the left of a turnstile can always be moved to the other side and dualised.

We now give inference rules describing how to build proofs of sequents. First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange:

⊢ Γ, A₁, A₂, Δ
──────────────
⊢ Γ, A₂, A₁, Δ

Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and about the number of copies present.
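To see the inductive structure of linear negation at work, here is a small sketch (our own illustration, not from the article) that computes the dual A⊥ of a formula represented as a nested tuple, defines the derived linear implication, and checks the involution law A⊥⊥ = A:

```python
# Formulas: ('atom', p) | ('natom', p) for a negated atom p⊥ |
# (op, A, B) for op in {'⊗', '⅋', '⊕', '&'} |
# ('unit', u) for u in {'1', '⊥', '0', '⊤'} | ('!', A) | ('?', A).

DUAL_OP = {'⊗': '⅋', '⅋': '⊗', '⊕': '&', '&': '⊕'}
DUAL_UNIT = {'1': '⊥', '⊥': '1', '0': '⊤', '⊤': '0'}
DUAL_EXP = {'!': '?', '?': '!'}

def dual(f):
    """Linear negation, following the defining equations above."""
    tag = f[0]
    if tag == 'atom':
        return ('natom', f[1])   # (p)⊥ = p⊥
    if tag == 'natom':
        return ('atom', f[1])    # (p⊥)⊥ = p
    if tag in DUAL_OP:           # dualise the connective, recurse
        return (DUAL_OP[tag], dual(f[1]), dual(f[2]))
    if tag == 'unit':
        return ('unit', DUAL_UNIT[f[1]])
    if tag in DUAL_EXP:          # (!A)⊥ = ?(A⊥), (?A)⊥ = !(A⊥)
        return (DUAL_EXP[tag], dual(f[1]))
    raise ValueError(f"unknown formula: {f!r}")

def lollipop(a, b):
    """Linear implication A ⊸ B := A⊥ ⅋ B."""
    return ('⅋', dual(a), b)

# Duality is an involution: A⊥⊥ = A.
f = lollipop(('!', ('atom', 'p')), ('⊕', ('atom', 'q'), ('unit', '1')))
assert dual(dual(f)) == f
```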
Next we add initial sequents and cuts:

(initial)            (cut)
─────────            ⊢ Γ, A    ⊢ A⊥, Δ
⊢ A, A⊥              ─────────────────
                     ⊢ Γ, Δ

The cut rule can be seen as a way of composing proofs, and initial sequents serve as the units for composition. In a certain sense these rules are redundant: as we introduce additional rules for building proofs below, we will maintain the property that arbitrary initial sequents can be derived from atomic initial sequents, and that whenever a sequent is provable it can be given a cut-free proof. Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.

Now we explain the connectives by giving logical rules. Typically in sequent calculus one gives both "right-rules" and "left-rules" for each connective, essentially describing two modes of reasoning about propositions involving that connective (e.g., verification and falsification). In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗). So, we should expect a certain "harmony" between the rule(s) for a connective and the rule(s) for its dual.

Multiplicatives

The rules for multiplicative conjunction (⊗) and disjunction (⅋):

⊢ Γ, A    ⊢ Δ, B          ⊢ Γ, A, B
─────────────────         ──────────
⊢ Γ, Δ, A ⊗ B             ⊢ Γ, A ⅋ B

and for their units:

─────          ⊢ Γ
⊢ 1            ──────
               ⊢ Γ, ⊥

Observe that the rules for multiplicative conjunction and disjunction are admissible for plain conjunction and disjunction under a classical interpretation (i.e., they are admissible rules in LK).

Additives

The rules for additive conjunction (&) and disjunction (⊕):

⊢ Γ, A    ⊢ Γ, B          ⊢ Γ, A          ⊢ Γ, B
─────────────────         ──────────      ──────────
⊢ Γ, A & B                ⊢ Γ, A ⊕ B      ⊢ Γ, A ⊕ B

and for their units:

──────
⊢ Γ, ⊤          (no rule for 0)

Observe that the rules for additive conjunction and disjunction are again admissible under a classical interpretation. But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.

Exponentials

The exponentials are used to give controlled access to weakening and contraction.
Specifically, we add structural rules of weakening and contraction for ?'d propositions:

(weakening)          (contraction)
⊢ Γ                  ⊢ Γ, ?A, ?A
─────────            ───────────
⊢ Γ, ?A              ⊢ Γ, ?A

and use the following logical rules, in which ?Γ stands for a list of propositions each prefixed with ?:

⊢ ?Γ, A              ⊢ Γ, A
─────────            ─────────
⊢ ?Γ, !A             ⊢ Γ, ?A

One might observe that the rules for the exponentials follow a different pattern from the rules for the other connectives, resembling the inference rules governing modalities in sequent calculus formalisations of the normal modal logic S4, and that there is no longer such a clear symmetry between the duals ! and ?. This situation is remedied in alternative presentations of CLL (e.g., the LU presentation).

Remarkable formulas

In addition to the De Morgan dualities described above, some important equivalences in linear logic include:

Distributivity:
A ⊗ (B ⊕ C) ≣ (A ⊗ B) ⊕ (A ⊗ C)
(A ⊕ B) ⊗ C ≣ (A ⊗ C) ⊕ (B ⊗ C)
A ⅋ (B & C) ≣ (A ⅋ B) & (A ⅋ C)
(A & B) ⅋ C ≣ (A ⅋ C) & (B ⅋ C)

By definition of A ⊸ B as A⊥ ⅋ B, the last two distributivity laws also give:
A ⊸ (B & C) ≣ (A ⊸ B) & (A ⊸ C)
(A ⊕ B) ⊸ C ≣ (A ⊸ C) & (B ⊸ C)
(Here A ≣ B is (A ⊸ B) & (B ⊸ A).)

Exponential isomorphism:
!(A & B) ≣ !A ⊗ !B
?(A ⊕ B) ≣ ?A ⅋ ?B

Linear distributions: A map that is not an isomorphism yet plays a crucial role in linear logic is
(A ⊗ (B ⅋ C)) ⊸ ((A ⊗ B) ⅋ C).
Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated in Cockett & Seely (1997) and called a "weak distribution". In subsequent work it was renamed to "linear distribution" to reflect the fundamental connection to linear logic.

Other implications: The following distributivity formulas are not in general equivalences, only implications:
!A ⊗ !B ⊸ !(A ⊗ B)
!A ⊕ !B ⊸ !(A ⊕ B)
?(A ⅋ B) ⊸ ?A ⅋ ?B
?(A & B) ⊸ ?A & ?B
(A & B) ⊗ C ⊸ (A ⊗ C) & (B ⊗ C)
(A & B) ⊕ C ⊸ (A ⊕ C) & (B ⊕ C)
(A ⅋ C) ⊕ (B ⅋ C) ⊸ (A ⊕ B) ⅋ C
(A & C) ⊕ (B & C) ⊸ (A ⊕ B) & C

Encoding classical/intuitionistic logic in linear logic

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations). The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic. Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic such that the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.

The resource interpretation

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, so providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classic example of the vending machine can be used to illustrate this idea. Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1.
Remarkable formulas

In addition to the De Morgan dualities described above, some important equivalences in linear logic include:

Distributivity:
A ⊗ (B ⊕ C) ≣ (A ⊗ B) ⊕ (A ⊗ C)
(A ⊕ B) ⊗ C ≣ (A ⊗ C) ⊕ (B ⊗ C)
A ⅋ (B & C) ≣ (A ⅋ B) & (A ⅋ C)
(A & B) ⅋ C ≣ (A ⅋ C) & (B ⅋ C)

By definition of A ⊸ B as A⊥ ⅋ B, the last two distributivity laws also give:
A ⊸ (B & C) ≣ (A ⊸ B) & (A ⊸ C)
(A ⊕ B) ⊸ C ≣ (A ⊸ C) & (B ⊸ C)
(Here A ≣ B is (A ⊸ B) & (B ⊸ A).)

Exponential isomorphism:
!(A & B) ≣ !A ⊗ !B
?(A ⊕ B) ≣ ?A ⅋ ?B

Linear distributions: A map that is not an isomorphism yet plays a crucial role in linear logic is
(A ⊗ (B ⅋ C)) ⊸ ((A ⊗ B) ⅋ C).
Linear distributions are fundamental in the proof theory of linear logic. The consequences of this map were first investigated in Cockett & Seely (1997), where it was called a "weak distribution"; in subsequent work it was renamed "linear distribution" to reflect the fundamental connection to linear logic.

Other implications: The following distributivity formulas are not in general equivalences, only implications:
!A ⊗ !B ⊸ !(A ⊗ B)
!A ⊕ !B ⊸ !(A ⊕ B)
?(A ⅋ B) ⊸ ?A ⅋ ?B
?(A & B) ⊸ ?A & ?B
(A & B) ⊗ C ⊸ (A ⊗ C) & (B ⊗ C)
(A & B) ⊕ C ⊸ (A ⊕ C) & (B ⊕ C)
(A ⅋ C) ⊕ (B ⅋ C) ⊸ (A ⊕ B) ⅋ C
(A & C) ⊕ (B & C) ⊸ (A ⊕ B) & C

Encoding classical/intuitionistic logic in linear logic

Both intuitionistic and classical implication can be recovered from linear implication by inserting exponentials: intuitionistic implication is encoded as !A ⊸ B, while classical implication can be encoded as !?A ⊸ ?B or !A ⊸ ?!B (or a variety of alternative possible translations). The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic. Formally, there exists a translation of formulas of intuitionistic logic to formulas of linear logic that guarantees the original formula is provable in intuitionistic logic if and only if the translated formula is provable in linear logic. Using the Gödel–Gentzen negative translation, we can thus embed classical first-order logic into linear first-order logic.

The resource interpretation

Lafont (1993) first showed how intuitionistic linear logic can be explained as a logic of resources, providing the logical language with access to formalisms that can be used for reasoning about resources within the logic itself, rather than, as in classical logic, by means of non-logical predicates and relations. Tony Hoare (1985)'s classic example of the vending machine can be used to illustrate this idea.

Suppose we represent having a candy bar by the atomic proposition candy, and having a dollar by $1. To state the fact that a dollar will buy you one candy bar, we might write the implication $1 ⇒ candy. But in ordinary (classical or intuitionistic) logic, from A and A ⇒ B one can conclude A ∧ B. So, ordinary logic leads us to believe that we can buy the candy bar and keep our dollar! Of course, we can avoid this problem by using more sophisticated encodings, although typically such encodings suffer from the frame problem. However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule. Rather than $1 ⇒ candy, we express the property of the vending machine as a linear implication $1 ⊸ candy. From $1 and this fact, we can conclude candy, but not $1 ⊗ candy. In general, we can use the linear logic proposition A ⊸ B to express the validity of transforming resource A into resource B.

Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives. (The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Multiplicative conjunction (A ⊗ B) denotes simultaneous occurrence of resources, to be used as the consumer directs. For example, if you buy a stick of gum and a bottle of soft drink, then you are requesting gum ⊗ drink. The constant 1 denotes the absence of any resource, and so functions as the unit of ⊗.

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls. If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products. Thus we write $1 ⊸ (candy & chips & drink). We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together. However, from ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)) ⊗ ($1 ⊸ (candy & chips & drink)), we can correctly deduce $3 ⊸ (candy ⊗ chips ⊗ drink), where $3 := $1 ⊗ $1 ⊗ $1.

The unit ⊤ of additive conjunction can be seen as a wastebasket for unneeded resources. For example, we can write $3 ⊸ (candy ⊗ ⊤) to express that with three dollars you can get a candy bar and some other stuff, without being more specific (for example, chips and a drink, or $2, or $1 and chips, etc.).

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls. For example, suppose the vending machine permits gambling: insert a dollar and the machine may dispense a candy bar, a packet of chips, or a soft drink. We can express this situation as $1 ⊸ (candy ⊕ chips ⊕ drink). The constant 0 represents a product that cannot be made, and thus serves as the unit of ⊕ (a machine that might produce A or 0 is as good as a machine that always produces A, because it will never succeed in producing a 0). So unlike above, we cannot deduce $3 ⊸ (candy ⊗ chips ⊗ drink) from this.
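To make the resource reading concrete, here is a minimal, hypothetical Python sketch (the class and function names are invented for illustration, not from any linear logic library): a context is a multiset of resources, and applying $1 ⊸ candy consumes the dollar, so the spurious "keep your dollar and get candy" inference is impossible by construction.

from collections import Counter

class LinearContext:
    """A multiset of resources. Using a resource consumes it (no contraction),
    and resources are never silently discarded or invented (no weakening)."""
    def __init__(self, *resources):
        self.resources = Counter(resources)

    def consume(self, resource):
        if self.resources[resource] == 0:
            raise ValueError(f"resource {resource!r} not available")
        self.resources[resource] -= 1

    def produce(self, resource):
        self.resources[resource] += 1

def buy_candy(ctx):
    # Models the linear implication $1 -o candy: the dollar is consumed, not kept.
    ctx.consume("$1")
    ctx.produce("candy")

ctx = LinearContext("$1")
buy_candy(ctx)
print(dict(ctx.resources))   # {'$1': 0, 'candy': 1}: no way to conclude $1 (x) candy
try:
    buy_candy(ctx)           # a second purchase fails: the dollar was already used
except ValueError as e:
    print(e)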
Other proof systems

Proof nets

Main article: Proof net

Introduced by Jean-Yves Girard, proof nets were created to avoid the "bureaucracy" of the sequent calculus, that is, all the things that make two derivations different from a logical point of view but not from a "moral" point of view. For instance, these two proofs are "morally" identical:

from ⊢ A, B, C, D infer ⊢ A ⅋ B, C, D (⅋), then ⊢ A ⅋ B, C ⅋ D (⅋)

from ⊢ A, B, C, D infer ⊢ A, B, C ⅋ D (⅋), then ⊢ A ⅋ B, C ⅋ D (⅋)

The goal of proof nets is to make such derivations identical by giving a graphical representation of them.

Semantics

Algebraic semantics

See also: Quantale

Decidability/complexity of entailment

The entailment relation in full CLL is undecidable. When considering fragments of CLL, the decision problem has varying complexity:

Multiplicative linear logic (MLL): only the multiplicative connectives. MLL entailment is NP-complete, even restricting to Horn clauses in the purely implicative fragment, or to atom-free formulas.

Multiplicative-additive linear logic (MALL): only multiplicatives and additives (i.e., exponential-free). MALL entailment is PSPACE-complete.

Multiplicative-exponential linear logic (MELL): only multiplicatives and exponentials. By reduction from the reachability problem for Petri nets, MELL entailment must be at least EXPSPACE-hard, although decidability itself has had the status of a longstanding open problem. In 2015, a proof of decidability was published in the journal Theoretical Computer Science, but it was later shown to be erroneous.

Affine linear logic (that is, linear logic with weakening, an extension rather than a fragment) was shown to be decidable in 1995.

Variants

Many variations of linear logic arise by further tinkering with the structural rules:

Affine logic, which forbids contraction but allows global weakening (a decidable extension).

Strict logic or relevant logic, which forbids weakening but allows global contraction.

Non-commutative logic or ordered logic, which removes the rule of exchange, in addition to barring weakening and contraction. In ordered logic, linear implication divides further into left-implication and right-implication.

Different intuitionistic variants of linear logic have been considered. When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication is treated as a primitive connective. In FILL (Full Intuitionistic Linear Logic) the connectives ⅋, ⊥, and ? are present, linear implication is a primitive connective and, similarly to what happens in intuitionistic logic, all connectives (except linear negation) are independent. There are also first- and higher-order extensions of linear logic, whose formal development is somewhat standard (see first-order logic and higher-order logic).

See also

Chu spaces, Computability logic, Game semantics, Geometry of interaction, Intuitionistic logic, Linear logic programming, Linear type system (a substructural type system), Ludics, Proof nets, Uniqueness type

Notes

1. Girard 1987. 2. Baez & Stay 2008. 3. De Paiva, Van Genabith & Ritter 1999. 4. Girard 1987, p. 22, Def. 1.15. 5. Girard 1987, pp. 25–26, Def. 1.21. 6. Cockett & Seely 1997. 7. Di Cosmo 1996. 8. For this result and discussion of some of the fragments below, see: Lincoln et al. (1992). 9. Kanovich 1992. 10. Lincoln & Winkler 1994. 11. Gunter & Gehlot 1989. 12. Bimbó 2015. 13. Straßburger 2019. 14. Kopylov 1995.
References

Baez, John; Stay, Mike (2008). Bob Coecke (ed.). "Physics, Topology, Logic and Computation: A Rosetta Stone" (PDF). New Structures of Physics.

Bimbó, Katalin (13 September 2015). "The decidability of the intensional fragment of classical linear logic". Theoretical Computer Science. 597: 1–17. doi:10.1016/j.tcs.2015.06.019. ISSN 0304-3975.

Cockett, J. Robin; Seely, Robert (1997). "Weakly distributive categories". Journal of Pure and Applied Algebra. 114 (2): 133–173. doi:10.1016/0022-4049(95)00160-3.

Di Cosmo, Roberto (1996). "2". The Linear Logic Primer (course notes).

De Paiva, V.; Van Genabith, J.; Ritter, E. (1999). "Dagstuhl Seminar 99341 on Linear Logic and Applications" (PDF). Schloss Dagstuhl – Leibniz-Zentrum für Informatik: 1–21. doi:10.4230/DagSemRep.248.

Girard, Jean-Yves (1987). "Linear logic" (PDF). Theoretical Computer Science. 50 (1): 1–102. doi:10.1016/0304-3975(87)90045-4. hdl:10338.dmlcz/120513.

Gunter, C. A.; Gehlot, V. (1989). Nets as Tensor Theories (PDF) (technical report). University of Pennsylvania. MS-CIS-89-68.

Kanovich, Max I. (22 June 1992). "Horn programming in linear logic is NP-complete". Seventh Annual IEEE Symposium on Logic in Computer Science (LICS '92), Proceedings. pp. 200–210. doi:10.1109/LICS.1992.185533. ISBN 0-8186-2735-2.

Kopylov, A. P. (1 June 1995). "Decidability of linear affine logic". Tenth Annual IEEE Symposium on Logic in Computer Science (LICS '95), Proceedings. pp. 496–504. CiteSeerX 10.1.1.23.9226. doi:10.1109/LICS.1995.523283. ISBN 0-8186-7050-9.

Lincoln, Patrick; Mitchell, John; Scedrov, Andre; Shankar, Natarajan (1992). "Decision Problems for Propositional Linear Logic". Annals of Pure and Applied Logic. 56 (1–3): 239–311. doi:10.1016/0168-0072(92)90075-B.

Lincoln, Patrick; Winkler, Timothy (1994). "Constant-only multiplicative linear logic is NP-complete". Theoretical Computer Science. 135: 155–169. doi:10.1016/0304-3975(94)00108-1.

Straßburger, Lutz (10 May 2019). "On the decision problem for MELL". Theoretical Computer Science. 768: 91–98. doi:10.1016/j.tcs.2019.02.022. ISSN 0304-3975.

Further reading

Braüner, Torben (December 1996). "Introduction to Linear Logic" (PDF). BRICS Lecture Series. Basic Research in Computer Science (BRICS). Retrieved 20 May 2025.

Di Cosmo, Roberto; Danos, Vincent (1992). The Linear Logic Primer.

Di Cosmo, Roberto; Miller, Dale (16 September 2023). "Linear Logic". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy (Fall 2023 ed.).

Girard, Jean-Yves; Lafont, Yves; Taylor, Paul (1989). Proofs and Types. Cambridge University Press. Archived from the original on 4 July 2016.

Hoare, C. A. R. (1985). Communicating Sequential Processes. Prentice-Hall International. ISBN 0-13-153271-5.

Lafont, Yves (1993). "Introduction to Linear Logic". TEMPUS Summer School on Algebraic and Categorical Methods in Computer Science (lecture notes). Brno, Czech Republic.

Lincoln, Patrick. "Introduction to Linear Logic" (PostScript).

Miller, Dale (2004). "Overview of Linear Logic Programming". In Ehrhard; Girard; Ruet; Scott (eds.). Linear Logic in Computer Science (PDF). London Mathematical Society Lecture Notes. Vol. 316. Cambridge University Press.

Troelstra, A. S. (1992). Lectures on Linear Logic. CSLI Lecture Notes. Vol. 29. Stanford: CSLI Publications. ISBN 978-0937073773.

Troelstra, A. S.; Schwichtenberg, H. (2000). Basic Proof Theory. Cambridge Tracts in Theoretical Computer Science. Vol. 43 (2nd ed.).
Cambridge: Cambridge University Press. pp. XII, 417. doi:10.1017/CBO9781139168717. ISBN 9780521779111. OCLC 951181823.

Wadler, Philip. "A Taste of Linear Logic".

External links

Media related to Linear logic at Wikimedia Commons

A Linear Logic Prover (llprover), available for use online, from: Naoyuki Tamura / Dept of CS / Kobe University / Japan (archived 2016-04-04 at the Wayback Machine)

Click And Collect, an interactive linear logic prover, available online
211
Hatshepsut

Introduction

Hatshepsut (active in the 15th century bc) was one of only a few female pharaohs, or kings, of ancient Egypt. She ruled with her young stepson about 1479–73 bc and then alone about 1473–58 bc. She attained enormous power for a woman, adopting the full titles and dress of a pharaoh.

Did You Know? Hatshepsut ruled, either together or alone, for 21 years. This makes her the longest-serving female pharaoh in ancient Egypt.

Early Life

Hatshepsut was the elder daughter of the 18th-dynasty pharaoh Thutmose I and Queen Ahmose. Hatshepsut was married to her half brother Thutmose II, who inherited his father's throne about 1492 bc. She was his chief queen (at the time rulers had multiple wives) and bore one daughter but no son. When Hatshepsut's husband died about 1479 bc, the throne passed to his son Thutmose III, born to Isis, a lesser queen. Since Thutmose III was an infant, Hatshepsut ruled in his name.

Reign

By the seventh year of Thutmose III's reign, Hatshepsut had herself crowned pharaoh. The two were thus corulers of Egypt, but Hatshepsut held more power. Hatshepsut never explained why she took the throne or how she persuaded Egypt's elite to accept her new position. She may have been successful, however, because she had in place a group of loyal officials. She handpicked many of them, and they controlled all the key positions in her government.

Did You Know? Hatshepsut is often portrayed wearing the traditional clothes of a pharaoh and a false beard (which male pharaohs often wore). Ancient Egyptian art showed things not as they were but as people thought they should be. In portraying herself as a traditional pharaoh, Hatshepsut made sure that this is what she would become.

Traditionally, Egyptian pharaohs defended their land against the enemies who lurked at Egypt's borders. Hatshepsut's reign was essentially a peaceful one, and her foreign policy was based on trade rather than war. Scenes on the walls of her Dayr al-Bahri temple, in western Thebes, capture details of her reign. Some scenes suggest that she participated in a short, successful military campaign in nearby Nubia. Other scenes show a trading expedition in which gold, ebony, animal skins, baboons, and other items were brought back to Egypt. Hatshepsut also undertook an extensive building program, which included the temples of the god Amon-Re in Thebes and her Dayr al-Bahri temple.

Aftermath

Toward the end of her reign Hatshepsut allowed Thutmose to play an increasingly prominent role in state affairs. Following her death, Thutmose III ruled Egypt for almost 33 years. At the end of his reign an attempt was made to remove all traces of Hatshepsut's rule. Her statues were torn down, her monuments were damaged, and her name was removed from the official king list. Early scholars interpreted this as an act of vengeance. However, it seems that Thutmose was making sure that the succession would run from Thutmose I through Thutmose II to Thutmose III without female interruption.
Did You Know? Scientists didn't positively identify Hatshepsut's mummy until the early 21st century. Their study of the mummy revealed that Hatshepsut was overweight and had cancer, diabetes, and bad teeth.

Explore Further: Find out more in the following articles: ancient Egypt, Thutmose III, Thebes, Nubia.
212
Published Time: 2000-01-01

(PDF) Demagnetizing factors for rectangular prisms

Alvaro Ulcué Sánchez, 2000, IEEE Transactions on Magnetics. 12 pages, 1,204 views. doi:10.1109/TMAG.2005.847634

Abstract: For rectangular prisms of dimensions 2a × 2b × 2c with constant material susceptibility χ, we have calculated and tabulated the fluxmetric and magnetometric demagnetizing factors N_f and N_m, defined along the 2c dimension, as functions of c/(ab)^(1/2), a/b (= 1 to 256), and χ (= 0 to 10^9). We introduce an interpolation technique for obtaining N_f and N_m with arbitrary values of c/(ab)^(1/2), a/b, and χ.

Figures (10) — captions recovered from the scanned figure list:

Fig. 1. (a) The studied prism with coordinates and applied field. (b) Element distribution on the side surface for a prism with a/b = 2 and c/(ab)^(1/2) = 2, calculated from (3) and (5) using n_z = 256. (The accompanying caption text explains that for χ → ∞ the surface pole density diverges as a −1/3 power of the distance to the edge, so the surface elements are distributed accordingly, with a roughly uniform pole-density increment implemented near the edges.)
(Continuation of the Fig. 1 caption: to overcome the actually infinite pole density σ at the edges, the lengths a, b, and c are extended by small amounts δa, δb, δc, so that σ(a), σ(b), and σ(c) calculated from (3)–(5) remain large but finite, and the division units Δσ are defined from them; proper values of δa/a, δb/b, δc/c are chosen for different χ.)

Tables I and II. N_f (Table I) and N_m (Table II) of a rectangular prism 2a × 2b × 2c along the 2c dimension as functions of susceptibility χ and aspect ratios a/b and c/√(ab). Estimated errors: χ = 0, exact; six significant digits, <0.01%; five significant digits, <0.1%; four significant digits, <0.5% to <3.1% depending on the marker.

Fig. 2. N_m along the 2c dimension as a function of c/(ab)^(1/2) at a/b = 1, 2, 4, 8, 16, 32, 64, 128, and 256, for χ = 0, 1.5, 9, 99, 999, and a large-χ value (panels (a)–(f)). Arrows point in the direction of increasing a/b.

Fig. 3. N_f along the 2c dimension, same parameters and panels as Fig. 2.

Fig. 4. Comparison of N_m and N_f versus c/(ab)^(1/2) curves at large χ with the demagnetizing factor N of ellipsoids of semi-axes a, b, and c, for a/b = 1, 32, and 128 in (a), (b), and (c), respectively.

Fig. 5. Spline N_m versus c/(ab)^(1/2) curves for a/b = 1, 2, 4, 8, and 16 at several χ, connecting data points obtained from Table I; spline N_m versus a/b curves for c/(ab)^(1/2) = 7.5 at several χ. Arrows point in the direction of increasing a/b.

Fig. 6. Interpolated spline N_m versus χ + 1 curves for c/(ab)^(1/2) = 7.5 and a/b = 1.5 (a) and 6 (b) (solid lines); the dashed and dotted lines show accurate nearby curves drawn directly from the tabulated data.

Table III. Determination of N_m and χ from the measured sample dimensions and χ_ex; directly calculated N_m written within parentheses.
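The interpolation technique itself is not reproduced on this page. The following is a minimal Python sketch (not the authors' code) of how one might interpolate such tables at a fixed χ with SciPy; the grid and the N_m values below are invented placeholders, not the published data.

import numpy as np
from scipy.interpolate import RectBivariateSpline

# 2-D spline interpolation of a tabulated N_m over log10(a/b) and
# log10(c/sqrt(ab)) at one fixed susceptibility chi.
log_ab = np.log10([1.0, 2.0, 4.0, 8.0])        # a/b grid (placeholder)
log_c = np.log10([0.1, 1.0, 10.0, 100.0])      # c/sqrt(ab) grid (placeholder)
Nm_table = np.array([                          # placeholder values in (0, 1)
    [0.82, 0.33, 0.060, 0.010],
    [0.80, 0.31, 0.055, 0.009],
    [0.78, 0.29, 0.050, 0.008],
    [0.75, 0.27, 0.045, 0.007],
])

spline = RectBivariateSpline(log_ab, log_c, Nm_table)

def interpolate_Nm(a_over_b, c_over_sqrt_ab):
    """Interpolated N_m at arbitrary aspect ratios (fixed chi)."""
    return spline(np.log10(a_over_b), np.log10(c_over_sqrt_ab), grid=False).item()

print(interpolate_Nm(3.0, 5.0))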
Cited by (14):

A d.c. magnetic metamaterial (Garry Perkins, Nature Materials, 2008)
Effect of Magnetostatic Dipoles Interaction on the Initial Susceptibility of a Dilute Ferrofluid in One Dimension (Abdalla Obeidat, Journal of Superconductivity and Novel Magnetism, 2011)
Induced anisotropy in FeCo-based nanocomposites: Early transition metal content dependence (Vladimir Keylin, Journal of Applied Physics, 2014)
Experimental determination of the Weiss temperature of Mn12-ac and Mn12-ac-MeOH (George Christou, Physical Review B, 2010)
Microtransformer on silicon with CoFeB magnetic core for high-frequency signal applications (Dragan Dinulovic, AIP Advances)
Tailoring the magnetic anisotropy of thin film permalloy microstrips by combined shape and induced anisotropies (David Navas, The European Physical Journal B, 2013)
Magnetomechanical response of bilayered magnetic elastomers (Elshad Allahyarov, Smart Materials and Structures, 2014)
Thin-Film Magnetic Inductor for Integrated Power Management (Leong Yew Wing, 2017)
Vertex dependent dynamic response of a connected Kagome artificial spin ice (Ali Frotanpour, Applied Physics Letters, 2021)
Amorphous FeCoCrSiB Ribbons with Tailored Anisotropy for the Development of Magnetic Elements for High Frequency Applications (Aitor Larrañaga, Materials)
Enhancing the magnetoelectric response of Metglas/polyvinylidene fluoride laminates by exploiting the flux concentration effect (S. G. Lu, Applied Physics Letters, 2009)
Numerical study of the effect of core geometry on the performance of a magnetostrictive transducer (Danial Gandomzadeh, Journal of Magnetism and Magnetic Materials, 2020)
Magnetic Properties of Manganese-Zinc Soft Ferrite Ceramic for High Frequency Applications (Catalin-Daniel Constantinescu, Materials, 2019)
Microscopic aspects of magnetic lattice demagnetizing factors (Michel Gingras, Physical Review Materials, 2017)

Related papers:

The demagnetizing field of a nonuniform rectangular prism (Kaspar Nielsen, Journal of Applied Physics, 2010)
Demagnetizing effects in stacked rectangular prisms (Kaspar Nielsen, Journal of Physics D: Applied Physics, 2011)
Fluxmetric and magnetometric demagnetizing factors for cylinders (Alvaro Ulcué Sánchez, Journal of Magnetism and Magnetic Materials, 2006)
Demagnetization factors for cylindrical shells and related shapes (Marco Beleggia, Journal of Magnetism and Magnetic Materials, 2009)
Demagnetizing factors for square bars (alvaro sanchez, IEEE Transactions on Magnetics, 2004)
Ampere's circuital 3-D model for non-cuboidal magnets (John Compter, IEEE Transactions on Magnetics, 2010)
The external demagnetizing factor and the static characteristic loop (György Kovács, J. Takacs, Physica B: Condensed Matter, 2012)
On the computation of the demagnetization tensor for uniformly magnetized particles of arbitrary shape.
Part I: Analytical approach (Marco Beleggia, Journal of Magnetism and Magnetic Materials, 2004)
Microscopic aspects of magnetic lattice demagnetizing factors (Michel Gingras, Physical Review Materials, 2017)
Computation of geophysical magnetic data for a buried 3-D hexahedral prism using the Gauss–Legendre quadrature method (Hassan Mohamed, Near Surface Geophysics, 2020)

Related topics: Engineering

Related papers (with abstracts):

Transverse demagnetizing factors of long rectangular bars: I. Analytical expressions for extreme values of susceptibility (Alvaro Ulcué Sánchez, Journal of Applied Physics, 2002). The demagnetizing problem should be studied numerically in nonellipsoids with nonzero susceptibility χ, except for a few limiting cases where analytical treatment turns out to be practical. In this work, for an infinitely long bar with rectangular cross section 2a × 2b and χ = ∞ or −1 in a uniform transverse applied field H_a along dimension a or b, analytical expressions for the surface pole or current distributions and the fluxmetric and magnetometric demagnetizing factors N_{f,m} are derived using a technique of conformal transformation. From the new as well as the existing formulas, N_{f,m}(χ = ∞, 0, −1) are evaluated, plotted, and tabulated as functions of a/b.

B. The Demagnetisation Curve and its Parameters (FG SERVICIOS S.A CONSTRUCCIÓN). In hard magnetic materials the second quadrant of hysteresis is most important and is called the demagnetization curve. Demagnetization curves, as well as the other quadrants of hysteresis, can be drawn both in the J(H) picture and in the B(H) description, which follows from eq. (A.8). This is also the case in Fig. B1, which supplies those basic parameters of the demagnetization curve that are mainly used in the technical literature about permanent magnets. Fig. B1: Demagnetization curve (second quadrant) as well as the first and parts of the third quadrant of magnetic hysteresis. The first quadrant is located at the top right of the coordinate cross, the second quadrant at the top left, and the third quadrant at the bottom left. The demagnetization curve, i.e. the second quadrant, defines the parameters B_r, bH_c, jH_c, μ_r and (BH)_max. The most important parameters of a demagnetization curve are: B_r = remanence induction [T]; jH_c = coercivity of J [A/m]; bH_c = coercivity of B [A/m]; μ_r = recoil permeability [no units]; (BH)_max = maximum energy product [kJ/m³]. Now let us describe the behavior of demagnetization curves in more detail. As we examine here only one spatial direction, a scalar description is used.

Exact Analytical Demagnetizing Factors for Long Hollow Cylinders in Transverse Field (Alvaro Ulcué Sánchez, IEEE Magnetics Letters, 2000). Analytical expressions are derived for the magnetometric N_m and fluxmetric N_f demagnetizing factors of infinitely long, hollow cylinders with uniform susceptibility in a transverse uniform applied field. Whereas N_m depends on both permeability μ and the ratio of the internal and external radii of the cylinder, N_f does not depend on μ. The magnetic induction and magnetic field of materials with conjugate relative permeabilities μ and 1/μ are related.
The magnetotelluric phase over 2-D structures (Laszlo Szarka, Geophysical Journal International, 1992). In 1985 Fischer gave a simple explanation for the behaviour of the E-polarization magnetotelluric phase over a 2-D dike. This phase behaviour is a direct consequence of the rearrangement of the current flow in a 2-D structure as opposed to the flow in a simple uniform half-space. In spite of its usefulness, it is shown that the simple phase rule is only qualitatively correct. For B-polarization it seemed at first that the simple phase rule would not remain valid. However, it is found that if the very different current distributions for E- and B-polarization are correctly interpreted, the simple phase rule retains its validity qualitatively. Arguments are given for the very general validity of this rule at the surface of any 2-D structure, even in the presence of complicated topography. The rule is best formulated in terms of two statements: (a) the magnetotelluric phase is a continuous function over any structure, even across outcropping resistivity contrasts; and (b) where the current is drawn to greater depths the phase will rise, and where the current is concentrated near the surface the phase will drop.

Magnet design for PRISM-FFAG using anisotropic interpole (Y. Arimoto, Nuclear Physics B - Proceedings Supplements, 2005). A novel type of magnet equipped with magnetically anisotropic interpoles is designed for the PRISM-FFAG. The magnetic field is calculated for the magnet and used for particle tracking simulation. The performance of the magnet is compared to a normal type of magnet.

Demagnetizing factors for two parallel ferromagnetic plates and their applications to magnetoelectric laminated sensors (Edward Liverts, Journal of Applied Physics, 2011). An analytical expression is derived to approximate the magnetometric demagnetizing factors for two parallel ferromagnetic plates having the shape of rectangular prisms. The magnetometric demagnetizing factors relate the average magnetic fields within the plates' volumes to an external magnetic field. Knowing this relationship is essential for describing the response of magnetoelectric sensors comprising two parallel magnetostrictive plates. It is shown that two separate ferromagnetic layers provide better field sensitivity than a single layer with a doubled thickness. The obtained results are in good agreement with numerical calculations and experimental data.

Simulation and parameterization by the finite element method of a C-shaped electromagnet for application in the characterization of magnetic properties of materials (Alvaro Andrés Velásquez Torres, 2012). This article presents the simulation, parameterization and optimization of an electromagnet with the C-shaped configuration, intended for the study of magnetic properties of materials. The electromagnet studied consists of a C-shaped yoke, which provides self-shielding for minimizing losses of magnetic flux density, two poles of high magnetic permeability, and power coils wound on the poles.
The main physical variable studied was the static magnetic flux density in a column within the gap between the poles, with 4 cm² of square cross section and a length of 5 cm, seeking a suitable set of parameters that achieve a uniform magnetic flux density of 1×10⁴ Gauss or above in the column, when the system operates at room temperature with a current consumption not exceeding 5 A. By means of a magnetostatic analysis by the finite element method, the magnetic flux density and the distribution of the magnetic field lines were visualized and quantified. From the results obtained by simulating an initial configuration of the electromagnet, a structural optimization of the geometry of the adjustable caps for the ends of the poles was performed. The effect of the magnetic permeability of the soft magnetic materials used in the pole system, such as low-carbon steel (0.08% C), Permalloy (45% Ni, 54.7% Fe) and Mumetal (21.2% Fe, 78.5% Ni), was also evaluated. The intensity and uniformity of the magnetic field in the gap showed a strong dependence on the factors described above. The magnetic field achieved in the column was uniform and its magnitude ranged between 1.5×10⁴ Gauss and 1.9×10⁴ Gauss according to the pole material used, with the possibility of increasing the magnetic field by choosing a suitable cap geometry, introducing a cooling system for the coils, and adjusting the spacing between the poles. This makes the device a versatile and scalable tool to generate the magnetic field necessary to perform magnetic characterization of materials by techniques such as vibrating sample magnetometry (VSM), Hall-effect and Kerr-effect magnetometry, among others. Additionally, a CAD design of the modules of the electromagnet is presented in order to facilitate the construction and scaling of the physical device.

Transformation of nonlinear problems into linear ones applied to the magnetic field of a two-dimensional prism (João Victor Serra Silva, GEOPHYSICS, 1989). I present a magnetic interpretation method which transforms into a linear problem the nonlinear problem of obtaining the geometric and position parameters of a two-dimensional vertical, infinite prism. The magnetization, the only linear parameter, becomes nonlinear after the transformation. By assuming a few discrete values over a prescribed interval for the magnetization, I obtain several solutions for the geometric and position parameters. By storing only the extreme solutions, bounds for each parameter are produced. The method was applied to synthetic anomalies due to isolated and interfering sources, for which robust alternatives performed better than the least-squares method. The correlation between the magnetization and the prism width is the most important factor controlling ambiguity of parameters. The horizontal position is the least affected parameter, followed by the depth to the top of the prism. Application to a real anomaly confirmed the results from synthetic data, exc...

Computation of Demagnetization Tensors by Utilizing Fourier Properties (Fiaz Khan, IEEE Transactions on Magnetics, 2014). We describe a method for the computation of the discrete demagnetization tensor on regular cuboid grids. Assuming homogeneously magnetized cells, the tensor components can be calculated exactly by known analytical formulas. These integral-based expressions can be expensive due to their nested nature and difficult to code.
The novelty of this paper is that parts of the tensor computation are moved to Fourier space, which simplifies the implementation. The main idea is that some nested sums, required for the computation of the tensor in real space, are replaced by simple multiplications with real factors in Fourier space. For regular grids, the demagnetization field is usually computed in Fourier space by application of the convolution theorem; thus, computing the tensor in Fourier space in the first place does not introduce any drawbacks. (A schematic illustration of such an FFT-based convolution follows at the end of this list.)

Experimental determination of the magnetization dependent part of the demagnetizing field in hard magnetic materials (Alexey Dobrynin, Applied Physics Letters, 2010). A method for extracting the magnetization dependent part of the demagnetizing field from minor hysteresis loops is described. It applies to hard magnetic materials with irreversible magnetization switching. The method's validity is tested on the simulated magnetization curves of an assembly of hard magnetic grains, as well as on a thin NdFeB film with out-of-plane magnetization. Effective demagnetization factors are extracted from the analysis. These factors are smaller than the usually applied sample-shape-dependent demagnetizing factors.
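As promised above, here is a minimal 1-D Python illustration of computing a field from a magnetization pattern via the convolution theorem. This is invented for illustration and is not taken from the cited paper; in particular the kernel below is a placeholder, not a physical demagnetization tensor.

import numpy as np

n = 64
m = np.zeros(n)
m[24:40] = 1.0                           # magnetization: a uniformly magnetized block
x = np.arange(-n // 2, n // 2).astype(float)
kernel = -1.0 / (x ** 2 + 1.0)           # placeholder long-range kernel (illustrative only)
kernel = np.roll(kernel, n // 2)         # put the kernel origin at index 0 for circular convolution

# Convolution theorem: a real-space convolution becomes an elementwise
# multiplication in Fourier space.
H = np.fft.ifft(np.fft.fft(m) * np.fft.fft(kernel)).real
print(H[:5])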
References (16):

R. M. Bozorth, Ferromagnetism. New York: Wiley, 2003, pp. 843–861.
D.-X. Chen, Physical Bases of Magnetic Measurements. Beijing, China: China Mechanical Industry, 1985, pp. 175–206.
D.-X. Chen, J. A. Brug, and R. B. Goldfarb, "Demagnetizing factors for cylinders," IEEE Trans. Magn., vol. 27, no. 4, pp. 3601–3619, Jul. 1991.
J. A. Osborn, "Demagnetizing factors of the general ellipsoid," Phys. Rev., vol. 67, pp. 351–357, Jun. 1945.
P. Rhodes and G. Rowlands, "Demagnetising energies of uniformly magnetized rectangular blocks," in Proc. Leeds Phil. Lit. Soc., vol. 6, 1954, pp. 191–210.
R. I. Joseph, "Ballistic demagnetizing factor in uniformly magnetized rectangular prisms," J. Appl. Phys., vol. 38, pp. 2405–2406, 1967.
D.-X. Chen, E. Pardo, and A. Sanchez, "Demagnetizing factors of rectangular prisms and ellipsoids," IEEE Trans. Magn., vol. 38, no. 4, pp. 1742–1752, Jul. 2002.
W. F. Brown, Jr., Magnetostatic Principles in Ferromagnetism. Amsterdam, The Netherlands: North-Holland, 1962, pp. 187–192.
D.-X. Chen, C. Prados, E. Pardo, A. Sanchez, and A. Hernando, "Transverse demagnetizing factors of long rectangular bars: I. Analytical expressions for extreme values of susceptibility," J. Appl. Phys., vol. 91, pp. 5254–5259, Apr. 2002.
E. Pardo, A. Sanchez, and D.-X. Chen, "Transverse demagnetizing factors of long rectangular bars: II. Numerical calculations for arbitrary susceptibility," J. Appl. Phys., vol. 91, pp. 5260–5267, Apr. 2002.
E. Pardo, D.-X. Chen, and A. Sanchez, "Demagnetizing factors of square bars," IEEE Trans. Magn., vol. 40, no. 3, pp. 1491–1499, May 2004.
"Demagnetizing factors for completely shielded rectangular prisms," J. Appl. Phys., vol. 96, pp. 5365–5369, Nov. 2004.
T. L. Templeton and A. S. Arrott, "Magnetostatics of rods and bars of ideally soft ferromagnetic materials," IEEE Trans. Magn., vol. 23, no. 5, pp. 2650–2652, Sep. 1987.
W. H. Press et al., Numerical Recipes in C. Cambridge, U.K.: Cambridge Univ. Press, 1992.
M. Seeger, S. N. Kaul, H. Kronmüller, and R. Reisser, "Asymptotic critical behavior of Ni," Phys. Rev. B, Condens. Matter, vol. 51, pp. 12585–12594, 1995.

Manuscript received October 26, 2004; revised February 28, 2005.
213
Quantum Field Theory - Lecture 5: Lehmann Representation and Renormalisation of the 2-Point Function
ICTP Postgraduate Diploma Programme
High Energy, Cosmology and Astroparticle Physics: Quantum Field Theory (HEP-QFT), S. Randjbar-Daemi
Posted: 7 Feb 2020 (HEP-QFT_L05.mp4)

Transcript: Let me quickly review what we did in the last lecture, last Monday, and then continue from there. We considered a λφ⁴-type interaction and studied the exact two-point function, the vacuum expectation value of the time-ordered product of two Heisenberg field operators on the Heisenberg vacuum, G₂(x₁, x₂) = ⟨Ω| T φ_H(x₁) φ_H(x₂) |Ω⟩. First we examined its graphical representation, introduced the concept of one-particle-irreducible (1PI) graphs (these are n-point functions from which the external legs have been removed, with the condition that they cannot be separated into two pieces by cutting a single internal line), and called the sum of the 1PI self-energy graphs −iΠ, which we calculated at one-loop order. Summing the chain of 1PI insertions gives the algebraic formula in momentum space

G₂(p) = i / (p² − m₀² − Π(p²; m₀², λ₀) + iε).

Since we are going to calculate the same object using renormalized quantities, we attach the label 0 to remind ourselves that in computing this object we have used the bare parameters of the Lagrangian.
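The resummation behind this formula, written out as a worked equation (a standard manipulation, with the lecture's convention that the 1PI self-energy blob is −iΠ; the iε is left implicit):

% requires amsmath
\[
G_2(p) = \frac{i}{p^2 - m_0^2}
       + \frac{i}{p^2 - m_0^2}\,(-i\Pi)\,\frac{i}{p^2 - m_0^2}
       + \cdots
       = \frac{i}{p^2 - m_0^2}\sum_{n=0}^{\infty}\left(\frac{\Pi(p^2)}{p^2 - m_0^2}\right)^{\!n}
       = \frac{i}{p^2 - m_0^2 - \Pi(p^2)} .
\]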
We left it at that point and started discussing the spectral representation of G₂. The subtle point is to consider first the vacuum expectation value of the plain, not time-ordered, product of two Heisenberg field operators, ⟨Ω| φ_H(x₁) φ_H(x₂) |Ω⟩. All the two-point functions are basically constructed from this building block: the Green function is the one you obtain by multiplying it by θ(x₁⁰ − x₂⁰) and adding the product in the opposite order times θ(x₂⁰ − x₁⁰). So we conveniently start by studying this object. Insert a complete set of eigenstates |n⟩ of the Hamiltonian and the momentum operator (I write this symbolically as a discrete sum, but of course it also contains integrations over the continuous momenta), and use translation invariance: φ_H(x) = e^{iP·x} φ(0) e^{−iP·x}, where P is the translation (momentum) operator. Acting with P on |n⟩ in both factors, each term becomes |⟨Ω| φ(0) |n⟩|² times e^{−ipₙ·(x₁−x₂)}, so

W(x₁ − x₂) = Σₙ |⟨Ω| φ(0) |n⟩|² e^{−ipₙ·(x₁−x₂)}.

By translation invariance this is a function of x₁ − x₂ only; such objects are sometimes called Wightman functions, which is why we call it W. For what we want to do, it is convenient to separate in the sum the contributions of the ground state, of the one-particle states, and of the remaining tower of states. For the ground state you get |⟨Ω| φ(0) |Ω⟩|²; by a shift of the field operator you can always make this vanish even if it is not zero to start with, so it does not contribute. A one-particle state is an eigenstate of the Hamiltonian and of the momentum operator characterized by a momentum, so that part of the sum becomes an integral with the Lorentz-invariant measure d³k / ((2π)³ 2ω_k), where k is a four-vector whose time component is not independent: k⁰ = ω_k = √(k² + m²). Here m is the physical mass of the single particle, the eigenvalue of the operator P² on the one-particle state; it is not the parameter m₀ which we introduced in the Lagrangian, to which we have made no reference in this derivation. The squared matrix element |⟨Ω| φ(0) |k⟩|² is a constant, which we call Z. The rest of the sum we indicate by Σ′, with a prime to remind ourselves to exclude the ground state and the single-particle states, whose contributions are already taken into account.

Let me tell you where we want to land, because that explains why we are doing all these manipulations: we want to rewrite the remaining sum as a continuous superposition of free two-point functions, each evaluated at some mass M² and weighted by a density of states σ(M²), with the integration starting from a threshold M₀² larger than m², since the contribution of the smallest mass, the one-particle state, has already been separated off.
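For reference, the free-field object that the lecture calls Δ₊, written out under standard conventions (relativistic normalization assumed):

% requires amsmath
\[
\Delta_+(x;\, m^2) \;=\; \int \frac{d^3k}{(2\pi)^3\, 2\omega_k}\; e^{-ik\cdot x},
\qquad k^0 = \omega_k = \sqrt{\vec k^{\,2} + m^2},
\]
so that for a free scalar field of mass $m$,
$\langle 0|\,\phi(x_1)\,\phi(x_2)\,|0\rangle = \Delta_+(x_1 - x_2;\, m^2)$.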
So let us introduce the spectral function. Define

σ(q²) θ(q⁰) = Σ′ₙ (2π)³ δ⁴(q − pₙ) |⟨Ω| φ(0) |n⟩|²,

where θ is the step function, zero for negative argument and one for positive argument; since we are dealing with physical states, the energies are positive, so this is consistent. This is just a definition. From it we see that contributions come only from those states whose total momentum pₙ differs from that of the vacuum and of the single-particle states, which we have already separated off; therefore σ(q²) vanishes for q² ≤ m², and it is nonzero only for invariant masses above the single-particle mass. One remark: pₙ here is the total momentum of the state, but for a multi-particle state the total momentum is only one of a whole collection of quantum numbers; for example, the relative momenta of the particles are among the others, and the primed sum runs over all of them. If you go to a frame where the spatial components of the total momentum vanish, q⁰ becomes the total energy of the state, the center-of-mass energy, which is the same as the total invariant mass; these notions are used interchangeably.

After defining this function we can write the primed sum as an integral, and our expression for W becomes

W(x₁ − x₂) = Z Δ₊(x₁ − x₂; m²) + ∫ d⁴q/(2π)³ σ(q²) θ(q⁰) e^{−iq·(x₁−x₂)},

where the first term, Z times Δ₊, involves the Wightman function of a free field with the physical mass m. If you substitute the definition of σ into the second term and perform the integration over q, you recover the primed sum; nothing but the properties of the delta function is involved.
Now another simple manipulation: insert one more delta function. Rewrite σ(q²) as an integral over M² of δ(q² − M²) σ(M²). In principle the M² integration runs to infinity, but we know that σ vanishes when M² is smaller than a threshold M₀², which is bigger than the mass squared of the single-particle state; if there are no bound states in the problem, the integral starts at the two-particle threshold, M₀² = (2m)² = 4m². Performing the q-integration against the delta function, the whole expression becomes

W(x₁ − x₂) = Z Δ₊(x₁ − x₂; m²) + ∫_{M₀²}^∞ dM² σ(M²) Δ₊(x₁ − x₂; M²),

because ∫ d⁴q/(2π)³ δ(q² − M²) θ(q⁰) e^{−iq·x} is nothing but Δ₊(x; M²): the delta and step functions reduce the four-dimensional integral to the three-dimensional invariant measure with q⁰ = √(q² + M²). The four-dimensional form may look different from the three-dimensional one, but it is the same object rewritten; in either case what you obtain is Δ₊ evaluated at the mass M². So the exact Wightman function is a constant Z times the free Δ₊ at the physical mass, plus a continuous superposition over the variable M² of free single-particle contributions with masses M² ≥ M₀², weighted by the density σ(M²). If there are no interactions, these extra terms do not contribute: in the free theory, expanding the field in terms of creation and annihilation operators, the multi-particle matrix elements simply vanish, Z equals one by this definition, and the expression reduces to the free-field vacuum expectation value of the product of two field operators. If there are interactions, then Z is different from one and the extra terms appear. Any questions?
Now it is obvious what the next step is. From this exact statement we can calculate other things too; for example, the exact two-point function, which we denote G₂(x₁, x₂), is given simply by

G₂(x₁, x₂) = W(x₁ − x₂) θ(x₁⁰ − x₂⁰) + W(x₂ − x₁) θ(x₂⁰ − x₁⁰).

Everything goes through linearly: the contribution of the first term, multiplied by the theta functions, simply gives Z times the free Feynman propagator, and likewise, since the remaining integration is over M², you can take the step functions inside the integral, where the same combination produces the Feynman propagator evaluated at the mass M². So

G₂(x₁, x₂) = Z Δ_F(x₁ − x₂; m²) + ∫_{M₀²}^∞ dM² σ(M²) Δ_F(x₁ − x₂; M²).

This is a fantastic relation: it tells you that the exact two-point function of an interacting theory is given by a constant times the free-particle propagator at the physical mass, plus a linear superposition of Feynman propagators for particles with mass M, weighted by the spectral density. Notice what went into it: no perturbation theory, no loop calculation, not even the equations of motion. The only things we made use of were the translation invariance of the model and of the ground state, and the definition of the Feynman propagator; no dynamics entered, no matter what kind of interaction. For this reason the construction is completely general: for any quantum field, a spinor field for instance, in any quantum field theory with any kind of interaction, you can derive a spectral representation like this. It makes no use of any specific feature of any particular theory; all you need is translation invariance. Any questions?

To see why this is useful for our discussion of renormalization, let us rewrite it in momentum space. The Fourier transform goes through term by term on both sides: the first term gives i/(p² − m² + iε) times Z; the integral over the parameter M² does not affect the Fourier transform, and the transform of the second term is i/(p² − M² + iε) times σ(M²). So

G₂(p) = iZ/(p² − m² + iε) + ∫_{M₀²}^∞ dM² σ(M²) i/(p² − M² + iε).

(A common notation for the spectral density is ρ; I changed it to σ because here we do not include the one-particle piece in it. You can also put everything together and write a single integral, with the one-particle part included via a delta function.)
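Written as a single integral (an equivalent packaging of the same result, one common convention):

% requires amsmath
\[
\rho(M^2) \;=\; Z\,\delta(M^2 - m^2) + \sigma(M^2),
\qquad
G_2(p) \;=\; \int_0^\infty dM^2\;\rho(M^2)\,\frac{i}{p^2 - M^2 + i\epsilon}\,.
\]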
As for the second term, the integral over M², it does not have a singularity at p² = m², because the lower limit of the integration is M₀², which is bigger than m², and the spectral function vanishes below that. So as p² approaches the mass shell of the single-particle state you get a term with a simple pole, with this very specific residue, plus terms that are regular there. That is the important piece of information we want to draw from here for the time being; later, if we have time, we will come back and extract more information from this representation. What is relevant for us now is that in this limit T₂(p) approaches a free-particle propagator, but the residue is not 1. If there were no interaction it would be one (I mean, i times one); if there is interaction it is not: the residue equals i times Z, plus the regular pieces elsewhere. Another thing, which we shall exploit in a minute: if you multiply both sides of this equation by p² − m² and let p² go to m², what happens? From the first term you get i times Z; from the second term you get zero, because p² − m² multiplies something that stays finite there. So as p² → m², (p² − m²) T₂(p) → iZ; it is the same statement as before, and again it is exact, with no perturbation theory in it. Why is this so significant? First of all, we see that there is a pole in p² precisely at the physical mass squared of the particle, and it is the physical mass because m² is an eigenvalue of the mass operator, the P² operator. Secondly, when we multiply by p² − m² and go on shell, all the interactions seem to disappear and you simply get a constant at that point. This is the key ingredient in the derivation of what is called the LSZ reduction formula. Essentially, the reduction formula tells you that n-point functions have simple poles as p² goes to m², and from the residues at those poles you can directly obtain the elements of the S-matrix. In fact one can prove similar results for n-point functions: we proved it for the two-point function, but if you consider an n-point function in momentum space and let any one of the momenta go to the mass shell, there will be a simple pole. You will probably do that in one of the future lectures. OK, now come back to our discussion of the two-point function in perturbation theory. So far there has been no renormalization and no counterterms. Go back to what we calculated before: at one loop, with the self-energy Π, we ended up with a formula like this, and it quite clearly does not have a pole at m₀². That is another indication that m₀² cannot be the physical mass of the particle. However, it can have a pole at the physical mass, provided that when you set p² equal to m² the denominator vanishes. In other words, the pole condition is that the denominator is zero for p² = m². Let me write this in a little more detail: as p² goes to m² we need p² − m₀² − Π(p²) to go to zero, which fixes m² − m₀² − Π(m²) = 0. [Question: so what is the point?] The point is that m₀ is just a parameter that you introduce when writing down the Lagrangian.
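In formulas, the two statements just made would be:

    \lim_{p^2 \to m^2} \big( p^2 - m^2 \big)\, \widetilde T_2(p) = i Z \quad \text{(exact)},

and, for the one-loop propagator with self-energy \Pi,

    G_2(p) = \frac{i}{p^2 - m_0^2 - \Pi(p^2) + i\epsilon},
    \qquad m^2 - m_0^2 - \Pi(m^2) = 0 \quad \text{(pole condition)}.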
The bare parameter of the Lagrangian should be regarded as a function of the physical mass squared: in perturbation theory you can actually solve this equation and express m₀², order by order, as a function of m² and of the coupling. So this can be solved formally, order by order in perturbation theory. [A student points out a sign.] Thank you; to express m₀² as a function of the physical mass I need to watch the minus sign in front of Π, so let me redo it with all the signs in place. If you solve for m₀² from here, then the denominator can be written formally as p² − m² − [Π(p²) − Π(m²)]; I am just rewriting the same thing: we are subtracting from Π its value at p² equal to the physical mass squared. This obviously gives a simple pole at that point, which takes care of the first condition. However, there is a second condition: the residue at the pole should be given by that specific number. You can just examine the expression and see that, in order for the residue to come out right, it is sufficient to subtract one more term. So we define G₂ to be i divided by p² − m² + iε minus the combination Π(p²) − Π(m²) − (p² − m²) Π′(m²), where Π′ is the first derivative of Π with respect to p², evaluated at the physical mass squared. In other words, we expand the function Π in a Taylor series around p² = m² and subtract the first two terms; what remains vanishes faster than p² − m², so this G₂ has the desired pole and the desired residue as p² goes to m². Now, this looks rather complicated, but it achieves exactly what you wanted to achieve by examining the two-point function, and it is easy to check that the one-loop result we calculated, treated with this subtraction prescription, gives a finite, cutoff-independent answer; it makes sense, and you should check it. But to go to higher orders proceeding this way is very, very cumbersome. What you really want, for efficiency, is a systematic scheme for implementing conditions like this; and there are other conditions besides these two, since here we are only examining the two-point function, which supplies only two conditions. Any questions? So what we want to do is go back and examine the same problem with everything rewritten in terms of renormalized quantities: a renormalized field, a renormalized mass, a renormalized coupling; and then impose conditions of exactly that form. You will see that it is then much easier to formulate the program of renormalization step by step, order by order, to any desired order. So what we will discuss now is renormalized perturbation theory. In deriving the exact formula for G₂, in deriving the spectral representation of G₂, there were several assumptions. The first: we imposed the condition that this quantity, the vacuum expectation value of the field, is zero. For any theory this can always be achieved: if it is different from zero, you replace φ by φ′, which is φ minus c, where c is the vacuum expectation value. Notice that c is a constant. Why is it a constant? It is a constant by translation invariance.
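The twice-subtracted form just described would read (sign conventions as in the pole condition above):

    G_2(p) = \frac{i}{\,p^2 - m^2 - \big[\Pi(p^2) - \Pi(m^2) - (p^2 - m^2)\,\Pi'(m^2)\big] + i\epsilon\,},

where the bracket vanishes faster than (p^2 - m^2) near the mass shell, so the pole sits at p^2 = m^2 with residue exactly i.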
You can write φ(x) as exp(iP·x) φ(0) exp(−iP·x), and the momentum operator annihilates the vacuum, so the expectation value is independent of x: a constant. If you make this shift, the shifted field does have vacuum expectation value 0. For the free field we simply checked that the vacuum expectation value vanishes; but starting from an interacting theory this is not automatic. If this object is different from 0 you have to make the shift, and when you make the shift you of course have to make it in all the terms of the Lagrangian, and therefore it generates other terms. For instance, with a φ³ interaction, the shift feeds the cubic term down: you change the coefficient of φ², and you also produce a term linear in φ, plus a constant. That was the first assumption. The second was that the single-particle matrix element of the field is this constant √Z. Now it is convenient to define a new field, the renormalized field, which is the original field of the Lagrangian divided by the square root of Z, so that Z drops out of that matrix element. Again, this substitution multiplies the terms of the Lagrangian by powers of √Z. So let us go back to our original Lagrangian and make these substitutions. (There should be an equal sign in the first line; sorry.) If the vacuum expectation value is different from zero we first make the shift; then we substitute φ = √Z φ_R. I am writing a subscript R for the moment, though I would like to drop it later. The kinetic term then gives you a factor of Z times (∂φ_R)²/2; likewise the mass term becomes Z m₀²/2 times φ_R², and we will write Z m₀² as m² + δm²; then there is the cubic term, for which we have to take the third power of √Z, giving Z^{3/2} λ₀ divided by 3 factorial times φ_R³; and then there is the linear term, which I will just carry along without writing it out. The resulting Lagrangian looks complicated, but the aim is the following: you would like to write this action as a free part plus interactions. To do that we have to remember that Z starts with one, plus corrections coming from the loops (that is what we saw), and that at tree level m² is the same as m₀², so δm² starts at some power of the coupling constant. On the linear term I will make a comment in a minute. To simplify the bookkeeping I will introduce a slightly different notation: the idea is to call the whole combination Z^{3/2} λ₀ the renormalized coupling times a constant, Z₁ λ μ^{3−n/2}. If you remember, I introduced the mass parameter μ precisely to take care of the dimensionality, so that this λ has mass dimension zero. That is not an obligation, you could keep a dimensionful coupling, but it is convenient to have the coupling dimensionless. Obviously this μ is introduced by hand, and you have to keep track of it at every step of the calculation; no physically measurable quantity should in the end depend on this μ. What this dependence means, and how it cancels, we will discuss later.
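Collecting the substitutions (a reconstruction; the power of μ is the one quoted later in the lecture):

    \phi = \sqrt{Z}\,\phi_R, \qquad Z\, m_0^2 = m^2 + \delta m^2, \qquad Z^{3/2} \lambda_0 = Z_1\, \lambda\, \mu^{\,3 - n/2},

    \mathcal{L} = \frac{Z}{2} (\partial \phi_R)^2 - \frac{m^2 + \delta m^2}{2}\, \phi_R^2
      - \frac{Z_1\, \lambda\, \mu^{\,3 - n/2}}{3!}\, \phi_R^3 \; (+\ \text{linear term}).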
So this is more than just a definition of symbols, but for the moment it is just notation. The idea is this: the original Lagrangian has a field, a mass, and a coupling constant, and in this particular case also a term linear in φ, which is a peculiarity of the φ³ interaction; if you were discussing φ⁴ it would not be there, and we will in any case get rid of it by a condition. So let us set it aside for the time being and concentrate on the field, the mass, and the coupling λ. Originally, then, we had three objects: the field, the mass, and the coupling. Here it seems we have increased the number of objects: we still have the field, the mass, and the coupling, but we now also have Z, δm², and Z₁; and if you include the linear term, one more. Obviously, if you just leave it at that and start calculating with this model, you will not be describing the original theory, because you have increased the number of parameters; and if you change the number of parameters, the model changes, the theory is not the same one. However, we know that in calculating we are going to impose conditions on the Green's functions we evaluate from this action. What are those conditions? We compute everything in terms of the renormalized field, mass, and coupling, and then impose what are called the renormalization conditions. The first condition is that the vacuum expectation value of the field must be zero. The second is on the two-point function: once the field is renormalized, the factors of Z that we had before disappear, and near the mass shell the two-point function calculated in this theory should look exactly like the free-particle Green's function, a simple pole with the standard residue, plus regular terms. Regarding G₂ as a function of m² and λ, this in fact fixes Z and δm² as functions of m², λ, and the cutoff. And then Z₁: we impose a condition that expresses the coupling as a physically measurable quantity. For that you have to look at an interaction vertex; in this model, for example, we can use the three-point function. At any order of perturbation theory (we will do this later, I am just stating the principle) there are three incoming momenta p₁, p₂, p₃, and you can impose the condition that at some chosen point in the space of these momenta the three-point function becomes equal to the renormalized coupling. That fixes Z₁ as a function of m² and λ. What all this means is that the new parameters Z, δm², Z₁ are not independent parameters of the model: they are determined by imposing the renormalization conditions. They are not numbers to be measured in experiments; these are the renormalization constants, and they are not even experimentally accessible. They are expressed as functions of the physical, renormalized parameters and of the cutoff. I forgot to say this explicitly: they are functions of λ, of m², and of the cutoff. Let me denote the cutoff by Λ, or, in dimensional regularization, by the deviation of the dimension from six.
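Summarizing, the renormalization conditions of this on-shell scheme are presumably:

    \text{(i)} \;\; \langle \Omega | \phi_R | \Omega \rangle = 0,
    \qquad
    \text{(ii)} \;\; G_2(p) \xrightarrow{\,p^2 \to m^2\,} \frac{i}{p^2 - m^2 + i\epsilon} + \text{regular},
    \qquad
    \text{(iii)} \;\; \Gamma_3(p_1, p_2, p_3)\Big|_{\text{reference point}} = -\, i\, \lambda\, \mu^{\,3 - n/2}.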
As long as the cutoff is finite, these are perfectly finite, well-defined functions; things blow up only when the cutoff is removed, that is when Λ goes to infinity or, in dimensional regularization, when the dimension goes to six (for this particular model). But at finite cutoff you have well-defined functions of the physical mass, the physical coupling, and the cutoff, and we will compute them explicitly at one loop, hopefully today; well, not today, in the next lecture. In general these quantities also depend on μ. One remark about the regulators: if you use dimensional regularization away from six dimensions, then introducing λ through the factor μ^{3−n/2} is exactly what keeps λ dimensionless; you see that when n is 6 the exponent 3 − n/2 becomes zero, so in six dimensions λ is dimensionless by itself. If instead you regularize with a momentum cutoff in exactly six dimensions, you still need to introduce the scale μ, because the subtraction point has to be set by some mass; essentially, there always has to be a mass scale somewhere. Other questions? If this looks a little complicated, never mind: after we do an explicit example it will become clear. The message you should take from this discussion is that we rewrite the original bare action (it is not a different action) in terms of renormalized, physically normalized fields, by introducing some constants: the wave-function constant Z, the mass counterterm, and the coupling constant Z₁. But these constants are not new parameters. "New parameters" would mean quantities that have to be fixed by new experimental measurements; these, in contrast, are determined not by experimental measurements but as definite functions of the experimentally measurable parameters: the mass, and the coupling defined, in this example, as the value of the three-point function at some point in momentum space. If you were discussing QED, that coupling would be the electric charge, and the bare objects would be determined in terms of the electric charge. When you renormalize with these conditions, the so-called mass-shell renormalization, the determination is made by imposing conditions on a few Green's functions: here, the one-point, two-point, and three-point functions. When you go to other theories you impose similar conditions; in renormalizable theories, conditions on some finite number of Green's functions suffice. This is the basic message that I would like you to take away, and to understand where we are heading from what we have discussed so far. Once we implement it concretely, many of these little points will become clear. Any questions? [Question: must one always impose exactly these conditions, or can others be imposed instead?] A good question, with two parts. Are more conditions ever needed? Yes: if the model is not renormalizable you need to impose further conditions; if the model is renormalizable, this finite set is enough; we may see this in one of the future lectures. And in the context of this particular model, can different conditions do the job? The answer is yes: you can remove the infinities
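The dimension count behind the factor μ^{3−n/2} is quick to verify: in n spacetime dimensions the action is dimensionless, so

    [\phi] = \frac{n-2}{2}, \qquad [\lambda_0] = n - 3 \cdot \frac{n-2}{2} = 3 - \frac{n}{2},

which vanishes at n = 6; writing \lambda_0 \propto \lambda\, \mu^{\,3 - n/2} therefore keeps λ dimensionless away from six dimensions.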
by imposing other conditions as well. The scheme we are using now is called mass-shell renormalization, because we impose the condition at p² equal to m², and for that reason m² is the physical mass of the particle. You can instead impose the condition at p² equal to some other value than m²; that is common practice, as we will see later. In that case the position of the pole of the propagator will not be at the mass parameter; the physical mass is then defined, as always, by the location of the pole and the residue there. And if you just subtract infinities, as in the so-called minimal subtraction scheme, then you have to be careful about how the physical mass of the particle is defined. (That is "soustraction" in French, by the way.) Other questions? OK, let us continue for a few more minutes. I am going to erase this part but leave the action. We now want to reorganize this action into a free part plus an interaction part. To do that, we make use of the observations of ten or twenty minutes ago: these constants all start from one. Z equals one plus corrections, Z m₀² equals m² plus corrections, and Z₁ equals one plus corrections. Actually, let me be careful with the notation: this φ is now the renormalized field, so I am changing the symbols to indicate that. We are going to write Z as 1 plus something; writing "something small" makes me nervous, so let us give it a name: Z = 1 + A, with A dimensionless. Then δm², which has the dimension of mass squared, we write as a dimensionless constant times the mass squared: δm² = B m², with m the physical mass. And Z₁ we write as 1 + C. Clearly, in perturbation theory A, B, and C have expansions in power series of the coupling constant: we write A as a first coefficient times the first relevant power of λ plus a second coefficient times the next power, and so on, with coefficients that can in principle depend on the other parameters; B is likewise expanded as B₁ plus B₂ and so on, and C as C₁ plus C₂ and so on. When we do this, the action has coefficients that depend on A, B, and C, and later, when we impose the renormalization conditions, we will determine these expansion coefficients order by order. So let us do it. There is a little algebra here which you should carry out yourself; I will just indicate where the terms come from. We substitute for Z, δm², and Z₁ from here and expand in terms of A, B, and C. The kinetic term splits into a piece with coefficient 1, which belongs to the free action, and a piece with coefficient A, which goes into the interaction; the mass term splits the same way. And here let me fix a small mistake on the board: there should be a factor of one half in front of these quadratic terms.
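In symbols (the precise powers of λ at which A, B, C start are model dependent; in this φ³ model the first nonvanishing loop corrections in fact arise at order λ²):

    Z = 1 + A, \qquad \delta m^2 = B\, m^2, \qquad Z_1 = 1 + C,
    \qquad A = \sum_k A_k \lambda^k, \quad B = \sum_k B_k \lambda^k, \quad C = \sum_k C_k \lambda^k.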
With the one half in place, then: for Z₁ we write 1 + C. The 1 reproduces the original cubic interaction, minus λ μ^{3−n/2} over 3 factorial times φ³, and the C gives an extra cubic vertex, minus C λ μ^{3−n/2} over 3 factorial times φ³, which belongs to what I will call the counterterms; these are the renormalization counterterms (sorry, the notation is getting heavy, but this is the standard name). So all I have done is substitute Z = 1 + A, δm² = B m², Z₁ = 1 + C into this action and write it as: S₀, which contains none of these constants, just the free action for the field φ with mass m²; plus S₁, the original interaction with coupling λ; plus the counterterm part, which collects everything multiplying A, B, and C; plus a constant, which I drop. Everything here looks rather long, but there is a rationale for writing it exactly this way, and it is the following. This is the free part of the action, and in the remaining terms every single term carries at least one power of the coupling constant. Take A, for example: when you expand it in terms of λ, it starts at some positive power of λ. (In some cases the first coefficients vanish, but even if the first one is zero the next one is not, and the counting goes through.) So every counterterm carries powers of the coupling; that means it must be treated as part of the interaction in perturbation theory. The interaction, by definition, is the part that contains the coupling constants; the free part contains no coupling constant; so the counterterms are part of the interaction. The sum of the two pieces, S₁ plus the counterterms, is what we call the interaction in perturbation theory: we start from the free theory and expand in powers of it. You remember the formula we had for the n-point functions: the vacuum expectation value of the time-ordered product of fields with the exponential of i times the interaction action inside. That interaction action now contains the counterterms, and therefore the Feynman rules you obtain will change: before, you clearly had only one interaction vertex; now the rules must include all the interactions, including the new vertices coming from these constants. Is that clear? So, once again: we start from the bare action; we substitute the renormalized field, the renormalized mass, and the renormalized coupling; this brings in the constants Z, δm², and Z₁, which we write as 1 plus corrections; we then reorganize the action as a free part plus interactions; and the counterterms, since they contain powers of the coupling constant, belong to the interaction. Therefore, if you want to use Feynman-graph techniques, you have to work out the rules both for the vertex written in terms of this coupling and for the new vertices coming from the counterterm part of the action. So I am going to write down those rules, and we will stop there for today.
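Dropping the subscript R and the linear term, the decomposition just described reads (a reconstruction):

    \mathcal{L} = \underbrace{\tfrac{1}{2} (\partial \phi)^2 - \tfrac{1}{2} m^2 \phi^2}_{\mathcal{L}_0}
      \; \underbrace{-\; \tfrac{\lambda \mu^{3 - n/2}}{3!}\, \phi^3}_{\mathcal{L}_1}
      \; \underbrace{+\; \tfrac{A}{2} (\partial \phi)^2 - \tfrac{B}{2} m^2 \phi^2 - \tfrac{C \lambda \mu^{3 - n/2}}{3!}\, \phi^3}_{\mathcal{L}_{\rm ct}}.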
The full set of Feynman rules, then. First, the propagator: it comes from the free part of the action (the Z has gone to 1 there, since the A-piece has been moved into the interaction), so in momentum space the propagator is drawn precisely as before, a line, and the formula for it is i divided by p² − m² + iε. But the m² is now the renormalized mass; for the time being we take it to be the physical mass. Later on we will see that there are other possibilities, but for now it is the physical mass, and this part works the same way in essentially any theory. Then there is the vertex coming from the original interaction term, and that is the same as before: −i λ μ^{3−n/2}, exactly as in the unrenormalized treatment. However, the counterterms contain powers of λ, so they have to be treated as part of the interaction Lagrangian, and you must have a Feynman rule for them as well. I will denote the quadratic counterterm by a little filled dot on the line, and you can read its value off directly from the coefficients; the rule comes from the same derivation as always, the time-ordered exponential of i times the interaction action. Written in momentum space it is i times (A p² − B m²): this comes from the quadratic form, the A-piece of the kinetic term and the B-piece of the mass term. And then there is the other counterterm vertex, which comes from the cubic counterterm and follows the same rule: −i C λ μ^{3−n/2}, drawn like the ordinary three-point vertex but with a mark on it. To evaluate any object you draw all the Feynman diagrams as before, but now you must also take into account the new vertices: you construct the extra diagrams containing counterterm insertions, and the quantity you compute is the sum of the original graphs plus the new ones. This is unfortunately not an ideal point to stop, we are just a little bit short of completing the discussion, but we have to stop, so let me stop at this point.
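For reference, the rules just stated, collected in one place (reconstructed):

    \text{propagator (line):} \quad \frac{i}{p^2 - m^2 + i\epsilon},
    \qquad
    \text{vertex:} \quad -\, i\, \lambda\, \mu^{\,3 - n/2},

    \text{quadratic counterterm (dot on a line):} \quad i \big( A\, p^2 - B\, m^2 \big),
    \qquad
    \text{cubic counterterm (marked vertex):} \quad -\, i\, C\, \lambda\, \mu^{\,3 - n/2}.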
algorithm - Guessing an unbounded integer - Stack Overflow
===============
Guessing an unbounded integer

Asked Aug 20, 2009 at 0:47 by John Fouhy · Edited Oct 30, 2023 at 16:15 by hippietrail · Viewed 6k times · 8

If I say to you: "I am thinking of a number between 0 and n, and I will tell you if your guess is high or low", then you will immediately reach for binary search. What if I remove the upper bound? i.e. I am thinking of a positive integer, and you need to guess it.
One possible method would be for you to guess 2, 4, 8, ..., until you guess 2^k for some k and I say "lower". Then you can apply binary search. Is there a quicker method?

EDIT: Clearly, any solution is going to take time proportional to the size of the target number. If I chuck Graham's number through the Ackermann function, we'll be waiting a while whatever strategy you pursue. I could offer this algorithm too: guess each integer in turn, starting from 1. It's guaranteed to finish in a finite amount of time, but yet it's clearly much worse than my "powers of 2" strategy. If I can find a worse algorithm (and know that it is worse), then maybe I could find a better one? For example, instead of powers of 2, maybe I can use powers of 10. Then I find the upper bound in log_10(n) steps, instead of log_2(n) steps. But I have to then search a bigger space. Say k = ceil(log_10(n)). Then I need log_2(10^k - 10^(k-1)) steps for my binary search, which I guess is about 10 + log_2(k). For powers of 2, I have roughly log_2(log_2(n)) steps for my search phase. Which wins?

What if I search upwards using n^n? Or some other sequence? Does the prize go to whoever can find the sequence that grows the fastest? Is this a problem with an answer?

Thank you for your thoughts. And my apologies to those of you suggesting I start at MAX_INT or 2^32 - 1, since I'm clearly drifting away from the bounds of practicality here.

FINAL EDIT: Hi all, thank you for your responses. I accepted the answer by Norman Ramsey (and commenter onebyone) for what I understood to be the following argument: for a target number n, any strategy must be capable of distinguishing between (at least) the numbers from 0..n, which means you need (at least) O(log(n)) comparisons. However, several of you also pointed out that the problem is not well-defined in the first place, because it's not possible to pick a "random positive integer" under the uniform probability distribution (or, rather, a uniform probability distribution cannot exist over an infinite set). And once I give you a nonuniform distribution, you can split it in half and apply binary search as normal. This is a problem that I've often pondered as I walk around, so I'm pleased to have two conclusive answers for it.

Tags: algorithm, language-agnostic

Comments:
- PS, I have no practical use for this. I just thought it an interesting problem. – John Fouhy, Aug 20, 2009
- Your solution is the one I would use. – Pyrolistical, Aug 20, 2009
- Yes, you are starting to get into the maths of the infinite, which are by definition counterintuitive and not practical. I doubt you can get somewhere with this :-) – Vinko Vrsalovic, Aug 20, 2009
- Actually, this is a problem in human psychology (what numbers might a human actually select, and how likely is each?). – Steve Jessop, Aug 20, 2009
- That would be interesting with a bounded range. e.g. you ask a stranger to pick a number from 1..10,000. Is 5,000 the best starting guess? – John Fouhy, Aug 20, 2009
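For concreteness, here is a minimal sketch of the strategy the question describes. The oracle interface is an assumption of the sketch, not part of the question: is_lower_or_equal(g) answers whether the hidden number is <= g.

    def guess(is_lower_or_equal):
        """Find a hidden positive integer x with ~2*log2(x) queries:
        double an upper bound until it is reached, then binary search."""
        hi = 1
        while not is_lower_or_equal(hi):   # phase 1: ~log2(x) queries
            hi *= 2
        lo = hi // 2 + 1                   # x now lies in (hi/2, hi]
        while lo < hi:                     # phase 2: ~log2(x) more queries
            mid = (lo + hi) // 2
            if is_lower_or_equal(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    secret = 37
    print(guess(lambda g: secret <= g))    # -> 37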
14 Answers

Answer (score 15) – caf, answered Aug 20, 2009 at 0:51, edited Aug 20, 2009 at 7:03:

If there truly is no upper bound, and all numbers all the way to infinity are equally likely, then there is no optimum way to do this. For any finite guess G, the probability that the number is lower than G is zero and the probability that it is higher is 1 - so there is no finite guess that has an expectation of being higher than the number.

RESPONSE TO JOHN'S EDIT: By the same reasoning that powers of 10 are expected to be better than powers of 2 (there's only a finite number of possible Ns for which powers of 2 are better, and an infinite number where powers of 10 are better), powers of 20 can be shown to be better than powers of 10. So basically, yes, the prize goes to the fastest-growing sequence (and for the same sequence, the highest starting point) - for any given sequence, it can be shown that a faster-growing sequence will win in infinitely more cases. And since for any sequence you name, I can name one that grows faster, and for any integer you name, I can name one higher, there's no answer that can't be bettered. (And every algorithm that will eventually give the correct answer has an expected number of guesses that is infinite, anyway.)

Comments:
- Well, if I use powers of 2 then it will take ceil(log2(n)) guesses to find the upper bound, then the same number to find the actual number. Or I could use, say, powers of 10, in which case I'll find an upper bound faster, but have a bigger range to search. Or I could guess n^n, which grows very fast but would leave me a much bigger range to search eventually. Certainly you can't get an upper bound in O(1) guesses, but that doesn't mean all strategies are equally bad. – John Fouhy, Aug 20, 2009
- The problem is that if n can take on any integer in the range [0, inf) with equal probability, then the expected value of both log2(n) and log10(n) is also infinity - meaning that the expected number of guesses to find the upper bound is infinite in both cases, so they really are equally bad (and "as bad as possible", which also happens to be "as good as possible" :) – caf, Aug 20, 2009
- "all numbers all the way to infinity are equally likely" - that is not possible (as a simple result in probability theory), and hence not the case. – Steve Jessop, Aug 20, 2009
- Nonetheless, it is what the question (implicitly) specified, so I chose to treat it as a Gedankenexperiment (in the same vein as something like Newcomb's Problem). – caf, Aug 20, 2009
- There is no probability distribution on the integers that satisfies your claim that for all guesses G, the probability that the target number is lower than G is zero. – I. J. Kennedy, Aug 22, 2009
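The base-2 versus base-10 trade-off debated in these comments can be made concrete with a small experiment. The sketch below is illustrative, not from the thread: it counts oracle queries for a ramp-then-binary-search strategy with different ramp bases; for large targets the base-10 ramp uses noticeably fewer total queries.

    def query_count(x, base):
        """Queries used by: ramp hi = 1, base, base^2, ... until x <= hi,
        then binary search the interval (previous hi, hi]."""
        queries, prev, hi = 0, 0, 1
        while True:
            queries += 1                  # ask: is x <= hi?
            if x <= hi:
                break
            prev, hi = hi, hi * base
        lo = prev + 1
        while lo < hi:
            queries += 1                  # ask: is x <= mid?
            mid = (lo + hi) // 2
            if x <= mid:
                hi = mid
            else:
                lo = mid + 1
        return queries

    for x in (10, 10**3, 10**6, 10**9):
        print(x, query_count(x, 2), query_count(x, 10))

(Of course, as the answer notes, under a literal "uniform over all integers" assumption every such count has infinite expectation, so this only illustrates behaviour for any fixed target.)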
Answer (score 5) – Steve Jessop, answered Aug 20, 2009 at 10:23, edited at 10:50:

People (who have never studied probability) tend to think that "pick a number from 1 to N" means "with equal probability of each", and they act according to their intuitive understanding of probability. Then when you say "pick any positive integer", they still think it means "with equal probability of each". This is of course impossible - there exists no discrete probability distribution with domain the positive integers, where p(n) == p(m) for all n, m.

So, the person picking the number must have used some other probability distribution. If you know anything at all about that distribution, then you must base your guessing scheme on that knowledge in order to have the "fastest" solution. The only way to calculate how "fast" a given guessing scheme is, is to calculate its expected number of guesses to find the answer. You can only do this by assuming a probability distribution for the target number. For example, if they have picked n with probability (1/2)^n, then I think your best guessing scheme is "1", "2", "3", ... (average 2 guesses). I haven't proved it, though; maybe it's some other sequence of guesses. (Certainly the guesses should start small and grow slowly.) If they have picked 4 with probability 1 and all other numbers with probability 0, then your best guessing scheme is "4" (average 1 guess). If they have picked a number from 1 to a trillion with uniform distribution, then you should binary search (average about 40 guesses).

I say the only way to define "fast" - you could look at worst case. You have to assume a bound on the target, to prevent all schemes having the exact same speed, namely "no bound on the worst case". But you don't have to assume a distribution, and the answer for the "fastest" algorithm under this definition is obvious - binary search starting at the bound you selected. So I'm not sure this definition is terribly interesting...

In practice, you don't know the distribution, but can make a few educated guesses based on the fact that the picker is a human being, and what numbers humans are capable of conceiving. As someone says, if the number they picked is the Ackermann function for Graham's number, then you're probably in trouble. But if you know that they are capable of representing their chosen number in digits, then that actually puts an upper limit on the number they could have chosen. But it still depends what techniques they might have used to generate and record the number, and hence what your best knowledge is of the probability of the number being of each particular magnitude.

Answer (score 4) – Norman Ramsey, answered Aug 20, 2009 at 3:31:

Worst case, you can find it in time logarithmic in the size of the answer using exactly the methods you describe. You might use Ackermann's function to find an upper bound faster than logarithmic time, but then the binary search between the number guessed and the previous guess will require time logarithmic in the size of the interval, which (if guesses grow very quickly) is close to logarithmic in the size of the answer. It would be interesting to try to prove that there is no faster algorithm (e.g., O(log log n)), but I have no idea how to do it.

Comment:
- An O(log log n) algorithm would have O(log log n) bits of input, and would have to be capable of outputting any of n possibilities. QED. I think. Basically the same argument as you use to prove that a general sort is at best O(n log n) comparisons. – Steve Jessop, Aug 20, 2009
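The "average 2 guesses" claim in Steve Jessop's answer above is easy to verify numerically: under the prior P(n) = (1/2)^n, the sequential strategy spends n queries when the target is n, so the expectation is the sum of n * (1/2)^n, which equals 2. A one-line check (truncating the negligible tail):

    print(sum(n * 0.5**n for n in range(1, 200)))   # ~2.0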
Answer (score 3) – Mala, answered Aug 20, 2009 at 1:34, edited at 2:19:

Mathematically speaking: You cannot ever correctly find this integer. In fact, strictly speaking, the statement "pick any positive integer" is meaningless, as it cannot be done: although you as a person may believe you can do it, you are actually picking from a bounded set - you are merely unconscious of the bounds.

Computationally speaking: we never deal with infinites, as we would have no way of storing or checking against any number larger than, say, the theoretical maximum number of electrons in the universe. As such, if you can estimate a maximum based on the number of bits used in a register on the device in question, you can carry out a binary search.

Comments:
- Integers are only countably infinite, so by induction you can always pick one. The axiom of choice is only relevant for non-countably infinite sets (such as reals). – Chris Dodd, Aug 20, 2009
- @Chris Dodd: No. The axiom of choice is relevant to products of nonempty sets (the axiom states that such a product is not empty). – jason, Aug 20, 2009

Answer (score 2) – maxim1000, answered Aug 20, 2009 at 9:29:

Binary search can be generalized: at each step, the set of possible choices should be divided into two subsets of probability 0.5. In this form it is still applicable to infinite sets, but it requires knowledge of the distribution (for finite sets this requirement is quite often forgotten)...
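maxim1000's generalized binary search can be sketched as follows: given the prior's CDF, each guess is placed at the conditional median of the numbers still possible. The function names and the CDF-scanning loop are illustrative assumptions, not from the thread; with the geometric prior P(n) = (1/2)^n this reproduces the sequential 1, 2, 3, ... strategy from Steve Jessop's answer.

    def posterior_median(cdf, lo, hi):
        """Smallest g >= lo holding at least half of the remaining
        prior mass on [lo, hi] (hi=None means unbounded above)."""
        top = 1.0 if hi is None else cdf(hi)
        target = (cdf(lo - 1) + top) / 2.0
        g = lo
        while cdf(g) < target:            # linear scan; fine for a sketch
            g += 1
        return g

    def guess_with_prior(cdf, is_lower_or_equal):
        lo, hi = 1, None
        while hi is None or lo < hi:
            g = posterior_median(cdf, lo, hi)
            if is_lower_or_equal(g):
                hi = g
            else:
                lo = g + 1
        return lo

    cdf = lambda n: 1.0 - 0.5**n if n > 0 else 0.0   # geometric prior
    secret = 3
    print(guess_with_prior(cdf, lambda g: secret <= g))   # -> 3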
Answer (score 1) – David, answered Aug 20, 2009 at 0:57:

My main refinement is that I'd start with a higher first guess instead of 2, around the average of what I'd expect them to choose. Starting with 64 would save 5 guesses vs starting with 2 when the number's over 64, at the cost of 1-5 more when it's less. 2 makes sense if you expect the answer to be around 1 or 2 half the time. You could even keep a memory of past answers to decide the best first guess. Another improvement could be to try negatives when they say "lower" on 0.

Comments:
- As @caf pointed out, it's (almost) irrelevant what your first guess is if the upper limit is infinity. ANY finite number has (almost) zero chance of being larger than the actual number. But that's a mathematical problem and answer, not a computing one. – Eric J., Aug 20, 2009
- @Eric, human factors come into play here as well. Ask any average Joe on the street to pick an unbounded number and 80% of them would probably still choose "7" :-) – paxdiablo, Aug 20, 2009
- And the other 20% would decompose in the following manner: 10% will choose 13, 5% would choose 666, 4% would choose 10 and 1% would choose 42. – Vinko Vrsalovic, Aug 20, 2009

Answer (score 1) – Reed Copsey, answered Aug 20, 2009 at 1:16:

If this is guessing the upper bound of a number being generated by a computer, I'd start with 2^(number of bits/2), then scale up or down by powers of two. This, at least, gets you the closest to the possible values in the least number of jumps. However, if this is a purely mathematical number, you can start with any value, since you have an infinite range of values, so your approach would be fine.

Answer (score 1) – Rasmus Faber, answered Aug 20, 2009 at 11:00:

Since you do not specify any probability distribution of the numbers (as others have correctly mentioned, there is no uniform distribution over all the positive integers), the No Free Lunch Theorem gives the answer: any method (that does not repeat the same number twice) is as good as any other. Once you start making assumptions about the distribution (e.g. that it is a human being or a binary computer that chooses the number) this of course changes, but as the problem is stated, any algorithm is as good as any other when averaged over all possible distributions.
Answer (score 1) – Vinko Vrsalovic, answered Aug 20, 2009 at 1:16, edited at 14:27:

Use binary search starting with MAX_INT/2, where MAX_INT is the biggest number your platform can handle. No point in pretending we can actually have infinite possibilities. UPDATE: Given that you insist on entering the realms of infinity, I'll just vote to close your question as not programming related :-)

Comments:
- The possibilities might not be infinite, but if the size of an integer in the given implementation is only bounded by the available memory, starting with MAX_INT/2 will be very inefficient, because you will take up almost all of the memory with the first guess, and calculating with such big numbers will be very costly. So in that case it would be better to start with a low number and work your way up. – sepp2k, Aug 20, 2009
- If the number chosen is truly random (and not chosen by a human) you'd get, half of the time, a number greater than MAX_INT/2, so starting from a lower number would only waste more time and still consume that much memory and computation in half of the runs, while incurring only a big startup cost for the other half (as it'll be reduced in size quickly). – Vinko Vrsalovic, Aug 20, 2009
- Umm, I have 3G RAM here; the requirement to store both bounds and a guess means I should start with half of 2^(8G). – Jasen, Feb 3, 2013

Answer (score 1) – starblue, answered Aug 20, 2009 at 6:05, edited Aug 21, 2009:

The standard default assumption of a uniform distribution for all positive integers doesn't lead to a solution, so you should start by defining the probability distribution of the numbers to guess.

Answer (score 0) – Cinder6, answered Aug 20, 2009 at 0:55:

I'd probably start my guessing with Graham's Number.

Comments:
- In the grand scheme of things, this number is minuscule. – jason, Aug 20, 2009
- Oh, totally, but it's still bigger than any number (literal) anybody is likely to just name off the top of their head, unless they try to get clever and say "g64 + 1". – Cinder6, Aug 20, 2009
- How about A(g64, g64), as per XKCD? – Cinder6, Aug 20, 2009

Answer (score 0) – jerryjvl, answered Aug 20, 2009 at 0:56:

The practical answer within a computing context would be to start with whatever is the highest number that can (realistically) be represented by the type you are using. In case of some BigInt type you'd probably want to make a judgement call about what is realistic... obviously ultimately the bound in that case is the available memory... but performance-wise something smaller may be more realistic.

Comment:
- Available memory, with memory by the old definition where it's a synonym for storage, which includes "disk". It's not like the eventual binary search needs to do many writes. – Jasen, Feb 3, 2013
This is clearly worse than the algorithm in my post. If I can supply a worse algorithm, can I not find a better one?
Comment – Kirk Broadhurst (Aug 23, 2009): I disagree. The suggested algorithm (guess each integer in turn) is not 'clearly worse' than the binary algorithm, any more than 1 x 0 is smaller than 100 x 0. With an unbounded range, the chance of finding the number is zero.

Answer (0 votes) – vak (Aug 16, 2022):
I gave an answer to a similar question, "Optimal algorithm to guess any random integer without limits?" The algorithm provided there does not just search for the chosen number: it estimates the median of the distribution of a number that you may even re-choose at each step! And the number can even be drawn from the real domain ;)
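The doubling-then-bisect strategy that several answers above allude to can be made concrete. Here is a short Python sketch (mine, for illustration; the oracle name and protocol are assumptions): grow the upper bound geometrically until it overshoots, then run an ordinary binary search inside the bracket. Against a target m it uses O(log m) guesses, which is the best one can do without distributional assumptions.

    def guess_unbounded(too_low) -> int:
        """Find an unknown positive integer m using only the predicate
        too_low(g), which is True iff g < m.

        Phase 1: double an upper bound until it is no longer too low
                 (O(log m) probes).
        Phase 2: binary search inside (lo, hi] (another O(log m) probes)."""
        hi = 1
        while too_low(hi):
            hi *= 2
        lo = hi // 2          # invariant from here on: lo < m <= hi
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if too_low(mid):
                lo = mid
            else:
                hi = mid
        return hi

    secret = 1_000_003
    assert guess_unbounded(lambda g: g < secret) == secret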
215
Published Time: 2018-04-25T16:52:03Z
5.7: Modular Arithmetic - Mathematics LibreTexts
A Spiral Workbook for Discrete Mathematics (Kwong), Chapter 5: Basic Number Theory
"authorname:hkwong", "license:ccbyncsa", "showtoc:no", "clock arithmetic", "residues modulo" ] Search site Search Search Go back to previous article Sign in Username Password Sign in Sign in Sign in Forgot password Contents 1. Home 2. Bookshelves 3. Combinatorics and Discrete Mathematics 4. A Spiral Workbook for Discrete Mathematics (Kwong) 5. 5: Basic Number Theory 6. 5.7: Modular Arithmetic Expand/collapse global location A Spiral Workbook for Discrete Mathematics (Kwong) Front Matter 1: Introduction to Discrete Mathematics 2: Logic 3: Proof Techniques 4: Sets 5: Basic Number Theory 6: Functions 7: Relations 8: Combinatorics 9: Appendices Back Matter 5.7: Modular Arithmetic Last updated Jul 7, 2021 Save as PDF 5.6: Fundamental Theorem of Arithmetic 6: Functions picture_as_pdf Full Book Page Downloads Full PDF Import into LMS Individual ZIP Buy Print Copy Print Book Files Buy Print CopySubmit Adoption Report Peer ReviewDonate Page ID 8555 Harris Kwong State University of New York at Fredonia via OpenSUNY ( \newcommand{\kernel}{\mathrm{null}\,}) Table of contents No headers Modular arithmetic uses only a fixed number of possible results in all its computation. For instance, there are only 12 hours on the face of a clock. If the time now is 7 o’clock, 20 hours later will be 3 o’clock; and we do not say 27 o’clock! This example explains why modular arithmetic is referred to by some as clock arithmetic. Example 5.7.1 5.7.1 Assume the current time is 2:00 p.m. Write this as 14:00. Sixty five hours later, it would be 79:00. Since 79=24⋅3+7,79=24⋅3+7, it will be 7:00 or 7 a.m. hands-on exercise 5.7.1 5.7.1 Designate Sunday, Monday, Tuesday, …, Saturday as Day 0, 1, 2, …, 6. If today is Monday, then it is Day 1. What day of the week will it be two years from now? Assume there are 365 days in a year. In the clock example, we essentially regard 27 o’clock the same as 3 o’clock. They key is, we are only interested in the remainder when a value is divided by 12. Definition: congruent modulo Let n≥2 n≥2 be a fixed integer. We say the two integers m 1 m 1 and m 2 m 2 are congruent modulo, denoted m 1≡m 2(mod n)m 1≡m 2(mod n) if and only if n∣(m 1−m 2)n∣(m 1−m 2). The integer n n is called the modulus of the congruence. What does this notion of congruence have to do with remainders? The next result describes their connection. Theorem 5.7.1 5.7.1 Let n≥2 n≥2 be a fixed integer. For any two integers m 1 m 1 and m 2 m 2, m 1≡m 2(mod~n)⇔m 1 mod n=m 2 mod n.m 1≡m 2(mod~n)⇔m 1 mod n=m 2 mod n. Remark Do not confuse the two notations. The notation “(mod n n)” after m 1≡m 2 m 1≡m 2 indicates a congruence relation, in which “mod n n” are enclosed by a pair of parentheses, and the notation is placed at the end of the congruence. In contrast, the “mod” between m 1 m 1 and n n, without parentheses, is a binary operation that yields the remainder when m 1 m 1 is divided by n n. Proof (⇒⇒) Assume m 1≡m 2 m 1≡m 2 (mod n n). The definition of congruence implies that we have n∣(m 1−m 2)n∣(m 1−m 2). Hence, m 1−m 2=n q m 1−m 2=n q for some integer q q. Let m 1=n q 1+r 1 m 1=n q 1+r 1 and m 2=n q 2+r 2 m 2=n q 2+r 2 for some integers q 1,q 2,r 1,r 2 q 1,q 2,r 1,r 2, such that 0≤r 1,r 2<n 0≤r 1,r 2<n. Then n q=m 1−m 2=n(q 1−q 2)+r 1−r 2.n q=m 1−m 2=n(q 1−q 2)+r 1−r 2. Since r 1−r 2=n(q−q 1+q 2)r 1−r 2=n(q−q 1+q 2), we conclude that n∣r 1−r 2 n∣r 1−r 2. However, 0≤r 1,r 2<n 0≤r 1,r 2<n implies that |r 1−r 2|<n|r 1−r 2|<n. Therefore, we must have r 1−r 2=0 r 1−r 2=0, or r 1=r 2 r 1=r 2. 
Proof. ($\Rightarrow$) Assume $m_1 \equiv m_2 \pmod{n}$. The definition of congruence implies $n \mid (m_1 - m_2)$. Hence $m_1 - m_2 = nq$ for some integer $q$. Let $m_1 = nq_1 + r_1$ and $m_2 = nq_2 + r_2$ for some integers $q_1, q_2, r_1, r_2$ with $0 \leq r_1, r_2 < n$. Then
$$nq = m_1 - m_2 = n(q_1 - q_2) + r_1 - r_2.$$
Since $r_1 - r_2 = n(q - q_1 + q_2)$, we conclude that $n \mid (r_1 - r_2)$. However, $0 \leq r_1, r_2 < n$ implies $|r_1 - r_2| < n$. Therefore we must have $r_1 - r_2 = 0$, that is, $r_1 = r_2$. It follows that $m_1 \bmod n = m_2 \bmod n$.

($\Leftarrow$) Assume $m_1 \bmod n = m_2 \bmod n$. According to the Division Algorithm, the remainder in an integer division is unique. Thus $m_1 = nq_1 + r$ and $m_2 = nq_2 + r$ for some integers $q_1, q_2, r$ with $0 \leq r < n$. Then
$$m_1 - m_2 = (nq_1 + r) - (nq_2 + r) = n(q_1 - q_2).$$
Therefore $n \mid (m_1 - m_2)$.

Corollary 5.7.2. Let $n \geq 2$ be a fixed integer. Then $a \equiv 0 \pmod{n} \iff n \mid a$.

Theorem 5.7.1 tells us that $m_1 \equiv m_2 \pmod{n}$ if and only if $m_1$ and $m_2$ share the same remainder when they are divided by $n$. Given any integer $m$,
$$m \bmod n \in \{0, 1, 2, \ldots, n-1\}.$$
We call these values the residues modulo $n$. In modular arithmetic, when we say "reduced modulo $n$," we mean that whatever result we obtain, we divide it by $n$ and report only the smallest possible nonnegative residue.

The next theorem is fundamental to modular arithmetic.

Theorem 5.7.3. Let $n \geq 2$ be a fixed integer. If $a \equiv b \pmod{n}$ and $c \equiv d \pmod{n}$, then
$$a + c \equiv b + d \pmod{n}, \qquad ac \equiv bd \pmod{n}.$$

Proof. Assume $a \equiv b \pmod{n}$ and $c \equiv d \pmod{n}$. Then $n \mid (a - b)$ and $n \mid (c - d)$. We can write $a - b = ns$ and $c - d = nt$ for some integers $s$ and $t$. Consequently,
$$(a + c) - (b + d) = (a - b) + (c - d) = ns + nt = n(s + t),$$
where $s + t$ is an integer. This proves that $a + c \equiv b + d \pmod{n}$. We also have
$$ac - bd = (b + ns)(d + nt) - bd = bnt + nsd + n^2 st = n(bt + sd + nst),$$
where $bt + sd + nst$ is an integer. Thus $n \mid (ac - bd)$, which means $ac \equiv bd \pmod{n}$.

Because of Theorem 5.7.3, we can add the same integer to, or multiply the same integer into, both sides of a congruence without altering the congruence.

Example 5.7.2. We can use subtraction to reduce 2370 modulo 11. Any multiple of 11 is congruent to 0 modulo 11, so we have, for example, $2370 \equiv 2370 \pmod{11}$ and $0 \equiv -2200 \pmod{11}$. Applying Theorem 5.7.3, we obtain
$$2370 \equiv 2370 - 2200 = 170 \pmod{11}.$$
What this means is: we can keep subtracting appropriate multiples of $n$ from $m$ until the answer is between 0 and $n-1$, inclusive. It does not matter which multiple of 11 you use; the point is to pick one that you can think of quickly, and keep repeating the process. Continuing in this fashion, we find
$$170 \equiv 170 - 110 = 60 \pmod{11}.$$
Since $60 - 55 = 5$, we determine that $2370 \equiv 5 \pmod{11}$.

Hands-on Exercise 5.7.2. Reduce 12457 to the smallest nonnegative residue modulo 17.

Example 5.7.3. In a similar manner, if $m$ is negative, we can keep adding multiples of $n$ to it until the answer is positive. For example,
$$-278 \equiv -278 + 330 = 52 \pmod{11},$$
and it is obvious that $52 \equiv 52 - 44 = 8 \pmod{11}$. Thus $-278 \equiv 8 \pmod{11}$.

Hands-on Exercise 5.7.3. Evaluate $-3275 \bmod 11$. This is the same as reducing $-3275$ to the smallest nonnegative residue modulo 11.

In a complicated computation, reduce the result of each intermediate step before you carry on with the next step; this simplifies the computation tremendously. To further speed up the computation, we can use negative values in intermediate steps. Nonetheless, the final answer must be between 0 and $n-1$.
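The repeated-subtraction reductions in Examples 5.7.2 and 5.7.3 can be phrased as a short loop. A minimal Python sketch (ours, not from the text; for simplicity it subtracts or adds the modulus itself, though any multiple works, as Example 5.7.2 notes):

    def reduce_mod(m: int, n: int) -> int:
        """Reduce m to its smallest nonnegative residue modulo n by
        repeatedly subtracting (or, if m is negative, adding) n."""
        while m >= n:
            m -= n
        while m < 0:
            m += n
        return m

    assert reduce_mod(2370, 11) == 5              # Example 5.7.2
    assert reduce_mod(-278, 11) == 8              # Example 5.7.3
    assert reduce_mod(-3275, 11) == -3275 % 11    # Python's % agrees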
Example 5.7.4. Reduce $37^2 \cdot 41 - 53 \cdot 2$ modulo 7.

Solution. Take note that
$$37 \equiv 37 - 35 = 2 \pmod{7}, \qquad 41 \equiv 41 - 42 = -1 \pmod{7}, \qquad 53 \equiv 53 - 49 = 4 \pmod{7}.$$
Therefore,
$$37^2 \cdot 41 - 53 \cdot 2 \equiv 2^2 \cdot (-1) - 4 \cdot 2 = -12 \pmod{7}.$$
We determine that $37^2 \cdot 41 - 53 \cdot 2 \equiv 2 \pmod{7}$.

Hands-on Exercise 5.7.4. Evaluate $56^3 \cdot 22 \cdot 17 - 35 \cdot 481 \pmod{9}$.

Tedious computations may become rather simple under modular arithmetic.

Example 5.7.5. Show that if an integer $n$ is not divisible by 3, then $n^2 - 1$ is always divisible by 3. Equivalently, show that if an integer $n$ is not divisible by 3, then $n^2 - 1 \equiv 0 \pmod{3}$.

Solution 1. Let $n$ be an integer not divisible by 3; then either $n \equiv 1 \pmod{3}$ or $n \equiv 2 \pmod{3}$.
Case 1: if $n \equiv 1 \pmod{3}$, then $n^2 - 1 \equiv 1^2 - 1 = 0 \pmod{3}$.
Case 2: if $n \equiv 2 \pmod{3}$, then $n^2 - 1 \equiv 2^2 - 1 = 3 \equiv 0 \pmod{3}$.
In both cases we have found that $n^2 - 1$ is divisible by 3.

Solution 2. Let $n$ be an integer not divisible by 3; then either $n \equiv 1 \pmod{3}$ or $n \equiv 2 \pmod{3}$. This is equivalent to saying $n \equiv \pm 1 \pmod{3}$. Then
$$n^2 - 1 \equiv (\pm 1)^2 - 1 = 1 - 1 = 0 \pmod{3},$$
which means $n^2 - 1$ is divisible by 3.

Hands-on Exercise 5.7.5. Use modular arithmetic to show that $5 \mid (n^5 - n)$ for any integer $n$.

Hands-on Exercise 5.7.6. Use modular arithmetic to show that $n(n+1)(2n+1)$ is divisible by 6 for any integer $n$.

Raising an integer to a large power poses a serious problem. We cannot just raise an integer to a large power, because the result can be so large that a calculator or computer has to convert it into a decimal value and handle it in scientific notation; the answer will then not be exact. A better solution is to reduce the intermediate result modulo $n$ after each multiplication. This produces an exact result, but it takes a long time to finish if the power is huge. Fortunately, there is a much faster way to perform exponentiation that uses far fewer multiplications.

Example 5.7.6. Evaluate $5^{29} \pmod{11}$.

Solution. First, write the exponent 29 as a sum of powers of 2. We can do this by inspection: start with the highest power of 2 that is less than or equal to 29, and work with whatever is left in the sum:
$$29 = 16 + 13 = 16 + 8 + 5 = 16 + 8 + 4 + 1.$$
We are essentially expressing 29 in base 2. We can now write
$$5^{29} = 5^{16+8+4+1} = 5^{16} \cdot 5^8 \cdot 5^4 \cdot 5.$$
These powers of 5 can be obtained by means of repeated squaring:
$$5^1 = 5, \quad 5^2, \quad 5^4 = (5^2)^2, \quad 5^8 = (5^4)^2, \quad 5^{16} = (5^8)^2, \quad \ldots$$
The iteration is simple: each new power is obtained by squaring the previous power. Since we are doing modular arithmetic, we reduce each intermediate result modulo 11:
$$\begin{aligned} 5 &= 5 \pmod{11}, \\ 5^2 &= 25 \equiv 3 \pmod{11}, \\ 5^4 &\equiv 3^2 = 9 \equiv -2 \pmod{11}, \\ 5^8 &\equiv (-2)^2 = 4 \pmod{11}, \\ 5^{16} &\equiv 4^2 = 16 \equiv 5 \pmod{11}. \end{aligned}$$
It follows that
$$5^{29} = 5^{16} \cdot 5^8 \cdot 5^4 \cdot 5 \equiv 5 \cdot 4 \cdot (-2) \cdot 5 \pmod{11}.$$
After simplification, we find $5^{29} \equiv 9 \pmod{11}$.

Hands-on Exercise 5.7.7. Use repeated squaring to find $7^{45} \pmod{11}$.

Hands-on Exercise 5.7.8. Use repeated squaring to evaluate $9^{58} \pmod{23}$.
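Repeated squaring as in Example 5.7.6 is easy to mechanize. A minimal Python sketch (the function name is ours; Python's built-in pow(a, k, n) performs the same computation):

    def power_mod(a: int, k: int, n: int) -> int:
        """Compute a**k mod n by repeated squaring.

        Each bit of k (each term of k's base-2 expansion) decides
        whether the current repeated square enters the product."""
        result = 1
        square = a % n
        while k > 0:
            if k & 1:                         # this power of 2 appears in k
                result = result * square % n
            square = square * square % n      # next repeated square, reduced
            k >>= 1
        return result

    assert power_mod(5, 29, 11) == 9              # Example 5.7.6
    assert power_mod(5, 29, 11) == pow(5, 29, 11)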
In modular arithmetic, we are basically working with the remainders only. The set of integers $\{0, 1, 2, \ldots, n-1\}$ is called the set of integers modulo $n$, and is denoted $\mathbb{Z}_n$ (pronounced "Z mod $n$"). In addition, we define two new arithmetic operations on $\mathbb{Z}_n$. They are called "addition" and "multiplication" because they work like the usual addition and multiplication, except that we apply the mod operation to the results. To distinguish them from the usual operations, we denote them $\oplus$ and $\odot$, called "circled plus" and "circled dot," respectively. Formally,
$$a \oplus b = (a + b) \bmod n, \qquad a \odot b = (a \cdot b) \bmod n.$$

The addition and multiplication tables for $\mathbb{Z}_6$ are listed below.

    ⊕ | 0 1 2 3 4 5        ⊙ | 0 1 2 3 4 5
    --+------------        --+------------
    0 | 0 1 2 3 4 5        0 | 0 0 0 0 0 0
    1 | 1 2 3 4 5 0        1 | 0 1 2 3 4 5
    2 | 2 3 4 5 0 1        2 | 0 2 4 0 2 4
    3 | 3 4 5 0 1 2        3 | 0 3 0 3 0 3
    4 | 4 5 0 1 2 3        4 | 0 4 2 0 4 2
    5 | 5 0 1 2 3 4        5 | 0 5 4 3 2 1

Compare them to the tables for $\mathbb{Z}_7$:

    ⊕ | 0 1 2 3 4 5 6        ⊙ | 0 1 2 3 4 5 6
    --+--------------        --+--------------
    0 | 0 1 2 3 4 5 6        0 | 0 0 0 0 0 0 0
    1 | 1 2 3 4 5 6 0        1 | 0 1 2 3 4 5 6
    2 | 2 3 4 5 6 0 1        2 | 0 2 4 6 1 3 5
    3 | 3 4 5 6 0 1 2        3 | 0 3 6 2 5 1 4
    4 | 4 5 6 0 1 2 3        4 | 0 4 1 5 2 6 3
    5 | 5 6 0 1 2 3 4        5 | 0 5 3 1 6 4 2
    6 | 6 0 1 2 3 4 5        6 | 0 6 5 4 3 2 1

In both addition tables, every possible value appears in every row and every column. The same is true of the nonzero rows and nonzero columns in the multiplication table for $\mathbb{Z}_7$. However, some rows in the multiplication table for $\mathbb{Z}_6$ do not contain all the integers of $\mathbb{Z}_6$. This suggests that the algebraic properties of $\mathbb{Z}_n$ depend on the value of $n$. In fact, whenever $n$ is prime, the addition and multiplication tables of $\mathbb{Z}_n$ behave like the ones for $\mathbb{Z}_7$. It can be shown that when $n$ is prime, $\mathbb{Z}_n$ has the following properties.

1. Both $\oplus$ and $\odot$ are commutative: $a \oplus b = b \oplus a$ and $a \odot b = b \odot a$ for all $a, b \in \mathbb{Z}_n$.
2. Both $\oplus$ and $\odot$ are associative: $(a \oplus b) \oplus c = a \oplus (b \oplus c)$ and $(a \odot b) \odot c = a \odot (b \odot c)$ for all $a, b, c \in \mathbb{Z}_n$.
3. The operations $\oplus$ and $\odot$ satisfy the distributive laws $a \odot (b \oplus c) = (a \odot b) \oplus (a \odot c)$ and $(b \oplus c) \odot a = (b \odot a) \oplus (c \odot a)$ for all $a, b, c \in \mathbb{Z}_n$.
4. The integer 0 is the additive identity: $a \oplus 0 = 0 \oplus a = a$ for all $a \in \mathbb{Z}_n$.
5. For every $a \in \mathbb{Z}_n$, there exists a unique integer $a' \in \mathbb{Z}_n$ such that $a \oplus a' = 0$. This integer $a'$ is called the additive inverse, or negative, of $a$, and is denoted $-a$.
6. The integer 1 is the multiplicative identity: $a \odot 1 = 1 \odot a = a$ for all $a \in \mathbb{Z}_n$.
7. For every $a \in \mathbb{Z}_n^*$ (hence $a \neq 0$), there exists a unique nonzero integer $a' \in \mathbb{Z}_n$ such that $a \odot a' = 1$. This integer $a'$ is called the multiplicative inverse, or reciprocal, of $a$, and is denoted $a^{-1}$.
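Tables like those above can be generated mechanically. A small Python sketch (ours, not from the text) that builds the $\oplus$ and $\odot$ tables for any modulus and checks the permutation property behind the inverses discussed next:

    def cayley_tables(n: int):
        """Return the addition (⊕) and multiplication (⊙) tables of Z_n
        as n-by-n lists of residues."""
        add = [[(a + b) % n for b in range(n)] for a in range(n)]
        mul = [[(a * b) % n for b in range(n)] for a in range(n)]
        return add, mul

    _, mul6 = cayley_tables(6)
    assert mul6[5] == [0, 5, 4, 3, 2, 1]   # row 5 of the Z_6 ⊙ table above

    # Every nonzero row of the Z_7 ⊙ table is a permutation of Z_7,
    # which is why every nonzero element of Z_7 has a multiplicative inverse.
    _, mul7 = cayley_tables(7)
    assert all(sorted(row) == list(range(7)) for row in mul7[1:])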
Example 5.7.7. From the tables above, only 1 and 5 have multiplicative inverses in $\mathbb{Z}_6$. In fact, $1 \cdot 1 = 1$ and $5 \cdot 5 = 1$ in $\mathbb{Z}_6$ imply that $1^{-1} = 1$ and $5^{-1} = 5$ in $\mathbb{Z}_6$. On the other hand, every nonzero integer in $\mathbb{Z}_7$ has a multiplicative inverse:
$$1^{-1} = 1, \quad 2^{-1} = 4, \quad 3^{-1} = 5, \quad 4^{-1} = 2, \quad 5^{-1} = 3, \quad 6^{-1} = 6.$$
Be sure to verify these inverses.

In general, given any set of numbers, we can define arithmetic operations in any way we like, provided that they obey certain rules. This produces an algebraic structure. For example, we call a set of elements $S$ with two binary operations $\oplus$ and $\odot$ a field, and write $\langle S, \oplus, \odot \rangle$ or $(S, \oplus, \odot)$, if it satisfies all seven properties listed above. Both $\langle \mathbb{R}, +, \cdot \rangle$ and $\langle \mathbb{Q}, +, \cdot \rangle$ are fields, but $\langle \mathbb{Z}, +, \cdot \rangle$ is not, because the multiplicative inverse of $a$ does not exist in $\mathbb{Z}$ when $a \neq \pm 1$.

Theorem 5.7.4. The algebraic structure $\langle \mathbb{Z}_n, \oplus, \odot \rangle$ is a field if and only if $n$ is prime.

Proof. Verification of most of the properties is rather straightforward, with the exception of the existence of the multiplicative inverse, which we prove here. Since $n$ is prime, any $a \in \mathbb{Z}_n^*$ must be relatively prime to $n$. Hence $as + nt = 1$ for some integers $s$ and $t$. Modulo $n$ we have $nt \equiv 0$, so $as + nt = 1$ becomes $as \equiv 1 \pmod{n}$. Therefore $a^{-1} \equiv s \pmod{n}$.

The theorem tells us that if $n$ is prime, then $\mathbb{Z}_n$ is a field, and hence every nonzero integer in it has a multiplicative inverse.

Example 5.7.8. Determine $7^{-1} \pmod{29}$.

Solution. We want to find a number $a'$ such that $7a' \equiv 1 \pmod{29}$. Note that $\gcd(7, 29) = 1$. Using the extended Euclidean algorithm, we find
$$7 \cdot (-4) + 29 \cdot 1 = 1.$$
Since $29 \cdot 1 \equiv 0 \pmod{29}$, reducing modulo 29 gives
$$7 \cdot (-4) \equiv 1 \pmod{29}.$$
This implies that $7^{-1} \equiv -4 \equiv 25 \pmod{29}$.

When $n$ is composite, $\mathbb{Z}_n$ is not a field, and not every nonzero integer in it has a multiplicative inverse. Of course, some special nonzero integers may still have multiplicative inverses.

Hands-on Exercise 5.7.9. Determine $8^{-1} \pmod{45}$.

Example 5.7.9. Solve the equation $7x - 3 = 5$ over $\mathbb{Z}_{29}$.

Solution. From $7x - 3 = 5$ we find $7x = 8$. Recall that what this equation really means is
$$7x \equiv 8 \pmod{29}.$$
The answer is not $x = \frac{8}{7}$, because $\mathbb{Z}_{29}$ contains only integers as its elements. What we should do instead is multiply both sides of the congruence by $7^{-1}$:
$$7^{-1} \cdot 7x \equiv 7^{-1} \cdot 8 \pmod{29}.$$
Since $7^{-1} \cdot 7 \equiv 1 \pmod{29}$, we now have
$$x \equiv 7^{-1} \cdot 8 \pmod{29}.$$
In a way, we use the multiplicative inverse to simulate division. In this case, $7^{-1} \equiv 25 \pmod{29}$ by Example 5.7.8, hence $x \equiv 25 \cdot 8 \equiv 26 \pmod{29}$.

Hands-on Exercise 5.7.10. Solve the equation $8x + 23 = 12$ over $\mathbb{Z}_{45}$.

Example 5.7.10. Explain why $3^{-1}$ does not exist in $\mathbb{Z}_{24}$.

Solution. Suppose $3^{-1}$ exists in $\mathbb{Z}_{24}$, say $3^{-1} \equiv z \pmod{24}$. This means $3z \equiv 1 \pmod{24}$. Hence $3z = 24q + 1$ for some integer $q$. This in turn implies that
$$1 = 3z - 24q = 3(z - 8q),$$
which is clearly impossible because $z - 8q$ is an integer. This contradiction shows that $3^{-1}$ does not exist in $\mathbb{Z}_{24}$.

Both $\mathbb{R}$ and $\mathbb{Q}$ are infinite fields, while $\mathbb{Z}_n$ is a finite field when $n$ is prime. The next result is a truly amazing one: it proclaims that the number of elements in any finite field (one with finitely many elements) must be a power of some prime. Unfortunately, we are unable to prove it here, because it is beyond the scope of this course.
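Examples 5.7.8 and 5.7.9 can be reproduced in a few lines of code. A minimal Python sketch of the extended Euclidean algorithm (the helper names are ours; Python 3.8+ also accepts pow(a, -1, n) for the same purpose):

    def extended_gcd(a: int, b: int):
        """Return (g, s, t) with g = gcd(a, b) = a*s + b*t."""
        if b == 0:
            return a, 1, 0
        g, s, t = extended_gcd(b, a % b)
        return g, t, s - (a // b) * t

    def inverse_mod(a: int, n: int) -> int:
        """Multiplicative inverse of a in Z_n; it exists iff gcd(a, n) = 1
        (compare Theorem 5.7.4 and Example 5.7.10)."""
        g, s, _ = extended_gcd(a % n, n)
        if g != 1:
            raise ValueError(f"{a} has no inverse modulo {n}")
        return s % n

    assert inverse_mod(7, 29) == 25             # Example 5.7.8
    assert inverse_mod(7, 29) * 8 % 29 == 26    # Example 5.7.9: solve 7x = 8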
Theorem 5.7.5. There exists a finite field of $n$ elements if and only if $n$ is a power of a prime.

Modular arithmetic modulo $n$ uses the mod operation to reduce the results of all computations to the range 0 through $n-1$. Instead of waiting until we obtain the final answer before reducing it modulo $n$, it is easier to reduce every intermediate result modulo $n$ before moving on to the next step of the computation. We may use negative integers in the intermediate steps. The set of integers $\{0, 1, 2, \ldots, n-1\}$, together with modular arithmetic modulo $n$, is denoted $\mathbb{Z}_n$. If $a \cdot a' \equiv 1 \pmod{n}$, we say that $a'$ is the multiplicative inverse of $a$ and denote it $a^{-1}$. For some $a \in \mathbb{Z}_n$ the multiplicative inverse $a^{-1}$ may not exist; when it exists, we can use it to simulate division.

Exercise 5.7.1. Construct the addition and multiplication tables for $\mathbb{Z}_8$. Which nonzero elements have multiplicative inverses (reciprocals)? What are their multiplicative inverses?

Exercise 5.7.2. Repeat the last problem with $\mathbb{Z}_9$.

Exercise 5.7.3. Find the sum and product of 1053 and 1761 in $\mathbb{Z}_{17}$.

Exercise 5.7.4. Some of the results we derived earlier can be easily proven via modular arithmetic. For example, show that if an integer $n$ is not divisible by 3, then $n \equiv \pm 1 \pmod{3}$. What can you say about $n^2 \pmod{3}$? Therefore, what form must $n^2$ take?

Exercise 5.7.5. Show that no integer of the form $m^2 + 1$ is a multiple of 7. Hint: what are the possible values of $m \pmod{7}$? Compare this to the last problem.

Exercise 5.7.6. What are the possible values of $m \pmod{13}$ such that $m^2 + 1$ is a multiple of 13? Hint: compute $m^2 + 1 \pmod{13}$ for each value of $m$.

Exercise 5.7.7. Find the value of $4^{45}$ in $\mathbb{Z}_{11}$
(a) using the fact that $45 = 3 \cdot 3 \cdot 5$;
(b) using repeated squaring.

Exercise 5.7.8. Use repeated squaring to evaluate $5^{23} \pmod{11}$.

Exercise 5.7.9. Solve these equations:
(a) $2x + 5 = 10$ over $\mathbb{Z}_{13}$;
(b) $37x + 28 = 25$ over $\mathbb{Z}_{57}$;
(c) $12 - 24x = 15$ over $\mathbb{Z}_{35}$.

Exercise 5.7.10. Let $p$ and $q$ be odd primes.
(a) Show that $p$ takes the form of either $6k + 1$ or $6k + 5$. Hint: first explain why being odd restricts $p$ to the forms $6k + 1$, $6k + 3$, and $6k + 5$; next, argue why $p \neq 6k + 3$. What could $p$ be congruent to, modulo 24?
(b) Show that if $p \geq q \geq 5$, then $24 \mid (p^2 - q^2)$. Hint: what are the possible values of $p^2$ and $q^2$ modulo 24?

Exercise 5.7.11. Use modular arithmetic to prove that if $n$ is an integer not divisible by 5, then $n^4 - 1$ is divisible by 5.

Exercise 5.7.12. Use modular arithmetic to prove that $8 \mid (5^{2n} + 7)$ for any integer $n \geq 0$.

Exercise 5.7.13. Use modular arithmetic to prove that $3 \mid (2^{2n} - 1)$ for any integer $n \geq 0$.

Exercise 5.7.14. Use modular arithmetic to prove that $5 \mid (3^{3n+1} + 2^{n+1})$ for any integer $n \geq 0$.

This page titled 5.7: Modular Arithmetic is shared under a CC BY-NC-SA license and was authored, remixed, and/or curated by Harris Kwong (OpenSUNY).
216
Published Time: Thu, 23 Nov 2023 02:23:15 GMT
SymPhase: Phase Symbolization for Fast Simulation of Stabilizer Circuits
arXiv:2311.03906v2 [quant-ph] 22 Nov 2023

Wang Fang, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China. [email protected]
Mingsheng Ying, State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China; Department of Computer Science and Technology, Tsinghua University, Beijing, China. [email protected]

Abstract. This paper proposes an efficient stabilizer circuit simulation algorithm that traverses the circuit forward only once. We introduce phase symbolization into stabilizer generators, which allows possible Pauli faults in the circuit to be accumulated explicitly as symbolic expressions in the phases of stabilizer generators. This way, the measurement outcomes are also symbolic expressions, and we can sample them by substituting the symbolic variables with concrete values, without traversing the circuit repeatedly. We show how to integrate symbolic phases into the stabilizer tableau and maintain them efficiently using a bit-vector encoding. A new data layout of the stabilizer tableau in memory is proposed, which improves the performance of our algorithm (and of other stabilizer simulation algorithms based on the stabilizer tableau). We implement our algorithm and data layout in a Julia package named SymPhase.jl, and compare it with Stim, the state-of-the-art simulator, on several benchmarks. We show that SymPhase.jl has superior performance in terms of sampling time, which is crucial for generating a large number of samples for further analysis.

1 Introduction

With the rapid development of quantum hardware, designing and building large-scale fault-tolerant quantum computer architectures has become an urgent task [3, 9-12]. It relies on quantum error correction (QEC) protocols, whose implementation relies on stabilizer circuits [14]. Due to the complexity and unintuitive nature of quantum systems, it is essential to have efficient methods for simulating stabilizer circuits on classical computers, as this can help us design and test circuits and protocols before deploying them on quantum hardware, much like classical EDA tools.

Fortunately, stabilizer circuits are a special class of quantum circuits that can be simulated in polynomial time on classical computers [15]. Several efficient stabilizer circuit simulators are available [2, 4, 13, 16], but they are still not sufficient for analyzing fault-tolerant gadgets. A typical example: we need to repeatedly sample the faults that occur inside the circuit of a gadget and count the measurement outcomes of the circuit under these fault samples to evaluate the performance of the gadget. Existing simulators can generate a single sample very fast, but the number of samples can reach the millions when the circuit is large, making the simulation very slow. The state-of-the-art stabilizer circuit simulator, Stim, also notes that generating samples of QEC circuits remains the bottleneck in analysis.
[Figure 1 shows a four-qubit example: a circuit containing the gates $H$, $Z^{s_1}$, $H$ and the faults $X^{s_2}$, $X^{s_3}$, $X^{s_4}$, together with the stabilizer generators of the intermediate states $|\psi_0\rangle$ through $|\psi_3\rangle$, whose phases accumulate the symbols $s_1, \ldots, s_4$; the measurement outcomes read off as $m_1 = s_1$, $m_2 = s_2$, $m_3 = s_2 \oplus s_3$, $m_4 = s_3 \oplus s_4$.]

Figure 1: Overview of phase symbolization. Pauli faults in stabilizer circuits only affect the phases of stabilizer generators. As a result, possible Pauli faults can be accumulated explicitly in the phases as symbolic expressions, making the measurement outcomes symbolic expressions as well. With these symbolic expressions, we only need to substitute the symbolic variables with concrete values to sample measurement outcomes, thus avoiding the cost of repeatedly traversing the circuit.

To address the difficulty of generating large numbers of samples of measurement outcomes, we propose a novel idea of phase symbolization for simulating stabilizer circuits. In standard stabilizer circuit simulations [2, 13, 14], where the evolution of quantum states is tracked with stabilizer generators (see the lists of Pauli strings in Fig. 1), we note that Pauli gates only affect the phases of stabilizer generators. For example, in Fig. 1, the gate $Z^{s_1}$ with $s_1 \in \{0, 1\}$ only changes a phase $(-1)^0$ of $|\psi_1\rangle$ into the phase $(-1)^{s_1}$ of $|\psi_2\rangle$. As a result, possible Pauli faults in stabilizer circuits can be accumulated in the phases with symbolic variables, as shown in Fig. 1, where $|\psi_1\rangle$ becomes $|\psi_2\rangle$ after passing through $Z^{s_1}$, $X^{s_2}$, $X^{s_3}$ and $X^{s_4}$. Introducing this symbolization does not change the control flow of the standard stabilizer circuit simulation algorithm, so we can easily extend existing algorithms with phase symbolization; but it turns the measurement outcomes into symbolic expressions, such as $m_1, m_2, m_3, m_4$ in Fig. 1. With these symbolic expressions, we can see clearly how the faults in the circuit affect the measurement outcomes, and we only need to substitute the symbolic variables with concrete values according to the fault model to sample measurement outcomes, thus avoiding the cost of repeatedly traversing the circuit.

Contribution and outline. After reviewing some background knowledge (§2), our major contributions are presented as follows:

• With phase symbolization, an algorithm (Algorithm 1) for efficiently sampling outcomes of stabilizer circuits that traverses the circuit only once is proposed (§3). Specifically, we describe how to integrate symbolic phases into the stabilizer tableau and maintain them efficiently through a bit-vector encoding, and how to turn the sampling process into bit-matrix multiplication.

• For efficient implementation of our algorithm, and also of other stabilizer simulation algorithms based on the stabilizer tableau, we propose a new data layout of the stabilizer tableau in memory (§4), which is experimentally verified to have advantages over previous tools in some cases.

• We implement our algorithm and data layout in a Julia package named SymPhase.jl and evaluate its ability to surpass the state-of-the-art simulator, Stim, in sampling stabilizer circuits on several benchmarks (§5).

Related work. Stabilizer circuit simulation is a well-studied topic in quantum computing.
A key method for simulating stabilizer circuits is the stabilizer tableau method proposed by [15] and improved by [2]. To speed up the sampling of stabilizer circuits with Pauli faults, a technique called the Pauli frame was introduced by [19]; it tracks the difference between the state with and without faults, and reduces the number of Pauli strings that need to be propagated for sampling an $n$-qubit circuit from $n$ to 1. This method was also adopted by Stim, the state-of-the-art stabilizer simulator [13]. Recently, Delfosse and Paetznick [8] proposed a method that can extract the relationship between faults and measurement outcomes by traversing the circuit backward once, which greatly improves sampling efficiency compared to previous work. Our work achieves the same result by traversing the circuit forward once, but the basic ideas of [8] and ours are fundamentally different. A comparison of the complexity of Delfosse and Paetznick's algorithm with ours is presented in Table 1.

2 Background

This paper presupposes some basic knowledge of quantum computing, such as quantum bits (qubits) and quantum circuits. Readers who are unfamiliar with these concepts can refer to the textbook by Nielsen and Chuang [17, Chapters 2 and 4].

2.1 Stabilizer Circuits

Pauli strings. There are four Pauli matrices:
$$I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad Y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$$
An $n$-qubit Pauli string is a tensor product of $n$ Pauli matrices with a phase of $\pm 1$ or $\pm i$; e.g., $-XYZI = -X \otimes Y \otimes Z \otimes I$ is a 4-qubit Pauli string. We usually omit tensor product signs. To simplify notation when dealing with multiple qubits, we also omit the $I$ matrices in Pauli strings and use subscripts to indicate the qubits on which the non-identity Pauli matrices act. For example, $X_1 Y_2 Z_3$ means applying $X$ to qubit 1, $Y$ to qubit 2, $Z$ to qubit 3, and $I$ to the rest of the qubits; when restricted to 4 qubits, $X_1 Y_2 Z_3$ is regarded as $XYZI$.

Stabilizer generators and stabilizer states. A state $|\psi\rangle$ is stabilized by a unitary $U$ if $U|\psi\rangle = |\psi\rangle$, i.e., $|\psi\rangle$ is an eigenvector of $U$ with eigenvalue 1. For example, the minus state $|-\rangle$ is stabilized by $-X$, and the Bell state $|\beta_{00}\rangle = \frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)$ is stabilized by $XX$. In this paper, we only consider states stabilized by Pauli strings. For an $n$-qubit state $|\psi\rangle$, let $\mathrm{Stab}(|\psi\rangle)$ denote the set of all $n$-qubit Pauli strings that stabilize $|\psi\rangle$. For any $P, Q \in \mathrm{Stab}(|\psi\rangle)$, we can easily check that $P \cdot Q, P \cdot Q^{-1} \in \mathrm{Stab}(|\psi\rangle)$; thus $\mathrm{Stab}(|\psi\rangle)$ is a group, and we call it the stabilizer group of $|\psi\rangle$. The independent generators of the stabilizer group, which are all Pauli strings, are called stabilizer generators. An $n$-qubit state $|\psi\rangle$ is called a stabilizer state if $\mathrm{Stab}(|\psi\rangle)$ has $n$ stabilizer generators. In this case, with the global phase ignored, $|\psi\rangle$ is the only $n$-qubit state stabilized by $\mathrm{Stab}(|\psi\rangle)$. Therefore, there is a one-to-one correspondence between a stabilizer state $|\psi\rangle$ and its stabilizer group $\mathrm{Stab}(|\psi\rangle)$.

Clifford gates and stabilizer circuits. For a state $|\psi\rangle$ and a Pauli string $P$ that stabilizes $|\psi\rangle$, a unitary $U$ transforms $|\psi\rangle$ to $U|\psi\rangle$, which can be reflected by the transformation from $P$ to $UPU^\dagger$ (conjugation by $U$), since $U|\psi\rangle$ is stabilized by $UPU^\dagger$. To ensure that $UPU^\dagger$ is still a Pauli string, we consider those unitaries $U$ that conjugate Pauli strings to Pauli strings, i.e., for any Pauli string $P$, $UPU^\dagger$ is still a Pauli string.
Such unitaries are called Clifford gates and can be constructed from the three gates
$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \quad \mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
A stabilizer circuit is a quantum circuit that uses $H$, $S$, CNOT gates (Clifford gates), computational-basis measurements, and $|0\rangle^{\otimes n}$ as the initial state. The central idea of the stabilizer formalism [17, Chapter 10.5] is to describe a state $|\psi\rangle$ by its stabilizer group $\mathrm{Stab}(|\psi\rangle)$, which can be identified by stabilizer generators. For stabilizer circuits, the initial state $|0\rangle^{\otimes n}$ is a stabilizer state with $n$ stabilizer generators $Z_1, Z_2, \ldots, Z_n$; the Clifford gates and computational-basis measurements turn it into states that also admit $n$ stabilizer generators [17, Chapter 10.5]. This idea provides an efficient simulation of stabilizer circuits by tracking stabilizer generators, sometimes known as the Gottesman-Knill theorem.

2.2 Stabilizer Tableau Simulation

The most well-known approach to simulating $n$-qubit stabilizer circuits is to maintain an $n \times (2n+1)$ stabilizer tableau $(\boldsymbol{X} \mid \boldsymbol{Z} \mid \boldsymbol{R})$ that encodes $n$ stabilizer generators $P_1, \ldots, P_n$ as follows:
$$(\boldsymbol{X} \mid \boldsymbol{Z} \mid \boldsymbol{R}) = \begin{pmatrix} x_{11} & \cdots & x_{1n} & z_{11} & \cdots & z_{1n} & r_1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n1} & \cdots & x_{nn} & z_{n1} & \cdots & z_{nn} & r_n \end{pmatrix}$$
The $i$-th row of the stabilizer tableau corresponds to a stabilizer generator $P_i$, where the bit-pair $x_{ij} z_{ij} \in \{00, 10, 01, 11\}$ denotes the $j$-th Pauli matrix, acting on the $j$-th qubit: 00 means $I$, 10 means $X$, 01 means $Z$, and 11 means $Y$; the bit $r_i$ is 0 or 1 for a positive or negative phase, respectively.

The updates corresponding to the Clifford gates $H$, $S$, CNOT require only $O(n)$ time. For example, an $H$ gate on qubit $a$ sets $r_i := r_i \oplus x_{ia} z_{ia}$ and swaps $x_{ia}$ with $z_{ia}$ for all $i \in \{1, \ldots, n\}$, which matches the conjugation of the Pauli matrices by $H$: $HXH^\dagger = Z$, $HZH^\dagger = X$, $HYH^\dagger = -Y$. However, the updates corresponding to computational-basis measurements take $O(n^3)$ time in practice [2], which is polynomial time but does not scale well enough.

The improved tableau algorithm. To improve the complexity of computational-basis measurements in tableau simulation, Aaronson and Gottesman (A-G) introduced destabilizer generators into the stabilizer tableau as follows:
$$\begin{pmatrix} \bar{\boldsymbol{X}} & \bar{\boldsymbol{Z}} & \bar{\boldsymbol{R}} \\ \boldsymbol{X} & \boldsymbol{Z} & \boldsymbol{R} \end{pmatrix} = \begin{pmatrix} \bar{x}_{11} & \cdots & \bar{x}_{1n} & \bar{z}_{11} & \cdots & \bar{z}_{1n} & \bar{r}_1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \bar{x}_{n1} & \cdots & \bar{x}_{nn} & \bar{z}_{n1} & \cdots & \bar{z}_{nn} & \bar{r}_n \\ x_{11} & \cdots & x_{1n} & z_{11} & \cdots & z_{1n} & r_1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n1} & \cdots & x_{nn} & z_{n1} & \cdots & z_{nn} & r_n \end{pmatrix} \tag{1}$$
The upper half of the tableau, $(\bar{\boldsymbol{X}} \mid \bar{\boldsymbol{Z}} \mid \bar{\boldsymbol{R}})$, represents $n$ destabilizer generators $\bar{P}_1, \ldots, \bar{P}_n$ such that $\bar{P}_i$ anticommutes with $P_i$ and commutes with $P_j$ for $j \neq i$. With the help of destabilizer generators, the updates corresponding to computational-basis measurements can be realized by a series of row operations (multiplications of two Pauli strings). The complexity of computational-basis measurements is then reduced to $O(n^2)$ time.

3 Phase Symbolization for Stabilizer Tableau

To speed up the simulation of stabilizer circuits, we introduce the concept of phase symbolization, which is based on the following key facts (Facts 1 and 2).

Fact 1. The Pauli gates $X, Y, Z$ only affect the phase parts $\boldsymbol{R}, \bar{\boldsymbol{R}}$ of the stabilizer tableau. Specifically, for all $i \in \{1, \ldots, n\}$:
• an $X$ gate on qubit $a$ sets $r_i := r_i \oplus z_{ia}$, $\bar{r}_i := \bar{r}_i \oplus \bar{z}_{ia}$;
• a $Y$ gate on qubit $a$ sets $r_i := r_i \oplus x_{ia} \oplus z_{ia}$, $\bar{r}_i := \bar{r}_i \oplus \bar{x}_{ia} \oplus \bar{z}_{ia}$;
• a $Z$ gate on qubit $a$ sets $r_i := r_i \oplus x_{ia}$, $\bar{r}_i := \bar{r}_i \oplus \bar{x}_{ia}$.
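Before stating the second fact, here is a minimal illustration of Fact 1 and of the $H$-gate update from §2.2. This is a toy Python sketch of our own (the paper's actual implementation is in Julia and packs bits into machine words); it keeps only the stabilizer half of the tableau, one byte per bit:

    import numpy as np

    class Tableau:
        """Stabilizer half of a tableau: X | Z | R over F_2 (bits as uint8)."""
        def __init__(self, n: int):
            self.x = np.zeros((n, n), dtype=np.uint8)
            self.z = np.eye(n, dtype=np.uint8)  # |0>^n is stabilized by Z_1..Z_n
            self.r = np.zeros(n, dtype=np.uint8)

        def h(self, a: int):
            # H on qubit a: r_i ^= x_ia * z_ia, then swap columns x_a and z_a.
            self.r ^= self.x[:, a] & self.z[:, a]
            self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

        # Fact 1: Pauli gates only flip phases.
        def x_gate(self, a: int): self.r ^= self.z[:, a]
        def z_gate(self, a: int): self.r ^= self.x[:, a]
        def y_gate(self, a: int): self.r ^= self.x[:, a] ^ self.z[:, a]

    t = Tableau(1)
    t.h(0)        # state |+>, stabilized by +X
    t.z_gate(0)   # Z flips the phase: the stabilizer becomes -X, i.e. |->
    assert t.r[0] == 1 and t.x[0, 0] == 1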
Fact 2. The control flow of A-G's algorithm is independent of the values of $\boldsymbol{R}, \bar{\boldsymbol{R}}$; i.e., all branches in the algorithm are determined by the $\boldsymbol{X}, \boldsymbol{Z}, \bar{\boldsymbol{X}}, \bar{\boldsymbol{Z}}$ parts of the tableau. The values of $\boldsymbol{R}, \bar{\boldsymbol{R}}$ can only affect the outcomes of measurements.

Combining Facts 1 and 2, whether a Pauli gate is applied to a qubit is reflected in whether some rows of $\boldsymbol{R}$ and $\bar{\boldsymbol{R}}$ are flipped, and hence in whether later measurement outcomes are flipped. This phenomenon is also formalized as Pauli frame propagation [19] in Stim [13]: to simulate a stabilizer circuit with Pauli faults, we first generate the noiseless measurement outcomes and then use the Pauli frame, a Pauli string that propagates through the circuit, to track the difference between the noiseless state and a sampled noisy state. This Pauli frame allows us to sample which measurements should be flipped by the noise. Since tracking the Pauli frame requires maintaining only one Pauli string, the subsequent sampling process takes $O(1)$ time per gate and measurement. However, Pauli frame propagation must go through the circuit once for each sample. In contrast, based on Facts 1 and 2, we can identify which measurements in the circuit are affected by the preceding Pauli faults (and Pauli gates) and may need to be flipped. We therefore propose symbolic phases to capture this flipping relationship.

3.1 Symbolic Phases

Instead of assigning specific values to the elements of $\boldsymbol{R}$ and $\bar{\boldsymbol{R}}$ in the stabilizer tableau, we represent them by symbolic expressions and call them symbolic phases. The stabilizer tableau becomes
$$\begin{pmatrix} \bar{\boldsymbol{X}} & \bar{\boldsymbol{Z}} & \bar{\boldsymbol{R}} \\ \boldsymbol{X} & \boldsymbol{Z} & \boldsymbol{R} \end{pmatrix} = \begin{pmatrix} \bar{x}_{11} & \cdots & \bar{x}_{1n} & \bar{z}_{11} & \cdots & \bar{z}_{1n} & \bar{S}_1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \\ \bar{x}_{n1} & \cdots & \bar{x}_{nn} & \bar{z}_{n1} & \cdots & \bar{z}_{nn} & \bar{S}_n \\ x_{11} & \cdots & x_{1n} & z_{11} & \cdots & z_{1n} & S_1 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n1} & \cdots & x_{nn} & z_{n1} & \cdots & z_{nn} & S_n \end{pmatrix} \tag{2}$$
where $S_i, \bar{S}_i$ are symbolic expressions over bit-symbols and bit-values with the operator $\oplus$.

For a better understanding, consider the following simple example circuit: qubit 1 starts in $|0\rangle$ and passes through $H$ and then $X^{s_1}$; qubit 2 starts in $|0\rangle$, is the target of a CNOT controlled by qubit 1, and then passes through $X^{s_2}$; finally both qubits are measured. Here $s_1, s_2$ are two bit-symbols indicating whether or not the corresponding $X$ gate is applied; $X^{s_1}$ and $X^{s_2}$ characterize the possible behaviors of an $X$-error on a single qubit. By A-G's algorithm, the stabilizer tableau for this circuit evolves as follows (columns $x_1\, x_2 \mid z_1\, z_2 \mid$ phase):
$$\begin{pmatrix} 1&0&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&0\\ 0&0&0&1&0 \end{pmatrix} \xrightarrow{H_1} \begin{pmatrix} 0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&0\\ 0&0&0&1&0 \end{pmatrix} \xrightarrow{\mathrm{CNOT}_{1,2}} \begin{pmatrix} 0&0&1&0&0\\ 0&1&0&0&0\\ 1&1&0&0&0\\ 0&0&1&1&0 \end{pmatrix} \xrightarrow{X_1^{s_1}} \begin{pmatrix} 0&0&1&0&s_1\\ 0&1&0&0&0\\ 1&1&0&0&0\\ 0&0&1&1&s_1 \end{pmatrix}$$
$$\xrightarrow{X_2^{s_2}} \begin{pmatrix} 0&0&1&0&s_1\\ 0&1&0&0&0\\ 1&1&0&0&0\\ 0&0&1&1&s_1 \oplus s_2 \end{pmatrix} \xrightarrow{M_1} \begin{pmatrix} 1&1&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&s_3\\ 0&0&1&1&s_1 \oplus s_2 \end{pmatrix} \xrightarrow{M_2} \begin{pmatrix} 1&1&0&0&0\\ 0&1&0&0&0\\ 0&0&1&0&s_3\\ 0&0&1&1&s_1 \oplus s_2 \end{pmatrix}$$

• The measurement on the first qubit has a random outcome; we therefore introduce a new bit-symbol $s_3$ to indicate the outcome $m_1 = s_3$. This $s_3$ is kept in the stabilizer tableau and used by subsequent operations.
• After measuring the first qubit, the measurement on the second qubit is determined, and it yields the outcome $m_2 = s_1 \oplus s_2 \oplus s_3$.

With the symbolic expressions $m_1 = s_3$ and $m_2 = s_1 \oplus s_2 \oplus s_3$, we can sample concrete values of $s_1, s_2, s_3$ and substitute them into the expressions to obtain samples of the measurement outcomes. These symbols fall into two categories:

• Symbols induced by random measurements, e.g., the symbol $s_3$ above, are sampled as 0 or 1 with probability 1/2 each.
• Symbols induced by Pauli faults are sampled according to the probability of occurrence of the Pauli strings in the faults. For example, a single-qubit $X$-error $\mathcal{E}(\rho) = (1-p)\rho + pX\rho X$ with parameter $p$ corresponds to $X^s$, and the bit-symbol $s$ is sampled as 0 or 1 with probabilities $1-p$ and $p$, respectively; a single-qubit depolarization $\mathcal{D}(\rho) = (1-p)\rho + \frac{p}{3}X\rho X + \frac{p}{3}Y\rho Y + \frac{p}{3}Z\rho Z$ corresponds to $X^{s_1}Z^{s_2}$, and the bit-symbols $s_1 s_2$ are sampled as 00, 10, 01, 11 with probabilities $1-p$, $p/3$, $p/3$, $p/3$, respectively.

For general stabilizer circuits with Pauli faults, the introduction of symbolic phases turns all measurement outcomes into symbolic expressions. Sampling the measurement outcomes then amounts to substituting the symbols according to their probabilities and evaluating the symbolic expressions. This avoids the cost of repeatedly traversing the circuit, as Pauli frame propagation must.

3.2 Tableau Algorithm with Symbolic Phases

Now that we have introduced symbolic phases, let us see in more detail how to maintain them efficiently during the simulation process and how to speed up the sampling of stabilizer circuits.

3.2.1 Representing symbolic expressions with bit-vectors. Since the symbolic expressions here involve only bit-symbols and the operator $\oplus$, we can represent them by bit-vectors. Considering that the circuit will introduce at most $n_s$ symbols, we represent each bit-symbol $s_j$, $1 \leq j \leq n_s$, by a bit-vector $\boldsymbol{s}_j$:
$$s_j \mapsto \boldsymbol{s}_j = (\delta_{0,j}\ \delta_{1,j}\ \cdots\ \delta_{n_s,j}) \in \mathbb{F}_2^{n_s+1},$$
where $\delta_{i,j} = 1$ if $i = j$ and $\delta_{i,j} = 0$ if $i \neq j$. In particular, we add a symbol $s_0$ to represent the constant 1. A symbolic expression $S = s_{j_1} \oplus s_{j_2} \oplus \cdots \oplus s_{j_k}$, $0 \leq j_1 \leq \cdots \leq j_k \leq n_s$, is then represented by the bit-vector
$$S \mapsto \boldsymbol{S} = \boldsymbol{s}_{j_1} + \boldsymbol{s}_{j_2} + \cdots + \boldsymbol{s}_{j_k} \in \mathbb{F}_2^{n_s+1},$$
where $+$ is the addition operator in $\mathbb{F}_2^{n_s+1}$, i.e., bitwise XOR. Therefore, the symbolic phases $\bar{S}_j, S_j$ in Eq. (2) are represented by bit-vectors
$$\bar{S}_j \mapsto \bar{\boldsymbol{S}}_j = (\bar{s}_{j,0}\ \bar{s}_{j,1}\ \cdots\ \bar{s}_{j,n_s}) \in \mathbb{F}_2^{n_s+1}, \qquad S_j \mapsto \boldsymbol{S}_j = (s_{j,0}\ s_{j,1}\ \cdots\ s_{j,n_s}) \in \mathbb{F}_2^{n_s+1},$$
and each measurement outcome $m_k$, which is also a symbolic expression, is represented by a bit-vector
$$m_k \mapsto \boldsymbol{m}_k = (m_{k,0}\ m_{k,1}\ \cdots\ m_{k,n_s}) \in \mathbb{F}_2^{n_s+1}.$$

3.2.2 Extending A-G's algorithm to the stabilizer tableau with symbolic phases. With the above representation, the stabilizer tableau with symbolic phases becomes a $2n \times (2n + n_s + 1)$ tableau (bit-matrix):
$$\begin{pmatrix} \bar{x}_{11} & \cdots & \bar{x}_{1n} & \bar{z}_{11} & \cdots & \bar{z}_{1n} & \bar{s}_{1,0} & \bar{s}_{1,1} & \cdots & \bar{s}_{1,n_s} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ \bar{x}_{n1} & \cdots & \bar{x}_{nn} & \bar{z}_{n1} & \cdots & \bar{z}_{nn} & \bar{s}_{n,0} & \bar{s}_{n,1} & \cdots & \bar{s}_{n,n_s} \\ x_{11} & \cdots & x_{1n} & z_{11} & \cdots & z_{1n} & s_{1,0} & s_{1,1} & \cdots & s_{1,n_s} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{n1} & \cdots & x_{nn} & z_{n1} & \cdots & z_{nn} & s_{n,0} & s_{n,1} & \cdots & s_{n,n_s} \end{pmatrix} \tag{3}$$
The first $2n + 1$ columns of Eq. (3) are the same as the original stabilizer tableau (see Eq. (1)) in A-G's algorithm [2]. We extend A-G's algorithm to Eq. (3) as follows.

(Init-C) For Clifford gates, we update the first $2n + 1$ columns of Eq. (3) as in A-G's algorithm.

(Init-P) For Pauli faults, we first decompose them into factors $X^{s_j}$ and $Z^{s_k}$. Then, for $X^{s_j}$ (resp. $Z^{s_k}$), we treat the first $2n$ columns of Eq. (3) together with the $j$-th (resp. $k$-th) symbol column as a stabilizer tableau in A-G's algorithm and update it by an $X$ (resp. $Z$) gate as in A-G's algorithm.
(Init-M) For computational-basis measurements, we update the first $2n + 1$ columns of Eq. (3) as in A-G's algorithm; whenever the algorithm adds a phase $s_{j,0}$ to another phase $s_{k,0}$ for some $j, k$, we also add the remaining $s_{j,1}, \ldots, s_{j,n_s}$ to $s_{k,1}, \ldots, s_{k,n_s}$, respectively.
– If the measurement outcome is random, we fix it to 0 and apply an $X^s$ at the measured qubit, where $s$ is a bit-symbol with sampling probabilities 1/2 and 1/2 for 0 and 1, respectively. We then record the bit-vector $\boldsymbol{s} \in \mathbb{F}_2^{n_s+1}$ of the symbol $s$ as this measurement outcome.
– If the measurement outcome is determined, the outcome output by A-G's algorithm is a sum over the phases of some rows of stabilizer generators: $s_{j_1,0} \oplus s_{j_2,0} \oplus \cdots \oplus s_{j_k,0}$. Since we also track the addition operations on the remaining $n_s$ entries of each row, we record the bit-vector $\boldsymbol{S}_{j_1} + \boldsymbol{S}_{j_2} + \cdots + \boldsymbol{S}_{j_k} \in \mathbb{F}_2^{n_s+1}$ as this measurement outcome.

3.2.3 Sampling measurement outcomes as matrix multiplication. After traversing the circuit using (Init-C), (Init-P), and (Init-M), we obtain an array of bit-vectors $\boldsymbol{m}_1, \boldsymbol{m}_2, \ldots, \boldsymbol{m}_{n_m} \in \mathbb{F}_2^{n_s+1}$ representing the measurement outcomes. For the bit-symbols $s_1, \ldots, s_{n_s}$, we sample a bit-vector $\boldsymbol{b} = (b_0\ b_1\ \cdots\ b_{n_s}) \in \mathbb{F}_2^{n_s+1}$ as described in Section 3.1, where $b_j$, $1 \leq j \leq n_s$, is the sampled bit-value for the bit-symbol $s_j$, and the first entry $b_0 = 1$ corresponds to the constant symbol $s_0$. The sampled measurement outcome for $\boldsymbol{m}_j$ is then $\boldsymbol{m}_j \boldsymbol{b}^\intercal = \sum_{k=0}^{n_s} m_{j,k} b_k \in \mathbb{F}_2$. Further, if we want to generate $n_{\mathrm{smp}}$ samples of the measurement outcomes $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m}$, we first generate $n_{\mathrm{smp}}$ bit-vectors $\boldsymbol{b}_1, \ldots, \boldsymbol{b}_{n_{\mathrm{smp}}}$ and then obtain the samples by matrix multiplication:
$$\boldsymbol{M}_{\mathrm{samples}} = \begin{pmatrix} \boldsymbol{m}_1 \\ \boldsymbol{m}_2 \\ \vdots \\ \boldsymbol{m}_{n_m} \end{pmatrix} \cdot \begin{pmatrix} \boldsymbol{b}_1^\intercal & \boldsymbol{b}_2^\intercal & \cdots & \boldsymbol{b}_{n_{\mathrm{smp}}}^\intercal \end{pmatrix} \in \mathbb{F}_2^{n_m \times n_{\mathrm{smp}}}, \tag{4}$$
where the $j$-th column of $\boldsymbol{M}_{\mathrm{samples}}$ is the $j$-th sample of the measurement outcomes.

3.2.4 Our algorithm. Based on the previous discussion, we present Algorithm 1. The algorithm takes a noisy stabilizer circuit $C$ and an integer $n_{\mathrm{smp}}$ as inputs, where $n_{\mathrm{smp}}$ is the number of samples of the measurement outcomes to generate. The distribution of Pauli faults in $C$ is given by $\mathcal{P}_C$.

Algorithm 1: Tableau Algorithm with Symbolic Phases.
Input: a noisy stabilizer circuit $C$, a noise model $\mathcal{P}_C$ for $C$'s Pauli faults, an integer $n_{\mathrm{smp}}$.
Output: $n_{\mathrm{smp}}$ samples of all the measurements in the circuit $C$.
1: $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m} \leftarrow$ Initialization($C$)
2: $\boldsymbol{M}_{\mathrm{samples}} \leftarrow$ Sampling($n_{\mathrm{smp}}$, $\mathcal{P}_C$, $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m}$)
3: return $\boldsymbol{M}_{\mathrm{samples}}$

Procedure Initialization($C$):
1: traverse circuit $C$ using (Init-C), (Init-P), and (Init-M) to obtain $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m}$
2: return $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m}$

Procedure Sampling($n_{\mathrm{smp}}$, $\mathcal{P}_C$, $\boldsymbol{m}_1, \ldots, \boldsymbol{m}_{n_m}$):
1: sample $\boldsymbol{b}_1, \ldots, \boldsymbol{b}_{n_{\mathrm{smp}}}$ from $\mathcal{P}_C$
2: $\boldsymbol{M}_{\mathrm{samples}} \leftarrow (\boldsymbol{m}_1^\intercal\ \cdots\ \boldsymbol{m}_{n_m}^\intercal)^\intercal \cdot (\boldsymbol{b}_1^\intercal\ \cdots\ \boldsymbol{b}_{n_{\mathrm{smp}}}^\intercal)$
3: return $\boldsymbol{M}_{\mathrm{samples}}$
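A minimal Python/NumPy sketch of the Sampling procedure (ours, not the paper's Julia code; function and variable names are assumptions): each outcome is a row vector over $\mathbb{F}_2^{n_s+1}$, and Eq. (4) is an ordinary matrix product reduced mod 2.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_outcomes(m_rows, probs, n_smp):
        """m_rows: (n_m, n_s + 1) bit-matrix; row k encodes the expression m_k.
        probs:  (n_s,) probability that each bit-symbol s_1..s_{n_s} is 1.
        Returns an (n_m, n_smp) bit-matrix; column j is the j-th sample (Eq. (4))."""
        n_s = probs.shape[0]
        b = np.empty((n_s + 1, n_smp), dtype=np.int64)
        b[0, :] = 1                                    # constant symbol s_0 = 1
        b[1:, :] = rng.random((n_s, n_smp)) < probs[:, None]
        return (m_rows.astype(np.int64) @ b) % 2       # matrix product over F_2

    # The two-qubit example of §3.1: m_1 = s_3, m_2 = s_1 ⊕ s_2 ⊕ s_3,
    # with symbol order (s_0, s_1, s_2, s_3).
    M = np.array([[0, 0, 0, 1],
                  [0, 1, 1, 1]])
    samples = sample_outcomes(M, np.array([0.01, 0.01, 0.5]), 10_000)

Sparse m_rows (few symbols per outcome) is exactly the "sparse circuit" case for which the paper reports the reduced $O(n_{\mathrm{smp}} n_m)$ sampling cost.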
Consider an $n$-qubit stabilizer circuit $C$ containing $n_g$ single-qubit and two-qubit gates, $n_m$ computational-basis measurements, and $n_p$ single-qubit Pauli faults.¹ The cost of Algorithm 1 divides into two parts:

• Initialization: (Init-C) costs $O(n)$ per gate, thus $O(n n_g)$ in total; (Init-P) costs $O(n)$ per single-qubit Pauli fault, thus $O(n n_p)$ in total. The $n_m$ measurements and $n_p$ single-qubit Pauli faults introduce at most $n_m + n_p + 1$ bit-symbols, so the number of columns in Eq. (3) is at most $2n + n_m + n_p + 1$; (Init-M) then costs $O(n(n + n_m + n_p))$ per measurement, thus $O(n n_m (n + n_m + n_p))$ in total. The total cost of Initialization is $O(n n_g + n n_p + n n_m (n + n_m + n_p))$. Since the cost of A-G's algorithm (without Pauli faults) is $O(n n_g + n^2 n_m)$, we write the cost of Initialization as $O(n n_g + n^2 n_m) + O(n n_m (n_m + n_p))$.

• Sampling: we only consider the cost of line 2 of the Sampling procedure.² The number of bit-symbols is at most $n_m + n_p + 1$, so the cost is bounded by that of multiplying an $n_m \times (n_m + n_p + 1)$ bit-matrix by an $(n_m + n_p + 1) \times n_{\mathrm{smp}}$ bit-matrix, which is $O(n_{\mathrm{smp}} n_m (n_m + n_p))$.

We compare the complexity of Algorithm 1 with the algorithm used by the state-of-the-art simulator Stim (Pauli frame propagation [19]) [13] and the recent ABC sim. algorithm [8] in Table 1.

Table 1: Complexity comparison of various algorithms for simulating stabilizer circuits. Our Algorithm 1 is advantageous when the circuits have a large number of quantum gates ($n_g$).

    Algorithm   | Initialization                                | Sampling ‡
    ------------+-----------------------------------------------+---------------------------------
    Stim's      | O(n n_g + n^2 n_m)                            | O(n_smp (n_g + n_m + n_p))
    ABC sim.    | O(n n_g + n^2 n_m)¶ + O(n_m (n_g + n_p))      | O(n_smp n_m (n_m + n_p)) *
    Algorithm 1 | O(n n_g + n^2 n_m) + O(n n_m (n_m + n_p))     | O(n_smp n_m (n_m + n_p)) *

$n$: number of qubits; $n_g$: number of gates; $n_m$: number of measurements; $n_p$: number of single-qubit Pauli noises; $n_{\mathrm{smp}}$: number of samples.
*: $O(n_{\mathrm{smp}} n_m)$ for sparse circuits.
‡: the cost of sampling noise from $\mathcal{P}_C$ is not included, because it is the same for all algorithms.
¶: ABC sim. obtains the flipping relationship between measurements and Pauli noises but does not obtain the noiseless measurement outcomes; thus this term should be included for ABC sim.

The algorithms differ as follows:

• Compared to Stim's, ABC sim. and our Algorithm 1 incur extra Initialization costs of $O(n_m (n_g + n_p))$ and $O(n n_m (n_m + n_p))$, respectively. However, the overhead of Algorithm 1 does not depend on the number of gates ($n_g$), while ABC sim.'s contains $n_g$. Thus our Algorithm 1 is favorable when $n_g$ is large.

• For Sampling, neither ABC sim. nor Algorithm 1 depends on $n_g$, so both improve on Stim [13, 19]. However, ABC sim. and Algorithm 1 have an additional multiplicative factor $O(n_m + n_p)$ resulting from the matrix multiplication; for sparse circuits, i.e., when each measurement outcome is related to only a small number of Pauli noises, $(\boldsymbol{m}_1^\intercal\ \cdots\ \boldsymbol{m}_{n_m}^\intercal)^\intercal$ is a column-sparse matrix, and the cost is reduced to $O(n_{\mathrm{smp}} n_m)$.

¹ All Pauli faults can be decomposed into single-qubit Pauli faults.
² We do not account for the cost of sampling the $\boldsymbol{b}_j$, because this cost depends on the noise model and is the same for all algorithms.

[Figure 2 (a): chp.c's data layout, an $n \times \lceil n/32 \rceil$ grid of UInt32 words, each interpreted as a $1 \times 32$ bit-matrix. (b): Stim's data layout, a $\lceil n/8 \rceil \times \lceil n/8 \rceil$ grid of UInt64 words, each interpreted as an $8 \times 8$ bit-matrix.]
(c) Data layout for a 512 × 512 bit-matrix: a 512 × 64 grid of UInt-8 words, each interpreted as a 1 × 8 bit-matrix.
(d) Our data layout: a ⌈n/512⌉ × ⌈n/512⌉ grid of blocks, each interpreted as a 512 × 512 bit-matrix.

Figure 2: Data layout for stabilizer tableau.

4 Data Layout for Implementation

Although the algorithms above are theoretically efficient (see Table 1), implementing them for real applications still raises practical issues and challenges. We next discuss the data layout of the stabilizer tableau for implementation.

In the implementation chp.c accompanying A-G's algorithm [2], the bits are packed into unsigned integers in memory as shown in Fig. 2a, where a UInt-32 integer is interpreted as a 1 × 32 bit-matrix. Thus, the n × ⌈n/32⌉ integer-matrix in Fig. 2a can be interpreted as an n × n bit-matrix. This offers a compact representation of the stabilizer tableau. When stored in row-major order, it also accelerates the row operations performed for measurements, because the rows are contiguous in memory. However, for quantum gates, which require column operations, the data layout in Fig. 2a is not friendly.

To balance the effects of data layout on row and column operations, Stim [13] interprets UInt-64 integers as 8 × 8 bit-matrices and places them in column-major order as in Fig. 2b. This layout with column-major order is friendly to column operations, so quantum gates can be performed quickly. For measurements, especially a series of measurements, one can temporarily transpose the tableau to row-major order, perform the series of measurements, and then transpose it back for later quantum gates. Moreover, the contiguous memory in Stim enables the application of SIMD (Single Instruction, Multiple Data) operations, which perform one instruction on multiple data elements (e.g., 256 bits / 4 × 64 bits / 8 × 32 bits) simultaneously.
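As a toy illustration of why packed, contiguous rows make row operations cheap, here is a small Python sketch. It is only a sketch: numpy's uint8 packing stands in for the UInt-32/UInt-64 words above, and none of this is chp.c's or Stim's actual code.

```python
import numpy as np

# Each tableau row of n bits is stored as ceil(n/8) packed words, so
# adding stabilizer row 1 into row 3 over F_2 is a few word-wide XORs.
n = 200
rows = np.random.default_rng(1).integers(0, 2, size=(4, n), dtype=np.uint8)

packed = np.packbits(rows, axis=1)   # shape (4, ceil(n/8)), 8 bits per word
packed[3] ^= packed[1]               # row_3 += row_1 over F_2, word by word

unpacked = np.unpackbits(packed, axis=1)[:, :n]
assert np.array_equal(unpacked[3], rows[3] ^ rows[1])
```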
Our data layout. Despite the high performance of Stim with its data layout (Fig. 2b), we observe that the transpose operation is time-consuming. Since the number of measurements in a circuit is usually smaller than the number of gates, we adopt the new layout in Fig. 2d. Each shaded block contains a UInt-8 matrix of size 512 × 64 in column-major order, which represents a 512 × 512 bit-matrix. This layout allows SIMD operations for column operations (gates). For row operations (measurements), we only perform local transpositions of the shaded blocks (Fig. 2c). Such local transpositions reduce the time required to transpose the entire bit-matrix. With local transpositions, Fig. 2c is in row-major order. For the entire bit-matrix, each row is not allocated contiguously in memory but separated into groups of 512 bits. Although this prevents us from manipulating whole rows consecutively, the fixed length of 512 bits already provides sufficient speedup.

Figure 3: Performance results of sampling layered random interaction circuits; each panel reports the time to initialize a sampler and the time to generate 10,000 samples, versus n, for SymPhase.jl and Stim. Each circuit is made up of n qubits with n layers. Each layer randomly applies an H, S, or I gate to each qubit, then applies CNOT gates, then samples 5% of the qubits to measure in the computational basis. At the end of the circuit, each qubit is measured in the computational basis. The largest circuit in the experiment reaches 1,000 qubits, 1,160,000 quantum gates, 2,000,000 Pauli faults, and 51,000 measurements. (a) Each layer randomly selects 5 pairs of qubits (1 pair in Stim's benchmark) to apply CNOT gates. (b) Each layer randomly selects ⌊n/2⌋ pairs of qubits to apply CNOT gates. (c) Each layer randomly selects ⌊n/2⌋ pairs of qubits to apply CNOT gates and additionally applies single-qubit depolarizing noise to each qubit.

5 Evaluations

We have developed a Julia [5] package named SymPhase.jl that implements Algorithm 1. SymPhase.jl uses the data layout shown in Fig. 2d for the Initialization and a sparse implementation of matrix multiplication for the Sampling. To demonstrate the efficiency of Algorithm 1 in sampling stabilizer circuits and to evaluate the performance of SymPhase.jl, we compare it with the state-of-the-art stabilizer simulator.

Baseline. The state-of-the-art stabilizer simulator known to us is Stim [13], which has not only surpassed popular simulators such as Qiskit's stabilizer method [18], Cirq's Clifford simulator [7], Aaronson and Gottesman's chp.c, and GraphSim [4] in performance, but is also being actively developed.

Benchmark. We selected three classes of randomly generated circuits for the benchmark, which are variants of the benchmark used in Stim [13]. This way, we avoid the influence of circuit structure on the comparison results. For example, circuits for LDPC codes [6] are sparse, which would give us an advantage. Detailed descriptions of these circuits are given in the caption of Fig. 3.

Environment. All experiments are carried out on a desktop with an Intel(R) Core(TM) i7-9700 CPU and 16 GB of RAM, running Ubuntu 22.04.2 LTS. The version of Stim is 1.12.0 (the latest stable version).

Result. The experimental results are shown in Figs. 3a to 3c. We report the time for Stim and SymPhase.jl to initialize a sampler (i.e., the time to analyze the input circuit and create a sampler for generating the measurement results) and the time for Stim's and SymPhase.jl's samplers to generate 10,000 samples of measurement results. SymPhase.jl outperforms Stim on all benchmarks in terms of sampling time, which validates the advantages of our algorithm (Algorithm 1) and our package (SymPhase.jl) for sampling stabilizer circuits. On the other hand, our algorithm has an overhead for symbolic phases, which makes SymPhase.jl consume more time than Stim in initializing samplers. However, this overhead is incurred only once, and the performance of the sampler is what matters when generating a large number of samples for further analysis. Moreover, we observe that in Fig. 3a SymPhase.jl has a better initialization time than Stim, which indicates that our data layout is beneficial in certain situations. This is worth further investigation.
6 Conclusion

We have presented phase symbolization for fast simulation of stabilizer circuits without traversing the circuit repeatedly. With a new layout of the stabilizer tableau, we implemented a package, SymPhase.jl, which has been experimentally shown to surpass the existing state-of-the-art tool in sampling stabilizer circuits. We believe that our techniques can provide a useful tool for simulating and analyzing stabilizer circuits, especially for fault-tolerant quantum computing. We also expect that our ideas and techniques can be used in other tools for similar or related tasks.

Under phase symbolization, the measurement outcomes are symbolic expressions $e$, which can be used to conditionally apply Pauli gates $X^e$, $Y^e$, $Z^e$, as we have done for Pauli faults. This will allow us to achieve better results in dynamic/sequential stabilizer circuit simulations. The data layout of the stabilizer tableau in memory is crucial for implementation; our layout and Stim's [13] layout have different advantages on different circuits, so dynamically determining the layout based on the type/pattern of the circuit would be very helpful for improving the performance of the tool.

Acknowledgments

We thank Kean Chen for insightful discussions and Craig Gidney for pointing out our previous inappropriate use of Stim. This work was partly supported by the National Natural Science Foundation of China under Grant No. 61832015.

References

[1] Scott Aaronson. 2004. chp.c.
[2] Scott Aaronson and Daniel Gottesman. 2004. Improved simulation of stabilizer circuits. Phys. Rev. A 70, 052328 (Nov 2004), Issue 5.
[3] Google Quantum AI. 2023. Suppressing quantum errors by scaling a surface code logical qubit. Nature 614, 7949 (Feb 2023), 676–681.
[4] Simon Anders and Hans J. Briegel. 2006. Fast simulation of stabilizer circuits using a graph-state representation. Phys. Rev. A 73, 022334 (Feb 2006), Issue 2.
[5] Jeff Bezanson, Alan Edelman, Stefan Karpinski, and Viral B. Shah. 2017. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 59, 1 (2017), 65–98.
[6] Nikolas P. Breuckmann and Jens Niklas Eberhardt. 2021. Quantum Low-Density Parity-Check Codes. PRX Quantum 2, 040101 (Oct 2021), Issue 4.
[7] Cirq Developers. 2018. Cirq.
[8] Nicolas Delfosse and Adam Paetznick. 2023. Simulation of noisy Clifford circuits without fault propagation. arXiv:2309.15345 [quant-ph].
[9] M. H. Abobeih et al. 2022. Fault-tolerant operation of a logical qubit in a diamond quantum processor. Nature 606, 7916 (Jun 2022), 884–889.
[10] Qian Xu et al. 2023. Constant-Overhead Fault-Tolerant Quantum Computation with Reconfigurable Atom Arrays. arXiv:2308.08648 [quant-ph].
[11] Sergey Bravyi et al. 2023. High-threshold and low-overhead fault-tolerant quantum memory. arXiv:2308.07915 [quant-ph].
[12] Youwei Zhao et al. 2022. Realization of an Error-Correcting Surface Code with Superconducting Qubits. Phys. Rev. Lett. 129, 030501 (Jul 2022), Issue 3.
[13] Craig Gidney. 2021. Stim: a fast stabilizer circuit simulator. Quantum 5 (July 2021), 497.
[14] Daniel Gottesman. 1997. Stabilizer codes and quantum error correction. Ph.D. Dissertation. California Institute of Technology. arXiv:quant-ph/9705052.
[15] Daniel Gottesman. 1998. The Heisenberg Representation of Quantum Computers. arXiv:quant-ph/9807006 [quant-ph].
[16] Stefan Krastanov. 2019. Accessed on 2023-10-31.
[17] Michael A. Nielsen and Isaac L. Chuang. 2010. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press.
[18] Qiskit Community. 2017. Qiskit: An Open-Source Framework for Quantum Computing.
[19] Patrick Rall, Daniel Liang, Jeremy Cook, and William Kretschmer. 2019. Simulation of qubit quantum circuits via Pauli propagation. Phys. Rev. A 99, 062337 (Jun 2019), Issue 6.
217
Complex travelling wave solutions to the KdV and Burgers equations - ScienceDirect

Applied Mathematics and Computation, Volume 162, Issue 2, 15 March 2005, Pages 925-930

Complex travelling wave solutions to the KdV and Burgers equations

Hilmi Demiray

Abstract

In the present work, making use of the hyperbolic tangent method, some complex travelling wave solutions to the Korteweg-de Vries and Burgers equations are obtained. It is observed that the real part of the solution for the Burgers equation is of shock type, whereas the imaginary part is a localized travelling wave. However, for the solution of the Korteweg-de Vries equation, the real part is a solitary wave while the imaginary part is the product of a solitary wave with a shock.

Introduction

It is well known that many physical phenomena can be described by the Korteweg-de Vries and Burgers equations. They arise in various contexts as model equations incorporating the effects of nonlinearity, dispersion and dissipation. For instance, Johnson, Demiray, and Antar and Demiray derived the Korteweg-de Vries–Burgers (KdVB) equation as the governing evolution equation for waves propagating in fluid-filled elastic or viscoelastic tubes in which the effects of nonlinearity, dispersion and dissipation are present. The general form of the KdVB equation is
$$\frac{\partial u}{\partial t} + \mu_1 u \frac{\partial u}{\partial x} + \mu_2 \frac{\partial^2 u}{\partial x^2} + \mu_3 \frac{\partial^3 u}{\partial x^3} = 0,$$
where $\mu_1$, $\mu_2$ and $\mu_3$ are constant coefficients. In the limiting cases $\mu_3 \to 0$ and $\mu_2 \to 0$, the evolution equation reduces, respectively, to the well-known conventional Burgers and KdV equations
$$\frac{\partial u}{\partial t} + \mu_1 u \frac{\partial u}{\partial x} + \mu_2 \frac{\partial^2 u}{\partial x^2} = 0, \qquad \frac{\partial u}{\partial t} + \mu_1 u \frac{\partial u}{\partial x} + \mu_3 \frac{\partial^3 u}{\partial x^3} = 0.$$
These equations are both exactly solvable, and each has a wide range of applications in physical problems. They admit travelling wave solutions of the following forms:
$$u = \tfrac{1}{2}(U_\infty^+ + U_\infty^-) + \tfrac{1}{2}(U_\infty^+ - U_\infty^-)\tanh\zeta, \qquad \zeta = \frac{\mu_1}{4\mu_2}(U_\infty^+ - U_\infty^-)\left[x - \frac{\mu_1}{2}(U_\infty^+ + U_\infty^-)\,t\right]$$
for the Burgers equation, and, for the Korteweg-de Vries equation,
$$u = a\,\operatorname{sech}^2\zeta, \qquad \zeta = \left(\frac{\mu_1 a}{12\mu_3}\right)^{1/2}\left(x - \frac{\mu_1 a}{3}\,t\right),$$
where $U_\infty^+$ and $U_\infty^-$ are the uniform state values of the Burgers shock as $\zeta \to \infty$ and $\zeta \to -\infty$, respectively, and $a$ is the amplitude of the localized travelling wave (solitary wave). All of these solutions are real. In the present work, utilizing the hyperbolic tangent method, a set of complex travelling wave solutions is obtained for the Burgers and Korteweg-de Vries equations.

Section snippets

Transformation and exact solutions. For our future purposes, it is convenient to write Eq. (1) in the form
$$\frac{\partial u}{\partial t} + \frac{\partial}{\partial x}\left[\frac{\mu_1}{2}u^2 + \mu_2\frac{\partial u}{\partial x} + \mu_3\frac{\partial^2 u}{\partial x^2}\right] = 0.$$
The form of Eq. (6) suggests introducing the potential $f(x,t)$ via
$$u = \frac{\partial f}{\partial x}, \qquad \frac{\mu_1}{2}u^2 + \mu_2\frac{\partial u}{\partial x} + \mu_3\frac{\partial^2 u}{\partial x^2} = -\frac{\partial f}{\partial t} + A,$$
where $A$ is an arbitrary constant to be determined from the solution. Eliminating $u$ between Eqs. (7), the following governing equation is obtained for $f(x,t)$:
$$\frac{\partial f}{\partial t} + \frac{\mu_1}{2}\left(\frac{\partial f}{\partial x}\right)^2 + \mu_2\frac{\partial^2 f}{\partial x^2} + \mu_3\frac{\partial^3 f}{\partial x^3} = A.$$
Now, we shall seek a progressive wave solution to Eq. (8) of the form $f = F(\zeta)$, $\zeta = \alpha(x - ct)$, where $\alpha$

Acknowledgements

This work was supported by the Turkish Academy of Sciences.
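The KdV solitary-wave solution quoted above can be checked mechanically. The following SymPy sketch is our own illustration (not from the paper): it substitutes $u = a\,\operatorname{sech}^2\zeta$ with $\zeta = (\mu_1 a/12\mu_3)^{1/2}(x - (\mu_1 a/3)t)$ into the KdV equation and simplifies the residual to zero.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
a, mu1, mu3 = sp.symbols('a mu1 mu3', positive=True)

# Solitary-wave ansatz from the introduction:
zeta = sp.sqrt(mu1 * a / (12 * mu3)) * (x - (mu1 * a / 3) * t)
u = a * sp.sech(zeta) ** 2

# Residual of the KdV equation u_t + mu1*u*u_x + mu3*u_xxx.
residual = sp.diff(u, t) + mu1 * u * sp.diff(u, x) + mu3 * sp.diff(u, x, 3)
print(sp.simplify(residual.rewrite(sp.exp)))  # prints 0
```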
References (5)

- R.S. Johnson. A nonlinear equation incorporating damping and dispersion. J. Fluid Mech. (1970).
- H. Demiray. Nonlinear waves in a thick walled viscoelastic tube filled with an inviscid fluid. Int. J. Eng. Sci. (1998).

There are more references available in the full text version of this article.

Cited by (5)

- Application of the (G′/G)-expansion method for the complex KdV equation. 2010, Communications in Nonlinear Science and Numerical Simulation. Abstract: The (G′/G)-expansion method can be used for constructing exact travelling wave solutions of real nonlinear evolution equations. In this paper, we improve the (G′/G)-expansion method and explore a new application of this method to the complex KdV equation. New types of exact travelling wave solutions of the complex KdV equation are found. Some exact solutions of the complex KdV equation obtained before are special cases of our results in this paper.
- A new sub-equation method applied to obtain exact travelling wave solutions of some complex nonlinear equations. 2009, Chaos, Solitons and Fractals. Abstract: By using new coupled Riccati equations, a direct algebraic method, which was applied to obtain exact travelling wave solutions of some complex nonlinear equations, is improved. The exact travelling wave solutions of the complex KdV equation, Boussinesq equation and Klein–Gordon equation are investigated using the improved method. The method presented in this paper can also be applied to construct exact travelling wave solutions for other nonlinear complex equations.
- New singular solutions of KdV and mKdV equations. 2012, Huanan Ligong Daxue Xuebao / Journal of South China University of Technology (Natural Science).
- Application of improved (G′/G)-expansion method for the complex KdV equation. 2012, Advanced Science Letters.
- Traveling-wave solutions for Korteweg-de Vries–Burgers equations through factorizations. 2006, Foundations of Physics.

Copyright © 2004 Published by Elsevier Inc.
218
Published Time: 2001-11-09T05:07:29Z

Nim - Wikipedia

Nim

Game of strategy. This article is about the mathematical game of strategy. For the programming language, see Nim (programming language). For other uses, see Nim (disambiguation).

Nim. Matches set up in rows for a game of Nim; players take turns to choose a row and remove any number of matches from it. Genres: mathematical game, abstract strategy game. Players: 2. Chance: none.

Nim is a mathematical combinatorial game in which two players take turns removing (or "nimming") objects from distinct heaps or piles. On each turn, a player must remove at least one object, and may remove any number of objects provided they all come from the same heap or pile. Depending on the version being played, the goal of the game is either to avoid taking the last object or to take the last object.

Nim is fundamental to the Sprague–Grundy theorem, which essentially says that every impartial game is equivalent to a nim game with a single pile.

History

Variants of nim have been played since ancient times. The game is said to have originated in China—it closely resembles the Chinese game of jiǎn-shízǐ (捡石子), or "picking stones"—but the origin is uncertain; the earliest European references to nim are from the beginning of the 16th century. Its current name was coined by Charles L. Bouton of Harvard University, who also developed the complete theory of the game in 1901, but the origins of the name were never fully explained. The Oxford English Dictionary derives the name from the German verb nimm, meaning "take".

At the 1939 New York World's Fair, Westinghouse displayed a machine, the Nimatron, that played nim. From May 11 to October 27, 1940, only a few people were able to beat the machine in that six-month period; those who did were presented with a coin that said "Nim Champ". It was also one of the first-ever electronic computerized games.
Ferranti built a nim-playing computer which was displayed at the Festival of Britain in 1951. In 1952, Herbert Koppel, Eugene Grant and Howard Baller, engineers from the W. L. Maxson Corporation, developed a machine weighing 23 kilograms (50 lb) which played nim against a human opponent and regularly won. A nim-playing machine made from Tinkertoys has also been described.

The game of nim was the subject of Martin Gardner's February 1958 Mathematical Games column in Scientific American. A version of nim is played—and has symbolic importance—in the French New Wave film Last Year at Marienbad (1961).

Game play and illustration

Nim is typically played as a misère game, in which the player to take the last object loses. Nim can also be played as a "normal play" game, whereby the player taking the last object wins. In either normal play or a misère game, when there is exactly one heap with at least two objects, the player who takes next can easily win. If this removes either all or all but one object from the heap that has two or more, then no heaps will have more than one object, so the players are forced to alternate removing exactly one object until the game ends. If the player leaves an even number of non-zero heaps (as the player would do in normal play), the player takes last; if the player leaves an odd number of heaps (as the player would do in misère play), then the other player takes last.

The normal game is between two players and is played with three heaps of any number of objects. The two players alternate taking any number of objects from any one of the heaps. The goal is to be the last to take an object. In misère play, the goal is instead to ensure that the opponent is forced to take the last remaining object.

The following example of a normal game is played between fictional players Bob and Alice, who start with heaps of three, four and five objects.

| Heap A | Heap B | Heap C | Move |
| --- | --- | --- | --- |
| 3 | 4 | 5 | Game begins |
| 1 | 4 | 5 | Bob takes 2 from A |
| 1 | 4 | 2 | Alice takes 3 from C |
| 1 | 3 | 2 | Bob takes 1 from B |
| 1 | 2 | 2 | Alice takes 1 from B |
| 0 | 2 | 2 | Bob takes entire A heap, leaving two 2s |
| 0 | 1 | 2 | Alice takes 1 from B |
| 0 | 1 | 1 | Bob takes 1 from C, leaving two 1s. (In misère play he would take 2 from C, leaving [0, 1, 0]) |
| 0 | 0 | 1 | Alice takes 1 from B |
| 0 | 0 | 0 | Bob takes entire C heap and wins |

Winning positions

The practical strategy to win at the game of nim is for a player to get the other into one of the following positions; on every successive turn afterwards they should be able to move to one of the smaller positions. Only the last move changes between misère and normal play.

| 2 heaps | 3 heaps | 4 heaps |
| --- | --- | --- |
| 1 1 | 1 1 1 | 1 1 1 1 |
| 2 2 | 1 2 3 | 1 1 n n |
| 3 3 | 1 4 5 | 1 2 4 7 |
| 4 4 | 1 6 7 | 1 2 5 6 |
| 5 5 | 1 8 9 | 1 3 4 6 |
| 6 6 | 2 4 6 | 1 3 5 7 |
| 7 7 | 2 5 7 | 2 3 4 5 |
| 8 8 | 3 4 7 | 2 3 6 7 |
| 9 9 | 3 5 6 | 2 3 8 9 |
| n n | 4 8 12 | 4 5 6 7 |
| | 4 9 13 | 4 5 8 9 |
| | 5 8 13 | n n m m |
| | 5 9 12 | n n n n |

Only valid for normal play. Only valid for misère. For the generalisations, n and m can be any value > 0, and they may be the same.
Mathematical theory

Normal-play nim (or more precisely the system of nimbers) is fundamental to the Sprague–Grundy theorem, which essentially says that in normal play every impartial game is equivalent to a nim heap that yields the same outcome when played in parallel with other normal-play impartial games (see disjunctive sum).

While all normal-play impartial games can be assigned a nim value, that is not the case under the misère convention. Only tame games can be played using the same strategy as misère nim.

Nim is a special case of a poset game where the poset consists of disjoint chains (the heaps). The evolution graph of the game of nim with three heaps is the same as three branches of the evolution graph of the Ulam–Warburton automaton.

Nim has been mathematically solved for any number of initial heaps and objects, and there is an easily calculated way to determine which player will win and which winning moves are open to that player. The key to the theory of the game is the binary digital sum of the heap sizes, i.e., the sum (in binary), neglecting all carries from one digit to another. This operation is also known as "bitwise xor" or "vector addition over GF(2)" (bitwise addition modulo 2). Within combinatorial game theory it is usually called the nim-sum, as it will be called here. The nim-sum of x and y is written x ⊕ y to distinguish it from the ordinary sum, x + y. An example of the calculation with heaps of size 3, 4, and 5 is as follows:

| Binary | Decimal | |
| --- | --- | --- |
| 011₂ | 3₁₀ | Heap A |
| 100₂ | 4₁₀ | Heap B |
| 101₂ | 5₁₀ | Heap C |
| 010₂ | 2₁₀ | The nim-sum of heaps A, B, and C: 3 ⊕ 4 ⊕ 5 = 2 |

An equivalent procedure, which is often easier to perform mentally, is to express the heap sizes as sums of distinct powers of 2, cancel pairs of equal powers, and then add what is left:

3 = 0 + 2 + 1 (Heap A)
4 = 4 + 0 + 0 (Heap B)
5 = 4 + 0 + 1 (Heap C)
2 = 2 (what is left after cancelling the 1s and 4s)

In normal play, the winning strategy is to finish every move with a nim-sum of 0. This is always possible if the nim-sum is not zero before the move. If the nim-sum is zero, then the next player will lose if the other player does not make a mistake. To find out which move to make, let X be the nim-sum of all the heap sizes. Find a heap where the nim-sum of X and the heap size is less than the heap size; the winning strategy is to play in such a heap, reducing that heap to the nim-sum of its original size with X.

In the example above, the nim-sum of the sizes is X = 3 ⊕ 4 ⊕ 5 = 2. The nim-sums of the heap sizes A = 3, B = 4, and C = 5 with X = 2 are

A ⊕ X = 3 ⊕ 2 = 1 [since (011) ⊕ (010) = 001]
B ⊕ X = 4 ⊕ 2 = 6
C ⊕ X = 5 ⊕ 2 = 7

The only heap that is reduced is heap A, so the winning move is to reduce the size of heap A to 1 (by removing two objects).

As a particularly simple case, if there are only two heaps left, the strategy is to reduce the number of objects in the bigger heap to make the heaps equal. After that, no matter what move the opponent makes, the player can make the same move on the other heap, guaranteeing that they take the last object.

When played as a misère game, nim strategy is different only when the normal-play move would leave only heaps of size one. In that case, the correct move is to leave an odd number of heaps of size one (in normal play, the correct move would be to leave an even number of such heaps).

These strategies for normal play and a misère game are the same until the number of heaps with at least two objects is exactly equal to one. At that point, the next player removes either all objects (or all but one) from the heap that has two or more, so no heaps will have more than one object (in other words, all remaining heaps have exactly one object each), and the players are forced to alternate removing exactly one object until the game ends. In normal play, the player leaves an even number of non-zero heaps, so the same player takes last; in misère play, the player leaves an odd number of non-zero heaps, so the other player takes last.
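The winning-move computation just described fits in a few lines of code. The following Python sketch (our own illustration of the strategy above; the function name is hypothetical) finds a normal-play winning move when one exists:

```python
from functools import reduce
from operator import xor

def nim_winning_move(heaps):
    """Return (heap_index, new_size) for a normal-play winning move,
    or None if the position already has nim-sum 0 (a losing position)."""
    x = reduce(xor, heaps, 0)      # X, the nim-sum of all heap sizes
    if x == 0:
        return None
    for k, size in enumerate(heaps):
        if size ^ x < size:        # reduce this heap to size XOR X
            return k, size ^ x

print(nim_winning_move([3, 4, 5]))  # (0, 1): take two objects from heap A
```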
In a misère game with heaps of sizes three, four and five, the strategy would be applied like this:

| A | B | C | Nim-sum | Move |
| --- | --- | --- | --- | --- |
| 3 | 4 | 5 | 010₂ = 2₁₀ | I take 2 from A, leaving a sum of 000, so I will win. |
| 1 | 4 | 5 | 000₂ = 0₁₀ | You take 2 from C |
| 1 | 4 | 3 | 110₂ = 6₁₀ | I take 2 from B |
| 1 | 2 | 3 | 000₂ = 0₁₀ | You take 1 from C |
| 1 | 2 | 2 | 001₂ = 1₁₀ | I take 1 from A |
| 0 | 2 | 2 | 000₂ = 0₁₀ | You take 1 from C |
| 0 | 2 | 1 | 011₂ = 3₁₀ | The normal-play strategy would be to take 1 from B, leaving an even number (2) of heaps of size 1. For misère play, I take the entire B heap, to leave an odd number (1) of heaps of size 1. |
| 0 | 0 | 1 | 001₂ = 1₁₀ | You take 1 from C, and lose. |

Proof of the winning formula

The soundness of the optimal strategy described above was demonstrated by C. Bouton.

Theorem. In a normal nim game, the player making the first move has a winning strategy if and only if the nim-sum of the sizes of the heaps is not zero. Otherwise, the second player has a winning strategy.

Proof: Notice that the nim-sum (⊕) obeys the usual associative and commutative laws of addition (+) and also satisfies an additional property, x ⊕ x = 0.

Let x₁, ..., xₙ be the sizes of the heaps before a move, and y₁, ..., yₙ the corresponding sizes after a move. Let s = x₁ ⊕ ⋯ ⊕ xₙ and t = y₁ ⊕ ⋯ ⊕ yₙ. If the move was in heap k, we have xᵢ = yᵢ for all i ≠ k, and xₖ > yₖ. By the properties of ⊕ mentioned above,

t = 0 ⊕ t
  = s ⊕ s ⊕ t
  = s ⊕ (x₁ ⊕ ⋯ ⊕ xₙ) ⊕ (y₁ ⊕ ⋯ ⊕ yₙ)
  = s ⊕ (x₁ ⊕ y₁) ⊕ ⋯ ⊕ (xₙ ⊕ yₙ)
  = s ⊕ 0 ⊕ ⋯ ⊕ 0 ⊕ (xₖ ⊕ yₖ) ⊕ 0 ⊕ ⋯ ⊕ 0
  = s ⊕ xₖ ⊕ yₖ.    (∗)

That is, to update the total nim-sum s after the heap xₖ changes, we cancel xₖ from s by nim-summing with xₖ, and then nim-sum in yₖ.

The theorem follows by induction on the length of the game from these two lemmas.

Lemma 1. If s = 0, then t ≠ 0 no matter what move is made.

Proof: If there is no possible move, then the lemma is vacuously true (and the first player loses the normal-play game by definition). Otherwise, any move in heap k will produce t = xₖ ⊕ yₖ by (∗). This number is nonzero, since xₖ ≠ yₖ.

Lemma 2. If s ≠ 0, it is possible to make a move so that t = 0.

Proof: Let d be the position of the leftmost (most significant) nonzero bit in the binary representation of s, and choose k such that the d-th bit of xₖ is also nonzero. (Such a k must exist, since otherwise the d-th bit of s would be 0.)
Then, letting yₖ = s ⊕ xₖ, we claim that yₖ < xₖ: all bits to the left of d are the same in xₖ and yₖ, bit d decreases from 1 to 0 (decreasing the value by 2^d), and any change in the remaining bits amounts to at most 2^d − 1. The first player can thus make a move by taking xₖ − yₖ objects from heap k; then

t = s ⊕ xₖ ⊕ yₖ (by (∗)) = s ⊕ xₖ ⊕ (s ⊕ xₖ) = 0.

The modification for misère play is demonstrated by noting that the modification first arises in a position that has only one heap of size 2 or more. Notice that in such a position s ≠ 0, and therefore this situation has to arise when it is the turn of the player following the winning strategy. The normal-play strategy is for the player to reduce this heap to size 0 or 1, leaving an even number of heaps of size 1, and the misère strategy is to do the opposite. From that point on, all moves are forced.

Variations

The subtraction game

In another game which is commonly known as nim (but is better called the subtraction game), an upper bound is imposed on the number of objects that can be removed in a turn. Instead of removing arbitrarily many objects, a player can only remove 1 or 2 or ... or k at a time. This game is commonly played in practice with only one heap.

Bouton's analysis carries over easily to the general multiple-heap version of this game. The only difference is that, as a first step, before computing the nim-sums we must reduce the sizes of the heaps modulo k + 1. If this makes all the heaps of size zero (in misère play), the winning move is to take k objects from one of the heaps. In particular, in ideal play from a single heap of n objects, the second player can win if and only if n ≡ 0 (mod k + 1) in normal play, or n ≡ 1 (mod k + 1) in misère play. This follows from calculating the nim-sequence of S(1, 2, ..., k), which is the block 0, 1, 2, 3, ..., k repeating forever, from which the strategy above follows by the Sprague–Grundy theorem. A small sketch of this single-heap strategy appears after the next two subsections.

The 21 game

The game "21" is played as a misère game with any number of players who take turns saying a number. The first player says "1" and each player in turn increases the number by 1, 2, or 3, but may not exceed 21; the player forced to say "21" loses. This can be modeled as a subtraction game with a heap of 21 − n objects. The winning strategy for the two-player version of this game is to always say a multiple of 4; it is then guaranteed that the other player will ultimately have to say 21; so in the standard version, wherein the first player opens with "1", they start with a losing move.

The 21 game can also be played with different numbers, e.g., "Add at most 5; lose on 34".

A sample game of 21 in which the second player follows the winning strategy:

| Player | Number |
| --- | --- |
| 1 | 1 |
| 2 | 4 |
| 1 | 5, 6 or 7 |
| 2 | 8 |
| 1 | 9, 10 or 11 |
| 2 | 12 |
| 1 | 13, 14 or 15 |
| 2 | 16 |
| 1 | 17, 18 or 19 |
| 2 | 20 |
| 1 | 21 |

The 100 game

A similar version is the "100 game": two players start from 0 and alternately add a number from 1 to 10 to the sum. The player who reaches 100 wins. The winning strategy is to reach a number in which the digits are subsequent (e.g., 01, 12, 23, 34, ...) and control the game by jumping through all the numbers of this sequence. Once a player reaches 89, the opponent can only choose numbers from 90 to 99, and the next answer can in any case be 100.
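As promised above, here is a minimal Python sketch of the single-heap subtraction-game strategy (our own illustration; the function name and the misère flag are hypothetical):

```python
def subtraction_move(n, k=3, misere=False):
    """Single-heap subtraction game S(1, ..., k) with n objects left.
    Return how many objects to take for a winning move, or None when the
    position is lost: n % (k+1) == 0 in normal play, == 1 in misere play."""
    target = 1 if misere else 0
    take = (n - target) % (k + 1)
    return take if 1 <= take <= k else None

# The 21 game is the misere game on a heap of 21 - (number said so far):
print(subtraction_move(21, misere=True))  # None: the first player starts lost
print(subtraction_move(20, misere=True))  # 3: after "1", reply by counting to 4
```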
A multiple-heap rule

In another variation of nim, besides removing any number of objects from a single heap, one is permitted to remove the same number of objects from each heap.

Circular nim

Yet another variation of nim is "circular nim", wherein any number of objects are placed in a circle and two players alternately remove one, two or three adjacent objects. For example, starting with a circle of ten objects,

. . . . . . . . . .

three objects are taken in the first move

_ . . . . . . . _ _

then another three

_ . _ _ _ . . . _ _

then one

_ . _ _ _ . . _ _ _

but then three objects cannot be taken out in one move.

Grundy's game

In Grundy's game, another variation of nim, a number of objects are placed in an initial heap and two players alternately divide a heap into two nonempty heaps of different sizes. Thus, six objects may be divided into piles of 5+1 or 4+2, but not 3+3. Grundy's game can be played as either misère or normal play.

Greedy nim

Greedy nim is a variation wherein the players are restricted to choosing stones from only the largest pile. It is a finite impartial game. Greedy nim misère has the same rules as greedy nim, but the last player able to make a move loses.

Let the largest number of stones in a pile be m and the second largest be n. Let p_m be the number of piles having m stones and p_n be the number of piles having n stones. Then there is a theorem that game positions with p_m even are P-positions. This theorem can be shown by considering the positions where p_m is odd. If p_m is larger than 1, all stones may be removed from one largest pile to reduce p_m by 1, and the new p_m is even. If p_m = 1 (i.e. the largest heap is unique), there are two cases:

- If p_n is odd, the size of the largest heap is reduced to n (so now the new p_m is even).
- If p_n is even, the largest heap is removed entirely, leaving an even number of largest heaps.

Thus, there exists a move to a state where p_m is even. Conversely, if p_m is even and any move is possible (p_m ≠ 0), then every move takes the game to a state where p_m is odd. The final position of the game has p_m even (p_m = 0). Hence, each position of the game with p_m even must be a P-position.

Index-k nim

A generalization of multi-heap nim was called "nim_k" or "index-k" nim by E. H. Moore, who analyzed it in 1910. In index-k nim, instead of removing objects from only one heap, players can remove objects from at least one but at most k different heaps. The number of elements that may be removed from each heap may be either arbitrary or limited to at most r elements, as in the "subtraction game" above.

The winning strategy is as follows: as in ordinary multi-heap nim, one considers the binary representation of the heap sizes (or the heap sizes modulo r + 1). In ordinary nim one forms the XOR-sum (or sum modulo 2) of each binary digit, and the winning strategy is to make each XOR-sum zero. In the generalization to index-k nim, one forms the sum of each binary digit modulo k + 1. Again, the winning strategy is to move so that this sum is zero for every digit. Indeed, the value thus computed is zero for the final position, and given a configuration of heaps for which this value is zero, any change of at most k heaps will make the value non-zero. Conversely, given a configuration with non-zero value, one can always take from at most k heaps, carefully chosen, so that the value will become zero.
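The per-digit value that drives Moore's strategy is easy to compute. A minimal Python sketch (our own illustration, with a hypothetical function name):

```python
def moore_digit_sums(heaps, k):
    """Per-binary-digit sums of the heap sizes modulo k+1.
    The position is a P-position of index-k nim iff every entry is zero."""
    sums, bit = [], 0
    while any(h >> bit for h in heaps):
        sums.append(sum((h >> bit) & 1 for h in heaps) % (k + 1))
        bit += 1
    return sums

print(moore_digit_sums([1, 1, 1], k=2))  # [0]: a P-position of nim_2
print(moore_digit_sums([3, 4, 5], k=2))  # [2, 1, 2]: the player to move can win
```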
Building nim

Building nim is a variant of nim wherein the two players first construct the game of nim. Given n stones and s empty piles, the players, alternating turns, place exactly one stone into a pile of their choice. Once all the stones are placed, a game of nim begins, starting with the next player that would move. This game is denoted BN(n, s).

Higher-dimensional nim

n-d nim is played on a k₁ × ⋯ × kₙ board, whereon any number of continuous pieces can be removed from any hyper-row. The starting position is usually the full board, but other options are allowed.

Graph nim

The starting board is a disconnected graph, and players take turns to remove adjacent vertices.

Candy nim

Candy nim is a version of normal-play nim in which players try to achieve two goals at the same time: taking the last object (in this case, candy) and taking the maximum number of candies by the end of the game.

See also

Android Nim, Chomp, Dr. NIM, Fibonacci nim, Fuzzy game, Hackenbush, Notakto, Octal games, Raymond Redheffer, Scarney, Star (game theory), Subtract a square, Tri-nim, Turing Tumble, Zero game

References

1. Jorgensen, Anker Helms (2009), "Context and driving forces in the development of the early computer game Nimbi", IEEE Annals of the History of Computing, 31 (3): 44–53, doi:10.1109/MAHC.2009.41. "The two-person mathematical game nim, which many believe originated in China, is probably one of the oldest games in the world."
2. Yaglom, I. M. (2001), "Two games with matchsticks", in Tabachnikov, Serge (ed.), Kvant Selecta: Combinatorics, I, Volume 1, Mathematical World, vol. 17, American Mathematical Society, pp. 1–8, ISBN 9780821821718.
3. Bouton, C. L. (1901–1902), "Nim, a game with a complete mathematical theory", Annals of Mathematics, 3 (14): 35–39, doi:10.2307/1967631, JSTOR 1967631.
4. Flesch, Rudolf (1951). The Art of Clear Thinking. New York: Harper and Brothers Publishers. p. 3.
5. Kellem, Betsy (2022-03-01). "The Nimatron". JSTOR Daily.
6. Grant, Eugene F.; Lardner, Rex (August 2, 1952). "The Talk of the Town – It". The New Yorker.
7. Cohen, Harvey A. "How to Construct NIM Playing Machine" (PDF).
8. Morrissette, Bruce (1968), "Games and game structures in Robbe-Grillet", Yale French Studies (41): 159–167, doi:10.2307/2929672, JSTOR 2929672. Morrissette writes that Alain Robbe-Grillet, one of the screenwriters for the film, "thought he had invented" the game.
9. Khovanova, Tanya; Xiong, Joshua (2014). "Nim Fractals". arXiv:1405.5942 [math.CO].
10. Berlekamp, Elwyn R.; Conway, John Horton; Guy, Richard K. (2001–2004). Winning Ways for your Mathematical Plays. 4 vols. (2nd ed.). A K Peters Ltd. ISBN 978-1-56881-130-7 (vol. 1), 978-1-56881-142-0 (vol. 2), 978-1-56881-143-7 (vol. 3), 978-1-56881-144-4 (vol. 4).
11. Albert, M. H.; Nowakowski, R. J. (2004). "Nim Restrictions" (PDF). Integers: 2.
12. Moore, E. H. "A Generalization of the Game Called Nim". Annals of Mathematics 11 (3), 1910, pp. 93–94.
13. Larsson, Urban; Heubach, Silvia; Dufour, Matthieu; Duchêne, Eric (2015). "Building Nim". arXiv:1502.04068 [cs.DM].
14. "1021 - 2D-Nim". Poj.org. Retrieved 2019-01-09.
15. Erickson, Lindsay Anne (2011). "The Game of Nim on Graphs". North Dakota State University.
16. Rubinstein-Salzedo, Simon (18 May 2018). "P Play in Candy Nim". arXiv:1805.07019 [math.CO].

Further reading

- W. W. Rouse Ball: Mathematical Recreations and Essays, The Macmillan Company, 1947.
- John D. Beasley: The Mathematics of Games, Oxford University Press, 1989.
- Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy: Winning Ways for your Mathematical Plays, Academic Press, Inc., 1982.
- Manfred Eigen and Ruthild Winkler: Laws of the Game, Princeton University Press, 1981.
- Walter R. Fuchs: Computers: Information Theory and Cybernetics, Rupert Hart-Davis Educational Publications, 1971.
- G. H. Hardy and E. M. Wright: An Introduction to the Theory of Numbers, Oxford University Press, 1979.
- Edward Kasner and James Newman: Mathematics and the Imagination, Simon and Schuster, 1940.
- M. Kaitchik: Mathematical Recreations, W. W. Norton, 1942.
- Donald D. Spencer: Game Playing with Computers, Hayden Book Company, Inc., 1968.

External links

- "50-pound computer plays Nim" – The New Yorker, "Talk of the Town", August 1952 (subscription required)
- The hot game of Nim – nim theory and connections with other games, at cut-the-knot
- Nim and 2-dimensional SuperNim at cut-the-knot
219
Published Time: 2005-10-12T06:48:59Z

List of Euclidean uniform tilings - Wikipedia

List of Euclidean uniform tilings

An example of uniform tiling in the Archeological Museum of Seville, Sevilla, Spain: rhombitrihexagonal tiling. Uniform tilings and their duals drawn by Max Brückner in Vielecke und Vielflache (1900).

This table shows the 11 convex uniform tilings (regular and semiregular) of the Euclidean plane, and their dual tilings. There are three regular and eight semiregular tilings in the plane. The semiregular tilings form new tilings from their duals, each made from one type of irregular face. John Conway called these uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra.

Uniform tilings are listed by their vertex configuration, the sequence of faces that exist on each vertex. For example, 4.8.8 means one square and two octagons on a vertex. These 11 uniform tilings have 32 different uniform colorings. A uniform coloring allows identical-sided polygons at a vertex to be colored differently, while still maintaining vertex-uniformity and transformational congruence between vertices. (Note: some of the tiling images shown below are not color-uniform.)

In addition to the 11 convex uniform tilings, there are also 14 known nonconvex tilings, using star polygons and reverse-orientation vertex configurations. A further 28 uniform tilings are known using apeirogons. If zigzags are also allowed, there are 23 more known uniform tilings and 10 more known families depending on a parameter: in 8 cases the parameter is continuous, and in the other 2 it is discrete. The set is not known to be complete.

Laves tilings

In the 1987 book Tilings and Patterns, Branko Grünbaum calls the vertex-uniform tilings Archimedean, in parallel to the Archimedean solids. Their dual tilings are called Laves tilings in honor of crystallographer Fritz Laves. They are also called Shubnikov–Laves tilings after Aleksei Shubnikov. John Conway called the uniform duals Catalan tilings, in parallel to the Catalan solid polyhedra.

The Laves tilings have vertices at the centers of the regular polygons, and edges connecting centers of regular polygons that share an edge. The tiles of the Laves tilings are called planigons.
This includes the 3 regular tiles (triangle, square and hexagon) and 8 irregular ones. Each vertex has edges evenly spaced around it. Three-dimensional analogues of the planigons are called stereohedrons.

These dual tilings are listed by their face configuration, the number of faces at each vertex of a face. For example, V4.8.8 means isosceles triangle tiles with one corner containing four triangles and two corners containing eight triangles. The orientations of the vertex planigons (up to D₁₂) are consistent with the vertex diagrams in the sections below.

Eleven planigons:

| Triangles | Quadrilaterals | Pentagons | Hexagon |
| --- | --- | --- | --- |
| V6³, V4.8², V4.6.12, V3.12² | V4⁴, V(3.6)², V3.4.6.4, V3².4.3.4 | V3⁴.6, V3³.4² | V3⁶ |

Convex uniform tilings of the Euclidean plane

All reflectional forms can be made by Wythoff constructions, represented by Wythoff symbols or Coxeter–Dynkin diagrams, each operating upon one of three Schwarz triangles (4,4,2), (6,3,2), or (3,3,3), with symmetry represented by Coxeter groups: [4,4], [6,3], or [3[3]]. Alternated forms such as the snub can also be represented by special markups within each system. Only one uniform tiling cannot be constructed by a Wythoff process, but it can be made by an elongation of the triangular tiling. An orthogonal mirror construction [∞,2,∞] also exists, seen as two sets of parallel mirrors making a rectangular fundamental domain. If the domain is square, this symmetry can be doubled by a diagonal mirror into the [4,4] family.

Families:
- (4,4,2), B̃C₂, [4,4] – symmetry of the regular square tiling
- Ĩ₁², [∞,2,∞]
- (6,3,2), G̃₂, [6,3] – symmetry of the regular hexagonal tiling and triangular tiling
- (3,3,3), Ã₂, [3[3]]

The [4,4] group family:

| Uniform tiling (Platonic/Archimedean) | Wythoff symbol(s); symmetry group | Dual-uniform tiling (Laves/Catalan) |
| --- | --- | --- |
| Square tiling (quadrille), 4.4.4.4 (or 4⁴) | 4 \| 2 4; p4m, [4,4], (442) | self-dual (quadrille) |
| Truncated square tiling (truncated quadrille), 4.8.8 | 2 \| 4 4 and 4 4 2 \|; p4m, [4,4], (442) | Tetrakis square tiling (kisquadrille) |
| Snub square tiling (snub quadrille), 3.3.4.3.4 | \| 4 4 2; p4g, [4⁺,4], (4*2) | Cairo pentagonal tiling (4-fold pentille) |

The [6,3] group family:

| Uniform tiling (Platonic/Archimedean) | Wythoff symbol(s); symmetry group | Dual Laves tiling |
| --- | --- | --- |
| Hexagonal tiling (hextille), 6.6.6 (or 6³) | 3 \| 6 2; 2 6 \| 3; 3 3 3 \|; p6m, [6,3], (632) | Triangular tiling (deltille) |
| Trihexagonal tiling (hexadeltille), (3.6)² | 2 \| 6 3; 3 3 \| 3; p6m, [6,3], (632) | Rhombille tiling (rhombille) |
| Truncated hexagonal tiling (truncated hextille), 3.12.12 | 2 3 \| 6; p6m, [6,3], (632) | Triakis triangular tiling (kisdeltille) |
| Triangular tiling (deltille), 3.3.3.3.3.3 (or 3⁶) | 6 \| 3 2; 3 \| 3 3; \| 3 3 3; p6m, [6,3], (632) | Hexagonal tiling (hextille) |
| Rhombitrihexagonal tiling (rhombihexadeltille), 3.4.6.4 | 3 6 \| 2; p6m, [6,3], (632) | Deltoidal trihexagonal tiling (tetrille) |
| Truncated trihexagonal tiling (truncated hexadeltille), 4.6.12 | 2 6 3 \|; p6m, [6,3], (632) | Kisrhombille tiling (kisrhombille) |
| Snub trihexagonal tiling (snub hextille), 3.3.3.3.6 | \| 6 3 2; p6, [6,3]⁺, (632) | Floret pentagonal tiling (6-fold pentille) |

Non-Wythoffian uniform tiling:

| Uniform tiling | Symbol(s); symmetry group | Dual Laves tiling |
| --- | --- | --- |
| Elongated triangular tiling (isosnub quadrille), 3.3.3.4.4 | 2 \| 2 (2 2); cmm, [∞,2⁺,∞], (2*22) | Prismatic pentagonal tiling (iso(4-)pentille) |

Uniform colorings

There are a total of 32 uniform colorings of the 11 uniform tilings:

- Triangular tiling – 9 uniform colorings: 4 Wythoffian, 5 non-Wythoffian
- Square tiling – 9 colorings: 7 Wythoffian, 2 non-Wythoffian
- Hexagonal tiling – 3 colorings, all Wythoffian
- Trihexagonal tiling – 2 colorings, both Wythoffian
- Snub square tiling – 2 colorings, both alternated Wythoffian
- Truncated square tiling – 2 colorings, both Wythoffian
- Truncated hexagonal tiling – 1 coloring, Wythoffian
- Rhombitrihexagonal tiling – 1 coloring, Wythoffian
- Truncated trihexagonal tiling – 1 coloring, Wythoffian
- Snub hexagonal tiling – 1 coloring, alternated Wythoffian
- Elongated triangular tiling – 1 coloring, non-Wythoffian
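A quick sanity check on vertex configurations: at a flat vertex, the interior angles of the surrounding regular polygons must sum to exactly 360° (a necessary condition, though not by itself sufficient for a configuration to extend to a full tiling). A small Python sketch of this check (our own illustration, not from the article):

```python
from fractions import Fraction

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees, as an exact fraction."""
    return Fraction(180 * (n - 2), n)

def vertex_angle_sum(config):
    """Sum of face angles around a vertex, e.g. (4, 8, 8) for 4.8.8."""
    return sum(interior_angle(n) for n in config)

# A flat vertex needs the angles to sum to exactly 360 degrees:
for config in [(4, 8, 8), (3, 4, 6, 4), (3, 3, 3, 4, 4), (5, 5, 5)]:
    print(config, vertex_angle_sum(config))
# (5, 5, 5) gives 324, which is why regular pentagons cannot tile the plane.
```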
See also

- Convex uniform honeycomb – the 28 uniform 3-dimensional tessellations, a parallel construction to the convex uniform Euclidean plane tilings
- Euclidean tilings by convex regular polygons
- List of tessellations
- Percolation threshold
- Uniform tilings in hyperbolic plane

References

1. Grünbaum, Branko; Shephard, G. C. (1987). Tilings and Patterns. W. H. Freeman and Company. pp. 59, 96. ISBN 0-7167-1193-1.
2. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). "Chapter 21, Naming the Archimedean and Catalan polyhedra and tilings, Euclidean Plane Tessellations". The Symmetries of Things. A K Peters / CRC Press. p. 288. ISBN 978-1-56881-220-5.
3. Encyclopaedia of Mathematics: Orbit – Rayleigh Equation, 1991.
4. Ivanov, A. B. (2001), "Planigon", Encyclopedia of Mathematics, EMS Press.

Further reading

- Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008). "Chapter 19, Archimedean tilings, table 19.1". The Symmetries of Things. A K Peters / CRC Press. ISBN 978-1-56881-220-5.
- Coxeter, H. S. M.; Longuet-Higgins, M. S.; Miller, J. C. P. (1954). "Uniform polyhedra". Phil. Trans. 246 A: 401–450.
- Williams, Robert (1979). The Geometrical Foundation of Natural Structure: A Source Book of Design. Dover Publications, Inc. ISBN 0-486-23729-X. (Section 2–3: Circle packings, plane tessellations, and networks, pp. 34–40.)
- Asaro, Laura; Hyde, John; Jensen, Melanie; Mann, Casey; Schroeder, Tyler. "Uniform edge-c-colorings of the Archimedean Tilings" (PDF). University of Washington.
- Grünbaum, Branko; Shephard, Geoffrey (November 1977). "Tilings by Regular Polygons" (PDF).
- Seymour, Dale; Britton, Jill (1989). Introduction to Tessellations. Dale Seymour Publications. pp. 50–57, 71–74. ISBN 978-0866514613.

External links

- Weisstein, Eric W. "Uniform tessellation". MathWorld.
- Uniform Tessellations on the Euclid plane
- Tessellations of the Plane
- David Bailey's World of Tessellations
- k-uniform tilings
- n-uniform tilings
220
221
GitHub - NVIDIA/Megatron-LM: Ongoing research training transformer models at scale
===============

NVIDIA/Megatron-LM (Public): Ongoing research training transformer models at scale. docs.nvidia.com/megatron-core/developer-guide/latest/user-guide/index.html#quick-start. 12.9k stars, 2.9k forks.
Megatron-LM & Megatron-Core

GPU optimized techniques for training transformer models at-scale

Latest News

- [2024/7] Megatron-Core v0.7 improves scalability and training resiliency and adds support for multimodal training (blog).
- [2024/6] Megatron-Core added support for Mamba-based models. Check out our paper An Empirical Study of Mamba-based Language Models and code example.
- [2024/1 Announcement] NVIDIA has released the core capabilities in Megatron-LM into Megatron-Core in this repository. Megatron-Core expands upon Megatron-LM's GPU-optimized techniques with more cutting-edge innovations on system-level optimizations, featuring composable and modular APIs. Explore the Megatron-Core intro for more details.

Table of Contents

Megatron Overview (Megatron-LM; Megatron-Core) · Training Speed and Scalability · Setup (Docker; Installation Options: Install from PyPI, Install from Source; Prerequisites; Downloading Checkpoints) · Usage · Training (Data Preprocessing; BERT Pretraining; GPT Pretraining; T5 Pretraining; Distributed Pretraining; Activation Checkpointing and Recomputation; Distributed Optimizer; FlashAttention; GPT-3 Example; Retro and InstructRetro; Mamba-based Language Models; Mixture of Experts) · Evaluation and Tasks (GPT Text Generation; Detoxify GPT via Self-generation; GPT Evaluation: WikiText Perplexity Evaluation, LAMBADA Cloze Accuracy; BERT Task Evaluation: RACE Evaluation, MNLI Evaluation; Llama-2 Inference and Finetuning; Model Optimization and Deployment: Quantization and TensorRT-LLM Deployment) · Datasets (Collecting Wikipedia Training Data; Collecting GPT Webtext Data) · Reproducibility · Checkpoint conversion (Model class conversion; Checkpoint format conversion) · Projects Using Megatron

Megatron Overview

This repository comprises two essential components: Megatron-LM and Megatron-Core. Megatron-LM serves as a research-oriented framework leveraging Megatron-Core for large language model (LLM) training. Megatron-Core, on the other hand, is a library of GPU-optimized training techniques that comes with formal product support, including versioned APIs and regular releases. You can use Megatron-Core alongside Megatron-LM or the NVIDIA NeMo Framework for an end-to-end, cloud-native solution. Alternatively, you can integrate Megatron-Core's building blocks into your preferred training framework.

Megatron-LM

First introduced in 2019, Megatron (1, 2, and 3) sparked a wave of innovation in the AI community, enabling researchers and developers to utilize the underpinnings of this library to further LLM advancements.
Today, many of the most popular LLM developer frameworks have been inspired by and built directly leveraging the open-source Megatron-LM library, spurring a wave of foundation models and AI startups. Some of the most popular LLM frameworks built on top of Megatron-LM include Colossal-AI, HuggingFace Accelerate, and NVIDIA NeMo Framework. A list of projects that have directly used Megatron can be found here.

Megatron-Core

Megatron-Core is an open-source PyTorch-based library that contains GPU-optimized techniques and cutting-edge system-level optimizations. It abstracts them into composable and modular APIs, allowing full flexibility for developers and model researchers to train custom transformers at-scale on NVIDIA accelerated computing infrastructure. This library is compatible with all NVIDIA Tensor Core GPUs, including FP8 acceleration support for NVIDIA Hopper architectures.

Megatron-Core offers core building blocks such as attention mechanisms, transformer blocks and layers, normalization layers, and embedding techniques. Additional functionality, such as activation recomputation and distributed checkpointing, is also natively built into the library. The building blocks and functionality are all GPU optimized, and can be combined with advanced parallelization strategies for optimal training speed and stability on NVIDIA accelerated computing infrastructure. Another key component of the Megatron-Core library is its advanced model parallelism techniques (tensor, sequence, pipeline, context, and MoE expert parallelism). Megatron-Core can be used with NVIDIA NeMo, an enterprise-grade AI platform. Alternatively, you can explore Megatron-Core with the native PyTorch training loop here. Visit the Megatron-Core documentation to learn more.

Training Speed and Scalability

Our codebase efficiently trains large language models (i.e., models with hundreds of billions of parameters) with both model and data parallelism. To demonstrate how our software scales with multiple GPUs and model sizes, we consider GPT models ranging from 2 billion to 462 billion parameters. All models use a vocabulary size of 131,072 and a sequence length of 4096. We vary hidden size, number of attention heads, and number of layers to arrive at a specific model size. As the model size increases, we also modestly increase the batch size. Our experiments use up to 6144 H100 GPUs. We perform fine-grained overlapping of data-parallel (--overlap-grad-reduce --overlap-param-gather), tensor-parallel (--tp-comm-overlap), and pipeline-parallel communication (enabled by default) with computation to improve scalability. The reported throughputs are measured for end-to-end training and include all operations, including data loading, optimizer steps, communication, and even logging. Note that we did not train these models to convergence.

Our weak-scaling results show superlinear scaling (MFU increases from 41% for the smallest model considered to 47-48% for the largest models); this is because larger GEMMs have higher arithmetic intensity and are consequently more efficient to execute. We also strong-scaled the standard GPT-3 model (our version has slightly more than 175 billion parameters due to the larger vocabulary size) from 96 H100 GPUs to 4608 GPUs, using the same batch size of 1152 sequences throughout. Communication becomes more exposed at larger scale, leading to a reduction in MFU from 47% to 42%.
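As a rough guide to how figures like these relate throughput to MFU, the sketch below uses the common 6-FLOPs-per-parameter-per-token approximation; the throughput and peak-FLOPs numbers in it are illustrative assumptions, not measurements from this repository.

```python
# Back-of-envelope MFU estimate (illustrative assumptions, not repo measurements).
def mfu(num_params, tokens_per_sec_per_gpu, peak_flops_per_gpu):
    # ~6 FLOPs per parameter per token for forward + backward; this heuristic
    # ignores the attention term, so it slightly understates true FLOPs.
    achieved_flops = 6 * num_params * tokens_per_sec_per_gpu
    return achieved_flops / peak_flops_per_gpu

# Hypothetical: a 175B-parameter model at 420 tokens/s/GPU on an H100
# (~989e12 dense BF16 FLOP/s peak).
print(f"MFU ~ {mfu(175e9, 420, 989e12):.1%}")  # ~ 44.6%
```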
Setup

Prerequisites

Megatron-LM and Megatron-Core require the following upstream dependencies for best performance:

- PyTorch (latest stable version)
- CUDA, cuDNN, NCCL (latest stable versions)
- Support for FP8 on NVIDIA Hopper, Ada, and Blackwell GPUs
- For best performance, use NVIDIA Turing GPU architecture generations and later

Docker (Recommended)

We strongly recommend using the previous release of the PyTorch NGC Container rather than the latest one. Our releases are always based on the previous month's NGC container, so this ensures compatibility and stability. This container comes with all dependencies pre-installed with compatible versions and optimized configurations for NVIDIA GPUs.

```shell
# Run container with mounted directories
docker run --runtime=nvidia --gpus all -it --rm \
    -v /path/to/megatron:/workspace/megatron \
    -v /path/to/dataset:/workspace/dataset \
    -v /path/to/checkpoints:/workspace/checkpoints \
    nvcr.io/nvidia/pytorch:25.04-py3
```

Installation Options

Megatron-Core offers support for two NGC PyTorch environments: a moving head that tracks the most recent upstream dependencies, referred to as dev in the following, and a long-term support environment based on NGC PyTorch 24.01, referred to as lts. Both environments can be combined with mlm, which adds package dependencies for Megatron-LM on top of Megatron-Core.

Install from PyPI

Megatron-Core is available on PyPI. It ships most of its dependencies and can be installed on hosts with an active CUDA driver. Run the following command to fetch and install it with pip:

```shell
# Install the latest release
pip install megatron-core[dev]
```

```shell
# Install packages for LTS support (NGC PyTorch 24.01)
pip install megatron-core[lts]
```

For a version of Megatron-Core with minimal dependencies (only torch), run:

```shell
pip install megatron-core
```

For dependencies required by Megatron-LM, please run:

```shell
pip install megatron-core[mlm]
```

Install from Source

For Hybrid models, Megatron-Core requires mamba. If the pre-built wheel on PyPI does not fit your environment, you can fall back to the install script Megatron-Core uses in its CI system. For this, please install uv first:

```shell
export UV_VERSION=0.7.2
export PATH="$HOME/.local/bin:$PATH"
curl -LsSf https://astral.sh/uv/install.sh | sh
export UV_PROJECT_ENVIRONMENT=./venv
export PATH="$UV_PROJECT_ENVIRONMENT/bin:$PATH"
export UV_LINK_MODE=copy
```

Run the following commands to build upstream dependencies from source:

```shell
# Clone the repository
git clone https://github.com/NVIDIA/Megatron-LM.git
cd Megatron-LM

# Optionally checkout a specific release
git checkout core_r0.13.0rc0

bash docker/common/install.sh --environment {dev,lts}
```

Downloading Checkpoints

We have provided pretrained BERT-345M and GPT-345M checkpoints to evaluate or for finetuning downstream tasks. To access these checkpoints, first sign up for and set up the NVIDIA GPU Cloud (NGC) Registry CLI. Further documentation for downloading models can be found in the NGC documentation.

Alternatively, you can directly download the checkpoints using:

- BERT-345M-uncased: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O megatron_bert_345m_v0.1_uncased.zip
- BERT-345M-cased: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_cased/zip -O megatron_bert_345m_v0.1_cased.zip
- GPT-345M: wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_lm_345m/versions/v0.0/zip -O megatron_lm_345m_v0.0.zip

The models require vocabulary files to run. The BERT WordPiece vocab file can be extracted from Google's pretrained BERT models: uncased, cased. The GPT vocab file and merge table can be downloaded directly.
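Before moving on to data preprocessing and training, it can be worth sanity-checking the installed stack. A minimal sketch, assuming megatron-core was installed with one of the pip commands above:

```python
# Minimal post-install sanity check (a sketch; adjust to your environment).
from importlib.metadata import version

import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("megatron-core:", version("megatron-core"))

# megatron-core installs under the `megatron.core` namespace.
import megatron.core  # noqa: F401  (import succeeds if the install is healthy)
```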
Usage

After installation, there are several possible workflows. The most comprehensive is:

1. Data preprocessing
2. Pretraining
3. Finetuning (optional for zero-shot tasks)
4. Downstream task evaluation or text generation

However, steps 1 and 2 can be replaced by using one of the pretrained models mentioned above.

We've provided several scripts for pretraining both BERT and GPT in the examples directory, as well as scripts for both zero-shot and fine-tuned downstream tasks including MNLI, RACE, WikiText103, and LAMBADA evaluation. There is also a script for GPT interactive text generation.

Training Data Preprocessing

The training data requires preprocessing. First, place your training data in a loose JSON format, with one JSON object containing a text sample per line. For example:

```
{"src": "www.nvidia.com", "text": "The quick brown fox", "type": "Eng", "id": "0", "title": "First Part"}
{"src": "The Internet", "text": "jumps over the lazy dog", "type": "Eng", "id": "42", "title": "Second Part"}
```

The name of the text field of the JSON can be changed by using the --json-key flag in preprocess_data.py. The other metadata are optional and are not used in training.

The loose JSON is then processed into a binary format for training. To convert the JSON into mmap format, use preprocess_data.py. An example script to prepare data for BERT training is:

```shell
python tools/preprocess_data.py \
    --input my-corpus.json \
    --output-prefix my-bert \
    --vocab-file bert-vocab.txt \
    --tokenizer-type BertWordPieceLowerCase \
    --split-sentences
```

The output will be two files named, in this case, my-bert_text_sentence.bin and my-bert_text_sentence.idx. The --data-path specified in later BERT training is the full path and new filename, but without the file extension.

For T5, use the same preprocessing as BERT, perhaps renaming it to:

```shell
    --output-prefix my-t5 \
```

Some minor modifications are required for GPT data preprocessing, namely, the addition of a merge table, an end-of-document token, removal of sentence splitting, and a change to the tokenizer type:

```shell
python tools/preprocess_data.py \
    --input my-corpus.json \
    --output-prefix my-gpt2 \
    --vocab-file gpt2-vocab.json \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file gpt2-merges.txt \
    --append-eod
```

Here the output files are named my-gpt2_text_document.bin and my-gpt2_text_document.idx. As before, in GPT training, use the longer name without the extension as --data-path. Further command line arguments are described in the source file preprocess_data.py.

BERT Pretraining

The examples/bert/train_bert_340m_distributed.sh script runs single-GPU 345M-parameter BERT pretraining. Debugging is the primary use for single-GPU training, as the code base and command line arguments are optimized for highly distributed training. Most of the arguments are fairly self-explanatory. By default, the learning rate decays linearly over the training iterations, starting at --lr, to a minimum set by --min-lr, over --lr-decay-iters iterations. The fraction of training iterations used for warmup is set by --lr-warmup-fraction. While this is single-GPU training, the batch size specified by --micro-batch-size is the batch size of a single forward-backward pass, and the code performs gradient accumulation steps until it reaches global-batch-size, which is the batch size per iteration. The data is partitioned into a 949:50:1 ratio for training/validation/test sets (default is 969:30:1). This partitioning happens on the fly, but is consistent across runs with the same random seed (1234 by default, or specified manually with --seed).
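To make the batch-size and learning-rate mechanics above concrete, here is a small sketch; the warmup handling below is an illustrative assumption, and the authoritative schedule logic lives in the source file arguments.py and the training loop.

```python
# Illustrative sketch of gradient accumulation and linear LR decay as described
# above (not the repository's actual scheduler code).
def accumulation_steps(global_batch_size, micro_batch_size, data_parallel_size=1):
    # Gradients are accumulated until global-batch-size samples are processed.
    assert global_batch_size % (micro_batch_size * data_parallel_size) == 0
    return global_batch_size // (micro_batch_size * data_parallel_size)

def linear_lr(step, lr, min_lr, decay_iters, warmup_fraction):
    warmup = int(warmup_fraction * decay_iters)
    if step < warmup:                        # linear warmup
        return lr * step / max(1, warmup)
    frac = min(1.0, (step - warmup) / max(1, decay_iters - warmup))
    return lr - (lr - min_lr) * frac         # linear decay to min_lr

print(accumulation_steps(global_batch_size=64, micro_batch_size=4))   # -> 16
print(round(linear_lr(5000, 1e-4, 1e-5, 10000, 0.01), 8))
```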
We use --train-iters as the training iterations requested. Alternatively, one can provide --train-samples, which is the total number of samples to train on. If this option is present, then instead of providing --lr-decay-iters, one will need to provide --lr-decay-samples.

The logging, checkpoint-saving, and evaluation interval options are specified. Note that the --data-path now includes the additional _text_sentence suffix added in preprocessing, but does not include the file extensions. Further command line arguments are described in the source file arguments.py.

To run train_bert_340m_distributed.sh, make any desired modifications, including setting the environment variables for CHECKPOINT_PATH, VOCAB_FILE, and DATA_PATH. Make sure to set these variables to their paths in the container. Then launch the container with Megatron and the necessary paths mounted (as explained in Setup) and run the example script.

GPT Pretraining

The examples/gpt3/train_gpt3_175b_distributed.sh script runs single-GPU 345M-parameter GPT pretraining. As mentioned above, single-GPU training is primarily intended for debugging purposes, as the code is optimized for distributed training.

It follows largely the same format as the previous BERT script, with a few notable differences: the tokenization scheme used is BPE (which requires a merge table and a json vocabulary file) instead of WordPiece, the model architecture allows for longer sequences (note that the max position embedding must be greater than or equal to the maximum sequence length), and the --lr-decay-style has been set to cosine decay. Note that the --data-path now includes the additional _text_document suffix added in preprocessing, but does not include the file extensions. Further command line arguments are described in the source file arguments.py.

train_gpt3_175b_distributed.sh can be launched the same way as described for BERT. Set the env vars and make any other modifications, launch the container with appropriate mounts, and run the script. More details in examples/gpt3/README.md.

T5 Pretraining

Very similar to BERT and GPT, the examples/t5/train_t5_220m_distributed.sh script runs single-GPU "base" (~220M parameter) T5 pretraining. The primary difference from BERT and GPT is the addition of the following arguments to accommodate the T5 architecture:

- --kv-channels sets the inner dimension of the "key" and "value" matrices of all attention mechanisms in the model. For BERT and GPT this defaults to the hidden size divided by the number of attention heads, but can be configured for T5.
- --ffn-hidden-size sets the hidden size in the feed-forward networks within a transformer layer. For BERT and GPT this defaults to 4 times the transformer hidden size, but can be configured for T5.
- --encoder-seq-length and --decoder-seq-length set the sequence length for the encoder and decoder separately.

All of the other arguments remain as they were for BERT and GPT pretraining. Run this example with the same steps described above for the other scripts. More details in examples/t5/README.md.

Distributed Pretraining

The pretrain_{bert,gpt,t5}_distributed.sh scripts use the PyTorch distributed launcher for distributed training. As such, multi-node training can be achieved by properly setting environment variables. See the official PyTorch documentation for further description of these environment variables. By default, multi-node training uses the nccl distributed backend.
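For orientation, the snippet below shows the environment-variable contract the PyTorch launcher provides to each worker process; it is a generic torch.distributed sketch, not code from this repository.

```python
# Generic sketch of what each worker launched by torchrun sees: RANK,
# LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT are set in the
# environment, and the script initializes the NCCL backend from them.
import os

import torch

rank = int(os.environ["RANK"])
local_rank = int(os.environ["LOCAL_RANK"])
world_size = int(os.environ["WORLD_SIZE"])

torch.distributed.init_process_group(backend="nccl", rank=rank, world_size=world_size)
torch.cuda.set_device(local_rank)
print(f"rank {rank}/{world_size} ready on local GPU {local_rank}")
```

A script like this would be launched with, e.g., torchrun --nproc_per_node=8 script.py, which is the same mechanism the pretrain_{bert,gpt,t5}_distributed.sh scripts rely on.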
A simple set of additional arguments and the use of the PyTorch distributed module with the torchrun elastic launcher (equivalent to python -m torch.distributed.run) are the only additional requirements to adopt distributed training. See any of pretrain_{bert,gpt,t5}_distributed.sh for more details.

We use two types of parallelism: data and model parallelism. Our data parallelism implementation is in megatron/core/distributed, and it supports overlapping of the gradient reduction with the backward pass when the --overlap-grad-reduce command-line option is used.

Second, we developed a simple and efficient two-dimensional model-parallel approach. To use the first dimension, tensor model parallelism (splitting execution of a single transformer module over multiple GPUs, see Section 3 of our paper), add the --tensor-model-parallel-size flag to specify the number of GPUs among which to split the model, along with the arguments passed to the distributed launcher as mentioned above. To use the second dimension, sequence parallelism, specify --sequence-parallel, which also requires tensor model parallelism to be enabled because it splits across the same GPUs (more details in Section 4.2.2 of our paper).

To use pipeline model parallelism (sharding the transformer modules into stages with an equal number of transformer modules on each stage, and then pipelining execution by breaking the batch into smaller microbatches, see Section 2.2 of our paper), use the --pipeline-model-parallel-size flag to specify the number of stages to split the model into (e.g., splitting a model with 24 transformer layers across 4 stages would mean each stage gets 6 transformer layers).

We have examples of how to use these two different forms of model parallelism in the example scripts ending in distributed_with_mp.sh. Other than these minor changes, the distributed training is identical to the training on a single GPU.

The interleaved pipelining schedule (more details in Section 2.2.2 of our paper) can be enabled using the --num-layers-per-virtual-pipeline-stage argument, which controls the number of transformer layers in a virtual stage (by default with the non-interleaved schedule, each GPU will execute a single virtual stage with NUM_LAYERS / PIPELINE_MP_SIZE transformer layers). The total number of layers in the transformer model should be divisible by this argument value. Additionally, the number of microbatches in the pipeline (computed as GLOBAL_BATCH_SIZE / (DATA_PARALLEL_SIZE * MICRO_BATCH_SIZE)) should be divisible by PIPELINE_MP_SIZE when using this schedule (this condition is checked in an assertion in the code). The interleaved schedule is not supported for pipelines with 2 stages (PIPELINE_MP_SIZE=2).

Activation Checkpointing and Recomputation

To reduce GPU memory usage when training a large model, we support various forms of activation checkpointing and recomputation. Instead of all activations being stored in memory to be used during backprop, as was traditionally the case in deep learning models, only activations at certain "checkpoints" in the model are retained (or stored) in memory, and the other activations are recomputed on-the-fly when needed for backprop. Note that this kind of checkpointing, activation checkpointing, is very different from the checkpointing of model parameters and optimizer state, which is mentioned elsewhere.

We support two levels of recompute granularity: selective and full. Selective recomputation is the default and is recommended in almost all cases.
This mode retains in memory the activations that take less memory storage space and are more expensive to recompute, and recomputes the activations that take more memory storage space but are relatively inexpensive to recompute. See our paper for details. You should find that this mode maximizes performance while minimizing the memory required to store activations. To enable selective activation recompute, simply use --recompute-activations.

For cases where memory is very limited, full recompute saves just the inputs to a transformer layer, or a group, or block, of transformer layers, and recomputes everything else. To enable full activation recompute, use --recompute-granularity full. When using full activation recompute, there are two methods: uniform and block, chosen using the --recompute-method argument.

- The uniform method uniformly divides the transformer layers into groups of layers (each group of size --recompute-num-layers) and stores the input activations of each group in memory. The baseline group size is 1 and, in this case, the input activation of each transformer layer is stored. When the GPU memory is insufficient, increasing the number of layers per group reduces the memory usage, enabling a bigger model to be trained. For example, when --recompute-num-layers is set to 4, only the input activation of each group of 4 transformer layers is stored.
- The block method recomputes the input activations of a specific number (given by --recompute-num-layers) of individual transformer layers per pipeline stage and stores the input activations of the remaining layers in the pipeline stage. Reducing --recompute-num-layers results in storing the input activations to more transformer layers, which reduces the activation recomputation required in the backprop, thus improving training performance while increasing memory usage. For example, when we specify 5 layers to recompute of 8 layers per pipeline stage, the input activations of only the first 5 transformer layers are recomputed in the backprop step, while the input activations for the final 3 layers are stored. --recompute-num-layers can be incrementally increased until the amount of memory storage space required is just small enough to fit in the available memory, thereby both maximally utilizing memory and maximizing performance.

Distributed Optimizer

Usage: --use-distributed-optimizer. Compatible with all model and data types.

The distributed optimizer is a memory-savings technique, whereby the optimizer state is evenly distributed across data-parallel ranks (versus the traditional method of replicating the optimizer state across data-parallel ranks). As described in ZeRO: Memory Optimizations Toward Training Trillion Parameter Models, our implementation distributes all optimizer state that does not overlap with the model state. For example, when using fp16 model params, the distributed optimizer maintains its own separate copy of fp32 main params & grads, which are distributed across DP ranks. When using bf16 model params, however, the distributed optimizer's fp32 main grads are the same as the model's fp32 grads, and so the grads in this case are not distributed (although the fp32 main params are still distributed, as they are separate from the bf16 model params). Theoretical memory savings vary depending on the combination of the model's param dtype and grad dtype.
In our implementation, the theoretical number of bytes per parameter is (where d is the data-parallel size):

| Param/grad dtypes | Non-distributed optim | Distributed optim |
| --- | --- | --- |
| fp16 param, fp16 grads | 20 | 4 + 16/d |
| bf16 param, fp32 grads | 18 | 6 + 12/d |
| fp32 param, fp32 grads | 16 | 8 + 8/d |

As with regular data parallelism, overlapping of the gradient reduction (in this case, a reduce-scatter) with the backward pass can be facilitated using the --overlap-grad-reduce flag. Additionally, the parameter all-gather can be overlapped with the forward pass using --overlap-param-gather.

FlashAttention

Usage: --use-flash-attn. Supports attention head dimensions of at most 128.

FlashAttention is a fast and memory-efficient algorithm to compute exact attention. It speeds up model training and reduces memory requirements. To install FlashAttention:

```shell
pip install flash-attn
```

GPT-3 Example

In examples/gpt3/train_gpt3_175b_distributed.sh we have provided an example of how to configure Megatron to train GPT-3 with 175 billion parameters on 1024 GPUs. The script is designed for slurm with the pyxis plugin but can be easily adopted to any other scheduler. It uses 8-way tensor parallelism and 16-way pipeline parallelism. With the options --global-batch-size 1536 and --rampup-batch-size 16 16 5859375, the training will start with a global batch size of 16 and linearly increase the global batch size to 1536 over 5,859,375 samples with incremental steps of 16. The training dataset can be either a single set or multiple datasets combined with a set of weights.

With the full global batch size of 1536 on 1024 A100 GPUs, each iteration takes around 32 seconds, resulting in 138 teraFLOPs per GPU, which is 44% of the theoretical peak FLOPs.
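The ramp-up implied by those flags can be sketched as follows; this is an illustrative reading of the three numbers (start, increment, ramp samples), not the repository's actual scheduling code.

```python
# Illustrative reading of --rampup-batch-size 16 16 5859375 together with
# --global-batch-size 1536: start at 16, step by 16 up to 1536, spreading the
# ramp over ~5,859,375 samples.
def rampup_points(start=16, increment=16, target=1536, ramp_samples=5_859_375):
    n_increments = (target - start) // increment        # 95 batch-size bumps
    samples_per_level = ramp_samples / n_increments     # samples at each level
    points, consumed, batch = [], 0.0, start
    while batch < target:
        points.append((int(consumed), batch))
        consumed += samples_per_level
        batch += increment
    points.append((int(consumed), target))
    return points

schedule = rampup_points()
print(schedule[0], schedule[1], "...", schedule[-1])
# (0, 16) (61677, 32) ... (5859375, 1536)
```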
Retro and InstructRetro

Retro (Borgeaud et al., 2022) is an autoregressive decoder-only language model (LM) pretrained with retrieval-augmentation. Retro features practical scalability to support large-scale pretraining from scratch by retrieving from trillions of tokens. Pretraining with retrieval provides a more efficient storage mechanism for factual knowledge, when compared to storing factual knowledge implicitly within the network's parameters, thus largely reducing model parameters while achieving lower perplexity than standard GPT. Retro also provides the flexibility to update the knowledge stored in LMs (Wang et al., 2023a) by updating the retrieval database without training LMs again.

InstructRetro (Wang et al., 2023b) further scales up the size of Retro to 48B, featuring the largest LLM pretrained with retrieval (as of December 2023). The obtained foundation model, Retro 48B, largely outperforms the GPT counterpart in terms of perplexity. With instruction tuning on Retro, InstructRetro demonstrates significant improvement over the instruction-tuned GPT on downstream tasks in the zero-shot setting. Specifically, the average improvement of InstructRetro is 7% over its GPT counterpart across 8 short-form QA tasks, and 10% over GPT across 4 challenging long-form QA tasks. We also find that one can ablate the encoder from the InstructRetro architecture and directly use the InstructRetro decoder backbone as GPT, while achieving comparable results.

In this repo, we provide an end-to-end reproduction guide to implement Retro and InstructRetro, covering:

- Retrieval database construction, which supports billions or even trillions of tokens as a large-scale retrieval database.
- Pretraining with retrieval, which supports pretraining from scratch and pretraining from a pretrained GPT model (Retro-fitting).
- Instruction tuning, where we provide an open-source instruction tuning dataset and the training recipe for instruction tuning on Retro.
- Downstream task evaluation, where we provide the text generation and evaluation scripts for zero-shot question answering tasks.

See tools/retro/README.md for a detailed overview.

Mamba-based Language Models

See examples/mamba for details.

Mixture of Experts

MoE (Mixture of Experts) is a powerful LLM architecture implemented in the Megatron-Core framework, designed to enhance the efficiency and scalability of large language models. It leverages Expert Parallelism, allowing multiple experts to be distributed across different workers, where each worker processes distinct batches of training samples. This method significantly increases computational throughput, enabling models to achieve high performance metrics, such as 47% MFU during BF16 training for 8x7B on H100.

Key features of MoE:

- Parallelism Techniques: MoE combines various parallelism strategies, including Expert Parallelism, Data Parallelism, Tensor Parallelism, Sequence Parallelism, Pipeline Parallelism, and Context Parallelism. This combination allows for handling larger model variants effectively.
- Router and Load Balancing: The system employs advanced routing mechanisms like the Top-K router and utilizes load-balancing algorithms to optimize token distribution among experts.
- Performance Optimizations: Techniques such as GroupedGEMM and FP8 training enhance the efficiency of MoE models, particularly when multiple experts are involved.
- Token Dispatch Mechanism: MoE supports both dropless and token-drop strategies to manage token distribution effectively across experts.

For a comprehensive overview of MoE training configurations and optimizations, please refer to the detailed README located at megatron/core/transformer/moe/README.md.

Evaluation and Tasks

We provide several command line arguments, detailed in the scripts listed below, to handle various zero-shot and fine-tuned downstream tasks. However, you can also finetune your model from a pretrained checkpoint on other corpora as desired. To do so, simply add the --finetune flag and adjust the input files and training parameters within the original training script. The iteration count will be reset to zero, and the optimizer and internal state will be reinitialized. If the fine-tuning is interrupted for any reason, be sure to remove the --finetune flag before continuing, otherwise the training will start again from the beginning.

Because evaluation requires substantially less memory than training, it may be advantageous to merge a model trained in parallel for use on fewer GPUs in downstream tasks. The following script accomplishes this. This example reads in a GPT model with 4-way tensor and 4-way pipeline model parallelism and writes out a model with 2-way tensor and 2-way pipeline model parallelism:

```shell
python tools/checkpoint/convert.py \
    --model-type GPT \
    --load-dir checkpoints/gpt3_tp4_pp4 \
    --save-dir checkpoints/gpt3_tp2_pp2 \
    --target-tensor-parallel-size 2 \
    --target-pipeline-parallel-size 2
```

Several downstream tasks are described for both GPT and BERT models below. They can be run in distributed and model-parallel modes with the same changes used in the training scripts.

GPT Text Generation

We have included a simple REST server to use for text generation in tools/run_text_generation_server.py.
You run it much like you would start a pretraining job, specifying an appropriate pretrained checkpoint. There are also a few optional parameters: temperature, top-k, and top-p. See --help or the source file for more information. See examples/inference/run_text_generation_server_345M.sh for an example of how to run the server.

Once the server is running, you can use tools/text_generation_cli.py to query it; it takes one argument, which is the host the server is running on:

```shell
tools/text_generation_cli.py localhost:5000
```

You can also use curl or any other tool to query the server directly:

```shell
curl 'http://localhost:5000/api' -X 'PUT' -H 'Content-Type: application/json; charset=UTF-8' -d '{"prompts":["Hello world"], "tokens_to_generate":1}'
```

See megatron/inference/text_generation_server.py for more API options.

Detoxify GPT via Self-generation

We include an example in examples/academic_paper_scripts/detxoify_lm/ to detoxify language models by leveraging the generative power of language models. See examples/academic_paper_scripts/detxoify_lm/README.md for step-by-step tutorials on how to perform domain-adaptive training and detoxify LM using a self-generated corpus.

GPT Evaluation

We include example scripts for GPT evaluation on WikiText perplexity evaluation and LAMBADA Cloze accuracy.

WikiText Perplexity Evaluation

For an even comparison with prior works, we evaluate perplexity on the word-level WikiText-103 test dataset, and appropriately compute perplexity given the change in tokens when using our subword tokenizer. We use the following command to run WikiText-103 evaluation on a 345M-parameter model:

```shell
TASK="WIKITEXT103"
VALID_DATA=<wikitext path>.txt
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
CHECKPOINT_PATH=checkpoints/gpt2_345m

COMMON_TASK_ARGS="--num-layers 24 \
    --hidden-size 1024 \
    --num-attention-heads 16 \
    --seq-length 1024 \
    --max-position-embeddings 1024 \
    --fp16 \
    --vocab-file $VOCAB_FILE"

python tasks/main.py \
    --task $TASK \
    $COMMON_TASK_ARGS \
    --valid-data $VALID_DATA \
    --tokenizer-type GPT2BPETokenizer \
    --merge-file $MERGE_FILE \
    --load $CHECKPOINT_PATH \
    --micro-batch-size 8 \
    --log-interval 10 \
    --no-load-optim \
    --no-load-rng
```

LAMBADA Cloze Accuracy

To compute LAMBADA cloze accuracy (the accuracy of predicting the last token given the preceding tokens) we utilize a detokenized, processed version of the LAMBADA dataset. We use the following command to run LAMBADA evaluation on a 345M-parameter model. Note that the --strict-lambada flag should be used to require whole-word matching. Ensure that lambada is part of the file path.

```shell
TASK="LAMBADA"
VALID_DATA=<lambada path>.json
VOCAB_FILE=gpt2-vocab.json
MERGE_FILE=gpt2-merges.txt
CHECKPOINT_PATH=checkpoints/gpt2_345m
COMMON_TASK_ARGS=<same as those in WikiText Perplexity Evaluation above>

python tasks/main.py \
    --task $TASK \
    $COMMON_TASK_ARGS \
    --valid-data $VALID_DATA \
    --tokenizer-type GPT2BPETokenizer \
    --strict-lambada \
    --merge-file $MERGE_FILE \
    --load $CHECKPOINT_PATH \
    --micro-batch-size 8 \
    --log-interval 10 \
    --no-load-optim \
    --no-load-rng
```

Further command line arguments are described in the source file main.py.

BERT Task Evaluation

RACE Evaluation

The following script finetunes the BERT model for evaluation on the RACE dataset. The TRAIN_DATA and VALID_DATA directories contain the RACE dataset as separate .txt files. Note that for RACE, the batch size is the number of RACE queries to evaluate. Since each RACE query has four samples, the effective batch size passed through the model will be four times the batch size specified on the command line.
TRAIN_DATA="data/RACE/train/middle" VALID_DATA="data/RACE/dev/middle \ data/RACE/dev/high" VOCAB_FILE=bert-vocab.txt PRETRAINED_CHECKPOINT=checkpoints/bert_345m CHECKPOINT_PATH=checkpoints/bert_345m_race COMMON_TASK_ARGS="--num-layers 24 \ --hidden-size 1024 \ --num-attention-heads 16 \ --seq-length 512 \ --max-position-embeddings 512 \ --fp16 \ --vocab-file $VOCAB_FILE" COMMON_TASK_ARGS_EXT="--train-data $TRAIN_DATA \ --valid-data $VALID_DATA \ --pretrained-checkpoint $PRETRAINED_CHECKPOINT \ --save-interval 10000 \ --save $CHECKPOINT_PATH \ --log-interval 100 \ --eval-interval 1000 \ --eval-iters 10 \ --weight-decay 1.0e-1" python tasks/main.py \ --task RACE \ $COMMON_TASK_ARGS \ $COMMON_TASK_ARGS_EXT \ --tokenizer-type BertWordPieceLowerCase \ --epochs 3 \ --micro-batch-size 4 \ --lr 1.0e-5 \ --lr-warmup-fraction 0.06 MNLI Evaluation The following script finetunes the BERT model for evaluation with the MultiNLI sentence pair corpus. Because the matching tasks are quite similar, the script can be quickly tweaked to work with the Quora Question Pairs (QQP) dataset as well. TRAIN_DATA="data/glue_data/MNLI/train.tsv" VALID_DATA="data/glue_data/MNLI/dev_matched.tsv \ data/glue_data/MNLI/dev_mismatched.tsv" PRETRAINED_CHECKPOINT=checkpoints/bert_345m VOCAB_FILE=bert-vocab.txt CHECKPOINT_PATH=checkpoints/bert_345m_mnli COMMON_TASK_ARGS=RACE Evaluation above> COMMON_TASK_ARGS_EXT=RACE Evaluation above> python tasks/main.py \ --task MNLI \ $COMMON_TASK_ARGS \ $COMMON_TASK_ARGS_EXT \ --tokenizer-type BertWordPieceLowerCase \ --epochs 5 \ --micro-batch-size 8 \ --lr 5.0e-5 \ --lr-warmup-fraction 0.065 Llama-2 Inference and Finetuning The Llama-2 family of models are an open-source set of pretrained & finetuned (for chat) models that have achieved strong results across a wide set of benchmarks. At the time of release, Llama-2 models achieved among the best results for open-source models, and were competitive with the closed-source GPT-3.5 model (see The Llama-2 checkpoints can be loaded into Megatron for inference and finetuning. See documentation here. Model Optimization and Deployment Megatron-Core (MCore) GPTModel family supports advanced quantization algorithms and high-performance inference through TensorRT-LLM. Quantization and TensorRT-LLM Deployment See Megatron Model Optimization and Deployment for llama2 and nemotron3 examples. Datasets We do not host any datasets for GPT or BERT training, however, we detail their collection so that our results may be reproduced. Collecting Wikipedia Training Data We recommend following the Wikipedia data extraction process specified by Google research: "the recommended pre-processing is to download the latest dump, extract the text with WikiExtractor.py, and then apply any necessary cleanup to convert it into plain text." We recommend using the --json argument when using WikiExtractor, which will dump the Wikipedia data into loose json format (one json object per line), making it more manageable on the file system and also readily consumable by our codebase. We recommend further preprocessing this json dataset with nltk punctuation standardization. For BERT training, use the --split-sentences flag to preprocess_data.py as described above to include sentence breaks in the produced index. If you'd like to use Wikipedia data for GPT training you should still clean it with nltk/spacy/ftfy, but do not use the --split-sentences flag. 
Collecting GPT Webtext Data

We utilize the publicly available OpenWebText library from jcpeterson and eukaryote31's work to download URLs. We then filter, clean, and deduplicate all downloaded content according to the procedure described in our openwebtext directory. For Reddit URLs corresponding to content up to October 2018, we arrived at approximately 37 GB of content.

Reproducibility

Megatron training can be bitwise reproducible; to enable this mode use --deterministic-mode. This means that the same training config run twice in the same HW and SW environment should produce identical model checkpoints, losses, and accuracy metric values (iteration time metrics may vary).

There are currently three known Megatron optimizations that break reproducibility whilst still producing almost identical training runs:

1. The specific NCCL algorithm that is used during an all-reduce (as specified by the environment variable NCCL_ALGO) is important. We have tested the following: ^NVLS, Tree, Ring, CollnetDirect, CollnetChain. The code admits the use of ^NVLS, which allows NCCL the choice of non-NVLS algorithms; its choice seems to be stable.
2. Flash attention is non-deterministic; do not use --use-flash-attn.
3. If using Transformer Engine, you must also set the environment variable NVTE_ALLOW_NONDETERMINISTIC_ALGO=0.

In addition, determinism has only been verified in NGC PyTorch containers 23.12 and newer. If you observe nondeterminism in Megatron training under other circumstances, please open an issue.

Checkpoint conversion

We support two forms of model conversion:

1. Model class conversion (i.e., the GPTModel in model.legacy vs. model.core)
2. Checkpoint format conversion (i.e., distributed vs. non-distributed checkpoint)

Model class conversion

Megatron supports converting between different model classes, including internal model classes (we currently have the older legacy models and the newer core models) and external model classes (such as Meta, Huggingface, Mistral, and Mixtral models). Additionally, during this conversion, one can update the parallel state of the model (i.e., changing tensor and pipeline model parallelism).

We provide the tool tools/checkpoint/convert.py to convert between model classes. Some important arguments include:

- --model-type: GPT or BERT
- --loader: format of the existing checkpoint. Supported formats include:
  - legacy: our older model classes (under megatron.legacy.model)
  - core: our newer model classes (under megatron.core.models)
  - llama_mistral: for loading Llama and Mistral models (supports Meta and Huggingface formats)
  - mixtral_hf: for loading Mixtral models (Huggingface only)
- --load-dir: directory for loading the existing checkpoint
- --saver: legacy or core (see descriptions under --loader)
- --save-dir: directory for saving the new checkpoint
- --target-tensor-parallel-size: new tensor model parallel size
- --target-pipeline-parallel-size: new pipeline model parallel size

For more argument details, please see the main script (convert.py), loader scripts (loader_core.py, loader_legacy.py, loader_llama_mistral.py, loader_mixtral_hf.py), or saver scripts (saver_core.py, saver_legacy.py).
An example command for converting a GPT model from the old format (legacy) to the new format (core) would look as follows:

```shell
python tools/checkpoint/convert.py \
    --model-type GPT \
    --loader legacy \
    --load-dir ${LEGACY_FORMAT_DIR} \
    --saver core \
    --save-dir ${CORE_FORMAT_DIR} \
    --target-tensor-parallel-size ${TP} \
    --target-pipeline-parallel-size ${PP}
```

For examples of converting Llama/Mistral models into Megatron, please see here.

Checkpoint format conversion

Megatron offers multiple checkpoint formats, including:

- torch: basic checkpoint format with sequential reads & writes, tied to a specific tensor/pipeline model parallel state (TP/PP states, respectively). (While a specific checkpoint is tied to a specific TP/PP state, a checkpoint can still be manually converted via the model class converter described above.)
- torch_dist: distributed checkpoint format, for fast parallel reads & writes, which is also parallel-state agnostic (i.e., one can load the same checkpoint to different TP/PP setups).

Generally speaking, torch_dist is the more modern and recommended checkpoint format due to its speed. However, depending on the use case, it may be desirable to convert between these two formats. To do so, launch your training script (e.g., via pretrain_gpt.py) as you normally would, but with two additional arguments:

- --ckpt-convert-format ${FORMAT}: ${FORMAT} can be one of torch or torch_dist, as described above.
- --ckpt-convert-save ${PATH_TO_SAVE_NEW_FORMAT}: this path should be different from your existing --load/--save paths, to avoid overwriting the existing checkpoint. After converting, use this new path for your --load/--save paths.

The general idea of this checkpoint format converter is that it launches the model just as one normally would for training, but before running any training iterations, it saves to the new checkpoint format and then exits. It is important to note that all other launch args should remain the same, in order for the system to understand the previous checkpoint format.

Projects Using Megatron

Below are some of the projects where we have directly used Megatron:

- BERT and GPT Studies Using Megatron
- BioMegatron: Larger Biomedical Domain Language Model
- End-to-End Training of Neural Retrievers for Open-Domain Question Answering
- Large Scale Multi-Actor Generative Dialog Modeling
- Local Knowledge Powered Conversational Agents
- MEGATRON-CNTRL: Controllable Story Generation with External Knowledge Using Large-Scale Language Models
- RACE Reading Comprehension Dataset Leaderboard
- Training Question Answering Models From Synthetic Data
- Few-shot Instruction Prompts for Pretrained Language Models to Detect Social Biases
- Exploring the Limits of Domain-Adaptive Training for Detoxifying Large-Scale Language Models
- Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model
- Multi-Stage Prompting for Knowledgeable Dialogue Generation
- Evaluating Parameter Efficient Learning for Generation
- Shall We Pretrain Autoregressive Language Models with Retrieval? A Comprehensive Study
- InstructRetro: Instruction Tuning post Retrieval-Augmented Pretraining
- An Empirical Study of Mamba-based Language Models
222
Quantum Reports | Developing and Analyzing the Defect-Based Surface Codes Using Optimization Algorithms
===============
Open Access Article: Peer-Review Record

Developing and Analyzing the Defect-Based Surface Codes Using Optimization Algorithms

Quantum Rep. 2025, 7(2), 25; by Samira Sayedsalehi 1, Nader Bagherzadeh 1, Alberto A. Del Barrio 2, Guillermo Botella 2 and Ratko Pilipović 3

Reviewer 1: Anonymous. Reviewer 2: Anonymous.

Submission received: 28 April 2025 / Revised: 28 May 2025 / Accepted: 29 May 2025 / Published: 31 May 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

This manuscript discusses a method for optimizing the defect layout in defect-based surface-code constructions to improve qubit utilization. The optimized layout shows a density comparable to that of a patch-based surface-code construction. However, there are several questions that remain to be clarified.

One of the main challenges for this work is persuading readers that defect-based surface codes should still be considered when patch-based surface codes are widely regarded as more resource-efficient. The authors support their position with two arguments: (1) for the same code distance, defect-based surface codes can achieve lower logical error rates than patch-based surface codes with the same code distance, and (2) patch-based constructions require ancillary qubit patches for CNOT operations, whereas defect-based surface codes do not, which makes the latter preferable when considering logical gate operations when code densities are similar.

Regarding point (1), the primary evidence is Fig.
Regarding point (1), the primary evidence is Fig. 10, in which an 18×18 lattice is used to build two distance-8 logical qubits and is compared with a single distance-8 patch-based logical qubit. I do not find this comparison fair, because an 18×18 array can hold more data qubits than two distance-12 patch-based logical qubits. To me, this basically shows that, given more physical qubits, a defect-based construction can yield lower logical error rates than a patch-based one with the same code distance, which does not seem to be an equal comparison. For point (2), I agree that braiding in defect-based surface codes does not require auxiliary qubits. Nevertheless, during braiding for gate operations, the limited patch size may cause the effective code distance to change. This work does not seem to present a clear method to guarantee that braiding preserves the code distance, so the argument is also weakly supported. In addition, the manuscript frames layout design as a graph-optimization problem, but this abstraction originates from Ref. of this manuscript. Consequently, the algorithmic novelty appears to be the adaptation of genetic algorithms to solve the problem. In conclusion, given the present narrative, I do not see a significant potential impact, especially for the surface-code quantum-error-correction community. I recommend that the authors either provide stronger evidence that defect-based surface codes can match the resource efficiency of patch-based constructions or reshape the narrative to define and highlight the work's contribution more clearly. Until then, I cannot recommend publication of this manuscript.

Author Response

Response to Reviewer 1 Comments

1. Summary

Thank you very much for taking the time to review our manuscript. We sincerely appreciate your thoughtful and constructive feedback. Below, we provide detailed responses to each of your comments. The corresponding revisions have been incorporated into the manuscript and are highlighted in colored text in the re-submitted files. To distinguish the changes, we have used red text for modifications made in response to Reviewer 1's comments and blue text for those made in response to Reviewer 2's comments. Additionally, each revision in the manuscript is labeled with tags such as (RX:CY) to indicate that the change addresses Reviewer X's Comment Y, allowing for clear mapping between the reviewers' comments and the associated modifications.

2. Questions for General Evaluation (Reviewer's Evaluation)

- Does the introduction provide sufficient background and include all relevant references? (Can be improved)
- Is the research design appropriate? (Can be improved)
- Are the methods adequately described? (Yes)
- Are the results clearly presented? (Yes)
- Are the conclusions supported by the results? (Can be improved)

Response and Revisions: Thank you for your feedback. We have improved the introduction and conclusion based on your comments, and we also made a few changes to the research approach, as appropriate. All modifications are listed in the point-by-point response below.

3. Point-by-point response to Comments and Suggestions for Authors
Comment 1: Regarding point (1), the primary evidence is Fig. 10, in which an 18×18 lattice is used to build two distance-8 logical qubits and is compared with a single distance-8 patch-based logical qubit. I do not find this comparison fair, because an 18×18 array can hold more data qubits than two distance-12 patch-based logical qubits. To me, this basically shows that, given more physical qubits, a defect-based construction can yield lower logical error rates than a patch-based one with the same code distance, which does not seem to be an equal comparison.

Response 1: We acknowledge this concern and understand the need for a fairer comparison. Our intention was to study how the code distance affects the error behavior of surface codes by utilizing optimization algorithms in a different scenario. However, we agree that directly comparing two distance-8 defect-based logical qubits to a single distance-8 patch-based logical qubit may not provide a balanced assessment of efficiency. To address your concern, we removed the 8×8 surface code without holes from the comparison. Our primary goal in that section is to highlight the influence of code distance on error characteristics, rather than to make a direct resource-efficiency comparison between patch-based and defect-based approaches in Subsection 4.4.

Comment 2: For point (2), I agree that braiding in defect-based surface codes does not require auxiliary qubits. Nevertheless, during braiding for gate operations, the limited patch size may cause the effective code distance to change. It seems that this work does not present a clear method to guarantee that braiding preserves the code distance, so the argument is also weakly supported.

Response 2: Thank you for highlighting this important aspect. Indeed, preserving the effective code distance during braiding operations is essential for reliable quantum computation. Logical operations between qubits are performed by extending and contracting topological defects within the lattice structure, as described by Fowler et al. and Brown et al. However, the effective code distance may vary during braiding due to the dynamic repositioning of defects. Brown et al. discuss strategies for maintaining code distance during such logical operations, emphasizing the need for careful control of defect paths and adequate spacing to prevent logical errors. Nevertheless, detailed methodologies for implementing logical operations while explicitly preserving code distance are not fully developed in these works, and this remains a promising direction for future research. To highlight this challenge, we have added a description of braiding and its implications for code distance in the revised manuscript.

Comment 3: In addition, the manuscript frames layout design as a graph-optimization problem, but this abstraction originates from Ref. of this manuscript. Consequently, the algorithmic novelty appears to be the adaptation of genetic algorithms to solve the problem.

Response 3: You are correct that the fundamental concept of modeling defect layouts as a graph-optimization problem originates from previous works, particularly Ref. Squab, the software used in this work to simulate the surface code, is also based on Ref., which describes the underlying framework of defect-based surface codes. In our manuscript, we briefly introduced the concept of defect-based surface codes and how they function.
Our focus is not on reinventing the defect-based approach, but rather on evaluating its limitations and exploring improved constructions that can yield a greater number of logical qubits while maintaining acceptable logical error rates. To this end, we employ genetic algorithms (GA) and simulated annealing (SA) to address the trade-off between logical qubit density and error performance, an aspect that is not thoroughly discussed in Ref. While that work defines the theoretical properties and potential of defect-based codes, it does not analyze in detail how increasing the number of logical qubits impacts the logical error rate. This is the gap our study aims to fill. To address your concern, we revised the introduction to clearly present the paper's contributions.

Comment 4: In conclusion, given the present narrative, I do not see a significant potential impact, especially for the surface-code quantum-error-correction community. I recommend that the authors either provide stronger evidence that defect-based surface codes can match the resource efficiency of patch-based constructions or reshape the narrative to define and highlight the work's contribution more clearly. Until then, I cannot recommend publication of this manuscript.

Response 4: We thank the reviewer for highlighting this concern. The primary aim of this paper is not to compare patch-based and defect-based methods for logical qubit encoding. Instead, our focus is specifically on exploring the limitations and potential of defect-based encoding. To address the reviewer's comments, we have removed any statements that could imply claims about encoding-efficiency comparisons between these two methods from the introduction, results, and conclusions. As indicated in our conclusion, we consider that a combined approach leveraging both strategies may lead to more efficient logical encodings, and we intend to investigate this direction in our future research.

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The study focuses on defect-based approaches capable of encoding multiple logical qubits. The authors evaluate the maximum number of logical qubits for a given error rate using the SA and GA optimization algorithms. They study the limitations of the defect-based approach and the impact of various hole types on logical qubit encoding. The development and analysis of defect-based surface codes is relevant to distributed quantum computing based on multiple-qubit systems using imperfect hardware. Defect-based surface codes for encoding multiple-qubit systems are potentially very promising and represent a state-of-the-art approach to quantum computing. There are very few papers concerning this topic. For example, one of these few is the work by Nagayama et al. (cited as ref. 22), who also consider the surface-code error-correction subroutine on a defective lattice. Nagayama et al. performed a simulation of randomly placed faulty devices and showed that discarding bad lattices makes the ensemble better, showing the trade-off between the cost of culling and the strength of fault tolerance of an ensemble. In this work, the authors incorporate holes into the surface-code lattice, which increases the number of encoded logical qubits and thus the efficiency, but increases the logical error rate due to the reduced code distance. Thus, the authors propose evaluating the maximum number of logical qubits for a given error rate with an optimization algorithm.
Although the optimization algorithms used (Simulated Annealing and Genetic Algorithms) are not new (refs. 36, 37), the assessment of the maximum number of logical qubits for a given error rate can be regarded as an original contribution to the field of quantum computing, addressing a specific gap in the study of the trade-off between maximizing the number of encoded logical qubits and maintaining satisfactory error correction. Compared to other published material in the subject area, this work adds a study of new possibilities in the application of defect-based approaches. The authors have obtained a number of practically important results and come to important conclusions about the application of the defect-based method, including revealing the properties of partially open holes, which help encode more logical qubits than closed holes, opening a new avenue for improving code density beyond defect-free approaches. The work per se is a valuable contribution to the quantum computing field and is written very well, so the manuscript can be published as it is. However, a few specific issues can be addressed:

1. Why have the authors chosen such high temperatures (up to 7200 K) for Tmax when considering the influence of the SA hyperparameters on the number of obtained logical qubits in a 2D lattice?

2. For the common reader, it would be interesting to learn how the discussed codes are physically implemented: for instance, what measurements are required to detect qubit erasures, or how, in principle, noise generators are constructed for quantum computers in practice?

The conclusions are consistent with the evidence and arguments presented in the manuscript, aptly addressing the main questions of the presented work. The references are appropriate.

Comments for author File: Comments.pdf

Author Response

Response to Reviewer 2 Comments

1. Summary

Thank you very much for taking the time to review our manuscript. We sincerely appreciate your thoughtful and constructive feedback. Below, we provide detailed responses to each of your comments. The corresponding revisions have been incorporated into the manuscript and are highlighted in colored text in the re-submitted files. To distinguish the changes, we have used red text for modifications made in response to Reviewer 1's comments and blue text for those made in response to Reviewer 2's comments. Additionally, each revision in the manuscript is labeled with tags such as (RX:CY) to indicate that the change addresses Reviewer X's Comment Y, allowing for clear mapping between the reviewers' comments and the associated modifications.

2. Questions for General Evaluation (Reviewer's Evaluation)

- Does the introduction provide sufficient background and include all relevant references? (Yes)
- Is the research design appropriate? (Yes)
- Are the methods adequately described? (Yes)
- Are the results clearly presented? (Yes)
- Are the conclusions supported by the results? (Yes)

Response and Revisions: Thank you for your careful review of our work and for providing constructive feedback that helped improve the quality and clarity of the manuscript.

3. Point-by-point response to Comments and Suggestions for Authors
Comment 1: Why have the authors chosen such high temperatures (up to 7200 K) for Tmax when considering the influence of the SA hyperparameters on the number of obtained logical qubits in a 2D lattice?

Response 1: We thank the reviewer for raising this important point. In Simulated Annealing (SA), parameters such as the maximal temperature and the number of cooling steps significantly impact the exploration of the solution space. The maximal temperature plays a key role in SA: the algorithm is more permissive at high temperatures, allowing it to explore a wide range of solutions, including those that may initially seem suboptimal. By selecting higher maximal temperatures and increasing the number of cooling steps, we aimed to investigate whether the Simulated Annealing algorithm could effectively explore the solution space to find lattices that encode a greater number of logical qubits. The results presented in Table 2 indicate that the maximum temperature parameter of the SA algorithm has a negligible effect on the obtained number of logical qubits. In contrast, increasing the number of annealing steps results in marginal improvement, yielding less than one additional logical qubit on average.
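To make the loop described in this response concrete, here is a minimal sketch of a simulated-annealing search over defect layouts. It is illustrative only: the authors' implementation evaluates layouts with Squab, whereas the `energy` and `neighbor` callables below are hypothetical placeholders.

```python
import math
import random

def simulated_annealing(initial_layout, energy, neighbor,
                        t_max=7200.0, t_min=1.0, steps=5000):
    """Generic SA loop. 'energy' should be lower for better layouts,
    e.g. -(number of logical qubits), penalized when the logical error
    rate exceeds a target threshold. All interfaces are illustrative."""
    current = initial_layout
    e_current = energy(current)
    best, e_best = current, e_current
    for k in range(steps):
        # Geometric cooling schedule from t_max down to t_min.
        t = t_max * (t_min / t_max) ** (k / (steps - 1))
        candidate = neighbor(current)      # e.g. move, add, or remove a hole
        e_candidate = energy(candidate)
        # Metropolis acceptance: always accept improvements; accept worse
        # layouts with probability exp(-delta/t). This is what makes a
        # high t_max permissive early in the search.
        delta = e_candidate - e_current
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current, e_current = candidate, e_candidate
            if e_current < e_best:
                best, e_best = current, e_current
    return best
```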
To address this point, we added an additional explanation in Subsection 4.3.

Comment 2: For the common reader, it would be interesting to learn how the discussed codes are physically implemented: for instance, what measurements are required to detect qubit erasures, or how, in principle, noise generators are constructed for quantum computers in practice?

Response 2: We thank the reviewer for this insightful comment. We agree that a discussion on the physical implementation of qubit erasures and noise generation would be valuable for the general reader. In the current version of the manuscript, we briefly mention in Sections 3.1.3 and 5 that our simulations employ an idealized erasure-channel model, provided by Squab, which assumes perfect knowledge of erasure locations. Our work is focused on modeling and analysis rather than experimental realization, but we recognize the importance of connecting simulation assumptions to practical hardware scenarios. To address this point, we have expanded Section 5 to include a brief discussion of how erasures and noise processes are implemented and detected in physical quantum systems.

Comment 3: The conclusions are consistent with the evidence and arguments presented in the manuscript, aptly addressing the main questions of the presented work. The references are appropriate.

Response 3: We are grateful for the reviewer's thoughtful evaluation and helpful suggestions. Once again, we sincerely thank the reviewer for their careful reading of our work and for providing constructive feedback that helped improve the quality and clarity of the manuscript. We greatly appreciate your time and effort in reviewing our submission.

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

In the revised manuscript, the authors have restructured the introduction section and removed some comparisons with patch-based surface-code constructions related to efficiency in physical qubit usage. In the current version, the introduction does not indicate a comparison between the two encoding schemes, and the work largely focuses on logical qubit encoding optimization for defect-based surface-code patches. Although the potential impact of this work may still be limited, as discussed in my previous review report, the contribution of the paper is now better presented. On the other hand, in the later results sections, the authors decided to remove the curves for the patch-based surface code from Fig. 10 specifically. I believe this change removes the confusion regarding an unfair comparison between the two encoding methods. However, in the discussion section, the authors still compare the encoding efficiency to that of the patch-based surface code (I believe Ref. 7 is patch-based). To avoid giving readers the impression that the authors aim to claim the defect-based surface code is more efficient than the patch-based encoding method, I suggest comparing their optimized encoding method to previously demonstrated logical qubit encoding schemes within defect-based surface-code frameworks. A simple example would be adopting the encoding method demonstrated by Fowler in his well-known paper [arXiv:1208.0928, something like Fig. 26], computing how many logical qubits can be encoded, and comparing this result to the proposed method.
Alternatively, since the paper refers to Delfosse et al.'s theoretical work, the authors could also compare the obtained scaling results to the analytical scaling, demonstrating how closely their optimization approaches the optimal solution. Overall, the current version resolves some confusion present in the previous version. However, the discussion regarding the resource efficiency of the current optimization method may still require further elaboration. I am willing to recommend this manuscript for publication after this is addressed.

Author Response

Response to Reviewer 1 Comments

1. Summary

Thank you for your constructive and insightful feedback. We have carefully revised the manuscript in response to your comments. The corresponding changes have been incorporated into the manuscript and are highlighted in colored text in the attached file. To distinguish the changes, red text is used for modifications made in response to the reviewer's comments. Additionally, each revision in the manuscript is labeled with tags such as (Mx) to indicate specific modifications.

2. Questions for General Evaluation (Reviewer's Evaluation)

- Does the introduction provide sufficient background and include all relevant references? (Yes)
- Is the research design appropriate? (Yes)
- Are the methods adequately described? (Yes)
- Are the results clearly presented? (Yes)
- Are the conclusions supported by the results? (Yes)
- Are all figures and tables clear and well-presented? (Can be improved)

Response and Revisions: Thank you for your feedback. We have improved the discussion based on your comments. All modifications are listed in the point-by-point response below.

3. Point-by-point response to Comments and Suggestions for Authors

Comment 1: In the revised manuscript, the authors have restructured the introduction section and removed some comparisons with patch-based surface-code constructions related to efficiency in physical qubit usage. In the current version, the introduction does not indicate a comparison between the two encoding schemes, and the work largely focuses on logical qubit encoding optimization for defect-based surface-code patches. Although the potential impact of this work may still be limited, as discussed in my previous review report, the contribution of the paper is now better presented.

Response 1: Thank you for acknowledging the improvements in our revised manuscript. To further highlight our contribution, we have also modified a sentence in the Introduction section, as indicated by tag (M1) in Paragraph 7.

Comment 2: On the other hand, in the later results sections, the authors decided to remove the curves for the patch-based surface code from Fig. 10 specifically. I believe this change removes the confusion regarding an unfair comparison between the two encoding methods. However, in the discussion section, the authors still compare the encoding efficiency to that of the patch-based surface code (I believe Ref. 7 is patch-based). To avoid giving readers the impression that the authors aim to claim the defect-based surface code is more efficient than the patch-based encoding method, I suggest comparing their optimized encoding method to previously demonstrated logical qubit encoding schemes within defect-based surface-code frameworks. A simple example would be adopting the encoding method demonstrated by Fowler in his well-known paper [arXiv:1208.0928, something like Fig. 26], computing how many logical qubits can be encoded, and comparing this result to the proposed method.
Alternatively, since the paper refers to Delfosse et al.'s theoretical work, the authors could also compare the obtained scaling results to the analytical scaling, demonstrating how closely their optimization approaches the optimal solution.

Response 2: As suggested, we have modified the second paragraph of the Discussion section to clarify the comparisons made. Specifically, we removed any unintended implication that the defect-based surface code is more efficient than patch-based methods. Instead, we now compare our results with the theoretical bounds for defect-based encoding presented by Delfosse et al., demonstrating that our optimization approaches these limits while also taking logical error thresholds into account (M2).

Comment 3: Overall, the current version resolves some confusion present in the previous version. However, the discussion regarding the resource efficiency of the current optimization method may still require further elaboration. I am willing to recommend this manuscript for publication after this is addressed.

Response 3: We hope these revisions adequately address your concerns and enhance the clarity and rigor of our manuscript. Thank you again for your valuable comments and for considering our work for publication.

Author Response File: Author Response.pdf
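For context on why code distance dominates this review exchange: a commonly quoted heuristic for surface codes (standard in the literature, not a result of this manuscript) is that, below the threshold physical error rate $p_{\mathrm{th}}$, the logical error rate falls exponentially in the code distance $d$,

\[ p_L \approx A \left( \frac{p}{p_{\mathrm{th}}} \right)^{\lfloor (d+1)/2 \rfloor}, \]

where $p$ is the physical error rate and $A$ is an order-one prefactor. Any construction step (such as braiding) that reduces the effective $d$ therefore costs exponentially in logical fidelity, which is the substance of the reviewer's second concern.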
223
John R. Taylor, Classical Mechanics

John Taylor brings his expertise in classical mechanics to his book, making it a clear and insightful read. He is Professor of Physics at the University of Colorado Boulder and has taught at several prestigious universities worldwide. With a background in mathematics from Cambridge University and physics from the University of California, Berkeley, he has published numerous articles and won several teaching awards. He won one of eleven Gold Medals in the national "Professor of the Year" program, was named Colorado Professor of the Year in 1998, and has been invited to tour New Zealand and perform "Mr. Wizard" shows at various museums and colleges.

Taylor's textbook covers various topics in mechanics, including Newton's laws, projectiles, momentum, energy, oscillations, the calculus of variations, Lagrange's equations, two-body central-force problems, noninertial frames, and the rotational motion of rigid bodies. While some chapters are straightforward, others, such as the one on Lagrange's equations, provide an excellent treatment of the subject. Taylor's pedagogical style is praised for its clarity, making complex concepts more accessible. However, some topics, like Noether's theorem and symmetries, are not thoroughly discussed. Taylor's Classical Mechanics is an excellent textbook that provides a comprehensive introduction to classical mechanics. While some chapters may feel laborious or lacking in physical intuition, others offer a wealth of interesting concepts and ideas.

Discussion of the precession of a top due to a weak torque appears in Chapter 10 on the motion of rigid bodies. Chapter 11: Coupled Oscillations and Normal Modes delves into the world of linear algebra with great mathematical intensity; the discussion is limited to the linear case, making the results somewhat predictable. Chapter 12: Nonlinear Mechanics and Chaos presents a thorough and engaging exploration of nonlinear mechanics. Chapter 13: Hamiltonian Mechanics, which covers topics such as phase-space orbits and Liouville's theorem, may seem underwhelming at first glance but provides valuable insights into the subject. The last three chapters cover standard material in collision theory, special relativity, and continuum mechanics. While these topics are well covered, some readers may find the problems to be uninteresting or too easy.

Despite this, Taylor's Classical Mechanics is an outstanding resource for aspiring physicists. With 30-50 problems at the end of each chapter, readers will have ample opportunities to apply their knowledge in a practical setting. The book's style, courtesy of John R. Taylor, is both informal and rigorous, making it an engaging read. Taylor's writing has a way of making physics come alive, teaching not just the concepts but also how to think like a physicist.

This text discusses the strengths and weaknesses of Taylor's Classical Mechanics as an undergraduate textbook. The book excels in providing detailed explanations and examples, but may fall short in terms of cohesiveness and narrative flow. Some reviewers appreciate its clear prose and mathematics, while others find it lacking in certain areas.
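To give a concrete taste of the Lagrange's-equations material praised above (a standard textbook example, not an excerpt from the book): for a simple pendulum of length $l$ and mass $m$, with $\theta$ measured from the vertical, the Lagrangian and the Euler-Lagrange equation give

\[ \mathcal{L} = \tfrac{1}{2} m l^2 \dot\theta^2 + m g l \cos\theta, \qquad \frac{d}{dt}\,\frac{\partial \mathcal{L}}{\partial \dot\theta} = \frac{\partial \mathcal{L}}{\partial \theta} \;\Longrightarrow\; m l^2 \ddot\theta = -m g l \sin\theta, \]

i.e. $\ddot\theta = -(g/l)\sin\theta$, recovering Newton's result without ever resolving the constraint force in the rod.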
Despite this, the text is still considered a valuable resource for intermediate to advanced classical mechanics studies. It covers various topics with sufficient mathematical rigor and is easy to follow for beginners. However, some critics note that the book could benefit from a more cohesive narrative structure and additional exercises or companion texts to complement its content. Overall, it remains a solid textbook in the field of physics, particularly notable for its clarity and engaging examples.

One instructor reports struggling to fit the book's structure to a particular course: the existing format spends so much time reviewing concepts already covered in Physics I that it is hard to introduce novel material quickly enough, leaving insufficient time for students to fully grasp new ideas and methods like Lagrangian mechanics. The same instructor notes that while Taylor's book excels in clarity, it sometimes lacks real-world applications, specifically numerical computations, which could be enhanced with programming exercises.

Taylor's book on classical mechanics is considered one of the best textbooks for this subject. It provides clear explanations and derivations that are easy to follow and understand. The book covers a wide range of topics in classical mechanics, including conservation laws, oscillations, and chaos theory. However, some reviewers have found the examples and problems to be either too trivial or too hard, suggesting that more examples and practice exercises would be beneficial. Despite this, the book is well regarded for its clarity and insight, making it a valuable resource for students who have studied introductory mechanics.

Taylor's Classical Mechanics is a comprehensive and engaging introduction to the 400-year-old subject of classical mechanics, making it as relevant today as ever. With clarity and insight, John Taylor conveys his excitement about this fundamental topic. The book, adopted by over 450 colleges and universities in the USA and Canada, has been translated into six languages. It is designed for students who have already studied some mechanics in an introductory physics course, providing thorough coverage of topics such as conservation laws, oscillations, Lagrangian mechanics, and chaos theory.
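As a sample of the oscillations material just mentioned (standard results for the driven damped oscillator, stated here for illustration rather than quoted from the text): with damping constant $\beta$ and natural frequency $\omega_0$, the equation of motion and the steady-state amplitude are

\[ \ddot{x} + 2\beta\dot{x} + \omega_0^2 x = f_0 \cos\omega t, \qquad A^2 = \frac{f_0^2}{(\omega_0^2 - \omega^2)^2 + 4\beta^2\omega^2}, \]

so the response is largest when the drive frequency $\omega$ is close to $\omega_0$, with the sharpness of the resonance set by $\beta$.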
The book includes a large selection of 744 problems classified by topic and difficulty, making it an excellent resource for students. The topics listed in the table of contents include:

Chapter 4: Energy and Work / Potential Energy and Conservative Forces / Force as the Gradient of Potential Energy / The Second Condition that F be Conservative / Time-Dependent Potential Energy / Energy for Linear One-Dimensional Systems / Curvilinear One-Dimensional Systems / Central Forces / Energy of Interaction of Two Particles / The Energy of a Multiparticle System / Problems for Chapter 4

Chapter 5: Oscillations / Hooke's Law / Simple Harmonic Motion / Two-Dimensional Oscillators / Damped Oscillators / Driven Damped Oscillations / Resonance / Fourier Series / Fourier Series Solution for the Driven Oscillator / The RMS Displacement / Parseval's Theorem / Problems for Chapter 5

Chapter 6: Calculus of Variations / Two Examples / The Euler-Lagrange Equation / Applications of the Euler-Lagrange Equation / More than Two Variables / Problems for Chapter 6

Chapter 7: Lagrange's Equations / Unconstrained Motion / Constrained Systems; an Example / Constrained Systems in General / Proof of Lagrange's Equations with Constraints / Examples of Lagrange's Equations / Conclusion / Conservation Laws in Lagrangian Mechanics / Lagrange's Equations for Magnetic Forces / Lagrange Multipliers and Constraint Forces / Problems for Chapter 7

Chapter 8: Two-Body Central Force Problems / The Problem / CM and Relative Coordinates; Reduced Mass / The Equations of Motion / The Equivalent One-Dimensional Problems / The Equation of the Orbit / The Kepler Orbits / The Unbounded Kepler Orbits / Changes of Orbit / Problems for Chapter 8

Chapter 9: Mechanics in Noninertial Frames / Acceleration without Rotation / The Tides / The Angular Velocity Vector / Time Derivatives in a Rotating Frame / Newton's Second Law in a Rotating Frame / The Centrifugal Force / The Coriolis Force / Free Fall and the Coriolis Force / The Foucault Pendulum / Coriolis Force and Coriolis Acceleration / Problems for Chapter 9

Chapter 10: Motion of Rigid Bodies / Properties of the Center of Mass / Rotation about a Fixed Axis / Rotation about Any Axis; the Inertia Tensor / Principal Axes of Inertia / Finding the Principal Axes; Eigenvalue Equations / Precession of a Top Due to a Weak Torque / Euler's Equations / Euler's Equations with Zero Torque / Euler Angles / Motion of a Spinning Top / Problems for Chapter 10

Chapter 11: Coupled Oscillators and Normal Modes / Two Masses and Three Springs / Identical Springs and Equal Masses / Two Weakly Coupled Oscillators / Lagrangian Approach; the Double Pendulum / The General Case / Three Coupled Pendulums / Normal Coordinates / Problems for Chapter 11

Part II: Further Topics

Chapter 12: Nonlinear Mechanics and Chaos / Linearity and Nonlinearity / The Driven Damped Pendulum or DDP / Some Expected Features of the DDP / The DDP; Approach to Chaos / Chaos and Sensitivity to Initial Conditions / Bifurcation Diagrams / State-Space Orbits / Poincaré Sections

The remaining chapters cover Hamiltonian mechanics, collision theory, special relativity, and continuum mechanics, including topics such as the basic variables, Lagrange's equations, phase-space orbits, scattering angles, relativistic velocity-addition formulas, four-dimensional space-time, tensors, electrodynamics, and stress-strain relations. Classical Mechanics, written by John R. Taylor, focuses on providing clarity and insight to students who have a background in introductory physics courses.
It covers various topics such as conservation laws, oscillations, Lagrangian mechanics, two-body problems, non-inertial frames, rigid bodies, normal modes, chaos theory, Hamiltonian mechanics, and continuum mechanics with unusual clarity. The text introduces a few elementary systems designed to give a clear grasp of key concepts, despite their widespread discussion elsewhere. Each chapter concludes with an extensive selection of engaging problems, totaling 744 and categorized by topic and difficulty level; these range from basic exercises to complex computer projects. Taylor's Classical Mechanics has been adopted by over 450 institutions in the USA and Canada and has been translated into six languages. It offers a comprehensive and accessible introduction to this centuries-old subject, which remains captivating today, and the author successfully conveys both excitement and profound understanding of the material. Additional resources include an Instructors' Manual for adopting professors, which provides detailed guidance; adopting professors can also access digital art from the book.
224
Carpal Tunnel Syndrome

Carpal tunnel syndrome is an extremely common wrist issue. It happens when irritation or damage inside the carpal tunnel in your wrist causes swelling that presses on your median nerve. Carpal tunnel syndrome symptoms include wrist pain, tingling, numbness and weakness. A healthcare provider will suggest treatments like wearing a splint, physical therapy or surgery.

Overview

What is carpal tunnel syndrome?

Carpal tunnel syndrome is a health condition that causes symptoms like pain, numbness, tingling and weakness in your hand and wrist.

The carpal tunnel is a space in your wrist bones. It's like a tunnel road through a mountainside, but instead of making room in the rock for cars, it's a passageway in your bones that lets tendons, ligaments and nerves pass through it to reach your hand. Carpal tunnel syndrome happens when something irritates or puts extra pressure on the median nerve that runs through your carpal tunnel. The median nerve helps you move your forearm and gives feeling to most of your fingers and hands. If it's damaged or pressed against the walls of your carpal tunnel, it can send extra or incorrect feelings to your hand and wrist.

Visit a healthcare provider if you're experiencing pain, numbness or tingling in your hands and wrists. Carpal tunnel syndrome usually responds well to treatment, but it can permanently damage your median nerve if it's not treated soon enough.

How common is carpal tunnel syndrome?

Carpal tunnel syndrome is extremely common. Experts estimate that around 3 out of every 1,000 people in the U.S. experience carpal tunnel syndrome each year.

Symptoms and Causes

What are the signs and symptoms of carpal tunnel syndrome?

The most common carpal tunnel symptoms include:

- Numbness in your wrist, hand or fingers (especially your fingertips)
- Pain in your wrist, hand or fingers
- Tingling
- Trouble using your hands to hold or control objects (like holding your phone, gripping the steering wheel, holding a pen or typing on a keyboard, for example)

Carpal tunnel syndrome usually develops slowly. You might only experience minor symptoms at first that may get worse over time. People usually first notice symptoms at night; pain or tingling may wake you up. Over time, the symptoms may start affecting you during the day, especially if you do the same kind of motion a lot at work, like typing, writing or using tools.

What does carpal tunnel syndrome feel like?

Carpal tunnel syndrome can make your wrists, hands and fingers feel uncomfortable. It may feel like pinpricks or like your fingers or hands "fell asleep." You may also feel numbness that makes you want to shake your hands like you're flinging water off them. Carpal tunnel syndrome pain usually feels like it's coming from inside your hand or wrist, not a skin-level pain like a cut. The pain may feel like a sharp, burning stab or a constant ache. Some people with carpal tunnel syndrome feel like their hands and grip are weaker than normal.
It might feel like you can't get a solid hold on a mug or pen, even if you're concentrating on it. Your hands and fingers may feel clumsy or less able to perform precise motions, like buttoning a shirt or aiming a key into a lock.

What causes carpal tunnel syndrome?

Extra pressure on your median nerve causes carpal tunnel syndrome. The carpal tunnel has space for all the parts that pass through it, but if one part of your wrist is swollen or damaged, it can press on other tissue around it, including your median nerve. Anything that causes swelling or irritation in your wrist can cause carpal tunnel syndrome:

- Repetitive strain injuries
- Arthritis
- Sprains
- Wrist fractures (broken wrist bones)
- Ganglion cysts

What are the risk factors?

Anyone can develop carpal tunnel syndrome, but some people are more likely to, including:

- People who do repetitive motions with their hands and wrists for work (swinging a hammer, for example)
- People who use power tools that vibrate (like drills or jackhammers)
- Pregnant women
- Women
- Adults over the age of 40
- People whose biological relatives have carpal tunnel syndrome (it can be hereditary, or passed through generations in families)

Having certain health conditions can increase your carpal tunnel syndrome risk, including:

- Rheumatoid arthritis
- Gout
- Hypothyroidism
- Diabetes
- Obesity
- Amyloidosis

What are the complications?

If a healthcare provider doesn't diagnose and treat carpal tunnel syndrome as soon as possible, the irritation in your wrist can cause permanent damage. Specifically, the extra pressure can damage your median nerve, which may make it hard or impossible to feel, move or use your hand. Visit a healthcare provider as soon as you notice carpal tunnel symptoms or any changes in how you can feel or use your hand and wrist.

Diagnosis and Tests

How do providers diagnose carpal tunnel syndrome?

A healthcare provider will diagnose carpal tunnel syndrome with a physical exam and some tests. They'll examine your wrist, hand and fingers and ask about your symptoms. Tell your provider when you first noticed symptoms and if any activities or time of day make them better or worse.

Carpal tunnel tests

Your provider will use a combination of physical and imaging tests to diagnose carpal tunnel syndrome, including:

- Tinel's sign
- Phalen's test
- Wrist X-rays
- Electromyography (EMG)
- Ultrasound
- Magnetic resonance imaging (MRI)

Management and Treatment

What are carpal tunnel syndrome treatments?

Providers treat carpal tunnel syndrome with nonsurgical (conservative) treatments first. You may need carpal tunnel surgery if conservative treatments don't relieve your symptoms.

Nonsurgical carpal tunnel treatments

The most common carpal tunnel treatments include modifying your daily routine, supporting and strengthening your wrist and taking medication:

- Wearing a splint (especially at night): A splint will hold your wrist in a neutral position to take pressure off your median nerve.
- Physical therapy: A physical therapist can help you strengthen muscles around your wrist and increase your flexibility.
- Changing your posture or working environment: An occupational therapist can suggest ways to modify how you do everyday tasks to move safely and more comfortably. You might need to change how you sit or stand, how you position your keyboard or make other posture tweaks.
- Over-the-counter medications: Your provider might suggest over-the-counter NSAIDs or acetaminophen to reduce inflammation and relieve pain.
Don't take these medicines for more than 10 days in a row without talking to your provider.

- Corticosteroids: Corticosteroids are prescription anti-inflammatory medications. Your provider may give you cortisone shots in your affected carpal tunnel.

Carpal tunnel syndrome surgery

If conservative treatments don't work, your provider will suggest carpal tunnel surgery. Your surgeon will perform a carpal tunnel release to create more space inside your wrist. They'll make an incision (cut) in the ligament that connects your wrist to your palm (your transverse carpal ligament). This reduces tension on your carpal tunnel and gives your tendons and nerves more space. Carpal tunnel release surgery is usually an outpatient procedure, which means you can go home the same day. Your surgeon will tell you what to expect and will give you recovery instructions.

How soon after treatment will I feel better?

You should start feeling better as soon as you start carpal tunnel treatment. It might take a few weeks (or longer) for nonsurgical treatments to reduce the pressure on your median nerve, but your symptoms should start improving gradually. Carpal tunnel surgery should improve your symptoms as soon as your wrist heals. It usually takes a month or two to recover.

Outlook / Prognosis

What can I expect if I have carpal tunnel syndrome?

You should expect to tweak some of your daily activities and try a few nonsurgical treatments to support your wrists and reduce inflammation inside your carpal tunnel. Your healthcare provider will suggest treatments that relieve carpal tunnel syndrome symptoms and prevent median nerve damage. It might take a few tries to find treatments that work for you, but most people are able to find carpal tunnel relief. Your provider will suggest surgery if conservative treatments aren't working or if you have severe carpal tunnel syndrome.

Prevention

Can I prevent carpal tunnel syndrome?

It can be hard to prevent carpal tunnel syndrome, especially if a health condition or activity you can't avoid causes it. You might be able to reduce your risk by protecting your wrists. Protective steps include:

- Stretch your wrists and hands before and after intense physical activities.
- Wear proper protective equipment for all work or activities.
- Take frequent rest breaks when working with your hands.
- Use proper technique and maintain good posture when working with tools or typing on a keyboard.

Living With

Can carpal tunnel syndrome heal on its own?

It's possible for carpal tunnel syndrome to get better on its own, especially if you rest or avoid repetitive motions with your wrists for a while. But it's much more likely that carpal tunnel syndrome won't heal unless a healthcare provider diagnoses and treats it. It's not worth risking permanent damage to your median nerve. See a healthcare provider as soon as you notice any tingling, pain or numbness in your wrists, hands or fingers.

What questions should I ask my doctor?

- Do I have carpal tunnel syndrome or another wrist issue?
- What's causing the carpal tunnel syndrome?
- Which treatments will I need?
- Will I need surgery?
- Which kind of splint should I buy, and how often should I wear it?

Additional Common Questions

What is the best way to fix carpal tunnel syndrome?

There's no one answer that applies to everyone. Which treatments will work best for you depends on what's causing irritation in your carpal tunnel, as well as the carpal tunnel syndrome's severity.
Most people can manage carpal tunnel syndrome with conservative treatments, but surgery is sometimes the best option. There's no one right or wrong answer when it comes to your health. A healthcare provider will help you understand which treatments are right for you and why.

How do I know if I'm getting carpal tunnel syndrome?

Everyone experiences carpal tunnel syndrome differently. You might first notice symptoms at night, like wrist pain or tingling that's intense enough to wake you up. You may also notice that your wrists start showing signs of carpal tunnel syndrome the longer you use them, like at the end of a long day of working with tools. Only a healthcare provider can confirm that you have carpal tunnel syndrome or another wrist issue. Even if you don't have carpal tunnel syndrome, they'll help you understand what's causing the symptoms and how you can treat them.

A note from Cleveland Clinic

Anything that affects your ability to feel and use your hands and fingers can be scary, annoying and frustrating, and carpal tunnel syndrome is no different. It happens when irritation causes extra pressure on the median nerve in your wrist. It might seem easy to ignore occasional pain, tingling or numbness in your hand, especially if it comes and goes. But don't shrug off these symptoms. Carpal tunnel syndrome can cause permanent nerve damage if it's not treated soon enough. But it's also very treatable. Your provider will help you find ways to relieve your symptoms and prevent damage inside your wrist.

Last reviewed on 01/12/2024.
The Random Matrix Theory of the Classical Compact Groups

Elizabeth Meckes

Frontispiece image by Swallowtail Garden Seeds

To Mark

Contents

Preface
1 Haar measure on the classical compact matrix groups
  1.1 The classical compact matrix groups
  1.2 Haar measure
  1.3 Lie group structure and character theory
2 Distribution of the entries
  2.1 Introduction
  2.2 The density of a principal submatrix
  2.3 How much is a Haar matrix like a Gaussian matrix?
  2.4 Arbitrary projections
3 Eigenvalue distributions: exact formulas
  3.1 The Weyl integration formula
  3.2 Determinantal point processes
  3.3 Matrix moments
  3.4 Patterns in eigenvalues: powers of random matrices
4 Eigenvalue distributions: asymptotics
  4.1 The eigenvalue counting function
  4.2 The empirical spectral measure and linear eigenvalue statistics
  4.3 Patterns in eigenvalues: self-similarity
  4.4 Large deviations for the empirical spectral measure
5 Concentration of measure
  5.1 The concentration of measure phenomenon
  5.2 Logarithmic Sobolev inequalities and concentration
  5.3 The Bakry–Émery criterion and concentration
  5.4 Concentration of the spectral measure
6 Geometric applications of measure concentration
  6.1 The Johnson–Lindenstrauss lemma
  6.2 Dvoretzky's theorem
  6.3 A measure-theoretic Dvoretzky theorem
7 Characteristic polynomials and the ζ-function
  7.1 Two-point correlations and Montgomery's conjecture
  7.2 The zeta function and characteristic polynomials
  7.3 Numerical and statistical work
Index
References

Preface

This book grew out of lecture notes from a mini-course I gave at the 2014 Women and Mathematics program at the Institute for Advanced Study. When asked to provide background reading for the participants, I found myself at a bit of a loss; while there are many excellent books which give some treatment of Haar-distributed random matrices, there was no one source that gave a broad, and broadly accessible, introduction to the subject in its own right. My goal has been to fill this gap: to give an introduction to the theory of random orthogonal, unitary, and symplectic matrices which approaches the subject from many angles, includes the most important results that anyone looking to learn about the subject should know, and tells a coherent story that allows the beauty of this many-faceted subject to shine through.

The book begins with a very brief introduction to the orthogonal, unitary, and symplectic groups; just enough to get started talking about Haar measure. The second section includes six different constructions of Haar measure on the classical groups; the chapter also contains some further information on the groups, including some basic aspects of their structure as Lie groups, identification of the Lie algebras, an introduction to representation theory, and discussion of the characters.

Chapter two is about the joint distribution of the entries of a Haar-distributed random matrix. The fact that individual entries are approximately Gaussian is classical and goes back to the late 19th century. This chapter includes modern results on the joint distribution of the entries in various senses: total variation approximation of principal submatrices by Gaussian matrices, in-probability approximation of (much larger) submatrices by Gaussian matrices, and a treatment of arbitrary projections of Haar measure via Stein's method.

Chapters three and four deal with the eigenvalues.
Chapter three is all about exact formulas: the Weyl integration formulas, the structure of the eigenvalue processes as determinantal point processes with explicit kernels, exact formulas due to Diaconis and Shahshahani for the matrix moments, and an interesting decomposition (due to Eric Rains) of the distribution of eigenvalues of powers of random matrices.

Chapter four deals with asymptotics for the eigenvalues of large matrices: the sine kernel microscopic scaling limit, limit theorems for the empirical spectral measures and linear eigenvalue statistics, large deviations for the empirical spectral measures, and an interesting self-similarity property of the eigenvalue distribution.

Chapters five and six are where this project began: concentration of measure on the classical compact groups, with applications in geometry. Chapter five introduces the concept of concentration of measure, the connection with log-Sobolev inequalities, and derivations of optimal (at least up to constants) log-Sobolev constants. The final section contains concentration inequalities for the empirical spectral measures of random unitary matrices.

Chapter six has some particularly impressive applications of measure concentration on the classical groups to high-dimensional geometry. First, a proof of the celebrated Johnson–Lindenstrauss lemma via concentration of measure on the orthogonal group, with a (very brief) discussion of the role of the lemma in randomized algorithms. The second section is devoted to a proof of Dvoretzky's theorem, via concentration of measure on the unitary group. The final section gives the proof of a "measure-theoretic" Dvoretzky theorem, showing that subject to some mild constraints, most marginals of high-dimensional probability measures are close to Gaussian.

Finally, chapter seven gives a taste of the intriguing connection between eigenvalues of random unitary matrices and zeros of the Riemann zeta function. There is a section on Montgomery's theorem and conjecture on pair correlations, and one on the results of Keating and Snaith on the characteristic polynomial of a random unitary matrix, which led them to exciting new conjectures on the zeta side. Some numerical evidence (and striking pictures) is presented.

Haar-distributed random matrices appear and play important roles in a wide spectrum of subfields of mathematics, physics, and statistics, and it would never have been possible to mention them all. I have used the end-of-chapter notes in part to give pointers to some interesting topics and connections that I have not included, and doubtless there are many more that I did not mention at all.

I have tried to make the book accessible to a reader with an undergraduate background in mathematics generally, with a bit more in probability (e.g., comfort with measure theory would be good). But because the random matrix theory of the classical compact groups touches on so many diverse areas of mathematics, it has been my assumption in writing this book that most readers will not be familiar with all of the background which comes up. I have done my best to give accessible, bottom-line introductions to the areas I thought were most likely to be unfamiliar, but there are no doubt places where an unfamiliar (or more likely, vaguely familiar, but without enough associations for comfort) phrase will suddenly appear. In these cases, it seems best to take the advice of John von Neumann, who said to a student "... in mathematics you don't understand things.
You just get used to them."

One of the greatest pleasures in completing a book is the opportunity to thank the many sources of knowledge, advice, wisdom, and support that made it possible. My thanks firstly to the Institute for Advanced Study and the organizers of the Women and Mathematics program for inviting me to give the lectures that inspired this book. Thanks also to the National Science Foundation for generous support while I wrote it.

Persi Diaconis introduced me to random matrix theory (and many other things) and taught me to tell a good story. Amir Dembo encouraged me to embark on this project and gave me valuable advice about how to do it well. I am grateful to Pierre Albin and Tyler Lawson for their constant willingness to patiently answer all of my questions about geometry and algebra, and if they didn't already know the answers, to help me wade through unfamiliar literature. Experienced guides make all the difference. Many thanks to Jon Keating, Arun Ram, and Michel Ledoux for answering my questions about their work and pointing me to better approaches than the ones I knew about. Particular thanks to Nathaël Gozlan for explaining tricky details that eluded me. My sincere thanks to Andrew Odlyzko for providing the figures based on his computations of zeta zeros. Thanks to my students, especially Tianyue Liu and Kathryn Stewart, whose questions and comments on earlier drafts certainly enriched the end result. The excellent and topical photograph on the frontispiece was found (I still don't know how) by Tim Gowers. As ever, thanks to Sarah Jarosz, this time for Undercurrent, which got me most of the way there, and to Yo-Yo Ma for Six Evolutions, which carried me to the finish line.

And how to thank my husband and collaborator, Mark Meckes? We have discussed the material in this book for so long and in so many contexts that his viewpoint is inextricably linked with my own. He has lived with the writing of this book, always willing to drop a (probably more important) conversation or task in order to let me hash out a point that suddenly felt terribly urgent. If my writing helps to illuminate the ideas I have tried to describe, it is because I got to talk it out first at the breakfast table.

1 Haar measure on the classical compact matrix groups

1.1 The classical compact matrix groups

The central objects of study in this book are randomly chosen elements of the classical compact matrix groups: the orthogonal group $O(n)$, the unitary group $U(n)$, and the symplectic group $Sp(2n)$. The groups are defined as follows.

Definition

1. An $n \times n$ matrix $U$ over $\mathbb{R}$ is orthogonal if
$$UU^T = U^T U = I_n, \tag{1.1}$$
where $I_n$ denotes the $n \times n$ identity matrix, and $U^T$ is the transpose of $U$. The set of $n \times n$ orthogonal matrices over $\mathbb{R}$ is denoted $O(n)$.

2. An $n \times n$ matrix $U$ over $\mathbb{C}$ is unitary if
$$UU^* = U^* U = I_n, \tag{1.2}$$
where $U^*$ denotes the conjugate transpose of $U$. The set of $n \times n$ unitary matrices over $\mathbb{C}$ is denoted $U(n)$.

3. A $2n \times 2n$ matrix $U$ over $\mathbb{C}$ is symplectic if $U \in U(2n)$ and
$$UJU^T = U^T J U = J, \tag{1.3}$$
where
$$J := \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}. \tag{1.4}$$
The set of $2n \times 2n$ symplectic matrices over $\mathbb{C}$ is denoted $Sp(2n)$.
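To make the defining conditions (1.1)–(1.3) concrete, here is a minimal numerical check; this sketch is my own illustration (the function names and tolerance are my choices, not the book's), written in Python with NumPy.

```python
import numpy as np

def is_orthogonal(U, tol=1e-10):
    """Check UU^T = I_n (Equation (1.1))."""
    return np.allclose(U @ U.T, np.eye(U.shape[0]), atol=tol)

def is_unitary(U, tol=1e-10):
    """Check UU* = I_n (Equation (1.2))."""
    return np.allclose(U @ U.conj().T, np.eye(U.shape[0]), atol=tol)

def is_symplectic(U, tol=1e-10):
    """Check U in U(2n) and UJU^T = J (Equations (1.3)-(1.4))."""
    n = U.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return is_unitary(U, tol) and np.allclose(U @ J @ U.T, J, atol=tol)

# A 2x2 rotation is orthogonal; viewed over C it is unitary, and since
# its determinant is 1 it also satisfies the symplectic relation for n = 1.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(is_orthogonal(R), is_unitary(R.astype(complex)),
      is_symplectic(R.astype(complex)))  # True True True
```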
Alternatively, the symplectic group can be defined as the set of $n \times n$ matrices $U$ with quaternionic entries such that $UU^* = I_n$, where $U^*$ is the (quaternionic) conjugate transpose: for
$$\mathbb{H} = \{a + bi + cj + dk : a, b, c, d \in \mathbb{R}\}$$
the skew-field of quaternions, satisfying the relations $i^2 = j^2 = k^2 = ijk = -1$, quaternionic conjugation is defined by
$$\overline{a + bi + cj + dk} = a - bi - cj - dk.$$
Quaternions can be represented as $2 \times 2$ matrices over $\mathbb{C}$: the map
$$a + bi + cj + dk \longmapsto \begin{pmatrix} a + bi & c + di \\ -c + di & a - bi \end{pmatrix}$$
is an isomorphism of $\mathbb{H}$ onto
$$\left\{ \begin{pmatrix} z & w \\ -\bar{w} & \bar{z} \end{pmatrix} : z, w \in \mathbb{C} \right\}.$$
More generally, if $A, B, C, D \in M_n(\mathbb{R})$, then the matrix $M = A + Bi + Cj + Dk \in M_n(\mathbb{H})$ is associated to the matrix
$$M_{\mathbb{C}} = I_2 \otimes A + iQ_2 \otimes B + Q_3 \otimes C + iQ_4 \otimes D,$$
where
$$Q_2 := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad Q_3 := \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \qquad Q_4 := \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$
and $\otimes$ denotes the Kronecker product. Any matrix $M \in M_{2n}(\mathbb{C})$ of this form has the property that $MJ = J\overline{M}$ for $J = Q_3 \otimes I_n$ as above, and the condition $UU^* = I_n$ for $U \in M_n(\mathbb{H})$ is equivalent to $\widetilde{U}\widetilde{U}^* = I_{2n}$ over $\mathbb{C}$, where $\widetilde{U} = U_{\mathbb{C}}$ denotes the complex form of $U$.

We will generally consider the symplectic group in its complex version, as a subgroup of the (complex) unitary group, although certain geometric properties of the group can be more cleanly characterized in the quaternionic form.

Note that it is immediate from the definitions that $U$ is orthogonal if and only if $U^T$ is orthogonal, and $U$ is unitary or symplectic if and only if $U^*$ is.

The algebraic definitions given above are nicely compact but may not make the importance of these groups jump right out; the following lemma gives some indication as to why they play such a central role in many areas of mathematics.

Lemma 1.1

1. Let $M$ be an $n \times n$ matrix over $\mathbb{R}$ or $\mathbb{C}$. Then $M$ is orthogonal or unitary if and only if the columns of $M$ form an orthonormal basis of $\mathbb{R}^n$, resp. $\mathbb{C}^n$.

2. For $U$ an $n \times n$ matrix over $\mathbb{R}$, $U \in O(n)$ if and only if $U$ acts as an isometry on $\mathbb{R}^n$; that is,
$$\langle Uv, Uw \rangle = \langle v, w \rangle \quad \text{for all } v, w \in \mathbb{R}^n.$$

3. For $U$ an $n \times n$ matrix over $\mathbb{C}$, $U \in U(n)$ if and only if $U$ acts as an isometry on $\mathbb{C}^n$:
$$\langle Uv, Uw \rangle = \langle v, w \rangle \quad \text{for all } v, w \in \mathbb{C}^n.$$

4. Consider $\mathbb{C}^{2n}$ equipped with the skew-symmetric form
$$\omega(v, w) = v_1 w_{n+1} + \cdots + v_n w_{2n} - v_{n+1} w_1 - \cdots - v_{2n} w_n = \sum_{k,\ell} J_{k\ell}\, v_k w_\ell,$$
where $J = \begin{pmatrix} 0 & I_n \\ -I_n & 0 \end{pmatrix}$ as above. For a $2n \times 2n$ matrix $U$ over $\mathbb{C}$, $U \in Sp(2n)$ if and only if $U$ is an isometry of $\mathbb{C}^{2n}$ which preserves $\omega$:
$$\langle Uv, Uw \rangle = \langle v, w \rangle \quad \text{and} \quad \omega(Uv, Uw) = \omega(v, w)$$
for all $v, w \in \mathbb{C}^{2n}$.

5. If $U \in O(n)$ or $U \in U(n)$, then $|\det(U)| = 1$. If $U \in Sp(2n)$, then $\det(U) = 1$.

Proof Note that the $(i, j)$th entry of $U^T U$ (if $U$ has real entries) or $U^* U$ (if $U$ has complex or quaternionic entries) is exactly the inner product of the $i$th and $j$th columns of $U$. So $U^T U = I_n$ or $U^* U = I_n$ is exactly the same thing as saying the columns of $U$ form an orthonormal basis of $\mathbb{R}^n$ or $\mathbb{C}^n$.

For $U \in M_n(\mathbb{R})$,
$$\langle Uv, Uw \rangle = \langle U^T U v, w \rangle,$$
and so $\langle Uv, Uw \rangle = \langle v, w \rangle$ for all $v$ and $w$ if and only if $U^T U = I$. The proofs of parts 3 and 4 are similar.

For part 5, on any of the groups,
$$|\det(U)|^2 = \det(U)\overline{\det(U)} = \det(U)\det(U^*) = \det(UU^*) = \det(I_n) = 1.$$
The easiest way to see that if $U \in Sp(2n)$, then in fact $\det(U) = 1$ is to use the Pfaffian: for a skew-symmetric matrix $A$, the Pfaffian $\mathrm{pf}(A)$ is defined by a sum-over-permutations formula along the lines of the determinant, and has the property that for $2n \times 2n$ matrices $A$ and $B$,
$$\mathrm{pf}(BAB^T) = \det(B)\,\mathrm{pf}(A).$$
Applying this to the defining relation of $Sp(2n)$,
$$\mathrm{pf}(J) = \mathrm{pf}(UJU^T) = \det(U)\,\mathrm{pf}(J),$$
and so (using the easily verified fact that $\mathrm{pf}(J) \neq 0$), $\det(U) = 1$.
□

We sometimes restrict attention to the so-called "special" counterparts of the orthogonal and unitary groups, defined as follows.

Definition The set $SO(n) \subseteq O(n)$ of special orthogonal matrices is defined by
$$SO(n) := \{U \in O(n) : \det(U) = 1\}.$$
The set $SO^-(n) \subseteq O(n)$ (the negative coset) is defined by
$$SO^-(n) := \{U \in O(n) : \det(U) = -1\}.$$
The set $SU(n) \subseteq U(n)$ of special unitary matrices is defined by
$$SU(n) := \{U \in U(n) : \det(U) = 1\}.$$

Since the matrices of the classical compact groups all act as isometries of $\mathbb{C}^n$, all of their eigenvalues lie on the unit circle $S^1 \subseteq \mathbb{C}$. In the orthogonal and symplectic cases, there are some built-in symmetries:

Exercise 1.2 Show that each matrix in $SO(2n+1)$ has 1 as an eigenvalue, each matrix in $SO^-(2n+1)$ has $-1$ as an eigenvalue, and each matrix in $SO^-(2n+2)$ has both $-1$ and 1 as eigenvalues.

The sets $O(n)$, $U(n)$, $Sp(2n)$, $SO(n)$, and $SU(n)$ of matrices defined above are compact Lie groups; that is, they are groups (with matrix multiplication as the operation), and they are compact manifolds, such that the multiplication and inverse maps are smooth. Moreover, these groups can naturally be viewed as closed submanifolds of Euclidean space: $O(n)$ and $SO(n)$ are submanifolds of $\mathbb{R}^{n^2}$; $U(n)$ and $SU(n)$ are submanifolds of $\mathbb{C}^{n^2}$; and $Sp(2n)$ is a submanifold of $\mathbb{C}^{(2n)^2}$.

Rather than viewing these matrices as $n^2$-dimensional vectors, it is more natural to view them as elements of the Euclidean spaces $M_n(\mathbb{R})$ (resp. $M_n(\mathbb{C})$) of $n \times n$ matrices over $\mathbb{R}$ (resp. $\mathbb{C}$), where the Euclidean inner products are written as
$$\langle A, B \rangle_{HS} := \mathrm{Tr}(AB^T)$$
for $A, B \in M_n(\mathbb{R})$, and
$$\langle A, B \rangle_{HS} := \mathrm{Tr}(AB^*)$$
for $A, B \in M_n(\mathbb{C})$. These inner products are called the Hilbert–Schmidt inner products on matrix space.

The Hilbert–Schmidt inner product induces a norm on matrices; it is sometimes called the Frobenius norm or the Schatten 2-norm, or just the Euclidean norm. This norm is unitarily invariant:
$$\|UBV\|_{HS} = \|B\|_{HS}$$
when $U$ and $V$ are unitary (as is easily seen from the definition). This implies in particular that if $U \in O(n)$ (resp. $U(n)$), then the map $R_U : M_n(\mathbb{R}) \to M_n(\mathbb{R})$ (resp. $R_U : M_n(\mathbb{C}) \to M_n(\mathbb{C})$) defined by $R_U(M) = UM$ is an isometry on $M_n(\mathbb{R})$ (resp. $M_n(\mathbb{C})$) with respect to the Hilbert–Schmidt inner product.

The Hilbert–Schmidt norm is also submultiplicative:
$$\|AB\|_{HS} \le \|A\|_{HS}\,\|B\|_{HS}.$$
In fact, this is true of all unitarily invariant norms (subject to the normalization $\|E_{11}\| = 1$), but it is particularly easy to see for the Hilbert–Schmidt norm: let $B$ have columns $b_1, \ldots, b_n$; then $\|B\|_{HS}^2 = \sum_{j=1}^n |b_j|^2$, where $|\cdot|$ is the Euclidean norm on $\mathbb{C}^n$. Now $AB$ has columns $Ab_1, \ldots, Ab_n$, and so
$$\|AB\|_{HS}^2 = \sum_{j=1}^n |Ab_j|^2 \le \|A\|_{op}^2\,\|B\|_{HS}^2,$$
where $\|A\|_{op} = \sup_{|x|=1} |Ax|$ is the operator norm of $A$, i.e., the largest singular value of $A$. Writing the singular value decomposition $A = U\Sigma V$ and using the unitary invariance of the Hilbert–Schmidt norm,
$$\|A\|_{op}^2 = \sigma_1^2 \le \sum_{j=1}^n \sigma_j^2 = \|\Sigma\|_{HS}^2 = \|A\|_{HS}^2,$$
from which the submultiplicativity follows. Indeed, the sharper estimate $\|AB\|_{HS} \le \|A\|_{op}\,\|B\|_{HS}$ is often useful.

The discussion above gives two notions of distance on the classical compact matrix groups: firstly, the Hilbert–Schmidt inner product can be used to define the distance between two matrices $A$ and $B$ by
$$d_{HS}(A, B) := \|A - B\|_{HS} = \sqrt{\langle A - B, A - B \rangle_{HS}} = \sqrt{\mathrm{Tr}\big((A - B)(A - B)^*\big)}. \tag{1.5}$$
Alternatively, since for example $A, B \in U(n)$ can be thought of as living in a submanifold of Euclidean space $M_n(\mathbb{C})$, one can consider the geodesic distance $d_g(A, B)$ between $A$ and $B$; that is, the length, as measured by the Hilbert–Schmidt metric, of the shortest path lying entirely in $U(n)$ between $A$ and $B$. In the case of $U(1)$, this is arc-length distance, whereas the Hilbert–Schmidt distance defined in Equation (1.5) is the straight-line distance between two points on the circle.

Ultimately, the choice of metric is not terribly important:

Lemma 1.3 Let $A, B \in U(n)$. Then
$$d_{HS}(A, B) \le d_g(A, B) \le \frac{\pi}{2}\, d_{HS}(A, B).$$
That is, the two notions of distance are equivalent in a dimension-free way.

Proof The inequality $d_{HS}(A, B) \le d_g(A, B)$ follows trivially from the fact that the Hilbert–Schmidt distance is the geodesic distance in Euclidean space.

For the other inequality, first note that $d_g(A, B) \le \frac{\pi}{2} d_{HS}(A, B)$ for $A, B \in U(1)$; that is, arc-length on the circle is bounded above by $\frac{\pi}{2}$ times Euclidean distance.

Next, observe that both $d_{HS}(\cdot,\cdot)$ and $d_g(\cdot,\cdot)$ are translation-invariant; that is, if $U \in U(n)$, then $d_{HS}(UA, UB) = d_{HS}(A, B)$ and $d_g(UA, UB) = d_g(A, B)$. In the case of the Hilbert–Schmidt distance, this is immediate from the fact that the Hilbert–Schmidt norm is unitarily invariant. For the geodesic distance, translation invariance follows from the fact that, since any matrix $U \in U(n)$ acts as an isometry of Euclidean space, every path between $A$ and $B$ lying in $U(n)$ corresponds to a path between $UA$ and $UB$ of the same length, also lying in $U(n)$.

Now fix $A, B \in U(n)$ and let $A^{-1}B = U\Lambda U^*$ be the spectral decomposition of $A^{-1}B$. Then for either distance,
$$d(A, B) = d(I_n, A^{-1}B) = d(I_n, U\Lambda U^*) = d(U^*U, \Lambda) = d(I_n, \Lambda),$$
and so it suffices to assume that $A = I_n$ and $B$ is diagonal.

Write $B = \mathrm{diag}(e^{i\theta_1}, \ldots, e^{i\theta_n})$. Then the length of the path in $U(n)$ from $A$ to $B$ given by $U(t) := \mathrm{diag}(e^{it\theta_1}, \ldots, e^{it\theta_n})$, for $0 \le t \le 1$, is
$$\int_0^1 \|U'(t)\|_{HS}\,dt = \int_0^1 \left\|\mathrm{diag}(i\theta_1 e^{it\theta_1}, \ldots, i\theta_n e^{it\theta_n})\right\|_{HS} dt = \int_0^1 \sqrt{\theta_1^2 + \cdots + \theta_n^2}\;dt$$
$$\le \frac{\pi}{2} \int_0^1 \sqrt{|1 - e^{i\theta_1}|^2 + \cdots + |1 - e^{i\theta_n}|^2}\;dt = \frac{\pi}{2}\left\|I_n - \mathrm{diag}(e^{i\theta_1}, \ldots, e^{i\theta_n})\right\|_{HS},$$
using the fact that $\theta^2 = d_g(1, e^{i\theta})^2 \le \frac{\pi^2}{4}\, d_{HS}(1, e^{i\theta})^2$, as noted above. □

1.2 Haar measure

The main goal of this book is to answer the broad general question "What is a random orthogonal, unitary, or symplectic matrix like?" To do this, a natural probability measure on each of these groups is needed. Just as the most natural probability measure (i.e., uniform measure) on the circle is defined by rotation invariance, if $G$ is one of the matrix groups defined in the last section, a "uniform random element" of $G$ should be a random $U \in G$ whose distribution is translation-invariant; that is, if $M \in G$ is any fixed matrix, then the equality in distribution
$$MU \overset{d}{=} UM \overset{d}{=} U$$
should be satisfied. Phrased slightly differently, the distribution of a uniform random element of $G$ should be a translation-invariant probability measure $\mu$ on $G$: for any measurable subset $A \subseteq G$ and any fixed $M \in G$,
$$\mu(MA) = \mu(AM) = \mu(A),$$
where $MA := \{MU : U \in A\}$ and $AM := \{UM : U \in A\}$.

It is a theorem due to A. Haar that there is one, and only one, way to do this.

Theorem 1.4 Let $G$ be any of $O(n)$, $SO(n)$, $U(n)$, $SU(n)$, or $Sp(2n)$. Then there is a unique translation-invariant probability measure (called Haar measure) on $G$.

The theorem is true in much more generality (in particular, any compact Lie group has a Haar probability measure).
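Before continuing, it is worth noting that Haar samplers for these groups ship with standard libraries; SciPy, for instance, provides scipy.stats.ortho_group, special_ortho_group, and unitary_group. The sketch below is my own illustration (the test statistic, sample sizes, and seed are arbitrary choices): it draws Haar samples from $O(n)$ and spot-checks translation invariance by comparing the distribution of $\mathrm{Tr}(U)$ with that of $\mathrm{Tr}(MU)$ for a fixed $M$.

```python
import numpy as np
from scipy.stats import ortho_group, unitary_group

rng = np.random.default_rng(0)
n, reps = 5, 20000

U = ortho_group.rvs(dim=n, random_state=rng)    # Haar sample from O(n)
V = unitary_group.rvs(dim=n, random_state=rng)  # Haar sample from U(n)

# Translation invariance says Tr(MU) and Tr(U) are equal in distribution
# for any fixed M in O(n); compare the first two empirical moments.
M = ortho_group.rvs(dim=n, random_state=rng)
tr_U  = [np.trace(ortho_group.rvs(dim=n, random_state=rng)) for _ in range(reps)]
tr_MU = [np.trace(M @ ortho_group.rvs(dim=n, random_state=rng)) for _ in range(reps)]
print(np.mean(tr_U), np.mean(tr_MU))  # both near 0
print(np.var(tr_U), np.var(tr_MU))    # both near 1 (Tr U is close to standard normal)
```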
In the most general case, the property of left-invariance is not equivalent to that of right-invariance, but in the case of compact Lie groups, left-invariance implies right-invariance and vice versa, so the phrase "translation-invariance" will be used in what follows, and will be assumed to include both left- and right-invariance.

Exercise 1.5
1. Prove that a translation-invariant probability measure on $O(n)$ is invariant under transposition: if $U$ is Haar-distributed, so is $U^T$.
2. Prove that a translation-invariant probability measure on $U(n)$ is invariant under transposition and under conjugation: if $U$ is Haar-distributed, so are both $U^T$ and $U^*$.

Theorem 1.4 is an existence theorem which doesn't itself provide a description of Haar measure in specific cases. In the case of the circle, i.e., $U(1)$, it is clear that Haar measure is just (normalized) arc length. The remainder of this section gives six different constructions of Haar measure on $O(n)$, with some comments about adapting the constructions to the other groups. For most of the constructions, the resulting measure is only shown to be invariant on one side; the invariance on the other side then follows from the general fact mentioned above that on compact Lie groups, one-sided invariance implies invariance on both sides.

The Riemannian perspective

It has already been noted that $O(n) \subseteq M_n(\mathbb{R})$ and that it is a compact submanifold. It has two connected components: $SO(n)$ and $SO^-(n)$, the set of orthogonal matrices $U$ with $\det(U) = -1$. At each point $U$ of $O(n)$, there is a tangent space $T_U(O(n))$, consisting of all the tangent vectors to $O(n)$ based at $U$.

A map between manifolds induces a map between tangent spaces as follows. Let $M_1, M_2$ be manifolds and $\phi : M_1 \to M_2$. If $x \in T_p M_1$, then there is a curve $\gamma : [0, 1] \to M_1$ such that $\gamma(0) = p$ and $\gamma'(0) = x$. Then $\phi \circ \gamma$ is a curve in $M_2$ with $\phi \circ \gamma(0) = \phi(p)$, and $(\phi \circ \gamma)'(0)$ is a tangent vector to $M_2$ at $\phi(p)$. We take this to be the definition of $\phi_*(x)$ (it must of course be checked that this gives a well-defined linear map on $T_p M_1$ for each $p$).

A Riemannian metric $g$ on a manifold $M$ is a family of inner products, one on the tangent space $T_p M$ to $M$ at each point $p \in M$. The submanifold $O(n)$ inherits such a metric from $M_n(\mathbb{R})$, since at each point $U$ in $O(n)$, $T_U(O(n))$ is a subspace of $T_U(M_n(\mathbb{R})) \cong M_n(\mathbb{R})$. Because multiplication by a fixed orthogonal matrix $V$ is an isometry of $M_n(\mathbb{R})$, the induced map on tangent spaces is also an isometry: if $U \in O(n)$ with $X_1, X_2 \in T_U(O(n))$ tangent vectors to $O(n)$ at $U$, and $R_V : O(n) \to O(n)$ denotes multiplication by a fixed $V \in O(n)$, then
$$g_{VU}\big((R_V)_* X_1, (R_V)_* X_2\big) = g_U(X_1, X_2).$$

On any Riemannian manifold, the Riemannian metric uniquely defines a notion of volume. Since the metric is translation-invariant, the normalized volume form on $O(n)$ is a translation-invariant probability measure; that is, it's Haar measure. Since each of the classical compact matrix groups is canonically embedded in Euclidean space, this construction works the same way in all cases.

An explicit geometric construction

Recall that $U \in O(n)$ if and only if its columns are orthonormal. One way to construct Haar measure on $O(n)$ is to add entries to an empty matrix column by column (or row by row), as follows. First choose a random vector $u_1$ uniformly from the sphere $S^{n-1} \subseteq \mathbb{R}^n$ (that is, according to the probability measure defined by normalized surface area). Take $u_1$ as the first column of the matrix; by construction, $\|u_1\| = 1$.
Now choose $u_2$ randomly according to surface area measure on
$$u_1^\perp \cap S^{n-1} = \{x \in \mathbb{R}^n : \|x\| = 1,\ \langle x, u_1 \rangle = 0\}$$
and let this be the second column of the matrix. Continue in this way; each column is chosen uniformly from the unit sphere of vectors which are orthogonal to each of the preceding columns. The resulting matrix $(u_1 \mid \cdots \mid u_n)$ is obviously orthogonal; the proof that its distribution is translation-invariant is as follows.

Observe that if $M$ is a fixed orthogonal matrix, then since
$$M\,(u_1 \mid \cdots \mid u_n) = (Mu_1 \mid \cdots \mid Mu_n),$$
the first column of $M(u_1 \mid \cdots \mid u_n)$ is constructed by choosing $u_1$ uniformly from $S^{n-1}$ and then multiplying by $M$. But $M \in O(n)$ means that $M$ acts as a linear isometry of $\mathbb{R}^n$, so it preserves surface area measure on $S^{n-1}$. That is, the distribution of $Mu_1$ is exactly uniform on $S^{n-1}$. Now, since $M$ is an isometry, $\langle Mu_2, Mu_1 \rangle = 0$, and because $M$ is an isometry of $\mathbb{R}^n$, it follows that $Mu_2$ is uniformly distributed on
$$(Mu_1)^\perp \cap S^{n-1} := \{x \in \mathbb{R}^n : |x| = 1,\ \langle Mu_1, x \rangle = 0\}.$$
So the second column of $M(u_1 \mid \cdots \mid u_n)$ is distributed uniformly in the unit sphere of the orthogonal complement of the first column. Continuing the argument, the distribution of $M(u_1 \mid \cdots \mid u_n)$ is exactly the same as the distribution of $(u_1 \mid \cdots \mid u_n)$; i.e., the construction is left-invariant. It follows by uniqueness that it produces Haar measure on $O(n)$.

To construct Haar measure on $U(n)$, one need only draw the columns uniformly from complex spheres in $\mathbb{C}^n$. To get a random matrix in $SO(n)$, the construction is identical except that there is no choice about the last column; the same is true for $SU(n)$.

The analogous construction on the representation of elements of $Sp(2n)$ by $2n \times 2n$ unitary matrices works as follows. For $U$ to be in $Sp(2n)$, its first column $u_1$ must lie in the set
$$\{x \in \mathbb{C}^{2n} : \|x\| = 1,\ \langle x, Jx \rangle = 0\},$$
where $J$ is the matrix defined in (1.4). The condition $\langle x, Jx \rangle = 0$ defines a hyperboloid in $\mathbb{C}^{2n}$ ($J$ is unitarily diagonalizable and has eigenvalues $i$ and $-i$, each with multiplicity $n$). The set above is thus the intersection of the sphere with this hyperboloid; it is an $(n-2)$-dimensional submanifold of $\mathbb{C}^{2n}$ from which we can choose a point uniformly: this is how we choose $u_1$.

If $n > 1$, one then chooses the second column uniformly from the set
$$\{x \in \mathbb{C}^{2n} : \|x\| = 1,\ \langle x, u_1 \rangle = 0,\ \langle x, Jx \rangle = 0,\ \langle x, Ju_1 \rangle = 0\};$$
for $n = 1$, one chooses the second column uniformly from
$$\{x \in \mathbb{C}^2 : \|x\| = 1,\ \langle x, u_1 \rangle = 0,\ \langle x, Jx \rangle = 0,\ \langle x, Ju_1 \rangle = -1\}.$$
The construction continues: the $k$th column $u_k$ is chosen uniformly from the intersection of the unit sphere, the hyperboloid $\{x : \langle x, Jx \rangle = 0\}$, and the (affine) subspaces given by the conditions $\langle x, Ju_\ell \rangle = 0$ for $1 \le \ell \le \min\{k-1, n\}$ and $\langle x, Ju_\ell \rangle = -1$ for $n+1 \le \ell < k$ (if $k \ge n+2$). The argument that this construction is invariant under translation by an element of $Sp(2n)$ is similar to the argument above, making use of the fact that for $M \in Sp(2n)$ given, $M$ is an isometry of $\mathbb{C}^{2n}$ and $MJ = J\overline{M}$.

A different inductive construction

In this construction, a random element of $O(n)$ is built up successively from smaller groups. Since it is clear how to choose a random element of $O(1)$ (flip a coin!), one need only describe how to get a random element of $O(n)$ from a random element of $O(n-1)$.
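Before detailing that recursion, here is a short sketch (my own illustration, not the book's) of the column-by-column construction just described. It uses the fact that a standard Gaussian vector projected onto a subspace is standard Gaussian within that subspace, so normalizing the projection gives a uniform point on the unit sphere of the orthogonal complement of the earlier columns.

```python
import numpy as np

def haar_orthogonal_by_columns(n, rng):
    """Build U in O(n) column by column, as in the geometric construction."""
    U = np.zeros((n, n))
    for k in range(n):
        g = rng.standard_normal(n)
        # Project g onto the orthogonal complement of the first k columns;
        # normalizing then gives a uniform point on the unit sphere of
        # that complement.
        g -= U[:, :k] @ (U[:, :k].T @ g)
        U[:, k] = g / np.linalg.norm(g)
    return U

rng = np.random.default_rng(1)
U = haar_orthogonal_by_columns(4, rng)
print(np.allclose(U.T @ U, np.eye(4)))  # True
```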
Let $U_{n-1}$ be distributed according to Haar measure on $O(n-1)$, and let $M \in O(n)$ be independent of $U_{n-1}$ and have its first column distributed uniformly on $S^{n-1} \subseteq \mathbb{R}^n$. (The distribution of the remaining columns is irrelevant.) Then define $U_n$ by
$$U_n := M \begin{pmatrix} 1 & 0 \\ 0 & U_{n-1} \end{pmatrix}. \tag{1.6}$$

It is not hard to see that the columns of $U_n$ are distributed as described in the previous approach: it is clear that in the matrix $\begin{pmatrix} 1 & 0 \\ 0 & U_{n-1} \end{pmatrix}$, the second column is uniformly distributed in the orthogonal complement of the first, the third is uniform in the orthogonal complement of the first two, etc. So for a deterministic $M \in O(n)$, the first column of $M \begin{pmatrix} 1 & 0 \\ 0 & U_{n-1} \end{pmatrix}$ is just $m_1$ (the first column of $M$), the second column is uniformly distributed in the orthogonal complement of $m_1$, etc. Taking $M$ to be random but independent of $U_{n-1}$, it follows that the distribution of the columns of $U_n$ is exactly as in the previous construction.

The construction works exactly the same way for the unitary and special orthogonal and unitary groups, replacing $U_{n-1}$ by a Haar-distributed element of the next smaller-rank group, and choosing $M$ in the desired group with its first column uniform in the sphere in $\mathbb{R}^n$ or $\mathbb{C}^n$, as appropriate.

For the symplectic group, the construction is similar, albeit slightly more complicated: let $U_{n-1}$ be Haar-distributed in $Sp(2(n-1))$, and let $M \in Sp(2n)$ be independent of $U_{n-1}$, with its first column $m_1$ uniform in
$$\{x \in \mathbb{C}^{2n} : \|x\| = 1,\ \langle x, Jx \rangle = 0\}$$
and its $(n+1)$st column $m_{n+1}$ uniform in
$$\{x \in \mathbb{C}^{2n} : \|x\| = 1,\ \langle x, m_1 \rangle = 0,\ \langle x, Jx \rangle = 0,\ \langle x, Jm_1 \rangle = -1\}.$$
Write $U_{n-1}$ as a $2 \times 2$ matrix of $(n-1) \times (n-1)$ blocks:
$$U_{n-1} = \begin{pmatrix} (U_{n-1})_{1,1} & (U_{n-1})_{1,2} \\ (U_{n-1})_{2,1} & (U_{n-1})_{2,2} \end{pmatrix},$$
and define $U_n \in Sp(2n)$ by
$$U_n := M \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & (U_{n-1})_{1,1} & 0 & (U_{n-1})_{1,2} \\ 0 & 0 & 1 & 0 \\ 0 & (U_{n-1})_{2,1} & 0 & (U_{n-1})_{2,2} \end{pmatrix}.$$
One can then check that $U_n \in Sp(2n)$ and has its columns distributed as in the previous construction of Haar measure.

The Gauss–Gram–Schmidt approach

This construction is probably the most commonly used description of Haar measure, and also one that is easy to implement on a computer. Generate a random matrix $X$ by filling an $n \times n$ matrix with independent, identically distributed (i.i.d.) standard Gaussian entries $\{x_{ij}\}$. That is, the joint density (with respect to $\prod_{i,j=1}^n dx_{ij}$) of the $n^2$ entries of $X$ is given by
$$\frac{1}{(2\pi)^{n^2/2}} \prod_{i,j=1}^n e^{-x_{ij}^2/2} = \frac{1}{(2\pi)^{n^2/2}} \exp\left(-\frac{\|X\|_{HS}^2}{2}\right)$$
(the collection of these random matrices is known as the real Ginibre ensemble). The distribution of $X$ is invariant under multiplication by an orthogonal matrix: by a change of variables, the density of the entries of $Y = MX$ with respect to $\prod dy_{ij}$ is
$$\frac{|\det(M^{-1})|^n}{(2\pi)^{n^2/2}} \exp\left(-\frac{\|M^{-1}Y\|_{HS}^2}{2}\right) = \frac{1}{(2\pi)^{n^2/2}} \exp\left(-\frac{\|Y\|_{HS}^2}{2}\right),$$
since $M^{-1}$ is an isometry and $|\det(M^{-1})| = 1$.

That is, the distribution above is translation-invariant, but it is not Haar measure because it does not produce an orthogonal matrix. To make it orthogonal, we use the Gram–Schmidt process. Performing the Gram–Schmidt process commutes with multiplication by a fixed orthogonal matrix $M$: let $x_i$ denote the columns of $X$. Then, for example, when the $x_1$ component is removed from $x_2$ (taking $x_1$ to have been normalized first), $x_2$ is replaced with $x_2 - \langle x_1, x_2 \rangle x_1$. If the result is then multiplied by $M$, the resulting second column is $Mx_2 - \langle x_1, x_2 \rangle Mx_1$.

If, on the other hand, the multiplication by $M$ is done before applying the Gram–Schmidt algorithm, the result is a matrix with columns $Mx_1, \ldots, Mx_n$.
Now removing the component in the direction of column 1 from column 2, the new column 2 is
$$Mx_2 - \langle Mx_1, Mx_2 \rangle Mx_1 = Mx_2 - \langle x_1, x_2 \rangle Mx_1,$$
since $M$ is an isometry. In other words, if $T : GL_n(\mathbb{R}) \to O(n)$ is the map given by performing the Gram–Schmidt process ($GL_n(\mathbb{R})$ denotes the group of $n \times n$ invertible matrices over $\mathbb{R}$), then for a fixed orthogonal matrix $M$,
$$MT(X) = T(MX) \overset{d}{=} T(X).$$
That is, the distribution of $T(X)$ is supported on $O(n)$ and translation-invariant, meaning that we have once again constructed Haar measure.

Remarks

1. In different terminology, the argument above says that if $X$ is a matrix of i.i.d. standard Gaussian random variables and $X = QR$ is the QR decomposition obtained via the Gram–Schmidt process, then $Q$ is Haar-distributed on the orthogonal group. But, WARNING: the QR decomposition of a matrix is not uniquely defined, and most computer algebra packages do not use the Gram–Schmidt algorithm to produce it. The result is that having a computer generate a matrix of i.i.d. standard Gaussian random variables and then returning $Q$ from its internal QR algorithm will not produce a Haar-distributed matrix; see [refs] for further discussion.

2. The same algorithm as above works over $\mathbb{C}$ or $\mathbb{H}$ to produce a random unitary or symplectic matrix. To produce a random element of $SO(n)$ or $SU(n)$, one simply needs a final step: after carrying out the Gram–Schmidt process, multiply the final column by the necessary scalar in order to force the determinant of the matrix to be 1; it is not hard to see that this produces Haar measure on the reduced group.

A second Gaussian construction

Again, let $X$ be an $n \times n$ random matrix with i.i.d. standard Gaussian entries. It is easy to see that $X$ has rank $n$ with probability 1, which implies in particular that $X^T X$ is a symmetric rank-$n$ matrix, and so
$$X^T X = V^T \mathrm{diag}(d_1, \ldots, d_n)\, V$$
for some $V \in O(n)$ and $d_1, \ldots, d_n > 0$; $(X^T X)^{-1/2}$ is then defined by
$$(X^T X)^{-1/2} = V^T \mathrm{diag}(d_1^{-1/2}, \ldots, d_n^{-1/2})\, V.$$
Now define the random matrix $U$ by
$$U := X (X^T X)^{-1/2}. \tag{1.7}$$
Then $U$ is orthogonal:
$$U^T U = (X^T X)^{-1/2} X^T X (X^T X)^{-1/2} = I_n,$$
and since for $V \in O(n)$ fixed, $VX \overset{d}{=} X$,
$$U \overset{d}{=} VX \big((VX)^T VX\big)^{-1/2} = VX (X^T X)^{-1/2} = VU.$$
That is, $U$ is distributed according to Haar measure on $O(n)$.

Haar measure on $SO(n)$ and $SO^-(n)$

The constructions above describe how to choose a uniform random matrix from $O(n)$, but as we noted above, $O(n)$ decomposes neatly into two pieces: those matrices with determinant 1 ($SO(n)$) and those with determinant $-1$ ($SO^-(n)$). Theorem 1.4 says that $SO(n)$ has a unique translation-invariant probability measure; it is clear that this is simply the restriction of Haar measure on $O(n)$ to $SO(n)$.

There is also a measure which is sometimes called Haar measure on $SO^-(n)$, which is the restriction of Haar measure from $O(n)$ to $SO^-(n)$. The set $SO^-(n)$ is of course not a group; it is a coset of the subgroup $SO(n)$ in the group $O(n)$. We continue to use the name Haar measure on $SO^-(n)$ because this measure is a probability measure invariant under translation within $SO^-(n)$ by any matrix from $SO(n)$. Haar measure on $SO(n)$ and Haar measure on $SO^-(n)$ are related as follows: if $U$ is Haar-distributed in $SO(n)$ and $\tilde{U}$ is any fixed matrix in $SO^-(n)$, then $\tilde{U}U$ is Haar-distributed in $SO^-(n)$.
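The warning in Remark 1 matters in practice. A standard correction (this sketch is mine, under the usual convention of making the QR factorization unique by forcing the diagonal of $R$ to be positive, which matches the Gram–Schmidt output) recovers a Haar sample, and a single column sign flip then moves between the two cosets of $O(n)$:

```python
import numpy as np

def haar_orthogonal_qr(n, rng):
    """Haar sample from O(n): QR of a Ginibre matrix, with the
    non-uniqueness of QR fixed by making diag(R) positive."""
    X = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(X)
    # np.linalg.qr does not use Gram-Schmidt; rescale the columns of Q
    # so the factorization matches the Gram-Schmidt convention.
    Q *= np.sign(np.diag(R))
    return Q

rng = np.random.default_rng(2)
U = haar_orthogonal_qr(5, rng)
if np.linalg.det(U) < 0:   # land in SO(5) instead of SO^-(5):
    U[:, 0] *= -1          # right-translation by a fixed det = -1 matrix
print(np.allclose(U @ U.T, np.eye(5)), np.linalg.det(U))
```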
Euler angles

Recall the spherical coordinate system in $\mathbb{R}^n$: each $x \in \mathbb{R}^n$ has spherical coordinates $r, \theta_1, \ldots, \theta_{n-1}$ such that
$$\begin{aligned}
x_1 &= r \sin(\theta_{n-1}) \cdots \sin(\theta_2)\sin(\theta_1) \\
x_2 &= r \sin(\theta_{n-1}) \cdots \sin(\theta_2)\cos(\theta_1) \\
&\;\;\vdots \\
x_{n-1} &= r \sin(\theta_{n-1})\cos(\theta_{n-2}) \\
x_n &= r \cos(\theta_{n-1}).
\end{aligned}$$
Here $0 \le r < \infty$, $0 \le \theta_1 < 2\pi$, and $0 \le \theta_k \le \pi$ for $2 \le k \le n-1$. The spherical coordinates of a point are uniquely determined, except in the cases that some $\theta_k$ ($k \ge 2$) is 0 or $\pi$, or $r = 0$.

Spherical coordinates are the basis for the parametrization of $SO(n)$ by the so-called Euler angles. For $\theta \in [0, 2\pi)$ and $1 \le k \le n-1$, let
$$U_k(\theta) := \begin{pmatrix} I_{k-1} & & & \\ & \cos(\theta) & \sin(\theta) & \\ & -\sin(\theta) & \cos(\theta) & \\ & & & I_{n-k-1} \end{pmatrix}.$$
All matrices in $SO(n)$ can be decomposed as a product of matrices of this type, as follows.

Proposition 1.6 For any $U \in SO(n)$, there are angles (called Euler angles) $\{\theta^k_j\}_{1 \le k \le n-1,\ 1 \le j \le k}$ with $0 \le \theta^k_1 < 2\pi$ and $0 \le \theta^k_j < \pi$ for $j \neq 1$, so that
$$U = U^{(n-1)} \cdots U^{(1)},$$
where $U^{(k)} = U_1(\theta^k_1) \cdots U_k(\theta^k_k)$. The Euler angles are unique except if some $\theta^k_j$ is 0 or $\pi$ ($j \ge 2$).

Proof Observe that the result is vacuous for $n = 1$ and holds trivially for $n = 2$. Suppose, then, that it holds on $SO(n-1)$, and let $U \in SO(n)$. Let $1, \theta^{n-1}_1, \ldots, \theta^{n-1}_{n-1}$ be the spherical coordinates of $Ue_n$, where $e_j$ denotes the $j$th standard basis vector of $\mathbb{R}^n$. Then
$$U_1(\theta^{n-1}_1) \cdots U_{n-1}(\theta^{n-1}_{n-1})\, e_n = U_1(\theta^{n-1}_1) \cdots U_{n-2}(\theta^{n-1}_{n-2}) \begin{pmatrix} 0 \\ \vdots \\ 0 \\ \sin(\theta^{n-1}_{n-1}) \\ \cos(\theta^{n-1}_{n-1}) \end{pmatrix} = \cdots = \begin{pmatrix} \sin(\theta^{n-1}_{n-1}) \cdots \sin(\theta^{n-1}_2)\sin(\theta^{n-1}_1) \\ \sin(\theta^{n-1}_{n-1}) \cdots \sin(\theta^{n-1}_2)\cos(\theta^{n-1}_1) \\ \vdots \\ \sin(\theta^{n-1}_{n-1})\cos(\theta^{n-1}_{n-2}) \\ \cos(\theta^{n-1}_{n-1}) \end{pmatrix};$$
that is, $U_1(\theta^{n-1}_1) \cdots U_{n-1}(\theta^{n-1}_{n-1})\, e_n = U e_n$, and so
$$\left[ U_1(\theta^{n-1}_1) \cdots U_{n-1}(\theta^{n-1}_{n-1}) \right]^{-1} U = \begin{pmatrix} \widetilde{U} & 0 \\ 0 & 1 \end{pmatrix} \tag{1.8}$$
for some $\widetilde{U} \in SO(n-1)$. By the induction hypothesis, $\widetilde{U}$ can be written as $\widetilde{U} = U^{(n-2)} \cdots U^{(1)}$. By mild abuse of notation we now consider each of the implicit factors $U_\ell(\theta^k_\ell)$, a priori elements of $SO(n-1)$, to be elements of $SO(n)$ fixing $e_n$. The claimed factorization of $U$ follows by multiplying both sides of (1.8) by $U^{(n-1)} := U_1(\theta^{n-1}_1) \cdots U_{n-1}(\theta^{n-1}_{n-1})$. □

Haar measure on $SO(n)$ can be characterized as a distribution on the Euler angles. Observe first that by a right-to-left version of the column-by-column construction, if $U \in SO(n)$ is distributed according to Haar measure, then $Ue_n$ is a uniform random point on $S^{n-1}$. Recall that the uniform probability measure on the sphere $S^{n-1}$ is given in spherical coordinates by
$$\frac{\Gamma\left(\frac{n}{2}\right)}{2\pi^{n/2}} \sin^{n-2}(\theta_{n-1}) \cdots \sin(\theta_2)\; d\theta_1 \cdots d\theta_{n-1}.$$
That is, the $\{\theta^{n-1}_k\}_{1 \le k \le n-1}$ subset of the Euler angles of $U$ has density
$$\frac{\Gamma\left(\frac{n}{2}\right)}{2\pi^{n/2}} \sin^{n-2}(\theta^{n-1}_{n-1}) \cdots \sin(\theta^{n-1}_2)$$
with respect to $d\theta^{n-1}_1 \cdots d\theta^{n-1}_{n-1}$.

Now, given $Ue_n$, the vector $Ue_{n-1}$ is uniformly distributed on the unit sphere in the orthogonal complement of $Ue_n$; equivalently, given $\theta^{n-1}_1, \ldots, \theta^{n-1}_{n-1}$,
$$\left[ U_1(\theta^{n-1}_1) \cdots U_{n-1}(\theta^{n-1}_{n-1}) \right]^{-1} U e_{n-1}$$
is distributed uniformly on the unit sphere of the $(n-1)$-dimensional space $e_n^\perp$. Since $\{\theta^{n-2}_k\}_{1 \le k \le n-2}$ are exactly the (angular) spherical coordinates of this vector, it follows that $\{\theta^{n-2}_k\}_{1 \le k \le n-2}$ are independent of $\{\theta^{n-1}_k\}_{1 \le k \le n-1}$, with density
$$\frac{\Gamma\left(\frac{n-1}{2}\right)}{2\pi^{(n-1)/2}} \sin^{n-3}(\theta^{n-2}_{n-2}) \cdots \sin(\theta^{n-2}_2).$$
Continuing in this way, the Euler angles of $U$ are independent, with joint density
$$\prod_{k=1}^{n-1} \left[ \frac{\Gamma\left(\frac{k+1}{2}\right)}{2\pi^{(k+1)/2}} \prod_{j=1}^k \sin^{j-1}(\theta^k_j) \right]. \tag{1.9}$$
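The factorization in Proposition 1.6 is easy to multiply out numerically. The following sketch (my own, with arbitrary test angles) builds $U = U^{(n-1)} \cdots U^{(1)}$ for $n = 3$ and confirms the product lands in $SO(3)$:

```python
import numpy as np

def rotation_U(theta, k, n):
    """The rotation U_k(theta), acting in coordinates k, k+1 (1-indexed)."""
    U = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    U[k-1, k-1], U[k-1, k] = c, s
    U[k, k-1], U[k, k] = -s, c
    return U

def from_euler_angles(angles, n):
    """U = U^{(n-1)} ... U^{(1)}, where U^{(k)} = U_1(theta^k_1)...U_k(theta^k_k);
    angles[k] holds (theta^k_1, ..., theta^k_k)."""
    U = np.eye(n)
    for k in range(n - 1, 0, -1):       # factors U^{(n-1)}, ..., U^{(1)}
        for j in range(1, k + 1):
            U = U @ rotation_U(angles[k][j - 1], j, n)
    return U

n = 3
angles = {1: [0.3], 2: [1.1, 0.7]}      # theta^1_1; theta^2_1, theta^2_2
U = from_euler_angles(angles, n)
print(np.allclose(U @ U.T, np.eye(n)), np.isclose(np.linalg.det(U), 1.0))
```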
From this construction, one can describe a natural analog for $SO^-(n)$: any $U \in SO^-(n)$ can be written as
$$U = U^{(n-1)} \cdots U^{(1)} U^{(0)}, \tag{1.10}$$
where
$$U^{(0)} = \begin{pmatrix} -1 & \\ & I_{n-1} \end{pmatrix}$$
and $U^{(n-1)} \cdots U^{(1)}$ is the Euler angle decomposition of $UU^{(0)}$, i.e., the matrix with the same columns as $U$, except for a sign change in the first column. That is, Haar measure on $SO^-(n)$ can be described by choosing angles $\{\theta^k_j\}_{1 \le k \le n-1,\ 1 \le j \le k}$ according to the density in (1.9) and then letting $U$ be given by the formula in (1.10).

To choose $U$ according to Haar measure on $O(n)$, one simply chooses the Euler angles according to (1.9), and then includes the $U^{(0)}$ factor with probability $\frac{1}{2}$, independent of the choice of angles.

1.3 Lie group structure and character theory

The classical groups as Lie groups

As we have already noted, the classical compact matrix groups are Lie groups; i.e., they are groups and they are differentiable manifolds such that the multiplication operation $(A, B) \mapsto AB$ and the inversion operation $A \mapsto A^{-1}$ are smooth maps. Each of the groups has an associated Lie algebra, which is the tangent space to the group at the identity matrix. Lie algebras play an important role in understanding Lie groups, because they are relatively simple objects geometrically (vector spaces!), but they come equipped with an extra algebraic structure called the Lie bracket which encodes much of the geometry of the group itself.

The tool which connects the Lie algebra to the Lie group is the exponential map. While one can define the exponential map on Riemannian manifolds in general, the definition in the setting of the classical compact matrix groups can be made very concrete; this in particular makes the source of the terminology clear.

Definition Let $X \in M_n(\mathbb{C})$. The matrix exponential $e^X$ is defined by
$$e^X := I_n + X + \frac{1}{2}X^2 + \frac{1}{3!}X^3 + \cdots.$$

We will make frequent use of the following basic facts.

Lemma 1.7
1. The sum defining the matrix exponential is convergent for any $X \in M_n(\mathbb{C})$.
2. If $X \in M_n(\mathbb{C})$, then $e^X$ is invertible with inverse $e^{-X}$.
3. If $X$ and $Y$ commute, then $e^{X+Y} = e^X e^Y$; this need not be true if $X$ and $Y$ do not commute.

Proof
1. Recall that the Hilbert–Schmidt norm is submultiplicative; from this it follows that
$$\sum_{j=0}^N \frac{\|X^j\|_{HS}}{j!} \le \sum_{j=0}^N \frac{\|X\|_{HS}^j}{j!} \le \sum_{j=0}^\infty \frac{\|X\|_{HS}^j}{j!} < \infty$$
for all $N$, and so $\sum_{j=0}^\infty \frac{1}{j!}X^j$ is convergent.
2. This point follows easily from the next, since $X$ and $-X$ commute.
3. For $N \in \mathbb{N}$, since $X$ and $Y$ commute we have
$$\sum_{j=0}^N \frac{1}{j!}(X+Y)^j = \sum_{j=0}^N \frac{1}{j!} \sum_{k=0}^j \binom{j}{k} X^k Y^{j-k} = \sum_{k=0}^N \frac{1}{k!} X^k \sum_{j=k}^N \frac{1}{(j-k)!} Y^{j-k} = \sum_{k=0}^N \frac{1}{k!} X^k \sum_{j=0}^{N-k} \frac{1}{j!} Y^j.$$
Now let $N \to \infty$. It is easy to cook up examples of noncommuting $X$ and $Y$ for which $e^{X+Y} \neq e^X e^Y$. □

Suppose now that $X = UAU^*$ with $U \in U(n)$. Then
$$e^X = \sum_{j=0}^\infty \frac{1}{j!}(UAU^*)^j = \sum_{j=0}^\infty \frac{1}{j!} U A^j U^* = U e^A U^*.$$
In particular, if $X$ is unitarily diagonalizable (i.e., normal), with $X = U \mathrm{diag}(d_1, \ldots, d_n) U^*$, then $e^X = U \mathrm{diag}(e^{d_1}, \ldots, e^{d_n}) U^*$. More generally, given a function $f$ on $\mathbb{C}$, for unitarily diagonalizable $X$ as above, we define $f(X)$ by this route: if $X = U \mathrm{diag}(d_1, \ldots, d_n) U^*$, then
$$f(X) := U \mathrm{diag}(f(d_1), \ldots, f(d_n)) U^*.$$
This procedure is referred to as the functional calculus.

The function $\gamma(t) = e^{tX}$ defines a one-parameter subgroup of $GL_n(\mathbb{C})$ (the group of invertible $n \times n$ matrices over $\mathbb{C}$), since $e^{tX} e^{sX} = e^{(t+s)X}$. More geometrically, $\gamma(t)$ is a curve in $GL_n(\mathbb{C})$ with $\gamma(0) = I_n$ and $\gamma'(0) = X$.
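These facts are easy to experiment with. The sketch below (my illustration; scipy.linalg.expm computes the matrix exponential) checks that a skew-symmetric $X$ exponentiates into $SO(n)$, anticipating Lemma 1.9 below, and exhibits a noncommuting pair with $e^{X+Y} \neq e^X e^Y$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

# Skew-symmetric X (X + X^T = 0): e^X should be (special) orthogonal.
A = rng.standard_normal((4, 4))
X = A - A.T
U = expm(X)
print(np.allclose(U @ U.T, np.eye(4)), np.isclose(np.linalg.det(U), 1.0))

# Noncommuting nilpotent pair: e^{X+Y} differs from e^X e^Y.
X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])
print(np.allclose(expm(X + Y), expm(X) @ expm(Y)))  # False
```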
In general, the role of the exponential map in Riemannian geometry is that it gives a local diffeomorphism of the tangent space to a manifold at a point to a neighborhood of that point in the manifold. In the present context, it is what maps the Lie algebra of a closed subgroup of $GL_n(\mathbb{C})$ down to a neighborhood of $I_n$ in the subgroup. The following lemma, whose proof we omit, gives the precise statement needed.

Lemma 1.8 If $G$ is any closed subgroup of $GL_n(\mathbb{C})$, then $X$ is an element of the Lie algebra of $G$ if and only if $e^{tX} \in G$ for all $t \ge 0$. The map $X \mapsto e^X$ gives a diffeomorphism of a neighborhood of 0 in the Lie algebra of $G$ to a neighborhood of $I_n$ in $G$.

The lemma allows us to identify the Lie algebras of the classical groups concretely, as follows.

Lemma 1.9
1. The Lie algebra of $O(n)$ is
$$\mathfrak{o}(n) = \{X \in M_n(\mathbb{R}) : X + X^T = 0\}.$$
2. The Lie algebra of $SO(n)$ is
$$\mathfrak{so}(n) = \mathfrak{o}(n) = \{X \in M_n(\mathbb{R}) : X + X^T = 0\}.$$
3. The Lie algebra of $U(n)$ is
$$\mathfrak{u}(n) = \{X \in M_n(\mathbb{C}) : X + X^* = 0\}.$$
4. The Lie algebra of $SU(n)$ is
$$\mathfrak{su}(n) = \{X \in M_n(\mathbb{C}) : X + X^* = 0,\ \mathrm{Tr}(X) = 0\}.$$
5. The Lie algebra of $Sp(2n) \subseteq U(2n)$ is
$$\mathfrak{sp}(2n) = \{X \in M_{2n}(\mathbb{C}) : X + X^* = 0,\ XJ + JX^* = 0\}.$$
The quaternionic form of the Lie algebra of $Sp(2n) \subseteq M_n(\mathbb{H})$ is
$$\mathfrak{su}_{\mathbb{H}}(n) = \{X \in M_n(\mathbb{H}) : X + X^* = 0\},$$
where $X^*$ denotes the quaternionic conjugate transpose.

Proof
1. Firstly, if $\gamma : [0, 1] \to O(n)$ is a curve with $\gamma(0) = I_n$, then $\gamma(t)\gamma(t)^T = I_n$ for each $t$. Differentiating gives that $\gamma'(t)\gamma(t)^T + \gamma(t)\gamma'(t)^T = 0$ for all $t$, and so in particular, if $X = \gamma'(0)$ is the tangent vector to $\gamma$ at $I_n$, then $X + X^T = 0$. On the other hand, given $X$ with $X + X^T = 0$,
$$e^{tX}(e^{tX})^T = e^{tX} e^{tX^T} = e^{tX} e^{-tX} = I_n,$$
and so $\gamma(t) = e^{tX}$ is a curve in $O(n)$ with $\gamma(0) = I_n$ and $\gamma'(0) = X$. That is, $X$ is in the tangent space to $O(n)$ at $I_n$.
2. Since $SO(n)$ is a subgroup of $O(n)$, the tangent space to $SO(n)$ at $I_n$ is a subspace of $\mathfrak{o}(n)$. In fact, it is clear geometrically that the Lie algebras of $O(n)$ and $SO(n)$ must be the same, since $O(n)$ is just the union of two disconnected copies of $SO(n)$ (the second copy being the negative coset $SO^-(n)$).
3. Exactly analogous to 1.
4. As in the orthogonal case, since $SU(n)$ is a subgroup of $U(n)$, the tangent space to $SU(n)$ at $I_n$ is a subspace of $\mathfrak{u}(n)$; in this case, however, it is not the whole space. The additional condition for a curve $\gamma(t)$ to lie in $SU(n)$ is that $\det \gamma(t) = 1$ for all $t$. Using the functional calculus, if $X = U \mathrm{diag}(d_1, \ldots, d_n) U^*$,
$$\det(e^X) = \det\big(U \mathrm{diag}(e^{d_1}, \ldots, e^{d_n}) U^*\big) = e^{d_1 + \cdots + d_n} = e^{\mathrm{Tr}\,X}.$$
In particular, if $\gamma(t) = e^{tX}$, then
$$\frac{d}{dt}\big(\det(\gamma(t))\big) = \frac{d}{dt}\left(e^{t\,\mathrm{Tr}\,X}\right) = (\mathrm{Tr}\,X)\, e^{t\,\mathrm{Tr}\,X},$$
and so $X$ is tangent to $SU(n)$ at $I_n$ if and only if $X \in \mathfrak{u}(n)$ (i.e., $X + X^* = 0$) and $\mathrm{Tr}\,X = 0$.
5. For the complex form, since $Sp(2n)$ is a subgroup of $U(2n)$, the Lie algebra of $Sp(2n)$ is a subspace of $\mathfrak{u}(2n)$; hence the first condition in $\mathfrak{sp}(2n)$. For the second, observe that if $\gamma : [0, 1] \to Sp(2n)$ is a curve with $\gamma(0) = I_{2n}$, then differentiating the requirement that $\gamma(t) J \gamma(t)^* = J$ for all $t$ gives that $\gamma'(t) J \gamma(t)^* + \gamma(t) J \gamma'(t)^* = 0$; if $X = \gamma'(0)$, then evaluating at $t = 0$ gives that $XJ + JX^* = 0$. On the other hand, if $X$ satisfies $X + X^* = 0$ and $XJ + JX^* = 0$, then
$$e^{tX} J = \sum_{m=0}^\infty \frac{t^m}{m!} X^m J = \sum_{m=0}^\infty \frac{t^m}{m!} \left(-X^{m-1} J X^*\right) = \sum_{m=0}^\infty \frac{t^m}{m!} \left(X^{m-2} J (X^*)^2\right) = \cdots = J \sum_{m=0}^\infty \frac{(-tX^*)^m}{m!} = J\big(e^{-tX}\big)^*,$$
and so $e^{tX} \in Sp(2n)$ for all $t$.
Verifying the quaternionic form is exactly the same as the unitary case.
□ Observe that even though the matrices in u(n), su(n) and sp(2n) have com-plex entries, they are real vector spaces only; they are not closed under mul-tiplication by complex scalars. They do inherit a real inner product structure from the Euclidean structure of the spaces in which they reside: the inner prod-uct on u(n), su(n) and sp(2n) is ⟨X, Y⟩:= Re(Tr(XY∗)). This is the unique real inner product which defines the same norm as the usual Hilbert–Schmidt inner product on u(n), su(n) and sp(2n). 32 Haar measure on the classical compact matrix groups Representation theory We have already encountered two natural actions of the classical compact groups on vector spaces; namely, on Rn or Cn by matrix-vector multiplica-tion, and on Mn(R) or Mn(C) by multiplication. The topic of representation theory is to understand all the ways a group can act on a vector space. This is a vast and well-developed field, but in the context of the random matrix theory on these groups, our main interest will be not in representation theory in its full glory, but specifically in character theory. Essentially the reason for this is that the irreducible characters form a basis for the space of class functions on a group; i.e., those functions which are constant on conjugacy classes. Con-jugacy classes of matrices within the classical groups are exactly the set of matrices with a given set of eigenvalues, and so ultimately, a good basis of the space of class functions gives us a route to studying eigenvalues. A representation of a finite group G is a group homomorphism ρ : G → GL(V), where V is a finite-dimensional vector space and GL(V) is the group of invertible linear maps on V. A representation of a Lie group G is again a map ρ : G →GL(V) for a finite-dimensional vector space V, which is both a homo-morphism of groups and is required to be smooth. That is, a representiation of G is a way of seeing G as acting on V, in a way that respects the structure of G as a (Lie) group. Usually the notation for the map itself is suppressed, and one refers to a representation V of a group G, and writes g · v or just gv rather than ρ(g)(v). A representation V of G is irreducible if V has no proper nontrivial sub-spaces which are invariant under the action of G. Example Let S n be the symmetric group on n letters; S n acts on Rn by per-muting the coordinates of a vector. This representation of S n is not irreducible, since the subspace V1 spanned by (1, 1, . . . , 1) is invariant under the action of S n, as is the complementary subspace V2 = {(x1, . . . , xn) : x1 + · · · + xn = 0}. Of course, V1 is an irreducible representation of S n (it is called the trivial one-dimensional representation). Less obviously, but it’s not too hard to see, V2 is also an irreducible representation of S n; it is called the standard representa-tion. Known representations of a group give rise to new ones in various ways. The simplest are by taking direct sums or tensor products: if V and W are representations of G, then V ⊕W and V ⊗W are also representations of G, via the actions g((v, w)) = (gv, gw) g(v ⊗w) = (gv) ⊗(gw). 1.3 Lie group structure and character theory 33 A representation V also induces a representation on the dual space V∗of scalar-valued linear functions on V: if v∗∈V∗, then (gv∗)(v) := v∗(g−1v). A linear map ϕ : V →W between representations V and W of a group G is called G-linear if for all v ∈V and all g ∈G, g · ϕ(v) = ϕ(g · v). 
Two representations $V$ and $W$ are isomorphic if there is a $G$-linear isomorphism between the vector spaces $V$ and $W$.

A fundamental fact is that finite-dimensional representations of finite groups and compact Lie groups can be decomposed into direct sums of irreducible representations. The proof in either case is essentially the same, and rests on the fact that for any representation $V$ of such a group, one can define an inner product on $V$ such that each group element acts as an isometry. Indeed, take any inner product $\langle \cdot, \cdot \rangle$ on $V$, and define
$$\langle v, w \rangle_G := \begin{cases} \dfrac{1}{|G|} \displaystyle\sum_{g \in G} \langle gv, gw \rangle, & G \text{ finite}; \\[2ex] \displaystyle\int \langle gv, gw \rangle\, dg, & G \text{ a compact Lie group}, \end{cases}$$
where the integration in the second case is with respect to normalized Haar measure on $G$. Then if $W \subsetneq V$ is a nonzero subspace which is invariant under the action of $G$, the orthogonal complement $W^\perp$ of $W$ with respect to $\langle \cdot, \cdot \rangle_G$ is also $G$-invariant, and $V = W \oplus W^\perp$. Continuing in this way defines a decomposition.

Suppose now that $V = \oplus_{i=1}^k V_i^{\oplus a_i} = \oplus_{j=1}^\ell W_j^{\oplus b_j}$, with the $V_i$ and the $W_j$ irreducible representations. If a given summand $V_i$ meets a summand $W_j$ nontrivially, then they must be equal, because otherwise their intersection would be a nontrivial $G$-invariant subspace of at least one of them. The two decompositions can thus differ at most by permuting the summands. It therefore makes sense to talk about the number of times an irreducible representation $V_i$ occurs in a representation $V$.

A basic tool in the representation theory of finite groups and compact Lie groups is the following.

Lemma 1.10 (Schur's lemma) Let $G$ be a finite group or a compact Lie group, and let $V$ and $W$ be finite-dimensional irreducible complex representations of $G$. Let $\phi : V \to W$ be a $G$-linear map.
1. Either $\phi$ is an isomorphism or $\phi = 0$.
2. If $V = W$, then there is a $\lambda \in \mathbb{C}$ such that $\phi = \lambda \cdot I$, with $I$ the identity map on $V$.

Proof
1. Since $\phi$ is $G$-linear, $\ker \phi$ is an invariant subspace of $V$, and since $V$ is irreducible, this means that either $\ker \phi = V$ (and hence $\phi = 0$) or else $\ker \phi = \{0\}$, so that $\phi$ is injective. In that case, $\mathrm{im}\,\phi$ is a nonzero invariant subspace of $W$, and hence $\mathrm{im}\,\phi = W$: $\phi$ is an isomorphism.
2. Since $V$ is a complex vector space, $\phi$ must have at least one eigenvalue; i.e., there is a $\lambda \in \mathbb{C}$ such that $\ker(\phi - \lambda I) \neq \{0\}$. But then since $V$ is irreducible, $\ker(\phi - \lambda I) = V$, and thus $\phi = \lambda I$. □

Given a representation $\rho : G \to GL(V)$ of a group $G$, the character of the representation is the function
$$\chi_V(g) = \mathrm{Tr}(\rho(g)).$$
Note that if $h \in G$, then
$$\chi_V(hgh^{-1}) = \mathrm{Tr}\big(\rho(hgh^{-1})\big) = \mathrm{Tr}\big(\rho(h)\rho(g)\rho(h)^{-1}\big) = \mathrm{Tr}(\rho(g)),$$
and so $\chi_V$ is a class function on $G$.

The following properties are easy to check.

Proposition 1.11 Let $V$ and $W$ be representations of $G$. Then
- $\chi_V(e) = \dim(V)$
- $\chi_{V \oplus W} = \chi_V + \chi_W$
- $\chi_{V \otimes W} = \chi_V \chi_W$
- $\chi_{V^*} = \overline{\chi_V}$.

Proof Exercise. □

Since all finite-dimensional representations can be decomposed into irreducible representations, the second property above says that all characters can be written as sums of characters corresponding to irreducible representations; these are referred to as the irreducible characters of the group. The irreducible characters satisfy two important orthogonality relations with respect to the inner product $(\cdot, \cdot)_G$ given by
$$(\alpha, \beta)_G = \begin{cases} \dfrac{1}{|G|} \displaystyle\sum_{g \in G} \alpha(g)\overline{\beta(g)}, & G \text{ finite}; \\[2ex] \displaystyle\int_G \alpha(g)\overline{\beta(g)}\, dg, & G \text{ a compact Lie group}, \end{cases}$$
where $\alpha, \beta : G \to \mathbb{C}$ are class functions. The first is the following.

Proposition 1.12 (First orthogonality relation) The irreducible characters of a finite group or compact Lie group $G$ are orthonormal with respect to $(\cdot, \cdot)_G$.
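For a finite group this is easy to check numerically. The sketch below is my own illustration using the standard character table of $S_3$ (conjugacy classes: the identity, the three transpositions, and the two 3-cycles); it verifies that the Gram matrix of the irreducible characters under $(\cdot, \cdot)_G$ is the identity.

```python
import numpy as np

# Conjugacy classes of S_3: {e}, {transpositions}, {3-cycles}
class_sizes = np.array([1, 3, 2])
order = class_sizes.sum()  # |S_3| = 6

# Rows: characters of the trivial, sign, and standard representations,
# evaluated on the three conjugacy classes.
chars = np.array([
    [1,  1,  1],
    [1, -1,  1],
    [2,  0, -1],
])

# (alpha, beta)_G = (1/|G|) sum_g alpha(g) conj(beta(g)), summed class by class
gram = (chars * class_sizes) @ chars.conj().T / order
print(np.allclose(gram, np.eye(3)))  # True: the characters are orthonormal
```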
In fact, one can prove that the irreducible characters form an orthonormal basis of the space of class functions on $G$ (in the compact Lie group case, this should be interpreted as saying that the irreducible characters form a complete orthonormal system in $L^2(G)$). In the finite group case, this has the following consequence.

Proposition 1.13 (Second orthogonality relation) Let $\chi_1, \ldots, \chi_N$ be the irreducible characters of the finite group $G$. Then
$$\sum_{j=1}^N \chi_j(g)\overline{\chi_j(g')} = \begin{cases} \dfrac{|G|}{c(g)}, & g \sim g'; \\[1ex] 0, & \text{otherwise}, \end{cases}$$
where $c(g)$ is the size of the conjugacy class of $g$ and $g \sim g'$ means that $g$ and $g'$ are conjugate.

Proof Let $g \in G$, and let
$$\mathbf{1}_{[g]}(h) = \begin{cases} 1, & h \sim g; \\ 0, & \text{otherwise}. \end{cases}$$
Since $\mathbf{1}_{[g]}$ is a class function and the irreducible characters are an orthonormal basis, $\mathbf{1}_{[g]}$ can be expanded as
$$\mathbf{1}_{[g]} = \sum_{j=1}^N \big(\mathbf{1}_{[g]}, \chi_j\big)_G\, \chi_j = \sum_{j=1}^N \left[ \frac{1}{|G|} \sum_{h \in G} \mathbf{1}_{[g]}(h)\, \overline{\chi_j(h)} \right] \chi_j = \frac{c(g)}{|G|} \sum_{j=1}^N \overline{\chi_j(g)}\, \chi_j.$$
Multiplying both sides by $\frac{|G|}{c(g)}$ and evaluating at $g' \in G$ gives the claimed orthogonality. □

Exercise 1.14 Give a second proof by observing that the matrix $\left[\sqrt{\frac{c(g)}{|G|}}\, \chi_j(g)\right]$, with rows indexed by $j \in \{1, \ldots, N\}$ and columns indexed by conjugacy classes of $G$, is unitary.

There are many important consequences of the orthogonality relations, too many to go into here. It is worth mentioning, however, that the first orthogonality relation implies that a representation is uniquely determined by its character. Indeed, given a representation $V$, we have seen that it is possible to write $V = \oplus_{i=1}^k V_i^{\oplus a_i}$ for irreducible representations $V_i$ and integers $a_i$. Then $\chi_V = \sum_{i=1}^k a_i \chi_{V_i}$. But since the $\chi_{V_i}$ are orthonormal, we have that $a_i = (\chi_V, \chi_{V_i})_G$. That is, the decomposition $V = \oplus_{i=1}^k V_i^{\oplus a_i}$ can be recovered from $\chi_V$.

We now turn to the irreducible characters of the classical compact groups. First, we will need to introduce some basic notions about integer partitions. Given a nonnegative integer $N$, a partition $\lambda$ of $N$ is an ordered tuple $\lambda = (\lambda_1, \ldots, \lambda_k)$ with $\lambda_i \in \mathbb{N}$ for each $i$, $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_k$, and $\lambda_1 + \cdots + \lambda_k = N$. The $\lambda_i$ are called the parts of $\lambda$. It is sometimes convenient to choose $k$ to be larger than what might seem to be the obvious choice by tacking 0's onto the end of $\lambda$; if two partitions differ only in this final string of 0's, they are considered to be the same. The number of nonzero parts of a partition $\lambda$ is called the length of $\lambda$ and is denoted $\ell(\lambda)$. The integer $N$, i.e., the sum of the parts of $\lambda$, is called the weight of $\lambda$ and is denoted $|\lambda|$.

The following alternative notation for integer partitions is often useful. Given $k \in \mathbb{N}$, we write $\lambda = (1^{a_1}, 2^{a_2}, \ldots, k^{a_k})$ for the partition of $N = a_1 + 2a_2 + \cdots + ka_k$ which has $a_1$ parts of size 1, $a_2$ parts of size 2, and so on. In this notation, the partition $(3, 3, 2, 1)$ of 9 would thus be written $(1^1, 2^1, 3^2)$.

Integer partitions are often represented by Young diagrams: for a partition $\lambda = (\lambda_1, \ldots, \lambda_k)$, its Young diagram is a collection of boxes drawn from the top-left corner (some authors start from the bottom-right instead), with $\lambda_1$ boxes in the top row, $\lambda_2$ boxes in the next row, and so on. The conjugate partition $\lambda'$ of $\lambda$ is then the one corresponding to the reflection of the Young diagram of $\lambda$ across the diagonal. (The original here shows the Young diagrams of $\lambda = (5, 4, 1)$ and $\lambda' = (3, 2, 2, 2, 1)$; the diagrams are not reproduced.)

Now, for a multiindex $\alpha = (\alpha_1, \ldots, \alpha_n)$, define the antisymmetric polynomial
$$a_\alpha(x_1, \ldots, x_n) = \sum_{\pi \in S_n} \mathrm{sgn}(\pi)\, x_{\pi(1)}^{\alpha_1} \cdots x_{\pi(n)}^{\alpha_n}.$$
Note that if $\sigma \in S_n$, then
$$a_\alpha(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = \mathrm{sgn}(\sigma)\, a_\alpha(x_1, \ldots, x_n).$$
, xσ(n)) = sgn(σ)aα(x1, . . . , xn). In particular, this shows that aα(x1, . . . , xn) = 0 if xi = xj for any i , j and that if any αi = α j with i , j, then aα ≡0. Assume, then, that α1 > α2 > · · · > αn ≥0; in particular, α1 ≥n −1, 1 Some authors start from the bottom-right instead. 1.3 Lie group structure and character theory 37 α2 ≥n −2, and so on. Write α = λ + δ, where δ = (n −1, n −2, . . . , 0) and λ is a partition of length at most n. Then aα(x1, . . . , xn) = X π∈S n sgn(π) n Y j=1 x λj−n+j π(j) = det x λ j+n−j i 1≤i,j≤n  . Since we have already observed that aα(x1, . . . , xn) vanishes if xi = xj, aα(x1, . . . , xn) is divisible by xi −x j for each i < j, and therefore by their product, which is the Vandermonde determinant Y 1≤i n. The Schur functions {sλ(x1, . . . , xn) : ℓ(λ) ≤n} form one of many possible bases of the symmetric polynomials in x1, . . . , xn over Z. Moreover, we have the following. Theorem 1.15 For U ∈U (n), denote the eigenvalues of U by eiθ1, . . . , eiθn. The functions χλ(U) = sλ(eiθ1, . . . , eiθn) for ℓ(λ) ≤n are (distinct) irreducible characters of U (n). These characters do not comprise a complete list of the irreducible charac-ters of U (n), but in fact there are not too many more. For each λ = (λ1, . . . , λn), with λ1 ≥· · · ≥λn, define λ′ by λ′ = (λ1 −λn, λ2 −λn, . . . , λn−1 −λn, 0). Since the λi are not required to be nonnegative λ may not be a partition, but λ′ always is. The functions χλ(U) = det(U)λnsλ′(eiθ1, . . . , eiθn) give the complete list of the irreducible characters of U (n). Note from the definition of sλ above that if the λ in the statement of Theorem 1.15 is in fact a partition, then the two definitions given for χλ agree. To give similar descriptions of (at least some of) the irreducible characters for the other groups, one needs the following analogs of the Schur functions. 38 Haar measure on the classical compact matrix groups Given a partition λ with ℓ(λ) ≤n, s(b) λ (x1, . . . , xn) := det  x λj+n−j+ 1 2 i −x −(λj+n−j+ 1 2) i  1≤i, j≤n ! det  x n−j+ 1 2 i −x −(n−j+ 1 2) i  1≤i,j≤n ! , s(c) λ (x1, . . . , xn) := det  x λj+n−j i + x −(λj+n−j) i  1≤i, j≤n ! det h xn−j i + x−(n−j) i i 1≤i,j≤n  , and s(d1) λ (x1, . . . , xn) := det  x λj+n−j i + x −(λ j+n−j) i  1λj+n−j,0 + 1λj+n−j=0  1≤i,j≤n ! det h xn−j i + x−(n−j) i  1n−j,0 + 1n−j=0 i 1≤i, j≤n  If ℓ(λ) ≤n −1, let s(d2) λ (x1, . . . , xn−1) = det  x λj+n−j+1 i −x −(λj+n−j+1) i  1≤i,j≤n ! det h xn−j+1 i −x−(n−j+1) i i 1≤i, j≤n−1  Exercise 1.16 The functions s(b) λ , s(c) λ , and s(d1) λ are polynomials in x1, x−1 1 , . . . , xn, x−1 n , and s(d2) λ is a polynomial in x1, x−1 1 , . . . , xn−1, x−1 n−1. In the case of the remaining classical compact groups, we will restrict at-tention to polynomial representations. A representation ρ : G →GL(V) of a matrix group is called a polynomial representation if there is a basis of V such that the entries of the matrix of ρ(g) are polynomials in the entries of g. (If there is one such basis, this is in fact true for all bases of V.) The following then gives the analog of Theorem 1.15 for the other groups. Theorem 1.17 1. Let n ∈N. For U ∈SO (2n + 1), denote the eigenvalues of U by e±iθ1, . . . , e±iθn, 1. The irreducible polynomial representations of O (2n + 1) are indexed by partitions λ such that λ′ 1 + λ′ 2 ≤2n + 1. For such a partition with ℓ(λ) > n, let ˜ λ be the partition defined by ˜ λ′ i =        λ′ i, i > 1; 2n + 1 −λ′ 1, i = 1. 
1.3 Lie group structure and character theory 39 Let χλ denote the character of the irreducible representation of O (2n + 1) corresponding to λ. Then χλ(U) =        s(b) λ (eiθ1, . . . , eiθn), ℓ(λ) ≤n; s(b) ˜ λ (eiθ1, . . . , eiθn), ℓ(λ) > n. For U ∈SO−(2n + 1), −U ∈SO (n) and χλ(U) = (−1)|λ|χλ(−U). 2. Let n ∈N. For U ∈SO (2n), denote the eigenvalues of U by e±iθ1, . . . , e±iθn; for U ∈SO−(2n), denote the eigenvalues of U by e±iφ1, . . . , e±iφn−1, 1, −1. The irreducible polynomial representations of O (2n) are indexed by parti-tions λ such that λ′ 1 + λ′ 2 ≤2n. For such a partition with ℓ(λ) > n, let ˜ λ be the partition defined by ˜ λ′ i =        λ′ i, i > 1; 2n −λ′ 1, i = 1. Let χλ denote the character of the irreducible representation of O (2n) corresponding to λ. Then for U ∈SO (2n), χλ(U) =        s(d1) λ (eiθ1, . . . , eiθn), ℓ(λ) ≤n; s(d1) ˜ λ (eiθ1, . . . , eiθn), ℓ(λ) > n. For U ∈SO−(2n), if ℓ(λ) = n, then χλ(U) = 0. Otherwise, χλ(U) =        s(d2) λ (eiφ1, . . . , eiφn−1), ℓ(λ) < n; −s(d2) ˜ λ (eiφ1, . . . , eiφn−1), ℓ(λ) > n. 3. Let n ∈N. For U ∈Sp (2n), denote the eigenvalues of U by e±iθ1, . . . , e±iθn. The irreducible polynomial representations of Sp (2n) are indexed by par-titions λ such that ℓ(λ) ≤n, and the value of the character corresponding to λ at U is χλ(U) = s(c) λ (eiθ1, . . . , eiθn). Notes and References We have given only the barest introduction to matrix analysis on the classical compact groups; for more, the books by Horn and Johnson [54, 55] and Bhatia are invaluable references. Of the many constructions of Haar measure presented, most are part of the 40 Haar measure on the classical compact matrix groups folklore of the field and it seems impossible to sort out who first wrote them down. The parametrization by Euler angles is presumably the oldest, having been used by Hurwitz in 1897; see for a modern perspective. A survey of methods of generating random matrices from a more computational viewpoint is given in . For the Lie group theory of the classical compact groups, Bump gives a beautifully written introduction; also contains some of the representa-tion theory which appears here, and the application to the empirical spectral measure of a random unitary matrix which appears in Section 4.2 of this book. The book by Fulton and Harris gives an accessible and modern treatment of representation theory in general, focusing on the case of Lie groups and Lie algebras. The book by Diaconis gives an introduction to the representa-tion theory of the symmetric group and its applications in probability that a probabilist can understand. The character theory on the classical groups in the form presented here is hard to ferret out of the literature; Littlewood’s and Weyl’s books (from 1940 and 1939, respectively) are the standard references, but while they are both gems, they are becoming increasingly inaccessible to the modern reader (especially if her area of expertise is somewhat distant). In the case of O (2n) and O (2n + 1), Littlewood and Weyl deal only with the characters in-dexed by partitions of length at most n; the formulae presented here for those cases can be found (written slightly differently) in Section 11.9 of Littlewood. The characters corresponding to the longer partitions are related by so-called modification rules due to King . Ram has a bottom-line summary for evalutation of characters on SO (2n), SO (2n + 1), and Sp (2n). 
2 Distribution of the entries 2.1 Introduction We begin this section with a few useful and important properties of Haar mea-sure on the classical compact groups that follow easily from translation invari-ance and the definitions of the groups themselves. In fact, one such property has already appeared (in Exercise 1.5): the distribution of a Haar random ma-trix is invariant under transposition (and conjugate transposition). The next is an obvious but important symmetry of Haar measure. Lemma 2.1 Let U be distributed according to Haar measure in G, where G is one of O (n), U (n), SO (n), SU (n), and Sp (2n). Then the entries of U are identically distributed. Proof To each permutation σ ∈S n, associate the matrix Mσ with entries in {0, 1}, such that mij = 1 if and only if σ(i) = j; then Mσ ∈O (n). Multiplication on the left by Mσ permutes the rows by σ−1 and multiplication on the right by Mσ permutes the columns by σ. One can thus move any entry of a matrix U into, say, the top-left corner by multiplication on the right and/or left by matrices in O (n). By the translation invariance of Haar measure, this means that all entries have the same distribution if G is O (n) or U (n). To complete the proof for the remaining groups, the permutation matrices may need to be slightly modified. Since for σ ∈S n, det(Mσ) = sgn(σ), the permutation matrix Mσ ∈SO (n) if and only if σ is even. For G = SO (n) or SU (n), one therefore replaces one of the non-zero entries of Mσ with −1 to obtain a matrix from SO (n); choosing which entry intelligently (or rather, failing to make the one possible non-intelligent choice) shows that any entry of U ∈SO (n) or SU (n) can be moved to the top left corner by left- and right-multiplication by matrices within the group. For the symplectic group, it is a straightforward exercise to show that the 41 42 Distribution of the entries permutation matrices which are in Sp (2n) are exactly those which permute the indices {1, . . . , n} amongst themselves and have σ(i + n) = σ(i) + n for the rest. This allows one to move any of the first n rows to the top of a random symplectic matrix and any of the first n columns all the way to the left, without changing the distribution. For the remaining rows and columns, note that a permutation with σ(i) ∈{n + 1, . . . , 2n} for 1 ≤i ≤n and σ(i + n) = σ(i) −n corresponds to a kind of anti-symplectic matrix: for such σ, MσJM∗ σ = −J. However, changing the signs of the entries in either all of the first n columns (or rows) or all of the second n columns (or rows) produces a symplectic matrix, which then can be used to move rows, resp. columns, of a random symplectic matrix to the top, resp. left. □ Exercise 2.2 If U is Haar-distributed in U (n), the distributions of Re(U11) and Im(U11) are identical. The symmetries above can be exploited to easily calculate moments of the matrix entries, as follows. Example Let U be Haar distributed in G, for G as above. 1. E[u11] = 0: note that Haar measure on O (n) and U (n) is invariant under multiplication on the left by                    −1 0 0 0 1 ... 0 1                    ; (2.1) doing so multiplies the top row (so in particular u11) of U by −1, but doesn’t change the distribution of the entries. So u11 d = −u11, and thus E[u11] = 0. For G = SO (n) or SU (n), just change the sign of any of the remaining ones in the matrix in (2.1); for G = Sp (2n), change the sign of the (n + 1, n + 1) entry in the matrix in (2.1). 2. 
E|u11|2 = 1 n: because U ∈G, we know that Pn j=1 |u1 j|2 = 1, and because all the entries have the same distribution, we can write E|u11|2 = 1 n n X j=1 E|u1 j|2 = 1 nE         n X j=1 |u1 j|2        = 1 n. (For G = Sp (2n), the n should be replaced with 2n.) 2.1 Introduction 43 Exercise 2.3 For U = uij n j=1, compute Cov  ui j, ukℓ  and Cov  u2 i j, u2 kℓ  for all i, j, k, ℓ. Understanding the asymptotic distribution of the individual entries of Haar-distributed matrices is of course more involved than just calculating the first couple of moments, but follows from classical results. Recall that one con-struction of Haar measure on O (n) involves filling the first column with a ran-dom point on the sphere. That is, the distribution of u11 is exactly that of x1, where x = (x1, . . . , xn) is a uniform random point of Sn−1 ⊆Rn. The asymptotic distribution of a single coordinate of a point on the sphere has been known for over a hundred years; the first rigorous proof is due to Borel in 1906, but it was recognized by Maxwell and others decades earlier. Theorem 2.4 (Borel’s lemma) Let X = (X1, . . . , Xn) be a uniform random vector in Sn−1 ⊆Rn. Then P √nX1 ≤t n→∞ − − − − → 1 √ 2π Z t −∞ e−x2 2 dx; that is, √nX1 converges weakly to a Gaussian random variable, as n →∞. The lemma is also often referred to as the “Poincar´ e limit”. There are many proofs; the one given below is by the method of moments. The following proposition gives a general formula for integrating polynomials over spheres to be used below. Proposition 2.5 Let P(x) = |x1|α1|x2|α2 · · · |xn|αn. Then if X is uniformly dis-tributed on √nS n−1, EP(X) = Γ(β1) · · · Γ(βn)Γ( n 2)n( 1 2 P αi) Γ(β1 + · · · + βn)πn/2 , where βi = 1 2(αi + 1) for 1 ≤i ≤n and Γ(t) = Z ∞ 0 st−1e−sds = 2 Z ∞ 0 r2t−1e−r2dr. The proof is essentially a reversal of the usual trick for computing the nor-malizing constant of the Gaussian distribution. Proof of Borel’s lemma by moments Fix m ∈N; to prove the lemma, we need to show that if for each n, Yn is distributed as the first coordinate of a uniform random point on Sn−1, then lim n→∞E √nYn m = EZm, (2.2) 44 Distribution of the entries where Z is a standard Gaussian random variable. Recall that the moments of the standard Gaussian distribution are EZm = (m −1)!! =        (m −1)(m −3)(m −5) . . . (1), m = 2k; 0, m = 2k + 1. (2.3) To prove (2.2), first note that it follows by symmetry that E[X2k+1 1 ] = 0 for all k ≥0. Next, specializing Proposition 2.5 to P(X) = X2k 1 gives that the even moments of X1 are EX2k 1 = Γ  k + 1 2  Γ  1 2 n−1 Γ  n 2  nk Γ  k + n 2  π n 2 . Using the functional equation Γ(t + 1) = tΓ(t) and the fact that Γ  1 2  = √π, this simplifies to EX2k 1 = (2k −1)(2k −3) . . . (1)nk (n + 2k −2)(n + 2k −4) . . . (n). (2.4) Equation (2.2) follows immediately. □ Corollary 2.6 For each n, let Un be a random orthogonal matrix. Then the sequence { √n[Un]1,1} converges weakly to the standard Gaussian distribution, as n →∞. More recent work has made it possible to give a much more precise state-ment which quantifies Borel’s lemma; doing so has important implications for the joint distribution of the entries of a random orthogonal matrix. We first give a brief review of various notions of distance between measures. Metrics on probability measures Let X be a metric space. The following are some of the more widely used metrics on the set of Borel probability measures on X. 1. Let µ and ν be Borel probability measures on X. 
The total variation dis-tance between µ and ν is defined by dTV(µ, ν) := sup A⊆X |µ(A) −ν(A)| , where the supremum is over Borel measurable sets. Equivalently, one can define dTV(µ, ν) := 1 2 sup f:X→R Z fdµ − Z fdν , 2.1 Introduction 45 where the supremum is over functions f which are continuous, such that ∥f∥∞≤1. When µ and ν have densities f and g (respectively) with respect to a sigma-finite measure λ on X, then dTV(X, Y) = 1 2 Z | f −g|dλ. The total variation distance is a very strong metric on probability mea-sures; in particular, a discrete distribution cannot be approximated by a con-tinuous distribution in total variation. Exercise 2.7 1. Prove that the first two definitions are equivalent, and are equivalent to the third when the measures have density. Hint: The Hahn-Jordan decomposition of the signed measure µ −ν is useful here. 2. Prove that the total variation distance between a discrete distribution and a continuous distribution is always 1. 2. The bounded Lipschitz distance is defined by dBL(µ, ν) := sup ∥g∥BL≤1 Z g dµ − Z g dν , where the bounded-Lipschitz norm ∥g∥BL of g : X →R is defined by ∥g∥BL := max ∥g∥∞, |g|L , where |g|L = supx,y |g(x)−g(y)| d(x,y) is the Lipschitz constant of g. If X is a sepa-rable metric space, the bounded-Lipschitz distance is a metric for the weak topology on probability measures on X (see, e.g., [39, Theorem 11.3.3]). 3. The Kolmogorov distance for probability measures on R is defined by dK(µ, ν) := sup x∈R µ  (−∞, x] −ν(−∞, x] . Convergence in Kolmogorov distance is in general stronger than weak con-vergence, because for weak convergence, the distribution functions need only converge to the limiting distribution function at its continuity points, and the convergence is not required to be uniform. 4. The Lp Kantorovich distance for p ≥1 is defined by Wp(µ, ν) := inf π "Z d(x, y)p dπ(x, y) # 1 p , where the infimum is over couplings π of µ and ν; that is, probability mea-sures π on X × X such that π(A × Rn) = µ(A) and π(Rn × B) = ν(B). The 46 Distribution of the entries Lp Kantorovich distance is a metric for the topology of weak convergence plus convergence of moments of order p or less. It is often called the Lp Wasserstein distance, and in the case of p = 1, the earth-mover distance. When p = 1, there is the following alternative formulation: W1(µ, ν) := sup | f|L≤1 Z f dµ − Z f dν . The fact that this is an equivalent definition of W1 to the one given above is the Kantorovich–Rubenstein theorem. There are dual representations for Wp for p > 1 as well, but they are more complicated and will not come up in this book. As a slight extension of the notation defined above, if Y and Z are random variables taking values in a metric space, dTV(Y, Z) is defined to be the total variation distance between the distributions of Y and Z, etc. Quantitative asymptotics for the entries of Haar-distributed matrices As noted above, it is a consequence of Borel’s lemma that the individual entries of a random orthogonal matrix are approximately Gaussian for large matrices. Borel’s lemma has been strengthened considerably, as follows. Theorem 2.8 (Diaconis–Freedman ) Let X be a uniform random point on √nSn−1, for n ≥5, and let 1 ≤k ≤n −4. Then if Z is a standard Gaussian random vector in Rk, dTV (X1, . . . , Xk), Z ≤2(k + 3) n −k −3. That is, not only is an individual coordinate of a random point on the sphere close to Gaussian, but in fact the joint distribution of any k coordinates is close in total variation to k i.i.d. Gaussian random variables, if k = o(n). 
In the random matrix context, this implies that for k = o(n), one can approximate any k entries from the same row or column of U by independent Gaussian random variables. This led Persi Diaconis to raised the question: How many entries of U can be simultaneously approximated by independent normal ran-dom variables? The answer to this question of course depends on the sense of approximation. In the strongest sense, namely in total variation, the sharp answer was found independently by T. Jiang and Y. Ma and by K. Stewart , following earlier work of Diaconis–Eaton–Lauritzen and Jiang . 2.2 The density of a principal submatrix 47 Theorem 2.9 Let {Un} be a sequence of random orthogonal matrices with Un ∈O (n) for each n, and suppose that pnqn n→∞ − − − − →∞, with pnqn = o(n). Let Un(pn, qn) denote the top-left pn × qn block of Un, and let Z(pn, qn) denote a pn × qn random matrix of i.i.d. standard normal random variables. Then lim n→∞dTV( √nUn(pn, qn), Z(pn, qn)) = 0. That is, a pn × qn principal submatrix can be approximated in total variation by a Gaussian random matrix, as long as pnqn ≪n; in particular, this recovers the theorem of Diaconis and Freedman (without the explicit rate of conver-gence) when qn = 1. The theorem is sharp in the sense that if pn ∼x √n and qn ∼y √n for x, y > 0, then dTV( √nUn(pn, qn), Z(pn, qn)) does not tend to zero. If one relaxes the sense in which entries should be simulatenously approx-imable by i.i.d. Gaussian variables, one can approximate a larger collection of entries, as in the following theorem. Recall that a sequence of random variables {Xn} tends to zero in probability (denoted Xn P − − − − → n→∞0) if for all ϵ > 0, lim n→∞P [|Xn| > ϵ] = 0. Theorem 2.10 (Jiang ) For each n, let Yn = yi j n i, j=1 be an n × n matrix of independent standard Gaussian random variables and let Γn = γi j n i, j=1 be the matrix obtained from Yn by performing the Gram-Schmidt process; i.e., Γn is a random orthogonal matrix. Let ϵn(m) = max 1≤i≤n,1≤j≤m √nγi j −yi j . Then ϵn(mn) P − − − − → n→∞0 if and only if mn = o  n log(n)  . That is, in an “in probability” sense, as many as o  n2 log(n)  entries of U can be simultaneously approximated by independent Gaussians. Theorems 2.9 and 2.10 are the subject of Section 2.3 below. 2.2 The density of a principal submatrix The main result of this section, important in its own right, is also the main in-gredient in the proof of Theorem 2.9 on approximating the entries of a principal submatrix of a random orthogonal matrix by i.i.d. Gaussian random variables. 48 Distribution of the entries Theorem 2.11 Let U be an n × n random orthogonal matrix, and let Up,q denote the upper-left p × q block of U. For q ≤p and p + q ≤n, the random matrix Up,q has density with respect to Lebesgue measure, given by g(M) = w(n −p, q) (2π) pq 2 w(n, q) det  Iq −MT M  n−p−q−1 2 I0(MT M), (2.5) where w(·, ·) denotes the Wishart constant: ω(n, q)−1 = π q(q−1) 4 2 nq 2 q Y j=1 Γ n −j + 1 2 ! , and I0(A) is the indicator that A has q eigenvalues (counted with multiplicity) in (0, 1). The approach to the proof is via invariant theory. We first show that if Γp,q has the density given in (2.5), then UT p,qUp,q d = ΓT p,qΓp,q. (2.6) We then use the fact that M 7→MT M is a maximal invariant (to be defined below) under the action of O (p) to show that (2.6) implies that Up,q d = Γp,q. Carrying out this approach requires some background on some of the clas-sical random matrix ensembles. 
Let q ≤n and let Z be an n × q random matrix with entries distributed as independent standard Gaussian random variables. The q × q random matrix S := ZTZ is called a Wishart matrix with n degrees of freedom. The ma-trix S is positive definite with probability one, and its density (with respect to Lebesgue measure) on the set S + q of q×q real positive definite matrices is given by p(M) = ω(n, q) det(M) n−q−1 2 e−1 2 Tr(M). Now let q ≤min{n1, n2}, and let S 1 and S 2 be q × q Wishart matrices, with n1 and n2 degrees of freedom, respectively. Then S 1 + S 2 has rank q with probability one, and B := (S 1 + S 2)−1/2S 1(S 1 + S 2)−1/2 (2.7) is said to have the matrix-variate beta distribution B(n1, n2; Iq). Note in particular that if S 1 and S 2 are as in the definition above, then since Iq −(S 1 + S 2)−1/2S 1(S 1 + S 2)−1/2 = (S 1 + S 2)−1/2S 2(S 1 + S 2)−1/2, 2.2 The density of a principal submatrix 49 B has all its eigenvalues in (0, 1). The density (with respect to Lebesgue mea-sure) of B on the set Pq of q × q symmetric matrices with eigenvalues in (0, 1), is given by g(M) = w(n1, q)w(n2, q) w(n1 + n2, q) det(M)(n1−q−1)/2 det(Iq −M)(n2−q−1)/2. (2.8) Lemma 2.12 Let q ≤p and p + q ≤n, and let Up,q be the upper-left p × q block of a Haar-distributed U ∈O (n). Then Σp,q := UT p,qUp,q ∈Mq(R) has the matrix-variate beta distribution B(p, n −p; Iq). Proof It follows from the column-by-column construction of Haar measure that the first q columns of U form a random element in Fq,n :=                       | | U1 · · · Uq | |           : D Ui, U j E = δi j            , whose distribution is invariant under the action of O (n) by multiplication on the left. The random matrix Up,q in the statement of the lemma is then the first p rows of a random element in Fq,n. Now, the second Gaussian construction of Haar measure given in Section 1.2 can be generalized to produce a translation-invariant random element of Fq,n from a collection of Gaussian random variables, as follows. Let X be an n × q random matrix with i.i.d. standard Gaussian entries, and define e U := X(XT X)−1/2. The matrix X has rank q with probability one, and so e U is well-defined, and its distribution is easily seen to be invariant under the action of O (n) by left-multiplication. It therefore suffices to show that the distribution of the first p rows of e UT e U is the same as the distribution of B in (2.7), with n1 = p and n2 = n −p. To see this, decompose the matrix X as X = "Y Z # , where Y is the first p rows; then Up,q d = Y(YTY + ZTZ)−1/2, and so Σp,q d = (S 1 + S 2)−1/2S 1(S 1 + S 2)−1/2, 50 Distribution of the entries where S 1 := YTY and S 2 := ZTZ. The matrices S 1 and S 2 are q × q Wishart matrices, with p and n −p degrees of freedom, respectively, and so Σp,q = (S 1 + S 2)−1/2S 1(S 1 + S 2)−1/2 has a matrix-variate beta distribution. □ We next confirm that the density of UT p,qUp,q identified in Lemma 2.12 is as it should be; that is, if Γp,q has the claimed density of Up,q, then ΓT p,qΓp,q also has a matrix-variate beta distribution. Lemma 2.13 Suppose that Γp,q is a p × q random matrix with density g(M) = w(n −p, q) (2π) pq 2 w(n, q) det  Iq −MT M  n−p−q−1 2 I0(MT M) (2.9) with respect to Lebesgue measure, where I0(MT M) is the indicator that all of the eigenvalues of MT M lie in (0, 1). Then ΓT p,qΓp,q has the matrix-variate beta distibution B(p, n −p; Iq). Proof Let X be the set of p × q matrices M over R such that all of the eigen-values of MT M lie in (0, 1). 
The matrix ΓT p,qΓp,q has density h on S + q if and only if for all f : S + q →R, Z X f(MT M)g(M)dM = Z S + q f(A)h(A)dA. Define g∗: S + q →R by g∗(A) = ω(n −p, q) (2π) pq 2 ω(n, q) det(Iq −A) n−p−q−1 2 so that g(M) = g∗(MT M) for M ∈X. Writing f(A) = f1(A)ϕ(A) with ϕ(A) = (2π)−pq 2 exp  −1 2 Tr(A)  , we have that Z X f(MT M)g(M)dM = Z f1(MT M)g∗(MT M)I0(MT M)ϕ(MT M)dM, (2.10) where the integral is now over the space Mp,q(R) all real p × q matrices. Now, ϕ(MT M) is exactly the standard Gaussian density on Mp,q(R), so the right-hand side of (2.10) is simply E[ f1(S )g∗(S )I0(S )], where S is a q × q Wishart matrix with p degrees of freedom. That is, for all f1 : S + q →R, 2.2 The density of a principal submatrix 51 Z S + q f1(A)g∗(A)I0(A)p(A)dA = Z S + q f1(A)ϕ(A)h(A)dA, where p is the density of S . It follows that h(A) = g∗(A)I0(A)p(A) ϕ(A) = w(p, q)w(n −p, q) w(n, q) det(A)(p−q−1)/2 det(Iq −A)(n−p−q−1)/2I0(A), which is exactly the density of the matrix-variate beta distribution B(p, n − p; Iq). □ Finally, we show that UT p,qUp,q d = ΓT p,qΓp,q implies that Up,q d = Γp,q. To do this, we need the following concept. Definition Let f : X →Y and suppose that a group G acts on X. The function f is invariant under the action of G if f(x) = f(g · x) for all g ∈G. The function f is a maximal invariant if whenever f(x1) = f(x2), there is a g ∈G such that x1 = g · x2. Lemma 2.14 Let q ≤p. The function M 7→MT M is a maximal invariant on p × q matrices of rank q, under the action of O (p) by left-multiplication. Proof Clearly, M 7→MT M is invariant under the action of O (p). Suppose, then, that MT 1 M1 = MT 2 M2. Then D MT 1 M1v, w E = D MT 2 M2v, w E for all v, w ∈ Rq; that is, ⟨M1v, M1w⟩= ⟨M2v, M2w⟩ for all v, w ∈Rq. It follows that if (v1, . . . , vk) are such that (M1v1, . . . , M1vk) is an orthonormal basis of {M1v : v ∈Rq}, then (M2v1, . . . , M2vk) is an orthonormal basis of {M2v : v ∈Rq}. Since M1 has rank q, it acts as an injective map Rq →Rp, and so there is a well-defined map U : {M1v : v ∈Rq} →{M2v : v ∈Rq} with U(M1v) = M2v for all v ∈Rq. Extend U to a map e U on all of Rp such that e U sends an orthonormal basis to an orthonormal basis; then e U ∈O (p) and e UM1 = M2. □ Observe that if τ : X →Y is a maximal invariant under the action of G on X, and if f : X →Z is any G-invariant function, then there is a function f ∗: τ(X) →Z such that f(x) = f ∗(τ(x)) for all x ∈X. Indeed, if y ∈τ(X), then y = τ(x) for some x. Since τ is a maximal invariant, if y = τ(x′) also, then 52 Distribution of the entries x = gx′, and so f(x) = f(x′) because f is G-invariant. Taking f ∗(y) = f(x) thus produces a well-defined function f ∗. Our interest in maximal invariants lies in the following lemma. Proposition 2.15 Suppose the compact group G acts measurably on S , and let τ : S →S ′ be a maximal invariant. Suppose that X1, X2 are random vari-ables in S , with G-invariant distributions. Suppose further that τ(X1) d = τ(X2). Then X1 d = X2. Proof Let f : S →R be bounded and measurable, and let g be distributed according to Haar measure on G. It follows by the translation invariance of Haar measure that the function x 7→E[ f(g · x)] is G-invariant, so as discussed above, there is a function f ∗: S ′ →R such that E[ f(g · x)] = f ∗(τ(x)) for each x ∈S . If g is taken to be independent of X1 and X2, then by G-invariance and Fubini’s theorem, E[ f(X1)] = E[ f(g · X1)] = E[ f ∗(τ(X1))] = E[ f ∗(τ(X2))] = E[f(g · X2)] = E[ f(X2)]. That is, X1 d = X2. 
□ Finally, if Γp,q has the density claimed in Theorem 2.11 for Up,q, then the distribution of Γp,q is trivially seen to be O (p)-invariant, and of course the distribution of Up,q is O (p)-invariant. It thus follows from Lemmas 2.12, 2.13 and 2.14 and Proposition 2.15 that Up,q d = Γp,q. 2.3 How much is a Haar matrix like a Gaussian matrix? In total variation Rather than proving the full version of Theorem 2.9, we will prove the im-portant special case of square principal submatrices. The basic ideas of the proof are the same, but the assumption that the submatrix is square, so that pn = o( √n), results in considerable technical simplification. Theorem 2.16 Let {Un} be a sequence of Haar-distributed random orthog-onal matrices with Un ∈O (n) for each n, and suppose that pn n→∞ − − − − →∞, with pn = o( √n). Let Un(pn) denote the top-left pn × pn block of Un, and let Zpn 2.3 How much is a Haar matrix like a Gaussian matrix? 53 be a pn × pn random matrix whose entries are i.i.d. standard normal random variables. Then lim n→∞dTV( √nUn(pn), Zpn) = 0. The essential idea of the proof is the following. Recall that if random vectors X and Y have densities f and g (respectively) with respect to a sigma-finite measure λ, then dTV(X, Y) = 1 2 Z | f −g|dλ. Let gn,p denote the density of √nUn(p), and let ϕp denote the density of Zp. Letting λ denote Lebesgue measure on Mp(R), dTV( √nUn(p), Zp) = 1 2 Z Mp(R) |gn,p −ϕp|dλ = 1 2 Z Mp(R) gn,p ϕp −1 ϕpdλ = 1 2E gn,p(Zp) ϕp(Zp) −1 . Showing that dTV( √nUn(pn), Zpn) →0 as n →∞is thus equivalent to showing that the random variable gn,pn(Zpn) ϕpn(Zpn) tends to 1 in expectation. To simplify the notation, write p = pn and Zpn = Z. From Theorem 2.11 and a change of variables, gn,p(Z) = w(n −p, p) (2πn) p2 2 w(n, p) det Ip −1 nZTZ ! n−2p−1 2 I0 1 nZTZ ! , with ω(n, p) =         π p(p−1) 4 2 np 2 p Y j=1 Γ n −j + 1 2 !         −1 and I0  1 nZTZ  the indicator that the eigenvalues of 1 nZTZ lie in (0, 1). The den-sity ϕp is given by ϕp(Z) = 1 (2π)p2/2 exp  −1 2 Tr(ZTZ)  . If 0 < λ1 < · · · < λp are the eigenvalues of ZTZ (which are indeed strictly positive and distinct with probability one when Z is a matrix of i.i.d. Gaussian variables), the densities above can be rewritten as gn,p(Z) = w(n −p, p) (2πn) p2 2 w(n, p) p Y j=1 1 −λ j n ! n−2p−1 2 1(0,n)(λp) 54 Distribution of the entries and ϕp(Z) = 1 (2π) p2 2 exp        −1 2 p X j=1 λj        , so that gn,p(Z) ϕp(Z) = w(n −p, p) n p2 2 w(n, p) exp          p X j=1 "λj 2 + n −2p −1 2 log 1 −λj n !#        1(0,n)(λp). We first investigate the asymptotics of the coefficient. Lemma 2.17 If p = o( √n), then w(n −p, p) n p2 2 w(n, p) = exp ( −p3 2n + o(1) ) . Proof First suppose that p is even. From the definition of w(n, p), w(n −p, p) n p2 2 w(n, p) = 2 n ! p2 2 p Y j=1 Γ  n−j+1 2  Γ  n−p−j+1 2 . Now, Γ  n−j+1 2  =  n−j−1 2   n−j−3 2  · · ·  n−j−(p−1) 2  Γ  n−j−(p−1) 2  = n 2  p 2 Γ  n−j−(p−1) 2  p 2 Y ℓ=1  1 −j+2ℓ−1 n  , and so w(n −p, p) n p2 2 w(n, p) = p Y j=1 p 2 Y ℓ=1  1 −j+2ℓ−1 n  = exp          p−1 X j=0 p 2 X ℓ=1 log  1 −j+2ℓ n          . For n large enough, j+2ℓ n ≤1 2, and so log  1 −j+2ℓ n  + j+2ℓ n ≤  j+2ℓ n 2 , and p−1 X j=0 p 2 X ℓ=1  j+2ℓ n 2 = 1 n2 " 7 12 p4 + 1 2 p3 −1 12 p2 # = o(1), since p = o( √n). 2.3 How much is a Haar matrix like a Gaussian matrix? 55 That is, w(n −p, p) n p2 2 w(n, p) = exp         − p−1 X j=0 p 2 X ℓ=1 j+2ℓ n + o(1)         = exp ( −p3 2n + o(1) ) . 
When p is odd, the proof is essentially the same but requires a small tweak; the neat cancellation in the ratio of gamma functions above required p to be even. It follows from Stirling’s formula, though, that Γ  n+1 2  = p n 2Γ  n 2   1 + O  1 n  , so that when p is odd, p Y j=1 Γ  n−j+1 2  Γ  n−p−j+1 2  = p Y j=1 q n−j 2 Γ  n−j 2   1 + O  1 n  Γ  n−p−j+1 2  . The proof now proceeds along the same lines as before. □ The bulk of the proof is of course to analyze the random variable Ln := exp          p X j=1 "λ j 2 + n −2p −1 2 log 1 −λj n !#        1(0,n)(λp). Proposition 2.18 For Ln defined as above, e −p3 2n Ln converges to 1 in proba-bility, as n tends to infinity. Proof We will in fact prove the equivalent statement that −p3 2n + log(Ln) tends to zero in probability. For x ∈(0, n), let f(x) = x 2 + n−2p−1 2 log  1 −x n  . Then by Taylor’s theorem, there is some ξ ∈(0, x) such that f(x) =  2p+1 2n  x −  n−2p−1 4n2  x2 −  n−2p−1 6(n−ξ)3  x3. Now, it is known that the largest eigenvalue of ZTZ is of order p with high probability; formally, for t ≥0, P        λp ≥p 1 + r p n + t !2       ≤e−pt2 2 (see, e.g., .) Let Ωp := ( Z : λp ≥p  2 + q p n 2) ; for Z ∈Ωc p, and ξ ∈(0, λi) for some i, n −2p −1 6(n −ξ)3 ! ≤ n −2p −1 6(n −9p)3 ! ≤1 n2 , 56 Distribution of the entries for n large enough. We thus have that for Z ∈Ωc p, p X j=1 f(λ j) = p X j=1  2p+1 2n  λj −  n−2p−1 4n2  λ2 j −  n−2p−1 6(n−ξ(λj))3  λ3 j  =  2p+1 2n  Tr(ZTZ) −  n−2p−1 4n2  Tr((ZTZ)2) + E Tr((ZTZ)3), where the random variable E has 0 ≤E ≤1 n2 . Now, the means and variances of Tr((ZTZ))k) are known; see, e.g., . In particular, for Z a p × p matrix of i.i.d. standard Gaussian random variables, E Tr((ZTZ))k) = pk+1 k 2k k + 1 ! + O(pk) and Var Tr((ZTZ))k) = O(p2k), as p tends to infinity. In particular, −p3 2n + p X j=1 f(λj) =  2p+1 2n  n Tr(ZTZ) −E Tr(ZTZ) o −  n−2p−1 4n2  n Tr((ZTZ)2) −E Tr((ZTZ)2) o + E Tr((ZTZ)3) + p2 2n + p3(p + 1) n2 . By Chebychev’s inequality, P " 2p + 1 2n ! Tr(ZTZ) −E Tr(ZTZ) > ϵ # ≤ (2p + 1)2 Var  Tr(ZTZ)  4n2ϵ2 = O p4 n2 ! = o(1). Similarly, P " n −2p −1 4n2 ! Tr((ZTZ)2) −E Tr((ZTZ)2) > ϵ # ≤ (n −2p −1)2 Var  Tr((ZTZ)2)  16n4ϵ2 = O p4 n2 ! = o(1), and P h |E Tr((ZTZ)3)| > ϵ i ≤P h | Tr((ZTZ)3)| > n2ϵ i ≤ Var  Tr((ZTZ)3)  n4ϵ2 −(E Tr((ZTZ)3))2 = O p6 n4 ! = o(1). 2.3 How much is a Haar matrix like a Gaussian matrix? 57 It follows that P          −p3 2n + p X j=1 f(λj) > ϵ          ≤P[Ωp] + P  2p+1 2n  n Tr(ZTZ) −E Tr(ZTZ) o > ϵ 3  + P  n−2p−1 4n2  n Tr((ZTZ)2) −E Tr((ZTZ)2) o > ϵ 3  + P  |E Tr((ZTZ)3)| > ϵ 3  , which tends to zero as n →∞. □ To summarize: dTV( √nUn(p), Zp) →0 as n →∞if and only if the random variable Rn := gn,p(Z) ϕp(Z) tends to 1 in expectation; we have shown that Rn tends to 1 in probability. Note that in fact ERn = R gn,p(z) ϕp(z) ϕp(z)dz = 1 for every n. Now, E|Rn −1| = E |Rn −1|1|Rn−1|≥δ + |Rn −1|1|Rn−1|<δ ≤δ + E |Rn −1|1|Rn−1|≥δ ≤δ + E (Rn + 1)1|Rn−1|≥δ , and so it suffices to show that for δ > 0 fixed, E Rn1|Rn−1|≥δ →0 as n →∞. Suppose not; i.e., that there is a subsequence Rnk such that E h Rnk1|Rnk −1|≥δ i ≥ ϵ > 0 for all k. Since Rnk does converge to 1 in probability, there is a further subsequence Rnk(i) which converges to 1 almost surely. But since ERnk(i) = 1, E h Rnk(i)1|Rnk(i)−1|≥δ i = 1 −E h Rnk(i)11−δ<Rnk(i)<1+δ i , which tends to 0 by the dominated convergence theorem, in contradiction to the assumption. We may thus conclude that E|Rn −1| →0 as n →∞, completing the proof of Theorem 2.16. 
In probability As discussed in Section 2.1, if the notion of approximation is relaxed from the very strong total variation distance all the way to an in-probability type of approximation, it is possible to approximate many more entries of a random orthogonal matrix by i.i.d. Gaussian random variables. The basic idea is to exploit the Gauss–Gram–Schmidt construction of Haar measure described in Chapter 1, and to show that, in the sense of Theorem 2.10, the Gram–Schmidt process does not change the distribution of the entries of the Gaussian random matrix very much. This is intuitively quite reasonable: when performing the Gram–Schmidt process on the k + 1st column of the random matrix, the first 58 Distribution of the entries step is to subtract the projection of that column onto the span of the first k columns. The original column is a Gaussian random vector whose length is typically about √n, and whose projection onto a k-dimensional subspace typ-ically has length about √ k, so that if k is not too close to n, the subtraction makes little difference. The next step is to normalize the column; since the length of a Gaussian vector is typically quite close to its mean, this normaliza-tion should not be too different from just dividing by the deterministic quantity √n. In this section, we will give the proof of the “if” part of Theorem 2.10 only; that is, we will show that it is possible to approximate the entries of the first o  n log(n)  columns of a random orthogonal matrix, in probability, by indepen-dent Gaussians. Recall the setting of Theorem 2.10: Yn = [yi j]n i, j=1 is a matrix of i.i.d. stan-dard Gaussian random variables and Γn = [γi j]n i, j=1 is the matrix obtained by performing the Gram–Schmidt process on Yn. The random variable ϵn(m) is defined by ϵn(m) = max 1≤i≤n 1≤j≤m √nγi j −yi j , and Theorem 2.10 is the statement that ϵn(m) tends to zero in probability if and only if m = o  n log(n)  . The bulk of the proof of Theorem 2.10 is contained in the following tail inequality for ϵn(m). Proposition 2.19 Suppose that r ∈  0, 1 4  , s, t > 0, and m ≤nr 2 . Then P [ϵn(mn) ≥r(s + t) + t] ≤2me−nr2 2 + nm s r 2 πe−s2 2 + mneπ e 2 −m 2 + mneπe−nt2 8m . Assuming the Proposition, let t > 0 be fixed and take r = 1 log(n) s = (log(n))3/4 ˜ mn = & δn log(n) ' , with δ = min n 1, t2 24 o . Then for n large enough, r(s + t) + t < 2t, and so for any mn = o  n log(n)  , P [ϵn(mn) ≥2t] ≤P [ϵn( ˜ mn) ≥2t] ≤4 n log(n) ! e − n 2(log(n))2 + 2n2 log(n)e−(log(n)) 3 2 2 + 2eπn2 log(n) e 2 − δn 2 log(n) + eπn2 log(n)e−3 log(n), 2.3 How much is a Haar matrix like a Gaussian matrix? 59 which tends to zero as n tends to infinity. Proof of Proposition 2.19 We begin by introducing a bit more notation. Let y j denote the jth column of Yn, and let γ j denote the jth column of Γn. Given γ1, . . . , γj−1, let wj := yj −∆j ∆j := j−1 X k=1 γkγT k yj, so that γ j = wj ∥wj∥. For convenience, take ∆1 = 0. Finally, let Lj := √n wj −1 . Now observe that ϵn(m) = max 1≤j≤m √nγ j −yj ∞ = max 1≤j≤m √nwj wj −y j ∞ = max 1≤j≤m        √n wj −1       (yj −∆j) −∆j ∞ ≤ max 1≤j≤m Lj ! max 1≤j≤m yj ∞+ max 1≤j≤m ∆j ∞ ! + max 1≤j≤m ∆j ∞. It follows that P [ϵn(m) ≥r(s + t) + t] ≤P " max 1≤j≤m Lj ≥r # + P " max 1≤j≤m yj ∞≥s # + P " max 1≤j≤m ∆j ∞≥t # . (2.11) To estimate the first term, P " max 1≤j≤m Lj ≥r # ≤m max 1≤j≤m P h Lj ≥r i = m max 1≤j≤m P        √n wj −1 ≥r        = m max 1≤j≤m       P        √n w j ≤1 −r       + P        √n wj ≥1 + r              . 
60 Distribution of the entries On  0, 1 4  , 1 (1−r)2 ≥1 + 2r, and so for each j, P        √n wj ≤1 −r       = P         wj 2 n ≥ 1 (1 −r)2        ≤P         wj 2 n ≥1 + 2r        . Since wj is a projection of the standard Gaussian vector y j, wj ≤y j, and so P         wj 2 n ≥1 + 2r        ≤P         y j 2 n ≥1 + 2r         = P h y2 1 j + · · · + y2 n j ≥n(1 + 2r) i ≤e−λn(1+2r)  E[eλZ2] n for any λ > 0, where Z is a standard Gaussian random variable. The final quantity on the right is given by  E[eλZ2] n = 1 (1 −2λ) n 2 . Taking λ = 1 2 − 1 2(2r+1) then gives that P         wj 2 n ≥1 + 2r        ≤exp ( −n r −1 2 −log 1 + 2r 2 !!) ≤e nr2 2 for r ∈  0, 1 4  . Consider next P        √n wj ≥1 + r       = P         w j 2 n ≤ 1 (1 + r)2        ≤P         wj 2 n ≤1 −r        , since 1 (1+r)2 ≤1 −r for r ∈  0, 1 4  . Recall that wj is the orthogonal projection of yj onto D γ1, . . . γ j−1 E⊥; con-ditional on (y1, . . . , y j−1), w j is a standard Gaussian random vector in the (n −j + 1)-dimensional subspace D γ1, . . . γ j−1 E⊥⊆Rn. It follows that wj 2 d = Z2 1 + · · · + Z2 n−j+1, where the Zi are i.i.d. standard Gaussian random variables. Now proceeding 2.3 How much is a Haar matrix like a Gaussian matrix? 61 similarly to the argument above, for 1 ≤j ≤m, and any λ > 0, P         wj 2 n ≤1 −r        = P h Z2 1 + · · · + Z2 n−j+1 ≤n(1 −r) i ≤P h Z2 1 + · · · + Z2 n−m ≤n(1 −r) i = eλn(1−r)  E[e−λZ2] n−m = eλn(1−r) 1 √ 1 + 2λ !n Taking λ = n−m 2n(1−r) −1 2 gives that P         wj 2 n ≤1 −r        ≤exp (nr 2 −m 2 + n −m 2 log n −m n(1 −r) !) , and for r ∈  0, 1 4  and m ≤nr 2 , this last expression is bounded by e−nr2 2 as well. Together then, we have that P " max 1≤j≤m L j ≥r # ≤2me−nr2 2 . (2.12) The second term in (2.11) is almost trivial: since the yj are i.i.d. Gaussian vectors in Rn, if Z is a standard Gaussian random variable, then P " max 1≤j≤m yj ∞≥s # ≤nmP [|Z| ≥s] ≤nm s r 2 πe−s2 2 . (2.13) Finally, consider the random variable ∆j = Pj−1y j, where P j = P j−1 k=1 γkγT k is the matrix of orthogonal projection onto D γ1, . . . , γ j−1 E . Since P j depends only on y1, . . . , yj−1, Pj and yj are independent. Moreover, by the Gauss–Gram– Schmidt construction of Haar measure, (γ1, . . . , γ j−1) are distributed as the first j −1 columns of a Haar-distributed random matrix, and so Pj d = U(Ij−1 ⊕On−j+1)UT, where U is distributed according to Haar measure on O (n) and is indepen-dent of yj. Using the independence together with the rotation invariance of the 62 Distribution of the entries distribution of yj, it follows that Pjyj d = U                                 Z1 . . . Zj−1 0 . . . 0                                 =: Uz j, where Z1, . . . , Zj−1 are i.i.d. Gaussian random variables, independent of U. Conditioning now on the Zi, let Ri ∈O (n) be such that Rizj = (Z2 1+· · ·+Z2 j−1)e1; it follows by the rotational invariance of Haar measure that, conditional on the Zi, Uz j d = URizj = (Z2 1 + · · · + Z2 j−1)u1. It thus follows that for each j ∈{2, . . . , m}, P h ∆j ∞≥t i = P             ∥θ∥∞≥ t q Z2 1 + · · · + Z2 j−1             , where θ is uniformly distributed on the unit sphere Sn−1 ⊆Rn. L´ evy’s lemma (see Section 5.1) gives that for a single coordinate θk of a uniform random vector on Sn−1, P[|θk| > t] ≤eπ−nt2 4 . 
Conditioning on the Zi, L´ evy’s lemma thus gives that P " max 1≤j≤m ∆j ≥t # ≤nmE             P             |θ1| ≥ t q Z2 1 + · · · + Z2 m Z1, . . . , Zm                         ≤nmE       exp       π − nt2 4 Pm k=1 Z2 k              . Estimating as above, P        m X k=1 Z2 k ≥x0       ≤exp (m 2 −x0 2 −m 2 log m x0 !) for x0 > m, and so E       exp       − nt2 4 Pm k=1 Z2 k              ≤e−nt2 4x0 + exp (m 2 −x0 2 −m 2 log m x0 !) , 2.4 Arbitrary projections 63 and choosing x0 = 2m gives that nmeπE       exp       − t2 4 Pm k=1 Z2 k              ≤nmeπ−nt2 8m + nm exp  π −m 2 1 −log(2) . This completes the proof of the Proposition. □ 2.4 Arbitrary projections A deficiency of Theorem 2.9 is that it applies only to entries in a principal submatrix of U. Thus one may conclude that o(n) entries of U can be simul-taneously approximated in total variation by i.i.d. Gaussian random variables, if those entries are those of a p × q principal submatrix with pq = o(n); the original question assumed no such restriction, and indeed, the fact that Haar measure is invariant under multiplication by any orthogonal matrix suggests that this restriction is too strong. The following result overcomes this difficulty, but in the weaker L1 Kantorovich metric. Theorem 2.20 (Chatterjee–Meckes) Let U ∈O (n) be distributed accord-ing to Haar measure, and let A1, . . . , Ak be n × n matrices over R satisfy-ing Tr(AiAT j ) = nδij; that is,  1 √nAi  1≤i≤k is orthonormal with respect to the Hilbert–Schmidt inner product. Define the random vector X by X := (Tr(A1U), Tr(A2U), . . . , Tr(AkU)) in Rk, and let Z = (Z1, . . . , Zk) be a random vector whose components are independent standard normal random variables. Then for n ≥2, W1(X, Z) ≤ √ 2k n −1. In particular, if Eij denotes the matrix with 1 as the i- jth entry and zeroes elsewhere, then choosing the Aℓto be { √nEi j} for some collection of pairs (i, j) gives that any collection of o(n) entries of U can be simultaneously approxi-mated (in W1) by i.i.d. Gaussians. However, the theorem is more general: it may be that all of the entries of U appear in Tr(AiU), for some or all i. Indeed, the general form of the vector X above is that of a projection of a random ele-ment of O (n) onto a subspace of Mn(R) of rank k. A Gaussian distribution on Mn(R) has the property that all of its projections onto lower-dimensional sub-spaces are also Gaussian; the theorem above can thus be seen as a coordinate-64 Distribution of the entries free comparison between the Haar measure on O (n) and standard Gaussian measure on Mn(R). The proof of Theorem 2.20 makes use of the following framework for prov-ing multivariate central limit theorems, which is a version of Stein’s method of exchangeable pairs. For a proof of the theorem and further discussion, see . Theorem 2.21 Let X be a random vector in Rk and for each ϵ > 0 let Xϵ be a random vector with X d = Xϵ, such that lim ϵ→0 Xϵ = X almost surely. Let Z be a standard normal random vector in Rk. Suppose there are deterministic λ(ϵ) and σ2 > 0, and a random matrix F such that the fol-lowing conditions hold. 1. 1 λ(ϵ)E h (Xϵ −X) X i L1 − − − → ϵ→0 −X. 2. 1 2λ(ϵ)E h (Xϵ −X)(Xϵ −X)T|X i L1 − − − → ϵ→0 σ2Ik + E h F X i . 3. For each ρ > 0, lim ϵ→0 1 λ(ϵ)E  Xϵ −X 21{|Xϵ−X|2>ρ}  = 0. Then W1(X, σZ) ≤1 σE∥F∥H.S. (2.14) The idea of the theorem is the following. 
Suppose that the random vector X has “continuous symmetries” which allow one to make a small (parametrized by ϵ) random change to X which preserves its distribution. If X were exactly Gaussian and this small random change could be made so that (X, Xϵ) were jointly Gaussian, then we would have (for some parametrization of the size of the change) that Xϵ d = √ 1 −ϵ2X + ϵY for X and Y independent. The conditions of the theorem are approximate versions of what happens, up to third order, in this jointly Gaussian case. The other technical tool needed for the proof of Theorem 2.20 is the fol-lowing lemma, which gives formulae for the fourth-order mixed moments of entries of a random orthogonal matrix. The proof uses the same ideas as those in Example 2.1, and is a good exercise in symmetry exploitation and tedious calculation. 2.4 Arbitrary projections 65 Lemma 2.22 If U = h uij in i, j=1 is an orthogonal matrix distributed according to Haar measure, then E hQ u kij i j i is non-zero if and only if the number of en-tries from each row and from each column is even. The fourth-degree mixed moments are as follows: for all i, j, r, s, α, β, λ, µ, Euijursuαβuλµ = − 1 (n −1)n(n + 2) h δirδαλδjβδsµ + δirδαλδ jµδsβ + δiαδrλδjsδβµ + δiαδrλδ jµδβs + δiλδrαδ jsδβµ + δiλδrαδjβδsµ i + n + 1 (n −1)n(n + 2) h δirδαλδjsδβµ + δiαδrλδ jβδsµ + δiλδrαδ jµδsβ i . (2.15) Proof of Theorem 2.20 We begin by constructing an exchangeable pair (U, Uϵ) of random orthogonal matrices. Let U be a Haar-distributed element of O (n), and let Aϵ be the rotation Aϵ =       √ 1 −ϵ2 ϵ −ϵ √ 1 −ϵ2      ⊕In−2 = In +      −ϵ2 2 + δ ϵ −ϵ −ϵ2 2 + δ      ⊕0n−2, where δ = O(ϵ4). Let V be Haar-distributed in O (n), independent of U, and define Uϵ = VAϵVTU. That is, Uϵ is a translation of U within O (n) by a rotation of size arcsin(ϵ) in a random two-dimensional subspace of Rk, and in particular, it follows from the translation-invariance of Haar measure that Uϵ d = U. Finally, let Xϵ = (Tr(A1Uϵ), . . . , Tr(AkUϵ)). Let K denote the first two columns of V and C2 = " 0 1 −1 0 # . Then Uϵ −U = " −ϵ2 2 + O(ϵ4) ! KKT + ϵQ # U, (2.16) where Q = KC2KT. The entries of the matrices KKT and Q are (KKT)ij = ui1uj1 + ui2uj2 (Q)i j = ui1uj2 −ui2uj1. 66 Distribution of the entries It then follows from Example 2.1 and Lemma 2.22 that EKKT = 2 nIn EQ = 0, thus lim ϵ→0 n ϵ2 E(Xϵ −X)i U = lim ϵ→0 n ϵ2 E h Tr[Ai(Uϵ −U)] U i = lim ϵ→0 n ϵ2 " −ϵ2 2 + O(ϵ4) ! E h Tr(AiKKTU) U i + ϵE h Tr(AiQU) U i# = −Xi. Condition 1. of Theorem 2.21 is thus satisfied with λ(ϵ) = ϵ2 n . Condition 3. is immediate from the fact that |Xϵ−X|2 λ(ϵ) is bounded independent of ϵ and Xϵ converges pointwise to X. The random matrix F of condition 2. is computed as follows. For notational convenience, write Ai = A = (apq), A j = B = (bαβ), and U = (ui j). By (2.16), lim ϵ→0 n 2ϵ2 E h (Xϵ −X)i(Xϵ −X) j U i = n 2E h Tr(AQU) Tr(BQU) U i = n 2E          X p,q,r,α,β,γ apqbαβurpuγαqqrqβγ U          = n 2E         X p,q,r,α,βγ apqbαβurpuγα 2 n(n −1) ! (δqβδrγ −δqγδrβ)         = 1 (n −1)E ⟨UA, UB⟩H.S. −Tr(AUBU) = 1 (n −1)E ⟨A, B⟩H.S. −Tr(UAUB) = 1 (n −1) h nδij −Tr(UAUB) i . (2.17) Thus F = 1 (n −1)E h δi j −Tr(AiUA jU) ik i,j=1 X  . Claim: If n ≥2, then E h Tr(AiUA jU) −δi j i2 ≤2 for all i and j. 2.4 Arbitrary projections 67 With the claim, for n ≥2, E∥F∥H.S. ≤ q E∥F∥2 H.S. ≤ √ 2k n −1, thus completing the proof. To prove the claim, first observe that Lemma 2.22 implies E Tr(AiUA jU) = 1 n D Ai, Aj E = δi j. 
Again writing Ai = A and A j = B, applying Lemma 2.22 yields E h Tr(AUBU) i2 = E              X p,q,r,s α,β,µ,λ aspaµαbqrbβλupqursuαβuλµ              = − 2 (n −1)n(n + 2) h Tr(AT ABT B) + Tr(ABT ABT) + Tr(AAT BBT) i + n + 1 (n −1)n(n + 2) h 2 ⟨A, B⟩H.S. + ∥A∥2 H.S.∥B∥2 H.S. i . Since the Hilbert–Schmidt norm is submultiplicative, Tr(AT ABT B) ≤∥AT A∥H.S.∥BT B∥H.S. ≤∥A∥2 H.S.∥B∥2 H.S. = n2, and the other two summands of the first line are bounded by n2 in the same way. Also, 2 ⟨A, B⟩H.S. + ∥A∥2 H.S.∥B∥2 H.S. = n2(1 + 2δi j), Thus E h Tr(AiUA jU) −δij i2 ≤−6n2 + (n + 1)n2(1 + 2δi j) −(n −1)n(n + 2)δi j (n −1)n(n + 2) ≤2. □ The following is the analog of Theorem 2.20 for the unitary group; the proof is essentially the same, using the unitary analog of Lemma 2.22 Theorem 2.23 (Chatterjee–Meckes) Let U ∈U (n) be distributed accord-ing to Haar measure, and let A1, . . . , Ak be n × n matrices over C satisfying Tr(AiA∗ j) = nδij. Define the random vector X by X := (Tr(A1U), Tr(A2U), . . . , Tr(AkU)) in Rk, and let Z = (Z1, . . . , Zk) be a random vector whose components are 68 Distribution of the entries independent standard complex normal random variables. There is a constant c, independent of n, such that W1(X, Z) ≤ck n . Remark: The constant is asymptotically given by √ 2; for n ≥4, it can be taken to be 3. Notes and References The paper gives an extensive history of Borel’s lemma. For thorough dis-cussions of metrics on probability measures, their relationships, and terminol-ogy, see the books by Dudley or Villani . Section 2.2 follows the derivation of the density of a submatrix given in Eaton . For more on Wishart matrices, see Muirhead’s book , and for notation, alternate definitions, and generalizations of the matrix-variate beta distribution, see . The univariate version of Theorem 2.20, namely a central limit theorem for Tr(AU) where A is a fixed matrix and U is Haar-distributed on O (n), was first proved by d’Aristotile–Diaconis–Newman as a step in proving the following. Theorem 2.24 (d’Aristotile–Diaconis–Newman) Let U be a Haar-distributed matrix in O (n). Let {β1, . . . , βkn} be a subset of the entries of U, ordered lexi-cographically. For ℓ∈{1, . . . , kn} and t ∈[0, 1], let S (n) ℓ = r n kn ℓ X j=1 β j Xn(t) = S (n) [knt]. If kn ↗∞, then Xn =⇒W, a standard Brownian motion, as n →∞. Other approaches to the univariate case appeared in and . An al-ternative approach to weak convergence in the multivariate case was given in . The idea of using Stein’s method together with infinitesimal random rota-tions was first used by Stein in to get fast rates of convergence to a Gaus-sian distribution for Tr(Uk), for k ∈N fixed and U distributed according to Haar measure on O (n). Slightly better bounds were obtained simultaneously by Johansson , and so Stein’s argument remained a hidden gem for sev-eral years (it can now be found online in the Stanford statistics department’s repository of technical reports). 2.4 Arbitrary projections 69 Lemma 2.22 gives a formula for computing mixed moments up to order four of entries of a random orthogonal matrix by exploiting the symmetries of Haar measure; the analog for the unitary group can be found in . A systematic approach called the Weingarten calculus for computing moments of all orders was developed by B. Collins in the unitary case and extended to all the classical compact groups by Collins and ´ Sniady . 
This approach makes heavy use of the representation theory of the classical groups, in particular ex-ploiting Schur–Weyl duality and its analogs, and was the basis for the approach to weak convergence of Rk-valued linear functions on the groups given in . 3 Eigenvalue distributions: exact formulas 3.1 The Weyl integration formula Suppose U is a Haar-distributed random matrix. Then U has eigenvalues, all of which lie on the unit circle S1 ⊆C. Since U is random, its set of eigenvalues is a random point process; that is, it is a collection of n random points on S1. The eigenvalue process of a Haar-distributed random matrix has many remarkable properties, the first of which is that there is an explicit formula (due to H. Weyl) for its density. The situation is simplest for random unitary matrices. Theorem 3.1 (Weyl integration formula on U (n)) The unordered eigenvalues of an n × n random unitary matrix have eigenvalue density 1 n!(2π)n Y 1≤j<k≤n |eiθj −eiθk|2, with respect to dθ1 · · · dθn on [0, 2π)n. That is, for any g : U (n) →R with g(U) = g(VUV∗) for any U, V ∈U (n) , (i.e., g is a class function), if U is Haar-distributed on U (n), then Eg(U) = 1 n!(2π)n Z [0,2π)n ˜ g(θ1, . . . , θn) Y 1≤j<k≤n |eiθj −eiθk|2dθ1 · · · dθn, where ˜ g : [0, 2π)n →R is the (necessarily symmetric) expression of g(U) as a function of the eigenvalues of U. The proof of the Weyl integration formula makes heavy use of the Lie group structure of U (n). In this section, we attempt to give a reasonably accessible treatment of the integration formula, stating some background results without proof and glossing over some details; see the end-of-chapter notes for further references. 70 3.1 The Weyl integration formula 71 As a preliminary, we state the following Fubini-like theorem in the Lie group context. Recall that if G is a group and H ⊆G is a subgroup, then the quotient G/H is the set of all cosets gH, where g ∈G. If G is a locally compact Lie group and H is a closed subgroup, then G/H can be endowed with the quotient topology, which makes G/H into a locally compact Hausdorffspace. Recall also from Section 1.1 that every compact Lie group has a Haar measure; that is, a (unique) probability measure invariant under both left and right translations. Proposition 3.2 Let G be a compact Lie group and let H be a closed sub-group; let µG and µH denote the Haar measures on G and H, respectively. There exists a regular Borel measure µG/H on G/H which is invariant under left-translation by elements of G, and which may be normalized such that for any continuous ϕ : G →R, Z G ϕ(g)dµG(g) = Z G/H Z H ϕ(gh)dµH(h)dµG/H(gH). (3.1) Observe that we have implicitly used the fact that gH 7→ R H ϕ(gh)dµH(h) is well-defined, which follows from the translation invariance of µH: for any ˜ h ∈H, Z H ϕ(g˜ hh)dµH(h) = Z H ϕ(gh)dµH(h). Corollary 3.3 Let G and H be as above, and suppose that ϕ : G →R is constant on cosets; i.e., ϕ(g1) = ϕ(g2) for all g1, g2 such that g1 = g2h for some h ∈H. Then Z G ϕ(g)dµG(g) = Z G/H ϕ(g)dµG/H(gH), where the integrand ϕ(g) on the right-hand side is the common value of ϕ on the coset gH. Proof For all h ∈H, ϕ(gh) = ϕ(g) since ϕ is constant on cosets, and so the inner integrand on the right-hand side of (3.1) is constant, and µH was chosen to be a probability measure. □ The central idea of the proof of the Weyl integration formula is the follow-ing. Lemma 3.4 Let T ⊆U (n) denote the diagonal elements of U (n), and let T′ ⊆T denote those elements of T with distinct diagonal entries. 
Let U (n)′ ⊆ U (n) denote the n×n unitary matrices with distinct eigenvalues. Then the map ψ : (U (n) /T) × T′ →U (n)′ (UT, Θ) 7→UΘU−1 72 Eigenvalue distributions: exact formulas is a well-defined n!-to-1 mapping onto U (n)′ with everywhere bijective differ-ential dψ, and if Θ = diag(eiθ1, . . . , eiθn), then | det dψ(UT,Θ)| = Y 1≤j<k≤n |eiθ j −eiθk|2. Proof To see that ψ is well-defined, suppose that UT = e UT; i.e., that there is e Θ ∈T such that U = e Ue Θ. Then for any Θ ∈T′, UΘU−1 = e Ue ΘΘe Θ−1 e U−1 = e UΘe U−1, since the diagonal matrices Θ, e Θ commute. Next, observe that if U ∈U (n)′ with distinct eigenvalues λ1, . . . , λn, then for any permutation σ ∈S n, there is Vσ ∈U (n) such that U = Vσ diag(λσ(1), . . . , λσ(n))V−1 σ , and so U has (at least) n! distinct preimages (VσT, diag(λσ(1), . . . , λσ(n))). More-over, if V, W ∈U (n) are such that V diag(λσ(1), . . . , λσ(n))V−1 = U = W diag(λσ(1), . . . , λσ(n))W−1, then the k-th columns of both V and W are eigenvectors of U with eigenvalue λσ(k). Since U has n distinct eigenvalues, its eigenspaces are one-dimensional, and so the k-th columns of V and W can only differ by multiplication by a unit modulus complex number ωk. In other words, W = V diag(ω1, . . . , ωn), and since diag(ω1, . . . , ωn) ∈T, it follows that VT = WT. That is, U has exactly n! preimages under ψ. It remains to compute the differential dψ. A curve γ : [0, 1] →T with γ(0) = I has the form γ(t) = diag(eiθ1(t), . . . , eiθn(t)), where the θ j(t) are arbitrary smooth functions of t, so that γ′(0) = diag(iθ′ 1(0), . . . , iθ′ n(0)); that is, TIn(T) ⊆TIn(U (n)) is exactly the subspace of diagonal elements: TIn(T) = {diag(iρ1, . . . , iρn) : ρ1, . . . , ρn ∈R} =: t. Recall from Section 1.1 that the Lie algebra of U (n) itself is u(n) = {X ∈Mn(C) : X + X∗= 0} . The map π : U (n) →U (n) /T with π(U) = UT induces a surjective linear map 3.1 The Weyl integration formula 73 dπIn : TIn(U (n)) →TInT(U (n) /T) whose kernel is exactly t, and so its image can be identified with orthogonal complement of t in u: TInT(U (n) /T)  n X ∈Mn(C) : X + X∗= 0, Xj j = 0 ∀j o =: p. Now, to compute dψ itself, we must compute d dtψ◦γ(t)|t=0 for smooth curves γ : [0, 1] →(U (n) /T) × T′ with arbitrary initial directions. We first consider the curve γ1(t) = (UetZT, Θ), where Z is an n × n matrix over C with Z + Z∗= 0 and Z j j = 0 for each j. Then γ1 has γ1(0) = (UT, Θ) and γ′ 1(0) = (Z, 0) (it is customary to identify T(UT,Θ)(U (n) /T × T′) with p ⊕t), ψ(γ1(t)) = UetZΘe−tZU−1, and dψ(UT,Θ)(Z, 0) = d dtψ(γ1(t))|t=0 = UZΘU−1 −UΘZU−1 = h UΘU−1i  U h Θ−1ZΘ −Z i U−1 . In the orthogonal direction, consider the curve γ2 : [0, 1] →(U (n) /T) × T′ defined by γ2(s) = (UT, ΘesΘ′). Then γ2(0) = (UT, Θ), γ′ 2(0) = (0, Θ′), and dψ(UT,Θ)(0, Θ′) = d dsψ ◦γ2(s)|s=0 = UΘΘ′U−1 = UΘU−1 UΘ′U−1 . Together then, dψ(UT,Θ)(Z, Θ′) = UΘU−1 U h Θ−1ZΘ −Z + Θ′i U−1 . Identifying TUΘU−1(U (n)′) with u, one would more typically write just dψ(UT,Θ)(Z, Θ′) = U h Θ−1ZΘ −Z + Θ′i U−1. Since multiplication by a unitary matrix is an isometry of Mn(C), it follows that | det ψ(UT,Θ)| = det AΘ ⊕I = det AΘ , where AΘ(Z) = Θ−1ZΘ −Z. 74 Eigenvalue distributions: exact formulas To finally actually compute the determinant, it is easier to consider the com-plexification of p: the space p itself is a real vector space, but if (X1, . . . , XN) is a basis, then we may instead consider the complex vector space pC := {z1X1 + · · · + zNXn : z1, . . . , zn ∈C} . The matrices (X1, . . . 
There are integration formulae for the other matrix groups as well, although some details are slightly more complicated. Firstly, recall the trivial eigenvalues: each matrix in $SO(2n+1)$ has $1$ as an eigenvalue, each matrix in $SO^-(2n+1)$ has $-1$ as an eigenvalue, and each matrix in $SO^-(2n+2)$ has both $-1$ and $1$ as eigenvalues. The remaining eigenvalues of matrices in $SO(n)$, $SO^-(n)$, or $Sp(2n)$ occur in complex conjugate pairs. For this reason, when discussing $SO(n)$, $SO^-(n)$, or $Sp(2n)$, the eigenvalue angles corresponding to the eigenvalues in the open upper half-circle are considered to be the nontrivial ones, and one normally considers the eigenvalue process restricted to that set. For $U(n)$, all the eigenvalue angles are considered nontrivial; there are no automatic symmetries in this case. The following result gives the analogues of the formula in Theorem 3.1 for the remaining groups.
Theorem 3.5 Let $U$ be a Haar-distributed random matrix in $S$, where $S$ is one of $SO(2n+1)$, $SO(2n)$, $SO^-(2n+1)$, $SO^-(2n+2)$, $Sp(2n)$. Then a function $g$ of $U$ which is invariant under conjugation of $U$ by a fixed orthogonal (in all but the last case) or symplectic (in the last case) matrix is associated as above with a function $\tilde g : [0,\pi)^n \to \mathbb{R}$ (of the nontrivial eigenangles) which is invariant under permutations of coordinates, and if $U$ is distributed according to Haar measure on $G$, then
\[ \mathbb{E} g(U) = \int_{[0,\pi)^n} \tilde g \; d\mu^W_G, \]
where the measures $\mu^W_G$ on $[0,\pi)^n$ have the following densities with respect to $d\theta_1\cdots d\theta_n$:
\[
\begin{array}{ll}
G & \mu^W_G \\[4pt]
SO(2n) & \displaystyle \frac{2}{n!(2\pi)^n} \prod_{1\le j<k\le n} \big(2\cos\theta_k - 2\cos\theta_j\big)^2 \\[10pt]
SO(2n+1) & \displaystyle \frac{2^n}{n!\pi^n} \Bigg(\prod_{1\le j\le n} \sin^2\Big(\frac{\theta_j}{2}\Big)\Bigg) \prod_{1\le j<k\le n} \big(2\cos\theta_k - 2\cos\theta_j\big)^2 \\[10pt]
SO^-(2n+1) & \displaystyle \frac{2^n}{n!\pi^n} \Bigg(\prod_{1\le j\le n} \cos^2\Big(\frac{\theta_j}{2}\Big)\Bigg) \prod_{1\le j<k\le n} \big(2\cos\theta_k - 2\cos\theta_j\big)^2 \\[10pt]
Sp(2n),\ SO^-(2n+2) & \displaystyle \frac{2^n}{n!\pi^n} \Bigg(\prod_{1\le j\le n} \sin^2\big(\theta_j\big)\Bigg) \prod_{1\le j<k\le n} \big(2\cos\theta_k - 2\cos\theta_j\big)^2
\end{array}
\]

So as not to spoil the reader's fun, we will show how to modify the proof of the unitary case for the case of $SO(2n)$ and leave the remaining formulae as exercises.

Proof of the Weyl integration formula for $SO(2n)$ If $U \in SO(2n)$, then there are $V \in SO(2n)$ and angles $\{\theta_1,\dots,\theta_n\} \subseteq [0,\pi]$ such that
\[ U = V \begin{pmatrix} \cos\theta_1 & \pm\sin\theta_1 & & \\ \mp\sin\theta_1 & \cos\theta_1 & & \\ & & \ddots & \\ & & & \begin{matrix}\cos\theta_n & \pm\sin\theta_n \\ \mp\sin\theta_n & \cos\theta_n\end{matrix} \end{pmatrix} V^{-1}; \tag{3.2} \]
the eigenvalues of $U$ are the complex conjugate pairs $\{e^{\pm i\theta_1},\dots,e^{\pm i\theta_n}\}$. Let $T$ denote the set of all block-diagonal rotations as in (3.2), $T' \subset T$ those matrices for which the $\theta_j$ are all distinct and different from $0$ and $\pi$, and let $SO(2n)' \subseteq SO(2n)$ be the subset of $SO(2n)$ with distinct eigenvalues, all different from $1$ and $-1$. Define a map
\[ \psi : (SO(2n)/T) \times T' \to SO(2n)', \qquad (UT, R) \mapsto URU^{-1}. \]
Then $\psi$ is a $2^{n-1}n!$-to-$1$ covering of $SO(2n)'$: each $U \in SO(2n)'$ has $2^{n-1}n!$ distinct preimages, corresponding to the $n!$ possible permutations of the $2\times 2$ blocks and the $2^{n-1}$ possible reorderings of the bases within all but the last of the corresponding $2$-dimensional subspaces (the last one is then forced so as to remain within $SO(2n)$).

As before, $\psi$ is used to make a change of variables, and the crucial ingredient in the integration formula is the determinant of $d\psi$. Recall that
\[ \mathfrak{so}(2n) := T_{I_n}(SO(2n)) = \{X \in M_{2n}(\mathbb{R}) : X + X^T = 0\}. \]
If $\gamma : [0,1] \to T$ has the form of (3.2) with smooth angle functions $\theta_j(s)$ (and the choice of signs consistent as $s$ varies), then
\[ \gamma'(0) = \begin{pmatrix} 0 & \pm\theta_1'(0) \\ \mp\theta_1'(0) & 0 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} 0 & \pm\theta_n'(0) \\ \mp\theta_n'(0) & 0 \end{pmatrix}, \]
and so
\[ \mathfrak{t} := T_{I_n}(T) = \left\{ \begin{pmatrix} 0 & \rho_1 \\ -\rho_1 & 0 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} 0 & \rho_n \\ -\rho_n & 0 \end{pmatrix} : \rho_1,\dots,\rho_n \in \mathbb{R} \right\}. \]
The map $\pi : SO(2n) \to SO(2n)/T$ with $\pi(U) := UT$ induces a surjective linear map $d\pi_{I_n} : T_{I_n}(SO(2n)) \to T_{I_nT}(SO(2n)/T)$ with kernel $\mathfrak{t}$, whose image is therefore isomorphic to the orthogonal complement
\[ \mathfrak{p} := \left\{ X \in M_{2n}(\mathbb{R}) : X + X^T = 0,\ \begin{pmatrix} X_{2j-1,2j-1} & X_{2j-1,2j} \\ X_{2j,2j-1} & X_{2j,2j} \end{pmatrix} = \begin{pmatrix} 0&0\\0&0 \end{pmatrix},\ j = 1,\dots,n \right\}. \]
Computing the tangent vectors at $0$ to curves $\gamma_1(t) = (Ue^{tZ}T, R)$ and $\gamma_2(s) = (UT, Re^{sR'})$ as in the unitary case shows that
\[ d\psi_{(UT,R)}(Z,R') = U\big[(A_R \oplus I)(Z,R')\big]U^{-1}, \]
where $A_R : \mathfrak{p} \to \mathfrak{p}$ is given by $A_R(Z) = R^{-1}ZR - Z$, and we have identified $T_{(UT,R)}(SO(2n)/T \times T')$ with $\mathfrak{p} \oplus \mathfrak{t}$ and $T_{URU^{-1}}(SO(2n)')$ with $\mathfrak{so}(2n)$.
We thus have that $|\det d\psi_{(UT,R)}| = |\det A_R|$. Unfortunately, there is no obvious basis in which $A_R$ is diagonal, and so computing $|\det A_R|$ requires a few linear-algebraic tricks. Firstly, it is once again simpler to compute in the complexification
\[ \mathfrak{p}_\mathbb{C} = \left\{ X \in M_{2n}(\mathbb{C}) : X + X^T = 0,\ \begin{pmatrix} X_{2j-1,2j-1} & X_{2j-1,2j} \\ X_{2j,2j-1} & X_{2j,2j} \end{pmatrix} = \begin{pmatrix} 0&0\\0&0 \end{pmatrix},\ j = 1,\dots,n \right\} \]
of $\mathfrak{p}$. Note that while in the unitary case complexifying $\mathfrak{p}$ removed the condition $X + X^* = 0$, in the orthogonal case the condition $X + X^T = 0$ remains.

Next, we change bases on $\mathbb{C}^{2n}$ to diagonalize the elements of $T$: let
\[ C := \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}, \]
and let $C$ act on $M_{2n}(\mathbb{C})$ by conjugation. The subspace $\mathfrak{t}$, and hence also $\mathfrak{p}_\mathbb{C}$, is preserved by this action, and
\[ C^{-1}\left[ \begin{pmatrix} \cos\theta_1 & \sin\theta_1 \\ -\sin\theta_1 & \cos\theta_1 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} \cos\theta_n & \sin\theta_n \\ -\sin\theta_n & \cos\theta_n \end{pmatrix} \right] C = \begin{pmatrix} e^{i\theta_1} & 0 \\ 0 & e^{-i\theta_1} \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} e^{i\theta_n} & 0 \\ 0 & e^{-i\theta_n} \end{pmatrix}. \]
Now, $\mathfrak{p}_\mathbb{C}$ has the obvious basis $\{F_{jk}\}$, where $F_{jk}$ has a $1$ in the $j$-$k$ entry, a $-1$ in the $k$-$j$ entry, and zeroes otherwise (where $(j,k)$ runs over pairs with $j < k$ which are not in any of the $2 \times 2$ blocks along the diagonal). However, $A_R$ is not diagonal with respect to the basis $\{CF_{jk}C^{-1}\}$, and one more trick is in order. Observe that if $S : M_{2n}(\mathbb{C}) \to M_{2n}(\mathbb{C})$ is defined by
\[ [SX]_{jk} = \begin{cases} X_{jk}, & j \le k; \\ -X_{jk}, & j > k, \end{cases} \]
then $S$ commutes with $A_R$ (this is easy to check on the basis $\{CE_{jk}C^{-1}\}$ of $M_{2n}(\mathbb{C})$).

Exercise 3.6 Let $V$ be a finite-dimensional inner product space and $U \subseteq V$ a subspace. Let $T : V \to V$ be such that $T(U) = U$ and let $S : V \to V$ commute with $T$. Then $T(S(U)) = S(U)$ and $\det T|_U = \det T|_{S(U)}$.

Applying the exercise to the maps $T = A_R$ and $S$ defined above, it follows that our quarry $\det A_R$, when $A_R$ is viewed as a map on $\mathfrak{p}_\mathbb{C}$, is the same as $\det A_R$ when $A_R$ is viewed as a map on
\[ \mathfrak{q}_\mathbb{C} := \left\{ X \in M_{2n}(\mathbb{C}) : X - X^T = 0,\ \begin{pmatrix} X_{2j-1,2j-1} & X_{2j-1,2j} \\ X_{2j,2j-1} & X_{2j,2j} \end{pmatrix} = \begin{pmatrix} 0&0\\0&0 \end{pmatrix},\ j = 1,\dots,n \right\}. \]
The subspace $\mathfrak{q}_\mathbb{C}$ has basis $\{G_{jk}\}$ defined similarly to $\{F_{jk}\}$ but with both nonzero entries equal to $1$, and so in particular the subspaces $\mathfrak{p}_\mathbb{C}$ and $\mathfrak{q}_\mathbb{C}$ are orthogonal. It follows that $\det A_R$ when $A_R$ is viewed as a map on $\mathfrak{p}_\mathbb{C} \oplus \mathfrak{q}_\mathbb{C}$ is the square of $\det A_R$ when $A_R$ is restricted to $\mathfrak{p}_\mathbb{C}$. The point, of course, is that $\mathfrak{p}_\mathbb{C} \oplus \mathfrak{q}_\mathbb{C}$ has basis $\{E_{jk}\}$, where $j \ne k$ and $(j,k)$ is not in any of the $2 \times 2$ blocks along the diagonal, and with respect to $\{CE_{jk}C^{-1}\}$, $A_R$ is diagonal: if $D = C^{-1}RC$ is the diagonalization of $R$ by $C$, then
\[ A_R(CE_{jk}C^{-1}) = CD^{-1}C^{-1} \cdot CE_{jk}C^{-1} \cdot CDC^{-1} - CE_{jk}C^{-1} = \Big( \exp\Big( i \big( (-1)^j \theta_{\lceil j/2 \rceil} - (-1)^k \theta_{\lceil k/2 \rceil} \big) \Big) - 1 \Big)\, CE_{jk}C^{-1}. \]
Examining the coefficient above, one can see that for each pair $p < q$, each of the four factors $e^{i(\pm\theta_p \pm \theta_q)} - 1$ appears as an eigenvalue exactly twice. Since
\[ \big|e^{i(\theta_p+\theta_q)} - 1\big|\,\big|e^{i(\theta_p-\theta_q)} - 1\big| = \big|2\cos\theta_p - 2\cos\theta_q\big|, \]
we finally have that
\[ \Big|\det{}_{\mathfrak{p}_\mathbb{C} \oplus \mathfrak{q}_\mathbb{C}}\, d\psi_{(UT,R)}\Big| = \prod_{1\le p<q\le n} \big|2\cos\theta_p - 2\cos\theta_q\big|^4, \]
and so
\[ \Big|\det{}_{\mathfrak{p}_\mathbb{C}}\, d\psi_{(UT,R)}\Big| = \prod_{1\le p<q\le n} \big|2\cos\theta_p - 2\cos\theta_q\big|^2. \]
From here, the proof is completed exactly as in the unitary case, by using $\psi$ to make a change of variables; keeping track of the normalizations and the $2^{n-1}n!$-fold covering produces the density given in Theorem 3.5. □

3.2 Determinantal point processes

The eigenvalue processes of the classical compact groups have a great deal of structure beyond the Weyl formulas: they are determinantal point processes. Recall that for a point process $X$ on a measure space $(\Lambda,\mu)$ and measurable $A \subseteq \Lambda$, $N_A$ denotes the number of points of $X$ in $A$. The $k$-point correlation functions of $X$ (when they exist) are the functions $\rho_k : \Lambda^k \to [0,\infty)$ such that for any pairwise disjoint measurable sets $A_1,\dots,A_k \subseteq \Lambda$,
\[ \mathbb{E}\Bigg[\prod_{j=1}^k N_{A_j}\Bigg] = \int_{A_1}\cdots\int_{A_k} \rho_k(x_1,\dots,x_k)\,d\mu(x_k)\cdots d\mu(x_1). \]
The process $X$ is called determinantal with kernel $K : \Lambda \times \Lambda \to \mathbb{C}$ if for each $k$,
\[ \rho_k(x_1,\dots,x_k) = \det\big(K(x_i,x_j)\big)_{i,j=1}^k. \]

Proposition 3.7 The nontrivial eigenangles of a Haar-distributed random matrix from any of $U(N)$, $SO(2N)$, $SO(2N+1)$, $SO^-(2N+1)$, $SO^-(2N+2)$, or $Sp(2N)$ form a determinantal point process with respect to a suitable reference measure, with an explicit kernel. In the unitary case, the eigenangle process on $\Lambda = [0,2\pi)$ is determinantal with respect to $\frac{1}{2\pi}d\theta$, with kernel
\[ K_N(x,y) = \sum_{j=0}^{N-1} e^{ij(x-y)}. \]
The kernels in the remaining cases are identified in the course of the proof below (see also Proposition 3.9 for equivalent forms).

The key tool is the following lemma. In its statement, $\mu$ is a finite measure on a set $\Lambda$, $f : \Lambda \to \mathbb{C}$, and $\{\varphi_n\}_{n\ge 0} \subseteq L^2(\mu)$ is an orthonormal sequence such that $\varphi_0 \equiv \frac{1}{\sqrt{\mu(\Lambda)}}$ and, for each $n \ge 1$, $\varphi_n = p_n(f)$ for a monic polynomial $p_n$ of degree $n$. Define
\[ K_N(x,y) := \sum_{\ell=0}^{N-1} \varphi_\ell(x)\overline{\varphi_\ell(y)}, \qquad D_{n,N}(x_1,\dots,x_n) := \det\big(K_N(x_j,x_k)\big)_{j,k=1}^n. \]

Lemma 3.8
1. $\displaystyle \frac{1}{\mu(\Lambda)} \Big| \det\big(f(x_j)^{k-1}\big)_{j,k=1}^N \Big|^2 = D_{N,N}(x_1,\dots,x_N)$.
2. Each $D_{n,N}$ is real-valued, nonnegative, and symmetric in its arguments, and
\[ \int_\Lambda D_{n,N}(x_1,\dots,x_n)\,d\mu(x_n) = (N-n+1)\,D_{n-1,N}(x_1,\dots,x_{n-1}). \]
3. If $m > N$, then $D_{m,N} = 0$.
4. Let $g : \Lambda^n \to \mathbb{R}$. Then
\[ \frac{1}{n!} \int_{\Lambda^n} g(x_1,\dots,x_n)\, D_{n,N}(x_1,\dots,x_n)\,d\mu(x_1)\cdots d\mu(x_n) = \frac{1}{N!} \int_{\Lambda^N} \Bigg[ \sum_{1\le j_1<\cdots<j_n\le N} g(x_{j_1},\dots,x_{j_n}) \Bigg] D_{N,N}(x_1,\dots,x_N)\,d\mu(x_1)\cdots d\mu(x_N). \]

The value of the lemma for us is that it is not too hard to check, using the Weyl integration formulae, that the $N$-point correlation functions are those claimed in Proposition 3.7; the full determinantal structure then follows from the lemma.
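Part 1 of the lemma is easy to test numerically in the unitary setting, where $f(\theta) = e^{i\theta}$, $\varphi_j = f^j$, and $\mu$ is the uniform probability measure on $[0,2\pi)$ (so $\mu(\Lambda) = 1$). The short sketch below (numpy assumed; the sample points are arbitrary) checks that the squared Vandermonde determinant equals $D_{N,N}$ at random points:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
theta = rng.uniform(0, 2*np.pi, N)

# V[j, k] = f(x_j)^k = e^{i k theta_j}; part 1 says |det V|^2 = D_{N,N}
V = np.exp(1j * np.outer(theta, np.arange(N)))
lhs = abs(np.linalg.det(V))**2

# K_N(x, y) = sum_{j=0}^{N-1} e^{ij(x-y)}, evaluated at all pairs of points
K = sum(np.exp(1j * j * np.subtract.outer(theta, theta)) for j in range(N))
rhs = np.linalg.det(K).real

print(lhs, rhs)   # agree up to rounding error
```

The identity here is just $\det(VV^*) = |\det V|^2$, with $(VV^*)_{jk} = K_N(\theta_j,\theta_k)$; the numerical check makes the bookkeeping in the lemma concrete.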
Proof of Lemma 3.8
1. Observe that since $\varphi_n = p_n(f)$ for a monic polynomial $p_n$, starting from the last column and adding linear combinations of previous columns, then multiplying the first column by $\frac{1}{\sqrt{\mu(\Lambda)}}$,
\[ \frac{1}{\sqrt{\mu(\Lambda)}} \det\big(f(x_j)^{k-1}\big)_{j,k=1}^N = \det\big(\varphi_{k-1}(x_j)\big)_{j,k=1}^N, \]
and so
\[ \frac{1}{\mu(\Lambda)} \Big|\det\big(f(x_j)^{k-1}\big)_{j,k=1}^N\Big|^2 = \det\big(\varphi_{k-1}(x_j)\big)_{j,k=1}^N \det\big(\overline{\varphi_{j-1}(x_k)}\big)_{j,k=1}^N = \det\Bigg( \sum_{\ell=1}^N \varphi_{\ell-1}(x_j)\overline{\varphi_{\ell-1}(x_k)} \Bigg)_{j,k=1}^N = D_{N,N}(x_1,\dots,x_N). \]

2. We prove the integration formula by induction, using the Laplace expansion of the determinant. The $n=1$ case is trivial by orthonormality of the $\varphi_j$. For $n > 1$, let $A_{n,N} := \big(K_N(x_j,x_k)\big)_{j,k=1}^n$ and expand $\det A_{n,N}$ along the final column:
\[ \det A_{n,N} = \sum_{k=1}^n (-1)^{k+n} K_N(x_k,x_n) \det A^{k,n}_{n,N}, \]
where $A^{k,\ell}_{n,N}$ denotes the matrix $A_{n,N}$ with the $k$th row and $\ell$th column removed. When $k = n$, the matrix $A^{n,n}_{n,N}$ is just $\big(K_N(x_j,x_k)\big)_{j,k=1}^{n-1}$, which is in particular independent of $x_n$, and integrating $K_N(x_n,x_n)$ produces a factor of $N$. For the remaining terms, expanding $\det A^{k,n}_{n,N}$ along the bottom row gives
\[ \det A^{k,n}_{n,N} = \sum_{\ell=1}^{n-1} (-1)^{\ell+n-1} K_N(x_n,x_\ell) \det\big(K_N(x_i,x_j)\big)_{\substack{1\le i,j\le n \\ i\ne k,n;\ j\ne \ell,n}}. \]
Note in particular that there are no $x_n$'s left in the determinant. Now,
\[ \int_\Lambda K_N(x_k,x_n)K_N(x_n,x_\ell)\,d\mu(x_n) = \sum_{r,s=0}^{N-1} \varphi_r(x_k)\overline{\varphi_s(x_\ell)} \int_\Lambda \overline{\varphi_r(x_n)}\varphi_s(x_n)\,d\mu(x_n) = \sum_{r=0}^{N-1} \varphi_r(x_k)\overline{\varphi_r(x_\ell)} = K_N(x_k,x_\ell). \]
It follows that for $k < n$,
\[ \int_\Lambda (-1)^{k+n} K_N(x_k,x_n) \det A^{k,n}_{n,N}\,d\mu(x_n) = \sum_{\ell=1}^{n-1} (-1)^{k+\ell-1} K_N(x_k,x_\ell) \det\big(K_N(x_i,x_j)\big)_{\substack{i\ne k,n;\ j\ne \ell,n}} = -\det\big(K_N(x_i,x_j)\big)_{i,j=1}^{n-1}. \]
As there are $n-1$ terms of this type, together with the $k=n$ case this gives the formula in part 2. Since we have already seen that $D_{N,N}$ is real-valued, nonnegative, and symmetric, it follows immediately from the formula just established that $D_{n,N}$ is as well.

3. Using the same trick as in part 1, if $n > N$, then $\det\big(K_N(x_i,x_j)\big)_{i,j=1}^n = \det(\Phi\Phi^*)$, where $\Phi$ is an $n \times N$ matrix with entries $\varphi_{j-1}(x_i)$. Since the rank of $\Phi\Phi^*$ is at most $N < n$, $\det(\Phi\Phi^*) = 0$.

4. The only nontrivial case is $n < N$. By the symmetry of $D_{N,N}(x_1,\dots,x_N)$,
\[ \int_{\Lambda^N} \Bigg[ \sum_{1\le j_1<\cdots<j_n\le N} g(x_{j_1},\dots,x_{j_n}) \Bigg] D_{N,N}(x_1,\dots,x_N)\,d\mu(x_N)\cdots d\mu(x_1) = \binom{N}{n} \int_{\Lambda^N} g(x_1,\dots,x_n)\, D_{N,N}(x_1,\dots,x_N)\,d\mu(x_N)\cdots d\mu(x_1). \]
Using part 2 to integrate out $x_N,\dots,x_{n+1}$ gives that this is
\[ \frac{N!}{n!} \int_{\Lambda^n} g(x_1,\dots,x_n)\, D_{n,N}(x_1,\dots,x_n)\,d\mu(x_n)\cdots d\mu(x_1). \qquad \Box \]

Proof of Proposition 3.7 Unitary case: Let $X = \{e^{i\theta_1},\dots,e^{i\theta_N}\}$ be the eigenvalue process corresponding to Haar measure on the unitary group. Let $\Lambda = [0,2\pi)$, let $f : \Lambda \to \mathbb{C}$ be defined by $f(\theta) = e^{i\theta}$, and for $n \ge 1$ let $\varphi_n = f^n$. Then the kernel $K_N(x,y)$ from the lemma is
\[ K_N(x,y) = \sum_{j=0}^{N-1} \varphi_j(x)\overline{\varphi_j(y)} = \sum_{j=0}^{N-1} e^{ij(x-y)}, \]
and the Weyl density is
\[ \prod_{1\le j<k\le N} |e^{i\theta_j} - e^{i\theta_k}|^2 = \Big|\det\big(e^{i\theta_j(k-1)}\big)_{j,k=1}^N\Big|^2 = D_{N,N}, \]
by part 1 of the lemma. Now, if $A_1,\dots,A_N \subseteq \Lambda$ are pairwise disjoint, then $\prod_{j=1}^N N_{A_j}(X) = 1$ if there is some permutation $\sigma \in S_N$ such that $\theta_j \in A_{\sigma(j)}$ for each $j$, and otherwise $\prod_{j=1}^N N_{A_j}(X) = 0$. It thus follows from the Weyl integration formula that
\[ \mathbb{E}\Bigg[\prod_{j=1}^N N_{A_j}\Bigg] = N!\,\mathbb{P}\big[e^{i\theta_1} \in A_1,\dots,e^{i\theta_N} \in A_N\big] = \frac{1}{(2\pi)^N} \int_{A_1}\cdots\int_{A_N} \prod_{1\le j<k\le N} |e^{i\theta_j} - e^{i\theta_k}|^2\,d\theta_N\cdots d\theta_1 = \frac{1}{(2\pi)^N} \int_{A_1}\cdots\int_{A_N} D_{N,N}(\theta_1,\dots,\theta_N)\,d\theta_N\cdots d\theta_1. \]
More generally, if $A_1,\dots,A_k \subseteq \Lambda$ are pairwise disjoint, then
\[ \mathbb{E}\Bigg[\prod_{j=1}^k N_{A_j}\Bigg] = \mathbb{E}\Bigg[\prod_{j=1}^k \Bigg(\sum_{\ell=1}^N \mathbf{1}_{A_j}(\theta_\ell)\Bigg)\Bigg] = \sum_{\sigma\in S_k} \sum_{1\le \ell_1<\cdots<\ell_k\le N} \mathbb{E}\Bigg[\prod_{j=1}^k \mathbf{1}_{A_{\sigma(j)}}(\theta_{\ell_j})\Bigg] = \frac{k!}{N!(2\pi)^N} \int_{\Lambda^N} \Bigg[\sum_{1\le \ell_1<\cdots<\ell_k\le N} \prod_{j=1}^k \mathbf{1}_{A_j}(\theta_{\ell_j})\Bigg] D_{N,N}(\theta_1,\dots,\theta_N)\,d\theta_1\cdots d\theta_N. \]
By part 4 of the lemma, this is
\[ \frac{1}{(2\pi)^k} \int_{\Lambda^k} \Bigg[\prod_{j=1}^k \mathbf{1}_{A_j}(\theta_j)\Bigg] D_{k,N}(\theta_1,\dots,\theta_k)\,d\theta_1\cdots d\theta_k = \frac{1}{(2\pi)^k} \int_{A_1}\cdots\int_{A_k} \det\big(K_N(\theta_i,\theta_j)\big)_{i,j=1}^k\,d\theta_k\cdots d\theta_1. \]

Even special orthogonal case: Let $X = \{e^{i\theta_1},\dots,e^{i\theta_N}\}$ be the nontrivial eigenvalue process corresponding to Haar measure on $SO(2N)$, and let $\Lambda = [0,\pi]$. In order to work within the framework of the lemma, we choose $\mu$ such that $d\mu = \frac{1}{2\pi}d\theta$ on $\Lambda$, so that $\mu(\Lambda) = \frac{1}{2}$. Let $f : \Lambda \to \mathbb{C}$ be defined by $f(\theta) = 2\cos\theta = e^{i\theta} + e^{-i\theta}$; note that indeed
\[ \int f\,d\mu = 0, \qquad \int |f|^2\,d\mu = 1. \]
It is easy to show by induction that the function $\varphi_n(\theta) = 2\cos(n\theta) = e^{in\theta} + e^{-in\theta}$ is a monic polynomial of degree $n$ in $e^{i\theta} + e^{-i\theta}$, and if we choose $\varphi_0(\theta) = \sqrt{2}$, then $\{\varphi_n\}_{n\ge 0}$ is an orthonormal sequence with respect to $\mu$. Indeed, the reason for the choice of normalization of $\mu$ is so that this sequence of monic polynomials in $2\cos\theta$ is orthonormal. The kernel from the lemma corresponding to our choice of $\mu$ is
\[ K^\mu_N(x,y) = \sum_{j=0}^{N-1} \varphi_j(x)\varphi_j(y) = 2 + \sum_{j=1}^{N-1} 4\cos(jx)\cos(jy). \]
Now, the density given by the Weyl integration formula in the case of $SO(2N)$ is
\[ \frac{2}{N!(2\pi)^N} \prod_{1\le j<k\le N} \big(2\cos\theta_j - 2\cos\theta_k\big)^2 = \frac{1}{N!(2\pi)^N \mu(\Lambda)} \Big|\det\big(2\cos(\theta_j)^{\,k-1}\big)_{j,k=1}^N\Big|^2 = \frac{1}{N!(2\pi)^N} D_{N,N}, \]
by part 1 of the lemma. So if $A_1,\dots,A_k \subseteq \Lambda$ are pairwise disjoint, then as above,
\[ \mathbb{E}\Bigg[\prod_{j=1}^k N_{A_j}\Bigg] = \frac{k!}{N!(2\pi)^N} \int_{\Lambda^N} \Bigg[\sum_{1\le \ell_1<\cdots<\ell_k\le N} \prod_{j=1}^k \mathbf{1}_{A_j}(\theta_{\ell_j})\Bigg] D_{N,N}(\theta_1,\dots,\theta_N)\,d\theta_1\cdots d\theta_N = \frac{1}{(2\pi)^k} \int_{\Lambda^k} \Bigg[\prod_{j=1}^k \mathbf{1}_{A_j}(\theta_j)\Bigg] D_{k,N}\,d\theta_1\cdots d\theta_k = \frac{1}{(2\pi)^k} \int_{A_1}\cdots\int_{A_k} \det\Bigg(2 + 4\sum_{j=1}^{N-1}\cos(j\theta_r)\cos(j\theta_s)\Bigg)_{r,s=1}^k d\theta_1\cdots d\theta_k. \]
Cancelling the $\frac{1}{2^k}$ in front of the integral with a factor of $2$ in the matrix inside the determinant shows that the $SO(2N)$ eigenvalue process is determinantal with respect to the uniform probability measure on $[0,\pi]$ with kernel
\[ K_N(x,y) = 1 + 2\sum_{j=1}^{N-1} \cos(jx)\cos(jy). \]

Odd special orthogonal case: Let $X = \{e^{i\theta_1},\dots,e^{i\theta_N}\}$ be the nontrivial eigenvalue process corresponding to Haar measure on $SO(2N+1)$, and let $\Lambda = [0,\pi]$. Note that the form of the Weyl density in this case is somewhat different:
\[ \frac{2^N}{N!\pi^N} \Bigg(\prod_{1\le j\le N} \sin^2\Big(\frac{\theta_j}{2}\Big)\Bigg) \prod_{1\le j<k\le N} \big(2\cos\theta_k - 2\cos\theta_j\big)^2 \]
contains a product over $j$ in addition to the Vandermonde factor. Rather than treating this product as a determinant, it is more convenient within the framework of Lemma 3.8 to treat the $\sin^2(\theta_j/2)$ factors as defining the reference measure $\mu$. That is, let $\mu$ be the probability measure with $d\mu = \frac{2}{\pi}\sin^2\big(\frac{\theta}{2}\big)\,d\theta$ on $[0,\pi]$. The Vandermonde factor in the Weyl density suggests taking $f(\theta) = 2\cos\theta$; since
\[ \frac{2}{\pi} \int_0^\pi 2\cos(\theta)\sin^2\Big(\frac{\theta}{2}\Big)\,d\theta = -1, \]
we take instead
\[ f(\theta) = 1 + 2\cos\theta = \frac{\sin\big(\tfrac{3\theta}{2}\big)}{\sin\big(\tfrac{\theta}{2}\big)}. \]
From the last expression it is easy to see that $\int f^2\,d\mu = 1$. In analogy with the previous case, the second expression for $f$ suggests choosing
\[ \varphi_n(\theta) = \frac{\sin\big(\tfrac{2n+1}{2}\theta\big)}{\sin\big(\tfrac{\theta}{2}\big)} \]
for $n \ge 1$ (and $\varphi_0 = 1$), so that $\{\varphi_n\}_{n\ge 0}$ is orthonormal with respect to $\mu$.
Rewriting
\[ \varphi_n(\theta) = 1 + \sum_{j=1}^n \big(e^{ij\theta} + e^{-ij\theta}\big) \]
shows that $\varphi_n(\theta)$ is indeed a monic polynomial of degree $n$ in $f$. The kernel $K^\mu_N(x,y)$ as in Lemma 3.8 is then
\[ K^\mu_N(x,y) = \sum_{j=0}^{N-1} \varphi_j(x)\varphi_j(y) = 1 + \sum_{j=1}^{N-1} \frac{\sin\big(\tfrac{2j+1}{2}x\big)\sin\big(\tfrac{2j+1}{2}y\big)}{\sin\big(\tfrac{x}{2}\big)\sin\big(\tfrac{y}{2}\big)}. \]
The distribution of the eigenangles from the Weyl formula is
\[ \frac{2^N}{N!\pi^N} \Bigg(\prod_{1\le j\le N}\sin^2\Big(\frac{\theta_j}{2}\Big)\Bigg) \prod_{1\le j<k\le N} \big(2\cos\theta_k - 2\cos\theta_j\big)^2\,d\theta_1\cdots d\theta_N = \frac{2^N}{N!\pi^N} \Bigg(\prod_{1\le j\le N}\sin^2\Big(\frac{\theta_j}{2}\Big)\Bigg) \Big|\det\big((1+2\cos\theta_j)^{k-1}\big)_{j,k=1}^N\Big|^2\,d\theta_1\cdots d\theta_N = \frac{1}{N!} D_{N,N}\,d\mu(\theta_1)\cdots d\mu(\theta_N), \]
by part 1 of the lemma and the definition of $\mu$. Now, if $k \le N$ and $A_1,\dots,A_k \subseteq \Lambda$ are pairwise disjoint, then by the computation above and part 4 of the lemma,
\[ \mathbb{E}\Bigg[\prod_{j=1}^k N_{A_j}\Bigg] = \frac{k!}{N!} \int_{\Lambda^N} \Bigg[\sum_{1\le j_1<\cdots<j_k\le N} \prod_{\ell=1}^k \mathbf{1}_{A_\ell}(\theta_{j_\ell})\Bigg] D_{N,N}(\theta_1,\dots,\theta_N)\,d\mu(\theta_1)\cdots d\mu(\theta_N) = \int_{\Lambda^k} \Bigg[\prod_{\ell=1}^k \mathbf{1}_{A_\ell}(\theta_\ell)\Bigg] D_{k,N}(\theta_1,\dots,\theta_k)\,d\mu(\theta_1)\cdots d\mu(\theta_k) = \frac{2^k}{\pi^k} \int_{A_1}\cdots\int_{A_k} \det\Bigg(1 + \sum_{\ell=1}^{N-1} \frac{\sin\big(\tfrac{2\ell+1}{2}\theta_i\big)\sin\big(\tfrac{2\ell+1}{2}\theta_j\big)}{\sin\big(\tfrac{\theta_i}{2}\big)\sin\big(\tfrac{\theta_j}{2}\big)}\Bigg)_{i,j=1}^k \prod_{j=1}^k \sin^2\Big(\frac{\theta_j}{2}\Big)\,d\theta_j. \]
Now multiplying the $i$th row of the matrix inside the determinant by one factor of $\sin\big(\tfrac{\theta_i}{2}\big)$, the $j$th column by the other factor of $\sin\big(\tfrac{\theta_j}{2}\big)$, and each entry by $2$ gives that this last expression is
\[ \frac{1}{\pi^k} \int_{A_1}\cdots\int_{A_k} \det\Bigg( 2\sum_{\ell=0}^{N-1} \sin\Big(\frac{2\ell+1}{2}\theta_i\Big)\sin\Big(\frac{2\ell+1}{2}\theta_j\Big) \Bigg)_{i,j=1}^k \prod_{j=1}^k d\theta_j. \]

To treat the negative coset, one need only note that because we are in the odd case, the two cosets of the orthogonal group are related by multiplication by $-I$. The nontrivial eigenangles $(\psi_1,\dots,\psi_N)$ of a Haar-distributed random matrix in $SO^-(2N+1)$ are thus equal in distribution to $(\pi-\theta_1,\dots,\pi-\theta_N)$, where $(\theta_1,\dots,\theta_N)$ are the nontrivial eigenangles of a Haar-distributed random matrix in $SO(2N+1)$, and the claimed formula then follows by changing variables.

Symplectic case: This is essentially the same as the previous case. Taking $d\mu = \frac{2}{\pi}\sin^2(\theta)\,d\theta$, $f(\theta) = 2\cos\theta$, and $\varphi_n(\theta) = \frac{\sin((n+1)\theta)}{\sin\theta}$ for $n \ge 1$, the proof proceeds as above. □

For some purposes, the following alternative forms of the kernels can be more convenient. In all but the unitary case, they are the same functions; in the unitary case, the kernels are different functions but are unitarily similar, and thus define the same point processes.

Proposition 3.9 For $N \in \mathbb{N}$, let
\[ S_N(x) := \begin{cases} \dfrac{\sin\big(\tfrac{Nx}{2}\big)}{\sin\big(\tfrac{x}{2}\big)}, & x \ne 0; \\[6pt] N, & x = 0. \end{cases} \]
The nontrivial eigenvalue angles of uniformly distributed random matrices in any of $SO(N)$, $SO^-(N)$, $U(N)$, $Sp(2N)$ form a determinantal point process, with respect to the uniform (probability) measure on $\Lambda$, with kernels as follows.
\[
\begin{array}{lll}
 & L_N(x,y) & \Lambda \\[4pt]
U(N) & S_N(x-y) & [0,2\pi) \\[4pt]
SO(2N) & \tfrac{1}{2}\big(S_{2N-1}(x-y) + S_{2N-1}(x+y)\big) & [0,\pi) \\[4pt]
SO(2N+1) & \tfrac{1}{2}\big(S_{2N}(x-y) - S_{2N}(x+y)\big) & [0,\pi) \\[4pt]
SO^-(2N+1) & \tfrac{1}{2}\big(S_{2N}(x-y) + S_{2N}(x+y)\big) & [0,\pi) \\[4pt]
Sp(2N),\ SO^-(2N+2) & \tfrac{1}{2}\big(S_{2N+1}(x-y) - S_{2N+1}(x+y)\big) & [0,\pi)
\end{array}
\]

Exercise 3.10 Verify that these kernels define the same determinantal point processes as those given in Proposition 3.7.

3.3 Matrix moments

While we have in principle completely described the distribution of the eigenvalues of a random matrix by the Weyl density formula, in practice, using the Weyl density to answer natural questions about random matrices, particularly those having to do with asymptotic behavior for large matrices, can be rather difficult.
Under such circumstances, one option is to consider the moments. The following remarkable moment formulae are due to Diaconis and Shahshahani [ ]; the proofs give a nice illustration of the use of character theory in the study of random matrices from the classical compact groups. Because it is the simplest, we begin with the case of the unitary group.

Proposition 3.11 Let $U$ be a Haar-distributed random matrix in $U(n)$ and let $Z_1,\dots,Z_k$ be i.i.d. standard complex Gaussian random variables (i.e., complex-valued random variables whose real and imaginary parts are independent centered Gaussians with variance $\frac{1}{2}$).
1. Let $a = (a_1,\dots,a_k)$ and $b = (b_1,\dots,b_k)$ with $a_j, b_j \in \mathbb{N}$, and let $n \in \mathbb{N}$ be such that
\[ \max\Bigg\{\sum_{j=1}^k j a_j,\ \sum_{j=1}^k j b_j\Bigg\} \le n. \]
Then
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j} \big(\overline{\mathrm{Tr}(U^j)}\big)^{b_j}\Bigg] = \delta_{ab} \prod_{j=1}^k j^{a_j} a_j! = \mathbb{E}\Bigg[\prod_{j=1}^k \big(\sqrt{j}\,Z_j\big)^{a_j} \big(\overline{\sqrt{j}\,Z_j}\big)^{b_j}\Bigg]. \]
2. For any $j, k, n$, if $U$ is Haar-distributed in $U(n)$, then
\[ \mathbb{E}\big[\mathrm{Tr}(U^j)\overline{\mathrm{Tr}(U^k)}\big] = \delta_{jk}\min\{j,n\}. \]

Proof Let $p_j$ denote the power sum symmetric function: $p_j(x_1,\dots,x_n) = x_1^j + \cdots + x_n^j$. Then
\[ \prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j} = \prod_{j=1}^k p_j^{a_j}(U), \]
where by abuse of notation, $p_j(U)$ denotes the scalar-valued function $p_j(U) = p_j(\lambda_1,\dots,\lambda_n)$, with $\lambda_1,\dots,\lambda_n$ the eigenvalues of $U$. In symmetric function theory, such a product of power sums (a so-called compound power sum) is denoted $p_\mu$, where $\mu$ is the partition of the integer $K = a_1 + 2a_2 + \cdots + ka_k$ consisting of $a_1$ 1's, $a_2$ 2's, and so on. There are many different bases for the space of symmetric functions, one of which is the Schur functions, which also happen to be the irreducible characters of $U(n)$ (see Section 1.3). Better still, the change-of-basis formula with which one can express compound power sums in terms of Schur functions is very explicit, with the coefficients given by irreducible characters of the symmetric group. Indeed, the conjugacy classes of the symmetric group are exactly described by cycle structure, and a cycle structure can be thought of as an integer partition ($a_1$ 1-cycles, $a_2$ 2-cycles, ..., $a_k$ $k$-cycles corresponds to the partition of $K$ above). Since the irreducible characters are in one-to-one correspondence with the conjugacy classes, this means that the irreducible characters of $S_K$ can be indexed as $\{\chi^\lambda : \lambda \text{ a partition of } K\}$. It is possible to make explicit how a partition leads to an irreducible representation of $S_K$, but we will not go into this here. Instead, we will take as given the following change-of-basis formula, which is proved using the representation theory of the symmetric groups. For $p_\mu$ a compound power sum as above and $\{s_\lambda\}_{\lambda\vdash K}$ the Schur functions corresponding to partitions $\lambda$ of the integer $K$,
\[ p_\mu = \sum_{\lambda\vdash K} \chi^\lambda(\mu)\, s_\lambda. \]
Writing $L = b_1 + 2b_2 + \cdots + kb_k$ and taking $\nu$ to be the partition of $L$ with $b_1$ 1's, $b_2$ 2's, etc.,
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\big(\overline{\mathrm{Tr}(U^j)}\big)^{b_j}\Bigg] = \mathbb{E}\big[p_\mu(U)\overline{p_\nu(U)}\big] = \sum_{\lambda\vdash K,\ \pi\vdash L} \chi^\lambda(\mu)\chi^\pi(\nu)\, \mathbb{E}\big[s_\lambda(U)\overline{s_\pi(U)}\big]. \]
Since $U \mapsto s_\lambda(U)$ are the irreducible characters of $U(n)$, it follows from the first orthogonality relation (Proposition 1.12 in Section 1.3) that
\[ \mathbb{E}\big[s_\lambda(U)\overline{s_\pi(U)}\big] = \delta_{\lambda\pi}\,\mathbf{1}\big(\ell(\lambda) \le n\big). \]
(Recall that $s_\lambda(U) = 0$ if $\ell(\lambda) > n$.) The condition on $n$ in the statement of the theorem is exactly that $\max\{K,L\} \le n$, and so $\ell(\lambda) \le n$ for each partition of $K$ or $L$. That is,
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\big(\overline{\mathrm{Tr}(U^j)}\big)^{b_j}\Bigg] = \delta_{KL} \sum_{\lambda\vdash K} \chi^\lambda(\mu)\chi^\lambda(\nu). \]
Applying the second orthogonality relation (Proposition 1.13 of Section 1.3) to the irreducible characters of $S_K$ gives that
\[ \sum_{\lambda\vdash K} \chi^\lambda(\mu)\chi^\lambda(\nu) = \delta_{\mu\nu}\,\frac{|S_K|}{c(\mu)}, \]
where $c(\mu)$ is the size of the conjugacy class of $\mu$ in $S_K$. Since $\mu$ corresponds to permutations with $a_1$ 1-cycles, $a_2$ 2-cycles, and so on,
\[ c(\mu) = \binom{K}{\underbrace{1,\dots,1}_{a_1},\dots,\underbrace{k,\dots,k}_{a_k}} \frac{(0!)^{a_1}\cdots((k-1)!)^{a_k}}{a_1!\cdots a_k!}, \]
where the multinomial coefficient is the number of ways to divide the integers $1,\dots,K$ into $a_1$ groups of size 1, $a_2$ groups of size 2, etc.; the number of cyclic permutations of $\{i_1,\dots,i_j\}$ is $(j-1)!$; and the factor of $a_1!\cdots a_k!$ in the denominator corresponds to the orders in which one can write the cycles of a given length. We thus have that
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\big(\overline{\mathrm{Tr}(U^j)}\big)^{b_j}\Bigg] = \delta_{KL}\delta_{\mu\nu} \prod_{j=1}^k j^{a_j} a_j! = \delta_{ab} \prod_{j=1}^k j^{a_j} a_j!. \]
To see that this is the same as the mixed moment of complex Gaussian random variables as stated in the theorem, first note that a standard complex Gaussian random variable $Z$ can be written as $Re^{i\theta}$ with $R^2 = Z_1^2 + Z_2^2$ for $(Z_1,Z_2)$ independent real standard Gaussian variables, $\theta$ uniformly distributed in $[0,2\pi)$, and $(R,\theta)$ independent. Moreover, $Z_1^2 + Z_2^2 \stackrel{d}{=} Y$, where $Y$ is an exponential random variable with parameter $1$. Then
\[ \mathbb{E}\big[Z^a\overline{Z}^b\big] = \mathbb{E}\big[R^{a+b}\big]\mathbb{E}\big[e^{i(a-b)\theta}\big] = \delta_{ab}\,\mathbb{E}\big[Y^a\big] = \delta_{ab}\,a!. \]

To prove the second statement, by the argument above we have that
\[ \mathbb{E}\big[\mathrm{Tr}(U^j)\overline{\mathrm{Tr}(U^k)}\big] = \delta_{jk} \sum_{\lambda\vdash j} |\chi^\lambda((j))|^2\,\mathbf{1}\big(\ell(\lambda)\le n\big), \]
where $(j)$ is the trivial partition of $j$ into the single part $j$. Evaluating the sum thus requires knowledge of the irreducible characters of the symmetric group. It is a fact (see Exercise 4.16 in [ ]) that $\chi^\lambda((j)) = 0$ unless $\lambda$ is a so-called hook partition: a partition of the form $(a,1,1,\dots,1)$ (its Young diagram looks like a hook). In case $\lambda$ is a hook partition, $\chi^\lambda((j)) = (-1)^{\ell(\lambda)-1}$. There are $\min\{j,n\}$ hook partitions $\lambda$ of $j$ with $\ell(\lambda) \le n$, and so
\[ \mathbb{E}\big[\mathrm{Tr}(U^j)\overline{\mathrm{Tr}(U^k)}\big] = \delta_{jk}\min\{j,n\}. \qquad \Box \]
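Part 2 of Proposition 3.11 is simple to confirm by simulation. The sketch below (a minimal Monte Carlo check, assuming scipy's `unitary_group` sampler is available; the values of `n`, `reps`, and `js` are arbitrary) estimates $\mathbb{E}|\mathrm{Tr}(U^j)|^2$ and compares with $\min\{j,n\}$:

```python
import numpy as np
from scipy.stats import unitary_group

n, reps, js = 6, 20000, (1, 2, 4, 6, 9)
acc = {j: 0.0 for j in js}
for _ in range(reps):
    lam = np.linalg.eigvals(unitary_group.rvs(n))   # eigenvalues of a Haar U(n) matrix
    for j in js:
        acc[j] += abs((lam**j).sum())**2            # |Tr(U^j)|^2
for j in js:
    print(j, round(acc[j] / reps, 3), min(j, n))    # empirical mean vs. min(j, n)
```

Note the transition at $j = n$: for $j \le n$ the moments match those of $\sqrt{j}\,Z_j$ exactly, while for $j > n$ they saturate at $n$.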
The orthogonal and symplectic cases are more complicated because the expansions of the power sum symmetric functions in terms of group characters are more difficult. The approach can nevertheless be extended to give the following.

Theorem 3.12 Let $U$ be a Haar-distributed random matrix in $O(n)$, and let $Z_1,\dots,Z_k$ be i.i.d. standard Gaussian random variables. Suppose that $a_1,\dots,a_k$ are nonnegative integers such that $2\sum_{j=1}^k ja_j \le n$. Let $\eta_j$ be $1$ if $j$ is even and $0$ if $j$ is odd. Then
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\Bigg] = \prod_{j=1}^k f_j(a_j) = \mathbb{E}\Bigg[\prod_{j=1}^k \big(\sqrt{j}\,Z_j + \eta_j\big)^{a_j}\Bigg], \]
where
\[ f_j(a) = \begin{cases} 0, & j, a \text{ odd}; \\[4pt] j^{a/2}(a-1)!!, & j \text{ odd},\ a \text{ even}; \\[4pt] \displaystyle \sum_{s=0}^{\lfloor a/2 \rfloor} \binom{a}{2s}(2s-1)!!\, j^s, & j \text{ even}. \end{cases} \]

Proof Let $\mu$ denote the partition of $K := a_1 + 2a_2 + \cdots + ka_k$ with $a_1$ 1's, $a_2$ 2's, and so on. Then as in the unitary case,
\[ \prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j} = p_\mu(U), \]
the compound power symmetric function evaluated at the eigenvalues of $U$. As before, the key to the proof is to express $p_\mu$ in terms of the irreducible characters of the orthogonal group; the necessary expansion is rather less classical in this case and was developed in the early '90s by A. Ram.

Let $V$ be the standard $n$-dimensional representation of $O(n)$ (i.e., $U \in O(n)$ acts by matrix-vector multiplication), and for $k \in \mathbb{N}$, recall that $V^{\otimes k}$ is a representation of $O(n)$, where $U(v_1 \otimes \cdots \otimes v_k) = Uv_1 \otimes \cdots \otimes Uv_k$. The Brauer algebra $B_{k,n}$ is the algebra of linear transformations of $V^{\otimes k}$ which commute with the action of $O(n)$. The Brauer algebra is not a group algebra, and so it does not fit into the usual framework of representation theory. Nevertheless, there is a notion of irreducible characters, and these play the role here that the irreducible characters of the symmetric group played in the case of random unitary matrices. Specifically, there is the following expansion of the compound power symmetric function:
\[ p_\mu(x_1,\dots,x_n) = \sum_{j=0}^{\lfloor K/2 \rfloor} \sum_{\nu\vdash K-2j} \mathbf{1}_{\{\nu'_1+\nu'_2 \le n\}}\, \chi^\nu_{K,n}(\omega)\, t_\nu(x_1,\dots,x_n), \]
where $\nu'_1, \nu'_2$ are the first two parts of the conjugate partition to $\nu$ (so that the indicator is nonzero if and only if there are at most $n$ boxes in the first two columns of the Young diagram of $\nu$); $\chi^\nu_{K,n}(\omega)$ is a character of $B_{K,n}$ corresponding to $\nu$, being evaluated at an argument $\omega$ determined by $\mu$ (exactly how this works will be treated like a black box here); and $t_\nu$ is the character of $O(n)$ corresponding to $\nu$ (see Theorem 1.17).

By the first orthogonality relation for characters (Proposition 1.12 in Section 1.3), $\mathbb{E}[t_\nu(U)] = 0$ unless $t_\nu$ is constant; this corresponds to the trivial partition $\nu = (0)$. It thus follows from the expansion formula that
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\Bigg] = \begin{cases} \chi^{(0)}_{K,n}(\omega), & K \text{ even}; \\ 0, & K \text{ odd}. \end{cases} \]
The formulae developed by Ram for evaluating these characters of the Brauer algebra give exactly the value for $\chi^{(0)}_{K,n}(\omega)$ quoted in the statement, as long as $n \ge 2K$. Seeing that this is the same as the mixed moment of Gaussians is immediate in the case that $\eta_j = 0$ (i.e., $j$ odd), and only requires expanding the binomial in the case $\eta_j = 1$. □

Brauer proved expansion formulae for the power sum symmetric functions in terms of the characters of the symplectic group as well; these also involve characters of the Brauer algebra as coefficients. The following theorem then follows by the same approach as in the orthogonal case. Recall that the eigenvalues of a symplectic matrix come in complex conjugate pairs, so traces of powers are necessarily real.

Theorem 3.13 Let $U$ be a Haar-distributed random matrix in $Sp(2n)$, and let $Z_1,\dots,Z_k$ be i.i.d. standard Gaussian random variables. Suppose that $a_1,\dots,a_k$ are nonnegative integers such that $\sum_{j=1}^k ja_j \le n$. Let $\eta_j$ be $1$ if $j$ is even and $0$ if $j$ is odd, and let $f_j(a)$ be defined as in Theorem 3.12. Then
\[ \mathbb{E}\Bigg[\prod_{j=1}^k \big(\mathrm{Tr}(U^j)\big)^{a_j}\Bigg] = \prod_{j=1}^k (-1)^{(j-1)a_j} f_j(a_j) = \mathbb{E}\Bigg[\prod_{j=1}^k \big(\sqrt{j}\,Z_j - \eta_j\big)^{a_j}\Bigg]. \]

3.4 Patterns in eigenvalues: powers of random matrices

The structure of the eigenvalue distributions of random matrices from the classical compact groups is very rich, with intriguing patterns hidden somehow in the Weyl integration formula. One example is the following result of E. Rains.

Theorem 3.14 Let $m \in \mathbb{N}$ be fixed and let $\widetilde m := \min\{m, N\}$. If $\sim$ denotes equality of eigenvalue distributions, then
\[ U(N)^m \sim \bigoplus_{0 \le j < \widetilde m} U\Big(\Big\lceil \frac{N-j}{\widetilde m} \Big\rceil\Big). \]
That is, if $U$ is a uniform $N \times N$ unitary matrix, the eigenvalues of $U^m$ are distributed as those of $\widetilde m$ independent uniform unitary matrices of sizes $\lfloor N/\widetilde m \rfloor$ and $\lceil N/\widetilde m \rceil$ (the floor and ceiling of $N/\widetilde m$), such that the sum of the sizes of the matrices is $N$. In particular, if $m \ge N$, the eigenvalues of $U^m$ are distributed exactly as $N$ i.i.d. uniform points on $S^1$.
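Theorem 3.14 makes concrete, testable predictions. As one illustration (a rough Monte Carlo sketch, assuming scipy; parameters are arbitrary): if the eigenvalues of $U^m$ are exactly $N$ i.i.d. uniform points for $m \ge N$, then a direct computation gives $\mathbb{E}|\mathrm{Tr}(U^m)|^4 = 2N^2 - N$, whereas for $m = 1$, Proposition 3.11 gives $\mathbb{E}|\mathrm{Tr}(U)|^4 = 2$ (the fourth moment of a standard complex Gaussian).

```python
import numpy as np
from scipy.stats import unitary_group

N, reps = 5, 50000
for m in (1, 10):                                    # m = 10 >= N = 5
    tot = 0.0
    for _ in range(reps):
        lam = np.linalg.eigvals(unitary_group.rvs(N))
        tot += abs((lam**m).sum())**4                # |Tr(U^m)|^4
    print(m, round(tot / reps, 2))
# expected values: about 2 for m = 1, and 2*N^2 - N = 45 for m = 10
```

The fourth moment is a genuinely different statistic from the second moments of Proposition 3.11, so the agreement at $m = 10$ reflects the full i.i.d. structure asserted by the theorem, not just matching covariances.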
For the proof, we will need the following preliminary lemma.

Lemma 3.15 Let
\[ p(\theta_1,\dots,\theta_n) := \sum_{(p_1,\dots,p_n)} a_{(p_1,\dots,p_n)} \prod_{1\le k\le n} e^{ip_k\theta_k} \]
be a Laurent polynomial in $\{e^{i\theta_k}\}_{k=1}^n$ which is a probability density with respect to $\frac{1}{(2\pi)^n}\,d\theta_1\cdots d\theta_n$. If $(\Theta_1,\dots,\Theta_n)$ is distributed according to $p$, then the density of $(m\Theta_1,\dots,m\Theta_n)$ with respect to $\frac{1}{(2\pi)^n}\,d\theta_1\cdots d\theta_n$ is
\[ p^{(m)}(\theta_1,\dots,\theta_n) := \sum_{\substack{(p_1,\dots,p_n) \\ m \mid p_k\ \forall k}} a_{(p_1,\dots,p_n)} \prod_{1\le k\le n} e^{i(p_k/m)\theta_k}, \]
where $m\Theta_j$ is interpreted modulo $2\pi$.

Proof To prove the lemma, it suffices to show that if $(\Phi_1,\dots,\Phi_n)$ is distributed according to $p^{(m)}$, then for any Laurent monomial $\mu(\theta_1,\dots,\theta_n) := \prod_{1\le k\le n} e^{ir_k\theta_k}$,
\[ \mathbb{E}\,\mu(m\Theta_1,\dots,m\Theta_n) = \mathbb{E}\,\mu(\Phi_1,\dots,\Phi_n). \]
Now, given two Laurent monomials $\mu_1(\theta) := \prod_k e^{ir_k\theta_k}$ and $\mu_2(\theta) := \prod_k e^{is_k\theta_k}$, we have
\[ \frac{1}{(2\pi)^n} \int_0^{2\pi}\cdots\int_0^{2\pi} \mu_1(\theta)\mu_2(\theta)\,d\theta_1\cdots d\theta_n = \begin{cases} 1, & r_k + s_k = 0\ \forall k; \\ 0, & \text{otherwise}, \end{cases} \]
and so
\[ \mathbb{E}\,\mu(m\Theta_1,\dots,m\Theta_n) = \frac{1}{(2\pi)^n} \sum_{(p_1,\dots,p_n)} a_{(p_1,\dots,p_n)} \int_0^{2\pi}\cdots\int_0^{2\pi} \prod_{1\le k\le n} e^{i(p_k+mr_k)\theta_k}\,d\theta_1\cdots d\theta_n = a_{(-mr_1,\dots,-mr_n)}. \]
On the other hand,
\[ \mathbb{E}\,\mu(\Phi_1,\dots,\Phi_n) = \frac{1}{(2\pi)^n} \sum_{\substack{(p_1,\dots,p_n) \\ m\mid p_k\ \forall k}} a_{(p_1,\dots,p_n)} \int_0^{2\pi}\cdots\int_0^{2\pi} \prod_{1\le k\le n} e^{i(p_k/m + r_k)\theta_k}\,d\theta_1\cdots d\theta_n = a_{(-mr_1,\dots,-mr_n)}, \]
which completes the proof. □

Proof of Theorem 3.14 For the proof, it is more convenient to index eigenvalues starting at $0$ rather than at $1$: the eigenvalues of $U$ are denoted $\{e^{i\theta_k}\}_{0\le k<n}$. By expanding the Vandermonde determinant as a sum over permutations, the Weyl density can be written as
\[ \frac{1}{(2\pi)^n n!} \sum_{\sigma,\tau\in S_n} \mathrm{sgn}(\sigma\tau^{-1}) \prod_{0\le k<n} e^{i(\sigma(k)-\tau(k))\theta_k}. \]
Then by Lemma 3.15, the density of the eigenvalues of $U^m$ with respect to $\frac{1}{(2\pi)^n}\,d\theta_1\cdots d\theta_n$ is
\[ \frac{1}{n!} \sum_{\substack{\sigma,\tau\in S_n \\ m\mid(\sigma(k)-\tau(k))\ \forall k}} \mathrm{sgn}(\sigma\tau^{-1}) \prod_{0\le k<n} e^{i\big(\frac{\sigma(k)-\tau(k)}{m}\big)\theta_k}. \]
Making the change of index $\ell = \tau(k)$ followed by the substitution $\pi = \sigma\circ\tau^{-1}$ reduces this expression to
\[ \sum_{\substack{\pi\in S_n \\ m\mid(\pi(\ell)-\ell)\ \forall \ell}} \mathrm{sgn}(\pi) \prod_{0\le \ell<n} e^{i\big(\frac{\pi(\ell)-\ell}{m}\big)\theta_\ell}. \]
Note that if $\pi \in S_n$ is such that $m \mid (\pi(\ell)-\ell)$ for all $\ell$, then $\pi$ permutes within congruence classes mod $m$, and can therefore be identified with the $m$ permutations it induces on those classes. Specifically, for $0 \le j < m$, define $\pi^{(j)} \in S_{\lceil (n-j)/m \rceil}$ by
\[ \pi^{(j)}(k) = \frac{\pi(mk+j) - j}{m}. \]
(Note that $\mathrm{sgn}(\pi) = \prod_{0\le j<m} \mathrm{sgn}(\pi^{(j)})$.) The density of the eigenvalues of $U^m$ can thus be factored as
\[ \sum_{\substack{\pi\in S_n \\ m\mid(\pi(\ell)-\ell)\ \forall\ell}} \mathrm{sgn}(\pi) \prod_{0\le\ell<n} e^{i\big(\frac{\pi(\ell)-\ell}{m}\big)\theta_\ell} = \sum_{\pi^{(0)}\in S_{\lceil n/m\rceil}} \cdots \sum_{\pi^{(m-1)}\in S_{\lceil (n-m+1)/m\rceil}} \prod_{0\le j<m} \Bigg[ \mathrm{sgn}(\pi^{(j)}) \prod_{0\le k<\lceil (n-j)/m\rceil} e^{i(\pi^{(j)}(k)-k)\theta_{km+j}} \Bigg], \]
which is exactly the product of the eigenvalue densities of $U_0,\dots,U_{m-1}$, where $\{U_j\}_{0\le j<m}$ are independent and $U_j$ is distributed according to Haar measure on $U\big(\lceil (n-j)/m \rceil\big)$. □

Notes and References

The Weyl integration formula is laid out in detail in Weyl's book [ ]. It is discussed in varying detail and accessibility in many modern sources; I found the exposition of the unitary case in the notes by Angela Pasquale [ ] particularly helpful. The approach to the determinantal structure of the eigenvalue processes laid out in Section 3.2 follows that of Chapter 5 of Katz and Sarnak's book [ ]. Many other random matrix ensembles also have eigenvalue processes which are determinantal: the GUE, the complex Wishart matrices, and the complex Ginibre ensemble all have this property (although the real counterparts do not), as does unitary Brownian motion.
The moments of traces of Haar-distributed random matrices have an interesting combinatorial connection; namely, they are related to the so-called increasing subsequence problem, which is about the distribution of the length of the longest increasing subsequence of a random permutation. (An increasing subsequence of a permutation $\pi$ is a sequence $i_1 < i_2 < \cdots < i_k$ such that $\pi(i_1) < \pi(i_2) < \cdots < \pi(i_k)$.) In [ ], Rains proved the following.

Theorem 3.16
1. If $U$ is distributed according to Haar measure on $U(n)$ and $j \in \mathbb{N}$ is fixed, then $\mathbb{E}\big[|\mathrm{Tr}(U)|^{2j}\big]$ is equal to the number of permutations $\pi$ of $\{1,\dots,j\}$ such that $\pi$ has no increasing subsequence of length greater than $n$.
2. If $U$ is distributed according to Haar measure on $O(n)$ and $j \in \mathbb{N}$ is fixed, then $\mathbb{E}\big[\mathrm{Tr}(U)^j\big]$ is equal to the number of permutations $\pi$ of $\{1,\dots,j\}$ such that $\pi^{-1} = \pi$, $\pi$ has no fixed points, and $\pi$ has no increasing subsequence of length greater than $n$.
3. If $U$ is distributed according to Haar measure on $Sp(2n)$ and $j \in \mathbb{N}$ is fixed, then $\mathbb{E}\big[\mathrm{Tr}(U)^j\big]$ is equal to the number of permutations $\pi$ of $\{1,\dots,j\}$ such that $\pi^{-1} = \pi$, $\pi$ has no fixed points, and $\pi$ has no decreasing subsequence of length greater than $2n$.

The approach taken by Rains involved expansion in terms of characters, similar to the proof of Proposition 3.11. The connection between Haar-distributed random matrices and increasing subsequence problems was further explored by Baik and Rains [ ], who showed that the moments of traces of random matrices from the classical groups are equal to the dimensions of certain invariant spaces of the groups (these subspaces having a natural connection to increasing subsequence problems). Other approaches to Proposition 3.11 have since been developed: Stolz [ ] gave an approach using invariant theory, and Hughes and Rudnick [ ] gave an approach using cumulants, which in fact extended the range in which the matrix moments and the corresponding Gaussian moments match exactly.

In [ ], Rains proved results analogous to Theorem 3.14 for random matrices in $SO(n)$, $SO^-(n)$, and $Sp(2n)$; they are more complicated to state because of parity issues and because of the existence of trivial eigenvalues in some cases. In fact, Rains went much further, proving analogous results for general compact Lie groups.

4 Eigenvalue distributions: asymptotics

4.1 The eigenvalue counting function

In this section we explore some of the consequences of the determinantal structure of the eigenvalue processes for the counting functions $N_D$, and in particular, for $N_\theta = \#\{j : 0 \le \theta_j \le \theta\}$. We begin with some further background on determinantal point processes.

Some general features of determinantal point processes

The following remarkable property of certain determinantal point processes will be crucial in our analysis of the eigenvalue processes on the classical compact groups.

Theorem 4.1 Let $X$ be a determinantal point process on a compact metric measure space $(\Lambda,\mu)$ with kernel $K : \Lambda \times \Lambda \to \mathbb{C}$. Suppose that the corresponding integral operator $\mathcal{K} : L^2(\mu) \to L^2(\mu)$ defined by
\[ \mathcal{K}(f)(x) = \int K(x,y)f(y)\,d\mu(y) \tag{4.1} \]
is self-adjoint, nonnegative, and trace-class with eigenvalues in $[0,1]$. For $D \subseteq \Lambda$ measurable, let $K_D(x,y) = \mathbf{1}_D(x)K(x,y)\mathbf{1}_D(y)$ be the restriction of $K$ to $D$, and denote by $\{\lambda_k\}_{k\in A}$ the eigenvalues of the corresponding operator $\mathcal{K}_D$ on $L^2(D)$ ($A$ may be finite or countable). Let $N_D$ be the number of particles of the determinantal point process with kernel $K$ which lie in $D$. Then
\[ N_D \stackrel{d}{=} \sum_{k\in A} \xi_k, \]
where "$\stackrel{d}{=}$" denotes equality in distribution and the $\xi_k$ are independent Bernoulli random variables with $\mathbb{P}[\xi_k = 1] = \lambda_k$ and $\mathbb{P}[\xi_k = 0] = 1 - \lambda_k$.
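Before developing the proof, Theorem 4.1 can be illustrated numerically for an arc $D = [0,\theta_0)$ in the $U(N)$ eigenvalue process. The sketch below (assuming numpy/scipy; the Nyström-style discretization, grid size, and all parameters are ad hoc illustrative choices) approximates the eigenvalues of the restricted kernel operator for the kernel $S_N(x-y)$ of Proposition 3.9, samples $N_D$ as a sum of independent Bernoullis, and compares the mean and variance with direct eigenvalue simulation:

```python
import numpy as np
from scipy.stats import unitary_group

N, theta0, m = 30, 1.0, 600
x = (np.arange(m) + 0.5) * theta0 / m               # grid on D = [0, theta0)

def S(u, N):
    # S_N(u) = sin(N u / 2) / sin(u / 2), with the removable singularity at 0
    u = np.where(np.abs(u) < 1e-9, 1e-9, u)
    return np.sin(N * u / 2) / np.sin(u / 2)

# discretized restricted operator, w.r.t. uniform measure d(theta)/(2 pi)
M = S(np.subtract.outer(x, x), N) * (theta0 / m) / (2 * np.pi)
lam = np.clip(np.linalg.eigvalsh(M), 0.0, 1.0)      # approximate Bernoulli parameters

rng = np.random.default_rng(5)
bern = [(rng.random(lam.size) < lam).sum() for _ in range(5000)]
direct = [((np.angle(np.linalg.eigvals(unitary_group.rvs(N))) % (2*np.pi)) < theta0).sum()
          for _ in range(5000)]
print(np.mean(bern), np.var(bern))        # Bernoulli-sum model of N_D
print(np.mean(direct), np.var(direct))    # direct simulation of N_D
```

Both means should be close to $N\theta_0/2\pi \approx 4.77$, and the variances should agree as well, reflecting the full distributional identity of the theorem.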
Identifying the counting function of a point process as a sum of independent $\{0,1\}$-valued random variables opens the door to the use of countless results of classical probability theory.

To prove the theorem, we need some preliminaries about the operator $\mathcal{K}$ defined in (4.1). We will assume in what follows that the kernel $K : \Lambda\times\Lambda \to \mathbb{C}$ is continuous, conjugate symmetric, and positive definite; i.e., for all $n \in \mathbb{N}$, $x_1,\dots,x_n \in \Lambda$, and $z_1,\dots,z_n \in \mathbb{C}$,
\[ \sum_{i,j=1}^n K(x_i,x_j)\,z_i\overline{z_j} \ge 0. \]
We will also assume that $\Lambda$ is compact. Under these assumptions, it is a classical result of operator theory that the operator $\mathcal{K} : L^2(\mu) \to L^2(\mu)$ is self-adjoint, nonnegative, and trace-class; i.e., if $\{\lambda_j\}_{j\in J}$ are the eigenvalues of $\mathcal{K}$ ($J$ may be finite or infinite), then $\sum_{j\in J}\lambda_j < \infty$. (The $\lambda_j$ are necessarily nonnegative since $\mathcal{K}$ is nonnegative.) Moreover, $\mathcal{K}$ can be diagonalized: there are orthonormal eigenfunctions $\{\varphi_j\}_{j\in J}$ of $\mathcal{K}$ such that, for any $f \in L^2(\mu)$,
\[ \mathcal{K}(f) = \sum_{j\in J} \lambda_j \langle f,\varphi_j\rangle\,\varphi_j, \]
where the right-hand side converges in $L^2(\mu)$ for any $f \in L^2(\mu)$. One can then conclude from a simple measure-theoretic argument that
\[ K(x,y) = \sum_{j\in J} \lambda_j \varphi_j(x)\overline{\varphi_j(y)} \]
for $\mu\times\mu$ almost every $(x,y)$.

If $\lambda_j = 1$ for all $j$, then the operator $\mathcal{K}$ defined by $K$ is just orthogonal projection in $L^2(\mu)$ onto the span of the $\varphi_j$; in this case, the following result shows that the total number of points in the process is deterministic and equal to the rank of $\mathcal{K}$.

Proposition 4.2 Suppose that $X$ is a determinantal point process on $\Lambda$ with kernel
\[ K(x,y) = \sum_{j=1}^N \varphi_j(x)\overline{\varphi_j(y)}, \]
where $\{\varphi_j\}_{j=1}^N$ are orthonormal in $L^2(\mu)$. Then with probability one, $N_\Lambda = N$.

Proof Observe first that for any $n \in \mathbb{N}$,
\[ \big(K(x_i,x_j)\big)_{1\le i,j\le n} = \Phi\Phi^*, \qquad \Phi = \big(\varphi_j(x_i)\big)_{\substack{1\le i\le n \\ 1\le j\le N}} \in M_{n,N}(\mathbb{C}). \]
In particular, if $n > N$, then $\det\big(K(x_i,x_j)\big)_{1\le i,j\le n} = 0$, since $\Phi\Phi^*$ has rank at most $N$, and thus $N_\Lambda \le N$. But also
\[ \mathbb{E} N_\Lambda = \int_\Lambda K(x,x)\,d\mu(x) = \sum_{j=1}^N \int_\Lambda |\varphi_j(x)|^2\,d\mu(x) = N. \qquad \Box \]

In the context of eigenvalue processes, we began with point processes and found them to be determinantal with explicitly described kernels. To prove Theorem 4.1, we will need to go in the other direction; namely, start with a kernel and from it get a point process. Recall that, by the results of Macchi and Soshnikov, this is always possible as long as the corresponding operator $\mathcal{K}$ is self-adjoint, trace-class, and has all eigenvalues in $[0,1]$.
Proof of Theorem 4.1 First observe that it suffices to prove the theorem when $D = \Lambda$, since if $K$ defines a self-adjoint, nonnegative trace-class operator with eigenvalues in $[0,1]$, so does $K_D$.

Suppose that $\mathcal{K}$ is a finite-rank operator; i.e.,
\[ K(x,y) = \sum_{j=1}^N \lambda_j \varphi_j(x)\overline{\varphi_j(y)} \]
for some finite $N$. Define a randomized version of $K$ by
\[ K_I(x,y) = \sum_{j=1}^N I_j \varphi_j(x)\overline{\varphi_j(y)}, \]
where $\{I_j\}_{j=1}^N$ are independent Bernoulli random variables with $\mathbb{P}[I_j = 1] = \lambda_j$ and $\mathbb{P}[I_j = 0] = 1-\lambda_j$. Let $X_I$ denote the point process on $\Lambda$ with kernel $K_I$; i.e., a random set of points constructed by first sampling the $I_j$ to determine a kernel $K_I$, then sampling from the point process with kernel $K_I$. We will show that $X_I$ has the same $k$-point correlation functions as $X$, the point process defined by $K$.

Once we have shown this, the proof is completed as follows. Observe that $K_I$ defines a projection operator. Specifically, $\mathcal{K}_I$ is orthogonal projection onto the span of those $\varphi_j$ for which $I_j = 1$, and so the rank of $\mathcal{K}_I$ is $\sum_{j=1}^N I_j$. It thus follows from Proposition 4.2 that
\[ |X| \stackrel{d}{=} |X_I| = \sum_{j=1}^N I_j. \]

We now show that the kernels $K$ and $K_I$ define the same point processes. Note that, as in the proof of Proposition 4.2 above, if $n \in \mathbb{N}$ and
\[ C = \big(K(x_i,x_j)\big)_{1\le i,j\le n}, \qquad C_I = \big(K_I(x_i,x_j)\big)_{1\le i,j\le n}, \]
then $C = \Phi\,\mathrm{diag}(\lambda_1,\dots,\lambda_N)\,\Phi^*$ and $C_I = \Phi\,\mathrm{diag}(I_1,\dots,I_N)\,\Phi^*$, where $\Phi = \big(\varphi_j(x_i)\big) \in M_{n,N}(\mathbb{C})$. In particular, if $n > N$, then $\det(C) = \det(C_I) = 0$, since both matrices have rank at most $N$. For $n \le N$, we must show that $\mathbb{E}\det(C_I) = \det(C)$, for which we use the Cauchy-Binet formula: for $C = AB$,
\[ \det(C) = \sum_{1\le k_1<\cdots<k_n\le N} \det\big(A_{[n],\{k_1,\dots,k_n\}}\big)\det\big(B_{\{k_1,\dots,k_n\},[n]}\big), \]
where $[n] := \{1,\dots,n\}$ and for sets of indices $K_1, K_2$, $A_{K_1,K_2}$ denotes the matrix obtained from $A$ by deleting all rows except those with indices in $K_1$ and all columns except those with indices in $K_2$. By multilinearity and independence,
\[ \mathbb{E}\det\big((\Phi\,\mathrm{diag}(I_1,\dots,I_N))_{[n],\{k_1,\dots,k_n\}}\big) = \mathbb{E}\big[I_{k_1}\cdots I_{k_n}\big]\det\big(\Phi_{[n],\{k_1,\dots,k_n\}}\big) = \lambda_{k_1}\cdots\lambda_{k_n}\det\big(\Phi_{[n],\{k_1,\dots,k_n\}}\big) = \det\big((\Phi\,\mathrm{diag}(\lambda_1,\dots,\lambda_N))_{[n],\{k_1,\dots,k_n\}}\big). \]
The result now follows from the Cauchy-Binet formula.

If $K(x,y) = \sum_{j=1}^\infty \lambda_j\varphi_j(x)\overline{\varphi_j(y)}$, so that $\mathcal{K}$ is not a finite-rank operator, we consider the truncated kernel
\[ K_N(x,y) = \sum_{j=1}^N \lambda_j\varphi_j(x)\overline{\varphi_j(y)} \]
corresponding to a rank-$N$ approximation of $\mathcal{K}$. Fix $n \in \mathbb{N}$; the argument above shows that for $x_1,\dots,x_n \in \Lambda$,
\[ \mathbb{E}\det\big(K_{N,I}(x_i,x_j)\big)_{1\le i,j\le n} = \det\big(K_N(x_i,x_j)\big)_{1\le i,j\le n}, \]
where $K_{N,I}$ denotes the randomized version of $K_N$, with the $\lambda_j$ replaced by independent Bernoulli random variables. By continuity of the determinant,
\[ \det\big(K(x_i,x_j)\big)_{1\le i,j\le n} = \lim_{N\to\infty}\det\big(K_N(x_i,x_j)\big)_{1\le i,j\le n} \]
for all $x_1,\dots,x_n \in \Lambda$ such that the sequences $\{\varphi_j(x_i)\}_{j\ge 1}$ are square-summable (which is $\mu$-almost all of them). That is, we only need to show that
\[ \mathbb{E}\Big[\lim_{N\to\infty}\det\big(K_{N,I}(x_i,x_j)\big)_{1\le i,j\le n}\Big] = \lim_{N\to\infty}\mathbb{E}\det\big(K_{N,I}(x_i,x_j)\big)_{1\le i,j\le n} \tag{4.2} \]
for $\mu$-almost every $x_1,\dots,x_n$. By the Cauchy-Binet formula again,
\[ \det\big(K_{N,I}(x_i,x_j)\big)_{1\le i,j\le n} = \sum_{1\le k_1<\cdots<k_n\le N} \det\big((\Phi D_I)_{[n],\{k_1,\dots,k_n\}}\big)\det\big(\Phi^*_{\{k_1,\dots,k_n\},[n]}\big), \]
where $\Phi$ is the half-infinite array with $(i,j)$th entry $\varphi_j(x_i)$ (where $1\le i\le n$ and $j \ge 1$), and $D_I = \mathrm{diag}(I_1,I_2,\dots)$. By multilinearity, this is
\[ \sum_{1\le k_1<\cdots<k_n\le N} I_{k_1}\cdots I_{k_n} \Big|\det\big(\Phi_{[n],\{k_1,\dots,k_n\}}\big)\Big|^2, \]
which is increasing in $N$ for each choice of $I_1,I_2,\dots$. The interchange of limit and expectation in (4.2) thus follows from the monotone convergence theorem. The proof that the number of points in the process is a sum of independent Bernoulli random variables now follows as before: the fact that $\mathcal{K}$ is trace-class implies that with probability one, only finitely many $I_j$ are nonzero. □

The following formulae are quite useful in analyzing the counting functions of determinantal point processes.

Lemma 4.3 Let $K : \Lambda\times\Lambda \to \mathbb{C}$ be a continuous kernel such that the corresponding operator on $L^2(\mu)$ is a trace-class orthogonal projection. For $D \subseteq \Lambda$, denote by $N_D$ the number of particles of the determinantal point process with kernel $K$ which lie in $D$. Then
\[ \mathbb{E} N_D = \int_D K(x,x)\,d\mu(x) \]
and
\[ \mathrm{Var}\,N_D = \int_D\int_{D^c} |K(x,y)|^2\,d\mu(y)\,d\mu(x). \]
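For the $U(N)$ kernel of Proposition 3.9, Lemma 4.3 can be evaluated numerically for an arc. The sketch below (numpy assumed; grid sizes are arbitrary discretization choices) computes $\mathbb{E} N_\theta$ and $\mathrm{Var}\,N_\theta$ for $D = [0,\theta)$ by quadrature; the mean comes out exactly $N\theta/2\pi$ and the variance grows only logarithmically in $N$, foreshadowing Proposition 4.8 below.

```python
import numpy as np

N, theta, m = 20, np.pi/2, 800
xs = (np.arange(m) + 0.5) * theta / m                            # grid on D
ys = theta + (np.arange(3*m) + 0.5) * (2*np.pi - theta) / (3*m)  # grid on D^c

def S(u, N):
    u = np.where(np.abs(u) < 1e-9, 1e-9, u)
    return np.sin(N*u/2) / np.sin(u/2)

# E N_theta = integral over D of K(x,x) d(mu), with K(x,x) = S_N(0) = N
mean = N * theta / (2*np.pi)

# Var N_theta = double integral over D x D^c of |K(x,y)|^2 d(mu) d(mu)
K2 = S(np.subtract.outer(xs, ys), N)**2
var = K2.sum() * (theta/m) * ((2*np.pi - theta)/(3*m)) / (2*np.pi)**2

print(mean, var)   # mean = 5.0; var is of order (log N)
```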
Proof The first statement is trivial. For the second, observe that if $N$ is the (deterministic) total number of points in $\Lambda$,
\[ \mathbb{E}\big[N_D^2\big] = \mathbb{E}\big[N_D(N - N_{D^c})\big] = N\,\mathbb{E} N_D - \mathbb{E}\big[N_D N_{D^c}\big]. \]
Now,
\[ \mathbb{E}\big[N_D N_{D^c}\big] = \int_D\int_{D^c} \rho_2(x,y)\,d\mu(y)\,d\mu(x) = \int_D\int_{D^c} \big(K(x,x)K(y,y) - K(x,y)K(y,x)\big)\,d\mu(y)\,d\mu(x) = \mathbb{E} N_D\,\mathbb{E} N_{D^c} - \int_D\int_{D^c} |K(x,y)|^2\,d\mu(y)\,d\mu(x), \]
using the fact that $K(y,x) = \overline{K(x,y)}$. Making use again of the relation $N_{D^c} = N - N_D$, we have from above that
\[ \mathbb{E}\big[N_D^2\big] = \big(\mathbb{E} N_D\big)^2 + \int_D\int_{D^c} |K(x,y)|^2\,d\mu(y)\,d\mu(x), \]
and the variance formula follows. □

Exercise 4.4 Confirm that the kernels $K_N$ and $L_N$ of Propositions 3.7 and 3.9 satisfy the conditions of the lemma.

The following lemma, whose proof we omit, relates the limiting behavior of the counting functions of a sequence of determinantal point processes on $\mathbb{R}$ to the counting functions of a limiting process, when there is convergence of the corresponding kernels.

Lemma 4.5 Let $\{K_N(x,y)\}_{N\in\mathbb{N}}$ be a sequence of kernels of determinantal point processes, and suppose that there is a kernel $K(x,y)$ such that $K_N(x,y) \to K(x,y)$ uniformly on compact sets. Let $\mathcal{N}^{(N)}(\cdot)$ denote the counting function of the process with kernel $K_N$ and $\mathcal{N}(\cdot)$ the counting function of the process with kernel $K$. Let $m \in \mathbb{N}$ and let $\{D_\ell\}_{\ell=1}^m$ be a finite collection of compact disjoint subsets of $\mathbb{R}$. Then the random vector $(\mathcal{N}^{(N)}(D_1),\dots,\mathcal{N}^{(N)}(D_m))$ converges in distribution to the random vector $(\mathcal{N}(D_1),\dots,\mathcal{N}(D_m))$.

For a proof, see [1, Section 4.2.8].

Asymptotics for the eigenvalue counting function

We now return to the counting functions of the eigenvalue processes on the classical compact groups. Consider the sine kernel
\[ K(x,y) = \frac{\sin(\pi(x-y))}{\pi(x-y)} \]
on $\mathbb{R}$. The sine kernel is the kernel, with respect to Lebesgue measure, of an unbounded determinantal point process on $\mathbb{R}$ called the sine kernel process. The most classical result on the asymptotics of the eigenvalue counts is that, when suitably rescaled, they converge to the sine kernel process as the matrix size tends to infinity, as follows.

Theorem 4.6 Let $\{x_1,\dots,x_N\}$ be the nontrivial eigenangles of a random matrix distributed according to Haar measure in one of $U(N)$, $SO(2N)$, $SO^-(2N+2)$, $SO(2N+1)$, $SO^-(2N+1)$, or $Sp(2N)$, recentered and rescaled to lie in $\big[-\frac{N}{2},\frac{N}{2}\big]$. For a compact set $D \subseteq \mathbb{R}$, let $\mathcal{N}^{(N)}(D) := \#\{j : x_j \in D\}$. Let $\chi$ denote the sine kernel process on $\mathbb{R}$ and let $\mathcal{S}(D)$ denote the number of points of $\chi$ in $D$. For any collection $\{D_\ell\}_{\ell=1}^m$ of compact disjoint subsets of $\mathbb{R}$, the random vector $(\mathcal{N}^{(N)}(D_1),\dots,\mathcal{N}^{(N)}(D_m))$ converges in distribution to the random vector $(\mathcal{S}(D_1),\dots,\mathcal{S}(D_m))$.

Proof In each case, this is an easy consequence of Lemma 4.5. First consider the unitary case: let $\theta_1,\dots,\theta_N$ be the angles of the eigenvalues of a random unitary matrix. By a change of variables, the kernel for the recentered, rescaled process $\big\{\frac{N\theta_1}{2\pi} - \frac{N}{2},\dots,\frac{N\theta_N}{2\pi} - \frac{N}{2}\big\}$ on $\big[-\frac{N}{2},\frac{N}{2}\big]$ is given by
\[ \frac{1}{N}\,\frac{\sin(\pi(x-y))}{\sin\big(\frac{\pi(x-y)}{N}\big)} \longrightarrow \frac{\sin(\pi(x-y))}{\pi(x-y)} \]
as $N \to \infty$, uniformly on compact sets.

Next consider the case of $SO(2N)$. Let $\{\theta_1,\dots,\theta_N\} \subseteq [0,\pi]$ denote the nontrivial eigenangles. Again by changing variables, the kernel for the recentered, rescaled process $\big\{\frac{N\theta_1}{\pi} - \frac{N}{2},\dots,\frac{N\theta_N}{\pi} - \frac{N}{2}\big\}$ on $\big[-\frac{N}{2},\frac{N}{2}\big]$ is
\[ \frac{1}{N}\, L^{SO(2N)}_N\Big(\frac{\pi x}{N} + \frac{\pi}{2},\ \frac{\pi y}{N} + \frac{\pi}{2}\Big) = \frac{\sin\big(\big(1-\frac{1}{2N}\big)\pi(x-y)\big)}{2N\sin\big(\frac{\pi(x-y)}{2N}\big)} - \frac{\cos\big(\big(1-\frac{1}{2N}\big)\pi(x+y) + N\pi\big)}{2N\cos\big(\frac{\pi(x+y)}{2N}\big)}, \]
which again tends to the sine kernel uniformly on compact sets. The remaining cases are essentially identical. □
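The convergence of the rescaled unitary kernel to the sine kernel is easy to see numerically. A tiny sketch (numpy assumed; the grid of differences is arbitrary):

```python
import numpy as np

# Rescaled U(N) kernel (1/N) sin(pi u) / sin(pi u / N), with u = x - y,
# versus the sine kernel sin(pi u)/(pi u); the difference is O(1/N^2).
N = 200
u = np.linspace(0.01, 3, 50)
resc = np.sin(np.pi*u) / (N * np.sin(np.pi*u/N))
sine = np.sin(np.pi*u) / (np.pi*u)
print(np.max(np.abs(resc - sine)))   # small, and -> 0 as N grows
```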
We now specialize to the counting functions for arcs:
\[ N_\theta = \#\{j : 0 \le \theta_j \le \theta\}, \]
where as usual $\theta$ varies over $[0,2\pi]$ in the unitary case and $\theta \in [0,\pi]$ in all other cases. We begin by using Lemma 4.3 to calculate means and variances.

Proposition 4.7
1. Let $U$ be uniform in $U(N)$. For $\theta \in [0,2\pi)$, let $N_\theta$ be the number of eigenvalue angles of $U$ in $[0,\theta)$. Then $\mathbb{E} N_\theta = \frac{N\theta}{2\pi}$.
2. Let $U$ be uniform in one of $SO(2N)$, $SO^-(2N+2)$, $SO(2N+1)$, $SO^-(2N+1)$, or $Sp(2N)$. For $\theta \in [0,\pi)$, let $N_\theta$ be the number of nontrivial eigenvalue angles of $U$ in $[0,\theta)$. Then
\[ \Big| \mathbb{E} N_\theta - \frac{N\theta}{\pi} \Big| < 1. \]

Proof The equality for the unitary group follows from symmetry considerations, or immediately from Proposition 3.9 and Lemma 4.3. In the case of $Sp(2N)$ or $SO^-(2N+2)$, by Proposition 3.7 and Lemma 4.3,
\[ \mathbb{E} N_\theta = \frac{1}{\pi} \int_0^\theta \sum_{j=1}^N 2\sin^2(jx)\,dx = \frac{N\theta}{\pi} - \frac{1}{2\pi}\sum_{j=1}^N \frac{\sin(2j\theta)}{j}. \]
Define $a_0 = 0$ and $a_j = \sum_{k=1}^j \sin(2k\theta)$. Then by summation by parts,
\[ \sum_{j=1}^N \frac{\sin(2j\theta)}{j} = \frac{a_N}{N} + \sum_{j=1}^{N-1} \frac{a_j}{j(j+1)}. \]
Trivially, $|a_N| \le N$. Now observe that
\[ a_j = \mathrm{Im}\Bigg(\sum_{k=1}^j e^{2ik\theta}\Bigg) = \mathrm{Im}\Bigg(e^{2i\theta}\,\frac{e^{2ij\theta}-1}{e^{2i\theta}-1}\Bigg) = \mathrm{Im}\Bigg(e^{i(j+1)\theta}\,\frac{\sin(j\theta)}{\sin\theta}\Bigg) = \frac{\sin((j+1)\theta)\sin(j\theta)}{\sin\theta}. \]
Since $|a_j|$ is invariant under the substitution $\theta \mapsto \pi - \theta$, it suffices to assume that $0 < \theta \le \pi/2$. In that case $\sin\theta \ge 2\theta/\pi$, and so
\[ \Bigg|\sum_{j=1}^{N-1}\frac{a_j}{j(j+1)}\Bigg| \le \frac{\pi}{2\theta}\Bigg[\sum_{1\le j\le 1/\theta}\theta^2 + \sum_{1/\theta<j\le N-1}\frac{1}{j(j+1)}\Bigg]. \]
Clearly, $\sum_{1\le j\le 1/\theta}\theta^2 \le \theta$. For the second term, note that
\[ \sum_{1/\theta<j\le N-1}\frac{1}{j(j+1)} = \sum_{1/\theta<j\le N-1}\Bigg(\frac{1}{j} - \frac{1}{j+1}\Bigg) = \frac{1}{j_0} - \frac{1}{N} \le \theta, \]
where $j_0$ denotes the first index in the sum. That is,
\[ \Bigg|\sum_{j=1}^{N-1}\frac{a_j}{j(j+1)}\Bigg| \le \frac{\pi}{2\theta}(\theta + \theta) = \pi. \]
All together,
\[ \Big|\mathbb{E} N_\theta - \frac{N\theta}{\pi}\Big| \le \frac{1+\pi}{2\pi} < 1. \]
The other cases are handled similarly. □

Proposition 4.8 Let $U$ be uniform in one of $U(N)$, $SO(2N)$, $SO^-(2N+2)$, $SO(2N+1)$, $SO^-(2N+1)$, or $Sp(2N)$. For $\theta \in [0,2\pi)$ (in the unitary case) or $\theta \in [0,\pi]$ (in all other cases), let $N_\theta$ be the number of eigenvalue angles of $U$ in $[0,\theta)$. For each group or coset, there is a constant $c$, depending only on the group or coset, such that
\[ \mathrm{Var}\,N_\theta \le c\big(\log N + 1\big). \]

Proof We treat the unitary case, which is the simplest, first. Note that if $\theta \in (\pi,2\pi)$, then $N_\theta \stackrel{d}{=} N - N_{2\pi-\theta}$, and so it suffices to assume that $\theta \le \pi$. By Proposition 3.9 and Lemma 4.3,
\[ \mathrm{Var}\,N_\theta = \frac{1}{4\pi^2}\int_0^\theta\int_\theta^{2\pi} S_N(x-y)^2\,dx\,dy = \frac{1}{4\pi^2}\int_0^\theta\int_{\theta-y}^{2\pi-y} \frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz\,dy = \frac{1}{4\pi^2}\Bigg[\int_0^\theta z\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz + \int_\theta^{2\pi-\theta} \theta\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz + \int_{2\pi-\theta}^{2\pi} (2\pi-z)\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz\Bigg] = \frac{1}{2\pi^2}\Bigg[\int_0^\theta z\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz + \int_\theta^\pi \theta\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz\Bigg], \]
where the third equality follows by changing the order of integration and evaluating the resulting inner integrals, and the fourth by the symmetry $z \mapsto 2\pi - z$.

For the first integral, since $\sin\big(\frac{z}{2}\big) \ge \frac{z}{\pi}$ for all $z \in [0,\theta]$, if $\theta > \frac{1}{N}$, then
\[ \int_0^\theta z\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz \le \int_0^{1/N} \frac{(\pi N)^2 z}{4}\,dz + \int_{1/N}^\theta \frac{\pi^2}{z}\,dz = \pi^2\Bigg(\frac{1}{8} + \log N + \log\theta\Bigg). \]
If $\theta \le \frac{1}{N}$, there is no need to break up the integral, and one simply has the bound $\frac{(\pi N\theta)^2}{8} \le \frac{\pi^2}{8}$.

Similarly, if $\theta < \frac{1}{N}$, then
\[ \int_\theta^\pi \theta\,\frac{\sin^2\big(\frac{Nz}{2}\big)}{\sin^2\big(\frac{z}{2}\big)}\,dz \le \int_\theta^{1/N} \frac{\theta(\pi N)^2}{4}\,dz + \int_{1/N}^\pi \frac{\pi^2\theta}{z^2}\,dz = \frac{\pi^2\theta N}{4}(1 - N\theta) + \pi^2 N\theta - \pi\theta \le \frac{5\pi^2}{4}; \]
if $\theta \ge \frac{1}{N}$, there is no need to break up the integral, and one simply has a bound of $\pi^2$. All together,
\[ \mathrm{Var}\,N_\theta \le \log N + \frac{11}{16}. \]
The remaining cases are similar but more complicated, since the remaining kernels from Proposition 3.9 are sums of two terms.
We will sketch the proof for $SO(2N)$ and leave filling in the details and treating the remaining cases as exercises for the extremely dedicated reader. Let $\theta \in [0,\pi]$; by Proposition 3.9 and Lemma 4.3,
\[ \mathrm{Var}\,N_\theta = \frac{1}{4\pi^2}\int_0^\theta\int_\theta^\pi \big(S_{2N-1}(x-y) + S_{2N-1}(x+y)\big)^2\,dx\,dy = \frac{1}{4\pi^2}\Bigg\{\int_0^\theta\int_\theta^\pi S_{2N-1}(x-y)^2\,dx\,dy + 2\int_0^\theta\int_\theta^\pi S_{2N-1}(x-y)S_{2N-1}(x+y)\,dx\,dy + \int_0^\theta\int_\theta^\pi S_{2N-1}(x+y)^2\,dx\,dy\Bigg\}. \]
Note that the integrals are invariant under the simultaneous substitutions $s = \pi - x$ and $t = \pi - y$, and so we may assume that $\theta \in \big[0,\frac{\pi}{2}\big]$.

Now, the first term is essentially identical to the unitary case and is bounded in the same way by $\frac{13}{32} + \frac{1}{4}\log(2N-1)$. The final term is similar, but easier: one simply lets $z = x+y$, changes the order of integration, and bounds the resulting integrals as in the unitary case. The numerator can be bounded by $1$ in all cases, and the resulting bound on the third term is a constant, independent of $N$.

For the cross term, making the substitutions $z = x+y$ and $w = x-y$ yields
\[ \int_0^\theta\int_\theta^\pi S_{2N-1}(x-y)S_{2N-1}(x+y)\,dx\,dy = \int_\theta^{2\theta}\int_{2\theta-z}^z \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz + \int_{2\theta}^\pi\int_{z-2\theta}^z \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz + \int_\pi^{\pi+\theta}\int_{z-2\theta}^{2\pi-z} \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz. \tag{4.3} \]

To handle the first summand, first observe that $w \le z \le 2\theta \le \pi$, and so $\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big) \ge \frac{zw}{\pi^2}$. Evaluating the resulting inner integral: if $\frac{1}{2N-1} \le 2\theta - z$, estimating the numerator by $1$ yields
\[ \int_{2\theta-z}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \log(z) - \log(2\theta-z) \le \log\pi + \log(2N-1). \]
If $2\theta - z \le \frac{1}{2N-1} \le z$, then using the estimate $\sin x \le x$ in the first part of the interval and $\sin x \le 1$ in the second part yields
\[ \int_{2\theta-z}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{2\theta-z}^{1/(2N-1)} \frac{2N-1}{2}\,dw + \int_{1/(2N-1)}^z \frac{1}{w}\,dw \le \frac{1}{2} + \log z + \log(2N-1) \le \frac{1}{2} + \log\pi + \log(2N-1). \]
Finally, if $z \le \frac{1}{2N-1}$, then using the estimate $\sin x \le x$ yields
\[ \int_{2\theta-z}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{2\theta-z}^z \frac{2N-1}{2}\,dw = \frac{2N-1}{2}(2z - 2\theta) \le 1. \]
The first term of (4.3) is thus bounded by
\[ \pi^2 \int_\theta^{2\theta} \Bigg[\frac{1}{2} + \log\pi + \log(2N-1)\Bigg] \frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz \le \pi^2\Bigg[\frac{1}{2} + \log\pi + \log(2N-1)\Bigg]\int_\theta^{2\theta}\frac{dz}{z} = \pi^2\Bigg[\frac{1}{2} + \log\pi + \log(2N-1)\Bigg]\log 2. \]

For the second term, if $\frac{1}{2N-1} \le z - 2\theta$, then the inner integral is
\[ \int_{z-2\theta}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{z-2\theta}^z \frac{dw}{w} = \log(z) - \log(z-2\theta) = \log\Bigg(1 + \frac{2\theta}{z-2\theta}\Bigg). \]
The right-most expression is a decreasing function of $z$, and so it is maximized in this regime at $z = \frac{1}{2N-1} + 2\theta$; that is, if $\frac{1}{2N-1} \le z - 2\theta$, then
\[ \int_{z-2\theta}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \log\Bigg(\frac{1}{2N-1} + 2\theta\Bigg) + \log(2N-1) = \log\big(1 + 2\theta(2N-1)\big). \]
If $z - 2\theta \le \frac{1}{2N-1} \le z$, then breaking up the integral as above yields
\[ \int_{z-2\theta}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{z-2\theta}^{1/(2N-1)} \frac{2N-1}{2}\,dw + \int_{1/(2N-1)}^z \frac{dw}{w} \le \frac{1}{2} + \log z + \log(2N-1) = \frac{1}{2} + \log\big(z(2N-1)\big). \]
If $z \le \frac{1}{2N-1}$, then
\[ \int_{z-2\theta}^z \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{z-2\theta}^z \frac{2N-1}{2}\,dw = 2\theta\,\frac{2N-1}{2} \le 2z\,\frac{2N-1}{2} \le 1. \]
If $2\theta < \frac{1}{2N-1}$, then the second term of (4.3) is bounded using all three estimates above:
\[ \int_{2\theta}^\pi\int_{z-2\theta}^z \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz \le \pi^2\int_{2\theta}^{1/(2N-1)} \frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz + \pi^2\int_{1/(2N-1)}^{1/(2N-1)+2\theta} \Bigg[\frac{1}{2} + \log\big(z(2N-1)\big)\Bigg]\frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz + \pi^2\log\big(2\theta(2N-1)+1\big)\int_{1/(2N-1)+2\theta}^\pi \frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz. \]
Now,
\[ \int_{2\theta}^{1/(2N-1)} \frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz \le \frac{2N-1}{2}\Bigg(\frac{1}{2N-1} - 2\theta\Bigg) \le \frac{1}{2}, \]
and
\[ \int_{1/(2N-1)}^{1/(2N-1)+2\theta} \Bigg[\frac{1}{2} + \log\big(z(2N-1)\big)\Bigg]\frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz \le \frac{2N-1}{2}\,\theta + \int_1^{1+2\theta(2N-1)} \frac{\log s}{s}\,ds \le \frac{1}{4} + \frac{\log^2\big(1+2\theta(2N-1)\big)}{2} \le \frac{1}{4} + \frac{\log^2 2}{2}. \]
Finally,
\[ \log\big(2\theta(2N-1)+1\big)\int_{1/(2N-1)+2\theta}^\pi \frac{\big|\sin\big(\frac{2N-1}{2}z\big)\big|}{z}\,dz \le \log 2 \int_{1/(2N-1)+2\theta}^\pi \frac{dz}{z} \le \log 2\,\big(\log\pi + \log(2N-1)\big), \]
and so in the case that $2\theta < \frac{1}{2N-1}$, the second term of (4.3) is bounded by $c\log(2N-1)$.

If $2\theta \ge \frac{1}{2N-1}$, then from the estimates above,
\[ \int_{2\theta}^\pi\int_{z-2\theta}^z \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz \le \pi^2\Bigg\{\int_{2\theta}^{2\theta+1/(2N-1)} \Bigg[\frac{1}{2} + \log\big(z(2N-1)\big)\Bigg]\frac{dz}{z} + \int_{2\theta+1/(2N-1)}^\pi \log\Bigg(1 + \frac{2\theta}{z-2\theta}\Bigg)\frac{dz}{z}\Bigg\}. \]
For the first term,
\[ \int_{2\theta}^{2\theta+1/(2N-1)} \Bigg[\frac{1}{2} + \log\big(z(2N-1)\big)\Bigg]\frac{dz}{z} \le \Bigg[\frac{1}{2} + \log\big(2\theta(2N-1)+1\big)\Bigg]\log\Bigg(1 + \frac{1}{\theta(2N-1)}\Bigg) \le \Bigg[\frac{1}{2} + \log\big(\pi(2N-1)+1\big)\Bigg]\log 3. \]
For the second term, using the fact that $\log\big(1 + \frac{2\theta}{z-2\theta}\big)$ is decreasing as a function of $z$,
\[ \int_{2\theta+1/(2N-1)}^\pi \log\Bigg(1 + \frac{2\theta}{z-2\theta}\Bigg)\frac{dz}{z} \le \log\big(1+2\theta(2N-1)\big)\int_{2\theta+1/(2N-1)}^{2(2\theta+1/(2N-1))}\frac{dz}{z} + \log\Bigg(1 + \frac{2\theta}{2\theta + \frac{1}{2N-1}}\Bigg)\int_{2(2\theta+1/(2N-1))}^\pi \frac{dz}{z} \le \log\big(1+2\theta(2N-1)\big)\log 2 + \log 2\,\Bigg[\log\pi - \log\Bigg(4\theta + \frac{2}{2N-1}\Bigg)\Bigg] = \log 2\,\log\frac{\pi}{2} + \log 2\,\log(2N-1), \]
completing the bound of the second term of (4.3) in the case $2\theta \ge \frac{1}{2N-1}$.

Finally, the third term of (4.3) is
\[ \int_\pi^{\pi+\theta}\int_{z-2\theta}^{2\pi-z} \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz \le \sqrt{2}\,\pi \int_\pi^{\pi+\theta}\int_{z-2\theta}^{2\pi-z} \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw\,dz, \]
since $\frac{z}{2} \in \big[\frac{\pi}{2},\frac{3\pi}{4}\big]$ and thus $\sin\big(\frac{z}{2}\big) \ge \frac{1}{\sqrt{2}}$. Now, if $\frac{1}{2N-1} \le z - 2\theta$, then
\[ \int_{z-2\theta}^{2\pi-z} \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \log(2\pi - z) - \log(z-2\theta) \le \log(2\pi) + \log(2N-1), \]
and if $z - 2\theta \le \frac{1}{2N-1} \le 2\pi - z$, then
\[ \int_{z-2\theta}^{2\pi-z} \frac{\big|\sin\big(\frac{2N-1}{2}w\big)\big|}{w}\,dw \le \int_{z-2\theta}^{1/(2N-1)} \frac{2N-1}{2}\,dw + \int_{1/(2N-1)}^{2\pi-z} \frac{dw}{w} \le \frac{1}{2} + \log(2\pi) + \log(2N-1), \]
and so
\[ \int_\pi^{\pi+\theta}\int_{z-2\theta}^{2\pi-z} \frac{\sin\big(\frac{2N-1}{2}z\big)\sin\big(\frac{2N-1}{2}w\big)}{\sin\big(\frac{z}{2}\big)\sin\big(\frac{w}{2}\big)}\,dw\,dz \le \pi^2\sqrt{2}\Bigg[\frac{1}{2} + \log(2\pi) + \log(2N-1)\Bigg]. \qquad \Box \]

Having suffered through the analysis above, we find ourselves in a strong position: the eigenvalue counting function $N_\theta$ of a random matrix from one of the classical compact groups is the sum of independent Bernoulli random variables, with explicit estimates on the mean and variance. We can thus bring all the results of classical probability to bear, starting with the central limit theorem.

Theorem 4.9 Let $N_\theta$ denote the eigenvalue counting function for a random matrix distributed in one of $U(N)$, $SO(2N)$, $SO^-(2N+2)$, $SO(2N+1)$, $SO^-(2N+1)$, or $Sp(2N)$, where $\theta \in [0,2\pi)$ for $U(N)$ and $\theta \in [0,\pi]$ otherwise. Then
\[ \lim_{N\to\infty} \mathbb{P}\Bigg[\frac{N_\theta - \mathbb{E} N_\theta}{\sqrt{\mathrm{Var}\,N_\theta}} \le t\Bigg] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^t e^{-x^2/2}\,dx. \]

Proof Recall the Lindeberg central limit theorem: if for each $n$, $\{\xi_i\}_{i=1}^n$ are independent centered random variables with $s_n^2 = \mathrm{Var}\big(\sum_{i=1}^n \xi_i\big)$, then $\sum_{i=1}^n \xi_i$ satisfies a central limit theorem if for each $\epsilon > 0$,
\[ \lim_{n\to\infty} \frac{1}{s_n^2} \sum_{i=1}^n \mathbb{E}\big[\xi_i^2\,\mathbf{1}_{|\xi_i|\ge \epsilon s_n}\big] = 0. \tag{4.4} \]
If $N_\theta \stackrel{d}{=} \sum_{j=1}^N \eta_j$ with the $\eta_j$ independent Bernoulli random variables as in Theorem 4.1, then taking $\xi_j = \eta_j - \mathbb{E}\eta_j$, the Lindeberg condition (4.4) is trivially satisfied: since $s_n \sim \sqrt{\log n}$ and the $\xi_i$ are bounded, the expectations inside the sum are all zero for $n$ large enough. □

Another classical result which is perhaps less familiar but will play an important role in later sections is the following.

Theorem 4.10 (Bernstein's inequality) Let $\{X_j\}_{j=1}^n$ be independent random variables such that, for each $j$, $|X_j| \le 1$ almost surely. Let $S_n := \sum_{j=1}^n X_j$ and let $\sigma^2 = \mathrm{Var}(S_n)$. Then for all $t > 0$,
\[ \mathbb{P}\big[|S_n - \mathbb{E} S_n| > t\big] \le 2\exp\Bigg(-\frac{3t^2}{6\sigma^2 + 2t}\Bigg). \]
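Theorem 4.9 is easy to watch in action. The rough sketch below (assuming scipy; sample sizes arbitrary, and the agreement is only approximate since $N_\theta$ is integer-valued with a small standard deviation at moderate $N$) compares empirical quantiles of the standardized counting statistic with standard normal quantiles:

```python
import numpy as np
from scipy.stats import unitary_group

N, theta, reps = 50, np.pi, 4000
counts = np.array([
    ((np.angle(np.linalg.eigvals(unitary_group.rvs(N))) % (2*np.pi)) < theta).sum()
    for _ in range(reps)
])
z = (counts - counts.mean()) / counts.std()   # standardized N_theta
print(np.quantile(z, [0.1, 0.5, 0.9]))        # roughly [-1.28, 0.0, 1.28]
```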
Theorem 4.11 Let Nθ denote the eigenvalue counting function for a random matrix distributed in one of U (N), SO (2N), SO−(2N + 2), SO (2N + 1), SO−(2N + 1), or Sp (2N), where θ ∈[0, 2π) for U (N) and θ ∈[0, π] otherwise. Then there are constants C, c such that for all t > 0, P  Nθ −Nθ 2π > t  ≤C exp − ct2 log(N) + t ! in the unitary case, and P  Nθ −Nθ π > t  ≤C exp − ct2 log(N) + t ! in the remaining cases. Consider, for example, t = A log(N). Writing ϵG = 2 when the random matrix is coming from G = U (N) and ϵG = 1 otherwise, Theorem 4.11 gives that P " Nθ −Nθ ϵGπ > A log(N) # ≤CN−cA2 A+1 ; 114 Eigenvalue distributions: asymptotics that is, the Nθ is extremely likely to be within a window of size log(N) around its mean, which is itself of order N. 4.2 The empirical spectral measure and linear eigenvalue statistics This section introduces two of the main approaches to studying the asymptotic behavior of the eigenvalues: limit theorems for the empirical spectral measure and for linear eigenvalue statistics. We begin with the first of these two. Definition Let U be a random matrix in one of the classical compact groups with eigenvalues λ1, . . . , λn. The empirical spectral measure µU is the random probability measure which puts equal mass at each of the eigenvalues of U: µU := 1 n n X j=1 δλj. Results on the asymptotic distribution of the eigenvalues of random matri-ces, as the size tends to infinity, are generally formulated in terms of the limit-ing behavior of the empirical spectral measure. Since this is a random measure, there are various possibilities. The weakest is convergence in expectation: A sequence µn of random probability measures on a metric measure space X con-verges in expectation to a deterministic limit µ if for any bounded, continuous f : X →R, lim n→∞E "Z fdµn # = Z fdµ. (4.5) For measures on R, this is equivalent to the condition that lim n→∞Eµn((a, b]) = µ((a, b]), (4.6) for all a ≤b such that µ({a}) = µ({b}) = 0. A stronger notion is that of convergence weakly in probability: a sequence of random probability measures µn on Rn converge weakly in probability to a measure µ on Rn (written µn P =⇒µ) if for each bounded, continuous f : Rn → R, the sequence of random variables R fdµn converges in probability to R fdµ, as n tends to infinity. There are several equivalent viewpoints; since our interest is in measures supported on the circle, we restrict our attention there for the following lemma. Lemma 4.12 For j ∈Z and µ a probability measure on [0, 2π), let ˆ µ( j) = 4.2 The empirical spectral measure and linear eigenvalue statistics 115 R 2π 0 eijθdµ(θ) denote the Fourier transform of µ at j. The following are equiva-lent: 1. µn P =⇒µ; 2. for each 0 ≤a ≤b < 2π such that µ({a}) = µ({b}) = 0, µn((a, b]) P − → µ((a, b]); 3. for each j ∈Z, ˆ µn( j) P − − − − → n→∞ˆ µ( j); 4. for every subsequence n′ in N there is a further subsequence n′′ such that with probability one, µn′′ =⇒µ as n →∞. A still stronger notion of convergence is that of convergence weakly almost surely: a sequence of random probability measures µn on Rn converge weakly almost surely to a deterministic measure µ if the set on which µn converges weakly to µ has probability one. We have seen that there are various metrics on the set of probability mea-sures: the Kolmogorov distance, the bounded-Lipschitz distance, the Lp Kan-torovich distances, the total variation distance. 
In addition to the types of con-vergence described above, one may of course consider the sequence of random variables d(µn, µ) for any of these notions of distance, and show that it tends to zero (either weakly or in probability – when the limit is a point mass, they are equivalent). Returning to the eigenvalue distribution of a random matrix, we saw in Proposition 4.7 that if U is a random unitary matrix, then for any 0 ≤θ1 ≤ θ2 < 2π, 1 nEN[θ1,θ2] = θ2 −θ1 2π , and if U is a random matrix from any of the other groups, then for any 0 ≤ θ1 ≤θ2 < π, 1 nEN[θ1,θ2] = θ2 −θ1 π + O 1 n ! . That is, the empirical spectral measures all converge in expectation to the uni-form measure on the circle. In fact, the convergence happens in a much stronger sense. Theorem 4.13 Let {µn} be the empirical spectral measures of a sequence of random matrices {Un}, with Un drawn according to Haar measure on O (n), U (n), or Sp (2n). Let ν denote the uniform probability measure on the circle. Then with probability one, µn converges weakly to ν as n →∞. 116 Eigenvalue distributions: asymptotics We will see a more recent approach to this result in Chapter 5; here, we present the original proof of Theorem 4.13 from , via the explicit moment formulae of Proposition 3.11. We will give the proof in the unitary case only; the others are analogous. Proof of Theorem 4.13 Let U ∈U (n) be a random unitary matrix with eigen-values λ1, . . . , λn, and let µn := 1 n Pn j=1 δλ j be the spectral measure of U. Let ν denote the uniform probability measure on the circle. We will show that µn converges weakly to ν with probability one by showing that, with probability one, ˆ µn( j) →ˆ ν( j) for all j ∈Z. Now, ˆ µn( j) = Z S1 zjdµn(z) = 1 n n X k=1 λj k. Since µn is a probability measure, ˆ µn(0) = 1 = ˆ ν(0). Moreover, since λ−1 are the eigenvalues of U−1 = U∗, which is again Haar-distributed on U (n), it suffices to treat the case of j ≥1. Now, Proposition 3.11 shows that Eˆ µn( j) = 0 and, using the fact that we may assume n ≥j, since j is fixed and n →∞, E|ˆ µn( j)|2 = 1 n2 E h Tr(U j)Tr(U j) i = j n2 . It thus follows from Chebychev’s inequality that P[|ˆ µn( j)| > ϵn] ≤ j n2ϵ2 n for any ϵn > 0. In particular, taking ϵn = 1 n 1 2 −δ for some δ ∈  0, 1 2  gives that P " |ˆ µn( j)| > 1 n 1 2 −δ # ≤ j n1+2δ , which is summable, and so by the first Borel–Cantelli lemma, it holds with probability one that |ˆ µn( j)| ≤ 1 n 1 2 −δ for n large enough. Taking the intersection of these probability one sets over all j ∈Z, we have that with probability one, for all j ∈Z, ˆ µn( j) →0 as n →∞. □ In addition to the proof given above of the convergence of the spectral mea-sures, Proposition 3.11 is also a key tool in the study of linear eigenvalue statis-tics. By a linear eigenvalue statistic, we mean a function of U of the form U 7→ n X j=1 f(λj), 4.2 The empirical spectral measure and linear eigenvalue statistics 117 where f is a test function and λ1, . . . , λn are the eigenvalues of U. The class of test functions which works most naturally with the proof given below is the subspace H 1 2 2 of L2(S1) of those functions f ∈L2(S1) with ∥f∥2 1 2 = X j∈Z | ˆ fj|2| j| < ∞, where ˆ fj = R S1 f(z)z−jdν(z) is the jth Fourier coefficient of f. This space is an inner product space, with inner product given by ⟨f, g⟩1 2 = X j∈Z ˆ f jˆ gj| j|. For such test functions, there is the following multivariate central limit theo-rem for the corresponding linear eigenvalue statistics. 
To simplify the notation, given a test function f, let σn( f) := Pn j=1 f(λj), where λ1, . . . , λn are the eigen-values of a random unitary matrix U ∈U (n). Theorem 4.14 (Diaconis–Evans ) Let f1, . . . , fk ∈H 1 2 2 , and suppose that E[σn( fj)] = 0 for each j ∈1, . . . , k. The random vector (σn( f1), . . . , σn( fk)) converges in distribution to a jointly Gaussian random vector (Y1, . . . , Yk), with E[Yi] = 0 for each i, and E[YiYj] = D fi, f j E 1 2 . The proof uses the following multivariate central limit theorem for the traces of powers of U. It is an immediate consequence of Proposition 3.11 that for fixed k, the random vector (Tr(U), . . . , Tr(Uk)) converges in distribution to (Z1, √ 2Z2, . . . , √ kZk), where Z1, . . . , Zk are independent standard complex Gaus-sian random variables, but to prove Theorem 4.14, the following stronger result is needed. Theorem 4.15 Let {anj}n, j∈N and {bn j}n,j∈N be arrays of complex constants. Suppose that there exist σ2, τ2, and γ such that lim n→∞ ∞ X j=1 |anj|2 min{j, n} = σ2 lim n→∞ ∞ X j=1 |bn j|2 min{j, n} = τ2 lim n→∞ ∞ X j=1 an jbn j min{j, n} = γ. Suppose moreover that there is a sequence (mn)n≥1 ⊆N such that limn→∞ mn n = 0 and such that lim n→∞ ∞ X j=mn+1 (|an j|2 + |bn j|2) min{j, n} = 0. 118 Eigenvalue distributions: asymptotics Then the random variable P∞ j=1(an j Tr(U j) + bn jTr(U j)) converges in distribu-tion (as n →∞) to a centered complex Gaussian random variable X + iY, with EX2 = 1 2(σ2 + τ2 + 2 Re(γ)) EY2 = 1 2(σ2 + τ2 −2 Re(γ)) EXY = Im(γ). Proof From Theorem 3.11, E Tr(U j) = 0 for each j. For N < M, E M X j=N (anj Tr(U j) + bnjTr(U j)) 2 = M X j=N (|an j|2 + |bn j|2) min{j, n} N,M→∞ − − − − − − →0, so that P∞ j=1(anj Tr(U j) + bnjTr(U j)) converges in L2. Similarly, E ∞ X j=mn+1 (anj Tr(U j) + bnjTr(U j)) 2 = ∞ X j=mn+1 (|an j|2 + |bn j|2) min{ j, n} n→∞ − − − − →0, so that P∞ j=mn+1(anj Tr(U j) + bnjTr(U j)) =⇒0. It is thus enough to show that Pmn j=1(an j Tr(U j) + bn jTr(U j)) =⇒X + iY, for which we use the method of moments. Since mn n →0, if α, β ∈N are fixed, then αmn, βmn ≤n for n large enough, and for such α, β, it follows from Theorem 3.11 that E                    mn X j=1 (anj Tr(U j) + bnjTr(U j))         α         mn X j=1 (an j Tr(U j) + bn jTr(U j))         β           = E                    mn X j=1 (anj p jZ j + bn j p jZj)         α         mn X j=1 (an j p jZ j + bn j p jZ j)         β          . Writing Z j = 1 √ 2(Zj1 + iZ j2) with the Zji i.i.d. standard Gaussians, mn X j=1 (anj p jZ j + bn j p jZj) = Xn + iYn with Xn = mn X j=1 r j 2 h Re(anj) + Re(bn j)  Zj1 +  Im(bn j) −Im(an j)  Zj2 i 4.2 The empirical spectral measure and linear eigenvalue statistics 119 and Yn = mn X j=1 r j 2 h Im(anj) + Im(bn j)  Zj1 +  Re(an j) −Re(bn j)  Z j2 i . The random variables Xn and Yn are thus centered and jointly Gaussian, with EX2 n = mn X j=1  j 2  h (Re(an j) + Re(bn j))2 + (Im(an j) −Im(bn j))2i = mn X j=1  j 2  h |anj|2 + |bn j|2 + 2 Re(an jbn j) i , EY2 n = mn X j=1  j 2  h (Re(an j) −Re(bn j))2 + (Im(an j) + Im(bn j))2i = mn X j=1  j 2  h |anj|2 + |bn j|2 −2 Re(an jbn j) i , and EXnYn = mn X j=1  j 2  h (Re(anj) + Re(bn j))(Im(an j) + Im(bn j)) −(Re(an j) −Re(bn j))(Im(an j) −Im(bn j)) i = mn X j=1 j Im(anjbnj). 
It thus follows by the assumptions on the arrays {an j} and {bn j} and the se-quence (mn) that EXn →1 2(σ2 + τ2 + 2 Re(γ)) EYn →1 2(σ2 + τ2 −2 Re(γ)) and EXnYn →Im(γ), which completes the proof. □ Proof of Theorem 4.14 By the Cram´ er–Wold device, to prove the claimed convergence it is enough to show that k X j=1 t jσn( f j) d − → k X j=1 t jYj 120 Eigenvalue distributions: asymptotics for each (t1, . . . , tk) ∈Rk. Note that k X j=1 t jσn( fj) = σn(Ft), where Ft = Pk j=1 t j f j. Since H 1 2 2 is a vector space, Ft ∈H 1 2 2 , and moreover, ∥Ft∥1 2 = k X j,ℓ=1 t jtℓ D f j, fℓ E 1 2 = Var         k X j=1 Y j        . The theorem therefore follows from the k = 1 case. The k = 1 case is simple if f(z) = PN j=−N ajz j is a Laurent polynomial on S1. Since we are assuming that f is real-valued and that σn( f) = 0, we have that a0 = 0 and a−j = a j. Then σn( f) = 2 N X j=1 h Re(aj) Re(Tr(U j)) −Im(aj) Im(Tr(U j)) i , which tends to the Gaussian random variable 2 N X j=1       Re(aj) r j 2Zj1 −Im(aj) r j 2Z j2       , where (Z11, Z12, . . . , Zk1, Zk2) are i.i.d. standard (real) Gaussians. That is, σn( f) converges to a centered Gaussian random variable with variance σ2 = 4 N X j=1  j 2  (Re(a j)2 + Im(a j)2) = N X j=−N | j||a j|2 = ∥f∥2 1 2 . For a general f ∈H 1 2 2 , the full strength of Theorem 4.15 is needed. First, for N ∈N define fN := N X j=−N ˆ f jz j and note that, since f ∈L2(S1), ∥fN −f∥2 →0. It follows from symmetry that for any measurable subset A ⊆S1, the probability that U has at least one eigenvalue in A is at most nν(A), where ν is the uniform probability measure on S1, and so if An,ϵ denotes the set on which | f −fN| > ϵ n, then P[|σn( f)−σn( fN)| > ϵ] = P        n X ℓ=1 ( f(λℓ) −fn(λℓ)) > ϵ       ≤nν(An,ϵ) ≤n3∥f −fN∥2 2 ϵ2 , which tends to zero as N tends to infinity. That is, σn( fN) converges in proba-bility to σn( f), as N tends to infinity. 4.3 Patterns in eigenvalues: self-similarity 121 On the other hand, the condition on f together with Theorem 3.11 means that ∞ X j=1 ˆ f j Tr(U j) + ∞ X j=1 ˆ f jTr(U j) converges in L2: E M X j=N ˆ fj Tr(U j) 2 = M X j=N | ˆ f j|2 min{j, n} N,M→∞ − − − − − − →0. This allows us to write σn( f) = ∞ X j=1 ˆ f j Tr(U j) + ∞ X j=1 ˆ f jTr(U j). (Note that the corresponding formula for σn( f) in terms of traces of powers of U was trivial in the case that f was a Laurent polynomial because the sums needing to be interchanged were both finite, so that convergence issues played no role.) We now apply Theorem 4.15 with ajn = ˆ fj and b jn = ˆ f j: ∞ X j=1 |ajn|2 min{j, n} = ∞ X j=1 |b jn|2 min{j, n} = ∞ X j=1 an jbn j min{j, n} = ∞ X j=1 | ˆ f j|2 j + ∞ X j=n+1 (n −j)| ˆ fj| n→∞ − − − − →∥f∥2 1 2 , and as long as mn →∞, then ∞ X j=mn+1 (|anj|2 + |bnj|2) min{j, n} = ∞ X j=mn+1 (2| ˆ fj|2) min{j, n} n→∞ − − − − →0. It thus follows from Theorem 4.15 that σn( f) converges to a centered Gaussian random variable with variance ∥f∥2 1 2 , as desired. □ 4.3 Patterns in eigenvalues: self-similarity Consider the special case of m = 2 in Theorem 3.14: the eigenvalues of the square of a 2n × 2n random unitary matrix are identical in distribution to those of two independent n × n random unitary matrices. Since the squaring function has the effect of wrapping the circle in the complex plane twice around, this 122 Eigenvalue distributions: asymptotics means that wrapping the eigenvalues of a 2n × 2n unitary matrix twice around the circle produces two independent copies of the eigenvalues of a half-sized random matrix. 
It is tempting to see those two copies as corresponding to the eigenvalues of the original matrix from the top half of the circle and those from the bottom half; if this were true, it would be a kind of self-similarity statement for unitary eigenvalues. Of course, there cannot be exact equality of distribu-tions due to edge effects, but one could hope for some weaker result along these lines which avoided such effects. Indeed, a closely related self-similarity phenomenon was conjectured by Coram and Diaconis in their statistical analy-sis of unitary eigenvalues : they asked whether taking half (sequentially) of the eigenvalues and stretching them out to fill the whole circle would produce something “statistically indistinguishable” from the eigenvalues of a half-sized matrix. The following result gives exactly such a self-similarity phenomenon on a certain mesoscopic scale. Theorem 4.16 For m, n ≥1, let {θj}n j=1 be the eigenangles of a random n × n unitary matrix and let {θ(m) j }nm j=1 be the eigenangles of a random nm×nm unitary matrix, with θ j, θ(m) j ∈[−π, π) for each j. For A ⊆[−π, π), let NA := j : θ j ∈A N(m) A :=  j : θ(m) j ∈  −π m, π m  , mθ(m) j ∈A  ; that is, NA is the counting function of a random n×n unitary matrix, and N(m) A is the counting function for the point process obtained by taking the eigenvalues of a random nm × nm unitary matrix lying in the arc of length 2π m centered at z = 1 and raising them to the mth power. Suppose that A has diam(A) ≤π. Then dTV(NA, N(m) A ) ≤W1(NA, N(m) A ) ≤ √mn|A| diam(A) 12π , where |A| denotes the Lebesgue measure of A. As a consequence of Theorem 4.16, if {An} is a sequence of sets such that either diam An = o(n−1/4) or |An| = o(n−1/2) as n →∞, then dTV  NAn, N(m) An  , W1  NAn, N(m) An  →0. Thus indeed, a sequential arc of about n of the nm eigenvalues of an nm × nm random matrix is statistically indistinguishable, on the scale of o(n−1/4) for diameter or o(n−1/2) for Lebesgue measure, from the n eigenvalues of an n × n random matrix. 4.3 Patterns in eigenvalues: self-similarity 123 How sharp the bound above is remains an open question. It seems likely that the restriction on the diameter of A is an artifact of the proof; this may also be the case for the factor of √n in the bound. A remarkable feature of Theorem 4.16 is that it yields microscopic informa-tion even at a mesoscopic scale: if 1 n ≪diam An ≪ 1 n1/4 , then NAn and N(m) An both have expectations and variances tending to infinity. Typically, there is no hope in looking at individual points in such a setting, and instead one stud-ies statistical properties of the recentered and rescaled counts. Theorem 4.16 makes a direct point-by-point comparison of the two point processes, with no rescaling. The proof of Theorem 4.16 is based on the determinantal structure of the two point processes. Let χ denote the point process given by the eigenvalues of an n × n random unitary matrix, and let χ(m) be the process given by restricting the eigenvalue process of a random mn×mn matrix to h −π m , π m  and rescaling by m. Recall from Section 3.1 that χ is a determinantal point process; it follows easily that χ(m) is as well, with kernel as follows. Proposition 4.17 The point process χ(m) on [0, 2π) is determinantal with ker-nel K(m) n (x, y) = 1 2π sin  n(x−y) 2  m sin  (x−y) 2m . with respect to Lebesgue measure. Proof The case m = 1 is given in Section 3.1. The general case follows from a change of variables which shows that K(m) n (x, y) = Kmn x m, y m . 
□ The main technical ingredient for the proof of Theorem 4.16 is the following general result on determinantal point processes. Proposition 4.18 Let N and e N be the total numbers of points in two deter-minantal point processes on (Λ, µ) with conjugate-symmetric kernels K, e K ∈ L2(µ ⊗µ), respectively. Suppose that N, e N ≤N almost surely. Then dTV(N, e N) ≤W1(N, e N) ≤ s N Z Z K(x, y) −e K(x, y) 2 dµ(x)dµ(y). Proof Note first that an indicator function of a set A of integers is 1-Lipschitz on Z, and so for X and Y integer-valued, dTV(X, Y) ≤W1(X, Y); (4.7) it therefore suffices to prove the second inequality. 124 Eigenvalue distributions: asymptotics Let {λj} and {e λj} be the eigenvalues, listed in nonincreasing order, of the integral operators K and e K with kernels K and e K respectively. Since N, e N ≤N, by Theorem 4.1 we may assume that j ≤N. Let {Yj}N j=1 be independent random variables uniformly distributed in [0, 1]. For each j, define ξ j = 1Y j≤λj and e ξ j = 1Yj≤e λj. It follows from Theorem 4.1 that N d = N X j=1 ξ j, e N d = N X j=1 e ξj, so this gives a coupling of N and e N. It then follows from the Kantorovich– Rubenstein theorem that W1(N, e N) ≤E N X j=1 ξ j − N X j=1 e ξj ≤ N X j=1 E ξ j −e ξ j = N X j=1 λj −e λj ≤ v u t N N X j=1 λ j −e λ j 2. (4.8) For n × n Hermitian matrices A and B, the Hoffman–Wielandt inequality (see, e.g., [11, Theorem VI.4.1]) says that n X j=1 λj(A) −λj(B) 2 ≤∥A −B∥2 HS , (4.9) where λ1(A), . . . , λn(A) and λ1(B), . . . , λn(B) are the eigenvalues (in nonin-creasing order) of A and B respectively. It thus follows that v u t N X j=1 λ j −e λj 2 ≤ K −e K H.S. , where ∥·∥H.S. denotes the Hilbert–Schmidt norm. The result now follows from the general fact that the Hilbert–Schmidt norm of an integral operator on L2(µ) is given by the L2(µ ⊗µ) norm of its kernel (see [109, pg. 245]). □ Proof of Theorem 4.16 For every 0 ≤ϕ ≤π 2, ϕ −1 6ϕ3 ≤sin ϕ ≤m sin  ϕ m  ≤ϕ, and so 0 ≤ 1 sin ϕ − 1 m sin  ϕ m  ≤ 1 ϕ −1 6ϕ3 −1 ϕ = ϕ 6 −ϕ2 ≤ϕ 3 . 4.4 Large deviations for the empirical spectral measure 125 Thus by Propositions 4.17 and 4.18, W1(NA, N(m) A ) ≤ v u u t mn (2π)2 Z A Z A sin2 n(x −y) 2 !         1 sin  x−y 2  − 1 m sin  x−y 2m          2 dx dy ≤ 1 12π s mn Z A Z A (x −y)2 dx dy ≤ 1 12π √mn |A| diam A. □ 4.4 Large deviations for the empirical spectral measure Large deviations theory is a branch of probability dealing with rare events. The basic idea is the following. Suppose that X1, X2, . . . are i.i.d. random variables with EX1 = 0 and EX2 1 = 1, and let S n := Pn j=1 Xj. By the law of large num-bers, 1 nS n should be close to zero, but of course it is not typically exactly zero; the goal is to understand the rare event that 1 nS n > δ for some δ > 0 fixed. By the central limit theorem, 1 nS n is approximately distributed as a centered Gaus-sian random variable with variance 1 n, and so if Z denotes a standard Gaussian random variable, then P " 1 nS n > δ # ≈P|Z| > δ √n = 2 √ 2π Z ∞ δ √n e−x2 2 dx. Using standard asymptotics for the tail of a Gaussian integral, 1 n log P h |Z| > δ √n i n→∞ − − − − →−δ2 2 . This limit can be loosely interpreted as saying that the probability that 1 √nZ > δ is of the order e−nδ2 2 . The question is then whether it also holds that 1 n log P " 1 nS n > δ # n→∞ − − − − →−δ2 2 . (4.10) The answer (given by Cram´ er’s theorem) is “sort of”: the limit in (4.10) exists, but rather than being given by a universal function of δ, its value depends on the distribution of the summands. 
Statements like (4.10) are what is meant when one refers to large deviations, although typical theorems are formulated somewhat differently. Below, we give 126 Eigenvalue distributions: asymptotics the basic definitions and terminology used in the context of large deviations for sequences of Borel probability measures on a topological space X. Definitions 1. A rate function I is a lower semicontinuous1 function I : X →[0, ∞]. 2. A good rate function is a rate function for which all level sets I−1([0, α]) (α < ∞) are compact. 3. A sequence of Borel measures {νn}n∈N on X satisfies a large deviations principle (LDP) with rate function I and speed sn if for all Borel sets Γ ⊆ X, −inf x∈Γ◦I(x) ≤lim inf n→∞ 1 sn log(νn(Γ)) ≤lim sup n→∞ 1 sn log(νn(Γ)) ≤−inf x∈Γ I(x), (4.11) where Γ◦is the interior of Γ and Γ is the closure of Γ. If the upper bound in (4.11) is required to hold only when Γ is compact, the sequence {νn}n∈N satisfies a weak LDP. The somewhat complicated form of a large deviations principle, as com-pared with, e.g. (4.10), has the advantage of being precise enough to be useful while weak enough to have some chance of holding in situations of interest. The following gives an indirect approach for establishing the existence of a weak LDP. Theorem 4.19 Let {νn}n∈N be a sequence of Borel measures on X and, and for each n, let sn > 0. Let A be a base for the topology of X. For x ∈X, define I(x) := sup A∈A:x∈A " −lim inf n→∞ 1 sn log νn(A) # . Suppose that for all x ∈X, I(x) = sup A∈A:x∈A " −lim sup n→∞ 1 sn log νn(A) # . Then {νn}n∈N satisfies a weak LDP with rate function I and speed sn. The logarithmic energy Let ν be a measure on C. The quantity E(ν) := − " log |z −w|dν(z)dν(w) 1 i.e., for all α ∈[0, ∞), the level set I−1([0, α]) is closed in X. 4.4 Large deviations for the empirical spectral measure 127 is called (in potential theory) the logarithmic energy of ν; the same quantity appears in free probability as (the negative of) the free entropy. As suggested by the following lemma, this energy functional will play a key role in the LDP for empirical spectral measures. Lemma 4.20 Let P(S1) denote the space of Borel probability measures on S1, endowed with the topology of convergence in distribution. The logarithmic energy E is a strictly convex rate function on P(S1) and the uniform probability measure ν0 is the unique ν ∈P(S1) with E(ν) = 0. Proof Firstly, if ν0 is the uniform probability measure on S1, then E(ν0) = − 1 (2π)2 Z 2π 0 Z 2π 0 log |eiθ −eiφ|dθdφ = −1 2π Z 2π 0 log |1 −eiθ|dθ = 0. The fact that ν0 is the unique element ν ∈P(S1) with E(ν) = 0 will be shown below. Both the nonnegativity and strict convexity of E are most easily seen within the context of positive- and negative-definite kernels. A Hermitian kernel L(x, y) on C × C is called positive definite if X j,k cjckL(xj, xk) ≥0 for all c1, . . . , cn ∈C. A Hermitian kernel L(x, y) on C×C is called (condition-ally) negative definite if X j,k cjckL(xj, xk) ≤0 for all c1, . . . , cn ∈C such that Pn j=1 cj = 0. Obviously the negative of a positive definite kernel is negative definite, but the negative of a negative definite kernel need not be positive definite. Note that sums or integrals of positive (resp. negative) definite kernels are positive (resp. negative) definite. Given a negative-definite kernel on C × C and a finite signed measure µ on C with µ(C) = 0, it follows by approximating µ by discrete measures that " L(z, w)dµ(x)dµ(w) ≤0. 
Our interest is in this last statement, for the singular kernel K(z, w) = log |z−w|. To avoid the singularity, define for each ϵ > 0 the kernel Kϵ(z, w) := log(ϵ + |z −w|) = Z ∞ 0 1 1 + t − 1 t + ϵ + |z −w| ! dt. (4.12) 128 Eigenvalue distributions: asymptotics Now, the kernel (z, w) 7→ 1 t + ϵ + |z −w| is positive definite for each t and ϵ: the Laplacian kernel (z, w) 7→e−s|z−w| is known to be positive definite for each s > 0, and 1 t + ϵ + |z −w| = Z ∞ 0 e−s(t+ϵ+|z−w|)ds is therefore the integral of positive definite kernels, hence positive definite. It is easy to see that a constant kernel is conditionally negative definite, and so the integrand in (4.12) is conditionally negative definite for each t, which finally gives that Kϵ(z, w) is conditionally negative definite for each ϵ > 0. It thus follows that " Kϵ(z, w)dµ(z)dµ(w) ≤0. For ϵ < 1 2, |Kϵ(z, w)| ≤|K(z, w)| + log(2), and so if E(µ) < ∞, then it follows by the dominated convergence theorem that E(µ) = − " K(z, w)dµ(z)dµ(w) ≥0. We are, of course, not interested in signed measures of total mass 0, but in probability measures on S1. Given a probability measure ν on S1, let µ = ν−ν0, where ν0 is the uniform probability measure on S1. Then by the argument above, " K(z, w)dµ(z)dµ(w) = −E(ν) −2 " K(z, w)dν(z)dν0(w) −E(ν0) ≤0. It has already been shown that E(ν0) = 0, and so the above inequality reduces to E(ν) ≥−2 " K(z, w)dν(z)dν0(w). But " K(z, w)dν(z)dν0(w) = Z S1 1 2π Z 2π 0 log |z −eiθ|dθ ! dν(z) = 0, which proves the nonnegativity of E(ν). To prove the convexity of E, let ν1 and ν2 be distinct probability measures on S1 with finite logarithmic energy. By a similar argument to the one proving the nonnegativity of E, " K(z, w)d(ν1−ν2)(z)d(ν1−ν2)(w) = −E(ν1)−2 " K(z, w)dν1(z)dν2(w)−E(ν2) ≤0, 4.4 Large deviations for the empirical spectral measure 129 so that E(ν1, ν2) := − " log |z −w|dν1(z)dν2(w) ≤1 2(E(ν1) + E(ν2)) < ∞. Moreover, for 0 < λ < 1, E(λν1 + (1 −λ)ν2) = E(ν2) + 2λE(ν2, ν1 −ν2) + λ2E(ν1 −ν2). It follows that d2 dλ2 E(λν1 + (1 −λ)ν2) = 2E(ν1 −ν2) ≥0. This shows that E is convex. To show strict convexity, we will show that the only compactly supported finite signed measure ν of total mass 0 on C with E(ν) = 0 is the zero measure; this in particular implies that E(ν1 −ν2) > 0 above, since ν1 and ν2 are distict. Since ν(C) = 0, if 0 < ϵ < R < ∞, then − Z R ϵ " 1 t + |z −w|dν(z)dν(w) ! dt = Z R ϵ " 1 1 + t − 1 t + |z −w| ! dν(z)dν(w) ! dt = " log(ϵ + |z −w|) + log 1 + R R + |z −w| ! −log(1 + ϵ) ! dν(z)dν(w). We have shown above that (z, w) 7→ 1 t+|z−w| is a positive definite kernel for each t, so that " 1 t + |z −w|dν(z)dν(w) ≥0 for each t. The monotone convergence theorem then justifies taking the limit above as ϵ →0 and R →∞: lim ϵ→0 lim R→∞ Z R ϵ " 1 t + |z −w|dν(z)dν(w) ! dt = Z ∞ 0 " 1 t + |z −w|dν(z)dν(w) ! dt. On the other hand, we have argued above that lim ϵ→0 " log(ϵ + |z −w|)dν(z)dν(w) = E(ν)(= 0), and " log(1 + ϵ)dν(z)dν(w) = 0 130 Eigenvalue distributions: asymptotics for all ϵ > 0, since ν(C) = 0. Finally, lim R→∞ " log 1 + R R + |z −w| ! dν(z)dν(w) = 0 by the dominated convergence theorem, since ν is assumed to be compactly supported. We thus have that Z ∞ 0 " 1 t + |z −w|dν(z)dν(w) ! dt = 0, and because the inner double integral is a nonnegative function of t, this means that " 1 t + |z −w|dν(z)dν(w) = 0 for all t > 0. Write 1 t + |z −w| = 1 t ∞ X n=0 −|z −w| t !n = s ∞ X n=0 (−s|z −w|)n, with s = 1 t . We have from above that " ∞ X n=0 (−s|z −w|)ndν(z)dν(w) = 0 for each s. 
The n = 0 term integrates to 0 on its own, since ν(C) = 0. Since ν is compactly supported, given ϵ > 0, we can choose s = ϵ diam(supp(ν)) so that P∞ n=2(s|z −w|)n < ϵ2 1−ϵ . But then " |z −w|dν(z)dν(w) = 1 s " ∞ X n=2 (−s|z −w|)ndν(z)dν(w) < ϵ diam(supp(ν))(|ν|(C))2 1 −ϵ ; i.e., ! |z −w|dν(z)dν(w) = 0. Iterating this argument gives that " |z −w|ndν(z)dν(w) = 0 for all n ∈N. In particular, considering even powers gives that for all n ≥0, 0 = " (z −w)n(z −w)ndν(z)dν(w) = n X j,k=0 n j ! n k ! Z z jzkdν(z) ! Z (−w)n−j(−w)n−kdν(w) ! 4.4 Large deviations for the empirical spectral measure 131 Exercise 4.21 Prove by induction that this last equality implies that ! z jzkdν(z) = 0 for all j, k. Since ν is compactly supported, the monomials {z jzk} span a dense subset of the continuous functions on the support of ν, and so the result of the exercise shows that ν = 0. All that remains is to show that E is lower semicontinuous. Let F(ζ, η) := −log |ζ −η| and for α > 0, let Fα(ζ, η) := min{F(ζ, η), α}. Then Fα is a bounded, continuous function on S1 × S1. Given a bounded con-tinuous function g, the mapping µ 7→ Z gdµ is continuous by definition, and the mapping µ 7→µ × µ is also continuous, so µ 7→ " S1×S1 Fα(ζ, η)dµ(ζ)dµ(η) is continuous on P(S1). By the monotone convergence theorem, E(µ) = " F(ζ, η)dµ(ζ)dµ(η) = sup α>0 " Fα(ζ, η)dµ(ζ)dµ(η); that is, E is the supremum of continuous functions and hence lower semicon-tinuous. □ Empirical spectral measures Let µn denote the empirical spectral measure of a random unitary matrix U ∈ U (n). Then µn is a random element of the (compact) topological space P(S1) (with the topology of convergence in distribution). Let Pn denote the distribu-tion of µn in P(S1); it is for these Pn that the LDP holds, as follows. Theorem 4.22 (Hiai–Petz) Let Pn denote the distribution of the empirical spectral measure of a Haar-distributed random unitary matrix in U (n). The sequence {Pn}n∈N of measures on P(S1) as defined above satisfies an LDP with speed n2 and rate function given by the logarithmic energy E. 132 Eigenvalue distributions: asymptotics Proof It is a consequence of Alaoglu’s theorem that P(S1) is compact in the weak- topology, and so the existence of a weak LDP is the same as the full LDP. We thus proceed via Theorem 4.19: we show that E(µ) ≥ sup A∈A:µ∈A " −lim inf n→∞ 1 n2 log Pn(A) # . (4.13) and E(µ) ≤ sup A∈A:µ∈A " −lim sup n→∞ 1 n2 log Pn(A) # . (4.14) As above, let F(ζ, η) := −log |ζ −η| and for α > 0, let Fα(ζ, η) := min{F(ζ, η), α}, so that E(µ) = " F(ζ, η)dµ(ζ)dµ(η) = sup α>0 " Fα(ζ, η)dµ(ζ)dµ(η). Given a vector φ = (φ1, . . . , φn) ∈[0, 2π]n, let µφ = 1 n n X j=1 δeiφ j. Let µ ∈P(S1) and let A be a neighborhood of µ. Define A0 := {φ ∈[0, 2π]n : µφ ∈A}. Then by the Weyl integration formula (Theorem 3.1), for any α > 0, Pn(A) = 1 (2π)nn! ( A0 Y 1≤j 0, ρ 7→ ! Fα(ζ, η)dρ(ζ)dρ(η) is continuous, so that for any ϵ > 0, we can choose Aϵ a neighborhood of µ such that inf ρ∈Aϵ " Fα(ζ, η)dρ(ζ)dρ(η) ≥ " Fα(ζ, η)dµ(ζ)dµ(η) −ϵ. Letting ϵ →0 and then α →∞shows that sup A∈A,µ∈A " −lim sup n→∞ 1 n2 log Pn(A) # ≥ " F(ζ, η)dµ(ζ)dµ(η) = E(µ), and so (4.14) is proved. To prove (4.13), we make use of a regularization of µ via the Poisson kernel. Specifically, for 0 < r < 1, let Pr(θ) = 1 −r2 1 −2r cos(θ) + r2 . If fr(eiθ) = Pr ∗µ = Z S1 Pr(θ −arg(ζ))dµ(ζ) and µr is the probability measure on S1 with density fr, then fr is continuous and strictly positive, µr ⇒µ as r →1, and E(µr) r→1 − − − →E(µ) (for details, see ). Let δ > 0 be such that δ ≤fr(z) ≤δ−1 for z ∈S1. 
Since fr is strictly positive, the function θ 7→1 2π Z θ 0 fr(eit)dt is invertible; let ϕ : [0, 1] →[0, 2π] denote the inverse. Then for each n ∈N and j ∈{1, . . . , n}, define 0 = b(n) 0 < a(n) 1 < b(n) 1 < · · · < a(n) n < b(n) n = 2π by a(n) j = ϕ       j −1 2 n       b(n) j = ϕ  j n  , 134 Eigenvalue distributions: asymptotics and note that this implies that for all j = 1, . . . , n, πδ n ≤  b(n) j −a(n) j  ≤π nδ πδ n ≤  a(n) j −b(n) j−1  ≤π nδ. Let ∆n := n (θ1, . . . , θn) : a(n) j ≤θ j ≤b(n) j , 1 ≤j ≤n o and suppose that φ = (φ1, . . . , φn) ∈∆n. Let g : S1 →R have ∥g∥BL ≤1 (recall that ∥g∥BL is the maximum of the supremum norm and the Lipschitz constant of g). Then Z S1 g(z)dµφ(z) − Z S1 g(z)dµr(z) = 1 n n X j=1 g(eiφj) −1 2π n X j=1 Z b(n) j b(n) j−1 g(eiθ) fr(eiθ)dθ = n X j=1 1 2π Z b(n) j b(n) j−1  g(eiφ j) −g(eiθ)  fr(eiθ)dθ ≤max 1≤j≤n b(n) j −b(n) j−1 ≤2π nδ. Since the bounded-Lipschitz distance is a metric for the topology of conver-gence in distribution, this means that for any neighborhood A of µr, for n large enough, ∆n ⊆A0 = {φ : µφ ∈A}. Writing m(n) jk := min n |eis −eit| : a(n) j ≤s ≤b(n) j , a(n) k ≤t ≤b(n) k o , it now follows from the Weyl integration formula that Pn(A) = 1 (2π)nn! ( A0 Y 1≤j<k≤n |eiφ j −eiφk|2dφ1 · · · dφn ≥ 1 (2π)nn! ( ∆n Y 1≤j<k≤n |eiφ j −eiφk|2dφ1 · · · dφn ≥ 1 (2π)nn! Y 1≤j<k≤n h m(n) jk i2 Z b(n) 1 a(n) 1 · · · Z b(n) n a(n) n dφ1 · · · dφn ≥1 n!  δ 2n n Y 1≤j<k≤n h m(n) jk i2 , 4.4 Large deviations for the empirical spectral measure 135 since b(n) n −a(n) j ≥πδ n . Taking logarithms, diving by n2, and taking limits inferior thus gives lim inf n→∞ 1 n2 log Pn(A) ≥lim inf n→∞ 2 n2 X 1≤j< j≤n log(m(n) jk ). Since ϕ is increasing with a(n) j = j−1 2 n and b(n) j = j n, lim n→∞ 2 n2 X 1≤j<k≤n log(m(n) jk ) = lim n→∞                      2 n2 X 1≤j<k≤n log                      min j−1 2 n ≤u≤j n k−1 2 n ≤v≤k n eiϕ(u) −eiϕ(v)                                           = 2 " 0≤u<v≤1 log eiϕ(u) −eiϕ(v) dudv = 1 (2π)2 Z 2π 0 Z 2π 0 fr(eis) fr(eit) log |eis −eit|dsdt = −E(µr), where the convergence of the Riemann sums to the integral is valid because the integrand is bounded below by log  δ n  . We thus have that for any neighborhood A of µr, −lim inf n→∞ 1 n2 log Pn(A) ≤E(µr). Since µr ⇒µ, this also holds for any neighborhood A of µ, for r close enough to 1, so that sup A∈A,µ∈A " −lim inf n→∞ 1 n2 log Pn(A) # ≤lim sup r→1 E(µr) = E(µ). This completes the proof of (4.13). □ Notes and References The paper of Hough–Krishnapur–Peres–Vir´ ag gives a beautiful survey on determinantal point processes with many applications; this paper is in particu-lar the source of Theorem 4.1 and its proof. Theorem 4.9 and a multivariate generalization were proved by Wieand , by exploiting the connection with Toeplitz matrices. She showed the following. 136 Eigenvalue distributions: asymptotics Theorem 4.23 (Wieand) Let U be a Haar-distributed random matrix in U (n). For 0 ≤α < β < 2π, let N(n) α,β denote the number of eigenangles of U lying in [α, β]. The finite-dimensional distributions of the process π p log(n)  N(n) α,β −EN(n) α,β  0≤α<β<2π converge to those of a centered Gaussian process n Zα,β o 0≤α<β<2π with covari-ance E h Zα,βZα′,β′ i =                            1, α = α′, β = β′; 1 2, α = α′, β , β′; 1 2, α , α′, β = β′; −1 2, β = α′; 0, otherwise. 
These surprising correlations have been the result of further study, in partic-ular by Diaconis and Evans and Hughes, Keating, and O’Connell . In , Soshnikov proved univariate central limit theorems for local and global statistics of eigenvalues of random matrices from the classical compact groups. Fast rates of convergence in the univariate central limit theorem for Tr(Uk) for k fixed and U distributed according to Haar measure were found for U ∈ O (n) by Stein and Johansson . Johansson also found rates of conver-gence in the unitary and symplectic cases; his results are as follows. Theorem 4.24 (Johansson ) Let U ∈G(n) where G(n) is one of O (n), U (n), and Sp (2n). Let k ∈N and let Xk = 1 √ k  Tr(Uk) −E Tr(Uk)  . There are positive constants Ci and δi (1 ≤i ≤3), independent of n, such that the following hold. 1. For U distributed according to Haar measure on U (n), and Z a standard complex Gaussian random variable, dTV(Xk, Z) ≤C1n−δ1n. 2. For U distributed according to Haar measure on O (n), and Z a standard (real) Gaussian random variable, dTV(X, Z) ≤C2e−δ2n. 4.4 Large deviations for the empirical spectral measure 137 3. For U distributed according to Haar measure on Sp (2n), and Z a standard complex Gaussian random variable, dTV(X, Z) ≤C3e−δ3n. Rates of convergence in the multivariate case were obtained by D¨ obler and Stolz via Stein’s method, following work of Fulman . In the paper of Diaconis and Evans, they did further work to expand the class of test functions from H 1 2 2 so as to prove multivariate limit theorems for the number of eigenvalues in an arc, which in particular recovered Wieand’s result above. For a general introduction to the theory of large deviations, see the book of Dembo and Zeitouni. The large deviations principle for the empirical spectral measure (Theorem 4.22) is due to Hiai and Petz (see also their book ), and we have followed their proof fairly closely, with background on the logarithmic energy taken from and . 5 Concentration of measure 5.1 The concentration of measure phenomenon The phenomenon of concentration of measure arises frequently as a tool in probability and related areas; following its use by Vitali Milman in his proba-bilistic proof of Dvoretzky’s theorem, the explicit study and more systematic exploitation of this phenomenon has become a large and influential area. The basic idea is that in various settings, functions with small local fluctuations are often essentially constant, where “essentially” should be interpreted in the probabilistic sense that with high probability, such functions are close to their means. The following result in classical probability gives a first example of a con-centration phenomenon. Theorem 5.1 (Bernstein’s inequality) Let {X j}n j=1 be independent random variables such that, for each j, |Xj| ≤1 almost surely. Let S n := Pn j=1 Xj and let σ2 = Var Pn j=1 Xj  . Then for all t > 0, P  S n n −E S n n  > t  ≤C exp −min (n2t2 2σ2 , nt 2 )! . Letting t = Aσ2 n for a large constant A gives that P " S n n −E S n n  > Aσ2 n # ≤Ce−Aσ2 2 . That is, if n is large, it is likely that the average of n bounded independent ran-dom variables is within a window of size O  1 n  about its mean. It is reasonable to think of the average of n random variables as a statistic with small local fluctuations, since if just the value of one (or a few) of the random variables is changed, the average can only change on the scale of 1 n. 
138 5.1 The concentration of measure phenomenon 139 Another classical appearance of concentration of measure is that of Gaussian measure concentration: Theorem 5.2 Let f : Rn →R be Lipschitz with Lipschitz constant L, and let Z = (Z1, . . . , Zn) be a standard Gaussian random vector in Rn. Let M be a median of f; i.e., P[ f(Z) ≥M] ≥1 2 and P[ f(Z) ≤M] ≥1 2. Then P| f(Z) −M| ≥Lt ≤2e−t2 2 . The following statement about uniform measure on the sphere is analogous to the previous result; the difference in appearance of the exponent is only due to the choice of normalization of the random vector. Theorem 5.3 (L´ evy’s lemma) Let f : Sn−1 →R be Lipschitz with Lipschitz constant L, and let X be a uniform random vector in Sn−1. Let M be a median of f; that is, P[ f(X) ≥M] ≥1 2 and P[ f(X) ≤M] ≥1 2. Then P h f(X) −M ≥Lt i ≤2e−(n−2)t2. Both results can be loosely interpreted as saying that if the local fluctuations of a function are controlled (the function is Lipschitz), then the function is essentially constant. Concentration results are often formulated in terms of the mean rather than a median, as follows. Corollary 5.4 Let f : Sn−1 →R be Lipschitz with Lipschitz constant L, and let X be a uniform random vector in Sn−1. Then if Mf denotes a median of f with respect to uniform measure on Sn−1, |Ef(X) −M f | ≤L r π n −2 and P[| f(X) −Ef(X)| ≥Lt] ≤eπ−nt2 4 . Proof First note that L´ evy’s lemma and Fubini’s theorem imply that E f(X) −Mf ≤E f(X) −M f = Z ∞ 0 P h f(X) −M f > t i dt ≤ Z ∞ 0 2e−(n−2)t2 L2 dt = L r π n −2. 140 Concentration of measure If t > 2 p π n−2, then P | f(X) −E f(X)| > Lt ≤P h f(X) −M f > Lt − M f −Ef(X) i ≤P " f(X) −M f > L t − r π n −2 !# ≤2e−(n−2)t2 4 . On the other hand, if t ≤2 p π n−2, then eπ−(n−2)t2 4 ≥1, so the statement holds trivially. □ 5.2 Logarithmic Sobolev inequalities and concentration Knowing that a metric probability space possesses a concentration of measure property along the lines of L´ evy’s lemma opens many doors; however, it is not a priori clear how to show that such a property holds or to determine what the optimal (or even good) constants are. In this section we discuss obtaining measure concentration via logarithmic Sobolev inequalities. In what follows let (X, d) be a metric space equipped with a Borel probability measure P, with E denoting expectation with respect to P. The entropy of a measurable function f : X →[0, ∞) with respect to P is Ent( f) := E f log(f) −(Ef) log (Ef) . (5.1) For c > 0, Ent(cf) = c Ent( f), and it follows from Jensen’s inequality that Ent( f) ≥0. Since X is an arbitrary metric measure space, it may not have a smooth structure. Nevertheless, the concept of the length of the gradient extends, as follows. A function g : X →R is locally Lipschitz if for all x ∈X, there is a neighborhood U ⊆X of x on which g is Lipschitz; for a locally Lipschitz function g : X →R, the length of the gradient of g at x is defined by |∇g| (x) := lim sup y→x |g(y) −g(x)| d(y, x) . For smooth functions φ : R →R, this length of gradient satisfies the chain rule: |∇φ( f)| ≤|φ′( f)||∇f|. 5.2 Logarithmic Sobolev inequalities and concentration 141 Definition The space (X, d, P) satisfies a logarithmic Sobolev inequality (or log-Sobolev inequality or LSI) with constant C > 0 if, for every locally Lipschitz f : X →R, Ent( f 2) ≤2CE  |∇f|2 . (5.2) The reason for our interest in log-Sobolev inequalities is that they imply measure concentration for Lipschitz functions, via what is known as the Herbst argument. 
Theorem 5.5 Suppose that (X, d, P) satisfies a log-Sobolev inequality with constant C > 0. Then if F : X →R is 1-Lipschitz, E|F| < ∞, and for every r ≥0, P h F −EF ≥r i ≤2e−r2/2C. Proof (the Herbst argument) First consider the case that F is bounded as well as Lipschitz, and note also that by replacing F with F −EF, it suffices to treat the case EF = 0. For λ > 0, it follows by Chebychev’s inequality that P [F ≥r] = P h eλF ≥eλri ≤e−λrEeλF. (5.3) For notational convenience, let H(λ) := EeλF, and consider the function f with f 2 := eλF. Then Ent( f 2) = E h λFeλFi −H(λ) log H(λ), and |∇f(x)| ≤e λF(x) 2 λ 2  |∇F(x)|. Taking expectation and using the fact that F is 1-Lipschitz (so that |∇F| ≤1) gives E|∇f|2 ≤λ2 4 E h |∇F|2eλFi ≤λ2 4 E h eλFi = λ2 4 H(λ); applying the LSI with constant C to this f thus yields E h λFeλFi −H(λ) log H(λ) = λH′(λ) −H(λ) log H(λ) ≤Cλ2 2 H(λ), or rearranging, H′(λ) λH(λ) −log H(λ) λ2 ≤C 2 . 142 Concentration of measure Now, if K(λ) := log H(λ) λ , then the right-hand side is just K′(λ), and so we have the simple differential inequality K′(λ) ≤C 2 . Since H(0) = 1, lim λ→0 K(λ) = lim λ→0 H′(λ) H(λ) = lim λ→0 E h FeλFi E eλF = EF = 0, and thus K(λ) = Z λ 0 K′(s)ds ≤ Z λ 0 C 2 ds = Cλ 2 . In other words, E h eλFi = H(λ) = eλK(λ) ≤e Cλ2 2 . It follows from this last estimate together with (5.3) that for F : X →R which is 1-Lipschitz and bounded, P [F ≥EF + r] ≤e−λr+ Cλ2 2 . Choosing λ = r C completes the proof under the assumption that F is bounded. In the general case, let ϵ > 0 and define the truncation Fϵ by Fϵ(x) :=              −1 ϵ , F(x) ≤−1 ϵ ; F(x), −1 ϵ ≤F(x) ≤1 ϵ ; 1 ϵ , F(x) ≥1 ϵ . Then Fϵ is 1-Lipschitz and bounded so that by the argument above, E h eλFϵi ≤eλEFϵ+ Cλ2 2 . The truncation Fϵ approaches F pointwise as ϵ →0, so by Fatou’s lemma, E h eλFi ≤elim infϵ→0 λEFϵe Cλ2 2 . It remains to show that EFϵ ϵ→0 − − − →EF (in particular that EF is defined); the proof is then completed exactly as in the bounded case. Now, F takes on some finite value at each point in X, so there is a constant K such that P [|F| ≤K] ≥7 8; 5.2 Logarithmic Sobolev inequalities and concentration 143 moreover, Fϵ converges pointwise, hence also in probability to F, so there is some ϵo such that for ϵ < ϵo, P [|Fϵ −F| > K] < 1 8, and so P [|Fϵ| ≤2K] ≥ 3 4. On the other hand, since |Fϵ| is bounded and 1-Lipschitz, it has already been shown that P  |Fϵ| −E|Fϵ| > t  ≤2e−t2 2C , (5.4) so that there is some r (depending only on C) such that P  |Fϵ| −E|Fϵ| > r  ≤1 4. It follows that for ϵ < ϵo, the set {|Fϵ| < 2K} ∩  |Fϵ| −E|Fϵ| ≤r  has probability at least 1 2, and is in particular non-empty. But on this set, E|Fϵ| ≤2K + r, and so E|Fϵ| is uniformly bounded, independent of ϵ. It follows from the version of (5.4) for Fϵ itself and Fubini’s theorem that E |Fϵ −EFϵ|2 = Z ∞ 0 tP [|Fϵ −EFϵ| > t] dt ≤ Z ∞ 0 2te−t2 2C dt = 2C, and using Fatou’s lemma again gives that E|F −EFϵ|2 ≤lim inf ϵ→0 E|Fϵ −EFϵ|2 ≤2C. In particular E|F| ≤E|F −EFϵ| + E|Fϵ| ≤ √ 2C + 2K + r, and so F is integrable. One final application of the convergence of Fϵ to F in probability gives |EF −EFϵ| ≤δ + E|Fϵ −F|1|Fϵ−F|>δ ≤δ + p E|Fϵ −F|2P [|Fϵ −F| > δ] ϵ→0 − − − →0. □ Log-Sobolev inequalities can be transfered between spaces via Lipschitz maps, as follows. Lemma 5.6 Let (X, dX, P) be a metric space equipped with a Borel probabil-ity measure P and let (Y, dY) be a metric space. Suppose that (X, dX, P) satisfies a log-Sobolev inequality with constant C. 
Let F : X →Y be a Lipschitz map 144 Concentration of measure with Lipschitz constant L, and let PF be the push-forward of P to Y via F; i.e, if A ⊆Y is a Borel set, then PF(A) = P(F−1(A)). Then (Y, dY, PF) satisfies a log-Sobolev inequality with constant CL2. Proof Let g : Y →R be locally Lipschitz. Then g ◦F : X →R is locally Lipschitz, and at each x ∈X, |∇(g ◦F)|(x) ≤L|∇g|(F(x)). Applying the LSI on X to g ◦F thus yields EntPF(g) = EntP(g ◦F) ≤2C Z ∇g ◦F 2(x)dP(x) ≤2CL2 Z ∇F 2(F(x))dP(x) = 2CL2 Z ∇F 2(y)dPF(y). □ A key advantages of the approach to concentration via log-Sobolev inequal-ities is that log-Sobolev inequalities tensorize; that is, if one has the same LSI on each of some finite collection of spaces, the same LSI holds on the product space, independent of the number of factors, as follows. Theorem 5.7 Suppose that each of the metric probability spaces (Xi, di, µi) (1 ≤i ≤n) satisifes a log-Sobolev inequality: for each i there is a Ci > 0 such that for every locally Lipschitz function f : Xi →R, Entµi(f 2) ≤2Ci Z |∇Xi f|2dµi. Let X = X1 × · · · × Xn equipped with the product probability measure µ := µ1 ⊗· · · ⊗µn. Then for every locally Lipschitz function f : X →R, Entµ( f 2) ≤2C Z n X i=1 |∇Xi f|2dµ, where |∇Xi f(x1, . . . , xn)| = lim sup yi→xi | f(x1, . . . , xi−1, yi, xi+1, . . . , xn) −f(x1, . . . , xn)| di(yi, xi) and C = max1≤i≤n Ci. 5.2 Logarithmic Sobolev inequalities and concentration 145 The crucial point here is that the constant C does not get worse with the number of factors; that is, the lemma gives dimension-free tensorization of log-Sobolev inequalities. The theorem follows immediately from the following property of entropy. Proposition 5.8 Let X = X1 × · · · × Xn and µ = µ1 ⊗· · · ⊗µn as above, and suppose that f : X →[0, ∞). For {x1, . . . , xn} \ {xi} fixed, write fi(xi) = f(x1, . . . , xn), thought of as a function of xi. Then Entµ( f) ≤ n X i=1 Z Entµi( fi)dµ. Proof The proof makes use of the following dual formulation of the defini-tion of entropy: given a probability space (Ω, F, P), the definition of Ent( f) = EntP( f) given in Equation (5.1) is equivalent to EntP( f) := sup (Z fgdP Z egdP ≤1 ) , as follows. First, for simplicity we may assume that R fdP = 1, since both expressions for the entropy are homogeneous of degree 1. Then the expression in (5.1) is EntP( f) = Z f log(f)dP. Now, if g := log(f), then R egdP = R fdP = 1, and so Z f log( f)dP = Z fgdP ≤sup (Z fgdP Z egdP ≤1 ) . On the other hand, Young’s inequality says that for u ≥0 and v ∈R, uv ≤u log(u) −u + ev; applying this to u = f and v = g and integrating shows that sup (Z fgdP Z egdP ≤1 ) ≤ Z f log( f)dP. Working now with the alternative definition of entropy, given g such that R egdµ ≤1, for each i define gi(x1, . . . , xn) := log        R eg(y1,...,yi−1,xi,...,xn)dµ1(y1) · · · dµi−1(yi−1) R eg(y1,...,yi,xi+1,...,xn)dµ1(y1) · · · dµi(yi)       , 146 Concentration of measure (note that gi only actually depends on xi, . . . , xn). Then n X i=1 gi(x1, . . . , xn) = log        eg(x1,...,xn) R eg(y1,...,yn)dµ1(y1) · · · dµn(yn)       ≥g(x1, . . . , xn), and by construction, Z e(gi)idµi = Z        R eg(y1,...,yi−1,xi,...,xn)dµ1(y1) · · · dµi−1(yi−1) R eg(y1,...,yi,xi+1,...,xn)dµ1(y1) · · · dµi(yi)       dµi(xi) = 1. Applying these two estimates together with Fubini’s theorem yields Z fgdµ ≤ n X i=1 Z fgidµ = n X i=1 Z Z fi(gi)idµi ! dµ ≤ n X i=1 Z Entµi( fi)dµ. 
□ In general, tensorizing concentration inequalities results in a loss in the con-stant which gets worse with the number of factors. This is why concentration as a consequence of a log-Sobolev inequality is so valuable: product spaces have the same type of concentration phenomena as their factors, as follows. Theorem 5.9 Let (X1, d1, µ1), . . . , (Xn, dn, µn) be compact metric probability spaces. Suppose that for each i, (Xi, di, µi) satisfies a log-Sobolev inequality with constant Ci. Let X = X1 × · · · × Xn be equipped with the product measure µ = µ1 ⊗· · · ⊗µn and the ℓ2-sum metric d2((x1, . . . , xn), (y1, . . . , yn)) = n X i=1 d2 i (xi, yi). If F : X →R is 1-Lipschitz, then for every r ≥0, P h F −EF ≥r i ≤2e−r2/4C, where C = max1≤i≤n Ci. Proof The main point is to connect the Lipschitz condition on F to the quan-tity n X i=1 ∇XiF 2 appearing in Theorem 5.7; the rest of the proof is a repeat of the Herbst argu-ment. Given F : X →R 1-Lipschitz, for each ϵ > 0 define the function Fϵ(x) = inf z∈X h F(z) + p ϵ2 + d2(x, z) i . 5.3 The Bakry– ´ Emery criterion and concentration 147 Then for all x ∈X, F(x) ≤Fϵ(x) ≤F(x) + ϵ (the first inequality is because F is 1-Lipschitz and the second is by choosing z = x in the infimum). Fix x = (x1, . . . , xn) ∈X. Since X is compact, there is an a ∈X such that Fϵ(x) = F(z) + p ϵ2 + d2(x, a). Now let yi ∈Xi, and let x(i,yi) = (x1, . . . , xi−1, yi, xi+1, . . . , xn). Then Fϵ(x(i,yi)) −Fϵ(x) ≤ p ϵ2 + d2(x(i,yi), a) − p ϵ2 + d2(x, a) = d2 i (yi, ai) −d2 i (xi, ai) p ϵ2 + d2(x(i,yi), a) + p ϵ2 + d2(x, a) ≤ di(xi, yi) di(ai, yi) + di(xi, ai) p ϵ2 + d2(x(i,yi), a) + p ϵ2 + d2(x, a) , by repeated applications of the triangle inequality. It follows that lim sup yi→xi Fϵ(x(i,yi)) −Fϵ(x) di(xi, yi) ≤ di(xi, ai) p ϵ2 + d2(x, a) , and so if ∇+ XiFϵ (x) = lim supyi→xi [Fϵ(x(i,yi))−Fϵ(x)]+ di(xi,yi) , then Pn i=1 ∇+ XiFϵ 2 (x) ≤1. Applying the same argument to −Fϵ then gives that Pn i=1 ∇XiFϵ 2 (x) ≤2. At this point, one can apply the Herbst argument to Fϵ, using the result of Theorem 5.7 and everywhere replacing |∇Fϵ|2 (x) with Pn i=1 ∇XiFϵ 2 (x). From this it follows that P h Fϵ −EFϵ ≥r i ≤2e−r2/4C, with C = max1≤i≤n Ci. The result now follows from the monotone convergence theorem, letting ϵ →0. □ 5.3 The Bakry– ´ Emery criterion and concentration for the classical compact groups All of the classical compact matrix groups satisfy a concentration of measure property similar to the one on the sphere given in L´ evy’s lemma. In almost every case, the optimal (i.e., with smallest constants) log-Sobolev inequalities follow from the Bakry-´ Emery curvature criterion. We begin with some back-ground in Riemannian geometry. 148 Concentration of measure Riemannian Geometry and Lie groups Let M be a smooth manifold embedded in Euclidean space. A tangent vector at a point p ∈M can be realized as the tangent vector γ′(0) to some curve γ : (−ϵ, ϵ) →M with γ(0) = p: γ′(0) = lim s→0 γ(s) −p s , where the operations are taking place in the ambient Euclidean space. On an abstract manifold, tangent vectors to a point are defined similarly as equiva-lence classes of curves through that point, but we will not need this in what follows. The set of tangent vectors to M at a point p is denoted TpM and the set of all tangent vectors to M is denoted T M. The manifolds we are working with all have the additional structure of a Riemannian metric. 
A Riemannian manifold (M, g) is a smooth manifold to-gether with a Riemannian metric g; i.e., a family of inner products: at each point p ∈M, gp : TpM × TpM →R defines an inner product on the tangent space TpM to M at p. A manifold embedded in Euclidean space inherits a met-ric just by restricting the ambient Euclidean metric. Different embeddings can give rise to different metrics but for the classical groups we will stick with the canonical embedding and resulting metric. Given a smooth function f : M →N between manifolds, the differential or push-forward of f at p ∈M is the map (f∗)p : TpM →T f(p)N which is defined as follows. Given a curve γ : (−ϵ, ϵ) →M with γ(0) = p and γ′(0) = X, ( f∗)p(X) = d dt f(γ(t)) t=0 . The definition of f∗is independent of γ. A vector field X on M is a smooth (infinitely differentiable) map X : M → T M such that for each p ∈M, X(p) ∈TpM. Note that the push-forward f∗can then be used to define a vector field f∗X on N. From a different perspective, the definition of f∗can be turned around to give a way that a smooth vector field X on M acts as a differential operator: given a vector field X, for any smooth function f on M, the function X( f) is defined by the requirement that for any curve γ : (−ϵ, ϵ) →M with γ(0) = p and γ′(0) = X(p), X(f)(p) = d dt f(γ(t)) t=0 ; that is, X( f)(p) = ( f∗)p(X). It is sometimes convenient to work in coordinates. A local frame {Li} is a collection of vector fields defined on an open set U ⊆M such that at each 5.3 The Bakry– ´ Emery criterion and concentration 149 point p ∈U, the vectors {Li(p)} ⊆TpM form a basis of TpM. The vector fields {Li} are called a local orthonormal frame if at each point in U, the {Li} are orthonormal with repect to g. Some manifolds only have local frames, not global ones; that is, you can’t define a smooth family of vector fields over the whole manifold which forms a basis of the tangent space at each point. This is true, for example of S2 ⊆R3. However, every compact Lie group has a global orthonormal frame (this follows from the comment after Equation (5.5) below.) The definitions above have been formulated for general embedded mani-folds, but in the setting of Lie groups, one can normally restrict attention to what happens at the identity e ∈G, and get the rest via translation within the group. Specifically, any vector X ∈Te(G) defines a vector field e X on G as fol-lows. For g ∈G fixed, let Lg : G →G denote the map given by Lg(h) = gh. Then for any h ∈G, define e X(h) := (Lh∗)eX = d dt[hγ(t)] t=0 , for any curve γ in G with γ(0) = e and γ′(0) = X. The vector field e X acts as a differential operator by e X(f)(h) = d dt f(hγ(t)) t=0 , since γh(t) = hγ(t) is a curve with γh(0) = h and γ′ h(0) = d dt[hγ(t)] t=0 = e X(h). A vector field Y on G with the property that for any g ∈G, Lg∗Y = Y is called a (left) invariant vector field. For any X ∈Te(G), the extension e X described above gives an invariant vector field on G, since Lg∗e X = (Lg∗)h(e X(h)) = (Lg∗)h((Lh∗)eX) = (Lgh∗)eX. Conversely, given an invariant vector field Y, Y(h) = (Lh∗)e(Y(e)) = g Y(e)(h), and so the mapping X 7→e X gives a bijection between invariant vector fields and elements of Te(G); either of these vector spaces may be referred to as the Lie algebra of G. From an intrinsic differential geometric point of view, this is a fine definition of e X, but because our Riemannian metric is the one inherited from the ambient Euclidean space, it helps to also have a more concrete perspective on e X. 
As above, let γ : (−ϵ, ϵ) →G be a curve with γ(0) = e and γ′(0) = X, and for each h ∈G, define γh by γh(t) = hγ(t). Then using the Euclidean structure gives that e X(h) = γ′ h(0) = lim s→0 γh(s) −h s = h lim s→0 γ(s) −e s ! = hγ′(0) = hX. 150 Concentration of measure In particular, this means that if G is one of the classical compact groups so that the inner product on TI(G) is the real part of the Hilbert–Schmidt inner product, then for any h ∈G and any e X and e Y defined as above, gh(e X(h), e Y(h)) = Re(Tr(hXY∗h∗)) = Re(Tr(XY∗)) = ⟨X, Y⟩. (5.5) In particular, if X and Y are orthogonal elements of TI(G), then e X and e Y are orthogonal at every point of G. Given two vector fields X and Y on M, there is a unique vector field [X, Y], called the Lie Bracket of X and Y, such that X, Y = X(Y( f)) −Y(X( f)). The fact that this is a vector field is not obvious, and is in fact a bit surprising, since vector fields can be thought of as first order differential operators, but this looks like a second-order operator. Indeed, just XY and YX by themselves are not vector fields, but in the case of the Lie bracket, the second-order parts cancel out. Exercise 5.10 Show that for F : M →N a smooth map between manifolds and X and Y vector fields on M, [F∗X, F∗Y] = F∗[X, Y]. It follows in particular from the exercise that on a Lie group G, if X, Y are invariant vector fields, then so is [X, Y]. Since for given X, Y ∈Te(G), e X and e Y are invariant, this means that there must be some vector Z ∈Te(G) such that e Z = [e X, e Y]. The identity of this vector Z is given in the following lemma. Lemma 5.11 Let G be one of the classical compact groups and g = Te(G) its Lie algebra. Let X, Y ∈g, and define [X, Y] = XY −YX, where here XY refers to the matrix product of X and Y. Then [X, Y] ∈g and ] [X, Y] = [e X, e Y], where the expression on the right is the Lie bracket of the vector fields e X and e Y as defined above. Proof We will verify that [X, Y] ∈g in the case of G = SU (n); the remaining cases are essentially the same. Recall that su(n) = {X ∈Mn(C) : X + X∗= 0, Tr(X) = 0} . 5.3 The Bakry– ´ Emery criterion and concentration 151 Given X, Y ∈su(n), [X, Y] + [X, Y]∗= XY −YX + Y∗X∗−X∗Y∗ = XY −YX + YX −XY = 0, and Tr([X, Y]) = Tr(XY −YX) = 0. To verify the claim that ] [X, Y] = [e X, e Y], fix a smooth function f on G and an element g ∈G. Then f(g(I + Z)) is a smooth function of Z, and so by Taylor’s theorem, we can write f(g(I + Z)) = c0 + c1(Z) + B(Z, Z) + R(Z), where c1 is a linear function, B is a symmetric bilinear form, |R(X)| ≤c3∥X∥3 H.S. for some c3 > 0, and c0, c1, B, and R all depend only on f and g. Expanding the exponential gives that f(g exp(Z)) = c0 + c1(Z) + e B(Z, Z) + e R(Z), for another symmetric bilinear e B and e R which vanishes to third order. Then for Z ∈TI(G), e Z( f)(g) = d dt f(g exp(tZ)) t=0 = d dt  c0 + c1(tZ) + e B(tZ, tZ) + e R(tZ)  t=0 = c1(Z). Now, e X(e Y( f))(g) = d dt e Y( f)(g exp(tX)) t=0 = d dt d ds f(g exp(tX) exp(sY)) s=0 t=0 = d dt d ds f(g(I + tX + r(tX))(I + sY + r(sY)) s=0 t=0 , where ∥r(Z)∥H.S. ≤c2∥Z∥2 H.S. for some c2 > 0 and Z ∈G. Proceeding as before, and writing only the terms of the expansion that give some contribution in the limit, e X(e Y( f))(g) = d dt d ds (c0 + c1(tX + sY + stXY) + B(tX + sY, tX + sY)) s=0 t=0 = c1(XY) + 2B(X, Y). It follows that e X, e Y(g) = c1(XY −YX) = ] X, Y(g), 152 Concentration of measure which proves the claim. □ We still need a few more notions in order to get to curvature. 
We still need a few more notions in order to get to curvature. Firstly, a connection $\nabla$ on $M$ is a way of differentiating one vector field in the direction of another: a connection $\nabla$ is a bilinear form on vector fields that assigns to vector fields $X$ and $Y$ a new vector field $\nabla_X Y$, such that for any smooth function $f : M \to \mathbb{R}$,
$$\nabla_{fX}Y = f\nabla_X Y \qquad\text{and}\qquad \nabla_X(fY) = f\nabla_X(Y) + X(f)Y.$$
A connection is called torsion-free if
$$\nabla_X Y - \nabla_Y X = [X,Y]. \tag{5.6}$$
There is a special connection on a Riemannian manifold, called the Levi-Civita connection, which is the unique torsion-free connection with the property that
$$X(g(Y,Z)) = g(\nabla_X Y, Z) + g(Y, \nabla_X Z). \tag{5.7}$$
This property may not look obviously interesting, but geometrically, it is a compatibility condition of the connection $\nabla$ with $g$. There is a notion of transporting a vector field in a "parallel way" along a curve, which is defined by the connection. The condition above means that the inner product defined by $g$ of two vector fields at a point is unchanged if you parallel-transport the vector fields (using $\nabla$ to define "parallel") along any curve.

Finally, we can define the Riemannian curvature tensor $R(X,Y)$: to each pair of vector fields $X$ and $Y$ on $M$, we associate an operator $R(X,Y)$ on vector fields defined by
$$R(X,Y)(Z) := \nabla_X(\nabla_Y Z) - \nabla_Y(\nabla_X Z) - \nabla_{[X,Y]}Z.$$
The Ricci curvature tensor is the function $\mathrm{Ric}(X,Y)$ on $M$ which, at each point $p \in M$, is the trace of the linear map on $T_pM$ defined by $Z \mapsto R(Z,Y)(X)$. In orthonormal local coordinates $\{L_i\}$,
$$\mathrm{Ric}(X,Y) = \sum_i g(R(X,L_i)L_i, Y).$$
(Seeing that this coordinate expression is right involves using some of the symmetries of $R$.)

The Bakry–Émery criterion

The Bakry–Émery criterion can be made more general, but for our purposes it suffices to formulate it as follows.

Theorem 5.12 (The Bakry–Émery curvature criterion) Let $(M,g)$ be a compact, connected, $m$-dimensional Riemannian manifold with normalized volume measure $\mu$. Suppose that there is a constant $c > 0$ such that for each $p \in M$ and each $v \in T_pM$,
$$\mathrm{Ric}_p(v,v) \ge \frac{1}{c}\,g_p(v,v).$$
Then $\mu$ satisfies a log-Sobolev inequality with constant $c$.

The following proposition together with the Bakry–Émery criterion leads to log-Sobolev inequalities, and thus concentration of measure, on most of the classical compact groups.

Proposition 5.13 If $G_n$ is one of $\mathsf{SO}(n)$, $\mathsf{SO}^-(n)$, $\mathsf{SU}(n)$, or $\mathsf{Sp}(2n)$, then for each $U \in G_n$ and each $X \in T_UG_n$,
$$\mathrm{Ric}_U(X,X) = c_{G_n}\,g_U(X,X),$$
where $g_U$ is the Hilbert–Schmidt metric and $c_{G_n}$ is given by
$$\begin{array}{c|c} G & c_G \\\hline \mathsf{SO}(n),\ \mathsf{SO}^-(n) & \frac{n-2}{4} \\ \mathsf{SU}(n) & \frac{n}{2} \\ \mathsf{Sp}(2n) & n+1 \end{array}$$

For the curvature computation, it is simplest to work with the symplectic group in its quaternionic form, with the Lie algebra
$$\mathfrak{su}_{\mathbb{H}}(n) = \{X \in M_n(\mathbb{H}) : X + X^* = 0\},$$
where $\mathbb{H} = \{a + bi + cj + dk : a,b,c,d \in \mathbb{R}\}$ is the skew field of quaternions, $\overline{(a+bi+cj+dk)} = a - bi - cj - dk$, and the (real) inner product on $\mathfrak{su}_{\mathbb{H}}(n)$ is given by $\langle X,Y\rangle = \mathrm{Re}\,\mathrm{Tr}(XY^*)$.

The following proposition is a key part of the proof of Proposition 5.13.

Proposition 5.14 Let $X \in \mathfrak{g}$, where $\mathfrak{g}$ is one of $\mathfrak{so}(n)$, $\mathfrak{su}(n)$, or $\mathfrak{su}_{\mathbb{H}}(n)$, and let $\{L_\alpha\}_{\alpha\in A}$ be an orthonormal basis of $\mathfrak{g}$. Then
$$-\frac14\sum_{\alpha\in A}[[X,L_\alpha],L_\alpha] = \left(\frac{\beta(n+2)}{4} - 1\right)X,$$
where $\beta = 1, 2, 4$ as $\mathfrak{g}$ is $\mathfrak{so}(n)$, $\mathfrak{su}(n)$, or $\mathfrak{su}_{\mathbb{H}}(n)$.

Proof We first observe that the expression on the left is independent of the choice of orthonormal basis. Indeed, each $\mathfrak{g}$ is a real inner product space (with the inner product given by $\langle X,Y\rangle = \mathrm{Re}(\mathrm{Tr}(XY^*))$). If $\{K_\alpha\}_{\alpha\in A}$ is a second orthonormal basis of $\mathfrak{g}$, then there is an orthogonal matrix $U = [u_{\alpha,\beta}]_{\alpha,\beta\in A}$ such that
$$K_\beta = \sum_\alpha u_{\alpha,\beta}L_\alpha.$$
Then
$$\sum_{\beta\in A}[[X,K_\beta],K_\beta] = \sum_{\beta\in A}\sum_{\alpha_1,\alpha_2\in A}u_{\alpha_1,\beta}u_{\alpha_2,\beta}[[X,L_{\alpha_1}],L_{\alpha_2}] = \sum_{\alpha\in A}[[X,L_\alpha],L_\alpha]$$
by the orthogonality of the matrix $U$.

Note that $\mathfrak{so}(1) = \mathfrak{su}(1) = \{0\}$. It is easy to check the claim for $\mathfrak{su}_{\mathbb{H}}(1)$ with the basis $\{i,j,k\}$, so in what follows, we will assume $n \ge 2$.

If $1 \le j,k \le n$, we use $E_{jk}$ to denote the matrix with 1 in the $j$-$k$ entry and zeros otherwise. For $q \in \{i,j,k\}$ and $1 \le \ell < n$, define
$$D_\ell^q := \frac{q}{\sqrt{\ell + \ell^2}}\left(\sum_{r=1}^\ell E_{rr} - \ell E_{\ell+1,\ell+1}\right),$$
and let $D_n^q := \frac{q}{\sqrt n}I_n$. Define $D_\ell := D_\ell^i$. For $q \in \{1,i,j,k\}$ and $1 \le \ell < r \le n$, define
$$F_{\ell r}^q := \frac{q}{\sqrt2}E_{\ell,r} - \frac{\bar q}{\sqrt2}E_{r,\ell}.$$
Let $F_{\ell r} := F_{\ell r}^1$ and let $G_{\ell r} := F_{\ell r}^i$. Then

• $\{F_{\ell r} : 1 \le \ell < r \le n\}$ is an orthonormal basis of $\mathfrak{so}(n)$;
• $\{D_\ell : 1 \le \ell < n\} \cup \{F_{\ell r}, G_{\ell r} : 1 \le \ell < r \le n\}$ is an orthonormal basis of $\mathfrak{su}(n)$;
• $\{D_\ell^q : 1 \le \ell \le n,\ q \in \{i,j,k\}\} \cup \{F_{\ell r}^q : 1 \le \ell < r \le n,\ q \in \{1,i,j,k\}\}$ is an orthonormal basis of $\mathfrak{su}_{\mathbb{H}}(n)$.

It suffices to verify the claim for these orthonormal bases $\{L_\alpha\}_{\alpha\in A}$. We can make a further simplification as follows: suppose that the claimed formula holds for a particular orthonormal basis $\{L_\alpha\}_{\alpha\in A}$ and a particular choice of $X$. Let $U \in G$. Then
$$\left(\frac{\beta(n+2)}{4}-1\right)UXU^* = -\frac14\sum_{\alpha\in A}U[[X,L_\alpha],L_\alpha]U^* = -\frac14\sum_{\alpha\in A}[[UXU^*, UL_\alpha U^*], UL_\alpha U^*].$$
It is easy to check that $\{UL_\alpha U^*\}_{\alpha\in A}$ is again an orthonormal basis of $\mathfrak{g}$, and so if the claimed formula holds for $X$, then it holds for $UXU^*$.

Take $X = F_{12}$. We will show that the collection $\{UXU^* : U \in G\}$ spans $\mathfrak{g}$, so that it finally suffices to verify the claimed formula for the orthonormal bases listed above and the single element $X = F_{12}$. All of the $F_{\ell r}$ are of the form $UXU^*$ for a permutation matrix $U$. Choosing
$$U = \frac{1}{\sqrt2}\begin{pmatrix}1+q & 0\\ 0 & 1-q\end{pmatrix}\oplus I_{n-2}$$
for $q \in \{i,j,k\}$ gives $UXU^* = F_{12}^q$, and further conjugation by permutation matrices yields all the $F_{\ell r}^q$. Choosing
$$U = \frac{1}{\sqrt2}\begin{pmatrix}q & 1\\ 1 & q\end{pmatrix}\oplus I_{n-2}$$
for $q \in \{i,j,k\}$ gives $UXU^* = D_1^q$; further conjugation by permutation matrices yields all matrices of the form $\frac{q}{\sqrt2}(E_{aa} - E_{bb})$ with $a \ne b$. By taking linear combinations, this yields all of the $D_\ell^q$ for $1 \le \ell < n$. Finally, note that
$$\begin{pmatrix}1 & 0\\ 0 & j\end{pmatrix}\begin{pmatrix}i & 0\\ 0 & -i\end{pmatrix}\begin{pmatrix}1 & 0\\ 0 & -j\end{pmatrix} = \begin{pmatrix}i & 0\\ 0 & i\end{pmatrix};$$
taking $U = \begin{pmatrix}1 & 0\\ 0 & j\end{pmatrix}\oplus I_{n-2}$, and then taking linear combinations of conjugations of $UXU^*$ by permutation matrices, results in a (real) multiple of $D_n^i$; the remaining $D_n^q$ can be obtained similarly.

All that remains is to verify the formula for $X = F_{12}$. Now, $F_{12}$ commutes with all the $D_\ell^q$ with $\ell > 1$ and all the $F_{\ell r}^q$ with $2 < \ell < r \le n$. For $q \in \{i,j,k\}$,
$$[[F_{12}, F_{12}^q], F_{12}^q] = [[F_{12}, D_1^q], D_1^q] = -2F_{12}.$$
If $1 \le \ell \le 2 < r \le n$, then
$$[[F_{12}, F_{\ell r}^q], F_{\ell r}^q] = -\frac12 F_{12}.$$
From this it is clear that $\sum_\alpha[[X,L_\alpha],L_\alpha]$ is some multiple of $X$; collecting terms yields exactly the claimed constant. □
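Proposition 5.14 can also be checked numerically. The sketch below (illustrative, not part of the text) builds the orthonormal basis of $\mathfrak{su}(n)$ described above and verifies the identity with $\beta = 2$; i.e., that $-\frac14\sum_\alpha[[X,L_\alpha],L_\alpha] = \frac n2 X$. The same template with the $F_{\ell r}$ alone tests the $\mathfrak{so}(n)$ case, where the constant is $\frac{n-2}{4}$.

```python
import numpy as np

def su_basis(n):
    """Orthonormal basis of su(n) for <X,Y> = Re Tr(X Y*)."""
    basis = []
    for l in range(1, n):                         # the D_l directions
        D = np.zeros((n, n), dtype=complex)
        D[range(l), range(l)] = 1j
        D[l, l] = -1j * l
        basis.append(D / np.sqrt(l + l ** 2))
    for l in range(n):                            # the F_{lr} and G_{lr} directions
        for r in range(l + 1, n):
            F = np.zeros((n, n), dtype=complex); F[l, r], F[r, l] = 1, -1
            G = np.zeros((n, n), dtype=complex); G[l, r], G[r, l] = 1j, 1j
            basis.append(F / np.sqrt(2)); basis.append(G / np.sqrt(2))
    return basis

def comm(X, Y):
    return X @ Y - Y @ X

n = 6
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X = (A - A.conj().T) / 2
X -= (np.trace(X) / n) * np.eye(n)                # X in su(n)

S = -0.25 * sum(comm(comm(X, L), L) for L in su_basis(n))
print(np.allclose(S, (n / 2) * X))                # beta = 2: constant is n/2 -> True
```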
We now give the proof of Proposition 5.13.

Proof of Proposition 5.13 First we observe that the defining properties (5.6) and (5.7) of the Levi-Civita connection imply that for all vector fields $X$, $Y$, $Z$,
$$2g(\nabla_X Y, Z) = X(g(Y,Z)) + Y(g(Z,X)) - Z(g(X,Y)) + g([X,Y],Z) + g([Z,X],Y) + g(X,[Z,Y]).$$
In particular, since $g(\tilde X,\tilde Y)$ is constant for any vectors $X, Y \in T_I(G)$, it follows that
$$2g(\nabla_{\tilde X}\tilde Y, \tilde Z) = g([\tilde X,\tilde Y],\tilde Z) + g([\tilde Z,\tilde X],\tilde Y) + g(\tilde X,[\tilde Z,\tilde Y]) = \langle[X,Y],Z\rangle + \langle[Z,X],Y\rangle + \langle X,[Z,Y]\rangle$$
for all $X,Y,Z \in T_I(G)$. Using the fact that $X,Y,Z \in T_I(G)$, so that, e.g., $X^* = -X$, leads to the further simplification
$$2g(\nabla_{\tilde X}\tilde Y,\tilde Z) = \langle[X,Y],Z\rangle = g([\tilde X,\tilde Y],\tilde Z).$$
Taking $Z = L_\alpha$ for $\{L_\alpha\}_{\alpha\in A}$ an orthonormal basis of $T_I(G)$ and summing over $\alpha$ gives that
$$\nabla_{\tilde X}\tilde Y = \frac12\widetilde{[X,Y]}.$$
Then
$$R(\tilde X, \tilde L_\alpha)\tilde L_\alpha = \nabla_{\tilde X}(\nabla_{\tilde L_\alpha}\tilde L_\alpha) - \nabla_{\tilde L_\alpha}(\nabla_{\tilde X}\tilde L_\alpha) - \nabla_{[\tilde X,\tilde L_\alpha]}\tilde L_\alpha = -\frac14\widetilde{[[X,L_\alpha],L_\alpha]}.$$
The coordinate expression for the Ricci curvature together with Proposition 5.14 now gives that
$$\mathrm{Ric}(\tilde X,\tilde X) = -\frac14\sum_{\alpha\in A}\big\langle[[X,L_\alpha],L_\alpha],X\big\rangle = \left(\frac{\beta(n+2)}{4}-1\right)\langle X,X\rangle = \left(\frac{\beta(n+2)}{4}-1\right)g(\tilde X,\tilde X). \qquad\Box$$

Log-Sobolev inequalities, and hence concentration inequalities, now follow immediately from the Bakry–Émery Theorem for the groups listed above; i.e., all of the classical compact groups except $\mathsf{O}(n)$ and $\mathsf{U}(n)$. On $\mathsf{O}(n)$, we cannot expect more, and indeed more is not true, because $\mathsf{O}(n)$ is disconnected. We do have the best that can be hoped for, namely concentration on each of the pieces. In the case of $\mathsf{U}(n)$, though, there is the same kind of concentration that we have on $\mathsf{SU}(n)$. There is no non-zero lower bound on the Ricci curvature on $\mathsf{U}(n)$: $\mathrm{Ric}(\tilde X,\tilde X) = 0$ when $X = iI \in T_I(\mathsf{U}(n))$. Instead, one can obtain a log-Sobolev inequality on $\mathsf{U}(n)$ from the one on $\mathsf{SU}(n)$ via a coupling argument. The following slightly non-standard coupling of the Haar measures on $\mathsf{SU}(n)$ and $\mathsf{U}(n)$ is the key to obtaining the right dimensional dependence in the constant.

Lemma 5.15 Let $\theta$ be uniformly distributed in $\left[0,\frac{2\pi}{n}\right]$ and let $V \in \mathsf{SU}(n)$ be uniformly distributed, with $\theta$ and $V$ independent. Then $e^{i\theta}V$ is uniformly distributed in $\mathsf{U}(n)$.

Proof Let $X$ be uniformly distributed in $[0,1)$, $K$ uniformly distributed in $\{0,\dots,n-1\}$, and $V$ uniformly distributed in $\mathsf{SU}(n)$, with $(X,K,V)$ independent. Consider
$$U = e^{2\pi iX/n}e^{2\pi iK/n}V.$$
On one hand, it is easy to see that $X + K$ is uniformly distributed in $[0,n)$, so that $e^{2\pi i(X+K)/n}$ is uniformly distributed on $S^1$. Thus $U \stackrel{d}{=} \omega V$ for $\omega$ uniform in $S^1$ and independent of $V$. It is clear that the distribution of $\omega V$ is translation-invariant on $\mathsf{U}(n)$, so that $U$ is Haar-distributed.

On the other hand, if $I_n$ is the $n\times n$ identity matrix, then $e^{2\pi iK/n}I_n \in \mathsf{SU}(n)$. By the translation invariance of Haar measure on $\mathsf{SU}(n)$, this implies that $e^{2\pi iK/n}V \stackrel{d}{=} V$, and so $e^{2\pi iX/n}V \stackrel{d}{=} U$. □
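Lemma 5.15 is easy to check by simulation. In the sketch below (illustrative; it samples Haar measure on $\mathsf{U}(n)$ with scipy.stats.unitary_group, and produces $\mathsf{SU}(n)$ samples by dividing out an $n$th root of the determinant, one standard construction), the coupled samples $e^{i\theta}V$ should reproduce Haar statistics on $\mathsf{U}(n)$; here we test the known Haar identity $\mathbb{E}|\mathrm{Tr}(U)|^2 = 1$:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)
n, N = 8, 20000

# V Haar on SU(n): take Haar U(n) and divide by an n-th root of the determinant
U = unitary_group.rvs(n, size=N, random_state=rng)
det_roots = np.exp(1j * np.angle(np.linalg.det(U)) / n)
V = U / det_roots[:, None, None]

theta = rng.uniform(0, 2 * np.pi / n, size=N)      # theta uniform on [0, 2*pi/n]
W = np.exp(1j * theta)[:, None, None] * V           # the coupling e^{i theta} V

# for Haar measure on U(n), E|Tr(W)|^2 = 1; the coupled samples should match
print(np.mean(np.abs(np.trace(W, axis1=1, axis2=2)) ** 2))  # ~ 1.0
```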
The log-Sobolev inequality on $\mathsf{U}(n)$ now follows using this coupling together with the tensorization property of LSI, as follows.

Proof of LSI on $\mathsf{U}(n)$ First, for the interval $[0,2\pi]$ equipped with its standard metric and uniform measure, the optimal constant in (5.2) for functions $f$ with $f(0) = f(2\pi)$ is known to be 1; see e.g. [ ]. This fact completes the proof in the case $n = 1$; from now on, assume that $n \ge 2$.

Suppose that $f : [0,\pi] \to \mathbb{R}$ is locally Lipschitz, and define a function $\tilde f : [0,2\pi] \to \mathbb{R}$ by reflection:
$$\tilde f(x) := \begin{cases} f(x), & 0 \le x \le \pi;\\ f(2\pi - x), & \pi \le x \le 2\pi.\end{cases}$$
Then $\tilde f$ is locally Lipschitz and $\tilde f(2\pi) = \tilde f(0)$, so $\tilde f$ satisfies an LSI for uniform measure on $[0,2\pi]$ with constant 1. If $\mu_{[a,b]}$ denotes uniform (probability) measure on $[a,b]$, then $\mathrm{Ent}_{\mu_{[0,2\pi]}}(\tilde f^2) = \mathrm{Ent}_{\mu_{[0,\pi]}}(f^2)$, and
$$\frac{1}{2\pi}\int_0^{2\pi}|\nabla\tilde f(x)|^2\,dx = \frac{1}{\pi}\int_0^\pi|\nabla f(x)|^2\,dx,$$
so $f$ itself satisfies an LSI for uniform measure on $[0,\pi]$ with constant 1 as well. It then follows by a scaling argument that the optimal logarithmic Sobolev constant on $\left[0,\frac{\pi\sqrt2}{\sqrt n}\right]$ is $2/n$ (for $g : \left[0,\frac{\pi\sqrt2}{\sqrt n}\right]\to\mathbb{R}$, apply the LSI to $g\left(\sqrt{\frac2n}\,x\right)$ and rearrange to get the LSI on $\left[0,\frac{\pi\sqrt2}{\sqrt n}\right]$).

Combining Proposition 5.13 with the Bakry–Émery criterion shows that $\mathsf{SU}(n)$ satisfies a log-Sobolev inequality with constant $2/n$ when equipped with its geodesic distance, and hence also when equipped with the Hilbert–Schmidt metric. By the tensorization property of log-Sobolev inequalities in product spaces (Lemma 5.7), the product space $\left[0,\frac{\pi\sqrt2}{\sqrt n}\right]\times\mathsf{SU}(n)$, equipped with the $L_2$-sum metric, satisfies a log-Sobolev inequality with constant $2/n$ as well.

Define the map $F : \left[0,\frac{\pi\sqrt2}{\sqrt n}\right]\times\mathsf{SU}(n)\to\mathsf{U}(n)$ by $F(t,V) = e^{\sqrt2 it/\sqrt n}V$. By Lemma 5.15, the push-forward via $F$ of the product of uniform measure on $\left[0,\frac{\pi\sqrt2}{\sqrt n}\right]$ with uniform measure on $\mathsf{SU}(n)$ is uniform measure on $\mathsf{U}(n)$. Moreover, this map is $\sqrt3$-Lipschitz:
$$\begin{aligned}\left\|e^{\sqrt2 it_1/\sqrt n}V_1 - e^{\sqrt2 it_2/\sqrt n}V_2\right\|_{HS} &\le \left\|e^{\sqrt2 it_1/\sqrt n}V_1 - e^{\sqrt2 it_1/\sqrt n}V_2\right\|_{HS} + \left\|e^{\sqrt2 it_1/\sqrt n}V_2 - e^{\sqrt2 it_2/\sqrt n}V_2\right\|_{HS}\\ &= \|V_1 - V_2\|_{HS} + \left\|e^{\sqrt2 it_1/\sqrt n}I_n - e^{\sqrt2 it_2/\sqrt n}I_n\right\|_{HS}\\ &\le \|V_1 - V_2\|_{HS} + \sqrt2|t_1 - t_2|\\ &\le \sqrt3\sqrt{\|V_1-V_2\|_{HS}^2 + |t_1-t_2|^2}.\end{aligned}$$
It now follows from Lemma 5.6 that Haar measure on $\mathsf{U}(n)$ satisfies a logarithmic Sobolev inequality with constant $(\sqrt3)^2\cdot\frac2n = \frac6n$. □

Summarizing, we have the following.

Theorem 5.16 The matrix groups and cosets $\mathsf{SO}(n)$, $\mathsf{SO}^-(n)$, $\mathsf{SU}(n)$, $\mathsf{U}(n)$, and $\mathsf{Sp}(2n)$, with Haar probability measure and the Hilbert–Schmidt metric, satisfy logarithmic Sobolev inequalities with the following constants:
$$\begin{array}{c|c} G & C_G\\\hline \mathsf{SO}(n),\ \mathsf{SO}^-(n) & \frac{4}{n-2}\\ \mathsf{SU}(n) & \frac2n\\ \mathsf{U}(n) & \frac6n\\ \mathsf{Sp}(2n) & \frac{1}{n+1}\end{array}$$

Recall from Lemma 1.3 that the geodesic distance on $\mathsf{U}(n)$ is bounded above by $\pi/2$ times the Hilbert–Schmidt distance. Thus Theorem 5.16 implies, for example, that $\mathsf{U}(n)$ equipped with the geodesic distance also satisfies a log-Sobolev inequality, with constant $3\pi^2/2n$.

The following summarizes the concentration properties of Haar measure on the classical compact groups which follow from the log-Sobolev constants above together with Theorem 5.9.

Theorem 5.17 Given $n_1,\dots,n_k \in \mathbb{N}$, let $X = G_{n_1}\times\cdots\times G_{n_k}$, where for each of the $n_i$, $G_{n_i}$ is one of $\mathsf{SO}(n_i)$, $\mathsf{SO}^-(n_i)$, $\mathsf{SU}(n_i)$, $\mathsf{U}(n_i)$, or $\mathsf{Sp}(2n_i)$. Let $X$ be equipped with the $L_2$-sum of Hilbert–Schmidt metrics on the $G_{n_i}$. Suppose that $F : X \to \mathbb{R}$ is $L$-Lipschitz, and that $\{U_j \in G_{n_j} : 1 \le j \le k\}$ are independent, Haar-distributed random matrices. Then for each $t > 0$,
$$\mathbb{P}\left[F(U_1,\dots,U_k) \ge \mathbb{E}F(U_1,\dots,U_k) + t\right] \le e^{-(n-2)t^2/24L^2},$$
where $n = \min\{n_1,\dots,n_k\}$.

5.4 Concentration of the spectral measure

The following theorem quantifies the rate of convergence of the empirical spectral measure of a random unitary matrix to the uniform measure on the circle, and more generally, of the empirical spectral measure of a power of a random unitary matrix. Recall that $W_p$ denotes the $L_p$ Kantorovich distance between measures (see Section 2.1).

Theorem 5.18 Let $\mu_{n,m}$ be the spectral measure of $U^m$, where $1 \le m \le n$ and $U \in \mathsf{U}(n)$ is distributed according to Haar measure, and let $\nu$ denote the uniform measure on $S^1$. Then for each $p \ge 1$,
$$\mathbb{E}W_p(\mu_{n,m},\nu) \le \frac{Cp\sqrt{m\left[\log\left(\frac nm\right)+1\right]}}{n},$$
where $C > 0$ is an absolute constant. For each $t > 0$,
$$\mathbb{P}\left[W_p(\mu_{n,m},\nu) \ge \frac{C\sqrt{m\left[\log\left(\frac nm\right)+1\right]}}{n} + t\right] \le \exp\left[-\frac{n^2t^2}{24m}\right]$$
for $1 \le p \le 2$, and
$$\mathbb{P}\left[W_p(\mu_{n,m},\nu) \ge \frac{Cp\sqrt{m\left[\log\left(\frac nm\right)+1\right]}}{n} + t\right] \le \exp\left[-\frac{n^{1+2/p}t^2}{24m}\right]$$
for $p > 2$, where $C > 0$ is an absolute constant.

The change in behavior observed above at $p = 2$ is typical for the Kantorovich distances.
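Here is a quick Monte Carlo illustration of the $m = 1$ case of Theorem 5.18 (not part of the text; it assumes NumPy/SciPy). It estimates $W_1(\mu_n,\nu_n)$ by coupling the ordered eigenangles to the evenly spaced grid $\{2\pi j/n\}$, which, by the computation in the proof below, controls $W_1(\mu_n,\nu)$ up to $\pi/n$; the output should be of the same order as $\sqrt{\log n}/n$ (edge effects at the wraparound point $0 \equiv 2\pi$ are ignored in this rough check):

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(1)
for n in [50, 100, 200]:
    est = []
    for _ in range(50):
        U = unitary_group.rvs(n, random_state=rng)
        theta = np.sort(np.angle(np.linalg.eigvals(U)) % (2 * np.pi))
        grid = 2 * np.pi * np.arange(1, n + 1) / n
        est.append(np.mean(np.abs(theta - grid)))   # couples theta_(j) with 2*pi*j/n
    print(n, np.mean(est), np.sqrt(np.log(n)) / n)  # same order of magnitude
```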
By a simple application of the Borel–Cantelli lemma, one gets an almost sure rate of convergence, as follows.

Corollary 5.19 Suppose that for each $n$, $U_n \in \mathsf{U}(n)$ is Haar-distributed and $1 \le m_n \le n$. Let $\nu$ denote the uniform measure on $S^1$. There is an absolute constant $C$ such that given $p \ge 1$, with probability 1, for all sufficiently large $n$,
$$W_p(\mu_{n,m_n},\nu) \le \frac{C\sqrt{m_n\log(n)}}{n}$$
if $1 \le p \le 2$, and
$$W_p(\mu_{n,m_n},\nu) \le \frac{Cp\sqrt{m_n\log(n)}}{n^{\frac12+\frac1p}}$$
if $p > 2$.

Observe in particular the change in behavior of the bound as $m$ grows: for $m = 1$,
$$W_p(\mu_n,\nu) \le \frac{C\sqrt{\log(n)}}{n}.$$
Since $\mu_n$ is supported on $n$ points, this estimate means that the eigenvalues are very regularly spaced; $W_p(\mu_n,\nu)$ is only logarithmically larger than the distance from $\nu$ to a discrete measure on $n$ points exactly evenly spaced around the circle (which is exactly $\frac\pi n$). At the opposite extreme, when $m = n$ the bound becomes
$$W_p(\mu_{n,n},\nu) \le \frac{C}{\sqrt n}.$$
This result is in fact classical (and known to be sharp), since by Theorem 3.14, $\mu_{n,n}$ is exactly the empirical measure of $n$ i.i.d. uniform random points on the circle. One would thus expect the eigenvalues of $U^n$ to be considerably less regular than those of $U$, and indeed this and the intermediate phenomena can be observed in the simulation shown in Figure 5.1.

[Figure 5.1: the eigenvalues of $U^m$ for $U$ an $80\times80$ random unitary matrix.]

The first step in proving Theorem 5.18 is to prove a concentration inequality for the number $N_\theta^{(m)}$ of eigenangles of $U^m$ in $[0,\theta)$. Such a concentration result is an easy consequence of Theorems 4.1 and 3.14. Specifically, recall that since the eigenangles of a random unitary matrix are a determinantal projection process, it follows from Theorem 4.1 that
$$N_\theta^{(1)} \stackrel{d}{=} \sum_{k=1}^n \xi_k,$$
where the $\xi_k$ are independent Bernoulli random variables. Moreover, by Theorem 3.14, $N_\theta^{(m)}$ is equal in distribution to the total number of eigenvalue angles in $[0,\theta)$ of each of $U_0,\dots,U_{m-1}$, where $U_0,\dots,U_{m-1}$ are independent and $U_j$ is Haar-distributed in $\mathsf{U}\left(\left\lceil\frac{n-j}{m}\right\rceil\right)$; that is,
$$N_\theta^{(m)} \stackrel{d}{=} \sum_{j=0}^{m-1}N_{j,\theta}, \tag{5.8}$$
where the $N_{j,\theta}$ are the independent counting functions corresponding to $U_0,\dots,U_{m-1}$. It is therefore also true that $N_\theta^{(m)}$ is distributed exactly as a sum of $n$ independent Bernoulli random variables.

Generalizing Theorem 4.11 and its proof, it follows from Bernstein's inequality (Theorem 5.1) that, for each $t > 0$,
$$\mathbb{P}\left[\left|N_\theta^{(m)} - \mathbb{E}N_\theta^{(m)}\right| > t\right] \le 2\exp\left(-\min\left(\frac{t^2}{4\sigma^2},\frac t2\right)\right), \tag{5.9}$$
where $\sigma^2 = \mathrm{Var}\,N_\theta^{(m)}$. Estimates for $\mathbb{E}N_\theta^{(m)}$ and $\sigma^2$ follow easily from the $m = 1$ case: recall from Propositions 4.7 and 4.8 that
$$\mathbb{E}N_\theta^{(1)} = \frac{n\theta}{2\pi}\qquad\text{and}\qquad \mathrm{Var}\,N_\theta^{(1)} \le \log(n) + 1.$$

Proposition 5.20 Let $U$ be uniform in $\mathsf{U}(n)$ and $1 \le m \le n$. For $\theta \in [0,2\pi)$, let $N_\theta^{(m)}$ be the number of eigenvalue angles of $U^m$ in $[0,\theta)$. Then
$$\mathbb{E}N_\theta^{(m)} = \frac{n\theta}{2\pi}\qquad\text{and}\qquad \mathrm{Var}\,N_\theta^{(m)} \le m\left[\log\left(\frac nm\right)+1\right].$$

Proof This follows immediately from the representation of $N_\theta^{(m)}$ in Equation (5.8); note that the $n/m$ in the variance bound, as opposed to the more obvious $\lceil n/m\rceil$, follows from the concavity of the logarithm. □

Putting these estimates together with Equation (5.9) gives that for all $t > 0$,
$$\mathbb{P}\left[\left|N_\theta^{(m)} - \frac{n\theta}{2\pi}\right| > t\right] \le 2\exp\left(-\min\left(\frac{t^2}{4m\left[\log\left(\frac nm\right)+1\right]},\frac t2\right)\right). \tag{5.10}$$
This inequality gives a new route to eigenvalue rigidity: the individual eigenvalues tend to be very close to their predicted locations because of the concentration of the counting function.
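The counting-function concentration behind (5.10) is easy to see in simulation. In the sketch below (illustrative, not part of the text), the number of eigenangles of a Haar unitary matrix in $[0,\theta)$ has mean $n\theta/2\pi$ and variance no larger than $\log(n)+1$, which is tiny compared to the mean:

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(2)
n, reps, theta = 100, 400, np.pi / 3

counts = []
for _ in range(reps):
    U = unitary_group.rvs(n, random_state=rng)
    ang = np.angle(np.linalg.eigvals(U)) % (2 * np.pi)
    counts.append(np.sum(ang < theta))
counts = np.array(counts)

print(counts.mean(), n * theta / (2 * np.pi))  # mean close to n*theta/(2*pi)
print(counts.var(), np.log(n) + 1)             # variance well below log(n) + 1
```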
Rigidity of specific eigenvalues can be explicitly quantified as follows.

Lemma 5.21 Let $1 \le m \le n$ and let $U \in \mathsf{U}(n)$ be uniformly distributed. Denote by $e^{i\theta_j}$, $1 \le j \le n$, the eigenvalues of $U^m$, ordered so that $0 \le \theta_1 \le\cdots\le\theta_n < 2\pi$. Then for each $j$ and $u > 0$,
$$\mathbb{P}\left[\left|\theta_j - \frac{2\pi j}{n}\right| > \frac{4\pi}{n}u\right] \le 4\exp\left(-\min\left(\frac{u^2}{m\left[\log\left(\frac nm\right)+1\right]},u\right)\right). \tag{5.11}$$

Proof For each $1 \le j \le n$ and $u > 0$, if $j + 2u < n$ then
$$\mathbb{P}\left[\theta_j > \frac{2\pi j}{n} + \frac{4\pi}{n}u\right] = \mathbb{P}\left[N^{(m)}_{\frac{2\pi(j+2u)}{n}} < j\right] = \mathbb{P}\left[j + 2u - N^{(m)}_{\frac{2\pi(j+2u)}{n}} > 2u\right] \le \mathbb{P}\left[\left|N^{(m)}_{\frac{2\pi(j+2u)}{n}} - \mathbb{E}N^{(m)}_{\frac{2\pi(j+2u)}{n}}\right| > 2u\right].$$
If $j + 2u \ge n$ then
$$\mathbb{P}\left[\theta_j > \frac{2\pi j}{n} + \frac{4\pi}{n}u\right] = \mathbb{P}[\theta_j > 2\pi] = 0,$$
and the above inequality holds trivially. The probability that $\theta_j < \frac{2\pi j}{n} - \frac{4\pi}{n}u$ is bounded in the same way. Inequality (5.11) now follows from (5.10). □

We are now in a position to bound the expected distance between the empirical spectral measure of $U^m$ and uniform measure. Let $\theta_j$ be as in Lemma 5.21. Then by Fubini's theorem,
$$\begin{aligned}\mathbb{E}\left|\theta_j - \frac{2\pi j}{n}\right|^p &= \int_0^\infty pt^{p-1}\mathbb{P}\left[\left|\theta_j-\frac{2\pi j}{n}\right|>t\right]dt = \frac{(4\pi)^pp}{n^p}\int_0^\infty u^{p-1}\mathbb{P}\left[\left|\theta_j - \frac{2\pi j}{n}\right| > \frac{4\pi}{n}u\right]du\\ &\le \frac{4(4\pi)^pp}{n^p}\left[\int_0^\infty u^{p-1}e^{-u^2/m[\log(n/m)+1]}\,du + \int_0^\infty u^{p-1}e^{-u}\,du\right]\\ &= \frac{4(4\pi)^p}{n^p}\left[\left(m\left[\log\left(\frac nm\right)+1\right]\right)^{p/2}\Gamma\left(\frac p2+1\right) + \Gamma(p+1)\right]\\ &\le 8\,\Gamma(p+1)\left(\frac{4\pi}{n}\sqrt{m\left[\log\left(\frac nm\right)+1\right]}\right)^p.\end{aligned}$$
Let $\nu_n$ be the measure which puts mass $\frac1n$ at each of the points $e^{2\pi ij/n}$, $1 \le j \le n$. Then
$$\mathbb{E}W_p(\mu_{n,m},\nu_n)^p \le \mathbb{E}\left[\frac1n\sum_{j=1}^n\left|e^{i\theta_j} - e^{2\pi ij/n}\right|^p\right] \le \mathbb{E}\left[\frac1n\sum_{j=1}^n\left|\theta_j - \frac{2\pi j}{n}\right|^p\right] \le 8\,\Gamma(p+1)\left(\frac{4\pi}{n}\sqrt{m\left[\log\left(\frac nm\right)+1\right]}\right)^p \le Cp^{p+\frac12}e^{-p}\left(\frac{4\pi}{n}\sqrt{m\left[\log\left(\frac nm\right)+1\right]}\right)^p,$$
by Stirling's formula. It is easy to check that $W_p(\nu_n,\nu)\le\frac\pi n$, and thus
$$\mathbb{E}W_p(\mu_{n,m},\nu) \le \mathbb{E}W_p(\mu_{n,m},\nu_n) + \frac\pi n \le \left(\mathbb{E}W_p(\mu_{n,m},\nu_n)^p\right)^{\frac1p} + \frac\pi n \le \frac{Cp\sqrt{m\left[\log\left(\frac nm\right)+1\right]}}{n}. \tag{5.12}$$

The rest of the main theorem, namely the concentration of $W_p(\mu_{n,m},\nu)$ at its mean, is a consequence of the concentration of measure phenomenon on the unitary group; the crucial point is that $W_p(\mu_{n,m},\nu)$ is a Lipschitz function of $U$. The following lemma gives the necessary estimates.

Lemma 5.22 Let $p \ge 1$. The map $A \mapsto \mu_A$ taking an $n\times n$ normal matrix to its spectral measure is Lipschitz with constant $n^{-1/\max\{p,2\}}$ with respect to $W_p$. In particular, if $\rho$ is any fixed probability measure on $\mathbb{C}$, the map $A \mapsto W_p(\mu_A,\rho)$ is Lipschitz with constant $n^{-1/\max\{p,2\}}$.

Proof If $A$ and $B$ are $n\times n$ normal matrices, then the Hoffman–Wielandt inequality [11, Theorem VI.4.1] states that
$$\min_{\sigma\in\Sigma_n}\sum_{j=1}^n\left|\lambda_j(A) - \lambda_{\sigma(j)}(B)\right|^2 \le \|A - B\|_{HS}^2, \tag{5.13}$$
where $\lambda_1(A),\dots,\lambda_n(A)$ and $\lambda_1(B),\dots,\lambda_n(B)$ are the eigenvalues (with multiplicity, in any order) of $A$ and $B$ respectively, and $\Sigma_n$ is the group of permutations on $n$ letters. Defining couplings of $\mu_A$ and $\mu_B$ by
$$\pi_\sigma = \frac1n\sum_{j=1}^n\delta_{(\lambda_j(A),\lambda_{\sigma(j)}(B))}$$
for $\sigma\in\Sigma_n$, it follows from (5.13) that
$$W_p(\mu_A,\mu_B) \le \min_{\sigma\in\Sigma_n}\left(\frac1n\sum_{j=1}^n\left|\lambda_j(A)-\lambda_{\sigma(j)}(B)\right|^p\right)^{1/p} \le n^{-1/\max\{p,2\}}\min_{\sigma\in\Sigma_n}\left(\sum_{j=1}^n\left|\lambda_j(A)-\lambda_{\sigma(j)}(B)\right|^2\right)^{1/2} \le n^{-1/\max\{p,2\}}\|A-B\|_{HS}. \qquad\Box$$
Now, by Theorem 3.14, $\mu_{n,m}$ is equal in distribution to the spectral measure of a block-diagonal $n\times n$ random matrix $U_1\oplus\cdots\oplus U_m$, where the $U_j$ are independent and uniform in $\mathsf{U}\left(\left\lfloor\frac nm\right\rfloor\right)$ and $\mathsf{U}\left(\left\lceil\frac nm\right\rceil\right)$. Identify $\mu_{n,m}$ with this measure and define the function $F(U_1,\dots,U_m) = W_p(\mu_{U_1\oplus\cdots\oplus U_m},\nu)$; the preceding discussion means that if $U_1,\dots,U_m$ are independent and uniform in $\mathsf{U}\left(\left\lfloor\frac nm\right\rfloor\right)$ and $\mathsf{U}\left(\left\lceil\frac nm\right\rceil\right)$ as necessary, then $F(U_1,\dots,U_m) \stackrel{d}{=} W_p(\mu_{n,m},\nu)$. Applying the concentration inequality of Theorem 5.17 to the function $F$ gives that
$$\mathbb{P}\left[F(U_1,\dots,U_m) \ge \mathbb{E}F(U_1,\dots,U_m) + t\right] \le e^{-nt^2/24mL^2},$$
where $L$ is the Lipschitz constant of $F$, and we have used the trivial estimate $\left\lfloor\frac nm\right\rfloor \ge \frac{n}{2m}$. Inserting the estimate of $\mathbb{E}F(U_1,\dots,U_m)$ from Equation (5.12) and the Lipschitz estimates of Lemma 5.22 completes the proof of Theorem 5.18.

Notes and References

The general approach to concentration of measure taken here follows the writings of Michel Ledoux, in particular his book [ ] and lecture notes [ ]. I learned the proof of Theorem 5.9 from Nathaël Gozlan. The book [ ] is a very accessible source for learning about log-Sobolev inequalities (if you read French). The book by Boucheron, Lugosi and Massart [ ] gives a more recent perspective with many applications.

A good reference for the basic notions of Riemannian geometry is [ ]. Most of the exposition in Section 5.3 of the specific computations on the groups follows the corresponding exposition in the book [ ] of Anderson, Guionnet and Zeitouni. The fact that the curvature on (most of) the classical compact groups leads to sub-Gaussian concentration was first observed by Gromov and Milman [ ], following earlier work of Gromov [ ]; see also the appendix by Gromov in [ ]. The Bakry–Émery Theorem first appeared in [ ]; see also [ ]. The coupling argument which circumvents the lack of a curvature bound on $\mathsf{U}(n)$ first appeared in [ ].

The results of Section 5.4 are from [ ], following earlier work in [ ]. The use of the determinantal structure in obtaining bounds on expected Wasserstein distances was introduced by Dallaporta in [25, 26]. A survey of concentration of empirical spectral measures in many ensembles, including Haar measure on the classical compact groups, can be found in [ ].

6 Geometric applications of measure concentration

6.1 The Johnson–Lindenstrauss lemma

An important area of application in computing is that of dimension reduction. The essential problem is that in many settings, data sets live in very high-dimensional spaces. For example, a digital image can be encoded as a matrix, with each entry corresponding to one pixel, the entry specifying the color of that pixel. That is, a small black-and-white image whose resolution was, say, $100\times150$ pixels would be encoded as a vector in $\{0,1\}^{15{,}000}$. This presents a real problem because many algorithms for analyzing such high-dimensional data have their run-time increase very quickly as the dimension of the data increases, to the point that analyzing the data in the most obvious way becomes computationally infeasible; computer scientists refer to this as "the curse of dimensionality". The idea of dimension reduction is that in many situations, the desired algorithm can be at least approximately carried out in a much lower-dimensional setting than the one the data naturally lie in, which can make computationally infeasible problems feasible.

A motivating problem

Suppose you have a data set consisting of black-and-white images of hand-written examples of the numbers 1 and 2. That is, you have a library $X$ of $n$ points in $\mathbb{R}^d$, where $d$ is the number of pixels in each image, with each point labeled (by a human) to indicate whether it is a 1 or a 2. You want to design a computer program so that one can input an image of a hand-written number, and the computer identifies it as a 1 or a 2. So the computer will have a query point $q \in \mathbb{R}^d$, and the natural thing to do is to program it to find the closest point in the library $X$ to $q$; the computer then reports that the input image was of the same number as that closest point in $X$.
The naïve approach would be for the computer to calculate the distance from $q$ to each of the points of $X$ in turn, keeping track of which point in $X$ has so far been the closest. Such an algorithm runs in $O(nd)$ steps, which may be prohibitively many. The idea of dimension reduction is to find a way to carry out the nearest point algorithm within some much lower-dimensional space, in such a way that you are guaranteed (or, to be more realistic, very likely) to still find the closest point, without having to do much work to figure out which lower-dimensional space to work in. This sounds impossible, but the geometry of high-dimensional spaces often turns out to be surprising. The following important result about high-dimensional geometry has inspired many randomized algorithms incorporating dimension reduction.

Lemma 6.1 (The Johnson–Lindenstrauss lemma) There are absolute constants $c, C$ such that the following holds. Let $\{x_j\}_{j=1}^n \subseteq \mathbb{R}^d$, and let $P$ be a random $k\times d$ matrix, consisting of the first $k$ rows of a Haar-distributed random matrix in $\mathsf{O}(d)$. Fix $\epsilon > 0$ and let $k = \frac{a\log(n)}{\epsilon^2}$. With probability $1 - Cn^{2-\frac{ac}{4}}$,
$$(1-\epsilon)\|x_i - x_j\|^2 \le \left(\frac dk\right)\|Px_i - Px_j\|^2 \le (1+\epsilon)\|x_i - x_j\|^2 \tag{6.1}$$
for all $i,j \in \{1,\dots,n\}$.

That is, if $n$ points in $\mathbb{R}^d$ are projected onto a random subspace of dimension on the order of $\log(n)$, then after appropriate rescaling, the pairwise distances between the points hardly change. The practical conclusion is that if the application in question is about the metric structure of the data (finding the closest point as above, finding the most separated pair of points, finding the minimum-length spanning tree of a graph, etc.), there is no need to work in the high-dimensional space that the data naturally live in, and moreover there is no need to work hard to pick a lower-dimensional subspace onto which to project: a random one should do.

Getting an almost-solution, with high probability

The discussion above suggests finding an approximate solution to the problem of finding the closest point to $q$ in $X$ by choosing a random $k\times d$ matrix $P$ to be the first $k$ rows of a Haar-distributed $U \in \mathsf{O}(d)$, and then finding the closest point in $\{Px : x \in X\}$ to $Pq$. There are two obvious causes for concern. One is that we might have the bad luck to choose a bad matrix $P$ that doesn't satisfy (6.1). But that is very unlikely, and so typically one just accepts that risk and assumes it won't actually happen in practice.

There is a second issue, though, which is that it is possible to choose $P$ that does satisfy (6.1), but to have the closest point in $\{Px : x \in X\}$ to $Pq$ be $Py$, whereas the closest point in $X$ to $q$ is $z$, with $y \ne z$. In that case, although the approach above will yield the wrong identity for the closest point ($y$ instead of $z$), it follows by the choice of $y$ and (6.1) that
$$\|q - y\| \le \sqrt{\frac{d}{k(1-\epsilon)}}\,\|Pq - Py\| \le \sqrt{\frac{d}{k(1-\epsilon)}}\,\|Pq - Pz\| \le \sqrt{\frac{1+\epsilon}{1-\epsilon}}\,\|q - z\|.$$
So even though $z$ is the true closest point to $q$, $y$ is almost as close. In our example of recognizing whether a hand-written number is a 1 or a 2, it seems likely that even if we don't find the exact closest point in the reference set, the algorithm will still manage to correctly identify the number.

The payoff for being willing to accept an answer which may be not quite right, and accepting the (tiny) risk that we'll choose a bad matrix, is significant: the naïve algorithm mentioned at the beginning now runs in $O(n\log(n))$ steps, rather than $O(nd)$ steps.
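The statement of Lemma 6.1 can be tried out directly. The following minimal sketch (an illustration, not part of the text; it uses scipy.stats.ortho_group to sample Haar measure on $\mathsf{O}(d)$, and the specific sizes are arbitrary) projects a point cloud and compares rescaled pairwise distances before and after; the printed minimum and maximum ratios should straddle 1, with spread on the order of $\sqrt{\log(n)/k}$:

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(5)
d, n, k = 2000, 50, 300

X = rng.standard_normal((n, d))                 # n points in R^d (any point cloud works)
P = ortho_group.rvs(d, random_state=rng)[:k]    # first k rows of a Haar orthogonal matrix

D_orig = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = X @ P.T
D_proj = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)

iu = np.triu_indices(n, k=1)
ratios = (d / k) * (D_proj[iu] / D_orig[iu]) ** 2
print(ratios.min(), ratios.max())               # both close to 1
```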
Proof of the Johnson–Lindenstrauss lemma Given $\{x_i\}_{i=1}^n \subseteq \mathbb{R}^d$, $\epsilon > 0$, and $U$ a Haar-distributed random matrix in $\mathsf{O}(d)$, let $P$ be the $k\times d$ matrix consisting of the first $k$ rows of $U$. The goal is to show that for each pair $(i,j)$,
$$(1-\epsilon)\|x_i-x_j\|^2 \le \left(\frac dk\right)\|Px_i - Px_j\|^2 \le (1+\epsilon)\|x_i-x_j\|^2$$
with high probability, or equivalently,
$$\sqrt{1-\epsilon} \le \sqrt{\frac dk}\,\|Px_{i,j}\| \le \sqrt{1+\epsilon}$$
for $x_{i,j} := \frac{x_i - x_j}{\|x_i - x_j\|}$.

For notational convenience, fix $i$ and $j$ for the moment and let $x = x_{i,j}$. By the translation-invariance of Haar measure, $Px \stackrel{d}{=} Pe_1 = (U_{11},\dots,U_{k1})$, where $e_1$ is the first standard basis vector in $\mathbb{R}^d$. Since the first column of $U$ is distributed as a uniform random vector in $S^{d-1}$, we may furthermore write $Px \stackrel{d}{=} (X_1,\dots,X_k)$, with $X = (X_1,\dots,X_d)$ uniform in $S^{d-1}$.

Consider therefore the function $F : S^{d-1}\to\mathbb{R}$ defined by
$$F(x_1,\dots,x_d) = \sqrt{\frac dk}\,\|(x_1,\dots,x_k)\| = \sqrt{\frac dk}\sqrt{x_1^2+\cdots+x_k^2}.$$
Let $x,y \in S^{d-1}$; then
$$|F(x) - F(y)| = \sqrt{\frac dk}\,\Big|\|(x_1,\dots,x_k)\| - \|(y_1,\dots,y_k)\|\Big| \le \sqrt{\frac dk}\,\|(x_1-y_1,\dots,x_k-y_k)\| \le \sqrt{\frac dk}\,\|x - y\|.$$
That is, the function $F$ is $\sqrt{\frac dk}$-Lipschitz on $S^{d-1}$, and so concentration of measure on the sphere (i.e., Lévy's lemma) applies:
$$\mathbb{P}\left[|F(X) - \mathbb{E}F(X)| \ge \epsilon\right] \le Ce^{-ck\epsilon^2}. \tag{6.2}$$

To complete the proof, it remains to show that $\mathbb{E}F(X) \approx 1$. Since $\mathbb{E}X_i^2 = \frac1d$ for each $i$, $\mathbb{E}[F^2(X)] = 1$; written slightly differently,
$$1 = \mathrm{Var}(F(X)) + (\mathbb{E}F(X))^2.$$
By Fubini's theorem and the concentration inequality (6.2),
$$\mathrm{Var}(F(X)) = \int_0^\infty\mathbb{P}\left[|F(X)-\mathbb{E}F(X)|^2 \ge t\right]dt \le \int_0^\infty Ce^{-ckt}\,dt = \frac{C}{ck},$$
so that
$$\sqrt{1 - \frac{C}{ck}} \le \mathbb{E}F(X) \le 1.$$
Recall that $k = \frac{a\log(n)}{\epsilon^2}$. As long as $\epsilon < \frac{ca\log(n)}{C + ca\log(n)}$, this means that $1 - \frac\epsilon2 \le \mathbb{E}F(X) \le 1$, and so
$$\mathbb{P}\left[|F(X) - 1| > \epsilon\right] \le Ce^{-\frac{ck\epsilon^2}{4}}; \tag{6.3}$$
that is, with probability at least $1 - Ce^{-\frac{ck\epsilon^2}{4}}$,
$$1 - \epsilon \le \sqrt{\frac dk}\,\|Px\| \le 1 + \epsilon.$$
Returning to the original formulation: for each pair $(i,j)$, there is a set of probability at least $1 - Ce^{-\frac{ck\epsilon^2}{4}}$ on which
$$(1-\epsilon)^2\|x_i - x_j\|^2 \le \left(\frac dk\right)\|Px_i - Px_j\|^2 \le (1+\epsilon)^2\|x_i - x_j\|^2.$$
There are fewer than $n^2$ pairs $(i,j)$, so a simple union bound gives that the above statement holds for all pairs $(i,j)$ with probability at least $1 - \frac{C}{n^{\frac{ac}4-2}}$. □

6.2 Dvoretzky's theorem

The following theorem is one of the foundational results of the local theory of Banach spaces; V. Milman's proof gave the first explicit use of the concentration of measure phenomenon in Banach space theory.

Theorem 6.2 (Dvoretzky's theorem) Let $\|\cdot\|$ be an arbitrary norm on $\mathbb{C}^n$. There is an invertible linear map $T : \mathbb{C}^n\to\mathbb{C}^n$ such that for all $\epsilon > 0$, if $k \le C\epsilon^2\log(n)$ and if $E \subseteq \mathbb{C}^n$ is a random $k$-dimensional subspace of $\mathbb{C}^n$, then with probability at least $1 - e^{-ck}$,
$$1-\epsilon \le \frac{\|Tv\|}{|v|} \le 1+\epsilon\qquad\text{for all } v \in E,$$
where $c, C$ are absolute constants, independent of $\|\cdot\|$, and $|\cdot|$ denotes the Euclidean norm.

The phrase "random $k$-dimensional subspace" in the statement of the theorem should be understood as in the previous section: as the linear span of the first $k$ columns of $U$, where $U$ is distributed according to Haar measure on $\mathsf{U}(n)$. The distribution of such a random subspace is the unique probability measure on the Grassmannian $G_{n,k}^{\mathbb{C}}$ of $k$-dimensional subspaces of $\mathbb{C}^n$ which is invariant under the action of $\mathsf{U}(n)$.

Milman's proof of Dvoretzky's theorem used the concentration of measure on the sphere, but using the more recently proved concentration of measure on the unitary group, one can deduce the theorem more directly. Before proceeding with the main body of the proof, we make a simple geometric reduction.
Throughout this section, $\|\cdot\|$ will denote the arbitrary norm in the statement of the theorem, and $|\cdot|$ will denote the Euclidean norm. Recall that an ellipsoid is defined to be a linear image of the Euclidean unit ball. By applying an initial linear transformation, we may assume that the ellipsoid of maximal volume contained in the unit ball of the norm $\|\cdot\|$ is the Euclidean unit ball itself. This implies in particular that $\|v\| \le |v|$ for all $v \in \mathbb{C}^n$.

Our approach to Theorem 6.2 centers around the random quantity
$$X_v(U) := \|Uv\| - \mathbb{E}\|Uv\|,$$
where $v \in \mathbb{C}^n$ is a fixed unit vector and $U$ is a random unitary matrix. In particular, for a subspace $E \subseteq \mathbb{C}^n$, the supremum $\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v(U)|$ measures the (random) variability of the quantity $\|Uv\|$ over $v \in E$.

Proposition 6.3 For $v \in S_{\mathbb{C}}^n$, let $X_v(U) = \|Uv\| - \mathbb{E}\|Uv\|$, with $U$ a Haar-distributed random unitary matrix. Let $E \subseteq \mathbb{C}^n$ be any subspace. Then
$$\mathbb{P}\left[\left|\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v(U)| - \mathbb{E}\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v(U)|\right| > t\right] \le Ce^{-cnt^2}.$$

Proof Let $U, U' \in \mathsf{U}(n)$. Then
$$\left|\sup_v|X_v(U)| - \sup_v|X_v(U')|\right| \le \sup_v\Big||X_v(U)| - |X_v(U')|\Big| = \sup_v\Big|\|Uv\| - \|U'v\|\Big| \le \sup_v\|(U - U')v\| \le \|U - U'\|_{H.S.},$$
making use of the facts that $\|\cdot\| \le |\cdot|$ and $\|\cdot\|_{op} \le \|\cdot\|_{H.S.}$. The function $U \mapsto \sup_v|X_v(U)|$ is thus 1-Lipschitz, and the result follows from Theorem 5.17. □

The random quantity $\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v(U)|$ is thus typically close to its mean; the following lemma is the main ingredient needed to estimate that mean.

Lemma 6.4 Let $x,y \in S_{\mathbb{C}}^n$ with $x \ne y$, and let $U$ be a random $n\times n$ unitary matrix. Then for all $t > 0$,
$$\mathbb{P}\left[\Big|\|Ux\| - \|Uy\|\Big| > t\right] \le Ce^{-\frac{cnt^2}{|x-y|^2}}.$$

Proof First, note that $\|U(e^{i\theta}y)\| = \|Uy\|$ for any $\theta$. Choosing $\theta$ such that $\langle x, e^{i\theta}y\rangle$ is real and nonnegative means that $\mathrm{Re}\langle x, e^{i\theta}y\rangle \ge \mathrm{Re}\langle x,y\rangle$, and so
$$|x - e^{i\theta}y|^2 = |x|^2 + |y|^2 - 2\,\mathrm{Re}\langle x, e^{i\theta}y\rangle \le |x - y|^2.$$
We may therefore assume that $\langle x,y\rangle$ is real. Let $z := \frac{x+y}2$ and $w := \frac{x-y}2$, so that
$$x = z + w,\qquad y = z - w,\qquad z \perp w;$$
in terms of $z$ and $w$, the desired conclusion is
$$\mathbb{P}\left[\Big|\|Uz + Uw\| - \|Uz - Uw\|\Big| > t\right] \le Ce^{-\frac{cnt^2}{|w|^2}}.$$
By the translation-invariance of Haar measure, it suffices to assume that $z = e_1$ (the first standard basis vector).

Observe that, conditional on the event $\{Ue_1 = u\}$, the distribution of $Uw$ is the same as that of $-Uw$: conditioning on $Ue_1 = u$ simply means choosing the first column of $U$ to be $u$, and filling out the rest of the matrix column by column as described in Section 1.2. In particular, the conditional distribution of $U$ given $Ue_1 = u$ is invariant under changing the sign of each of the remaining columns; since $w \perp e_1$, doing so replaces $Uw$ by $-Uw$. It follows that
$$\mathbb{E}\left[\|u + Uw\| - \|u - Uw\|\,\Big|\,Ue_1 = u\right] = 0.$$
Moreover, if $U \in \mathsf{U}(n)$ with $Ue_1 = u$, the function $U \mapsto \|u + Uw\|$ is a $|w|$-Lipschitz function of the remaining columns: if $U'$ is another such matrix, then
$$\Big|\|u + Uw\| - \|u + U'w\|\Big| \le \|(U - U')w\| \le |(U - U')w| \le \|U - U'\|_{H.S.}|w|,$$
again using that $\|\cdot\| \le |\cdot|$ and $\|\cdot\|_{op} \le \|\cdot\|_{H.S.}$. The conditional version of the column-by-column construction of Haar measure described above makes it clear that, conditional on $Ue_1 = u$, the rest of the matrix $U$ is distributed according to Haar measure on a copy of $\mathsf{U}(n-1)$ embedded in $\mathsf{U}(n)$, and so Theorem 5.17 applies to give that for each $u$,
$$\mathbb{P}\left[\Big|\|Uz + Uw\| - \|Uz - Uw\|\Big| > t\,\Big|\,Ue_1 = u\right] \le Ce^{-\frac{cnt^2}{|w|^2}}.$$
Averaging over $u$ completes the proof. □

Proposition 6.5 Let $E \subseteq \mathbb{C}^n$ be a subspace of dimension $k$. Then
$$\mathbb{E}\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v| \le C\sqrt{\frac kn}.$$
Proof Lemma 6.4 shows exactly that the stochastic process $\{X_v\}_{v\in E\cap S_{\mathbb{C}}^n}$ is sub-Gaussian with respect to the metric $\frac{|\cdot|}{\sqrt n}$. Under a sub-Gaussian increment condition, Dudley's entropy bound (see, e.g., [ ]) gives that
$$\mathbb{E}\sup_{v\in E\cap S_{\mathbb{C}}^n}|X_v| \le C\int_0^\infty\sqrt{\log N\left(E\cap S_{\mathbb{C}}^n,\frac{|\cdot|}{\sqrt n},\epsilon\right)}\,d\epsilon = \frac{C}{\sqrt n}\int_0^\infty\sqrt{\log N\left(E\cap S_{\mathbb{C}}^n,|\cdot|,\epsilon\right)}\,d\epsilon,$$
where the covering number $N(E\cap S_{\mathbb{C}}^n,|\cdot|,\epsilon)$ is the number of $\epsilon$-balls needed to cover $E\cap S_{\mathbb{C}}^n$ with respect to the distance $|\cdot|$. In particular, the integrand is zero for $\epsilon > 2$. The covering number can be bounded using a simple volume argument (see Lemma 2.6 of [ ]) by $\exp\left(k\log\left(\frac3\epsilon\right)\right)$, and this completes the proof. □

Combining Propositions 6.3 and 6.5 gives that if $E$ is a $k$-dimensional subspace of $\mathbb{C}^n$, then with probability at least $1 - Ce^{-cnt^2}$,
$$\Big|\|Uv\| - \mathbb{E}\|Uv\|\Big| \le t + C\sqrt{\frac kn}\qquad\text{for all } v \in E\cap S_{\mathbb{C}}^n;$$
the next step of the proof of Theorem 6.2 is to estimate $\mathbb{E}\|Uv\|$.

Proposition 6.6 There is a universal constant $c$ such that for $v \in \mathbb{C}^n$ with $|v| = 1$, $U$ a random unitary matrix, and $\|\cdot\|$ as above,
$$c\sqrt{\frac{\log(n)}{n}} \le \mathbb{E}\|Uv\| \le 1.$$

Proof The upper bound is trivial, since $\|\cdot\| \le |\cdot|$. For the lower bound, it follows from the Dvoretzky–Rogers lemma and its proof (see Lemma 3.16 of [ ]) that, under the condition on $\|\cdot\|$ discussed at the beginning of the section (i.e., that the maximum-volume ellipsoid contained in the unit ball of $\|\cdot\|$ is in fact the Euclidean unit ball), there is an orthonormal basis $\{v_1,\dots,v_n\}$ such that $\|v_j\| \ge \frac12$ for $1 \le j \le \left\lceil\frac n2\right\rceil$; by applying a further linear isometry, we may assume that this estimate holds for the standard basis $\{e_j\}$.

Now, $Uv \stackrel{d}{=} X$, where $X$ is uniformly distributed on $S_{\mathbb{C}}^n$. Moreover, if $(\epsilon_1,\dots,\epsilon_n)$ is a random vector of i.i.d. centered $\{-1,1\}$-valued random variables independent of $X$, then $X \stackrel{d}{=} (\epsilon_1X_1,\dots,\epsilon_nX_n)$. For fixed $j$, conditional on $X$ and $\epsilon_j$, it follows by Jensen's inequality that
$$|X_j|\,\|e_j\| = \left\|\mathbb{E}\left[(\epsilon_1X_1,\dots,\epsilon_nX_n)\,\big|\,\epsilon_j,X\right]\right\| \le \mathbb{E}\left[\left\|(\epsilon_1X_1,\dots,\epsilon_nX_n)\right\|\,\big|\,\epsilon_j,X\right].$$
Averaging over $\epsilon_j$ gives
$$|X_j|\,\|e_j\| \le \mathbb{E}\left[\left\|(\epsilon_1X_1,\dots,\epsilon_nX_n)\right\|\,\big|\,X\right].$$
Taking the maximum over $j \in \{1,\dots,m\}$ and then taking the expectation of both sides gives
$$\mathbb{E}\left[\max_{1\le j\le m}|X_j|\,\|e_j\|\right] \le \mathbb{E}\left\|(\epsilon_1X_1,\dots,\epsilon_nX_n)\right\| = \mathbb{E}\|X\|,$$
and so
$$\mathbb{E}\|X\| \ge \frac12\,\mathbb{E}\left[\max_{1\le j\le m}|X_j|\right], \tag{6.4}$$
where $m := \left\lfloor\frac n2\right\rfloor$.

To estimate the right-hand side of (6.4), we use a standard trick of relating the spherical expectation to a corresponding Gaussian one. Note that a uniform random vector on $S_{\mathbb{C}}^n$ can be naturally identified with a uniform random vector on the real sphere $S^{2n-1}$, so there is no loss in considering the real case. (That the uniform measure on $S_{\mathbb{C}}^n$ is mapped to uniform measure on $S^{2n-1}$ by the obvious identification is slightly non-trivial, but follows from the uniqueness of Haar measure and the fact that $\mathsf{U}(n)$ acts transitively on $S_{\mathbb{C}}^n$.)

Let $\{Z_1,\dots,Z_m\}$ be i.i.d. standard Gaussian random variables. Then it is well known that there is a constant $\rho$ such that
$$\mathbb{E}\left[\max_{1\le j\le m}|Z_j|\right] \ge 2\rho\sqrt{\log(m)}.$$
Writing the expectation above explicitly in terms of the density of the $Z_j$ in spherical coordinates gives
$$\mathbb{E}\left[\max_{1\le j\le m}|Z_j|\right] = \frac{1}{(2\pi)^{m/2}}\int_{S^{m-1}}\int_0^\infty\max_{1\le j\le m}|ry_j|\,e^{-r^2/2}r^{m-1}\,dr\,d\sigma(y) = \frac{\Gamma\left(\frac{m+1}2\right)}{\sqrt2\,\pi^{m/2}}\int_{S^{m-1}}\max_{1\le j\le m}|y_j|\,d\sigma(y),$$
where $\sigma$ denotes the surface area measure on $S^{m-1}$. The surface area of the unit sphere in $\mathbb{R}^m$ is $\frac{m\pi^{m/2}}{\Gamma\left(\frac m2+1\right)}$, and so rearranging the above and applying the lower bound (6.4) gives
$$\mathbb{E}\left[\max_{1\le j\le m}|X_j|\right] \ge \frac{2\sqrt2\,\rho\sqrt{\log(m)}\,\Gamma\left(\frac m2+1\right)}{m\,\Gamma\left(\frac{m+1}2\right)} \ge c\sqrt{\frac{\log(m)}{m}},$$
where the last estimate follows from Stirling's formula. □
We are now in a position to complete the proof of Dvoretzky's theorem. Let $M := \mathbb{E}\|Ue_1\|$, and for $\epsilon > 0$ fixed, let $k = cnM^2\epsilon^2$, where $c$ is a small (but universal) constant. Applying Proposition 6.3 with $t = \frac{M\epsilon}2$ gives that for any $k$-dimensional subspace $E \subseteq \mathbb{C}^n$, with probability at least $1 - Ce^{-cn\epsilon^2M^2} \ge 1 - Ce^{-c\epsilon^2\log(n)}$,
$$M(1-\epsilon) \le \|Uv\| \le M(1+\epsilon)$$
for all $v \in E$ with $|v| = 1$. In particular, let $E$ be the span of $\{e_1,\dots,e_k\}$. Then the statement above means that with probability at least $1 - Ce^{-c\epsilon^2\log(n)}$,
$$1-\epsilon \le \frac{\|w\|}{M|w|} \le 1+\epsilon$$
for all $w$ in the linear span of the first $k$ columns of $U$; that is, for all $w$ in a randomly chosen $k$-dimensional subspace of $\mathbb{C}^n$. Absorbing the constant $M$ into the linear map $T$ in the statement of Theorem 6.2 completes the proof. □

6.3 A measure-theoretic Dvoretzky theorem

In this section, the objects of study are the marginal distributions of high-dimensional probability measures. It was recognized long ago that in many settings, most one-dimensional projections of high-dimensional probability measures are approximately Gaussian. In particular, Borel's lemma (Lemma 2.4) is an early example: all one-dimensional projections of uniform measure on the sphere in $\mathbb{R}^d$ are the same, and all are approximately Gaussian for large $d$. This is also a familiar phenomenon in statistics, in which low-dimensional projections of high-dimensional data which appear approximately Gaussian are usually regarded as not giving useful information about the structure of the data.

It is natural to ask under what conditions on the high-dimensional distribution this phenomenon occurs, and moreover, how long it persists; i.e., if a $d$-dimensional probability distribution is projected onto a $k$-dimensional subspace, how large can $k$ be relative to $d$ so that such projections are typically approximately Gaussian?

The connection to Dvoretzky's theorem is the following. In both settings, an additional structure is imposed on $\mathbb{R}^n$ (a norm in the case of Dvoretzky's theorem; a probability measure in the present context); in either case, there is a particularly nice way to do this (the Euclidean norm and the Gaussian distribution, respectively). The question is then: if one projects an arbitrary norm or probability measure onto lower-dimensional subspaces, does it tend to resemble this nice structure? If so, by how much must one reduce the dimension in order to see this phenomenon?

Theorem 6.7 Let $X$ be a random vector in $\mathbb{R}^d$ satisfying
$$\mathbb{E}|X|^2 = \sigma^2d,\qquad \mathbb{E}\left||X|^2\sigma^{-2} - d\right| \le \frac{Ld}{(\log d)^{1/3}},\qquad \sup_{\xi\in S^{d-1}}\mathbb{E}\langle\xi,X\rangle^2 \le 1.$$
Let $X_V^{(k)} := \left(\langle X,V_1\rangle,\dots,\langle X,V_k\rangle\right)$, where $V_1,\dots,V_d$ denote the rows of an orthogonal matrix $V$. Fix $\delta < 2$ and suppose that $k = \frac{\delta\log(d)}{\log(\log(d))}$. Then there is a $c > 0$ depending only on $\delta$ such that for $\epsilon = \exp\left(-c\log(\log(d))\right)$, there is a subset $T \subseteq \mathsf{O}(d)$ with $\mathbb{P}[T] \ge 1 - C\exp\left(-c'd\epsilon^2\right)$, such that for all $V \in T$,
$$d_{BL}\left(X_V^{(k)},\sigma Z\right) \le C'\epsilon.$$

The following example shows that, without additional assumptions, the theorem gives the best possible estimate on $k$. Let $X$ be distributed uniformly among $\{\pm\sqrt d\,e_1,\dots,\pm\sqrt d\,e_d\}$, where the $e_i$ are the standard basis vectors of $\mathbb{R}^d$; that is, $X$ is uniformly distributed on the vertices of a cross-polytope. Then $\mathbb{E}[X] = 0$, $|X|^2 \equiv d$, and given $\xi \in S^{d-1}$, $\mathbb{E}\langle X,\xi\rangle^2 = 1$; thus Theorem 6.7 applies. Consider a projection of $\{\pm\sqrt d\,e_1,\dots,\pm\sqrt d\,e_d\}$ onto a random subspace $E$ of dimension $k$, and define the Lipschitz function $f : E\to\mathbb{R}$ by
$$f(x) := \left(1 - d(x,S_E)\right)_+,$$
where $S_E$ is the image of $\{\pm\sqrt d\,e_1,\dots,\pm\sqrt d\,e_d\}$ under projection onto $E$ and $d(x,S_E)$ denotes the (Euclidean) distance from the point $x$ to the set $S_E$.
Then if $\mu_{S_E}$ denotes the probability measure putting equal mass at each of the points of $S_E$, $\int f\,d\mu_{S_E} = 1$. On the other hand, the volume $\omega_k$ of the unit ball in $\mathbb{R}^k$ is asymptotically given by $\frac{\sqrt2}{\sqrt{k\pi}}\left[\frac{2\pi e}{k}\right]^{k/2}$ for large $k$, in the sense that the ratio tends to one as $k$ tends to infinity. It follows that the standard Gaussian measure of a ball of radius 1 in $\mathbb{R}^k$ is bounded by $\frac{1}{(2\pi)^{k/2}}\omega_k \sim \frac{\sqrt2}{\sqrt{k\pi}}\left[\frac ek\right]^{k/2}$. If $\gamma_k$ denotes the standard Gaussian measure in $\mathbb{R}^k$, then this estimate means that
$$\int f\,d\gamma_k \le \frac{2\sqrt2\,d}{\sqrt{k\pi}}\left[\frac ek\right]^{k/2}.$$
Now, if $k = \frac{c\log(d)}{\log(\log(d))}$ for $c > 2$, then this bound tends to zero, and thus $d_{BL}(\mu_{S_E},\gamma_k)$ is close to 1 for any choice of the subspace $E$; the measures $\mu_{S_E}$ are far from Gaussian in this regime. This example together with Theorem 6.7 shows that the phenomenon of typically Gaussian marginals persists for $k = \frac{c\log(d)}{\log(\log(d))}$ with $c < 2$, but fails in general if $k = \frac{c\log(d)}{\log(\log(d))}$ with $c > 2$.

The proof of Theorem 6.7 is in several steps. Borrowing terminology from statistical mechanics, we first consider the "annealed" version of $X_V^{(k)}$, in which $V$ is taken to be random and independent of $X$, and show that it is approximately Gaussian. Then, we show that the random distance between a "quenched" version of $X_V^{(k)}$ and its annealed (i.e., averaged) version is strongly concentrated at its mean. Finally, we estimate this "average distance to average".

Theorem 6.8 Let $X$ be a random vector in $\mathbb{R}^d$, with $\mathbb{E}X = 0$, $\mathbb{E}[|X|^2] = \sigma^2d$, and
$$A := \mathbb{E}\left||X|^2\sigma^{-2} - d\right| < \infty.$$
Suppose that $V$ is distributed according to Haar measure on $\mathsf{O}(d)$ and independent of $X$, and let $X_V^{(k)} = \left(\langle X,V_1\rangle,\dots,\langle X,V_k\rangle\right)$, where $V_i$ is the $i$th row of $V$. Then
$$d_{BL}\left(X_V^{(k)},\sigma Z\right) \le \frac{\sigma\left[\sqrt k(A+1) + k\right]}{d-1}.$$

Proof The proof is via the version of Stein's method given in Theorem 2.21 of Section 2.4. Observe first that $\mathbb{E}X_V^{(k)} = 0$ by symmetry, and if $v_{ij}$ denotes the $i$-$j$th entry of $V$, then
$$\mathbb{E}\left(X_V^{(k)}\right)_i\left(X_V^{(k)}\right)_j = \mathbb{E}\langle V_i,X\rangle\langle V_j,X\rangle = \sum_{r,s=1}^d\mathbb{E}[v_{ir}v_{js}]\,\mathbb{E}[X_rX_s] = \frac{\delta_{ij}}{d}\mathbb{E}[|X|^2] = \delta_{ij}\sigma^2;$$
thus $\mathbb{E}\left[X_V^{(k)}\left(X_V^{(k)}\right)^T\right] = \sigma^2I_k$.

The construction of $X_{V,\epsilon}$ for the application of Theorem 2.21 is analogous to the construction of exchangeable pairs of random matrices in Section 2.4. Let
$$A_\epsilon := \begin{pmatrix}\sqrt{1-\epsilon^2} & \epsilon\\ -\epsilon & \sqrt{1-\epsilon^2}\end{pmatrix}\oplus I_{d-2} = I_d + \begin{pmatrix}-\frac{\epsilon^2}2+\delta & \epsilon\\ -\epsilon & -\frac{\epsilon^2}2+\delta\end{pmatrix}\oplus 0_{d-2},$$
where $\delta = O(\epsilon^4)$. Let $U \in \mathsf{O}(d)$ be a random orthogonal matrix, independent of $X$ and $V$, and define
$$X_{V,\epsilon} := \left(\langle UA_\epsilon U^TV_1,X\rangle,\dots,\langle UA_\epsilon U^TV_k,X\rangle\right);$$
the pair $(X_V^{(k)},X_{V,\epsilon})$ is exchangeable by the rotation invariance of the distribution of $V$, and so $X_V^{(k)} \stackrel{d}{=} X_{V,\epsilon}$.

Let $K$ be the $d\times2$ matrix given by the first two columns of $U$ and let $C = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}$; define the matrix $Q = [q_{ij}]_{i,j=1}^d = KCK^T$. Then
$$UA_\epsilon U^T - I_d = \left(-\frac{\epsilon^2}2+\delta\right)KK^T + \epsilon Q,$$
and so, writing $X_V^{(k)} = (X_1^V,\dots,X_k^V)$ and $X_{V,\epsilon} = (X_{\epsilon,1}^V,\dots,X_{\epsilon,k}^V)$,
$$\mathbb{E}\left[X_{\epsilon,j}^V - X_j^V\,\Big|\,X,V\right] = \mathbb{E}\left[\langle(UA_\epsilon U^T - I_d)V_j,X\rangle\,\Big|\,X,V\right] = \epsilon\,\mathbb{E}\left[\langle QV_j,X\rangle\,\Big|\,X,V\right] + \left(-\frac{\epsilon^2}2+\delta\right)\mathbb{E}\left[\langle KK^TV_j,X\rangle\,\Big|\,X,V\right].$$
Recall that $Q$ and $K$ are determined by $U$ alone, and that $U$ is independent of $X$ and $V$. It is easy to show that $\mathbb{E}Q = 0_d$ and $\mathbb{E}KK^T = \frac2dI_d$; thus
$$\mathbb{E}\left[X_{V,\epsilon} - X_V^{(k)}\,\Big|\,X,V\right] = \left(-\frac{\epsilon^2}d + \frac{2\delta}d\right)X_V^{(k)}.$$
Condition 1 of Theorem 2.21 is thus satisfied with $\lambda(\epsilon) = \frac{\epsilon^2}d$.
It follows from the formula given in Lemma 2.22 that $\mathbb{E}q_{rs}q_{tw} = \frac{2}{d(d-1)}\left(\delta_{rt}\delta_{sw} - \delta_{rw}\delta_{st}\right)$, which yields
$$\begin{aligned}\mathbb{E}\left[(X_{\epsilon,j}^V - X_j^V)(X_{\epsilon,\ell}^V - X_\ell^V)\,\Big|\,X,V\right] &= \epsilon^2\,\mathbb{E}\left[\langle QV_j,X\rangle\langle QV_\ell,X\rangle\,\Big|\,X,V\right] + O(\epsilon^3)\\ &= \epsilon^2\sum_{r,s,t,w=1}^d\mathbb{E}\left[q_{rs}q_{tw}v_{js}v_{\ell w}X_rX_t\,\Big|\,X,V\right] + O(\epsilon^3)\\ &= \frac{2\epsilon^2}{d(d-1)}\left[\sum_{r,s=1}^dv_{js}v_{\ell s}X_r^2 - \sum_{r,s=1}^dv_{js}v_{\ell r}X_rX_s\right] + O(\epsilon^3)\\ &= \frac{2\epsilon^2}{d(d-1)}\left[\delta_{j\ell}|X|^2 - X_j^VX_\ell^V\right] + O(\epsilon^3)\\ &= \frac{2\epsilon^2\sigma^2}d\delta_{j\ell} + \frac{2\epsilon^2}{d(d-1)}\left[\delta_{j\ell}\left(|X|^2 - \sigma^2d\right) + \delta_{j\ell}\sigma^2 - X_j^VX_\ell^V\right] + O(\epsilon^3).\end{aligned}$$
The random matrix $F$ of Theorem 2.21 is therefore given by
$$F = \frac{1}{d-1}\left[\left(|X|^2 - \sigma^2d\right)I_k + \sigma^2I_k - X_V^{(k)}\left(X_V^{(k)}\right)^T\right].$$
It now follows from Theorem 2.21 that
$$d_{BL}\left(X_V^{(k)},\sigma Z\right) \le W_1\left(X_V^{(k)},\sigma Z\right) \le \frac1\sigma\mathbb{E}\|F\|_{H.S.} \le \frac{\sigma\sqrt k}{d-1}\left[\mathbb{E}\left|\frac{|X|^2}{\sigma^2}-d\right| + 1\right] + \frac{\sigma}{d-1}\mathbb{E}\left[\sum_j\left(\frac{X_j^V}\sigma\right)^2\right] \le \frac{\sigma\left[\sqrt k(A+1)+k\right]}{d-1}. \tag{6.5}\qquad\Box$$

The next result gives the concentration of $d_{BL}(X_V^{(k)},\sigma Z)$ about its mean. The idea is very similar to the argument at the end of Section 5.4 on the concentration of the empirical spectral measure.

Theorem 6.9 Let $X \in \mathbb{R}^d$ be a centered random vector with $\mathbb{E}[|X|^2] = \sigma^2d$, and let
$$B := \sup_{\xi\in S^{d-1}}\mathbb{E}\langle X,\xi\rangle^2.$$
The function $V \mapsto d_{BL}(X_V^{(k)},\sigma Z)$ on $\mathsf{O}(d)$ can be viewed as a random variable, by letting $V$ be distributed according to Haar measure on $\mathsf{O}(d)$. Then there are universal constants $C, c$ such that for any $\epsilon > 0$,
$$\mathbb{P}\left[\left|d_{BL}\left(X_V^{(k)},\sigma Z\right) - \mathbb{E}\,d_{BL}\left(X_V^{(k)},\sigma Z\right)\right| > \epsilon\right] \le Ce^{-\frac{cd\epsilon^2}B}.$$

Proof Define a function $F : \mathsf{O}(d)\to\mathbb{R}$ by
$$F(V) = \sup_{\|f\|_{BL}\le1}\left|\mathbb{E}_Xf\left(X_V^{(k)}\right) - \mathbb{E}f(\sigma Z)\right|,$$
where $\mathbb{E}_X$ denotes the expectation with respect to the distribution of $X$ only. Let $V,V' \in \mathsf{O}(d)$, and observe that for $f$ with $\|f\|_{BL}\le1$ given,
$$\begin{aligned}\left|\mathbb{E}_Xf\left(X_V^{(k)}\right) - \mathbb{E}f(\sigma Z)\right| - \left|\mathbb{E}_Xf\left(X_{V'}^{(k)}\right) - \mathbb{E}f(\sigma Z)\right| &\le \left|\mathbb{E}_Xf\left(X_{V'}^{(k)}\right) - \mathbb{E}_Xf\left(X_V^{(k)}\right)\right|\\ &= \left|\mathbb{E}\left[f\left(\langle X,V_1'\rangle,\dots,\langle X,V_k'\rangle\right) - f\left(\langle X,V_1\rangle,\dots,\langle X,V_k\rangle\right)\,\Big|\,V,V'\right]\right|\\ &\le \mathbb{E}\left[\left|\left(\langle X,V_1'-V_1\rangle,\dots,\langle X,V_k'-V_k\rangle\right)\right|\,\Big|\,V,V'\right]\\ &\le \sqrt{\sum_{j=1}^k|V_j'-V_j|^2\,\mathbb{E}\left\langle X,\frac{V_j'-V_j}{|V_j'-V_j|}\right\rangle^2}\\ &\le \rho(V,V')\sqrt B.\end{aligned}$$
It follows that
$$\left|d_{BL}\left(X_V^{(k)},\sigma Z\right) - d_{BL}\left(X_{V'}^{(k)},\sigma Z\right)\right| \le \sup_{\|f\|_{BL}\le1}\left|\left|\mathbb{E}_Xf\left(X_V^{(k)}\right) - \mathbb{E}f(\sigma Z)\right| - \left|\mathbb{E}_Xf\left(X_{V'}^{(k)}\right) - \mathbb{E}f(\sigma Z)\right|\right| \le \rho(V,V')\sqrt B;$$
thus $d_{BL}(X_V^{(k)},\sigma Z)$ is a Lipschitz function on $\mathsf{O}(d)$, with Lipschitz constant $\sqrt B$. Applying the concentration inequality of Theorem 5.17 thus implies that
$$\mathbb{P}\left[\left|d_{BL}\left(X_V^{(k)},\sigma Z\right) - \mathbb{E}\,d_{BL}\left(X_V^{(k)},\sigma Z\right)\right| > \epsilon\right] < Ce^{-\frac{cd\epsilon^2}B}. \qquad\Box$$

The final component of the proof of Theorem 6.7 is to estimate the so-called average distance to average.

Theorem 6.10 With notation as in the previous theorems,
$$\mathbb{E}\,d_{BL}\left(X_V^{(k)},\sigma Z\right) \le C\left(\frac{k^{k-1}B^{k+2}}{d^2}\right)^{\frac1{3k+4}} + \frac{\sigma\left[\sqrt k(A+1)+k\right]}{d-1}.$$

Proof Let $V \in \mathsf{O}(d)$ and let $U$ be a random orthogonal matrix, independent of $X$. Then the function $V \mapsto d_{BL}\left(X_V^{(k)},X_U^{(k)}\right)$ can be viewed as a random variable, if $V$ is now taken to be distributed according to Haar measure. The essential idea of the proof is to view this random variable as the supremum of a stochastic process: for $f : \mathbb{R}^k\to\mathbb{R}$ with $\|f\|_{BL}\le1$, let
$$X_f = X_f(V) := \mathbb{E}_Xf\left(X_V^{(k)}\right) - \mathbb{E}f\left(X_U^{(k)}\right),$$
where again $\mathbb{E}_Xf(X_V^{(k)})$ indicates expectation with respect to $X$ only. Then $\{X_f\}_f$ is a centered stochastic process indexed by the set of functions $f$ on $\mathbb{R}^k$ with $\|f\|_{BL}\le1$, and
$$d_{BL}\left(X_V^{(k)},X_U^{(k)}\right) = \sup_{\|f\|_{BL}\le1}X_f.$$
Concentration of measure on the orthogonal group implies that $X_f$ is a sub-Gaussian process, as follows.
Let $f : \mathbb{R}^k\to\mathbb{R}$ be Lipschitz with Lipschitz constant $L$, and consider the function $G = G_f$ defined on $\mathsf{O}(d)$ by
$$G(V) := \mathbb{E}_Xf\left(X_V^{(k)}\right) = \mathbb{E}_Xf\left(\langle V_1,X\rangle,\dots,\langle V_k,X\rangle\right),$$
where $V_i$ denotes the $i$th row of $V$. It was shown in the course of the previous proof that $G(V)$ is a Lipschitz function on $\mathsf{O}(d)$, with Lipschitz constant $L\sqrt B$, and so by Theorem 5.17 there are universal constants $C, c$ such that
$$\mathbb{P}\left[|G(V) - \mathbb{E}G(V)| > \epsilon\right] \le Ce^{-\frac{cd\epsilon^2}{L^2B}}. \tag{6.6}$$
It follows from Fubini's theorem that if $V$ is a random orthogonal matrix, then $\mathbb{E}G(V) = \mathbb{E}f\left(X_U^{(k)}\right)$, for $U$ Haar-distributed and independent of $X$ as above. Equation (6.6) can thus be restated as
$$\mathbb{P}\left[|X_f| > \epsilon\right] \le C\exp\left[-\frac{cd\epsilon^2}{L^2B}\right].$$
Note that $X_f - X_g = X_{f-g}$, and so
$$\mathbb{P}\left[|X_f - X_g| > \epsilon\right] \le C\exp\left[-\frac{cd\epsilon^2}{2B|f-g|_L^2}\right] \le C\exp\left[-\frac{cd\epsilon^2}{2B\|f-g\|_{BL}^2}\right].$$
The process $\{X_f\}$ therefore satisfies the sub-Gaussian increment condition for the metric space $(BL_1,d^*)$, where $BL_1$ denotes the unit ball of the bounded-Lipschitz norm and
$$d^*(f,g) := \frac{\sqrt B}{\sqrt{cd}}\|f-g\|_{BL}.$$
The idea is to apply Dudley's entropy bound to this sub-Gaussian process; however, the bound cannot be usefully applied with this infinite-dimensional indexing set. We therefore make several reductions, beginning with a truncation argument. Let
$$\varphi_R(x) = \begin{cases}1, & |x|\le R,\\ R+1-|x|, & R\le|x|\le R+1,\\ 0, & R+1\le|x|,\end{cases}$$
and define $f_R := f\cdot\varphi_R$; if $\|f\|_{BL}\le1$, then $\|f_R\|_{BL}\le2$. Since $|f(x) - f_R(x)| = 0$ if $x \in B_R$ and $|f(x) - f_R(x)|\le1$ for all $x\in\mathbb{R}^k$,
$$\left|\mathbb{E}_Xf\left(X_V^{(k)}\right) - \mathbb{E}_Xf_R\left(X_V^{(k)}\right)\right| \le \mathbb{P}_X\left[\left|X_V^{(k)}\right| > R\right] \le \frac1{R^2}\sum_{i=1}^k\mathbb{E}_X\langle X,V_i\rangle^2 \le \frac{Bk}{R^2},$$
and the same holds if $\mathbb{E}_X$ is replaced by $\mathbb{E}$. It follows that
$$\left|X_f - X_{f_R}\right| \le \frac{2Bk}{R^2}.$$
Consider therefore the process $X_f$ indexed by $BL_{2,R+1}$ (with norm $\|\cdot\|_{BL}$), for some choice of $R$ to be determined, where
$$BL_{2,R+1} := \left\{f : \mathbb{R}^k\to\mathbb{R} : \|f\|_{BL}\le2;\ f(x) = 0\text{ if }|x| > R+1\right\};$$
what has been shown is that
$$\mathbb{E}\left[\sup_{\|f\|_{BL}\le1}X_f\right] \le \mathbb{E}\left[\sup_{f\in BL_{2,R+1}}X_f\right] + \frac{2Bk}{R^2}. \tag{6.7}$$

The next step is to approximate functions in $BL_{2,R+1}$ by "piecewise linear" functions. Specifically, consider a cubic lattice of edge length $\epsilon$ in $\mathbb{R}^k$. Triangulate each cube of the lattice into simplices inductively as follows: in $\mathbb{R}^2$, add an extra vertex in the center of each square to divide the square into four triangles. To triangulate the cube of $\mathbb{R}^k$, first triangulate each facet as was described in the previous stage of the induction, then add a new vertex at the center of the cube; connecting it to each of the vertices of each of the facets gives a triangulation into simplices. Observe that when this procedure is carried out, each new vertex added is on a cubic lattice of edge length $\frac\epsilon2$. Let $\mathcal{L}$ denote the supplemented lattice comprised of the original cubic lattice, together with the additional vertices needed for the triangulation.

The number of sites of $\mathcal{L}$ within the ball of radius $R+1$ is then bounded by, e.g., $c\left(\frac{3R}\epsilon\right)^k\omega_k$, where $\omega_k$ is the volume of the unit ball in $\mathbb{R}^k$. As noted earlier, $\omega_k$ is asymptotically given by $\frac{\sqrt2}{\sqrt{k\pi}}\left[\frac{2\pi e}k\right]^{k/2}$ as $k\to\infty$, so that the number of sites is bounded by $\frac{c}{\sqrt k}\left(\frac{c'R}{\epsilon\sqrt k}\right)^k$, for constants $c, c'$ which are independent of $d$ and $k$.

We now approximate $f \in BL_{2,R+1}$ by the function $\tilde f$ defined such that $\tilde f(x) = f(x)$ for $x \in \mathcal{L}$, and the graph of $\tilde f$ is determined by taking the convex hull of the vertices of the image under $f$ of each $k$-dimensional simplex determined by $\mathcal{L}$.
The resulting function $\tilde f$ still has $\|\tilde f\|_{BL}\le2$, and $\|f - \tilde f\|_\infty \le \frac{\epsilon\sqrt k}2$, since the distance between points in the same simplex is bounded by $\frac{\epsilon\sqrt k}2$. Moreover, the function $\tilde f$ lies inside the finite-dimensional vector space of functions whose values are determined, through the interpolation procedure described above, by their values at the points of $\mathcal{L}$ within the ball of radius $R+1$. It thus follows that
$$\mathbb{E}\left[\sup_{f\in BL_{2,R+1}}X_f\right] \le \mathbb{E}\left[\sup_{f\in BL_{2,R+1}}X_{\tilde f}\right] + \epsilon\sqrt k, \tag{6.8}$$
that the process $\{X_{\tilde f}\}_{f\in BL_{2,R+1}}$ is sub-Gaussian with respect to $\frac{\sqrt B}{\sqrt{cd}}\|\cdot\|_{BL}$, and that $\{\tilde f : f\in BL_{2,R+1}\}$ is contained in the ball of radius 2 of an $M$-dimensional normed space, with
$$M = \frac{c}{\sqrt k}\left(\frac{c'R}{\epsilon\sqrt k}\right)^k. \tag{6.9}$$
We have thus replaced a sub-Gaussian process indexed by a ball in an infinite-dimensional space with one indexed by a ball in a finite-dimensional space, where Dudley's bound can finally be applied.

Let $T := \{\tilde f : f\in BL_{2,R+1}\}$; a classical volumetric argument gives that the covering numbers of the unit ball $B$ of a finite-dimensional normed space $(X,\|\cdot\|)$ of dimension $M$ can be bounded as
$$N(B,\|\cdot\|,\epsilon) \le \exp\left[M\log\left(\frac3\epsilon\right)\right].$$
As a result,
$$N\left(T,\frac{\sqrt B}{\sqrt{cd}}\|\cdot\|_{BL},\epsilon\right) \le \exp\left[M\log\left(\frac{c'\sqrt B}{\epsilon\sqrt d}\right)\right].$$
It follows from Dudley's entropy bound that
$$\mathbb{E}\sup_{f\in T}X_f \le \int_0^{\frac{2\sqrt B}{\sqrt{cd}}}\sqrt{M\log\left(\frac{c'\sqrt B}{\epsilon\sqrt d}\right)}\,d\epsilon = L\sqrt{\frac{MB}d}.$$
Combining this with (6.7) and (6.8) yields
$$\mathbb{E}\left[\sup_{\|f\|_{BL}\le1}\left(\mathbb{E}_Xf(X_V) - \mathbb{E}f(X_U)\right)\right] \le \frac{9kB}{R^2} + \epsilon\sqrt k + L\sqrt{\frac{MB}d}.$$
Using the value of $M$ in terms of $R$ given in Equation (6.9) and choosing
$$\epsilon = \left(\frac{B(c'R)^k}{d\,k^{\frac{k+3}2}}\right)^{\frac1{k+2}}$$
yields
$$\mathbb{E}\left[\sup_{\|f\|_{BL}\le1}\left(\mathbb{E}_Xf(X_V) - \mathbb{E}f(X_U)\right)\right] \le \frac{9kB}{R^2} + \tilde c\left(\frac{BR^k}{d\sqrt k}\right)^{\frac1{k+2}}.$$
Finally, choosing $R = \left(d\,k^{\frac{2k+5}2}B^{k+1}\right)^{\frac1{3k+4}}$ yields
$$\mathbb{E}\left[\sup_{\|f\|_{BL}\le1}\left(\mathbb{E}_Xf(X_V) - \mathbb{E}f(X_U)\right)\right] \le \tilde L\left(\frac{k^{k-1}B^{k+2}}{d^2}\right)^{\frac1{3k+4}}.$$
Combining this with Theorem 6.8 completes the proof. □

Notes and References

The Johnson–Lindenstrauss lemma was first proved in [ ], as a step in showing that any mapping from an $n$-point set in a metric space $X$ into $\ell_2$ can be extended to a Lipschitz mapping of all of $X$ into $\ell_2$, with the Lipschitz constant at worst being multiplied by $c\sqrt{\log(n)}$. The lemma has found many applications in computer science and other areas; see the book by Vempala [ ] for an extensive survey. The original proof used Gaussian random matrices; random orthogonal matrices were first used in [ ], and there is now a large literature on alternative forms of randomness.

Dvoretzky's theorem first appeared in [ ], and an enormous literature has grown up around the theorem and its applications. The recent book [ ] has an extensive history and discussion of modern viewpoints and connections. The proof in the literature which is closest to the one given in Section 6.2 was given by Aubrun, Szarek, and Werner in [ ], following an earlier approach of Schechtman [ ].

Borel's theorem on the distribution of a coordinate of a random point on the sphere can be seen as a first example of the central limit theorem for convex bodies; that is, that marginals of the uniform measure on high-dimensional convex bodies are approximately Gaussian. Over the years, many authors made contributions on this problem; see in particular [101, 34, 15, 68, 44].
Finally, Klartag [ ] showed that the typical total variation distance between a $k$-dimensional marginal of uniform measure on a convex body (or, more generally, any log-concave distribution) in $\mathbb{R}^d$ and the corresponding Gaussian distribution is small even when $k = d^\epsilon$ (for a specific universal constant $\epsilon\in(0,1)$). See the recent book [ ] for an extensive survey of the central limit theorem for convex bodies and related phenomena.

Theorem 6.7 first appeared in [ ], and was an attempt to find the optimal dependence of $k$ on $d$ when the assumption of log-concavity is removed. The possible rate of growth of $k$ as a function of $d$ is indeed weaker than in the result of [ ] for log-concave measures: $k$ can grow only a bit more slowly than logarithmically with $d$, rather than polynomially. However, as the example following the theorem shows, either log-concavity or some other additional assumption is necessary; with only the assumptions made here, logarithmic-type growth of $k$ in $d$ is best possible for the bounded-Lipschitz metric.

7 Characteristic polynomials and connections to the Riemann ζ-function

In this chapter we give just a taste of the intriguing, and as yet unexplained, connection between zeros of the Riemann zeta function and eigenvalues of random matrices. A few pointers to the large literature on the subject are given in the end-of-chapter notes.

7.1 Two-point correlations and Montgomery's conjecture

The Riemann zeta function is defined on complex numbers $s = \sigma + it$ with $\sigma > 1$ by either the Dirichlet series or the Euler product
$$\zeta(s) = \sum_{n=1}^\infty\frac1{n^s} = \prod_p\left(1 - \frac1{p^s}\right)^{-1},$$
where the product is over prime numbers $p$. The zeta function can be extended to the complex plane by analytic continuation; it has a simple pole at $s = 1$ and trivial zeros at $s = -2n$ for $n = 1,2,3,\dots$. It has been known since Riemann that the remaining zeros $\rho = \beta + i\gamma$ all lie in the "critical strip" $\{0 < \beta < 1\}$, and that the zeros fall symmetrically about both the real line and the line $\beta = \frac12$ (sometimes called the critical line), so that if $\rho$ is a zero, so are $\bar\rho$, $1-\rho$, and $1-\bar\rho$. The Riemann Hypothesis (RH) is the assertion that all of the nontrivial zeros have $\beta = \frac12$.

The interest in the Riemann Hypothesis lies in the connection between the zeros of the zeta function and the distribution of prime numbers. This is of course an enormous field of study, but the following gives a flavor of the connection. The von Mangoldt function $\Lambda(n)$ is defined by
$$\Lambda(n) = \begin{cases}\log p, & n = p^m,\ p\text{ prime},\ m\ge1;\\ 0, & \text{otherwise},\end{cases}$$
The eigenvalues of random unitary matrices, and especially the spacings between them, have also been considered in this context, and in fact, in many of the resulting conjectures, either random matrix model produces the same conjecture about the zeta zeroes. It turns out that indeed, when zeta zeros and eigenvalues of random matrices are compared numerically, their behavior matches astonishingly well. This connection was first noticed in a chance meeting of Hugh Montgomery and Freeman Dyson¹, in the context of pair correlations.

¹ Said encounter is now a much-loved anecdote illustrating, among other things, the value of department tea.

In order to identify the appropriate correspondence between zeta zeroes and eigenvalues, a first important observation is that the local density of zeta zeroes at a given height within the critical strip is known. Let
$$n(T) := |\{\rho = \beta + i\gamma : \zeta(\rho) = 0,\ 0 < \gamma \le T\}|, \qquad n(T^-) := |\{\rho = \beta + i\gamma : \zeta(\rho) = 0,\ 0 < \gamma < T\}|,$$
and
$$N(T) := \frac{n(T) + n(T^-)}{2},$$
where in both $n(T)$ and $n(T^-)$, the zeros are counted with multiplicity. The Riemann–von Mangoldt formula states that
$$N(T) = \frac{T}{2\pi}\log\left(\frac{T}{2\pi e}\right) + \frac{7}{8} + R(T) + S(T),$$
where $R(T) = O\left(\frac1T\right)$ and
$$S(T) = \frac{1}{\pi}\arg\zeta\left(\frac12 + iT\right) = O(\log(T)).$$
In particular, this says that the local density of zeros at height $T$ is approximately $\frac{1}{2\pi}\log\left(\frac{T}{2\pi}\right)$. Since the average density of the eigenvalues of $U \in \mathbb{U}(n)$ on the circle is $\frac{n}{2\pi}$, when comparing zeta zeroes and eigenvalues it makes sense to compare eigenvalues of $U \in \mathbb{U}(n)$ with zeros of $\zeta$ at height $T$, with $n = \log\left(\frac{T}{2\pi}\right)$, so that the only natural parameter, namely the density of points, is the same.

We next transform the zeta zeros (and unitary eigenvalues correspondingly) so that the average spacing is 1. Suppose (in this case, really just for convenience of exposition) that RH holds, and order the zeros in the upper half-plane $\rho_n = \frac12 + it_n$ so that $0 < t_1 \le t_2 \le \cdots$. Define the "unfolded zeros" by
$$w_n := \frac{t_n}{2\pi}\log\left(\frac{t_n}{2\pi}\right),$$
so that by the Riemann–von Mangoldt formula,
$$\lim_{W\to\infty} \frac{1}{W}\,\big|\{w_n : 0 < w_n \le W\}\big| = 1.$$
For $\alpha < \beta$ and $W > 0$, define the functions
$$F_\zeta(\alpha,\beta;W) = \frac{1}{W}\,\big|\{w_j, w_k \in [0,W] : \alpha \le w_j - w_k < \beta\}\big|$$
and
$$F_\zeta(\alpha,\beta) = \lim_{W\to\infty} F_\zeta(\alpha,\beta;W).$$
That is, $F_\zeta$ gives the asymptotic density of zeta zeros separated by a prescribed distance. The so-called two-point correlation function $R_{2,\zeta}(x)$ (we will see the connection with our earlier use of this term shortly) can be defined by the formula
$$F_\zeta(\alpha,\beta) = \int_\alpha^\beta R_{2,\zeta}(x)\,dx + \mathbf{1}_{[\alpha,\beta)}(0).$$
A fundamental goal in understanding the distribution of the zeros of $\zeta$ is to understand the behavior of the function $F_\zeta(\alpha,\beta)$, or equivalently, $R_{2,\zeta}(x)$. To do this, it is useful to generalize $F_\zeta$ as follows: given a test function $f$, let
$$F_{2,\zeta}(f;W) := \frac{1}{W}\sum_{\substack{j,k \\ 0 < w_j, w_k \le W}} f(w_j - w_k).$$
In particular, $F_{2,\zeta}(\mathbf{1}_{[\alpha,\beta)};W) = F_\zeta(\alpha,\beta;W)$. For certain test functions, $\lim_{W\to\infty} F_{2,\zeta}(f;W)$ can be computed explicitly.

Theorem 7.1 (Montgomery) Let $f : \mathbb{R} \to \mathbb{R}$ be such that $\hat{f}(\xi) = \int_{-\infty}^{\infty} f(x)e^{2\pi i x\xi}\,dx$ is supported in $(-1,1)$. Then
$$\lim_{W\to\infty} F_{2,\zeta}(f;W) = \int_{-\infty}^{\infty} f(x)\left[1 - \left(\frac{\sin(\pi x)}{\pi x}\right)^2\right]dx.$$

Montgomery's theorem unfortunately does not apply to $f = \mathbf{1}_{[\alpha,\beta)}$; Montgomery's conjecture says that the theorem holds without the restriction on the support of $\hat{f}$, so that for $R_{2,\zeta}$ as defined above,
$$R_{2,\zeta}(x) = 1 - \left(\frac{\sin(\pi x)}{\pi x}\right)^2.$$
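Since the unfolding $w_n = \frac{t_n}{2\pi}\log\frac{t_n}{2\pi}$ and the statistic $F_\zeta(\alpha,\beta;W)$ are given by explicit formulas, they are easy to compute from a table of zero ordinates. The following is a minimal Python sketch (an illustration, not from the original text); the toy ordinates are made up, and a real experiment would use Odlyzko's tables instead.

```python
import numpy as np

def unfold(t):
    """Unfolded zeros w_n = (t_n / 2pi) * log(t_n / 2pi); mean spacing ~ 1."""
    t = np.asarray(t, dtype=float)
    return t / (2 * np.pi) * np.log(t / (2 * np.pi))

def F_zeta(w, alpha, beta):
    """Empirical F(alpha, beta; W) = (1/W) #{(j,k): alpha <= w_j - w_k < beta},
    with W = max(w).  The diagonal j = k supplies the 1_{[alpha,beta)}(0) term."""
    w = np.asarray(w, dtype=float)
    diffs = w[:, None] - w[None, :]
    return np.sum((diffs >= alpha) & (diffs < beta)) / w.max()

# Toy usage: unit-rate points standing in for zero ordinates.
t = 100.0 + np.cumsum(np.random.default_rng(1).exponential(1.0, 2000))
w = unfold(t)
print(F_zeta(w, 0.25, 0.75))
```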
Back on the random matrix side, let $U$ be a random unitary matrix with eigenvalues $\{e^{i\theta_1}, \dots, e^{i\theta_n}\}$, and consider the rescaled eigenangles $\varphi_j := \frac{\theta_j n}{2\pi}$, so that the average spacing is 1. Consider the analog of $F_{2,\zeta}(f;W)$ given by
$$F_{2,U}(f) := \frac{1}{n}\sum_{j,k=1}^n f(\varphi_j - \varphi_k).$$
Fix $\alpha < \beta$ and suppose that $0 \notin [\alpha,\beta)$. Then
$$\mathbb{E}\left[F_{2,U}(\mathbf{1}_{[\alpha,\beta)})\right] = \frac{1}{n}\mathbb{E}\,\big|\{(j,k) : \alpha \le \varphi_j - \varphi_k < \beta\}\big| = \frac{1}{n}\mathbb{E}\left|\left\{(j,k) : \frac{2\pi\alpha}{n} + \theta_k \le \theta_j < \frac{2\pi\beta}{n} + \theta_k\right\}\right|.$$
For $M \in \mathbb{N}$, divide $[0,2\pi)$ into $M$ ordered subintervals $I_1, I_2, \dots, I_M$ of equal length. It follows from the dominated convergence theorem that
$$\frac{1}{n}\mathbb{E}\left|\left\{(j,k) : \frac{2\pi\alpha}{n} + \theta_k \le \theta_j < \frac{2\pi\beta}{n} + \theta_k\right\}\right| = \frac{1}{n}\mathbb{E}\left[\lim_{M\to\infty}\sum_{\ell=1}^M \left|\left\{(j,k) : \theta_k \in I_\ell,\ \theta_j \in I_\ell + \left[\tfrac{2\pi\alpha}{n}, \tfrac{2\pi\beta}{n}\right)\right\}\right|\right] = \frac{1}{n}\lim_{M\to\infty}\sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}\,N_{I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)}\right].$$
Now, since $0 \notin [\alpha,\beta)$, the intervals $I_\ell$ and $I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)$ are disjoint when $M$ is large enough, and so, for $\rho_2$ the 2-point correlation function of the unitary eigenangle process (see Section 3.2),
$$\frac{1}{n}\lim_{M\to\infty}\sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}\,N_{I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)}\right] = \frac{1}{n}\lim_{M\to\infty}\sum_{\ell=1}^M \frac{1}{(2\pi)^2}\int_{I_\ell}\int_{I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)} \rho_2(x,y)\,dx\,dy = \frac{1}{2\pi n}\int_{\frac{2\pi\alpha}{n}}^{\frac{2\pi\beta}{n}} \rho_2(0,y)\,dy = \frac{1}{2\pi}\int_\alpha^\beta \left[1 - \left(\frac{\sin(\pi u)}{n\sin\left(\frac{\pi u}{n}\right)}\right)^2\right]du,$$
using the form of $\rho_2(x,y)$ given in the chart on page 88.

If $0 \in [\alpha,\beta)$, then the argument above needs to be modified only slightly: in that case,
$$\sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}\,N_{I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)}\right] = \sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}^2 + N_{I_\ell}\,N_{\left(I_\ell + \left[\frac{2\pi\alpha}{n}, \frac{2\pi\beta}{n}\right)\right)\setminus I_\ell}\right].$$
We have seen (Theorem 4.8) that $\operatorname{Var}(N_{I_\ell}) = O(\log(n))$, and so
$$\lim_{n\to\infty}\frac{1}{n}\lim_{M\to\infty}\sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}^2\right] = \lim_{n\to\infty}\frac{1}{n}\lim_{M\to\infty}\sum_{\ell=1}^M \mathbb{E}\left[N_{I_\ell}\right] = 1,$$
whereas the second term can be treated exactly as in the previous case. It thus follows that
$$\lim_{n\to\infty}\mathbb{E}\left[F_{2,U}(\mathbf{1}_{[\alpha,\beta)})\right] = \int_\alpha^\beta \left[1 - \left(\frac{\sin(\pi u)}{\pi u}\right)^2\right]du + \mathbf{1}_{[\alpha,\beta)}(0),$$
which exactly matches Montgomery's conjecture for the zeta zeros.
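The limit just computed is easy to probe by simulation. The sketch below (an illustration, not from the original text) samples Haar-distributed unitary matrices via QR factorization of a complex Ginibre matrix with the phases of the diagonal of $R$ divided out (the recipe described in Mezzadri's article in the references), and compares the Monte Carlo value of $F_{2,U}(\mathbf{1}_{[\alpha,\beta)})$ with $\int_\alpha^\beta \big(1-(\sin\pi u/\pi u)^2\big)\,du$. Wrap-around pairs at the ends of $[0,2\pi)$ are ignored, a negligible edge effect for moderate $n$.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Haar measure on U(n): QR of a complex Ginibre matrix, with the
    phases of diag(R) divided out so the factorization is unique."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def F2U(U, alpha, beta):
    """(1/n) #{(j,k): alpha <= phi_j - phi_k < beta} for the rescaled
    eigenangles phi_j = n * theta_j / (2 pi)."""
    n = U.shape[0]
    phi = np.angle(np.linalg.eigvals(U)) % (2 * np.pi) * n / (2 * np.pi)
    d = phi[:, None] - phi[None, :]
    return np.sum((d >= alpha) & (d < beta)) / n

n, reps, a, b = 60, 400, 0.5, 1.0
mc = np.mean([F2U(haar_unitary(n), a, b) for _ in range(reps)])
pred = quad(lambda u: 1 - np.sinc(u) ** 2, a, b)[0]  # np.sinc(u) = sin(pi u)/(pi u)
print(mc, pred)
```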
7.2 The zeta function and characteristic polynomials of random unitary matrices

The computations in the previous section suggest that the eigenvalues of a random matrix, i.e., the zeros of its characteristic polynomial, behave similarly to the zeros of the Riemann zeta-function. An important next step was taken by Jon Keating and Nina Snaith in [ ], who suggested that the value distribution of the characteristic polynomial of a random unitary matrix is a reasonable (local) model for the value distribution of the zeta-function itself. They demonstrated the validity of this idea by comparing new theorems in random matrix theory to known results and conjectures on the value distribution of the zeta-function; this in turn allowed them to formulate new conjectures on the value distribution of the zeta function, which are well supported by numerical work. This section gives a survey of some of the results and conjectures of Keating and Snaith, with the numerical evidence deferred until the next section.

The following theorem and conjecture are the two main points of comparison on the zeta side.

Theorem 7.2 (Selberg) Let $E \subseteq \mathbb{C}$ be a rectangle. Then
$$\lim_{T\to\infty}\mu\left\{t : T \le t \le 2T,\ \frac{\log\zeta\left(\frac12+it\right)}{\sqrt{\frac12\log\log\left(\frac{T}{2\pi}\right)}} \in E\right\} = \frac{1}{2\pi}\iint_E e^{-\frac12(x^2+y^2)}\,dx\,dy,$$
where $\mu$ denotes Lebesgue measure on the line.

Conjecture 7.3 (Moment conjecture) Let $\lambda \ge 0$. There is a function $f(\lambda)$ such that
$$\lim_{T\to\infty}\frac{1}{(\log(T))^{\lambda^2}}\,\frac{1}{T}\int_0^T \left|\zeta\left(\tfrac12+it\right)\right|^{2\lambda}dt = f(\lambda)\,a(\lambda),$$
where the arithmetic factor $a(\lambda)$ is given by
$$a(\lambda) = \prod_p\left[\left(1-\frac{1}{p}\right)^{\lambda^2}\sum_{m=0}^{\infty}\left(\frac{\Gamma(\lambda+m)}{m!\,\Gamma(\lambda)}\right)^2\frac{1}{p^m}\right].$$

(It is traditional to separate out this arithmetic factor, which comes from number-theoretic considerations, rather than incorporating the unknown function $f(\lambda)$ into it, but of course one could simply state the conjecture as asserting the existence of the limit on the left-hand side.)

The conjecture is trivially true when $\lambda = 0$, with $f(0) = 1$. It is known to be true when $\lambda = 1, 2$, with $f(1) = 1$ and $f(2) = \frac{1}{12}$. Aside from that, the conjecture is open, and prior to the work of Keating and Snaith, there were conjectured values for $f(\lambda)$ only at $\lambda = 3, 4$.

On the random matrix side, consider the characteristic polynomial
$$Z(U,\theta) := \det(I - Ue^{-i\theta}) = \prod_{j=1}^n \left(1 - e^{i(\theta_j - \theta)}\right),$$
where $U \in \mathbb{U}(n)$ has eigenvalues $\{e^{i\theta_1}, \dots, e^{i\theta_n}\}$. The following result is then the analog of Theorem 7.2.

Theorem 7.4 (Keating–Snaith) Let $Z(U,\theta) = \det(I - Ue^{-i\theta})$ be the characteristic polynomial of a random matrix $U \in \mathbb{U}(n)$, and let $E \subseteq \mathbb{C}$ be a rectangle. Then
$$\lim_{n\to\infty}\mathbb{P}\left[\frac{\log(Z(U,\theta))}{\sqrt{\frac12\log(n)}} \in E\right] = \frac{1}{2\pi}\iint_E e^{-\frac12(x^2+y^2)}\,dx\,dy.$$

In the case of Conjecture 7.3, the analogous limit on the random matrix side not only exists but can actually be computed, as follows.

Theorem 7.5 (Keating–Snaith) Let $Z(U,\theta) = \det(I - Ue^{-i\theta})$ be the characteristic polynomial of a random matrix $U \in \mathbb{U}(n)$, and let $\lambda \in \mathbb{C}$ with $\operatorname{Re}(\lambda) > -\frac12$. Then
$$\lim_{n\to\infty}\frac{1}{n^{\lambda^2}}\mathbb{E}\left[|Z(U,\theta)|^{2\lambda}\right] = \frac{G^2(1+\lambda)}{G(1+2\lambda)},$$
where $G$ is the Barnes $G$-function, defined by
$$G(1+z) = (2\pi)^{z/2}\,e^{-[(1+\gamma)z^2+z]/2}\prod_{j=1}^{\infty}\left(1+\frac{z}{j}\right)^j e^{-z+\frac{z^2}{2j}},$$
with $\gamma$ denoting the Euler–Mascheroni constant. In particular, for $k \in \mathbb{N}$,
$$\lim_{n\to\infty}\frac{1}{n^{k^2}}\mathbb{E}\left[|Z(U,\theta)|^{2k}\right] = \prod_{j=0}^{k-1}\frac{j!}{(j+k)!}.$$

This result had a huge impact because it suggested the following conjecture for the value of the function $f(\lambda)$ in the moment conjecture (Conjecture 7.3).

Conjecture 7.6 (Keating–Snaith) Let $f(\lambda)$ be as in the moment conjecture, and let
$$f_U(\lambda) := \frac{G^2(1+\lambda)}{G(1+2\lambda)} = \lim_{n\to\infty}\frac{1}{n^{\lambda^2}}\mathbb{E}\left[|Z(U,\theta)|^{2\lambda}\right].$$
Then for $\operatorname{Re}(\lambda) > -\frac12$, $f(\lambda) = f_U(\lambda)$.

As mentioned above, the values of $f$ were previously known only at $\lambda = 1, 2$. Aside from that, there were conjectures, by Conrey and Ghosh at $\lambda = 3$ and by Conrey and Gonek at $\lambda = 4$, which match the Keating–Snaith conjecture above.

The key ingredient for the results on the characteristic polynomial stated above is the following explicit expression for the moment generating function of $\log(Z(U,\theta))$.
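For integer $\lambda = k$, the Barnes $G$ expression in Theorem 7.5 can be checked directly against the factorial product. The sketch below (illustrative, not from the original text) uses mpmath's barnesg to verify that $G^2(1+k)/G(1+2k) = \prod_{j=0}^{k-1} j!/(j+k)!$, reproducing in particular $f(1) = 1$ and $f(2) = \frac{1}{12}$.

```python
import mpmath as mp
from math import factorial

# Verify G(1+k)^2 / G(1+2k) = prod_{j=0}^{k-1} j!/(j+k)! for small integers k.
for k in (1, 2, 3, 4):
    lhs = mp.barnesg(1 + k) ** 2 / mp.barnesg(1 + 2 * k)
    rhs = mp.mpf(1)
    for j in range(k):
        rhs *= mp.mpf(factorial(j)) / factorial(j + k)
    print(k, mp.nstr(lhs, 10), mp.nstr(rhs, 10))
```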
Lemma 7.7 Let $U \in \mathbb{U}(n)$ be a random unitary matrix, and let $Z(U,\theta) = \det(I - Ue^{-i\theta})$ denote the characteristic polynomial of $U$. Let $s, t \in \mathbb{C}$ with $\operatorname{Re}(t \pm s) > -2$. Then for all $\theta \in [0,2\pi)$,
$$\mathbb{E}\left[|Z(U,\theta)|^t e^{is\operatorname{Im}\log(Z(U,\theta))}\right] = \prod_{k=1}^n \frac{\Gamma(k)\,\Gamma(k+t)}{\Gamma\left(k+\frac{t}{2}-\frac{s}{2}\right)\Gamma\left(k+\frac{t}{2}+\frac{s}{2}\right)}. \tag{7.1}$$

Proof First note that since the distribution of the eigenvalues of $U$ is rotationally invariant, $Z(U,\theta) \stackrel{d}{=} Z(U,0)$ for all $\theta$; we will therefore immediately specialize to $\theta = 0$ and write $Z := Z(U,0)$. Note also that, with probability one, none of the eigenangles $\theta_j$ is $0$, and so each $1 - e^{i\theta_j}$ lies in the right half-plane, where we may unambiguously take the argument in $(-\pi,\pi)$. By the Weyl integration formula,
$$\mathbb{E}\left[|Z|^t e^{is\operatorname{Im}\log(Z)}\right] = \mathbb{E}\left[\prod_{j=1}^n |1-e^{i\theta_j}|^t\, e^{is\sum_{j=1}^n \operatorname{Im}\log(1-e^{i\theta_j})}\right] = \frac{1}{(2\pi)^n n!}\int_0^{2\pi}\!\!\cdots\int_0^{2\pi} \prod_{j=1}^n |1-e^{i\theta_j}|^t\, e^{-is\sum_{j=1}^n \sum_{m=1}^\infty \frac{\sin(m\theta_j)}{m}} \prod_{1\le j<k\le n} |e^{i\theta_j}-e^{i\theta_k}|^2\, d\theta_1\cdots d\theta_n. \tag{7.2}$$
The key tool for evaluating this integral is the following Selberg-type integral formula: for $a, b > 0$, $\operatorname{Re}(\alpha+\beta) > 1$, and
$$-\frac{1}{n} < \operatorname{Re}(\delta) < \min\left\{\frac{\operatorname{Re}(\alpha)}{n-1},\ \frac{\operatorname{Re}(\beta)}{n-1},\ \frac{\operatorname{Re}(\alpha+\beta+1)}{2(n-1)}\right\},$$
$$J(a,b,\alpha,\beta,\delta,n) := \int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty} \prod_{1\le j<k\le n}|x_j-x_k|^{2\delta}\prod_{j=1}^n (a+ix_j)^{-\alpha}(b-ix_j)^{-\beta}\,dx_1\cdots dx_n = \frac{(2\pi)^n}{(a+b)^{(\alpha+\beta)n-\delta n(n-1)-n}}\prod_{j=0}^{n-1}\frac{\Gamma(1+\delta+j\delta)\,\Gamma(\alpha+\beta-(n+j-1)\delta-1)}{\Gamma(1+\delta)\,\Gamma(\alpha-j\delta)\,\Gamma(\beta-j\delta)}.$$
Examining the factors of the integrand in (7.2),
$$\prod_{1\le j<k\le n}|e^{i\theta_j}-e^{i\theta_k}|^2 = \prod_{1\le j<k\le n}\left|e^{\frac{i(\theta_j-\theta_k)}{2}} - e^{-\frac{i(\theta_j-\theta_k)}{2}}\right|^2 = 2^{n(n-1)}\prod_{1\le j<k\le n}\left|\sin\left(\frac{\theta_j}{2}-\frac{\theta_k}{2}\right)\right|^2,$$
and similarly,
$$\prod_{j=1}^n |1-e^{i\theta_j}| = 2^n\prod_{j=1}^n \left|\sin\left(\frac{\theta_j}{2}\right)\right|.$$
For each $\theta_j \in (0,2\pi)$,
$$\sum_{m=1}^\infty \frac{\sin(m\theta_j)}{m} = \frac{\pi-\theta_j}{2}$$
(this is just the expansion of $\frac{\pi-x}{2}$ as a Fourier series on $(0,2\pi)$), and so
$$\mathbb{E}\left[|Z|^t e^{is\operatorname{Im}\log(Z)}\right] = \frac{2^{n(n-1)+tn}}{(2\pi)^n n!}\int_0^{2\pi}\!\!\cdots\int_0^{2\pi}\prod_{j=1}^n\left|\sin\left(\frac{\theta_j}{2}\right)\right|^t\prod_{j=1}^n e^{-\frac{is(\pi-\theta_j)}{2}}\prod_{1\le j<k\le n}\left|\sin\left(\frac{\theta_j}{2}-\frac{\theta_k}{2}\right)\right|^2 d\theta_1\cdots d\theta_n.$$
Letting $\varphi_j = \frac{\theta_j-\pi}{2}$ now gives that
$$\mathbb{E}\left[|Z|^t e^{is\operatorname{Im}\log(Z)}\right] = \frac{2^{n^2+tn}}{(2\pi)^n n!}\int_{-\pi/2}^{\pi/2}\!\!\cdots\int_{-\pi/2}^{\pi/2}\prod_{j=1}^n\left|\cos(\varphi_j)\right|^t\prod_{j=1}^n e^{is\varphi_j}\prod_{1\le j<k\le n}\left|\sin(\varphi_j-\varphi_k)\right|^2 d\varphi_1\cdots d\varphi_n,$$
and since $\sin(\varphi_j-\varphi_k) = \sin(\varphi_j)\cos(\varphi_k) - \cos(\varphi_j)\sin(\varphi_k) = \cos(\varphi_j)\cos(\varphi_k)\big(\tan(\varphi_j)-\tan(\varphi_k)\big)$, this equals
$$\frac{2^{n^2+tn}}{(2\pi)^n n!}\int_{-\pi/2}^{\pi/2}\!\!\cdots\int_{-\pi/2}^{\pi/2}\prod_{j=1}^n\left|\cos(\varphi_j)\right|^{t+2(n-1)}\prod_{j=1}^n e^{is\varphi_j}\prod_{1\le j<k\le n}\left|\tan(\varphi_j)-\tan(\varphi_k)\right|^2 d\varphi_1\cdots d\varphi_n.$$
Now letting $x_j = \tan(\varphi_j)$, so that
$$\cos(\varphi_j) = \frac{1}{\sqrt{1+x_j^2}}, \qquad \sin(\varphi_j) = \frac{x_j}{\sqrt{1+x_j^2}},$$
gives that
$$\mathbb{E}\left[|Z|^t e^{is\operatorname{Im}\log(Z)}\right] = \frac{2^{n^2+tn}}{(2\pi)^n n!}\int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty}\prod_{j=1}^n\left[\left(\frac{1}{\sqrt{1+x_j^2}}\right)^{t+2n}\left(\frac{1+ix_j}{\sqrt{1+x_j^2}}\right)^s\right]\prod_{1\le j<k\le n}|x_j-x_k|^2\,dx_1\cdots dx_n = \frac{2^{n^2+tn}}{(2\pi)^n n!}\int_{-\infty}^{\infty}\!\!\cdots\int_{-\infty}^{\infty}\prod_{j=1}^n\left[(1+ix_j)^{-\left(n+\frac{t}{2}-\frac{s}{2}\right)}(1-ix_j)^{-\left(n+\frac{t}{2}+\frac{s}{2}\right)}\right]\prod_{1\le j<k\le n}|x_j-x_k|^2\,dx_1\cdots dx_n.$$
This last integral is exactly $J\left(1, 1, n+\frac{t}{2}-\frac{s}{2}, n+\frac{t}{2}+\frac{s}{2}, 1, n\right)$, and the conditions on the parameters hold exactly when $\operatorname{Re}(t \pm s) > -2$; under these conditions, the formula yields (after cancellation of the powers of 2 and of $n!$, and re-indexing the product)
$$\mathbb{E}\left[|Z|^t e^{is\operatorname{Im}\log(Z)}\right] = \prod_{k=1}^n \frac{\Gamma(k)\,\Gamma(k+t)}{\Gamma\left(k+\frac{t}{2}-\frac{s}{2}\right)\Gamma\left(k+\frac{t}{2}+\frac{s}{2}\right)},$$
as claimed. □

Proof of Theorem 7.4 As before, it suffices to consider $Z = Z(U,0)$. For $t, s \in \mathbb{R}$, let
$$q = \frac{s}{\sqrt{\frac12\log(n)}}, \qquad r = \frac{t}{\sqrt{\frac12\log(n)}}.$$
The moment generating function of the complex random variable $\frac{\log(Z)}{\sqrt{\frac12\log(n)}}$ is
$$M(t,s) = \mathbb{E}\left[|Z|^r e^{iq\operatorname{Im}(\log(Z))}\right] = \prod_{k=1}^n \frac{\Gamma(k)\,\Gamma(k+r)}{\Gamma\left(k+\frac{r}{2}-\frac{q}{2}\right)\Gamma\left(k+\frac{r}{2}+\frac{q}{2}\right)},$$
and so
$$\log(M(t,s)) = \sum_{k=1}^n\left[\log(\Gamma(k)) + \log(\Gamma(k+r)) - \log\left(\Gamma\left(k+\tfrac{r}{2}-\tfrac{q}{2}\right)\right) - \log\left(\Gamma\left(k+\tfrac{r}{2}+\tfrac{q}{2}\right)\right)\right]. \tag{7.3}$$
The idea is to evaluate the limit in a neighborhood of $(t,s) = (0,0)$ by first expanding this expression as a power series in $s$ and $t$ and then taking the limit as $n \to \infty$; the claimed central limit theorem follows if we can show that for $s$ and $t$ small enough,
$$\lim_{n\to\infty}\log(M(t,s)) = \frac{t^2-s^2}{2} = \log\mathbb{E}\left[e^{tZ_1+isZ_2}\right],$$
where $Z_1$ and $Z_2$ are independent standard Gaussian random variables.

Now, by definition $M(0,0) = 1$, and so $\log(M(0,0)) = 0$. For $\ell \ge 0$, let
$$\psi^{(\ell)}(z) = \frac{d^{\ell+1}}{dz^{\ell+1}}\left[\log(\Gamma(z))\right] = (-1)^{\ell+1}\int_0^\infty \frac{t^\ell e^{-zt}}{1-e^{-t}}\,dt \tag{7.4}$$
denote the $\ell$th polygamma function. Then by (7.3),
$$\frac{\partial}{\partial s}\left[\log(M(t,s))\right] = \frac{1}{\sqrt{2\log(n)}}\sum_{k=1}^n\left[\psi^{(0)}\left(k+\frac{t-s}{\sqrt{2\log(n)}}\right) - \psi^{(0)}\left(k+\frac{t+s}{\sqrt{2\log(n)}}\right)\right],$$
and so $\frac{\partial}{\partial s}\left[\log(M(t,s))\right]\big|_{(0,0)} = 0$. Similarly,
$$\frac{\partial}{\partial t}\left[\log(M(t,s))\right]\bigg|_{(0,0)} = 0 \qquad\text{and}\qquad \frac{\partial^2}{\partial t\,\partial s}\left[\log(M(t,s))\right]\bigg|_{(0,0)} = 0,$$
and so the power series expansion of $\log(M(t,s))$ has no terms of order 0 or 1 and no $st$ term.
Continuing,
$$\frac{\partial^2}{\partial s^2}\left[\log(M(t,s))\right] = -\frac{1}{2\log(n)}\sum_{k=1}^n\left[\psi^{(1)}\left(k+\frac{t-s}{\sqrt{2\log(n)}}\right) + \psi^{(1)}\left(k+\frac{t+s}{\sqrt{2\log(n)}}\right)\right],$$
and so
$$\frac{\partial^2}{\partial s^2}\left[\log(M(t,s))\right]\bigg|_{(0,0)} = -\frac{1}{\log(n)}\sum_{k=1}^n \psi^{(1)}(k).$$
Using the integral expression for $\psi^{(1)}$ from (7.4),
$$\sum_{k=1}^n \psi^{(1)}(k) = \int_0^\infty \frac{t}{1-e^{-t}}\left(\sum_{k=1}^n e^{-kt}\right)dt = \int_0^\infty \frac{te^{-t}(1-e^{-nt})}{(1-e^{-t})^2}\,dt = -\int_0^\infty t(1-e^{-nt})\,\frac{d}{dt}\left(\frac{e^{-t}}{1-e^{-t}}\right)dt = \int_0^\infty \left(\frac{e^{-t}}{1-e^{-t}}\right)\left[(1-e^{-nt}) + nte^{-nt}\right]dt,$$
integrating by parts in the last step. Rearranging and re-expanding the geometric sums, this last expression is
$$\int_0^\infty\left[\frac{e^{-t}(1-e^{-nt})}{1-e^{-t}} + \frac{nte^{-(n+1)t}}{1-e^{-t}}\right]dt = \int_0^\infty\left[\left(\sum_{k=1}^n e^{-kt}\right) + nt\left(\sum_{k=n+1}^\infty e^{-kt}\right)\right]dt = \sum_{k=1}^n \frac{1}{k} + n\sum_{k=n+1}^\infty \frac{1}{k^2} = \log(n) + O(1).$$
It thus follows that
$$\lim_{n\to\infty}\left(\frac{s^2}{2}\,\frac{\partial^2}{\partial s^2}\left[\log(M(t,s))\right]\bigg|_{(0,0)}\right) = -\frac{s^2}{2}.$$
The proof that
$$\lim_{n\to\infty}\left(\frac{t^2}{2}\,\frac{\partial^2}{\partial t^2}\left[\log(M(t,s))\right]\bigg|_{(0,0)}\right) = \frac{t^2}{2}$$
is nearly identical. It thus remains to show that in some neighborhood of $(0,0)$, for $s$ and $t$ fixed,
$$\lim_{n\to\infty}\left[\sum_{j+\ell\ge 3}\frac{s^j t^\ell}{j!\,\ell!}\,\frac{\partial^{j+\ell}}{\partial s^j\,\partial t^\ell}\left[\log(M(t,s))\right]\bigg|_{(0,0)}\right] = 0.$$
From (7.3),
$$\frac{\partial^{j+\ell}}{\partial s^j\,\partial t^\ell}\left[\log(M(t,s))\right]\bigg|_{(0,0)} = \sum_{k=1}^n\left[\mathbf{1}_{j=0}\,\psi^{(\ell-1)}(k)\left(\frac{2}{\log(n)}\right)^{\frac{\ell}{2}} - \psi^{(j+\ell-1)}(k)\,\frac{1+(-1)^j}{(2\log(n))^{\frac{j+\ell}{2}}}\right].$$
We will just consider the second term; the first is essentially the same, but slightly easier. Using the integral representation of the polygamma function and integration by parts as above,
$$\sum_{k=1}^n \psi^{(j+\ell-1)}(k) = (-1)^{j+\ell}\sum_{k=1}^n\int_0^\infty \frac{t^{j+\ell-1}e^{-kt}}{1-e^{-t}}\,dt = (-1)^{j+\ell}\int_0^\infty \frac{t^{j+\ell-1}e^{-t}(1-e^{-nt})}{(1-e^{-t})^2}\,dt = (-1)^{j+\ell}\int_0^\infty\left(\frac{e^{-t}}{1-e^{-t}}\right)\left[(j+\ell-1)t^{j+\ell-2} - (j+\ell-1)t^{j+\ell-2}e^{-nt} + nt^{j+\ell-1}e^{-nt}\right]dt. \tag{7.5}$$
Now, for $n \ge 2$,
$$\int_0^\infty \frac{t^{n-1}}{e^t-1}\,dt = \Gamma(n)\zeta(n),$$
and so
$$\int_0^\infty\left(\frac{e^{-t}}{1-e^{-t}}\right)(j+\ell-1)t^{j+\ell-2}\,dt = (j+\ell-1)\int_0^\infty\frac{t^{j+\ell-2}}{e^t-1}\,dt = (j+\ell-1)!\,\zeta(j+\ell-1).$$
For $j+\ell \ge 3$, this last expression is bounded by $(j+\ell)!\,\zeta(2)$, and so it follows that
$$\left|\sum_{j+\ell\ge 3}\frac{s^j t^\ell\,(-1)^{j+\ell}\big(1+(-1)^j\big)}{j!\,\ell!\,(2\log(n))^{\frac{j+\ell}{2}}}\int_0^\infty\left(\frac{e^{-t}}{1-e^{-t}}\right)(j+\ell-1)t^{j+\ell-2}\,dt\right| \le 2\zeta(2)\sum_{m=3}^\infty\sum_{j=0}^m\binom{m}{j}|s|^j|t|^{m-j}\left(\sqrt{\frac{1}{2\log(n)}}\right)^m \le \frac{2\zeta(2)}{(2\log(n))^{\frac32}}\sum_{m=3}^\infty\big(|s|+|t|\big)^m,$$
which tends to zero as $n \to \infty$, as long as $|s|+|t| < 1$. Considering the absolute value of the remaining terms in (7.5),
$$\int_0^\infty\left(\frac{e^{-t}}{1-e^{-t}}\right)\left[(j+\ell-1)t^{j+\ell-2} + nt^{j+\ell-1}\right]e^{-nt}\,dt = \int_0^\infty\left[(j+\ell-1)t^{j+\ell-2} + nt^{j+\ell-1}\right]\frac{e^{-nt}}{e^t-1}\,dt \le \int_0^\infty\left[(j+\ell-1)t^{j+\ell-3} + nt^{j+\ell-2}\right]e^{-nt}\,dt = \frac{1}{n^{j+\ell-2}}\int_0^\infty\left[(j+\ell-1)y^{j+\ell-3} + y^{j+\ell-2}\right]e^{-y}\,dy = \frac{1}{n^{j+\ell-2}}\left[(j+\ell-1)\Gamma(j+\ell-2) + \Gamma(j+\ell-1)\right] \le \frac{2(j+\ell-1)!}{n^{j+\ell-2}},$$
where we have used the estimate $e^t - 1 \ge t$ in the first inequality. We can now bound the contribution of these terms to the tail of the power series for $\log(M(t,s))$:
$$\sum_{j+\ell\ge 3}\frac{2|s|^j|t|^\ell}{j!\,\ell!\,(2\log(n))^{\frac{j+\ell}{2}}}\int_0^\infty\left(\frac{e^{-t}}{1-e^{-t}}\right)\left[(j+\ell-1)t^{j+\ell-2} + nt^{j+\ell-1}\right]e^{-nt}\,dt \le \frac{4}{n(2\log(n))^{\frac32}}\sum_{m=3}^\infty\sum_{j=0}^m\binom{m}{j}|s|^j|t|^{m-j} = \frac{4}{n(2\log(n))^{\frac32}}\sum_{m=3}^\infty\big(|s|+|t|\big)^m,$$
which again tends to zero as long as $|s|+|t| < 1$. We have therefore shown that if $|s|+|t| < 1$, then
$$\lim_{n\to\infty}\log(M(t,s)) = \frac{t^2}{2} - \frac{s^2}{2},$$
which completes the proof of the bivariate Gaussian limit. □
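The key estimate $\sum_{k=1}^n \psi^{(1)}(k) = \log(n) + O(1)$, which produces the $-\frac{s^2}{2}$ term above, is easy to check numerically. A minimal sketch (illustrative, not from the original text) using scipy's polygamma:

```python
import numpy as np
from scipy.special import polygamma

# sum_{k=1}^n psi^(1)(k) compared with log(n): the difference should
# stay bounded (it in fact converges to a constant), i.e. the sum is log(n) + O(1).
for n in (10, 1_000, 100_000):
    total = polygamma(1, np.arange(1, n + 1)).sum()
    print(n, total, total - np.log(n))
```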
Proof of Theorem 7.5 As usual, set $\theta = 0$ and consider $Z = Z(U,0)$. Letting $s = 0$ and $t = 2\lambda$ in the statement of Lemma 7.7 gives that
$$\frac{1}{n^{\lambda^2}}\mathbb{E}\left[|Z|^{2\lambda}\right] = \frac{1}{n^{\lambda^2}}\prod_{k=1}^n\frac{\Gamma(k)\,\Gamma(k+2\lambda)}{\big(\Gamma(k+\lambda)\big)^2}. \tag{7.6}$$
The Barnes $G$-function satisfies the functional equation $G(z+1) = \Gamma(z)G(z)$, so that
$$\Gamma(z) = \frac{G(z+1)}{G(z)}.$$
Expressing all the gamma functions in (7.6) as ratios of $G$-functions in this way leads to massive cancellation, so that
$$\frac{1}{n^{\lambda^2}}\mathbb{E}\left[|Z|^{2\lambda}\right] = \frac{G^2(1+\lambda)}{G(1+2\lambda)}\left[\frac{G(n+1)\,G(n+1+2\lambda)}{n^{\lambda^2}\,G^2(n+1+\lambda)}\right],$$
using the fact that $G(1) = 1$. To prove the theorem, it therefore suffices to show that for $\lambda$ fixed,
$$\lim_{n\to\infty}\left[\frac{G(n+1)\,G(n+1+2\lambda)}{n^{\lambda^2}\,G^2(n+1+\lambda)}\right] = 1.$$
For $z$ in the right half-plane with $|z|$ large, there is the following expansion of $\log(G(z+1))$:
$$\log(G(z+1)) = C + \frac{z}{2}\log(2\pi) + \left(\frac{z^2}{2}-\frac{1}{12}\right)\log(z) - \frac{3z^2}{4} + O\left(\frac{1}{z^2}\right).$$
Taking the logarithm of the expression above and using this expansion yields
$$\log\left[\frac{G(n+1)\,G(n+1+2\lambda)}{n^{\lambda^2}\,G^2(n+1+\lambda)}\right] = -\lambda^2\log(n) + \log(G(n+1)) + \log(G(n+2\lambda+1)) - 2\log(G(n+\lambda+1))$$
$$= -\lambda^2\log(n) + \left(\frac{n^2}{2}-\frac{1}{12}\right)\log(n) + \left(\frac{(n+2\lambda)^2}{2}-\frac{1}{12}\right)\log\left(n\left(1+\frac{2\lambda}{n}\right)\right) - 2\left(\frac{(n+\lambda)^2}{2}-\frac{1}{12}\right)\log\left(n\left(1+\frac{\lambda}{n}\right)\right) - \frac{3n^2}{4} - \frac{3(n+2\lambda)^2}{4} + \frac{3(n+\lambda)^2}{2} + O\left(\frac{1}{n^2}\right)$$
$$= \left(\frac{n^2}{2}+2n\lambda+2\lambda^2-\frac{1}{12}\right)\log\left(1+\frac{2\lambda}{n}\right) - \left(n^2+2n\lambda+\lambda^2-\frac{1}{6}\right)\log\left(1+\frac{\lambda}{n}\right) - \frac{3\lambda^2}{2} + O\left(\frac{1}{n^2}\right)$$
$$= \left(\frac{n^2}{2}+2n\lambda+2\lambda^2-\frac{1}{12}\right)\left(\frac{2\lambda}{n}-\frac{2\lambda^2}{n^2}+O\left(\frac{1}{n^3}\right)\right) - \left(n^2+2n\lambda+\lambda^2-\frac{1}{6}\right)\left(\frac{\lambda}{n}-\frac{\lambda^2}{2n^2}+O\left(\frac{1}{n^3}\right)\right) - \frac{3\lambda^2}{2} + O\left(\frac{1}{n^2}\right)$$
$$= O\left(\frac{1}{n}\right),$$
where the implicit constants depend on $\lambda$. It thus follows that
$$\lim_{n\to\infty}\log\left[\frac{G(n+1)\,G(n+1+2\lambda)}{n^{\lambda^2}\,G^2(n+1+\lambda)}\right] = 0,$$
which completes the proof. □
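The final limit in the proof can also be observed numerically; the sketch below (illustrative, not from the original text) evaluates the ratio $G(n+1)G(n+1+2\lambda)/\big(n^{\lambda^2}G^2(n+1+\lambda)\big)$ with mpmath and watches it approach 1.

```python
import mpmath as mp

def g_ratio(n, lam):
    """G(n+1) G(n+1+2 lam) / (n^{lam^2} G(n+1+lam)^2); should tend to 1."""
    return (mp.barnesg(n + 1) * mp.barnesg(n + 1 + 2 * lam)
            / (mp.mpf(n) ** (lam ** 2) * mp.barnesg(n + 1 + lam) ** 2))

lam = 1.5
for n in (10, 100, 1000):
    print(n, mp.nstr(g_ratio(n, lam), 10))
```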
7.3 Numerical and statistical work

While the connection between the zeta zeroes and the eigenvalues of random unitary matrices remains unexplained, there is ample numerical evidence for it. This section is mainly meant to advertise the existence of serious numerical work, with some impressive pictures; for more involved numerical and statistical analysis, see the references in the end-of-chapter notes.

The earliest numerical work in this area was by Andrew Odlyzko, who generated large tables (and then much larger tables) of zeta zeroes, which he first used to investigate Montgomery's conjecture. Consider only the zeroes $\rho$ with $\operatorname{Im}(\rho) > 0$, and order them (with multiplicity) by imaginary part: $0 < \operatorname{Im}(\rho_1) \le \operatorname{Im}(\rho_2) \le \cdots$. Odlyzko computed a large number of such zeros and found them all to be simple and to lie on the critical line, so that they may be written $\rho_n = \frac12 + it_n$. Recall that Montgomery's conjecture dealt with the unfolded zeroes
$$w_n = \frac{t_n}{2\pi}\log\left(\frac{t_n}{2\pi}\right).$$
Rather than working with the unfolded zeroes, Odlyzko considered normalizing the spacings: for $n \in \mathbb{N}$,
$$\delta_n := \frac{t_{n+1}-t_n}{2\pi}\log\left(\frac{t_n}{2\pi}\right).$$
These two transformations are not meaningfully different for testing Montgomery's conjecture: either makes the average spacing equal to 1 at large height.

Figure 7.1 below illustrates the pair correlations for the data set $\{t_n : N+1 \le n \le N+M\}$, where $N = 100000000000018097784511$ and $M = 203401872$. The interval $[0,3)$ is broken into subintervals $[\alpha,\beta)$ of length $\frac{1}{20}$, and for each interval $[\alpha,\beta)$,
$$a_{\alpha,\beta} := \frac{20}{M}\,\big|\{(n,k) : N+1 \le n \le N+M,\ k \ge 0,\ \delta_n + \cdots + \delta_{n+k} \in [\alpha,\beta)\}\big|.$$
According to Montgomery's conjecture, we should have
$$a_{\alpha,\beta} \approx 1 - \left(\frac{\sin(\pi\gamma)}{\pi\gamma}\right)^2$$
for any $\gamma \in [\alpha,\beta)$. In the figure, each interval $[\alpha,\beta)$ has a point plotted at $x = \frac{\alpha+\beta}{2}$, $y = a_{\alpha,\beta}$; the solid line is the graph of $y = 1-\left(\frac{\sin(\pi x)}{\pi x}\right)^2$.

[Figure 7.1: Pair correlations of (spacing-normalized) zeta zeroes around height $10^{23}$ (about $2\times 10^8$ zeros), and the random matrix prediction. Figure courtesy of Andrew Odlyzko.]

A related connection also tested by Odlyzko is the distribution of nearest-neighbor spacings. One can compute the predicted distribution of the $\delta_n$ based on the limiting pair correlation; the predicted distribution is the so-called Gaudin–Mehta distribution, which has a known, albeit not very explicit, density. Figure 7.2 below shows the empirical distribution of the $\delta_n$ for about $10^9$ zeroes of the zeta function, at height approximately $10^{23}$. Each plotted point has $x$-coordinate at the midpoint of an interval of length $\frac{1}{20}$ and height given by the proportion of the computed $\delta_n$ lying in that interval. The smooth curve is the density of the predicted distribution.

[Figure 7.2: (Normalized) nearest-neighbor spacings of zeta zeroes around the $10^{23}$rd zero (about $10^9$ zeros), and the random matrix prediction. Figure courtesy of Andrew Odlyzko.]

Statistics related to the normalized spacings are all about the local behavior of the zeta zeroes. More global features of Odlyzko's data were subjected to extensive statistical testing by Coram and Diaconis in [ ]; below, we present two of their main results, illustrated with the relevant figures from [ ]. Because global statistics of random matrices (e.g., the trace) depend on the entire ensemble of eigenvalues, a comparison with some corresponding feature of the zeta data requires one first to put blocks of zeta zeroes onto the unit circle. In their statistical analysis, Coram and Diaconis used a data set consisting of 50,000 consecutive zeros starting around the $10^{20}$th zero, which is roughly at height $T = 1.5\times 10^{19}$, corresponding to matrices of size $n = 42$. First the data set was divided into blocks of 43 consecutive zeros. Each block was then wrapped around the unit circle so that the first and last zero were both at $z = 1$, and then the circle was randomly rotated. More concretely, for each block $\left\{\frac12 + it_n, \dots, \frac12 + it_{n+42}\right\}$, let $\delta_j = t_{n+j} - t_{n+j-1}$ for $j = 1, \dots, 42$, and let $\Delta_j := \sum_{k=1}^j \delta_k$. Then the wrapped zeta zeroes are the (random) points
$$X_j = \exp\left\{2\pi i\left(\frac{\Delta_j}{\Delta_{42}} + U\right)\right\},$$
where $U$ is a uniform random variable in $[0,1)$.

It is a consequence of Proposition 3.11 that if $\{U_n\}_{n\in\mathbb{N}}$ is a sequence of Haar-distributed random unitary matrices, then $|\operatorname{Tr}(U_n)|^2$ converges to an exponential random variable with mean 1. In fact, this convergence happens very quickly: it follows from work of Johansson that if $U \in \mathbb{U}(n)$ is distributed according to Haar measure, then for $t \ge 0$,
$$\left|\mathbb{P}\left[|\operatorname{Tr}(U)|^2 \ge t\right] - e^{-t}\right| \le c\,n^{-\delta n}$$
for some universal constants $c$ and $\delta$. Coram and Diaconis computed the corresponding norm-squared "traces" of the wrapped zeta data; the comparison with the random matrix prediction is given in Figure 7.3.

[Figure 7.3: Empirical distribution of norm-squared "traces" of the wrapped zeta data and the exponential density. Reprinted by permission from IOP Publishing.]
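The exponential limit for $|\operatorname{Tr}(U)|^2$ is also easy to see in simulation. The sketch below (illustrative, not from the original text) uses the same Ginibre-QR Haar sampler as the earlier sketch and compares empirical tail probabilities $\mathbb{P}[|\operatorname{Tr}(U)|^2 \ge t]$ at $n = 42$ with $e^{-t}$.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_unitary(n):
    """Haar measure on U(n) via QR of complex Ginibre with phase correction."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n, reps = 42, 4000
tr2 = np.array([abs(np.trace(haar_unitary(n))) ** 2 for _ in range(reps)])
for t in (0.5, 1.0, 2.0):
    # Empirical tail vs. the exponential prediction e^{-t}.
    print(t, (tr2 >= t).mean(), np.exp(-t))
```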
A different global test of the wrapped zeta data has to do with the covariance structure of the counting function. Fix $\theta \in [0,1]$, and let $I(\theta)$ be the quarter-circle arc from $e^{2\pi i\theta}$ to $e^{2\pi i(\theta+\frac14)}$. Let $U$ be a random matrix in $\mathbb{U}(n)$, and let $A(\theta)$ be the number of eigenvalues of $U$ in $I(\theta)$. Finally, let $R(\theta) = \operatorname{corr}(A(\theta), A(0))$. An analytic expression for $R(\theta)$ was found in [ ]; the comparison with the empirical correlations of the wrapped zeta data is shown in Figure 7.4.

[Figure 7.4: Estimated empirical values of $R(\theta)$ for the wrapped zeta data and the random matrix prediction. Reprinted by permission from IOP Publishing.]

These pictures neatly convey the message of the extensive testing presented in [ ]: numerical and statistical testing bear out the connection between zeta zeroes and random matrix eigenvalues amazingly well.

Finally, recall the conjecture of Keating and Snaith: that the distribution of values of the characteristic polynomial of an $n \times n$ random unitary matrix is a good model for the distribution of values of $\zeta$ at height $T$, where $n = \log\left(\frac{T}{2\pi}\right)$. The analytic evidence presented for the conjecture in Section 7.2 is the agreement between Selberg's central limit theorem for the logarithm of the zeta function and Keating and Snaith's central limit theorem for the logarithm of the characteristic polynomial of a random unitary matrix. The numerical data here are striking: the random matrix prediction of the value distribution of $\log\zeta\left(\frac12+it\right)$ by the distribution of $\log(Z(U,0))$ for a random matrix $U$ is better than the Gaussian approximation given by the Selberg central limit theorem! In the figures below (reproduced from [ ]), the value distributions of the real and imaginary parts of $\log(Z(U,0))$ are computed for $U$ distributed according to Haar measure on $\mathbb{U}(42)$ and compared with the value distributions (computed by Odlyzko) of the real and imaginary parts of $\log\zeta\left(\frac12+it\right)$ for $t$ near the $10^{20}$th zero ($t \approx 1.5\times 10^{19}$). The Gaussian distributions predicted by the two central limit theorems are also plotted for comparison.

[Figure 7.5: The value distributions of $\operatorname{Re}\log(Z(U,0))$ with $n = 42$, of $\operatorname{Re}\log\zeta\left(\frac12+it\right)$ near the $10^{20}$th zero, and a Gaussian density, all scaled to have variance 1. Reprinted by permission from Springer Nature.]

[Figure 7.6: The value distributions of $\operatorname{Im}\log(Z(U,0))$ with $n = 42$, of $\operatorname{Im}\log\zeta\left(\frac12+it\right)$ near the $10^{20}$th zero, and a Gaussian density, all scaled to have variance 1. Reprinted by permission from Springer Nature.]

Notes and References

For the reader looking to delve into the random matrix approach to number theory, the volume Recent Perspectives in Random Matrix Theory and Number Theory edited by Mezzadri and Snaith is an excellent starting point; its lectures were specifically intended to be accessible both to researchers coming from number theory and to those coming from random matrix theory (and to students just getting their feet wet in both!).

Following Montgomery's work on pair correlations, the natural next step was to consider $k$-point correlations for general $k$. This was first done by Rudnick and Sarnak (in the context of more general $L$-functions, with the Riemann zeta function as a special case). The problem was taken up more recently by Conrey and Snaith [ ], who introduced a new approach to the correlations on the random matrix side, which allowed for a more transparent comparison with the zeta zeroes.

In [ ], Hughes, Keating, and O'Connell extended Theorem 7.4 by considering the entire random process $Z(U,\theta)$.
They showed that if $Y_n(\theta) = \frac{1}{\sigma}\operatorname{Im}\log(Z(U_n,\theta))$ with $2\sigma^2 = \log(n)$, then the finite-dimensional distributions of $Y_n(\theta)$ converge to those of $Y(\theta)$, a centered Gaussian process with covariance function $\mathbb{E}[Y(s)Y(t)] = \mathbf{1}_{s=t}$. This allowed them to recover, via the argument principle, the covariances of the eigenvalue counting functions on arcs first found by Wieand [ ]. In the same paper, they proved large deviations results for $\operatorname{Re}\log(Z(U,\theta))$ and $\operatorname{Im}\log(Z(U,\theta))$.

In remarkable recent work, Chhaïbi, Najnudel, and Nikeghbali have gone considerably further, showing that, after rescaling, the characteristic polynomial of a random unitary matrix converges almost surely to a random analytic function whose zeroes form a determinantal point process on the real line, governed by the sine kernel. This in turn suggested new conjectures on the value distribution of the zeta function, in an extension of the ideas that led to the Keating–Snaith moment conjecture.

Finally, although the connection between zeta zeros and the eigenvalues of random matrices remains unproved, there is a proven connection in the "function field" case, i.e., for zeta functions of curves over finite fields. Katz and Sarnak showed that the zeros of almost all such zeta functions are described by the random matrix model. See also the expository article [ ].

Index

Bakry–Émery criterion, 141
Borel's lemma, 39
characteristic polynomial, 185: central limit theorem, 186; moments, 187
characters, 30: orthogonality, 31
characters of the classical compact groups, 33
concentration of measure, 132
correlation functions, 75
counting function, see eigenvalue counting function
determinantal point process, 75, 93
Dvoretzky's theorem, 166
eigenvalue counting function, 93
empirical spectral measure, 108: concentration, 153; convergence, 109; large deviations, 125
entropy, 134
free entropy, 120
geodesic distance, 12
Haar measure, 14–24
Herbst argument, 135
Hilbert–Schmidt inner product, 11
irreducible characters of the classical compact groups, 33
Johnson–Lindenstrauss lemma, 161
joint intensities, 75
kernels of eigenvalue processes, 76, 83
large deviations, 119
Lie algebras of the classical compact groups, 26
linear eigenvalue statistics, 110
logarithmic energy, 120
logarithmic Sobolev inequality, 134: constants on the classical compact groups, 152
moments, 60, 84
Montgomery's conjecture, 184
Poincaré limit, 39
powers of random matrices, 89, 111, 153
quaternions, 8
representation, 28
Ricci curvature, 146
Schur functions, 33: generalized, 34
sine kernel process, 98
spectral measure, see empirical spectral measure
Stein's method, 59
submatrices: central limit theorem, 42–43; density, 43
trace inner product, 11
truncations: central limit theorem, 42–43; density, 43
Weyl integration formula: orthogonal and symplectic groups, 71; unitary group, 65
Wishart matrix, 44
Young diagram, 32

References

Greg W. Anderson, Alice Guionnet, and Ofer Zeitouni. An introduction to random matrices, volume 118 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010.
Cécile Ané, Sébastien Blachère, Djalil Chafaï, Pierre Fougères, Ivan Gentil, Florent Malrieu, Cyril Roberto, and Grégory Scheffer. Sur les inégalités de Sobolev logarithmiques, volume 10 of Panoramas et Synthèses. Société Mathématique de France, Paris, 2000. With a preface by Dominique Bakry and Michel Ledoux.
Milla Anttila, Keith Ball, and Irini Perissinaki. The central limit problem for convex bodies. Trans. Amer. Math. Soc., 355(12):4723–4735, 2003.
Shiri Artstein-Avidan, Apostolos Giannopoulos, and Vitali D. Milman. Asymptotic geometric analysis. Part I, volume 202 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2015.
Guillaume Aubrun, Stanisław Szarek, and Elisabeth Werner. Hastings's additivity counterexample via Dvoretzky's theorem. Comm. Math. Phys., 305(1):85–97, 2011.
Z. D. Bai. Methodologies in spectral analysis of large-dimensional random matrices, a review. Statist. Sinica, 9(3):611–677, 1999. With comments by G. J. Rodgers and Jack W. Silverstein, and a rejoinder by the author.
Jinho Baik and Eric M. Rains. Algebraic aspects of increasing subsequences. Duke Math. J., 109(1):1–65, 2001.
D. Bakry and Michel Émery. Diffusions hypercontractives. In Séminaire de probabilités, XIX, 1983/84, volume 1123 of Lecture Notes in Math., pages 177–206. Springer, Berlin, 1985.
Dominique Bakry, Ivan Gentil, and Michel Ledoux. Analysis and geometry of Markov diffusion operators, volume 348 of Grundlehren der Mathematischen Wissenschaften. Springer, Cham, 2014.
Christian Berg, Jens Peter Reus Christensen, and Paul Ressel. Harmonic analysis on semigroups, volume 100 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1984. Theory of positive definite and related functions.
Rajendra Bhatia. Matrix analysis, volume 169 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1997.
Gordon Blower. Random matrices: high dimensional phenomena, volume 367 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2009.
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart. Concentration inequalities: a nonasymptotic theory of independence. Oxford University Press, Oxford, 2013. With a foreword by Michel Ledoux.
Silouanos Brazitikos, Apostolos Giannopoulos, Petros Valettas, and Beatrice-Helen Vritsiou. Geometry of isotropic convex bodies, volume 196 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2014.
Ulrich Brehm and Jürgen Voigt. Asymptotics of cross sections for convex bodies. Beiträge Algebra Geom., 41(2):437–454, 2000.
Daniel Bump. Lie groups, volume 225 of Graduate Texts in Mathematics. Springer, New York, second edition, 2013.
Daniel Bump, Persi Diaconis, and Joseph B. Keller. Unitary correlations and the Fejér kernel. Math. Phys. Anal. Geom., 5(2):101–123, 2002.
Sourav Chatterjee and Elizabeth Meckes. Multivariate normal approximation using exchangeable pairs. ALEA Lat. Am. J. Probab. Math. Stat., 4:257–283, 2008.
Réda Chhaïbi, Joseph Najnudel, and Ashkan Nikeghbali. The circular unitary ensemble and the Riemann zeta function: the microscopic landscape and a new approach to ratios. Invent. Math., 207(1):23–113, 2017.
Benoît Collins. Moments and cumulants of polynomial random variables on unitary groups, the Itzykson–Zuber integral, and free probability. Int. Math. Res. Not., (17):953–982, 2003.
Benoît Collins and Piotr Śniady. Integration with respect to the Haar measure on unitary, orthogonal and symplectic group. Comm. Math. Phys., 264(3):773–795, 2006.
Benoît Collins and Michael Stolz. Borel theorems for random matrices from the classical compact symmetric spaces. Ann. Probab., 36(3):876–895, 2008.
J. B. Conrey and N. C. Snaith. In support of n-correlation. Comm. Math. Phys., 330(2):639–653, 2014.
Marc Coram and Persi Diaconis. New tests of the correspondence between unitary eigenvalues and the zeros of Riemann's zeta function. J. Phys. A, 36(12):2883–2906, 2003. Random matrix theory.
S. Dallaporta. Eigenvalue variance bounds for Wigner and covariance random matrices. Random Matrices Theory Appl., 1(3):1250007, 28, 2012.
S. Dallaporta. Eigenvalue variance bounds for covariance matrices. Markov Process. Related Fields, 21(1):145–175, 2015.
Anthony D'Aristotile, Persi Diaconis, and Charles M. Newman. Brownian motion and the classical groups. In Probability, statistics and their applications: papers in honor of Rabi Bhattacharya, volume 41 of IMS Lecture Notes Monogr. Ser., pages 97–116. Inst. Math. Statist., Beachwood, OH, 2003.
Kenneth R. Davidson and Stanislaw J. Szarek. Local operator theory, random matrices and Banach spaces. In Handbook of the geometry of Banach spaces, Vol. I, pages 317–366. North-Holland, Amsterdam, 2001.
A. P. Dawid. Some matrix-variate distribution theory: notational considerations and a Bayesian application. Biometrika, 68(1):265–274, 1981.
Amir Dembo and Ofer Zeitouni. Large deviations techniques and applications, volume 38 of Applications of Mathematics (New York). Springer-Verlag, New York, second edition, 1998.
Persi Diaconis. Group representations in probability and statistics, volume 11 of Institute of Mathematical Statistics Lecture Notes—Monograph Series. Institute of Mathematical Statistics, Hayward, CA, 1988.
Persi Diaconis and Steven N. Evans. Linear functionals of eigenvalues of random matrices. Trans. Amer. Math. Soc., 353(7):2615–2633, 2001.
Persi Diaconis and Peter J. Forrester. Hurwitz and the origins of random matrix theory in mathematics. Random Matrices Theory Appl., 6(1):1730001, 26, 2017.
Persi Diaconis and David Freedman. Asymptotics of graphical projection pursuit. Ann. Statist., 12(3):793–815, 1984.
Persi Diaconis and David Freedman. A dozen de Finetti-style results in search of a theory. Ann. Inst. H. Poincaré Probab. Statist., 23(2, suppl.):397–423, 1987.
Persi Diaconis and Mehrdad Shahshahani. On the eigenvalues of random matrices. J. Appl. Probab., 31A:49–62, 1994. Studies in applied probability.
Persi W. Diaconis, Morris L. Eaton, and Steffen L. Lauritzen. Finite de Finetti theorems in linear models and multivariate analysis. Scand. J. Statist., 19(4):289–315, 1992.
Christian Döbler and Michael Stolz. Stein's method and the multivariate CLT for traces of powers on the classical compact groups. Electron. J. Probab., 16:no. 86, 2375–2405, 2011.
Richard M. Dudley. Real analysis and probability. The Wadsworth & Brooks/Cole Mathematics Series. Wadsworth & Brooks/Cole Advanced Books & Software, Pacific Grove, CA, 1989.
Aryeh Dvoretzky. Some results on convex bodies and Banach spaces. In Proc. Internat. Sympos. Linear Spaces (Jerusalem, 1960), pages 123–160. Jerusalem Academic Press, Jerusalem; Pergamon, Oxford, 1961.
Morris L. Eaton. Group invariance applications in statistics, volume 1 of NSF-CBMS Regional Conference Series in Probability and Statistics. Institute of Mathematical Statistics, Hayward, CA; American Statistical Association, Alexandria, VA, 1989.
Alan Edelman and N. Raj Rao. Random matrix theory. Acta Numer., 14:233–297, 2005.
B. Fleury, O. Guédon, and G. Paouris. A stability result for mean width of Lp-centroid bodies. Adv. Math., 214(2):865–877, 2007.
P. J. Forrester. Log-gases and random matrices, volume 34 of London Mathematical Society Monographs Series. Princeton University Press, Princeton, NJ, 2010.
P. Frankl and H. Maehara. The Johnson–Lindenstrauss lemma and the sphericity of some graphs. J. Combin. Theory Ser. B, 44(3):355–362, 1988.
Jason Fulman. Stein's method, heat kernel, and traces of powers of elements of compact Lie groups. Electron. J. Probab., 17:no. 66, 16, 2012.
William Fulton and Joe Harris. Representation theory: a first course, volume 129 of Graduate Texts in Mathematics, Readings in Mathematics. Springer-Verlag, New York, 1991.
Yehoram Gordon. Some inequalities for Gaussian processes and applications. Israel J. Math., 50(4):265–289, 1985.
M. Gromov. Paul Lévy's isoperimetric inequality. I.H.É.S. preprint, 1980.
M. Gromov and V. D. Milman. A topological application of the isoperimetric inequality. Amer. J. Math., 105(4):843–854, 1983.
Fumio Hiai and Dénes Petz. A large deviation theorem for the empirical eigenvalue distribution of random unitary matrices. Ann. Inst. H. Poincaré Probab. Statist., 36(1):71–85, 2000.
Fumio Hiai and Dénes Petz. The semicircle law, free random variables and entropy, volume 77 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2000.
Roger A. Horn and Charles R. Johnson. Topics in matrix analysis. Cambridge University Press, Cambridge, 1994. Corrected reprint of the 1991 original.
Roger A. Horn and Charles R. Johnson. Matrix analysis. Cambridge University Press, Cambridge, second edition, 2013.
J. Ben Hough, Manjunath Krishnapur, Yuval Peres, and Bálint Virág. Determinantal processes and independence. Probab. Surv., 3:206–229, 2006.
C. P. Hughes, J. P. Keating, and Neil O'Connell. On the characteristic polynomial of a random unitary matrix. Comm. Math. Phys., 220(2):429–451, 2001.
C. P. Hughes and Z. Rudnick. Mock-Gaussian behaviour for linear statistics of classical compact groups. J. Phys. A, 36(12):2919–2932, 2003. Random matrix theory.
T. Jiang and Y. Ma. Distances between random orthogonal matrices and independent normals. Preprint, 2017.
Tiefeng Jiang. How many entries of a typical orthogonal matrix can be approximated by independent normals? Ann. Probab., 34(4):1497–1529, 2006.
Kurt Johansson. On random matrices from the compact classical groups. Ann. of Math. (2), 145(3):519–545, 1997.
William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), volume 26 of Contemp. Math., pages 189–206. Amer. Math. Soc., Providence, RI, 1984.
Nicholas M. Katz and Peter Sarnak. Random matrices, Frobenius eigenvalues, and monodromy, volume 45 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 1999.
Nicholas M. Katz and Peter Sarnak. Zeroes of zeta functions and symmetry. Bull. Amer. Math. Soc. (N.S.), 36(1):1–26, 1999.
J. P. Keating, F. Mezzadri, and B. Singphu. Rate of convergence of linear functions on the unitary group. J. Phys. A, 44(3):035204, 27, 2011.
J. P. Keating and N. C. Snaith. Random matrix theory and ζ(1/2 + it). Comm. Math. Phys., 214(1):57–89, 2000.
R. C. King. Modification rules and products of irreducible representations of the unitary, orthogonal, and symplectic groups. J. Mathematical Phys., 12:1588–1598, 1971.
B. Klartag. A central limit theorem for convex sets. Invent. Math., 168(1):91–131, 2007.
B. Klartag. Power-law estimates for the central limit theorem for convex sets. J. Funct. Anal., 245(1):284–310, 2007.
N. S. Landkof. Foundations of modern potential theory. Springer-Verlag, New York–Heidelberg, 1972. Translated from the Russian by A. P. Doohovskoy, Die Grundlehren der mathematischen Wissenschaften, Band 180.
Michel Ledoux. Concentration of measure and logarithmic Sobolev inequalities. In Séminaire de Probabilités, XXXIII, volume 1709 of Lecture Notes in Math., pages 120–216. Springer, Berlin, 1999.
Michel Ledoux. The concentration of measure phenomenon, volume 89 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2001.
Dudley E. Littlewood. The Theory of Group Characters and Matrix Representations of Groups. Oxford University Press, New York, 1940.
Odile Macchi. The coincidence approach to stochastic point processes. Advances in Appl. Probability, 7:83–122, 1975.
E. Meckes and M. Meckes. Rates of convergence for empirical spectral measures: a soft approach. In E. Carlen, M. Madiman, and E. Werner, editors, Convexity and Concentration, The IMA Volumes in Mathematics and its Applications, pages 157–181. Springer-Verlag, New York, 2017.
Elizabeth Meckes. Linear functions on the classical matrix groups. Trans. Amer. Math. Soc., 360(10):5355–5366, 2008.
Elizabeth Meckes. Projections of probability distributions: a measure-theoretic Dvoretzky theorem. In Geometric aspects of functional analysis, volume 2050 of Lecture Notes in Math., pages 317–326. Springer, Heidelberg, 2012.
Elizabeth S. Meckes and Mark W. Meckes. Concentration and convergence rates for spectral measures of random matrices. Probab. Theory Related Fields, 156(1-2):145–164, 2013.
Elizabeth S. Meckes and Mark W. Meckes. Spectral measures of powers of random matrices. Electron. Commun. Probab., 18:no. 78, 13, 2013.
Madan Lal Mehta. Random matrices, volume 142 of Pure and Applied Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, third edition, 2004.
F. Mezzadri and N. C. Snaith, editors. Recent perspectives in random matrix theory and number theory, volume 322 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2005.
Francesco Mezzadri. How to generate random matrices from the classical compact groups. Notices Amer. Math. Soc., 54(5):592–604, 2007.
V. D. Milman. A new proof of A. Dvoretzky's theorem on cross-sections of convex bodies. Funkcional. Anal. i Priložen., 5(4):28–37, 1971.
Vitali D. Milman and Gideon Schechtman. Asymptotic theory of finite-dimensional normed spaces, volume 1200 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1986. With an appendix by M. Gromov.
H. L. Montgomery. The pair correlation of zeros of the zeta function. Pages 181–193, 1973.
Robb J. Muirhead. Aspects of multivariate statistical theory. John Wiley & Sons, Inc., New York, 1982. Wiley Series in Probability and Mathematical Statistics.
Angela Pasquale. Weyl's integration formula for U(N). Based on an introductory lecture delivered at the DMV Seminar "The Riemann Zeta Function and Random Matrix Theory", October 2000, Oberwolfach, Germany. Available online at rudnick/dmv/Weyl.ps.
Leonid Pastur and Mariya Shcherbina. Eigenvalue distribution of large random matrices, volume 171 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 2011.
E. M. Rains. High powers of random elements of compact Lie groups. Probab. Theory Related Fields, 107(2):219–241, 1997.
E. M. Rains. Increasing subsequences and the classical groups. Electron. J. Combin., 5:Research Paper 12, 9, 1998.
Eric M. Rains. Images of eigenvalue distributions under power maps. Probab. Theory Related Fields, 125(4):522–538, 2003.
Arun Ram. Characters of Brauer's centralizer algebras. Pacific J. Math., 169(1):173–200, 1995.
Zeév Rudnick and Peter Sarnak. Zeros of principal L-functions and random matrix theory. Duke Math. J., 81(2):269–322, 1996. A celebration of John F. Nash, Jr.
Gideon Schechtman. A remark concerning the dependence on ε in Dvoretzky's theorem. In Geometric aspects of functional analysis (1987–88), volume 1376 of Lecture Notes in Math., pages 274–277. Springer, Berlin, 1989.
A. Soshnikov. Determinantal random point fields. Uspekhi Mat. Nauk, 55(5(335)):107–160, 2000.
Alexander Soshnikov. The central limit theorem for local linear statistics in classical compact groups and related combinatorial identities. Ann. Probab., 28(3):1353–1370, 2000.
Charles Stein. The accuracy of the normal approximation to the distribution of the traces of powers of random orthogonal matrices. Technical Report No. 470, Stanford University Department of Statistics, 1995.
Kathryn Stewart. Total variation approximation of random orthogonal matrices by Gaussian matrices. Preprint, 2017.
Michael Stolz. On the Diaconis–Shahshahani method in random matrix theory. J. Algebraic Combin., 22(4):471–491, 2005.
V. N. Sudakov. Typical distributions of linear functionals in finite-dimensional spaces of high dimension. Dokl. Akad. Nauk SSSR, 243(6):1402–1405, 1978.
Michel Talagrand. The generic chaining: upper and lower bounds of stochastic processes. Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2005.
Santosh S. Vempala. The random projection method, volume 65 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, Providence, RI, 2004. With a foreword by Christos H. Papadimitriou.
Cédric Villani. Optimal transport: old and new, volume 338 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 2009.
Frank W. Warner. Foundations of differentiable manifolds and Lie groups, volume 94 of Graduate Texts in Mathematics. Springer-Verlag, New York–Berlin, 1983. Corrected reprint of the 1971 edition.
Fred B. Weissler. Logarithmic Sobolev inequalities and hypercontractive estimates on the circle. J. Funct. Anal., 37(2):218–234, 1980.
Hermann Weyl. The Classical Groups. Their Invariants and Representations. Princeton University Press, Princeton, N.J., 1939.
K. Wieand. Eigenvalue distributions of random unitary matrices. Probab. Theory Related Fields, 123(2):202–224, 2002.
P. Wojtaszczyk. Banach spaces for analysts, volume 25 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1991.
In Vitro Techniques and Measurements of Phage Characteristics That Are Important for Phage Therapy Success

Tea Glonti 1 and Jean-Paul Pirnay 1

1 Laboratory for Molecular and Cellular Technology, Queen Astrid Military Hospital, B-1120 Brussels, Belgium; [email protected]
Correspondence: [email protected]

Academic Editors: Nina Chanishvili, Nina V. Tikunova, and Petar Knezevic

Viruses. 2022 Jul 7;14(7):1490. doi: 10.3390/v14071490. Received 2022 Mar 31; accepted 2022 Jul 5. PMCID: PMC9323186; PMID: 35891470.

© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Abstract

Validated methods for phage selection, host range expansion, and lytic activity determination are indispensable for maximizing phage therapy outcomes. In this review, we describe some relevant methods, highlighting their advantages and disadvantages, and categorize them as preliminary or confirmatory methods where appropriate. Experimental conditions, such as the composition and consistency of culture media, have an impact on bacterial growth and, consequently, on phage propagation and the selection of phage-resistant mutants. Phages need to be tested under a range of experimental conditions to fully reveal their characteristics and their potential in view of their future use in therapy. Phage lytic activity or virulence should be considered a result of the phage, its host, and intracellular/environmental factors, including the ability of the phage to recognize receptors on the bacterial cell surface.
In vitro quantitative and qualitative measurements of phage characteristics, further validated by in vivo experiments, could be incorporated into a single system or mathematical model/formula that could predict the potential for a successful outcome of clinical applications.

Keywords: phage isolation; phage selection; phage virulence; phage activity; phage therapy

1. Introduction

1.1. What Is a Phage?

A century ago, bacteriophages (phages) were defined as "devourers of bacteria" or "obligate intracellular parasites". Soon after their discovery, and still today in post-Soviet states and their European satellites, they were used as antibacterial agents in medicine, but in the rest of Europe and the United States (US) they were relegated to the background upon the marketing of antibiotics in the 1940s. However, even today, we still have not fully grasped the complex biology of phages and their interactions with both their bacterial hosts and the mammalian immune system.

Upon their rediscovery by Western medicine, phages were classified as medicinal products (European Union) or drugs (US), without a dedicated framework being provided for their development, marketing, and clinical application. As such, regulators underappreciated a number of peculiarities that phages have with respect to conventional antibacterials, such as their narrow host specificity and their antagonistic coevolution with these hosts. In addition, phages are increasingly being positioned as nano-carriers, delivering an engineered or armed DNA/RNA bactericidal payload, while replicating and evolving in and with bacteria.

Phages are the most abundant and diverse life-like entities on Earth, where they are found in almost all ecospheres, such as seas, rivers, and soil, and within other organisms, including humans. They control the abundance of their bacterial hosts and, as such, also impact global energy and nutrient cycles. Phages can also affect host diversity, e.g., by "killing the winner", which keeps competitively dominant species or populations "in check". As such, they may be employed for the biological control of environments [8,9]. It is exactly this characteristic of phages that should be considered when "domesticating" them to control infecting or contaminating bacteria in patients, agriculture, or food processing. Today, a number of phage products are used in the agro-food industry, for instance as bio-sanitation agents on ready-to-eat foods.

1.2. What Is Phage Virulence?

The words "virulence" and "virulent" come from the Latin word virulentus, meaning "full of poison". They are used to indicate the relative capacity of a "microbe" (bacterium, fungus, or virus) to cause disease or, in classical microbiological manuals, to describe a degree of pathogenicity. Translated to phages, virulence could thus be defined as a degree of lytic (causing or resulting from lysis) activity under given conditions. In the specialized scientific literature, however, "phage virulence" is often employed to indicate phages that undertake lytic rather than lysogenic cycles [12,13]. Lytic (virulent) phages possess the ability to self-replicate and a high specificity for their target bacteria. Gill et al. apply the term "virulence" to indicate the potential of a phage strain to drive specific bacterial cultures to extinction (or, at least, to very low densities). Phage virulence can also be defined as the ability of a phage to control the growth of its host in culture (culture clearing), and may also be an indicator of phage utility.
Sometimes it is linked to the phage's burst size, as a prerequisite for productive-infection treatment. In fact, virulence is not a distinct phage characteristic, but a complex, dynamic, and variable phenomenon that includes both phage and bacterial factors. Indeed, it would be difficult to consider phage virulence as a single parameter, as phage–host interactions can range from the partial to the total elimination of the targeted bacterial population. At the same time, complete lysis depends on the host/population and on specific conditional factors as well. Phage virulence should be defined as the set of phage characteristics and ambient factors that jointly support phage lytic activity or, in other words, the relative capacity to produce dynamic and high levels of bacterial lysis. Phage virulence levels could be raised by efficiently controlling phage/bacteria interactions, e.g., under rationally developed in vitro conditions.

1.3. The Challenge

Nowadays, experts increasingly agree that phages will not replace antibiotics, and that they could sometimes be more effective when used in combination with (sub-inhibitory concentrations of) antibiotics. For instance, combinations of phages and antibiotics were shown to be more potent in killing Pseudomonas aeruginosa than either one acting alone. Phages could thus be considered supportive therapeutics that facilitate the management of relevant infectious diseases or complications.

The lack of a basic understanding of phage biology is considered to be one of the causes of phage therapy failures in the early days. Because bacteria represent an environmental community for, and a hosting facility to, phages, fundamental studies analyzing the interactions between phages and bacteria, and predicting the dynamics between phage and bacterial populations, are of paramount importance [25,26,27] for developing practical phage therapy approaches.

Today's laboratory facilities and materials are more developed than those of Félix d'Hérelle's time. Glass tubes and Pasteur pipettes, for instance, have been replaced with Eppendorf tubes or 96-well microtiter plates and multichannel micropipettes. Notwithstanding the modernization of laboratory equipment, there are no significant differences in the techniques used for phage isolation and propagation, the development of phage cocktails, or the (large-scale) production of therapeutic phage preparations.

In 1930, d'Hérelle recognized that the most effective therapeutic phages could be isolated from patients who had recovered from infection. He also claimed that more than 50 bacterial strains should be used in phage isolation and enrichment methods. Interestingly, adapted versions of two of d'Hérelle's phage cocktail formulations (Pyophage and Intestiphage) are still predominantly used in Georgia and Russia today.

It is very important to balance the growth rates of phages and bacteria, creating optimal conditions for their productive interactions. In 1966, Thomas and Abelson observed that for optimal phage propagation, bacterial cultures should be "growing logarithmically at the time of infection". In 1970, Sargeant demonstrated the importance of a good supply of living bacteria and of aeration for obtaining a large quantity of phages. In 1980, David et al. used a Mycobacterium smegmatis "surrogate" strain for the propagation of M. tuberculosis phages to improve the practicality of procedures and to comply with biosafety requirements.
In 1992, Yin and McCaskill observed the importance of maintaining the balance between the growth rates of bacteria and phage. In one particular case, they showed that “slowing down” phage plaque formation (the phage particle diffusion rate) to match the pace of bacterial growth resulted in higher phage concentrations, expressed in plaque-forming units (pfu)/mL. Notwithstanding these observations, we are still a long way from a full understanding of the etiology of phage/bacteria interactions. Several recent review papers have considered the existing skills and expertise with regard to phage research and the medical use of phages. There is a consensus that screening and selecting the right phages is of key importance for achieving successful therapeutic outcomes. Some suggest that the impact of phages on bacterial biofilms could be crucial for understanding both phage and bacterial ecology. However, the challenge is that there are no validated in vitro methods [31,32] to determine the phage characteristics that are important for predicting in vivo therapeutic efficacy or performance, for instance, in view of future clinical trials, which are desperately needed both to prove phage product efficacy and to determine the most effective phage therapy protocols. This review brings together relevant methods for phage isolation, detection, characterization, and selection, including phage activity determination, host range evaluation and expansion, and the translation of in vitro results to clinical practice. We will mostly focus on the practical side of these methods (technical protocols), including some inputs and interpretations based on our personal experience, as well as the advantages and disadvantages of the methods with regard to developing more standardized approaches.

2. In Vitro and In Vivo Phage Detection and Phage Activity Testing

In this section, we will discuss a number of methods that are commonly used for in vitro phage lytic activity determination, including phage detection and enumeration testing and the in vivo translation of results (Diagram 1).

2.1. Phage Isolation Enrichment Method and Bacteria Hooks

Bacterial strains used for the “fishing” or detection of new phages are referred to here as “bacteria hooks”. For the isolation of potentially new phages, the well-known “phage enrichment” (PE) method is used. It was first developed by Winogradsky and Beijerinck and later adapted by Jassim et al. and Jensen et al. [16,28]. An updated version of the protocol was described by Twest and Kropinski and by Merabishvili et al., both in 2009. PE sometimes implies involving a larger bacterial panel (BP) of potential “bacteria hooks”, as this facilitates the rapid isolation of polyvalent phages from the environment. The use of an enrichment BP increases the possibility of catching a larger variety of phages in a given sampling source and can also increase phage titers, which facilitates the detection of potentially new phages. The best practice is to develop an enrichment BP for each bacterial species separately (homogeneous matrix), but a heterogeneous approach can also be used. A homogeneous enrichment BP should ideally consist of:

- Bacteria hooks with hosts covering the wide range of receptors needed to hook the largest variety of potential phages. This requires having a readily available panel of strains with known genetic profiles. Every newly isolated phage can be further studied, e.g., to determine its biology;
- Bacteria hooks of particular interest.
In this case, bacterial strains are selected based on specific features, such as antibiotic resistance, and it is not necessary to have an exhaustive list of characteristics or to know their genomic profile. The strains could be objects of further scientific study;
- Bacteria hooks consisting of working host strains, i.e., strains that have already been adapted/approved for phage propagation/production, which speed up downstream phage adaptation/training procedures. Newly isolated phages could, of course, also be propagated and trained in bacterial strains other than the ones used for isolation.

A scaled-up version of the PE approach is described by Olsen et al. as part of a high-throughput screening (HiTS) method for phages. They propose using 96-deep-well plates, which allows for the simultaneous handling of a large range of environmental samples (water). One single host is used in each well containing 1.5 mL of water sample, and the method is oriented towards predominantly lytic and easily cultivable phages. An outline of a PE method that uses a large number (96 or 384) of bacteria hooks is described in Appendix A, Figure A1. The technique is less time-, material-, and labor-consuming. It uses a large number of bacteria hooks in a relatively small volume and multichannel pipetting. This approach also makes it easier to contain the infectious material, as advised for biosafety and biosecurity reasons. Water (sewage, river, lake) or liquefied soil and clinical samples/materials can be used as potential sampling sources for phages. In short:

- Two times or ten times concentrated broth medium is typically added to the phage sampling source to ensure sufficient nutrition. When using large sampling volumes, it is rational to use more concentrated (up to 20 times) broth media, which generate a smaller volume of end product and thus make the handling of infectious material easier and safer;
- It is preferable not to centrifuge/filter the sampling source, unless it contains large contaminants and/or components that will interfere with the incubation process. It is assumed that conditions close to those in the natural source environment will facilitate phage/bacteria interactions and the isolation of phages;
- Using lower temperatures (25–28 °C) than those routinely used in clinical microbiology (30–37 °C) [35,36] and longer incubation times, for instance 24 h (where commonly 4–6 h is enough for phage propagation in liquid media), is more favorable for PE. However, long incubation periods could also have an adverse effect on phage particles, because the ratio of emerging phages to bacteria (those initially present in the sample and the added bacteria hooks) in the enrichment propagation mixture is not determined beforehand, so consistent lysis without early (e.g., <24 h) growth of phage-resistant bacterial mutants, or without antagonistic phage activity, cannot be guaranteed. In addition, some bacterial products could interfere with phage propagation or with the demonstration of phage activity;
- Using 96- or 384-well microtiter plates for the incubation of a large number of inoculums of bacteria hooks is more convenient. The bacterial suspensions are collected from each well using a multichannel pipette (Appendix A, Figure A1);
- After incubation, the potential phage lysate (PL) is centrifuged and filtered. There is no need to use chloroform, as it could reduce the infectivity of the viruses or inactivate some phages, and could also lead to the induction of temperate phages.
Using chloroform is a tradition that dates back to the time when bacterial filters were not available and the procedure itself was not enough to ensure the absolute removal of bacterial contamination. Adding the right amount (0.5–2% v/v) of chloroform to the PL at +4 °C (temperature shock) kills the remaining intact bacterial cells, including lytically phage-infected bacteria, and could thus result in substantially increased phage titers. Chloroform was also used for the medium-term (3–12 months) storage of phage stocks, as it prevented bacterial growth. In addition to the obvious laboratory personnel safety issues (hazardous chemicals), it is not recommended to use chloroform for phage preparations that will be used in clinical treatments;
- The obtained PL could be used further as a second source for another enrichment BP with different bacteria hooks.

It is considered a disadvantage of the PE method that faster-growing phages will outcompete phages with slower-growing populations, masking the appearance of potentially interesting phages (e.g., with broader host ranges) [15,42].

Phage Detection—Preliminary Tests

Generally, the PE lysate is first tested against the bacteria hooks used in the PE method, but this could also be carried out using any other relevant BP, for instance, containing strains from available bacterial culture collections. Different methods are used for the detection of new phages in the lysates:

(i) The “direct spot test” (here, we call it a technique), in which only one dilution of the phage lysate is spotted on bacteria grown directly on solid agar. It is described below;
(ii) The “spot test” (we will further use this name for a technique), in which one dilution of the phage lysate is spotted on a film of bacteria growing on a “top agar” surface. This technique is also called “spot testing” or “direct spot”;
(iii) The “lysis profile assay” or, as we call it here, the “phage liquid culturing” (PLC) method, which implies the liquid culture of phage/bacteria mixtures at specific dilution(s) in microtiter plates for the determination of phage susceptibility. As many as 5- to 10-fold greater numbers of bacterial test strains can be considered per microtiter plate, as compared to the conventional “spot tests” performed on petri dishes of different sizes and shapes. This results in reduced hands-on time and fewer consumables.

In the “direct spot test”, bacteria can be grown either as a series of distinct areas (streaks or spots) or as complete lawns on solid agar (without a soft agar overlay). Phage lysates are applied in the areas of expected bacterial growth. Bacteria are commonly applied in three ways:

- Several parallel streaks (“streak assay” [36,46]) of bacterial suspension(s) of particular dilution(s) are made using disposable loops (Appendix A, Figure A2). Phage lysate(s) are applied as spots on the bacterial streaks (we call this “spot-on-streak” to differentiate it from the other techniques);
- Bacterial suspensions are simply spotted in a grid. Phage lysate(s) are applied as spots on top (we call this “spot-on-spot”) (Appendix A, Figure A3);
- Bacterial suspensions are directly streaked over streaks of phages made on solid agar (we call this “streak-on-streak”) (Appendix A, Figure A4).

The first two preliminary phage detection approaches allow for the screening of large numbers of BPs and phages. The choice between them is a matter of practicality.
It is considered that the “spot-on-streak” assay (a variation of the “direct spot test”) does not allow for the evaluation of a possible emergence of bacterial phage resistance. In fact, the “streak assay” does not allow for the study of phage kinetics. However, it does allow for a qualitative assessment of the tendency towards bacterial phage resistance through the visual observation of phage-resistant mutants that emerge as individual colonies or confluent growth over the clear (lysis) zones of the spots. Bacterial colonies isolated from bacterial “over-growth” on agar plates, or liquid samples taken from PLC “re-growth”, need to be further tested to confirm that the over- or re-growth is indeed due to phage-resistant bacterial mutants. All the previously mentioned methods should be considered preliminary detection techniques, as they merely reveal bacterial lysis on agar or in liquid and do not confirm that this is the result of phage activity. As PE lysates potentially consist of different phage variants at different concentrations, possibly including rare and interesting variants at low titers, it is reasonable to continue evaluating the PE lysates without diluting them. Bacterial suspensions used in the above-mentioned methods should have a minimum concentration of 10⁴ colony-forming units (cfu)/mL, which will result in sufficient growth to reveal the activity of phages that are present at low concentrations. The “spot-on-streak” assay allows for the application of multiple phage lysates on multiple bacterial strains, at different dilutions, on one plate. Note that bacteria grow more slowly on a solid agar surface than in broth, which helps phages that are present at lower titers, or that have slower reproduction rates, to keep pace with bacterial growth and reveal themselves. Moreover, when large phage virions cannot diffuse through soft agar, they find it easier to proliferate on low-density bacterial growth directly on the solid agar surface. This is also easier to handle than modifying the soft agar method by using 0.2% (wt/vol) low-melting-point agarose to increase the diffusion of phage particles and, correspondingly, improve plaque formation. Another approach to detecting low numbers of phages is the use of sub-lethal doses of antibiotics (e.g., 2.5–3.5 mg/mL of ampicillin, depending on the agar concentration of the top layer), which helps the formation of visible plaques. Pipetting robots can be used for the “spot-testing”-based methods. A rectangular tray-plate, from SPL Life Sciences, for instance, is well suited to perform the spotting and can be fixed on the pipetting robot workstation. The advantage of this plate is that it has nearly the same dimensions (127.94 × 85.50 mm) as a 96-well microtiter plate (127.71 × 85.43 mm), which can be used as a reservoir for the phages that will be spotted. The spotting height should be adjusted correctly to avoid piercing the agar surface or splashing the drop while spotting, and thus generating aerosols and subsequent cross-contamination. In the case of the “spot-on-streak” assay, the bacterial streaks are pre-prepared, while the “spot-on-spot” method can be performed entirely by the pipetting robot (a minimal planning sketch is given below). After visual examination of the lysis zones and interpretation of the preliminary results, several phage/host bacteria combinations are selected to be submitted to confirmatory methods that are able to reveal true phage plaque formation.
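To illustrate how such robot-assisted spotting can be planned, the short Python sketch below maps phage lysates (columns) onto bacterial streaks (rows) as x/y coordinates on a rectangular plate. It is a minimal illustration only: the function name, the offsets, and the 9 mm pitch (standard 96-well spacing) are our own assumptions, not a vendor protocol or a method from the cited works.

```python
# Minimal sketch for planning a "spot-on-streak" grid on a rectangular
# tray-plate. All names, offsets, and the 9 mm pitch are illustrative
# assumptions, not a vendor protocol.
from itertools import product

def spot_plan(bacteria, phage_lysates, x0_mm=10.0, y0_mm=10.0, pitch_mm=9.0):
    """Return one x/y spotting coordinate per (bacterium, phage) pair."""
    plan = []
    for (row, bacterium), (col, phage) in product(
            enumerate(bacteria), enumerate(phage_lysates)):
        plan.append({
            "bacterium": bacterium,
            "phage": phage,
            "x_mm": x0_mm + col * pitch_mm,  # position along the streak
            "y_mm": y0_mm + row * pitch_mm,  # one streak per row
        })
    return plan

for spot in spot_plan(["host-1", "host-2"], ["PL-1", "PL-2", "PL-3"]):
    print(spot)
```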
2.2. Confirmatory Test for Phage Activity Detection/Enumeration—Plaque Formation

Plaque formation is the result of multiple rounds of infection, lysis, and release of progeny, and it varies according to the phage’s latent period, burst size, and diffusion rate, and the growth of the bacterial host; all these parameters are ultimately revealed in different plaque sizes and visibilities [17,50,51]. While a variety of methods, or technical modifications thereof, are used for plaque formation and enumeration, double agar layer (DAL) methods are the most commonly used. The main reasons for using plaque formation assays are:

- Confirmation of plaque formation;
- Study of plaque morphology;
- Enumeration (determination of pfu/mL) of phages.

The morphological appearance of the individual plaques is the first parameter that needs to be determined, as it is of great importance for:

- Phage differentiation/selection;
- Plaque purification;
- Phage virulence/lysogeny evaluation procedures.

2.2.1. Double Agar Layer (DAL) Method

The DAL method was independently developed by the Belgian microbiologist André Gratia in 1936 (“Des relations numériques entre bactéries lysogènes et particules de bactériophage”) and by Hershey, Kalmanson, and Bronfenbrenner in 1943, and was formalized later by Adams in 1959. An updated version (Double Agar Overlay Plaque Assay) was described by Kropinski et al. Here, we use the acronym SD/MP (Single Dilutions on Multiple Plates) DAL, as this method applies different single dilutions of the specific PL on several different test plates. The phage particles proliferate in the soft agar, while the bacteria are fed from the underlying solid agar. The DAL method is generally considered to be the best confirmatory test, as it allows for precise plaque enumeration and full characterization of individual plaque morphology:

- Plaque diameter;
- Level of transparency/turbidity of the plaques;
- Halo formation and size;
- Motility.

Another phage enumeration method, described by Kutter et al. (the “EOP test”, described below), Kropinski, and Mazzocco et al. (the “Drop Plaque Assay”) [33,52,53], is a modification of the SD/MP DAL method and also applies an agar overlay and serial phage dilutions, but in this case, multiple dilutions of the phage(s) are displayed on a single plate. For this method, we use the acronym MD/SP (Multiple Dilutions on Single Plates). When dealing with a high number of PLs, microtiter plates can be used to make the dilutions in both the SD/MP and MD/SP DAL approaches (Appendix A, Figure A5). The disadvantage of the MD/SP DAL method is that it is not precise enough. For more accurate counting and a better comparison of the plaque sizes and morphologies on each strain, the SD/MP DAL approach is preferred. Another disadvantage is that counting large plaques is difficult, and sometimes it might be better to count the plaques after several hours (4–6 h) instead of 18–24 h, if the growth rate of the tested phage/bacteria combination allows for that. If not, an alternative approach consists of using a higher concentration of soft agar (0.8%) and splitting the spot into several smaller drops while applying it to the agar surface (Appendix A, Figure A5). In addition, some studies have shown that particular phages only reveal clear lysis in the first two dilution spots, with no sign of lytic activity in further dilution spots. The reason for this could be an abortive infection, “lysis from without”, or some other type of bactericidal effect.
Some phages do not reveal any lytic activity when spotted [54,55] directly, or in dilutions, but do produce plaques with the SD/MP DAL method. In particular, the plates with low dilutions of PL often do not display the typically expected results (a clear plate followed by “web pattern-like” lysis zones for the consecutive dilutions), while the plates with high dilutions demonstrate clear individual plaques spread across the plate (personal experience). When only the first two dilutions reveal lytic activity, we recommend further analysis of the PL using SD/MP DAL on the same bacterial strains and/or the repetition of MD/SP on another set of bacterial strains.

2.2.2. Plaque Purification

For plaque differentiation and purification, the most commonly used and described method uses phage streaks on a bacterial lawn, in soft agar, or directly on solid agar, so as to obtain discrete plaques. We suggest the use of “phage T-streaking” (three-phase streaking), which differs from the “streak assay” used for phage detection. In the T-streaking method, the phage inoculum is streaked over the agar surface in three segments. As such, phage numbers are reduced in each segment, which results in individual phage plaques separated and distanced from each other. In the literature, different numbers of individual plaque passaging rounds are suggested. Usually, three to five passages of individual plaques are considered to be sufficient, but some authors suggest many more passages (e.g., 15–20). In our opinion (based on practical experience), more than three to five passages should indeed be performed to ensure single-plaque proliferation. Moreover, the “phage T-streaking” method should be considered a preliminary purification method, as it is not accurate enough and is used at the very beginning of the plaque purification procedure, with five or more repetitions, depending on the given PL. However, to make sure that plaques are the product of a single phage (clone), several additional steps (five or more) should be performed using the SD/MP DAL method as a confirmatory/validation test. The SD/MP DAL method allows aggregated plaques to fully spread, creating enough distance between individual plaques on the plates with the different dilutions. An even more advanced approach is to test plaque formation using different host strains in parallel, which allows for the revelation of merged plaques (plaque on plaque). To validate the plaque purification method as a confirmatory method, the following criteria need to be considered:

- The distance between the plaques (well-isolated, discrete plaques);
- Different dilutions of the phage lysate are applied;
- A certain number of passaging rounds are performed (3–5 final confirmation rounds);
- Several bacterial host strains are used;
- Several growth media are used.

For practical convenience, mini petri dishes of 35 mm diameter can be used for plaque formation/passaging assays. It is highly recommended to perform a valid plaque purification procedure before moving on to further characterization and activity evaluation.

2.2.3. Bacteria Kits for the Study of Phage Host Range and Efficiency of Plating (EOP)

Bacterial strains used for phage host range studies are referred to here as “bacteria kits”. MD/SP DAL is mostly used for the evaluation of the EOP. Therefore, MD/SP DAL is often referred to as the “EOP test”.
The EOP is the quotient of the phage titer at the terminal dilution on the test strain divided by the titer of that same phage on its isolation host, expressed as a cardinal number or percentage (a minimal worked sketch is given at the end of this subsection). As host range studies employ large numbers of bacterial strains, the MD/SP DAL method is usually preferred, as it is repeatable, more automatable, and less time-, energy-, and resource-consuming than SD/MP DAL. The concept of host range or breadth can be defined in many different ways. It is usually defined as the extent/spectrum of bacterial genera, species, and strains that can be lysed by a phage, or that support phage multiplication. The larger the variety (in terms of genetic and phenotypic profiles) of the bacterial strains that are sensitive to a particular phage or phage mixture, the broader its host range. The host range is of great importance for the selection of adequate therapeutic phages or phage mixtures. The lytic activity of candidate therapeutic phages should be tested on a large collection of relevant bacteria kits. It is appropriate to aim for the widest possible host range, preferably at the beginning of the selection process. At the same time, using a wide range of bacteria kits allows one to identify/reveal more bacterial strains sensitive to the candidate phage and accumulates more EOP data. The bacteria kits should be regularly updated with new isolates originating from relevant clinical environments and geographical areas [52,59,60]. Using bacteria kits that harbor a large genetic variety (composed of at least 100 different genetic profiles) enhances the sensitivity of the method and makes it more comprehensive as a confirmatory method. Employing widely assorted bacteria kits is important to extend our knowledge and understanding of phage biology and for the potential use of a test phage in different fields (e.g., medicine, food decontamination, or agriculture). FDA guidance on antibiotic testing requires the testing of at least 100 bacterial strains, and for some species more than 300 strains, with recent clinical isolates accounting for at least 75% of the strains. Following the FDA requirements for bacterial sensitivity testing, bacteria kits of different sizes should be set up locally (at the laboratory and country level) or at the international level. Biological Resource Centers could function as repositories for host bacteria, harboring the phenotypic and genotypic background necessary for the identification and characterization of phage activity. Important bacterial collections can be found within renowned culture collections such as the American Type Culture Collection (ATCC, accessed on 30 March 2022) or the Deutsche Sammlung von Mikroorganismen und Zellkulturen (DSMZ, accessed on 30 March 2022), and include multidrug-resistant (MDR) strains as defined by the Centers for Disease Control and Prevention (CDC, Atlanta, GA, USA). At the same time, according to the FDA and some other regulatory bodies for diagnostics (CDC, Forensics), preliminary and confirmatory tests are the main components of systematic qualitative analysis, and this kind of approach needs to be tailored to phage identification, enumeration, and activity evaluation.
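As a small worked illustration of the EOP definition given above, the Python sketch below divides the titer obtained on a test strain by the titer on the isolation host; the input titers are invented example values.

```python
# Minimal sketch of the EOP calculation defined above; the input titers
# are invented example values.

def efficiency_of_plating(titer_test_pfu_ml, titer_reference_pfu_ml):
    """EOP = phage titer on the test strain / titer on the isolation host."""
    if titer_reference_pfu_ml <= 0:
        raise ValueError("reference titer must be positive")
    return titer_test_pfu_ml / titer_reference_pfu_ml

# 2.0 x 10^8 pfu/mL on the test strain vs. 5.0 x 10^9 pfu/mL on the
# isolation host gives an EOP of 0.04, i.e., 4%.
eop = efficiency_of_plating(2.0e8, 5.0e9)
print(f"EOP = {eop:.3g} ({eop:.1%})")
```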
2.3. Phage Liquid Culturing Method and the Translation of Results

The phage liquid culturing (PLC) method is considered an alternative approach for phage host range and lytic activity measurement. In addition, the lytic activity of phages that are incapable of forming plaques in soft agar can be revealed using this technique. The PLC method, or “Appelmans’ method”, was developed in the 1920s by the Belgian surgeon René Appelmans [46,62]. Initially, the method was developed for phage titration. It uses 10-fold serial dilutions of phage in broth and, after the incubation of each dilution with the host bacteria, the phage titer is evaluated by visual observation. The dilution factor of the last “clear” tube is considered the phage titer. The modern version of this method uses microtiter plates of different size ranges and multichannel pipettes and is automatable and reproducible, generating digital optical density or colorimetric growth curves, which allows for the testing and comparison of multiple phage/bacteria combinations simultaneously. The simultaneous passaging of different combinations of phage and bacteria is the basis of the phage Host Range Extension (HRE) method that is described in the next section. The Appelmans technique can be used for different purposes:

- Phage enumeration, with the phage titer expressed as a dilution factor;
- Estimation of the multiplicity of infection (MOI), i.e., the ratio of phages to bacteria, for instance, to set the initial phage/bacterium inoculum for in vitro/in vivo studies;
- Evaluation of host range and lytic activity;
- Expansion of the host range after multiple passaging.

Nowadays, the PLC, or Appelmans, method is mostly used and described for the study of phage host range and lytic activity in view of translation to the in vivo context. Different interfering/misleading factors may arise when using this method, such as the growth of phage mutants, the “re-growth” of phage-sensitive bacteria, and the emergence of temporal immunity to phage lysis. Correspondingly, a rational approach needs to be developed when applying this technique. Phage-exposed bacterial growth curves have been extensively studied [13,14,27,47,66,67,68,69,70,71,72]. In some cases, the results were translated to in vitro/in vivo studies to evaluate the correlation between these studies (Table 1; a minimal simulation sketch of such growth dynamics follows the table). It is important to mention that the comparison of different in vitro methods (“spot test” or “direct spot test” versus PLC) is difficult, as the first is a mostly qualitative assay, even though it could be semi-quantitative under certain conditions (e.g., at low phage titer, when separate plaques are observed within one spot), while the second is a semi-quantitative method. In addition, the bacterial growth conditions (solid versus liquid media) are also different.

Table 1. PLC experiments and translation of the results.

| Years | Authors and Study | Results and Outcome |
| :---: | :---: | :---: |
| 2006 | Raya et al. studied: phage/bacteria dynamics in PLC for T-even phages in aerobic and anaerobic conditions (eclipse, latent period, and burst size at different MOIs); in vivo sheep trials evaluating phage infection control/eradication. | Translation of the results of phage/bacteria dynamics in PLC to in vivo sheep trials. Showed the importance of screening for adequate phages using a PE method prior to in vivo studies. |
| 2008 | Niu et al.: studied phage susceptibility testing of Shiga toxin-producing E. coli (STEC) isolates using a “microplate phage virulence assay”; classified STECs as extremely, highly, moderately, or minimally susceptible, based on host range at different MOIs; correlated the evaluated phage lytic capability to a set of other characteristics (based on STEC phage-typing and genotype studies). | Showed that phages exhibiting high growth rates and broad host ranges could be effective as biocontrol agents. |
| 2011 | Vandersteegen et al. described studies on Staphylococcus aureus phage infection parameters in two separate papers: first, they used a PLC method; later, they performed phage-mediated biofilm (biomass) reduction; they did not provide a detailed comparison of the results from both studies. | Demonstrated the impact of different MOIs on lytic activity dynamics; showed phage-mediated biofilm (biomass) reduction after 24 h of incubation; concluded that using the same phage/bacteria combinations and conditions resulted in comparable phage effectiveness. |
| 2011 | Cooper et al. studied the efficacy of P. aeruginosa phages with: a “qualitative streak” test; a “quantitative assay” using the Bioscreen C microbial growth analyzer. Of note, parameters such as the phage/bacteria ratio, media, and incubation temperature differed between these two methods. | Only observed similar results for phages exhibiting substantial activity; assumed that unequal experimental conditions might have contributed to the observed differences in results. |
| 2013 | Henry et al.: studied the phage lysis kinetics of eight P. aeruginosa phages; pre-tested the phages using Efficiency of Plating (EOP); experimented with an in vivo mouse model. | Demonstrated successful translation of the results of EOP and PLC kinetics to an in vivo mouse model; showed that the phages isolated directly on the targeted bacterial host were the most efficient in vivo, supporting a personalized phage therapy approach for optimal treatment outcomes. |
| 2014 | Wong et al.: studied the “lytic spectrum” (host range and susceptibility data) of phages against Salmonella Typhimurium at a wide range of MOIs; performed an in vivo chicken trial. | Observed a lack of correlation between the in vitro “lytic spectrum” and the results of the in vivo trial in chickens; suggested that the in vivo persistence of phages is important to completely eliminate pathogens. |
| 2017 | Green et al. performed: in vitro E. coli PLC reduction experiments; an in vivo infected mouse model. | Demonstrated an acceptable correlation between in vitro E. coli reduction levels and improved health scores in infected mice. |
| 2013–2019 | A number of research groups [16,73,75,76] adapted the OmniLog™ system, which monitors bacterial growth based on the respiration rate of growing cells; studied phage-mediated lysis and (the suppression of) the emergence of bacterial phage resistance. | Developed appropriate therapeutic phage cocktails within a short time period; succeeded in Adaptive Phage Therapeutics’ “Host Range Quick Testing” [16,77]. |
| 2018 | Xie et al. measured phage host range and “virulence” for 15 Salmonella phages using: the “spot method”; a PLC-based assay. | Found more correlation for host range evaluations than for “virulence” estimations. |
| 2018 | Forti et al. tested a six-phage cocktail against P. aeruginosa, designed based on host range and genomic information: in planktonic liquid cultures; in biofilms; in mice; in Galleria mellonella larvae. | Showed correlation with MOI; demonstrated that the cocktail of six phages was able to lyse P. aeruginosa (both in PLC and in biofilms) better than individual phages; assumed that the phage cocktail could cure acute respiratory infection in mice and treat bacteremia in Galleria mellonella larvae; showed that administration of the cocktail to the larvae prior to bacterial infection provided prophylaxis. |
| 2020 | Storms et al. and Konopacki et al., respectively [13,78]: developed a phage “virulence index” and a “PhageScore”, both based on bacterial growth curves; both quantified and compared the virulence of diverse phages, individually and in specific combinations, applying different MOIs. Storms et al. used the trapezoidal rule for their “virulence index” formula, which depends on the number of data points creating sub-areas that need to be calculated separately, while Konopacki et al. utilized a continuous function over the calculation area, instead of a coarse, straight-line description of growth, for their “PhageScore” [13,78]. | The “PhageScore” allows for a more accurate prediction of the process than the “virulence index”; both formulas/approaches could be used to evaluate and compare phage activity in view of the selection of candidate therapeutic phages. |
| 2021 | Nale et al.: examined the potential of 21 myoviruses and siphoviruses in vitro against Salmonella; elaborated an in vivo infection biocontrol strategy for poultry and swine; developed a phage cocktail based on a preliminarily defined host range. | The phage cocktail showed: high in vitro efficacy; potential for prophylaxis in a G. mellonella larvae model. |
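To make the growth-curve dynamics surveyed in Table 1 concrete, the Python sketch below integrates a simple mass-action model of the kind used in the kinetic modelling studies cited in the Introduction [25,26], with susceptible, infected, and free-phage compartments, a latent period, and a burst size. All parameter values are illustrative assumptions, not measured constants.

```python
# Minimal sketch of a mass-action phage/bacteria model (susceptible S,
# latently infected I, free phage P). Parameter values are illustrative
# assumptions, not measurements from any of the cited studies.

def simulate(hours=8.0, dt=0.001,
             r=1.5,        # bacterial growth rate (1/h)
             k=1e-9,       # adsorption rate constant (mL/pfu/h)
             latent=0.5,   # mean latent period (h)
             burst=100,    # burst size (pfu per infected cell)
             s0=1e6,       # initial susceptible bacteria (cfu/mL)
             p0=1e7):      # initial phage (pfu/mL), i.e., MOI = 10
    s, i, p = s0, 0.0, p0
    t = 0.0
    while t < hours:        # simple forward-Euler integration
        infection = k * s * p
        ds = r * s - infection
        di = infection - i / latent
        dp = burst * i / latent - infection
        s += ds * dt; i += di * dt; p += dp * dt
        t += dt
    return s, i, p

s, i, p = simulate()
print(f"after 8 h: susceptible = {s:.3g} cfu/mL, phage = {p:.3g} pfu/mL")
```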
Host Range Expansion (HRE)

Today, the experimental evolution of individual phages or phage mixtures through serial interactions with one host bacterium or a mixture of host bacteria is the most widely used approach to extend the phage host range. Several studies performed in the period 1963–1991 describe the benefit of serial passage experiments (SPEs), which allow for molecular and phenotypic evolution in real time. The changes in phage activity that occurred seemed to depend on the genotypes present in the cocktail at the start of the SPEs. Poullain et al. demonstrated an expansion in the infectivity and growth rate of evolved (the bacterial host is not allowed to evolve) or coevolved (the bacterial host coevolves with its parasite) phages. In phage evolution experiments, phages are (serially) transferred from one host culture to a new, phage-naive host culture under defined conditions, and their evolved characteristics are compared with those of their ancestors. Phage evolution on non-evolved hosts is usually accompanied by increasing phage propagation rates. In contrast, in coevolution experiments, the phage and its host are transferred together to a fresh culture medium. In this setting, the host is able to continuously coevolve to keep track of phage adaptations, which results in the emergence of different adaptive strategies by the phage. This evolution of phages with their hosts can increase their infectivity ranges. Betts et al. (2013) revealed that bacterial resistance to trained phages emerged at a lower frequency. In 2016, Friman et al. showed that pre-adapting (evolving) phages to P. aeruginosa cystic fibrosis isolates led to increased pathogen clearance and lowered resistance evolution. Eastern European researchers, particularly in the Republic of Georgia, used the noted Appelmans dilution method for passaging phage mixtures from strain to strain, including both sensitive and resistant bacterial strains, leading to the generation of new variants of phage clones/cocktails lysing a larger range of bacterial cells. This technique was recently applied to pre-adapt a phage for the treatment of a fracture-related infection due to pandrug-resistant K. pneumoniae. Burrowes et al. designed a 96-well plate format for the Appelmans protocol to analyze the individual phages after every 10 rounds of evolution. They showed that starting with a phage cocktail resulted in a larger host-range expansion than using individual phages and, based on genomic analysis, they observed a recombinatorial origin for output phages with a broadened host range. The crucial factors for ensuring a rapid host range extension of phages are (1) the use of a phage mixture from the start, which allows recombination to generate sufficient diversity, and (2) the use of both the original bacterial hosts that had been used for phage propagation and an updated collection of clinical bacterial isolates that are resistant to the given phages, as it is important to produce therapeutically useful phages. While Burrowes et al. state that the Appelmans protocol works predominantly via recombination between phages [60,85], Mapes et al. presumed a collateral host-range expansion when they conducted a similar SPE, which they named the “host-range expansion” (HRE) method. However, none of the parental or host-range-extended phages were sequenced, and thus it was hard to ascertain the exact mechanisms of the changes that occurred. In any case, the end products of HRE experiments need to be confirmed and validated by whole-genome sequencing, and tested and proven to be stable, considering the rounds of passaging. Serial passaging for HRE can be performed on agar as well, whenever liquid media are not adequate for the demonstration of phage lytic activity. The agar method is more time-consuming than the liquid method, but it has the advantage that the obtained phage mixture no longer needs to be processed further for plaque formation (Appendix A, Figure A6).

3. Discussion

Many studies refer to existing gaps in the standardization and validation of assays/methods documenting phage activity and in the translation of their results to in vivo applications [17,32,34,87]. The definitions of phage host range and test outcomes vary between methodologies. The phage-related experimental measurements described in the phage literature mostly rely on the same principles, but without a standardization of tests, it is difficult to correlate in vitro with in vivo results and to interpret disparate findings between studies and laboratories. Methods determining phage lytic activity (Figure 1) are based on bacterial clearing either on agar or in a liquid medium. In both cases, the results can be qualitative—if only a visual observation/evaluation is performed at the end point of the test—or quantitative—if a calculation is made at a particular point(s) in time. However, different quantification methods and principles are used: (i) the determination of the number of phages in pfu/mL, (ii) the determination of the bacterial concentration in cfu/mL or optical density (OD), or (iii) the determination of bacterial metabolic activity (e.g., tetrazolium reduction). The results from these approaches can further be used for the calculation of the phage yield (ratio of final to initial phage titer) or the reduction of bacterial growth (ratio of initial to final bacterial concentration), as sketched below. Liquid culturing techniques make it possible to calculate bacterial growth reduction dynamics, but the confirmation of phage growth itself, the re-growth of initially phage-sensitive bacteria, or the selection of phage-resistant bacterial mutants still requires phage plaque and bacterial colony formation on agar.
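A minimal sketch of these two ratio calculations, with invented example numbers:

```python
# Minimal sketch of the two ratios mentioned above: phage yield
# (final/initial titer) and bacterial reduction (initial/final
# concentration, here expressed as a log10 reduction). Example
# values are invented.
import math

def phage_yield(final_pfu_per_ml, initial_pfu_per_ml):
    """Phage yield: ratio of final to initial phage titer."""
    return final_pfu_per_ml / initial_pfu_per_ml

def log_reduction(initial_cfu_per_ml, final_cfu_per_ml):
    """Bacterial reduction expressed as log10(initial/final)."""
    return math.log10(initial_cfu_per_ml / final_cfu_per_ml)

print(phage_yield(3.0e9, 1.0e6))    # 3000-fold phage amplification
print(log_reduction(5.0e7, 5.0e3))  # 4.0 log reduction in bacteria
```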
Figure 1. Scheme depicting the chain of methods, from phage isolation to the assessment of suitability for phage treatment, based on in vitro activity evaluations.

As mentioned earlier, the “direct spot test” and “spot test” methods should only be considered preliminary (qualitative) phage detection (sensitivity) tests, as they do not demonstrate plaque formation, while the MD/SP DAL method can be considered a confirmatory test for phage detection, as it demonstrates plaque formation. Since the SD/MP DAL method allows for the most precise phage enumeration and plaque morphology characterization, it could be considered a confirmatory method for phage enumeration and plaque morphology characterization. Note that the MD/SP DAL method is less time-, material-, and labor-consuming, as it allows for the analysis of several phages or phage dilutions on one plate. It is therefore relevant for host range determination and EOP evaluation. We will not provide a detailed discussion of phage culture purification here, as it is beyond the scope of the present review and would deserve a dedicated paper. However, to ensure that a particular phage lysate (newly isolated or evolved) is a single phage particle product and authentic, it first must go through plaque and then culture purification steps. For adequate phage purification, five or more passages should be performed using the “phage T-streaking” method as a preliminary approach, followed by five or more passaging steps using the SD/MP DAL method, as this method allows for the full morphological selection and characterization of phage plaques. Once a particular phage is purified (plaque and culture) using an established and validated procedure, the candidate phage can be submitted for further characterization. The PLC method has also been put forward as an alternative approach for phage host range measurement. It is frequently used today for the in vitro evaluation of phage-bacterium population dynamics and is established as a rapid tool to extend the phage host range or to increase phage lytic activity, as an alternative to the genetic engineering of “super phages” [89,90]. To analyze phage/bacterium population interaction dynamics in a comprehensive manner, it is advisable not to use OD measurement, but to measure the conversion of water-soluble tetrazolium salts, which yields a higher sensitivity and dynamic range. For this, the OmniLog™ system provides a high-throughput capability (4800 phage assays) for the real-time monitoring of bacterial growth dynamics. The main reason for attempting to standardize phage lytic activity measurements and make them as effective as possible is to be able to correlate phage in vitro traits with therapeutic outcomes. Often, the results of different qualitative or quantitative methods (on agar or in liquid media) are arguably considered to be comparable.
The qualitative spot assessment of different phage-bacterium combinations is often scored using cardinal numbers (streak-based method scores of “0” to “+5”), while phage activity determined in liquid media is usually expressed using lysis scores (ranging from 1 to 3) based on OD changes over time. Storms et al. (2020) and Konopacki et al. (2020) developed the phage “virulence index” and “PhageScore” formulas, respectively, which can be used to analyze and compare phage activity and to select phages in a more standardized way. Both formulas are based on bacterial growth curves determined in liquid media [13,78] (a simplified sketch of this kind of growth-curve integration is given at the end of this subsection). However, both formulas need to be tested on a large variety of phage/bacteria combinations under different conditions (described below) to validate the results and to confirm that they are translatable to in vivo applications. The phage liquid culturing (PLC) method is put forward as the best assay to evaluate phage lytic activity [13,17,78], in comparison to EOP determination using the inherently imprecise MD/SP DAL method. The two methods are performed under different conditions (e.g., medium composition) and are based on different principles with regard to evaluation mechanisms and kinetic recordings. The disadvantage of both methods is that phage titers (pfu/mL) estimated on a “standard” bacterial strain are used for the evaluation of the effectiveness of the same phage on bacteria kits. Some relevant phage infection parameters, such as the adsorption rate, latent period, and burst size, can be deduced from monitoring phage growth in liquid media. Phage infection parameters depend on bacterial host physiology and nutritional conditions [7,14], which determine bacterial growth itself. Bacterial cells do not experience the same growth conditions on agar as in liquid culture, and thus, phage infection is also bound to differ. The latent period and burst size of phages are related to the bacterial growth rate [37,92,93]. As such, the phage growth rate is the most important criterion with regard to phage “virulence”. Thus, to correlate phage therapy outcomes with lytic activity (propagation rate and mutant selection), comparable conditions should be applied, i.e., realistic nutritional composition and consistency of media (liquid, semi-solid, and solid), incubation times and temperatures, and bacterial host strains. Finally, and most importantly, the initial phage/bacteria ratios should be adjusted separately for each method, and considered further in an integrated formula evaluating phage virulence or activity capacity as a whole. While optimizing the conditions for each phage/bacterium combination to give the highest possible outcome is feasible in vitro, the in vivo translation of the results is more problematic. Every phage candidate with the potential to be used in therapy—be it naturally isolated, with or without expanded activity or host range, genetically engineered or not, or used within a ‘one-size-fits-all’ or broad-spectrum approach—should ideally be pre-tested in a standardized, comprehensive, and statistically significant way to meet the expectations for successful phage therapy. Therefore, the question remains as to what should be considered and tested to determine a phage’s potential to reduce the bacterial population at different infection loci.
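The sketch below illustrates the general growth-curve integration behind such scores: following the idea underlying Storms et al.’s local virulence, it compares trapezoidal areas under a phage-exposed and a phage-free OD curve. It is a simplified sketch with invented data points, not a reimplementation of either published formula.

```python
# Simplified sketch of growth-curve integration: 1 minus the ratio of
# the area under the phage-exposed OD curve to the area under the
# phage-free control curve (trapezoidal rule). The OD data are invented;
# this is not a reimplementation of either published score.

def trapezoid_area(t, y):
    return sum((t[j + 1] - t[j]) * (y[j + 1] + y[j]) / 2.0
               for j in range(len(t) - 1))

def suppression_score(t_h, od_exposed, od_control):
    """0 = no effect on growth; values near 1 = near-complete suppression."""
    return 1.0 - trapezoid_area(t_h, od_exposed) / trapezoid_area(t_h, od_control)

t = [0, 1, 2, 3, 4, 5, 6]                             # hours
control = [0.05, 0.12, 0.30, 0.60, 0.90, 1.05, 1.10]  # OD600, no phage
exposed = [0.05, 0.10, 0.15, 0.10, 0.06, 0.05, 0.05]  # OD600, with phage
print(f"suppression score = {suppression_score(t, exposed, control):.2f}")
```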
3.1. Bacterial Population and Infection Locus Consistency

Certain bacterial determinants are critical for the outcome of phage/bacteria interactions. In vitro and in vivo testing of phages or phage mixtures is most often performed using homogeneous bacterial populations grown either on agar, as planktonic cells in liquid culture, or in biofilms. However, the bacterial composition of the infection loci to be treated with phages (e.g., an infected wound) usually consists of an assembly of different strains belonging to the same or different bacterial species and exhibiting different growth modes (planktonic and biofilm). As a result, it is appropriate in certain cases to test a mixture of different bacterial strains to evaluate the lytic activity of phages before treatment. Most important are the virulence factors of bacteria that can hamper phage proliferation. Laboratory conditions (e.g., growth media) are very different from the conditions encountered in vivo. Therefore, bacteria grown using standard laboratory protocols behave differently from those grown in the milieu of an infection (e.g., in a wound bed). For example, S. aureus rarely expresses its capsular polysaccharides, which are typical of clinical isolates, when grown in the laboratory. P. aeruginosa possesses an arsenal of virulence factors enabling it to invade host cells and circumvent host defenses, which are not revealed under in vitro conditions. Culture media could be developed that take into account certain conditions (e.g., pH and viscosity) that allow for the expression of virulence factor(s), and thus a more accurate study of phage behavior. Moreover, to mimic real-life scenarios of localized infections, body materials (e.g., sputum, surgical sutures, and debris) and fluids (blood/serum, cerebrospinal fluid, bile, etc.) spiked with the relevant bacterial strain(s) could be used as a model.

3.2. Phage-Bacteria Ratio

The right phage-bacteria ratio, or so-called MOI, needed to achieve complete bacterial lysis over a given period of time in a liquid culture should be determined [15,51]. The ideal cell numbers and MOI are different for each phage, and several studies have revealed that the outcome of phage activity mainly depends on the MOI. The optimal phage-bacteria ratio is correlated with the phage and bacterial growth rates, and a balanced phage-bacteria combination is the main determinant for the successful reduction/delay of the emergence of phage-resistant bacterial mutants. This optimal phage/bacteria ratio can be used in phage virulence assays or in in vitro and animal models (a minimal dosing sketch is given at the end of the next subsection).

3.3. Phage Mixtures

An appropriate phage mixture or cocktail is believed to be much more effective than single phages in treating infections. This phenomenon is referred to as synergy, where the different phages together facilitate the infection of the bacterial population. This synergistic efficacy is mostly based on ensuring coverage of a range of bacterial receptors and on individual phage properties. Conversely, the mixing of phages can also result in less lytic capacity than predicted based on the sum of the coverage and activity of each component phage. The phage components of cocktails are typically selected based on host ranges that are as wide as possible and non-overlapping, and are mostly mixed in the same proportions. However, it is very important to consider that different phages have different growth rates/adsorption times, and if they are combined in optimally differing titers (pfu/mL), the activity of the phages could be balanced in time. This, together with the right phage-bacteria ratio, may give the most effective treatment outcome.
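As the minimal dosing sketch referred to in Section 3.2, the snippet below computes the total pfu needed to reach a target MOI and then distributes it over hypothetical cocktail components in deliberately unequal proportions, echoing the idea of balancing phage activities in time. The component names and weights are illustrative assumptions, not recommended values.

```python
# Minimal sketch: pfu needed for a target MOI, plus a hypothetical
# unequal split over cocktail components. Weights are illustrative
# assumptions, not recommended values.

def pfu_needed(moi, cfu_per_ml, volume_ml):
    """Total pfu required so that pfu / cfu equals the target MOI."""
    return moi * cfu_per_ml * volume_ml

# MOI 10 against 1e6 cfu/mL in 10 mL requires 1e8 pfu in total.
total = pfu_needed(10, 1e6, 10)
print(f"total dose: {total:.3g} pfu")

# Hypothetical cocktail: the slow-adsorbing phage is dosed higher than
# the fast one so that their activities are balanced in time.
cocktail = {"phage-fast": 1.0, "phage-slow": 5.0}  # relative weights
share = sum(cocktail.values())
for name, weight in cocktail.items():
    print(name, f"{total * weight / share:.3g} pfu")
```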
Finally, we can conclude that, to date, no validation procedure/format has been developed or approved by the relevant regulatory authorities for the evaluation, categorization/ranking (preliminary or confirmatory), or documentation of the methods used to assess in vitro and in vivo phage activity in a standardized manner. All of the methods commonly shared and used so far have been copied, developed, or modified from manuals and scientific papers, mostly dating from d’Hérelle’s time. The level of phage virulence as a whole (phage therapy capacity)—host detection, host range, phage-bacteria growth rates, phage-bacteria interaction (including the circumvention of bacterial cell defense systems), phage survival/sustainability, adaptation to the host, and invading ability—is associated with conditional factors such as patient age and physiology (e.g., impaired or healthy), the concentration of bacteria, and the temperature and pH at the infection site. Thus, as phage-bacteria interactions are continuously evolving, so is phage virulence. Phage virulence capacity could be enhanced in vitro by applying a good understanding of phage-bacteria interactions under certain specific conditions (resembling those at the infection loci). The in vitro evaluation of phage activity, using standardized and integrated criteria, is bound to provide valuable support for in vivo applications. Every selected method should be rational, reliable and appropriate in a particular situation, feasible, and cost-effective, considering timelines, labor, and material consumption.

Appendix A

Appendix A.1. Short Outline of the “Phage Isolation Enrichment” Method

1.1 Culture the bacterial strains in 96- or 384-well microtiter plates overnight at an appropriate temperature, in a suitable culture medium.
1.2 Collect 200 (40) µL of bacterial suspension from each well of the 96 (384)-well microtiter plates (19.2 (15.36) mL in total, correspondingly) and transfer the liquid to a sterile reservoir using a multichannel pipette.
1.3 Add the following ingredients to a sterile container (flask):
- 360 (288) mL of sewage water;
- 40 (32) mL of 10× concentrated culture medium (broth);
- the 19.2 (15.36) mL mixture of bacterial suspensions from the reservoir.
1.4 Incubate the container at 25–28 °C for 18–24 h.
1.5 Centrifuge the (potential) phage lysate at 6000× g for 30 min.
1.6 Filter the (potential) phage lysate using a 0.45 µm syringe filter.
1.7 Store the filtrate at 4 °C.

Figure A1. Diagram depicting the “phage isolation enrichment” method.
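The volume arithmetic of steps 1.2 and 1.3 can be generalized as in the sketch below; it simply reproduces the protocol’s numbers for the two plate formats and reports the resulting final broth strength.

```python
# Minimal sketch reproducing the volume arithmetic of Appendix A.1 for
# the 96- and 384-well formats; per-well volumes follow the protocol.

def enrichment_mix(wells):
    per_well_ul = {96: 200, 384: 40}[wells]
    hooks_ml = wells * per_well_ul / 1000.0  # pooled bacteria hooks
    water_ml = {96: 360, 384: 288}[wells]    # sewage water
    broth_ml = {96: 40, 384: 32}[wells]      # 10x concentrated broth
    total_ml = hooks_ml + water_ml + broth_ml
    return hooks_ml, water_ml, broth_ml, total_ml

for fmt in (96, 384):
    hooks, water, broth, total = enrichment_mix(fmt)
    print(f"{fmt}-well: hooks {hooks} mL + water {water} mL + broth {broth} mL "
          f"= {total} mL (broth ~{10 * broth / total:.2f}x final)")
```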
Appendix A.2. Short Outline of the “Spot on Streak” Method

2.1 Make dilutions of the bacterial suspensions in a 96-well microtiter plate, including the following two dilutions: a low concentration containing 1.0 × 10⁴ cfu/mL and an average one containing 1.0 × 10⁷ cfu/mL.
2.2 Apply a drop (20 µL) of each bacterial suspension in the first column of a grid on a square petri dish containing a suitable agar medium, using a multichannel pipette; then roll each drop down to the end of the grid row using the same pipette and tips or separate disposable loops. Let the bacterial streaks dry in a Biosafety Cabinet (BSC).
2.3 Distribute the phage lysates in a 96-well microtiter plate or another segmented reservoir according to their foreseen outline on the test agar plate grids. Spot 10 µL of the phage lysates onto the bacterial streaks in a vertical direction using a multichannel pipette.
2.4 Let the spots dry in a BSC and then incubate the test plates upside down at a temperature of 25–28 °C (which should be lower than the standard incubation temperature for the considered bacterial strains) for 18 h.

Figure A2. Diagram depicting the “spot on streak” method.

Appendix A.3. Short Outline of the “Spot on Spot” Method

3.1 Repeat the first step of the “spot on streak” method.
3.2 Spot 10 µL of the bacterial suspensions in the first column of the grid. Let the bacterial spots dry in a BSC.
3.3 Spot 5 µL of phage lysate over each bacterial spot.
3.4 Repeat step 2.4 of the “spot on streak” method.

Figure A3. Diagram depicting the “spot on spot” method.

Appendix A.4. Short Outline of the “Streak on Streak” Method

4.1 Apply phage lysate drops (20 µL) in the first column of a grid on a square petri dish containing a suitable agar medium, using a multichannel pipette; then roll each drop down to the end of the grid row using the same pipette and tips or separate disposable loops. Do not allow the phage streaks to dry before the bacterial suspensions are applied.
4.2 Streak 10 µL of bacterial suspension over each phage streak. Let the bacteria/phage streaks dry in a BSC.
4.3 Repeat step 2.4 of the “spot on streak” method.

Figure A4. Diagram depicting the “streak on streak” method.

Appendix A.5. Short Outline of the MD/SP (Multiple Dilutions on Single Plates) Method

5.1 Make ten-fold serial dilutions of the phage lysate(s) in 96-well microtiter plates (add 20 µL of phage suspension to 180 µL of phosphate-buffered saline), typically up to 10⁻⁸.
5.2 Mix 300 µL of bacterial suspension, of an OD that is adjusted beforehand for each host strain or species, with up to 8 mL of molten soft agar (0.7% or 0.8% suitable agar, 46 °C) in a 15 mL tube and pour the mixture onto pre-prepared square petri dishes with 1.5% agar medium. Use 0.8% soft agar for phages that form large plaques. Let the plates dry for 10–15 min in a BSC.
5.3 Spot 2 µL of each phage dilution onto the soft agar surface along a column of the plate grid (six columns on a square petri dish) using a multichannel pipette. Make three repetitions for each test phage. In the case of phages with large plaques, make a three-column grid on a square petri dish and split the 2 µL spot into 4 smaller drops while applying it to the agar surface.
5.4 Use standard phage dilutions (with a known titer) on each test plate (whenever possible) as a control for the titration.
5.5 Let the test plates dry in a BSC and incubate them upside down at 28–32 °C (depending on the host bacteria) for 18–24 h.
5.6 After incubation, calculate the average number of plaques for the different dilutions and repetitions and multiply it by 500 to obtain the number of plaques in 1 mL. The phage titer (pfu/mL) is the number of plaques in 1 mL multiplied by the reciprocal of the dilution.

Figure A5. Diagram depicting the splitting of a 2 µL spot into 4 smaller drops in the MD/SP method.
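A worked example of the titer calculation in step 5.6: with 2 µL spots, a factor of 500 converts a plaque count into plaques per mL of the spotted dilution, and multiplying by the reciprocal of the dilution gives the titer. The plaque counts below are invented.

```python
# Worked example of step 5.6: 2 uL spots -> factor 500 to plaques/mL of
# the spotted dilution; times the reciprocal of the dilution -> pfu/mL.
# The replicate plaque counts are invented example values.

def titer_pfu_per_ml(plaque_counts, dilution_exponent):
    """Counts from replicate 2-uL spots of the 10^-dilution_exponent dilution."""
    mean_count = sum(plaque_counts) / len(plaque_counts)
    return mean_count * 500 * 10 ** dilution_exponent

# Three replicate spots of the 10^-6 dilution showing 18, 22, and 20 plaques:
print(f"{titer_pfu_per_ml([18, 22, 20], 6):.2e} pfu/mL")  # 1.00e+10
```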
Appendix A.6. Short Outline of the “Host Range Expansion (HRE) on Agar” Method

6.1 Make phage mixture dilutions as described in the MD/SP method (step 5.1).
6.2 Make bacterial streak lines of 30 µL as described in the “spot on streak” method (steps 2.1–2.2). Six lines in total are made on a square petri dish.
6.3 Spot 10 µL of each phage mixture dilution (from zero dilution to 10⁻⁷) lengthways on the bacterial lines.
6.4 Repeat step 2.4 of the “spot on streak” method.
6.5 After incubation, cut out all agar zones with different clearings (from clear zones to separate plaques). If there is no sign of phage activity on a particular strain, cut out the agar from the zero-dilution zone only.
6.6 Collect all agar cuts in one container and add a volume of phosphate-buffered saline corresponding to 3–5 mL per agar cut.
6.7 Stir the container with its contents for 1–1.5 h at 400 min⁻¹ and then centrifuge at 6000× g for 30 min.
6.8 Filter the supernatant using a 0.45 µm syringe filter.
6.9 Repeat the passaging rounds until the expected phage host-range extension is obtained.

Figure A6. Diagram depicting the “Host Range Expansion (HRE) on agar” method.

Author Contributions: Conceptualization, T.G.; methodology, T.G.; validation, T.G.; formal analysis, T.G.; investigation, T.G.; resources, T.G. and J.-P.P.; writing—original draft preparation, T.G.; writing—review and editing, T.G. and J.-P.P.; visualization, T.G. and J.-P.P.; funding acquisition, J.-P.P. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Conflicts of Interest: The authors declare no conflict of interest.

Funding Statement: This research received funding from the Royal Higher Institute of Defense (grant HFM 21-10).

Footnotes

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Myelnikov D. Creature features: The lively narratives of bacteriophages in Soviet biology and medicine. Notes Rec. R. Soc. J. Hist. Sci. 2020;74:579–597. doi: 10.1098/rsnr.2019.0035.
2. Summers W.C. Félix Hubert d’Herelle (1873–1949): History of a scientific mind. Bacteriophage. 2016;6:e1270090. doi: 10.1080/21597081.2016.1270090.
3. Abedon S.T., Danis-Wlodarczyk K.M., Alves D.R. Phage Therapy in the 21st Century: Is There Modern, Clinical Evidence of Phage-Mediated Efficacy? Pharmaceuticals. 2021;14:1157. doi: 10.3390/ph14111157.
4. Witzany G. What does communication of phages mean? In: Witzany G., editor. Biocommunication of Phages. Springer; Berlin/Heidelberg, Germany: 2020.
5. Pirnay J.-P., De Vos D., Verbeken G., Merabishvili M., Chanishvili N., Vaneechoutte M., Zizi M., Laire G., Lavigne R., Huys I., et al. The Phage Therapy Paradigm: Prêt-à-Porter or Sur-mesure? Pharm. Res. 2011;28:934–937. doi: 10.1007/s11095-010-0313-5.
6. Peng H., Chen I.A. Phage engineering and the evolutionary arms race. Curr. Opin. Biotechnol. 2021;68:23–29. doi: 10.1016/j.copbio.2020.09.009.
7. Weinbauer M.G. Ecology of prokaryotic viruses. FEMS Microbiol. Rev. 2004;28:127–181. doi: 10.1016/j.femsre.2003.08.001.
8. Abedon S.T., García P., Mullany P., Aminov R. Editorial: Phage Therapy: Past, Present and Future. Front. Microbiol. 2017;8:981. doi: 10.3389/fmicb.2017.00981.
9. Abedon S.T., Danis-Wlodarczyk K.M., Wozniak D.J., Sullivan M.B. Improving Phage-Biofilm In Vitro Experimentation. Viruses. 2021;13:1175. doi: 10.3390/v13061175.
10. Monk A.B., Rees C.D., Barrow P., Hagens S., Harper D.R.
227
ca.classical-analysis-and-odes - estimate a singular integral using a dyadic decomposition - MathOverflow
===============

estimate a singular integral using a dyadic decomposition

Asked 1 year, 3 months ago. Modified 1 year, 3 months ago. Viewed 280 times. Score: 4.

Let $0<\alpha_j<1$, $j=1,\dots,d+1$. I am trying to estimate the following singular integral:
$$I(y_1,\dots,y_d,z) := \int_{\substack{x\in[0,1]^d\\ 1/2<|x|<1}} \frac{dx_1\cdots dx_d}{|x_1-y_1|^{\alpha_1}\cdots|x_d-y_d|^{\alpha_d}\,\bigl|z^2-|x|^2\bigr|^{\alpha_{d+1}}},$$
where $0<y_j<1/2$, $0<z<1$. If this is a standard integral that has a known estimate, please refer me to some reference. I can show precisely that in the one-dimensional case $d=1$, one has
$$I \lesssim \frac{1}{(y_1+z)^{\alpha_2}\,|z-y_1|^{\alpha_1+\alpha_2-1}}. \tag{1}$$
To achieve that, I simply consider all nine possibilities that come from combining one of the cases $x\gg y_1$, $y_1\gg x$, and $x\sim y_1$ with one of the cases $x\gg z$, $z\gg x$, and $x\sim z$. In the case ($x\sim y_1$ and $x\sim z$), I use a dyadic decomposition: I put one of the singular factors in a dyadic interval and integrate the other singular factor explicitly. I don't know how to generalize that to higher dimensions. Let $\lambda_1,\dots,\lambda_d,\lambda\le 1$ be dyadic numbers. If we set $|x_j-y_j|\sim\lambda_j$ and $\bigl||x|-z\bigr|\sim\lambda$, then the problem reduces to estimating the measure of $\Omega=R\cap S$, the intersection of the rectangle
$$R := [y_1+\lambda_1/2,\,y_1+\lambda_1]\times\cdots\times[y_d+\lambda_d/2,\,y_d+\lambda_d]$$
with the spherical shell
$$S := \{\,z+\lambda/2 < |x| < z+\lambda\,\}.$$
For $\Omega$ to be nonempty, we necessarily have $z+\lambda \sim y_1+\cdots+y_d+\lambda_1+\cdots+\lambda_d$. How to proceed from here?

Tags: ca.classical-analysis-and-odes, fourier-analysis, harmonic-analysis

edited May 13, 2024 at 3:29; asked May 11, 2024 at 21:16 by Medo.

1 Answer
Score: 3.

It is a bit of a long story, but if you are still interested, I can teach you how to make a fairly accurate estimate for the same integral $I_0$ without the condition $|x|\ge\tfrac12$ (it is possible to reintroduce that condition, but that will just make the computation a bit cumbersome and, in all honesty, I don't think you care much whether it is $1/2$ or $1/5$ anyway). At this moment I'll just post the bound
$$I_0(y,z) \lesssim \sum_{I}\ \sum_{\Gamma_I \le \gamma \le 1} \gamma^{\frac{|J|}{2}-\beta} \prod_{i\in I} \min(y_i,\gamma/y_i)^{1-\alpha_i} \prod_{j\in J} \max(\sqrt{\gamma},y_j)^{-\alpha_j},$$
where I assume that $y=(y_1,\dots,y_n)\in[0,1]^n$, $z\in[0,1]$, and my $\alpha_j$ are the same as yours except I call the last one $\beta$: it obviously plays a special role in this game. $I$ here runs over all subsets of $\{1,\dots,n\}$ (including the empty set and the full set), $J$ is the complement of $I$, $\Gamma_I=\min\bigl(\bigl||y_I|^2-z^2\bigr|,1\bigr)$ where $y_I$ is the projection of $y$ to the coordinate subspace associated with $I$ (so $y_\emptyset=0$ and $y_{\{1,\dots,n\}}=y$), and $\gamma$ runs over the powers of $2$ (dyadic scales) as usual. Note that if the $\alpha_i,\beta$ are generic, this sum consists of finitely many geometric progressions and, thus, reduces to the finite sum over the scales at which the behavior changes and the endpoints. In general, however, there may be flat pieces, which will produce logarithmic factors. Note also that each term by itself is a trivial estimate from below of the integral over some single dyadic box (trivial in the sense that it is just the volume of the box divided by the product of the maxima of the denominator factors in that box). So, in the generic case, this gives you a finite sum that is equivalent to the integral. (I haven't tried to see if the chains of overlapping boxes corresponding to the flat parts in the summation create an overcount somewhere or are sharp too.) So, if this type of bound is interesting or useful for you, let me know and I'll post the details :-)

answered May 16, 2024 at 3:14 by fedja.
228
Information Decomposition Diagrams Applied beyond Shannon Entropy: a Generalization of Hu’s Theorem

Leon Lang¹, Pierre Baudot², Rick Quax¹, and Patrick Forré¹
¹Informatics Institute, University of Amsterdam
²Median Technologies, France

Leon Lang: [email protected] (main contributing author); Pierre Baudot: [email protected]; Rick Quax: [email protected]; Patrick Forré: [email protected]

arXiv:2202.09393v4 [cs.IT] 23 Sep 2024

Abstract. In information theory, one major goal is to find useful functions that summarize the amount of information contained in the interaction of several random variables. Specifically, one can ask how the classical Shannon entropy, mutual information, and higher interaction information relate to each other. This is answered by Hu’s theorem, which is widely known in the form of information diagrams: it relates shapes in a Venn diagram to information functions, thus establishing a bridge from set theory to information theory. In this work, we view random variables together with the joint operation as a monoid that acts by conditioning on information functions, and entropy as a function satisfying the chain rule of information. This abstract viewpoint allows us to prove a generalization of Hu’s theorem. It applies to Shannon and Tsallis entropy, (Tsallis) Kullback-Leibler divergence, cross-entropy, Kolmogorov complexity, submodular information functions, and the generalization error in machine learning. Our result implies for Chaitin’s Kolmogorov complexity that the interaction complexities of all degrees are in expectation close to Shannon interaction information. For well-behaved probability distributions on increasing sequence lengths, this shows that the per-bit expected interaction complexity and information asymptotically coincide, thus showing a strong bridge between algorithmic and classical information theory.

Contents
1 Introduction
2 Preliminaries on Shannon Entropy of Countable Discrete Random Variables
  2.1 Entropy, Mutual Information, and Interaction Information
  2.2 Equivalence Classes of Random Variables
  2.3 Monoids of Random Variables
3 A Generalization of Hu’s Theorem
  3.1 A Formulation of the Generalized Hu Theorem
  3.2 Explicit Construction of the G-Valued Measure µ
  3.3 General Consequences of the Explicit Construction of µ
4 Hu’s Theorem for Kolmogorov Complexity
  4.1 Preliminaries on Prefix-Free Kolmogorov Complexity
  4.2 The Chain Rule for Chaitin’s Prefix-Free Kolmogorov Complexity
  4.3 A Reformulation of the Chain Rule in Terms of Our General Framework
  4.4 Hu’s Theorem for Chaitin’s Prefix-Free Kolmogorov Complexity
  4.5 Expected Interaction Complexity is Interaction Information
  4.6 Hu’s Theorem for Prefix-Free Kolmogorov Complexity
  4.7 Hu’s Theorem for Plain Kolmogorov Complexity
5 Further Examples of the Generalized Hu Theorem
  5.1 Tsallis q-Entropy
  5.2 Kullback-Leibler Divergence
  5.3 q–Kullback-Leibler Divergence
  5.4 Cross-Entropy
  5.5 Arbitrary Functions on Commutative, Idempotent Monoids
  5.6 Submodular Information Functions
  5.7 Generalization Error
6 Discussion
  6.1 Major Findings: a Generalization of Hu’s Theorem and its Applications
  6.2 The Cohomological Context of this Work
  6.3 Unanswered Questions and Future Directions
  6.4 Conclusion
A Measure Theory for Countable Discrete Spaces
B Proofs for Section 2
C Proofs for Section 3
  C.1 Proof of the Generalized Hu Theorem 3.2 and Corollary 3.3
  C.2 Further Proofs for Section 3
D Proofs for Section 4
E Proofs for Section 5
References
1 Introduction

Information diagrams, most often drawn for two or three random variables (see Figures 1 and 2), provide a concise way to visualize information functions. Not only do they show (conditional) Shannon entropy, mutual information, and interaction information — also called co-information — of several random variables in one overview, they also provide an intuitive account of the relations between these functions. This well-known fact goes beyond just three variables: diagrams with four (see Figure 3) and more variables exist as well. Hu’s theorem [32, 63, 64] renders all this mathematically precise by connecting the set-theoretic operations of union, intersection, and set difference to joint information, interaction information, and conditioning of information functions, respectively. The map from sets to information functions is then a measure and turns disjoint unions into sums. Certain summation rules of information functions then follow visually from disjoint unions in the diagrams.

Our work is concerned with the question of whether Hu’s theorem can be generalized to other information functions than entropy, such as Kullback-Leibler divergence and cross-entropy. Such functions are important in the context of statistical modeling of multivariate data, in which one aims to find a probabilistic model able to reproduce the information structure of the data. For instance, an information diagram for cross-entropy would then allow one to visualize how the cross-entropy between a model probability distribution and the data distribution is decomposed into higher-order terms. These higher-order terms (there called cluster (cross-)entropies) have been used in an adaptive cluster expansion approach to statistical modeling of data with Ising models. Kullback-Leibler divergence has been studied in the context of decompositions of joint entropy and information, and is often minimized in machine learning and deep learning [11, 12]. This becomes especially interesting for graphical methods, including diffusion models, which form the basis for widespread text-to-image generation methods like DALL-E, Imagen, and Stable Diffusion. Diffusion models involve a decomposition of a joint Kullback-Leibler divergence over a Markov chain.
Once information diagrams are established in a generalized context, this might facilitate the study of decompositions of loss functions for more general graphical models.

Our claim is that the language employed in the foundations of information cohomology gives the perfect starting point for generalizing Hu’s theorem. Namely, by replacing discrete random variables with partitions on a sample space, they give random variables the structure of a monoid that is commutative and idempotent. Furthermore, conditional information functions are formally described by a monoid action. And finally, the most basic information function that generates all others, Shannon entropy, is fully characterized as the unique function that satisfies the chain rule of information. We substantially generalize Hu’s theorem by giving a proof based only on the properties just mentioned, leading to new applications to Kolmogorov complexity, Kullback-Leibler divergence, and beyond. To clarify, the main contribution of this work is not to provide major previously unknown ideas — indeed, our proof is very similar to the original one — but instead to place and prove this result in its proper abstract context. This then reveals information diagrams for new information measures.

Section 2 summarizes classical definitions and results for Shannon information theory, generalized to countable discrete random variables so that they can later be applied to Kolmogorov complexity. Section 3 — which can be read independently of the preceding section — contains our main result, the generalized Hu theorem. In Section 4, we prove a Hu theorem for Kolmogorov complexity. We also combine Hu’s theorems for Shannon entropy and Kolmogorov complexity to generalize the well-known result that “expected Kolmogorov complexity is close to entropy”: general interaction complexity is close to interaction information. For the case of well-behaved sequences of probability measures on binary strings with increasing length, this leads to an asymptotic result: in the limit of infinite sequence length, the per-bit interaction complexity and interaction information coincide. In Section 5, we consider further examples of Hu’s theorem, including Kullback-Leibler divergence and the generalization error in machine learning. We conclude with a discussion in Section 6, followed by proofs in the appendices.

Preliminaries and Notation

We mainly assume the reader to be familiar with the basics of measure theory and probability theory, which can be learned from any book on the topic. The main concepts we assume to be known are σ-algebras, the Borel σ-algebra on $\mathbb{R}^n$, measurable spaces, measures, measure spaces, probability measures, probability spaces, and random variables. We assume some very basic familiarity with abelian groups, (commutative, idempotent) monoids, and additive monoid actions. In contrast, we carefully define all basic notions from (algorithmic) information theory from scratch.

On notation: to aid familiarity, we will start writing the Shannon entropy with the symbol $H$, but then switch to the notation $I_1$ once we embed Shannon entropy in the concept of interaction information, Definition 2.7. Instead of the typical notation $H(Y \mid X)$ for the conditional entropy, we will use $X.H(Y) = X.I_1(Y)$. This is the general notation of monoid actions and is thus preferable in our abstract context. Furthermore, for two disjoint sets $A$ and $B$, we write their union as $A \mathbin{\dot\cup} B$. The number of elements in $A$ is written as $|A|$.
The power set of $A$, i.e., the set of its subsets, is denoted $2^A$. And finally, the natural and binary logarithms of $x$ are denoted $\ln(x)$ and $\log(x)$, respectively.

2 Preliminaries on Shannon Entropy of Countable Discrete Random Variables

In this technical introduction, we explain preliminaries on discrete random variables, entropy, mutual information, and interaction information. Our treatment will also emphasize abstract structures that lead us to the generalizations in Section 3. The goal is to arrive at Summary 2.17, which summarizes the properties of classical information functions in an abstract way suitable for our generalizations. We will omit many proofs of elementary and well-known results.

When we say a set is countable, we mean it is finite or countably infinite. Whenever we talk about discrete measurable spaces, we mean countable measurable spaces in which all subsets are measurable. Some technical considerations related to the measurability of certain functions in the infinite, discrete case are found in Appendix A.

2.1 Entropy, Mutual Information, and Interaction Information

We fix in this section a discrete sample space $\Omega$. We define
$$\Delta(\Omega) := \Bigl\{\, P : \Omega \to [0,1] \;\Bigm|\; \sum_{\omega\in\Omega} P(\omega) = 1 \,\Bigr\} = \Bigl\{\, (p_\omega)_{\omega\in\Omega} \in [0,1]^{\Omega} \;\Bigm|\; \sum_{\omega\in\Omega} p_\omega = 1 \,\Bigr\}.$$
If $\Omega$ is finite, we view it as a measurable space with the σ-algebra of Borel measurable sets. When $\Omega$ is infinite and discrete, we equip $\Delta(\Omega)$ with the smallest σ-algebra that makes all evaluation maps
$$\mathrm{ev}_A : \Delta(\Omega) \to \mathbb{R}, \qquad P \mapsto \mathrm{ev}_A(P) := P(A)$$
for all subsets $A \subseteq \Omega$ measurable. In the finite case, this definition is equivalent to the one given before. We remark that we do not distinguish between probability measures and their mass functions in the notation or terminology: for a subset $A \subseteq \Omega$ and a probability measure $P : \Omega \to [0,1]$, we simply set $P(A) := \sum_{\omega\in A} P(\omega)$.

Our aim is the study of discrete random variables $X : \Omega \to E_X$. Here, being discrete means that $E_X$ — next to $\Omega$ — is discrete. For any probability measure $P$ on $\Omega$ and any random variable $X : \Omega \to E_X$, we define the distributional law $P_X : E_X \to [0,1]$ as the unique probability measure with
$$P_X(x) := P\bigl(X^{-1}(x)\bigr) = \sum_{\omega\in X^{-1}(x)} P(\omega)$$
for all $x \in E_X$. Clearly, $P_X \in \Delta(E_X)$.

For the following definition of Shannon entropy, introduced in [48, 49], we employ the convention $0 \cdot \infty = 0 \cdot (-\infty) = 0$ and $\ln(0) = -\infty$. Furthermore, set $\overline{\mathbb{R}} := \mathbb{R} \cup \{+\infty\}$.

Definition 2.1 (Shannon Entropy). Let $P \in \Delta(\Omega)$ be a probability measure. Then the Shannon entropy of $P$ is given by
$$H(P) := -\sum_{\omega\in\Omega} P(\omega) \ln P(\omega) \in \overline{\mathbb{R}}.$$
Here, $\ln : [0,\infty) \to \mathbb{R} \cup \{-\infty\}$ is the natural logarithm. Now, let $X : \Omega \to E_X$ be a discrete random variable. The Shannon entropy of $X$ with respect to $P \in \Delta(\Omega)$ is given by
$$H(X; P) := H(P_X) = -\sum_{x\in E_X} P_X(x) \ln P_X(x) \in \overline{\mathbb{R}}.$$

For the identity $\mathrm{id}_\Omega : \Omega \to \Omega$, $\omega \mapsto \omega$, we then have $P_{\mathrm{id}_\Omega} = P$ and therefore $H(\mathrm{id}_\Omega; P) = H(P)$. Note that discrete probability distributions with infinite Shannon entropy exist. Now, set
$$\Delta_f(\Omega) := \Delta(\Omega) \setminus \{P \in \Delta(\Omega) \mid H(P) = \infty\}.$$
$\Delta_f(\Omega)$ is the measurable space of probability measures with finite entropy. We restrict entropy functions to this space for algebraic reasons:

Definition 2.2 (Entropy Function of a Random Variable). Let $X : \Omega \to E_X$ be a discrete random variable. Then its entropy function or Shannon entropy is the measurable function
$$H(X) : \Delta_f(\Omega) \to \mathbb{R}, \qquad P \mapsto H(X; P)$$
defined on probability measures with finite entropy. Its measurability is proven in Corollary A.3.
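(As a quick computational illustration of Definitions 2.1 and 2.2 on a finite sample space, consider the following minimal sketch; the helper names and the example distribution are ours, not the paper's.)

```python
import math

def entropy(p):
    """Shannon entropy H(P) = -sum_w P(w) ln P(w), with 0*ln(0) := 0 (Definition 2.1)."""
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def law(P, X):
    """Distributional law P_X of a random variable X: Omega -> E_X, as a dict x -> P_X(x)."""
    PX = {}
    for w, q in P.items():
        PX[X(w)] = PX.get(X(w), 0.0) + q
    return PX

# Sample space Omega = {0,1,2,3} with uniform P; X is the parity map.
P = {w: 0.25 for w in range(4)}
X = lambda w: w % 2
print(entropy(P))          # H(P) = ln(4) ~ 1.386
print(entropy(law(P, X)))  # H(X; P) = ln(2) ~ 0.693
```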
Let $P : \Omega \to \mathbb{R}$ be a probability measure and $X : \Omega \to E_X$ a discrete random variable. Then we define the conditional probability measure $P|_{X=x} : \Omega \to \mathbb{R}$ by
$$P|_{X=x}(\omega) := \begin{cases} \dfrac{P\bigl(\{\omega\} \cap X^{-1}(x)\bigr)}{P_X(x)}, & P_X(x) \neq 0; \\[1ex] P(\omega), & P_X(x) = 0. \end{cases} \tag{1}$$
(The precise definition for the case $P_X(x) = 0$ does not matter, since this case almost surely does not appear. However, defining the conditional also in this case makes many formulas simpler, since we do not need to restrict sums involving $P|_{X=x}$ to the case $P_X(x) \neq 0$.) For all $A \subseteq \Omega$, we then have
$$P|_{X=x}(A) = \begin{cases} \dfrac{P\bigl(A \cap X^{-1}(x)\bigr)}{P\bigl(X^{-1}(x)\bigr)}, & P_X(x) \neq 0; \\[1ex] P(A), & P_X(x) = 0. \end{cases}$$

For the following definition, recall that a series of real numbers converges absolutely if the series of its absolute values converges. It converges unconditionally if every reordering of the original series still converges with the same limit. According to the Riemann series theorem, these two properties are equivalent.

Definition 2.3 (Conditionable Functions, Averaged Conditioning). Let $F : \Delta_f(\Omega) \to \mathbb{R}$ be a measurable function. $F$ is called conditionable if for all discrete random variables $X : \Omega \to E_X$ and all $P \in \Delta_f(\Omega)$, the sum
$$(X.F)(P) := \sum_{x\in E_X} P_X(x)\, F(P|_{X=x}) \tag{2}$$
converges unconditionally. Note that $P|_{X=x} \in \Delta_f(\Omega)$, which makes $F(P|_{X=x})$ in Equation (2) well-defined. For all conditionable measurable functions $F : \Delta_f(\Omega) \to \mathbb{R}$ and all discrete random variables $X : \Omega \to E_X$, the function $X.F : \Delta_f(\Omega) \to \mathbb{R}$ is a measurable function by Corollary A.5, which we call the averaged conditioning of $F$ by $X$. The space of all conditionable measurable functions $F : \Delta_f(\Omega) \to \mathbb{R}$ is denoted by $\mathrm{Meas}_{\mathrm{con}}(\Delta_f(\Omega), \mathbb{R})$.

If $X : \Omega \to E_X$ and $Y : \Omega \to E_Y$ are two (not necessarily discrete) random variables, then their (Cartesian) product, or joint variable, $XY : \Omega \to E_X \times E_Y$ is defined by
$$(XY)(\omega) := \bigl(X(\omega), Y(\omega)\bigr) \in E_X \times E_Y. \tag{3}$$
(In the case that $E_X = E_Y = \mathbb{R}$, there is some ambiguity of notation, as the reader could understand $XY$ to be given by $(XY)(\omega) = X(\omega) \cdot Y(\omega)$. This definition plays a role in the algebra of random variables. In our work, we instead always mean the Cartesian product.) If we have two discrete random variables $X$ and $Y$ and a probability measure $P \in \Delta(\Omega)$, then this allows to consider $(P|_{X=x})_Y(y)$ for $(x, y) \in E_X \times E_Y$. In order to not overload notation, we will often write this as $P(y \mid x)$. Similarly, we will often write $P(x) := P_X(x)$ and $P(\omega \mid x) := P|_{X=x}(\omega)$.

We obtain the following elementary lemma and corollary, whose proofs are left to the reader:

Lemma 2.4. Let $Y$ be a discrete random variable on $\Omega$. Then $H(Y)$ is conditionable. More precisely, for another discrete random variable $X$ on $\Omega$ and $P \in \Delta_f(\Omega)$, $H(X; P)$ and $H(XY; P)$ are finite and we have
$$\bigl(X.H(Y)\bigr)(P) = H(XY; P) - H(X; P),$$
which results in $\bigl(X.H(Y)\bigr)(P)$ converging unconditionally.

Corollary 2.5. The following chain rule
$$H(XY) = H(X) + X.H(Y)$$
holds for arbitrary discrete random variables $X : \Omega \to E_X$ and $Y : \Omega \to E_Y$.

We will also write $Y.F(P) := (Y.F)(P)$. For example, if $F = H(X)$ is the Shannon entropy of the discrete random variable $X$, we write
$$Y.H(X; P) = Y.H(X)(P) = \sum_{y\in E_Y} P_Y(y)\, H(X; P|_{Y=y}).$$
We emphasize explicitly that $Y$ cannot act on $H(X; P)$, since this is only a number and not a measurable function. Nevertheless, we find the notation $Y.H(X; P)$ for $Y.H(X)(P)$ convenient. We obtain the following properties, resembling those of an additive monoid action:

Proposition 2.6. Let $X, Y$ be two discrete random variables on $\Omega$, $1 : \Omega \to * := \{*\}$ a trivial random variable, and $F, G : \Delta_f(\Omega) \to \mathbb{R}$ two conditionable measurable functions. Then the following hold:
1. $1.F = F$;
2. $Y.F$ is also conditionable, and we have $X.(Y.F) = (XY).F$;
3. $F + G$ is also conditionable, and we have $X.(F + G) = X.F + X.G$.

Proof. Properties 1 and 3 are elementary and left to the reader to prove. Property 2 follows from $P(x, y) = P(x) \cdot P(y \mid x)$ and $(P|_{X=x})|_{Y=y} = P|_{XY=(x,y)}$.
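(To make the chain rule of Corollary 2.5 concrete, here is a small numerical check. This is our own sketch, reusing entropy() and law() from the sketch above; all names are ours.)

```python
# Numerical check of the chain rule H(XY) = H(X) + X.H(Y) (Corollary 2.5),
# reusing entropy() and law() from the previous sketch.

def condition(P, X, x):
    """Conditional measure P|_{X=x} from Equation (1)."""
    mass = sum(q for w, q in P.items() if X(w) == x)
    if mass == 0:
        return dict(P)
    return {w: (q / mass if X(w) == x else 0.0) for w, q in P.items()}

def averaged_conditioning(P, X, F):
    """(X.F)(P) = sum_x P_X(x) F(P|_{X=x}) from Equation (2)."""
    return sum(q * F(condition(P, X, x)) for x, q in law(P, X).items())

P = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
X = lambda w: w % 2
Y = lambda w: w // 2
XY = lambda w: (X(w), Y(w))  # the joint variable of Equation (3)

lhs = entropy(law(P, XY))
rhs = entropy(law(P, X)) + averaged_conditioning(P, X, lambda Q: entropy(law(Q, Y)))
print(abs(lhs - rhs) < 1e-9)  # True
```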
Next, we define mutual information and, more generally, interaction information — also called co-information. As we want to view interaction information as a “higher degree generalization” of entropy and treat both on an equal footing in Hu’s theorem, we now change the notation: for any discrete random variable $X$, we set $I_1(X) := H(X)$.

Definition 2.7 (Mutual Information, Interaction Information). Let $q \in \mathbb{N}$ and assume that $I_{q-1}$ is already defined. Assume also that $Y_1, \dots, Y_q$ are $q$ discrete random variables on $\Omega$. Then we define $I_q(Y_1; \dots; Y_q) : \Delta_f(\Omega) \to \mathbb{R}$, the interaction information of degree $q$, as the function
$$I_q(Y_1; \dots; Y_q) := I_{q-1}(Y_1; \dots; Y_{q-1}) - Y_q.I_{q-1}(Y_1; \dots; Y_{q-1}).$$
$I_2$ is also called mutual information.

Remark 2.8. What we call interaction information is in the literature sometimes called (higher / multivariate) mutual information. In that case, the term
$$J_q(Y_1; \dots; Y_q) := (-1)^{q+1} I_q(Y_1; \dots; Y_q)$$
is called interaction information.

Proposition 2.9. For all $q \geq 1$ and all discrete random variables $Y_1, \dots, Y_q$, the function $I_q(Y_1; \dots; Y_q) : \Delta_f(\Omega) \to \mathbb{R}$ is a well-defined conditionable measurable function.

Proof. $I_1(Y_1)$ is conditionable by Lemma 2.4. Assuming by induction that $I_{q-1}(Y_1; \dots; Y_{q-1})$ is well-defined and conditionable, we obtain the following: $Y_q.I_{q-1}(Y_1; \dots; Y_{q-1})$ is well-defined and conditionable by Proposition 2.6, part 2, and $I_q(Y_1; \dots; Y_q)$ is well-defined and conditionable by Proposition 2.6, part 3.
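(The recursion of Definition 2.7 translates directly into code. The following is our illustrative sketch, reusing entropy(), law(), and averaged_conditioning() from the sketches above; the XOR example is a standard instance of negative interaction information and is ours, not the paper's.)

```python
# Recursive interaction information I_q from Definition 2.7 (our sketch).

def interaction_information(P, Ys):
    """I_q(Y_1; ...; Y_q) evaluated at P, for a list Ys of q random variables."""
    if len(Ys) == 1:
        return entropy(law(P, Ys[0]))                       # I_1 = H
    head, last = Ys[:-1], Ys[-1]
    return (interaction_information(P, head)
            - averaged_conditioning(P, last,
                                    lambda Q: interaction_information(Q, head)))

# XOR example on Omega = {0,...,7}: X, Y, Z are the three bits of w, and the
# measure is uniform on the points satisfying Z = X xor Y.
X = lambda w: w & 1
Y = lambda w: (w >> 1) & 1
Z = lambda w: (w >> 2) & 1
Pxor = {w: 1 / 4 for w in range(8) if Z(w) == X(w) ^ Y(w)}
print(interaction_information(Pxor, [X, Y, Z]))  # -ln(2) ~ -0.693 (synergy)
```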
2.2 Equivalence Classes of Random Variables

Assume all random variables are discrete. For two random variables $X$ and $Y$ on $\Omega$, we write $X \precsim Y$ if there is a function $f_{XY} : E_Y \to E_X$ such that $f_{XY} \circ Y = X$. This definition of $\precsim$ is equivalent to a preorder put forward in the context of conditional independence relations [18–20]. The latter work defines in its Section 6.2: $X \precsim Y$ if for all $\omega, \omega' \in \Omega$, the following implication holds true:
$$Y(\omega) = Y(\omega') \implies X(\omega) = X(\omega').$$
It is straightforward to show that this coincides with our own definition. Clearly, our relation is reflexive and transitive and thus a preorder. We define the equivalence relation $\sim$ by $X \sim Y$ iff $X \precsim Y$ and $Y \precsim X$. We denote by $[X]$ the equivalence class of $X$.

Proposition 2.10 (See Proof 1). Let $Y \precsim X$ be two discrete random variables on $\Omega$. Then we have $I_1(Y) \leq I_1(X)$ as functions on $\Delta_f(\Omega)$, meaning that $I_1(Y; P) \leq I_1(X; P)$ for all $P \in \Delta_f(\Omega)$. In particular, if $X$ and $Y$ are equivalent (i.e., $X \precsim Y$ and $Y \precsim X$), then $I_1(X) = I_1(Y)$.

Proposition 2.11 (See Proof 2). Let $X \sim Y$ be two equivalent discrete random variables on $\Omega$. Then for all conditionable measurable functions $F : \Delta_f(\Omega) \to \mathbb{R}$ we have $X.F = Y.F$.

Proposition 2.12. Let $q \geq 1$ and let $Y_1, \dots, Y_q$ and $Z_1, \dots, Z_q$ be two collections of discrete random variables on $\Omega$ such that $Y_k \sim Z_k$ for all $k = 1, \dots, q$. Then
$$I_q(Y_1; \dots; Y_q) = I_q(Z_1; \dots; Z_q).$$

Proof. For $q = 1$, this was shown in Proposition 2.10. The case $q > 1$ can be shown by induction using Definition 2.7 and Proposition 2.11.

This proposition shows that interaction information is naturally defined for collections of equivalence classes of random variables, instead of the random variables themselves.

2.3 Monoids of Random Variables

Again, assume all random variables to be discrete.

Lemma 2.13. Let $X, Y, Z, X'$, and $Y'$ be random variables on $\Omega$. Let $1 : \Omega \to *$ be a trivial random variable, with $* = \{*\}$ a measurable space with one element. Then the following properties hold:
0. If $X \sim X'$ and $Y \sim Y'$, then $XY \sim X'Y'$;
1. $1X \sim X \sim X1$;
2. $(XY)Z \sim X(YZ)$;
3. $XY \sim YX$;
4. $XX \sim X$.
Additionally, we have $X \precsim Y$ if and only if $XY \sim Y$.

Proof. All of these statements are elementary and left to the reader to prove.

Recall that a monoid is a tuple $(M, \cdot, 1)$ with $M$ a set, $\cdot$ a multiplication, and $1 \in M$, such that $1$ is neutral and the multiplication is associative. A monoid is commutative and idempotent if $m \cdot n = n \cdot m$ and $m \cdot m = m$ for all $m, n \in M$. Notice that rules 1 to 4 in the lemma resemble the properties of a commutative, idempotent monoid. We remark that a commutative, idempotent monoid is algebraically the same as a join-semilattice (sometimes also called a bounded join-semilattice), i.e., a partially ordered set which has a bottom element (corresponding to $1 \in M$) and binary joins (corresponding to the multiplication in a monoid). The partial order can be reconstructed from a commutative, idempotent monoid $M$ by writing $m \leq n$ if $m \cdot n = n$, which corresponds to the last statement in Lemma 2.13. The language of join-semilattices is, for example, used in the development of the theory of conditional independence.

Proposition 2.14 (See Proof 3). Let $\widehat{M} = \{X : \Omega \to E_X\}_X$ be a collection of random variables with the following two properties:
a) There is a random variable $1 : \Omega \to *$ in $\widehat{M}$ which has a one-point set $* = \{*\}$ as the target;
b) For every two $X, Y \in \widehat{M}$ there exists a $Z \in \widehat{M}$ such that $XY \sim Z$.
Let $[X]$ denote the equivalence class of $X$ under the relation $\sim$. Define $M := \widehat{M}/\sim$ as the collection of equivalence classes of elements in $\widehat{M}$. Define $[X] \cdot [Y] := [Z]$ for any $Z \in \widehat{M}$ with $XY \sim Z$. Then the triple $(M, \cdot, [1])$ is a commutative, idempotent monoid.

We note that the monoid of equivalence classes of discrete random variables is isomorphic to the monoid of partitions on $\Omega$, which is the formalization used in the information cohomology framework mentioned in the introduction.

We can now study finite monoids of random variables as instances of the construction in Proposition 2.14. Let $n \geq 0$ be a natural number. Let $X_1, \dots, X_n$ be fixed random variables on $\Omega$. Define $[n] := \{1, \dots, n\}$. For arbitrary $I \subseteq [n]$, define $X_I := \prod_{i\in I} X_i$, the joint of the variables $X_i$ for $i \in I$. For $X_J$ and $X_I$, we have the equivalence $X_J X_I \sim X_{J \cup I}$. Note that $X_\emptyset : \Omega \to * = \{*\}$ is a trivial random variable.

Definition 2.15 (Monoid of $X_1, \dots, X_n$). The monoid $M(X_1, \dots, X_n)$ of the variables $X_1, \dots, X_n$ consists of the following data:
1. The elements are equivalence classes $[X_I]$ for $I \subseteq [n]$.
2. The multiplication is given by $[X_J] \cdot [X_I] = [X_{J \cup I}]$.
3. $1 := [X_\emptyset]$ is the neutral element with respect to multiplication.
This is a well-defined commutative, idempotent monoid by Proposition 2.14.

Recall that an additive monoid action is a triple $(M, G, .)$, where $M$ is a monoid, $G$ is an abelian group, and $. : M \times G \to G$ is a function such that $1 \in M$ acts neutrally, with associativity (meaning $m.(n.g) = (m \cdot n).g$) and distributivity over addition in $G$.

Proposition 2.16. Let $M$ be a monoid of (equivalence classes of) discrete random variables on $\Omega$ as in Proposition 2.14. Let $G = \mathrm{Meas}_{\mathrm{con}}\bigl(\Delta_f(\Omega), \mathbb{R}\bigr)$ be the group of conditionable measurable functions from $\Delta_f(\Omega)$ to $\mathbb{R}$. Then the averaged conditioning $. : M \times G \to G$ given by
$$\bigl([X].F\bigr)(P) := (X.F)(P) = \sum_{x\in E_X} P_X(x)\, F(P|_{X=x})$$
is a well-defined monoid action.

Proof. The action is well-defined by Proposition 2.11 and Proposition 2.6, part 2. It is a monoid action by Proposition 2.6.
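(Definition 2.15 can be made tangible in a few lines: up to isomorphism, $M(X_1, \dots, X_n)$ is just the set of subsets of $[n]$ under union. The following sketch and its names are ours, for illustration only.)

```python
# Definition 2.15 in code: M(X_1,...,X_n) is (isomorphic to) the subsets of
# [n] under union, a commutative, idempotent monoid. Illustrative sketch.
from itertools import combinations

def monoid(n):
    """All elements [X_I], I subset of [n], encoded as frozensets of indices."""
    idx = range(1, n + 1)
    return [frozenset(c) for r in range(n + 1) for c in combinations(idx, r)]

join = lambda I, J: I | J   # [X_J] . [X_I] = [X_{J union I}]
one = frozenset()           # 1 = [X_{empty set}], the neutral element

elems = monoid(3)
assert all(join(I, I) == I for I in elems)                          # XX ~ X
assert all(join(I, J) == join(J, I) for I in elems for J in elems)  # XY ~ YX
assert all(join(one, I) == I for I in elems)                        # 1X ~ X
print(len(elems))  # 2**3 = 8 elements
```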
Let $M$ be a commutative, idempotent monoid of discrete random variables as in Proposition 2.14. By abuse of notation, we do not distinguish between random variables and their equivalence classes, i.e., we write $Y$ instead of $[Y]$. Denote by $G := \mathrm{Meas}_{\mathrm{con}}\big(\Delta_f(\Omega), \mathbb{R}\big)$ the group of conditionable measurable functions from $\Delta_f(\Omega)$ to $\mathbb{R}$. By Proposition 2.16, averaged conditioning $. : M \times G \to G$ is a well-defined monoid action. By Proposition 2.12, we can view $I_q$ as a function $I_q : M^q \to G$ that is defined on tuples of equivalence classes of discrete random variables. By Proposition 2.5, entropy $I_1$ satisfies the equation
$$I_1(XY) = I_1(X) + X.I_1(Y)$$
for all $X, Y \in M$, where $X.I_1(Y)$ is the result of the action of $X \in M$ on $I_1(Y) \in G$ via averaged conditioning. Finally, by Definition 2.7, for all $q \geq 2$ and all $Y_1, \ldots, Y_q \in M$, one has
$$I_q(Y_1; \ldots; Y_q) = I_{q-1}(Y_1; \ldots; Y_{q-1}) - Y_q.I_{q-1}(Y_1; \ldots; Y_{q-1}).$$

3 A Generalization of Hu's Theorem

In this section, we formulate and prove a generalization of Hu's theorem. Our treatment can be read mostly independently from the previous sections, but is motivated by Summary 2.17. First, in Section 3.1, we formulate the main result of this work, Theorem 3.2, together with its Corollary 3.3, which allows it to be applied to Kolmogorov complexity in Section 4 and to the generalization error in Section 5. The formulation relies on a group-valued measure whose construction we motivate visually in Section 3.2. Afterwards, in Section 3.3, we deduce some general consequences on how (conditional) interaction terms of different degrees can be related to each other. The proofs can be found in Appendix C.

3.1 A Formulation of the Generalized Hu Theorem

Let $M$ be a commutative, idempotent monoid. We assume that $M$ is finitely generated, meaning there are elements $X_1, \ldots, X_n \in M$ such that all elements in $M$ can be written as finite products of the elements $X_1, \ldots, X_n$. Since $M$ is commutative and idempotent, all elements in $M$ are of the form $X_I = \prod_{i \in I} X_i$ for some subset $I \subseteq [n] = \{1, \ldots, n\}$, and $X_I X_J := X_I \cdot X_J = X_{I \cup J}$. Additionally, fix an abelian group $G$ and an additive monoid action $. : M \times G \to G$.

For each $\emptyset \neq I \subseteq [n]$, we denote by $p_I$ an abstract atom. The only property we require of the atoms is that they be pairwise different, i.e., $p_I \neq p_J$ if $I \neq J$. Then, set $\widetilde{X}$ to be the set of all these atoms:
$$\widetilde{X} := \big\{ p_I \mid \emptyset \neq I \subseteq [n] \big\}. \tag{4}$$
The atoms $p_I$ represent the smallest parts of a general Venn diagram for $n$ sets: $p_I$ stands for the intersection of the sets with indices in $I$, minus the sets with indices in $[n] \setminus I$. For $i \in [n]$, we denote by $\widetilde{X}_i := \{ p_I \in \widetilde{X} \mid i \in I \}$ a set which we can imagine to be depicted by a "disk" corresponding to the variable $X_i$, and we denote by $\widetilde{X}_I := \bigcup_{i \in I} \widetilde{X}_i$ the union of the "disks" corresponding to the joint variable $X_I$. Clearly, we have $\widetilde{X} = \widetilde{X}_{[n]}$. This is actually the simplest construction that puts the $\widetilde{X}_i$ in general position, as we have, for all $\emptyset \neq I \subseteq [n]$:
$$\bigcap_{i \in I} \widetilde{X}_i \setminus \bigcup_{j \in [n] \setminus I} \widetilde{X}_j = \{ p_I \}. \tag{5}$$
We remark that $\widetilde{X}$ depends on $n$ and could therefore also be written as $\widetilde{X}(n)$. We will in most cases abstain from this so as not to overload the notation. In general, $\widetilde{X}$ has $2^n - 1$ elements. Therefore, for $n = 2$, $n = 3$, and $n = 4$, $\widetilde{X}$ has 3, 7, and 15 elements, respectively; see Figures 1, 2 and 3.

Remember that for a set $\Sigma$, $2^\Sigma$ is its powerset, i.e., the set of its subsets.

Definition 3.1 ((G-Valued) Measure). Let $G$ be an abelian group and $\Sigma$ a set. A $G$-valued measure (on $\Sigma$) is a function $\mu : 2^\Sigma \to G$ with the property $\mu(A_1 \cup A_2) = \mu(A_1) + \mu(A_2)$ for all disjoint $A_1, A_2 \subseteq \Sigma$.
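To make the combinatorics of the atoms concrete, the following Python sketch (ours, for illustration; the names `atoms` and `disk` are not from the formal development) builds $\widetilde{X}(n)$ and verifies Equation (5), i.e., that the abstract "disks" are in general position.

```python
from itertools import combinations

def atoms(n):
    """All nonempty I subset of [n]; the atom p_I is represented by the frozenset I."""
    return [frozenset(c) for r in range(1, n + 1)
            for c in combinations(range(1, n + 1), r)]

def disk(i, n):
    """The abstract 'disk' tilde-X_i = {p_I | i in I}."""
    return {I for I in atoms(n) if i in I}

n = 3
X = {i: disk(i, n) for i in range(1, n + 1)}
assert len(atoms(n)) == 2 ** n - 1  # 7 atoms for n = 3

# Equation (5): intersecting the disks over I and removing the remaining disks
# isolates exactly the single atom p_I -- the disks are in general position.
for I in atoms(n):
    rest = set(range(1, n + 1)) - I
    cell = set.intersection(*(X[i] for i in I)) - set().union(*(X[j] for j in rest))
    assert cell == {I}
print("all atoms isolated: the disks are in general position")
```

Representing each atom $p_I$ directly by the index set $I$ makes the general-position check a one-line set computation.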
Figure 1: The generalized Hu theorem, visualized for a commutative, idempotent monoid $M$ generated by $X, Y$, and for $F_1$ and $F_2$. The measure $\mu$ turns sets into elements of the abelian group $G$ and disjoint unions into sums.

Theorem 3.2 (Generalized Hu Theorem; See Section C.1 and Proof 4). Let $M$ be a commutative, idempotent monoid generated by $X_1, \ldots, X_n$, $G$ an abelian group, $. : M \times G \to G$ an additive monoid action, and $\widetilde{X} = \widetilde{X}(n)$.

1. Assume $F_1 : M \to G$ is a function that satisfies the following chain rule: for all $X, Y \in M$, one has
$$F_1(XY) = F_1(X) + X.F_1(Y). \tag{6}$$
Construct $F_q : M^q \to G$ for $q \geq 2$ inductively by
$$F_q(Y_1; \ldots; Y_q) := F_{q-1}(Y_1; \ldots; Y_{q-1}) - Y_q.F_{q-1}(Y_1; \ldots; Y_{q-1}) \tag{7}$$
for all $Y_1, \ldots, Y_q \in M$. Then there exists a $G$-valued measure $\mu : 2^{\widetilde{X}} \to G$ such that for all $q \geq 1$ and $J, L_1, \ldots, L_q \subseteq [n]$, the following identity holds:
$$X_J.F_q(X_{L_1}; \ldots; X_{L_q}) = \mu\left( \bigcap_{k=1}^{q} \widetilde{X}_{L_k} \setminus \widetilde{X}_J \right). \tag{8}$$
Concretely, one can define $\mu$ as the unique $G$-valued measure that is defined on individual atoms $p_I \in \widetilde{X}$ by
$$\mu(p_I) := \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K| + |I| + 1 - n} \cdot F_1(X_K), \tag{9}$$
where $I^c = [n] \setminus I$ is the complement of $I$ in $[n]$.³

³Alternatively, noting that $F_1(X_\emptyset) = 0$ and writing $K = K' \cup I^c$ for some unique $K' \subseteq I$, we can also write $\mu(p_I) = \sum_{K \subseteq I} (-1)^{|K|+1} \cdot F_1(X_K X_{I^c})$.

2. Conversely, assume that $\mu : 2^{\widetilde{X}} \to G$ is a $G$-valued measure, and that there is a sequence of functions $F_q : M^q \to G$ that satisfy Equation (8). Then $F_1$ satisfies Equation (6), and $F_q$ is related to $F_{q-1}$ as in Equation (7).

Sketch of Proof. Part 1 can be shown as follows: when specializing Equation (8) to the case $X_J = 1$ and $q = 1$, one obtains
$$F_1(X_K) = \mu(\widetilde{X}_K) = \sum_{I \,:\, I \cap K \neq \emptyset} \mu(p_I),$$
which follows from Equation (9) by the Möbius inversion formula on a poset [52, 3.7.1 Proposition]. The general formula for $q > 1$ then follows by induction using the properties of the monoid action. Part 2 follows by a direct computation. More details can be found in Appendix C.1.

Figure 2: A visualization of the generalized Hu theorem for a commutative, idempotent monoid generated by $X_1, X_2, X_3$. On the left-hand side, three subsets of the abstract set $\widetilde{X}$ are emphasized, namely $\widetilde{X}_{12} \cap \widetilde{X}_{13}$, $\widetilde{X}_1 \setminus \widetilde{X}_3$, and $\widetilde{X}_{12} \cap \widetilde{X}_3$. On the right-hand side, Equation (8) turns them into elements of the abelian group $G$, namely $F_2(X_{12}; X_{13})$, $X_3.F_1(X_1)$, and $F_2(X_{12}; X_3)$, respectively. Many decompositions of information functions into sums follow directly from the theorem by using that $\mu$ turns disjoint unions into sums, as exemplified by the equation $F_2(X_{12}; X_{13}) = X_3.F_1(X_1) + F_2(X_{12}; X_3)$.

The following corollary will be applied to Kolmogorov complexity in Section 4 and to the generalization error in machine learning in Section 5.

Corollary 3.3 (Hu's Theorem for Two-Argument Functions; See Proof 5). Let $M$ be a commutative, idempotent monoid generated by $X_1, \ldots, X_n$, $G$ an abelian group, and $\widetilde{X} = \widetilde{X}(n)$. Assume that $K_1 : M \times M \to G$ is a function satisfying the following chain rule:
$$K_1(XY) = K_1(X) + K_1(Y \mid X), \tag{10}$$
where we define $K_1(X) := K_1(X \mid 1)$ for all $X \in M$. Construct $K_q : M^q \times M \to G$ for $q \geq 2$ inductively by
$$K_q(Y_1; \ldots; Y_q \mid Z) := K_{q-1}(Y_1; \ldots; Y_{q-1} \mid Z) - K_{q-1}(Y_1; \ldots; Y_{q-1} \mid Y_q Z). \tag{11}$$
Then there exists a $G$-valued measure $\mu : 2^{\widetilde{X}} \to G$ such that for all $L_1, \ldots, L_q, J \subseteq [n]$, the following identity holds:
$$K_q(X_{L_1}; \ldots; X_{L_q} \mid X_J) = \mu\left( \bigcap_{k=1}^{q} \widetilde{X}_{L_k} \setminus \widetilde{X}_J \right). \tag{12}$$
Concretely, one can define $\mu$ as the unique $G$-valued measure that is defined on individual atoms $p_I \in \widetilde{X}$ by
$$\mu(p_I) := \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K| + |I| + 1 - n} \cdot K_1(X_K), \tag{13}$$
where $I^c = [n] \setminus I$ is the complement of $I$ in $[n]$.
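To illustrate part 1 of the theorem in the classical case, the following Python sketch (ours) instantiates $F_1$ as the Shannon entropy of a small, arbitrarily chosen joint distribution with $n = 2$ and checks that the measure defined by Equation (9) reproduces conditional entropy and mutual information, as well as the additivity $F_1(X_1) = \mu(p_1) + \mu(p_{12})$.

```python
from itertools import combinations
from math import log2

# A toy joint distribution P(X1, X2) on pairs of symbols (chosen for illustration).
P = {('a', 'a'): 0.4, ('a', 'b'): 0.1, ('b', 'a'): 0.1, ('b', 'b'): 0.4}
n = 2

def H(K):
    """Shannon entropy F1(X_K) of the marginal on coordinates K within {1, ..., n}."""
    marg = {}
    for x, p in P.items():
        key = tuple(x[i - 1] for i in sorted(K))
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

def mu(I):
    """Equation (9): mu(p_I) = sum over nonempty K containing I^c of (-1)^(|K|+|I|+1-n) F1(X_K)."""
    Ic = set(range(1, n + 1)) - set(I)
    total = 0.0
    for r in range(1, n + 1):
        for K in combinations(range(1, n + 1), r):
            if Ic <= set(K):
                total += (-1) ** (len(K) + len(I) + 1 - n) * H(K)
    return total

# mu(p_{12}) recovers the mutual information H(X1) + H(X2) - H(X1, X2),
# and mu(p_1) the conditional entropy H(X1 | X2) = H(X1, X2) - H(X2).
assert abs(mu({1, 2}) - (H({1}) + H({2}) - H({1, 2}))) < 1e-12
assert abs(mu({1}) - (H({1, 2}) - H({2}))) < 1e-12
# Additivity over atoms: F1(X1) = mu(tilde-X_1) = mu(p_1) + mu(p_{12}).
assert abs(H({1}) - (mu({1}) + mu({1, 2}))) < 1e-12
print("Hu's theorem verified for n = 2:", round(mu({1, 2}), 4), "bits of mutual information")
```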
Figure 3: A visualization of the generalized Hu theorem for a commutative, idempotent monoid $M$ generated by $X_1, X_2, X_3, X_4$. To reduce clutter, we restrict to a visualization of the abstract sets $\widetilde{X}_i$ and the atoms $p_I$, as well as the corresponding information functions. On the right-hand side, for computing $\mu(p_I)$ for the 15 atoms $p_I$, we use Lemma 3.4.

We conclude by discussing how Hu's theorem can be visualized, for which we prove one further elementary lemma. For $I = \{i_1, \ldots, i_q\} \subseteq [n]$, set
$$\eta_I := X_{[n] \setminus I}.F_q(X_{i_1}; \ldots; X_{i_q}). \tag{14}$$
For the special case that $F_q = I_q$ is interaction information, these functions have been discussed in the literature as generators of all information functions of the form $X_J.I_q(X_{L_1}; \ldots; X_{L_q})$. The following lemma explains why: the functions $\eta_I$ generate the information measure (or, more generally, the $G$-valued measure) $\mu$, which in turn generates all other information functions.

Lemma 3.4. Let $\emptyset \neq I \subseteq [n]$ be arbitrary. Then $\eta_I = \mu(p_I)$.

Proof. According to Equation (5), we have
$$\bigcap_{i \in I} \widetilde{X}_i \setminus \widetilde{X}_{[n] \setminus I} = \{ p_I \}. \tag{15}$$
Thus, the lemma follows from Theorem 3.2.

Theorem 3.2 can thus be visualized as follows: for each generator $X_1, \ldots, X_n$, draw a disk $\widetilde{X}_i$ such that the disks intersect "in general position", meaning that all intersections of (parts of) the disks are present. Assign the function $\eta_I$ to each atom $p_I$, as in the preceding lemma. Furthermore, assign information functions to subsets of $\widetilde{X}$ according to Equation (8). See Figures 1 and 3 for examples. In Figure 2, we exemplify how to use these diagrams to visually represent and prove identities of information functions. Note that in all figures, we write a set $I = \{i_1, \ldots, i_k\}$ for simplicity just as the sequence $i_1 i_2 \ldots i_k$.

3.2 Explicit Construction of the G-Valued Measure µ

Assume all notation as in part 1 of Theorem 3.2. In this subsection, we explain how one could "guess" Equation (9) without knowledge of Möbius inversion theory. This section is meant as motivation, and other sections do not depend on it. The high-level idea is the following: we have the sequence of functions $F_1, F_2, \ldots$ as our data to work with. We also know that $F_q$ is constructed from $F_{q-1}$ for all $q \geq 2$, which means that we should be able to express the measure $\mu$ in terms of $F_1$ alone. Additionally, we must have $F_1(X_K) = \mu(\widetilde{X}_K)$ in the end. Thus, our aim is to explain how, for arbitrary $\emptyset \neq I \subseteq [n]$, we can express $\mu(p_I)$ using only terms $\mu(\widetilde{X}_K)$ with $K \subseteq [n]$. This idea, while carried out differently, is also at the heart of the proof of the existence of information diagrams given in [63, 64].

We now look at some examples for $n$ and $I$ and derive $\mu(p_I)$ from the $\mu(\widetilde{X}_K)$. These derivations proceed visually, by manipulating Venn diagrams in which each diagram depicts the measure of a grey area; the diagrams are not reproduced here, but the resulting identities are the following. We frequently make use of the fact that $\mu$ is a $G$-valued measure. For $n = 1$ and $I = \{1\} = 1$,⁴ we obtain
$$\mu(p_1) = \mu(\widetilde{X}_1).$$
For $n = 2$ and $I = \{1\} = 1$, we have
$$\mu(p_1) = \mu(\widetilde{X}_{12}) - \mu(\widetilde{X}_2).$$
For $n = 2$ and $I = \{2\} = 2$, we get the same situation with 1 and 2 exchanged:
$$\mu(p_2) = \mu(\widetilde{X}_{12}) - \mu(\widetilde{X}_1).$$
Next, we look at the case $n = 2$, $I = \{1, 2\} = 12$:
$$\mu(p_{12}) = \mu(\widetilde{X}_1) + \mu(\widetilde{X}_2) - \mu(\widetilde{X}_{12}).$$
Finally, for $n = 3$ and $I = \{1, 2\} = 12$, we obtain
$$\mu(p_{12}) = \mu(\widetilde{X}_{13}) + \mu(\widetilde{X}_{23}) - \mu(\widetilde{X}_{123}) - \mu(\widetilde{X}_3).$$

⁴For simplicity, we write sets as a sequence of their elements.

In all cases, we managed to achieve our goal of using only terms of the form $\mu(\widetilde{X}_K)$. Additionally, a close look at the coefficients shows that these examples obey Equation (9), as desired.

3.3 General Consequences of the Explicit Construction of µ

Assume the setting as in part 1 of Theorem 3.2, which we now consider proven.
In this section, we consider general consequences of Hu’s theorem that specifically use the explicit construction, Equation (9), of the G-valued measure µ : 2e X →G. Corollary 3.5 explains how three different information functions can be expressed with respect to each other. Corollary 3.5 (See Proof 6). Recall the functions ηI from Equation (14). We obtain the following identities: 1. Let 1 ≤q ≤n and ∅̸= I = {i1, . . . , iq} ⊆[n]. Then ηI = X ∅̸=K⊇Ic (−1)|K|+|I|+1−n · F1(XK). 2. Let K ⊆[n] arbitrary. Then F1(XK) = X I⊆[n] I∩K̸=∅ ηI. 3. Let 1 ≤q ≤n and ∅̸= J = {j1, . . . , jq} ⊆[n] be arbitrary. Then Fq(Xj1; . . . ; Xjq) = X I⊇J ηI. 4. For ∅̸= I ⊆[n], we have ηI = X J⊇I (−1)|J|−|I| · F|J|(Xj1; . . . ; Xj|J|). 5. Let K ⊆[n] arbitrary. Then one has F1(XK) = X ∅̸=J⊆K (−1)|J|+1 · F|J|(Xj1; . . . ; Xj|J|). 6. Let 1 ≤q ≤n and ∅̸= J = {j1, . . . , jq} ⊆[n]. Then one has Fq(Xj1; . . . ; Xjq) = X ∅̸=K⊆J (−1)|K|+1 · F1(XK). 4 Hu’s Theorem for Kolmogorov Complexity In this section, we establish the generalization of Hu’s theorem for two-argument functions, Corol-lary 3.3, for different versions of Kolmogorov complexity. All of these versions satisfy a chain rule up to certain error terms. These can all be handled in our framework, but the most exact chain rule holds for Chaitin’s prefix-free Kolmogorov complexity, on which we therefore focus our attention. Our main references are [15, 28, 34]. In this whole section, we work with the binary logarithm, which we denote by log, instead of the natural logarithm ln. This section is written with minimal prerequisites on the reader. We proceed as follows: in Section 4.1, we explain the preliminaries of prefix-free Kolmogorov complexity. Then in Section 4.2, we state the chain rule of Chaitin’s prefix-free Kolmogorov complexity, which holds up to an additive constant. We reformulate this chain rule in Section 4.3 to satisfy the general assumptions 14 of Corollary 3.3 for two-argument functions. In Section 4.4, we then define interaction complexity analogously to interaction information, and make the resulting Hu theorem explicit. Then in Section 4.5, we combine the two Hu theorems for interaction complexity and Shannon interaction information and show that expected interaction complexity is up to an error term equal to interaction information. This leads to the remarkable result that in all degrees, the “per-bit” expected interaction complexity equals interaction information for sequences of well-behaved probability measures on increasing sequence lengths. Finally, the Sections 4.6 and 4.7 then summarize the resulting chain rules for standard prefix-free Kolmogorov complexity and plain Kolmogorov complexity, leaving more concrete interpretations of the resulting Hu theorems to future work. Most proofs for this section can be found in Appendix D. 4.1 Preliminaries on Prefix-Free Kolmogorov Complexity Let the alphabet be given by {0, 1}. The set of binary strings is given by {0, 1}∗:= {ϵ, 0, 1, 00, 01, 10, 11, 000, . . . }, where ϵ is the empty string. The above lexicographical ordering defines a bijection N →{0, 1}∗ that we use to identify natural numbers with binary strings. Concretely, this identification maps 0 7→ϵ, 1 7→0, 2 7→1, 3 7→00, 4 7→01, 5 7→10, . . . (16) We silently switch between viewing natural numbers as “just numbers” and viewing them as binary strings and vice versa. If x, y ∈{0, 1}∗are two binary strings, then we can concatenate them to obtain a new binary string xy ∈{0, 1}∗. 
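The identification of Equation (16) is easy to implement. The following sketch (ours; the function names are not from the formal development) realizes the bijection and its inverse, using the observation that the string assigned to $n$ is the binary expansion of $n + 1$ with the leading 1 removed.

```python
def num_to_str(n: int) -> str:
    """The bijection of Equation (16): drop the leading '1' of the binary form of n + 1."""
    return bin(n + 1)[3:]  # bin(n + 1) looks like '0b1...'

def str_to_num(s: str) -> int:
    """The inverse: prepend a '1' and read the result as a binary number, minus one."""
    return int('1' + s, 2) - 1

# Matches the identification 0 -> eps, 1 -> 0, 2 -> 1, 3 -> 00, 4 -> 01, 5 -> 10, ...
assert [num_to_str(n) for n in range(6)] == ['', '0', '1', '00', '01', '10']
assert all(str_to_num(num_to_str(n)) == n for n in range(1000))
```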
A string x ∈{0, 1}∗is a proper prefix of the string y ∈{0, 1}∗if there is a string z ∈{0, 1}∗with z ̸= ϵ such that y = xz. A set A ⊆{0, 1}∗is called prefix-free if no element in A is a proper prefix of any other element in A. Let X and Y be sets. A partial function f : X 99K Y is a function f : A →Y defined on a subset A ⊆X. A decoder for a set X is a partial function D : {0, 1}∗99K X.5 A decoder can be thought of as decoding the code words in {0, 1}∗into source words in X. A decoder D : {0, 1}∗99K X is called a prefix-free decoder if its domain A ⊆{0, 1}∗is prefix-free.6 For a binary string x, l(x) is defined to be its length, meaning the number of its symbols. Thus, for example, we have l(ϵ) = 0 and l(01) = 2. Let D : {0, 1}∗99K X be a decoder. We define the length function LD : X →N ∪{∞} via LD(x) := min  l(y) | y ∈{0, 1}∗, D(y) = x , which is ∞if D−1(x) = ∅. In the following, we make use of the notion of a Turing machine. This can be imagined as a machine with very simple rules that implements an algorithm. We will not actually work with concrete definitions of Turing machines; instead, we let Church’s Thesis 4.1 do the work, which we describe below — it will guarantee that any function that intuitively resembles an algorithm could equivalently be described by a Turing machine. Concrete definitions can be found in Chapter 1.7 of . A partial computable function is any partial function T : {0, 1}∗99K {0, 1}∗that can be com-puted by a Turing machine. The Turing machine halts on precisely the inputs on which T is defined. We do not distinguish between Turing machines and the corresponding partial computable func-tions: If T is a partial computable function, then we say that T is a Turing machine. If x ∈{0, 1}∗ is in the domain of the Turing machine T, we say that T halts on x and write T(x) < ∞. If T does not halt on x, we sometimes write T(x) = ∞. By the Church-Turing thesis, partial computable functions are precisely the partial functions for which there is an “algorithm in the intuitive sense” that computes the output for each input. We reproduce the formulation from : 5Often, the word code is used instead of decoder. We find “decoder” less confusing. 6In the literature, this is often called a prefix code. We choose the name “prefix-free” as it avoids possible confusions. 15 Thesis 4.1 (Church’s Thesis). The class of algorithmically computable partial functions (in the intuitive sense) coincides with the class of partial computable functions. We now define two prefix-free decoders for binary sequences. To do that, we first define the corresponding encoders: define the encoder (·)′ : {0, 1}∗→{0, 1}∗by x′ := 1l(l(x))0l(x)x.7 (17) Note that the natural number l(x) is viewed as a binary string using the identification in Equa-tion (16). The decoder corresponding to (·)′ is a partial computable function D′ : {0, 1}∗99K {0, 1}∗that is only defined on inputs of the form x′. The underlying algorithm reads until the first 0 to know the length of the bitstring representing l(x). Then it reads until the end of l(x) to know the length of x. Subsequently, it can read until the end of x to know x itself, which it then outputs. This decoder is prefix-free: if x′ is a prefix of y′, then l(x) = l(y) and x is a prefix of y, from which x = y and thus x′ = y′ follows. Let a pairing function {0, 1}∗× {0, 1}∗→{0, 1}∗be given by (x, y) 7→x′y. 
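As a concrete illustration of the encoder $x' = 1^{l(l(x))}\, 0\, l(x)\, x$ and of the pairing $(x, y) \mapsto x'y$ just defined, consider the following sketch (ours; function names are not from the formal development). The decoder reads one code word from the front of a string without any lookahead, which is exactly prefix-freeness in action.

```python
def to_str(n):                 # Equation (16): natural number -> binary string
    return bin(n + 1)[3:]

def to_num(s):
    return int('1' + s, 2) - 1

def encode(x: str) -> str:
    """x' := 1^{l(l(x))} 0 l(x) x, with l(x) written as a string via Equation (16)."""
    lx = to_str(len(x))
    return '1' * len(lx) + '0' + lx + x

def decode_prefix(code: str):
    """Read one x' from the front of `code`; return (x, rest)."""
    k = code.index('0')               # k = l(l(x)): read 1s until the first 0
    lx = code[k + 1 : k + 1 + k]      # the next k symbols spell l(x)
    m = to_num(lx)                    # m = l(x)
    start = k + 1 + k
    return code[start : start + m], code[start + m :]

x, y = '0110', '11'
paired = encode(x) + y                # the pairing (x, y) |-> x'y
assert decode_prefix(paired) == (x, y)
assert encode('') == '0'              # eps' = 0, as used for K(x) = K(x | eps) below
```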
Note that we can algorithmically recover both $x$ and $y$ from $x'y$: reading the string $x'y$ from the left, the algorithm first recovers $l(x)$ and then $x$, after which the rest of the string is automatically $y$.

A Turing machine $T : \{0,1\}^* \dashrightarrow \{0,1\}^*$ is called a prefix-free machine if it is a prefix-free decoder. The input is then imagined to be a code word encoding the output string. There is a bijective, computable enumeration $T_1, T_2, T_3, \ldots$ of all prefix-free machines, called the standard enumeration (see Section 3.1 of the references). Computable here means the following: if we encode the set of rules of any Turing machine as a binary sequence, then the map from natural numbers to binary sequences corresponding to the standard enumeration is itself computable.

A Turing machine $T : \{0,1\}^* \dashrightarrow \{0,1\}^*$ is called a conditional Turing machine if for all $x$ such that $T$ halts on $x$, we have $x = y'p$ for some elements $y, p \in \{0,1\}^*$; $p$ is then called the program, and $y$ the input. A universal conditional prefix-free machine is a conditional prefix-free machine $U : \{0,1\}^* \dashrightarrow \{0,1\}^*$ such that for all $i \in \mathbb{N}$ and $y, p \in \{0,1\}^*$, we have $U(y'i'p) = T_i(y'p)$, and $U$ does not halt on inputs of any other form. Here, again, $i$ is viewed as a binary string via Equation (16). One can show that such universal conditional prefix-free machines indeed exist (Theorem 3.1.1 of the references). For the rest of this article, let $U$ be a fixed universal conditional prefix-free machine.

Definition 4.2 (Prefix-Free Kolmogorov Complexity). The conditional prefix-free Kolmogorov complexity is the function $K : \{0,1\}^* \times \{0,1\}^* \to \mathbb{N}$ given by
$$K(x \mid y) := \min\big\{\, l(p) \;\big|\; p \in \{0,1\}^*,\ U(y'p) = x \,\big\} = \min\big\{\, l(i') + l(q) \;\big|\; i \in \mathbb{N},\ q \in \{0,1\}^*,\ U(y'i'q) = x \,\big\} = \min\big\{\, l(i') + l(q) \;\big|\; i \in \mathbb{N},\ q \in \{0,1\}^*,\ T_i(y'q) = x \,\big\} < \infty.$$
We define the non-conditional prefix-free Kolmogorov complexity by $K : \{0,1\}^* \to \mathbb{N}$, $K(x) := K(x \mid \epsilon)$. As $\epsilon' = 1^{l(l(\epsilon))}\, 0\, l(\epsilon) = 0$,⁸ we obtain
$$K(x) = \min\big\{\, l(p) \;\big|\; U(0p) = x \,\big\}.$$
Here, the 0 can be thought of as simply signaling that there is no input, while each "actual" input starts with a 1 due to the definition of $y'$.

⁷In the literature, this is viewed as a code for the natural numbers instead of $\{0,1\}^*$. Both viewpoints are equivalent due to the bijection $\mathbb{N} \cong \{0,1\}^*$.
⁸Here, we used $l(\epsilon) = 0$, a natural number that corresponds to the string $\epsilon$ and is plugged back into the formula.

Definition 4.3 (Joint Conditional Prefix-Free Kolmogorov Complexity). Define $\mathrm{Concat} : (\{0,1\}^*)^n \to \{0,1\}^*$ by $\mathrm{Concat}(x_1, \ldots, x_n) := x_1' \cdots x_{n-1}' x_n$. For $x_1, \ldots, x_n \in \{0,1\}^*$ and $y_1, \ldots, y_m \in \{0,1\}^*$, we define the (joint conditional) prefix-free Kolmogorov complexity by
$$K(x_1, \ldots, x_n \mid y_1, \ldots, y_m) := K\big(\mathrm{Concat}(x_1, \ldots, x_n) \;\big|\; \mathrm{Concat}(y_1, \ldots, y_m)\big).$$
We then simply set $K(x_1, \ldots, x_n) := K(x_1, \ldots, x_n \mid \epsilon)$.

4.2 The Chain Rule for Chaitin's Prefix-Free Kolmogorov Complexity

Let $f, g : X \to \mathbb{R}$ be two functions on a set $X$. We adopt the following notation: $f \mathrel{\overset{+}{<}} g$ means that there is a constant $c \geq 0$ such that $f(x) < g(x) + c$ for all $x \in X$. We write $f \mathrel{\overset{+}{>}} g$ if $g \mathrel{\overset{+}{<}} f$. Finally, we write $f \mathrel{\overset{+}{=}} g$ if $f \mathrel{\overset{+}{<}} g$ and $f \mathrel{\overset{+}{>}} g$, which means that there is a constant $c \geq 0$ such that $|f(x) - g(x)| < c$ for all $x \in X$. If we want to emphasize the inputs, we may, for example, also write $f(x) \mathrel{\overset{+}{=}} g(x)$.

Let $x \in \{0,1\}^*$ be arbitrary and $K(x)$ its prefix-free Kolmogorov complexity. Let $x^* \in \{0,1\}^*$ be chosen as follows: we look at all $y \in \{0,1\}^*$ of length $l(y) = K(x)$ such that $U(0y) = x$. Among those, we look at all $y$ such that $U$ computes $x$ on input $0y$ with the smallest number of computation steps.
And finally, among those, we define x∗to be the lexicographically first string. Based on this, Chaitin’s prefix-free Kolmogorov complexity is given by Kc : {0, 1}∗× {0, 1}∗→R, Kc(x | y) := K(x | y∗) and Kc(x) := Kc(x | ϵ). Clearly, there is a program that, on input x′K(x), outputs x∗— we simply run U(0y) for all programs y of length K(x) in parallel, and the one that outputs x the fastest and is lexico-graphically first among those is the output x∗. Vice versa, given x∗, one can compute x′K(x) by simply computing U(0x∗)′l(x∗). In this sense, x∗and x′K(x) can be said to “contain the same information”. In the literature, Chaitin’s prefix-free Kolmogorov complexity is, for this reason, also often defined by Kc(x | y) := K(x | y, K(y)). The following result might have for the first time been written down in , and was attributed therein to Leonid Levin. Theorem 4.4 (Chain Rule for Chaitin’s Prefix-Free Kolmogorov Complexity). The following identity holds: Kc(x, y) + = Kc(x) + Kc(y | x). (18) Here, both sides are viewed as functions ({0, 1}∗)2 →R that map inputs of the form (x, y). Proof. See , Theorem 3.8.1 for the proof of the inequality Kc(x, y) + < Kc(x) + Kc(y | x). The proof of the other direction, namely Kc(y | x) + < K(x, y) −K(x), in seems incorrect to us, as it only seems to show that the constant is independent of x and not of y. See the proof in for that direction. 4.3 A Reformulation of the Chain Rule in Terms of Our General Framework Our goal is to express the result, Equation (18), in terms of the assumptions of Corollary 3.3. To do this, we need to find a framework under which the chain rule becomes exact instead of correct up to a constant, and in which the inputs come from a monoid. We will solve this by identifying functions whose difference is bounded by a constant. For n ≥0 any fixed natural number, we define Maps ({0, 1}∗)n, R  as the abelian group of functions from ({0, 1}∗)n to R. We define the equivalence relation ∼Kc on Maps ({0, 1}∗)n, R  by F ∼Kc H :⇐ ⇒F + = H. The reason we put Kc in the subscript of ∼Kc is that later, we will investigate different equivalence relations ∼K and ∼C for prefix-free and plain Kolmogorov complexity. Note that the functions F 17 with F ∼Kc 0, i.e., F + = 0, form a subgroup of Maps ({0, 1}∗)n, R  . Consequently, we obtain an abelian group Maps ({0, 1}∗)n, R  / ∼Kc with elements written as [F]Kc. Now, let the variables X1, . . . , Xn be defined as the following projections: Xi : ({0, 1}∗)n →{0, 1}∗, x = (x1, . . . , xn) 7→xi. Then, for any i1, . . . , ik ∈[n], we can form the product variable Xi1 · · · Xik: Xi1 · · · Xik : ({0, 1}∗)n →({0, 1}∗)k, x = (x1, . . . , xn) 7→(xi1, . . . , xik). These strings of projections form the elements of the monoid f M = {X1, . . . , Xn}∗, with multipli-cation simply given by concatenation. Then from Kc : {0, 1}∗× {0, 1}∗→R, we can define the function [Kc]Kc : f M × f M →Maps ({0, 1}∗)n, R  / ∼Kc, (Y, Z) 7→[Kc(Y | Z)]Kc, with Kc(Y | Z) simply being the function that inserts tuples from ({0, 1}∗)n into the variables Y and Z: Kc(Y | Z) : ({0, 1}∗)n →R, x 7→Kc(Y (x) | Z(x)). Similarly as before, one can then define Kc(Y ) : ({0, 1}∗)n →R by Kc(Y ) := Kc(Y | ϵ) with ϵ ∈f M being the empty string of variables. In the same way, [Kc]Kc(Y ) := [Kc]Kc(Y | ϵ) = [Kc(Y )]Kc. Since ϵ(x) = ϵ for all x ∈({0, 1}∗)n, these definitions are compatible with the earlier definition Kc(x) := Kc(x | ϵ) for x ∈{0, 1}∗: we have Kc(Y )  (x) = Kc Y (x)  . Proposition 4.5 (See Proof 7). 
For arbitrary Y, Z ∈f M, we have the exact equality [Kc]Kc(Y Z) = [Kc]Kc(Y ) + [Kc]Kc(Z | Y ) (19) of elements in Maps ({0, 1}∗)n, R  / ∼Kc. To obtain a commutative, idempotent monoid, we show that we can permute and “reduce” the elements in f M without affecting the resulting functions in Maps ({0, 1}∗)n, R  / ∼Kc: for arbitrary Y = Xi1 · · · Xik ∈f M we define the reduction Y ∈f M by Y := XI := Y i∈I Xi, with I := n i ∈[n] ∃s ∈[k] : is = i o . (20) Here, the factors Xi with i ∈I are assumed to appear in increasing order of the index i. Lemma 4.6 (See Proof 8). For all Y, Z ∈f M, we have the equality [Kc]Kc Y | Z  = [Kc]Kc Y | Z  in Maps ({0, 1}∗)n, R  / ∼Kc. Now, define the equivalence relation ∼on f M by Y ∼Z if Y = Z, with (·) : f M →f M defined as in Equation (20). We define M := f M/ ∼. Each element [Y ] ∈M is then represented by Y since Y = Y ; it is of the form Y = XI for some I ⊆[n]. Additionally, if I ̸= J, then obviously we have XI ≁XJ, and consequently, there is a one-to-one correspondence between representatives of the form XI and elements in M. Therefore, we can write elements in M for convenience, and by abuse of notation, simply as [Y ] = XI. We then define the multiplication in M by [Y ] · [Z] := [Y Z], which in the new notation can be written as XI · XJ = XI∪J and thus makes M a well-defined commutative, idempotent monoid generated by X1, . . . , Xn. We define, by abuse of notation, [Kc]Kc : M × M →Maps ({0, 1}∗)n, R  / ∼Kc in the obvious way on representatives, which is well-defined by Lemma 4.6. Overall, we obtain by Corollary 3.3 a Hu theorem for Chaitin’s prefix-free Kolmogorov complexity, which we next explain in more detail. 18 4.4 Hu’s Theorem for Chaitin’s Prefix-Free Kolmogorov Complexity We now deduce a Hu theorem for Chaitin’s prefix-free Kolmogorov complexity. We formulate it without the abstraction of equivalence classes from the previous subsection (which is mainly impor-tant for the proof), with the goal to obtain an intrinsically more interesting version. For formulating the result, we first name the higher-degree terms analogously to the interaction information from Definition 2.7: Definition 4.7 (Interaction Complexity). Define Kc1 := Kc : {0, 1}∗× {0, 1}∗→R and Kcq : ({0, 1}∗)q × {0, 1}∗→R inductively by Kcq(y1; . . . ; yq | z) := Kcq−1(y1; . . . ; yq−1|z) −Kcq−1(y1; . . . ; yq−1 | yq, z). We call Kcq the interaction complexity of degree q. For example, Kc2(x; y) = Kc1(x) −Kc1(x | y) measures the reduction of the encoding length of x when having access to y. E.g., if x is thought of as “data” and y thought of as a “theory”, then Kc2(x; y) measures the extent to which y helps in compressing x. See also the last para-graph in Section 6.3 for more interpretation of the potential meaning of these quantities. The interpretation of higher-order terms is future work. For Y1, . . . , Yq, Z ∈f M = {X1, . . . , Xn}∗, define Kcq(Y1; . . . ; Yq | Z) ∈Maps ({0, 1}∗)n, R  by Kcq(Y1; . . . ; Yq | Z) : x 7→Kcq Y1(x); . . . ; Yq(x) | Z(x)  . One can easily inductively show that Kcq(Y1; . . . ; Yq | Z) + = Kcq−1(Y1; . . . ; Yq−1 | Z) −Kcq−1(Y1; . . . ; Yq−1 | YqZ). (21) The full proof of the following theorem can be found in Appendix D, Proof 9. The main ingredient is the chain rule, Proposition 4.5, together with Corollary 3.3. Theorem 4.8 (See Proof 9). Let e X = e X(n). There exists a measure µ : 2e X →Maps ({0, 1}∗)n, R  such that for all L1, . . . , Lq, J ⊆[n], the relation Kcq XL1; . . . ; XLq | XJ  + = µ q \ k=1 e XLk \ e XJ ! 
(22) of functions ({0, 1}∗)n →R holds. Concretely, µ can be defined as the unique measure that is on individual atoms pI ∈e X defined by µ(pI) := X ∅̸=K⊇Ic (−1)|K|+|I|+1−n · Kc1(XK), (23) where Ic = [n] \ I is the complement of I in [n]. Remark 4.9. In Theorem 4.8, the equality holds up to a constant independent of the input in ({0, 1}∗)n. However, there is a dependence on q, the degree, and n, the number of generating variables. We now briefly analyze this. For analyzing the dependence on q, we note that the inductive step of the proof of the generalized Hu Theorem 3.2 uses the theorem for degree q−1 twice. That means that the number of comparisons doubles in each degree, leading to a dependence of q of the form O(2q). Can one do better than this? One idea might be to not define Kcq inductively, but with an inclusion-exclusion–type formula motivated by Corollary 3.5, part 6. One sensible definition is the following: Kcq(y1; . . . ; yq | z) := X K⊆[q] (−1)|K|+1 · Kc1(yKz) with yK := Y k∈K y′ k. (24) 19 However, this now leads to 2q summands, which one would, for a proof of Hu’s theorem, individually compare with the evaluation of µ on a “disk” in e X = e X(n). As in the general definition Equa-tion (24), the order of the factors in yK does not follow the ordering of the generators x1, . . . , xn, we expect there a reordering of the factors to be necessary for the comparison. This has each time a cost of O(1), thus again leading to a dependence of the form O(2q). We currently do not see a way to improve this. Now, for each of the 2q comparisons, we would like to know the dependence on n. One possible algorithm for bringing yKz “in order” works as follows: assuming that all of yk, k ∈K, and z are given by a permutation (with omissions) of x1, . . . , xn, then we have to specify q + 1 permutations, which each involves to specify the position of n elements. The position is one of 1, . . . , n plus “omission”, which together has a cost of log(n + 1). Overall, this leads to a dependence on n of O (q + 1) · n · log(n + 1)  . Overall, the dependence on q and n together is thus O 2q · (q + 1) · n · log(n + 1)  . Figure 4: A visualization of Hu’s theorem for Kolmogorov complexity for three variables X, Y, Z. On the left-hand-side, three subsets of the abstract set ^ XY Z are emphasized, namely g XY ∩g XZ, e X \ e Z, and g XY ∩e Z. On the right-hand-side, Equation (22) turns them up to a constant error into the Kolmogorov complexity terms Kc2(XY ; XZ), Kc(X | Z), and Kc2(XY ; Z), respectively. Many decompositions of complexity terms into sums directly follow from the theorem by using that µ turns disjoint unions into sums, as exemplified by the equation Kc2(XY ; XZ) + = Kc(X | Z) + Kc2(XY ; Z). As an Example, we recreate Figure 2 for the case of Kolmogorov complexity in Figure 4. We can also translate back from the notation with variables to the more familiar notation in which elements of {0, 1}∗are inserted in the formulas. If we do this, then the example equation from Figure 4 becomes Kc2(x, y; x, z) + = Kc(x | z) + Kc2(x, y; z), where both sides are viewed as functions ({0, 1}∗)3 →R. 4.5 Expected Interaction Complexity is Interaction Information Recall Definition 2.7 of the interaction information of q discrete random variables Y1, . . . , Yq, de-noted Iq(Y1; . . . ; Yq). Additionally, recall that for another discrete random variable Z defined on the same sample space, we can define the averaged conditioning Z.Iq(Y1; . . . ; Yq), see Definition 2.3, which is again an information function. 
Its evaluation on a probability measure P on the sample space is denoted Z.Iq(Y1; . . . ; Yq; P). In this section, we want to establish a relationship between interaction information of random variables defined on ({0, 1}∗)n with values in ({0, 1}∗)k for some k on the one hand, and the 20 expectation of interaction complexity as defined in Definition 4.7 on the other hand. The deviation from an equality between interaction information and interaction complexity will be quantified by the Kolmogorov complexity of probability mass functions. For this aim, we first need to interpret outputs of Turing machines as rational numbers: If T is a Turing machine and T(x) = m′n for some m, n ∈{0, 1}∗, then interpret m, n as natural numbers via the identification map in Equation (16), and consequently m′n as the rational number m/n, see also Li and Vitányi , Section 1.7.3. Interpret the output as 0 if it is not of the form m′n. Definition 4.10 (Kolmogorov Complexity of Probability Mass Functions). Let P : ({0, 1}∗)n →R be a probability mass function. Its Kolmogorov complexity is defined by K(P) := min p∈{0,1}∗ n l(p) ∀q ∈N, ∀x ∈({0, 1}∗)n : Tp(x′q) −P(x) ≤1/q o , where Tp is the p’th prefix-free Turing machine. Definition 4.11 (Computability of Probability Mass Functions). A probability mass function P : ({0, 1}∗)n →R is called computable if K(P) < ∞. In other words, a probability mass function P is computable if there exists a prefix-free Turing machine Tp that can, for all natural numbers q, approximate P up to precision 1/q. We now unify the viewpoint of the variables Xi as “placeholders” with the viewpoint that they are random variables: remember that the Xi : ({0, 1}∗)n →{0, 1}∗are given by projec-tions: Xi(x) = xi. They form the monoid f M = {X1, . . . , Xn}∗, with multiplication given by concatenation. Furthermore, we defined an equivalence relation ∼with Y ∼Z if Y = Z. Now, interpret ({0, 1}∗)n as a discrete sample space. Then the strings in Y ∈f M can be interpreted as random variables on ({0, 1}∗)n with values in ({0, 1}∗)k for some k. The concate-nation of these strings is identical to the product of random variables defined in Equation (3). Now, remember that in Section 2.2 we also defined an equivalence relation for random vari-ables, which we now call ∼r to distinguish it from ∼. For Y : ({0, 1}∗)n →({0, 1}∗)ky and Z : ({0, 1}∗)n →({0, 1}∗)kz, we have Y ∼r Z if there exist functions fZY : ({0, 1}∗)ky →({0, 1}∗)kz and fY Z : ({0, 1}∗)kz →({0, 1}∗)ky such that fZY ◦Y = Z and fY Z ◦Z = Y . Lemma 4.12 (See Proof 10). For all Y, Z ∈f M, we have Y ∼Z ⇐ ⇒ Y ∼r Z. That is, the equivalence relations ∼and ∼r are identical. This shows that the commutative, idempotent monoids M = {X1, . . . , Xn}∗/ ∼and M(X1, . . . , Xn) from Definition 2.15 are the same. The only difference is simply that the neutral element in {X1, . . . , Xn}∗/ ∼was denoted ϵ, whereas the one of M(X1, . . . , Xn) was denoted 1. We denote both monoids simply by M from now on. For the following theorems, recall that a probability measure P ∈∆(Ω) has a Shannon entropy I1(P) which equals I1(idΩ; P), see Definitions 2.1, 2.2. Our aim is to generalize the following theorem: Theorem 4.13 (, Theorem 8.1.1). We have 0 ≤ X x∈({0,1}∗)n P(x)Kc(x) −I1(P) ! + < K(P), where both sides are viewed as functions in computable probability measures P : ({0, 1}∗)n →R with finite entropy I1(P) < ∞. That is, up to K(P) + c for some constant c independent of P, entropy equals expected Kolmogorov complexity. 
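Since $K$ is not computable, Theorem 4.13 cannot be checked directly. Its spirit can, however, be illustrated by substituting a real compressor for the universal machine. The following sketch (ours; `zlib` is only a crude computable upper-bound proxy for Kolmogorov complexity, so no quantitative agreement with the $K(P)$ error term is claimed) compares the compressed length per symbol of long Bernoulli($p$) strings with the entropy rate $H(p)$.

```python
import random
import zlib
from math import log2

def bernoulli_string(p: float, m: int, rng: random.Random) -> str:
    """m i.i.d. Bernoulli(p) symbols, written as a 0/1 text string."""
    return ''.join('1' if rng.random() < p else '0' for _ in range(m))

rng = random.Random(0)
p, m = 0.1, 100_000
x = bernoulli_string(p, m, rng)

# Compressed size in bits is a computable upper bound in the spirit of K(x).
compressed_bits = 8 * len(zlib.compress(x.encode('ascii'), 9))
entropy_rate = -(p * log2(p) + (1 - p) * log2(1 - p))  # H(p) in bits per symbol
print(f"zlib: {compressed_bits / m:.3f} bits/symbol vs entropy rate H(p) = {entropy_rate:.3f}")
```

The compressed rate upper-bounds and loosely tracks the entropy rate; the gap reflects the inefficiency of the particular compressor, not a failure of the theorem.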
In the following theorem, if we write f = g + O(h) for functions f, g, h : X →R, we mean that there exists a c ≥0 such that |f(x)−g(x)| < c·h(x) for all x ∈X. This is in contrast to our use of that notation in the following parts, Sections 4.6 and 4.7, where the inequality only needs to hold starting from some threshold value x0 ∈X. 21 We prove the result in Appendix D, Proof 11, with the main ingredients being Hu’s theorems for both Shannon entropy — which follows using Summary 2.17 from Theorem 3.2 — and Chaitin’s prefix-free Kolmogorov complexity (Theorem 4.8). Both together allow a reduction to the well-known special case, Theorem 4.13. Theorem 4.14 (See Proof 11). Let X1, . . . , Xn : ({0, 1}∗)n →{0, 1}∗be the (random) variables given by Xi(x) = xi. Let M = {X1, . . . , Xn}∗/ ∼ = M(X1, . . . , Xn) be the idempotent, com-mutative monoid generated by X1, . . . , Xn, with elements written as XI for I ⊆[n]. Then for all q ≥1 and Y1, . . . , Yq, Z ∈M, the following relation holds: X x∈({0,1}∗)n P(x) · Kcq(Y1; . . . ; Yq | Z)  (x) = Z.Iq(Y1; . . . ; Yq; P) + O K(P)  , (25) where both sides are viewed as functions in computable probability mass functions P : ({0, 1}∗)n → R with finite entropy I1(P) < ∞. Remark 4.15. Similar to Remark 4.9, one can also for this theorem wonder about the dependence on n and q. A similar analysis shows that our techniques lead to a dependence of the form O  2q(q + 1)n log(n + 1) + K(P)  . Corollary 4.16. Assume that (Pm)m∈N is a sequence of computable probability mass functions Pm : ({0, 1}∗)n →R with finite entropy. Additionally, we make the following two assumptions: • Pm has all its probability mass on elements x = (x1, . . . , xn) ∈({0, 1}∗)n with sequence lengths l(xi) = m for all i ∈[n]; • K(Pm) grows sublinearly with m, i.e., lim m→∞ K(Pm) m = 0. Let q ≥1 and Y1, . . . , Yq, Z ∈M be arbitrary. Then the “per-bit” difference between expected interaction complexity and interaction information goes to zero for increasing sequence length: lim m→∞ P x∈({0,1}m)n Pm(x) · Kcq(Y1; . . . ; Yq | Z)  (x) −Z.Iq(Y1; . . . ; Yq; Pm) m = 0. Proof. This follows immediately from Theorem 4.14. Example 4.17. As an example to Corollary 4.16, consider the case that we have n parameters p1, . . . , pn ∈(0, 1) for Bernoulli distributions. Let Pm be the probability mass function given on x ∈({0, 1}m)n by Pm(x) := n Y i=1 P pi m (xi) := n Y i=1 m Y k=1 p x(k) i i · (1 −pi)1−x(k) i . That is, Pm consists of n independent probability mass functions P pi m that correspond to m in-dependent Bernoulli distributions with parameter pi. We have K(Pm) = O(log m) since m is the only moving part in the preceding description for Pm, with p1, . . . , pn being independent of m. Consequently, Corollary 4.16 can be applied, meaning that the per-bit difference between an expected interaction complexity term and the corresponding interaction information goes to zero. This generalizes the observation after , Theorem 10, to n > 1 and more complicated interaction terms. 4.6 Hu’s Theorem for Prefix-Free Kolmogorov Complexity We now argue that there is also a Hu theorem for prefix-free Kolmogorov complexity. It requires a logarithmic error term and is therefore less strong than the corresponding theorem for Chaitin’s prefix-free Kolmogorov complexity. 
Additionally, we need to now use O-notation, since the equal-ities only hold for almost all inputs: for three functions f, g, h : ({0, 1}∗)n →R, different from 22 Section 4.5, we now write f = g + O(h) if there is a constant c ≥0 and a threshold x0 ∈({0, 1}∗)n such that f(x) −g(x) ≤c · h(x) for all x ≥x0. The latter condition means that x is greater than or equal to x0 in at least one entry, where {0, 1}∗is ordered lexicographically. , Exercise 3.9.6, shows the following relation: K(y | x∗) = K(y | x) + O log K(x) + log K(y)  . (26) Overall, this results in the following chain rule for prefix-free Kolmogorov complexity: Theorem 4.18 (Chain Rule for Prefix-Free Kolmogorov Complexity). The following identity holds: K(x, y) = K(x) + K(y | x) + O log K(x) + log K(y)  . (27) Here, both sides are viewed as functions {0, 1}∗× {0, 1}∗→R that map inputs of the form (x, y). Proof. Combine Theorem 4.4 with Equation (26). To get a precise chain rule, we can, similarly to the case of Chaitin’s prefix-free Kolmogorov com-plexity and motivated by Equation (26), define a new equivalence relation ∼K on Maps ({0, 1}∗)n, R  by F ∼K H :⇐ ⇒ F(x) = H(x) + O n X i=1 log K(xi) ! , where x = (x1, . . . , xn) ∈({0, 1}∗)n. We denote the equivalence class of a function F by [F]K ∈Maps ({0, 1}∗)n, R  / ∼K. Then, we again use the monoid M = {X1, . . . , Xn}∗/ ∼and define [K]K : M × M →Maps ({0, 1}∗)n, R  / ∼K, (Y, Z) 7→[K(Y | Z)]K with K(Y | Z) : x 7→K Y (x) | Z(x)  . Again, this is well-defined by the same arguments as in Lemma 4.6, only that this time, we don’t need to use the chain rule in the proof. Furthermore, we can prove an analog of the chain rule given in Proposition 4.5. Proposition 4.19 (See Proof 12). For arbitrary Y, Z ∈M, the following equality [K]K(Y Z) = [K]K(Y ) + [K]K(Z | Y ) of elements in Maps ({0, 1}∗)n, R  / ∼K holds. Thus, [K]K : M × M →Maps ({0, 1}∗)n, R  / ∼K satisfies all conditions of Corollary 3.3 and we obtain a corresponding Hu theorem for prefix-free Kolmogogorov complexity. This could be worked out similarly to Theorem 4.8, which we leave to the interested reader. 4.7 Hu’s Theorem for Plain Kolmogorov Complexity Here, we briefly consider Hu’s theorems for plain Kolmogorov complexity C : {0, 1}∗×{0, 1}∗→R. Recall the O-notation from Section 4.6. The plain Kolmogorov complexity C : {0, 1}∗×{0, 1}∗→R is defined in the same way as prefix-free Kolmogorov complexity, but it allows the set of halting programs to not form a prefix-free set, see , Chapter 2. This version satisfies the following chain rule: Theorem 4.20 (Chain Rule for Plain Kolmogorov Complexity). The following identity holds: C(x, y) = C(x) + C(y | x) + O log C(x, y)  . (28) Here, both sides are viewed as functions {0, 1}∗× {0, 1}∗→R that are defined on inputs of the form (x, y). 23 Proof. This is proved in , Theorem 2.8. To get a precise chain rule, we can, similarly as for (Chaitin’s) prefix-free Kolmogorov com-plexity, define a new equivalence relation ∼C on Maps ({0, 1}∗)n, R  by F ∼C H :⇐ ⇒ F(x) = H(x) + O log C(x)  , where x = (x1, . . . , xn) ∈({0, 1}∗)n. We denote the equivalence class of a function F by [F]C ∈Maps ({0, 1}∗)n, R  / ∼C. Using again the monoid M = {X1, . . . , Xn}∗/ ∼, one can define [C]C : M × M →Maps ({0, 1}∗)n, R  / ∼C (Y, Z) 7→[C(Y | Z)]C with C(Y | Z) : x 7→C Y (x) | Z(x)  . Again, this is well-defined by the same arguments as in Lemma 4.6, and as for prefix-free Kol-mogorov complexity, we do not need to use the chain rule in the proof. 
Furthermore, we can prove an analog of the chain rules given in Proposition 4.5 and Proposition 4.19:

Proposition 4.21 (See Proof 13). For arbitrary $Y, Z \in M$, the equality $[C]_C(YZ) = [C]_C(Y) + [C]_C(Z \mid Y)$ of elements in $\mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big) / \sim_C$ holds.

Thus, $[C]_C : M \times M \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big) / \sim_C$ satisfies all conditions of Corollary 3.3, and we obtain a corresponding Hu theorem for plain Kolmogorov complexity. This could again be worked out similarly to Theorem 4.8.

5 Further Examples of the Generalized Hu Theorem

In this section, we establish further examples of the premises of Theorem 3.2 and Corollary 3.3, which essentially boils down to finding a chain rule for a function with the correct type signature. For the case of Shannon entropy, the premises were summarized in Summary 2.17. We mostly leave investigations of the specific meaning of the resulting higher-order terms to future work, though we do briefly look at the second-degree terms for both the Kullback-Leibler divergence and the generalization error in machine learning.

To keep things simple, we diverge from Section 2 by only working with finite discrete random variables in the cases where the monoid is based on random variables. As a result, we do not have to worry about questions of convergence and can replace $\Delta_f(\Omega)$ by $\Delta(\Omega)$ and $\mathrm{Meas}_{\mathrm{con}}$ by $\mathrm{Meas}$ everywhere.

Concretely, we investigate Tsallis q-entropy (Section 5.1), Kullback-Leibler divergence (Section 5.2), q–Kullback-Leibler divergence (Section 5.3), and cross-entropy (Section 5.4). We also study arbitrary functions on commutative, idempotent monoids (Section 5.5), the special case of submodular information functions (Section 5.6), and the generalization error from machine learning (Section 5.7). Some of the proofs of chain rules can be found in Appendix E. The whole section is written in a self-contained way that requires minimal knowledge from the reader.

5.1 Tsallis q-Entropy

We now investigate the Tsallis q-entropy, following earlier investigations and translating them into our framework. That is, assume a finite, discrete sample space $\Omega$, $n$ finite, discrete random variables $X_1, \ldots, X_n$ on $\Omega$, and the monoid $M(X_1, \ldots, X_n)$ generated by equivalence classes of these random variables, see Definition 2.15. Now, fix an arbitrary number $q \in \mathbb{R} \setminus \{1\}$. Then we define the monoid action $._q : M(X_1, \ldots, X_n) \times \mathrm{Meas}\big(\Delta(\Omega), \mathbb{R}\big) \to \mathrm{Meas}\big(\Delta(\Omega), \mathbb{R}\big)$, given for $X \in M(X_1, \ldots, X_n)$, $F \in \mathrm{Meas}\big(\Delta(\Omega), \mathbb{R}\big)$, and $P \in \Delta(\Omega)$ by
$$(X._q F)(P) := \sum_{x \in E_X} P_X(x)^q \cdot F(P|_{X=x}).$$
This is well-defined — meaning that equivalent random variables act in the same way — by the same arguments as in Proposition 2.11. That it is a monoid action can be proved as in Proposition 2.6. Now, define for arbitrary $q \in \mathbb{R} \setminus \{1\}$ the q-logarithm by
$$\ln_q : (0, \infty) \to \mathbb{R}, \qquad \ln_q(p) := \frac{p^{q-1} - 1}{q - 1}.$$
We have $\lim_{q \to 1} \ln_q(p) = \ln(p)$, as can be seen using l'Hôpital's rule. Finally, we can define the Tsallis q-entropy $I^q_1 : M(X_1, \ldots, X_n) \to \mathrm{Meas}\big(\Delta(\Omega), \mathbb{R}\big)$ by
$$\big(I^q_1(X)\big)(P) := -\sum_{x \in E_X} P_X(x) \ln_q P_X(x) = \frac{\sum_{x \in E_X} P_X(x)^q - 1}{1 - q}.$$
This can be shown to be well-defined similarly as in Proposition 2.10. Since $\lim_{q \to 1} \ln_q p = \ln p$, we also have $\lim_{q \to 1} I^q_1(X; P) = I_1(X; P)$; that is, the q-entropy generalizes the Shannon entropy. The following chain rule guarantees the existence of a corresponding Hu theorem.

Proposition 5.1 (See Proof 14). $I^q_1 : M(X_1, \ldots, X_n) \to \mathrm{Meas}\big(\Delta(\Omega), \mathbb{R}\big)$ satisfies the chain rule $I^q_1(XY) = I^q_1(X) + X._q I^q_1(Y)$ for all $X, Y \in M(X_1, \ldots, X_n)$.
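Proposition 5.1 can be checked numerically. The following sketch (ours; the joint distribution is an arbitrary toy example) computes both sides of the chain rule $I^q_1(XY) = I^q_1(X) + X._q I^q_1(Y)$ for $q = 2$.

```python
from collections import defaultdict

# A toy joint distribution P(X, Y) (chosen for illustration).
P = {('a', 'a'): 0.5, ('a', 'b'): 0.2, ('b', 'a'): 0.1, ('b', 'b'): 0.2}
q = 2.0  # any q != 1

def tsallis(dist, q):
    """Tsallis q-entropy I^q_1 of a probability mass function."""
    return (sum(p ** q for p in dist.values()) - 1) / (1 - q)

# Marginal P_X and the conditionals P|_{X=x}.
PX = defaultdict(float)
for (x, _), p in P.items():
    PX[x] += p
cond = {x: {y: P[(x2, y)] / PX[x] for (x2, y) in P if x2 == x} for x in PX}

lhs = tsallis(P, q)                                                # I^q_1(XY)
rhs = tsallis(PX, q) + sum(PX[x] ** q * tsallis(cond[x], q) for x in PX)
assert abs(lhs - rhs) < 1e-12
print("Tsallis chain rule holds:", round(lhs, 6))
```

Note the characteristic weight $P_X(x)^q$ in the conditional term, which is exactly the modified monoid action $._q$.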
5.2 Kullback-Leibler Divergence In this section, we study the chain rule of Kullback-Leibler divergence. It resembles the one described in , chapter 3.7, in the language of information cohomology. A more elementary formulation of the chain rule can also be found in , Theorem 2.5.3, which is applied in their Section 4.4 to prove a version of the second law of thermodynamics. In the end, we will also briefly study and interpret KL divergence of degree 2, in analogy to mutual information I2, in Example 5.3. Let again the monoid M(X1, . . . , Xn) of n discrete random variables on Ωbe given. For P, Q ∈ ∆(Ω), we write P ≪Q if for all ω ∈Ω, the following implication is true: Q(ω) = 0 = ⇒P(ω) = 0. In the literature, P is then called absolutely continuous with respect to the measure Q. We set ^ ∆(Ω)2 := n (P, Q) ∈∆(Ω)2 P ≪Q o . We will silently make use of the fact that P ≪Q implies PX ≪QX and P|X=x ≪Q|X=x for all discrete random variables X : Ω→EX and x ∈EX. We now define G := Meas  ^ ∆(Ω)2, R  . We write elements F ∈G applied to inputs (P, Q) as F(P∥Q). Define for X ∈M(X1, . . . , Xn) and F ∈Meas  ^ ∆(Ω)2, R  , and P ≪Q ∈∆(Ω) the monoid action by: (X.F)(P∥Q) := X x∈EX PX(x)F P|X=x∥Q|X=x  . Similarly as before, this is a well-defined, additive monoid action. In the following, we use the convention that 0 · x = 0 for x ∈R ∪{±∞} and ln(0) = −∞. Finally, we define the function D1 : M(X1, . . . , Xn) →Meas  ^ ∆(Ω)2, R  as the Kullback-Leibler divergence, given for all X ∈ M(X1, . . . , Xn) and P ≪Q ∈∆(Ω) by D1(X) (P∥Q) := D1 X; P∥Q  := − X x∈EX PX(x) ln QX(x) PX(x) . This is well-defined, and we obtain: 25 0 0 0 0 1 1 1 1 X Y X Y 1/2 1/2 1−ϵ ϵ 1/2 1/2 1−ϵ ϵ P (Y |X) Q(Y |X) Figure 5: Binary symmetric channels for the joint distributions P and Q in Example 5.3. For a uniform prior P(X) = Q(X), P and Q have the same marginals P(Y ) = Q(Y ), but differ in their conditionals P(Y | X) and Q(Y | X). This leads for small ϵ > 0 to an arbitrarily large negative mutual Kullback-Leibler divergence D2(X; Y ) (P∥Q). Proposition 5.2 (See Proof 15). D1 : M(X1, . . . , Xn) →Meas  ^ ∆(Ω)2, R  satisfies the chain rule for all X, Y ∈M(X1, . . . , Xn): D1(XY ) = D1(X) + X.D1(Y ). Example 5.3. In , the following situation is discussed: X and Y are finite sets, and Ω= X ×Y. One can consider the two marginal variables X : X × Y →X, (x, y) 7→x, Y : X × Y →Y, (x, y) 7→y. A channel from X to Y is a conditional distribution P(Y | X). Together with a prior distribution P(X), it forms a joint P(X, Y ) over X × Y. Now, take two distributions P ≪Q ∈∆(X × Y). Then, as noted in , the chain rule Proposition 5.2 shows the following: D1 P∥Q  = D1 P(X)∥Q(X)  + X x∈X P(x) · D1 P(Y | x)∥Q(Y | x)  . Note that for ease of notation, we write P(X) for PX, D1 P(X)∥Q(X)  for D1(X) (P∥Q), P(x) for PX(x), P(Y | x) for (P|X=x)Y , etc. In our context, the “mutual Kullback-Leibler divergence” D2(X; Y ) is of interest. With respect to P and Q, it is given according to Equation (7) and using symmetry of D2 (which follows from Theorem 3.2 due to set operations being symmetric) as follows: D2(X; Y ) (P∥Q) = D1 P(Y )∥Q(Y )  − X x∈X P(x) · D1 P(Y | x)∥Q(Y | x)  . It is well-known that a simple use of Jensen’s inequality proves the non-negativity of the Kullback-Leibler divergence D1. We also know that mutual information I2 is non-negative. Can the same be said about the mutual Kullback-Leibler divergence D2? The answer is no. Consider the case X = Y = {0, 1}, and let the prior distributions P(X) = Q(X) both be uniform. 
Furthermore, let $P(Y \mid X)$ and $Q(Y \mid X)$ be binary symmetric channels (see Section 7.1.4 of the references), given as in Figure 5. Note that the marginal distributions $P(Y)$ and $Q(Y)$ are identical, and so $D_1\big(P(Y) \,\|\, Q(Y)\big) = 0$. We now work with binary logarithms $\log$. For the second term, we then obtain
$$\sum_{x \in \{0,1\}} P(x) \cdot D_1\big(P(Y \mid x) \,\|\, Q(Y \mid x)\big) = \sum_{x \in \{0,1\}} P(x) \sum_{y \in \{0,1\}} P(y \mid x) \log \frac{P(y \mid x)}{Q(y \mid x)}$$
$$= \frac{1}{4} \cdot \left[ \log \frac{P(0 \mid 0)}{Q(0 \mid 0)} + \log \frac{P(1 \mid 0)}{Q(1 \mid 0)} + \log \frac{P(0 \mid 1)}{Q(0 \mid 1)} + \log \frac{P(1 \mid 1)}{Q(1 \mid 1)} \right]$$
$$= \frac{1}{4} \cdot \big[ -4 - 2\log(1 - \epsilon) - 2\log(\epsilon) \big] = -1 - \frac{1}{2} \cdot \big[ \log(1 - \epsilon) + \log(\epsilon) \big].$$
Note that for very small $\epsilon$, $\log(1 - \epsilon)$ becomes negligible and $\log(\epsilon)$ approaches $-\infty$, and so the term above approaches $+\infty$. Overall, this means that
$$D_2(X; Y)(P \| Q) = -\sum_{x \in \{0,1\}} P(x) \cdot D_1\big(P(Y \mid x) \,\|\, Q(Y \mid x)\big) < 0$$
is negative, and even unbounded, reaching $-\infty$ as $Q$ becomes deterministic.

We can compare this conceptually to mutual information as follows: $I_2(X; Y)$ is the average reduction of uncertainty in $Y$ when learning about $X$. Similarly, we can interpret $D_2(X; Y)$ as the average reduction of the Kullback-Leibler divergence between the two marginal distributions of $Y$ when learning about $X$. In this case, however, the divergence only becomes visible once the value of $X$ is known, since there is no difference in the marginals $P(Y)$ and $Q(Y)$. Thus, the "reduction" is actually negative.

5.3 q–Kullback-Leibler Divergence

Similarly to the Tsallis q-entropy from Section 5.1, one can also define a q–Kullback-Leibler divergence.⁹ The monoid action $._q : M(X_1, \ldots, X_n) \times \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big) \to \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big)$ is now given by
$$(X._q F)(P \| Q) := \sum_{x \in E_X} P_X(x)^q\, Q_X(x)^{1-q} \cdot F\big(P|_{X=x} \,\|\, Q|_{X=x}\big).$$
Now, we define the q–Kullback-Leibler divergence $D^q_1 : M(X_1, \ldots, X_n) \to \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big)$ for all $X \in M(X_1, \ldots, X_n)$ and $P \ll Q \in \Delta(\Omega)$ as the following generalization of the standard Kullback-Leibler divergence:
$$\big(D^q_1(X)\big)(P \| Q) := \sum_{x \in E_X} P_X(x) \ln_q \frac{P_X(x)}{Q_X(x)} = \frac{\sum_{x \in E_X} P_X(x)^q\, Q_X(x)^{1-q} - 1}{q - 1}.$$

⁹Our definition differs from the one given in the literature by using a slightly different definition of the q-logarithm. We did this to be consistent with the definition of the Tsallis q-entropy above.

Proposition 5.4 (See Proof 16). $D^q_1 : M(X_1, \ldots, X_n) \to \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big)$ satisfies the chain rule $D^q_1(XY) = D^q_1(X) + X._q D^q_1(Y)$ for all $X, Y \in M(X_1, \ldots, X_n)$.

5.4 Cross-Entropy

We choose the same monoid action $. : M(X_1, \ldots, X_n) \times \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big) \to \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big)$ as for the Kullback-Leibler divergence. The cross-entropy $C_1 : M(X_1, \ldots, X_n) \to \mathrm{Meas}\big(\widetilde{\Delta(\Omega)^2}, \mathbb{R}\big)$ is given by
$$\big(C_1(X)\big)(P \| Q) := C_1(X; P \| Q) := -\sum_{x \in E_X} P_X(x) \ln Q_X(x).$$

Proposition 5.5. $C_1$ satisfies the chain rule $C_1(XY) = C_1(X) + X.C_1(Y)$ for all $X, Y \in M(X_1, \ldots, X_n)$.

Proof. This follows with the same arguments as Proposition 5.2.

Remark 5.6. One can easily show the following well-known relation between the cross-entropy $C_1$, Shannon entropy $I_1$, and Kullback-Leibler divergence $D_1$:
$$\big(C_1(X)\big)(P \| Q) = \big(I_1(X)\big)(P) + \big(D_1(X)\big)(P \| Q).$$
This means that the study of $C_q$ is entirely subsumed by that of $I_q$ and $D_q$. Since we already looked at $D_2$ in Example 5.3, we omit looking at $C_2$ here.

5.5 Arbitrary Functions on Commutative, Idempotent Monoids

Let $M$ be any commutative monoid, and let $R : M \to G$ be any function into an abelian group $G$. Define the two-argument function $R_1 : M \times M \to G$ by $R_1(A \mid B) := R(AB) - R(B)$, and set $R_1(A) := R_1(A \mid 1) = R(A) - R(1)$, where $1 \in M$ is the neutral element. These definitions mean that the chain rule is satisfied by definition, making Hu's theorem a purely combinatorial fact.
The reader can verify the following proposition: Proposition 5.7. R1 : M × M →G satisfies the chain rule R1(AB) = R1(A) + R1(B | A) for all A, B ∈M. Therefore, if M is also idempotent and finitely generated, then R1 : M × M →G satisfies all conditions of Corollary 3.3, and one obtains a corresponding Hu theorem. 5.6 Submodular Information Functions Using the framework of Section 5.5, we can study the submodular information functions from [22, 38, 53], which they use to formulate generalizations of conditional independence and the causal Markov condition. Alternatively, we could also analyze general submodular set functions , but decided to restrict to submodular information functions since they are closer to our interests. Recall that a lattice is a tuple L = (L, ∨, ∧) consisting of a set L together with commutative, associative, and idempotent operations ∨, ∧: L×L →L that satisfy the absorption rules a∨(a∧b) = a and a∧(a∨b) = a. Given a lattice L, one can define a corresponding partial order on L by a ≤b if a = a ∧b. From now on, let (L, ∨, ∧) be a finite lattice, meaning that L is a finite set. One can define 0 := V a∈L a, the meet of the finitely many elements in L. By the axioms above, this is neutral with respect to the join operation, that is b ∨0 = b. Note that 0 ∧b = 0 for all b ∈L due to the second absorption rule above. Consequently, 0 ≤b for all b ∈L. [22, 38, 53] then study the concept of a submodular (information) function. We follow the version outlined in : Definition 5.8 (Submodular Information Function). Let L be a finite lattice. Then a function R : L →R is called a submodular information function if all of the following conditions hold for all a, b ∈L: 1. normalization: R(0) = 0; 2. monotonicity: a ≤b implies R(a) ≤R(b); 3. submodularity: R(a) + R(b) ≥R(a ∨b) + R(a ∧b). In particular, the second property implies R(b) ≥R(0) = 0, meaning R is non-negative. They then define the conditional R1 : L×L →R by R1(a | b) := R(a∨b)−R(a). Furthermore, to define conditional independence and obtain a generalized causal Markov condition, they define the conditional mutual information I : L2 × L →R by I(a; b | c) := R(a ∨c) + R(b ∨c) −R(a ∨b ∨c) −R(c). 28 Now, note that (L, ∨, 0) is a finitely generated, commutative, idempotent monoid. Thus, Proposi-tion 5.7 shows that R1 gives rise to Hu’s theorem for higher-order functions R2, R3, . . . , as defined in Corollary 3.3. We can easily see that R2 agrees with the definition of I from above: R2(a; b | c) := R1(a | c) −R1(a | b ∨c) = R(a ∨c) −R(c) −R(a ∨b ∨c) + R(b ∨c) = I(a; b | c). As special cases of submodular information functions, consider Shannon entropy on sets of ran-dom variables, Chaitin’s prefix-free Kolmogorov complexity, other compression-based information functions, period lengths of time series, and the size of a vocabulary in a text. 5.7 Generalization Error Before coming to the generalization error, we briefly consider the situation dual to that of Sec-tion 5.5. Let M be a commutative, idempotent monoid. Let G be an abelian group and E : M →G be any function. Define Ad : M × M →G by Ad(A | B) := E(B) −E(AB) Here, Ad stands intu-itively for “advantage”, a terminology that becomes clear in the machine learning example below. Similarly as in the case of Kolmogorov complexity, define Ad(A) := Ad(A | 1) = E(1) −E(A). The reader can easily verify the chain rule: Proposition 5.9. Ad : M × M →G satisfies the chain rule: one has Ad(AB) = Ad(A) + Ad(B | A) for all A, B ∈M. 
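The following sketch (ours; the error values $E$ are arbitrary placeholder numbers, not from any actual learning task) verifies Proposition 5.9 on the monoid $(2^J, \cup, \emptyset)$ and computes the degree-2 "mutual advantage" term obtained from Equation (11).

```python
from itertools import chain, combinations

J = frozenset({1, 2, 3})

def subsets(S):
    """All subsets of S, including the empty set, as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

# An arbitrary monotone "error" function E on the monoid (2^J, union, empty set);
# the values are placeholders chosen only so that more features mean less error.
E = {A: 3.0 / (1 + len(A)) for A in subsets(J)}

def Ad(A, B=frozenset()):
    """Ad(A | B) := E(B) - E(A union B); Ad(A) := Ad(A | empty set)."""
    return E[B] - E[A | B]

# Proposition 5.9: Ad(A union B) = Ad(A) + Ad(B | A) holds for every pair A, B.
for A in subsets(J):
    for B in subsets(J):
        assert abs(Ad(A | B) - (Ad(A) + Ad(B, A))) < 1e-12

def Ad2(A, B, C=frozenset()):
    """The degree-2 term of Equation (11): Ad2(A; B | C) = Ad(A | C) - Ad(A | B union C)."""
    return Ad(A, C) - Ad(A, B | C)

print("Ad2({1}; {2}) =", Ad2(frozenset({1}), frozenset({2})))
```

The chain rule here is a telescoping identity, which is exactly why an arbitrary $E$ suffices; whether $\mathrm{Ad}_2$ is positive or negative, by contrast, depends on the specific $E$, as Example 5.10 below shows.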
Consequently, Ad : M × M →G satisfies the assumptions of Corollary 3.3. One then obtains a corresponding Hu theorem. We now specialize this investigation to the generalization error from machine learning [37, 47]. In this case, let J = [n] be a finite set and the monoid be given by 2J = (2J, ∪, ∅), i.e., ∪is the operation and ∅the neutral element. This monoid is idempotent, commutative, and finitely generated by {1}, . . . , {n}. For all j ∈J, let Xj be a measurable space. Let (Xj)j∈J be the random variable of feature tuples with values in Q j∈J Xj. Similarly, let Y be another measurable space and Y the random variable of labels in Y. A typical assumption is that there exists a joint distribution P := P (Xj)j∈J, Y  from which “the world samples the data”. Additionally, let ∆(Y) be the space of probability measures on Y, and L : ∆(Y) × Y →R := R ∪{+∞} a loss function that compares a model distribution over labels to the true label. For all A ⊆J, assume that F(A) ⊆Maps Q a∈A Xa, ∆(Y)  is a class of functions10 that, given a feature tuple with indices in A, predicts a distribution over Y. We call this the set of hypotheses for predicting the labels given features in A. For a hypothesis q ∈F(A) and xA ∈Q a∈A Xa, we denote the output by q(Y | xA) := q(xA) ∈∆(Y). A learning algorithm with access to features in A is supposed to find a hypothesis q ∈F(A) that minimizes the generalization error: E(A) := inf q∈F(A) E(ˆ x,ˆ y)∼P h L q(Y | ˆ xA) ∥ˆ y i . Then, as above, define AdY : 2J × 2J →R by AdY XA | XB  := E(B) −E(A ∪B).11 From Proposition 5.9, we obtain the following chain rule: AdY (XA∪B) = AdY (XA) + AdY (XB | XA). (29) 10Further below, we will make the assumption that A ⊆B implies F(A) ⊆F(B), in a suitable sense. We make no other assumptions on the collection of F(A) for A ⊆J. 11There is a one-to-one correspondence between all A ∈2J and all variables XA with A ∈2J. We simply denote the monoid of all XA again by 2J, with the multiplication rule becoming XAXB = XA∪B. 29 To interpret this chain rule sensibly, we make one further assumption: namely that, when having access to more features, the learning algorithm can still use all hypotheses that simply ignore these additional features. More precisely, for B ⊆C ⊆J, let us interpret each map qB ∈F(B) as a function f qB : Q c∈C Xc →∆(Y) by f qB (xc)c∈C  := qB (xb)b∈B  . The assumption is that f qB ∈F(C), for all B ⊆C ⊆J and qB ∈F(B). Overall, we can interpret this as F(B) ⊆F(C). It follows that E(B) ≥E(C). Consequently, for all A, B ⊆J (without any inclusion imposed), it follows AdY (XA | XB) = E(B) −E(A ∪B) ≥0. (30) The meaning of this is straightforward: AdY (XA | XB) measures what a perfect learning algorithm can gain from knowing all the features in A if it already has access to all the features in B — the advantage motivating the notation AdY (XA | XB). The chain rule, Equation (29), thus says the following: for a perfect learning algorithm, the advantage from getting access to features in A ∪B equals the advantage it receives from the features in A, plus the advantage it receives from B when it already has access to A. We can then ask: is then the “mutual advantage”, as defined from Equation (11) by Ad2 Y (XA; XB) := AdY (XA) −AdY (XA | XB), necessarily also positive, as we expect from the case of entropy and mutual information? The answer is no, as the following simple example shows: Example 5.10. 
6 Discussion

6.1 Major Findings: a Generalization of Hu's Theorem and its Applications

In this work, we have systematically abstracted away from the details of Shannon's information theory [48, 49] to generalize Hu's theorem to new situations. To obtain information diagrams, one simply needs a finitely generated commutative, idempotent monoid $M$ — also known under the name of a join-semilattice — acting additively on an abelian group $G$, and a function $F_1 : M \to G$ satisfying the chain rule of information:
$$F_1(XY) = F_1(X) + X.F_1(Y).$$
Alternatively, with $M$ and $G$ as above, the additive monoid action and $F_1$ together can be replaced by a two-argument function $K_1 : M \times M \to G$ satisfying the chain rule
$$K_1(XY) = K_1(X) + K_1(Y \mid X).$$
The proof of the main result — Theorem 3.2 together with Corollary 3.3 — is very similar to the known proof for the case of Shannon entropy; the main insight is that it is possible to express the basic atoms of an information diagram with an inclusion-exclusion type expression over "unions of disks":
$$\mu(p_I) = \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K| + |I| + 1 - n} \cdot F_1(X_K) = \sum_{K \subseteq I} (-1)^{|K| + 1} \cdot F_1(X_K X_{I^c}).$$
This formula is visually motivated in Section 3.2, and evaluated numerically in the sketch below. Relations to different interaction terms are explored in Section 3.3.
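As an illustration (our own, with an arbitrary toy distribution), the following sketch evaluates the inclusion-exclusion formula for the atoms $\mu(p_I)$ with $F_1$ taken to be Shannon entropy and $n = 2$; the three atoms come out as $H(X_1 \mid X_2)$, $H(X_2 \mid X_1)$, and $I(X_1; X_2)$, as expected from the classical information diagram.

```python
import math
from itertools import combinations

# A small joint distribution over (X1, X2); any finite table would do here.
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
n = 2

def H(K):
    """Shannon entropy (bits) of the marginal on the index set K, a subset of {1, 2}."""
    if not K:
        return 0.0
    marg = {}
    for x, p in P.items():
        key = tuple(x[k - 1] for k in sorted(K))
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values() if p > 0)

def atom(I):
    """mu(p_I) = sum over nonempty K containing I^c of (-1)^(|K|+|I|+1-n) * H(X_K)."""
    Ic = set(range(1, n + 1)) - set(I)
    rest = set(range(1, n + 1)) - Ic
    total = 0.0
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            K = Ic | set(extra)
            if K:
                total += (-1) ** (len(K) + len(I) + 1 - n) * H(K)
    return total

print(atom({1}), atom({2}), atom({1, 2}))  # H(X1|X2), H(X2|X1), I(X1;X2)
```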
With the monoid given by equivalence classes of (countably infinite) discrete random variables, the abelian group by measurable functions on probability measures, and the additive monoid action by the conditioning of information functions, we recover information diagrams for Shannon entropy, see Summary 2.17. Beyond this classical case, we obtained Hu theorems for several versions of Kolmogorov complexity (Section 4), Tsallis $q$-entropy [55], Kullback-Leibler divergence, $q$-Kullback-Leibler divergence, cross-entropy [25], general functions on commutative, idempotent monoids, submodular information functions [22, 38, 53], and the generalization error from machine learning [37, 47] (all in Section 5). For Kolmogorov complexity, we generalized the well-known theme that "expected Kolmogorov complexity is close to Shannon entropy":
$$\text{"expected interaction complexity"} \approx \text{"interaction information"}.$$
For well-behaved probability distributions, this results, in the limit of infinite sequence length, in an actual equality of the per-bit quantities for the two concepts (Section 4.5).

6.2 The Cohomological Context of this Work

The main context in which our ideas developed is information cohomology [5, 9, 57, 58]. The setup of that work mainly differs by using partition lattices instead of equivalence classes of random variables, and by generalizing this further to so-called information structures. The functions satisfying the chain rule are reformulated as so-called "cocycles" in that cohomology theory, which are "cochains" whose "coboundary" vanishes:
$$(\delta F_1)(X; Y) := X.F_1(Y) - F_1(XY) + F_1(X) = 0.$$
That gives these functions a context in the realm of the many cohomology theories that were successfully developed in mathematics. The one defined by Gerhard Hochschild for associative algebras is maybe most closely related [30]. For the special case of probabilistic information cohomology, [5, 57] were able to show that Shannon entropy is not only a cocycle, but is in some precise sense the unique cocycle generating all others of degree 1. Thus, Shannon entropy finds a fully cohomological interpretation. Arguably, without the abstract nature of that work and the consistent emphasis on abstract structures like monoids and monoid actions, our work would not have been possible.

There is one way in which information cohomology tries to go beyond Shannon information theory: it tries to find higher degree cocycles that differ from the interaction terms $F_q$. This largely unsolved task has seen preliminary investigations in the information cohomology literature cited above. In that sense, information cohomology can be viewed as a generalization of Hu's theorem. Since some limitations in the expressiveness of interaction information are well-known [33], we welcome any effort to make progress on that task.

6.3 Unanswered Questions and Future Directions

Further generalizations. On the theoretical front, it should be possible to generalize Hu's theorem further from commutative, idempotent monoids to what have been called conditional meet semi-lattices. As these are locally commutative, idempotent monoids, the generalization can probably directly use our result.

A transport of ideas. More practically, we hope that the generalization of Hu's theorem leads to a transport of ideas from the theory of Shannon entropy to other functions satisfying the chain rule. As a first small step in this direction, the cocycle equation above can at least be checked mechanically; see the sketch below.
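The cocycle condition is concrete enough to test numerically. The following sketch (ours, with an arbitrary random joint distribution) verifies $(\delta H)(X; Y) = X.H(Y) - H(XY) + H(X) = 0$ for Shannon entropy, i.e., the classical chain rule written in coboundary form.

```python
import math
import random

random.seed(2)
# Random joint pmf P(x, y) on {0,1,2} x {0,1,2}.
w = [[random.random() for _ in range(3)] for _ in range(3)]
Z = sum(map(sum, w))
P = [[v / Z for v in row] for row in w]

H = lambda ps: -sum(p * math.log(p) for p in ps if p > 0)
pX = [sum(row) for row in P]
HX = H(pX)
HXY = H([v for row in P for v in row])
# The conditioning action: (X.H)(Y) = sum over x of P(x) * H(Y | X = x).
X_HY = sum(pX[i] * H([v / pX[i] for v in P[i]]) for i in range(3))
# Coboundary of the 1-cochain H: (delta H)(X; Y) = X.H(Y) - H(XY) + H(X).
print(abs(X_HY - HXY + HX) < 1e-12)  # True: H is a 1-cocycle
```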
There are many works that study information-theoretic concepts based on the interaction information functions, and thus ultimately on Shannon entropy, for example O-information [27, 43], total correlation [60], dual total correlation [29], and information paths [3, 6]. All of these can readily be defined for functions satisfying the chain rule that go beyond Shannon entropy, and can thus be generalized to all the example applications in Sections 4 and 5. Most of the basic algebraic properties should carry over, since they often follow from Hu's theorem itself. It is our hope that studying such quantities in greater generality may lead to new insights into the newly established application areas of Hu's theorem.

Additionally, it should not be forgotten that even Shannon interaction information itself deserves to be better understood. Understanding these interaction terms in a more general context could help to resolve some of the persisting confusions about the topic. One of them surrounds the possible negativity of the interaction information $I_3(X; Y; Z)$ of three (and more) random variables [4, 8], which is sometimes understood as meaning that there is more synergy than redundancy present [61, 62]. Similarly, we saw in Example 5.10 that the mutual feature advantage $\mathrm{Ad}^2_Y(X_A; X_B)$ can be negative as well, which has a clear interpretation in terms of synergy. Example 5.3 shows that the mutual Kullback-Leibler divergence $D_2(X; Y)$ of two distributions $P \ll Q$ can be negative if knowing $X$ "reveals" the divergence of $P$ and $Q$ in $Y$. We would welcome more analysis in this direction, ideally in a way that transcends any particular application and could thus shed new light on the meaning of classical interaction information. (A minimal numerical instance of negative $I_3$ is given in the sketch below.)

Further chain rules. It goes without saying that we were likely not successful in finding all functions satisfying a chain rule. One interesting candidate seems to be differential entropy $h$ ([17], Theorem 8.6.2):
$$h(X, Y) = h(X) + h(Y \mid X).$$
However, it seems to us that differential entropy is not well-behaved. For example, if $X$ is a random variable with values in $\mathbb{R}$, then even if $h(X)$ exists, the differential entropy of the joint variable $(X, X)$ with values in $\mathbb{R}^2$ is negative infinity: $h(X, X) = -\infty$. In particular, we have $h(X) \neq h(X, X)$, and so Hu's theorem cannot hold. As clarified, for example, in [59], differential entropy is measured relative to a given base measure. Given that $(X, X)$ takes values only in the diagonal of $\mathbb{R}^2$, which has measure $0$, this explains why the differential entropy degenerates. To remedy this, one would need to change the base measure to also live on the diagonal; it is unclear to us how to interpret this, or whether a resulting Hu theorem could indeed be deduced.

Another possible candidate is quantum entropy, also called von Neumann entropy, which also allows for a conditional version that satisfies a chain rule (see [13, 14]). Interestingly, conditional quantum entropy, also called partial quantum information, can be negative [13, 31], which contrasts it with classical Shannon entropy. In analogy to the Kullback-Leibler divergence (Section 5.2), quantum entropy also admits a relative version, which has many applications in quantum information theory [56]. In [23], a chain rule for quantum relative entropy was proven, which, however, is an inequality. In [39], Proposition 1 and Example 1, one can find a chain rule–type statement for quantum relative entropy that generalizes the one for non-relative quantum conditional entropy. We leave the precise meaning or interpretation of these results in the context of our work to future investigations.
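To complement the negativity discussion, the following sketch (ours) computes the interaction information of the XOR triple directly from entropies, with the sign convention $I_3(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)$, under which XOR yields $-1$ bit: more synergy than redundancy.

```python
import math
from itertools import product

# XOR triple: X, Y independent fair bits, Z = X xor Y.
P = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def H(idx):
    """Entropy (bits) of the marginal on the coordinate indices in idx."""
    marg = {}
    for x, p in P.items():
        key = tuple(x[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marg.values())

# Interaction information of three variables via inclusion-exclusion.
I3 = (H((0,)) + H((1,)) + H((2,))
      - H((0, 1)) - H((0, 2)) - H((1, 2))
      + H((0, 1, 2)))
print(I3)  # -1.0: pure synergy, no pairwise dependence
```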
Kolmogorov complexity and information decompositions. In the context of Kolmogorov complexity, we would welcome a more thorough analysis of the size of the constants involved in Theorems 4.8 and 4.14. More precisely, it would be worthwhile to improve on the dependence on $q$ or $n$ that we explain in Remarks 4.9 and 4.15.

More broadly, one could try to understand complex interactions that go beyond interaction information in the context of Kolmogorov complexity (or in the context of any other of the application areas in Section 5 of our generalized Hu theorem). For example, partial information decomposition (PID) [61, 62] aims to complement the usual information functions with unique information, shared information, and complementary information. (The privately communicated version [62] of [61] has a stronger emphasis on the axiomatic framework and is more up to date.) It argues that the mutual information of a random variable $Z$ with a joint variable $(X, Y)$ can be decomposed as follows:
$$I_2\big((X, Y); Z\big) = \underbrace{UI(X \setminus Y; Z)}_{\text{unique}} + \underbrace{UI(Y \setminus X; Z)}_{\text{unique}} + \underbrace{SI(X, Y; Z)}_{\text{shared}} + \underbrace{CI(X, Y; Z)}_{\text{complementary}}.$$
Here, $UI(X \setminus Y; Z)$ is the information that $X$ provides about $Z$ that is not also contained in $Y$; $SI(X, Y; Z)$ is the information that $X$ and $Y$ both share about $Z$; and finally, $CI(X, Y; Z)$ is the information that $X$ and $Y$ can only together provide about $Z$, but neither on its own. $SI$ is also called "redundant information", and $CI$ "synergistic information". This then leads to an interpretation of interaction information as a difference of shared and complementary information:
$$I_3(X, Y, Z) = \underbrace{SI(X, Y; Z)}_{\text{shared}} - \underbrace{CI(X, Y; Z)}_{\text{complementary}}.$$
While it is known that such functions exist, no proposal has yet satisfied all axioms that are considered desirable. In this sense, the search for shared, redundant, and synergistic information in the framework of PID is still ongoing. See also [10, 24, 40] for related work.

We could imagine that attempting a similar decomposition for Kolmogorov complexity could provide new insights. To argue that this might be possible, we can look, for example, at the thought experiment of $x$ and $y$ being binary strings encoding physical theories, and $z$ being a binary string containing data about a physical phenomenon. Then a hypothesized "algorithmic complementary information" $CI(x, y; z)$ would intuitively be high if the theories $x$ and $y$ only together allow explaining (parts of) the data $z$; a high shared information $SI(x, y; z)$ would mean that $x$ and $y$ are theories that are equally able to explain (parts of) the data in $z$. One hope is that averaging such quantities leads to a partial information decomposition in the usual information-theoretic sense, thus providing a new bridge that helps with the transport of ideas between fields:
$$\text{"expected algorithmic PID"} \overset{?}{\approx} \text{"PID"}.$$

6.4 Conclusion

To restate our main finding, we can say: whenever you find a chain rule $F_1(XY) = F_1(X) + X.F_1(Y)$, you will, under mild conditions, obtain information diagrams. Most of their implications are yet to be understood.
Appendix

A Measure Theory for Countable Discrete Spaces

In this section, we investigate some technical details related to the measurability of certain functions. For more background on measure theory, any book on the topic suffices, for example [45] and [54]. As the results are elementary, we leave most of them to the reader to prove.

Recall that for a measurable space $Z$, the space of probability measures $\Delta(Z)$ on $Z$ carries the smallest $\sigma$-algebra that makes all evaluation maps $\mathrm{ev}_A : \Delta(Z) \to [0, 1]$, $P \mapsto P(A)$, for measurable $A \subseteq Z$, measurable. Also recall that discrete random variables are functions $X : \Omega \to E_X$ such that both $\Omega$ and $E_X$ are discrete, meaning they are countable and all of their subsets are measurable. Finally, recall that for a discrete sample space $\Omega$, $\Delta_f(\Omega)$ is the measurable subspace of probability measures $P \in \Delta(\Omega)$ with finite Shannon entropy $H(P)$.

Lemma A.1. Let $X : \Omega \to E_X$ be a random variable. Then the function $X_* : \Delta(\Omega) \to \Delta(E_X)$, $P \mapsto \big(P_X : A \mapsto P(X^{-1}(A))\big)$, is measurable.

Proof. This is elementary and left to the reader to prove.

To investigate the measurability of the Shannon entropy function and "conditioned" information functions, we need the result that pointwise limits of measurable functions are again measurable:

Lemma A.2. Let $(f_n)_{n \in \mathbb{N}}$ be a sequence of measurable functions $f_n : X \to \mathbb{R}$ from a measurable space $X$ to the real numbers $\mathbb{R}$. Assume that the pointwise limit function $f : X \to \mathbb{R}$, $x \mapsto \lim_{n \to \infty} f_n(x)$, exists. Then $f$ is also measurable.

Proof. See [45], Corollary 8.10.

Corollary A.3. Let $X : \Omega \to E_X$ be a discrete random variable. Then the corresponding Shannon entropy function
$$H(X) : \Delta_f(\Omega) \to \mathbb{R}, \qquad P \mapsto H(X; P) := -\sum_{x \in E_X} P_X(x) \ln P_X(x)$$
is measurable.

Proof. We already know from Lemma A.1 that the function $P \mapsto P_X$ is measurable. Therefore, we can reduce to the case $X = \mathrm{id}_\Omega$, i.e., we need to show that the function
$$H : \Delta_f(\Omega) \to \mathbb{R}, \qquad P \mapsto -\sum_{\omega \in \Omega} P(\omega) \ln P(\omega)$$
is measurable. Note that $P(\omega) = \mathrm{ev}_\omega(P)$, and $\mathrm{ev}_\omega$ is measurable by definition of the $\sigma$-algebra on $\Delta_f(\Omega)$. Also, $\ln : \mathbb{R}_{>0} \to \mathbb{R}$ is known to be measurable. Since limits of measurable functions are measurable by Lemma A.2, the result follows.

Lemma A.4. Let $X : \Omega \to E_X$ be a discrete random variable and $x \in E_X$ any element. Then the function $(\cdot)|_{X=x} : \Delta(\Omega) \to \Delta(\Omega)$, $P \mapsto P|_{X=x}$, with $P|_{X=x}$ defined as in Equation (1), is measurable.

Proof. This is elementary and left to the reader to prove.

Corollary A.5. Let $\Omega$ be a discrete measurable space and $F : \Delta_f(\Omega) \to \mathbb{R}$ a conditionable measurable function, meaning that for all discrete random variables $X : \Omega \to E_X$ and all $P \in \Delta_f(\Omega)$, the series
$$(X.F)(P) = \sum_{x \in E_X} P_X(x) \cdot F\big(P|_{X=x}\big)$$
converges unconditionally. Then the function $X.F : \Delta_f(\Omega) \to \mathbb{R}$ is also measurable.

Proof. We have
$$(X.F)(P) = \sum_{x \in E_X} (\mathrm{ev}_x \circ X_*)(P) \cdot \big(F \circ (\cdot)|_{X=x}\big)(P).$$
The result follows from the measurability of $\mathrm{ev}_x : \Delta(E_X) \to \mathbb{R}$, of $X_*$ as stated in Lemma A.1, of $F$, and of $(\cdot)|_{X=x} : \Delta(\Omega) \to \Delta(\Omega)$ as proven in Lemma A.4, together with the fact that limits of measurable functions are measurable, see Lemma A.2.

B Proofs for Section 2

Proof 1 for Proposition 2.10. Let $P : \Omega \to [0, 1]$ be any probability measure with finite entropy. Since $Y \precsim X$, there is a function $f_{YX} : E_X \to E_Y$ such that $f_{YX} \circ X = Y$. We obtain
$$I_1(Y; P) = -\sum_{y \in E_Y} P\big(Y^{-1}(y)\big) \ln P\big(Y^{-1}(y)\big) = -\sum_{y \in E_Y} P_X\big(f_{YX}^{-1}(y)\big) \ln P_X\big(f_{YX}^{-1}(y)\big)$$
$$= -\sum_{y \in E_Y} \sum_{x \in f_{YX}^{-1}(y)} P_X(x) \ln \sum_{x' \in f_{YX}^{-1}(y)} P_X(x') \overset{(1)}{\leq} -\sum_{y \in E_Y} \sum_{x \in f_{YX}^{-1}(y)} P_X(x) \ln P_X(x) = I_1(X; P).$$
In step (1), we use that $-\ln$ is a monotonically decreasing function and that $\sum_{x' \in f_{YX}^{-1}(y)} P_X(x') \geq P_X(x)$ for each $x \in f_{YX}^{-1}(y)$.

Proof 2 for Proposition 2.11. We have $f_{XY}$ and $f_{YX}$ with $f_{XY} \circ Y = X$ and $f_{YX} \circ X = Y$. For every conditionable measurable function $F : \Delta_f(\Omega) \to \mathbb{R}$ and probability measure $P : \Omega \to \mathbb{R}$, we obtain
$$(X.F)(P) = \sum_{x \in \operatorname{im} X} P_X(x) F(P|_{X=x}) \overset{(1)}{=} \sum_{y \in \operatorname{im} Y} P_X\big(f_{XY}(y)\big) F\big(P|_{X=f_{XY}(y)}\big) \overset{(2)}{=} \sum_{y \in \operatorname{im} Y} P_Y(y) F(P|_{Y=y}) = (Y.F)(P).$$
In step (1), we use that $f_{XY} : \operatorname{im} Y \to \operatorname{im} X$ is a bijection. Step (2) can easily be verified.
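Proposition 2.10 is easy to probe numerically. The following sketch (ours; the pmf and the map $f$ are arbitrary choices) draws a random distribution for $X$, pushes it forward along a function $f$ to obtain $Y = f(X)$, and confirms $I_1(Y; P) \leq I_1(X; P)$.

```python
import math
import random

random.seed(1)
# A random pmf for X on E_X = {0,...,5} and a map f : E_X -> E_Y = {0,1,2};
# then Y := f(X), so Y is coarser than X, and Proposition 2.10 asserts
# I1(Y; P) <= I1(X; P).
w = [random.random() for _ in range(6)]
pX = [v / sum(w) for v in w]
f = [x % 3 for x in range(6)]
pY = [sum(p for x, p in enumerate(pX) if f[x] == y) for y in range(3)]

H = lambda ps: -sum(p * math.log(p) for p in ps if p > 0)
assert H(pY) <= H(pX) + 1e-12
print(H(pY), "<=", H(pX))
```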
Proof 3 for Proposition 2.14. All required properties follow from Lemma 2.13: first of all, the multiplication $\cdot : M \times M \to M$ is well-defined, i.e., does not depend on the representatives of the factors $[X], [Y]$, by property 0. We get $[1] \cdot [X] = [X] = [X] \cdot [1]$ from property 1. $[X] \cdot [Y] = [Y] \cdot [X]$ follows from property 3. We have $[X] \cdot [X] = [X]$ due to property 4. We now prove the associativity rule $([X] \cdot [Y]) \cdot [Z] = [X] \cdot ([Y] \cdot [Z])$. For any two random variables $U, V \in \widehat{M}$, we write $Z_{UV} \in \widehat{M}$ for a chosen random variable with $UV \sim Z_{UV}$. Then we obtain:
$$\big([X] \cdot [Y]\big) \cdot [Z] = [Z_{Z_{XY} Z}] \overset{(\star)}{=} [Z_{X Z_{YZ}}] = [X] \cdot \big([Y] \cdot [Z]\big).$$
For step $(\star)$, one uses the equivalence $Z_{Z_{XY} Z} \sim Z_{X Z_{YZ}}$ that follows from Lemma 2.13.

C Proofs for Section 3

C.1 Proof of the Generalized Hu Theorem 3.2 and Corollary 3.3

All notation and assumptions are as in Theorem 3.2.

Lemma C.1. Let $\mu$ be the measure given on atoms by Equation (9). For all $I \subseteq [n]$, we have $F_1(X_I) = \mu(\widetilde{X}_I)$.

Proof. This is an application of a version of the inclusion-exclusion principle [7], a special case of the Möbius inversion formula on a poset [52, 3.7.1 Proposition]. It states the following: for any two functions $f, g : 2^{[n]} \to G$, the following implication holds:
$$g(I) = \sum_{K \supseteq I} f(K) \implies f(I) = \sum_{K \supseteq I} (-1)^{|K| - |I|} g(K).$$
Set $\mu(p_\emptyset) := -F_1(X_{[n]})$. We apply the principle to the functions $g(I) := (-1)^{|I|} \mu(p_{I^c})$ and $f(K) := (-1)^{|K|+1} F_1(X_K)$. Then Equation (9) implies the premise of the inclusion-exclusion principle. From the conclusion, we obtain:
$$(-1)^{|I|+1} F_1(X_I) = \sum_{K \supseteq I} (-1)^{|K| - |I|} \cdot (-1)^{|K|} \mu(p_{K^c}),$$
which implies
$$F_1(X_I) = -\sum_{K \supseteq I} \mu(p_{K^c}) = -\sum_{K :\, K \cap I = \emptyset} \mu(p_K) = F_1(X_{[n]}) - \sum_{\emptyset \neq K :\, K \cap I = \emptyset} \mu(p_K).$$
In the last step we used $\mu(p_\emptyset) = -F_1(X_{[n]})$. Thus, showing that $F_1(X_I) = \mu(\widetilde{X}_I) = \sum_{\emptyset \neq K :\, K \cap I \neq \emptyset} \mu(p_K)$ reduces to the following special case for $I = [n]$:
$$F_1(X_{[n]}) = \mu\big(\widetilde{X}_{[n]}\big).$$
To show this, note that Equation (9) implies
$$\mu\big(\widetilde{X}_{[n]}\big) = \sum_K (-1)^{|K|+1-n} \Bigg[ \sum_{\emptyset \neq I :\, I \supseteq K^c} (-1)^{|I|} \Bigg] F_1(X_K). \quad (31)$$
Ignoring the condition $I \neq \emptyset$, the inner coefficient is given by
$$\sum_{I \supseteq K^c} (-1)^{|I|} = (-1)^{n-|K|} \sum_{i=0}^{|K|} (-1)^i \binom{|K|}{i} = \begin{cases} 0, & K \neq \emptyset, \\ (-1)^n, & \text{else.} \end{cases}$$
Note that $F_1(X_\emptyset) = 0$, so the last case is irrelevant. Also, note that the condition $I \neq \emptyset$ only restricts the inner sum in Equation (31) when $K = [n]$. Thus, in that case, we need to subtract $1$ from the result just computed and obtain:
$$\mu\big(\widetilde{X}_{[n]}\big) = (-1)^{n+1-n} \cdot (-1) \cdot F_1(X_{[n]}) = F_1(X_{[n]}),$$
proving the claim.

Proposition C.2. For all $n \in \mathbb{N}_{\geq 0}$, for $\mu$ the $G$-valued measure constructed from $F_1$ as in Equation (9), and for all $L_1, J \subseteq [n]$, the following identity holds:
$$X_J.F_1(X_{L_1}) = \mu(\widetilde{X}_{L_1} \setminus \widetilde{X}_J).$$

Proof. This follows immediately from Lemma C.1 and the chain rule, Equation (6), together with the fact that $\mu$ is a measure.

We have now done all the hard work for finishing the proof of Theorem 3.2:

Proof 4 for Theorem 3.2. Part 1. This is a simple inductive argument, using Proposition C.2 for $q = 1$, and Equation (7) for the step from $q - 1$ to $q$.

Part 2. For part 2, using Equation (8), we observe
$$X_J.F_1(X_I) - F_1(X_{J \cup I}) + F_1(X_J) = \mu(\widetilde{X}_I \setminus \widetilde{X}_J) - \mu(\widetilde{X}_J \cup \widetilde{X}_I) + \mu(\widetilde{X}_J) = 0.$$
Thus, $F_1$ satisfies Equation (6). For $q \geq 2$, using Equation (8) again, we similarly observe
$$F_{q-1}(X_{L_1}; \dots; X_{L_{q-1}}) - X_{L_q}.F_{q-1}(X_{L_1}; \dots; X_{L_{q-1}}) = F_q(X_{L_1}; \dots; X_{L_q}).$$
That finishes the proof.
Proof 5 for Corollary 3.3. Define $\widetilde{G} := \mathrm{Maps}(M, G)$ as the group of functions from $M$ to $G$. Define, using currying, the function $\widetilde{K}_1 : M \to \widetilde{G}$ by $\big(\widetilde{K}_1(X)\big)(Y) := K_1(X \mid Y)$. Define the additive monoid action $. : M \times \widetilde{G} \to \widetilde{G}$ by $(X.F)(Y) := F(XY)$ for all $X, Y \in M$. Note that we need $M$ to be commutative to show that this is indeed a monoid action. Then clearly, $\widetilde{K}_1$ satisfies the chain rule. Define $\widetilde{K}_q : M^q \to \widetilde{G}$ as in Theorem 3.2, inductively, by
$$\widetilde{K}_q(Y_1; \dots; Y_q) := \widetilde{K}_{q-1}(Y_1; \dots; Y_{q-1}) - Y_q.\widetilde{K}_{q-1}(Y_1; \dots; Y_{q-1}).$$
By induction, one can show that $K_q(Y_1; \dots; Y_q \mid Z) = \big(\widetilde{K}_q(Y_1; \dots; Y_q)\big)(Z)$ for all $Y_1, \dots, Y_q, Z \in M$. By the conclusion of Theorem 3.2, we obtain a $\widetilde{G}$-valued measure $\widetilde{\mu} : 2^{\widetilde{X}} \to \widetilde{G}$ with
$$\widetilde{\mu}\Bigg( \bigcap_{k=1}^q \widetilde{X}_{L_k} \setminus \widetilde{X}_J \Bigg) = X_J.\widetilde{K}_q(X_{L_1}; \dots; X_{L_q}).$$
Now, define $\mu : 2^{\widetilde{X}} \to G$ by $\mu(A) := \big(\widetilde{\mu}(A)\big)(1)$ for all $A \subseteq \widetilde{X}$. Clearly, since $\widetilde{\mu}$ is a $\widetilde{G}$-valued measure, $\mu$ is a $G$-valued measure. The results immediately follow from these definitions and Hu's theorem.

C.2 Further Proofs for Section 3

Proof 6 for Corollary 3.5. We proceed as follows:
1. This follows from Lemma 3.4 and Equation (9).
2. By Lemma 3.4 and Theorem 3.2, we have
$$\sum_{\substack{I \subseteq [n] \\ I \cap K \neq \emptyset}} \eta_I = \mu\Big( \big\{ p_I \mid I \subseteq [n],\ \exists k \in K : k \in I \big\} \Big) = \mu(\widetilde{X}_K) = F_1(X_K).$$
3. Using Lemma 3.4 and Theorem 3.2 again, we obtain
$$F_q(X_{j_1}; \dots; X_{j_q}) = \mu\Bigg( \bigcap_{j \in J} \widetilde{X}_j \Bigg) = \sum_{I :\, \forall j \in J : j \in I} \eta_I = \sum_{I \supseteq J} \eta_I.$$
4. This is formally a consequence of 3 and the inclusion-exclusion principle [7].
5. This follows by combining results 2 and 4.
6. This follows by combining results 1 and 3, or by the inclusion-exclusion principle applied to result 5.

D Proofs for Section 4

Proof 7 for Proposition 4.5. Let $Y, Z \in \widetilde{M}$ be arbitrary. In the following, for functions $f : (\{0,1\}^*)^n \to \mathbb{R}$, we write $f = f(x)$ for simplicity, and mean by it the function mapping $x$ to $f(x)$. We obtain:
$$Kc(YZ) = Kc\big((YZ)(x)\big) \overset{+}{=} Kc\big(Y(x)'Z(x)\big) \overset{+}{=} Kc\big(Y(x)\big) + Kc\big(Z(x) \mid Y(x)\big) \overset{+}{=} Kc(Y) + Kc(Z \mid Y).$$
In the computation, the associativity rule in the second step holds as we can write a program of constant size that translates between the different nestings of the strings. (For this, we use that we can algorithmically extract all $x_i$ for indices appearing in $Y$ and $Z$ from the strings $(YZ)(x)$ and also $Y(x)'Z(x)$; this argument uses that the encoding $x \mapsto x'$ is prefix-free.) In the third step we use Theorem 4.4. The result follows.

Proof 8 for Lemma 4.6. Let $\overline{Y} \sim Y$ and $\overline{Z} \sim Z$ be equivalent representatives. We have
$$[Kc]\big(\overline{Y} \mid \overline{Z}\big) \overset{(1)}{=} [Kc]\big(\overline{Y}\,\overline{Z}\big) - [Kc]\big(\overline{Z}\big) \overset{(2)}{=} [Kc](YZ) - [Kc](Z) \overset{(3)}{=} [Kc](Y \mid Z).$$
In the computation, steps (1) and (3) follow from Proposition 4.5. For step (2), one can show that $Kc(\overline{Y}\,\overline{Z}) \overset{+}{=} Kc(YZ)$ and $Kc(\overline{Z}) \overset{+}{=} Kc(Z)$ in the same way as the associativity rule in Proposition 4.5 was shown.

Proof 9 for Theorem 4.8. Remember that $M = \widetilde{M}/\sim$ and the function $[Kc] : M \times M \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)/\sim_{Kc}$, which we now denote by $[Kc] = [Kc]_1$. From this, we can inductively define $[Kc]_q : M^q \times M \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)/\sim_{Kc}$ as in Corollary 3.3 by
$$[Kc]_q(Y_1; \dots; Y_q \mid Z) := [Kc]_{q-1}(Y_1; \dots; Y_{q-1} \mid Z) - [Kc]_{q-1}(Y_1; \dots; Y_{q-1} \mid Y_q Z).$$
From Equation (21), one can inductively show that
$$[Kc]_q(Y_1; \dots; Y_q \mid Z) = \big[ Kc_q(Y_1; \dots; Y_q \mid Z) \big] \quad (32)$$
for all $Y_1, \dots, Y_q, Z \in M$. Note that $Kc_q$ was defined on $\widetilde{M}$ and not $M$, which means that we plug in representatives of equivalence classes on the right-hand side. Using Lemma 4.6 and induction, one can show that this is well-defined. Then, construct $\mu : 2^{\widetilde{X}} \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)$ explicitly as in Equation (23). Define now the measure $[\mu] : 2^{\widetilde{X}} \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)/\sim_{Kc}$ by
$$[\mu](A) := [\mu(A)] \quad \text{for all } A \subseteq \widetilde{X}. \quad (33)$$
Then Equation (32) shows that
$$[\mu](p_I) = \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K|+|I|+1-n} [Kc]_1(X_K). \quad (34)$$
Consequently, $[\mu]$ is the measure that results in Corollary 3.3, see Equation (13).
We obtain, for all $L_1, \dots, L_q, J \subseteq [n]$:
$$\Bigg[ \mu\Bigg( \bigcap_{k=1}^q \widetilde{X}_{L_k} \setminus \widetilde{X}_J \Bigg) \Bigg] = [\mu]\Bigg( \bigcap_{k=1}^q \widetilde{X}_{L_k} \setminus \widetilde{X}_J \Bigg) \qquad \text{(Equation (33))}$$
$$= [Kc]_q(X_{L_1}; \dots; X_{L_q} \mid X_J) \qquad \text{(Proposition 4.5, Corollary 3.3)}$$
$$= \big[ Kc_q(X_{L_1}; \dots; X_{L_q} \mid X_J) \big] \qquad \text{(Equation (32))}.$$
As two representatives of the same equivalence class in $\mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)$ differ by a constant, the result follows.

Lemma D.1. Let $P : (\{0,1\}^*)^n \to \mathbb{R}$ be a computable probability mass function, $K \subseteq [n]$ a subset, and $P_K$ the corresponding marginal distribution. Then $P_K$ is also computable, and the relation
$$K(P_K) \overset{+}{<} K(P)$$
holds between their Kolmogorov complexities.

Proof. We know that $P$ is computable, and so there exists a prefix-free Turing machine $T_p$ of length $l(p) = K(P)$ such that $\big| T_p(x'q) - P(x) \big| \leq 1/q$ for all $q \in \mathbb{N}$ and $x \in (\{0,1\}^*)^n$. Now, fix $q \in \mathbb{N}$. Let $(x_i)_{i \in \mathbb{N}}$ be a computable enumeration of $(\{0,1\}^*)^n$. Define the approximation $P_q : (\{0,1\}^*)^n \dashrightarrow \mathbb{R}$ of $P$ by
$$P_q(x_i) := T_p\big( (x_i)'(4q \cdot 2^i) \big).$$
Then for all subsets $I \subseteq \mathbb{N}$, we have
$$\Bigg| \sum_{i \in I} P_q(x_i) - \sum_{i \in I} P(x_i) \Bigg| \leq \sum_{i \in I} \Big| T_p\big( (x_i)'(4q \cdot 2^i) \big) - P(x_i) \Big| \leq \sum_{i=1}^{\infty} \frac{1}{4q \cdot 2^i} = \frac{1}{4q}. \quad (35)$$
Consequently, by setting $I = \mathbb{N}$ and using $\sum_{i \in \mathbb{N}} P(x_i) = 1$, one can determine $i_q$ such that for the first time we have
$$\Bigg| \sum_{i=1}^{i_q} P_q(x_i) - 1 \Bigg| \leq \frac{1}{2q}. \quad (36)$$
Note that $i_q$ can be algorithmically determined by computing one $P_q(x_i)$ at a time and checking when the condition holds. Now, for arbitrary $x_K \in (\{0,1\}^*)^{|K|}$ and $q \in \mathbb{N}$, we define
$$T(x_K' q) := \sum_{\substack{i = 1 \\ (x_i)_K = x_K}}^{i_q} P_q(x_i).$$
We now show that $T(x_K' q)$ approximates $P_K(x_K)$ up to an error of $1/q$:
$$\Big| T(x_K' q) - P_K(x_K) \Big| \leq \Bigg| \sum_{\substack{i = 1 \\ (x_i)_K = x_K}}^{i_q} P_q(x_i) - \sum_{\substack{i = 1 \\ (x_i)_K = x_K}}^{i_q} P(x_i) \Bigg| + \Bigg| \sum_{\substack{i = 1 \\ (x_i)_K = x_K}}^{i_q} P(x_i) - P_K(x_K) \Bigg|$$
$$\overset{(35)}{\leq} \frac{1}{4q} + \Bigg( P_K(x_K) - \sum_{\substack{i = 1 \\ (x_i)_K = x_K}}^{i_q} P(x_i) \Bigg) \leq \frac{1}{4q} + 1 - \sum_{i=1}^{i_q} P(x_i)$$
$$\leq \frac{1}{4q} + 1 - \sum_{i=1}^{i_q} P_q(x_i) + \Bigg| \sum_{i=1}^{i_q} P_q(x_i) - \sum_{i=1}^{i_q} P(x_i) \Bigg| \overset{(36),(35)}{\leq} \frac{1}{4q} + \frac{1}{2q} + \frac{1}{4q} = \frac{1}{q}.$$
Now, note that $T$ is computable, since its definition only uses the computable enumeration $(x_i)_{i \in \mathbb{N}}$, the number $i_q$ for which we described an algorithm, and the Turing machine $T_p$ inside the definition of $P_q$. Thus, $T$ is a prefix machine $T_{p_K}$ for a bitstring $p_K$ of length $l(p_K) \leq l(p) + c = K(P) + c$, where $c \geq 0$ is some constant. It follows that $K(P_K) \leq l(p_K) \leq K(P) + c$, and we are done.

Proof 10 for Lemma 4.12. (What we denoted by $\sim$ in Lemma 2.13 is denoted $\sim_r$ here.) Assume that $Y \sim Z$. Then Lemma 2.13, parts 3 and 4, show that $Y \sim_r \overline{Y} = \overline{Z} \sim_r Z$, and so $Y \sim_r Z$ by transitivity. On the other hand, if $Y \sim_r Z$, then also $X_I = \overline{Y} \sim_r \overline{Z} = X_J$ for some $I, J \subseteq [n]$, again by Lemma 2.13, parts 3 and 4. Let $I = \{ i_1 < \dots < i_{|I|} \}$ and $J = \{ j_1 < \dots < j_{|J|} \}$. Then there exist functions $f_{JI}$ and $f_{IJ}$ such that $f_{JI} \circ X_I = X_J$ and $f_{IJ} \circ X_J = X_I$. That is, for all $x \in (\{0,1\}^*)^n$ we have
$$f_{JI}(x_{i_1}, \dots, x_{i_{|I|}}) = (x_{j_1}, \dots, x_{j_{|J|}}), \qquad f_{IJ}(x_{j_1}, \dots, x_{j_{|J|}}) = (x_{i_1}, \dots, x_{i_{|I|}}).$$
The first equation shows $J \subseteq I$, as otherwise changes in $x_{J \setminus I}$ would change the right-hand side but not the left-hand side. In the same way, the second equation shows $I \subseteq J$, and overall we obtain $I = J$. That shows $Y \sim \overline{Y} = X_I = X_J = \overline{Z} \sim Z$; due to transitivity, it follows that $Y \sim Z$.

Proof 11 for Theorem 4.14. We generalize the proof strategy that [34] use for their Lemma 8.1.1, which is a special case of our theorem for $n = 2$, $q = 2$, $Y_1 = X_1$, $Y_2 = X_2$, and $Z = \epsilon = 1$. We prove this in several steps by first handling convenient subcases. In the special case $q = 1$, $Z = \epsilon = 1$, and $Y_1 = X_K$ for some $K \subseteq [n]$, we can look at the marginal $P_K$ of $P$ and obtain
$$\sum_{x \in (\{0,1\}^*)^n} P(x)\, \big(Kc(X_K)\big)(x) = \sum_{x_K \in (\{0,1\}^*)^{|K|}} P_K(x_K)\, \big(Kc(X_K)\big)(x_K)$$
$$= I_1(P_K) + O\big(K(P_K)\big) \qquad \text{(Theorem 4.13)}$$
$$= I_1(X_K; P) + O\big(K(P)\big) \qquad \text{(Lemma D.1)},$$
which is the desired result.
Now, let
$$\mu : 2^{\widetilde{X}} \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big) \qquad \text{(Equation (23))},$$
$$\mu^r : 2^{\widetilde{X}} \to \mathrm{Meas}_{\mathrm{con}}\big(\Delta_f\big((\{0,1\}^*)^n\big), \mathbb{R}\big) \qquad \text{(Equation (9))}$$
be the measures corresponding to Chaitin's prefix-free Kolmogorov complexity $Kc : M \times M \to \mathrm{Maps}\big((\{0,1\}^*)^n, \mathbb{R}\big)$ and to Shannon entropy $I_1 : M \to \mathrm{Meas}_{\mathrm{con}}\big(\Delta_f\big((\{0,1\}^*)^n\big), \mathbb{R}\big)$, remembering that $\Delta_f\big((\{0,1\}^*)^n\big)$ is the space of finite-entropy probability measures (or mass functions) on our countable sample space $(\{0,1\}^*)^n$. (The fact that $(\{0,1\}^*)^n$ is not finite but countably infinite is the main reason why we considered countable sample spaces in Summary 2.17. The superscript in $\mu^r$ is used to notationally distinguish it from $\mu$; $r$ can be thought of as meaning "random".) Let $I \subseteq [n]$ be any subset. Then we obtain
$$\sum_{x \in (\{0,1\}^*)^n} P(x)\, \big(\mu(p_I)\big)(x) \overset{(23)}{=} \sum_{x \in (\{0,1\}^*)^n} P(x) \Bigg( \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K|+|I|+1-n} Kc(X_K) \Bigg)(x)$$
$$= \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K|+|I|+1-n} \sum_{x \in (\{0,1\}^*)^n} P(x)\, \big(Kc(X_K)\big)(x)$$
$$\overset{(\star)}{=} \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K|+|I|+1-n} \Big( I_1(X_K; P) + O\big(K(P)\big) \Big)$$
$$= \Bigg( \sum_{\emptyset \neq K \supseteq I^c} (-1)^{|K|+|I|+1-n} I_1(X_K) \Bigg)(P) + O\big(K(P)\big) = \big(\mu^r(p_I)\big)(P) + O\big(K(P)\big),$$
using our earlier result in step $(\star)$ and the definition of $\mu^r$. Now, using that $\mu$ and $\mu^r$ are additive over disjoint unions, we deduce for all $A \subseteq \widetilde{X}$ the equality
$$\sum_{x \in (\{0,1\}^*)^n} P(x)\, \big(\mu(A)\big)(x) = \big(\mu^r(A)\big)(P) + O\big(K(P)\big).$$
Now, let $Y_1 = X_{L_1}, \dots, Y_q = X_{L_q}$, $Z = X_J$ for some $L_1, \dots, L_q, J \subseteq [n]$. Then, using Hu's theorems for interaction information (Theorem 3.2 together with Summary 2.17) and for Kolmogorov complexity (Theorem 4.8), the result follows by setting $A := \bigcap_{k=1}^q \widetilde{X}_{L_k} \setminus \widetilde{X}_J$.

Proof 12 for Proposition 4.19. We have
$$K(YZ) = K\big((YZ)(x)\big) \overset{(1)}{=} K\big(Y(x)'Z(x)\big) + O(1)$$
$$\overset{(2)}{=} K\big(Y(x)\big) + K\big(Z(x) \mid Y(x)\big) + O\Big( \log K\big(Y(x)\big) + \log K\big(Z(x)\big) \Big)$$
$$\overset{(3)}{=} K(Y) + K(Z \mid Y) + O\Bigg( \sum_{i=1}^n \log K(x_i) \Bigg),$$
where step (1) follows as in Proposition 4.5, step (2) uses Theorem 4.18, and step (3) follows from the subadditivity of $K$ and of the logarithm, which holds for large enough inputs. (The subadditivity property for $K$ says that $K(x, y) \leq K(x) + K(y) + O(1)$: one can construct a prefix-free Turing machine that extracts $x^*$ and $y^*$ from $x^* y^*$, which is of length $K(x) + K(y)$, and outputs $x'y$. Note that since the set of halting programs of the universal Turing machine $U$ is prefix-free, one does not need to indicate the place of separation between $x^*$ and $y^*$.)

Proof 13 for Proposition 4.21. Let $Y, Z \in M$ be arbitrary. Then, following the same arguments as in Proposition 4.5 and Proposition 4.19, we are only left with showing the following:
$$\log C\big(Y(x), Z(x)\big) = O\big(\log C(x)\big),$$
where the left-hand side is viewed as a function $(\{0,1\}^*)^n \to \mathbb{R}$. In fact, we even have $\log C\big(Y(x), Z(x)\big) \leq \log C(x) + c$ for some constant $c$, starting from some threshold $x_0$: we can find a program of constant length that takes $x$, extracts $x_1, \dots, x_n$ from it, and rearranges and concatenates them in such an order as to obtain $Y(x)'Z(x)$; and the logarithm, being subadditive for large enough inputs, preserves the inequality.

E Proofs for Section 5

Proof 14 for Proposition 5.1. For notational ease, we write $P(x) = P_X(x)$, $(P|_{X=x})_Y(y) = P(y \mid x)$, and $P(x, y) = P_{XY}(x, y)$ in this proof. We have
$$\big( I^q_1(X) + X._q I^q_1(Y) \big)(P) = \big(I^q_1(X)\big)(P) + \sum_{x \in E_X} P(x)^q \big(I^q_1(Y)\big)(P|_{X=x})$$
$$= \frac{\sum_{x \in E_X} P(x)^q - 1}{1 - q} + \sum_{x \in E_X} P(x)^q \, \frac{\sum_{y \in E_Y} P(y \mid x)^q - 1}{1 - q}$$
$$= \frac{\sum_{x \in E_X} P(x)^q - 1 + \sum_{(x,y) \in E_X \times E_Y} \big( P(x) P(y \mid x) \big)^q - \sum_{x \in E_X} P(x)^q}{1 - q}$$
$$= \frac{\sum_{(x,y) \in E_X \times E_Y} P(x, y)^q - 1}{1 - q} = \big(I^q_1(XY)\big)(P).$$
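Proof 14 can be sanity-checked numerically. The sketch below (ours, with a random joint table) verifies the chain rule $\big(I^q_1(X) + X._q I^q_1(Y)\big)(P) = \big(I^q_1(XY)\big)(P)$ for the Tsallis $q$-entropy $I^q_1(P) = \big(\sum_x P(x)^q - 1\big)/(1 - q)$ at an arbitrary $q$.

```python
import random

def tsallis(ps, q):
    """Tsallis q-entropy: (sum of p^q - 1) / (1 - q)."""
    return (sum(p ** q for p in ps) - 1.0) / (1.0 - q)

random.seed(0)
q = 1.7
# Random joint distribution P(x, y) on a 3 x 4 grid.
w = [[random.random() for _ in range(4)] for _ in range(3)]
Z = sum(map(sum, w))
P = [[v / Z for v in row] for row in w]

Px = [sum(row) for row in P]                       # marginal of X
joint = tsallis([v for row in P for v in row], q)  # Iq1(XY)
# Iq1(X) + sum over x of P(x)^q * Iq1(Y | X = x): the q-deformed conditioning
cond = tsallis(Px, q) + sum(
    Px[i] ** q * tsallis([v / Px[i] for v in P[i]], q) for i in range(3)
)
print(abs(joint - cond) < 1e-12)  # True: the chain rule of Proposition 5.1
```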
Proof 15 for Proposition 5.2. Let $X, Y \in M(X_1, \dots, X_n)$ and $P \ll Q \in \Delta(\Omega)$. The following proof of the chain rule is similar to the one of Lemma 2.4 for Shannon entropy. For simplicity, we write $Q(x) = Q_X(x)$, $P(y \mid x) = (P|_{X=x})_Y(y)$, and $P(x, y) = P_{XY}(x, y)$ in this proof:
$$\big( X.D_1(Y) + D_1(X) \big)(P \| Q) = X.D_1(Y; P \| Q) + D_1(X; P \| Q)$$
$$= \sum_{x \in E_X} P(x) D_1\big(Y; P|_{X=x} \| Q|_{X=x}\big) - \sum_{x \in E_X} P(x) \ln \frac{Q(x)}{P(x)}$$
$$\overset{(1)}{=} -\sum_{x \in E_X} P(x) \sum_{y \in E_Y} P(y \mid x) \ln \frac{Q(y \mid x)}{P(y \mid x)} - \sum_{x \in E_X} P(x) \Bigg( \sum_{y \in E_Y} P(y \mid x) \Bigg) \ln \frac{Q(x)}{P(x)}$$
$$= -\sum_{(x,y) \in E_X \times E_Y} P(x) \cdot P(y \mid x) \cdot \left[ \ln \frac{Q(y \mid x)}{P(y \mid x)} + \ln \frac{Q(x)}{P(x)} \right]$$
$$= -\sum_{(x,y) \in E_X \times E_Y} P(x, y) \ln \frac{Q(x, y)}{P(x, y)} = \big( D_1(XY) \big)(P \| Q).$$
In step (1), we used for the second sum that $P(y \mid x)$ is a probability measure in $y$ and thus sums to $1$.

Proof 16 for Proposition 5.4. Let $X, Y \in M(X_1, \dots, X_n)$ and $P \ll Q \in \Delta(\Omega)$ be arbitrary. The following proof of the chain rule is similar to the one for the $q$-entropy, Proposition 5.1. For simplicity, we again write $Q(x) = Q_X(x)$, $P(y \mid x) = (P|_{X=x})_Y(y)$, and $P(x, y) = P_{XY}(x, y)$:
$$\big( D^q_1(X) + X._q D^q_1(Y) \big)(P \| Q) = \big(D^q_1(X)\big)(P \| Q) + \big(X._q D^q_1(Y)\big)(P \| Q)$$
$$= \big(D^q_1(X)\big)(P \| Q) + \sum_{x \in E_X} P(x)^q Q(x)^{1-q} \big(D^q_1(Y)\big)\big(P|_{X=x} \| Q|_{X=x}\big)$$
$$= \frac{\sum_{x \in E_X} P(x)^q Q(x)^{1-q} - 1}{q - 1} + \sum_{x \in E_X} P(x)^q Q(x)^{1-q} \, \frac{\sum_{y \in E_Y} P(y \mid x)^q Q(y \mid x)^{1-q} - 1}{q - 1}$$
$$= \frac{-1 + \sum_{(x,y) \in E_X \times E_Y} \big( P(x) P(y \mid x) \big)^q \big( Q(x) Q(y \mid x) \big)^{1-q}}{q - 1}$$
$$= \frac{\sum_{(x,y) \in E_X \times E_Y} P(x, y)^q Q(x, y)^{1-q} - 1}{q - 1} = \big(D^q_1(XY)\big)(P \| Q).$$
References

[1] S.-I. Amari. Information Geometry on Hierarchy of Probability Distributions. IEEE Transactions on Information Theory, 47(5):1701–1711, 2001.
[2] Valentina Baccetti and Matt Visser. Infinite Shannon entropy. Journal of Statistical Mechanics: Theory and Experiment, 2013(4), 2013.
[3] Pierre Baudot. The Poincare-Shannon Machine: Statistical Physics and Machine Learning Aspects of Information Cohomology. Entropy, 21(9), 2019.
[4] Pierre Baudot. On Information Links. arXiv:2103.02002, 2021.
[5] Pierre Baudot and Daniel Bennequin. The Homological Nature of Entropy. Entropy, 17(5):3253–3318, 2015.
[6] Pierre Baudot, Monica Tapia, Daniel Bennequin, and Jean Marc Goaillard. Topological Information Data Analysis. Entropy, 21(9):1–38, 2019.
[7] Robert A. Beeler. How to Count: An Introduction to Combinatorics and Its Applications. Springer International Publishing, 2015.
[8] Anthony J. Bell. The co-information lattice. In Proc. 4th Int. Symp. on Independent Component Analysis and Blind Source Separation, pages 921–926, 2003.
[9] Daniel Bennequin, Olivier Peltre, Grégoire Sergeant-Perthuis, and Juan Pablo Vigneaux. Extra-Fine Sheaves and Interaction Decompositions. arXiv:2009.12646, 2020.
[10] Nils Bertschinger, Johannes Rauh, Eckehard Olbrich, Jürgen Jost, and Nihat Ay. Quantifying Unique Information. Entropy, 16(4):2161–2183, 2014.
[11] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 1st edition, 2007.
[12] Christopher M. Bishop and Hugh Bishop. Deep Learning — Foundations and Concepts. 1st edition, 2023.
[13] N. J. Cerf and C. Adami. Entropic Bell Inequalities. Physical Review A, 55(5):3371–3374, 1997.
[14] N. J. Cerf and C. Adami. Quantum extension of conditional probability. Physical Review A, 60(2):893–897, 1999.
[15] Gregory J. Chaitin. Algorithmic Information Theory. Cambridge University Press, 1987.
[16] S. Cocco and R. Monasson. Adaptive Cluster Expansion for the Inverse Ising Problem: Convergence, Algorithm and Tests. Journal of Statistical Physics, 147(2):252–314, 2012.
[17] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory. Wiley-Interscience, 2006.
[18] A. P. Dawid. Separoids: A Mathematical Framework for Conditional Independence and Irrelevance. Annals of Mathematics and Artificial Intelligence, 32(1):335–372, 2001.
[19] A. Philip Dawid. Conditional Independence in Statistical Theory. Journal of the Royal Statistical Society, Series B, 41(1):1–31, 1979.
[20] A. Philip Dawid. Conditional Independence for Statistical Operations. The Annals of Statistics, 8, 1980.
[21] Hubert Dubé. On the Structure of Information Cohomology. PhD dissertation, University of Toronto, 2023.
[22] Jack Edmonds. Submodular functions, matroids, and certain polyhedra, pages 11–26. Springer-Verlag, Berlin, Heidelberg, 2003.
[23] Kun Fang, Omar Fawzi, Renato Renner, and David Sutter. Chain Rule for the Quantum Relative Entropy. Physical Review Letters, 124(10):100501, 2020.
[24] Conor Finn and Joseph Lizier. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices. Entropy, 20(4):297, 2018.
[25] James Fullwood. On a 2-Relative Entropy. Entropy, 24(1):74, 2021.
[26] Peter Gacs. On the Symmetry of Algorithmic Information. Soviet Math. Dokl., 15:1477–1480, 1974.
[27] Marilyn Gatica, Rodrigo Cofré, Pedro A. M. Mediano, Fernando E. Rosas, Patricio Orio, Ibai Diez, Stephan P. Swinnen, and Jesus M. Cortes. High-Order Interdependencies in the Aging Brain. Brain Connectivity, 11(9):734–744, 2021.
[28] Peter D. Grünwald and Paul M. B. Vitányi. Algorithmic Information Theory. arXiv:0809.2754, 2008.
[29] Te Sun Han. Nonnegative Entropy Measures of Multivariate Symmetric Correlations. Information and Control, 36(2):133–156, 1978.
[30] Gerhard Hochschild. On the Cohomology Groups of an Associative Algebra. Annals of Mathematics, 46(1):58–67, 1945.
[31] Michał Horodecki, Jonathan Oppenheim, and Andreas Winter. Partial quantum information. Nature, 436(7051):673–676, 2005.
[32] Kuo Ting Hu. On the Amount of Information. Theory of Probability & Its Applications, 7(4):439–447, 1962.
[33] Ryan G. James and James P. Crutchfield. Multivariate dependence beyond Shannon information. Entropy, 19(10), 2017.
[34] Ming Li and Paul Vitányi. An Introduction to Kolmogorov Complexity and Its Applications. Springer, 1997.
[35] Joseph T. Lizier, Nils Bertschinger, Jürgen Jost, and Michael Wibral. Information Decomposition of Target Effects from Multi-Source Interactions: Perspectives on Previous, Current and Future Work. Entropy, 20(4), 2018.
[36] Thomas Murray MacRobert and Thomas John I'Anson Bromwich. An Introduction to the Theory of Infinite Series. Macmillan, London, 1926.
[37] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, Cambridge, MA, 2nd edition, 2018.
[38] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions — I. Mathematical Programming, 14(1):265–294, 1978.
[39] Arthur J. Parzygnat. Towards a functorial description of quantum relative entropy. arXiv:2105.04059, 2021.
[40] Rick Quax, Omri Har-Shemesh, and Peter Sloot. Quantifying Synergistic Information Using Intermediate Stochastic Variables. Entropy, 19(2):85, 2017.
[41] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-Shot Text-to-Image Generation. arXiv:2102.12092, 2021.
[42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. 2021.
[43] Fernando E. Rosas, Pedro A. M. Mediano, Michael Gastpar, and Henrik J. Jensen. Quantifying High-Order Interdependencies via Multivariate Extensions of the Mutual Information. Physical Review E, 100(3):1–17, 2019.
[44] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. arXiv:2205.11487, 2022.
[45] René L. Schilling. Measures, Integrals and Martingales. Cambridge University Press, 2017.
[46] A. Schrijver. Combinatorial Optimization — Polyhedra and Efficiency. Springer, 2003.
[47] Shai Shalev-Shwartz and Shai Ben-David. Understanding Machine Learning — From Theory to Algorithms. Cambridge University Press, 2014.
[48] C. E. Shannon. A Mathematical Theory of Communication. The Bell System Technical Journal, 27(3):379–423, 1948.
[49] Claude E. Shannon and Warren Weaver. The Mathematical Theory of Communication. The University of Illinois Press, 1964.
[50] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep Unsupervised Learning Using Nonequilibrium Thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 2256–2265. PMLR, Lille, France, 2015.
[51] Melvin Dale Springer. The Algebra of Random Variables. Wiley Series in Probability and Mathematical Statistics, volume 23. John Wiley & Sons, 1979.
[52] Richard P. Stanley. Enumerative Combinatorics. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2nd edition, 2011.
[53] Bastian Steudel, Dominik Janzing, and Bernhard Schölkopf. Causal Markov condition for submodular information measures. In COLT 2010 — The 23rd Conference on Learning Theory, pages 464–476, 2010.
[54] Terence Tao. An Introduction to Measure Theory. American Mathematical Society, 2013.
[55] Constantino Tsallis. Possible Generalization of Boltzmann-Gibbs Statistics. Journal of Statistical Physics, 52(1):479–487, 1988.
[56] V. Vedral. The Role of Relative Entropy in Quantum Information Theory. Reviews of Modern Physics, 74(1):197–234, 2002.
[57] Juan Pablo Vigneaux. Topology of Statistical Systems — A Cohomological Approach to Information Theory. PhD dissertation, Université de Paris, 2019.
[58] Juan Pablo Vigneaux. Information Structures and Their Cohomology. Theory and Applications of Categories, 35(38):1476–1529, 2020.
[59] Juan Pablo Vigneaux. Entropy under Disintegrations. arXiv:2102.09584, 2021.
[60] Satosi Watanabe. Information Theoretical Analysis of Multivariate Correlation. IBM Journal of Research and Development, 4(1):66–82, 1960.
[61] Paul L. Williams and Randall D. Beer. Nonnegative Decomposition of Multivariate Information. arXiv:1004.2515, 2010.
[62] Paul L. Williams and Randall D. Beer. Decomposing Multivariate Information. Privately communicated, 2011.
[63] Raymond W. Yeung. A New Outlook on Shannon's Information Measures. IEEE Transactions on Information Theory, 37(3):466–474, 1991.
[64] Raymond W. Yeung. A First Course in Information Theory. Springer-Verlag, Berlin, Heidelberg, 2002.
[65] A. K. Zvonkin and L. A. Levin. The Complexity of Finite Objects and the Development of the Concepts of Information and Randomness by Means of the Theory of Algorithms. Russian Mathematical Surveys, 25(6):83–124, 1970.
Continuum Theory: An Introduction
Sam Nadler. CRC Press, 1st edition, 2017 (eBook). 348 pages. Print ISBN 9780824786595; eBook ISBN 9781351990530.

About this book: a textbook for either a semester or year course for graduate students of mathematics who have had at least one course in topology. It introduces continuum theory through a combination of classical and modern techniques. (Annotation copyright Book News, Inc., Portland, OR.)

Contents: Preface; Part One: General Analysis; Part Two: Special Continua and Maps; Special Symbols; Index.
SEMINAR ON CLASS FIELD THEORY
MICHAEL BAKER, RAYMOND CHENG, RITVIK RAMKUMAR

3. Review: local theory of number fields

We now turn to the local theory of number fields, as is presented in Chapter II of Neukirch's Algebraic Number Theory. The treatment here will be at a slightly less breakneck pace, since this material is disjoint from PMATH 441. Grossly put, the local aspects of algebraic number theory introduce analytic ideas into algebraic problems. This is achieved by introducing valuations and absolute values, which can be thought of as algebraically-flavoured metrics.

3.1. Intuition. What we want to understand is the concept of a local field. A local field is a field $K$ along with a nontrivial absolute value $|\cdot| : K \to \mathbb{R}$ such that $(K, d)$ is a complete metric space and has a finite residue class field, where $d$ is the metric induced by the absolute value. We will see later that these conditions are equivalent to asking for $(K, d)$ to be a locally compact metric space.

Before going on to precisely define absolute values and develop the basic theory of local fields, let us consider why we might be interested in local fields. Intuitively speaking, local fields are algebraic objects in which we can talk about infinitesimal behaviour. An important manifestation of this is that we are able to solve polynomial equations via approximations in local fields. Moreover, by studying objects of interest locally, we are able to understand the subtle local structure of algebraic objects.

As an analogy, think about studying each curve $C_n : y = x^n$ at the origin. For each $n$, $C_n$ vanishes at the origin. But the curves differ in the sense that $C_n$ becomes "flatter" as $n$ gets larger. This is made precise by noting that the first nonzero derivative of $x^n$ comes later and later as $n$ increases, which geometrically means that $C_n$ moves away from $0$ slower and slower (because $|x| < 1$ near $0$). The differences between the $C_n$ at the origin, then, come from differences in infinitesimal behaviour near $0$.

Even more specifically, the local fields we will mostly be concerned with will be completions of fields with respect to a given absolute value. These are to be thought of as giving a power series expansion of an algebraic object, through which we can understand higher order behaviour of the object in question.

Not only do local fields carry rich local information, but when put together, they say much more. Pushing the analogy between power series and local fields a bit further: when the power series from all different points in the domain are put together, we essentially know the function in question. Analogously, when the local fields coming from all the places — which will come to have a precise meaning later! — are put together, information about the global field under study will emerge.

3.2. Absolute Values. Before we can dream of introducing analytic techniques into algebraic number theory, we need a notion that serves as the analogue of a norm in an arbitrary field $K$. This is the role of absolute values. An absolute value on $K$ is a map $|\cdot| : K \to \mathbb{R}$ satisfying the following:
(a) $|x| > 0$ for all $x \in K^\times$;
(b) $|xy| = |x| |y|$ for all $x, y \in K$;
(c) $|x + y| \leq |x| + |y|$ for all $x, y \in K$.
Note that absolute values are also referred to as (multiplicative) valuations, but this terminology conflicts with another notion of valuation which we will need later. The use of "absolute values" here follows Bourbaki, Lang and Milne.
The first example of an absolute value is the trivial absolute value: for any field $K$, let $|\cdot| : K \to \mathbb{R}$ be defined by $|0| = 0$ and $|a| = 1$ for all $a \in K^\times$. It is obvious that this is an absolute value. Unless otherwise specified, however, whenever we talk about absolute values in these notes, we mean a nontrivial absolute value.

For a marginally more interesting class of absolute values, we consider those which arise from embeddings of number fields into the complex numbers. Let $K$ be a number field and let $\sigma : K \hookrightarrow \mathbb{C}$ be any embedding of $K$ into $\mathbb{C}$. Define $|\cdot|_\sigma : K \to \mathbb{R}$ by $|a|_\sigma := |\sigma(a)|$ for all $a \in K$, where the absolute value on the right is the usual one on $\mathbb{C}$: $|x + iy| = \sqrt{x^2 + y^2}$. One easily checks that $|\cdot|_\sigma$ is indeed an absolute value on $K$.

3.3. Topology from Absolute Values. One main purpose of introducing absolute values into the study of algebraic number theory is to allow us to say when algebraic objects are close to one another. Thus one very important aspect of absolute values on fields is the topology they induce. Specifically, let $|\cdot| : K \to \mathbb{R}$ be an absolute value on a field $K$. Define $d : K \times K \to \mathbb{R}$ by $d(x, y) := |x - y|$. The nondegeneracy and triangle inequality conditions in the definition of an absolute value make clear that $d$ is a metric on $K$. Thus $(K, d)$ is a metric space, and hence a topological space once endowed with the metric topology.

As in the case of norms, two absolute values $|\cdot|_1, |\cdot|_2 : K \to \mathbb{R}$ are said to be equivalent if they give rise to the same topology. By doing a little analysis, one can show that absolute values $|\cdot|_1$ and $|\cdot|_2$ are equivalent if and only if there exists $s \in \mathbb{R}_{>0}$ such that $|x|_1 = |x|_2^s$ for all $x \in K$. Another characterization of when $|\cdot|_1$ and $|\cdot|_2$ are equivalent is the property that whenever $|x|_1 < 1$, then $|x|_2 < 1$ as well.

3.4. Completions of Fields. Let $K$ be a field with an absolute value $|\cdot|$, viewed as a metric space with the metric induced from $|\cdot|$. As mentioned in the introduction, one purpose of introducing absolute values into algebraic number theory is to allow us to solve polynomial equations via approximation techniques. This suggests that we wish to consider fields which are complete with respect to the equipped metric. Recall from real analysis that given a metric space, we can form its completion by taking the set of all Cauchy sequences in the metric space modulo the null sequences — those Cauchy sequences going to $0$. Specializing this result to our situation, we have the following

Theorem 1. Let $(K, |\cdot|)$ be a field equipped with an absolute value. Then there exists a complete field $(\hat{K}, |\cdot|)$ and a homomorphism $K \to \hat{K}$ preserving the absolute value which satisfies the following universal property: every absolute value preserving homomorphism $K \to L$ into a complete field $(L, |\cdot|')$ extends uniquely to a homomorphism $\hat{K} \to L$. Moreover, $K$ can be identified with a dense subset of $\hat{K}$; in the case that $|\cdot|$ is nonarchimedean, $K$ is actually open in $\hat{K}$.

We will talk about nonarchimedean absolute values in the following subsection. For us, completions will be the manner in which local fields come about. Specifically, let $K$ be a global field, $|\cdot|$ an absolute value on $K$, and $\hat{K}$ the completion of $K$ with respect to the given absolute value. Then $\hat{K}$ will typically be a local field. For reasons I explain below, we wish to classify the local fields that arise in this way. We shall do this for number fields.
Since number fields are finite extensions of $\mathbb{Q}$, this can be done by first classifying all the absolute values of $\mathbb{Q}$ and then examining how absolute values extend in finite field extensions. Before we go about classifying the absolute values of $\mathbb{Q}$, let us look at how the theory of absolute values on arbitrary fields consists of more than the familiar absolute values on $\mathbb{R}$ and $\mathbb{C}$.

3.5. Nonarchimedean Absolute Values. So far, the examples of absolute values on fields seem to echo only the familiar examples on $\mathbb{R}$ and $\mathbb{C}$. It turns out that the theory is far richer. Recall from a first course in analysis that the standard absolute values on $\mathbb{R}$ and $\mathbb{C}$ satisfy the so-called archimedean property: for any $x, y \in \mathbb{R}$ (or $\mathbb{C}$) with $|x| < |y|$, there exists a positive $n \in \mathbb{Z}$ such that $|y| < |nx| = n|x|$. This rather reasonable looking property, however, does not hold for arbitrary absolute values. The archimedean property can be alternatively formulated as saying that the set $|\mathbb{Z}| = \{ |n| : n \in \mathbb{Z} \} \subset \mathbb{R}$ is unbounded, where we view $\mathbb{Z}$ as a subring of $\mathbb{R}$ or $\mathbb{C}$.

An absolute value $|\cdot| : K \to \mathbb{R}$ is a nonarchimedean absolute value if $|n|$ remains bounded as $n$ ranges over $\alpha(\mathbb{Z}) \subset K$, where $\alpha : \mathbb{Z} \to K$ is the unique ring homomorphism from $\mathbb{Z}$ to $K$. Nonarchimedean absolute values can be characterized in terms of a stronger triangle inequality condition.

Proposition 2. Let $K$ be a field. An absolute value $|\cdot| : K \to \mathbb{R}$ is nonarchimedean if and only if it satisfies the strong triangle inequality or ultrametric inequality: for all $x, y \in K$,
$$|x + y| \leq \max\{ |x|, |y| \}.$$

Proof. If the strong inequality holds, then for $n \in \alpha(\mathbb{Z})$, $|n| = |1 + \dots + 1| \leq 1$, and so $|\alpha(\mathbb{Z})| \subset \mathbb{R}$ is clearly bounded. Conversely, suppose that $|\cdot|$ is nonarchimedean; then there is a positive integer $N$ such that $|n| \leq N$ for all $n \in \alpha(\mathbb{Z})$. Let $a, b \in K$ with $|a| \leq |b|$. Then for each $n \in \mathbb{Z}_{>0}$, since each binomial coefficient lies in $\alpha(\mathbb{Z})$ and hence has absolute value at most $N$,
$$|a + b|^n \leq \sum_{k=0}^{n} \left| \binom{n}{k} \right| |a|^k |b|^{n-k} \leq N(n+1)|b|^n.$$
Taking $n$th roots, $|a + b| \leq N^{1/n}(n+1)^{1/n}|b|$. Since this holds for all $n \in \mathbb{Z}_{>0}$, taking $n \to \infty$ gives $|a + b| \leq |b| = \max\{ |a|, |b| \}$. ∎

Either with the characterization in terms of the strong triangle inequality, or really straight from the definition, we notice that any absolute value on any field $K$ of positive characteristic is nonarchimedean: the set $\{ |n| : n \in \alpha(\mathbb{Z}) \}$ has size at most $\operatorname{char} K < \infty$.

Unfortunately for us, $\mathbb{Q}$ possesses quite a few nonarchimedean absolute values, complicating slightly the task of classifying all completions of $\mathbb{Q}$. To talk about these nonarchimedean absolute values, it will be convenient to pass from absolute values, which are multiplicative, to valuations, which are "logarithmic" in nature.

3.6. Valuations. Valuations should be thought of as a degree measurement on elements of a field. Even more vaguely, valuations can be thought of as a way of telling how "separated" objects are, how much objects are "folded" upon themselves, the "order of growth" of elements, and so on. These vague statements will hopefully be clearer by the end of this section. For us, valuations are primarily interesting because they give an alternate perspective on nonarchimedean absolute values.

Let $|\cdot| : K \to \mathbb{R}$ be a nonarchimedean absolute value on a field $K$ and define a function $\nu : K \to \mathbb{R} \cup \{\infty\}$ by $\nu(0) := \infty$ and, for all $x \in K^\times$, $\nu(x) := -\log|x|$. The function $\nu$ satisfies the following properties, which are essentially consequences of the multiplicative properties enjoyed by absolute values, via the application of the logarithm (a numerical illustration follows below):
(a) $\nu(x) = \infty$ if and only if $x = 0$;
(b) $\nu(xy) = \nu(x) + \nu(y)$ for all $x, y \in K$;
(c) $\nu(x + y) \geq \min\{ \nu(x), \nu(y) \}$ for all $x, y \in K$.
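As a concrete illustration (ours, not from the notes), the following Python sketch implements the $p$-adic valuation $\nu_p$ on nonzero rationals, which Section 3.8 below constructs properly, and spot-checks properties (b) and (c) on random samples.

```python
from fractions import Fraction
import random

def valp(x, p):
    """p-adic valuation of a nonzero rational x = a/b: v_p(a) - v_p(b)."""
    def vint(n):
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v
    x = Fraction(x)
    return vint(abs(x.numerator)) - vint(x.denominator)

p = 5
random.seed(0)
for _ in range(1000):
    x = Fraction(random.randint(-200, 200) or 1, random.randint(1, 200))
    y = Fraction(random.randint(-200, 200) or 1, random.randint(1, 200))
    # (b) multiplicativity of |.| turns into additivity under -log:
    assert valp(x * y, p) == valp(x, p) + valp(y, p)
    # (c) the ultrametric inequality, provided x + y is nonzero:
    if x + y != 0:
        assert valp(x + y, p) >= min(valp(x, p), valp(y, p))
print("valuation axioms (b) and (c) hold on all samples")
```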
More generally, any function $\nu : K \to G \cup \{\infty\}$ from K into a totally ordered group G satisfying properties (a)–(c) is called a valuation on K; if $G \cong \mathbb Z$, then ν is called a discrete valuation. Notice that we can go the other way, from a valuation back to a nonarchimedean absolute value: given a valuation $\nu : K \to \mathbb R \cup \{\infty\}$, define $|\cdot|_\nu : K \to \mathbb R_{\geq 0}$ by $|0|_\nu = 0$ and, for all $x \in K^\times$,
$$|x|_\nu := e^{-\nu(x)}$$
for some $e > 1$. The choice of e is arbitrary (different choices yield equivalent absolute values), but there are some natural choices of e in the valuations we will soon encounter.

A valuation knows about the internal arithmetic of K: let ν be a valuation on K, $|\cdot|$ the corresponding absolute value, and let
$$A := \{x \in K \mid \nu(x) \geq 0\} = \{x \in K \mid |x| \leq 1\},$$
$$A^\times := \{x \in K \mid \nu(x) = 0\} = \{x \in K \mid |x| = 1\},$$
$$\mathfrak m := \{x \in K \mid \nu(x) > 0\} = \{x \in K \mid |x| < 1\}.$$
Then A is a ring with group of units $A^\times$ and unique maximal ideal $\mathfrak m$. Moreover, it is not too hard to see that K is the field of fractions of A. Any subring A of a field K that arises in this way from a valuation defined on K is called a valuation ring; in the case that the valuation is discrete, A is called a discrete valuation ring. Using property (b) of valuations, we also see that for every nonzero $x \in K$, either $x \in A$ or else $x^{-1} \in A$. This property allows one to see that valuation rings are integrally closed: suppose that $x \in K - A$ satisfies the equation
$$x^n + a_{n-1}x^{n-1} + \cdots + a_0 = 0,$$
where $a_i \in A$. Multiplying through by $x^{-(n-1)}$ and rearranging, it follows that
$$x = -(a_{n-1} + a_{n-2}x^{-1} + \cdots + a_0 x^{1-n}).$$
But the right hand side is in A since $x^{-1} \in A$, so $x \in A$, which is a contradiction.

3.7. Discrete Valuations. Understanding discrete valuations will help us obtain a more concrete understanding of nonarchimedean completions. There is a rather useful characterization of discrete valuation rings; its proof can be found in most references for commutative algebra.

Theorem 3. Let A be a commutative ring. Then A is a discrete valuation ring if and only if A is a local noetherian ring whose maximal ideal is generated by a non-nilpotent element.

By the characterization above, if A is a discrete valuation ring and $\mathfrak m$ is its unique maximal ideal, then $\mathfrak m = (\pi)$ for some non-nilpotent $\pi \in A$. In other words, every nonzero element $x \in A$ can be written as $x = \pi^m u$ for some unit $u \in A^\times$ and nonnegative integer m; notice that since either $x \in A$ or $x^{-1} \in A$ for every nonzero $x \in K$, this shows that every nonzero $x \in K$ can be written as $x = \pi^m u$ for some $u \in A^\times$ and $m \in \mathbb Z$. The element π is variously called a (local) uniformizing parameter or prime element of A. The valuation defining A in its field of fractions can be recovered by defining $\nu : K^\times \to \mathbb Z$ by setting $\nu(x) = m$, where $x = \pi^m u$. The integer m is called the order of π in x.

In the context of discrete valuation rings, there is a natural choice of normalization when passing between absolute values and valuations. Let π be a local uniformizing parameter for $(K, |\cdot|)$ and let $e \in \mathbb R_{>1}$ be such that $-\log_e|\pi| = 1$, i.e. $e = |\pi|^{-1}$. The valuation constructed with this choice of e is often called the normalized discrete valuation associated with $|\cdot|$ and is often denoted $\operatorname{ord} : K \to \mathbb Z \cup \{\infty\}$. Put more abstractly, any valuation ν associated with a discrete absolute value (this is an absolute value for which $|K^\times| \subseteq \mathbb R_{>0}$ is a discrete set, which can be shown to be equivalent to saying that the associated valuation is discrete) has image $\nu(K^\times) = c\mathbb Z$ for some constant c; when $c = 1$, ν is a normalized discrete valuation. An example should give some geometric and algebraic intuition to the whole ordeal.
Let k be a field, let $B := k[t]$ be a polynomial ring over k, and let $A := B_{(t)}$ be the localization of B at the principal ideal generated by t, that is,
$$B_{(t)} = \left\{ \frac{p(t)}{q(t)} \;\middle|\; p, q \in B,\ q(0) \neq 0 \right\}.$$
Then A is a noetherian local ring whose maximal ideal is generated by t, and thus A is a discrete valuation ring. The discrete valuation ν is defined on $0 \neq p(t) \in A$ by factoring $p(t) = t^m q(t)$ for $q \in A$ with $(q, t) = A$ and setting $\nu(p) = m$. More concretely, this means that $t^m$ is the largest power of t dividing all the terms of p; put yet a different way, $\nu(p)$ is the order of vanishing of p at $t = 0$.

3.8. Completions of Q. Finally, we come back to look at the absolute values of Q and their completions. There is one very obvious archimedean absolute value: view Q as a subfield of R and let $|\cdot|_\infty : \mathbb Q \to \mathbb R_{\geq 0}$ be the restriction of the standard absolute value $|\cdot| : \mathbb R \to \mathbb R_{\geq 0}$.

More interesting absolute values arise by considering the various discrete valuation rings contained in Q. Fix a prime $p \in \mathbb Z$ and note that the noetherian local ring
$$\mathbb Z_{(p)} = \left\{ \frac ab \;\middle|\; a, b \in \mathbb Z,\ p \nmid b \right\}$$
has maximal ideal generated by p. By the characterization of discrete valuation rings above, we see that $\mathbb Z_{(p)}$ is a discrete valuation ring; let $\nu_p : \mathbb Q \to \mathbb Z \cup \{\infty\}$ be the corresponding discrete valuation extended to Q. Then we obtain an absolute value $|\cdot|_p : \mathbb Q \to \mathbb R_{\geq 0}$ defined by
$$|x|_p := \left( \frac 1e \right)^{\nu_p(x)},$$
where $e > 1$. In this case, there is a rather natural choice for e: set $e = p$. The resulting absolute value is called the (normalized) p-adic absolute value and the associated discrete valuation is called the p-adic valuation. Let $\mathbb Q_p$ be the completion of Q with respect to the p-adic absolute value and call it the p-adic number field. The elements of the ring of integers $\mathbb Z_p$ of $\mathbb Q_p$ are called p-adic integers.

It turns out that we have now seen all absolute values of Q. Recall that equivalent absolute values define the same topology and hence have the same completion.

Theorem 4 (Ostrowski). Let $|\cdot|$ be a nontrivial absolute value on Q.
(1) If $|\cdot|$ is archimedean, then it is equivalent to $|\cdot|_\infty$;
(2) If $|\cdot|$ is nonarchimedean, then it is equivalent to $|\cdot|_p$ for exactly one prime $p \in \mathbb Z$.

As a consequence of Ostrowski's Theorem, the fields which can be obtained as completions of Q are R and the p-adic number fields $\mathbb Q_p$, $p \in \mathbb Z$ prime. The real numbers R is a field we (should) understand well at this point, but the p-adic number fields might be a tad more mysterious. To gain a better understanding of the $\mathbb Q_p$, and to lay algebraic foundations for understanding completions of other number fields, let us take a closer look at what is going on when we take a completion with respect to a nonarchimedean absolute value. We will return to classifying completions of other number fields soon.
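These definitions compute readily. Here is a minimal Python sketch (the helper names `vp` and `abs_p` are ours, not from any library) of the p-adic valuation and the normalized p-adic absolute value on Q:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational x = a/b: ord_p(a) - ord_p(b)."""
    x = Fraction(x)
    a, b = x.numerator, x.denominator
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    while b % p == 0:
        b //= p
        v -= 1
    return v

def abs_p(x, p):
    """Normalized p-adic absolute value |x|_p = p^(-v_p(x))."""
    return Fraction(1, p) ** vp(x, p)

# |12|_2 = 1/4 since 12 = 2^2 * 3, while |1/12|_2 = 4.
print(abs_p(12, 2), abs_p(Fraction(1, 12), 2))  # 1/4 4
```

One can also check the ultrametric inequality $|x + y|_p \leq \max(|x|_p, |y|_p)$ numerically on random rationals with these helpers.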
3.9. Nonarchimedean Completions: Analytic Perspective. Completions of fields, as we have seen, can be described in analytic terms with the help of an absolute value and a corresponding topology. However, completions can also be constructed in a purely algebraic manner. We shall discuss how completions are treated algebraically after looking at some properties of the analytic completion procedure.

Throughout this subsection, let K be a field with a nonarchimedean discrete absolute value $|\cdot|$ and denote by $\hat K$ the completion of K with respect to $|\cdot|$. Recall that $\hat K$ is constructed as the set of all Cauchy sequences of K modulo the null sequences. In other words, an element of $\hat K$ is represented by a Cauchy sequence $\{a_n\}$ with $a_n \in K$.

Since $|K^\times| \subset \mathbb R$ is discrete, the sequence of absolute values of a Cauchy sequence not tending to 0 must be eventually constant. In particular, this means that $|\cdot|$ extends uniquely to $\hat K$: set $|\{a_n\}|$ to be the value at which it is eventually constant (and 0 for null sequences). It follows that $|K^\times| = |\hat K^\times|$. If $\operatorname{ord} : K^\times \to \mathbb Z$ is a normalized discrete valuation for $|\cdot|$, it also follows that it extends uniquely to a normalized discrete valuation $\operatorname{ord} : \hat K^\times \to \mathbb Z$.

This observation makes it clear that the ring $\hat A = \{x \in \hat K \mid |x| \leq 1\}$ is the closure of the ring $A \subset K$ in $\hat K$. The maximal ideal $\hat{\mathfrak m} = \{x \in \hat K \mid |x| < 1\}$ is the closure of $\mathfrak m \subset A$ in $\hat A$. If $\pi \in K$ is such that $\operatorname{ord}(\pi) = 1$, then π generates $\mathfrak m$ in A and, for the same reasons, generates $\hat{\mathfrak m}$ in $\hat A$. In a similar spirit, the powers $\hat{\mathfrak m}^n$, $n \in \mathbb Z_{>0}$, of $\hat{\mathfrak m}$ are the closures of $\mathfrak m^n$ in $\hat A$, and $\hat{\mathfrak m}^n$ is generated in $\hat A$ by $\pi^n$.

Notice that since $\mathfrak m^n \subseteq \hat{\mathfrak m}^n$ and $A \subseteq \hat A$, via the identification of A with the equivalence classes of Cauchy sequences in $\hat A$ represented by constant sequences, we have an induced map $A/\mathfrak m^n \to \hat A/\hat{\mathfrak m}^n$ for each $n \in \mathbb Z_{>0}$.

Lemma 5. The map $A/\mathfrak m^n \to \hat A/\hat{\mathfrak m}^n$ is an isomorphism for each positive integer n.

Proof. The powers of the maximal ideal can be described via inequalities which show them to be both open and closed:
$$\mathfrak m^n = \{x \in A \mid |x| \leq |\pi|^n\} = \{x \in A \mid |x| < |\pi|^{n-1}\},$$
where the second equality is due to the discreteness of $|\cdot|$. Now suppose $a \in A$ is in the kernel of the above map. By definition, this means that the constant sequence with all terms equal to a converges to an element of $\hat{\mathfrak m}^n$. Since each element of the sequence lies in A, the closedness of $\mathfrak m^n$ in A implies $a \in \mathfrak m^n$, i.e. $a = 0$ in $A/\mathfrak m^n$. Finally, the map is surjective since $A \hookrightarrow \hat A$ has dense image and $\hat{\mathfrak m}^n$ is open. ■

This lemma allows us to mostly forget about working with Cauchy sequences and just use ordinary elements of A when working with the quotient rings. To describe all the new elements added in the completion, we can use power series.

Lemma 6. Let S be a system of representatives for $A/\mathfrak m$ containing 0. Then for any positive integer n, the series
$$a_{-n}\pi^{-n} + \cdots + a_0 + a_1\pi + a_2\pi^2 + \cdots, \qquad a_i \in S,$$
converges in $\hat K$ (its partial sums form a Cauchy sequence), and every element of $\hat K$ has a unique representative of this form.

Proof. Let $s_M := \sum_{i=-n}^M a_i\pi^i$ denote the Mth partial sum, and note that since $|\cdot|$ is nonarchimedean we have
$$|s_M - s_N| \leq |\pi^{M+1}|, \qquad M < N,$$
which shows that the sequence of partial sums is Cauchy. For the second statement, begin with $\alpha \in \hat K$ and write $\alpha = \pi^n \alpha_0$ for some $n \in \mathbb Z$ and $\alpha_0 \in \hat A^\times$. Since $A/\mathfrak m \cong \hat A/\hat{\mathfrak m}$, the definition of S shows that there exists $a_0 \in S$ such that $\alpha_0 - a_0 \in \hat{\mathfrak m}$. Let $\alpha_1 := (\alpha_0 - a_0)/\pi$; then $\alpha_1 \in \hat A$ (and may already be in $\hat{\mathfrak m}$), so we may again find $a_1 \in S$ such that $\alpha_1 - a_1 \in \hat{\mathfrak m}$. Let $\alpha_2 := (\alpha_1 - a_1)/\pi \in \hat A$ and let $a_2 \in S$ be such that $\alpha_2 - a_2 \in \hat{\mathfrak m}$. Continuing in this fashion, we have
$$\alpha_0 = a_0 + a_1\pi + a_2\pi^2 + \cdots, \qquad \alpha = \pi^n\alpha_0.$$
This shows existence of the power series representation. For uniqueness, it suffices to show that 0 has a unique representative. Since the absolute value is nonarchimedean,
$$\left| \sum_i a_i\pi^i \right| = |\pi^m|,$$
where $a_m$ is the first nonzero coefficient in the series. From this it easily follows that the series represents 0 if and only if every coefficient is 0. ■
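The construction in the proof of Lemma 6 is an algorithm. Here is a minimal Python sketch for $K = \mathbb Q_p$, $A = \mathbb Z_p$, $\pi = p$ and $S = \{0, \ldots, p-1\}$ (the helper name `p_adic_digits` is ours):

```python
from fractions import Fraction

def p_adic_digits(x, p, n):
    """First n digits a_0, ..., a_{n-1} of x = a_0 + a_1*p + ... in Z_p.
    x is a rational whose denominator is prime to p (so x lies in Z_p)."""
    x = Fraction(x)
    assert x.denominator % p != 0
    digits = []
    for _ in range(n):
        # a_i is the representative in S = {0, ..., p-1} of x mod p,
        # computed by inverting the denominator modulo p.
        a = x.numerator * pow(x.denominator, -1, p) % p
        digits.append(a)
        x = (x - a) / p          # alpha_{i+1} = (alpha_i - a_i) / pi
    return digits

# 1/(1-7) = 1 + 7 + 7^2 + ... in Z_7: every digit should equal 1.
print(p_adic_digits(Fraction(1, 1 - 7), 7, 6))  # [1, 1, 1, 1, 1, 1]
```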
3.10. Interlude on Inverse Limits. To discuss algebraic completions, we need a purely algebraic way of constructing limits. This construction is useful for completing fields, but also for understanding the Galois theory of infinite extensions. We shall therefore take some time to develop the algebra behind inverse limits and look at a few important instances of this construction.

An inverse limit (sometimes projective limit, or simply limit) is to be thought of as the result of zooming closer and closer in on an object: we start with a sequence of "magnifications" of an algebraic entity and then we take this sequence to the limit. The "sequence of magnifications" idea is made precise by the notion of an inverse system. Let I be a nonempty poset and let $(A_i)_{i \in I}$ be a collection of objects of a category C indexed by I, i.e. view I as a diagram in C. Assume that for each pair $i \leq j$ in I there is a morphism $\phi_{ij} : A_j \to A_i$. The collection $(A_i, \phi_{ij} : A_j \to A_i)$ is then called an inverse system if
(1) $\phi_{ii} = \operatorname{id}_{A_i}$ for all $i \in I$; and
(2) $\phi_{ik} = \phi_{ij} \circ \phi_{jk}$ for all $i \leq j \leq k$ in I.
Note for the cognoscente: an inverse system can be defined more generally as a functor from a cofiltered category, but for our purposes the notion of an inverse system over a directed set is more than sufficient.

Most often, I will be a linearly ordered set, so the diagram given by I in C will have the shape
$$\cdots \longrightarrow A_\gamma \longrightarrow A_\beta \longrightarrow A_\alpha \longrightarrow \cdots \qquad (\alpha \leq \beta \leq \gamma),$$
and the inverse system axioms merely say that the composite morphisms above behave as expected. Intuitively, this diagram describes the situation where you are able to obtain information at arbitrarily fine resolutions, but you do not have the ability to zoom in to improve coarse information to a finer resolution; the morphisms present only allow you to "zoom out" (think, for example, of truncating terms of a power series).

The inverse limit of the inverse system $(A_i, \phi_{ij})$ is then the limit over the diagram I. More concretely, this consists of an object A in C together with morphisms $\pi_i : A \to A_i$ for each $i \in I$ such that $\phi_{ij} \circ \pi_j = \pi_i$ for each $i \leq j$, i.e. each triangle formed by A, $A_i$ and $A_j$ commutes; and such that A is "final" with this property: for any object B in C with morphisms $\tau_i : B \to A_i$ satisfying $\phi_{ij} \circ \tau_j = \tau_i$, there is a unique morphism $B \to A$ through which all the $\tau_i$ factor, so that every instance ($i \leq j$) of the corresponding diagram commutes.

Back to the magnification metaphor, the inverse limit is an object that holds the finest image possible; the resolution of the limit is so high that any other picture B which can zoom out to the constituent pictures $A_i$ must be contained in A. The description of the inverse limit immediately tells us that A is obtained as a subobject of the product of the $A_i$. This is because the product is final for a more general class of "cones" than is the inverse limit (namely, one omits the "satisfying $\phi_{ij} \circ \tau_j = \tau_i$" coherence condition). Indeed, if C is a concrete category in which the product of the $A_i$ exists, the inverse limit can be taken to be
$$\varprojlim_i A_i := \left\{ (a_i)_i \in \prod_{i \in I} A_i \;\middle|\; \phi_{ij}(a_j) = a_i \text{ for all } i \leq j \text{ in } I \right\}.$$

An important instance of inverse limits which we will encounter soon enough is that of profinite groups. A profinite group is a topological group which is isomorphic to the inverse limit of an inverse system of discrete finite groups. Profinite groups are important for us for two reasons. First, the Galois groups of infinite Galois extensions are profinite groups obtained as inverse limits indexed by the finite subextensions of the big extension. Second, the p-adic integers $\mathbb Z_p$, $p \in \mathbb Z$ prime, are profinite groups. We will explore the p-adic numbers soon enough, and we will also discuss infinite Galois theory at some point in the future.
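In a concrete category the defining compatibility condition can be tested directly. Here is a small Python sketch (names ours) treating an element of the inverse limit of the rings $\mathbb Z/p^i\mathbb Z$ (the p-adic integers we meet below) as a tuple of residues:

```python
def is_compatible(residues, p):
    """residues[i] is a candidate class a_{i+1} in Z/p^(i+1)Z; check that
    each a_j reduces to a_i under the quotient maps Z/p^j -> Z/p^i."""
    return all(residues[j] % p ** (i + 1) == residues[i]
               for j in range(len(residues))
               for i in range(j))

p = 5
x = 7 ** 20  # any integer defines a compatible system via reduction
system = [x % p ** (i + 1) for i in range(6)]
print(is_compatible(system, p))              # True
print(is_compatible([1, 1 + p, 1 + 2 * p], p))
# False: the third entry does not reduce to the second modulo p^2
```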
3.11. Nonarchimedean Completions: Algebraic Perspective. Back to talking about completions. The inverse limit operation is the algebraic formalism required to construct completions of rings in a purely algebraic manner. First let us describe how inverse limits are usually used in this context. Let A be a ring and let $\mathfrak m$ be an ideal (not necessarily maximal) of A. Then we have an inverse system obtained by setting $A_0 := A$ and, for $i \geq 1$, $A_i := A/\mathfrak m^i$, where the maps $A_j \to A_i$, for $i \leq j$, are the natural quotient maps:
$$A/\mathfrak m \leftarrow A/\mathfrak m^2 \leftarrow A/\mathfrak m^3 \leftarrow \cdots.$$
The descending sequence $A =: \mathfrak m^0 \supseteq \mathfrak m^1 \supseteq \mathfrak m^2 \supseteq \cdots$ is called the $\mathfrak m$-adic filtration of A, and the inverse limit of the corresponding sequence of quotient rings, as above, is called the completion of A with respect to $\mathfrak m$; it will be denoted $\hat A_{\mathfrak m}$, or simply $\hat A$ when $\mathfrak m$ is understood. We can describe the completion $\hat A$ explicitly as follows:
$$\hat A = \left\{ (a_1, a_2, \ldots) \in \prod_i A/\mathfrak m^i \;\middle|\; a_j \equiv a_i \pmod{\mathfrak m^i} \text{ for } i \leq j \right\}.$$
First notice that this completion is isomorphic to the analytic one: this follows from the lemma that $A/\mathfrak m^n \to \hat A/\hat{\mathfrak m}^n$ is an isomorphism for all n, together with the universal property of $\hat A$.

3.12. p-adic Numbers. Having looked at the general theory of completions, let us consider the example of the p-adic numbers $\mathbb Q_p$, $p \in \mathbb Z$ prime. Recall first that $\mathbb Q_p$ is the completion of Q with respect to the absolute value associated to the prime ideal $p\mathbb Z$ of Z. Alternatively, $\mathbb Q_p$ can be constructed as the field of fractions of the completion $\mathbb Z_p$ of Z with respect to the $p\mathbb Z$-adic filtration.

The analytic perspective suggests that elements of $\mathbb Q_p$ are equivalence classes of Cauchy sequences of Q with respect to $|\cdot|_p$. But we have also shown that each equivalence class is represented uniquely by a Laurent series in p,
$$\sum_{i \geq n} a_i p^i$$
for some $n \in \mathbb Z$ and with each $a_i \in \{0, \ldots, p-1\}$. It's not too difficult to see that the elements of $\mathbb Z_p$ are represented by the power series in p, i.e. those series whose first nonzero coefficient has nonnegative index. From the algebraic perspective, elements of $\mathbb Z_p$ are tuples $(A_1, A_2, \ldots) \in \prod_i \mathbb Z/p^i\mathbb Z$ for which $A_j \equiv A_i \pmod{p^i}$ for all $i \leq j$. A moment's thought should allow one to see that the $A_i$ are the partial sums of the power series representation. A similar representation for $\mathbb Q_p$ can be described, where the tuple now is doubly infinite with support bounded below.

Let's work out an example. Let p be any prime and consider the sum
$$1 + p + p^2 + \cdots + p^n + \cdots \in \mathbb Q_p.$$
The nth partial sum of this series is
$$1 + p + \cdots + p^{n-1} = \frac{p^n - 1}{p - 1}.$$
Since $|p^n|_p = p^{-n} \to 0$, taking $n \to \infty$ gives
$$\sum_{k \geq 0} p^k = \frac{1}{1 - p}.$$
Especially cute are the cases $p = 2$ and $p = 3$, where one obtains −1 and −1/2, respectively.
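This convergence claim can be checked by machine: in $\mathbb Q_2$, "the partial sums approach −1" means that their difference from −1 is divisible by ever higher powers of 2. A tiny sketch (ours):

```python
# 2-adic check that 1 + 2 + 4 + ... = 1/(1-2) = -1 in Z_2: the n-th
# partial sum differs from -1 by 2^n, which tends to 0 two-adically.
p, target = 2, -1
partial = 0
for n in range(1, 11):
    partial += p ** (n - 1)
    assert (partial - target) % p ** n == 0
print("all checks pass: the partial sums converge to -1 in Z_2")
```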
3.13. Polynomials in Complete Fields. One more piece of algebra before we get back to looking at number fields. Our objects of study are field extensions, where the fields are eventually assumed to be complete. Completeness, as we mentioned in the beginning, allows us to find roots of polynomials via approximation techniques. That is what we will look at presently. Note that these results can be cast in a more general setting; consult just about any algebra book.

There are several ways to state the following result; we shall phrase it in terms of factoring polynomials, since splitting of primes is what we are primarily interested in. For this section, let A be a complete discrete valuation ring, let π be a generator for the maximal ideal $\mathfrak m$ of A, and let k be the residue field of A. For $f(x) \in A[x]$, write $\bar f(x)$ for the image of f(x) in k[x] under the quotient map (that is, reduce the coefficients modulo $\mathfrak m$). We use these notations unless otherwise stated.

Theorem 7 (Hensel's Lemma). Let $f(x) \in A[x]$ be a monic polynomial. If $\bar f(x)$ factors as $\bar f = g_0 h_0$ with $g_0$ and $h_0$ monic and relatively prime, then f itself factors as $f = gh$ with g and h monic and such that $\bar g = g_0$ and $\bar h = h_0$. Moreover, g and h are uniquely determined, and $(g, h) = A[x]$.

The existence proof will make clear why this result should be thought of as solving the polynomial f via approximations.

Proof of Hensel's Lemma. Notice that the hypotheses of the theorem are equivalent to saying that $f - g_0 h_0 \in \pi \cdot A[x]$. Completeness of A, along with this view of the hypotheses, suggests that we should look for g and h as power series in π. The power series will be constructed inductively. Suppose that $g_n$ and $h_n$ have been found such that $f \equiv g_n h_n \pmod{\pi^{n+1} A[x]}$ and such that $g_n \equiv g_0$ and $h_n \equiv h_0 \pmod{\pi A[x]}$. The next approximation will be obtained by adding a $\pi^{n+1}$ term, which means that we want to find $u, v \in A[x]$ such that
$$f - (g_n + \pi^{n+1}u)(h_n + \pi^{n+1}v) \equiv 0 \pmod{\pi^{n+2} A[x]}.$$
We want the approximate factors to reduce to $g_0$ and $h_0$, so we should have the degrees of u and v less than those of $g_0$ and $h_0$, respectively; moreover, since one could otherwise add terms with coefficients divisible by π, which disappear upon reduction, we need to impose this degree condition explicitly in order for $g_{n+1}$ and $h_{n+1}$ to be uniquely specified. Rearranging the above equation, we are looking for $u, v \in A[x]$ with $\deg u < \deg g_0$ and $\deg v < \deg h_0$ such that
$$u h_n + v g_n \equiv \frac{f - g_n h_n}{\pi^{n+1}} \pmod{\pi A[x]}.$$
Since $g_0$ and $h_0$ are relatively prime, the following lemma shows that such u and v exist. ■

Lemma 8. If $g, h \in A[x]$ are such that $\bar g$ and $\bar h$ are relatively prime, and g is monic, then there exist $u, v \in A[x]$ with $\deg u < \deg g$ and $\deg v < \deg h$ such that $uh + vg = 1$.

Proof. Let $M := A[x]/(g, h)$. Since g is monic, M is a finitely generated A-module. Since $(\bar g, \bar h) = k[x]$, we have $(g, h) + \mathfrak m A[x] = A[x]$, from which dividing out by $(g, h)$ shows that $\mathfrak m M = M$. Nakayama's Lemma then implies $M = 0$, so there are $u, v \in A[x]$ with $uh + vg = 1$. Suppose that $\deg v \geq \deg h$. Write $v = hq + r$ with $\deg r < \deg h$; then
$$(u + gq)h + rg = 1,$$
and this equality immediately implies $\deg(u + gq) < \deg g$. ■
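The factorization statement above specializes to the familiar root-lifting form of Hensel's Lemma: a simple root of $\bar f$ lifts to arbitrary π-adic precision by Newton's iteration. Here is a minimal Python sketch of that special case for $A = \mathbb Z_p$ (the function `hensel_lift` and the example $f = x^2 - 2$ over $\mathbb Z_7$ are ours, not from the notes):

```python
def hensel_lift(f, df, root, p, k):
    """Lift a simple root of f mod p (f(root) = 0 and df(root) != 0 mod p)
    to a root of f mod p^k via Newton's iteration x -> x - f(x)/f'(x)."""
    modulus = p
    x = root % p
    while modulus < p ** k:
        modulus = min(modulus ** 2, p ** k)   # precision doubles each step
        # f'(x) is a unit mod p, hence invertible modulo every power of p.
        x = (x - f(x) * pow(df(x), -1, modulus)) % modulus
    return x

f, df = lambda x: x * x - 2, lambda x: 2 * x
r = hensel_lift(f, df, 3, 7, 10)       # 3^2 = 9 = 2 mod 7 is the seed root
assert f(r) % 7 ** 10 == 0
print(r, "squares to 2 modulo 7^10")
```

The quadratic convergence of Newton's iteration mirrors the $\pi^{n+1}$-by-$\pi^{n+1}$ refinement in the proof above.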
3.14. Extending Absolute Values from Complete Base Fields. Back to absolute values. Recall that our purpose is to classify the absolute values of number fields, and we intend to do this by figuring out how the absolute values of Q extend upward. Let us worry about the nonarchimedean absolute values first. Let K be a finite extension of Q and let $|\cdot|$ be a nonarchimedean absolute value on Q. From our discussion before, we know that each nonarchimedean absolute value on Q arises from the valuation associated to some prime number $p \in \mathbb Z$. Let A be the integral closure of Z in K; then A is a Dedekind domain and the ideal $pA \subseteq A$ factors uniquely into a product of prime ideals
$$pA = \mathfrak P_1^{e_1} \cdots \mathfrak P_g^{e_g}.$$
Each of the primes $\mathfrak P_1, \ldots, \mathfrak P_g$ gives rise to a valuation on K, and hence to an absolute value. How might these be related to the absolute value we started with? This question can be answered in a rather nice way in the case where the underlying field is complete with respect to the absolute value we started with.

Theorem 9. Let K be a field complete with respect to an absolute value $|\cdot|_K$ and let L be an algebraic extension of K. Then $|\cdot|_K$ extends uniquely to an absolute value $|\cdot|_L$ on L, and L is complete with respect to this absolute value. Moreover, when $[L : K] = n < \infty$, this extension is given by
$$|\beta|_L = |N_{L|K}(\beta)|_K^{1/n} \quad \text{for all } \beta \in L.$$

Proof. Since an algebraic extension is the union of its finite subextensions, it suffices to prove uniqueness for finite extensions L of K. For complete proofs, see Milne ANT Theorem 7.38 and Neukirch Chapter II Theorem 4.8. ■

There is an important technical consequence of this theorem. The uniqueness of the extension implies that there is a unique valuation on L which extends the valuation associated to $\mathfrak p$. But distinct primes of $\mathcal O_L$ over $\mathfrak p$ would give distinct valuations extending it. So in fact the uniqueness tells us that $\mathfrak p\mathcal O_L = \mathfrak P^e$ for a unique prime $\mathfrak P$ of $\mathcal O_L$ and some positive integer e. Recall that e is called the ramification index of $\mathfrak P$ over $\mathfrak p$; there is also an integer $f := [\mathcal O_L/\mathfrak P : \mathcal O_K/\mathfrak p]$ called the inertia degree (or residue class degree) of $\mathfrak P$ over $\mathfrak p$. Since in general $[L : K] = \sum_i e_i f_i$, where the sum is taken over all primes lying over $\mathfrak p$, we have the following

Corollary 10. Let K and L be as above. Then $n = ef$, where $n = [L : K]$, e is the ramification index of $\mathfrak P$ over $\mathfrak p$, and f is the degree of the residue field extension.

I should say a word about what normalizations are typically chosen for the absolute values associated to primes $\mathfrak p$ in the ring of integers $\mathcal O_K$ of a number field K. As before, localize $\mathcal O_K$ with respect to $\mathfrak p$ and let $\nu_{\mathfrak p}$ be the discrete valuation defined on $(\mathcal O_K)_{\mathfrak p}$ which sends a local uniformizing parameter of $(\mathcal O_K)_{\mathfrak p}$ to 1. Let $N\mathfrak p$ denote the cardinality of the residue field of $(\mathcal O_K)_{\mathfrak p}$ (recall that this is finite, since the residue field is a finite extension of $\mathbb Z/p\mathbb Z$, where $p\mathbb Z = \mathfrak p \cap \mathbb Z$, and the degree of the field extension is bounded by $[K : \mathbb Q]$). Then set, for $x \in K$,
$$|x|_{\mathfrak p} := \left( \frac{1}{N\mathfrak p} \right)^{\nu_{\mathfrak p}(x)}.$$
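Theorem 9's norm formula is concrete enough to compute with. For example, take $K = \mathbb Q_2$ and $L = \mathbb Q_2(i)$, so $n = 2$ and $N(a + bi) = a^2 + b^2$. A small Python sketch (helper names ours) computing $|\beta|_L = |N(\beta)|_2^{1/2}$ on Gaussian integers:

```python
from fractions import Fraction

def v2(n):
    """2-adic valuation of a nonzero integer."""
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

def abs_ext(a, b):
    """|a+bi| on Q_2(i) via |beta| = |N(beta)|_2^(1/2), N(a+bi) = a^2 + b^2.
    Returned as the exponent e with |a+bi| = 2^(-e); e may be half-integral."""
    return Fraction(v2(a * a + b * b), 2)

print(abs_ext(1, 1))   # 1/2 : |1+i| = 2^(-1/2), so 1+i is a uniformizer
print(abs_ext(0, 1))   # 0   : |i| = 1, i is a unit
print(abs_ext(2, 0))   # 1   : |2| = 2^(-1), consistent with |.|_2 on Q_2
```

The half-integral value at $1 + i$ reflects that $\mathbb Q_2(i) \mid \mathbb Q_2$ is ramified with $e = 2$ and $f = 1$, consistent with Corollary 10.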
3.15. Extending Absolute Values in General. With an understanding of how absolute values extend when the base field is complete, we can look at what happens when the base field is not necessarily complete. Let K be a field with absolute value $|\cdot|$ and let L be a finite separable extension of K. By the Primitive Element Theorem, we can write $L = K[\alpha]$ for some $\alpha \in L$. Let $f(x) \in K[x]$ be the minimal polynomial of α.

Let $|\cdot|'$ be any extension of $|\cdot|$ to L. Forming the completion $\hat L$ of L with respect to $|\cdot|'$ gives a square of field extensions $K \subseteq L$, $\hat K \subseteq \hat L$, in which the absolute values on the fields are extensions of one another. Notice that $\hat K[\alpha]$ is a finite extension of $\hat K$, and $\hat K$ is complete with respect to the absolute value $|\cdot|$ inherited from K. By the theorem, $\hat K[\alpha]$ itself is complete. But notice that $L = K[\alpha] \subseteq \hat K[\alpha]$. The facts that L is contained in $\hat K[\alpha]$ and that $\hat K[\alpha]$ is complete imply that $\hat L \subseteq \hat K[\alpha]$; the other inclusion is easily obtained by viewing $\hat L$ as the completion of the ring $K[\alpha] = L$. Now let $g(x) \in \hat K[x]$ be the minimal polynomial of α over $\hat K$. We can view f(x) as a polynomial in $\hat K[x]$, and since $f(\alpha) = 0$, g must divide f. In this way, we can associate to the extension $|\cdot|'$ an irreducible factor g of f in $\hat K[x]$.

Conversely, let $g(x) \in \hat K[x]$ be a monic irreducible factor of f(x) in $\hat K[x]$ and let $\hat K[\alpha] := \hat K[x]/(g)$. This again gives a square of field extensions $K \subseteq L$, $\hat K \subseteq \hat K[\alpha]$. But $\hat K$ is complete, so by the previous theorem the absolute value on $\hat K$ extends to a unique absolute value on $\hat K[\alpha]$. This, in turn, restricts to an absolute value on L which extends $|\cdot|$. Observe that these operations are inverse to one another; hence we have shown the following.

Proposition 11. Let $L = K[\alpha]$ be a finite separable extension of a field K, and let $f(x) \in K[x]$ be the minimal polynomial of α. Then there is a natural one-to-one correspondence between the extensions of $|\cdot|$ to L and the irreducible factors of f(x) in $\hat K[x]$.

What does this really mean in our context of interest? Let K be a number field and L a finite extension of K. Write $\mathcal O_K$ for the ring of integers of K, suppose that $\mathcal O_L = \mathcal O_K[\alpha]$ for some $\alpha \in L$, and let $f(x) \in K[x]$ be the minimal polynomial of α. Suppose that our absolute value $|\cdot|$ on K comes from the valuation associated to a prime $\mathfrak p$ of $\mathcal O_K$. Now let
$$f(x) = f_1(x) \cdots f_g(x)$$
for $f_1, \ldots, f_g \in \hat K[x]$ irreducible. Since the $f_i$ are irreducible in $\hat K[x]$ and $\hat K$ is complete, Hensel's Lemma implies that each $f_i$, modulo $\hat{\mathfrak p}$, is a power of an irreducible polynomial $h_i(x)$:
$$\bar f_i(x) = h_i(x)^{e_i} \bmod \hat{\mathfrak p}.$$
Then, reducing f by $\hat{\mathfrak p}$, we have the factorization
$$\bar f(x) = h_1(x)^{e_1} \cdots h_g(x)^{e_g} \bmod \hat{\mathfrak p}.$$
This is useful in view of the following result about factorization of ideals in extensions of Dedekind domains.

Theorem 12. Let A be a Dedekind domain with field of fractions K. Let L be a finite separable extension of K and let B be the integral closure of A in L. Suppose $B = A[\alpha]$ and let $f(x) \in A[x]$ be the minimal polynomial of α. Let $\mathfrak p$ be a prime ideal of A. Suppose
$$f(x) \equiv \prod_i h_i(x)^{e_i} \pmod{\mathfrak p}$$
for some monic polynomials $h_i \in A[x]$ whose reductions modulo $\mathfrak p$ are distinct irreducible polynomials, and positive integers $e_i$. Then
$$\mathfrak p B = \prod_i (\mathfrak p, h_i(\alpha))^{e_i}$$
is the factorization of the ideal generated by $\mathfrak p$ in B into a product of powers of distinct prime ideals.

Proof. See Milne ANT Theorem 3.41. ■

Applying this theorem in our situation, we find that
$$\mathfrak p\mathcal O_L = \prod_i \mathfrak P_i^{e_i}, \qquad \mathfrak P_i = (\mathfrak p, h_i(\alpha)).$$
From here, we see that the absolute values extending $|\cdot|_{\mathfrak p}$ (that is, those which are equivalent to $|\cdot|_{\mathfrak p}$ when restricted from L to K) are precisely those corresponding to the $\mathfrak P_i$. The content of the previous proposition is that these are all of them.

3.16. Aside: Local-to-Global Principle. Let me take a moment to point out an important idea lurking in the previous section. We wanted to understand how the field extension L | K behaves, where the fields K and L are number fields of sorts. To do this, we passed to completions of either field with respect to some valuation (or absolute value). In a sense, what we have done to understand L | K is to pass from the global objects K and L to the local objects manifested by their completions. To enlighten the terminology a bit more, suppose instead that K and L are function fields: for instance, take K to be C(t) and L a finite extension of K. Then K can be thought of as a field of functions on a Riemann surface and L as some field of algebraic functions, both of which are objects of a global nature: they are defined on the whole Riemann surface. Now, passing to the completion corresponds to passing to fields of power series expansions. Geometrically, this corresponds to focusing our attention on some local part of the Riemann surface. From this local study, we are somehow able to obtain information about the original global objects.

But why do we bother to pass to the local objects in both cases? This is because the local objects are usually easier to handle in some way. In both cases, this at least partially manifests in how it is easier to solve equations with local objects than with global objects: as we saw with Hensel's Lemma, equations can be solved via approximation methods. In the context of number theory, we pass to completions with respect to valuations because the arithmetic of complete fields, meaning the ideal structure of the ring of integers, is significantly simpler than that of arbitrary number fields. A concrete instance of the factorization recipe above is worked out in the sketch below.
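Returning to the recipe of Theorem 12 in the simplest case $K = \mathbb Q$, $L = \mathbb Q(i)$, $\mathcal O_L = \mathbb Z[i]$, $\alpha = i$, $f = x^2 + 1$: a prime p splits, ramifies, or stays inert in $\mathbb Z[i]$ according to the factorization of $x^2 + 1$ mod p. A brute-force Python sketch (names ours):

```python
def split_type(p):
    """Factor x^2 + 1 mod p by searching for roots, and read off the
    splitting of p in Z[i] (Theorem 12 with f = x^2 + 1)."""
    roots = [r for r in range(p) if (r * r + 1) % p == 0]
    if len(roots) == 2:
        return "split: pZ[i] = (p, i-%d)(p, i-%d)" % (roots[0], roots[1])
    if len(roots) == 1:
        return "ramified: pZ[i] = (p, i-%d)^2" % roots[0]
    return "inert: x^2+1 irreducible mod %d, so pZ[i] is prime" % p

for p in (2, 3, 5, 13):
    print(p, split_type(p))
# 2 is ramified (double root 1); 3 is inert; 5 and 13 split (p = 1 mod 4)
```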
3.17. Classifying Completions of Global Fields. Finally, the extension theorem above allows us to classify all the completions of number fields. Let K be a number field and let $|\cdot|$ be a nonarchimedean absolute value on K. Restricting it to Q, we clearly obtain a nonarchimedean absolute value on Q. By Ostrowski's Theorem, we know that this comes from a prime $p \in \mathbb Z$. If $\mathcal O_K$ is the ring of integers of K and $p\mathcal O_K = \mathfrak p_1^{e_1} \cdots \mathfrak p_g^{e_g}$, then the results of the previous section imply that $|\cdot|$ must be equivalent to the absolute value induced by one of the $\mathfrak p_i$. This shows that every nonarchimedean absolute value on a number field corresponds to a unique prime ideal in the ring of integers.

What about the archimedean absolute values on K? Let $|\cdot|$ be one such absolute value. Restricting to Q, we obtain an archimedean absolute value, which Ostrowski's Theorem tells us is equivalent to the one obtained by embedding Q into C and taking the usual absolute value there. Going back to the extension arguments of the previous section, we see that $\hat K$ is a finite extension of R, the closure of Q inside $\hat K$; since C is algebraically closed and $[\mathbb C : \mathbb R] = 2$, this forces $\hat K = \mathbb R$ or $\hat K = \mathbb C$. This tells us that $|\cdot|$ coincides with the absolute value obtained from an embedding $\sigma : K \hookrightarrow \mathbb C$. There are now two cases to consider, depending on whether or not the complex conjugate $\bar\sigma$ coincides with σ. If $\bar\sigma = \sigma$, then $\sigma(K) \subseteq \mathbb R$, in which case we have an absolute value that is distinct from those obtained from the other embeddings; if $\bar\sigma \neq \sigma$, on the other hand, $\sigma(K)$ is not contained in R, and the absolute values $|\cdot|_\sigma$ and $|\cdot|_{\bar\sigma}$ are easily seen to coincide. The theorem of the previous section shows that these are all the archimedean absolute values.

These results can be summarized with the help of some terminology. Let K be a number field. An equivalence class of absolute values on K is called a prime or place of K.

Theorem 13. Let K be a number field. Then there exists exactly one prime of K
(1) for each prime ideal $\mathfrak p \subset \mathcal O_K$;
(2) for each real embedding $\sigma : K \hookrightarrow \mathbb R \subset \mathbb C$;
(3) for each conjugate pair of complex embeddings $\sigma, \bar\sigma : K \hookrightarrow \mathbb C$.

Because primes of K refer to equivalence classes of absolute values, there is a choice of normalization to be made for each class of primes above. The normalizations we will typically follow are:
(1) for $|\cdot|_{\mathfrak p}$, $\mathfrak p$ an ideal of $\mathcal O_K$, set $|x|_{\mathfrak p} := (N\mathfrak p)^{-\operatorname{ord}_{\mathfrak p} x}$;
(2) for $|\cdot|_\sigma$, $\sigma : K \hookrightarrow \mathbb R$, set $|x|_\sigma := |\sigma(x)|$;
(3) for $|\cdot|_{\bar\sigma}$, $\bar\sigma \neq \sigma : K \hookrightarrow \mathbb C$, set $|x|_{\bar\sigma} := |\sigma(x)|\,|\bar\sigma(x)| = |\sigma(x)|^2$.
Note that the normalization in the complex case is not technically an absolute value (if you try proving the triangle inequality, you will fail miserably), but this form is convenient for the statements of theorems.

3.18. Local Fields. Remember that our whole purpose in all the preceding algebra was to understand a bit more about so-called local fields. Recall that we said a field $(K, |\cdot|)$ is a local field if K is complete with respect to the absolute value and has a finite residue field. In case the absolute value is nonarchimedean, this can be characterized by compactness of the ring of integers.

Proposition 14. Let K be complete with respect to a nonarchimedean absolute value $|\cdot|$ and let A be the ring of integers of K with maximal ideal $\mathfrak m$. Then the residue field $k = A/\mathfrak m$ is finite if and only if A is compact.

Proof. Let S be a system of representatives for $A/\mathfrak m$; our goal is to show that S is finite if and only if A is compact. First suppose that A is compact. Since $\mathfrak m = \{x \in A \mid |x| < 1\}$, we see that $\mathfrak m$ is open.
Then the sets $s + \mathfrak m$, as s ranges over the elements of S, form a disjoint open covering of A. Since A is compact, there can only be finitely many sets in this covering, meaning that S is finite. Conversely, assume that S is finite. Then every element of A can be written as a power series
$$s_0 + s_1\pi + s_2\pi^2 + \cdots,$$
where π is a generator for $\mathfrak m$ and $s_i \in S$ for all i. Now, for each positive integer n, there are only finitely many distinct elements of the form $s_0 + s_1\pi + \cdots + s_n\pi^n$. Since $|\cdot|$ is nonarchimedean, every element of A is within $|\pi^{n+1}|$ of such an element; that is, the finitely many open balls of radius $|\pi^{n+1}|$ centred at the points $s_0 + s_1\pi + \cdots + s_n\pi^n$, $s_i \in S$, cover A. Recall from real analysis that this means that A is a totally bounded metric space. Since it is also complete with respect to the absolute value, A is compact. ■

Recall that a closed subspace of a compact space is itself compact. A nice consequence of this elementary topological fact, together with the above characterization, is that we obtain some very useful compact subspaces of local fields.

Corollary 15. Let K be a nonarchimedean local field. Then $\mathfrak p^n$, $1 + \mathfrak p^n$ and $A^\times$ are all compact.

Let me remind you that $\mathfrak p$ here refers to the prime to which the nonarchimedean absolute value on K corresponds, viewed as the maximal ideal of the corresponding discrete valuation ring. Since a discrete valuation ring is local, $\mathfrak p$ is also the Jacobson radical of the ring. Thus $1 + \mathfrak p^n \subseteq 1 + \mathfrak p$ consists of units for all positive integers n.

Recall from our discussion of nonarchimedean absolute values that every element of K can be written in the form $a\pi^n$ for some $a \in A^\times$ and $n \in \mathbb Z$. It is then easy to see that every element of K admits a compact neighbourhood: just take the corresponding translate $\pi^n A^\times$ of $A^\times$. This means that nonarchimedean local fields can also be characterized as locally compact fields. Some authors define local fields as fields which are locally compact. The advantage of this is that completions of number fields with respect to archimedean absolute values are also considered local fields. By Ostrowski's Theorem, this amounts to also admitting R and C as local fields. Besides R and C, local fields are either finite extensions of the p-adic numbers $\mathbb Q_p$ for some prime p, or else finite extensions of a field of Laurent series $k((t))$ for some finite field k. The former will be of interest for our study of number theory, whereas the latter will come up when we begin looking at some geometry.

3.19. Unramified Extensions. In the study of field extensions, a particularly nice class of extensions are the unramified extensions. Roughly speaking, these are field extensions in which the ring of integers of the extension field does not "fold over" anywhere. The point of studying unramified field extensions is that their arithmetic is simpler than that of general field extensions: somehow, these extensions behave a lot like extensions of finite fields, which are well understood. For reasons such as these, our study of class field theory will begin with unramified field extensions.

For this subsection, let K be a field complete with respect to a discrete absolute value $|\cdot|$. Write k for the residue field of K and let A be the corresponding discrete valuation ring. For an extension field L of K, write l for the residue field and B for the corresponding discrete valuation ring. A finite extension L | K is called unramified if the extension l | k of the residue fields is separable and $[L : K] = [l : k]$.
An arbitrary extension is called unramified if it is the union of finite unramified subextensions. Note that when L is a possibly infinite extension of K, we define
$$B := \{x \in L \mid |x| \leq 1\}, \qquad \mathfrak P := \{x \in B \mid |x| < 1\},$$
and define the quotient $B/\mathfrak P$ to be the residue field of L. This definition coincides with taking B to be the integral closure of A in L when the extension is finite.

The following result makes precise some of the statements made in the first paragraph.

Proposition 16. Let L be an algebraic extension of K. The map $K' \mapsto k'$ sending an unramified extension L | K′ | K to its residue field k′ is a one-to-one correspondence between the sets
$$\{K' \subseteq L \text{ finite and unramified over } K\} \;\leftrightarrow\; \{k' \subseteq l \text{ finite over } k\}.$$
Moreover, if $K' \leftrightarrow k'$ and $K'' \leftrightarrow k''$, then:
(1) $K' \subseteq K''$ if and only if $k' \subseteq k''$;
(2) K′ is Galois over K if and only if k′ is Galois over k, in which case there is a canonical isomorphism $\operatorname{Gal}(K' | K) \to \operatorname{Gal}(k' | k)$.

Proof. See Milne ANT Proposition 7.50. ■

An important step in the proof of this proposition is to show that the compositum of two unramified extensions is itself unramified.

Lemma 17. Suppose K′ | K and K′′ | K are two unramified extensions of K contained in L. Then the compositum K′K′′ | K is an unramified extension of K contained in L.

Proof. From global theory, we know that a prime ramifies in an extension if and only if the prime divides the discriminant of the extension (cf. Milne ANT Theorem 3.20). Since K′ | K and K′′ | K are unramified, the prime $\mathfrak p$ of K divides neither $\Delta_{K'}$ nor $\Delta_{K''}$. But we also have
$$\Delta_{K'K''} = \Delta_{K'}^{[K'':K]}\, \Delta_{K''}^{[K':K]}$$
(cf. Milne ANT Remark 6.6c). Since $\mathfrak p$ does not divide the right hand side, $\mathfrak p$ cannot divide $\Delta_{K'K''}$, so K′K′′ is an unramified extension of K. Since the elements of K′K′′ are linear combinations of products of elements of K′ and K′′, it is clear that $K'K'' \subseteq L$. ■

This lemma allows us to apply Zorn's Lemma to conclude that there is always a largest unramified extension $K_0$ of K contained in L, one which contains all other unramified extensions of K in L. In fact, more can be said when k is finite. All finite extensions of a finite field can be obtained by adjoining roots of unity of order prime to the characteristic of k. Thus, in case k is finite, $K_0$ is obtained by adjoining all such roots of unity contained in l.

Not surprisingly, the most important choice for L is the algebraic closure $K^{\mathrm{al}}$ of K.

Corollary 18. Assume both K and k are perfect. Then the residue field of $K^{\mathrm{al}}$ is $k^{\mathrm{al}}$, and there is a subfield $K^{\mathrm{un}}$ of $K^{\mathrm{al}}$ such that a subfield L of $K^{\mathrm{al}}$, finite over K, is unramified if and only if $L \subseteq K^{\mathrm{un}}$.

Proof. Let $f_0(x) \in k[x]$ be any polynomial and let $f(x) \in A[x]$ be any lift of $f_0(x)$. Then $K^{\mathrm{al}}$ contains all the roots of f, and hence the residue field k′ of $K^{\mathrm{al}}$ contains all the roots of $f_0(x)$. Thus k′ is algebraic over k, and every polynomial over k splits in k′, so $k' = k^{\mathrm{al}}$. The field $K^{\mathrm{un}}$ is the largest unramified extension of K contained in $K^{\mathrm{al}}$. ■

Since all finite extensions of K are contained in the algebraic closure of K, the above result gives a sort of absolute property to the correspondence between finite unramified extensions of K and finite extensions of k. More precisely, we have:

Corollary 19. Assume K and k are perfect fields. Then there is an equivalence of categories between the category of finite unramified extensions of K and that of finite extensions of k.
As mentioned in the opening of this section, this equivalence of categories will be important for us because it relates the extensions of our local fields to extensions of finite fields. One particularly useful consequence is the existence of an element analogous to the Frobenius element in the Galois groups of finite extensions of local fields. More precisely, let K be a local field and let q be the order of the residue field k of K. For each positive integer n, there is a unique, up to k-isomorphism, field extension $k_n | k$ of degree n. Recall that the Galois group $\operatorname{Gal}(k_n | k)$ is cyclic of order n and has a canonical generator $x \mapsto x^q$ called the Frobenius element. By the above equivalence of categories and the relation between the Galois groups, we see that for each positive integer n there is an unramified extension $K_n | K$ of degree n, unique up to K-isomorphism. Moreover, the Galois group $\operatorname{Gal}(K_n | K)$ is cyclic of order n and has a canonical generator σ, also known as the Frobenius element, characterized by the property
$$\sigma(\beta) \equiv \beta^q \pmod{\mathfrak P} \quad \text{for all } \beta \in B,$$
where B is the discrete valuation ring of $K_n$ and $\mathfrak P$ is its maximal ideal.

3.20. Two More Local Results. There are two more theorems that are typically covered in the local theory. Unfortunately, these results seem very unmotivated at first: their power only comes out when trying to go from the local objects to the global objects. Specifically, the forthcoming theorems become important when we discuss idèles.

The first of these results is the so-called Weak Approximation Theorem. Roughly, it says that finitely many inequivalent absolute values are essentially independent. When looking at the statement, keep the Chinese Remainder Theorem in mind: think of absolute values in terms of their valuations, and thus in connection with primes, and think of the approximations as finding an element in some power of an ideal.

Theorem 20 (Weak Approximation Theorem). Let $|\cdot|_1, \ldots, |\cdot|_n$ be nontrivial inequivalent absolute values on a field K, and let $a_1, \ldots, a_n \in K$. For every $\epsilon > 0$, there is an element $a \in K$ such that $|a - a_i|_i < \epsilon$ for each $i = 1, \ldots, n$.

Proof. See Milne ANT Theorem 7.20 and Neukirch Chapter II Theorem 3.4. ■

The Weak Approximation Theorem will help us show that a certain map defined on the idèle class group is surjective. If we think of the Weak Approximation Theorem as stating an independence property of any finite set of inequivalent absolute values, the Product Formula can be thought of as saying that there is nevertheless a nontrivial relation when all the absolute values are considered together. We shall state it in two parts: first for Q and then for an arbitrary number field.

Theorem 21 (Product Formula). For $p = 2, 3, 5, 7, 11, 13, \ldots, \infty$, let $|\cdot|_p$ denote the corresponding normalized absolute value on Q. Then for any $0 \neq x \in \mathbb Q$,
$$\prod_p |x|_p = 1.$$

Proof. Write $x = a/b$ with a, b nonzero integers. For each finite prime p, let $\alpha_p$ be the order to which p divides a and $\beta_p$ the order to which p divides b, so that $|x|_p = p^{\beta_p - \alpha_p}$. In particular, $|x|_p = 1$ unless $p \mid a$ or $p \mid b$; this shows that the product is finite and thus makes sense. Now we compute:
$$\prod_p |x|_p = \bigg( \prod_{p \mid a} p^{-\alpha_p} \bigg) \bigg( \prod_{q \mid b} q^{\beta_q} \bigg) \left| \frac ab \right|_\infty = \frac{1}{|a|} \cdot |b| \cdot \frac{|a|}{|b|} = 1. \;\blacksquare$$

An analogous result holds for general number fields. The proof of the general case uses the formula for extensions of absolute values together with the product formula for Q.

Theorem 22 (Product Formula). For each prime ν of a number field K, let $|\cdot|_\nu$ be the normalized absolute value corresponding to ν.
Then for every $0 \neq x \in K$,
$$\prod_\nu |x|_\nu = 1.$$

Before leaving off, let me mention that the product formula may be used as an axiom for global fields. This axiomatization was achieved by Artin and Whaples in 1945. Let K be a field with a set of primes V satisfying the following two axioms:
(1) There is a set of representatives $|\cdot|_\nu$ for the primes such that, for any nonzero $x \in K$, $|x|_\nu \neq 1$ for only finitely many $\nu \in V$, and
$$\prod_{\nu \in V} |x|_\nu = 1.$$
(2) There exists at least one prime $\nu \in V$ for which $K_\nu$ is a local field.
Then K is a global field, and V consists of all the primes of K.
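Theorem 21 is easy to check by machine. A short Python sketch (helper names ours) multiplying the normalized absolute values over all places of Q:

```python
from fractions import Fraction

def prime_factors(n):
    """Prime factorization of a nonzero integer |n| as {p: exponent}."""
    n, p, out = abs(n), 2, {}
    while n > 1:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    return out

def product_formula_check(x):
    """prod_p |x|_p * |x|_inf over all places of Q; should equal 1."""
    x = Fraction(x)
    result = Fraction(abs(x.numerator), x.denominator)   # |x|_infinity
    for p, e in prime_factors(x.numerator).items():
        result *= Fraction(1, p) ** e    # |x|_p = p^(-e) at primes dividing a
    for p, e in prime_factors(x.denominator).items():
        result *= Fraction(p) ** e       # |x|_p = p^(+e) at primes dividing b
    return result

for x in (Fraction(-12, 35), Fraction(2023, 7), 5, Fraction(1, 1024)):
    assert product_formula_check(x) == 1
print("product formula verified for sample rationals")
```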
SPhT/ -0

VECTOR MODELS IN THE LARGE N LIMIT: A FEW APPLICATIONS

J. ZINN-JUSTIN
CEA-Saclay, Service de Physique Théorique*, F-91191 Gif-sur-Yvette Cedex, FRANCE

ABSTRACT

In these lecture notes, prepared for the Taiwan Spring School (Taipei) and updated for the Saalburg summer school, we review the solutions of O(N) or U(N) models in the large N limit and as 1/N expansions, in the case of vector representations. The general idea is that invariant composite fields have small fluctuations for N large. Therefore the method relies on constructing effective field theories for these composite fields after integration over the initial degrees of freedom. We illustrate these ideas by showing that the large N expansion allows one to relate the $(\phi^2)^2$ theory and the non-linear σ-model, models which are renormalizable in different dimensions. In the same way, large N techniques allow one to relate the Gross–Neveu model, an example of a theory with a four-fermi self-interaction, to a Yukawa-type theory renormalizable in four dimensions, a topic relevant for four-dimensional field theory. Among other issues for which large N methods are also useful, we will briefly discuss finite size effects and finite temperature field theory, because they involve a crossover between different dimensions. Finally, we consider the case of a general scalar $V(\phi^2)$ field theory, explain how the large N techniques can be generalized, and discuss some connected issues like tricritical behaviour and the double scaling limit.

Some sections in these notes are directly adapted from the work Zinn-Justin J., Quantum Field Theory and Critical Phenomena, Clarendon Press (Oxford, third ed.).

These lecture notes are dedicated to Mrs. T.D. Lee, who recently passed away, as a testimony of gratitude for the long lasting friendship between our families.

email: zinn@spht.saclay.cea.fr
* Laboratoire de la Direction des Sciences de la Matière du Commissariat à l'Energie Atomique

Contents

1 Introduction
2 The N-vector model near dimension four: Renormalization Group (RG)
 2.1 Mean field theory and the stability of the gaussian fixed point
 2.2 RG equations for the critical (massless) theory
 2.3 RG equations and large distance behaviour: the ε-expansion
 2.4 Critical correlation functions with $\phi^2(x)$ insertions
 2.5 Scaling behaviour in the critical domain
 2.6 Scaling laws in a magnetic field and below $T_c$
 2.7 Four dimensions: logarithmic corrections and triviality
 Bibliographical Notes
3 The O(N) Spin Model at Low Temperature: the Non-Linear σ-Model
 3.1 The non-linear σ-model
 3.2 RG equations
 3.3 Discussion of the RG flow
 3.4 Integration of the RG equations
 3.5 The dimension two
 Bibliographical Notes
4 $(\phi^2)^2$ Field Theory and Non-Linear σ Model in the Large N Limit
 4.1 Introduction
 4.2 Large N limit: the critical domain
 4.3 RG functions and leading corrections to scaling
 4.4 Small coupling constant and large momentum expansions for d < 4
 4.5 The non-linear σ-model in the large N limit
 4.6 The 1/N-expansion: an alternative field theory
 4.7 Additional results
 4.8 Dimension four: triviality, renormalons, Higgs mass
 4.9 Finite size effects
 4.10 Field theory at finite temperature
 4.11 Other methods. General vector field theories
 Bibliographical Notes
5 Gross–Neveu and Gross–Neveu–Yukawa Models
 5.1 The Gross–Neveu model
 5.2 The Gross–Neveu–Yukawa model
 5.3 RG equations near four dimensions
 5.4 GNY and GN models in the large N limit
 5.5 The large N expansion
6 Other models with chiral fermions
 6.1 Massless electrodynamics
 6.2 The large N limit
 6.3 The U(N) Thirring model
 Bibliographical Notes
7 The O(N) vector model in the large N limit: multi-critical points and double scaling limit
 7.1 Double scaling limit: simple integrals and quantum mechanics
 7.2 The $V(\phi^2)$ field theory in the double scaling limit
 7.3 The $V(\phi^2)$ theory in the large N limit: phase transitions
 7.4 The scalar bound state
 7.5 Stability and double scaling limit
 7.6 Conclusions
 Bibliographical Notes

1 Introduction

In these lectures we describe a few applications of large N techniques to quantum field theories (QFT) with O(N) or U(N) symmetries, where the fields are in the vector representation. We want to show that large N results nicely complement results obtained from more conventional perturbative renormalization group (RG). Indeed, the shortcoming of the latter method is that it mainly applies to gaussian or near-gaussian fixed points. This restricts the space dimension to the dimensions in which the corresponding effective QFT is renormalizable or, after dimensional continuation, to the neighbourhood of such dimensions. Large N techniques in some cases allow a study in generic dimensions. They rely on noting that in the large N limit scalar (in the group sense) composite fields have small fluctuations (central limit theorem). Therefore, if we are able to construct an effective field theory for the scalars, integrating out the initial degrees of freedom, we can solve the field theory in a 1/N expansion. Note that for vector representations the number of independent scalars is finite and independent of N, unlike what happens for matrix representations. This explains why vector models have been solved much more generally than matrix models.

In these lectures we will in particular stress two points: first, it is necessary to always check that the 1/N expansion is both IR finite and renormalizable.
Some technical aspects of this question will be described below. This is essential for the stability of the large N results and the existence of a 1/N expansion. Second, the large N expansion is just a technique, with its own (often unknown) limitations. It should not be used in isolation. Instead, as we shall do in the following examples, it should be combined with other perturbative techniques, and the reliability of the 1/N expansion should be inferred from the general consistency of all the results.

Second-order phase transitions in classical statistical physics will provide us with the first illustration of the usefulness of the large N expansion. Due to the divergence of the correlation length at the critical temperature, systems then have, at and near $T_c$, universal properties which can be described by effective continuum quantum field theories. The N-vector model that we discuss below is the simplest example, but it has many applications, since it describes the critical properties of systems like vapour–liquid and binary-mixture transitions, superfluid Helium and ferromagnetic transitions, as well as the statistical properties of polymers. Before showing what kind of information can be provided by large N techniques, we will first briefly recall what can be learned from perturbative renormalization group (RG) methods. Long distance properties can be described in d = 4 − ε dimensions by a $(\phi^2)^2$ field theory. Instead, in d = 2 + ε the relevant QFT model is the O(N) non-linear σ-model. It is somewhat surprising that the same statistical model can be described by two different field theories. Since the results derived in this way are valid a priori only for ε small, there is no overlap in which to test their consistency. The large N expansion will allow us to discuss generic dimensions and thus to understand the relation between both field theories.

Another domain of application of the large N expansion is finite size effects and finite temperature field theory. In these situations a dimensional crossover occurs between the large size or zero temperature situation, where the infinite volume theory is relevant, and a dimensionally reduced theory in the small volume or high temperature limit. The two effective field theories being renormalizable in different dimensions, perturbative RG cannot describe both situations correctly. Again, large N techniques will help us to understand the crossover.

Four-fermi interactions have been proposed to generate a composite Higgs particle in four dimensions, as an alternative to a Yukawa-type theory of the kind one finds in the Standard Model. Using the specific example of the Gross–Neveu model, we will employ large N techniques to clarify the relations between these two approaches. We will finally briefly indicate that other models with chiral properties, like massless QED or the Thirring model, can be studied by similar techniques.

In the last section we return to scalar boson field theories, and examine multi-critical points (where the large N technique will show some obvious limitations) and the double scaling limit, a toy model for discussing problems encountered in matrix models of 2D quantum gravity.
2 The N-vector model near dimension four: Renormalization Group (RG)

The N-vector model is a lattice model described in terms of N-vector spin variables $S_i$ of unit length on each lattice site i, interacting through a short range ferromagnetic O(N) symmetric two-body interaction $V_{ij}$. The partition function of such a model can be written:
$$Z = \int \prod_i dS_i\, \delta(S_i^2 - 1)\, \exp[-E(S)/T], \tag{2.1}$$
in which the configuration energy E is:
$$E(S) = -\sum_{ij} V_{ij}\, S_i \cdot S_j. \tag{2.2}$$
This model has a second order phase transition between a disordered phase at high temperature and a low temperature ordered phase where the O(N) symmetry is spontaneously broken and the order parameter $S_i$ has a non-vanishing expectation value. At a second order phase transition the correlation length diverges, and therefore a non-trivial long distance physics can be defined. Scaling and universality properties emerge, which we want to study. To generate correlation functions, one can add to E(S) a coupling to a space-dependent magnetic field:
$$E(S) = -\sum_{ij} V_{ij}\, S_i \cdot S_j - \sum_i H_i \cdot S_i. \tag{2.3}$$

2.1 Mean field theory and the stability of the gaussian fixed point

To derive the critical properties of the N-vector model one can proceed in the following way: one starts from the mean field approximation, valid in high dimensions. One then shows that the mean field approximation is the first term in a systematic expansion. One discovers that for dimensions d > 4 the successive terms in the expansion do not modify the leading mean field behaviour. For d < 4, instead, IR divergences appear and the mean field approximation is no longer valid. Moreover, a summation of the leading IR divergences to all orders in the expansion leads to an effective local $\phi^4$ field theory. The corresponding action is given by the first relevant terms of the Landau–Ginzburg–Wilson hamiltonian:
$$\mathcal H(\phi) = \int d^dx \left\{ \tfrac12\, c\, [\nabla\phi(x)]^2 + \tfrac12\, a\, \phi^2(x) + \frac{b}{4!}\, [\phi^2(x)]^2 \right\}, \tag{2.4}$$
with a, b and c regular functions of the temperature for T close to $T_c$. Note that the expression (2.4), which in the sense of classical statistical physics is a configuration energy, is often called a hamiltonian. The reason is that if one starts from a classical hamiltonian and a functional integral over phase space, the integral over conjugate momenta is gaussian and thus trivial. From the point of view of quantum field theory, the expression (2.4) has the form of a euclidean action, the analytic continuation to imaginary time of the classical field theory action. We shall thus generally call it the action. Alternatively, one can imagine starting from the configuration energy (2.2) and constructing Wilson's renormalization group by integrating out short distance degrees of freedom. The spin variable $S_i$ is then replaced by a local average, a vector of continuous length of the type of the field $\phi(x)$.

Mean field theory corresponds to the gaussian fixed point of this renormalization group. At the critical temperature one finds a massless free field theory
$$\mathcal H_G(\phi) = \frac12 \int d^dx\; c\, [\nabla\phi(x)]^2.$$
One then performs an analysis of the stability of the gaussian fixed point. Mean field theory assumes that the order parameter, here the field $\phi(x)$, is small and varies only on macroscopic scales. Therefore, a general action can be expanded in powers of the field $\phi(x)$ and its derivatives.
$$\mathcal H(\phi) = \frac12 \int d^dx\; c\, [\nabla\phi(x)]^2 + \sum_{n,m} \mathcal H_{n,m}(\phi),$$
where the sum runs over all space integrals $\mathcal H_{n,m}(\phi)$ of O(N) symmetric monomials in φ of degree n containing m derivatives (often below called operators, a language borrowed from quantum field theory).

A convenient way to understand the relevance of the $\mathcal H_{n,m}(\phi)$ terms in the large distance (infrared) limit is to rescale all space or momentum variables and to measure distances in units of the correlation length or, at the critical temperature, in some arbitrary unit λ much larger than the lattice spacing and corresponding to the typical distances at which correlations are measured. Let us perform such a rescaling here, and rescale also the field $\phi(x)$ in such a way that the coefficient of $[\nabla\phi(x)]^2$, to which all contributions will be compared, becomes the standard 1/2:
$$x \to \lambda x, \tag{2.5}$$
$$\phi(x) \to \zeta\, \phi(x). \tag{2.6}$$
After this rescaling all quantities have a dimension in units of λ. Our choice of normalization for the gradient term implies:
$$\zeta = c^{-1/2}\, \lambda^{1 - d/2}, \tag{2.7}$$
which shows that φ now has, in terms of λ, its canonical dimension d/2 − 1. A term $\mathcal H_{n,m}(\phi)$ is then multiplied by
$$\mathcal H_{n,m}(\phi) \to \lambda^{\,d - n(d-2)/2 - m}\, \mathcal H_{n,m}(\phi).$$
For λ large we observe the following:
(i) The leading term is the term proportional to $\int d^dx\, \phi^2(x)$, which is multiplied by $\lambda^2$. This is not surprising, since it gives a mass to the field and therefore the theory moves away from the massless critical theory (the term is called relevant).
(ii) If d > 4, all other terms are multiplied by negative powers of λ and therefore become negligible in the long distance limit. They are called irrelevant. The gaussian fixed point is stable and mean field theory is thus correct.
(iii) In four dimensions the $(\phi^2)^2$ interaction is independent of λ: it is called marginal, while all other interactions remain irrelevant. The analysis of the stability of the gaussian fixed point then requires a finer study, which will be based on the field theory perturbative renormalization group.
(iv) Below four dimensions the $(\phi^2)^2$ interaction becomes relevant; the gaussian fixed point is certainly unstable. The question of the existence of another, non-trivial fixed point is non-perturbative and cannot be easily answered. Partial answers are based upon the following assumption: the dimensions of operators are continuous functions of the space dimension. This means that we are going to look for a fixed point which, when d approaches four, coalesces with the gaussian fixed point. Moreover, even at this new fixed point, at least in some neighbourhood of dimension four, all operators except $(\phi^2)^2$ should remain irrelevant. The action (2.4) should then contain all relevant operators and therefore enough information about the non-trivial fixed point.

After the rescaling (2.5, 2.6), the action (2.4) becomes:
$$\mathcal H(\phi) = \int d^dx \left\{ \tfrac12 [\nabla\phi(x)]^2 + \tfrac12\, r\, \phi^2(x) + \frac{1}{4!}\, g\, \lambda^{4-d}\, [\phi^2(x)]^2 \right\}, \tag{2.8}$$
with $r = a\lambda^2/c$ and $g = b/c^2$. The action (2.8) generates a perturbative expansion of field theory type which can be described in terms of Feynman diagrams. These have to be calculated with a momentum cut-off of order λ, a reflection of the initial microscopic structure. The corresponding theory is thus analogous to a regularized quantum field theory. The precise cut-off procedure can be shown to be irrelevant, except that it should satisfy some general regularity conditions. For example, the propagator can be modified (as in Pauli–Villars's regularization), but the inverse propagator in momentum space must remain a regular function of momentum (the forces are short range).
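Returning to the power counting above: the classification of operators by the sign of the exponent d − n(d−2)/2 − m is mechanical, as the following Python sketch (names ours) illustrates:

```python
def scaling_power(d, n, m):
    """Exponent of lambda multiplying an operator with n fields phi and
    m derivatives after the rescaling: d - n*(d-2)/2 - m."""
    return d - n * (d - 2) / 2.0 - m

def classify(d, n, m):
    p = scaling_power(d, n, m)
    return "relevant" if p > 0 else "marginal" if p == 0 else "irrelevant"

# (phi^2)^2, i.e. n = 4, m = 0: irrelevant above d = 4, marginal at d = 4,
# relevant below -- the origin of the epsilon-expansion.
for d in (5, 4, 3):
    print(d, classify(d, 4, 0))
# The mass term phi^2 (n = 2, m = 0) carries power 2 in any dimension.
print(scaling_power(3, 2, 0), scaling_power(4, 2, 0))  # 2.0 2.0
```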
Let us call $r_c$ the value of the parameter $r$ which corresponds, at $g$ fixed, to the critical temperature $T_c$ at which the correlation length $\xi$ diverges. In terms of the scale $\Lambda$ the critical domain is then defined by:

physical mass $= \xi^{-1} \ll \Lambda$ (hence $|r - r_c| \ll 1$), distances $\gg 1/\Lambda$ or momenta $\ll \Lambda$, magnetization $M \equiv |\langle\phi(x)\rangle| \ll \Lambda^{(d-2)/2}$.

Note that these conditions are met if $\Lambda$ is identified with the cut-off of a usual field theory. However, an inspection of the rescaled action also shows that, in contrast with conventional quantum field theory, the $(\phi^2)^2$ coupling constant has a dependence on $\Lambda$ given a priori. For $d<4$ the $(\phi^2)^2$ coupling $g\,\Lambda^{4-d}$ is very large in terms of the scale relevant for the critical domain. In the usual formulation of quantum field theory, instead, the bare coupling constant also is an adjustable parameter. This implies for instance that for $d<4$ (a super-renormalizable theory) the coupling constant varies when the correlation length changes. This is a somewhat artificial situation if one believes that the initial bare or microscopic theory has a physical meaning.

The critical properties of the field theory (like the long distance behaviour of correlation functions) can then be analyzed by RG methods in $4-\varepsilon$ dimensions, i.e. near the so-called upper-critical dimension (and with some additional assumptions in lower dimensions).

Dimensions of fields. Because we deal with translation invariant theories, we will generally discuss the scaling behaviour of correlation functions in momentum variables. Let us therefore relate the scaling behaviours of connected correlation functions expressed in terms of space and of momentum variables. When functions have a scaling behaviour, one defines

$$\Big\langle \prod_{i=1}^n O_i(\lambda x_i) \Big\rangle_c = \lambda^{-D}\, \Big\langle \prod_i O_i(x_i) \Big\rangle_c, \qquad D = \sum_i d_{O_i},$$

where $O_i$, sometimes called an operator, is a local polynomial in the basic fields (associated with the order parameter), and the quantity $d_{O_i}$, which we sometimes also denote $[O_i]$, is called the dimension of the field (operator) $O_i$. After Fourier transformation and factorization of the $\delta$-function of momentum conservation, one then finds, in $d$ space dimensions,

$$\Big\langle \prod_{i=1}^n O_i(\lambda p_i) \Big\rangle_c = \lambda^{D'}\, \Big\langle \prod_i O_i(p_i) \Big\rangle_c, \qquad D' = d + \sum_i \big( d_{O_i} - d \big).$$

Finally, it is convenient to introduce the Legendre transform $\Gamma(\phi)$ of the generating functional $W(H) = T\ln Z(H)$ of $\phi$-field connected correlation functions. We denote by $W^{(n)}$ and $\Gamma^{(n)}$ the corresponding connected and 1PI functions. One verifies that if one performs a Legendre transformation on the source associated with the field (operator) $O_i$, the quantity $d_{O_i}-d$ in the equation above is replaced by $-d_{O_i}$.

RG equations for the critical (massless) theory

The field theory with the rescaled action can now be studied by field theoretical methods. From simple power counting arguments one concludes that the critical (or massless) theory does not exist in perturbation theory in any dimension smaller than four. If we define, by dimensional continuation, a critical theory in $d = 4-\varepsilon$ dimensions, then even for arbitrarily small $\varepsilon$ there always exists an order in perturbation theory (of order $1/\varepsilon$) at which IR (infrared) divergences appear. Therefore the idea, originally due to Wilson and Fisher, is to perform a double series expansion in powers of the coupling constant $g$ and of $\varepsilon$.
Order by order in this expansion, the critical behaviour differs from the mean field behaviour only by powers of logarithms, and we can construct a perturbative critical theory by adjusting $r$ to its critical value $r_c$ ($T = T_c$).

To study the large cut-off limit we then use the methods developed for the construction of the renormalized massless $(\phi^2)^2$ field theory. We introduce rescaled (renormalized) correlation functions, defined by renormalization conditions at a new scale $\mu \ll \Lambda$, and functions of a renormalized coupling constant $g_r$. We write here the equations for Ising-like systems, the field $\phi$ having only one component. The generalization to the N-vector model with $O(N)$ symmetry is straightforward, except in the low temperature phase or in a symmetry breaking field, a situation which will be examined in a later section. Then:

$$\Gamma^{(2)}_r(p; g_r, \mu, \Lambda)\Big|_{p^2=0} = 0,\qquad
\frac{\partial}{\partial p^2}\,\Gamma^{(2)}_r(p; g_r, \mu, \Lambda)\Big|_{p^2=\mu^2} = 1,\qquad
\Gamma^{(4)}_r(p_i = \mu\theta_i; g_r, \mu, \Lambda) = \mu^{\varepsilon}\, g_r,$$

in which $\theta_i$ is a numerical vector. These correlation functions are related to the original ones by

$$\Gamma^{(n)}_r(p_i; g_r, \mu, \Lambda) = Z^{n/2}(g, \Lambda/\mu)\,\Gamma^{(n)}(p_i; g, \Lambda).$$

Renormalization theory tells us that the functions $\Gamma^{(n)}_r(p_i; g_r, \mu, \Lambda)$ have, at $p_i$, $g_r$ and $\mu$ fixed, a large cut-off limit: the renormalized correlation functions $\Gamma^{(n)}_r(p_i; g_r, \mu)$. A detailed analysis actually shows that, at any finite order in perturbation theory,

$$\Gamma^{(n)}_r(p_i; g_r, \mu, \Lambda) = \Gamma^{(n)}_r(p_i; g_r, \mu) + O\!\left( \Lambda^{-2}(\ln\Lambda)^{L} \right),$$

in which the power $L$ of $\ln\Lambda$ increases with the order in $g$ and $\varepsilon$. Furthermore, the renormalized functions $\Gamma^{(n)}_r$ do not depend on the specific cut-off procedure and, given the normalization conditions, are therefore universal. Since the renormalized functions $\Gamma^{(n)}_r$ and the initial ones $\Gamma^{(n)}$ are asymptotically proportional, both functions have the same small momentum, or large distance, behaviour. To determine the universal critical behaviour it is thus sufficient to study the renormalized field theory, and indeed most perturbative calculations of universal quantities have been performed in this framework. However, it is interesting to determine not only the asymptotic critical behaviour but also the corrections to the asymptotic theory. Furthermore, renormalized quantities are not directly obtained in non-perturbative calculations. For these reasons it is also useful to express the implications of the relation above directly on the initial theory.

Bare RG equations. Let us differentiate the relation above with respect to $\Lambda$ at $g_r$ and $\mu$ fixed:

$$\Lambda\frac{\partial}{\partial\Lambda}\bigg|_{g_r,\mu\ \mathrm{fixed}} \Big[ Z^{n/2}(g, \Lambda/\mu)\,\Gamma^{(n)}(p_i; g, \Lambda) \Big] = O\!\left( \Lambda^{-2}(\ln\Lambda)^{L} \right).$$

We now neglect the corrections, subleading (in perturbation theory) by powers of $\Lambda$. Then, using the chain rule, we can rewrite this equation as

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g, \Lambda/\mu)\frac{\partial}{\partial g} - \frac{n}{2}\,\eta(g, \Lambda/\mu) \right]\Gamma^{(n)}(p_i; g, \Lambda) = 0.$$

The functions $\beta$ and $\eta$, which are dimensionless and may thus depend only on the dimensionless quantities $g$ and $\Lambda/\mu$, are defined by

$$\beta(g, \Lambda/\mu) = \Lambda\frac{\partial g}{\partial\Lambda}\bigg|_{g_r,\mu},\qquad
\eta(g, \Lambda/\mu) = -\Lambda\frac{\partial}{\partial\Lambda}\bigg|_{g_r,\mu} \ln Z(g, \Lambda/\mu).$$

However, the functions $\beta$ and $\eta$ can also be calculated directly from the RG equation itself, in terms of the functions $\Gamma^{(n)}$, which do not depend on $\mu$. Therefore the functions $\beta$ and $\eta$ cannot depend on the ratio $\Lambda/\mu$ (in these definitions, consistency requires that contributions which go to zero like some power of $\mu/\Lambda$ be neglected, as before).
The RG equation can thus be rewritten as

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g)\frac{\partial}{\partial g} - \frac{n}{2}\,\eta(g) \right]\Gamma^{(n)}(p_i; g, \Lambda) = 0.$$

This equation is satisfied, when the cut-off is large, by the physical correlation functions of statistical mechanics, which are also the bare correlation functions of quantum field theory. It expresses the existence of a renormalized theory.

RG equations and large distance behaviour: the epsilon-expansion

The RG equation can be solved by the method of characteristics: one introduces a dilatation parameter $\lambda$, together with a running coupling constant $g(\lambda)$ and a scale-dependent field renormalization $Z(\lambda)$, satisfying the flow equations

$$\lambda\frac{d}{d\lambda}\, g(\lambda) = \beta\big(g(\lambda)\big),\quad g(1) = g;\qquad
\lambda\frac{d}{d\lambda}\ln Z(\lambda) = \eta\big(g(\lambda)\big),\quad Z(1) = 1.$$

The behaviour of correlation functions for $|p_i| \ll \Lambda$ ($\lambda \to 0$) is then governed by IR fixed points, zeros of the RG $\beta$-function with a positive slope. The RG functions $\beta$ and $\eta$ can be calculated in perturbation theory. From the relation between bare and renormalized coupling constants and the definition of $\beta$, it follows that ($\varepsilon = 4-d$)

$$\beta(g, \varepsilon) = -\varepsilon g + \frac{N+8}{48\pi^2}\, g^2 + O\!\left( g^3,\, g^2\varepsilon \right).$$

Let us now assume that $g$ initially is sufficiently small, so that perturbation theory is applicable. We see that above or at four dimensions, i.e. $\varepsilon \le 0$, the function $\beta$ is positive and $g(\lambda)$ decreases, approaching the origin $g=0$. We recover that the gaussian fixed point is IR stable for $d>4$, and find that it is also stable at $d=4$. Below four dimensions, instead, the gaussian fixed point $g=0$ is IR repulsive. However, the expression above shows that, for $\varepsilon$ small, $\beta(g)$ now has a non-trivial zero $g^*$:

$$\beta(g^*) = 0,\qquad g^* = \frac{48\pi^2}{N+8}\,\varepsilon + O(\varepsilon^2),\qquad \text{with}\quad \beta'(g^*) \equiv \omega = \varepsilon + O(\varepsilon^2).$$

The slope $\omega$ at the zero is positive. This non-gaussian fixed point thus is IR stable, at least in the sense of the $\varepsilon$-expansion. In four dimensions it merges with the gaussian fixed point and the eigenvalue $\omega$ vanishes, indicating the appearance of a marginal operator. (The flow towards this fixed point is illustrated numerically below.)

The solution of the RG equation then determines the behaviour of $\Gamma^{(n)}(p_i; g, \Lambda)$ for $|p_i| \ll \Lambda$:

$$\Gamma^{(n)}(\lambda p_i; g, \Lambda) \underset{\lambda\to 0}{\sim} \lambda^{\,d - n(d-2+\eta)/2}\; \Gamma^{(n)}(p_i; g^*, \Lambda),$$

where $\eta = \eta(g^*)$. Critical correlation functions have a power law behaviour at small momenta, independent of the initial value of the $(\phi^2)^2$ coupling constant $g$. In particular, the small momentum behaviour of the inverse two-point function is obtained for $n=2$. For the two-point function $W^{(2)}(p)$ this yields

$$W^{(2)}(p) = \left[ \Gamma^{(2)}(p) \right]^{-1} \underset{|p|\to 0}{\propto} \frac{1}{p^{\,2-\eta}}.$$

The spectral representation of the two-point function implies $\eta > 0$. A short calculation yields

$$\eta = \frac{N+2}{2(N+8)^2}\,\varepsilon^2 + O(\varepsilon^3).$$

The scaling above indicates that the field $\phi(x)$, which had at the gaussian fixed point a canonical dimension $(d-2)/2$, has now acquired an "anomalous" dimension (see the discussion of field dimensions above):

$$d_\phi = \tfrac12\,(d - 2 + \eta).$$

These results call for a few comments. Within the framework of the $\varepsilon$-expansion, one thus proves that all correlation functions have, for $d<4$, a long distance behaviour different from the one predicted by mean field theory. In addition, the critical behaviour does not depend on the initial value of the $(\phi^2)^2$ coupling constant $g$. At least for $\varepsilon$ small, one may hope that the analysis of the leading IR singularities remains valid, so that the critical behaviour does not depend on any other coupling either. The critical behaviour is therefore universal, although less universal than in mean field theory, in the sense that it depends only on a small number of qualitative features of the system under consideration.
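A hedged numerical illustration (my own sketch, not the source's): with the one-loop $\beta$-function quoted above, the flow $\lambda\, dg/d\lambda = \beta(g)$ drives the coupling toward the Wilson-Fisher zero $g^* = 48\pi^2\varepsilon/(N+8)$ in the IR limit $\lambda\to 0$. The values $\varepsilon = 1$, $N = 1$ and the starting coupling are illustrative choices of mine.

```python
# Sketch: integrate lambda dg/dlambda = beta(g) toward the IR (lambda -> 0)
# with the one-loop beta(g) = -eps*g + (N+8) g^2/(48 pi^2) and check that g
# converges to the Wilson-Fisher fixed point g*.
import math

def beta(g, eps=1.0, N=1):
    return -eps * g + (N + 8) * g**2 / (48 * math.pi**2)

g_star = 48 * math.pi**2 * 1.0 / (1 + 8)     # eps = 1, N = 1 (Ising-like)
g, dln = 2.0, -1e-3                          # dln < 0: flow toward the IR
for _ in range(20000):
    g += beta(g) * dln                       # Euler step in ln(lambda)
print(f"g* = {g_star:.3f}   g(IR) = {g:.3f}")  # the two values agree
```

Starting either above or below $g^*$ the flow converges to it, the positive slope $\omega = \beta'(g^*)$ controlling the rate of approach, exactly as stated above.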
Critical correlation functions with $\phi^2(x)$ insertions

RG equations for critical correlation functions with $\int d^dx\,\phi^2(x)$ insertions can also be derived. The operator $\phi^2(x)$ has a direct physical interpretation: it is the most singular (i.e. the most relevant) part of the energy density. Long distance scaling properties follow. Moreover, these RG equations can be used to derive RG equations for correlation functions in the whole critical domain.

We denote by $\Gamma^{(l,n)}(q_1,\ldots,q_l;\, p_1,\ldots,p_n;\, g, \Lambda)$ the mixed 1PI correlation functions of the order parameter $\phi(x)$ and of the energy density $\tfrac12\phi^2(x)$ ($n$ fields $\phi$ and $l$ operators $\tfrac12\phi^2$, with $l+n\ge 2$). Renormalization theory tells us that we can define renormalized correlation functions $\Gamma^{(l,n)}_r(q_i; p_j; g_r, \mu)$ which, in addition to the previous normalization conditions, satisfy at the symmetric point

$$\Gamma^{(1,2)}_r(q; p_1, p_2; g_r, \mu)\Big|_{p_1^2=p_2^2=\mu^2,\ p_1\cdot p_2 = -\mu^2/3} = 1,\qquad
\Gamma^{(2,0)}_r(q, -q; g_r, \mu)\Big|_{q^2=\mu^2} = 0,$$

and are related to the original ones by

$$\lim_{\Lambda\to\infty} Z^{n/2}\,\big(Z_2/Z\big)^{l}\left[ \Gamma^{(l,n)}(q_i; p_j; g, \Lambda) - \delta_{n0}\,\delta_{l2}\,\mu^{-\varepsilon} A \right] = \Gamma^{(l,n)}_r(q_i; p_j; g_r, \mu).$$

Here $Z_2(g, \Lambda/\mu)$ and $A(g, \Lambda/\mu)$ are two new renormalization constants. Differentiating with respect to $\Lambda$ at $g_r$ and $\mu$ fixed, as was done above, and using the chain rule, one obtains a set of RG equations:

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g)\frac{\partial}{\partial g} - \frac n2\,\eta(g) - l\,\eta_2(g) \right]\Gamma^{(l,n)} = \delta_{n0}\,\delta_{l2}\,\Lambda^{-\varepsilon}\, B(g).$$

In addition to $\beta$ and $\eta$, two new RG functions, $\eta_2(g)$ and $B(g)$, appear:

$$\eta_2(g) = \Lambda\frac{\partial}{\partial\Lambda}\bigg|_{g_r,\mu} \ln\big[ Z_2(g, \Lambda/\mu)\big/Z(g, \Lambda/\mu) \big],\qquad
B(g) = \Lambda^{\varepsilon}\,\Lambda\frac{\partial}{\partial\Lambda}\bigg|_{g_r,\mu} \big[ \mu^{-\varepsilon} A(g, \Lambda/\mu) \big].$$

Note that for $n=0$, $l=2$ the RG equation is not homogeneous. This is a consequence of the non-multiplicative character of the renormalization in this case. In the homogeneous case, the equation can be solved exactly in the same way as before; a new running function, associated with the RG function $\eta_2(g)$, has to be introduced. Again, the solution, combined with simple dimensional analysis, leads to the scaling behaviour

$$\Gamma^{(l,n)}(\lambda q_i;\, \lambda p_j;\, g, \Lambda) \underset{\lambda\to 0}{\propto} \lambda^{\,d - n(d-2+\eta)/2 - l/\nu},$$

where the correlation length exponent $\nu$ is related to $\eta_2(g^*)$ by

$$\nu = \big[ 2 + \eta_2(g^*) \big]^{-1}.$$

The dimension of the field $\phi^2$ follows (see the discussion of field dimensions above):

$$d_{\phi^2} = d - 1/\nu.$$

Using the equations above it is easy to calculate $\eta_2(g)$ at one-loop order. At the fixed point $g = g^*$ one then obtains the exponent $\nu$:

$$\nu = \frac12 + \frac{N+2}{4(N+8)}\,\varepsilon + O(\varepsilon^2).$$

The $\langle\phi^2\,\phi^2\rangle$ correlation function. The (energy density) two-point function $\Gamma^{(2,0)}$ satisfies an inhomogeneous RG equation. To solve it, one first looks for a particular solution, which can be chosen of the form $\Lambda^{-\varepsilon}C(g)$:

$$\beta(g)\, C'(g) - \big[ \varepsilon + 2\eta_2(g) \big]\, C(g) = B(g).$$

The solution is uniquely determined by imposing its regularity at $g = g^*$. The general solution of the inhomogeneous equation is then the sum of this particular solution and of the general solution of the homogeneous equation, whose behaviour was given above:

$$\Gamma^{(2,0)}(\lambda q; g, \Lambda) - \Lambda^{-\varepsilon}\, C(g) \underset{\lambda\to 0}{\propto} \lambda^{\,d - 2/\nu}.$$

Remarks.
(i) The physics we intend to describe corresponds to integer values of $\varepsilon$, $\varepsilon = 1, 2$. Although we can only prove the validity of all RG results within the framework of the $\varepsilon$-expansion, we shall eventually assume that their validity extends beyond an infinitesimal neighbourhood of dimension four.
The large $N$ expansion provides a test of the plausibility of this assumption. The decisive test comes, of course, from the comparison with experimental or numerical data.
(ii) In four dimensions the $(\phi^2)^2$ interaction is marginally irrelevant; the renormalized coupling constant of the $(\phi^2)^2$ field theory goes to zero only logarithmically when the cut-off becomes infinite. This induces logarithmic corrections to mean field theory. Moreover, since no other fixed point seems to exist, this leads to the so-called triviality property of the $(\phi^2)^2$ quantum field theory (discussed below).

Scaling behaviour in the critical domain

We have described the scaling behaviour of correlation functions at criticality, $T = T_c$. We now consider the critical domain, which is defined by the property that the correlation length is large with respect to the microscopic scale, but finite.

Remark. The temperature is coupled to the total configuration energy. Therefore a variation of the temperature generates a variation of all terms contributing to the effective action. However, the most relevant contribution (the most IR singular) corresponds to the $\phi^2(x)$ operator. We can therefore take the difference $t = r - r_c \propto T - T_c$ between the coefficient of $\phi^2$ in the action and its critical value as a linear measure of the deviation from the critical temperature. Dimensional analysis then yields the relation

$$\Gamma^{(n)}(p_i; t, g, \Lambda) = \Lambda^{\,d - n(d-2)/2}\;\Gamma^{(n)}\!\left( \frac{p_i}{\Lambda};\; t,\, g;\, 1 \right).$$

With this parametrization the critical domain corresponds to $|t| \ll 1$.

Expansion around the critical theory. One thus adds to the critical action a term of the form $\tfrac12\, t \int d^dx\,\phi^2(x)$. To derive RG equations in the critical domain, one expands the correlation functions in formal power series in $t$. The coefficients are critical correlation functions involving $\phi^2(x)$, for which RG equations have been derived above, inserted at zero momentum. Some care has to be taken to avoid obvious IR problems. Summing the $t$-expansion, one finally obtains RG equations valid for $T \ne T_c$, $|T - T_c|$ small:

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g)\frac{\partial}{\partial g} - \frac n2\,\eta(g) - \eta_2(g)\, t\frac{\partial}{\partial t} \right]\Gamma^{(n)}(p_i; t, g, \Lambda) = 0.$$

Scaling laws above $T_c$. As for the previous RG equations, this equation can be integrated by the method of characteristics. In addition to the functions $g(\lambda)$ and $Z(\lambda)$, one needs a running temperature $t(\lambda)$. Taking the small $\lambda$ limit, one finally obtains

$$\Gamma^{(n)}(p_i; t, g, \Lambda) \underset{|t|\,\ll\, 1,\ |p_i|\,\ll\,\Lambda}{\approx} m^{\,d - n(d-2+\eta)/2}\; F^{(n)}_{+}(p_i/m),$$

with

$$m(t, \Lambda) \propto \Lambda\, t^{\nu}.$$

From this relation we infer that the quantity $m$ is proportional to the physical mass, i.e. the inverse correlation length. It then shows that the divergence of the correlation length $\xi = m^{-1}$ at $T_c$ is characterized by the exponent $\nu$. For $t \ne 0$ the correlation functions are finite at zero momentum and behave as

$$\Gamma^{(n)}(p_i = 0; t, g, \Lambda) \propto t^{\,\nu\left( d - n(d-2+\eta)/2 \right)}.$$

In particular, for $n=2$ we obtain the inverse magnetic susceptibility,

$$\chi^{-1} = \Gamma^{(2)}(p=0; t, g, \Lambda) \propto t^{\,\nu(2-\eta)}.$$

The exponent which characterizes the divergence of $\chi$ is usually called $\gamma$. The relation above thus establishes the relation between exponents

$$\gamma = \nu\,(2 - \eta).$$

(A numerical evaluation of these exponents at leading order is given below.)
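A hedged evaluation (my own sketch) of the one-loop $\varepsilon$-expansion exponents quoted above, together with the scaling relation $\gamma = \nu(2-\eta)$; setting $\varepsilon = 1$ (i.e. $d = 3$) is an illustrative extrapolation, in the spirit of remark (i) above.

```python
# Sketch: leading epsilon-expansion exponents and gamma = nu*(2 - eta).
def exponents(N, eps):
    nu = 0.5 + (N + 2) * eps / (4 * (N + 8))        # O(eps)
    eta = (N + 2) * eps**2 / (2 * (N + 8) ** 2)     # O(eps^2)
    return nu, eta, nu * (2 - eta)                  # nu, eta, gamma

for N in (1, 2, 3):
    nu, eta, gamma = exponents(N, eps=1.0)          # d = 3
    print(f"N={N}:  nu={nu:.3f}  eta={eta:.4f}  gamma={gamma:.3f}")
```

Even this crude truncation lands in the right neighbourhood of the accepted three-dimensional values, which is the empirical reason the $\varepsilon$-expansion is trusted beyond infinitesimal $\varepsilon$.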
Scaling laws in a magnetic field and below $T_c$

In order to pass continuously from the disordered ($T > T_c$) to the ordered ($T < T_c$) phase, avoiding the critical singularities at $T_c$, it is necessary to add to the action an interaction which explicitly breaks its symmetry. One thus adds a small magnetic field coupled to the spins. One then derives RG equations in a field, or at fixed magnetization. In this way correlation functions above and below $T_c$ can be continuously connected, and scaling laws established in the whole critical domain. The first example is provided by the relation between field and magnetization, i.e. the equation of state.

The equation of state. Let us call $M$ the expectation value of $\phi(x)$ in a constant field $H$ (for $N>1$ the quantities $M$ and $H$ should be regarded as the lengths of the corresponding vectors). The thermodynamic potential per unit volume, as a function of $M$, is by definition

$$\Gamma(M; t, g, \Lambda) = \sum_{n} \frac{M^n}{n!}\;\Gamma^{(n)}(p_i = 0; t, g, \Lambda).$$

The magnetic field $H$ is given by

$$H = \frac{\partial\Gamma}{\partial M} = \sum_{n} \frac{M^n}{n!}\;\Gamma^{(n+1)}(p_i = 0; t, g, \Lambda).$$

Noting that $n \leftrightarrow M\,\partial/\partial M$, we immediately derive, from the RG equations in the critical domain, the RG equation satisfied by $H(M, t, g, \Lambda)$:

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g)\frac{\partial}{\partial g} - \frac12\eta(g)\left( 1 + M\frac{\partial}{\partial M} \right) - \eta_2(g)\, t\frac{\partial}{\partial t} \right] H(M, t, g, \Lambda) = 0.$$

To integrate this equation by the method of characteristics we have to introduce, in addition to the functions $g(\lambda)$, $t(\lambda)$ and $Z(\lambda)$, a new function $M(\lambda)$. However, one verifies that $M(\lambda)$ is simply given by $M(\lambda) = M\, Z^{1/2}(\lambda)$. Then, from the arguments outlined in the previous sections, one derives the scaling form

$$H(M, t, g, \Lambda) \propto M^{\delta}\; f\!\left( t\, M^{-1/\beta} \right),$$

with

$$\beta = \tfrac12\,\nu\,(d - 2 + \eta),\qquad \delta = \frac{d + 2 - \eta}{d - 2 + \eta}.$$

This exhibits the scaling properties of the equation of state. Moreover, these relations connect the traditional critical exponents $\beta$ and $\delta$, which characterize respectively the vanishing of the spontaneous magnetization and the singular relation between magnetic field and magnetization at $T_c$, to the exponents $\nu$ and $\eta$ introduced previously. (A consistency check of these exponent relations is given below.)

The universal function $f(x)$ is infinitely differentiable at $x=0$, because when $M$ is different from zero the theory remains massive even at $t=0$. The magnetic field $H$ has a regular expansion in odd powers of $M$ for $t>0$. This implies that when the variable $x$ becomes large and positive, $f(x)$ has the expansion (Griffiths' analyticity)

$$f(x) = \sum_{p=0}^{\infty} a_p\, x^{\,\gamma - 2p\beta}.$$

The appearance of a spontaneous magnetization below $T_c$ implies that the function $f(x)$ has a negative zero $x_0$. The scaling form then leads to the relation

$$M = |x_0|^{-\beta}\,(-t)^{\beta}\qquad \text{for } H = 0,\ t < 0,$$

which gives the behaviour of the spontaneous magnetization when the temperature approaches the critical temperature from below.

Correlation functions in a field. We now examine the behaviour of correlation functions in a field. We write the expressions for Ising-like systems; in the ordered phase some qualitative differences appear between systems with a discrete and with a continuous symmetry, and we illustrate these differences with an example at the end of the section. The correlation functions at fixed magnetization $M$ are obtained by expanding the generating functional $\Gamma(M(x))$ of 1PI correlation functions around $M(x) = M$. From the RG equations satisfied by the correlation functions at zero magnetization it is then easy to derive

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(g)\frac{\partial}{\partial g} - \frac12\eta(g)\left( n + M\frac{\partial}{\partial M} \right) - \eta_2(g)\, t\frac{\partial}{\partial t} \right]\Gamma^{(n)}(p_i; t, M, g, \Lambda) = 0.$$

This equation can be solved by exactly the same method as before. One finds

$$\Gamma^{(n)}(p_i; t, M, g, \Lambda) \approx m^{\,d - n(d-2+\eta)/2}\; F^{(n)}\!\left( p_i/m,\; t\, m^{-1/\nu} \right),$$

for $|p_i| \ll \Lambda$, $|t| \ll 1$, $M \ll \Lambda^{(d-2)/2}$, and with the definition

$$m \propto M^{\nu/\beta}.$$

The r.h.s. now depends on two different mass scales: $m \propto M^{\nu/\beta}$ and $t^{\nu}$.
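A small consistency check (my own sketch, with illustrative numerical inputs): the relations $\beta = \tfrac12\nu(d-2+\eta)$ and $\delta = (d+2-\eta)/(d-2+\eta)$ imply the identity $\beta\delta = \beta + \gamma$ with $\gamma = \nu(2-\eta)$, for any $d$, $\nu$, $\eta$.

```python
# Sketch: verify beta*delta = beta + gamma from the scaling relations above.
def magnetic_exponents(d, nu, eta):
    beta = 0.5 * nu * (d - 2 + eta)
    delta = (d + 2 - eta) / (d - 2 + eta)
    gamma = nu * (2 - eta)
    return beta, delta, gamma

d, nu, eta = 3.0, 0.63, 0.036          # roughly Ising-like values, for illustration
beta, delta, gamma = magnetic_exponents(d, nu, eta)
print(f"beta={beta:.4f} delta={delta:.4f}  "
      f"beta*delta={beta*delta:.4f}  beta+gamma={beta+gamma:.4f}")
```

The identity holds algebraically, not only for these inputs: $\beta\delta = \tfrac12\nu(d+2-\eta) = \beta + \gamma$, which is the standard consistency condition between the magnetic and thermal scaling laws.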
Correlation functions below $T_c$. We have argued above that the correlation functions are regular functions of $t$ for $t$ small, provided $M$ does not vanish. It is therefore possible to cross the critical point and only then take the limit of zero external magnetic field. In this limit $M$ becomes the spontaneous magnetization, which is given, as a function of $t$, by the relation derived above. After elimination of $M$ in favour of $t$, one finds the critical behaviour below $T_c$:

$$\Gamma^{(n)}\big(p_i;\, t,\, M(t, H=0),\, g,\, \Lambda\big) \approx m^{\,d - n(d-2+\eta)/2}\; F^{(n)}_{-}(p_i/m),$$

with

$$m = |x_0|^{-\nu}\,(-t)^{\nu},\qquad H = 0,\ t < 0.$$

We conclude that the correlation functions have exactly the same scaling behaviour above and below $T_c$. The extension of these considerations to the functions with insertions, $\Gamma^{(l,n)}$, is straightforward. In particular, the same method yields the behaviour of the specific heat below $T_c$:

$$\Gamma^{(2,0)}\big(q = 0;\, M(H=0, t)\big) - \Lambda^{-\varepsilon} C(g) \underset{t\to 0^-}{\propto} A_{-}\,(-t)^{-\alpha},$$

which similarly proves that the exponent $\alpha$ is the same above and below $T_c$. Note that the constant term $\Lambda^{-\varepsilon}C(g)$, which depends explicitly on $g$, is the same above and below $T_c$, in contrast with the coefficient of the singular part. The derivation of the equality of the exponents above and below $T_c$ relies on the existence of a path avoiding the critical point, along which the correlation functions are regular and the RG equations everywhere satisfied.

Remark. The universal functions characterizing the behaviour of correlation functions in the critical domain still depend on the normalizations of the physical parameters $t$, $H$, $M$, and of distances or momenta. Quantities which are independent of these normalizations are universal pure numbers. Simple examples are provided by the ratios of the amplitudes of the singularities above and below $T_c$, like $A^{+}/A^{-}$ for the specific heat.

The $O(N)$-symmetric N-vector model. We now indicate a few specific properties of models in which the action has a continuous $O(N)$ symmetry. The differences concern the correlation functions in a field or below $T_c$. The addition of a magnetic field term to an $O(N)$ symmetric action has various effects. First, the magnetization and the magnetic field are now vectors. The RG equations have exactly the same form as in the Ising-like case $N=1$, but the scaling forms derived previously apply to the moduli of these vectors. Second, since the magnetic field or the magnetization distinguish a direction in vector space, there now exist several distinct $n$-point functions, each spin component being either along the magnetization or orthogonal (transverse) to it. When the continuous $O(N)$ symmetry of the action is broken linearly in the dynamical variables (as is the case for a magnetic field), these different correlation functions are related by a set of identities, called WT identities. The simplest one involves the transverse two-point function $\Gamma_T$, at zero momentum, of the components orthogonal to $\mathbf M$, i.e. the inverse transverse susceptibility:

$$\chi_T^{-1} = \Gamma_T(p=0) = H/M.$$

It follows that if $H$ goes to zero below $T_c$, $H/M$, and therefore $\Gamma_T$ at zero momentum, vanish. The latter property implies the existence of $N-1$ massless Goldstone modes corresponding to the spontaneous breaking of the $O(N)$ symmetry. Note finally that the inverse longitudinal two-point function $\Gamma_L(p)$ has IR singularities at zero momentum, in zero field below $T_c$, generated by the Goldstone modes.
This is characteristic of continuous symmetries, and will play an essential role in the next section.

Four dimensions: logarithmic corrections and triviality

Let us just briefly comment on the situation in four dimensions. If we solve the RG flow equation

$$\lambda\frac{d}{d\lambda}\, g(\lambda) = \beta\big(g(\lambda)\big)$$

for the running coupling constant, assuming that $\beta(g)$ remains positive for all $g>0$ (no non-trivial fixed point), we find that $g(\lambda)$ goes to zero logarithmically: the operator $(\phi^2)^2$ is marginally irrelevant. Writing generally

$$\beta(g) = \beta_2\, g^2 + \beta_3\, g^3 + O(g^4),\qquad \beta_2 > 0,$$

we find for $\lambda\to 0$:

$$\ln\lambda = -\frac{1}{\beta_2\, g(\lambda)} - \frac{\beta_3}{\beta_2^2}\,\ln g(\lambda) + K(g) + O\big(g(\lambda)\big),$$

with

$$K(g) = \frac{1}{\beta_2\, g} + \frac{\beta_3}{\beta_2^2}\,\ln g - \int_0^{g} dg' \left[ \frac{1}{\beta(g')} - \frac{1}{\beta_2\, g'^2} + \frac{\beta_3}{\beta_2^2\, g'} \right].$$

Since the running coupling constant goes to zero in the long distance limit, quantities can be calculated from perturbation theory. From the point of view of critical phenomena, logarithmic corrections to mean field theory are generated. Let us also note that empirical evidence coming from lattice calculations strongly suggests the absence of any other fixed point. From the point of view of particle physics one faces the triviality problem: for any initial bare coupling constant $g$, the renormalized coupling $g(\mu/\Lambda)$ at a scale $\mu$ much smaller than the cut-off behaves like

$$g(\mu/\Lambda) \sim \frac{1}{\beta_2\,\ln(\Lambda/\mu)}.$$

Therefore, if one insists on sending the cut-off to infinity, one finds a free (trivial) field theory. However, in the modern point of view of effective field theories, one accepts the idea that quantum field theories may not be consistent on all scales, but only in a limited range; the larger the range, the smaller the low energy effective coupling constant. In the standard model these comments may apply to the weak-electromagnetic sector, which contains a $(\phi^2)^2$ interaction and trivial QED.

Bibliographical Notes

The modern formulation of the RG ideas is due to K.G. Wilson, Phys. Rev. B4 (1971) 3174, 3184, and is presented in an expanded form in K.G. Wilson and J. Kogut, Phys. Rep. 12C (1974) 75. The idea of RG transformations was earlier proposed, in a simplified form, in L.P. Kadanoff, Physics 2 (1966) 263. The systematic classification of operators can be found in F.J. Wegner, Phys. Rev. B5 (1972) 4529; B6 (1972) 1891, and in Wegner's contribution to Phase Transitions and Critical Phenomena, vol. 6, C. Domb and M.S. Green eds. (Academic Press, London 1976). The idea of the $\varepsilon$-expansion is due to K.G. Wilson and M.E. Fisher, Phys. Rev. Lett. 28 (1972) 240. After Wilson's original articles, several authors realized that the RG equations derived in renormalized quantum field theory could be applied to critical phenomena: C. Di Castro, Lett. Nuovo Cimento 5 (1972) 69; G. Mack, Kaiserslautern 1972, Lecture Notes in Physics, W. Rühl and A. Vancura eds. (Springer-Verlag, Berlin); E. Brézin, J.C. Le Guillou and J. Zinn-Justin, Phys. Rev. D8 (1973) 434, 2418; P.K. Mitter, Phys. Rev. D7 (1973) 2927; G. Parisi, Cargèse Lectures 1973, published in J. Stat. Phys. 23 (1980) 49; B. Schroer, Phys. Rev. B8 (1973) 4200; C. Di Castro, G. Jona-Lasinio and L. Peliti, Ann. Phys. (NY) 87 (1974) 327; F. Jegerlehner and B. Schroer, Acta Phys. Austr. Suppl. XI (Springer-Verlag, Berlin). The RG equations for the bare theory were first derived in J. Zinn-Justin, Cargèse Lectures 1973, unpublished, and were incorporated in the review by E. Brézin, J.C. Le Guillou and J. Zinn-Justin in Phase Transitions and Critical Phenomena, vol. 6, C. Domb and M.S. Green eds. (Academic Press, London 1976).
Recent work in this area is mainly devoted to more accurate calculations of universal quantities, using both the $\varepsilon$-expansion and the perturbative expansion of the massive $(\phi^2)^2$ theory in three dimensions. See for example R. Guida and J. Zinn-Justin, Nucl. Phys. B489 (1997) 626, hep-th/9610223, and Critical Exponents of the N-vector Model, J. Phys. A31 (1998) 8103, cond-mat/9803240; S.A. Larin, M. Mönnigmann, M. Strösser and V. Dohm, Phys. Rev. B; M. Caselle and M. Hasenbusch, J. Phys. A and Nucl. Phys. Proc. Suppl.; J. Engels and T. Scheideler, Bielefeld U. preprint; A.I. Sokolov, E.V. Orlov, V.A. Ul'kov and S.S. Kashtanov, Universal Effective Action for the O(n)-symmetric $\phi^4$ Model from Renormalization Group; B.N. Shalaev, S.A. Antonenko and A.I. Sokolov, Phys. Lett. A; G. Münster and J. Heitger, Nucl. Phys. B; C. Gutsfeld, J. Küster and G. Münster, Nucl. Phys. B; P. Butera and M. Comi, Phys. Rev. B; A.K. Rajantie, Nucl. Phys. B; A. Pelissetto and E. Vicari, Nucl. Phys. B; S. Caracciolo, M.S. Causo and A. Pelissetto, Nucl. Phys. Proc. Suppl. and Phys. Rev. E; H. Kleinert, S. Thoms and V. Schulte-Frohlinde; H. Kleinert and V. Schulte-Frohlinde, Phys. Lett. B; J. Rudnick, W. Lay and D. Jasnow, preprint.

Finally, let us mention that exact bare RG equations have been proven by using the method of integration over short distance modes. The problem has a long history. Earlier work includes F.J. Wegner and A. Houghton, Phys. Rev. A8 (1973) 401, and J.F. Nicoll et al., Phys. Rev. Lett., but more recent developments have been induced by the exact continuum RG equations derived by J. Polchinski, Nucl. Phys. B231 (1984) 269. For more recent applications of the method see for example the review T.R. Morris, Elements of the Continuous Renormalization Group, hep-th/9802039, and references therein. A few additional references are J.F. Nicoll and T.S. Chang, Phys. Lett. A; A. Hasenfratz and P. Hasenfratz, Nucl. Phys. B270 (1986) 687; C. Wetterich, Phys. Lett. B301 (1993) 90; M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B; S. Seide and C. Wetterich, Heidelberg U. preprint.

The O(N) Spin Model at Low Temperature: the Non-Linear Sigma-Model

Let us again consider the lattice model of the first section, with partition function

$$Z = \int \prod_i dS_i\;\delta(S_i^2 - 1)\;\exp\Big[ \sum_{ij} V_{ij}\, S_i\cdot S_j \big/ T \Big].$$

We will now discuss this model from the point of view of a low temperature expansion. The methods we employ, however, apply only to continuous symmetries, here to $N \ge 2$. They rely on the property that models with continuous symmetries, in contrast to models with discrete symmetries, have a non-trivial long distance physics at any temperature below $T_c$, due to the massless Goldstone modes. We first prove universal properties of the low temperature, ordered, phase at fixed temperature. Then, in the non-abelian case $N>2$, we show that additional information about the critical properties can be obtained by analyzing the instability of the ordered phase, at low temperature and near two dimensions, due to the Goldstone mode interactions.
The analysis is based on the following observation: the N-vector model can be considered as a lattice regularization of the non-linear $\sigma$-model (note that $S_i\cdot S_j = 1 - \tfrac12(S_i - S_j)^2$). The low temperature expansion of the lattice model is the perturbative expansion of the regularized field theory. The field theory is renormalizable in dimension two; RG equations, valid in two and, more generally, in $2+\varepsilon$ dimensions, follow. Their solutions will help us to understand the long distance behaviour of correlation functions. It is somewhat surprising that two different continuum field theories, the $(\phi^2)^2$ theory and the non-linear $\sigma$-model, describe the long distance physics of the same lattice model. This point will be clarified by an analysis of the $1/N$-expansion of both field theories. This property, totally mysterious at the classical level, emphasizes the essential nature of quantum (or statistical) fluctuations.

The non-linear sigma-model

We now study the non-linear $\sigma$-model from the point of view of renormalization theory and the renormalization group. In continuum notation, the field $\mathbf S(x)$ has unit length and the action is

$$\mathcal S(\mathbf S) = \frac{1}{2t}\int d^dx\;\partial_\mu\mathbf S(x)\cdot\partial_\mu\mathbf S(x),$$

where $t$ is proportional to the temperature $T$. To generate perturbation theory we parametrize the field $\mathbf S(x)$,

$$\mathbf S(x) = \big( \sigma(x),\, \boldsymbol\pi(x) \big),$$

and eliminate locally the field $\sigma(x)$ by

$$\sigma(x) = \big( 1 - \pi^2(x) \big)^{1/2}.$$

This parametrization is singular, but the singularity does not show up in perturbation theory, which assumes $\boldsymbol\pi(x)$ small.

The $O(N)$ symmetry. The $O(N-1)$ subgroup which leaves the component $\sigma$ invariant acts linearly on the $(N-1)$-component vector $\boldsymbol\pi$. However, a general $O(N)$ transformation will transform $\boldsymbol\pi$ into a linear combination of $\boldsymbol\pi$ and $\sqrt{1-\pi^2}$: the $O(N)$ symmetry is realized non-linearly. An infinitesimal transformation corresponding to the generators of $O(N)$ not belonging to $O(N-1)$ takes the form

$$\delta\boldsymbol\pi = \boldsymbol\omega\,\sqrt{1 - \pi^2},$$

where $\boldsymbol\omega$ is an $(N-1)$-component vector of parameters corresponding to these generators.

As we have done for the $(\phi^2)^2$ model, we rescale all distances in order to measure momenta in units of the inverse lattice spacing $\Lambda$. We thus write the partition function

$$Z = \int \prod_x \frac{d\boldsymbol\pi(x)}{\big(1-\pi^2(x)\big)^{1/2}}\;\exp\big[ -\mathcal S(\boldsymbol\pi) \big],$$

with

$$\mathcal S(\boldsymbol\pi) = \frac{\Lambda^{d-2}}{2t}\int d^dx\; g_{ij}(\boldsymbol\pi)\,\partial_\mu\pi^i(x)\,\partial_\mu\pi^j(x),$$

where $g_{ij}$ is the metric on the sphere,

$$g_{ij}(\boldsymbol\pi) = \delta_{ij} + \frac{\pi_i\,\pi_j}{1-\pi^2}.$$

Moreover, as expected, the functional measure is related to the metric by

$$\sqrt{\det g_{ij}(\boldsymbol\pi)} = \frac{1}{\sqrt{1-\pi^2}}$$

(a numerical check of this identity is given below).

Propagator, perturbation theory and power counting. Unlike the $(\phi^2)^2$ field theory, the action is non-polynomial in the fields: an expansion of the action in powers of $\boldsymbol\pi$ generates an infinite number of interactions. However, we note that the power of $t$ in front of a diagram counts the number of loops. Therefore, at a finite loop order only a finite number of interactions contribute. The $\pi$ propagator is proportional to

$$\Delta_{ij}(k) = \frac{t\,\Lambda^{2-d}}{k^2}\,\delta_{ij}.$$

The $\pi$ field thus has the usual canonical dimension $(d-2)/2$. Since we have interactions with arbitrary powers of $\boldsymbol\pi$, the model is renormalizable in two dimensions, where all interactions have dimension two.

The role of the functional measure. If we try to write the functional measure as an additional interaction we find

$$\prod_x \frac{1}{\sqrt{1-\pi^2(x)}} = \exp\left[ -\frac12 \sum_x \ln\big( 1 - \pi^2(x) \big) \right].$$

This quantity is well defined on the lattice, but not in the continuum. This problem, which already appears in quantum mechanics ($d=1$), reflects the necessity of a lattice regularization to define precisely the quantum hamiltonian in the presence of interactions with derivatives. A perturbative solution is provided by dimensional regularization, in which this term can simply be omitted; in lattice regularization it cancels quadratic divergences.
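A quick numerical check (my own, not in the source) of the measure identity just quoted: the sphere metric $g_{ij} = \delta_{ij} + \pi_i\pi_j/(1-\pi^2)$ has $\det g = 1/(1-\pi^2)$, the square of the functional-measure factor. The choice of five components and the random sampling are illustrative.

```python
# Sketch: verify det(g) = 1/(1 - pi^2) for the metric on the sphere.
import numpy as np

rng = np.random.default_rng(0)
pi = rng.uniform(-0.4, 0.4, size=5)            # N-1 = 5 components, pi^2 < 1
pi2 = float(pi @ pi)
g = np.eye(5) + np.outer(pi, pi) / (1.0 - pi2)
print(np.linalg.det(g), 1.0 / (1.0 - pi2))     # the two numbers coincide
```

The identity follows from the matrix determinant lemma, $\det(\mathbb 1 + v v^{\mathsf T}/(1-v^2)) = 1 + v^2/(1-v^2)$, so the numerical agreement is exact up to rounding.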
IR divergences, spontaneous symmetry breaking and the role of dimension two. We see that the perturbative phase of the non-linear $\sigma$-model is automatically a phase in which the $O(N)$ symmetry is spontaneously broken, the $N-1$ components $\boldsymbol\pi(x)$ of $\mathbf S(x)$ being massless Goldstone modes.

(i) For $d\le 2$ we know from the Mermin-Wagner theorem that SSB with ordering ($\langle\mathbf S\rangle \ne 0$) is impossible in a model with a continuous symmetry and short range forces. Correspondingly, IR divergences appear in the perturbative expansion of the non-linear $\sigma$-model for $d\le 2$; for example, $\langle\pi^2\rangle$ diverges at order $t$ as $\int d^dp/p^2$. For $d\le 2$ the critical temperature $T_c$ vanishes, and perturbation theory makes sense only in the presence of an IR cut-off which breaks the symmetry explicitly and orders the spins (thus selecting a classical minimum of the action). Therefore nothing can be said about the long distance properties of the unbroken theory directly from perturbation theory.

(ii) For $d>2$, instead, perturbation theory, which predicts spontaneous symmetry breaking (SSB), is not IR divergent. This is consistent with the property that in the N-vector model, for $d>2$, the $O(N)$ symmetry is spontaneously broken at low temperature. At fixed $T < T_c$, the large distance behaviour of the theory is dominated by the massless, or spin wave, excitations. On the other hand, nothing can be said in perturbation theory about a possible critical region $T \sim T_c$.

To go somewhat beyond perturbation theory we shall use field theory RG methods. It is therefore necessary first to define the model in two dimensions, where it is renormalizable, and there the IR divergences have to be dealt with. We thus introduce an IR cut-off in the form of a magnetic field in the $\sigma$ direction (a constant source for the $\sigma$ field):

$$\mathcal S(\boldsymbol\pi, h) = \frac{\Lambda^{d-2}}{t}\int d^dx \left\{ \frac12\left[ \big(\partial_\mu\boldsymbol\pi(x)\big)^2 + \frac{\big(\boldsymbol\pi\cdot\partial_\mu\boldsymbol\pi(x)\big)^2}{1-\pi^2(x)} \right] - h\,\sqrt{1-\pi^2(x)} \right\}.$$

Expanding the additional term in powers of $\boldsymbol\pi$, we see that it generates a mass term,

$$\Delta_{ij}(k) = \frac{t\,\Lambda^{2-d}}{k^2 + h}\,\delta_{ij},$$

as well as additional interactions, which have dimension zero in $d=2$. We then proceed in formal analogy with the case of the $(\phi^2)^2$ field theory, i.e. we study the theory in $2+\varepsilon$ dimensions as a double series expansion in the temperature $t$ and in $\varepsilon$. In this way the perturbative expansion is renormalizable and RG equations follow.

RG equations

Using power counting and some non-trivial WT identities (quadratic in the 1PI functional), one can show that the renormalized action takes the form

$$\mathcal S_r(\boldsymbol\pi_r, h_r) = \frac{\mu^{d-2}\, Z}{2\, t_r\, Z_t}\int d^dx\left[ (\partial_\mu\boldsymbol\pi_r)^2 + (\partial_\mu\sigma_r)^2 \right] - \frac{\mu^{d-2}}{t_r}\, h_r \int \sigma_r(x)\, d^dx,$$

in which $\mu$ is the renormalization scale and

$$\sigma_r(x) = \big( Z^{-1} - \pi_r^2(x) \big)^{1/2}.$$

Note that the renormalization constants can, and thus will, be chosen $h$-independent; this is automatically realized in the minimal subtraction scheme. The relation

$$\boldsymbol\pi_r(x) = Z^{-1/2}\,\boldsymbol\pi(x)$$

then implies

$$\frac{\mu^{d-2}\, h_r}{t_r} = Z^{1/2}\,\frac{\Lambda^{d-2}\, h}{t}.$$

With our conventions the coupling constant, which is proportional to the temperature, is dimensionless.
The relation between the cut-off dependent and the renormalized correlation functions is

$$Z^{n/2}(\Lambda/\mu, t)\,\Gamma^{(n)}(p_i; t, h, \Lambda) = \Gamma^{(n)}_r(p_i; t_r, h_r, \mu).$$

Differentiating with respect to $\Lambda$ at renormalized parameters fixed, we obtain the RG equations

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(t)\frac{\partial}{\partial t} - \frac n2\,\zeta(t) + \rho(t)\, h\frac{\partial}{\partial h} \right]\Gamma^{(n)}(p_i; t, h, \Lambda) = 0,$$

where the RG functions are defined by

$$\beta(t) = \Lambda\frac{\partial t}{\partial\Lambda}\bigg|_{\mathrm{ren.\ fixed}},\qquad
\zeta(t) = -\Lambda\frac{\partial}{\partial\Lambda}\bigg|_{\mathrm{ren.\ fixed}} \ln Z,\qquad
\rho(t) = \Lambda\frac{\partial}{\partial\Lambda}\bigg|_{\mathrm{ren.\ fixed}} \ln h.$$

The coefficient $\rho(t)$ of $h\,\partial/\partial h$ can be derived from the relation between $h$ and $h_r$: taking the logarithm of both members and differentiating, one finds

$$\rho(t) = 2 - d + \frac{\zeta(t)}{2} + \frac{\beta(t)}{t}.$$

To be able to discuss correlation functions involving the $\sigma$ field, we also need the RG equations satisfied by the connected correlation functions $W^{(n)}$:

$$\left[ \Lambda\frac{\partial}{\partial\Lambda} + \beta(t)\frac{\partial}{\partial t} + \frac n2\,\zeta(t) + \left( 2 - d + \frac{\zeta(t)}{2} + \frac{\beta(t)}{t} \right) h\frac{\partial}{\partial h} \right] W^{(n)} = 0,$$

in which we have now set

$$d = 2 + \varepsilon.$$

The two RG functions can be obtained at one-loop order from a calculation of the two-point function $\Gamma^{(2)}$:

$$\Gamma^{(2)}(p) = \frac{\Lambda^{\varepsilon}}{t}\,(p^2 + h) + \left[ p^2 + \frac{N-1}{2}\, h \right]\frac{1}{(2\pi)^d}\int \frac{d^dq}{q^2+h} + O(t).$$

Applying the RG equation to $\Gamma^{(2)}$ and identifying the coefficients of $p^2$ and of $h$, we derive two equations which determine $\beta(t)$ and $\zeta(t)$ at one-loop order:

$$\beta(t) = \varepsilon t - \frac{N-2}{2\pi}\, t^2 + O\!\left( t^3,\, t^2\varepsilon \right),\qquad
\zeta(t) = \frac{N-1}{2\pi}\, t + O\!\left( t^2,\, t\varepsilon \right).$$

Discussion of the RG flow

From the expression of $\beta(t)$ we immediately conclude:

For $d\le 2$ ($\varepsilon\le 0$), $t=0$ is an unstable IR fixed point, the IR instability being induced by the vanishing mass of the would-be Goldstone bosons. The spectrum of the theory thus is not given by perturbation theory, and the perturbative assumption of spontaneous symmetry breaking at low temperature is inconsistent. As mentioned before, this result agrees with rigorous arguments. Note that, since the model depends on only one coupling constant, $t=0$ is also a UV stable fixed point (the property of large momentum asymptotic freedom). A later subsection contains a short discussion of the physics in two dimensions for $N>2$; the abelian case $N=2$ is special and has to be discussed separately.

For $d>2$, i.e. $\varepsilon>0$, $t=0$ is a stable IR fixed point: the $O(N)$ symmetry is spontaneously broken at low temperature in zero field. The effective coupling constant, which determines the large distance behaviour, approaches the origin for all temperatures $t<t_c$, $t_c$ being the first non-trivial zero of $\beta(t)$. Therefore the large distance properties of the model can be obtained from the low temperature expansion and the renormalization group, replacing the perturbative parameters by effective parameters obtained by solving the RG equations.

The critical temperature. Finally, we observe that, at least for $\varepsilon$ positive and small, and for $N>2$, the RG function $\beta(t)$ has a non-trivial zero $t_c$:

$$t_c = \frac{2\pi\varepsilon}{N-2} + O(\varepsilon^2) \;\Rightarrow\; \beta(t_c) = 0,\qquad \text{and}\quad \beta'(t_c) = -\varepsilon + O(\varepsilon^2).$$

Since $t_c$ is an unstable IR fixed point, it is by definition a critical temperature. The consequences of this property are studied below (see also the numerical sketch which follows). Let us only note immediately that $t_c$ is also a UV fixed point, i.e. it governs the large momentum behaviour of the renormalized theory. The large momentum behaviour of correlation functions is not given by perturbation theory but by the fixed point. As a consequence, the perturbative result that the theory cannot be rendered finite for $d>2$ with a finite number of renormalization constants cannot be trusted.
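A hedged illustration (my own, not from the text) of the one-loop flow just described: integrating $\lambda\, dt/d\lambda = \beta(t)$ toward the IR with $\beta(t) = \varepsilon t - (N-2)t^2/(2\pi)$, a temperature below $t_c$ flows to the ordered fixed point $t=0$, while above $t_c$ it runs away. The values $\varepsilon = 1$, $N = 3$ and the runaway threshold are illustrative choices.

```python
# Sketch: IR flow of the sigma-model temperature on either side of t_c.
import math

eps, N = 1.0, 3                                  # d = 3, Heisenberg-like
beta = lambda t: eps * t - (N - 2) * t**2 / (2 * math.pi)
t_c = 2 * math.pi * eps / (N - 2)

for t0 in (0.9 * t_c, 1.1 * t_c):
    t, dln = t0, -1e-3                           # dln < 0: flow toward the IR
    for _ in range(10000):
        t += beta(t) * dln
        if t > 100.0:                            # runaway above t_c
            break
    print(f"t0 = {t0/t_c:.1f} t_c  ->  t_IR = {t:.4f}   (t_c = {t_c:.4f})")
```

Below $t_c$ the printed IR temperature is driven toward zero; above $t_c$ it grows past any bound, consistent with $t_c$ being an IR-unstable (and UV-attractive) fixed point.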
We now discuss more precisely the solutions of the RG equations.

Integration of the RG equations

We first examine the implications of the RG equations for the large distance behaviour of correlation functions for $d>2$, where $t=0$ is an IR fixed point. The RG equation can be solved as usual by the method of characteristics, i.e. by introducing a scaling parameter $\lambda$ and running parameters. It is here convenient to proceed somewhat differently, by looking for a solution of the form

$$\Gamma^{(n)}(p_i; t, h, \Lambda) = \xi^{-d}(t)\, M_0^n(t)\; F^{(n)}\big( p_i\,\xi(t);\; h/h_0(t) \big).$$

This ansatz solves the RG equations provided the unknown functions $M_0(t)$, $\xi(t)$ and $h_0(t)$ are chosen to be

$$M_0(t) = \exp\left[ -\frac12\int_0^t \frac{\zeta(t')}{\beta(t')}\, dt' \right],\qquad
\xi(t) = \Lambda^{-1}\, t^{1/\varepsilon}\exp\left[ \int_0^t \left( \frac{1}{\beta(t')} - \frac{1}{\varepsilon t'} \right) dt' \right],$$

with then

$$h_0(t) = t\; M_0^{-1}(t)\;\xi^{-d}(t)\;\Lambda^{2-d}.$$

Note that the function $\xi(t)$ has, in zero field, the nature of a correlation length. For the connected correlation functions the same analysis leads to

$$W^{(n)}(p_i; t, h, \Lambda) = \xi^{\,d(n-1)}(t)\, M_0^n(t)\; G^{(n)}\big( p_i\,\xi(t);\; h/h_0(t) \big).$$

It is convenient to also introduce the function $K(t)$,

$$K(t) = M_0(t)\,\big[\Lambda\,\xi(t)\big]^{d-2}\big/\, t = 1 + O(t).$$

Combining the scaling ansatz with dimensional analysis, we can rewrite the scaling relation in an equivalent form in which momenta are measured in units of the spin-wave mass $[K(t)\,h]^{1/2}$, i.e. in terms of the variables $p_i\,[K(t)h]^{-1/2}$ and $h/h_0(t)$. Let us apply this result to the determination of the singularities near the coexistence curve, i.e. at fixed $t$ below the critical temperature, when the magnetic field $h$ goes to zero.

The coexistence curve. The magnetization is given by

$$M(t, h, \Lambda) \equiv \langle\sigma(x)\rangle = \frac{t}{\Lambda^{\varepsilon}}\,\frac{\partial W^{(0)}}{\partial h}$$

($W^{(0)}$ is the magnetic field dependent free energy). At one-loop order in a field one finds

$$M = 1 - \frac{N-1}{2}\,\frac{t}{\Lambda^{\varepsilon}}\,\frac{1}{(2\pi)^d}\int \frac{d^dq}{q^2 + h} + O(t^2).$$

Thus, from the scaling relation, it follows that

$$M(t, h, \Lambda = 1) = M_0(t)\left\{ 1 - \frac{N-1}{2}\,\frac{\Gamma(1-d/2)}{(4\pi)^{d/2}}\; t\,\big[ K(t)\, h \big]^{(d-2)/2} + O\!\big( h,\, h^{d-2} \big) \right\}.$$

This result shows that $M_0(t)$ is the spontaneous magnetization, and it displays the singularity of the scaling equation of state on the coexistence curve ($H=0$, $T<T_c$) for $N>1, in all dimensions $d>2$.

The equation of state in the critical domain. Let us now instead use the scaling form of $W^{(0)}$:

$$M = \frac{t}{\Lambda^{\varepsilon}}\,\frac{\partial W^{(0)}}{\partial h} = M_0(t)\; F^{(0)}\big( h/h_0(t) \big).$$

Inversion of this relation yields the scaling form of the equation of state,

$$h = h_0(t)\; f\big( M/M_0(t) \big),$$

and the 1PI correlation functions can thus be written in terms of the magnetization as

$$\Gamma^{(n)}(p_i; t, M, \Lambda) = \xi^{-d}(t)\, M_0^n(t)\; F^{(n)}\big( p_i\,\xi(t);\; M/M_0(t) \big).$$

These scaling forms are consistent with the previous ones: the appearance of the two different functions $\xi(t)$ and $M_0(t)$ corresponds to the existence of two independent critical exponents (e.g. $\nu$ and $\beta$) in the $(\phi^2)^2$ field theory. They extend, in the large distance limit, the scaling form of correlation functions, valid in the critical region, to all temperatures below $t_c$. There is, however, one important difference between the RG equations of the $(\phi^2)^2$ theory and of the $\sigma$-model: the $(\phi^2)^2$ theory depends on two coupling constants, the coefficient of $\phi^2$, which plays the role of the temperature, and the coefficient of $(\phi^2)^2$, which has no equivalent here. The correlation functions of the continuum $(\phi^2)^2$ theory have the exact scaling form only at the IR fixed point. In contrast, in the case of the $\sigma$-model, it has been possible to eliminate all corrections to scaling corresponding to irrelevant operators, order by order in perturbation theory.
We are therefore led to a remarkable conclusion: the correlation functions of the $O(N)$ non-linear $\sigma$-model are identical to the correlation functions of the $(\phi^2)^2$ field theory at the IR fixed point. This conclusion is supported by the analysis of the scaling behaviour performed within the $1/N$ expansion (see the next section).

Critical exponents. Let us now study more precisely what happens when $t$ approaches $t_c$ (for $N>2$). The function $\xi(t)$ diverges as

$$\xi(t) \propto (t_c - t)^{\,1/\beta'(t_c)}.$$

We conclude that the correlation length exponent $\nu$ is given by

$$\nu = -\frac{1}{\beta'(t_c)}.$$

For $d$ close to two, the exponent $\nu$ thus behaves like

$$\nu \sim 1/\varepsilon.$$

The function $M_0(t)$ vanishes at $t_c$:

$$\ln M_0(t) = -\frac{\zeta(t_c)}{2\,\beta'(t_c)}\,\ln(t_c - t) + \mathrm{const.}$$

This yields the exponent $\beta = \tfrac12\,\nu\,\zeta(t_c)$, and thus also $\eta$ through the scaling relation $\beta = \tfrac12\nu(d-2+\eta)$:

$$\eta = \zeta(t_c) - \varepsilon.$$

At leading order we find

$$\eta = \frac{\varepsilon}{N-2} + O(\varepsilon^2).$$

(These exponent formulas are evaluated numerically below.) We finally note that the singularity of $\Gamma^{(n)}$ coming from the prefactor $\xi^{-d}M_0^n$ indeed agrees near $t_c$ with the earlier result. Consideration of operators with four derivatives also allows one to calculate the exponent $\omega$, which characterizes the leading corrections to scaling; one finds $\omega = 4 - d - \varepsilon/(N-2) + O(\varepsilon^2)$.

The nature of the correlation length $\xi(t)$. The length scale $\xi(t)$ is a crossover scale between two different behaviours of the correlation functions. For distances large compared with $\xi(t)$, the behaviour of correlation functions is governed by the Goldstone modes (spin wave excitations) and can thus be deduced from the perturbative low temperature expansion. However, when $t$ approaches $t_c$, $\xi(t)$ becomes large. There then exist distances which are large with respect to the microscopic scale but small with respect to $\xi(t)$, at which the correlation functions have a critical behaviour. In this situation we can construct continuum correlation functions consistent on all scales, the critical behaviour being also the large momentum behaviour of the renormalized field theory.

General comment. From the consideration of the low temperature expansion we have been able to describe, for theories with a continuous symmetry, not only the complete structure of the low temperature phase (and this was expected), but also, in the non-abelian case, the critical behaviour near two dimensions. This result is somewhat surprising: indeed, perturbation theory is only sensitive to the local structure of the sphere $S_{N-1}$, while the restoration of the symmetry involves the sphere globally. This explains the peculiarity of the abelian case $N=2$: locally, a circle cannot be distinguished from a non-compact straight line. For $N>2$ the sphere has instead a local characteristic curvature. Still, different regular compact manifolds may have the same local metric, and therefore the same perturbation theory; they all have the same low temperature physics. However, the previous results concerning the critical behaviour are physically relevant only if they remain valid when $\varepsilon$ is not infinitesimal and $t$ approaches $t_c$, a condition which cannot be checked directly. In particular, the low temperature expansion misses, in general, terms decreasing like $\exp(-\mathrm{const.}/t)$, which may in some cases be essential for the physics. Therefore, in the last section we establish a direct relation between the $O(N)$ $\sigma$-model and the $(\phi^2)^2$ field theory, to all orders in the large $N$ expansion. This gives us some confidence that the previous considerations are valid for the N-vector model, at least for $N$ sufficiently large.
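A sketch (mine, with illustrative inputs) evaluating the exponent formulas just derived from the one-loop RG functions $\beta(t) = \varepsilon t - (N-2)t^2/(2\pi)$ and $\zeta(t) = (N-1)t/(2\pi)$, via $\nu = -1/\beta'(t_c)$ and $\eta = \zeta(t_c) - \varepsilon$:

```python
# Sketch: one-loop sigma-model exponents nu and eta near two dimensions.
import math

def sigma_exponents(N, eps):
    t_c = 2 * math.pi * eps / (N - 2)
    beta_prime = eps - (N - 2) * t_c / math.pi     # equals -eps at one loop
    zeta_tc = (N - 1) * t_c / (2 * math.pi)
    return -1.0 / beta_prime, zeta_tc - eps        # nu, eta

for N in (3, 4, 10):
    nu, eta = sigma_exponents(N, eps=1.0)          # d = 3, an extrapolation
    print(f"N={N:2d}:  nu = {nu:.3f} (= 1/eps)   eta = {eta:.3f} (= eps/(N-2))")
```

At this order $\nu = 1/\varepsilon$ for all $N$ and $\eta = \varepsilon/(N-2)$, so the printed values simply make the $N$-dependence of $\eta$ explicit; setting $\varepsilon = 1$ is precisely the kind of extrapolation whose validity is discussed in the general comment above.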
On the other hand, the physics of $N=2$ is not well reproduced. Cardy and Hamber have speculated about the RG flow for $N$ close to $2$ and dimension $d$ close to $2$, incorporating phenomenologically the Kosterlitz-Thouless transition in their analysis.

The dimension two

Dimension two is of special interest from the particle physics point of view. The RG function $\beta(t)$ is then

$$\beta(t) = -\frac{N-2}{2\pi}\, t^2 + O(t^3).$$

The non-linear $\sigma$-model for $N>2$ is the simplest example of a so-called asymptotically free (UV free) field theory, since the first coefficient of the $\beta$-function is negative, in contrast with the $(\phi^2)^2$ field theory. Therefore the large momentum behaviour of correlation functions is entirely calculable from perturbation theory and RG arguments. There is, however, a counterpart: the theory is IR unstable, and thus, in zero field $h$, the spectrum of the theory is not perturbative. Contrary to perturbative indications, it consists of $N$ massive degenerate states, since the $O(N)$ symmetry is not broken. Asymptotic freedom and the non-perturbative character of the spectrum are also properties of QCD, the theory of strong interactions in four dimensions.

If we now define a function $\xi(t)$ by

$$\xi(t) = \Lambda^{-1}\exp\left[ \int^t \frac{dt'}{\beta(t')} \right],$$

we can again integrate the RG equations, and we find that $\xi(t)$ is the correlation length in zero field. In addition, we can use the explicit expression of the $\beta$-function to calculate the correlation length, or the physical mass, for small $t$:

$$\xi^{-1}(t) = m(t) = K\,\Lambda\; t^{-1/(N-2)}\, e^{-2\pi/[(N-2)\,t]}\,\big( 1 + O(t) \big).$$

However, the exact value of the integration constant $K$, which gives the physical mass in the RG scale, can only be calculated by non-perturbative techniques. Finally, the scaling forms derived above imply that the perturbative expansion at fixed magnetic field is valid, at low momenta or large distances, for $h/h_0(t)$ large.

Bibliographical Notes

Early references on the $\sigma$-model include M. Gell-Mann and M. Lévy, Nuovo Cimento 16 (1960) 705; S. Weinberg, Phys. Rev. 166 (1968) 1568; J. Schwinger, Phys. Rev.; W.A. Bardeen and B.W. Lee, in Nuclear and Particle Physics, Montreal, B. Margolis and C.S. Lam eds. (Benjamin, New York); K. Meetz, J. Math. Phys. 10 (1969) 589. The quantization was discussed in I.S. Gerstein, R. Jackiw, B.W. Lee and S. Weinberg, Phys. Rev. D3 (1971) 2486; J. Honerkamp and K. Meetz, Phys. Rev. D3 (1971) 1996; J. Honerkamp, Nucl. Phys. B36 (1972) 130; A.A. Slavnov and L.D. Faddeev, Theor. Math. Phys. Pauli-Villars regularization for chiral models was proposed by A.A. Slavnov, Nucl. Phys. B. The model was studied as a limit of the linear $\sigma$-model in D. Bessis and J. Zinn-Justin, Phys. Rev. D5 (1972) 1313. The renormalization group properties were discussed in A.M. Polyakov, Phys. Lett. 59B (1975) 79; E. Brézin and J. Zinn-Justin, Phys. Rev. Lett. 36 (1976) 691, and Phys. Rev. B14 (1976) 3110; W.A. Bardeen, B.W. Lee and R.E. Shrock, Phys. Rev. D14 (1976) 985. The renormalizability in two dimensions was proven in E. Brézin, J.C. Le Guillou and J. Zinn-Justin, Phys. Rev. D14 (1976) 2615, and Phys. Rev. B14 (1976) 4976. Higher order calculations of critical exponents are due to S. Hikami, Nucl. Phys. B [FS]; W. Bernreuther and F.J. Wegner, Phys. Rev. Lett. 57 (1986) 1383; F. Wegner, Nucl. Phys. B316 (1989) 663. Composite operators are discussed in F.J. Wegner, Z. Phys. B; G.E. Castilla and S. Chakravarty, Phys. Rev.
Lett. Finally, the 2D non-linear $\sigma$-model has recently been the subject of extensive analytic investigations: P. Hasenfratz, M. Maggiore and F. Niedermayer, Phys. Lett. B245 (1990) 522; P. Hasenfratz and F. Niedermayer, Phys. Lett. B245 (1990) 529; J. Balog and M. Niedermaier, Phys. Lett. B, Phys. Rev. Lett., and Nucl. Phys. B; D.-S. Shin, Nucl. Phys. B; and of numerical investigations, see for example A. Cucchieri, T. Mendes, A. Pelissetto and A.D. Sokal, J. Stat. Phys.; M. Hasenbusch, Phys. Rev. D; M. Hasenbusch and R.R. Horgan, Phys. Rev. D; T. Neuhaus, preprint; N. Schultka, preprint; M. Campostrini, A. Pelissetto, P. Rossi and E. Vicari, Phys. Rev. D and Phys. Lett. B; S. Caracciolo, R.G. Edwards, A. Pelissetto and A.D. Sokal, Phys. Rev. Lett. 75 (1995) 1891, and Phys. Rev. Lett.; S. Caracciolo, R.G. Edwards, T. Mendes, A. Pelissetto and A.D. Sokal, Nucl. Phys. Proc. Suppl.; G. Mana, A. Pelissetto and A.D. Sokal, Phys. Rev. D; B. Allés, A. Buonanno and G. Cella, Nucl. Phys. B; B. Allés, G. Cella, M. Dilaver and Y. Gündüç, preprint.

The (phi^2)^2 Field Theory and the Non-Linear Sigma-Model in the Large N Limit

In the preceding sections we have derived universal properties of critical systems within the frameworks of the formal $\varepsilon = 4-d$ and $\varepsilon = d-2$ expansions (at least for $N>2$). It is therefore reassuring to verify, at least in some limiting case, that the results obtained in this way remain valid even when $\varepsilon$ is no longer infinitesimal. We show in this section that, in the case of the $O(N)$ symmetric $(\phi^2)^2$ field theory, the same universal properties can also be derived at fixed dimension in the large $N$ limit, and more generally order by order in the large $N$ expansion. We then examine the non-linear $\sigma$-model in the same limit.

Introduction

We again consider the partition function

$$Z = \int [d\phi(x)]\,\exp[-S(\phi)],$$

where $S(\phi)$ is the $O(N)$ symmetric action (with $u = g\,\Lambda^{4-d}$):

$$S(\phi) = \int d^dx \left\{ \frac12\big[\partial_\mu\phi(x)\big]^2 + \frac12\, r\,\phi^2(x) + \frac{u}{4!}\big(\phi^2(x)\big)^2 \right\}.$$

A cut-off $\Lambda$, consistent with the symmetry, is implied. The solution of the model in the large $N$ limit is based on an idea of mean field theory type: it can be expected that for $N$ large the $O(N)$ invariant quantities self-average and therefore have small fluctuations. Thus, for example,

$$\big\langle \phi^2(x)\,\phi^2(y) \big\rangle \underset{N\to\infty}{\sim} \big\langle \phi^2(x) \big\rangle\,\big\langle \phi^2(y) \big\rangle.$$

This suggests taking $\phi^2(x)$ as a dynamical variable. Technically, in the case of the $(\phi^2)^2$ theory, this can be achieved by using an identity of Hubbard-Stratonovich type,

$$\exp\left[ -\frac12 r\,\phi^2 - \frac{u}{4!}\big(\phi^2\big)^2 \right] \propto \int d\lambda\;\exp\left[ \frac{3}{2u}\,(\lambda - r)^2 - \frac12\,\lambda\,\phi^2 \right],$$

where the integration contour is parallel to the imaginary axis. By introducing a field $\lambda(x)$, the identity can be used for each point $x$ inside the functional integral. The new functional integral is then gaussian in $\phi$, and the integral over the field $\phi$ can be performed. The dependence of the partition function on $N$ then becomes explicit. Actually, it is convenient to separate the components of $\phi$ into one component $\sigma$ and $N-1$ components $\boldsymbol\pi$, and to integrate only over $\boldsymbol\pi$. For $N$ large the difference is negligible.
To generate $\sigma$ correlation functions we also add a source $H(x)$ to the action:

$$Z(H) = \int [d\sigma(x)]\,[d\lambda(x)]\;\exp\left[ -S_N(\sigma, \lambda) + \int d^dx\, H(x)\,\sigma(x) \right],$$

with

$$S_N(\sigma, \lambda) = \int d^dx \left[ \frac12\,(\partial_\mu\sigma)^2 - \frac{3}{2u}\,\lambda^2(x) + \frac{3r}{u}\,\lambda(x) + \frac12\,\lambda(x)\,\sigma^2(x) \right] + \frac{N-1}{2}\,\mathrm{tr}\ln\big[ -\partial^2 + \lambda(\cdot) \big].$$

$\lambda$-field correlation functions. In this formalism it is natural to also calculate correlation functions involving the $\lambda$ field. These have a simple interpretation in the initial $\phi$-field formalism. Indeed, let us add a source $j_\lambda$ for $\lambda$ to the action. Reintroducing the $\phi$ field and integrating over $\lambda$, we recover, instead of the action $S(\phi)$,

$$S(\phi) - j_\lambda\left( r + \frac u6\,\phi^2 \right) + \frac u6\, j_\lambda^2.$$

Therefore $j_\lambda$ generates the $\phi^2$ correlation functions, up to a multiplicative factor $u/6$, a translation by $r$, and an additive shift of the connected two-point function.

Large N limit: the critical domain

We now take the large $N$ limit at $Nu$ fixed. With this condition $S_N$ is of order $N$, and the functional integral can be calculated for $N$ large by steepest descent. We expect $\sigma = O(N^{1/2})$, $\lambda = O(1)$. We look for a uniform saddle point ($\sigma(x)$ and $\lambda(x)$ space-independent),

$$\sigma(x) = \sigma,\qquad \lambda(x) = \lambda.$$

Differentiating the action with respect to $\sigma$ and $\lambda$, we obtain the saddle point equations

$$\sigma\,\lambda = 0,$$

$$\frac{\sigma^2}{N} - \frac{6}{Nu}\,(\lambda - r) + \frac{1}{(2\pi)^d}\int \frac{d^dp}{p^2 + \lambda} = 0.$$

Remark. In the large $N$ limit the leading perturbative contributions come from chains of "bubble" diagrams. These diagrams form asymptotically a geometric series, which is summed by the algebraic techniques explained above.

[Figure: leading (bubble chain) diagrams in the limit N to infinity.]

The low temperature phase. The first saddle point equation implies either $\sigma = 0$ or $\lambda = 0$. In the low temperature phase $\sigma$, the average value of the field, does not vanish. The second equation then yields

$$\frac{\sigma^2}{N} = -\frac{6r}{Nu} - \frac{1}{(2\pi)^d}\int \frac{d^dp}{p^2}.$$

Note that this equation has solutions only for $d>2$. This is a manifestation of the Mermin-Wagner-Coleman theorem: in a system with only short range forces, a continuous symmetry cannot be broken for $d\le 2$, in the sense that the average $\sigma$ of the order parameter necessarily vanishes. Physically, the would-be Goldstone modes are responsible for this property: being massless, as we know from general arguments and as the propagator in the integrand confirms, they induce an IR instability for $d\le 2$.

Setting

$$r_c = -\frac{Nu}{6}\,\frac{1}{(2\pi)^d}\int \frac{d^dp}{p^2},\qquad r = r_c + \frac{Nu}{6}\,\tau,$$

we can rewrite the equation as

$$\sigma^2 = N\,(-\tau),\qquad\text{i.e.}\quad \sigma \propto (-\tau)^{\beta}\ \text{ with }\ \beta = \tfrac12 .$$

The expectation value of the field vanishes for $r = r_c$, which therefore corresponds to the critical temperature. Moreover, we find that for $N$ large the exponent $\beta$ remains classical, i.e. mean-field like, in all dimensions.

The high temperature phase. Above $T_c$, $\sigma$ vanishes. The $\sigma$ propagator then becomes

$$\Delta_\sigma(p) = \frac{1}{p^2 + \lambda}.$$

Therefore $\lambda^{1/2}$ is, at this order, the physical mass, i.e. the inverse correlation length $\xi^{-1}$ of the field $\phi$:

$$m = \xi^{-1} = \lambda^{1/2}.$$

From the saddle point equation one can verify that $\partial r/\partial\lambda$ is positive; the minimum value of $r$, obtained for $\lambda = 0$, is $r_c$. Using the definitions of $r_c$ and $\tau$ in the saddle point equation, we then find the gap equation

$$\tau = \frac{6}{Nu}\, m^2 + \frac{m^2}{(2\pi)^d}\int \frac{d^dp}{p^2\,(p^2 + m^2)}.$$

(i) For $d>4$ the integral has a finite limit for $m=0$, and therefore at leading order

$$m^2 \propto \tau,\qquad\text{and thus}\quad \nu = \tfrac12,$$

which is the mean field result.

(ii) For $2<d<4$, instead, the integral behaves for $m$ small like (setting $\varepsilon = 4-d$)

$$D_2(m) \equiv \frac{1}{(2\pi)^d}\int \frac{d^dp}{p^2\,(p^2 + m^2)} = C(d)\, m^{-\varepsilon} - a(d)\,\Lambda^{-\varepsilon} + O\!\left( m^{2}\,\Lambda^{-2-\varepsilon} \right),$$

with

$$N_d = \frac{2}{(4\pi)^{d/2}\,\Gamma(d/2)},\qquad C(d) = -\frac{\pi\, N_d}{2\,\sin(\pi d/2)},$$

where we have introduced for convenience the usual loop factor $N_d$ (a quick numerical check of these formulas is given below).
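A quick numerical check (my own, not in the source) of the loop-factor formulas just quoted, as reconstructed above: in $d=3$ one should recover the elementary result $\int d^3p\,(2\pi)^{-3}\,[p^2(p^2+m^2)]^{-1} = 1/(4\pi m)$, i.e. $C(3) = 1/(4\pi)$.

```python
# Sketch: evaluate N_d and C(d) and compare C(3) with 1/(4 pi).
import math

def N_d(d):
    return 2.0 / ((4 * math.pi) ** (d / 2) * math.gamma(d / 2))

def C(d):
    return -math.pi * N_d(d) / (2.0 * math.sin(math.pi * d / 2))

print(C(3.0), 1.0 / (4 * math.pi))    # both ~ 0.0795775
```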
The con-stan t a(d) whic h c haracterizes the leading correction in equation (:), dep ends explicitl y on the regularization, i.e. the w a y large momen ta are cut. The leading con tribution, for m ! 0, to the l.h.s. of equation (:) no w comes from the in tegral. Keeping only the leading term in (:) w e obtain: m =    =(") ; (:) whic h sho ws that the exp onen t  is not classical:  =  " =  d  (: ) (iii) F or d =  the l.h.s. is still dominated b y the in tegral: D  (m ) =  ( )  Z d  p p (p + m )  m ! 0   ln (=m): The correlation length no longer has a p o w er la w b eha viour but instead a mean- eld b eha viour mo di ed b y a logarithm. This is t ypical of a situation where the gaussian xed p oin t is stable, in the presence of a marginal op erator. (iv) Examining equation (:b ) for  = 0 and d = w e nd that the correlation length b ecomes large only for r ! . This p eculiar situation will b e discussed in the framew ork of the non-linear  -mo del. Finally , in the critical limit = 0,  v anishes and th us from the form (:) of the  -propagator w e nd that the critical exp onen t  remains classical for all d  = 0 ) d =  (d ) : (:0)  W e v erify that the exp onen ts ; ;  satisfy the scaling relation pro v en within the framew ork of the "-expansion =  d : Singular fr e e ener gy and sc aling e quation of state. In a constan t magnetic eld H in the  direction, the free energy W (H )= p er unit v olume is giv en b y W (H )= = ln Z = = u  r u    + H  N tr( + ); where is the total space v olume and ;  the saddle p oin t v alues are giv en b y equation (:b ) and the mo di ed saddle p oin t equation (:a):  = H : (:) The thermo dynamical p oten tial (M ) is the Legendre transform of W (H ). First M =  @ W @ H =  ; b ecause partial deriv ativ es of W with resp ect to ;  v anish as a consequence of the saddle p oin t equations. It follo ws V (M )  (M )= = H M W (H )= = u  + r u  +  M + N tr( + ): As a prop ert y of the Legendre transform, the saddle p oin t equation for  is no w obtained b y writing that the deriv ativ e of v anishes. The term tr ln can b e ev aluated for large in terms of r c and the quan tities de ned in (:). One nds tr ln [( )  ] =  ( ) d Z d d p ln[(p + )=p ] = C (d) d  d= r c N u  + a(d)  d + O ( +d= ): The thermo dynamical p oten tial b ecomes V (M ) =   u  u   + (r r c ) u  +  M N C (d) d  d= ; (:) where w e ha v e de ned u =  N a(d) " : (:)  Note that for  small the term prop ortional to  is negligible with resp ect to the singular term  d= for d < . A t leading order in the critical domain V (M ) =   +  M N C (d) d  d= ; (:) where has b een de ned in (:0). The saddle p oin t equation for  tak es the simple form + M N C (d) d= = 0; and th us  =   N C (d) + M   =(d) : It follo ws that the leading con tribution, in the critical domain, to the thermo dy-namical p oten tial is giv en b y V (M )  (d ) d  N C (d)  =(d) ( + M ) d=(d) : (:) V arious quan tities can b e deriv ed from V (M ), for example the equation of state b y di eren tiating with resp ect to M . The resulting scaling equation of state is H = @ V @ M = h 0 M  f = M  ; (:) in whic h h 0 is a normalization constan t, The exp onen t  is giv en b y:  = d + d ; (:) in agreemen t with the general scaling relation relation  = d=d , and the function f (x) b y: f (x) = ( + x) =(d) : (:) The asymptotic form of f (x) for x large implies = =(d ) again in agreemen t with the scaling relation =  (  ). 
T aking in to accoun t the v alues of the critical exp onen ts and it is then easy to v erify that the function f satis es all required prop erties lik e for example Grith's analyticit y (see section .). In particular the equation of state can b e cast in to the parametric form:  = R =  ; = R    ; H = h 0 R  =    =(d) : q p q Fig. The \bubble" diagram B (p; m). L e ading c orr e ctions to sc aling. The  term yields the leading corrections to scaling. It is subleading b y a p o w er of  = d= = O ( (d)=(d) ): W e conclude !  = ( d)=(d ) ) ! =  d : (: ) W e ha v e iden ti ed the exp onen t ! whic h go v erns the leading corrections to scal-ing. Note that for the sp ecial v alue u = u this correction v anishes. Sp e ci c he at exp onent. A mplitude r atios. Di eren tiating t wice V (M ) with resp ect to w e obtain the sp eci c heat at xed magnetization C H =  (d )  N C (d)  =(d) ( + M ) (d)=(d) : (:0) F or M = 0 w e iden tify the sp eci c exp onen t =  d d ; (:) whic h indeed is equal to d , as predicted b y scaling la ws. Among the ratio of amplitudes one can calculate for example R +  and R c (for de nitions see c hapter  of main reference) (R +  ) d = N (d ) ( d=) ( ) d= ; R c =  d (d ) : (:) The  and () two-p oint functions. Di eren tiating t wice the action (:) with resp ect to (x), then replacing the eld (x) b y its exp ectation v alue m , w e nd the -propagator   (p) ab o v e T c   (p) = N   N u + B (p; m)   ; (:) where B (p; m) is the bubble diagram of gure : B (p; m) =  (  ) d Z d d q (q + m ) h (p q ) + m i : (:) The -propagator is negativ e b ecause the - eld is imaginary . As noted in ., it is simply related to the -p oin t function = B (p; m)  + (N u=)B (p; m) : (:) 0 A t zero momen tum w e reco v er the sp eci c heat. The small m expansion of B (0; m) can b e deriv ed from the expansion (:). One nds B (0; m) =  ( ) d Z d d q (q + m ) = @ @ m m D  (m )  = m (d= )C (d)m " a(d) " +    : (:) The singular part of the sp eci c heat th us v anishes as m " , in agreemen t with equation (:0) for M = 0. In the critical theory (m = 0 at this order) for  d   the denominator is also dominated at lo w momen tum b y the in tegral B (p; 0) =  (  ) d Z d d q q (p q ) = <d< b(")p " a(d) " + O p "  ; (:) where b ( ") =  sin( d=) (d=) (d ) N d ; (:) and th us:   (p)  p!0 N b(") p " : (: ) W e again v erify consistency with scaling relations. In particular w e note that in the large N limit the dimension [] of the eld  is [] =  (d + ") = ; (:0) a result imp ortan t for the = N p erturbation theory . R emarks. (i) F or d =  the b eha viour of the propagator is still dominated b y the in tegral whic h has a logarithmic b eha viour   / = ln (=p). (ii) Note therefore that for d   the con tributions generated b y the term prop ortional to  (x) in (:) alw a ys are negligible in the critical domain. . R G functions and le ading c orr e ctions to sc aling The R G functions. F or a more detailed v eri cation of the consistency of the large N results with the R G framew ork, w e no w calculate R G functions at leading order. One rst easily v eri es that, at leading order for large, m solution of equation (:) satis es @ m @ + N "a(d) " u  @ m @ u = 0 ;  where the constan t a(") has b een de ned in (:). 
It dep ends on the cut-o pro cedure but for " =  d small satis es a(")  =( "): (:) W e then set (equation (: )): u = g " ; g = u " = =(N a) : (:) In the new v ariables ; g ; w e obtain an equation whic h expresses that m is R G in v arian t  @ @ + (g ) @ @ g  (g ) @ @  m( ; g ; ) = 0 ; (:) with (g ) = "g ( g =g ); (:)   (g ) = +  (g ) = "g =g : (:) When a(d) is p ositiv e (but this not true for all regularizations, see the discussion b elo w), one nds an IR xed p oin t g , as w ell as exp onen ts ! = ", and   = d , in agreemen t with equations (: ,: ). In the framew ork of the "-expansion ! is asso ciated with the leading corrections to scaling. In the large N limit ! remains smaller than t w o for " < , and this extends the prop ert y to all dimensions  d  . Finally , applying the R G equations to the propagator (: ), w e nd  (g ) = 0 ; (:) a result consisten t with the v alue (:0) found for  . L e ading c orr e ctions to sc aling. F rom the general R G analysis w e exp ect the leading corrections to scaling to v anish for u = u . This prop ert y has already b een v eri ed for the free energy . Let us no w consider the correlation length or mass m giv en b y equation (:). If w e k eep the leading correction to the in tegral for m small (equation (:)) w e nd  u  u + N C (d)m " + O m "  = m ; (:) where equation (:) has b een used. W e see that the leading correction again v anishes for u = u . Actually all correction terms suppressed b y p o w ers of order " for d !  v anish sim ultaneously as exp ected from the R G analysis of the  eld theory . Moreo v er one v eri es that the leading correction is prop ortional to (u u ) "=(") , whic h leads to !  = "=( "), in agreemen t with equations (: ,: ).  In the same w a y if w e k eep the leading correction to the -propagator in the critical theory (equation (:)) w e nd:   (p) = N   N u  N u + b(")p "   ; (:) where terms of order and = N ha v e b een neglected. The leading corrections to scaling again exactly cancel for u = u as exp ected. Discussion. (i) One can sho w that a p erturbation due to irrelev an t op erators is equiv alen t, at leading order in the critical region, to a mo di cation of the ( ) coupling. This can b e explicitl y v eri ed here. The amplitude of the leading correction to scaling has b een found to b e prop ortional to = N u a(d) " where the v alue of a(d) dep ends on the cut-o pro cedure and th us of con tributions of irrelev an t op erators. Let us call u 0 the ( ) coupling constan t in another sc heme where a is replaced b y a 0 . Iden tifying the leading correction to scaling w e nd the relation:  " N u a(d) =  " N u 0 a 0 (d); homograph ic relation whic h is consisten t with the sp ecial form (:) of the -function. (ii) The sign of a(d). It is generally assumed that a(d) > 0. This is indeed what one nds in the simplest regularization sc hemes, lik e the simplest P auli{Villars's regularization where a(d) is p ositiv e in all dimensions < d < . Moreo v er a(d) is alw a ys p ositiv e near four dimensions where it div erges lik e a(d)  d!   " : Then there exists an IR xed p oin t, non-trivial zero of the -function. F or this v alue u the leading corrections to scaling v anish. Ho w ev er for d xed, d < , this is not a univ ersal feature. F or example in the case of simple lattice regularizations it has b een sho wn that in d = the sign is arbitrary . Ho w ev er, if a(d) is negativ e, the R G metho d for large N (at least in the p er-turbativ e framew ork) is confron ted with a serious dicult y . 
Indeed the coupling o ws in the IR limit to large v alues where the large N expansion is no longer reliable. It is not kno wn whether this signals a real ph ysical problem, or is just an artifact of the large N limit. Another w a y of stating the problem is to examine directly the relation b et w een bare and renormalized coupling constan t. Calling g r m d the renormalized -p oin t function at zero momen tum , w e nd m d g r = d g  + d g N B (0; m)= : (: )  In the limit m the relation can b e written  g r = (d )N C (d)  +  m  d   g N a(d)   : (:0) W e see that when a(d) < 0 the renormalized IR xed p oin t v alue cannot b e reac hed b y v arying g > 0 for an y nite v alue of m=. In the same w a y leading corrections to scaling can no longer b e cancelled. . Smal l c oupling c onstant and lar ge momentum exp ansions for d <  Section . is dev oted to a systematic discussion of the = N expansion. Ho w-ev er the = N correction to the t w o-p oin t function will help us to immediately understand the problem of the massless eld theory for d < . W e ha v e seen that, in the framew ork at the = N expansion, w e can calculate at xed dimension d <  in the critical limit (T = T c ; m = 0). This implies that the terms of the = N expansion cannot b e expanded in a p o w er series of the coupling constan t, at least with in teger p o w ers. Note that since the gaussian xed p oin t is an UV xed p oin t, the small coupling expansion is also a large momen tum expansion. T o understand the phenomenon w e consider the h  i correlation function at order = N . A t this order only one diagram con tributes ( gure ), con taining t w o   v ertices. After mass renormalization and in the large cut-o limit w e nd: ()   (p) = p + N ( ) d Z d d q (= N u) + b(")q "   (p + q )  q  + O   N  : (:) An analytic study of the in tegral rev eals that it has an expansion of the form X k  k u k p k " + k u (+k )=" p k : (:) The co ecien ts k ; k can b e obtained b y p erforming a Mellin transformation o v er u on the in tegral. Indeed if a function f (u) b eha v es lik e u t for u small, then the Mellin transform M (s) M (s) = Z  0 du u s f (u); has a p ole at s = t. Applying the transformation to the in tegral, and in v erting q and u in tegrations w e ha v e to calculate the in tegral Z  0 du u s (= N u) + b(")q " = N   N b(")q "   s  sin  s     Fig. The diagram con tributing to ()   at order = N . Then the v alue of the remaining q in tegral follo ws from the generic result (:). The terms with in teger p o w ers of u corresp ond to the formal p erturbativ e expansion where eac h in tegral is calculated for " small enough. k has p oles at " = (l + )=k for whic h the corresp onding p o w er of p is l , i.e. an in teger. One v eri es that l has a p ole at the same v alue of " and that the singular con tributions cancel in the sum. F or these dimensions logarithms of u app ear in the expansion. . The non-line ar  -mo del in the lar ge N limit W e ha v e noticed that the term prop ortional to R d d x  (x), whic h has dimension  d for large N in all dimensions, is irrelev an t in the critical domain for d <  and can th us b e omitted at leading order (this also applies to d =  where it is marginal but yields only logarithmic corrections). Actually the constan t part in the in v erse propagator as written in equation (:) pla ys the role of a large momen tum cut-o . Let us th us consider the action (:) without the  term. 
If w e then w ork bac kw ards, rein tro duce the initial eld and in tegrate o v er (x) w e nd Z = Z [ d(x)]   (x)  u m r   exp  Z  (@  (x)) d d x  : (:) Under this form w e recognize the partition function of the O (N ) symmetric non-linear  -mo del in an uncon v en tional normalization. W e ha v e therefore disco v ered a remark able corresp ondence: to all orders in an = N expansion the renormalized non-linear  -mo del is iden tical to the renormalized  eld theory at the IR xed p oin t. The lar ge N limit. In order to more explicit ly sho w the corresp ondence b e-t w een the set of parameters used in the t w o mo dels, let us directly solv e the  -mo del in the large N limit. W e rewrite the partition function: Z = Z [d(x)d(x)] exp [ S (; )] ; (:) with: S (; ) =  t Z d d x h ( @  ) +    i : (:)  In tegrating, as w e did in section ., o v er N  comp onen ts of and calling  the remaining comp onen t, w e obtain: Z = Z [d (x)d(x)] exp [ S N ( ; )] ; (:) with: S N ( ; ) =  t Z h ( @   ) +  (x)   (x) i d d x +  (N ) tr ln [ + ()] : (:) The large N limit is here tak en at tN xed. The saddle p oin t equations, analo-gous to equations (:), are: m  = 0 ; (:a )  =  (N )t ( ) d Z d d p p + m ; (:b ) where w e ha v e set h (x)i = m . A t lo w temp erature  is di eren t from zero and th us m, whic h is the mass of the  - eld, v anishes. Equation (:b) giv es the sp on taneous magnetization:  =  (N )t ( ) d Z d d p p : (: ) Setting  t c = (N ) ( ) d Z d d p p ; (:0) w e can write equation (: ):  =  t=t c : (:) Th us t c is the critical temp erature where  v anishes. Ab o v e t c ,  instead v anishes and m, whic h is no w the common mass of the  -and  - eld, is for < d <  giv en b y:  t c  t = m d (N ) ( ) d Z d d p p ( p + ) + O m d  : (:) W e reco v er the scaling form of the correlation length  = =m. F rom the equa-tions (:,: ), w e can also deriv e the R G functions at leading order for N large: (t) = "t N  t ;  (t) = N  t : (:)  It is also easy to calculate the thermo dynam ical p oten tial, Legendre transform of W (H ) = t ln Z (H ): V (M ) = (M )= = d d  N C (d)  =(d) (M  + t=t c ) d=(d) ; (:) a result whic h extends equation (:) to all temp eratures b elo w t c . The cal-culation of other ph ysical quan tities and the expansion in = N follo w from the considerations of previous sections and section .. Two dimensions and the question of Bor el summability. F or d = the critical temp erature v anishes and the parameter m has the form: m  e  =(N t) ; (:) in agreemen t with the R G predictions. Note that the eld -p oin t function tak es in the large N -limit the form: ()   (p) = p + m : (:) The mass term v anishes to all orders in the expansion in p o w ers of the coupling constan t t, prev en ting an y p erturbativ e calculation of the mass of the eld. The p erturbation series is trivially not Borel summ able. Most lik ely this prop ert y is also true for the mo del at nite N . On the other hand if w e break the O (N ) symmetry b y a magnetic eld, adding a term h to the action, the ph ysical mass b ecomes calculable in p erturbation theory . Corr e ctions to sc aling and the dimension four. In equation (:) w e ha v e neglected corrections to scaling. If w e tak e in to accoun t the leading correction w e get instead: m C (d)m d a(d) d  / t t c ; where a(d), as w e ha v e already explained, is a constan t whic h explicitl y dep ends on the cut-o pro cedure and can th us b e v aried b y c hanging con tributions of irrelev an t op erators. 
By comparing with the results of section ., w e disco v er that, although the non-linear  -mo del sup er cially dep ends on one parameter less than the corresp onding  eld theory , actually this parameter is hidden in the cut-o function. This remark b ecomes imp ortan t in the four dimensional limit where most leading con tributions come from the leading corrections to scaling. F or example for d =  equation (:) tak es a di eren t form, the dominan t term in the r.h.s. is prop ortional to m ln m. W e recognize in the factor ln m the e ectiv e  coupling at mass scale m. Bey ond the = N expansion, to describ e with p erturbation theory and renormalization group the ph ysics of the non-linear  mo del it is necessary to in tro duce the op erator R d d x  (x), whic h irrelev an t for d < , b ecomes marginal, and to return to the  eld theory .  . The = N -exp ansion: an alternative eld the ory Pr eliminary r emarks. Power c ounting. Higher order terms in the steep est descen t calculation of the functional in tegral (:) generate a systematic = N expansion. Let us rst sligh tly rewrite action (:). W e shift the eld (x) b y its exp ectation v alue m (equation (:)), (x) ! m + (x): S N ( ; ) =  Z d d x  (@   ) + m  + (x) (x) u  (x)  u m r  (x)  + (N ) tr ln  + m + () : (:) W e no w analyze the terms in the action (:) from the p oin t of view of large N p o w er coun ting. The dimension of the eld  (x) is (d )=. F rom the critical b eha viour (: ) of the -propagator w e ha v e deduced the canonical dimension [] of the eld (x): [] " = d i:e: [] = : As noted ab o v e,  has dimension  > d and is th us irrelev an t. The in teraction term R (x) (x)d d x has dimension zero. It is easy to v erify that the non-lo cal in teractions in v olving the - eld, coming from the expansion of the tr ln , ha v e all also the canonical dimension zero:  tr h (x)  + m   i k  = k [] k = 0 : This p o w er coun ting prop ert y has the follo wing implication: In con trast with usual p erturbation theory , the = N expansion generates only logarithmic correc-tions to the leading long distance b eha viour for an y xed dimension d, < d  . The situation is th us similar to the situation one encoun ters for the "-expansion (at the IR xed p oin t) and one exp ects to b e able to calculate univ ersal quan ti-ties lik e critical exp onen ts for example as p o w er series in = N . Ho w ev er, b ecause the in teractions are non-lo cal, the results of renormalization theory do not im-mediately apply . W e no w construct an alternativ e quasi-lo cal eld theory , for whic h the standard R G analysis is v alid, and whic h reduces to the large N eld theory in some limit. A n alternative eld the ory. T o b e able to use the standard results of renor-malization theory w e reform ulate the critical theory to deal with the non-lo cal in teractions. Neglecting corrections to scaling w e start from the non-linear  -mo del in the form (:): Z = Z [d(x)] [d(x)] exp [S ( ; )] ; (:) S (; ) =  t Z d d x h (@  ) +    i : (: )  The dicult y arises from the -propagator, absen t in the p erturbativ e form u-lation, and generated b y the large N summ ation. W e th us add to the action (: ) a term quadratic in  whic h at tree lev el of standard p erturbation theory generates a -propagator of the form (: ). The mo di ed action S v then is S v ( ; ) =  Z d d x   t h ( @  ) +    i  v (@ ) "=   : (:0) In the limit where the parameter v go es to in nit y the co ecien t of the additional term v anishes, and the initial action is reco v ered. 
W e b elo w consider only the critical theory . This means that the couplings of all relev an t in teractions will b e set to their critical v alues. These in teractions con tain a term linear in  and a p olynomial in of degree dep ending on the dimension. Note that in some discrete dimensions some monom ials b ecome just renormalizable. W e therefore w ork in generic dimensions. The quan tities w e shall calculate are regular in the dimension. The eld theory with the action (:0) can b e studied with standard eld theory metho ds. The p eculiar form of the  quadratic term, whic h is not strictly lo cal, do es not create a problem. Similar terms are encoun tered in statistical systems with long range forces. The simple consequence is that the - eld is not b e renormalized b ecause coun ter-terms are alw a ys lo cal. It is con v enien t to rescale ! p t,  ! v : S v (; ) =  Z d d x h (@  ) + v  (@ ) "=  + relev an t terms i : The renormalized critical action then reads: [S v ] ren =  Z d d x h Z (@  ) + v r Z v  (@ ) "=  + relev an t terms i : (:) It follo ws that the R G equations for PI correlation functions of l  elds and n elds in the critical theory tak e the form:  @ @ + v (v ) @ @ v n  (v )  (l;n) = 0 : (:) W e can then calculate the R G functions as p o w er series in = N . It is easy to v erify that v has to b e tak en of order = N . Therefore to generate a = N expansion one rst has to sum the m ultiple insertions of the one-lo op  t w o-p oin t function, con tributions whic h form a geometrical series. The  propagator then b ecomes   (p) = p d b(")D (v ) ; (:)  where w e ha v e de ned D (v ) = =b(") + N v : The solution to the R G equations (:) can b e written: (l;n) ( p; v ; ) = Z n= ( ) dln(d)= (l;n) (p; v ( ); ); (:) with the usual de nitions dv d = (v ( )) ; d ln Z d =  (v ( )) : W e are in terested in the neigh b ourho o d of the xed p oin t v = . One v eri es that the R G function  (v ) approac hes the exp onen t  obtained b y direct calcula-tion, and the R G -function b eha v es lik e v . The o w equation for the coupling constan t b ecomes: dv d = v ; ) v ( )   : (:) W e then note that to eac h p o w er of the  eld corresp onds a p o w er of v . It follo ws (l;n) ( p; v ; ) / v l ( ) dln(d+ ) / d(=)ln(d+) : (:) T o compare with the result (:0) obtained from the p erturbativ e renormalization group one has still to tak e in to accoun t that the functions (l;n) de ned here are obtained b y an additional Legendre transformation with resp ect to the source of . Therefore = = d = d = : (:) Fig.  Diagram con tributing to ()    at order = N . Fig.  Diagram con tributing to ()    at order = N . 0 R G functions at or der = N . Most calculations at order = N rely on the ev al-uation of the generic in tegral  ( ) d Z d d q (p + q )  q  = p d ( +  d=)(d= )(d=  ) ( ) d= ()( )(d   ) : (:) F or later purp ose it is con v enien t to set: X  = N d b(") = (d ) (d=)( d=) (d= ) =  sin( "=)( ")  ( "=)( "=) : (: ) T o compare with xed dimension results note X   ( d) for d !  and X   (d ) for d ! . The calculation of the h i correlation function at order = N in v olv es the ev aluation of the diagram of gure . W e w an t to determine the co ecien t of p ln =p. Since w e w ork at one-lo op order w e can instead replace the  propagator q " b y q  and send the cut-o to in nit y . W e then use the result (:) with  = . In the limit  ! " the in tegral has a p ole. 
The residue of the p ole yields the co ecien t of p ln and the nite part con tains the p ln p con tribution ()   (p) = p + "  " N d b(")D (v ) v p ln(=p): Expressing that the function satis es the R G equation w e obtain the function  (v ). The second R G function can b e deduced from the div ergen t parts of the h i function ()    = v + A  v D  (v ) ln + A v  D (v ) ln + nite ; with A  = b(") N d = X  A = N b (") (d )b(")N d = N (d )X  ; where A  and A corresp ond to the diagrams of gures  and  resp ectiv ely . Applying the R G equation one nds the relation at order = N v (v ) = v  (v ) A  v  D  (v ) A v  D (v ): (:0) W e th us obtain  (v ) = "v  " X  D  (v ); (:) v (v ) = v   " X  D  (v ) + N ( ")v  X  D (v ); (:)  where the rst term in v comes from A  and  and the second from A . Extracting the large v b eha viour w e nd  = " N ( ") X  + O (= N ):; (:)  = ( ")( ") N ( ") X  > 0 ; and th us   = d + ( ")( ") N ( ") X  + O (= N ): (:) . A dditional r esults The calculations b ey ond the order = N are rather tec hnical. The reason is easy to understand: Because the e ectiv e eld theory is renormalizable in all dimensions  d  , the dimensional regularization, whic h is so useful in p erturbativ e calculations, no longer w orks. Therefore either one k eeps a true cut-o or one in tro duces more sophisticated regularization sc hemes. F or details the reader is referred to the literature. Generic dimensions. The exp onen ts and  are kno wn up to order = N and = N resp ectiv ely in arbitrary dimensions but the expressions are to o compli-cated to b e repro duced here. The expansion of up to order = N can b e directly deduced from the results of the preceding sections: =   "=   N X   + O   N  : (:) The exp onen ts ! and  = !  , go v erning the leading corrections to scaling, can also b e calculated for example from the   function: ! = "   ( ") ( ")N X   + O   N  ; (:)  = !  = " "   ( ") N X   + O   N  : (:) Note that the exp onen ts are regular functions of " up to " = and free of renormalon singularities at " = 0. The equation of state and the spin{spin correlation function in zero eld are also kno wn at order = N , but since the expressions are complicated w e refer the reader to the literature for details. Thr e e dimensional r esults. Let us giv e the expansion of  in three dimensions at the order presen tly a v ailable:  =   N +  N +  N + O   N   ;  with   =   ;  =    ;  =         +   00 (=) +  ln ; (x) b eing the logarithmic deriv ativ e of the function. The exp onen t is kno wn only up to order = N : =  N  +  N      + O  N  : Note that the = N expansion seems to b e rapidly div ergen t and certainly a direct summation of these terms do es not pro vide v ery go o d estimates of critical exp onen ts in dimensions for useful v alues of N . . Dimension four: triviality, r enormalons, Higgs mass A n um b er of issues concerning the ph ysics of the ( ) theory in four dimensions can b e addressed within the framew ork of the large N expansion. F or simplicit y reasons w e consider here only the critical (i.e massless) theory . T riviality and UV r enormalons. It is easy to v erify that the renormalized coupling constan t g r , de ned as the v alue of the v ertex h    i at momen ta of order  , is giv en b y: g r = g  +   N g B () ; (:) where B (p) corresp onds to the bubble diagram ( gure ) B (p)  p   ln (=p) + const: : (: ) W e see that when the ratio = go es to zero, the renormalized coupling constan t v anishes, for that all g . This is the so-called triviality prop ert y . 
In the standard treatmen t of quan tum eld eld, one usually insists in taking the in nite cut-o limit. Here one then nds only a free eld theory . Another w a y of form ulating the problem is the follo wing: it is imp ossible to construct in four dimensions a  eld theory consisten t (in the sense of satisfying all usual ph ysical require-men ts) on all scales for non zero coupling. Of course in the logic of e e ctive eld theories this is no longer an issue. The triviality prop ert y just implies that the renormalized or e ectiv e c harge is logarithmically small as indicated b y equa-tions (:,: ). Note that if g is generic (not to o small) and = large, g r is essen tially indep enden t of the initial coupling constan t. Only if the bare coupling is small is the renormalized coupling an adjustable, but b ounded, quan tit y . Let us no w imagine that w e w ork formally and, ignoring the problem, w e express the leading con tribution to the four-p oin t function in terms of the renor-malized constan t: g  + N  g ln(=p) = g r  + N  g r ln(=p) :  W e then nd that the function has a p ole for p =  e  =(N g r ) : This p ole corresp onds to the Landau ghost for this theory whic h has g = 0 as an IR xed p oin t. If w e calculate con tributions of higher orders, for example to the t w o-p oin t function, this p ole mak es the lo op in tegrals div erge. In an expansion in p o w ers of g r , eac h term is instead calculable but one nds, after renormalization, UV con tributions of the t yp e Z  d  q q   N g r  ln (=q )  k / k !  N g r   k k ! : The p erturbativ e manifestation of the Landau ghost is the app earance of con tri-butions to the p erturbation series whic h are not Borel summab le. By con trast the con tributions due to the nite momen tum region, whic h can b e ev aluated b y a semiclassical analysis, are Borel summ ab le, but in visible for N large. This e ect is called UV renormalon e ect. Note nally that this UV problem is indep enden t of the mass of the eld , that w e ha v e tak en zero for simplicit y reasons. IR r enormalons. W e no w illustrate the problem of IR renormalons with the same example of the massless ( ) theory (but no w zero mass is essen tial), in four dimensions, in the large N limit. W e calculate the con tribution of the small momen tum region to the mass renormalization, at cut-o xed. In the large N limit the mass renormalization is then prop ortional to (see equation (:)) Z d  q q  +   N g B (q )   Z d  q q  + N  g ln (=q )  : It is easy to expand this expression in p o w ers of the coupling constan t g . The term of order k in the limit k !  b eha v es as () k (N =  ) k k !. This con tri-bution has the alternating sign of the semiclassical con tribution. Note that more generally for N nite on nds ( =) k k !. IR singularities are resp onsible for additional, Borel summ able, con tributions to the large order b eha viour. In a theory asymptotically free for large momen tum , clearly the roles of IR and UV singularities are in terc hanged. The mass of the  eld in the phase of br oken symmetry. The  theory is a piece of the Standard Mo del, and the eld  then represen ts the Higgs eld. With some reasonable assumptions it is p ossible to establish for nite N a semi-quan titativ e b ound on the Higgs mass. Let us examine here what happ ens for N large. 
In the phase of brok en symmetry the action, after translation of a v erage v alues, includes a term prop ortional to   and th us the propagators of the elds  and  are elemen ts of a matrix M: M  (p) =  p   =u  N B (p)  ;  where  = h (x)i . In four dimensions B is giv en b y equation (: ). It is con v enien t to in tro duce a mass scale M , R G in v arian t, suc h that  N u +  B (p)  ln(M =p); and th us M / e  = N u : The mass of the eld  at this order is a solution to the equation det M = 0. One nds p ln (M =p) = ( = N ) ) m  ln(iM =m  ) = ( = N ) : The mass m  solution to the equation is complex, b ecause the particle  can deca y in to massless Goldstone b osons. A t  xed, the mass decreases when the cut-o increases or when the coupling constan t go es to zero. Expressing that the mass m ust b e smaller than the cut-o , one obtains an upp er-b ound on m  (but whic h sligh tly dep ends on the c hosen regularization). . Finite size e e cts Another question can b e studied in the large N limit, nite size e ects. It is dicult to discuss all p ossible nite size e ects b ecause the results dep end b oth on the geometry of the system and on the b oundary conditions. In particular one m ust discuss separately b oundary conditions dep ending whether they break or not translation in v ariance. In the rst case new e ects app ear whic h are surface e ects, and that w e do not examine here. Note that the p erio dic conditions are not the only ones whic h preserv e translation in v ariance. F or systems whic h ha v e a symmetry one can glue the b oundaries after ha ving made a group transformation. Th us here one could also c ho ose an tip erio dic conditions or more generally elds di ering b y a transformation of the O (N ) group. Moreo v er if w e are in terested only in qualitative asp ects w e can limit ourselv es to a simple geometry , in eac h direction the system ha ving the same nite size L, all other sizes b eing in nite (but w e th us exclude some questions concerning crosso v er regimes). Ev en so the n um b er of di eren t p ossible situations remains large, and w e limit ourselv es here to t w o examples. W e consider the example of p erio dic b oundary conditions in t w o cases: nite v olume (the geometry of the h yp ercub e or rather h yp ertorus) in this section, and QFT at nite temp erature in next section. F rom the p oin t of view of renormalization group, nite size e ects, whic h only a ect the IR domain, do not c hange UV div ergences. The R G equations remain the same, only the solutions are mo di ed b y the app earance of new dimensional quan tities. Th us if nite sizes are c haracterized b y only one length L, solutions will b e functions of an additional argumen t L= where  is the correlation length.  A prop ert y c haracteristic of a system of nite size is the quan ti cation of momen ta in F ourier space. F or p erio dic conditions, if w e call L the size du system in eac h direction, w e ha v e p  =  n  =L ; n  Z : In particular, in a massless theory the zero mo de p = 0 no w corresp onds to an isolated p ole of the propagator. This automatically leads to IR div ergences in all dimensions. Therefore in equations (:) the solution  = 0 no longer exists. This is not surprising: there are no phase transitions in a nite v olume. Neglecting corrections to scaling la ws w e can then write equation (:b):  = (N )tL d X n   m + ( n=L) ; (: 0) where the sums are cut b y a cut-o . 
T o discuss the equation it is con v enien t to in tro duce the function A(s) (related to Jacobi's elliptic functions) A(s) = + X n= e sn : (: ) Using P oisson's transformation it is easy to sho w A(s) = ( =s) = A  =s  : (: ) Using this de nition, and in tro ducing the critical temp erature t c , one can write equation (: 0) (for < d < )  t  t c = (N )L d Z  0 ds  e sm A d ( s=L ) L d ( s) d=  : (: ) Setting s ! L s and in tro ducing the function F : F (z ) = Z  0 ds  e sz A d ( s) ( s) d=  ; (: ) w e can rewrite the relation  t  t c = (N )L d F (mL): (: ) F or jt t c j d w e nd a scaling form whic h is in agreemen t with the R G result, whic h predicts (= = d + O (= N )): Lm(t; L) = L= (t; L) = f (t t c )L =  :  Here the length  has the meaning of a correlation length only for  < L. Since  = 0, the magnetic susceptibilit y in zero eld instead is alw a ys giv en b y = t=m . One v eri es that for t > t c xed, L !  and th us mL !  one reco v ers the in nite v olume limit. On the con trary in the lo w temp erature phase for t < t c xed, L ! , mL go es to zero. Th us the con tribution of the zero mo de dominates in the r.h.s. of equation (: 0). Using the relation (: ) one then nds F (z ) =  z + K (d) + O z  ; K (d) = Z  0 ds h A d ( s)  ( s) d= i ; and th us (L; t) = t m =  N  ( t=t c )L d tL K (d) + O L d =(t t c )  : (: ) W e see that the susceptibilit y div erges with the v olume, an indication of the existence of a brok en symmetry phase. Note nally that it is instructiv e to mak e a similar analysis for di eren t b ound-ary conditions whic h ha v e no zero mo de. F or d = the regime where nite size e ects are observ ables corresp onds to t ln (L) = O (), i.e. to a regime of lo w temp erature. The zero mo de dominates for t ln (L) , and the susceptibilit y is then giv en b y (t; L)   N L [ + O (t ln (L))] : .0 Field the ory at nite temp er atur e Quan tum eld theory at nite temp erature can b e considered as a system whic h has a nite size in one direction. Indeed the partition function is giv en b y tr e LH , where H is the hamiltonian and L  the temp erature. F or a scalar eld theory with euclidean lagrangian densit y L() this leads to the functional in tegral Z = Z [d] exp " Z L 0 d Z d d x L() # ; where the eld satis es p erio dic b oundary conditions only in one direction ( = 0; x) = ( = L; x): Let us again consider, as an example, the non-linear  mo del. W e nd a nite size system, but the in terpretation of parameters is di eren t. The v ariable t no w  represen ts the coupling constan t of the QFT. Since L is the in v erse temp erature, the limit L !  corresp onds to the limit of v anishing temp erature. The saddle p oin t equation (:b), in the symmetric phase  = 0, b ecomes  = (N )t  ( ) d L Z d d k X n  m + k + ( n=L) : (: ) On immediately v eri es that the IR problem induced b y the zero mo de has the follo wing consequences: since one in tegrates only o v er d  dimensions, a phase transition is only p ossible for d > . Qualitativ ely at large distance the condition of nite temp erature leads to a prop ert y of dimensional r e duction d ! d . The large N expansion is th us particularly w ell suited to the study of this problem whic h exhibits a crosso v er b et w een t w o di eren t dimensions. 
Again using Sc h winger's represen tation of the propagator, in tegrating o v er k and in tro ducing the function (: ) w e can rewrite equation (: ):  t  t c = N  ( ) (d)= L d G(mL) (: ) G(z ) = Z  0 ds s (d)= h e z s A( s) ( s) = i : (: ) Here  L = m  has really the meaning of a correlation length. This equation has a scaling form for d < . The b eha viour of the system then dep ends on the ratio b et w een L and the correlation length   of the system at zero temp erature. F or t > t c xed and L large (with resp ect to =) w e reco v er the zero temp erature limit. F or t t c small w e nd a crosso v er b et w een a regime of small and high temp erature. In the regime t < t c xed and L large, w e ha v e to examine the b eha viour of G(z ) for z small. A t d = : G(z ) = ln z + const: : Hence  m / (L; t) / L exp   L N   t  t c  : (:00) One nds that  L remains nite b elo w t c for all non v anishing temp eratures, and has when the coupling constan t t go es to zero or L !  the exp onen tial b eha viour c haracteristic of the dimension t w o. F or d =  the situation is di eren t b ecause a transition is p ossible in dimension d  = . This is consisten t with the existence of the quan tit y G(0) > 0 whic h app ears in the relation b et w een coupling constan t and temp erature at the critical p oin t:  t  t c = (N )G(0) ( ) =  L : (:0)  F or a coupling constan t t whic h corresp onds to a phase of brok en symmetry at zero temp erature (t < t c ), one no w nds a transition temp erature L  / p t c t. Studying more generally the saddle p oin t equations one can deriv e all other prop erties of this system. . Other metho ds. Gener al ve ctor eld the ories The large N limit can b e obtained b y sev eral other algebraic metho ds. Without b eing exhaustiv e, let us list a few. Sc h winger{Dyson equations for N large lead to a self-consisten t equation for the t w o-p oin t function. F rom the p oin t of view of sto c hastic quan tization or critical dynamics the Langevin equation also b ecomes linear and self-consisten t for N large. One replaces (x; t) b y (x; t) (h i means noise a v erage) at leading order. Finally a v ersion of the Hartree{F o c k appro ximation also yields the large N result. Gener al ve ctor eld the ories. W e no w brie y explain ho w the algebraic metho d presen ted in section . can b e generalized to actions whic h ha v e a more compli-cated dep endence in one or sev eral v ector elds. Again in a general O (N ) sym-metric eld theory the comp osite elds with small uctuations are the scalars constructed from all v ectors. The strategy is then to in tro duce pairs of elds and Lagrange m ultipliers for all indep enden t O (N ) in v arian t scalar pro ducts constructed from the man y-com p onen t elds. Let us rst tak e the example of one eld and assume that the in teraction is an arbitrary function of the only in v arian t (x) S () = Z d d x n  [@  (x)] + V  o : (:0) W e then in tro duce t w o elds (x) and (x) and use the iden tit y: exp  Z d d x V ( )  / Z [d(x) d(x)] exp  Z d d x     + V ()  : (:0) In the sp ecial case in whic h V () is a quadratic function, the in tegral o v er  can b e p erformed. In all cases, ho w ev er, the iden tit y (:0) transforms the action in to a quadratic form in and therefore the in tegration o v er can b e p erformed and the dep endence in N b ecomes explicit. This metho d will b e applied in section  to the study of m ulti-critical p oin ts and double scaling limit. 
If the action is an O (N ) in v arian t function of t w o elds  and the p oten tial dep ends on the three scalar pro ducts   ,  and . Then three pairs of elds are required.  Bibliographical Notes As sho wn b y Stanley the large N -limit of the classical N -v ector mo del coincides with the spherical mo del solv ed b y Berlin and Kac T.H. Berlin and M. Kac, Phys. R ev.  ( ) ; H.E. Stanley , Phys. R ev.  ( ) . Early w ork on calculating critical prop erties includes R. Ab e, Prog. Theor. Ph ys.  ( ) ;  ( ) , 0, ; S.K. Ma, Phys. R ev. L ett. ( ) ; Phys. R ev. A ( ) ; M. Suzuki, Phys. L ett. A ( ) ; Pr o g. The or. Phys.  ( ) , 0, 0; R.A. F errel and D.J. Scalapino, Phys. R ev. L ett. ( ) ; K.G. Wilson, Phys. R ev. D ( ) . The con tribution of order = N to the equation of state is giv en in E. Br  ezin and D.J. W allace, Phys. R ev. B ( )  . The spin{spin correlation in zero eld is obtained in M.E. Fisher and A. Aharon y , Phys. R ev. L ett.  ( ) ; A. Aharon y , Phys. R ev. B0 ( ) ; R. Ab e and S. Hik ami, Pr o g. The or. Phys.  ( ) 0. The exp onen t ! has b een calculated to order = N in S.K. Ma, Phys. R ev. A0 ( ) . See also the con tributions of S.K. Ma and E. Br  ezin, J.C. Le Guillou and J. Zinn-Justin to Phase T r ansitions and Critic al Phenomena v ol. , C. Dom b and M.S. Green eds. (Academic Press, London  ). The consistency of the = N expansion to all orders has b een pro v en in I. Y a Aref 'ev a, E.R. Nissimo v and S.J. P ac hev a, Commun. Math. Phys.  ( 0) ; A.N. V asil'ev and M.Y u. Nalimo v, T e or. Mat. Fiz.  ( ) . A t presen t the longest = N series for exp onen ts and amplitudes are found in I. Kondor and T. T emesv ari, J. Physique L ett. (Paris) ( ) L ; Y. Ok ab e and M. Oku, Pr o g. The or. Phys. 0 ( ) , ;  (  ) ; A.N. V asil'ev, Y u.M. Pis'mak and Y u.R. Honk onen, T e or. Mat. Fiz.  ( ) ; 0 ( )  . See also I. Kondor, T. T emesv ari and L. Heren yi, Phys. R ev. B ( 0) . Renormalization of op erators is discussed in K. Lang and W. R  uhl, Nucl. Phys. B00 ( )  ; Z. Phys. C ( )  . The case of long range forces has b een discussed in S.K. Ma, Phys. R ev. A ( ) . F or the Hartree{F o c k p oin t of view and QFT at nite temp erature see W.A. Bardeen and M. Moshe, Phys. R ev. D ( ) . Results concerning the -function at order = N in the massiv e theory renor-malized at zero momen tum ha v e b een recen tly rep orted in A. P elissetto and E. Vicari, Nucl. Phys. B ( ) , cond-mat/ 0. 0 A calculation of the dimensions of comp osite op erators to order = N is re-p orted in S. E. Derk ac ho v, A. N. Manasho v Nucl. Phys. B ( ) 0, hep-th/ 00; Phys. R ev. L ett.  ( ) , hep-th/ 000, and the consequences for the stabilit y of the xed p oin t of the non-linear  mo del discussed. Some nite size calculations are rep orted in S. Caracciolo and A. P elissetto, preprin t hep-lat/ 000.   Gross{Nev eu and Gross{Nev eu{Y uk a w a Mo dels T o illustrate the tec hniques dev elop ed in sections , , w e no w discuss mo dels with fermions exhibiting the phenomenon of c hiral phase transition. Again w e consider t w o di eren t eld theory mo dels with the same symmetries, the Gross{ Nev eu (GN) and the Gross{Nev eu{Y uk a w a (GNY) mo dels. The GN mo del is renormalizable in t w o dimensions, and describ es in p erturbation theory only one phase, the symmetric phase. The GNY mo del is renormalizable in four dimen-sions and instead allo ws a p erturbativ e analysis of the c hiral phase transition. 
W e no w sho w that the ph ysics of these mo dels can indeed b e studied b y the same tec hniques as ferromagnetic systems, that is R G equations near t w o and four dimensions, and large N expansion. . The Gr oss{Neveu mo del The GN mo del is describ ed in terms of a U (N ) symmetric action for a set of N massless Dirac fermions f i ;  i g: S  ;  = Z d d x h    @ +  G    i : The GN mo del has in ev en dimensions a discrete c hiral symmetry: ! S ;  !  S ; (:) whic h prev en ts the addition of a fermion mass term while in o dd dimensions a mass term breaks space parit y . Actually the t w o symmetry op erations can b e written in a form x = fx  ; x ; : : : ; x d g ! ~ x = fx  ; x ; : : : ; x d g;  (x) !  ( ~ x );  (x) !  ( ~ x)  ; v alid in all dimensions. This mo del illustrates the ph ysics of sp on taneous fermion mass generation and, in ev en dimensions, c hiral symmetry breaking. It is renormalizable and asymptotically free in t w o dimensions. Ho w ev er, as in the case of the non-linear  mo del, the p erturbativ e GN mo del describ es only one phase. The main di erence is that the role of the sp on taneously brok en and the explicitl y symmetric phase are in terc hanged. Indeed it is alw a ys the massless phase whic h is unstable in lo w dimensions. Since the symmetry breaking mec hanism is non-p erturbativ e it will ev en tually b e instructiv e to compare the GN mo del with a di eren t mo del with the same symmetries: the Gross{Nev eu{Y uk a w a mo del. R G e quations ne ar and in two dimensions. The GN mo del is renormalizable in t w o dimensions, and in p erturbation theory describ es only the massless sym-metric phase. P erturbativ e calculations in t w o dimensions can b e made with  an IR cut-o of the form of a mass term M  , whic h breaks softly the c hiral symmetry . It is p ossible to use dimensional regularization in practical calcula-tions. Note that in t w o dimensions the symmetry group is really O (N ), as one v eri es after some relab elling of the elds. Therefore the (  ) in teraction is m ultiplicativ ely renormalized. It is con v enien t to in tro duce here a dimensionless coupling constan t u = G d : (:) As a function of the cut-o the bare correlation functions satisfy the R G equations:  @ @ + (u) @ @ u n  (u)  M (u)M @ @ M  (n) (p i ; u; M ; ) = 0 : (:) A direct calculation of the -function in d = + " dimension yields: (u) = "u (N 0 ) u  + (N 0 ) u  + (N 0 )(N 0 )  u  + O u   ; (:) Note that for d = N 0 = N . The sp ecial case N 0 = , for whic h the -function v anishes iden tically in t w o dimensions, corresp onds to the Thirring mo del (b ecause for N 0 = (   ) = (  ) ). The latter mo del is to the equiv alen t the sine-Gordon or the O () v ector mo del. Finally the eld and mass R G functions are  (u) = N 0   u (N 0 )(N 0 )  u + (N 0 )(N 0 )(N 0 )   u  ; (:)  M (u) = N 0   u N 0   u (N 0 )(N 0 )  u : As in the case of the non-linear  mo del, the solution of the R G equations (:) in v olv es a length scale  of the t yp e of a correlation length whic h is a R G in v arian t   (u)  (u) / exp  Z u du 0 (u 0 )  : (:) Two dimensions. F or d = the mo del is asymptotically free. In the c hiral the-ory (M = 0) the sp ectrum, then, is non-p erturbativ e, and man y argumen ts lead to the conclusion that the c hiral symmetry is alw a ys brok en and a fermion mass generated. F rom the statistical p oin t of view this corresp onds to the existence of a gap in the sp ectrum of fermion excitation (as in a sup er uid or sup ercon-ductor). 
All masses are prop ortional to the mass parameter (u) whic h is a R G in v arian t. Its dep endence in the coupling constan t is giv en b y equation (:): (u) / u =(N 0 ) e  =(N 0 )u  + O (u)  : (:)  W e see that the con tin uum limit, whic h is reac hed when the masses are small compared to the cut-o , corresp onds to u ! 0. S -matrix considerations ha v e then led to the conjecture that, for N nite, the sp ectrum is: m n = (u) (N )  sin  n (N )  ; n = ; : : : < N ; N > ; T o eac h mass v alue corresp onds a represen tation of the O (N ) group. The nature of the represen tation leads to the conclusion that n o dd corresp onds to fermions and n ev en to b osons. This result is consisten t with the sp ectrum for N large ev aluated b y semiclas-sical metho ds. In particular the ratio of the masses of the fundamen tal fermion and the lo w est lying b oson is: m  m = cos   (N )  = + O (= N ): (:) The large N limit will b e reco v ered in section .. Note that the t w o rst v alues of N are sp ecial, the mo del N = is conjectured to b e equiv alen t to t w o decoupled sine-Gordon mo dels. Dimension d = + ". As in the case of the  -mo del, asymptotic freedom implies the existence of a non-trivial UV xed p oin t u c , in + " dimension u c =  N 0 "   " N 0  + O "  : u c is also the critical coupling constan t for the transition b et w een a phase in whic h the c hiral symmetry is sp on taneously brok en and a massless small u phase. A t the xed p oin t one nds the correlation length exp onen t  :   = 0 (u c ) = " " N 0 + O "  : (: ) The fermion eld dimension [ ] is: [ ] = d  +  (u c ) =  + " + N 0  (N 0 ) " + O "  : (:0) The dimension of the comp osite eld  =  is giv en b y [ ] = d   M (u c ) =  " N 0 : As for the  -mo del the existence of a non-trivial UV xed p oin t implies that large momen tum b eha viour is not giv en b y p erturbation theory ab o v e t w o dimensions, and this explains wh y the p erturbativ e result that the mo del cannot b e de ned in higher dimensions cannot b e trusted. Ho w ev er, to in v estigate whether the " expansion mak es sense b ey ond an in nitesimal neigh b ourho o d of dimension t w o other metho ds are required, lik e the = N expansion whic h will b e considered in section ..  . The Gr oss{Neveu{Y ukawa mo del The Gross{Nev eu{Y uk a w a (GNY) mo del has the same c hiral and U (N ) sym-metries as the GN mo del. The action is (" =  d): S  ; ;   = Z d d x      @ + g "=   +  (@   ) +  m  +  ! "    ; (:) where  is an additional scalar eld, the momen tum cut-o , and g ;  dimen-sionless \bare" i.e. e ectiv e coupling constan ts at large mom en tum scale . The action still has a re ection symmetry ,  transforming in to  when the fermions transform b y (:). In con trast with the GN mo del, ho w ev er, the c hi-ral transition can here b e discussed b y p erturbativ e metho ds. An analogous situation has already b een encoun tered when comparing the ( ) eld theory with the non-linear  mo del. Ev en more, the GN mo del is renormalizable in dimension t w o and the GNY mo del in dimension four. The phase tr ansition. Examining the action (:) w e see that in the tree appro ximation when m is negativ e the c hiral symmetry is sp on taneously brok en. The  exp ectation v alue giv es a mass to the fermions, a mec hanism reminiscen t of the Standard Mo del of w eak-electromagnetic in teractions: m = g h  i ; (:) while the  mass then is: m  =  g m : (:) As a result of in teractions the transition v alue m c of the parameter m will b e mo di ed. 
In what follo ws w e set m = m c + t ; (:) where the new parameter t, in the language of phase transitions, pla ys the role of the deviation from the critical temp erature. T o study the mo del b ey ond the tree appro ximation w e no w discuss R G equa-tions near four dimensions. . R G e quations ne ar four dimensions The mo del (:) is trivial ab o v e four dimensions, renormalizable in four di-mensions and can th us b e studied near dimension  b y R G tec hniques. Fiv e renormalization constan ts are required, corresp onding to the t w o eld renormal-izations, the  mass, and the t w o coupling constan ts. The R G equations th us in v olv e v e R G functions. The PI correlation functions (l;n) , for l and n  elds, then satisfy  @ @ + g @ @ g +  @ @   l   n   m t @ @ t  (l;n) = 0 : (:)  Fig.  One-lo op diagrams: fermions are represen ted b y solid lines. The R G functions. The R G functions at one-lo op order in v olv e the calculation of the diagrams of gure . One nds:  = " +     + N g N g   ; (:) g = "g + N +  g  : (:) Note that in these expressions for con v enience w e ha v e set in the algebra of matrices tr  =  as in four dimensions. T o extrap olate the results to other dimensions one has to replace ev erywhere N b y N 0 =, where N 0 = N tr  is the total n um b er of fermion degrees of freedom. Dimension four. In four dimensions the origin  = g = 0 is IR stable. Indeed the second equation implies that g go es to zero, and the rst then that  also go es to zero. As a consequence if the bare coupling constan ts are generic, i.e. if the e ectiv e couplings at cut-o scale are of order , the e ectiv e couplings at scale  go to zero and in a w a y asymptotically indep enden t from the bare couplings. One nds g ()   (N + ) ln (=) ; ()   ~  ln (=) ; where w e ha v e de ned ~  = N (N + ) (N ) + p N + N + : (:) This result allo ws to use renormalized p erturbation theory to calculation ph ysical observ ables. F or example w e can ev aluate the ratio b et w een the masses of the scalar and fermion elds. It is then optimal to tak e for  a v alue of order h i. A  remark able consequence follo ws: the ratio (:) of scalar and fermion masses is xed m  m =  g = N (N ) + p N + N + ; (: ) while in the classical limit it seems arbitrary . Of course if the bare couplings are \unnaturally" small the same will apply to the renormalized couplings at scale  and the ratio will b e mo di ed. Dimension d =  ". One then nds a non-trivial IR xed p oin t (w e recall N 0 = N tr ): g =  " N 0 +  ;  =  " ~  : (:0) The matrix of deriv ativ es of the -functions has t w o eigen v alues ! ; ! 0 , !  = "; ! = " p N 0 + N 0 + =(N 0 + ); (:) and th us the xed p oin t is IR stable. The rst eigen v alue is alw a ys the smallest. 
The eld renormalization R G functions are at the same order:   = N 0  g ;  =   g : (:) A t the xed p oin t one nds   = N 0 " N 0 +  ;  = " (N 0 + ) ; (:) and th us the dimensions d and d  of the elds d = N 0 +  (N 0 + ) " ; d  =  N 0 +  " : (:) The R G function  m corresp onding to the mass op erator is at one-lo op order:  m =     ; and th us the exp onen t  :   = +  m = " ~  N 0 " N 0 +  = " N 0 +  + p N 0 + N 0 +  (N 0 + ) : (:) Finally w e can ev aluate the ratio of masses (:) at the xed p oin t: m  m =  g = N 0 (N 0 ) + p N 0 + N 0 +  : In d =  and d = " the existence of an IR xed p oin t has the same consequence: If w e assume that the  exp ectation v alue is m uc h smaller than the cut-o and that the coupling constan ts are generic at the cut-o scale, then the r atio of fermion and sc alar masses is xe d.  . GNY and GN mo dels in the lar ge N limit W e no w sho w that the GN mo del pla ys with resp ect to the GNY mo del (:) the role the non-linear  -mo del pla ys with resp ect to the  eld the-ory . F or this purp ose w e start from the action (:) of the GNY mo del and in tegrate o v er N  fermion elds. W e also rescale for con v enience (d)= g  in to  , and then get the large N action: S N  ; ;   = Z d d x   (  @ +  ) + d   g ( @   ) + m g  +  !g     (N ) tr ln (  @ +  ) : (:) T o tak e the large N limit w e assume  nite and g ;  = O (= N ). Let us call V ( ) the action p er unit v olume for constan t eld  (x) and v an-ishing fermion elds V ( ) = d  m g  +  !g     N tr ln (  @ +  ) = d  m g  +  !g     N 0 Z d d q ( ) d ln (q +  ): (:) The exp ectation v alue of  for N large is giv en b y a gap equation: V 0 ( ) d = m g  +  g   N 0 d  ( ) d Z d d q q +  = 0 : (:) It is also useful to calculate the second deriv ativ e to c hec k stabilit y of the extrema V 00 ( ) d = m g +  g   + N 0 d Z d d q ( ) d  q (q +  ) : The solution  = 0 is stable pro vided V 00 (0) > 0 , m g > N 0 d  ( ) d Z d d q q : Instead the non-trivial solution to the gap equation exists only for m g > N 0 d  ( ) d Z d d q q ; but then it is stable. W e conclude that the critical temp erature or critical bare mass is giv en b y: m c g = N 0 d  ( ) d Z d d q q ; (: )  whic h sho ws that the fermions fa v our the c hiral transition. In particular when d approac hes w e observ e that m c ! + whic h implies that the c hiral symmetry is alw a ys brok en in dimensions. Using equation (: ) and setting t = d (m m c )=g ; (:0) w e can write the equation for the non-trivial solution t + d  g   + N 0  ( ) d Z d d q q (q +  ) = 0 : W e no w expand the in tegral for  small (equation (:)) D  ( ) =  (  ) d Z d d q q ( q +  ) = C (d) " a(d) " + O   "  : (:) Keeping only the leading terms for t ! 0 w e obtain for d <  the scaling b eha viour   (t= N 0 C ) =(d) : (:) Since, at leading order, the fermion mass m =  , it immediately follo ws that the exp onen t  is also giv en b y:    =(d ) )   =  d : (:) A t leading order, for N ! ,  has the same v alue as in the non-linear  -mo del. A t leading order in the scaling limit the thermo dynam ical (or e ectiv e) p oten-tial V ( ) then b ecomes V ( ) =  t + (N 0 =d)C (d)j j d : (:) W e note that, although in terms of the  - eld the mo del has a simple Ising-lik e symmetry , the scaling equation of state for large N is quite di eren t. W e read from the large N action that at this order  = 0. Finally from the large N action w e can calculate the  -propagator at leading order. 
Quite generally, using the saddle point equation, one finds for the inverse $\sigma$-propagator in the massive phase:
$$\Delta_\sigma^{-1}(p) = \Lambda^{d-4}\Bigl(\frac{p^2}{g^2} + \frac{\lambda}{3!\,g^4}\,\sigma^2\Bigr) + \frac{N'}{(2\pi)^d}\,\bigl(p^2 + 4\sigma^2\bigr)\int\frac{d^dq}{(q^2+\sigma^2)\,\bigl[(p+q)^2+\sigma^2\bigr]}\,\ldots$$
We see that in the scaling limit $p,\sigma \to 0$ the integral yields the leading contribution. Neglecting corrections to scaling, we find that the inverse propagator vanishes for $p^2 = -4\sigma^2$, which is just the $\bar\psi\psi$ threshold. Thus, in this limit, $m_\sigma = 2m_\psi$ in all dimensions, a result consistent with the $d = 2$ exact value. At the transition the propagator reduces to
$$\Delta_\sigma^{-1}(p) \sim N'\,b(\varepsilon)\,p^{d-2},$$
with $b(\varepsilon)$ an explicit constant involving $\sin(\pi d/2)$, $\Gamma(d/2)$, $(d-2)$ and the loop factor $N_d$. The result is consistent with the value of $\eta_\sigma$ found above. Let us finally note that the behaviour of the propagator at the critical point, $\Delta_\sigma(p) \propto p^{2-d}$, implies for the field $\sigma$ the canonical dimension in the large N expansion, for $2 \le d \le 4$:
$$[\sigma] = 1.$$

Corrections to scaling and the IR fixed point. The IR fixed point is determined by demanding the cancellation of the leading corrections to scaling. Let us thus consider the effective potential $V(\sigma)$. The leading correction to scaling is proportional to
$$\Bigl(\frac{\lambda}{4!\,g^4} - \ldots N'\,a(d)\Bigr)\,\Lambda^{-\varepsilon}\,\sigma^4, \qquad \bigl(a(d) \simeq \ldots/\varepsilon \ \text{for}\ \varepsilon\to0\bigr).$$
Demanding the cancellation of the coefficient of $\sigma^4$, we obtain a relation between $\lambda$ and $g^4$,
$$\frac{\lambda}{g^4} = \ldots\,N'\,a(d),$$
a result consistent, for $N$ large, with the $\varepsilon$-expansion. In the same way it is possible to calculate the leading correction to the $\sigma$-propagator. Demanding its cancellation we obtain
$$\frac{p^2}{g^2} + \frac{\lambda\,\sigma^2}{\ldots\,g^4} - N'\,\bigl(p^2 + \ldots\sigma^2\bigr)\,a(d) = 0.$$
The coefficient of $\sigma^2$ cancels as a consequence of the previous relation, and the cancellation of the coefficient of $p^2$ yields
$$\frac{1}{g^2} = N'\,a(d) \;\Rightarrow\; g^2 = \frac{\ldots\,\varepsilon}{N'} + O(\varepsilon^2),$$
in agreement with the $\varepsilon$-expansion for $N$ large.

The relation to the GN model for $2 \le d \le 4$. We have seen that the terms $(\partial_\mu\sigma)^2$ and $\sigma^4$ of the large N action, which have canonical dimension 4, are irrelevant in the IR critical region for $d \le 4$. We recognize a situation already encountered in the $(\phi^2)^2$ field theory in the large N limit. In the scaling region it is possible to omit them, and one then finds the action
$$S_N(\bar\psi,\psi,\sigma) = \int d^dx\,\Bigl[-\bar\psi(\not\partial+\sigma)\psi + \Lambda^{d-4}\,\frac{m^2}{2g^2}\,\sigma^2\Bigr].$$
The integral over the $\sigma$ field can be performed explicitly and yields the action of the GN model:
$$S_N(\bar\psi,\psi) = \int d^dx\,\Bigl[-\bar\psi\,\not\partial\,\psi - \frac{g^2\,\Lambda^{4-d}}{2m^2}\,(\bar\psi\psi)^2\Bigr].$$
The GN and GNY models are thus equivalent for the large-distance physics. In the GN model, in the large N limit, the $\sigma$ particle appears as a $\bar\psi\psi$ bound state at threshold. Conversely, it would seem that the GN model depends on a smaller number of parameters than its renormalizable extension. Again, this problem is only interesting in four dimensions, where corrections to scaling, i.e. to free field theory, are important. However, if we examine the divergences of the term $\mathrm{tr}\ln(\not\partial+\sigma)$ in the effective action relevant for the large N limit, we find a local polynomial in $\sigma$ of the form
$$\int d^4x\,\bigl[A\,\sigma^2(x) + B\,(\partial_\mu\sigma)^2 + C\,\sigma^4(x)\bigr].$$
Therefore the value of the determinant can be modified by a local polynomial of this form by changing the way the cut-off is implemented: additional parameters, as in the case of the non-linear $\sigma$-model, are hidden in the cut-off procedure. Near two dimensions these operators can be identified with $(\bar\psi\psi)^2$, $[\partial_\mu(\bar\psi\psi)]^2$, $(\bar\psi\psi)^4$. It is clear that by changing the cut-off procedure we change the amplitude of higher-dimension operators. These bare operators in the IR limit have a component on all lower-dimensional renormalized operators.
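The elimination of $\sigma$ is a one-line Gaussian integration. A minimal sketch, assuming the normalizations of the reduced action above (quadratic coefficient $\Lambda^{d-4}m^2/g^2$, Yukawa coupling $-\sigma\bar\psi\psi$ in the action):
$$\int[d\sigma]\,\exp\Bigl[\int d^dx\,\Bigl(\sigma\,\bar\psi\psi - \frac{\Lambda^{d-4}m^2}{2g^2}\,\sigma^2\Bigr)\Bigr] \;\propto\; \exp\Bigl[\int d^dx\,\frac{g^2\,\Lambda^{4-d}}{2m^2}\,(\bar\psi\psi)^2\Bigr],$$
obtained by completing the square in $\sigma$: with $A = \Lambda^{d-4}m^2/g^2$, the exponent is $-\tfrac{A}{2}\bigl(\sigma - \bar\psi\psi/A\bigr)^2 + (\bar\psi\psi)^2/2A$, and the shifted Gaussian integral is field independent.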
Note finally that we could have added to the GNY model an explicit breaking term linear in the $\sigma$ field, which becomes a fermion mass term in the GN model, and which would have played the role of the magnetic field of ferromagnets.

The large N expansion

Using the large N dimension of fields and power-counting arguments one can then prove that the $1/N$ expansion is renormalizable, with arguments quite similar to those presented earlier for the scalar theory.

Alternative theory. To prove that the large N expansion is renormalizable one proceeds as in the case of the scalar theory. One starts from a critical action with an additional term quadratic in $\sigma$, which generates the large-N $\sigma$-propagator already in perturbation theory:
$$S(\bar\psi,\psi,\sigma) = \int d^dx\,\Bigl[-\bar\psi(\not\partial+\sigma)\psi + \frac{1}{2v}\,\sigma\,(-\partial^2)^{d/2-1}\,\sigma\Bigr].$$
The initial theory is recovered in the limit $v \to \infty$. One then rescales $\sigma$ into $v\sigma$. The model is renormalizable without $\sigma$-field renormalization, because divergences generate only local counter-terms:
$$S_r(\bar\psi,\psi,\sigma) = \int d^dx\,\Bigl[-Z_\psi\,\bar\psi\bigl(\not\partial + v_r Z_v\,\sigma\bigr)\psi + \tfrac12\,\sigma\,(-\partial^2)^{d/2-1}\,\sigma\Bigr].$$
RG equations follow:
$$\Bigl[\Lambda\frac{\partial}{\partial\Lambda} + \beta_v(v)\frac{\partial}{\partial v} - \frac{n}{2}\,\eta_\psi(v)\Bigr]\Gamma^{(l,n)} = 0.$$
Again, the large N expansion is obtained by first summing the bubble contributions to the $\sigma$-propagator. We define
$$D(v) = b(\varepsilon) + N'\,v^2\,\ldots$$
Then the large N $\sigma$-propagator reads
$$\langle\sigma\sigma\rangle(p) = \frac{\ldots}{D(v)}\;p^{2-d}.$$
The solution of the RG equations can be written in the standard form,
$$\Gamma^{(l,n)}(\lambda p;\,v,\Lambda) = Z^{n/2}(\lambda)\,\lambda^{\ldots}\,\Gamma^{(l,n)}(p;\,v(\lambda),\Lambda),$$
with the usual definitions
$$\lambda\,\frac{dv}{d\lambda} = -\beta_v\bigl(v(\lambda)\bigr), \qquad \frac{d\ln Z}{d\ln\lambda} = -\eta_\psi\bigl(v(\lambda)\bigr).$$
We are interested in the neighbourhood of the fixed point $v = \infty$. Then the RG function $\eta_\psi(v)$ approaches the exponent $\eta_\psi$. The flow equation for the coupling constant becomes
$$\lambda\,\frac{dv}{d\lambda} = \ldots\,v \;\Rightarrow\; v(\lambda) \propto \lambda^{\ldots}.$$
We again note that a correlation function with $l$ $\sigma$ fields becomes proportional to $v^l$. Therefore
$$\Gamma^{(l,n)}(\lambda p;\,v,\Lambda) \propto \lambda^{\,d - (\ldots)\,l - n(d+\ldots)/2}.$$
We conclude
$$d_\sigma = \tfrac12(d-2+\eta_\sigma) = 1 \;\Leftrightarrow\; \eta_\sigma = 4-d.$$

RG functions at order $1/N$. A new generic integral is useful here,
$$\frac{1}{(2\pi)^d}\int d^dq\,\frac{\not p + \not q}{\bigl[(p+q)^2\bigr]^{\alpha}\bigl[q^2\bigr]^{\beta}} = \not p\,\bigl(p^2\bigr)^{d/2-\alpha-\beta}\,\frac{\Gamma(\ldots)\,\Gamma(\ldots)\,\Gamma(\ldots)}{(4\pi)^{d/2}\,\Gamma(\alpha)\,\Gamma(\beta)\,\Gamma(\ldots)}\,.$$
We first calculate the $1/N$ contribution to the fermion two-point function at the critical point (from a diagram similar to the one in the figure):
$$\Gamma^{(2)}_\psi(p) = i\not p + i\,\frac{v^2}{b(\varepsilon)D(v)}\,\frac{1}{(2\pi)^d}\int d^dq\,\frac{\not p + \not q}{q^{d-2}\,(p+q)^2}\,.$$
We need the coefficient of $\not p\,\ln(\Lambda/p)$. Since we work only at one-loop order, we again replace the $\sigma$-propagator $1/q^{d-2}$ by a generic power, and send the cut-off to infinity. The residue of the pole gives the coefficient of the term $\not p\ln\Lambda$, and the finite part the $\not p\ln p$ contribution. We find
$$\Gamma^{(2)}_\psi(p) = i\not p\,\Bigl[1 + \frac{v^2}{b(\varepsilon)D(v)}\,N_d\,\frac{d-\ldots}{d}\,\ln(\Lambda/p)\Bigr],$$
where $N_d$ is the loop factor. Expressing that the two-point function satisfies the RG equation, we immediately obtain the RG function
$$\eta_\psi(v) = \frac{v^2}{D(v)}\,\frac{(d-\ldots)}{d}\,X_1\,,$$
where $X_1$ is the combination of loop factors defined above. We then calculate the $\langle\sigma\sigma\rangle$ function at order $1/N$,
$$\Gamma^{(0,2)}(p) = \frac{1}{v^2} + A_1\,D^{-2}(v)\,v^{\ldots}\,\ln\Lambda\,, \qquad A_1 = \ldots\,b(\varepsilon)\,N_d\,X_1\,,$$
where $A_1$ corresponds to the first diagram of the figure; the second diagram vanishes because the $\sigma$ three-point function vanishes for symmetry reasons. The $\beta$-function follows,
$$\beta_v(v) = (d-\ldots)\,v\,\Bigl[\ldots - \frac{X_1}{D^2(v)}\,\ldots\Bigr],$$
and thus
$$\eta_\psi = \frac{(d-\ldots)}{d\,N'}\,X_1\,.$$
The exponents $\eta_\psi$ and $\eta_\sigma$ at order $1/N$, and thus the corresponding field dimensions $d_\psi$, $d_\sigma$, follow. For $d = 4-\varepsilon$ we find $\eta_\psi \sim \ldots\,\varepsilon/N'$, a result consistent with the $\varepsilon$-expansion above for $N$ large. For $d = 2+\varepsilon$ one finds instead $\eta_\psi \sim \ldots\,\varepsilon^2/N'$.
The dimension $d_\sigma$ of the field $\sigma$ is
$$d_\sigma = \tfrac12(d+\eta_\sigma) - \ldots = 1 - \frac{(d-\ldots)}{d\,N'}\,X_1 + O(1/N'^2).$$
A similar evaluation of the $\langle\sigma\sigma\rangle$ function allows one to determine the exponent $\nu$ to order $1/N$:
$$\frac{1}{\nu} = d - 2 + \frac{(d-\ldots)(d-\ldots)}{d\,N'}\,X_1\,.$$
Actually all exponents are known to order $1/N^2$, except $\eta$, which is known to order $1/N^3$.

Other models with chiral fermions

Let us for completeness briefly examine two other models with chiral fermions in which large N techniques can be applied: massless QED and the U(N) massless Thirring model.

Massless electrodynamics. Let us give another example with a structure different from a Yukawa-type theory. We now consider a model of N charged massless fermion fields $\bar\psi,\psi$ coupled through an abelian gauge field $A_\mu$ (massless QED):
$$S(\bar\psi,\psi,A_\mu) = \int d^dx\,\Bigl[\frac{1}{4e^2}\,F_{\mu\nu}^2(x) - \bar\psi(x)\,(\not\partial + i\not A)\,\psi(x)\Bigr].$$
This model possesses, in addition to the U(1) gauge invariance, a chiral $U(N)\times U(N)$ symmetry, because the fermions are massless. Again, the interesting question is whether the model exhibits, in some dimension $2 \le d \le 4$, a spontaneous breaking of the chiral symmetry.

Dimension $d = 4-\varepsilon$. In terms of the standard coupling constant $\alpha \propto e^2$, the RG function reads (taking $\mathrm{tr}\,\mathbf{1} = 4$ in the space of $\gamma$ matrices)
$$\beta_\alpha(\alpha) = -\varepsilon\,\alpha + \ldots N\,\alpha^2 + \ldots N\,\alpha^3 - \ldots N(N+\ldots)\,\alpha^4 + O(\alpha^5).$$
The model is free at low momentum in four dimensions. Therefore no phase transition is expected, at least for $e$ small enough. A hypothetical phase transition would rely on the existence of non-trivial fixed points outside the perturbative regime. In the perturbative framework the model provides an example of the famous triviality problem. For a generic effective coupling constant at the cut-off scale (i.e. bare coupling), the effective coupling constant at scale $\mu$ is given by
$$\alpha(\mu) \propto e^2(\mu) \sim \frac{\ldots}{N\,\ln(\Lambda/\mu)}\,.$$
This result can be used to bound N. In $4-\varepsilon$ dimensions one instead finds a non-trivial IR fixed point, corresponding to a coupling constant
$$e^{*2} = \frac{\ldots\,\varepsilon + O(\varepsilon^2)}{N'}\,, \qquad (N' = N\,\mathrm{tr}\,\mathbf{1}),$$
and correlation functions have a scaling behaviour at long distance. As we have discussed in the case of the $(\phi^2)^2$ field theory, the effective coupling constant at large distance becomes close to the IR fixed point, except when the initial coupling constant is very small. The RG function associated with the field renormalization is also known at this order, but it is a non-physical quantity since it is gauge dependent,
$$\eta_\psi = \ldots\,\alpha + \ldots\,\alpha^2 + O(\alpha^3),$$
where the gauge is specified by a term $(\partial_\mu A_\mu)^2/2\xi$.

The large N limit. To evaluate correlation functions for $N$ large, one first integrates over the fermion fields and obtains the effective action
$$S(A_\mu) = \int d^dx\,\frac{1}{4e^2}\,F_{\mu\nu}^2(x) - N\,\mathrm{tr}\ln(\not\partial + i\not A).$$
The large N limit is taken with $e^2 N$ fixed. Therefore, at leading order, only $S_2(A_\mu)$, the quadratic term in $A_\mu$ in the expansion of the fermion determinant, contributes. A short calculation yields
$$S_2(A_\mu) = \frac{N'}{2}\int\frac{d^dk}{(2\pi)^d}\,A_\mu(k)\,\bigl(k^2\delta_{\mu\nu} - k_\mu k_\nu\bigr)A_\nu(-k)\,K(k),$$
with
$$K(k) = \frac{\ldots}{d(d-\ldots)}\,\Bigl[b(\varepsilon)\,k^{d-4} - \ldots\,a(d)\,\Lambda^{d-4}\Bigr] + O(\ldots),$$
where $a(d)$ is a regularization-dependent constant. For $d < 4$ the leading term in the IR region comes from the integral. The behaviour at small momentum of the vector field is modified, which confirms the existence of a non-trivial IR fixed point. The fixed point is found by demanding the cancellation of the leading corrections to scaling coming from $F_{\mu\nu}^2$ and the divergent part of the loop integral,
$$e^{*2} = \frac{(d-\ldots)\,\ldots}{(d-\ldots)\,a(d)\,N'}\,\Lambda^{4-d}.$$
However, there is still no indication of chiral symmetry breaking.
Power counting within the $1/N$ expansion confirms that the IR singularities have been eliminated, because the large N vector propagator is less singular than in perturbation theory. Of course this result is valid only for $N$ large. Since the long-range forces generated by the gauge coupling have not been totally eliminated, the problem remains open for $d$ not close to four, or for $e^2$ not very small and $N$ finite. Some numerical simulations indeed suggest a chiral phase transition for $d = 3$ and $d = 4$, $N \le N_c$. The exponents corresponding to the IR fixed point have been calculated up to order $1/N^2$; at order $1/N$, $\eta_\psi$, $\eta_m$ and $1/\nu$ are given by explicit rational functions of $d$ multiplying $X_1/N$.

Finally, note that in the $d = 2$ limit the integral generates a contribution $N e^2/\pi$ times the propagator of the free gauge field,
$$N'\,K(k) \;\xrightarrow{\;d\to2\;}\; \frac{Ne^2}{\pi}\,\frac{1}{k^2}\,\ldots$$
As a direct analysis of the $d = 2$ case confirms, this corresponds to a massive bound state, of mass squared $Ne^2/\pi$. However, for generic values of the coupling constant, this mass is of the order of the cut-off. Only when $e^2$ is small with respect to the microscopic scale, as one assumes in conventional renormalized perturbation theory, does this mass correspond in the continuum limit to a propagating particle.

Two dimensions. As stated above, we now assume that the dimensional quantity $e^2$ is small in the microscopic scale. The model is then a simple extension of the Schwinger model and can be solved exactly in the same way. For $N = 1$ the model exhibits the simplest example of a chiral anomaly, and illustrates the properties of confinement and spontaneous chiral symmetry breaking. For $N > 1$ the situation is more subtle. The neutral $\bar\psi\psi$ two-point function decays algebraically,
$$\bigl\langle\bar\psi\psi(x)\;\bar\psi\psi(0)\bigr\rangle \propto x^{-2/N},$$
indicating the presence of a massless mode and a vanishing chiral condensate. Instead, if we calculate the two-point function of the composite operator
$$O_N(x) = \prod_{i=1}^{N}\bar\psi_i(x)\,\psi_i(x),$$
we find
$$\langle O_N(x)\,O_N(0)\rangle \propto \mathrm{const.}$$
We have thus identified an operator which has a non-zero expectation value. As a consequence of the fermion antisymmetry, if we perform a transformation under the group $U(N)\times U(N)$ corresponding to matrices $U_+, U_-$, the operator is multiplied by $\det U_+\,/\det U_-$. The operator is thus invariant under the group $SU(N)\times SU(N)\times U(1)$. Its non-vanishing expectation value is the sign of the spontaneous breaking of the remaining U(1) chiral group.

The U(N) Thirring model. We now consider the model
$$S(\bar\psi,\psi) = \int d^dx\,\Bigl[-\bar\psi(\not\partial + m_0)\psi - \tfrac12\,g\,J_\mu J_\mu\Bigr], \qquad J_\mu = \bar\psi\gamma_\mu\psi.$$
The special case $N = 1$ corresponds to the simple Thirring model. In two dimensions it is then equivalent to a free massless boson field theory (with a mass term for the fermions one obtains the sine–Gordon model). Both to bosonize the model in $d = 2$ and to study the large N properties, one introduces an abelian gauge field $A_\mu$ coupled to the current $J_\mu$:
$$-\tfrac12\,g\,J_\mu J_\mu \;\longrightarrow\; \frac{A_\mu^2}{2g} + i\,A_\mu J_\mu\,.$$
One then finds massive QED without the $F_{\mu\nu}^2$ term:
$$S(A_\mu,\bar\psi,\psi) = \int d^2x\,\Bigl[-\bar\psi(\not\partial + i\not A + m_0)\psi + \frac{A_\mu^2}{2g}\Bigr].$$
If we integrate over the fermions, the fermion determinant generates a kinetic term for the gauge field. For $m_0 = 0$ we are thus in a situation very similar to massless QED, except that the gauge field is massive.
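The replacement of the current–current term by a gauge field is again a Gaussian (Hubbard–Stratonovich) identity. A one-line sketch in its real-coupling form (the sign conventions of the $iA_\mu J_\mu$ version above correspond to the opposite sign of $g$ and depend on the Euclidean conventions used):
$$\exp\Bigl(\frac{g}{2}\int d^dx\,J_\mu J_\mu\Bigr) \;\propto\; \int[dA_\mu]\,\exp\Bigl(-\int d^dx\,\Bigl[\frac{A_\mu A_\mu}{2g} - A_\mu J_\mu\Bigr]\Bigr),$$
which follows from completing the square, $A_\mu \to A_\mu + g J_\mu$: the exponent becomes $-(A_\mu - gJ_\mu)^2/2g + \tfrac{g}{2}J_\mu J_\mu$, and the shifted Gaussian integral over $A_\mu$ is field independent.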
Bibliographical Notes

Nambu and Jona-Lasinio were the first to propose a mechanism based on a four-fermion interaction with U(1) chiral invariance to generate nucleon, scalar and pseudo-scalar ($\sigma$, $\pi$) masses:
Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122 (1961) 345.
The difficulties connected with this approach (approximate treatment of Dyson–Schwinger equations without a small parameter, non-renormalizable theory with cut-off) have been partially solved in
K.G. Wilson, Phys. Rev. D7 (1973) 2911; D.J. Gross and A. Neveu, Phys. Rev. D10 (1974) 3235.
In these articles the $1/N$ expansion was introduced and the existence of IR fixed points pointed out.
The semi-classical spectrum for $d = 2$ in the large N limit of the GN model (with discrete chiral invariance) was obtained from soliton calculations in
R. Dashen, B. Hasslacher and A. Neveu, Phys. Rev. D12 (1975) 2443.
This study, as well as some additional considerations concerning the factorization of S-matrix elements at order $1/N$, have led to a conjecture of the exact spectrum at N finite:
A.B. Zamolodchikov and Al.B. Zamolodchikov, Phys. Lett. B… (19…) …; M. Karowski and H.J. Thun, Nucl. Phys. B… (19…) ….
See also P. Forgacs, F. Niedermayer and P. Weisz, Nucl. Phys. B… (19…) ….
The properties of the NJL model in two dimensions are discussed in
J.H. Lowenstein, Recent Advances in Field Theory and Statistical Mechanics, Les Houches, J.-B. Zuber and R. Stora eds. (Elsevier Science Pub., Amsterdam).
The thermodynamics of the GN and NJL models at all temperatures and densities at $d = 2$ for $N \to \infty$ are discussed in
R.F. Dashen, S.K. Ma and R. Rajaraman, Phys. Rev. D… (19…) …,
where the existence of instantons responsible for the symmetry restoration is demonstrated. More recently a more complete analysis has appeared in
A. Barducci, R. Casalbuoni, M. Modugno, G. Pettini and R. Gatto, Phys. Rev. D… (19…) ….
The large N expansion in $d = 3$ has been discussed in
B. Rosenstein, B. Warr and S.H. Park, Phys. Rev. Lett. … (19…) …; Phys. Rep. … (19…) …; G. Gat, A. Kovner and B. Rosenstein, Nucl. Phys. B… (19…) ….
The relation with the GNY model is discussed in
A. Hasenfratz, P. Hasenfratz, K. Jansen, J. Kuti and Y. Shen, Nucl. Phys. B… (19…) …; J. Zinn-Justin, Nucl. Phys. B… (19…) ….
Recent investigations at $d = 4$ can be found in
P.M. Fishbane and R.E. Norton, Phys. Rev. D… (19…) …; B. Rosenstein, H.-L. Yu and A. Kovner, Phys. Lett. B… (19…) ….
A number of $1/N$ and $2+\varepsilon$ calculations have been reported:
W. Wetzel, Phys. Lett. B… (19…) …; N.A. Kivel, A.S. Stepanenko and A.N. Vasil'ev, Nucl. Phys. B… (19…) …; J.A. Gracey, Nucl. Phys. B… (19…) …; Z. Phys. C… (19…) …; Int. J. Mod. Phys. A… (19…) …, hep-th/…; Phys. Rev. D… (19…) …, hep-th/…; Phys. Lett. B… (19…) ….
A comparison in dimension three between numerical simulations of the GN model and the $\varepsilon$-expansion at second order obtained from the GNY model is reported in
L. Kärkkäinen, R. Lacaze, P. Lacock and B. Petersson, Nucl. Phys. B… (19…) …, Erratum ibidem B… (19…) …; E. Focht, J. Jersák and J. Paul, Phys. Rev. D… (19…) ….
The models are also compared numerically in dimension two in
A.K. De, E. Focht, W. Franzki, J. Jersák and M.A. Stephanov, Phys. Lett. B… (19…) ….
For recent rigorous results see
C. Kopper, J. Magnen and V. Rivasseau, Comm. Math. Phys. … (19…) ….
A few references on the Schwinger model and its relation with the confinement problem:
J. Schwinger, Phys. Rev. 128 (1962) 2425; J.H. Lowenstein and J.A. Swieca, Ann. Phys. (NY) … (19…) …; A. Casher, J. Kogut and L. Susskind, Phys. Rev.
D… (19…) …; S. Coleman, R. Jackiw and L. Susskind, Ann. Phys. (NY) … (19…) …; S. Coleman, Ann. Phys. (NY) … (19…) ….
A few references on the Schwinger and Thirring models (which have also been systematically investigated) for N large:
G. Parisi, Nucl. Phys. B… (19…) …; S. Hikami and T. Muta, Prog. Theor. Phys. … (19…) …; D. Espriu, A. Palanques-Mestre, P. Pascual and R. Tarrach, Z. Phys. C… (19…) …; A. Palanques-Mestre and P. Pascual, Comm. Math. Phys. … (19…) …; J.A. Gracey, Nucl. Phys. B… (19…) …; S.J. Hands, Phys. Rev. D… (19…) …; S.J. Hands, Phys. Rev. D… (19…) …; S. Hands, hep-lat/….
For the calculation of the QED RG function in the MS scheme see
S.G. Gorishny, A.L. Kataev and S.A. Larin, Phys. Lett. B… (19…) …; B… (19…) ….
A recent simulation concerning the three-dimensional Thirring model is reported in
L. Del Debbio, S.J. Hands and J.C. Mehegan, Nucl. Phys. B… (19…) …, hep-lat/….
Finally, these analytic techniques have also been applied to the supersymmetric extension of the non-linear $\sigma$ model and other models, see for example
J.A. Gracey, Nucl. Phys. B… (19…) …; P.M. Ferreira, I. Jack and D.R.T. Jones, Phys. Lett. B… (19…) …, hep-ph/…; P.M. Ferreira, I. Jack, D.R.T. Jones and C.G. North, Nucl. Phys. B… (19…) …, hep-ph/….

The O(N) vector model in the large N limit: multi-critical points and double scaling limit

We now discuss the large N limit of general N-vector models with one scalar field. To illustrate the method we study multi-critical points. Of particular interest are the subtleties involved in the stability of the phase structure at critical dimensions. Another issue involves the so-called double scaling limit. Statistical-mechanical properties of random surfaces, as well as of randomly branched polymers, can be analyzed within the framework of the large N expansion. In the same manner in which matrix models in their double scaling limit provide representations of dynamically triangulated random surfaces summed over different topologies, O(N) symmetric vector models represent discretized branched polymers in this limit, where $N \to \infty$ and the coupling constant $g \to g_c$ in a correlated manner. The surfaces in the case of matrix models, and the randomly branched polymers in the case of vector models, are classified by the different topologies of their Feynman graphs, and thus by powers of $1/N$. Though matrix theories attract most attention, a detailed understanding of these theories exists only for dimensions $d \le 1$. On the other hand, in many cases the O(N) vector models can be successfully studied also in dimensions $d > 1$, and thus provide us with intuition in the search for a possible description of quantum field theory in terms of extended objects in four dimensions, which is a long-standing problem in elementary particle theory.

The double scaling limit in O(N) vector quantum field theories reveals an interesting phase structure beyond the $N \to \infty$ limit. In particular, though the $N \to \infty$ multicritical structure of these models is generally well understood, there are certain cases where it is still unclear which of the features survive at finite N, and to what extent. One such problem is the multicritical behaviour of O(N) models at critical dimensions. Here one finds that in the $N \to \infty$ limit there exists a non-trivial UV fixed point, scale invariance is spontaneously broken, and the one-parameter family of ground states contains a massive vector and a massless bound state, a Goldstone boson–dilaton.
However, since it is unclear whether this structure is likely to survive at finite N, one would like to know whether it is possible to construct a local field theory of a massless dilaton via the double scaling limit, where all orders in $1/N$ contribute. The double scaling limit is viewed as the limit at which the attraction between the O(N) vector quanta reaches a value, at $g \to g_c$, at which a massless bound state is formed in the $N \to \infty$ limit, while the mass of the vector particle stays finite. In this limit, powers of $1/N$ are compensated by IR singularities, and thus all orders in $1/N$ contribute.

First the double scaling limit for simple integrals and quantum mechanics is explained, introducing a formalism which will be useful for the field theory examples. The special case of field theory in dimension two is then discussed. In higher dimensions a new phenomenon arises: the possibility of a spontaneous breaking of the O(N) symmetry of the model, associated with the Goldstone phenomenon. Before discussing a possible double scaling limit, the critical and multicritical points of the O(N) vector model are examined. In particular, a certain sign ambiguity that appears in the expansion of the gap equation is noted, and related to the existence of the IR fixed point in dimensions $2 < d < 4$ discussed earlier. We then discuss the subtleties and conditions for the existence of an O(N)-singlet massless bound state along with a small-mass O(N)-vector particle excitation. It is pointed out that the correct massless effective field theory is obtained after the massive O(N) scalar is integrated out. A further section is devoted to the double scaling limit, with particular emphasis on this limit in theories at their critical dimensions; the main conclusions are then summarized.

Double scaling limit: simple integrals and quantum mechanics

We first discuss $d = 0$ and $d = 1$, dimensions in which the matrix models have equally been solved. We however introduce a general method, not required here, but useful in the general field theory examples.

The zero-dimensional example. Let us first consider the zero-dimensional example. The partition function $Z$ is given by
$$e^{Z} = \int d^N\varphi\,\exp\bigl[-N\,V(\varphi^2)\bigr].$$
The simplest method for discussing the large N limit is of course to integrate over angular variables. Instead, we introduce two new variables $\rho$ and $\lambda$, and use the identity
$$\exp\bigl[-N\,V(\varphi^2)\bigr] \propto \int d\lambda\,d\rho\,\exp\Bigl\{-N\Bigl[\tfrac12\,\lambda\,(\varphi^2-\rho) + V(\rho)\Bigr]\Bigr\}.$$
The integral over $\lambda$ is really a Fourier representation of a $\delta$-function, and thus the contour of integration runs parallel to the imaginary axis. The identity transforms the action into a quadratic form in $\varphi$. Hence the integration over $\varphi$ can be performed, and the dependence on N becomes explicit:
$$e^{Z} \propto \int d\lambda\,d\rho\,\exp\Bigl\{-N\Bigl[-\tfrac12\,\lambda\rho + V(\rho) + \tfrac12\ln\lambda\Bigr]\Bigr\}.$$
The large N limit is obtained by steepest descent. The saddle point is given by
$$V'(\rho) = \tfrac12\,\lambda, \qquad \rho = 1/\lambda.$$
The leading contribution to $Z$ is proportional to N and is obtained by replacing $\rho,\lambda$ by their saddle point values. The leading correction is obtained by expanding $\rho,\lambda$ around the saddle point and performing the gaussian integration. It involves the determinant $D$ of the matrix $M$ of second derivatives,
$$M = \begin{pmatrix} V''(\rho) & -\tfrac12 \\[2pt] -\tfrac12 & \tfrac12\,\lambda^{-2}\end{pmatrix}, \qquad D = \det M = \tfrac12\,V''(\rho)\,\lambda^{-2} - \tfrac14\,.$$
In the generic situation the resulting contribution to $Z$ is $-\tfrac12\ln D$.
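The steepest-descent formulas can be checked against the exact radial integral. A minimal sketch, assuming the illustrative potential $V(x) = x/2 + x^2/4$ (any potential bounded below would do); after angular integration $e^{Z} \propto \int_0^\infty dr\,r^{N-1}e^{-NV(r^2)}$, and for $N\to\infty$ the exponent is governed by $W(\rho) = V(\rho) - \tfrac12\ln\rho$ at the saddle $V'(\rho) = 1/(2\rho)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

V  = lambda x: 0.5 * x + 0.25 * x**2      # illustrative choice
dV = lambda x: 0.5 + 0.5 * x

# saddle point: V'(rho) = 1/(2 rho), i.e. lambda = 1/rho with V'(rho) = lambda/2
rho_s = brentq(lambda r: dV(r) - 1.0 / (2 * r), 1e-6, 10.0)
W_s   = V(rho_s) - 0.5 * np.log(rho_s)    # W(rho) = V(rho) - (1/2) ln rho

for N in (10, 40, 160):
    # exact radial integral, exponent written out to avoid overflow
    f = lambda r: np.exp(-N * V(r**2) + (N - 1) * np.log(r))
    I, _ = quad(f, 1e-12, 20.0)
    print(N, -np.log(I) / N, W_s)   # both columns converge together as N grows
```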
However, if the determinant $D$ vanishes, the leading-order integral is no longer gaussian, at least for the degree of freedom which corresponds to the eigenvector with vanishing eigenvalue. The condition of vanishing of the determinant also implies that two solutions of the saddle point equation coincide, and thus corresponds to a surface in the space of the coefficients of the potential $V$ where the partition function is singular.

To examine the corrections to the leading large N behaviour, it remains possible to integrate over one of the variables by steepest descent. At leading order this corresponds to solving the saddle point equation for one of the variables, the other being fixed. Here it is convenient to eliminate $\lambda$ by the equation $\lambda = 1/\rho$. One finds
$$e^{Z} \propto \int d\rho\,\exp\Bigl[-N\Bigl(V(\rho) - \tfrac12\ln\rho\Bigr) + O(1)\Bigr].$$
In the leading term we obviously recover the result of the angular integration with $\rho = \varphi^2$. For N large the leading contribution arises from the leading term in the expansion of
$$W(\rho) = V(\rho) - \tfrac12\ln\rho$$
near the saddle point:
$$W(\rho) - W(\rho_s) \sim \frac{1}{n!}\,W^{(n)}(\rho_s)\,(\rho-\rho_s)^n.$$
The integer $n$ characterizes the nature of the critical point. Adding relevant perturbations $\delta_kV$, with parameters $v_k$, to the critical potential,
$$\delta_k V = v_k\,(\rho-\rho_s)^k, \qquad 1 \le k \le n-2$$
(the term $k = n-1$ can always be eliminated by a shift of $\rho$), we find the partition function at leading order for N large in the scaling region:
$$e^{Z(\{u_k\})} \propto \int dz\,\exp\Bigl(-z^n - \sum_{k} u_k z^k\Bigr),$$
where $z \propto N^{1/n}(\rho-\rho_s)$, and $u_k \propto N^{1-k/n}\,v_k$ is held fixed.

Quantum mechanics. The method we have used above immediately generalizes to quantum mechanics, although a simpler method involves solving the radial Schrödinger equation. We consider the euclidean action
$$S(\varphi) = N\int dt\,\Bigl[\tfrac12\,\dot\varphi^2(t) + V\bigl(\varphi^2\bigr)\Bigr].$$
Note the unusual field normalization: the factor N in front of the action simplifies all expressions in the large N limit. To explore the large N limit one has to take the scalar function $\varphi^2$, which self-averages, as a dynamical variable. At each time $t$ we thus perform the transformation above. One introduces two paths $\rho(t)$, $\lambda(t)$ and writes
$$\exp\Bigl[-N\int dt\,V(\varphi^2)\Bigr] \propto \int[d\lambda(t)\,d\rho(t)]\,\exp\Bigl\{-N\int dt\,\Bigl[\tfrac12\,\lambda\,(\varphi^2-\rho) + V(\rho)\Bigr]\Bigr\}.$$
The integral over the path $\varphi(t)$ is then gaussian and can be performed. One finds
$$e^{Z} = \int[d\rho(t)\,d\lambda(t)]\,\exp\bigl[-S_N(\rho,\lambda)\bigr]$$
with
$$S_N = N\int dt\,\Bigl[V(\rho) - \tfrac12\,\lambda\rho\Bigr] + \tfrac12\,N\,\mathrm{tr}\ln\bigl(-d_t^2 + \lambda(\cdot)\bigr).$$
Again, in the large N limit the path integral can be calculated by steepest descent. The saddle points are constant paths, solutions of
$$V'(\rho) = \tfrac12\,\lambda, \qquad \rho = \frac{1}{2\pi}\int\frac{d\omega}{\omega^2+\lambda} = \frac{1}{2\sqrt\lambda}\,,$$
where $\omega$ is the Fourier energy variable conjugate to $t$. Again, a critical point is defined by the property that at least two solutions of the saddle point equations coalesce. This happens when the determinant of the matrix of first derivatives of the equations vanishes. The leading correction to the saddle point contribution is given by a gaussian integration. The result involves the determinant of the operator of second derivatives of $S_N$. By Fourier transforming in time, the operator becomes, mode by mode, a $2\times2$ matrix with determinant
$$D(\omega) = \det\begin{pmatrix} V''(\rho) & -\tfrac12 \\[2pt] -\tfrac12 & \ldots B_\lambda(\omega)\end{pmatrix}, \qquad B_\lambda(\omega) = \frac{1}{2\pi}\int\frac{d\omega'}{(\omega'^2+\lambda)\,\bigl[(\omega-\omega')^2+\lambda\bigr]}\,.$$
Thus the criticality condition is equivalent to $D(0) = 0$. When the criticality condition is satisfied, the leading correction is no longer given by steepest descent.
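The one-loop bubble $B_\lambda(\omega)$ entering $D(\omega)$ has the simple zero-frequency value $B_\lambda(0) = \tfrac14\lambda^{-3/2}$, which is easy to verify numerically; a minimal check:

```python
import numpy as np
from scipy.integrate import quad

def B(omega, lam):
    # B_lam(omega) = (1/2pi) int dw' / ((w'^2+lam)((omega-w')^2+lam))
    f = lambda w: 1.0 / (2 * np.pi * (w**2 + lam) * ((omega - w)**2 + lam))
    return quad(f, -np.inf, np.inf)[0]

for lam in (0.5, 1.0, 2.0):
    print(lam, B(0.0, lam), 0.25 * lam**-1.5)   # columns 2 and 3 agree
```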
Again, since at most one mode can be critical, we can integrate over one of the paths by steepest descent, which means solving the saddle point equation for one function, the other being fixed. While the $\lambda$ equation remains local, the $\rho$ equation is now non-local, involving the diagonal matrix element of the inverse of the differential operator $-d_t^2 + \lambda(t)$. We shall see in the next section how this problem can be overcome in general. A special feature of quantum mechanics, however, is that the determinant can be calculated after a simple change of variables. We set
$$\lambda(t) = \dot s(t) + s^2(t),$$
in such a way that the second-order differential operator factorizes:
$$-d_t^2 + \lambda(t) = -\bigl(d_t + s(t)\bigr)\bigl(d_t - s(t)\bigr).$$
The determinant of a first-order differential operator can be calculated by expanding formally in $s$. Only the first term survives, but its coefficient is ambiguous,
$$\mathrm{tr}\ln\bigl(d_t - s(\cdot)\bigr) = -\theta(0)\int dt\,s(t).$$
A more refined analysis, which involves the boundary conditions, is required to determine the ambiguous value $\theta(0)$ of the step function. Here one finds
$$\ln\det\bigl(-d_t^2+\lambda(\cdot)\bigr) = \mathrm{tr}\ln\bigl(-d_t^2+\lambda(\cdot)\bigr) = \int dt\,s(t).$$
The jacobian of the transformation contributes at higher order in $1/N$ and can be neglected. Therefore the large N action becomes
$$S_N = N\int dt\,\Bigl[-\tfrac12\,(\dot s + s^2)\,\rho + V(\rho) + \tfrac12\,s\Bigr] = N\int dt\,\Bigl[-\tfrac12\,\rho\,s^2 + \tfrac12\,s\,(\dot\rho+1) + V(\rho)\Bigr].$$
We can now replace $s$ by the solution of a local saddle point equation (or perform the gaussian integration, but neglect the determinant, which is of higher order):
$$\frac{\delta S_N}{\delta s(t)} = 0 \;\Leftrightarrow\; -\rho\,s + \tfrac12\,(\dot\rho+1) = 0,$$
and find
$$S_N = N\int dt\,\Bigl[\frac{\dot\rho^2}{8\rho} + \frac{1}{8\rho} + V(\rho)\Bigr].$$
We recognize the action for the large N potential at zero angular momentum in the radial coordinate $\rho(t) = \varphi^2(t)$. Critical points then are characterized by the behaviour of the potential
$$W(\rho) = V(\rho) + \frac{1}{8\rho}$$
near the saddle point $\rho_s$:
$$W(\rho) - W(\rho_s) \sim W^{(n)}(\rho_s)\,\frac{(\rho-\rho_s)^n}{n!}\,.$$
At critical points the ground-state energy, after subtraction of the classical term which is linear in N, has a non-analytic contribution. To eliminate N from the action we set
$$t \to t\,N^{(n-2)/(n+2)}, \qquad \rho(t) - \rho_s \to N^{-2/(n+2)}\,z(t).$$
We conclude that the leading correction to the energy levels is proportional to $N^{-(n-2)/(n+2)}$. Note also that the scaling of time implies that higher-order time derivatives would be irrelevant, an observation which can be used more directly to expand the determinant in local terms, and which will be important in the next section. If we add relevant corrections to the potential,
$$\delta_k V = v_k\,(\rho-\rho_s)^k, \qquad 1 \le k \le n-2,$$
the coefficients $v_k$ must scale like
$$v_k \propto N^{2(k-n)/(n+2)}.$$

The 2D $V(\phi^2)$ field theory in the double scaling limit

In the first part we study the O(N) symmetric $V(\phi^2)$ field theory, where $\phi$ is an N-component field, in the large N limit in dimension two, because phase transitions occur in higher dimensions, a problem which has to be considered separately. The action is
$$S(\phi) = N\int d^2x\,\Bigl\{\tfrac12\,[\partial_\mu\phi(x)]^2 + V\bigl(\phi^2\bigr)\Bigr\},$$
where an implicit cut-off is always assumed below. Whenever the explicit dependence on the cut-off is relevant, we shall assume a Pauli–Villars-type regularization, i.e. the replacement in the action of $\partial^2$ by
$$\partial^2\,D(\partial^2/\Lambda^2),$$
where $D(z)$ is a positive non-vanishing polynomial with $D(0) = 1$. As before, one introduces two fields $\rho(x)$ and $\lambda(x)$ and uses the identity above. The large N action is then
$$S_N = N\int d^2x\,\Bigl[V(\rho) - \tfrac12\,\lambda\rho\Bigr] + \tfrac12\,N\,\mathrm{tr}\ln\bigl(-\partial^2 + \lambda(\cdot)\bigr).$$
Again for N large we evaluate the integral by steepest descent.
Since the saddle point value $\lambda$ is the $\phi$-field mass squared, we set in general $\lambda = m^2$. With this notation the two equations for the saddle point $m^2$, $\rho_s = \langle\phi^2\rangle$ are:
$$V'(\rho_s) = \tfrac12\,m^2, \qquad \rho_s = \frac{1}{(2\pi)^2}\int\frac{d^2k}{k^2+m^2}\,,$$
where we have used the short-cut notation
$$\frac{1}{(2\pi)^2}\int\frac{d^2k}{k^2+m^2} \;\equiv\; \frac{1}{(2\pi)^2}\int\frac{d^2k}{k^2\,D(k^2/\Lambda^2)+m^2} \;\equiv\; B_\Lambda(m).$$
For $m \ll \Lambda$ one finds
$$B_\Lambda(m) = \frac{1}{2\pi}\,\ln(\Lambda/m) + \frac{1}{4\pi}\,\ln K + O(m^2/\Lambda^2),$$
where $K$ is a regularization-dependent constant.

As we have discussed in the case of quantum mechanics, a critical point is characterized by the vanishing at zero momentum of the determinant of second derivatives of the action at the saddle point. The mass matrix then has a zero eigenvalue which, in field theory, corresponds to the appearance of a new massless excitation other than $\phi$. In order to obtain the effective action for this scalar massless mode, we must integrate over one of the fields. In the field theory case the resulting effective action can no longer be written in local form. To discuss the order of the critical point, however, we only need the action for space-independent fields, and thus for example we can eliminate $\lambda$ using the $\rho$ saddle point equation. The effective $\rho$ potential $W(\rho)$ then reads
$$W(\rho) = V(\rho) - \frac12\int^{\lambda(\rho)} d\lambda'\,\lambda'\,\frac{\partial}{\partial\lambda'}\,B_\Lambda\bigl(\sqrt{\lambda'}\bigr),$$
where, at leading order for large $\Lambda$,
$$\lambda(\rho) = K\,\Lambda^2\,e^{-4\pi\rho}.$$
The expression for the effective action above is correct for any $d$ and will be used again below. Here we have:
$$W(\rho) = V(\rho) + \frac{K\Lambda^2}{8\pi}\,e^{-4\pi\rho} = V(\rho) + \frac{m^2}{8\pi}\,e^{-4\pi(\rho-\rho_s)}.$$
A multicritical point is defined by the condition
$$W(\rho) - W(\rho_s) = O\bigl((\rho-\rho_s)^n\bigr).$$
This yields the conditions
$$V^{(k)}(\rho_s) = (-1)^{k-1}\,\tfrac12\,(4\pi)^{k-1}\,m^2 \qquad \text{for}\ 1 \le k \le n-1.$$
Note that the coefficients $V^{(k)}(\rho_s)$ are the coupling constants renormalized at leading order for N large. If $V(\rho)$ is a polynomial of degree $n-1$ (the minimal polynomial model), the multicritical condition determines the critical values of the renormalized coupling constants, as well as $\rho_s$.

When the fields are space-dependent it is simpler to eliminate $\rho$ instead, because the corresponding field equation,
$$V'\bigl(\rho(x)\bigr) = \tfrac12\,\lambda(x),$$
is local. This equation can be solved by expanding $\rho(x) - \rho_s$ in a power series in $\lambda(x) - m^2$:
$$\rho(x) - \rho_s = \frac{1}{2V''(\rho_s)}\,\bigl(\lambda(x)-m^2\bigr) + O\bigl((\lambda-m^2)^2\bigr).$$
The resulting action for the field $\lambda(x)$ remains non-local but, because (as we shall see) adding powers of $\lambda$ as well as adding derivatives makes terms less relevant, only the first few terms of a local expansion of the effective action will be important. If in the local expansion of the determinant we keep only the first two terms, we obtain an action containing at leading order a kinetic term proportional to $(\partial_\mu\lambda)^2$ and the interaction $(\lambda(x)-m^2)^n$:
$$S_N(\lambda) \sim N\int d^2x\,\Bigl[\frac{\ldots}{m^2}\,(\partial_\mu\lambda)^2 + \frac{1}{n!}\,S_n\,\bigl(\lambda(x)-m^2\bigr)^n\Bigr],$$
where the neglected terms are of order $(\lambda-m^2)^{n+1}$, $\partial^4$, and $\lambda\,\partial^2\lambda$, and
$$S_n = W^{(n)}(\rho_s)\,\bigl[2V''(\rho_s)\bigr]^{-n} = W^{(n)}(\rho_s)\,\bigl(-4\pi m^2\bigr)^{-n}.$$
Moreover, we note that together with the cut-off $\Lambda$, $m$ now also acts as a cut-off in the local expansion.
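The small-$m$ behaviour of $B_\Lambda(m)$ is easy to verify for a concrete regulator. A minimal sketch, assuming the simple Pauli–Villars-type choice $D(z) = 1+z$ (so $D(0)=1$, $D>0$):

```python
import numpy as np
from scipy.integrate import quad

Lam = 1.0

def B(m):
    # d^2k/(2pi)^2 -> k dk/(2pi); integrand ~ 1/k^3 at large k: convergent
    f = lambda k: k / (2 * np.pi * (k**2 * (1 + k**2 / Lam**2) + m**2))
    return quad(f, 0.0, np.inf)[0]

for m in (1e-2, 1e-3, 1e-4):
    print(m, B(m), np.log(Lam / m) / (2 * np.pi))
    # the difference between the last two columns tends to a constant,
    # the regularization-dependent (1/4pi) ln K term
```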
Ho w ev er w e also note that the cut-o s ( or the mass m) are no w also m ultiplied b y N (n)= . Therefore the large N limit also b ecomes a large cut-o limit. Double sc aling limit. The existence of a double scaling limit relies on the existence of IR singularities due to the massless or small mass b ound state whic h can comp ensate the = N factors app earing in the large N p erturbation theory . W e no w add to the action relev an t p erturbations:  k V = v k ((x)  s ) k ;   k  n : prop ortional to R d x( m ) k :  k S N () = N S k Z d x ( m ) k ; where the co ecien ts S k are functions of the co ecien ts v k . After the rescaling (:)  k S N (') =  k ! g k N (nk )= Z d x ' k (x)   k  n Ho w ev er, unlik e quan tum mec hanics, it is not sucien t to scale the co ecien ts g k with the p o w er N (k n)= to obtain a nite scaling limit. Indeed p erturbation theory is a ected b y UV div ergences, and w e ha v e just noticed that the cut-o div erges with N . In t w o dimensions the nature of div ergences is v ery simple: it is en tirely due to the self-con tractions of the in teractions terms and only one div ergen t in tegral app ears: ' (x) =   Z d q q +  ; where  is the small mass of the b ound state, required as an IR cut-o to de ne p erturbativ ely the double scaling limit. W e can then extract the N dep endence ' (x) =   (n ) ln N + O ():  Therefore the co ecien ts S k ha v e also to cancel these UV div ergences, and th us ha v e a logarithmic dep endence in N sup erp osed to the natural p o w er obtained from p o w er coun ting argumen ts. In general for an y p oten tial U (') U (') =: U (') : + " X k =  k k ! ' k  @ @ '  k # : U (') : ; where : U (') : is the p oten tial from whic h self-con tractions ha v e b een subtracted (it has b een normal-ordered). F or example for n = ' (x) =: ' (x) : + ' '(x); and th us the double scaling limit is obtained with the b eha viour N g  +   ln N g xed : F or the example n =  g  N = and N g + g   ln N xed : . The V ( ) in the lar ge N limit: phase tr ansitions In higher dimensions something new happ ens: the p ossibilit y of phase transi-tions asso ciated with sp on taneous breaking the O (N ) symmetry . In the rst part w e th us study the O (N ) symmetric V ( ) eld theory , in the large N limit to ex-plore the p ossible phase transitions and iden tify the corresp onding m ulticritical p oin ts. The action is: S () = N Z d d x n  [@  (x)] + V  o ; (:) where, as ab o v e (equations (:,: )), an implicit cut-o is alw a ys assumed b elo w. The iden tit y (:) transforms the action in to a quadratic form in and there-fore the in tegration o v er can b e p erformed. It is con v enien t ho w ev er here to in tegrate only o v er N  comp onen ts, to k eep a comp onen t of the v ector eld, whic h w e denote  , in the action. The large N action is then: S N = N Z d d x h  ( @   ) + V () +      i +  (N ) tr ln ( + ): (:) The sadd le p oint e quations: the O (N ) critic al p oint. Let us then write the saddle p oin t equations for a general p oten tial V . A t high temp erature  = 0 0 and  is the - eld mass squared. W e th us set in general  = m . With this notation the three saddle p oin t equations are: m  = 0 ; (:a ) V 0 () =  m ; (:b )  =   ( ) d Z d d k k + m : (:c) In the ordered phase  = 0 and th us m v anishes. Equation (:c) has a solution only for  >  c ,  c =  ( ) d Z d d k k ; )  = p   c : Equation (:b) whic h reduces to V 0 () = 0 then yields the critical temp erature. 
Setting $V(\rho) = U(\rho) + \tfrac12\,r\rho$, we find
$$r_c = -2\,U'(\rho_c).$$
To find the magnetization critical exponent we need the relation between $r$ and $\rho$ near the critical point. In the disordered phase, $\sigma = 0$, the saddle point equation relates $\rho$ to the $\phi$-field mass $m$. For $m \ll \Lambda$, $\rho$ approaches $\rho_c$, and the relation becomes
$$\rho_c - \rho = C(d)\,m^{d-2} - a(d)\,m^2\,\Lambda^{d-4} + O\bigl(m^4\Lambda^{d-6}\bigr) + O\bigl(m^d\Lambda^{-2}\bigr).$$
For $2 < d < 4$ (the situation we shall assume below, except when stated otherwise) the non-analytic $O(m^{d-2})$ part dominates the corrections to the leading part of this expression. For $d = 4$, instead,
$$\rho_c - \rho = \frac{m^2}{8\pi^2}\,\bigl(\ln(\Lambda/m) + \mathrm{const.}\bigr),$$
and for $d > 4$ the analytic contribution dominates, $\rho_c - \rho \sim a(d)\,m^2\,\Lambda^{d-4}$. The constant $C(d)$ is universal. The constant $a(d)$, which also appears in the expansion above, instead depends on the cut-off procedure, and is given by
$$a(d) = \frac{1}{(2\pi)^d}\int d^dk\,\frac{1}{k^4}\,\Bigl[1 - \frac{1}{D^2(k^2)}\Bigr].$$

Critical point. In a generic situation $V''(\rho_c) = U''(\rho_c)$ does not vanish. We thus find in the low-temperature phase
$$t = r - r_c \sim -2\,U''(\rho_c)\,(\rho-\rho_c) \;\Rightarrow\; \sigma^2 \propto -t\,, \quad \beta = \tfrac12\,.$$
This is the case of an ordinary critical point. Stability implies $V''(\rho_c) > 0$, so that $t < 0$. At high temperature, in the disordered phase, the $\phi$-field mass $m$ is given by $2U'(\rho) + r = m^2$ and thus, using the expansion above, at leading order
$$t \sim 2\,U''(\rho_c)\,C(d)\,m^{d-2},$$
in agreement with the result of the normal critical point. Of course, the simplest realization of this situation is to take $V(\rho)$ quadratic, and we recover the $(\phi^2)^2$ field theory.

The sign of the constant $a(d)$. A comment concerning the non-universal constant $a(d)$ is here in order because, while its absolute value is irrelevant, its sign plays a role in the discussion of multicritical points. Actually, the relevance of this sign to the RG properties of the large N limit of the simple $(\phi^2)^2$ field theory has already been mentioned. For the simplest Pauli–Villars-type regularization we have $D(z) \ge 1$, and thus $a(d)$ is finite and positive in dimensions $2 < d < 4$; but this clearly is not a universal feature.

A new situation arises if we can adjust a parameter of the potential in such a way that $U''(\rho_c) = 0$. This can be achieved only if the potential $V$ is at least cubic. We then expect tricritical behaviour. Higher critical points can be obtained when more derivatives vanish. We shall examine the general case even though, from the point of view of real phase transitions, higher-order critical points are not interesting, because $d > 2$ for continuous symmetries and mean-field behaviour is then obtained for $d \ge 3$. The analysis will however be useful in the study of the double scaling limit. Assuming that the first non-vanishing derivative is $U^{(n)}(\rho_c)$, we expand the saddle point equation further. In the ordered low-temperature phase we now find
$$t = -\frac{2}{(n-1)!}\,U^{(n)}(\rho_c)\,(\rho-\rho_c)^{n-1} \;\Rightarrow\; \sigma \propto (-t)^{\beta}, \quad \beta = \frac{1}{2(n-1)}\,,$$
which leads to the exponent expected in the mean-field approximation for such a multicritical point. We have, in addition, the condition $U^{(n)}(\rho_c) > 0$. In the high-temperature phase, instead,
$$m^2 = t + (-1)^n\,\frac{2}{(n-1)!}\,U^{(n)}(\rho_c)\,C^{n-1}(d)\,m^{(n-1)(d-2)}.$$
For $d > 2n/(n-1)$ we find simple mean-field behaviour, as expected, since we are above the upper-critical dimension. For $d < 2n/(n-1)$ we find a peculiar phenomenon: the term on the r.h.s. is always dominant, but depending on the parity of $n$ the equation has solutions for $t > 0$ or $t < 0$.
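In $d = 3$ the universal constant is $C(3) = 1/4\pi$ (as quoted below), which can be checked directly since the defining difference of integrals is UV-convergent. A minimal numerical sketch, assuming no regulator is needed for the subtracted combination:

```python
import numpy as np
from scipy.integrate import quad

# rho_c - rho = int d^3k/(2pi)^3 [1/k^2 - 1/(k^2+m^2)] = m/(4 pi)
def delta_rho(m):
    # angular integration done: d^3k/(2pi)^3 -> k^2 dk/(2 pi^2)
    f = lambda k: m**2 / (2 * np.pi**2 * (k**2 + m**2))
    return quad(f, 0.0, np.inf)[0]

for m in (0.1, 0.01):
    print(m, delta_rho(m), m / (4 * np.pi))   # columns 2 and 3 agree
```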
For $n$ even, $t$ is positive and we find
$$m \propto t^{\nu}, \qquad \nu = \frac{1}{(n-1)(d-2)}\,,$$
which is a non-mean-field behaviour below the critical dimension. However, for $n$ odd (this includes the tricritical point) $t$ must be negative, in such a way that we now have two competing solutions at low temperature, and we have to find out which one is stable. We shall verify below that only the ordered phase is stable, so that the correlation length of the $\phi$-field in the high-temperature phase always remains finite. Although these dimensions do not correspond to physical situations, because $d < 3$, the result is peculiar and inconsistent with the $\varepsilon$-expansion. For $d = 2n/(n-1)$ we find mean-field behaviour without logarithmic corrections, provided one condition is met:
$$\frac{2}{(n-1)!}\,U^{(n)}(\rho_c)\,C^{n-1}\bigl(2n/(n-1)\bigr) < 1\,, \qquad C(3) = \frac{1}{4\pi}\,.$$
We examine the tricritical point in more detail as an example below. We will see that the special point
$$\frac{2}{(n-1)!}\,U^{(n)}(\rho_c)\,C^{n-1}\bigl(2n/(n-1)\bigr) = 1$$
has several peculiarities. In what follows we call $c^*$ this special value of $U^{(n)}(\rho_c)$.

Discussion. In the mean-field approximation the function $U(\rho) \propto \rho^n$ is not bounded from below for $n$ odd; however, $\phi = 0$ is the minimum, because by definition $\rho = \phi^2 \ge 0$. Here, instead, we are in the situation where $U(\rho) \sim (\rho-\rho_c)^n$ but $\rho_c$ is positive. Thus this extremum of the potential is likely to be unstable for $n$ odd. Checking the global stability requires further work. The question is whether such multicritical points can be studied by the large N limit method.

Another point to notice concerns the renormalization group: the $n = 2$ example is peculiar in the sense that the large N limit exhibits a non-trivial IR fixed point. For higher values of $n$ no coupling renormalization arises in the large N limit, and the IR fixed point remains pseudo-gaussian. We are in a situation quite similar to usual perturbation theory: the $\beta$-function can only be calculated perturbatively in $1/N$, and the IR fixed point is outside the perturbative regime.

Local stability and the mass matrix. The matrix of general second partial derivatives of the effective action, in the $(\sigma,\rho,\lambda)$ variables, is
$$N\begin{pmatrix} p^2+m^2 & 0 & \sigma \\[2pt] 0 & V''(\rho) & -\tfrac12 \\[2pt] \sigma & -\tfrac12 & -\tfrac12\,B_\Lambda(p,m)\end{pmatrix},$$
where $B_\Lambda(p,m)$ is the regularized one-loop bubble. We are now in a position to study the local stability of the critical points. Since the integration contour for $\lambda - m^2$ should be parallel to the imaginary axis, a necessary condition for stability is that the determinant remains negative.

The disordered phase. Then $\sigma = 0$, and thus we only have to study the matrix $M$ of the $(\rho,\lambda)$ subspace. Its determinant must remain negative, which implies
$$\det M < 0 \;\Leftrightarrow\; 2\,V''(\rho)\,B_\Lambda(p,m) + 1 > 0.$$
For a Pauli–Villars-type regularization the function $B_\Lambda(p,m)$ is decreasing in $p$, so that this condition is implied by the condition at zero momentum,
$$2\,V''(\rho)\,B_\Lambda(0,m) + 1 > 0.$$
For $m$ small we use the expansion above, and at leading order the condition becomes
$$C(d)\,(d-2)\,m^{d-4}\,V''(\rho) + 1 > 0.$$
This condition is satisfied by a normal critical point, since $V''(\rho_c) > 0$. For a multicritical point, taking into account the expansion of $V''(\rho)$, we find
$$(-1)^n\,\frac{d-2}{(n-2)!}\,C^{n-1}(d)\,m^{\,n(d-2)-d}\,V^{(n)}(\rho_c) + 1 > 0.$$
We obtain a result consistent with our previous analysis: for $n$ even it is always satisfied; for $n$ odd it is always satisfied above the critical dimension and never below. At the upper-critical dimension we find a condition on the value of $V^{(n)}(\rho_c)$ which we recognize to be identical to the condition above, because then $2n/(n-1) = d$.
The ordered phase. Now $m = 0$, and the determinant $\Delta$ of the complete matrix satisfies
$$\Delta > 0 \;\Leftrightarrow\; V''(\rho)\,B_\Lambda(p,0)\,p^2 + \ldots\,p^2 + \ldots\,\sigma^2\,V''(\rho) > 0.$$
We recognize a sum of positive quantities, and the condition is always satisfied. Therefore, in the case where there is competition with a disordered saddle point, only the ordered one can be stable.

The scalar bound state

In this section we study the limit of stability in the disordered phase ($\sigma = 0$). This is a problem which only arises when $n$ is odd, the first case being provided by the tricritical point. The mass matrix then has a zero eigenvalue, which corresponds to the appearance of a new massless excitation other than $\phi$. Let us denote by $M$ the $(\rho,\lambda)$ submatrix. Then
$$\det M = 0 \;\Leftrightarrow\; 2\,V''(\rho)\,B_\Lambda(0,m) + 1 = 0.$$
In the two-space the corresponding eigenvector has components $(\tfrac12,\,V''(\rho))$.

The small mass $m$ region. In the small $m$ limit the equation can be rewritten in terms of the constant $C(d)$:
$$C(d)\,(d-2)\,m^{d-4}\,V''(\rho) + 1 = 0.$$
This equation tells us that $V''(\rho)$ must be small. We are thus close to a multicritical point. Using the result of the stability analysis we obtain
$$(-1)^n\,\frac{d-2}{(n-2)!}\,C^{n-1}(d)\,m^{\,n(d-2)-d}\,V^{(n)}(\rho_c) = -1.$$
We immediately notice that this equation has solutions only for $n(d-2) = d$, i.e. at the critical dimension. The compatibility then fixes the value of $V^{(n)}(\rho_c)$: we again find the special point, $V^{(n)}(\rho_c) = c^*$. If we take into account the leading correction to the small-$m$ behaviour, we find instead
$$V^{(n)}(\rho_c) - c^* \propto (n-\ldots)\,\frac{a(d)}{C(d)}\,m^{4-d}.$$
This means that when $a(d) > 0$ there exists a small region $V^{(n)}(\rho_c) > c^*$ where the vector field is massive, with a small mass $m$, and the bound state is massless. The value $c^*$ is a fixed-point value.

The scalar field at small mass. We want to extend the analysis to a situation where the scalar field has a small but non-vanishing mass $M$ and $m$ is still small. The goal is in particular to explore the neighbourhood of the special point $c^*$. Then the vanishing of the determinant of $M$ implies
$$1 + 2\,V''(\rho)\,B_\Lambda(iM,m) = 0.$$
Because $M$ and $m$ are small, this equation still implies that $\rho$ is close to a multicritical point. Since reality imposes $M < 2m$, it is easy to verify that this equation also has solutions only at the critical dimension. Then
$$V^{(n)}(\rho_c)\,f(m/M) = c^*,$$
where we have set
$$f(z) = \int_0^1\frac{dx}{\bigl[1 - (x-x^2)/z^2\bigr]^{(4-d)/2}}\,, \qquad \tfrac12 < z.$$
In three dimensions it reduces to
$$f(z) = z\,\ln\frac{2z+1}{2z-1}\,.$$
$f(z)$ is a decreasing function which diverges for $z \to \tfrac12$, because $d \le 4$. Thus we find solutions in the whole region $0 < V^{(n)}(\rho_c) < c^*$, i.e. when the multicritical point is locally stable.

Let us calculate the propagator near the pole. We find that near the pole the $(\rho,\lambda)$ propagator matrix is proportional to $1/(p^2+M^2)$ times a matrix built from
$$G = \frac{(\ldots C)^{n-2}\,W^{(n)}(\rho_s)}{(n-2)!}\,m^{\ldots}\,,$$
with overall normalization $\bigl[N\,dB_\Lambda(p,m)/dp^2\bigr]^{-1}$ evaluated at $p^2 = -M^2$. For $m/M$ fixed, the residue goes to zero with $m$ as $m^{4-d}$, because the derivative of $B_\Lambda$ is of order $m^{d-4}$. Thus the bound state decouples on the multicritical line.

The scalar massless excitation: general situation. Up to now we have explored only the case where both the scalar field and the vector field propagate. Let us now relax the latter condition and examine what happens when $m$ is no longer small. The condition $M = 0$ then reads
$$2\,V''(\rho_s)\,B_\Lambda(0,m) + 1 = 0$$
together with
$$\tfrac12\,m^2 = V'(\rho_s), \qquad \rho_s = \frac{1}{(2\pi)^d}\int\frac{d^dk}{k^2+m^2}\,.$$
An obvious remark: solutions exist only for $V''(\rho_s) < 0$, and therefore the ordinary critical line can never be approached. In terms of the function $F(z)$,
$$\Lambda^{d-2}\,F(m^2/\Lambda^2) = \frac{1}{(2\pi)^d}\int\frac{d^dk}{k^2\,D(k^2/\Lambda^2)+m^2}\,, \qquad F(z) = N_d\int_0^{\ldots}\frac{k^{d-1}\,dk}{k^2\,D(k^2)+z}\,,$$
the equations can be rewritten
$$\rho_s = \Lambda^{d-2}\,F(z), \qquad z\,\Lambda^2 = 2\,V'(\rho_s), \qquad 2\,\Lambda^{d-4}\,V''(\rho_s)\,F'(z) = 1.$$
The function $F(z)$ in Pauli–Villars regularization is a decreasing function; in the same way $-F'(z)$ is a positive decreasing function. The third equation is the condition for the two curves corresponding to the first two to become tangent. For any value of $z$ we can find potentials, and thus solutions. Let us call $z_s$ such a value and specialize to cubic potentials. Then $\rho_s = \Lambda^{d-2}F(z_s)$ and
$$V(\rho) = V'(\rho_s)(\rho-\rho_s) + \tfrac12\,V''(\rho_s)(\rho-\rho_s)^2 + \tfrac{1}{3!}\,V^{(3)}(\rho_s)(\rho-\rho_s)^3,$$
which yields a two-parameter family of solutions. For $z$ small we see that, for $d < 4$, the potential becomes proportional to $(\rho-\rho_c)^3$.
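The closed form quoted above for $f(z)$ in $d = 3$ can be verified directly. A minimal numerical sketch:

```python
import numpy as np
from scipy.integrate import quad

# d = 3:  f(z) = int_0^1 dx [1 - (x - x^2)/z^2]^(-1/2) = z ln((2z+1)/(2z-1))
def f_num(z):
    g = lambda x: (1.0 - (x - x**2) / z**2) ** (-0.5)
    return quad(g, 0.0, 1.0)[0]

for z in (0.6, 1.0, 3.0):       # defined only for z > 1/2
    print(z, f_num(z), z * np.log((2*z + 1) / (2*z - 1)))
```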
In terms of the function F (z ) d F (m = ) =  ( ) d Z d d k k + m   ( ) d Z d d k k D (k ) + m (:) and th us F (z ) = N d Z 0 k d dk k D (k ) + z ; (:) the equations can b e rewritten  s = d F (z ); z = V 0 ( s ) ; d V 00 ( s )F 0 (z ) =  : The function F (z ) in P auli{Vill ars's regularization is a decreasing function. In the same w a y F 0 (z ) is a p ositiv e decreasing function. The third equation is the condition for the t w o curv es corresp onding to the t w o rst ones b ecome tangen t. F or an y v alue of z w e can nd p oten tials and th us solutions. Let us call z s suc h a v alue and sp ecialize to cubic p oten tials. Then  s = d F (z s ) ; V () = V 0 ( s )(  s ) +  V 00 ( s )(  s ) +  ! V () ( s )(  s ) ; (:) whic h yields a t w o parameter family of solutions. F or z small w e see that for d <  the p oten tial b ecomes prop ortional to (  c ) .  . Stability and double sc aling limit In order to discuss in more details the stabilit y issue and the double scaling limit w e no w construct the e ectiv e action for the scalar b ound state. W e consider rst only the massless case. W e only need the action in the IR limit, and in this limit w e can in tegrate out the v ector eld and the second massiv e eigenmo de. Inte gr ation over the massive mo des. As w e ha v e already explained in section . w e can in tegrate o v er one of the elds, the second b eing xed, and w e need only the result at leading order. Therefore w e replace in the functional in tegral e Z = Z [dd] exp  N tr ln(@ + ) + N Z d d x V () +     ; (:) one of the elds b y the solution of the eld equation. It is useful to rst discuss the e ectiv e p oten tial of the massless mo de. This requires calculating the action only for constan t elds. It is then simpler to eliminate . W e assume in this section that m is small (the v ector propagates). F or  the -equation reads (d < )   c = C (d) (d)= : (: ) It follo ws that the resulting p oten tial W (), obtained from equation (:) is W () = V () + d d(C (d)) =(d) ( c ) d=(d) : (:0) In the sense of the double scaling limit the criticality conditions are W () = O (  s ) n  : It follo ws V (k ) ( s ) =  C k (d) k d=(d )  =(d )  m dk (d)   k  n  : F or the p oten tial V of minimal degree w e nd W ()   n! C n (d) n d=(d )  =(d )  m dn(d) (  s ) n : The double sc aling limit. W e recall here that quite generally one v eri es that a non-trivial double scaling limit ma y exist only if the resulting eld theory of the massless mo de is sup er-renorm alizable, i.e. b elo w its upp er-critical dimension d = n=(n ), b ecause p erturbation theory has to b e IR div ergen t. Equiv alen tly , to eliminate N from the critical theory , one has to rescale   s / N  ' ; x ! xN (n) with = = n d(n );  where  has to b e p ositiv e. W e no w sp ecialize to dimension three, since d < has already b een examined, and the expressions ab o v e are v alid only for d < . The normal critical p oin t (n = ), whic h leads to a ' eld theory , and can b e obtained for a quadratic p oten tial V () (the ( ) ) has b een discussed elsewhere. W e th us concen trate on the next critical p oin t n =  where the minimal p oten tial has degree three. The d = tricritic al p oint. The p oten tial W () then b ecomes W () = V () +  ( c ) : (:) If the p oten tial V () has degree larger than three, w e obtain after a lo cal expan-sion and a rescaling of elds,   s = (    c )( m ) / '= N ; x ! N x ; (:) a simple sup er-renorm alizable '  (x) eld theory . 
If we insist instead that the initial theory should be renormalizable, then we remain with only one candidate: the renormalizable $(\phi^2)^3$ field theory, also relevant for the tricritical phase transition with O(N) symmetry breaking. Inspection of the potential $W(\rho)$ immediately shows a remarkable feature: because the term added to $V(\rho)$ is itself a polynomial of degree three, the criticality conditions lead to a potential $W(\varphi)$ which vanishes identically. This result reflects the property that the two saddle point equations ($\partial S/\partial\rho = 0$, $\partial S/\partial\lambda = 0$) are proportional, and thus have a continuous one-parameter family of solutions. This results in a flat effective potential for $\varphi(x)$. The effective action for $\varphi$ depends only on the derivatives of $\varphi$, as in the O(2) non-linear $\sigma$ model. We conclude that no non-trivial double scaling limit can be obtained in this way. In three dimensions with a $(\phi^2)^3$ interaction we can generate at most a normal critical point, $n = 2$, but then a simple $(\phi^2)^2$ field theory suffices.

The ambiguity of the sign of $a(d)$ discussed above has an interesting appearance in $d = 3$ in the small-$m$ region. If one keeps the extra term proportional to $a(d)$, we have
$$W(\rho) = V(\rho) + \frac{8\pi^2}{3}\,(\rho_c-\rho)^3 + a(3)\,\ldots$$
Using now the gap equation and, as mentioned above, the fact that in the small-$m$ region the potential is proportional to $(\rho-\rho_c)^3$, we can solve for $m^2$. Since $m^2 > 0$, the appearance of a phase with small mass depends on the sign of $a(d)$. Clearly this shows a non-commutativity of the limits $m^2/\Lambda^2 \to 0$ and $N \to \infty$. The small-$m$ phase can be reached by a special tuning, and cannot be reached with an improper sign of $a(d)$. Calculated in this way, $m^2$ can be made proportional to the deviation of the coefficient of $\rho^3$ in $V(\rho)$ from its critical value.

Conclusions

This is a study of several subtleties in the phase structure of O(N) vector models around multicritical points of odd and even orders. One of the main topics is the understanding of the multicritical behaviour of these models at their critical dimensions, and the effective field theory of the O(N)-singlet bound state obtained in the $N \to \infty$, $g \to g_c$ correlated limit. It is pointed out that the integration over massive O(N)-singlet modes is essential in order to extract the correct effective field theory of the small-mass scalar excitation. After performing this integration, it has been established here that the double scaling limit of the $(\phi^2)^K$ vector model in its critical dimension $d = 2K/(K-1)$ results in a theory of a free massless O(N)-singlet bound state. This fact is a consequence of the existence of flat directions at the scale-invariant multicritical point in the effective action. In contrast to the case $d < 2K/(K-1)$, where IR singularities compensate powers of $1/N$ in the double scaling limit, at $d = 2K/(K-1)$ there is no such compensation, and only a non-interacting effective field theory of the massless bound state is left.

Another interesting issue in this study is the ambiguity of the sign of $a(d)$. The coefficient $a(d)$ of the analytic $m^2$ term in the expansion of the gap equation seems to have a surprisingly important role in the approach to the continuum limit ($m \ll \Lambda$). The existence of an IR fixed point at $g \sim O(1/N)$, as seen in the $\beta$-function for the unrenormalized coupling constant, depends on the sign of $a(d)$.
Moreover, the existence of a phase with a small mass $m$ for the O(N) vector quanta and a massless O(N)-singlet scalar also depends on the sign of $a(d)$. It may very well be that the importance of the sign of $a(d)$ is a mere reflection of the limited coupling-constant space used to describe the model. This is left here as an open question that deserves a detailed renormalization-group or lattice-simulation study in the future.

Bibliographical Notes

The last section is taken from
Galit Eyal, Moshe Moshe, Shinsuke Nishigaki and Jean Zinn-Justin, Nucl. Phys. B470 (1996) 369, hep-th/9601080.
For a review on matrix models and the double scaling limit see
P. Di Francesco, P. Ginsparg and J. Zinn-Justin, Phys. Rep. 254 (1995) 1.
The $d = 1$ matrix problem is discussed in
P. Ginsparg and J. Zinn-Justin, Phys. Lett. B… (1990) …; E. Brézin, V.A. Kazakov and Al.B. Zamolodchikov, Nucl. Phys. B… (1990) …; G. Parisi, Phys. Lett. B… (1990) …; Europhys. Lett. … (1990) …; D.J. Gross and N. Miljković, Phys. Lett. B… (1990) ….
Previous references on the double scaling limit in vector models include
A. Anderson, R.C. Myers and V. Periwal, Phys. Lett. B… (19…) …, Nucl. Phys. B… (19…) …; S. Nishigaki and T. Yoneya, Nucl. Phys. B… (19…) …; P. Di Vecchia, M. Kato and N. Ohta, Nucl. Phys. B… (19…) …; J. Zinn-Justin, Phys. Lett. B… (19…) …; P. Di Vecchia, M. Kato and N. Ohta, Int. J. Mod. Phys. A… (19…) …; T. Yoneya, Prog. Theor. Phys. Suppl. … (19…).
References on tricritical behaviour, the dilaton, etc. include
W.A. Bardeen, M. Moshe and M. Bander, Phys. Rev. Lett. … (19…) …; F. David, D.A. Kessler and H. Neuberger, Phys. Rev. Lett. … (19…) …, Nucl. Phys. B… [FS] (19…) …; D.A. Kessler and H. Neuberger, Phys. Lett. B… (19…) …; P. Di Vecchia and M. Moshe, Phys. Lett. B… (19…) …; H.J. Schnitzer, Mod. Phys. Lett. A… (19…) ….
7.3 conjugation and mapping via Hfr strains Flashcards | Quizlet
===============

Terms in this set (33)

Explain how an Hfr strain is produced.

As shown in Figure 7.5a, an F factor may align with a similar region found in the bacterial chromosome. Due to recombination, the F factor may integrate into the bacterial chromosome. In this example, the F factor has integrated next to a lac⁺ gene. F factors can integrate into several different sites that are scattered around the E. coli chromosome.
When an F factor integrates into the chromosome, it creates an Hfr cell, a high-frequency recombination cell.

Describe how an Hfr strain can transfer portions of the bacterial chromosome to recipient strains (Figure 7.6).

1. The origin of transfer within the integrated F factor determines the starting point and direction of this transfer process.
2. One of the DNA strands is cut at the origin of transfer. This cut, or nicked, site is the starting point at which the Hfr chromosome enters the F⁻ recipient cell.
3. From this starting point, one strand of DNA from the Hfr chromosome begins to enter the F⁻ cell in a linear manner. The transfer process occurs in conjunction with chromosomal replication, so the Hfr cell retains its original chromosomal composition.
4. About 1.5 to 2 hours is required for the entire Hfr chromosome to pass into the F⁻ cell. Because most conjugations do not last that long, usually only a portion of the Hfr chromosome is transmitted to the F⁻ cell.
5. Once inside the F⁻ cell, the chromosomal material from the Hfr cell can swap, or recombine, with the homologous region of the recipient cell's chromosome.

Construct a genetic map using data from conjugation experiments.

Pages 164-165 walk you through the process.

Use the terms listed below to correctly explain concepts, assigned figures, and specified end-of-chapter questions: Hfr strain, F′ factors, interrupted mating, antibiotics (e.g., streptomycin), minutes.

An Hfr element is very similar to an F⁺ plasmid; the big difference is that, unlike the F⁺ plasmid, it is integrated into the chromosome of the strain, and is therefore called a high-frequency recombination factor. The F factor will integrate only if regions of the chromosome are similar to it; alignment occurs, and the chromosome incorporates the F factor into its genome. Excision of the integrated F factor proceeds by loop formation and breakage, but because the breakage is imprecise, the excised F factor can carry part of the chromosomal DNA, and the chromosome retains a leftover piece of the F factor. An F factor excised from the chromosome that contains some chromosomal DNA is called F′. Experiments determining gene distances in an Hfr strain allowed us to understand how the transfer occurs. A blender was used to interrupt the mating, that is, the process of conjugation between the Hfr donor and the F⁻ recipient. Two different strains were used: if a strain carried a streptomycin-sensitivity gene, there would be no growth unless gene transfer had occurred through conjugation. To calculate the distance between genes, minutes were used as units, because time was used to determine how long it took the genes to be transferred from one cell to another.

Hfr strain: term used to designate bacterial strains in which the F factor has integrated into the bacterial chromosome.

F′ factors: term used to describe an F factor that contains a portion of the bacterial chromosome.

Interrupted mating: conjugation events between F⁺ and F⁻ cells that are not allowed to proceed to completion; useful in conjugation mapping.

Antibiotics (e.g., streptomycin): substances that kill bacterial cells (e.g., streptomycin, kanamycin, and ampicillin).

Minutes: the unit of map distance used in bacterial genetic maps.

Figures 7.5, 7.6, 7.7 and 7.9 (there is a video clip lecture for this figure in the Bb folder).

With regard to conjugation, a key difference between F⁺ and Hfr cells is that an Hfr cell
a. is unable to conjugate.
b. transfers a plasmid to the recipient cell.
c. transfers a portion of the bacterial chromosome to the recipient cell.
d. becomes an F⁻ cell after conjugation.
c

In mapping experiments, ___ strains are conjugated to F⁻ strains. The distance between two genes is determined by comparing their ___ during a conjugation experiment.
a. F⁻, times of entry
b. Hfr, times of entry
c. F⁻, expression levels
d. Hfr, expression levels

b

S2. By conducting conjugation experiments between Hfr and recipient strains, Wollman and Jacob mapped the order of many bacterial genes. Throughout the course of their studies, they identified several different Hfr strains in which the F factor DNA had been integrated at different places along the bacterial chromosome. A sample of their experimental results is shown in the table on page 178.
A. Explain how these results are consistent with the idea that the bacterial chromosome is circular.
B. Draw a map that shows the order of genes and the locations of the origins of transfer among these different Hfr strains.

A. In comparing the data among different Hfr strains, the order of the nine genes was always the same or the reverse of the same order. For example, HfrH and Hfr4 transfer the same genes, but their orders are reversed relative to each other. In addition, the Hfr strains showed an overlapping pattern of transfer with regard to the origin. For example, Hfr1 and Hfr2 had the same order of genes, but Hfr1 began with leu and ended with azi, whereas Hfr2 began with pro and ended with lac. From these findings, Wollman and Jacob concluded that the origin of transfer had been inserted at different points within a circular E. coli chromosome in different Hfr strains. They also concluded that the origin can be inserted in either orientation, so the direction of gene transfer can be clockwise or counterclockwise around the circular bacterial chromosome.
B. See the map on page 179.

S3. An Hfr strain that is leuA⁺ and thiL⁺ was mixed with a strain that is leuA⁻ and thiL⁻. In the data points shown here, the conjugation was interrupted, and the percentage of recombinants for each gene was determined by streaking on a medium that lacked either leucine or thiamine. The results are shown in the graph on page 179. What is the map distance (in minutes) between these two genes?

Answer: This problem is solved by extrapolating the data points to the x-axis to determine the time of entry. For leuA, the line extrapolates back to 10 minutes. For thiL, it extrapolates back to 20 minutes. Therefore, the distance between the two genes is approximately 10 minutes.

C5. What is the role of the origin of transfer during F⁺- and Hfr-mediated conjugation? What is the significance of the direction of transfer in Hfr-mediated conjugation?

Answer: The role of the origin of transfer is to provide a starting site where two important events occur: the DNA is nicked, and one strand begins its transfer into a recipient cell. The direction of transfer in Hfr-mediated conjugation determines the order in which genes are transferred. For example, if the origin is between genes A and B, it could be oriented so that gene A is transferred first; alternatively, it could be oriented in the opposite direction so that gene B is transferred first.

E4. What is an interrupted mating experiment? What type of experimental information can be obtained from this type of study? Why is it necessary to interrupt mating?
Answer: An interrupted mating experiment is a procedure in which two bacterial strains are allowed to mate, and then the mating is interrupted at various time points. The interruption occurs by agitation of the solution in which the bacteria are found. This type of study is used to map the locations of genes. It is necessary to interrupt mating so that you can vary the time and obtain information about the order of transfer: which gene transferred first, second, and so on.

E5. In a conjugation experiment, what is meant by the time of entry? How is the time of entry determined experimentally?

Answer: The time of entry is the time it takes for a gene to be initially transferred from one bacterium to another. To determine this time, we make many measurements at various lengths of time and then extrapolate these data back to the x-axis.

E7. As mentioned in solved problem S2, origins of transfer can be located in many different locations, and their direction of transfer can be clockwise or counterclockwise. Let's suppose a researcher conjugated six different Hfr strains that were thr⁺ leu⁺ tonˢ strʳ aziˢ lac⁺ gal⁺ pro⁺ met⁺ to an F⁻ strain that was thr⁻ leu⁻ tonʳ strˢ aziʳ lac⁻ gal⁻ pro⁻ met⁻, and obtained the results shown on page 181. Draw a circular map of the E. coli chromosome and describe the locations and orientations of the origins of transfer in these six Hfr strains.

RULE:
- Always put in order of the first row, CLOCKWISE.
- Always use one letter; if a letter repeats, use L1, L2.
- Always put the arrow BEFORE the first letter, never between the first and second row!!!!!
- Then see if it is clockwise or counterclockwise.

E8. An Hfr strain that is hisE⁺ and pheA⁺ was mixed with a strain that is hisE⁻ and pheA⁻. The conjugation was interrupted and the percentage of recombinants for each gene was determined by streaking on a medium that lacked either histidine or phenylalanine. The results are shown in the figure on page 181.
A. Determine the map distance (in minutes) between these two genes.
B. In a previous experiment, it was found that hisE is 4 minutes away from the gene pabB. pheA was shown to be 17 minutes from this gene. Draw a genetic map describing the locations of all three genes.

A. If we extrapolate these lines back to the x-axis, the hisE line intersects at about 3 minutes and the pheA line intersects at about 24 minutes. These are the values for the times of entry. Therefore, the distance between these two genes is 21 minutes (i.e., 24 minus 3).
B. A picture is drawn in the answer key.

How is an F′ factor different from an F factor?

An F′ factor carries a portion of the bacterial chromosome, whereas an F factor does not.

Review Figure 7.6. With regard to the timing of conjugation, explain why the recipient cell in the top right is pro⁻ whereas the recipient cell in the bottom right is pro⁺.

Because conjugation occurred for a longer period of time, pro⁺ was transferred in the conjugation experiment shown in the bottom right.

In eukaryotic genetic mapping, the units of distance are cM, % recombination, and mu. What are the units in bacterial genetic mapping, and why is this scale appropriate?

Researchers scale genetic maps from bacterial conjugation studies in units of minutes. This unit refers to the relative time it takes for genes to first enter an F⁻ recipient strain during a conjugation experiment. Because the chromosome is circular, we must arbitrarily assign a starting point on the map, in this case the gene thrA. The distance between two genes is determined by comparing their times of entry during a conjugation experiment.
Consider Figure 7.9. Which of the two genes (lacZ and galE) is closer to the origin of transfer? Explain the rationale for your response.

The lacZ gene is closer to the origin of transfer; its transfer began at 16 minutes. In Figure 7.9, the time of entry is found by conducting conjugation experiments with different time intervals before interruption. We compute the time of entry by extrapolating the data back to the x-axis. In this experiment, the time of entry of the lacZ gene was approximately 16 minutes, and that of the galE gene was 25 minutes. Therefore, these two genes are approximately 9 minutes apart from each other along the E. coli chromosome.

What is the difference between F⁻, F⁺, Hfr and F′?

F⁻ lacks the fertility factor; F⁺ has the fertility factor; in Hfr the fertility factor is integrated into the bacterial chromosome; F′ is an excised fertility factor carrying additional genes from the bacterial chromosome.

True or False: Typically, only a portion of the Hfr chromosome is transmitted to the F⁻ cell.

True.

By what process (two words) can the chromosomal material from the Hfr become integrated into the recipient cell's chromosome?

Homologous recombination.

What factor (one word) most likely determines the amount of genetic material transferred from an Hfr to an F⁻ cell?

Time.

What piece of equipment (one word) did Wollman and Jacob use in their technique of interrupted mating?

A blender.

Examine the figure of the simplified genetic map of E. coli. What is the approximate distance between the lacZ,Y,A gene cluster and the galE gene? Just put the closest whole number and no units. Verify your answer by viewing the figure showing the time course of an interrupted E. coli conjugation experiment.

9

In Jacob and Wollman's experiments, what was used to kill the donor strain following conjugation (one word)?

Streptomycin.

In chapter 6, we used mu and cM as units of genetic distance. What is the unit of distance in bacterial mapping experiments?

Minute (NOT plural!).

Complete this statement: The time it takes for genes to enter a recipient cell is directly related to their ___ along the bacterial ___. Enter the two words separated by one space and without punctuation.

order chromosome
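The arithmetic behind these time-of-entry answers is simple enough to script. Below is a minimal C sketch (the data points and the helper name are invented for illustration; they merely reproduce the 3-minute and 24-minute intercepts quoted in E8) that extrapolates each gene's recombinant-percentage line back to the x-axis and takes the difference as the map distance:

```c
#include <stdio.h>

/* Extrapolate the x-intercept (time of entry) of the straight line
   through two (time in minutes, % recombinants) data points. */
static double time_of_entry(double t1, double p1, double t2, double p2)
{
    double slope = (p2 - p1) / (t2 - t1);
    return t1 - p1 / slope;   /* where the line crosses % recombinants = 0 */
}

int main(void)
{
    /* hypothetical readings for the two genes of problem E8 */
    double hisE = time_of_entry(10.0, 14.0, 20.0, 34.0);  /* -> 3 minutes  */
    double pheA = time_of_entry(30.0,  6.0, 40.0, 16.0);  /* -> 24 minutes */

    printf("hisE enters at %.1f min, pheA at %.1f min\n", hisE, pheA);
    printf("map distance = %.1f minutes\n", pheA - hisE);
    return 0;
}
```

With these made-up points the program prints a map distance of 21 minutes, matching the flashcard answer (24 minus 3).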
Should I use fgets or scanf with limited input in c? - Stack Overflow
===============
Asked 3 years, 8 months ago. Modified 3 years, 8 months ago. Viewed 1k times.

Should I use fgets or formatted scanf like scanf("%10s", foo)? Except that scanf does not read blank characters (which can be solved, and more can be done, with a scanset), why should I use fgets instead of scanf? Any help would be appreciated.
Edit: One more thing I want to ask: even when we use fgets, what happens if the user enters more characters than the boundary (I mean a lot of characters)? Does it lead to a buffer overflow? Then how do we deal with it?

Tags: c, input, scanf, fgets. Asked Nov 23, 2021 by Becker.

Comments:
- scanf should pretty much never get used in professional code. fgets is both faster and safer. – Lundin, Nov 23, 2021 at 14:05
- scanf reads precisely formatted input and does not care about line boundaries. Users are rarely precise and enter lines of text. – stark, Nov 23, 2021 at 14:09
- Overflow conditions, for both string buffers and numeric input, are addressed in my answer below. – ryyker, Nov 23, 2021 at 16:41

4 Answers

Answer 1 (Andreas Wenzel, score 5):

On most operating systems, user input is, by default, line-based. One reason for this is to allow the user to press the backspace key to correct the input before sending the input to the program. For line-based user input, it is meaningful and intuitive for a program to read one line of input at a time. This is what the function fgets does (provided that the buffer is large enough to store the entire line of input).

The function scanf, on the other hand, normally does not read one line of input at a time. For example, when you use the %s or %d conversion format specifier with scanf, it will not consume an entire line of input. Instead, it will only consume as much input as matches the conversion format specifier. This means that the newline character at the end of the line will normally not be consumed (which can easily lead to programming bugs). Also, scanf called with the %d conversion format specifier will consider input such as 6sldf23dsfh2 as valid input for the number 6, but any further calls to scanf with the same specifier will fail, unless you discard the remainder of the line from the input stream.

This behavior of scanf is counter-intuitive, whereas the behavior of fgets is intuitive when dealing with line-based user input. After using fgets, you can use the function sscanf on the string for parsing the contents of an individual line. This will allow you to continue using scansets. Or you can parse the line by some other means. Either way, as long as you are using fgets instead of scanf for reading the input, you will be handling one line of input at a time, which is the natural and intuitive way to deal with line-based user input.

"When we use fgets, what happens if the user enters more characters than the boundary (I mean a lot of characters)? Does it lead to buffer overflow? Then how to deal with it?"

If the user enters more characters than fit in the buffer as specified by the second fgets function argument, then it will not overflow the buffer. Instead, it will only extract as many characters from the input stream as fit in the buffer. You can determine whether the entire line was read by checking whether the string contains a newline character '\n' at the end.
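A minimal sketch of that final check (buffer name and size are illustrative, not taken from the answer): read a line with fgets, test for the trailing '\n' to see whether the whole line fit, and drain the leftovers if it did not:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[16];

    if (fgets(buf, sizeof buf, stdin) != NULL)
    {
        if (strchr(buf, '\n') != NULL)
        {
            /* the whole line (including '\n') fit into the buffer */
            printf("got a complete line: %s", buf);
        }
        else
        {
            /* line was longer than the buffer: discard the remainder */
            int c;
            while ((c = getchar()) != '\n' && c != EOF)
                ;
            printf("line truncated to: %s\n", buf);
        }
    }
    return 0;
}
```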
Comments:
- Thank you so much, your answer was really helpful to me. – Becker, Nov 23, 2021 at 14:45
- The behavior of fgets() is only intuitive for inputs that are not longer than expected. – supercat, Nov 23, 2021 at 22:49

Answer 2 (ryyker, score 2):

This is a commonly discussed topic, full of opinion but interesting none the less. I have observed that a large majority of those that have already responded to similar questions on this site fall on the side of fgets(). I am one of them. I find fgets() to be much better to use for user input than scanf(), with few exceptions.

scanf() is considered by many as a sub-optimal method for handling user input. For example, "...it will tell you whether it succeeded or failed, but can tell you only approximately where it failed, and not at all how or why. You have very little opportunity to do any error recovery." (jamesdlin). But in the interest of attempting balance, I will start off citing this discussion.

For user input that comes from stdin, i.e. keyboard input, fgets() will be a better choice. It is much more forgiving in that the string it reads can be fully validated before conversion is attempted.

One of the few times when using a form of scanf(), namely fscanf(), would be okay is when converting input from a very controlled source, i.e. reading a strictly formatted file with repeating, predictable fields. For more discussion, this comparison of the two highlights additional advantages and disadvantages of both.

Edit, to address the OP's additional question about overflow ("even when we use fgets, what happens if the user enters more characters than the boundary...?"):

fgets() prevents input greater than the buffer size from being processed, thus preventing the overflow.

Even using scanf(), preventing buffer overflow is pretty straightforward: use a width specifier in the format string. If you want to read input and, for example, limit the input size from the user to 100 characters max, the code would include the following:

```c
char buffer[101] = {0};   // includes space for 100 chars + 1 for null termination
scanf("%100s", buffer);
//       ^^^ width specifier
```

However, with numbers, overflow is not so nice using scanf(). To demonstrate, use this simple code, inputting the two values indicated in the comment, one per run:

```c
#include <stdio.h>

int main(void)
{
    int val = 0;
    // test with 2147483647 & 2147483648
    scanf("%d", &val);
    printf("%d\n", val);
    return 0;
}
```

For the second value, my system throws the following:

NON-FATAL RUN-TIME ERROR: "test.c", line 11, col 5, thread id 22832: Function scanf: (errno == 34 [0x22]). Range error

Here you need to read in a string, then follow with a string-to-number conversion using one of the strto_() functions (strtol(), strtod(), ...). These include the ability to test for overflow before causing a run-time warning or error. Note that using atoi() or atof() will not protect from overflow either.
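A sketch of that strto_() route (buffer size and messages are illustrative only): read a line with fgets, then convert with strtol and check errno and the end pointer for range and format errors:

```c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <limits.h>

int main(void)
{
    char line[64];

    if (fgets(line, sizeof line, stdin) != NULL)
    {
        char *end;
        errno = 0;
        long val = strtol(line, &end, 10);  /* base-10 conversion */

        if (end == line)
            printf("no digits found\n");
        else if (errno == ERANGE || val < INT_MIN || val > INT_MAX)
            printf("value out of int range\n");
        else
            printf("parsed %d\n", (int)val);
    }
    return 0;
}
```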
Comments:
- Thank you, I really appreciate your answer. – Becker, Nov 23, 2021 at 14:51
- A question "full of opinion"? I must respectfully disagree. It's not a matter of opinion that scanf is almost completely useless, good at best for reading single, simple inputs in intro-to-C programs, but prohibitively difficult to do anything remotely sophisticated with — these are obvious facts! :-) – Steve Summit, Nov 23, 2021 at 16:12

Answer 3 (Galbatrollix, score 1):

So far, all answers here presented intricacies of scanf and fgets, but what I believe is worth mentioning is that both of those functions are widely discouraged for interactive input. scanf is especially dangerous, because it has all sorts of security issues with buffer overflows. fgets is not as problematic, but from my experience it tends to be a bit clunky and not-so-useful in practice.

The truth is, often you don't really know how long the user input will be. You can get around this by using fgets with an "I hope this is big enough" buffer, but that's not really elegant. Instead, what you often want is a dynamic buffer that will grow to be big enough to store whatever input the user delivers. And this is when the getline function comes into play. It's used to read any number of characters from the user, until \n is encountered. Essentially, it loads the entire line into your memory, as a string.

```c
ssize_t getline(char **lineptr, size_t *n, FILE *stream);
```

This function takes a pointer to a dynamically allocated string as the first argument, a pointer to the size of the allocated buffer as the second argument, and a stream as the third argument (you will essentially place stdin there for command-line input). It returns the number of characters read, including the \n at the end, but not the terminating null. Here you can see an example usage of this function:

```c
#include <stdio.h>
#include <stdlib.h>

int main()
{
    printf("Input Something:\n");          // asking user for input

    size_t length = 10;                    // creating "base" size of our buffer
    char *input_string = malloc(length);   // allocating memory based on our initial buffer size

    // loading line from console into input_string
    ssize_t length_read = getline(&input_string, &length, stdin);

    // now length_read contains how many characters we read,
    // and length contains the new size of our buffer (if it changed during the getline execution)
    printf("Characters read (including end of line but not null at the end)"
           ": %zd, current size of allocated buffer: %zu, string: %s",
           length_read, length, input_string);

    free(input_string);  // like any other dynamically allocated pointer, you must free it after usage
    return 0;
}
```

Of course, using this function requires basic knowledge about pointers and dynamic memory in C; however, the slightly more complicated nature of getline is definitely worth it, because of the provided security and flexibility. You can read more about this function, and other input functions available in C, on this website: I believe it summarizes the intricacies of C input pretty well.
Comments:
- Thanks for your advice and the link, it helps me a lot. – Becker, Nov 24, 2021 at 5:34

Answer 4 (Vlad from Moscow, score 1):

If you have, for example, a character array declared like

```c
char s[100];
```

and want to read a string that contains embedded spaces, then you can use either scanf the following way:

```c
scanf("%99[^\n]", s);
```

or fgets like:

```c
fgets(s, sizeof(s), stdin);
```

The difference between these two calls is that the call of scanf does not read the newline character '\n' from the input buffer, while fgets reads the newline character '\n' if there is enough space in the character array.

To remove the newline character '\n' that is stored in the character array after using fgets, you can write, for example:

```c
s[strcspn(s, "\n")] = '\0';
```

If the input string has more than 99 characters, then both calls read only 99 characters and append the sequence of characters with the terminating zero character '\0'. All remaining characters will still be in the input buffer.

There is a problem with fgets. For example, if scanf is used before fgets, as in:

```c
scanf("%d", &x);
fgets(s, sizeof(s), stdin);
```

and the user input is:

```
10
Hello World
```

then the call of fgets will read only the newline character '\n' that is stored in the buffer after pressing the Enter key when the integer value in the call of scanf was read. In this case you need to write code that removes the newline character '\n' before calling fgets. You can do this, for example, the following way:

```c
scanf("%d", &x);
scanf(" ");                    // skip whitespace, including the pending '\n'
fgets(s, sizeof(s), stdin);
```

If you are using scanf, then in such a situation you can write:

```c
scanf("%d", &x);
scanf(" %99[^\n]", s);
//     ^^ leading space skips the pending newline
```
Green's Functions and Perturbation Theory

Part of the book series: Springer Series in Solid-State Sciences (SSSOL, volume 7)

Abstract

The problem of finding the eigenvalues and eigenfunctions of a Hamiltonian H = H₀ + H₁ can be solved in three steps: 1) calculate the Green's function G₀(z) corresponding to H₀; 2) express G(z) as a perturbation series in terms of G₀(z) and H₁, where G(z) is the Green's function associated with H; and 3) extract from G(z) information about the eigenvalues and eigenfunctions of H.

Author: Professor Eleftherios N. Economou, Department of Physics, University of Crete, Heraklion, Crete, Greece.

Economou, E.N. (1983). Green's Functions and Perturbation Theory. In: Green's Functions in Quantum Physics. Springer Series in Solid-State Sciences, vol 7. Springer, Berlin, Heidelberg. Print ISBN 978-3-540-12266-1; Online ISBN 978-3-662-02369-3. © 1983 Springer-Verlag Berlin Heidelberg.
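For orientation, step 2) of the abstract rests on the standard resolvent (Dyson) expansion; the identity below is textbook material stated here for reference, with the usual convention $G(z) = (z - H)^{-1}$ and $G_0(z) = (z - H_0)^{-1}$ (the chapter itself may use a different sign convention):

$$G(z) = G_0(z) + G_0(z) H_1 G(z) = G_0(z) + G_0(z) H_1 G_0(z) + G_0(z) H_1 G_0(z) H_1 G_0(z) + \cdots$$

Iterating the first equality generates the perturbation series in powers of $H_1$, whose poles and residues then yield the eigenvalues and eigenfunctions of $H$ in step 3).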
Lie symmetry analysis - Encyclopedia of Mathematics
===============

Lie symmetry analysis with symbolic software

The Norwegian mathematician S. Lie pioneered the study of continuous Lie transformation groups (cf. Lie transformation group) that leave invariant systems of differential equations. As a result of Lie's work [a2], [a3], diverse and ad hoc integration methods for solving special classes of differential equations came under a common conceptual umbrella. For ordinary differential equations (ODEs), Lie's infinitesimal transformation method provides a widely applicable technique to find closed-form similarity solutions. Nearly all standard solution methods for first-order or linear ODEs can be characterized in terms of symmetries. Through the group classification of ODEs, Lie also succeeded in identifying all ODEs that can either be reduced to lower-order ones or be completely integrated via group-theoretic techniques.

Applied to partial differential equations (PDEs), Lie's method leads to group-invariant solutions and conservation laws. Exploiting the symmetries of PDEs, new solutions can be derived from known ones, and PDEs can be classified into equivalence classes. Furthermore, group-invariant solutions obtained via Lie's approach may provide insight into the physical models themselves, and explicit solutions can serve as benchmarks in the design, accuracy testing, and comparison of numerical algorithms.

Lie's original ideas had great potential to profoundly influence the study of physically important systems of differential equations. However, the application of Lie group methods to concrete physical systems involves tedious and unwieldy computations. Even the calculation of the continuous symmetry group of a modest system of differential equations is prone to errors, if done with pencil and paper. The availability of computer algebra systems (such as Mathematica or Maple) has changed all that. There now exist many symbolic packages that can aid in the computation of Lie symmetries and similarity solutions of differential equations. Sophisticated packages not only automatically compute the system of determining equations of the Lie symmetry group, but also reduce these into an equivalent yet more suitable system, subsequently solve it in closed form, and go on to calculate the infinitesimal generators that span the Lie algebra of symmetries. In [a1], detailed information is given about numerous Lie symmetry computer packages, together with a review of their strengths and weaknesses.

The classical Lie symmetry group of a system of differential equations is a local group of point transformations, meaning diffeomorphisms on the space of independent and dependent variables, that map solutions of the system into other solutions.
Elementary examples of Lie point symmetries.

Example 1.

This example illustrates the concept of Lie's method. It is well known that homogeneous first-order ODEs, like

$$y' = \frac{y^2 + 2xy}{x^2}, \tag{a1}$$

can be simplified upon substitution of $y = x v(x)$. Indeed, (a1) then reduces to $x v' = v + v^2$, which can be readily integrated, leading to $y(x) = c x^2 / (1 - cx)$, where $c$ is the integration constant, as solution of (a1). Lie realized that the substitution $y = xv$ leads to a separable equation because (a1) is invariant under the one-parameter group of scaling transformations, with parameter $\epsilon$:

$$\tilde{x}(\epsilon) = x \exp(\epsilon), \qquad \tilde{y}(\epsilon) = y \exp(\epsilon),$$

which obviously leaves invariant the quantity

$$v = \frac{y}{x} = \frac{\tilde{y}}{\tilde{x}}.$$

Example 2.

Consider the Riccati equation

$$y' + y^2 - \frac{2}{x^2} = 0, \tag{a2}$$

which is invariant under the one-parameter group of transformations

$$\tilde{x}(\epsilon) = x \exp(\epsilon), \qquad \tilde{y}(\epsilon) = y \exp(-\epsilon).$$

Hence, if $y = f(x)$ solves (a2), then $\tilde{y}(\tilde{x}) = \exp(-\epsilon)\, f(\tilde{x} \exp(-\epsilon))$ solves (a2) with tildes on all the variables. Hence, starting with a known solution, Lie's method yields a family of new solutions. Quite often, interesting solutions can be obtained from trivial ones.

Example 3.

This example shows that Lie's method is applicable to PDEs, such as the linear heat equation

$$\frac{\partial u}{\partial t} - \frac{\partial^2 u}{\partial x^2} = u_t - u_{xx} = 0, \tag{a3}$$

which admits, amongst several others, the one-parameter group of combined scalings

$$\tilde{x}(\epsilon) = x \exp(\epsilon), \qquad \tilde{t}(\epsilon) = t \exp(2\epsilon), \qquad \tilde{u}(\epsilon) = u.$$

Therefore, if $u = f(x,t)$ solves (a3), so will $u = f(x \exp(-\epsilon), t \exp(-2\epsilon))$. A less obvious symmetry group of (a3) is determined by $\tilde{x}(\epsilon) = x + 2\epsilon t$, $\tilde{t}(\epsilon) = t$, $\tilde{u}(\epsilon) = u \exp(-\epsilon x - \epsilon^2 t)$, which expresses that $u = \exp(-\epsilon x + \epsilon^2 t)\, f(x - 2\epsilon t, t)$ is a solution to (a3) when $u = f(x,t)$ is.

Computation of Lie point symmetries.

There are two major methods to compute Lie symmetries. The first method, which is implemented in most of the Lie symmetry packages, uses prolonged vector fields; the second one utilizes Cartan's exterior calculus. The steps of the prolongation method can be summarized as follows. For a system of $m$ differential equations,

$$\Delta_i(x, u^{(k)}) = 0, \qquad i = 1, \ldots, m, \tag{a4}$$

of arbitrary order $k$, with $p$ independent variables $x = (x_1, \ldots, x_p) \in \mathbf{R}^p$ and $q$ dependent variables $u = (u^1, \ldots, u^q) \in \mathbf{R}^q$, the partial derivatives of $u^l$ are represented using a multi-index notation,

$$u_J^l \equiv \frac{\partial^{|J|} u^l}{\partial x_1^{j_1} \cdots \partial x_p^{j_p}}, \tag{a5}$$

where for $J = (j_1, \ldots, j_p) \in \mathbf{N}^p$, $|J| = j_1 + \cdots + j_p$, and $u^{(k)}$ stands for the vector whose components are the partial derivatives up to order $k$ of all $u^l$. The group transformations, parametrized by $\epsilon$, have the form $\tilde{x}(\epsilon) = \Lambda_G(x, u, \epsilon)$, $\tilde{u}(\epsilon) = \Omega_G(x, u, \epsilon)$, where the functions $\Lambda_G$ and $\Omega_G$ are to be determined.
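To fix ideas with the multi-index notation (a5), here is a small instance (the numbers are ours, not from the article): for $p = 2$ independent variables $(x_1, x_2)$ and the multi-index $J = (1, 2)$, one has $|J| = 3$ and

$$u_J^l = \frac{\partial^3 u^l}{\partial x_1 \, \partial x_2^2},$$

a third-order derivative taken once with respect to $x_1$ and twice with respect to $x_2$.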
Lie realized that the one-parameter Lie group $G$ can be completely recovered from the knowledge of the linear terms in the Taylor series of $\Lambda_G$ and $\Omega_G$:

$$\tilde{x}_i(\epsilon) = x_i + \epsilon \left. \frac{\partial \Lambda_G(x,u,\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} + O(\epsilon^2) = x_i + \epsilon\, \eta^i(x,u) + O(\epsilon^2), \qquad i = 1, \ldots, p,$$

$$\tilde{u}^l(\epsilon) = u^l + \epsilon \left. \frac{\partial \Omega_G(x,u,\epsilon)}{\partial \epsilon} \right|_{\epsilon=0} + O(\epsilon^2) = u^l + \epsilon\, \varphi^l(x,u) + O(\epsilon^2), \qquad l = 1, \ldots, q,$$

where $\tilde{x}(0) = x$ and $\tilde{u}(0) = u$. Therefore, in the method of prolonged vector fields [a2], [a3], instead of considering the Lie group $G$, one concentrates on its Lie algebra $L$, realized by vector fields of the form

$$\alpha = \sum_{i=1}^p \eta^i(x,u) \frac{\partial}{\partial x_i} + \sum_{l=1}^q \varphi^l(x,u) \frac{\partial}{\partial u^l}. \tag{a6}$$

To determine the coefficients $\eta^i(x,u)$ and $\varphi^l(x,u)$, one has to construct the $k$th prolongation $\mathrm{pr}^{(k)} \alpha$ of the vector field $\alpha$ (cf. also Prolongation of solutions of differential equations), apply it to the system (a4), and make the resulting expression vanish on the solution set of (a4). The result is a system of linear homogeneous PDEs for $\eta^i$ and $\varphi^l$, in which $x$ and $u$ are treated as independent variables. That system is called the determining or defining system for the symmetries. Solution of the system by hand, interactively or automatically with a symbolic package, yields the explicit forms of $\eta^i(x,u)$ and $\varphi^l(x,u)$. This sounds straightforward, but the method involves tedious calculations. In particular, the complexity of the expressions for the prolongations increases rapidly as the order $k$ increases.

Algorithm for Lie point symmetries.

The technical steps of the algorithm for the computation of Lie point symmetries are:

Step 1.

Construct the $k$th prolongation of the vector field $\alpha$ in (a6) by means of the formula

$$\mathrm{pr}^{(k)} \alpha = \alpha + \sum_{l=1}^q \sum_J \psi_J^l(x, u^{(k)}) \frac{\partial}{\partial u_J^l}, \qquad 1 \le |J| \le k, \tag{a7}$$

where the coefficients $\psi_J^l$ are defined as follows. The coefficients of the first prolongation are

$$\psi_{J_i}^l = D_i \varphi^l(x,u) - \sum_{j=1}^p u_{J_j}^l D_i \eta^j(x,u), \tag{a8}$$

where $J_i$ is a $p$-tuple with $1$ in the $i$th position and zeros elsewhere, and $D_i$ is the total derivative operator

$$D_i = \frac{\partial}{\partial x_i} + \sum_{l=1}^q \sum_J u_{J+J_i}^l \frac{\partial}{\partial u_J^l}, \qquad 0 \le |J| \le k. \tag{a9}$$

The higher-order prolongations are defined recursively as

$$\psi_{J+J_i}^l = D_i \psi_J^l - \sum_{j=1}^p u_{J+J_j}^l D_i \eta^j(x,u), \qquad |J| \ge 1. \tag{a10}$$

Step 2.

Apply the prolonged operator $\mathrm{pr}^{(k)} \alpha$ to each equation $\Delta_i(x, u^{(k)})$ and require that

$$\left. \mathrm{pr}^{(k)} \alpha\, \Delta_i \right|_{\Delta_j = 0} = 0, \qquad i, j = 1, \ldots, m. \tag{a11}$$

Condition (a11) expresses that $\mathrm{pr}^{(k)} \alpha$ vanishes on the solution set of the system (a4). Precisely, this condition assures that $\alpha$ is an infinitesimal symmetry generator of the group transformation $\tilde{x} = \Lambda_G(x,u)$, $\tilde{u} = \Omega_G(x,u)$. Hence, $u(x)$ is a solution of (a4) whenever $\tilde{u}(\tilde{x})$ is one.
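To make (a8)-(a10) concrete, here is the familiar special case $p = q = 1$ of a single ODE (a standard instance, not spelled out in the article itself): with $\alpha = \eta(x,u)\,\partial_x + \varphi(x,u)\,\partial_u$, formula (a8) gives the first prolongation coefficient

$$\psi_x = D_x \varphi - u_x D_x \eta = \varphi_x + (\varphi_u - \eta_x)\, u_x - \eta_u u_x^2,$$

and one application of the recursion (a10) gives the second,

$$\psi_{xx} = D_x \psi_x - u_{xx} D_x \eta = \varphi_{xx} + (2\varphi_{xu} - \eta_{xx})\, u_x + (\varphi_{uu} - 2\eta_{xu})\, u_x^2 - \eta_{uu} u_x^3 + (\varphi_u - 2\eta_x)\, u_{xx} - 3\eta_u u_x u_{xx},$$

already a visibly longer expression; this is the growth in complexity with the order $k$ mentioned above.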
Step 3. Choose $m$ components of the vector $u^{(k)}$, say $v_1, \dots, v_m$, such that:

i) each $v_i$ is a derivative of some $u^l$ ($l = 1, \dots, q$) with respect to at least one variable $x_i$ ($i = 1, \dots, p$);

ii) none of the $v_i$ is the derivative of another one in the set;

iii) the system (a4) can be solved algebraically for the $v_i$ in terms of the remaining components of $u^{(k)}$, which are denoted by $w$; thus, $v_i = S_i(x, w)$, $i = 1, \dots, m$;

iv) the derivatives of the $v_i$, $v_{i,J} = D_J S_i(x, w)$, where $D_J \equiv D_1^{j_1} \cdots D_p^{j_p}$, can all be expressed in terms of the components of $w$ and their derivatives, without ever re-introducing the $v_i$ or their derivatives.

The requirements in Step 3 put some restrictions on the system (a4), but for many systems the choice of the appropriate $v_i$ is quite obvious. For example, for a system of evolution equations (cf. Evolution equation)

$$\frac{\partial u_i}{\partial t}(x_1, \dots, x_{p-1}, t) = F_i(x_1, \dots, x_{p-1}, t, u^{(k)}), \quad i = 1, \dots, m, \tag{a12}$$

where $u^{(k)}$ involves derivatives with respect to the variables $x_i$ but not $t$, an appropriate choice is $v_i = \partial u_i / \partial t$.

Step 4. Use $v_i = S_i(x, w)$ to eliminate all $v_i$ and their derivatives from the expression (a11), so that all the remaining variables are now independent of each other. It is tacitly assumed that the resulting expression is a polynomial in the $u^l_J$.

Step 5. Obtain the determining equations for $\eta_i(x,u)$ and $\varphi^l(x,u)$ by equating to zero the coefficients of all functionally independent expressions (monomials) in the remaining derivatives $u^l_J$.

In the above algorithm the variables $x_i$, $u^l$ and $u^l_J$ are treated as independent; the dependent ones are $\eta_i$ and $\varphi^l$. In summary: first, one generates the so-called determining or defining equations for the symmetries of the system; secondly, one solves these by hand, interactively, or automatically with a symbolic package, to determine the explicit forms of $\eta_i(x,u)$ and $\varphi^l(x,u)$. From the Lie algebra of symmetry generators, one can obtain the Lie group of point transformations upon integration of a system of first-order characteristic equations. A detailed review of innovative ways of classifying, subsequently reducing, and finally solving overdetermined systems of linear homogeneous PDEs is given in [a1].

Lie symmetry software.

To design a reliable and powerful integration algorithm for a system of determining equations, the system needs to be brought into a standard form. Standard-form procedures can be viewed as generalizations, to systems of linear PDEs, of the Gaussian reduction method (cf. Gauss method) for matrices or linear systems, except that integrability conditions are also added to the system. In essence, the standard (or involutive) form of a system of PDEs is an equivalent, simplified, ordered triangular system with all integrability conditions included and all redundancies (differential and algebraic) eliminated. Customized, yet sophisticated, symbolic code in MACSYMA, Maple, and REDUCE exists for that purpose. The algorithms of the major Lie symmetry packages have their roots in the Riquier–Janet theory of differential equations (to transform a linear system of PDEs into involutive form). Modern implementations of "triangulation" algorithms use a differential version of the Gröbner basis algorithm for algebraic equations.
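For first-order ODEs, SymPy contains a small implementation of this pipeline. The sketch below assumes a reasonably recent SymPy in which `infinitesimals` and `checkinfsol` are exposed under these names in `sympy.solvers.ode`; its heuristics may fail to find symmetries for harder equations.

```python
import sympy as sp
from sympy.solvers.ode import infinitesimals, checkinfsol

x = sp.Symbol('x')
y = sp.Function('y')

# the Riccati equation (a2), rewritten as y' = 2/x**2 - y**2
eq = sp.Eq(y(x).diff(x), 2/x**2 - y(x)**2)

# generate and solve the determining system heuristically
infs = infinitesimals(eq, func=y(x))   # list of {xi: ..., eta: ...} pairs
if infs:
    print(infs[0])
    print(checkinfsol(eq, infs)[0])    # (True, 0) when the pair is a symmetry
```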
Parenthetically, Lie's group theory for differential equations also mirrors Galois' theory for solving algebraic equations: the group of point transformations of an ODE in Lie theory plays the role of the permutation group of the solutions in Galois theory, and both group structures provide insight into the existence and the types of solutions.

Triangulation algorithms may be used to bypass the explicit integration of the determining equations and compute the dimension of the Lie symmetry group and the commutators immediately. Once systems are reduced to standard involutive form, subsequent integration is more tractable and reliable. One can use separation of variables, standard techniques for linear differential equations, and specific heuristic rules, as given in [a1]. The only determining equations left for manual handling should be the "constraint" equations, or any other equations whose general solutions cannot be written explicitly in closed form.

Worked example.

To illustrate the computation of Lie point symmetries, consider a PDE due to H. Dym and M.D. Kruskal:

$$u_t - u^3 u_{xxx} = 0. \tag{a13}$$

Clearly, this is a single equation with two independent variables, $x_1 = x$, $x_2 = t$, and one dependent variable, $u^1 = u$. Symmetry software will automatically generate the determining equations for the coefficients $\eta_1$, $\eta_2$ and $\varphi^1$ of the vector field

$$\alpha = \eta_1 \frac{\partial}{\partial x_1} + \eta_2 \frac{\partial}{\partial x_2} + \varphi^1 \frac{\partial}{\partial u^1}.$$

There are only eight determining equations:

$$\frac{\partial \eta_2}{\partial u^1} = 0, \quad \frac{\partial \eta_2}{\partial x_1} = 0, \quad \frac{\partial \eta_1}{\partial u^1} = 0, \quad \frac{\partial^2 \varphi^1}{\partial (u^1)^2} = 0,$$

$$\frac{\partial^2 \varphi^1}{\partial u^1 \partial x_1} - \frac{\partial^2 \eta_1}{\partial x_1^2} = 0, \qquad \frac{\partial \varphi^1}{\partial x_2} - (u^1)^3 \frac{\partial^3 \varphi^1}{\partial x_1^3} = 0,$$

$$3 (u^1)^3 \frac{\partial^3 \varphi^1}{\partial u^1 \partial x_1^2} + \frac{\partial \eta_1}{\partial x_2} - (u^1)^3 \frac{\partial^3 \eta_1}{\partial x_1^3} = 0,$$

$$u^1 \frac{\partial \eta_2}{\partial x_2} - 3 u^1 \frac{\partial \eta_1}{\partial x_1} + 3 \varphi^1 = 0.$$

Without user intervention, these determining equations are then solved explicitly. The general solution, rewritten in the original variables, is

$$\eta_1 = k_1 + k_3 x + k_5 x^2, \quad \eta_2 = k_2 - 3 k_4 t, \quad \varphi^1 = (k_3 + k_4 + 2 k_5 x)\, u,$$

where $k_1, \dots, k_5$ are arbitrary constants. The five infinitesimal generators are

$$G_1 = \partial_x, \quad G_2 = \partial_t, \quad G_3 = x \partial_x + u \partial_u, \quad G_4 = -3t \partial_t + u \partial_u, \quad G_5 = x^2 \partial_x + 2xu \partial_u.$$

Clearly, (a13) is invariant under translations ($G_1$ and $G_2$) and scalings ($G_3$ and $G_4$). The flow corresponding to each of the infinitesimal generators can be obtained via simple integration. As an example, the flow corresponding to $G_5$ is computed. This requires integration of the first-order system

$$\frac{d\tilde{x}}{d\epsilon} = \tilde{x}^2, \quad \tilde{x}(0) = x; \qquad \frac{d\tilde{t}}{d\epsilon} = 0, \quad \tilde{t}(0) = t; \qquad \frac{d\tilde{u}}{d\epsilon} = 2 \tilde{x} \tilde{u}, \quad \tilde{u}(0) = u,$$

where $\epsilon$ is the parameter of the transformation group. One readily obtains

$$\tilde{x}(\epsilon) = \frac{x}{1 - \epsilon x}, \qquad \tilde{t}(\epsilon) = t, \qquad \tilde{u}(\epsilon) = \frac{u}{(1 - \epsilon x)^2}.$$

Therefore, one concludes that for any solution $u = f(x,t)$ of (a13), the transformed solution

$$\tilde{u}(\tilde{x}, \tilde{t}) = (1 + \epsilon \tilde{x})^2 \, f\!\left( \frac{\tilde{x}}{1 + \epsilon \tilde{x}}, \tilde{t} \right)$$

will solve $\tilde{u}_{\tilde{t}} - \tilde{u}^3 \tilde{u}_{\tilde{x}\tilde{x}\tilde{x}} = 0$.
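That last claim can be checked mechanically. In the SymPy sketch below (mine, not from the article), the travelling-wave seed solution $f = (x + \tfrac{4}{9}t)^{2/3}$ is derived ad hoc for the test; positivity assumptions keep the fractional powers well behaved.

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', positive=True)

def dym_residual(u):
    # residual of (a13): u_t - u**3 u_xxx
    return sp.simplify(sp.diff(u, t) - u**3 * sp.diff(u, x, 3))

# an ad-hoc travelling-wave solution of (a13)
f = (x + sp.Rational(4, 9)*t)**sp.Rational(2, 3)
assert dym_residual(f) == 0

# its image under the flow of G5: u~ = (1 + eps*x)**2 f(x/(1 + eps*x), t)
u_new = (1 + eps*x)**2 * f.subs(x, x / (1 + eps*x))
assert dym_residual(u_new) == 0
```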
Beyond Lie point symmetries.

For the computation of generalized symmetries or Lie–Bäcklund symmetries [a2], [a3], the use of symbolic programs is even more appropriate, since the calculations are lengthier and more time consuming. In a generalized vector field, which still takes the form (a6), the functions $\eta_i$ and $\varphi^l$ may now depend on a finite number of derivatives of $u$.

Lie symmetry packages have proven to be an effective tool for solving the overdetermined systems of linear and non-linear PDEs that arise in the study of various Lie symmetries. Yet no general algorithm is available to integrate an arbitrary (overdetermined) system of determining equations consisting of linear homogeneous PDEs for the components of $\eta$ and $\varphi$; most computer programs still use heuristic rules for the integration of the determining system. The availability of sophisticated symbolic programs for Lie symmetry computations will certainly accelerate the study of symmetries of physically important systems of differential equations in classical mechanics, fluid dynamics, elasticity, and other applied areas.

References

[a1] W. Hereman, "Symbolic software for Lie symmetry analysis", in N.H. Ibragimov (ed.), CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 3: New Trends in Theoretical Developments and Computational Methods, CRC Press (1996), Chapt. 13, pp. 367–413.

[a2] P.J. Olver, Applications of Lie Groups to Differential Equations, Graduate Texts in Mathematics 107, Springer, second ed. (1993).

[a3] H. Stephani, Differential Equations: Their Solution Using Symmetries, Cambridge Univ. Press (1989).
Introduction to Abstract Algebra (Math 113)

Alexander Paulin

Contents

1 Introduction
  1.1 What is Algebra?
  1.2 Sets and Functions
  1.3 Equivalence Relations
2 The Structure of + and × on Z
  2.1 Basic Observations
  2.2 Factorization and the Fundamental Theorem of Arithmetic
  2.3 Congruences
3 Groups
  3.1 Basic Definitions
  3.2 Subgroups, Cosets and Lagrange's Theorem
  3.3 Finitely Generated Groups
  3.4 Permutation Groups and Group Actions
  3.5 The Orbit-Stabiliser Theorem and Sylow's Theorem
  3.6 Finite Symmetric Groups
  3.7 Symmetry of Sets with Extra Structure
  3.8 Normal Subgroups and Isomorphism Theorems
  3.9 Direct Products and Direct Sums
  3.10 Finitely Generated Abelian Groups
  3.11 Finite Abelian Groups
  3.12 The Classification of Finite Groups (Proofs Omitted)
4 Rings and Fields
  4.1 Basic Definitions
  4.2 Ideals, Quotient Rings and the First Isomorphism Theorem for Rings
  4.3 Properties of Elements of Rings
  4.4 Polynomial Rings
  4.5 Field of Fractions
  4.6 Characteristic
  4.7 Ring Extensions
  4.8 Principal, Prime and Maximal Ideals
  4.9 Factorisation in Integral Domains
  4.10 Principal Ideal Domains
  4.11 Factorization in Polynomial Rings
5 Field Extensions and Galois Theory
  5.1 Field Extensions and Minimal Polynomials
  5.2 Splitting Fields
  5.3 Galois Theory (Proofs Omitted)
  5.4 Solving Polynomials By Radicals

1 Introduction

1.1 What is Algebra?

If you ask someone on the street this question, the most likely response will be: "Something horrible to do with x, y and z". If you're lucky enough to bump into a mathematician then you might get something along the lines of: "Algebra is the abstract encapsulation of our intuition for composition". By composition, we mean the concept of two objects coming together to form a new one.
For example, adding two numbers, or composing real-valued single-variable functions. As we shall discover, the seemingly simple idea of composition hides vast hidden depth. Algebra permeates all of our mathematical intuitions. In fact the first mathematical concepts we ever encounter are the foundation of the subject. Let me summarize the first six to seven years of your mathematical education:

The concept of Unity. The number 1. You probably always understood this, even as a little baby.
↓
N := {1, 2, 3, ...}, the natural numbers. N comes equipped with two natural operations, + and ×.
↓
Z := {..., −2, −1, 0, 1, 2, ...}, the integers. We form these by using geometric intuition, thinking of N as sitting on a line. Z also comes with + and ×. Addition on Z has particularly good properties, e.g. additive inverses exist.
↓
Q := {a/b | a, b ∈ Z, b ≠ 0}, the rational numbers. We form these by taking Z and formally dividing through by non-zero integers. We can again use geometric insight to picture Q as points on a line. The rational numbers also come equipped with + and ×. This time, multiplication has particularly good properties, e.g. non-zero elements have multiplicative inverses.

We could continue by going on to form R, the real numbers, and then C, the complex numbers. This process is of course more complicated and steps into the realm of mathematical analysis.

Notice that at each stage the operations of + and × gain additional properties. These ideas are very simple, but also profound. We spend years understanding how + and × behave in Q. For example, a + b = b + a for all a, b ∈ Q, or a × (b + c) = a × b + a × c for all a, b, c ∈ Q. The central idea behind abstract algebra is to define a larger class of objects (sets with extra structure), of which Z and Q are definitive members:

(Z, +) ⟶ Groups
(Z, +, ×) ⟶ Rings
(Q, +, ×) ⟶ Fields

In linear algebra the analogous idea is

(Rⁿ, +, scalar multiplication) ⟶ Vector spaces over R.

The amazing thing is that these vague ideas mean something very precise and have far, far more depth than one could ever imagine.

1.2 Sets and Functions

A set is any collection of objects. For example six dogs, all the protons on Earth, every thought you've ever had, N, Z, Q, R, C. Observe that Z and Q are sets with extra structure coming from + and ×. In this whole course, all we will study are sets with some carefully chosen extra structure.

Basic Logic and Set Notation

Writing mathematics is fundamentally no different than writing English. It is a language which has certain rules which must be followed to accurately express what we mean. Because mathematical arguments can be highly intricate, it is necessary to use simplifying notation for frequently occurring concepts. I will try to keep these to a minimum, but it is crucial we all understand the following:

• If P and Q are two statements, then P ⇒ Q means that if P is true then Q is true. For example: x odd ⇒ x ≠ 2. We say that P implies Q.

• If P ⇒ Q and Q ⇒ P then we write P ⟺ Q, which should be read as "P is true if and only if Q is true".

• The symbol ∀ should be read as "for all".

• The symbol ∃ should be read as "there exists". The symbol ∃! should be read as "there exists unique".

Let S and T be two sets.

• If s is an object contained in S then we say that s is an element, or a member, of S. In mathematical notation we write this as s ∈ S. For example 5 ∈ Z. Conversely, s ∉ S means that s is not contained in S. For example 1/2 ∉ Z.

• If S has finitely many elements then we say it is a finite set.
We denote its cardinality (or size) by |S|.

• The standard way of writing down a set S is using curly bracket notation:

S = { notation for elements in S | properties which specify being in S }.

The vertical bar should be read as "such that". For example, if S is the set of all even integers then S = {x ∈ Z | 2 divides x}. We can also use the curly bracket notation for finite sets without using the | symbol. For example, the set S which contains only 1, 2 and 3 can be written as S = {1, 2, 3}.

• If every object in S is also an object in T, then we say that S is contained in T. In mathematical notation we write this as S ⊂ T. Note that S ⊂ T and T ⊂ S ⇒ S = T. If S is not contained in T we write S ⊄ T.

• If S ⊂ T then T \ S := {x ∈ T | x ∉ S}. T \ S is called the complement of S in T.

• The set of objects contained in both S and T is called the intersection of S and T. In mathematical notation we denote this by S ∩ T.

• The collection of all objects which are in either S or T is called the union of S and T. In mathematical notation we denote this by S ∪ T.

• S × T = {(a, b) | a ∈ S, b ∈ T}. We call this new set the (cartesian) product of S and T. We may naturally extend this concept to finite collections of sets.

• The set which contains no objects is called the empty set. We denote the empty set by ∅. We say that S and T are disjoint if S ∩ T = ∅. The union of two disjoint sets is often written as S ⊔ T.

Definition. A map (or function) f from S to T is a rule which assigns to each element of S a unique element of T. We express this information using the following notation:

f : S → T, x ↦ f(x).

Here are some examples of maps of sets:

1. S = T = N, f : N → N, a ↦ a².

2. S = Z × Z, T = Z, f : Z × Z → Z, (a, b) ↦ a + b.

This very simple looking abstract concept hides enormous depth. To illustrate this, observe that calculus is just the study of certain classes of functions (continuous, differentiable or integrable) from R to R.

Definition. Let S and T be two sets, and f : S → T be a map.

1. We say that S is the domain of f and T is the codomain of f.

2. We say that f is the identity map if S = T and f(x) = x, ∀x ∈ S. In this case we write f = Id_S.

3. f is injective if f(x) = f(y) ⇒ x = y, ∀x, y ∈ S.

4. f is surjective if given y ∈ T, there exists x ∈ S such that f(x) = y.

5. If f is both injective and surjective we say it is bijective. Intuitively this means f gives a perfect matching of elements in S and T.

Observe that if R, S and T are sets and g : R → S and f : S → T are maps, then we may compose them to give a new function f ◦ g : R → T. Note that this is only possible if the domain of f is naturally contained in the codomain of g.

Important Exercise. Let S and T be two sets. Let f be a map from S to T. Show that f is a bijection if and only if there exists a map g from T to S such that f ◦ g = Id_T and g ◦ f = Id_S.

1.3 Equivalence Relations

Within a set it is sometimes natural to talk about different elements being related in some way. For example, in Z we could say that x, y ∈ Z are related if x − y is divisible by 2. Said another way, x and y are related if they are both odd or both even. This idea can be formalized as something called an equivalence relation.

Definition. An equivalence relation on a set S is a subset U ⊂ S × S satisfying:

1. (x, y) ∈ U ⟺ (y, x) ∈ U. (This is called the symmetric property.)

2. ∀x ∈ S, (x, x) ∈ U. (This is called the reflexive property.)

3. Given x, y, z ∈ S, (x, y) ∈ U and (y, z) ∈ U ⇒ (x, z) ∈ U. (This is called the transitive property.)
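As a throwaway illustration (not from the notes), the three axioms can be checked mechanically for the parity example above, on a finite window of Z:

```python
# x ~ y iff x - y is divisible by 2, tested on a finite window of Z
S = range(-4, 5)
related = lambda x, y: (x - y) % 2 == 0

assert all(related(x, x) for x in S)                                  # reflexive
assert all(related(y, x) for x in S for y in S if related(x, y))      # symmetric
assert all(related(x, z) for x in S for y in S for z in S
           if related(x, y) and related(y, z))                        # transitive

# the equivalence classes: the evens and the odds
classes = {frozenset(y for y in S if related(x, y)) for x in S}
print(len(classes))   # 2
```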
If U ⊂ S × S is an equivalence relation then we say that x, y ∈ S are equivalent if and only if (x, y) ∈ U. In more convenient notation, we write x ∼ y to mean that x and y are equivalent.

Definition. Let ∼ be an equivalence relation on the set S. Let x ∈ S. The equivalence class containing x is the subset [x] := {y ∈ S | y ∼ x} ⊂ S.

Remarks. 1. Notice that the reflexive property implies that x ∈ [x]. Hence equivalence classes are non-empty and their union is S.

2. The symmetric and transitive properties imply that y ∈ [x] if and only if [y] = [x]. Hence two equivalence classes are equal or disjoint. It should also be noted that we can represent a given equivalence class using any of its members, using the [x] notation.

Definition. Let S be a set. Let {Xᵢ} be a collection of subsets. We say that {Xᵢ} forms a partition of S if each Xᵢ is non-empty, they are pairwise disjoint, and their union is S.

We've seen that the equivalence classes of an equivalence relation naturally form a partition of the set. Actually there is a converse: any partition of a set naturally gives rise to an equivalence relation whose equivalence classes are the members of the partition. The conclusion of all this is that an equivalence relation on a set is the same as a partition. In the example given above, the equivalence classes are the odd integers and the even integers.

Equivalence relations and equivalence classes are incredibly important. They will be the foundation of many concepts throughout the course. Take time to really internalize these ideas.

2 The Structure of + and × on Z

2.1 Basic Observations

We may naturally express + and × in the following set-theoretic way:

+ : Z × Z → Z, (a, b) ↦ a + b;
× : Z × Z → Z, (a, b) ↦ a × b.

Here are 4 elementary properties that + satisfies:

• (Associativity) a + (b + c) = (a + b) + c ∀a, b, c ∈ Z.
• (Existence of additive identity) a + 0 = 0 + a = a ∀a ∈ Z.
• (Existence of additive inverses) a + (−a) = (−a) + a = 0 ∀a ∈ Z.
• (Commutativity) a + b = b + a ∀a, b ∈ Z.

Here are 3 elementary properties that × satisfies:

• (Associativity) a × (b × c) = (a × b) × c ∀a, b, c ∈ Z.
• (Existence of multiplicative identity) a × 1 = 1 × a = a ∀a ∈ Z.
• (Commutativity) a × b = b × a ∀a, b ∈ Z.

The operations of + and × interact by the following law:

• (Distributivity) a × (b + c) = (a × b) + (a × c) ∀a, b, c ∈ Z.

From now on we'll simplify the notation for multiplication to a × b = ab.

Remarks. 1. Each of these properties is totally obvious but will form the foundations of future definitions: groups and rings.

2. All of the above hold for + and × on Q. In this case there is an extra property: non-zero elements have multiplicative inverses. Given a ∈ Q \ {0}, ∃b ∈ Q such that ab = ba = 1. This extra property will motivate the definition of a field.

3. The significance of the associativity laws is that summing and multiplying a finite collection of integers makes sense, i.e. is independent of how we do it.

It is an important property of Z (and Q) that the product of two non-zero elements is again non-zero. More precisely: a, b ∈ Z such that ab = 0 ⇒ either a = 0 or b = 0. Later this property will mean that Z is something called an integral domain. This has the following useful consequence:

Cancellation Law. For a, b, c ∈ Z, ca = cb and c ≠ 0 ⇒ a = b.

This is proven using the distributive law together with the fact that Z is an integral domain. I leave it as an exercise to the reader.

2.2 Factorization and the Fundamental Theorem of Arithmetic

Definition. Let a, b ∈ Z. Then a divides b ⟺ ∃c ∈ Z such that b = ca.
We denote this by a|b, and say that a is a divisor (or factor) of b. Observe that 0 is divisible by every integer. The only integers which divide 1 are 1 and −1. Any way of expressing an integer as the product of a finite collection of integers is called a factorization.

Definition. A prime number p is an integer greater than 1 whose only positive divisors are p and 1. A positive integer which is not prime is called composite.

Remark. Z is generated by 1 under addition. By this I mean that every integer can be attained by successively adding 1 (or −1) to itself. Under multiplication the situation is much more complicated. There is clearly no single generator of Z under multiplication in the above sense.

Definition. Let a, b ∈ Z. The highest common factor of a and b, denoted HCF(a, b), is the largest positive integer which is a common factor of a and b. Two non-zero integers a, b ∈ Z are said to be coprime if HCF(a, b) = 1.

Here are some important elementary properties of divisibility, dating back to Euclid (c. 300 BC), which I'll state without proof. We'll actually prove them later in far more generality.

Remainder Theorem. Given a, b ∈ Z, if b > 0 then ∃! q, r ∈ Z such that a = bq + r with 0 ≤ r < b.

Theorem. Given a, b ∈ Z, ∃u, v ∈ Z such that au + bv = HCF(a, b). In particular, a and b are coprime if and only if there exist u, v ∈ Z such that au + bv = 1.

Euclid's Lemma. Let p be a prime number and a, b ∈ Z. Then p|ab ⇒ p|a or p|b.

The Fundamental Theorem of Arithmetic. Every positive integer a greater than 1 can be written as a product of primes: a = p₁p₂...pᵣ. Such a factorization is unique up to ordering.

Proof. If there is a positive integer not expressible as a product of primes, let c ∈ N be the least such element. The integer c is not 1 or a prime, hence c = c₁c₂ where c₁, c₂ ∈ N, c₁ < c and c₂ < c. By our choice of c we know that both c₁ and c₂ are products of primes. Hence c must be expressible as a product of primes. This is a contradiction. Hence all positive integers can be written as a product of primes.

We must prove the uniqueness (up to ordering) of any such decomposition. Let a = p₁p₂...pᵣ = q₁q₂...qₛ be two factorizations of a into products of primes. Then p₁|q₁q₂...qₛ. By Euclid's Lemma we know that p₁|qᵢ for some i. After renumbering we may assume i = 1. However q₁ is a prime, so p₁ = q₁. Applying the cancellation law we obtain p₂...pᵣ = q₂...qₛ. Assume that r < s. We can continue this process until we have 1 = q_{r+1}...qₛ. This is a contradiction, as 1 is not divisible by any prime. Hence r = s and, after renumbering, pᵢ = qᵢ ∀i.

Using this we can prove the following beautiful fact:

Theorem. There are infinitely many distinct prime numbers.

Proof. Suppose that there are finitely many distinct primes p₁, p₂, ..., pᵣ. Consider c = p₁p₂...pᵣ + 1. Clearly c > 1. By the Fundamental Theorem of Arithmetic, c is divisible by at least one prime, say p₁. Then c = p₁d for some d ∈ Z. Hence we have p₁(d − p₂...pᵣ) = c − p₁p₂...pᵣ = 1. This is a contradiction, as no prime divides 1. Hence there are infinitely many distinct primes.

The Fundamental Theorem of Arithmetic also tells us that every positive element a ∈ Q can be written uniquely (up to reordering) in the form

a = p₁^{α₁} ··· pₙ^{αₙ}, with the pᵢ prime and αᵢ ∈ Z.

The Fundamental Theorem also tells us that two positive integers are coprime if and only if they have no common prime divisor. This immediately shows that every positive element a ∈ Q can be written uniquely in the form a = α/β, with α, β ∈ N coprime.
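Not part of the notes, but SymPy makes both statements concrete: `factorint` returns the prime factorization, and rationals are stored in lowest terms.

```python
import sympy as sp

# the Fundamental Theorem of Arithmetic, computationally:
print(sp.factorint(720))        # {2: 4, 3: 2, 5: 1}, i.e. 720 = 2^4 * 3^2 * 5

# a positive rational reduced to coprime numerator and denominator:
q = sp.Rational(84, 30)         # automatically reduced to 14/5
print(q, sp.gcd(q.p, q.q))      # 14/5 1
```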
We have seen that both Z and Q are examples of sets with two concepts of composition (+ and ×) which satisfy a collection of abstract conditions. We have also seen that the structure of Z together with × is very rich. Can we think of other examples of sets with a concept of + and × which satisfy the same elementary properties?

2.3 Congruences

Fix m ∈ N. By the Remainder Theorem, if a ∈ Z, ∃! q, r ∈ Z such that a = qm + r and 0 ≤ r < m. We call r the remainder of a modulo m. This gives a natural equivalence relation on Z:

a ∼ b ⟺ a and b have the same remainder modulo m ⟺ m|(a − b).

Important Exercise. Check this really is an equivalence relation!

Definition. a, b ∈ Z are congruent modulo m ⟺ m|(a − b). This can also be written: a ≡ b mod m.

Remarks. 1. The equivalence classes of Z under this relation are indexed by the possible remainders modulo m. Hence, there are m distinct equivalence classes, which we call residue classes. We denote the set of all residue classes by Z/mZ.

2. There is a natural surjective map

[ ] : Z → Z/mZ, a ↦ [a].   (1)

Note that this is clearly not injective, as many integers have the same remainder modulo m. Also observe that Z/mZ = {[0], [1], ..., [m − 1]}.

The following result allows us to define + and × on Z/mZ.

Proposition. Let m ∈ N. Then, ∀a, b, a′, b′ ∈ Z: [a] = [a′] and [b] = [b′] ⇒ [a + b] = [a′ + b′] and [ab] = [a′b′].

Proof. This is a very good exercise.

Definition. We define addition and multiplication on Z/mZ by

[a] + [b] = [a + b] ∀a, b ∈ Z;
[a] × [b] = [a × b] ∀a, b ∈ Z.

Remark. Note that there is apparent ambiguity in the definition, because it seems to depend on making a choice of representative of each residue class. The proposition shows us that the resulting residue classes are independent of this choice, hence + and × are well defined on Z/mZ.

Our construction of + and × on Z/mZ is lifted from Z, hence they satisfy the eight elementary properties that + and × satisfied on Z. In particular, [0] ∈ Z/mZ behaves like 0 ∈ Z: [0] + [a] = [a] + [0] = [a], ∀[a] ∈ Z/mZ; and [1] ∈ Z/mZ behaves like 1 ∈ Z: [1] × [a] = [a] × [1] = [a], ∀[a] ∈ Z/mZ. We say that [a] ∈ Z/mZ is non-zero if [a] ≠ [0].

Even though + and × on Z/mZ share the same elementary properties with + and × on Z, they behave quite differently in this case. As an example, notice that

[1] + [1] + ··· + [1] (m times) = [m] = [0].

Hence we can add 1 (in Z/mZ) to itself and eventually get 0 (in Z/mZ). Also observe that if m is composite with m = rs, where r < m and s < m, then [r] and [s] are both non-zero (≠ [0]) in Z/mZ, but [r] × [s] = [rs] = [m] = [0] ∈ Z/mZ. Hence we can have two non-zero elements multiplying together to give zero.

Proposition. For every m ∈ N, a ∈ Z, the congruence ax ≡ 1 mod m has a solution (in Z) iff a and m are coprime.

Proof. This is just a restatement of the fact that a and m are coprime ⟺ ∃u, v ∈ Z such that au + mv = 1.

Observe that the congruence above can be rewritten as [a] × [x] = [1] in Z/mZ. We say that [a] ∈ Z/mZ has a multiplicative inverse if ∃[x] ∈ Z/mZ such that [a] × [x] = [1]. Hence we deduce that the only elements of Z/mZ with a multiplicative inverse are those given by [a], where a is coprime to m.

Recall that × on Q had the extra property that all non-zero elements had multiplicative inverses. When does this happen in Z/mZ? By the above we see that this can happen ⟺ 1, 2, ⋯, m − 1 are all coprime to m, which can only happen if m is prime. We have thus proven the following:

Corollary. All non-zero elements of Z/mZ have a multiplicative inverse ⟺ m is prime.

Later this will be restated as: Z/mZ is a field ⟺ m is prime.
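A quick computational aside (mine, not the notes'): in Python 3.8+, `pow(a, -1, m)` computes the inverse of [a] in Z/mZ when it exists, so the corollary can be watched directly.

```python
from math import gcd

def inverses_mod(m):
    # [a] is invertible in Z/mZ exactly when gcd(a, m) == 1
    return {a: pow(a, -1, m) for a in range(1, m) if gcd(a, m) == 1}

print(inverses_mod(7))    # all of 1..6 are invertible: 7 is prime
print(inverses_mod(12))   # only 1, 5, 7, 11: 12 is composite
```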
These are examples of things called finite fields.

Important Exercise. Show that if m is prime then the product of two non-zero elements of Z/mZ is again non-zero.

Key Observation: There are naturally occurring sets (other than Z and Q) which come equipped with a concept of + and ×, whose most basic properties are the same as those of the usual addition and multiplication on Z or Q. Don't be fooled into thinking all other examples will come from numbers. As we'll see, there are many examples which are much more exotic.

3 Groups

3.1 Basic Definitions

Definition. Let G be a set. A binary operation is a map of sets ∗ : G × G → G. For ease of notation we write ∗(a, b) = a ∗ b, ∀a, b ∈ G.

Any binary operation on G gives a way of combining elements. As we have seen, if G = Z then + and × are natural examples of binary operations. When we are talking about a set G together with a fixed binary operation ∗, we often write (G, ∗).

Fundamental Definition. A group is a set G, together with a binary operation ∗, such that the following hold:

1. (Associativity) (a ∗ b) ∗ c = a ∗ (b ∗ c), ∀a, b, c ∈ G.

2. (Existence of identity) ∃e ∈ G such that a ∗ e = e ∗ a = a, ∀a ∈ G.

3. (Existence of inverses) Given a ∈ G, ∃b ∈ G such that a ∗ b = b ∗ a = e.

Remarks. 1. We have seen five different examples thus far: (Z, +), (Q, +), (Q \ {0}, ×), (Z/mZ, +), and (Z/mZ \ {[0]}, ×) if m is prime. Another example is that of a real vector space under addition. Note that (Z, ×) is not a group. Also note that this gives examples of both finite and infinite groups. The more mathematics you learn, the more you'll see that groups are everywhere.

2. A set with a single element admits one possible binary operation. This makes it a group. We call this the trivial group.

3. A set with a binary operation is called a monoid if only the first two properties hold. From this point of view, a group is a monoid in which every element is invertible. (Z, ×) is a monoid but not a group.

4. Observe that in all of the examples given, the binary operation is commutative, i.e. a ∗ b = b ∗ a, ∀a, b ∈ G. We do not include this in our definition, as that would be too restrictive. For example, the set of invertible n × n matrices with real entries, denoted GLₙ(R), forms a group under matrix multiplication; but matrix multiplication does not commute in general.

Definition. A group (G, ∗) is called Abelian if it also satisfies a ∗ b = b ∗ a, ∀a, b ∈ G. This is also called the commutative property.

The fundamental Abelian group is (Z, +). Notice also that any vector space is an Abelian group under its natural addition.

So a group is a set with extra structure. In set theory we have the natural concept of a map between sets (a function). The following is the analogous concept for groups:

Fundamental Definition. Let (G, ∗) and (H, ◦) be two groups. A homomorphism f from G to H is a map of sets f : G → H such that f(x ∗ y) = f(x) ◦ f(y), ∀x, y ∈ G. If G = H and f = Id_G, we call f the identity homomorphism.

Remarks. 1. Intuitively, one should think about a homomorphism as a map of sets which preserves the underlying group structure. It's the same idea as a linear map between vector spaces.

2. A homomorphism f : G → H which is bijective is called an isomorphism. Two groups are said to be isomorphic if there exists an isomorphism between them. Intuitively, two groups being isomorphic means that they are the "same" group with relabelled elements.

3. A homomorphism from a group to itself (i.e. f : G → G) is called an endomorphism. An endomorphism which is also an isomorphism is called an automorphism.
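Again purely as an illustration (not from the notes), the reduction map from (Z, +) to (Z/6Z, +) is a homomorphism, and this can be brute-force checked on a window of Z:

```python
# f : (Z, +) -> (Z/6Z, +), a |-> [a]; check f(a + b) = f(a) + f(b)
m = 6
f = lambda a: a % m
assert all(f(a + b) == (f(a) + f(b)) % m
           for a in range(-30, 30) for b in range(-30, 30))
```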
Proposition. Let (G, ∗), (H, ◦) and (M, □) be three groups. Let f : G → H and g : H → M be homomorphisms. Then the composition gf : G → M is a homomorphism.

Proof. Let x, y ∈ G. Then gf(x ∗ y) = g(f(x) ◦ f(y)) = gf(x) □ gf(y).

Remark. Composition of homomorphisms gives the collection of endomorphisms of a group the structure of a monoid. The subset of automorphisms has the structure of a group under composition. We denote it by Aut(G). This is analogous to the collection of n × n invertible matrices being a group under matrix multiplication.

Proposition. Let (G, ∗) be a group. The identity element is unique.

Proof. Assume e, e′ ∈ G both behave like the identity. Then e = e ∗ e′ = e′.

Proposition. Let (G, ∗) be a group. For a ∈ G there is only one element which behaves like the inverse of a.

Proof. Assume a ∈ G has two inverses, b, c ∈ G. Then:

a ∗ b = e
c ∗ (a ∗ b) = c ∗ e
(c ∗ a) ∗ b = c   (associativity and identity)
e ∗ b = c
b = c.

The first proposition tells us that we can write e ∈ G for the identity, and it is well defined. Similarly, the second proposition tells us that for a ∈ G we can write a⁻¹ ∈ G for the inverse in a well-defined way. The proof of the second result gives a good example of how we prove results for abstract groups: we can only use the axioms, nothing else.

Given r ∈ Z and a ∈ G, we write aʳ = a ∗ a ∗ ··· ∗ a (r times) if r > 0; aʳ = e if r = 0; and aʳ = a⁻¹ ∗ a⁻¹ ∗ ··· ∗ a⁻¹ (−r times) if r < 0.

Cancellation Law for Groups. Let a, b, c ∈ G, a group. Then a ∗ c = a ∗ b ⇒ c = b, and c ∗ a = b ∗ a ⇒ c = b.

Proof. Compose on the left or right by a⁻¹ ∈ G, then apply the associativity, inverses and identity axioms.

Proposition. Let (G, ∗) and (H, ◦) be two groups and f : G → H a homomorphism. Let e_G ∈ G and e_H ∈ H be the respective identities. Then

• f(e_G) = e_H;
• f(x⁻¹) = (f(x))⁻¹, ∀x ∈ G.

Proof.

• f(e_G) ◦ e_H = f(e_G) = f(e_G ∗ e_G) = f(e_G) ◦ f(e_G). By the cancellation law we deduce that f(e_G) = e_H.

• Let x ∈ G. Then e_H = f(e_G) = f(x ∗ x⁻¹) = f(x) ◦ f(x⁻¹), and e_H = f(e_G) = f(x⁻¹ ∗ x) = f(x⁻¹) ◦ f(x). Hence f(x⁻¹) = (f(x))⁻¹.

3.2 Subgroups, Cosets and Lagrange's Theorem

In linear algebra, we can talk about subspaces of vector spaces. We have an analogous concept in group theory.

Definition. Let (G, ∗) be a group. A subgroup of G is a subset H ⊂ G such that

1. e ∈ H;
2. x, y ∈ H ⇒ x ∗ y ∈ H;
3. x ∈ H ⇒ x⁻¹ ∈ H.

Remarks. 1. A subgroup is naturally a group under the induced binary operation. It clearly has the same identity element.

2. If m ∈ N, then the subset mZ := {ma | a ∈ Z} is a subgroup of (Z, +).

3. If V is a vector space over R then it is naturally an Abelian group under addition. If W is a subspace, then it is also a subgroup under addition.

Proposition. H, K ⊂ G subgroups ⇒ H ∩ K ⊂ G is a subgroup.

Proof.

1. As H, K are subgroups, e ∈ H and e ∈ K ⇒ e ∈ H ∩ K.
2. x, y ∈ H ∩ K ⇒ x ∗ y ∈ H and x ∗ y ∈ K ⇒ x ∗ y ∈ H ∩ K.
3. x ∈ H ∩ K ⇒ x⁻¹ ∈ H and x⁻¹ ∈ K ⇒ x⁻¹ ∈ H ∩ K.

This result clearly extends to any collection of subgroups of G.

Let (G, ∗) be a group and let H ⊂ G be a subgroup. Let us define a relation on G using H as follows: given x, y ∈ G,

x ∼ y ⟺ x⁻¹ ∗ y ∈ H.

Proposition. This gives an equivalence relation on G.

Proof. We need to check the three properties of an equivalence relation:

1. (Reflexive) e ∈ H ⇒ x⁻¹ ∗ x ∈ H ∀x ∈ G ⇒ x ∼ x.
2. (Symmetric) x ∼ y ⇒ x⁻¹ ∗ y ∈ H ⇒ (x⁻¹ ∗ y)⁻¹ ∈ H ⇒ y⁻¹ ∗ x ∈ H ⇒ y ∼ x.
3. (Transitive) x ∼ y, y ∼ z ⇒ x⁻¹ ∗ y, y⁻¹ ∗ z ∈ H ⇒ (x⁻¹ ∗ y) ∗ (y⁻¹ ∗ z) ∈ H ⇒ x⁻¹ ∗ z ∈ H ⇒ x ∼ z.

Definition. We call the equivalence classes of the above equivalence relation left cosets of H in G.
Proposition. For x ∈ G, the equivalence class (or left coset) containing x equals xH := {x ∗ h | h ∈ H} ⊂ G.

Proof. The easiest way to show that two subsets of G are equal is to prove containment in both directions. x ∼ y ⟺ x⁻¹ ∗ y ∈ H ⟺ x⁻¹ ∗ y = h for some h ∈ H ⇒ y = x ∗ h ∈ xH. Therefore {equivalence class containing x} ⊂ xH. Conversely, y ∈ xH ⇒ y = x ∗ h for some h ∈ H ⇒ x⁻¹ ∗ y ∈ H ⇒ y ∼ x. Therefore xH ⊂ {equivalence class containing x}.

This has the following very important consequence:

Corollary. For x, y ∈ G, xH = yH ⟺ x⁻¹ ∗ y ∈ H.

Proof. By the above proposition we know that xH = yH ⟺ x ∼ y ⟺ x⁻¹ ∗ y ∈ H.

It is very important you understand and remember this fact. An immediate consequence is that y ∈ xH ⇒ yH = xH. Hence left cosets can in general be written with different representatives at the front. This is very important. Also observe that the equivalence class containing e ∈ G is just H. Hence the only equivalence class which is a subgroup is H, as no other contains the identity. If H = {e} then the left cosets are singleton sets.

Remark. Let G = R³, thought of as a group under addition, and let H be a two-dimensional subspace. Recall this is a subgroup under addition. Geometrically, H is a plane which contains the origin, and the left cosets of H in R³ are the planes which are parallel to H.

Definition. Let (G, ∗) be a group and H ⊂ G a subgroup. We denote by G/H the set of left cosets of H in G. If the size of this set is finite then we say that H has finite index in G. In this case we write (G : H) = |G/H|, and call it the index of H in G.

For m ∈ N, the subgroup mZ ⊂ Z has index m. Note that Z/mZ is naturally the set of residue classes modulo m previously introduced. The vector space example in the above remark is not of finite index, as there are infinitely many parallel planes in R³.

Proposition. Let x ∈ G. The map (of sets) φ : H → xH, h ↦ x ∗ h, is a bijection.

Proof. We need to check that φ is both injective and surjective. For injectivity, observe that for g, h ∈ H, φ(h) = φ(g) ⇒ x ∗ h = x ∗ g ⇒ h = g. Hence φ is injective. For surjectivity, observe that g ∈ xH ⇒ ∃h ∈ H such that g = x ∗ h ⇒ g = φ(h).

Now let's restrict to the case where G is a finite group.

Proposition. Let (G, ∗) be a finite group and H ⊂ G a subgroup. Then ∀x ∈ G, |xH| = |H|.

Proof. We know that there is a bijection between H and xH. Both must be finite because they are contained in a finite set. A bijection exists between two finite sets if and only if they have the same cardinality.

Lagrange's Theorem. Let (G, ∗) be a finite group and H ⊂ G a subgroup. Then |H| divides |G|.

Proof. We can use H to define the above equivalence relation on G. Because it is an equivalence relation, its equivalence classes cover G and are all disjoint. Recall that this is called a partition of G. We know that each equivalence class is of the form xH for some (clearly non-unique in general) x ∈ G. We know that any left coset of H has size equal to |H|. Hence we have partitioned G into subsets each of size |H|. We conclude that |H| divides |G|.

This is a powerful result. It tightly controls the behavior of subgroups of a finite group. For example:

Corollary. Let p ∈ N be a prime number. Let (G, ∗) be a finite group of order p. Then the only subgroups of G are G and {e}.

Proof. Let H be a subgroup of G. By Lagrange, |H| divides p. But p is prime, so either |H| = 1 or |H| = p. In the first case H = {e}. In the second case H = G.
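Lagrange's theorem is easy to watch in action with SymPy's permutation groups (a sketch of mine; note that SymPy composes permutations left-to-right, so the sets built below are cosets in its convention, but the count is unaffected):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# G is Sym3 realized on {0, 1, 2}; H is generated by one transposition
G = PermutationGroup([Permutation([1, 0, 2]), Permutation([1, 2, 0])])
H = PermutationGroup([Permutation([1, 0, 2])])

# the cosets of H partition G; equal cosets collapse inside the set
cosets = {frozenset(x*h for h in H.generate()) for x in G.generate()}
print(G.order(), H.order(), len(cosets))   # 6 2 3, and (G : H) = |G|/|H| = 3
```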
3.3 Finitely Generated Groups

Definition. Let G be a group and X ⊂ G be a subset. We define the subgroup generated by X to be the intersection of all subgroups of G containing X. We denote it by gp(X) ⊂ G.

Remarks. 1. gp(X) is the minimal subgroup containing X. By minimal we mean that if H ⊂ G is a subgroup such that X ⊂ H, then gp(X) ⊂ H.

2. A more constructive way of defining gp(X) is as the set of all possible finite compositions of elements of X and their inverses. I leave it as an exercise to check that this subset is indeed a subgroup.

3. Let us consider the group (Z, +) and X = {1} ⊂ Z. Then gp(X) = Z. This is the precise sense in which Z is "generated" by 1 under addition.

Definition. We say a group (G, ∗) is finitely generated if ∃X ⊂ G finite such that gp(X) = G.

Remarks. 1. Clearly all finite groups are finitely generated.

2. The fact that there are infinitely many primes implies that (Q \ {0}, ×) is not finitely generated.

Definition. A group (G, ∗) is said to be cyclic if ∃x ∈ G such that gp({x}) = G, i.e. G can be generated by a single element. In concrete terms, this means that G = {xⁿ | n ∈ Z}.

By the above observations, (Z, +) and (Z/mZ, +) are examples.

Proposition. Any group of prime order is cyclic.

Proof. Let G be a group of prime order p. Let x be a non-identity element of G. Then gp({x}) ⊂ G is non-trivial and, by Lagrange's theorem, must have order p. Hence G = gp({x}).

Remark. It is important to understand that not all groups are cyclic. We'll see many examples throughout the course.

Let G be a group (not necessarily cyclic). For r, s ∈ Z and x ∈ G, xʳxˢ = x^{r+s} = x^{s+r} = xˢxʳ. Hence gp({x}) ⊂ G is Abelian. We deduce that all cyclic groups are Abelian.

Theorem. Let G be a cyclic group. Then

1. if G is infinite, G ≅ (Z, +);
2. if |G| = m ∈ N, then G ≅ (Z/mZ, +).

Proof. We have two cases to consider.

1. If G = gp({x}), then G = {..., x⁻², x⁻¹, e, x, x², ...}. Assume all elements in this set are distinct. Then we can define a map of sets φ : G → Z, xⁿ ↦ n. Then, ∀a, b ∈ Z, φ(xᵃ ∗ xᵇ) = φ(x^{a+b}) = a + b = φ(xᵃ) + φ(xᵇ), so φ is a homomorphism, which by assumption is bijective. Thus (G, ∗) is isomorphic to (Z, +).

2. Now assume ∃a, b ∈ Z, b > a, such that xᵃ = xᵇ. Then x^{b−a} = e ⇒ x⁻¹ = x^{b−a−1} ⇒ G = {e, ..., x^{b−a−1}}. In particular, G is finite. Choose minimal m ∈ N such that xᵐ = e. Then G = {e, x, ..., x^{m−1}}, and all its elements are distinct by minimality of m. Hence |G| = m. Define the map φ : G → Z/mZ, xⁿ ↦ [n], for n ∈ {0, ..., m − 1}. This is clearly a surjection, hence a bijection, because |G| = |Z/mZ| = m. Again, ∀a, b ∈ {0, ..., m − 1} we know φ(xᵃ ∗ xᵇ) = φ(x^{a+b}) = [a + b] = [a] + [b] = φ(xᵃ) + φ(xᵇ), so φ is a homomorphism. Hence (G, ∗) is isomorphic to (Z/mZ, +).

Hence two finite cyclic groups of the same size are isomorphic. What are the possible subgroups of a cyclic group?

Proposition. A subgroup of a cyclic group is cyclic.

Proof. If H is trivial we are done. Hence assume that H is non-trivial. By the above, we need to check two cases.

1. (G, ∗) ≅ (Z, +). Let H ⊂ Z be a non-trivial subgroup. Choose m ∈ N minimal such that m ∈ H (m ≠ 0). Hence mZ = {ma | a ∈ Z} ⊆ H. Assume ∃n ∈ H such that n ∉ mZ. By the Remainder Theorem, n = qm + r with q, r ∈ Z and 0 < r < m ⇒ r ∈ H. This is a contradiction, by the minimality of m. Therefore mZ = H. Observe that gp({m}) = mZ ⊂ Z. Hence H is cyclic.

2. (G, ∗) ≅ (Z/mZ, +). Let H ⊂ Z/mZ be a non-trivial subgroup. Again, choose n ∈ N minimal and positive such that [n] ∈ H. The same argument as above shows that the containment gp({[n]}) ⊆ H is actually equality. Hence H is cyclic.
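Before the next proposition, a small numerical illustration (mine, not the notes'): in a cyclic group of order 12, the orders of elements are exactly the divisors of 12, matching the classification just proved.

```python
from sympy.combinatorics.named_groups import CyclicGroup

G = CyclicGroup(12)   # generated by a single 12-cycle
print(sorted({g.order() for g in G.generate()}))   # [1, 2, 3, 4, 6, 12]
```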
Proposition. Let (G, ∗) be a finite cyclic group of order d. Let m ∈ N be such that m divides |G|. Then there is a unique cyclic subgroup of order m.

Proof. Because |G| = d we know that G ≅ (Z/dZ, +). Hence we need only answer the question for this latter group. Let m be a divisor of d. Then if n = d/m, gp({[n]}) ⊂ Z/dZ is cyclic of order m by construction. If H ⊂ Z/dZ is a second subgroup of order m, then by the above proof we know that the minimal n ∈ N such that [n] ∈ H must be n = d/m. Hence H = gp({[n]}).

Let (G, ∗) be a group (not necessarily cyclic) and x ∈ G. We call gp({x}) ⊂ G the subgroup generated by x. By definition it is cyclic.

Definition. If |gp({x})| < ∞ we say that x is of finite order, and its order, written ord(x), equals |gp({x})|. If not, we say that x is of infinite order.

Remarks. 1. Observe that by the above we know that if x ∈ G is of finite order, then ord(x) is the minimal m ∈ N such that xᵐ = e.

2. e ∈ G is the only element of G of order 1.

3. The only element of finite order in Z is 0.

Proposition. Let (G, ∗) be a finite group and x ∈ G. Then ord(x) divides |G| and x^{|G|} = e.

Proof. By definition, ord(x) = |gp({x})|. Therefore, by Lagrange's theorem, ord(x) must divide |G|. Also note that by definition x^{ord(x)} = e. Hence x^{|G|} = x^{ord(x) · (|G|/ord(x))} = e^{|G|/ord(x)} = e.

3.4 Permutation Groups and Group Actions

Definition. Let S be a set. We define the group of permutations of S to be the set of bijections from S to itself, denoted Σ(S), where the group binary operation is composition of functions.

Remarks. 1. By composition of functions we always mean on the left, i.e. ∀f, g ∈ Σ(S) and s ∈ S, (f ∗ g)(s) = f(g(s)).

2. Associativity clearly has to hold. The identity element e of this group is the identity function on S, i.e. e(s) = s, ∀s ∈ S. Inverses exist because any bijective map from a set to itself has an inverse map.

3. Let n ∈ N. We write Symₙ := Σ({1, 2, ..., n}). If S is any set of cardinality n, then Σ(S) is isomorphic to Symₙ, the isomorphism being induced by writing down a bijection from S to {1, 2, ..., n}. We call these groups the finite symmetric groups.

4. Observe that given σ ∈ Σ(S), we can think about σ as "moving" S around. In this sense the group Σ(S) naturally "acts" on S. Let's make this precise.

Definition. Let (G, ∗) be a group and S a set. By a group action of (G, ∗) on S we mean a map µ : G × S → S such that

1. ∀x, y ∈ G, s ∈ S: µ(x ∗ y, s) = µ(x, µ(y, s));

2. µ(e, s) = s.
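A quick sanity check of the two axioms for the natural action of a symmetric group (my own sketch; note that SymPy multiplies permutations left-to-right, so the notes' product x ∗ y is written y*x in SymPy):

```python
from sympy.combinatorics import Permutation

g = Permutation([2, 0, 1, 3])        # g: 0 -> 2, 1 -> 0, 2 -> 1, 3 fixed
h = Permutation([1, 0, 3, 2])        # h swaps 0,1 and 2,3
e = Permutation([0, 1, 2, 3])        # the identity
img = lambda p, s: p.array_form[s]   # the natural action: s -> p(s)

for s in range(4):
    assert img(h*g, s) == img(g, img(h, s))   # axiom (1): (g*h)(s) = g(h(s))
    assert img(e, s) == s                     # axiom (2): e(s) = s
```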
ϕg is a bijection. 21 Proof. Given ϕg, if we can find an inverse function, then we will have shown bijectivity. By the above two observations it is clear that ϕg−1 is inverse to ϕg. Hence µ gives rise to a map of sets: ϕ : G → Σ(S) g → ϕg Proposition. ϕ is a homomorphism. Proof. As we have just seen, property (1) of µ being an action ⇒ϕh ◦ϕg = ϕh∗g∀h, g ∈G. This is precisely the statement that ϕ is a homomorphism. So an action of a group G on a set S gives a homorphism ϕ : G − →Σ(S). It is in fact true that any such homorphism comes from a unique group action. Hence an action of G on S is the same thing as homomorphism from G to the permutation group of S. Both concepts are interchangeable. Definition. An action of G on S is called faithful if ϕ : G →Σ(S) g '→ϕg is injective. Notice that if G and H are two groups and f : G →H is an injective homomorphism then we may view G as a subgroup of H by identifying it with its image in H under f. Hence if G acts faithfully on S then G is isomorphic to a subgroup of Σ(S). Cayley’s Theorem. Let G be a group. Then G is isomorphic to a subgroup of Σ(G). In particular if |G| = n ∈N, then G is isomorphic to a subgroup of Symn. Proof. The result will follow if we can show that the left regular representation is faithful. Let ϕ : G →Σ(G) be the homomorphism given by the left regular representation. Hence for g, s ∈G, ϕg(s) = g ∗s. Forh, g ∈G, suppose ϕh = ϕg. Then h ∗s = g ∗s ∀s ∈G ⇒h = g. Hence ϕ is injective. 3.5 The Orbit-Stabiliser Theorem and Sylow’s Theorem Definition. Let (G, ∗) be a group, together with an action ϕ on a set S. We can define an equivalence relation on S by s ∼t ⇐ ⇒∃g ∈G such that g(s) = t Remarks. This is an equivalence relation as a consequence of the group axioms, together with the definition of an action. I leave it as an exercise to check this. 22 Definition. Let (G, ∗) be a group, together with an action ϕ on a set S. Under the above equivalence relation we call the equivalence classes orbits, and we write Orb(s) := {t ∈S|∃g ∈G such that g(s) = t} ⊂S for the equivalence class containing s ∈S. We call it the orbit of s. It is important to observe that Orb(s) is a subset of S and hence is merely a set with no extra structure. Definition. Let (G, ∗) be a group, together with an action ϕ on a set S. We say that G acts transitively on S is there is only one orbit. Equivalently, ϕ is transitive if given s, t ∈S, ∃g ∈G such that g(s) = t. An example of a transitive action is the natural action of Σ(S) on S. This is clear because given any two points in a set S there is always a bijection which maps one to the other. If G is not the trivial group (the group with one element) then conjugation is never transitive. To see this observe that under this action Orb(e) = {e}. Definition. Let (G, ∗) be a group, together with an action ϕ on a set S. Let s ∈S. We define the stabiliser subgroup of s to be all elements of G which fix s under the action. More precisely Stab(s) = {g ∈G|g(s) = s} ⊂G For this definition to make sense we must prove that Stab(s) is genuinely a subgroup. Proposition. Stab(s) is a subgroup of G. Proof. 1. e(s) = s ⇒e ∈Stab(s) 2. x, y ∈Stab(s) ⇒(x ∗y)(s) = x(y(s)) = x(s) = s ⇒x ∗y ∈Stab(s). 3. x ∈Stab(s) ⇒x−1(s) = x−1(x(s)) = (x−1 ∗x)(s) = e(s) = s ⇒x−1 ∈Stab(s) Thus we may form the left cosets of Stab(s) in G: G/Stab(s) := {xStab(s)|x ∈G}. Recall that these subsets of G are the equivalence classes for the equivalence relation: Given x, y ∈G, x ∼y ⇐ ⇒x−1 ∗y ∈Stab(s), hence they partition G into disjoint subsets. 
Proposition. Let x, y ∈G then xStab(s)=yStab(s) ⇐ ⇒x(s) = y(s). 23 Proof. Recall that x and y are in the same left coset ⇐ ⇒x−1y ∈Stab(s). Hence x−1y(s) = s. Composing both sides with x and simplifying by the axioms for a group action implies that x(s) = y(s). We deduce that there is a well defined map (of sets): φ : G/Stab(s) − → Orb(s) xStab(s) − → x(s) Proposition. φ is a bijection. Proof. By definition, Orb(s) := {x(s) ∈S|x ∈G}. Hence φ is trivially surjective. Assume φ(xStab(s)) = φ(yStab(s)) for some x, y ∈G. This implies the following: x(s) = y(s) ⇒ x−1(y(s)) = s ⇒ (x−1 ∗y)(s) = s ⇒ x−1 ∗y ∈Stab(s) ⇒ xStab(s) = yStab(s) Therefore φ is injective. This immediately gives the following key result: Orbit-Stabiliser Theorem. Let (G, ∗) be a group together with an action, ϕ, on a set S. Let s ∈S such that the orbit of s is finite (|Orb(s)| < ∞). Then stab(s) ⊂G is of finite index and (G : Stab(s)) = |Orb(s)| Proof. Immediate from previous proposition. We have the following corollary: Corollary. If (G, ∗) is a finite group acting on a set S and s ∈S then |G| = |Stab(s)| · |Orb(s)|. Proof. In this case (G : Stab(s)) = |G|/|Stab(s)|. Applying the orbit-stabiliser theorem yields the result. 24 The orbit-stabiliser theorem allows us to prove non-trivial results about the structure of finite groups. As an example let us consider the action of G (a finite group) on itself by conjugation. The orbits under this action are called conjugacy classes. Concretely, for h ∈G, Conj(h) := Orb(h) = {g−1∗h∗g|g ∈G}. If C1, · · · , Cr ⊂G are the distinct conjugacy classes then we deduce that |G| = !r i=1 |Ci| and |Ci|||G| ∀i ∈{1, · · · , r}. If G = GLn(R), the group of invertible n × n matrices with real entries (under matrix multiplication), then two matrices are in the same conjugacy class if and only if they are similar. Definition. Let (G, ∗) be a group. The center of G is the subset Z(G) := {h ∈G|g ∗h = h ∗g, ∀g ∈G}. We leave it as an exercise to check that the center is a subgroup. Theorem. Let G be a finite group of order pn, for p a prime number and n ∈N. Then the center is non-tivial. Proof. Let G act on itself by conjugation. We know Z(G) is a subgroup. Observe that h ∈Z(G) ⇐ ⇒Conj(h) = {h}. Recall that Conj(e) = {e}, hence |Conj(e)| = 1. Assume that Z(G) = {e}. Hence if h ∕= e then |Conj(h)| > 1. By the orbit-stabiliser theorem we know that |Conj(h)| must divide pn. Hence p divides |Conj(h)|. Because the conjugacy classes form a partition of G we deduce that ∃m ∈N such that pn = 1 + pm. This is not possible, hence Z(G) cannot be trivial. Recall that Lagrange’s theorem says that if G is a finite group and H is a subgroup then |H| divides |G|. It is not true, in general, that given any divisor of |G| there is a subgroup of that order. We shall see an example of such a group later. There are, however, partial converses to Lagrange’s theorem. Sylow’s Theorem. Let (G, ∗) be a finite group such that pn divides |G|, where p is prime. Then there exists a subgroup of order pn. Proof. Assume that |G| = pnm, where m = pru with HCF(p, u) = 1. Our central strategy is to consider a cleverly chosen group action of G and prove one of the stabilizer subgroups has size pn. We’ll need to heavily exploit the orbit-stabilizer theorem. Let S be the set of all subsets of G of size pn. An element of S is an unordered n-tuple of distinct elements in G. There is a natural action of G on S by term-by-term composition on the left. Let ω ∈S. If we fix an ordering ω = {ω1, · · · , ωpn} ∈S, then g(ω) := {g∗ω1, · · · , g∗ωpn}. 
• We first claim that |Stab(ω)| ≤pn. To see this define the function f : Stab(ω) → ω g → g ∗ω1 By the cancellation property for groups this is an injective map. Hence |Stab(ω)| ≤ |ω| = pn. 25 • Observe that |S| = !pnm pn " = pnm! pn!(pnm −pn)! = pn−1 # j=0 pnm −j pn −j = m pn−1 # j=1 pnm −j pn −j . Observe that if 1 ≤j ≤pn −1 then j is divisible by p at most n−1 times. This means that pnm −j and pn −j have the same number of p factors, namely the number of p factor of j. This means that pn−1 # j=1 pnm −j pn −j has no p factors. Hence |S| = prv, where HCF(p, v) = 1. Now recall that S is the disjoint union of the orbits of our action of G on S. Hence there must be an ω ∈S such that |Orb(ω)| = pst, where s ≤r and HCF(p, t) = 1. By the orbit-stabilizer theorem we know that |Stab(ω)| = pn+r−s u t . Because |Stab(ω)| ∈N and u and t are coprime to p, we deduce that u t ∈N. Hence |Stab(ω)| ≥pn. For this choice of ω ∈S, Stab(ω) is thus a subgroup of size pn. Historically this is a slight extension of what is called Sylow’s First Theorem. There are two more which describe the properties of such subgroups in greater depth. 3.6 Finite Symmetric Groups As we have proven, if (G, ∗) is a finite group of order n. Then G is isomorphic to a subgroup of Symn, the symmetric group on {1, 2...n}. Hence to properly understand finite groups we must understand these finite symmetric groups. Proposition. For n ∈N, |Symn| = n!. Proof. Any permutation σ of {1, 2...n} is totally determined by a choice of σ(1), then σ(2) and so on. At each stage the possibilities drop by one. Hence the number of permutations is n!. We need to think of a way of elegantly representing elements of Symn. For a ∈{1, 2...n} and σ ∈Symn we represent the action of σ on a by a cycle: (abc...f) where b = σ(a), c = σ(b)...σ(f) = a. We know that eventually we get back to a because σ has finite order. In this way every σ ∈Symn can be written as a product of disjoint cycles: σ = (a1...ar)(ar+1...as)...(at+1...an). 26 This representation is unique up to internal shifts and reordering the cycles. E.g. Let n = 5 then σ = (123)(45) corresponds to 1 − → 2 2 − → 3 σ : 3 − → 1 4 − → 5 5 − → 4 If an element is fixed by σ we omit it from the notation. E.g. Let n = 5 then σ = (523) corresponds to 1 − → 1 2 − → 3 σ : 3 − → 5 4 − → 4 5 − → 2 This notation makes it clear how to compose two permutations. For example, let n = 5 and σ = (23), τ = (241), then τσ = (241)(23) = (1234) and στ = (23)(241) = (1324). Observe that composition is on the left when composing permutations. This example also shows that in general Symn is not Abelian. Hence, given σ ∈Symn, we naturally get a well-defined partition of n, taking the lengths of the disjoint cycles appearing in σ. This is call the cycle structure of σ. Proposition. Let σ ∈Symn decompose as the disjoint product of cycles of length n1, ..nm (so ! ni = n). Then ord(σ) = LCM(n1, ...nm), where LCM denotes the lowest common multiple. Proof. Let σ = (a1, · · · , ar)(ar+1, · · · , as) · · · (at+1, · · · , an), be a representation of σ as the disjoint product of cycles. We may assume that r = n1, etc, without any loss of generality. Observe that a cycle of length d ∈N must have order d in Symn. Also recall that if G is a finite group then for any d ∈N, x ∈G, xd = e ⇐ ⇒ ord(x)|d. Also observe that for all d ∈N, σd = (a1, · · · , ar)d(ar+1, · · · , as)d · · · (at+1, · · · , an)d. Thus we know that σd = e ⇐ ⇒ni|d ∀i. The smallest value d can take with this property is LCM(n1, ...nm). Theorem. 
Theorem. Two permutations are conjugate in Symn if and only if they have the same cycle structure.

Proof. Let σ, τ ∈ Symn have the same cycle structure. Hence we may represent both in the form:

σ = (a1, · · · , ar)(a_{r+1}, · · · , as) · · · (a_{t+1}, · · · , an),
τ = (b1, · · · , br)(b_{r+1}, · · · , bs) · · · (b_{t+1}, · · · , bn).

Define α ∈ Symn such that α(ai) = bi ∀i. By construction α⁻¹τα = σ. Going through the above process in reverse, the converse is clear.

Corollary. Conjugacy classes in Symn are indexed by cycle structures (i.e. partitions of n).

Proof. Immediate from the above.

Definition. A transposition is a cycle of length 2.

Observe that we can write any cycle as a product of transpositions:

(a1 a2 · · · ak) = (a1 ak)(a1 a_{k−1}) · · · (a1 a2).

Hence any permutation σ ∈ Symn may be written as a (not necessarily disjoint) product of transpositions. This representation is non-unique, as the following shows:

e.g. n = 6, σ = (1 2 3) = (1 3)(1 2) = (4 5)(1 3)(4 5)(1 2).

Notice that both expressions involve an even number of transpositions.

Theorem. Let σ ∈ Symn be expressed as a product of transpositions in two potentially different ways. If the first has r transpositions and the second has s transpositions, then 2|(r − s).

Proof. First notice that, by the above, a cycle of length r can be written as the product of r − 1 transpositions. Let us call σ even if there is an even number of even-length cycles (once σ is expressed as a disjoint product of cycles); let us call σ odd if there is an odd number of even-length cycles. We also define the sign of σ, denoted sgn(σ), to be +1 or −1 depending on whether σ is even or odd.

Consider how the sign changes when we multiply by a transposition (1 i). We have two cases:

1. 1 and i occur in the same cycle in σ. Without loss of generality we consider (1 2 · · · i · · · r) as being in σ. Then

(1 i)(1 2 · · · i · · · r) = (1 2 · · · i − 1)(i i + 1 · · · r).

If r is even then we get either two odd-length cycles or two even-length cycles. If r is odd then exactly one of the cycles on the right has even length. In either case, sgn((1 i)σ) = −sgn(σ).

2. 1 and i occur in distinct cycles. Again, without loss of generality we may assume that (1 · · · i − 1)(i · · · r) occurs in σ. In this case

(1 i)(1 2 · · · i − 1)(i · · · r) = (1 · · · r).

Whether r is even or odd, the number of even-length cycles must drop or go up by one. Hence sgn((1 i)σ) = −sgn(σ), as in case 1.

We deduce that multiplying on the left by a transposition changes the sign of our permutation. The identity must have sign 1, hence by induction the product of an odd number of transpositions has sign −1, and the product of an even number of transpositions has sign 1. Note that if we write any product of transpositions then we can immediately write down an inverse by reversing their order. Let us assume that we can express σ as a product of transpositions in two different ways, one with an odd number and one with an even number. Then we can write σ as the product of an even number of transpositions and σ⁻¹ as a product of an odd number of transpositions. Thus we can write e = σ ∗ σ⁻¹ as a product of an odd number of transpositions. This is a contradiction, as sgn(e) = 1.

We should observe that the above proof shows that ∀σ, τ ∈ Symn, sgn(στ) = sgn(σ)sgn(τ). Because sgn(e) = 1 we deduce that sgn(σ) = sgn(σ⁻¹) for all σ ∈ Symn. In particular this shows that the set of even elements of Symn contains the identity and is closed under composition and taking inverses.
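This parity count is easy to mechanise. A brief, self-contained Python sketch (the function names and the tuple encoding are my own illustrative choices): it computes the sign directly from the cycle structure, exactly as defined above, and spot-checks multiplicativity on all of Sym4.

    from itertools import permutations

    def cycle_lengths(perm):
        """Lengths of the disjoint cycles of i -> perm[i] on {0, ..., n-1}."""
        seen, lengths = set(), []
        for i in range(len(perm)):
            if i not in seen:
                length, j = 0, i
                while j not in seen:
                    seen.add(j)
                    j = perm[j]
                    length += 1
                lengths.append(length)
        return lengths

    def sgn(perm):
        """+1 or -1 according to the parity of the number of even-length cycles."""
        evens = sum(1 for l in cycle_lengths(perm) if l % 2 == 0)
        return -1 if evens % 2 else 1

    def comp(p, q):
        """Left composition: apply q first, then p, as in the notes."""
        return tuple(p[q[i]] for i in range(len(q)))

    # sgn is multiplicative on all of Sym_4
    G = list(permutations(range(4)))
    assert all(sgn(comp(s, t)) == sgn(s) * sgn(t) for s in G for t in G)
    print(sgn((1, 0, 2, 3)))   # a single transposition: -1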
These closure properties justify the following definition:

Definition. The subgroup Altn ⊂ Symn consisting of the even elements is called the Alternating group of rank n.

Observe that Altn contains all 3-cycles (cycles of length 3).

Proposition. Altn is generated by 3-cycles.

Proof. By generated we mean that any element of Altn can be expressed as a product of 3-cycles. As any element of Altn can be written as the product of an even number of transpositions, we only have to express the product of two transpositions as a product of 3-cycles. There are two cases:

1. (i j)(k l) = (k i l)(i j k).
2. (i j)(i k) = (i k j).

Proposition. |Altn| = n!/2.

Proof. Recall that |Symn| = n!, hence we just need to show that (Symn : Altn) = 2. Let σ, τ ∈ Symn. Recall that σAltn = τAltn ⇔ σ⁻¹τ ∈ Altn. But sgn(σ⁻¹τ) = sgn(σ)sgn(τ), hence σAltn = τAltn ⇔ sgn(σ) = sgn(τ). Hence Altn has two left cosets in Symn, one containing the even permutations and one the odd permutations.

Later we shall see that the alternating groups for n ≥ 5 have a very special property.

3.7 Symmetry of Sets with Extra Structure

Let S be a set and Σ(S) its permutation group. The permutation group completely ignores the fact that there may be extra structure on S. As an example, Rn naturally has the structure of a vector space. The permutation group Σ(Rn) does not take this into account. However, within the full permutation group there are linear permutations, namely GLn(R). These are the permutations which preserve the vector space structure.

Symmetry in Euclidean Space

Definition. Given n ∈ N, n-dimensional Euclidean space is the vector space Rn equipped with the standard inner product (the dot product). Concretely, if x = (x1, · · · , xn), y = (y1, · · · , yn) ∈ Rn, then ⟨x, y⟩ := x1y1 + · · · + xnyn.

Definition. The distance between x and y in Rn is d(x, y) := √⟨x − y, x − y⟩.

Definition. An isometry of Rn is a map of sets f : Rn → Rn (not necessarily linear) such that ∀x, y ∈ Rn, d(x, y) = d(f(x), f(y)). The collection of all isometries of Rn is denoted by Isom(Rn).

Remarks.

• The identity function is an isometry, and the composition of any two isometries is an isometry.

• We say an isometry f fixes the origin if f(0) = 0. It is a fact that f fixes the origin if and only if f(x) = Ax for all x ∈ Rn, where A is an orthogonal matrix.

• We say an isometry f is a translation if f : Rn → Rn, x ↦ x + y, for some y ∈ Rn.

• Every isometry of Rn is a composition of an origin-fixing isometry and a translation. As a consequence, all isometries are bijective and their inverses are isometries. This means Isom(Rn) is a subgroup of Σ(Rn).

Let X ⊂ Rn be a subset (not necessarily a subspace).

Definition. We define the symmetry group of X to be the subgroup Sym(X) ⊂ Isom(Rn) with the property that f ∈ Sym(X) if and only if f permutes X.

There is a natural action of Sym(X) on the set X, coming from the fact that there is a natural homomorphism Sym(X) → Σ(X). Sym(X) measures how much symmetry X has: the more symmetric X is, the larger Sym(X).

The Dihedral Group

Let m ∈ N and let X ⊂ R2 be a regular m-gon centered at the origin. We call the symmetry group of X the dihedral group of rank m, and we denote it by Dm. First observe that every element of Dm must fix the center of X (the origin). Thus we may view Dm as a subgroup of the group of 2 × 2 orthogonal matrices; we shall not take this approach here. Also observe that Dm acts faithfully and transitively on the set of vertices of X. Hence Dm can naturally be identified with a subgroup of Symm. Let σ be the rotation by 2π/m clockwise about the origin.
All possible rotational symmetries are generated by σ, namely Rotm = {e, σ, σ^2, · · · , σ^{m−1}} ⊂ Dm. Hence Rotm is cyclic of order m. Given a vertex a, Stab(a) = {e, τ}, where τ is the reflection through the straight line containing a and the origin. By the orbit-stabiliser theorem |Dm| = 2m, hence (Dm : Rotm) = 2. We deduce that Dm = Rotm ⊔ τRotm. The left coset τRotm is precisely the set of reflective symmetries. Hence every element of Dm can be written in the form σ^k (if a rotation) or τσ^k (if a reflection). The group structure is completely determined by the following properties:

• ord(σ) = m
• ord(τ) = 2
• τσ = σ⁻¹τ (consider the action on the vertices)

Observe that the third property implies that Dm is not Abelian for m ≥ 3. Here is a picture for m = 3. [Figure omitted: an equilateral triangle with its rotation σ and a reflection τ.]
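These relations can be checked by brute force once σ and τ are written as permutations of the vertices. A Python sketch (the vertex labelling 0, . . . , m−1 and the closure computation are my own illustrative setup): σ is the rotation i ↦ i + 1 mod m, τ the reflection i ↦ −i mod m; generating the closure recovers |Dm| = 2m and the relation τσ = σ⁻¹τ.

    m = 5  # any m >= 3 works

    # Vertices of the m-gon labelled 0, ..., m-1.
    sigma = tuple((i + 1) % m for i in range(m))   # rotation by 2*pi/m
    tau   = tuple((-i) % m for i in range(m))      # reflection fixing vertex 0

    def comp(p, q):
        """Apply q first, then p (left composition, as in the notes)."""
        return tuple(p[q[i]] for i in range(m))

    # Generate the subgroup of Sym_m generated by sigma and tau by closure.
    group = {sigma, tau}
    while True:
        new = {comp(p, q) for p in group for q in group} - group
        if not new:
            break
        group |= new

    inv_sigma = tuple(sigma.index(i) for i in range(m))
    assert len(group) == 2 * m                          # |D_m| = 2m
    assert comp(tau, sigma) == comp(inv_sigma, tau)     # tau*sigma = sigma^(-1)*tau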
The Cube in R3

Let X ⊂ R3 be a solid cube centered at the origin. Again, elements of Sym(X) must fix the origin, hence, if we wished, we could identify Sym(X) with a subgroup of the group of 3 × 3 orthogonal matrices. Again Sym(X) acts faithfully and transitively on the vertices. If a ∈ X is a vertex, then Stab(a) can naturally be identified with D3 (see the figure below), which has size 6. Hence, by the orbit-stabiliser theorem, |Sym(X)| = 8 · 6 = 48. The same logic applies to Rot□, the group of rotational symmetries, although the stabiliser of a now has size 3. This tells us that |Rot□| = 24. If τ ∈ Sym(X) is the symmetry sending x to −x (this is not a rotation), then again Sym(X) = Rot□ ⊔ τRot□. It can be shown that τσ = στ for all σ ∈ Rot□. Thus it remains to determine the group structure of Rot□. Color the vertices with four colors, making sure that opposite vertices have the same color (see the figure below). Rotational symmetries act on this set of four colors, inducing a homomorphism from Rot□ to Sym4. Given any two colors, it is possible to transpose them (leaving the others fixed) by a rotation. Because Sym4 is generated by transpositions, the induced homomorphism Rot□ → Sym4 must be surjective. However, |Rot□| = 24 = 4! = |Sym4|. Hence it must be an isomorphism. We deduce that Rot□ is isomorphic to Sym4.

[Figure omitted: the cube, showing the D3 stabiliser of a vertex and the four-coloring of the vertices.]

Interesting Question: Let (G, ∗) be an abstract group. When is it true that we can find X ⊂ Rn, for some n ∈ N, such that G ≅ Sym(X)? Less formally, when can an abstract group be realised in geometry?

3.8 Normal Subgroups and Isomorphism Theorems

In linear algebra the predominant objects we study are the maps between vector spaces, not the vector spaces themselves. The structure-preserving maps between vector spaces are more interesting than the spaces themselves. This is a deep observation, and it is true far beyond the confines of linear algebra. Philosophically, it says that an object in isolation is uninteresting; it is how it relates to what is around it that matters. The world of group theory is no different. Here the objects are groups and the maps between them are homomorphisms. Now we will study homomorphisms between abstract groups in more detail.

Let G and H be two groups. We will suppress the ∗ notation, as it will always be obvious where composition is taking place. Let eG and eH be the respective identity elements. Recall that a homomorphism from G to H is a map of sets f : G → H such that ∀x, y ∈ G, f(xy) = f(x)f(y).

Definition. Given a homomorphism of groups f : G → H, we define the kernel of f to be Ker(f) := {x ∈ G | f(x) = eH}, and the image of f to be Im(f) := {y ∈ H | ∃x ∈ G such that f(x) = y}.

Proposition. Given a homomorphism f : G → H, Ker(f) ⊆ G and Im(f) ⊆ H are subgroups.

Proof. First we show this for Ker(f):

1. f(eG) = eH ⇒ eG ∈ Ker(f).
2. Suppose x, y ∈ Ker(f). Then f(xy) = f(x)f(y) = eH ⇒ xy ∈ Ker(f).
3. Given x ∈ Ker(f), f(x⁻¹) = f(x)⁻¹ = eH⁻¹ = eH ⇒ x⁻¹ ∈ Ker(f).

Now we show that Im(f) is a subgroup:

1. f(eG) = eH, so eH ∈ Im(f).
2. f(xy) = f(x)f(y) ∀x, y ∈ G, so Im(f) is closed under composition.
3. Note that f(x)⁻¹ = f(x⁻¹), so y ∈ Im(f) ⇒ y⁻¹ ∈ Im(f).

Proposition. A homomorphism f : G → H is injective if and only if Ker(f) is trivial.

Proof. f injective ⇒ Ker(f) = {eG} trivially. Now assume Ker(f) = {eG}, and suppose x, y ∈ G are such that f(x) = f(y). Then f(x)f(y)⁻¹ = eH ⇒ f(x)f(y⁻¹) = eH ⇒ f(xy⁻¹) = eH ⇒ xy⁻¹ = eG ⇒ x = y. Thus f is injective.

Recall that for m ∈ N the set of left cosets of mZ in Z, denoted Z/mZ, naturally inherited the structure of a group from + on Z. It would be reasonable to expect this to be true in the general case, i.e. that given a group G and a subgroup H, the set G/H naturally inherits the structure of a group from G. To make this a bit more precise, let us think about what "naturally" means. Let xH, yH ∈ G/H be two left cosets. Recall that x and y are not necessarily unique. The only obvious way of combining xH and yH is to form (xy)H. Warning: in general this is not well defined; it will depend on the choice of x and y. Something very special happens in the case G = Z and H = mZ.

Fundamental Definition. We call a subgroup H ⊆ G normal if, for all g ∈ G, gHg⁻¹ = {ghg⁻¹ | h ∈ H} = H. We denote a normal subgroup by H ⊳ G.

Remarks.

1. Observe that this is not saying that given g ∈ G and h ∈ H, then ghg⁻¹ = h. It is merely saying that ghg⁻¹ ∈ H. A normal subgroup is a union of conjugacy classes of G.
2. If G is Abelian, every subgroup is normal, as ghg⁻¹ = h ∀g, h ∈ G.
3. Let G = Sym3 and H = {e, (12)}. Then (13)(12)(13) = (23) ∉ H. Hence H is not normal in Sym3, so in general not all subgroups of a group are normal.

Proposition. Let G and H be two groups and f : G → H a homomorphism. Then Ker(f) ⊂ G is a normal subgroup.

Proof. Let h ∈ Ker(f) and g ∈ G. Then f(ghg⁻¹) = f(g)f(h)f(g⁻¹) = f(g)eHf(g)⁻¹ = eH ⇒ ghg⁻¹ ∈ Ker(f).

In general Im(f) ⊂ H is not normal.

Fundamental Definition. We say a group G is simple if its only normal subgroups are {e} and G.

Cyclic groups of prime order are trivially simple by Lagrange's theorem. It is in fact true that for n ≥ 5, Altn is simple, although proving this would take us too far afield. As we shall see later, simple groups are the core building blocks of group theory. The importance of normal subgroups can be seen in the following:

Proposition. Let H ⊆ G be a normal subgroup. Then the binary operation G/H × G/H → G/H, (xH, yH) ↦ (xy)H, is well defined.

Proof. As usual the problem is that coset representatives are not unique, and thus two different choices of representative could give different answers. Our goal is therefore to show: ∀x1, x2, y1, y2 ∈ G such that x1H = x2H and y1H = y2H, we have (x1y1)H = (x2y2)H. By assumption we know x1⁻¹x2, y1⁻¹y2 ∈ H. Consider u = (x1y1)⁻¹(x2y2) = y1⁻¹x1⁻¹x2y2. Hence uy2⁻¹y1 = y1⁻¹(x1⁻¹x2)y1. Therefore, by the normality of H, uy2⁻¹y1 ∈ H ⇒ u = (uy2⁻¹y1)(y1⁻¹y2) ∈ H ⇒ (x1y1)H = (x2y2)H.

This shows that if H ⊂ G is normal, G/H can be endowed with a natural binary operation.

Proposition. Let G be a group and H ⊂ G a normal subgroup. Then G/H is a group under the above binary operation. We call it the quotient group.

Proof. A simple check of the three axioms of being a group:

1. ∀x, y, z ∈ G, (xy)z = x(yz) ⇒ (xH ∗ yH) ∗ zH = xH ∗ (yH ∗ zH).
2. xH ∗ H = xH = H ∗ xH ⇒ H ∈ G/H is the identity.
3. xH ∗ x⁻¹H = xx⁻¹H = H = x⁻¹xH = x⁻¹H ∗ xH ⇒ inverses exist.
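The warning above is worth seeing computationally. A Python sketch (representing subgroups of Sym3 as sets of permutation tuples; the helper names are mine): for the non-normal subgroup H = {e, (12)} of Sym3, coset multiplication via representatives is ill-defined, while for the normal subgroup Alt3 it is well-defined.

    from itertools import permutations

    def comp(p, q):                     # apply q first, then p
        return tuple(p[q[i]] for i in range(3))

    G = list(permutations(range(3)))    # Sym_3 on {0, 1, 2}

    def left_coset(x, H):
        return frozenset(comp(x, h) for h in H)

    def product_well_defined(H):
        """Does (xH, yH) -> (xy)H depend only on the cosets, not on x and y?"""
        for x in G:
            for y in G:
                for x2 in left_coset(x, H):
                    for y2 in left_coset(y, H):
                        if left_coset(comp(x, y), H) != left_coset(comp(x2, y2), H):
                            return False
        return True

    H = {(0, 1, 2), (1, 0, 2)}                  # {e, (01)}: not normal
    A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}      # Alt_3: normal
    print(product_well_defined(H))    # False
    print(product_well_defined(A3))   # True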
Proposition. The natural map φ : G → G/H, x ↦ xH, is a homomorphism with Ker(φ) = H.

Proof. Observe that ∀x, y ∈ G, φ(xy) = xyH = xH ∗ yH = φ(x)φ(y), so φ is a homomorphism. Recall that the identity element in G/H is the coset H. Hence x ∈ Ker(φ) ⇔ φ(x) = xH = H ⇔ x ∈ H. Hence Ker(φ) = H.

Observe that this shows that any normal subgroup can be realised as the kernel of a group homomorphism.

The First Isomorphism Theorem

Let G and H be groups, with respective identities eG and eH. Let φ : G → H be a homomorphism. Recall that Ker(φ) ⊂ G is a normal subgroup. Hence we may form the quotient group G/Ker(φ). Let x, y ∈ G be in the same left coset of Ker(φ). Recall that

xKer(φ) = yKer(φ) ⇔ x⁻¹y ∈ Ker(φ) ⇔ φ(x⁻¹y) = eH ⇔ φ(x⁻¹)φ(y) = eH ⇔ φ(x)⁻¹φ(y) = eH ⇔ φ(x) = φ(y).

In summary, φ(x) = φ(y) ⇔ xKer(φ) = yKer(φ). Hence φ is constant on each coset of Ker(φ), and we get a map of sets:

ϕ : G/Ker(φ) → Im(φ), xKer(φ) ↦ φ(x).

This is well-defined precisely because of the above observations.

The First Isomorphism Theorem. Let G and H be two groups and let φ : G → H be a homomorphism. Then the induced map

ϕ : G/Ker(φ) → Im(φ), xKer(φ) ↦ φ(x)

is an isomorphism of groups.

Proof. Firstly, ϕ is surjective by the definition of Im(φ). Note that given x, y ∈ G, ϕ(xKer(φ)) = ϕ(yKer(φ)) ⇔ φ(x) = φ(y) ⇔ xKer(φ) = yKer(φ), hence ϕ is injective. It is left for us to show that ϕ is a homomorphism. Given x, y ∈ G,

ϕ(xKer(φ)yKer(φ)) = ϕ(xyKer(φ)) = φ(xy) = φ(x)φ(y) = ϕ(xKer(φ))ϕ(yKer(φ)).

Therefore ϕ : G/Ker(φ) → Im(φ) is a homomorphism, and thus an isomorphism.
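A small computational illustration (Python; the choice of the sign map as φ and the dict-based bookkeeping are mine): sgn : Sym3 → {±1} is a homomorphism with kernel Alt3, and grouping elements by their image exhibits the bijection between cosets of the kernel and points of the image, so |G/Ker(φ)| = |Im(φ)|.

    from itertools import permutations

    def sign(img):
        """Sign of the permutation i -> img[i], computed by counting inversions."""
        inv = sum(1 for i in range(len(img)) for j in range(i + 1, len(img))
                  if img[i] > img[j])
        return -1 if inv % 2 else 1

    G = list(permutations(range(3)))          # Sym_3
    kernel = [g for g in G if sign(g) == 1]   # Ker(sign) = Alt_3

    # Fibres of sign = cosets of the kernel; the induced map on cosets is a bijection
    fibres = {}
    for g in G:
        fibres.setdefault(sign(g), []).append(g)

    image = set(fibres)                         # Im(sign) = {1, -1}
    assert len(G) // len(kernel) == len(image)  # |G/Ker| = |Im|, as the theorem predicts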
The Third Isomorphism Theorem

Let G be a group and N a normal subgroup. The third isomorphism theorem concerns the connection between certain subgroups of G and subgroups of G/N. Let H be a subgroup of G containing N. Observe that N is automatically normal in H, hence we may form the quotient group H/N = {hN | h ∈ H}. Observe that H/N is naturally a subset of G/N.

Lemma. H/N ⊂ G/N is a subgroup.

Proof. We need to check the three properties.

1. Recall that N ∈ G/N is the identity in the quotient group. Observe that N ⊂ H ⇒ N ∈ H/N.
2. Let x, y ∈ H. By definition xy ∈ H. Thus xN ∗ yN = (xy)N ∈ H/N.
3. Let x ∈ H. By definition x⁻¹ ∈ H. Thus (xN)⁻¹ = x⁻¹N ∈ H/N.

Conversely, let M ⊂ G/N be a subgroup. Let HM ⊂ G be the union of the left cosets belonging to M.

Lemma. HM ⊂ G is a subgroup.

Proof. We need to check the three properties.

1. Recall that N ∈ G/N is the identity in the quotient group. Hence N ∈ M ⇒ N ⊂ HM. N is a subgroup, hence eG ∈ N ⇒ eG ∈ HM.
2. Let x, y ∈ HM. This implies that xN, yN ∈ M. M is a subgroup, hence xN ∗ yN = (xy)N ∈ M. This implies that xy ∈ HM.
3. Let x ∈ HM. Hence xN ∈ M. M is a subgroup, hence (xN)⁻¹ = x⁻¹N ∈ M. This implies that x⁻¹ ∈ HM.

Hence we have two maps of sets:

α : {subgroups of G containing N} → {subgroups of G/N}, H ↦ H/N,
β : {subgroups of G/N} → {subgroups of G containing N}, M ↦ HM.

Proposition. These maps of sets are inverse to each other.

Proof. We need to show that the composition in both directions gives the identity function.

1. Let H be a subgroup of G containing N. Then βα(H) = β(H/N) = H. Thus βα is the identity map on {subgroups of G containing N}.
2. Let M be a subgroup of G/N. Then αβ(M) = α(HM) = M. Thus αβ is the identity map on {subgroups of G/N}.

We deduce that both α and β are bijections, and we have the following:

The Third Isomorphism Theorem. Let G be a group and N ⊂ G a normal subgroup. There is a natural bijection between the subgroups of G containing N and the subgroups of G/N.

Proof. Either map α or β exhibits the desired bijection.

3.9 Direct Products and Direct Sums

Definition. Let G and H be two groups, with respective identities eG and eH. We may form the direct product G × H = {(x, g) | x ∈ G, g ∈ H}. Let x, y ∈ G and g, h ∈ H. Observe that there is a natural binary operation on G × H given by (x, g) ∗ (y, h) := (xy, gh).

Lemma. G × H is a group under the natural binary operation.

Proof.

1. Associativity holds for both G and H ⇒ associativity holds for G × H.
2. (eG, eH) is the identity.
3. For g ∈ G and h ∈ H, (g, h)⁻¹ = (g⁻¹, h⁻¹).

There is an obvious generalization of this concept to any finite collection of groups.

Definition. Let G be a group and H, K ⊂ G two subgroups. Let us furthermore assume that:

1. ∀h ∈ H and ∀k ∈ K, hk = kh.
2. Given g ∈ G there exist unique h ∈ H, k ∈ K such that g = hk.

Under these circumstances we say that G is the direct sum of H and K, and we write G = H ⊕ K. Observe that the second property is equivalent to:

2′. H ∩ K = {eG}, and for every g ∈ G there exist h ∈ H, k ∈ K such that g = hk.

For example, (Z/15Z, +) is the direct sum of gp(3) and gp(5) (a computational check of this appears below).

Proposition. If G is the direct sum of the subgroups H, K ⊂ G then G ≅ H × K.

Proof. Define the map φ : H × K → G, (h, k) ↦ hk. Let x, y ∈ H and g, h ∈ K. By property one, φ((x, g)(y, h)) = φ(xy, gh) = xygh = xgyh = φ(x, g)φ(y, h). Hence φ is a homomorphism. Property two ensures that φ is bijective.

The concept of direct sum has a clear generalization to any finite collection of subgroups of G.
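As promised, here is a quick check of the Z/15Z example (a Python sketch; the generators 3 and 5 and all names are illustrative): inside Z/15Z the subgroups gp(3) and gp(5) intersect trivially, and every element splits uniquely as a sum of one element from each, which is exactly the direct-sum condition.

    m = 15
    H = {(3 * k) % m for k in range(m)}   # gp(3) = {0, 3, 6, 9, 12}, order 5
    K = {(5 * k) % m for k in range(m)}   # gp(5) = {0, 5, 10},      order 3

    assert H & K == {0}                   # trivial intersection

    # Every g in Z/15Z is a sum h + k in exactly one way
    for g in range(m):
        decompositions = [(h, k) for h in H for k in K if (h + k) % m == g]
        assert len(decompositions) == 1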
3.10 Finitely Generated Abelian Groups

Let G be an Abelian group. We shall now use additive notation to express composition within G; in particular we will denote the identity by 0 (not to be confused with 0 ∈ Z). We do this because we are very familiar with addition on Z being commutative. Given m ∈ Z and a ∈ G, we write ma = a + a + · · · + a (m times) if m > 0; ma = 0 if m = 0; and ma = (−a) + (−a) + · · · + (−a) (−m times) if m < 0. We have the identities:

1. m(a + b) = ma + mb
2. (m + n)a = ma + na
3. (mn)a = m(na)

∀a, b ∈ G; m, n ∈ Z.

Now assume that G is finitely generated. Hence ∃{a1, · · · , an} ⊂ G such that gp({a1, · · · , an}) = G. In other words, because G is Abelian, every x ∈ G can be written in the form x = λ1a1 + · · · + λnan, λi ∈ Z. In general such an expression is not unique. For example, if G is of order m ∈ N then (m + 1)a = a for all a ∈ G, because ma = 0. A reasonable goal would be to find a generating set such that every expression of the above form is unique (after possibly restricting 0 ≤ λi < ord(ai)) for a given x ∈ G. Such a generating set is called a basis for G. Observe that it is not clear that such a basis even exists at present. If {a1, · · · , an} ⊂ G were a basis then, letting Ai = gp(ai) ⊂ G, we would have the direct sum decomposition G = A1 ⊕ · · · ⊕ An. Conversely, if G can be represented as the direct sum of cyclic subgroups, then choosing a generator for each gives a basis for G.

Definition. Let G be an Abelian group. x ∈ G is torsion if it is of finite order. We denote the set of torsion elements by tG ⊂ G, called the torsion subgroup.

Lemma. tG ⊂ G is a subgroup.

Proof. This critically requires that G be Abelian; it is not true in general.

1. ord(0) = 1 ⇒ 0 ∈ tG.
2. Let g, h ∈ tG ⇒ ∃n, m ∈ N such that ng = mh = 0 ⇒ nm(g + h) = m(ng) + n(mh) = 0 + 0 = 0 ⇒ g + h ∈ tG.
3. ng = 0 ⇒ −(ng) = n(−g) = 0. Hence g ∈ tG ⇒ −g ∈ tG.

Clearly if G is finite then tG = G.

Definition. If tG = G we say that G is a torsion group. If tG = {0} we say that G is torsion free.

Proposition. If G is torsion and finitely generated then G is finite.

Proof. Let {a1, · · · , an} ⊂ G be a generating set. Each element ai is of finite order, hence every element x ∈ G can be written in the form x = λ1a1 + · · · + λnan, with λi ∈ Z and 0 ≤ λi < ord(ai). There are only finitely many such expressions.

Proposition. G/tG is a torsion-free Abelian group.

Proof. Firstly note that tG ⊂ G is normal, as G is Abelian, hence G/tG is naturally an Abelian group. Let x ∈ G and assume that x + tG ∈ G/tG is torsion. Hence ∃n ∈ N such that n(x + tG) = nx + tG = tG. Hence nx ∈ tG, so ∃m ∈ N such that mnx = 0. Hence x ∈ tG ⇒ x + tG = tG.

Definition. A finitely generated Abelian group G is said to be free Abelian if there exists a finite generating set {a1, · · · , an} ⊂ G such that every element of G can be uniquely expressed as λ1a1 + · · · + λnan, where λi ∈ Z. In other words, if we can find a basis for G consisting of non-torsion elements. In this case G = gp(a1) ⊕ · · · ⊕ gp(an) ≅ Z × · · · × Z = Z^n.

Proposition. Let G be a finitely generated free Abelian group. Any two bases must have the same cardinality.

Proof. Let {a1, · · · , an} ⊂ G be a basis. Let 2G := {2x | x ∈ G}; 2G ⊆ G is a subgroup. Observe that 2G = {λ1a1 + · · · + λnan | λi ∈ 2Z}. Hence (G : 2G) = 2^n. But the left-hand side is defined independently of the basis. The result follows.

Definition. Let G be a finitely generated free Abelian group. The rank of G is the size of any basis.

Theorem. A finitely generated Abelian group is free Abelian ⇔ it is torsion free.

Proof. (⇒) is trivial. (⇐) Assume G is torsion-free and let {a1, · · · , an} ⊂ G generate G. We will prove the result by induction on n. Base case: n = 1. G = gp(a) ≅ (Z, +), which is free Abelian, so the result is true for n = 1. If {a1, · · · , an} ⊂ G is a basis we have nothing to prove. So suppose that it is not a basis; then we have a non-trivial relation

λ1a1 + λ2a2 + · · · + λnan = 0.

If ∃d ∈ N, d > 1, such that d|λi for all i, then d((λ1/d)a1 + (λ2/d)a2 + · · · + (λn/d)an) = 0. As G is torsion-free, (λ1/d)a1 + (λ2/d)a2 + · · · + (λn/d)an = 0. We can therefore assume that the λi are collectively coprime. If λ1 = 1, then we can shift terms to get a1 = −(λ2a2 + λ3a3 + · · · + λnan). Therefore G is generated by {a2, · · · , an} ⊂ G, and the result follows by induction. We reduce to this case as follows. Assume |λ1| ≥ |λ2| > 0. By the remainder theorem we may choose α ∈ Z such that |λ1 − αλ2| < |λ2|. Let a2′ = a2 + αa1 and λ1′ = λ1 − αλ2; then λ1′a1 + λ2a2′ + · · · + λnan = 0. Also observe that {a1, a2′, · · · , an} ⊂ G is still a generating set and that {λ1′, λ2, · · · , λn} are still collectively coprime. This process must eventually terminate with one of the coefficients equal to either 1 or −1. In this case we can apply the inductive step as above to conclude that G is free Abelian.

Proposition. Let G be finitely generated and Abelian. Then G/tG is a finitely generated free Abelian group.

Proof. G/tG is torsion free. We must show that G/tG is finitely generated. Let {a1, · · · , an} ⊂ G generate G. Then {a1 + tG, · · · , an + tG} ⊂ G/tG forms a generating set. By the above theorem, G/tG is free Abelian.

Definition. Let G be a finitely generated Abelian group. We define the rank of G to be the rank of G/tG.

Let G be finitely generated and Abelian. Let G/tG be of rank n ∈ N and let f1, · · · , fn be a basis for G/tG. Let φ : G → G/tG be the natural quotient homomorphism. Clearly φ is surjective. Choose {e1, · · · , en} ⊂ G such that φ(ei) = fi ∀i ∈ {1, · · · , n}. None of the fi have finite order ⇒ none of the ei have finite order. Moreover φ(λ1e1 + · · · + λnen) = λ1f1 + · · · + λnfn ∈ G/tG.
Because {f1, · · · , fn} is a free basis for G/tG, we deduce that λ1e1 + · · · + λnen = 0 ⇔ λi = 0 ∀i. Hence F := gp({e1, · · · , en}) ⊆ G is free Abelian with basis {e1, · · · , en}; in particular F is torsion free. Therefore F ∩ tG = {0}. Let g ∈ G. By definition, ∃λ1, · · · , λn ∈ Z such that φ(g) = λ1f1 + · · · + λnfn. Then we have:

φ(g) = λ1f1 + · · · + λnfn ⇒ φ(g) = φ(λ1e1 + · · · + λnen) ⇒ φ(g − (λ1e1 + · · · + λnen)) = 0 ⇒ g − (λ1e1 + · · · + λnen) ∈ ker(φ) = tG ⇒ ∃h ∈ tG s.t. g = (λ1e1 + · · · + λnen) + h.

Hence every x ∈ G may be written in the form x = f + g where f ∈ F and g ∈ tG, and uniquely so, since F ∩ tG = {0}.

Proposition. Every finitely generated Abelian group can be written as a direct sum of a free Abelian group and a finite group.

Proof. By the above we may write G = F ⊕ tG. Define the homomorphism G = F ⊕ tG → tG, f + h ↦ h. This is surjective with kernel F, hence by the first isomorphism theorem tG is isomorphic to G/F. The image of any generating set of G under the quotient homomorphism is a generating set for G/F. Hence tG is finitely generated and torsion, hence finite. F is free Abelian by construction.

Hence we have reduced the study of finitely generated Abelian groups to understanding finite Abelian groups.

3.11 Finite Abelian Groups

Definition. A finite group G (not necessarily Abelian) is a p-group, for p ∈ N a prime, if every element of G has order a power of p. By Sylow's Theorem the order of a finite p-group must be a power of p.

From now on let G be a finite Abelian group. Let p ∈ N be a prime. We define Gp := {g ∈ G | ord(g) is a power of p} ⊂ G.

Theorem. Gp ⊂ G is a subgroup.

Proof.

1. ord(0) = 1 = p^0 ⇒ 0 ∈ Gp.
2. Let g, h ∈ Gp ⇒ ∃r, s ∈ N such that p^r g = p^s h = 0 ⇒ p^{r+s}(g + h) = p^s(p^r g) + p^r(p^s h) = 0 + 0 = 0 ⇒ g + h ∈ Gp.
3. Let g ∈ Gp ⇒ ∃r ∈ N such that p^r g = 0 ⇒ −(p^r g) = p^r(−g) = 0 ⇒ −g ∈ Gp.

This argument critically relies on G being Abelian.

By definition Gp is a p-group. Recall that ∀g ∈ G, ord(g) divides |G| by Lagrange's Theorem. Therefore Gp = {0} unless possibly p divides |G|. By Sylow's Theorem we deduce that if |G| = p^n u, where HCF(p, u) = 1, then |Gp| = p^n. Thus Gp ⊆ G is the maximal p-subgroup contained in G. The importance of the maximal p-subgroups lies in the following theorem.

Theorem. Let G be a finite Abelian group and let {p1, · · · , pr} be the primes dividing |G|. Then

G = Gp1 ⊕ · · · ⊕ Gpr.

Moreover this is the unique way to express G as the direct sum of p-subgroups for distinct primes.

Proof. Let |G| = n = a1a2 · · · ar, where ai = pi^{αi}. Let Pi = n/ai. {P1, · · · , Pr} ⊂ Z are collectively coprime ⇒ ∃Q1, · · · , Qr ∈ Z such that P1Q1 + · · · + PrQr = 1 (an extension of Euclid's algorithm). Let g ∈ G and gi = PiQig. Clearly g = g1 + g2 + · · · + gr, and pi^{αi} gi = Qi(ng) = 0, so gi ∈ Gpi. We must prove the uniqueness of this sum. Assume we had g = g1′ + · · · + gr′ with gi′ ∈ Gpi. Then x = g1 − g1′ = (g2′ − g2) + (g3′ − g3) + · · · + (gr′ − gr). The right-hand side has order dividing P1, the left-hand side has order dividing a1. P1 and a1 are coprime ⇒ ∃u, v ∈ Z such that uP1 + va1 = 1 ⇒ x = u(P1x) + v(a1x) = 0 + 0 = 0 ⇒ g1 = g1′. Similarly we find gi = gi′ for all i ∈ {1, · · · , r}. Hence the sum is unique and we deduce G = Gp1 ⊕ · · · ⊕ Gpr.

Now let {q1, · · · , qs} be a finite collection of distinct primes. Assume that G can be expressed as a direct sum G = H1 ⊕ · · · ⊕ Hs ≅ H1 × · · · × Hs, where Hi is a finite qi-subgroup. Clearly Gqi = Hi, and if p is a prime not in {q1, · · · , qs} then Gp = {0}. Thus {p1, · · · , pr} = {q1, · · · , qs} and any such representation is unique.

We have thus reduced the study of finite Abelian groups to finite Abelian p-groups.
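The primary decomposition is easy to exhibit numerically. A Python sketch (for the cyclic group Z/nZ with the illustrative choice n = 360; all names are mine): it computes each Gp as the set of elements of p-power order and checks |G| = |G2| · |G3| · |G5|.

    n = 360  # G = Z/360Z; 360 = 2^3 * 3^2 * 5

    def order(g):
        """Additive order of g in Z/nZ."""
        k, x = 1, g % n
        while x != 0:
            x = (x + g) % n
            k += 1
        return k

    def primary_component(p):
        """G_p = elements whose order is a power of p."""
        def p_power(k):
            while k % p == 0:
                k //= p
            return k == 1
        return {g for g in range(n) if p_power(order(g))}

    sizes = [len(primary_component(p)) for p in (2, 3, 5)]
    print(sizes)                                  # [8, 9, 5]
    assert sizes[0] * sizes[1] * sizes[2] == n    # |G| = |G_2|*|G_3|*|G_5|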
Theorem. Every finite Abelian p-group is a direct sum of cyclic groups.

Proof. Let G be a finite Abelian p-group. If G is cyclic we are done; otherwise take a cyclic subgroup B = gp(b) of maximal order, say p^n. Our strategy is to show that there is a p-subgroup D ⊂ G such that G = B ⊕ D. We apply the following inductive hypothesis: for any finite Abelian p-group F of size less than |G|, if M ⊂ F is a maximal cyclic subgroup then there exists N ⊂ F such that M ⊕ N = F. This is clearly true for F trivial.

We claim that there is a subgroup C of order p such that B ∩ C = {0}. Recall that, because G is Abelian, G/B is naturally an Abelian p-group. Let c ∈ G \ B and suppose c + B ∈ G/B has order p^r for r > 0. Observe that the maximal order of any element in G/B is less than or equal to p^n, so n ≥ r. By definition p^r(c + B) = B ⇒ p^r c ∈ B. Thus there exists s ∈ N such that p^r c = sb. By the maximality of the order of b we know 0 = p^n c = s p^{n−r} b. But ord(b) = p^n, hence p^n | s p^{n−r}, and therefore p|s; say s = ps′. Hence c1 = p^{r−1}c − s′b has order p (as pc1 = p^r c − sb = 0) and is not in B (if c1 ∈ B then p^{r−1}c ∈ B, contradicting the fact that c + B has order p^r). Therefore C = gp(c1) is the required subgroup.

Let B + C = {a + c | a ∈ B, c ∈ C}. We claim that B + C ⊂ G is a subgroup.

1. 0 ∈ B and 0 ∈ C ⇒ 0 ∈ B + C.
2. Let a1, a2 ∈ B and c1, c2 ∈ C. Then (a1 + c1) + (a2 + c2) = (a1 + a2) + (c1 + c2) ∈ B + C. Hence B + C is closed under composition.
3. Let a ∈ B and c ∈ C. Then −(a + c) = (−a) + (−c) ∈ B + C. Hence B + C is closed under taking inverses.

First observe that |G/C| < |G|, hence the inductive hypothesis applies to G/C. Observe that B + C ⊂ G is a subgroup containing C, and that (B + C)/C is cyclic, generated by b + C ∈ (B + C)/C. Because B ∩ C = {0} we also know that |(B + C)/C| = p^n. Note that the size of the maximal cyclic subgroup of G must be larger than or equal to the size of the maximal cyclic subgroup of G/C. However, we have constructed a cyclic subgroup (B + C)/C ⊂ G/C whose order equals that of B. Hence (B + C)/C ⊂ G/C is a maximal cyclic subgroup. Thus, by our inductive hypothesis, ∃N ⊂ G/C such that (B + C)/C ⊕ N = G/C. By the third isomorphism theorem we know that N = D/C for a unique subgroup D ⊂ G containing C.

We claim that G is the direct sum of B and D. Let g ∈ G. Then g + C ∈ G/C is uniquely expressible in the form g + C = (a + C) + (d + C) = (a + d) + C, where a ∈ B and d ∈ D. Hence g = a + d + c for some c ∈ C. However C ⊂ D, so this expresses g as a sum of elements of B and D. Now let x ∈ B ∩ D and assume that x ≠ 0. Then x ∉ C (as x ∈ B ∩ C would force x = 0), hence x + C is a non-zero element of both (B + C)/C and D/C. However, by construction (B + C)/C ∩ D/C = {C}. This is a contradiction. Hence B ∩ D = {0} and we deduce that G = B ⊕ D.

Thus we have shown that given any finite Abelian p-group G and a maximal cyclic subgroup B ⊂ G, there exists a subgroup D ⊂ G such that G = B ⊕ D. Observe that D is a finite Abelian p-group, so we can continue this process, which must eventually terminate. The end result is an expression of G as a direct sum of cyclic p-groups.

Corollary. For any finite Abelian p-group G there exists a unique decreasing sequence of natural numbers r1 ≥ · · · ≥ rn such that G ≅ Z/p^{r1}Z × · · · × Z/p^{rn}Z.

Proof. By the previous theorem we know that G is the direct sum of cyclic groups, each of p-power order. Thus we know that such integers exist. We will prove uniqueness by induction on |G|. Assume that there are isomorphisms

G ≅ Z/p^{r1}Z × · · · × Z/p^{rn}Z ≅ Z/p^{s1}Z × · · · × Z/p^{sm}Z,

where the ri and the sj are decreasing sequences of natural numbers. We see that |G| = p^{∑ri} = p^{∑sj}, hence ∑_{i=1}^n ri = ∑_{j=1}^m sj. Let pG = {pg | g ∈ G}.
It is a straightforward exercise (which we leave to the reader) to prove that pG is a subgroup of G. Note that for r > 1, Z/p^{r−1}Z ≅ p(Z/p^r Z), where the isomorphism is given by sending a + p^{r−1}Z to pa + p^r Z. We deduce that there are isomorphisms

pG ≅ Z/p^{r1−1}Z × · · · × Z/p^{rn−1}Z ≅ Z/p^{s1−1}Z × · · · × Z/p^{sm−1}Z.

Observe now that |pG| < |G|, thus by induction the ri and the sj agree in all entries strictly greater than 1. This, together with the fact that ∑ri = ∑sj, implies that the two sequences are the same, and uniqueness is proven.

Proposition. Let G be a finite Abelian group and let p ∈ N be a prime dividing |G|. Then Gp is non-trivial.

Proof. Recall that if {p1, · · · , pr} are the primes dividing |G| then G ≅ Gp1 × · · · × Gpr. Hence |G| = |Gp1| · · · |Gpr|. Each |Gpi| is a power of pi, so pi divides |G| if and only if Gpi is non-trivial.

Basis Theorem for Finitely Generated Abelian Groups. Every finitely generated Abelian group G can be written as a direct sum of cyclic groups, G = β1 ⊕ · · · ⊕ βr, where each βi is either infinite or of prime power order, and the orders which occur are uniquely determined.

Proof. G = F ⊕ tG. F is free and finitely generated, hence the direct sum of infinite cyclic groups (Z, +); the number of these equals the rank of G. tG is finite Abelian, hence in a unique way the direct sum of p-groups for distinct primes p. Each p-group is the unique direct sum (up to order) of p-power cyclic groups.

Note that we could have stated this theorem with direct product in place of direct sum. Thus we have classified all finitely generated Abelian groups up to isomorphism.

3.12 The Classification of Finite Groups (Proofs Omitted)

In the last section we classified all finite Abelian groups up to isomorphism. Is it possible to do the same for all finite groups? It turns out that the situation is far more complicated in the non-Abelian case. Here is the basic strategy:

• Show that any finite group G can be broken down into simple pieces.
• Classify these simple pieces.
• Understand how these simple pieces can fit together.

Definition. Let G be a finite group. A composition series for G is a nested collection of subgroups

{e} = G0 ⊳ G1 ⊳ · · · ⊳ Gr−1 ⊳ Gr = G

such that

• Gi−1 ≠ Gi for all 0 < i ≤ r;
• Gi/Gi−1 is simple for all 0 < i ≤ r.

Remarks. By the third isomorphism theorem a composition series cannot be extended, meaning we cannot add any intermediate normal subgroups.

Theorem. Any finite group G has a composition series.

Observe that if G is simple then {e} = G0 ⊳ G1 = G is a composition series. If G = Sym3 then {e} ⊳ gp((123)) ⊳ Sym3 gives a composition series. To see why, observe that each quotient group has size 3 or 2, and is therefore isomorphic to Z/3Z or Z/2Z, which are both simple.

Jordan-Hölder Theorem. Let G be a finite group. Suppose we have two composition series for G:

{e} = G0 ⊳ G1 ⊳ · · · ⊳ Gr−1 ⊳ Gr = G,
{e} = H0 ⊳ H1 ⊳ · · · ⊳ Hs−1 ⊳ Hs = G.

Then r = s, and the quotient groups {G1/G0, · · · , Gr/Gr−1}, {H1/H0, · · · , Hs/Hs−1} are pairwise isomorphic (perhaps after reordering).

Definition. If G has composition series {e} = G0 ⊳ G1 ⊳ · · · ⊳ Gr−1 ⊳ Gr = G, we call the quotient groups {G1/G0, · · · , Gr/Gr−1} the simple components of G.

By the Jordan-Hölder Theorem the simple components are well-defined up to isomorphism. It is possible for two non-isomorphic groups to have the same (up to isomorphism) simple components. As an example, Sym3 and Z/6Z both have simple components {Z/2Z, Z/3Z}.

Definition.
A finite group is called solvable (or soluble) if its simple components are Abelian. Note that solvable groups need not be Abelian themselves: Sym3 is solvable, while Alt5 (being simple and non-Abelian) is non-solvable.

To summarize our study: finite group theory is much like the theory of chemical molecules.

• The simple groups are like atoms.
• Finite groups have simple components, like molecules have constituent atoms.
• Non-isomorphic finite groups with the same simple components are like molecules with the same atoms but different structure (isomers).

We now have two goals:

• Classify all finite simple groups up to isomorphism.
• Classify all finite groups with given simple components.

The theory of groups was initiated by Galois in 1832. Galois discovered the first known simple groups, namely Z/pZ for p prime and Altn for n > 4. Amazingly, it took until 2004 for a complete classification to be known. The proof stretches across over 10000 pages and is the combined work of thousands of mathematicians. Here is a very rough breakdown of the four distinct classes of finite simple groups:

• Cyclic groups of prime order. These are the only Abelian simple groups.
• Altn for n > 4.
• Finite groups of Lie type. These groups are very complicated to describe in general. The basic idea is that they can be realized as subgroups and quotients of matrix groups. There are 16 infinite families of finite simple groups of Lie type.
• There are 26 sporadic groups. Very strangely, these do not fall into any fixed pattern. The first were discovered in 1852 by Mathieu, while he was thinking about subgroups of finite permutation groups with extremely strong transitivity properties. The largest sporadic group was discovered in the 1970s. It is called the monster group and has size

2^46 · 3^20 · 5^9 · 7^6 · 11^2 · 13^3 · 17 · 19 · 23 · 29 · 31 · 41 · 47 · 59 · 71.

The monster contains all but six of the other sporadic groups as quotients of subgroups.

The theory of finite simple groups is one of the crown jewels of mathematics. It demonstrates how profound the definition of a group really is: all of this complexity is contained in those three innocent axioms. The next question, of course, is to classify all finite groups with given simple components. This is still a wide open problem, and as such a complete classification of all finite groups is still unknown. One may ask about classifying infinite groups. Unsurprisingly the situation is even more complicated, although much progress has been made when specific extra structure (topological, analytic or geometric) is imposed.

4 Rings and Fields

4.1 Basic Definitions

A group (G, ∗) is a set with a binary operation satisfying three properties. The motivation for the definition reflected the behavior of (Z, +). Observe that Z also comes naturally equipped with multiplication ×. In the first lectures we collected some of the properties of (Z, +, ×). Motivated by this we make the following fundamental definition:

Definition. A ring is a set R with two binary operations, +, called addition, and ×, called multiplication, such that:

1. R is an Abelian group under addition.
2. R is a monoid under multiplication (inverses do not necessarily exist).
3. + and × are related by the distributive law: (x + y) × z = x × z + y × z and x × (y + z) = x × y + x × z, ∀x, y, z ∈ R.

The identity for + is "zero", denoted 0R (often just written as 0), and the identity for × is "one", denoted 1R (often just written as 1).

Remarks.
1. To simplify the notation we will write x × y = xy for all x, y ∈ R.
2. Distributivity implies that we can "multiply" together finite sums: (∑ xi)(∑ yj) = ∑ xiyj in a well-defined way.

Here are some examples of rings:

1. The integers under the usual addition and multiplication.
2. Z/mZ under the addition and multiplication described in 2.3.
3. Let S be a set and P(S) the set of all subsets of S; this is called the power set of S. On P(S) define + and × by X + Y = (X ∩ Y′) ∪ (X′ ∩ Y) and XY = X ∩ Y, where X′ denotes the complement of X in S. Then P(S) is a ring with ∅ = 0 and S = 1. This strange-looking ring has applications to mathematical logic.
4. In linear algebra the collection of linear maps from Rn to Rn is the set Mn×n(R) of n × n real matrices. This has the structure of a ring under the usual addition and multiplication of matrices.

Note that matrix multiplication is not commutative in general, so it is perfectly possible for the multiplication in a ring not to be commutative.

Definition. Let R be a ring with multiplication ×. If × is commutative, i.e. xy = yx ∀x, y ∈ R, then we say that R is a commutative ring.

Definition. Let R and S be two rings. A homomorphism φ from R to S is a map of sets φ : R → S such that ∀x, y ∈ R:

1. φ(x + y) = φ(x) + φ(y)
2. φ(xy) = φ(x)φ(y)
3. φ(1R) = 1S

Once again, if R = S and φ = IdR then we call it the identity homomorphism. Note that R and S are Abelian groups under +, so φ is a group homomorphism with respect to +, and hence φ(0R) = 0S. We have to include (3) because (R, ×) is only a monoid, so it does not follow from (2) alone that φ(1R) = 1S.

Remarks. 1. As for groups, the composition of two ring homomorphisms is again a ring homomorphism. 2. As before, an isomorphism is a bijective homomorphism, or equivalently one with an inverse homomorphism. A homomorphism from R to itself is called an endomorphism. An endomorphism which is also an isomorphism is called an automorphism. This is exactly the same terminology as for groups.

In any ring R we have the following elementary consequences of the axioms: x0 = x(0 + 0) = x0 + x0 ⇒ x0 = 0. Similarly, 0x = 0 for all x ∈ R. If R consists of one element, then 1 = 0; conversely, if 1 = 0 then ∀x ∈ R, x = x1 = x0 = 0, hence R consists of one element. The ring with one element is called the trivial ring.

In a ring we abbreviate expressions like a + a + · · · + a (n times) = na (n ∈ N). It is clear that we may naturally extend this to all n ∈ Z. Similarly, a × a × · · · × a (n times) = a^n for n ∈ N. By the distributive law, we have the identities:

1. m(a + b) = ma + mb
2. (m + n)a = ma + na
3. (mn)a = m(na)

∀a, b ∈ R and m, n ∈ Z.

Definition. Given R and S two rings, we say that R is a subring of S if it is a subset and is a ring under the induced operations (with the same 0 and 1), e.g. (Z, +, ×) ⊂ (Q, +, ×). More precisely:

1. R is a subgroup of S under addition.
2. R is closed under multiplication.
3. 1S ∈ R.

Remarks. As with subgroups, an arbitrary intersection of subrings is again a subring.

4.2 Ideals, Quotient Rings and the First Isomorphism Theorem for Rings

Let G and H be groups and φ : G → H a group homomorphism. Recall that ker(φ) ⊂ G is a normal subgroup, thus the set of cosets G/ker(φ) naturally forms a group (the quotient group), and all normal subgroups arise in this manner. The first isomorphism theorem states that there is a natural isomorphism G/ker(φ) ≅ Im(φ). Does something analogous hold for rings?

Let R and S be two rings. Let φ : R → S be a ring homomorphism.

Definition. The kernel of φ is the subset ker(φ) := {r ∈ R | φ(r) = 0S} ⊂ R.
The image of φ is the subset Im(φ) := {s ∈ S | ∃r ∈ R s.t. φ(r) = s} ⊂ S.

Remember that φ is a group homomorphism with respect to the additive Abelian group structures on R and S. With respect to this structure these definitions are exactly the same as in group theory. In particular we know that ker(φ) = {0R} ⇔ φ is injective. We also know that ker(φ) ⊂ R and Im(φ) ⊂ S are subgroups under addition.

Proposition. Im(φ) ⊂ S is a subring.

Proof. We need to check that Im(φ) is closed under multiplication and contains 1S. Let s1, s2 ∈ Im(φ). Hence ∃r1, r2 ∈ R such that φ(r1) = s1 and φ(r2) = s2. But s1s2 = φ(r1)φ(r2) = φ(r1r2), hence s1s2 ∈ Im(φ), so Im(φ) is closed under multiplication. By definition φ(1R) = 1S, hence 1S ∈ Im(φ). Thus Im(φ) is a subring.

If S is non-trivial then, because φ(1R) = 1S, we know that 1R ∉ ker(φ). Hence in this case ker(φ) ⊂ R is not a subring. What properties does it satisfy?

1. ker(φ) ⊂ R is a subgroup under +.
2. Let a ∈ ker(φ) and r ∈ R. Observe that φ(ra) = φ(r)φ(a) = φ(r)0S = 0S. Hence ra ∈ ker(φ). Similarly ar ∈ ker(φ). Hence ker(φ) is closed under both left and right multiplication by all of R.

Definition. Let R be a ring. An ideal I ⊂ R is a subset which is a subgroup under addition and is closed under both left and right multiplication by all of R. More precisely, if x ∈ I then xr, rx ∈ I for all r ∈ R.

We have just shown that the kernel of a homomorphism is always an ideal. An ideal is the ring-theoretic analogue of a normal subgroup in group theory. Let I ⊂ R be an ideal. Recall that (R, +) is an Abelian group, hence (I, +) ⊂ (R, +) is a normal subgroup, and the cosets R/I naturally have a group structure under addition. So far we have completely ignored the multiplicative structure on R. Let us define a multiplication by:

(a + I) × (b + I) := (ab) + I, ∀a, b ∈ R.

Lemma. This binary operation is well defined.

Proof. Let a1 + I = a2 + I and b1 + I = b2 + I, where a1, a2, b1, b2 ∈ R. Observe that a1b1 − a2b2 = a1(b1 − b2) + (a1 − a2)b2 is contained in I because I is an ideal. Thus a1b1 + I = a2b2 + I.

Proposition. R/I is a ring under the natural operations. We call it the quotient ring.

Proof. This is just a long and tedious exercise of checking the axioms, which all follow because they hold on R. Unsurprisingly, 0 + I is the additive identity and 1 + I is the multiplicative identity.

As in the case of groups, there is a natural surjective quotient ring homomorphism φ : R → R/I. From the definitions we see that ker(φ) = I. We deduce that the ideals of a ring are precisely the kernels of ring homomorphisms. This is totally analogous to the group theory situation.

The First Isomorphism Theorem. Let φ : R → S be a ring homomorphism. Then the induced map

ϕ : R/ker(φ) → Im(φ), a + ker(φ) ↦ φ(a)

is a ring isomorphism.

Proof. The first isomorphism theorem for groups tells us that it is an isomorphism of additive groups. Hence we merely need to check that it is a ring homomorphism. Let a, b ∈ R. Then ϕ((a + ker(φ))(b + ker(φ))) = ϕ(ab + ker(φ)) = φ(ab) = φ(a)φ(b) = ϕ(a + ker(φ))ϕ(b + ker(φ)). Also ϕ(1 + ker(φ)) = φ(1) = 1. Hence ϕ is a ring homomorphism and we are done.

Definition. An injective ring homomorphism φ : R → S is called an embedding. By the first isomorphism theorem, R is isomorphic to the subring Im(φ) ⊂ S.

4.3 Properties of Elements of Rings

Definition. Let R be a ring. An element a ∈ R is said to be invertible, or a unit, if it has a multiplicative inverse, i.e. ∃a′ ∈ R such that a′a = aa′ = 1. We know that such an inverse is unique if it exists, hence we shall write it as a⁻¹.
Note that if 1 ≠ 0 then 0 is never invertible. We denote the set of units in R by R*. It is clear that for any ring R, (R*, ×) is a group.

Definition. A non-trivial ring R in which every non-zero element is invertible (i.e. R \ {0} = R*) is called a division ring (or skew field). If R is a commutative division ring then R is called a field.

Remarks. 1. (Q, +, ×) is the canonical example of a field. Other natural examples include (R, +, ×), (C, +, ×) and (Z/pZ, +, ×), where p is a prime number. There are examples of division rings which are not fields (i.e. not commutative), but we will not encounter them in this course. 2. All of linear algebra (except the issue of eigenvalues existing) can be set up over an arbitrary field. All proofs are exactly the same; we never used anything else about R or C.

In an arbitrary ring it is possible for two non-zero elements to multiply to give zero. For example, in M2×2(R), the non-zero matrices A = (0 1; 0 0) and B = (0 2; 0 0) multiply to give the zero matrix.

Definition. Let R be a non-trivial ring. Given a ∈ R \ {0}, if there exists b ∈ R \ {0} such that ab = 0 or ba = 0, then a is said to be a zero-divisor. Note that 0 is not a zero-divisor.

Definition. A non-trivial ring R with no zero-divisors is said to be entire; a commutative entire ring is called an integral domain. More concretely: R is entire if and only if 1 ≠ 0 and ∀x, y ∈ R, xy = 0 ⇒ x = 0 or y = 0.

(Z, +, ×) and (Q, +, ×) are integral domains. (Z/mZ, +, ×) is an integral domain ⇔ m is prime. The above example shows that M2×2(R) is not entire.

Theorem. A ring R is entire ⇔ its set of non-zero elements forms a monoid under multiplication. Another way to state this is that R is entire ⇔ R \ {0} contains 1 and is closed under multiplication.

Proof. In any ring R, observe that if x, y ∈ R are two non-zero-divisors then by definition xy ∈ R must be a non-zero-divisor. Hence, if R is non-trivial, the non-zero-divisors of R form a monoid under multiplication. If R is entire, the set of non-zero-divisors is precisely R \ {0}, which is therefore a monoid under multiplication. Conversely, if R \ {0} is a monoid then firstly it is non-empty, so R is non-trivial; and if x, y ∈ R \ {0} then xy ∈ R \ {0}. Hence R is entire by definition.

Corollary. Any field F is an integral domain.

Proof. If x, y ∈ F with x ≠ 0 ≠ y, then ∃x⁻¹, y⁻¹ ∈ F, and xy is invertible (with inverse y⁻¹x⁻¹), hence non-zero. Therefore the non-zero elements are closed under multiplication, so F is entire. F is a field, so F is commutative; hence it is an integral domain.

Cancellation Law: Let R be a ring. If c ∈ R is not a zero-divisor, then for any a, b ∈ R such that ca = cb or ac = bc, we have a = b. This is because ca − cb = c(a − b) and ac − bc = (a − b)c. In particular, if R is entire, then we can "cancel" any non-zero element. It is important to note that we cannot do this in an arbitrary ring.

Theorem. Every finite integral domain R is a field.

Proof. We need to show that R* = R \ {0}. Let a ∈ R \ {0}. Define the following map of sets: ψ : R \ {0} → R \ {0}, r ↦ ra. ψ is well-defined because R is an integral domain. By the cancellation law for integral domains, given r1, r2 ∈ R, r1a = r2a ⇒ r1 = r2, so ψ is injective. Since R \ {0} is finite, ψ is surjective ⇒ ∃b ∈ R \ {0} such that ba = ab = 1. Hence a has a multiplicative inverse. Therefore R* = R \ {0}.
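This can be watched in action for Z/mZ. A Python sketch (names mine; it uses the standard fact that a is invertible mod m exactly when gcd(a, m) = 1): it computes the unit group (Z/mZ)* and confirms that every non-zero element is a unit precisely when m is prime, in line with the remarks and the theorem above.

    from math import gcd

    def units(m):
        """The unit group (Z/mZ)*: elements with a multiplicative inverse mod m."""
        return [a for a in range(1, m) if gcd(a, m) == 1]

    def is_field(m):
        # Z/mZ is a field  <=>  every non-zero element is a unit
        return len(units(m)) == m - 1

    print(units(12))          # [1, 5, 7, 11]
    print(is_field(7))        # True:  7 is prime
    print(is_field(12))       # False: e.g. 2 is a zero-divisor (2*6 = 0 mod 12)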
4.4 Polynomial Rings

Let R be a ring.

Definition. The polynomial ring in X with coefficients in R consists of formal expressions of the form

g(X) = b0 + b1X + b2X^2 + · · · + bmX^m, bi ∈ R, m ∈ N.

If f(X) = a0 + a1X + · · · + anX^n is another polynomial then we decree that f(X) = g(X) ⇔ ai = bi ∀i, where we set ai = 0 if i > n and bj = 0 if j > m. We refer to X as the indeterminate. Addition and multiplication are defined by the rules:

1. f(X) + g(X) = (a0 + b0) + (a1 + b1)X + · · · + (an + bn)X^n (if m ≤ n);
2. f(X) × g(X) = (a0b0) + (a0b1 + a1b0)X + (a0b2 + a1b1 + a2b0)X^2 + · · · + anbmX^{n+m}.

We will denote this ring by R[X].

Important Exercise. Check that this genuinely gives a ring structure on the set of polynomials in X with coefficients in R.

Note that there is a natural embedding φ : R → R[X], a ↦ a (the polynomial with m = 0 and a0 = a).

Remarks. 1. The zero and one elements in R[X] are the images of the zero and one elements of R under φ. 2. R commutative ⇒ R[X] commutative. 3. Given f(X) ∈ R[X] we can construct a map (of sets) ϕf : R → R, a ↦ f(a), where f(a) ∈ R is the element of R given by replacing X by a. For a general ring R this process can be quite subtle, as we shall see.

Definition. Let R be a ring and f ∈ R[X] a non-zero polynomial. We say that a ∈ R is a root, or zero, of f if f(a) = 0.

Definition. Let R be a ring and f ∈ R[X] a non-zero polynomial. Hence we may write f = cnX^n + c_{n−1}X^{n−1} + · · · + c0, with ci ∈ R and cn ≠ 0. We call n the degree of f and write deg(f) = n. If in addition cn = 1, we say that f is monic. Elements of degree 0 are called constant polynomials.

Theorem. If R is entire then R[X] satisfies:

1. ∀f, g ∈ R[X] \ {0} with f + g ≠ 0, deg(f + g) ≤ max{deg(f), deg(g)};
2. ∀f, g ∈ R[X] \ {0}, fg ≠ 0 and deg(fg) = deg(f) + deg(g).

Proof. By the definition of degree, (1) is clear. For (2): let deg(f) = n and deg(g) = m, and let an, bm be the leading coefficients of f and g respectively. Then fg has maximal power of X given by anbmX^{n+m}. As R is entire, anbm ≠ 0 ⇒ fg ≠ 0 and deg(fg) = n + m = deg(f) + deg(g).

Corollary. R entire ⇒ R[X] entire.

Proof. Immediate from the above.

Corollary. R an integral domain ⇒ R[X] an integral domain.

Proof. Immediate from the above.

The process of adjoining indeterminates to a ring R can be iterated to form polynomials in more than one variable with coefficients in R. We of course use another symbol for each indeterminate, i.e. R[X][Y] consists of polynomials in X and Y with coefficients in R, e.g. X^2 + Y^2X + X^3Y^6. We simplify this notation to R[X][Y] = R[X, Y]. Inductively, we define R[X1, · · · , Xn] = R[X1, · · · , X_{n−1}][Xn]. Every f ∈ R[X1, · · · , Xn] has a unique expression of the form f = ∑ a_{i1···in} X1^{i1} · · · Xn^{in} (a_{i1···in} ∈ R), where the sum is finite. Expressions of the form m(i) = X1^{i1} · · · Xn^{in} are called monomials. The example we will study most deeply is when R is a field.
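The multiplication rule is just a convolution of coefficient lists, which makes it easy to experiment with. A Python sketch (coefficient lists in increasing degree, over Z; the function name is mine): it implements the product formula and checks the degree formula deg(fg) = deg(f) + deg(g) on an example.

    def poly_mul(f, g):
        """Product in R[X]: f, g are coefficient lists [a0, a1, ...] over Z."""
        prod = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                prod[i + j] += a * b   # the X^(i+j) coefficient picks up a_i * b_j
        return prod

    f = [1, 0, 2]   # 1 + 2X^2,  deg 2
    g = [3, 1]      # 3 + X,     deg 1

    fg = poly_mul(f, g)
    print(fg)                                           # [3, 1, 6, 2] = 3 + X + 6X^2 + 2X^3
    assert len(fg) - 1 == (len(f) - 1) + (len(g) - 1)   # deg(fg) = deg f + deg g over Z

Over a ring with zero-divisors the final assertion can fail, which is exactly the content of the hypothesis "R entire" in the theorem.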
4.5 Field of Fractions

What is the process by which we go from (Z, +, ×) to (Q, +, ×)? Intuitively, we are "dividing" through by all non-zero elements. Let us think more carefully about what is actually happening and try to generalize the construction to an integral domain R. What is an element of Q? We usually write it in the form a/b with a, b ∈ Z, b ≠ 0. This representation is not unique: a/b = c/d ⇔ ad − bc = 0. As we are all aware, we define + and × by the following rules:

1. a/b + c/d = (ad + cb)/bd
2. a/b × c/d = ac/bd

We should therefore think of elements of Q as pairs of integers (a, b) with b ≠ 0, up to the equivalence relation (a, b) ∼ (c, d) ⇔ ad − cb = 0. Hence Q can be thought of as (Z × (Z \ {0}))/∼. The well-definedness of + and × is not obvious and needs checking, i.e. choosing different elements of the same equivalence class should give the same results.

Let us now generalise this construction. Let R be an integral domain. We define the relation on R × (R \ {0}) by: (a, b) ∼ (c, d) ⇔ ad − bc = 0.

Proposition. ∼ is an equivalence relation.

Proof.

1. (a, b) ∼ (a, b), as ab − ab = 0 since R is commutative.
2. (a, b) ∼ (c, d) ⇒ ad − bc = 0 ⇒ cb − da = 0 ⇒ (c, d) ∼ (a, b).
3. Let (a, b) ∼ (c, d) and (c, d) ∼ (e, f). Then ad − bc = 0 and cf − de = 0. Consider (af − be)d = adf − bed = f(ad − bc) + b(cf − de) = f·0 + b·0 = 0. Since d ≠ 0 and R has no zero-divisors, af − be = 0 ⇒ (a, b) ∼ (e, f).

Let us denote the set of equivalence classes by (R × (R \ {0}))/∼. It is convenient to use the usual notation: for (a, b) ∈ R × (R \ {0}) we denote the equivalence class containing (a, b) by a/b. Let us define multiplication and addition on (R × (R \ {0}))/∼ by

a/b + c/d = (ad + bc)/bd, a/b × c/d = ac/bd.

Proposition. + and × are well-defined on (R × (R \ {0}))/∼.

Proof. The first thing to note is that if b, d ∈ R \ {0} then bd ∈ R \ {0}, as R is an integral domain. We just need to check that choosing different representatives gives the same answer. It is just an exercise in keeping the notation in order; you can do it.

Proposition. 0 ∈ (R × (R \ {0}))/∼ is given by the equivalence class containing (0, 1); 1 is given by the equivalence class containing (1, 1).

Proof. For all (a, b) ∈ R × (R \ {0}), a/b + 0/1 = (a × 1 + b × 0)/(b × 1) = a/b, and a/b × 1/1 = (a · 1)/(b · 1) = a/b. Both operations are clearly commutative because R is commutative. Hence we are done.

It is a straightforward exercise to check that under these operations (R × (R \ {0}))/∼ is a commutative ring. Also observe that (a, b) ∈ R × (R \ {0}) is in the zero class if and only if a = 0; similarly, (a, b) gives the one class if and only if a = b. This is good: it is the same as in Q, so we have done something right.

Theorem. (R × (R \ {0}))/∼ is a field.

Proof. We just need to check that non-zero elements have multiplicative inverses. Let a/b ∈ (R × (R \ {0}))/∼ be non-zero. By the above this implies that a ≠ 0. Hence b/a ∈ (R × (R \ {0}))/∼, and a/b × b/a = ab/ab = 1/1. Hence we are done.

Definition. Let R be an integral domain. The field of fractions of R is the field Frac(R) := (R × (R \ {0}))/∼. The canonical example is Frac(Z) = Q.

Definition. Given an integral domain R and indeterminates {X1, · · · , Xn}, we know that R[X1, · · · , Xn] is an integral domain. We define R(X1, · · · , Xn) := Frac(R[X1, · · · , Xn]).

Theorem. The map φ : R → Frac(R), a ↦ a/1, is an embedding.

Proof. We need to check that φ is a homomorphism first.

1. Given a, b ∈ R, φ(a + b) = (a + b)/1 = a/1 + b/1 = φ(a) + φ(b).
2. Given a, b ∈ R, φ(ab) = ab/1 = a/1 × b/1 = φ(a)φ(b).
3. φ(1) = 1/1.

To check that it is injective we just need to show that the kernel (as a homomorphism of Abelian groups) is trivial: φ(a) = a/1 = 0/1 ⇔ a = 0. Thus the kernel is trivial and so φ is injective.

Corollary. Every integral domain may be embedded in a field.

Proposition. Let R be a field. The natural embedding φ : R → Frac(R) is an isomorphism.

Proof. We must show that φ is surjective. Let a/b ∈ Frac(R). R is a field, so there exists b⁻¹, a multiplicative inverse to b. But a/b = ab⁻¹/1 = φ(ab⁻¹). Hence φ is surjective, and therefore an isomorphism.

This is backed up by our intuition: clearly taking fractions of rationals just gives the rationals again.
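The pair-plus-equivalence construction can be prototyped directly. A Python sketch (a deliberately minimal Frac class over Z; the normalisation by gcd is my convenience so that equal classes compare cheaply, and is not part of the abstract construction):

    from math import gcd

    class Frac:
        """Element a/b of Frac(Z), stored as a pair (a, b) with b > 0."""
        def __init__(self, a, b):
            assert b != 0
            g = gcd(a, b) or 1
            if b < 0:
                a, b = -a, -b
            self.a, self.b = a // g, b // g

        # (a,b) ~ (c,d)  <=>  ad - bc = 0
        def __eq__(self, o):
            return self.a * o.b - self.b * o.a == 0

        def __add__(self, o):
            return Frac(self.a * o.b + self.b * o.a, self.b * o.b)

        def __mul__(self, o):
            return Frac(self.a * o.a, self.b * o.b)

    # (1,2) and (2,4) are the same class, and the arithmetic matches Q:
    assert Frac(1, 2) == Frac(2, 4)
    assert Frac(1, 2) + Frac(1, 3) == Frac(5, 6)
    assert Frac(2, 3) * Frac(3, 2) == Frac(1, 1)   # non-zero elements are invertible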
4.6 Characteristic

Let R be entire (non-trivial with no zero-divisors). Recall that (R, +) is an Abelian group, hence given a ∈ R we may talk about its additive order. Recall that if a ∈ R does not have finite order, then we say it has infinite order.

Theorem. In an entire ring R, the additive order of every non-zero element is the same. In addition, if this order is finite then it is prime.

Proof. Let a ∈ R \ {0} be of finite (additive) order k > 1, i.e. k is minimal such that ka = 0. This implies (k × 1R)a = 0 ⇒ k × 1R = 0, as R is entire and contains no zero-divisors. Therefore if we choose b ∈ R \ {0} then kb = (k × 1R)b = 0 × b = 0, so every non-zero element has order dividing k. Choosing a of minimal order k > 1 therefore ensures that every non-zero element has order exactly k. If no element has finite order, all elements must have infinite order.

Now assume that 1R ∈ R has finite order k > 1 and that we have factored k = rs in N. Then k1R = (rs)1R = (r1R)(s1R) = 0. Since R is entire, either r1R = 0 or s1R = 0. However, since k is the minimal order of 1R, r = k or s = k. Therefore k must be prime.

Definition. Suppose R is an entire ring. R has characteristic zero if all of its non-zero elements have infinite additive order, denoted char(R) = 0. If all non-zero elements of R are of additive order p ∈ N, then R has characteristic p, or char(R) = p. In this case R has finite characteristic.

When studying abstract fields the characteristic is very important. E.g. Q, R, C are all fields (hence entire) of characteristic zero. If p is a prime number, Z/pZ is a field of characteristic p. We denote this latter field by Fp.

Theorem. There is an embedding of Q in any field F of characteristic 0.

Proof. Let 1F denote the multiplicative identity in F and 0F the additive identity. We must find a suitable embedding of Q in F. Because char(F) = 0, the natural map φ : Z → F, n ↦ n1F, is injective. We claim that it is a homomorphism (of rings). Let a, b ∈ Z. Then φ(ab) = ab1F = (a1F)(b1F) = φ(a)φ(b); φ(a + b) = (a + b)1F = a1F + b1F = φ(a) + φ(b); and φ(1) = 1F. Thus φ is an injective homomorphism. Now we extend this to Q. We define the following map:

ψ : Q → F, n/m ↦ φ(n)φ(m)⁻¹.

We must check that ψ is well-defined and is an embedding. For a, b, n, m ∈ Z, n/m = a/b ⇒ nb − am = 0. Therefore φ(nb − am) = φ(0) = 0F = φ(nb) − φ(am) ⇒ φ(nb) = φ(am) ⇒ φ(n)φ(b) = φ(a)φ(m) ⇒ φ(n)φ(m)⁻¹ = φ(a)φ(b)⁻¹ ⇒ ψ(n/m) = ψ(a/b). This shows that ψ is well-defined. Next, ψ is a homomorphism:

ψ(a/b + n/m) = ψ((am + bn)/bm) = (φ(a)φ(m) + φ(b)φ(n))φ(bm)⁻¹ = φ(a)φ(b)⁻¹ + φ(n)φ(m)⁻¹ = ψ(a/b) + ψ(n/m);

ψ((a/b)(n/m)) = ψ(an/bm) = φ(an)φ(bm)⁻¹ = φ(a)φ(n)φ(b)⁻¹φ(m)⁻¹ = (φ(a)φ(b)⁻¹)(φ(n)φ(m)⁻¹) = ψ(a/b)ψ(n/m);

and by definition ψ(1/1) = 1F. Thus we have a homomorphism. We claim that it is injective; we must show that the kernel (as a homomorphism of Abelian groups) is trivial. Let n/m ∈ Q be such that ψ(n/m) = 0. Then φ(n)φ(m)⁻¹ = 0 ⇒ φ(n) = 0 ⇒ n = 0, as φ was already shown to be injective. Therefore the kernel is trivial, so ψ is an embedding.

Theorem. Let p be a prime number. There is an embedding of Fp in any field F of characteristic p.

Proof. Note that {0F, 1F, 2·1F, · · · , (p − 1)1F} ⊆ F is closed under + and ×, hence forms a subring. Clearly Fp is isomorphic to this subring under the embedding ψ : Fp → F, [a] ↦ a1F.
4.7 Ring Extensions

Let $R$ be a subring of $S$. Recall that this means $R$ is a subgroup under addition, is closed under multiplication and contains $1_S$.

Definition. The ring extension of $R$ by $\{\alpha_1, \cdots, \alpha_n\} \subset S$ is the subring
$$R[\alpha_1, \cdots, \alpha_n] = \{f(\alpha_1, \cdots, \alpha_n) \mid f \in R[X_1, \cdots, X_n]\}.$$
This is the intersection of all subrings containing $R$ and the subset $\{\alpha_1, \cdots, \alpha_n\}$.

4.8 Principal, Prime and Maximal Ideals

Definition. An ideal $I \subset R$ is proper if $I \neq R$. Note that $I \subset R$ is proper if and only if $R/I$ is a non-trivial ring.

Definition. Let $R$ be a commutative ring. We say an ideal $I \subset R$ is principal if there exists $a \in R$ such that $I = \{ra \mid r \in R\}$. In this case we write $I = (a)$.

Definition. Let $R$ be a commutative ring. We say an ideal $I \subset R$ is prime if it is proper and, given $a, b \in R$ such that $ab \in I$, either $a \in I$ or $b \in I$.

Proposition. Let $R$ be a commutative ring and $I \subset R$ an ideal. Then $I$ is prime if and only if $R/I$ is an integral domain.

Proof. $I$ is a proper ideal, hence $R/I$ is non-trivial, and $R$ commutative trivially implies that $R/I$ is commutative. Let $I \subset R$ be prime and assume that $R/I$ has zero-divisors: there exist $a, b \in R$ with $a, b \notin I$ but $(a + I)(b + I) = 0 + I$. This trivially implies that $ab \in I$, contradicting the fact that $I$ is prime. Conversely, assume that $R/I$ is an integral domain but $I$ is not prime. Then we can find $a, b \in R$ such that $ab \in I$ but $a, b \notin I$. But then $(a + I)$ and $(b + I)$ are zero-divisors, which is a contradiction.

Definition. Let $R$ be a commutative ring. We say that an ideal is maximal if it is maximal among the set of proper ideals. More precisely, $I \subset R$ is a maximal ideal if, given an ideal $J \subset R$ such that $I \subset J$, either $I = J$ or $J = R$.

Proposition. Let $R$ be a commutative ring and $I \subset R$ an ideal. Then $I$ is maximal if and only if $R/I$ is a field.

Proof. First observe that $R$ commutative trivially implies that $R/I$ is commutative. Assume that $I \subset R$ is maximal. Take a non-zero element of $R/I$, i.e. $a + I$ for $a \notin I$. Consider the ideal $(a) \subset R$ and the following new ideal:
$$(a) + I = \{ra + b \mid r \in R,\ b \in I\}.$$
This is certainly an ideal, because it is closed under addition and under multiplication by all of $R$. Note that by construction $I \subset (a) + I$ and $a \in (a) + I$, so $I$ is strictly contained in $(a) + I$. But $I$ is maximal, hence $(a) + I = R$. Thus there exist $r \in R$ and $b \in I$ such that $ra + b = 1$. Hence $(r + I)(a + I) = ra + I = 1 + I$, so $(a + I)$ has a multiplicative inverse. Hence $R/I$ is a field.

Conversely, assume that $R/I$ is a field and, for a contradiction, that $J$ is a proper ideal of $R$ strictly containing $I$, i.e. that $I$ is not maximal. Let $a \in J$ with $a \notin I$. Then $(a + I)$ is non-zero in $R/I$, so it has a multiplicative inverse: there exists $b \in R$ such that $ab + I = 1 + I$. This implies that $ab - 1 \in I$, which in turn implies that $ab - 1 \in J$. But $a \in J$, hence $1 \in J$, which implies $J = R$. This is a contradiction. Hence $I$ is maximal.

Corollary. Let $R$ be a commutative ring and $I \subset R$ an ideal. Then $I$ maximal implies that $I$ is prime.

Proof. $I$ maximal $\Rightarrow R/I$ is a field $\Rightarrow R/I$ is an integral domain $\Rightarrow I$ prime.
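In $\mathbb{Z}$ every ideal is of the form $(n)$, so these characterisations can be watched concretely: $(n)$ is prime exactly when $\mathbb{Z}/n\mathbb{Z}$ has no zero-divisors. A small Python check of our own (helper name invented):

```python
from itertools import product

def zero_divisor_pairs(n):
    """Non-zero classes a, b in Z/nZ with ab = 0: witnesses that (n) is not prime."""
    return [(a, b) for a, b in product(range(1, n), repeat=2) if (a * b) % n == 0]

print(zero_divisor_pairs(7))   # []: Z/7Z is an integral domain (in fact a field)
print(zero_divisor_pairs(6))   # [(2, 3), (3, 2), (3, 4), (4, 3)]: (6) is not prime
```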
4.9 Factorisation in Integral Domains

Let $R$ be a ring. In $\mathbb{Z}$ we have the "Fundamental Theorem of Arithmetic": every non-zero element of $\mathbb{Z}$ is $\pm 1$ times a unique product of prime numbers. Does something analogous hold for $R$? Clearly, if $R$ is not commutative or has zero-divisors the issue is very subtle, hence we will restrict to the case when $R$ is an integral domain. The first issue to address is what a prime element of $R$ should mean. The problem, as we will see, is that we can easily come up with several different natural definitions which are equivalent in $\mathbb{Z}$, but in $R$ may not be. For $a, b \in R$, as in $\mathbb{Z}$, $a \mid b$ will mean that $\exists c \in R$ such that $b = ac$.

Definition. Two non-zero elements $a, b$ in an integral domain $R$ are associated if $a \mid b$ and $b \mid a$, i.e. $\exists c, d \in R$ such that $b = ac$ and $a = bd$.

Theorem. Let $R$ be an integral domain and $a, b \in R$ two non-zero elements. Then $a$ and $b$ are associated $\iff a = bu$ for some $u \in R^*$.

Proof. If $a$ and $b$ are associated then $a \mid b$ and $b \mid a$, so $\exists c, d \in R$ such that $a = bd$ and $b = ac$, giving $a = acd$, hence $a = 0$ or $cd = 1$. If $a = 0$ then $b = 0$, which is not true by assumption. Thus $cd = 1$, so $c$ and $d$ are inverses of each other and thus units. Conversely, if $a = bu$ with $u \in R^*$, then $b \mid a$, and $b = au^{-1}$ gives $a \mid b$.

Theorem. Let $R$ be an integral domain with $a, b \in R$. Then $(a) \subset (b) \iff b \mid a$. Hence $a$ and $b$ are associated if and only if $(a) = (b)$.

In $\mathbb{Z}$, $m$ and $n$ are associated if and only if $n = \pm m$.

Definition. We call $a \in R \setminus \{0\}$ an irreducible element if it is a non-unit and is NOT the product of two non-units.

If $a$ is irreducible then so are all its associates. In $\mathbb{Z}$, $m$ is irreducible if and only if it is $\pm 1$ times a prime. The FTOA says that every $m \in \mathbb{Z}$ can be factored into irreducible elements in "essentially" one way. Here, essentially means up to switching irreducibles for associated irreducibles, e.g. $10 = 2 \times 5 = (-2) \times (-5)$. This motivates the important definition:

Definition. A unique factorisation domain (UFD) is an integral domain in which every element NOT zero or a unit can be written as a product of irreducibles. Moreover, given two complete factorisations of the same element into irreducibles, $x = a_1 \cdots a_n = b_1 \cdots b_m$, we have $n = m$ and, after renumbering, $a_i$ is associated to $b_i$ for all $i \in \{1, \cdots, n\}$.

Clearly $\mathbb{Z}$ is a UFD by the Fundamental Theorem of Arithmetic. A natural question to ask is whether all integral domains are UFDs. The answer, rather surprisingly, is no.

Let $R$ be a UFD. Many of the properties of $\mathbb{Z}$ carry over to $R$. For example, we can talk about highest common factors (HCF) and lowest common multiples (LCM) for $a, b \in R \setminus \{0\}$.

Definition. Given $a, b \in R \setminus \{0\}$, a highest common factor of $a$ and $b$ is an element $d \in R$ such that
1. $d \mid a$ and $d \mid b$;
2. given $d' \in R$ such that $d' \mid a$ and $d' \mid b$, then $d' \mid d$.

Definition. Given $a, b \in R \setminus \{0\}$, a lowest common multiple of $a$ and $b$ is an element $c \in R$ such that
1. $a \mid c$ and $b \mid c$;
2. given $c' \in R$ such that $a \mid c'$ and $b \mid c'$, then $c \mid c'$.

Remarks. 1. It should be observed that there is no reason to believe that HCFs and LCMs exist in an arbitrary integral domain. Indeed, it is not true in general. 2. Clearly an HCF (if it exists) is NOT unique: if $d$ is an HCF of $a$ and $b$, then so is any $d'$ associated to $d$; similarly for LCMs. Hence when we talk about the HCF or LCM of two elements we must understand they are well defined only up to association.

Theorem. In a UFD any two non-zero elements have an HCF. Moreover, if $a = u p_1^{\alpha_1} \cdots p_r^{\alpha_r}$ and $b = v p_1^{\beta_1} \cdots p_r^{\beta_r}$, where $u, v$ are units and the $p_i$ are pairwise non-associated irreducible elements, then
$$\operatorname{HCF}(a, b) = p_1^{\gamma_1} \cdots p_r^{\gamma_r}, \qquad \gamma_i = \min(\alpha_i, \beta_i).$$

Proof. Let $d$ be a common factor of $a$ and $b$. By the uniqueness of complete factorisation we know that (up to association) $d$ is a product of the $p_i$ for $i \in \{1, \cdots, r\}$. Without loss of generality we may therefore assume that $d = \prod_{i=1}^{r} p_i^{\delta_i}$. Again by the uniqueness of complete factorisation, $d$ is a common factor of $a$ and $b$ $\iff \delta_i \le \alpha_i$ and $\delta_i \le \beta_i$ for all $i$, i.e. $\iff \delta_i \le \gamma_i$ for all $i$. Therefore $\operatorname{HCF}(a, b) = p_1^{\gamma_1} \cdots p_r^{\gamma_r}$.

Proposition. In a UFD any two non-zero elements have an LCM. Moreover, with $a$ and $b$ as above,
$$\operatorname{LCM}(a, b) = p_1^{\gamma_1} \cdots p_r^{\gamma_r}, \qquad \gamma_i = \max(\alpha_i, \beta_i).$$

Proof. Exactly the same argument as above works in this case, observing that $d = \prod_{i=1}^{r} p_i^{\delta_i}$ is a common multiple of $a$ and $b$ if and only if $\delta_i \ge \alpha_i$ and $\delta_i \ge \beta_i$ for all $i \in \{1, \cdots, r\}$.
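In $\mathbb{Z}$ the min/max recipe is easy to run. The sketch below (our own illustration; trial division is used only because it is short, not because it is a sensible factoring method) computes HCF and LCM from complete factorisations exactly as in the two results above.

```python
from collections import Counter

def factorize(n):
    """Complete factorization of n > 0 by trial division: Counter {prime: exponent}."""
    exps, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            exps[p] += 1
            n //= p
        p += 1
    if n > 1:
        exps[n] += 1
    return exps

def hcf_lcm(a, b):
    fa, fb = factorize(a), factorize(b)
    hcf = lcm = 1
    for p in set(fa) | set(fb):
        hcf *= p ** min(fa[p], fb[p])   # gamma_i = min(alpha_i, beta_i)
        lcm *= p ** max(fa[p], fb[p])   # gamma_i = max(alpha_i, beta_i)
    return hcf, lcm

print(hcf_lcm(360, 84))   # (12, 2520)
```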
Remarks. If $a \in R$ is a unit then $\operatorname{HCF}(a, b) = 1$ and $\operatorname{LCM}(a, b) = b$ for all $b \in R \setminus \{0\}$.

Even if we know that $R$ is a UFD, there is no easy way to completely factor a given element; this is clearly apparent in $\mathbb{Z}$. Fortunately, for certain rings there is a faster way to determine the HCF of two elements.

Definition. If $R$ is an integral domain, $R$ is Euclidean if it admits a function $\varphi : R \setminus \{0\} \to \mathbb{N} \cup \{0\}$ such that
1. $\varphi(ab) \ge \varphi(a)$ for all $a, b \in R \setminus \{0\}$;
2. for any $a, b \in R$ with $b \neq 0$, there exist $q, r \in R$ such that $a = bq + r$, where either $r = 0$ or $\varphi(r) < \varphi(b)$.

Remarks. 1. This is intended to model the behaviour of the function $\varphi : \mathbb{Z} \setminus \{0\} \to \mathbb{N} \cup \{0\}$, $a \mapsto |a|$. The second property is just a generalisation of the remainder theorem on $\mathbb{Z}$; hence we see that $\mathbb{Z}$ is Euclidean. 2. We include $0$ in the codomain as this enlarges the collection of rings under consideration.

Lemma. The second axiom of a Euclidean ring is equivalent to the following:
(2'): for all $a, b \in R \setminus \{0\}$, if $\varphi(a) \ge \varphi(b)$ then $\exists c \in R$ such that either $a = bc$ or $\varphi(a - bc) < \varphi(a)$.

Proof. $(2) \Rightarrow (2')$: Suppose (2) holds and $\varphi(a) \ge \varphi(b)$. Then $\exists q, r$ such that $a = qb + r$, where either $r = 0$ or $\varphi(r) < \varphi(b)$. If $r = 0$ we are done with $c = q$. Otherwise $\varphi(a - qb) = \varphi(r) < \varphi(b) \le \varphi(a)$, so again we are done with $c = q$.

$(2') \Rightarrow (2)$: Given $a, b \in R \setminus \{0\}$, if $b \mid a$ we are done. Therefore assume $b \nmid a$; hence $a - bq \neq 0$ for all $q \in R$. Choose $q \in R$ such that $\varphi(a - bq)$ is minimal; note that by assumption $b \nmid (a - bq)$. If $\varphi(a - bq) \ge \varphi(b)$, then applying (2') to $a - bq$ and $b$ gives $c \in R$ with either $a - bq = bc$ (impossible, as then $b \mid a$) or $\varphi(a - bq - bc) < \varphi(a - bq)$, contradicting minimality. Therefore $\varphi(a - bq) < \varphi(b)$, i.e. setting $r = a - bq$ we have $a = bq + r$ with $\varphi(r) < \varphi(b)$, hence (2) holds.

Theorem. $F$ a field $\Rightarrow F[X]$ Euclidean.

Proof. Define
$$\varphi : F[X] \setminus \{0\} \to \mathbb{N} \cup \{0\}, \quad f \mapsto \deg(f).$$
Property (1): as $F$ is a field, $F[X]$ is an integral domain, so $\deg(fg) = \deg(f) + \deg(g) \ge \deg(f)$ for all $f, g \in F[X] \setminus \{0\}$, i.e. $\varphi(fg) \ge \varphi(f)$.

Property (2'): let $f = a_0 + a_1X + \cdots + a_nX^n$ and $g = b_0 + b_1X + \cdots + b_mX^m$, where $a_i, b_j \in F$, $n, m \in \mathbb{N} \cup \{0\}$, $a_n \neq 0$ and $b_m \neq 0$. Assume $\varphi(f) \ge \varphi(g)$, i.e. $n \ge m$, so $X^{n-m} \in F[X]$. Then $a_n b_m^{-1} X^{n-m} g$ has leading term $a_nX^n$, so $\deg(f - a_n b_m^{-1} X^{n-m} g) < \deg(f)$. Hence setting $c = a_n b_m^{-1} X^{n-m}$ we have either $f = cg$ or $\varphi(f - cg) = \deg(f - cg) < \deg(f) = \varphi(f)$. Therefore Property (2') is satisfied.

Remarks. Note that to get this proof to work we needed $b_m \neq 0$ to have an inverse; this critically relied on $F$ being a field. If we relax this condition we will not necessarily get a Euclidean domain. This shows that despite the fact that $\mathbb{Z}$ and $F[X]$ ($F$ a field) are very different rings, they share an important property. Euclidean domains have many pleasant properties.

Theorem. Let $R$ be Euclidean, with Euclidean function $\varphi$. Any two $a, b \in R \setminus \{0\}$ have an HCF. Moreover, it can be expressed in the form $\operatorname{HCF}(a, b) = au + bv$, where $u, v \in R$.

Proof. Without loss of generality assume that $\varphi(a) \ge \varphi(b)$. Apply property (2) to get $a = bq_1 + r_1$, where either $r_1 = 0$ or $\varphi(r_1) < \varphi(b)$. If $r_1 = 0$ then $\operatorname{HCF}(a, b) = b$ and we are done, setting $u = 0$ and $v = 1$. If not, apply property (2) again to get $b = q_2r_1 + r_2$, where either $r_2 = 0$ or $\varphi(r_2) < \varphi(r_1)$. If $r_2 = 0$, stop; if not, continue the algorithm, producing $r_{i-2} = q_ir_{i-1} + r_i$ at each stage. We claim that after a finite number of steps this process must terminate with the remainder reaching zero: we have a strictly decreasing sequence $\varphi(b) > \varphi(r_1) > \varphi(r_2) > \cdots$ in $\mathbb{N} \cup \{0\}$, hence it must have finite length and the algorithm terminates. Assume it terminates at the $n$th stage, i.e. $r_{n+1} = 0$.

We claim that $r_n$ can be written in the form $ua + vb$ for some $u, v \in R$; we prove it by induction. Setting $r_0 = b$, the result is true for $r_0$ and for $r_1 = a - bq_1$. Assume it is true for $r_{i-1}$ and $r_{i-2}$. By definition $r_i = r_{i-2} - q_ir_{i-1}$, hence the result is true for $r_i$. By induction we may therefore write $r_n$ in the form $ua + vb$.

Now we claim that $r_n$ divides both $a$ and $b$. By construction $r_n \mid r_{n-1}$ (as $r_{n+1} = 0$), and hence $r_n \mid r_{n-2}$. Inductively $r_n \mid r_i$ for all $i$; in particular $r_n \mid b$ and $r_n \mid r_1$, which together give $r_n \mid a$. Hence $r_n$ is a common divisor of both $a$ and $b$. Finally, let $d \in R$ with $d \mid a$ and $d \mid b$. Then $d \mid (ua + vb)$, i.e. $d \mid r_n$. Hence $\operatorname{HCF}(a, b) = r_n = ua + vb$.

Remarks. This procedure is known as the Euclidean Algorithm.
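Here is the algorithm as a short Python sketch for $R = \mathbb{Z}$ (our own illustration; the same structure works over any Euclidean domain if divmod is replaced by the appropriate division with remainder). It returns the HCF together with the coefficients $u, v$ produced by the back-substitution step of the proof.

```python
def extended_euclid(a, b):
    """Return (d, u, v) with d = HCF(a, b) and d = a*u + b*v."""
    if b == 0:
        return a, 1, 0
    q, r = divmod(a, b)              # a = b*q + r: the Euclidean property
    d, u, v = extended_euclid(b, r)  # d = b*u + r*v
    return d, v, u - q * v           # substitute r = a - b*q: d = a*v + b*(u - q*v)

d, u, v = extended_euclid(240, 46)
print(d, 240 * u + 46 * v)           # 2 2
```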
Corollary. Let $R$ be a Euclidean ring. Then any $a, b \in R \setminus \{0\}$ have an LCM.

Proof. By the above, $\operatorname{HCF}(a, b) = au + bv$ for some $u, v \in R$. Define $m = \frac{ab}{\operatorname{HCF}(a, b)}$; note that this makes sense as $\operatorname{HCF}(a, b) \mid a$. It is clear that $a \mid m$ and $b \mid m$. Let $m'$ be a common multiple, i.e. $a \mid m'$ and $b \mid m'$. Then $ab \mid bm'$ and $ab \mid am'$, so $ab \mid (aum' + bvm') = (au + bv)m' = \operatorname{HCF}(a, b)\,m'$. Since $ab = \operatorname{HCF}(a, b)\,m$, this gives $\operatorname{HCF}(a, b)\,m \mid \operatorname{HCF}(a, b)\,m'$. Because $a$ and $b$ are non-zero, $\operatorname{HCF}(a, b)$ is non-zero, and because $R$ is an integral domain we can cancel, giving $m \mid m'$. Therefore $m$ is an LCM of $a$ and $b$.

It is worth mentioning that as of yet we have only shown that Euclidean rings admit HCFs and LCMs. We do not yet know whether they are UFDs.

4.10 Principal Ideal Domains

Definition. Let $R$ be an integral domain. We say that $R$ is a principal ideal domain if every ideal of $R$ is principal. More precisely, if $I \subset R$ is an ideal then there exists $a \in I$ such that $I = (a)$. We write PID for short.

Theorem. $R$ Euclidean $\Rightarrow R$ a PID.

Proof. Let $I \subset R$ be an ideal. If $I$ is the zero ideal then $I = (0)$. Assume that $I$ is not the zero ideal and choose $a \in I \setminus \{0\}$ such that $\varphi(a) \le \varphi(b)$ for all $b \in I \setminus \{0\}$. We aim to prove that $I = (a)$. Assume this is not the case; then there exists $r \in I$ with $r \notin (a)$, i.e. $a$ does not divide $r$. By the Euclidean property there exist $q, s \in R$ such that $r = qa + s$ with $s \neq 0$ and $\varphi(s) < \varphi(a)$. However, $s = r - qa \in I$, which contradicts the minimality of $\varphi(a)$. Thus no such $r$ exists and $I = (a)$.

Definition. Let $R$ be an integral domain and $I_1, I_2, I_3, \cdots$ a sequence of ideals with $I_1 \subset I_2 \subset I_3 \subset \cdots$. We call this an ascending chain of ideals. We say that it is stationary if there exists some $n \in \mathbb{N}$ such that $I_m = I_n$ for all $m \ge n$.

Theorem. If $R$ is a PID then every ascending chain of ideals is stationary.

Proof. Let $I_1 \subset I_2 \subset I_3 \subset \cdots$ be an ascending chain of ideals in $R$ and let $I$ be the union of all the $I_i$. We claim that $I$ is an ideal. Observe that $0 \in I$, as it is contained in each $I_i$. Similarly, $r \in I \Rightarrow r \in I_i$ for some $i \Rightarrow -r \in I_i \Rightarrow -r \in I$. Let $r, s \in I$; then $r \in I_i$ and $s \in I_j$ for some $i, j$. Without loss of generality assume $i \le j$; then $r, s \in I_j$, so $r + s \in I_j \subset I$. Hence $I$ is a subgroup under addition. If $r \in I$ then $r \in I_i$ for some $i$, so given any $a \in R$, $ar \in I_i \subset I$. We deduce that $I$ is an ideal.

Because $R$ is a PID there exists $b \in I$ such that $I = (b)$. This means that $b \in I_n$ for some $n$, hence $(b) \subset I_n$. We therefore have $I \subset I_n$ and $I_n \subset I$, implying $I_n = I$, and thus $I_m = I_n$ for all $m \ge n$.

Theorem. If $R$ is a PID then every non-zero non-unit can be factored into irreducible elements.

Proof. We begin by showing that every non-zero non-unit admits an irreducible factor. Let $a \in R$ be a non-zero non-unit. If $a$ is irreducible we are done. Assume therefore that $a = b_1a_1$, where $b_1$ and $a_1$ are non-units. This implies $(a) \subset (a_1)$, and because $b_1$ is a non-unit, $a$ and $a_1$ are not associated (by the cancellation law), so the inclusion is strict. If $a_1$ is irreducible we are done. If not, we repeat the process with $a_1$, getting a factorisation $a_1 = b_2a_2$ with $b_2$ and $a_2$ non-units, and again a strict inclusion $(a_1) \subset (a_2)$. If $a_2$ is irreducible we are done; if not we repeat. This builds an ascending chain of ideals, and because $R$ is a PID the chain must be stationary, which can only happen if we eventually reach an irreducible factor. We deduce that $a$ admits an irreducible factor.

Now we show that $a$ is the product of finitely many irreducible elements of $R$. If $a$ is not itself irreducible then by the above we can write $a = p_1c_1$, where $p_1$ is irreducible and $c_1$ is not a unit. Thus $(a)$ is strictly contained in $(c_1)$. If $c_1$ is irreducible we are done. If not, then $c_1 = p_2c_2$, where $p_2$ is irreducible and $c_2$ is not a unit. We build a strictly ascending chain of ideals
$$(a) \subset (c_1) \subset (c_2) \subset \cdots$$
Because $R$ is a PID this chain is stationary, which means eventually some $c_r$ must be irreducible. Hence $a = p_1p_2 \cdots p_rc_r$, a product of irreducibles.
Let us now introduce another natural generalisation of prime numbers to an arbitrary integral domain.

Definition. Let $R$ be an integral domain. We say that $p \in R$ is a prime element if
1. $p \notin R^*$ and $p \neq 0$;
2. for all $a, b \in R$, $p \mid ab \Rightarrow p \mid a$ or $p \mid b$.

Remarks. 1. In $\mathbb{Z}$ the prime elements are the prime numbers and their negatives. 2. All elements associated to a prime are themselves prime.

Proposition. Let $R$ be an integral domain and $p \in R$. Then $p$ prime $\Rightarrow p$ irreducible.

Proof. Let $p \in R$ be prime and $p = ab$ for some $a, b \in R$. Then $p \mid a$ or $p \mid b$; say $p \mid a$, so $a = pc = abc$ for some $c \in R$. Note that $a \neq 0$ (since $p \neq 0$), therefore by the cancellation law $1 = bc$, so $b$ is a unit. Hence $p$ is irreducible.

We shall see that for a general integral domain the converse does not always hold. However, in the case of PIDs we have the following:

Theorem. Let $R$ be a PID and $p \in R$. Then $p$ irreducible $\iff p$ prime.

Proof. By the previous proposition we only need to show that $p$ irreducible $\Rightarrow p$ prime. We begin by showing that if $p$ is irreducible then $(p) \subset R$ is maximal. First observe that $p$ is not a unit, hence $(p)$ is a proper ideal of $R$. Assume now that there exists a proper ideal $I \subset R$ with $(p) \subset I$. Because $R$ is a PID, there exists $a \in I$ such that $(a) = I$; note that $a$ is not a unit. From $(p) \subset (a)$ we deduce that $p = ab$ for some $b \in R$. Because $p$ is irreducible, $b$ must be a unit, so $p$ and $a$ are associated, implying $I = (a) = (p)$. We deduce that $(p)$ is maximal.

It follows that $R/(p)$ is a field, hence an integral domain, so $(p)$ is a prime ideal. Now assume that $p \mid ab$ for some $a, b \in R$. Then $ab \in (p)$, and because $(p)$ is prime, either $a \in (p)$ or $b \in (p)$, i.e. $p \mid a$ or $p \mid b$ respectively. Hence $p$ is prime.

Theorem. An integral domain $R$ is a UFD if and only if
1. every $a \in R$ with $a \neq 0$ and $a \notin R^*$ can be factored into irreducibles (has a complete factorisation);
2. every irreducible element is prime.

Proof. First suppose $R$ is a UFD. Then (1) holds by definition. Suppose $p_1 \in R$ is irreducible, and suppose $a, b \in R$ with $p_1 \mid ab$. If $a = 0$ then $p_1 \mid a$ trivially, so we may assume $a, b \neq 0$. Since $R$ is a UFD we can factor
$$a = u p_1^{\alpha_1} \cdots p_r^{\alpha_r}, \qquad b = v p_1^{\beta_1} \cdots p_r^{\beta_r},$$
where $u, v$ are units, $\alpha_i, \beta_i \in \mathbb{N} \cup \{0\}$, and the $p_i$ are pairwise non-associated irreducible elements. It follows that $ab = uv\, p_1^{\alpha_1 + \beta_1} \cdots p_r^{\alpha_r + \beta_r}$. If $p_1 \mid ab$ then, by the uniqueness of factorisation present in a UFD, this forces $\alpha_1 + \beta_1 > 0$, so $\alpha_1 > 0$ or $\beta_1 > 0$, i.e. $p_1 \mid a$ or $p_1 \mid b$. Therefore $p_1$ is prime.

Conversely, suppose $R$ is an integral domain in which (1) and (2) hold. We need to show that every non-zero non-unit has a factorisation into irreducibles which is unique up to association. Let $c \in R$ with $c \neq 0$ and $c \notin R^*$. By (1) we can factor $c$ into irreducibles, so consider two such factorisations
$$c = a_1 \cdots a_r = b_1 \cdots b_s.$$
We must show $r = s$ and that each $b_i$ is associated to $a_i$ after renumbering. We use induction on $r$. If $r = 1$ then $a_1 = b_1 \cdots b_s$, so $b_1 \mid a_1$, and since $a_1$ is irreducible and $b_1$ a non-unit, $a_1 = b_1u$ with $u \in R^*$. If $s > 1$ we cancel to get $u = b_2 \cdots b_s$, so $b_2 \in R^*$, a contradiction since $b_2$ is irreducible by assumption. Therefore $s = 1$ and we are done.

Let $r > 1$. By hypothesis (2), $a_1$ is prime, and $a_1 \mid b_1 \cdots b_s$ implies $a_1 \mid b_j$ for some $j$; WLOG assume $j = 1$. As $b_1$ is irreducible and $b_1 = a_1u$ with $a_1$ a non-unit, $u \in R^*$, so $b_1u^{-1} = a_1$ and $a_1, b_1$ are associated. By the cancellation property we get
$$(u^{-1}a_2)a_3 \cdots a_r = b_2 \cdots b_s.$$
$u^{-1}a_2$ is irreducible, hence this gives two complete factorisations of the same element. By induction $r - 1 = s - 1$, so $r = s$, and we can renumber so that $a_i$ is associated to $b_i$ for all $i \in \{2, \cdots, r\}$. We have just seen this holds for $i = 1$ as well; hence $R$ is a UFD.
Theorem. Every PID is a UFD.

Proof. In a PID every non-zero non-unit can be factored into irreducibles. In addition, every irreducible is prime. Thus a PID is a UFD.

Theorem. Every Euclidean ring is a UFD.

Proof. Every Euclidean ring is a PID, and every PID is a UFD. Hence every Euclidean ring is a UFD.

4.11 Factorisation in Polynomial Rings

Theorem. For any field $F$, the polynomial ring $F[X]$ is a UFD.

Proof. $F$ field $\Rightarrow F[X]$ Euclidean $\Rightarrow F[X]$ is a UFD.

From now on fix a field $F$. Let us return to trying to understand factorisation in the polynomial ring $F[X]$. Our first task is to determine the irreducible elements of $F[X]$.

Proposition. $F[X]^* = F^*$, where we view $F \subset F[X]$ as the degree zero polynomials (the constant polynomials).

Proof. The unit element in $F[X]$ is $1 \in F \subset F[X]$. If $f \in F[X]$ and $\deg(f) > 0$ then $\deg(fg) > 0$ for all $g \in F[X] \setminus \{0\}$. Thus all invertible elements of $F[X]$ must have degree zero, i.e. be constant polynomials. Because $F$ is a field we deduce that $F[X]^* = F^*$.

Definition. We call $f \in F[X]$ with $\deg(f) = 1$ linear.

Clearly every linear polynomial must be irreducible, for reasons of degree. Here is a partial converse:

Theorem. Given a field $F$, the only irreducible elements of $F[X]$ are linear if and only if every positive degree polynomial has a zero (or root) in $F$.

Proof. ($\Rightarrow$) Assume every irreducible in $F[X]$ is linear. Take $f \in F[X]$ with $\deg(f) > 0$. As $F[X]$ is a UFD ($F$ being a field), we can factor $f$ into linear factors. Choose $aX + b \in F[X]$ to be one such factor, $a \neq 0$. Then $x = -\frac{b}{a}$ is a root of $f$.

($\Leftarrow$) Suppose every positive degree polynomial has a root in $F$, and take $p \in F[X]$ irreducible with $\deg(p) > 0$. By our assumption, there must exist $\alpha \in F$ such that $p(\alpha) = 0$. Since $F$ is a field, $F[X]$ is Euclidean, and we claim $(X - \alpha) \mid p$. To see why, apply property (2) of the Euclidean degree function: if $(X - \alpha)$ did not divide $p$ then there would exist $q, r \in F[X]$ such that $p = q(X - \alpha) + r$ with $r \neq 0$ and $\deg(r) < \deg(X - \alpha) = 1$, so $r$ would be a non-zero constant; evaluating at $\alpha$ would give $p(\alpha) = r \neq 0$, a contradiction. Hence $(X - \alpha) \mid p$: there exists $c \in F[X]$ with $p = (X - \alpha)c$. But since $p$ is irreducible, $c$ must be a unit, i.e. $c \in F^*$. Thus $p$ is linear.
Note that this is a property of the field $F$, and it is not always true. For example, if $F = \mathbb{Q}$ then $X^2 + 1$ does not have a root in $\mathbb{Q}$ and consequently cannot be reducible (a factorisation of a quadratic would have to be into linear factors). Don't let this example mislead you: there are reducible polynomials in $\mathbb{Q}[X]$ which do not have a root in $\mathbb{Q}$, for example $(X^2 + 1)(X^2 + 1)$.

Definition. Given a field $F$, we call $F$ algebraically closed if every $f \in F[X]$ with $\deg(f) > 0$ has a root in $F$.

Remarks. 1. By the above theorem, $F$ algebraically closed $\iff$ any $f \in F[X]$ with $f \neq 0$, $f \notin F[X]^*$ can be factored into linear terms. 2. The Fundamental Theorem of Algebra says that $\mathbb{C}$ is algebraically closed. $\mathbb{C}$ is defined analytically, so it is unsurprising that all proofs rely on some form of analysis. $\mathbb{R}$ and $\mathbb{Q}$ are not algebraically closed, as the above example demonstrates. Gauss gave his first proof of this fact in his PhD thesis, and eventually gave four proofs!

It is important to realise how miraculous it is that $\mathbb{C}$ is algebraically closed: $\mathbb{C}$ is formed from $\mathbb{R}$ by jumping only one dimension. We'll see later that this almost never occurs in general.

Fact: every field can be embedded in an algebraically closed field. For example, both $\mathbb{Q}$ and $\mathbb{R}$ naturally embed in $\mathbb{C}$. This tells us that something analogous is true even for more exotic fields like $\mathbb{F}_p$.

Proposition. If $f \in \mathbb{R}[X]$ is irreducible then it is either linear or quadratic (degree 2).

Proof. Let $f \in \mathbb{R}[X]$ be irreducible. Note that we may naturally consider $f$ as being in $\mathbb{C}[X]$, hence we may factor $f$ as follows:
$$f = a \prod_i (X - \alpha_i), \qquad a \in \mathbb{C}^*, \ \alpha_i \in \mathbb{C} \ \forall i.$$
By the uniqueness of this factorisation we know that $a$ is unique and the $\alpha_i$ are unique up to reordering. Because $f \in \mathbb{R}[X]$, we also know that $a \in \mathbb{R}$ (it is the leading coefficient). Taking complex conjugates gives two linear factorisations:
$$f = a \prod_i (X - \alpha_i) = a \prod_i (X - \bar{\alpha}_i),$$
where $\bar{\alpha}_i$ denotes complex conjugation. Observe that two monic linear polynomials in $\mathbb{C}[X]$ are associated if and only if they are equal. Therefore, by uniqueness of irreducible factorisation, each $\alpha_i$ is either real or the roots occur in complex conjugate pairs. Note that for any $\alpha \in \mathbb{C}$, $(X - \alpha)(X - \bar{\alpha}) \in \mathbb{R}[X]$. Hence $f$ can be written as the product of linear and quadratic real polynomials, and being irreducible, $f$ is itself either linear or quadratic.

What about other fields? The most natural place to start is $F = \mathbb{Q}$. A naive belief would be that because $\mathbb{Q}$ is relatively simple, $\mathbb{Q}[X]$ is easy to understand. You could not be further from the truth. To see this, observe that we have linked the issue of factorisation in $\mathbb{Q}[X]$ to finding rational roots of positive degree polynomials. As you are no doubt aware, this second problem can be very difficult and subtle to understand. The point of departure for algebraic number theory (the algebraic study of $\mathbb{Q}$) is trying to determine the structure of $\mathbb{Q}[X]$.

Recall that $\mathbb{Q} = \operatorname{Frac}(\mathbb{Z})$, hence there is a natural inclusion $\mathbb{Z}[X] \subset \mathbb{Q}[X]$. Let us address the problem of factorisation in $\mathbb{Z}[X]$ first. The Fundamental Theorem of Arithmetic says that $\mathbb{Z}$ is a UFD; thus let $R$ be a UFD and consider $R[X]$. It is a fact that $R[X]$ is again a UFD - I'll get you to prove this in the homework.

Definition. $f \in R[X] \setminus \{0\}$ is primitive if $\deg(f) > 0$ and its coefficients do not have an irreducible common factor. E.g. $R = \mathbb{Z}$, $f = 5x^3 + 3x^2 + 10$.

Gauss' Lemma. Let $R$ be a UFD. The product of two primitive polynomials in $R[X]$ is again primitive.

Proof. Let $f, g \in R[X]$ be primitive, say $f = \sum a_ix^i$ and $g = \sum b_jx^j$ with $a_i, b_j \in R$. Because $R$ is an integral domain, so is $R[X]$; thus $fg \neq 0$. Assume that $fg$ is not primitive. Then there exist $\pi \in R$ irreducible and $h \in R[X]$ such that $fg = \pi h$. Because $f$ and $g$ are primitive, $\pi$ does not divide all the $a_i$, nor all the $b_j$. Choose $r$ and $s$ minimal such that $\pi \nmid a_r$ and $\pi \nmid b_s$, and write $h = \sum c_kx^k$. Comparing coefficients of $x^{r+s}$ in $fg = \pi h$,
$$\pi c_{r+s} = a_0b_{r+s} + \cdots + a_rb_s + \cdots + a_{r+s}b_0,$$
so
$$a_rb_s = \pi c_{r+s} - a_0b_{r+s} - \cdots - a_{r-1}b_{s+1} - a_{r+1}b_{s-1} - \cdots - a_{r+s}b_0.$$
By the minimality of $r$ and $s$, $\pi$ divides every term on the right (each contains some $a_i$ with $i < r$ or some $b_j$ with $j < s$). Hence $\pi$ divides $a_rb_s$. But $R$ is a UFD, which implies that $\pi$ is prime, so $\pi$ must divide either $a_r$ or $b_s$. This is a contradiction. Hence $fg$ is primitive.

This is a fantastic proof - it's got Gauss written all over it! It has some profound consequences, as we'll see in a moment.
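A quick illustration in Python (our own sketch, with invented helper names): multiplying two primitive integer polynomials and checking that the product is again primitive, as Gauss' Lemma predicts.

```python
from math import gcd
from functools import reduce

def is_primitive(f):
    """f over Z, coefficients listed low degree first: no common irreducible factor."""
    return reduce(gcd, f) == 1

def poly_mul(f, g):
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

f = [10, 0, 3, 5]   # 5x^3 + 3x^2 + 10, primitive
g = [4, 0, 9]       # 9x^2 + 4, primitive
print(is_primitive(f), is_primitive(g), is_primitive(poly_mul(f, g)))   # True True True
```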
Definition. Let $R$ be a UFD and $f \in R[X] \setminus \{0\}$. The content of $f$ is the HCF of its coefficients; equivalently, if $f = \sigma g$ where $\sigma \in R$ and $g$ is primitive, then $\sigma$ is the content of $f$. E.g. for $R = \mathbb{Z}$ and $f = 9x^3 + 3x + 18$, the content of $f$ is $3$.

Observe that because $R$ is a UFD, the content of $f \in R[X] \setminus \{0\}$ always exists. Also observe that the content is only unique up to association.

Proposition. Let $R$ be a UFD. Suppose $f, g \in R[X] \setminus \{0\}$ have contents $\alpha, \beta \in R$ respectively. Then the content of $fg$ is $\alpha\beta$.

Proof. $f = \alpha f_1$ and $g = \beta g_1$ with $f_1, g_1$ primitive, so $fg = (\alpha\beta)f_1g_1$. By Gauss' Lemma, $f_1g_1$ is also primitive, so $\alpha\beta$ is the content of $fg$.

The following theorem illustrates the real meaning of Gauss' Lemma.

Theorem. Let $R$ be a UFD and $F = \operatorname{Frac}(R)$. Choose $f \in R[X] \subset F[X]$. Then $f$ irreducible in $R[X]$ $\Rightarrow$ $f$ irreducible in $F[X]$.

Proof. We prove the contrapositive. Assume $f \in R[X]$ can be factored into non-units in $F[X]$: $f = gh$ for some $g, h \in F[X]$ with $\deg(g), \deg(h) > 0$. Clearing denominators and pulling out the content, we obtain $\alpha f = \beta g_1h_1$, where $\alpha, \beta \in R$ and $g_1, h_1 \in R[X]$ are primitive. Let $\gamma$ be the content of $f$, i.e. $f = \gamma f_1$ with $f_1$ primitive. Because the content is well defined up to association, comparing contents of both sides gives $\alpha\gamma = \beta$ (perhaps after changing $\gamma$ by a unit). Therefore $\alpha f = \alpha\gamma g_1h_1$, so $f = \gamma g_1h_1$. Observe that $\deg(g) = \deg(g_1)$ and $\deg(h) = \deg(h_1)$, and also that, just as for a field, $R[X]^* = R^*$; thus $g_1, h_1 \in R[X]$ are not units. Thus $f$ is reducible in $R[X]$.

We should note that in general the converse is not true. For example, $3(x - 2)$ is reducible in $\mathbb{Z}[X]$ but irreducible in $\mathbb{Q}[X]$. This is because $3 \notin \mathbb{Z}[X]^*$, but $3 \in \mathbb{Q}[X]^*$.

This theorem has a surprising consequence:

Corollary. Let $f = a_0 + a_1x + \cdots + a_nx^n \in \mathbb{Z}[X]$ have a rational zero $\frac{\alpha}{\beta}$, where $\alpha$ and $\beta$ are coprime integers. Then $\beta \mid a_n$ and, if $\alpha \neq 0$, $\alpha \mid a_0$. In particular, if $a_n = 1$, all rational zeros are integral.

Proof. $f(\frac{\alpha}{\beta}) = 0 \Rightarrow (X - \frac{\alpha}{\beta}) \mid f$ in $\mathbb{Q}[X]$, i.e. $\exists g \in \mathbb{Q}[X]$ such that $f = (X - \frac{\alpha}{\beta})g$. Observe that $\beta X - \alpha$ is primitive; hence, by the proof of the theorem (clear denominators and compare contents), $(\beta X - \alpha) \mid f$ in $\mathbb{Z}[X]$. Comparing leading and constant coefficients gives $\beta \mid a_n$ and, if $\alpha \neq 0$, $\alpha \mid a_0$. Hence if $a_n = 1$ then $\beta = \pm 1$, so $\frac{\alpha}{\beta} \in \mathbb{Z}$: all rational zeros of a monic polynomial with integer coefficients are integers.

This is kind of amazing. It's not at all obvious from the definitions.
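The corollary turns root-finding over $\mathbb{Q}$ into a finite search. Here is a small Python sketch of that search (our own illustration; the helper names are invented):

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational zeros of a0 + a1*x + ... + an*x^n (coeffs low degree first, a0, an != 0).
    By the corollary, any zero alpha/beta in lowest terms has alpha | a0 and beta | an."""
    a0, an = coeffs[0], coeffs[-1]
    candidates = {Fraction(s * alpha, beta)
                  for alpha in divisors(a0) for beta in divisors(an) for s in (1, -1)}
    return sorted(c for c in candidates
                  if sum(a * c**k for k, a in enumerate(coeffs)) == 0)

print(rational_roots([-6, 1, 1]))           # x^2 + x - 6: the zeros -3 and 2
print(rational_roots([3, -9, 0, 0, 0, 1]))  # x^5 - 9x + 3: [] (no rational zeros)
```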
Theorem (Eisenstein's Criterion). Let $f = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n \in \mathbb{Z}[X] \setminus \{0\}$. If there is a prime number $p \in \mathbb{N}$ such that
1. $p \nmid a_n$;
2. $p \mid a_i$ for all $0 \le i < n$;
3. $p^2 \nmid a_0$;
then $f$ is irreducible over $\mathbb{Q}$.

Proof. By the above, $f$ reducible over $\mathbb{Q}$ $\Rightarrow$ $f$ reducible over $\mathbb{Z}$. Suppose that $f$ satisfies conditions (1), (2) and (3) but is reducible over $\mathbb{Q}$, and hence over $\mathbb{Z}$: by the proof of the above theorem there exist $g, h \in \mathbb{Z}[X]$ such that $\deg(g), \deg(h) > 0$ and $f = gh$. Let us write
$$g = b_0 + b_1x + \cdots + b_rx^r, \qquad h = c_0 + c_1x + \cdots + c_sx^s,$$
where $r + s = n = \deg(f)$ and $r, s > 0$. We have $a_0 = b_0c_0$. Because $p \mid a_0$ and $p^2 \nmid a_0$, $p$ divides exactly one of $b_0$ and $c_0$; without loss of generality assume $p \mid b_0$ and $p \nmid c_0$. Furthermore, $b_rc_s = a_n$ is not divisible by $p$, so $p \nmid b_r$ and $p \nmid c_s$. Hence the first coefficient of $g$ is divisible by $p$ but not the last. Let $i \in \{1, \cdots, r\}$ be minimal such that $p \nmid b_i$; observe that $i \le r < n$. Note that
$$a_i = b_ic_0 + b_{i-1}c_1 + \cdots + b_0c_i \ \Rightarrow\ b_ic_0 = a_i - b_{i-1}c_1 - \cdots - b_0c_i.$$
But $p \mid a_i$ by (2), and $p \mid b_{i-j}c_j$ for all $j \in \{1, \cdots, i\}$ by minimality, so $p \mid b_ic_0$, hence $p \mid b_i$ or $p \mid c_0$, which is a contradiction. Hence $f$ is irreducible in $\mathbb{Q}[X]$.

Corollary. There are irreducible polynomials of arbitrary degree in $\mathbb{Q}[X]$.

Proof. Let $p \in \mathbb{N}$ be a prime. Let $n \in \mathbb{N}$ and define $f = p + px + px^2 + \cdots + px^{n-1} + x^n \in \mathbb{Q}[X]$. By Eisenstein's Criterion, $f$ is irreducible of degree $n$.

Remarks. 1. Eisenstein's Criterion works (with the same proof) for any UFD and its field of fractions. 2. Remember that irreducible polynomials in $\mathbb{R}[X]$ or $\mathbb{C}[X]$ are of degree 2 or less; $\mathbb{Q}[X]$ is very different. 3. Here's a useful analogy from chemistry. Let $F$ be a field. One should think about $f \in F[X] \setminus \{0\}$, $f \notin F[X]^*$ (up to association) as a molecule, and about the irreducible such $f$ (up to association) as atoms. The fact that $F[X]$ is a UFD says that every molecule is constructed from a unique finite collection of atoms. Trying to determine the irreducible elements of $F[X]$ is the same as trying to construct the periodic table. So for every $F$ we have an equivalent of a periodic table, and how complicated it is depends on $F$. $F$ being algebraically closed says that the atoms are indexed by elements of $F$, i.e. every irreducible is associated to one of the form $(x - \alpha)$ for a unique $\alpha \in F$; hence for algebraically closed fields the periodic table is very easy. The further $F$ is from being algebraically closed, the more complicated it becomes. For $\mathbb{Q}$ the periodic table is bewilderingly complicated: the atoms can have an enormous internal complexity. There is far more depth to $\mathbb{Q}$ than meets the eye!

Let's now study the zeros of polynomials over a field.

Theorem. Let $F$ be a field and let $f \in F[X] \setminus \{0\}$ have distinct roots $\alpha_1, \cdots, \alpha_n \in F$. Then $(x - \alpha_1) \cdots (x - \alpha_n) \mid f$.

Proof. We have already proven that $f(\alpha_i) = 0 \Rightarrow (x - \alpha_i) \mid f$. Recall that for $\alpha, \beta \in F$, $(x - \alpha)$ and $(x - \beta)$ are associated if and only if $\alpha = \beta$. As $\alpha_i \neq \alpha_j$ for all $i \neq j$, the $x - \alpha_i$ are pairwise non-associated irreducible factors of $f$. Since $F[X]$ is a UFD, $(x - \alpha_1) \cdots (x - \alpha_n) \mid f$.

Corollary. Let $F$ be a field and $f \in F[X]$ a polynomial of degree $n \in \mathbb{N}$. The number of distinct roots of $f$ in $F$ is at most $n$.

Proof. Assume $\deg(f) = n$ and that $\{\alpha_1, \cdots, \alpha_{n+1}\} \subset F$ are $n + 1$ distinct roots of $f$ in $F$. By the theorem, $g = (x - \alpha_1) \cdots (x - \alpha_{n+1})$ divides $f$. By the first Euclidean property of the degree function this implies $\deg(f) \ge \deg(g) = n + 1$. This is a contradiction. Hence the number of distinct zeros of $f$ in $F$ cannot exceed $n$.

Corollary. If $F$ is a field and $f, g \in F[X]$ with $\deg(f), \deg(g) \le n$ agree on at least $n + 1$ values of $F$, then $f = g$.

Proof. $f - g \in F[X]$ is a polynomial of degree at most $n$; by assumption it has $n + 1$ roots in $F$. Hence it is the zero polynomial.

Corollary. Let $F$ be an infinite field and $f, g \in F[X]$ such that $f(a) = g(a)$ for all $a \in F$. Then $f = g$.

Proof. Immediate from the preceding corollary.

Remarks. This is not true if $F$ is finite! For example, I'll get you to show that over $\mathbb{F}_p$ the polynomial $x^p - x$ is zero for every value of $\mathbb{F}_p$. This is why thinking about polynomials as functions is a bad plan.

Theorem. Let $F$ be an infinite field and $f \in F[X_1, \cdots, X_n]$. If $f(\alpha_1, \cdots, \alpha_n) = 0$ for all $\alpha_i \in F$, then $f = 0$.

Proof. We'll use induction on $n$; the previous corollary says that the result is true for $n = 1$. Let $n > 1$ and write $f$ as a polynomial in $X_1$ with coefficients in $F[X_2, \cdots, X_n]$:
$$f = a_0 + a_1X_1 + \cdots + a_kX_1^k, \qquad a_i = a_i(X_2, \cdots, X_n).$$
Fix $\alpha_2, \cdots, \alpha_n \in F$. Then $f(X_1, \alpha_2, \cdots, \alpha_n)$ vanishes for all values of $F$. By the preceding corollary we deduce that $a_i(\alpha_2, \cdots, \alpha_n) = 0$ for all $i$. But the $\alpha_j$ were arbitrary, hence by the induction hypothesis $a_i = 0$ for all $i$. Hence $f = 0$.
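Before moving on, note that Eisenstein's criterion from above is entirely mechanical to check. A short Python sketch of our own (coefficients listed from $a_0$ up to $a_n$):

```python
def eisenstein(coeffs, p):
    """Check Eisenstein's criterion at the prime p for a0 + a1*x + ... + an*x^n."""
    a0, an = coeffs[0], coeffs[-1]
    return (an % p != 0                                # (1) p does not divide a_n
            and all(a % p == 0 for a in coeffs[:-1])   # (2) p divides a_0, ..., a_{n-1}
            and a0 % (p * p) != 0)                     # (3) p^2 does not divide a_0

print(eisenstein([3, -9, 0, 0, 0, 1], 3))   # True: x^5 - 9x + 3 is irreducible over Q
print(eisenstein([1, 0, 1], 2))             # False: the criterion does not apply to x^2 + 1
```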
5 Field Extensions and Galois Theory

5.1 Field Extensions and Minimal Polynomials

Definition. Let $E$ be a field and $F \subset E$ a subfield, i.e. a subring which is a field. Then we call $E$ an extension of $F$, and we write $E/F$.

Let $F$ be a field. Recall that a vector space $V$ over $F$ is an abelian group with a good concept of scalar multiplication by $F$. If we have an extension of fields $E/F$ then we may naturally regard $E$ as a vector space over $F$: there is a natural concept of scalar multiplication on $E$ by $F$, and the vector space axioms are automatically satisfied by the ring axioms for $E$. If you've only ever seen vector spaces over $\mathbb{R}$ or $\mathbb{C}$, don't worry - all of the theory is identical. A trivial observation is that $E$ is a vector space over itself.

Definition. Let $E/F$ be a field extension. We say that $E/F$ is finite if $E$ is a finite dimensional vector space over $F$, i.e. there is a finite spanning set for $E$ over $F$. If $E/F$ is finite then we call the dimension of $E$ over $F$ the degree of the extension, written $[E : F]$.

Concretely this means that we may find a finite subset $\{x_1, \cdots, x_n\} \subset E$ such that $E = \{\lambda_1x_1 + \cdots + \lambda_nx_n \mid \lambda_i \in F\}$. Hence if $[E : F] = n$, then as an $F$-vector space $E \cong F^n$.

We should be careful with this definition for the following reason: if $E/F$ is a finite extension then $(F, +) \subset (E, +)$ is a subgroup as an abelian group, but it is not necessarily true that $(F, +)$ is of finite index in $(E, +)$. I'll get you to prove this in the homework.

Let $E/F$ be an extension of finite fields. Trivially we can see that the extension is finite. Hence if $[E : F] = n$, then as an $F$-vector space $E \cong F^n$, so $|E| = |F|^n$, i.e. $|E| = |F|^{[E:F]}$. From this observation we deduce:

Theorem. Let $E$ be a finite field of characteristic $p \in \mathbb{N}$. Then $|E| = p^n$ for some $n \in \mathbb{N}$.

Proof. $\operatorname{char}(E) = p \Rightarrow \mathbb{F}_p \subset E$. Hence $E/\mathbb{F}_p$ is a finite extension, and $|E| = |\mathbb{F}_p|^{[E:\mathbb{F}_p]} = p^{[E:\mathbb{F}_p]}$.

Definition. Let $E/F$ be a field extension and $\alpha \in E$. We say that $\alpha$ is algebraic over $F$ if there exists a non-zero $f \in F[X]$ such that $f(\alpha) = 0$. If every $\alpha \in E$ is algebraic over $F$ we say that the extension $E/F$ is algebraic. If $\alpha$ is not algebraic then we say that it is transcendental. E.g. over $\mathbb{Q}$, $\sqrt{2}$ is algebraic, whereas $\pi$ is transcendental.

Proposition. Let $E/F$ be a finite field extension. Then $E/F$ is algebraic.

Proof. Let $\alpha \in E$ and assume that $[E : F] = n$. Any subset of $E$ of cardinality greater than $n$ must be linearly dependent over $F$; thus $\{1, \alpha, \cdots, \alpha^n\} \subset E$ ($n + 1$ elements) must be linearly dependent over $F$. Hence there exist $b_0, \cdots, b_n \in F$, not all zero, such that $b_0 + b_1\alpha + \cdots + b_n\alpha^n = 0$. Let $f = b_0 + b_1x + \cdots + b_nx^n \in F[X]$. By construction $f \neq 0$ and $f(\alpha) = 0$, so $\alpha$ is algebraic over $F$.

The converse is not true; I'll give you an example in the homework.

Definition. Let $E/F$ be a field extension and $\alpha \in E$ algebraic over $F$. The monic polynomial $f \in F[X]$ of minimal degree such that $f(\alpha) = 0$ is called the minimal polynomial of $\alpha$ (over $F$).

Proposition. Let $E/F$ be a field extension and $\alpha \in E$ algebraic over $F$. The minimal polynomial of $\alpha$ (over $F$) is irreducible (in $F[X]$).

Proof. Let $f \in F[X]$ be the minimal polynomial of $\alpha$. Recall that $f$ is reducible if and only if we can find $g, h \in F[X]$ such that $f = gh$ and $\deg(g), \deg(h) < \deg(f)$. However, if such a factorisation existed, we would have $f(\alpha) = g(\alpha)h(\alpha) = 0$. But $E$ is a field and is thus an integral domain, so either $g(\alpha) = 0$ or $h(\alpha) = 0$. This contradicts the minimality of $\deg(f)$.
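For $F = \mathbb{Q}$, minimal polynomials can be computed in practice. A quick illustration, assuming SymPy and its minimal_polynomial routine are available (this aside is ours, not part of the notes):

```python
from sympy import sqrt, minimal_polynomial
from sympy.abc import x

print(minimal_polynomial(sqrt(2), x))            # x**2 - 2
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1
```

Both outputs are monic and irreducible over $\mathbb{Q}$, as the proposition demands.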
Corollary. Let $E/F$ be a field extension and $\alpha \in E$ algebraic over $F$. The minimal polynomial of $\alpha$ (over $F$) is unique.

Proof. Let $g, f \in F[X]$ both be monic of minimal degree such that $f(\alpha) = g(\alpha) = 0$. Recall that monic polynomials in $F[X]$ are associated if and only if they are equal. Thus if $f \neq g$, then by the unique factorisation property of $F[X]$ (they are non-associated irreducibles) they are coprime: $\operatorname{HCF}(f, g) = 1$. By the Euclidean property of $F[X]$ there would then exist $u, v \in F[X]$ such that $fu + gv = 1$. But this would imply $f(\alpha)u(\alpha) + g(\alpha)v(\alpha) = 1$, while the left hand side equals $0$, which is a contradiction because $E$ is a field and so by definition non-trivial. Thus $f = g$.

Corollary. Let $E/F$ be a field extension and $\alpha \in E$ algebraic over $F$. Then $\alpha$ is the root of a unique irreducible monic polynomial in $F[X]$.

Proof. The above two results show that the minimal polynomial $f$ of $\alpha$ (over $F$) is irreducible and unique. If $h$ were any other monic irreducible polynomial with $h(\alpha) = 0$, then $f$ and $h$ would be non-associated irreducibles, hence coprime, and the argument in the preceding proof would again give a contradiction. Thus $f$ is the only monic irreducible polynomial with $\alpha$ as a root.

Definition. Let $E/F$ be a field extension and $\alpha \in E$ (not necessarily algebraic over $F$). We define the subfield generated by $\alpha$ to be the minimal subfield of $E$ containing $F$ and $\alpha$. We denote this subfield by $F(\alpha)$.

Proposition. Let $E/F$ be a field extension and $\alpha \in E$ algebraic over $F$. Let $F[\alpha] := \{f(\alpha) \mid f \in F[X]\} \subset E$. Then $F[\alpha] = F(\alpha)$. Moreover, the degree of $F(\alpha)$ over $F$ equals the degree of the minimal polynomial of $\alpha$ over $F$.

Proof. We should first observe that $F[\alpha] \subset E$ is the minimal subring of $E$ containing $F$ and $\alpha$: it is clearly closed under addition and multiplication because $g(\alpha)h(\alpha) = (gh)(\alpha)$ and $g(\alpha) + h(\alpha) = (g + h)(\alpha)$ for all $g, h \in F[X]$. We need to show, therefore, that it is actually a subfield. Note that $F[\alpha]$ is an $F$-vector space. Let $f = x^n + \sum_{i=0}^{n-1} b_ix^i \in F[X]$ be the minimal polynomial of $\alpha$. We claim that the subset $\{1, \alpha, \cdots, \alpha^{n-1}\} \subset F[\alpha]$ is an $F$-basis.

Spanning: let $\operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1}) \subset F[\alpha]$ be the $F$-linear span of $\{1, \alpha, \cdots, \alpha^{n-1}\}$. We show that all non-negative powers of $\alpha$ are in it, by induction on $k \in \mathbb{N}$. If $k < n$ then $\alpha^k$ is trivially in the span. Because $f(\alpha) = \alpha^n + \sum_{i=0}^{n-1} b_i\alpha^i = 0$, we see that $\alpha^n \in \operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1})$, hence $\operatorname{Sp}_F(1, \alpha, \cdots, \alpha^n) = \operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1})$. Finally assume $k > n$. Inductively $\alpha^{k-1} \in \operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1})$, and then $\alpha^k \in \operatorname{Sp}_F(1, \alpha, \cdots, \alpha^n) = \operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1})$. Thus all non-negative powers of $\alpha$ are contained in $\operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1})$. Every element of $F[\alpha]$ is an $F$-linear combination of such powers, hence $\operatorname{Sp}_F(1, \alpha, \cdots, \alpha^{n-1}) = F[\alpha]$.

Linear independence: if $\{1, \alpha, \cdots, \alpha^{n-1}\}$ were linearly dependent over $F$, then the minimal polynomial of $\alpha$ over $F$ would have degree strictly less than $n$. This is a contradiction. In particular $\dim_F F[\alpha] = n$.

Now we must show that $F[\alpha]$ is a subfield of $E$. Let $g \in F[X]$ and $\beta = g(\alpha) \neq 0$. By the above we know that the set $\{1, \beta, \cdots, \beta^n\} \subset F[\alpha]$ ($n + 1$ elements) is linearly dependent over $F$. Hence there exist $a_0, \cdots, a_n \in F$, not all zero, such that $a_0 + a_1\beta + \cdots + a_n\beta^n = 0$. Because $\beta \neq 0$ and $E$ is an integral domain, we may divide through by the lowest power of $\beta$ appearing with non-zero coefficient and normalise the constant term: there exist $k \in \mathbb{N}$ and $b_1, \cdots, b_k \in F$ such that
$$1 + b_1\beta + \cdots + b_k\beta^k = 0.$$
But then $1 = \beta(-b_1 - b_2\beta - \cdots - b_k\beta^{k-1})$, so $-b_1 - b_2\beta - \cdots - b_k\beta^{k-1} \in F[\alpha]$ is the multiplicative inverse of $\beta$ in $E$. We conclude that $F[\alpha]$ is a field, thus $F[\alpha] = F(\alpha)$, and by the basis above $[F(\alpha) : F] = n$, the degree of the minimal polynomial.
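The proof is constructive: inverses in $F[\alpha]$ come from linear algebra. For $F = \mathbb{Q}$ and $\alpha = \sqrt{2}$ (minimal polynomial $x^2 - 2$), the recipe is explicit, as in this tiny sketch of our own:

```python
from fractions import Fraction as Q

class QSqrt2:
    """Q(sqrt(2)) with Q-basis {1, sqrt(2)}: elements a + b*sqrt(2)."""
    def __init__(self, a, b):
        self.a, self.b = Q(a), Q(b)

    def __mul__(self, o):   # (a + b*r)(c + d*r) = (ac + 2bd) + (ad + bc)*r, r = sqrt(2)
        return QSqrt2(self.a * o.a + 2 * self.b * o.b,
                      self.a * o.b + self.b * o.a)

    def inverse(self):      # (a + b*r)^(-1) = (a - b*r)/(a^2 - 2b^2)
        n = self.a * self.a - 2 * self.b * self.b   # non-zero since sqrt(2) is irrational
        return QSqrt2(self.a / n, -self.b / n)

    def __repr__(self):
        return f"{self.a} + {self.b}*sqrt(2)"

beta = QSqrt2(1, 1)              # 1 + sqrt(2)
print(beta.inverse())            # -1 + 1*sqrt(2)
print(beta * beta.inverse())     # 1 + 0*sqrt(2)
```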
Proposition. Let $R$ be a Euclidean domain and $a \in R$ an irreducible element. Then the principal ideal $(a) \subset R$ is maximal.

Proof. Recall that $(a) = \{ar \mid r \in R\}$. First observe that $a$ is a non-unit, so $(a)$ is proper. Now observe that if $I \subset R$ is an ideal and $\exists b \in I$ with $b \in R^*$, then $I = R$; this is clear because $I$ is closed under multiplication by all of $R$. Hence any proper ideal of $R$ cannot contain any units. Assume that $(a)$ is not maximal and that $I \subset R$ is a proper ideal of $R$ strictly containing $(a)$, i.e. $(a) \neq I$. Let $b \in I$ with $b \notin (a)$. Hence $a$ does not divide $b$, which is a non-zero non-unit. But $R$ is Euclidean, which in particular implies that it is a UFD, so HCFs exist, and since $a$ is irreducible with $a \nmid b$ we get $\operatorname{HCF}(a, b) = 1$. We also know, by the Euclidean property, that $\exists u, v \in R$ with $ua + vb = 1$. But by construction $a, b \in I$, which implies that $1 \in I$. This is a contradiction, as $I$ is proper. Hence $(a)$ is maximal.

5.2 Splitting Fields

Theorem. Let $F$ be a field and $f \in F[X]$ a non-constant polynomial. Then there exists a field extension $E/F$ such that $f$ has a root in $E$. Moreover, $E$ can be chosen to be finite over $F$.

Proof. The key observation is that $F[X]$ is Euclidean. As a result, $F[X]$ is a UFD, hence we may factor $f$ into irreducible polynomials in $F[X]$. Without loss of generality we may therefore assume that $f$ itself is irreducible. By the above proposition, the ideal $(f(X)) \subset F[X]$ is maximal. This implies that the quotient $E := F[X]/(f(X))$ is a field. There is a natural ring homomorphism
$$F \to E, \quad \lambda \mapsto \lambda + (f(X)),$$
which is injective because it has trivial kernel. Hence we may naturally think of $E$ as a field extension of $F$.

Let $g \in F[X]$ and $a(X) + (f(X)) \in E$. By definition, $g(a(X) + (f(X))) = g(a(X)) + (f(X))$. Consider $X + (f(X)) \in E$:
$$f(X + (f(X))) = f(X) + (f(X)) = (f(X)).$$
But $(f(X)) \in E$ is the additive identity. Thus $X + (f(X))$ is a root of $f$ in $E$.

Finally we need to show that $E/F$ is finite. Assume that $\deg(f) = n$. We claim that $\{1 + (f(X)), X + (f(X)), \cdots, X^{n-1} + (f(X))\} \subset E$ forms a spanning set for $E$ over $F$. Given any $g \in F[X]$ we have the element $g(X) + (f(X)) \in E$. Remember that the degree function on $F[X]$ is Euclidean, hence we have a version of the remainder theorem: either $f(X) \mid g(X)$, or $\exists q(X), r(X) \in F[X]$ such that $g(X) = q(X)f(X) + r(X)$ with $\deg(r(X)) < n$. In the first case $g(X) \in (f(X))$, which implies that $g(X) + (f(X))$ is zero in $E$. In the second case we have $g(X) + (f(X)) = r(X) + (f(X))$, which is clearly in the $F$-span of $\{1 + (f(X)), X + (f(X)), \cdots, X^{n-1} + (f(X))\}$. Thus $E/F$ is finite.

Corollary. Let $F$ be a field and $f \in F[X]$ a non-constant polynomial. Then there exists a finite field extension $E/F$ such that $f$ splits into linear factors in $E[X]$.

Proof. We'll use induction on the degree of $f$. Clearly if $f$ is linear the result is true. Assume therefore that $\deg(f) > 1$. By the above theorem we may find a finite field extension $K/F$ containing a root $\alpha$ of $f$. This implies that $f = (X - \alpha)g$ for some $g \in K[X]$, with $\deg(g) < \deg(f)$ by construction. By induction we know that there is a finite field extension $E/K$ in which $g$, and thus $f$, splits into linear factors. Because both $E/K$ and $K/F$ are finite, $E/F$ is finite.

This is a beautiful result. In particular it facilitates the following fundamental definition:

Definition. Let $F$ be a field and $f \in F[X]$. A splitting field for $f$ is a finite extension $E/F$ of minimal degree over $F$ such that $f$ splits into linear factors in $E[X]$.
Theorem. Let $F$ be a field and $f \in F[X]$, and let $E$ and $E'$ be two splitting fields of $f$. Then $E$ is isomorphic to $E'$.

Proof. We don't quite have enough time to prove this; it isn't too hard though. Intuitively it is unsurprising, because a splitting field is some kind of minimal field generated by $F$ and the roots of $f$. You will prove it in a second course in abstract algebra.

When we are thinking about $\mathbb{Q}$ we are lucky enough to have a natural embedding of $\mathbb{Q}$ in $\mathbb{C}$, which is algebraically closed. This means the splitting field of any polynomial $f \in \mathbb{Q}[X]$ can naturally be considered a subfield of $\mathbb{C}$. Concretely, if $\{\alpha_1, \cdots, \alpha_n\} \subset \mathbb{C}$ are the roots of $f$ in $\mathbb{C}$, then the minimal subfield of $\mathbb{C}$ containing $\mathbb{Q}$ and $\{\alpha_1, \cdots, \alpha_n\}$, denoted $\mathbb{Q}(\alpha_1, \cdots, \alpha_n) \subset \mathbb{C}$, is a splitting field for $f$.

5.3 Galois Theory (Proofs Omitted)

Definition. Let $E/F$ be a finite extension. We say that $E/F$ is normal if $E$ is a splitting field for some $f \in F[X]$.

Remarks. An extension $E/F$ being normal is equivalent to the condition that if $f \in F[X]$ admits a root in $E$ then it must split into linear factors in $E[X]$.

From now on assume that all fields are of characteristic zero.

Definition. Let $E/F$ be a finite extension. We say that $E/F$ is Galois if it is normal.

Remarks. For characteristic $p$ fields, being Galois requires an extra condition called separability. Separability is automatically satisfied by characteristic zero field extensions.

If $F = \mathbb{Q}$ then $E = \mathbb{Q}(\sqrt[3]{2}, e^{2\pi i/3})$ is a Galois extension, as it is the splitting field of $x^3 - 2$. Note that $\mathbb{Q}(\sqrt[3]{2})$ is not a Galois extension of $\mathbb{Q}$, as $x^3 - 2$ does not split into linear factors in $\mathbb{Q}(\sqrt[3]{2})[X]$.

Definition. Let $E/F$ be a finite Galois extension. The Galois group of $E/F$, denoted $\operatorname{Gal}(E/F)$, is the group of field automorphisms of $E$ which fix $F$, i.e. $\sigma \in \operatorname{Gal}(E/F)$ is a field automorphism $\sigma : E \to E$ such that $\sigma(\alpha) = \alpha$ for all $\alpha \in F$. Composition is given by composition of functions.

This concept was first introduced by Évariste Galois (1811-1832). In fact this is where the term group comes from: Galois was the first to use it. Here are some nice facts about Galois extensions and Galois groups:

• Galois groups are finite. Moreover, $|\operatorname{Gal}(E/F)| = [E : F]$.

• If $E/F$ is Galois with $E$ the splitting field of a degree $n$ polynomial $f \in F[X]$, then $\operatorname{Gal}(E/F)$ acts faithfully on the roots of $f$ in $E$. In particular we can identify $\operatorname{Gal}(E/F)$ with a subgroup of $\operatorname{Sym}_n$.

It is important to realise that in general $\operatorname{Gal}(E/F)$ will not be isomorphic to $\operatorname{Sym}_n$. Because elements of $\operatorname{Gal}(E/F)$ fix $F$, they must preserve algebraic relations (over $F$) among the roots. We should therefore think about $\operatorname{Gal}(E/F)$ as the permutations of the roots of a splitting polynomial which preserve all algebraic relationships between them. This makes Galois groups extremely subtle: in some instances there may be no relationships (so $\operatorname{Gal}(E/F) \cong \operatorname{Sym}_n$), whereas in others there may be many (so $\operatorname{Gal}(E/F)$ is much smaller than $\operatorname{Sym}_n$).

Fundamental Theorem of Galois Theory. Let $E/F$ be a finite Galois extension. Then there is a natural bijection
$$\{\text{intermediate subfields } F \subset K \subset E\} \longrightarrow \{\text{subgroups } H \subset \operatorname{Gal}(E/F)\},$$
$$K \longmapsto \{\sigma \in \operatorname{Gal}(E/F) \mid \sigma(k) = k \ \forall k \in K\} = \operatorname{Gal}(E/K).$$
In addition, $K/F$ is Galois if and only if $\operatorname{Gal}(E/K) \trianglelefteq \operatorname{Gal}(E/F)$. In this case $\operatorname{Gal}(K/F) \cong \operatorname{Gal}(E/F)/\operatorname{Gal}(E/K)$.

5.4 Solving Polynomials By Radicals

Suppose we wish to find the roots of a polynomial $f(x) = a_0 + a_1x + \cdots + a_nx^n \in \mathbb{Q}[X]$. If $n = 2$ we have the quadratic formula. It is natural to ask if there is a similar formula (in terms of the coefficients) for $n > 2$.
It turns out that for $n = 3$ and $n = 4$ there are formulae, although they are extremely complicated. For many centuries mathematicians searched for a formula in the case $n = 5$. Eventually it was proven (first by Abel and then later by Galois) that no such formula exists if $n \ge 5$. This is a very surprising result: what is so special about $n = 5$?

What would it mean for there to be an analogue of the quadratic formula for higher degree polynomials? In simpler terms, it would mean that all the zeros could be constructed by repeatedly taking roots and applying basic algebraic operations. This would mean that the splitting field would have to have the following very specific property.

Definition. Let $f \in \mathbb{Q}[X]$ with splitting field $K_f$. We say that $f$ is solvable by radicals if there is a chain of fields
$$\mathbb{Q} = K_0 \subset K_1 \subset \cdots \subset K_m \subset \mathbb{C}$$
such that
• $K_f \subset K_m$;
• $K_{i+1} = K_i(\alpha_i)$, where $\alpha_i$ is a root of a polynomial of the form $x^{n_i} - b_i \in K_i[X]$, for all $0 \le i < m$;
• $e^{2\pi i/n} \in K_1$, where $n = \prod n_i$. (This last condition is non-standard; it's included to simplify the exposition.)

It is a fact that each $K_i/K_{i-1}$ is Galois and $\operatorname{Gal}(K_i/K_{i-1})$ is abelian. By the fundamental theorem of Galois theory, the chain $\mathbb{Q} = K_0 \subset K_1 \subset \cdots \subset K_m$ gives rise to the nested collection of subgroups
$$\{e\} \trianglelefteq \operatorname{Gal}(K_m/K_{m-1}) \trianglelefteq \cdots \trianglelefteq \operatorname{Gal}(K_m/K_1) \trianglelefteq \operatorname{Gal}(K_m/K_0),$$
where $\operatorname{Gal}(K_m/K_{i-1})/\operatorname{Gal}(K_m/K_i) \cong \operatorname{Gal}(K_i/K_{i-1})$. Note that these quotients are abelian, hence $\operatorname{Gal}(K_m/K_0)$ is a solvable group. Now observe that $\operatorname{Gal}(K_f/\mathbb{Q}) \cong \operatorname{Gal}(K_m/\mathbb{Q})/\operatorname{Gal}(K_m/K_f)$. It is a fact that a quotient of a finite solvable group is solvable. Hence we deduce the following result:

Theorem. $f \in \mathbb{Q}[X]$ solvable by radicals $\Rightarrow \operatorname{Gal}(K_f/\mathbb{Q})$ a solvable group.

This was Galois' key insight. He realised that if there were a version of the quadratic formula then the corresponding Galois group would have abelian simple components.

What's this got to do with degree five polynomials? If $f \in \mathbb{Q}[X]$ is an irreducible degree five polynomial with exactly three real roots (for example $x^5 - 9x + 3$), then it's possible to show that $\operatorname{Gal}(K_f/\mathbb{Q}) \cong \operatorname{Sym}_5$. Galois showed that the simple components of $\operatorname{Sym}_5$ are $\{\operatorname{Alt}_5, \mathbb{Z}/2\mathbb{Z}\}$, hence $\operatorname{Gal}(K_f/\mathbb{Q})$ is not solvable. Hence, in this case, $f$ is not solvable by radicals, so there can be no version of the quadratic formula. Why isn't there a problem for degree 2, 3 or 4? It's because $\operatorname{Sym}_2$, $\operatorname{Sym}_3$ and $\operatorname{Sym}_4$ are solvable.

This proof is one of the great achievements in mathematics. Galois was a true genius. He died at 21 in a duel over a woman, and he wrote all this down the night before he died, running out of time in the end. Hermann Weyl, one of the greatest mathematicians of the 20th century, said of this testament:

"This letter, if judged by the novelty and profundity of ideas it contains, is perhaps the most substantial piece of writing in the whole literature of mankind."
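As a quick numerical footnote to the example above (our own check, using NumPy): the hypothesis that $x^5 - 9x + 3$ has exactly three real roots is easy to corroborate, and its irreducibility over $\mathbb{Q}$ follows from Eisenstein's Criterion at $p = 3$.

```python
import numpy as np

# x^5 - 9x + 3, coefficients listed from the highest degree term down
roots = np.roots([1, 0, 0, 0, -9, 3])
real = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
print(len(real), real)   # 3 real roots, approximately -1.81, 0.34 and 1.63
```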
Raymond W. Yeung
Information Theory and Network Coding
January 31, 2008
Springer

To my parents and my family

Preface

This book is an evolution from my book A First Course in Information Theory published in 2002, when network coding was still in its infancy. The last few years have witnessed the rapid development of network coding into a research field of its own in information science. With its root in information theory, network coding not only has brought about a paradigm shift in network communications at large, but also has had significant influence on such specific research fields as coding theory, networking, switching, wireless communications, distributed data storage, cryptography, and optimization theory. While new applications of network coding keep emerging, the fundamental results that lay the foundation of the subject are more or less mature. One of the main goals of this book therefore is to present these results in a unifying and coherent manner.

While the previous book focused only on information theory for discrete random variables, the current book contains two new chapters on information theory for continuous random variables, namely the chapter on differential entropy and the chapter on continuous-valued channels. With these topics included, the book becomes more comprehensive and is more suitable to be used as a textbook for a course in an electrical engineering department.

What is in this book

Out of the twenty-one chapters in this book, the first sixteen chapters belong to Part I, Components of Information Theory, and the last five chapters belong to Part II, Fundamentals of Network Coding. Part I covers the basic topics in information theory and prepares the reader for the discussions in Part II. A brief rundown of the chapters will give a better idea of what is in this book.

Chapter 1 contains a high level introduction to the contents of this book. First, there is a discussion on the nature of information theory and the main results in Shannon's original paper in 1948 which founded the field. There are also pointers to Shannon's biographies and his works.

Chapter 2 introduces Shannon's information measures for discrete random variables and their basic properties. Useful identities and inequalities in information theory are derived and explained. Extra care is taken in handling joint distributions with zero probability masses. There is a section devoted to the discussion of maximum entropy distributions. The chapter ends with a section on the entropy rate of a stationary information source.

Chapter 3 is an introduction to the theory of I-Measure which establishes a one-to-one correspondence between Shannon's information measures and set theory. A number of examples are given to show how the use of information diagrams can simplify the proofs of many results in information theory. Such diagrams are becoming standard tools for solving information theory problems.

Chapter 4 is a discussion of zero-error data compression by uniquely decodable codes, with prefix codes as a special case. A proof of the entropy bound for prefix codes which involves neither the Kraft inequality nor the fundamental inequality is given. This proof facilitates the discussion of the redundancy of prefix codes.

Chapter 5 is a thorough treatment of weak typicality. The weak asymptotic equipartition property and the source coding theorem are discussed.
An explanation of the fact that a good data compression scheme produces almost i.i.d. bits is given. There is also an introductory discussion of the Shannon-McMillan-Breiman theorem. The concept of weak typicality will be further developed in Chapter 10 for continuous random variables.

Chapter 6 contains a detailed discussion of strong typicality which applies to random variables with finite alphabets. The results developed in this chapter will be used for proving the channel coding theorem and the rate-distortion theorem in the next two chapters.

The discussion in Chapter 7 of the discrete memoryless channel is an enhancement of the discussion in the previous book. In particular, the new definition of the discrete memoryless channel enables rigorous formulation and analysis of coding schemes for such channels with or without feedback. The proof of the channel coding theorem uses a graphical model approach that helps explain the conditional independence of the random variables.

Chapter 8 is an introduction to rate-distortion theory. The version of the rate-distortion theorem here, proved by using strong typicality, is a stronger version of the original theorem obtained by Shannon.

In Chapter 9, the Blahut-Arimoto algorithms for computing the channel capacity and the rate-distortion function are discussed, and a simplified proof for convergence is given. Great care is taken in handling distributions with zero probability masses.

Chapter 10 and Chapter 11 are the two chapters devoted to the discussion of information theory for continuous random variables. Chapter 10 introduces differential entropy and related information measures, and their basic properties are discussed. The asymptotic equipartition property for continuous random variables is proved. The last section on maximum differential entropy distributions echoes the section in Chapter 2 on maximum entropy distributions.

Chapter 11 discusses a variety of continuous-valued channels, with the continuous memoryless channel being the basic building block. In proving the capacity of the memoryless Gaussian channel, a careful justification is given for the existence of the differential entropy of the output random variable. Based on this result, the capacity of a system of parallel/correlated Gaussian channels is obtained. Heuristic arguments leading to the formula for the capacity of the bandlimited white/colored Gaussian channel are given. The chapter ends with a proof of the fact that zero-mean Gaussian noise is the worst additive noise.

Chapter 12 explores the structure of the I-Measure for Markov structures. Set-theoretic characterizations of full conditional independence and Markov random field are discussed. The treatment of Markov random field here may be too specialized for the average reader, but the structure of the I-Measure and the simplicity of the information diagram for a Markov chain is best explained as a special case of a Markov random field.

Information inequalities are sometimes called the laws of information theory because they govern the impossibilities in information theory. In Chapter 13, the geometrical meaning of information inequalities and the relation between information inequalities and conditional independence are explained in depth. The framework for information inequalities discussed here is the basis of the next two chapters.

Chapter 14 explains how the problem of proving information inequalities can be formulated as a linear programming problem.
This leads to a complete characterization of all information inequalities provable by conventional techniques. These inequalities, called Shannon-type inequalities, can be proved by the software package ITIP, which is available on the World Wide Web. It is also shown how Shannon-type inequalities can be used to tackle the implication problem of conditional independence in probability theory.

Shannon-type inequalities are all the information inequalities known during the first half century of information theory. In the late 1990's, a few new inequalities, called non-Shannon-type inequalities, were discovered. These inequalities imply the existence of laws in information theory beyond those laid down by Shannon. In Chapter 15, we discuss these inequalities and their applications.

Chapter 16 explains an intriguing relation between information theory and group theory. Specifically, for every information inequality satisfied by any joint probability distribution, there is a corresponding group inequality satisfied by any finite group and its subgroups, and vice versa. Inequalities of the latter type govern the orders of any finite group and their subgroups. Group-theoretic proofs of Shannon-type information inequalities are given. At the end of the chapter, a group inequality is obtained from a non-Shannon-type inequality discussed in Chapter 15. The meaning and the implication of this inequality are yet to be understood.

Chapter 17 starts Part II of the book with a discussion of the butterfly network, the primary example in network coding. Variations of the butterfly network are analyzed in detail. The advantage of network coding over store-and-forward in wireless and satellite communications is explained through a simple example. We also explain why network coding with multiple information sources is substantially different from network coding with a single information source.

In Chapter 18, the fundamental bound for single-source network coding, called the max-flow bound, is explained in detail. The bound is established for a general class of network codes.

In Chapter 19, we discuss various classes of linear network codes on acyclic networks that achieve the max-flow bound to different extents. Static network codes, a special class of linear network codes that achieves the max-flow bound in the presence of channel failure, are also discussed. Polynomial-time algorithms for constructing these codes are presented.

In Chapter 20, we formulate and analyze convolutional network codes on cyclic networks. The existence of such codes that achieve the max-flow bound is proved.

Network coding theory is further developed in Chapter 21. The scenario when more than one information source is multicast in a point-to-point acyclic network is discussed. An implicit characterization of the achievable information rate region which involves the framework for information inequalities developed in Part I is proved.

How to use this book

[Chart: suggested order for reading Chapters 1-21, showing the dependencies between the chapters of Part I and Part II.]

Part I of this book by itself may be regarded as a comprehensive textbook in information theory. The main reason why the book is in the present form is because, in my opinion, the discussion of network coding in Part II is incomplete without Part I. Nevertheless, except for Chapter 21 on multi-source network coding, Part II by itself may be used satisfactorily as a textbook on single-source network coding.
An elementary course on probability theory and an elementary course on linear algebra are prerequisites to Part I and Part II, respectively. For Chapter 11, some background knowledge on digital communication systems would be helpful, and for Chapter 20, some prior exposure to discrete-time linear systems is necessary.

The reader is recommended to read the chapters according to the above chart. However, one will not have too much difficulty jumping around in the book because there should be sufficient references to the previous relevant sections.

This book inherits the writing style of the previous book, namely that all the derivations are from first principles. The book contains a large number of examples, in which important points are very often made. To facilitate the use of the book, there is a summary at the end of each chapter.

This book can be used as a textbook or a reference book. As a textbook, it is ideal for a two-semester course, with the first and second semesters covering selected topics from Part I and Part II, respectively. A comprehensive instructor's manual is available upon request. Please contact the author at [email protected] for information and access.

Just like any other lengthy document, this book for sure contains errors and omissions. To alleviate the problem, an errata will be maintained at the book homepage book2/.

Hong Kong, China
Raymond W. Yeung
December, 2007

Acknowledgments

The current book, an expansion of my previous book A First Course in Information Theory, was written within the year 2007. Thanks to the generous support of the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation of Germany, I had the luxury of working on the project full-time from January to April when I visited Munich University of Technology. I would like to thank Joachim Hagenauer and Ralf Koetter for nominating me for the award and for hosting my visit. I also would like to thank the Department of Information Engineering, The Chinese University of Hong Kong, for making this arrangement possible.

There are many individuals who have directly or indirectly contributed to this book. First, I am indebted to Toby Berger, who taught me information theory and writing. I am most thankful to Zhen Zhang, Ning Cai, and Bob Li for their friendship and inspiration. Without the results obtained through our collaboration, the book could not possibly be in its current form. I would also like to thank Venkat Anantharam, Vijay Bhargava, Dick Blahut, Agnes and Vincent Chan, Tom Cover, Imre Csiszár, Tony Ephremides, Bob Gallager, Bruce Hajek, Te Sun Han, Jim Massey, Prakash Narayan, Alon Orlitsky, Shlomo Shamai, Sergio Verdú, Victor Wei, Frans Willems, and Jack Wolf for their support and encouragement throughout the years. I also would like to thank all the collaborators of my work for their contribution and all the anonymous reviewers for their useful comments.

I would like to thank a number of individuals who helped in the project. I benefited tremendously from the discussions with David Tse, who gave a lot of suggestions for writing the chapters on differential entropy and continuous-valued channels. Terence Chan, Ka Wo Cheung, Bruce Hajek, Siu-Wai Ho, Siu Ting Ho, Tat Ming Lok, Prakash Narayan, Will Ng, Sagar Shenvi, Xiang-Gen Xia, Shaohua Yang, and Ken Zeger gave many valuable comments at different stages of the writing. My graduate students Silas Fong, Min Tan, and Shenghao Yang proofread the chapters on network coding in great detail.
Silas Fong also helped compose the figures throughout the book.

On the domestic side, I am most grateful to my wife Rebecca for her love. During our stay in Munich, she took good care of the whole family so that I was able to concentrate on my writing. We are most thankful to our family friend Ms. Pui Yee Wong for taking care of Rebecca when she was ill during the final stage of the project, and to my sister Georgiana for her moral support. In this regard, we are indebted to Dr. Yu Lap Yip for his timely diagnosis. I also would like to thank my sister-in-law Ophelia Tsang, who comes over during the weekend to help take care of our daughter Shannon, who continues to be the sweetheart of the family and was most supportive during the time her mom was ill.

Contents

1 The Science of Information  1

Part I Components of Information Theory

2 Information Measures  7
2.1 Independence and Markov Chains  7
2.2 Shannon's Information Measures  12
2.3 Continuity of Shannon's Information Measures for Fixed Finite Alphabets  18
2.4 Chain Rules  21
2.5 Informational Divergence  23
2.6 The Basic Inequalities  26
2.7 Some Useful Information Inequalities  28
2.8 Fano's Inequality  32
2.9 Maximum Entropy Distributions  36
2.10 Entropy Rate of a Stationary Source  38
Appendix 2.A: Approximation of Random Variables with Countably Infinite Alphabets by Truncation  41
Chapter Summary  43
Problems  45
Historical Notes  50

3 The I-Measure  51
3.1 Preliminaries  52
3.2 The I-Measure for Two Random Variables  53
3.3 Construction of the I-Measure µ*  55
3.4 µ* Can be Negative  59
3.5 Information Diagrams  61
3.6 Examples of Applications  67
Appendix 3.A: A Variation of the Inclusion-Exclusion Formula  74
Chapter Summary  76
Problems  78
Historical Notes  80
4 Zero-Error Data Compression  81
4.1 The Entropy Bound  82
4.2 Prefix Codes  86
4.2.1 Definition and Existence  86
4.2.2 Huffman Codes  88
4.3 Redundancy of Prefix Codes  93
Chapter Summary  97
Problems  98
Historical Notes  99

5 Weak Typicality  101
5.1 The Weak AEP  101
5.2 The Source Coding Theorem  104
5.3 Efficient Source Coding  106
5.4 The Shannon-McMillan-Breiman Theorem  107
Chapter Summary  110
Problems  110
Historical Notes  112

6 Strong Typicality  113
6.1 Strong AEP  113
6.2 Strong Typicality Versus Weak Typicality  121
6.3 Joint Typicality  122
6.4 An Interpretation of the Basic Inequalities  131
Chapter Summary  131
Problems  132
Historical Notes  134

7 Discrete Memoryless Channels  137
7.1 Definition and Capacity  140
7.2 The Channel Coding Theorem  149
7.3 The Converse  151
7.4 Achievability  157
7.5 A Discussion  164
7.6 Feedback Capacity  166
7.7 Separation of Source and Channel Coding  172
Chapter Summary  175
Problems  176
Historical Notes  181

8 Rate-Distortion Theory  183
8.1 Single-Letter Distortion Measures  184
8.2 The Rate-Distortion Function R(D)  187
8.3 The Rate-Distortion Theorem  192
8.4 The Converse  200
8.5 Achievability of R_I(D)  202
Chapter Summary  207
Problems  208
Historical Notes  209

9 The Blahut-Arimoto Algorithms  211
9.1 Alternating Optimization  212
9.2 The Algorithms  214
9.2.1 Channel Capacity  214
9.2.2 The Rate-Distortion Function  219
9.3 Convergence  222
9.3.1 A Sufficient Condition  222
9.3.2 Convergence to the Channel Capacity  225
Chapter Summary  226
Problems  227
Historical Notes  228

10 Differential Entropy  229
10.1 Preliminaries  231
10.2 Definition  235
10.3 Joint and Conditional Differential Entropy  238
10.4 The AEP for Continuous Random Variables  245
10.5 Informational Divergence  248
10.6 Maximum Differential Entropy Distributions  249
Chapter Summary  252
Problems  254
Historical Notes  256

11 Continuous-Valued Channels  257
11.1 Discrete-Time Channels  257
11.2 The Channel Coding Theorem  260
11.3 Proof of the Channel Coding Theorem  262
11.3.1 The Converse  262
11.3.2 Achievability  265
11.4 Memoryless Gaussian Channels  270
11.5 Parallel Gaussian Channels  272
11.6 Correlated Gaussian Channels  277
11.7 The Bandlimited White Gaussian Channel  280
11.8 The Bandlimited Colored Gaussian Channel  287
11.9 Zero-Mean Gaussian Noise is the Worst Additive Noise  289
Chapter Summary  294
Problems  295
Historical Notes  297

12 Markov Structures  299
12.1 Conditional Mutual Independence  300
12.2 Full Conditional Mutual Independence  309
12.3 Markov Random Field  314
12.4 Markov Chain  317
Chapter Summary  319
Problems  320
Historical Notes  321

13 Information Inequalities  323
13.1 The Region Γ*_n  325
13.2 Information Expressions in Canonical Form  326
13.3 A Geometrical Framework  329
13.3.1 Unconstrained Inequalities  329
13.3.2 Constrained Inequalities  330
13.3.3 Constrained Identities  332
13.4 Equivalence of Constrained Inequalities  333
13.5 The Implication Problem of Conditional Independence  336
Chapter Summary  337
Problems  338
Historical Notes  338

14 Shannon-Type Inequalities  339
14.1 The Elemental Inequalities  339
14.2 A Linear Programming Approach  341
14.2.1 Unconstrained Inequalities  343
14.2.2 Constrained Inequalities and Identities  344
14.3 A Duality  345
14.4 Machine Proving – ITIP  347
14.5 Tackling the Implication Problem  351
14.6 Minimality of the Elemental Inequalities  353
Appendix 14.A: The Basic Inequalities and the Polymatroidal Axioms  356
Chapter Summary  357
Problems  358
Historical Notes  360

15 Beyond Shannon-Type Inequalities  361
15.1 Characterizations of Γ*_2, Γ*_3, and Γ*_n  361
15.2 A Non-Shannon-Type Unconstrained Inequality  369
15.3 A Non-Shannon-Type Constrained Inequality  374
15.4 Applications  380
Chapter Summary  383
Problems  383
Historical Notes  385

16 Entropy and Groups  387
16.1 Group Preliminaries  388
16.2 Group-Characterizable Entropy Functions  393
16.3 A Group Characterization of Γ*_n  398
16.4 Information Inequalities and Group Inequalities  401
Chapter Summary  405
Problems  406
Historical Notes  408

Part II Fundamentals of Network Coding

17 Introduction  411
17.1 The Butterfly Network  412
17.2 Wireless and Satellite Communications  415
17.3 Source Separation  417
Chapter Summary  418
Problems  418
Historical Notes  419

18 The Max-Flow Bound  421
18.1 Point-to-Point Communication Networks  421
18.2 Examples Achieving the Max-Flow Bound  424
18.3 A Class of Network Codes  427
18.4 Proof of the Max-Flow Bound  429
Chapter Summary  431
Problems  431
Historical Notes  434

19 Single-Source Linear Network Coding: Acyclic Networks  435
19.1 Acyclic Networks  436
19.2 Linear Network Codes  437
19.3 Desirable Properties of a Linear Network Code  442
19.4 Existence and Construction  449
19.5 Generic Network Codes  460
19.6 Static Network Codes  468
19.7 Random Network Coding: A Case Study  473
19.7.1 How the System Works  474
19.7.2 Model and Analysis  475
Chapter Summary  478
Problems  479
Historical Notes  482

20 Single-Source Linear Network Coding: Cyclic Networks  485
20.1 Delay-Free Cyclic Networks  485
20.2 Convolutional Network Codes  488
20.3 Decoding of Convolutional Network Codes  498
Chapter Summary  503
Problems  504
Historical Notes  504

21 Multi-Source Network Coding  505
21.1 The Max-Flow Bounds  505
21.2 Examples of Application  508
21.2.1 Multilevel Diversity Coding  508
21.2.2 Satellite Communication Network  510
21.3 A Network Code for Acyclic Networks  510
21.4 The Achievable Information Rate Region  512
21.5 Explicit Inner and Outer Bounds  515
21.6 The Converse  516
21.7 Achievability  521
21.7.1 Random Code Construction  524
21.7.2 Performance Analysis  527
Chapter Summary  536
Problems  537
Historical Notes  539

References  541
Index  561

1 The Science of Information

In a communication system, we try to convey information from one point to another, very often in a noisy environment. Consider the following scenario. A secretary needs to send facsimiles regularly and she wants to convey as much information as possible on each page. She has a choice of the font size, which means that more characters can be squeezed onto a page if a smaller font size is used. In principle, she can squeeze as many characters as desired on a page by using a small enough font size. However, there are two factors in the system which may cause errors. First, the fax machine has a finite resolution. Second, the characters transmitted may be received incorrectly due to noise in the telephone line. Therefore, if the font size is too small, the characters may not be recognizable on the facsimile. On the other hand, although some characters on the facsimile may not be recognizable, the recipient can still figure out the words from the context provided that the number of such characters is not excessive. In other words, it is not necessary to choose a font size such that all the characters on the facsimile are recognizable almost surely. Then we are motivated to ask: What is the maximum amount of meaningful information which can be conveyed on one page of facsimile?

This question may not have a definite answer because it is not very well posed. In particular, we do not have a precise measure of meaningful information. Nevertheless, this question is an illustration of the kind of fundamental questions we can ask about a communication system.

Information, which is not a physical entity but an abstract concept, is hard to quantify in general. This is especially the case if human factors are involved when the information is utilized. For example, when we play Beethoven's violin concerto from an audio compact disc, we receive the musical information from the loudspeakers. We enjoy this information because it arouses certain kinds of emotion within ourselves. While we receive the same information every time we play the same piece of music, the kinds of emotions aroused may be different from time to time because they depend on our mood at that particular moment. In other words, we can derive utility from the same information every time in a different way. For this reason, it is extremely difficult to devise a measure which can quantify the amount of information contained in a piece of music.

In 1948, Bell Telephone Laboratories scientist Claude E. Shannon (1916-2001) published a paper entitled "A Mathematical Theory of Communication," which laid the foundation of an important field now known as information theory. In his paper, the model of a point-to-point communication system depicted in Figure 1.1 is considered.

[Fig. 1.1. Schematic diagram for a general point-to-point communication system.]

In this model, a message is generated by the information source.
The message is converted by the transmitter into a signal which is suitable for transmission. In the course of transmission, the signal may be contaminated by a noise source, so that the received signal may be different from the transmitted signal. Based on the received signal, the receiver then makes an estimate of the message and delivers it to the destination.

In this abstract model of a point-to-point communication system, one is only concerned with whether the message generated by the source can be delivered correctly to the receiver, without worrying about how the message is actually used by the receiver. In a way, Shannon's model does not cover all possible aspects of a communication system. However, in order to develop a precise and useful theory of information, the scope of the theory has to be restricted.

In this paper, Shannon introduced two fundamental concepts about "information" from the communication point of view. First, information is uncertainty. More specifically, if a piece of information we are interested in is deterministic, then it has no value at all because it is already known with no uncertainty. From this point of view, for example, the continuous transmission of a still picture on a television broadcast channel is superfluous. Consequently, an information source is naturally modeled as a random variable or a random process, and probability is employed to develop the theory of information. Second, information to be transmitted is digital. This means that the information source should first be converted into a stream of 0's and 1's called bits, and the remaining task is to deliver these bits to the receiver correctly with no reference to their actual meaning. This is the foundation of all modern digital communication systems. In fact, this work of Shannon appears to contain the first published use of the term "bit," which stands for binary digit.

In the same work, Shannon also proved two important theorems. The first theorem, called the source coding theorem, introduces entropy as the fundamental measure of information, which characterizes the minimum rate of a source code representing an information source essentially free of error. The source coding theorem is the theoretical basis for lossless data compression.¹ The second theorem, called the channel coding theorem, concerns communication through a noisy channel. It was shown that associated with every noisy channel is a parameter, called the capacity, which is strictly positive except for very special channels, such that information can be communicated reliably through the channel as long as the information rate is less than the capacity. These two theorems, which give fundamental limits in point-to-point communication, are the two most important results in information theory.

¹ A data compression scheme is lossless if the data can be recovered with an arbitrarily small probability of error.

In science, we study the laws of Nature which must be obeyed by any physical system. These laws are used by engineers to design systems to achieve specific goals. Therefore, science is the foundation of engineering. Without science, engineering can only be done by trial and error.

In information theory, we study the fundamental limits in communication regardless of the technologies involved in the actual implementation of the communication systems. These fundamental limits are not only used as guidelines by communication engineers, but they also give insights into what optimal coding schemes are like.
Information theory is therefore the science of information.

Since Shannon published his original paper in 1948, information theory has been developed into a major research field in both communication theory and applied probability. For a non-technical introduction to information theory, we refer the reader to Encyclopedia Britannica. In fact, we strongly recommend the reader to first read this excellent introduction before starting this book. For biographies of Claude Shannon, a legend of the 20th Century who made fundamental contributions to the Information Age, we refer the readers to the two biographies cited in the References; the latter is also a complete collection of Shannon's papers.

Unlike most branches of applied mathematics in which physical systems are studied, abstract systems of communication are studied in information theory. In reading this book, it is not unusual for a beginner to be able to understand all the steps in a proof but have no idea what the proof is leading to. The best way to learn information theory is to study the materials first and come back at a later time. Many results in information theory are rather subtle, to the extent that an expert in the subject may from time to time realize that his/her understanding of certain basic results has been inadequate or even incorrect. While a novice should expect to raise his/her level of understanding of the subject by reading this book, he/she should not be discouraged to find after finishing the book that there are actually more things yet to be understood. In fact, this is exactly the challenge and the beauty of information theory.

Part I Components of Information Theory

2 Information Measures

Shannon's information measures refer to entropy, conditional entropy, mutual information, and conditional mutual information. They are the most important measures of information in information theory. In this chapter, we introduce these measures and establish some basic properties they possess. The physical meanings of these measures will be discussed in depth in subsequent chapters. We then introduce the informational divergence, which measures the "distance" between two probability distributions, and prove some useful inequalities in information theory. The chapter ends with a section on the entropy rate of a stationary information source.

2.1 Independence and Markov Chains

We begin our discussion in this chapter by reviewing two basic concepts in probability: independence of random variables and Markov chains. All the random variables in this book, except for those in Chapters 10 and 11, are assumed to be discrete unless otherwise specified.

Let X be a random variable taking values in an alphabet $\mathcal{X}$. The probability distribution for X is denoted as $\{p_X(x), x \in \mathcal{X}\}$, with $p_X(x) = \Pr\{X = x\}$. When there is no ambiguity, $p_X(x)$ will be abbreviated as p(x), and $\{p(x)\}$ will be abbreviated as p(x). The support of X, denoted by $S_X$, is the set of all $x \in \mathcal{X}$ such that p(x) > 0. If $S_X = \mathcal{X}$, we say that p is strictly positive. Otherwise, we say that p is not strictly positive, or p contains zero probability masses. All the above notations naturally extend to two or more random variables. As we will see, probability distributions with zero probability masses are very delicate, and they need to be handled with great care.
Definition 2.1. Two random variables X and Y are independent, denoted by $X \perp Y$, if
$$p(x, y) = p(x)p(y) \qquad (2.1)$$
for all x and y (i.e., for all $(x, y) \in \mathcal{X} \times \mathcal{Y}$).

For more than two random variables, we distinguish between two types of independence.

Definition 2.2 (Mutual Independence). For $n \ge 3$, random variables $X_1, X_2, \cdots, X_n$ are mutually independent if
$$p(x_1, x_2, \cdots, x_n) = p(x_1)p(x_2)\cdots p(x_n) \qquad (2.2)$$
for all $x_1, x_2, \cdots, x_n$.

Definition 2.3 (Pairwise Independence). For $n \ge 3$, random variables $X_1, X_2, \cdots, X_n$ are pairwise independent if $X_i$ and $X_j$ are independent for all $1 \le i < j \le n$.

Note that mutual independence implies pairwise independence. We leave it as an exercise for the reader to show that the converse is not true.

Definition 2.4 (Conditional Independence). For random variables X, Y, and Z, X is independent of Z conditioning on Y, denoted by $X \perp Z | Y$, if
$$p(x, y, z)p(y) = p(x, y)p(y, z) \qquad (2.3)$$
for all x, y, and z, or equivalently,
$$p(x, y, z) = \begin{cases} \dfrac{p(x,y)p(y,z)}{p(y)} = p(x,y)p(z|y) & \text{if } p(y) > 0 \\ 0 & \text{otherwise.} \end{cases} \qquad (2.4)$$

The first definition of conditional independence above is sometimes more convenient to use because it is not necessary to distinguish between the cases p(y) > 0 and p(y) = 0. However, the physical meaning of conditional independence is more explicit in the second definition.

Proposition 2.5. For random variables X, Y, and Z, $X \perp Z|Y$ if and only if
$$p(x, y, z) = a(x, y)b(y, z) \qquad (2.5)$$
for all x, y, and z such that p(y) > 0.

Proof. The 'only if' part follows immediately from the definition of conditional independence in (2.4), so we will only prove the 'if' part. Assume
$$p(x, y, z) = a(x, y)b(y, z) \qquad (2.6)$$
for all x, y, and z such that p(y) > 0. Then for such x, y, and z, we have
$$p(x, y) = \sum_z p(x, y, z) = \sum_z a(x, y)b(y, z) = a(x, y)\sum_z b(y, z) \qquad (2.7)$$
and
$$p(y, z) = \sum_x p(x, y, z) = \sum_x a(x, y)b(y, z) = b(y, z)\sum_x a(x, y). \qquad (2.8)$$
Furthermore,
$$p(y) = \sum_z p(y, z) = \left(\sum_x a(x, y)\right)\left(\sum_z b(y, z)\right) > 0. \qquad (2.9)$$
Therefore,
$$\frac{p(x, y)p(y, z)}{p(y)} = \frac{\left(a(x, y)\sum_z b(y, z)\right)\left(b(y, z)\sum_x a(x, y)\right)}{\left(\sum_x a(x, y)\right)\left(\sum_z b(y, z)\right)} \qquad (2.10)$$
$$= a(x, y)b(y, z) \qquad (2.11)$$
$$= p(x, y, z). \qquad (2.12)$$
For x, y, and z such that p(y) = 0, since
$$0 \le p(x, y, z) \le p(y) = 0, \qquad (2.13)$$
we have
$$p(x, y, z) = 0. \qquad (2.14)$$
Hence, $X \perp Z|Y$ according to (2.4). The proof is accomplished. ⊓⊔

Definition 2.6 (Markov Chain). For random variables $X_1, X_2, \cdots, X_n$, where $n \ge 3$, $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain if
$$p(x_1, x_2, \cdots, x_n)\,p(x_2)p(x_3)\cdots p(x_{n-1}) = p(x_1, x_2)p(x_2, x_3)\cdots p(x_{n-1}, x_n) \qquad (2.15)$$
for all $x_1, x_2, \cdots, x_n$, or equivalently,
$$p(x_1, x_2, \cdots, x_n) = \begin{cases} p(x_1, x_2)p(x_3|x_2)\cdots p(x_n|x_{n-1}) & \text{if } p(x_2), p(x_3), \cdots, p(x_{n-1}) > 0 \\ 0 & \text{otherwise.} \end{cases} \qquad (2.16)$$

We note that $X \perp Z|Y$ is equivalent to the Markov chain $X \to Y \to Z$.

Proposition 2.7. $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain if and only if $X_n \to X_{n-1} \to \cdots \to X_1$ forms a Markov chain.

Proof. This follows directly from the symmetry in the definition of a Markov chain in (2.15). ⊓⊔

In the following, we state two basic properties of a Markov chain. The proofs are left as an exercise.

Proposition 2.8. $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain if and only if
$$X_1 \to X_2 \to X_3$$
$$(X_1, X_2) \to X_3 \to X_4$$
$$\vdots$$
$$(X_1, X_2, \cdots, X_{n-2}) \to X_{n-1} \to X_n \qquad (2.17)$$
form Markov chains.

Proposition 2.9. $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain if and only if
$$p(x_1, x_2, \cdots, x_n) = f_1(x_1, x_2)f_2(x_2, x_3)\cdots f_{n-1}(x_{n-1}, x_n) \qquad (2.18)$$
for all $x_1, x_2, \cdots, x_n$ such that $p(x_2), p(x_3), \cdots, p(x_{n-1}) > 0$.

Note that Proposition 2.9 is a generalization of Proposition 2.5.
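For finite alphabets, the criterion (2.3) in Definition 2.4 can be checked mechanically. The following Python sketch is our own illustration (the function name and the example distribution are not from the text): it tests $X \perp Z | Y$ by verifying p(x, y, z)p(y) = p(x, y)p(y, z) for every triple, and applies it to a joint distribution of the form p(x)p(y|x)p(z|y), which forms the Markov chain X → Y → Z of Definition 2.6.

```python
from itertools import product

def conditionally_independent(p, tol=1e-12):
    """Test X ⊥ Z | Y via the identity p(x,y,z)*p(y) == p(x,y)*p(y,z) of (2.3).
    p maps triples (x, y, z) to probabilities."""
    xs = {x for x, _, _ in p}
    ys = {y for _, y, _ in p}
    zs = {z for _, _, z in p}
    pxy = {(x, y): sum(p.get((x, y, z), 0.0) for z in zs) for x in xs for y in ys}
    pyz = {(y, z): sum(p.get((x, y, z), 0.0) for x in xs) for y in ys for z in zs}
    py = {y: sum(pxy[(x, y)] for x in xs) for y in ys}
    return all(abs(p.get((x, y, z), 0.0) * py[y] - pxy[(x, y)] * pyz[(y, z)]) < tol
               for x, y, z in product(xs, ys, zs))

# A Markov chain X -> Y -> Z: X is a uniform bit, and each arrow is a
# binary symmetric channel with crossover probability 0.1.
p = {(x, y, z): 0.5 * (0.9 if y == x else 0.1) * (0.9 if z == y else 0.1)
     for x, y, z in product((0, 1), repeat=3)}
print(conditionally_independent(p))  # True: X ⊥ Z | Y holds for a Markov chain
```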
From Proposition 2.9, one can prove the following important property of a Markov chain. Again, the details are left as an exercise.

Proposition 2.10 (Markov subchains). Let $\mathcal{N}_n = \{1, 2, \cdots, n\}$ and let $X_1 \to X_2 \to \cdots \to X_n$ form a Markov chain. For any subset α of $\mathcal{N}_n$, denote $(X_i, i \in \alpha)$ by $X_\alpha$. Then for any disjoint subsets $\alpha_1, \alpha_2, \cdots, \alpha_m$ of $\mathcal{N}_n$ such that
$$k_1 < k_2 < \cdots < k_m \qquad (2.19)$$
for all $k_j \in \alpha_j$, $j = 1, 2, \cdots, m$,
$$X_{\alpha_1} \to X_{\alpha_2} \to \cdots \to X_{\alpha_m} \qquad (2.20)$$
forms a Markov chain. That is, a subchain of $X_1 \to X_2 \to \cdots \to X_n$ is also a Markov chain.

Example 2.11. Let $X_1 \to X_2 \to \cdots \to X_{10}$ form a Markov chain and $\alpha_1 = \{1, 2\}$, $\alpha_2 = \{4\}$, $\alpha_3 = \{6, 8\}$, and $\alpha_4 = \{10\}$ be subsets of $\mathcal{N}_{10}$. Then Proposition 2.10 says that
$$(X_1, X_2) \to X_4 \to (X_6, X_8) \to X_{10} \qquad (2.21)$$
also forms a Markov chain.

We have been very careful in handling probability distributions with zero probability masses. In the rest of the section, we show that such distributions are very delicate in general. We first prove the following property of a strictly positive probability distribution involving four random variables.¹

¹ Proposition 2.12 is called the intersection axiom in Bayesian networks.

Proposition 2.12. Let $X_1, X_2, X_3$, and $X_4$ be random variables such that $p(x_1, x_2, x_3, x_4)$ is strictly positive. Then
$$\left.\begin{aligned} X_1 &\perp X_4 \,|\, (X_2, X_3) \\ X_1 &\perp X_3 \,|\, (X_2, X_4) \end{aligned}\right\} \Rightarrow X_1 \perp (X_3, X_4) \,|\, X_2. \qquad (2.22)$$

Proof. If $X_1 \perp X_4 | (X_2, X_3)$, then
$$p(x_1, x_2, x_3, x_4) = \frac{p(x_1, x_2, x_3)\,p(x_2, x_3, x_4)}{p(x_2, x_3)}. \qquad (2.23)$$
On the other hand, if $X_1 \perp X_3 | (X_2, X_4)$, then
$$p(x_1, x_2, x_3, x_4) = \frac{p(x_1, x_2, x_4)\,p(x_2, x_3, x_4)}{p(x_2, x_4)}. \qquad (2.24)$$
Equating (2.23) and (2.24), we have
$$p(x_1, x_2, x_3) = \frac{p(x_2, x_3)\,p(x_1, x_2, x_4)}{p(x_2, x_4)}. \qquad (2.25)$$
Therefore,
$$p(x_1, x_2) = \sum_{x_3} p(x_1, x_2, x_3) \qquad (2.26)$$
$$= \sum_{x_3} \frac{p(x_2, x_3)\,p(x_1, x_2, x_4)}{p(x_2, x_4)} \qquad (2.27)$$
$$= \frac{p(x_2)\,p(x_1, x_2, x_4)}{p(x_2, x_4)}, \qquad (2.28)$$
or
$$\frac{p(x_1, x_2, x_4)}{p(x_2, x_4)} = \frac{p(x_1, x_2)}{p(x_2)}. \qquad (2.29)$$
Hence from (2.24),
$$p(x_1, x_2, x_3, x_4) = \frac{p(x_1, x_2, x_4)\,p(x_2, x_3, x_4)}{p(x_2, x_4)} = \frac{p(x_1, x_2)\,p(x_2, x_3, x_4)}{p(x_2)} \qquad (2.30)$$
for all $x_1, x_2, x_3$, and $x_4$, i.e., $X_1 \perp (X_3, X_4)|X_2$. ⊓⊔

If $p(x_1, x_2, x_3, x_4) = 0$ for some $x_1, x_2, x_3$, and $x_4$, i.e., p is not strictly positive, the arguments in the above proof are not valid. In fact, the proposition may not hold in this case. For instance, let $X_1 = Y$, $X_2 = Z$, and $X_3 = X_4 = (Y, Z)$, where Y and Z are independent random variables. Then $X_1 \perp X_4 | (X_2, X_3)$ and $X_1 \perp X_3 | (X_2, X_4)$, but $X_1 \not\perp (X_3, X_4) | X_2$. Note that for this construction, p is not strictly positive because $p(x_1, x_2, x_3, x_4) = 0$ if $x_3 \ne (x_1, x_2)$ or $x_4 \ne (x_1, x_2)$.

The above example is somewhat counter-intuitive because it appears that Proposition 2.12 should hold for all probability distributions via a continuity argument,² which would go like this. For any distribution p, let $\{p_k\}$ be a sequence of strictly positive distributions such that $p_k \to p$ and $p_k$ satisfies (2.23) and (2.24) for all k, i.e.,
$$p_k(x_1, x_2, x_3, x_4)\,p_k(x_2, x_3) = p_k(x_1, x_2, x_3)\,p_k(x_2, x_3, x_4) \qquad (2.31)$$
and
$$p_k(x_1, x_2, x_3, x_4)\,p_k(x_2, x_4) = p_k(x_1, x_2, x_4)\,p_k(x_2, x_3, x_4). \qquad (2.32)$$
Then by the proposition, $p_k$ also satisfies (2.30), i.e.,
$$p_k(x_1, x_2, x_3, x_4)\,p_k(x_2) = p_k(x_1, x_2)\,p_k(x_2, x_3, x_4). \qquad (2.33)$$
Letting $k \to \infty$, we have
$$p(x_1, x_2, x_3, x_4)\,p(x_2) = p(x_1, x_2)\,p(x_2, x_3, x_4) \qquad (2.34)$$
for all $x_1, x_2, x_3$, and $x_4$, i.e., $X_1 \perp (X_3, X_4)|X_2$. Such an argument would be valid if there always exists a sequence $\{p_k\}$ as prescribed. However, the existence of the distribution $p(x_1, x_2, x_3, x_4)$ constructed immediately after Proposition 2.12 simply says that it is not always possible to find such a sequence $\{p_k\}$.

² See Section 2.3 for a more detailed discussion on continuous functionals.
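The counterexample above is small enough to verify numerically. Below is a sketch of our own (not from the text) that constructs the joint distribution with Y and Z independent uniform bits, $X_1 = Y$, $X_2 = Z$, $X_3 = X_4 = (Y, Z)$, and checks each conditional independence in (2.22) through the identity p(a, b, c)p(c) = p(a, c)p(b, c).

```python
from itertools import product

# Y, Z independent uniform bits; X1 = Y, X2 = Z, X3 = X4 = (Y, Z).
p = {(y, z, (y, z), (y, z)): 0.25 for y, z in product((0, 1), repeat=2)}

def marg(idx):
    """Marginal distribution of the coordinates listed in idx."""
    m = {}
    for k, v in p.items():
        key = tuple(k[i] for i in idx)
        m[key] = m.get(key, 0.0) + v
    return m

def ci(A, B, C, tol=1e-12):
    """Test X_A ⊥ X_B | X_C via p(a,b,c)*p(c) == p(a,c)*p(b,c) for all values."""
    pABC, pAC, pBC, pC = marg(A + B + C), marg(A + C), marg(B + C), marg(C)
    for a in marg(A):
        for b in marg(B):
            for c in marg(C):
                lhs = pABC.get(a + b + c, 0.0) * pC.get(c, 0.0)
                rhs = pAC.get(a + c, 0.0) * pBC.get(b + c, 0.0)
                if abs(lhs - rhs) > tol:
                    return False
    return True

print(ci((0,), (3,), (1, 2)))   # True : X1 ⊥ X4 | (X2, X3)
print(ci((0,), (2,), (1, 3)))   # True : X1 ⊥ X3 | (X2, X4)
print(ci((0,), (2, 3), (1,)))   # False: X1 is not ⊥ (X3, X4) | X2
```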
Therefore, probability distributions which are not strictly positive can be very delicate. For strictly positive distributions, we see from Proposition 2.5 that their conditional independence structures are closely related to the factorization problem of such distributions, which has been investigated by Chan.

2.2 Shannon's Information Measures

We begin this section by introducing the entropy of a random variable. As we will see shortly, all Shannon's information measures can be expressed as linear combinations of entropies.

Definition 2.13. The entropy H(X) of a random variable X is defined as
$$H(X) = -\sum_x p(x)\log p(x). \qquad (2.35)$$

In the definitions of all information measures, we adopt the convention that summation is taken over the corresponding support. Such a convention is necessary because $p(x)\log p(x)$ in (2.35) is undefined if p(x) = 0. The base of the logarithm in (2.35) can be chosen to be any convenient real number greater than 1. We write H(X) as $H_\alpha(X)$ when the base of the logarithm is α. When the base of the logarithm is 2, the unit for entropy is the bit. When the base of the logarithm is e, the unit for entropy is the nat. When the base of the logarithm is an integer $D \ge 2$, the unit for entropy is the D-it (D-ary digit). In the context of source coding, the base is usually taken to be the size of the code alphabet. This will be discussed in Chapter 4.

In computer science, a bit means an entity which can take the value 0 or 1. In information theory, the entropy of a random variable is measured in bits. The reader should distinguish these two meanings of a bit from each other carefully.

Let g(X) be any function of a random variable X. We will denote the expectation of g(X) by Eg(X), i.e.,
$$Eg(X) = \sum_x p(x)g(x), \qquad (2.36)$$
where the summation is over $S_X$. Then the definition of the entropy of a random variable X can be written as
$$H(X) = -E\log p(X). \qquad (2.37)$$
Expressions of Shannon's information measures in terms of expectations will be useful in subsequent discussions.

The entropy H(X) of a random variable X is a functional of the probability distribution p(x) which measures the average amount of information contained in X, or equivalently, the average amount of uncertainty removed upon revealing the outcome of X. Note that H(X) depends only on p(x), not on the actual values in $\mathcal{X}$. Occasionally, we also denote H(X) by H(p).

For $0 \le \gamma \le 1$, define
$$h_b(\gamma) = -\gamma\log\gamma - (1-\gamma)\log(1-\gamma) \qquad (2.38)$$
with the convention $0\log 0 = 0$, so that $h_b(0) = h_b(1) = 0$. With this convention, $h_b(\gamma)$ is continuous at γ = 0 and γ = 1. $h_b$ is called the binary entropy function. For a binary random variable X with distribution {γ, 1 − γ},
$$H(X) = h_b(\gamma). \qquad (2.39)$$
Figure 2.1 is the plot of $h_b(\gamma)$ versus γ in the base 2. Note that $h_b(\gamma)$ achieves the maximum value 1 when γ = 1/2.

[Fig. 2.1. $h_b(\gamma)$ versus γ in the base 2.]

The definition of the joint entropy of two random variables is similar to the definition of the entropy of a single random variable. Extension of this definition to more than two random variables is straightforward.

Definition 2.14. The joint entropy H(X, Y) of a pair of random variables X and Y is defined as
$$H(X, Y) = -\sum_{x,y} p(x, y)\log p(x, y) = -E\log p(X, Y). \qquad (2.40)$$
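As a quick numerical companion to Definitions 2.13 and 2.14 and the binary entropy function (2.38), here is a minimal Python sketch (our own illustration, not from the text). It computes entropy over the support only, so the convention 0 log 0 = 0 is respected.

```python
import math

def entropy(probs, base=2):
    """H = -sum p*log(p), summing over the support only (convention 0*log 0 = 0)."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def h_b(g):
    """The binary entropy function h_b(gamma) of (2.38), in bits."""
    return entropy([g, 1 - g])

print(entropy([0.5, 0.25, 0.25]))          # 1.5 bits
print(h_b(0.5))                            # 1.0, the maximum of h_b
print(h_b(0.0), h_b(1.0))                  # 0.0 0.0, consistent with h_b(0) = h_b(1) = 0
print(entropy([0.5, 0.25, 0.25], math.e))  # the same entropy measured in nats
```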
For two random variables, we define in the following the conditional entropy of one random variable when the other random variable is given.

Definition 2.15. For random variables X and Y, the conditional entropy of Y given X is defined as
$$H(Y|X) = -\sum_{x,y} p(x, y)\log p(y|x) = -E\log p(Y|X). \qquad (2.41)$$

From (2.41), we can write
$$H(Y|X) = \sum_x p(x)\left[-\sum_y p(y|x)\log p(y|x)\right]. \qquad (2.42)$$
The inner sum is the entropy of Y conditioning on a fixed $x \in S_X$. Thus we are motivated to express H(Y|X) as
$$H(Y|X) = \sum_x p(x)H(Y|X=x), \qquad (2.43)$$
where
$$H(Y|X=x) = -\sum_y p(y|x)\log p(y|x). \qquad (2.44)$$
Observe that the right hand sides of (2.35) and (2.44) have exactly the same form. Similarly, for H(Y|X, Z), we write
$$H(Y|X, Z) = \sum_z p(z)H(Y|X, Z=z), \qquad (2.45)$$
where
$$H(Y|X, Z=z) = -\sum_{x,y} p(x, y|z)\log p(y|x, z). \qquad (2.46)$$

Proposition 2.16.
$$H(X, Y) = H(X) + H(Y|X) \qquad (2.47)$$
and
$$H(X, Y) = H(Y) + H(X|Y). \qquad (2.48)$$

Proof. Consider
$$H(X, Y) = -E\log p(X, Y) \qquad (2.49)$$
$$= -E\log[p(X)p(Y|X)] \qquad (2.50)$$
$$= -E\log p(X) - E\log p(Y|X) \qquad (2.51)$$
$$= H(X) + H(Y|X). \qquad (2.52)$$
Note that (2.50) is justified because the summation of the expectation is over $S_{XY}$, and we have used the linearity of expectation³ to obtain (2.51). This proves (2.47), and (2.48) follows by symmetry. ⊓⊔

³ See Problem 5 at the end of the chapter.

This proposition has the following interpretation. Consider revealing the outcome of a pair of random variables X and Y in two steps: first the outcome of X and then the outcome of Y. Then the proposition says that the total amount of uncertainty removed upon revealing both X and Y is equal to the sum of the uncertainty removed upon revealing X (uncertainty removed in the first step) and the uncertainty removed upon revealing Y once X is known (uncertainty removed in the second step).

Definition 2.17. For random variables X and Y, the mutual information between X and Y is defined as
$$I(X; Y) = \sum_{x,y} p(x, y)\log\frac{p(x, y)}{p(x)p(y)} = E\log\frac{p(X, Y)}{p(X)p(Y)}. \qquad (2.53)$$

Remark. I(X; Y) is symmetrical in X and Y.

Proposition 2.18. The mutual information between a random variable X and itself is equal to the entropy of X, i.e., I(X; X) = H(X).

Proof. This can be seen by considering
$$I(X; X) = E\log\frac{p(X)}{p(X)^2} \qquad (2.54)$$
$$= -E\log p(X) \qquad (2.55)$$
$$= H(X). \qquad (2.56)$$
The proposition is proved. ⊓⊔

Remark. The entropy of X is sometimes called the self-information of X.

Proposition 2.19.
$$I(X; Y) = H(X) - H(X|Y), \qquad (2.57)$$
$$I(X; Y) = H(Y) - H(Y|X), \qquad (2.58)$$
and
$$I(X; Y) = H(X) + H(Y) - H(X, Y), \qquad (2.59)$$
provided that all the entropies and conditional entropies are finite (see Example 2.46 in Section 2.8).

The proof of this proposition is left as an exercise.

From (2.57), we can interpret I(X; Y) as the reduction in uncertainty about X when Y is given, or equivalently, the amount of information about X provided by Y. Since I(X; Y) is symmetrical in X and Y, from (2.58), we can as well interpret I(X; Y) as the amount of information about Y provided by X.

The relations between the (joint) entropies, conditional entropies, and mutual information for two random variables X and Y are given in Propositions 2.16 and 2.19. These relations can be summarized by the diagram in Figure 2.2, which is a variation of the Venn diagram.⁴ One can check that all the relations between Shannon's information measures for X and Y which are shown in Figure 2.2 are consistent with the relations given in Propositions 2.16 and 2.19. This one-to-one correspondence between Shannon's information measures and set theory is not just a coincidence for two random variables. We will discuss this in depth when we introduce the I-Measure in Chapter 3.

⁴ The rectangle representing the universal set in a usual Venn diagram is missing in Figure 2.2.
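The identities in Propositions 2.16 and 2.19 can be checked numerically for any finite joint distribution. The following sketch (our own illustration) computes I(X; Y) both from Definition 2.17 and from the entropy identity (2.59), and confirms that they agree.

```python
import math

def H(dist):
    """Entropy in bits of a distribution given as a dict value -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# An arbitrary joint distribution p(x, y) on {0,1} x {0,1}.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
px = {x: sum(v for (a, b), v in pxy.items() if a == x) for x in (0, 1)}
py = {y: sum(v for (a, b), v in pxy.items() if b == y) for y in (0, 1)}

# I(X;Y) from (2.59): H(X) + H(Y) - H(X,Y).
I_from_entropies = H(px) + H(py) - H(pxy)

# I(X;Y) directly from Definition 2.17.
I_direct = sum(v * math.log2(v / (px[x] * py[y]))
               for (x, y), v in pxy.items() if v > 0)

print(abs(I_from_entropies - I_direct) < 1e-12)  # True
print(H(pxy) - H(px))  # H(Y|X), by (2.47)
```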
[Fig. 2.2. Relationship between entropies and mutual information for two random variables.]

Analogous to entropy, there is a conditional version of mutual information called conditional mutual information.

Definition 2.20. For random variables X, Y, and Z, the mutual information between X and Y conditioning on Z is defined as
$$I(X; Y|Z) = \sum_{x,y,z} p(x, y, z)\log\frac{p(x, y|z)}{p(x|z)p(y|z)} = E\log\frac{p(X, Y|Z)}{p(X|Z)p(Y|Z)}. \qquad (2.60)$$

Remark. I(X; Y|Z) is symmetrical in X and Y.

Analogous to conditional entropy, we write
$$I(X; Y|Z) = \sum_z p(z)I(X; Y|Z=z), \qquad (2.61)$$
where
$$I(X; Y|Z=z) = \sum_{x,y} p(x, y|z)\log\frac{p(x, y|z)}{p(x|z)p(y|z)}. \qquad (2.62)$$
Similarly, when conditioning on two random variables, we write
$$I(X; Y|Z, T) = \sum_t p(t)I(X; Y|Z, T=t), \qquad (2.63)$$
where
$$I(X; Y|Z, T=t) = \sum_{x,y,z} p(x, y, z|t)\log\frac{p(x, y|z, t)}{p(x|z, t)p(y|z, t)}. \qquad (2.64)$$

Conditional mutual information satisfies the same set of relations given in Propositions 2.18 and 2.19 for mutual information, except that all the terms are now conditioned on a random variable Z. We state these relations in the next two propositions. The proofs are omitted.

Proposition 2.21. The mutual information between a random variable X and itself conditioning on a random variable Z is equal to the conditional entropy of X given Z, i.e., I(X; X|Z) = H(X|Z).

Proposition 2.22.
$$I(X; Y|Z) = H(X|Z) - H(X|Y, Z), \qquad (2.65)$$
$$I(X; Y|Z) = H(Y|Z) - H(Y|X, Z), \qquad (2.66)$$
and
$$I(X; Y|Z) = H(X|Z) + H(Y|Z) - H(X, Y|Z), \qquad (2.67)$$
provided that all the conditional entropies are finite.

Remark. All Shannon's information measures are finite if the random variables involved have finite alphabets. Therefore, Propositions 2.19 and 2.22 apply provided that all the random variables therein have finite alphabets.

To conclude this section, we show that all Shannon's information measures are special cases of conditional mutual information. Let Φ be a degenerate random variable, i.e., Φ takes a constant value with probability 1. Consider the mutual information I(X; Y|Z). When X = Y and Z = Φ, I(X; Y|Z) becomes the entropy H(X). When X = Y, I(X; Y|Z) becomes the conditional entropy H(X|Z). When Z = Φ, I(X; Y|Z) becomes the mutual information I(X; Y). Thus all Shannon's information measures are special cases of conditional mutual information.

2.3 Continuity of Shannon's Information Measures for Fixed Finite Alphabets

In this section, we prove that for fixed finite alphabets, all Shannon's information measures are continuous functionals of the joint distribution of the random variables involved. To formulate the notion of continuity, we first introduce the variational distance⁵ as a distance measure between two probability distributions on a common alphabet.

Definition 2.23. Let p and q be two probability distributions on a common alphabet $\mathcal{X}$. The variational distance between p and q is defined as
$$V(p, q) = \sum_{x\in\mathcal{X}} |p(x) - q(x)|. \qquad (2.68)$$

⁵ The variational distance is also referred to as the $L^1$ distance in mathematics.

For a fixed finite alphabet $\mathcal{X}$, let $\mathcal{P}_\mathcal{X}$ be the set of all distributions on $\mathcal{X}$. Then according to (2.35), the entropy of a distribution p on an alphabet $\mathcal{X}$ is defined as
$$H(p) = -\sum_{x\in S_p} p(x)\log p(x), \qquad (2.69)$$
where $S_p$ denotes the support of p and $S_p \subset \mathcal{X}$.
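Definition 2.23 translates directly into code. A minimal sketch (our own illustration), taking distributions as dicts over a common alphabet:

```python
def variational_distance(p, q):
    """V(p, q) = sum over x of |p(x) - q(x)|, as in (2.68)."""
    return sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in set(p) | set(q))

p = {'a': 0.5, 'b': 0.5}
q = {'a': 0.4, 'b': 0.4, 'c': 0.2}
print(variational_distance(p, q))  # 0.1 + 0.1 + 0.2 = 0.4
```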
For H(p) to be continuous with respect to convergence in variational distance⁶ at a particular distribution $p \in \mathcal{P}_\mathcal{X}$, we require that for any ϵ > 0, there exists δ > 0 such that
$$|H(p) - H(q)| < \epsilon \qquad (2.70)$$
for all $q \in \mathcal{P}_\mathcal{X}$ satisfying
$$V(p, q) < \delta, \qquad (2.71)$$
or equivalently,
$$\lim_{p'\to p} H(p') = H\!\left(\lim_{p'\to p} p'\right) = H(p), \qquad (2.72)$$
where the convergence $p' \to p$ is in variational distance.

⁶ Convergence in variational distance is the same as $L^1$-convergence.

Since $a\log a \to 0$ as $a \to 0$, we define a function $l: [0, \infty) \to \Re$ by
$$l(a) = \begin{cases} a\log a & \text{if } a > 0 \\ 0 & \text{if } a = 0, \end{cases} \qquad (2.73)$$
i.e., l(a) is a continuous extension of $a\log a$. Then (2.69) can be rewritten as
$$H(p) = -\sum_{x\in\mathcal{X}} l(p(x)), \qquad (2.74)$$
where the summation above is over all x in $\mathcal{X}$ instead of $S_p$. Upon defining a function $l_x: \mathcal{P}_\mathcal{X} \to \Re$ for all $x \in \mathcal{X}$ by
$$l_x(p) = l(p(x)), \qquad (2.75)$$
(2.74) becomes
$$H(p) = -\sum_{x\in\mathcal{X}} l_x(p). \qquad (2.76)$$
Evidently, $l_x(p)$ is continuous in p (with respect to convergence in variational distance). Since the summation in (2.76) involves a finite number of terms, we conclude that H(p) is a continuous functional of p.

We now proceed to prove the continuity of conditional mutual information, which covers all cases of Shannon's information measures. Consider I(X; Y|Z) and let $p_{XYZ}$ be the joint distribution of X, Y, and Z, where the alphabets $\mathcal{X}$, $\mathcal{Y}$, and $\mathcal{Z}$ are assumed to be finite. From (2.47) and (2.67), we obtain
$$I(X; Y|Z) = H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z). \qquad (2.77)$$
Note that each term on the right hand side above is the unconditional entropy of the corresponding marginal distribution. Then (2.77) can be rewritten as
$$I_{X;Y|Z}(p_{XYZ}) = H(p_{XZ}) + H(p_{YZ}) - H(p_{XYZ}) - H(p_Z), \qquad (2.78)$$
where we have used $I_{X;Y|Z}(p_{XYZ})$ to denote I(X; Y|Z). It follows that
$$\lim_{p'_{XYZ}\to p_{XYZ}} I_{X;Y|Z}(p'_{XYZ}) = \lim_{p'_{XYZ}\to p_{XYZ}} \left[H(p'_{XZ}) + H(p'_{YZ}) - H(p'_{XYZ}) - H(p'_Z)\right] \qquad (2.79)$$
$$= \lim_{p'_{XYZ}\to p_{XYZ}} H(p'_{XZ}) + \lim_{p'_{XYZ}\to p_{XYZ}} H(p'_{YZ}) - \lim_{p'_{XYZ}\to p_{XYZ}} H(p'_{XYZ}) - \lim_{p'_{XYZ}\to p_{XYZ}} H(p'_Z). \qquad (2.80)$$
It can readily be proved, for example, that
$$\lim_{p'_{XYZ}\to p_{XYZ}} p'_{XZ} = p_{XZ}, \qquad (2.81)$$
so that
$$\lim_{p'_{XYZ}\to p_{XYZ}} H(p'_{XZ}) = H\!\left(\lim_{p'_{XYZ}\to p_{XYZ}} p'_{XZ}\right) = H(p_{XZ}) \qquad (2.82)$$
by the continuity of H(·) when the alphabets involved are fixed and finite. The details are left as an exercise. Hence, we conclude that
$$\lim_{p'_{XYZ}\to p_{XYZ}} I_{X;Y|Z}(p'_{XYZ}) = H(p_{XZ}) + H(p_{YZ}) - H(p_{XYZ}) - H(p_Z) \qquad (2.83)$$
$$= I_{X;Y|Z}(p_{XYZ}), \qquad (2.84)$$
i.e., $I_{X;Y|Z}(p_{XYZ})$ is a continuous functional of $p_{XYZ}$.

Since conditional mutual information covers all cases of Shannon's information measures, we have proved that all Shannon's information measures are continuous with respect to convergence in variational distance under the assumption that the alphabets are fixed and finite. It is not difficult to show that under this assumption, convergence in variational distance is equivalent to $L^2$-convergence, i.e., convergence in Euclidean distance (see Problem 8). It follows that Shannon's information measures are also continuous with respect to $L^2$-convergence. The variational distance, however, is more often used as a distance measure between two probability distributions because it can be directly related to the informational divergence to be discussed in Section 2.5.

The continuity of Shannon's information measures proved in this section is rather restrictive and needs to be applied with caution. In fact, if the alphabets are not fixed, Shannon's information measures are everywhere discontinuous with respect to convergence in a number of commonly used distance measures. We refer the readers to Problems 29 to 32 for a discussion of these issues.
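The contrast between the fixed-alphabet and growing-alphabet cases can be seen numerically. In the sketch below (our own illustration, not from the text), a perturbation that is small in variational distance produces a small change in entropy on a fixed alphabet, whereas spreading a vanishing amount of mass over an exponentially large alphabet keeps V small while the entropy change does not vanish.

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

def V(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

# Fixed alphabet of size 4: a tiny perturbation in V gives a tiny change in H.
p = [0.4, 0.3, 0.2, 0.1]
q = [0.4 - 1e-6, 0.3, 0.2, 0.1 + 1e-6]
print(V(p, q), abs(H(p) - H(q)))  # both on the order of 1e-6

# Growing alphabet: move mass 1/k from a point mass onto 2**k new symbols.
# V -> 0 as k grows, but the entropy difference stays above 1 bit.
for k in (4, 8, 16):
    eps, m = 1 / k, 2 ** k
    pk = [1.0] + [0.0] * m
    qk = [1 - eps] + [eps / m] * m
    print(k, V(pk, qk), H(qk))
```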
2.4 Chain Rules

In this section, we present a collection of information identities known as the chain rules, which are often used in information theory.

Proposition 2.24 (Chain Rule for Entropy).
$$H(X_1, X_2, \cdots, X_n) = \sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}). \qquad (2.85)$$

Proof. The chain rule for n = 2 has been proved in Proposition 2.16. We prove the chain rule by induction on n. Assume (2.85) is true for n = m, where $m \ge 2$. Then
$$H(X_1, \cdots, X_m, X_{m+1}) = H(X_1, \cdots, X_m) + H(X_{m+1}|X_1, \cdots, X_m) \qquad (2.86)$$
$$= \sum_{i=1}^m H(X_i|X_1, \cdots, X_{i-1}) + H(X_{m+1}|X_1, \cdots, X_m) \qquad (2.87)$$
$$= \sum_{i=1}^{m+1} H(X_i|X_1, \cdots, X_{i-1}), \qquad (2.88)$$
where in (2.86) we have used (2.47) by letting $X = (X_1, \cdots, X_m)$ and $Y = X_{m+1}$, and in (2.87) we have used (2.85) for n = m. This proves the chain rule for entropy. ⊓⊔

The chain rule for entropy has the following conditional version.

Proposition 2.25 (Chain Rule for Conditional Entropy).
$$H(X_1, X_2, \cdots, X_n|Y) = \sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}, Y). \qquad (2.89)$$

Proof. The proposition can be proved by considering
$$H(X_1, X_2, \cdots, X_n|Y) = H(X_1, X_2, \cdots, X_n, Y) - H(Y) \qquad (2.90)$$
$$= H((X_1, Y), X_2, \cdots, X_n) - H(Y) \qquad (2.91)$$
$$= H(X_1, Y) + \sum_{i=2}^n H(X_i|X_1, \cdots, X_{i-1}, Y) - H(Y) \qquad (2.92)$$
$$= H(X_1|Y) + \sum_{i=2}^n H(X_i|X_1, \cdots, X_{i-1}, Y) \qquad (2.93)$$
$$= \sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}, Y), \qquad (2.94)$$
where (2.90) and (2.93) follow from Proposition 2.16, while (2.92) follows from Proposition 2.24.

Alternatively, the proposition can be proved by considering
$$H(X_1, X_2, \cdots, X_n|Y) = \sum_y p(y)H(X_1, X_2, \cdots, X_n|Y=y) \qquad (2.95)$$
$$= \sum_y p(y)\sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}, Y=y) \qquad (2.96)$$
$$= \sum_{i=1}^n \sum_y p(y)H(X_i|X_1, \cdots, X_{i-1}, Y=y) \qquad (2.97)$$
$$= \sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}, Y), \qquad (2.98)$$
where (2.95) and (2.98) follow from (2.43) and (2.45), respectively, and (2.96) follows from an application of Proposition 2.24 to the joint distribution of $X_1, X_2, \cdots, X_n$ conditioning on $\{Y = y\}$. This proof offers an explanation of the observation that (2.89) can be obtained directly from (2.85) by conditioning on Y in every term. ⊓⊔

Proposition 2.26 (Chain Rule for Mutual Information).
$$I(X_1, X_2, \cdots, X_n; Y) = \sum_{i=1}^n I(X_i; Y|X_1, \cdots, X_{i-1}). \qquad (2.99)$$

Proof. Consider
$$I(X_1, X_2, \cdots, X_n; Y) = H(X_1, X_2, \cdots, X_n) - H(X_1, X_2, \cdots, X_n|Y) \qquad (2.100)$$
$$= \sum_{i=1}^n \left[H(X_i|X_1, \cdots, X_{i-1}) - H(X_i|X_1, \cdots, X_{i-1}, Y)\right] \qquad (2.101)$$
$$= \sum_{i=1}^n I(X_i; Y|X_1, \cdots, X_{i-1}), \qquad (2.102)$$
where in (2.101) we have invoked both Propositions 2.24 and 2.25. The chain rule for mutual information is proved. ⊓⊔

Proposition 2.27 (Chain Rule for Conditional Mutual Information). For random variables $X_1, X_2, \cdots, X_n$, Y, and Z,
$$I(X_1, X_2, \cdots, X_n; Y|Z) = \sum_{i=1}^n I(X_i; Y|X_1, \cdots, X_{i-1}, Z). \qquad (2.103)$$

Proof. This is the conditional version of the chain rule for mutual information. The proof is similar to that of Proposition 2.25. The details are omitted. ⊓⊔
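The chain rule (2.85) can be checked numerically for a small joint distribution, with each conditional entropy computed from its definition rather than as a difference of joint entropies. A sketch of our own illustration:

```python
import math
import random
from itertools import product

random.seed(0)
triples = list(product((0, 1), repeat=3))
w = [random.random() for _ in triples]
total = sum(w)
p = {t: wi / total for t, wi in zip(triples, w)}  # strictly positive p(x1, x2, x3)

def marg(idx):
    """Marginal distribution of the coordinates listed in idx."""
    m = {}
    for t, v in p.items():
        k = tuple(t[i] for i in idx)
        m[k] = m.get(k, 0.0) + v
    return m

def H_cond(T, G):
    """H(X_T | X_G) = -sum p(t, g) log p(t|g), following Definition 2.15."""
    m_tg, m_g = marg(T + G), marg(G)
    return -sum(v * math.log2(v / m_g[k[len(T):]]) for k, v in m_tg.items() if v > 0)

lhs = H_cond((0, 1, 2), ())  # H(X1, X2, X3); conditioning on nothing gives plain entropy
rhs = H_cond((0,), ()) + H_cond((1,), (0,)) + H_cond((2,), (0, 1))
print(abs(lhs - rhs) < 1e-12)  # True: (2.85) holds
```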
2.5 Informational Divergence

Let $p$ and $q$ be two probability distributions on a common alphabet $\mathcal{X}$. We very often want to measure how much $p$ differs from $q$, and vice versa. In order to be useful, such a measure must be nonnegative, and it must take the zero value if and only if $p = q$. We denote the supports of $p$ and $q$ by $S_p$ and $S_q$, respectively. The informational divergence defined below serves this purpose.

Definition 2.28. The informational divergence between two probability distributions $p$ and $q$ on a common alphabet $\mathcal{X}$ is defined as
$$D(p\|q) = \sum_x p(x) \log \frac{p(x)}{q(x)} = E_p \log \frac{p(X)}{q(X)}, \quad (2.104)$$
where $E_p$ denotes expectation with respect to $p$.

In the above definition, in addition to the convention that the summation is taken over $S_p$, we further adopt the convention $c \log \frac{c}{0} = \infty$ for $c > 0$. With this convention, if $D(p\|q) < \infty$, then $p(x) = 0$ whenever $q(x) = 0$, i.e., $S_p \subset S_q$.

In the literature, the informational divergence is also referred to as relative entropy or the Kullback-Leibler distance. We note that $D(p\|q)$ is not symmetrical in $p$ and $q$, so it is not a true metric or "distance." Moreover, $D(\cdot\|\cdot)$ does not satisfy the triangle inequality (see Problem 14). In the rest of the book, the informational divergence will be referred to as divergence for brevity.

Before we prove that divergence is always nonnegative, we first establish the following simple but important inequality, called the fundamental inequality in information theory.

Lemma 2.29 (Fundamental Inequality). For any $a > 0$,
$$\ln a \le a - 1 \quad (2.105)$$
with equality if and only if $a = 1$.

Proof. Let $f(a) = \ln a - a + 1$. Then $f'(a) = 1/a - 1$ and $f''(a) = -1/a^2$. Since $f(1) = 0$, $f'(1) = 0$, and $f''(1) = -1 < 0$, we see that $f(a)$ attains its maximum value 0 when $a = 1$. This proves (2.105). It is also clear that equality holds in (2.105) if and only if $a = 1$. Figure 2.3 is an illustration of the fundamental inequality. □

[Fig. 2.3. The fundamental inequality $\ln a \le a - 1$: the curve $\ln a$ lies below the line $a - 1$, touching it only at $a = 1$.]

Corollary 2.30. For any $a > 0$,
$$\ln a \ge 1 - \frac{1}{a} \quad (2.106)$$
with equality if and only if $a = 1$.

Proof. This can be proved by replacing $a$ by $1/a$ in (2.105). □

We can see from Figure 2.3 that the fundamental inequality results from the concavity of the logarithmic function. In fact, many important results in information theory are direct or indirect consequences of this concavity!

Theorem 2.31 (Divergence Inequality). For any two probability distributions $p$ and $q$ on a common alphabet $\mathcal{X}$,
$$D(p\|q) \ge 0 \quad (2.107)$$
with equality if and only if $p = q$.

Proof. If $q(x) = 0$ for some $x \in S_p$, then $D(p\|q) = \infty$ and the theorem is trivially true. Therefore, we assume that $q(x) > 0$ for all $x \in S_p$. Consider
$$D(p\|q) = (\log e) \sum_{x \in S_p} p(x) \ln \frac{p(x)}{q(x)} \quad (2.108)$$
$$\ge (\log e) \sum_{x \in S_p} p(x) \left(1 - \frac{q(x)}{p(x)}\right) \quad (2.109)$$
$$= (\log e) \left(\sum_{x \in S_p} p(x) - \sum_{x \in S_p} q(x)\right) \quad (2.110)$$
$$\ge 0, \quad (2.111)$$
where (2.109) results from an application of (2.106), and (2.111) follows from
$$\sum_{x \in S_p} q(x) \le 1 = \sum_{x \in S_p} p(x). \quad (2.112)$$
This proves (2.107). For equality to hold in (2.107), equality must hold in (2.109) for all $x \in S_p$ and also in (2.111). For the former, we see from Lemma 2.29 that this is the case if and only if
$$p(x) = q(x) \quad \text{for all } x \in S_p, \quad (2.113)$$
which implies
$$\sum_{x \in S_p} q(x) = \sum_{x \in S_p} p(x) = 1, \quad (2.114)$$
i.e., (2.111) holds with equality. Thus (2.113) is a necessary and sufficient condition for equality to hold in (2.107). It is immediate that $p = q$ implies (2.113), so it remains to prove the converse. Since $\sum_x q(x) = 1$ and $q(x) \ge 0$ for all $x$, (2.113) implies $q(x) = 0$ for all $x \notin S_p$, and therefore $p = q$. The theorem is proved. □
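The conventions in Definition 2.28 translate directly into code. Below is a minimal sketch (in Python; the function name `D` is ours) that sums over $S_p$ and returns $\infty$ when $S_p \not\subset S_q$; the example also illustrates that $D(\cdot\|\cdot)$ is not symmetric.

```python
import numpy as np

def D(p, q):
    """D(p||q) in bits, following the conventions of Definition 2.28:
    the sum is over the support of p, and c*log(c/0) = +inf for c > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    sp = p > 0
    if np.any(q[sp] == 0):          # S_p is not contained in S_q
        return np.inf
    return np.sum(p[sp] * np.log2(p[sp] / q[sp]))

p = [0.5, 0.5, 0.0]
q = [0.25, 0.25, 0.5]
print(D(p, q))   # 1.0
print(D(q, p))   # inf, since the support of q is not contained in that of p
```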
We now prove a very useful consequence of the divergence inequality called the log-sum inequality.

Theorem 2.32 (Log-Sum Inequality). For positive numbers $a_1, a_2, \cdots$ and nonnegative numbers $b_1, b_2, \cdots$ such that $\sum_i a_i < \infty$ and $0 < \sum_i b_i < \infty$,
$$\sum_i a_i \log \frac{a_i}{b_i} \ge \left(\sum_i a_i\right) \log \frac{\sum_i a_i}{\sum_i b_i} \quad (2.115)$$
with the convention that $\log \frac{a_i}{0} = \infty$. Moreover, equality holds if and only if $\frac{a_i}{b_i} = $ constant for all $i$.

The log-sum inequality can easily be understood by writing it out for the case when there are two terms in each of the summations:
$$a_1 \log \frac{a_1}{b_1} + a_2 \log \frac{a_2}{b_2} \ge (a_1 + a_2) \log \frac{a_1 + a_2}{b_1 + b_2}. \quad (2.116)$$

Proof of Theorem 2.32. Let $a'_i = a_i/\sum_j a_j$ and $b'_i = b_i/\sum_j b_j$. Then $\{a'_i\}$ and $\{b'_i\}$ are probability distributions. Using the divergence inequality, we have
$$0 \le \sum_i a'_i \log \frac{a'_i}{b'_i} \quad (2.117)$$
$$= \sum_i \frac{a_i}{\sum_j a_j} \log \frac{a_i/\sum_j a_j}{b_i/\sum_j b_j} \quad (2.118)$$
$$= \frac{1}{\sum_j a_j}\left[\sum_i a_i \log \frac{a_i}{b_i} - \left(\sum_i a_i\right) \log \frac{\sum_j a_j}{\sum_j b_j}\right], \quad (2.119)$$
which implies (2.115). Equality holds if and only if $a'_i = b'_i$ for all $i$, or $\frac{a_i}{b_i} = $ constant for all $i$. The theorem is proved. □

One can also prove the divergence inequality by using the log-sum inequality (see Problem 21), so the two inequalities are in fact equivalent. The log-sum inequality also finds application in proving the next theorem, which gives a lower bound on the divergence between two probability distributions on a common alphabet in terms of the variational distance between them. We will see further applications of the log-sum inequality when we discuss the convergence of some iterative algorithms in Chapter 9.

Theorem 2.33 (Pinsker's Inequality).
$$D(p\|q) \ge \frac{1}{2 \ln 2} V^2(p, q). \quad (2.120)$$

Both divergence and the variational distance can be used as measures of the difference between two probability distributions defined on the same alphabet. Pinsker's inequality has the important implication that for two probability distributions $p$ and $q$ defined on the same alphabet, if $D(p\|q)$ or $D(q\|p)$ is small, then so is $V(p, q)$. Furthermore, for a sequence of probability distributions $q_k$, as $k \to \infty$, if $D(p\|q_k) \to 0$ or $D(q_k\|p) \to 0$, then $V(p, q_k) \to 0$. In other words, convergence in divergence is a stronger notion of convergence than convergence in variational distance. The proof of Pinsker's inequality, as well as its consequence discussed above, is left as an exercise (see Problems 24 and 25).
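Although the proof of Pinsker's inequality is left as an exercise, the inequality itself is easy to check numerically. A small sketch (in Python; the function names are ours, and the random distributions are given full support so that the divergence is finite):

```python
import numpy as np

rng = np.random.default_rng(1)

def D(p, q):
    """Divergence in bits; here S_p is a subset of S_q by construction."""
    sp = p > 0
    return np.sum(p[sp] * np.log2(p[sp] / q[sp]))

def V(p, q):
    return np.sum(np.abs(p - q))

for _ in range(5):
    p = rng.random(4); p /= p.sum()
    q = rng.random(4); q /= q.sum()
    lhs, rhs = D(p, q), V(p, q) ** 2 / (2 * np.log(2))
    print(f"D = {lhs:.4f} >= V^2/(2 ln 2) = {rhs:.4f}: {lhs >= rhs}")
```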
2.6 The Basic Inequalities

In this section, we prove that all Shannon's information measures, namely entropy, conditional entropy, mutual information, and conditional mutual information, are always nonnegative. By this, we mean that these quantities are nonnegative for all joint distributions for the random variables involved.

Theorem 2.34. For random variables $X$, $Y$, and $Z$,
$$I(X; Y|Z) \ge 0, \quad (2.121)$$
with equality if and only if $X$ and $Y$ are independent when conditioning on $Z$.

Proof. Observe that
$$I(X; Y|Z) = \sum_{x,y,z} p(x, y, z) \log \frac{p(x, y|z)}{p(x|z)p(y|z)} \quad (2.122)$$
$$= \sum_z p(z) \sum_{x,y} p(x, y|z) \log \frac{p(x, y|z)}{p(x|z)p(y|z)} \quad (2.123)$$
$$= \sum_z p(z) D(p_{XY|z} \| p_{X|z} p_{Y|z}), \quad (2.124)$$
where we have used $p_{XY|z}$ to denote $\{p(x, y|z), (x, y) \in \mathcal{X} \times \mathcal{Y}\}$, etc. Since for a fixed $z$, both $p_{XY|z}$ and $p_{X|z}p_{Y|z}$ are joint probability distributions on $\mathcal{X} \times \mathcal{Y}$, we have
$$D(p_{XY|z} \| p_{X|z} p_{Y|z}) \ge 0. \quad (2.125)$$
Therefore, we conclude that $I(X; Y|Z) \ge 0$. Finally, we see from Theorem 2.31 that $I(X; Y|Z) = 0$ if and only if for all $z \in S_Z$,
$$p(x, y|z) = p(x|z)p(y|z), \quad (2.126)$$
or
$$p(x, y, z) = p(x, z)p(y|z) \quad (2.127)$$
for all $x$ and $y$. Therefore, $X$ and $Y$ are independent conditioning on $Z$. The theorem is proved. □

As we have seen in Section 2.2, all Shannon's information measures are special cases of conditional mutual information, so we have in fact proved that all Shannon's information measures are always nonnegative. These nonnegativity relations are called the basic inequalities.

For entropy and conditional entropy, we offer the following more direct proof of their nonnegativity. Consider the entropy $H(X)$ of a random variable $X$. For all $x \in S_X$, since $0 < p(x) \le 1$, we have $\log p(x) \le 0$. It then follows from the definition in (2.35) that $H(X) \ge 0$. For the conditional entropy $H(Y|X)$ of random variable $Y$ given random variable $X$, since $H(Y|X = x) \ge 0$ for each $x \in S_X$, we see from (2.43) that $H(Y|X) \ge 0$.

Proposition 2.35. $H(X) = 0$ if and only if $X$ is deterministic.

Proof. If $X$ is deterministic, i.e., there exists $x^* \in \mathcal{X}$ such that $p(x^*) = 1$ and $p(x) = 0$ for all $x \ne x^*$, then $H(X) = -p(x^*) \log p(x^*) = 0$. On the other hand, if $X$ is not deterministic, i.e., there exists $x^* \in \mathcal{X}$ such that $0 < p(x^*) < 1$, then $H(X) \ge -p(x^*) \log p(x^*) > 0$. Therefore, we conclude that $H(X) = 0$ if and only if $X$ is deterministic. □

Proposition 2.36. $H(Y|X) = 0$ if and only if $Y$ is a function of $X$.

Proof. From (2.43), we see that $H(Y|X) = 0$ if and only if $H(Y|X = x) = 0$ for each $x \in S_X$. Then from the last proposition, this happens if and only if $Y$ is deterministic for each given $x$. In other words, $Y$ is a function of $X$. □

Proposition 2.37. $I(X; Y) = 0$ if and only if $X$ and $Y$ are independent.

Proof. This is a special case of Theorem 2.34 with $Z$ being a degenerate random variable. □

One can regard (conditional) mutual information as a measure of (conditional) dependency between two random variables. When the (conditional) mutual information is exactly equal to 0, the two random variables are (conditionally) independent.

We refer to inequalities involving Shannon's information measures only (possibly with constant terms) as information inequalities. The basic inequalities are important examples of information inequalities. Likewise, we refer to identities involving Shannon's information measures only as information identities.

From the information identities (2.47), (2.57), and (2.65), we see that all Shannon's information measures can be expressed as linear combinations of entropies, provided that the latter are all finite. Specifically,
$$H(Y|X) = H(X, Y) - H(X), \quad (2.128)$$
$$I(X; Y) = H(X) + H(Y) - H(X, Y), \quad (2.129)$$
and
$$I(X; Y|Z) = H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z). \quad (2.130)$$
Therefore, every information inequality can be expressed as an inequality involving entropies only. As we will see later in the book, information inequalities form the most important set of tools for proving converse coding theorems in information theory. Except for a number of so-called non-Shannon-type inequalities, all known information inequalities are implied by the basic inequalities. Information inequalities will be studied systematically in Chapters 13, 14, and 15. In the next section, we will prove some consequences of the basic inequalities which are often used in information theory.
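The identity (2.130) gives a convenient way to compute $I(X; Y|Z)$ from four joint entropies, and the basic inequality $I(X; Y|Z) \ge 0$ can then be observed for any joint distribution. A minimal sketch (in Python; names are ours):

```python
import numpy as np

def H(q):
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

rng = np.random.default_rng(2)
p = rng.random((2, 2, 2)); p /= p.sum()    # joint pmf of (X, Y, Z) on axes 0,1,2

Hxz  = H(p.sum(axis=1))                    # H(X, Z)
Hyz  = H(p.sum(axis=0))                    # H(Y, Z)
Hz   = H(p.sum(axis=(0, 1)))               # H(Z)
Hxyz = H(p)                                # H(X, Y, Z)

I_xy_given_z = Hxz + Hyz - Hxyz - Hz       # identity (2.130)
print(I_xy_given_z, I_xy_given_z >= 0)     # nonnegative, by Theorem 2.34
```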
2.7 Some Useful Information Inequalities

In this section, we prove some useful consequences of the basic inequalities introduced in the last section. Note that the conditional versions of these inequalities can be proved by techniques similar to those used in the proof of Proposition 2.25.

Theorem 2.38 (Conditioning Does Not Increase Entropy).
$$H(Y|X) \le H(Y) \quad (2.131)$$
with equality if and only if $X$ and $Y$ are independent.

Proof. This can be proved by considering
$$H(Y|X) = H(Y) - I(X; Y) \le H(Y), \quad (2.132)$$
where the inequality follows because $I(X; Y)$ is always nonnegative. The inequality is tight if and only if $I(X; Y) = 0$, which is equivalent by Proposition 2.37 to $X$ and $Y$ being independent. □

Similarly, it can be shown that
$$H(Y|X, Z) \le H(Y|Z), \quad (2.133)$$
which is the conditional version of the above proposition. These results have the following interpretation. Suppose $Y$ is a random variable we are interested in, and $X$ and $Z$ are side-information about $Y$. Then our uncertainty about $Y$ cannot be increased on the average upon receiving side-information $X$. Once we know $X$, our uncertainty about $Y$ again cannot be increased on the average upon further receiving side-information $Z$.

Remark. Unlike entropy, the mutual information between two random variables can be increased by conditioning on a third random variable. We refer the reader to Section 3.4 for a discussion.

Theorem 2.39 (Independence Bound for Entropy).
$$H(X_1, X_2, \cdots, X_n) \le \sum_{i=1}^n H(X_i) \quad (2.134)$$
with equality if and only if $X_i$, $i = 1, 2, \cdots, n$ are mutually independent.

Proof. By the chain rule for entropy,
$$H(X_1, X_2, \cdots, X_n) = \sum_{i=1}^n H(X_i|X_1, \cdots, X_{i-1}) \quad (2.135)$$
$$\le \sum_{i=1}^n H(X_i), \quad (2.136)$$
where the inequality follows because we have proved in the last theorem that conditioning does not increase entropy. The inequality is tight if and only if it is tight for each $i$, i.e.,
$$H(X_i|X_1, \cdots, X_{i-1}) = H(X_i) \quad (2.137)$$
for $1 \le i \le n$. From the last theorem, this is equivalent to $X_i$ being independent of $X_1, X_2, \cdots, X_{i-1}$ for each $i$. Then
$$p(x_1, x_2, \cdots, x_n) = p(x_1, x_2, \cdots, x_{n-1}) p(x_n) \quad (2.138)$$
$$= p(x_1, x_2, \cdots, x_{n-2}) p(x_{n-1}) p(x_n) \quad (2.139)$$
$$\vdots$$
$$= p(x_1) p(x_2) \cdots p(x_n) \quad (2.140)$$
for all $x_1, x_2, \cdots, x_n$, i.e., $X_1, X_2, \cdots, X_n$ are mutually independent.

Alternatively, we can prove the theorem by considering
$$\sum_{i=1}^n H(X_i) - H(X_1, X_2, \cdots, X_n)$$
$$= -\sum_{i=1}^n E \log p(X_i) + E \log p(X_1, X_2, \cdots, X_n) \quad (2.141)$$
$$= -E \log [p(X_1)p(X_2)\cdots p(X_n)] + E \log p(X_1, X_2, \cdots, X_n) \quad (2.142)$$
$$= E \log \frac{p(X_1, X_2, \cdots, X_n)}{p(X_1)p(X_2)\cdots p(X_n)} \quad (2.143)$$
$$= D(p_{X_1X_2\cdots X_n} \| p_{X_1}p_{X_2}\cdots p_{X_n}) \quad (2.144)$$
$$\ge 0, \quad (2.145)$$
where equality holds if and only if
$$p(x_1, x_2, \cdots, x_n) = p(x_1)p(x_2)\cdots p(x_n) \quad (2.146)$$
for all $x_1, x_2, \cdots, x_n$, i.e., $X_1, X_2, \cdots, X_n$ are mutually independent. □

Theorem 2.40.
$$I(X; Y, Z) \ge I(X; Y), \quad (2.147)$$
with equality if and only if $X \to Y \to Z$ forms a Markov chain.

Proof. By the chain rule for mutual information, we have
$$I(X; Y, Z) = I(X; Y) + I(X; Z|Y) \ge I(X; Y). \quad (2.148)$$
The above inequality is tight if and only if $I(X; Z|Y) = 0$, or $X \to Y \to Z$ forms a Markov chain. The theorem is proved. □

Lemma 2.41. If $X \to Y \to Z$ forms a Markov chain, then
$$I(X; Z) \le I(X; Y) \quad (2.149)$$
and
$$I(X; Z) \le I(Y; Z). \quad (2.150)$$

Before proving this lemma, we first discuss its meaning. Suppose $X$ is a random variable we are interested in, and $Y$ is an observation of $X$. If we infer $X$ via $Y$, our uncertainty about $X$ on the average is $H(X|Y)$. Now suppose we process $Y$ (either deterministically or probabilistically) to obtain a random variable $Z$. If we infer $X$ via $Z$, our uncertainty about $X$ on the average is $H(X|Z)$. Since $X \to Y \to Z$ forms a Markov chain, from (2.149), we have
$$H(X|Z) = H(X) - I(X; Z) \quad (2.151)$$
$$\ge H(X) - I(X; Y) \quad (2.152)$$
$$= H(X|Y), \quad (2.153)$$
i.e., further processing of $Y$ can only increase our uncertainty about $X$ on the average.

Proof of Lemma 2.41. Assume $X \to Y \to Z$, i.e., $X \perp Z \mid Y$. By Theorem 2.34, we have
$$I(X; Z|Y) = 0. \quad (2.154)$$
Then
$$I(X; Z) = I(X; Y, Z) - I(X; Y|Z) \quad (2.155)$$
$$\le I(X; Y, Z) \quad (2.156)$$
$$= I(X; Y) + I(X; Z|Y) \quad (2.157)$$
$$= I(X; Y). \quad (2.158)$$
In (2.155) and (2.157), we have used the chain rule for mutual information. The inequality in (2.156) follows because $I(X; Y|Z)$ is always nonnegative, and (2.158) follows from (2.154). This proves (2.149). Since $X \to Y \to Z$ is equivalent to $Z \to Y \to X$, we also have proved (2.150). This completes the proof of the lemma. □
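Lemma 2.41 can be observed numerically by generating a Markov chain $X \to Y \to Z$ from arbitrary transition matrices. The following sketch (in Python; all names are ours) builds $p(x, y)$ and $p(x, z)$ and compares $I(X; Z)$ with $I(X; Y)$.

```python
import numpy as np

rng = np.random.default_rng(3)

def H(q):
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

def I(pxy):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a 2-D joint pmf, via (2.129)."""
    return H(pxy.sum(axis=1)) + H(pxy.sum(axis=0)) - H(pxy)

# a Markov chain X -> Y -> Z from a random p(x), p(y|x), and p(z|y)
px  = rng.random(3); px /= px.sum()
pyx = rng.random((3, 3)); pyx /= pyx.sum(axis=1, keepdims=True)  # rows: p(y|x)
pzy = rng.random((3, 3)); pzy /= pzy.sum(axis=1, keepdims=True)  # rows: p(z|y)

pxy = px[:, None] * pyx        # p(x, y)
pxz = pxy @ pzy                # p(x, z) = sum_y p(x, y) p(z|y)
print(I(pxz) <= I(pxy))        # Lemma 2.41: True
```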
From Lemma 2.41, we can prove the more general data processing theorem.

Theorem 2.42 (Data Processing Theorem). If $U \to X \to Y \to V$ forms a Markov chain, then
$$I(U; V) \le I(X; Y). \quad (2.159)$$

Proof. Assume $U \to X \to Y \to V$. Then by Proposition 2.10, we have $U \to X \to Y$ and $U \to Y \to V$. From the first Markov chain and Lemma 2.41, we have
$$I(U; Y) \le I(X; Y). \quad (2.160)$$
From the second Markov chain and Lemma 2.41, we have
$$I(U; V) \le I(U; Y). \quad (2.161)$$
Combining (2.160) and (2.161), we obtain (2.159), proving the theorem. □

2.8 Fano's Inequality

In the last section, we proved a few information inequalities involving only Shannon's information measures. In this section, we first prove an upper bound on the entropy of a random variable in terms of the size of the alphabet. This inequality is then used in the proof of Fano's inequality, which is extremely useful in proving converse coding theorems in information theory.

Theorem 2.43. For any random variable $X$,
$$H(X) \le \log |\mathcal{X}|, \quad (2.162)$$
where $|\mathcal{X}|$ denotes the size of the alphabet $\mathcal{X}$. This upper bound is tight if and only if $X$ is distributed uniformly on $\mathcal{X}$.

Proof. Let $u$ be the uniform distribution on $\mathcal{X}$, i.e., $u(x) = |\mathcal{X}|^{-1}$ for all $x \in \mathcal{X}$. Then
$$\log |\mathcal{X}| - H(X) = -\sum_{x \in S_X} p(x) \log |\mathcal{X}|^{-1} + \sum_{x \in S_X} p(x) \log p(x) \quad (2.163)$$
$$= -\sum_{x \in S_X} p(x) \log u(x) + \sum_{x \in S_X} p(x) \log p(x) \quad (2.164)$$
$$= \sum_{x \in S_X} p(x) \log \frac{p(x)}{u(x)} \quad (2.165)$$
$$= D(p\|u) \quad (2.166)$$
$$\ge 0, \quad (2.167)$$
proving (2.162). This upper bound is tight if and only if $D(p\|u) = 0$, which from Theorem 2.31 is equivalent to $p(x) = u(x)$ for all $x \in \mathcal{X}$, completing the proof. □

Corollary 2.44. The entropy of a random variable may take any nonnegative real value.

Proof. Consider a random variable $X$ defined on a fixed finite alphabet $\mathcal{X}$. We see from the last theorem that $H(X) = \log |\mathcal{X}|$ is achieved when $X$ is distributed uniformly on $\mathcal{X}$. On the other hand, $H(X) = 0$ is achieved when $X$ is deterministic. For $0 \le a \le |\mathcal{X}|^{-1}$, let
$$g(a) = H(\{1 - (|\mathcal{X}| - 1)a,\ a,\ \cdots,\ a\}) \quad (2.168)$$
$$= -l(1 - (|\mathcal{X}| - 1)a) - (|\mathcal{X}| - 1)\, l(a), \quad (2.169)$$
where $l(\cdot)$ is defined in (2.73). Note that $g(a)$ is continuous in $a$, with $g(0) = 0$ and $g(|\mathcal{X}|^{-1}) = \log |\mathcal{X}|$. For any value $0 < b < \log |\mathcal{X}|$, by the intermediate value theorem for continuous functions, there exists a distribution for $X$ such that $H(X) = b$. Then we see that $H(X)$ can take any positive value by letting $|\mathcal{X}|$ be sufficiently large. This completes the proof. □

Remark. Let $|\mathcal{X}| = D$, i.e., the random variable $X$ is a $D$-ary symbol. When the base of the logarithm is $D$, (2.162) becomes
$$H_D(X) \le 1. \quad (2.170)$$
Recall that the unit of entropy is the D-it when the logarithm is in the base $D$. This inequality says that a $D$-ary symbol can carry at most 1 D-it of information, and this maximum is achieved when $X$ has a uniform distribution. We have already seen the binary case when we discussed the binary entropy function $h_b(p)$ in Section 2.2.

We see from Theorem 2.43 that the entropy of a random variable is finite as long as it has a finite alphabet. However, if a random variable has a countable alphabet⁷, its entropy may or may not be finite. This will be shown in the next two examples.

⁷ An alphabet is countable if it is either finite or countably infinite.
Example 2.45. Let $X$ be a random variable such that
$$\Pr\{X = i\} = 2^{-i}, \quad i = 1, 2, \cdots. \quad (2.171)$$
Then
$$H_2(X) = \sum_{i=1}^\infty i\, 2^{-i} = 2, \quad (2.172)$$
which is finite.

For a random variable $X$ with a countable alphabet and finite entropy, we show in Appendix 2.A that the entropy of $X$ can be approximated by the entropy of a truncation of the distribution of $X$.

Example 2.46. Let $Y$ be a random variable which takes values in the subset of pairs of integers
$$\left\{(i, j) : 1 \le i < \infty \text{ and } 1 \le j \le \frac{2^{2^i}}{2^i}\right\} \quad (2.173)$$
such that
$$\Pr\{Y = (i, j)\} = 2^{-2^i} \quad (2.174)$$
for all $i$ and $j$. First, we check that
$$\sum_{i=1}^\infty \sum_{j=1}^{2^{2^i}/2^i} \Pr\{Y = (i, j)\} = \sum_{i=1}^\infty 2^{-2^i}\left(\frac{2^{2^i}}{2^i}\right) = 1. \quad (2.175)$$
Then
$$H_2(Y) = -\sum_{i=1}^\infty \sum_{j=1}^{2^{2^i}/2^i} 2^{-2^i} \log_2 2^{-2^i} = \sum_{i=1}^\infty 1, \quad (2.176)$$
which does not converge.

Let $X$ be a random variable and $\hat{X}$ be an estimate of $X$ which takes values in the same alphabet $\mathcal{X}$. Let the probability of error $P_e$ be
$$P_e = \Pr\{X \ne \hat{X}\}. \quad (2.177)$$
If $P_e = 0$, i.e., $X = \hat{X}$ with probability 1, then $H(X|\hat{X}) = 0$ by Proposition 2.36. Intuitively, if $P_e$ is small, i.e., $X = \hat{X}$ with probability close to 1, then $H(X|\hat{X})$ should be close to 0. Fano's inequality makes this intuition precise.

Theorem 2.47 (Fano's Inequality). Let $X$ and $\hat{X}$ be random variables taking values in the same alphabet $\mathcal{X}$. Then
$$H(X|\hat{X}) \le h_b(P_e) + P_e \log(|\mathcal{X}| - 1), \quad (2.178)$$
where $h_b$ is the binary entropy function.

Proof. Define a random variable
$$Y = \begin{cases} 0 & \text{if } X = \hat{X} \\ 1 & \text{if } X \ne \hat{X}. \end{cases} \quad (2.179)$$
The random variable $Y$ is an indicator of the error event $\{X \ne \hat{X}\}$, with $\Pr\{Y = 1\} = P_e$ and $H(Y) = h_b(P_e)$. Since $Y$ is a function of $X$ and $\hat{X}$,
$$H(Y|X, \hat{X}) = 0. \quad (2.180)$$
Then
$$H(X|\hat{X}) = H(X|\hat{X}) + H(Y|X, \hat{X}) \quad (2.181)$$
$$= H(X, Y|\hat{X}) \quad (2.182)$$
$$= H(Y|\hat{X}) + H(X|\hat{X}, Y) \quad (2.183)$$
$$\le H(Y) + H(X|\hat{X}, Y) \quad (2.184)$$
$$= H(Y) + \sum_{\hat{x} \in \mathcal{X}} \big[\Pr\{\hat{X} = \hat{x}, Y = 0\}\, H(X|\hat{X} = \hat{x}, Y = 0) + \Pr\{\hat{X} = \hat{x}, Y = 1\}\, H(X|\hat{X} = \hat{x}, Y = 1)\big]. \quad (2.185)$$
In the above, (2.181) follows from (2.180), (2.184) follows because conditioning does not increase entropy, and (2.185) follows from an application of (2.43). Now $X$ must take the value $\hat{x}$ if $\hat{X} = \hat{x}$ and $Y = 0$. In other words, $X$ is conditionally deterministic given $\hat{X} = \hat{x}$ and $Y = 0$. Therefore, by Proposition 2.35,
$$H(X|\hat{X} = \hat{x}, Y = 0) = 0. \quad (2.186)$$
If $\hat{X} = \hat{x}$ and $Y = 1$, then $X$ must take a value in the set $\{x \in \mathcal{X} : x \ne \hat{x}\}$, which contains $|\mathcal{X}| - 1$ elements. By Theorem 2.43, we have
$$H(X|\hat{X} = \hat{x}, Y = 1) \le \log(|\mathcal{X}| - 1), \quad (2.187)$$
where this upper bound does not depend on $\hat{x}$. Hence,
$$H(X|\hat{X}) \le h_b(P_e) + \left(\sum_{\hat{x} \in \mathcal{X}} \Pr\{\hat{X} = \hat{x}, Y = 1\}\right) \log(|\mathcal{X}| - 1) \quad (2.188)$$
$$= h_b(P_e) + \Pr\{Y = 1\} \log(|\mathcal{X}| - 1) \quad (2.189)$$
$$= h_b(P_e) + P_e \log(|\mathcal{X}| - 1), \quad (2.190)$$
which completes the proof. □

Very often, we only need the following simplified version when we apply Fano's inequality. The proof is omitted.

Corollary 2.48. $H(X|\hat{X}) < 1 + P_e \log |\mathcal{X}|$.
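To get a feel for the bound in (2.178), the following sketch (in Python; the function name `fano_bound` is ours) evaluates the right-hand side of Fano's inequality for a few error probabilities on an alphabet of size 16; the bound tends to 0 as $P_e \to 0$.

```python
import numpy as np

def fano_bound(pe, alphabet_size):
    """Right-hand side of Fano's inequality (2.178), in bits."""
    hb = 0.0 if pe in (0.0, 1.0) else \
         -pe * np.log2(pe) - (1 - pe) * np.log2(1 - pe)
    return hb + pe * np.log2(alphabet_size - 1)

for pe in [0.0, 0.01, 0.1, 0.5]:
    print(f"Pe = {pe}: H(X|X_hat) <= {fano_bound(pe, 16):.3f} bits")
```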
Fano's inequality has the following implication. If the alphabet $\mathcal{X}$ is finite, as $P_e \to 0$, the upper bound in (2.178) tends to 0, which implies $H(X|\hat{X})$ also tends to 0. However, this is not necessarily the case if $\mathcal{X}$ is countable, as shown in the next example.

Example 2.49. Let $\hat{X}$ take the value 0 with probability 1. Let $Z$ be an independent binary random variable taking values in $\{0, 1\}$. Define the random variable $X$ by
$$X = \begin{cases} 0 & \text{if } Z = 0 \\ Y & \text{if } Z = 1, \end{cases} \quad (2.191)$$
where $Y$ is the random variable in Example 2.46 whose entropy is infinity. Let
$$P_e = \Pr\{X \ne \hat{X}\} = \Pr\{Z = 1\}. \quad (2.192)$$
Then, since $\hat{X}$ is deterministic,
$$H(X|\hat{X}) = H(X) \quad (2.193)-(2.194)$$
$$\ge H(X|Z) \quad (2.195)$$
$$= \Pr\{Z = 0\}\, H(X|Z = 0) + \Pr\{Z = 1\}\, H(X|Z = 1) \quad (2.196)$$
$$= (1 - P_e) \cdot 0 + P_e \cdot H(Y) \quad (2.197)$$
$$= \infty \quad (2.198)$$
for any $P_e > 0$. Therefore, $H(X|\hat{X})$ does not tend to 0 as $P_e \to 0$.

2.9 Maximum Entropy Distributions

In Theorem 2.43, we proved that for any random variable $X$,
$$H(X) \le \log |\mathcal{X}|, \quad (2.199)$$
with equality when $X$ is distributed uniformly over $\mathcal{X}$. In this section, we revisit this result in the context that $X$ is a real random variable. To simplify our discussion, all the logarithms are in the base $e$.

Consider the following problem: Maximize $H(p)$ over all probability distributions $p$ defined on a countable subset $S$ of the set of real numbers, subject to
$$\sum_{x \in S_p} p(x)\, r_i(x) = a_i \quad \text{for } 1 \le i \le m, \quad (2.200)$$
where $S_p \subset S$ and $r_i(x)$ is defined for all $x \in S$. The following theorem renders a solution to this problem.

Theorem 2.50. Let
$$p^*(x) = e^{-\lambda_0 - \sum_{i=1}^m \lambda_i r_i(x)} \quad (2.201)$$
for all $x \in S$, where $\lambda_0, \lambda_1, \cdots, \lambda_m$ are chosen such that the constraints in (2.200) are satisfied. Then $p^*$ maximizes $H(p)$ over all probability distributions $p$ on $S$, subject to the constraints in (2.200).

Proof. For any $p$ satisfying the constraints in (2.200), consider
$$H(p^*) - H(p) = -\sum_{x \in S} p^*(x) \ln p^*(x) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.202)$$
$$= -\sum_{x \in S} p^*(x)\Big({-\lambda_0} - \sum_i \lambda_i r_i(x)\Big) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.203)$$
$$= \lambda_0 \Big(\sum_{x \in S} p^*(x)\Big) + \sum_i \lambda_i \Big(\sum_{x \in S} p^*(x) r_i(x)\Big) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.204)$$
$$= \lambda_0 \cdot 1 + \sum_i \lambda_i a_i + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.205)$$
$$= \lambda_0 \Big(\sum_{x \in S_p} p(x)\Big) + \sum_i \lambda_i \Big(\sum_{x \in S_p} p(x) r_i(x)\Big) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.206)$$
$$= -\sum_{x \in S_p} p(x)\Big({-\lambda_0} - \sum_i \lambda_i r_i(x)\Big) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.207)$$
$$= -\sum_{x \in S_p} p(x) \ln p^*(x) + \sum_{x \in S_p} p(x) \ln p(x) \quad (2.208)$$
$$= \sum_{x \in S_p} p(x) \ln \frac{p(x)}{p^*(x)} \quad (2.209)$$
$$= D(p\|p^*) \quad (2.210)$$
$$\ge 0. \quad (2.211)$$
In the above, (2.207) is obtained from (2.203) by replacing $p^*(x)$ by $p(x)$ and $x \in S$ by $x \in S_p$ in the first summation, while the intermediate steps (2.204) to (2.206) are justified by noting that both $p^*$ and $p$ satisfy the constraints in (2.200). The last step is an application of the divergence inequality (Theorem 2.31). The proof is complete. □

Remark. For all $x \in S$, $p^*(x) > 0$, so that $S_{p^*} = S$.

The following corollary of Theorem 2.50 is rather subtle.

Corollary 2.51. Let $p^*$ be a probability distribution defined on $S$ with
$$p^*(x) = e^{-\lambda_0 - \sum_{i=1}^m \lambda_i r_i(x)} \quad (2.212)$$
for all $x \in S$. Then $p^*$ maximizes $H(p)$ over all probability distributions $p$ defined on $S$, subject to the constraints
$$\sum_{x \in S_p} p(x)\, r_i(x) = \sum_{x \in S} p^*(x)\, r_i(x) \quad \text{for } 1 \le i \le m. \quad (2.213)$$

Example 2.52. Let $S$ be finite and let the set of constraints in (2.200) be empty. Then
$$p^*(x) = e^{-\lambda_0}, \quad (2.214)$$
a constant that does not depend on $x$. Therefore, $p^*$ is simply the uniform distribution over $S$, i.e., $p^*(x) = |S|^{-1}$ for all $x \in S$. This is consistent with Theorem 2.43.

Example 2.53. Let $S = \{0, 1, 2, \cdots\}$, and let the set of constraints in (2.200) be
$$\sum_x p(x)\, x = a, \quad (2.215)$$
where $a \ge 0$, i.e., the mean of the distribution $p$ is fixed at some nonnegative value $a$. We now determine $p^*$ using the prescription in Theorem 2.50. Let
$$q_i = e^{-\lambda_i} \quad (2.216)$$
for $i = 0, 1$. Then by (2.201),
$$p^*(x) = q_0 q_1^x. \quad (2.217)$$
Evidently, $p^*$ is a geometric distribution, so that
$$q_0 = 1 - q_1. \quad (2.218)$$
Finally, we invoke the constraint (2.215) on $p^*$ to obtain $q_1 = a(a + 1)^{-1}$, since the mean of this geometric distribution is $q_1/(1 - q_1)$. The details are omitted.
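Example 2.53 can be checked numerically: for a fixed mean $a$, the geometric distribution with $q_1 = a/(a + 1)$ should have larger entropy than any other distribution on $\{0, 1, 2, \cdots\}$ with the same mean. A sketch (in Python; the truncation at 2000 terms and the choice of comparison distribution are ours, purely for illustration):

```python
import numpy as np

a = 3.0
q1 = a / (a + 1)
x = np.arange(2000)                        # numerical truncation of {0, 1, 2, ...}
p = (1 - q1) * q1 ** x                     # the maximum entropy distribution p*

print("mean  =", np.sum(x * p))            # approximately 3.0
print("H(p*) =", -np.sum(p * np.log(p)))   # entropy in nats (logs base e here)

# a competing distribution with the same mean: uniform on {0, ..., 6}
q = np.zeros_like(p); q[:7] = 1 / 7
print("H(q)  =", -np.sum(q[q > 0] * np.log(q[q > 0])))   # smaller than H(p*)
```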
2.10 Entropy Rate of a Stationary Source

In the previous sections, we have discussed various properties of the entropy of a finite collection of random variables. In this section, we discuss the entropy rate of a discrete-time information source.

A discrete-time information source $\{X_k, k \ge 1\}$ is an infinite collection of random variables indexed by the set of positive integers. Since the index set is ordered, it is natural to regard the indices as time indices. We will refer to the random variables $X_k$ as letters.

We assume that $H(X_k) < \infty$ for all $k$. Then for any finite subset $A$ of the index set $\{k : k \ge 1\}$, we have
$$H(X_k, k \in A) \le \sum_{k \in A} H(X_k) < \infty. \quad (2.219)$$
However, it is not meaningful to discuss $H(X_k, k \ge 1)$, because the joint entropy of an infinite collection of letters is infinite except for very special cases. On the other hand, since the indices are ordered, we can naturally define the entropy rate of an information source, which gives the average entropy per letter of the source.

Definition 2.54. The entropy rate of an information source $\{X_k\}$ is defined as
$$H_X = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \cdots, X_n) \quad (2.220)$$
when the limit exists.

We show in the next two examples that the entropy rate of a source may or may not exist.

Example 2.55. Let $\{X_k\}$ be an i.i.d. source with generic random variable $X$. Then
$$\lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \cdots, X_n) = \lim_{n \to \infty} \frac{nH(X)}{n} = \lim_{n \to \infty} H(X) = H(X), \quad (2.221)-(2.223)$$
i.e., the entropy rate of an i.i.d. source is the entropy of any of its single letters.

Example 2.56. Let $\{X_k\}$ be a source such that the $X_k$ are mutually independent and $H(X_k) = k$ for $k \ge 1$. Then
$$\frac{1}{n} H(X_1, X_2, \cdots, X_n) = \frac{1}{n} \sum_{k=1}^n k = \frac{1}{n} \cdot \frac{n(n+1)}{2} = \frac{1}{2}(n + 1), \quad (2.224)-(2.226)$$
which does not converge as $n \to \infty$, although $H(X_k) < \infty$ for all $k$. Therefore, the entropy rate of $\{X_k\}$ does not exist.

Toward characterizing the asymptotic behavior of $\{X_k\}$, it is natural to consider the limit
$$H'_X = \lim_{n \to \infty} H(X_n|X_1, X_2, \cdots, X_{n-1}) \quad (2.227)$$
if it exists. The quantity $H(X_n|X_1, X_2, \cdots, X_{n-1})$ is interpreted as the conditional entropy of the next letter given that we know all the past history of the source, and $H'_X$ is the limit of this quantity after the source has been run for an indefinite amount of time.

Definition 2.57. An information source $\{X_k\}$ is stationary if
$$X_1, X_2, \cdots, X_m \quad (2.228)$$
and
$$X_{1+l}, X_{2+l}, \cdots, X_{m+l} \quad (2.229)$$
have the same joint distribution for any $m, l \ge 1$.

In the rest of the section, we will show that stationarity is a sufficient condition for the existence of the entropy rate of an information source.

Lemma 2.58. Let $\{X_k\}$ be a stationary source. Then $H'_X$ exists.

Proof. Since $H(X_n|X_1, X_2, \cdots, X_{n-1})$ is lower bounded by zero for all $n$, it suffices to prove that $H(X_n|X_1, X_2, \cdots, X_{n-1})$ is non-increasing in $n$ to conclude that the limit $H'_X$ exists. Toward this end, for $n \ge 2$, consider
$$H(X_n|X_1, X_2, \cdots, X_{n-1}) \le H(X_n|X_2, X_3, \cdots, X_{n-1}) \quad (2.230)$$
$$= H(X_{n-1}|X_1, X_2, \cdots, X_{n-2}), \quad (2.231)$$
where the last step is justified by the stationarity of $\{X_k\}$. The lemma is proved. □
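Lemma 2.58 can be illustrated with a stationary first-order Markov source, for which $H(X_n|X_1, \cdots, X_{n-1})$ is in fact constant for $n \ge 2$ and equal to the limit $H'_X$. A sketch (in Python; the transition matrix is an arbitrary choice of ours, and the chain is started from its stationary distribution so that the source is stationary):

```python
import numpy as np

def H(q):
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])                     # transition matrix p(x_n | x_{n-1})
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()                                 # stationary distribution

joint = pi.copy()                              # pmf of X1
for n in range(2, 7):
    joint = joint[..., None] * P               # p(x1..xn) = p(x1..x_{n-1}) p(xn|x_{n-1})
    # H(Xn | X1..X_{n-1}) = H(X1..Xn) - H(X1..X_{n-1})
    print(n, H(joint) - H(joint.sum(axis=-1)))
# Lemma 2.58 guarantees these values are non-increasing; here they are
# constant for n >= 2, and the constant is H'_X = H(X2|X1).
```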
Lemma 2.59 (Cesàro Mean). Let $a_k$ and $b_k$ be real numbers. If $a_n \to a$ as $n \to \infty$ and $b_n = \frac{1}{n}\sum_{k=1}^n a_k$, then $b_n \to a$ as $n \to \infty$.

Proof. The idea of the lemma is the following: if $a_n \to a$ as $n \to \infty$, then the average of the first $n$ terms in $\{a_k\}$, namely $b_n$, also tends to $a$ as $n \to \infty$. The lemma is formally proved as follows. Since $a_n \to a$ as $n \to \infty$, for every $\epsilon > 0$, there exists $N(\epsilon)$ such that $|a_n - a| < \epsilon$ for all $n > N(\epsilon)$. For $n > N(\epsilon)$, consider
$$|b_n - a| = \left|\frac{1}{n}\sum_{i=1}^n a_i - a\right| \quad (2.232)$$
$$= \left|\frac{1}{n}\sum_{i=1}^n (a_i - a)\right| \quad (2.233)$$
$$\le \frac{1}{n}\sum_{i=1}^n |a_i - a| \quad (2.234)$$
$$= \frac{1}{n}\left(\sum_{i=1}^{N(\epsilon)} |a_i - a| + \sum_{i=N(\epsilon)+1}^n |a_i - a|\right) \quad (2.235)$$
$$< \frac{1}{n}\sum_{i=1}^{N(\epsilon)} |a_i - a| + \frac{(n - N(\epsilon))\epsilon}{n} \quad (2.236)$$
$$< \frac{1}{n}\sum_{i=1}^{N(\epsilon)} |a_i - a| + \epsilon. \quad (2.237)$$
The first term tends to 0 as $n \to \infty$. Therefore, for any $\epsilon > 0$, by taking $n$ to be sufficiently large, we can make $|b_n - a| < 2\epsilon$. Hence $b_n \to a$ as $n \to \infty$, proving the lemma. □

We now prove that $H'_X$ is an alternative definition/interpretation of the entropy rate of $\{X_k\}$ when $\{X_k\}$ is stationary.

Theorem 2.60. The entropy rate $H_X$ of a stationary source $\{X_k\}$ exists and is equal to $H'_X$.

Proof. Since we have proved in Lemma 2.58 that $H'_X$ always exists for a stationary source $\{X_k\}$, in order to prove the theorem, we only have to prove that $H_X = H'_X$. By the chain rule for entropy,
$$\frac{1}{n} H(X_1, X_2, \cdots, X_n) = \frac{1}{n}\sum_{k=1}^n H(X_k|X_1, X_2, \cdots, X_{k-1}). \quad (2.238)$$
Since
$$\lim_{k \to \infty} H(X_k|X_1, X_2, \cdots, X_{k-1}) = H'_X \quad (2.239)$$
from (2.227), it follows from Lemma 2.59 that
$$H_X = \lim_{n \to \infty} \frac{1}{n} H(X_1, X_2, \cdots, X_n) = H'_X. \quad (2.240)$$
The theorem is proved. □

In this theorem, we have proved that the entropy rate of a random source $\{X_k\}$ exists under the fairly general assumption that $\{X_k\}$ is stationary. However, the entropy rate of a stationary source $\{X_k\}$ may not carry any physical meaning unless $\{X_k\}$ is also ergodic. This will be explained when we discuss the Shannon-McMillan-Breiman Theorem in Section 5.4.

Appendix 2.A: Approximation of Random Variables with Countably Infinite Alphabets by Truncation

Let $X$ be a random variable with a countable alphabet $\mathcal{X}$ such that $H(X) < \infty$. Without loss of generality, $\mathcal{X}$ is taken to be the set of positive integers. Define a random variable $X(m)$ which takes values in
$$\mathcal{N}_m = \{1, 2, \cdots, m\} \quad (2.241)$$
such that
$$\Pr\{X(m) = k\} = \frac{\Pr\{X = k\}}{\Pr\{X \in \mathcal{N}_m\}} \quad (2.242)$$
for all $k \in \mathcal{N}_m$, i.e., the distribution of $X(m)$ is the truncation of the distribution of $X$ up to $m$.

It is intuitively correct that $H(X(m)) \to H(X)$ as $m \to \infty$, which we formally prove in this appendix. For every $m \ge 1$, define the binary random variable
$$B(m) = \begin{cases} 1 & \text{if } X \le m \\ 0 & \text{if } X > m. \end{cases} \quad (2.243)$$
Consider
$$H(X) = -\sum_{k=1}^m \Pr\{X = k\} \log \Pr\{X = k\} - \sum_{k=m+1}^\infty \Pr\{X = k\} \log \Pr\{X = k\}. \quad (2.244)$$
As $m \to \infty$,
$$-\sum_{k=1}^m \Pr\{X = k\} \log \Pr\{X = k\} \to H(X). \quad (2.245)$$
Since $H(X) < \infty$,
$$-\sum_{k=m+1}^\infty \Pr\{X = k\} \log \Pr\{X = k\} \to 0 \quad (2.246)$$
as $m \to \infty$. Now consider
$$H(X) = H(X|B(m)) + I(X; B(m)) \quad (2.247)$$
$$= H(X|B(m) = 1)\Pr\{B(m) = 1\} + H(X|B(m) = 0)\Pr\{B(m) = 0\} + I(X; B(m)) \quad (2.248)$$
$$= H(X(m))\Pr\{B(m) = 1\} + H(X|B(m) = 0)\Pr\{B(m) = 0\} + I(X; B(m)). \quad (2.249)$$
As $m \to \infty$, $H(B(m)) \to 0$ since $\Pr\{B(m) = 1\} \to 1$. This implies $I(X; B(m)) \to 0$ because
$$I(X; B(m)) \le H(B(m)). \quad (2.250)$$
In (2.249), we further consider
$$H(X|B(m) = 0)\Pr\{B(m) = 0\}$$
$$= -\sum_{k=m+1}^\infty \Pr\{X = k\} \log \frac{\Pr\{X = k\}}{\Pr\{B(m) = 0\}} \quad (2.251)$$
$$= -\sum_{k=m+1}^\infty \Pr\{X = k\} \big(\log \Pr\{X = k\} - \log \Pr\{B(m) = 0\}\big) \quad (2.252)$$
$$= -\sum_{k=m+1}^\infty \Pr\{X = k\} \log \Pr\{X = k\} + \left(\sum_{k=m+1}^\infty \Pr\{X = k\}\right) \log \Pr\{B(m) = 0\} \quad (2.253)$$
$$= -\sum_{k=m+1}^\infty \Pr\{X = k\} \log \Pr\{X = k\} + \Pr\{B(m) = 0\} \log \Pr\{B(m) = 0\}. \quad (2.254)$$
As $m \to \infty$, the summation above tends to 0 by (2.246). Since $\Pr\{B(m) = 0\} \to 0$, we also have $\Pr\{B(m) = 0\} \log \Pr\{B(m) = 0\} \to 0$. Therefore,
$$H(X|B(m) = 0)\Pr\{B(m) = 0\} \to 0, \quad (2.255)$$
and we see from (2.249) that $H(X(m)) \to H(X)$ as $m \to \infty$.
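The convergence $H(X(m)) \to H(X)$ proved in this appendix is easy to observe for the distribution of Example 2.45, where $H(X) = 2$ bits. A sketch (in Python; names are ours):

```python
import numpy as np

def H(q):
    q = q[q > 0]
    return -(q * np.log2(q)).sum()

# truncations of the distribution Pr{X = i} = 2**(-i) of Example 2.45
for m in [2, 4, 8, 16, 32]:
    p = 2.0 ** -np.arange(1, m + 1)
    p /= p.sum()                    # distribution of the truncation X(m), as in (2.242)
    print(f"m = {m:2d}: H(X(m)) = {H(p):.6f}   (H(X) = 2)")
```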
Shannon’s Information Measures: H(X) = − X x p(x) log p(x) = −E log p(X) I(X; Y ) = X x,y p(x, y) log p(x, y) p(x)p(y) = E log p(X, Y ) p(X)p(Y ) H(Y |X) = − X x,y p(x, y) log p(y|x) = −E log p(Y |X) I(X; Y |Z) = X x,y,z p(x, y, z) log p(x, y|z) p(x|z)p(y|z) = E log p(X, Y |Z) p(X|Z)p(Y |Z). Some Useful Identitites: H(X) = I(X; X) H(Y |X) = H(X, Y ) −H(X) I(X; Y ) = H(X) −H(X|Y ) I(X; Y |Z) = H(X|Z) −H(X|Y, Z). Chain Rule for Entropy: H(X1, X2, · · · , Xn) = n X i=1 H(Xi|X1, · · · , Xi−1). Chain Rule for Mutual Information: I(X1, X2, · · · , Xn; Y ) = n X i=1 I(Xi; Y |X1, · · · , Xi−1). Informational Divergence: For two probability distributions p and q on a common alphabet X, D(p∥q) = X x p(x) log p(x) q(x) = Ep log p(X) q(X). Fundamental Inequality: For any a > 0, ln a ≤a −1, with equality if and only if a = 1. 44 2 Information Measures Divergence Inequality: D(p∥q) ≥0, with equality if and only if p = q. Log-Sum Inequality: For positive numbers a1, a2, · · · and nonnegative num-bers b1, b2, · · · such that P i ai < ∞and 0 < P i bi < ∞, X i ai log ai bi ≥ X i ai ! log P i ai P i bi . Equality holds if and only if ai bi = constant for all i. The Basic Inequalities: All Shannon’s information measures are nonnega-tive. Some Useful Properties of Shannon’s Information Measures: 1. H(X) ≤log |X| with equality if and only if X is uniform. 2. H(X) = 0 if and only if X is deterministic. 3. H(Y |X) = 0 if and only if Y is a function of X. 4. I(X; Y ) = 0 if and only X and Y are independent. Fano’s Inequality: Let X and ˆ X be random variables taking values in the same alphabet X. Then H(X| ˆ X) ≤hb(Pe) + Pe log(|X| −1). Conditioning Does Not Increase Entropy: H(Y |X) ≤H(Y ), with equal-ity if and only if X and Y are independent. Independence Bound for Entropy: H(X1, X2, · · · , Xn) ≤ n X i=1 H(Xi) with equality if and only if Xi, i = 1, 2, · · · , n are mutually independent. Data Processing Theorem: If U →X →Y →V forms a Markov chain, then I(U; V ) ≤I(X; Y ). Maximum Entropy Distributions: Let p∗(x) = e−λ0−Pm i=1 λiri(x) for all x ∈S, where λ0, λ1, · · · , λm are chosen such that the constraints X x∈Sp p(x)ri(x) = ai for 1 ≤i ≤m are satisfied. Then p∗maximizes H(p) over all probability distributions p on S subject to the above constraints. Entropy Rate of a Stationary Source: Problems 45 1. The entropy rate of an information source {Xk} is defined as HX = lim n→∞ 1 nH(X1, X2, · · · , Xn) when the limit exists. 2. The entropy rate HX of a stationary source {Xk} exists and is equal to H′ X = lim n→∞H(Xn|X1, X2, · · · , Xn−1). Problems 1. Let X and Y be random variables with alphabets X = Y = {1, 2, 3, 4, 5} and joint distribution p(x, y) given by 1 25       1 1 1 1 1 2 1 2 0 0 2 0 1 1 1 0 3 0 2 0 0 0 1 1 3       . Calculate H(X), H(Y ), H(X|Y ), H(Y |X), and I(X; Y ). 2. Prove Propositions 2.8, 2.9, 2.10, 2.19, 2.21, and 2.22. 3. Give an example which shows that pairwise independence does not imply mutual independence. 4. Verify that p(x, y, z) as defined in Definition 2.4 is a probability distribu-tion. You should exclude all the zero probability masses from the summa-tion carefully. 5. Linearity of expectation It is well-known that expectation is linear, i.e., E[f(X) + g(Y )] = Ef(X) + Eg(Y ), where the summation in an expec-tation is taken over the corresponding alphabet. However, we adopt in information theory the convention that the summation in an expectation is taken over the corresponding support. Justify carefully the linearity of expectation under this convention. 6. 
6. The identity $I(X; Y) = H(X) - H(X|Y)$ is invalid if $H(X|Y)$ (and hence $H(X)$) is equal to infinity. Give an example such that $I(X; Y)$ has a finite value but both $H(X)$ and $H(Y|X)$ are equal to infinity.

7. Let $p'_{XY}$ and $p_{XY}$ be probability distributions defined on $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are fixed finite alphabets. Prove that
$$\lim_{p'_{XY} \to p_{XY}} p'_X = p_X,$$
where the limit is taken with respect to the variational distance.

8. Let $p_k$ and $p$ be probability distributions defined on a common finite alphabet. Show that as $k \to \infty$, if $p_k \to p$ in variational distance, then $p_k \to p$ in $\mathcal{L}^2$, and vice versa.

9. Consider any probability distribution $p(x, y, z)$ and let
$$q(x, y, z) = \begin{cases} p(x)p(y)p(z|x, y) & \text{if } p(x, y) > 0 \\ 0 & \text{otherwise.} \end{cases}$$
a) Show that $q(x, y, z)$ is in general not a probability distribution.
b) By ignoring the fact that $q(x, y, z)$ may not be a probability distribution, application of the divergence inequality $D(p\|q) \ge 0$ would yield the inequality
$$H(X) + H(Y) + H(Z|X, Y) \ge H(X, Y, Z),$$
which indeed holds for all jointly distributed random variables $X$, $Y$, and $Z$. Explain.

10. Let $C_\alpha = \sum_{n=2}^\infty \frac{1}{n(\log n)^\alpha}$.
a) Prove that
$$C_\alpha \begin{cases} < \infty & \text{if } \alpha > 1 \\ = \infty & \text{if } 0 \le \alpha \le 1. \end{cases}$$
Then
$$p_\alpha(n) = [C_\alpha\, n (\log n)^\alpha]^{-1}, \quad n = 2, 3, \cdots$$
is a probability distribution for $\alpha > 1$.
b) Prove that
$$H(p_\alpha) \begin{cases} < \infty & \text{if } \alpha > 2 \\ = \infty & \text{if } 1 < \alpha \le 2. \end{cases}$$

11. Prove that $H(p)$ is concave in $p$, i.e., for $0 \le \lambda \le 1$ and $\bar\lambda = 1 - \lambda$,
$$\lambda H(p_1) + \bar\lambda H(p_2) \le H(\lambda p_1 + \bar\lambda p_2).$$

12. Let $(X, Y) \sim p(x, y) = p(x)p(y|x)$.
a) Prove that for fixed $p(x)$, $I(X; Y)$ is a convex functional of $p(y|x)$.
b) Prove that for fixed $p(y|x)$, $I(X; Y)$ is a concave functional of $p(x)$.

13. Do $I(X; Y) = 0$ and $I(X; Y|Z) = 0$ imply each other? If so, give a proof. If not, give a counterexample.

14. Give an example for which $D(\cdot\|\cdot)$ does not satisfy the triangle inequality.

15. Let $X$ be a function of $Y$. Prove that $H(X) \le H(Y)$. Interpret this result.

16. Prove that for any $n \ge 2$,
$$H(X_1, X_2, \cdots, X_n) \ge \sum_{i \ne j} H(X_i|X_j).$$
Give the necessary and sufficient condition for equality to hold and construct a nontrivial example that satisfies this condition.

17. Prove that for any $n \ge 2$,
$$H(X_1, X_2, \cdots, X_n) \ge \sum_{i=1}^n H(X_i|X_j, j \ne i).$$

18. Prove that
$$H(X_1, X_2) + H(X_2, X_3) + H(X_1, X_3) \ge 2H(X_1, X_2, X_3).$$
Hint: Sum the identities
$$H(X_1, X_2, X_3) = H(X_j, j \ne i) + H(X_i|X_j, j \ne i)$$
for $i = 1, 2, 3$ and apply the result in Problem 17.

19. For a subset $\alpha$ of $\mathcal{N}_n = \{1, 2, \cdots, n\}$, denote $(X_i, i \in \alpha)$ by $X_\alpha$. For $1 \le k \le n$, let
$$H_k = \frac{1}{\binom{n}{k}} \sum_{\alpha : |\alpha| = k} \frac{H(X_\alpha)}{k}.$$
Here $H_k$ is interpreted as the average entropy per random variable when $k$ random variables are taken from $X_1, X_2, \cdots, X_n$ at a time. Prove that
$$H_1 \ge H_2 \ge \cdots \ge H_n.$$
This sequence of inequalities, due to Han, is a generalization of the independence bound for entropy (Theorem 2.39). See Problem 6 in Chapter 21 for an application of these inequalities.

20. For a subset $\alpha$ of $\mathcal{N}_n = \{1, 2, \cdots, n\}$, let $\bar\alpha = \mathcal{N}_n \setminus \alpha$ and denote $(X_i, i \in \bar\alpha)$ by $X_{\bar\alpha}$. For $1 \le k \le n$, let
$$H'_k = \frac{1}{\binom{n}{k}} \sum_{\alpha : |\alpha| = k} \frac{H(X_\alpha|X_{\bar\alpha})}{k}.$$
Prove that
$$H'_1 \le H'_2 \le \cdots \le H'_n.$$
Note that $H'_n$ is equal to $H_n$ in the last problem. This sequence of inequalities is again due to Han. See Yeung and Cai for an application of these inequalities.

21. Prove the divergence inequality by using the log-sum inequality.

22. Prove that $D(p\|q)$ is convex in the pair $(p, q)$, i.e., if $(p_1, q_1)$ and $(p_2, q_2)$ are two pairs of probability distributions on a common alphabet, then
$$D(\lambda p_1 + \bar\lambda p_2 \,\|\, \lambda q_1 + \bar\lambda q_2) \le \lambda D(p_1\|q_1) + \bar\lambda D(p_2\|q_2)$$
for all $0 \le \lambda \le 1$, where $\bar\lambda = 1 - \lambda$.

23. Let $p_{XY}$ and $q_{XY}$ be two probability distributions on $\mathcal{X} \times \mathcal{Y}$.
Prove that $D(p_{XY}\|q_{XY}) \ge D(p_X\|q_X)$.

24. Pinsker's inequality. Let $V(p, q)$ denote the variational distance between two probability distributions $p$ and $q$ on a common alphabet $\mathcal{X}$. We will determine the largest $c$ which satisfies
$$D(p\|q) \ge c\, V^2(p, q).$$
a) Let $A = \{x : p(x) \ge q(x)\}$, $\hat{p} = \{p(A), 1 - p(A)\}$, and $\hat{q} = \{q(A), 1 - q(A)\}$. Show that $D(p\|q) \ge D(\hat{p}\|\hat{q})$ and $V(p, q) = V(\hat{p}, \hat{q})$.
b) Show that toward determining the largest value of $c$, we only have to consider the case when $\mathcal{X}$ is binary.
c) By virtue of b), it suffices to determine the largest $c$ such that
$$p \log \frac{p}{q} + (1 - p) \log \frac{1 - p}{1 - q} - 4c(p - q)^2 \ge 0$$
for all $0 \le p, q \le 1$, with the convention that $0 \log \frac{0}{b} = 0$ for $b \ge 0$ and $a \log \frac{a}{0} = \infty$ for $a > 0$. By observing that equality in the above holds if $p = q$, and by considering the derivative of the left-hand side with respect to $q$, show that the largest value of $c$ is equal to $(2 \ln 2)^{-1}$.

25. Let $p$ and $q_k$, $k \ge 1$ be probability distributions on a common alphabet. Show that if $q_k$ converges to $p$ in divergence, then it also converges to $p$ in variational distance.

26. Find a necessary and sufficient condition for Fano's inequality to be tight.

27. Determine the probability distribution defined on $\{0, 1, \cdots, n\}$ that maximizes the entropy subject to the constraint that the mean is equal to $m$, where $0 \le m \le n$.

28. Show that for a stationary source $\{X_k\}$, $\frac{1}{n} H(X_1, X_2, \cdots, X_n)$ is non-increasing in $n$.

29. For real numbers $\alpha > 1$ and $\beta > 0$ and an integer $n \ge \alpha$, define the probability distribution
$$\mathcal{D}_n^{(\alpha,\beta)} = \Bigg\{1 - \left(\frac{\log \alpha}{\log n}\right)^{\!\beta},\ \underbrace{\frac{1}{n}\left(\frac{\log \alpha}{\log n}\right)^{\!\beta},\ \cdots,\ \frac{1}{n}\left(\frac{\log \alpha}{\log n}\right)^{\!\beta}}_{n},\ 0,\ 0,\ \cdots\Bigg\}.$$
Let $\nu = \{1, 0, 0, \cdots\}$ be the deterministic distribution.
a) Show that $\lim_{n\to\infty} D\big(\nu \,\|\, \mathcal{D}_n^{(\alpha,\beta)}\big) = 0$.
b) Determine $\lim_{n\to\infty} H\big(\mathcal{D}_n^{(\alpha,\beta)}\big)$.

30. Discontinuity of entropy with respect to convergence in divergence. Let $\mathcal{P}$ be the set of all probability distributions on a countable alphabet. A function $f : \mathcal{P} \to \Re$ is continuous with respect to convergence in divergence at $P \in \mathcal{P}$ if for any $\epsilon > 0$, there exists $\delta > 0$ such that $|f(P) - f(Q)| < \epsilon$ for all $Q \in \mathcal{P}$ satisfying $D(P\|Q) < \delta$; otherwise, $f$ is discontinuous at $P$.
a) Let $H : \mathcal{P} \to \Re$ be the entropy function. Show that $H$ is discontinuous at the deterministic distribution $\nu = \{1, 0, 0, \cdots\}$. Hint: Use the results in Problem 29.
b) Show that $H$ is discontinuous at $P = \{p_0, p_1, p_2, \cdots\}$ for all $P$ such that $H(P) < \infty$. Hint: Consider the probability distribution
$$Q_n = \left\{p_0 - \frac{p_0}{\sqrt{\log n}},\ p_1 + \frac{p_0}{n\sqrt{\log n}},\ p_2 + \frac{p_0}{n\sqrt{\log n}},\ \cdots,\ p_n + \frac{p_0}{n\sqrt{\log n}},\ p_{n+1},\ p_{n+2},\ \cdots\right\}$$
for large $n$.

31. Discontinuity of entropy with respect to convergence in variational distance. Refer to Problem 30. The continuity of a function $f : \mathcal{P} \to \Re$ with respect to convergence in variational distance can be defined similarly.
a) Show that if a function $f$ is continuous with respect to convergence in variational distance, then it is also continuous with respect to convergence in divergence. Hint: Use Pinsker's inequality.
b) Repeat b) in Problem 30 with continuity defined with respect to convergence in variational distance.

32. Continuity of the entropy function for a fixed finite alphabet. Refer to Problems 30 and 31. Suppose the domain of $H$ is confined to $\mathcal{P}'$, the set of all probability distributions on a fixed finite alphabet. Show that $H$ is continuous with respect to convergence in divergence.

33. Let $p = \{p_1, p_2, \cdots, p_n\}$ and $q = \{q_1, q_2, \cdots, q_n\}$ be two sets of real numbers such that $p_i \ge p_{i'}$ and $q_i \ge q_{i'}$ for all $i < i'$.
We say that $p$ is majorized by $q$ if $\sum_{i=1}^m p_i \le \sum_{j=1}^m q_j$ for all $m = 1, 2, \cdots, n$, where equality holds when $m = n$. A function $f : \Re^n \to \Re$ is Schur-concave if $f(p) \ge f(q)$ whenever $p$ is majorized by $q$. Now let $p$ and $q$ be probability distributions. We will show in the following steps that $H(\cdot)$ is Schur-concave.
a) Show that for $p \ne q$, there exist $1 \le j < k \le n$ which satisfy the following:
   i) $j$ is the largest index $i$ such that $p_i < q_i$;
   ii) $k$ is the smallest index $i$ such that $i > j$ and $p_i > q_i$;
   iii) $p_i = q_i$ for all $j < i < k$.
b) Consider the distribution $q^* = \{q^*_1, q^*_2, \cdots, q^*_n\}$ defined by $q^*_i = q_i$ for $i \ne j, k$ and
$$(q^*_j, q^*_k) = \begin{cases} (p_j,\ q_k + (q_j - p_j)) & \text{if } p_k - q_k \ge q_j - p_j \\ (q_j - (p_k - q_k),\ p_k) & \text{if } p_k - q_k < q_j - p_j. \end{cases}$$
Note that either $q^*_j = p_j$ or $q^*_k = p_k$. Show that
   i) $q^*_i \ge q^*_{i'}$ for all $i \le i'$;
   ii) $\sum_{i=1}^m p_i \le \sum_{i=1}^m q^*_i$ for all $m = 1, 2, \cdots, n$;
   iii) $H(q^*) \ge H(q)$.
c) Prove that $H(p) \ge H(q)$ by induction on the Hamming distance between $p$ and $q$, i.e., the number of places where $p$ and $q$ differ.
In general, if a concave function $f$ is symmetric, i.e., $f(p) = f(p')$ where $p'$ is a permutation of $p$, then $f$ is Schur-concave. We refer the reader to the literature on the theory of majorization (Hardy, Littlewood, and Pólya).

Historical Notes

The concept of entropy has its root in thermodynamics. Shannon was the first to use entropy as a measure of information. Informational divergence was introduced by Kullback and Leibler, and it has been studied extensively by Csiszár and Amari. Most of the materials in this chapter can be found in standard textbooks in information theory. The main concepts and results are due to Shannon. Pinsker's inequality is due to Pinsker. Fano's inequality has its origin in the converse proof of the channel coding theorem (to be discussed in Chapter 7) by Fano. Generalizations of Fano's inequality which apply to random variables with countable alphabets have been obtained by Han and Verdú and by Ho. Maximum entropy, a concept in statistical mechanics, was expounded in Jaynes.

3 The I-Measure

In Chapter 2, we have illustrated the relationship between Shannon's information measures for two random variables by the diagram in Figure 2.2. For convenience, Figure 2.2 is reproduced in Figure 3.1, with the random variables $X$ and $Y$ replaced by $X_1$ and $X_2$, respectively. This diagram suggests that Shannon's information measures for any $n \ge 2$ random variables may have a set-theoretic structure.

In this chapter, we develop a theory which establishes a one-to-one correspondence between Shannon's information measures and set theory in full generality. With this correspondence, manipulations of Shannon's information measures can be viewed as set operations, thus allowing the rich suite of tools in set theory to be used in information theory. Moreover, the structure of Shannon's information measures can easily be visualized by means of an information diagram if four or fewer random variables are involved. The use of information diagrams simplifies many difficult proofs in information theory. More importantly, results which may be difficult to discover in the first place can easily be obtained by inspection of an information diagram.

[Fig. 3.1. Relationship between entropies and mutual information for two random variables.]

The main concepts to be used in this chapter are from measure theory.
However, it is not necessary for the reader to know measure theory to read this chapter.

3.1 Preliminaries

In this section, we introduce a few basic concepts in measure theory which will be used subsequently. These concepts will be illustrated by simple examples.

Definition 3.1. The field $\mathcal{F}_n$ generated by sets $\tilde{X}_1, \tilde{X}_2, \cdots, \tilde{X}_n$ is the collection of sets which can be obtained by any sequence of usual set operations (union, intersection, complement, and difference) on $\tilde{X}_1, \tilde{X}_2, \cdots, \tilde{X}_n$.

Definition 3.2. The atoms of $\mathcal{F}_n$ are sets of the form $\cap_{i=1}^n Y_i$, where $Y_i$ is either $\tilde{X}_i$ or $\tilde{X}_i^c$, the complement of $\tilde{X}_i$.

There are $2^n$ atoms and $2^{2^n}$ sets in $\mathcal{F}_n$. Evidently, all the atoms in $\mathcal{F}_n$ are disjoint, and each set in $\mathcal{F}_n$ can be expressed uniquely as the union of a subset of the atoms of $\mathcal{F}_n$.¹ We assume that the sets $\tilde{X}_1, \tilde{X}_2, \cdots, \tilde{X}_n$ intersect with each other generically, i.e., all the atoms of $\mathcal{F}_n$ are nonempty unless otherwise specified.

Example 3.3. The sets $\tilde{X}_1$ and $\tilde{X}_2$ generate the field $\mathcal{F}_2$. The atoms of $\mathcal{F}_2$ are
$$\tilde{X}_1 \cap \tilde{X}_2, \quad \tilde{X}_1^c \cap \tilde{X}_2, \quad \tilde{X}_1 \cap \tilde{X}_2^c, \quad \tilde{X}_1^c \cap \tilde{X}_2^c, \quad (3.1)$$
which are represented by the four distinct regions in the Venn diagram in Figure 3.2. The field $\mathcal{F}_2$ consists of the unions of subsets of the atoms in (3.1). There are a total of 16 sets in $\mathcal{F}_2$, which are precisely all the sets which can be obtained from $\tilde{X}_1$ and $\tilde{X}_2$ by the usual set operations.

[Fig. 3.2. The Venn diagram for $\tilde{X}_1$ and $\tilde{X}_2$.]

Definition 3.4. A real function $\mu$ defined on $\mathcal{F}_n$ is called a signed measure if it is set-additive, i.e., for disjoint $A$ and $B$ in $\mathcal{F}_n$,
$$\mu(A \cup B) = \mu(A) + \mu(B). \quad (3.2)$$

For a signed measure $\mu$, we have
$$\mu(\emptyset) = 0, \quad (3.3)$$
which can be seen as follows. For any $A$ in $\mathcal{F}_n$,
$$\mu(A) = \mu(A \cup \emptyset) = \mu(A) + \mu(\emptyset) \quad (3.4)$$
by set-additivity, because $A$ and $\emptyset$ are disjoint, which implies (3.3).

A signed measure $\mu$ on $\mathcal{F}_n$ is completely specified by its values on the atoms of $\mathcal{F}_n$. The values of $\mu$ on the other sets in $\mathcal{F}_n$ can be obtained via set-additivity.

Example 3.5. A signed measure $\mu$ on $\mathcal{F}_2$ is completely specified by the values
$$\mu(\tilde{X}_1 \cap \tilde{X}_2), \quad \mu(\tilde{X}_1^c \cap \tilde{X}_2), \quad \mu(\tilde{X}_1 \cap \tilde{X}_2^c), \quad \mu(\tilde{X}_1^c \cap \tilde{X}_2^c). \quad (3.5)$$
The value of $\mu$ on $\tilde{X}_1$, for example, can be obtained as
$$\mu(\tilde{X}_1) = \mu\big((\tilde{X}_1 \cap \tilde{X}_2) \cup (\tilde{X}_1 \cap \tilde{X}_2^c)\big) = \mu(\tilde{X}_1 \cap \tilde{X}_2) + \mu(\tilde{X}_1 \cap \tilde{X}_2^c). \quad (3.6)-(3.7)$$

¹ We adopt the convention that the union of the empty subset of the atoms of $\mathcal{F}_n$ is the empty set.

3.2 The I-Measure for Two Random Variables

To fix ideas, we first formulate in this section the one-to-one correspondence between Shannon's information measures and set theory for two random variables. For random variables $X_1$ and $X_2$, let $\tilde{X}_1$ and $\tilde{X}_2$ be sets corresponding to $X_1$ and $X_2$, respectively. The sets $\tilde{X}_1$ and $\tilde{X}_2$ generate the field $\mathcal{F}_2$ whose atoms are listed in (3.1). In our formulation, we set the universal set $\Omega$ to $\tilde{X}_1 \cup \tilde{X}_2$ for reasons which will become clear later. With this choice of $\Omega$, the Venn diagram for $\tilde{X}_1$ and $\tilde{X}_2$ is represented by the diagram in Figure 3.3. For simplicity, the sets $\tilde{X}_1$ and $\tilde{X}_2$ are respectively labeled by $X_1$ and $X_2$ in the diagram. We call this the information diagram for the random variables $X_1$ and $X_2$. In this diagram, the universal set, which is the union of $\tilde{X}_1$ and $\tilde{X}_2$, is not shown explicitly, just as in a usual Venn diagram. Note that with our choice of the universal set, the atom $\tilde{X}_1^c \cap \tilde{X}_2^c$ degenerates to the empty set, because
$$\tilde{X}_1^c \cap \tilde{X}_2^c = (\tilde{X}_1 \cup \tilde{X}_2)^c = \Omega^c = \emptyset. \quad (3.8)$$

[Fig. 3.3. The generic information diagram for $X_1$ and $X_2$.]
Thus this atom is not shown in the information diagram in Figure 3.3.

For random variables $X_1$ and $X_2$, the Shannon's information measures are
$$H(X_1),\ H(X_2),\ H(X_1|X_2),\ H(X_2|X_1),\ H(X_1, X_2),\ I(X_1; X_2). \quad (3.9)$$
Writing $A \cap B^c$ as $A - B$, we now define a signed measure² $\mu^*$ by
$$\mu^*(\tilde{X}_1 - \tilde{X}_2) = H(X_1|X_2), \quad (3.10)$$
$$\mu^*(\tilde{X}_2 - \tilde{X}_1) = H(X_2|X_1), \quad (3.11)$$
and
$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2) = I(X_1; X_2). \quad (3.12)$$
These are the values of $\mu^*$ on the nonempty atoms of $\mathcal{F}_2$ (i.e., the atoms of $\mathcal{F}_2$ other than $\tilde{X}_1^c \cap \tilde{X}_2^c$). The values of $\mu^*$ on the other sets in $\mathcal{F}_2$ can be obtained via set-additivity. In particular, the relations
$$\mu^*(\tilde{X}_1 \cup \tilde{X}_2) = H(X_1, X_2), \quad (3.13)$$
$$\mu^*(\tilde{X}_1) = H(X_1), \quad (3.14)$$
and
$$\mu^*(\tilde{X}_2) = H(X_2) \quad (3.15)$$
can readily be verified. For example, (3.13) is seen to be true by considering
$$\mu^*(\tilde{X}_1 \cup \tilde{X}_2) = \mu^*(\tilde{X}_1 - \tilde{X}_2) + \mu^*(\tilde{X}_2 - \tilde{X}_1) + \mu^*(\tilde{X}_1 \cap \tilde{X}_2) \quad (3.16)$$
$$= H(X_1|X_2) + H(X_2|X_1) + I(X_1; X_2) \quad (3.17)$$
$$= H(X_1, X_2). \quad (3.18)$$

The right-hand sides of (3.10) to (3.15) are the six Shannon's information measures for $X_1$ and $X_2$ in (3.9). Now observe that (3.10) to (3.15) are consistent with how the Shannon's information measures are identified in Figure 3.1, with the left circle and the right circle representing the sets $\tilde{X}_1$ and $\tilde{X}_2$, respectively. Specifically, in each of these equations, the left-hand side and the right-hand side correspond to each other via the following substitution of symbols:
$$H/I \leftrightarrow \mu^*, \qquad {,} \leftrightarrow \cup, \qquad {;} \leftrightarrow \cap, \qquad {|} \leftrightarrow -. \quad (3.19)$$
Note that we make no distinction between the symbols $H$ and $I$ in this substitution. Thus, for two random variables $X_1$ and $X_2$, Shannon's information measures can be regarded formally as a signed measure on $\mathcal{F}_2$. We will refer to $\mu^*$ as the I-Measure for the random variables $X_1$ and $X_2$.³

Upon realizing that Shannon's information measures can be viewed as a signed measure, we can apply the rich family of operations in set theory to information theory. This explains why Figure 3.1 or Figure 3.3 represents the relationships among all Shannon's information measures for two random variables correctly. As an example, consider the following set identity, which is readily identified in Figure 3.3:
$$\mu^*(\tilde{X}_1 \cup \tilde{X}_2) = \mu^*(\tilde{X}_1) + \mu^*(\tilde{X}_2) - \mu^*(\tilde{X}_1 \cap \tilde{X}_2). \quad (3.20)$$
This identity is a special case of the inclusion-exclusion formula in set theory. By means of the substitution of symbols in (3.19), we immediately obtain the information identity
$$H(X_1, X_2) = H(X_1) + H(X_2) - I(X_1; X_2). \quad (3.21)$$

We end this section with a remark. The value of $\mu^*$ on the atom $\tilde{X}_1^c \cap \tilde{X}_2^c$ has no apparent information-theoretic meaning. In our formulation, we set the universal set $\Omega$ to $\tilde{X}_1 \cup \tilde{X}_2$ so that the atom $\tilde{X}_1^c \cap \tilde{X}_2^c$ degenerates to the empty set. Then $\mu^*(\tilde{X}_1^c \cap \tilde{X}_2^c)$ naturally vanishes because $\mu^*$ is a measure, so that $\mu^*$ is completely specified by all Shannon's information measures involving the random variables $X_1$ and $X_2$.

² It happens that $\mu^*$ defined here for $n = 2$ assumes only nonnegative values, but we will see in Section 3.4 that $\mu^*$ can assume negative values for $n \ge 3$.
³ The reader should not confuse $\mu^*$ with the probability measure defining the random variables $X_1$ and $X_2$. The former, however, is determined by the latter.
3.3 Construction of the I-Measure $\mu^*$

We have constructed the I-Measure for two random variables in the last section. We now construct the I-Measure for any $n \ge 2$ random variables.

Consider $n$ random variables $X_1, X_2, \cdots, X_n$. For any random variable $X$, let $\tilde{X}$ be a set corresponding to $X$. Let
$$\mathcal{N}_n = \{1, 2, \cdots, n\}. \quad (3.22)$$
Define the universal set $\Omega$ to be the union of the sets $\tilde{X}_1, \tilde{X}_2, \cdots, \tilde{X}_n$, i.e.,
$$\Omega = \bigcup_{i \in \mathcal{N}_n} \tilde{X}_i. \quad (3.23)$$
We use $\mathcal{F}_n$ to denote the field generated by $\tilde{X}_1, \tilde{X}_2, \cdots, \tilde{X}_n$. The set
$$A_0 = \bigcap_{i \in \mathcal{N}_n} \tilde{X}_i^c \quad (3.24)$$
is called the empty atom of $\mathcal{F}_n$ because
$$\bigcap_{i \in \mathcal{N}_n} \tilde{X}_i^c = \Big(\bigcup_{i \in \mathcal{N}_n} \tilde{X}_i\Big)^c = \Omega^c = \emptyset. \quad (3.25)$$
All the atoms of $\mathcal{F}_n$ other than $A_0$ are called nonempty atoms. Let $\mathcal{A}$ be the set of all nonempty atoms of $\mathcal{F}_n$. Then $|\mathcal{A}|$, the cardinality of $\mathcal{A}$, is equal to $2^n - 1$. A signed measure $\mu$ on $\mathcal{F}_n$ is completely specified by the values of $\mu$ on the nonempty atoms of $\mathcal{F}_n$.

To simplify notation, we will use $X_G$ to denote $(X_i, i \in G)$ and $\tilde{X}_G$ to denote $\cup_{i \in G} \tilde{X}_i$ for any nonempty subset $G$ of $\mathcal{N}_n$.

Theorem 3.6. Let
$$\mathcal{B} = \{\tilde{X}_G : G \text{ is a nonempty subset of } \mathcal{N}_n\}. \quad (3.26)$$
Then a signed measure $\mu$ on $\mathcal{F}_n$ is completely specified by $\{\mu(B), B \in \mathcal{B}\}$, which can be any set of real numbers.

Proof. The number of elements in $\mathcal{B}$ is equal to the number of nonempty subsets of $\mathcal{N}_n$, which is $2^n - 1$. Thus $|\mathcal{A}| = |\mathcal{B}| = 2^n - 1$. Let $k = 2^n - 1$. Let $\mathbf{u}$ be a column $k$-vector of $\mu(A)$, $A \in \mathcal{A}$, and let $\mathbf{h}$ be a column $k$-vector of $\mu(B)$, $B \in \mathcal{B}$. Since all the sets in $\mathcal{B}$ can be expressed uniquely as the union of some nonempty atoms in $\mathcal{A}$, by the set-additivity of $\mu$, for each $B \in \mathcal{B}$, $\mu(B)$ can be expressed uniquely as the sum of some components of $\mathbf{u}$. Thus
$$\mathbf{h} = C_n \mathbf{u}, \quad (3.27)$$
where $C_n$ is a unique $k \times k$ matrix. On the other hand, it can be shown (see Appendix 3.A) that for each $A \in \mathcal{A}$, $\mu(A)$ can be expressed as a linear combination of $\mu(B)$, $B \in \mathcal{B}$, by applications, if necessary, of the following two identities:
$$\mu(A \cap B - C) = \mu(A - C) + \mu(B - C) - \mu(A \cup B - C), \quad (3.28)$$
$$\mu(A - B) = \mu(A \cup B) - \mu(B). \quad (3.29)$$
However, the existence of the said expression does not imply its uniqueness. Nevertheless, we can write
$$\mathbf{u} = D_n \mathbf{h} \quad (3.30)$$
for some $k \times k$ matrix $D_n$. Upon substituting (3.27) into (3.30), we obtain
$$\mathbf{u} = (D_n C_n)\mathbf{u}, \quad (3.31)$$
which implies that $D_n$ is the inverse of $C_n$, as (3.31) holds regardless of the choice of $\mu$. Since $C_n$ is unique, so is $D_n$. Therefore, $\mu(A)$, $A \in \mathcal{A}$ are uniquely determined once $\mu(B)$, $B \in \mathcal{B}$ are specified. Hence, a signed measure $\mu$ on $\mathcal{F}_n$ is completely specified by $\{\mu(B), B \in \mathcal{B}\}$, which can be any set of real numbers. The theorem is proved. □

We now prove the following two lemmas, which are related by the substitution of symbols in (3.19).

Lemma 3.7.
$$\mu(A \cap B - C) = \mu(A \cup C) + \mu(B \cup C) - \mu(A \cup B \cup C) - \mu(C). \quad (3.32)$$

Proof. From (3.28) and (3.29), we have
$$\mu(A \cap B - C) = \mu(A - C) + \mu(B - C) - \mu(A \cup B - C) \quad (3.33)$$
$$= \big(\mu(A \cup C) - \mu(C)\big) + \big(\mu(B \cup C) - \mu(C)\big) - \big(\mu(A \cup B \cup C) - \mu(C)\big) \quad (3.34)$$
$$= \mu(A \cup C) + \mu(B \cup C) - \mu(A \cup B \cup C) - \mu(C). \quad (3.35)$$
The lemma is proved. □

Lemma 3.8.
$$I(X; Y|Z) = H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z). \quad (3.36)$$

Proof. Consider
$$I(X; Y|Z) = H(X|Z) - H(X|Y, Z) \quad (3.37)$$
$$= H(X, Z) - H(Z) - \big(H(X, Y, Z) - H(Y, Z)\big) \quad (3.38)$$
$$= H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z). \quad (3.39)$$
The lemma is proved. □

We now construct the I-Measure $\mu^*$ on $\mathcal{F}_n$ using Theorem 3.6 by defining
$$\mu^*(\tilde{X}_G) = H(X_G) \quad (3.40)$$
for all nonempty subsets $G$ of $\mathcal{N}_n$. In order for $\mu^*$ to be meaningful, it has to be consistent with all Shannon's information measures (via the substitution of symbols in (3.19)). In that case, the following must hold for all (not necessarily disjoint) subsets $G$, $G'$, $G''$ of $\mathcal{N}_n$, where $G$ and $G'$ are nonempty:
$$\mu^*(\tilde{X}_G \cap \tilde{X}_{G'} - \tilde{X}_{G''}) = I(X_G; X_{G'}|X_{G''}). \quad (3.41)$$
When $G'' = \emptyset$, (3.41) becomes
$$\mu^*(\tilde{X}_G \cap \tilde{X}_{G'}) = I(X_G; X_{G'}). \quad (3.42)$$
When $G = G'$, (3.41) becomes
$$\mu^*(\tilde{X}_G - \tilde{X}_{G''}) = H(X_G|X_{G''}). \quad (3.43)$$
When $G = G'$ and $G'' = \emptyset$, (3.41) becomes
$$\mu^*(\tilde{X}_G) = H(X_G). \quad (3.44)$$
Thus (3.41) covers all four cases of Shannon's information measures, and it is the necessary and sufficient condition for $\mu^*$ to be consistent with all Shannon's information measures.

Theorem 3.9. $\mu^*$ is the unique signed measure on $\mathcal{F}_n$ which is consistent with all Shannon's information measures.

Proof. Consider
$$\mu^*(\tilde{X}_G \cap \tilde{X}_{G'} - \tilde{X}_{G''}) = \mu^*(\tilde{X}_{G \cup G''}) + \mu^*(\tilde{X}_{G' \cup G''}) - \mu^*(\tilde{X}_{G \cup G' \cup G''}) - \mu^*(\tilde{X}_{G''}) \quad (3.45)$$
$$= H(X_{G \cup G''}) + H(X_{G' \cup G''}) - H(X_{G \cup G' \cup G''}) - H(X_{G''}) \quad (3.46)$$
$$= I(X_G; X_{G'}|X_{G''}), \quad (3.47)$$
where (3.45) and (3.47) follow from Lemmas 3.7 and 3.8, respectively, and (3.46) follows from (3.40), the definition of $\mu^*$. Thus we have proved (3.41), i.e., $\mu^*$ is consistent with all Shannon's information measures. Moreover, in order for a signed measure to be consistent with all Shannon's information measures, it has to satisfy (3.44) for all nonempty subsets $G$ of $\mathcal{N}_n$, which in fact is the definition of $\mu^*$ in (3.40). Therefore, $\mu^*$ is the unique signed measure on $\mathcal{F}_n$ which is consistent with all Shannon's information measures. □

3.4 $\mu^*$ Can Be Negative

In the previous sections, we have been cautious in referring to the I-Measure $\mu^*$ as a signed measure instead of a measure.⁴ In this section, we show that $\mu^*$ in fact can take negative values for $n \ge 3$.

For $n = 2$, the three nonempty atoms of $\mathcal{F}_2$ are
$$\tilde{X}_1 \cap \tilde{X}_2, \quad \tilde{X}_1 - \tilde{X}_2, \quad \tilde{X}_2 - \tilde{X}_1. \quad (3.48)$$
The values of $\mu^*$ on these atoms are respectively
$$I(X_1; X_2), \quad H(X_1|X_2), \quad H(X_2|X_1). \quad (3.49)$$
These quantities are Shannon's information measures and hence nonnegative by the basic inequalities. Therefore, $\mu^*$ is always nonnegative for $n = 2$.

For $n = 3$, the seven nonempty atoms of $\mathcal{F}_3$ are
$$\tilde{X}_i - \tilde{X}_{\{j,k\}}, \quad \tilde{X}_i \cap \tilde{X}_j - \tilde{X}_k, \quad \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3, \quad (3.50)$$
where $1 \le i < j < k \le 3$. The values of $\mu^*$ on the first two types of atoms are
$$\mu^*(\tilde{X}_i - \tilde{X}_{\{j,k\}}) = H(X_i|X_j, X_k) \quad (3.51)$$
and
$$\mu^*(\tilde{X}_i \cap \tilde{X}_j - \tilde{X}_k) = I(X_i; X_j|X_k), \quad (3.52)$$
respectively, which are Shannon's information measures and therefore nonnegative. However, $\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3)$ does not correspond to a Shannon's information measure. In the next example, we show that $\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3)$ can actually be negative.

Example 3.10. In this example, all entropies are in the base 2. Let $X_1$ and $X_2$ be independent binary random variables with
$$\Pr\{X_i = 0\} = \Pr\{X_i = 1\} = 0.5, \quad (3.53)$$
$i = 1, 2$. Let
$$X_3 = (X_1 + X_2) \bmod 2. \quad (3.54)$$
It is easy to check that $X_3$ has the same marginal distribution as $X_1$ and $X_2$. Thus
$$H(X_i) = 1 \quad (3.55)$$
for $i = 1, 2, 3$. Moreover, $X_1$, $X_2$, and $X_3$ are pairwise independent. Therefore,
$$H(X_i, X_j) = 2 \quad (3.56)$$
and
$$I(X_i; X_j) = 0 \quad (3.57)$$
for $1 \le i < j \le 3$. We further see from (3.54) that each random variable is a function of the other two random variables. Then by the chain rule for entropy, we have
$$H(X_1, X_2, X_3) = H(X_1, X_2) + H(X_3|X_1, X_2) = 2 + 0 = 2. \quad (3.58)-(3.60)$$
Now for $1 \le i < j < k \le 3$,
$$I(X_i; X_j|X_k) = H(X_i, X_k) + H(X_j, X_k) - H(X_1, X_2, X_3) - H(X_k) = 2 + 2 - 2 - 1 = 1, \quad (3.61)-(3.63)$$
where we have invoked Lemma 3.8. It then follows that
$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3) = \mu^*(\tilde{X}_1 \cap \tilde{X}_2) - \mu^*(\tilde{X}_1 \cap \tilde{X}_2 - \tilde{X}_3) \quad (3.64)$$
$$= I(X_1; X_2) - I(X_1; X_2|X_3) \quad (3.65)$$
$$= 0 - 1 \quad (3.66)$$
$$= -1. \quad (3.67)$$
Thus $\mu^*$ takes a negative value on the atom $\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3$.

Motivated by the substitution of symbols in (3.19) for Shannon's information measures, we will write $\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3)$ as $I(X_1; X_2; X_3)$. In general, we will write
$$\mu^*(\tilde{X}_{G_1} \cap \tilde{X}_{G_2} \cap \cdots \cap \tilde{X}_{G_m} - \tilde{X}_F) \quad (3.68)$$
as
$$I(X_{G_1}; X_{G_2}; \cdots; X_{G_m}|X_F) \quad (3.69)$$
and refer to it as the mutual information between $X_{G_1}, X_{G_2}, \cdots, X_{G_m}$ conditioning on $X_F$.

⁴ A measure can assume only nonnegative values.
Motivated by the substitution of symbols in (3.19) for Shannon's information measures, we will write $\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3)$ as $I(X_1; X_2; X_3)$. In general, we will write

$$\mu^*(\tilde{X}_{G_1} \cap \tilde{X}_{G_2} \cap \cdots \cap \tilde{X}_{G_m} - \tilde{X}_F) \quad (3.68)$$

as

$$I(X_{G_1}; X_{G_2}; \cdots; X_{G_m}|X_F) \quad (3.69)$$

and refer to it as the mutual information between $X_{G_1}, X_{G_2}, \cdots, X_{G_m}$ conditioning on $X_F$. Then (3.64) in the above example can be written as

$$I(X_1; X_2; X_3) = I(X_1; X_2) - I(X_1; X_2|X_3). \quad (3.70)$$

For this example, $I(X_1; X_2; X_3) < 0$, which implies

$$I(X_1; X_2|X_3) > I(X_1; X_2). \quad (3.71)$$

Therefore, unlike entropy, the mutual information between two random variables can be increased by conditioning on a third random variable. Also, we note in (3.70) that although the expression on the right hand side is not symbolically symmetrical in $X_1$, $X_2$, and $X_3$, we see from the left hand side that it is in fact symmetrical in $X_1$, $X_2$, and $X_3$.

3.5 Information Diagrams

We have established in Section 3.3 a one-to-one correspondence between Shannon's information measures and set theory. Therefore, it is valid to use an information diagram, which is a variation of a Venn diagram, to represent the relationship between Shannon's information measures. For simplicity, a set $\tilde{X}_i$ will be labeled by $X_i$ in an information diagram.

We have seen the generic information diagram for $n = 2$ in Figure 3.3. A generic information diagram for $n = 3$ is shown in Figure 3.4. The information-theoretic labeling of the values of $\mu^*$ on some of the sets in $\mathcal{F}_3$ is shown in the diagram.

[Fig. 3.4. The generic information diagram for $X_1$, $X_2$, and $X_3$, with regions labeled by quantities such as $H(X_1)$, $I(X_1; X_3)$, $I(X_1; X_3|X_2)$, and $H(X_1|X_2, X_3)$.]

As an example, the information diagram for the I-Measure $\mu^*$ for random variables $X_1$, $X_2$, and $X_3$ discussed in Example 3.10 is shown in Figure 3.5.

[Fig. 3.5. The information diagram for $X_1$, $X_2$, and $X_3$ in Example 3.10.]

For $n \ge 4$, it is not possible to display an information diagram perfectly in two dimensions. In general, an information diagram for $n$ random variables needs $n - 1$ dimensions to be displayed perfectly. Nevertheless, for $n = 4$, an information diagram can be displayed in two dimensions almost perfectly as shown in Figure 3.6.

[Fig. 3.6. The generic information diagram for $X_1$, $X_2$, $X_3$, and $X_4$.]

This information diagram is correct in that the region representing the set $\tilde{X}_4$ splits each atom in Figure 3.4 into two atoms. However, the adjacency of certain atoms is not displayed correctly. For example, the set $\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_4^c$, which consists of the atoms $\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3 \cap \tilde{X}_4^c$ and $\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4^c$, is not represented by a connected region because the two atoms are not adjacent to each other.

When $\mu^*$ takes the value zero on an atom $A$ of $\mathcal{F}_n$, we do not need to display the atom $A$ in an information diagram because the atom $A$ does not contribute to $\mu^*(B)$ for any set $B$ containing the atom $A$. As we will see shortly, this can happen if certain Markov constraints are imposed on the random variables involved, and the information diagram can be simplified accordingly. In a generic information diagram (i.e., when there is no constraint on the random variables), however, all the atoms have to be displayed, as is implied by the next theorem.

Theorem 3.11. If there is no constraint on $X_1, X_2, \cdots, X_n$, then $\mu^*$ can take any set of nonnegative values on the nonempty atoms of $\mathcal{F}_n$.

Proof. We will prove the theorem by constructing an I-Measure $\mu^*$ which can take any set of nonnegative values on the nonempty atoms of $\mathcal{F}_n$. Recall that $\mathcal{A}$ is the set of all nonempty atoms of $\mathcal{F}_n$. Let $Y_A, A \in \mathcal{A}$ be mutually independent random variables. Now define the random variables $X_i, i = 1, 2, \cdots, n$ by

$$X_i = (Y_A : A \in \mathcal{A} \text{ and } A \subset \tilde{X}_i). \quad (3.72)$$

We determine the I-Measure $\mu^*$ for $X_1, X_2, \cdots, X_n$ so defined as follows.
Since $Y_A$ are mutually independent, for any nonempty subset $G$ of $\mathcal{N}_n$, we have

$$\begin{aligned} H(X_G) &= H(X_i, i \in G) \quad (3.73) \\ &= H((Y_A : A \in \mathcal{A} \text{ and } A \subset \tilde{X}_i), i \in G) \quad (3.74) \\ &= H(Y_A : A \in \mathcal{A} \text{ and } A \subset \tilde{X}_G) \quad (3.75) \\ &= \sum_{A \in \mathcal{A}: A \subset \tilde{X}_G} H(Y_A). \quad (3.76) \end{aligned}$$

On the other hand,

$$H(X_G) = \mu^*(\tilde{X}_G) = \sum_{A \in \mathcal{A}: A \subset \tilde{X}_G} \mu^*(A). \quad (3.77)$$

Equating the right hand sides of (3.76) and (3.77), we have

$$\sum_{A \in \mathcal{A}: A \subset \tilde{X}_G} H(Y_A) = \sum_{A \in \mathcal{A}: A \subset \tilde{X}_G} \mu^*(A). \quad (3.78)$$

Evidently, we can make the above equality hold for all nonempty subsets $G$ of $\mathcal{N}_n$ by taking

$$\mu^*(A) = H(Y_A) \quad (3.79)$$

for all $A \in \mathcal{A}$. By the uniqueness of $\mu^*$, this is also the only possibility for $\mu^*$. Since $H(Y_A)$ can take any nonnegative value by Corollary 2.44, $\mu^*$ can take any set of nonnegative values on the nonempty atoms of $\mathcal{F}_n$. The theorem is proved. ⊓⊔

In the rest of this section, we explore the structure of Shannon's information measures when $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain. To start with, we consider $n = 3$, i.e., $X_1 \to X_2 \to X_3$ forms a Markov chain. Since

$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3) = I(X_1; X_3|X_2) = 0, \quad (3.80)$$

the atom $\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3$ does not have to be displayed in an information diagram. Therefore, in constructing the information diagram, the regions representing the random variables $X_1$, $X_2$, and $X_3$ should overlap with each other such that the region corresponding to the atom $\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3$ is empty, while the regions corresponding to all other nonempty atoms are nonempty. Figure 3.7 shows such a construction, in which each random variable is represented by a "mountain."

[Fig. 3.7. The information diagram for the Markov chain $X_1 \to X_2 \to X_3$.]

From Figure 3.7, we see that $\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3$, as the only atom on which $\mu^*$ may take a negative value, now becomes identical to the atom $\tilde{X}_1 \cap \tilde{X}_3$. Therefore, we have

$$\begin{aligned} I(X_1; X_2; X_3) &= \mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3) \quad (3.81) \\ &= \mu^*(\tilde{X}_1 \cap \tilde{X}_3) \quad (3.82) \\ &= I(X_1; X_3) \quad (3.83) \\ &\ge 0. \quad (3.84) \end{aligned}$$

Hence, we conclude that when $X_1 \to X_2 \to X_3$ forms a Markov chain, $\mu^*$ is always nonnegative.
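The conclusion (3.81)-(3.84) can be checked numerically on any concrete Markov chain. In the following sketch, the chain $X_1 \to X_2 \to X_3$ is built from binary symmetric channels; the crossover probabilities 0.1 and 0.2 are arbitrary choices for illustration.

```python
import itertools
import math

# A hypothetical Markov chain X1 -> X2 -> X3: X1 is a fair bit, and each
# arrow is a binary symmetric channel with the given crossover probability.
def bsc(x, y, eps):
    return 1 - eps if x == y else eps

p = {}
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    p[(x1, x2, x3)] = 0.5 * bsc(x1, x2, 0.1) * bsc(x2, x3, 0.2)

def H(keep):
    marg = {}
    for xs, v in p.items():
        key = tuple(xs[i] for i in keep)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

def I(a, b, c):
    """I(Xa; Xb | X_c) via Lemma 3.8; c is a (possibly empty) list of indices."""
    return H([a] + c) + H([b] + c) - H([a, b] + c) - H(c)

print(I(0, 2, [1]))                    # I(X1;X3|X2) = 0 by the Markov property
print(I(0, 1, []) - I(0, 1, [2]))      # I(X1;X2;X3) ...
print(I(0, 2, []))                     # ... equals I(X1;X3) >= 0, as in (3.83)
```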
Next, we consider $n = 4$, i.e., $X_1 \to X_2 \to X_3 \to X_4$ forms a Markov chain. With reference to Figure 3.6, we first show that under this Markov constraint, $\mu^*$ always vanishes on certain nonempty atoms:

1. The Markov chain $X_1 \to X_2 \to X_3$ implies
$$I(X_1; X_3; X_4|X_2) + I(X_1; X_3|X_2, X_4) = I(X_1; X_3|X_2) = 0. \quad (3.85)$$
2. The Markov chain $X_1 \to X_2 \to X_4$ implies
$$I(X_1; X_3; X_4|X_2) + I(X_1; X_4|X_2, X_3) = I(X_1; X_4|X_2) = 0. \quad (3.86)$$
3. The Markov chain $X_1 \to X_3 \to X_4$ implies
$$I(X_1; X_2; X_4|X_3) + I(X_1; X_4|X_2, X_3) = I(X_1; X_4|X_3) = 0. \quad (3.87)$$
4. The Markov chain $X_2 \to X_3 \to X_4$ implies
$$I(X_1; X_2; X_4|X_3) + I(X_2; X_4|X_1, X_3) = I(X_2; X_4|X_3) = 0. \quad (3.88)$$
5. The Markov chain $(X_1, X_2) \to X_3 \to X_4$ implies
$$I(X_1; X_2; X_4|X_3) + I(X_1; X_4|X_2, X_3) + I(X_2; X_4|X_1, X_3) = I(X_1, X_2; X_4|X_3) \quad (3.89)$$
$$= 0. \quad (3.90)$$

Now (3.85) and (3.86) imply

$$I(X_1; X_4|X_2, X_3) = I(X_1; X_3|X_2, X_4), \quad (3.91)$$

(3.87) and (3.91) imply

$$I(X_1; X_2; X_4|X_3) = -I(X_1; X_3|X_2, X_4), \quad (3.92)$$

and (3.88) and (3.92) imply

$$I(X_2; X_4|X_1, X_3) = I(X_1; X_3|X_2, X_4). \quad (3.93)$$

The terms on the left hand sides of (3.91), (3.92), and (3.93) are the three terms on the left hand side of (3.90). Then we substitute (3.91), (3.92), and (3.93) in (3.90) to obtain

$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4^c) = I(X_1; X_3|X_2, X_4) = 0. \quad (3.94)$$

From (3.85), (3.91), (3.92), and (3.93), (3.94) implies

$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4) = I(X_1; X_3; X_4|X_2) = 0 \quad (3.95)$$
$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3^c \cap \tilde{X}_4) = I(X_1; X_4|X_2, X_3) = 0 \quad (3.96)$$
$$\mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4) = I(X_1; X_2; X_4|X_3) = 0 \quad (3.97)$$
$$\mu^*(\tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4) = I(X_2; X_4|X_1, X_3) = 0. \quad (3.98)$$

From (3.94) to (3.98), we see that $\mu^*$ always vanishes on the atoms

$$\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4^c, \quad \tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4, \quad \tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3^c \cap \tilde{X}_4, \quad \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4, \quad \tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4 \quad (3.99)$$

of $\mathcal{F}_4$, which we mark by an asterisk in the information diagram in Figure 3.8.

[Fig. 3.8. The atoms of $\mathcal{F}_4$ on which $\mu^*$ vanishes when $X_1 \to X_2 \to X_3 \to X_4$ forms a Markov chain.]

In fact, the reader can gain a lot of insight by letting $I(X_1; X_3|X_2, X_4) = a \ge 0$ in (3.85) and tracing the subsequent steps leading to the above conclusion in the information diagram in Figure 3.6.

It is not necessary to display the five atoms in (3.99) in an information diagram because $\mu^*$ always vanishes on these atoms. Therefore, in constructing the information diagram, the regions representing the random variables should overlap with each other such that the regions corresponding to these five nonempty atoms are empty, while the regions corresponding to the other ten nonempty atoms, namely

$$\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3^c \cap \tilde{X}_4^c, \quad \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4^c, \quad \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3 \cap \tilde{X}_4^c, \quad \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3 \cap \tilde{X}_4, \quad \tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4^c,$$
$$\tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3 \cap \tilde{X}_4^c, \quad \tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3 \cap \tilde{X}_4, \quad \tilde{X}_1^c \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4^c, \quad \tilde{X}_1^c \cap \tilde{X}_2^c \cap \tilde{X}_3 \cap \tilde{X}_4, \quad \tilde{X}_1^c \cap \tilde{X}_2^c \cap \tilde{X}_3^c \cap \tilde{X}_4, \quad (3.100)$$

are nonempty. Figure 3.9 shows such a construction. The reader should compare the information diagrams in Figures 3.7 and 3.9 and observe that the latter is an extension of the former.

[Fig. 3.9. The information diagram for the Markov chain $X_1 \to X_2 \to X_3 \to X_4$.]

From Figure 3.9, we see that the values of $\mu^*$ on the ten nonempty atoms in (3.100) are equivalent to

$$H(X_1|X_2, X_3, X_4), \quad I(X_1; X_2|X_3, X_4), \quad I(X_1; X_3|X_4), \quad I(X_1; X_4), \quad H(X_2|X_1, X_3, X_4),$$
$$I(X_2; X_3|X_1, X_4), \quad I(X_2; X_4|X_1), \quad H(X_3|X_1, X_2, X_4), \quad I(X_3; X_4|X_1, X_2), \quad H(X_4|X_1, X_2, X_3), \quad (3.101)$$

respectively (a formal proof will be given in Theorem 12.30). Since these are all Shannon's information measures and thus nonnegative, we conclude that $\mu^*$ is always nonnegative.

When $X_1 \to X_2 \to \cdots \to X_n$ forms a Markov chain, for $n = 3$, there is only one nonempty atom, namely $\tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3$, on which $\mu^*$ always vanishes. This atom can be determined directly from the Markov constraint $I(X_1; X_3|X_2) = 0$. For $n = 4$, the five nonempty atoms on which $\mu^*$ always vanishes are listed in (3.99). The determination of these atoms, as we have seen, is not straightforward. We have also shown that for $n = 3$ and $n = 4$, $\mu^*$ is always nonnegative. We will extend this theme in Chapter 12 to finite Markov random fields with Markov chains being a special case. For a Markov chain, the information diagram can always be displayed in two dimensions as in Figure 3.10, and $\mu^*$ is always nonnegative. These will be explained in Chapter 12.

[Fig. 3.10. The information diagram for the Markov chain $X_1 \to X_2 \to \cdots \to X_n$.]
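A similar numerical check is possible for (3.94)-(3.98). The sketch below constructs a hypothetical Markov chain $X_1 \to X_2 \to X_3 \to X_4$ from binary symmetric channels and evaluates the values of $\mu^*$ on the five atoms in (3.99); all five come out zero up to floating-point error.

```python
import itertools
import math

# A hypothetical Markov chain X1 -> X2 -> X3 -> X4 built from binary
# symmetric channels, used to verify (3.94)-(3.98) numerically.
def bsc(x, y, eps):
    return 1 - eps if x == y else eps

p = {}
for xs in itertools.product([0, 1], repeat=4):
    x1, x2, x3, x4 = xs
    p[xs] = 0.5 * bsc(x1, x2, 0.1) * bsc(x2, x3, 0.2) * bsc(x3, x4, 0.3)

def H(keep):
    marg = {}
    for xs, v in p.items():
        key = tuple(xs[i] for i in keep)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

def I(a, b, c):
    """I(Xa; Xb | X_c) by Lemma 3.8 (c is a list of conditioning indices)."""
    return H([a] + c) + H([b] + c) - H([a, b] + c) - H(c)

# The values of mu* on the five atoms in (3.99); all should print ~0.
print(I(0, 2, [1, 3]))                 # I(X1;X3|X2,X4)
print(I(0, 2, [1]) - I(0, 2, [1, 3]))  # I(X1;X3;X4|X2)
print(I(0, 3, [1, 2]))                 # I(X1;X4|X2,X3)
print(I(0, 1, [2]) - I(0, 1, [2, 3]))  # I(X1;X2;X4|X3)
print(I(1, 3, [0, 2]))                 # I(X2;X4|X1,X3)
```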
3.6 Examples of Applications

In this section, we give a few examples of applications of information diagrams. These examples show how information diagrams can help solve information theory problems.

The use of an information diagram is highly intuitive. To obtain an information identity from an information diagram is WYSIWYG (what you see is what you get). However, how to obtain an information inequality from an information diagram needs some explanation.

Very often, we use a Venn diagram to represent a measure $\mu$ which takes nonnegative values. If we see in the Venn diagram two sets $A$ and $B$ such that $A$ is a subset of $B$, then we can immediately conclude that $\mu(A) \le \mu(B)$ because

$$\mu(B) - \mu(A) = \mu(B - A) \ge 0. \quad (3.102)$$

However, an I-Measure $\mu^*$ can take negative values. Therefore, when we see in an information diagram that $A$ is a subset of $B$, we cannot conclude from this fact alone that $\mu^*(A) \le \mu^*(B)$ unless we know from the setup of the problem that $\mu^*$ is nonnegative. (For example, $\mu^*$ is nonnegative if the random variables involved form a Markov chain.) Instead, information inequalities can be obtained from an information diagram in conjunction with the basic inequalities. The following examples illustrate how it works.

Example 3.12 (Concavity of Entropy). Let $X_1 \sim p_1(x)$ and $X_2 \sim p_2(x)$. Let

$$X \sim p(x) = \lambda p_1(x) + \bar{\lambda} p_2(x), \quad (3.103)$$

where $0 \le \lambda \le 1$ and $\bar{\lambda} = 1 - \lambda$. We will show that

$$H(X) \ge \lambda H(X_1) + \bar{\lambda} H(X_2). \quad (3.104)$$

Consider the system in Figure 3.11 in which the position of the switch is determined by a random variable $Z$ with

$$\Pr\{Z = 1\} = \lambda \quad \text{and} \quad \Pr\{Z = 2\} = \bar{\lambda}, \quad (3.105)$$

where $Z$ is independent of $X_1$ and $X_2$. The switch takes position $i$ if $Z = i$, $i = 1, 2$. The random variable $Z$ is called a mixing random variable for the probability distributions $p_1(x)$ and $p_2(x)$.

[Fig. 3.11. The schematic diagram for Example 3.12.]

Figure 3.12 shows the information diagram for $X$ and $Z$. From the diagram, we see that $\tilde{X} - \tilde{Z}$ is a subset of $\tilde{X}$. Since $\mu^*$ is nonnegative for two random variables, we can conclude that

$$\mu^*(\tilde{X}) \ge \mu^*(\tilde{X} - \tilde{Z}), \quad (3.106)$$

which is equivalent to

$$H(X) \ge H(X|Z). \quad (3.107)$$

[Fig. 3.12. The information diagram for Example 3.12.]

Then

$$\begin{aligned} H(X) &\ge H(X|Z) \quad (3.108) \\ &= \Pr\{Z = 1\} H(X|Z = 1) + \Pr\{Z = 2\} H(X|Z = 2) \quad (3.109) \\ &= \lambda H(X_1) + \bar{\lambda} H(X_2), \quad (3.110) \end{aligned}$$

proving (3.104). This shows that $H(X)$ is a concave functional of $p(x)$.

Example 3.13 (Convexity of Mutual Information). Let

$$(X, Y) \sim p(x, y) = p(x)p(y|x). \quad (3.111)$$

We will show that for fixed $p(x)$, $I(X; Y)$ is a convex functional of $p(y|x)$. Let $p_1(y|x)$ and $p_2(y|x)$ be two transition matrices. Consider the system in Figure 3.13 in which the position of the switch is determined by a random variable $Z$ as in the last example, where $Z$ is independent of $X$, i.e.,

$$I(X; Z) = 0. \quad (3.112)$$

[Fig. 3.13. The schematic diagram for Example 3.13.]

In the information diagram for $X$, $Y$, and $Z$ in Figure 3.14, let

$$I(X; Z|Y) = a \ge 0. \quad (3.113)$$

Since $I(X; Z) = 0$, we see that

$$I(X; Y; Z) = -a, \quad (3.114)$$

because

$$I(X; Z) = I(X; Z|Y) + I(X; Y; Z). \quad (3.115)$$

[Fig. 3.14. The information diagram for Example 3.13.]

Then

$$\begin{aligned} I(X; Y) &= I(X; Y|Z) + I(X; Y; Z) \quad (3.116) \\ &= I(X; Y|Z) - a \quad (3.117) \\ &\le I(X; Y|Z) \quad (3.118) \\ &= \Pr\{Z = 1\} I(X; Y|Z = 1) + \Pr\{Z = 2\} I(X; Y|Z = 2) \quad (3.119) \\ &= \lambda I(p(x), p_1(y|x)) + \bar{\lambda} I(p(x), p_2(y|x)), \quad (3.120) \end{aligned}$$

where $I(p(x), p_i(y|x))$ denotes the mutual information between the input and output of a channel with input distribution $p(x)$ and transition matrix $p_i(y|x)$. This shows that for fixed $p(x)$, $I(X; Y)$ is a convex functional of $p(y|x)$.
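The inequality (3.104) is straightforward to test numerically. In this sketch, $p_1$ and $p_2$ are arbitrary hypothetical distributions on a ternary alphabet and $\lambda = 0.4$.

```python
import math

def H(p):
    """Entropy in bits of a probability vector."""
    return -sum(q * math.log2(q) for q in p if q > 0)

# Two arbitrary (hypothetical) distributions on a ternary alphabet.
p1 = [0.7, 0.2, 0.1]
p2 = [0.1, 0.3, 0.6]

lam = 0.4
mix = [lam * a + (1 - lam) * b for a, b in zip(p1, p2)]

# (3.104): H(X) >= lam*H(X1) + (1-lam)*H(X2); the gap is I(X;Z) >= 0.
print(H(mix), ">=", lam * H(p1) + (1 - lam) * H(p2))
```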
Example 3.14 (Concavity of Mutual Information). Let

$$(X, Y) \sim p(x, y) = p(x)p(y|x). \quad (3.121)$$

We will show that for fixed $p(y|x)$, $I(X; Y)$ is a concave functional of $p(x)$. Consider the system in Figure 3.15, where the position of the switch is determined by a random variable $Z$ as in the last example.

[Fig. 3.15. The schematic diagram for Example 3.14.]

In this system, when $X$ is given, $Y$ is independent of $Z$, i.e., $Z \to X \to Y$ forms a Markov chain. Then $\mu^*$ is nonnegative, and the information diagram for $X$, $Y$, and $Z$ is shown in Figure 3.16.

[Fig. 3.16. The information diagram for Example 3.14.]

From Figure 3.16, since $\tilde{X} \cap \tilde{Y} - \tilde{Z}$ is a subset of $\tilde{X} \cap \tilde{Y}$ and $\mu^*$ is nonnegative, we immediately see that

$$\begin{aligned} I(X; Y) &\ge I(X; Y|Z) \quad (3.122) \\ &= \Pr\{Z = 1\} I(X; Y|Z = 1) + \Pr\{Z = 2\} I(X; Y|Z = 2) \quad (3.123) \\ &= \lambda I(p_1(x), p(y|x)) + \bar{\lambda} I(p_2(x), p(y|x)). \quad (3.124) \end{aligned}$$

This shows that for fixed $p(y|x)$, $I(X; Y)$ is a concave functional of $p(x)$.

Example 3.15 (Imperfect Secrecy Theorem). Let $X$ be the plain text, $Y$ be the cipher text, and $Z$ be the key in a secret key cryptosystem. Since $X$ can be recovered from $Y$ and $Z$, we have

$$H(X|Y, Z) = 0. \quad (3.125)$$

We will show that this constraint implies

$$I(X; Y) \ge H(X) - H(Z). \quad (3.126)$$

The quantity $I(X; Y)$ is a measure of the security level of the cryptosystem. In general, we want to make $I(X; Y)$ small so that the eavesdropper cannot obtain too much information about the plain text $X$ by observing the cipher text $Y$. The inequality in (3.126) says that the system can attain a certain level of security only if $H(Z)$ (often called the key length) is sufficiently large. In particular, if perfect secrecy is required, i.e., $I(X; Y) = 0$, then $H(Z)$ must be at least equal to $H(X)$. This special case is known as Shannon's perfect secrecy theorem. (Shannon used a combinatorial argument to prove this theorem. An information-theoretic proof can be found in Massey.)

We now prove (3.126). Let

$$I(X; Y|Z) = a \ge 0 \quad (3.127)$$
$$I(Y; Z|X) = b \ge 0 \quad (3.128)$$
$$H(Z|X, Y) = c \ge 0, \quad (3.129)$$

and

$$I(X; Y; Z) = d. \quad (3.130)$$

(See Figure 3.17.)

[Fig. 3.17. The information diagram for Example 3.15.]

Since $I(Y; Z) \ge 0$,

$$b + d \ge 0. \quad (3.131)$$

In comparing $H(X)$ with $H(Z)$, we do not have to consider $I(X; Z|Y)$ and $I(X; Y; Z)$ since they belong to both $H(X)$ and $H(Z)$. Then we see from Figure 3.17 that

$$H(X) - H(Z) = a - b - c. \quad (3.132)$$

Therefore,

$$\begin{aligned} I(X; Y) &= a + d \quad (3.133) \\ &\ge a - b \quad (3.134) \\ &\ge a - b - c \quad (3.135) \\ &= H(X) - H(Z), \quad (3.136) \end{aligned}$$

where (3.134) and (3.135) follow from (3.131) and (3.129), respectively, proving (3.126). Note that in deriving our result, the assumptions that $H(Y|X, Z) = 0$, i.e., the cipher text is a function of the plain text and the key, and $I(X; Z) = 0$, i.e., the plain text and the key are independent, are not necessary.

Example 3.16. Figure 3.18 shows the information diagram for the Markov chain $X \to Y \to Z$.

[Fig. 3.18. The information diagram for the Markov chain $X \to Y \to Z$.]

From this diagram, we can identify the following two information identities:

$$I(X; Y) = I(X; Y, Z) \quad (3.137)$$
$$H(X|Y) = H(X|Y, Z). \quad (3.138)$$

Since $\mu^*$ is nonnegative and $\tilde{X} \cap \tilde{Z}$ is a subset of $\tilde{X} \cap \tilde{Y}$, we have

$$I(X; Z) \le I(X; Y), \quad (3.139)$$

which has already been obtained in Lemma 2.41. Similarly, we can also obtain

$$H(X|Y) \le H(X|Z). \quad (3.140)$$

Example 3.17 (Data Processing Theorem). Figure 3.19 shows the information diagram for the Markov chain $X \to Y \to Z \to T$.

[Fig. 3.19. The information diagram for the Markov chain $X \to Y \to Z \to T$.]

Since $\mu^*$ is nonnegative and $\tilde{X} \cap \tilde{T}$ is a subset of $\tilde{Y} \cap \tilde{Z}$, we have

$$I(X; T) \le I(Y; Z), \quad (3.141)$$

which is the data processing theorem (Theorem 2.42).
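As with the earlier examples, the data processing theorem can be checked on a concrete chain. The sketch below uses a hypothetical Markov chain $X \to Y \to Z \to T$ of binary symmetric channels; the crossover probabilities are arbitrary.

```python
import itertools
import math

# A hypothetical Markov chain X -> Y -> Z -> T of binary symmetric channels,
# used to check the data processing theorem I(X;T) <= I(Y;Z) numerically.
def bsc(a, b, eps):
    return 1 - eps if a == b else eps

p = {}
for x, y, z, t in itertools.product([0, 1], repeat=4):
    p[(x, y, z, t)] = 0.5 * bsc(x, y, 0.05) * bsc(y, z, 0.1) * bsc(z, t, 0.15)

def H(keep):
    marg = {}
    for xs, v in p.items():
        key = tuple(xs[i] for i in keep)
        marg[key] = marg.get(key, 0.0) + v
    return -sum(v * math.log2(v) for v in marg.values() if v > 0)

def I(a, b):
    return H([a]) + H([b]) - H([a, b])

print(I(0, 3), "<=", I(1, 2))  # I(X;T) <= I(Y;Z), per (3.141)
```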
We end this chapter by giving an application of the information diagram for a Markov chain with five random variables.

Example 3.18. In this example, we prove with the help of an information diagram that for five random variables $X, Y, Z, T$, and $U$ such that $X \to Y \to Z \to T \to U$ forms a Markov chain,

$$H(Y) + H(T) = I(Z; X, Y, T, U) + I(X, Y; T, U) + H(Y|Z) + H(T|Z). \quad (3.142)$$

In the information diagram for $X, Y, Z, T$, and $U$ in Figure 3.20, we first identify the atoms of $H(Y)$ and then the atoms of $H(T)$ by marking each of them by a dot. If an atom belongs to both $H(Y)$ and $H(T)$, it receives two dots. The resulting diagram represents

$$H(Y) + H(T). \quad (3.143)$$

[Fig. 3.20. The atoms of $H(Y) + H(T)$.]

By repeating the same procedure for

$$I(Z; X, Y, T, U) + I(X, Y; T, U) + H(Y|Z) + H(T|Z), \quad (3.144)$$

we obtain the information diagram in Figure 3.21.

[Fig. 3.21. The atoms of $I(Z; X, Y, T, U) + I(X, Y; T, U) + H(Y|Z) + H(T|Z)$.]

Comparing these two information diagrams, we find that they are identical. Hence, the information identity in (3.142) always holds conditioning on the Markov chain $X \to Y \to Z \to T \to U$. This identity is critical in proving an outer bound on the achievable coding rate region of the multiple descriptions problem in Fu et al. It is virtually impossible to discover this identity without the help of an information diagram!

Appendix 3.A: A Variation of the Inclusion-Exclusion Formula

In this appendix, we show that for each $A \in \mathcal{A}$, $\mu(A)$ can be expressed as a linear combination of $\mu(B), B \in \mathcal{B}$ via applications of (3.28) and (3.29). We first prove by using (3.28) the following variation of the inclusion-exclusion formula.

Theorem 3.19. For a set-additive function $\mu$,

$$\mu\left(\bigcap_{k=1}^{n} A_k - B\right) = \sum_{1 \le i \le n} \mu(A_i - B) - \sum_{1 \le i < j \le n} \mu(A_i \cup A_j - B) + \cdots + (-1)^{n+1} \mu(A_1 \cup A_2 \cup \cdots \cup A_n - B). \quad (3.145)$$

Proof. The theorem will be proved by induction on $n$. First, (3.145) is obviously true for $n = 1$. Assume (3.145) is true for some $n \ge 1$. Now consider

$$\begin{aligned} \mu\left(\bigcap_{k=1}^{n+1} A_k - B\right) &= \mu\left(\left(\bigcap_{k=1}^{n} A_k\right) \cap A_{n+1} - B\right) \quad (3.146) \\ &= \mu\left(\bigcap_{k=1}^{n} A_k - B\right) + \mu(A_{n+1} - B) - \mu\left(\left(\bigcap_{k=1}^{n} A_k\right) \cup A_{n+1} - B\right) \quad (3.147) \\ &= \left\{\sum_{1 \le i \le n} \mu(A_i - B) - \sum_{1 \le i < j \le n} \mu(A_i \cup A_j - B) + \cdots + (-1)^{n+1} \mu(A_1 \cup A_2 \cup \cdots \cup A_n - B)\right\} \\ &\qquad + \mu(A_{n+1} - B) - \mu\left(\bigcap_{k=1}^{n} (A_k \cup A_{n+1}) - B\right) \quad (3.148) \\ &= \left\{\sum_{1 \le i \le n} \mu(A_i - B) - \sum_{1 \le i < j \le n} \mu(A_i \cup A_j - B) + \cdots + (-1)^{n+1} \mu(A_1 \cup A_2 \cup \cdots \cup A_n - B)\right\} + \mu(A_{n+1} - B) \\ &\qquad - \left\{\sum_{1 \le i \le n} \mu(A_i \cup A_{n+1} - B) - \sum_{1 \le i < j \le n} \mu(A_i \cup A_j \cup A_{n+1} - B) + \cdots + (-1)^{n+1} \mu(A_1 \cup A_2 \cup \cdots \cup A_n \cup A_{n+1} - B)\right\} \quad (3.149) \\ &= \sum_{1 \le i \le n+1} \mu(A_i - B) - \sum_{1 \le i < j \le n+1} \mu(A_i \cup A_j - B) + \cdots + (-1)^{n+2} \mu(A_1 \cup A_2 \cup \cdots \cup A_{n+1} - B), \end{aligned}$$

which is (3.145) with $n$ replaced by $n + 1$. The theorem is proved. ⊓⊔

There are $D^{l_1} > 1$ (since $l_1 \ge 1$) nodes of order $l_1$ which can be chosen as the first codeword. Thus choosing the first codeword is always possible. Assume that the first $i$ codewords have been chosen successfully, where $1 \le i \le m - 1$, and we want to choose a node of order $l_{i+1}$ as the $(i+1)$st codeword such that it is not prefixed by any of the previously chosen codewords. In other words, the $(i+1)$st node to be chosen cannot be a descendant of any of the previously chosen codewords. Observe that for $1 \le j \le i$, the codeword with length $l_j$ has $D^{l_{i+1} - l_j}$ descendants of order $l_{i+1}$. Since all the previously chosen codewords are not prefixes of each other, their descendants of order $l_{i+1}$ do not overlap. Therefore, upon noting that the total number of nodes of order $l_{i+1}$ is $D^{l_{i+1}}$, the number of nodes which can be chosen as the $(i+1)$st codeword is

$$D^{l_{i+1}} - D^{l_{i+1} - l_1} - \cdots - D^{l_{i+1} - l_i}. \quad (4.27)$$

If $l_1, l_2, \cdots, l_m$ satisfy the Kraft inequality, we have

$$D^{-l_1} + \cdots + D^{-l_i} + D^{-l_{i+1}} \le 1. \quad (4.28)$$
Multiplying by $D^{l_{i+1}}$ and rearranging the terms, we have

$$D^{l_{i+1}} - D^{l_{i+1} - l_1} - \cdots - D^{l_{i+1} - l_i} \ge 1. \quad (4.29)$$

The left hand side is the number of nodes which can be chosen as the $(i+1)$st codeword as given in (4.27). Therefore, it is possible to choose the $(i+1)$st codeword. Thus we have shown the existence of a prefix code with codeword lengths $l_1, l_2, \cdots, l_m$, completing the proof. ⊓⊔

A probability distribution $\{p_i\}$ such that for all $i$, $p_i = D^{-t_i}$, where $t_i$ is a positive integer, is called a $D$-adic distribution. When $D = 2$, $\{p_i\}$ is called a dyadic distribution. From Theorem 4.6 and the above theorem, we can obtain the following result as a corollary.

Corollary 4.12. There exists a $D$-ary prefix code which achieves the entropy bound for a distribution $\{p_i\}$ if and only if $\{p_i\}$ is $D$-adic.

Proof. Consider a $D$-ary prefix code which achieves the entropy bound for a distribution $\{p_i\}$. Let $l_i$ be the length of the codeword assigned to the probability $p_i$. By Theorem 4.6, for all $i$, $l_i = -\log_D p_i$, or $p_i = D^{-l_i}$. Thus $\{p_i\}$ is $D$-adic.

Conversely, suppose $\{p_i\}$ is $D$-adic, and let $p_i = D^{-t_i}$ for all $i$. Let $l_i = t_i$ for all $i$. Then by the Kraft inequality, there exists a prefix code with codeword lengths $\{l_i\}$, because

$$\sum_i D^{-l_i} = \sum_i D^{-t_i} = \sum_i p_i = 1. \quad (4.30)$$

Assigning the codeword with length $l_i$ to the probability $p_i$ for all $i$, we see from Theorem 4.6 that this code achieves the entropy bound. ⊓⊔

4.2.2 Huffman Codes

As we have mentioned, the efficiency of a uniquely decodable code is measured by its expected length. Thus for a given source $X$, we are naturally interested in prefix codes which have the minimum expected length. Such codes, called optimal codes, can be constructed by the Huffman procedure, and these codes are referred to as Huffman codes. In general, there exists more than one optimal code for a source, and some optimal codes cannot be constructed by the Huffman procedure.

For simplicity, we first discuss binary Huffman codes. A binary prefix code for a source $X$ with distribution $\{p_i\}$ is represented by a binary code tree, with each leaf in the code tree corresponding to a codeword. The Huffman procedure is to form a code tree such that the expected length is minimum. The procedure is described by a very simple rule:

Keep merging the two smallest probability masses until one probability mass (i.e., 1) is left.

The merging of two probability masses corresponds to the formation of an internal node of the code tree. We now illustrate the Huffman procedure by the following example.

Example 4.13. Let $X$ be the source with $\mathcal{X} = \{A, B, C, D, E\}$, and the probabilities are 0.35, 0.1, 0.15, 0.2, 0.2, respectively. The Huffman procedure is shown in Figure 4.2.

[Fig. 4.2. The Huffman procedure: the probabilities 0.35, 0.1, 0.15, 0.2, 0.2 are assigned the codewords 00, 010, 011, 10, 11.]

In the first step, we merge probability masses 0.1 and 0.15 into a probability mass 0.25. In the second step, we merge probability masses 0.2 and 0.2 into a probability mass 0.4. In the third step, we merge probability masses 0.35 and 0.25 into a probability mass 0.6. Finally, we merge probability masses 0.6 and 0.4 into a probability mass 1. A code tree is then formed. Upon assigning 0 and 1 (in any convenient way) to each pair of branches at an internal node, we obtain the codeword assigned to each source symbol.

In the Huffman procedure, sometimes there is more than one choice of merging the two smallest probability masses. We can take any one of these choices without affecting the optimality of the code eventually obtained.
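The merging rule of the Huffman procedure translates directly into a priority-queue loop. The following Python sketch computes the codeword lengths (rather than the codewords themselves) produced by the binary Huffman procedure; applied to the source of Example 4.13, it recovers the lengths 2, 3, 3, 2, 2 and the expected length 2.25 in (4.31).

```python
import heapq

def huffman_lengths(probs):
    """Binary Huffman procedure: keep merging the two smallest masses.
    Returns the codeword length assigned to each probability."""
    # Each heap entry: (mass, tie-breaking counter, source-symbol indices).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    count = len(heap)
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol under the merged node
            lengths[i] += 1        # moves one level deeper in the tree
        heapq.heappush(heap, (p1 + p2, count, s1 + s2))
        count += 1
    return lengths

# The source of Example 4.13.
probs = [0.35, 0.1, 0.15, 0.2, 0.2]
lengths = huffman_lengths(probs)
print(lengths)                                     # [2, 3, 3, 2, 2]
print(sum(p * l for p, l in zip(probs, lengths)))  # expected length 2.25
```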
For an alphabet of size $m$, it takes $m - 1$ steps to complete the Huffman procedure for constructing a binary code, because we merge two probability masses in each step. In the resulting code tree, there are $m$ leaves and $m - 1$ internal nodes.

In the Huffman procedure for constructing a $D$-ary code, the smallest $D$ probability masses are merged in each step. If the resulting code tree is formed in $k + 1$ steps, where $k \ge 0$, then there will be $k + 1$ internal nodes and $D + k(D - 1)$ leaves, where each leaf corresponds to a source symbol in the alphabet. If the alphabet size $m$ has the form $D + k(D - 1)$, then we can apply the Huffman procedure directly. Otherwise, we need to add a few dummy symbols with probability 0 to the alphabet in order to make the total number of symbols have the form $D + k(D - 1)$.

Example 4.14. If we want to construct a quaternary Huffman code ($D = 4$) for the source in the last example, we need to add 2 dummy symbols so that the total number of symbols becomes $7 = 4 + (1)(3)$, where $k = 1$. In general, we need to add at most $D - 2$ dummy symbols.

In Section 4.1, we have proved the entropy bound for a uniquely decodable code. This bound also applies to a prefix code since a prefix code is uniquely decodable. In particular, it applies to a Huffman code, which is a prefix code by construction. Thus the expected length of a Huffman code is at least the entropy of the source. In Example 4.13, the entropy $H(X)$ is 2.202 bits, while the expected length of the Huffman code is

$$0.35(2) + 0.1(3) + 0.15(3) + 0.2(2) + 0.2(2) = 2.25. \quad (4.31)$$

We now turn to proving the optimality of a Huffman code. For simplicity, we will only prove the optimality of a binary Huffman code. Extension of the proof to the general case is straightforward. Without loss of generality, assume that

$$p_1 \ge p_2 \ge \cdots \ge p_m. \quad (4.32)$$

Denote the codeword assigned to $p_i$ by $c_i$, and denote its length by $l_i$. To prove that a Huffman code is actually optimal, we make the following observations.

Lemma 4.15. In an optimal code, shorter codewords are assigned to larger probabilities.

Proof. Consider $1 \le i < j \le m$ such that $p_i > p_j$. Assume that in a code, the codewords $c_i$ and $c_j$ are such that $l_i > l_j$, i.e., a shorter codeword is assigned to a smaller probability. Then by exchanging $c_i$ and $c_j$, the expected length of the code is changed by

$$(p_i l_j + p_j l_i) - (p_i l_i + p_j l_j) = (p_i - p_j)(l_j - l_i) < 0 \quad (4.33)$$

since $p_i > p_j$ and $l_i > l_j$. In other words, the code can be improved and therefore is not optimal. The lemma is proved. ⊓⊔

Lemma 4.16. There exists an optimal code in which the codewords assigned to the two smallest probabilities are siblings, i.e., the two codewords have the same length and they differ only in the last symbol.

Proof. The reader is encouraged to trace the steps in this proof by drawing a code tree. Consider any optimal code. From the last lemma, the codeword $c_m$ assigned to $p_m$ has the longest length. Then the sibling of $c_m$ cannot be the prefix of another codeword.

We claim that the sibling of $c_m$ must be a codeword. To see this, assume that it is not a codeword (and it is not the prefix of another codeword). Then we can replace $c_m$ by its parent to improve the code because the length of the codeword assigned to $p_m$ is reduced by 1, while all the other codewords remain unchanged. This is a contradiction to the assumption that the code is optimal. Therefore, the sibling of $c_m$ must be a codeword.
If the sibling of $c_m$ is assigned to $p_{m-1}$, then the code already has the desired property, i.e., the codewords assigned to the two smallest probabilities are siblings. If not, assume that the sibling of $c_m$ is assigned to $p_i$, where $i < m - 1$. Since $p_i \ge p_{m-1}$, $l_{m-1} \ge l_i = l_m$. On the other hand, by Lemma 4.15, $l_{m-1}$ is always less than or equal to $l_m$, which implies that $l_{m-1} = l_m = l_i$. Then we can exchange the codewords for $p_i$ and $p_{m-1}$ without changing the expected length of the code (i.e., the code remains optimal) to obtain the desired code. The lemma is proved. ⊓⊔

Suppose $c_i$ and $c_j$ are siblings in a code tree. Then $l_i = l_j$. If we replace $c_i$ and $c_j$ by a common codeword at their parent, call it $c_{ij}$, then we obtain a reduced code tree, and the probability of $c_{ij}$ is $p_i + p_j$. Accordingly, the probability set becomes a reduced probability set with $p_i$ and $p_j$ replaced by a probability $p_i + p_j$. Let $L$ and $L'$ be the expected lengths of the original code and the reduced code, respectively. Then

$$\begin{aligned} L - L' &= (p_i l_i + p_j l_j) - (p_i + p_j)(l_i - 1) \quad (4.34) \\ &= (p_i l_i + p_j l_i) - (p_i + p_j)(l_i - 1) \quad (4.35) \\ &= p_i + p_j, \quad (4.36) \end{aligned}$$

which implies

$$L = L' + (p_i + p_j). \quad (4.37)$$

This relation says that the difference between the expected length of the original code and the expected length of the reduced code depends only on the values of the two probabilities merged but not on the structure of the reduced code tree.

Theorem 4.17. The Huffman procedure produces an optimal prefix code.

Proof. Consider an optimal code in which $c_m$ and $c_{m-1}$ are siblings. Such an optimal code exists by Lemma 4.16. Let $\{p'_i\}$ be the reduced probability set obtained from $\{p_i\}$ by merging $p_m$ and $p_{m-1}$. From (4.37), we see that $L'$ is the expected length of an optimal code for $\{p'_i\}$ if and only if $L$ is the expected length of an optimal code for $\{p_i\}$. Therefore, if we can find an optimal code for $\{p'_i\}$, we can use it to construct an optimal code for $\{p_i\}$. Note that by merging $p_m$ and $p_{m-1}$, the size of the problem, namely the total number of probability masses, is reduced by one. To find an optimal code for $\{p'_i\}$, we again merge the two smallest probabilities in $\{p'_i\}$. This is repeated until the size of the problem is eventually reduced to 2, for which we know that an optimal code has two codewords of length 1. In the last step of the Huffman procedure, two probability masses are merged, which corresponds to the formation of a code with two codewords of length 1. Thus the Huffman procedure indeed produces an optimal code. ⊓⊔

We have seen that the expected length of a Huffman code is lower bounded by the entropy of the source. On the other hand, it would be desirable to obtain an upper bound in terms of the entropy of the source. This is given in the next theorem.

Theorem 4.18. The expected length of a Huffman code, denoted by $L_{\text{Huff}}$, satisfies

$$L_{\text{Huff}} < H_D(X) + 1. \quad (4.38)$$

This bound is the tightest among all the upper bounds on $L_{\text{Huff}}$ which depend only on the source entropy.

Proof. We will construct a prefix code with expected length less than $H(X) + 1$. Then, because a Huffman code is an optimal prefix code, its expected length $L_{\text{Huff}}$ is upper bounded by $H(X) + 1$. Consider constructing a prefix code with codeword lengths $\{l_i\}$, where

$$l_i = \lceil -\log_D p_i \rceil. \quad (4.39)$$

Then

$$-\log_D p_i \le l_i < -\log_D p_i + 1, \quad (4.40)$$

or

$$p_i \ge D^{-l_i} > D^{-1} p_i. \quad (4.41)$$

Thus

$$\sum_i D^{-l_i} \le \sum_i p_i = 1, \quad (4.42)$$

i.e., $\{l_i\}$ satisfies the Kraft inequality, which implies that it is possible to construct a prefix code with codeword lengths $\{l_i\}$.
It remains to show that $L$, the expected length of this code, is less than $H(X) + 1$. Toward this end, consider

$$\begin{aligned} L &= \sum_i p_i l_i \quad (4.43) \\ &< \sum_i p_i(-\log_D p_i + 1) \quad (4.44) \\ &= -\sum_i p_i \log_D p_i + \sum_i p_i \quad (4.45) \\ &= H_D(X) + 1, \quad (4.46) \end{aligned}$$

where (4.44) follows from the upper bound in (4.40). Thus we conclude that

$$L_{\text{Huff}} \le L < H_D(X) + 1. \quad (4.47)$$

To see that this upper bound is the tightest possible, we have to show that there exists a sequence of distributions $P_k$ such that $L_{\text{Huff}}$ approaches $H(X) + 1$ as $k \to \infty$. This can be done by considering the sequence of $D$-ary distributions

$$P_k = \left\{ 1 - \frac{D-1}{k}, \frac{1}{k}, \cdots, \frac{1}{k} \right\}, \quad (4.48)$$

where $k \ge D$. The Huffman code for each $P_k$ consists of $D$ codewords of length 1. Thus $L_{\text{Huff}}$ is equal to 1 for all $k$. As $k \to \infty$, $H(X) \to 0$, and hence $L_{\text{Huff}}$ approaches $H(X) + 1$. The theorem is proved. ⊓⊔

The code constructed in the above proof is known as the Shannon code. The idea is that in order for the code to be near-optimal, we should choose $l_i$ close to $-\log_D p_i$ for all $i$. When $\{p_i\}$ is $D$-adic, $l_i$ can be chosen to be exactly $-\log_D p_i$ because the latter are integers. In this case, the entropy bound is tight.

From the entropy bound and the above theorem, we have

$$H_D(X) \le L_{\text{Huff}} < H_D(X) + 1. \quad (4.49)$$

Now suppose we use a Huffman code to encode $X_1, X_2, \cdots, X_n$, which are $n$ i.i.d. copies of $X$. Let us denote the length of this Huffman code by $L^n_{\text{Huff}}$. Then (4.49) becomes

$$nH(X) \le L^n_{\text{Huff}} < nH(X) + 1. \quad (4.50)$$

Dividing by $n$, we obtain

$$H(X) \le \frac{1}{n} L^n_{\text{Huff}} < H(X) + \frac{1}{n}. \quad (4.51)$$

As $n \to \infty$, the upper bound approaches the lower bound. Therefore, $n^{-1} L^n_{\text{Huff}}$, the coding rate of the code, namely the average number of code symbols needed to encode a source symbol, approaches $H(X)$ as $n \to \infty$. But of course, as $n$ becomes large, constructing a Huffman code becomes very complicated. Nevertheless, this result indicates that entropy is a fundamental measure of information.
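The Shannon code of the preceding proof is one line of arithmetic. The sketch below computes the lengths (4.39) for the source of Example 4.13 and confirms both the Kraft inequality (4.42) and the bound $H(X) \le L < H(X) + 1$.

```python
import math

def shannon_lengths(probs, D=2):
    """Codeword lengths l_i = ceil(-log_D p_i) of (4.39)."""
    return [math.ceil(-math.log(p, D)) for p in probs]

probs = [0.35, 0.1, 0.15, 0.2, 0.2]   # the source of Example 4.13
ls = shannon_lengths(probs)
H = -sum(p * math.log2(p) for p in probs)
L = sum(p * l for p, l in zip(probs, ls))
print(sum(2 ** -l for l in ls) <= 1)  # Kraft inequality holds
print(H, "<=", L, "<", H + 1)         # entropy bound and (4.38)
```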
4.3 Redundancy of Prefix Codes

The entropy bound for a uniquely decodable code has been proved in Section 4.1. In this section, we present an alternative proof specifically for prefix codes which offers much insight into the redundancy of such codes.

Let $X$ be a source random variable with probability distribution

$$\{p_1, p_2, \cdots, p_m\}, \quad (4.52)$$

where $m \ge 2$. A $D$-ary prefix code for $X$ can be represented by a $D$-ary code tree with $m$ leaves, where each leaf corresponds to a codeword. We denote the leaf corresponding to $p_i$ by $c_i$ and the order of $c_i$ by $l_i$, and assume that the alphabet is

$$\{0, 1, \cdots, D-1\}. \quad (4.53)$$

Let $\mathcal{I}$ be the index set of all the internal nodes (including the root) in the code tree.

Instead of matching codewords by brute force, we can use the code tree of a prefix code for more efficient decoding. To decode a codeword, we trace the path specified by the codeword from the root of the code tree until it terminates at the leaf corresponding to that codeword. Let $q_k$ be the probability of reaching an internal node $k \in \mathcal{I}$ during the decoding process. The probability $q_k$ is called the reaching probability of internal node $k$. Evidently, $q_k$ is equal to the sum of the probabilities of all the leaves descending from node $k$.

Let $\tilde{p}_{k,j}$ be the probability that the $j$th branch of node $k$ is taken during the decoding process. The probabilities $\tilde{p}_{k,j}$, $0 \le j \le D-1$, are called the branching probabilities of node $k$, and

$$q_k = \sum_j \tilde{p}_{k,j}. \quad (4.54)$$

Once node $k$ is reached, the conditional branching distribution is

$$\left\{ \frac{\tilde{p}_{k,0}}{q_k}, \frac{\tilde{p}_{k,1}}{q_k}, \cdots, \frac{\tilde{p}_{k,D-1}}{q_k} \right\}. \quad (4.55)$$

Then define the conditional entropy of node $k$ by

$$h_k = H_D\left( \frac{\tilde{p}_{k,0}}{q_k}, \frac{\tilde{p}_{k,1}}{q_k}, \cdots, \frac{\tilde{p}_{k,D-1}}{q_k} \right), \quad (4.56)$$

where with a slight abuse of notation, we have used $H_D(\cdot)$ to denote the entropy in the base $D$ of the conditional branching distribution in the parenthesis. By Theorem 2.43, $h_k \le 1$. The following lemma relates the entropy of $X$ with the structure of the code tree.

Lemma 4.19. $H_D(X) = \sum_{k \in \mathcal{I}} q_k h_k$.

Proof. We prove the lemma by induction on the number of internal nodes of the code tree. If there is only one internal node, it must be the root of the tree. Then the lemma is trivially true upon observing that the reaching probability of the root is equal to 1.

Assume the lemma is true for all code trees with $n$ internal nodes. Now consider a code tree with $n + 1$ internal nodes. Let $k$ be an internal node such that $k$ is the parent of a leaf $c$ with maximum order. Each sibling of $c$ may or may not be a leaf. If it is not a leaf, then it cannot be the ascendant of another leaf because we assume that $c$ is a leaf with maximum order. Now consider revealing the outcome of $X$ in two steps. In the first step, if the outcome of $X$ is not a leaf descending from node $k$, we identify the outcome exactly, otherwise we identify the outcome to be a child of node $k$. We call this random variable $V$. If we do not identify the outcome exactly in the first step, which happens with probability $q_k$, we further identify in the second step which of the children (child) of node $k$ the outcome is (there is only one child of node $k$ which can be the outcome if all the siblings of $c$ are not leaves). We call this random variable $W$. If the second step is not necessary, we assume that $W$ takes a constant value with probability 1. Then $X = (V, W)$.

The outcome of $V$ can be represented by a code tree with $n$ internal nodes which is obtained by pruning the original code tree at node $k$. Then by the induction hypothesis,

$$H(V) = \sum_{k' \in \mathcal{I} \setminus \{k\}} q_{k'} h_{k'}. \quad (4.57)$$

By the chain rule for entropy, we have

$$\begin{aligned} H(X) &= H(V) + H(W|V) \quad (4.58) \\ &= \sum_{k' \in \mathcal{I} \setminus \{k\}} q_{k'} h_{k'} + (1 - q_k) \cdot 0 + q_k h_k \quad (4.59) \\ &= \sum_{k' \in \mathcal{I}} q_{k'} h_{k'}. \quad (4.60) \end{aligned}$$

The lemma is proved. ⊓⊔

The next lemma expresses the expected length $L$ of a prefix code in terms of the reaching probabilities of the internal nodes of the code tree.

Lemma 4.20. $L = \sum_{k \in \mathcal{I}} q_k$.

Proof. Define

$$a_{ki} = \begin{cases} 1 & \text{if leaf } c_i \text{ is a descendant of internal node } k \\ 0 & \text{otherwise.} \end{cases} \quad (4.61)$$

Then

$$l_i = \sum_{k \in \mathcal{I}} a_{ki}, \quad (4.62)$$

because there are exactly $l_i$ internal nodes of which $c_i$ is a descendant if the order of $c_i$ is $l_i$. On the other hand,

$$q_k = \sum_i a_{ki} p_i. \quad (4.63)$$

Then

$$\begin{aligned} L &= \sum_i p_i l_i \quad (4.64) \\ &= \sum_i p_i \sum_{k \in \mathcal{I}} a_{ki} \quad (4.65) \\ &= \sum_{k \in \mathcal{I}} \sum_i p_i a_{ki} \quad (4.66) \\ &= \sum_{k \in \mathcal{I}} q_k, \quad (4.67) \end{aligned}$$

proving the lemma. ⊓⊔

Define the local redundancy of an internal node $k$ by

$$r_k = q_k(1 - h_k). \quad (4.68)$$

This quantity is local to node $k$ in the sense that it depends only on the branching probabilities of node $k$, and it vanishes if and only if $\tilde{p}_{k,j} = q_k/D$ for all $j$, i.e., if and only if the node is balanced. Note that $r_k \ge 0$ because $h_k \le 1$.

The next theorem says that the redundancy $R$ of a prefix code is equal to the sum of the local redundancies of all the internal nodes of the code tree.

Theorem 4.21 (Local Redundancy Theorem). Let $L$ be the expected length of a $D$-ary prefix code for a source random variable $X$, and $R$ be the redundancy of the code. Then

$$R = \sum_{k \in \mathcal{I}} r_k. \quad (4.69)$$

Proof. By Lemmas 4.19 and 4.20, we have

$$\begin{aligned} R &= L - H_D(X) \quad (4.70) \\ &= \sum_{k \in \mathcal{I}} q_k - \sum_{k \in \mathcal{I}} q_k h_k \quad (4.71) \\ &= \sum_{k \in \mathcal{I}} q_k(1 - h_k) \quad (4.72) \\ &= \sum_{k \in \mathcal{I}} r_k. \quad (4.73) \end{aligned}$$

The theorem is proved. ⊓⊔
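The local redundancy theorem can be verified directly on the code tree of Example 4.13. In the sketch below, the internal nodes are the proper prefixes of the codewords, and the sum of their local redundancies is compared against $R = L - H_2(X)$.

```python
import math

# Codewords of the Huffman code in Example 4.13 and their probabilities.
code = {"00": 0.35, "010": 0.1, "011": 0.15, "10": 0.2, "11": 0.2}

# Reaching probability q_k of every internal node k (prefixes of codewords).
q = {}
for cw, p in code.items():
    for i in range(len(cw)):          # each proper prefix, incl. the root ""
        q[cw[:i]] = q.get(cw[:i], 0.0) + p

def h(node):
    """Conditional entropy h_k of the branching distribution at node k."""
    branches = [q.get(node + c, code.get(node + c, 0.0)) for c in "01"]
    return -sum(b / q[node] * math.log2(b / q[node]) for b in branches if b > 0)

R = sum(qk * (1 - h(k)) for k, qk in q.items())    # sum of local redundancies
L = sum(p * len(cw) for cw, p in code.items())
H = -sum(p * math.log2(p) for p in code.values())
print(R, L - H)   # the two agree, per the local redundancy theorem
```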
We now present a slightly different version of the entropy bound.

Corollary 4.22 (Entropy Bound). Let $R$ be the redundancy of a prefix code. Then $R \ge 0$ with equality if and only if all the internal nodes in the code tree are balanced.

Proof. Since $r_k \ge 0$ for all $k$, it is evident from the local redundancy theorem that $R \ge 0$. Moreover, $R = 0$ if and only if $r_k = 0$ for all $k$, which means that all the internal nodes in the code tree are balanced. ⊓⊔

Remark. Before the entropy bound was stated in Theorem 4.6, we gave the intuitive explanation that the entropy bound results from the fact that a $D$-ary symbol can carry at most one D-it of information. Therefore, when the entropy bound is tight, each code symbol has to carry exactly one D-it of information. Now consider revealing a random codeword one symbol after another. The above corollary states that in order for the entropy bound to be tight, all the internal nodes in the code tree must be balanced. That is, as long as the codeword is not completed, the next code symbol to be revealed always carries one D-it of information because it is distributed uniformly on the alphabet. This is consistent with the intuitive explanation we gave for the entropy bound.

Example 4.23. The local redundancy theorem allows us to lower bound the redundancy of a prefix code based on partial knowledge of the structure of the code tree. More specifically,

$$R \ge \sum_{k \in \mathcal{I}'} r_k \quad (4.74)$$

for any subset $\mathcal{I}'$ of $\mathcal{I}$. Let $p_{m-1}$, $p_m$ be the two smallest probabilities in the source distribution. In constructing a binary Huffman code, $p_{m-1}$ and $p_m$ are merged. Then the redundancy of a Huffman code is lower bounded by

$$(p_{m-1} + p_m)\left[ 1 - H_2\left( \frac{p_{m-1}}{p_{m-1} + p_m}, \frac{p_m}{p_{m-1} + p_m} \right) \right], \quad (4.75)$$

the local redundancy of the parent of the two leaves corresponding to $p_{m-1}$ and $p_m$. See Yeung for progressive lower and upper bounds on the redundancy of a Huffman code.

Chapter Summary

Kraft Inequality: For a $D$-ary uniquely decodable source code,

$$\sum_{k=1}^{m} D^{-l_k} \le 1.$$

Entropy Bound:

$$L = \sum_k p_k l_k \ge H_D(X),$$

with equality if and only if the distribution of $X$ is $D$-adic.

Definition: A code is called a prefix code if no codeword is a prefix of any other codeword.

Existence of Prefix Code: A $D$-ary prefix code with codeword lengths $l_1, l_2, \cdots, l_m$ exists if and only if the Kraft inequality is satisfied.

Huffman Code:
1. A Huffman code is a prefix code with the shortest expected length for a given source.
2. $H_D(X) \le L_{\text{Huff}} < H_D(X) + 1$.

Huffman Procedure: Keep merging the $D$ smallest probability masses.

Redundancy of Prefix Code: $L - H_D(X) = R = \sum_{k \in \mathcal{I}} r_k$, where $r_k = q_k(1 - h_k)$ is the local redundancy of an internal node $k$.

Problems

1. Construct a binary Huffman code for the distribution {0.25, 0.05, 0.1, 0.13, 0.2, 0.12, 0.08, 0.07}.
2. Construct a ternary Huffman code for the source distribution in Problem 1.
3. Show that a Huffman code is an optimal uniquely decodable code for a given source distribution.
4. Construct an optimal binary prefix code for the source distribution in Problem 1 such that all the codewords have even lengths.
5. Prove directly that the codeword lengths of a prefix code satisfy the Kraft inequality without using Theorem 4.4.
6. Prove that if $p_1 > 0.4$, then the shortest codeword of a binary Huffman code has length equal to 1. Then prove that the redundancy of such a Huffman code is lower bounded by $1 - h_b(p_1)$. (Johnsen.)
7. Suffix codes. A code is a suffix code if no codeword is a suffix of any other codeword. Show that a suffix code is uniquely decodable.
8. Fix-free codes. A code is a fix-free code if it is both a prefix code and a suffix code. Let $l_1, l_2, \cdots, l_m$ be $m$ positive integers. Prove that if

$$\sum_{k=1}^{m} 2^{-l_k} \le \frac{1}{2},$$

then there exists a binary fix-free code with codeword lengths $l_1, l_2, \cdots, l_m$. (Ahlswede et al.)
9. Random coding for prefix codes. Construct a binary prefix code with codeword lengths $l_1 \le l_2 \le \cdots \le l_m$ as follows. For each $1 \le k \le m$, the codeword with length $l_k$ is chosen independently from the set of all $2^{l_k}$ possible binary strings with length $l_k$ according to the uniform distribution. Let $P_m(\text{good})$ be the probability that the code so constructed is a prefix code.
a) Prove that $P_2(\text{good}) = (1 - 2^{-l_1})^+$, where

$$(x)^+ = \begin{cases} x & \text{if } x \ge 0 \\ 0 & \text{if } x < 0. \end{cases}$$

b) Prove by induction on $m$ that

$$P_m(\text{good}) = \prod_{k=1}^{m} \left( 1 - \sum_{j=1}^{k-1} 2^{-l_j} \right)^+.$$

c) Observe that there exists a prefix code with codeword lengths $l_1, l_2, \cdots, l_m$ if and only if $P_m(\text{good}) > 0$. Show that $P_m(\text{good}) > 0$ is equivalent to the Kraft inequality. By using this random coding method, one can derive the Kraft inequality without knowing the inequality ahead of time. (Ye and Yeung.)
10. Let $X$ be a source random variable. Suppose a certain probability mass $p_k$ in the distribution of $X$ is given. Let

$$l_j = \begin{cases} \lceil -\log p_j \rceil & \text{if } j = k \\ \lceil -\log(p_j + x_j) \rceil & \text{if } j \ne k, \end{cases}$$

where

$$x_j = p_j \left( \frac{p_k - 2^{-\lceil -\log p_k \rceil}}{1 - p_k} \right)$$

for all $j \ne k$.
a) Show that $1 \le l_j \le \lceil -\log p_j \rceil$ for all $j$.
b) Show that $\{l_j\}$ satisfies the Kraft inequality.
c) Obtain an upper bound on $L_{\text{Huff}}$ in terms of $H(X)$ and $p_k$ which is tighter than $H(X) + 1$. This shows that when partial knowledge about the source distribution in addition to the source entropy is available, tighter upper bounds on $L_{\text{Huff}}$ can be obtained. (Ye and Yeung.)

Historical Notes

The foundation for the material in this chapter can be found in Shannon's original paper. The Kraft inequality for uniquely decodable codes was first proved by McMillan. The proof given here is due to Karush. The Huffman coding procedure was devised and proved to be optimal by Huffman. The same procedure was devised independently by Zimmerman. Linder et al. have proved the existence of an optimal prefix code for an infinite source alphabet which can be constructed from Huffman codes for truncations of the source distribution. The local redundancy theorem is due to Yeung. A comprehensive survey of code trees for lossless data compression can be found in Abrahams.

5 Weak Typicality

In the last chapter, we have discussed the significance of entropy in the context of zero-error data compression. In this chapter and the next, we explore entropy in terms of the asymptotic behavior of i.i.d. sequences. Specifically, two versions of the asymptotic equipartition property (AEP), namely the weak AEP and the strong AEP, are discussed. The role of these AEPs in information theory is analogous to the role of the weak law of large numbers in probability theory. In this chapter, the weak AEP and its relation with the source coding theorem are discussed. All the logarithms are in the base 2 unless otherwise specified.

5.1 The Weak AEP

We consider an information source $\{X_k, k \ge 1\}$ where $X_k$ are i.i.d. with distribution $p(x)$. We use $X$ to denote the generic random variable and $H(X)$ to denote the common entropy for all $X_k$, where $H(X) < \infty$. Let $\mathbf{X} = (X_1, X_2, \cdots, X_n)$. Since $X_k$ are i.i.d.,

$$p(\mathbf{X}) = p(X_1)p(X_2) \cdots p(X_n). \quad (5.1)$$

Note that $p(\mathbf{X})$ is a random variable because it is a function of the random variables $X_1, X_2, \cdots, X_n$. We now prove an asymptotic property of $p(\mathbf{X})$ called the weak asymptotic equipartition property (weak AEP).

Theorem 5.1 (Weak AEP I).

$$-\frac{1}{n} \log p(\mathbf{X}) \to H(X) \quad (5.2)$$

in probability as $n \to \infty$, i.e., for any $\epsilon > 0$, for $n$ sufficiently large,

$$\Pr\left\{ \left| -\frac{1}{n} \log p(\mathbf{X}) - H(X) \right| \le \epsilon \right\} > 1 - \epsilon. \quad (5.3)$$

Proof. Since $X_1, X_2, \cdots, X_n$ are i.i.d., by (5.1),

$$-\frac{1}{n} \log p(\mathbf{X}) = -\frac{1}{n} \sum_{k=1}^{n} \log p(X_k). \quad (5.4)$$

The random variables $\log p(X_k)$ are also i.i.d. Then by the weak law of large numbers, the right hand side of (5.4) tends to

$$-E \log p(X) = H(X) \quad (5.5)$$

in probability, proving the theorem. ⊓⊔
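The convergence in Theorem 5.1 is easy to observe by simulation. The following sketch draws i.i.d. sequences from a hypothetical binary source with $p(0) = 0.1$ and $p(1) = 0.9$ and prints the empirical entropy $-\frac{1}{n} \log p(\mathbf{X})$ for increasing $n$; it approaches $H(X) \approx 0.469$.

```python
import math
import random

# Empirical check of the weak AEP for an i.i.d. binary source with
# p(0) = 0.1, p(1) = 0.9 (hypothetical numbers).
p = {0: 0.1, 1: 0.9}
H = -sum(v * math.log2(v) for v in p.values())

random.seed(1)
for n in [10, 100, 1000, 10000]:
    x = random.choices([0, 1], weights=[p[0], p[1]], k=n)
    emp = -sum(math.log2(p[s]) for s in x) / n   # empirical entropy
    print(n, emp, "->", H)
```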
The weak AEP is nothing more than a straightforward application of the weak law of large numbers. However, as we will see shortly, this property has significant implications.

Definition 5.2. The weakly typical set $W^n_{[X]\epsilon}$ with respect to $p(x)$ is the set of sequences $\mathbf{x} = (x_1, x_2, \cdots, x_n) \in \mathcal{X}^n$ such that

$$\left| -\frac{1}{n} \log p(\mathbf{x}) - H(X) \right| \le \epsilon, \quad (5.6)$$

or equivalently,

$$H(X) - \epsilon \le -\frac{1}{n} \log p(\mathbf{x}) \le H(X) + \epsilon, \quad (5.7)$$

where $\epsilon$ is an arbitrarily small positive real number. The sequences in $W^n_{[X]\epsilon}$ are called weakly $\epsilon$-typical sequences.

The quantity

$$-\frac{1}{n} \log p(\mathbf{x}) = -\frac{1}{n} \sum_{k=1}^{n} \log p(x_k) \quad (5.8)$$

is called the empirical entropy of the sequence $\mathbf{x}$. The empirical entropy of a weakly typical sequence is close to the true entropy $H(X)$. The important properties of the set $W^n_{[X]\epsilon}$ are summarized in the next theorem, which we will see is equivalent to the weak AEP.

Theorem 5.3 (Weak AEP II). The following hold for any $\epsilon > 0$:

1) If $\mathbf{x} \in W^n_{[X]\epsilon}$, then

$$2^{-n(H(X)+\epsilon)} \le p(\mathbf{x}) \le 2^{-n(H(X)-\epsilon)}. \quad (5.9)$$

2) For $n$ sufficiently large,

$$\Pr\{\mathbf{X} \in W^n_{[X]\epsilon}\} > 1 - \epsilon. \quad (5.10)$$

3) For $n$ sufficiently large,

$$(1 - \epsilon) 2^{n(H(X)-\epsilon)} \le |W^n_{[X]\epsilon}| \le 2^{n(H(X)+\epsilon)}. \quad (5.11)$$

Proof. Property 1 follows immediately from the definition of $W^n_{[X]\epsilon}$ in (5.7). Property 2 is equivalent to Theorem 5.1. To prove Property 3, we use the lower bound in (5.9) and consider

$$|W^n_{[X]\epsilon}| \, 2^{-n(H(X)+\epsilon)} \le \Pr\{W^n_{[X]\epsilon}\} \le 1, \quad (5.12)$$

which implies

$$|W^n_{[X]\epsilon}| \le 2^{n(H(X)+\epsilon)}. \quad (5.13)$$

Note that this upper bound holds for any $n \ge 1$. On the other hand, using the upper bound in (5.9) and Theorem 5.1, for $n$ sufficiently large, we have

$$1 - \epsilon \le \Pr\{W^n_{[X]\epsilon}\} \le |W^n_{[X]\epsilon}| \, 2^{-n(H(X)-\epsilon)}. \quad (5.14)$$

Then

$$|W^n_{[X]\epsilon}| \ge (1 - \epsilon) 2^{n(H(X)-\epsilon)}. \quad (5.15)$$

Combining (5.13) and (5.15) gives Property 3. The theorem is proved. ⊓⊔

Remark. Theorem 5.3 is a consequence of Theorem 5.1. However, Property 2 in Theorem 5.3 is equivalent to Theorem 5.1. Therefore, Theorem 5.1 and Theorem 5.3 are equivalent, and they will both be referred to as the weak AEP.

The weak AEP has the following interpretation. Suppose $\mathbf{X} = (X_1, X_2, \cdots, X_n)$ is drawn i.i.d. according to $p(x)$, where $n$ is large. After the sequence is drawn, we ask what the probability of occurrence of the sequence is. The weak AEP says that the probability of occurrence of the sequence drawn is close to $2^{-nH(X)}$ with very high probability. Such a sequence is called a weakly typical sequence. Moreover, the total number of weakly typical sequences is approximately equal to $2^{nH(X)}$. The weak AEP, however, does not say that most of the sequences in $\mathcal{X}^n$ are weakly typical. In fact, the number of weakly typical sequences is in general insignificant compared with the total number of sequences, because

$$\frac{|W^n_{[X]\epsilon}|}{|\mathcal{X}|^n} \approx \frac{2^{nH(X)}}{2^{n \log |\mathcal{X}|}} = 2^{-n(\log |\mathcal{X}| - H(X))} \to 0 \quad (5.16)$$

as $n \to \infty$ as long as $H(X)$ is strictly less than $\log |\mathcal{X}|$. The idea is that, although the size of the weakly typical set may be insignificant compared with the size of the set of all sequences, the former has almost all the probability.
When n is large, one can almost think of the sequence X as being obtained by choosing a sequence from the weakly typical set according to the uniform 104 5 Weak Typicality distribution. Very often, we concentrate on the properties of typical sequences because any property which is proved to be true for typical sequences will then be true with high probability. This in turn determines the average behavior of a large sample. Remark The most likely sequence is in general not weakly typical although the probability of the weakly typical set is close to 1 when n is large. For example, for Xk i.i.d. with p(0) = 0.1 and p(1) = 0.9, (1, 1, · · · , 1) is the most likely sequence, but it is not weakly typical because its empirical entropy is not close to the true entropy. The idea is that as n →∞, the probability of every sequence, including that of the most likely sequence, tends to 0. Therefore, it is not necessary for a weakly typical set to include the most likely sequence in order to possess a probability close to 1. 5.2 The Source Coding Theorem To encode a random sequence X = (X1, X2, · · · , Xn) drawn i.i.d. according to p(x) by a block code, we construct a one-to-one mapping from a subset A of X n to an index set I = {1, 2, · · · , M}, (5.17) where |A| = M ≤|X|n. We do not have to assume that |X| is finite. The indices in I are called codewords, and the integer n is called the block length of the code. If a sequence x ∈A occurs, the encoder outputs the corresponding codeword which is specified by approximately log M bits. If a sequence x ̸∈ A occurs, the encoder outputs the constant codeword 1. In either case, the codeword output by the encoder is decoded to the sequence in A corresponding to that codeword by the decoder. If a sequence x ∈A occurs, then x is decoded correctly by the decoder. If a sequence x ̸∈A occurs, then x is not decoded correctly by the decoder. For such a code, its performance is measured by the coding rate defined as n−1 log M (in bits per source symbol), and the probability of error is given by Pe = Pr{X ̸∈A}. (5.18) If the code is not allowed to make any error, i.e., Pe = 0, it is clear that M must be taken to be |X|n, or A = X n. In that case, the coding rate is equal to log |X|. However, if we allow Pe to be any small quantity, Shannon showed that there exists a block code whose coding rate is arbitrarily close to H(X) when n is sufficiently large. This is the direct part of Shannon’s source coding theorem, and in this sense the source sequence X is said to be reconstructed almost perfectly. We now prove the direct part of the source coding theorem by constructing a desired code. First, we fix ϵ > 0 and take 5.2 The Source Coding Theorem 105 A = W n [X]ϵ (5.19) and M = |A|. (5.20) For sufficiently large n, by the weak AEP, (1 −ϵ)2n(H(X)−ϵ) ≤M = |A| = |W n [X]ϵ| ≤2n(H(X)+ϵ). (5.21) Therefore, the coding rate n−1 log M satisfies 1 n log(1 −ϵ) + H(X) −ϵ ≤1 n log M ≤H(X) + ϵ. (5.22) Also by the weak AEP, Pe = Pr{X ̸∈A} = Pr{X ̸∈W n [X]ϵ} < ϵ. (5.23) Letting ϵ →0, the coding rate tends to H(X), while Pe tends to 0. This proves the direct part of the source coding theorem. The converse part of the source coding theorem says that if we use a block code with block length n and coding rate less than H(X) −ζ, where ζ > 0 does not change with n, then Pe →1 as n →∞. To prove this, consider any code with block length n and coding rate less than H(X) −ζ, so that M, the total number of codewords, is at most 2n(H(X)−ζ). 
The converse part of the source coding theorem says that if we use a block code with block length $n$ and coding rate less than $H(X) - \zeta$, where $\zeta > 0$ does not change with $n$, then $P_e \to 1$ as $n \to \infty$. To prove this, consider any code with block length $n$ and coding rate less than $H(X) - \zeta$, so that $M$, the total number of codewords, is at most $2^{n(H(X)-\zeta)}$. We can use some of these codewords for the typical sequences $\mathbf{x} \in W^n_{[X]\epsilon}$, and some for the non-typical sequences $\mathbf{x} \notin W^n_{[X]\epsilon}$. The total probability of the typical sequences covered by the code, by the weak AEP, is upper bounded by

$$2^{n(H(X)-\zeta)} 2^{-n(H(X)-\epsilon)} = 2^{-n(\zeta-\epsilon)}. \quad (5.24)$$

Therefore, the total probability covered by the code is upper bounded by

$$2^{-n(\zeta-\epsilon)} + \Pr\{\mathbf{X} \notin W^n_{[X]\epsilon}\} < 2^{-n(\zeta-\epsilon)} + \epsilon \quad (5.25)$$

for $n$ sufficiently large, again by the weak AEP. This probability is equal to $1 - P_e$ because $P_e$ is the probability that the source sequence $\mathbf{X}$ is not covered by the code. Thus

$$1 - P_e < 2^{-n(\zeta-\epsilon)} + \epsilon, \quad (5.26)$$

or

$$P_e > 1 - (2^{-n(\zeta-\epsilon)} + \epsilon). \quad (5.27)$$

This inequality holds when $n$ is sufficiently large for any $\epsilon > 0$, in particular for $\epsilon < \zeta$. Then for any $\epsilon < \zeta$, $P_e > 1 - 2\epsilon$ when $n$ is sufficiently large. Hence, $P_e \to 1$ as $n \to \infty$ and then $\epsilon \to 0$. This proves the converse part of the source coding theorem.

5.3 Efficient Source Coding

Theorem 5.4. Let $\mathbf{Y} = (Y_1, Y_2, \cdots, Y_m)$ be a random binary sequence of length $m$. Then $H(\mathbf{Y}) \le m$ with equality if and only if $Y_i$ are drawn i.i.d. according to the uniform distribution on $\{0, 1\}$.

Proof. By the independence bound for entropy,

$$H(\mathbf{Y}) \le \sum_{i=1}^{m} H(Y_i) \quad (5.28)$$

with equality if and only if $Y_i$ are mutually independent. By Theorem 2.43,

$$H(Y_i) \le \log 2 = 1 \quad (5.29)$$

with equality if and only if $Y_i$ is distributed uniformly on $\{0, 1\}$. Combining (5.28) and (5.29), we have

$$H(\mathbf{Y}) \le \sum_{i=1}^{m} H(Y_i) \le m, \quad (5.30)$$

where this upper bound is tight if and only if $Y_i$ are mutually independent and each of them is distributed uniformly on $\{0, 1\}$. The theorem is proved. ⊓⊔

Let $\mathbf{Y} = (Y_1, Y_2, \cdots, Y_n)$ be a sequence of length $n$ such that $Y_i$ are drawn i.i.d. according to the uniform distribution on $\{0, 1\}$, and let $Y$ denote the generic random variable. Then $H(Y) = 1$. According to the source coding theorem, for almost perfect reconstruction of $\mathbf{Y}$, the coding rate of the source code must be at least 1. It turns out that in this case it is possible to use a source code with coding rate exactly equal to 1 while the source sequence $\mathbf{Y}$ can be reconstructed with zero error. This can be done by simply encoding all the $2^n$ possible binary sequences of length $n$, i.e., by taking $M = 2^n$. Then the coding rate is given by

$$n^{-1} \log M = n^{-1} \log 2^n = 1. \quad (5.31)$$

Since each symbol in $\mathbf{Y}$ is a bit and the rate of the best possible code describing $\mathbf{Y}$ is 1 bit per symbol, $Y_1, Y_2, \cdots, Y_n$ are called fair bits, with the connotation that they are incompressible.

It turns out that the whole idea of efficient source coding by a block code is to describe the information source by a binary sequence consisting of "almost fair" bits. Consider a sequence of block codes which encode $\mathbf{X} = (X_1, X_2, \cdots, X_n)$ into $\mathbf{Y} = (Y_1, Y_2, \cdots, Y_m)$, where $X_k$ are i.i.d. with generic random variable $X$, $\mathbf{Y}$ is a binary sequence with length

$$m \approx nH(X), \quad (5.32)$$

and $n \to \infty$. For simplicity, we assume that the common alphabet $\mathcal{X}$ is finite. Let $\hat{\mathbf{X}} \in \mathcal{X}^n$ be the reconstruction of $\mathbf{X}$ by the decoder and $P_e$ be the probability of error, i.e.,

$$P_e = \Pr\{\mathbf{X} \ne \hat{\mathbf{X}}\}. \quad (5.33)$$

Further assume $P_e \to 0$ as $n \to \infty$. We will show that $\mathbf{Y}$ consists of almost fair bits. By Fano's inequality,

$$H(\mathbf{X}|\hat{\mathbf{X}}) \le 1 + P_e \log |\mathcal{X}|^n = 1 + nP_e \log |\mathcal{X}|. \quad (5.34)$$

Since $\hat{\mathbf{X}}$ is a function of $\mathbf{Y}$,

$$H(\mathbf{Y}) = H(\mathbf{Y}, \hat{\mathbf{X}}) \ge H(\hat{\mathbf{X}}). \quad (5.35)$$

It follows that

$$\begin{aligned} H(\mathbf{Y}) &\ge H(\hat{\mathbf{X}}) \quad (5.36) \\ &\ge I(\mathbf{X}; \hat{\mathbf{X}}) \quad (5.37) \\ &= H(\mathbf{X}) - H(\mathbf{X}|\hat{\mathbf{X}}) \quad (5.38) \\ &\ge nH(X) - (1 + nP_e \log |\mathcal{X}|) \quad (5.39) \\ &= n(H(X) - P_e \log |\mathcal{X}|) - 1. \quad (5.40) \end{aligned}$$

On the other hand, by Theorem 5.4,

$$H(\mathbf{Y}) \le m. \quad (5.41)$$
Combining (5.40) and (5.41), we have

$$n(H(X) - P_e \log |\mathcal{X}|) - 1 \le H(\mathbf{Y}) \le m. \quad (5.42)$$

Since $P_e \to 0$ as $n \to \infty$, the above lower bound on $H(\mathbf{Y})$ is approximately equal to

$$nH(X) \approx m \quad (5.43)$$

when $n$ is large (cf. (5.32)). Therefore,

$$H(\mathbf{Y}) \approx m. \quad (5.44)$$

In light of Theorem 5.4, $\mathbf{Y}$ almost attains the maximum possible entropy. In this sense, we say that $\mathbf{Y}$ consists of almost fair bits.

5.4 The Shannon-McMillan-Breiman Theorem

For an i.i.d. information source $\{X_k\}$ with generic random variable $X$ and generic distribution $p(x)$, the weak AEP states that

$$-\frac{1}{n} \log p(\mathbf{X}) \to H(X) \quad (5.45)$$

in probability as $n \to \infty$, where $\mathbf{X} = (X_1, X_2, \cdots, X_n)$. Here $H(X)$ is the entropy of the generic random variable $X$ as well as the entropy rate of the source $\{X_k\}$. In Section 2.10, we showed that the entropy rate $H$ of a source $\{X_k\}$ exists if the source is stationary. The Shannon-McMillan-Breiman theorem states that if $\{X_k\}$ is also ergodic, then

$$\Pr\left\{ -\lim_{n \to \infty} \frac{1}{n} \log \Pr\{\mathbf{X}\} = H \right\} = 1. \quad (5.46)$$

This means that if $\{X_k\}$ is stationary and ergodic, then $-\frac{1}{n} \log \Pr\{\mathbf{X}\}$ not only almost always converges, but it also almost always converges to $H$. For this reason, the Shannon-McMillan-Breiman theorem is also referred to as the weak AEP for ergodic stationary sources.

The formal definition of an ergodic source and the statement of the Shannon-McMillan-Breiman theorem require the use of measure theory, which is beyond the scope of this book. We point out that the event in (5.46) involves an infinite collection of random variables which cannot be described by a joint distribution except in very special cases. Without measure theory, the probability of this event in general cannot be properly defined. However, this does not prevent us from developing some appreciation of the Shannon-McMillan-Breiman theorem.

Let $\mathcal{X}$ be the common alphabet for a stationary source $\{X_k\}$. Roughly speaking, a stationary source $\{X_k\}$ is ergodic if the time average exhibited by a single realization of the source is equal to the ensemble average with probability 1. More specifically, for any $k_1, k_2, \cdots, k_m$,

$$\Pr\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{l=0}^{n-1} f(X_{k_1+l}, X_{k_2+l}, \cdots, X_{k_m+l}) = Ef(X_{k_1}, X_{k_2}, \cdots, X_{k_m}) \right\} = 1, \quad (5.47)$$

where $f$ is a function defined on $\mathcal{X}^m$ which satisfies suitable conditions. For the special case that $\{X_k\}$ satisfies

$$\Pr\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{l=1}^{n} X_l = EX_k \right\} = 1, \quad (5.48)$$

we say that $\{X_k\}$ is mean ergodic.

Example 5.5. The i.i.d. source $\{X_k\}$ is mean ergodic under suitable conditions because the strong law of large numbers states that (5.48) is satisfied.

Example 5.6. Consider the source $\{X_k\}$ defined as follows. Let $Z$ be a binary random variable uniformly distributed on $\{0, 1\}$. For all $k$, let $X_k = Z$. Then

$$\Pr\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{l=1}^{n} X_l = 0 \right\} = \frac{1}{2} \quad (5.49)$$

and

$$\Pr\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{l=1}^{n} X_l = 1 \right\} = \frac{1}{2}. \quad (5.50)$$

Since $EX_k = \frac{1}{2}$,

$$\Pr\left\{ \lim_{n \to \infty} \frac{1}{n} \sum_{l=1}^{n} X_l = EX_k \right\} = 0. \quad (5.51)$$

Therefore, $\{X_k\}$ is not mean ergodic and hence not ergodic.

If an information source $\{X_k\}$ is stationary and ergodic, by the Shannon-McMillan-Breiman theorem,

$$-\frac{1}{n} \log \Pr\{\mathbf{X}\} \approx H \quad (5.52)$$

when $n$ is large. That is, with probability close to 1, the probability of the sequence $\mathbf{X}$ which occurs is approximately equal to $2^{-nH}$. Then by means of arguments similar to the proof of Theorem 5.3, we see that there exist approximately $2^{nH}$ sequences in $\mathcal{X}^n$ whose probabilities are approximately equal to $2^{-nH}$, and the total probability of these sequences is almost 1.
Therefore, by encoding these sequences with approximately nH bits, the source sequence X can be recovered with an arbitrarily small probability of error when the block length n is sufficiently large. This is a generalization of the direct part of the source coding theorem which gives a physical meaning to the entropy rate of an ergodic stationary sources. We remark that if a source is stationary but not ergodic, although the entropy rate always exists, it may not carry any physical meaning. As an example, by regarding printed English as a stationary ergodic pro-cess, Shannon estimated by a guessing game that its entropy rate is about 1.3 bits per letter. Cover and King described a gambling estimate of the entropy rate of printed English which gives 1.34 bits per letter. These results show that it is not possible to describe printed English accurately by using less than about 1.3 bits per letter. 110 5 Weak Typicality Chapter Summary Weak AEP I: −1 n log p(X) →H(X) in probability. Weakly Typical Set: W n [X]ϵ =  x ∈X n : −n−1 log p(x) −H(X) ≤ϵ . Weak AEP II: 1. 2−n(H(X)+ϵ) ≤p(x) ≤2−n(H(X)−ϵ) for x ∈W n [X]ϵ 2. Pr{X ∈W n [X]ϵ} > 1 −ϵ for n sufficiently large 3. (1 −ϵ)2n(H(X)−ϵ) ≤|W n [X]ϵ| ≤2n(H(X)+ϵ) for n sufficiently large. Source Coding Theorem: An i.i.d. random sequence X1, X2, · · · , Xn with generic random variable X can be compressed at rate H(X) + ϵ while Pe →0 as n →∞. If a rate less than H(X) is used, then Pe →1 as n →∞. Shannon-McMillan-Breiman Theorem: For a stationary source {Xk} with entropy rate H, Pr  −lim n→∞ 1 n log Pr{X} = H  = 1. Problems 1. Show that for any ϵ > 0, W n [X]ϵ is nonempty for sufficiently large n. 2. The source coding theorem with a general block code In proving the con-verse of the source coding theorem, we assume that each codeword in I corresponds to a unique sequence in X n. More generally, a block code with block length n is defined by an encoding function f : X n →I and a decoding function g : I →X n. Prove that Pe →1 as n →∞even if we are allowed to use a general block code. 3. Following Problem 2, we further assume that we can use a block code with probabilistic encoding and decoding. For such a code, encoding is defined by a transition matrix F from X n to I and decoding is defined by a transition matrix G from I to X n. Prove that Pe →1 as n →∞even if we are allowed to use such a code. 4. In the discussion in Section 5.3, we made the assumption that the com-mon alphabet X is finite. Can you draw the same conclusion when X is countable but H(X) is finite? Hint: use Problem 2. Problems 111 5. Alternative definition of weak typicality Let X = (X1, X2, · · · , Xn) be an i.i.d. sequence whose generic random variable X is distributed with p(x). Let qx be the empirical distribution of the sequence x, i.e., qx(x) = n−1N(x; x) for all x ∈X, where N(x; x) is the number of occurrence of x in x. a) Show that for any x ∈X n, −1 n log p(x) = D(qx∥p) + H(qx). b) Show that for any ϵ > 0, the weakly typical set W n [X]ϵ with respect to p(x) is the set of sequences x ∈X n such that |D(qx∥p) + H(qx) −H(p)| ≤ϵ. c) Show that for sufficiently large n, Pr{|D(qx∥p) + H(qx) −H(p)| ≤ϵ} > 1 −ϵ. (Ho and Yeung .) 6. Verify that the empirical entropy of a sequence is different from the en-tropy of the empirical distribution of the sequence (see Problem 5 for definition). 7. Let p and q be two probability distributions on the same alphabet X such that H(p) ̸= H(q). Show that there exists an ϵ > 0 such that pn n x ∈X n : −1 n log pn(x) −H(q) < ϵ o →0 as n →∞. 
Give an example that p ̸= q but the above convergence does not hold. 8. Let p and q be two probability distributions on the same alphabet X with the same support. a) Prove that for any δ > 0, pn n x ∈X n : −1 n log qn(x) −(H(p) + D(p∥q)) < δ o →1 as n →∞. b) Prove that for any δ > 0, n x ∈X n : −1 n log qn(x) −(H(p) + D(p∥q)) < δ o ≤2n(H(p)+D(p∥q)+δ). 9. Universal source coding Let F = {{X(s) k , k ≥1} : s ∈S} be a family of i.i.d. information sources indexed by a finite set S with a common alphabet X. Define ¯ H = max s∈S H(X(s)) where X(s) is the generic random variable for {X(s) k , k ≥1}, and 112 5 Weak Typicality An ϵ (S) = [ s∈S W n [X(s)]ϵ, where ϵ > 0. a) Prove that for all s ∈S, Pr{X(s) ∈An ϵ (S)} →1 as n →∞, where X(s) = (X(s) 1 , X(s) 2 , · · · , X(s) n ). b) Prove that for any ϵ′ > ϵ, |An ϵ (S)| ≤2n( ¯ H+ϵ′) for sufficiently large n. c) Suppose we know that an information source is in the family F but we do not know which one it is. Devise a compression scheme for the information source such that it is asymptotically optimal for every possible source in F. 10. Let {Xk, k ≥1} be an i.i.d. information source with generic random vari-able X and alphabet X. Assume X x p(x)[log p(x)]2 < ∞ and define Zn = −log p(X) √n −√nH(X) for n = 1, 2, · · ·. Prove that Zn →Z in distribution, where Z is a Gaussian random variable with mean 0 and variance P x p(x)[log p(x)]2 −H(X)2. Historical Notes The weak asymptotic equipartition property (AEP), which is instrumental in proving the source coding theorem, was first proved by Shannon in his original paper . In this paper, he also stated that this property can be extended to an ergodic stationary source. Subsequently, McMillan and Breiman proved this property for an ergodic stationary source with a finite alphabet. Chung extended the theme to a countable alphabet. 6 Strong Typicality Weak typicality requires that the empirical entropy of a sequence is close to the true entropy. In this chapter, we introduce a stronger notion of typicality which requires that the relative frequency of each possible outcome is close to the corresponding probability. As we will see later, strong typicality is more powerful and flexible than weak typicality as a tool for theorem proving for memoryless problems. However, strong typicality can be used only for random variables with finite alphabets. Throughout this chapter, typicality refers to strong typicality and all the logarithms are in the base 2 unless otherwise specified. 6.1 Strong AEP We consider an information source {Xk, k ≥1} where Xk are i.i.d. with distribution p(x). We use X to denote the generic random variable and H(X) to denote the common entropy for all Xk, where H(X) < ∞. Let X = (X1, X2, · · · , Xn). Definition 6.1. The strongly typical set T n [X]δ with respect to p(x) is the set of sequences x = (x1, x2, · · · , xn) ∈X n such that N(x; x) = 0 for x ̸∈SX, and X x 1 nN(x; x) −p(x) ≤δ, (6.1) where N(x; x) is the number of occurrences of x in the sequence x, and δ is an arbitrarily small positive real number. The sequences in T n [X]δ are called strongly δ-typical sequences. Throughout this chapter, we adopt the convention that all the summations, products, unions, etc, are taken over the corresponding supports unless oth-erwise specified. The strongly typical set T n [X]δ shares similar properties with 114 6 Strong Typicality its weakly typical counterpart, which is summarized as the strong asymptotic equipartition property (strong AEP) below. 
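Before stating the strong AEP, Definition 6.1 can be made concrete with a short numerical sketch. The following Python function (an illustration added here; the function name and parameter values are ours, not part of the original development) computes the empirical counts N(x; x) of a sequence and tests the two conditions of strong δ-typicality directly.

```python
from collections import Counter

def is_strongly_typical(x, p, delta):
    """Test strong delta-typicality of x in the sense of Definition 6.1.

    x     -- a sequence of symbols from a finite alphabet
    p     -- dict mapping each alphabet symbol to its probability p(x)
    delta -- an arbitrarily small positive real number
    """
    n = len(x)
    counts = Counter(x)                      # counts[a] = N(a; x)
    # First condition: N(a; x) = 0 for every a outside the support of p.
    if any(p.get(a, 0) == 0 for a in counts):
        return False
    # Second condition: sum over the alphabet of |N(a; x)/n - p(a)| <= delta.
    return sum(abs(counts.get(a, 0) / n - pa) for a, pa in p.items()) <= delta

# The distribution p(0) = 0.5, p(1) = 0.25, p(2) = 0.25 reappears in the
# comparison of strong and weak typicality in Section 6.2 below.
p = {0: 0.5, 1: 0.25, 2: 0.25}
x = [0] * 50 + [1] * 25 + [2] * 25    # empirical distribution equals p exactly
y = [0] * 50 + [1] * 50               # relative frequencies 0.5, 0.5, 0
print(is_strongly_typical(x, p, delta=0.05))  # True
print(is_strongly_typical(y, p, delta=0.05))  # False
```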
The interpretation of the strong AEP is similar to that of the weak AEP. Theorem 6.2 (Strong AEP). There exists η > 0 such that η →0 as δ →0, and the following hold: 1) If x ∈T n [X]δ, then 2−n(H(X)+η) ≤p(x) ≤2−n(H(X)−η). (6.2) 2) For n sufficiently large, Pr{X ∈T n [X]δ} > 1 −δ. (6.3) 3) For n sufficiently large, (1 −δ)2n(H(X)−η) ≤|T n [X]δ| ≤2n(H(X)+η). (6.4) Proof To prove Property 1, for x ∈T n [X]δ, we write p(x) = Y x p(x)N(x;x). (6.5) Then log p(x) = X x N(x; x) log p(x) (6.6) = X x (N(x; x) −np(x) + np(x)) log p(x) (6.7) = n X x p(x) log p(x) −n X x  1 nN(x; x) −p(x)  (−log p(x)) (6.8) = −n " H(X) + X x  1 nN(x; x) −p(x)  (−log p(x)) # . (6.9) Since x ∈T n [X]δ, X x 1 nN(x; x) −p(x) ≤δ, (6.10) which implies X x  1 nN(x; x) −p(x)  (−log p(x)) 6.1 Strong AEP 115 ≤ X x 1 nN(x; x) −p(x) (−log p(x)) (6.11) ≤−log  min x p(x)  X x 1 nN(x; x) −p(x) (6.12) ≤−δ log  min x p(x)  (6.13) = η, (6.14) where η = −δ log  min x p(x)  > 0. (6.15) Therefore, −η ≤ X x  1 nN(x; x) −p(x)  (−log p(x)) ≤η. (6.16) It then follows from (6.9) that −n(H(X) + η) ≤log p(x) ≤−n(H(X) −η), (6.17) or 2−n(H(X)+η) ≤p(x) ≤2−n(H(X)−η), (6.18) where η →0 as δ →0, proving Property 1. To prove Property 2, we write N(x; X) = n X k=1 Bk(x), (6.19) where Bk(x) =  1 if Xk = x 0 if Xk ̸= x. (6.20) Then Bk(x), k = 1, 2, · · · , n are i.i.d. random variables with Pr{Bk(x) = 1} = p(x) (6.21) and Pr{Bk(x) = 0} = 1 −p(x). (6.22) Note that EBk(x) = (1 −p(x)) · 0 + p(x) · 1 = p(x). (6.23) By the weak law of large numbers, for any δ > 0 and for any x ∈X, Pr ( 1 n n X k=1 Bk(x) −p(x) > δ |X| ) < δ |X| (6.24) for n sufficiently large. Then 116 6 Strong Typicality Pr  1 nN(x; X) −p(x) > δ |X| for some x  = Pr ( 1 n n X k=1 Bk(x) −p(x) > δ |X| for some x ) (6.25) = Pr ([ x ( 1 n n X k=1 Bk(x) −p(x) > δ |X| )) (6.26) ≤ X x Pr ( 1 n n X k=1 Bk(x) −p(x) > δ |X| ) (6.27) < X x δ |X| (6.28) = δ, (6.29) where we have used the union bound1 to obtain (6.27). Since X x 1 nN(x; x) −p(x) > δ (6.30) implies 1 nN(x; x) −p(x) > δ |X| for some x ∈X, (6.31) we have Pr n X ∈T n [X]δ o = Pr (X x 1 nN(x; X) −p(x) ≤δ ) (6.32) = 1 −Pr (X x 1 nN(x; X) −p(x) > δ ) (6.33) ≥1 −Pr  1 nN(x; X) −p(x) > δ |X| for some x ∈X  (6.34) > 1 −δ, (6.35) proving Property 2. Finally, Property 3 follows from Property 1 and Property 2 in exactly the same way as in Theorem 5.3, so the proof is omitted. ⊓ ⊔ Remark Analogous to weak typicality, we note that the upper bound on |T n [X]δ| in Property 3 holds for all n ≥1, and for any δ > 0, there exists at least one strongly typical sequence when n is sufficiently large. See Problem 1 in Chapter 5. 1 The union bound refers to Pr{A ∪B} ≤Pr{A} + Pr{B}. 6.1 Strong AEP 117 In the rest of the section, we prove an enhancement of Property 2 of the strong AEP which gives an exponential bound on the probability of obtaining a non-typical vector2. The reader may skip this part at the first reading. Theorem 6.3. For sufficiently large n, there exists ϕ(δ) > 0 such that Pr{X ̸∈T n [X]δ} < 2−nϕ(δ). (6.36) The proof of this theorem is based on the Chernoffbound which we prove in the next lemma. Lemma 6.4 (ChernoffBound). Let Y be a real random variable and s be any nonnegative real number. Then for any real number a, log Pr{Y ≥a} ≤−sa + log E 2sY (6.37) and log Pr{Y ≤a} ≤sa + log E 2−sY . (6.38) Proof. Let u(y) =  1 if y ≥0 0 if y < 0. (6.39) Then for any s ≥0, u(y −a) ≤2s(y−a). (6.40) This is illustrated in Fig. 6.1. Then E[u(Y −a)] ≤E h 2s(Y −a)i = 2−saE 2sY . 
(6.41) Since E[u(Y −a)] = Pr{Y ≥a} · 1 + Pr{Y < a} · 0 = Pr{Y ≥a}, (6.42) we see that Pr{Y ≥a} ≤2−saE 2sY = 2−sa+log E[2sY ]. (6.43) Then (6.37) is obtained by taking logarithm in the base 2. Upon replacing Y by −Y and a by −a in (6.37), (6.38) is obtained. The lemma is proved. ⊓ ⊔ Proof of Theorem 6.3. We will follow the notation in the proof of Theorem 6.2. Consider x ∈X such that p(x) > 0. Applying (6.37), we have 2 This result is due to Ning Cai and Raymond W. Yeung. An alternative proof based on Pinsker’s inequality (Theorem 2.33) and the method of types has been given by Prakash Narayan (private communication). See also Proposition 1 in Weissman et al. . 118 6 Strong Typicality log Pr ( n X k=1 Bk(x) ≥n (p(x) + δ) ) ≤−sn (p(x) + δ) + log E h 2sPn k=1 Bk(x)i (6.44) a) = −sn (p(x) + δ) + log n Y k=1 E h 2sBk(x)i! (6.45) b) = −sn (p(x) + δ) + n log(1 −p(x) + p(x)2s) (6.46) c) ≤−sn (p(x) + δ) + n(ln 2)−1(−p(x) + p(x)2s) (6.47) = −n s (p(x) + δ) + (ln 2)−1p(x)(1 −2s) , (6.48) where a) follows because Bk(x) are mutually independent; b) is a direct evaluation of the expectation from the definition of Bk(x) in (6.20); c) follows from the fundamental inequality ln a ≤a −1. In (6.48), upon defining βx(s, δ) = s (p(x) + δ) + (ln 2)−1p(x)(1 −2s), (6.49) we have log Pr ( n X k=1 Bk(x) ≥n (p(x) + δ) ) ≤−nβx(s, δ), (6.50) or Pr ( n X k=1 Bk(x) ≥n (p(x) + δ) ) ≤2−nβx(s,δ). (6.51) !1 0 1 2 3 4 5 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 2 a y 1 s y!a 2 ( ) u y!a ( ) Fig. 6.1. An illustration of u(y −a) ≤2s(y−a). 6.1 Strong AEP 119 It is readily seen that βx(0, δ) = 0. (6.52) Regarding δ as fixed and differentiate with respect to s, we have β′ x(s, δ) = p(x)(1 −2s) + δ. (6.53) Then β′ x(0, δ) = δ > 0 (6.54) and it is readily verified that β′ x(s, δ) ≥0 (6.55) for 0 ≤s ≤log  1 + δ p(x)  . (6.56) Therefore, we conclude that βx(s, δ) is strictly positive for 0 < s ≤log  1 + δ p(x)  . (6.57) On the other hand, by applying (6.38), we can obtain in the same fashion the bound log Pr ( n X k=1 Bk(x) ≤n (p(x) −δ) ) ≤−nσx(s, δ), (6.58) or Pr ( n X k=1 Bk(x) ≤n (p(x) −δ) ) ≤2−nσx(s,δ), (6.59) where σx(s, δ) = −s (p(x) −δ) + (ln 2)−1p(x)(1 −2−s). (6.60) Then σx(0, δ) = 0, (6.61) and σ′ x(s, δ) = p(x)(2−s −1) + δ, (6.62) which is nonnegative for 0 ≤s ≤−log  1 − δ p(x)  . (6.63) In particular, σ′ x(0, δ) = δ > 0. (6.64) Therefore, we conclude that σx(s, δ) is strictly positive for 120 6 Strong Typicality 0 < s ≤−log  1 − δ p(x)  . (6.65) By choosing s satisfying 0 < s ≤min  log  1 + δ p(x)  , −log  1 − δ p(x)  , (6.66) both βx(s, δ) and σx(s, δ) are strictly positive. From (6.51) and (6.59), we have Pr ( 1 n n X k=1 Bk(x) −p(x) ≥δ ) = Pr ( n X k=1 Bk(x) −np(x) ≥nδ ) (6.67) ≤Pr ( n X k=1 Bk(x) ≥n (p(x) + δ) ) +Pr ( n X k=1 Bk(x) ≤n (p(x) −δ) ) (6.68) ≤2−nβx(s,δ) + 2−nσx(s,δ) (6.69) ≤2 · 2−n min(βx(s,δ),σx(s,δ)) (6.70) = 2−n[min(βx(s,δ),σx(s,δ))−1 n] (6.71) = 2−nϕx(δ), (6.72) where ϕx(δ) = min(βx(s, δ), σx(s, δ)) −1 n. (6.73) Then ϕx(δ) is strictly positive for sufficiently large n because both βx(s, δ) and σx(s, δ) are strictly positive. Finally, consider Pr{X ∈T n [X]δ} = Pr (X x 1 nN(x; X) −p(x) ≤δ ) (6.74) ≥Pr  1 nN(x; X) −p(x) ≤ δ |X| for all x ∈X  (6.75) = 1 −Pr  1 nN(x; X) −p(x) > δ |X| for some x ∈X  (6.76) ≥1 − X x Pr  1 nN(x; X) −p(x) > δ |X|  (6.77) 6.2 Strong Typicality Versus Weak Typicality 121 = 1 − X x Pr ( 1 n n X k=1 Bk(x) −p(x) > δ |X| ) (6.78) = 1 − X x:p(x)>0 Pr ( 1 n n X k=1 Bk(x) −p(x) > δ |X| ) (6.79) ≥1 − X x:p(x)>0 2−nϕx δ |X|  , (6.80) where the last step follows from (6.72). 
Define ϕ(δ) = 1 2  min x:p(x)>0 ϕx  δ |X|  . (6.81) Then for sufficiently large n, Pr{X ∈T n [X]δ} > 1 −2−nϕ(δ), (6.82) or Pr{X ̸∈T n [X]δ} < 2−nϕ(δ), (6.83) where ϕ(δ) is strictly positive. The theorem is proved. ⊓ ⊔ 6.2 Strong Typicality Versus Weak Typicality As we have mentioned at the beginning of the chapter, strong typicality is more powerful and flexible than weak typicality as a tool for theorem proving for memoryless problems, but it can be used only for random variables with finite alphabets. We will prove in the next proposition that strong typicality is stronger than weak typicality in the sense that the former implies the latter. Proposition 6.5. For any x ∈X n, if x ∈T n [X]δ, then x ∈W n [X]η, where η →0 as δ →0. Proof. By Property 1 of strong AEP (Theorem 6.2), if x ∈T n [X]δ, then 2−n(H(X)+η) ≤p(x) ≤2−n(H(X)−η), (6.84) or H(X) −η ≤−1 n log p(x) ≤H(X) + η, (6.85) where η →0 as δ →0. Then x ∈W n [X]η by Definition 5.2. The proposition is proved. ⊓ ⊔ We have proved in this proposition that strong typicality implies weak typicality, but the converse is not true. This idea can be explained without 122 6 Strong Typicality any detailed analysis. Let X be distributed with p such that p(0) = 0.5, p(1) = 0.25, and p(2) = 0.25. Consider a sequence x of length n and let q(i) be the relative frequency of occurrence of symbol i in x, i.e., 1 nN(i; x), where i = 0, 1, 2. In order for the sequence x to be weakly typical, we need −1 n log p(x) = −q(0) log 0.5 −q(1) log 0.25 −q(2) log 0.25 (6.86) ≈H(X) (6.87) = −(0.5) log 0.5 −(0.25) log 0.25 −(0.25) log 0.25. (6.88) Obviously, this can be satisfied by choosing q(i) = p(i) for all i. But alterna-tively, we can choose q(0) = 0.5, q(1) = 0.5, and q(2) = 0. With such a choice of {q(i)}, the sequence x is weakly typical with respect to p but obviously not strongly typical with respect to p, because the relative frequency of occurrence of each symbol i is q(i), which is not close to p(i) for i = 1, 2. Therefore, we conclude that strong typicality is indeed stronger than weak typicality. However, as we have pointed out at the beginning of the chapter, strong typicality can only be used for random variables with finite alphabets. 6.3 Joint Typicality In this section, we discuss strong joint typicality with respect to a bivariate distribution. Generalization to a multivariate distribution is straightforward. Consider a bivariate information source {(Xk, Yk), k ≥1} where (Xk, Yk) are i.i.d. with distribution p(x, y). We use (X, Y ) to denote the pair of generic random variables. Definition 6.6. The strongly jointly typical set T n [XY ]δ with respect to p(x, y) is the set of (x, y) ∈X n × Yn such that N(x, y; x, y) = 0 for (x, y) ̸∈SXY , and X x X y 1 nN(x, y; x, y) −p(x, y) ≤δ, (6.89) where N(x, y; x, y) is the number of occurrences of (x, y) in the pair of se-quences (x, y), and δ is an arbitrarily small positive real number. A pair of sequences (x, y) is called strongly jointly δ-typical if it is in T n [XY ]δ. Strong typicality satisfies the following consistency property. Theorem 6.7 (Consistency). If (x, y) ∈T n [XY ]δ, then x ∈T n [X]δ and y ∈ T n [Y ]δ. Proof. If (x, y) ∈T n [XY ]δ, then 6.3 Joint Typicality 123 X x X y 1 nN(x, y; x, y) −p(x, y) ≤δ. (6.90) Upon observing that N(x; x) = X y N(x, y; x, y), (6.91) we have X x 1 nN(x; x) −p(x) = X x 1 n X y N(x, y; x, y) − X y p(x, y) (6.92) = X x X y  1 nN(x, y; x, y) −p(x, y)  (6.93) ≤ X x X y 1 nN(x, y; x, y) −p(x, y) (6.94) ≤δ. (6.95) Therefore, x ∈T n [X]δ. Similarly, y ∈T n [Y ]δ. 
The theorem is proved. ⊓ ⊔ The following thoerem asserts that strong typicality is preserved when a function is applied to a vector componentwise. Theorem 6.8 (Preservation). Let Y = f(X). If x = (x1, x2, · · · , xn) ∈T n [X]δ, (6.96) then f(x) = (y1, y2, · · · , yn) ∈T n [Y ]δ, (6.97) where yi = f(xi) for 1 ≤i ≤n. Proof. Consider x ∈T n [X]δ, i.e., X x 1 nN(x; x) −p(x) < δ. (6.98) Since Y = f(X), p(y) = X x∈f −1(y) p(x) (6.99) for all y ∈Y. On the other hand, 124 6 Strong Typicality N(y; f(x)) = X x∈f −1(y) N(x; x) (6.100) for all y ∈Y. Then X y 1 nN(y; f(x)) −p(y) = X y X x∈f −1(y)  1 nN(x; x) −p(x)  (6.101) ≤ X y X x∈f −1(y) 1 nN(x; x) −p(x) (6.102) = X x 1 nN(x; x) −p(x) (6.103) < δ. (6.104) Therefore, f(x) ∈T n [Y ]δ, proving the lemma. ⊓ ⊔ For a bivariate i.i.d. source {(Xk, Yk)}, we have the strong joint asymp-totic equipartition property (strong JAEP), which can readily be obtained by applying the strong AEP to the source {(Xk, Yk)}. Theorem 6.9 (Strong JAEP). Let (X, Y) = ((X1, Y1), (X2, Y2), · · · , (Xn, Yn)), (6.105) where (Xi, Yi) are i.i.d. with generic pair of random variables (X, Y ). Then there exists λ > 0 such that λ →0 as δ →0, and the following hold: 1) If (x, y) ∈T n [XY ]δ, then 2−n(H(X,Y )+λ) ≤p(x, y) ≤2−n(H(X,Y )−λ). (6.106) 2) For n sufficiently large, Pr{(X, Y) ∈T n [XY ]δ} > 1 −δ. (6.107) 3) For n sufficiently large, (1 −δ)2n(H(X,Y )−λ) ≤|T n [XY ]δ| ≤2n(H(X,Y )+λ). (6.108) From the strong JAEP, we can see the following. Since there are approxi-mately 2nH(X,Y ) typical (x, y) pairs and approximately 2nH(X) typical x, for a typical x, the number of y such that (x, y) is jointly typical is approximately 6.3 Joint Typicality 125 2nH(X,Y ) 2nH(X) = 2nH(Y |X) (6.109) on the average. The next theorem reveals that this is not only true on the average, but it is in fact true for every typical x as long as there exists at least one y such that (x, y) is jointly typical. Theorem 6.10 (Conditional Strong AEP). For any x ∈T n [X]δ, define T n [Y |X]δ(x) = {y ∈T n [Y ]δ : (x, y) ∈T n [XY ]δ}. (6.110) If |T n [Y |X]δ(x)| ≥1, then 2n(H(Y |X)−ν) ≤|T n [Y |X]δ(x)| ≤2n(H(Y |X)+ν), (6.111) where ν →0 as n →∞and δ →0. We first prove the following lemma which is along the line of Stirling’s approximation . Lemma 6.11. For any n > 0, n ln n −n < ln n! < (n + 1) ln(n + 1) −n. (6.112) Proof. First, we write ln n! = ln 1 + ln 2 + · · · + ln n. (6.113) Since ln x is a monotonically increasing function of x, we have Z k k−1 ln x dx < ln k < Z k+1 k ln x dx. (6.114) Summing over 1 ≤k ≤n, we have Z n 0 ln x dx < ln n! < Z n+1 1 ln x dx, (6.115) or n ln n −n < ln n! < (n + 1) ln(n + 1) −n. (6.116) The lemma is proved. ⊓ ⊔ Proof of Theorem 6.10. Let δ be a small positive real number and n be a large positive integer to be specified later. Fix an x ∈T n [X]δ, so that X x 1 nN(x; x) −p(x) ≤δ. (6.117) 126 6 Strong Typicality This implies that for all x ∈X, 1 nN(x; x) −p(x) ≤δ, (6.118) or p(x) −δ ≤1 nN(x; x) ≤p(x) + δ. (6.119) We first prove the upper bound on |T n [Y |X]δ(x)|. For any ν > 0, consider 2−n(H(X)−ν/2) a) ≥p(x) (6.120) = X y∈Yn p(x, y) (6.121) ≥ X y∈T n [Y |X]δ(x) p(x, y) (6.122) b) ≥ X y∈T n [Y |X]δ(x) 2−n(H(X,Y )+ν/2) (6.123) = |T n [Y |X]δ(x)|2−n(H(X,Y )+ν/2), (6.124) where a) and b) follow from the strong AEP (Theorem 6.2) and the strong joint AEP (Theorem 6.9), respectively. Then we obtain |T n [Y |X]δ(x)| ≤2n(H(Y |X)+ν), (6.125) which is the upper bound to be proved. Assume that |T n [Y |X]δ(x)| ≥1. We now prove the lower bound on |T n [Y |X]δ(x)|. 
Let {K(x, y), (x, y) ∈X × Y} (6.126) be any set of nonnegative integers such that 1. X y K(x, y) = N(x; x) (6.127) for all x ∈X, and 2. for any y ∈Yn, if N(x, y; x, y) = K(x, y) (6.128) for all (x, y) ∈X × Y, then (x, y) ∈T n [XY ]δ. Then by Definition 6.6, {K(x, y)} satisfy X x X y 1 nK(x, y) −p(x, y) ≤δ, (6.129) 6.3 Joint Typicality 127 which implies that for all (x, y) ∈X × Y, 1 nK(x, y) −p(x, y) ≤δ, (6.130) or p(x, y) −δ ≤1 nK(x, y) ≤p(x, y) + δ. (6.131) Such a set {K(x, y)} exists because T n [Y |X]δ(x) is assumed to be nonempty. Straightforward combinatorics reveals that the number of y which satisfy the constraints in (6.128) is equal to M(K) = Y x N(x; x)! Q y K(x, y)!, (6.132) and it is readily seen that |T n [Y |X]δ(x)| ≥M(K). (6.133) Using Lemma 6.11, we can lower bound ln M(K) as follows. ln M(K) ≥ X x n N(x; x) ln N(x; x) −N(x; x) − X y [(K(x, y) + 1) ln(K(x, y) + 1) −K(x, y)] ) (6.134) a) = X x h N(x; x) ln N(x; x) − X y (K(x, y) + 1) ln(K(x, y) + 1) # (6.135) b) ≥ X x {N(x; x) ln(n (p(x) −δ)) − X y (K(x, y) + 1) ln  n  p(x, y) + δ + 1 n ) . (6.136) In the above, a) follows from (6.127), and b) is obtained by applying the lower bound on n−1N(x; x) in (6.119) and the upper bound on n−1K(x, y) in (6.131). Also from (6.127), the coefficient of ln n in (6.136) is given by X x " N(x; x) − X y (K(x, y) + 1) # = −|X||Y|. (6.137) Let δ be sufficiently small and n be sufficiently large so that 128 6 Strong Typicality 0 < p(x) −δ < 1 (6.138) and p(x, y) + δ + 1 n < 1 (6.139) for all x and y. Then in (6.136), both the logarithms ln(p(x) −δ) (6.140) and ln  p(x, y) + δ + 1 n  (6.141) are negative. Note that the logarithm in (6.140) is well-defined by virtue of (6.138). Rearranging the terms in (6.136), applying the upper bound in (6.119) and the lower bound3 in (6.131), and dividing by n, we have n−1 ln M(K) ≥ X x (p(x) + δ) ln (p(x) −δ) − X x X y  p(x, y) −δ + 1 n  × ln  p(x, y) + δ + 1 n  −|X||Y| ln n n (6.142) = −He(X) + He(X, Y ) + Ll(n, δ) (6.143) = He(Y |X) + Ll(n, δ), (6.144) where Ll(n, δ) denotes a function of n and δ which tends to 0 as n →∞and δ →0. Changing the base of the logarithm to 2, we have n−1 log M(K) ≥H(Y |X) + Ll(n, δ). (6.145) Then it follows from (6.133) that n−1 log |T n [Y |X]δ(x)| ≥H(Y |X) + Ll(n, δ). (6.146) Upon replacing Ll(n, δ) by ν, we obtain |T n [Y |X]δ(x)| ≥2n(H(Y |X)−ν), (6.147) where ν →0 as n →∞and δ →0 as required. The theorem is proved. ⊓ ⊔ The above theorem says that for any typical x, as long as there is one typical y such that (x, y) is jointly typical, there are approximately 2nH(Y |X) y such that (x, y) is jointly typical. This theorem has the following corollary that the number of such typical x grows with n at almost the same rate as the total number of typical x. 3 For the degenerate case when p(x, y) = 1 for some x and y, p(x, y) + δ + 1 n > 1, and the logarithm in (6.141) is in fact positive. Then the upper bound instead of the lower bound should be applied. The details are omitted. 6.3 Joint Typicality 129 2 nH ( Y ) 2 nH ( X,Y ) 2 nH ( X ) y S [ Y ] n x S [ X ] n ( x , y ) T [ XY ] n . . . . . . . . . . . . . . . Fig. 6.2. A two-dimensional strong joint typicality array. Corollary 6.12. For a joint distribution p(x, y) on X × Y, let Sn [X]δ be the set of all sequences x ∈T n [X]δ such that T n [Y |X]δ(x) is nonempty. Then |Sn [X]δ| ≥(1 −δ)2n(H(X)−ψ), (6.148) where ψ →0 as n →∞and δ →0. Proof. By the consistency of strong typicality (Theorem 6.7), if (x, y) ∈ T n [XY ]δ, then x ∈T n [X]δ. In particular, x ∈Sn [X]δ. 
Then T n [XY ]δ = [ x∈Sn [X]δ {(x, y) : y ∈T n [Y |X]δ(x)}. (6.149) Using the lower bound on |T n [XY ]δ| in Theorem 6.9 and the upper bound on |T n [Y |X]δ(x)| in the last theorem, we have (1 −δ)2n(H(X,Y )−λ) ≤|T n [XY ]δ| ≤|Sn [X]δ|2n(H(Y |X)+ν) (6.150) which implies |Sn [X]δ| ≥(1 −δ)2n(H(X)−(λ+ν)). (6.151) The theorem is proved upon letting ψ = λ + ν. ⊓ ⊔ We have established a rich set of structural properties for strong typicality with respect to a bivariate distribution p(x, y), which is summarized in the two-dimensional strong joint typicality array in Figure 6.2. In this array, the rows and the columns are the typical sequences x ∈Sn [X]δ and y ∈Sn [Y ]δ, respectively. The total number of rows and columns are approximately equal to 2nH(X) and 2nH(Y ), respectively. An entry indexed by (x, y) receives a dot if (x, y) is strongly jointly typical. The total number of dots is approximately equal to 2nH(X,Y ). The number of dots in each row is approximately equal to 2nH(Y |X), while the number of dots in each column is approximately equal to 2nH(X|Y ). 130 6 Strong Typicality 2 nH ( Y ) 2 nH ( Z ) ( x 0 , y 0 ) z 0 z S [ Z ] n y S [ Y ] n 2 nH ( X ) x S [ X ] n Fig. 6.3. A three-dimensional strong joint typicality array. For reasons which will become clear in Chapter 16, the strong joint typical-ity array in Figure 6.2 is said to exhibit an asymptotic quasi-uniform structure. By a two-dimensional asymptotic quasi-uniform structure, we mean that in the array all the columns have approximately the same number of dots, and all the rows have approximately the same number of dots. The strong joint typicality array for a multivariate distribution continues to exhibit an asymp-totic quasi-uniform structure. The three-dimensional strong joint typicality array with respect to a distribution p(x, y, z) is illustrated in Figure 6.3. As before, an entry (x, y, z) receives a dot if (x, y, z) is strongly jointly typical. This is not shown in the figure otherwise it will be very confusing. The total number of dots in the whole array is approximately equal to 2nH(X,Y,Z). These dots are distributed in the array such that all the planes parallel to each other have approximately the same number of dots, and all the cylinders parallel to each other have approximately the same number of dots. More specifically, the total number of dots on the plane for any fixed z0 ∈Sn [Z]δ (as shown) is approximately equal to 2nH(X,Y |Z), and the total number of dots in the cylin-der for any fixed (x0, y0) pair in Sn [XY ]δ (as shown) is approximately equal to 2nH(Z|X,Y ), so on and so forth. We see from the strong AEP and Corollary 6.12 that Sn [X]δ and T n [X]δ grow with n at approximately the same rate. We end this section by stating in the next proposition that Sn [X]δ indeed contains almost all the probability when n is large. The proof is left as an exercise (see Problem 4). Proposition 6.13. With respect to a joint distribution p(x, y) on X × Y, for any δ > 0, Pr{X ∈Sn [X]δ} > 1 −δ (6.152) for n sufficiently large. Chapter Summary 131 6.4 An Interpretation of the Basic Inequalities The asymptotic quasi-uniform structure exhibited in a strong joint typicality array discussed in the last section is extremely important in information the-ory. Later in the book, we will see how this structure is involved in proving results such as the channel coding theorem and the rate-distortion theorem. In this section, we show how the basic inequalities can be revealed by examining this structure. 
It has further been shown by Chan that all unconstrained information inequalities can be obtained from this structure, thus giving a physical meaning to these inequalities. Consider random variables X, Y , and Z and a fixed z ∈Sn [Z]δ, so that T n [XY |Z]δ(z) is nonempty. By the consistency of strong typicality, if (x, y, z) ∈ T n [XY Z]δ, then (x, z) ∈T n [XZ]δ and (y, z) ∈T n [Y Z]δ, or x ∈T n [X|Z]δ(z) and y ∈T n [Y |Z]δ(z), respectively. Thus T n [XY |Z]δ(z) ⊂T n [X|Z]δ(z) × T n [Y |Z]δ(z), (6.153) which implies |T n [XY |Z]δ(z)| ≤|T n [X|Z]δ(z)||T n [Y |Z]δ(z)|. (6.154) Applying the lower bound in Theorem 6.10 to T n [XY |Z]δ(z) and the upper bound to T n [X|Z]δ(z) and T n [Y |Z]δ(z), we have 2n(H(X,Y |Z)−ζ) ≤2n(H(X|Z)+γ)2n(H(Y |Z)+φ), (6.155) where ζ, γ, φ →0 as n →∞and δ →0. Taking logarithm to the base 2 and dividing by n, we obtain H(X, Y |Z) ≤H(X|Z) + H(Y |Z) (6.156) upon letting n →∞and δ →0. This inequality is equivalent to I(X; Y |Z) ≥0. (6.157) Thus we have proved the nonnegativity of conditional mutual information. Since all Shannon’s information measures are special cases of conditional mu-tual information, we have proved the nonnegativity of all Shannon’s informa-tion measures, namely the basic inequalities. Chapter Summary Strong typicality implies weak typicality but can be used only for random variables with finite alphabets. Strongly Typical Set: 132 6 Strong Typicality T n [X]δ = ( x ∈X n : X x n−1N(x; x) −p(x) ≤δ ) . Strong AEP: 1. 2−n(H(X)+η) ≤p(x) ≤2−n(H(X)−η) for x ∈T n [X]δ 2. Pr{X ∈T n [X]δ} > 1 −δ for n sufficiently large 3. (1 −δ)2n(H(X)−η) ≤|T n [X]δ| ≤2n(H(X)+η) for n sufficiently large. Theorem: For sufficiently large n, Pr{X ̸∈T n [X]δ} < 2−nϕ(δ). Consistency: If (x, y) ∈T n [XY ]δ, then x ∈T n [X]δ and y ∈T n [Y ]δ. Preservation: If x ∈T n [X]δ, then f(x) ∈T n [f(X)]δ. Conditional Strong AEP: For x ∈T n [X]δ, let T n [Y |X]δ(x) = {y ∈T n [Y ]δ : (x, y) ∈T n [XY ]δ}. If |T n [Y |X]δ(x)| ≥1, then 2n(H(Y |X)−ν) ≤|T n [Y |X]δ(x)| ≤2n(H(Y |X)+ν). Problems 1. Show that (x, y) ∈T n [X,Y ]δ and (y, z) ∈T n [Y,Z]δ do not imply (x, z) ∈ T n [X,Z]δ. 2. Let X = (X1, X2, · · · , Xn), where Xk are i.i.d. with generic random vari-able X. Prove that Pr{X ∈T n [X]δ} ≥1 −|X|3 nδ2 for any n and δ > 0. This shows that Pr{X ∈T n [X]δ} →1 as δ →0 and n →∞if √nδ →∞. 3. Prove that for a random variable X with a countable alphabet, Property 2 of the strong AEP holds, while Properties 1 and 3 do not hold. 4. Prove Proposition 6.13. Hint: Use the fact that if (X, Y) ∈T n [XY ]δ, then X ∈Sn [X]δ. 5. Let P(X) be the set of all probability distributions over a finite alphabet X. Find a polynomial Q(n) such that for any integer n, there exists a subset Pn(X) of P(X) such that Problems 133 a) |Pn(X)| ≤Q(n); b) for all P ∈P(X), there exists Pn ∈Pn(X) such that |Pn(x) −P(x)| < 1 n for all x ∈X. Hint: Let Pn(X) be the set of all probability distributions over X such that all the probability masses can be expressed as fractions with denominator n. 6. Let p be any probability distribution over a finite set X and η be a real number in (0, 1). Prove that for any subset A of X n with pn(A) ≥η, |A ∩T n [X]δ| ≥2n(H(p)−δ′), where δ′ →0 as δ →0 and n →∞. In the following problems, for a sequence x ∈X n, let qx be the empirical distribution of x, i.e., qx(x) = n−1N(x; x) for all x ∈X. Similarly, for a pair of sequences (x, y) ∈X n × Yn, let qx,y be the joint empirical distribution of (x, y), i.e., qx,y(x, y) = n−1N(x, y; x, y) for all (x, y) ∈X × Y. 7. 
Alternative definition of strong typicality Show that (6.1) is equivalent to V (qx, p) ≤δ, where V (·, ·) denotes the variational distance. Thus strong typicality can be regarded as requiring the empirical distribution of a sequence to be close to the probability distribution of the generic random variable in variational distance. Also compare the result here with the alternative definition of weak typicality (Problem 5 in Chapter 5). 8. The empirical distribution qx of the sequence x is also called the type of x. Assuming that X is finite, show that there are a total of n+|X|−1 n  distinct types qx. Hint: There are a+b−1 a  ways to distribute a identical balls in b boxes. 9. Unified typicality Let X = (X1, X2, · · · , Xn) be an i.i.d. sequence whose generic random variable X is distributed with p(x), where the alpbabet X is countable. For any η > 0, the unified typical set U n [X]η with respect to p(x) is the set of sequences x ∈X n such that D(qx∥p) + |H(qx) −H(p)| ≤η. a) Show that for any x ∈X n, if x ∈U n [X]η, then x ∈W n [X]η. b) Show that for any x ∈X n, if x ∈U n [X]η, then x ∈T n [X]δ, where δ = √η · 2 ln 2. Therefore, unified typicality implies both weak typicality and strong typ-icality. 134 6 Strong Typicality 10. The AEP for unified typicality Unified typicality defined in Problem 9, unlike strong typicality, can be applied to random variables whose alpha-bets are countable . At the same time, it preserves the essential properties of strong typicality. The following outlines the proof of the AEP which has been discussed in Theorem 5.3 and Theorem 6.2 for weak typicality and strong typicality, respectively. a) Show that 2−n(H(X)+η) ≤p(x) ≤2−n(H(X)−η), i.e., Property 1 of the AEP. b) Show that for sufficiently large n, Pr{H(qx) −H(p) > ϵ} < ϵ. Hint: Use the results in Problem 9 above and Problem 5 in Chapter 5. c) It can be proved by means of the result in Problem 9 that Pr{H(p) −H(qx) > ϵ} < ϵ (see Ho and Yeung ). By assuming this inequality, prove that Pr{|H(qx) −H(p)| ≤ϵ} < 1 −2ϵ. d) Show that if |H(qx) −H(p)| ≤ϵ and |D(qx∥p) + H(qx) −H(p)| ≤ϵ, then D(qx∥p) + |H(qx) −H(p)| ≤3ϵ. e) Use the results in c) and d) above and the result in Problem 5, Part c) in Chapter 5 to show that Pr{D(qx∥p) + |H(qx) −H(p)| ≤η} > 1 −η. This proves Property 2 of the AEP. Property 3 of the AEP follows from Property 1 as in the proof of Theorem 5.3. 11. Consistency of unified typicality For any η > 0, the unified jointly typical set U n [XY ]η with respect to pXY (x, y) is the set of sequences (x, y) ∈X n × Yn such that D(qx,y∥pXY ) + |H(qx,y) −H(pXY )| +|H(qx) −H(pX)| + |H(qy) −H(pY )| ≤η. Show that if (x, y) ∈U n [XY ]η, then x ∈U n [X]η and y ∈U n [Y ]η. Historical Notes Strong typicality was used by Wolfowitz for proving channel coding the-orems and by Berger for proving the rate-distortion theorem and various Historical Notes 135 results in multiterminal source coding. The method of types, a refinement of the notion of strong typicality, was systematically developed in the book by Csisz´ ar and K¨ orner . The interpretation of the basic inequalities in Section 6.4 is a preamble to the relation between entropy and groups to be discussed in Chapter 16. Recently, Ho and Yeung introduced the notion of unified typicality which is stronger than both weak typicality and strong typicality. This notion of typicality can be applied to random variables with countable alphabets, while at the same time preserve the essential properties of strong typicality. See Problems 9, 10, and 11 for a discussion. 
7 Discrete Memoryless Channels

In all practical communication systems, when a signal is transmitted from one point to another point, the signal is inevitably contaminated by random noise, i.e., the signal received is correlated with but possibly different from the signal transmitted. We use a noisy channel to model such a situation. A noisy channel is a "system" which has one input terminal and one output terminal (the discussion on noisy channels here is confined to point-to-point channels), with the input connected to the transmission point and the output connected to the receiving point. When the signal is transmitted through the channel, it is distorted in a random way which depends on the channel characteristics. As a consequence, the signal received may be different from the signal transmitted. In communication engineering, we are interested in conveying messages reliably through a noisy channel at the maximum possible rate.

We first look at a simple channel called the binary symmetric channel (BSC), which is represented by the transition diagram in Figure 7.1. In this channel both the input X and the output Y take values in the set {0, 1}. There is a certain probability, denoted by ϵ, that the output is not equal to the input. That is, if the input is 0, then the output is 0 with probability 1 − ϵ, and is 1 with probability ϵ. Likewise, if the input is 1, then the output is 1 with probability 1 − ϵ, and is 0 with probability ϵ. The parameter ϵ is called the crossover probability of the BSC.

[Fig. 7.1. The transition diagram of a binary symmetric channel.]

Let {A, B} be the message set which contains two possible messages to be conveyed through a BSC with 0 ≤ ϵ < 0.5. We further assume that the two messages A and B are equally likely. If the message is A, we map it to the codeword 0, and if the message is B, we map it to the codeword 1. This is the simplest example of a channel code. The codeword is then transmitted through the channel. Our task is to decode the message based on the output of the channel, and an error is said to occur if the message is decoded incorrectly. Consider

  Pr{A|Y = 0} = Pr{X = 0|Y = 0}    (7.1)
              = Pr{X = 0} Pr{Y = 0|X = 0} / Pr{Y = 0}    (7.2)
              = 0.5(1 − ϵ) / Pr{Y = 0}.    (7.3)

Since

  Pr{Y = 0} = Pr{Y = 1} = 0.5    (7.4)

by symmetry (more explicitly, Pr{Y = 0} = Pr{A} Pr{Y = 0|A} + Pr{B} Pr{Y = 0|B} = 0.5 Pr{Y = 0|X = 0} + 0.5 Pr{Y = 0|X = 1} = 0.5(1 − ϵ) + 0.5ϵ = 0.5), it follows that

  Pr{A|Y = 0} = 1 − ϵ    (7.5)

and

  Pr{B|Y = 0} = 1 − Pr{A|Y = 0} = ϵ.    (7.6)

Since ϵ < 0.5,

  Pr{B|Y = 0} < Pr{A|Y = 0}.    (7.7)

Therefore, in order to minimize the probability of error, we decode a received 0 to the message A. By symmetry, we decode a received 1 to the message B. An error occurs if a 0 is received and the message is B, or if a 1 is received and the message is A. Therefore, the probability of error, denoted by Pe, is given by

  Pe = Pr{Y = 0} Pr{B|Y = 0} + Pr{Y = 1} Pr{A|Y = 1}    (7.8)
     = 0.5ϵ + 0.5ϵ    (7.9)
     = ϵ,    (7.10)

where (7.9) follows from (7.6) because

  Pr{A|Y = 1} = Pr{B|Y = 0} = ϵ    (7.11)

by symmetry.

Let us assume that ϵ ≠ 0. Then the above scheme obviously does not provide perfectly reliable communication. If we are allowed to use the channel only once, then this is already the best we can do. However, if we are allowed to use the same channel repeatedly, then we can improve the reliability by generalizing the above scheme.
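As a numerical check of (7.1) through (7.11), here is a minimal Monte Carlo sketch in Python (an illustration added here; the parameter values are arbitrary). It simulates the two-message scheme over a BSC and estimates the probability of error, which should be close to Pe = ϵ.

```python
import random

def bsc(x, eps, rng):
    """One use of a BSC with crossover probability eps: flip x with prob. eps."""
    return x ^ (rng.random() < eps)

rng = random.Random(7)
eps, trials, errors = 0.1, 100_000, 0
for _ in range(trials):
    message = rng.choice("AB")           # A and B are equally likely
    x = 0 if message == "A" else 1       # encoder: A -> 0, B -> 1
    y = bsc(x, eps, rng)                 # one use of the channel
    decoded = "A" if y == 0 else "B"     # the decoding rule derived from (7.7)
    errors += decoded != message
print(errors / trials)                   # close to eps = 0.1, as in (7.10)
```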
We now consider the following channel code which we refer to as the binary repetition code. Let n ≥ 1 be an odd positive integer which is called the block length of the code. In this code, the message A is mapped to the sequence of n 0's, and the message B is mapped to the sequence of n 1's. The codeword, which consists of a sequence of either n 0's or n 1's, is transmitted through the channel in n uses. Upon receiving a sequence of n bits at the output of the channel, we use the majority vote to decode the message, i.e., if there are more 0's than 1's in the sequence, we decode the sequence to the message A, otherwise we decode the sequence to the message B. Note that the block length is chosen to be odd so that there cannot be a tie. When n = 1, this scheme reduces to the previous scheme.

For this more general scheme, we continue to denote the probability of error by Pe. Let N0 and N1 be the number of 0's and 1's in the received sequence, respectively. Clearly,

  N0 + N1 = n.    (7.12)

For large n, if the message is A, the number of 0's received is approximately equal to

  E[N0|A] = n(1 − ϵ)    (7.13)

and the number of 1's received is approximately equal to

  E[N1|A] = nϵ    (7.14)

with high probability by the weak law of large numbers. This implies that the probability of an error, namely the event {N0 < N1}, is small because

  n(1 − ϵ) > nϵ    (7.15)

with the assumption that ϵ < 0.5. Specifically,

  Pr{error|A} = Pr{N0 < N1|A}    (7.16)
              = Pr{n − N1 < N1|A}    (7.17)
              = Pr{N1 > 0.5n|A}    (7.18)
              ≤ Pr{N1 > (ϵ + φ)n|A},    (7.19)

where

  0 < φ < 0.5 − ϵ,    (7.20)

so that φ is positive and

  ϵ + φ < 0.5.    (7.21)

Note that such a φ exists because ϵ < 0.5. Then by the weak law of large numbers, the upper bound in (7.19) tends to 0 as n → ∞. By symmetry, Pr{error|B} also tends to 0 as n → ∞. Therefore,

  Pe = Pr{A} Pr{error|A} + Pr{B} Pr{error|B}    (7.22)

tends to 0 as n → ∞. In other words, by using a long enough repetition code, we can make Pe arbitrarily small. In this sense, we say that reliable communication is achieved asymptotically.

We point out that for a BSC with ϵ > 0, for any given transmitted sequence of length n, the probability of receiving any given sequence of length n is nonzero. It follows that for any two distinct input sequences, there is always a nonzero probability that the same output sequence is produced, so that the two input sequences become indistinguishable. Therefore, except for very special channels (e.g., the BSC with ϵ = 0), no matter how the encoding/decoding scheme is devised, a nonzero probability of error is inevitable, and asymptotically reliable communication is the best we can hope for.

Though a rather naive approach, asymptotically reliable communication can be achieved by using the repetition code. The repetition code, however, is not without a catch. For a channel code, the rate of the code in bit(s) per use is defined as the ratio of the logarithm of the size of the message set in the base 2 to the block length of the code. Roughly speaking, the rate of a channel code is the average number of bits the channel code attempts to convey through the channel per use of the channel. For a binary repetition code with block length n, the rate is (1/n) log 2 = 1/n, which tends to 0 as n → ∞. Thus in order to achieve asymptotic reliability by using the repetition code, we cannot communicate through the noisy channel at any positive rate!
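The trade-off just described can be observed numerically. The sketch below (an added illustration with arbitrary parameter values) estimates Pe for the binary repetition code with majority-vote decoding: as the odd block length n grows, Pe decays while the rate 1/n vanishes. By the symmetry noted above, it suffices to condition on the message A, i.e., to transmit the all-zero codeword.

```python
import random

def repetition_pe(n, eps, trials, rng):
    """Estimate Pe for the length-n binary repetition code over a BSC(eps).

    By symmetry, it suffices to send the all-zero codeword (message A);
    a decoding error occurs exactly when N1 > n/2, as in (7.16)-(7.18).
    """
    errors = 0
    for _ in range(trials):
        n1 = sum(rng.random() < eps for _ in range(n))  # number of crossovers
        errors += n1 > n / 2                            # majority vote fails
    return errors / trials

rng = random.Random(1)
for n in (1, 3, 11, 41):                                # odd block lengths
    pe = repetition_pe(n, eps=0.1, trials=20_000, rng=rng)
    print(f"n = {n:2d}   rate = {1 / n:.3f}   Pe ~ {pe:.4f}")
```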
In this chapter, we characterize the maximum rate at which information can be communicated through a discrete memoryless channel (DMC) with an arbitrarily small probability of error. This maximum rate, which is generally positive, is known as the channel capacity. Then we discuss the use of feedback in communicating through a DMC, and show that feedback does not increase the capacity. At the end of the chapter, we discuss transmitting an information source through a DMC, and we show that asymptotic optimality can be achieved by separating source coding and channel coding.

7.1 Definition and Capacity

Definition 7.1. Let 𝒳 and 𝒴 be discrete alphabets, and p(y|x) be a transition matrix from 𝒳 to 𝒴. A discrete channel p(y|x) is a single-input single-output system with input random variable X taking values in 𝒳 and output random variable Y taking values in 𝒴 such that

  Pr{X = x, Y = y} = Pr{X = x} p(y|x)    (7.23)

for all (x, y) ∈ 𝒳 × 𝒴.

Remark. From (7.23), we see that if Pr{X = x} > 0, then

  Pr{Y = y|X = x} = Pr{X = x, Y = y} / Pr{X = x} = p(y|x).    (7.24)

Note that Pr{Y = y|X = x} is undefined if Pr{X = x} = 0. Nevertheless, (7.23) is valid for both cases.

We now present an alternative description of a discrete channel. Let 𝒳 and 𝒴 be discrete alphabets. Let X be a random variable taking values in 𝒳 and p(y|x) be any transition matrix from 𝒳 to 𝒴. For each x ∈ 𝒳, define a random variable Z_x taking values in 𝒴 such that

  Pr{Z_x = y} = p(y|x)    (7.25)

for all y ∈ 𝒴. We assume that Z_x, x ∈ 𝒳 are mutually independent and also independent of X. Further define the random variable

  Z = (Z_x : x ∈ 𝒳),    (7.26)

called the noise variable. Note that Z is independent of X. Now define a random variable taking values in 𝒴 as

  Y = Z_x if X = x.    (7.27)

Evidently, Y is a function of X and Z. Then for x ∈ 𝒳 such that Pr{X = x} > 0, we have

  Pr{X = x, Y = y} = Pr{X = x} Pr{Y = y|X = x}    (7.28)
                   = Pr{X = x} Pr{Z_x = y|X = x}    (7.29)
                   = Pr{X = x} Pr{Z_x = y}    (7.30)
                   = Pr{X = x} p(y|x),    (7.31)

i.e., (7.23) in Definition 7.1, where (7.30) follows from the assumption that Z_x is independent of X. For x ∈ 𝒳 such that Pr{X = x} = 0, since Pr{X = x} = 0 implies Pr{X = x, Y = y} = 0, (7.23) continues to hold. Then by regarding X and Y as the input and output random variables, we have obtained an alternative description of the discrete channel p(y|x). Since Y is a function of X and Z, we can write

  Y = α(X, Z).    (7.32)

Then we have the following equivalent definition for a discrete channel.

[Fig. 7.2. Illustrations of (a) a discrete channel p(y|x) and (b) a discrete channel (α, Z).]

Definition 7.2. Let 𝒳, 𝒴, and 𝒵 be discrete alphabets. Let α : 𝒳 × 𝒵 → 𝒴, and Z be a random variable taking values in 𝒵, called the noise variable. A discrete channel (α, Z) is a single-input single-output system with input alphabet 𝒳 and output alphabet 𝒴. For any input random variable X, the noise variable Z is independent of X, and the output random variable Y is given by

  Y = α(X, Z).    (7.33)

Figure 7.2 illustrates a discrete channel p(y|x) and a discrete channel (α, Z). The next definition gives the condition for the equivalence of the two specifications of a discrete channel according to Definitions 7.1 and 7.2, respectively.

Definition 7.3. Two discrete channels p(y|x) and (α, Z) defined on the same input alphabet 𝒳 and output alphabet 𝒴 are equivalent if

  Pr{α(x, Z) = y} = p(y|x)    (7.34)

for all x and y.
We point out that the qualifier "discrete" in a discrete channel refers to the input and output alphabets of the channel being discrete. As part of a discrete-time communication system, a discrete channel can be used repeatedly at every time index i = 1, 2, · · ·. As the simplest model, we may assume that the noises for the transmissions over the channel at different time indices are independent of each other. In the next definition, we will introduce the discrete memoryless channel (DMC) as a discrete-time extension of a discrete channel that captures this modeling assumption.

To properly formulate a DMC, we regard it as a subsystem of a discrete-time stochastic system which will be referred to as "the system" in the sequel. In such a system, random variables are generated sequentially in discrete time, and more than one random variable may be generated instantaneously but sequentially at a particular time index.

[Fig. 7.3. An illustration of a discrete memoryless channel p(y|x).]

Definition 7.4. A discrete memoryless channel (DMC) p(y|x) is a sequence of replicates of a generic discrete channel p(y|x). These discrete channels are indexed by a discrete-time index i, where i ≥ 1, with the ith channel being available for transmission at time i. Transmission through a channel is assumed to be instantaneous. Let Xi and Yi be respectively the input and the output of the DMC at time i, and let T_{i−} denote all the random variables that are generated in the system before Xi. The equality

  Pr{Yi = y, Xi = x, T_{i−} = t} = Pr{Xi = x, T_{i−} = t} p(y|x)    (7.35)

holds for all (x, y, t) ∈ 𝒳 × 𝒴 × 𝒯_{i−}.

Remark. Similar to the remark following Definition 7.1, if Pr{Xi = x, T_{i−} = t} > 0, then

  Pr{Yi = y|Xi = x, T_{i−} = t} = Pr{Yi = y, Xi = x, T_{i−} = t} / Pr{Xi = x, T_{i−} = t}    (7.36)
                                = p(y|x).    (7.37)

Note that Pr{Yi = y|Xi = x, T_{i−} = t} is undefined if Pr{Xi = x, T_{i−} = t} = 0. Nevertheless, (7.35) is valid for both cases.

Invoking Proposition 2.5, we see from (7.35) that

  T_{i−} → Xi → Yi    (7.38)

forms a Markov chain, i.e., the output of the DMC at time i is independent of all the random variables that have already been generated in the system conditioning on the input at time i. This captures the memorylessness of a DMC. Figure 7.3 is an illustration of a DMC p(y|x).

Paralleling Definition 7.2 for a discrete channel, we now present an alternative definition of a DMC.

Definition 7.5. A discrete memoryless channel (α, Z) is a sequence of replicates of a generic discrete channel (α, Z). These discrete channels are indexed by a discrete-time index i, where i ≥ 1, with the ith channel being available for transmission at time i. Transmission through a channel is assumed to be instantaneous. Let Xi and Yi be respectively the input and the output of the DMC at time i, and let T_{i−} denote all the random variables that are generated in the system before Xi. The noise variable Zi for the transmission at time i is a copy of the generic noise variable Z, and is independent of (Xi, T_{i−}). The output of the DMC at time i is given by

  Yi = α(Xi, Zi).    (7.39)

[Fig. 7.4. An illustration of a discrete memoryless channel (α, Z).]

Figure 7.4 is an illustration of a DMC (α, Z). We now show that Definitions 7.4 and 7.5 specify the same DMC provided that the generic discrete channel p(y|x) in Definition 7.4 is equivalent to the generic discrete channel (α, Z) in Definition 7.5, i.e., (7.34) holds.
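Before the formal argument, the equivalence condition (7.34) can be illustrated numerically. The following sketch (an added illustration; the erasure-channel choice and the value of γ are ours) represents a binary erasure channel, formally introduced in Example 7.8 below, in the form (α, Z) of Definition 7.2. Here a single binary noise variable Z indicating an erasure suffices, and the empirical frequencies of α(x, Z) should agree with the transition matrix p(y|x).

```python
import random
from collections import Counter

GAMMA = 0.25  # erasure probability (an arbitrary example value)

def alpha(x, z):
    """alpha(x, z): erase the input when the noise bit z is 1, else pass x."""
    return "e" if z == 1 else x

def sample_noise(rng):
    """A copy of the generic noise variable Z ~ Bernoulli(GAMMA)."""
    return 1 if rng.random() < GAMMA else 0

rng = random.Random(0)
trials = 200_000
for x in (0, 1):
    counts = Counter(alpha(x, sample_noise(rng)) for _ in range(trials))
    est = {y: counts[y] / trials for y in (0, 1, "e")}
    # Expect est ~ p(y|x): probability 1 - GAMMA on y = x, GAMMA on y = 'e'.
    print(x, est)
```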
For the DMC (α, Z) in Definition 7.5, consider

  0 ≤ I(T_{i−}; Yi|Xi)    (7.40)
    ≤ I(T_{i−}; Yi, Xi, Zi|Xi)    (7.41)
    = I(T_{i−}; Xi, Zi|Xi)    (7.42)
    = I(T_{i−}; Zi|Xi)    (7.43)
    = 0,    (7.44)

where the first equality follows from (7.39) and the last equality follows from the assumption that Zi is independent of (Xi, T_{i−}). Therefore,

  I(T_{i−}; Yi|Xi) = 0,    (7.45)

or T_{i−} → Xi → Yi forms a Markov chain.

It remains to establish (7.35) for all (x, y, t) ∈ 𝒳 × 𝒴 × 𝒯_{i−}. For x ∈ 𝒳 such that Pr{Xi = x} = 0, both Pr{Yi = y, Xi = x, T_{i−} = t} and Pr{Xi = x, T_{i−} = t} vanish because they are upper bounded by Pr{Xi = x}. Therefore (7.35) holds. For x ∈ 𝒳 such that Pr{Xi = x} > 0,

  Pr{Yi = y, Xi = x, T_{i−} = t}
    a) = Pr{Xi = x, T_{i−} = t} Pr{Yi = y|Xi = x}    (7.46)
    b) = Pr{Xi = x, T_{i−} = t} Pr{α(Xi, Zi) = y|Xi = x}    (7.47)
       = Pr{Xi = x, T_{i−} = t} Pr{α(x, Zi) = y|Xi = x}    (7.48)
    c) = Pr{Xi = x, T_{i−} = t} Pr{α(x, Zi) = y}    (7.49)
    d) = Pr{Xi = x, T_{i−} = t} Pr{α(x, Z) = y}    (7.50)
    e) = Pr{Xi = x, T_{i−} = t} p(y|x),    (7.51)

where a) follows from the Markov chain T_{i−} → Xi → Yi; b) follows from (7.39); c) follows from Definition 7.5 that Zi is independent of Xi; d) follows from Definition 7.5 that Zi and the generic noise variable Z have the same distribution; e) follows from (7.34). Hence, (7.35) holds for all (x, y, t) ∈ 𝒳 × 𝒴 × 𝒯_{i−}, proving that the DMC (α, Z) in Definition 7.5 is equivalent to the DMC p(y|x) in Definition 7.4.

Definition 7.5 renders the following physical conceptualization of a DMC. The DMC can be regarded as a "box" which has only two terminals, the input and the output. The box perfectly shields its contents from the rest of the system. At time i, upon the transmission of the input Xi, the noise variable Zi is generated inside the box according to the distribution of the generic noise variable Z. Since the box is perfectly shielded, the generation of Zi is independent of Xi and any other random variable that has already been generated in the system. Then the function α is applied to (Xi, Zi) to produce the output Yi.

In the rest of the section, we will define the capacity of a DMC and discuss some of its basic properties. The capacities of two simple DMCs will also be evaluated explicitly. To keep our discussion simple, we will assume that the alphabets 𝒳 and 𝒴 are finite.

Definition 7.6. The capacity of a discrete memoryless channel p(y|x) is defined as

  C = max_{p(x)} I(X; Y),    (7.52)

where X and Y are respectively the input and the output of the generic discrete channel, and the maximum is taken over all input distributions p(x).

From the above definition, we see that

  C ≥ 0    (7.53)

because

  I(X; Y) ≥ 0    (7.54)

for all input distributions p(x). By Theorem 2.43, we have

  C = max_{p(x)} I(X; Y) ≤ max_{p(x)} H(X) = log |𝒳|.    (7.55)

Likewise, we have

  C ≤ log |𝒴|.    (7.56)

Therefore,

  C ≤ min(log |𝒳|, log |𝒴|).    (7.57)

Since I(X; Y) is a continuous functional of p(x) and the set of all p(x) is a compact set (i.e., closed and bounded) in ℜ^|𝒳|, the maximum value of I(X; Y) can be attained (the assumption that 𝒳 is finite is essential in this argument). This justifies taking the maximum rather than the supremum in the definition of channel capacity in (7.52).

We will prove subsequently that C is in fact the maximum rate at which information can be communicated reliably through a DMC. We first give some examples of DMCs for which the capacities can be obtained in closed form.
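Before turning to those examples, note that for small alphabets the maximization in (7.52) can be carried out numerically. The following crude grid-search sketch (an added illustration; it is not the Blahut-Arimoto algorithm discussed in Chapter 9) evaluates I(X; Y) over input distributions of a binary-input channel and recovers the BSC capacity 1 − h_b(ϵ) derived in Example 7.7 below.

```python
import math

def mutual_information(px, pyx):
    """I(X; Y) in bits for input distribution px and transition matrix pyx.

    px[x] = Pr{X = x};  pyx[x][y] = p(y|x).
    """
    ny = len(pyx[0])
    py = [sum(px[x] * pyx[x][y] for x in range(len(px))) for y in range(ny)]
    mi = 0.0
    for x in range(len(px)):
        for y in range(ny):
            if px[x] > 0 and pyx[x][y] > 0:
                mi += px[x] * pyx[x][y] * math.log2(pyx[x][y] / py[y])
    return mi

eps = 0.1
bsc = [[1 - eps, eps], [eps, 1 - eps]]
# One-dimensional grid search over a = Pr{X = 0}; the maximizer is a = 0.5.
C = max(mutual_information([a, 1 - a], bsc) for a in
        (k / 1000 for k in range(1001)))
h_b = -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)
print(C, 1 - h_b)  # both approximately 0.531
```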
In the following, X and Y denote respectively the input and the output of the generic discrete channel, and all logarithms are in the base 2.

Example 7.7 (Binary Symmetric Channel). The transition diagram of a BSC has been shown in Figure 7.1. Alternatively, a BSC can be represented by the system in Figure 7.5. Here, Z is a binary random variable representing the noise of the channel, with

  Pr{Z = 0} = 1 − ϵ and Pr{Z = 1} = ϵ,    (7.58)

and Z is independent of X. Then

  Y = X + Z mod 2.    (7.59)

This representation for a BSC is in the form prescribed by Definition 7.2.

[Fig. 7.5. An alternative representation for a binary symmetric channel.]

In order to determine the capacity of the BSC, we first bound I(X; Y) as follows:

  I(X; Y) = H(Y) − H(Y|X)    (7.60)
          = H(Y) − Σ_x p(x) H(Y|X = x)    (7.61)
          = H(Y) − Σ_x p(x) h_b(ϵ)    (7.62)
          = H(Y) − h_b(ϵ)    (7.63)
          ≤ 1 − h_b(ϵ),    (7.64)

where we have used h_b to denote the binary entropy function in the base 2. In order to achieve this upper bound, we have to make H(Y) = 1, i.e., the output distribution of the BSC is uniform. This can be done by letting p(x) be the uniform distribution on {0, 1}. Therefore, the upper bound on I(X; Y) can be achieved, and we conclude that

  C = 1 − h_b(ϵ) bit per use.    (7.65)

[Fig. 7.6. The capacity of a binary symmetric channel.]

Figure 7.6 is a plot of the capacity C versus the crossover probability ϵ. We see from the plot that C attains the maximum value 1 when ϵ = 0 or ϵ = 1, and attains the minimum value 0 when ϵ = 0.5. When ϵ = 0, it is easy to see that C = 1 is the maximum rate at which information can be communicated through the channel reliably. This can be achieved simply by transmitting unencoded bits through the channel, and no decoding is necessary because all the bits are received unchanged. When ϵ = 1, the same can be achieved with the additional decoding step which complements all the received bits. By doing so, the bits transmitted through the channel can be recovered without error. Thus from the communication point of view, for binary channels, a channel which never makes errors and a channel which always makes errors are equally good. When ϵ = 0.5, the channel output is independent of the channel input. Therefore, no information can possibly be communicated through the channel.

Example 7.8 (Binary Erasure Channel). The transition diagram of a binary erasure channel is shown in Figure 7.7. In this channel, the input alphabet is {0, 1}, while the output alphabet is {0, 1, e}. With probability γ, the erasure symbol e is produced at the output, which means that the input bit is lost; otherwise the input bit is reproduced at the output without error. The parameter γ is called the erasure probability.

[Fig. 7.7. The transition diagram of a binary erasure channel.]

To determine the capacity of this channel, we first consider

  C = max_{p(x)} I(X; Y)    (7.66)
    = max_{p(x)} (H(Y) − H(Y|X))    (7.67)
    = max_{p(x)} H(Y) − h_b(γ).    (7.68)

Thus we only have to maximize H(Y). To this end, let

  Pr{X = 0} = a    (7.69)

and define a binary random variable E by

  E = 0 if Y ≠ e, and E = 1 if Y = e.    (7.70)

The random variable E indicates whether an erasure has occurred, and it is a function of Y. Then

  H(Y) = H(Y, E)    (7.71)
       = H(E) + H(Y|E)    (7.72)
       = h_b(γ) + (1 − γ) h_b(a).    (7.73)
Hence,

  C = max_{p(x)} H(Y) − h_b(γ)    (7.74)
    = max_a [h_b(γ) + (1 − γ) h_b(a)] − h_b(γ)    (7.75)
    = (1 − γ) max_a h_b(a)    (7.76)
    = (1 − γ) bit per use,    (7.77)

where the capacity is achieved by letting a = 0.5, i.e., the input distribution is uniform.

It is in general not possible to obtain the capacity of a DMC in closed form, and we have to resort to numerical computation. In Chapter 9 we will discuss the Blahut-Arimoto algorithm for computing the channel capacity.

7.2 The Channel Coding Theorem

We will justify the definition of the capacity of a DMC by proving the channel coding theorem. This theorem, which consists of two parts, will be formally stated at the end of the section. The direct part of the theorem asserts that information can be communicated through a DMC with an arbitrarily small probability of error at any rate less than the channel capacity. Here it is assumed that the decoder knows the transition matrix of the DMC. The converse part of the theorem asserts that if information is communicated through a DMC at a rate higher than the capacity, then the probability of error is bounded away from zero. For better appreciation of the definition of channel capacity, we will first prove the converse part in Section 7.3 and then prove the direct part in Section 7.4.

Definition 7.9. An (n, M) code for a discrete memoryless channel with input alphabet 𝒳 and output alphabet 𝒴 is defined by an encoding function

  f : {1, 2, · · · , M} → 𝒳^n    (7.78)

and a decoding function

  g : 𝒴^n → {1, 2, · · · , M}.    (7.79)

The set {1, 2, · · · , M}, denoted by 𝒲, is called the message set. The sequences f(1), f(2), · · · , f(M) in 𝒳^n are called codewords, and the set of codewords is called the codebook.

In order to distinguish a channel code as defined above from a channel code with feedback which will be discussed in Section 7.6, we will refer to the former as a channel code without feedback.

[Fig. 7.8. A channel code with block length n.]

We assume that a message W is randomly chosen from the message set 𝒲 according to the uniform distribution. Therefore,

  H(W) = log M.    (7.80)

With respect to a channel code for a DMC, we let

  X = (X1, X2, · · · , Xn)    (7.81)

and

  Y = (Y1, Y2, · · · , Yn)    (7.82)

be the input sequence and the output sequence of the channel, respectively. Evidently,

  X = f(W).    (7.83)

We also let

  Ŵ = g(Y)    (7.84)

be the estimate of the message W by the decoder. Figure 7.8 is the block diagram for a channel code.

Definition 7.10. For all 1 ≤ w ≤ M, let

  λ_w = Pr{Ŵ ≠ w|W = w} = Σ_{y ∈ 𝒴^n : g(y) ≠ w} Pr{Y = y|X = f(w)}    (7.85)

be the conditional probability of error given that the message is w.

We now define two performance measures for a channel code.

Definition 7.11. The maximal probability of error of an (n, M) code is defined as

  λ_max = max_w λ_w.    (7.86)

Definition 7.12. The average probability of error of an (n, M) code is defined as

  Pe = Pr{Ŵ ≠ W}.    (7.87)

From the definition of Pe, we have

  Pe = Pr{Ŵ ≠ W}    (7.88)
     = Σ_w Pr{W = w} Pr{Ŵ ≠ W|W = w}    (7.89)
     = Σ_w (1/M) Pr{Ŵ ≠ w|W = w}    (7.90)
     = (1/M) Σ_w λ_w,    (7.91)

i.e., Pe is the arithmetic mean of λ_w, 1 ≤ w ≤ M. It then follows that

  Pe ≤ λ_max.    (7.92)

In fact, it can be readily seen that this inequality remains valid even without the assumption that W is distributed uniformly on the message set 𝒲.

Definition 7.13. The rate of an (n, M) channel code is n^{−1} log M in bits per use.

Definition 7.14.
Definition 7.14. A rate R is asymptotically achievable for a discrete memoryless channel if for any ϵ > 0, there exists for sufficiently large n an (n, M) code such that

    (1/n) log M > R − ϵ (7.93)

and

    λ_max < ϵ. (7.94)

For brevity, an asymptotically achievable rate will be referred to as an achievable rate.

In other words, a rate R is achievable if there exists a sequence of codes whose rates approach R and whose probabilities of error approach zero. We end this section by stating the channel coding theorem, which gives a characterization of all achievable rates. This theorem will be proved in the next two sections.

Theorem 7.15 (Channel Coding Theorem). A rate R is achievable for a discrete memoryless channel if and only if R ≤ C, the capacity of the channel.

7.3 The Converse

Let us consider a channel code with block length n. The random variables involved in this code are W, Xi and Yi for 1 ≤ i ≤ n, and Ŵ. We see from the definition of a channel code in Definition 7.9 that all the random variables are generated sequentially according to some deterministic or probabilistic rules. Specifically, the random variables are generated in the order W, X1, Y1, X2, Y2, · · · , Xn, Yn, Ŵ. The generation of these random variables can be represented by the dependency graph in Figure 7.9. (A dependency graph can be regarded as a Bayesian network.)

    Fig. 7.9. The dependency graph for a channel code without feedback.

In this graph, a node represents a random variable. If there is a (directed) edge from node X to node Y, then node X is called a parent of node Y. We further distinguish a solid edge and a dotted edge: a solid edge represents functional (deterministic) dependency, while a dotted edge represents the probabilistic dependency induced by the transition matrix p(y|x) of the generic discrete channel. For a node X, its parent nodes represent all the random variables on which random variable X depends when it is generated.

We now explain the specific structure of the dependency graph. First, Xi is a function of W, so each Xi is connected to W by a solid edge. According to Definition 7.4,

    T_{i−} = (W, X1, Y1, · · · , X_{i−1}, Y_{i−1}). (7.95)

By (7.35), the Markov chain

    (W, X1, Y1, · · · , X_{i−1}, Y_{i−1}) → Xi → Yi (7.96)

prevails. Therefore, the generation of Yi depends only on Xi and not on W, X1, Y1, · · · , X_{i−1}, Y_{i−1}. So, Yi is connected to Xi by a dotted edge representing the discrete channel p(y|x) at time i, and there is no connection between Yi and any of the nodes W, X1, Y1, · · · , X_{i−1}, Y_{i−1}. Finally, Ŵ is a function of Y1, Y2, · · · , Yn, so Ŵ is connected to each Yi by a solid edge.

We will use q to denote the joint distribution of these random variables as well as all the marginals, and let xi denote the ith component of a sequence x. From the dependency graph, we see that for all (w, x, y, ŵ) ∈ W × X^n × Y^n × W such that q(x) > 0 and q(y) > 0,

    q(w, x, y, ŵ) = q(w) (∏_{i=1}^n q(xi|w)) (∏_{i=1}^n p(yi|xi)) q(ŵ|y). (7.97)

Note that q(w) > 0 for all w so that q(xi|w) are well-defined, and q(xi|w) and q(ŵ|y) are both deterministic. Denote the set of nodes X1, X2, · · · , Xn by X and the set of nodes Y1, Y2, · · · , Yn by Y. We notice the following structure in the dependency graph: all the edges from W end in X, all the edges from X end in Y, and all the edges from Y end in Ŵ. This suggests that the random variables W, X, Y, and Ŵ form the Markov chain

    W → X → Y → Ŵ. (7.98)
The validity of this Markov chain can be formally justified by applying Proposition 2.9 to (7.97), so that for all (w, x, y, ŵ) ∈ W × X^n × Y^n × W such that q(x) > 0 and q(y) > 0, we can write

    q(w, x, y, ŵ) = q(w) q(x|w) q(y|x) q(ŵ|y). (7.99)

Now q(x, y) is obtained by summing over all w and ŵ in (7.97), and q(x) is obtained by further summing over all y. After some straightforward algebra and using

    q(y|x) = q(x, y)/q(x) (7.100)

for all x such that q(x) > 0, we obtain

    q(y|x) = ∏_{i=1}^n p(yi|xi). (7.101)

The Markov chain in (7.98) and the relation in (7.101) are apparent from the setup of the problem, and the above justification may seem superfluous. However, the methodology developed here is necessary for handling the more delicate situation which arises when the channel is used with feedback. This will be discussed in Section 7.6.

Consider a channel code whose probability of error is arbitrarily small. Since W, X, Y, and Ŵ form the Markov chain in (7.98), the information diagram for these four random variables is as shown in Figure 7.10.

    Fig. 7.10. The information diagram for W → X → Y → Ŵ.

Moreover, X is a function of W, and Ŵ is a function of Y. These two relations are equivalent to

    H(X|W) = 0 (7.102)

and

    H(Ŵ|Y) = 0, (7.103)

respectively. Since the probability of error is arbitrarily small, W and Ŵ are essentially identical. To gain insight into the problem, we assume for the time being that W and Ŵ are equivalent, so that

    H(Ŵ|W) = H(W|Ŵ) = 0. (7.104)

Since the I-Measure µ* for a Markov chain is nonnegative, the constraints in (7.102) to (7.104) imply that µ* vanishes on all the atoms in Figure 7.10 marked with a '0.' Immediately, we see that

    H(W) = I(X; Y). (7.105)

That is, the amount of information conveyed through the channel is essentially the mutual information between the input sequence and the output sequence of the channel. For a single transmission, we see from the definition of channel capacity that the mutual information between the input and the output cannot exceed the capacity of the channel, i.e., for all 1 ≤ i ≤ n,

    I(Xi; Yi) ≤ C. (7.106)

Summing i from 1 to n, we have

    ∑_{i=1}^n I(Xi; Yi) ≤ nC. (7.107)

Upon establishing in the next lemma that

    I(X; Y) ≤ ∑_{i=1}^n I(Xi; Yi), (7.108)

the converse of the channel coding theorem then follows from

    (1/n) log M = (1/n) H(W) (7.109)
                = (1/n) I(X; Y) (7.110)
                ≤ (1/n) ∑_{i=1}^n I(Xi; Yi) (7.111)
                ≤ C. (7.112)

Lemma 7.16. For a discrete memoryless channel used with a channel code without feedback, for any n ≥ 1,

    I(X; Y) ≤ ∑_{i=1}^n I(Xi; Yi), (7.113)

where Xi and Yi are, respectively, the input and the output of the channel at time i.

Proof. For any (x, y) ∈ X^n × Y^n, if q(x, y) > 0, then q(x) > 0 and (7.101) holds. Therefore,

    q(Y|X) = ∏_{i=1}^n p(Yi|Xi) (7.114)

holds for all (x, y) in the support of q(x, y). Then

    −E log q(Y|X) = −E log ∏_{i=1}^n p(Yi|Xi) = −∑_{i=1}^n E log p(Yi|Xi), (7.115)

or

    H(Y|X) = ∑_{i=1}^n H(Yi|Xi). (7.116)

Hence,

    I(X; Y) = H(Y) − H(Y|X) (7.117)
            ≤ ∑_{i=1}^n H(Yi) − ∑_{i=1}^n H(Yi|Xi) (7.118)
            = ∑_{i=1}^n I(Xi; Yi). (7.119)

The lemma is proved. □
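Lemma 7.16 is easy to check numerically on a small example. The sketch below (our own construction, assuming a BSC used twice without feedback and a deliberately correlated input pair) computes I(X; Y) and ∑ I(Xi; Yi) by brute-force enumeration:

    import itertools, math

    eps = 0.1
    pX = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # correlated inputs

    def chan(y, x):
        # Two memoryless uses of the BSC, cf. (7.101).
        return math.prod((1 - eps) if yi == xi else eps for yi, xi in zip(y, x))

    pXY = {(x, y): pX[x] * chan(y, x)
           for x in pX for y in itertools.product((0, 1), repeat=2)}

    def I(pxy):
        """Mutual information in bits of a joint pmf given as {(x, y): p}."""
        px, py = {}, {}
        for (x, y), p in pxy.items():
            px[x] = px.get(x, 0) + p
            py[y] = py.get(y, 0) + p
        return sum(p * math.log2(p / (px[x] * py[y]))
                   for (x, y), p in pxy.items() if p > 0)

    # Per-letter joints (X1, Y1) and (X2, Y2).
    p1, p2 = {}, {}
    for (x, y), p in pXY.items():
        p1[(x[0], y[0])] = p1.get((x[0], y[0]), 0) + p
        p2[(x[1], y[1])] = p2.get((x[1], y[1]), 0) + p

    print(I(pXY), I(p1) + I(p2))   # I(X;Y) <= I(X1;Y1) + I(X2;Y2)

The first number never exceeds the second, in agreement with (7.113).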
We now formally prove the converse of the channel coding theorem. Let R be an achievable rate, i.e., for any ϵ > 0, there exists for sufficiently large n an (n, M) code such that

    (1/n) log M > R − ϵ (7.120)

and

    λ_max < ϵ. (7.121)

Consider

    log M a)= H(W) (7.122)
           = H(W|Ŵ) + I(W; Ŵ) (7.123)
          b)≤ H(W|Ŵ) + I(X; Y) (7.124)
          c)≤ H(W|Ŵ) + ∑_{i=1}^n I(Xi; Yi) (7.125)
          d)≤ H(W|Ŵ) + nC, (7.126)

where
a) follows from (7.80);
b) follows from the data processing theorem since W → X → Y → Ŵ;
c) follows from Lemma 7.16;
d) follows from (7.107).

From (7.87) and Fano's inequality (cf. Corollary 2.48), we have

    H(W|Ŵ) < 1 + Pe log M. (7.127)

Therefore, from (7.126),

    log M < 1 + Pe log M + nC (7.128)
          ≤ 1 + λ_max log M + nC (7.129)
          < 1 + ϵ log M + nC, (7.130)

where we have used (7.92) and (7.121), respectively, to obtain the last two inequalities. Dividing by n and rearranging the terms, we have

    (1/n) log M < (1/n + C)/(1 − ϵ), (7.131)

and from (7.120), we obtain

    R − ϵ < (1/n + C)/(1 − ϵ). (7.132)

For any ϵ > 0, the above inequality holds for all sufficiently large n. Letting n → ∞ and then ϵ → 0, we conclude that

    R ≤ C. (7.133)

This completes the proof for the converse of the channel coding theorem.

From the above proof, we can obtain an asymptotic bound on Pe when the rate of the code (1/n) log M is greater than C. Consider (7.128) and obtain

    Pe ≥ 1 − (1 + nC)/log M = 1 − (1/n + C)/((1/n) log M). (7.134)

Then

    Pe ≥ 1 − (1/n + C)/((1/n) log M) ≈ 1 − C/((1/n) log M) (7.135)

when n is large. This asymptotic bound on Pe, which is strictly positive if (1/n) log M > C, is illustrated in Figure 7.11.

    Fig. 7.11. An asymptotic lower bound on Pe.

In fact, the lower bound in (7.134) implies that Pe > 0 for all n if (1/n) log M > C, because if Pe^{(n0)} = 0 for some n0, then for all k ≥ 1, by concatenating k copies of the code, we obtain a code with the same rate and block length equal to kn0 such that Pe^{(kn0)} = 0, which is a contradiction to our conclusion that Pe > 0 when n is large. Therefore, if we use a code whose rate is greater than the channel capacity, the probability of error is non-zero for all block lengths.

The converse of the channel coding theorem we have proved is called the weak converse. A stronger version of this result called the strong converse can be proved, which says that Pe → 1 as n → ∞ if there exists an ϵ > 0 such that (1/n) log M ≥ C + ϵ for all n.
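The lower bound (7.134) is simple to evaluate. The following sketch (with hypothetical numbers of our choosing) computes the bound for a BSC with C = 0.5 bit per use and a code of rate R = 0.75:

    import math

    def hb(p):
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def pe_lower_bound(R, C, n):
        """Weak-converse bound (7.134): Pe >= 1 - (1/n + C)/R at rate R = (1/n) log M."""
        return max(0.0, 1 - (1 / n + C) / R)

    C = 1 - hb(0.11)                 # BSC with crossover 0.11: C = 0.5 bit/use
    for n in (10, 100, 1000):
        print(n, round(pe_lower_bound(R=0.75, C=C, n=n), 4))

As n grows, the bound approaches 1 − C/R = 1/3, consistent with the curve in Figure 7.11.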
7.4 Achievability

We have shown in the last section that the channel capacity C is an upper bound on all the achievable rates for a DMC. In this section, we show that the rate C is achievable, which implies that any rate R ≤ C is achievable.

Consider a DMC p(y|x), and denote the input and the output of the generic discrete channel by X and Y, respectively. For every input distribution p(x), we will prove that the rate I(X; Y) is achievable by showing for large n the existence of a channel code such that

1. the rate of the code is arbitrarily close to I(X; Y);
2. the maximal probability of error λ_max is arbitrarily small.

Then by choosing the input distribution p(x) to be one that achieves the channel capacity, i.e., I(X; Y) = C, we conclude that the rate C is achievable. Before we prove the achievability of the channel capacity, we first prove the following lemma.

Lemma 7.17. Let (X′, Y′) be n i.i.d. copies of a pair of generic random variables (X′, Y′), where X′ and Y′ are independent and have the same marginal distributions as X and Y, respectively. Then

    Pr{(X′, Y′) ∈ T^n_{[XY]δ}} ≤ 2^{−n(I(X;Y)−τ)}, (7.136)

where τ → 0 as δ → 0.

Proof. Consider

    Pr{(X′, Y′) ∈ T^n_{[XY]δ}} = ∑_{(x,y)∈T^n_{[XY]δ}} p(x) p(y). (7.137)

By the consistency of strong typicality, for (x, y) ∈ T^n_{[XY]δ}, x ∈ T^n_{[X]δ} and y ∈ T^n_{[Y]δ}. By the strong AEP, all the p(x) and p(y) in the above summation satisfy

    p(x) ≤ 2^{−n(H(X)−η)} (7.138)

and

    p(y) ≤ 2^{−n(H(Y)−ζ)}, (7.139)

where η, ζ → 0 as δ → 0. By the strong JAEP,

    |T^n_{[XY]δ}| ≤ 2^{n(H(X,Y)+ξ)}, (7.140)

where ξ → 0 as δ → 0. Then from (7.137), we have

    Pr{(X′, Y′) ∈ T^n_{[XY]δ}}
        ≤ 2^{n(H(X,Y)+ξ)} · 2^{−n(H(X)−η)} · 2^{−n(H(Y)−ζ)} (7.141)
        = 2^{−n(H(X)+H(Y)−H(X,Y)−ξ−η−ζ)} (7.142)
        = 2^{−n(I(X;Y)−ξ−η−ζ)} (7.143)
        = 2^{−n(I(X;Y)−τ)}, (7.144)

where

    τ = ξ + η + ζ → 0 (7.145)

as δ → 0. The lemma is proved. □

Fix any ϵ > 0 and let δ be a small positive quantity to be specified later. Toward proving the existence of a desired code, we fix an input distribution p(x) for the generic discrete channel p(y|x), and let M be an even integer satisfying

    I(X; Y) − ϵ/2 < (1/n) log M < I(X; Y) − ϵ/4, (7.146)

where n is sufficiently large. We now describe a random coding scheme in the following steps:

1. Construct the codebook C of an (n, M) code randomly by generating M codewords in X^n independently and identically according to p(x)^n. Denote these codewords by X̃(1), X̃(2), · · · , X̃(M).
2. Reveal the codebook C to both the encoder and the decoder.
3. A message W is chosen from W according to the uniform distribution.
4. The sequence X = X̃(W), namely the Wth codeword in the codebook C, is transmitted through the channel.
5. The channel outputs a sequence Y according to

    Pr{Y = y | X̃(W) = x} = ∏_{i=1}^n p(yi|xi) (7.147)

(cf. (7.101)).
6. The sequence Y is decoded to the message w if (X̃(w), Y) ∈ T^n_{[XY]δ} and there does not exist w′ ≠ w such that (X̃(w′), Y) ∈ T^n_{[XY]δ}. Otherwise, Y is decoded to a constant message in W. Denote by Ŵ the message to which Y is decoded.

Remark 1. There are a total of |X|^{Mn} possible codebooks which can be constructed in Step 1 of the random coding scheme, where we regard two codebooks whose sets of codewords are permutations of each other as two different codebooks.

Remark 2. Strong typicality is used in defining the decoding function in Step 6. This is made possible by the assumption that the alphabets X and Y are finite.
We now analyze the performance of this random coding scheme. Let

    Err = {Ŵ ≠ W} (7.148)

be the event of a decoding error. In the following, we analyze Pr{Err}, the probability of a decoding error for the random code constructed above. For all 1 ≤ w ≤ M, define the event

    E_w = {(X̃(w), Y) ∈ T^n_{[XY]δ}}. (7.149)

Now

    Pr{Err} = ∑_{w=1}^M Pr{Err|W = w} Pr{W = w}. (7.150)

Since Pr{Err|W = w} are identical for all w by symmetry in the code construction, we have

    Pr{Err} = Pr{Err|W = 1} ∑_{w=1}^M Pr{W = w} (7.151)
            = Pr{Err|W = 1}, (7.152)

i.e., we can assume without loss of generality that the message 1 is chosen. Then decoding is correct if the received sequence Y is decoded to the message 1. This is the case if E_1 occurs but E_w does not occur for all 2 ≤ w ≤ M. It follows that

    Pr{Err^c|W = 1} ≥ Pr{E_1 ∩ E_2^c ∩ E_3^c ∩ · · · ∩ E_M^c | W = 1}, (7.153)

which implies

    Pr{Err|W = 1} = 1 − Pr{Err^c|W = 1} (7.154)
                  ≤ 1 − Pr{E_1 ∩ E_2^c ∩ E_3^c ∩ · · · ∩ E_M^c | W = 1} (7.155)
                  = Pr{(E_1 ∩ E_2^c ∩ E_3^c ∩ · · · ∩ E_M^c)^c | W = 1} (7.156)
                  = Pr{E_1^c ∪ E_2 ∪ E_3 ∪ · · · ∪ E_M | W = 1}. (7.157)

(Note that the inequality in (7.153) is not an equality in general: if E_1 does not occur or E_w occurs for some 1 ≤ w ≤ M, the received sequence Y is decoded to the constant message, which may happen to be the message 1.) By the union bound, we have

    Pr{Err|W = 1} ≤ Pr{E_1^c|W = 1} + ∑_{w=2}^M Pr{E_w|W = 1}. (7.158)

First, conditioning on {W = 1}, (X̃(1), Y) are n i.i.d. copies of the pair of generic random variables (X, Y). By the strong JAEP, for any ν > 0,

    Pr{E_1^c|W = 1} = Pr{(X̃(1), Y) ∉ T^n_{[XY]δ} | W = 1} < ν (7.159)

for sufficiently large n. This gives an upper bound on the first term on the right hand side of (7.158).

Second, conditioning on {W = 1}, for 2 ≤ w ≤ M, (X̃(w), Y) are n i.i.d. copies of the pair of generic random variables (X′, Y′), where X′ and Y′ have the same marginal distributions as X and Y, respectively. Furthermore, from the random coding scheme and the memorylessness of the DMC, it is intuitively correct that X′ and Y′ are independent because X̃(1) and X̃(w) are independent and the generation of Y depends only on X̃(1). A formal proof of this claim requires a more detailed analysis. In our random coding scheme, the random variables are generated in the order X̃(1), X̃(2), · · · , X̃(M), W, X1, Y1, X2, Y2, · · · , Xn, Yn, Ŵ. By considering the joint distribution of these random variables, similar to the discussion in Section 7.3, the Markov chain

    (X̃(1), X̃(2), · · · , X̃(M), W) → X → Y → Ŵ (7.160)

can be established. See Problem 1 for the details. Then for any 2 ≤ w ≤ M, from the above Markov chain, we have

    I(Y; X̃(w), W | X) = 0. (7.161)

By the chain rule for mutual information, the left hand side can be written as

    I(Y; W|X) + I(Y; X̃(w) | X, W). (7.162)

By the nonnegativity of conditional mutual information, this implies

    I(Y; X̃(w) | X, W) = 0, (7.163)

or

    ∑_{w=1}^M Pr{W = w} I(Y; X̃(w) | X, W = w) = 0. (7.164)

Since I(Y; X̃(w) | X, W = w) are all nonnegative, we see from the above that they must all vanish. In particular,

    I(Y; X̃(w) | X, W = 1) = 0. (7.165)

Then

    I(Y; X̃(w) | X̃(1), W = 1) = I(Y; X̃(w) | X̃(W), W = 1) (7.166)
                              = I(Y; X̃(w) | X, W = 1) (7.167)
                              = 0. (7.168)

On the other hand, since X̃(1), X̃(w), and W are mutually independent, we have

    I(X̃(1); X̃(w) | W = 1) = 0. (7.169)

Hence,

    I(Y; X̃(w) | W = 1) ≤ I(X̃(1), Y; X̃(w) | W = 1) (7.170)
                        = I(X̃(1); X̃(w) | W = 1) + I(Y; X̃(w) | X̃(1), W = 1) (7.171)
                        = 0 + 0 (7.172)
                        = 0, (7.173)

where (7.172) follows from (7.168) and (7.169), proving the claim.

Let us now return to (7.158). For any 2 ≤ w ≤ M, it follows from the above claim and Lemma 7.17 that

    Pr{E_w|W = 1} = Pr{(X̃(w), Y) ∈ T^n_{[XY]δ} | W = 1} (7.174)
                  ≤ 2^{−n(I(X;Y)−τ)}, (7.175)

where τ → 0 as δ → 0. From the upper bound in (7.146), we have

    M < 2^{n(I(X;Y)−ϵ/4)}. (7.176)

Using (7.159), (7.175), and the above upper bound on M, it follows from (7.152) and (7.158) that

    Pr{Err} < ν + 2^{n(I(X;Y)−ϵ/4)} · 2^{−n(I(X;Y)−τ)} (7.177)
            = ν + 2^{−n(ϵ/4−τ)}. (7.178)

Since τ → 0 as δ → 0, for sufficiently small δ, we have

    ϵ/4 − τ > 0 (7.179)

for any ϵ > 0, so that 2^{−n(ϵ/4−τ)} → 0 as n → ∞. Then by letting ν < ϵ/3, it follows from (7.178) that

    Pr{Err} < ϵ/2 (7.180)

for sufficiently large n.

The main idea of the above analysis of Pr{Err} is the following. In constructing the codebook, we randomly generate M codewords in X^n according to p(x)^n, and one of the codewords is sent through the channel p(y|x). When n is large, with high probability, the received sequence is jointly typical with the codeword sent with respect to p(x, y). If the number of codewords M grows with n at a rate less than I(X; Y), then the probability that the received sequence is jointly typical with a codeword other than the one sent through the channel is negligible. Accordingly, the message can be decoded correctly with probability arbitrarily close to 1.
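The random coding argument can be imitated on a very small scale. In the Monte Carlo sketch below (our own, with minimum-distance decoding standing in for the joint-typicality decoder; for a BSC with ϵ < 0.5 it is equivalent to maximum likelihood decoding, cf. Problem 15), the error probability is visibly small for a rate below capacity and large for a rate above it. The block length is far too small for the asymptotics, so the numbers are only indicative:

    import random

    random.seed(0)
    n, eps = 16, 0.11                 # C = 1 - hb(0.11) = 0.5 bit per use

    def simulate(rate, trials=200):
        M = 2 ** int(rate * n)
        errors = 0
        for _ in range(trials):
            # Random codebook: M codewords, each an n-bit integer.
            book = [random.getrandbits(n) for _ in range(M)]
            noise = sum(1 << i for i in range(n) if random.random() < eps)
            y = book[0] ^ noise       # by symmetry, message 1 is sent
            # Minimum-distance decoding via Hamming weight of the XOR.
            best = min(range(M), key=lambda w: bin(book[w] ^ y).count("1"))
            errors += (best != 0)
        return errors / trials

    print("rate 0.25 (< C): Pr{Err} ~", simulate(0.25))
    print("rate 0.75 (> C): Pr{Err} ~", simulate(0.75))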
In constructing the codebook in Step 1 of the random coding scheme, we choose a codebook C with a certain probability Pr{C} from the ensemble of all possible codebooks. By conditioning on the codebook chosen, we have

    Pr{Err} = ∑_C Pr{C} Pr{Err|C}, (7.181)

i.e., Pr{Err} is a weighted average of Pr{Err|C} over all C in the ensemble of all possible codebooks, where Pr{Err|C} is the average probability of error of the code, i.e., Pe, when the codebook C is chosen (cf. Definition 7.12). The reader should compare the two different expansions of Pr{Err} in (7.181) and (7.150). Therefore, there exists at least one codebook C* such that

    Pr{Err|C*} ≤ Pr{Err} < ϵ/2. (7.182)

Thus we have shown that for any ϵ > 0, there exists for sufficiently large n an (n, M) code such that

    (1/n) log M > I(X; Y) − ϵ/2 (7.183)

(cf. (7.146)) and

    Pe < ϵ/2. (7.184)

We are still one step away from proving that the rate I(X; Y) is achievable because we require that λ_max instead of Pe is arbitrarily small. Toward this end, we write (7.184) as

    (1/M) ∑_{w=1}^M λ_w < ϵ/2, (7.185)

or

    ∑_{w=1}^M λ_w < (M/2) ϵ. (7.186)

Upon ordering the codewords according to their conditional probabilities of error, we observe that the conditional probabilities of error of the better half of the M codewords are less than ϵ; otherwise the conditional probabilities of error of the worse half of the codewords would be at least ϵ, and they would contribute at least (M/2)ϵ to the summation in (7.186), which is a contradiction. Thus by discarding the worse half of the codewords in C*, for the resulting codebook, the maximal probability of error λ_max is less than ϵ. Using (7.183) and considering

    (1/n) log(M/2) = (1/n) log M − 1/n (7.187)
                   > (I(X; Y) − ϵ/2) − 1/n (7.188)
                   > I(X; Y) − ϵ (7.189)

when n is sufficiently large, we see that the rate of the resulting code is greater than I(X; Y) − ϵ. Hence, we conclude that the rate I(X; Y) is achievable. Finally, upon letting the input distribution p(x) be one that achieves the channel capacity, i.e., I(X; Y) = C, we have proved that the rate C is achievable. This completes the proof of the direct part of the channel coding theorem.

7.5 A Discussion

In the last two sections, we have proved the channel coding theorem which asserts that reliable communication through a DMC at rate R is possible if and only if R < C, the channel capacity. By reliable communication at rate R, we mean that the size of the message set grows exponentially with n at rate R, while the message can be decoded correctly with probability arbitrarily close to 1 as n → ∞. Therefore, the capacity C is a fundamental characterization of a DMC.

The capacity of a noisy channel is analogous to the capacity of a water pipe in the following way. For a water pipe, if we pump water through the pipe at a rate higher than its capacity, the pipe would burst and water would be lost. For a communication channel, if we communicate through the channel at a rate higher than the capacity, the probability of error is bounded away from zero, i.e., information is lost.

In proving the direct part of the channel coding theorem, we showed that there exists a channel code whose rate is arbitrarily close to C and whose probability of error is arbitrarily close to zero. Moreover, the existence of such a code is guaranteed only when the block length n is large. However, the proof does not indicate how we can find such a codebook. For this reason, the proof we gave is called an existence proof (as opposed to a constructive proof).
For a fixed block length n, we in principle can search through the ensemble of all possible codebooks for a good one, but this is quite prohibitive even for small n because the number of all possible codebooks grows doubly exponentially with n. Specifically, the total number of all possible (n, M) codebooks is equal to |X|^{Mn}. When the rate of the code is close to C, M is approximately equal to 2^{nC}. Therefore, the number of codebooks we need to search through is about |X|^{n2^{nC}}.

Nevertheless, the proof of the direct part of the channel coding theorem does indicate that if we generate a codebook randomly as prescribed, the codebook is most likely to be good. More precisely, we now show that the probability of choosing a code C such that Pr{Err|C} is greater than any prescribed ψ > 0 is arbitrarily small when n is sufficiently large. Consider

    Pr{Err} = ∑_C Pr{C} Pr{Err|C} (7.190)
            = ∑_{C: Pr{Err|C} ≤ ψ} Pr{C} Pr{Err|C} + ∑_{C: Pr{Err|C} > ψ} Pr{C} Pr{Err|C} (7.191)
            ≥ ∑_{C: Pr{Err|C} > ψ} Pr{C} Pr{Err|C} (7.192)
            > ψ ∑_{C: Pr{Err|C} > ψ} Pr{C}, (7.193)

which implies

    ∑_{C: Pr{Err|C} > ψ} Pr{C} < Pr{Err}/ψ. (7.194)

From (7.182), we have

    Pr{Err} < ϵ/2 (7.195)

for any ϵ > 0 when n is sufficiently large. Then

    ∑_{C: Pr{Err|C} > ψ} Pr{C} < ϵ/(2ψ). (7.196)

Since ψ is fixed, this upper bound can be made arbitrarily small by choosing a sufficiently small ϵ.

Although the proof of the direct part of the channel coding theorem does not provide an explicit construction of a good code, it does give much insight into what a good code is like. Figure 7.12 is an illustration of a channel code that achieves the channel capacity.

    Fig. 7.12. A channel code that achieves capacity: about 2^{nI(X;Y)} codewords in T^n_{[X]}, each associated with about 2^{nH(Y|X)} sequences in T^n_{[Y]}.

Here we assume that the input distribution p(x) is one that achieves the channel capacity, i.e., I(X; Y) = C. The idea is that most of the codewords are typical sequences in X^n with respect to p(x). (For this reason, the repetition code is not a good code.) When such a codeword is transmitted through the channel, the received sequence is likely to be one of about 2^{nH(Y|X)} sequences in Y^n which are jointly typical with the transmitted codeword with respect to p(x, y). The association between a codeword and the about 2^{nH(Y|X)} corresponding sequences in Y^n is shown as a cone in the figure. As we require that the probability of decoding error is small, the cones essentially do not overlap with each other. Since the number of typical sequences with respect to p(y) is about 2^{nH(Y)}, the number of codewords cannot exceed about

    2^{nH(Y)}/2^{nH(Y|X)} = 2^{nI(X;Y)} = 2^{nC}. (7.197)

This is consistent with the converse of the channel coding theorem. The direct part of the channel coding theorem says that when n is large, as long as the number of codewords generated randomly is not more than about 2^{n(C−ϵ)}, the overlap among the cones is negligible with high probability.
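The counting in (7.197) can be made concrete for a BSC with the capacity-achieving (uniform) input, for which H(Y) = 1 and H(Y|X) = hb(ϵ). The short sketch below (our own illustration) evaluates the three exponents:

    import math

    def hb(p):
        return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # The counting behind Figure 7.12: about 2^{nH(Y)} typical output
    # sequences, about 2^{nH(Y|X)} of them jointly typical with each
    # codeword, hence at most 2^{nI(X;Y)} = 2^{nC} non-overlapping cones.
    eps, n = 0.11, 100
    H_Y = 1.0                    # uniform input makes the BSC output uniform
    H_Y_given_X = hb(eps)        # per-letter conditional entropy of the noise
    exponent = n * (H_Y - H_Y_given_X)
    print(f"at most 2^{n*H_Y:.0f} / 2^{n*H_Y_given_X:.1f} = 2^{exponent:.1f} codewords")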
Therefore, instead of searching through the ensemble of all possible codebooks for a good one, we can generate a codebook randomly, and it is likely to be good. However, such a code is difficult to use due to the following implementation issues. A codebook with block length n and rate R consists of n2^{nR} symbols from the input alphabet X. This means that the size of the codebook, i.e., the amount of storage required to store the codebook, grows exponentially with n. This also makes the encoding process inefficient.

Another issue is regarding the computation required for decoding. Based on the sequence received at the output of the channel, the decoder needs to decide which of the about 2^{nR} codewords was the one transmitted. This requires an exponential amount of computation.

In practice, we are satisfied with the reliability of communication once it exceeds a certain level. Therefore, the above implementation issues may eventually be resolved with the advancement of microelectronics. But before then, we still have to deal with these issues. For this reason, the entire field of coding theory has been developed since the 1950's. Researchers in this field are devoted to searching for good codes and devising efficient decoding algorithms. In fact, almost all the codes studied in coding theory are linear codes. By taking advantage of the linear structures of these codes, efficient encoding and decoding can be achieved. In particular, Berrou et al. proposed in 1993 a linear code called the turbo code that can practically achieve the channel capacity. (The turbo code is a special case of the class of low-density parity-check (LDPC) codes proposed by Gallager in 1962; see MacKay. However, the performance of such codes was not known at that time due to lack of high speed computers for simulation.)

Today, channel coding has been widely used in home entertainment systems (e.g., audio CD and DVD), computer storage systems (e.g., CD-ROM, hard disk, floppy disk, and magnetic tape), computer communication, wireless communication, and deep space communication. The most popular channel codes used in existing systems include the Hamming code, the Reed-Solomon code (independently discovered by Arimoto), the BCH code, and convolutional codes. We refer the interested reader to textbooks on coding theory for discussions of this subject.

7.6 Feedback Capacity

Feedback is common in practical communication systems for correcting possible errors which occur during transmission. As an example, during a telephone conversation, we often have to request the speaker to repeat due to poor voice quality of the telephone line. As another example, in data communication, the receiver may request a packet to be retransmitted if the parity check bits received are incorrect. In general, when feedback from the receiver is available at the transmitter, the transmitter can at any time decide what to transmit next based on the feedback so far, and can potentially transmit information through the channel reliably at a higher rate.

In this section, we study a model in which a DMC is used with complete feedback. The block diagram for the model is shown in Figure 7.13.

    Fig. 7.13. A channel code with feedback.

In this model, the symbol Yi received at the output of the channel at time i is available instantaneously at the encoder without error. Then depending on the message W and all the previous feedback Y1, Y2, · · · , Yi, the encoder decides the value of X_{i+1}, the next symbol to be transmitted. Such a channel code is formally defined below.

Definition 7.18. An (n, M) code with complete feedback for a discrete memoryless channel with input alphabet X and output alphabet Y is defined by encoding functions

    f_i : {1, 2, · · · , M} × Y^{i−1} → X (7.198)

for 1 ≤ i ≤ n and a decoding function

    g : Y^n → {1, 2, · · · , M}. (7.199)

We will use Y^i to denote (Y1, Y2, · · · , Yi) and Xi to denote f_i(W, Y^{i−1}).
We note that a channel code without feedback is a special case of a channel code with complete feedback because for the latter, the encoder can ignore the feedback.

Definition 7.19. A rate R is achievable with complete feedback for a discrete memoryless channel p(y|x) if for any ϵ > 0, there exists for sufficiently large n an (n, M) code with complete feedback such that

    (1/n) log M > R − ϵ (7.200)

and

    λ_max < ϵ. (7.201)

Definition 7.20. The feedback capacity, C_FB, of a discrete memoryless channel is the supremum of all the rates achievable by codes with complete feedback.

Proposition 7.21. The supremum in the definition of C_FB in Definition 7.20 is the maximum.

Proof. Consider rates R^{(k)} which are achievable with complete feedback such that

    lim_{k→∞} R^{(k)} = R. (7.202)

Then for any ϵ > 0, for all k, there exists for sufficiently large n an (n, M^{(k)}) code with complete feedback such that

    (1/n) log M^{(k)} > R^{(k)} − ϵ (7.203)

and

    λ^{(k)}_max < ϵ. (7.204)

By virtue of (7.202), let k(ϵ) be an integer such that for all k > k(ϵ),

    |R − R^{(k)}| < ϵ, (7.205)

which implies

    R^{(k)} > R − ϵ. (7.206)

Then for all k > k(ϵ),

    (1/n) log M^{(k)} > R^{(k)} − ϵ > R − 2ϵ. (7.207)

Therefore, it follows from (7.207) and (7.204) that R is achievable with complete feedback. This implies that the supremum in Definition 7.20, which can be achieved, is in fact the maximum. □

Since a channel code without feedback is a special case of a channel code with complete feedback, any rate R achievable by the former is also achievable by the latter. Therefore,

    C_FB ≥ C. (7.208)

A fundamental question is whether C_FB is greater than C. The answer surprisingly turns out to be negative for a DMC, as we now show. From the description of a channel code with complete feedback, we obtain the dependency graph for the random variables W, X, Y, Ŵ in Figure 7.14.

    Fig. 7.14. The dependency graph for a channel code with feedback.

From this dependency graph, we see that

    q(w, x, y, ŵ) = q(w) (∏_{i=1}^n q(xi | w, y^{i−1})) (∏_{i=1}^n p(yi|xi)) q(ŵ|y) (7.209)

for all (w, x, y, ŵ) ∈ W × X^n × Y^n × W such that q(w, y^{i−1}), q(xi) > 0 for 1 ≤ i ≤ n and q(y) > 0, where y^i = (y1, y2, · · · , yi). Note that q(xi | w, y^{i−1}) and q(ŵ|y) are deterministic.

Lemma 7.22. For all 1 ≤ i ≤ n,

    (W, Y^{i−1}) → Xi → Yi (7.210)

forms a Markov chain.

Proof. The dependency graph for the random variables W, Xi, and Yi is shown in Figure 7.15.

    Fig. 7.15. The dependency graph for W, Xi, and Yi.

Denote the set of nodes W, X^{i−1}, and Y^{i−1} by Z. Then we see that all the edges from Z end at Xi, and the only edge from Xi ends at Yi. This means that Yi depends on (W, X^{i−1}, Y^{i−1}) only through Xi, i.e.,

    (W, X^{i−1}, Y^{i−1}) → Xi → Yi (7.211)

forms a Markov chain, or

    I(W, X^{i−1}, Y^{i−1}; Yi | Xi) = 0. (7.212)

This can be formally justified by Proposition 2.9, and the details are omitted here. Since

    0 = I(W, X^{i−1}, Y^{i−1}; Yi | Xi) (7.213)
      = I(W, Y^{i−1}; Yi | Xi) + I(X^{i−1}; Yi | W, Xi, Y^{i−1}) (7.214)

and mutual information is nonnegative, we obtain

    I(W, Y^{i−1}; Yi | Xi) = 0, (7.215)

or

    (W, Y^{i−1}) → Xi → Yi (7.216)

forms a Markov chain. The lemma is proved. □

From the definition of C_FB and by virtue of Proposition 7.21, if R ≤ C_FB, then R is a rate achievable with complete feedback. We will show that if a rate R is achievable with complete feedback, then R ≤ C. If so, then R ≤ C_FB implies R ≤ C, which can be true if and only if C_FB ≤ C.
Then from (7.208), we can conclude that C_FB = C.

Let R be a rate achievable with complete feedback, i.e., for any ϵ > 0, there exists for sufficiently large n an (n, M) code with complete feedback such that

    n^{−1} log M > R − ϵ (7.217)

and

    λ_max < ϵ. (7.218)

Consider

    log M = H(W) = I(W; Y) + H(W|Y) (7.219)

and bound I(W; Y) and H(W|Y) as follows. First,

    I(W; Y) = H(Y) − H(Y|W) (7.220)
            = H(Y) − ∑_{i=1}^n H(Yi | Y^{i−1}, W) (7.221)
          a)= H(Y) − ∑_{i=1}^n H(Yi | Y^{i−1}, W, Xi) (7.222)
          b)= H(Y) − ∑_{i=1}^n H(Yi | Xi) (7.223)
            ≤ ∑_{i=1}^n H(Yi) − ∑_{i=1}^n H(Yi | Xi) (7.224)
            = ∑_{i=1}^n I(Xi; Yi) (7.225)
            ≤ nC, (7.226)

where a) follows because Xi is a function of W and Y^{i−1}, and b) follows from Lemma 7.22. Second,

    H(W|Y) = H(W | Y, Ŵ) ≤ H(W|Ŵ). (7.227)

Thus

    log M ≤ H(W|Ŵ) + nC, (7.228)

which is the same as (7.126). Then by (7.217) and an application of Fano's inequality, we conclude as in the proof for the converse of the channel coding theorem that

    R ≤ C. (7.229)

Hence, we have proved that C_FB = C.

Remark 1. The proof for the converse of the channel coding theorem in Section 7.3 depends critically on the Markov chain

    W → X → Y → Ŵ (7.230)

and the relation in (7.101) (the latter implies Lemma 7.16). Both of them do not hold in general in the presence of feedback.

Remark 2. The proof for C_FB = C in this section is also a proof for the converse of the channel coding theorem, so we actually do not need the proof in Section 7.3. However, the proof here and the proof in Section 7.3 have different spirits. Without comparing the two proofs, one cannot possibly understand the subtlety of the result that feedback does not increase the capacity of a DMC.

Remark 3. Although feedback does not increase the capacity of a DMC, the availability of feedback often makes coding much simpler. For some channels, communication through the channel with zero probability of error can be achieved in the presence of feedback by using a variable-length channel code. These are discussed in the next example.

Example 7.23. Consider the binary erasure channel in Example 7.8 whose capacity is 1 − γ, where γ is the erasure probability. In the presence of complete feedback, for every information bit to be transmitted, the encoder can transmit the same information bit through the channel until an erasure does not occur, i.e., the information bit is received correctly. Then the number of uses of the channel it takes to transmit an information bit through the channel correctly has a geometric distribution whose mean is (1 − γ)^{−1}. Therefore, the effective rate at which information can be transmitted through the channel is 1 − γ. In other words, the channel capacity is achieved by using a very simple variable-length code. Moreover, the channel capacity is achieved with zero probability of error. In the absence of feedback, the rate 1 − γ can also be achieved, but with an arbitrarily small probability of error and a much more complicated code.

To conclude this section, we point out that the memoryless assumption of the channel is essential for drawing the conclusion that feedback does not increase the channel capacity, not because the proof presented in this section does not go through without this assumption, but because if the channel has memory, feedback actually can increase the channel capacity. For an illustrating example, see Problem 12.
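The variable-length scheme of Example 7.23 is straightforward to simulate. The sketch below (our own) retransmits each bit over a BEC until it is not erased and reports the empirical rate, which approaches 1 − γ with zero decoding error:

    import random

    # Feedback scheme of Example 7.23: repeat each information bit over a
    # BEC with erasure probability gamma until it gets through unerased.
    random.seed(1)
    gamma, num_bits = 0.25, 100_000
    uses = 0
    for _ in range(num_bits):
        while True:
            uses += 1
            if random.random() >= gamma:   # not erased: received correctly
                break
    print("empirical rate:", num_bits / uses, " capacity:", 1 - gamma)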
7.7 Separation of Source and Channel Coding

We have so far considered the situation in which we want to convey a message through a DMC, where the message is randomly selected from a finite set according to the uniform distribution. However, in most situations, we want to convey an information source through a DMC. Let {Uk, k > −n} be an ergodic stationary information source with entropy rate H. Denote the common alphabet by U and assume that U is finite. To convey {Uk} through the channel, we can employ a source code with rate Rs and a channel code with rate Rc as shown in Figure 7.16 such that Rs < Rc.

    Fig. 7.16. Separation of source coding and channel coding.

Let f^s and g^s be respectively the encoding function and the decoding function of the source code, and f^c and g^c be respectively the encoding function and the decoding function of the channel code. The block of n information symbols U = (U_{−(n−1)}, U_{−(n−2)}, · · · , U_0) is first encoded by the source encoder into an index

    W = f^s(U), (7.231)

called the source codeword. Then W is mapped by the channel encoder to a distinct channel codeword

    X = f^c(W), (7.232)

where X = (X1, X2, · · · , Xn). This is possible because there are about 2^{nRs} source codewords and about 2^{nRc} channel codewords, and we assume that Rs < Rc. Then X is transmitted through the DMC p(y|x), and the sequence Y = (Y1, Y2, · · · , Yn) is received. Based on Y, the channel decoder first estimates W as

    Ŵ = g^c(Y). (7.233)

Finally, the source decoder decodes Ŵ to

    Û = g^s(Ŵ). (7.234)

For this scheme, an error occurs if U ≠ Û, and we denote the probability of error by Pe.

We now show that if H < C, the capacity of the DMC p(y|x), then it is possible to convey U through the channel with an arbitrarily small probability of error. First, we choose Rs and Rc such that

    H < Rs < Rc < C. (7.235)

Observe that if Ŵ = W and g^s(W) = U, then from (7.234),

    Û = g^s(Ŵ) = g^s(W) = U, (7.236)

i.e., an error does not occur. In other words, if an error occurs, either Ŵ ≠ W or g^s(W) ≠ U. Then by the union bound, we have

    Pe ≤ Pr{Ŵ ≠ W} + Pr{g^s(W) ≠ U}. (7.237)

For any ϵ > 0 and sufficiently large n, by the Shannon-McMillan-Breiman theorem, there exists a source code such that

    Pr{g^s(W) ≠ U} ≤ ϵ. (7.238)

By the channel coding theorem, there exists a channel code such that λ_max ≤ ϵ, where λ_max is the maximal probability of error. This implies

    Pr{Ŵ ≠ W} = ∑_w Pr{Ŵ ≠ W | W = w} Pr{W = w} (7.239)
              ≤ λ_max ∑_w Pr{W = w} (7.240)
              = λ_max (7.241)
              ≤ ϵ. (7.242)

Combining (7.238) and (7.242), we have

    Pe ≤ 2ϵ. (7.243)

Therefore, we conclude that as long as H < C, it is possible to convey {Uk} through the DMC reliably.

In the scheme we have discussed, source coding and channel coding are separated. In general, source coding and channel coding can be combined. This technique is called joint source-channel coding, illustrated in Figure 7.17.

    Fig. 7.17. Joint source-channel coding.

It is then natural to ask whether it is possible to convey information through the channel reliably at a higher rate by using joint source-channel coding. In the rest of the section, we show that the answer to this question is no to the extent that for asymptotic reliability, we must have H ≤ C. However, whether asymptotic reliability can be achieved for H = C depends on the specific information source and channel.
We base our discussion on the general assumption that complete feedback is available at the encoder as shown in Figure 7.17. Let f^{sc}_i, 1 ≤ i ≤ n, be the encoding functions and g^{sc} be the decoding function of the source-channel code. Then

    Xi = f^{sc}_i(U, Y^{i−1}) (7.244)

for 1 ≤ i ≤ n, where Y^{i−1} = (Y1, Y2, · · · , Y_{i−1}), and

    Û = g^{sc}(Y), (7.245)

where Û = (Û1, Û2, · · · , Ûn). In exactly the same way as we proved (7.226) in the last section, we can prove that

    I(U; Y) ≤ nC. (7.246)

Since Û is a function of Y,

    I(U; Û) ≤ I(U; Û, Y) (7.247)
            = I(U; Y) (7.248)
            ≤ nC. (7.249)

For any ϵ > 0,

    H(U) ≥ n(H − ϵ) (7.250)

for sufficiently large n. Then

    n(H − ϵ) ≤ H(U) = H(U|Û) + I(U; Û) ≤ H(U|Û) + nC. (7.251)

Applying Fano's inequality (Corollary 2.48), we obtain

    n(H − ϵ) ≤ 1 + nPe log |U| + nC, (7.252)

or

    H − ϵ ≤ 1/n + Pe log |U| + C. (7.253)

For asymptotic reliability, Pe → 0 as n → ∞. Therefore, by letting n → ∞ and then ϵ → 0, we conclude that

    H ≤ C. (7.254)

This result, sometimes called the separation theorem for source and channel coding, says that asymptotic optimality can be achieved by separating source coding and channel coding. This theorem has significant engineering implication because the source code and the channel code can be designed separately without losing asymptotic optimality. Specifically, we only need to design the best source code for the information source and design the best channel code for the channel. Moreover, separation of source coding and channel coding facilitates the transmission of different information sources on the same channel because we need only change the source code for different information sources. Likewise, separation of source coding and channel coding also facilitates the transmission of an information source on different channels because we need only change the channel code for different channels.

We remark that although asymptotic optimality can be achieved by separating source coding and channel coding, for finite block length, the probability of error generally can be reduced by using joint source-channel coding.

Chapter Summary

Capacity of Discrete Memoryless Channel:

    C = max_{p(x)} I(X; Y),

where p(x) is the input distribution of the channel.

1. C ≤ min(log |X|, log |Y|).
2. For a binary symmetric channel with crossover probability ϵ, C = 1 − hb(ϵ).
3. For a binary erasure channel with erasure probability γ, C = 1 − γ.

Lemma: Let X and Y be a pair of random variables and (X′, Y′) be n i.i.d. copies of a pair of generic random variables (X′, Y′), where X′ and Y′ are independent and have the same marginal distributions as X and Y, respectively. Then

    Pr{(X′, Y′) ∈ T^n_{[XY]δ}} ≤ 2^{−n(I(X;Y)−τ)},

where τ → 0 as δ → 0.

Channel Coding Theorem: A message drawn uniformly from the set {1, 2, · · · , 2^{n(R−ϵ)}} can be transmitted through a discrete memoryless channel with negligible probability of error as n → ∞ if and only if R ≤ C.

Feedback: The capacity of a discrete memoryless channel is not increased by feedback.

Separation of Source and Channel Coding: An information source with entropy rate H can be transmitted through a discrete memoryless channel with capacity C reliably if H < C (only if H ≤ C), and asymptotic optimality can be achieved by separating source coding and channel coding.

Problems

In the following, X = (X1, X2, · · · , Xn), x = (x1, x2, · · · , xn), and so on.

1. Refer to the discussion in Section 7.4.
a) Construct the dependency graph for the random variables involved in the random coding scheme.
b) By considering the joint distribution of these random variables, prove the Markov chain in (7.160).

2. Show that the capacity of a DMC with complete feedback cannot be increased by using probabilistic encoding and/or decoding schemes.

3. Memory increases capacity. Consider a BSC with crossover probability 0 < ϵ < 1 represented by Xi = Yi + Zi mod 2, where Xi, Yi, and Zi are respectively the input, the output, and the noise variable at time i. Then Pr{Zi = 0} = 1 − ϵ and Pr{Zi = 1} = ϵ for all i. We assume that {Xi} and {Zi} are independent, but we make no assumption that Zi are i.i.d. so that the channel may have memory.
a) Prove that I(X; Y) ≤ n − hb(ϵ).
b) Show that the upper bound in a) can be achieved by letting Xi be i.i.d. bits taking the values 0 and 1 with equal probability and Z1 = Z2 = · · · = Zn.
c) Show that with the assumptions in b), I(X; Y) > nC, where C = 1 − hb(ϵ) is the capacity of the BSC if it is memoryless.

4. Consider the channel in Problem 3, Part b).
a) Show that the channel capacity is not increased by feedback.
b) Devise a coding scheme without feedback that achieves the channel capacity.

5. In Remark 1 toward the end of Section 7.6, it was mentioned that in the presence of feedback, both the Markov chain W → X → Y → Ŵ and Lemma 7.16 do not hold in general. Give examples to substantiate this remark.

6. Prove that when a DMC is used with complete feedback,

    Pr{Yi = yi | X^i = x^i, Y^{i−1} = y^{i−1}} = Pr{Yi = yi | Xi = xi}

for all i ≥ 1. This relation, which is a consequence of the causality of the code, says that given the current input, the current output does not depend on all the past inputs and outputs of the DMC.

7. Let

    P(ϵ) = [ 1−ϵ   ϵ  ]
           [  ϵ   1−ϵ ]

be the transition matrix for a BSC with crossover probability ϵ. Define a ∗ b = (1 − a)b + a(1 − b) for 0 ≤ a, b ≤ 1. (A numerical check of Part a) appears after the problem list.)
a) Prove that a DMC with transition matrix P(ϵ1)P(ϵ2) is equivalent to a BSC with crossover probability ϵ1 ∗ ϵ2. Such a channel is the cascade of two BSC's with crossover probabilities ϵ1 and ϵ2, respectively.
b) Repeat a) for a DMC with transition matrix P(ϵ2)P(ϵ1).
c) Prove that

    1 − hb(ϵ1 ∗ ϵ2) ≤ min(1 − hb(ϵ1), 1 − hb(ϵ2)).

This means that the capacity of the cascade of two BSC's is upper bounded by the capacity of either of the two BSC's.
d) Prove that a DMC with transition matrix P(ϵ)^n is equivalent to a BSC with crossover probability (1/2)(1 − (1 − 2ϵ)^n).

8. Symmetric channel. A DMC is symmetric if the rows of the transition matrix p(y|x) are permutations of each other and so are the columns. Determine the capacity of such a channel. See Section 4.5 in Gallager for a more general discussion.

9. Let C1 and C2 be the capacities of two DMC's with transition matrices P1 and P2, respectively, and let C be the capacity of the DMC with transition matrix P1P2. Prove that C ≤ min(C1, C2).

10. Two parallel channels. Let C1 and C2 be the capacities of two DMC's p1(y1|x1) and p2(y2|x2), respectively. Determine the capacity of the DMC

    p(y1, y2|x1, x2) = p1(y1|x1) p2(y2|x2).

Hint: Prove that

    I(X1, X2; Y1, Y2) ≤ I(X1; Y1) + I(X2; Y2)

if p(y1, y2|x1, x2) = p1(y1|x1)p2(y2|x2).

11. In the system below, there are two channels with transition matrices p1(y1|x) and p2(y2|x). These two channels have a common input alphabet X and output alphabets Y1 and Y2, respectively, where Y1 and Y2 are disjoint. The position of the switch is determined by a random variable Z which is independent of X, where Pr{Z = 1} = λ.
[Figure: a switch at the output selects Y = Y1 when Z = 1 and Y = Y2 when Z = 2, where Y1 and Y2 are the outputs of the channels p1(y1|x) and p2(y2|x) with common input X.]

a) Show that

    I(X; Y) = λ I(X; Y1) + (1 − λ) I(X; Y2).

b) The capacity of the system is given by C = max_{p(x)} I(X; Y). Show that C ≤ λC1 + (1 − λ)C2, where Ci = max_{p(x)} I(X; Yi) is the capacity of the channel with transition matrix pi(yi|x), i = 1, 2.
c) If both C1 and C2 can be achieved by a common input distribution, show that C = λC1 + (1 − λ)C2.

12. Feedback increases capacity. Consider a ternary channel with memory with input/output alphabet {0, 1, 2} as follows. At time 1, the output of the channel Y1 has a uniform distribution on {0, 1, 2} and is independent of the input X1 (i.e., the channel outputs each of the values 0, 1, and 2 with probability 1/3 regardless of the input). At time 2, the transition from X2 to Y2 depends on the value of Y1.

[Figure: the transition diagrams from X2 to Y2 for Y1 = 0, Y1 = 1, and Y1 = 2.]

For every two subsequent transmissions, the channel replicates itself independently. So we only need to consider the first two transmissions. In the sequel, we regard this channel as described by a generic discrete channel (with transmission duration equal to 2) with two input symbols X1 and X2 and two output symbols Y1 and Y2, and we will refer to this channel as the block channel.
a) Determine the capacity of this block channel when it is used without feedback. Hint: Use the results in Problems 8 and 11.
b) Consider the following coding scheme when the block channel is used with feedback. Let the message W = (W1, W2) with W1 = {0, 1, 2} and W2 = {0, 1}. Let W1 and W2 be independent, and each of them is distributed uniformly on its alphabet. First, let X1 = W1 and transmit X1 through the channel to obtain Y1, which is independent of X1. Then based on the value of Y1, we determine X2 as follows:
i) If Y1 = 0, let X2 = 0 if W2 = 0, and let X2 = 1 if W2 = 1.
ii) If Y1 = 1, let X2 = 1 if W2 = 0, and let X2 = 2 if W2 = 1.
iii) If Y1 = 2, let X2 = 0 if W2 = 0, and let X2 = 2 if W2 = 1.
Then transmit X2 through the channel to obtain Y2. Based on this coding scheme, show that the capacity of this block channel can be increased by feedback.

13. Channel with memory and directed information. The memorylessness of a DMC is characterized by the Markov chain T_{i−} → Xi → Yi according to the discussion following Definition 7.4. In general, a channel with memory satisfies the Markov chain T′_{i−} → (Xi, Y^{i−1}) → Yi, where T′_{i−} denotes all the random variables generated in the system before Xi (i.e., the random variables denoted by T_{i−}) except for X^{i−1} and Y^{i−1}. Consider the use of such a channel in the presence of complete feedback.
a) Give the dependency graph for all the random variables involved in the coding scheme. Note that the memory of the channel is manifested by the dependence of Yi on X^{i−1} and Y^{i−1} (in addition to its dependence on Xi) for 1 ≤ i ≤ n.
b) Verify the correctness of the following derivation:

    I(W; Y) = H(Y) − H(Y|W)
            = ∑_{i=1}^n [H(Yi|Y^{i−1}) − H(Yi|W, Y^{i−1})]
            ≤ ∑_{i=1}^n [H(Yi|Y^{i−1}) − H(Yi|W, Xi, Y^{i−1})]
            = ∑_{i=1}^n [H(Yi|Y^{i−1}) − H(Yi|Xi, Y^{i−1})]
            = ∑_{i=1}^n I(Yi; Xi|Y^{i−1}).

The above upper bound on I(W; Y), denoted by I(X → Y), is called the directed information from X to Y.
c) Show that the inequality in the derivation in b) is in fact an equality. Hint: Use Definition 7.18.
d) In the spirit of the informal discussion in Section 7.3, we impose the constraint H(W|Y) = 0. Show that

    H(W) = I(X → Y).

This is the generalization of (7.105) for a channel with memory in the presence of complete feedback.
e) Show that I(X → Y) = I(X; Y) if the channel code does not make use of the feedback. Hint: First show that

    H(Yi|Xi, Y^{i−1}) = H(Yi|W, Xi, Y^{i−1}) = H(Yi|W, X, Y^{i−1}).

(Marko and Massey.)

14. Maximum likelihood decoding. In maximum likelihood decoding for a given channel and a given codebook, if a received sequence y is decoded to a codeword x, then x maximizes Pr{y|x′} among all codewords x′ in the codebook.
a) Prove that maximum likelihood decoding minimizes the average probability of error.
b) Does maximum likelihood decoding also minimize the maximal probability of error? Give an example if your answer is no.

15. Minimum distance decoding. The Hamming distance between two binary sequences x and y, denoted by d(x, y), is the number of places where x and y differ. In minimum distance decoding for a memoryless BSC, if a received sequence y is decoded to a codeword x, then x minimizes d(x′, y) over all codewords x′ in the codebook. Prove that minimum distance decoding is equivalent to maximum likelihood decoding if the crossover probability of the BSC is less than 0.5.

16. The following figure shows a communication system with two DMC's with complete feedback. The capacities of the two channels are respectively C1 and C2.

[Figure: W → Encoder 1 → Channel 1 → Encoder 2 → Channel 2 → Decoder → Ŵ.]

a) Give the dependency graph for all the random variables involved in the coding scheme.
b) Prove that the capacity of the system is min(C1, C2).

17. Binary arbitrarily varying channel. Consider a memoryless BSC whose crossover probability is time-varying. Specifically, the crossover probability ϵ(i) at time i is an arbitrary value in [ϵ1, ϵ2], where 0 ≤ ϵ1 < ϵ2 < 0.5. Prove that the capacity of this channel is 1 − hb(ϵ2). (Ahlswede and Wolfowitz.)

18. Consider a BSC with crossover probability ϵ ∈ [ϵ1, ϵ2], where 0 < ϵ1 < ϵ2 < 0.5, but the exact value of ϵ is unknown. Prove that the capacity of this channel is 1 − hb(ϵ2).
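As a quick numerical check of Problem 7, Part a), the following sketch (our own) multiplies the two BSC transition matrices and compares the product with P(ϵ1 ∗ ϵ2):

    # The cascade of two BSCs with crossover probabilities e1 and e2 is a
    # BSC with crossover probability e1 * e2 = (1 - e1)e2 + e1(1 - e2).
    def P(e):
        return [[1 - e, e], [e, 1 - e]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    def star(a, b):
        return (1 - a) * b + a * (1 - b)

    e1, e2 = 0.1, 0.2
    print(matmul(P(e1), P(e2)))   # equals P(star(e1, e2))
    print(P(star(e1, e2)))        # crossover 0.9*0.2 + 0.1*0.8 = 0.26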
e) Show that I(X →Y) = I(X; Y) if the channel code does not make use of the feedback. Hint: First show that H(Yi|Xi, Yi−1) = H(Yi|W, Xi, Yi−1) = H(Yi|W, X, Yi−1). (Marko and Massey .) 14. Maximum likelihood decoding In maximum likelihood decoding for a given channel and a given codebook, if a received sequence y is decoded to a codeword x, then x maximizes Pr{y|x′} among all codewords x′ in the codebook. a) Prove that maximum likelihood decoding minimizes the average prob-ability of error. b) Does maximum likelihood decoding also minimize the maximal prob-ability of error? Give an example if your answer is no. 15. Minimum distance decoding The Hamming distance between two binary sequences x and y, denoted by d(x, y), is the number of places where x and y differ. In minimum distance decoding for a memoryless BSC, if a received sequence y is decoded to a codeword x, then x minimizes d(x′, y) over all codewords x′ in the codebook. Prove that minimum distance decoding is equivalent to maximum likelihood decoding if the crossover probability of the BSC is less than 0.5. 16. The following figure shows a communication system with two DMC’s with complete feedback. The capacities of the two channels are respectively C1 and C2. Decoder 2 W W Encoder 1 Channel 1 Encoder 2 Channel 2 a) Give the dependency graph for all the random variables involved in the coding scheme. b) Prove that the capacity of the system is min(C1, C2). 17. Binary arbitrarily varying channel Consider a memoryless BSC whose crossover probability is time-varying. Specifically, the crossover probabil-ity ϵ(i) at time i is an arbitrary value in [ϵ1, ϵ2], where 0 ≤ϵ1 < ϵ2 < 0.5. Prove that the capacity of this channel is 1 −hb(ϵ2). (Ahlswede and Wolfowitz .) 18. Consider a BSC with crossover probability ϵ ∈[ϵ1, ϵ2], where 0 < ϵ1 < ϵ2 < 0.5, but the exact value of ϵ is unknown. Prove that the capacity of this channel is 1 −hb(ϵ2). Historical Notes 181 Historical Notes The concept of channel capacity was introduced in Shannon’s original pa-per , where he stated the channel coding theorem and outlined a proof. The first rigorous proof was due to Feinstein . The random coding error exponent was developed by Gallager in a simplified proof. The converse of the channel coding theorem was proved by Fano , where he used an inequality now bearing his name. The strong converse was first proved by Wolfowitz . An iterative algorithm for calculating the channel capacity developed independently by Arimoto and Blahut will be discussed in Chapter 9. Shannon proved that the capacity of a discrete memoryless channel cannot be increased by feedback. The definition of a discrete memoryless channel in this chapter is new. With this definition, coding over such a channel with or without feedback can be rigorously formulated. 8 Rate-Distortion Theory Consider an information source with entropy rate H. By the source coding theorem, it is possible to design a source code with rate R which reconstructs the source sequence X = (X1, X2, · · · , Xn) with an arbitrarily small proba-bility of error provided R > H and the block length n is sufficiently large. However, there are situations in which we want to convey an information source by a source code with rate less than H. Then we are motivated to ask: what is the best we can do when R < H? 
A natural approach is to design a source code such that for part of the time the source sequence is reconstructed correctly, while for the other part of the time the source sequence is reconstructed incorrectly, i.e., an error occurs. In designing such a code, we try to minimize the probability of error. However, this approach is not viable asymptotically because the converse of the source coding theorem says that if R < H, then the probability of error inevitably tends to 1 as n → ∞.

Therefore, if R < H, no matter how the source code is designed, the source sequence is almost always reconstructed incorrectly when n is large. An alternative approach is to design a source code called a rate-distortion code which reproduces the source sequence with distortion. In order to formulate the problem properly, we need a distortion measure between each source sequence and each reproduction sequence. Then we try to design a rate-distortion code which with high probability reproduces the source sequence with a distortion within a tolerance level. Clearly, a smaller distortion can potentially be achieved if we are allowed to use a higher coding rate. Rate-distortion theory, the subject matter of this chapter, gives a characterization of the asymptotically optimal tradeoff between the coding rate of a rate-distortion code for a given information source and the allowed distortion in the reproduction sequence with respect to a distortion measure.

8.1 Single-Letter Distortion Measures

Let {Xk, k ≥ 1} be an i.i.d. information source with generic random variable X. We assume that the source alphabet X is finite. Let p(x) be the probability distribution of X, and we assume without loss of generality that the support of X is equal to X. Consider a source sequence

    x = (x1, x2, · · · , xn) (8.1)

and a reproduction sequence

    x̂ = (x̂1, x̂2, · · · , x̂n). (8.2)

The components of x̂ can take values in X, but more generally, they can take values in any finite set X̂ which may be different from X. The set X̂, which is also assumed to be finite, is called the reproduction alphabet. To measure the distortion between x and x̂, we introduce the single-letter distortion measure and the average distortion measure.

Definition 8.1. A single-letter distortion measure is a mapping

    d : X × X̂ → ℜ+, (8.3)

where ℜ+ is the set of nonnegative real numbers. (Note that d(x, x̂) is finite for all (x, x̂) ∈ X × X̂.) The value d(x, x̂) denotes the distortion incurred when a source symbol x is reproduced as x̂.

Definition 8.2. The average distortion between a source sequence x ∈ X^n and a reproduction sequence x̂ ∈ X̂^n induced by a single-letter distortion measure d is defined by

    d(x, x̂) = (1/n) ∑_{k=1}^n d(xk, x̂k). (8.4)

In Definition 8.2, we have used d to denote both the single-letter distortion measure and the average distortion measure, but this abuse of notation should cause no ambiguity. Henceforth, we will refer to a single-letter distortion measure simply as a distortion measure.

Very often, the source sequence x represents quantized samples of a continuous signal, and the user attempts to recognize certain objects and derive meaning from the reproduction sequence x̂. For example, x may represent a video signal, an audio signal, or an image. The ultimate purpose of a distortion measure is to reflect the distortion between x and x̂ as perceived by the user. This goal is difficult to achieve in general because measurements of the distortion between x and x̂ must be made within context unless the symbols in X carry no physical meaning.
Specifically, when the user derives meaning from x̂, the distortion in x̂ as perceived by the user depends on the context. For example, the perceived distortion is small for a portrait contaminated by a fairly large noise, while the perceived distortion is large for the image of a book page contaminated by the same noise. Hence, a good distortion measure should be context dependent.

Although the average distortion is not necessarily the best way to measure the distortion between a source sequence and a reproduction sequence, it has the merit of being simple and easy to use. Moreover, rate-distortion theory, which is based on the average distortion measure, provides a framework for data compression when distortion is inevitable.

Example 8.3. When the symbols in X and X̂ represent real values, a popular distortion measure is the square-error distortion measure which is defined by

    d(x, x̂) = (x − x̂)^2. (8.5)

The average distortion measure so induced is often referred to as the mean-square error.

Example 8.4. When X and X̂ are identical and the symbols in X do not carry any particular meaning, a frequently used distortion measure is the Hamming distortion measure, which is defined by

    d(x, x̂) = 0 if x = x̂;  d(x, x̂) = 1 if x ≠ x̂. (8.6)

The Hamming distortion measure indicates the occurrence of an error. In particular, for an estimate X̂ of X, we have

    E d(X, X̂) = Pr{X = X̂} · 0 + Pr{X ≠ X̂} · 1 = Pr{X ≠ X̂}, (8.7)

i.e., the expectation of the Hamming distortion measure between X and X̂ is the probability of error. For x ∈ X^n and x̂ ∈ X̂^n, the average distortion d(x, x̂) induced by the Hamming distortion measure gives the frequency of error in the reproduction sequence x̂.

Definition 8.5. For a distortion measure d, for each x ∈ X, let x̂*(x) ∈ X̂ minimize d(x, x̂) over all x̂ ∈ X̂. A distortion measure d is said to be normal if

    c_x := d(x, x̂*(x)) = 0 (8.8)

for all x ∈ X.

The square-error distortion measure and the Hamming distortion measure are examples of normal distortion measures. Basically, a normal distortion measure is one which allows X to be reproduced with zero distortion. Although a distortion measure d is not normal in general, a normalization of d can always be obtained by defining the distortion measure

    d̃(x, x̂) = d(x, x̂) − c_x (8.9)

for all (x, x̂) ∈ X × X̂. Evidently, d̃ is a normal distortion measure, and it is referred to as the normalization of d.

Example 8.6. Let d be a distortion measure defined by

    d(x, x̂) |  a  b  c
    --------+---------
        1   |  2  7  5
        2   |  4  3  8

Then d̃, the normalization of d, is given by

    d̃(x, x̂) |  a  b  c
    --------+---------
        1   |  0  5  3
        2   |  1  0  5

Note that for every x ∈ X, there exists an x̂ ∈ X̂ such that d̃(x, x̂) = 0.

Let X̂ be any estimate of X which takes values in X̂, and denote the joint distribution for X and X̂ by p(x, x̂). Then

    E d(X, X̂) = ∑_x ∑_x̂ p(x, x̂) d(x, x̂) (8.10)
              = ∑_x ∑_x̂ p(x, x̂) [d̃(x, x̂) + c_x] (8.11)
              = E d̃(X, X̂) + ∑_x p(x) ∑_x̂ p(x̂|x) c_x (8.12)
              = E d̃(X, X̂) + ∑_x p(x) c_x (∑_x̂ p(x̂|x)) (8.13)
              = E d̃(X, X̂) + ∑_x p(x) c_x (8.14)
              = E d̃(X, X̂) + ∆, (8.15)

where

    ∆ = ∑_x p(x) c_x (8.16)

is a constant which depends only on p(x) and d but not on the conditional distribution p(x̂|x). In other words, for a given X and a distortion measure d, the expected distortion between X and an estimate X̂ of X is always reduced by a constant upon using d̃ instead of d as the distortion measure.
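The normalization in Example 8.6 and the identity (8.15) can be checked mechanically. The sketch below is our own; the joint distribution p is an arbitrary choice consistent with p(x) = (0.3, 0.7):

    # Normalization of the distortion measure in Example 8.6: subtract
    # from each row its minimum c_x = min over xhat of d(x, xhat).
    d = {1: {'a': 2, 'b': 7, 'c': 5},
         2: {'a': 4, 'b': 3, 'c': 8}}

    c = {x: min(row.values()) for x, row in d.items()}   # c_1 = 2, c_2 = 3
    d_tilde = {x: {xh: v - c[x] for xh, v in row.items()} for x, row in d.items()}
    print(d_tilde)   # {1: {'a': 0, 'b': 5, 'c': 3}, 2: {'a': 1, 'b': 0, 'c': 5}}

    # Check E d(X, Xhat) = E d_tilde(X, Xhat) + Delta, cf. (8.15) and (8.16).
    p = {(1, 'a'): 0.2, (1, 'b'): 0.1, (2, 'b'): 0.5, (2, 'c'): 0.2}
    pX = {1: 0.3, 2: 0.7}
    Delta = sum(pX[x] * c[x] for x in pX)
    Ed  = sum(q * d[x][xh] for (x, xh), q in p.items())
    Edt = sum(q * d_tilde[x][xh] for (x, xh), q in p.items())
    print(Ed, Edt + Delta)   # both equal 4.2

The two printed expectations agree, with ∆ = 0.3·2 + 0.7·3 = 2.7.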
For reasons which will be explained in Section 8.3, it is sufficient for us to assume that a distortion measure is normal.

Definition 8.7. Let $\hat{x}^*$ minimize $Ed(X, \hat{x})$ over all $\hat{x} \in \hat{\mathcal{X}}$, and define
$$D_{\max} = Ed(X, \hat{x}^*). \quad (8.17)$$

$\hat{x}^*$ is the best estimate of $X$ if we know nothing about $X$, and $D_{\max}$ is the minimum expected distortion between $X$ and a constant estimate of $X$. The significance of $D_{\max}$ can be seen by taking the reproduction sequence $\hat{\mathbf{X}}$ to be $(\hat{x}^*, \hat{x}^*, \cdots, \hat{x}^*)$. Since the $d(X_k, \hat{x}^*)$ are i.i.d., by the weak law of large numbers
$$d(\mathbf{X}, \hat{\mathbf{X}}) = \frac{1}{n} \sum_{k=1}^n d(X_k, \hat{x}^*) \to Ed(X, \hat{x}^*) = D_{\max} \quad (8.18)$$
in probability, i.e., for any $\epsilon > 0$,
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D_{\max} + \epsilon\} \le \epsilon \quad (8.19)$$
for sufficiently large $n$. Note that $\hat{\mathbf{X}}$ is a constant sequence which does not depend on $\mathbf{X}$. In other words, even when no description of $\mathbf{X}$ is available, we can still achieve an average distortion no more than $D_{\max} + \epsilon$ with probability arbitrarily close to 1 when $n$ is sufficiently large.

The notation $D_{\max}$ may seem confusing because the quantity stands for the minimum rather than the maximum expected distortion between $X$ and a constant estimate of $X$. But we see from the above discussion that this notation is in fact appropriate because $D_{\max}$ is the maximum distortion we have to be concerned about. Specifically, it is not meaningful to impose a constraint $D \ge D_{\max}$ on the reproduction sequence because such a distortion can be achieved even without receiving any information about the sequence produced by the source.
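Definition 8.7 and the concentration in (8.18) are easy to check numerically. The sketch below (our own construction; the source distribution and distortion matrix are arbitrary) computes $D_{\max}$ and simulates the constant reproduction sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

p_x = np.array([0.4, 0.6])                 # source distribution on X = {0, 1}
d = np.array([[0.0, 5.0, 3.0],             # a normal distortion measure on X x Xhat
              [1.0, 0.0, 5.0]])

# Dmax = min over constant estimates xhat of E d(X, xhat), cf. (8.17).
expected_per_xhat = p_x @ d                # E d(X, xhat) for each xhat
xhat_star = int(np.argmin(expected_per_xhat))
D_max = expected_per_xhat[xhat_star]       # here 0.6, achieved by the first column

# Weak law of large numbers (8.18): the average distortion of the constant
# sequence (xhat*, ..., xhat*) concentrates around Dmax.
n = 100000
X = rng.choice(len(p_x), size=n, p=p_x)
avg_dist = d[X, xhat_star].mean()
print(D_max, avg_dist)                     # the two values should be close
```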
8.2 The Rate-Distortion Function R(D)

Throughout this chapter, all the discussions are with respect to an i.i.d. information source $\{X_k, k \ge 1\}$ with generic random variable $X$ and a distortion measure $d$. All logarithms are in base 2 unless otherwise specified.

Definition 8.8. An $(n, M)$ rate-distortion code is defined by an encoding function
$$f : \mathcal{X}^n \to \{1, 2, \cdots, M\} \quad (8.20)$$
and a decoding function
$$g : \{1, 2, \cdots, M\} \to \hat{\mathcal{X}}^n. \quad (8.21)$$
The set $\{1, 2, \cdots, M\}$, denoted by $\mathcal{I}$, is called the index set. The reproduction sequences $g(1), g(2), \cdots, g(M)$ in $\hat{\mathcal{X}}^n$ are called codewords, and the set of codewords is called the codebook.

[Figure 8.1: A rate-distortion code with block length n. The source sequence X is encoded to the index f(X), which the decoder maps to the reproduction sequence X̂.]

Figure 8.1 is an illustration of a rate-distortion code.

Definition 8.9. The rate of an $(n, M)$ rate-distortion code is $n^{-1} \log M$ in bits per symbol.

Definition 8.10. A rate-distortion pair $(R, D)$ is asymptotically achievable if for any $\epsilon > 0$, there exists for sufficiently large $n$ an $(n, M)$ rate-distortion code such that
$$\frac{1}{n} \log M \le R + \epsilon \quad (8.22)$$
and
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon\} \le \epsilon, \quad (8.23)$$
where $\hat{\mathbf{X}} = g(f(\mathbf{X}))$. For brevity, an asymptotically achievable pair will be referred to as an achievable pair.

Remark. It is clear from the definition that if $(R, D)$ is achievable, then $(R', D)$ and $(R, D')$ are also achievable for all $R' \ge R$ and $D' \ge D$.

Definition 8.11. The rate-distortion region is the subset of $\Re^2$ containing all achievable pairs $(R, D)$.

Theorem 8.12. The rate-distortion region is closed and convex.

Proof. We first show that the rate-distortion region is closed. Consider achievable rate-distortion pairs $(R^{(k)}, D^{(k)})$ such that
$$\lim_{k \to \infty} (R^{(k)}, D^{(k)}) = (R, D) \quad (8.24)$$
componentwise. Then for any $\epsilon > 0$, for all $k$, there exists for sufficiently large $n$ an $(n, M^{(k)})$ code such that
$$\frac{1}{n} \log M^{(k)} \le R^{(k)} + \epsilon \quad (8.25)$$
and
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}^{(k)}) > D^{(k)} + \epsilon\} \le \epsilon, \quad (8.26)$$
where $f^{(k)}$ and $g^{(k)}$ are respectively the encoding function and the decoding function of the $(n, M^{(k)})$ code, and $\hat{\mathbf{X}}^{(k)} = g^{(k)}(f^{(k)}(\mathbf{X}))$. By virtue of (8.24), let $k(\epsilon)$ be an integer such that for all $k > k(\epsilon)$,
$$|R - R^{(k)}| < \epsilon \quad (8.27)$$
and
$$|D - D^{(k)}| < \epsilon, \quad (8.28)$$
which imply
$$R^{(k)} < R + \epsilon \quad (8.29)$$
and
$$D^{(k)} < D + \epsilon, \quad (8.30)$$
respectively. Then for all $k > k(\epsilon)$,
$$\frac{1}{n} \log M^{(k)} \le R^{(k)} + \epsilon < R + 2\epsilon \quad (8.31)$$
and
$$\begin{aligned} \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}^{(k)}) > D + 2\epsilon\} &\le \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}^{(k)}) > D^{(k)} + \epsilon\} & (8.32)\\ &\le \epsilon. & (8.33) \end{aligned}$$
Note that (8.32) follows because
$$D + 2\epsilon > D^{(k)} + \epsilon \quad (8.34)$$
by (8.30). From (8.31) and (8.33), we see that $(R, D)$ is also achievable. Thus we have proved that the rate-distortion region is closed.

We will prove the convexity of the rate-distortion region by a time-sharing argument whose idea is the following. Roughly speaking, if we can use a code $\mathcal{C}_1$ to achieve $(R^{(1)}, D^{(1)})$ and a code $\mathcal{C}_2$ to achieve $(R^{(2)}, D^{(2)})$, then for any rational number $\lambda$ between 0 and 1, we can use $\mathcal{C}_1$ for a fraction $\lambda$ of the time and use $\mathcal{C}_2$ for a fraction $\bar\lambda$ of the time to achieve $(R^{(\lambda)}, D^{(\lambda)})$, where
$$R^{(\lambda)} = \lambda R^{(1)} + \bar\lambda R^{(2)} \quad (8.35)$$
$$D^{(\lambda)} = \lambda D^{(1)} + \bar\lambda D^{(2)}, \quad (8.36)$$
and $\bar\lambda = 1 - \lambda$. Since the rate-distortion region is closed as we have proved, $\lambda$ can be taken as any real number between 0 and 1, and the convexity of the region follows.

We now give a formal proof for the convexity of the rate-distortion region. Let
$$\lambda = \frac{r}{r+s}, \quad (8.37)$$
where $r$ and $s$ are positive integers. Then $\lambda$ is a rational number between 0 and 1. We now prove that if $(R^{(1)}, D^{(1)})$ and $(R^{(2)}, D^{(2)})$ are achievable, then $(R^{(\lambda)}, D^{(\lambda)})$ is also achievable. Assume $(R^{(1)}, D^{(1)})$ and $(R^{(2)}, D^{(2)})$ are achievable. Then for any $\epsilon > 0$ and sufficiently large $n$, there exist an $(n, M^{(1)})$ code and an $(n, M^{(2)})$ code such that
$$\frac{1}{n} \log M^{(i)} \le R^{(i)} + \epsilon \quad (8.38)$$
and
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}^{(i)}) > D^{(i)} + \epsilon\} \le \epsilon, \quad (8.39)$$
$i = 1, 2$. Let
$$M(\lambda) = (M^{(1)})^r (M^{(2)})^s \quad (8.40)$$
and
$$n(\lambda) = (r+s)n. \quad (8.41)$$
We now construct an $(n(\lambda), M(\lambda))$ code by concatenating $r$ copies of the $(n, M^{(1)})$ code followed by $s$ copies of the $(n, M^{(2)})$ code. We call these $r+s$ codes subcodes of the $(n(\lambda), M(\lambda))$ code. For this code, let
$$\mathbf{Y} = (\mathbf{X}(1), \mathbf{X}(2), \cdots, \mathbf{X}(r+s)) \quad (8.42)$$
and
$$\hat{\mathbf{Y}} = (\hat{\mathbf{X}}(1), \hat{\mathbf{X}}(2), \cdots, \hat{\mathbf{X}}(r+s)), \quad (8.43)$$
where $\mathbf{X}(j)$ and $\hat{\mathbf{X}}(j)$ are the source sequence and the reproduction sequence of the $j$th subcode, respectively. Then for this $(n(\lambda), M(\lambda))$ code,
$$\begin{aligned} \frac{1}{n(\lambda)} \log M(\lambda) &= \frac{1}{(r+s)n} \log[(M^{(1)})^r (M^{(2)})^s] & (8.44)\\ &= \frac{1}{(r+s)n} (r \log M^{(1)} + s \log M^{(2)}) & (8.45)\\ &= \lambda \Big(\frac{1}{n}\log M^{(1)}\Big) + \bar\lambda \Big(\frac{1}{n}\log M^{(2)}\Big) & (8.46)\\ &\le \lambda(R^{(1)} + \epsilon) + \bar\lambda(R^{(2)} + \epsilon) & (8.47)\\ &= (\lambda R^{(1)} + \bar\lambda R^{(2)}) + \epsilon & (8.48)\\ &= R^{(\lambda)} + \epsilon, & (8.49) \end{aligned}$$
where (8.47) follows from (8.38), and
$$\begin{aligned} \Pr\{d(\mathbf{Y}, \hat{\mathbf{Y}}) > D^{(\lambda)} + \epsilon\} &= \Pr\Big\{\frac{1}{r+s}\sum_{j=1}^{r+s} d(\mathbf{X}(j), \hat{\mathbf{X}}(j)) > D^{(\lambda)} + \epsilon\Big\} & (8.50)\\ &\le \Pr\big\{d(\mathbf{X}(j), \hat{\mathbf{X}}(j)) > D^{(1)} + \epsilon \text{ for some } 1 \le j \le r \text{ or}\\ &\qquad\; d(\mathbf{X}(j), \hat{\mathbf{X}}(j)) > D^{(2)} + \epsilon \text{ for some } r+1 \le j \le r+s\big\} & (8.51)\\ &\le \sum_{j=1}^{r} \Pr\{d(\mathbf{X}(j), \hat{\mathbf{X}}(j)) > D^{(1)} + \epsilon\} + \sum_{j=r+1}^{r+s} \Pr\{d(\mathbf{X}(j), \hat{\mathbf{X}}(j)) > D^{(2)} + \epsilon\} & (8.52)\\ &\le (r+s)\epsilon, & (8.53) \end{aligned}$$
where (8.52) follows from the union bound and (8.53) follows from (8.39). Hence, we conclude that the rate-distortion pair $(R^{(\lambda)}, D^{(\lambda)})$ is achievable. This completes the proof of the theorem. □

Definition 8.13. The rate-distortion function $R(D)$ is the minimum of all rates $R$ for a given distortion $D$ such that $(R, D)$ is achievable.
Definition 8.14. The distortion-rate function $D(R)$ is the minimum of all distortions $D$ for a given rate $R$ such that $(R, D)$ is achievable.

Both the functions $R(D)$ and $D(R)$ are equivalent descriptions of the boundary of the rate-distortion region. They are sufficient to describe the rate-distortion region because the region is closed. Note that in defining $R(D)$, the minimum instead of the infimum is taken because for a fixed $D$, the set of all $R$ such that $(R, D)$ is achievable is closed and lower bounded by zero. Similarly, the minimum instead of the infimum is taken in defining $D(R)$. In the subsequent discussions, only $R(D)$ will be used.

Theorem 8.15. The following properties hold for the rate-distortion function $R(D)$:
1. $R(D)$ is non-increasing in $D$.
2. $R(D)$ is convex.
3. $R(D) = 0$ for $D \ge D_{\max}$.
4. $R(0) \le H(X)$.

Proof. From the remark following Definition 8.10, since $(R(D), D)$ is achievable, $(R(D), D')$ is also achievable for all $D' \ge D$. Therefore, $R(D) \ge R(D')$ because $R(D')$ is the minimum of all $R$ such that $(R, D')$ is achievable. This proves Property 1.

Property 2 follows immediately from the convexity of the rate-distortion region which was proved in Theorem 8.12.

From the discussion toward the end of the last section, we see that for any $\epsilon > 0$, it is possible to achieve
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D_{\max} + \epsilon\} \le \epsilon \quad (8.54)$$
for sufficiently large $n$ with no description of $\mathbf{X}$ available. Therefore, $(0, D)$ is achievable for all $D \ge D_{\max}$, proving Property 3.

Property 4 is a consequence of the assumption that the distortion measure $d$ is normal, which can be seen as follows. By the source coding theorem, for any $\epsilon > 0$, by using a rate no more than $H(X) + \epsilon$, we can describe the source sequence $\mathbf{X}$ of length $n$ with probability of error less than $\epsilon$ when $n$ is sufficiently large. Since $d$ is normal, for each $k \ge 1$, let
$$\hat{X}_k = \hat{x}^*(X_k) \quad (8.55)$$
(cf. Definition 8.5), so that whenever an error does not occur,
$$d(X_k, \hat{X}_k) = d(X_k, \hat{x}^*(X_k)) = 0 \quad (8.56)$$
by (8.8) for each $k$, and
$$d(\mathbf{X}, \hat{\mathbf{X}}) = \frac{1}{n}\sum_{k=1}^n d(X_k, \hat{X}_k) = \frac{1}{n}\sum_{k=1}^n d(X_k, \hat{x}^*(X_k)) = 0. \quad (8.57)$$
Therefore,
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > \epsilon\} \le \epsilon, \quad (8.58)$$
which shows that the pair $(H(X), 0)$ is achievable. This in turn implies that $R(0) \le H(X)$ because $R(0)$ is the minimum of all $R$ such that $(R, 0)$ is achievable. □

[Figure 8.2: A rate-distortion function R(D), starting at R(0) ≤ H(X), decreasing to 0 at D = Dmax, and bounding the rate-distortion region from below.]

Figure 8.2 is an illustration of a rate-distortion function $R(D)$. The reader should note the four properties of $R(D)$ in Theorem 8.15. The rate-distortion theorem, which will be stated in the next section, gives a characterization of $R(D)$.

8.3 The Rate-Distortion Theorem

Definition 8.16. For $D \ge 0$, the information rate-distortion function is defined by
$$R_I(D) = \min_{\hat{X}: Ed(X, \hat{X}) \le D} I(X; \hat{X}). \quad (8.59)$$

In defining $R_I(D)$, the minimization is taken over all random variables $\hat{X}$ jointly distributed with $X$ such that
$$Ed(X, \hat{X}) \le D. \quad (8.60)$$
Since $p(x)$ is given, the minimization is taken over the set of all $p(\hat{x}|x)$ such that (8.60) is satisfied, namely the set
$$\Big\{ p(\hat{x}|x) : \sum_{x, \hat{x}} p(x) p(\hat{x}|x) d(x, \hat{x}) \le D \Big\}. \quad (8.61)$$
Since this set is compact in $\Re^{|\mathcal{X}||\hat{\mathcal{X}}|}$ and $I(X; \hat{X})$ is a continuous functional of $p(\hat{x}|x)$, the minimum value of $I(X; \hat{X})$ can be attained. (The assumption that both $\mathcal{X}$ and $\hat{\mathcal{X}}$ are finite is essential in this argument.) This justifies taking the minimum instead of the infimum in the definition of $R_I(D)$.
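Since (8.59) is a finite-dimensional minimization over the compact set (8.61), it can in principle be approximated by direct search. The following rough sketch (our own, for binary $\mathcal{X}$ and $\hat{\mathcal{X}}$ only; a coarse grid over the conditional distributions stands in for the true minimization) illustrates the definition. The closed form it is compared against is derived in Example 8.20 below; Chapter 9 develops a proper algorithm.

```python
import numpy as np

def mutual_information(p_x, q):
    """I(X; Xhat) in bits for source p_x and conditional q[x, xhat]."""
    joint = p_x[:, None] * q
    p_xhat = joint.sum(axis=0)
    mask = joint > 0
    denom = p_x[:, None] * p_xhat[None, :]
    return float((joint[mask] * np.log2(joint[mask] / denom[mask])).sum())

def R_I_grid(p_x, d, D, steps=101):
    """Coarse-grid approximation of (8.59) for binary X and Xhat."""
    best = np.inf
    for a in np.linspace(0.0, 1.0, steps):      # a = q(1|0)
        for b in np.linspace(0.0, 1.0, steps):  # b = q(0|1)
            q = np.array([[1 - a, a], [b, 1 - b]])
            if (p_x[:, None] * q * d).sum() <= D:   # constraint (8.60)
                best = min(best, mutual_information(p_x, q))
    return best

def hb(t):
    return 0.0 if t <= 0 or t >= 1 else float(-t*np.log2(t) - (1-t)*np.log2(1-t))

gamma = 0.3
p_x = np.array([1 - gamma, gamma])
d = np.array([[0.0, 1.0], [1.0, 0.0]])          # Hamming distortion
for D in [0.05, 0.1, 0.2]:
    print(D, R_I_grid(p_x, d, D), hb(gamma) - hb(D))   # grid value vs closed form
```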
We have seen in Section 8.1 that we can obtain a normalization $\tilde{d}$ for any distortion measure $d$ with
$$E\tilde{d}(X, \hat{X}) = Ed(X, \hat{X}) - \Delta \quad (8.62)$$
for any $\hat{X}$, where $\Delta$ is a constant which depends only on $p(x)$ and $d$. Thus if $d$ is not normal, we can always replace $d$ by $\tilde{d}$ and $D$ by $D - \Delta$ in the definition of $R_I(D)$ without changing the minimization problem. Therefore, we do not lose any generality by assuming that a distortion measure $d$ is normal.

Theorem 8.17 (The Rate-Distortion Theorem). $R(D) = R_I(D)$.

The rate-distortion theorem, which is the main result in rate-distortion theory, says that the minimum coding rate for achieving a distortion $D$ is $R_I(D)$. This theorem will be proved in the next two sections. In the next section, we will prove the converse of this theorem, i.e., $R(D) \ge R_I(D)$, and in Section 8.5, we will prove the achievability of $R_I(D)$, i.e., $R(D) \le R_I(D)$.

In order for $R_I(D)$ to be a characterization of $R(D)$, it has to satisfy the same properties as $R(D)$. In particular, the four properties of $R(D)$ in Theorem 8.15 should also be satisfied by $R_I(D)$.

Theorem 8.18. The following properties hold for the information rate-distortion function $R_I(D)$:
1. $R_I(D)$ is non-increasing in $D$.
2. $R_I(D)$ is convex.
3. $R_I(D) = 0$ for $D \ge D_{\max}$.
4. $R_I(0) \le H(X)$.

Proof. Referring to the definition of $R_I(D)$ in (8.59), for a larger $D$, the minimization is taken over a larger set. Therefore, $R_I(D)$ is non-increasing in $D$, proving Property 1.

To prove Property 2, consider any $D^{(1)}, D^{(2)} \ge 0$ and let $\lambda$ be any number between 0 and 1. Let $\hat{X}^{(i)}$ achieve $R_I(D^{(i)})$ for $i = 1, 2$, i.e.,
$$R_I(D^{(i)}) = I(X; \hat{X}^{(i)}), \quad (8.63)$$
where
$$Ed(X, \hat{X}^{(i)}) \le D^{(i)}, \quad (8.64)$$
and let $\hat{X}^{(i)}$ be defined by the transition matrix $p_i(\hat{x}|x)$. Let $\hat{X}^{(\lambda)}$ be jointly distributed with $X$, defined by
$$p_\lambda(\hat{x}|x) = \lambda p_1(\hat{x}|x) + \bar\lambda p_2(\hat{x}|x), \quad (8.65)$$
where $\bar\lambda = 1 - \lambda$. Then
$$\begin{aligned} Ed(X, \hat{X}^{(\lambda)}) &= \sum_{x,\hat{x}} p(x) p_\lambda(\hat{x}|x) d(x, \hat{x}) & (8.66)\\ &= \sum_{x,\hat{x}} p(x)(\lambda p_1(\hat{x}|x) + \bar\lambda p_2(\hat{x}|x)) d(x, \hat{x}) & (8.67)\\ &= \lambda \Big(\sum_{x,\hat{x}} p(x) p_1(\hat{x}|x) d(x,\hat{x})\Big) + \bar\lambda \Big(\sum_{x,\hat{x}} p(x) p_2(\hat{x}|x) d(x,\hat{x})\Big) & (8.68)\\ &= \lambda Ed(X, \hat{X}^{(1)}) + \bar\lambda Ed(X, \hat{X}^{(2)}) & (8.69)\\ &\le \lambda D^{(1)} + \bar\lambda D^{(2)} & (8.70)\\ &= D^{(\lambda)}, & (8.71) \end{aligned}$$
where
$$D^{(\lambda)} = \lambda D^{(1)} + \bar\lambda D^{(2)}, \quad (8.72)$$
and (8.70) follows from (8.64). Now consider
$$\begin{aligned} \lambda R_I(D^{(1)}) + \bar\lambda R_I(D^{(2)}) &= \lambda I(X; \hat{X}^{(1)}) + \bar\lambda I(X; \hat{X}^{(2)}) & (8.73)\\ &\ge I(X; \hat{X}^{(\lambda)}) & (8.74)\\ &\ge R_I(D^{(\lambda)}), & (8.75) \end{aligned}$$
where the inequality in (8.74) follows from the convexity of mutual information with respect to the transition matrix $p(\hat{x}|x)$ (see Example 3.13), and the inequality in (8.75) follows from (8.71) and the definition of $R_I(D)$. Therefore, we have proved Property 2.

To prove Property 3, let $\hat{X}$ take the value $\hat{x}^*$ as defined in Definition 8.7 with probability 1. Then
$$I(X; \hat{X}) = 0 \quad (8.76)$$
and
$$Ed(X, \hat{X}) = Ed(X, \hat{x}^*) = D_{\max}. \quad (8.77)$$
Then for $D \ge D_{\max}$,
$$R_I(D) \le I(X; \hat{X}) = 0. \quad (8.78)$$
On the other hand, since $R_I(D)$ is nonnegative, we conclude that
$$R_I(D) = 0. \quad (8.79)$$
This proves Property 3.

Finally, to prove Property 4, we let
$$\hat{X} = \hat{x}^*(X), \quad (8.80)$$
where $\hat{x}^*(x)$ is defined in Definition 8.5. Then
$$\begin{aligned} Ed(X, \hat{X}) &= Ed(X, \hat{x}^*(X)) & (8.81)\\ &= \sum_x p(x) d(x, \hat{x}^*(x)) & (8.82)\\ &= 0 & (8.83) \end{aligned}$$
by (8.8), since we assume that $d$ is a normal distortion measure. Moreover,
$$R_I(0) \le I(X; \hat{X}) \le H(X). \quad (8.84)$$
Then Property 4 and hence the theorem is proved. □

Corollary 8.19. If $R_I(0) > 0$, then $R_I(D)$ is strictly decreasing for $0 \le D \le D_{\max}$, and the inequality constraint in Definition 8.16 for $R_I(D)$ can be replaced by an equality constraint.
Proof. Assume that $R_I(0) > 0$. We first show that $R_I(D) > 0$ for $0 \le D < D_{\max}$ by contradiction. Suppose $R_I(D') = 0$ for some $0 \le D' < D_{\max}$, and let $R_I(D')$ be achieved by some $\hat{X}$. Then
$$R_I(D') = I(X; \hat{X}) = 0 \quad (8.85)$$
implies that $X$ and $\hat{X}$ are independent, or
$$p(x, \hat{x}) = p(x) p(\hat{x}) \quad (8.86)$$
for all $x$ and $\hat{x}$. It follows that
$$\begin{aligned} D' &\ge Ed(X, \hat{X}) & (8.87)\\ &= \sum_x \sum_{\hat{x}} p(x, \hat{x}) d(x, \hat{x}) & (8.88)\\ &= \sum_x \sum_{\hat{x}} p(x) p(\hat{x}) d(x, \hat{x}) & (8.89)\\ &= \sum_{\hat{x}} p(\hat{x}) \sum_x p(x) d(x, \hat{x}) & (8.90)\\ &= \sum_{\hat{x}} p(\hat{x}) Ed(X, \hat{x}) & (8.91)\\ &\ge \sum_{\hat{x}} p(\hat{x}) Ed(X, \hat{x}^*) & (8.92)\\ &= \sum_{\hat{x}} p(\hat{x}) D_{\max} & (8.93)\\ &= D_{\max}, & (8.94) \end{aligned}$$
where $\hat{x}^*$ and $D_{\max}$ are defined in Definition 8.7. This leads to a contradiction because we have assumed that $0 \le D' < D_{\max}$. Therefore, we conclude that $R_I(D) > 0$ for $0 \le D < D_{\max}$.

Since $R_I(0) > 0$ and $R_I(D_{\max}) = 0$, and $R_I(D)$ is non-increasing and convex from the above theorem, $R_I(D)$ must be strictly decreasing for $0 \le D \le D_{\max}$. We now prove by contradiction that the inequality constraint in Definition 8.16 for $R_I(D)$ can be replaced by an equality constraint. Assume that $R_I(D)$ is achieved by some $\hat{X}^*$ such that
$$Ed(X, \hat{X}^*) = D'' < D. \quad (8.95)$$
Then
$$R_I(D'') = \min_{\hat{X}: Ed(X,\hat{X}) \le D''} I(X; \hat{X}) \le I(X; \hat{X}^*) = R_I(D). \quad (8.96)$$
This is a contradiction because $R_I(D)$ is strictly decreasing for $0 \le D \le D_{\max}$. Hence,
$$Ed(X, \hat{X}^*) = D. \quad (8.97)$$
This implies that the inequality constraint in Definition 8.16 for $R_I(D)$ can be replaced by an equality constraint. □

Remark. In all problems of interest, $R(0) = R_I(0) > 0$. Otherwise, $R(D) = 0$ for all $D \ge 0$ because $R(D)$ is nonnegative and non-increasing.

Example 8.20 (Binary Source). Let $X$ be a binary random variable with
$$\Pr\{X = 0\} = 1 - \gamma \quad \text{and} \quad \Pr\{X = 1\} = \gamma. \quad (8.98)$$
Let $\hat{\mathcal{X}} = \{0, 1\}$ be the reproduction alphabet for $X$, and let $d$ be the Hamming distortion measure. We first consider the case that $0 \le \gamma \le \frac{1}{2}$. Then if we make a guess on the value of $X$, we should guess 0 in order to minimize the expected distortion. Therefore, $\hat{x}^* = 0$ and
$$\begin{aligned} D_{\max} &= Ed(X, 0) & (8.99)\\ &= \Pr\{X = 1\} & (8.100)\\ &= \gamma. & (8.101) \end{aligned}$$
We will show that for $0 \le \gamma \le \frac{1}{2}$,
$$R_I(D) = \begin{cases} h_b(\gamma) - h_b(D) & \text{if } 0 \le D < \gamma\\ 0 & \text{if } D \ge \gamma. \end{cases} \quad (8.102)$$

Let $\hat{X}$ be an estimate of $X$ taking values in $\hat{\mathcal{X}}$, and let $Y$ be the Hamming distortion measure between $X$ and $\hat{X}$, i.e.,
$$Y = d(X, \hat{X}). \quad (8.103)$$
Observe that conditioning on $\hat{X}$, $X$ and $Y$ determine each other. Therefore,
$$H(X|\hat{X}) = H(Y|\hat{X}). \quad (8.104)$$
Then for $D < \gamma = D_{\max}$ and any $\hat{X}$ such that
$$Ed(X, \hat{X}) \le D, \quad (8.105)$$
we have
$$\begin{aligned} I(X; \hat{X}) &= H(X) - H(X|\hat{X}) & (8.106)\\ &= h_b(\gamma) - H(Y|\hat{X}) & (8.107)\\ &\ge h_b(\gamma) - H(Y) & (8.108)\\ &= h_b(\gamma) - h_b(\Pr\{X \ne \hat{X}\}) & (8.109)\\ &\ge h_b(\gamma) - h_b(D), & (8.110) \end{aligned}$$
where the last inequality is justified because
$$\Pr\{X \ne \hat{X}\} = Ed(X, \hat{X}) \le D \quad (8.111)$$
and $h_b(a)$ is increasing for $0 \le a \le \frac{1}{2}$. Minimizing over all $\hat{X}$ satisfying (8.105) in (8.110), we obtain the lower bound
$$R_I(D) \ge h_b(\gamma) - h_b(D). \quad (8.112)$$
To show that this lower bound is achievable, we need to construct an $\hat{X}$ such that the inequalities in both (8.108) and (8.110) are tight. The tightness of the inequality in (8.110) simply says that
$$\Pr\{X \ne \hat{X}\} = D, \quad (8.113)$$
while the tightness of the inequality in (8.108) says that $Y$ should be independent of $\hat{X}$. It would be more difficult to make $Y$ independent of $\hat{X}$ if we specify $\hat{X}$ by $p(\hat{x}|x)$.

[Figure 8.3: Achieving RI(D) for a binary source via a reverse binary symmetric channel with crossover probability D.]
Instead, we specify the joint distribution of $X$ and $\hat{X}$ by means of a reverse binary symmetric channel (BSC) with crossover probability $D$, as shown in Figure 8.3. Here, we regard $\hat{X}$ as the input and $X$ as the output of the BSC. Then $Y$ is independent of the input $\hat{X}$ because the error event is independent of the input for a BSC, and (8.113) is satisfied by setting the crossover probability to $D$. However, we need to ensure that the marginal distribution of $X$ so specified is equal to $p(x)$. Toward this end, we let
$$\Pr\{\hat{X} = 1\} = \alpha \quad (8.114)$$
and consider
$$\Pr\{X = 1\} = \Pr\{\hat{X} = 0\}\Pr\{X = 1|\hat{X} = 0\} + \Pr\{\hat{X} = 1\}\Pr\{X = 1|\hat{X} = 1\}, \quad (8.115)$$
or
$$\gamma = (1 - \alpha)D + \alpha(1 - D), \quad (8.116)$$
which gives
$$\alpha = \frac{\gamma - D}{1 - 2D}. \quad (8.117)$$
Since
$$D < D_{\max} = \gamma \le \frac{1}{2}, \quad (8.118)$$
we have $\alpha \ge 0$. On the other hand,
$$\gamma, D \le \frac{1}{2} \quad (8.119)$$
gives
$$\gamma + D \le 1. \quad (8.120)$$
This implies
$$\gamma - D \le 1 - 2D, \quad (8.121)$$
or $\alpha \le 1$. Therefore,
$$0 \le \alpha = \Pr\{\hat{X} = 1\} \le 1 \quad (8.122)$$
and
$$0 \le 1 - \alpha = \Pr\{\hat{X} = 0\} \le 1. \quad (8.123)$$
Hence, we have shown that the lower bound on $R_I(D)$ in (8.110) can be achieved, and $R_I(D)$ is as given in (8.102).

For $\frac{1}{2} \le \gamma \le 1$, by exchanging the roles of the symbols 0 and 1 in the above argument, we obtain $R_I(D)$ as in (8.102) except that $\gamma$ is replaced by $1 - \gamma$. Combining the two cases, we have
$$R_I(D) = \begin{cases} h_b(\gamma) - h_b(D) & \text{if } 0 \le D < \min(\gamma, 1-\gamma)\\ 0 & \text{if } D \ge \min(\gamma, 1-\gamma) \end{cases} \quad (8.124)$$
for $0 \le \gamma \le 1$.

[Figure 8.4: The function RI(D) for the uniform binary source with the Hamming distortion measure, decreasing from 1 at D = 0 to 0 at D = 0.5.]

The function $R_I(D)$ for $\gamma = \frac{1}{2}$ is illustrated in Figure 8.4.

Remark. In the above example, we see that $R_I(0) = h_b(\gamma) = H(X)$. Then by the rate-distortion theorem, $H(X)$ is the minimum rate of a rate-distortion code which achieves an arbitrarily small average Hamming distortion. It is tempting to regard this special case of the rate-distortion theorem as a version of the source coding theorem and to conclude that the rate-distortion theorem is a generalization of the source coding theorem. However, this is incorrect because the rate-distortion theorem only guarantees that the average Hamming distortion between $\mathbf{X}$ and $\hat{\mathbf{X}}$ is small with probability arbitrarily close to 1, whereas the source coding theorem guarantees that $\mathbf{X} = \hat{\mathbf{X}}$ with probability arbitrarily close to 1, which is much stronger.

It is in general not possible to obtain the rate-distortion function in closed form, and we have to resort to numerical computation. In Chapter 9, we will discuss the Blahut-Arimoto algorithm for computing the rate-distortion function.
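The closed form (8.124) and the reverse-BSC construction can be packaged as follows (a sketch of our own; the function names are ours):

```python
import numpy as np

def hb(p):
    """Binary entropy function in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def rate_distortion_binary(gamma, D):
    """R(D) for a binary source with Pr{X=1} = gamma under Hamming distortion, cf. (8.124)."""
    if D >= min(gamma, 1 - gamma):
        return 0.0
    return hb(gamma) - hb(D)

def reverse_bsc_input(gamma, D):
    """Pr{Xhat = 1} for the reverse-BSC construction, cf. (8.117); assumes gamma <= 1/2, D < gamma."""
    return (gamma - D) / (1 - 2 * D)

gamma, D = 0.3, 0.1
alpha = reverse_bsc_input(gamma, D)
# Sanity check: pushing alpha through the BSC recovers the source distribution (8.116).
assert np.isclose((1 - alpha) * D + alpha * (1 - D), gamma)
print(rate_distortion_binary(gamma, D))   # hb(0.3) - hb(0.1), about 0.412 bits
```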
8.4 The Converse

In this section, we prove that the rate-distortion function $R(D)$ is lower bounded by the information rate-distortion function $R_I(D)$, i.e., $R(D) \ge R_I(D)$. Specifically, we will prove that for any achievable rate-distortion pair $(R, D)$, $R \ge R_I(D)$. Then by fixing $D$ and minimizing $R$ over all achievable pairs $(R, D)$, we conclude that $R(D) \ge R_I(D)$.

Let $(R, D)$ be any achievable rate-distortion pair. Then for any $\epsilon > 0$, there exists for sufficiently large $n$ an $(n, M)$ code such that
$$\frac{1}{n} \log M \le R + \epsilon \quad (8.125)$$
and
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon\} \le \epsilon, \quad (8.126)$$
where $\hat{\mathbf{X}} = g(f(\mathbf{X}))$. Then
$$\begin{aligned} n(R + \epsilon) &\overset{a)}{\ge} \log M & (8.127)\\ &\ge H(f(\mathbf{X})) & (8.128)\\ &\ge H(g(f(\mathbf{X}))) & (8.129)\\ &= H(\hat{\mathbf{X}}) & (8.130)\\ &= H(\hat{\mathbf{X}}) - H(\hat{\mathbf{X}}|\mathbf{X}) & (8.131)\\ &= I(\hat{\mathbf{X}}; \mathbf{X}) & (8.132)\\ &= H(\mathbf{X}) - H(\mathbf{X}|\hat{\mathbf{X}}) & (8.133)\\ &= \sum_{k=1}^n H(X_k) - \sum_{k=1}^n H(X_k|\hat{\mathbf{X}}, X_1, X_2, \cdots, X_{k-1}) & (8.134)\\ &\overset{b)}{\ge} \sum_{k=1}^n H(X_k) - \sum_{k=1}^n H(X_k|\hat{X}_k) & (8.135)\\ &= \sum_{k=1}^n [H(X_k) - H(X_k|\hat{X}_k)] & (8.136)\\ &= \sum_{k=1}^n I(X_k; \hat{X}_k) & (8.137)\\ &\overset{c)}{\ge} \sum_{k=1}^n R_I(Ed(X_k, \hat{X}_k)) & (8.138)\\ &= n \Big[\frac{1}{n}\sum_{k=1}^n R_I(Ed(X_k, \hat{X}_k))\Big] & (8.139)\\ &\overset{d)}{\ge} n R_I\Big(\frac{1}{n}\sum_{k=1}^n Ed(X_k, \hat{X}_k)\Big) & (8.140)\\ &= n R_I(Ed(\mathbf{X}, \hat{\mathbf{X}})). & (8.141) \end{aligned}$$
In the above, a) follows from (8.125); b) follows because conditioning does not increase entropy; c) follows from the definition of $R_I(D)$ in Definition 8.16; d) follows from the convexity of $R_I(D)$ proved in Theorem 8.18 and Jensen's inequality. (Note that (8.131) holds because $\hat{\mathbf{X}}$ is a function of $\mathbf{X}$, so that $H(\hat{\mathbf{X}}|\mathbf{X}) = 0$.)

Now let
$$d_{\max} = \max_{x, \hat{x}} d(x, \hat{x}) \quad (8.142)$$
be the maximum value which can be taken by the distortion measure $d$. The reader should not confuse $d_{\max}$ with $D_{\max}$ in Definition 8.7. Then from (8.126), we have
$$\begin{aligned} Ed(\mathbf{X}, \hat{\mathbf{X}}) &= E[d(\mathbf{X},\hat{\mathbf{X}})\,|\,d(\mathbf{X},\hat{\mathbf{X}}) > D+\epsilon]\Pr\{d(\mathbf{X},\hat{\mathbf{X}}) > D+\epsilon\}\\ &\quad + E[d(\mathbf{X},\hat{\mathbf{X}})\,|\,d(\mathbf{X},\hat{\mathbf{X}}) \le D+\epsilon]\Pr\{d(\mathbf{X},\hat{\mathbf{X}}) \le D+\epsilon\} & (8.143)\\ &\le d_{\max} \cdot \epsilon + (D + \epsilon) \cdot 1 & (8.144)\\ &= D + (d_{\max} + 1)\epsilon. & (8.145) \end{aligned}$$
This shows that if the probability that the average distortion between $\mathbf{X}$ and $\hat{\mathbf{X}}$ exceeds $D + \epsilon$ is small, then the expected average distortion between $\mathbf{X}$ and $\hat{\mathbf{X}}$ can exceed $D$ only by a small amount (the converse is not true). Following (8.141), we have
$$\begin{aligned} R + \epsilon &\ge R_I(Ed(\mathbf{X}, \hat{\mathbf{X}})) & (8.146)\\ &\ge R_I(D + (d_{\max} + 1)\epsilon), & (8.147) \end{aligned}$$
where the last inequality follows from (8.145) because $R_I(D)$ is non-increasing in $D$. We note that the convexity of $R_I(D)$ implies that it is a continuous function of $D$. Then taking the limit as $\epsilon \to 0$, we obtain
$$\begin{aligned} R &\ge \lim_{\epsilon \to 0} R_I(D + (d_{\max} + 1)\epsilon) & (8.148)\\ &= R_I\Big(D + (d_{\max} + 1)\lim_{\epsilon \to 0}\epsilon\Big) & (8.149)\\ &= R_I(D), & (8.150) \end{aligned}$$
where we have invoked the continuity of $R_I(D)$ in obtaining (8.149). Upon minimizing $R$ over all achievable pairs $(R, D)$ for a fixed $D$ in (8.150), we have proved that
$$R(D) \ge R_I(D). \quad (8.151)$$
This completes the proof for the converse of the rate-distortion theorem.

8.5 Achievability of $R_I(D)$

In this section, we prove that the rate-distortion function $R(D)$ is upper bounded by the information rate-distortion function $R_I(D)$, i.e., $R(D) \le R_I(D)$. Then by combining with the result $R(D) \ge R_I(D)$ from the last section, we conclude that $R(D) = R_I(D)$, and the rate-distortion theorem is proved.

For any $0 \le D \le D_{\max}$, we will prove that for every random variable $\hat{X}$ taking values in $\hat{\mathcal{X}}$ such that
$$Ed(X, \hat{X}) \le D, \quad (8.152)$$
the rate-distortion pair $(I(X; \hat{X}), D)$ is achievable. This will be proved by showing for sufficiently large $n$ the existence of a rate-distortion code such that
1. the rate of the code is not more than $I(X; \hat{X}) + \epsilon$;
2. $d(\mathbf{X}, \hat{\mathbf{X}}) \le D + \epsilon$ with probability almost 1.
Then by minimizing $I(X; \hat{X})$ over all $\hat{X}$ satisfying (8.152), we conclude that the rate-distortion pair $(R_I(D), D)$ is achievable, which implies $R(D) \le R_I(D)$ because $R(D)$ is the minimum of all $R$ such that $(R, D)$ is achievable.

Fix any $0 \le D \le D_{\max}$ and any $\epsilon > 0$, and let $\delta$ be a small positive quantity to be specified later. Toward proving the existence of a desired code, we fix a random variable $\hat{X}$ which satisfies (8.152) and let $M$ be an integer satisfying
$$I(X; \hat{X}) + \frac{\epsilon}{2} \le \frac{1}{n} \log M \le I(X; \hat{X}) + \epsilon, \quad (8.153)$$
where $n$ is sufficiently large. We now describe a random coding scheme in the following steps:

1. Construct a codebook $\mathcal{C}$ of an $(n, M)$ code by randomly generating $M$ codewords in $\hat{\mathcal{X}}^n$ independently and identically according to $p(\hat{x})^n$.
Denote these codewords by $\hat{\mathbf{X}}(1), \hat{\mathbf{X}}(2), \cdots, \hat{\mathbf{X}}(M)$.
2. Reveal the codebook $\mathcal{C}$ to both the encoder and the decoder.
3. The source sequence $\mathbf{X}$ is generated according to $p(x)^n$.
4. The encoder encodes the source sequence $\mathbf{X}$ into an index $K$ in the set $\mathcal{I} = \{1, 2, \cdots, M\}$. The index $K$ takes the value $i$ if
   a) $(\mathbf{X}, \hat{\mathbf{X}}(i)) \in T^n_{[X\hat{X}]\delta}$,
   b) for all $i' \in \mathcal{I}$, if $(\mathbf{X}, \hat{\mathbf{X}}(i')) \in T^n_{[X\hat{X}]\delta}$, then $i' \le i$;
   otherwise, $K$ takes the constant value 1.
5. The index $K$ is delivered to the decoder.
6. The decoder outputs $\hat{\mathbf{X}}(K)$ as the reproduction sequence $\hat{\mathbf{X}}$.

Remark. Strong typicality is used in defining the encoding function in Step 4. This is made possible by the assumption that both the source alphabet $\mathcal{X}$ and the reproduction alphabet $\hat{\mathcal{X}}$ are finite.

Let us further explain the encoding scheme described in Step 4. After the source sequence $\mathbf{X}$ is generated, we search through all the codewords in the codebook $\mathcal{C}$ for those which are jointly typical with $\mathbf{X}$ with respect to $p(x, \hat{x})$. If there is at least one such codeword, we let $i$ be the largest index of such codewords and let $K = i$. If such a codeword does not exist, we let $K = 1$.

The event $\{K = 1\}$ occurs in one of the following two scenarios:
1. $\hat{\mathbf{X}}(1)$ is the only codeword in $\mathcal{C}$ which is jointly typical with $\mathbf{X}$.
2. No codeword in $\mathcal{C}$ is jointly typical with $\mathbf{X}$.
In either scenario, $\mathbf{X}$ is not jointly typical with the codewords $\hat{\mathbf{X}}(2), \hat{\mathbf{X}}(3), \cdots, \hat{\mathbf{X}}(M)$. In other words, if $K = 1$, then $\mathbf{X}$ is jointly typical with none of the codewords $\hat{\mathbf{X}}(2), \hat{\mathbf{X}}(3), \cdots, \hat{\mathbf{X}}(M)$.

Define
$$E_i = \left\{(\mathbf{X}, \hat{\mathbf{X}}(i)) \in T^n_{[X\hat{X}]\delta}\right\} \quad (8.154)$$
to be the event that $\mathbf{X}$ is jointly typical with the codeword $\hat{\mathbf{X}}(i)$. We see from the above discussion that
$$\{K = 1\} \subset E_2^c \cap E_3^c \cap \cdots \cap E_M^c. \quad (8.155)$$
Since the codewords are generated i.i.d., conditioning on $\{\mathbf{X} = \mathbf{x}\}$ for any $\mathbf{x} \in \mathcal{X}^n$, the events $E_i$ are mutually independent (without this conditioning, the events $E_i$ are not mutually independent because they depend on each other through $\mathbf{X}$), and they all have the same probability. Then for any $\mathbf{x} \in \mathcal{X}^n$,
$$\begin{aligned} \Pr\{K = 1|\mathbf{X} = \mathbf{x}\} &\le \Pr\{E_2^c \cap E_3^c \cap \cdots \cap E_M^c \,|\, \mathbf{X} = \mathbf{x}\} & (8.156)\\ &= \prod_{i=2}^M \Pr\{E_i^c \,|\, \mathbf{X} = \mathbf{x}\} & (8.157)\\ &= (\Pr\{E_1^c|\mathbf{X} = \mathbf{x}\})^{M-1} & (8.158)\\ &= (1 - \Pr\{E_1|\mathbf{X} = \mathbf{x}\})^{M-1}. & (8.159) \end{aligned}$$
We now obtain a lower bound on $\Pr\{E_1|\mathbf{X} = \mathbf{x}\}$ for $\mathbf{x} \in S^n_{[X]\delta}$, where
$$S^n_{[X]\delta} = \{\mathbf{x} \in T^n_{[X]\delta} : |T^n_{[\hat{X}|X]\delta}(\mathbf{x})| \ge 1\} \quad (8.160)$$
(cf. Section 6.3). Consider
$$\begin{aligned} \Pr\{E_1|\mathbf{X} = \mathbf{x}\} &= \Pr\left\{(\mathbf{x}, \hat{\mathbf{X}}(1)) \in T^n_{[X\hat{X}]\delta}\right\} & (8.161)\\ &= \sum_{\hat{\mathbf{x}}: (\mathbf{x},\hat{\mathbf{x}}) \in T^n_{[X\hat{X}]\delta}} p(\hat{\mathbf{x}}). & (8.162) \end{aligned}$$
The summation above is over all $\hat{\mathbf{x}}$ such that $(\mathbf{x}, \hat{\mathbf{x}}) \in T^n_{[X\hat{X}]\delta}$. From the consistency of strong typicality (Theorem 6.7), if $(\mathbf{x}, \hat{\mathbf{x}}) \in T^n_{[X\hat{X}]\delta}$, then $\hat{\mathbf{x}} \in T^n_{[\hat{X}]\delta}$. By the strong AEP (Theorem 6.2), all $p(\hat{\mathbf{x}})$ in the above summation satisfy
$$p(\hat{\mathbf{x}}) \ge 2^{-n(H(\hat{X}) + \eta)}, \quad (8.163)$$
where $\eta \to 0$ as $\delta \to 0$. By the conditional strong AEP (Theorem 6.10),
$$|T^n_{[\hat{X}|X]\delta}(\mathbf{x})| \ge 2^{n(H(\hat{X}|X) - \xi)}, \quad (8.164)$$
where $\xi \to 0$ as $\delta \to 0$. Then from (8.162), we have
$$\begin{aligned} \Pr\{E_1|\mathbf{X} = \mathbf{x}\} &\ge 2^{n(H(\hat{X}|X) - \xi)} 2^{-n(H(\hat{X}) + \eta)} & (8.165)\\ &= 2^{-n(H(\hat{X}) - H(\hat{X}|X) + \xi + \eta)} & (8.166)\\ &= 2^{-n(I(X;\hat{X}) + \zeta)}, & (8.167) \end{aligned}$$
where
$$\zeta = \xi + \eta \to 0 \quad (8.168)$$
as $\delta \to 0$. Following (8.159), we have
$$\Pr\{K = 1|\mathbf{X} = \mathbf{x}\} \le \left[1 - 2^{-n(I(X;\hat{X}) + \zeta)}\right]^{M-1}. \quad (8.169)$$
The lower bound in (8.153) implies
$$M \ge 2^{n(I(X;\hat{X}) + \frac{\epsilon}{2})}. \quad (8.170)$$
Then upon taking the natural logarithm in (8.169), we obtain
$$\begin{aligned} \ln \Pr\{K = 1|\mathbf{X} = \mathbf{x}\} &\le (M - 1) \ln\left[1 - 2^{-n(I(X;\hat{X}) + \zeta)}\right] & (8.171)\\ &\overset{a)}{\le} \left(2^{n(I(X;\hat{X}) + \frac{\epsilon}{2})} - 1\right) \ln\left[1 - 2^{-n(I(X;\hat{X}) + \zeta)}\right] & (8.172)\\ &\overset{b)}{\le} -\left(2^{n(I(X;\hat{X}) + \frac{\epsilon}{2})} - 1\right) 2^{-n(I(X;\hat{X}) + \zeta)} & (8.173)\\ &= -\left[2^{n(\frac{\epsilon}{2} - \zeta)} - 2^{-n(I(X;\hat{X}) + \zeta)}\right]. & (8.174) \end{aligned}$$
In the above, a) follows from (8.170) by noting that the logarithm in (8.171) is negative, and b) follows from the fundamental inequality $\ln a \le a - 1$. By letting $\delta$ be sufficiently small so that
$$\frac{\epsilon}{2} - \zeta > 0, \quad (8.175)$$
the above upper bound on $\ln \Pr\{K = 1|\mathbf{X} = \mathbf{x}\}$ tends to $-\infty$ as $n \to \infty$, i.e., $\Pr\{K = 1|\mathbf{X} = \mathbf{x}\} \to 0$ as $n \to \infty$. This implies
$$\Pr\{K = 1|\mathbf{X} = \mathbf{x}\} \le \frac{\epsilon}{2} \quad (8.176)$$
for sufficiently large $n$. It then follows that
$$\begin{aligned} \Pr\{K = 1\} &= \sum_{\mathbf{x} \in S^n_{[X]\delta}} \Pr\{K = 1|\mathbf{X} = \mathbf{x}\}\Pr\{\mathbf{X} = \mathbf{x}\} + \sum_{\mathbf{x} \notin S^n_{[X]\delta}} \Pr\{K = 1|\mathbf{X} = \mathbf{x}\}\Pr\{\mathbf{X} = \mathbf{x}\} & (8.177){-}(8.178)\\ &\le \sum_{\mathbf{x} \in S^n_{[X]\delta}} \frac{\epsilon}{2} \cdot \Pr\{\mathbf{X} = \mathbf{x}\} + \sum_{\mathbf{x} \notin S^n_{[X]\delta}} 1 \cdot \Pr\{\mathbf{X} = \mathbf{x}\} & (8.179)\\ &= \frac{\epsilon}{2} \cdot \Pr\{\mathbf{X} \in S^n_{[X]\delta}\} + \Pr\{\mathbf{X} \notin S^n_{[X]\delta}\} & (8.180)\\ &\le \frac{\epsilon}{2} \cdot 1 + (1 - \Pr\{\mathbf{X} \in S^n_{[X]\delta}\}) & (8.181)\\ &< \frac{\epsilon}{2} + \delta, & (8.182) \end{aligned}$$
where we have invoked Proposition 6.13 in the last step. By letting $\delta$ be sufficiently small so that
$$\delta < \frac{\epsilon}{2} \quad (8.183)$$
and (8.175) is satisfied, we obtain
$$\Pr\{K = 1\} < \epsilon. \quad (8.184)$$

The main idea of the above upper bound on $\Pr\{K = 1\}$ for sufficiently large $n$ is the following. In constructing the codebook, we randomly generate $M$ codewords in $\hat{\mathcal{X}}^n$ according to $p(\hat{x})^n$. If $M$ grows with $n$ at a rate higher than $I(X; \hat{X})$, then the probability that there exists at least one codeword which is jointly typical with the source sequence $\mathbf{X}$ with respect to $p(x, \hat{x})$ is very high when $n$ is large. Further, the average distortion between $\mathbf{X}$ and such a codeword is close to $Ed(X, \hat{X})$ because the empirical joint distribution of the symbol pairs in $\mathbf{X}$ and such a codeword is close to $p(x, \hat{x})$. Then by letting the reproduction sequence $\hat{\mathbf{X}}$ be such a codeword, the average distortion between $\mathbf{X}$ and $\hat{\mathbf{X}}$ is less than $D + \epsilon$ with probability arbitrarily close to 1, since $Ed(X, \hat{X}) \le D$. These ideas are formally shown in the rest of the proof.

Now for sufficiently large $n$, consider
$$\begin{aligned} \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon\} &= \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon|K = 1\}\Pr\{K = 1\}\\ &\quad + \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon|K \ne 1\}\Pr\{K \ne 1\} & (8.185)\\ &\le 1 \cdot \epsilon + \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon|K \ne 1\} \cdot 1 & (8.186)\\ &= \epsilon + \Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon|K \ne 1\}. & (8.187) \end{aligned}$$
We will show that by choosing the value of $\delta$ carefully, it is possible to make $d(\mathbf{X}, \hat{\mathbf{X}})$ always less than or equal to $D + \epsilon$ provided $K \ne 1$. Since $(\mathbf{X}, \hat{\mathbf{X}}) \in T^n_{[X\hat{X}]\delta}$ conditioning on $\{K \ne 1\}$, we have
$$\begin{aligned} d(\mathbf{X}, \hat{\mathbf{X}}) &= \frac{1}{n}\sum_{k=1}^n d(X_k, \hat{X}_k) & (8.188)\\ &= \frac{1}{n}\sum_{x,\hat{x}} d(x, \hat{x}) N(x, \hat{x}|\mathbf{X}, \hat{\mathbf{X}}) & (8.189)\\ &= \frac{1}{n}\sum_{x,\hat{x}} d(x, \hat{x})(np(x,\hat{x}) + N(x, \hat{x}|\mathbf{X}, \hat{\mathbf{X}}) - np(x,\hat{x})) & (8.190)\\ &= \sum_{x,\hat{x}} p(x,\hat{x}) d(x,\hat{x}) + \sum_{x,\hat{x}} d(x,\hat{x})\Big(\frac{1}{n} N(x,\hat{x}|\mathbf{X},\hat{\mathbf{X}}) - p(x,\hat{x})\Big) & (8.191)\\ &= Ed(X, \hat{X}) + \sum_{x,\hat{x}} d(x,\hat{x})\Big(\frac{1}{n} N(x,\hat{x}|\mathbf{X},\hat{\mathbf{X}}) - p(x,\hat{x})\Big) & (8.192)\\ &\le Ed(X, \hat{X}) + \sum_{x,\hat{x}} d(x,\hat{x})\Big|\frac{1}{n} N(x,\hat{x}|\mathbf{X},\hat{\mathbf{X}}) - p(x,\hat{x})\Big| & (8.193)\\ &\overset{a)}{\le} Ed(X, \hat{X}) + d_{\max}\sum_{x,\hat{x}}\Big|\frac{1}{n} N(x,\hat{x}|\mathbf{X},\hat{\mathbf{X}}) - p(x,\hat{x})\Big| & (8.194)\\ &\overset{b)}{\le} Ed(X, \hat{X}) + d_{\max}\delta & (8.195)\\ &\overset{c)}{\le} D + d_{\max}\delta, & (8.196) \end{aligned}$$
where a) follows from the definition of $d_{\max}$ in (8.142); b) follows because $(\mathbf{X}, \hat{\mathbf{X}}) \in T^n_{[X\hat{X}]\delta}$; c) follows from (8.152). Therefore, by taking
$$\delta \le \frac{\epsilon}{d_{\max}}, \quad (8.197)$$
we obtain
$$d(\mathbf{X}, \hat{\mathbf{X}}) \le D + d_{\max}\Big(\frac{\epsilon}{d_{\max}}\Big) = D + \epsilon \quad (8.198)$$
if $K \ne 1$. Therefore,
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon|K \ne 1\} = 0, \quad (8.199)$$
and it follows from (8.187) that
$$\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon\} \le \epsilon. \quad (8.200)$$
Thus we have shown that for sufficiently large $n$, there exists an $(n, M)$ random code which satisfies
$$\frac{1}{n}\log M \le I(X; \hat{X}) + \epsilon \quad (8.201)$$
(this follows from the upper bound in (8.153)) and (8.200). This implies the existence of an $(n, M)$ rate-distortion code which satisfies (8.201) and (8.200). Therefore, the rate-distortion pair $(I(X; \hat{X}), D)$ is achievable.
Then upon minimizing over all $\hat{X}$ which satisfy (8.152), we conclude that the rate-distortion pair $(R_I(D), D)$ is achievable, which implies $R_I(D) \ge R(D)$. The proof is completed.

Chapter Summary

Rate-Distortion Function: For an information source $X$ and a single-letter distortion measure $d : \mathcal{X} \times \hat{\mathcal{X}} \to \Re^+$, the rate-distortion function is given by
$$R(D) = \min_{\hat{X}: Ed(X,\hat{X}) \le D} I(X; \hat{X}).$$

Rate-Distortion Theorem: An i.i.d. random sequence $X_1, X_2, \cdots, X_n$ with generic random variable $X$ can be compressed at rate $R + \epsilon$ such that $\Pr\{d(\mathbf{X}, \hat{\mathbf{X}}) > D + \epsilon\} \to 0$ as $n \to \infty$ if and only if $R \ge R(D)$.

Binary Source: Let $X$ be binary with distribution $\{\gamma, 1 - \gamma\}$ and let $d$ be the Hamming distortion measure. Then
$$R(D) = \begin{cases} h_b(\gamma) - h_b(D) & \text{if } 0 \le D < \min(\gamma, 1-\gamma)\\ 0 & \text{if } D \ge \min(\gamma, 1-\gamma). \end{cases}$$

Problems

1. Obtain the forward channel description of $R(D)$ for the binary source with the Hamming distortion measure.
2. Binary covering radius. The Hamming ball with center $\mathbf{c} = (c_1, c_2, \cdots, c_n) \in \{0,1\}^n$ and radius $r$ is the set
$$S_r(\mathbf{c}) = \Big\{\mathbf{x} \in \{0,1\}^n : \sum_{i=1}^n |x_i - c_i| \le r\Big\}.$$
Let $M_{r,n}$ be the minimum number $M$ such that there exist Hamming balls $S_r(\mathbf{c}_j)$, $j = 1, 2, \cdots, M$, such that for all $\mathbf{x} \in \{0,1\}^n$, $\mathbf{x} \in S_r(\mathbf{c}_j)$ for some $j$.
a) Show that
$$M_{r,n} \ge \frac{2^n}{\sum_{k=0}^r \binom{n}{k}}.$$
b) What is the relation between $M_{r,n}$ and the rate-distortion function for the binary source with the Hamming distortion measure?
3. Consider a source random variable $X$ with the Hamming distortion measure.
a) Prove that
$$R(D) \ge H(X) - D\log(|\mathcal{X}| - 1) - h_b(D)$$
for $0 \le D \le D_{\max}$.
b) Show that the above lower bound on $R(D)$ is tight if $X$ is distributed uniformly on $\mathcal{X}$.
See Jerohin for the tightness of this lower bound for a general source. This bound is a special case of the Shannon lower bound for the rate-distortion function.
4. Product source. Let $X$ and $Y$ be two independent source random variables with reproduction alphabets $\hat{\mathcal{X}}$ and $\hat{\mathcal{Y}}$ and distortion measures $d_x$ and $d_y$, and let the rate-distortion functions for $X$ and $Y$ be denoted by $R_x(D_x)$ and $R_y(D_y)$, respectively. Now for the product source $(X, Y)$, define a distortion measure $d : (\mathcal{X} \times \mathcal{Y}) \times (\hat{\mathcal{X}} \times \hat{\mathcal{Y}}) \to \Re^+$ by
$$d((x, y), (\hat{x}, \hat{y})) = d_x(x, \hat{x}) + d_y(y, \hat{y}).$$
Prove that the rate-distortion function $R(D)$ for $(X, Y)$ with distortion measure $d$ is given by
$$R(D) = \min_{D_x + D_y = D} (R_x(D_x) + R_y(D_y)).$$
Hint: Prove that $I(X, Y; \hat{X}, \hat{Y}) \ge I(X; \hat{X}) + I(Y; \hat{Y})$ if $X$ and $Y$ are independent. (Shannon.)
5. Compound source. Let $\Theta$ be an index set and $Z_\Theta = \{X_\theta : \theta \in \Theta\}$ be a collection of source random variables. The random variables in $Z_\Theta$ have a common source alphabet $\mathcal{X}$, a common reproduction alphabet $\hat{\mathcal{X}}$, and a common distortion measure $d$. A compound source is an i.i.d. information source whose generic random variable is $X_\Phi$, where $\Phi$ is equal to some $\theta \in \Theta$ but we do not know which one it is. The rate-distortion function $R_\Phi(D)$ for $X_\Phi$ has the same definition as the rate-distortion function defined in this chapter except that (8.23) is replaced by
$$\Pr\{d(\mathbf{X}_\theta, \hat{\mathbf{X}}) > D + \epsilon\} \le \epsilon \quad \text{for all } \theta \in \Theta.$$
Show that
$$R_\Phi(D) = \sup_{\theta \in \Theta} R_\theta(D),$$
where $R_\theta(D)$ is the rate-distortion function for $X_\theta$.
6. Show that asymptotic optimality can always be achieved by separating rate-distortion coding and channel coding when the information source is i.i.d. (with a single-letter distortion measure) and the channel is memoryless.
7. Slepian-Wolf coding. Let $\epsilon$, $\gamma$, and $\delta$ be small positive quantities.
For $1 \le i \le 2^{n(H(Y|X)+\epsilon)}$, randomly and independently select with replacement $2^{n(I(X;Y)-\gamma)}$ sequences from $T^n_{[Y]\delta}$ according to the uniform distribution to form a bin $B_i$. Let $(\mathbf{x}, \mathbf{y})$ be a fixed pair of sequences in $T^n_{[XY]\delta}$. Prove the following by choosing $\epsilon$, $\gamma$, and $\delta$ appropriately:
a) the probability that $\mathbf{y}$ is in some $B_i$ tends to 1 as $n \to \infty$;
b) given that $\mathbf{y} \in B_i$, the probability that there exists another $\mathbf{y}' \in B_i$ such that $(\mathbf{x}, \mathbf{y}') \in T^n_{[XY]\delta}$ tends to 0 as $n \to \infty$.
Let $(\mathbf{X}, \mathbf{Y}) \sim p^n(x, y)$. The results in a) and b) say that if $(\mathbf{X}, \mathbf{Y})$ is jointly typical, which happens with probability close to 1 for large $n$, then it is very likely that $\mathbf{Y}$ is in some bin $B_i$, and that $\mathbf{Y}$ is the unique vector in $B_i$ which is jointly typical with $\mathbf{X}$. If $\mathbf{X}$ is available as side-information, then by specifying the index of the bin containing $\mathbf{Y}$, which takes about $nH(Y|X)$ bits, $\mathbf{Y}$ can be uniquely specified. Note that no knowledge about $\mathbf{X}$ is involved in specifying the index of the bin containing $\mathbf{Y}$. This is the basis of Slepian-Wolf coding, which launched the whole area of multiterminal source coding (see Berger).

Historical Notes

Transmission of an information source with distortion was first conceived by Shannon in his 1948 paper. He returned to the problem in 1959 and proved the rate-distortion theorem. The normalization of the rate-distortion function is due to Pinkston. The rate-distortion theorem proved here is a stronger version of the original theorem. Extensions of the theorem to more general sources were proved in the book by Berger. An iterative algorithm for computing the rate-distortion function developed by Blahut will be discussed in Chapter 9. Rose has developed an algorithm for the same purpose based on a mapping approach.

9 The Blahut-Arimoto Algorithms

For a discrete memoryless channel $p(y|x)$, the capacity
$$C = \max_{r(x)} I(X; Y), \quad (9.1)$$
where $X$ and $Y$ are respectively the input and the output of the generic channel and $r(x)$ is the input distribution, characterizes the maximum asymptotically achievable rate at which information can be transmitted through the channel reliably. The expression for $C$ in (9.1) is called a single-letter characterization in the sense that it depends only on the transition matrix of the generic channel but not on the block length $n$ of a code for the channel. When both the input alphabet $\mathcal{X}$ and the output alphabet $\mathcal{Y}$ are finite, the computation of $C$ becomes a finite-dimensional maximization problem.

For an i.i.d. information source $\{X_k, k \ge 1\}$ with generic random variable $X$, the rate-distortion function
$$R(D) = \min_{Q(\hat{x}|x): Ed(X,\hat{X}) \le D} I(X; \hat{X}) \quad (9.2)$$
characterizes the minimum asymptotically achievable rate of a rate-distortion code which reproduces the information source with an average distortion no more than $D$ with respect to a single-letter distortion measure $d$. Again, the expression for $R(D)$ in (9.2) is a single-letter characterization because it depends only on the generic random variable $X$ but not on the block length $n$ of a rate-distortion code. When both the source alphabet $\mathcal{X}$ and the reproduction alphabet $\hat{\mathcal{X}}$ are finite, the computation of $R(D)$ becomes a finite-dimensional minimization problem.

Except for very special cases, it is not possible to obtain an expression for $C$ or $R(D)$ in closed form, and we have to resort to numerical computation. However, computing these quantities is not straightforward because the associated optimization problem is nonlinear.
In this chapter, we discuss the Blahut-Arimoto algorithms (henceforth the BA algorithms), a pair of iterative algorithms devised for this purpose.

In order to better understand how and why the BA algorithms work, we will first describe the algorithm in a general setting in the next section. Specializations of the algorithm for the computation of $C$ and $R(D)$ will be discussed in Section 9.2, and convergence of the algorithm will be proved in Section 9.3.

9.1 Alternating Optimization

In this section, we describe an alternating optimization algorithm. This algorithm will be specialized in the next section for computing the channel capacity and the rate-distortion function.

Consider the double supremum
$$\sup_{\mathbf{u}_1 \in A_1} \sup_{\mathbf{u}_2 \in A_2} f(\mathbf{u}_1, \mathbf{u}_2), \quad (9.3)$$
where $A_i$ is a convex subset of $\Re^{n_i}$ for $i = 1, 2$, and $f$ is a real function defined on $A_1 \times A_2$. The function $f$ is bounded from above, and is continuous and has continuous partial derivatives on $A_1 \times A_2$. Further assume that for all $\mathbf{u}_2 \in A_2$, there exists a unique $c_1(\mathbf{u}_2) \in A_1$ such that
$$f(c_1(\mathbf{u}_2), \mathbf{u}_2) = \max_{\mathbf{u}_1' \in A_1} f(\mathbf{u}_1', \mathbf{u}_2), \quad (9.4)$$
and for all $\mathbf{u}_1 \in A_1$, there exists a unique $c_2(\mathbf{u}_1) \in A_2$ such that
$$f(\mathbf{u}_1, c_2(\mathbf{u}_1)) = \max_{\mathbf{u}_2' \in A_2} f(\mathbf{u}_1, \mathbf{u}_2'). \quad (9.5)$$
Let $\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2)$ and $A = A_1 \times A_2$. Then (9.3) can be written as
$$\sup_{\mathbf{u} \in A} f(\mathbf{u}). \quad (9.6)$$
In other words, the supremum of $f$ is taken over a subset of $\Re^{n_1 + n_2}$ which is equal to the Cartesian product of two convex subsets of $\Re^{n_1}$ and $\Re^{n_2}$, respectively.

We now describe an alternating optimization algorithm for computing $f^*$, the value of the double supremum in (9.3). Let $\mathbf{u}^{(k)} = (\mathbf{u}_1^{(k)}, \mathbf{u}_2^{(k)})$ for $k \ge 0$ be defined as follows. Let $\mathbf{u}_1^{(0)}$ be an arbitrarily chosen vector in $A_1$, and let $\mathbf{u}_2^{(0)} = c_2(\mathbf{u}_1^{(0)})$. For $k \ge 1$, $\mathbf{u}^{(k)}$ is defined by
$$\mathbf{u}_1^{(k)} = c_1(\mathbf{u}_2^{(k-1)}) \quad (9.7)$$
and
$$\mathbf{u}_2^{(k)} = c_2(\mathbf{u}_1^{(k)}). \quad (9.8)$$
In other words, $\mathbf{u}_1^{(k)}$ and $\mathbf{u}_2^{(k)}$ are generated in the order $\mathbf{u}_1^{(0)}, \mathbf{u}_2^{(0)}, \mathbf{u}_1^{(1)}, \mathbf{u}_2^{(1)}, \mathbf{u}_1^{(2)}, \mathbf{u}_2^{(2)}, \cdots$, where each vector in the sequence is a function of the previous vector except that $\mathbf{u}_1^{(0)}$ is arbitrarily chosen in $A_1$. Let
$$f^{(k)} = f(\mathbf{u}^{(k)}). \quad (9.9)$$
Then from (9.4) and (9.5),
$$\begin{aligned} f^{(k)} &= f(\mathbf{u}_1^{(k)}, \mathbf{u}_2^{(k)}) & (9.10)\\ &\ge f(\mathbf{u}_1^{(k)}, \mathbf{u}_2^{(k-1)}) & (9.11)\\ &\ge f(\mathbf{u}_1^{(k-1)}, \mathbf{u}_2^{(k-1)}) & (9.12)\\ &= f^{(k-1)} & (9.13) \end{aligned}$$
for $k \ge 1$. Since the sequence $f^{(k)}$ is non-decreasing, it must converge because $f$ is bounded from above. We will show in Section 9.3 that $f^{(k)} \to f^*$ if $f$ is concave.

[Figure 9.1: Alternating optimization, with the iterates zigzagging between coordinatewise maxima and climbing toward f*.]

Figure 9.1 is an illustration of the alternating optimization algorithm for the case in which both $n_1$ and $n_2$ are equal to 1, and $f^{(k)} \to f^*$. The alternating optimization algorithm can be explained by the following analogy. Suppose a hiker wants to reach the summit of a mountain. Starting from a certain point in the mountain, the hiker moves north-south and east-west alternately. (In our problem, the north-south and east-west directions can be multi-dimensional.) In each move, the hiker moves to the highest possible point. The question is whether the hiker can eventually approach the summit starting from any point in the mountain.

Replacing $f$ by $-f$ in (9.3), the double supremum becomes the double infimum
$$\inf_{\mathbf{u}_1 \in A_1} \inf_{\mathbf{u}_2 \in A_2} f(\mathbf{u}_1, \mathbf{u}_2). \quad (9.14)$$
All the previous assumptions on $A_1$, $A_2$, and $f$ remain valid except that $f$ is now assumed to be bounded from below instead of bounded from above. The double infimum in (9.14) can be computed by the same alternating optimization algorithm. Note that with $f$ replaced by $-f$, the maximums in (9.4) and (9.5) become minimums, and the inequalities in (9.11) and (9.12) are reversed.
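The iteration in (9.7)-(9.9) is mechanical enough to be captured in a few lines of code. The following is a minimal sketch in Python (our own scaffolding, not from the text): `argmax_u1` and `argmax_u2` stand for the coordinatewise maximizers $c_1$ and $c_2$, which the caller must supply, and the stopping rule exploits the fact that $f^{(k)}$ is non-decreasing. The two BA algorithms in the next section are obtained by plugging in the specific maximizers derived there.

```python
def alternating_maximization(f, argmax_u1, argmax_u2, u1_init,
                             tol=1e-9, max_iter=1000):
    """Generic alternating maximization of f(u1, u2), following (9.7)-(9.8).

    argmax_u1(u2) returns c1(u2); argmax_u2(u1) returns c2(u1). Both are
    assumed to return the unique maximizer, as required in (9.4)-(9.5).
    """
    u1 = u1_init
    u2 = argmax_u2(u1)              # u2^(0) = c2(u1^(0))
    value = f(u1, u2)
    for _ in range(max_iter):
        u1 = argmax_u1(u2)          # (9.7)
        u2 = argmax_u2(u1)          # (9.8)
        new_value = f(u1, u2)
        # f^(k) is non-decreasing, cf. (9.10)-(9.13), so a small increment
        # is a reasonable (heuristic) stopping criterion.
        if new_value - value < tol:
            return u1, u2, new_value
        value = new_value
    return u1, u2, value
```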
9.2 The Algorithms

In this section, we specialize the alternating optimization algorithm described in the last section to compute the channel capacity and the rate-distortion function. The corresponding algorithms are known as the BA algorithms.

9.2.1 Channel Capacity

We will use $\mathbf{r}$ to denote an input distribution $r(x)$, and we write $\mathbf{r} > 0$ if $\mathbf{r}$ is strictly positive, i.e., $r(x) > 0$ for all $x \in \mathcal{X}$. If $\mathbf{r}$ is not strictly positive, we write $\mathbf{r} \ge 0$. Similar notations will be introduced as appropriate.

Lemma 9.1. Let $r(x)p(y|x)$ be a given joint distribution on $\mathcal{X} \times \mathcal{Y}$ such that $\mathbf{r} > 0$, and let $q$ be a transition matrix from $\mathcal{Y}$ to $\mathcal{X}$. Then
$$\max_q \sum_x \sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)} = \sum_x \sum_y r(x)p(y|x)\log\frac{q^*(x|y)}{r(x)}, \quad (9.15)$$
where the maximization is taken over all $q$ such that
$$q(x|y) = 0 \text{ if and only if } p(y|x) = 0, \quad (9.16)$$
and
$$q^*(x|y) = \frac{r(x)p(y|x)}{\sum_{x'} r(x')p(y|x')}, \quad (9.17)$$
i.e., the maximizing $q$ is the one which corresponds to the input distribution $\mathbf{r}$ and the transition matrix $p(y|x)$.

In (9.15) and the sequel, we adopt the convention that the summation is taken over all $x$ and $y$ such that $r(x) > 0$ and $p(y|x) > 0$. Note that the right hand side of (9.15) gives the mutual information $I(X; Y)$ when $\mathbf{r}$ is the input distribution for the generic channel $p(y|x)$.

Proof. Let
$$w(y) = \sum_{x'} r(x')p(y|x') \quad (9.18)$$
in (9.17). We assume without loss of generality that for all $y \in \mathcal{Y}$, $p(y|x) > 0$ for some $x \in \mathcal{X}$. Since $\mathbf{r} > 0$, $w(y) > 0$ for all $y$, and hence $q^*(x|y)$ is well-defined. Rearranging (9.17), we have
$$r(x)p(y|x) = w(y)q^*(x|y). \quad (9.19)$$
Consider
$$\begin{aligned} &\sum_x\sum_y r(x)p(y|x)\log\frac{q^*(x|y)}{r(x)} - \sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}\\ &\quad= \sum_x\sum_y r(x)p(y|x)\log\frac{q^*(x|y)}{q(x|y)} & (9.20)\\ &\quad= \sum_y\sum_x w(y)q^*(x|y)\log\frac{q^*(x|y)}{q(x|y)} & (9.21)\\ &\quad= \sum_y w(y)\sum_x q^*(x|y)\log\frac{q^*(x|y)}{q(x|y)} & (9.22)\\ &\quad= \sum_y w(y)D(q^*(x|y)\|q(x|y)) & (9.23)\\ &\quad\ge 0, & (9.24) \end{aligned}$$
where (9.21) follows from (9.19), and the last step is an application of the divergence inequality. Then the proof is completed by noting in (9.17) that $q^*$ satisfies (9.16) because $\mathbf{r} > 0$. □

Theorem 9.2. For a discrete memoryless channel $p(y|x)$,
$$C = \sup_{\mathbf{r} > 0}\max_q \sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}, \quad (9.25)$$
where the maximization is taken over all $q$ that satisfies (9.16).

Proof. Let $I(\mathbf{r}, p)$ denote the mutual information $I(X; Y)$ when $\mathbf{r}$ is the input distribution for the generic channel $p(y|x)$. Then we can write
$$C = \max_{\mathbf{r} \ge 0} I(\mathbf{r}, p). \quad (9.26)$$
Let $\mathbf{r}^*$ achieve $C$. If $\mathbf{r}^* > 0$, then
$$\begin{aligned} C &= \max_{\mathbf{r} \ge 0} I(\mathbf{r}, p) & (9.27)\\ &= \max_{\mathbf{r} > 0} I(\mathbf{r}, p) & (9.28)\\ &= \max_{\mathbf{r} > 0}\max_q\sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)} & (9.29)\\ &= \sup_{\mathbf{r} > 0}\max_q\sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}, & (9.30) \end{aligned}$$
where (9.29) follows from Lemma 9.1 (and the maximization is over all $q$ that satisfies (9.16)).

Next, we consider the case when $\mathbf{r}^* \ge 0$. Since $I(\mathbf{r}, p)$ is continuous in $\mathbf{r}$, for any $\epsilon > 0$, there exists $\delta > 0$ such that if
$$\|\mathbf{r} - \mathbf{r}^*\| < \delta, \quad (9.31)$$
then
$$C - I(\mathbf{r}, p) < \epsilon, \quad (9.32)$$
where $\|\mathbf{r} - \mathbf{r}^*\|$ denotes the Euclidean distance between $\mathbf{r}$ and $\mathbf{r}^*$. In particular, there exists $\tilde{\mathbf{r}} > 0$ which satisfies (9.31) and (9.32). Then
$$\begin{aligned} C &= \max_{\mathbf{r} \ge 0} I(\mathbf{r}, p) & (9.33)\\ &\ge \sup_{\mathbf{r} > 0} I(\mathbf{r}, p) & (9.34)\\ &\ge I(\tilde{\mathbf{r}}, p) & (9.35)\\ &> C - \epsilon, & (9.36) \end{aligned}$$
where the last step follows because $\tilde{\mathbf{r}}$ satisfies (9.32). Thus we have
$$C - \epsilon < \sup_{\mathbf{r} > 0} I(\mathbf{r}, p) \le C. \quad (9.37)$$
Finally, by letting $\epsilon \to 0$, we conclude that
$$C = \sup_{\mathbf{r} > 0} I(\mathbf{r}, p) = \sup_{\mathbf{r} > 0}\max_q\sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}. \quad (9.38)$$
This accomplishes the proof. □
Now for the double supremum in (9.3), let
$$f(\mathbf{r}, q) = \sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}, \quad (9.39)$$
with $\mathbf{r}$ and $q$ playing the roles of $\mathbf{u}_1$ and $\mathbf{u}_2$, respectively. Let
$$A_1 = \Big\{(r(x), x \in \mathcal{X}) : r(x) > 0 \text{ and } \sum_x r(x) = 1\Big\} \quad (9.40)$$
and
$$A_2 = \Big\{(q(x|y), (x,y) \in \mathcal{X}\times\mathcal{Y}) : q(x|y) > 0 \text{ if } p(y|x) > 0,\ q(x|y) = 0 \text{ if } p(y|x) = 0, \text{ and } \sum_x q(x|y) = 1 \text{ for all } y \in \mathcal{Y}\Big\}. \quad (9.41)$$
Then $A_1$ is a subset of $\Re^{|\mathcal{X}|}$ and $A_2$ is a subset of $\Re^{|\mathcal{X}||\mathcal{Y}|}$, and it can readily be checked that both $A_1$ and $A_2$ are convex. For all $\mathbf{r} \in A_1$ and $q \in A_2$, by Lemma 9.1,
$$\begin{aligned} f(\mathbf{r}, q) &= \sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)} & (9.42)\\ &\le \sum_x\sum_y r(x)p(y|x)\log\frac{q^*(x|y)}{r(x)} & (9.43)\\ &= I(X; Y) & (9.44)\\ &\le H(X) & (9.45)\\ &\le \log|\mathcal{X}|. & (9.46) \end{aligned}$$
Thus $f$ is bounded from above. Since for all $q \in A_2$, $q(x|y) = 0$ for all $x$ and $y$ such that $p(y|x) = 0$, these components of $q$ are degenerate. In fact, these components of $q$ do not appear in the definition of $f(\mathbf{r}, q)$ in (9.39), which can be seen as follows. Recall the convention that the double summation in (9.39) is over all $x$ and $y$ such that $r(x) > 0$ and $p(y|x) > 0$. If $q(x|y) = 0$, then $p(y|x) = 0$, and hence the corresponding term is not included in the double summation. Therefore, it is readily seen that $f$ is continuous and has continuous partial derivatives on $A$, because all the probabilities involved in the double summation in (9.39) are strictly positive. Moreover, for any given $\mathbf{r} \in A_1$, by Lemma 9.1, there exists a unique $q \in A_2$ that maximizes $f$. It will be shown shortly that for any given $q \in A_2$, there also exists a unique $\mathbf{r} \in A_1$ that maximizes $f$.

The double supremum in (9.3) now becomes
$$\sup_{\mathbf{r} \in A_1}\sup_{q \in A_2}\sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)}, \quad (9.47)$$
which by Theorem 9.2 is equal to $C$, where the supremum over all $q \in A_2$ is in fact a maximum. We then apply the alternating optimization algorithm in the last section to compute $C$. First, we arbitrarily choose a strictly positive input distribution in $A_1$ and let it be $\mathbf{r}^{(0)}$. Then we define $q^{(0)}$ and in general $q^{(k)}$ for $k \ge 0$ by
$$q^{(k)}(x|y) = \frac{r^{(k)}(x)p(y|x)}{\sum_{x'} r^{(k)}(x')p(y|x')} \quad (9.48)$$
in view of Lemma 9.1. In order to define $\mathbf{r}^{(1)}$ and in general $\mathbf{r}^{(k)}$ for $k \ge 1$, we need to find the $\mathbf{r} \in A_1$ that maximizes $f$ for a given $q \in A_2$, where the constraints on $\mathbf{r}$ are
$$\sum_x r(x) = 1 \quad (9.49)$$
and
$$r(x) > 0 \text{ for all } x \in \mathcal{X}. \quad (9.50)$$
We now use the method of Lagrange multipliers to find the best $\mathbf{r}$ by ignoring temporarily the positivity constraints in (9.50). Let
$$J = \sum_x\sum_y r(x)p(y|x)\log\frac{q(x|y)}{r(x)} - \lambda\sum_x r(x). \quad (9.51)$$
For convenience, we assume that the logarithm is the natural logarithm. Differentiating with respect to $r(x)$ gives
$$\frac{\partial J}{\partial r(x)} = \sum_y p(y|x)\log q(x|y) - \log r(x) - 1 - \lambda. \quad (9.52)$$
Upon setting $\frac{\partial J}{\partial r(x)} = 0$, we have
$$\log r(x) = \sum_y p(y|x)\log q(x|y) - 1 - \lambda, \quad (9.53)$$
or
$$r(x) = e^{-(\lambda+1)}\prod_y q(x|y)^{p(y|x)}. \quad (9.54)$$
By considering the normalization constraint in (9.49), we can eliminate $\lambda$ and obtain
$$r(x) = \frac{\prod_y q(x|y)^{p(y|x)}}{\sum_{x'}\prod_y q(x'|y)^{p(y|x')}}. \quad (9.55)$$
The above product is over all $y$ such that $p(y|x) > 0$, and $q(x|y) > 0$ for all such $y$. This implies that both the numerator and the denominator on the right hand side above are positive, and therefore $r(x) > 0$. In other words, the $\mathbf{r}$ thus obtained happens to satisfy the positivity constraints in (9.50), although these constraints were ignored when we set up the Lagrange multipliers. We will show in Section 9.3.2 that $f$ is concave. Then $\mathbf{r}$ as given in (9.55), which is unique, indeed achieves the maximum of $f$ for a given $q \in A_2$ because $\mathbf{r}$ is in the interior of $A_1$.
In view of (9.55), we define $\mathbf{r}^{(k)}$ for $k \ge 1$ by
$$r^{(k)}(x) = \frac{\prod_y q^{(k-1)}(x|y)^{p(y|x)}}{\sum_{x'}\prod_y q^{(k-1)}(x'|y)^{p(y|x')}}. \quad (9.56)$$
The vectors $\mathbf{r}^{(k)}$ and $q^{(k)}$ are defined in the order $\mathbf{r}^{(0)}, q^{(0)}, \mathbf{r}^{(1)}, q^{(1)}, \mathbf{r}^{(2)}, q^{(2)}, \cdots$, where each vector in the sequence is a function of the previous vector except that $\mathbf{r}^{(0)}$ is arbitrarily chosen in $A_1$. It remains to show by induction that $\mathbf{r}^{(k)} \in A_1$ for $k \ge 1$ and $q^{(k)} \in A_2$ for $k \ge 0$. If $\mathbf{r}^{(k)} \in A_1$, i.e., $\mathbf{r}^{(k)} > 0$, then we see from (9.48) that $q^{(k)}(x|y) = 0$ if and only if $p(y|x) = 0$, i.e., $q^{(k)} \in A_2$. On the other hand, if $q^{(k)} \in A_2$, then we see from (9.56) that $\mathbf{r}^{(k+1)} > 0$, i.e., $\mathbf{r}^{(k+1)} \in A_1$. Therefore, $\mathbf{r}^{(k)} \in A_1$ and $q^{(k)} \in A_2$ for all $k \ge 0$. Upon determining $(\mathbf{r}^{(k)}, q^{(k)})$, we can compute $f^{(k)} = f(\mathbf{r}^{(k)}, q^{(k)})$ for all $k$. It will be shown in Section 9.3 that $f^{(k)} \to C$.
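The updates (9.48) and (9.56) translate directly into code. The following is a sketch of the capacity iteration (our own implementation; the function name and the convergence test are our choices, and base-2 logarithms are used only for the final capacity value):

```python
import numpy as np

def ba_capacity(P, tol=1e-10, max_iter=10000):
    """Blahut-Arimoto iteration for channel capacity, using (9.48) and (9.56).

    P[x, y] is the channel transition matrix p(y|x). Returns the capacity in
    bits and the optimizing input distribution.
    """
    n_x = P.shape[0]
    r = np.full(n_x, 1.0 / n_x)        # r^(0): any strictly positive distribution
    for _ in range(max_iter):
        w = r @ P                       # output distribution w(y), cf. (9.18)
        q = (r[:, None] * P) / w[None, :]          # q^(k)(x|y) in (9.48)
        # r^(k+1)(x) in (9.56): renormalized product of q(x|y)**p(y|x) over y.
        safe_q = np.where(q > 0, q, 1.0)            # log(1) = 0 where p(y|x) = 0
        log_r = np.where(P > 0, P * np.log(safe_q), 0.0).sum(axis=1)
        r_new = np.exp(log_r)
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:
            r = r_new
            break
        r = r_new
    # Capacity C = sum_x sum_y r(x) p(y|x) log2( p(y|x) / w(y) ).
    w = r @ P
    mask = P > 0
    ratio = np.ones_like(P)
    ratio[mask] = P[mask] / np.broadcast_to(w, P.shape)[mask]
    C = float((r[:, None] * np.where(mask, P * np.log2(ratio), 0.0)).sum())
    return C, r

# A binary symmetric channel with crossover 0.1: C should approach
# 1 - hb(0.1), about 0.531 bits, with r converging to the uniform distribution.
P = np.array([[0.9, 0.1], [0.1, 0.9]])
print(ba_capacity(P))
```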
9.2.2 The Rate-Distortion Function

The discussion in this section is analogous to the discussion in Section 9.2.1. Some of the details will be omitted for brevity.

For all problems of interest, $R(0) > 0$. Otherwise, $R(D) = 0$ for all $D \ge 0$ since $R(D)$ is nonnegative and non-increasing, so we assume without loss of generality that $R(0) > 0$. We have shown in Corollary 8.19 that if $R(0) > 0$, then $R(D)$ is strictly decreasing for $0 \le D \le D_{\max}$. Since $R(D)$ is convex, for any $s \le 0$, there exists a point on the $R(D)$ curve for $0 \le D \le D_{\max}$ such that the slope of a tangent to the $R(D)$ curve at that point is equal to $s$. (We say that a line is a tangent to the $R(D)$ curve if it touches the $R(D)$ curve from below.) Denote such a point on the $R(D)$ curve by $(D_s, R(D_s))$, which is not necessarily unique. Then this tangent intersects with the ordinate at $R(D_s) - sD_s$. This is illustrated in Figure 9.2.

[Figure 9.2: A tangent to the R(D) curve with slope equal to s, touching the curve at (Ds, R(Ds)) and intersecting the ordinate at R(Ds) − sDs.]

Let $I(\mathbf{p}, Q)$ denote the mutual information $I(X; \hat{X})$ and $D(\mathbf{p}, Q)$ denote the expected distortion $Ed(X, \hat{X})$ when $\mathbf{p}$ is the distribution for $X$ and $Q$ is the transition matrix from $\mathcal{X}$ to $\hat{\mathcal{X}}$ defining $\hat{X}$. Then for any $Q$, $(I(\mathbf{p}, Q), D(\mathbf{p}, Q))$ is a point in the rate-distortion region, and the line with slope $s$ passing through $(I(\mathbf{p}, Q), D(\mathbf{p}, Q))$ intersects the ordinate at $I(\mathbf{p}, Q) - sD(\mathbf{p}, Q)$. Since the $R(D)$ curve defines the boundary of the rate-distortion region and it is above the tangent in Figure 9.2, we see that
$$R(D_s) - sD_s = \min_Q [I(\mathbf{p}, Q) - sD(\mathbf{p}, Q)]. \quad (9.57)$$
For each $s \le 0$, if we can find a $Q_s$ that achieves the above minimum, then the line passing through $(0, I(\mathbf{p}, Q_s) - sD(\mathbf{p}, Q_s))$, i.e., the tangent in Figure 9.2, gives a tight lower bound on the $R(D)$ curve. In particular, if $(R(D_s), D_s)$ is unique,
$$D_s = D(\mathbf{p}, Q_s) \quad (9.58)$$
and
$$R(D_s) = I(\mathbf{p}, Q_s). \quad (9.59)$$
By varying over all $s \le 0$, we can then trace out the whole $R(D)$ curve. In the rest of the section, we will devise an iterative algorithm for the minimization problem in (9.57).

Lemma 9.3. Let $p(x)Q(\hat{x}|x)$ be a given joint distribution on $\mathcal{X} \times \hat{\mathcal{X}}$ such that $Q > 0$, and let $\mathbf{t}$ be any distribution on $\hat{\mathcal{X}}$ such that $\mathbf{t} > 0$. Then
$$\min_{\mathbf{t} > 0}\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} = \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t^*(\hat{x})}, \quad (9.60)$$
where
$$t^*(\hat{x}) = \sum_x p(x)Q(\hat{x}|x), \quad (9.61)$$
i.e., the minimizing $\mathbf{t}$ is the distribution on $\hat{\mathcal{X}}$ corresponding to the input distribution $\mathbf{p}$ and the transition matrix $Q$.

Proof. It suffices to prove that
$$\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} \ge \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t^*(\hat{x})} \quad (9.62)$$
for all $\mathbf{t} > 0$. The details are left as an exercise. Note in (9.61) that $\mathbf{t}^* > 0$ because $Q > 0$. □

Since $I(\mathbf{p}, Q)$ and $D(\mathbf{p}, Q)$ are continuous in $Q$, via an argument similar to the one we used in the proof of Theorem 9.2, we can replace the minimum over all $Q$ in (9.57) by the infimum over all $Q > 0$. By noting that the right hand side of (9.60) is equal to $I(\mathbf{p}, Q)$ and
$$D(\mathbf{p}, Q) = \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)d(x, \hat{x}), \quad (9.63)$$
we can apply Lemma 9.3 to obtain
$$\begin{aligned} R(D_s) - sD_s &= \inf_{Q>0}\left[\min_{\mathbf{t}>0}\sum_{x,\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} - s\sum_{x,\hat{x}} p(x)Q(\hat{x}|x)d(x,\hat{x})\right] & (9.64)\\ &= \inf_{Q>0}\min_{\mathbf{t}>0}\left[\sum_{x,\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} - s\sum_{x,\hat{x}} p(x)Q(\hat{x}|x)d(x,\hat{x})\right]. & (9.65) \end{aligned}$$
Now in the double infimum in (9.14), let
$$f(Q, \mathbf{t}) = \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} - s\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)d(x, \hat{x}), \quad (9.66)$$
$$A_1 = \Big\{(Q(\hat{x}|x), (x,\hat{x}) \in \mathcal{X}\times\hat{\mathcal{X}}) : Q(\hat{x}|x) > 0, \sum_{\hat{x}} Q(\hat{x}|x) = 1 \text{ for all } x \in \mathcal{X}\Big\}, \quad (9.67)$$
and
$$A_2 = \Big\{(t(\hat{x}), \hat{x} \in \hat{\mathcal{X}}) : t(\hat{x}) > 0 \text{ and } \sum_{\hat{x}} t(\hat{x}) = 1\Big\}, \quad (9.68)$$
with $Q$ and $\mathbf{t}$ playing the roles of $\mathbf{u}_1$ and $\mathbf{u}_2$, respectively. Then $A_1$ is a subset of $\Re^{|\mathcal{X}||\hat{\mathcal{X}}|}$ and $A_2$ is a subset of $\Re^{|\hat{\mathcal{X}}|}$, and it can readily be checked that both $A_1$ and $A_2$ are convex. Since $s \le 0$,
$$\begin{aligned} f(Q, \mathbf{t}) &= \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} - s\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)d(x,\hat{x}) & (9.69)\\ &\ge \sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t^*(\hat{x})} + 0 & (9.70)\\ &= I(X; \hat{X}) & (9.71)\\ &\ge 0. & (9.72) \end{aligned}$$
Therefore, $f$ is bounded from below. The double infimum in (9.14) now becomes
$$\inf_{Q \in A_1}\inf_{\mathbf{t} \in A_2}\left[\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)\log\frac{Q(\hat{x}|x)}{t(\hat{x})} - s\sum_x\sum_{\hat{x}} p(x)Q(\hat{x}|x)d(x,\hat{x})\right], \quad (9.73)$$
where the infimum over all $\mathbf{t} \in A_2$ is in fact a minimum. We then apply the alternating optimization algorithm described in Section 9.1 to compute $f^*$, the value of (9.73). First, we arbitrarily choose a strictly positive transition matrix in $A_1$ and let it be $Q^{(0)}$. Then we define $\mathbf{t}^{(0)}$ and in general $\mathbf{t}^{(k)}$ for $k \ge 0$ by
$$t^{(k)}(\hat{x}) = \sum_x p(x)Q^{(k)}(\hat{x}|x) \quad (9.74)$$
in view of Lemma 9.3. In order to define $Q^{(1)}$ and in general $Q^{(k)}$ for $k \ge 1$, we need to find the $Q \in A_1$ that minimizes $f$ for a given $\mathbf{t} \in A_2$, where the constraints on $Q$ are
$$Q(\hat{x}|x) > 0 \text{ for all } (x, \hat{x}) \in \mathcal{X}\times\hat{\mathcal{X}} \quad (9.75)$$
and
$$\sum_{\hat{x}} Q(\hat{x}|x) = 1 \text{ for all } x \in \mathcal{X}. \quad (9.76)$$
As we did for the computation of the channel capacity, we first ignore the positivity constraints in (9.75) when setting up the Lagrange multipliers. Then we obtain
$$Q(\hat{x}|x) = \frac{t(\hat{x})e^{sd(x,\hat{x})}}{\sum_{\hat{x}'} t(\hat{x}')e^{sd(x,\hat{x}')}} > 0. \quad (9.77)$$
The details are left as an exercise. We then define $Q^{(k)}$ for $k \ge 1$ by
$$Q^{(k)}(\hat{x}|x) = \frac{t^{(k-1)}(\hat{x})e^{sd(x,\hat{x})}}{\sum_{\hat{x}'} t^{(k-1)}(\hat{x}')e^{sd(x,\hat{x}')}}. \quad (9.78)$$
It will be shown in the next section that $f^{(k)} = f(Q^{(k)}, \mathbf{t}^{(k)}) \to f^*$ as $k \to \infty$. If there exists a unique point $(R(D_s), D_s)$ on the $R(D)$ curve such that the slope of a tangent at that point is equal to $s$, then
$$(I(\mathbf{p}, Q^{(k)}), D(\mathbf{p}, Q^{(k)})) \to (R(D_s), D_s). \quad (9.79)$$
Otherwise, $(I(\mathbf{p}, Q^{(k)}), D(\mathbf{p}, Q^{(k)}))$ is arbitrarily close to the segment of the $R(D)$ curve at which the slope is equal to $s$ when $k$ is sufficiently large. These facts are easily shown to be true.
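Analogously, (9.74) and (9.78) yield the following sketch (again our own code; since (9.78) uses a natural exponential, the slope parameter s is effectively in nats, while the returned rate is reported in bits; either way, each value of s yields a point on the R(D) curve):

```python
import numpy as np

def ba_rate_distortion(p, d, s, tol=1e-10, max_iter=10000):
    """Blahut-Arimoto iteration for the rate-distortion function, using (9.74) and (9.78).

    p[x] is the source distribution, d[x, xhat] the distortion matrix, and
    s <= 0 the slope parameter. Returns the point (D_s, R(D_s)) traced out
    for this s, with the rate in bits.
    """
    n_x, n_xhat = d.shape
    Q = np.full((n_x, n_xhat), 1.0 / n_xhat)    # Q^(0): strictly positive
    for _ in range(max_iter):
        t = p @ Q                                # t^(k) in (9.74)
        A = t[None, :] * np.exp(s * d)           # unnormalized Q^(k+1), cf. (9.78)
        Q_new = A / A.sum(axis=1, keepdims=True)
        if np.abs(Q_new - Q).max() < tol:
            Q = Q_new
            break
        Q = Q_new
    t = p @ Q
    D_s = (p[:, None] * Q * d).sum()             # D(p, Q)
    R_s = (p[:, None] * Q * np.log2(Q / t[None, :])).sum()   # I(p, Q) in bits
    return D_s, R_s

# Binary uniform source with Hamming distortion: the points should lie on
# R(D) = 1 - hb(D) for 0 <= D < 1/2, cf. (8.124).
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
for s in [-8.0, -4.0, -2.0]:
    print(ba_rate_distortion(p, d, s))
```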
9.3 Convergence

In this section, we first prove that if $f$ is concave, then $f^{(k)} \to f^*$. We then apply this sufficient condition to prove the convergence of the BA algorithm for computing the channel capacity. The convergence of the BA algorithm for computing the rate-distortion function can be proved likewise; the details are omitted.

9.3.1 A Sufficient Condition

In the alternating optimization algorithm in Section 9.1, we see from (9.7) and (9.8) that
$$\mathbf{u}^{(k+1)} = (\mathbf{u}_1^{(k+1)}, \mathbf{u}_2^{(k+1)}) = (c_1(\mathbf{u}_2^{(k)}), c_2(c_1(\mathbf{u}_2^{(k)}))) \quad (9.80)$$
for $k \ge 0$. Define
$$\Delta f(\mathbf{u}) = f(c_1(\mathbf{u}_2), c_2(c_1(\mathbf{u}_2))) - f(\mathbf{u}_1, \mathbf{u}_2). \quad (9.81)$$
Then
$$\begin{aligned} f^{(k+1)} - f^{(k)} &= f(\mathbf{u}^{(k+1)}) - f(\mathbf{u}^{(k)}) & (9.82)\\ &= f(c_1(\mathbf{u}_2^{(k)}), c_2(c_1(\mathbf{u}_2^{(k)}))) - f(\mathbf{u}_1^{(k)}, \mathbf{u}_2^{(k)}) & (9.83)\\ &= \Delta f(\mathbf{u}^{(k)}). & (9.84) \end{aligned}$$
We will prove that $f$ being concave is sufficient for $f^{(k)} \to f^*$. To this end, we first prove that if $f$ is concave, then the algorithm cannot be trapped at $\mathbf{u}$ if $f(\mathbf{u}) < f^*$.

Lemma 9.4. Let $f$ be concave. If $f^{(k)} < f^*$, then $f^{(k+1)} > f^{(k)}$.

Proof. We will prove that $\Delta f(\mathbf{u}) > 0$ for any $\mathbf{u} \in A$ such that $f(\mathbf{u}) < f^*$. Then if $f^{(k)} = f(\mathbf{u}^{(k)}) < f^*$, we see from (9.84) that
$$f^{(k+1)} - f^{(k)} = \Delta f(\mathbf{u}^{(k)}) > 0, \quad (9.85)$$
and the lemma is proved.

Consider any $\mathbf{u} \in A$ such that $f(\mathbf{u}) < f^*$. We will prove by contradiction that $\Delta f(\mathbf{u}) > 0$. Assume $\Delta f(\mathbf{u}) = 0$. Then it follows from (9.81) that
$$f(c_1(\mathbf{u}_2), c_2(c_1(\mathbf{u}_2))) = f(\mathbf{u}_1, \mathbf{u}_2). \quad (9.86)$$
Now we see from (9.5) that
$$f(c_1(\mathbf{u}_2), c_2(c_1(\mathbf{u}_2))) \ge f(c_1(\mathbf{u}_2), \mathbf{u}_2). \quad (9.87)$$
If $c_1(\mathbf{u}_2) \ne \mathbf{u}_1$, then
$$f(c_1(\mathbf{u}_2), \mathbf{u}_2) > f(\mathbf{u}_1, \mathbf{u}_2) \quad (9.88)$$
because $c_1(\mathbf{u}_2)$ is unique. Combining (9.87) and (9.88), we have
$$f(c_1(\mathbf{u}_2), c_2(c_1(\mathbf{u}_2))) > f(\mathbf{u}_1, \mathbf{u}_2), \quad (9.89)$$
which is a contradiction to (9.86). Therefore,
$$\mathbf{u}_1 = c_1(\mathbf{u}_2). \quad (9.90)$$
Using this, we see from (9.86) that
$$f(\mathbf{u}_1, c_2(\mathbf{u}_1)) = f(\mathbf{u}_1, \mathbf{u}_2), \quad (9.91)$$
which implies
$$\mathbf{u}_2 = c_2(\mathbf{u}_1) \quad (9.92)$$
because $c_2(\mathbf{u}_1)$ is unique. Since $f(\mathbf{u}) < f^*$, there exists $\mathbf{v} \in A$ such that
$$f(\mathbf{u}) < f(\mathbf{v}). \quad (9.93)$$
Consider
$$\mathbf{v} - \mathbf{u} = (\mathbf{v}_1 - \mathbf{u}_1, \mathbf{0}) + (\mathbf{0}, \mathbf{v}_2 - \mathbf{u}_2). \quad (9.94)$$
Let $\tilde{\mathbf{z}}$ be the unit vector in the direction of $\mathbf{v} - \mathbf{u}$, $\mathbf{z}_1$ be the unit vector in the direction of $(\mathbf{v}_1 - \mathbf{u}_1, \mathbf{0})$, and $\mathbf{z}_2$ be the unit vector in the direction of $(\mathbf{0}, \mathbf{v}_2 - \mathbf{u}_2)$. Then
$$\|\mathbf{v} - \mathbf{u}\|\tilde{\mathbf{z}} = \|\mathbf{v}_1 - \mathbf{u}_1\|\mathbf{z}_1 + \|\mathbf{v}_2 - \mathbf{u}_2\|\mathbf{z}_2, \quad (9.95)$$
or
$$\tilde{\mathbf{z}} = \alpha_1\mathbf{z}_1 + \alpha_2\mathbf{z}_2, \quad (9.96)$$
where
$$\alpha_i = \frac{\|\mathbf{v}_i - \mathbf{u}_i\|}{\|\mathbf{v} - \mathbf{u}\|}, \quad (9.97)$$
$i = 1, 2$.

[Figure 9.3: The vectors u, v, z̃, z1, and z2.]

Figure 9.3 is an illustration of the vectors $\mathbf{u}$, $\mathbf{v}$, $\tilde{\mathbf{z}}$, $\mathbf{z}_1$, and $\mathbf{z}_2$. We see from (9.90) that $f$ attains its maximum value at $\mathbf{u} = (\mathbf{u}_1, \mathbf{u}_2)$ when $\mathbf{u}_2$ is fixed. In particular, $f$ attains its maximum value at $\mathbf{u}$ along the line passing through $(\mathbf{u}_1, \mathbf{u}_2)$ and $(\mathbf{v}_1, \mathbf{u}_2)$. Let $\nabla f$ denote the gradient of $f$. Since $f$ is continuous and has continuous partial derivatives, the directional derivative of $f$ at $\mathbf{u}$ in the direction of $\mathbf{z}_1$ exists and is given by $\nabla f \cdot \mathbf{z}_1$. It follows from the concavity of $f$ that $f$ is concave along the line passing through $(\mathbf{u}_1, \mathbf{u}_2)$ and $(\mathbf{v}_1, \mathbf{u}_2)$. Since $f$ attains its maximum value at $\mathbf{u}$, the derivative of $f$ along this line vanishes. Then we see that
$$\nabla f \cdot \mathbf{z}_1 = 0. \quad (9.98)$$
Similarly, we see from (9.92) that
$$\nabla f \cdot \mathbf{z}_2 = 0. \quad (9.99)$$
Then from (9.96), the directional derivative of $f$ at $\mathbf{u}$ in the direction of $\tilde{\mathbf{z}}$ is given by
$$\nabla f \cdot \tilde{\mathbf{z}} = \alpha_1(\nabla f \cdot \mathbf{z}_1) + \alpha_2(\nabla f \cdot \mathbf{z}_2) = 0. \quad (9.100)$$
Since $f$ is concave along the line passing through $\mathbf{u}$ and $\mathbf{v}$, this implies
$$f(\mathbf{u}) \ge f(\mathbf{v}), \quad (9.101)$$
which is a contradiction to (9.93). Hence, we conclude that $\Delta f(\mathbf{u}) > 0$. □

Although we have proved that the algorithm cannot be trapped at $\mathbf{u}$ if $f(\mathbf{u}) < f^*$, $f^{(k)}$ does not necessarily converge to $f^*$, because the increment in $f^{(k)}$ in each step may be arbitrarily small. In order to prove the desired convergence, we will show in the next theorem that this cannot be the case.

Theorem 9.5. If $f$ is concave, then $f^{(k)} \to f^*$.

Proof. We have already shown in Section 9.1 that $f^{(k)}$ necessarily converges, say to $f'$. Hence, for any $\epsilon > 0$ and all sufficiently large $k$,
$$f' - \epsilon \le f^{(k)} \le f'. \quad (9.102)$$
Let
$$\gamma = \min_{\mathbf{u} \in A'} \Delta f(\mathbf{u}), \quad (9.103)$$
where
$$A' = \{\mathbf{u} \in A : f' - \epsilon \le f(\mathbf{u}) \le f'\}. \quad (9.104)$$
Since $f$ has continuous partial derivatives, $\Delta f(\mathbf{u})$ is a continuous function of $\mathbf{u}$. Then the minimum in (9.103) exists because $A'$ is compact. ($A'$ is compact because it is the inverse image of a closed interval under a continuous function and $A$ is bounded.)
We now show that f′ < f∗ will lead to a contradiction if f is concave. If f′ < f∗, then from Lemma 9.4, we see that ∆f(u) > 0 for all u ∈ A′, and hence γ > 0. Since f^(k) = f(u^(k)) satisfies (9.102), u^(k) ∈ A′, and

f^(k+1) − f^(k) = ∆f(u^(k)) ≥ γ (9.105)

for all sufficiently large k. Therefore, no matter how small γ is, f^(k) will eventually be greater than f′, which is a contradiction to f^(k) → f′. Hence, we conclude that f^(k) → f∗. ⊓⊔

9.3.2 Convergence to the Channel Capacity

In order to show that the BA algorithm for computing the channel capacity converges as intended, i.e., f^(k) → C, we only need to show that the function f defined in (9.39) is concave. Toward this end, for

f(r, q) = Σ_x Σ_y r(x) p(y|x) log [q(x|y) / r(x)] (9.106)

defined in (9.39), we consider two ordered pairs (r1, q1) and (r2, q2) in A1 × A2, where A1 and A2 are defined in (9.40) and (9.41), respectively. For any 0 ≤ λ ≤ 1 and λ̄ = 1 − λ, an application of the log-sum inequality (Theorem 2.32) gives

(λr1(x) + λ̄r2(x)) log [(λr1(x) + λ̄r2(x)) / (λq1(x|y) + λ̄q2(x|y))]
≤ λr1(x) log [r1(x) / q1(x|y)] + λ̄r2(x) log [r2(x) / q2(x|y)]. (9.107)

Taking reciprocals inside the logarithms reverses the inequality:

(λr1(x) + λ̄r2(x)) log [(λq1(x|y) + λ̄q2(x|y)) / (λr1(x) + λ̄r2(x))]
≥ λr1(x) log [q1(x|y) / r1(x)] + λ̄r2(x) log [q2(x|y) / r2(x)], (9.108)

and upon multiplying by p(y|x) and summing over all x and y, we obtain

f(λr1 + λ̄r2, λq1 + λ̄q2) ≥ λ f(r1, q1) + λ̄ f(r2, q2). (9.109)

Therefore, f is concave. Hence, we have shown that f^(k) → C.

Chapter Summary

Channel Capacity: For a discrete memoryless channel p(y|x),

C = sup_{r>0} max_q Σ_x Σ_y r(x) p(y|x) log [q(x|y) / r(x)],

where the maximization is taken over all q that satisfy q(x|y) = 0 if and only if p(y|x) = 0.

Computation of Channel Capacity: Start with any strictly positive input distribution r^(0). Compute q^(0), r^(1), q^(1), r^(2), ··· alternately by

q^(k)(x|y) = r^(k)(x) p(y|x) / Σ_{x′} r^(k)(x′) p(y|x′)

and

r^(k)(x) = Π_y q^(k−1)(x|y)^{p(y|x)} / Σ_{x′} Π_y q^(k−1)(x′|y)^{p(y|x′)}.

Then r^(k) tends to the capacity-achieving input distribution as k → ∞.

Rate-Distortion Function: For s ≤ 0, the tangent to the rate-distortion function R(D) at (Ds, R(Ds)) has slope s and intersects the ordinate at R(Ds) − sDs, which is given by

inf_{Q>0} min_{t>0} [ Σ_{x,x̂} p(x) Q(x̂|x) log [Q(x̂|x) / t(x̂)] − s Σ_{x,x̂} p(x) Q(x̂|x) d(x, x̂) ].

The curve R(D), 0 ≤ D ≤ Dmax, is traced out by the collection of all such tangents.

Computation of the Rate-Distortion Function: Start with any strictly positive transition matrix Q^(0). Compute t^(0), Q^(1), t^(1), Q^(2), ··· alternately by

t^(k)(x̂) = Σ_x p(x) Q^(k)(x̂|x)

and

Q^(k)(x̂|x) = t^(k−1)(x̂) e^{s d(x,x̂)} / Σ_{x̂′} t^(k−1)(x̂′) e^{s d(x,x̂′)}.

Let

f(Q, t) = Σ_x Σ_x̂ p(x) Q(x̂|x) log [Q(x̂|x) / t(x̂)] − s Σ_x Σ_x̂ p(x) Q(x̂|x) d(x, x̂).

Then f(Q^(k), t^(k)) → R(Ds) − sDs as k → ∞.

Problems

1. Implement the BA algorithm for computing the channel capacity.
2. Implement the BA algorithm for computing the rate-distortion function.
3. Explain why, in the BA algorithm for computing the channel capacity, we should not choose an initial input distribution that contains zero probability masses.
4. Prove Lemma 9.3.
5. Consider f(Q, t) in the BA algorithm for computing the rate-distortion function.
   a) Show that for fixed s and t, f(Q, t) is minimized by
      Q(x̂|x) = t(x̂) e^{s d(x,x̂)} / Σ_{x̂′} t(x̂′) e^{s d(x,x̂′)}.
   b) Show that f(Q, t) is convex.
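The two iterations in the summary above are short enough to implement directly, which is also what Problems 1 and 2 ask for. The following is a minimal NumPy sketch, not from the text: the names ba_capacity and ba_rate_distortion, the stopping rule, and the example channel are our own choices, and the sketch assumes a strictly positive initial distribution (cf. Problem 3), a source distribution with no zero masses, and that every output symbol is reachable.

```python
import numpy as np

def ba_capacity(P, tol=1e-10, max_iter=100_000):
    """Blahut-Arimoto iteration for the capacity (in bits) of a DMC.
    P[x, y] = p(y|x); each row of P sums to 1."""
    nx, ny = P.shape
    r = np.full(nx, 1.0 / nx)                  # strictly positive r(0)
    for _ in range(max_iter):
        Q = r[:, None] * P                     # r(x) p(y|x)
        Q = Q / Q.sum(axis=0, keepdims=True)   # q(x|y): normalize each column
        # log r_new(x) is proportional to sum_y p(y|x) log q(x|y);
        # terms with p(y|x) = 0 contribute nothing.
        logQ = np.log(np.where(Q > 0, Q, 1.0))
        logr = (P * logQ).sum(axis=1)
        r_new = np.exp(logr - logr.max())
        r_new = r_new / r_new.sum()
        done = np.max(np.abs(r_new - r)) < tol
        r = r_new
        if done:
            break
    py = r @ P                                 # output distribution p(y)
    ratio = np.where(P > 0, P / py, 1.0)       # p(y|x)/p(y), with 0 log 0 = 0
    return float(np.sum(r[:, None] * P * np.log2(ratio))), r

def ba_rate_distortion(p, d, s, tol=1e-10, max_iter=100_000):
    """One point of the rate-distortion curve for a fixed slope s <= 0.
    p[x] > 0 is the source distribution, d[x, xh] the distortion measure;
    returns (R, D) in bits at the point where the tangent to R(D) has slope s."""
    nx, nxh = d.shape
    Q = np.full((nx, nxh), 1.0 / nxh)          # strictly positive Q(0)
    for _ in range(max_iter):
        t = p @ Q                              # t(xh) = sum_x p(x) Q(xh|x)
        Q_new = t[None, :] * np.exp(s * d)
        Q_new = Q_new / Q_new.sum(axis=1, keepdims=True)
        done = np.max(np.abs(Q_new - Q)) < tol
        Q = Q_new
        if done:
            break
    t = p @ Q
    D = float(np.sum(p[:, None] * Q * d))
    R = float(np.sum(p[:, None] * Q * np.log2(Q / t[None, :])))
    return R, D

# Example: a binary symmetric channel with crossover probability 0.1.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(ba_capacity(P)[0])   # ≈ 0.5310 bits = 1 - H(0.1), uniform input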
Historical Notes

An iterative algorithm for computing the channel capacity was developed by Arimoto, where the convergence of the algorithm was proved. Blahut independently developed two similar algorithms, the first for computing the channel capacity and the second for computing the rate-distortion function. The convergence of Blahut's second algorithm was proved by Csiszár. These two algorithms are now commonly referred to as the Blahut-Arimoto algorithms. The simplified proof of convergence in this chapter is based on Yeung and Berger. The Blahut-Arimoto algorithms are special cases of a general iterative algorithm due to Csiszár and Tusnády, which also includes the expectation-maximization (EM) algorithm for fitting models from incomplete data and the algorithm due to Cover for finding the log-optimal portfolio for a stock market.

10 Differential Entropy

Our discussion in the previous chapters involved only discrete random variables. The actual values taken by these random variables did not play any role in establishing the results. In this chapter and the next, our discussion will involve random variables taking real values. The values taken by these random variables do play a crucial role in the discussion.

Let X be a real random variable with cumulative distribution function (CDF) FX(x) = Pr{X ≤ x}, which by definition is right-continuous. The random variable X is said to be
• discrete if FX(x) increases only at a countable number of values of x;
• continuous if FX(x) is continuous, or equivalently, Pr{X = x} = 0 for every value of x;
• mixed if it is neither discrete nor continuous.

The support of X, denoted by SX, is the set of all x such that FX(x) > FX(x − ϵ) for all ϵ > 0. For a function g defined on SX, we write

Eg(X) = ∫_{SX} g(x) dFX(x), (10.1)

where the right-hand side is a Lebesgue-Stieltjes integration which covers all cases (i.e., discrete, continuous, and mixed) for the CDF FX(x). It may be regarded as a notation for the expectation of g(X) with respect to FX(x) if the reader is not familiar with measure theory.

A nonnegative function fX(x) is called a probability density function (pdf) of X if

FX(x) = ∫_{−∞}^{x} fX(u) du (10.2)

for all x. Since

∫ fX(x) dx = FX(∞) = 1 < ∞, (10.3)

a pdf fX(x) can possibly take an infinite value only on a set with zero Lebesgue measure. Therefore, we can assume without loss of generality that fX(x) instead takes finite values on this set. If X has a pdf, then X is continuous, but not vice versa.

Let X and Y be two real random variables with joint CDF FXY(x, y) = Pr{X ≤ x, Y ≤ y}. The marginal CDF of X is given by FX(x) = FXY(x, ∞) (likewise for Y). A nonnegative function fXY(x, y) is called a joint pdf of X and Y if

FXY(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} fXY(u, v) dv du (10.4)

for all x and y. As for the case of a single random variable, we can assume without loss of generality that a joint pdf fXY(x, y) is finite for all x and y. For x ∈ SX, the conditional CDF of Y given {X = x} is defined as

FY|X(y|x) = ∫_{−∞}^{y} fY|X(v|x) dv, (10.5)

where

fY|X(y|x) = fXY(x, y) / fX(x) (10.6)

is the conditional pdf of Y given {X = x}.

All the above definitions and notations naturally extend to more than two real random variables. When there is no ambiguity, the subscripts specifying the random variables will be omitted. All the random variables in this chapter are assumed to be real. (For a discrete random variable X with a countable alphabet X, replacing X by any countable subset of ℜ leaves all information measures involving X, and possibly other random variables, unchanged. Therefore, we assume without loss of generality that a discrete random variable is real.)

The variance of a random variable X is defined as

varX = E(X − EX)² = EX² − (EX)². (10.7)
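Since (10.1) covers mixed distributions as well, here is a small self-contained illustration (ours, not from the text; it assumes NumPy). Take X = 0 with probability 0.3 and otherwise exponential with mean 1, so that FX has a jump of height 0.3 at 0 and X is mixed; then Eg(X) = 0.3 g(0) + 0.7 ∫_0^∞ g(x) e^{−x} dx, which a Monte Carlo sample average should reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u = rng.random(n)
# X = 0 with probability 0.3 (discrete part), else Exp(1) (continuous part)
x = np.where(u < 0.3, 0.0, rng.exponential(1.0, n))

# E g(X) for g(x) = x^2: the point mass contributes 0.3 * 0 = 0 and the
# continuous part contributes 0.7 * E[Exp(1)^2] = 0.7 * 2 = 1.4.
print(np.mean(x**2))   # ≈ 1.4
```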
The covariance between two random variables X and Y is defined as

cov(X, Y) = E(X − EX)(Y − EY) = E(XY) − (EX)(EY). (10.8)

For a random vector X = [X1 X2 ··· Xn]⊤, the covariance matrix is defined as

KX = E(X − EX)(X − EX)⊤ = [cov(Xi, Xj)], (10.9)

and the correlation matrix is defined as

K̃X = EXX⊤ = [EXiXj]. (10.10)

Then

KX = E(X − EX)(X − EX)⊤ (10.11)
= E[XX⊤ − X(EX⊤) − (EX)X⊤ + (EX)(EX⊤)] (10.12)
= EXX⊤ − (EX)(EX⊤) − (EX)(EX⊤) + (EX)(EX⊤) (10.13)
= EXX⊤ − (EX)(EX⊤) (10.14)
= K̃X − (EX)(EX)⊤. (10.15)

This implies that if EX = 0, then K̃X = KX. It can readily be verified that, in general,

K̃X = KX + (EX)(EX)⊤ (10.16)

and

KX = K̃_{X−EX}, (10.17)

i.e., the covariance matrix of X is the correlation matrix of the centered vector X − EX. Therefore, a correlation matrix is a covariance matrix, and vice versa. When there is no ambiguity, the subscripts in KX and K̃X will be omitted.

Let N(µ, σ²) denote the Gaussian distribution with mean µ and variance σ², i.e., the pdf of the distribution is given by

f(x) = (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)} (10.18)

for −∞ < x < ∞. More generally, let N(µ, K) denote the multivariate Gaussian distribution with mean µ and covariance matrix K, i.e., the joint pdf of the distribution is given by

f(x) = [1 / ((√(2π))^n |K|^{1/2})] e^{−(1/2)(x−µ)⊤ K^{−1} (x−µ)} (10.19)

for all x ∈ ℜ^n, where K is a symmetric positive definite matrix (see Definitions 10.1 and 10.2 below) and |K| is the determinant of K.

In the rest of the chapter, we will define various information measures under suitable conditions. Whenever these information measures are subsequently used, they are assumed to be defined.

10.1 Preliminaries

In this section, we present some preliminary results on matrices and linear transformations of random variables. All vectors and matrices are assumed to be real.

Definition 10.1. A square matrix K is symmetric if K⊤ = K.

Definition 10.2. An n × n matrix K is positive definite if

x⊤Kx > 0 (10.20)

for all nonzero column n-vectors x, and is positive semidefinite if

x⊤Kx ≥ 0 (10.21)

for all column n-vectors x.

Proposition 10.3. A covariance (correlation) matrix is both symmetric and positive semidefinite.

Proof. Omitted. ⊓⊔

If a matrix K is symmetric, it can be diagonalized as

K = QΛQ⊤, (10.22)

where Λ is a diagonal matrix and Q (also Q⊤) is an orthogonal matrix, i.e.,

Q^{−1} = Q⊤, (10.23)

or

QQ⊤ = Q⊤Q = I. (10.24)

The latter says that the rows (columns) of Q form an orthonormal system. Since

|Q|² = |Q||Q⊤| = |QQ⊤| = |I| = 1, (10.25)

we have

|Q| = |Q⊤| = ±1. (10.26)

If (10.22) holds, we also say that QΛQ⊤ is a diagonalization of K. From (10.22) and (10.24), we have

KQ = (QΛQ⊤)Q = QΛ(Q⊤Q) = QΛ. (10.27)

Let λi and qi denote the ith diagonal element of Λ and the ith column of Q, respectively. Then (10.27) can be written as

Kqi = λi qi (10.28)

for all i, i.e., qi is an eigenvector of K with eigenvalue λi. The next proposition further shows that these eigenvalues are nonnegative if K is positive semidefinite.

Proposition 10.4. The eigenvalues of a positive semidefinite matrix are nonnegative.

Proof. Let K be a positive semidefinite matrix, and let q be an eigenvector of K with eigenvalue λ, i.e.,

Kq = λq. (10.29)

Since K is positive semidefinite,

0 ≤ q⊤Kq = q⊤(λq) = λ(q⊤q). (10.30)

Then we conclude that λ ≥ 0 because q⊤q > 0 for an eigenvector q. ⊓⊔
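Proposition 10.4 and the diagonalization (10.22) are easy to check numerically; the sketch below (ours, assuming NumPy, with an arbitrarily chosen mixing matrix) also previews the decorrelation result, Proposition 10.6, which is proved below.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[2.0, 1.0, 0.0],
              [0.5, 1.5, 0.3],
              [0.1, 0.2, 1.0]])
X = A @ rng.normal(size=(3, 100_000))          # correlated zero-mean samples
K = np.cov(X)                                  # sample covariance matrix K_X

lam, Q = np.linalg.eigh(K)                     # K = Q diag(lam) Q^T, Q orthogonal
print(lam)                                     # nonnegative (Proposition 10.4)
print(np.allclose(Q @ np.diag(lam) @ Q.T, K))  # the diagonalization (10.22)

Y = Q.T @ X                                    # Y = Q^T X (Proposition 10.6)
print(np.round(np.cov(Y), 3))                  # ≈ diag(lam): Y is decorrelated
```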
The above discussions on diagonalization apply to a covariance matrix because a covariance matrix is both symmetric and positive semidefinite. As we will see, by diagonalizing the covariance matrix, a set of correlated random variables can be decorrelated by an orthogonal transformation. On the other hand, a set of correlated random variables can be regarded as an orthogonal transformation of a set of uncorrelated random variables. This is particularly important in the context of Gaussian random variables, because a set of jointly distributed Gaussian random variables are mutually independent if and only if they are uncorrelated.

Proposition 10.5. Let Y = AX, where X and Y are column vectors of n random variables and A is an n × n matrix. Then

KY = AKXA⊤ (10.31)

and

K̃Y = AK̃XA⊤. (10.32)

Proof. To prove (10.31), consider

KY = E(Y − EY)(Y − EY)⊤ (10.33)
= E[A(X − EX)][A(X − EX)]⊤ (10.34)
= E[A(X − EX)(X − EX)⊤A⊤] (10.35)
= A[E(X − EX)(X − EX)⊤]A⊤ (10.36)
= AKXA⊤. (10.37)

The proof of (10.32) is similar. ⊓⊔

Proposition 10.6. Let X and Y be column vectors of n random variables such that

Y = Q⊤X, (10.38)

where QΛQ⊤ is a diagonalization of KX. Then KY = Λ, i.e., the random variables in Y are uncorrelated and var Yi = λi, the ith diagonal element of Λ.

Remark. The matrix KX is positive semidefinite, so that λi, being an eigenvalue of KX, is nonnegative by Proposition 10.4, as required for being the variance of a random variable.

Proof of Proposition 10.6. By Proposition 10.5,

KY = Q⊤KXQ (10.39)
= Q⊤(QΛQ⊤)Q (10.40)
= (Q⊤Q)Λ(Q⊤Q) (10.41)
= Λ. (10.42)

Since KY = Λ is a diagonal matrix, the random variables in Y are uncorrelated. Furthermore, the variance of Yi is given by the ith diagonal element of KY = Λ, i.e., λi. The proposition is proved. ⊓⊔

Corollary 10.7. Let X be a column vector of n random variables such that QΛQ⊤ is a diagonalization of KX. Then

X = QY, (10.43)

where Y is the column vector of n uncorrelated random variables prescribed in Proposition 10.6.

Proposition 10.8. Let X, Y, and Z be vectors of n random variables such that X and Z are independent and Y = X + Z. Then

KY = KX + KZ. (10.44)

Proof. Omitted. ⊓⊔

In communication engineering, the second moment of a random variable X is very often referred to as the energy of X. The total energy of a random vector X is then equal to E Σ_i Xi². The following proposition shows that the total energy of a random vector is preserved by an orthogonal transformation.

Proposition 10.9. Let Y = QX, where X and Y are column vectors of n random variables and Q is an orthogonal matrix. Then

E Σ_{i=1}^{n} Yi² = E Σ_{i=1}^{n} Xi². (10.45)

Proof. Consider

Σ_{i=1}^{n} Yi² = Y⊤Y (10.46)
= (QX)⊤(QX) (10.47)
= X⊤(Q⊤Q)X (10.48)
= X⊤X (10.49)
= Σ_{i=1}^{n} Xi². (10.50)

The proposition is proved upon taking expectation on both sides. ⊓⊔

10.2 Definition

We now introduce the differential entropy for continuous random variables as the analog of the entropy for discrete random variables.

Definition 10.10. The differential entropy h(X) of a continuous random variable X with pdf f(x) is defined as

h(X) = −∫_S f(x) log f(x) dx = −E log f(X). (10.51)

The entropy of a discrete random variable X is a measure of the average amount of information contained in X, or equivalently, the average amount of uncertainty removed upon revealing the outcome of X.
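Before contrasting entropy with differential entropy, note that Definition 10.10 can be exercised directly by numerical integration. The following sketch (ours, not from the text; it assumes NumPy and a plain Riemann sum) evaluates h(X) in bits for a uniform and a Gaussian pdf; the closed forms log a and (1/2) log(2πeσ²) are derived in Examples 10.12 and 10.13 below.

```python
import numpy as np

def h_bits(f, lo, hi, m=2_000_001):
    # Riemann-sum approximation of -∫ f(x) log2 f(x) dx on [lo, hi]
    x, dx = np.linspace(lo, hi, m, retstep=True)
    fx = f(x)
    safe = np.where(fx > 0, fx, 1.0)           # 0 log 0 = 0 convention
    return -np.sum(fx * np.log2(safe)) * dx

a, sigma = 0.5, 2.0
print(h_bits(lambda x: np.full_like(x, 1/a), 0.0, a))   # ≈ log2(0.5) = -1
gauss = lambda x: np.exp(-x**2/(2*sigma**2)) / np.sqrt(2*np.pi*sigma**2)
print(h_bits(gauss, -12*sigma, 12*sigma))               # ≈ 3.047
print(0.5*np.log2(2*np.pi*np.e*sigma**2))               # closed form (10.67)
```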
For a discrete random variable, that interpretation of entropy was justified by the asymptotic achievability of the entropy bound for zero-error data compression discussed in Chapter 4, as well as by the source coding theorem discussed in Chapter 5.

However, although entropy and differential entropy have similar mathematical forms, the latter does not serve as a measure of the average amount of information contained in a continuous random variable. In fact, a continuous random variable generally contains an infinite amount of information, as explained in the following example.

Example 10.11. Let X be uniformly distributed on [0, 1). Then we can write

X = .X1X2X3 ···, (10.52)

the dyadic expansion of X, where X1, X2, X3, ··· is a sequence of fair bits (i.e., i.i.d. bits, each distributed uniformly on {0, 1}). Then

H(X) = H(X1, X2, X3, ···) (10.53)
= Σ_{i=1}^{∞} H(Xi) (10.54)
= Σ_{i=1}^{∞} 1 (10.55)
= ∞. (10.56)

In the following, we give two examples in which the differential entropy can be evaluated explicitly.

Example 10.12 (Uniform Distribution). Let X be uniformly distributed on [0, a). Then

h(X) = −∫_0^a (1/a) log (1/a) dx = log a. (10.57)

From this example, we see immediately that h(X) < 0 if a < 1. This poses no contradiction because, as we have mentioned, the differential entropy does not serve as a measure of the average amount of information contained in X. The physical meaning of differential entropy will be understood through the AEP for continuous random variables to be discussed in Section 10.4.

Example 10.13 (Gaussian Distribution). Let X ∼ N(0, σ²) and let e be the base of the logarithm. Then

h(X) = −∫ f(x) ln f(x) dx (10.58)
= −∫ f(x) [−x²/(2σ²) − ln √(2πσ²)] dx (10.59)
= (1/(2σ²)) ∫ x² f(x) dx + ln √(2πσ²) ∫ f(x) dx (10.60)
= EX²/(2σ²) + (1/2) ln(2πσ²) (10.61)
= (varX + (EX)²)/(2σ²) + (1/2) ln(2πσ²) (10.62)
= (σ² + 0)/(2σ²) + (1/2) ln(2πσ²) (10.63)
= 1/2 + (1/2) ln(2πσ²) (10.64)
= (1/2) ln e + (1/2) ln(2πσ²) (10.65)
= (1/2) ln(2πeσ²) (10.66)

in nats. Changing the base of the logarithm to any chosen positive value, we obtain

h(X) = (1/2) log(2πeσ²). (10.67)

The following two basic properties of differential entropy can readily be proved from the definition.

Theorem 10.14 (Translation).

h(X + c) = h(X). (10.68)

Proof. Let Y = X + c. Then fY(y) = fX(y − c) and SY = {x + c : x ∈ SX}. Letting x = y − c in (10.51), we have

h(X) = −∫_{SX} fX(x) log fX(x) dx (10.69)
= −∫_{SY} fX(y − c) log fX(y − c) dy (10.70)
= −∫_{SY} fY(y) log fY(y) dy (10.71)
= h(Y) (10.72)
= h(X + c), (10.73)

accomplishing the proof. ⊓⊔

Theorem 10.15 (Scaling). For a ≠ 0,

h(aX) = h(X) + log |a|. (10.74)

Proof. Let Y = aX. Then fY(y) = (1/|a|) fX(y/a) and SY = {ax : x ∈ SX}. Letting x = y/a in (10.51), we have

h(X) = −∫_{SX} fX(x) log fX(x) dx (10.75)
= −∫_{SY} fX(y/a) log fX(y/a) (dy/|a|) (10.76)
= −∫_{SY} (1/|a|) fX(y/a) [log ((1/|a|) fX(y/a)) + log |a|] dy (10.77)
= −∫_{SY} fY(y) log fY(y) dy − log |a| ∫_{SY} fY(y) dy (10.78)
= h(Y) − log |a| (10.79)
= h(aX) − log |a|. (10.80)

Hence,

h(aX) = h(X) + log |a|, (10.81)

accomplishing the proof. ⊓⊔

Example 10.16. We illustrate Theorem 10.14 and Theorem 10.15 by means of the Gaussian distribution. Let X ∼ N(µX, σX²). By Theorem 10.14 (and Example 10.13),

h(X) = (1/2) log(2πeσX²). (10.82)

Let Y = aX. Then Y ∼ N(µY, σY²), where µY = aµX and σY² = a²σX². By (10.82),

h(Y) = (1/2) log(2πeσY²) = (1/2) log(2πea²σX²) = (1/2) log(2πeσX²) + log |a|, (10.83)

verifying Theorem 10.15.

Theorem 10.14 says that the differential entropy of a random variable is unchanged by translation. Both this and the scaling property of Theorem 10.15 can be checked numerically; a sketch follows.
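In the promised sketch (ours, assuming NumPy), the differential entropy is estimated as the sample average of −log f(X), which converges to h(X) = −E log f(X) by the law of large numbers; the same idea reappears as the empirical differential entropy in Section 10.4.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, a, c = 2.0, 3.0, 5.0
n = 1_000_000

def neg_log2_gauss_pdf(z, mu, s2):
    # -log2 of the N(mu, s2) density, evaluated at z
    return 0.5*np.log2(2*np.pi*s2) + (z - mu)**2/(2*s2) * np.log2(np.e)

x = rng.normal(0.0, sigma, n)
y = a*x + c                                    # Y = aX + c ~ N(c, a^2 sigma^2)

h_x = np.mean(neg_log2_gauss_pdf(x, 0.0, sigma**2))    # ≈ 0.5 log2(2πeσ²)
h_y = np.mean(neg_log2_gauss_pdf(y, c, (a*sigma)**2))  # ≈ h(X) + log2|a|
print(h_x, 0.5*np.log2(2*np.pi*np.e*sigma**2))
print(h_y - h_x, np.log2(abs(a)))              # translation by c has no effect
```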
Theorem 10.15 says that the differential entropy of a random variable is generally changed by scaling. Specifically, if |a| > 1, the differential entropy is increased by log |a|. If |a| < 1, the differential entropy is decreased by −log |a| (note that −log |a| > 0). If a = −1, the differential entropy is unchanged. These properties suggest that the differential entropy of a random variable depends only on the “spread” of the pdf. More specifically, the differential entropy increases with the “spread” of the pdf. This point will be further elaborated in Section 10.6. 10.3 Joint Differential Entropy, Conditional (Differential) Entropy, and Mutual Information The definition for differential entropy is readily extended to multiple contin-uous random variables. In the rest of the chapter, we let X = [X1 X2 · · · Xn]. Definition 10.17. The joint differential entropy h(X) of a random vector X with joint pdf f(x) is defined as h(X) = − Z S f(x) log f(x)dx = −E log f(X). (10.84) It follows immediately from the above definition that if X1, X2, · · · , Xn are mutually independent, then h(X) = n X i=1 h(Xi). (10.85) The following two theorems are straightforward generalizations of Theo-rems 10.14 and 10.15, respectively. The proofs are omitted. 10.3 Joint and Conditional Differential Entropy 239 Theorem 10.18 (Translation). Let c be a column vector in ℜn. Then h(X + c) = h(X). (10.86) Theorem 10.19 (Scaling). Let A be a nonsingular n × n matrix. Then h(AX) = h(X) + log |det(A)|. (10.87) Theorem 10.20 (Multivariate Gaussian Distribution). Let X ∼N(µ, K). Then h(X) = 1 2 log [(2πe)n|K|] . (10.88) Proof. Let K be diagonalizable as QΛQ⊤. Write X = QY as in Corollary 10.7, where the random variables in Y are uncorrelated with var Yi = λi, the ith diagonal element of Λ. Since X is Gaussian, so is Y. Then the random variables in Y are mutually independent because they are uncorrelated. Now consider h(X) = h(QY) (10.89) a) = h(Y) + log |det(Q)| (10.90) b) = h(Y) + 0 (10.91) c) = n X i=1 h(Yi) (10.92) d) = n X i=1 1 2 log(2πeλi) (10.93) = 1 2 log " (2πe)n n Y i=1 λi # (10.94) e) = 1 2 log[(2πe)n|Λ|] (10.95) f) = 1 2 log[(2πe)n|K|]. (10.96) In the above a) follows from Theorem 10.19; b) follows from (10.26); c) follows from (10.85) since Y1, Y2, · · · , Yn are mutually independent; d) follows from Example 10.16; 240 10 Differential Entropy e) follows because Λ is a diagonal matrix; f) follows because |Λ| = |Q||Λ||Q⊤| = |QΛQ⊤| = |K|. (10.97) The theorem is proved. ⊓ ⊔ In describing a communication system, we very often specify the relation between two random variables X and Y through a conditional distribution p(y|x) (if Y is discrete) or a conditional pdf f(y|x) (if Y is continuous) defined for all x, even though certain x may not be in SX. This is made precise by the following two definitions. Definition 10.21. Let X and Y be two jointly distributed random variables with Y being discrete. The random variable Y is related to the random variable X through a conditional distribution p(y|x) defined for all x means that for all x and y, Pr{X ≤x, Y = y} = Z x −∞ pY |X(y|u)dFX(u). (10.98) Definition 10.22. Let X and Y be two jointly distributed random variables with Y being continuous. The random variable Y is related to the random variable X through a conditional pdf f(y|x) defined for all x means that for all x and y, FXY (x, y) = Z x −∞ FY |X(y|u)dFX(u), (10.99) where FY |X(y|x) = Z y −∞ fY |X(v|x)dv. (10.100) Definition 10.23. 
Let X and Y be jointly distributed random variables where Y is continuous and is related to X through a conditional pdf f(y|x) defined for all x. The conditional differential entropy of Y given {X = x} is defined as h(Y |X = x) = − Z SY (x) f(y|x) log f(y|x)dy (10.101) where SY (x) = {y : f(y|x) > 0}, and the conditional differential entropy of Y given X is defined as h(Y |X) = − Z SX h(Y |X = x)dF(x) = −E log f(Y |X). (10.102) 10.3 Joint and Conditional Differential Entropy 241 Proposition 10.24. Let X and Y be jointly distributed random variables where Y is continuous and is related to X through a conditional pdf f(y|x) defined for all x. Then f(y) exists and is given by f(y) = Z f(y|x)dF(x). (10.103) Proof. From (10.99) and (10.100), we have FY (y) = FXY (∞, y) = Z Z y −∞ fY |X(v|x) dv dF(x). (10.104) Since fY |X(v|x) is nonnegative and Z Z y −∞ fY |X(v|x) dv dF(x) ≤ Z Z fY |X(v|x) dv dF(x) (10.105) = Z dF(x) (10.106) = 1, (10.107) fY |X(v|x) is absolutely integrable. By Fubini’s theorem4, the order of integra-tion in the iterated integral in (10.104) can be exchanged. Therefore, FY (y) = Z y −∞ Z fY |X(v|x)dF(x)  dv, (10.108) implying (10.103) (cf. (10.2)). The proposition is proved. ⊓ ⊔ The above proposition says that if Y is related to X through a conditional pdf f(y|x), then the pdf of Y exists regardless of the distribution of X. The next proposition is a generalization to random vectors, and the proof is omit-ted. The theory in the rest of this chapter and in the next chapter will be developed around this important fact. Proposition 10.25. Let X and Y be jointly distributed random vectors where Y is continuous and is related to X through a conditional pdf f(y|x) defined for all x. Then f(y) exists and is given by f(y) = Z f(y|x)dF(x). (10.109) 4 See for example . 242 10 Differential Entropy Definition 10.26. Let X and Y be jointly distributed random variables where Y is continuous and is related to X through a conditional pdf f(y|x) defined for all x. The mutual information between X and Y is defined as I(X; Y ) = Z SX Z SY (x) f(y|x) log f(y|x) f(y) dy dF(x) (10.110) = E log f(Y |X) f(Y ) , (10.111) where f(y) exists and is given in (10.103) by Proposition 10.24. When both X and Y are continuous and f(x, y) exists, I(X; Y ) = E log f(Y |X) f(Y ) = E log f(X, Y ) f(X)f(Y ). (10.112) Together with our discussion on discrete random variables in Chapter 2, the mutual information I(X; Y ) is defined when each of the random variables involved can be either discrete or continuous. In the same way, we can define the conditional mutual information I(X; Y |T). Definition 10.27. Let X, Y , and T be jointly distributed random variables where Y is continuous and is related to (X, T) through a conditional pdf f(y|x, t) defined for all x and t. The mutual information between X and Y given T is defined as I(X; Y |T) = Z ST I(X; Y |T = t)dF(t) = E log f(Y |X, T) f(Y |T) , (10.113) where I(X; Y |T = t) = Z SX(t) Z SY (x,t) f(y|x, t) log f(y|x, t) f(y|t) dy dF(x|t). (10.114) We now give a physical interpretation of I(X; Y ) when X and Y have a joint pdf f(x, y). For simplicity, we assume that f(x, y) > 0 for all x and y. Let ∆be a small positive quantity. For all integer i, define the interval Ai x = [ i∆, (i + 1)∆) (10.115) in ℜ, and for all integer j, define the interval Aj y = [ j∆, (j + 1)∆). (10.116) For all integers i and j, define the set Ai,j xy = Ai x × Aj y, (10.117) 10.3 Joint and Conditional Differential Entropy 243 which corresponds to a rectangle in ℜ2. 
We now introduce a pair of discrete random variables ˆ X∆and ˆ Y∆defined by  ˆ X∆= i if X ∈Ai x ˆ Y∆= j if Y ∈Aj y. (10.118) The random variables ˆ X∆and ˆ Y∆are quantizations of the continuous random variables X and Y , respectively. For all i and j, let (xi, yj) ∈Ai,j xy. Then I( ˆ X∆; ˆ Y∆) = X i X j Pr{( ˆ X∆, ˆ Y∆) = (i, j)} log Pr{( ˆ X∆, ˆ Y∆) = (i, j)} Pr{ ˆ X∆= i}Pr{ ˆ Y∆= j} (10.119) ≈ X i X j f(xi, yj)∆2 log f(xi, yj)∆2 (f(xi)∆)(f(yj)∆) (10.120) = X i X j f(xi, yj)∆2 log f(xi, yj) f(xi)f(yj) (10.121) ≈ Z Z f(x, y) log f(x, y) f(x)f(y)dxdy (10.122) = I(X; Y ). (10.123) Therefore, I(X; Y ) it can be interpreted as the limit of I( ˆ X∆; ˆ Y∆) as ∆→0. This interpretation carries over to the case when X and Y have a general joint distribution5 (see Dobrushin ). As I( ˆ X∆; ˆ Y∆) is always nonnegative, this suggests that I(X; Y ) is also always nonnegative, which will be established in Theorem 10.31. Definition 10.28. Let X be a continuous random variable and Y be a discrete random variable, where X is related to Y through a conditional pdf f(x|y). The conditional entropy of Y given X is defined as H(Y |X) = H(Y ) −I(X; Y ), (10.124) where I(X; Y ) is defined as in Definition 10.26. Proposition 10.29. For two random variables X and Y , 5 In the general setting, the mutual information between X and Y is defined as I(X; Y ) = Z SXY  log dPXY d(PX × PY )  dPXY , where PXY , PX, and PY are the probability measures of (X, Y ), X, and Y , respectively, and dPXY d(PX×PY ) denotes the Radon-Nikodym derivative of PXY with respect to the product measure PX × PY . 244 10 Differential Entropy h(Y ) = h(Y |X) + I(X; Y ) (10.125) if Y is continuous, and H(Y ) = H(Y |X) + I(X; Y ) (10.126) if Y is discrete. Proposition 10.30 (Chain Rule for Differential Entropy). h(X1, X2, · · · , Xn) = n X i=1 h(Xi|X1, · · · , Xi−1). (10.127) The proofs of these propositions are left as an exercise. Theorem 10.31. I(X; Y ) ≥0, (10.128) with equality if and only if X is independent of Y . Proof. Consider I(X; Y ) = Z SX Z SY (x) f(y|x) log f(y|x) f(y) dy dFX(x) (10.129) ≥(log e) Z SX Z SY (x) f(y|x)  1 −f(y) f(y|x)  dy dFX(x) (10.130) = (log e) Z SX "Z SY (x) f(y|x)dy − Z SY (x) f(y)dy # dFX(x) (10.131) ≥0, (10.132) where (10.130) results from an application of the fundamental inequality (Corollary 2.106), and (10.132) follows from Z SY (x) f(y)dy ≤1 = Z SY (x) f(y|x)dy. (10.133) This proves (10.128). For equality to hold in (10.128), equality must hold in (10.130) for all x ∈SX and all y ∈SY (x), and equality must hold in (10.132) for all x ∈SX. For the former, this is the case if and only if f(y|x) = f(y) for all x ∈SX and y ∈SY (x), (10.134) which implies Z SY (x) f(y)dy = Z SY (x) f(y|x)dy = 1, (10.135) 10.4 The AEP for Continuous Random Variables 245 i.e., equality holds in (10.132). Thus (10.134) is a necessary and sufficient condition for equality to hold in (10.128). It is immediate that if X and Y are independent, then (10.134) holds. It remains to prove the converse. To this end, observe that (10.135), implied by (10.134), is equivalent to that f(y) = 0 on SY \SY (x) a.e. (almost everywhere). By the definition of SY , this means that SY \SY (x) ⊂Sc Y , or SY = SY (x). Since this holds for all x ∈SX, we conclude that f(y|x) = f(y) for all (x, y) ∈ SX × SY , i.e., X and Y are independent. The theorem is proved. ⊓ ⊔ Corollary 10.32. I(X; Y |T) ≥0, (10.136) with equality if and only if X is independent of Y conditioning on T. Proof. This follows directly from (10.113). 
⊓⊔

Corollary 10.33 (Conditioning Does Not Increase Differential Entropy).

h(X|Y) ≤ h(X) (10.137)

with equality if and only if X and Y are independent.

Corollary 10.34 (Independence Bound for Differential Entropy).

h(X1, X2, ···, Xn) ≤ Σ_{i=1}^{n} h(Xi) (10.138)

with equality if and only if Xi, i = 1, 2, ···, n, are mutually independent.

10.4 The AEP for Continuous Random Variables

The weak AEP for discrete random variables discussed in Chapter 5 states that for n i.i.d. random variables X1, X2, ···, Xn with generic discrete random variable X, p(X1, X2, ···, Xn) is close to 2^{−nH(X)} with high probability when n is large (Theorem 5.1, Weak AEP I). This fundamental property of entropy leads to the definition of weak typicality, and as a consequence, the total number of weakly typical sequences is approximately equal to 2^{nH(X)} (Theorem 5.3, Weak AEP II).

In the following, we develop the AEP for continuous random variables in the same way we developed the weak AEP for discrete random variables. Some of the proofs are exactly the same as their discrete analogs, and they are omitted. We note that for continuous random variables, the notion of strong typicality does not apply, because the probability that a continuous random variable takes a particular value is equal to zero.

Theorem 10.35 (AEP I for Continuous Random Variables).

−(1/n) log f(X) → h(X) (10.139)

in probability as n → ∞, i.e., for any ϵ > 0, for n sufficiently large,

Pr{ |−(1/n) log f(X) − h(X)| < ϵ } > 1 − ϵ. (10.140)

Definition 10.36. The typical set W^n_{[X]ϵ} with respect to f(x) is the set of sequences x = (x1, x2, ···, xn) ∈ X^n such that

|−(1/n) log f(x) − h(X)| < ϵ, (10.141)

or equivalently,

h(X) − ϵ < −(1/n) log f(x) < h(X) + ϵ, (10.142)

where ϵ is an arbitrarily small positive real number. The sequences in W^n_{[X]ϵ} are called ϵ-typical sequences.

The quantity

−(1/n) log f(x) = −(1/n) Σ_{k=1}^{n} log f(xk) (10.143)

is called the empirical differential entropy of the sequence x. The empirical differential entropy of a typical sequence is close to the true differential entropy h(X).

If the pdf f(x) is continuous, we see from (10.143) that the empirical differential entropy is continuous in x. Therefore, if x is ϵ-typical, then all the sequences in the neighborhood of x are also ϵ-typical. As a consequence, the number of ϵ-typical sequences is uncountable, and it is not meaningful to discuss the cardinality of a typical set as in the discrete case. Instead, the "size" of a typical set is measured by its volume.

Definition 10.37. The volume of a set A in ℜ^n is defined as

Vol(A) = ∫_A dx. (10.144)

Theorem 10.38 (AEP II for Continuous Random Variables). The following hold for any ϵ > 0:

1) If x ∈ W^n_{[X]ϵ}, then

2^{−n(h(X)+ϵ)} < f(x) < 2^{−n(h(X)−ϵ)}. (10.145)

2) For n sufficiently large,

Pr{X ∈ W^n_{[X]ϵ}} > 1 − ϵ. (10.146)

3) For n sufficiently large,

(1 − ϵ) 2^{n(h(X)−ϵ)} < Vol(W^n_{[X]ϵ}) < 2^{n(h(X)+ϵ)}. (10.147)

Proof. Property 1 follows immediately from the definition of W^n_{[X]ϵ} in (10.142). Property 2 is equivalent to Theorem 10.35. To prove Property 3, we use the lower bound in (10.145) and consider

1 ≥ Pr{X ∈ W^n_{[X]ϵ}} (10.148)
= ∫_{W^n_{[X]ϵ}} f(x) dx (10.149)
> ∫_{W^n_{[X]ϵ}} 2^{−n(h(X)+ϵ)} dx (10.150)
= 2^{−n(h(X)+ϵ)} ∫_{W^n_{[X]ϵ}} dx (10.151)
= 2^{−n(h(X)+ϵ)} Vol(W^n_{[X]ϵ}), (10.152)

which implies

Vol(W^n_{[X]ϵ}) < 2^{n(h(X)+ϵ)}. (10.153)

Note that this upper bound holds for any n ≥ 1.
On the other hand, using the upper bound in (10.145) and Theorem 10.35, for n sufficiently large, we have 1 −ϵ < Pr{W n [X]ϵ} (10.154) = Z W n [X]ϵ f(x) dx (10.155) < Z W n [X]ϵ 2−n(h(X)−ϵ) dx (10.156) = 2−n(h(X)−ϵ) Vol(W n [X]ϵ). (10.157) Then Vol(W n [X]ϵ) > (1 −ϵ)2n(h(X)−ϵ). (10.158) Combining (10.153) and (10.158) gives Property 3. The theorem is proved. ⊓ ⊔ From the AEP for continuous random variables, we see that the volume of the typical set is approximately equal to 2nh(X) when n is large. This gives the following physical interpretations of differential entropy. First, the fact that 248 10 Differential Entropy h(X) can be negative does not incur any difficulty because 2nh(X) is always positive. Second, if the differential entropy is large, then the volume of the typical set is large; if the differential entropy is small (not in magnitude but in value), then the volume of the typical set is small. 10.5 Informational Divergence We first extend the definition of informational divergence introduced in Sec-tion 2.5 to pdf’s. Definition 10.39. Let f and g be two pdf’s defined on ℜn with supports Sf and Sg, respectively. The informational divergence between f and g is defined as D(f∥g) = Z Sf f(x) log f(x) g(x) dx = Ef log f(X) g(X) , (10.159) where Ef denotes expectation with respect to f. Remark In the above definition, we adopt the convention c log c 0 = ∞for c > 0. Therefore, if D(f∥g) < ∞, then Sf \ Sg = {x : f(x) > 0 and g(x) = 0} (10.160) has zero Lebesgue measure, i.e., Sf is essentially a subset of Sg. Theorem 10.40 (Divergence Inequality). Let f and g be two pdf’s defined on ℜn. Then D(f∥g) ≥0, (10.161) with equality if and only if f = g a.e. Proof. Consider D(f∥g) = Z Sf f(x) log f(x) g(x) dx (10.162) = (log e) Z Sf f(x) ln f(x) g(x) dx (10.163) ≥(log e) Z Sf f(x)  1 −g(x) f(x)  dx (10.164) = (log e) "Z Sf f(x)dx − Z Sf g(x)dx # (10.165) ≥0, (10.166) where (10.164) follows from the fundamental inequality (Corollary 2.106) and (10.166) follows from 10.6 Maximum Differential Entropy Distributions 249 Z Sf g(x)dx ≤1 = Z Sf f(x)dx. (10.167) Equality holds in (10.164) if and only if f(x) = g(x) on Sf a.e., which implies Z Sf g(x)dx = Z Sf f(x)dx = 1, (10.168) i.e., equality holds in (10.166). Then we see from (10.168) that g(x) = 0 on Sc f a.e. Hence, we conclude that equality holds in (10.161) if and only if f = g a.e. The theorem is proved. ⊓ ⊔ 10.6 Maximum Differential Entropy Distributions In Section 2.9, we have discussed maximum entropy distributions for a discrete random variable. We now extend this theme to multiple continuous random variables. Specifically, we are interested in the following problem: Maximize h(f) over all pdf f defined on a subset S of ℜn, subject to Z Sf ri(x)f(x)dx = ai for 1 ≤i ≤m, (10.169) where Sf ⊂S and ri(x) is defined for all x ∈S. Theorem 10.41. Let f ∗(x) = e−λ0−Pm i=1 λiri(x) (10.170) for all x ∈S, where λ0, λ1, · · · , λm are chosen such that the constraints in (10.169) are satisfied. Then f ∗maximizes h(f) over all pdf f defined on S, subject to the constraints in (10.169). Proof. The proof is analogous to that of Theorem 2.50. The details are omit-ted. ⊓ ⊔ Corollary 10.42. Let f ∗be a pdf defined on S with f ∗(x) = e−λ0−Pm i=1 λiri(x) (10.171) for all x ∈S. Then f ∗maximizes h(f) over all pdf f defined on S, subject to the constraints Z Sf ri(x)f(x)dx = Z S ri(x)f ∗(x)dx for 1 ≤i ≤m. (10.172) 250 10 Differential Entropy Theorem 10.43. Let X be a continuous random variable with EX2 = κ. 
Then h(X) ≤1 2 log(2πeκ), (10.173) with equality if and only if X ∼N(0, κ). Proof. The problem here is to maximize h(f) subject to the constraint Z x2f(x)dx = κ. (10.174) An application of Theorem 10.41 yields f ∗(x) = ae−bx2 (10.175) which is identified as a Gaussian distribution with zero mean. In order that the constraint (10.174) is satisfied, we must have a = 1 √ 2πκ and b = 1 2κ. (10.176) Hence, in light of (10.67) in Example 10.13, we have proved (10.173) with equality if and only if X ∼N(0, κ). ⊓ ⊔ Theorem 10.44. Let X be a continuous random variable with mean µ and variance σ2. Then h(X) ≤1 2 log(2πeσ2), (10.177) with equality if and only if X ∼N(µ, σ2). Proof. Let X′ = X −µ. Then EX′ = E(X −µ) = EX −µ = 0 (10.178) and E(X′)2 = E(X −µ)2 = varX = σ2. (10.179) Applying Theorem 10.14 and Theorem 10.43, we have h(X) = h(X′) ≤1 2 log(2πeσ2), (10.180) and equality holds if and only if X′ ∼N(0, σ2), or X ∼N(µ, σ2). The theorem is proved. ⊓ ⊔ Remark Theorem 10.43 says that with the constraint EX2 = κ, the differ-ential entropy is maximized by the distribution N(0, κ). If we impose the ad-ditional constraint that EX = 0, then varX = EX2 = κ. By Theorem 10.44, the differential entropy is still maximized by N(0, κ). 10.6 Maximum Differential Entropy Distributions 251 We have mentioned at the end of Section 10.2 that the differential entropy of a random variable increases with the “spread” of the pdf. Though a sim-ple consequence of Theorem 10.43, the above theorem makes this important interpretation precise. By rewriting the upper bound in (10.180), we obtain h(X) ≤log σ + 1 2 log(2πe). (10.181) That is, the differential entropy is at most equal to the logarithm of the standard deviation plus a constant. In particular, the differential entropy tends to −∞as the standard deviation tends to 0. The next two theorems are the vector generalizations of Theorems 10.43 and 10.44. Theorem 10.45. Let X be a vector of n continuous random variables with correlation matrix ˜ K. Then h(X) ≤1 2 log h (2πe)n| ˜ K| i , (10.182) with equality if and only if X ∼N(0, ˜ K). Proof. By Theorem 10.41, the joint pdf that maximizes h(X) has the form f ∗(x) = e −λ0−P i,j λijxixj = e−λ0−x⊤Lx, (10.183) where L = [λij]. Thus f ∗is a multivariate Gaussian distribution with zero mean. Therefore, cov(Xi, Xj) = EXiXj −(EXi)(EXj) = EXiXj (10.184) for all i and j. Since f ∗is constrained by ˜ K, λ0 and L have the unique solution given by e−λ0 = 1 √ 2π n | ˜ K|1/2 (10.185) and L = 1 2 ˜ K−1, (10.186) so that f ∗(x) = 1 √ 2π n | ˜ K|1/2 e−1 2 x⊤˜ K−1x, (10.187) the joint pdf of X ∼N(0, ˜ K). Hence, by Theorem 10.20, we have proved (10.182) with equality if and only if X ∼N(0, ˜ K). ⊓ ⊔ Theorem 10.46. Let X be a vector of n continuous random variables with mean µ and covariance matrix K. Then h(X) ≤1 2 log [(2πe)n|K|] , (10.188) with equality if and only if X ∼N(µ, K). Proof. Similar to the proof of Theorem 10.44. ⊓ ⊔ 252 10 Differential Entropy Chapter Summary In the following, X = [X1 X2 · · · Xn]⊤. Covariance Matrix: KX = E(X −EX)(X −EX)⊤= [cov(Xi, Xj)]. Correlation Matrix: ˜ KX = EXX⊤= [EXiXj]. Diagonalization of a Covariance (Correlation) Matrix: A covariance (correlation) matrix can be diagonalized as QΛQ⊤. The diagonal elements of Λ, which are nonnegative, are the eigenvalues of the covariance (correlation) matrix. Linear Transformation of a Random Vector: Let Y = AX. Then KY = AKXA⊤and ˜ KY = A ˜ KXA⊤. Decorrelation of a Random Vector: Let Y = Q⊤X, where QΛQ⊤is a diagonalization of KX. 
Then KY = Λ, i.e., the random variables in Y are uncorrelated and var Yi = λi, the ith diagonal element of Λ. Differential Entropy: h(X) = − Z S f(x) log f(x)dx = −E log f(X) = −E log f(X). 1. Translation: h(X + c) = h(X). 2. Scaling: h(aX) = h(X) + log |a|. 3. Uniform Distribution on [0, a): h(X) = log a. 4. Gaussian Distribution N(µ, σ2): h(X) = 1 2 log(2πeσ2). Joint Differential Entropy: h(X) = − Z S f(x) log f(x)dx = −E log f(X). 1. Translation: h(X + c) = h(X). 2. Scaling: h(AX) = h(X) + log |det(A)|. 3. Multivariate Gaussian Distribution N(µ, K): h(X) = 1 2 log [(2πe)n|K|]. Proposition: For fixed f(y|x), f(y) exists for any F(x) and is given by f(y) = Z f(y|x)dF(x). Conditional (Differential) Entropy and Mutual Information: Chapter Summary 253 1. If Y is continuous, h(Y |X = x) = − Z SY (x) f(y|x) log f(y|x)dy h(Y |X) = − Z SX h(Y |X = x)dF(x) = −E log f(Y |X) I(X; Y ) = Z SX Z SY (x) f(y|x) log f(y|x) f(y) dy dF(x) = E log f(Y |X) f(Y ) I(X; Y |T) = Z ST I(X; Y |T = t)dF(t) = E log f(Y |X, T) f(Y |T) h(Y ) = h(Y |X) + I(X; Y ). 2. If Y is discrete, H(Y |X) = H(Y ) −I(X; Y ) H(Y ) = H(Y |X) + I(X; Y ). Chain Rule for Differential Entropy: h(X1, X2, · · · , Xn) = n X i=1 h(Xi|X1, · · · , Xi−1). Some Useful Inequalities: I(X; Y ) ≥0 I(X; Y |T) ≥0 h(Y |X) ≤h(Y ) h(X1, X2, · · · , Xn) ≤ n X i=1 h(Xi). AEP I for Continuous Random Variables: −1 n log f(X) →h(X) in probability. Typical Set: W n [X]ϵ = {x ∈X n : −n−1 log f(x) −h(X) < ϵ}. AEP II for Continuous Random Variables: 1. 2−n(h(X)+ϵ) < f(x) < 2−n(h(X)−ϵ) for x ∈W n [X]ϵ 2. Pr{X ∈W n [X]ϵ} > 1 −ϵ for sufficiently large n 3. (1 −ϵ)2n(h(X)−ϵ) < Vol(W n [X]ϵ) < 2n(h(X)+ϵ) for sufficiently large n. 254 10 Differential Entropy Informational Divergence: For two probability density functions f and g defined on ℜn, D(f∥g) = Z Sf f(x) log f(x) g(x) dx = Ef log f(X) g(X) . Divergence Inequality: D(f∥g) ≥0, with equality if and only if f = g a.e. Maximum Differential Entropy Distributions: Let f ∗(x) = e−λ0−Pm i=1 λiri(x) for all x ∈S, where λ0, λ1, · · · , λm are chosen such that the constraints Z Sf ri(x)f(x)dx = ai for 1 ≤i ≤m are satisfied. Then f ∗maximizes h(f) over all pdf f defined on S subject to the above constraints. Maximum Differential Entropy for a Given Correlation Matrix: h(X) ≤1 2 log h (2πe)n| ˜ K| i , with equality if and only if X ∼N(0, ˜ K), where ˜ K is the correlation matrix of X. Maximum Differential Entropy for a Given Covariance Matrix: h(X) ≤1 2 log [(2πe)n|K|] , with equality if and only if X ∼N(µ, K), where µ and K are the mean and the covariance matrix of X, respectively. Problems 1. Prove Propositions 10.3 and 10.8. 2. Show that the joint pdf of a multivariate Gaussian distribution integrates to 1. 3. Show that a symmetric positive definite matrix is a covariance matrix. 4. Let K =   7/4 √ 2/4 −3/4 √ 2/4 5/2 − √ 2/4 −3/4 − √ 2/4 7/4  . Problems 255 a) Find the eigenvalues and eigenvectors of K. b) Show that K is positive definite. c) Suppose K is the covariance matrix of a random vector X = [X1 X2 X3]⊤. i) Find the coefficient of correlation between Xi and Xj for 1 ≤i < j ≤3. ii) Find an uncorrelated random vector Y = [Y1 Y2 Y3] such that X is a linear transformation of Y. iii) Determine the covariance matrix of Y. 5. Prove Theorem 10.19. 6. For continuous random variables X and Y , discuss why I(X; X) is not equal to h(X). 7. 
Each of the following continuous distributions can be obtained as the distribution that maximizes the differential entropy subject to a suitable set of constraints:
   a) the exponential distribution, f(x) = λe^{−λx} for x ≥ 0, where λ > 0;
   b) the Laplace distribution, f(x) = (1/2) λe^{−λ|x|} for −∞ < x < ∞, where λ > 0;
   c) the gamma distribution, f(x) = (λ/Γ(α)) (λx)^{α−1} e^{−λx} for x ≥ 0, where λ, α > 0 and Γ(z) = ∫_0^∞ t^{z−1} e^{−t} dt;
   d) the beta distribution, f(x) = (Γ(p+q)/(Γ(p)Γ(q))) x^{p−1} (1 − x)^{q−1} for 0 ≤ x ≤ 1, where p, q > 0;
   e) the Cauchy distribution, f(x) = 1/(π(1 + x²)) for −∞ < x < ∞.
Identify the corresponding set of constraints for each of these distributions.
8. Let µ be the mean of a continuous random variable X defined on ℜ+. Obtain an upper bound on h(X) in terms of µ.
9. The inequality in (10.181) gives an upper bound on the differential entropy in terms of the variance. Can you give an upper bound on the variance in terms of the differential entropy?
10. For i = 1, 2, suppose fi maximizes h(f) over all the pdf's defined on Si ⊂ ℜ^n subject to the constraints in (10.169), where S1 ⊂ S2. Show that h(f1) ≤ h(f2).
11. (Hadamard's inequality) Show that for a positive semidefinite matrix K, |K| ≤ Π_{i=1}^{n} Kii, with equality if and only if K is diagonal. Hint: Consider the differential entropy of a multivariate Gaussian distribution.
12. Let KX and K̃X be the covariance matrix and the correlation matrix of a random vector X, respectively. Show that |KX| ≤ |K̃X|. This is a generalization of varX ≤ EX² for a random variable X. Hint: Compare a multivariate Gaussian distribution with another multivariate Gaussian distribution with zero mean and the same correlation matrix.

Historical Notes

The concept of differential entropy was introduced by Shannon. Informational divergence and mutual information were subsequently defined in Kolmogorov and Pinsker in the general setting of measure theory. A measure-theoretic treatment of information theory for continuous systems can be found in the book by Ihara. The treatment in this chapter and the next chapter aims to keep the generality of the results without resorting to heavy use of measure theory. The bounds in Section 10.6 for differential entropy subject to constraints are developed in the spirit of maximum entropy expounded in Jaynes.

11 Continuous-Valued Channels

In Chapter 7, we have studied the discrete memoryless channel. For such a channel, transmission is in discrete time, and the input and output are discrete. In a physical communication system, the input and output of a channel often take continuous real values. If transmission is in continuous time, the channel is called a waveform channel. In this chapter, we first discuss discrete-time channels with real input and output. We will then extend our discussion to waveform channels. All the logarithms in this chapter are in the base 2.

11.1 Discrete-Time Channels

Definition 11.1. Let f(y|x) be a conditional pdf defined for all x, where

−∫_{SY(x)} f(y|x) log f(y|x) dy (11.1)

is uniformly bounded for all x. A discrete-time continuous channel f(y|x) is a system with input random variable X and output random variable Y such that Y is related to X through f(y|x) (cf. Definition 10.22).

Remark. The integral in (11.1) is precisely the conditional differential entropy h(Y|X = x) defined in (10.101), which is required to be uniformly bounded in this definition of a discrete-time continuous channel.

Definition 11.2. Let α : ℜ × ℜ → ℜ, and let Z be a real random variable, called the noise variable.
A discrete-time continuous channel (α, Z) is a system with a real input and a real output. For any input random variable X, the noise random variable Z is independent of X, and the output random variable Y is given by Y = α(X, Z). (11.2) 258 11 Continuous-Valued Channels For brevity, a discrete-time continuous channel will be referred to as a continuous channel. Definition 11.3. Two continuous channels f(y|x) and (α, Z) are equivalent if for every input distribution F(x), Pr{α(X, Z) ≤y, X ≤x} = Z x −∞ Z y −∞ fY |X(v|u)dv dFX(u) (11.3) for all x and y. Remark In the above definitions, the input random variable X is not nec-essarily continuous. Definitions 11.1 and 11.2 are two definitions for a continuous channel which are analogous to Definitions 7.1 and 7.2 for a discrete channel. While Defi-nitions 7.1 and 7.2 are equivalent, Definition 11.2 is more general than Def-inition 11.1. For a continuous channel defined in Definition 11.2, the noise random variable Z may not have a pdf, and the function α(x, ·) may be many-to-one. As a result, the corresponding conditional pdf f(y|x) as required in Definition 11.1 may not exist. In this chapter, we confine our discussion to continuous channels that can be defined by Definition 11.1 (and hence also by Definition 11.2). Definition 11.4. A continuous memoryless channel (CMC) f(y|x) is a se-quence of replicates of a generic continuous channel f(y|x). These continuous channels are indexed by a discrete-time index i, where i ≥1, with the ith chan-nel being available for transmission at time i. Transmission through a channel is assumed to be instantaneous. Let Xi and Yi be respectively the input and the output of the CMC at time i, and let Ti−denote all the random variables that are generated in the system before Xi. The Markov chain Ti−→Xi →Yi holds, and Pr{Yi ≤y, Xi ≤x} = Z x −∞ Z y −∞ fY |X(v|u)dv dFX(u). (11.4) Definition 11.5. A continuous memoryless channel (α, Z) is a sequence of replicates of a generic continuous channel (α, Z). These continuous channels are indexed by a discrete-time index i, where i ≥1, with the ith channel be-ing available for transmission at time i. Transmission through a channel is assumed to be instantaneous. Let Xi and Yi be respectively the input and the output of the CMC at time i, and let Ti−denote all the random variables that are generated in the system before Xi. The noise variable Zi for the transmis-sion at time i is a copy of the generic noise variable Z, and is independent of (Xi, Ti−). The output of the CMC at time i is given by Yi = α(Xi, Zi). (11.5) 11.1 Discrete-Time Channels 259 Definition 11.6. Let κ be a real function. An average input constraint (κ, P) for a CMC is the requirement that for any codeword (x1, x2, · · · , xn) transmit-ted over the channel, 1 n n X i=1 κ(xi) ≤P. (11.6) For brevity, an average input constraint is referred to as an input constraint. Definition 11.7. The capacity of a continuous memoryless channel f(y|x) with input constraint (κ, P) is defined as C(P) = sup F (x):Eκ(X)≤P I(X; Y ), (11.7) where X and Y are respectively the input and output of the generic continuous channel, and F(x) is the distribution of X. Theorem 11.8. C(P) is non-decreasing, concave, and left-continuous. Proof. In the definition of C(P), the supremum is taken over a larger set for a larger P. Therefore, C(P) is non-decreasing in P. We now show that C(P) is concave. Let i = 1, 2. For an input distribution Fj(x), denote the corresponding input and output random variables by Xj and Yj, respectively. 
Then for any Pj, for all ϵ > 0, there exists Fj(x) such that Eκ(Xj) ≤Pj (11.8) and I(Xj; Yj) ≥C(Pj) −ϵ. (11.9) For 0 ≤λ ≤1, let ¯ λ = 1 −λ and define the random variable X(λ) ∼λF1(x) + ¯ λF2(x). (11.10) Then Eκ(X(λ)) = λEκ(X1) + ¯ λEκ(X2) ≤λP1 + ¯ λP2, (11.11) By the concavity of mutual information with respect to the input distribu-tion1, we have I(X(λ); Y (λ)) ≥λI(X1; Y1) + ¯ λI(X2; Y2) (11.12) ≥λ(C(P1) −ϵ) + ¯ λ(C(P2) −ϵ) (11.13) = λC(P1) + ¯ λC(P2) −ϵ. (11.14) Then 1 Specifically, we refer to the inequality (3.124) in Example 3.14 with X and Y being real random variables related by a conditional pdf f(y|x). The proof of this inequality is left as an exercise. 260 11 Continuous-Valued Channels C(λP1 + ¯ λP2) ≥I(X(λ); Y (λ)) ≥λC(P1) + ¯ λC(P2) −ϵ. (11.15) Letting ϵ →0, we have C(λP1 + ¯ λP2) ≥λC(P1) + ¯ λC(P2), (11.16) proving that C(P) is concave. Finally, we prove that C(P) is left-continuous. Let P1 < P2 in (11.16). Since C(P) is non-decreasing, we have C(P2) ≥C(λP1 + ¯ λP2) ≥λC(P1) + ¯ λC(P2). (11.17) Letting λ →0, we have C(P2) ≥lim λ→0 C(λP1 + ¯ λP2) ≥C(P2), (11.18) which implies lim λ→0 C(λP1 + ¯ λP2) = C(P2). (11.19) Hence, we conclude that lim P ↑P2 C(P) = C(P2), (11.20) i.e., C(P) is left-continuous. The theorem is proved. ⊓ ⊔ 11.2 The Channel Coding Theorem Definition 11.9. An (n, M) code for a continuous memoryless channel with input constraint (κ, P) is defined by an encoding function e : {1, 2, · · · , M} →ℜn (11.21) and a decoding function g : ℜn →{1, 2, · · · , M}. (11.22) The set {1, 2, · · · , M}, denoted by W, is called the message set. The sequences e(1), e(2), · · · , e(M) in X n are called codewords, and the set of codewords is called the codebook. Moreover, 1 n n X i=1 κ(xi(w)) ≤P for 1 ≤w ≤M, (11.23) where e(w) = (x1(w), x2(w), · · · , xn(w)). 11.2 The Channel Coding Theorem 261 We assume that a message W is randomly chosen from the message set W according to the uniform distribution. Therefore, H(W) = log M. (11.24) With respect to a channel code for a given CMC, we let X = (X1, X2, · · · , Xn) (11.25) and Y = (Y1, Y2, · · · , Yn) (11.26) be the input sequence and the output sequence of the channel, respectively. Evidently, X = e(W). (11.27) We also let ˆ W = g(Y) (11.28) be the estimate of the message W by the decoder. Definition 11.10. For all 1 ≤w ≤M, let λw = Pr{ ˆ W ̸= w|W = w} = X y∈Yn:g(y)̸=w Pr{Y = y|X = e(w)} (11.29) be the conditional probability of error given that the message is w. We now define two performance measures for a channel code. Definition 11.11. The maximal probability of error of an (n, M) code is de-fined as λmax = max w λw. (11.30) Definition 11.12. The average probability of error of an (n, M) code is de-fined as Pe = Pr{ ˆ W ̸= W}. (11.31) Evidently, Pe ≤λmax. Definition 11.13. A rate R is asymptotically achievable for a continuous memoryless channel if for any ϵ > 0, there exists for sufficiently large n an (n, M) code such that 1 n log M > R −ϵ (11.32) and λmax < ϵ. (11.33) For brevity, an asymptotically achievable rate will be referred to as an achiev-able rate. Theorem 11.14 (Channel Coding Theorem). A rate R is achievable for a continuous memoryless channel if and only if R ≤C, the capacity of the channel. 262 11 Continuous-Valued Channels 11.3 Proof of the Channel Coding Theorem 11.3.1 The Converse We can establish the Markov chain W →X →Y →ˆ W (11.34) very much like the discrete case as discussed in Section 7.3. 
Here, although X is a real random vector, it takes only discrete values as it is a function of the message W which is discrete. The only continuous random variable in the above Markov chain is the random vector Y, which needs to be handled with caution. The following lemma is essentially the data processing theorem we proved in Theorem 2.42 except that Y is continuous. The reader may skip the proof at the first reading. Lemma 11.15. I(W; ˆ W) ≤I(X; Y). (11.35) Proof. We first consider I(W; ˆ W) ≤I(W, X; ˆ W) (11.36) = I(X; ˆ W) + I(W; ˆ W|X) (11.37) = I(X; ˆ W). (11.38) Note that all the random variables above are discrete. Continuing from the above, we have I(W; ˆ W) ≤I(X; ˆ W) (11.39) ≤I(X; ˆ W) + I(X; Y| ˆ W) (11.40) = E log p(X, ˆ W) p(X)p( ˆ W) + E log f(Y|X, ˆ W) f(Y| ˆ W) (11.41) = E log p(X, ˆ W)f(Y|X, ˆ W) p(X)[p( ˆ W)f(Y| ˆ W)] (11.42) = E log f(Y)p(X, ˆ W|Y) p(X)[f(Y)p( ˆ W|Y)] (11.43) = E log p(X, ˆ W|Y) p(X)p( ˆ W|Y) (11.44) = E log p(X|Y)p( ˆ W|X, Y) p(X)p( ˆ W|Y) (11.45) = E log p(X|Y) p(X) + E log p( ˆ W|X, Y) p( ˆ W|Y) (11.46) 11.3 Proof of the Channel Coding Theorem 263 = E log f(Y|X) f(Y) + E log p(X|Y, ˆ W) p(X|Y) (11.47) = I(X; Y) + E log p(X|Y) p(X|Y) (11.48) = I(X; Y) + E log 1 (11.49) = I(X; Y) + 0 (11.50) = I(X; Y). (11.51) The above steps are justified as follows: • The relation f(y|x) = n Y i=1 f(yi|xi) (11.52) can be established in exactly the same way as we established (7.101) for the discrete case (when the channel is used without feedback). Then f(y|x, ˆ w) = p(x)f(y|x)p( ˆ w|y) p(x, ˆ w) , (11.53) and f(y| ˆ w) exists by Proposition 10.24. Therefore, I(X; Y| ˆ W) in (11.40) can be defined according to Definition 10.27. • (11.40) follows from Corollary 10.32. • In (11.43), given f(y|x) as in (11.52), it follows from Proposition 10.24 that f(y) exists. • (11.47) follows from p(x)f(y|x) = f(y)p(x|y) (11.54) and p(x|y)p( ˆ w|x, y) = p( ˆ w|y)p(x|y, ˆ w). (11.55) • (11.48) follows from the Markov chain X →Y →ˆ W. The proof is accomplished. ⊓ ⊔ We now proceed to prove the converse. Let R be an achievable rate, i.e., for any ϵ > 0, there exists for sufficiently large n and (n, M) code such that 1 n log M > R −ϵ (11.56) and λmax < ϵ. (11.57) Consider 264 11 Continuous-Valued Channels log M = H(W) (11.58) = H(W| ˆ W) + I(W; ˆ W) (11.59) ≤H(W| ˆ W) + I(X; Y) (11.60) ≤H(W| ˆ W) + h(Y) −h(Y|X) (11.61) ≤H(W| ˆ W) + n X i=1 h(Yi) −h(Y|X) (11.62) = H(W| ˆ W) + n X i=1 h(Yi) − n X i=1 h(Yi|Xi) (11.63) = H(W| ˆ W) + n X i=1 I(Xi; Yi). (11.64) The above steps are justified as follows: • (11.60) follows from Lemma 11.15. • It follows from (11.52) that h(Y|X) = n X i=1 h(Yi|Xi). (11.65) Then (11.1) in Definition 11.1 implies that h(Yi|Xi) is finite for all i, and hence h(Y|X) is also finite. • From the foregoing, f(y) exists. Therefore, h(Y) can be defined according to Definition 10.10 (but h(Y) may be infinite), and (11.61) follows from Proposition 10.29 because h(Y|X) is finite. Note that it is necessary to require h(Y|X) to be finite because otherwise h(Y) is also infinite and Proposition 10.29 cannot be applied. • (11.62) follows from Corollary 10.34, the independence bound for differen-tial entropy. • (11.63) from (11.65) above. • (11.64) follows from Proposition 10.29. Let V be a mixing random variable distributed uniformly on {1, 2, · · · , n} which is independent of Xi, 1 ≤i ≤n. Let X = XV and Y be the output of the channel with X being the input. 
Then Eκ(X) = EE[κ(X)|V ] (11.66) = n X i=1 Pr{V = i}E[κ(X)|V = i] (11.67) = n X i=1 Pr{V = i}E[κ(Xi)|V = i] (11.68) = n X i=1 1 nEκ(Xi) (11.69) 11.3 Proof of the Channel Coding Theorem 265 = E " 1 n n X i=1 κ(Xi) # (11.70) ≤P, (11.71) where the above inequality follows from (11.23) in the definition of the code. By the concavity of mutual information with respect to the input distribution, we have 1 n n X i=1 I(Xi; Yi) ≤I(X; Y ) ≤C, (11.72) where the last inequality holds in light of the definition of C and (11.71). Then it follows from (11.64) that log M ≤H(W| ˆ W) + nC, (11.73) which is precisely (7.126) in the proof of the converse of the channel coding theorem for the DMC. Following exactly the same steps therein, we conclude that R ≤C. (11.74) 11.3.2 Achievability The proof of the achievability of the channel capacity, which involves the construction of a random code, is somewhat different from the construction for the discrete case in Section 7.4. On the one hand, we need to take into account the input constraint. On the other hand, since the input distribution F(x) we use for constructing the random code may not have a pdf, it is difficult to formulate the notion of joint typicality as in the discrete case. Instead, we will introduce a different notion of typicality based on mutual information. Consider a bivariate information source {(Xk, Yk), k ≥1}, where (Xk, Yk) are i.i.d. with (X, Y ) being the pair of generic real random variables. The conditional pdf f(y|x) exists in the sense prescribed in Definition 10.22. By Proposition 10.24, f(y) exists so that the mutual information I(X; Y ) can be defined according to Definition 10.26. Definition 11.16. The mutually typical set Ψ n [XY ]δ with respect to F(x, y) is the set of (x, y) ∈X n × Yn such that 1 n log f(y|x) f(y) −I(X; Y ) ≤δ, (11.75) where f(y|x) = n Y i=1 f(yi|xi) (11.76) and 266 11 Continuous-Valued Channels f(y) = n Y i=1 f(yi), (11.77) and δ is an arbitrarily small positive number. A pair of sequences (x, y) is called mutually δ-typical if it is in Ψ n [XY ]δ. Lemma 11.17. For any δ > 0, for sufficiently large n, Pr{(X, Y) ∈Ψ n [XY ]δ)} ≥1 −δ. (11.78) Proof. By (11.76) and (11.77), we write 1 n log f(Y|X) f(Y) = 1 n log n Y i=1 f(Yi|Xi) f(Yi) = 1 n n X i=1 log f(Yi|Xi) f(Yi) . (11.79) Since (Xi, Yi) are i.i.d., so are the random variables log f(Yi|Xi) f(Yi) . Thus we conclude by the weak law of large numbers that 1 n n X i=1 log f(Yi|Xi) f(Yi) →E log f(Y |X) f(Y ) = I(X; Y ) (11.80) in probability, i.e., (11.78) holds for all sufficiently large n, proving the lemma. ⊓ ⊔ The following lemma is analogous to Lemma 7.17 for the discrete case. Lemma 11.18. Let (X′, Y′) be n i.i.d. copies of a pair of generic random variables (X′, Y ′), where X′ and Y ′ are independent and have the same marginal distributions as X and Y , respectively. Then Pr{(X′, Y′) ∈Ψ n [XY ]δ} ≤2−n(I(X;Y )−δ). (11.81) Proof. For (x, y) ∈Ψ n [XY ]δ, from (11.75), we obtain 1 n log f(y|x) f(y) ≥I(X; Y ) −δ, (11.82) or f(y|x) ≥f(y)2n(I(X;Y )−δ) (11.83) Then 1 ≥Pr{(X, Y) ∈Ψ n [XY ]δ)} (11.84) = Z Z Ψ n [XY ]δ f(y|x)dF(x) (11.85) ≥2n(I(X;Y )−δ) Z Z Ψ n [XY ]δ f(y)dF(x) (11.86) = 2n(I(X;Y )−δ)Pr{(X′, Y′) ∈Ψ n [XY ]δ}, (11.87) 11.3 Proof of the Channel Coding Theorem 267 where the last inequality follows from (11.83). Therefore, Pr{(X′, Y′) ∈Ψ n [XY ]δ} ≤2−n(I(X;Y )−δ), (11.88) proving the lemma. ⊓ ⊔ Fix any ϵ > 0 and let δ be a small quantity to be specified later. 
Fix any ε > 0 and let δ be a small quantity to be specified later. Since C(P) is left-continuous, there exists a sufficiently small γ > 0 such that
$$C(P-\gamma) > C(P) - \frac{\epsilon}{6}. \qquad (11.89)$$
By the definition of C(P − γ), there exists an input random variable X such that
$$E\kappa(X) \le P - \gamma \qquad (11.90)$$
and
$$I(X;Y) \ge C(P-\gamma) - \frac{\epsilon}{6}. \qquad (11.91)$$
Then choose for a sufficiently large n an even integer M satisfying
$$I(X;Y) - \frac{\epsilon}{6} < \frac{1}{n}\log M < I(X;Y) - \frac{\epsilon}{8}, \qquad (11.92)$$
from which we obtain
$$
\begin{aligned}
\frac{1}{n}\log M &> I(X;Y) - \frac{\epsilon}{6} && (11.93)\\
&\ge C(P-\gamma) - \frac{\epsilon}{3} && (11.94)\\
&> C(P) - \frac{\epsilon}{2}. && (11.95)
\end{aligned}
$$
We now describe a random coding scheme:
1. Construct the codebook $\mathcal C$ of an (n, M) code randomly by generating M codewords in $\Re^n$ independently and identically according to $F(x)^n$. Denote these codewords by $\tilde{\mathbf X}(1), \tilde{\mathbf X}(2), \cdots, \tilde{\mathbf X}(M)$.
2. Reveal the codebook $\mathcal C$ to both the encoder and the decoder.
3. A message W is chosen from $\mathcal W$ according to the uniform distribution.
4. The sequence $\mathbf X = \tilde{\mathbf X}(W)$, namely the Wth codeword in the codebook $\mathcal C$, is transmitted through the channel.
5. The channel outputs a sequence $\mathbf Y$ according to
$$\Pr\{Y_i \le y_i, 1 \le i \le n \mid \tilde{\mathbf X}(W) = \mathbf x\} = \prod_{i=1}^n \int_{-\infty}^{y_i} f(y|x_i)\,dy. \qquad (11.96)$$
This is the continuous analog of (7.101) and can be established similarly.
6. The sequence $\mathbf Y$ is decoded to the message w if $(\tilde{\mathbf X}(w),\mathbf Y) \in \Psi^n_{[XY]\delta}$ and there does not exist w′ ≠ w such that $(\tilde{\mathbf X}(w'),\mathbf Y) \in \Psi^n_{[XY]\delta}$. Otherwise, $\mathbf Y$ is decoded to a constant message in $\mathcal W$. Denote by Ŵ the message to which $\mathbf Y$ is decoded.

We now analyze the performance of this random coding scheme. Let
$$\tilde{\mathbf X}(w) = (\tilde X_1(w), \tilde X_2(w), \cdots, \tilde X_n(w)) \qquad (11.97)$$
and define the error event
$$Err = E_e \cup E_d, \qquad (11.98)$$
where
$$E_e = \left\{\frac{1}{n}\sum_{i=1}^n \kappa(\tilde X_i(W)) > P\right\} \qquad (11.99)$$
is the event that the input constraint is violated, and
$$E_d = \{\hat W \ne W\} \qquad (11.100)$$
is the event that a decoding error occurs. By symmetry in the code construction,
$$
\begin{aligned}
\Pr\{Err\} &= \Pr\{Err|W=1\} && (11.101)\\
&\le \Pr\{E_e|W=1\} + \Pr\{E_d|W=1\}. && (11.102)
\end{aligned}
$$
With Lemma 11.18 in place of Lemma 7.17, the analysis of $\Pr\{E_d|W=1\}$ is exactly the same as the analysis of the decoding error in the discrete case. The details are omitted, and we conclude that by choosing δ to be a sufficiently small positive quantity,
$$\Pr\{E_d|W=1\} \le \frac{\epsilon}{4} \qquad (11.103)$$
for sufficiently large n. We now analyze $\Pr\{E_e|W=1\}$. By the weak law of large numbers,
$$
\begin{aligned}
\Pr\{E_e|W=1\} &= \Pr\left\{\frac{1}{n}\sum_{i=1}^n \kappa(\tilde X_i(1)) > P \,\middle|\, W=1\right\} && (11.104)\\
&= \Pr\left\{\frac{1}{n}\sum_{i=1}^n \kappa(\tilde X_i(1)) > P\right\} && (11.105)\\
&= \Pr\left\{\frac{1}{n}\sum_{i=1}^n \kappa(\tilde X_i(1)) > (P-\gamma) + \gamma\right\} && (11.106)\\
&\le \Pr\left\{\frac{1}{n}\sum_{i=1}^n \kappa(\tilde X_i(1)) > E\kappa(X) + \gamma\right\} && (11.107)\\
&\le \frac{\epsilon}{4} && (11.108)
\end{aligned}
$$
for sufficiently large n, where (11.107) follows from (11.90). It then follows from (11.102) and (11.103) that
$$\Pr\{Err\} \le \frac{\epsilon}{2} \qquad (11.109)$$
for sufficiently large n.

It remains to show the existence of a codebook such that $\lambda_{\max} < \epsilon$ and the input constraint (11.23) is satisfied by every codeword. Consider
$$\Pr\{Err\} = \sum_{\mathcal C} \Pr\{\mathcal C\}\Pr\{Err|\mathcal C\}, \qquad (11.110)$$
where $\Pr\{\mathcal C\}$ is the probability of choosing a codebook $\mathcal C$ from the ensemble of all possible codebooks in Step 1 of the random coding scheme. In light of (11.109), there exists at least one codebook $\mathcal C^*$ such that
$$\Pr\{Err|\mathcal C^*\} \le \frac{\epsilon}{2}. \qquad (11.111)$$
Furthermore,
$$
\begin{aligned}
\Pr\{Err|\mathcal C^*\} &= \sum_{w=1}^M \Pr\{W=w|\mathcal C^*\}\Pr\{Err|\mathcal C^*, W=w\} && (11.112)\\
&= \sum_{w=1}^M \Pr\{W=w\}\Pr\{Err|\mathcal C^*, W=w\} && (11.113)\\
&= \frac{1}{M}\sum_{w=1}^M \Pr\{Err|\mathcal C^*, W=w\}. && (11.114)
\end{aligned}
$$
By discarding the worst half of the codewords in $\mathcal C^*$, if a codeword $\tilde{\mathbf X}(w)$ remains in $\mathcal C^*$, then
$$\Pr\{Err|\mathcal C^*, W=w\} \le \epsilon. \qquad (11.115)$$
Since $Err = E_e \cup E_d$, this implies
$$\Pr\{E_e|\mathcal C^*, W=w\} \le \epsilon \qquad (11.116)$$
and
$$\Pr\{E_d|\mathcal C^*, W=w\} \le \epsilon, \qquad (11.117)$$
where the latter implies $\lambda_{\max} \le \epsilon$ for the codebook $\mathcal C^*$.
Finally, observe that conditioning on $\{\mathcal C^*, W=w\}$, the codeword $\tilde{\mathbf X}(w)$ is deterministic. Therefore, $\Pr\{E_e|\mathcal C^*, W=w\}$ is equal to 1 if the codeword $\tilde{\mathbf X}(w)$ violates the input constraint (11.23), and is equal to 0 otherwise. Then (11.116) implies that for every codeword $\tilde{\mathbf X}(w)$ that remains in $\mathcal C^*$, $\Pr\{E_e|\mathcal C^*, W=w\} = 0$, i.e., the input constraint is satisfied. This completes the proof.

11.4 Memoryless Gaussian Channels

In communication engineering, the Gaussian channel is the most commonly used model for a noisy channel with real input and output. The reasons are two-fold. First, the Gaussian channel is highly analytically tractable. Second, Gaussian noise can be regarded as the worst kind of additive noise subject to a constraint on the noise power. This will be discussed in Section 11.9.

We first give two equivalent definitions of a Gaussian channel.

Definition 11.19 (Gaussian Channel). A Gaussian channel with noise energy N is a continuous channel with the following two equivalent specifications:
1. $f(y|x) = \frac{1}{\sqrt{2\pi N}}\,e^{-\frac{(y-x)^2}{2N}}$.
2. $Z \sim \mathcal N(0, N)$ and $\alpha(X, Z) = X + Z$.

Definition 11.20 (Memoryless Gaussian Channel). A memoryless Gaussian channel with noise power N and input power constraint P is a memoryless continuous channel with the generic continuous channel being the Gaussian channel with noise energy N. The input power constraint P refers to the input constraint (κ, P) with κ(x) = x².

Using the formula in Definition 11.7 for the capacity of a CMC, the capacity of a Gaussian channel can be evaluated.

Theorem 11.21 (Capacity of a Memoryless Gaussian Channel). The capacity of a memoryless Gaussian channel with noise power N and input power constraint P is
$$\frac{1}{2}\log\left(1 + \frac{P}{N}\right). \qquad (11.118)$$
The capacity is achieved by the input distribution $\mathcal N(0, P)$.

We first prove the following lemma.

Lemma 11.22. Let Y = X + Z. Then h(Y|X) = h(Z|X), provided that $f_{Z|X}(z|x)$ exists for all $x \in \mathcal S_X$.

Proof. For all $x \in \mathcal S_X$, since $f_{Z|X}(z|x)$ exists, $f_{Y|X}(y|x)$ also exists and is given by
$$f_{Y|X}(y|x) = f_{Z|X}(y-x|x). \qquad (11.119)$$
Then
$$
\begin{aligned}
h(Y|X) &= \int h(Y|X=x)\,dF_X(x) && (11.120)\\
&= \int h(X+Z|X=x)\,dF_X(x) && (11.121)\\
&= \int h(x+Z|X=x)\,dF_X(x) && (11.122)\\
&= \int h(Z|X=x)\,dF_X(x) && (11.123)\\
&= h(Z|X). && (11.124)
\end{aligned}
$$
In the above, (11.120) and (11.124) follow from (10.102), while (11.123) follows from the translation property of differential entropy (Theorem 10.14). ⊓⊔

Remark. Since Y and Z uniquely determine each other given X, it is tempting to write h(Y|X) = h(Z|X) immediately. However, that reasoning is not valid, because differential entropy, unlike entropy, is not preserved under a one-to-one transformation in general; the translation property invoked above is what makes the equality hold.

Proof of Theorem 11.21. Let F(x) be the CDF of the input random variable X such that $EX^2 \le P$, where X is not necessarily continuous. Since $Z \sim \mathcal N(0, N)$, f(y|x) is given by (11.119). Then by Proposition 10.24, f(y) exists and hence h(Y) is defined. Since Z is independent of X, by Lemma 11.22 and Corollary 10.33,
$$h(Y|X) = h(Z|X) = h(Z). \qquad (11.125)$$
Then
$$
\begin{aligned}
I(X;Y) &= h(Y) - h(Y|X) && (11.126)\\
&= h(Y) - h(Z), && (11.127)
\end{aligned}
$$
where (11.126) follows from Proposition 10.29 and (11.127) follows from (11.125). Since Y = X + Z and Z is independent of X, we have
$$
\begin{aligned}
EY^2 &= E(X+Z)^2 && (11.128)\\
&= EX^2 + 2(EXZ) + EZ^2 && (11.129)\\
&= EX^2 + 2(EX)(EZ) + EZ^2 && (11.130)\\
&= EX^2 + 2(EX)(0) + EZ^2 && (11.131)\\
&= EX^2 + EZ^2 && (11.132)\\
&\le P + N. && (11.133)
\end{aligned}
$$
Given the above constraint on Y, by Theorem 10.43, we have
$$h(Y) \le \frac{1}{2}\log[2\pi e(P+N)], \qquad (11.134)$$
with equality if $Y \sim \mathcal N(0, P+N)$. Recall from Example 10.13 that
$$h(Z) = \frac{1}{2}\log(2\pi e N). \qquad (11.135)$$
It then follows from (11.127), (11.134), and (11.135) that
$$
\begin{aligned}
I(X;Y) &= h(Y) - h(Z) && (11.136)\\
&\le \frac{1}{2}\log[2\pi e(P+N)] - \frac{1}{2}\log(2\pi e N) && (11.137)\\
&= \frac{1}{2}\log\left(1 + \frac{P}{N}\right). && (11.138)
\end{aligned}
$$
Evidently, this upper bound is tight if $X \sim \mathcal N(0, P)$, because then
$$Y = X + Z \sim \mathcal N(0, P+N). \qquad (11.139)$$
Therefore,
$$
\begin{aligned}
C &= \sup_{F(x):\,EX^2 \le P} I(X;Y) && (11.140)\\
&= \max_{F(x):\,EX^2 \le P} I(X;Y) && (11.141)\\
&= \frac{1}{2}\log\left(1 + \frac{P}{N}\right). && (11.142)
\end{aligned}
$$
The theorem is proved. ⊓⊔

Theorem 11.21 says that the capacity of a memoryless Gaussian channel depends only on the ratio of the input power constraint P to the noise power N. This important quantity is called the signal-to-noise ratio. Note that no matter how small the signal-to-noise ratio is, the capacity is still strictly positive. In other words, reliable communication can still be achieved, though at a low rate, when the noise power is much higher than the signal power. We also see that the capacity is infinite if there is no constraint on the input power.
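The operational meaning of Theorem 11.21 can be illustrated with a small simulation. The sketch below is a rough toy, not the construction used in the proof: nearest-neighbour decoding (which is maximum likelihood for Gaussian noise) stands in for the mutual-typicality decoder, the blocklength is far too short for sharp thresholds, and all parameter values are assumptions. Even so, a random Gaussian codebook already shows a clear separation between a rate below C and a rate above it.

```python
# Sketch: random Gaussian codebook over a memoryless Gaussian channel,
# nearest-neighbour (ML) decoding; compare a rate below C with one above C.
import numpy as np

rng = np.random.default_rng(1)
P, N, n, trials = 1.0, 1.0, 16, 100
C = 0.5 * np.log2(1 + P / N)                     # = 0.5 bit per channel use

def error_rate(R):
    M = 2 ** round(n * R)                        # codebook size 2^{nR}
    code = rng.normal(0.0, np.sqrt(P), (M, n))   # i.i.d. N(0, P) codewords
    errors = 0
    for _ in range(trials):                      # by symmetry, always send word 0
        y = code[0] + rng.normal(0.0, np.sqrt(N), n)
        errors += np.argmin(((code - y) ** 2).sum(axis=1)) != 0
    return errors / trials

print(f"C = {C} bits per use")
for R in (0.25, 1.0):                            # one rate below C, one above
    print(f"R = {R:.2f}: estimated error rate = {error_rate(R):.2f}")
```

At R = 0.25 the error rate is modest and shrinks with larger n, while at R = 1.0 it stays near 1, in line with the coding theorem.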
11.5 Parallel Gaussian Channels

In Section 11.4, we have discussed the capacity of a memoryless Gaussian channel. Now suppose k such channels are available for communication, where k ≥ 1. This is illustrated in Figure 11.1, with $X_i$, $Y_i$, and $Z_i$ being the input, the output, and the noise variable of the ith channel, respectively, where $Z_i \sim \mathcal N(0, N_i)$ and $Z_i$, 1 ≤ i ≤ k, are independent. We are interested in the capacity of such a system of parallel Gaussian channels, with the total input power constraint
$$E\sum_{i=1}^k X_i^2 \le P. \qquad (11.143)$$

[Figure 11.1: A system of parallel Gaussian channels, with $Y_i = X_i + Z_i$ for each channel.]

Let $\mathbf X = [X_1\ X_2\ \cdots\ X_k]$, $\mathbf Y = [Y_1\ Y_2\ \cdots\ Y_k]$, and $\mathbf Z = [Z_1\ Z_2\ \cdots\ Z_k]$. Then
$$
\begin{aligned}
f_{\mathbf Y|\mathbf X}(\mathbf y|\mathbf x) &= \prod_{i=1}^k f_{Y_i|X_i}(y_i|x_i) && (11.144)\\
&= \prod_{i=1}^k f_{Z_i|X_i}(y_i - x_i|x_i) && (11.145)\\
&= \prod_{i=1}^k f_{Z_i}(y_i - x_i). && (11.146)
\end{aligned}
$$
With the existence of $f(\mathbf y|\mathbf x)$, by extending Definition 10.23, we have
$$h(\mathbf Y|\mathbf X) = -\int_{\mathcal S_{\mathbf X}}\int_{\mathcal S_{\mathbf Y}(\mathbf x)} f(\mathbf y|\mathbf x)\log f(\mathbf y|\mathbf x)\,d\mathbf y\,dF(\mathbf x). \qquad (11.147)$$
Then by Proposition 10.25, $f(\mathbf y)$ exists and therefore $h(\mathbf Y)$ is defined. By extending Definition 10.26, we have
$$I(\mathbf X;\mathbf Y) = \int_{\mathcal S_{\mathbf X}}\int_{\mathcal S_{\mathbf Y}(\mathbf x)} f(\mathbf y|\mathbf x)\log\frac{f(\mathbf y|\mathbf x)}{f(\mathbf y)}\,d\mathbf y\,dF(\mathbf x). \qquad (11.148)$$
It then follows from Definition 11.7 that the capacity of the system is given by
$$C(P) = \sup_{F(\mathbf x):\,E\sum_i X_i^2 \le P} I(\mathbf X;\mathbf Y), \qquad (11.149)$$
where $F(\mathbf x)$ is the joint CDF of the input vector $\mathbf X$. As we will see, the supremum above is indeed a maximum.

When we calculated the capacity of the memoryless Gaussian channel in Theorem 11.21, we obtained in (11.132) that
$$EY^2 = EX^2 + EZ^2, \qquad (11.150)$$
i.e., the output power is equal to the sum of the input power and the noise power, provided that the noise has zero mean. By exactly the same argument, we see that
$$EY_i^2 = EX_i^2 + EZ_i^2 \qquad (11.151)$$
for all i. Toward calculating C(P), consider
$$
\begin{aligned}
I(\mathbf X;\mathbf Y) &= h(\mathbf Y) - h(\mathbf Y|\mathbf X) && (11.152)\\
&= h(\mathbf Y) - h(\mathbf Z|\mathbf X) && (11.153)\\
&= h(\mathbf Y) - h(\mathbf Z) && (11.154)\\
&= h(\mathbf Y) - \sum_{i=1}^k h(Z_i) && (11.155)\\
&= h(\mathbf Y) - \frac{1}{2}\sum_{i=1}^k \log(2\pi e N_i) && (11.156)\\
&\le \sum_{i=1}^k h(Y_i) - \frac{1}{2}\sum_{i=1}^k \log(2\pi e N_i) && (11.157)\\
&\le \frac{1}{2}\sum_{i=1}^k \log[2\pi e(EY_i^2)] - \frac{1}{2}\sum_{i=1}^k \log(2\pi e N_i) && (11.158)\\
&= \frac{1}{2}\sum_{i=1}^k \log(EY_i^2) - \frac{1}{2}\sum_{i=1}^k \log N_i && (11.159)\\
&= \frac{1}{2}\sum_{i=1}^k \log(EX_i^2 + EZ_i^2) - \frac{1}{2}\sum_{i=1}^k \log N_i && (11.160)\\
&= \frac{1}{2}\sum_{i=1}^k \log(P_i + N_i) - \frac{1}{2}\sum_{i=1}^k \log N_i && (11.161)\\
&= \frac{1}{2}\sum_{i=1}^k \log\left(1 + \frac{P_i}{N_i}\right), && (11.162)
\end{aligned}
$$
where $P_i = EX_i^2$ is the input power of the ith channel. In the above, (11.153) is the vector generalization of Lemma 11.22, (11.155) follows because the $Z_i$ are independent, and (11.160) follows from (11.151).
Equality holds in (11.157) and (11.158) if $Y_i$, 1 ≤ i ≤ k, are independent and $Y_i \sim \mathcal N(0, P_i + N_i)$. This happens when the $X_i$ are independent of each other and $X_i \sim \mathcal N(0, P_i)$. Therefore, maximizing $I(\mathbf X;\mathbf Y)$ becomes maximizing $\sum_i \log(P_i + N_i)$ in (11.161) subject to the constraints $\sum_i P_i \le P$ and $P_i \ge 0$ for all i. In other words, we are to find the optimal input power allocation among the channels. Comparing (11.162) with (11.142), we see that the capacity of the system of parallel Gaussian channels is equal to the sum of the capacities of the individual Gaussian channels with the input power optimally allocated.

Toward this end, we first apply the method of Lagrange multipliers by temporarily ignoring the nonnegativity constraints on the $P_i$. Observe that in order for the summation $\sum_i \log(P_i + N_i)$ in (11.161) to be maximized, $\sum_i P_i = P$ must hold, because $\log(P_i + N_i)$ is increasing in $P_i$. Therefore, the inequality constraint $\sum_i P_i \le P$ can be replaced by the equality constraint $\sum_i P_i = P$. Let
$$J = \sum_{i=1}^k \log(P_i + N_i) - \mu\sum_{i=1}^k P_i. \qquad (11.163)$$
Differentiating with respect to $P_i$ gives
$$0 = \frac{\partial J}{\partial P_i} = \frac{1}{P_i + N_i} - \mu, \qquad (11.164)$$
which implies
$$P_i = \frac{1}{\mu} - N_i. \qquad (11.165)$$
Upon letting $\nu = \frac{1}{\mu}$, we have
$$P_i = \nu - N_i, \qquad (11.166)$$
where ν is chosen such that
$$\sum_{i=1}^k P_i = \sum_{i=1}^k (\nu - N_i) = P. \qquad (11.167)$$
However, $P_i$ as given in (11.166) is not guaranteed to be nonnegative, so it may not be a valid solution. Nevertheless, (11.166) suggests the general solution to be proved in the next proposition.

Proposition 11.23. The problem

For given $\lambda_i \ge 0$, maximize $\sum_{i=1}^k \log(a_i + \lambda_i)$ subject to $\sum_i a_i \le P$ and $a_i \ge 0$

has the solution
$$a_i^* = (\nu - \lambda_i)^+, \quad 1 \le i \le k, \qquad (11.168)$$
where
$$(x)^+ = \begin{cases} x & \text{if } x \ge 0\\ 0 & \text{if } x < 0 \end{cases} \qquad (11.169)$$
and ν satisfies
$$\sum_{i=1}^k (\nu - \lambda_i)^+ = P. \qquad (11.170)$$

Proof. Rewrite the maximization problem as

For given $\lambda_i \ge 0$, maximize $\sum_i \log(a_i + \lambda_i)$ subject to
$$\sum_{i=1}^k a_i \le P \qquad (11.171)$$
$$-a_i \le 0, \quad 1 \le i \le k. \qquad (11.172)$$

We will prove the proposition by verifying that the proposed solution in (11.168) satisfies the Karush-Kuhn-Tucker (KKT) condition. This is done by finding nonnegative μ and $\mu_i$ satisfying the equations
$$\frac{1}{2(a_i^* + \lambda_i)} - \mu + \mu_i = 0 \qquad (11.173)$$
$$\mu\left(P - \sum_{i=1}^k a_i^*\right) = 0 \qquad (11.174)$$
$$\mu_i a_i^* = 0, \quad 1 \le i \le k, \qquad (11.175)$$
where μ and $\mu_i$ are the multipliers associated with the constraints in (11.171) and (11.172), respectively. To avoid triviality, assume P > 0 so that ν > 0, and observe that there exists at least one i such that $a_i^* > 0$. For those i, (11.175) implies
$$\mu_i = 0. \qquad (11.176)$$
On the other hand,
$$a_i^* = (\nu - \lambda_i)^+ = \nu - \lambda_i. \qquad (11.177)$$
Substituting (11.176) and (11.177) into (11.173), we obtain
$$\mu = \frac{1}{2\nu} > 0. \qquad (11.178)$$
For those i such that $a_i^* = 0$, it follows from (11.168) that $\nu \le \lambda_i$. From (11.178) and (11.173), we obtain
$$\mu_i = \frac{1}{2\nu} - \frac{1}{2\lambda_i} \ge 0. \qquad (11.179)$$
Thus we have found nonnegative μ and $\mu_i$ that satisfy (11.173) to (11.175), verifying the KKT condition. The proposition is proved. ⊓⊔

Hence, following (11.162) and applying the above proposition with $a_i = P_i$ and $\lambda_i = N_i$, we see that the capacity of the system of parallel Gaussian channels is equal to
$$\frac{1}{2}\sum_{i=1}^k \log\left(1 + \frac{P_i^*}{N_i}\right), \qquad (11.180)$$
where $\{P_i^*, 1 \le i \le k\}$ is the optimal input power allocation among the channels, given by
$$P_i^* = (\nu - N_i)^+, \quad 1 \le i \le k, \qquad (11.181)$$
with ν satisfying
$$\sum_{i=1}^k (\nu - N_i)^+ = P. \qquad (11.182)$$
The process for obtaining $\{P_i^*\}$, called water-filling, is illustrated in Figure 11.2. One can imagine that an amount P of water is poured into a reservoir with an uneven bottom, and ν is the level the water rises to.
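Equations (11.181)-(11.182) translate directly into a short routine. Since $\sum_i(\nu - N_i)^+$ is nondecreasing in ν, the water level can be found by bisection. A minimal sketch follows; the noise powers and the total power are hypothetical numbers chosen for illustration.

```python
# Sketch: water-filling power allocation for parallel Gaussian channels.
import numpy as np

def water_fill(noise, P, tol=1e-12):
    noise = np.asarray(noise, dtype=float)
    lo, hi = 0.0, noise.max() + P          # the water level nu lies in this range
    while hi - lo > tol:                   # bisect on the monotone poured-power curve
        nu = (lo + hi) / 2
        if np.maximum(nu - noise, 0.0).sum() < P:
            lo = nu
        else:
            hi = nu
    powers = np.maximum(nu - noise, 0.0)   # P_i^* = (nu - N_i)^+
    capacity = 0.5 * np.log2(1 + powers / noise).sum()
    return nu, powers, capacity

nu, powers, C = water_fill([1.0, 4.0, 6.0, 0.5], P=10.0)
print("water level nu  =", round(nu, 4))
print("allocation P_i* =", powers.round(4))   # the noisiest channel may get zero power
print("capacity        =", round(C, 4), "bits per use")
```

In this example the channel with noise power 6 lies above the water level and receives no power, exactly as described next.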
Under this scheme, higher input power is allocated to channels with lower noise power. For a channel with noise power higher than ν, no input power is allocated, i.e., the channel is not used.

[Figure 11.2: Water-filling for parallel Gaussian channels.]

11.6 Correlated Gaussian Channels

In this section, we generalize the results in the last section to the case when the noise variables $Z_i$, 1 ≤ i ≤ k, are correlated with covariance matrix $K_{\mathbf Z}$. We continue to assume that $Z_i$ has zero mean for all i, i.e., $\mathbf Z \sim \mathcal N(0, K_{\mathbf Z})$, and the total input power constraint
$$E\sum_{i=1}^k X_i^2 \le P \qquad (11.183)$$
prevails.

We will derive the capacity of such a system of correlated Gaussian channels by decorrelating the noise vector $\mathbf Z$. Let $K_{\mathbf Z}$ be diagonalizable as $Q\Lambda Q^\top$ and consider
$$\mathbf Y = \mathbf X + \mathbf Z. \qquad (11.184)$$
Then
$$Q^\top\mathbf Y = Q^\top\mathbf X + Q^\top\mathbf Z. \qquad (11.185)$$
Upon letting
$$\mathbf X' = Q^\top\mathbf X \qquad (11.186)$$
$$\mathbf Y' = Q^\top\mathbf Y \qquad (11.187)$$
and
$$\mathbf Z' = Q^\top\mathbf Z, \qquad (11.188)$$
we obtain
$$\mathbf Y' = \mathbf X' + \mathbf Z'. \qquad (11.189)$$
Note that
$$E\mathbf Z' = E(Q^\top\mathbf Z) = Q^\top(E\mathbf Z) = Q^\top\cdot 0 = 0, \qquad (11.190)$$
and $\mathbf Z'$ is jointly Gaussian because it is a linear transformation of $\mathbf Z$. By Proposition 10.6, the random variables in $\mathbf Z'$ are uncorrelated, and
$$K_{\mathbf Z'} = \Lambda. \qquad (11.191)$$
Since $\mathbf Z'$ is jointly Gaussian, this implies that the random variables in $\mathbf Z'$ are mutually independent. Therefore, we conclude that $Z_i' \sim \mathcal N(0, \lambda_i)$, where $\lambda_i$ is the ith diagonal element of Λ.

We are then motivated to convert the given system of correlated Gaussian channels into the system shown in Figure 11.3, with $\mathbf X'$ and $\mathbf Y'$ being the input and output, respectively. Note that in this system, $\mathbf X'$ and $\mathbf Y'$ are related to $\mathbf X$ and $\mathbf Y$ as prescribed in (11.186) and (11.187), respectively. We then see from (11.189) that $\mathbf Z'$ is the equivalent noise vector of the system, with $Z_i'$ being the noise variable of the ith channel. Hence, the system in Figure 11.3 is a system of parallel Gaussian channels. By Proposition 10.9, the total input power constraint in (11.183) for the original system translates to the total input power constraint
$$E\sum_{i=1}^k (X_i')^2 \le P \qquad (11.192)$$
for the system in Figure 11.3.

[Figure 11.3: An equivalent system of parallel Gaussian channels, obtained by applying Q at the input and $Q^\top$ at the output.]

The question is whether the capacity of the system in Figure 11.3 is the same as the capacity of the original system. Let us call these two capacities C′ and C, respectively. Intuitively, C′ and C should be the same because the matrix Q is invertible. A formal proof goes as follows. We remind the reader that the capacity of a channel is the highest possible asymptotic rate at which information can be transmitted reliably through the channel by means of any encoding/decoding process. In Figure 11.3, by regarding the transformation Q on $\mathbf X'$ as part of the encoding process and the transformation $Q^\top$ on $\mathbf Y$ as part of the decoding process, we see that C′ ≤ C. Now further convert the system in Figure 11.3 into the system in Figure 11.4 with input $\mathbf X''$ and output $\mathbf Y''$, and call the capacity of this system C″. By repeating the same argument, we see that C″ ≤ C′. Thus C″ ≤ C′ ≤ C. However, the system in Figure 11.4 is equivalent to the original system because $Q^\top Q = I$. Therefore, C″ = C, which implies C′ = C.

[Figure 11.4: A system identical to the system of correlated Gaussian channels.]
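The decorrelation step (11.188)-(11.191) can be checked numerically. A small sketch follows; the covariance matrix is an assumed example, not one from the text.

```python
# Sketch: Z' = Q^T Z has uncorrelated components with covariance Lambda;
# being jointly Gaussian, they are therefore independent.
import numpy as np

rng = np.random.default_rng(2)
K_Z = np.array([[2.0, 0.8, 0.3],
                [0.8, 1.5, 0.5],
                [0.3, 0.5, 1.0]])              # assumed symmetric positive definite

lam, Q = np.linalg.eigh(K_Z)                   # K_Z = Q diag(lam) Q^T
Z = rng.multivariate_normal(np.zeros(3), K_Z, size=200_000)
Z_prime = Z @ Q                                # each row is Q^T z

print("eigenvalues (diagonal of Lambda):", lam.round(3))
print("sample covariance of Z':")
print(np.cov(Z_prime.T).round(3))              # approximately diag(lam)
```

The sample covariance of $\mathbf Z'$ is nearly diagonal with the eigenvalues $\lambda_i$ on the diagonal, confirming (11.191).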
Upon converting the given system of correlated Gaussian channels into an equivalent system of parallel Gaussian channels, we see that the capacity of the system is equal to
$$\frac{1}{2}\sum_{i=1}^k \log\left(1 + \frac{a_i^*}{\lambda_i}\right), \qquad (11.193)$$
where $a_i^*$ is the optimal power allocated to the ith channel in the equivalent system, and its value can be obtained by water-filling as prescribed in Proposition 11.23. The reader should compare (11.193) with the formula in (11.180) for the capacity of parallel Gaussian channels.

Let $A^*$ be the k×k diagonal matrix with $a_i^*$ being the ith diagonal element. From the discussion in the last section, the optimal distribution for the input $\mathbf X'$ in the equivalent system of parallel channels is $\mathcal N(0, A^*)$. Accordingly, the distribution of $\mathbf X$ is $\mathcal N(0, QA^*Q^\top)$. We leave it as an exercise for the reader to verify that this indeed gives the optimal input distribution for the original system of correlated Gaussian channels.

11.7 The Bandlimited White Gaussian Channel

In this section, we discuss a bandlimited waveform channel with zero-mean additive white Gaussian noise (AWGN). In the rest of this chapter, the letters j and f are reserved for $\sqrt{-1}$ and "frequency," respectively. We begin with a few definitions from signal analysis. All the signals are assumed to be real.

Definition 11.24. The Fourier transform of a signal g(t) is defined as
$$G(f) = \int_{-\infty}^{\infty} g(t)e^{-j2\pi ft}\,dt. \qquad (11.194)$$
The signal g(t) can be recovered from G(f) as
$$g(t) = \int_{-\infty}^{\infty} G(f)e^{j2\pi ft}\,df, \qquad (11.195)$$
and g(t) is called the inverse Fourier transform of G(f). The functions g(t) and G(f) are said to form a transform pair, denoted by
$$g(t) \rightleftharpoons G(f). \qquad (11.196)$$
The variables t and f are referred to as time and frequency, respectively.

In general, the Fourier transform of a signal g(t) may not exist. A sufficient condition for the Fourier transform of g(t) to exist is that g(t) has finite energy, i.e.,
$$\int_{-\infty}^{\infty} |g(t)|^2\,dt < \infty. \qquad (11.197)$$
A signal with finite energy is called an energy signal.

Definition 11.25. Let $g_1(t)$ and $g_2(t)$ be a pair of energy signals. The cross-correlation function for $g_1(t)$ and $g_2(t)$ is defined as
$$R_{12}(\tau) = \int_{-\infty}^{\infty} g_1(t)g_2(t-\tau)\,dt. \qquad (11.198)$$

Proposition 11.26. For a pair of energy signals $g_1(t)$ and $g_2(t)$,
$$R_{12}(\tau) \rightleftharpoons G_1(f)G_2^*(f), \qquad (11.199)$$
where $G_2^*(f)$ denotes the complex conjugate of $G_2(f)$.

Definition 11.27. For a wide-sense stationary process $\{X(t), -\infty < t < \infty\}$, the autocorrelation function is defined as
$$R_X(\tau) = E[X(t+\tau)X(t)], \qquad (11.200)$$
which does not depend on t, and the power spectral density is defined as
$$S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)e^{-j2\pi f\tau}\,d\tau, \qquad (11.201)$$
i.e.,
$$R_X(\tau) \rightleftharpoons S_X(f). \qquad (11.202)$$

Definition 11.28. Let $\{(X(t), Y(t)), -\infty < t < \infty\}$ be a bivariate wide-sense stationary process. Their cross-correlation functions are defined as
$$R_{XY}(\tau) = E[X(t+\tau)Y(t)] \qquad (11.203)$$
and
$$R_{YX}(\tau) = E[Y(t+\tau)X(t)], \qquad (11.204)$$
which do not depend on t. The cross-spectral densities are defined as
$$S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)e^{-j2\pi f\tau}\,d\tau \qquad (11.205)$$
and
$$S_{YX}(f) = \int_{-\infty}^{\infty} R_{YX}(\tau)e^{-j2\pi f\tau}\,d\tau, \qquad (11.206)$$
i.e.,
$$R_{XY}(\tau) \rightleftharpoons S_{XY}(f) \qquad (11.207)$$
and
$$R_{YX}(\tau) \rightleftharpoons S_{YX}(f). \qquad (11.208)$$

We now describe the simplest nontrivial model for a waveform channel. In wired-line and wireless communication, the frequency spectrum of the medium is often partitioned into a number of communication channels, where each channel occupies a certain frequency band. Consider such a channel that occupies the frequency band $[f_l, f_h]$ with $0 \le f_l < f_h$, where
$$W = f_h - f_l \qquad (11.209)$$
is called the bandwidth.
The input process X(t) is contaminated by a zero-mean additive white Gaussian noise process with power spectral density $\frac{N_0}{2}$, i.e.,
$$S_Z(f) = \frac{N_0}{2}, \quad -\infty < f < \infty. \qquad (11.210)$$
In reality, such a noise process cannot exist because its total power is infinite. For practical purposes, one can regard the power spectral density to be constant within the range of interest of the problem.

Let h(t) be the impulse response of an ideal bandpass filter for the frequency band $[f_l, f_h]$, i.e.,
$$H(f) = \begin{cases} 1 & \text{if } f_l \le |f| \le f_h\\ 0 & \text{otherwise.} \end{cases} \qquad (11.211)$$
At the receiver for this channel, the ideal bandpass filter h(t) is applied to the received signal in order to filter out the frequency components due to other channels. Effectively, we can regard this filtered version of the received signal, given by
$$Y(t) = [X(t) + Z(t)]*h(t) = X(t)*h(t) + Z(t)*h(t), \qquad (11.212)$$
as the output of the channel, where ∗ denotes convolution in the time domain. Letting
$$X'(t) = X(t)*h(t) \qquad (11.213)$$
and
$$Z'(t) = Z(t)*h(t), \qquad (11.214)$$
(11.212) can be written as
$$Y(t) = X'(t) + Z'(t). \qquad (11.215)$$
The only difference between X(t) and X′(t) is that all the frequency components of X′(t) are in $[f_l, f_h]$, while X(t) can have frequency components outside this range. However, even if such frequency components exist in X(t), they are filtered out by the ideal bandpass filter h(t) and do not appear in the output process Y(t). Therefore, we can regard X′(t) instead of X(t) as the input process of the channel. By the same token, we regard Z′(t) instead of Z(t) as the noise process of the channel. As for the memoryless Gaussian channel discussed in the last section, we impose an average power constraint P on the input process X′(t).

For simplicity, we consider in this section the case that the channel we have described occupies the frequency band [0, W]. This channel, called the bandlimited white Gaussian channel, is the special case of the general model with $f_l = 0$.

While a rigorous formulation of the bandlimited white Gaussian channel involves mathematical tools beyond the scope of this book, we will nevertheless give a heuristic argument that suggests the formula for the channel capacity. The sampling theorem in signal analysis will allow us to "convert" this waveform channel into the memoryless Gaussian channel discussed in the last section.

Theorem 11.29 (Sampling Theorem). Let g(t) be a signal with Fourier transform G(f) that vanishes for $f \notin [-W, W]$. Then
$$g(t) = \sum_{i=-\infty}^{\infty} g\left(\frac{i}{2W}\right)\mathrm{sinc}(2Wt - i) \qquad (11.216)$$
for $-\infty < t < \infty$, where
$$\mathrm{sinc}(t) = \frac{\sin(\pi t)}{\pi t}, \qquad (11.217)$$
called the sinc function, is defined to be 1 at t = 0 by continuity.

Letting
$$g_i = \frac{1}{\sqrt{2W}}\,g\left(\frac{i}{2W}\right) \qquad (11.218)$$
and
$$\psi_i(t) = \sqrt{2W}\,\mathrm{sinc}(2Wt - i), \qquad (11.219)$$
the formula in (11.216) can be rewritten as
$$g(t) = \sum_{i=-\infty}^{\infty} g_i\psi_i(t). \qquad (11.220)$$

Proposition 11.30. $\psi_i(t)$, $-\infty < i < \infty$, form an orthonormal basis for signals which are bandlimited to [0, W].

Proof. Since
$$\psi_i(t) = \psi_0\left(t - \frac{i}{2W}\right), \qquad (11.221)$$
$\psi_i(t)$ and $\psi_0(t)$ have the same energy. We first show that
$$\int_{-\infty}^{\infty} \mathrm{sinc}^2(2Wt)\,dt = \frac{1}{2W}. \qquad (11.222)$$
This integral is difficult to evaluate directly. Instead we consider
$$\mathrm{sinc}(2Wt) \rightleftharpoons \frac{1}{2W}\,\mathrm{rect}\left(\frac{f}{2W}\right), \qquad (11.223)$$
where
$$\mathrm{rect}(f) = \begin{cases} 1 & -\frac{1}{2} \le f \le \frac{1}{2}\\ 0 & \text{otherwise.} \end{cases} \qquad (11.224)$$
Then by Rayleigh's energy theorem, we have
$$\int_{-\infty}^{\infty} \mathrm{sinc}^2(2Wt)\,dt = \int_{-\infty}^{\infty}\left(\frac{1}{2W}\right)^2\mathrm{rect}^2\left(\frac{f}{2W}\right)df = \frac{1}{2W}. \qquad (11.225)$$
It then follows that
$$
\begin{aligned}
\int_{-\infty}^{\infty}\psi_i^2(t)\,dt &= \int_{-\infty}^{\infty}\psi_0^2(t)\,dt && (11.226)\\
&= (\sqrt{2W})^2\int_{-\infty}^{\infty}\mathrm{sinc}^2(2Wt)\,dt && (11.227)\\
&= 2W\left(\frac{1}{2W}\right) && (11.228)\\
&= 1. && (11.229)
\end{aligned}
$$
Next, we show that
$$\int_{-\infty}^{\infty} \mathrm{sinc}(2Wt - i)\,\mathrm{sinc}(2Wt - i')\,dt \qquad (11.230)$$
vanishes whenever i ≠ i′. Again, this integral is difficult to evaluate directly. Since (11.225) implies that both sinc(2Wt − i) and sinc(2Wt − i′) have finite energy, we can consider their cross-correlation function, denoted by $R_{ii'}(\tau)$. Now
$$\mathrm{sinc}(2Wt - i) \rightleftharpoons e^{-j2\pi f\left(\frac{i}{2W}\right)}\left(\frac{1}{2W}\right)\mathrm{rect}\left(\frac{f}{2W}\right) := G_i(f) \qquad (11.231)$$
and
$$\mathrm{sinc}(2Wt - i') \rightleftharpoons e^{-j2\pi f\left(\frac{i'}{2W}\right)}\left(\frac{1}{2W}\right)\mathrm{rect}\left(\frac{f}{2W}\right) := G_{i'}(f). \qquad (11.232)$$
Then we have
$$R_{ii'}(\tau) \rightleftharpoons G_i(f)G_{i'}^*(f), \qquad (11.233)$$
and the integral in (11.230) is given by
$$R_{ii'}(0) = \int_{-\infty}^{\infty} G_i(f)G_{i'}^*(f)\,df, \qquad (11.234)$$
which vanishes whenever i ≠ i′. Therefore,
$$
\begin{aligned}
\int_{-\infty}^{\infty}\psi_i(t)\psi_{i'}(t)\,dt &= 2W\int_{-\infty}^{\infty}\mathrm{sinc}(2Wt-i)\,\mathrm{sinc}(2Wt-i')\,dt && (11.235)\\
&= 0. && (11.236)
\end{aligned}
$$
Together with (11.229), this shows that $\psi_i(t)$, $-\infty < i < \infty$, form an orthonormal set. Finally, since g(t) in (11.220) is any signal bandlimited to [0, W], we conclude that $\psi_i(t)$, $-\infty < i < \infty$, form an orthonormal basis for such signals. The proposition is proved. ⊓⊔

Let us return to our discussion of the waveform channel. The sampling theorem implies that the input process X′(t), assuming the existence of the Fourier transform, can be written as
$$X'(t) = \sum_{i=-\infty}^{\infty} X_i'\psi_i(t), \qquad (11.237)$$
where
$$X_i' = \frac{1}{\sqrt{2W}}\,X'\left(\frac{i}{2W}\right), \qquad (11.238)$$
and there is a one-to-one correspondence between X′(t) and $\{X_i', -\infty < i < \infty\}$. The same applies to (a realization of) the output process Y(t), which we assume can be written as
$$Y(t) = \sum_{i=-\infty}^{\infty} Y_i\psi_i(t), \qquad (11.239)$$
where
$$Y_i = \frac{1}{\sqrt{2W}}\,Y\left(\frac{i}{2W}\right). \qquad (11.240)$$
With these assumptions on X′(t) and Y(t), the waveform channel is equivalent to a discrete-time channel defined at $t = \frac{i}{2W}$, with the ith input and output of the channel being $X_i'$ and $Y_i$, respectively.

Toward determining the capacity of this equivalent discrete-time channel, we prove in the following a characterization of the effect of the noise process Z′(t) at the sampling times.

Proposition 11.31. $Z'\left(\frac{i}{2W}\right)$, $-\infty < i < \infty$, are i.i.d. Gaussian random variables with zero mean and variance $N_0W$.

Proof. First of all, Z(t) is a zero-mean Gaussian process and Z′(t) is a filtered version of Z(t), so Z′(t) is also a zero-mean Gaussian process. Consequently, $Z'\left(\frac{i}{2W}\right)$, $-\infty < i < \infty$, are zero-mean Gaussian random variables. The power spectral density of Z′(t) is given by
$$S_{Z'}(f) = \begin{cases} \frac{N_0}{2} & -W \le f \le W\\ 0 & \text{otherwise.} \end{cases} \qquad (11.241)$$
Then the autocorrelation function of Z′(t), which is the inverse Fourier transform of $S_{Z'}(f)$, is given by
$$R_{Z'}(\tau) = N_0W\,\mathrm{sinc}(2W\tau). \qquad (11.242)$$
It is seen that the value of $R_{Z'}(\tau)$ is equal to 0 when $\tau = \frac{i}{2W}$ for all i ≠ 0, because the sinc function in (11.217) vanishes at all nonzero integer values of t. This shows that $Z'\left(\frac{i}{2W}\right)$, $-\infty < i < \infty$, are uncorrelated and hence independent, because they are jointly Gaussian. Finally, since $Z'\left(\frac{i}{2W}\right)$ has zero mean, in light of (11.200), its variance is given by $R_{Z'}(0) = N_0W$. ⊓⊔

Recall from (11.215) that
$$Y(t) = X'(t) + Z'(t). \qquad (11.243)$$
Then
$$Y\left(\frac{i}{2W}\right) = X'\left(\frac{i}{2W}\right) + Z'\left(\frac{i}{2W}\right). \qquad (11.244)$$
Upon dividing by $\sqrt{2W}$ and letting
$$Z_i' = \frac{1}{\sqrt{2W}}\,Z'\left(\frac{i}{2W}\right), \qquad (11.245)$$
it follows from (11.238) and (11.240) that
$$Y_i = X_i' + Z_i'. \qquad (11.246)$$
Since $Z'\left(\frac{i}{2W}\right)$, $-\infty < i < \infty$, are i.i.d. with distribution $\mathcal N(0, N_0W)$, the $Z_i'$, $-\infty < i < \infty$, are i.i.d. with distribution $\mathcal N(0, \frac{N_0}{2})$. Thus we have shown that the bandlimited white Gaussian channel is equivalent to a memoryless Gaussian channel with noise power equal to $\frac{N_0}{2}$.
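Proposition 11.30 can be sanity-checked by direct numerical integration. The sketch below is only approximate: the window truncation and the grid spacing are assumptions, and the sinc tails decay slowly, so the inner products match 0 or 1 only to a few decimal places.

```python
# Sketch: numerically check that psi_i(t) = sqrt(2W) sinc(2W t - i) are orthonormal.
import numpy as np

W = 1.0
t = np.linspace(-200.0, 200.0, 800_001)      # wide (truncated) window, fine grid
dt = t[1] - t[0]

def psi(i):
    # np.sinc(x) is the normalized sinc sin(pi x)/(pi x), matching (11.217)
    return np.sqrt(2 * W) * np.sinc(2 * W * t - i)

for i, j in [(0, 0), (0, 1), (2, 5)]:
    inner = (psi(i) * psi(j)).sum() * dt     # Riemann-sum approximation
    print(f"<psi_{i}, psi_{j}> ~ {inner:.3f}")   # ~1 if i == j, ~0 otherwise
```

The diagonal inner products come out near 1 and the off-diagonal ones near 0, in agreement with (11.229) and (11.236).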
As we are converting the waveform channel into a discrete-time channel, we need to relate the input power constraint of the waveform channel to the input power constraint of the discrete-time channel. Let P′ be the average energy (i.e., the second moment) of the $X_i'$'s. We now calculate the average power of X′(t) in terms of P′. Since $\psi_i(t)$ has unit energy, the average contribution to the energy of X′(t) by each sample is P′. As there are 2W samples per unit time and $\psi_i(t)$, $-\infty < i < \infty$, are orthonormal, X′(t) accumulates energy from the samples at a rate equal to 2WP′. Upon considering
$$2WP' \le P, \qquad (11.247)$$
where P is the average power constraint on the input process X′(t), we obtain
$$P' \le \frac{P}{2W}, \qquad (11.248)$$
i.e., an input power constraint P for the bandlimited Gaussian channel translates to an input power constraint $\frac{P}{2W}$ for the discrete-time channel. By Theorem 11.21, the capacity of the memoryless Gaussian channel is
$$\frac{1}{2}\log\left(1 + \frac{P/2W}{N_0/2}\right) = \frac{1}{2}\log\left(1 + \frac{P}{N_0W}\right) \text{ bits per sample.} \qquad (11.249)$$
Since there are 2W samples per unit time, we conclude that the capacity of the bandlimited Gaussian channel is
$$W\log\left(1 + \frac{P}{N_0W}\right) \text{ bits per unit time.} \qquad (11.250)$$

The argument we have given above is evidently not rigorous, because if there is no additional constraint on the $X_i'$'s other than their average energy not exceeding $\frac{P}{2W}$, then X′(t) may not have finite energy. This induces a gap in the argument, because the Fourier transform of X′(t) may not exist and hence the sampling theorem cannot be applied.

A rigorous formulation of the bandlimited white Gaussian channel involves the consideration of an input signal of finite duration, which is analogous to a code for the DMC with a finite block length. Since a signal with finite duration cannot be bandlimited, this immediately leads to a contradiction. Overcoming this technical difficulty requires the use of prolate spheroidal wave functions [327, 212, 213], which are bandlimited functions with most of the energy on a finite interval. The main idea is that there are approximately 2WT orthonormal basis functions for the set of signals which are bandlimited to W and have most of the energy on [0, T) in time. We refer the reader to Gallager for a rigorous treatment of the bandlimited white Gaussian channel.

11.8 The Bandlimited Colored Gaussian Channel

In the last section, we have discussed the bandlimited white Gaussian channel occupying the frequency band [0, W]. We presented a heuristic argument that led to the formula in (11.250) for the channel capacity. Suppose the channel instead occupies the frequency band $[f_l, f_h]$, with $f_l$ being a multiple of $W = f_h - f_l$. Then the noise process Z′(t) has power spectral density
$$S_{Z'}(f) = \begin{cases} \frac{N_0}{2} & \text{if } f_l \le |f| \le f_h\\ 0 & \text{otherwise.} \end{cases} \qquad (11.251)$$
We refer to such a channel as the bandpass white Gaussian channel. By an extension of the heuristic argument for the bandlimited white Gaussian channel, which would involve the bandpass version of the sampling theorem, the same formula for the channel capacity can be obtained. The details are omitted here.

We now consider a waveform channel occupying the frequency band [0, W] with input power constraint P and zero-mean additive colored Gaussian noise Z(t). We refer to this channel as the bandlimited colored Gaussian channel. To analyze the capacity of this channel, divide the interval [0, W] into subintervals $[f_l^i, f_h^i]$ for 1 ≤ i ≤ k, where
$$f_l^i = (i-1)\Delta_k \qquad (11.252)$$
$$f_h^i = i\Delta_k, \qquad (11.253)$$
and
$$\Delta_k = \frac{W}{k} \qquad (11.254)$$
is the width of each subinterval.
As an approximation, assume $S_Z(f)$ is equal to a constant $S_{Z,i}$ over the subinterval $[f_l^i, f_h^i]$. Then the channel consists of k sub-channels, with the ith sub-channel being a bandpass (bandlimited if i = 1) white Gaussian channel occupying the frequency band $[f_l^i, f_h^i]$. Thus by letting $N_0 = 2S_{Z,i}$ in (11.251), we obtain from (11.250) that the capacity of the ith sub-channel is equal to
$$\Delta_k\log\left(1 + \frac{P_i}{2S_{Z,i}\Delta_k}\right) \qquad (11.255)$$
if $P_i$ is the input power allocated to that sub-channel.

The noise process of the ith sub-channel, denoted by $Z_i'(t)$, is obtained by passing Z(t) through the ideal bandpass filter with frequency response
$$H_i(f) = \begin{cases} 1 & \text{if } f_l^i \le |f| \le f_h^i\\ 0 & \text{otherwise.} \end{cases} \qquad (11.256)$$
It can be shown (see Problem 10) that the noise processes $Z_i'(t)$, 1 ≤ i ≤ k, are independent. By converting each sub-channel into an equivalent memoryless Gaussian channel as discussed in the last section, we see that the k sub-channels can be regarded as a system of parallel Gaussian channels. Thus the channel capacity is equal to the sum of the capacities of the individual sub-channels when the power allocation among the k sub-channels is optimal.

Let $P_i^*$ be the optimal power allocation for the ith sub-channel. Then it follows from (11.255) that the channel capacity is equal to
$$\sum_{i=1}^k \Delta_k\log\left(1 + \frac{P_i^*}{2S_{Z,i}\Delta_k}\right) = \sum_{i=1}^k \Delta_k\log\left(1 + \frac{P_i^*/2\Delta_k}{S_{Z,i}}\right), \qquad (11.257)$$
where by Proposition 11.23,
$$\frac{P_i^*}{2\Delta_k} = (\nu - S_{Z,i})^+, \qquad (11.258)$$
or
$$P_i^* = 2\Delta_k(\nu - S_{Z,i})^+, \qquad (11.259)$$
with
$$\sum_{i=1}^k P_i^* = P. \qquad (11.260)$$
Then from (11.259) and (11.260), we obtain
$$\sum_{i=1}^k (\nu - S_{Z,i})^+\Delta_k = \frac{P}{2}. \qquad (11.261)$$
As $k \to \infty$, following (11.257) and (11.258),
$$
\begin{aligned}
\sum_{i=1}^k \Delta_k\log\left(1 + \frac{P_i^*/2\Delta_k}{S_{Z,i}}\right) &= \sum_{i=1}^k \Delta_k\log\left(1 + \frac{(\nu - S_{Z,i})^+}{S_{Z,i}}\right) && (11.262)\\
&\to \int_0^W \log\left(1 + \frac{(\nu - S_Z(f))^+}{S_Z(f)}\right)df && (11.263)\\
&= \frac{1}{2}\int_{-W}^W \log\left(1 + \frac{(\nu - S_Z(f))^+}{S_Z(f)}\right)df, && (11.264)
\end{aligned}
$$
and following (11.261),
$$
\begin{aligned}
\sum_{i=1}^k (\nu - S_{Z,i})^+\Delta_k &\to \int_0^W (\nu - S_Z(f))^+\,df && (11.265)\\
&= \frac{1}{2}\int_{-W}^W (\nu - S_Z(f))^+\,df, && (11.266)
\end{aligned}
$$
where (11.264) and (11.266) are obtained by noting that
$$S_Z(-f) = S_Z(f) \qquad (11.267)$$
for $-\infty < f < \infty$ (see Problem 9). Hence, we conclude that the capacity of the bandlimited colored Gaussian channel is equal to
$$\frac{1}{2}\int_{-W}^W \log\left(1 + \frac{(\nu - S_Z(f))^+}{S_Z(f)}\right)df \text{ bits per unit time,} \qquad (11.268)$$
where ν satisfies
$$\int_{-W}^W (\nu - S_Z(f))^+\,df = P \qquad (11.269)$$
in view of (11.261). Figure 11.5 is an illustration of the water-filling process for determining ν, where the amount of water to be poured into the reservoir is equal to P.

[Figure 11.5: Water-filling for the bandlimited colored Gaussian channel, over the noise spectrum $S_Z(f)$ on [−W, W].]
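As in the discrete case, (11.268)-(11.269) reduce to a one-dimensional search for ν. The following numerical sketch uses an assumed noise spectrum $S_Z(f)$ and illustrative values of W and P; it bisects for the water level and then evaluates the capacity integral on a grid.

```python
# Sketch: spectral water-filling for the bandlimited colored Gaussian channel.
import numpy as np

W, P = 4.0, 2.0
f = np.linspace(0.0, W, 100_001)
df = f[1] - f[0]
S_Z = 0.2 + 0.1 * f ** 2                   # assumed colored noise psd on [0, W]

def poured_power(nu):                      # 2 * int_0^W (nu - S_Z(f))^+ df, by symmetry
    return 2.0 * np.maximum(nu - S_Z, 0.0).sum() * df

lo, hi = S_Z.min(), S_Z.max() + P          # the water level is bracketed by these
while hi - lo > 1e-10:
    nu = (lo + hi) / 2
    lo, hi = (nu, hi) if poured_power(nu) < P else (lo, nu)

# capacity (11.268), again folding [-W, W] onto [0, W] by symmetry
C = np.log2(1 + np.maximum(nu - S_Z, 0.0) / S_Z).sum() * df
print(f"water level nu = {nu:.4f}, capacity = {C:.4f} bits per unit time")
```

Frequencies where $S_Z(f) > \nu$ receive no power, which is the continuous analog of leaving a noisy sub-channel unused.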
11.9 Zero-Mean Gaussian Noise is the Worst Additive Noise

In the last section, we derived the capacity for a system of correlated Gaussian channels, where the noise vector is a zero-mean Gaussian random vector. In this section, we show that in terms of the capacity of the system, zero-mean Gaussian noise is the worst additive noise given that the noise vector has a fixed correlation matrix. Note that the diagonal elements of this correlation matrix specify the power of the individual noise variables, while the other elements in the matrix give a characterization of the correlation between the noise variables.

Theorem 11.32. For a fixed zero-mean Gaussian random vector $\mathbf X^*$, let
$$\mathbf Y = \mathbf X^* + \mathbf Z, \qquad (11.270)$$
where the joint pdf of $\mathbf Z$ exists and $\mathbf Z$ is independent of $\mathbf X^*$. Under the constraint that the correlation matrix of $\mathbf Z$ is equal to K, where K is any symmetric positive definite matrix, $I(\mathbf X^*;\mathbf Y)$ is minimized if and only if $\mathbf Z \sim \mathcal N(0, K)$.

Before proving this theorem, we first prove the following two lemmas.

Lemma 11.33. Let $\mathbf X$ be a zero-mean random vector and
$$\mathbf Y = \mathbf X + \mathbf Z, \qquad (11.271)$$
where $\mathbf Z$ is independent of $\mathbf X$. Then
$$\tilde K_{\mathbf Y} = \tilde K_{\mathbf X} + \tilde K_{\mathbf Z}. \qquad (11.272)$$

Proof. For any i and j, consider
$$
\begin{aligned}
EY_iY_j &= E(X_i+Z_i)(X_j+Z_j) && (11.273)\\
&= E(X_iX_j + X_iZ_j + Z_iX_j + Z_iZ_j) && (11.274)\\
&= EX_iX_j + EX_iZ_j + EZ_iX_j + EZ_iZ_j && (11.275)\\
&= EX_iX_j + (EX_i)(EZ_j) + (EZ_i)(EX_j) + EZ_iZ_j && (11.276)\\
&= EX_iX_j + (0)(EZ_j) + (EZ_i)(0) + EZ_iZ_j && (11.277)\\
&= EX_iX_j + EZ_iZ_j, && (11.278)
\end{aligned}
$$
where (11.277) follows from the assumption that $X_i$ has zero mean for all i. The lemma is proved. ⊓⊔

Lemma 11.34. Let $\mathbf Y^* \sim \mathcal N(0, K)$ and $\mathbf Y$ be any random vector with correlation matrix K. Then
$$\int f_{\mathbf Y^*}(\mathbf y)\log f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y = \int_{\mathcal S_{\mathbf Y}} f_{\mathbf Y}(\mathbf y)\log f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y. \qquad (11.279)$$

Proof. The random vector $\mathbf Y^*$ has joint pdf
$$f_{\mathbf Y^*}(\mathbf y) = \frac{1}{(\sqrt{2\pi})^k|K|^{1/2}}\,e^{-\frac{1}{2}(\mathbf y^\top K^{-1}\mathbf y)} \qquad (11.280)$$
for all $\mathbf y \in \Re^k$. Since $E\mathbf Y^* = 0$, $\tilde K_{\mathbf Y^*} = K_{\mathbf Y^*} = K$. Therefore, $\mathbf Y^*$ and $\mathbf Y$ have the same correlation matrix. Consider
$$
\begin{aligned}
\int [\ln f_{\mathbf Y^*}(\mathbf y)]f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y &= \int\left[-\frac{1}{2}(\mathbf y^\top K^{-1}\mathbf y) - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right]\right]f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y && (11.281)\\
&= -\frac{1}{2}\int(\mathbf y^\top K^{-1}\mathbf y)f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right] && (11.282)\\
&= -\frac{1}{2}\int\Bigl[\sum_{i,j}(K^{-1})_{ij}y_iy_j\Bigr]f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right] && (11.283)\\
&= -\frac{1}{2}\sum_{i,j}(K^{-1})_{ij}\int(y_iy_j)f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right] && (11.284)\\
&= -\frac{1}{2}\sum_{i,j}(K^{-1})_{ij}\int_{\mathcal S_{\mathbf Y}}(y_iy_j)f_{\mathbf Y}(\mathbf y)\,d\mathbf y - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right] && (11.285)\\
&= \int_{\mathcal S_{\mathbf Y}}\left[-\frac{1}{2}\mathbf y^\top K^{-1}\mathbf y - \ln\!\left[(\sqrt{2\pi})^k|K|^{1/2}\right]\right]f_{\mathbf Y}(\mathbf y)\,d\mathbf y && (11.286)\\
&= \int_{\mathcal S_{\mathbf Y}}[\ln f_{\mathbf Y^*}(\mathbf y)]f_{\mathbf Y}(\mathbf y)\,d\mathbf y. && (11.287)
\end{aligned}
$$
In the above, (11.285) is justified because $\mathbf Y$ and $\mathbf Y^*$ have the same correlation matrix, and (11.286) is obtained by backtracking the manipulations from (11.281) to (11.284) with $f_{\mathbf Y}(\mathbf y)$ in place of $f_{\mathbf Y^*}(\mathbf y)$. The lemma is proved upon changing the base of the logarithm. ⊓⊔

Proof of Theorem 11.32. Let $\mathbf Z^* \sim \mathcal N(0, K)$ such that $\mathbf Z^*$ is independent of $\mathbf X^*$, and let
$$\mathbf Y^* = \mathbf X^* + \mathbf Z^*. \qquad (11.288)$$
Obviously, the support of $\mathbf Y^*$ is $\Re^k$ because $\mathbf Y^*$ has a multivariate Gaussian distribution. Note that the support of $\mathbf Y$ is also $\Re^k$ regardless of the distribution of $\mathbf Z$, because the support of $\mathbf X^*$ is $\Re^k$. We need to prove that for any random vector $\mathbf Z$ with correlation matrix K, where $\mathbf Z$ is independent of $\mathbf X^*$ and the joint pdf of $\mathbf Z$ exists,
$$I(\mathbf X^*;\mathbf Y^*) \le I(\mathbf X^*;\mathbf Y). \qquad (11.289)$$
Since $E\mathbf Z^* = 0$, $\tilde K_{\mathbf Z^*} = K_{\mathbf Z^*} = K$. Therefore, $\mathbf Z^*$ and $\mathbf Z$ have the same correlation matrix. By noting that $\mathbf X^*$ has zero mean, we apply Lemma 11.33 to see that $\mathbf Y^*$ and $\mathbf Y$ have the same correlation matrix. The inequality in (11.289) can be proved by considering
$$
\begin{aligned}
&I(\mathbf X^*;\mathbf Y^*) - I(\mathbf X^*;\mathbf Y)\\
&\overset{a)}{=} h(\mathbf Y^*) - h(\mathbf Z^*) - h(\mathbf Y) + h(\mathbf Z) && (11.290)\\
&= -\int f_{\mathbf Y^*}(\mathbf y)\log f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y + \int f_{\mathbf Z^*}(\mathbf z)\log f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z\\
&\qquad + \int f_{\mathbf Y}(\mathbf y)\log f_{\mathbf Y}(\mathbf y)\,d\mathbf y - \int_{\mathcal S_{\mathbf Z}} f_{\mathbf Z}(\mathbf z)\log f_{\mathbf Z}(\mathbf z)\,d\mathbf z && (11.291)\\
&\overset{b)}{=} -\int f_{\mathbf Y}(\mathbf y)\log f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y + \int_{\mathcal S_{\mathbf Z}} f_{\mathbf Z}(\mathbf z)\log f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z\\
&\qquad + \int f_{\mathbf Y}(\mathbf y)\log f_{\mathbf Y}(\mathbf y)\,d\mathbf y - \int_{\mathcal S_{\mathbf Z}} f_{\mathbf Z}(\mathbf z)\log f_{\mathbf Z}(\mathbf z)\,d\mathbf z && (11.292)\\
&= \int \log\!\left[\frac{f_{\mathbf Y}(\mathbf y)}{f_{\mathbf Y^*}(\mathbf y)}\right]f_{\mathbf Y}(\mathbf y)\,d\mathbf y + \int_{\mathcal S_{\mathbf Z}}\log\!\left[\frac{f_{\mathbf Z^*}(\mathbf z)}{f_{\mathbf Z}(\mathbf z)}\right]f_{\mathbf Z}(\mathbf z)\,d\mathbf z && (11.293)\\
&\overset{c)}{=} \int_{\mathcal S_{\mathbf Z}}\int\log\!\left[\frac{f_{\mathbf Y}(\mathbf y)f_{\mathbf Z^*}(\mathbf z)}{f_{\mathbf Y^*}(\mathbf y)f_{\mathbf Z}(\mathbf z)}\right]f_{\mathbf Y\mathbf Z}(\mathbf y,\mathbf z)\,d\mathbf y\,d\mathbf z && (11.294)\\
&\overset{d)}{\le} \log\!\left[\int_{\mathcal S_{\mathbf Z}}\int\frac{f_{\mathbf Y}(\mathbf y)f_{\mathbf Z^*}(\mathbf z)}{f_{\mathbf Y^*}(\mathbf y)f_{\mathbf Z}(\mathbf z)}f_{\mathbf Y\mathbf Z}(\mathbf y,\mathbf z)\,d\mathbf y\,d\mathbf z\right] && (11.295)\\
&\overset{e)}{=} \log\!\left[\int\frac{1}{f_{\mathbf Y^*}(\mathbf y)}\left[\int_{\mathcal S_{\mathbf Z}} f_{\mathbf X^*}(\mathbf y - \mathbf z)f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z\right]f_{\mathbf Y}(\mathbf y)\,d\mathbf y\right] && (11.296)\\
&\overset{f)}{\le} \log\!\left[\int\frac{f_{\mathbf Y^*}(\mathbf y)}{f_{\mathbf Y^*}(\mathbf y)}f_{\mathbf Y}(\mathbf y)\,d\mathbf y\right] && (11.297)\\
&= 0. && (11.298)
\end{aligned}
$$
The above steps are explained as follows:
• For a), we assume that the pdf of $\mathbf Z$ exists so that $h(\mathbf Z)$ is defined. Moreover,
$$f_{\mathbf Y|\mathbf X^*}(\mathbf y|\mathbf x) = f_{\mathbf Z}(\mathbf y - \mathbf x). \qquad (11.299)$$
Then by Proposition 10.24, $f_{\mathbf Y}(\mathbf y)$ exists and hence $h(\mathbf Y)$ is defined.
• In b), we have replaced $f_{\mathbf Y^*}(\mathbf y)$ by $f_{\mathbf Y}(\mathbf y)$ in the first integral and replaced $f_{\mathbf Z^*}(\mathbf z)$ by $f_{\mathbf Z}(\mathbf z)$ in the second integral.
The former is justified by an application of Lemma 11.34 to $\mathbf Y^*$ and $\mathbf Y$, noting that $\mathbf Y^*$ is a zero-mean Gaussian random vector and that $\mathbf Y^*$ and $\mathbf Y$ have the same correlation matrix. The latter is justified similarly.
• To justify c), we need $\mathcal S_{\mathbf Y\mathbf Z} = \Re^k \times \mathcal S_{\mathbf Z}$, which can be seen by noting that
$$f_{\mathbf Y\mathbf Z}(\mathbf y,\mathbf z) = f_{\mathbf Y|\mathbf Z}(\mathbf y|\mathbf z)f_{\mathbf Z}(\mathbf z) = f_{\mathbf X^*}(\mathbf y - \mathbf z)f_{\mathbf Z}(\mathbf z) > 0 \qquad (11.300)$$
for all $\mathbf y \in \Re^k$ and all $\mathbf z \in \mathcal S_{\mathbf Z}$.
• d) follows from Jensen's inequality and the concavity of the logarithmic function.
• e) follows from (11.300).
• f) follows because
$$
\begin{aligned}
\int_{\mathcal S_{\mathbf Z}} f_{\mathbf X^*}(\mathbf y - \mathbf z)f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z &= \int_{\mathcal S_{\mathbf Z}} f_{\mathbf Y^*|\mathbf Z^*}(\mathbf y|\mathbf z)f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z && (11.301)\\
&\le \int f_{\mathbf Y^*|\mathbf Z^*}(\mathbf y|\mathbf z)f_{\mathbf Z^*}(\mathbf z)\,d\mathbf z && (11.302)\\
&= f_{\mathbf Y^*}(\mathbf y). && (11.303)
\end{aligned}
$$
Equality holds in (11.295) if and only if
$$f_{\mathbf Y}(\mathbf y)f_{\mathbf Z^*}(\mathbf z) = f_{\mathbf Y^*}(\mathbf y)f_{\mathbf Z}(\mathbf z) \quad \text{for all } \mathbf y \in \Re^k,\ \mathbf z \in \mathcal S_{\mathbf Z}. \qquad (11.304)$$
If $f_{\mathbf Z}(\mathbf z) = f_{\mathbf Z^*}(\mathbf z)$ for all $\mathbf z \in \mathcal S_{\mathbf Z}$, then $\mathcal S_{\mathbf Z} = \Re^k$ and $\mathbf Z \sim \mathcal N(0, K)$. This implies $f_{\mathbf Y}(\mathbf y) = f_{\mathbf Y^*}(\mathbf y)$ for all $\mathbf y \in \Re^k$ in view of (11.270) and (11.288), so that (11.304) holds. Thus $\mathbf Z \sim \mathcal N(0, K)$ is a sufficient condition for equality to hold in (11.295). Conversely, if equality holds in (11.295), we obtain from (11.304) that
$$f_{\mathbf Z^*}(\mathbf z)\int f_{\mathbf Y}(\mathbf y)\,d\mathbf y = f_{\mathbf Z}(\mathbf z)\int f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y. \qquad (11.305)$$
Since
$$\int f_{\mathbf Y}(\mathbf y)\,d\mathbf y = \int f_{\mathbf Y^*}(\mathbf y)\,d\mathbf y = 1, \qquad (11.306)$$
we see that $f_{\mathbf Z}(\mathbf z) = f_{\mathbf Z^*}(\mathbf z)$ for all $\mathbf z \in \mathcal S_{\mathbf Z}$, so that $\mathcal S_{\mathbf Z} = \Re^k$ and $\mathbf Z \sim \mathcal N(0, K)$. Hence, we conclude that $\mathbf Z \sim \mathcal N(0, K)$ is a necessary and sufficient condition for $I(\mathbf X^*;\mathbf Y)$ to be minimized. The theorem is proved. ⊓⊔

Consider the system of correlated Gaussian channels discussed in the last section. Denote the noise vector by $\mathbf Z^*$ and its correlation matrix by K. Note that K is also the covariance matrix of $\mathbf Z^*$ because $\mathbf Z^*$ has zero mean. In other words, $\mathbf Z^* \sim \mathcal N(0, K)$. Refer to this system as the zero-mean Gaussian system and let $C^*$ be its capacity. Then consider another system with exactly the same specification except that the noise vector, denoted by $\mathbf Z$, may be neither zero-mean nor Gaussian. We, however, require that the joint pdf of $\mathbf Z$ exists. Refer to this system as the alternative system and let C be its capacity.

We now apply Theorem 11.32 to show that $C \ge C^*$. Let $\mathbf X^*$ be the input random vector that achieves the capacity of the zero-mean Gaussian system. We have mentioned at the end of Section 11.6 that $\mathbf X^*$ is a zero-mean Gaussian random vector. Let $\mathbf Y^*$ and $\mathbf Y$ be as defined in (11.288) and (11.270); these correspond to the outputs of the zero-mean Gaussian system and the alternative system, respectively, when $\mathbf X^*$ is the input of both systems. Then
$$C \ge I(\mathbf X^*;\mathbf Y) \ge I(\mathbf X^*;\mathbf Y^*) = C^*, \qquad (11.307)$$
where the second inequality follows from (11.289) in the proof of Theorem 11.32. Hence, we conclude that zero-mean Gaussian noise is indeed the worst additive noise subject to a constraint on the correlation matrix.

Chapter Summary

Capacity of Continuous Memoryless Channel: For a continuous memoryless channel f(y|x) with average input constraint (κ, P), namely the requirement that
$$\frac{1}{n}\sum_{i=1}^n \kappa(x_i) \le P$$
for any codeword $(x_1, x_2, \cdots, x_n)$ transmitted over the channel,
$$C(P) = \sup_{F(x):\,E\kappa(X)\le P} I(X;Y),$$
where F(x) is the input distribution of the channel. C(P) is non-decreasing, concave, and left-continuous.

Mutually Typical Set: For a joint distribution F(x, y),
$$\Psi^n_{[XY]\delta} = \left\{(\mathbf x,\mathbf y) \in \mathcal X^n\times\mathcal Y^n : \left|\frac{1}{n}\log\frac{f(\mathbf y|\mathbf x)}{f(\mathbf y)} - I(X;Y)\right| \le \delta\right\}.$$

Lemma: For any δ > 0 and sufficiently large n,
$$\Pr\{(\mathbf X,\mathbf Y) \in \Psi^n_{[XY]\delta}\} \ge 1 - \delta.$$

Lemma: Let X and Y be a pair of random variables, and $(\mathbf X',\mathbf Y')$ be n i.i.d. copies of a pair of generic random variables (X′, Y′), where X′ and Y′ are independent and have the same marginal distributions as X and Y, respectively.
Then
$$\Pr\{(\mathbf X',\mathbf Y') \in \Psi^n_{[XY]\delta}\} \le 2^{-n(I(X;Y)-\delta)}.$$

Channel Coding Theorem: A message drawn uniformly from the set $\{1, 2, \cdots, 2^{n(R-\epsilon)}\}$ can be transmitted through a continuous memoryless channel with negligible probability of error as $n \to \infty$ if and only if R ≤ C.

Capacity of Memoryless Gaussian Channel:
$$C = \frac{1}{2}\log\left(1 + \frac{P}{N}\right).$$

Capacity of Parallel Gaussian Channels: For a system of parallel Gaussian channels with noise variable $Z_i \sim \mathcal N(0, N_i)$ for the ith channel and total input power constraint P,
$$C = \frac{1}{2}\sum_{i=1}^k \log\left(1 + \frac{(\nu - N_i)^+}{N_i}\right),$$
where ν satisfies $\sum_{i=1}^k (\nu - N_i)^+ = P$.

Capacity of Correlated Gaussian Channels: For a system of correlated Gaussian channels with noise vector $\mathbf Z \sim \mathcal N(0, K_{\mathbf Z})$ and total input power constraint P,
$$C = \frac{1}{2}\sum_{i=1}^k \log\left(1 + \frac{(\nu - \lambda_i)^+}{\lambda_i}\right),$$
where $K_{\mathbf Z}$ is diagonalizable as $Q\Lambda Q^\top$ with $\lambda_i$ being the ith diagonal element of Λ, and ν satisfies $\sum_{i=1}^k (\nu - \lambda_i)^+ = P$.

Capacity of the Bandlimited White Gaussian Channel:
$$C = W\log\left(1 + \frac{P}{N_0W}\right) \text{ bits per unit time.}$$

Capacity of the Bandlimited Colored Gaussian Channel:
$$C = \frac{1}{2}\int_{-W}^W \log\left(1 + \frac{(\nu - S_Z(f))^+}{S_Z(f)}\right)df \text{ bits per unit time,}$$
where ν satisfies $\int_{-W}^W (\nu - S_Z(f))^+\,df = P$.

Zero-Mean Gaussian is the Worst Additive Noise: For a system of channels with additive noise, if the correlation matrix of the noise vector is given, the capacity is minimized when the noise vector is zero-mean and Gaussian.

Problems

In the following, $\mathbf X$, $\mathbf Y$, $\mathbf Z$, etc. denote vectors of random variables.

1. Verify the two properties in Theorem 11.8 for the capacity of the memoryless Gaussian channel.
2. Let X and Y be two jointly distributed random variables with Y being continuous. The random variable Y is related to the random variable X through a conditional pdf f(y|x) defined for all x (cf. Definition 10.22). Prove that I(X; Y) is concave in F(x).
3. Refer to Lemma 11.18 and prove that
$$\Pr\{(\mathbf X',\mathbf Y') \in \Psi^n_{[XY]\delta}\} \ge (1-\delta)2^{-n(I(X;Y)+\delta)}$$
for n sufficiently large.
4. Show that the capacity of a continuous memoryless channel is not changed if (11.23) is replaced by
$$E\left[\frac{1}{n}\sum_{i=1}^n \kappa(x_i(W))\right] \le P,$$
i.e., the average input constraint is satisfied on the average by a randomly selected codeword instead of by every codeword in the codebook.
5. Show that $R_{ii'}(0)$ in (11.234) vanishes if and only if i ≠ i′.
6. Let $\mathbf Y = \mathbf X + \mathbf Z$, where $\mathbf Z$ is independent of $\mathbf X$. Show that $K_{\mathbf Y} = K_{\mathbf X} + K_{\mathbf Z}$. Note that unlike Lemma 11.33, it is not necessary to assume that either $\mathbf X$ or $\mathbf Z$ has zero mean.
7. Consider a system of Gaussian channels with noise vector $\mathbf Z \sim \mathcal N(0, K_{\mathbf Z})$ and input power constraint equal to 3. Determine the capacity of the system for the following two cases:
a) $K_{\mathbf Z} = \begin{bmatrix} 4 & 0 & 0\\ 0 & 5 & 0\\ 0 & 0 & 2\end{bmatrix}$;
b) $K_{\mathbf Z} = \begin{bmatrix} 7/4 & \sqrt2/4 & -3/4\\ \sqrt2/4 & 5/2 & -\sqrt2/4\\ -3/4 & -\sqrt2/4 & 7/4\end{bmatrix}$.
For b), you may use the results in Problem 4 in Chapter 10.
8. In the system of correlated Gaussian channels, let $K_{\mathbf Z}$ be diagonalizable as $Q\Lambda Q^\top$. Let $A^*$ be the k×k diagonal matrix with $a_i^*$ being the ith diagonal element, where $a_i^*$ is prescribed in the discussion following (11.193). Show that $\mathcal N(0, QA^*Q^\top)$ is the optimal input distribution.
9. Show that for a wide-sense stationary process X(t), $S_X(f) = S_X(-f)$ for all f.
10. Consider a zero-mean white Gaussian noise process Z(t). Let $h_1(t)$ and $h_2(t)$ be two impulse responses such that the supports of $H_1(f)$ and $H_2(f)$ do not overlap.
a) Show that for any t ≠ t′, the two random variables $(Z*h_1)(t)$ and $(Z*h_2)(t')$ are independent.
b) Show that the two processes $Z(t)*h_1(t)$ and $Z(t)*h_2(t)$ are independent.
c) Repeat a) and b) if Z(t) is a zero-mean colored Gaussian noise process. Hint: Regard Z(t) as being obtained by passing a zero-mean white Gaussian noise process through a coloring filter.
11. Interpret the bandpass white Gaussian channel as a special case of the bandlimited colored Gaussian channel in terms of the channel capacity.
12. Independent Gaussian noise is the worst. Let C be the capacity of a system of k Gaussian channels with $Z_i \sim \mathcal N(0, N_i)$. By ignoring the possible correlation among the noise variables, we can use the channels in the system independently as parallel Gaussian channels. Thus C is lower bounded by the expression in (11.180). In this sense, a Gaussian noise vector is the worst if its components are uncorrelated. Justify this claim analytically. Hint: Show that $I(\mathbf X;\mathbf Y) \ge \sum_i I(X_i;Y_i)$ if the $X_i$ are independent.

Historical Notes

Channels with additive Gaussian noise were first analyzed by Shannon, where the formula for the capacity of the bandlimited white Gaussian channel was given. The form of the channel coding theorem for the continuous memoryless channel presented in this chapter was first proved in the book by Gallager. A rigorous proof of the capacity formula for the bandlimited white Gaussian channel was obtained by Wyner. The water-filling solution to the capacity of the bandlimited colored Gaussian channel was developed by Shannon and was proved rigorously by Pinsker. The discussion in this chapter on the continuous memoryless channel with an average input constraint is adapted from the discussions in the book by Gallager and the book by Ihara, where in the former a comprehensive treatment of waveform channels can also be found. The Gaussian noise being the worst additive noise was proved by Ihara. The proof presented here is due to Diggavi and Cover.

12 Markov Structures

We have proved in Section 3.5 that if $X_1 \to X_2 \to X_3 \to X_4$ forms a Markov chain, the I-Measure $\mu^*$ always vanishes on the five atoms
$$
\begin{aligned}
&\tilde X_1 \cap \tilde X_2^c \cap \tilde X_3 \cap \tilde X_4^c\\
&\tilde X_1 \cap \tilde X_2^c \cap \tilde X_3 \cap \tilde X_4\\
&\tilde X_1 \cap \tilde X_2^c \cap \tilde X_3^c \cap \tilde X_4\\
&\tilde X_1 \cap \tilde X_2 \cap \tilde X_3^c \cap \tilde X_4\\
&\tilde X_1^c \cap \tilde X_2 \cap \tilde X_3^c \cap \tilde X_4.
\end{aligned} \qquad (12.1)
$$
Consequently, the I-Measure $\mu^*$ is completely specified by the values of $\mu^*$ on the other ten nonempty atoms of $\mathcal F_4$, and the information diagram for four random variables forming a Markov chain can be displayed in two dimensions as in Figure 3.11.

Figure 12.1 is a graph which represents the Markov chain $X_1 \to X_2 \to X_3 \to X_4$. The observant reader would notice that $\mu^*$ always vanishes on a nonempty atom A of $\mathcal F_4$ if and only if the graph in Figure 12.1 becomes disconnected upon removing all the vertices corresponding to the complemented set variables in A. For example, $\mu^*$ always vanishes on the atom $\tilde X_1\cap\tilde X_2^c\cap\tilde X_3\cap\tilde X_4^c$, and the graph in Figure 12.1 becomes disconnected upon removing vertices 2 and 4. On the other hand, $\mu^*$ does not necessarily vanish on the atom $\tilde X_1^c\cap\tilde X_2\cap\tilde X_3\cap\tilde X_4^c$, and the graph in Figure 12.1 remains connected upon removing vertices 1 and 4. This observation will be explained in a more general setting in the subsequent sections.

[Figure 12.1: The graph (the path 1 - 2 - 3 - 4) representing the Markov chain $X_1 \to X_2 \to X_3 \to X_4$.]
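The observation above is easy to verify exhaustively. In the following sketch (an illustration, not part of the text's development), each atom is identified with the set of its uncomplemented variables, and the subgraph induced by that set is tested for connectivity; the five disconnected subsets printed correspond exactly to the five atoms in (12.1).

```python
# Sketch: for the chain 1 - 2 - 3 - 4, find the vertex subsets whose induced
# subgraph is disconnected; these are the atoms on which mu* always vanishes.
from itertools import combinations

edges = [(1, 2), (2, 3), (3, 4)]

def connected(keep):
    # depth-first search over the subgraph induced by the vertex set `keep`
    keep = set(keep)
    seen, stack = set(), [min(keep)]
    while stack:
        u = stack.pop()
        seen.add(u)
        for a, b in edges:
            if u in (a, b):
                v = b if u == a else a
                if v in keep and v not in seen:
                    stack.append(v)
    return seen == keep

for r in range(2, 5):
    for keep in combinations([1, 2, 3, 4], r):
        if not connected(keep):
            print("disconnected:", keep)
# prints (1,3), (1,4), (2,4), (1,2,4), (1,3,4): the five atoms in (12.1)
```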
The theory of I-Measure establishes a one-to-one correspondence between Shannon's information measures and set theory. Based on this theory, we develop in this chapter a set-theoretic characterization of a Markov structure called full conditional mutual independence. A Markov chain, and more generally a Markov random field, is a collection of full conditional mutual independencies. We will show that if a collection of random variables forms a Markov random field, then the structure of $\mu^*$ can be simplified. In particular, when the random variables form a Markov chain, $\mu^*$ exhibits a very simple structure, so that the information diagram can be displayed in two dimensions regardless of the length of the Markov chain, and $\mu^*$ is always nonnegative. (See also Sections 3.5 and 3.6.)

The topics to be covered in this chapter are fundamental. Unfortunately, the proofs of the results are very heavy. At the first reading, the reader should understand the theorems through the examples instead of getting into the details of the proofs.

12.1 Conditional Mutual Independence

In this section, we explore the effect of conditional mutual independence on the structure of the I-Measure $\mu^*$. We begin with a simple example.

Example 12.1. Let X, Y, and Z be mutually independent random variables. Then
$$I(X;Y) = I(X;Y;Z) + I(X;Y|Z) = 0. \qquad (12.2)$$
Since $I(X;Y|Z) \ge 0$, we let
$$I(X;Y|Z) = a \ge 0, \qquad (12.3)$$
so that
$$I(X;Y;Z) = -a. \qquad (12.4)$$
Similarly,
$$I(Y;Z) = I(X;Y;Z) + I(Y;Z|X) = 0 \qquad (12.5)$$
and
$$I(X;Z) = I(X;Y;Z) + I(X;Z|Y) = 0. \qquad (12.6)$$
Then from (12.4), we obtain
$$I(Y;Z|X) = I(X;Z|Y) = a. \qquad (12.7)$$
The relations (12.3), (12.4), and (12.7) are shown in the information diagram in Figure 12.2, which indicates that X, Y, and Z are pairwise independent.

[Figure 12.2: X, Y, and Z are pairwise independent; each conditional mutual information equals a, and the triple intersection equals −a.]

We have proved in Theorem 2.39 that X, Y, and Z are mutually independent if and only if
$$H(X,Y,Z) = H(X) + H(Y) + H(Z). \qquad (12.8)$$
By counting atoms in the information diagram, we see that
$$
\begin{aligned}
0 &= H(X) + H(Y) + H(Z) - H(X,Y,Z) && (12.9)\\
&= I(X;Y|Z) + I(Y;Z|X) + I(X;Z|Y) + 2I(X;Y;Z) && (12.10)\\
&= a. && (12.11)
\end{aligned}
$$
Thus a = 0, which implies that
$$I(X;Y|Z),\quad I(Y;Z|X),\quad I(X;Z|Y),\quad I(X;Y;Z) \qquad (12.12)$$
are all equal to 0. Equivalently, $\mu^*$ vanishes on
$$\tilde X\cap\tilde Y-\tilde Z,\quad \tilde Y\cap\tilde Z-\tilde X,\quad \tilde X\cap\tilde Z-\tilde Y,\quad \tilde X\cap\tilde Y\cap\tilde Z, \qquad (12.13)$$
which are precisely the atoms in the intersection of any two of the set variables $\tilde X$, $\tilde Y$, and $\tilde Z$.

Conversely, if $\mu^*$ vanishes on the sets in (12.13), then we see from (12.10) that (12.8) holds, i.e., X, Y, and Z are mutually independent. Therefore, X, Y, and Z are mutually independent if and only if $\mu^*$ vanishes on the sets in (12.13). This is shown in the information diagram in Figure 12.3.

[Figure 12.3: X, Y, and Z are mutually independent; $\mu^*$ vanishes on all four atoms in (12.13).]
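Example 12.1 can be complemented numerically. The classic triple with Z = X XOR Y, which is not discussed in the text, is pairwise independent but not mutually independent: it realizes Figure 12.2 with a = 1, whereas three independent fair bits give a = 0 as in Figure 12.3. A minimal sketch (the helper functions are ad hoc):

```python
# Sketch: compute I(X;Y|Z) and I(X;Y;Z) from a joint pmf over three bits.
from itertools import product
from math import log2

def H(pmf):                                  # entropy of a dict {outcome: prob}
    return -sum(p * log2(p) for p in pmf.values() if p > 0)

def marginal(pmf, idx):
    out = {}
    for xs, p in pmf.items():
        key = tuple(xs[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def report(name, pmf):
    Hx, Hy, Hz = (H(marginal(pmf, (i,))) for i in range(3))
    Hxy, Hxz, Hyz = (H(marginal(pmf, ij)) for ij in ((0, 1), (0, 2), (1, 2)))
    I_xy_given_z = Hxz + Hyz - Hz - H(pmf)   # I(X;Y|Z)
    I_xyz = (Hx + Hy - Hxy) - I_xy_given_z   # I(X;Y;Z) = I(X;Y) - I(X;Y|Z)
    print(f"{name}: I(X;Y|Z) = {I_xy_given_z:.3f}, I(X;Y;Z) = {I_xyz:.3f}")

iid = {xyz: 1 / 8 for xyz in product((0, 1), repeat=3)}          # mutually independent
xor = {(x, y, x ^ y): 1 / 4 for x, y in product((0, 1), repeat=2)}
report("independent bits", iid)   # a = 0, as in Figure 12.3
report("Z = X xor Y     ", xor)   # a = 1 and I(X;Y;Z) = -1, as in Figure 12.2
```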
The theme of this example will be extended to conditional mutual independence among collections of random variables in Theorem 12.9, which is the main result in this section. In the rest of the section, we will develop the necessary tools for proving this theorem. At first reading, the reader should try to understand the results by studying the examples without getting into the details of the proofs.

In Theorem 2.39, we have proved that $X_1, X_2, \cdots, X_n$ are mutually independent if and only if
$$H(X_1, X_2, \cdots, X_n) = \sum_{i=1}^n H(X_i). \qquad (12.14)$$
By conditioning on a random variable Y, one can readily prove the following.

Theorem 12.2. $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y if and only if
$$H(X_1, X_2, \cdots, X_n|Y) = \sum_{i=1}^n H(X_i|Y). \qquad (12.15)$$

We now prove two alternative characterizations of conditional mutual independence.

Theorem 12.3. $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y if and only if for all 1 ≤ i ≤ n,
$$I(X_i; X_j, j \ne i|Y) = 0, \qquad (12.16)$$
i.e., $X_i$ and $(X_j, j \ne i)$ are independent conditioning on Y.

Remark. A conditional independency is a special case of a conditional mutual independency. However, this theorem says that a conditional mutual independency is equivalent to a set of conditional independencies.

Proof of Theorem 12.3. It suffices to prove that (12.15) and (12.16) are equivalent. Assume (12.15) is true, i.e., $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y. Then for all i, $X_i$ is independent of $(X_j, j \ne i)$ conditioning on Y. This proves (12.16).

Now assume that (12.16) is true for all 1 ≤ i ≤ n. Consider
$$
\begin{aligned}
0 &= I(X_i; X_j, j \ne i|Y) && (12.17)\\
&= I(X_i; X_1, X_2, \cdots, X_{i-1}|Y) + I(X_i; X_{i+1}, \cdots, X_n|Y, X_1, X_2, \cdots, X_{i-1}). && (12.18)
\end{aligned}
$$
Since mutual information is always nonnegative, this implies
$$I(X_i; X_1, \cdots, X_{i-1}|Y) = 0, \qquad (12.19)$$
i.e., $X_i$ and $(X_1, X_2, \cdots, X_{i-1})$ are independent conditioning on Y. Therefore, $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y (see the proof of Theorem 2.39), proving (12.15). Hence, the theorem is proved. ⊓⊔

Theorem 12.4. $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y if and only if
$$H(X_1, X_2, \cdots, X_n|Y) = \sum_{i=1}^n H(X_i|Y, X_j, j \ne i). \qquad (12.20)$$

Proof. It suffices to prove that (12.15) and (12.20) are equivalent. Assume (12.15) is true, i.e., $X_1, X_2, \cdots, X_n$ are mutually independent conditioning on Y. Since for all i, $X_i$ is independent of $(X_j, j \ne i)$ conditioning on Y,
$$H(X_i|Y) = H(X_i|Y, X_j, j \ne i). \qquad (12.21)$$
Therefore, (12.15) implies (12.20).

Now assume that (12.20) is true. Consider
$$
\begin{aligned}
H(X_1, X_2, \cdots, X_n|Y) &= \sum_{i=1}^n H(X_i|Y, X_1, \cdots, X_{i-1}) && (12.22)\\
&= \sum_{i=1}^n [H(X_i|Y, X_j, j \ne i) + I(X_i; X_{i+1}, \cdots, X_n|Y, X_1, \cdots, X_{i-1})] && (12.23)\\
&= \sum_{i=1}^n H(X_i|Y, X_j, j \ne i) + \sum_{i=1}^n I(X_i; X_{i+1}, \cdots, X_n|Y, X_1, \cdots, X_{i-1}). && (12.24)
\end{aligned}
$$
Then (12.20) implies
$$\sum_{i=1}^n I(X_i; X_{i+1}, \cdots, X_n|Y, X_1, \cdots, X_{i-1}) = 0. \qquad (12.25)$$
Since all the terms in the above summation are nonnegative, they must all be equal to 0. In particular, for i = 1, we have
$$I(X_1; X_2, \cdots, X_n|Y) = 0. \qquad (12.26)$$
By symmetry, it can be shown that
$$I(X_i; X_j, j \ne i|Y) = 0 \qquad (12.27)$$
for all 1 ≤ i ≤ n. Then this implies (12.15) by the last theorem, completing the proof. ⊓⊔

Theorem 12.5. Let C and $Q_i$ be disjoint index sets and $W_i$ be a subset of $Q_i$ for 1 ≤ i ≤ k, where k ≥ 2. Assume that there exist at least two i such that $W_i \ne \emptyset$. Let $X_{Q_i} = (X_l, l \in Q_i)$, 1 ≤ i ≤ k, and $X_C = (X_l, l \in C)$ be collections of random variables. If $X_{Q_i}$, 1 ≤ i ≤ k, are mutually independent conditioning on $X_C$, then the $X_{W_i}$ such that $W_i \ne \emptyset$ are mutually independent conditioning on $(X_C, X_{Q_i - W_i}, 1 \le i \le k)$.

We first give an example before we prove the theorem.

Example 12.6. Suppose $X_1$, $(X_2, X_3, X_4)$, and $(X_5, X_6)$ are mutually independent conditioning on $X_7$. By Theorem 12.5, $X_1$, $X_2$, and $(X_5, X_6)$ are mutually independent conditioning on $(X_3, X_4, X_7)$.

Proof of Theorem 12.5. Assume $X_{Q_i}$, 1 ≤ i ≤ k, are mutually independent conditioning on $X_C$, i.e.,
$$H(X_{Q_i}, 1 \le i \le k|X_C) = \sum_{i=1}^k H(X_{Q_i}|X_C). \qquad (12.28)$$
Consider
$$
\begin{aligned}
&H(X_{W_i}, 1 \le i \le k|X_C, X_{Q_i - W_i}, 1 \le i \le k)\\
&= H(X_{Q_i}, 1 \le i \le k|X_C) - H(X_{Q_i - W_i}, 1 \le i \le k|X_C) && (12.29)\\
&= \sum_{i=1}^k H(X_{Q_i}|X_C) - \sum_{i=1}^k H(X_{Q_i - W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le i-1) && (12.30)\\
&\ge \sum_{i=1}^k H(X_{Q_i}|X_C, X_{Q_j - W_j}, 1 \le j \le i-1)\\
&\qquad - \sum_{i=1}^k H(X_{Q_i - W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le i-1) && (12.31)\\
&= \sum_{i=1}^k H(X_{W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le i) && (12.32)\\
&\ge \sum_{i=1}^k H(X_{W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le k). && (12.33)
\end{aligned}
$$
In the second step we have used (12.28), and the two inequalities follow because conditioning does not increase entropy.
On the other hand, by the chain rule for entropy, we have
$$H(X_{W_i}, 1 \le i \le k|X_C, X_{Q_i - W_i}, 1 \le i \le k) = \sum_{i=1}^k H(X_{W_i}|X_C, (X_{Q_j - W_j}, 1 \le j \le k), (X_{W_l}, 1 \le l \le i-1)). \qquad (12.34)$$
Therefore, it follows from (12.33) that
$$
\begin{aligned}
&\sum_{i=1}^k H(X_{W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le k) && (12.35)\\
&\le H(X_{W_i}, 1 \le i \le k|X_C, X_{Q_i - W_i}, 1 \le i \le k) && (12.36)\\
&= \sum_{i=1}^k H(X_{W_i}|X_C, (X_{Q_j - W_j}, 1 \le j \le k), (X_{W_l}, 1 \le l \le i-1)). && (12.37)
\end{aligned}
$$
However, since conditioning does not increase entropy, the ith term in the summation in (12.35) is lower bounded by the ith term in the summation in (12.37). Thus we conclude that the inequality in (12.36) is an equality. Hence, the conditional entropy in (12.36) is equal to the summation in (12.35), i.e.,
$$
\begin{aligned}
&H(X_{W_i}, 1 \le i \le k|X_C, X_{Q_i - W_i}, 1 \le i \le k) && (12.38)\\
&= \sum_{i=1}^k H(X_{W_i}|X_C, X_{Q_j - W_j}, 1 \le j \le k). && (12.39)
\end{aligned}
$$
The theorem is proved. ⊓⊔

Theorem 12.5 specifies a set of conditional mutual independencies (CMIs) which is implied by a CMI. This theorem is crucial for understanding the effect of a CMI on the structure of the I-Measure $\mu^*$, which we discuss next.

Lemma 12.7. Let $(Z_{i1}, \cdots, Z_{it_i})$, 1 ≤ i ≤ r, be r collections of random variables, where r ≥ 2, and let Y be a random variable such that $(Z_{i1}, \cdots, Z_{it_i})$, 1 ≤ i ≤ r, are mutually independent conditioning on Y. Then
$$\mu^*\left(\bigcap_{i=1}^r\bigcap_{j=1}^{t_i}\tilde Z_{ij} - \tilde Y\right) = 0. \qquad (12.40)$$

We first prove the following set identity, which will be used for proving the above lemma.

Lemma 12.8. Let S and T be disjoint index sets, and let $A_i$ and B be sets. Let μ be a set-additive function. Then
$$\mu\left(\left(\bigcap_{i\in S}A_i\right)\cap\left(\bigcap_{j\in T}A_j\right)-B\right) = \sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\bigl(\mu(A_{S'}-B) + \mu(A_{T'}-B) - \mu(A_{S'\cup T'}-B)\bigr), \qquad (12.41)$$
where $A_{S'}$ denotes $\bigcup_{i\in S'}A_i$.

Proof. The right hand side of (12.41) is equal to
$$\sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{S'}-B) + \sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{T'}-B) - \sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{S'\cup T'}-B). \qquad (12.42)$$
Now
$$\sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{S'}-B) = \sum_{S'\subset S}(-1)^{|S'|}\mu(A_{S'}-B)\sum_{T'\subset T}(-1)^{|T'|}. \qquad (12.43)$$
Since
$$\sum_{T'\subset T}(-1)^{|T'|} = \sum_{k=0}^{|T|}\binom{|T|}{k}(-1)^k = 0 \qquad (12.44)$$
by the binomial formula (obtained by letting a = 1 and b = −1 in $(a+b)^{|T|} = \sum_{k=0}^{|T|}\binom{|T|}{k}a^kb^{|T|-k}$), we conclude that
$$\sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{S'}-B) = 0. \qquad (12.45)$$
Similarly,
$$\sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|}\mu(A_{T'}-B) = 0. \qquad (12.46)$$
Therefore, (12.41) is equivalent to
$$\mu\left(\left(\bigcap_{i\in S}A_i\right)\cap\left(\bigcap_{j\in T}A_j\right)-B\right) = \sum_{S'\subset S}\sum_{T'\subset T}(-1)^{|S'|+|T'|+1}\mu(A_{S'\cup T'}-B), \qquad (12.47)$$
which can readily be obtained from Theorem 3.145. Hence, the lemma is proved. ⊓⊔

Proof of Lemma 12.7. We first prove the lemma for r = 2. By Lemma 12.8,
$$\mu^*\left(\bigcap_{i=1}^2\bigcap_{j=1}^{t_i}\tilde Z_{ij} - \tilde Y\right) = \sum_{S'\subset\{1,\cdots,t_1\}}\sum_{T'\subset\{1,\cdots,t_2\}}(-1)^{|S'|+|T'|}\left[\mu^*\Bigl(\bigcup_{j\in S'}\tilde Z_{1j} - \tilde Y\Bigr) + \mu^*\Bigl(\bigcup_{k\in T'}\tilde Z_{2k} - \tilde Y\Bigr) - \mu^*\Bigl(\Bigl(\bigcup_{j\in S'}\tilde Z_{1j}\Bigr)\cup\Bigl(\bigcup_{k\in T'}\tilde Z_{2k}\Bigr) - \tilde Y\Bigr)\right]. \qquad (12.48)$$
The expression in the square brackets is equal to
$$H(Z_{1j}, j\in S'|Y) + H(Z_{2k}, k\in T'|Y) - H((Z_{1j}, j\in S'), (Z_{2k}, k\in T')|Y), \qquad (12.49)$$
which vanishes because $(Z_{1j}, j\in S')$ and $(Z_{2k}, k\in T')$ are independent conditioning on Y. Therefore the lemma is proved for r = 2.

For r > 2, we write
$$\mu^*\left(\bigcap_{i=1}^r\bigcap_{j=1}^{t_i}\tilde Z_{ij} - \tilde Y\right) = \mu^*\left(\left(\bigcap_{i=1}^{r-1}\bigcap_{j=1}^{t_i}\tilde Z_{ij}\right)\cap\left(\bigcap_{j=1}^{t_r}\tilde Z_{rj}\right) - \tilde Y\right). \qquad (12.50)$$
Since $((Z_{i1}, \cdots, Z_{it_i}), 1 \le i \le r-1)$ and $(Z_{r1}, \cdots, Z_{rt_r})$ are independent conditioning on Y, upon applying the lemma for r = 2, we see that
$$\mu^*\left(\bigcap_{i=1}^r\bigcap_{j=1}^{t_i}\tilde Z_{ij} - \tilde Y\right) = 0. \qquad (12.51)$$
The lemma is proved. ⊓⊔
Theorem 12.9. Let $T$ and $Q_i$, $1 \le i \le k$, be disjoint index sets, where $k \ge 2$, and let $X_{Q_i} = (X_l, l \in Q_i)$, $1 \le i \le k$, and $X_T = (X_l, l \in T)$ be collections of random variables. Then $X_{Q_i}$, $1 \le i \le k$, are mutually independent conditioning on $X_T$ if and only if the following holds: for any $W_1, W_2, \cdots, W_k$, where $W_i \subset Q_i$, $1 \le i \le k$, if there exist at least two $i$ such that $W_i \neq \emptyset$, then
\[ \mu^*\left( \Big( \bigcap_{i=1}^k \bigcap_{j \in W_i} \tilde{X}_j \Big) - \tilde{X}_{T \cup (\cup_{i=1}^k (Q_i - W_i))} \right) = 0. \tag{12.52} \]

We first give an example before proving this fundamental result. The reader should compare this example with Example 12.6.

Example 12.10. Suppose $X_1$, $(X_2, X_3, X_4)$, and $(X_5, X_6)$ are mutually independent conditioning on $X_7$. By Theorem 12.9,
\[ \mu^*(\tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_5 \cap \tilde{X}_6 - (\tilde{X}_3 \cup \tilde{X}_4 \cup \tilde{X}_7)) = 0. \tag{12.53} \]
However, the theorem does not say, for instance, that
\[ \mu^*(\tilde{X}_2 \cap \tilde{X}_4 - (\tilde{X}_1 \cup \tilde{X}_3 \cup \tilde{X}_5 \cup \tilde{X}_6 \cup \tilde{X}_7)) \tag{12.54} \]
is equal to 0.

Proof of Theorem 12.9. We first prove the 'if' part. Assume that for any $W_1, W_2, \cdots, W_k$, where $W_i \subset Q_i$, $1 \le i \le k$, if there exist at least two $i$ such that $W_i \neq \emptyset$, then (12.52) holds. Consider
\[ H(X_{Q_i}, 1 \le i \le k \,|\, X_T) = \mu^*\big( \tilde{X}_{\cup_{i=1}^k Q_i} - \tilde{X}_T \big) \tag{12.55} \]
\[ = \sum_{B \in S} \mu^*(B), \tag{12.56} \]
where $S$ consists of sets of the form
\[ \Big( \bigcap_{i=1}^k \bigcap_{j \in W_i} \tilde{X}_j \Big) - \tilde{X}_{T \cup (\cup_{i=1}^k (Q_i - W_i))} \tag{12.57} \]
with $W_i \subset Q_i$ for $1 \le i \le k$ and at least one $i$ such that $W_i \neq \emptyset$. By our assumption, if $B \in S$ is such that there exist at least two $i$ for which $W_i \neq \emptyset$, then $\mu^*(B) = 0$. Therefore, if $\mu^*(B)$ is possibly nonzero, then $B$ must be such that there exists a unique $i$ for which $W_i \neq \emptyset$.

Now for $1 \le i \le k$, let $S_i$ be the set consisting of sets of the form in (12.57) with $W_i \subset Q_i$, $W_i \neq \emptyset$, and $W_l = \emptyset$ for $l \neq i$. In other words, $S_i$ consists of atoms of the form
\[ \Big( \bigcap_{j \in W_i} \tilde{X}_j \Big) - \tilde{X}_{T \cup (\cup_{l \neq i} Q_l) \cup (Q_i - W_i)} \tag{12.58} \]
with $W_i \subset Q_i$ and $W_i \neq \emptyset$. Then
\[ \sum_{B \in S} \mu^*(B) = \sum_{i=1}^k \sum_{B \in S_i} \mu^*(B). \tag{12.59} \]
Now
\[ \tilde{X}_{Q_i} - \tilde{X}_{T \cup (\cup_{l \neq i} Q_l)} = \bigcup_{\substack{W_i \subset Q_i \\ W_i \neq \emptyset}} \left( \Big( \bigcap_{j \in W_i} \tilde{X}_j \Big) - \tilde{X}_{T \cup (\cup_{l \neq i} Q_l) \cup (Q_i - W_i)} \right) \tag{12.60} \]
\[ = \bigcup_{B \in S_i} B. \tag{12.61} \]
Since $\mu^*$ is set-additive, we have
\[ \mu^*\big( \tilde{X}_{Q_i} - \tilde{X}_{T \cup (\cup_{l \neq i} Q_l)} \big) = \sum_{B \in S_i} \mu^*(B). \tag{12.62} \]
Hence, from (12.56) and (12.59), we have
\[ H(X_{Q_i}, 1 \le i \le k \,|\, X_T) = \sum_{i=1}^k \sum_{B \in S_i} \mu^*(B) \tag{12.63} \]
\[ = \sum_{i=1}^k \mu^*\big( \tilde{X}_{Q_i} - \tilde{X}_{T \cup (\cup_{l \neq i} Q_l)} \big) \tag{12.64} \]
\[ = \sum_{i=1}^k H(X_{Q_i} \,|\, X_T, X_{Q_l}, l \neq i), \tag{12.65} \]
where (12.64) follows from (12.62). By Theorem 12.4, $X_{Q_i}$, $1 \le i \le k$, are mutually independent conditioning on $X_T$.

We now prove the 'only if' part. Assume $X_{Q_i}$, $1 \le i \le k$, are mutually independent conditioning on $X_T$. For any collection of sets $W_1, W_2, \cdots, W_k$, where $W_i \subset Q_i$, $1 \le i \le k$, if there exist at least two $i$ such that $W_i \neq \emptyset$, then by Theorem 12.5, $X_{W_i}$, $1 \le i \le k$, are mutually independent conditioning on $(X_T, X_{Q_i - W_i}, 1 \le i \le k)$. By Lemma 12.7, we obtain (12.52). The theorem is proved. □

12.2 Full Conditional Mutual Independence

Definition 12.11. A conditional mutual independency on $X_1, X_2, \cdots, X_n$ is full if all of $X_1, X_2, \cdots, X_n$ are involved. Such a conditional mutual independency is called a full conditional mutual independency (FCMI).

Example 12.12. For $n = 5$,

$X_1, X_2, X_4$, and $X_5$ are mutually independent conditioning on $X_3$

is an FCMI. However,

$X_1, X_2$, and $X_5$ are mutually independent conditioning on $X_3$

is not an FCMI because $X_4$ is not involved.

As in the previous chapters, we let
\[ N_n = \{1, 2, \cdots, n\}. \tag{12.66} \]
In Theorem 12.9, if
\[ T \cup \Big( \bigcup_{i=1}^k Q_i \Big) = N_n, \tag{12.67} \]
then the tuple $(T, Q_i, 1 \le i \le k)$ defines the following FCMI on $X_1, X_2, \cdots, X_n$:

$K$: $X_{Q_1}, X_{Q_2}, \cdots, X_{Q_k}$ are mutually independent conditioning on $X_T$.

We will denote $K$ by $(T, Q_i, 1 \le i \le k)$.
Definition 12.13. Let $K = (T, Q_i, 1 \le i \le k)$ be an FCMI on $X_1, X_2, \cdots, X_n$. The image of $K$, denoted by $\mathrm{Im}(K)$, is the set of all atoms of $\mathcal{F}_n$ that have the form of the set in (12.57), where $W_i \subset Q_i$, $1 \le i \le k$, and there exist at least two $i$ such that $W_i \neq \emptyset$.

Recall from Chapter 3 that $\mathcal{A}$ is the set of all nonempty atoms of $\mathcal{F}_n$.

Proposition 12.14. Let $K = (T, Q_1, Q_2)$ be an FCI (full conditional independency) on $X_1, X_2, \cdots, X_n$. Then
\[ \mathrm{Im}(K) = \{ A \in \mathcal{A} : A \subset (\tilde{X}_{Q_1} \cap \tilde{X}_{Q_2} - \tilde{X}_T) \}. \tag{12.68} \]

Proposition 12.15. Let $K = (T, Q_i, 1 \le i \le k)$ be an FCMI on $X_1, X_2, \cdots, X_n$. Then
\[ \mathrm{Im}(K) = \Big\{ A \in \mathcal{A} : A \subset \bigcup_{1 \le i < j \le k} (\tilde{X}_{Q_i} \cap \tilde{X}_{Q_j} - \tilde{X}_T) \Big\}. \]

12.3 Markov Random Field

Let $G = (V, E)$ be an undirected graph, where $V$ is the set of vertices and $E$ is the set of edges. For $U \subset V$, denote by $G \backslash U$ the graph obtained from $G$ by removing all the vertices in $U$ and all the edges joining them, and let $V_1(U), V_2(U), \cdots, V_{s(U)}(U)$ be the (vertex sets of the) components of $G \backslash U$, where $s(U)$ is the number of such components. If $s(U) > 1$, we say that $U$ is a cutset in $G$.

Definition 12.23 (Markov Random Field). Let $G = (V, E)$ be an undirected graph with $V = N_n = \{1, 2, \cdots, n\}$, and let $X_i$ be a random variable corresponding to vertex $i$. The random variables $X_1, X_2, \cdots, X_n$ form a Markov random field represented by $G$ if for all cutsets $U$ in $G$, the sets of random variables $X_{V_1(U)}, X_{V_2(U)}, \cdots, X_{V_{s(U)}(U)}$ are mutually independent conditioning on $X_U$.

This definition of a Markov random field is referred to as the global Markov property in the literature. If $X_1, X_2, \cdots, X_n$ form a Markov random field represented by a graph $G$, we also say that $X_1, X_2, \cdots, X_n$ form a Markov graph $G$. When $G$ is a chain, we say that $X_1, X_2, \cdots, X_n$ form a Markov chain.

In the definition of a Markov random field, each cutset $U$ in $G$ specifies an FCMI on $X_1, X_2, \cdots, X_n$, denoted by $[U]$. Formally,

$[U]$: $X_{V_1(U)}, \cdots, X_{V_{s(U)}(U)}$ are mutually independent conditioning on $X_U$.

For a collection of cutsets $U_1, U_2, \cdots, U_k$ in $G$, we introduce the notation
\[ [U_1, U_2, \cdots, U_k] = [U_1] \wedge [U_2] \wedge \cdots \wedge [U_k], \tag{12.88} \]
where '$\wedge$' denotes 'logical AND.' Using this notation, $X_1, X_2, \cdots, X_n$ form a Markov graph $G$ if and only if
\[ [\, U \subset V : U \neq V \text{ and } s(U) > 1 \,] \tag{12.89} \]
holds. Therefore, a Markov random field is simply a collection of FCMI's induced by a graph.
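To make the graph-theoretic notions above concrete, the following is a small self-contained sketch (ours, on a hypothetical graph, not from the text) that computes the components $V_1(U), \cdots, V_{s(U)}(U)$ of $G \backslash U$ and tests whether $U$ is a cutset:

```python
# A minimal sketch (illustrative, not from the text) of the cutset machinery:
# components(edges, vertices, U) returns the vertex sets of the components of
# G \ U, and U is a cutset iff the number of components s(U) exceeds 1.
def components(edges, vertices, U):
    """Connected components of the graph after deleting the vertices in U."""
    adj = {v: set() for v in vertices if v not in U}
    for a, b in edges:
        if a not in U and b not in U:
            adj[a].add(b)
            adj[b].add(a)
    seen, comps = set(), []
    for v in adj:
        if v in seen:
            continue
        stack, comp = [v], set()
        while stack:                      # depth-first search from v
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

# Hypothetical 5-vertex graph: the path 1 - 2 - 3 - 4 - 5.
V = {1, 2, 3, 4, 5}
E = [(1, 2), (2, 3), (3, 4), (4, 5)]
print(components(E, V, U={3}))   # [{1, 2}, {4, 5}]: s(U) = 2, so {3} is a cutset
print(components(E, V, U={1}))   # [{2, 3, 4, 5}]: s(U) = 1, not a cutset
```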
We now define two types of nonempty atoms of $\mathcal{F}_n$ with respect to a graph $G$. Recall the definition of the set $U_A$ for a nonempty atom $A$ of $\mathcal{F}_n$ in (12.73).

Definition 12.24. For a nonempty atom $A$ of $\mathcal{F}_n$, if $s(U_A) = 1$, i.e., $G \backslash U_A$ is connected, then $A$ is a Type I atom; otherwise $A$ is a Type II atom. The sets of all Type I and Type II atoms of $\mathcal{F}_n$ are denoted by $\mathcal{T}_1$ and $\mathcal{T}_2$, respectively.

Theorem 12.25. $X_1, X_2, \cdots, X_n$ form a Markov graph $G$ if and only if $\mu^*$ vanishes on all the Type II atoms.

Before we prove this theorem, we first state the following proposition, which is the graph-theoretic analog of Theorem 12.5. The proof is trivial and is omitted. This proposition and Theorem 12.5 together establish an analogy between the structure of conditional mutual independence and the connectivity of a graph. This analogy will play a key role in proving Theorem 12.25.

Proposition 12.26. Let $C$ and $Q_i$ be disjoint subsets of the vertex set $V$ of a graph $G$ and $W_i$ be a subset of $Q_i$ for $1 \le i \le k$, where $k \ge 2$. Assume that there exist at least two $i$ such that $W_i \neq \emptyset$. If $Q_i$, $1 \le i \le k$, are disconnected in $G \backslash C$, then those $W_i$ which are nonempty are disconnected in $G \backslash (C \cup \bigcup_{i=1}^k (Q_i - W_i))$.

Example 12.27. In the graph $G$ in Figure 12.4, $\{1\}$, $\{2, 3, 4\}$, and $\{5, 6\}$ are disconnected in $G \backslash \{7\}$. Then Proposition 12.26 says that $\{1\}$, $\{2\}$, and $\{5, 6\}$ are disconnected in $G \backslash \{3, 4, 7\}$.

Fig. 12.4. The graph G in Example 12.27.

Proof of Theorem 12.25. We note that $\{U_A, A \in \mathcal{A}\}$ contains precisely all the proper subsets of $N_n$. Thus the set of FCMI's specified by the graph $G$ can be written as
\[ [\, U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1 \,] \tag{12.90} \]
(cf. (12.89)). By Theorem 12.19, it suffices to prove that
\[ \mathrm{Im}([U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1]) = \mathcal{T}_2, \tag{12.91} \]
where $\mathcal{T}_2$ is as defined in Definition 12.24.

We first prove that
\[ \mathcal{T}_2 \subset \mathrm{Im}([U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1]). \tag{12.92} \]
Consider an atom $A \in \mathcal{T}_2$. Then $s(U_A) > 1$. In Definition 12.13, let $T = U_A$, $k = s(U_A)$, and $Q_i = V_i(U_A)$ for $1 \le i \le s(U_A)$. By considering $W_i = V_i(U_A)$ for $1 \le i \le s(U_A)$, we see that $A \in \mathrm{Im}([U_A])$. Therefore,
\[ \mathcal{T}_2 = \{A \in \mathcal{A} : s(U_A) > 1\} \tag{12.93} \]
\[ \subset \bigcup_{A \in \mathcal{A} : s(U_A) > 1} \mathrm{Im}([U_A]) \tag{12.94} \]
\[ = \mathrm{Im}([U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1]). \tag{12.95} \]

We now prove that
\[ \mathrm{Im}([U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1]) \subset \mathcal{T}_2. \tag{12.96} \]
Consider $A \in \mathrm{Im}([U_A : A \in \mathcal{A} \text{ and } s(U_A) > 1])$. Then there exists $A^* \in \mathcal{A}$ with $s(U_{A^*}) > 1$ such that $A \in \mathrm{Im}([U_{A^*}])$. From Definition 12.13,
\[ A = \Big( \bigcap_{j \in \cup_{i=1}^{s(U_{A^*})} W_i} \tilde{X}_j \Big) - \tilde{X}_{U_{A^*} \cup \bigcup_{i=1}^{s(U_{A^*})} (V_i(U_{A^*}) - W_i)}, \tag{12.97} \]
where $W_i \subset V_i(U_{A^*})$, $1 \le i \le s(U_{A^*})$, and there exist at least two $i$ such that $W_i \neq \emptyset$. It follows from (12.97) and the definition of $U_A$ that
\[ U_A = U_{A^*} \cup \bigcup_{i=1}^{s(U_{A^*})} (V_i(U_{A^*}) - W_i). \tag{12.98} \]
With $U_{A^*}$ playing the role of $C$ and $V_i(U_{A^*})$ playing the role of $Q_i$ in Proposition 12.26, we see by applying the proposition that those (at least two) $W_i$ which are nonempty are disconnected in
\[ G \backslash \Big( U_{A^*} \cup \bigcup_{i=1}^{s(U_{A^*})} (V_i(U_{A^*}) - W_i) \Big) = G \backslash U_A. \tag{12.99} \]
This implies $s(U_A) > 1$, i.e., $A \in \mathcal{T}_2$. Therefore, we have proved (12.96), and hence the theorem is proved. □

Example 12.28. With respect to the graph $G$ in Figure 12.5, the Type II atoms are
\[ \tilde{X}_1 \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4, \quad \tilde{X}_1^c \cap \tilde{X}_2 \cap \tilde{X}_3^c \cap \tilde{X}_4, \quad \tilde{X}_1 \cap \tilde{X}_2^c \cap \tilde{X}_3^c \cap \tilde{X}_4, \tag{12.100} \]
while the other twelve nonempty atoms of $\mathcal{F}_4$ are Type I atoms. The random variables $X_1, X_2, X_3$, and $X_4$ form a Markov graph $G$ if and only if $\mu^*(A) = 0$ for all Type II atoms $A$.

Fig. 12.5. The graph G in Example 12.28.

12.4 Markov Chain

When the graph $G$ representing a Markov random field is a chain, the Markov random field becomes a Markov chain. In this section, we will show that the information diagram for a Markov chain can be displayed in two dimensions. We will also show that the I-Measure $\mu^*$ for a Markov chain is always nonnegative. This characteristic of $\mu^*$ facilitates the use of the information diagram, because if $B$ is seen to be a subset of $B'$ in the information diagram, then
\[ \mu^*(B') = \mu^*(B) + \mu^*(B' - B) \ge \mu^*(B). \tag{12.101} \]
These two properties are not possessed by a general Markov random field.

Without loss of generality, we assume that the Markov chain is represented by the graph $G$ in Figure 12.6, namely the path $1 - 2 - \cdots - n$. This corresponds to the Markov chain $X_1 \to X_2 \to \cdots \to X_n$.

Fig. 12.6. The graph G representing the Markov chain X1 → X2 → · · · → Xn.

We first prove the following characterization of a Type I atom for a Markov chain.

Lemma 12.29. For the Markov chain represented by the graph $G$ in Figure 12.6, a nonempty atom $A$ of $\mathcal{F}_n$ is a Type I atom if and only if
\[ N_n \backslash U_A = \{l, l+1, \cdots, u\}, \tag{12.102} \]
where $1 \le l \le u \le n$, i.e., the indices of the set variables in $A$ which are not complemented are consecutive.

Proof. It is easy to see that for a nonempty atom $A$, if (12.102) is satisfied, then $G \backslash U_A$ is connected, i.e., $s(U_A) = 1$. Therefore, $A$ is a Type I atom of $\mathcal{F}_n$. On the other hand, if (12.102) is not satisfied, then $G \backslash U_A$ is not connected, i.e., $s(U_A) > 1$, so $A$ is a Type II atom of $\mathcal{F}_n$. The lemma is proved. □
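Lemma 12.29 reduces the Type I / Type II classification for a chain to a consecutiveness test on the uncomplemented indices. A short sketch of ours (not from the text) applying this test:

```python
# A sketch (illustrative, not from the text) of Lemma 12.29: for the chain
# graph 1 - 2 - ... - n, a nonempty atom A of F_n is Type I exactly when the
# indices in N_n \ U_A (the uncomplemented set variables of A) are consecutive.
from itertools import combinations

def is_type_I(uncomplemented):
    """uncomplemented = N_n \\ U_A; Type I iff these indices are consecutive."""
    idx = sorted(uncomplemented)
    return idx == list(range(idx[0], idx[-1] + 1))

n = 4
type_II = [s for r in range(1, n + 1)
           for s in combinations(range(1, n + 1), r) if not is_type_I(s)]
print(type_II)   # [(1, 3), (1, 4), (2, 4), (1, 2, 4), (1, 3, 4)]
# For the chain 1-2-3-4 there are thus 5 Type II atoms and 10 Type I atoms.
```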
We now show how the information diagram for a Markov chain of any length $n \ge 3$ can be constructed in two dimensions. Since $\mu^*$ vanishes on all the Type II atoms of $\mathcal{F}_n$, it is not necessary to display these atoms in the information diagram. In constructing the information diagram, the regions representing the random variables $X_1, X_2, \cdots, X_n$ should overlap with each other such that the regions corresponding to all the Type II atoms are empty, while the regions corresponding to all the Type I atoms are nonempty. Figure 12.7 shows such a construction.

Fig. 12.7. The information diagram for the Markov chain X1 → X2 → · · · → Xn.

We have already shown that $\mu^*$ is nonnegative for a Markov chain of length 3 or 4. Toward proving that this is true for any length $n \ge 3$, it suffices to show that $\mu^*(A) \ge 0$ for all Type I atoms $A$ of $\mathcal{F}_n$, because $\mu^*(A) = 0$ for all Type II atoms $A$ of $\mathcal{F}_n$. We have seen in Lemma 12.29 that for a Type I atom $A$ of $\mathcal{F}_n$, $U_A$ has the form prescribed in (12.102). Consider any such atom $A$. Then an inspection of the information diagram in Figure 12.7 reveals that
\[ \mu^*(A) = \mu^*( \tilde{X}_l \cap \tilde{X}_{l+1} \cap \cdots \cap \tilde{X}_u - \tilde{X}_{U_A} ) \tag{12.103} \]
\[ = I(X_l; X_u|X_{U_A}) \tag{12.104} \]
\[ \ge 0. \tag{12.105} \]
This shows that $\mu^*$ is always nonnegative. However, since Figure 12.7 involves an indefinite number of random variables, we give a formal proof of this result in the following theorem.

Theorem 12.30. For a Markov chain $X_1 \to X_2 \to \cdots \to X_n$, $\mu^*$ is nonnegative.

Proof. Since $\mu^*(A) = 0$ for all Type II atoms $A$ of $\mathcal{F}_n$, it suffices to show that $\mu^*(A) \ge 0$ for all Type I atoms $A$ of $\mathcal{F}_n$. We have seen in Lemma 12.29 that for a Type I atom $A$ of $\mathcal{F}_n$, $U_A$ has the form prescribed in (12.102). Consider any such atom $A$ and define the set
\[ W = \{l+1, \cdots, u-1\}. \tag{12.106} \]
Then
\[ I(X_l; X_u|X_{U_A}) = \mu^*( \tilde{X}_l \cap \tilde{X}_u - \tilde{X}_{U_A} ) \tag{12.107} \]
\[ = \mu^*\left( \bigcup_{S \subset W} \left( \Big( \tilde{X}_l \cap \bigcap_{t \in S} \tilde{X}_t \Big) \cap \tilde{X}_u - \tilde{X}_{U_A \cup (W \backslash S)} \right) \right) \tag{12.108} \]
\[ = \sum_{S \subset W} \mu^*\left( \Big( \tilde{X}_l \cap \bigcap_{t \in S} \tilde{X}_t \Big) \cap \tilde{X}_u - \tilde{X}_{U_A \cup (W \backslash S)} \right). \tag{12.109} \]
In the above summation, except for the atom corresponding to $S = W$, namely $\tilde{X}_l \cap \tilde{X}_{l+1} \cap \cdots \cap \tilde{X}_u - \tilde{X}_{U_A}$, all the atoms are Type II atoms. Therefore,
\[ I(X_l; X_u|X_{U_A}) = \mu^*( \tilde{X}_l \cap \tilde{X}_{l+1} \cap \cdots \cap \tilde{X}_u - \tilde{X}_{U_A} ). \tag{12.110} \]
Hence,
\[ \mu^*(A) = \mu^*( \tilde{X}_l \cap \tilde{X}_{l+1} \cap \cdots \cap \tilde{X}_u - \tilde{X}_{U_A} ) \tag{12.111} \]
\[ = I(X_l; X_u|X_{U_A}) \tag{12.112} \]
\[ \ge 0. \tag{12.113} \]
The theorem is proved. □

Chapter Summary

In the following, $N_n = \{1, 2, \cdots, n\}$ and $\mathcal{A}$ is the set of all nonempty atoms of $\mathcal{F}_n$.

Full Conditional Mutual Independency (FCMI): For a partition $\{T, Q_i, 1 \le i \le k\}$ of $N_n$, the tuple $(T, Q_i, 1 \le i \le k)$ specifies the following FCMI on $X_1, X_2, \cdots, X_n$: $X_{Q_1}, X_{Q_2}, \cdots, X_{Q_k}$ are mutually independent conditioning on $X_T$.

Image of an FCMI: For an FCMI $K = (T, Q_i, 1 \le i \le k)$ on $X_1, X_2, \cdots, X_n$, $\mathrm{Im}(K) = \big\{ A \in \mathcal{A} : A \subset \bigcup_{1 \le i < j \le k} (\tilde{X}_{Q_i} \cap \tilde{X}_{Q_j} - \tilde{X}_T) \big\}$.

Characterization of an FCMI: An FCMI $K$ on $X_1, X_2, \cdots, X_n$ holds if and only if $\mu^*(A) = 0$ for all $A \in \mathrm{Im}(K)$.

Image of a Set of FCMI's: For a set of FCMI's $\Pi = \{K_l, 1 \le l \le m\}$, $\mathrm{Im}(\Pi) = \bigcup_{l=1}^m \mathrm{Im}(K_l)$.

Characterization of a Set of FCMI's: A set of FCMI's $\Pi$ on $X_1, X_2, \cdots, X_n$ holds if and only if $\mu^*(A) = 0$ for all $A \in \mathrm{Im}(\Pi)$.

Set-Theoretic Characterization of FCMI's: $\Pi_1$ implies $\Pi_2$ if and only if $\mathrm{Im}(\Pi_2) \subset \mathrm{Im}(\Pi_1)$.

Markov Random Field (Markov Graph): Let $G = (V, E)$ be an undirected graph with $V = N_n$, and $X_i$ be a random variable corresponding to vertex $i$. $X_1, X_2, \cdots, X_n$ form a Markov graph $G$ if for all cutsets $U$ in $G$, $X_{V_1(U)}, X_{V_2(U)}, \cdots, X_{V_{s(U)}(U)}$ are mutually independent conditioning on $X_U$, where $V_1(U), V_2(U), \cdots, V_{s(U)}(U)$ are the components of $G \backslash U$.

Type I and Type II Atoms: For an atom $A = \bigcap_{i=1}^n \tilde{Y}_i$ in $\mathcal{A}$, $U_A = \{ i \in N_n : \tilde{Y}_i = \tilde{X}_i^c \}$. For an undirected graph $G = (V, E)$ with $V = N_n$, an atom $A \in \mathcal{A}$ is Type I if $G \backslash U_A$ is connected; otherwise it is Type II.
I-Measure Characterization of a Markov Random Field: $X_1, X_2, \cdots, X_n$ form a Markov graph $G$ if and only if $\mu^*$ vanishes on all the Type II atoms.

I-Measure for a Markov Chain:
1. $\mu^*$ is always nonnegative.
2. The information diagram can be displayed in two dimensions.

Problems

1. Prove Proposition 12.14 and Proposition 12.15.
2. In Example 12.22, it was shown that $\Pi_2$ implies $\Pi_1$. Show that $\Pi_1$ does not imply $\Pi_2$. Hint: Use an information diagram to determine $\mathrm{Im}(\Pi_2) \backslash \mathrm{Im}(\Pi_1)$.
3. Alternative definition of the global Markov property: For any partition $\{U, V_1, V_2\}$ of $V$ such that the sets of vertices $V_1$ and $V_2$ are disconnected in $G \backslash U$, the sets of random variables $X_{V_1}$ and $X_{V_2}$ are independent conditioning on $X_U$. Show that this definition is equivalent to the global Markov property in Definition 12.23.
4. The local Markov property: For $1 \le i \le n$, $X_i$ and $X_{V - N_i - i}$ are independent conditioning on $X_{N_i}$, where $N_i$ is the set of neighbors² of vertex $i$ in $G$.
   a) Show that the global Markov property implies the local Markov property.
   b) Show that the local Markov property does not imply the global Markov property by giving a counterexample. Hint: Consider a joint distribution which is not strictly positive.

² Vertices $i$ and $j$ in an undirected graph are neighbors if $i$ and $j$ are connected by an edge.

5. Construct a Markov random field whose I-Measure $\mu^*$ can take negative values. Hint: Consider a Markov "star."
6. a) Show that $X_1, X_2, X_3$, and $X_4$ are mutually independent if and only if
      $X_1 \perp (X_2, X_3, X_4)$, $X_2 \perp (X_3, X_4)|X_1$, $X_3 \perp X_4|(X_1, X_2)$.
      Hint: Use an information diagram.
   b) Generalize the result in a) to $n$ random variables.
7. Determine the Markov random field with five random variables $X_1, X_2, X_3, X_4$, and $X_5$ which is characterized by the following conditional independencies:
   $(X_1, X_2, X_5) \perp X_4 \,|\, X_3$
   $X_2 \perp (X_4, X_5) \,|\, (X_1, X_3)$
   $X_1 \perp (X_3, X_4) \,|\, (X_2, X_5)$.
   What are the other conditional independencies pertaining to this Markov random field?

Historical Notes

A Markov random field can be regarded as a generalization of a discrete-time Markov chain. Historically, the study of Markov random fields stems from statistical physics. The classical Ising model, which is defined on a rectangular lattice, was used to explain certain empirically observed facts about ferromagnetic materials. The foundation of the theory of Markov random fields can be found in Preston or Spitzer.

The structure of the I-Measure for a Markov chain was first investigated in the unpublished work of Kawabata. Essentially the same result was independently obtained by R. W. Yeung eleven years later in the context of the I-Measure, and the result was eventually published in Kawabata and Yeung. Full conditional independencies were shown to be axiomatizable by Malvestuto. The results in this chapter are due to Yeung et al., who obtained a set-theoretic characterization of full conditional independencies and investigated the structure of the I-Measure for a Markov random field. In the same paper, they also obtained a hypergraph characterization of a Markov random field based on the I-Measure characterization in Theorem 12.25. Ge and Ye have applied these results to characterize a class of graphical models for conditional independence of random variables.

13 Information Inequalities

An information expression $f$ refers to a linear combination¹ of Shannon's information measures involving a finite number of random variables. For example,
\[ H(X, Y) + 2I(X; Z) \tag{13.1} \]
and
\[ I(X; Y) - I(X; Y|Z) \tag{13.2} \]
are information expressions.

¹ More generally, an information expression can be nonlinear, but nonlinear expressions do not appear to be useful in information theory.
An information inequality has the form
\[ f \ge c, \tag{13.3} \]
where the constant $c$ is usually equal to zero. We consider non-strict inequalities only, because these are usually the form of inequalities in information theory. Likewise, an information identity has the form
\[ f = c. \tag{13.4} \]
We point out that an information identity $f = c$ is equivalent to the pair of information inequalities $f \ge c$ and $f \le c$.

An information inequality or identity is said to always hold if it holds for any joint distribution for the random variables involved. For example, we say that the information inequality
\[ I(X; Y) \ge 0 \tag{13.5} \]
always holds because it holds for any joint distribution $p(x, y)$. On the other hand, we say that an information inequality does not always hold if there exists a joint distribution for which the inequality does not hold. Consider the information inequality
\[ I(X; Y) \le 0. \tag{13.6} \]
Since
\[ I(X; Y) \ge 0 \tag{13.7} \]
always holds, (13.6) is equivalent to
\[ I(X; Y) = 0, \tag{13.8} \]
which holds if and only if $X$ and $Y$ are independent. In other words, (13.6) does not hold if $X$ and $Y$ are not independent. Therefore, we say that (13.6) does not always hold.

As we have seen in the previous chapters, information inequalities are the major tools for proving converse coding theorems. These inequalities govern the impossibilities in information theory. More precisely, information inequalities imply that certain things cannot happen. For this reason, they are sometimes referred to as the laws of information theory.

The basic inequalities form the most important set of information inequalities. In fact, almost all the information inequalities known to date are implied by the basic inequalities. These are called Shannon-type inequalities. On the other hand, if an information inequality always holds but is not implied by the basic inequalities, then it is called a non-Shannon-type inequality. We have not yet explained what it means for an inequality to be, or not to be, implied by the basic inequalities, but this will become clear later in the chapter.

Let us now rederive the inequality obtained in Example 3.15 (imperfect secrecy theorem) without using an information diagram. In this example, three random variables $X$, $Y$, and $Z$ are involved, and the setup of the problem is equivalent to the constraint
\[ H(X|Y, Z) = 0. \tag{13.9} \]
Then
\[ I(X; Y) = H(X) + H(Y) - H(X, Y) \tag{13.10} \]
\[ = H(X) + H(Y) - [H(X, Y, Z) - H(Z|X, Y)] \tag{13.11} \]
\[ \ge H(X) + H(Y) - H(X, Y, Z) \tag{13.12} \]
\[ = H(X) + H(Y) - [H(Z) + H(Y|Z) + H(X|Y, Z)] \tag{13.13} \]
\[ = H(X) - H(Z) + I(Y; Z) - H(X|Y, Z) \tag{13.14} \]
\[ \ge H(X) - H(Z), \tag{13.15} \]
where we have used
\[ H(Z|X, Y) \ge 0 \tag{13.16} \]
in obtaining (13.12), and
\[ I(Y; Z) \ge 0 \tag{13.17} \]
and (13.9) in obtaining (13.15). This derivation is less transparent than the one we presented in Example 3.15, but the point here is that the final inequality in (13.15) can be proved by invoking the basic inequalities (13.16) and (13.17). In other words, (13.15) is implied by the basic inequalities. Therefore, it is a (constrained) Shannon-type inequality.
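For concreteness, the following toy check (ours, not from the text) evaluates both sides of (13.15) for a small cipher-like joint distribution in which the constraint $H(X|Y, Z) = 0$ holds by construction:

```python
# Toy check (illustrative, not from the text) of the imperfect secrecy bound
# (13.15): I(X;Y) >= H(X) - H(Z) whenever H(X|Y,Z) = 0. Here X is a uniform
# plaintext bit, Z an independent biased key bit, and Y = X XOR Z the
# ciphertext, so X is a function of (Y, Z).
import numpy as np

def H(p):
    q = np.asarray(p, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

p_z = 0.1                                   # P(Z = 1): a weak (low-entropy) key
p = np.zeros((2, 2, 2))                     # p[x, y, z]
for x in range(2):
    for z in range(2):
        y = x ^ z
        p[x, y, z] = 0.5 * (p_z if z else 1 - p_z)

H_X = H(p.sum(axis=(1, 2)))
H_Z = H(p.sum(axis=(0, 1)))
I_XY = H_X + H(p.sum(axis=(0, 2))) - H(p.sum(axis=2))   # I(X;Y)

print(f"I(X;Y) = {I_XY:.4f}, H(X) - H(Z) = {H_X - H_Z:.4f}")
# With a weak key the ciphertext necessarily leaks: both sides are about 0.531.
```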
We are motivated to ask the following two questions:
1. How can Shannon-type inequalities be characterized? That is, given an information inequality, how can we tell whether it is implied by the basic inequalities?
2. Are there any non-Shannon-type information inequalities?
These are two very fundamental questions in information theory. We point out that the first question naturally comes before the second, because if we cannot characterize all Shannon-type inequalities, then even if we were given a non-Shannon-type inequality, we could not tell that it actually is one.

In this chapter, we develop a geometric framework for information inequalities which enables them to be studied systematically. This framework naturally leads to an answer to the first question, which makes machine-proving of all Shannon-type inequalities possible. This will be discussed in the next chapter. The second question will be answered positively in Chapter 15. In other words, there do exist laws in information theory beyond those laid down by Shannon.

13.1 The Region Γ*_n

Let
\[ N_n = \{1, 2, \cdots, n\}, \tag{13.18} \]
where $n \ge 2$, and let
\[ \Theta = \{X_i, i \in N_n\} \tag{13.19} \]
be any collection of $n$ random variables. Associated with $\Theta$ are
\[ k = 2^n - 1 \tag{13.20} \]
joint entropies. For example, for $n = 3$, the 7 joint entropies associated with random variables $X_1, X_2$, and $X_3$ are
\[ H(X_1), H(X_2), H(X_3), H(X_1, X_2), H(X_2, X_3), H(X_1, X_3), H(X_1, X_2, X_3). \tag{13.21} \]

Let $\Re$ denote the set of real numbers. For any nonempty subset $\alpha$ of $N_n$, let
\[ X_\alpha = (X_i, i \in \alpha) \tag{13.22} \]
and
\[ H_\Theta(\alpha) = H(X_\alpha). \tag{13.23} \]
For a fixed $\Theta$, we can then view $H_\Theta$ as a set function from $2^{N_n}$ to $\Re$ with $H_\Theta(\emptyset) = 0$, i.e., we adopt the convention that the entropy of an empty set of random variables is equal to zero. For this reason, we call $H_\Theta$ the entropy function of $\Theta$.

Let $\mathcal{H}_n$ be the $k$-dimensional Euclidean space with the coordinates labeled by $h_\alpha$, $\alpha \in 2^{N_n} \backslash \{\emptyset\}$, where $h_\alpha$ corresponds to the value of $H_\Theta(\alpha)$ for any collection $\Theta$ of $n$ random variables. We will refer to $\mathcal{H}_n$ as the entropy space for $n$ random variables. Then an entropy function $H_\Theta$ can be represented by a column vector in $\mathcal{H}_n$. On the other hand, a column vector $\mathbf{h} \in \mathcal{H}_n$ is called entropic if $\mathbf{h}$ is equal to the entropy function $H_\Theta$ of some collection $\Theta$ of $n$ random variables. We are motivated to define the following region in $\mathcal{H}_n$:
\[ \Gamma^*_n = \{\mathbf{h} \in \mathcal{H}_n : \mathbf{h} \text{ is entropic}\}. \tag{13.24} \]
For convenience, the vectors in $\Gamma^*_n$ will also be referred to as entropy functions.

As an example, for $n = 3$, the coordinates of $\mathcal{H}_3$ are labeled by
\[ h_1, h_2, h_3, h_{12}, h_{13}, h_{23}, h_{123}, \tag{13.25} \]
where $h_{123}$ denotes $h_{\{1,2,3\}}$, etc., and $\Gamma^*_3$ is the region in $\mathcal{H}_3$ of all entropy functions for 3 random variables.

While further characterizations of $\Gamma^*_n$ will be given later, we first point out a few basic properties of $\Gamma^*_n$:
1. $\Gamma^*_n$ contains the origin.
2. $\overline{\Gamma^*_n}$, the closure of $\Gamma^*_n$, is convex.
3. $\Gamma^*_n$ is in the nonnegative orthant of the entropy space $\mathcal{H}_n$.²

The origin of the entropy space corresponds to the entropy function of $n$ degenerate random variables taking constant values. Hence, Property 1 follows. Property 2 will be proved in Chapter 15. Properties 1 and 2 imply that $\overline{\Gamma^*_n}$ is a convex cone. Property 3 is true because the coordinates in the entropy space $\mathcal{H}_n$ correspond to joint entropies, which are always nonnegative.

² The nonnegative orthant of $\mathcal{H}_n$ is the region $\{\mathbf{h} \in \mathcal{H}_n : h_\alpha \ge 0 \text{ for all } \alpha \in 2^{N_n} \backslash \{\emptyset\}\}$.
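The following small sketch (ours, not from the text) makes the notion of an entropic vector concrete: it maps a joint pmf of $n$ random variables to the vector of all $2^n - 1$ joint entropies, i.e., to a point of $\Gamma^*_n$ in $\mathcal{H}_n$.

```python
# A sketch (illustrative, not from the text) that computes the entropy function
# of a joint pmf, i.e., the 2^n - 1 coordinates h_alpha of a point in Gamma*_n.
import numpy as np
from itertools import combinations

def entropy_vector(p):
    """p: ndarray with one axis per random variable, entries summing to 1."""
    n = p.ndim
    h = {}
    for r in range(1, n + 1):
        for alpha in combinations(range(n), r):
            rest = tuple(i for i in range(n) if i not in alpha)
            marginal = p.sum(axis=rest) if rest else p
            q = marginal.ravel()
            q = q[q > 0]
            h[alpha] = -np.sum(q * np.log2(q))
    return h

# Example: X1 a fair bit, X2 = X1, X3 an independent fair bit.
p = np.zeros((2, 2, 2))
p[0, 0, :] = 0.25
p[1, 1, :] = 0.25
for alpha, val in entropy_vector(p).items():
    print(alpha, round(val, 3))
# e.g. h_1 = h_2 = h_12 = 1 and h_123 = 2, reflecting X2 = X1.
```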
13.2 Information Expressions in Canonical Form

Any Shannon's information measure other than a joint entropy can be expressed as a linear combination of joint entropies by application of one of the following information identities:
\[ H(X|Y) = H(X, Y) - H(Y) \tag{13.26} \]
\[ I(X; Y) = H(X) + H(Y) - H(X, Y) \tag{13.27} \]
\[ I(X; Y|Z) = H(X, Z) + H(Y, Z) - H(X, Y, Z) - H(Z). \tag{13.28} \]
The first and the second identities are special cases of the third identity, which has already been proved in Lemma 3.8. Thus any information expression which involves $n$ random variables can be expressed as a linear combination of the $k$ associated joint entropies. We call this the canonical form of an information expression. When we write an information expression $f$ as $f(\mathbf{h})$, it means that $f$ is in canonical form. Since an information expression in canonical form is a linear combination of the joint entropies, it has the form
\[ \mathbf{b}^\top \mathbf{h}, \tag{13.29} \]
where $\mathbf{b}^\top$ denotes the transpose of a constant column vector $\mathbf{b}$ in $\Re^k$.

The identities in (13.26) to (13.28) provide a way to express every information expression in canonical form. However, it is not clear whether such a canonical form is unique. To illustrate the point, we consider obtaining the canonical form of $H(X|Y)$ in two ways. First,
\[ H(X|Y) = H(X, Y) - H(Y). \tag{13.30} \]
Second,
\[ H(X|Y) = H(X) - I(X; Y) \tag{13.31} \]
\[ = H(X) - (H(Y) - H(Y|X)) \tag{13.32} \]
\[ = H(X) - (H(Y) - H(X, Y) + H(X)) \tag{13.33} \]
\[ = H(X, Y) - H(Y). \tag{13.34} \]
Thus it turns out that we can obtain the same canonical form for $H(X|Y)$ via two different expansions. This is not accidental, as it is implied by the uniqueness of the canonical form, which we will prove shortly.

Recall from the proof of Theorem 3.6 that the vector $\mathbf{h}$ represents the values of the I-Measure $\mu^*$ on the unions in $\mathcal{F}_n$. Moreover, $\mathbf{h}$ is related to the values of $\mu^*$ on the atoms of $\mathcal{F}_n$, represented as $\mathbf{u}$, by
\[ \mathbf{h} = C_n \mathbf{u}, \tag{13.35} \]
where $C_n$ is a unique $k \times k$ matrix (cf. (3.27)). We now state the following lemma, which is a rephrasing of Theorem 3.11. This lemma is essential for proving the next theorem, which implies the uniqueness of the canonical form.

Lemma 13.1. Let
\[ \Psi^*_n = \{\mathbf{u} \in \Re^k : C_n \mathbf{u} \in \Gamma^*_n\}. \tag{13.36} \]
Then the nonnegative orthant of $\Re^k$ is a subset of $\Psi^*_n$.

Theorem 13.2. Let $f$ be an information expression. Then the unconstrained information identity $f = 0$ always holds if and only if $f$ is the zero function.

Proof. Without loss of generality, assume $f$ is in canonical form and let
\[ f(\mathbf{h}) = \mathbf{b}^\top \mathbf{h}. \tag{13.37} \]
Assume $f = 0$ always holds and $f$ is not the zero function, i.e., $\mathbf{b} \neq 0$. We will show that this leads to a contradiction.

First, $f = 0$, or more precisely the set
\[ \{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} = 0\}, \tag{13.38} \]
is a hyperplane³ in the entropy space, which has zero Lebesgue measure⁴.

We claim that $\Gamma^*_n$ is contained in the hyperplane $f = 0$. If this is not true, then there exists $\mathbf{h}_0 \in \Gamma^*_n$ which is not on $f = 0$, i.e., $f(\mathbf{h}_0) \neq 0$. Since $\mathbf{h}_0 \in \Gamma^*_n$, it corresponds to the entropy function of some joint distribution. This means that there exists a joint distribution such that $f(\mathbf{h}) = 0$ does not hold, which contradicts our assumption that $f = 0$ always holds. This proves our claim.

If $\Gamma^*_n$ has positive Lebesgue measure, it cannot be contained in the hyperplane $f = 0$, which has zero Lebesgue measure. Therefore, it suffices to show that $\Gamma^*_n$ has positive Lebesgue measure. To this end, we see from Lemma 13.1 that the nonnegative orthant of $\Re^k$, which has positive Lebesgue measure, is a subset of $\Psi^*_n$. Thus $\Psi^*_n$ has positive Lebesgue measure. Since $\Gamma^*_n$ is an invertible linear transformation of $\Psi^*_n$, its Lebesgue measure is also positive. Therefore, $\Gamma^*_n$ is not contained in the hyperplane $f = 0$, which implies that there exists a joint distribution for which $f = 0$ does not hold. This leads to a contradiction, because we have assumed that $f = 0$ always holds. Hence, we have proved that if $f = 0$ always holds, then $f$ must be the zero function.

Conversely, if $f$ is the zero function, then it is trivial that $f = 0$ always holds. The theorem is proved. □

³ If $\mathbf{b} = 0$, then $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} = 0\}$ is equal to $\mathcal{H}_n$.
⁴ The Lebesgue measure can be thought of as "volume" in the Euclidean space if the reader is not familiar with measure theory.
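The reduction to canonical form is entirely mechanical. The following sketch (ours, not from the text) represents an information expression as a dictionary of coefficients over the joint entropies, using the identity (13.28) in its general form; the function names are illustrative assumptions.

```python
# A sketch (illustrative, not from the text) of reduction to canonical form:
# represent an information expression as coefficients over the 2^n - 1 joint
# entropies h_alpha, using
#   I(X_A; X_B | X_C) = h(A∪C) + h(B∪C) - h(A∪B∪C) - h(C),  h(∅) = 0,
# of which (13.26) and (13.27) are special cases.
from collections import defaultdict

def add_joint_entropy(coeffs, alpha, c):
    if alpha:                                # convention: h(emptyset) = 0
        coeffs[frozenset(alpha)] += c

def add_cond_mutual_info(coeffs, A, B, C=(), c=1):
    """Add c * I(X_A; X_B | X_C) to the coefficient dictionary."""
    A, B, C = set(A), set(B), set(C)
    add_joint_entropy(coeffs, A | C, +c)
    add_joint_entropy(coeffs, B | C, +c)
    add_joint_entropy(coeffs, A | B | C, -c)
    add_joint_entropy(coeffs, C, -c)

# Canonical form of I(X1; X2) - I(X1; X2 | X3), cf. (13.2):
coeffs = defaultdict(float)
add_cond_mutual_info(coeffs, {1}, {2}, c=+1)
add_cond_mutual_info(coeffs, {1}, {2}, {3}, c=-1)
for alpha, c in sorted(coeffs.items(), key=lambda kv: (len(kv[0]), sorted(kv[0]))):
    if c:
        print(sorted(alpha), c)
# Prints the unique canonical coefficients over h_1, h_2, h_3, h_12, ..., h_123.
```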
Corollary 13.3. The canonical form of an information expression is unique.

Proof. Let $f_1$ and $f_2$ be canonical forms of an information expression $g$. Since
\[ g = f_1 \tag{13.39} \]
and
\[ g = f_2 \tag{13.40} \]
always hold,
\[ f_1 - f_2 = 0 \tag{13.41} \]
always holds. By the above theorem, $f_1 - f_2$ is the zero function, which implies that $f_1$ and $f_2$ are identical. The corollary is proved. □

Due to the uniqueness of the canonical form of an information expression, it is an easy matter to check whether, for two information expressions $f_1$ and $f_2$, the unconstrained information identity
\[ f_1 = f_2 \tag{13.42} \]
always holds. All we need to do is to express $f_1 - f_2$ in canonical form. Then (13.42) always holds if and only if all the coefficients are zero.

13.3 A Geometrical Framework

In the last section, we have seen the role of the region $\Gamma^*_n$ in proving unconstrained information identities. In this section, we explain the geometrical meanings of unconstrained information inequalities, constrained information inequalities, and constrained information identities in terms of $\Gamma^*_n$. Without loss of generality, we assume that all information expressions are in canonical form.

13.3.1 Unconstrained Inequalities

Consider an unconstrained information inequality $f \ge 0$, where $f(\mathbf{h}) = \mathbf{b}^\top \mathbf{h}$. Then $f \ge 0$ corresponds to the set
\[ \{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}, \tag{13.43} \]
which is a half-space in the entropy space $\mathcal{H}_n$ containing the origin. Specifically, for any $\mathbf{h} \in \mathcal{H}_n$, $f(\mathbf{h}) \ge 0$ if and only if $\mathbf{h}$ belongs to this set. For simplicity, we will refer to this set as the half-space $f \ge 0$.

As an example, for $n = 2$, the information inequality
\[ I(X_1; X_2) = H(X_1) + H(X_2) - H(X_1, X_2) \ge 0, \tag{13.44} \]
written as
\[ h_1 + h_2 - h_{12} \ge 0, \tag{13.45} \]
corresponds to the half-space
\[ \{\mathbf{h} \in \mathcal{H}_2 : h_1 + h_2 - h_{12} \ge 0\} \tag{13.46} \]
in the entropy space $\mathcal{H}_2$.

Since an information inequality always holds if and only if it is satisfied by the entropy function of every joint distribution for the random variables involved, we have the following geometrical interpretation of an information inequality:

$f \ge 0$ always holds if and only if $\Gamma^*_n \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}$.

This gives a complete characterization of all unconstrained inequalities in terms of $\Gamma^*_n$. If $\Gamma^*_n$ is known, we can in principle determine whether any information inequality involving $n$ random variables always holds.

The two possible cases for $f \ge 0$ are illustrated in Figure 13.1 and Figure 13.2. In Figure 13.1, $\Gamma^*_n$ is completely included in the half-space $f \ge 0$, so $f \ge 0$ always holds. In Figure 13.2, there exists a vector $\mathbf{h}_0 \in \Gamma^*_n$ such that $f(\mathbf{h}_0) < 0$. Thus the inequality $f \ge 0$ does not always hold.

Fig. 13.1. An illustration for the case where f ≥ 0 always holds.
Fig. 13.2. An illustration for the case where f ≥ 0 does not always hold.
13.3.2 Constrained Inequalities

In information theory, we very often deal with information inequalities (identities) with certain constraints on the joint distribution for the random variables involved. These are called constrained information inequalities (identities), and the constraints on the joint distribution can usually be expressed as linear constraints on the entropies. The following are such examples:
1. $X_1, X_2$, and $X_3$ are mutually independent if and only if $H(X_1, X_2, X_3) = H(X_1) + H(X_2) + H(X_3)$.
2. $X_1, X_2$, and $X_3$ are pairwise independent if and only if $I(X_1; X_2) = I(X_2; X_3) = I(X_1; X_3) = 0$.
3. $X_1$ is a function of $X_2$ if and only if $H(X_1|X_2) = 0$.
4. $X_1 \to X_2 \to X_3 \to X_4$ forms a Markov chain if and only if $I(X_1; X_3|X_2) = 0$ and $I(X_1, X_2; X_4|X_3) = 0$.

Suppose there are $q$ linear constraints on the entropies given by
\[ Q\mathbf{h} = 0, \tag{13.47} \]
where $Q$ is a $q \times k$ matrix. Here we do not assume that the $q$ constraints are linearly independent, so $Q$ is not necessarily full rank. Let
\[ \Phi = \{\mathbf{h} \in \mathcal{H}_n : Q\mathbf{h} = 0\}. \tag{13.48} \]
In other words, the $q$ constraints confine $\mathbf{h}$ to a linear subspace $\Phi$ of the entropy space. Parallel to our discussion on unconstrained inequalities, we have the following geometrical interpretation of a constrained inequality:

Under the constraint $\Phi$, $f \ge 0$ always holds if and only if $(\Gamma^*_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}$.

This gives a complete characterization of all constrained inequalities in terms of $\Gamma^*_n$. Note that $\Phi = \mathcal{H}_n$ when there is no constraint on the entropies. In this sense, an unconstrained inequality is a special case of a constrained inequality.

The two cases of $f \ge 0$ under the constraint $\Phi$ are illustrated in Figure 13.3 and Figure 13.4. Figure 13.3 shows the case when $f \ge 0$ always holds under the constraint $\Phi$. Note that $f \ge 0$ may or may not always hold when there is no constraint. Figure 13.4 shows the case when $f \ge 0$ does not always hold under the constraint $\Phi$. In this case, $f \ge 0$ also does not always hold when there is no constraint, because
\[ (\Gamma^*_n \cap \Phi) \not\subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\} \tag{13.49} \]
implies
\[ \Gamma^*_n \not\subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}. \tag{13.50} \]

Fig. 13.3. An illustration for the case where f ≥ 0 always holds under the constraint Φ.
Fig. 13.4. An illustration for the case where f ≥ 0 does not always hold under the constraint Φ.

13.3.3 Constrained Identities

As we have pointed out at the beginning of the chapter, an identity
\[ f = 0 \tag{13.51} \]
always holds if and only if both the inequalities $f \ge 0$ and $f \le 0$ always hold. Then, following our discussion on constrained inequalities, we have:

Under the constraint $\Phi$, $f = 0$ always holds if and only if $(\Gamma^*_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\} \cap \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \le 0\}$,

or, equivalently,

Under the constraint $\Phi$, $f = 0$ always holds if and only if $(\Gamma^*_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) = 0\}$.

This condition says that the intersection of $\Gamma^*_n$ and $\Phi$ is contained in the hyperplane $f = 0$.

13.4 Equivalence of Constrained Inequalities

When there is no constraint on the entropies, two information inequalities
\[ \mathbf{b}^\top \mathbf{h} \ge 0 \tag{13.52} \]
and
\[ \mathbf{c}^\top \mathbf{h} \ge 0 \tag{13.53} \]
are equivalent if and only if $\mathbf{c} = a\mathbf{b}$, where $a$ is a positive constant. However, this is not the case under a non-trivial constraint $\Phi \neq \mathcal{H}_n$. This situation is illustrated in Figure 13.5: although the inequalities in (13.52) and (13.53) correspond to different half-spaces in the entropy space, they actually impose the same constraint on $\mathbf{h}$ when $\mathbf{h}$ is confined to $\Phi$.

Fig. 13.5. Equivalence of b⊤h ≥ 0 and c⊤h ≥ 0 under the constraint Φ.

In this section, we present a characterization of (13.52) and (13.53) being equivalent under a set of linear constraints $\Phi$. The reader may skip this section at the first reading.

Let $r$ be the rank of $Q$ in (13.47). Since $\mathbf{h}$ is in the null space of $Q$, we can write
\[ \mathbf{h} = \tilde{Q}\mathbf{h}', \tag{13.54} \]
where $\tilde{Q}$ is a $k \times (k-r)$ matrix such that the rows of $\tilde{Q}^\top$ form a basis of the orthogonal complement of the row space of $Q$, and $\mathbf{h}'$ is a column $(k-r)$-vector.
Then, using (13.54), (13.52) and (13.53) can be written as
\[ \mathbf{b}^\top \tilde{Q}\mathbf{h}' \ge 0 \tag{13.55} \]
and
\[ \mathbf{c}^\top \tilde{Q}\mathbf{h}' \ge 0, \tag{13.56} \]
respectively, in terms of the basis given by the columns of $\tilde{Q}$. Then (13.55) and (13.56) are equivalent if and only if
\[ \mathbf{c}^\top \tilde{Q} = a\mathbf{b}^\top \tilde{Q}, \tag{13.57} \]
where $a$ is a positive constant, or
\[ (\mathbf{c} - a\mathbf{b})^\top \tilde{Q} = 0. \tag{13.58} \]
In other words, $(\mathbf{c} - a\mathbf{b})^\top$ is in the orthogonal complement of the row space of $\tilde{Q}^\top$, i.e., $(\mathbf{c} - a\mathbf{b})^\top$ is in the row space of $Q$.

Let $Q'$ be any $r \times k$ matrix such that $Q'$ and $Q$ have the same row space. ($Q$ can be taken as $Q'$ if $Q$ is full rank.) Since the rank of $Q$ is $r$ and $Q'$ has $r$ rows, the rows of $Q'$ form a basis for the row space of $Q$, and $Q'$ is full rank. Then from (13.58), (13.55) and (13.56) are equivalent under the constraint $\Phi$ if and only if
\[ \mathbf{c} = a\mathbf{b} + (Q')^\top \mathbf{e} \tag{13.59} \]
for some positive constant $a$ and some column $r$-vector $\mathbf{e}$.

Suppose for given $\mathbf{b}$ and $\mathbf{c}$ we want to see whether (13.55) and (13.56) are equivalent under the constraint $\Phi$. We first consider the case when either $\mathbf{b}^\top$ or $\mathbf{c}^\top$ is in the row space of $Q$. This is actually not an interesting case, because if $\mathbf{b}^\top$, for example, is in the row space of $Q$, then
\[ \mathbf{b}^\top \tilde{Q} = 0 \tag{13.60} \]
in (13.55), which means that (13.55) imposes no additional constraint under the constraint $\Phi$.

Theorem 13.4. If either $\mathbf{b}^\top$ or $\mathbf{c}^\top$ is in the row space of $Q$, then $\mathbf{b}^\top \mathbf{h} \ge 0$ and $\mathbf{c}^\top \mathbf{h} \ge 0$ are equivalent under the constraint $\Phi$ if and only if both $\mathbf{b}^\top$ and $\mathbf{c}^\top$ are in the row space of $Q$.

The proof of this theorem is left as an exercise. We now turn to the more interesting case when neither $\mathbf{b}^\top$ nor $\mathbf{c}^\top$ is in the row space of $Q$. The following theorem gives an explicit condition for (13.55) and (13.56) to be equivalent under the constraint $\Phi$.

Theorem 13.5. If neither $\mathbf{b}^\top$ nor $\mathbf{c}^\top$ is in the row space of $Q$, then $\mathbf{b}^\top \mathbf{h} \ge 0$ and $\mathbf{c}^\top \mathbf{h} \ge 0$ are equivalent under the constraint $\Phi$ if and only if
\[ \big[ (Q')^\top \ \ \mathbf{b} \big] \begin{bmatrix} \mathbf{e} \\ a \end{bmatrix} = \mathbf{c} \tag{13.61} \]
has a unique solution with $a > 0$, where $Q'$ is any full-rank matrix such that $Q'$ and $Q$ have the same row space.

Proof. For $\mathbf{b}^\top$ and $\mathbf{c}^\top$ not in the row space of $Q$, we want to see when we can find unknowns $a$ and $\mathbf{e}$ satisfying (13.59) with $a > 0$. To this end, we write (13.59) in matrix form as (13.61). Since $\mathbf{b}$ is not in the column space of $(Q')^\top$ and $(Q')^\top$ is full rank, $[(Q')^\top \ \mathbf{b}]$ is also full rank. Then (13.61) has either a unique solution or no solution. Therefore, the necessary and sufficient condition for (13.55) and (13.56) to be equivalent is that (13.61) has a unique solution with $a > 0$. The theorem is proved. □

Example 13.6. Consider three random variables $X_1, X_2$, and $X_3$ with the Markov constraint
\[ I(X_1; X_3|X_2) = 0, \tag{13.62} \]
which is equivalent to
\[ H(X_1, X_2) + H(X_2, X_3) - H(X_1, X_2, X_3) - H(X_2) = 0. \tag{13.63} \]
In terms of the coordinates in the entropy space $\mathcal{H}_3$, this constraint is written as
\[ Q\mathbf{h} = 0, \tag{13.64} \]
where
\[ Q = [\,0\ \ {-1}\ \ 0\ \ 1\ \ 1\ \ 0\ \ {-1}\,] \tag{13.65} \]
and
\[ \mathbf{h} = [\,h_1\ h_2\ h_3\ h_{12}\ h_{23}\ h_{13}\ h_{123}\,]^\top. \tag{13.66} \]
We now show that under the constraint in (13.64), the inequalities
\[ H(X_1|X_3) - H(X_1|X_2) \ge 0 \tag{13.67} \]
and
\[ I(X_1; X_2|X_3) \ge 0 \tag{13.68} \]
are in fact equivalent. Toward this end, we write (13.67) and (13.68) as $\mathbf{b}^\top \mathbf{h} \ge 0$ and $\mathbf{c}^\top \mathbf{h} \ge 0$, respectively, where
\[ \mathbf{b} = [\,0\ \ 1\ \ {-1}\ \ {-1}\ \ 0\ \ 1\ \ 0\,]^\top \tag{13.69} \]
and
\[ \mathbf{c} = [\,0\ \ 0\ \ {-1}\ \ 0\ \ 1\ \ 1\ \ {-1}\,]^\top. \tag{13.70} \]
Since $Q$ is full rank, we may take $Q' = Q$. Upon solving
\[ \big[ Q^\top \ \ \mathbf{b} \big] \begin{bmatrix} \mathbf{e} \\ a \end{bmatrix} = \mathbf{c}, \tag{13.71} \]
we obtain the unique solution $a = 1 > 0$ and $\mathbf{e} = 1$ ($\mathbf{e}$ is a $1 \times 1$ matrix). Therefore, (13.67) and (13.68) are equivalent under the constraint in (13.64).
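A numerical companion (ours, not from the text) to Example 13.6: the system (13.71) is solved by least squares and the unique solution $\mathbf{e} = 1$, $a = 1 > 0$ is confirmed.

```python
# Numerical check (illustrative, not from the text) of Example 13.6:
# solve [Q^T b][e; a] = c and confirm the unique solution e = 1, a = 1 > 0.
import numpy as np

# Coordinate order: [h1, h2, h3, h12, h23, h13, h123], as in (13.66).
Q = np.array([[0, -1, 0, 1, 1, 0, -1]], dtype=float)
b = np.array([0, 1, -1, -1, 0, 1, 0], dtype=float)
c = np.array([0, 0, -1, 0, 1, 1, -1], dtype=float)

M = np.column_stack([Q.T, b])                  # the 7 x 2 matrix [Q^T  b]
sol, residual, rank, _ = np.linalg.lstsq(M, c, rcond=None)
e, a = sol
assert rank == 2 and np.allclose(M @ sol, c)   # full rank and consistent
print(f"e = {e:.0f}, a = {a:.0f}")             # e = 1, a = 1
```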
Under the constraint $\Phi$, if neither $\mathbf{b}^\top$ nor $\mathbf{c}^\top$ is in the row space of $Q$, it can be shown that the identities
\[ \mathbf{b}^\top \mathbf{h} = 0 \tag{13.72} \]
and
\[ \mathbf{c}^\top \mathbf{h} = 0 \tag{13.73} \]
are equivalent if and only if (13.61) has a unique solution. We leave the proof as an exercise.

13.5 The Implication Problem of Conditional Independence

We use $X_\alpha \perp X_\beta | X_\gamma$ to denote the conditional independency (CI)

$X_\alpha$ and $X_\beta$ are conditionally independent given $X_\gamma$.

We have proved in Theorem 2.34 that $X_\alpha \perp X_\beta | X_\gamma$ is equivalent to
\[ I(X_\alpha; X_\beta|X_\gamma) = 0. \tag{13.74} \]
When $\gamma = \emptyset$, $X_\alpha \perp X_\beta|X_\gamma$ becomes an unconditional independency, which we regard as a special case of a conditional independency. When $\alpha = \beta$, (13.74) becomes
\[ H(X_\alpha|X_\gamma) = 0, \tag{13.75} \]
which, as we see from Proposition 2.36, means that $X_\alpha$ is a function of $X_\gamma$. For this reason, we also regard functional dependency as a special case of conditional independency.

In probability problems, we are often given a set of CI's and we need to determine whether another given CI is logically implied. This is called the implication problem, which is one of the most basic problems in probability theory. We have seen in Section 12.2 that the implication problem has a solution if only full conditional mutual independencies are involved. However, the general problem is extremely difficult, and it has been solved only up to four random variables.

We end this section by explaining the relation between the implication problem and the region $\Gamma^*_n$. A CI involving random variables $X_1, X_2, \cdots, X_n$ has the form
\[ X_\alpha \perp X_\beta | X_\gamma, \tag{13.76} \]
where $\alpha, \beta, \gamma \subset N_n$. Since $I(X_\alpha; X_\beta|X_\gamma) = 0$ is equivalent to
\[ H(X_{\alpha\cup\gamma}) + H(X_{\beta\cup\gamma}) - H(X_{\alpha\cup\beta\cup\gamma}) - H(X_\gamma) = 0, \tag{13.77} \]
$X_\alpha \perp X_\beta|X_\gamma$ corresponds to the hyperplane
\[ \{\mathbf{h} \in \mathcal{H}_n : h_{\alpha\cup\gamma} + h_{\beta\cup\gamma} - h_{\alpha\cup\beta\cup\gamma} - h_\gamma = 0\}. \tag{13.78} \]
For a CI $K$, we denote the hyperplane in $\mathcal{H}_n$ corresponding to $K$ by $E(K)$. Let $\Pi = \{K_l\}$ be a collection of CI's, and suppose we want to determine whether $\Pi$ implies a given CI $K$. This would be the case if and only if the following is true:

For all $\mathbf{h} \in \Gamma^*_n$, if $\mathbf{h} \in \bigcap_l E(K_l)$, then $\mathbf{h} \in E(K)$.

Equivalently,

$\Pi$ implies $K$ if and only if $\big( \bigcap_l E(K_l) \big) \cap \Gamma^*_n \subset E(K)$.

Therefore, the implication problem can be solved if $\Gamma^*_n$ can be characterized. Hence, the region $\Gamma^*_n$ is not only of fundamental importance in information theory, but is also of fundamental importance in probability theory.

Chapter Summary

Entropy Space: The entropy space $\mathcal{H}_n$ is the $(2^n - 1)$-dimensional Euclidean space with the coordinates labeled by $h_\alpha$, $\alpha \in 2^{N_n} \backslash \{\emptyset\}$, where $N_n = \{1, 2, \cdots, n\}$.

The Region $\Gamma^*_n$ is the subset of $\mathcal{H}_n$ of all entropy functions for $n$ discrete random variables.

Basic Properties of $\Gamma^*_n$:
1. $\Gamma^*_n$ contains the origin.
2. $\overline{\Gamma^*_n}$, the closure of $\Gamma^*_n$, is convex.
3. $\Gamma^*_n$ is in the nonnegative orthant of the entropy space $\mathcal{H}_n$.

Canonical Form of an Information Expression: Any information expression can be expressed as a linear combination of joint entropies, called the canonical form. The canonical form of an information expression is unique.

Unconstrained Information Identities: $\mathbf{b}^\top \mathbf{h} = 0$ always holds if and only if $\mathbf{b} = 0$.

Unconstrained Information Inequalities: $\mathbf{b}^\top \mathbf{h} \ge 0$ always holds if and only if $\Gamma^*_n \subset \{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$.

Constrained Information Inequalities: Under the constraint $\Phi = \{\mathbf{h} \in \mathcal{H}_n : Q\mathbf{h} = 0\}$, $\mathbf{b}^\top \mathbf{h} \ge 0$ always holds if and only if $(\Gamma^*_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$.

Equivalence of Constrained Inequalities (Identities): Under the constraint $\Phi = \{\mathbf{h} \in \mathcal{H}_n : Q\mathbf{h} = 0\}$, $\mathbf{b}^\top \mathbf{h} \ge 0$ and $\mathbf{c}^\top \mathbf{h} \ge 0$ ($\mathbf{b}^\top \mathbf{h} = 0$ and $\mathbf{c}^\top \mathbf{h} = 0$) are equivalent if and only if one of the following holds:
1. Both $\mathbf{b}^\top$ and $\mathbf{c}^\top$ are in the row space of $Q$.
2. Neither $\mathbf{b}^\top$ nor $\mathbf{c}^\top$ is in the row space of $Q$, and $[(Q')^\top \ \mathbf{b}] \big[ \begin{smallmatrix} \mathbf{e} \\ a \end{smallmatrix} \big] = \mathbf{c}$ has a unique solution with $a > 0$ (has a unique solution), where $Q'$ is any full-rank matrix such that $Q'$ and $Q$ have the same row space.
Problems

1. Symmetrical information expressions. An information expression is said to be symmetrical if it is identical under every permutation of the random variables involved. However, sometimes a symmetrical information expression cannot be readily recognized symbolically. For example, $I(X_1; X_2) - I(X_1; X_2|X_3)$ is symmetrical in $X_1, X_2$, and $X_3$, but it is not symmetrical symbolically. Devise a general method for recognizing symmetrical information expressions.
2. The canonical form of an information expression is unique when there is no constraint on the random variables involved. Show by an example that this does not hold when certain constraints are imposed on the random variables involved.
3. Alternative canonical form. Denote $\bigcap_{i \in G} \tilde{X}_i$ by $\check{X}_G$ and let
   \[ \mathcal{C} = \{ \check{X}_G : G \text{ is a nonempty subset of } N_n \}. \]
   a) Prove that a signed measure $\mu$ on $\mathcal{F}_n$ is completely specified by $\{\mu(C), C \in \mathcal{C}\}$, which can be any set of real numbers.
   b) Prove that an information expression involving $X_1, X_2, \cdots, X_n$ can be expressed uniquely as a linear combination of $\mu^*(\check{X}_G)$, where $G$ are nonempty subsets of $N_n$.
4. Uniqueness of the canonical form for nonlinear information expressions. Consider a function $f: \Re^k \to \Re$, where $k = 2^n - 1$, such that $\{\mathbf{h} \in \Re^k : f(\mathbf{h}) = 0\}$ has zero Lebesgue measure.
   a) Prove that $f$ cannot be identically zero on $\Gamma^*_n$.
   b) Use the result in a) to show the uniqueness of the canonical form for the class of information expressions of the form $g(\mathbf{h})$ where $g$ is a polynomial. (Yeung.)
5. Prove that under the constraint $Q\mathbf{h} = 0$, if neither $\mathbf{b}^\top$ nor $\mathbf{c}^\top$ is in the row space of $Q$, the identities $\mathbf{b}^\top \mathbf{h} = 0$ and $\mathbf{c}^\top \mathbf{h} = 0$ are equivalent if and only if (13.61) has a unique solution.

Historical Notes

The uniqueness of the canonical form for linear information expressions was first proved by Han. The same result was independently obtained in the book by Csiszár and Körner. The geometrical framework for information inequalities is due to Yeung. The characterization of equivalent constrained inequalities in Section 13.4 first appeared in the book by Yeung.

14 Shannon-Type Inequalities

The basic inequalities form the most important set of information inequalities. In fact, almost all the information inequalities known to date are implied by the basic inequalities. These are called Shannon-type inequalities. In this chapter, we show that verification of Shannon-type inequalities can be formulated as a linear programming problem, thus enabling machine-proving of all such inequalities.

14.1 The Elemental Inequalities

Consider the conditional mutual information
\[ I(X, Y; X, Z, U \,|\, Z, T), \tag{14.1} \]
in which the random variables $X$ and $Z$ appear more than once. It is readily seen that $I(X, Y; X, Z, U|Z, T)$ can be written as
\[ H(X|Z, T) + I(Y; U|X, Z, T), \tag{14.2} \]
where in both $H(X|Z, T)$ and $I(Y; U|X, Z, T)$, each random variable appears only once.

A Shannon's information measure is said to be reducible if there exists a random variable which appears more than once in the information measure; otherwise, the information measure is said to be irreducible. Without loss of generality, we will consider irreducible Shannon's information measures only, because a reducible Shannon's information measure can always be written as the sum of irreducible Shannon's information measures.
The nonnegativity of all Shannon's information measures forms a set of inequalities called the basic inequalities. The set of basic inequalities, however, is not minimal in the sense that some basic inequalities are implied by the others. For example,
\[ H(X|Y) \ge 0 \tag{14.3} \]
and
\[ I(X; Y) \ge 0, \tag{14.4} \]
which are both basic inequalities involving random variables $X$ and $Y$, imply
\[ H(X) = H(X|Y) + I(X; Y) \ge 0, \tag{14.5} \]
again a basic inequality involving $X$ and $Y$.

Let $N_n = \{1, 2, \cdots, n\}$, where $n \ge 2$. Unless otherwise specified, all information expressions in this chapter involve some or all of the random variables $X_1, X_2, \cdots, X_n$. The value of $n$ will be specified when necessary. Through application of the identities
\[ H(X) = H(X|Y) + I(X; Y) \tag{14.6} \]
\[ H(X, Y) = H(X) + H(Y|X) \tag{14.7} \]
\[ I(X; Y, Z) = I(X; Y) + I(X; Z|Y) \tag{14.8} \]
\[ H(X|Z) = H(X|Y, Z) + I(X; Y|Z) \tag{14.9} \]
\[ H(X, Y|Z) = H(X|Z) + H(Y|X, Z) \tag{14.10} \]
\[ I(X; Y, Z|T) = I(X; Y|T) + I(X; Z|Y, T), \tag{14.11} \]
any Shannon's information measure can be expressed as the sum of Shannon's information measures of the following two elemental forms:
i) $H(X_i \,|\, X_{N_n - \{i\}})$, $i \in N_n$;
ii) $I(X_i; X_j \,|\, X_K)$, where $i \neq j$ and $K \subset N_n - \{i, j\}$.
This will be illustrated in the next example. It is not difficult to check that the total number of the two elemental forms of Shannon's information measures for $n$ random variables is equal to
\[ m = n + \binom{n}{2} 2^{n-2}. \tag{14.12} \]
The proof of (14.12) is left as an exercise.

Example 14.1. We can expand $H(X_1, X_2)$ into a sum of elemental forms of Shannon's information measures for $n = 3$ by applying the identities in (14.6) to (14.11) as follows:
\[ H(X_1, X_2) = H(X_1) + H(X_2|X_1) \tag{14.13} \]
\[ = H(X_1|X_2, X_3) + I(X_1; X_2, X_3) + H(X_2|X_1, X_3) + I(X_2; X_3|X_1) \tag{14.14} \]
\[ = H(X_1|X_2, X_3) + I(X_1; X_2) + I(X_1; X_3|X_2) + H(X_2|X_1, X_3) + I(X_2; X_3|X_1). \tag{14.15} \]

The nonnegativity of the two elemental forms of Shannon's information measures forms a proper subset of the set of basic inequalities. We call the $m$ inequalities in this smaller set the elemental inequalities. They are equivalent to the basic inequalities, because each basic inequality which is not an elemental inequality can be obtained as the sum of a set of elemental inequalities in view of (14.6) to (14.11). This will be illustrated in the next example. The proof of the minimality of the set of elemental inequalities is deferred to Section 14.6.

Example 14.2. In the last example, we expressed $H(X_1, X_2)$ as
\[ H(X_1|X_2, X_3) + I(X_1; X_2) + I(X_1; X_3|X_2) + H(X_2|X_1, X_3) + I(X_2; X_3|X_1). \tag{14.16} \]
All five Shannon's information measures in the above expression are in elemental form for $n = 3$. Then the basic inequality
\[ H(X_1, X_2) \ge 0 \tag{14.17} \]
can be obtained as the sum of the following elemental inequalities:
\[ H(X_1|X_2, X_3) \ge 0 \tag{14.18} \]
\[ I(X_1; X_2) \ge 0 \tag{14.19} \]
\[ I(X_1; X_3|X_2) \ge 0 \tag{14.20} \]
\[ H(X_2|X_1, X_3) \ge 0 \tag{14.21} \]
\[ I(X_2; X_3|X_1) \ge 0. \tag{14.22} \]
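The two elemental forms are easy to enumerate mechanically. The following sketch (ours, not from the text) lists them and verifies the count (14.12) for small $n$:

```python
# A sketch (illustrative, not from the text) enumerating the two elemental
# forms for n random variables and checking m = n + C(n,2) 2^(n-2) in (14.12).
from itertools import combinations
from math import comb

def elemental_forms(n):
    N = set(range(1, n + 1))
    forms = []
    for i in sorted(N):                           # type (i): H(X_i | X_{N - i})
        forms.append(("H", i, tuple(sorted(N - {i}))))
    for i, j in combinations(sorted(N), 2):       # type (ii): I(X_i; X_j | X_K)
        rest = sorted(N - {i, j})
        for r in range(len(rest) + 1):
            for K in combinations(rest, r):
                forms.append(("I", i, j, K))
    return forms

for n in range(2, 6):
    m = len(elemental_forms(n))
    assert m == n + comb(n, 2) * 2 ** (n - 2)
    print(n, m)   # prints: 2 3, 3 9, 4 28, 5 85
```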
14.2 A Linear Programming Approach

Recall from Section 13.2 that any information expression can be expressed uniquely in canonical form, i.e., as a linear combination of the $k = 2^n - 1$ joint entropies involving some or all of the random variables $X_1, X_2, \cdots, X_n$. If the elemental inequalities are expressed in canonical form, they become linear inequalities in the entropy space $\mathcal{H}_n$. Denote this set of inequalities by $G\mathbf{h} \ge 0$, where $G$ is an $m \times k$ matrix, and define
\[ \Gamma_n = \{\mathbf{h} \in \mathcal{H}_n : G\mathbf{h} \ge 0\}. \tag{14.23} \]

We first show that $\Gamma_n$ is a pyramid in the nonnegative orthant of the entropy space $\mathcal{H}_n$. Evidently, $\Gamma_n$ contains the origin. Let $\mathbf{e}_j$, $1 \le j \le k$, be the column $k$-vector whose $j$th component is equal to 1 and all of whose other components are equal to 0. Then the inequality
\[ \mathbf{e}_j^\top \mathbf{h} \ge 0 \tag{14.24} \]
corresponds to the nonnegativity of a joint entropy, which is a basic inequality. Since the set of elemental inequalities is equivalent to the set of basic inequalities, if $\mathbf{h} \in \Gamma_n$, i.e., $\mathbf{h}$ satisfies all the elemental inequalities, then $\mathbf{h}$ also satisfies the basic inequality in (14.24). In other words,
\[ \Gamma_n \subset \{\mathbf{h} \in \mathcal{H}_n : \mathbf{e}_j^\top \mathbf{h} \ge 0\} \tag{14.25} \]
for all $1 \le j \le k$. This implies that $\Gamma_n$ is in the nonnegative orthant of the entropy space. Since $\Gamma_n$ contains the origin and the constraints $G\mathbf{h} \ge 0$ are linear, we conclude that $\Gamma_n$ is a pyramid in the nonnegative orthant of $\mathcal{H}_n$.

Since the elemental inequalities are satisfied by the entropy function of any $n$ random variables $X_1, X_2, \cdots, X_n$, any $\mathbf{h}$ in $\Gamma^*_n$ is also in $\Gamma_n$, i.e.,
\[ \Gamma^*_n \subset \Gamma_n. \tag{14.26} \]
Therefore, for any unconstrained inequality $f \ge 0$, if
\[ \Gamma_n \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}, \tag{14.27} \]
then
\[ \Gamma^*_n \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}, \tag{14.28} \]
i.e., $f \ge 0$ always holds. In other words, (14.27) is a sufficient condition for $f \ge 0$ to always hold. Moreover, an inequality $f \ge 0$ such that (14.27) is satisfied is implied by the basic inequalities, because if $\mathbf{h}$ satisfies the basic inequalities, i.e., $\mathbf{h} \in \Gamma_n$, then $\mathbf{h}$ satisfies $f(\mathbf{h}) \ge 0$.

For constrained inequalities, following our discussion in Section 13.3, we impose the constraint
\[ Q\mathbf{h} = 0 \tag{14.29} \]
and let
\[ \Phi = \{\mathbf{h} \in \mathcal{H}_n : Q\mathbf{h} = 0\}. \tag{14.30} \]
For an inequality $f \ge 0$, if
\[ (\Gamma_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}, \tag{14.31} \]
then by (14.26),
\[ (\Gamma^*_n \cap \Phi) \subset \{\mathbf{h} \in \mathcal{H}_n : f(\mathbf{h}) \ge 0\}, \tag{14.32} \]
i.e., $f \ge 0$ always holds under the constraint $\Phi$. In other words, (14.31) is a sufficient condition for $f \ge 0$ to always hold under the constraint $\Phi$. Moreover, an inequality $f \ge 0$ under the constraint $\Phi$ such that (14.31) is satisfied is implied by the basic inequalities and the constraint $\Phi$, because if $\mathbf{h} \in \Phi$ and $\mathbf{h}$ satisfies the basic inequalities, i.e., $\mathbf{h} \in \Gamma_n \cap \Phi$, then $\mathbf{h}$ satisfies $f(\mathbf{h}) \ge 0$.

14.2.1 Unconstrained Inequalities

To check whether an unconstrained inequality $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality, we need to check whether $\Gamma_n$ is a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$. The following theorem induces a computational procedure for this purpose.

Theorem 14.3. $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality if and only if the minimum of the problem

Minimize $\mathbf{b}^\top \mathbf{h}$, subject to $G\mathbf{h} \ge 0$ (14.33)

is zero. In this case, the minimum occurs at the origin.

Remark. The idea of this theorem is illustrated in Figure 14.1 and Figure 14.2. In Figure 14.1, $\Gamma_n$ is contained in $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$; the minimum of $\mathbf{b}^\top \mathbf{h}$ over $\Gamma_n$ occurs at the origin, with the minimum equal to 0. In Figure 14.2, $\Gamma_n$ is not contained in $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$; the minimum of $\mathbf{b}^\top \mathbf{h}$ over $\Gamma_n$ is $-\infty$. A formal proof of the theorem is given next.

Fig. 14.1. Γn is contained in {h ∈ Hn : b⊤h ≥ 0}.
Fig. 14.2. Γn is not contained in {h ∈ Hn : b⊤h ≥ 0}.

Proof of Theorem 14.3. We have to prove that $\Gamma_n$ is a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$ if and only if the minimum of the problem in (14.33) is zero. First of all, since $0 \in \Gamma_n$ and $\mathbf{b}^\top 0 = 0$ for any $\mathbf{b}$, the minimum of the problem in (14.33) is at most 0.

Assume $\Gamma_n$ is a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$ and the minimum of the problem in (14.33) is negative. Then there exists $\mathbf{h} \in \Gamma_n$ such that
\[ \mathbf{b}^\top \mathbf{h} < 0, \tag{14.34} \]
which implies
\[ \Gamma_n \not\subset \{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}, \tag{14.35} \]
a contradiction. Therefore, if $\Gamma_n$ is a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$, then the minimum of the problem in (14.33) is zero.

To prove the converse, assume $\Gamma_n$ is not a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$, i.e., (14.35) is true. Then there exists $\mathbf{h} \in \Gamma_n$ such that
\[ \mathbf{b}^\top \mathbf{h} < 0. \tag{14.36} \]
This implies that the minimum of the problem in (14.33) is negative, i.e., it is not equal to zero.

Finally, if the minimum of the problem in (14.33) is zero, since $\Gamma_n$ contains the origin and $\mathbf{b}^\top 0 = 0$, the minimum occurs at the origin. □

By virtue of this theorem, to check whether $\mathbf{b}^\top \mathbf{h} \ge 0$ is an unconstrained Shannon-type inequality, all we need to do is to apply the optimality test of the simplex method to check whether the point $\mathbf{h} = 0$ is optimal for the minimization problem in (14.33). Then $\mathbf{b}^\top \mathbf{h} \ge 0$ is an unconstrained Shannon-type inequality if and only if $\mathbf{h} = 0$ is optimal.
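The following sketch (ours, not the ITIP implementation) carries out the test of Theorem 14.3 with an off-the-shelf LP solver for $n = 2$, where the three elemental inequalities can be written down by hand:

```python
# A sketch (illustrative, not from the text) of Theorem 14.3 using scipy:
# b^T h >= 0 is Shannon-type iff min{ b^T h : G h >= 0 } = 0, i.e., iff the
# LP is not unbounded below. Here n = 2 and h = (h1, h2, h12).
import numpy as np
from scipy.optimize import linprog

# Elemental inequalities for n = 2 in canonical form (rows of G):
#   H(X1|X2) = h12 - h2 >= 0, H(X2|X1) = h12 - h1 >= 0,
#   I(X1;X2) = h1 + h2 - h12 >= 0.
G = np.array([[0, -1, 1],
              [-1, 0, 1],
              [1, 1, -1]], dtype=float)

def is_shannon_type(b):
    res = linprog(c=b, A_ub=-G, b_ub=np.zeros(len(G)),
                  bounds=[(None, None)] * G.shape[1], method="highs")
    return res.status == 0 and np.isclose(res.fun, 0.0)

print(is_shannon_type([1, 0, 0]))    # H(X1) >= 0: True
print(is_shannon_type([0, -1, 1]))   # H(X1,X2) - H(X2) >= 0: True
print(is_shannon_type([0, 1, -1]))   # H(X2) - H(X1,X2) >= 0: False (unbounded)
```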
14.2.2 Constrained Inequalities and Identities

To check whether an inequality $\mathbf{b}^\top \mathbf{h} \ge 0$ under the constraint $\Phi$ is a Shannon-type inequality, we need to check whether $\Gamma_n \cap \Phi$ is a subset of $\{\mathbf{h} \in \mathcal{H}_n : \mathbf{b}^\top \mathbf{h} \ge 0\}$.

Theorem 14.4. $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality under the constraint $\Phi$ if and only if the minimum of the problem

Minimize $\mathbf{b}^\top \mathbf{h}$, subject to $G\mathbf{h} \ge 0$ and $Q\mathbf{h} = 0$ (14.37)

is zero. In this case, the minimum occurs at the origin.

The proof of this theorem is similar to that of Theorem 14.3, so it is omitted. By taking advantage of the linear structure of the constraint $\Phi$, we can reformulate the minimization problem in (14.37) as follows. Let $r$ be the rank of $Q$. Since $\mathbf{h}$ is in the null space of $Q$, we can write
\[ \mathbf{h} = \tilde{Q}\mathbf{h}', \tag{14.38} \]
where $\tilde{Q}$ is a $k \times (k-r)$ matrix such that the rows of $\tilde{Q}^\top$ form a basis of the orthogonal complement of the row space of $Q$, and $\mathbf{h}'$ is a column $(k-r)$-vector. Then the elemental inequalities can be expressed as
\[ G\tilde{Q}\mathbf{h}' \ge 0, \tag{14.39} \]
and in terms of $\mathbf{h}'$, $\Gamma_n$ becomes
\[ \Gamma'_n = \{\mathbf{h}' \in \Re^{k-r} : G\tilde{Q}\mathbf{h}' \ge 0\}, \tag{14.40} \]
which is a pyramid in $\Re^{k-r}$ (but not necessarily in the nonnegative orthant). Likewise, $\mathbf{b}^\top \mathbf{h}$ can be expressed as $\mathbf{b}^\top \tilde{Q}\mathbf{h}'$. With all the information expressions in terms of $\mathbf{h}'$, the problem in (14.37) becomes

Minimize $\mathbf{b}^\top \tilde{Q}\mathbf{h}'$, subject to $G\tilde{Q}\mathbf{h}' \ge 0$. (14.41)

Therefore, to check whether $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality under the constraint $\Phi$, all we need to do is to apply the optimality test of the simplex method to check whether the point $\mathbf{h}' = 0$ is optimal for the problem in (14.41). Then $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality under the constraint $\Phi$ if and only if $\mathbf{h}' = 0$ is optimal. By imposing the constraint $\Phi$, the number of elemental inequalities remains the same, while the dimension of the problem decreases from $k$ to $k - r$.

Finally, to verify that $\mathbf{b}^\top \mathbf{h} = 0$ is a Shannon-type identity under the constraint $\Phi$, i.e., that $\mathbf{b}^\top \mathbf{h} = 0$ is implied by the basic inequalities, all we need to do is to verify that both $\mathbf{b}^\top \mathbf{h} \ge 0$ and $\mathbf{b}^\top \mathbf{h} \le 0$ are Shannon-type inequalities under the constraint $\Phi$.
14.3 A Duality

A nonnegative linear combination is a linear combination whose coefficients are all nonnegative. It is clear that a nonnegative linear combination of basic inequalities is a Shannon-type inequality. However, it is not immediately clear that all Shannon-type inequalities are of this form. By applying the duality theorem in linear programming, we will see that this is in fact the case.

The dual of the primal linear programming problem in (14.33) is

Maximize $\mathbf{y}^\top \cdot 0$, subject to $\mathbf{y} \ge 0$ and $\mathbf{y}^\top G \le \mathbf{b}^\top$, (14.42)

where
\[ \mathbf{y} = [\,y_1 \ \cdots \ y_m\,]^\top. \tag{14.43} \]
By the duality theorem, if the minimum of the primal problem is zero, which happens when $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality, then the maximum of the dual problem is also zero. Since the cost function in the dual problem is zero, the maximum of the dual problem is zero if and only if the feasible region
\[ \Psi = \{\mathbf{y} \in \Re^m : \mathbf{y} \ge 0 \text{ and } \mathbf{y}^\top G \le \mathbf{b}^\top\} \tag{14.44} \]
is nonempty.

Theorem 14.5. $\mathbf{b}^\top \mathbf{h} \ge 0$ is a Shannon-type inequality if and only if $\mathbf{b}^\top = \mathbf{x}^\top G$ for some $\mathbf{x} \ge 0$, where $\mathbf{x}$ is a column $m$-vector, i.e., $\mathbf{b}^\top$ is a nonnegative linear combination of the rows of $G$.

Proof. We have to prove that $\Psi$ is nonempty if and only if $\mathbf{b}^\top = \mathbf{x}^\top G$ for some $\mathbf{x} \ge 0$. The feasible region $\Psi$ is nonempty if and only if
\[ \mathbf{b}^\top \ge \mathbf{z}^\top G \tag{14.45} \]
for some $\mathbf{z} \ge 0$, where $\mathbf{z}$ is a column $m$-vector. Consider any $\mathbf{z}$ which satisfies (14.45), and let
\[ \mathbf{s}^\top = \mathbf{b}^\top - \mathbf{z}^\top G \ge 0. \tag{14.46} \]
Denote by $\mathbf{e}_j$ the column $k$-vector whose $j$th component is equal to 1 and all of whose other components are equal to 0, $1 \le j \le k$. Then $\mathbf{e}_j^\top \mathbf{h}$ is a joint entropy. Since every joint entropy can be expressed as the sum of elemental forms of Shannon's information measures, $\mathbf{e}_j^\top$ can be expressed as a nonnegative linear combination of the rows of $G$. Write
\[ \mathbf{s} = [\,s_1\ s_2\ \cdots\ s_k\,]^\top, \tag{14.47} \]
where $s_j \ge 0$ for all $1 \le j \le k$. Then
\[ \mathbf{s}^\top = \sum_{j=1}^k s_j \mathbf{e}_j^\top \tag{14.48} \]
can also be expressed as a nonnegative linear combination of the rows of $G$, i.e.,
\[ \mathbf{s}^\top = \mathbf{w}^\top G \tag{14.49} \]
for some $\mathbf{w} \ge 0$. From (14.46), we see that
\[ \mathbf{b}^\top = (\mathbf{w}^\top + \mathbf{z}^\top)G = \mathbf{x}^\top G, \tag{14.50} \]
where $\mathbf{x} \ge 0$. The proof is accomplished. □

From this theorem, we see that all Shannon-type inequalities are actually trivially implied by the basic inequalities! However, the verification of a Shannon-type inequality requires a computational procedure, as described in the last section.

14.4 Machine Proving – ITIP

Theorems 14.3 and 14.4 transform the problem of verifying a Shannon-type inequality into a linear programming problem. This enables machine-proving of all Shannon-type inequalities. A software package called ITIP¹ has been developed for this purpose. The most updated versions of ITIP can be downloaded from the World Wide Web. Using ITIP is very simple and intuitive. The following examples illustrate the use of ITIP:

1. >> ITIP('H(XYZ) <= H(X) + H(Y) + H(Z)')
   True
2. >> ITIP('I(X;Z) = 0','I(X;Z|Y) = 0','I(X;Y) = 0')
   True
3. >> ITIP('I(Z;U) - I(Z;U|X) - I(Z;U|Y) <= 0.5 I(X;Y) + 0.25 I(X;ZU) + 0.25 I(Y;ZU)')
   Not provable by ITIP

¹ ITIP stands for Information-Theoretic Inequality Prover.

In the first example, we prove an unconstrained inequality. In the second example, we prove that $X$ and $Z$ are independent if $X \to Y \to Z$ forms a Markov chain and $X$ and $Y$ are independent. The first identity is what we want to prove, while the second and the third expressions specify the Markov chain $X \to Y \to Z$ and the independency of $X$ and $Y$, respectively. In the third example, ITIP returns the clause "Not provable by ITIP," which means that the inequality is not a Shannon-type inequality. This, however, does not mean that the inequality to be proved cannot always hold. In fact, this inequality is one of the known non-Shannon-type inequalities, which will be discussed in Chapter 15.

We note that most of the results we have previously obtained by using information diagrams can also be proved by ITIP. However, the advantage of using information diagrams is that one can visualize the structure of the problem. Therefore, the use of information diagrams and ITIP very often complement each other. In the rest of the section, we give a few examples which demonstrate the use of ITIP.

Example 14.6. By Proposition 2.10, the long Markov chain $X \to Y \to Z \to T$ implies the two short Markov chains $X \to Y \to Z$ and $Y \to Z \to T$. We want to see whether the two short Markov chains together also imply the long Markov chain. If so, they are equivalent to each other. Using ITIP, we have

>> ITIP('X/Y/Z/T', 'X/Y/Z', 'Y/Z/T')
Not provable by ITIP

In the above, we have used a macro in ITIP to specify the three Markov chains. The above result from ITIP says that the long Markov chain cannot be proved from the two short Markov chains by means of the basic inequalities. This strongly suggests that the two short Markov chains are weaker than the long Markov chain. However, in order to prove that this is in fact the case, we need an explicit construction of a joint distribution for $X, Y, Z$, and $T$ which satisfies the two short Markov chains but not the long Markov chain.

Toward this end, we resort to the information diagram in Figure 14.3. The Markov chain $X \to Y \to Z$ is equivalent to $I(X; Z|Y) = 0$, i.e.,
\[ \mu^*(\tilde{X} \cap \tilde{Y}^c \cap \tilde{Z} \cap \tilde{T}) + \mu^*(\tilde{X} \cap \tilde{Y}^c \cap \tilde{Z} \cap \tilde{T}^c) = 0. \tag{14.51} \]
Similarly, the Markov chain $Y \to Z \to T$ is equivalent to
\[ \mu^*(\tilde{X} \cap \tilde{Y} \cap \tilde{Z}^c \cap \tilde{T}) + \mu^*(\tilde{X}^c \cap \tilde{Y} \cap \tilde{Z}^c \cap \tilde{T}) = 0. \tag{14.52} \]
The four atoms involved in the constraints (14.51) and (14.52) are marked by a dagger in Figure 14.3.

Fig. 14.3. The information diagram for X, Y, Z, and T in Example 14.6.

In Section 3.5, we have seen that the Markov chain $X \to Y \to Z \to T$ holds if and only if $\mu^*$ takes zero value on the set of atoms in Figure 14.4 which are marked with an asterisk². Comparing Figure 14.3 and Figure 14.4, we see that the only atom marked in Figure 14.4 but not in Figure 14.3 is $\tilde{X} \cap \tilde{Y}^c \cap \tilde{Z}^c \cap \tilde{T}$. Thus if we can construct a $\mu^*$ which takes zero value on all the atoms except for $\tilde{X} \cap \tilde{Y}^c \cap \tilde{Z}^c \cap \tilde{T}$, then the corresponding joint distribution satisfies the two short Markov chains but not the long Markov chain. This would show that the two short Markov chains are in fact weaker than the long Markov chain. Following Theorem 3.11, such a $\mu^*$ can be constructed.

Fig. 14.4. The atoms of F4 on which µ* vanishes when X → Y → Z → T forms a Markov chain.

In fact, the required joint distribution can be obtained by simply letting $X = T = U$, where $U$ is any random variable such that $H(U) > 0$, and letting $Y$ and $Z$ be degenerate random variables taking constant values. Then it is easy to see that $X \to Y \to Z$ and $Y \to Z \to T$ hold, while $X \to Y \to Z \to T$ does not hold.

² This information diagram is essentially a reproduction of Figure 3.8.
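A numerical companion of ours (not from the text) to this construction, using the characterization of a long Markov chain from item 4 in Section 13.3.2:

```python
# Check (illustrative, not from the text) of the counterexample in Example
# 14.6. Take X = T = U (a fair bit) and Y = Z = 0 (constants). Then the short
# chains hold, I(X;Z|Y) = I(Y;T|Z) = 0, but the long chain X -> Y -> Z -> T
# fails because I(X,Y;T|Z) = H(U) = 1 > 0 (cf. Section 13.3.2, item 4).
import numpy as np

def H(p):
    q = np.asarray(p, dtype=float).ravel()
    q = q[q > 0]
    return -np.sum(q * np.log2(q))

# p[x, y, z, t]: X a uniform bit, Y = Z = 0, T = X.
p = np.zeros((2, 1, 1, 2))
p[0, 0, 0, 0] = 0.5
p[1, 0, 0, 1] = 0.5

def I_cond(A, B, C):
    """I(X_A; X_B | X_C) = H(AC) + H(BC) - H(ABC) - H(C); axes as tuples."""
    def Hm(axes):
        rest = tuple(i for i in range(p.ndim) if i not in axes)
        return H(p.sum(axis=rest)) if rest else H(p)
    return Hm(A + C) + Hm(B + C) - Hm(A + B + C) - Hm(C)

print(I_cond((0,), (2,), (1,)))        # I(X;Z|Y) = 0: first short chain holds
print(I_cond((1,), (3,), (2,)))        # I(Y;T|Z) = 0: second short chain holds
print(I_cond((0, 1), (3,), (2,)))      # I(X,Y;T|Z) = 1: long chain fails
```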
14.4 Machine Proving – ITIP

Theorems 14.3 and 14.4 transform the problem of verifying a Shannon-type inequality into a linear programming problem. This enables machine-proving of all Shannon-type inequalities. A software package called ITIP (Information-Theoretic Inequality Prover) has been developed for this purpose. The most updated versions of ITIP can be downloaded from the World Wide Web.

Using ITIP is very simple and intuitive. The following examples illustrate the use of ITIP:

1. >> ITIP('H(XYZ) <= H(X) + H(Y) + H(Z)')
   True
2. >> ITIP('I(X;Z) = 0','I(X;Z|Y) = 0','I(X;Y) = 0')
   True
3. >> ITIP('I(Z;U) - I(Z;U|X) - I(Z;U|Y) <= 0.5 I(X;Y) + 0.25 I(X;ZU) + 0.25 I(Y;ZU)')
   Not provable by ITIP

In the first example, we prove an unconstrained inequality. In the second example, we prove that $X$ and $Z$ are independent if $X \to Y \to Z$ forms a Markov chain and $X$ and $Y$ are independent. The first identity is what we want to prove, while the second and the third expressions specify the Markov chain $X \to Y \to Z$ and the independence of $X$ and $Y$, respectively. In the third example, ITIP returns the clause "Not provable by ITIP," which means that the inequality is not a Shannon-type inequality. This, however, does not mean that the inequality to be proved does not always hold. In fact, this inequality is one of the known non-Shannon-type inequalities which will be discussed in Chapter 15.

We note that most of the results we have previously obtained by using information diagrams can also be proved by ITIP. However, the advantage of using information diagrams is that one can visualize the structure of the problem. Therefore, the use of information diagrams and ITIP very often complement each other. In the rest of the section, we give a few examples which demonstrate the use of ITIP.

Example 14.6. By Proposition 2.10, the long Markov chain $X \to Y \to Z \to T$ implies the two short Markov chains $X \to Y \to Z$ and $Y \to Z \to T$. We want to see whether the two short Markov chains also imply the long Markov chain. If so, they are equivalent to each other. Using ITIP, we have

>> ITIP('X/Y/Z/T', 'X/Y/Z', 'Y/Z/T')
Not provable by ITIP

In the above, we have used a macro in ITIP to specify the three Markov chains. The above result from ITIP says that the long Markov chain cannot be proved from the two short Markov chains by means of the basic inequalities. This strongly suggests that the two short Markov chains are weaker than the long Markov chain. However, in order to prove that this is in fact the case, we need an explicit construction of a joint distribution for $X$, $Y$, $Z$, and $T$ which satisfies the two short Markov chains but not the long Markov chain. Toward this end, we resort to the information diagram in Figure 14.3.

Fig. 14.3. The information diagram for $X$, $Y$, $Z$, and $T$ in Example 14.6.

The Markov chain $X \to Y \to Z$ is equivalent to $I(X;Z|Y) = 0$, i.e.,
$$\mu^*(\tilde X \cap \tilde Y^c \cap \tilde Z \cap \tilde T) + \mu^*(\tilde X \cap \tilde Y^c \cap \tilde Z \cap \tilde T^c) = 0. \quad (14.51)$$
Similarly, the Markov chain $Y \to Z \to T$ is equivalent to
$$\mu^*(\tilde X \cap \tilde Y \cap \tilde Z^c \cap \tilde T) + \mu^*(\tilde X^c \cap \tilde Y \cap \tilde Z^c \cap \tilde T) = 0. \quad (14.52)$$
The four atoms involved in the constraints (14.51) and (14.52) are marked by a dagger in Figure 14.3. In Section 3.5, we have seen that the Markov chain $X \to Y \to Z \to T$ holds if and only if $\mu^*$ takes zero value on the set of atoms in Figure 14.4 which are marked with an asterisk. (Figure 14.4 is essentially a reproduction of Figure 3.8.)

Fig. 14.4. The atoms of $\mathcal F_4$ on which $\mu^*$ vanishes when $X \to Y \to Z \to T$ forms a Markov chain.

Comparing Figure 14.3 and Figure 14.4, we see that the only atom marked in Figure 14.4 but not in Figure 14.3 is $\tilde X \cap \tilde Y^c \cap \tilde Z^c \cap \tilde T$. Thus if we can construct a $\mu^*$ which takes zero value on all the atoms except for $\tilde X \cap \tilde Y^c \cap \tilde Z^c \cap \tilde T$, then the corresponding joint distribution satisfies the two short Markov chains but not the long Markov chain. This would show that the two short Markov chains are in fact weaker than the long Markov chain. Following Theorem 3.11, such a $\mu^*$ can be constructed.

In fact, the required joint distribution can be obtained by simply letting $X = T = U$, where $U$ is any random variable such that $H(U) > 0$, and letting $Y$ and $Z$ be degenerate random variables taking constant values. Then it is easy to see that $X \to Y \to Z$ and $Y \to Z \to T$ hold, while $X \to Y \to Z \to T$ does not hold.

Example 14.7. The data processing theorem says that if $X \to Y \to Z \to T$ forms a Markov chain, then
$$I(Y;Z) \ge I(X;T). \quad (14.53)$$
We want to see whether this inequality holds under the weaker condition that $X \to Y \to Z$ and $Y \to Z \to T$ form two short Markov chains. By using ITIP, we can show that (14.53) is not a Shannon-type inequality under the Markov conditions
$$I(X;Z|Y) = 0 \quad (14.54)$$
and
$$I(Y;T|Z) = 0. \quad (14.55)$$
This strongly suggests that (14.53) does not always hold under the constraint of the two short Markov chains. However, this has to be proved by an explicit construction of a joint distribution for $X$, $Y$, $Z$, and $T$ which satisfies (14.54) and (14.55) but not (14.53). The construction at the end of the last example serves this purpose.
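The construction in Examples 14.6 and 14.7 is easy to check numerically. The sketch below is our own illustration (with $U$ taken to be a fair bit); it computes the relevant information measures directly from the joint distribution of $(X, Y, Z, T)$.

```python
import numpy as np

# X = T = U (a fair bit), Y = Z = 0 (degenerate): two equally likely
# outcomes of (X, Y, Z, T).
pmf = {(0, 0, 0, 0): 0.5, (1, 0, 0, 1): 0.5}

def H(axes):
    """Joint entropy in bits of the coordinates listed in `axes`."""
    m = {}
    for outcome, prob in pmf.items():
        key = tuple(outcome[a] for a in axes)
        m[key] = m.get(key, 0.0) + prob
    p = np.array([v for v in m.values() if v > 0])
    return float(-(p * np.log2(p)).sum())

# I(X;Z|Y) = H(X,Y) + H(Y,Z) - H(X,Y,Z) - H(Y): the chain X->Y->Z holds.
print(H([0, 1]) + H([1, 2]) - H([0, 1, 2]) - H([1]))   # 0.0
# I(Y;T|Z) = H(Y,Z) + H(Z,T) - H(Y,Z,T) - H(Z): the chain Y->Z->T holds.
print(H([1, 2]) + H([2, 3]) - H([1, 2, 3]) - H([2]))   # 0.0
# But I(X;T) = 1 bit while I(Y;Z) = 0, so the long chain X->Y->Z->T
# fails, and so does the data processing bound (14.53).
print(H([0]) + H([3]) - H([0, 3]))                     # 1.0
print(H([1]) + H([2]) - H([1, 2]))                     # 0.0
```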
Example 14.8 (Secret Sharing). Let $S$ be a secret to be encoded into three pieces, $X$, $Y$, and $Z$. We need to design a scheme that satisfies the following two requirements:

1. No information about $S$ can be obtained from any one of the three encoded pieces.
2. $S$ can be recovered from any two of the three encoded pieces.

This is called a (1,2)-threshold secret sharing scheme. The first requirement of the scheme is equivalent to the constraints
$$I(S;X) = I(S;Y) = I(S;Z) = 0, \quad (14.56)$$
while the second requirement is equivalent to the constraints
$$H(S|X,Y) = H(S|Y,Z) = H(S|X,Z) = 0. \quad (14.57)$$
Since the secret $S$ can be recovered if all $X$, $Y$, and $Z$ are known,
$$H(X) + H(Y) + H(Z) \ge H(S). \quad (14.58)$$
We are naturally interested in the maximum constant $c$ that satisfies
$$H(X) + H(Y) + H(Z) \ge cH(S). \quad (14.59)$$
We can explore the possible values of $c$ by ITIP. After a few trials, we find that ITIP returns a "True" for all $c \le 3$, and returns the clause "Not provable by ITIP" for any $c$ slightly larger than 3, say 3.0001. This means that the maximum value of $c$ is lower bounded by 3. This lower bound is in fact tight, as we can see from the following construction. Let $S$ and $N$ be mutually independent ternary random variables uniformly distributed on $\{0,1,2\}$, and define
$$X = N, \quad (14.60)$$
$$Y = S + N \bmod 3, \quad (14.61)$$
and
$$Z = S + 2N \bmod 3. \quad (14.62)$$
Then it is easy to verify that
$$S = Y - X \bmod 3 \quad (14.63)$$
$$= 2Y - Z \bmod 3 \quad (14.64)$$
$$= Z - 2X \bmod 3. \quad (14.65)$$
Thus the requirements in (14.57) are satisfied. It is also readily verified that the requirements in (14.56) are satisfied. Finally, all $S$, $X$, $Y$, and $Z$ are distributed uniformly on $\{0,1,2\}$. Therefore,
$$H(X) + H(Y) + H(Z) = 3H(S). \quad (14.66)$$
This proves that the maximum constant $c$ which satisfies (14.59) is 3. Using the approach in this example, almost all information-theoretic bounds reported in the literature for this class of problems can be obtained when a definite number of random variables are involved.
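The mod-3 scheme in (14.60) through (14.62) can be verified by brute-force enumeration. Here is a minimal sketch (our own check, not from the text) that builds the joint distribution of $(S, X, Y, Z)$ and tests the requirements:

```python
import itertools
import numpy as np

# Enumerate the 9 equally likely (s, n) pairs of the mod-3 scheme:
# X = N, Y = S + N mod 3, Z = S + 2N mod 3.
pmf = {}
for s, n in itertools.product(range(3), repeat=2):
    key = (s, n, (s + n) % 3, (s + 2 * n) % 3)   # (S, X, Y, Z)
    pmf[key] = pmf.get(key, 0.0) + 1 / 9

def H(axes):
    m = {}
    for outcome, prob in pmf.items():
        key = tuple(outcome[a] for a in axes)
        m[key] = m.get(key, 0.0) + prob
    p = np.array(list(m.values()))
    return float(-(p * np.log2(p)).sum())

S, X, Y, Z = 0, 1, 2, 3
# (14.56): each single share reveals nothing, e.g. I(S;X) = 0.
print(all(abs(H([S]) + H([v]) - H([S, v])) < 1e-12 for v in (X, Y, Z)))
# (14.57): any two shares recover S, e.g. H(S|X,Y) = H(S,X,Y) - H(X,Y) = 0.
pairs = [(X, Y), (Y, Z), (X, Z)]
print(all(abs(H([S, a, b]) - H([a, b])) < 1e-12 for a, b in pairs))
# (14.66): H(X) + H(Y) + H(Z) = 3 H(S), so c = 3 is achieved exactly.
print(abs(H([X]) + H([Y]) + H([Z]) - 3 * H([S])) < 1e-12)
```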
14.5 Tackling the Implication Problem

We have already mentioned in Section 13.5 that the implication problem of conditional independence is extremely difficult except for the special case that only full conditional mutual independencies are involved. In this section, we employ the tools we have developed in this chapter to tackle this problem.

In Bayesian networks, the following four axioms are often used for proving implications of conditional independencies:

Symmetry:
$$X \perp Y|Z \Leftrightarrow Y \perp X|Z \quad (14.67)$$
Decomposition:
$$X \perp (Y,T)|Z \Rightarrow X \perp Y|Z \ \wedge\ X \perp T|Z \quad (14.68)$$
Weak Union:
$$X \perp (Y,T)|Z \Rightarrow X \perp Y|(Z,T) \quad (14.69)$$
Contraction:
$$X \perp Y|Z \ \wedge\ X \perp T|(Y,Z) \Rightarrow X \perp (Y,T)|Z. \quad (14.70)$$

These axioms form a system called semi-graphoid and were first proposed as heuristic properties of conditional independence. The axiom of symmetry is trivial in the context of probability (though these four axioms may be applied beyond the context of probability). The other three axioms can be summarized by
$$X \perp (Y,T)|Z \Leftrightarrow X \perp Y|Z \ \wedge\ X \perp T|(Y,Z). \quad (14.71)$$
This can easily be proved as follows. Consider the identity
$$I(X;Y,T|Z) = I(X;Y|Z) + I(X;T|Y,Z). \quad (14.72)$$
Since conditional mutual informations are always nonnegative by the basic inequalities, if $I(X;Y,T|Z)$ vanishes, then $I(X;Y|Z)$ and $I(X;T|Y,Z)$ also vanish, and vice versa. This proves (14.71). In other words, (14.71) is the result of a specific application of the basic inequalities. Therefore, any implication which can be proved by invoking these four axioms is provable by ITIP.

In fact, ITIP is considerably more powerful than the above four axioms. This will be shown in the next example, in which we give an implication which can be proved by ITIP but not by these four axioms. (This example is due to Zhen Zhang, private communication.) We will see some implications which cannot be proved by ITIP when we discuss non-Shannon-type inequalities in the next chapter.

Fig. 14.5. The information diagram for $X$, $Y$, $Z$, and $T$.

For a number of years, researchers in Bayesian networks generally believed that the semi-graphoidal axioms form a complete set of axioms for conditional independence, until this was refuted by Studený. See Problem 10 for a discussion.

Example 14.9. We will show that
$$\left.\begin{array}{l} I(X;Y|Z) = 0\\ I(X;T|Z) = 0\\ I(X;T|Y) = 0\\ I(X;Z|Y) = 0\\ I(X;Z|T) = 0 \end{array}\right\} \Rightarrow I(X;Y|T) = 0 \quad (14.73)$$
can be proved by invoking the basic inequalities. First, we write
$$I(X;Y|Z) = I(X;Y|Z,T) + I(X;Y;T|Z). \quad (14.74)$$
Since $I(X;Y|Z) = 0$ and $I(X;Y|Z,T) \ge 0$, we let
$$I(X;Y|Z,T) = a \quad (14.75)$$
for some nonnegative real number $a$, so that
$$I(X;Y;T|Z) = -a \quad (14.76)$$
from (14.74). In the information diagram in Figure 14.5, we mark the atom $I(X;Y|Z,T)$ by a "+" and the atom $I(X;Y;T|Z)$ by a "$-$". Then we write
$$I(X;T|Z) = I(X;Y;T|Z) + I(X;T|Y,Z). \quad (14.77)$$
Since $I(X;T|Z) = 0$ and $I(X;Y;T|Z) = -a$, we obtain
$$I(X;T|Y,Z) = a. \quad (14.78)$$
In the information diagram, we mark the atom $I(X;T|Y,Z)$ with a "+". Continuing in this fashion, the five CI's on the left hand side of (14.73) imply that all the atoms marked with a "+" in the information diagram take the value $a$, while all the atoms marked with a "$-$" take the value $-a$. From the information diagram, we see that
$$I(X;Y|T) = I(X;Y;Z|T) + I(X;Y|Z,T) = (-a) + a = 0, \quad (14.79)$$
which proves our claim. Since we base our proof on the basic inequalities, this implication can also be proved by ITIP. Due to the form of the five given CI's in (14.73), none of the axioms in (14.68) to (14.70) can be applied. Thus we conclude that the implication in (14.73) cannot be proved by invoking the four axioms in (14.67) to (14.70).
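The implication (14.73) can also be confirmed with the linear program of Section 14.2. The sketch below is again our own construction, reusing the hypothetical elemental_matrix and is_shannon_type helpers from that section: it encodes the five constraints as $Qh = 0$ and checks that $-I(X;Y|T) \ge 0$ is Shannon-type under them; together with the basic inequality $I(X;Y|T) \ge 0$, this gives the identity $I(X;Y|T) = 0$.

```python
import numpy as np

def info(i, j, K=(), n=4):
    """Coefficient vector of I(X_i; X_j | X_K) in H_n (bitmask coordinates)."""
    row = np.zeros(2 ** n - 1)
    Km = sum(1 << t for t in K)
    for mask, sign in ((Km | 1 << i, 1), (Km | 1 << j, 1),
                       (Km | 1 << i | 1 << j, -1), (Km, -1)):
        if mask:
            row[mask - 1] += sign
    return row

X, Y, Z, T = 0, 1, 2, 3
Q = np.array([info(X, Y, (Z,)), info(X, T, (Z,)), info(X, T, (Y,)),
              info(X, Z, (Y,)), info(X, Z, (T,))])   # the five given CI's
print(is_shannon_type(-info(X, Y, (T,)), 4, Q=Q))    # True: I(X;Y|T) = 0 follows
```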
14.6 Minimality of the Elemental Inequalities

We have already seen in Section 14.1 that the set of basic inequalities is not minimal in the sense that in the set, some inequalities are implied by the others. We then showed that the set of basic inequalities is equivalent to the smaller set of elemental inequalities. Again, we can ask whether the set of elemental inequalities is minimal. In this section, we prove that the set of elemental inequalities is in fact minimal. This result is important for efficient implementation of ITIP because it says that we cannot consider a smaller set of inequalities. The proof, however, is rather technical. The reader may skip this proof without missing the essence of this chapter.

The elemental inequalities in set-theoretic notation have one of the following two forms:

1. $\mu(\tilde X_i - \tilde X_{\mathcal N_n - \{i\}}) \ge 0$;
2. $\mu(\tilde X_i \cap \tilde X_j - \tilde X_K) \ge 0$, where $i \ne j$ and $K \subset \mathcal N_n - \{i,j\}$,

where $\mu$ denotes a set-additive function defined on $\mathcal F_n$. They will be referred to as $\alpha$-inequalities and $\beta$-inequalities, respectively. We are to show that all the elemental inequalities are nonredundant, i.e., none of them is implied by the others.

For an $\alpha$-inequality
$$\mu(\tilde X_i - \tilde X_{\mathcal N_n - \{i\}}) \ge 0, \quad (14.80)$$
since it is the only elemental inequality which involves the atom $\tilde X_i - \tilde X_{\mathcal N_n - \{i\}}$, it is clearly not implied by the other elemental inequalities. Therefore we only need to show that all $\beta$-inequalities are nonredundant. To show that a $\beta$-inequality is nonredundant, it suffices to show that there exists a measure $\hat\mu$ on $\mathcal F_n$ which satisfies all other elemental inequalities except for that $\beta$-inequality. We will show that the $\beta$-inequality
$$\mu(\tilde X_i \cap \tilde X_j - \tilde X_K) \ge 0 \quad (14.81)$$
is nonredundant.

To facilitate our discussion, we denote $\mathcal N_n - K - \{i,j\}$ by $L(i,j,K)$, and we let $C_{ij|K}(S)$, $S \subset L(i,j,K)$, be the atoms in $\tilde X_i \cap \tilde X_j - \tilde X_K$, where
$$C_{ij|K}(S) = \tilde X_i \cap \tilde X_j \cap \tilde X_S \cap \tilde X_K^c \cap \tilde X_{L(i,j,K)-S}^c. \quad (14.82)$$

We first consider the case when $L(i,j,K) = \emptyset$, i.e., $K = \mathcal N_n - \{i,j\}$. We construct a measure $\hat\mu$ by
$$\hat\mu(A) = \begin{cases} -1 & \text{if } A = \tilde X_i \cap \tilde X_j - \tilde X_K\\ 1 & \text{otherwise,} \end{cases} \quad (14.83)$$
where $A \in \mathcal A$. In other words, $\tilde X_i \cap \tilde X_j - \tilde X_K$ is the only atom with measure $-1$; all other atoms have measure 1. Then $\hat\mu(\tilde X_i \cap \tilde X_j - \tilde X_K) < 0$ is trivially true. It is also trivial to check that for any $i' \in \mathcal N_n$,
$$\hat\mu(\tilde X_{i'} - \tilde X_{\mathcal N_n - \{i'\}}) = 1 \ge 0, \quad (14.84)$$
and for any $(i',j',K') \ne (i,j,K)$ such that $i' \ne j'$ and $K' \subset \mathcal N_n - \{i',j'\}$,
$$\hat\mu(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) = 1 \ge 0 \quad (14.85)$$
if $K' = \mathcal N_n - \{i',j'\}$. On the other hand, if $K'$ is a proper subset of $\mathcal N_n - \{i',j'\}$, then $\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}$ contains at least two atoms, and therefore
$$\hat\mu(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \ge 0. \quad (14.86)$$
This completes the proof for the $\beta$-inequality in (14.81) to be nonredundant when $L(i,j,K) = \emptyset$.

We now consider the case when $L(i,j,K) \ne \emptyset$, i.e., $|L(i,j,K)| \ge 1$. We construct a measure $\hat\mu$ as follows. For the atoms in $\tilde X_i \cap \tilde X_j - \tilde X_K$, let
$$\hat\mu(C_{ij|K}(S)) = \begin{cases} (-1)^{|S|} - 1 & S = L(i,j,K)\\ (-1)^{|S|} & S \ne L(i,j,K). \end{cases} \quad (14.87)$$
For $C_{ij|K}(S)$, if $|S|$ is odd, it is referred to as an odd atom of $\tilde X_i \cap \tilde X_j - \tilde X_K$, and if $|S|$ is even, it is referred to as an even atom of $\tilde X_i \cap \tilde X_j - \tilde X_K$. For any atom $A \notin \tilde X_i \cap \tilde X_j - \tilde X_K$, we let
$$\hat\mu(A) = 1. \quad (14.88)$$
This completes the construction of $\hat\mu$.

We first prove that
$$\hat\mu(\tilde X_i \cap \tilde X_j - \tilde X_K) < 0. \quad (14.89)$$
Consider
$$\hat\mu(\tilde X_i \cap \tilde X_j - \tilde X_K) = \sum_{S \subset L(i,j,K)} \hat\mu(C_{ij|K}(S)) = \left[\sum_{r=0}^{|L(i,j,K)|} \binom{|L(i,j,K)|}{r}(-1)^r\right] - 1 = -1,$$
where the last equality follows from the binomial formula
$$\sum_{r=0}^n \binom{n}{r}(-1)^r = 0 \quad (14.90)$$
for $n \ge 1$. This proves (14.89).

Next we prove that $\hat\mu$ satisfies all $\alpha$-inequalities. We note that for any $i' \in \mathcal N_n$, the atom $\tilde X_{i'} - \tilde X_{\mathcal N_n - \{i'\}}$ is not in $\tilde X_i \cap \tilde X_j - \tilde X_K$. Thus
$$\hat\mu(\tilde X_{i'} - \tilde X_{\mathcal N_n - \{i'\}}) = 1 \ge 0. \quad (14.91)$$

It remains to prove that $\hat\mu$ satisfies all $\beta$-inequalities except for (14.81), i.e., for any $(i',j',K') \ne (i,j,K)$ such that $i' \ne j'$ and $K' \subset \mathcal N_n - \{i',j'\}$,
$$\hat\mu(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \ge 0. \quad (14.92)$$
Consider
$$\hat\mu(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) = \hat\mu((\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \cap (\tilde X_i \cap \tilde X_j - \tilde X_K)) + \hat\mu((\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) - (\tilde X_i \cap \tilde X_j - \tilde X_K)). \quad (14.93)$$
The nonnegativity of the second term above follows from (14.88). For the first term,
$$(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \cap (\tilde X_i \cap \tilde X_j - \tilde X_K) \quad (14.94)$$
is nonempty if and only if
$$\{i',j'\} \cap K = \emptyset \quad\text{and}\quad \{i,j\} \cap K' = \emptyset. \quad (14.95)$$
If this condition is not satisfied, then the first term in (14.93) becomes $\hat\mu(\emptyset) = 0$, and (14.92) follows immediately.

Let us assume that the condition in (14.95) is satisfied. Then by simple counting, we see that the number of atoms in
$$(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \cap (\tilde X_i \cap \tilde X_j - \tilde X_K) \quad (14.96)$$
is equal to $2^\varphi$, where
$$\varphi = n - |\{i,j\} \cup \{i',j'\} \cup K \cup K'|. \quad (14.97)$$
For example, for $n = 6$, there are $4 = 2^2$ atoms in
$$(\tilde X_1 \cap \tilde X_2) \cap (\tilde X_1 \cap \tilde X_3 - \tilde X_4), \quad (14.98)$$
namely $\tilde X_1 \cap \tilde X_2 \cap \tilde X_3 \cap \tilde X_4^c \cap Y_5 \cap Y_6$, where $Y_i = \tilde X_i$ or $\tilde X_i^c$ for $i = 5, 6$. We check that
$$\varphi = 6 - |\{1,2\} \cup \{1,3\} \cup \emptyset \cup \{4\}| = 2. \quad (14.99)$$

We first consider the case when $\varphi = 0$, i.e.,
$$\mathcal N_n = \{i,j\} \cup \{i',j'\} \cup K \cup K'. \quad (14.100)$$
Then
$$(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \cap (\tilde X_i \cap \tilde X_j - \tilde X_K) \quad (14.101)$$
contains exactly one atom. If this atom is an even atom of $\tilde X_i \cap \tilde X_j - \tilde X_K$, then the first term in (14.93) is either 0 or 1 (cf. (14.87)), and (14.92) follows immediately.
If this atom is an odd atom of $\tilde X_i \cap \tilde X_j - \tilde X_K$, then the first term in (14.93) is equal to $-1$. This happens if and only if $\{i,j\}$ and $\{i',j'\}$ have one common element, which implies that $(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) - (\tilde X_i \cap \tilde X_j - \tilde X_K)$ is nonempty. Therefore the second term in (14.93) is at least 1, and hence (14.92) follows.

Finally, we consider the case when $\varphi \ge 1$. Using the binomial formula in (14.90), we see that the numbers of odd atoms and even atoms of $\tilde X_i \cap \tilde X_j - \tilde X_K$ in
$$(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) \cap (\tilde X_i \cap \tilde X_j - \tilde X_K) \quad (14.102)$$
are the same. Therefore the first term in (14.93) is equal to $-1$ if
$$C_{ij|K}(L(i,j,K)) \in \tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}, \quad (14.103)$$
and is equal to 0 otherwise. The former is true if and only if $K' \subset K$, which implies that $(\tilde X_{i'} \cap \tilde X_{j'} - \tilde X_{K'}) - (\tilde X_i \cap \tilde X_j - \tilde X_K)$ is nonempty, or that the second term is at least 1. Thus in either case (14.92) is true. This completes the proof that (14.81) is nonredundant.

Appendix 14.A: The Basic Inequalities and the Polymatroidal Axioms

In this appendix, we show that the basic inequalities for a collection of $n$ random variables $\Theta = \{X_i, i \in \mathcal N_n\}$ are equivalent to the following polymatroidal axioms: For all $\alpha, \beta \subset \mathcal N_n$,

P1. $H_\Theta(\emptyset) = 0$.
P2. $H_\Theta(\alpha) \le H_\Theta(\beta)$ if $\alpha \subset \beta$.
P3. $H_\Theta(\alpha) + H_\Theta(\beta) \ge H_\Theta(\alpha \cap \beta) + H_\Theta(\alpha \cup \beta)$.

We first show that the polymatroidal axioms imply the basic inequalities. From P1 and P2, since $\emptyset \subset \alpha$ for any $\alpha \subset \mathcal N_n$, we have
$$H_\Theta(\alpha) \ge H_\Theta(\emptyset) = 0, \quad (14.104)$$
or
$$H(X_\alpha) \ge 0. \quad (14.105)$$
This shows that entropy is nonnegative. In P2, letting $\gamma = \beta \setminus \alpha$, we have
$$H_\Theta(\alpha) \le H_\Theta(\alpha \cup \gamma), \quad (14.106)$$
or
$$H(X_\gamma|X_\alpha) \ge 0. \quad (14.107)$$
Here, $\gamma$ and $\alpha$ are disjoint subsets of $\mathcal N_n$. In P3, letting $\gamma = \beta \setminus \alpha$, $\delta = \alpha \cap \beta$, and $\sigma = \alpha \setminus \beta$, we have
$$H_\Theta(\sigma \cup \delta) + H_\Theta(\gamma \cup \delta) \ge H_\Theta(\delta) + H_\Theta(\sigma \cup \delta \cup \gamma), \quad (14.108)$$
or
$$I(X_\sigma; X_\gamma|X_\delta) \ge 0. \quad (14.109)$$
Again, $\sigma$, $\delta$, and $\gamma$ are disjoint subsets of $\mathcal N_n$. When $\delta = \emptyset$, from P3, we have
$$I(X_\sigma; X_\gamma) \ge 0. \quad (14.110)$$
Thus P1 to P3 imply that entropy is nonnegative, and that conditional entropy, mutual information, and conditional mutual information are nonnegative provided that they are irreducible. However, it has been shown in Section 14.1 that a reducible Shannon's information measure can always be written as the sum of irreducible Shannon's information measures. Therefore, we have shown that the polymatroidal axioms P1 to P3 imply the basic inequalities. The converse is trivial and the proof is omitted.

Chapter Summary

Shannon-Type Inequalities are information inequalities implied by the basic inequalities.

Elemental Forms of Shannon's Information Measures: Any Shannon's information measure involving random variables $X_1, X_2, \cdots, X_n$ can be expressed as the sum of the following two elemental forms:
i) $H(X_i|X_{\mathcal N_n - \{i\}})$, $i \in \mathcal N_n$;
ii) $I(X_i; X_j|X_K)$, where $i \ne j$ and $K \subset \mathcal N_n - \{i,j\}$.

Elemental Inequalities: For a set of random variables, the nonnegativity of the two elemental forms of Shannon's information measures are called the elemental inequalities. The elemental inequalities are equivalent to the basic inequalities for the same set of random variables, and they form the minimal such subset of the basic inequalities.

The Region $\Gamma_n$: $\Gamma_n = \{h \in \mathcal H_n : Gh \ge 0\}$ is the subset of $\mathcal H_n$ defined by the basic inequalities for $n$ random variables, and $\Gamma_n^* \subset \Gamma_n$.

Unconstrained Shannon-Type Inequalities: $b^\top h \ge 0$ is a Shannon-type inequality if and only if one of the following is true:
1. $\Gamma_n \subset \{h \in \mathcal H_n : b^\top h \ge 0\}$.
2. The minimum of the problem "Minimize $b^\top h$, subject to $Gh \ge 0$" is zero.
Constrained Shannon-Type Inequalities: Under the constraint $\Phi = \{h \in \mathcal H_n : Qh = 0\}$, $b^\top h \ge 0$ is a Shannon-type inequality if and only if
1. $(\Gamma_n \cap \Phi) \subset \{h \in \mathcal H_n : b^\top h \ge 0\}$.
2. The minimum of the problem "Minimize $b^\top h$, subject to $Gh \ge 0$ and $Qh = 0$" is zero.

Duality: An unconstrained Shannon-type inequality is a nonnegative linear combination of the elemental inequalities for the same set of random variables.

ITIP is a software package running on MATLAB for proving Shannon-type inequalities.

Problems

1. Prove (14.12) for the total number of elemental forms of Shannon's information measures for $n$ random variables.
2. Shannon-type inequalities for $n$ random variables $X_1, X_2, \cdots, X_n$ refer to all information inequalities implied by the basic inequalities for these $n$ random variables. Show that no new information inequality can be generated by considering the basic inequalities for more than $n$ random variables.
3. Show by an example that the decomposition of an information expression into a sum of elemental forms of Shannon's information measures is not unique.
4. Elemental forms of conditional independencies. Consider random variables $X_1, X_2, \cdots, X_n$. A conditional independency is said to be elemental if it corresponds to setting an elemental form of Shannon's information measure to zero. Show that any conditional independency involving $X_1, X_2, \cdots, X_n$ is equivalent to a collection of elemental conditional independencies.
5. Symmetrical information inequalities.
a) Show that every symmetrical information expression (cf. Problem 1 in Chapter 13) involving random variables $X_1, X_2, \cdots, X_n$ can be written in the form
$$E = \sum_{k=0}^{n-1} a_k c_k^{(n)},$$
where
$$c_0^{(n)} = \sum_{i=1}^n H(X_i|X_{\mathcal N_n - \{i\}})$$
and for $1 \le k \le n-1$,
$$c_k^{(n)} = \sum_{1 \le i < j \le n}\ \sum_{\substack{K \subset \mathcal N_n - \{i,j\}\\ |K| = k-1}} I(X_i; X_j|X_K).$$
b) Show that $E \ge 0$ always holds if and only if $a_k \ge 0$ for all $0 \le k \le n-1$. Hint: for each $k$, construct random variables for which $c_k^{(n)} > 0$ and $c_{k'}^{(n)} = 0$ for all $0 \le k' \le n-1$ and $k' \ne k$. (Han.)
6. Strictly positive probability distributions. It was shown in Proposition 2.12 that
$$\left.\begin{array}{l} X_1 \perp X_4|(X_2,X_3)\\ X_1 \perp X_3|(X_2,X_4) \end{array}\right\} \Rightarrow X_1 \perp (X_3,X_4)|X_2$$
if $p(x_1,x_2,x_3,x_4) > 0$ for all $x_1$, $x_2$, $x_3$, and $x_4$. Show by using ITIP that this implication is not implied by the basic inequalities. This strongly suggests that this implication does not hold in general, which was shown to be the case by the construction following Proposition 2.12.
7. a) Verify by ITIP that
$$I(X_1,X_2;Y_1,Y_2) \le I(X_1;Y_1) + I(X_2;Y_2)$$
under the constraint $H(Y_1,Y_2|X_1,X_2) = H(Y_1|X_1) + H(Y_2|X_2)$. This constrained inequality was used in Problem 10 in Chapter 7 to obtain the capacity of two parallel channels.
b) Verify by ITIP that
$$I(X_1,X_2;Y_1,Y_2) \ge I(X_1;Y_1) + I(X_2;Y_2)$$
under the constraint $I(X_1;X_2) = 0$. This constrained inequality was used in Problem 4 in Chapter 8 to obtain the rate-distortion function for a product source.
Non-Shannon-type inequalities, which were first discovered in the late 1990’s, will be discussed in the next chapter. 15 Beyond Shannon-Type Inequalities In Chapter 13, we introduced the regions Γ ∗ n and Γn in the entropy space Hn for n random variables. From Γ ∗ n, one in principle can determine whether any information inequality always holds. The region Γn, defined by the set of all basic inequalities (equivalently all elemental inequalities) involving n random variables, is an outer bound on Γ ∗ n. From Γn, one can determine whether any information inequality is implied by the basic inequalities. If so, it is called a Shannon-type inequality. Since the basic inequalities always hold, so do all Shannon-type inequalities. In the last chapter, we have shown how machine-proving of all Shannon-type inequalities can be made possible by taking advantage of the linear structure of Γn. If the two regions Γ ∗ n and Γn are identical, then all information inequalities which always hold are Shannon-type inequalities, and hence all information inequalities can be completely characterized. However, if Γ ∗ n is a proper sub-set of Γn, then there exist constraints on an entropy function which are not implied by the basic inequalities. Such a constraint, if in the form of an in-equality, is referred to as a non-Shannon-type inequality. There is a point here which needs further explanation. The fact that Γ ∗ n ̸= Γn does not necessarily imply the existence of a non-Shannon-type inequality. As an example, suppose Γn contains all but an isolated point in Γ ∗ n. Then this does not lead to the existence of a non-Shannon-type inequality for n random variables. In this chapter, we present characterizations of Γ ∗ n which are more refined than Γn. These characterizations lead to the existence of non-Shannon-type inequalities for n ≥4. 15.1 Characterizations of Γ ∗ 2, Γ ∗ 3, and Γ ∗ n Recall from the proof of Theorem 3.6 that the vector h represents the values of the I-Measure µ∗on the unions in Fn. Moreover, h is related to the values of µ∗on the atoms of Fn, represented as u, by 362 15 Beyond Shannon-Type Inequalities h = Cnu (15.1) where Cn is a unique k × k matrix with k = 2n −1 (cf. (3.27)). Let In be the k-dimensional Euclidean space with the coordinates labeled by the components of u. Note that each coordinate in In corresponds to the value of µ∗on a nonempty atom of Fn. Recall from Lemma 13.1 the definition of the region Ψ ∗ n = {u ∈In : Cnu ∈Γ ∗ n}, (15.2) which is obtained from the region Γ ∗ n via the linear transformation induced by C−1 n . Analogously, we define the region Ψn = {u ∈In : Cnu ∈Γn}. (15.3) The region Γ ∗ n, as we will see, is extremely difficult to characterize for a general n. Therefore, we start our discussion with the simplest case, namely n = 2. Theorem 15.1. Γ ∗ 2 = Γ2. Proof. For n = 2, the elemental inequalities are H(X1|X2) = µ∗( ˜ X1 −˜ X2) ≥0 (15.4) H(X2|X1) = µ∗( ˜ X2 −˜ X1) ≥0 (15.5) I(X1; X2) = µ∗( ˜ X1 ∩˜ X2) ≥0. (15.6) Note that the quantities on the left hand sides above are precisely the values of µ∗on the atoms of F2. Therefore, Ψ2 = {u ∈I2 : u ≥0}, (15.7) i.e., Ψ2 is the nonnegative orthant of I2. Since Γ ∗ 2 ⊂Γ2, Ψ ∗ 2 ⊂Ψ2. On the other hand, Ψ2 ⊂Ψ ∗ 2 by Lemma 13.1. Thus Ψ ∗ 2 = Ψ2, which implies Γ ∗ 2 = Γ2. The proof is accomplished. ⊓ ⊔ Next, we prove that Theorem 15.1 cannot even be generalized to n = 3. Theorem 15.2. Γ ∗ 3 ̸= Γ3. Proof. 
For $n = 3$, the elemental inequalities are
$$H(X_i|X_j,X_k) = \mu^*(\tilde X_i - \tilde X_j - \tilde X_k) \ge 0 \quad (15.8)$$
$$I(X_i;X_j|X_k) = \mu^*(\tilde X_i \cap \tilde X_j - \tilde X_k) \ge 0, \quad (15.9)$$
and
$$I(X_i;X_j) = \mu^*(\tilde X_i \cap \tilde X_j) = \mu^*(\tilde X_i \cap \tilde X_j \cap \tilde X_k) + \mu^*(\tilde X_i \cap \tilde X_j - \tilde X_k) \ge 0 \quad (15.10)\text{--}(15.12)$$
for $1 \le i < j < k \le 3$. For $u \in \mathcal I_3$, let
$$u = (u_1, u_2, u_3, u_4, u_5, u_6, u_7), \quad (15.13)$$
where $u_i$, $1 \le i \le 7$, correspond to the values
$$\mu^*(\tilde X_1 - \tilde X_2 - \tilde X_3),\ \mu^*(\tilde X_2 - \tilde X_1 - \tilde X_3),\ \mu^*(\tilde X_3 - \tilde X_1 - \tilde X_2),\ \mu^*(\tilde X_1 \cap \tilde X_2 - \tilde X_3),\ \mu^*(\tilde X_1 \cap \tilde X_3 - \tilde X_2),\ \mu^*(\tilde X_2 \cap \tilde X_3 - \tilde X_1),\ \mu^*(\tilde X_1 \cap \tilde X_2 \cap \tilde X_3), \quad (15.14)$$
respectively. These are the values of $\mu^*$ on the nonempty atoms of $\mathcal F_3$. Then from (15.8), (15.9), and (15.12), we see that
$$\Psi_3 = \{u \in \mathcal I_3 : u_i \ge 0,\ 1 \le i \le 6;\ u_j + u_7 \ge 0,\ 4 \le j \le 6\}. \quad (15.15)$$
It is easy to check that the point $(0,0,0,a,a,a,-a)$ for any $a \ge 0$ is in $\Psi_3$. This is illustrated in Figure 15.1, and it is readily seen that the relations
$$H(X_i|X_j,X_k) = 0 \quad (15.16)$$
and
$$I(X_i;X_j) = 0 \quad (15.17)$$
for $1 \le i < j < k \le 3$ are satisfied, i.e., each random variable is a function of the other two, and the three random variables are pairwise independent.

Fig. 15.1. The set-theoretic structure of the point $(0,0,0,a,a,a,-a)$ in $\Psi_3$.

Let $S_{X_i}$ be the support of $X_i$, $i = 1, 2, 3$. For any $x_1 \in S_{X_1}$ and $x_2 \in S_{X_2}$, since $X_1$ and $X_2$ are independent, we have
$$p(x_1,x_2) = p(x_1)p(x_2) > 0. \quad (15.18)$$
Since $X_3$ is a function of $X_1$ and $X_2$, there is a unique $x_3 \in S_{X_3}$ such that
$$p(x_1,x_2,x_3) = p(x_1,x_2) = p(x_1)p(x_2) > 0. \quad (15.19)$$
Since $X_2$ is a function of $X_1$ and $X_3$, and $X_1$ and $X_3$ are independent, we can write
$$p(x_1,x_2,x_3) = p(x_1,x_3) = p(x_1)p(x_3). \quad (15.20)$$
Equating (15.19) and (15.20), we have
$$p(x_2) = p(x_3). \quad (15.21)$$
Now consider any $x_2' \in S_{X_2}$ such that $x_2' \ne x_2$. Since $X_2$ and $X_3$ are independent, we have
$$p(x_2',x_3) = p(x_2')p(x_3) > 0. \quad (15.22)$$
Since $X_1$ is a function of $X_2$ and $X_3$, there is a unique $x_1' \in S_{X_1}$ such that
$$p(x_1',x_2',x_3) = p(x_2',x_3) = p(x_2')p(x_3) > 0. \quad (15.23)$$
Since $X_2$ is a function of $X_1$ and $X_3$, and $X_1$ and $X_3$ are independent, we can write
$$p(x_1',x_2',x_3) = p(x_1',x_3) = p(x_1')p(x_3). \quad (15.24)$$
Similarly, since $X_3$ is a function of $X_1$ and $X_2$, and $X_1$ and $X_2$ are independent, we can write
$$p(x_1',x_2',x_3) = p(x_1',x_2') = p(x_1')p(x_2'). \quad (15.25)$$
Equating (15.24) and (15.25), we have
$$p(x_2') = p(x_3), \quad (15.26)$$
and from (15.21), we have
$$p(x_2') = p(x_2). \quad (15.27)$$
Therefore, $X_2$ must have a uniform distribution on its support. The same can be proved for $X_1$ and $X_3$. Now from Figure 15.1,
$$H(X_1) = H(X_1|X_2,X_3) + I(X_1;X_2|X_3) + I(X_1;X_3|X_2) + I(X_1;X_2;X_3) = 0 + a + a + (-a) = a, \quad (15.28)\text{--}(15.30)$$
and similarly
$$H(X_2) = H(X_3) = a. \quad (15.31)$$
Then the only values that $a$ can take are $\log M$, where $M$ (a positive integer) is the cardinality of the supports of $X_1$, $X_2$, and $X_3$. In other words, if $a$ is not equal to $\log M$ for some positive integer $M$, then the point $(0,0,0,a,a,a,-a)$ is not in $\Psi_3^*$. This proves that $\Psi_3^* \ne \Psi_3$, which implies $\Gamma_3^* \ne \Gamma_3$. The theorem is proved. □

The proof above has the following interpretation. For $h \in \mathcal H_3$, let
$$h = (h_1, h_2, h_3, h_{12}, h_{13}, h_{23}, h_{123}). \quad (15.32)$$

Fig. 15.2. The values of $a$ (namely $0, \log 2, \log 3, \log 4, \ldots$) for which $(a,a,a,2a,2a,2a,2a)$ is entropic.

From Figure 15.1, we see that the point $(0,0,0,a,a,a,-a)$ in $\Psi_3$ corresponds to the point $(a,a,a,2a,2a,2a,2a)$ in $\Gamma_3$.
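This correspondence is easy to confirm mechanically, e.g. with the hypothetical elemental_matrix helper sketched in Section 14.2: the point $(a,a,a,2a,2a,2a,2a)$ satisfies all nine elemental inequalities for three random variables, six of them with equality, which is exactly what the next paragraph exploits.

```python
import numpy as np

# h = (h_1, h_2, h_12, h_3, h_13, h_23, h_123) in bitmask order:
# singletons at indices 0, 1, 3; pairs at 2, 4, 5; the triple at 6.
a = 1.0
h = np.zeros(7)
h[[0, 1, 3]] = a          # H(X_i) = a
h[[2, 4, 5]] = 2 * a      # H(X_i, X_j) = 2a
h[6] = 2 * a              # H(X_1, X_2, X_3) = 2a
v = elemental_matrix(3) @ h
print((v >= -1e-12).all())              # True: the point is in Gamma_3
print(int((np.abs(v) < 1e-12).sum()))   # 6 tight elemental inequalities
```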
Evidently, the point $(a,a,a,2a,2a,2a,2a)$ in $\Gamma_3$ satisfies the 6 elemental inequalities given in (15.8) and (15.12) for $1 \le i < j < k \le 3$ with equality. Since $\Gamma_3$ is defined by all the elemental inequalities, the set
$$\{(a,a,a,2a,2a,2a,2a) \in \Gamma_3 : a \ge 0\} \quad (15.33)$$
is in the intersection of 6 hyperplanes in $\mathcal H_3$ (i.e., $\Re^7$) defining the boundary of $\Gamma_3$, and hence it defines an extreme direction of $\Gamma_3$. The proof then says that along this extreme direction of $\Gamma_3$, only certain discrete points, namely those with $a$ equal to $\log M$ for some positive integer $M$, are entropic. This is illustrated in Figure 15.2. As a consequence, the region $\Gamma_3^*$ is not convex.

Having proved that $\Gamma_3^* \ne \Gamma_3$, it is natural to conjecture that the gap between $\Gamma_3^*$ and $\Gamma_3$ has zero Lebesgue measure. In other words, $\overline{\Gamma_3^*} = \Gamma_3$, where $\overline{\Gamma_3^*}$ is the closure of $\Gamma_3^*$. This is indeed the case and will be proved at the end of the section.

More generally, we are interested in characterizing $\overline{\Gamma_n^*}$, the closure of $\Gamma_n^*$. Although the region $\overline{\Gamma_n^*}$ is not sufficient for characterizing all information inequalities, it is actually sufficient for characterizing all unconstrained information inequalities. This can be seen as follows. Following the discussion in Section 13.3.1, an unconstrained information inequality $f \ge 0$ involving $n$ random variables always holds if and only if
$$\Gamma_n^* \subset \{h : f(h) \ge 0\}. \quad (15.34)$$
Since $\{h : f(h) \ge 0\}$ is closed, upon taking closure on both sides, we have
$$\overline{\Gamma_n^*} \subset \{h : f(h) \ge 0\}. \quad (15.35)$$
On the other hand, if $f \ge 0$ satisfies (15.35), then
$$\Gamma_n^* \subset \overline{\Gamma_n^*} \subset \{h : f(h) \ge 0\}. \quad (15.36)$$
Therefore, (15.34) and (15.35) are equivalent, and hence $\overline{\Gamma_n^*}$ is sufficient for characterizing all unconstrained information inequalities.

We will prove in the next theorem an important property of the region $\Gamma_n^*$ for all $n \ge 2$. This result will be used in the proof of $\overline{\Gamma_3^*} = \Gamma_3$. Further, this result will be used in Chapter 16 when we establish a fundamental relation between information theory and group theory. We first prove a simple lemma. In the following, we use $\mathcal N_n$ to denote the set $\{1,2,\cdots,n\}$.

Lemma 15.3. If $h$ and $h'$ are in $\Gamma_n^*$, then $h + h'$ is in $\Gamma_n^*$.

Proof. Consider $h$ and $h'$ in $\Gamma_n^*$. Let $h$ represent the entropy function for random variables $X_1, X_2, \cdots, X_n$, and let $h'$ represent the entropy function for random variables $X_1', X_2', \cdots, X_n'$. Let $(X_1, X_2, \cdots, X_n)$ and $(X_1', X_2', \cdots, X_n')$ be independent, and define random variables $Y_1, Y_2, \cdots, Y_n$ by
$$Y_i = (X_i, X_i') \quad (15.37)$$
for all $i \in \mathcal N_n$. Then for any subset $\alpha$ of $\mathcal N_n$,
$$H(Y_\alpha) = H(X_\alpha) + H(X_\alpha') = h_\alpha + h_\alpha'. \quad (15.38)$$
Therefore, $h + h'$, which represents the entropy function for $Y_1, Y_2, \cdots, Y_n$, is in $\Gamma_n^*$. The lemma is proved. □

Corollary 15.4. If $h \in \Gamma_n^*$, then $kh \in \Gamma_n^*$ for any positive integer $k$.

Proof. It suffices to write
$$kh = \underbrace{h + h + \cdots + h}_{k} \quad (15.39)$$
and apply Lemma 15.3. □

Theorem 15.5. $\overline{\Gamma_n^*}$ is a convex cone.

Proof. Consider the entropy function for random variables $X_1, X_2, \cdots, X_n$ all taking constant values with probability 1. Then for every subset $\alpha$ of $\mathcal N_n$,
$$H(X_\alpha) = 0. \quad (15.40)$$
Therefore, $\Gamma_n^*$ contains the origin in $\mathcal H_n$.

Let $h$ and $h'$ in $\Gamma_n^*$ be the entropy functions for any two sets of random variables $Y_1, Y_2, \cdots, Y_n$ and $Z_1, Z_2, \cdots, Z_n$, respectively. In view of Corollary 15.4, in order to prove that $\overline{\Gamma_n^*}$ is a convex cone, we only need to show that if $h$ and $h'$ are in $\Gamma_n^*$, then $bh + \bar b h'$ is in $\overline{\Gamma_n^*}$ for all $0 < b < 1$, where $\bar b = 1 - b$.
Let $(Y_1^{(k)}, Y_2^{(k)}, \cdots, Y_n^{(k)})$ consist of $k$ independent copies of $(Y_1, Y_2, \cdots, Y_n)$, and let $(Z_1^{(k)}, Z_2^{(k)}, \cdots, Z_n^{(k)})$ consist of $k$ independent copies of $(Z_1, Z_2, \cdots, Z_n)$, so that $H(Y_\alpha^{(k)}) = kH(Y_\alpha)$ and $H(Z_\alpha^{(k)}) = kH(Z_\alpha)$. Let $U$ be a ternary random variable independent of all other random variables such that
$$\Pr\{U = 0\} = 1 - \delta - \mu,\quad \Pr\{U = 1\} = \delta,\quad \Pr\{U = 2\} = \mu.$$
Now construct random variables $X_1, X_2, \cdots, X_n$ by letting
$$X_i = \begin{cases} 0 & \text{if } U = 0\\ Y_i^{(k)} & \text{if } U = 1\\ Z_i^{(k)} & \text{if } U = 2. \end{cases}$$
Note that $H(U) \to 0$ as $\delta, \mu \to 0$. Then for any nonempty subset $\alpha$ of $\mathcal N_n$,
$$H(X_\alpha) \le H(X_\alpha, U) = H(U) + H(X_\alpha|U) = H(U) + \delta k H(Y_\alpha) + \mu k H(Z_\alpha). \quad (15.41)\text{--}(15.43)$$
On the other hand,
$$H(X_\alpha) \ge H(X_\alpha|U) = \delta k H(Y_\alpha) + \mu k H(Z_\alpha). \quad (15.44)$$
Combining the above, we have
$$0 \le H(X_\alpha) - (\delta k H(Y_\alpha) + \mu k H(Z_\alpha)) \le H(U). \quad (15.45)$$
Now take
$$\delta = \frac{b}{k} \quad (15.46)$$
and
$$\mu = \frac{\bar b}{k} \quad (15.47)$$
to obtain
$$0 \le H(X_\alpha) - (bH(Y_\alpha) + \bar b H(Z_\alpha)) \le H(U). \quad (15.48)$$
By letting $k$ be sufficiently large, the upper bound can be made arbitrarily small. This shows that $bh + \bar b h' \in \overline{\Gamma_n^*}$. The theorem is proved. □

In the next theorem, we prove that $\Gamma_3^*$ and $\Gamma_3$ are almost identical. Analogous to $\overline{\Gamma_n^*}$, we will use $\overline{\Psi_n^*}$ to denote the closure of $\Psi_n^*$.

Theorem 15.6. $\overline{\Gamma_3^*} = \Gamma_3$.

Proof. We first note that $\overline{\Gamma_3^*} = \Gamma_3$ if and only if
$$\overline{\Psi_3^*} = \Psi_3. \quad (15.49)$$
Since
$$\Gamma_3^* \subset \Gamma_3 \quad (15.50)$$
and $\Gamma_3$ is closed, by taking closure on both sides in the above, we obtain $\overline{\Gamma_3^*} \subset \Gamma_3$. This implies that $\overline{\Psi_3^*} \subset \Psi_3$. Therefore, in order to prove the theorem, it suffices to show that $\Psi_3 \subset \overline{\Psi_3^*}$.

We first show that the point $(0,0,0,a,a,a,-a)$ is in $\overline{\Psi_3^*}$ for all $a > 0$. Let random variables $X_1$, $X_2$, and $X_3$ be defined as in Example 3.10, i.e., $X_1$ and $X_2$ are two independent binary random variables taking values in $\{0,1\}$ according to the uniform distribution, and
$$X_3 = X_1 + X_2 \bmod 2. \quad (15.51)$$
Let $h \in \Gamma_3^*$ represent the entropy function for $X_1$, $X_2$, and $X_3$, and let
$$u = C_3^{-1}h. \quad (15.52)$$
As in the proof of Theorem 15.2, we let $u_i$, $1 \le i \le 7$, be the coordinates of $\mathcal I_3$ which correspond to the values of the quantities in (15.14), respectively. From Example 3.10, we have
$$u_i = \begin{cases} 0 & \text{for } i = 1, 2, 3\\ 1 & \text{for } i = 4, 5, 6\\ -1 & \text{for } i = 7. \end{cases} \quad (15.53)$$
Thus the point $(0,0,0,1,1,1,-1)$ is in $\Psi_3^*$, and the I-Measure $\mu^*$ for $X_1$, $X_2$, and $X_3$ is shown in Figure 15.3.

Fig. 15.3. The I-Measure $\mu^*$ for $X_1$, $X_2$, and $X_3$ in the proof of Theorem 15.6.
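The values in (15.53) can be reproduced directly from the joint distribution. A small sketch (our own check) that computes the seven atom values of $\mu^*$ from joint entropies by inclusion-exclusion:

```python
import itertools
import numpy as np

# X1, X2 fair independent bits, X3 = X1 + X2 mod 2.
pmf = {(x1, x2, (x1 + x2) % 2): 0.25
       for x1, x2 in itertools.product((0, 1), repeat=2)}

def H(axes):
    m = {}
    for outcome, prob in pmf.items():
        key = tuple(outcome[a] for a in axes)
        m[key] = m.get(key, 0.0) + prob
    p = np.array(list(m.values()))
    return float(-(p * np.log2(p)).sum())

h1, h2, h3 = H([0]), H([1]), H([2])
h12, h13, h23 = H([0, 1]), H([0, 2]), H([1, 2])
h123 = H([0, 1, 2])
u = [h123 - h23, h123 - h13, h123 - h12,        # H(X_i | other two)
     h13 + h23 - h123 - h3,                     # I(X1;X2|X3)
     h12 + h23 - h123 - h2,                     # I(X1;X3|X2)
     h12 + h13 - h123 - h1,                     # I(X2;X3|X1)
     h1 + h2 + h3 - h12 - h13 - h23 + h123]     # I(X1;X2;X3)
print(np.round(u, 12))    # [0. 0. 0. 1. 1. 1. -1.], as in (15.53)
```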
Then by Corollary 15.4, $(0,0,0,k,k,k,-k)$ is in $\Psi_3^*$ and hence in $\overline{\Psi_3^*}$ for every positive integer $k$. Since $\Gamma_3^*$ contains the origin, $\Psi_3^*$ also contains the origin. By Theorem 15.5, $\overline{\Gamma_3^*}$ is convex. This implies $\overline{\Psi_3^*}$ is also convex. Therefore, $(0,0,0,a,a,a,-a)$ is in $\overline{\Psi_3^*}$ for all $a > 0$.

Consider any $u \in \Psi_3$. Referring to (15.15), we have
$$u_i \ge 0 \quad (15.54)$$
for $1 \le i \le 6$. Thus $u_7$ is the only component of $u$ which can possibly be negative. We first consider the case when $u_7 \ge 0$. Then $u$ is in the nonnegative orthant of $\mathcal I_3$, and by Lemma 13.1, $u$ is in $\Psi_3^*$. Next, consider the case when $u_7 < 0$. Let
$$t = (0,0,0,-u_7,-u_7,-u_7,u_7). \quad (15.55)$$
Then
$$u = w + t, \quad (15.56)$$
where
$$w = (u_1,u_2,u_3,u_4+u_7,u_5+u_7,u_6+u_7,0). \quad (15.57)$$
Since $-u_7 > 0$, we see from the foregoing that $t \in \overline{\Psi_3^*}$. From (15.15), we have
$$u_i + u_7 \ge 0 \quad (15.58)$$
for $i = 4, 5, 6$. Thus $w$ is in the nonnegative orthant in $\mathcal I_3$ and hence in $\Psi_3^*$ by Lemma 13.1. Now for any $\epsilon > 0$, let $t' \in \Psi_3^*$ such that
$$\|t - t'\| < \epsilon, \quad (15.59)$$
where $\|t - t'\|$ denotes the Euclidean distance between $t$ and $t'$, and let
$$u' = w + t'. \quad (15.60)$$
Since both $w$ and $t'$ are in $\Psi_3^*$, by Lemma 15.3, $u'$ is also in $\Psi_3^*$, and
$$\|u - u'\| = \|t - t'\| < \epsilon. \quad (15.61)$$
Therefore, $u \in \overline{\Psi_3^*}$. Hence, $\Psi_3 \subset \overline{\Psi_3^*}$, and the theorem is proved. □

Remark 1. Han has found that $\Gamma_3$ is the smallest cone that contains $\Gamma_3^*$. This result together with Theorem 15.5 implies Theorem 15.6. Theorem 15.6 was also obtained by Golić, and it is a consequence of the theorem in Matúš.

Remark 2. We have shown that the region $\overline{\Gamma_n^*}$ completely characterizes all unconstrained information inequalities involving $n$ random variables. Since $\overline{\Gamma_3^*} = \Gamma_3$, it follows that there exist no unconstrained information inequalities involving three random variables other than the Shannon-type inequalities. Matúš has obtained piecewise-linear constrained non-Shannon-type inequalities for three random variables that generalize the construction in the proof of Theorem 15.2.

15.2 A Non-Shannon-Type Unconstrained Inequality

We have proved in Theorem 15.6 at the end of the last section that $\overline{\Gamma_3^*} = \Gamma_3$. It is natural to conjecture that this theorem can be generalized to $n \ge 4$. If this conjecture were true, then it would follow that all unconstrained information inequalities involving a finite number of random variables are Shannon-type inequalities, and they could all be proved by ITIP running on a sufficiently powerful computer. However, it turns out that this is not the case even for $n = 4$. We will prove in the next theorem an unconstrained information inequality involving four random variables. Then we will show that this inequality is a non-Shannon-type inequality, and that $\overline{\Gamma_4^*} \ne \Gamma_4$.

Theorem 15.7. For any four random variables $X_1$, $X_2$, $X_3$, and $X_4$,
$$2I(X_3;X_4) \le I(X_1;X_2) + I(X_1;X_3,X_4) + 3I(X_3;X_4|X_1) + I(X_3;X_4|X_2). \quad (15.62)$$

Toward proving this theorem, we introduce two auxiliary random variables $\tilde X_1$ and $\tilde X_2$ jointly distributed with $X_1$, $X_2$, $X_3$, and $X_4$ such that $\tilde X_1 = X_1$ and $\tilde X_2 = X_2$. To simplify notation, we will use $p_{1234\tilde 1\tilde 2}(x_1,x_2,x_3,x_4,\tilde x_1,\tilde x_2)$ to denote $p_{X_1X_2X_3X_4\tilde X_1\tilde X_2}(x_1,x_2,x_3,x_4,\tilde x_1,\tilde x_2)$, etc. The joint distribution for the six random variables $X_1, X_2, X_3, X_4, \tilde X_1$, and $\tilde X_2$ is defined by
$$p_{1234\tilde 1\tilde 2}(x_1,x_2,x_3,x_4,\tilde x_1,\tilde x_2) = \begin{cases} \dfrac{p_{1234}(x_1,x_2,x_3,x_4)\,p_{1234}(\tilde x_1,\tilde x_2,x_3,x_4)}{p_{34}(x_3,x_4)} & \text{if } p_{34}(x_3,x_4) > 0\\[2mm] 0 & \text{if } p_{34}(x_3,x_4) = 0. \end{cases} \quad (15.63)$$

Lemma 15.8.
$$(X_1,X_2) \to (X_3,X_4) \to (\tilde X_1,\tilde X_2) \quad (15.64)$$
forms a Markov chain. Moreover, $(X_1,X_2,X_3,X_4)$ and $(\tilde X_1,\tilde X_2,X_3,X_4)$ have the same marginal distribution.

Proof. The Markov chain in (15.64) is readily seen by invoking Proposition 2.5. The second part of the lemma is readily seen to be true by noting that in (15.63), $p_{1234\tilde 1\tilde 2}$ is symmetrical in $X_1$ and $\tilde X_1$ and in $X_2$ and $\tilde X_2$. □

From the above lemma, we see that the pair of auxiliary random variables $(\tilde X_1,\tilde X_2)$ corresponds to the pair of random variables $(X_1,X_2)$ in the sense that $(\tilde X_1,\tilde X_2,X_3,X_4)$ has the same marginal distribution as $(X_1,X_2,X_3,X_4)$. We need to prove two inequalities regarding these six random variables before we prove Theorem 15.7.

Lemma 15.9. For any four random variables $X_1$, $X_2$, $X_3$, and $X_4$ and auxiliary random variables $\tilde X_1$ and $\tilde X_2$ as defined in (15.63),
$$I(X_3;X_4) - I(X_3;X_4|X_1) - I(X_3;X_4|X_2) \le I(X_1;\tilde X_2). \quad (15.65)$$

Proof.
Consider
$$\begin{aligned} &I(X_3;X_4) - I(X_3;X_4|X_1) - I(X_3;X_4|X_2)\\ &\overset{a)}{=} [I(X_3;X_4) - I(X_3;X_4|X_1)] - I(X_3;X_4|\tilde X_2) \quad (15.66)\\ &= I(X_1;X_3;X_4) - I(X_3;X_4|\tilde X_2) \quad (15.67)\\ &= [I(X_1;X_3;X_4;\tilde X_2) + I(X_1;X_3;X_4|\tilde X_2)] - I(X_3;X_4|\tilde X_2) \quad (15.68)\\ &= I(X_1;X_3;X_4;\tilde X_2) - [I(X_3;X_4|\tilde X_2) - I(X_1;X_3;X_4|\tilde X_2)] \quad (15.69)\\ &= I(X_1;X_3;X_4;\tilde X_2) - I(X_3;X_4|X_1,\tilde X_2) \quad (15.70)\\ &= [I(X_1;X_4;\tilde X_2) - I(X_1;X_4;\tilde X_2|X_3)] - I(X_3;X_4|X_1,\tilde X_2) \quad (15.71)\\ &= [I(X_1;\tilde X_2) - I(X_1;\tilde X_2|X_4)] - [I(X_1;\tilde X_2|X_3) - I(X_1;\tilde X_2|X_3,X_4)] - I(X_3;X_4|X_1,\tilde X_2) \quad (15.72)\\ &\overset{b)}{=} I(X_1;\tilde X_2) - I(X_1;\tilde X_2|X_4) - I(X_1;\tilde X_2|X_3) - I(X_3;X_4|X_1,\tilde X_2) \quad (15.73)\\ &\le I(X_1;\tilde X_2), \quad (15.74) \end{aligned}$$
where a) follows because we see from Lemma 15.8 that $(X_2,X_3,X_4)$ and $(\tilde X_2,X_3,X_4)$ have the same marginal distribution, and b) follows because
$$I(X_1;\tilde X_2|X_3,X_4) = 0 \quad (15.75)$$
from the Markov chain in (15.64). The lemma is proved. □

Lemma 15.10. For any four random variables $X_1$, $X_2$, $X_3$, and $X_4$ and auxiliary random variables $\tilde X_1$ and $\tilde X_2$ as defined in (15.63),
$$I(X_3;X_4) - 2I(X_3;X_4|X_1) \le I(X_1;\tilde X_1). \quad (15.76)$$

Proof. Notice that (15.76) can be obtained from (15.65) by replacing $X_2$ by $X_1$ and $\tilde X_2$ by $\tilde X_1$ in (15.65). The inequality (15.76) can be proved by making the same replacements in (15.66) through (15.74) in the proof of the last lemma. The details are omitted. □

Proof of Theorem 15.7. By adding (15.65) and (15.76), we have
$$\begin{aligned} &2I(X_3;X_4) - 3I(X_3;X_4|X_1) - I(X_3;X_4|X_2)\\ &\le I(X_1;\tilde X_2) + I(X_1;\tilde X_1) \quad (15.77)\\ &= I(X_1;\tilde X_2) + [I(X_1;\tilde X_1|\tilde X_2) + I(X_1;\tilde X_1;\tilde X_2)] \quad (15.78)\\ &= [I(X_1;\tilde X_2) + I(X_1;\tilde X_1|\tilde X_2)] + I(X_1;\tilde X_1;\tilde X_2) \quad (15.79)\\ &= I(X_1;\tilde X_1,\tilde X_2) + I(X_1;\tilde X_1;\tilde X_2) \quad (15.80)\\ &= I(X_1;\tilde X_1,\tilde X_2) + [I(\tilde X_1;\tilde X_2) - I(\tilde X_1;\tilde X_2|X_1)] \quad (15.81)\\ &\le I(X_1;\tilde X_1,\tilde X_2) + I(\tilde X_1;\tilde X_2) \quad (15.82)\\ &\overset{a)}{\le} I(X_1;X_3,X_4) + I(\tilde X_1;\tilde X_2) \quad (15.83)\\ &\overset{b)}{=} I(X_1;X_3,X_4) + I(X_1;X_2), \quad (15.84) \end{aligned}$$
where a) follows from the Markov chain in (15.64), and b) follows because we see from Lemma 15.8 that $(\tilde X_1,\tilde X_2)$ and $(X_1,X_2)$ have the same marginal distribution. Note that the auxiliary random variables $\tilde X_1$ and $\tilde X_2$ disappear in (15.84) after the sequence of manipulations. The theorem is proved. □

Theorem 15.11. The inequality (15.62) is a non-Shannon-type inequality, and $\overline{\Gamma_4^*} \ne \Gamma_4$.

Fig. 15.4. The set-theoretic structure of $\tilde h(a)$.

Proof. Consider for any $a > 0$ the point $\tilde h(a) \in \mathcal H_4$, where
$$\begin{aligned} &\tilde h_1(a) = \tilde h_2(a) = \tilde h_3(a) = \tilde h_4(a) = 2a,\\ &\tilde h_{12}(a) = 4a,\quad \tilde h_{13}(a) = \tilde h_{14}(a) = 3a,\quad \tilde h_{23}(a) = \tilde h_{24}(a) = \tilde h_{34}(a) = 3a,\\ &\tilde h_{123}(a) = \tilde h_{124}(a) = \tilde h_{134}(a) = \tilde h_{234}(a) = \tilde h_{1234}(a) = 4a. \end{aligned} \quad (15.85)$$
The set-theoretic structure of $\tilde h(a)$ is illustrated by the information diagram in Figure 15.4. The reader should check that this information diagram correctly represents $\tilde h(a)$ as defined. It is also easy to check from this diagram that $\tilde h(a)$ satisfies all the elemental inequalities for four random variables, and therefore $\tilde h(a) \in \Gamma_4$. However, upon substituting the corresponding values in (15.62) for $\tilde h(a)$ with the help of Figure 15.4, we have
$$2a \le 0 + a + 0 + 0 = a, \quad (15.86)$$
which is a contradiction because $a > 0$. In other words, $\tilde h(a)$ does not satisfy (15.62). Equivalently,
$$\tilde h(a) \notin \{h \in \mathcal H_4 : h \text{ satisfies } (15.62)\}. \quad (15.87)$$
Since $\tilde h(a) \in \Gamma_4$, we conclude that
$$\Gamma_4 \not\subset \{h \in \mathcal H_4 : h \text{ satisfies } (15.62)\}, \quad (15.88)$$
i.e., (15.62) is not implied by the basic inequalities for four random variables. Hence, (15.62) is a non-Shannon-type inequality.
Since (15.62) is satisfied by all entropy functions for four random variables, we have
$$\Gamma_4^* \subset \{h \in \mathcal H_4 : h \text{ satisfies } (15.62)\}, \quad (15.89)$$
and upon taking closure on both sides, we have
$$\overline{\Gamma_4^*} \subset \{h \in \mathcal H_4 : h \text{ satisfies } (15.62)\}. \quad (15.90)$$
Then (15.87) implies $\tilde h(a) \notin \overline{\Gamma_4^*}$. Since $\tilde h(a) \in \Gamma_4$ and $\tilde h(a) \notin \overline{\Gamma_4^*}$, we conclude that $\overline{\Gamma_4^*} \ne \Gamma_4$. The theorem is proved. □

Remark. We have shown in the proof of Theorem 15.11 that the inequality (15.62) cannot be proved by invoking the basic inequalities for four random variables. However, (15.62) can be proved by invoking the basic inequalities for the six random variables $X_1, X_2, X_3, X_4, \tilde X_1$, and $\tilde X_2$ with the joint probability distribution $p_{1234\tilde 1\tilde 2}$ as constructed in (15.63).

The inequality (15.62) remains valid when the indices 1, 2, 3, and 4 are permuted. Since (15.62) is symmetrical in $X_3$ and $X_4$, $4!/2! = 12$ distinct versions of (15.62) can be obtained by permuting the indices, and all these 12 inequalities are simultaneously satisfied by the entropy function of any set of random variables $X_1, X_2, X_3$, and $X_4$. We will denote these 12 inequalities collectively by $\langle 15.62\rangle$. Now define the region
$$\tilde\Gamma_4 = \{h \in \Gamma_4 : h \text{ satisfies } \langle 15.62\rangle\}. \quad (15.91)$$
Evidently,
$$\Gamma_4^* \subset \tilde\Gamma_4 \subset \Gamma_4. \quad (15.92)$$
Since both $\tilde\Gamma_4$ and $\Gamma_4$ are closed, upon taking closure, we also have
$$\overline{\Gamma_4^*} \subset \tilde\Gamma_4 \subset \Gamma_4. \quad (15.93)$$
Since $\langle 15.62\rangle$ are non-Shannon-type inequalities as we have proved in the last theorem, $\tilde\Gamma_4$ is a proper subset of $\Gamma_4$ and hence a tighter outer bound on $\Gamma_4^*$ and $\overline{\Gamma_4^*}$ than $\Gamma_4$.

In the course of proving that (15.62) is of non-Shannon-type, it was shown in the proof of Theorem 15.11 that there exists $\tilde h(a) \in \Gamma_4$ as defined in (15.85) which does not satisfy (15.62). By investigating the geometrical relation between $\tilde h(a)$ and $\Gamma_4$, we prove in the next theorem that (15.62) in fact induces a class of $2^{14}-1$ non-Shannon-type constrained inequalities. Applications of some of these inequalities will be discussed in Section 15.4.

Theorem 15.12. The inequality (15.62) is a non-Shannon-type inequality conditioning on setting any nonempty subset of the following 14 Shannon's information measures to zero:
$$\begin{gathered} I(X_1;X_2),\ I(X_1;X_2|X_3),\ I(X_1;X_2|X_4),\ I(X_1;X_3|X_4),\ I(X_1;X_4|X_3),\\ I(X_2;X_3|X_4),\ I(X_2;X_4|X_3),\ I(X_3;X_4|X_1),\ I(X_3;X_4|X_2),\ I(X_3;X_4|X_1,X_2),\\ H(X_1|X_2,X_3,X_4),\ H(X_2|X_1,X_3,X_4),\ H(X_3|X_1,X_2,X_4),\ H(X_4|X_1,X_2,X_3). \end{gathered} \quad (15.94)$$

Proof. It is easy to verify from Figure 15.4 that $\tilde h(a)$ lies in exactly 14 hyperplanes in $\mathcal H_4$ (i.e., $\Re^{15}$) defining the boundary of $\Gamma_4$ which correspond to setting the 14 Shannon's information measures in (15.94) to zero. Therefore, $\tilde h(a)$ for $a \ge 0$ defines an extreme direction of $\Gamma_4$.

Now for any linear subspace $\Phi$ of $\mathcal H_4$ containing $\tilde h(a)$, where $a > 0$, we have
$$\tilde h(a) \in \Gamma_4 \cap \Phi \quad (15.95)$$
and $\tilde h(a)$ does not satisfy (15.62). Therefore,
$$(\Gamma_4 \cap \Phi) \not\subset \{h \in \mathcal H_4 : h \text{ satisfies } (15.62)\}. \quad (15.96)$$
This means that (15.62) is a non-Shannon-type inequality under the constraint $\Phi$. From the above, we see that $\Phi$ can be taken to be the intersection of any nonempty subset of the 14 hyperplanes containing $\tilde h(a)$. Thus (15.62) is a non-Shannon-type inequality conditioning on any nonempty subset of the 14 Shannon's measures in (15.94) being equal to zero. Hence, (15.62) induces a class of $2^{14}-1$ non-Shannon-type constrained inequalities. The theorem is proved. □
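The two facts used in the proofs of Theorems 15.11 and 15.12, namely that $\tilde h(a)$ lies in $\Gamma_4$ (indeed on exactly 14 of its bounding hyperplanes) while violating (15.62), can be confirmed numerically. The sketch below reuses the hypothetical elemental_matrix helper from Section 14.2.

```python
import numpy as np

# Build h~(a) from (15.85); variables X1..X4 are bits 0..3.
a = 1.0
h = np.zeros(15)
def set_h(val, *subsets):
    for s in subsets:
        h[sum(1 << (i - 1) for i in s) - 1] = val
set_h(2 * a, (1,), (2,), (3,), (4,))
set_h(4 * a, (1, 2), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4))
set_h(3 * a, (1, 3), (1, 4), (2, 3), (2, 4), (3, 4))

v = elemental_matrix(4) @ h
print((v >= -1e-12).all())              # True: h~(a) is in Gamma_4
print(int((np.abs(v) < 1e-12).sum()))   # 14: the hyperplanes in (15.94)

def Hs(*idx):                 # joint entropy coordinate of X_idx at h~
    return h[sum(1 << (i - 1) for i in idx) - 1]
def I(A, B, K=()):            # I(X_A; X_B | X_K) evaluated at h~
    HK = Hs(*K) if K else 0.0
    return Hs(*A, *K) + Hs(*B, *K) - Hs(*A, *B, *K) - HK

lhs = 2 * I((3,), (4,))
rhs = (I((1,), (2,)) + I((1,), (3, 4))
       + 3 * I((3,), (4,), (1,)) + I((3,), (4,), (2,)))
print(lhs, rhs)               # 2.0 1.0: h~(a) violates (15.62), cf. (15.86)
```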
Remark. It is not true that the inequality (15.62) is of non-Shannon-type under any constraint. Suppose we impose the constraint
$$I(X_3;X_4) = 0. \quad (15.97)$$
Then the left hand side of (15.62) becomes zero, and the inequality is trivially implied by the basic inequalities because only mutual informations with positive coefficients appear on the right hand side. Thus (15.62) becomes a Shannon-type inequality under the constraint in (15.97).

15.3 A Non-Shannon-Type Constrained Inequality

In the last section, we proved a non-Shannon-type unconstrained inequality for four random variables which implies $\overline{\Gamma_4^*} \ne \Gamma_4$. This inequality induces a region $\tilde\Gamma_4$ which is a tighter outer bound on $\Gamma_4^*$ and $\overline{\Gamma_4^*}$ than $\Gamma_4$. We further showed that this inequality induces a class of $2^{14}-1$ non-Shannon-type constrained inequalities for four random variables. In this section, we prove a non-Shannon-type constrained inequality for four random variables. Unlike the non-Shannon-type unconstrained inequality we proved in the last section, this constrained inequality is not strong enough to imply that $\overline{\Gamma_4^*} \ne \Gamma_4$. However, the latter is not implied by the former.

Lemma 15.13. Let $p(x_1,x_2,x_3,x_4)$ be any probability distribution. Then
$$\tilde p(x_1,x_2,x_3,x_4) = \begin{cases} \dfrac{p(x_1,x_3,x_4)\,p(x_2,x_3,x_4)}{p(x_3,x_4)} & \text{if } p(x_3,x_4) > 0\\[2mm] 0 & \text{if } p(x_3,x_4) = 0 \end{cases} \quad (15.98)$$
is also a probability distribution. Moreover,
$$\tilde p(x_1,x_3,x_4) = p(x_1,x_3,x_4) \quad (15.99)$$
and
$$\tilde p(x_2,x_3,x_4) = p(x_2,x_3,x_4) \quad (15.100)$$
for all $x_1$, $x_2$, $x_3$, and $x_4$.

Proof. The proof of the first part of the lemma is straightforward (see Problem 4 in Chapter 2). The details are omitted here. To prove the second part of the lemma, it suffices to prove (15.99) for all $x_1$, $x_3$, and $x_4$ because $\tilde p(x_1,x_2,x_3,x_4)$ is symmetrical in $x_1$ and $x_2$. We first consider $x_1$, $x_3$, and $x_4$ such that $p(x_3,x_4) > 0$. From (15.98), we have
$$\tilde p(x_1,x_3,x_4) = \sum_{x_2} \tilde p(x_1,x_2,x_3,x_4) = \sum_{x_2} \frac{p(x_1,x_3,x_4)p(x_2,x_3,x_4)}{p(x_3,x_4)} = \frac{p(x_1,x_3,x_4)}{p(x_3,x_4)} \sum_{x_2} p(x_2,x_3,x_4) = p(x_1,x_3,x_4). \quad (15.101)\text{--}(15.105)$$
For $x_1$, $x_3$, and $x_4$ such that $p(x_3,x_4) = 0$, we have
$$0 \le p(x_1,x_3,x_4) \le p(x_3,x_4) = 0, \quad (15.106)$$
which implies
$$p(x_1,x_3,x_4) = 0. \quad (15.107)$$
Therefore, from (15.98), we have
$$\tilde p(x_1,x_3,x_4) = \sum_{x_2} \tilde p(x_1,x_2,x_3,x_4) = 0 = p(x_1,x_3,x_4). \quad (15.108)\text{--}(15.111)$$
Thus we have proved (15.99) for all $x_1$, $x_3$, and $x_4$, and the lemma is proved. □

Theorem 15.14. For any four random variables $X_1$, $X_2$, $X_3$, and $X_4$, if
$$I(X_1;X_2) = I(X_1;X_2|X_3) = 0, \quad (15.112)$$
then
$$I(X_3;X_4) \le I(X_3;X_4|X_1) + I(X_3;X_4|X_2). \quad (15.113)$$

Proof. Consider
$$I(X_3;X_4) - I(X_3;X_4|X_1) - I(X_3;X_4|X_2) = E_p \log \frac{p(X_3,X_4)p(X_1,X_3)p(X_1,X_4)p(X_2,X_3)p(X_2,X_4)}{p(X_3)p(X_4)p(X_1)p(X_2)p(X_1,X_3,X_4)p(X_2,X_3,X_4)}, \quad (15.114)$$
where the sum defining the expectation is taken over all $x_1,x_2,x_3,x_4$ such that $p(x_1,x_2,x_3,x_4) > 0$, and we have used $E_p$ to denote expectation with respect to $p(x_1,x_2,x_3,x_4)$. We claim that the above expectation is equal to
$$E_{\tilde p} \log \frac{p(X_3,X_4)p(X_1,X_3)p(X_1,X_4)p(X_2,X_3)p(X_2,X_4)}{p(X_3)p(X_4)p(X_1)p(X_2)p(X_1,X_3,X_4)p(X_2,X_3,X_4)}, \quad (15.115)$$
where $\tilde p(x_1,x_2,x_3,x_4)$ is defined in (15.98).

Toward proving that the claim is correct, we note that (15.115) is the sum of a number of expectations with respect to $\tilde p$. Let us consider one of these expectations, say
$$E_{\tilde p} \log p(X_1,X_3) = \sum_{\substack{x_1,x_2,x_3,x_4:\\ \tilde p(x_1,x_2,x_3,x_4) > 0}} \tilde p(x_1,x_2,x_3,x_4) \log p(x_1,x_3). \quad (15.116)$$
Note that in the above summation, if $\tilde p(x_1,x_2,x_3,x_4) > 0$, then from (15.98), we see that
$$p(x_1,x_3,x_4) > 0, \quad (15.117)$$
and hence
$$p(x_1,x_3) > 0. \quad (15.118)$$
Therefore, the summation in (15.116) is always well-defined. Furthermore, it can be written as
$$\sum_{x_1,x_3,x_4} \log p(x_1,x_3) \sum_{x_2:\ \tilde p(x_1,x_2,x_3,x_4) > 0} \tilde p(x_1,x_2,x_3,x_4) = \sum_{x_1,x_3,x_4} \tilde p(x_1,x_3,x_4) \log p(x_1,x_3). \quad (15.119)$$
Thus $E_{\tilde p} \log p(X_1,X_3)$ depends on $\tilde p(x_1,x_2,x_3,x_4)$ only through $\tilde p(x_1,x_3,x_4)$, which by Lemma 15.13 is equal to $p(x_1,x_3,x_4)$. It then follows that
$$E_{\tilde p} \log p(X_1,X_3) = \sum_{x_1,x_3,x_4} \tilde p(x_1,x_3,x_4) \log p(x_1,x_3) = \sum_{x_1,x_3,x_4} p(x_1,x_3,x_4) \log p(x_1,x_3) = E_p \log p(X_1,X_3). \quad (15.120)\text{--}(15.122)$$
In other words, the expectation of $\log p(X_1,X_3)$ can be taken with respect to either $\tilde p(x_1,x_2,x_3,x_4)$ or $p(x_1,x_2,x_3,x_4)$ without affecting its value. By observing that all the marginals of $p$ in the logarithm in (15.115) involve only subsets of either $\{X_1,X_3,X_4\}$ or $\{X_2,X_3,X_4\}$, we see that similar conclusions can be drawn for all the other expectations in (15.115), and hence the claim is proved.

Thus the claim implies that
$$I(X_3;X_4) - I(X_3;X_4|X_1) - I(X_3;X_4|X_2) = E_{\tilde p} \log \frac{p(X_3,X_4)p(X_1,X_3)p(X_1,X_4)p(X_2,X_3)p(X_2,X_4)}{p(X_3)p(X_4)p(X_1)p(X_2)p(X_1,X_3,X_4)p(X_2,X_3,X_4)} = -\sum_{\substack{x_1,x_2,x_3,x_4:\\ \tilde p(x_1,x_2,x_3,x_4)>0}} \tilde p(x_1,x_2,x_3,x_4) \log \frac{\tilde p(x_1,x_2,x_3,x_4)}{\hat p(x_1,x_2,x_3,x_4)}, \quad (15.123)$$
where
$$\hat p(x_1,x_2,x_3,x_4) = \begin{cases} \dfrac{p(x_1,x_3)p(x_1,x_4)p(x_2,x_3)p(x_2,x_4)}{p(x_1)p(x_2)p(x_3)p(x_4)} & \text{if } p(x_1),p(x_2),p(x_3),p(x_4) > 0\\[2mm] 0 & \text{otherwise.} \end{cases} \quad (15.124)$$
The equality in (15.123) is justified by observing that if $x_1$, $x_2$, $x_3$, and $x_4$ are such that $\tilde p(x_1,x_2,x_3,x_4) > 0$, then
$$p(x_1,x_3),\ p(x_1,x_4),\ p(x_2,x_3),\ p(x_2,x_4),\ p(x_1),\ p(x_2),\ p(x_3),\ p(x_4) \quad (15.125)$$
are all strictly positive, and we see from (15.124) that $\hat p(x_1,x_2,x_3,x_4) > 0$.

To complete the proof, we only need to show that $\hat p(x_1,x_2,x_3,x_4)$ is a probability distribution. Once this is proven, the conclusion of the theorem follows immediately because the summation in (15.123), which is identified as the divergence between $\tilde p(x_1,x_2,x_3,x_4)$ and $\hat p(x_1,x_2,x_3,x_4)$, is always nonnegative by the divergence inequality (Theorem 2.31).

Toward this end, we notice that for $x_1$, $x_2$, and $x_3$ such that $p(x_3) > 0$,
$$p(x_1,x_2,x_3) = \frac{p(x_1,x_3)p(x_2,x_3)}{p(x_3)} \quad (15.126)$$
by the assumption
$$I(X_1;X_2|X_3) = 0, \quad (15.127)$$
and for all $x_1$ and $x_2$,
$$p(x_1,x_2) = p(x_1)p(x_2) \quad (15.128)$$
by the assumption
$$I(X_1;X_2) = 0. \quad (15.129)$$
Then
$$\begin{aligned} \sum_{x_1,x_2,x_3,x_4} \hat p(x_1,x_2,x_3,x_4) &= \sum_{\substack{x_1,x_2,x_3,x_4:\\ p(x_1),p(x_2),p(x_3),p(x_4)>0}} \frac{p(x_1,x_3)p(x_1,x_4)p(x_2,x_3)p(x_2,x_4)}{p(x_1)p(x_2)p(x_3)p(x_4)} \quad (15.130)\text{--}(15.131)\\ &\overset{a)}{=} \sum_{\substack{x_1,x_2,x_3,x_4:\\ p(x_1),p(x_2),p(x_3),p(x_4)>0}} \frac{p(x_1,x_2,x_3)p(x_1,x_4)p(x_2,x_4)}{p(x_1)p(x_2)p(x_4)} \quad (15.132)\\ &\overset{b)}{=} \sum_{\substack{x_1,x_2,x_3,x_4:\\ p(x_1),p(x_2),p(x_3),p(x_4)>0}} \frac{p(x_1,x_2,x_3)p(x_1,x_4)p(x_2,x_4)}{p(x_1,x_2)p(x_4)} \quad (15.133)\\ &= \sum_{\substack{x_1,x_2,x_4:\\ p(x_1),p(x_2),p(x_4)>0}} \frac{p(x_1,x_4)p(x_2,x_4)}{p(x_4)} \sum_{x_3:\ p(x_3)>0} p(x_3|x_1,x_2) \quad (15.134)\\ &= \sum_{\substack{x_1,x_2,x_4:\\ p(x_1),p(x_2),p(x_4)>0}} \frac{p(x_1,x_4)p(x_2,x_4)}{p(x_4)} \quad (15.135)\\ &= \sum_{\substack{x_2,x_4:\\ p(x_2),p(x_4)>0}} p(x_2,x_4) \sum_{x_1:\ p(x_1)>0} p(x_1|x_4) \quad (15.136)\\ &\overset{c)}{=} \sum_{\substack{x_2,x_4:\\ p(x_2),p(x_4)>0}} p(x_2,x_4) \quad (15.137)\\ &\overset{d)}{=} 1, \quad (15.138) \end{aligned}$$
where a) and b) follow from (15.126) and (15.128), respectively. The equality in c) is justified as follows. For $x_1$ such that $p(x_1) = 0$,
$$p(x_1|x_4) = \frac{p(x_1)p(x_4|x_1)}{p(x_4)} = 0. \quad (15.139)$$
Therefore,
$$\sum_{x_1:\ p(x_1)>0} p(x_1|x_4) = \sum_{x_1} p(x_1|x_4) = 1. \quad (15.140)$$
Finally, the equality in d) is justified as follows. For $x_2$ and $x_4$ such that $p(x_2)$ or $p(x_4)$ vanishes, $p(x_2,x_4)$ must vanish because
$$0 \le p(x_2,x_4) \le p(x_2) \quad (15.141)$$
and
$$0 \le p(x_2,x_4) \le p(x_4). \quad (15.142)$$
Therefore,
$$\sum_{\substack{x_2,x_4:\\ p(x_2),p(x_4)>0}} p(x_2,x_4) = \sum_{x_2,x_4} p(x_2,x_4) = 1. \quad (15.143)$$
The theorem is proved. □

Theorem 15.15. The constrained inequality in Theorem 15.14 is a non-Shannon-type inequality.

Proof. The theorem can be proved by considering the point $\tilde h(a) \in \mathcal H_4$ for $a > 0$ as in the proof of Theorem 15.11. The details are left as an exercise. □

The constrained inequality in Theorem 15.14 has the following geometrical interpretation. The constraints in (15.112) correspond to the intersection of two hyperplanes in $\mathcal H_4$ which define the boundary of $\Gamma_4$. Then the inequality (15.113) says that a certain region on the boundary of $\Gamma_4$ is not in $\Gamma_4^*$. It can further be proved by computation (Ying-On Yan, private communication) that the constrained inequality in Theorem 15.14 is not implied by the 12 distinct versions of the unconstrained inequality in Theorem 15.7 (i.e., $\langle 15.62\rangle$) together with the basic inequalities.

We have proved in the last section that the non-Shannon-type inequality (15.62) implies a class of $2^{14}-1$ constrained non-Shannon-type inequalities. We end this section by proving a similar result for the non-Shannon-type constrained inequality in Theorem 15.14.

Theorem 15.16. The inequality
$$I(X_3;X_4) \le I(X_3;X_4|X_1) + I(X_3;X_4|X_2) \quad (15.144)$$
is a non-Shannon-type inequality conditioning on setting both $I(X_1;X_2)$ and $I(X_1;X_2|X_3)$ and any subset of the following 12 Shannon's information measures to zero:
$$\begin{gathered} I(X_1;X_2|X_4),\ I(X_1;X_3|X_4),\ I(X_1;X_4|X_3),\ I(X_2;X_3|X_4),\ I(X_2;X_4|X_3),\\ I(X_3;X_4|X_1),\ I(X_3;X_4|X_2),\ I(X_3;X_4|X_1,X_2),\ H(X_1|X_2,X_3,X_4),\\ H(X_2|X_1,X_3,X_4),\ H(X_3|X_1,X_2,X_4),\ H(X_4|X_1,X_2,X_3). \end{gathered} \quad (15.145)$$

Proof. The proof of this theorem is very similar to the proof of Theorem 15.12. We first note that $I(X_1;X_2)$ and $I(X_1;X_2|X_3)$ together with the 12 Shannon's information measures in (15.145) are exactly the 14 Shannon's information measures in (15.94). We have already shown in the proof of Theorem 15.12 that $\tilde h(a)$ (cf. Figure 15.4) lies in exactly 14 hyperplanes defining the boundary of $\Gamma_4$ which correspond to setting these 14 Shannon's information measures to zero. We also have shown that $\tilde h(a)$ for $a \ge 0$ defines an extreme direction of $\Gamma_4$.

Denote by $\Phi_0$ the intersection of the two hyperplanes in $\mathcal H_4$ which correspond to setting $I(X_1;X_2)$ and $I(X_1;X_2|X_3)$ to zero. Since $\tilde h(a)$ for any $a > 0$ satisfies
$$I(X_1;X_2) = I(X_1;X_2|X_3) = 0, \quad (15.146)$$
$\tilde h(a)$ is in $\Phi_0$. Now for any linear subspace $\Phi$ of $\mathcal H_4$ containing $\tilde h(a)$ such that $\Phi \subset \Phi_0$, we have
$$\tilde h(a) \in \Gamma_4 \cap \Phi. \quad (15.147)$$
Upon substituting the corresponding values in (15.113) for $\tilde h(a)$ with the help of Figure 15.4, we have
$$a \le 0 + 0 = 0, \quad (15.148)$$
which is a contradiction because $a > 0$. Therefore, $\tilde h(a)$ does not satisfy (15.113), and so
$$(\Gamma_4 \cap \Phi) \not\subset \{h \in \mathcal H_4 : h \text{ satisfies } (15.113)\}. \quad (15.149)$$
This means that (15.113) is a non-Shannon-type inequality under the constraint $\Phi$. From the above, we see that $\Phi$ can be taken to be the intersection of $\Phi_0$ and any subset of the 12 hyperplanes which correspond to setting the 12 Shannon's information measures in (15.145) to zero. Hence, (15.113) is a non-Shannon-type inequality conditioning on $I(X_1;X_2)$, $I(X_1;X_2|X_3)$, and any subset of the 12 Shannon's information measures in (15.145) being equal to zero.
In other words, the constrained inequality in Theorem 15.14 in fact induces a class of $2^{12}$ constrained non-Shannon-type inequalities. The theorem is proved. □

15.4 Applications

As we have mentioned in Chapter 13, information inequalities govern the impossibilities in information theory. In this section, we give several applications of the non-Shannon-type inequalities we have proved in this chapter in probability theory and information theory. An application in group theory of the unconstrained inequality proved in Section 15.2 will be discussed in Chapter 16. Non-Shannon-type inequalities also find applications in network coding theory, to be discussed in Part II of this book.

Example 15.17. For the constrained inequality in Theorem 15.14, if we further impose the constraints
$$I(X_3;X_4|X_1) = I(X_3;X_4|X_2) = 0, \quad (15.150)$$
then the right hand side of (15.113) becomes zero. This implies
$$I(X_3;X_4) = 0 \quad (15.151)$$
because $I(X_3;X_4)$ is nonnegative. This means that
$$\left.\begin{array}{l} X_1 \perp X_2\\ X_1 \perp X_2|X_3\\ X_3 \perp X_4|X_1\\ X_3 \perp X_4|X_2 \end{array}\right\} \Rightarrow X_3 \perp X_4. \quad (15.152)$$
We leave it as an exercise for the reader to show that this implication cannot be deduced from the basic inequalities.

Example 15.18. If we impose the constraints
$$I(X_1;X_2) = I(X_1;X_3,X_4) = I(X_3;X_4|X_1) = I(X_3;X_4|X_2) = 0, \quad (15.153)$$
then the right hand side of (15.62) becomes zero, which implies
$$I(X_3;X_4) = 0. \quad (15.154)$$
This means that
$$\left.\begin{array}{l} X_1 \perp X_2\\ X_1 \perp (X_3,X_4)\\ X_3 \perp X_4|X_1\\ X_3 \perp X_4|X_2 \end{array}\right\} \Rightarrow X_3 \perp X_4. \quad (15.155)$$
Note that (15.152) and (15.155) differ only in the second constraint. Again, we leave it as an exercise for the reader to show that this implication cannot be deduced from the basic inequalities.
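Both implications can be probed with the linear program of Section 14.2. The sketch below, reusing the hypothetical is_shannon_type and info helpers from that section, confirms that the implication (15.152), although true by Theorem 15.14, is not provable from the basic inequalities alone.

```python
import numpy as np

X1, X2, X3, X4 = 0, 1, 2, 3
# Constraints of (15.152): X1 _|_ X2, X1 _|_ X2 | X3,
# X3 _|_ X4 | X1, X3 _|_ X4 | X2, encoded as Q h = 0.
Q = np.array([info(X1, X2), info(X1, X2, (X3,)),
              info(X3, X4, (X1,)), info(X3, X4, (X2,))])
# Is -I(X3;X4) >= 0 (i.e., X3 _|_ X4) Shannon-type under the constraints?
print(is_shannon_type(-info(X3, X4), 4, Q=Q))   # False: not deducible
# Example 15.18 differs only in the second constraint; encoding
# I(X1;X3,X4) = 0 instead gives the same "False" verdict.
```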
The Markov chain X3 →(X1, X2) →X4 arises in many communication situations. As an example, consider a person listening to an audio source. Then the situation can be modeled by this Markov chain with X3 being the sound wave generated at the source, X1 and X2 being the sound waves received at the two ear drums, and X4 being the nerve impulses which eventually arrive at the brain. The inequality (15.161) gives an upper bound on I(X3; X4) which is tighter than what can be implied by the basic inequalities. There is some resemblance between the constrained inequality (15.161) and the data processing theorem, but they do not appear to be directly related. Problems 383 Chapter Summary Characterizations of Γ ∗ n and Γ ∗ n: 1. Γ ∗ 2 = Γ2. 2. Γ ∗ 3 ̸= Γ3, but Γ ∗ 3 = Γ3. 3. Γ ∗ 4 ̸= Γ4. 4. Γ ∗ n is a convex cone. An Unconstrained Non-Shannon-Type Inequality: 2I(X3; X4) ≤I(X1; X2) + I(X1; X3, X4) + 3I(X3; X4|X1) + I(X3; X4|X2). A Constrained Non-Shannon-Type Inequality: If I(X1; X2) = I(X1; X2|X3) = 0, then I(X3; X4) ≤I(X3; X4|X1) + I(X3; X4|X2). Problems 1. Verify by ITIP that the unconstrained information inequality in Theo-rem 15.7 is of non-Shannon-type. 2. Verify by ITIP and prove analytically that the constrained information inequality in Theorem 15.14 is of non-Shannon-type. 3. Use ITIP to verify the unconstrained information inequality in Theo-rem 15.7. Hint: Create two auxiliary random variables as in the proof of Theorem 15.7 and impose appropriate constraints on the random vari-ables. 4. Verify by ITIP that the implications in Examples 15.17 and 15.18 cannot be deduced from the basic inequalities. 5. Can you show that the sets of constraints in Examples 15.17 and 15.18 are in fact different? 6. Consider an information inequality involving random variables X1, X2, · · ·, Xn, which can be written as X α∈2Nn{∅} cαH(Xα) ≥0, where Nn = {1, 2, · · · , n}. For i ∈Nn, let ri = X α∈2Nn{∅} cαnα(i), where nα(i) is equal to 1 if i ∈α and is equal to 0 otherwise. 384 15 Beyond Shannon-Type Inequalities a) Show that ri is the coefficient of H(Xi|XNn−{i}) when the information inequality is expressed in terms of the elemental forms of Shannon’s information measures for n random variables. b) Show that if the information inequality always holds, then ri ≥0 for all i ∈Nn. (Chan .) 7. Let Xi, i = 1, 2, · · · , n, Z, and T be discrete random variables. a) Prove that nI(Z; T) − n X j=1 I(Z; T|Xj) −nI(Z; T|Xi) ≤I(Xi; Z, T) + n X j=1 H(Xj) −H(X1, X2, · · · , Xn). Hint: When n = 2, this inequality reduces to the unconstrained non-Shannon-type inequality in Theorem 15.7. b) Prove that nI(Z; T) −2 n X j=1 I(Z; T|Xj) ≤1 n n X i=1 I(Xi; Z, T) + n X j=1 H(Xj) −H(X1, X2, · · · , Xn). ( Zhang and Yeung .) 8. Let p(x1, x2, x3, x4) be the joint distribution for random variables X1, X2, X3, and X4 such that I(X1; X2|X3) = I(X2; X4|X3) = 0, and let ˜ p be defined in (15.98). a) Show that ˇ p(x1, x2, x3, x4) = ( c · p(x1,x2,x3)p(x1,x4)p(x2,x4) p(x1,x2)p(x4) if p(x1, x2), p(x4) > 0 0 otherwise defines a probability distribution for an appropriate c ≥1. b) Prove that ˜ p(x1, x2, x3) = p(x1, x2, x3) for all x1, x2, and x3. c) By considering D(˜ p∥ˇ p) ≥0, prove that H(X13) + H(X14) + H(X23) + H(X24) + H(X34) ≥H(X3) + H(X4) + H(X12) + H(X134) + H(X234), where H(X134) denotes H(X1, X3, X4), etc. d) Prove that under the constraints in (15.112), the inequality in (15.113) is equivalent to the inequality in c). Historical Notes 385 The inequality in c) is referred to as the Ingleton inequality for entropy in the literature. 
For the origin of the Ingleton inequality, see Problem 9 in Chapter 16. (Mat´ uˇ s .) Historical Notes In 1986, Pippenger asked whether there exist constraints on the entropy function other than the polymatroidal axioms, which are equivalent to the basic inequalities. He called the constraints on the entropy function the laws of information theory. The problem had been open until Zhang and Yeung discovered for four random variables first a constrained non-Shannon-type inequality and then an unconstrained non-Shannon-type inequality in the late 1990’s. The inequality reported in has been further generalized by Makarychev et al. and Zhang . The existence of these inequalities implies that there are laws in information theory beyond those laid down by Shannon . The non-Shannon-type inequalities that have been discovered induce outer bounds on the region Γ ∗ 4 which are tighter than Γ4. Mat´ uˇ s and Studen´ y showed that an entropy function in Γ4 is entropic if it satisfies the Ingleton inequality (see Problem 9 in Chapter 16). This gives an inner bound on Γ ∗ 4. A more explicit proof of this inner bound can be found in , where the bound was shown not to be tight. Mat´ uˇ s has obtained asymptotically tight inner bounds on Γ ∗ n by constructing entropy functions from matroids. Dougherty et al. discovered a host of unconstrained non-Shannon-type inequalities by means of a computer search based on ITIP and the Markov chain construction in (see Problem 3). Recently, Mat´ uˇ s proved an infinite class of unconstrained non-Shannon-type inequalities, implying that Γ ∗ n is not a pyramid. Chan proved a characterization for an inequality for differential en-tropy in terms of its discrete version. Lnˇ eniˇ cka proved that the tightness of the continuous version of the unconstrained non-Shannon-type inequality reported in can be achieved by a multivariate Gaussian distribution. In the 1990’s, Mat´ uˇ s and Studen´ y studied the structure of conditional independence (which subsumes the implication problem) of random variables. Mat´ uˇ s finally settled the problem for four random variables by means of a constrained non-Shannon-type inequality which is a variation of the inequality reported in . The von Neumann entropy is an extension of classical entropy (as discussed in this book) to the field of quantum mechanics. The strong subadditivity of the von Neumann entropy proved by Lieb and Ruskai plays the same role as the basic inequalities for classical entropy. Pippenger proved that for a three-party system, there exists no inequality for the von Neumann entropy beyond strong subadditivity. Subsequently, Linden and Winter discov-ered for a four-party system a constrained inequality for the von Neumann 386 15 Beyond Shannon-Type Inequalities entropy which is independent of strong subadditivity. We refer the reader to the book by Nielsen and Chuang for an introduction to quantum infor-mation theory. Along a related direction, Hammer et al. have shown that all linear inequalities that always hold for Kolmogorov complexity also always hold for entropy, and vice versa. This establishes a one-to-one correspondence between entropy and Kolmogorov complexity. 16 Entropy and Groups The group is the first major mathematical structure in abstract algebra, while entropy is the most basic measure of information. Group theory and infor-mation theory are two seemingly unrelated subjects which turn out to be intimately related to each other. 
This chapter explains this intriguing relation between these two fundamental subjects. Those readers who have no knowl-edge in group theory may skip this introduction and go directly to the next section. Let X1 and X2 be any two random variables. Then H(X1) + H(X2) ≥H(X1, X2), (16.1) which is equivalent to the basic inequality I(X1; X2) ≥0. (16.2) Let G be any finite group and G1 and G2 be subgroups of G. We will show in Section 16.4 that |G||G1 ∩G2| ≥|G1||G2|, (16.3) where |G| denotes the order of G and G1 ∩G2 denotes the intersection of G1 and G2 (G1 ∩G2 is also a subgroup of G, see Proposition 16.13). By rearranging the terms, the above inequality can be written as log |G| |G1| + log |G| |G2| ≥log |G| |G1 ∩G2|. (16.4) By comparing (16.1) and (16.4), one can easily identify the one-to-one corre-spondence between these two inequalities, namely that Xi corresponds to Gi, i = 1, 2, and (X1, X2) corresponds to G1 ∩G2. While (16.1) is true for any pair of random variables X1 and X2, (16.4) is true for any finite group G and subgroups G1 and G2. Recall from Chapter 13 that the region Γ ∗ n characterizes all information inequalities (involving n random variables). In particular, we have shown in 388 16 Entropy and Groups Section 15.1 that the region Γ ∗ n is sufficient for characterizing all unconstrained information inequalities, i.e., by knowing Γ ∗ n, one can determine whether any unconstrained information inequality always holds. The main purpose of this chapter is to obtain a characterization of Γ ∗ n in terms of finite groups. An important consequence of this result is a one-to-one correspondence between unconstrained information inequalities and group inequalities. Specifically, for every unconstrained information inequality, there is a corresponding group inequality, and vice versa. A special case of this correspondence has been given in (16.1) and (16.4). By means of this result, unconstrained information inequalities can be proved by techniques in group theory, and a certain form of inequalities in group theory can be proved by techniques in information theory. In particular, the unconstrained non-Shannon-type inequality in Theorem 15.7 corresponds to the group inequality |G1 ∩G3|3|G1 ∩G4|3|G3 ∩G4|3|G2 ∩G3||G2 ∩G4| ≤|G1||G1 ∩G2||G3|2|G4|2|G1 ∩G3 ∩G4|4|G2 ∩G3 ∩G4|, (16.5) where Gi are subgroups of a finite group G, i = 1, 2, 3, 4. The meaning of this inequality and its implications in group theory are yet to be understood. 16.1 Group Preliminaries In this section, we present the definition and some basic properties of a group which are essential for subsequent discussions. Definition 16.1. A group is a set of objects G together with a binary oper-ation on the elements of G, denoted by “◦” unless otherwise specified, which satisfy the following four axioms: 1. Closure For every a, b in G, a ◦b is also in G. 2. Associativity For every a, b, c in G, a ◦(b ◦c) = (a ◦b) ◦c. 3. Existence of Identity There exists an element e in G such that a◦e = e◦a = a for every a in G. 4. Existence of Inverse For every a in G, there exists an element b in G such that a ◦b = b ◦a = e. Proposition 16.2. For any group G, the identity element is unique. Proof. Let both e and e′ be identity elements in a group G. Since e is an identity element, e′ ◦e = e, (16.6) and since e′ is also an identity element, e′ ◦e = e′. (16.7) It follows by equating the right hand sides of (16.6) and (16.7) that e = e′, which implies the uniqueness of the identity element of a group. 
⊓ ⊔ 16.1 Group Preliminaries 389 Proposition 16.3. For every element a in a group G, its inverse is unique. Proof. Let b and b′ be inverses of an element a, so that a ◦b = b ◦a = e (16.8) and a ◦b′ = b′ ◦a = e. (16.9) Then b = b ◦e (16.10) = b ◦(a ◦b′) (16.11) = (b ◦a) ◦b′ (16.12) = e ◦b′ (16.13) = b′, (16.14) where (16.11) and (16.13) follow from (16.9) and (16.8), respectively, and (16.12) is by associativity. Therefore, the inverse of a is unique. ⊓ ⊔ Thus the inverse of a group element a is a function of a, and it will be denoted by a−1. Definition 16.4. The number of elements of a group G is called the order of G, denoted by |G|. If |G| < ∞, G is called a finite group, otherwise it is called an infinite group. There is an unlimited supply of examples of groups. Some familiar ex-amples are: the integers under addition, the rationals excluding zero under multiplication, and the set of real-valued 2×2 matrices under addition, where addition and multiplication refer to the usual addition and multiplication for real numbers and matrices. In each of these examples, the operation (addition or multiplication) plays the role of the binary operation “◦” in Definition 16.2. All the above are examples of infinite groups. In this chapter, however, we are concerned with finite groups. In the following, we discuss two examples of finite groups in details. Example 16.5 (Modulo 2 Addition). The trivial group consists of only the iden-tity element. The simplest nontrivial group is the group of modulo 2 addition. The order of this group is 2, and the elements are {0, 1}. The binary operation, denoted by “+”, is defined by following table: + 0 1 0 0 1 1 1 0 390 16 Entropy and Groups The four axioms of a group simply say that certain constraints must hold in the above table. We now check that all these axioms are satisfied. First, the closure axiom requires that all the entries in the table are elements in the group, which is easily seen to be the case. Second, it is required that associativity holds. To this end, it can be checked in the above table that for all a, b, and c, a + (b + c) = (a + b) + c. (16.15) For example, 0 + (1 + 1) = 0 + 0 = 0, (16.16) while (0 + 1) + 1 = 1 + 1 = 0, (16.17) which is the same as 0 + (1 + 1). Third, the element 0 is readily identified as the unique identity. Fourth, it is readily seen that an inverse exists for each element in the group. For example, the inverse of 1 is 1, because 1 + 1 = 0. (16.18) Thus the above table defines a group of order 2. It happens in this example that the inverse of each element is the element itself, which is not true for a group in general. We remark that in the context of a group, the elements in the group should be regarded strictly as symbols only. In particular, one should not associate group elements with magnitudes as we do for real numbers. For instance, in the above example, one should not think of 0 as being less than 1. The element 0, however, is a special symbol which plays the role of the identity of the group. We also notice that for the group in the above example, a + b is equal to b + a for all group elements a and b. A group with this property is called a commutative group or an Abelian group1. Example 16.6 (Symmetric Group). Consider a permutation of the components of a vector x = (x1, x2, · · · , xr) (16.19) given by σ[x] = (xσ(1), xσ(2), · · · , xσ(r)), (16.20) where σ : {1, 2, · · · , r} →{1, 2, · · · , r} (16.21) is a one-to-one mapping. 
The one-to-one mapping σ is called a permutation on {1, 2, · · · , r}, which is represented by σ = (σ(1), σ(2), · · · , σ(r)). (16.22) 1 The Abelian group is named after the Norwegian mathematician Niels Henrik Abel (1802-1829). 16.1 Group Preliminaries 391 For two permutations σ1 and σ2, define σ1 ◦σ2 as the composite function of σ1 and σ2. For example, for r = 4, suppose σ1 = (2, 1, 4, 3) (16.23) and σ2 = (1, 4, 2, 3). (16.24) Then σ1 ◦σ2 is given by σ1 ◦σ2(1) = σ1(σ2(1)) = σ1(1) = 2 σ1 ◦σ2(2) = σ1(σ2(2)) = σ1(4) = 3 σ1 ◦σ2(3) = σ1(σ2(3)) = σ1(2) = 1 σ1 ◦σ2(4) = σ1(σ2(4)) = σ1(3) = 4, (16.25) or σ1 ◦σ2 = (2, 3, 1, 4). (16.26) The reader can easily check that σ2 ◦σ1 = (4, 1, 2, 3), (16.27) which is different from σ1 ◦σ2. Therefore, the operation “◦” is not commuta-tive. We now show that the set of all permutations on {1, 2, · · · , r} and the operation “◦” form a group, called the symmetric group on {1, 2, · · · , r}. First, for two permutations σ1 and σ2, since both σ1 and σ2 are one-to-one mappings, so is σ1◦σ2. Therefore, the closure axiom is satisfied. Second, for permutations σ1, σ2, and σ3, σ1 ◦(σ2 ◦σ3)(i) = σ1(σ2 ◦σ3(i)) (16.28) = σ1(σ2(σ3(i))) (16.29) = σ1 ◦σ2(σ3(i)) (16.30) = (σ1 ◦σ2) ◦σ3(i) (16.31) for 1 ≤i ≤r. Therefore, associativity is satisfied. Third, it is clear that the identity map is the identity element. Fourth, for a permutation σ, it is clear that its inverse is σ−1, the inverse mapping of σ which is defined because σ is one-to-one. Therefore, the set of all permutations on {1, 2, · · · , r} and the operation “◦” form a group. The order of this group is evidently equal to (r!). Definition 16.7. Let G be a group with operation “◦”, and S be a subset of G. If S is a group with respect to the operation “◦”, then S is called a subgroup of G. Definition 16.8. Let S be a subgroup of a group G and a be an element of G. The left coset of S with respect to a is the set a◦S = {a◦s : s ∈S}. Similarly, the right coset of S with respect to a is the set S ◦a = {s ◦a : s ∈S}. 392 16 Entropy and Groups In the sequel, only the left coset will be used. However, any result which applies to the left coset also applies to the right coset, and vice versa. For simplicity, a ◦S will be denoted by aS. Proposition 16.9. For a1 and a2 in G, a1S and a2S are either identical or disjoint. Further, a1S and a2S are identical if and only if a1 and a2 belong to the same left coset of S. Proof. Suppose a1S and a2S are not disjoint. Then there exists an element b in a1S ∩a2S such that b = a1 ◦s1 = a2 ◦s2, (16.32) for some si in S, i = 1, 2. Then a1 = (a2 ◦s2) ◦s−1 1 = a2 ◦(s2 ◦s−1 1 ) = a2 ◦t, (16.33) where t = s2 ◦s−1 1 is in S. We now show that a1S ⊂a2S. For an element a1 ◦s in a1S, where s ∈S, a1 ◦s = (a2 ◦t) ◦s = a2 ◦(t ◦s) = a2 ◦u, (16.34) where u = t◦s is in S. This implies that a1 ◦s is in a2S. Thus, a1S ⊂a2S. By symmetry, a2S ⊂a1S. Therefore, a1S = a2S. Hence, if a1S and a2S are not disjoint, then they are identical. Equivalently, a1S and a2S are either identical or disjoint. This proves the first part of the proposition. We now prove the second part of the proposition. Since S is a group, it contains e, the identity element. Then for any group element a, a = a ◦e is in aS because e is in S. If a1S and a2S are identical, then a1 ∈a1S and a2 ∈a2S = a1S. Therefore, a1 and a2 belong to the same left coset of S. To prove the converse, assume a1 and a2 belong to the same left coset of S. From the first part of the proposition, we see that a group element belongs to one and only one left coset of S. 
Since a1 is in a1S and a2 is in a2S, and a1 and a2 belong to the same left coset of S, we see that a1S and a2S are identical. The proposition is proved. ⊓ ⊔ Proposition 16.10. Let S be a subgroup of a group G and a be an element of G. Then |aS| = |S|, i.e., the numbers of elements in all the left cosets of S are the same, and they are equal to the order of S. Proof. Consider two elements a ◦s1 and a ◦s2 in a ◦S, where s1 and s2 are in S such that a ◦s1 = a ◦s2. (16.35) Then a−1 ◦(a ◦s1) = a−1 ◦(a ◦s2) (16.36) (a−1 ◦a) ◦s1 = (a−1 ◦a) ◦s2 (16.37) e ◦s1 = e ◦s2 (16.38) s1 = s2. (16.39) 16.2 Group-Characterizable Entropy Functions 393 Thus each element in S corresponds to a unique element in aS. Therefore, |aS| = |S| for all a ∈G. ⊓ ⊔ We are just one step away from obtaining the celebrated Lagrange’s theo-rem stated below. Theorem 16.11 (Lagrange’s Theorem). If S is a subgroup of a finite group G, then |S| divides |G|. Proof. Since a ∈aS for every a ∈G, every element of G belongs to a left coset of S. Then from Proposition 16.9, we see that the distinct left cosets of S partition G. Therefore |G|, the total number of elements in G, is equal to the number of distinct cosets of S multiplied by the number of elements in each left coset, which is equal to |S| by Proposition 16.10. This implies that |S| divides |G|, proving the theorem. ⊓ ⊔ The following corollary is immediate from the proof of Lagrange’s Theo-rem. Corollary 16.12. Let S be a subgroup of a group G. The number of distinct left cosets of S is equal to |G| |S| . 16.2 Group-Characterizable Entropy Functions Recall from Chapter 13 that the region Γ ∗ n consists of all the entropy func-tions in the entropy space Hn for n random variables. As a first step toward establishing the relation between entropy and groups, we discuss in this sec-tion entropy functions in Γ ∗ n which can be described by a finite group G and subgroups G1, G2, · · · , Gn. Such entropy functions are said to be group-characterizable. The significance of this class of entropy functions will become clear in the next section. In the sequel, we will make use of the intersections of subgroups extensively. We first prove that the intersection of two subgroups is also a subgroup. Proposition 16.13. Let G1 and G2 be subgroups of a group G. Then G1∩G2 is also a subgroup of G. Proof. It suffices to show that G1 ∩G2 together with the operation “◦” satisfy all the axioms of a group. First, consider two elements a and b of G in G1∩G2. Since both a and b are in G1, (a ◦b) is in G1. Likewise, (a ◦b) is in G2. Therefore, a ◦b is in G1 ∩G2. Thus the closure axiom holds for G1 ∩G2. Second, associativity for G1 ∩G2 inherits from G. Third, G1 and G2 both contain the identity element because they are groups. Therefore, the identity element is in G1 ∩G2. Fourth, for an element a ∈Gi, since Gi is a group, a−1 is in Gi, i = 1, 2. Thus for an element a ∈G1 ∩G2, a−1 is also in G1 ∩G2. Therefore, G1 ∩G2 is a group and hence a subgroup of G. ⊓ ⊔ 394 16 Entropy and Groups Corollary 16.14. Let G1, G2, · · · , Gn be subgroups of a group G. Then ∩n i=1Gi is also a subgroup of G. In the rest of the chapter, we let Nn = {1, 2, · · · , n} and denote ∩i∈αGi by Gα, where α is a nonempty subset of Nn. Lemma 16.15. Let Gi be subgroups of a group G and ai be elements of G, i ∈α. Then |∩i∈αaiGi| =  |Gα| if T i∈α aiGi ̸= ∅ 0 otherwise. (16.40) Proof. 
For the special case that α is a singleton, i.e., α = {i} for some i ∈ Nn, (16.40) reduces to

|aiGi| = |Gi|,    (16.41)

which has already been proved in Proposition 16.10.

Let α be any nonempty subset of Nn. If ∩_{i∈α} aiGi = ∅, then (16.40) is obviously true. If ∩_{i∈α} aiGi ̸= ∅, then there exists x ∈ ∩_{i∈α} aiGi such that for all i ∈ α,

x = ai ◦ si,    (16.42)

where si ∈ Gi. For any i ∈ α and for any y ∈ Gα, consider

x ◦ y = (ai ◦ si) ◦ y = ai ◦ (si ◦ y).    (16.43)

Since both si and y are in Gi, si ◦ y is in Gi. Thus x ◦ y is in aiGi for all i ∈ α, or x ◦ y is in ∩_{i∈α} aiGi. Moreover, for y, y′ ∈ Gα, if x ◦ y = x ◦ y′, then y = y′. Therefore, each element in Gα corresponds to a unique element in ∩_{i∈α} aiGi. Hence,

|∩_{i∈α} aiGi| = |Gα|,    (16.44)

proving the lemma. ⊓⊔

The relation between a finite group G and subgroups G1 and G2 is illustrated by the membership table in Figure 16.1. In this table, an element of G is represented by a dot. The first column represents the subgroup G1, with the dots in the first column being the elements in G1. The other columns represent the left cosets of G1. By Proposition 16.10, all the columns have the same number of dots. Similarly, the first row represents the subgroup G2 and the other rows represent the left cosets of G2. Again, all the rows have the same number of dots. The upper left entry in the table represents the subgroup G1 ∩ G2. There are |G1 ∩ G2| dots in this entry, with one of them representing the identity element. Any other entry represents the intersection between a left coset of G1 and a left coset of G2, and by Lemma 16.15, the number of dots in each of these entries is either equal to |G1 ∩ G2| or zero.

Fig. 16.1. The membership table for a finite group G and subgroups G1 and G2 (the first column and first row represent G1 and G2, and the upper left entry represents G1,2 = G1 ∩ G2).

Since all the columns have the same number of dots and all the rows have the same number of dots, we say that the table in Figure 16.1 exhibits a quasi-uniform structure. We have already seen a similar structure in Figure 6.1 for the two-dimensional strong joint typicality array, which we reproduce in Figure 16.2. In this array, when n is large, all the columns have approximately the same number of dots and all the rows have approximately the same number of dots. For this reason, we say that the two-dimensional strong typicality array exhibits an asymptotic quasi-uniform structure. In a strong typicality array, however, each entry can contain only one dot, while in a membership table, each entry can contain multiple dots. One can make a similar comparison between a strong joint typicality array for any n ≥ 2 random variables and the membership table for a finite group with n subgroups. The details are omitted here.

Fig. 16.2. A two-dimensional strong typicality array (rows indexed by x ∈ S^n_[X], columns by y ∈ S^n_[Y], with a dot for each (x, y) ∈ T^n_[XY]; the numbers of rows, columns, and dots are approximately 2^{nH(X)}, 2^{nH(Y)}, and 2^{nH(X,Y)}, respectively).

Theorem 16.16. Let Gi, i ∈ Nn, be subgroups of a group G. Then h ∈ Hn defined by

hα = log (|G|/|Gα|)    (16.45)

for all nonempty subsets α of Nn is entropic, i.e., h ∈ Γ∗n.

Proof. It suffices to show that there exists a collection of random variables X1, X2, · · · , Xn such that

H(Xα) = log (|G|/|Gα|)    (16.46)

for all nonempty subsets α of Nn. We first introduce a uniform random variable Λ defined on the sample space G with probability mass function

Pr{Λ = a} = 1/|G|    (16.47)

for all a ∈ G. For any i ∈ Nn, let random variable Xi be a function of Λ such that Xi = aGi if Λ = a. Let α be a nonempty subset of Nn.
Since Xi = aiGi for all i ∈α if and only if Λ is equal to some b ∈∩i∈αaiGi, Pr{Xi = aiGi : i ∈α} = | T i∈α aiGi| |G| (16.48) = ( |Gα| |G| if T i∈α aiGi ̸= ∅ 0 otherwise (16.49) by Lemma 16.15. In other words, (Xi, i ∈α) is distributed uniformly on its support whose cardinality is |G| |Gα|. Then (16.46) follows and the theorem is proved. ⊓ ⊔ Definition 16.17. Let G be a finite group and G1, G2, · · · , Gn be subgroups of G. Let h be a vector in Hn. If hα = log |G| |Gα| for all nonempty subsets α of Nn, then (G, G1, · · · , Gn) is a group characterization of h. Theorem 16.16 asserts that certain entropy functions in Γ ∗ n have a group characterization. These are called group-characterizable entropy functions, which will be used in the next section to obtain a group characterization of the region Γ ∗ n. We end this section by giving a few examples of such entropy functions. Example 16.18. Fix any subset β of N3 = {1, 2, 3} and define a vector h ∈H3 by hα = log 2 if α ∩β ̸= ∅ 0 otherwise. (16.50) We now show that h has a group characterization. Let G = {0, 1} be the group of modulo 2 addition in Example 16.5, and for i = 1, 2, 3, let Gi =  {0} if i ∈β G otherwise. (16.51) 16.2 Group-Characterizable Entropy Functions 397 Then for a nonempty subset α of N3, if α ∩β ̸= ∅, there exists an i in α such that i is also in β, and hence by definition Gi = {0}. Thus, Gα = \ i∈α Gi = {0}. (16.52) Therefore, log |G| |Gα| = log |G| |{0}| = log 2 1 = log 2. (16.53) If α ∩β = ∅, then Gi = G for all i ∈α, and Gα = \ i∈α Gi = G. (16.54) Therefore, log |G| |Gα| = log |G| |G| = log 1 = 0. (16.55) Then we see from (16.50), (16.53), and (16.55) that hα = log |G| |Gα| (16.56) for all nonempty subsets α of N3. Hence, (G, G1, G2, G3) is a group charac-terization of h. Example 16.19. This is a generalization of the last example. Fix any non-empty subset β of Nn and define a vector h ∈Hn by hα = log 2 if α ∩β ̸= ∅ 0 otherwise. (16.57) Then (G, G1, G2, · · · , Gn) is a group characterization of h, where G is the group of modulo 2 addition, and Gi =  {0} if i ∈β G otherwise. (16.58) By letting β = ∅, we have h = 0. Thus we see that (G, G1, G2, · · · , Gn) is a group characterization of the origin of Hn, with G = G1 = G2 = · · · = Gn. Example 16.20. Define a vector h ∈H3 as follows: hα = min(|α|, 2). (16.59) Let F be the group of modulo 2 addition, G = F × F, and G1 = {(0, 0), (1, 0)} (16.60) G2 = {(0, 0), (0, 1)} (16.61) G3 = {(0, 0), (1, 1)}. (16.62) Then (G, G1, G2, G3) is a group characterization of h. 398 16 Entropy and Groups 16.3 A Group Characterization of Γ ∗ n We have introduced in the last section the class of entropy functions in Γ ∗ n which have a group characterization. However, an entropy function h ∈Γ ∗ n may not have a group characterization due to the following observation. Sup-pose h ∈Γ ∗ n. Then there exists a collection of random variables X1, X2, · · · , Xn such that hα = H(Xα) (16.63) for all nonempty subsets α of Nn. If (G, G1, · · · , Gn) is a group characterization of h, then H(Xα) = log |G| |Gα| (16.64) for all nonempty subsets of Nn. Since both |G| and |Gα| are integers, H(Xα) must be the logarithm of a rational number. However, the joint entropy of a set of random variables in general is not necessarily the logarithm of a rational number (see Corollary 2.44). Therefore, it is possible to construct an entropy function h ∈Γ ∗ n which has no group characterization. 
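Group characterizations like the one in Example 16.20 are easy to verify mechanically. The following sketch (our own illustration; the helper name G_alpha is ours) computes hα = log(|G|/|Gα|) in bits for G = F × F with F the group of modulo 2 addition, and checks that it agrees with hα = min(|α|, 2) in (16.59).

```python
from itertools import combinations
from math import log2

# Example 16.20: G = F x F under componentwise modulo 2 addition.
G = {(0, 0), (0, 1), (1, 0), (1, 1)}
subgroups = {1: {(0, 0), (1, 0)},    # G1, cf. (16.60)
             2: {(0, 0), (0, 1)},    # G2, cf. (16.61)
             3: {(0, 0), (1, 1)}}    # G3, cf. (16.62)

def G_alpha(alpha):
    """Intersection of the subgroups indexed by alpha (a subgroup by Cor. 16.14)."""
    g = set(G)
    for i in alpha:
        g &= subgroups[i]
    return g

for r in (1, 2, 3):
    for alpha in combinations((1, 2, 3), r):
        h = log2(len(G) / len(G_alpha(alpha)))   # h_alpha in bits, as in (16.45)
        assert h == min(len(alpha), 2)           # agrees with (16.59)
```

The same style of brute-force check applies equally to the group characterizations in Examples 16.18 and 16.19.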
Although h ∈Γ ∗ n does not imply h has a group characterization, it turns out that the set of all h ∈Γ ∗ n which have a group characterization is almost good enough to characterize the region Γ ∗ n, as we will see next. Definition 16.21. Define the following region in Hn: Υn = {h ∈Hn : h has a group characterization}. (16.65) By Theorem 16.16, if h ∈Hn has a group characterization, then h ∈Γ ∗ n. Therefore, Υn ⊂Γ ∗ n. We will prove as a corollary of the next theorem that con(Υn), the convex closure of Υn, is in fact equal to Γ ∗ n, the closure of Γ ∗ n. Theorem 16.22. For any h ∈Γ ∗ n, there exists a sequence {f (r)} in Υn such that limr→∞1 rf (r) = h. We need the following lemma to prove this theorem. The proof of this lemma resembles the proof of the strong conditional AEP (Theorem 6.10). Nevertheless, we give a sketch of the proof for the sake of completeness. Lemma 16.23. Let X be a random variable such that |X| < ∞and the distri-bution {p(x)} is rational, i.e., p(x) is a rational number for all x ∈X. Without loss of generality, assume p(x) is a rational number with denominator q for all x ∈X. Then for r = q, 2q, 3q, · · ·, lim r→∞ 1 r log r! Q x(rp(x))! = H(X). (16.66) 16.3 A Group Characterization of Γ ∗ n 399 Proof. Applying Lemma 6.11, we can obtain 1 r ln r! Q x(rp(x))! ≤− X x p(x) ln p(x) + r + 1 r ln(r + 1) −ln r (16.67) = He(X) + 1 r ln r +  1 + 1 r  ln  1 + 1 r  . (16.68) This upper bound tends to He(X) as r →∞. On the other hand, we can obtain 1 r ln r! Q x(rp(x))! ≥− X x  p(x) + 1 r  ln  p(x) + 1 r  −ln r r . (16.69) This lower bound also tends to He(X) as r →∞. Then the proof is completed by changing the base of the logarithm if necessary. ⊓ ⊔ Proof of Theorem 16.22. For any h ∈Γ ∗ n, there exists a collection of random variables X1, X2, · · · , Xn such that hα = H(Xα) (16.70) for all nonempty subsets α of Nn. We first consider the special case that |Xi| < ∞for all i ∈Nn and the joint distribution of X1, X2, · · · , Xn is ra-tional. We want to show that there exists a sequence {f (r)} in Υn such that limr→∞1 rf (r) = h. Denote Q i∈α Xi by Xα. For any nonempty subset α of Nn, let Qα be the marginal distribution of Xα. Assume without loss of generality that for any nonempty subset α of Nn and for all a ∈Xα, Qα(a) is a rational number with denominator q. For each r = q, 2q, 3q, · · ·, fix a sequence xNn = (xNn,1, xNn,2, · · · xNn,r) where for all j = 1, 2, · · · , r, xNn,j = (xi,j : i ∈Nn) ∈XNn, such that N(a; xNn), the number of occurrences of a in sequence xNn, is equal to rQNn(a) for all a ∈XNn. The existence of such a sequence is guaranteed by that all the values of the joint distribution of XNn are rational num-bers with denominator q. Also, we denote the sequence of r elements of Xα, (xα,1, xα,2, · · · xα,r), where xα,j = (xi,j : i ∈α), by xα. Let a ∈Xα. It is easy to check that N(a; xα), the number of occurrences of a in the sequence xα, is equal to rQα(a) for all a ∈Xα. 400 16 Entropy and Groups Let G be the group of all permutations on {1, 2, · · · , r}, i.e., the symmetric group on {1, 2, · · · , r} (cf. Example 16.6). The group G depends on r, but for simplicity, we do not state this dependency explicitly. For any i ∈Nn, define Gi = {σ ∈G : σ[xi] = xi}, where σ[xi] = (xi,σ(1), xi,σ(2), · · · , xi,σ(r)). (16.71) It is easy to check that Gi is a subgroup of G. Let α be a nonempty subset of Nn. 
Then Gα = \ i∈α Gi (16.72) = \ i∈α {σ ∈G : σ[xi] = xi} (16.73) = {σ ∈G : σ[xi] = xi for all i ∈α} (16.74) = {σ ∈G : σ[xα] = xα}, (16.75) where σ[xα] = (xα,σ(1), xα,σ(2), · · · , xα,σ(r)). (16.76) For any a ∈Xα, define the set Lxα(a) = {j ∈{1, 2, · · · , r} : xα,j = a}. (16.77) Lxα(a) contains the “locations” of a in xα. Then σ[xα] = xα if and only if for all a ∈Xα, j ∈Lxα(a) implies σ(j) ∈Lxα(a). Since |Lxα(a)| = N(a; xα) = rQα(a), (16.78) |Gα| = Y a∈Xα (rQα(a))! (16.79) and therefore |G| |Gα| = r! Q a∈Xα(rQα(a))!. (16.80) By Lemma 16.23, lim r→∞ 1 r log |G| |Gα| = H(Xα) = hα. (16.81) Recall that G and hence all its subgroups depend on r. Define f (r) by f (r) α = log |G| |Gα| (16.82) for all nonempty subsets α of Nn. Then f (r) ∈Υn and 16.4 Information Inequalities and Group Inequalities 401 lim r→∞ 1 r f (r) = h. (16.83) We have already proved the theorem for the special case that h is the entropy function of a collection of random variables X1, X2, · · · , Xn with finite alphabets and a rational joint distribution. To complete the proof, we only have to note that for any h ∈Γ ∗ n, it is always possible to construct a sequence {h(k)} in Γ ∗ n such that limk→∞h(k) = h, where h(k) is the entropy function of a collection of random variables X(k) 1 , X(k) 2 , · · · , X(k) n with finite alphabets and a rational joint distribution. This can be proved by techniques similar to those used in Appendix 2.A together with the continuity of the entropy function for a fixed finite support (Section 2.3). The details are omitted here. ⊓ ⊔ Corollary 16.24. con(Υn) = Γ ∗ n. Proof. First of all, Υn ⊂Γ ∗ n. By taking convex closure, we have con(Υn) ⊂ con(Γ ∗ n). By Theorem 15.5, Γ ∗ n is convex. Therefore, con(Γ ∗ n) = Γ ∗ n, and we have con(Υn) ⊂Γ ∗ n. On the other hand, we have shown in Example 16.19 that the origin of Hn has a group characterization and therefore is in Υn. It then follows from Theorem 16.22 that Γ ∗ n ⊂con(Υn). Hence, we conclude that Γ ∗ n = con(Υn), completing the proof. ⊓ ⊔ 16.4 Information Inequalities and Group Inequalities We have proved in Section 15.1 that an unconstrained information inequality b⊤h ≥0 (16.84) always holds if and only if Γ ∗ n ⊂{h ∈Hn : b⊤h ≥0}. (16.85) In other words, all unconstrained information inequalities are fully charac-terized by Γ ∗ n. We also have proved at the end of the last section that con(Υn) = Γ ∗ n. Since Υn ⊂Γ ∗ n ⊂Γ ∗ n, if (16.85) holds, then Υn ⊂{h ∈Hn : b⊤h ≥0}. (16.86) On the other hand, if (16.86) holds, since {h ∈Hn : b⊤h ≥0} is closed and convex, by taking convex closure in (16.86), we obtain Γ ∗ n = con(Υn) ⊂{h ∈Hn : b⊤h ≥0}. (16.87) Therefore, (16.85) and (16.86) are equivalent. Now (16.86) is equivalent to 402 16 Entropy and Groups b⊤h ≥0 for all h ∈Υn. (16.88) Since h ∈Υn if and only if hα = log |G| |Gα| (16.89) for all nonempty subsets α of Nn for some finite group G and subgroups G1, G2, · · · , Gn, we see that the inequality (16.84) holds for all random variables X1, X2, · · ·, Xn if and only if the inequality obtained from (16.84) by replacing hα by log |G| |Gα| for all nonempty subsets α of Nn holds for any finite group G and subgroups G1, G2, · · · , Gn. In other words, for every unconstrained information inequality, there is a corresponding group inequality, and vice versa. Therefore, inequalities in information theory can be proved by methods in group theory, and inequalities in group theory can be proved by methods in information theory. 
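As a small concrete illustration of this correspondence, the group inequality (16.3), which corresponds to the basic inequality I(X1; X2) ≥ 0 via (16.4), can be checked exhaustively for a small group. The sketch below (our own illustration, not part of the text's development) enumerates all subgroups of the symmetric group S3 by brute force and verifies |G||G1 ∩ G2| ≥ |G1||G2| for every pair of subgroups; it uses the fact that a nonempty subset of a finite group closed under the group operation is automatically a subgroup.

```python
from itertools import combinations, permutations

# The symmetric group S3: permutations of {0,1,2}, composed as functions.
G = list(permutations(range(3)))
compose = lambda s, t: tuple(s[t[i]] for i in range(3))   # (s o t)(i) = s(t(i))

def is_subgroup(H):
    # For a nonempty subset of a finite group, closure suffices.
    return all(compose(a, b) in H for a in H for b in H)

subgroups = [set(H) for r in range(1, len(G) + 1)
             for H in combinations(G, r) if is_subgroup(H)]

# The group-theoretic counterpart of I(X1;X2) >= 0, cf. (16.3) and (16.4).
for G1 in subgroups:
    for G2 in subgroups:
        assert len(G) * len(G1 & G2) >= len(G1) * len(G2)
```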
In the rest of the section, we explore this one-to-one correspondence between information theory and group theory. We first give a group-theoretic proof of the basic inequalities in information theory. At the end of the section, we will give an information-theoretic proof for the group inequality in (16.5).

Definition 16.25. Let G1 and G2 be subgroups of a finite group G. Define

G1 ◦ G2 = {a ◦ b : a ∈ G1 and b ∈ G2}.    (16.90)

G1 ◦ G2 is in general not a subgroup of G. However, it can be shown that G1 ◦ G2 is a subgroup of G if G is Abelian (see Problem 1).

Proposition 16.26. Let G1 and G2 be subgroups of a finite group G. Then

|G1 ◦ G2| = |G1||G2| / |G1 ∩ G2|.    (16.91)

Proof. Fix (a1, a2) ∈ G1 × G2. Then a1 ◦ a2 is in G1 ◦ G2. Consider any (b1, b2) ∈ G1 × G2 such that

b1 ◦ b2 = a1 ◦ a2.    (16.92)

We will determine the number of (b1, b2) in G1 × G2 which satisfy this relation. From (16.92), we have

b1^{-1} ◦ (b1 ◦ b2) = b1^{-1} ◦ (a1 ◦ a2)    (16.93)
(b1^{-1} ◦ b1) ◦ b2 = b1^{-1} ◦ a1 ◦ a2    (16.94)
b2 = b1^{-1} ◦ a1 ◦ a2.    (16.95)

Then

b2 ◦ a2^{-1} = b1^{-1} ◦ a1 ◦ (a2 ◦ a2^{-1}) = b1^{-1} ◦ a1.    (16.96)
The uncon-strained non-Shannon-type inequality we have proved in Theorem 15.7 can be expressed in canonical form as H(X1) + H(X1, X2) + 2H(X3) + 2H(X4) +4H(X1, X3, X4) + H(X2, X3, X4) ≤3H(X1, X3) + 3H(X1, X4) + 3H(X3, X4) +H(X2, X3) + H(X2, X4), (16.114) Chapter Summary 405 which corresponds to the group inequality log |G| |G1| + log |G| |G12| + 2 log |G| |G3| + 2 log |G| |G4| +4 log |G| |G134| + log |G| |G234| ≤3 log |G| |G13| + 3 log |G| |G14| + 3 log |G| |G34| + log |G| |G23| + log |G| |G24|. (16.115) Upon rearranging the terms, we obtain |G1 ∩G3|3|G1 ∩G4|3|G3 ∩G4|3|G2 ∩G3||G2 ∩G4| ≤|G1||G1 ∩G2||G3|2|G4|2|G1 ∩G3 ∩G4|4|G2 ∩G3 ∩G4|, (16.116) which is the group inequality in (16.5). The meaning of this inequality and its implications in group theory are yet to be understood. Chapter Summary In the following, Nn = {1, 2, · · · , n}. Properties of Subgroups of a Finite Group: 1. Lagrange’s Theorem: If S is a subgroup of a finite group G, then |S| divides |G|. 2. Let Gi be subgroups of a finite group G and ai be elements of G, i ∈α. Then |∩i∈αaiGi| = |Gα| if T i∈α aiGi ̸= ∅ 0 otherwise, where aiGi is the left coset of Gi containing ai and Gα = ∩i∈αGi. Group Characterization of an Entropy Function: Let G be a finite group and G1, G2, · · · , Gn be subgroups of G. For a vector h ∈Hn, if hα = log |G| |Gα| for all nonempty subsets α of Nn, then (G, G1, · · · , Gn) is a group characterization of h. A vector h that has a group characterization is entropic. Group Characterization of Γ ∗ n: con(Υn) = Γ ∗ n, where Υn = {h ∈Hn : h has a group characterization}. Information Inequalities and Group Inequalities: An unconstrained inequality b⊤h ≥0 involving random variables X1, X2, · · · , Xn, where h ∈ Hn, always holds if and only if the inequality obtained by replacing hα by 406 16 Entropy and Groups log |G| |Gα| for all nonempty subsets α of Nn holds for any finite group G and subgroups G1, G2, · · · , Gn. A “Non-Shannon-Type” Group Inequality: |G1 ∩G3|3|G1 ∩G4|3|G3 ∩G4|3|G2 ∩G3||G2 ∩G4| ≤|G1||G1 ∩G2||G3|2|G4|2|G1 ∩G3 ∩G4|4|G2 ∩G3 ∩G4|. Problems 1. Let G1 and G2 be subgroups of a finite group G. Show that G1 ◦G2 is a subgroup if G is Abelian. 2. Let g1 and g2 be group characterizable entropy functions. a) Prove that m1g1 + m2g2 is group characterizable, where m1 and m2 are any positive integers. b) For any positive real numbers a1 and a2, construct a sequence of group characterizable entropy functions f (k) for k = 1, 2, · · · , such that lim k→∞ f (k) ||f (k)|| = h ||h||, where h = a1g1 + a2g2. 3. Let (G, G1, G2, · · · , Gn) be a group characterization of g ∈Γ ∗ n, where g is the entropy function for random variables X1, X2, · · · , Xn. Fix any nonempty subset α of Nn, and define h by hβ = gα∪β −gα for all nonempty subsets β of Nn. It can easily be checked that hβ = H(Xβ|Xα). Show that (K, K1, K2, · · · , Kn) is a group characterization of h, where K = Gα and Ki = Gi ∩Gα. 4. Let (G, G1, G2, · · · , Gn) be a group characterization of g ∈Γ ∗ n, where g is the entropy function for random variables X1, X2, · · · , Xn. Show that if Xi is a function of (Xj : j ∈α), then Gα is a subgroup of Gi. 5. Let G1, G2, G3 be subgroups of a finite group G. Prove that |G||G1 ∩G2 ∩G3|2 ≥|G1 ∩G2||G2 ∩G3||G1 ∩G3|. Hint: Use the information-theoretic approach. 6. Let h ∈Γ ∗ 2 be the entropy function for random variables X1 and X2 such that h1 + h2 = h12, i.e. X1 and X2 are independent. Let (G, G1, G2) be a group characterization of h, and define a mapping L : G1 × G2 →G by L(a, b) = a ◦b. 
Problems 407 a) Prove that the mapping L is onto, i.e., for any element c ∈G, there exists (a, b) ∈G1 × G2 such that a ◦b = c. b) Prove that G1 ◦G2 is a group. 7. Denote an entropy function h ∈Γ ∗ 2 by (h1, h2, h12). Construct a group characterization for each of the following entropy functions: a) h1 = (log 2, 0, log 2) b) h2 = (0, log 2, log 2) c) h3 = (log 2, log 2, log 2). Verify that Γ2 is the minimal convex set containing the above three en-tropy functions. 8. Denote an entropy function h ∈Γ ∗ 3 by (h1, h2, h3, h12, h23, h13, h123). Con-struct a group characterization for each of the following entropy functions: a) h1 = (log 2, 0, 0, log 2, 0, log 2, log 2) b) h2 = (log 2, log 2, 0, log 2, log 2, log 2, log 2) c) h3 = (log 2, log 2, log 2, log 2, log 2, log 2, log 2) d) h4 = (log 2, log 2, log 2, log 4, log 4, log 4, log 4). 9. Ingleton inequality Let G be a finite Abelian group and G1, G2, G3, and G4 be subgroups of G. Let (G, G1, G2, G3, G4) be a group characterization of g, where g is the entropy function for random variables X1, X2, X3, and X4. Prove the following statements: a) |(G1 ∩G3) ◦(G1 ∩G4)| ≤|G1 ∩(G3 ◦G4)| Hint: Show that (G1 ∩G3) ◦(G1 ∩G4) ⊂G1 ∩(G3 ◦G4). b) |G1 ◦G3 ◦G4| ≤|G1||G3 ◦G4||G1 ∩G3 ∩G4| |G1 ∩G3||G1 ∩G4| . c) |G1 ◦G2 ◦G3 ◦G4| ≤|G1 ◦G3 ◦G4||G2 ◦G3 ◦G4| |G3 ◦G4| . d) |G1 ◦G2 ◦G3 ◦G4| ≤|G1||G2||G3||G4||G1 ∩G3 ∩G4||G2 ∩G3 ∩G4| |G1 ∩G3||G1 ∩G4||G2 ∩G3||G2 ∩G4||G3 ∩G4|. e) |G1 ∩G3||G1 ∩G4||G2 ∩G3||G2 ∩G4||G3 ∩G4| ≤|G3||G4||G1 ∩G2||G1 ∩G3 ∩G4||G2 ∩G3 ∩G4|. 408 16 Entropy and Groups f) H(X13) + H(X14) + H(X23) + H(X24) + H(X34) ≥H(X3) + H(X4) + H(X12) + H(X134) + H(X234), where H(X134) denotes H(X1, X3, X4), etc. g) Is the inequality in f) implied by the basic inequalities? And does it always hold? Explain. The Ingleton inequality (see also ) was originally obtained as a constraint on the rank functions of vector spaces. The inequality in e) was obtained in the same spirit by Chan for subgroups of a finite group. The inequality in f) is referred to as the Ingleton inequality for entropy in the literature. (See also Problem 8 in Chapter 15.) Historical Notes The results in this chapter are due to Chan and Yeung , whose work was inspired by a one-to-one correspondence between entropy and quasi-uniform arrays previously established by Chan (also Chan ). Romashchenko et al. have developed an interpretation of Kolmogorov complexity similar to the combinatorial interpretation of entropy in Chan . The results in this chapter have been used by Chan to construct codes for multi-source network coding to be discussed in Chapter 21. Part II Fundamentals of Network Coding 17 Introduction For a point-to-point communication system, we see from Section 7.7 and Prob-lem 6 in Chapter 8 that asymptotic optimality can be achieved by separating source coding and channel coding. Recall from Section 5.3 that the goal of source coding is to represent the information source in (almost) fair bits1. Then the role of channel coding is to enable the transmission of fair bits through the channel essentially free of error with no reference to the meaning of these fair bits. Thus a theme in classical information theory for point-to-point communication is that fair bits can be drawn equivalence to a commod-ity. It is intuitively appealing that this theme in classical information theory would continue to hold in network communication where the network consists of noiseless point-to-point communication channels. 
If so, in order to multi-cast2 information from a source node to possibly more than one sink node, we only need to compress the information at the source node into fair bits, orga-nize them into data packets, and route the packets to the sink node through the intermediate nodes in the network. In the case when there are more than one sink node, the information needs to be replicated at certain intermediate nodes so that every sink node can receive a copy of the information. This method of transmitting information in a network is generally referred to as store-and-forward or routing. As a matter of fact, almost all computer net-works built in the last few decades are based on this principle, where routers are deployed at the intermediate nodes to switch a data packet from an input channel to an output channel without processing the data content. The deliv-ery of data packets in a computer network resembles mail delivery in a postal system. We refer the readers to textbooks on data communication and switching theory . 1 Fair bits refer to i.i.d. bits, each distributed uniformly on {0, 1}. 2 Multicast means to transmit information from a source node to a specified set of sink nodes. 412 17 Introduction However, we will see very shortly that in network communication, it does not suffice to simply route and/or replicate information within the network. Specifically, coding generally needs to be employed at the intermediate nodes in order to achieve bandwidth optimality. This notion, called network coding, is the subject of discussion in Part II of this book. 17.1 The Butterfly Network In this section, the advantage of network coding over routing is explained by means of a few simple examples. The application of network coding in wireless and satellite communication will be discussed in the next section. We will use a finite directed graph to represent a point-to-point communi-cation network. A node in the network corresponds to a vertex in the graph, while a communication channel in the network corresponds to an edge in the graph. We will not distinguish a node from a vertex, nor will we distinguish a channel from an edge. In the graph, a node is represented by a circle, with the exception that the unique source node, denoted by s (if exists), is represented by a square. Each edge is labeled by a positive integer called the capacity3 or the rate constraint, which gives the maximum number of information symbols taken from some finite alphabet that can be transmitted over the channel per unit time. In this section, we assume that the information symbol is binary. When there is only one edge from node a to node b, we denote the edge by (a, b). Example 17.1 (Butterfly Network I). Consider the network in Figure 17.1(a). In this network, two bits b1 and b2 are generated at source node s, and they are to be multicast to two sink nodes t1 and t2. In Figure 17.1(b), we try to devise a routing scheme for this purpose. By symmetry, we send the two bits on different output channels at node s. Without loss of generality, b1 is sent on channel (s, 1) and b2 is sent on channel (s, 2). At nodes 1 and 2, the received bit is replicated and the copies are sent on the two output channels. At node 3, since both b1 and b2 are received but there is only one output channel, we have to choose one of the two bits to be sent on the output channel (3, 4). Suppose we send b1 as in Figure 17.1(b). Then the bit is replicated at node 4 and the two copies are sent to nodes t1 and t2, respectively. 
At node t2, both b1 and b2 are received. However, at node t1, two copies of b1 are received and b2 cannot be recovered. Thus this routing scheme does not work. Similarly, if b2 instead of b1 is sent on channel (3, 4), then b1 cannot be recovered at node t2.

However, if network coding is allowed, it is actually possible to achieve our goal. Figure 17.1(c) shows a scheme which multicasts both b1 and b2 to nodes t1 and t2, where "+" denotes modulo 2 addition. At node t1, b1 is received, and b2 can be recovered by adding b1 and b1 + b2, because

b1 + (b1 + b2) = (b1 + b1) + b2 = 0 + b2 = b2.    (17.1)

³ Here the term "capacity" is used in the sense of graph theory.

Fig. 17.1. Butterfly Network I: (a) the network with all edge capacities equal to 1; (b) an attempted routing scheme; (c) a network coding scheme sending b1 + b2 on channel (3, 4); (d) a routing scheme that exceeds the capacity of channel (3, 4).

Similarly, b2 is received at node t2, and b1 can be recovered by adding b2 and b1 + b2. In this scheme, b1 and b2 are encoded into the bit b1 + b2 which is then sent on channel (3, 4). If network coding is not allowed, in order to multicast both b1 and b2 to nodes t1 and t2, at least one more bit has to be sent. Figure 17.1(d) shows such a scheme. In this scheme, however, the capacity of channel (3, 4) is exceeded by 1 bit. If the capacity of channel (3, 4) cannot be exceeded and network coding is not allowed, it can be shown that at most 1.5 bits can be multicast per unit time on the average (see Problem 3).

The above example shows the advantage of network coding over routing for a single multicast in a network. The next example shows the advantage of network coding over routing for multiple unicasts⁴ in a network.

⁴ Unicast is the special case of multicast with one sink node.

Example 17.2 (Butterfly Network II). In Figure 17.1, instead of both being generated at node s, suppose bit b1 is generated at node 1 and bit b2 is generated at node 2. Then we can remove node s and obtain the network in Figure 17.2(a). We again want to multicast b1 and b2 to both nodes t1 and t2. Since this network is essentially the same as the previous one, Figure 17.2(b) shows the obvious network coding solution.

Fig. 17.2. Butterfly Network II: (a) the network with all edge capacities equal to 1; (b) the network coding solution; (c) the network obtained by merging node 1 with t1 into t′1 and node 2 with t2 into t′2, together with the corresponding network coding solution.

There are two multicasts in this network. However, if we merge node 1 and node t1 into a new node t′1 and merge node 2 and node t2 into a new node t′2, then we obtain the network and the corresponding network coding solution in Figure 17.2(c). In this new network, bits b1 and b2 are generated at nodes t′1 and t′2, respectively, and the communication goal is to exchange the two bits through the network. In other words, the two multicasts in Figure 17.2(a) become two unicasts in Figure 17.2(c). If network coding is not allowed, we need to route b1 from node t′1 to node t′2 and to route b2 from node t′2 to node t′1. Since each of these routes has to go through node 3 and node 4, if b1 and b2 are routed simultaneously, the capacity of channel (3, 4) is exceeded. Therefore, we see the advantage of network coding over routing when there are multiple unicasts in the network.

For the network in Figure 17.2(b), the two sink nodes are required to recover both of the information sources, namely the bits b1 and b2.
Even though they are generated at two different source nodes 1 and 2, they can be regarded as being generated at a super source node s connecting to nodes t1 and t2 as in Figure 17.1(c). Precisely, the network (network code) in Figure 17.2(b) can be obtained from the network (network code) in Figure 17.1(c) by removing node s and all its output channels. This observation will be further discussed in Example 19.26 in the context of single-source linear network coding. 17.2 Wireless and Satellite Communications In wireless communication, when a node broadcasts, different noisy versions of the signal is received by the neighboring nodes. Under certain conditions, with suitable channel coding, we can assume the existence of an error-free channel between the broadcast node and the neighboring nodes such that each of the latter receives exactly the same information. Such an abstraction, though generally suboptimal, provides very useful tools for communication systems design. Our model for network communication can be used for modeling the above broadcast scenario by imposing the following constraints on the broadcast node: 1. all the output channels have the same capacity; 2. the same symbol is sent on each of the output channels. We will refer to these constraints as the broadcast constraint. Figure 17.3(a) is an illustration of a broadcast node b with two neighboring nodes n1 and n2, where the two output channels of node b have the same capacity. In order to express the broadcast constraint in the usual graph-theoretic terminology, we need to establish the following simple fact about network coding. Proposition 17.3. Network coding is not necessary at a node if the node has only one input channel and the capacity of each output channel is the same as that of the input channel. Proof. Consider a node in the network as prescribed and denote the symbol(s) received on the input channel by x. (There is more than one symbol in x if 416 17 Introduction b n1 n2 (a) b n1 n2 (b) Fig. 17.3. A broadcast node b with two neighboring nodes n1 and n2. the input channel has capacity larger than 1.) Let a coding scheme be given, and denote the symbol sent on the ith output channel by gi(x). We now show that one may assume without loss of generality that x is sent on all the output channels. If x instead of gi(x) is sent on the ith out-put channel, then the receiving node can mimic the effect of receiving gi(x) by applying the function gi on x upon receiving it. In other words, any cod-ing scheme that does not send x on all the output channels can readily be converted into one which does. This proves the proposition. ⊓ ⊔ We now show that the broadcast constraint depicted in Figure 17.3(a) is logically equivalent to the usual graph representation in Figure 17.3(b). In this figure, the unlabeled node is a dummy node associated with the broadcast node which is inserted for the purpose of modeling the broadcast constraint, where the input channel and all the output channels of the dummy node have the same capacity as an output channel of the broadcast node b in Figure 17.3(a). Although no broadcast constraint is imposed on the dummy node in Figure 17.3(b), by Proposition 17.3, we may assume without loss of generality that the dummy node simply sends the symbol received on the input channel on each of the output channels. Then Figures 17.3(a) and (b) are logically equivalent to each other because a coding scheme for the former corresponds to a coding scheme for the latter, and vice versa. 
Example 17.4 (A Wireless/Satellite System). Consider a communication sys-tem with two wireless nodes t′ 1 and t′ 2 that generate two bits b1 and b2, re-spectively, and the two bits are to be exchanged through a relay node. Such a system can also be the model of a satellite communication system, where the relay node corresponds to a satellite, and the two nodes t′ 1 and t′ 2 correspond to ground stations that communicate with each other through the satellite. We make the usual assumption that a wireless node cannot simultaneously 1. transmit and receive; 2. receive the transmission from more than one neighboring node. 17.3 Source Separation 417 b1 b2 b1 b2 (a) t 1’ t 2’ t 1’ t 2’ b1 b2 b1 + b2 (b) b1 + b2 k = 1 k = 2 k = 3 k = 4 Fig. 17.4. A network coding application in wireless communication. A straightforward routing scheme which takes a total of 4 time units to com-plete is shown in Figure 17.4(a), with k being the discrete time index. By taking into account the broadcast nature of the relay node, the system can be modeled by the network in Figure 17.2(c), where node 3 corresponds to the relay node and node 4 corresponds to the associated dummy node. Then the network coding solution is shown in Figure 17.4(b), which takes a total of 3 time units to complete. In other words, a very simple coding scheme at the relay node can save 50 percent of the downlink bandwidth. 17.3 Source Separation In an error-free point-to-point communication system, suppose we want to transmit two information sources X and Y . If we compress the two sources separately, we need to transmit approximately H(X) + H(Y ) bits. If we com-press the two sources jointly, we need to transmit approximately H(X, Y ) bits. If X and Y are independent, we have H(X, Y ) = H(X) + H(Y ). (17.2) In other words, if the information sources are independent, asymptotically there is no difference between coding them separately or jointly. We will refer to coding independent information sources separately as source separation. Example 17.2 reveals the important fact that source sepa-ration is not necessary optimal in network communication, which is explained as follows. Let B1 and B2 be random bits generated at nodes t′ 1 and t′ 2, re-spectively, where B1 and B2 are independent and each of them are distributed uniformly on {0, 1}. With B2 as side-information which is independent of B1, 418 17 Introduction node t′ 2 has to receive at least 1 bit in order to decode B1. Since node t′ 2 can re-ceive information only from node 4 which in turn can receive information only from node 3, any coding scheme that transmits B1 from node t′ 1 to node t′ 2 must send at least 1 bit on channel (3, 4). Similarly, any coding scheme that transmits B2 from node t′ 2 to node t′ 1 must send at least 1 bit on channel (3, 4). Therefore, any source separation solution must send at least 2 bits on channel (3, 4). Since the network coding solution in Figure 17.2(c) sends only 1 bit on channel (3, 4), we see that source separation is not optimal. For a network coding problem with multiple information sources, since source separation does not guarantee optimality, the problem cannot always be decomposed into a number single-source problems. We will see that while single-source network coding has a relatively simple characterization, the char-acterization of multi-source network coding is much more involved. Chapter Summary Advantage of Network Coding: For communication on a point-to-point network, store-and-forward may not be bandwidth optimal when 1. 
Chapter Summary

Advantage of Network Coding: For communication on a point-to-point network, store-and-forward may not be bandwidth optimal when
1. there is one information source to be multicast;
2. there are two or more independent information sources to be unicast (more generally multicast).
In general, network coding needs to be employed for bandwidth optimality.

Source Separation: For communication on a point-to-point network, when there are two or more independent information sources to be unicast (more generally multicast), source separation coding may not be bandwidth optimal.

Problems

In the following problems, the rate constraint for an edge is in bits per unit time.

1. Consider the following network. We want to multicast information to the sink nodes at the maximum rate without using network coding. Let B = {b1, b2, · · · , bκ} be the set of bits to be multicast. Let Bi be the set of bits sent in edge (s, i), where |Bi| = 2, i = 1, 2, 3. At node i, the received bits are duplicated and sent in the two outgoing edges. Thus two bits are sent in each edge in the network.

[Figure: the network for Problem 1, with source node s, intermediate nodes 1, 2, 3, and sink nodes t1, t2, t3.]

a) Show that B = Bi ∪ Bj for any 1 ≤ i < j ≤ 3.
b) Show that B3 ∪ (B1 ∩ B2) = B.
c) Show that |B3 ∪ (B1 ∩ B2)| ≤ |B3| + |B1| + |B2| − |B1 ∪ B2|.
d) Determine the maximum value of κ and devise a network code which achieves this maximum value.
e) What is the percentage of improvement if network coding is used?
(Ahlswede et al.)

2. Consider the following butterfly network.

[Figure: the butterfly network for Problem 2, with source node s and nodes 1 to 6.]

Devise a network coding scheme which multicasts two bits b1 and b2 from node s to all the other nodes such that nodes 3, 5, and 6 receive b1 and b2 after 1 unit of time and nodes 1, 2, and 4 receive b1 and b2 after 2 units of time. In other words, node i receives information at a rate equal to maxflow(s, i) for all i ≠ s.

3. Determine the maximum rate at which information can be multicast to nodes 5 and 6 only in the network in Problem 2 if network coding is not used. Devise a network coding scheme which achieves this maximum rate.

Historical Notes

The concept of network coding was first introduced for satellite communication networks in Yeung and Zhang and then fully developed in Ahlswede et al., where in the latter the term "network coding" was coined. In this work, the advantage of network coding over store-and-forward was first demonstrated by the butterfly network, thus refuting the folklore that information transmission in a point-to-point network is equivalent to a commodity flow.

Prior to these works, network coding problems for special networks had been studied in the context of distributed source coding. The suboptimality of source separation was first demonstrated by Yeung. Source separation was proved to be optimal for special networks by Hau, Roche et al., and Yeung and Zhang. Some other special cases of single-source network coding had been studied by Roche et al., Rabin, Ayanoglu et al., and Roche.

For a tutorial on the theory, we refer the reader to the unifying work by Yeung et al. Tutorials on the subject have also been written by Fragouli and Soljanin and by Chou and Wu from the algorithm and application perspectives. We also refer the reader to the book by Ho and Lun. For an update of the literature, the reader may visit the Network Coding Homepage.

By regarding communication as a special case of computation, it can be seen that network coding is in the spirit of communication complexity in computer science studied by Yao. However, the problem formulations of network coding and communication complexity are quite different.
18 The Max-Flow Bound

In this chapter, we discuss an important bound for single-source network coding which has a strong connection with graph theory. This bound, called the max-flow min-cut bound, or simply the max-flow bound, gives a fundamental limit on the amount of information that can be multicast in a network. The max-flow bound is established in a general setting where information can be transmitted within the network in some arbitrary manner. Toward this end, we first formally define a point-to-point network and a class of codes on such a network. In Chapters 19 and 20, we will prove the achievability of the max-flow bound by linear network coding.¹

18.1 Point-to-Point Communication Networks

A point-to-point communication network is represented by a directed graph G = (V, E), where V is the set of nodes in the network and E is the set of edges in G which represent the point-to-point channels. Parallel edges between a pair of nodes are allowed.² We assume that G is finite, i.e., |E| < ∞ (and hence |V| < ∞). The unique source node in the network, where information is generated, is denoted by s. All the other nodes are referred to as non-source nodes. The sets of input channels and output channels of a node i are denoted by In(i) and Out(i), respectively.

For a channel e, let R_e be the rate constraint, i.e., the maximum number of information symbols taken from a finite alphabet that can be sent on the channel per unit time. As before, we also refer to R_e as the capacity of channel e in the sense of graph theory. Let

R = (R_e : e ∈ E) (18.1)

be the rate constraints for the graph G. To simplify our discussion, we assume that R_e are positive integers for all e ∈ E.

¹ A more specific form of the max-flow bound will be proved in Theorem 19.10 for linear network coding.
² Such a graph is sometimes called a multigraph.

In the following, we introduce some notions in graph theory which will facilitate the characterization of a point-to-point network. Temporarily regard an edge in the graph G as a water pipe and G as a network of water pipes. Fix a node t ≠ s and call it the sink node. Suppose water is generated at a constant rate at node s. We assume that the rate of water flow in each pipe does not exceed its capacity. We also assume that there is no leakage in the network, so that water is conserved at every node other than s and t in the sense that the total rate of water flowing into the node is equal to the total rate of water flowing out of the node. The water generated at node s is eventually drained at node t.

A flow

F = (F_e : e ∈ E) (18.2)

in G from node s to node t with respect to rate constraints R is a valid assignment of a nonnegative integer F_e to every edge e ∈ E such that F_e is equal to the rate of water flow in edge e under all the assumptions in the last paragraph. The integer F_e is referred to as the value of F on edge e. Specifically, F is a flow in G from node s to node t if for all e ∈ E,

0 ≤ F_e ≤ R_e, (18.3)

and for all i ∈ V except for s and t,

F⁺(i) = F⁻(i), (18.4)

where

F⁺(i) = Σ_{e∈In(i)} F_e (18.5)

and

F⁻(i) = Σ_{e∈Out(i)} F_e. (18.6)

In the above, F⁺(i) is the total flow into node i and F⁻(i) is the total flow out of node i, and the conditions in (18.4) are called the conservation conditions.

Since the conservation conditions require that the resultant flow out of any node other than s and t is zero, it is intuitively clear, and not difficult to show, that the resultant flow out of node s is equal to the resultant flow into node t.
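The defining conditions (18.3) and (18.4) are easy to check mechanically. The sketch below, with a small example network of our own, verifies that a candidate assignment F is a flow: every edge respects its rate constraint, and conservation holds at every node other than s and t.

```python
# A minimal sketch checking that an assignment F is a flow from s to t as
# defined in (18.2)-(18.6). The graph and the numbers are illustrative,
# not taken from the text.

def is_flow(edges, R, F, s, t):
    """edges: list of (u, v) pairs; R, F: dicts keyed by edge."""
    # 0 <= F_e <= R_e for every edge e, as in (18.3).
    if any(not (0 <= F[e] <= R[e]) for e in edges):
        return False
    nodes = {u for u, v in edges} | {v for u, v in edges}
    for i in nodes - {s, t}:
        flow_in = sum(F[e] for e in edges if e[1] == i)    # F+(i), (18.5)
        flow_out = sum(F[e] for e in edges if e[0] == i)   # F-(i), (18.6)
        if flow_in != flow_out:                            # (18.4) fails
            return False
    return True

edges = [('s', '1'), ('1', 't'), ('s', 't')]
R = {('s', '1'): 2, ('1', 't'): 2, ('s', 't'): 1}
F = {('s', '1'): 2, ('1', 't'): 2, ('s', 't'): 1}
print(is_flow(edges, R, F, 's', 't'))   # True; the value of F is 3
```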
This common value is called the value of F. F is a max-flow from node s to node t in G with respect to rate constraints R if F is a flow from node s to node t whose value is greater than or equal to the value of any other flow from node s to node t.

A cut between node s and node t is a subset U of V such that s ∈ U and t ∉ U. Let

E_U = {e ∈ E : e ∈ Out(i) ∩ In(j) for some i ∈ U and j ∉ U} (18.7)

be the set of edges across the cut U. The capacity of the cut U with respect to rate constraints R is defined as the sum of the capacities of all the edges across the cut, i.e.,

Σ_{e∈E_U} R_e. (18.8)

A cut U is a min-cut between node s and node t if it is a cut between node s and node t whose capacity is less than or equal to the capacity of any other cut between s and t. A min-cut between node s and node t can be thought of as a bottleneck between node s and node t. Therefore, it is intuitively clear that the value of a max-flow from node s to node t cannot exceed the capacity of a min-cut between the two nodes. The following theorem, known as the max-flow min-cut theorem, states that the capacity of a min-cut is always achievable. This theorem will play a key role in the subsequent discussions.

Theorem 18.1 (Max-Flow Min-Cut Theorem). Let G be a graph with source node s, sink node t, and rate constraints R. Then the value of a max-flow from node s to node t is equal to the capacity of a min-cut between the two nodes.

Fig. 18.1. Illustrations of the max-flow and the min-cut from the source node to (a) a collection of non-source nodes T and (b) a collection of edges ξ.

The notions of max-flow and min-cut can be generalized to a collection of non-source nodes T. To define the max-flow and the min-cut from s to T, we expand the graph G = (V, E) into G′ = (V′, E′) by installing a new node τ which is connected from every node in T by an edge. The capacity of an edge (t, τ), t ∈ T, is set to infinity. Intuitively, node τ acts as a single sink node that collects all the flows into T. Then the max-flow and the min-cut from node s to T in graph G are defined as the max-flow and the min-cut from node s to node τ in graph G′, respectively. This is illustrated in Figure 18.1(a).

The notions of max-flow and min-cut can be further generalized to a collection of edges ξ. For an edge e ∈ ξ, let the edge be from node v_e to node w_e. We modify the graph G = (V, E) to obtain the graph G̃ = (Ṽ, Ẽ) by installing a new node t_e for each edge e ∈ ξ and replacing edge e by two new edges e′ and e″, where e′ is from node v_e to node t_e and e″ is from node t_e to node w_e. Let T_ξ be the set of nodes t_e, e ∈ ξ. Then the max-flow and the min-cut between node s and the collection of edges ξ in graph G are defined as the max-flow and the min-cut between node s and the collection of nodes T_ξ in graph G̃, respectively. This is illustrated in Figure 18.1(b).
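The super-sink expansion just described reduces maxflow(T) to an ordinary max-flow computation. The sketch below implements this reduction with a plain Edmonds-Karp augmenting-path routine; the algorithm choice and the example network are ours, and "infinite" capacity is approximated by a very large number.

```python
# A minimal sketch of computing maxflow(T) for a collection T of non-source
# nodes: install a new node tau, join every t in T to tau by an edge of
# (effectively) infinite capacity, and run a standard max-flow algorithm
# from s to tau. Edmonds-Karp is used here for concreteness.
from collections import defaultdict, deque

def max_flow(cap, s, t):
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:         # BFS for an augmenting path
            u = q.popleft()
            for v in cap[u]:
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t                       # reconstruct the path
        while parent[v] is not None:
            path.append((parent[v], v)); v = parent[v]
        aug = min(cap[u][v] for u, v in path) # bottleneck capacity
        for u, v in path:                     # update residual capacities
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

cap = defaultdict(lambda: defaultdict(int))
for u, v, r in [('s','1',2), ('s','2',1), ('1','t1',2), ('2','t2',1)]:
    cap[u][v] = r
for t in ['t1', 't2']:            # super-sink construction for T = {t1, t2}
    cap[t]['tau'] = 10**9
print(max_flow(cap, 's', 'tau'))  # maxflow(T) = 3
```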
18.2 Examples Achieving the Max-Flow Bound

Let ω be the rate at which information is multicast from source node s to sink nodes t1, t2, · · · , tL in a network G with rate constraints R. We are naturally interested in the maximum possible value of ω. With a slight abuse of notation, we denote the value of a max-flow from source node s to a sink node t_l by maxflow(t_l). It is intuitive that

ω ≤ maxflow(t_l) (18.9)

for all l = 1, 2, · · · , L, i.e.,

ω ≤ min_l maxflow(t_l). (18.10)

This is called the max-flow bound, which will be formally established in the next two sections. In this section, we first show by a few examples that the max-flow bound can be achieved. In these examples, the unit of information is the bit.

First, we consider the network in Figure 18.2 which has one sink node. Figure 18.2(a) shows the capacity of each edge. By identifying the min-cut to be {s, 1, 2} and applying the max-flow min-cut theorem, we see that

maxflow(t1) = 3. (18.11)

Fig. 18.2. A one-sink network.

Therefore the flow in Figure 18.2(b) is a max-flow. In Figure 18.2(c), we show how we can send three bits b1, b2, and b3 from node s to node t1 based on the max-flow in Figure 18.2(b). Evidently, the max-flow bound is achieved.

In fact, we can easily see that the max-flow bound can always be achieved when there is only one sink node in the network. In this case, we only need to treat the information bits constituting the message as a commodity and route them through the network according to any fixed routing scheme. Eventually, all the bits will arrive at the sink node. Since the routing scheme is fixed, the sink node knows which bit is coming in from which edge, and the message can be recovered accordingly.

Next, we consider the network in Figure 18.3 which has two sink nodes. Figure 18.3(a) shows the capacity of each edge. It is easy to see that

maxflow(t1) = 5 (18.12)

and

maxflow(t2) = 6. (18.13)

Fig. 18.3. A two-sink network without coding.

So the max-flow bound asserts that we cannot send more than 5 bits to both t1 and t2. Figure 18.3(b) shows a scheme which sends 5 bits b1, b2, b3, b4, and b5 to t1 and t2 simultaneously. Therefore, the max-flow bound is achieved. In this scheme, b1 and b2 are replicated at node 3, b3 is replicated at node s, while b4 and b5 are replicated at node 1. Note that each bit is replicated exactly once in the network because two copies of each bit need to be sent to the two sink nodes.

We now revisit the butterfly network reproduced in Figure 18.4(a), which again has two sink nodes. It is easy to see that

maxflow(t_l) = 2 (18.14)

for l = 1, 2. So the max-flow bound asserts that we cannot send more than 2 bits to both sink nodes t1 and t2. We have already seen the network coding scheme in Figure 18.4(b) that achieves the max-flow bound. In this scheme, coding is required at node 3.

Fig. 18.4. Butterfly network I.

Finally, we consider the network in Figure 18.5 which has three sink nodes. Figure 18.5(a) shows the capacity of each edge. It is easy to see that

maxflow(t_l) = 2 (18.15)

for all l. In Figure 18.5(b), we show how to multicast 2 bits b1 and b2 to all the sink nodes. Therefore, the max-flow bound is achieved. Again, it is necessary to code at the nodes in order to multicast the maximum number of bits to all the sink nodes.

Fig. 18.5. A diversity coding scheme.

The network in Figure 18.5 is of special interest in practice because it is a special case of the diversity coding scheme used in commercial disk arrays, which are a kind of fault-tolerant data storage system. For simplicity, assume the disk array has three disks which are represented by nodes 1, 2, and 3 in the network, and the information to be stored are the bits b1 and b2. The information is encoded into three pieces, namely b1, b2, and b1 + b2, which are stored on the disks represented by nodes 1, 2, and 3, respectively. In the system, there are three decoders, represented by sink nodes t1, t2, and t3, such that each of them has access to a distinct set of two disks. The idea is that when any one disk is out of order, the information can still be recovered from the remaining two disks. For example, if the disk represented by node 1 is out of order, then the information can be recovered by the decoder represented by sink node t3 which has access to the disks represented by node 2 and node 3. When all the three disks are functioning, the information can be recovered by any decoder.
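The diversity coding scheme of Figure 18.5 can be exercised exhaustively in a few lines. The sketch below, with names of our own choosing, stores b1, b2, and b1 + b2 on three disks and verifies that every pair of disks suffices to recover the information (addition here is XOR, i.e., addition over GF(2)).

```python
# A minimal sketch of the diversity coding scheme of Figure 18.5: any two
# of the three disks recover (b1, b2). Names are ours, not from the text.
from itertools import combinations

def encode(b1, b2):
    return {1: b1, 2: b2, 3: b1 ^ b2}        # disk contents

def decode(disks):
    """Recover (b1, b2) from the contents of any two disks."""
    if 1 in disks and 2 in disks:
        return disks[1], disks[2]
    if 1 in disks:                            # disks 1 and 3 survive
        return disks[1], disks[1] ^ disks[3]
    return disks[2] ^ disks[3], disks[2]      # disks 2 and 3 survive

for b1, b2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    stored = encode(b1, b2)
    for pair in combinations([1, 2, 3], 2):
        assert decode({d: stored[d] for d in pair}) == (b1, b2)
print("any two disks recover (b1, b2)")
```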
18.3 A Class of Network Codes

In this section, we introduce a general class of codes for the point-to-point network defined in Section 18.1. In the next section, the max-flow bound will be proved for this class of network codes.

Since the max-flow bound concerns only the values of max-flows from source node s to the sink nodes, we assume without loss of generality that there is no loop in the graph G, i.e., In(i) ∩ Out(i) = ∅ for all i ∈ V, because such edges do not increase the value of a max-flow from node s to a sink node. For the same reason, we assume that there is no input edge at node s, i.e., In(s) = ∅.

We consider a block code of length n. Let X denote the information source and assume that x, the outcome of X, is obtained by selecting an index from a set X according to the uniform distribution. The elements in X are called messages. The information sent on an output channel of a node can depend only on the information previously received by that node. This constraint specifies the causality of any coding scheme on the network.

An (n, (η_e : e ∈ E), τ) network code on the graph G that multicasts information from source node s to sink nodes t1, t2, · · · , tL, where n is the block length, is defined by the components listed below; the construction of the code from these components will be described after their definitions are given.

1) A positive integer K.

2) Mappings

u : {1, 2, · · · , K} → V, (18.16)
v : {1, 2, · · · , K} → V, (18.17)

and

ê : {1, 2, · · · , K} → E, (18.18)

such that ê(k) ∈ Out(u(k)) and ê(k) ∈ In(v(k)).

3) Index sets A_k = {1, 2, · · · , |A_k|}, 1 ≤ k ≤ K, such that

∏_{k∈T_e} |A_k| = η_e, (18.19)

where

T_e = {1 ≤ k ≤ K : ê(k) = e}. (18.20)

4) (Encoding functions). If u(k) = s, then

f_k : X → A_k, (18.21)

where

X = {1, 2, · · · , ⌈2^{nτ}⌉}. (18.22)

If u(k) ≠ s, then if

Q_k = {1 ≤ k′ < k : v(k′) = u(k)} (18.23)

is nonempty,

f_k : ∏_{k′∈Q_k} A_{k′} → A_k; (18.24)

otherwise, let f_k be an arbitrary constant taken from A_k.

5) (Decoding functions). Mappings

g_l : ∏_{k′∈W_l} A_{k′} → X (18.25)

for l = 1, 2, · · · , L, where

W_l = {1 ≤ k ≤ K : v(k) = t_l} (18.26)

such that for all l = 1, 2, · · · , L,

g̃_l(x) = x (18.27)

for all x ∈ X, where g̃_l : X → X is the function induced inductively by f_k, 1 ≤ k ≤ K, and g_l, and g̃_l(x) denotes the value of g_l as a function of x.

The quantity τ is the rate of the information source X, which is also the rate at which information is multicast from the source node to all the sink nodes. The (n, (η_e : e ∈ E), τ) code is constructed from the above components as follows. At the beginning of a coding session, the value of X is available to node s.
During the coding session, there are K transactions which take place in chronological order, where each transaction refers to a node sending information to another node. In the kth transaction, node u(k) encodes according to encoding function f_k and sends an index in A_k to node v(k). The domain of f_k is the set of all possible information that can be received by node u(k) just before the kth transaction, and we distinguish two cases. If u(k) = s, the domain of f_k is X. If u(k) ≠ s, Q_k gives the time indices of all the previous transactions for which information was sent to node u(k), so the domain of f_k is ∏_{k′∈Q_k} A_{k′}. The set T_e gives the time indices of all the transactions for which information is sent on channel e, so η_e is the number of possible index tuples that can be sent on channel e during the coding session. Finally, W_l gives the indices of all the transactions for which information is sent to node t_l, and g_l is the decoding function at node t_l which recovers x with zero error.

18.4 Proof of the Max-Flow Bound

In this section, we state and prove the max-flow bound for the class of network codes defined in the last section.

Definition 18.2. For a graph G with rate constraints R, an information rate ω ≥ 0 is asymptotically achievable if for any ϵ > 0, there exists for sufficiently large n an (n, (η_e : e ∈ E), τ) network code on G such that

n⁻¹ log₂ η_e ≤ R_e + ϵ (18.28)

for all e ∈ E, where n⁻¹ log₂ η_e is the average bit rate of the code on channel e, and

τ ≥ ω − ϵ. (18.29)

For brevity, an asymptotically achievable information rate will be referred to as an achievable information rate.

Remark. It follows from the above definition that if ω ≥ 0 is achievable, then ω′ is also achievable for all 0 ≤ ω′ ≤ ω. Also, if ω^(k) is achievable for all k ≥ 1, then it can be shown that ω = lim_{k→∞} ω^(k), if it exists, is also achievable. Therefore, the set of all achievable information rates is closed and fully characterized by the maximum value in the set.

Theorem 18.3 (Max-Flow Bound). For a graph G with rate constraints R, if ω is achievable, then

ω ≤ min_l maxflow(t_l). (18.30)

Proof. It suffices to prove that for a graph G with rate constraints R, if for any ϵ > 0 there exists for sufficiently large n an (n, (η_e : e ∈ E), τ) code on G such that

n⁻¹ log₂ η_e ≤ R_e + ϵ (18.31)

for all e ∈ E and

τ ≥ ω − ϵ, (18.32)

then ω satisfies (18.30).

Consider such a code for a fixed ϵ and a sufficiently large n, and consider any l = 1, 2, · · · , L and any cut U between node s and node t_l. Let

w_j(x) = (f̃_k(x) : k ∈ ∪_{e∈In(j)} T_e), (18.33)

where x ∈ X and f̃_k : X → A_k is the function induced inductively by f_{k′}, 1 ≤ k′ ≤ k, and f̃_k(x) denotes the value of f_k as a function of x. The tuple w_j(x) is all the information known by node j during the whole coding session when the message is x. Since f̃_k(x) is a function of the information previously received by node u(k), it can be shown by induction (see Problem 3) that w_{t_l}(x) is a function of f̃_k(x), k ∈ ∪_{e∈E_U} T_e, where E_U is the set of edges across the cut U as previously defined in (18.7). Since x can be determined at node t_l, we have

H(X) ≤ H(X, w_{t_l}(X)) (18.34)
     = H(w_{t_l}(X)) (18.35)
a)   ≤ H(f̃_k(X), k ∈ ∪_{e∈E_U} T_e) (18.36)
b)   ≤ Σ_{e∈E_U} Σ_{k∈T_e} H(f̃_k(X)) (18.37)
c)   ≤ Σ_{e∈E_U} Σ_{k∈T_e} log₂ |A_k| (18.38)
     = Σ_{e∈E_U} log₂ ( ∏_{k∈T_e} |A_k| ) (18.39)
d)   ≤ Σ_{e∈E_U} log₂ η_e, (18.40)
where
• a) follows because w_{t_l}(x) is a function of f̃_k(x), k ∈ ∪_{e∈E_U} T_e;
• b) follows from the independence bound for entropy (Theorem 2.39);
• c) follows from (18.21) and Theorem 2.43;
• d) follows from (18.19).

Thus

ω − ϵ ≤ τ (18.41)
      ≤ n⁻¹ log₂ ⌈2^{nτ}⌉ (18.42)
      = n⁻¹ log₂ |X| (18.43)
      = n⁻¹ H(X) (18.44)
      ≤ Σ_{e∈E_U} n⁻¹ log₂ η_e (18.45)
      ≤ Σ_{e∈E_U} (R_e + ϵ) (18.46)
      ≤ Σ_{e∈E_U} R_e + |E|ϵ, (18.47)

where (18.45) follows from (18.40). Minimizing the right hand side over all cuts U, we have

ω − ϵ ≤ min_U Σ_{e∈E_U} R_e + |E|ϵ. (18.48)

The first term on the right hand side is the capacity of a min-cut between node s and node t_l. By the max-flow min-cut theorem, it is equal to the value of a max-flow from node s to node t_l, i.e., maxflow(t_l). Letting ϵ → 0, we obtain

ω ≤ maxflow(t_l). (18.49)

Since this upper bound on ω holds for all l = 1, 2, · · · , L,

ω ≤ min_l maxflow(t_l). (18.50)

The theorem is proved. □

Remark 1. In proving the max-flow bound, the time evolution and the causality of the network code have been taken into account.

Remark 2. Even if we allow an arbitrarily small probability of decoding error in the usual Shannon sense, by modifying our proof by means of a standard application of Fano's inequality, it can be shown that it is still necessary for ω to satisfy (18.50). The details are omitted here.

Chapter Summary

Max-Flow Min-Cut Bound: In a point-to-point communication network, if node t receives an information source from node s, then the value of a maximum flow from s to t, or equivalently the capacity of a minimum cut between s and t, is at least equal to the rate of the information source.

Problems

1. In a network, for a flow F from a source node s to a sink node t, show that F⁻(s) = F⁺(t) provided that the conservation conditions in (18.4) hold.

2. For the class of codes defined in Section 18.3, show that if the rates ω^(k) are achievable for all k ≥ 1, then ω = lim_{k→∞} ω^(k), if it exists, is also achievable (see Definition 18.2).

3. Prove the claim in the proof of Theorem 18.3 that for any cut U between node s and node t_l, w_{t_l}(x) is a function of f̃_k(x), k ∈ ∪_{e∈E_U} T_e. Hint: Define w_{j,κ}(x) = (f̃_k(x) : k ∈ ∪_{e∈In(j)} T_e, k ≤ κ) and prove by induction on κ that for all 1 ≤ κ ≤ K, (w_{j,κ}(x) : j ∉ U) is a function of (f̃_k(x) : k ∈ ∪_{e∈E_U} T_e, k ≤ κ).

4. Probabilistic network code. For a network code defined in Section 18.3, the kth transaction of the coding process is specified by a mapping f_k. Suppose instead of a mapping f_k, the kth transaction is specified by a transition probability matrix from the domain of f_k to the range of f_k. Also, instead of a mapping g_l, decoding at sink node t_l is specified by a transition probability matrix from the domain of g_l to the range of g_l, 1 ≤ l ≤ L. Conditioning on the indices received by node u(k) during 1 ≤ k′ < k, the index sent from node u(k) to node v(k) in the kth transaction is independent of all the previously generated random variables. Similarly, conditioning on all the indices received by sink node t_l during the whole coding session, the decoding at t_l is independent of all the previously generated random variables. We refer to such a code as a probabilistic network code. Since a deterministic network code is a special case of a probabilistic network code, the latter can potentially multicast at a higher rate compared with the former. Prove that this is not possible.
5. Consider a probabilistic network code on the network below.

[Figure: the network for Problem 5, with nodes s, 1, and t.]

Let X = (X1, X2) be uniformly distributed on GF(2)², and Z be independent of X and uniformly distributed on GF(2). We use F̃_k to denote the index transmitted in the kth transaction and W_{t_l} to denote (F̃_k, k ∈ ∪_{e∈In(t_l)} T_e). The probabilistic network code is specified by the following five transactions:

u(1) = s, v(1) = 1, F̃1 = X1,
u(2) = 1, v(2) = t, F̃2 = X1 + Z,
u(3) = t, v(3) = s, F̃3 = X1 + Z,
u(4) = s, v(4) = 1, F̃4 = (X1, X2 + Z),
u(5) = 1, v(5) = t, F̃5 = (X1, X2 + Z).

Note that the fourth transaction is possible because upon knowing X1 and X1 + Z, Z can be determined.
a) Determine W_t.
b) Verify that X can be recovered from W_t.
c) Show that X → (F̃1, F̃4) → W_t does not form a Markov chain. Here, F̃1 and F̃4 are all the random variables sent on edge (s, 1) during the coding session. Although node t receives all the information through the edge (s, 1), the Markov chain in c) does not hold.
(Ahlswede et al.)

6. Convolutional network code. In the following network, maxflow(s, t_l) = 3 for l = 0, 1, 2. The max-flow bound asserts that 3 bits can be multicast to all the three sink nodes per unit time. We now describe a network coding scheme that achieves this.

[Figure: the network for Problem 6, with source node s, nodes u0, u1, u2, v0, v1, v2, and sink nodes t0, t1, t2.]

Let 3 bits b0(k), b1(k), b2(k) be generated at node s at time k = 1, 2, · · ·, where we assume without loss of generality that b_l(k) is an element of the finite field GF(2). We adopt the convention that b_l(k) = 0 for k ≤ 0. At time k ≥ 1, information transactions T1 to T11 occur in the following order:

T1. s sends b_l(k) to v_l, l = 0, 1, 2;
T2. v_l sends b_l(k) to u_l, t_{l⊕1}, and t_{l⊕2}, l = 0, 1, 2;
T3. u0 sends b0(k) + b1(k−1) + b2(k−1) to u1;
T4. u1 sends b0(k) + b1(k−1) + b2(k−1) to t2;
T5. u1 sends b0(k) + b1(k) + b2(k−1) to u2;
T6. u2 sends b0(k) + b1(k) + b2(k−1) to t0;
T7. u2 sends b0(k) + b1(k) + b2(k) to u0;
T8. u0 sends b0(k) + b1(k) + b2(k) to t1;
T9. t2 decodes b2(k−1);
T10. t0 decodes b0(k);
T11. t1 decodes b1(k);

where "⊕" denotes modulo 3 addition and "+" denotes modulo 2 addition.
a) Show that the information transactions T1 to T11 can be performed at time k = 1.
b) Show that T1 to T11 can be performed at any time k ≥ 1 by induction on k.
c) Verify that at time k, nodes t0 and t1 can recover b0(k′), b1(k′), and b2(k′) for all k′ ≤ k.
d) Verify that at time k, node t2 can recover b0(k′) and b1(k′) for all k′ ≤ k, and b2(k′) for all k′ ≤ k − 1. Note the unit time delay for t2 to recover b2(k).
(Ahlswede et al.)

Historical Notes

The max-flow bound presented in this chapter was proved by Ahlswede et al., where the point-to-point channels in the network are noiseless. The max-flow bound can be established when the point-to-point channels in the network are discrete memoryless channels. Borade proved the bound with the assumptions that the channels are independent of each other and that the transmissions in the channels are synchronous. Song et al. proved the bound without the latter assumption. These results are network generalizations of the result by Shannon asserting that the capacity of a discrete memoryless channel is not increased by feedback (see Section 7.6), and they imply the asymptotic optimality of separating network coding and channel coding under the corresponding assumptions.
19 Single-Source Linear Network Coding: Acyclic Networks

In the last chapter, we have established the max-flow bound as the fundamental bound for multicasting a single information source in a point-to-point communication network. In the next two chapters, we will construct linear network codes that achieve the max-flow bound at various levels of generality.

A finite field is a system of symbols on which one can perform operations corresponding to the four operations in arithmetic for real numbers, namely addition, subtraction, multiplication, and division. The set of real numbers together with the associated operations are referred to as the field of real numbers, or simply the real field. Unlike the real field that has an infinite number of elements, a finite field has only a finite number of elements. For finite field theory, we refer the reader to the literature. For our discussions here, since we will not make use of the detailed structural properties of a finite field, the reader may by and large regard the algebra on a finite field and the algebra on the real field as the same.

In a linear network code, all the information symbols are regarded as elements of a finite field F called the base field. These include the symbols that comprise the information source as well as the symbols transmitted on the channels. For example, F is taken to be the binary field GF(2) when the information unit is the bit. Furthermore, encoding and decoding are based on linear algebra defined on the base field, so that efficient algorithms for encoding and decoding as well as for code construction can be obtained.

In this chapter, we consider acyclic networks, i.e., networks with no directed cycle. We study the network coding problem in which a message consisting of a finite block of symbols is multicast. We make the ideal assumption that the propagation delay in the network, which includes the processing delay at the nodes and the transmission delay over the channels, is zero. In a general setting, a pipeline of messages may be multicast, and the propagation delay may be non-negligible. If the network is acyclic, then the operations in the network can be so synchronized that sequential messages are processed independently of each other. In this way, the network coding problem is independent of the propagation delay. Therefore, it suffices to study the network coding problem as described. On the other hand, when a network contains directed cycles, the processing and transmission of sequential messages can convolve with each other. Then the amount of delay incurred becomes part of the consideration in network coding. This will be discussed in the next chapter.

19.1 Acyclic Networks

Denote a directed network by G = (V, E), where V and E are the sets of nodes and channels, respectively. A pair of channels (d, e) ∈ E × E is called an adjacent pair if there exists a node t ∈ V such that d ∈ In(t) and e ∈ Out(t). A directed path in G is a sequence of channels

e1, e2, · · · , em (19.1)

such that (e_i, e_{i+1}) is an adjacent pair for all 1 ≤ i < m. Let e1 ∈ Out(t) and em ∈ In(t′). The sequence in (19.1) is called a directed path from e1 to em, or equivalently, a directed path from node t to node t′. If t = t′, then the directed path is called a directed cycle. A directed network G is cyclic if it contains a directed cycle; otherwise G is acyclic.

Acyclic networks are easier to handle because the nodes in the network can be ordered in a way which allows encoding at the nodes to be carried out in a sequential and consistent manner. The following proposition and its proof describe such an order.
Proposition 19.1. If G is a finite directed acyclic graph, then it is possible to order the nodes of G in a sequence such that if there is an edge from node i to node j, then node i appears before node j in the sequence.

Proof. We partition the set V into subsets V1, V2, · · ·, such that node i is in V_k if and only if the length of a longest directed path ending at node i is equal to k. We first prove that if node i is in V_{k′} and node j is in V_k such that there exists a directed path from node i to node j, then k′ < k. Since the length of a longest directed path ending at node i is equal to k′ and there exists a directed path from node i to node j (with length at least equal to 1), there exists a directed path ending at node j with length at least k′ + 1. As node j is in V_k, we have

k′ + 1 ≤ k, (19.2)

so that

k′ < k. (19.3)

Hence, by listing the nodes of G in a sequence such that the nodes in V_{k′} appear before the nodes in V_k if k′ < k, where the order of the nodes within each V_k is arbitrary, we obtain an order of the nodes of G with the desired property. □

Following the direction of the edges, we will refer to an order prescribed by Proposition 19.1 as an upstream-to-downstream order.¹ For a given acyclic network, such an order (not unique) is implicitly assumed. The nodes in the network encode according to this order, referred to as the encoding order. Then whenever a node encodes, all the information needed will have already been received on the input channels of that node.

¹ Also called an ancestral order in graph theory.

Example 19.2. Consider ordering the nodes in the butterfly network in Figure 17.1 by the sequence

s, 2, 1, 3, 4, t2, t1. (19.4)

It is easy to check that in this sequence, if there is a directed path from node i to node j, then node i appears before node j.
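The ordering in the proof of Proposition 19.1 is directly computable: place node i in V_k where k is the length of a longest directed path ending at i, and list the V_k in increasing k. The sketch below does exactly this; the data layout is ours, and the example graph is the butterfly network of Example 19.2.

```python
# A minimal sketch of the upstream-to-downstream ordering from the proof
# of Proposition 19.1, via longest-path layering on an acyclic graph.
from collections import defaultdict

def upstream_to_downstream(nodes, edges):
    preds = defaultdict(list)
    for i, j in edges:
        preds[j].append(i)
    depth = {}
    def longest_ending_at(i):       # memoized longest-path length to i
        if i not in depth:
            depth[i] = 1 + max((longest_ending_at(p) for p in preds[i]),
                               default=0)
        return depth[i]
    return sorted(nodes, key=longest_ending_at)

nodes = ['s', '1', '2', '3', '4', 't1', 't2']
edges = [('s','1'), ('s','2'), ('1','3'), ('2','3'), ('3','4'),
         ('1','t1'), ('4','t1'), ('2','t2'), ('4','t2')]
order = upstream_to_downstream(nodes, edges)
pos = {v: n for n, v in enumerate(order)}
assert all(pos[i] < pos[j] for i, j in edges)   # every edge goes forward
print(order)
```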
19.2 Linear Network Codes

In this section, we formulate a linear network code on an acyclic network G. By allowing parallel channels between a pair of nodes, we assume without loss of generality that all the channels in the network have unit capacity, i.e., one symbol in the base field F can be transmitted on each channel.

There exists a unique node s in G, called the source node, where a message consisting of ω symbols taken from the base field F is generated. To avoid triviality, we assume that every non-source node has at least one input channel. As in Section 18.3, we assume that there is no loop in G, and there is no input channel at node s. To facilitate our discussion, however, we let In(s) be a set of ω imaginary channels that terminate at node s but have no originating nodes. The reader may think of the ω symbols forming the message as being received by source node s on these ω imaginary channels. We emphasize that these imaginary channels are not part of the network, and the number of these channels is context dependent. Figure 19.1(a) illustrates the butterfly network with ω = 2 imaginary channels appended at source node s.

Two directed paths P1 and P2 in G are edge-disjoint if the two paths do not share a common channel. It is not difficult to see from the conservation conditions in (18.4) that for a non-source node t, the maximum number of edge-disjoint paths from node s to node t is equal to maxflow(t).

The message generated at source node s, consisting of ω symbols in the base field F, is represented by a row ω-vector x ∈ F^ω. Based on the value of x, source node s transmits a symbol over each output channel. Encoding at the nodes in the network is carried out according to a certain upstream-to-downstream order. At a node in the network, the ensemble of received symbols is mapped to a symbol in F specific to each output channel, and the symbol is sent on that channel. The following definition of a network code formally describes this mechanism. Since the code so defined is not necessarily linear, the base field F can be regarded in this context as any finite alphabet.

Fig. 19.1. (a) Two imaginary channels are appended to the source node of the butterfly network. (b) A 2-dimensional network code for the butterfly network.

Definition 19.3 (Local Description of a Network Code). An ω-dimensional network code on an acyclic network over a base field F consists of a local encoding mapping

k̃_e : F^{|In(t)|} → F (19.5)

for every channel e in the network, where e ∈ Out(t).

With the encoding mechanism as described, the local encoding mappings derive recursively the symbols transmitted over all channels e, denoted by f̃_e(x). The above definition of a network code does not explicitly give the values of f̃_e(x), whose mathematical properties are at the focus of the present discussion. Therefore, we also present an equivalent definition below, which describes a network code in terms of both the local encoding mechanisms as well as the recursively derived values f̃_e(x).

Definition 19.4 (Global Description of a Network Code). An ω-dimensional network code on an acyclic network over a base field F consists of a local encoding mapping

k̃_e : F^{|In(t)|} → F (19.6)

and a global encoding mapping

f̃_e : F^ω → F (19.7)

for each channel e in the network, where e ∈ Out(t), such that:

(19.8) For every node t and every channel e ∈ Out(t), f̃_e(x) is uniquely determined by (f̃_d(x) : d ∈ In(t)) via the local encoding mapping k̃_e.
(19.9) The mappings f̃_e for the ω imaginary channels e ∈ In(s) project F^ω onto the distinct dimensions of F^ω.

Example 19.5. Let x = [b1 b2] denote a generic row vector in GF(2)². Figure 19.1(b) shows a 2-dimensional binary network code for the butterfly network with the following global encoding mappings:

f̃_e(x) = b1 for e = (o, s), (s, t), (t, w), (t, y), (19.10)
f̃_e(x) = b2 for e = (o, s)′, (s, u), (u, w), (u, z), (19.11)
f̃_e(x) = b1 + b2 for e = (w, x), (x, y), (x, z), (19.12)

where (o, s) and (o, s)′ denote the two imaginary channels at node s. The corresponding local encoding mappings are

k̃_{(s,t)}(b1, b2) = b1, k̃_{(s,u)}(b1, b2) = b2, (19.13)
k̃_{(t,w)}(b1) = k̃_{(t,y)}(b1) = b1, (19.14)
k̃_{(u,w)}(b2) = k̃_{(u,z)}(b2) = b2, k̃_{(w,x)}(b1, b2) = b1 + b2, (19.15)

etc.

When a global encoding mapping f̃_e is linear, it corresponds to a column ω-vector f_e such that f̃_e(x) is the product x · f_e, where the row ω-vector x is the message generated at node s. Similarly, when a local encoding mapping k̃_e, where e ∈ Out(t), is linear, it corresponds to a column |In(t)|-vector k_e such that k̃_e(y) = y · k_e, where y ∈ F^{|In(t)|} is the row vector representing the symbols received at node t. In an ω-dimensional network code on an acyclic network, if all the local encoding mappings are linear, then so are the global encoding mappings, since they are functional compositions of the local encoding mappings. The converse is also true: if the global encoding mappings are all linear, then so are the local encoding mappings. We leave the proof as an exercise.
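The encoding mechanism of Definitions 19.3 and 19.4 can be traced on the code of Example 19.5. The sketch below, with a data layout of our own, applies the local encoding mappings in an upstream-to-downstream order and checks that both sink nodes y and z recover the message.

```python
# A minimal sketch of the 2-dimensional binary network code of Example 19.5
# on the butterfly network, driven entirely by local encoding mappings.

def run_network(b1, b2):
    f = {('o','s'): b1, ("o'",'s'): b2}          # imaginary channels
    # local encoding mappings, applied in an upstream-to-downstream order
    f[('s','t')] = f[('o','s')]
    f[('s','u')] = f[("o'",'s')]
    f[('t','w')] = f[('t','y')] = f[('s','t')]
    f[('u','w')] = f[('u','z')] = f[('s','u')]
    f[('w','x')] = f[('t','w')] ^ f[('u','w')]   # the only coding node
    f[('x','y')] = f[('x','z')] = f[('w','x')]
    return f

for b1 in (0, 1):
    for b2 in (0, 1):
        f = run_network(b1, b2)
        # node y receives b1 and b1+b2; node z receives b2 and b1+b2
        assert (f[('t','y')], f[('t','y')] ^ f[('x','y')]) == (b1, b2)
        assert (f[('u','z')] ^ f[('x','z')], f[('u','z')]) == (b1, b2)
```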
In the following, we formulate a linear network code as a network code whose local and global encoding mappings are all linear. Again, both the local and global descriptions are presented even though they are equivalent. The global description of a linear network code will be very useful when we construct such codes in Section 19.4. Definition 19.6 (Local Description of a Linear Network Code). An ω-dimensional linear network code on an acyclic network over a base field F consists of a scalar kd,e, called the local encoding kernel, for every adjacent pair of channels (d, e) in the network. The |In(t)| × |Out(t)| matrix 440 19 Single-Source Linear Network Coding: Acyclic Networks Kt = [kd,e]d∈In(t),e∈Out(t) (19.16) is called the local encoding kernel at node t. Note that the matrix structure of Kt implicitly assumes an ordering among the channels. Definition 19.7 (Global Description of a Linear Network Code). An ω-dimensional linear network code on an acyclic network over a base field F consists of a scalar kd,e for every adjacent pair of channels (d, e) in the network as well as a column ω-vector fe for every channel e such that: (19.17) fe = P d∈In(t) kd,e fd for e ∈Out(t). (19.18) The vectors fe for the ω imaginary channels e ∈In(s) form the stan-dard basis of the vector space F ω. The vector fe is called the global encoding kernel for channel e. We now explain how the global description above specifies the linear net-work code. Initially, source node s generates a message x as a row ω-vector. In view of (19.18), the symbols in x are regarded as being received by source node s on the imaginary channels as x·fd, d ∈In(s). Starting at source node s, any node t in the network receives the symbols x · fd, d ∈In(t), from which it calculates the symbol x · fe for sending on each channel e ∈Out(t) via the linear formula x · fe = x X d∈In(t) kd,e fd = X d∈In(t) kd,e(x · fd), (19.19) where the first equality follows from (19.17). In this way, the symbol x · fe is transmitted on any channel e (which may be an imaginary channel) in the network. Given the local encoding kernels for all the channels in an acyclic network, the global encoding kernels can be calculated recursively in any upstream-to-downstream order by (19.17), while (19.18) provides the boundary conditions. Remark A partial analogy can be drawn between the global encoding kernels for the channels in a linear network code and the columns of a generator matrix of a linear block code in algebraic coding theory . The former are indexed by the channels in the network, while the latter are indexed by “time.” However, the global encoding kernels in a linear network code are constrained by the network topology via (19.17), while the columns in the generator matrix of a linear block code in general are not subject to any such constraint. The following two examples illustrate the relation between the local en-coding kernels and the global encoding kernels of a linear network code. The reader should understand these two examples thoroughly before proceeding to the next section. 19.2 Linear Network Codes 441 Kt = ! " 1 1 Kx = ! " 1 1 s t u w y z x # $ % & ' ( 1 1 # $ % & ' ( 1 1 f(o,s)) = # $ % & ' ( 1 0 f(o,s) = # $ % & ' ( 0 1 Ks = # $ % & ' ( 1 0 0 1 Ku= ! " 1 1 Kw = # $ % & ' ( 1 1 # $ % & ' ( 1 1 # $ % & ' ( 1 0 # $ % & ' ( 1 0 # $ % & ' ( 1 0 # $ % & ' ( 0 1 # $ % & ' ( 0 1 # $ % & ' ( 0 1 Fig. 19.2. The global and local encoding kernels for the 2-dimensional linear net-work code in Example 19.8. Example 19.8. 
Example 19.8. The network code in Figure 19.1(b) is in fact linear. Assume the alphabetical order among the channels (o, s), (o, s)′, (s, t), · · ·, (x, z). Then the local encoding kernels at the nodes are

K_s = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad K_t = K_u = K_x = \begin{bmatrix} 1 & 1 \end{bmatrix}, \quad K_w = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. (19.20)

The corresponding global encoding kernels are

f_e = \begin{bmatrix} 1 \\ 0 \end{bmatrix} for e = (o, s), (s, t), (t, w), and (t, y);

f_e = \begin{bmatrix} 0 \\ 1 \end{bmatrix} for e = (o, s)′, (s, u), (u, w), and (u, z);

f_e = \begin{bmatrix} 1 \\ 1 \end{bmatrix} for e = (w, x), (x, y), and (x, z). (19.21)

The local/global encoding kernels are summarized in Figure 19.2. In fact, they describe a 2-dimensional linear network code regardless of the choice of the base field.

Example 19.9. For a general 2-dimensional linear network code on the network in Figure 19.2, the local encoding kernels at the nodes can be expressed as

K_s = \begin{bmatrix} a & c \\ b & d \end{bmatrix}, \quad K_t = \begin{bmatrix} e & f \end{bmatrix}, \quad K_u = \begin{bmatrix} g & h \end{bmatrix}, (19.22)

K_w = \begin{bmatrix} i \\ j \end{bmatrix}, \quad K_x = \begin{bmatrix} k & l \end{bmatrix}, (19.23)
Consider a cut U between source node s and a collection T of non-source nodes, and let EU be the set of edges across the cut U as in (18.7). Then VT is a linear transformation of VEU , where dim(VT ) ≤dim(VEU ) ≤|EU| (19.35) Minimizing over all the cuts between s and T and invoking the max-flow min-cut theorem, we have dim(VT ) ≤maxflow(T). (19.36) On the other hand, VT is a linear transformation of Vs = F ω. Therefore, 444 19 Single-Source Linear Network Coding: Acyclic Networks dim(VT ) ≤dim(Vs) = ω. (19.37) Then the proof is completed by combining (19.36) and (19.37). ⊓ ⊔ For a collection of channels ξ ⊂E (i.e., not including the imaginary chan-nels), we denote by maxflow(ξ) the value of a max-flow from source node s to ξ. Theorem 19.10 has the following straightforward corollary. Corollary 19.11. For an ω-dimensional linear network code on an acyclic network, for any collection of channels ξ ⊂E, dim(Vξ) ≤min{ω, maxflow(ξ)}. (19.38) Whether the max-flow bound in Theorem 19.10 or Corollary 19.11 is achievable depends on the network topology, the dimension ω, and the coding scheme. Three special classes of linear network codes are defined below by the achievement of this bound to three different extents. Definition 19.12. An ω-dimensional linear network code on an acyclic net-work qualifies as a linear multicast, a linear broadcast, or a linear dispersion, respectively, if the following hold: (19.39) dim(Vt) = ω for every non-source node t with maxflow(t) ≥ω. (19.40) dim(Vt) = min{ω, maxflow(t)} for every non-source node t. (19.41) dim (VT ) = min{ω, maxflow(T)} for every collection T of non-source nodes. For a set ξ of channels, including possibly the imaginary channels, let Fξ = fe e∈ξ (19.42) be the ω × |ξ| matrix obtained by putting fe, e ∈ξ in juxtaposition. For a node t, the symbols x · fe, e ∈In(t) are received on the input channels. Equivalently, the row |In(t)|-vector x · FIn(t) (19.43) is received. Obviously, the message x, consisting of ω information units, can be uniquely determined at the node if and only if the rank of FIn(t) is equal to ω, i.e., dim(Vt) = ω. (19.44) The same applies to a collection T of non-source nodes. For a linear multicast, a node t can decode the message x if and only if maxflow(t) ≥ω. For a node t with maxflow(t) < ω, nothing is guaranteed. An application of an ω-dimensional linear multicast is for multicasting infor-mation at rate ω to all (or some of) those non-source nodes with max-flow at least equal to ω. 19.3 Desirable Properties of a Linear Network Code 445 For a linear broadcast, like a linear multicast, a node t can decode the message x if and only if maxflow(t) ≥ω. For a node t with maxflow(t) < ω, the set of all received vectors, namely {x · FIn(t) : x ∈F ω}, (19.45) form a vector subspace of F ω with dimension equal to maxflow(t), but there is no guarantee on which such subspace is actually received2. An application of linear broadcast is for multicasting information on a network at a variable rate (see Problem 14). A random version of linear broadcast (to be discussed in Section 19.4) is also useful for identifying the max-flow of a non-source in an unknown network topology . For a linear dispersion, a collection T of non-source nodes can decode the message x if and only if maxflow(T) ≥ω. If maxflow(T) < ω, the collection T receives a vector subspace with dimension equal to maxflow(T). Again, there is no guarantee on which such subspace is actually received. 
An application of linear dispersion is in a two-tier network system consisting of the backbone network and a number of local area networks (LANs), where each LAN is connected to one or more nodes on the backbone network. An information source with rate ω, generated at a node s in the backbone network, is to be transmitted to every user on the LANs. With a linear dispersion on the back-bone network, every user on a LAN can receive the information source as long as the LAN acquires through the backbone network an aggregated max-flow from node s at least equal to ω. Moreover, new LANs can be established under the same criterion without modifying the linear dispersion on the backbone network. Note that for all the three classes of linear network codes in Defini-tion 19.12, a sink node is not explicitly identified. Also, it is immediate from the definition that every linear dispersion is a linear broadcast, and every linear broadcast is a linear multicast. The example below shows that a lin-ear broadcast is not necessarily a linear dispersion, a linear multicast is not necessarily a linear broadcast, and a linear network code is not necessarily a linear multicast. Example 19.13. Figure 19.4(a) shows a 2-dimensional linear dispersion on an acyclic network with the global encoding kernels as prescribed. Figure 19.4(b) shows a 2-dimensional linear broadcast on the same network that is not a linear dispersion because maxflow({t, u}) = 2 = ω, (19.46) while the global encoding kernels of the channels in In(t)∪In(u) span only a 1-dimensional subspace. Figure 19.4(c) shows a 2-dimensional linear multicast that is not a linear broadcast since node u receives no information at all. Finally, the 2-dimensional linear network code in Figure 19.4(d) is not a linear multicast. 2 Here F ω refers to the row vector space. 446 19 Single-Source Linear Network Coding: Acyclic Networks ! " # $ % & 0 1 ! " # $ % & 0 1 ! " # $ % & 1 0 ! " # $ % & 1 0 ! " # $ % & 0 0 ! " # $ % & 0 1 ! " # $ % & 0 1 ! " # $ % & 0 0 ! " # $ % & 0 0 ! " # $ % & 1 0 ! " # $ % & 0 1 ! " # $ % & 0 1 ! " # $ % & 0 1 ! " # $ % & 1 0 ! " # $ % & 0 1 ! " # $ % & 0 1 ! " # $ % & 1 0 ! " # $ % & 0 1 ! " # $ % & 0 0 ! " # $ % & 0 0 t u s t u w (a) (b) t u t u (c) (d) w w w s s s Fig. 19.4. (a) A 2-dimensional linear dispersion over an acyclic network. (b) A 2-dimensional linear broadcast that is not a linear dispersion. (c) A 2-dimensional linear multicast that is not a linear broadcast. (d) A 2-dimensional linear network code that is not a linear multicast. Example 19.14. The linear network code in Example 19.8 meets all the criteria (19.39) through (19.41) in Definition 19.12. Thus it is a 2-dimensional linear dispersion, and hence also a linear broadcast and linear multicast, regardless of the choice of the base field. The same applies to the linear network code in Figure 19.4(a). Example 19.15. The general linear network code in Example 19.9 meets the criterion (19.39) for a linear multicast when • f(t,w) and f(u,w) are linearly independent; • f(t,y) and f(x,y) are linearly independent; • f(u,z) and f(x,z) are linearly independent. Equivalently, the criterion says that e, f, g, h, k, l, ad−bc, abei+adgj −baei− bcgj, and daei + dcgj −cbei −cdgj are all nonzero. Example 19.8 has been the special case with a = d = e = f = g = h = i = j = k = l = 1 (19.47) and b = c = 0. (19.48) 19.3 Desirable Properties of a Linear Network Code 447 Transformation of a Linear Network Code Consider an ω-dimensional linear network code C on an acyclic network. 
Sup-pose source node s, instead of encoding the message x, encodes x′ = xA, (19.49) where A is an invertible ω×ω matrix. Then the symbol sent on a channel e ∈E is given by x′ · fe = (xA) · fe = x · (A fe). (19.50) This gives a new linear network code C′ with respect to the message x with global encoding kernels f ′ e = A fe if e ∈E fe if e ∈In(s). (19.51) Recall the definition of the matrix Fξ in (19.42) for a set of channels ξ. Then (19.17) can be written in matrix form as FOut(t) = FIn(t)Kt (19.52) for all nodes t, where Kt is the local encoding kernel at node t. Similarly, letting F ′ ξ = f ′ e e∈ξ , (19.53) we obtain from (19.51) that F ′ Out(t) = AFOut(t) (19.54) for all nodes t, F ′ In(t) = AFIn(t) (19.55) for all nodes t ̸= s, and F ′ In(s) = FIn(s). (19.56) For a node t ̸= s, from (19.54), (19.52), and (19.55), F ′ Out(t) = AFOut(t) (19.57) = A(FIn(t)Kt) (19.58) = (AFIn(t))Kt (19.59) = F ′ In(t)Kt. (19.60) Since fe, e ∈In(s) form the standard basis of F ω, FIn(s) = F ′ In(s) = I, (19.61) the ω × ω identity matrix. It then follows from (19.54), (19.52), and (19.61) that 448 19 Single-Source Linear Network Coding: Acyclic Networks F ′ Out(s) = AFOut(s) (19.62) = A(FIn(s)Ks) (19.63) = AKs (19.64) = F ′ In(s)(AKs). (19.65) Comparing (19.60) and (19.65) with (19.52), we see that the local encoding kernels of C′ are given by K′ t =  Kt if t ̸= s AKs if t = s. (19.66) The network code C′ is called the transformation of the network code C by the (invertible) matrix A. In view of Definition 19.12, the requirements of a linear multicast, a linear broadcast, and a linear dispersion are all in terms of the linear independence among the global encoding kernels. We leave it as an exercise for the reader to show that if a network code is a linear multicast, broadcast, or dispersion, then any transformation of it is also a linear multicast, broadcast, or dispersion, respectively. Suppose C is an ω-dimensional linear multicast and let C′ be a transfor-mation of C. When the network code C′ is employed, the message x can be decoded by any node t with maxflow(t) ≥ω, because from the foregoing C′ is also a linear multicast. For the purpose of multicasting, there is no difference between C and C′, and they can be regarded as equivalent. If C is an ω-dimensional linear broadcast and C′ is a transformation of C, then C′ is also an ω-dimensional linear broadcast. However, C as a linear broadcast may deliver to a particular node t with maxflow(t) < ω a certain subset of symbols in the message x, while C′ may not be able to achieve the same. Then whether C and C′ can be regarded as equivalent depends on the specific requirements of the application. As an example, the linear network code in Figure 19.1(b) delivers b1 to node t. However, taking a transformation of the network code with matrix A = 1 0 1 1  , (19.67) the resulting network code can no longer deliver b1 to node t, although nodes w, v, and z can continue to decode both b1 and b2. Implementation of a Linear Network Code In implementation of a linear network code, be it a linear multicast, a linear broadcast, a linear dispersion, or any linear network code, in order that the code can be used as intended, the global encoding kernels fe, e ∈In(t) must be known by each node t if node t is to recover any useful information from the symbols received on the input channels. These global encoding kernels can be made available ahead of time if the code is already decided. 
Alternatively, they 19.4 Existence and Construction 449 can be delivered through the input channels if multiple usage of the network is allowed. One possible way to deliver the global encoding kernels to node t in a coding session of length n, where n > ω, is as follows. At time k = 1, 2, · · · , ω, the source node transmits the dummy message mk, a row ω-vector with all the components equal to 0 except that the kth component is equal to 1. Note that      m1 m2 . . . mω     = Iω, (19.68) the ω × ω identity matrix. At time k = ω + i, where i = 1, 2, · · · , n −ω, the source node transmits the message xi. Then throughout the coding session, node t receives               m1 m2 . . . mω x1 x2 . . . xn−ω               FIn(t) =        Iω x1 x2 . . . xn−ω        FIn(t) =        FIn(t) x1 · FIn(t) x2 · FIn(t) . . . xn−ω · FIn(t)        (19.69) on the input channels. In other words, the global encoding kernels of the input channels at node t are received at the beginning of the coding session. This applies to all the sink nodes in the network simultaneously because the ω dummy messages do not depend on the particular node t. If FIn(t) has full rank, then node t can start to decode x1 upon receiving x1 · FIn(t). Since n −ω messages are transmitted in a coding session of length n, the utilization of the network is equal to (n −ω)/n, which tends to 1 as n →∞. That is, the overhead for delivering the global encoding kernels through the network is asymptotically negligible. 19.4 Existence and Construction For a given acyclic network, the following three factors dictate the existence of an ω-dimensional linear network code with a prescribed set of desirable properties: • the value of ω, • the network topology, • the choice of the base field F. 450 19 Single-Source Linear Network Coding: Acyclic Networks s u2 u1 t3 t1 t4 t2 t5 t6 u4 u3 ! " # $ % & 1 1 z y ! " # $ % & 4 4 z y ! " # $ % & 2 2 z y ! " # $ % & 3 3 z y Fig. 19.5. A network with a 2-dimensional ternary linear multicast but without a 2-dimensional binary linear multicast. We begin with an example illustrating the third factor. Example 19.16. On the network in Figure 19.5, a 2-dimensional ternary linear multicast can be constructed by the following local encoding kernels at the nodes: Ks = 0 1 1 1 1 0 1 2  and Kui = 1 1 1 (19.70) for 1 ≤i ≤4. On the other hand, we can prove the nonexistence of a 2-dimensional binary linear multicast on this network as follows. Assuming the contrary that a 2-dimensional binary linear multicast exists, we will derive a contradiction. Let the global encoding kernel f(s,ui) = [ yi zi ]⊤for 1 ≤i ≤4. Since maxflow(tk) = 2 for all 1 ≤k ≤6, the global encoding kernels for the two input channels to each node tk must be linearly independent. Thus, if node tk is at the downstream of both nodes ui and uj, then the two vectors [ yi zi ]⊤and [ yi zi ]⊤must be linearly independent. As each node tk is at the downstream of a different pair of nodes among u1, u2, u3, and u4, the four vectors [ yi zi ]⊤, 1 ≤i ≤4, are pairwise linearly independent, and consequently, must be four distinct vectors in GF(2)2. Then one of them must be [ 0 0 ]⊤since there are only four vectors in GF(2)2. This contradicts the pairwise linear independence among the four vectors. 
In order for the linear network code to qualify as a linear multicast, a linear broadcast, or a linear dispersion, it is required that certain collections of global encoding kernels span the maximum possible dimensions. This is equivalent to certain polynomial functions taking nonzero values, where the indeterminates of these polynomials are the local encoding kernels. To fix ideas, take $\omega = 3$, consider a node $t$ with two input channels, and put the global encoding kernels of these two channels in juxtaposition to form a $3 \times 2$ matrix. Then this matrix attains the maximum possible rank of 2 if and only if there exists a $2 \times 2$ submatrix with nonzero determinant.

According to the local description, a linear network code is specified by the local encoding kernels, and the global encoding kernels can be derived recursively in an upstream-to-downstream order. From Example 19.8, it is not hard to see that every component in a global encoding kernel is a polynomial function whose indeterminates are the local encoding kernels.

When a nonzero value of a polynomial function is required, it does not merely mean that at least one coefficient in the polynomial is nonzero. Rather, it means a way to choose scalar values for the indeterminates so that the polynomial function is evaluated to a nonzero scalar value. When the base field is small, certain polynomial equations may be unavoidable. For instance, for any prime number $p$, the polynomial equation $z^p - z = 0$ is satisfied for any $z \in GF(p)$. The nonexistence of a binary linear multicast in Example 19.16 can also trace its root to a set of polynomial equations that cannot be avoided simultaneously over $GF(2)$. However, when the base field is sufficiently large, every nonzero polynomial function can indeed be evaluated to a nonzero value with a proper choice of the values taken by the set of indeterminates involved. This is formally stated in the following elementary lemma, which will be instrumental in the proof of Theorem 19.20 asserting the existence of a linear multicast on an acyclic network when the base field is sufficiently large.

Lemma 19.17. Let $g(z_1, z_2, \cdots, z_n)$ be a nonzero polynomial with coefficients in a field $F$. If $|F|$ is greater than the degree of $g$ in every $z_j$ for $1 \leq j \leq n$, then there exist $a_1, a_2, \cdots, a_n \in F$ such that
$$g(a_1, a_2, \cdots, a_n) \neq 0. \qquad (19.71)$$

Proof. The lemma is proved by induction on $n$. For $n = 0$, $g$ is a nonzero constant in $F$, and the lemma is obviously true. Assume that the lemma is true for $n - 1$ for some $n \geq 1$. Express $g(z_1, z_2, \cdots, z_n)$ as a polynomial in $z_n$ with coefficients in the polynomial ring $F[z_1, z_2, \cdots, z_{n-1}]$, i.e.,
$$g(z_1, z_2, \cdots, z_n) = h(z_1, z_2, \cdots, z_{n-1})\, z_n^k + \cdots, \qquad (19.72)$$
where $k$ is the degree of $g$ in $z_n$ and the leading coefficient $h(z_1, z_2, \cdots, z_{n-1})$ is a nonzero polynomial in $F[z_1, z_2, \cdots, z_{n-1}]$. By the induction hypothesis, there exist $a_1, a_2, \cdots, a_{n-1} \in F$ such that $h(a_1, a_2, \cdots, a_{n-1}) \neq 0$. Thus $g(a_1, a_2, \cdots, a_{n-1}, z)$ is a nonzero polynomial in $z$ with degree $k < |F|$. Since this polynomial cannot have more than $k$ roots in $F$ and $|F| > k$, there exists $a_n \in F$ such that
$$g(a_1, a_2, \cdots, a_{n-1}, a_n) \neq 0. \qquad (19.73)$$
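As a small numerical companion to Lemma 19.17 (the polynomial $g$ and the field are my own choices, not from the text): over a field larger than every per-variable degree, a uniformly random assignment already evaluates a nonzero polynomial to a nonzero value most of the time, so an assignment as guaranteed by the lemma is easy to exhibit.

```python
# Random search for a nonzero evaluation point of a nonzero polynomial.
import random

p = 17                                        # |F| exceeds deg_{z_j}(g) for all j

def g(z1, z2, z3):
    return (z1 * z1 * z2 + z2 * z3 + 3) % p   # a nonzero polynomial over GF(17)

trials = 100000
hits = sum(g(random.randrange(p), random.randrange(p), random.randrange(p)) != 0
           for _ in range(trials))
print(f"empirical Pr[g != 0] ~ {hits / trials:.3f}")   # close to 1
```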
Corollary 19.18. Let $g(z_1, z_2, \cdots, z_n)$ be a nonzero polynomial with coefficients in a field $F$ with $|F| > m$, where $m$ is the highest degree of $g$ in $z_j$ for $1 \leq j \leq n$. Let $a_1, a_2, \cdots, a_n$ be chosen independently according to the uniform distribution on $F$. Then
$$\Pr\{g(a_1, a_2, \cdots, a_n) \neq 0\} \geq \left(1 - \frac{m}{|F|}\right)^n. \qquad (19.74)$$
In particular,
$$\Pr\{g(a_1, a_2, \cdots, a_n) \neq 0\} \to 1 \qquad (19.75)$$
as $|F| \to \infty$.

Proof. The first part of the corollary is proved by induction on $n$. For $n = 0$, $g$ is a nonzero constant in $F$, and the proposition is obviously true. Assume that the proposition is true for $n - 1$ for some $n \geq 1$. From (19.72) and the induction hypothesis, we see that
$$\Pr\{g(z_1, \cdots, z_n) \neq 0\} \geq \Pr\{h(z_1, \cdots, z_{n-1}) \neq 0\}\,\Pr\{g(z_1, \cdots, z_n) \neq 0 \mid h(z_1, \cdots, z_{n-1}) \neq 0\} \qquad (19.76)\text{--}(19.78)$$
$$\geq \left(1 - \frac{m}{|F|}\right)^{n-1} \Pr\{g(z_1, \cdots, z_n) \neq 0 \mid h(z_1, \cdots, z_{n-1}) \neq 0\} \qquad (19.79)\text{--}(19.80)$$
$$\geq \left(1 - \frac{m}{|F|}\right)^{n-1} \left(1 - \frac{m}{|F|}\right) \qquad (19.81)$$
$$= \left(1 - \frac{m}{|F|}\right)^{n}. \qquad (19.82)$$
This proves the first part of the corollary. As $n$ is fixed, the lower bound above tends to 1 as $|F| \to \infty$. This completes the proof. ⊓⊔

Example 19.19. Recall the 2-dimensional linear network code in Example 19.9 that is expressed in the 12 indeterminates $a, b, c, \cdots, l$. Place the vectors $f_{(t,w)}$ and $f_{(u,w)}$ in juxtaposition to form the $2 \times 2$ matrix
$$L_w = \begin{bmatrix} ae & cg \\ be & dg \end{bmatrix}, \qquad (19.83)$$
the vectors $f_{(t,y)}$ and $f_{(x,y)}$ the $2 \times 2$ matrix
$$L_y = \begin{bmatrix} af & aeik + cgjk \\ bf & beik + dgjk \end{bmatrix}, \qquad (19.84)$$
and the vectors $f_{(u,z)}$ and $f_{(x,z)}$ the $2 \times 2$ matrix
$$L_z = \begin{bmatrix} aeil + cgjl & ch \\ beil + dgjl & dh \end{bmatrix}. \qquad (19.85)$$
Clearly,
$$\det(L_w) \cdot \det(L_y) \cdot \det(L_z) \neq 0 \in F[a, b, c, \cdots, l]. \qquad (19.86)$$
Applying Lemma 19.17 to the polynomial on the left-hand side above, we can set scalar values for the 12 indeterminates so that it is evaluated to a nonzero value in $F$ when $F$ is sufficiently large. This implies that the determinants on the left-hand side of (19.86) are evaluated to nonzero values in $F$ simultaneously. Thus these scalar values yield a 2-dimensional linear multicast. In fact,
$$\det(L_w) \cdot \det(L_y) \cdot \det(L_z) = 1 \qquad (19.87)$$
when
$$b = c = 0 \qquad (19.88)$$
and
$$a = d = e = f = \cdots = l = 1. \qquad (19.89)$$
Therefore, the 2-dimensional linear network code depicted in Figure 19.2 is a linear multicast, and this fact is regardless of the choice of the base field $F$.

Theorem 19.20. There exists an $\omega$-dimensional linear multicast on an acyclic network for sufficiently large base field $F$.

Proof. For a directed path $P = e_1, e_2, \cdots, e_m$, define
$$K_P = \prod_{1 \leq j < m} k_{e_j, e_{j+1}}.$$

Algorithm 19.25 (Jaggi–Sanders Algorithm). This algorithm constructs an $\omega$-dimensional linear multicast over a finite field $F$ with $|F| > \eta$, the number of non-source nodes $t$ in the network with $\mathrm{maxflow}(t) \geq \omega$. Denote these $\eta$ non-source nodes by $t_1, t_2, \cdots, t_\eta$. A sequence of channels $e_1, e_2, \cdots, e_l$ is called a path leading to a node $t_q$ if $e_1 \in \mathrm{In}(s)$, $e_l \in \mathrm{In}(t_q)$, and $(e_j, e_{j+1})$ is an adjacent pair for all $1 \leq j \leq l - 1$. For each $q$, $1 \leq q \leq \eta$, there exist $\omega$ edge-disjoint paths $P_{q,1}, P_{q,2}, \cdots, P_{q,\omega}$ leading to $t_q$. Altogether there are $\eta\omega$ such paths. The following procedure assigns a global encoding kernel $f_e$ to every channel $e$ in the network in an upstream-to-downstream order such that $\dim(V_{t_q}) = \omega$ for $1 \leq q \leq \eta$.

{
  // By definition, the global encoding kernels of the ω imaginary
  // channels form the standard basis of F^ω.
  for (q = 1; q ≤ η; q++)
    for (i = 1; i ≤ ω; i++)
      e_{q,i} = the imaginary channel initiating path P_{q,i};
      // This initializes e_{q,i}. Subsequently, e_{q,i} will be
      // dynamically updated by moving down path P_{q,i}
      // until it finally becomes a channel in In(t_q).
  for (every node t, in any upstream-to-downstream order)
  {
    for (every channel e ∈ Out(t))
    {
      // With respect to this channel e, define a "pair" as a pair (q, i)
      // of indices such that channel e is on the path P_{q,i}. Note that
      // for each q, there exists at most one pair (q, i). Thus the number
      // of pairs is at least 0 and at most η. Since the nodes t are
      // chosen in an upstream-to-downstream order, if (q, i) is a pair,
      // then e_{q,i} ∈ In(t) by induction, so that f_{e_{q,i}} ∈ V_t. For
      // reasons to be explained in the algorithm verification below,
      // f_{e_{q,i}} ∉ ⟨{f_{e_{q,j}} : j ≠ i}⟩, and therefore
      // f_{e_{q,i}} ∈ V_t \ ⟨{f_{e_{q,j}} : j ≠ i}⟩.
      choose a vector w in V_t such that
        w ∉ ⟨{f_{e_{q,j}} : j ≠ i}⟩ for every pair (q, i);
      // To see the existence of such a vector w, let dim(V_t) = ν.
      // Then dim(V_t ∩ ⟨{f_{e_{q,j}} : j ≠ i}⟩) ≤ ν − 1 for every pair
      // (q, i) since f_{e_{q,i}} ∈ V_t \ ⟨{f_{e_{q,j}} : j ≠ i}⟩. Thus
      //   |V_t ∩ (∪_{(q,i): a pair} ⟨{f_{e_{q,j}} : j ≠ i}⟩)|
      //     ≤ η |F|^{ν−1} < |F|^ν = |V_t|.
      f_e = w;
      // This is equivalent to choosing scalar values for the local
      // encoding kernels k_{d,e} for all d ∈ In(t) such that
      // Σ_{d∈In(t)} k_{d,e} f_d ∉ ⟨{f_{e_{q,j}} : j ≠ i}⟩ for every
      // pair (q, i).
      for (every pair (q, i))
        e_{q,i} = e;
    }
  }
}
Algorithm Verification. For $1 \leq q \leq \eta$ and $1 \leq i \leq \omega$, the channel $e_{q,i}$ is on the path $P_{q,i}$. Initially $e_{q,i}$ is an imaginary channel at source node $s$. Through dynamic updating, it moves downstream along the path until finally reaching a channel in $\mathrm{In}(t_q)$.

Fix an index $q$, where $1 \leq q \leq \eta$. Initially, the vectors $f_{e_{q,1}}, f_{e_{q,2}}, \cdots, f_{e_{q,\omega}}$ are linearly independent because they form the standard basis of $F^\omega$. At the end, they need to span the vector space $F^\omega$. Therefore, in order for the eventually constructed linear network code to qualify as a linear multicast, it suffices to show the preservation of the linear independence among $f_{e_{q,1}}, f_{e_{q,2}}, \cdots, f_{e_{q,\omega}}$ throughout the algorithm.

We need to show this preservation in the generic step inside the "for loop" for each channel $e$ in the algorithm. The algorithm defines a "pair" as a pair $(q, i)$ of indices such that channel $e$ is on path $P_{q,i}$. When no $(q, i)$ is a pair for $1 \leq i \leq \omega$, the channels $e_{q,1}, e_{q,2}, \cdots, e_{q,\omega}$ are not changed in the generic step; neither are the vectors $f_{e_{q,1}}, f_{e_{q,2}}, \cdots, f_{e_{q,\omega}}$. So we only need to consider the scenario that a pair $(q, i)$ exists for some $i$. The only change among the channels $e_{q,1}, e_{q,2}, \cdots, e_{q,\omega}$ is that $e_{q,i}$ becomes $e$. Meanwhile, the only change among the vectors $f_{e_{q,1}}, f_{e_{q,2}}, \cdots, f_{e_{q,\omega}}$ is that $f_{e_{q,i}}$ becomes a vector
$$w \notin \langle\{f_{e_{q,j}} : j \neq i\}\rangle. \qquad (19.116)$$
This preserves the linear independence among $f_{e_{q,1}}, f_{e_{q,2}}, \cdots, f_{e_{q,\omega}}$ as desired.

Complexity Analysis. There are a total of $|E|$ channels in the network. In the algorithm, the generic step in the "for loop" for each channel $e$ processes at most $\eta$ pairs. Throughout the algorithm, at most $|E|\eta$ such collections of channels are processed. From this, it is not hard to implement the algorithm within a polynomial time in $|E|$ for a fixed $\omega$. The computational details can be found in .

Remark 1. In the Jaggi–Sanders algorithm, all nodes $t$ in the network with $\mathrm{maxflow}(t) \geq \omega$ serve as sink nodes that receive the message $x$. The algorithm can easily be modified accordingly if only a subset of such nodes need to serve as sink nodes. In that case, the field size requirement is $|F| > \eta'$, where $\eta'$ is the total number of sink nodes.

Remark 2. It is not difficult to see from the lower bound on the required field size in the Jaggi–Sanders algorithm that if a field much larger than sufficient is used, then a linear multicast can be constructed with high probability by randomly choosing the global encoding kernels.
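As a numerical companion to Remark 2 (the network is the butterfly network; the prime field and the sampling are my own choices, not from the text), the following sketch draws all local encoding kernels uniformly at random over a large field and estimates how often the resulting code is a 2-dimensional linear multicast for the two sinks.

```python
# Random choice of local encoding kernels on the butterfly network:
# sinks y and z each have maxflow 2, so the code is a multicast iff the
# two kernels arriving at each sink are linearly independent.
import random

p = 2 ** 16 + 1          # 65537, a prime, so arithmetic mod p is a field
rnd = lambda: random.randrange(p)

def trial():
    scale = lambda c, v: [c * x % p for x in v]
    add = lambda u, v: [(a + b) % p for a, b in zip(u, v)]
    # Global kernels derived from random local kernels, upstream first.
    f_st, f_su = [rnd(), rnd()], [rnd(), rnd()]
    f_tw, f_ty = scale(rnd(), f_st), scale(rnd(), f_st)
    f_uw, f_uz = scale(rnd(), f_su), scale(rnd(), f_su)
    f_wx = add(scale(rnd(), f_tw), scale(rnd(), f_uw))
    f_xy, f_xz = scale(rnd(), f_wx), scale(rnd(), f_wx)
    det = lambda u, v: (u[0] * v[1] - u[1] * v[0]) % p
    return det(f_ty, f_xy) != 0 and det(f_uz, f_xz) != 0

ok = sum(trial() for _ in range(10000))
print(f"fraction of random codes that are multicasts: {ok / 10000:.4f}")
```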
Example 19.26 (Multi-Source Multicast). Consider a network coding problem on an acyclic network $G$ with a set $S$ of source nodes. At a node $s \in S$, a message $x_s$ in the form of a row vector in $F^{\omega_s}$ is generated. Let
$$\omega = \sum_{s \in S} \omega_s \qquad (19.117)$$
be the total dimension of all the messages, and let
$$x = (x_s : s \in S) \qquad (19.118)$$
be referred to as the message. Here we do not impose the constraint that a node $s \in S$ has no input channels.

Expand the network $G$ into a network $G'$ by installing a new node 0 and $\omega_s$ channels from node 0 to node $s$ for each $s \in S$. Denote the value of a max-flow from node 0 to a node $t$ in $G'$ by $\mathrm{maxflow}_{G'}(t)$. Suppose there exists a coding scheme on $G$ such that a node $t$ can decode the message $x$. Such a coding scheme induces a coding scheme on $G'$ for which
1. the message $x$ is generated at node 0;
2. for all $s \in S$, the message $x_s$ is sent uncoded from node 0 to node $s$ through the $\omega_s$ channels from node 0 to node $s$.
Applying the max-flow bound to node $t$ with respect to this coding scheme on $G'$, we obtain
$$\mathrm{maxflow}_{G'}(t) \geq \omega. \qquad (19.119)$$
Thus we have shown that if a node $t$ in $G$ can decode the message $x$, then (19.119) has to be satisfied.

We now show that for a sufficiently large base field $F$, there exists a coding scheme on $G$ such that a node $t$ satisfying (19.119) can decode the message $x$. Let $\eta$ be the number of nodes in $G$ that satisfy (19.119). To avoid triviality, assume $\eta \geq 1$. By Theorem 19.20, there exists an $\omega$-dimensional linear multicast $C$ on $G'$ when the base field is sufficiently large. From the proof of Theorem 19.10, we see that for this linear multicast, the $\omega \times \omega$ matrix $F_{\mathrm{Out}(0)}$ must be invertible, for otherwise a node $t$ satisfying (19.119) could not possibly decode the message $x$. Transforming $C$ by the matrix $[F_{\mathrm{Out}(0)}]^{-1}$, we obtain from (19.54) a linear multicast $C'$ with
$$F'_{\mathrm{Out}(0)} = [F_{\mathrm{Out}(0)}]^{-1} F_{\mathrm{Out}(0)} = I_\omega. \qquad (19.120)$$
Accordingly, for this linear multicast, the message $x_s$ is sent uncoded from node 0 to node $s$ for all $s \in S$. Thus a coding scheme on $G$ with the message $x_s$ being generated at node $s$ for all $s \in S$, instead of being received from node 0, is naturally induced, and this coding scheme inherits from the linear multicast $C'$ the property that a node $t$ satisfying (19.119) can decode the message $x$.

Therefore, instead of tackling the multi-source multicast problem on $G$, we can tackle the single-source multicast problem on $G'$. This has already been illustrated in Examples 17.1 and 17.2 for the butterfly network.

19.5 Generic Network Codes

In the last section, we have seen how to construct a linear multicast by the Jaggi–Sanders algorithm. In light of Corollaries 19.21 and 19.22, the same algorithm can be used for constructing a linear broadcast or a linear dispersion. It is not difficult to see that if the Jaggi–Sanders algorithm is used for constructing a linear broadcast, then the computational complexity of the algorithm remains polynomial in $|E|$, the total number of channels in the network. However, if the algorithm is used for constructing a linear dispersion, the computational complexity becomes exponential, because the number of channels that need to be installed in constructing the new network in Corollary 19.22 grows exponentially with the number of channels in the original network.

In this section, we introduce a class of linear network codes called generic network codes. As we will see, if a linear network code is generic, then it is a linear dispersion, and hence also a linear broadcast and a linear multicast. Toward the end of the section, we will present a polynomial-time algorithm that constructs a generic network code.
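A small sketch of the transformation step in Example 19.26 (the matrix $F_{\mathrm{Out}(0)}$ below is a made-up invertible matrix over GF(7), not from the text): transforming the multicast $C$ by $A = [F_{\mathrm{Out}(0)}]^{-1}$ makes $F'_{\mathrm{Out}(0)}$ the identity, so each $x_s$ leaves the super-source uncoded.

```python
# Inverting F_Out(0) over GF(p) by Gauss-Jordan elimination, as needed
# for the transformation in (19.120).
p = 7
F_out0 = [[2, 1, 0],
          [1, 1, 3],
          [0, 4, 1]]          # invertible over GF(7) (det = 5 mod 7)

def inverse(M):
    """Invert M over GF(p) by Gauss-Jordan elimination."""
    n = len(M)
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] % p)
        A[c], A[piv] = A[piv], A[c]
        inv = pow(A[c][c], p - 2, p)
        A[c] = [x * inv % p for x in A[c]]
        for r in range(n):
            if r != c and A[r][c]:
                A[r] = [(a - A[r][c] * b) % p for a, b in zip(A[r], A[c])]
    return [row[n:] for row in A]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) % p for col in zip(*Y)]
            for row in X]

A = inverse(F_out0)
print(matmul(A, F_out0))   # identity: [[1,0,0],[0,1,0],[0,0,1]]
```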
Imagine that in an $\omega$-dimensional linear network code the base field $F$ is replaced by the real field $\mathbb{R}$. Then an arbitrary infinitesimal perturbation of the local encoding kernels would place the global encoding kernels in general positions with respect to one another in the space $\mathbb{R}^\omega$. General positions of the global encoding kernels maximize the dimensions of various linear spans by avoiding linear dependence in every conceivable way. The concepts of general positions and infinitesimal perturbation do not apply to the vector space $F^\omega$ when $F$ is a finite field. However, they can be emulated when $F$ is sufficiently large, with the effect of avoiding unnecessary linear dependence. The following definitions of a generic network code capture the notion of placing the global encoding kernels in general positions.

In the sequel, for a channel $e_j \in E$, let $e_j \in \mathrm{Out}(t_j)$, and for a collection of channels $\xi = \{e_1, e_2, \cdots, e_{|\xi|}\} \subset E$, let $\bar\xi_k = \xi \setminus \{e_k\}$.

Definition 19.27 (Generic Network Code I). An $\omega$-dimensional linear network code on an acyclic network is generic if for any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$ and any $1 \leq k \leq m$, if
a) there is no directed path from $t_k$ to $t_j$ for $j \neq k$,
b) $V_{t_k} \not\subset V_{\bar\xi_k}$,
then $f_{e_k} \notin V_{\bar\xi_k}$.

Definition 19.28 (Generic Network Code II). An $\omega$-dimensional linear network code on an acyclic network is generic if for any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$ and any $1 \leq k \leq m$, if
a) there is no directed path from $t_k$ to $t_j$ for $j \neq k$,
b) $V_{t_k} \not\subset V_{\bar\xi_k}$,
c) $f_e$, $e \in \bar\xi_k$ are linearly independent (by convention, an empty collection of vectors is linearly independent),
then $f_{e_k} \notin V_{\bar\xi_k}$.

In Definitions 19.27 and 19.28, if a) does not hold, then $f_{e_k} \notin V_{\bar\xi_k}$ may not be possible at all, as we now explain. Let $\xi = \{e_1, e_2\}$ and $k = 1$. Suppose $\mathrm{In}(t_2) = \{e_1\}$, so that a) is violated. Since node $t_2$ has only $e_1$ as its input channel, $f_{e_1}$ cannot possibly be linearly independent of $f_{e_2}$.

The only difference between Definitions 19.27 and 19.28 is the additional requirement c) in the latter. The equivalence between these two definitions of a generic network code can be seen as follows. It is obvious that if a linear network code satisfies Definition 19.27, then it also satisfies Definition 19.28. To prove the converse, suppose a linear network code satisfies Definition 19.28. Consider any collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$ such that there exists $1 \leq k \leq m$ satisfying a) and b) in Definition 19.27 but not necessarily c) in Definition 19.28. Then we can always find a subset $\bar\xi'_k$ of $\bar\xi_k$ such that $f_e$, $e \in \bar\xi'_k$ are linearly independent and $V_{\bar\xi'_k} = V_{\bar\xi_k}$. Upon letting $\xi' = \{e_k\} \cup \bar\xi'_k$ and applying Definition 19.28 with $\xi'$ in place of $\xi$, we have
$$f_{e_k} \notin V_{\bar\xi'_k} = V_{\bar\xi_k}, \qquad (19.121)$$
so the network code also satisfies Definition 19.27. This shows that the two definitions of a generic network code are equivalent.

Note that in Definition 19.28, if $\xi$ satisfies all the prescribed conditions, then $m \leq \omega$, because c) and $f_{e_k} \notin V_{\bar\xi_k}$ together imply that $f_e$, $e \in \xi$ are linearly independent. Definition 19.28, which has a slightly more complicated form than Definition 19.27, will be instrumental in the proof of Theorem 19.32, which establishes various characterizations of a generic network code.

Proposition 19.29. For a generic network code, for any collection of $m$ output channels at a node $t$, where $1 \leq m \leq \dim(V_t)$, the global encoding kernels are linearly independent.
Proof. Since the proposition becomes degenerate if $\dim(V_t) = 0$, we assume $\dim(V_t) > 0$. In Definition 19.27, let all the nodes $t_j$ be node $t$ and let $1 \leq m \leq \dim(V_t)$. First note that there is no directed path from node $t$ to itself because the network is acyclic.

For $m = 1$, $\bar\xi_1 = \emptyset$ and $V_{\bar\xi_1} = V_\emptyset = \{0\}$. Since $\dim(V_t) > 0$, we have $V_t \not\subset V_{\bar\xi_1}$. Then $f_{e_1} \notin V_{\bar\xi_1}$, which implies $f_{e_1} \neq 0$. This proves the proposition for $m = 1$. Assume that the proposition is true for $m - 1$ for some $2 \leq m \leq \dim(V_t)$. We now prove that the proposition is true for $m$. By the induction hypothesis, $f_{e_1}, f_{e_2}, \cdots, f_{e_{m-1}}$ are linearly independent. Since
$$\dim(\langle\{f_{e_1}, f_{e_2}, \cdots, f_{e_{m-1}}\}\rangle) = m - 1 < \dim(V_t), \qquad (19.122)$$
we have
$$V_t \not\subset \langle\{f_{e_1}, f_{e_2}, \cdots, f_{e_{m-1}}\}\rangle. \qquad (19.123)$$
Then by Definition 19.27,
$$f_{e_m} \notin \langle\{f_{e_1}, f_{e_2}, \cdots, f_{e_{m-1}}\}\rangle. \qquad (19.124)$$
Hence, $f_{e_1}, f_{e_2}, \cdots, f_{e_m}$ are linearly independent. The proposition is proved. ⊓⊔

Corollary 19.30. For a generic network code, if $|\mathrm{Out}(t)| \leq \dim(V_t)$ for a node $t$, then the global encoding kernels of all the output channels of $t$ are linearly independent.

A linear dispersion on an acyclic network is not necessarily a generic network code. The following is a counterexample.

Example 19.31. The 2-dimensional linear dispersion on the network in Figure 19.6 is not generic, because the global encoding kernels of two of the output channels from source node $s$ are equal to $[\,1\ 1\,]^\top$, a contradiction to Proposition 19.29. It can be shown, however, that a generic network code on an acyclic network $G$ can be constructed through a linear dispersion on an expanded network $G'$. See Problem 13 for details.

[Figure 19.6: source node $s$ with channels toward nodes $x$ and $y$, carrying global encoding kernels $[\,1\ 0\,]^\top$, $[\,0\ 1\,]^\top$, and two kernels both equal to $[\,1\ 1\,]^\top$.]
Fig. 19.6. A 2-dimensional linear dispersion that is not a generic network code.

Together with Example 19.13, the example above shows that the four classes of linear network codes we have discussed, namely linear multicast, linear broadcast, linear dispersion, and generic network code, achieve the max-flow bound to strictly increasing extents.

In the following theorem, we prove two characterizations of a generic network code, each of which can be regarded as an alternative definition of a generic network code. The reader should understand this theorem before proceeding further but may skip the proof at the first reading.

Theorem 19.32. For an $\omega$-dimensional linear network code on an acyclic network, the following conditions are equivalent:
1) The network code is generic.
2) For any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$, if $V_{t_j} \not\subset V_{\bar\xi_j}$ for all $1 \leq j \leq m$, then $f_e$, $e \in \xi$ are linearly independent.
3) For any nonempty collection of channels $\xi \subset E$, if
$$|\xi| = \min\{\omega, \mathrm{maxflow}(\xi)\}, \qquad (19.125)$$
then $f_e$, $e \in \xi$ are linearly independent.

Proof. We will prove the theorem by showing that 1) ⇒ 2) ⇒ 3) ⇒ 1).

We first show that 1) ⇒ 2) by using Definition 19.27 as the definition of a generic network code. Assume 1) holds. Consider any $m \geq 1$ and any collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$, and assume $V_{t_j} \not\subset V_{\bar\xi_j}$ for all $1 \leq j \leq m$. We will show by induction on $m$ that $f_e$, $e \in \xi$ are linearly independent. The claim is trivially true for $m = 1$. Assume the claim is true for $m - 1$ for some $2 \leq m \leq \omega$, and we will show that it is true for $m$. Consider $\xi = \{e_1, e_2, \cdots, e_m\}$ and assume $V_{t_j} \not\subset V_{\bar\xi_j}$ for all $1 \leq j \leq m$. We first prove by contradiction that there exists at least one $k$ such that there is no directed path from $t_k$ to $t_j$ for all $j \neq k$, where $1 \leq j, k \leq m$.
Assume that for all $k$, there is at least one directed path from node $t_k$ to node $t_j$ for some $j \neq k$. Starting at any node $t_k$ and traversing such directed paths, we see that there exists a directed cycle in the network because the set $\{t_k : 1 \leq k \leq m\}$ is finite. This leads to a contradiction because the network is acyclic, proving the existence of $k$ as prescribed. Then apply Definition 19.27 to this $k$ to see that
$$f_{e_k} \notin V_{\bar\xi_k} = \langle\{f_e : e \in \bar\xi_k\}\rangle. \qquad (19.126)$$
Now for any $j \neq k$, since $V_{t_j} \not\subset V_{\bar\xi_j}$ and $V_{\bar\xi_k \setminus \{e_j\}} = V_{\bar\xi_j \setminus \{e_k\}} \subset V_{\bar\xi_j}$, we have
$$V_{t_j} \not\subset V_{\bar\xi_k \setminus \{e_j\}}. \qquad (19.127)$$
Then apply the induction hypothesis to $\bar\xi_k$ to see that $f_e$, $e \in \bar\xi_k$ are linearly independent. It then follows from (19.126) that $f_e$, $e \in \xi$ are linearly independent. Thus 1) ⇒ 2).

We now show that 2) ⇒ 3). Assume 2) holds and consider any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$ satisfying (19.125). Then
$$m = |\xi| = \min\{\omega, \mathrm{maxflow}(\xi)\}, \qquad (19.128)$$
which implies
$$\mathrm{maxflow}(\xi) \geq m. \qquad (19.129)$$
Therefore, there exist $m$ edge-disjoint paths $P_1, P_2, \cdots, P_m$ from source node $s$ to the channels in $\xi$, where the last channel on path $P_j$ is $e_j$. Denote the length of $P_j$ by $l_j$ and let
$$L = \sum_{j=1}^{m} l_j \qquad (19.130)$$
be the total length of all the paths. We will prove the claim that $f_{e_1}, f_{e_2}, \cdots, f_{e_m}$ are linearly independent by induction on $L$. For the base case $L = m$, since $m \leq \omega$ by (19.128), the claim is true by Proposition 19.29 with $t = s$. Assume that the claim is true for $L - 1$ for some $L \geq m + 1$, and we will prove that it is true for $L$. Let $A = \{j : l_j > 1\}$ and for $j \in A$, let $\xi'_j = \{e_1, e_2, \cdots, e_{j-1}, e'_j, e_{j+1}, \cdots, e_m\}$, where $e'_j$ is the channel preceding $e_j$ on $P_j$. Then by the induction hypothesis, $f_e$, $e \in \xi'_j$ are linearly independent, which implies that
$$V_{t_j} \not\subset V_{\bar\xi_j}. \qquad (19.131)$$
For $j \notin A$, $l_j = 1$, i.e., $t_j = s$. It follows from (19.128) that $m \leq \omega$. Then
$$V_{t_j} = V_s \not\subset V_{\bar\xi_j} \qquad (19.132)$$
because $\dim(V_{\bar\xi_j}) \leq |\bar\xi_j| = m - 1 < m \leq \omega$. Therefore, (19.131) holds for all $j$, and hence by 2), $f_e$, $e \in \xi$ are linearly independent. Thus 2) ⇒ 3).

Finally, we show that 3) ⇒ 1) by using Definition 19.28 as the definition of a generic network code. Assume 3) holds and consider any collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$, where $1 \leq m \leq \omega$, such that a) to c) in Definition 19.28 hold for some $1 \leq k \leq m$. Then either $t_j = s$ for all $1 \leq j \leq m$, or $t_k \neq s$, because otherwise a) in Definition 19.28 is violated.

If $t_j = s$ for all $1 \leq j \leq m$, then
$$m = |\xi| = \mathrm{maxflow}(\xi). \qquad (19.133)$$
Since $m \leq \omega$, we have
$$|\xi| = \min\{\omega, \mathrm{maxflow}(\xi)\}. \qquad (19.134)$$
Then $f_e$, $e \in \xi$ are linearly independent by 3), proving that $f_{e_k} \notin V_{\bar\xi_k}$.

Otherwise, $t_k \neq s$. Following b) in Definition 19.28, there exists $e'_k \in \mathrm{In}(t_k) \subset E$ such that $f_{e'_k}$ and $f_e$, $e \in \bar\xi_k$ are linearly independent. Let $\xi'_k = \{e_1, e_2, \cdots, e_{k-1}, e'_k, e_{k+1}, \cdots, e_m\}$. By Corollary 19.11,
$$\mathrm{maxflow}(\xi'_k) \geq \dim(V_{\xi'_k}) = m, \qquad (19.135)$$
so $e_1, e_2, \cdots, e_{k-1}, e'_k, e_{k+1}, \cdots, e_m$ can be traced back to source node $s$ via some edge-disjoint paths $P_1, P_2, \cdots, P_{k-1}, P'_k, P_{k+1}, \cdots, P_m$, respectively. Let $P_k$ be obtained by appending $e_k$ to $P'_k$. Since there is no directed path from $t_k$ to $t_j$ and $e_k \neq e_j$ for all $j \neq k$, the paths $P_1, P_2, \cdots, P_{k-1}, P_k, P_{k+1}, \cdots, P_m$ are edge-disjoint. Therefore,
$$\mathrm{maxflow}(\xi) \geq m. \qquad (19.136)$$
On the other hand,
$$\mathrm{maxflow}(\xi) \leq |\xi| = m. \qquad (19.137)$$
Therefore,
$$m = |\xi| = \mathrm{maxflow}(\xi), \qquad (19.138)$$
i.e., (19.133) holds. As before, we can further obtain (19.134). Then by 3), $f_e$, $e \in \xi$ are linearly independent, and therefore $f_{e_k} \notin V_{\bar\xi_k}$. Thus 3) ⇒ 1).
Hence, the theorem is proved. ⊓⊔

Corollary 19.33. An $\omega$-dimensional generic network code on an acyclic network is an $\omega$-dimensional linear dispersion on the same network.

Proof. Consider an $\omega$-dimensional generic network code on an acyclic network and let $T$ be any collection of non-source nodes. Let
$$m = \min\{\omega, \mathrm{maxflow}(T)\}. \qquad (19.139)$$
Since $\mathrm{maxflow}(T) \geq m$, there exist $m$ edge-disjoint paths $P_1, P_2, \cdots, P_m$ from source node $s$ to $T$. Let $e_i$ be the last channel on path $P_i$, and let
$$\xi = \{e_1, e_2, \cdots, e_m\}. \qquad (19.140)$$
Evidently,
$$\mathrm{maxflow}(\xi) = m. \qquad (19.141)$$
It follows from (19.139) that $m \leq \omega$. Therefore,
$$|\xi| = m = \mathrm{maxflow}(\xi) = \min\{\omega, \mathrm{maxflow}(\xi)\}. \qquad (19.142)$$
By Theorem 19.32, $f_e$, $e \in \xi$ are linearly independent. Then
$$\dim(V_T) \geq \dim(V_\xi) = m = \min\{\omega, \mathrm{maxflow}(T)\}. \qquad (19.143)$$
By Theorem 19.10, we conclude that
$$\dim(V_T) = \min\{\omega, \mathrm{maxflow}(T)\}. \qquad (19.144)$$
Hence, we have shown that a generic network code is a linear dispersion. ⊓⊔

Theorem 19.32 renders the following important interpretation of a generic network code. Consider any linear network code and any collection of channels $\xi \subset E$. If $f_e$, $e \in \xi$ are linearly independent, then
$$|\xi| = \dim(V_\xi). \qquad (19.145)$$
By Corollary 19.11,
$$\dim(V_\xi) \leq \min\{\omega, \mathrm{maxflow}(\xi)\}. \qquad (19.146)$$
Therefore,
$$|\xi| \leq \min\{\omega, \mathrm{maxflow}(\xi)\}. \qquad (19.147)$$
On the other hand,
$$\mathrm{maxflow}(\xi) \leq |\xi|, \qquad (19.148)$$
which implies
$$\min\{\omega, \mathrm{maxflow}(\xi)\} \leq |\xi|. \qquad (19.149)$$
Combining (19.147) and (19.149), we see that
$$|\xi| = \min\{\omega, \mathrm{maxflow}(\xi)\} \qquad (19.150)$$
is a necessary condition for $f_e$, $e \in \xi$ to be linearly independent. For a generic network code, this is also a sufficient condition for $f_e$, $e \in \xi$ to be linearly independent. Thus for a generic network code, if a set of global encoding kernels can possibly be linearly independent, then it is linearly independent. In this sense, a generic network code captures the notion of placing the global encoding kernels in general positions.

Condition 2) in Theorem 19.32 is the original definition of a generic network code given in . Unlike 1) and 3), this condition is purely algebraic and does not depend upon the network topology. However, it does not suggest an algorithm for constructing such a code. Motivated by Definition 19.28, we now present an algorithm for constructing a generic network code. The computational complexity of this algorithm is polynomial in $|E|$, the total number of channels in the network.

Algorithm 19.34 (Construction of a Generic Network Code). This algorithm constructs an $\omega$-dimensional generic network code over a finite field $F$ with $|F| > \sum_{m=1}^{\omega} \binom{|E|-1}{m-1}$ by prescribing global encoding kernels that constitute a generic network code.

{
  for (every node t, following an upstream-to-downstream order)
  {
    for (every channel e ∈ Out(t))
    {
      choose a vector w in V_t such that w ∉ V_ζ, where ζ is any
        collection of m − 1 already processed channels, where
        1 ≤ m ≤ ω, such that f_e, e ∈ ζ are linearly independent
        and V_t ⊄ V_ζ;
      // To see the existence of such a vector w, denote dim(V_t) by ν.
      // If ζ is any collection of m − 1 channels with V_t ⊄ V_ζ, then
      // dim(V_t ∩ V_ζ) ≤ ν − 1. There are at most Σ_{m=1}^{ω} C(|E|−1, m−1)
      // such collections ζ. Thus
      //   |V_t ∩ (∪_ζ V_ζ)| ≤ Σ_{m=1}^{ω} C(|E|−1, m−1) |F|^{ν−1}
      //     < |F|^ν = |V_t|.
      f_e = w;
      // This is equivalent to choosing scalar values for the local
      // encoding kernels k_{d,e} for all d ∈ In(t) such that
      // Σ_{d∈In(t)} k_{d,e} f_d ∉ V_ζ for every collection ζ of channels
      // as prescribed.
    }
  }
}
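The heart of Algorithm 19.34 is the choice of a vector $w \in V_t$ avoiding a list of subspaces. The following is a minimal sketch of that step (my own, brute force over GF(5) with made-up inputs; a real implementation would use Gaussian elimination as noted in the complexity analysis below).

```python
# Choosing f_e = w in V_t with w outside V_zeta for every prescribed
# collection zeta, by exhaustive search over a small field.
from itertools import product

p = 5

def span_contains(vecs, w):
    """Is w in the span of vecs over GF(p)?  Brute force for small cases."""
    for coeffs in product(range(p), repeat=len(vecs)):
        cand = [sum(c * v[i] for c, v in zip(coeffs, vecs)) % p
                for i in range(len(w))]
        if cand == list(w):
            return True
    return False

def choose_kernel(Vt_basis, zetas):
    """Return some w in <Vt_basis> avoiding <zeta> for every zeta in zetas."""
    k = len(Vt_basis)
    for coeffs in product(range(p), repeat=k):
        w = [sum(c * v[i] for c, v in zip(coeffs, Vt_basis)) % p
             for i in range(len(Vt_basis[0]))]
        if any(w) and not any(span_contains(z, w) for z in zetas):
            return w
    return None  # cannot happen when |F| is large enough

Vt = [(1, 0, 0), (0, 1, 0)]                      # dim(V_t) = 2, omega = 3
zetas = [[(1, 0, 0)], [(1, 1, 0)], [(0, 1, 0)]]  # subspaces to avoid
print(choose_kernel(Vt, zetas))                  # e.g. [1, 2, 0]
```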
Algorithm Verification. We will verify that the code constructed is indeed generic by way of Condition 3) in Theorem 19.32. Consider any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$ satisfying (19.125). Then there exist $m$ edge-disjoint paths $P_1, P_2, \cdots, P_m$ from source node $s$ to the channels in $\xi$, where the last channel on path $P_j$ is $e_j$. Denote the length of $P_j$ by $l_j$ and let
$$L = \sum_{j=1}^{m} l_j \qquad (19.151)$$
be the total length of all the paths. We will prove the claim that $f_e$, $e \in \xi$ are linearly independent by induction on $L$.

It is easy to verify that for any set of $m$ channels in $\mathrm{Out}(s)$, the global encoding kernels assigned are linearly independent, so the base case $L = m$ is verified. Assume the claim is true for $L - 1$ for some $L \geq m + 1$, and we will prove that it is true for $L$. Let $e_k$ be the channel whose global encoding kernel is last assigned among all the channels in $\xi$. Note that $l_k \geq 2$, since $L \geq m + 1$ and the global encoding kernels are assigned by the algorithm in an upstream-to-downstream order. Then let $e'_k$ be the channel preceding $e_k$ on $P_k$, and let
$$\xi' = \{e_1, e_2, \cdots, e_{k-1}, e'_k, e_{k+1}, \cdots, e_m\}. \qquad (19.152)$$
By the induction hypothesis, $f_e$, $e \in \xi'$ are linearly independent. Since $f_{e'_k}$ is linearly independent of $f_e$ for $e \in \xi' \setminus \{e'_k\} = \bar\xi_k$, we have $V_{t_k} \not\subset V_{\bar\xi_k}$. It then follows from the construction that $f_{e_k} \notin V_{\bar\xi_k}$, because $\bar\xi_k$ is one of the collections $\zeta$ considered when $f_{e_k}$ is assigned. Hence, $f_e$, $e \in \xi$ are linearly independent, verifying that the network code constructed is generic.

Complexity Analysis. In the algorithm, the "for loop" for each channel $e$ processes at most $\sum_{m=1}^{\omega} \binom{|E|-1}{m-1}$ collections of $m - 1$ channels. The processing includes the detection of those collections $\zeta$ as well as the computation of the set $V_t \cap (\cup_\zeta V_\zeta)$. This can be done, for instance, by Gaussian elimination. Throughout the algorithm, the total number of collections of channels processed is at most $|E| \sum_{m=1}^{\omega} \binom{|E|-1}{m-1}$, a polynomial in $|E|$ of degree $\omega$. Thus for a fixed $\omega$, it is not hard to implement the algorithm within a polynomial time in $|E|$.

Algorithm 19.34 constitutes a constructive proof for the next theorem.

Theorem 19.35. There exists an $\omega$-dimensional generic network code on an acyclic network for sufficiently large base field $F$.

By noting the lower bound on the required field size in Algorithm 19.34, a generic network code can be constructed with high probability by randomly choosing the global encoding kernels, provided that the base field is much larger than sufficient.

19.6 Static Network Codes

In our discussion so far, a linear network code has been defined on a network with a fixed topology, where all the channels are assumed to be available at all times. In a real network, however, a channel may fail due to various reasons, for example, hardware failure, cable cut, or natural disasters. With the failure of some subset of the channels, the communication capacity of the resulting network is generally reduced.

Consider the use of, for instance, an $\omega$-dimensional linear multicast on an acyclic network for multicasting a sequence of messages generated at the source node. When no channel failure occurs, a non-source node with the value of a max-flow at least equal to $\omega$ would be able to receive the sequence of messages. In case of channel failures, if the value of a max-flow of that node in the resulting network is at least $\omega$, the sequence of messages in principle can still be received at that node.
However, this would involve the deployment of a network code for the new network topology, which not only is cumbersome but also may cause a significant loss of data during the switchover. In this section, we discuss a class of linear network codes called static network codes that can provide the network with maximum robustness in case of channel failures. To fix ideas, we first introduce some terminology.

The status of the network is specified by a mapping $\lambda : E \to \{0, 1\}$ called a configuration. A channel being in the set
$$\lambda^{-1}(0) = \{e \in E : \lambda(e) = 0\} \qquad (19.153)$$
indicates the failure of that channel, and the subnetwork resulting from the deletion of all the channels in $\lambda^{-1}(0)$ is called the $\lambda$-subnetwork. For the $\lambda$-subnetwork, the value of a max-flow from source node $s$ to a non-source node $t$ is denoted by $\mathrm{maxflow}_\lambda(t)$. Likewise, the value of a max-flow from source node $s$ to a collection $T$ of non-source nodes is denoted by $\mathrm{maxflow}_\lambda(T)$. It is easy to see that the total number of configurations is equal to $2^{|E|}$.

Definition 19.36. Let $\lambda$ be a configuration of the network. For an $\omega$-dimensional linear network code on the network, the $\lambda$-global encoding kernel of channel $e$, denoted by $f_{e,\lambda}$, is the column $\omega$-vector calculated recursively in an upstream-to-downstream order by:
(19.154) $f_{e,\lambda} = \lambda(e) \sum_{d \in \mathrm{In}(t)} k_{d,e}\, f_{d,\lambda}$ for $e \in \mathrm{Out}(t)$.
(19.155) The $\lambda$-global encoding kernels of the $\omega$ imaginary channels are independent of $\lambda$ and form the standard basis of the space $F^\omega$.

Note that in the above definition, the local encoding kernels $k_{d,e}$ are not changed with the configuration $\lambda$. Given the local encoding kernels, the $\lambda$-global encoding kernels can be calculated recursively by (19.154), with (19.155) serving as the boundary conditions. For a channel $e \in \mathrm{Out}(t)$ with $\lambda(e) = 0$, we see from (19.154) that
$$f_{e,\lambda} = 0. \qquad (19.156)$$
As before, the message generated at source node $s$ is denoted by a row $\omega$-vector $x$. When the prevailing configuration is $\lambda$, a node $t$ receives the symbols $x \cdot f_{d,\lambda}$, $d \in \mathrm{In}(t)$, from which it calculates the symbol $x \cdot f_{e,\lambda}$ to be sent on each channel $e \in \mathrm{Out}(t)$ via
$$x \cdot f_{e,\lambda} = x \left( \lambda(e) \sum_{d \in \mathrm{In}(t)} k_{d,e}\, f_{d,\lambda} \right) = \lambda(e) \sum_{d \in \mathrm{In}(t)} k_{d,e} (x \cdot f_{d,\lambda}). \qquad (19.157)\text{--}(19.158)$$
In particular, if $\lambda(e) = 0$, the zero symbol is sent on channel $e$ regardless of the symbols received at node $t$. In a real network, the zero symbol is not sent on a failed channel. Rather, whenever a symbol is not received on an input channel, the symbol is regarded by the receiving node as being the zero symbol.

For a configuration $\lambda$ of the network, we let
$$V_{t,\lambda} = \langle\{f_{e,\lambda} : e \in \mathrm{In}(t)\}\rangle \qquad (19.159)$$
for a node $t$,
$$V_{T,\lambda} = \langle \cup_{t \in T} V_{t,\lambda} \rangle \qquad (19.160)$$
for a collection $T$ of nodes, and
$$V_{\xi,\lambda} = \langle\{f_{e,\lambda} : e \in \xi\}\rangle \qquad (19.161)$$
for a collection $\xi$ of channels.

Definition 19.37. An $\omega$-dimensional linear network code on an acyclic network qualifies as a static linear multicast, a static linear broadcast, a static linear dispersion, or a static generic network code, respectively, if the following conditions hold:
(19.162) $\dim(V_{t,\lambda}) = \omega$ for every configuration $\lambda$ and every non-source node $t$ with $\mathrm{maxflow}_\lambda(t) \geq \omega$.
(19.163) $\dim(V_{t,\lambda}) = \min\{\omega, \mathrm{maxflow}_\lambda(t)\}$ for every configuration $\lambda$ and every non-source node $t$.
(19.164) $\dim(V_{T,\lambda}) = \min\{\omega, \mathrm{maxflow}_\lambda(T)\}$ for every configuration $\lambda$ and every collection $T$ of non-source nodes.
(19.165) For any configuration $\lambda$ and any nonempty collection of channels $\xi \subset E$, if $|\xi| = \min\{\omega, \mathrm{maxflow}_\lambda(\xi)\}$, then $f_{e,\lambda}$, $e \in \xi$ are linearly independent.
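The recursion (19.154)-(19.155) is easy to run mechanically. The following sketch (my own; the butterfly-like topology over GF(2) with all local kernel entries equal to 1 is made up for illustration) shows how a failed channel forces $f_{e,\lambda} = 0$ and how the zeros propagate downstream while the local kernels stay fixed.

```python
# Computing lambda-global encoding kernels per Definition 19.36.
ADJ = {                  # channel e -> input channels d at its tail node
    'st': [], 'su': [],  # Out(s): kernels come from K_s = I below
    'tw': ['st'], 'ty': ['st'],
    'uw': ['su'], 'uz': ['su'],
    'wx': ['tw', 'uw'],
    'xy': ['wx'], 'xz': ['wx'],
}
BASE = {'st': [1, 0], 'su': [0, 1]}  # f_e for e in Out(s)

def lam_kernels(lam):
    """f_{e,lam} for every channel; lam maps failed channels to 0."""
    f = {}
    for e in ADJ:        # dict order is an upstream-to-downstream order
        vec = BASE.get(e) or [sum(f[d][i] for d in ADJ[e]) % 2
                              for i in range(2)]
        f[e] = [lam.get(e, 1) * x for x in vec]
    return f

print(lam_kernels({})['xy'])          # [1, 1]: no failures
print(lam_kernels({'tw': 0})['xy'])   # [0, 1]: failure of (t,w) propagates
```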
Here we have adopted Condition 3) in Theorem 19.32 for the purpose of defining a static generic network code. The qualifier "static" in the terms above stresses the fact that, while the configuration $\lambda$ varies, the local encoding kernels remain unchanged. The advantage of using a static linear multicast, broadcast, or dispersion is that in case of channel failures, the local operation at every node in the network is affected only at the minimum level. Each receiving node in the network, however, needs to know the configuration $\lambda$ before decoding can be done correctly. In implementation, this information can be provided by a separate signaling network.

For each class of static network codes in Definition 19.37, the requirement for its non-static version is applied to the $\lambda$-subnetwork for every configuration $\lambda$. Accordingly, a static linear multicast, a static linear broadcast, a static linear dispersion, and a static generic network code are increasingly stronger linear network codes, as for the non-static versions.

Example 19.38. A 2-dimensional linear network code over $GF(5)$ on the network in Figure 19.7 is prescribed by the local encoding kernels
$$K_s = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \qquad (19.166)$$
and
$$K_x = \begin{bmatrix} 1 & 3 \\ 3 & 2 \\ 1 & 1 \end{bmatrix}. \qquad (19.167)$$
We claim that this is a static generic network code. Denote the three channels in $\mathrm{In}(x)$ by $c$, $d$, and $e$, and the two channels in $\mathrm{Out}(x)$ by $g$ and $h$. The vectors $f_{g,\lambda}$ and $f_{h,\lambda}$ for all possible configurations $\lambda$ are tabulated in Table 19.1, from which it is straightforward to verify the condition (19.165).

[Figure 19.7: source node $s$ with channels $c$, $d$, $e$ into node $x$, and channels $g$ and $h$ from node $x$ into node $y$; the local encoding kernels $K_s$ and $K_x$ are as in (19.166) and (19.167).]
Fig. 19.7. A 2-dimensional GF(5)-valued static generic network code.

| $\lambda(c)$ | 0 | 0 | 0 | 1 | 1 | 1 | 1 |
| $\lambda(d)$ | 0 | 1 | 1 | 0 | 0 | 1 | 1 |
| $\lambda(e)$ | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| $f_{g,\lambda}$ | $\lambda(g)[1\ 1]^\top$ | $\lambda(g)[0\ 3]^\top$ | $\lambda(g)[1\ 4]^\top$ | $\lambda(g)[1\ 0]^\top$ | $\lambda(g)[2\ 1]^\top$ | $\lambda(g)[1\ 3]^\top$ | $\lambda(g)[2\ 4]^\top$ |
| $f_{h,\lambda}$ | $\lambda(h)[1\ 1]^\top$ | $\lambda(h)[0\ 2]^\top$ | $\lambda(h)[1\ 3]^\top$ | $\lambda(h)[3\ 0]^\top$ | $\lambda(h)[4\ 1]^\top$ | $\lambda(h)[3\ 2]^\top$ | $\lambda(h)[4\ 3]^\top$ |

Table 19.1. The vectors $f_{g,\lambda}$ and $f_{h,\lambda}$ for all possible configurations $\lambda$ in Example 19.38.

The following is an example of a generic network code that does not qualify even as a static linear multicast.

Example 19.39. On the network in Figure 19.7, a 2-dimensional generic network code over $GF(5)$ is prescribed by the local encoding kernels
$$K_s = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix} \qquad (19.168)$$
and
$$K_x = \begin{bmatrix} 2 & 1 \\ 1 & 2 \\ 0 & 0 \end{bmatrix}. \qquad (19.169)$$
For a configuration $\lambda$ such that
$$\lambda(c) = 0 \qquad (19.170)$$
and
$$\lambda(d) = \lambda(e) = 1, \qquad (19.171)$$
we have the $\lambda$-global encoding kernels
$$f_{g,\lambda} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \qquad (19.172)$$
and
$$f_{h,\lambda} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \qquad (19.173)$$
and therefore $\dim(V_{y,\lambda}) = 1$. On the other hand, $\mathrm{maxflow}_\lambda(y) = 2$. Hence, this generic network code is not a static linear multicast.
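Table 19.1 can be reproduced mechanically. The following sketch (my own) runs the recursion of Definition 19.36 over GF(5) with the kernels (19.166) and (19.167) for every configuration of the channels $c$, $d$, $e$, leaving the factors $\lambda(g)$ and $\lambda(h)$ symbolic as the table does.

```python
# Reproducing Table 19.1: the bracketed vectors multiplying lambda(g)
# and lambda(h), for every configuration of channels c, d, e.
from itertools import product

p = 5
fc, fd, fe = [1, 0], [0, 1], [1, 1]   # columns of K_s in (19.166)
kg, kh = [1, 3, 1], [3, 2, 1]         # columns of K_x in (19.167)

for lc, ld, le in product((0, 1), repeat=3):
    if (lc, ld, le) == (0, 0, 0):
        continue                      # all-failed column omitted in the table
    ins = [[lc * v % p for v in fc],
           [ld * v % p for v in fd],
           [le * v % p for v in fe]]
    fg = [sum(k * u[i] for k, u in zip(kg, ins)) % p for i in range(2)]
    fh = [sum(k * u[i] for k, u in zip(kh, ins)) % p for i in range(2)]
    print((lc, ld, le), 'f_g =', fg, ' f_h =', fh)
```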
Recall that in Algorithm 19.34 for constructing a generic network code, the key step chooses for a channel $e \in \mathrm{Out}(t)$ a vector in $V_t$ to be the global encoding kernel $f_e$ such that
$$f_e \notin V_\zeta, \qquad (19.174)$$
where $\zeta$ is any collection of $m - 1$ channels as prescribed, with $1 \leq m \leq \omega$. This is equivalent to choosing scalar values for the local encoding kernels $k_{d,e}$ for all $d \in \mathrm{In}(t)$ such that
$$\sum_{d \in \mathrm{In}(t)} k_{d,e}\, f_d \notin V_\zeta. \qquad (19.175)$$
Algorithm 19.34 is adapted below for the construction of a static generic network code.

Algorithm 19.40 (Construction of a Static Generic Network Code). This algorithm constructs an $\omega$-dimensional static generic network code over a finite field $F$ on an acyclic network with $|F| > 2^{|E|} \sum_{m=1}^{\omega} \binom{|E|-1}{m-1}$.

{
  for (every node t, following an upstream-to-downstream order)
  {
    for (every channel e ∈ Out(t))
    {
      choose scalar values for k_{d,e} for all d ∈ In(t) such that for
        any configuration λ, Σ_{d∈In(t)} k_{d,e} f_{d,λ} ∉ V_{ζ,λ}, where
        ζ is any collection of m − 1 already processed channels such
        that f_{e,λ}, e ∈ ζ are linearly independent and V_{t,λ} ⊄ V_{ζ,λ};
      // To see the existence of such values k_{d,e}, denote dim(V_{t,λ})
      // by ν. For any collection ζ of channels with V_{t,λ} ⊄ V_{ζ,λ},
      // dim(V_{t,λ} ∩ V_{ζ,λ}) < ν. Consider the linear mapping
      // [k_{d,e}]_{d∈In(t)} ↦ Σ_{d∈In(t)} k_{d,e} f_{d,λ} from F^{|In(t)|}
      // to F^ω. The nullity of this linear mapping is |In(t)| − ν, so
      // the pre-image of the space V_{t,λ} ∩ V_{ζ,λ} has dimension less
      // than |In(t)|. Thus the pre-image of ∪_{λ,ζ} (V_{t,λ} ∩ V_{ζ,λ})
      // contains at most 2^{|E|} Σ_{m=1}^{ω} C(|E|−1, m−1) |F|^{|In(t)|−1}
      // elements, which is fewer than |F|^{|In(t)|} if
      // |F| > 2^{|E|} Σ_{m=1}^{ω} C(|E|−1, m−1).
      for (every configuration λ)
        f_{e,λ} = λ(e) Σ_{d∈In(t)} k_{d,e} f_{d,λ};
    }
  }
}

Algorithm Verification. The explanation for the code constructed by Algorithm 19.40 being a static generic network code is exactly the same as that given for Algorithm 19.34. The details are omitted.

Algorithm 19.40 constitutes a constructive proof for the next theorem. By noting the lower bound on the required field size in the algorithm, we see that a static generic network code can be constructed with high probability by randomly choosing the local encoding kernels, provided that the base field is much larger than sufficient.

Theorem 19.41. There exist an $\omega$-dimensional static linear multicast, a static linear broadcast, a static linear dispersion, and a static generic network code on an acyclic network for sufficiently large base field $F$.

The requirements (19.162) through (19.165) in Definition 19.37 refer to all the $2^{|E|}$ possible configurations. Conceivably, a practical application may only need to deal with a certain collection $\{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$ of configurations, where $\kappa \ll 2^{|E|}$. Thus we may define, for instance, a $\{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$-static linear multicast and a $\{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$-static linear broadcast by replacing the conditions (19.162) and (19.163), respectively, by:
(19.176) $\dim(V_{t,\lambda}) = \omega$ for every configuration $\lambda \in \{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$ and every non-source node $t$ with $\mathrm{maxflow}_\lambda(t) \geq \omega$.
(19.177) $\dim(V_{t,\lambda}) = \min\{\omega, \mathrm{maxflow}_\lambda(t)\}$ for every configuration $\lambda \in \{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$ and every non-source node $t$.

Algorithm 19.34 has been converted into Algorithm 19.40 by modifying the key step in the former. In a similar fashion, Algorithm 19.25 can be adapted for the construction of a $\{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$-static linear multicast or broadcast. This will lower the threshold for the sufficient size of the base field as well as the computational complexity. The details are left as an exercise.

19.7 Random Network Coding: A Case Study

We have seen in Corollary 19.24 that if the local encoding kernels of a linear network code are randomly chosen, a linear multicast can be obtained with high probability provided that the base field is sufficiently large. Since the code construction is independent of the network topology, the network code so constructed can be used when the network topology is unknown. In this section, we study an application of random network coding in peer-to-peer (P2P) networks. The system we will analyze is based on a prototype for large scale content distribution on such networks proposed in .
19.7.1 How the System Works

A file originally residing on a single server is to be distributed to a large number of users through a network. The server divides the file into $k$ data blocks, $B_1, B_2, \cdots, B_k$, and uploads coded versions of these blocks to different users according to some protocol. These users in turn help distribute the file by uploading blocks to other users in the network. By means of such repeated operations, a logical network called an overlay network is formed by the users as the process evolves. On this logical network, henceforth referred to as the network, information can be dispersed very rapidly, and the file is eventually delivered to every user in the network. Note that the topology of the network is not known ahead of time.

In the system, a new user can join the network as a node at any time as long as the distribution process is active. Upon arrival, a new user will contact a designated node called the tracker, which provides a subset of the other users already in the system, forming the set of neighboring nodes of the new user. Subsequent information flow in the network is possible only between neighboring nodes.

For the purpose of coding, the data blocks $B_1, B_2, \cdots, B_k$ are represented as symbols in a large finite field $F$ referred to as the base field (in the system proposed in , the size of the base field is of the order $2^{16}$). At the beginning of the distribution process, a Client A contacts the server and receives a number of encoded blocks. For example, the server uploads two encoded blocks $E_1$ and $E_2$ to Client A, where for $i = 1, 2$,
$$E_i = c_1^i B_1 + c_2^i B_2 + \cdots + c_k^i B_k, \qquad (19.178)$$
with $c_j^i$, $1 \leq j \leq k$, being chosen randomly from the base field $F$. Note that each of $E_1$ and $E_2$ is some random linear combination of $B_1, B_2, \cdots, B_k$.

In general, whenever a node needs to upload an encoded block to a neighboring node, the block is formed by taking a random linear combination of all the blocks possessed by that node. Continuing with the above example, when Client A needs to upload an encoded block $E_3$ to a neighboring Client B, we have
$$E_3 = c_1^3 E_1 + c_2^3 E_2, \qquad (19.179)$$
where $c_1^3$ and $c_2^3$ are randomly chosen from $F$. Substituting (19.178) into (19.179), we obtain
$$E_3 = \sum_{j=1}^{k} (c_1^3 c_j^1 + c_2^3 c_j^2) B_j. \qquad (19.180)$$
Thus $E_3$, and in general every encoded block subsequently uploaded by a node in the network, is some random linear combination of the data blocks $B_1, B_2, \cdots, B_k$.

The exact strategy for downloading encoded blocks from the neighboring nodes so as to avoid receiving redundant information depends on the implementation. The main idea is that downloading from a neighboring node is necessary only if the neighboring node has at least one block not in the linear span of all the blocks possessed by that particular node. Upon receiving enough linearly independent encoded blocks, a node is able to decode the whole file.

Compared with store-and-forward, the application of network coding as described in the above system can reduce the file download time, because an encoded block uploaded by a node contains information about every block possessed by that node. Moreover, in case some nodes leave the system before the end of the distribution process, it is more likely that the remaining nodes have the necessary information to recover the whole file if network coding is used. In the following, we will give a quantitative analysis to substantiate these claimed advantages of network coding.
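The following sketch (my own; the prime field standing in for the base field and the block contents are made up) mirrors (19.178)-(19.180): a block recoded by Client A is itself a random linear combination of $B_1, \cdots, B_k$ with the composed coefficient vector.

```python
# Recoding at intermediate nodes composes coefficient vectors linearly.
import random

p = 65537                  # prime field standing in for the base field F
k = 4
B = [[random.randrange(p) for _ in range(8)] for _ in range(k)]  # data blocks

def combine(blocks, coeffs):
    return [sum(c * b[i] for c, b in zip(coeffs, blocks)) % p
            for i in range(len(blocks[0]))]

# Server -> Client A: two encoded blocks with fresh random coefficients.
c1 = [random.randrange(p) for _ in range(k)]
c2 = [random.randrange(p) for _ in range(k)]
E1, E2 = combine(B, c1), combine(B, c2)

# Client A -> Client B: recombine what A holds; the resulting coefficient
# vector on B_1,...,B_k is c3[0]*c1 + c3[1]*c2, exactly as in (19.180).
c3 = [random.randrange(p) for _ in range(2)]
E3 = combine([E1, E2], c3)
c_eff = [(c3[0] * c1[j] + c3[1] * c2[j]) % p for j in range(k)]
assert E3 == combine(B, c_eff)
```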
19.7.2 Model and Analysis

Let $V$ be the set of all the nodes in the system. In implementation, blocks of data are transmitted between neighboring nodes in an asynchronous manner, and possibly at different speeds. To simplify the analysis, we assume that every transmission from one node to a neighboring node is completed in an integral number of time units. Then we can unfold the network of nodes in discrete time into a graph $G^* = (V^*, E^*)$ with the node set
$$V^* = \{i_t : i \in V \text{ and } t \geq 0\}, \qquad (19.181)$$
where node $i_t \in V^*$ corresponds to node $i \in V$ at time $t$. The edge set $E^*$, specified below, is determined by the strategy adopted by the server as well as by all the other nodes in $V$ for requesting the uploading of data blocks from the neighboring nodes. Specifically, there are two types of edges in $E^*$:

1. There is an edge with capacity $m$ from node $i_t$ to node $j_{t'}$, where $t < t'$, if $m$ blocks are transmitted from node $i$ to node $j$, starting at time $t$ and ending at time $t'$.
2. For each $i \in V$ and $t \geq 0$, there is an edge with infinite capacity from node $i_t$ to node $i_{t+1}$.

An edge of the second type models the assumption that the blocks, once possessed by a node, are retained in that node indefinitely over time. Without loss of generality, we may assume that all the blocks possessed by nodes $i_l$, $l \leq t$, are transmitted uncoded on the edge from node $i_t$ to node $i_{t+1}$. An illustration of the graph $G^*$ up to $t = 3$, with $V$ consisting of the server S and three clients A, B, and C, is given in Figure 19.8, where the edges with infinite capacities are lightened for clarity.

[Figure 19.8: the time-expanded graph $G^*$ for the server S and Clients A, B, and C at times $t = 0, 1, 2, 3$, with finite-capacity transmission edges between distinct nodes and infinite-capacity memory edges along each node's own timeline.]
Fig. 19.8. An illustration of the graph $G^*$.

Note that the graph $G^*$ is acyclic, because each edge points in the positive time direction and hence a cycle cannot be formed.

Denote the server S by node $s \in V$ and regard node $s_0$ in $G^*$ as the source node generating the whole file consisting of $k$ data blocks and multicasting it to all the other nodes in $G^*$ via random linear network coding, with the coefficients in the random linear combinations forming the encoded blocks being the local encoding kernels of the network code. Note that random network coding is applied on $G^*$, not on the logical network formed by the user nodes. Also note that in order to simplify our description of the system, we have omitted the necessity of delivering the global encoding kernels to the nodes for the purpose of decoding. We refer the reader to the discussion toward the end of Section 19.3 for this implementation detail.

We are now ready to determine the time it takes for a particular node $i \in V$ to receive the whole file. Denote the value of a max-flow from node $s_0$ to a node $v \in G^*$ other than $s_0$ by $\mathrm{maxflow}(v)$. When the base field is sufficiently large, by Corollary 19.24, with probability close to 1 the network code generated randomly during the process is a linear multicast, so that those nodes $i_t$ with
$$\mathrm{maxflow}(i_t) \geq k \qquad (19.182)$$
can receive the whole file. In other words, with high probability, the time it takes a node $i \in V$ to receive the whole file is equal to $t^*$, the minimum $t$ that satisfies (19.182). Obviously, this is a lower bound on the time it takes a node $i \in V$ to receive the whole file, and it is achievable with high probability by the system under investigation. In the rare event that node $i$ cannot decode at time $t^*$, it can eventually decode upon downloading some additional encoded blocks from the neighboring nodes.
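As a sketch of this analysis (my own; the transmission log, block count, and node names are made up), the following builds $G^*$ from a log of transmissions and computes $\mathrm{maxflow}(i_t)$ with a tiny Ford-Fulkerson routine, so that $t^*$ is the first $t$ at which the flow reaches $k$.

```python
# Time-expanded graph G* and maxflow(i_t) from a made-up transmission log.
INF = float('inf')
log = [('S', 'A', 0, 1, 2),   # (i, j, t_start, t_end, m blocks)
       ('S', 'B', 0, 1, 3),
       ('B', 'A', 1, 2, 1)]
V, T, k = {'S', 'A', 'B'}, 2, 3

def build():
    cap = {}
    def add(u, v, c):
        cap.setdefault(u, {})[v] = cap[u].get(v, 0) + c
        cap.setdefault(v, {}).setdefault(u, 0)
    for i, j, t0, t1, m in log:
        add((i, t0), (j, t1), m)          # type-1: transmission edges
    for i in V:
        for t in range(T):
            add((i, t), (i, t + 1), INF)  # type-2: memory edges
    return cap

def augment(cap, u, t, pushed, seen):
    if u == t:
        return pushed
    seen.add(u)
    for v in list(cap[u]):
        if cap[u][v] > 0 and v not in seen:
            d = augment(cap, v, t, min(pushed, cap[u][v]), seen)
            if d:
                cap[u][v] -= d
                cap[v][u] += d
                return d
    return 0

def max_flow(s, t):
    cap, total = build(), 0
    while True:
        d = augment(cap, s, t, INF, set())
        if not d:
            return total
        total += d

for t in range(1, T + 1):
    print(t, max_flow(('S', 0), ('A', t)))   # t* = first t with flow >= k
```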
When some nodes leave the system before the end of the distribution process, an important question is whether the remaining nodes have the necessary information to recover the whole file. To be specific, assume that a subset of users $U^c \subset V$ leave the system after time $t$, and we want to know whether the users in $U = V \setminus U^c$ have sufficient information to recover the whole file. If they do, by further exchanging information among themselves, every user in $U$ can eventually receive the whole file (provided that no more nodes leave the system). Toward this end, again consider the graph $G^*$. Let
$$U_t = \{u_t : u \in U\} \qquad (19.183)$$
and denote the value of a max-flow from node $s_0$ to the set of nodes $U_t$ by $\mathrm{maxflow}(U_t)$. If
$$\mathrm{maxflow}(U_t) \geq k, \qquad (19.184)$$
then the users in $U$ with high probability would have the necessary information to recover the whole file. This is almost the best possible performance one can expect from such a system, because if
$$\mathrm{maxflow}(U_t) < k, \qquad (19.185)$$
it is simply impossible for the users in $U$ to recover the whole file even if they are allowed to exchange information among themselves.

Thus we see that random network coding provides the system with both maximum bandwidth efficiency and maximum robustness. However, additional computational resource is required compared with store-and-forward. These are engineering tradeoffs in the design of such systems.

We conclude this section with an example further demonstrating the advantage of random network coding when it is applied to packet networks with packet loss.

Example 19.42. The random network coding scheme discussed in this section can be applied to packet networks with packet loss. Consider the network depicted in Figure 19.9, consisting of three nodes, $s$, $t$, and $u$. Data packets are sent from node $s$ to node $u$ via node $t$.

[Figure 19.9: a simple packet network with channels $(s, t)$ and $(t, u)$ in tandem.]
Fig. 19.9. A simple packet network.

Let the packet loss rates of channels $(s, t)$ and $(t, u)$ both be $\gamma$, i.e., a fraction $\gamma$ of the packets are lost during their transmission through each channel. Then the fraction of packets sent by node $s$ that are eventually received at node $u$ is $(1 - \gamma)^2$. To fix ideas, assume the packet size is sufficiently large and one packet is sent on each channel per unit time.

To remedy the problem of packet loss, a fountain code can be employed at node $s$. This would allow data packets to be sent from node $s$ to node $u$ reliably at an effective rate equal to $(1 - \gamma)^2$. In such a scheme, node $t$ simply forwards to node $u$ the packets it receives from node $s$. On the other hand, by using the random network coding scheme we have discussed, data packets can be sent from node $s$ to node $u$ reliably at an effective rate equal to $1 - \gamma$, which is strictly higher than $(1 - \gamma)^2$ whenever $\gamma > 0$. This can be proved by means of the analysis presented in this section. The details are left as an exercise.

While a fountain code can remedy the problem of packet loss between the source node and the sink node, it cannot prevent the packet loss rate from accumulating when packets are routed through the network. On the other hand, the use of random network coding allows information to be transmitted from the source node to the sink node at the maximum possible rate, namely the min-cut between the source node and the sink node after the packet loss in the channels has been taken into account.
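A toy slot-by-slot simulation (my own simplified model, not from the text) contrasts the two rates in Example 19.42: with forwarding, a packet must survive both hops; with recoding at node $t$, a packet delivered on hop $(t, u)$ is innovative whenever $u$ knows less than $t$, so $u$'s rank grows at roughly the per-hop rate $1 - \gamma$.

```python
# Monte Carlo comparison of forwarding vs. recoding over two lossy hops.
import random

gamma, n = 0.2, 100000
at_t = at_u_fwd = at_u_rec = 0
for _ in range(n):
    s_ok = random.random() >= gamma   # survives hop (s, t)
    u_ok = random.random() >= gamma   # survives hop (t, u)
    at_t += s_ok
    at_u_fwd += s_ok and u_ok         # forwarding: needs both hops
    at_u_rec += u_ok and at_u_rec < at_t  # recoding: innovative iff t knows more
print(f"forwarding rate ~ {at_u_fwd / n:.3f} (theory {(1-gamma)**2:.3f})")
print(f"recoding  rate ~ {at_u_rec / n:.3f} (theory {1-gamma:.3f})")
```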
Chapter Summary

Linear Network Code:
• $k_{d,e}$ is the local encoding kernel of the adjacent pair of channels $(d, e)$.
• $f_e$ is the global encoding kernel of channel $e$.
• $f_e = \sum_{d \in \mathrm{In}(t)} k_{d,e} f_d$ for $e \in \mathrm{Out}(t)$.
• $f_e$, $e \in \mathrm{In}(s)$ form the standard basis of $F^\omega$.
• Channel $e$ transmits the symbol $x \cdot f_e$, where $x \in F^\omega$ is the message generated at source node $s$.

Linear Multicast, Broadcast, and Dispersion: An $\omega$-dimensional linear network code is a
• linear multicast if $\dim(V_t) = \omega$ for every node $t \neq s$ with $\mathrm{maxflow}(t) \geq \omega$;
• linear broadcast if $\dim(V_t) = \min\{\omega, \mathrm{maxflow}(t)\}$ for every node $t \neq s$;
• linear dispersion if $\dim(V_T) = \min\{\omega, \mathrm{maxflow}(T)\}$ for every collection $T$ of non-source nodes.

Generic Network Code: An $\omega$-dimensional linear network code is generic if for any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$, where $e_j \in \mathrm{Out}(t_j)$, and any $1 \leq k \leq m$, if
a) there is no directed path from $t_k$ to $t_j$ for $j \neq k$,
b) $V_{t_k} \not\subset V_{\bar\xi_k}$, where $\bar\xi_k = \xi \setminus \{e_k\}$,
then $f_{e_k} \notin V_{\bar\xi_k}$. A generic network code is a linear dispersion, and hence a linear broadcast and a linear multicast.

Characterizations of a Generic Network Code: Each of the following is a necessary and sufficient condition for an $\omega$-dimensional linear network code to be generic:
1) For any nonempty collection of channels $\xi = \{e_1, e_2, \cdots, e_m\} \subset E$, where $e_j \in \mathrm{Out}(t_j)$, if $V_{t_j} \not\subset V_{\bar\xi_j}$ for all $1 \leq j \leq m$, then $f_e$, $e \in \xi$ are linearly independent.
2) For any nonempty collection of channels $\xi \subset E$, if $|\xi| = \min\{\omega, \mathrm{maxflow}(\xi)\}$, then $f_e$, $e \in \xi$ are linearly independent.

Static Network Code: For a given linear network code and a configuration $\lambda$ of the network, the $\lambda$-global encoding kernel $f_{e,\lambda}$ of channel $e$ is calculated recursively by:
• $f_{e,\lambda} = \lambda(e) \sum_{d \in \mathrm{In}(t)} k_{d,e} f_{d,\lambda}$ for $e \in \mathrm{Out}(t)$;
• $f_{e,\lambda}$, $e \in \mathrm{In}(s)$ are independent of $\lambda$ and form the standard basis of $F^\omega$.
An $\omega$-dimensional linear network code is a
• static linear multicast if $\dim(V_{t,\lambda}) = \omega$ for every $\lambda$ and every node $t \neq s$ with $\mathrm{maxflow}_\lambda(t) \geq \omega$;
• static linear broadcast if $\dim(V_{t,\lambda}) = \min\{\omega, \mathrm{maxflow}_\lambda(t)\}$ for every $\lambda$ and every node $t \neq s$;
• static linear dispersion if $\dim(V_{T,\lambda}) = \min\{\omega, \mathrm{maxflow}_\lambda(T)\}$ for every $\lambda$ and every collection $T$ of non-source nodes;
• static generic network code if for every $\lambda$ and any nonempty collection of channels $\xi \subset E$ with $|\xi| = \min\{\omega, \mathrm{maxflow}_\lambda(\xi)\}$, the vectors $f_{e,\lambda}$, $e \in \xi$ are linearly independent.

Lemma: Let $g(z_1, z_2, \cdots, z_n)$ be a nonzero polynomial with coefficients in a field $F$. If $|F|$ is greater than the degree of $g$ in every $z_j$ for $1 \leq j \leq n$, then there exist $a_1, a_2, \cdots, a_n \in F$ such that $g(a_1, a_2, \cdots, a_n) \neq 0$.

Existence and Construction: All the linear network codes defined in this chapter exist and can be constructed either deterministically or randomly (with high probability) when the base field is sufficiently large.

Problems

In the following, let $G = (V, E)$ be the underlying directed acyclic network on which the linear network code is defined, and let $s$ be the unique source node in the network.

1. Show that in a network where the capacities of all the edges are equal to 1, the number of edge-disjoint paths from source node $s$ to a non-source node $t$ is equal to $\mathrm{maxflow}(t)$.

2. For the network code in Definitions 19.4 and 19.6, show that if the global encoding mappings are linear, then so are the local encoding mappings. (Yeung et al. .)
3. Network transfer matrix. Consider an $\omega$-dimensional linear network code.
a) Prove (19.91).
b) Fix an upstream-to-downstream order for the channels in the network and let $K$ be the $|E| \times |E|$ matrix whose $(d, e)$th element is equal to $k_{d,e}$ if $(d, e)$ is an adjacent pair of channels and equal to 0 otherwise. Let $A$ be the $\omega \times |E|$ matrix obtained by appending $|E| - |\mathrm{Out}(s)|$ columns of zeroes to $K_s$, and let $B_e$ be the $|E|$-column vector with all components equal to 0 except that the $e$th component is equal to 1. Show that $f_e = A(I - K)^{-1} B_e$ for all $e \in E$. The matrix $M = (I - K)^{-1}$ is called the network transfer matrix. (Koetter and Médard .)

4. Apply Lemma 19.17 to obtain a lower bound on the field size for the existence of a 2-dimensional linear multicast on the butterfly network.

5. Show that $\sum_{m=1}^{\omega} \binom{|E|-1}{m-1}$ is a polynomial in $|E|$ of degree $\omega$. This is the lower bound on the required field size in Algorithm 19.34.

6. Verify that the network code in Example 19.38 is a generic network code.

7. Simplified characterization of a generic network code. Consider an $\omega$-dimensional generic network code on a network for which $|\mathrm{Out}(s)| \geq \omega$.
a) Show that Condition 3) in Theorem 19.32 can be modified to restricting the cardinality of $\xi$ to $\omega$. Hint: If $|\xi| < \omega$, expand $\xi$ by including a certain subset of the channels in $\mathrm{Out}(s)$.
b) Simplify Algorithm 19.34 and tighten the lower bound on the required field size accordingly.
(Tan et al. .)

8. For the network below, prove the non-existence of a 2-dimensional binary generic network code.
[Figure for Problem 8: source node $s$ with channels toward nodes $x$ and $y$, two of which carry the global encoding kernels $[\,1\ 0\,]^\top$ and $[\,0\ 1\,]^\top$.]

9. Show that for $\eta \geq 2$, a linear multicast can be constructed by the Jaggi–Sanders algorithm provided that $|F| \geq \eta$. Hint: Two vector subspaces intersect at the origin.

10. Modify the Jaggi–Sanders algorithm for the construction of a static linear multicast.

11. Obtain a lower bound on the required field size and determine the computational complexity when Algorithm 19.40 is adapted for the construction of a $\{\lambda_1, \lambda_2, \cdots, \lambda_\kappa\}$-static generic network code.

12. Show that a transformation of a static generic network code is also a static generic network code.

13. A generic network code as a linear dispersion. Expand the network $G$ into a network $G' = (V', E')$ as follows. For an edge $e \in E$, let the edge be from node $v_e$ to node $w_e$. Install a new node $t_e$ and replace edge $e$ by two new edges $e'$ and $e''$, where $e'$ is from node $v_e$ to node $t_e$ and $e''$ is from node $t_e$ to node $w_e$. Show that a linear dispersion on $G'$ is equivalent to a generic network code on $G$. Hint: Use Theorem 19.32. (Kwok and Yeung , Tan et al. .)

14. Multi-rate linear broadcast. Consider a network on which an $\omega$-dimensional linear network code over a base field $F$ is defined. For all $e \in E$, let $f'_e = [\,I\ \ b\,] f_e$, where $I$ is the $(\omega-1) \times (\omega-1)$ identity matrix and $b$ is an $(\omega-1)$-column vector.
a) Show that $f'_e$, $e \in E$ constitute the global encoding kernels of an $(\omega-1)$-dimensional linear network code on the same network.
b) Show that the $(\omega-1)$-dimensional linear network code in a) and the original $\omega$-dimensional linear network code have the same local encoding kernels for all the non-source nodes.
It was shown in Fong and Yeung that an $(\omega-1)$-dimensional linear broadcast can be constructed from any $\omega$-dimensional linear broadcast by choosing a suitable vector $b$, provided $|F| \geq |V|$. This implies that multi-rate linear multicast/broadcast can be supported on a network without changing the local encoding kernels of the non-source nodes.
In other words, for this special case, network coding has no advantage over store-and-forward if complete information on the network topology is known ahead of time. This result is implied by a theorem on directed spanning tree packing by Edmonds (see also Wu et al.).
16. Let $L$ be the length of the message $x$ generated at source node $s$, where $L$ is divisible by $\mathrm{maxflow}(t)$ for all non-source nodes $t$. Allowing multiple usage of the network, devise a linear network coding scheme such that each non-source node $t$ can receive $x$ in $L/\mathrm{maxflow}(t)$ units of time. Such a scheme enables each non-source node in the network to receive the message within the shortest possible time.
17. Consider distributing a message of 5 data blocks on a P2P network with 4 nodes, Server S and Clients A, B, and C, by the system discussed in Section 19.7. Assume each data block is sufficiently large. The following transmissions take place during the process.

From    To    Start Time    End Time    # Blocks
  S      A        0             1           2
  S      B        0             1           3
  S      C        0             1           2
  B      A        1             2           1
  C      B        1             3           2
  S      B        2             3           1
  B      C        2             3           2

a) Which client is the first to receive the whole message?
b) If Client B leaves the system after $t = 3$, do Clients A and C have sufficient information to reconstruct the whole message?
c) Suppose the hard disk of Client B crashes at $t = 1.5$ and loses 2 blocks of data. Repeat b) by making the assumption that the transmissions by Client B starting at $t \leq 1$ are not affected by the disk failure.
18. Prove the claim in Example 19.42 that by using random network coding, data packets can be sent from node $s$ to node $u$ at an effective rate equal to $1 - \gamma$.

Historical Notes

The achievability of the max-flow bound by linear network codes was proved by Li et al. using a vector space approach and then by Koetter and Médard using a matrix approach. These two approaches correspond respectively to the notions of global encoding kernel and local encoding kernel discussed here. Neither the construction of a generic network code by Li et al. nor the construction of a linear multicast by Koetter and Médard is a polynomial-time algorithm. Jaggi and Sanders et al. obtained a polynomial-time algorithm for constructing a linear multicast by modifying the construction of a generic network code due to Li et al. A polynomial-time algorithm for constructing a generic network code was subsequently obtained by Yeung et al. Static network codes were introduced and their existence was proved in the literature; an explicit construction of such codes was subsequently given.

The optimality of random network coding was proved in Ahlswede et al. Ho et al. proved the optimality of random linear network coding and proposed the use of such codes on networks with unknown topology. A tight upper bound on the probability of decoding error for random linear network coding has recently been obtained by Balli et al. Implementation issues of network coding were discussed in Chou et al. The application of random network coding in peer-to-peer networks discussed in Section 19.7 is due to Gkantsidis and Rodriguez. Cai and Yeung have generalized the theory of single-source network coding on acyclic networks to network error correction and secure network coding. Network error correction subsumes classical algebraic coding, while secure network coding subsumes secret sharing in cryptography. The presentation in this chapter is largely based on the tutorial paper by Yeung et al. The various characterizations of a generic network code are due to Tan et al.
The analysis of a large-scale content distribution system with network coding is due to Yeung.

20 Single-Source Linear Network Coding: Cyclic Networks

A directed network is cyclic if it contains at least one directed cycle. In Chapter 19, we have discussed network coding over an acyclic network, for which there exists an upstream-to-downstream order on the nodes. Following such an order, whenever a node encodes, all the information needed would have already been received on the input channels of that node. For a cyclic network, such an order of the nodes does not exist. This makes network coding over a cyclic network substantially different from network coding over an acyclic network.

20.1 Delay-Free Cyclic Networks

When we discussed network coding over an acyclic network in Chapter 19, we assumed that there is no propagation delay in the network. Based on this assumption, a linear network code can be specified by either the local description in Definition 19.6 or the global description in Definition 19.7. The local and global descriptions of a linear network code are equivalent over an acyclic network because, given the local encoding kernels, the global encoding kernels can be calculated recursively in any upstream-to-downstream order. In other words, the equation (19.17) has a unique solution for the global encoding kernels in terms of the local encoding kernels, while (19.18) serves as the boundary conditions. If these descriptions are applied to a cyclic network, it is not clear whether for any given set of local encoding kernels, there exists a unique solution for the global encoding kernels. In the following, we give one example with a unique solution, one with no solution, and one with multiple solutions.

Example 20.1. Consider the cyclic network in Figure 20.1. Let $(s, t)$ precede $(v, t)$ in the ordering among the channels. Similarly, let $(s, t')$ precede $(v, t')$.

[Fig. 20.1. A 2-dimensional linear broadcast on a cyclic network.]

Given the local encoding kernels
$K_s = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad K_t = K_{t'} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad K_u = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad K_v = \begin{bmatrix} 1 & 1 \end{bmatrix},$  (20.1)
the equation (19.17) yields the following unique solution for the global encoding kernels:
$f_{(s,t)} = f_{(t,u)} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad f_{(s,t')} = f_{(t',u)} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$  (20.2)
$f_{(u,v)} = f_{(v,t)} = f_{(v,t')} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$  (20.3)
These global encoding kernels are shown in Figure 20.1, and they in fact define a 2-dimensional linear broadcast regardless of the choice of the base field. Since
$k_{(v,t),(t,u)} = 0$  (20.4)
and
$k_{(v,t'),(t',u)} = 0,$  (20.5)
information looping in the directed cycles $(t, u), (u, v), (v, t)$ and $(t', u), (u, v), (v, t')$ is prevented.

Example 20.2. An arbitrarily prescribed set of local encoding kernels on a cyclic network is unlikely to be compatible with any global encoding kernels. In Figure 20.2(a), a local encoding kernel is prescribed at each node in a cyclic network. Had a global encoding kernel $f_e$ existed for each channel $e$, the requirement (19.17) would imply the equations
" # $ % & 1 1 (a) (b) s x y w s x y w Kw = ' ( 1 Fig. 20.2. An example of a cyclic network and local encoding kernels that do not render a solution for the global encoding kernels. f(x,y) =  1 0  + f(w,x) (20.6) f(y,w) = 0 1  + f(x,y) (20.7) f(w,x) = f(y,w), (20.8) which sum up to 1 0  = 0 1  , (20.9) a contradiction. The nonexistence of compatible global encoding kernels can also be inter-preted in terms of the message transmission. Let the message x = [ a b ] be a generic vector in F 2, where F denotes the base field. The symbol transmitted on channel e, given by x · fe, are shown in Figure 20.2(b). In particular, the symbols transmitted on channels (x, y), (y, w), and (w, x), namely p, q, and r, are related through p = a + r (20.10) q = b + p (20.11) r = q. (20.12) These equalities imply that a + b = 0, (20.13) a contradiction to the independence between the two components a and b of a generic message. 488 20 Single-Source Linear Network Coding: Cyclic Networks Example 20.3. Let F be an extension field of GF(2)1. Consider the same pre-scription of the local encoding kernels at the nodes as in Example 20.2 except that KS = 1 1 0 0  . (20.14) The following three sets of global encoding kernels meet the requirement (19.17) in the definition of a linear network code: f(s,x) = f(s,y) = 1 0  , f(x,y) = 0 0  , f(y,w) = f(w,x) = 1 0  ; (20.15) f(s,x) = f(s,y) = 1 0  , f(x,y) = 1 0  , f(y,w) = f(w,x) = 0 0  ; (20.16) f(s,x) = f(s,y) = 1 0  , f(x,y) =  0 1  , f(y,w) = f(w,x) =  1 1  . (20.17) 20.2 Convolutional Network Codes In a real network, the propagation delay, which includes the processing delay at the nodes and the transmission delay over the channels, cannot be zero. For a cyclic network, this renders the implementation non-physical because the transmission on an output channel of a node can only depend on the information received on the input channels of that node. Besides, technical difficulties as described in the last section arise even with the ideal assumption that there is no propagation delay. In this section, we introduce the unit-delay network as a model for net-work coding on a cyclic network G = (V, E), where V and E are the sets of nodes and channels of the network, respectively. In this model, a symbol is transmitted on every channel in the network at every discrete time index, with the transmission delay equal to exactly one time unit. Intuitively, this assumption on the transmission delay over a channel ensures no information looping in the network even in the presence of a directed cycle. The results to be developed in this chapter, although discussed in the context of cyclic networks, apply equally well to acyclic networks. As a time-multiplexed network in the combined space-time domain, a unit-delay network can be unfolded with respect to the time dimension into an indefinitely long network called a trellis network. Corresponding to a physical node t is a sequence of nodes t0, t1, t2, · · · in the trellis network, with the subscripts being the time indices. A channel ej in the trellis network represents the transmission on the physical channel e between times j and j + 1. When the physical channel e is from node t to node u, the channel ej in the trellis 1 In an extension field of GF(2), the arithmetic on the symbols 0 and 1 are modulo 2 arithmetic. 
network is from node $t_j$ to node $u_{j+1}$. Note that the trellis network is acyclic regardless of the topology of the physical network, because all the channels are pointing in the forward time direction, so that a directed cycle cannot be formed.

[Fig. 20.3. The trellis network depicting a convolutional network code defined on the physical network in Figure 20.2.]

Example 20.4. Regard the network in Figure 20.2 as a unit-delay network. For each channel $e$ in the network, the scalar values in the base field $F$ transmitted on the channels $e_j$, $j \geq 0$ in the corresponding trellis network are determined by the local encoding kernels. This is illustrated in Figure 20.3. For instance, the channels $(x, y)_j$, $j \geq 0$ carry the scalar values
$0,\ 0,\ a_0,\ a_1,\ a_2 + b_0,\ a_0 + a_3 + b_1,\ a_1 + a_4 + b_2,\ \cdots,$  (20.18)
respectively. This constitutes an example of a convolutional network code to be formally defined in Definition 20.6.

Let $c_j$ be the scalar value in $F$ transmitted on a particular channel in the network at time $j$. A succinct mathematical expression for the sequence of scalars $c_0, c_1, c_2, \cdots$ is the z-transform
$\sum_{j=0}^{\infty} c_j z^j = c_0 + c_1 z + c_2 z^2 + \cdots,$  (20.19)
where the power $j$ of the dummy variable $z$ represents discrete time. The pipelining of scalars transmitted over a time-multiplexed channel can thus be regarded as the transmission of a power series over the channel. For example, the transmission of a scalar value on the channel $(x, y)_j$ for each $j \geq 0$ in the trellis network in Figure 20.3 translates into the transmission of the power series
$a_0 z^2 + a_1 z^3 + (a_2 + b_0) z^4 + (a_0 + a_3 + b_1) z^5 + (a_1 + a_4 + b_2) z^6 + \cdots$  (20.20)
over the channel $(x, y)$ in the network in Figure 20.2.

The z-transform in (20.19) is a power series in the dummy variable $z$, which would be regarded as either a real number or a complex number in the context of signal analysis. However, in the context of convolutional coding, such a power series should not be regarded as anything more than a representation of the sequence of scalars $c_0, c_1, c_2, \cdots$. Specifically, the dummy variable $z$ is not associated with any value, and there is no notion of convergence. Such power series are called formal power series.

Given a field $F$, consider rational functions of a dummy variable $z$ of the form
$\frac{p(z)}{1 + z q(z)},$  (20.21)
where $p(z)$ and $q(z)$ are polynomials. The following properties of such a function are relevant to our subsequent discussion:
1. The denominator has a nonzero constant term, so the function can be expanded into a power series by long division (see Example 20.5).
2. If $p(z)$ is not the zero polynomial, the inverse function, namely
$\frac{1 + z q(z)}{p(z)},$  (20.22)
exists. Note that the rational function in (20.22) does not represent a power series if $p(z)$ contains the factor $z$, or equivalently, does not contain a constant term.
The ring of power series over $F$ is conventionally denoted by $F[[z]]$. Rational functions of the form (20.21) will be called rational power series, which constitute a ring denoted by $F\langle z \rangle$. It follows directly from the definitions that $F\langle z \rangle$ is a subring of $F[[z]]$. We refer the reader to the literature for a comprehensive treatment of abstract algebra.
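Expanding a rational power series is a purely mechanical process. The following minimal Python sketch (our own illustration, not part of the text; the function name and interface are hypothetical) carries out the forward-substitution form of long division described in Example 20.5 below, and can be used to check expansions such as (20.26) or, later, the entries of the global encoding kernels in Example 20.7.

```python
def expand(p, q, n):
    """Expand p(z)/q(z) into its first n power-series coefficients.

    p, q are coefficient lists, constant term first (so [1, -1] is 1 - z).
    q must have a nonzero constant term, which is exactly the condition
    under which a power series expansion exists.  (Hypothetical helper.)
    """
    assert q[0] != 0, "denominator must have a nonzero constant term"
    c = []
    for j in range(n):
        # match the coefficient of z^j on both sides of q(z)*c(z) = p(z)
        s = p[j] if j < len(p) else 0
        for u in range(1, min(j, len(q) - 1) + 1):
            s -= q[u] * c[j - u]
        c.append(s / q[0])
    return c

print(expand([1], [1, -1], 6))              # 1/(1-z): [1.0, 1.0, 1.0, ...]
print(expand([0, 0, 1], [1, 0, 0, -1], 9))  # z^2/(1-z^3): z^2 + z^5 + z^8
```

The only field operation needed is division by the constant term of the denominator, so the same scheme works over any field, finite or not.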
In the following, we illustrate the concepts of rational power series through a few simple examples.

Example 20.5. If $z$ is a complex number, then we can write
$\frac{1}{1-z} = 1 + z + z^2 + z^3 + \cdots$  (20.23)
provided that $|z| < 1$, where we have interpreted the coefficients in the power series on the right hand side as real (or complex) numbers. If $|z| > 1$, the above expression is not meaningful because the power series diverges. However, if we do not associate $z$ with a value but regard the coefficients in the power series as elements in a commutative ring, we can always write
$(1-z)(1 + z + z^2 + z^3 + \cdots) = (1 + z + z^2 + z^3 + \cdots) - (z + z^2 + z^3 + \cdots)$  (20.24)
$= 1.$  (20.25)
In this sense, we say that $1-z$ is the reciprocal of the power series $1 + z + z^2 + z^3 + \cdots$ and write
$\frac{1}{1-z} = 1 + z + z^2 + z^3 + \cdots.$  (20.26)
We also say that $1 + z + z^2 + z^3 + \cdots$ is the power series expansion of $\frac{1}{1-z}$. In fact, the power series on the right hand side can be readily obtained by dividing 1 by $1-z$ using long division. Alternatively, we can seek the inverse of $1-z$ by considering the identity
$(1-z)(a_0 + a_1 z + a_2 z^2 + \cdots) = 1.$  (20.27)
By equating the powers of $z$ on both sides, we have
$a_0 = 1$  (20.28)
$-a_0 + a_1 = 0$  (20.29)
$-a_1 + a_2 = 0$  (20.30)
$\vdots$  (20.31)
Then by forward substitution, we immediately obtain
$1 = a_0 = a_1 = a_2 = \cdots,$  (20.32)
which gives exactly the power series obtained by long division. The reader can easily verify that long division indeed mimics the process of forward substitution.

For polynomials $p(z)$ and $q(z)$ where $q(z)$ is not the zero polynomial, we can always expand the rational function $\frac{p(z)}{q(z)}$ into a series. However, such a series is not always a power series. For example,
$\frac{1}{z - z^2} = \frac{1}{z}\left(\frac{1}{1-z}\right)$  (20.33)
$= \frac{1}{z}(1 + z + z^2 + \cdots)$  (20.34)
$= z^{-1} + 1 + z + z^2 + \cdots.$  (20.35)
The above is not a power series because of the term involving a negative power of $z$. In fact, the identity
$(z - z^2)(a_0 + a_1 z + a_2 z^2 + \cdots) = 1$  (20.36)
has no solution for $a_0, a_1, a_2, \cdots$ since there is no constant term on the left hand side. Therefore, $\frac{1}{z - z^2}$ indeed does not have a power series expansion. From the above example, we see that $\frac{p(z)}{q(z)}$ represents a rational power series if and only if $q(z)$ has a nonzero constant term, or equivalently, does not contain the factor $z$.

Definition 20.6 (Convolutional Network Code). An $\omega$-dimensional convolutional network code on a unit-delay network over a base field $F$ consists of an element $k_{d,e}(z) \in F\langle z \rangle$ for every adjacent pair of channels $(d, e)$ in the network, as well as a column $\omega$-vector $f_e(z)$ over $F\langle z \rangle$ for every channel $e$, such that:
$f_e(z) = z \sum_{d \in \mathrm{In}(t)} k_{d,e}(z) f_d(z)$ for $e \in \mathrm{Out}(t)$;  (20.37)
the vectors $f_e(z)$ for the imaginary channels $e \in \mathrm{In}(s)$ consist of scalar components that form the standard basis of the vector space $F^\omega$.  (20.38)
The vector $f_e(z)$ is called the global encoding kernel for channel $e$, and $k_{d,e}(z)$ is called the local encoding kernel for the adjacent pair of channels $(d, e)$. The $|\mathrm{In}(t)| \times |\mathrm{Out}(t)|$ matrix
$K_t(z) = [k_{d,e}(z)]_{d \in \mathrm{In}(t),\, e \in \mathrm{Out}(t)}$  (20.39)
is called the local encoding kernel at node $t$.

The constraint (20.37) is the time-multiplexed version of (19.17), with the factor $z$ in the equation indicating a unit-time delay that represents the transmission delay over a channel. In the language of electronic circuit theory, for an adjacent pair of channels $(d, e)$, the "gain" from channel $d$ to channel $e$ is given by $z k_{d,e}(z)$.
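The last remark suggests a concrete picture: each local encoding kernel is realized by a small feedback circuit. As an illustration (our own sketch, not from the text; the class and its interface are hypothetical, and this is one direct-form realization among several, cf. Problem 4 at the end of this chapter), the following Python class implements a filter over GF(2) with transfer function $p(z)/(1 + z q(z))$ using finitely many binary shift registers.

```python
class GF2Filter:
    """Filter over GF(2) with transfer function p(z) / (1 + z*q(z)).

    num lists the coefficients of p(z), constant term first; den_tail
    lists those of z*q(z) starting from the z^1 term, so the denominator
    is 1 + den_tail[0]*z + den_tail[1]*z**2 + ...  (Hypothetical names.)
    """
    def __init__(self, num, den_tail):
        self.num, self.den = num, den_tail
        # shift registers holding the internal signal w_{j-1}, w_{j-2}, ...
        self.reg = [0] * max(len(num) - 1, len(den_tail))

    def step(self, x):
        # feedback: w_j = x_j + sum_u den[u] * w_{j-1-u}  (mod 2)
        w = x
        for u, du in enumerate(self.den):
            w ^= du & self.reg[u]
        # feedforward: y_j = sum_u num[u] * w_{j-u}  (mod 2)
        taps = [w] + self.reg
        y = 0
        for u, pu in enumerate(self.num):
            y ^= pu & taps[u]
        self.reg = ([w] + self.reg)[:len(self.reg)]   # shift the registers
        return y

# impulse response of 1/(1 + z**3) over GF(2): since -1 = 1 mod 2,
# 1/(1 + z**3) = 1 + z**3 + z**6 + ..., i.e. 1, 0, 0, 1, 0, 0, 1, ...
f = GF2Filter(num=[1], den_tail=[0, 0, 1])
print([f.step(x) for x in [1, 0, 0, 0, 0, 0, 0, 0, 0]])
```

Only three binary registers are needed for $1/(1+z^3)$, which is the point of the rationality requirement: a rational $k_{d,e}(z)$ always admits such a finite realization, whereas a general power series need not.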
A convolutional network code over a unit-delay network can be viewed as a discrete-time linear time-invariant (LTI) system defined by the local encoding kernels, where the local encoding kernel $k_{d,e}(z)$ specifies the impulse response of an LTI filter from channel $d$ to channel $e$. The requirement that $k_{d,e}(z)$ is a power series corresponds to the causality of the filter. The additional requirement that $k_{d,e}(z)$ is rational ensures that the filter is implementable by a finite circuitry of shift registers.

Intuitively, once the local encoding kernels are given, the global encoding kernels are uniquely determined. This is explained as follows. Write
$f_e(z) = \sum_{j=0}^{\infty} f_{e,j} z^j = f_{e,0} + f_{e,1} z + f_{e,2} z^2 + \cdots$  (20.40)
and
$k_{d,e}(z) = \sum_{j=0}^{\infty} k_{d,e,j} z^j = k_{d,e,0} + k_{d,e,1} z + k_{d,e,2} z^2 + \cdots,$  (20.41)
where $f_{e,j}$ is a column $\omega$-vector in $F^\omega$ and $k_{d,e,j}$ is a scalar in $F$. Then the equation in (20.37) can be written in time domain as the convolutional equation
$f_{e,j} = \sum_{d \in \mathrm{In}(t)} \left( \sum_{u=0}^{j-1} k_{d,e,u}\, f_{d,j-1-u} \right)$  (20.42)
for $j \geq 0$, with the boundary conditions provided by (20.38):
• The vectors $f_{e,0}$, $e \in \mathrm{In}(s)$ form the standard basis of the vector space $F^\omega$.
• The vectors $f_{e,j}$, $e \in \mathrm{In}(s)$ are the zero vector for all $j \geq 1$.
For $j = 0$, the summation in (20.42) is empty, so that $f_{e,0}$ vanishes. For $j \geq 0$, the right hand side of (20.42) involves the vectors $f_{d,i}$ only for $0 \leq i \leq j-1$. Thus the vectors $f_{e,j}$, $j \geq 1$ can be calculated recursively via (20.42) with the boundary condition
$f_{d,0} = 0$ for all $d \in E$.  (20.43)
Together with $f_{e,0} = 0$, the global encoding kernel $f_e(z)$ is determined (cf. (20.40)). In other words, in a convolutional network code over a unit-delay network, the global encoding kernels are determined once the local encoding kernels are given. From (20.40), we see that the components of $f_e(z)$ are power series in $z$, so $f_e(z)$ is a column $\omega$-vector over $F[[z]]$. In Theorem 20.9, we will further establish that the components of the global encoding kernels are in fact rational functions in $z$, proving that $f_e(z)$ is indeed a column $\omega$-vector over $F\langle z \rangle$ as required in Definition 20.6 for a convolutional network code.

Example 20.7. In Figure 20.2, denote the two imaginary channels by $(o, s)$ and $(o, s)'$. A convolutional network code is specified by the prescription of a local encoding kernel at every node as shown in the figure:
$K_s(z) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad K_x(z) = K_y(z) = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \quad K_w(z) = \begin{bmatrix} 1 \end{bmatrix},$  (20.44)
and a global encoding kernel for every channel:
$f_{(o,s)}(z) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \quad f_{(o,s)'}(z) = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$  (20.45)
$f_{(s,x)}(z) = z \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} z \\ 0 \end{bmatrix}$  (20.46)
$f_{(s,y)}(z) = z \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ z \end{bmatrix}$  (20.47)

[Fig. 20.4. The local and global encoding kernels of the convolutional network code in Example 20.7.]

$f_{(x,y)}(z) = \begin{bmatrix} z^2/(1-z^3) \\ z^4/(1-z^3) \end{bmatrix}$  (20.48)
$f_{(y,w)}(z) = \begin{bmatrix} z^3/(1-z^3) \\ z^2/(1-z^3) \end{bmatrix}$  (20.49)
$f_{(w,x)}(z) = \begin{bmatrix} z^4/(1-z^3) \\ z^3/(1-z^3) \end{bmatrix},$  (20.50)
where the last three global encoding kernels have been solved from the following equations:
$f_{(x,y)}(z) = z \begin{bmatrix} f_{(s,x)}(z) & f_{(w,x)}(z) \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = z^2 \begin{bmatrix} 1 \\ 0 \end{bmatrix} + z f_{(w,x)}(z)$  (20.51)
$f_{(y,w)}(z) = z \begin{bmatrix} f_{(s,y)}(z) & f_{(x,y)}(z) \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = z^2 \begin{bmatrix} 0 \\ 1 \end{bmatrix} + z f_{(x,y)}(z)$  (20.52)
$f_{(w,x)}(z) = z \left( f_{(y,w)}(z) \right) \begin{bmatrix} 1 \end{bmatrix} = z f_{(y,w)}(z).$  (20.53)
These local and global encoding kernels of a 2-dimensional convolutional network code are summarized in Figure 20.4.

Represent the message generated at source node $s$ at time $j$, where $j \geq 0$, by a row $\omega$-vector $x_j \in F^\omega$. Equivalently, source node $s$ generates the message pipeline represented by the z-transform
$x(z) = \sum_{j=0}^{\infty} x_j z^j,$  (20.54)
which is a row $\omega$-vector over $F[[z]]$, the ring of power series over $F$. Here, $x(z)$ is not necessarily rational. Through a convolutional network code, each channel $e$ carries the power series $x(z) f_e(z)$. Write
$x(z) f_e(z) = \sum_{j=0}^{\infty} m_{e,j} z^j,$  (20.55)
where
$m_{e,j} = \sum_{u=0}^{j} x_u f_{e,j-u}.$  (20.56)
For $e \in \mathrm{Out}(t)$, from the equation in (20.37), we obtain
$x(z) f_e(z) = x(z) \left( z \sum_{d \in \mathrm{In}(t)} k_{d,e}(z) f_d(z) \right)$  (20.57)
$= z \sum_{d \in \mathrm{In}(t)} k_{d,e}(z) \left[ x(z) f_d(z) \right],$  (20.58)
or equivalently in time domain,
$m_{e,j} = \sum_{d \in \mathrm{In}(t)} \left( \sum_{u=0}^{j-1} k_{d,e,u}\, m_{d,j-1-u} \right).$  (20.59)
The reader should compare (20.59) with (20.42). Note that the scalar values $m_{e,j}$, $j \geq 1$ can be calculated recursively via (20.59) with the boundary condition
$m_{d,0} = 0$ for all $d \in E$.  (20.60)
Thus a node $t$ calculates the scalar value $m_{e,j}$ for transmitting on each output channel $e$ at time $j$ from the cumulative information it has received on all the input channels up to time $j-1$. The convolutional equation (20.59) can be implemented by a finite circuit of shift-registers in a causal manner because the local encoding kernels belong to $F\langle z \rangle$, the ring of rational power series over $F$ (cf. Definition 20.6).

Example 20.8. Consider the convolutional network code in Example 20.7. Let source node $s$ pipeline the message
$x(z) = \left[ \sum_{j=0}^{\infty} a_j z^j \quad \sum_{j=0}^{\infty} b_j z^j \right].$  (20.61)
Then the five channels $(s, x)$, $(s, y)$, $(x, y)$, $(y, w)$, and $(w, x)$ carry the following power series, respectively:
$x(z) f_{(s,x)}(z) = \sum_{j=0}^{\infty} a_j z^{j+1}$  (20.62)
$x(z) f_{(s,y)}(z) = \sum_{j=0}^{\infty} b_j z^{j+1}$  (20.63)
$x(z) f_{(x,y)}(z) = \left( \sum_{j=0}^{\infty} a_j z^{j+2} + \sum_{j=0}^{\infty} b_j z^{j+4} \right) \Big/ (1 - z^3)$  (20.64)
$= \left( \sum_{j=0}^{\infty} a_j z^{j+2} + \sum_{j=0}^{\infty} b_j z^{j+4} \right) \sum_{j=0}^{\infty} z^{3j}$  (20.65)
$= a_0 z^2 + a_1 z^3 + (a_2 + b_0) z^4 + (a_0 + a_3 + b_1) z^5 + \cdots$  (20.66)
$x(z) f_{(y,w)}(z) = \left( \sum_{j=0}^{\infty} a_j z^{j+3} + \sum_{j=0}^{\infty} b_j z^{j+2} \right) \Big/ (1 - z^3)$  (20.67)
$x(z) f_{(w,x)}(z) = \left( \sum_{j=0}^{\infty} a_j z^{j+4} + \sum_{j=0}^{\infty} b_j z^{j+3} \right) \Big/ (1 - z^3).$  (20.68)
At each time $j \geq 0$, the source generates a message $x_j = [\,a_j\ \ b_j\,]$. Thus channel $(s, x)$ carries the scalar 0 at time 0 and the scalar $a_{j-1}$ at time $j \geq 1$. Similarly, channel $(s, y)$ carries the scalar 0 at time 0 and the scalar $b_{j-1}$ at time $j \geq 1$. For every channel $e$, write
$x(z) f_e(z) = \sum_{j=0}^{\infty} m_{e,j} z^j$  (20.69)
as in (20.55). The actual encoding process at node $x$ is as follows. At time $j$, node $x$ has received the sequence $m_{d,0}, m_{d,1}, \cdots, m_{d,j-1}$ for $d = (s, x)$ and $(w, x)$. Accordingly, at time $j \geq 1$, channel $(x, y)$ transmits the scalar value
$m_{(x,y),j} = \sum_{u=0}^{j-1} k_{(s,x),(x,y),u}\, m_{(s,x),j-1-u} + \sum_{u=0}^{j-1} k_{(w,x),(x,y),u}\, m_{(w,x),j-1-u}$  (20.70)
$= m_{(s,x),j-1} + m_{(w,x),j-1}.$  (20.71)
Similarly, channels $(y, w)$ and $(w, x)$ transmit the scalar values
$m_{(y,w),j} = m_{(s,y),j-1} + m_{(x,y),j-1}$  (20.72)
and
$m_{(w,x),j} = m_{(y,w),j-1},$  (20.73)
respectively. The values $m_{(x,y),j}$, $m_{(y,w),j}$, and $m_{(w,x),j}$ for $j \geq 1$ can be calculated recursively by the above formulas with the boundary condition
$m_{e,0} = 0$ for all $e \in E$,  (20.74)
and they are shown in the trellis network in Figure 20.3 for small values of $j$. For instance, the channel $(x, y)$ carries the scalar values
$m_{(x,y),0} = 0,\ m_{(x,y),1} = 0,\ m_{(x,y),2} = a_0,\ m_{(x,y),3} = a_1,\ m_{(x,y),4} = a_2 + b_0,\ m_{(x,y),5} = a_0 + a_3 + b_1,\ \cdots.$  (20.75)
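The recursions (20.71)-(20.73) with the boundary condition (20.74) are easy to machine-check. The following sketch (our own code, assuming the sympy library; the variable names are ours) runs the time-domain encoding with symbolic message symbols and reproduces the values in (20.75), i.e., the entries of the trellis network in Figure 20.3.

```python
import sympy as sp

T = 8                                  # number of time steps to simulate
a = sp.symbols(f'a0:{T}')              # a_j: first component of x_j
b = sp.symbols(f'b0:{T}')              # b_j: second component of x_j

# m[e][j] is the symbol carried on channel e at time j; boundary (20.74)
m = {e: [0] * (T + 1) for e in ('sx', 'sy', 'xy', 'yw', 'wx')}
for j in range(1, T + 1):
    m['sx'][j] = a[j - 1]                            # channel (s,x)
    m['sy'][j] = b[j - 1]                            # channel (s,y)
    m['xy'][j] = m['sx'][j - 1] + m['wx'][j - 1]     # eq. (20.71)
    m['yw'][j] = m['sy'][j - 1] + m['xy'][j - 1]     # eq. (20.72)
    m['wx'][j] = m['yw'][j - 1]                      # eq. (20.73)

print(m['xy'][:6])   # [0, 0, a0, a1, a2 + b0, a0 + a3 + b1], cf. (20.75)
```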
The z-transform of this sequence is
$x(z) f_{(x,y)}(z) = \left( \sum_{j=0}^{\infty} a_j z^{j+2} + \sum_{j=0}^{\infty} b_j z^{j+4} \right) \Big/ (1 - z^3),$  (20.76)
as calculated in (20.66).

In the discussion following Definition 20.6, we have shown that once the local encoding kernels of a convolutional network code over a unit-delay network are given, the global encoding kernels are determined. The proof of the next theorem further provides a simple closed-form expression for the global encoding kernels $f_e(z)$, from which it follows that the entries in $f_e(z)$ indeed belong to $F\langle z \rangle$ as required in Definition 20.6.

Theorem 20.9. Let $F$ be the base field and $k_{d,e}(z) \in F\langle z \rangle$ be given for every adjacent pair of channels $(d, e)$ on a unit-delay network. Then there exists a unique $\omega$-dimensional convolutional network code over $F$ with $k_{d,e}(z)$ as the local encoding kernel for every $(d, e)$.

Proof. Let the unit-delay network be represented by a directed graph $G = (V, E)$. Let $[k_{d,e}(z)]$ be the $|E| \times |E|$ matrix in which both the rows and columns are indexed by $E$, with the $(d, e)$th entry equal to the given $k_{d,e}(z)$ if $(d, e)$ is an adjacent pair of channels, and equal to zero otherwise. Denote the global encoding kernel of channel $e$ by $f_e(z)$ if it exists. Let $[f_e(z)]$ be the $\omega \times |E|$ matrix obtained by putting the global encoding kernels $f_e(z)$, $e \in E$ in juxtaposition. Let $H_s(z)$ be the $\omega \times |E|$ matrix obtained by appending $|E| - |\mathrm{Out}(s)|$ columns of zeroes to the local encoding kernel $K_s(z)$. The requirements (20.37) and (20.38) in Definition 20.6 can be written as
$[f_e(z)] = z [f_e(z)] [k_{d,e}(z)] + z I H_s(z),$  (20.77)
where $I$ in the above denotes the $\omega \times \omega$ identity matrix representing the global encoding kernels $f_e(z)$, $e \in \mathrm{In}(s)$ in juxtaposition. Rearranging the terms in (20.77), we obtain
$[f_e(z)] (I - z [k_{d,e}(z)]) = z H_s(z).$  (20.78)
In the matrix $z [k_{d,e}(z)]$, the diagonal elements are equal to zero because $(e, e)$ does not form an adjacent pair of channels for all $e \in E$, while the non-zero off-diagonal elements all contain the factor $z$. Therefore, $\det(I - z [k_{d,e}(z)])$ has the form
$1 + z q(z),$  (20.79)
where $q(z) \in F\langle z \rangle$, so that it is invertible inside $F\langle z \rangle$ because
$[\det(I - z [k_{d,e}(z)])]^{-1} = \frac{1}{1 + z q(z)}$  (20.80)
is a rational power series. It follows that
$(I - z [k_{d,e}(z)])^{-1}$  (20.81)
exists and is a matrix over $F\langle z \rangle$. Then the unique solution for $[f_e(z)]$ in (20.78) is given by
$[f_e(z)] = z H_s(z) (I - z [k_{d,e}(z)])^{-1}.$  (20.82)
With the two matrices $[k_{d,e}(z)]$ and $H_s(z)$ representing the given local encoding kernels and the matrix $[f_e(z)]$ representing the global encoding kernels, (20.82) is a closed-form expression for the global encoding kernels in terms of the local encoding kernels. In particular, $[f_e(z)]$ is a matrix over $F\langle z \rangle$ because all the matrices on the right hand side of (20.82) are over $F\langle z \rangle$. Thus we conclude that all the components of the global encoding kernels are in $F\langle z \rangle$. Hence, the given local encoding kernels $k_{d,e}(z)$ for all adjacent pairs $(d, e)$, together with the associated global encoding kernels $f_e(z)$, $e \in \mathrm{In}(s) \cup E$, constitute a unique convolutional network code over the unit-delay network $G$. ⊓⊔

In view of Definition 19.7 for the global description of a linear network code over an acyclic network, Definition 20.6 can be regarded as the global description of a convolutional network code over a unit-delay network, while Theorem 20.9 renders a local description by specifying the local encoding kernels only.
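Since the entries of $[k_{d,e}(z)]$ in Example 20.7 are scalars, the closed form (20.82) can be evaluated symbolically. The sketch below (our own code, assuming sympy; the channel ordering is an arbitrary choice of ours) recovers the global encoding kernels (20.46)-(20.50) directly from the local encoding kernels (20.44).

```python
import sympy as sp

z = sp.symbols('z')
E = ['sx', 'sy', 'xy', 'yw', 'wx']          # an arbitrary channel ordering
idx = {e: i for i, e in enumerate(E)}

# the |E| x |E| matrix [k_{d,e}(z)] of Theorem 20.9; in Example 20.7
# every local encoding kernel entry is the scalar 1
K = sp.zeros(5, 5)
for d, e in [('sx', 'xy'), ('wx', 'xy'),    # node x
             ('sy', 'yw'), ('xy', 'yw'),    # node y
             ('yw', 'wx')]:                 # node w
    K[idx[d], idx[e]] = 1

# H_s(z): K_s = I_2 placed in the columns of the two output channels of s
Hs = sp.zeros(2, 5)
Hs[0, idx['sx']] = 1
Hs[1, idx['sy']] = 1

F = sp.simplify(z * Hs * (sp.eye(5) - z * K).inv())   # eq. (20.82)
print(F[:, idx['xy']])   # [z**2/(1-z**3), z**4/(1-z**3)]^T, cf. (20.48)
```

sympy may render the entries as, for example, -z**2/(z**3 - 1), which is the same rational function as $z^2/(1-z^3)$ in (20.48).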
20.3 Decoding of Convolutional Network Codes

For a node $t$, let
$F_t(z) = [f_e(z)]_{e \in \mathrm{In}(t)}$  (20.83)
be the $\omega \times |\mathrm{In}(t)|$ matrix obtained by putting the global encoding kernels $f_e(z)$, $e \in \mathrm{In}(t)$ in juxtaposition. In the following, we define a convolutional multicast, the counterpart of a linear multicast defined in Chapter 19, for a unit-delay cyclic network. The existence of a convolutional multicast will also be established.

Definition 20.10 (Convolutional Multicast). An $\omega$-dimensional convolutional network code on a unit-delay network qualifies as an $\omega$-dimensional convolutional multicast if for every non-source node $t$ with $\mathrm{maxflow}(t) \geq \omega$, there exists an $|\mathrm{In}(t)| \times \omega$ matrix $D_t(z)$ over $F\langle z \rangle$ and a positive integer $\tau$ such that
$F_t(z) D_t(z) = z^\tau I,$  (20.84)
where $\tau > 0$ depends on node $t$ and $I$ is the $\omega \times \omega$ identity matrix. The matrix $D_t(z)$ and the integer $\tau$ are called the decoding kernel and the decoding delay at node $t$, respectively.

Source node $s$ generates the message pipeline
$x(z) = \sum_{j=0}^{\infty} x_j z^j,$  (20.85)
where $x_j$ is a row $\omega$-vector in $F^\omega$ and $x(z)$ is a row $\omega$-vector over $F[[z]]$. Through the convolutional network code, a channel $e$ carries the power series $x(z) f_e(z)$. The power series $x(z) f_e(z)$ received by a node $t$ from the input channels $e \in \mathrm{In}(t)$ form the row $|\mathrm{In}(t)|$-vector $x(z) F_t(z)$ over $F[[z]]$. If the convolutional network code is a convolutional multicast, node $t$ can use the decoding kernel $D_t(z)$ to calculate
$(x(z) F_t(z)) D_t(z) = x(z) (F_t(z) D_t(z))$  (20.86)
$= x(z) (z^\tau I)$  (20.87)
$= z^\tau x(z).$  (20.88)
The row $\omega$-vector $z^\tau x(z)$ of power series represents the message pipeline generated by source node $s$ delayed by $\tau$ time units. Note that $\tau > 0$ because the message pipeline $x(z)$ is delayed by one time unit at node $s$.

Example 20.11. Consider the network in Figure 20.4. Again let source node $s$ pipeline the message
$x(z) = \left[ \sum_{j=0}^{\infty} a_j z^j \quad \sum_{j=0}^{\infty} b_j z^j \right].$  (20.89)
For node $x$, we have
$F_x(z) = \begin{bmatrix} z & z^4/(1-z^3) \\ 0 & z^3/(1-z^3) \end{bmatrix}.$  (20.90)
Let
$D_x(z) = \begin{bmatrix} z^2 & -z^3 \\ 0 & 1-z^3 \end{bmatrix}.$  (20.91)
Then
$F_x(z) D_x(z) = z^3 I_2$  (20.92)
($I_2$ is the $2 \times 2$ identity matrix). From channels $(s, x)$ and $(w, x)$, node $x$ receives the row vector
$x(z) F_x(z) = \left[ \sum_{j=0}^{\infty} a_j z^{j+1} \quad \frac{\sum_{j=0}^{\infty} (a_j z^{j+4} + b_j z^{j+3})}{1-z^3} \right]$  (20.93)
and decodes the message pipeline as
$z^3 x(z) = \left[ \sum_{j=0}^{\infty} a_j z^{j+1} \quad \frac{\sum_{j=0}^{\infty} (a_j z^{j+4} + b_j z^{j+3})}{1-z^3} \right] \begin{bmatrix} z^2 & -z^3 \\ 0 & 1-z^3 \end{bmatrix}.$  (20.94)
Decoding at node $y$ is similar. Thus the 2-dimensional convolutional network code is a convolutional multicast.

Toward proving the existence of a convolutional multicast, we first observe that Lemma 19.17 can be strengthened as follows with essentially no change in the proof.

Lemma 20.12. Let $g(y_1, y_2, \cdots, y_m)$ be a nonzero polynomial with coefficients in a field $\tilde F$. For any subset $\tilde E$ of $\tilde F$, if $|\tilde E|$ is greater than the degree of $g$ in every $y_j$, then there exist $a_1, a_2, \cdots, a_m \in \tilde E$ such that
$g(a_1, a_2, \cdots, a_m) \neq 0.$  (20.95)

In the above lemma, the values $a_1, a_2, \cdots, a_m$ can be found by exhaustive search in $\tilde E$ provided that $\tilde E$ is finite. If $\tilde E$ is infinite, simply replace $\tilde E$ by a sufficiently large finite subset of $\tilde E$.

Theorem 20.13. There exists an $\omega$-dimensional convolutional multicast over any base field $F$. Furthermore, the local encoding kernels of the convolutional multicast can be chosen in any sufficiently large subset $\Phi$ of $F\langle z \rangle$.

Proof. Recall the equation (20.82) in the proof of Theorem 20.9:
$[f_e(z)] = z H_s(z) (I - z [k_{d,e}(z)])^{-1}.$  (20.96)
In this equation, the $\omega \times |E|$ matrix $[f_e(z)]$ on the left hand side represents the global encoding kernels, while the $\omega \times |E|$ matrix $H_s(z)$ and the $|E| \times |E|$ matrix $[k_{d,e}(z)]$ on the right hand side represent the local encoding kernels. Analogous to the proof of Theorem 19.20, denote by $(F\langle z \rangle)[*]$ the polynomial ring over $F\langle z \rangle$ with all the $k_{d,e}(z)$ as indeterminates.

Let $t$ be a non-source node with $\mathrm{maxflow}(t) \geq \omega$. Then there exist $\omega$ edge-disjoint paths from the $\omega$ imaginary channels to $\omega$ distinct channels in $\mathrm{In}(t)$. Put the global encoding kernels of these $\omega$ channels in juxtaposition to form the $\omega \times \omega$ matrix $L_t(z)$ over $(F\langle z \rangle)[*]$. We will show that
$\det(L_t(z)) \neq 0 \in (F\langle z \rangle)[*].$  (20.97)
Toward proving (20.97), it suffices to show that
$\det(L_t(z)) \neq 0 \in F\langle z \rangle$  (20.98)
when the determinant is evaluated at some particular values for the indeterminates $k_{d,e}(z)$. Analogous to the proof of Theorem 19.20, we set
$k_{d,e}(z) = 1$  (20.99)
for all adjacent pairs of channels $(d, e)$ along any one of the $\omega$ edge-disjoint paths, and set
$k_{d,e}(z) = 0$  (20.100)
otherwise. Then with a suitable indexing of the columns, the matrix $L_t(z)$ becomes diagonal with all the diagonal entries being powers of $z$. Hence, $\det(L_t(z))$ is equal to some positive power of $z$, proving (20.98) for this particular choice of the indeterminates $k_{d,e}(z)$ and hence proving (20.97). As the conclusion (20.97) applies to every non-source node $t$ with $\mathrm{maxflow}(t) \geq \omega$, it follows that
$\prod_{t : \mathrm{maxflow}(t) \geq \omega} \det(L_t(z)) \neq 0 \in (F\langle z \rangle)[*].$  (20.101)
Let $F(z)$ be the conventional notation for the field of rational functions in $z$ over the given base field $F$. The ring $F\langle z \rangle$ of rational power series is a subset of $F(z)$. Then any subset $\Phi$ of $F\langle z \rangle$ is also a subset of $F(z)$. Note that the ring $F\langle z \rangle$ is infinite. Then for any sufficiently large subset $\Phi$ of $F\langle z \rangle$, we can apply Lemma 20.12 to the polynomial in (20.101) with $\tilde F = F(z)$ and $\tilde E = \Phi$ to see that we can choose a value $a_{d,e}(z) \in F\langle z \rangle$ for each of the indeterminates $k_{d,e}(z)$ so that
$\prod_{t : \mathrm{maxflow}(t) \geq \omega} \det(L_t(z)) \neq 0 \in F\langle z \rangle$  (20.102)
when evaluated at $k_{d,e}(z) = a_{d,e}(z)$ for all $(d, e)$, which in turn implies that
$\det(L_t(z)) \neq 0 \in F\langle z \rangle$  (20.103)
for all nodes $t$ such that $\mathrm{maxflow}(t) \geq \omega$.

Henceforth, the local encoding kernel $k_{d,e}(z)$ will be fixed at the appropriately chosen value $a_{d,e}(z)$ for all $(d, e)$ as prescribed above. Without loss of generality, we assume that $L_t(z)$ consists of the first $\omega$ columns of $F_t(z)$. From (20.103), we can write
$\det(L_t(z)) = z^\tau \left( \frac{1 + z q(z)}{p(z)} \right),$  (20.104)
where $p(z)$ and $q(z)$ are polynomials over $F$ and $p(z)$ is not the zero polynomial. Note that the right hand side of (20.104) is the general form for a nonzero rational function in $z$. In this particular context, since the columns of $L_t(z)$ are global encoding kernels as prescribed by (20.82), each containing the factor $z$ in the numerator, we see that $\tau > 0$. Denote by $J_t(z)$ the adjoint matrix² of $L_t(z)$. Take the $\omega \times \omega$ matrix
$\left( \frac{p(z)}{1 + z q(z)} \right) J_t(z)$  (20.105)
and append to it $|\mathrm{In}(t)| - \omega$ rows of zeroes to form an $|\mathrm{In}(t)| \times \omega$ matrix $D_t(z)$. Then
$F_t(z) D_t(z) = \begin{bmatrix} L_t(z) & \ast \end{bmatrix} \begin{bmatrix} \left( \frac{p(z)}{1+zq(z)} \right) J_t(z) \\ 0 \end{bmatrix}$  (20.106)
$= \left( \frac{p(z)}{1 + z q(z)} \right) L_t(z) J_t(z)$  (20.107)
$= \left( \frac{p(z)}{1 + z q(z)} \right) \det(L_t(z)) I$  (20.108)
$= z^\tau I,$  (20.109)
where the last equality follows from (20.104) and $\ast$ denotes the remaining $|\mathrm{In}(t)| - \omega$ columns of $F_t(z)$. Hence, the matrix $D_t(z)$ qualifies as a decoding kernel at node $t$ in Definition 20.10. This proves the existence of the convolutional multicast as required. ⊓⊔
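Both the decoding kernel of Example 20.11 and the adjoint-matrix construction in the proof above can be checked mechanically. A small sketch (our own code, assuming sympy):

```python
import sympy as sp

z = sp.symbols('z')
Fx = sp.Matrix([[z, z**4/(1 - z**3)],
                [0, z**3/(1 - z**3)]])      # F_x(z), eq. (20.90)
Dx = sp.Matrix([[z**2, -z**3],
                [0, 1 - z**3]])             # D_x(z), eq. (20.91)
print(sp.simplify(Fx * Dx))                 # z**3 * I, cf. (20.92)

# the proof's recipe: det F_x = z**4/(1 - z**3), so tau = 4,
# p(z) = 1 - z**3, q(z) = 0, and D_t = (p/(1 + z*q)) * Adj(F_x)
D_proof = (1 - z**3) * Fx.adjugate()
print(sp.simplify(Fx * D_proof))            # z**4 * I
```

Note that the construction in the proof yields decoding delay 4 here, one more than the minimum delay 3 achieved by (20.91); the proof guarantees the existence of a decoding kernel but does not promise the smallest decoding delay.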
The proof of Theorem 20.13 constitutes an algorithm for constructing a convolutional multicast. By noting the lower bound on the size of $\tilde E$ in Lemma 20.12, a convolutional multicast can be constructed with high probability by randomly choosing the local encoding kernels in the subset $\Phi$ of $F\langle z \rangle$, provided that $\Phi$ is much larger than sufficient.

Example 20.14. When the base field $F$ is sufficiently large, Theorem 20.13 can be applied with $\Phi = F$ so that the local encoding kernels of the convolutional multicast can be chosen to be scalars. This special case is the convolutional counterpart of Theorem 19.20 for the existence of a linear multicast over an acyclic network. In this case, the local encoding kernels can be found by exhaustive search over $F$. More generally, by virtue of Lemma 20.12, the same exhaustive search applies to any large enough subset $\Phi$ of $F\langle z \rangle$. For example, $F$ can be GF(2) and $\Phi$ can be the set of all binary polynomials up to a sufficiently large degree.

² For a matrix $B$ whose entries are elements in a ring, denote by $\mathrm{Adj}(B)$ the adjoint matrix of $B$. Then $\mathrm{Adj}(B) B = B\, \mathrm{Adj}(B) = \det(B) I$.

Chapter Summary

Algebraic Structures:
• $F[[z]]$ denotes the ring of power series in $z$ over $F$.
• $F\langle z \rangle$ denotes the ring of rational power series in $z$ over $F$.
• $F(z)$ denotes the field of rational functions in $z$ over $F$.

Convolutional Network Code:
• $k_{d,e}(z) \in F\langle z \rangle$ is the local encoding kernel of the adjacent pair of channels $(d, e)$.
• $f_e(z) \in (F\langle z \rangle)^\omega$ is the global encoding kernel of channel $e$.
• $f_e(z) = z \sum_{d \in \mathrm{In}(t)} k_{d,e}(z) f_d(z)$ for $e \in \mathrm{Out}(t)$.
• $f_e(z)$, $e \in \mathrm{In}(s)$ form the standard basis of $F^\omega$.
• Channel $e$ carries the power series $x(z) f_e(z)$, where $x(z) \in (F[[z]])^\omega$ is the message pipeline generated at source node $s$.

Uniqueness of Convolutional Network Code: For given $k_{d,e}(z)$ for every adjacent pair of channels $(d, e)$ on a unit-delay network, there exists a unique convolutional network code with $k_{d,e}(z)$ as the local encoding kernel for every $(d, e)$. The global encoding kernels $f_e(z)$ can be expressed in terms of the local encoding kernels $k_{d,e}(z)$ as
$[f_e(z)] = z H_s(z) (I - z [k_{d,e}(z)])^{-1},$
where $H_s(z)$ is determined by the local encoding kernel at source node $s$.

Convolutional Multicast: A convolutional network code is a convolutional multicast if for every node $t \neq s$ with $\mathrm{maxflow}(t) \geq \omega$, there exists an $|\mathrm{In}(t)| \times \omega$ matrix $D_t(z)$ over $F\langle z \rangle$ and a positive integer $\tau$ such that
$F_t(z) D_t(z) = z^\tau I,$
where $F_t(z) = [f_e(z)]_{e \in \mathrm{In}(t)}$ and $\tau > 0$ depends on node $t$. The matrix $D_t(z)$ and the integer $\tau$ are called the decoding kernel and the decoding delay at node $t$, respectively.

Existence and Construction: A convolutional multicast on a unit-delay network with the local encoding kernels chosen in a subset $\Phi$ of $F\langle z \rangle$ exists and can be constructed randomly (with high probability) when $\Phi$ is sufficiently large. For example, $\Phi$ can be the base field $F$ provided that $F$ is sufficiently large, or $\Phi$ can be the set of all binary polynomials up to a sufficiently large degree.

Problems

1. Show that the right hand side of (20.104) is the general form for a nonzero rational function in $z$.
2. A formal Laurent series over a field $F$ has the form
$a_{-m} z^{-m} + a_{-(m-1)} z^{-(m-1)} + \cdots + a_{-1} z^{-1} + a_0 + a_1 z + a_2 z^2 + \cdots,$
where $m$ is a nonnegative integer. Show that for any nonzero formal Laurent series $f(z)$ over $F$, there exists a unique formal Laurent series $g(z)$ over $F$ such that $f(z) g(z) = 1$.
3. Verify the following series expansion:
$\frac{1}{1-z} = -z^{-1} - z^{-2} - z^{-3} - \cdots.$
Can you obtain this series by long division?
4. Construct a finite circuit of shift-registers that implements a discrete-time LTI system with transfer function
$\frac{a_0 + a_1 z + \cdots + a_n z^n}{b_0 + b_1 z + \cdots + b_n z^n},$
where $a_i$ and $b_i$ are elements in a finite field and $b_0 \neq 0$.
5. Consider the convolutional network code in Figure 20.4.
a) Is it a convolutional multicast?
b) If your answer in a) is positive, give the decoding kernel at node $y$ with minimum decoding delay.
c) Change $K_s$ to $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ and determine the corresponding global encoding kernels.
d) Instead of a convolutional multicast, can you construct a linear multicast on the network?

Historical Notes

The asymptotic achievability of the max-flow bound for unit-delay cyclic networks was proved by Ahlswede et al., where an example of a convolutional network code achieving this bound was given. Li et al. conjectured the existence of convolutional multicasts on such networks. This conjecture was subsequently proved by Koetter and Médard. Construction and decoding of convolutional multicasts have been studied by Erez and Feder, Fragouli and Soljanin, and Barbero and Ytrehus. The unifying treatment of convolutional codes here is based on Li and Yeung (see also Yeung et al.). Li and Ho recently obtained a general abstract formulation of convolutional network codes based on ring theory.

21 Multi-Source Network Coding

In Chapters 19 and 20, we have discussed single-source network coding, in which an information source is multicast in a point-to-point communication network. The maximum rate at which information can be multicast has a simple characterization in terms of the maximum flows in the graph representing the network. In this chapter, we consider the more general multi-source network coding problem, in which more than one mutually independent information source is generated at possibly different nodes, and each of the information sources is multicast to a specific set of nodes.

The achievable information rate region of a multi-source network coding problem, which will be formally defined in Section 21.4, refers to the set of all possible rates at which multiple information sources can be multicast simultaneously on a network. In a single-source network coding problem, we are interested in characterizing the maximum rate at which information can be multicast from the source node to all the sink nodes. In a multi-source network coding problem, we are interested in characterizing the achievable information rate region.

As discussed in Section 17.3, source separation is not necessarily optimal for multi-source network coding. It is therefore not a simple extension of single-source network coding. Unlike the single-source network coding problem, which has an explicit solution, the multi-source network coding problem has not been completely solved. In this chapter, by making use of the tools we have developed for information inequalities in Chapter 13 to Chapter 15, we will develop an implicit characterization of the achievable information rate region for multi-source network coding on acyclic networks.

21.1 The Max-Flow Bounds

The max-flow bound, which fully characterizes the maximum rate of an information source that can be multicast in a network, plays a central role in single-source network coding. We now revisit this bound in the context of
multi-source network coding. In the following discussion, the unit of information is the bit.

[Fig. 21.1. A network which achieves the max-flow bound. Part (a) shows the network; part (b) shows a coding scheme in which the bits $b_1$ and $b_2$ are routed to the receiving nodes.]

Consider the graph in Figure 21.1(a). The capacity of each edge is equal to 1. Two independent information sources $X_1$ and $X_2$ with rates $\omega_1$ and $\omega_2$, respectively, are generated at node 1. Suppose we want to multicast $X_1$ to nodes 2 and 4 and multicast $X_2$ to nodes 3 and 4. In the figure, an information source in square brackets is one which is to be received at that node. It is easy to see that the values of a max-flow from node 1 to node 2, from node 1 to node 3, and from node 1 to node 4 are respectively 1, 1, and 2. At node 2 and node 3, information is received at rates $\omega_1$ and $\omega_2$, respectively. At node 4, information is received at rate $\omega_1 + \omega_2$ because $X_1$ and $X_2$ are independent. Applying the max-flow bound at nodes 2, 3, and 4, we have
$\omega_1 \leq 1$  (21.1)
$\omega_2 \leq 1$  (21.2)
and
$\omega_1 + \omega_2 \leq 2,$  (21.3)
respectively. We refer to (21.1) to (21.3) as the max-flow bounds. Figure 21.2 is an illustration of all $(\omega_1, \omega_2)$ which satisfy these bounds, where $\omega_1$ and $\omega_2$ are obviously nonnegative.

[Fig. 21.2. The max-flow bounds for the network in Figure 21.1.]

We now show that the rate pair $(1, 1)$ is achievable. Let $b_1$ be a bit generated by $X_1$ and $b_2$ be a bit generated by $X_2$. In the scheme in Figure 21.1(b), $b_1$ is received at node 2, $b_2$ is received at node 3, and both $b_1$ and $b_2$ are received at node 4. Thus the multicast requirements are satisfied, and the information rate pair $(1, 1)$ is achievable. This implies that all $(\omega_1, \omega_2)$ which satisfy the max-flow bounds are achievable, because they are all inferior to $(1, 1)$ (see Figure 21.2). In this sense, we say that the max-flow bounds are achievable.

Suppose we now want to multicast $X_1$ to nodes 2, 3, and 4 and multicast $X_2$ to node 4 as illustrated in Figure 21.3.

[Fig. 21.3. A network which does not achieve the max-flow bounds.]

Applying the max-flow bound at either node 2 or node 3 gives
$\omega_1 \leq 1,$  (21.4)
and applying the max-flow bound at node 4 gives
$\omega_1 + \omega_2 \leq 2.$  (21.5)
Figure 21.4 is an illustration of all $(\omega_1, \omega_2)$ which satisfy these bounds.

[Fig. 21.4. The max-flow bounds for the network in Figure 21.3.]

We now show that the information rate pair $(1, 1)$ is not achievable. Suppose we need to send a bit $b_1$ generated by $X_1$ to nodes 2, 3, and 4 and send a bit $b_2$ generated by $X_2$ to node 4. Since $b_1$ has to be recovered at node 2, the bit sent to node 2 must be an invertible transformation of $b_1$. This implies that the bit sent to node 2 cannot depend on $b_2$. Similarly, the bit sent to node 3 also cannot depend on $b_2$. Therefore, it is impossible for node 4 to recover $b_2$ because both the bits received at nodes 2 and 3 do not depend on
21.2.1 Multilevel Diversity Coding Let X1, X2, · · · , XK be K information sources in decreasing order of impor-tance. These information sources are encoded into pieces of information. There are a number of users, each of them having access to a certain subset of the information pieces. Each user belongs to a level between 1 and K, where a Level k user can decode X1, X2, · · · , Xk. This model, called multilevel diver-sity coding, finds applications in fault-tolerant network communication, disk array, and distributed data retrieval. Figure 21.5 shows a graph which represents a 3-level diversity coding sys-tem. The graph consists of three layers of nodes. The top layer consists of a node at which information sources X1, X2, and X3 are generated. These information sources are encoded into three pieces, each of which is stored in a distinct node in the middle layer. A dummy node is associated with such a 21.2 Examples of Application 509 X1X2X3 [X1X2] [X1] [X1X2X3] Information Sources Storage nodes Users [X1] [X1] [X1X2] [X1X2] Level 1 Level 2 Level 3 Fig. 21.5. A 3-level diversity coding system. node to model the effect that the same information is retrieved every time the node is accessed (see the discussion in Section 17.2). The nodes in the bottom layer represent the users, each of them belonging to one of the three levels. Each of the three Level 1 users has access to a distinct node in the second layer (through the associated dummy node) and decodes X1. Similarly, each of the three Level 2 users has access to a distinct set of two nodes in the second layer and decodes X1 and X2. There is only one Level 3 user, who has access to all the three nodes in the second layer and decodes X1, X2, and X3. The model represented by the graph in Figure 21.5 is called symmetrical 3-level diversity coding because the model is unchanged by permuting the nodes in the middle layer. By degenerating information sources X1 and X3, the model is reduced to the diversity coding model discussed in Section 18.2. In the following, we describe two applications of symmetrical multilevel diversity coding: Fault-Tolerant Network Communication In a computer network, a data packet can be lost due to buffer overflow, false routing, breakdown of commu-nication links, etc. Suppose the packet carries K messages, X1, X2, · · · , XK, in decreasing order of importance. For improved reliability, the packet is encoded into K sub-packets, each of which is sent over a different channel. If any k sub-packets are received, then the messages X1, X2, · · · , Xk can be recovered. Disk Array Consider a disk array which consists of K disks. The data to be stored in the disk array are segmented into K pieces, X1, X2, · · · , XK, in decreasing order of importance. Then X1, X2, · · · , XK are encoded into K pieces, each of which is stored on a separate disk. When any k out of the K disks are functioning, the data X1, X2, · · · , Xk can be recovered. 510 21 Multi-Source Network Coding Transmitter Receiver Fig. 21.6. A satellite communication network. 21.2.2 Satellite Communication Network In a satellite communication network, a user is at any time covered by one or more satellites. A user can be a transmitter, a receiver, or both. Through the satellite network, each information source generated at a transmitter is multicast to a certain set of receivers. A transmitter can transmit to all the satellites within the line of sight, while a receiver can receive from all the satellites within the line of sight. 
Neighboring satellites may also communicate with each other. Figure 21.6 is an illustration of a satellite communication network. The satellite communication network in Figure 21.6 can be represented by the graph in Figure 21.7 which consists of three layers of nodes. The top layer represents the transmitters, the middle layer consists of nodes representing the satellites as well as the associated dummy nodes modeling the broadcast nature of the satellites, and the bottom layer represents the receivers. If a satellite is within the line-of-sight of a transmitter, then the corresponding pair of nodes are connected by a directed edge. Likewise, if a receiver is within the line-of-sight of a satellite, then the corresponding pair nodes are connected by a directed edge. An edges between two nodes in the middle layer represent the communication links between two neighboring satellites. Each information source is multicast to a specified set of receiving nodes as shown. 21.3 A Network Code for Acyclic Networks Let G = (V, E) denote an acyclic point-to-point communication network, where V and E are the set of nodes and the set of channels, respectively. We assume that each channel e ∈E is error-free with rate constraint Re. As in 21.3 A Network Code for Acyclic Networks 511 X1X2 X3 X4X5X6 X7 X8 [X1X8] [X2X7] [X5X6] [X3X5] [X4] [X3X7X8] Transmitters Satellites Receivers Fig. 21.7. A graph representing a satellite communication network. our previous discussions, we let In(t) and Out(t) be the set of input channels and the set of output channels of node t, respectively. Let S ⊂V be the set of source nodes and T ⊂V be the set of sink nodes. Without loss of generality, we assume G has the structure that a source node has no input channel and a sink node has no output channel. Accordingly, S and T are disjoint subsets of V . An information source represented by a random variable Xs is generated at a source node s ∈S, where Xs takes values in Xs = {1, 2, · · · , ⌈2nτs⌉} (21.6) according to the uniform distribution, where τs is the rate of the informa-tion source. The information sources Xs, s ∈S are assumed to be mutually independent. To simplify the notation, we will denote (Xs : s ∈A) by XA, Q s∈A Xs by XA, etc. At a sink node t ∈T, the set of information sources Xβ(t), where β(t) ⊂S, is received. We assume that each information source is received at at least one sink node, i.e., for every s ∈S, s ∈β(t) for some t ∈T. In the case when β(t) = S for all t ∈T, the problem is reduced to the single-source network coding problem. Definition 21.1. An (n, (ηe : e ∈E), (τs : s ∈S)) block code of length n on a given communication network is defined by 1) for all source node s ∈S and all channels e ∈Out(s), a local encoding function ke : Xs →{0, 1, · · · , ηe}; (21.7) 2) for all node i ∈V \ (S ∪T) and all channels e ∈Out(i), a local encoding function 512 21 Multi-Source Network Coding ke : Y d∈In(i) {0, 1, · · · , ηd} →{0, 1, · · · , ηe}; (21.8) 3) for all sink node t ∈T, a decoding function gt : Y d∈In(t) {0, 1, · · · , ηd} →Xβ(t). (21.9) The nodes in V are assumed to be ordered in an upstream-to-downstream manner as prescribed in Proposition 19.1. This defines a coding order among the nodes such that whenever a node encodes, all the information needed would have already been received on the input channels of that node. For all sink node t ∈T, define ∆t = Pr  ˜ gt(XS) ̸= Xβ(t) , (21.10) where ˜ gt(XS) denotes the value of gt as a function of XS. 
∆t is the probability that the set of information sources Xβ(t) is decoded incorrectly at sink node t. Throughout this chapter, all the logarithms are in the base 2. Definition 21.2. An information rate tuple ω = (ωs : s ∈S), where ω ≥0 (componentwise) is asymptotically achievable if for any ϵ > 0, there exists for sufficient large n an (n, (ηe : e ∈E), (τs : s ∈S)) code such that n−1 log ηe ≤Re + ϵ, e ∈E (21.11) τs ≥ωs −ϵ, s ∈S (21.12) ∆t ≤ϵ, t ∈T. (21.13) For brevity, an asymptotically achievable information rate tuple will be referred to as an achievable information rate tuple. 21.4 The Achievable Information Rate Region In this section, we define the achievable information rate region and give a characterization of this region. Definition 21.3. The achievable information rate region, denoted by R, is the set of all achievable information rate tuples ω. Remark It follows from the definition of the achievability of an information rate vector that if ω is achievable, then ω′ is achievable for all 0 ≤ω′ ≤ω. Also, if ω(k), k ≥1 are achievable, then it can be proved by techniques similar to those in the proof of Theorem 8.12 that ω = lim k→∞ω(k) (21.14) 21.4 The Achievable Information Rate Region 513 is also achievable, i.e., R is closed. The details are omitted here. Consider the set of all information rate tuples ω such that there exist auxiliary random variables {Ys, s ∈S} and {Ue, e ∈E} which satisfy the following conditions: H(Ys) ≥ωs, s ∈S (21.15) H(YS) = X s∈S H(Ys) (21.16) H(UOut(s)|Ys) = 0, s ∈S (21.17) H(UOut(i)|UIn(i)) = 0, i ∈V \ (S ∪T) (21.18) H(Ue) ≤Re, e ∈E (21.19) H(Yβ(t)|UIn(t)) = 0, t ∈T, (21.20) where YS denotes (Ys : s ∈S), UOut(s) denotes (Ue : e ∈Out(s)), etc. Here, Ys is an auxiliary random variable associated with the information source Xs, and Ue is an auxiliary random variable associated with the codeword sent on channel e. The interpretations of (21.15) to (21.20) are as follows. The inequality in (21.15) says that the entropy of Ys is greater than or equal to ωs, the rate of the information source Xs. The equality in (21.16) says that Ys, s ∈S are mutually independent, which corresponds to the assumption that the information sources Xs, s ∈S are mutually independent. The equality in (21.17) says that UOut(s) is a function of Ys for s ∈S, and the equality in (21.18) says that UOut(i) is a function of UIn(i) for i ∈V \ (S ∪T). These correspond to the requirement that the codewords sent out by a source node s are functions of the information source Xs, and that the codewords sent out by a non-source node i are functions of the codewords received by node i. The inequality in (21.19) says that the entropy of Ue is less than or equal to Re, the rate constraint for channel e. The equality in (21.20) says that Yβ(t) is a function of UIn(t) for t ∈T, which corresponds to the requirement that the information sources to be received at a sink node t can be decoded from the codewords received at node t. For a given multi-source network coding problem, let N = {Ys : s ∈S; Ue : e ∈E} (21.21) be a collection of discrete random variables whose joint distribution is unspec-ified, and let QN = 2N \ {φ} (21.22) with cardinality 2|N| −1. Let HN be the |QN |-dimensional Euclidean space with the coordinates labeled by hA, A ∈QN . A vector h = (hA : A ∈QN ) (21.23) in HN is said to be finitely entropic if there exists a joint distribution for all X ∈N, where |X| < ∞for all X ∈N and 514 21 Multi-Source Network Coding hA = H(X : X ∈A) (21.24) for all A ∈QN . 
Note that h ∈HN is entropic if it is finitely entropic, but not vice versa. We then define the region Γ ∗∗ N = {h ∈HN : h is finitely entropic}. (21.25) To simplify notation, for any nonempty A, A′ ∈QN , define hA|A′ = hAA′ −hA′, (21.26) where we have used juxtaposition to denote the union of two sets. In using the above notation, we do not distinguish elements and singletons of N, i.e., for a random variable Z ∈N, hZ is the same as h{Z}. We now define the following regions in HN : L1 = ( h ∈HN : hYS = X s∈S hYs ) (21.27) L2 = n h ∈HN : hUOut(s)|Ys = 0, s ∈S o (21.28) L3 = n h ∈HN : hUOut(i)|UIn(i) = 0, i ∈V \ (S ∪T) o (21.29) L4 = { h ∈HN : hUe ≤Re, e ∈E } (21.30) L5 = n h ∈HN : hYβ(t)|UIn(t) = 0, t ∈T o . (21.31) Evidently, (21.27) to (21.31) are the regions in HN corresponding to (21.16) to (21.20), respectively. We further denote T i∈α Li by Lα for α ⊂{1, 2, 3, 4, 5}. We now introduce a few notations. For a vector h ∈HN , let hYS = (hYs : s ∈S). For a subset B of HN , let 1. projYS(B) = {hYS : h ∈B} be the projection of the set B on the coordi-nates hYs, s ∈S; 2. Λ(B) = {h ∈HN : 0 ≤h ≤h′ for some h′ ∈B}; 3. con(B) be the convex hull of B; 4. B be the closure of B. Note that a vector h ≥0 is in Λ(B) if and only if it is inferior to some vector h′ in B. The following theorem gives a characterization of the achievable information rate region R in terms of the region Γ ∗∗ N . Definition 21.4. Define the region R′ = Λ  projYS  con(Γ ∗∗ N ∩L123) ∩L4 ∩L5  . (21.32) Theorem 21.5. R = R′. This theorem, which characterizes the achievable information rate region R, will be proved in Sections 21.6 and 21.7. In the next section, we first discuss how more explicit inner and outer bounds on R can be obtained. 21.5 Explicit Inner and Outer Bounds 515 21.5 Explicit Inner and Outer Bounds Theorem 21.5 gives a characterization of the achievable information rate re-gion R in terms of the region Γ ∗∗ N . However, so far there exists no complete characterization of Γ ∗∗ N . Therefore, the region R cannot be evaluated explic-itly. In the definition of R′ in (21.32), if Γ ∗∗ N is replaced by an inner bound (outer bound) on Γ ∗∗ N , then an inner bound (outer bound) on R is obtained. The results in and which are beyond the scope of our discussion here, provide explicit constructions of inner bounds on Γ ∗∗ N . We now discuss how an explicit outer bound on R can be obtained. To facilitate our discussion, we further define iA;A′ = hA −hA|A′ (21.33) and iA;A′|A′′ = hA|A′′ −hA|A′A′′ (21.34) for A, A′, A′′ ∈QN . Let ΓN be the set of h ∈HN such that h satisfies all the basic inequalities involving some or all of the random variables in N, i.e., for all A, A′, A′′ ∈QN , hA ≥0 (21.35) hA|A′ ≥0 (21.36) iA;A′ ≥0 (21.37) iA;A′|A′′ ≥0. (21.38) We know from Section 14.2 that Γ ∗∗ N ⊂ΓN . Then upon replacing Γ ∗∗ N by ΓN in (21.32), we obtain an outer bound on Rout. This outer bound, called the LP bound (LP for linear programming), is given by RLP = Λ  projYS  con(ΓN ∩L123) ∩L4 ∩L5  (21.39) = Λ projYS(ΓN ∩L12345)  , (21.40) where the last equality follows because both ΓN and L123 are closed and convex. Since RLP involves only a finite number of linear constraints, RLP can be evaluated explicitly. Using the technique in , it can be proved that RLP is tight for most special cases of multi-source network coding on an acyclic network for which the achievable information region is known. In addition to single-source net-work coding, these include the models described in . 
However, there exist multi-source network coding problems that require non-Shannon-type inequalities for the characterization of the achievable information rate region. As new non-Shannon-type inequalities are discovered from time to time, improved outer bounds on R can be obtained by incorporating these inequalities.

21.6 The Converse

In this section, we establish the converse part of Theorem 21.5, namely

R ⊂ Λ( proj_{Y_S}( cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ) = R′.   (21.41)

Let ε_k be a sequence such that 0 < ε_k < 1 for all k and ε_k monotonically decreases to 0 as k → ∞. Consider an achievable information rate tuple ω ∈ R. Then for all k, for all sufficiently large n, there exists an

(n, (η_e^{(k)} : e ∈ E), (τ_s^{(k)} : s ∈ S))   (21.42)

code satisfying

n^{-1} log η_e^{(k)} ≤ R_e + ε_k,  e ∈ E   (21.43)
τ_s^{(k)} ≥ ω_s − ε_k,  s ∈ S   (21.44)
∆_t^{(k)} ≤ ε_k,  t ∈ T,   (21.45)

where ∆_t^{(k)} denotes the decoding error probability at sink node t (cf. (21.10)).

We now fix k to be any positive integer and temporarily suppress all the superscripts involving k. For all e ∈ E, let U_e be the codeword sent on channel e and denote the alphabet of U_e by 𝒰_e. The following lemma, whose proof will be deferred to the end of the section, is a consequence of Fano's inequality.

Lemma 21.6. For all n and k, for all t ∈ T,

H(X_{β(t)} | U_{In(t)}) ≤ n φ_t(n, ε_k),   (21.46)

where
1. φ_t(n, ε_k) is bounded;
2. φ_t(n, ε_k) → 0 as n, k → ∞;
3. φ_t(n, ε_k) is monotonically decreasing in both n and k.

Since the information sources X_s, s ∈ S, are mutually independent,

H(X_S) = Σ_{s∈S} H(X_s).   (21.47)

For any s ∈ S, since U_{Out(s)} is a function of X_s,

H(U_{Out(s)} | X_s) = 0.   (21.48)

Similarly, for all i ∈ V \ (S ∪ T),

H(U_{Out(i)} | U_{In(i)}) = 0.   (21.49)

For all e ∈ E,

H(U_e) ≤ log |𝒰_e|   (21.50)
       = log(η_e + 1)   (21.51)
       ≤ n(R_e + 2ε_k),   (21.52)

where (21.52) follows from (21.43), assuming that n is sufficiently large. For all t ∈ T, from Lemma 21.6, we have

H(X_{β(t)} | U_{In(t)}) ≤ n φ_t(n, ε_k).   (21.53)

For all s ∈ S, from (21.44),

H(X_s) = log |𝒳_s| = log ⌈2^{n τ_s}⌉ ≥ n τ_s ≥ n(ω_s − ε_k).   (21.54)

By letting Y_s = X_s for all s ∈ S, we then obtain from (21.47) to (21.49) and (21.52) to (21.54) that

H(Y_S) = Σ_{s∈S} H(Y_s)   (21.55)
H(U_{Out(s)} | Y_s) = 0,  s ∈ S   (21.56)
H(U_{Out(i)} | U_{In(i)}) = 0,  i ∈ V \ (S ∪ T)   (21.57)
H(U_e) ≤ n(R_e + 2ε_k),  e ∈ E   (21.58)
H(Y_{β(t)} | U_{In(t)}) ≤ n φ_t(n, ε_k),  t ∈ T   (21.59)
H(Y_s) ≥ n(ω_s − ε_k),  s ∈ S.   (21.60)

Now define the following two regions in H_N:

L^n_{4,ε_k} = {h ∈ H_N : h_{U_e} ≤ n(R_e + 2ε_k), e ∈ E}   (21.61)
L^n_{5,ε_k} = {h ∈ H_N : h_{Y_{β(t)}|U_{In(t)}} ≤ n φ_t(n, ε_k), t ∈ T}.   (21.62)

Note that all the auxiliary random variables Y_s, s ∈ S, and U_e, e ∈ E, have finite alphabets, because

|𝒴_s| = |𝒳_s| = ⌈2^{n τ_s}⌉ < ∞   (21.63)

and

log |𝒰_e| ≤ n(R_e + 2ε_k) < ∞   (21.64)

(cf. (21.50) through (21.52)). Then we see from (21.55) to (21.60) that there exists

h^{(k)} ∈ Γ**_N   (21.65)

such that

h^{(k)} ∈ L_123 ∩ L^n_{4,ε_k} ∩ L^n_{5,ε_k}   (21.66)

and

h^{(k)}_{Y_s} ≥ n(ω_s − ε_k)   (21.67)

for all s ∈ S. From (21.65) and (21.66), we obtain

h^{(k)} ∈ Γ**_N ∩ L_123 ∩ L^n_{4,ε_k} ∩ L^n_{5,ε_k}.   (21.68)

Upon dividing by n, (21.67) becomes

n^{-1} h^{(k)}_{Y_s} ≥ ω_s − ε_k.   (21.69)

Since Γ**_N ∩ L_123 contains the origin of H_N, we see that

n^{-1} h^{(k)} ∈ cl(con(Γ**_N ∩ L_123)) ∩ L_{4,ε_k} ∩ L_{5,ε_k},   (21.70)

where

L_{4,ε_k} = {h ∈ H_N : h_{U_e} ≤ R_e + 2ε_k, e ∈ E}   (21.71)
L_{5,ε_k} = {h ∈ H_N : h_{Y_{β(t)}|U_{In(t)}} ≤ φ_t(n, ε_k), t ∈ T}.   (21.72)
Note that the region L_{5,ε_k} depends on n, though this is not indicated explicitly. For all n and k, define the set

B^{(n,k)} = {h ∈ cl(con(Γ**_N ∩ L_123)) ∩ L_{4,ε_k} ∩ L_{5,ε_k} : h_{Y_s} ≥ ω_s − ε_k for all s ∈ S}.   (21.73)

Lemma 21.7. For all n and k, the set B^{(n,k)} is compact.¹

¹ A subset of Euclidean space is compact if and only if it is closed and bounded.

Again, the proof of this lemma is deferred to the end of the section. Now from Lemma 21.6, φ_t(n, ε_k) is monotonically decreasing in both n and k, so for all n and k,

B^{(n+1,k)} ⊂ B^{(n,k)}   (21.74)

and

B^{(n,k+1)} ⊂ B^{(n,k)}.   (21.75)

For any fixed k and all sufficiently large n, we see from (21.69) and (21.70) that B^{(n,k)} is nonempty. Since B^{(n,k)} is compact by Lemma 21.7,

lim_{n→∞} B^{(n,k)} = ∩_{n=1}^{∞} B^{(n,k)}   (21.76)

is both compact and nonempty. By the same argument, we conclude that

lim_{k→∞} lim_{n→∞} B^{(n,k)} = ∩_{k=1}^{∞} ∩_{n=1}^{∞} B^{(n,k)}   (21.77)

is also nonempty. Now the set

lim_{k→∞} lim_{n→∞} B^{(n,k)}   (21.78)

is equal to

{h ∈ cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 : h_{Y_s} ≥ ω_s for all s ∈ S}.   (21.79)

Hence, there exists h′ satisfying

h′ ∈ cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5   (21.80)

and

h′_{Y_s} ≥ ω_s,  s ∈ S.   (21.81)

Let r = proj_{Y_S}(h′). Then we have

r ∈ proj_{Y_S}( cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 )   (21.82)

and

r ≥ ω   (21.83)

componentwise. By (21.82) and (21.83), we finally conclude that

ω ∈ Λ( proj_{Y_S}( cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ).   (21.84)

This completes the proof of the converse part of Theorem 21.5.

Proof of Lemma 21.6. For any t ∈ T, by Fano's inequality, we have

H(X_{β(t)} | U_{In(t)}) ≤ 1 + ∆_t log |𝒳_{β(t)}|   (21.85)
                       = 1 + ∆_t H(X_{β(t)})   (21.86)
                       ≤ 1 + ε_k H(X_{β(t)}),   (21.87)

where (21.86) follows because each X_s is distributed uniformly on 𝒳_s and X_s, s ∈ S, are mutually independent, and (21.87) follows from (21.45). Then

H(X_{β(t)}) = I(X_{β(t)}; U_{In(t)}) + H(X_{β(t)} | U_{In(t)})   (21.88)
  a) ≤ I(X_{β(t)}; U_{In(t)}) + 1 + ε_k H(X_{β(t)})   (21.89)
     ≤ H(U_{In(t)}) + 1 + ε_k H(X_{β(t)})   (21.90)
  b) ≤ ( Σ_{e∈In(t)} log η_e ) + 1 + ε_k H(X_{β(t)})   (21.91)
  c) ≤ ( Σ_{e∈In(t)} n(R_e + ε_k) ) + 1 + ε_k H(X_{β(t)}),   (21.92)

where
a) follows from (21.87);
b) follows from Theorem 2.43;
c) follows from (21.43).
Rearranging the terms in (21.92), we obtain

H(X_{β(t)}) ≤ (n / (1 − ε_k)) [ Σ_{e∈In(t)} (R_e + ε_k) + 1/n ].   (21.93)

Substituting (21.93) into (21.87), we have

H(X_{β(t)} | U_{In(t)}) ≤ n [ 1/n + (ε_k / (1 − ε_k)) ( Σ_{e∈In(t)} (R_e + ε_k) + 1/n ) ]   (21.94)
                        = n φ_t(n, ε_k),   (21.95)

where

φ_t(n, ε_k) = 1/n + (ε_k / (1 − ε_k)) ( Σ_{e∈In(t)} (R_e + ε_k) + 1/n ).   (21.96)

Invoking the assumption that 0 < ε_k < 1 for all k and that ε_k monotonically decreases to 0 as k → ∞, it is evident that
1. φ_t(n, ε_k) is bounded for all n and k;
2. φ_t(n, ε_k) → 0 as n, k → ∞;
3. φ_t(n, ε_k) is monotonically decreasing in both n and k.
The lemma is proved. ⊓⊔
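The explicit form (21.96) makes the limiting behavior of φ_t(n, ε_k) easy to check numerically. A small sketch with hypothetical rate constraints on the channels entering t (the values are made up for illustration):

```python
def phi_t(n, eps_k, rates_In_t):
    """phi_t(n, eps_k) from (21.96), for channels e in In(t) with rates R_e."""
    return 1.0 / n + (eps_k / (1.0 - eps_k)) * (
        sum(R + eps_k for R in rates_In_t) + 1.0 / n)

rates = [1.0, 2.0]   # hypothetical R_e for the two channels entering t
for eps_k in (0.5, 0.1, 0.01, 0.001):
    print(eps_k, [round(phi_t(n, eps_k, rates), 5) for n in (10, 100, 10000)])
# Each row decreases along n, the rows decrease along k, and the values
# approach 0 only when n and k grow together, as stated in Lemma 21.6.
```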
Proof of Lemma 21.7. We need to show that the set B^{(n,k)} is both closed and bounded. The closedness of B^{(n,k)} is immediate from its definition. To establish the boundedness of B^{(n,k)}, we need to show that for any h ∈ B^{(n,k)}, all the components of h are bounded.

Consider any h ∈ B^{(n,k)}. Since

B^{(n,k)} ⊂ L_{4,ε_k},   (21.97)

we see from (21.71) that h_{U_e} is bounded for all e ∈ E. Since

B^{(n,k)} ⊂ L_{5,ε_k},   (21.98)

we see from (21.72) that for every t ∈ T,

h_{Y_{β(t)}} ≤ h_{Y_{β(t)} U_{In(t)}}   (21.99)
            = h_{Y_{β(t)}|U_{In(t)}} + h_{U_{In(t)}}   (21.100)
            ≤ φ_t(n, ε_k) + h_{U_{In(t)}}   (21.101)
            ≤ φ_t(n, ε_k) + Σ_{e∈In(t)} h_{U_e},   (21.102)

where (21.101) follows from (21.72), and φ_t(n, ε_k) is bounded by Lemma 21.6. This shows that h_{Y_{β(t)}} is bounded for all t ∈ T. In our model, for every s ∈ S, there exists at least one t ∈ T such that s ∈ β(t). Then the boundedness of h_{Y_{β(t)}} for all t ∈ T implies the boundedness of h_{Y_s} for all s ∈ S. Finally, the boundedness of all the other components of h is established by invoking the independence bound for entropy. The lemma is proved. ⊓⊔

21.7 Achievability

In this section, we establish the direct part of Theorem 21.5, namely

R′ = Λ( proj_{Y_S}( cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ) ⊂ R.   (21.103)

Before we proceed, we first prove an alternative form of R′ that will be used in constructing the random code. For a subset B of H_N, let

D(B) = {αh : h ∈ B and 0 ≤ α ≤ 1}.   (21.104)

Define the two subsets

A_1 = cl(con(Γ**_N ∩ L_123))   (21.105)

and

A_2 = cl(D(Γ**_N ∩ L_123))   (21.106)

of H_N.

Lemma 21.8. A_1 = A_2.

Proof. Since the origin of H_N is in Γ**_N, it is also in Γ**_N ∩ L_123 because L_123 is a linear subspace of H_N. Upon observing that for 0 ≤ α ≤ 1,

αh = (1 − α)0 + αh   (21.107)

is a convex combination of 0 and h, we obtain

D(Γ**_N ∩ L_123) ⊂ con(Γ**_N ∩ L_123).   (21.108)

Taking closures, it follows that A_2 ⊂ A_1. To prove that A_1 ⊂ A_2, it suffices to show that A_2 is convex, because
1. Γ**_N ∩ L_123 ⊂ A_2, where A_2 is closed;
2. A_1 is the smallest closed convex set containing Γ**_N ∩ L_123.

Toward this end, consider any h_1, h_2 ∈ A_2 and any 0 ≤ λ ≤ 1. We will show that

h = λ h_1 + (1 − λ) h_2 ∈ A_2.   (21.109)

Here, we can assume without loss of generality that h_1, h_2 ≠ 0, because otherwise (21.109) holds by the definition of A_2. Since h_1, h_2 ∈ A_2, there exist sequences h_1^k, h_2^k ∈ D(Γ**_N ∩ L_123) such that h_1^k → h_1 and h_2^k → h_2. Again, we can assume without loss of generality that h_1^k, h_2^k ≠ 0 for all k because h_1, h_2 ≠ 0. Since h_1^k, h_2^k ∈ D(Γ**_N ∩ L_123), we can write

h_1^k = α_1^k ĥ_1^k   (21.110)

and

h_2^k = α_2^k ĥ_2^k,   (21.111)

where ĥ_1^k, ĥ_2^k ∈ Γ**_N ∩ L_123 and 0 < α_1^k, α_2^k ≤ 1. Note that α_1^k and α_2^k are strictly positive because h_1^k, h_2^k ≠ 0. Now let n_1^k and n_2^k be integer sequences such that n_1^k, n_2^k → ∞ and

(n_1^k α_2^k) / (n_2^k α_1^k) → λ / (1 − λ),   (21.112)

and let

ĥ^k = n_1^k ĥ_1^k + n_2^k ĥ_2^k.   (21.113)

It can be seen from Lemma 15.3 and Corollary 15.4 that

ĥ^k ∈ Γ**_N.   (21.114)

Furthermore, since ĥ_1^k, ĥ_2^k ∈ L_123 and L_123 is a linear subspace, ĥ^k ∈ L_123. Therefore,

ĥ^k ∈ Γ**_N ∩ L_123.   (21.115)

Let

h^k = ( α_1^k α_2^k / (n_1^k α_2^k + n_2^k α_1^k) ) ĥ^k.   (21.116)

Since α_1^k, α_2^k ≤ 1 and n_1^k, n_2^k → ∞, for sufficiently large k,

α_1^k α_2^k / (n_1^k α_2^k + n_2^k α_1^k) ≤ 1,   (21.117)

and therefore

h^k ∈ D(Γ**_N ∩ L_123) ⊂ cl(D(Γ**_N ∩ L_123)) = A_2.   (21.118)

Substituting (21.113), (21.110), and (21.111) into (21.116), we obtain

h^k = ( n_1^k α_2^k / (n_1^k α_2^k + n_2^k α_1^k) ) h_1^k + ( n_2^k α_1^k / (n_1^k α_2^k + n_2^k α_1^k) ) h_2^k.   (21.119)

It can readily be seen from (21.112) that

n_1^k α_2^k / (n_1^k α_2^k + n_2^k α_1^k) → λ   (21.120)

and

n_2^k α_1^k / (n_1^k α_2^k + n_2^k α_1^k) → 1 − λ.   (21.121)

Since h_1^k → h_1 and h_2^k → h_2, we see from (21.119) to (21.121) and (21.109) that h^k → h. Finally, since h^k ∈ A_2 and A_2 is closed, we conclude that h ∈ A_2. Therefore, A_2 is convex, and hence A_1 ⊂ A_2. The lemma is proved. ⊓⊔
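The integer-sequence construction in the proof of Lemma 21.8 can be checked numerically. The sketch below (with arbitrary made-up values for λ, α_1^k, α_2^k) verifies that the coefficient of h_1 in (21.119) indeed converges to λ:

```python
lam, a1, a2 = 0.3, 0.8, 0.5   # hypothetical lambda and scaling factors

# Pick n1, n2 -> infinity with (n1*a2)/(n2*a1) -> lam/(1-lam), e.g. by
# taking n1 ~ m*lam*a1 and n2 ~ m*(1-lam)*a2 for a growing integer m.
for m in (10, 100, 1000, 100000):
    n1 = max(1, round(m * lam * a1))
    n2 = max(1, round(m * (1 - lam) * a2))
    w1 = n1 * a2 / (n1 * a2 + n2 * a1)   # coefficient of h1 in (21.119)
    print(m, n1, n2, round(w1, 6))        # w1 -> lam = 0.3
```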
By virtue of this lemma, we can write

R′ = Λ( proj_{Y_S}( cl(D(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ),   (21.122)

and we will establish R′ ⊂ R by proving that

Λ( proj_{Y_S}( cl(D(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ) ⊂ R.   (21.123)

By the remark following Definition 21.3, we only need to show the achievability of the region

proj_{Y_S}( cl(D(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ).   (21.124)

Consider any ω in this region. Then there exists

h ∈ cl(D(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5   (21.125)

such that

ω = proj_{Y_S}(h).   (21.126)

Since

h ∈ cl(D(Γ**_N ∩ L_123)),   (21.127)

there exists a sequence

h^{(k)} ∈ D(Γ**_N ∩ L_123)   (21.128)

such that

h = lim_{k→∞} h^{(k)}.   (21.129)

Let

ω^{(k)} = proj_{Y_S}(h^{(k)}).   (21.130)

It then follows from (21.129) that

lim_{k→∞} ω^{(k)} = ω.   (21.131)

By (21.128),

h^{(k)} = α^{(k)} ĥ^{(k)},   (21.132)

where ĥ^{(k)} ∈ Γ**_N ∩ L_123 and

0 ≤ α^{(k)} ≤ 1.   (21.133)

Note that ĥ^{(k)} is an entropy function because it is in Γ**_N, but h^{(k)} and h are not necessarily entropy functions. Since ĥ^{(k)} ∈ Γ**_N ∩ L_123, there exists a collection of random variables with finite alphabets

N^{(k)} = {(Y_s^{(k)} : s ∈ S), (U_e^{(k)} : e ∈ E)}   (21.134)

such that

α^{(k)} H(Y_s^{(k)}) = ω_s^{(k)},  s ∈ S   (21.135)
H(Y_S^{(k)}) = Σ_{s∈S} H(Y_s^{(k)})   (21.136)
H(U_{Out(s)}^{(k)} | Y_s^{(k)}) = 0,  s ∈ S   (21.137)
H(U_{Out(i)}^{(k)} | U_{In(i)}^{(k)}) = 0,  i ∈ V \ (S ∪ T),   (21.138)

where (21.135) is implied by (21.130). Furthermore, since h ∈ L_4 ∩ L_5, it follows from (21.129) and (21.132) that

α^{(k)} H(U_e^{(k)}) ≤ R_e + μ^{(k)},  e ∈ E   (21.139)
α^{(k)} H(Y_{β(t)}^{(k)} | U_{In(t)}^{(k)}) ≤ γ^{(k)},  t ∈ T,   (21.140)

where μ^{(k)}, γ^{(k)} → 0 as k → ∞. In the rest of the section, we will prove the achievability of ω^{(k)} for all sufficiently large k. Then the closedness of R implies the achievability of ω by the remark following Definition 21.3.

21.7.1 Random Code Construction

Fix k and ε > 0, and let δ be a small positive quantity to be specified later. We first construct a random

(n, (η_e^{(k)} : e ∈ E), (τ_s^{(k)} : s ∈ S))   (21.141)

code with

η_e^{(k)} ≤ 2^{n(α^{(k)} H(U_e^{(k)}) + ψ_e^{(k)})}   (21.142)

for all e ∈ E and

ω_s^{(k)} − ε/2 ≤ τ_s^{(k)} ≤ ω_s^{(k)} − ε/3,   (21.143)

where ψ_e^{(k)} > 0 and ψ_e^{(k)} → 0 as δ → 0, by the steps below. For the sake of simplicity, we temporarily suppress all the superscripts involving k. In light of the heavy notation, we list in Table 21.1 the symbols involved in the description of the random code construction.

Table 21.1. The list of symbols involved in the random code construction.
  n : the block length of the random code
  n̂ : the length of a sequence in the typical sets used in the code construction
  X_s : the information source generated at source node s
  𝒳_s : the alphabet of information source X_s, equal to {1, 2, ..., θ_s}
  Y_s : the auxiliary random variable associated with X_s
  C_s : the codebook for information source X_s, consisting of the codewords Y_s(1), Y_s(2), ..., Y_s(θ_s) ∈ 𝒴_s^{n̂}
  Y_s(0) : an arbitrary constant sequence in 𝒴_s^{n̂}
  θ_s : the common size of the alphabet 𝒳_s and the codebook C_s
  T^{n̂}_{[U_e]δ} : equal to {U_e(1), U_e(2), ..., U_e(ζ_e)}
  U_e(0) : an arbitrary constant sequence in 𝒰_e^{n̂}
  k_e : the local encoding function for channel e
  û_e : the function defined by U_e = û_e(Y_s) for e ∈ Out(s) and U_e = û_e(U_{In(i)}) for e ∈ Out(i)
  C_e : the index sent on channel e, i.e., the value taken by the local encoding function k_e
  g_t : the decoding function at sink node t

1. Let

n̂ = ⌈nα⌉.   (21.144)

Here n is the block length of the random code we will construct, while n̂ is the length of the sequences in the typical sets that we will use for constructing the random code. For each source s ∈ S, let

θ_s = ⌈2^{n τ_s}⌉   (21.145)

and construct a codebook C_s by generating θ_s codewords in 𝒴_s^{n̂} randomly and independently according to p^{n̂}(y_s). Denote these sequences by Y_s(1), Y_s(2), ..., Y_s(θ_s), and let Y_s(0) be an arbitrary constant sequence in 𝒴_s^{n̂}.

2. Reveal the codebook C_s, s ∈ S, to all the nodes in the network.

3. At a source node s ∈ S, the information source X_s is generated according to the uniform distribution on

𝒳_s = {1, 2, ..., θ_s}.   (21.146)

4. Let T^{n̂}_{[U_e]δ} denote the set of strongly typical sequences² with respect to the distribution p(u_e), and let

ζ_e = |T^{n̂}_{[U_e]δ}|.   (21.147)

² Strong typicality applies because all the random variables in N^{(k)} have finite alphabets.
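A minimal sketch of Step 1 of the construction, with toy parameters and a made-up marginal for Y_s (in the actual construction the distribution comes from the collection N^{(k)}):

```python
import math
import random

random.seed(0)

def generate_codebook(theta_s, n_hat, p):
    """C_s: theta_s codewords of length n_hat, symbols drawn i.i.d. ~ p."""
    symbols, weights = zip(*p.items())
    return [tuple(random.choices(symbols, weights, k=n_hat))
            for _ in range(theta_s)]

n, alpha, tau_s = 20, 0.8, 0.4
n_hat = math.ceil(n * alpha)             # n_hat = ceil(n * alpha), cf. (21.144)
theta_s = math.ceil(2 ** (n * tau_s))    # theta_s = ceil(2^{n tau_s}), cf. (21.145)
p_Ys = {0: 0.5, 1: 0.5}                  # hypothetical marginal of Y_s
C_s = generate_codebook(theta_s, n_hat, p_Ys)
print(len(C_s), "codewords of length", n_hat)   # 256 codewords of length 16
```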
By the strong AEP and (21.144),

ζ_e ≤ 2^{n̂(H(U_e) + ψ_e/(2α))} ≤ 2^{n(α H(U_e) + ψ_e/2)},   (21.148)

where ψ_e → 0 as δ → 0. For all channels e ∈ E, choose an η_e satisfying

2^{n(α H(U_e) + ψ_e/2)} ≤ η_e ≤ 2^{n(α H(U_e) + ψ_e)}.   (21.149)

Denote the sequences in T^{n̂}_{[U_e]δ} by U_e(1), U_e(2), ..., U_e(ζ_e), and let U_e(0) be an arbitrary constant sequence in 𝒰_e^{n̂}.

a) Let the outcome of X_s be x_s for a source node s. For a channel e ∈ Out(s), define the local encoding function

k_e : 𝒳_s → {0, 1, ..., η_e}   (21.150)

as follows. By (21.137), for each channel e ∈ Out(s), there exists a function û_e such that

U_e = û_e(Y_s),   (21.151)

i.e.,

Pr{U_e = û_e(y) | Y_s = y} = 1   (21.152)

for all y ∈ 𝒴_s. By the preservation property of strong typicality (Theorem 6.8), if

Y_s(x_s) ∈ T^{n̂}_{[Y_s]δ},   (21.153)

then

û_e(Y_s(x_s)) ∈ T^{n̂}_{[U_e]δ},   (21.154)

where in û_e(Y_s(x_s)), the function û_e is applied to Y_s(x_s) componentwise. If so, let k_e(x_s) be the index of û_e(Y_s(x_s)) in T^{n̂}_{[U_e]δ}, i.e.,

U_e(k_e(x_s)) = û_e(Y_s(x_s)).   (21.155)

Otherwise, let k_e(x_s) be 0. Note that k_e is well-defined because

ζ_e ≤ η_e   (21.156)

by (21.148) and (21.149).

b) For a channel e ∈ Out(i), where i ∈ V \ (S ∪ T), define the local encoding function

k_e : Π_{d∈In(i)} {0, 1, ..., η_d} → {0, 1, ..., η_e}   (21.157)

as follows. By (21.138), there exists a function û_e such that

U_e = û_e(U_{In(i)}).   (21.158)

Let C_e be the index sent on channel e, i.e., the value taken by the local encoding function k_e. With a slight abuse of notation, we write

U_{E′}(C_{E′}) = (U_d(C_d) : d ∈ E′)   (21.159)

for E′ ⊂ E, and

Y_{S′}(x_{S′}) = (Y_s(x_s) : s ∈ S′)   (21.160)

for S′ ⊂ S. By the preservation property of strong typicality, if

U_{In(i)}(C_{In(i)}) ∈ T^{n̂}_{[U_{In(i)}]δ},   (21.161)

then

û_e(U_{In(i)}(C_{In(i)})) ∈ T^{n̂}_{[U_e]δ}.   (21.162)

If so, let k_e(C_{In(i)}) be the index of û_e(U_{In(i)}(C_{In(i)})) in T^{n̂}_{[U_e]δ}, i.e.,

U_e(k_e(C_{In(i)})) = û_e(U_{In(i)}(C_{In(i)})).   (21.163)

Otherwise, let k_e(C_{In(i)}) be 0. Again, k_e is well-defined because (21.156) holds.

5. For a sink node t ∈ T, define the decoding function

g_t : Π_{d∈In(t)} {0, 1, ..., η_d} → 𝒳_{β(t)}   (21.164)

as follows. If the received index C_d on channel d is nonzero for all d ∈ In(t) and there exists a unique tuple

x_{β(t)} ∈ 𝒳_{β(t)}   (21.165)

such that

(Y_{β(t)}(x_{β(t)}), U_{In(t)}(C_{In(t)})) ∈ T^{n̂}_{[U_{In(t)} Y_{β(t)}]δ},   (21.166)

then let g_t(C_{In(t)}) be this x_{β(t)}. Otherwise, declare a decoding error.
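A sketch of the local encoding function k_e at a source node (Step 4a): apply a hypothetical function û_e componentwise and look the result up in an enumeration of the typical set, returning the reserved index 0 when the input falls outside it. Both û_e and the stand-in "typical set" below are illustrative assumptions:

```python
def make_local_encoder(u_hat, typical_sequences):
    """k_e for a channel e in Out(s), following Step 4a of the construction."""
    index_of = {seq: i + 1 for i, seq in enumerate(typical_sequences)}

    def k_e(y_seq):
        u_seq = tuple(u_hat(y) for y in y_seq)   # u_hat applied componentwise
        return index_of.get(u_seq, 0)            # 0 flags an atypical input

    return k_e

# Toy usage: u_hat is the identity and the "typical set" is a stand-in list.
typical = [(0, 0, 1), (0, 1, 1), (1, 1, 0)]
k_e = make_local_encoder(lambda y: y, typical)
print(k_e((0, 1, 1)))   # 2: the index of (0, 1, 1) in the enumeration
print(k_e((1, 1, 1)))   # 0: not in the typical set
```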
21.7.2 Performance Analysis

Let us reinstate all the superscripts involving k that were suppressed when we described the construction of the random code. Our task is to show that for any sufficiently large k and any ε > 0, the random code we have constructed satisfies the following when n is sufficiently large:

n^{-1} log η_e^{(k)} ≤ R_e + ε,  e ∈ E   (21.167)
τ_s^{(k)} ≥ ω_s^{(k)} − ε,  s ∈ S   (21.168)
∆_t^{(k)} ≤ ε,  t ∈ T.   (21.169)

For e ∈ E, consider

n^{-1} log η_e^{(k)} ≤ α^{(k)} H(U_e^{(k)}) + ψ_e^{(k)}   (21.170)
                    ≤ R_e + μ^{(k)} + ψ_e^{(k)},   (21.171)

where the first inequality follows from the upper bound in (21.149) and the second inequality follows from (21.139). Since μ^{(k)} → 0 as k → ∞, we can let k be sufficiently large so that

μ^{(k)} < ε.   (21.172)

With k fixed, since ψ_e^{(k)} → 0 as δ → 0, by letting δ be sufficiently small, we have

μ^{(k)} + ψ_e^{(k)} ≤ ε,   (21.173)

and (21.167) follows from (21.171). For s ∈ S, from the lower bound in (21.143), we have

τ_s^{(k)} ≥ ω_s^{(k)} − ε,   (21.174)

proving (21.168).

The proof of (21.169), which is considerably more involved, will be organized into a few lemmas. For the sake of presentation, the proofs of these lemmas will be deferred to the end of the section.

For i ∈ S and i ∈ V \ (S ∪ T), the function û_e, where e ∈ Out(i), has been defined in (21.151) and (21.158), respectively. Since the network is acyclic, we see by induction that all the auxiliary random variables U_e, e ∈ E, are functions of the auxiliary random variables Y_S. Thus there exists a function ũ_e such that

U_e = ũ_e(Y_S).   (21.175)

Equating the above with (21.151) and (21.158), we obtain

û_e(Y_s) = ũ_e(Y_S)   (21.176)

and

û_e(U_{In(i)}) = ũ_e(Y_S),   (21.177)

respectively. These relations will be useful subsequently.

In the rest of the section, we will analyze the probabilities of decoding error for the random code we have constructed for a fixed k, namely ∆_t^{(k)} for t ∈ T. With a slight abuse of notation, we write

ũ_{E′}(·) = (ũ_d(·) : d ∈ E′)   (21.178)

for E′ ⊂ E. Again, we temporarily suppress all the superscripts involving k.

Lemma 21.9. Let

X_S = x_S,   (21.179)
Y_S(x_S) = y_S ∈ T^{n̂}_{[Y_S]δ},   (21.180)

and for e ∈ E, let C_e take the value c_e, which by the code construction is a function of x_S and y_S. Then

U_{In(t)}(c_{In(t)}) = ũ_{In(t)}(y_S)   (21.181)

and

(y_S, U_{In(t)}(c_{In(t)})) ∈ T^{n̂}_{[Y_S U_{In(t)}]δ}   (21.182)

for all t ∈ T.

Let

Err_t = {g_t(C_{In(t)}) ≠ X_{β(t)}} = {g̃_t(X_S) ≠ X_{β(t)}}   (21.183)

be the event of a decoding error at sink node t, i.e.,

Pr{Err_t} = ∆_t   (21.184)

(cf. (21.10)). In the following, we will obtain an upper bound on Pr{Err_t}. Consider

Pr{Err_t} = Σ_{x_{β(t)}∈𝒳_{β(t)}} Pr{Err_t | X_{β(t)} = x_{β(t)}} Pr{X_{β(t)} = x_{β(t)}},   (21.185)

and for S′ ⊂ S, let

1_{S′} = (1, 1, ..., 1),  a tuple of |S′| ones.   (21.186)

Since the probabilities Pr{Err_t | X_{β(t)} = x_{β(t)}} are identical for all x_{β(t)} by symmetry in the code construction, from (21.185), we have

Pr{Err_t} = Pr{Err_t | X_{β(t)} = 1_{β(t)}} Σ_{x_{β(t)}∈𝒳_{β(t)}} Pr{X_{β(t)} = x_{β(t)}}   (21.187)
          = Pr{Err_t | X_{β(t)} = 1_{β(t)}}.   (21.188)

In other words, we can assume without loss of generality that X_{β(t)} = 1_{β(t)}. To facilitate our discussion, define the event

E_S = {Y_S(1_S) ∈ T^{n̂}_{[Y_S]δ}}.   (21.189)

Following (21.188), we have

Pr{Err_t} = Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S} Pr{E_S | X_{β(t)} = 1_{β(t)}}
          + Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S^c} Pr{E_S^c | X_{β(t)} = 1_{β(t)}}   (21.190)
          = Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S} Pr{E_S}
          + Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S^c} Pr{E_S^c}   (21.191)
          ≤ Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S} · 1 + 1 · Pr{E_S^c}   (21.192)
          ≤ Pr{Err_t | X_{β(t)} = 1_{β(t)}, E_S} + λ,   (21.193)

where the last inequality follows from the strong AEP and λ → 0 as δ → 0. Upon defining the event

E′_S = {X_{β(t)} = 1_{β(t)}} ∩ E_S,   (21.194)

we have

Pr{Err_t} ≤ Pr{Err_t | E′_S} + λ.   (21.195)

We now further analyze the conditional probability in (21.195). For x_{β(t)} ∈ 𝒳_{β(t)}, define the event

E_t(x_{β(t)}) = {(Y_{β(t)}(x_{β(t)}), U_{In(t)}(C_{In(t)})) ∈ T^{n̂}_{[Y_{β(t)} U_{In(t)}]δ}}.   (21.196)

Since X_{β(t)} = 1_{β(t)}, decoding at sink node t is correct if the received indices C_{In(t)} are decoded to 1_{β(t)}. This is the case if and only if E_t(1_{β(t)}) occurs but E_t(x_{β(t)}) does not occur for any x_{β(t)} ≠ 1_{β(t)}. It follows that

Err_t^c = E_t(1_{β(t)}) ∩ ( ∩_{x_{β(t)}≠1_{β(t)}} E_t(x_{β(t)})^c ),   (21.197)

or

Err_t = E_t(1_{β(t)})^c ∪ ( ∪_{x_{β(t)}≠1_{β(t)}} E_t(x_{β(t)}) ),   (21.198)

which implies

Pr{Err_t | E′_S} = Pr{ E_t(1_{β(t)})^c ∪ ( ∪_{x_{β(t)}≠1_{β(t)}} E_t(x_{β(t)}) ) | E′_S }.   (21.199)

By the union bound, we have

Pr{Err_t | E′_S} ≤ Pr{E_t(1_{β(t)})^c | E′_S} + Σ_{x_{β(t)}≠1_{β(t)}} Pr{E_t(x_{β(t)}) | E′_S}   (21.200)
               = Σ_{x_{β(t)}≠1_{β(t)}} Pr{E_t(x_{β(t)}) | E′_S},   (21.201)

where the last step follows because Pr{E_t(1_{β(t)})^c | E′_S} in (21.200) vanishes by Lemma 21.9.

The next two lemmas will be instrumental in obtaining an upper bound on Pr{E_t(x_{β(t)}) | E′_S} in (21.201). For any proper subset Ψ of β(t), let

Λ_Ψ = {x_{β(t)} ≠ 1_{β(t)} : x_s = 1 if and only if s ∈ Ψ}.   (21.202)

Note that {Λ_Ψ} is a partition of the set 𝒳_{β(t)} \ {1_{β(t)}}.
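The partition {Λ_Ψ} can be enumerated directly for a toy β(t) (the alphabet sizes below are chosen arbitrarily for illustration); the enumeration also makes visible that |Λ_Ψ| = Π_{s∈β(t)\Ψ}(|𝒳_s| − 1):

```python
import itertools

beta_t = ("s1", "s2")
sizes = {"s1": 3, "s2": 2}        # hypothetical theta_s values

tuples = list(itertools.product(*(range(1, sizes[s] + 1) for s in beta_t)))
all_one = tuple(1 for _ in beta_t)

for r in range(len(beta_t)):      # proper subsets Psi of beta(t)
    for Psi in itertools.combinations(beta_t, r):
        members = [x for x in tuples if x != all_one and all(
            (x[i] == 1) == (s in Psi) for i, s in enumerate(beta_t))]
        print(set(Psi) or "{}", "->", members)
# The three classes partition all tuples except (1, 1), as claimed.
```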
For x_{β(t)} ∈ Λ_Ψ, x_{β(t)} and 1_{β(t)} are identical exactly in the components indexed by Ψ.

Lemma 21.10. For x_{β(t)} ∈ Λ_Ψ, where Ψ is a proper subset of β(t),

Pr{E_t(x_{β(t)}) | E′_S} ≤ 2^{−nα(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − φ_t)},   (21.203)

where φ_t → 0 as n → ∞ and δ → 0.

Lemma 21.11. For all sufficiently large n,

|Λ_Ψ| ≤ 2^{n(α H(Y_{β(t)\Ψ}) − ε/4)}.   (21.204)

We now reinstate all the superscripts involving k that have been suppressed. By Lemma 21.10, Lemma 21.11, and (21.201),

Pr{Err_t | E′_S} ≤ Σ_{x_{β(t)}≠1_{β(t)}} Pr{E_t(x_{β(t)}) | E′_S}   (21.205)
  = Σ_Ψ Σ_{x_{β(t)}∈Λ_Ψ} Pr{E_t(x_{β(t)}) | E′_S}   (21.206)
  ≤ 2^{|E|} 2^{−nα^{(k)}(H(Y^{(k)}_{β(t)\Ψ}) − H(Y^{(k)}_{β(t)}|U^{(k)}_{In(t)}) − φ_t)} · 2^{n(α^{(k)} H(Y^{(k)}_{β(t)\Ψ}) − ε/4)}   (21.207)
  = 2^{|E|} 2^{−n(ε/4 − α^{(k)} H(Y^{(k)}_{β(t)}|U^{(k)}_{In(t)}) − α^{(k)} φ_t)}   (21.208)
  ≤ 2^{|E|} 2^{−n(ε/4 − γ^{(k)} − α^{(k)} φ_t)}   (21.209)
  ≤ 2^{|E|} 2^{−n(ε/4 − γ^{(k)} − φ_t)},   (21.210)

where (21.209) follows from (21.140) and (21.210) follows from (21.133). Then from (21.184), (21.195), and (21.210), we have

∆_t^{(k)} ≤ 2^{|E|} 2^{−n(ε/4 − γ^{(k)} − φ_t)} + λ.   (21.211)

We now choose k, n, and δ to make the upper bound above smaller than any prescribed ε > 0. Since γ^{(k)} → 0 as k → ∞, we can let k be sufficiently large so that

γ^{(k)} < ε/4.   (21.212)

Then with k fixed, since φ_t → 0 as n → ∞ and δ → 0, and λ → 0 as δ → 0, by letting n be sufficiently large and δ be sufficiently small, we have
1. γ^{(k)} + φ_t < ε/4, so that 2^{|E|} 2^{−n(ε/4 − γ^{(k)} − φ_t)} → 0 as n → ∞;
2. ∆_t^{(k)} ≤ ε.

This completes the proof of (21.169). Hence, we have proved the achievability of ω^{(k)} for all sufficiently large k. Then the closedness of R implies the achievability of ω = lim_{k→∞} ω^{(k)}, where ω ∈ R′. The achievability of R′ is established.
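A numeric illustration of the final bound (21.211), under made-up values of |E|, ε, γ^{(k)}, φ_t, and λ: once γ^{(k)} + φ_t < ε/4, the first term decays exponentially in n and only the λ term survives.

```python
# Hypothetical numbers for the bound (21.211):
# Delta_t <= 2^{|E|} * 2^{-n(eps/4 - gamma_k - phi_t)} + lam.
E_size, eps, gamma_k, phi_t, lam = 6, 0.2, 0.01, 0.02, 1e-3

exponent = eps / 4 - gamma_k - phi_t        # 0.02 > 0, so the bound decays
for n in (100, 500, 1000, 5000):
    bound = 2 ** E_size * 2 ** (-n * exponent) + lam
    print(n, f"{bound:.6g}")
# n = 100 still gives a useless bound (about 16), but by n = 5000 the
# exponential term is negligible and only lam = 0.001 remains.
```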
Proof of Lemma 21.9. We first prove that given

X_S = x_S   (21.213)

and

Y_S(x_S) = y_S ∈ T^{n̂}_{[Y_S]δ},   (21.214)

the following hold for all non-source nodes i (i.e., i ∈ V \ S):
i) U_{In(i)}(c_{In(i)}) ∈ T^{n̂}_{[U_{In(i)}]δ};
ii) k_e(c_{In(i)}) ≠ 0, e ∈ Out(i);
iii) U_e(k_e(c_{In(i)})) = ũ_e(y_S), e ∈ Out(i).
Note that for i ∈ T, Out(i) = ∅ in ii) and iii).

By the consistency of strong typicality (Theorem 6.7),

y_s ∈ T^{n̂}_{[Y_s]δ}   (21.215)

for all s ∈ S. Then according to the construction of the code, for all e ∈ Out(s),

k_e(x_s) ≠ 0   (21.216)

and

U_e(k_e(x_s)) = û_e(y_s).   (21.217)

We now prove i) to iii) by induction on the non-source nodes according to any given coding order. Let i_1 be the first non-source node to encode. Since

In(i_1) ⊂ S,   (21.218)

for all d ∈ In(i_1), we have d ∈ Out(s) for some s ∈ S. Then

U_d(c_d) = U_d(k_d(x_s))   (21.219)
         = û_d(y_s)   (21.220)
         = ũ_d(y_S),   (21.221)

where (21.220) and (21.221) follow from (21.217) and (21.176), respectively. Thus

U_{In(i_1)}(c_{In(i_1)}) = ũ_{In(i_1)}(y_S).   (21.222)

Since U_{In(i_1)} is a function of Y_S, in light of (21.180),

U_{In(i_1)}(c_{In(i_1)}) ∈ T^{n̂}_{[U_{In(i_1)}]δ}   (21.223)

by the preservation property of strong typicality, proving i). According to the code construction, this also implies ii). Moreover,

U_e(k_e(c_{In(i_1)})) = û_e(U_{In(i_1)}(c_{In(i_1)}))   (21.224)
                     = ũ_e(y_S),   (21.225)

where the last equality is obtained by replacing in (21.177) the random variable U_d by the sequence U_d(c_d) and the random variable Y_s by the sequence Y_S(x_S) = y_S, proving iii).

We now consider any non-source node i in the network. Assume that i) to iii) are true for all the nodes upstream of node i. For d ∈ In(i), if d ∈ Out(s) for some s ∈ S, we have already proved in (21.221) that

U_d(c_d) = ũ_d(y_S).   (21.226)

Otherwise, d ∈ Out(i′), where node i′ is upstream of node i. Then

U_d(c_d) = U_d(k_d(c_{In(i′)}))   (21.227)
         = û_d(U_{In(i′)}(c_{In(i′)}))   (21.228)
         = ũ_d(y_S).   (21.229)

In the above, (21.228) follows from ii) for node i′ by the induction hypothesis and the code construction, and (21.229) follows from (21.177). Therefore, (21.226) is valid for all d ∈ In(i). Hence,

U_{In(i)}(c_{In(i)}) = ũ_{In(i)}(y_S),   (21.230)

which is exactly the same as (21.222) except that i_1 is replaced by i. Then by means of the same argument, we conclude that i), ii), and iii) hold for node i.

As (21.230) holds for any non-source node i, it holds for any sink node t. This proves (21.181) for all t ∈ T. Furthermore, since U_{In(t)} is a function of Y_S, (Y_S, U_{In(t)}) is also a function of Y_S. Then in view of (21.181), (21.182) follows from the preservation property of strong typicality. This completes the proof of the lemma. ⊓⊔

Proof of Lemma 21.10. Consider

Pr{E_t(x_{β(t)}) | E′_S} = Σ_{y_S ∈ T^{n̂}_{[Y_S]δ}} Pr{E_t(x_{β(t)}) | Y_S(1_S) = y_S, E′_S} Pr{Y_S(1_S) = y_S | E′_S}.   (21.231)

To analyze Pr{E_t(x_{β(t)}) | Y_S(1_S) = y_S, E′_S} in the above summation, let us condition on the event {Y_S(1_S) = y_S, E′_S}, where y_S ∈ T^{n̂}_{[Y_S]δ}. It then follows from (21.181) in Lemma 21.9 that

U_{In(t)}(c_{In(t)}) = ũ_{In(t)}(y_S).   (21.232)

Therefore, the event E_t(x_{β(t)}) is equivalent to

(y_Ψ, Y_{β(t)\Ψ}(x_{β(t)\Ψ}), ũ_{In(t)}(y_S)) ∈ T^{n̂}_{[Y_Ψ Y_{β(t)\Ψ} U_{In(t)}]δ}   (21.233)

(cf. (21.196)), or

Y_{β(t)\Ψ}(x_{β(t)\Ψ}) ∈ T^{n̂}_{[Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}]δ}(y_Ψ, ũ_{In(t)}(y_S)).   (21.234)

Thus

Pr{E_t(x_{β(t)}) | Y_S(1_S) = y_S, E′_S}
  = Σ_{y_{β(t)\Ψ} ∈ T^{n̂}_{[Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}]δ}(y_Ψ, ũ_{In(t)}(y_S))} Pr{Y_{β(t)\Ψ}(x_{β(t)\Ψ}) = y_{β(t)\Ψ} | Y_S(1_S) = y_S, E′_S}.   (21.235)

Since x_s ≠ 1 for s ∈ β(t)\Ψ, Y_{β(t)\Ψ}(x_{β(t)\Ψ}) is independent of the random sequences Y_S(1_S) and of the event E′_S by construction. Therefore,

Pr{Y_{β(t)\Ψ}(x_{β(t)\Ψ}) = y_{β(t)\Ψ} | Y_S(1_S) = y_S, E′_S} = Pr{Y_{β(t)\Ψ}(x_{β(t)\Ψ}) = y_{β(t)\Ψ}}.   (21.236)

By the consistency of strong typicality, if

y_{β(t)\Ψ} ∈ T^{n̂}_{[Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}]δ}(y_Ψ, ũ_{In(t)}(y_S)),   (21.237)

then

y_{β(t)\Ψ} ∈ T^{n̂}_{[Y_{β(t)\Ψ}]δ}.   (21.238)

Since Y_{β(t)\Ψ}(x_{β(t)\Ψ}) is generated i.i.d. according to the distribution of Y_{β(t)\Ψ}, by the strong AEP,

Pr{Y_{β(t)\Ψ}(x_{β(t)\Ψ}) = y_{β(t)\Ψ}} ≤ 2^{−n̂(H(Y_{β(t)\Ψ}) − ρ)},   (21.239)

where ρ → 0 as δ → 0. Combining (21.236) and (21.239), we have

Pr{Y_{β(t)\Ψ}(x_{β(t)\Ψ}) = y_{β(t)\Ψ} | Y_S(1_S) = y_S, E′_S} ≤ 2^{−n̂(H(Y_{β(t)\Ψ}) − ρ)}.   (21.240)

By the strong conditional AEP,

|T^{n̂}_{[Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}]δ}(y_Ψ, ũ_{In(t)}(y_S))| ≤ 2^{n̂(H(Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}) + σ)},   (21.241)

where σ → 0 as n̂ → ∞ and δ → 0. It then follows from (21.235), (21.240), and (21.241) that

Pr{E_t(x_{β(t)}) | Y_S(1_S) = y_S, E′_S}
  ≤ 2^{n̂(H(Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}) + σ)} 2^{−n̂(H(Y_{β(t)\Ψ}) − ρ)}   (21.242)
  = 2^{−n̂(H(Y_{β(t)\Ψ}) − H(Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}) − σ − ρ)}   (21.243)
  ≤ 2^{−n̂(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − σ − ρ)}   (21.244)
  ≤ 2^{−nα(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − φ_t)},   (21.245)

where (21.244) is justified by

H(Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}) ≤ H(Y_{β(t)\Ψ}|Y_Ψ U_{In(t)}) + H(Y_Ψ|U_{In(t)})   (21.246)
                            = H(Y_{β(t)\Ψ}, Y_Ψ | U_{In(t)})   (21.247)
                            = H(Y_{β(t)}|U_{In(t)}),   (21.248)

(21.245) follows from (21.144), and φ_t → 0 as n → ∞ and δ → 0. In (21.231),

Pr{Y_S(1_S) = y_S | E′_S} = Pr{Y_S(1_S) = y_S | X_{β(t)} = 1_{β(t)}, E_S}   (21.249)
                          = Pr{Y_S(1_S) = y_S | E_S}   (21.250)
                          = Pr{Y_S(1_S) = y_S | Y_S(1_S) ∈ T^{n̂}_{[Y_S]δ}}.   (21.251)

Hence, it follows from (21.231) and (21.245) that

Pr{E_t(x_{β(t)}) | E′_S}
  ≤ 2^{−nα(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − φ_t)} · Σ_{y_S ∈ T^{n̂}_{[Y_S]δ}} Pr{Y_S(1_S) = y_S | Y_S(1_S) ∈ T^{n̂}_{[Y_S]δ}}   (21.252)
  = 2^{−nα(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − φ_t)} · 1   (21.253)
  = 2^{−nα(H(Y_{β(t)\Ψ}) − H(Y_{β(t)}|U_{In(t)}) − φ_t)}.   (21.254)

The lemma is proved. ⊓⊔
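The typicality tests that drive the proofs of Lemmas 21.9 and 21.10 are empirical-frequency checks. The sketch below implements one common formulation of strong typicality (the total-variation form; the book's T^{n̂}_{[X]δ} is of this kind, though the exact definition lives in Chapter 6):

```python
from collections import Counter

def is_strongly_typical(seq, p, delta):
    """Strong typicality test: sum_x |N(x; seq)/n - p(x)| <= delta, and
    no symbol of zero probability occurs in seq."""
    n = len(seq)
    counts = Counter(seq)
    if any(p.get(x, 0.0) == 0.0 for x in counts):
        return False
    dev = sum(abs(counts.get(x, 0) / n - px) for x, px in p.items())
    return dev <= delta

p = {0: 0.5, 1: 0.5}
print(is_strongly_typical((0, 1, 1, 0, 1, 0, 0, 1), p, 0.1))   # True
print(is_strongly_typical((1, 1, 1, 1, 1, 1, 1, 0), p, 0.1))   # False
```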
Proof of Lemma 21.11. Let n be sufficiently large. Consider

|Λ_Ψ| ≤ Π_{s∈β(t)\Ψ} |𝒳_s|   (21.255)
  a) = Π_{s∈β(t)\Ψ} θ_s   (21.256)
  b) = Π_{s∈β(t)\Ψ} ⌈2^{n τ_s}⌉   (21.257)
  c) ≤ Π_{s∈β(t)\Ψ} ⌈2^{n(ω_s − ε/3)}⌉   (21.258)
     ≤ Π_{s∈β(t)\Ψ} 2^{n(ω_s − ε/4)}   (21.259)
  d) = Π_{s∈β(t)\Ψ} 2^{n(α H(Y_s) − ε/4)}   (21.260)
     = 2^{n Σ_{s∈β(t)\Ψ} (α H(Y_s) − ε/4)}   (21.261)
     = 2^{n [α Σ_{s∈β(t)\Ψ} H(Y_s) − (|β(t)|−|Ψ|) ε/4]}   (21.262)
     = 2^{n [α H(Y_{β(t)\Ψ}) − (|β(t)|−|Ψ|) ε/4]}   (21.263)
  e) ≤ 2^{n(α H(Y_{β(t)\Ψ}) − ε/4)},   (21.264)

where
a) follows from (21.146);
b) follows from (21.145);
c) follows from (21.143);
d) follows from (21.135);
e) follows because Ψ is a proper subset of β(t).
The lemma is proved. ⊓⊔

Chapter Summary

A Multi-source Network Coding Problem: A point-to-point communication network is represented by an acyclic graph consisting of a set of nodes V and a set of channels E. The set of source nodes is denoted by S. At a source node s, an information source X_s is generated. The rate constraint on a channel e is R_e. The set of sink nodes is denoted by T. At a sink node t, the information sources X_s, s ∈ β(t), are received, where β(t) ⊂ S depends on t.

Achievable Information Rate Region R: An information rate tuple ω = (ω_s : s ∈ S) is achievable if for a sufficiently large block length n, there exists a network code such that
1. at a source node s, the rate of the information source X_s is at least ω_s − ε;
2. the rate of the network code on a channel e is at most R_e + ε;
3. at a sink node t, the information sources X_s, s ∈ β(t), can be decoded with negligible probability of error.
The achievable information rate region R is the set of all achievable information rate tuples ω.

Characterization of R: Let N = {Y_s : s ∈ S; U_e : e ∈ E} and let H_N be the entropy space for the random variables in N. Then

R = Λ( proj_{Y_S}( cl(con(Γ**_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ),

where Γ**_N = {h ∈ H_N : h is finitely entropic} and

L_1 = {h ∈ H_N : h_{Y_S} = Σ_{s∈S} h_{Y_s}}
L_2 = {h ∈ H_N : h_{U_{Out(s)}|Y_s} = 0, s ∈ S}
L_3 = {h ∈ H_N : h_{U_{Out(i)}|U_{In(i)}} = 0, i ∈ V \ (S ∪ T)}
L_4 = {h ∈ H_N : h_{U_e} ≤ R_e, e ∈ E}
L_5 = {h ∈ H_N : h_{Y_{β(t)}|U_{In(t)}} = 0, t ∈ T}.

An Explicit Outer Bound on R (LP Bound):

R_LP = Λ( proj_{Y_S}( cl(con(Γ_N ∩ L_123)) ∩ L_4 ∩ L_5 ) ).

Problems

1. Show that source separation is optimal for the networking problem depicted in Figure 21.3.

2. By letting S = {s} and β(t) = {s} for all t ∈ T, the multi-source network coding problem described in Section 21.3 becomes a single-source network coding problem. Write ω = ω_s.
a) Write out the achievable information rate region R.
b) Show that if ω_s ∈ R, then ω_s ≤ maxflow(t) for all t ∈ T.

3. Consider the following network.

[Figure: a network with information sources X_1 and X_2; five sink nodes demand [X_1 X_2], [X_1 X_2], [X_1 X_2], [X_1], and [X_1]; the edges are labeled with capacities 1 and 2.]

a) Let ω_i be the rate of information source X_i. Determine and illustrate the max-flow bounds.
b) Are the max-flow bounds achievable?
c) Is source separation always optimal?

[Figure: a network with information sources X_1 and X_2; the sink nodes demand [X_1 X_2], [X_1 X_2], [X_1], and [X_1]; all edge capacities are equal to 1 (see Problem 4).]

4. Repeat Problem 3 for the above network, in which the capacities of all the edges are equal to 1.

5. Consider a disk array with 3 disks. Let X_1, X_2, and X_3 be 3 mutually independent pieces of information to be retrieved from the disk array, and let S_1, S_2, and S_3 be the data to be stored separately in the 3 disks. It is required that X_1 can be retrieved from S_i, i = 1, 2, 3, that X_2 can be retrieved from (S_i, S_j), 1 ≤ i < j ≤ 3, and that X_3 can be retrieved from (S_1, S_2, S_3).
a) Prove that for i = 1, 2, 3,
H(S_i) = H(X_1) + H(S_i | X_1).
b) Prove that for 1 ≤ i < j ≤ 3,
H(S_i | X_1) + H(S_j | X_1) ≥ H(X_2) + H(S_i, S_j | X_1, X_2).
c) Prove that
H(S_1, S_2, S_3 | X_1, X_2) = H(X_3).
d) Prove that for i = 1, 2, 3,
H(S_i) ≥ H(X_1).
e) Prove that
H(S_i) + H(S_j) ≥ 2H(X_1) + H(X_2) + H(S_i, S_j | X_1, X_2).
f) Prove that
2H(S_i) + H(S_{i⊕1}) + H(S_{i⊕2}) ≥ 4H(X_1) + 2H(X_2) + H(X_3),
where i = 1, 2, 3 and
i ⊕ j = i + j if i + j ≤ 3, and i ⊕ j = i + j − 3 if i + j > 3,
for 1 ≤ i, j ≤ 3.
g) Prove that
H(S_1) + H(S_2) + H(S_3) ≥ 3H(X_1) + (3/2)H(X_2) + H(X_3).
Parts d) to g) give constraints on H(S_1), H(S_2), and H(S_3) in terms of H(X_1), H(X_2), and H(X_3). It was shown in Roche et al. that these constraints are the tightest possible.

6. Generalize the setup in Problem 5 to K disks and show that

Σ_{i=1}^{K} H(S_i) ≥ K Σ_{α=1}^{K} H(X_α)/α.

Hint: Use the inequalities in Problem 19 in Chapter 2 to prove that for s = 0, 1, ..., K − 1,

Σ_{i=1}^{K} H(S_i) ≥ K Σ_{α=1}^{s} H(X_α)/α + (K / C(K, s+1)) Σ_{T : |T|=s+1} H(S_T | X_1, X_2, ..., X_s)/(s + 1)

by induction on s, where T is a subset of {1, 2, ..., K} and C(K, s+1) denotes the binomial coefficient.

7. Write out the achievable information rate region R for the network in Problem 3.

8. Show that if there exists an (n, (η_ij : (i, j) ∈ E), (τ_s : s ∈ S)) code which satisfies (21.11) and (21.13), then there always exists an (n, (η_ij : (i, j) ∈ E), (τ′_s : s ∈ S)) code which satisfies (21.11) and (21.13), where τ′_s ≤ τ_s for all s ∈ S. Hint: Use a random coding argument.

Historical Notes

Multilevel diversity coding was studied by Yeung, where it was shown that source separation is not always optimal. Roche et al. showed that source separation is optimal for symmetrical three-level diversity coding. This result was extended to any level by Yeung and Zhang with a painstaking proof. Hau studied all the one hundred configurations of a three-encoder diversity coding system and found that source separation is optimal for eighty-six configurations.

Yeung and Zhang introduced the distributed source coding model discussed in Section 21.2.2, which subsumes multilevel diversity coding. The region of all entropy functions, previously introduced by Yeung for studying information inequalities, enabled them to obtain inner and outer bounds on the achievable information rate region for a variety of networks. Distributed source coding is equivalent to multi-source network coding on a special class of acyclic networks. The inner and outer bounds on the achievable information rate region obtained there were generalized to arbitrary acyclic networks by Song et al. The gap between these bounds was finally closed by Yan et al.

The insufficiency of specific forms of linear coding for multi-source network coding was demonstrated and discussed by Riis, Rasala Lehman and Lehman, and Médard et al. The insufficiency of very general forms of linear coding has been proved by Dougherty et al. Chan proved the sufficiency of a class of network codes constructed by groups when all the information sources are generated at the same node in the network.

Even though the achievable information rate region for multi-source network coding is characterized by all information inequalities (Shannon-type and non-Shannon-type), it is not clear whether there exists a multi-source network coding problem for which the characterization of the achievable information rate region necessarily involves non-Shannon-type inequalities. This important question was resolved by Dougherty et al., who constructed a multi-source network coding problem from matroids and demonstrated that a tighter outer bound on the achievable information rate region can be obtained by invoking the unconstrained non-Shannon-type inequality discovered by Zhang and Yeung.
Chan and Grant recently proved that for every non-Shannon-type inequality that exists, there is a multi-source network coding problem for which the characterization of the achievable information rate region necessarily involves that particular inequality.

References

1. J. Abrahams, “Code and parse trees for lossless source encoding,” Comm. Info. and Syst., 1: 113-146, 2001.
2. N. Abramson, Information Theory and Coding, McGraw-Hill, New York, 1963.
3. Y. S. Abu-Mostafa, Ed., Complexity in Information Theory, Springer-Verlag, New York, 1988.
4. J. Aczél and Z. Daróczy, On Measures of Information and Their Characterizations, Academic Press, New York, 1975.
5. A. Argawal and M. Charikar, “On the advantage of network coding for improving network throughput,” 2004 IEEE Information Theory Workshop, San Antonio, TX, Oct. 25-29, 2004.
6. R. Ahlswede, B. Balkenhol, and L. Khachatrian, “Some properties of fix-free codes,” preprint 97-039, Sonderforschungsbereich 343, Universität Bielefeld, 1997.
7. R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, “Network information flow,” IEEE Trans. Info. Theory, IT-46: 1204-1216, 2000.
8. R. Ahlswede and I. Csiszár, “Common randomness in information theory and cryptography – Part I: Secret sharing,” IEEE Trans. Info. Theory, IT-39: 1121-1132, 1993.
9. R. Ahlswede and I. Csiszár, “Common randomness in information theory and cryptography – Part II: CR capacity,” IEEE Trans. Info. Theory, IT-44: 225-240, 1998.
10. R. Ahlswede, P. Gács, and J. Körner, “Bounds of conditional probabilities with applications in multi-user communication,” Z. Wahrscheinlichkeitstheorie u. verw. Geb., 34: 157-177, 1976.
11. R. Ahlswede and J. Körner, “Source coding with side information and a converse for degraded broadcast channels,” IEEE Trans. Info. Theory, IT-21: 629-637, 1975.
12. R. Ahlswede and I. Wegener, Suchprobleme, Teubner Studienbücher, B. G. Teubner, Stuttgart, 1979 (in German). English translation: Search Problems, Wiley, New York, 1987.
13. R. Ahlswede and J. Wolfowitz, “The capacity of a channel with arbitrarily varying cpf's and binary output alphabet,” Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 15: 186-194, 1970.
14. P. Algoet and T. M. Cover, “A sandwich proof of the Shannon-McMillan-Breiman theorem,” Ann. Prob., 16: 899-909, 1988.
15. S. Amari, Differential-Geometrical Methods in Statistics, Springer-Verlag, New York, 1985.
16. V. Anantharam and S. Verdú, “Bits through queues,” IEEE Trans. Info. Theory, IT-42: 4-18, 1996.
17. J. B. Anderson and S. Mohan, Source and Channel Coding: An Algorithmic Approach, Kluwer Academic Publishers, Boston, 1991.
18. S. Arimoto, “Encoding and decoding of p-ary group codes and the correction system,” Information Processing in Japan, 2: 321-325, 1961 (in Japanese).
19. S. Arimoto, “An algorithm for calculating the capacity of arbitrary discrete memoryless channels,” IEEE Trans. Info. Theory, IT-18: 14-20, 1972.
20. S. Arimoto, “On the converse to the coding theorem for discrete memoryless channels,” IEEE Trans. Info. Theory, IT-19: 357-359, 1973.
21. R. B. Ash, Information Theory, Interscience, New York, 1965.
22. E. Ayanoglu, R. D. Gitlin, C.-L. I, and J. Mazo, “Diversity coding for transparent self-healing and fault-tolerant communication networks,” 1990 IEEE International Symposium on Information Theory, San Diego, CA, Jan. 1990.
23. H. Balli, X. Yan, and Z. Zhang, “Error correction capability of random network error correction codes,” submitted to IEEE Trans. Info. Theory.
24. Á. I. Barbero and Ø. Ytrehus, “Cycle-logical treatment for ‘cyclopathic’ networks,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2795-2804, 2006.
25. A. R. Barron, “The strong ergodic theorem for densities: Generalized Shannon-McMillan-Breiman theorem,” Ann. Prob., 13: 1292-1303, 1985.
26. L. A. Bassalygo, R. L. Dobrushin, and M. S. Pinsker, “Kolmogorov remembered,” IEEE Trans. Info. Theory, IT-34: 174-175, 1988.
27. T. Berger, Rate Distortion Theory: A Mathematical Basis for Data Compression, Prentice-Hall, Englewood Cliffs, New Jersey, 1971.
28. T. Berger, “Multiterminal source coding,” in The Information Theory Approach to Communications, G. Longo, Ed., CISM Courses and Lectures #229, Springer-Verlag, New York, 1978.
29. T. Berger and R. W. Yeung, “Multiterminal source coding with encoder breakdown,” IEEE Trans. Info. Theory, IT-35: 237-244, 1989.
30. T. Berger, Z. Zhang, and H. Viswanathan, “The CEO problem,” IEEE Trans. Info. Theory, IT-42: 887-902, May 1996.
31. E. R. Berlekamp, “Block coding for the binary symmetric channel with noiseless, delayless feedback,” in H. B. Mann, Error Correcting Codes, Wiley, New York, 1968.
32. E. R. Berlekamp, Ed., Key Papers in the Development of Coding Theory, IEEE Press, New York, 1974.
33. C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo codes,” Proceedings of the 1993 International Conference on Communications, 1064-1070, 1993.
34. J. Berstel and D. Perrin, Theory of Codes, Academic Press, Orlando, 1985.
35. D. Bertsekas and R. Gallager, Data Networks, 2nd ed., Prentice-Hall, Englewood Cliffs, New Jersey, 1992.
36. D. Blackwell, L. Breiman, and A. J. Thomasian, “The capacities of certain channel classes under random coding,” Ann. Math. Stat., 31: 558-567, 1960.
37. R. E. Blahut, “Computation of channel capacity and rate distortion functions,” IEEE Trans. Info. Theory, IT-18: 460-473, 1972.
38. R. E. Blahut, “Information bounds of the Fano-Kullback type,” IEEE Trans. Info. Theory, IT-22: 410-421, 1976.
39. R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley, Reading, Massachusetts, 1983.
40. R. E. Blahut, Principles and Practice of Information Theory, Addison-Wesley, Reading, Massachusetts, 1987.
41. R. E. Blahut, D. J. Costello, Jr., U. Maurer, and T. Mittelholzer, Ed., Communications and Cryptography: Two Sides of One Tapestry, Kluwer Academic Publishers, Boston, 1994.
42. G. R. Blakley, “Safeguarding cryptographic keys,” in Proceedings of the National Computer Conference, 48: 313-317, 1979.
43. C. Blundo, A. De Santis, R. De Simone, and U. Vaccaro, “Tight bounds on the information rate of secret sharing schemes,” Designs, Codes and Cryptography, 11: 107-110, 1997.
44. B. Bollobás, Graph Theory: An Introductory Course, Springer-Verlag, New York, 1979.
45. J. A. Bondy and U. S. R. Murty, Graph Theory with Applications, North Holland, New York, 1976.
46. S. Borade, “Network information flow: Limits and achievability,” 2002 IEEE International Symposium on Information Theory, Lausanne, Switzerland, Jun. 30-Jul. 5, 2002.
47. R. C. Bose and D. K. Ray-Chaudhuri, “On a class of error correcting binary group codes,” Info. Contr., 3: 68-79, Mar. 1960.
48. L. Breiman, “The individual ergodic theorems of information theory,” Ann. Math. Stat., 28: 809-811, 1957.
49. Encyclopedia Britannica.
50. M. Burrows and D. J. Wheeler, “A block-sorting lossless data compression algorithm,” Technical Report 124, Digital Equipment Corporation, 1994.
51. J. Byers, M. Luby, and M. Mitzenmacher, “A digital fountain approach to asynchronous reliable multicast,” IEEE J. Selected Areas Comm., 20: 1528-1540, 2002.
52. N. Cai and R. W. Yeung, “Secure network coding,” 2002 IEEE International Symposium on Information Theory, Lausanne, Switzerland, Jun. 30-Jul. 5, 2002.
53. N. Cai and R. W. Yeung, “Network coding and error correction,” 2002 IEEE Information Theory Workshop, Bangalore, India, Oct. 20-25, 2002.
54. N. Cai and R. W. Yeung, “Network error correction, Part II: Lower bounds,” Comm. Info. and Syst., 6: 37-54, 2006.
55. G. Caire and S. Shamai, “On the achievable throughput of a multiantenna Gaussian broadcast channel,” IEEE Trans. Info. Theory, IT-49: 1691-1706, 2003.
56. R. Calderbank and N. J. A. Sloane, “Obituary: Claude Shannon (1916-2001),” Nature, 410: 768, April 12, 2001.
57. R. M. Capocelli, A. De Santis, L. Gargano, and U. Vaccaro, “On the size of shares for secret sharing schemes,” J. Cryptology, 6: 157-168, 1993.
58. H. L. Chan (T. H. Chan), “Aspects of information inequalities and its applications,” M.Phil. thesis, The Chinese University of Hong Kong, 1998.
59. T. H. Chan, “A combinatorial approach to information inequalities,” Comm. Info. and Syst., 1: 241-253, 2001.
60. H. L. Chan (T. H. Chan), “New results in probabilistic modeling,” Ph.D. thesis, The Chinese University of Hong Kong, 2001.
61. T. H. Chan, “Balanced information inequalities,” IEEE Trans. Info. Theory, IT-49: 3261-3267, 2003.
62. T. H. Chan, “On the optimality of group network codes,” 2005 IEEE International Symposium on Information Theory, Adelaide, South Australia, Australia, Sept. 4-9, 2005.
63. T. Chan and A. Grant, “Entropy vectors and network codes,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
64. T. H. Chan and R. W. Yeung, “On a relation between information inequalities and group theory,” IEEE Trans. Info. Theory, IT-48: 1992-1995, 2002.
65. G. J. Chaitin, Algorithmic Information Theory, Cambridge University Press, Cambridge, 1987.
66. C. Chekuri, C. Fragouli, and E. Soljanin, “On average throughput benefits and alphabet size for network coding,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2410-2424, 2006.
67. H. Chernoff, “A measure of the asymptotic efficiency of test of a hypothesis based on a sum of observations,” Ann. Math. Stat., 23: 493-507, 1952.
68. D. M. Chiu, R. W. Yeung, J. Huang, and B. Fan, “Can network coding help in P2P networks?” NetCod 2006, Boston, MA, Apr. 3-7, 2006.
69. P. A. Chou and Y. Wu, “Network coding for the Internet and wireless networks,” IEEE Signal Processing Magazine, 77-85, Sept. 2007.
70. P. A. Chou, Y. Wu, and K. Jain, “Practical network coding,” 41st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2003.
71. K. L. Chung, “A note on the ergodic theorem of information theory,” Ann. Math. Stat., 32: 612-614, 1961.
72. M. H. M. Costa, “Writing on dirty paper,” IEEE Trans. Info. Theory, IT-29: 439-441, 1983.
73. T. M. Cover, “A proof of the data compression theorem of Slepian and Wolf for ergodic sources,” IEEE Trans. Info. Theory, IT-21: 226-228, 1975.
74. T. M. Cover, “An algorithm for maximizing expected log investment return,” IEEE Trans. Info. Theory, IT-30: 369-373, 1984.
75. T. M. Cover and M. Chiang, “Duality between channel capacity and rate distortion with two-sided state information,” IEEE Trans. Info. Theory, IT-48: 1629-1638, 2002.
76. T. M. Cover, P. Gács, and R. M. Gray, “Kolmogorov's contribution to information theory and algorithmic complexity,” Ann. Prob., 17: 840-865, 1989.
77. T. M. Cover and R. King, “A convergent gambling estimate of the entropy of English,” IEEE Trans. Info. Theory, IT-24: 413-421, 1978.
78. T. M. Cover and S. K. Leung, “Some equivalences between Shannon entropy and Kolmogorov complexity,” IEEE Trans. Info. Theory, IT-24: 331-338, 1978.
79. T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley, 1991; 2nd ed., Wiley-Interscience, 2006.
80. I. Csiszár, “Information type measures of difference of probability distributions and indirect observations,” Studia Sci. Math. Hungar., 2: 229-318, 1967.
81. I. Csiszár, “On the computation of rate-distortion functions,” IEEE Trans. Info. Theory, IT-20: 122-124, 1974.
82. I. Csiszár, “The method of types,” IEEE Trans. Info. Theory, IT-44: 2505-2523, 1998.
83. I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, Academic Press, New York, 1981.
84. I. Csiszár and P. Narayan, “Arbitrarily varying channels with constrained inputs and states,” IEEE Trans. Info. Theory, IT-34: 27-34, 1988.
85. I. Csiszár and P. Narayan, “The capacity of the arbitrarily varying channel revisited: Positivity, constraints,” IEEE Trans. Info. Theory, IT-34: 181-193, 1988.
86. I. Csiszár and P. Narayan, “Secrecy capacities for multiple terminals,” IEEE Trans. Info. Theory, IT-50: 3047-3061, 2004.
87. I. Csiszár and G. Tusnády, “Information geometry and alternating minimization procedures,” Statistics and Decisions, Supplement Issue 1: 205-237, 1984.
88. G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, New Jersey, 1962.
89. L. D. Davisson, “Universal noiseless coding,” IEEE Trans. Info. Theory, IT-19: 783-795, 1973.
90. A. P. Dawid, “Conditional independence in statistical theory (with discussion),” J. Roy. Statist. Soc., Series B, 41: 1-31, 1979.
91. A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal Royal Stat. Soc., Series B, 39: 1-38, 1977.
92. S. N. Diggavi and T. M. Cover, “The worst additive noise under a covariance constraint,” IEEE Trans. Info. Theory, IT-47: 3072-3081, 2001.
93. R. L. Dobrushin, “General formulation of Shannon's main theorem in information theory,” Uspekhi Mat. Nauk, 14: 3-104; translated in AMS Transl. Ser. 2, 33: 323-438, 1963.
94. R. Dougherty, C. Freiling, and K. Zeger, “Insufficiency of linear coding in network information flow,” IEEE Trans. Info. Theory, IT-51: 2745-2759, 2005.
95. R. Dougherty, C. Freiling, and K. Zeger, “Six new non-Shannon information inequalities,” 2006 IEEE International Symposium on Information Theory, Seattle, WA, Jul. 9-14, 2006.
96. R. Dougherty, C. Freiling, and K. Zeger, “Networks, matroids, and non-Shannon information inequalities,” IEEE Trans. Info. Theory, IT-53: 1949-1969, 2007.
97. G. Dueck and J. Körner, “Reliability function of a discrete memoryless channel at rates above capacity,” IEEE Trans. Info. Theory, IT-25: 82-85, 1979.
98. J. Edmonds, “Edge-disjoint branchings,” in Combinatorial Algorithms, R. Rustin, Ed., 91-96, Algorithmics Press, New York, 1973.
99. P. Elias, “Universal codeword sets and representations of the integers,” IEEE Trans. Info. Theory, IT-21: 194-203, 1975.
100. P. Elias, A. Feinstein, and C. E. Shannon, “A note on maximum flow through a network,” IRE Trans. Info. Theory, IT-2: 117-119, 1956.
101. E. Erez and M. Feder, “Capacity region and network codes for two receivers multicast with private and common data,” Workshop on Coding, Cryptography and Combinatorics, Huangshen City, China, 2003.
102. E. Erez and M. Feder, “Convolutional network codes,” 2004 IEEE International Symposium on Information Theory, Chicago, IL, Jun. 27-Jul. 2, 2004.
103. E. Erez and M. Feder, “Convolutional network codes for cyclic networks,” NetCod 2005, Riva del Garda, Italy, Apr. 7, 2005.
104. R. M. Fano, Class notes for Transmission of Information, Course 6.574, MIT, Cambridge, Massachusetts, 1952.
105. R. M. Fano, Transmission of Information: A Statistical Theory of Communication, Wiley, New York, 1961.
106. M. Feder, N. Merhav, and M. Gutman, “Universal prediction of individual sequences,” IEEE Trans. Info. Theory, IT-38: 1258-1270, 1992.
107. A. Feinstein, “A new basic theorem of information theory,” IRE Trans. Info. Theory, IT-4: 2-22, 1954.
108. A. Feinstein, Foundations of Information Theory, McGraw-Hill, New York, 1958.
109. J. Feldman, T. Malkin, C. Stein, and R. A. Servedio, “On the capacity of secure network coding,” 42nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sept. 29-Oct. 1, 2004.
110. W. Feller, An Introduction to Probability Theory and Its Applications, Vol. 1, Wiley, New York, 1950.
111. B. M. Fitingof, “Coding in the case of unknown and changing message statistics,” PPI 2: 3-11, 1966 (in Russian).
112. S. L. Fong and R. W. Yeung, “Variable-rate linear network coding,” 2006 IEEE Information Theory Workshop, Chengdu, China, Oct. 22-26, 2006.
113. L. R. Ford, Jr. and D. R. Fulkerson, Flows in Networks, Princeton University Press, Princeton, New Jersey, 1962.
114. G. D. Forney, Jr., “Convolutional codes I: Algebraic structure,” IEEE Trans. Info. Theory, IT-16: 720-738, 1970.
115. G. D. Forney, Jr., Information Theory, unpublished course notes, Stanford University, 1972.
116. G. D. Forney, Jr., “The Viterbi algorithm,” Proc. IEEE, 61: 268-278, 1973.
117. C. Fragouli, J.-Y. Le Boudec, and J. Widmer, “Network coding: An instant primer,” ACM SIGCOMM Comp. Comm. Review, 36: 63-68, 2006.
118. C. Fragouli and E. Soljanin, “A connection between network coding and convolutional codes,” IEEE International Conference on Communications, Paris, France, Jun. 20-24, 2004.
119. C. Fragouli and E. Soljanin, “Information flow decomposition for network coding,” IEEE Trans. Info. Theory, IT-52: 829-848, 2006.
120. C. Fragouli and E. Soljanin, “Network coding fundamentals,” Foundations and Trends in Networking, vol. 2, no. 1, 1-133, 2007.
121. J. B. Fraleigh, A First Course in Abstract Algebra, 7th ed., Addison Wesley, 2003.
122. F. Fu and R. W. Yeung, “On the rate-distortion region for multiple descriptions,” IEEE Trans. Info. Theory, IT-48: 2012-2021, 2002.
123. S. Fujishige, “Polymatroidal dependence structure of a set of random variables,” Info. Contr., 39: 55-72, 1978.
124. R. G. Gallager, “Low-density parity-check codes,” IEEE Trans. Info. Theory, IT-8: 21-28, Jan. 1962.
125. R. G. Gallager, “A simple derivation of the coding theorem and some applications,” IEEE Trans. Info. Theory, IT-11: 3-18, 1965.
126. R. G. Gallager, Information Theory and Reliable Communication, Wiley, New York, 1968.
127. R. G. Gallager, “Variations on a theme by Huffman,” IEEE Trans. Info. Theory, IT-24: 668-674, 1978.
128. Y. Ge and Z. Ye, “Information-theoretic characterizations of lattice conditional independence models,” unpublished.
129. A. Gersho and R. M. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, Boston, 1992.
130. C. Gkantsidis and P. R. Rodriguez, “Network coding for large scale content distribution,” IEEE INFOCOM 2005, Miami, FL, Mar. 13-17, 2005.
131. S. Goldman, Information Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1953.
132. A. Goldsmith and P. P. Varaiya, “Capacity of fading channels with channel side information,” IEEE Trans. Info. Theory, IT-43: 1986-1992, 1997.
133. A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.
134. J. Dj. Golić, “Noiseless coding for multiple channels,” 1994 International Symposium on Information Theory and Its Applications, Sydney, Australia, 1994.
135. S. W. Golomb, R. E. Peile, and R. A. Scholtz, Basic Concepts in Information Theory and Coding: The Adventures of Secret Agent 00111, Plenum Press, New York, 1994.
136. R. M. Gray, “On the asymptotic eigenvalue distribution of Toeplitz matrices,” IEEE Trans. Info. Theory, IT-18: 725-730, 1972.
137. R. M. Gray, Entropy and Information Theory, Springer-Verlag, New York, 1990.
138. S. Guiasu, Information Theory with Applications, McGraw-Hill, New York, 1976.
139. J. Hadamard, “Résolution d'une question relative aux déterminants,” Bull. Sci. Math. Sér. 2, 17: 240-246, 1893.
140. B. Hajek and T. Berger, “A decomposition theorem for binary Markov random fields,” Ann. Prob., 15: 1112-1125, 1987.
141. D. Hammer, A. Romashchenko, A. Shen, and N. Vereshchagin, “Inequalities for Shannon entropy and Kolmogorov complexity,” J. Comp. and Syst. Sci., 60: 442-464, 2000.
142. R. W. Hamming, “Error detecting and error correcting codes,” Bell Sys. Tech. Journal, 29: 147-160, 1950.
143. T. S. Han, “Linear dependence structure of the entropy space,” Info. Contr., 29: 337-368, 1975.
144. T. S. Han, “Nonnegative entropy measures of multivariate symmetric correlations,” Info. Contr., 36: 133-156, 1978.
145. T. S. Han, “A uniqueness of Shannon's information distance and related nonnegativity problems,” J. Comb., Info., and Syst. Sci., 6: 320-321, 1981.
146. T. S. Han, “An information-spectrum approach to source coding theorems with a fidelity criterion,” IEEE Trans. Info. Theory, IT-43: 1145-1164, 1997.
147. T. S. Han and K. Kobayashi, “A unified achievable rate region for a general class of multiterminal source coding systems,” IEEE Trans. Info. Theory, IT-26: 277-288, 1980.
148. T. S. Han and K. Kobayashi, Mathematics of Information and Coding, American Mathematical Society, 2003.
149. T. S. Han and S. Verdú, “Generalizing the Fano inequality,” IEEE Trans. Info. Theory, IT-40: 1247-1251, 1994.
150. G. H. Hardy, J. E. Littlewood, and G. Polya, Inequalities, 2nd ed., Cambridge University Press, London, 1952.
151. P. Harremoës, “Information topologies with applications,” in Entropy, Search, Complexity (Bolyai Society Mathematical Studies), I. Csiszár, G. O. H. Katona, and G. Tardos, Ed., Springer, Berlin, 2007.
152. P. Harremoës and F. Topsøe, “Inequalities between entropy and index of coincidence derived from information diagrams,” IEEE Trans. Info. Theory, IT-47: 2944-2960, 2001.
153. N. Harvey, R. Kleinberg, R. Nair, and Y. Wu, “A ‘chicken and egg’ network coding problem,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
154. B. Hassibi, “Normalized entropy vectors, network information theory and convex optimization,” IEEE Information Theory Workshop on Information Theory for Wireless Networks, Solstrand, Norway, Jul. 1-6, 2007.
155. B. Hassibi and S. Shadbakht, “On a construction of entropic vectors using lattice-generated distributions,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
156. K. P. Hau, “Multilevel diversity coding with independent data streams,” M.Phil. thesis, The Chinese University of Hong Kong, Jun. 1995.
157. C. Heegard and S. B. Wicker, Turbo Coding, Kluwer Academic Publishers, Boston, 1999.
158. T. Ho, R. Koetter, M. Médard, D. R. Karger, and M. Effros, “The benefits of coding over routing in a randomized setting,” 2003 IEEE International Symposium on Information Theory, Yokohama, Japan, Jun. 29-Jul. 4, 2003.
159. T. Ho, B. Leong, R. Koetter, M. Médard, M. Effros, and D. R. Karger, “Byzantine modification detection in multicast networks using randomized network coding,” 2004 IEEE International Symposium on Information Theory, Chicago, IL, Jun. 27-Jul. 2, 2004.
160. T. Ho and D. S. Lun, Network Coding: An Introduction, Cambridge University Press, 2008.
161. S.-W. Ho, “The interplay between entropy and variational distance, Part II: Applications,” submitted to IEEE Trans. Info. Theory.
162. S.-W. Ho and R. W. Yeung, “On the discontinuity of the Shannon information measures,” 2005 IEEE International Symposium on Information Theory, Adelaide, South Australia, Australia, Sept. 4-9, 2005.
163. S.-W. Ho and R. W. Yeung, “On information divergence measures and a unified typicality,” 2006 IEEE International Symposium on Information Theory, Seattle, WA, Jul. 9-14, 2006.
164. S.-W. Ho and R. W. Yeung, “The interplay between entropy and variational distance,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
165. S.-W. Ho and R. W. Yeung, “The interplay between entropy and variational distance, Part I: Basic concepts and bounds,” submitted to IEEE Trans. Info. Theory.
166. A. Hocquenghem, “Codes correcteurs d'erreurs,” Chiffres, 2: 147-156, 1959.
167. Y. Horibe, “An improved bound for weight-balanced tree,” Info. Contr., 34: 148-151, 1977.
168. Hu Guo Ding, “On the amount of information,” Teor. Veroyatnost. i Primenen., 4: 447-455, 1962 (in Russian).
169. D. A. Huffman, “A method for the construction of minimum redundancy codes,” Proc. IRE, 40: 1098-1101, 1952.
170. J. Y. Hui, Switching and Traffic Theory for Integrated Broadband Networks, Springer, 1990.
171. L. P. Hyvarinen, Information Theory for Systems Engineers, Springer-Verlag, Berlin, 1968.
172. B. Ibinson, N. Linden, and A. Winter, “All inequalities for the relative entropy,” Comm. Math. Phys., 269: 223-238, 2006.
173. S. Ihara, “On the capacity of channels with additive non-Gaussian noise,” Info. Contr., 37: 34-39, 1978.
174. S. Ihara, Information Theory for Continuous Systems, World Scientific, Singapore, 1993.
175. A. W. Ingleton, “Representation of matroids,” in Combinatorial Mathematics and Its Applications, D. J. A. Welsh, Ed., 149-167, Academic Press, London, 1971.
176. C. Intanagonwiwat, R. Govindan, and D. Estrin, “Directed diffusion: A scalable and robust communication paradigm for sensor networks,” 6th Annual International Conference on Mobile Computing and Networking (Mobicom 2000), Boston, MA, Aug. 6-11, 2000.
177. P. Jacquet and W. Szpankowski, “Entropy computations via analytic depoissonization,” IEEE Trans. Info. Theory, IT-45: 1072-1081, 1999.
177. P. Jacquet and W. Szpankowski, “Entropy computations via analytic depoissonization,” IEEE Trans. Info. Theory, IT-45: 1072-1081, 1999.
178. S. Jaggi, P. Sanders, P. A. Chou, M. Effros, S. Egner, K. Jain, and L. Tolhuizen, “Polynomial time algorithms for multicast network code construction,” IEEE Trans. Info. Theory, IT-51: 1973-1982, 2005.
179. S. Jaggi, M. Langberg, S. Katti, D. Katabi, M. Médard, and M. Effros, “Resilient network coding in the presence of Byzantine adversaries,” IEEE INFOCOM 2007, Anchorage, AK, May 6-12, 2007.
180. E. T. Jaynes, “On the rationale of maximum entropy methods,” Proc. IEEE, 70: 939-952, 1982.
181. E. T. Jaynes, Probability Theory: The Logic of Science, Cambridge University Press, 2003.
182. F. Jelinek, Probabilistic Information Theory, McGraw-Hill, New York, 1968.
183. J. L. W. V. Jensen, “Sur les fonctions convexes et les inégalités entre les valeurs moyennes,” Acta Mathematica, 30: 175-193, 1906.
184. V. D. Jerohin, “ϵ-entropy of discrete random objects,” Teor. Veroyatnost. i Primenen., 3: 103-107, 1958.
185. N. Jindal, S. Viswanath, and A. Goldsmith, “On the duality of Gaussian multiple-access and broadcast channels,” IEEE Trans. Info. Theory, IT-50: 768-783, 2004.
186. O. Johnsen, “On the redundancy of binary Huffman codes,” IEEE Trans. Info. Theory, IT-26: 220-222, 1980.
187. G. A. Jones and J. M. Jones, Information and Coding Theory, Springer, London, 2000.
188. Y. Kakihara, Abstract Methods in Information Theory, World Scientific, Singapore, 1999.
189. J. Karush, “A simple proof of an inequality of McMillan,” IRE Trans. Info. Theory, 7: 118, 1961.
190. T. Kawabata, “Gaussian multiterminal source coding,” Master’s thesis, Math. Eng., Univ. of Tokyo, Japan, Feb. 1980.
191. T. Kawabata and R. W. Yeung, “The structure of the I-Measure of a Markov chain,” IEEE Trans. Info. Theory, IT-38: 1146-1149, 1992.
192. A. I. Khinchin, Mathematical Foundations of Information Theory, Dover, New York, 1957.
193. J. C. Kieffer, “A survey of the theory of source coding with a fidelity criterion,” IEEE Trans. Info. Theory, IT-39: 1473-1490, 1993.
194. J. C. Kieffer and E.-h. Yang, “Grammar-based codes: A new class of universal lossless source codes,” IEEE Trans. Info. Theory, IT-46: 737-754, 2000.
195. R. Kindermann and J. Snell, Markov Random Fields and Their Applications, American Math. Soc., Providence, Rhode Island, 1980.
196. R. Koetter and M. Médard, “An algebraic approach to network coding,” IEEE/ACM Trans. Networking, 11: 782-795, 2003.
197. R. Koetter and F. Kschischang, “Coding for errors and erasures in random network coding,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
198. A. N. Kolmogorov, “On the Shannon theory of information transmission in the case of continuous signals,” IEEE Trans. Info. Theory, IT-2: 102-108, 1956.
199. A. N. Kolmogorov, “Three approaches to the quantitative definition of information,” Prob. Info. Trans., 1: 4-7, 1965.
200. A. N. Kolmogorov, “Logical basis for information theory and probability theory,” IEEE Trans. Info. Theory, IT-14: 662-664, 1968.
201. L. G. Kraft, “A device for quantizing, grouping and coding amplitude modulated pulses,” M.S. thesis, Dept. of Elec. Engr., MIT, 1949.
202. G. Kramer, “Directed information for channels with feedback,” Ph.D. thesis, Swiss Federal Institute of Technology, Zurich, 1998.
203. G. Kramer and S. A. Savari, “Cut sets and information flow in networks of two-way channels,” 2004 IEEE International Symposium on Information Theory, Chicago, IL, Jun. 27-Jul. 2, 2004.
204. F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Info. Theory, IT-47: 498-519, 2001.
205. H. W. Kuhn and A. W. Tucker, “Nonlinear programming,” Proceedings of 2nd Berkeley Symposium: 481-492, University of California Press, 1951.
206. S. Kullback, Information Theory and Statistics, Wiley, New York, 1959.
207. S. Kullback, Topics in Statistical Information Theory, Springer-Verlag, Berlin, 1987.
208. S. Kullback and R. A. Leibler, “On information and sufficiency,” Ann. Math. Stat., 22: 79-86, 1951.
209. J. F. Kurose and K. W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, 3rd ed., Addison Wesley, 2004.
210. E. Kushilevitz and N. Nisan, Communication Complexity, Cambridge University Press, 2006.
211. P.-W. Kwok and R. W. Yeung, “On the relation between linear dispersion and generic network code,” 2006 IEEE Information Theory Workshop, Chengdu, China, Oct. 22-26, 2006.
212. H. J. Landau and H. O. Pollak, “Prolate spheroidal wave functions, Fourier analysis, and uncertainty-II,” Bell Sys. Tech. Journal, 40: 65-84, 1961.
213. H. J. Landau and H. O. Pollak, “Prolate spheroidal wave functions, Fourier analysis, and uncertainty-III,” Bell Sys. Tech. Journal, 41: 1295-1336, 1962.
214. M. Langberg, A. Sprintson, and J. Bruck, “The encoding complexity of network coding,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2386-2397, 2006.
215. G. G. Langdon, “An introduction to arithmetic coding,” IBM J. Res. Devel., 28: 135-149, 1984.
216. S. L. Lauritzen, Graphical Models, Oxford Science Publications, Oxford, 1996.
217. J. Li, P. A. Chou, and C. Zhang, “Mutualcast: An efficient mechanism for one-to-many content distribution,” ACM SIGCOMM Asia Workshop, Beijing, China, Apr. 11-13, 2005.
218. M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity and Its Applications, 2nd ed., Springer, New York, 1997.
219. S.-Y. R. Li, Algebraic Switching Theory and Broadband Applications, Academic Press, 2000.
220. S.-Y. R. Li and S. T. Ho, “Ring-theoretic foundation of convolutional network coding,” NetCod 2008, Hong Kong, Jan. 3-4, 2008.
221. S.-Y. R. Li and R. W. Yeung, “On convolutional network coding,” 2006 IEEE International Symposium on Information Theory, Seattle, WA, Jul. 9-14, 2006.
222. S.-Y. R. Li, R. W. Yeung and N. Cai, “Linear network coding,” IEEE Trans. Info. Theory, IT-49: 371-381, 2003.
223. Z. Li, B. Li, and L. C. Lau, “On achieving optimal multicast throughput in undirected networks,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2410-2424, 2006.
224. X.-B. Liang, “Matrix games in the multicast networks: maximum information flows with network switching,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2433-2466, 2006.
225. E. H. Lieb and M. B. Ruskai, “Proof of the strong subadditivity of quantum-mechanical entropy,” J. Math. Phys., 14: 1938-1941, 1973.
226. S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983, 2nd ed., 2004.
227. N. Linden and A. Winter, “A new inequality for the von Neumann entropy,” Comm. Math. Phys., 259: 129-138, 2005.
228. T. Linder, V. Tarokh, and K. Zeger, “Existence of optimal codes for infinite source alphabets,” IEEE Trans. Info. Theory, IT-43: 2026-2028, 1997.
229. R. Lněnička, “On the tightness of the Zhang-Yeung inequality for Gaussian vectors,” Comm. Info. and Syst., 6: 41-46, 2003.
230. L. Lovász, “On the Shannon capacity of a graph,” IEEE Trans. Info. Theory, IT-25: 1-7, 1979.
231. D. S. Lun, N. Ratnakar, M. Médard, R. Koetter, D. R. Karger, T. Ho, E. Ahmed, and F. Zhao, “Minimum-cost multicast over coded packet networks,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2608-2623, 2006.
232. D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Info. Theory, IT-45: 399-431, Mar. 1999.
233. D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
234. K. Makarychev, Y. Makarychev, A. Romashchenko, and N. Vereshchagin, “A new class of non-Shannon-type inequalities for entropies,” Comm. Info. and Syst., 2: 147-166, 2002.
235. F. M. Malvestuto, “A unique formal system for binary decompositions of database relations, probability distributions, and graphs,” Info. Sci., 59: 21-52, 1992; with Comment by F. M. Malvestuto and M. Studený, Info. Sci., 63: 1-2, 1992.
236. M. Mansuripur, Introduction to Information Theory, Prentice-Hall, Englewood Cliffs, New Jersey, 1987.
237. H. Marko, “The bidirectional communication theory – A generalization of information theory,” IEEE Trans. Comm., 21: 1345-1351, 1973.
238. A. W. Marshall and I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, New York, 1979.
239. K. Marton, “Error exponent for source coding with a fidelity criterion,” IEEE Trans. Info. Theory, IT-20: 197-199, 1974.
240. K. Marton, “A simple proof of the blowing-up lemma,” IEEE Trans. Info. Theory, IT-32: 445-446, 1986.
241. J. L. Massey, “Shift-register synthesis and BCH decoding,” IEEE Trans. Info. Theory, IT-15: 122-127, 1969.
242. J. L. Massey, “Causality, feedback and directed information,” in Proc. 1990 Int. Symp. on Info. Theory and Its Applications, 303-305, 1990.
243. J. L. Massey, “Contemporary cryptology: An introduction,” in Contemporary Cryptology: The Science of Information Integrity, G. J. Simmons, Ed., IEEE Press, Piscataway, New Jersey, 1992.
244. J. L. Massey, “Conservation of mutual and directed information,” 2005 IEEE International Symposium on Information Theory, Adelaide, South Australia, Australia, Sept. 4-9, 2005.
245. A. M. Mathai and P. N. Rathie, Basic Concepts in Information Theory and Statistics: Axiomatic Foundations and Applications, Wiley, New York, 1975.
246. F. Matúš, “Probabilistic conditional independence structures and matroid theory: Background,” Int. J. of General Syst., 22: 185-196, 1994.
247. F. Matúš, “Conditional independences among four random variables II,” Combinatorics, Probability and Computing, 4: 407-417, 1995.
248. F. Matúš, “Conditional independences among four random variables III: Final conclusion,” Combinatorics, Probability and Computing, 8: 269-276, 1999.
249. F. Matúš, “Inequalities for Shannon entropies and adhesivity of polymatroids,” 9th Canadian Workshop on Information Theory, McGill University, Montréal, Québec, Canada, 2005.
250. F. Matúš, “Piecewise linear conditional information inequalities,” IEEE Trans. Info. Theory, IT-52: 236-238, 2006.
251. F. Matúš, “Two constructions on limits of entropy functions,” IEEE Trans. Info. Theory, IT-53: 320-330, 2007.
252. F. Matúš, “Infinitely many information inequalities,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
253. F. Matúš and M. Studený, “Conditional independences among four random variables I,” Combinatorics, Probability and Computing, 4: 269-278, 1995.
254. U. M. Maurer, “Secret key agreement by public discussion from common information,” IEEE Trans. Info. Theory, IT-39: 733-742, 1993.
255. R. J. McEliece, The Theory of Information and Coding, Addison-Wesley, Reading, Massachusetts, 1977.
256. R. J. McEliece, Finite Fields for Computer Scientists and Engineers, Kluwer Academic Publishers, 1987.
257. W. J. McGill, “Multivariate information transmission,” Transactions PGIT, 1954 Symposium on Information Theory, PGIT-4: 93-111, 1954.
258. B. McMillan, “The basic theorems of information theory,” Ann. Math. Stat., 24: 196-219, 1953.
259. B. McMillan, “Two inequalities implied by unique decipherability,” IRE Trans. Info. Theory, 2: 115-116, 1956.
260. M. Médard, M. Effros, T. Ho, and D. Karger, “On coding for nonmulticast networks,” 41st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2003.
261. K. Menger, “Zur allgemeinen Kurventheorie,” Fund. Math., 10: 96-115, 1927.
262. M. Mitzenmacher, “Digital fountains: A survey and look forward,” 2004 IEEE Information Theory Workshop, San Antonio, TX, Oct. 24-29, 2004.
263. P. Moulin and J. A. O’Sullivan, “Information-theoretic analysis of information hiding,” IEEE Trans. Info. Theory, IT-49: 563-593, 2003.
264. S. C. Moy, “Generalization of the Shannon-McMillan theorem,” Pacific J. Math., 11: 705-714, 1961.
265. Network Coding Homepage.
266. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000.
267. H. Nyquist, “Certain factors affecting telegraph speed,” Bell Sys. Tech. Journal, 3: 324, 1924.
268. J. K. Omura, “A coding theorem for discrete-time sources,” IEEE Trans. Info. Theory, IT-19: 490-498, 1973.
269. J. M. Ooi, Coding for Channels with Feedback, Kluwer Academic Publishers, Boston, 1998.
270. E. Ordentlich and M. J. Weinberger, “A distribution dependent refinement of Pinsker’s inequality,” IEEE Trans. Info. Theory, IT-51: 1836-1840, 2005.
271. A. Orlitsky, “Worst-case interactive communication I: Two messages are almost optimal,” IEEE Trans. Info. Theory, IT-36: 1111-1126, 1990.
272. A. Orlitsky, “Worst-case interactive communication II: Two messages are not optimal,” IEEE Trans. Info. Theory, IT-37: 995-1005, 1991.
273. A. Orlitsky, N. P. Santhanam, and J. Zhang, “Universal compression of memoryless sources over unknown alphabets,” IEEE Trans. Info. Theory, IT-50: 1469-1481, 2004.
274. D. S. Ornstein, “Bernoulli shifts with the same entropy are isomorphic,” Advances in Math., 4: 337-352, 1970.
275. J. G. Oxley, Matroid Theory, Oxford University Press, Oxford, 1992.
276. C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, Englewood Cliffs, New Jersey, 1982.
277. A. Papoulis, Probability, Random Variables and Stochastic Processes, 2nd ed., McGraw-Hill, New York, 1984.
278. J. Pearl, Probabilistic Reasoning in Intelligent Systems, Morgan Kaufman, San Mateo, California, 1988.
279. A. Perez, “Extensions of Shannon-McMillan’s limit theorem to more general stochastic processes,” in Trans. Third Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, 545-574, Prague, 1964.
280. J. R. Pierce, An Introduction to Information Theory: Symbols, Signals and Noise, 2nd rev. ed., Dover, New York, 1980.
281. J. T. Pinkston, “An application of rate-distortion theory to a converse to the coding theorem,” IEEE Trans. Info. Theory, IT-15: 66-71, 1969.
282. M. S. Pinsker, “Calculation of the rate of information transmission of stationary random processes and the capacity of stationary channels,” Dokl. Akad. Nauk SSSR, 111: 753-756 (in Russian).
283. M. S. Pinsker, Information and Information Stability of Random Variables and Processes, vol. 7 of the series Problemy Peredači Informacii, AN SSSR, Moscow, 1960 (in Russian). English translation: Holden-Day, San Francisco, 1964.
284. M. S. Pinsker, “Gaussian sources,” Prob. Info. Trans., 14: 59-100, 1963 (in Russian).
285. N. Pippenger, “What are the laws of information theory?” 1986 Special Problems on Communication and Computation Conference, Palo Alto, CA, Sept. 3-5, 1986.
286. N. Pippenger, “The inequalities of quantum information theory,” IEEE Trans. Info. Theory, IT-49: 773-789, 2003.
287. C. Preston, Random Fields, Springer-Verlag, New York, 1974.
288. M. O. Rabin, “Efficient dispersal of information for security, load balancing, and fault-tolerance,” J. ACM, 36: 335-348, 1989.
289. A. Rasala Lehman, “Network coding,” Ph.D. thesis, MIT, Dept. of Elec. Engr. and Comp. Sci., Feb. 2005.
290. A. Rasala Lehman and E. Lehman, “Complexity classification of network information flow problems,” 41st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2003.
291. I. S. Reed and G. Solomon, “Polynomial codes over certain finite fields,” SIAM Journal Appl. Math., 8: 300-304, 1960.
292. A. Rényi, Foundations of Probability, Holden-Day, San Francisco, 1970.
293. F. M. Reza, An Introduction to Information Theory, McGraw-Hill, New York, 1961.
294. S. Riis, “Linear versus nonlinear boolean functions in network flows,” 38th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, Mar. 17-19, 2004.
295. J. Rissanen, “Generalized Kraft inequality and arithmetic coding,” IBM J. Res. Devel., 20: 198, 1976.
296. J. Rissanen, “Universal coding, information, prediction, and estimation,” IEEE Trans. Info. Theory, IT-30: 629-636, 1984.
297. J. R. Roche, “Distributed information storage,” Ph.D. thesis, Stanford University, Mar. 1992.
298. J. R. Roche, A. Dembo, and A. Nobel, “Distributed information storage,” 1988 IEEE International Symposium on Information Theory, Kobe, Japan, Jun. 1988.
299. J. R. Roche, R. W. Yeung, and K. P. Hau, “Symmetrical multilevel diversity coding,” IEEE Trans. Info. Theory, IT-43: 1059-1064, 1997.
300. R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, New Jersey, 1970.
301. S. Roman, Coding and Information Theory, Springer-Verlag, New York, 1992.
302. A. Romashchenko, A. Shen, and N. Vereshchagin, “Combinatorial interpretation of Kolmogorov complexity,” Electronic Colloquium on Computational Complexity, vol. 7, 2000.
303. K. Rose, “A mapping approach to rate-distortion computation and analysis,” IEEE Trans. Info. Theory, IT-40: 1939-1952, 1994.
304. W. Rudin, Principles of Mathematical Analysis, 3rd ed., McGraw Hill, New York, 1976.
305. W. Rudin, Real and Complex Analysis, 3rd ed., McGraw Hill, New York, 1987.
306. F. Ruskey, “A survey of Venn diagrams.”
307. S. A. Savari, “Redundancy of the Lempel-Ziv incremental parsing rule,” IEEE Trans. Info. Theory, IT-43: 9-21, 1997.
308. S. A. Savari and R. G. Gallager, “Generalized Tunstall codes for sources with memory,” IEEE Trans. Info. Theory, IT-43: 658-668, 1997.
309. S. Shamai and I. Sason, “Variations on the Gallager bounds, connections, and applications,” IEEE Trans. Info. Theory, IT-48: 3029-3051, 2002.
310. S. Shamai and S. Verdú, “The empirical distribution of good codes,” IEEE Trans. Info. Theory, IT-43: 836-846, 1997.
311. S. Shamai, S. Verdú, and R. Zamir, “Systematic lossy source/channel coding,” IEEE Trans. Info. Theory, IT-44: 564-579, 1998.
312. A. Shamir, “How to share a secret,” Comm. ACM, 22: 612-613, 1979.
313. C. E. Shannon, “A Mathematical Theory of Communication,” Bell Sys. Tech. Journal, 27: 379-423, 623-656, 1948.
314. C. E. Shannon, “Communication theory of secrecy systems,” Bell Sys. Tech. Journal, 28: 656-715, 1949.
315. C. E. Shannon, “Communication in the presence of noise,” Proc. IRE, 37: 10-21, 1949.
316. C. E. Shannon, “Prediction and entropy of printed English,” Bell Sys. Tech. Journal, 30: 50-64, 1951.
317. C. E. Shannon, “The zero-error capacity of a noisy channel,” IRE Trans. Info. Theory, IT-2: 8-19, 1956.
318. C. E. Shannon, “Coding theorems for a discrete source with a fidelity criterion,” IRE National Convention Record, Part 4, 142-163, 1959.
319. C. E. Shannon, R. G. Gallager, and E. R. Berlekamp, “Lower bounds to error probability for coding in discrete memoryless channels,” Info. Contr., 10: 65-103 (Part I), 522-552 (Part II), 1967.
320. C. E. Shannon and W. W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, Urbana, Illinois, 1949.
321. A. Shen, “Multisource information theory,” Electronic Colloquium on Computational Complexity, Report No. 6, 2006.
322. P. C. Shields, The Ergodic Theory of Discrete Sample Paths, American Math. Soc., Providence, Rhode Island, 1996.
323. J. E. Shore and R. W. Johnson, “Axiomatic derivation of the principle of maximum entropy and the principle of minimum cross-entropy,” IEEE Trans. Info. Theory, IT-26: 26-37, 1980.
324. S. Ihara, Information Theory for Continuous Systems, World Scientific, Singapore, 1993.
325. M. Simonnard, Linear Programming, translated by William S. Jewell, Prentice-Hall, Englewood Cliffs, New Jersey, 1966.
326. D. Slepian, Ed., Key Papers in the Development of Information Theory, IEEE Press, New York, 1974.
327. D. Slepian and H. O. Pollak, “Prolate spheroidal wave functions, Fourier analysis, and uncertainty-I,” Bell Sys. Tech. Journal, 40: 43-64, 1961.
328. D. Slepian and J. K. Wolf, “Noiseless coding of correlated information sources,” IEEE Trans. Info. Theory, IT-19: 471-480, 1973.
329. N. J. A. Sloane and A. D. Wyner, Ed., Claude Elwood Shannon Collected Papers, IEEE Press, New York, 1993.
330. L. Song, R. W. Yeung and N. Cai, “Zero-error network coding for acyclic networks,” IEEE Trans. Info. Theory, IT-49: 3129-3139, 2003.
331. L. Song, R. W. Yeung and N. Cai, “A separation theorem for single-source network coding,” IEEE Trans. Info. Theory, IT-52: 1861-1871, 2006.
332. F. Spitzer, “Random fields and interacting particle systems,” M. A. A. Summer Seminar Notes, 1971.
333. D. R. Stinson, “An explication of secret sharing schemes,” Designs, Codes and Cryptography, 2: 357-390, 1992.
334. D. R. Stinson, “New general lower bounds on the information rate of secret sharing schemes,” in Adv. in Cryptology – CRYPTO ’92, Lecture Notes in Comput. Sci., vol. 740, 168-182, 1993.
335. M. Studený, “Multiinformation and the problem of characterization of conditional-independence relations,” Prob. Contr. Info. Theory, 18: 3-16, 1989.
336. W. Szpankowski, “Asymptotic average redundancy of Huffman (and other) block codes,” IEEE Trans. Info. Theory, IT-46: 2434-2443, 2000.
337. M. Tan, R. W. Yeung, and S.-T. Ho, “A unified framework for linear network codes,” NetCod 2008, Hong Kong, Jan. 3-4, 2008.
338. I. J. Taneja, Generalized Information Measures and Their Applications.
339. S. Tatikonda and S. Mitter, “Channel coding with feedback,” 38th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct. 2000.
340. İ. E. Telatar, “Capacity of multi-antenna Gaussian channels,” Euro. Trans. Telecom., 10: 585-595, 1999.
341. D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, 2005.
342. D. N. C. Tse and S. V. Hanly, “Linear multiuser receivers: effective interference, effective bandwidth and user capacity,” IEEE Trans. Info. Theory, IT-45: 641-657, 1999.
343. A. L. Toledo and X. Wang, “Efficient multipath in sensor networks using diffusion and network coding,” 40th Annual Conference on Information Sciences and Systems, Princeton University, Princeton, NJ, Mar. 22-24, 2006.
344. F. Topsøe, “Information theoretical optimization techniques,” Kybernetika, 15: 8-27, 1979.
345. F. Topsøe, “Some inequalities for information divergence and related measures of discrimination,” IEEE Trans. Info. Theory, IT-46: 1602-1609, 2000.
346. F. Topsøe, “Basic concepts, identities and inequalities – the toolkit of information theory,” Entropy, 3: 162-190, 2001.
347. F. Topsøe, “Information theory at the service of science,” in Entropy, Search, Complexity (Bolyai Society Mathematical Studies), I. Csiszár, G. O. H. Katona, and G. Tardos, Ed., Springer, Berlin, 2007.
348. B. P. Tunstall, “Synthesis of noiseless compression codes,” Ph.D. dissertation, Georgia Institute of Technology, Atlanta, GA, 1967.
349. J. C. A. van der Lubbe, Information Theory, Cambridge University Press, Cambridge, 1997 (English translation).
350. E. C. van der Meulen, “A survey of multi-way channels in information theory: 1961-1976,” IEEE Trans. Info. Theory, IT-23: 1-37, 1977.
351. E. C. van der Meulen, “Some reflections on the interference channel,” in Communications and Cryptography: Two Sides of One Tapestry, R. E. Blahut, D. J. Costello, Jr., U. Maurer, and T. Mittelholzer, Ed., Kluwer Academic Publishers, Boston, 1994.
352. M. van Dijk, “On the information rate of perfect secret sharing schemes,” Designs, Codes and Cryptography, 6: 143-169, 1995.
353. M. van Dijk, “Secret key sharing and secret key generation,” Ph.D. thesis, Eindhoven University of Technology, Dec. 1997.
354. S. Vembu, S. Verdú, and Y. Steinberg, “The source-channel separation theorem revisited,” IEEE Trans. Info. Theory, IT-41: 44-54, 1995.
355. S. Verdú and T. S. Han, “A general formula for channel capacity,” IEEE Trans. Info. Theory, IT-40: 1147-1157, 1994.
356. S. Verdú and T. S. Han, “The role of the asymptotic equipartition property in noiseless source coding,” IEEE Trans. Info. Theory, IT-43: 847-857, 1997.
357. S. Verdú and S. W. McLaughlin, Ed., Information Theory: 50 Years of Discovery, IEEE Press, New York, 2000.
358. A. J. Viterbi, “Error bounds for convolutional codes and an asymptotically optimum decoding algorithm,” IEEE Trans. Info. Theory, IT-13: 260-269, 1967.
359. A. J. Viterbi and J. K. Omura, Principles of Digital Communications and Coding, McGraw-Hill, New York, 1979.
360. J. von Neumann, Mathematical Foundations of Quantum Mechanics, Princeton University Press, 1996 (translation from German edition, 1932).
361. A. Wald, “Sequential tests of statistical hypotheses,” Ann. Math. Stat., 16: 117-186, 1945.
362. H. Weingarten, Y. Steinberg, and S. Shamai, “The capacity region of the Gaussian multiple-input multiple-output broadcast channel,” IEEE Trans. Info. Theory, IT-52: 3936-3964, 2006.
363. T. Weissman, E. Ordentlich, G. Seroussi, S. Verdú, and M. J. Weinberger, “Universal discrete denoising: Known channel,” IEEE Trans. Info. Theory, IT-51: 5-28, 2005.
364. T. A. Welch, “A technique for high-performance data compression,” Computer, 17: 8-19, 1984.
365. P. M. Woodward, Probability and Information Theory with Applications to Radar, McGraw-Hill, New York, 1953.
366. S. B. Wicker, Error Control Systems for Digital Communication and Storage, Prentice-Hall, Englewood Cliffs, New Jersey, 1995.
367. S. B. Wicker and V. K. Bhargava, Ed., Reed-Solomon Codes and Their Applications, IEEE Press, Piscataway, New Jersey, 1994.
368. F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens, “The context-tree weighting method: basic properties,” IEEE Trans. Info. Theory, IT-41: 653-664, 1995.
369. E. T. Whittaker, “On the functions which are represented by the expansions of the interpolation theory,” Proc. Royal Soc. Edinburgh, Sec. A, 35: 181-194, 1915.
370. J. Wolfowitz, “The coding of messages subject to chance errors,” Illinois Journal of Mathematics, 1: 591-606, 1957.
371. J. Wolfowitz, Coding Theorems of Information Theory, Springer, Berlin-Heidelberg, 2nd ed., 1964, 3rd ed., 1978.
372. Y. Wu, K. Jain, and S.-Y. Kung, “A unification of network coding and tree-packing (routing) theorems,” joint special issue of IEEE Trans. Info. Theory and IEEE/ACM Trans. Networking on Networking and Information Theory, IT-52: 2398-2409, 2006.
373. A. D. Wyner, “The capacity of the band-limited Gaussian channel,” Bell Syst. Tech. J., 45: 359-371, 1966.
374. A. D. Wyner, “On source coding with side information at the decoder,” IEEE Trans. Info. Theory, IT-21: 294-300, 1975.
375. A. D. Wyner and J. Ziv, “The rate-distortion function for source coding with side information at the decoder,” IEEE Trans. Info. Theory, IT-22: 1-10, 1976.
376. X. Yan, R. W. Yeung, and Z. Zhang, “The capacity region for multi-source multi-sink network coding,” 2007 IEEE International Symposium on Information Theory, Nice, France, Jun. 24-29, 2007.
377. E.-h. Yang and J. C. Kieffer, “Efficient universal lossless data compression algorithms based on a greedy sequential grammar transform – Part one: Without context models,” IEEE Trans. Info. Theory, IT-46: 755-777, 2000.
378. S. Yang, R. W. Yeung, and Z. Zhang, “Weight properties of network codes,” submitted to Euro. Trans. Telecom.
379. A. C.-C. Yao, “Some complexity questions related to distributive computing (Preliminary Report),” The Eleventh Annual ACM Symposium on Theory of Computing, Atlanta, GA, Apr. 30-May 02, 1979.
380. C. Ye and R. W. Yeung, “Some basic properties of fix-free codes,” IEEE Trans. Info. Theory, IT-47: 72-87, 2001.
381. C. Ye and R. W. Yeung, “A simple upper bound on the redundancy of Huffman codes,” IEEE Trans. Info. Theory, IT-48: 2132-2138, 2002.
382. Z. Ye and T. Berger, Information Measures for Discrete Random Fields, Science Press, Beijing/New York, 1998.
383. R. W. Yeung, “A new outlook on Shannon’s information measures,” IEEE Trans. Info. Theory, IT-37: 466-474, 1991.
384. R. W. Yeung, “Local redundancy and progressive bounds on the redundancy of a Huffman code,” IEEE Trans. Info. Theory, IT-37: 687-691, 1991.
385. R. W. Yeung, “Multilevel diversity coding with distortion,” IEEE Trans. Info. Theory, IT-41: 412-422, 1995.
386. R. W. Yeung, “A framework for linear information inequalities,” IEEE Trans. Info. Theory, IT-43: 1924-1934, 1997.
387. R. W. Yeung, A First Course in Information Theory, Kluwer Academic/Plenum Publishers, New York, 2002.
388. R. W. Yeung, “Avalanche: A network coding analysis,” to appear in Comm. Info. and Syst.
389. R. W. Yeung and T. Berger, “Multi-way alternating minimization,” 1995 IEEE International Symposium on Information Theory, Whistler, British Columbia, Canada, Sept. 1995.
390. R. W. Yeung and N. Cai, “Network error correction, Part I: Basic concepts and upper bounds,” Comm. Info. and Syst., 6: 19-36, 2006.
391. R. W. Yeung and N. Cai, “On the optimality of a construction of secure network codes,” submitted to 2008 International Symposium on Information Theory.
392. R. W. Yeung, T. T. Lee and Z. Ye, “Information-theoretic characterization of conditional mutual independence and Markov random fields,” IEEE Trans. Info. Theory, IT-48: 1996-2011, 2002.
393. R. W. Yeung, S.-Y. R. Li, N. Cai, and Z. Zhang, “Network coding theory,” Foundations and Trends in Comm. and Info. Theory, vol. 2, nos. 4 and 5, 241-381, 2005.
394. R. W. Yeung and Y.-O. Yan, Information-Theoretic Inequality Prover (ITIP).
395. R. W. Yeung and Z. Zhang, “On symmetrical multilevel diversity coding,” IEEE Trans. Info. Theory, IT-45: 609-621, 1999.
396. R. W. Yeung and Z. Zhang, “Distributed source coding for satellite communications,” IEEE Trans. Info. Theory, IT-45: 1111-1120, 1999.
397. R. W. Yeung and Z. Zhang, “A class of non-Shannon-type information inequalities and their applications,” Comm. Info. and Syst., 1: 87-100, 2001.
398. Z. Zhang, “On a new non-Shannon-type information inequality,” Comm. Info. and Syst., 3: 47-60, 2003.
399. Z. Zhang, “Linear network error correction codes in packet networks,” submitted to IEEE Trans. Info. Theory.
400. Z. Zhang and R. W. Yeung, “A non-Shannon-type conditional inequality of information quantities,” IEEE Trans. Info. Theory, IT-43: 1982-1986, 1997.
401. Z. Zhang and R. W. Yeung, “On characterization of entropy function via information inequalities,” IEEE Trans. Info. Theory, IT-44: 1440-1452, 1998.
402. L. Zheng and D. N. C. Tse, “Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple-antenna channel,” IEEE Trans. Info. Theory, IT-48: 359-383, 2002.
403. S. Zimmerman, “An optimal search procedure,” Am. Math. Monthly, 66: 8, 690-693, 1959.
404. K. Sh. Zigangirov, “Number of correctable errors for transmission over a binary symmetrical channel with feedback,” Prob. Info. Trans., 12: 85-97, 1976. Translated from Problemi Peredachi Informatsii, 12: 3-19 (in Russian).
405. J. Ziv and A. Lempel, “A universal algorithm for sequential data compression,” IEEE Trans. Info. Theory, IT-23: 337-343, 1977.
406. J. Ziv and A. Lempel, “Compression of individual sequences via variable-rate coding,” IEEE Trans. Info. Theory, IT-24: 530-536, 1978.
Index

a posterior distribution, 214
Abel, N.H., 390
Abelian group, 390, 402, 406, 407
Abrahams, J., 100, 541
Abramson, N., 80, 541
absolutely integrable, 241
abstract algebra, 387, 490
Abu-Mostafa, Y.S., 541
acyclic network, X, 435–479, 485, 488, 502, 540, see also directed graph
Aczél, J., 541
additive colored Gaussian noise, 287
additive noise, worst, IX, 270, 289, 297
additive white Gaussian noise (AWGN), 280, 281
adjacent pair of channels, 436, 439, 440, 454, 457, 480, 492, 498
adjoint matrix, 502
Ahlswede, R., 98, 180, 419, 432, 434, 482, 504, 541
Ahmed, E., 551
algebraic coding, 440, 483
Algoet, P., 542
algorithm, 435, 456, 457, 466, 466, 472
  exponential-time, 460
  polynomial-time, X, 457, 458, 460, 466, 468, 482
almost everywhere (a.e.), 245
almost perfect reconstruction, 104, 106
alternating optimization algorithm, 212–214, 217
  convergence, 222
Amari, S., 50, 542
Anantharam, V., XIII, 542
ancestral order, 437, see upstream-to-downstream order
Anderson, J.B., 542
applied mathematics, 3
applied probability, 3
Argawal, A., 541
Arimoto, S., 166, 181, 228, 542, see also Blahut-Arimoto algorithms
arithmetic, 435
arithmetic mean, 151
ascendant, 94
Ash, R.B., 542
asymptotic equipartition property (AEP) for continuous random variables, VIII, 245, 245–248
  typical sequence, 246
  typical set, 246
  strong, see also strong asymptotic equipartition property
  weak, see also weak asymptotic equipartition property
asymptotically reliable communication, 140
atom of a field, 52
  weight of, 311
audio compact disc, 1
audio signal, 184
audio source, 382
autocorrelation function, 281, 285
auxiliary random variable, 370, 371, 383, 536
average distortion, 184, 185, 187, 201, 205, 211
  expected, 201
average input constraint, 268, 269, 296
average probability of error, 150, 180, 261
Ayanoglu, E., 420, 542
backbone network, 445
Balkenhol, B., 98, 541
Balli, H., 482, 542
bandlimited signal, 283, 284, 287
  orthonormal basis, 283, 284
bandpass filter, 282, 288
bandwidth, 281, 412, 477
  downlink, 417
Barbero, Á.I., 504, 542
Barron, A.R., 542
base field, 437–442, 446, 449, 451, 453–456, 459, 460, 468, 473, 474, 476, 481, 486, 487, 489, 492, 497, 500–502
basic inequalities, 27, 26–28, 59, 131, 313, 324, 339–361, 372, 373, 381–383, 385, 387, 402
Bassalygo, L.A., 542
Bayesian network, 11, 152, 351, 352
  intersection, 11
BCH (Bose-Chaudhuri-Hocquenghem) code, 166
Beethoven's violin concerto, 1
Bell Telephone Laboratories, 2
Berger, T., XIII, 78, 134, 209, 210, 228, 542, 547, 558
Berlekamp, E.R., 542, 555
Berrou, C., 166, 542
Berstel, J., 542
Bertsekas, D., 542
beta distribution, 255
Bhargava, V.K., XIII, 557
biased coin, 81
binary arbitrarily varying channel, 180
binary covering radius, 208
binary entropy function, 13, 33, 34, 197
binary erasure channel, 148, 171
binary symmetric channel (BSC), 137, 146, 176, 177, 180
binomial formula, 306, 355, 356
bit, 3, 13, 424
Blackwell, D., 542
Blahut, R.E., XIII, 181, 210, 228, 543, 557
Blahut-Arimoto algorithms, VIII, 149, 200, 211–228
  channel capacity, 214–218, 227
  convergence, 225–226
  rate-distortion function, 219–222, 227
Blakley, G.R., 543
block code, 104
  linear, 440
block length, 104, 139, 151, 183, 211, 287, 427
Blundo, C., 80, 543
Bollobás, B., 543
Bondy, J.A., 314, 543
Borade, S., 434, 543
bottleneck, 423
boundary condition, 440, 485, 493, 495
brain, 382
branching probabilities, 94
Breiman, L., 112, 542, 543, see also Shannon-McMillan-Breiman theorem
broadcast, 415
  constraint, 416
broadcast constraint, 415
Bruck, J., 551
BSC, see binary symmetric channel
Burrows, M., 543
butterfly network, X, 412–415, 419, 425, 460, 480
Byers, J., 543
cable cut, 468
Cai, N., XIII, 117, 419, 420, 432, 434, 482, 483, 504, 540, 541, 543, 551, 556, 559
Caire, G., 543
Calderbank, R., 543
Capocelli, R.M., 79, 543
Cartesian product, 212
cascade of channels, 177
Cauchy distribution, 255
causality, 177, 427, 431, 492, 495
CDF, see cumulative distribution function
Cesàro mean, 39
chain rule for
  conditional entropy, 21
  conditional mutual information, 22
  differential entropy, 244
  entropy, 21, 29
  mutual information, 22, 30
Chan, A.H., XIII
Chan, T.H., XIII, 12, 131, 384, 385, 408, 540, 543, 544
Chan, V.W.S., XIII
channel capacity
  computation of, VIII
channel characteristics, 137
channel code, 137, 151, 175
  probability of error, 138
  rate, 140, 151
  with feedback, 167, 178
  without feedback, 149
channel coding, 411, 415
channel coding theorem, VIII, 134
  for continuous memoryless channel, 261, 260–261
    achievability, 265–269
    converse, 262–265
    random code, 265
  for discrete memoryless channel, 3, 50, 151, 149–151, 181
    achievability, 149, 157–163
    converse, 149, 171
    random code, 159
    strong converse, 157, 181
channel failure, X, 468
channel with memory, 172, 176, 178, 179
Charikar, M., 541
Chatin, G.J., 544
Chekuri, C., 544
Chernoff bound, 117
Chernoff, H., 544
Cheung, K.W., XIII
Chiang, M., 544
child, 94
Chinese University of Hong Kong, The, XIII
Chiu, D.M., 544
Chou, P.A., 420, 482, 544, 549, 551
chronological order, 428
Chuang, I.L., 386, 553
Chung, K.L., 112, 544
cipher text, 71, 78
classical entropy, 385
closure, of a group, 388
CMC, see continuous memoryless channel
code alphabet, 82
code tree, 86, 86–97, 100
  pruning of, 95
codebook, 149, 187, 202, 260, 267
codeword, 104, 149, 187, 202, 260
coding session, 428, 449
  transaction, 428, 433
coding theory, 166
coefficient of correlation, 255
coloring filter, 297
column space, 334
combinatorics, 127
commodity, 411, 425
commodity flow, 419
communication, 420
communication channel, 281, 412
  capacity, 412
  error-free, 415
  receiver, 282
communication complexity, 420
communication engineer, 3
communication engineering, 137, 234, 270
communication system, 1, 3, 240, 257, 411, 416
  design of, 415
  discrete-time, 142
  practical, 137, 166
  Shannon's model, 2
communication theory, 3
commutative, 390, 391, 491
commutative group, see Abelian group
compact set, 146, 225
complex conjugate, 280
complex number, 490
composite function, 391
compound source, 209
computation, 420
computational
  complexity, 458, 460, 466, 467, 473, 481
  procedure, 343, 346
  resource, 477
computer communication, 166
computer science, 13, 420
computer storage systems, 166
concavity, 46, 49, 67, 70, 222–225, 259, 293
conditional branching distribution, 94
conditional distribution, 240
conditional entropy, 7, 14
conditional independence, IX, 8, 321, 336–337, 351
  elemental forms, 358
  structure of, 12, 385
conditional mutual independence, 300–309
conditional mutual information, 7, 16, 19, 242
conditional probability of error, 150, 261
configuration, 468, 470
constant sequence, 187
constant term, 490, 492
continuous extension, 19
continuous memoryless channel (CMC), IX, 258, 258–269, 296, 297
  achievable rate, 261
  average input constraint, 259
  capacity, 259, 261
continuous partial derivatives, 212, 217, 224, 225
continuous-valued channels, IX, 257–294
convergence, 490
  in L1, 19
  in L2, 20, 45
  in divergence, 26, 48
  in probability, 101, 102, 108, 266
  in variational distance, 19, 20, 26, 45, 48, 49
convex closure, 398, 401
convex cone, 366
convexity, 24, 46, 47, 69, 188, 191, 193, 201, 212, 216, 221, 326, 337, 368, 515
convolution, 282
convolutional code, 166
convolutional coding, 490
convolutional multicast, 499, 500, 502, 504
convolutional network code, X, 433, 492, 488–502, 504
  decoding, 498–502
  decoding delay, 499, 504
  decoding kernel, 499, 502, 504
  multicast, 499
correlation matrix, 230, 232, 256, 289–293
coset
  left, 391, 392–394
  right, 391
Costa, M.H.M., 544
Costello, Jr., D.J., 543, 551, 557
countable alphabet, 33, 35, 41, 48, 110, 112, 132–135, 230
covariance, 230
covariance matrix, 230, 232, 233, 251, 254–256, 277
Cover, T.M., XIII, 228, 297, 542, 544, 545
cross-correlation function, 280, 281, 284
cross-spectral density, 281
crossover probability, 137, 177, 180, 198
cryptography, 483
Csiszár, I., XIII, 50, 80, 135, 228, 338, 541, 545, 548, 556
cumulative distribution function (CDF), 229
  conditional, 230
  joint, 230, 274
  marginal, 230
cyclic network, X, 485–502, 504, see also directed graph
  delay-free, 485–488
D-adic distribution, 88, 93
D-ary source code, 82
D-it, 13, 84, 97
Dmax, 186, 191, 193, 197
dmax, 201
Dantzig, G.B., 545
Daróczy, Z., 541
data block, 474
  asynchronous transmission, 475
  encoded, 474, 475
data communication, 167, 411
data packet, 411, 477, 482
data processing theorem, 31, 73, 156, 262, 349, 382
Davisson, L.D., 545
Dawid, A.P., 545
De Santis, A., 79, 80, 543
De Simone, R., 80, 543
decoder, 104, 159, 202
decoding function, 110, 149, 167, 187, 260, 428
decorrelation, 233, 278
deep space communication, 166
delay
  processing, 435, 488
  propagation, 435, 485, 488
  transmission, 435, 488, 492
Dembo, A., 420, 554
Dempster, A.P., 545
denominator, 490
dependency graph, 152–153, 168, 179, 180
  directed edge, 152
  dotted edge, 152
  parent node, 152
  solid edge, 152
descendant, 87, 95
destination, 2
determinant, 231, 239, 451
deterministic distribution, 48
diagonal element, 232–234, 278, 279, 289, 296, 498
diagonal matrix, 232, 234, 240, 256, 279, 296, 501
diagonalization, 232, 233, 234, 239, 278, 296
differential entropy, VIII, 235, 229–256, 271, 385
  conditional, 240, 257
  joint, 238
  scaling, 237–239
  translation, 237–239, 271
Diggavi, S.N., 297, 545
digital, 2
digital communication system, XI, 3
directed cycle, 435, 436, 485, 486, 488, 489
directed graph, 412, 421
  acyclic, 435, 436
  cut, 422
    capacity of, 423
  cyclic, 436, 485
  directed cycle, 435, 436
  directed path, 436
  edge, 412, 421
  max-flow, 422
  min-cut, 423
  node, 412, 421
  non-source node, 421
  rate constraint, 422, 423
  sink node, 411, 422, 424
  source node, 411, 421, 424
directed network, 436, 485, see also directed graph
directed path, 436, 453, 463
  longest, 436
directional derivative, 224, 225
discrete alphabet, 140, 142
discrete channel, 140–145
  noise variable, 141, 142
discrete memoryless channel (DMC), 140–149, 167, 176, 177, 179, 211, 257, 287, 434
  achievable rate, 151
  capacity, 3, 145, 137–181, 211
  computation of, 149, 181, 214–218, 222, 227
  feedback, 166–172, 176, 178, 434
  generic noise variable, 144, 145, 258
  symmetric, 177
discrete-time continuous channel, 257, 258
  noise variable, 257
discrete-time linear system, XI
discrete-time stochastic system, 142
disk array, 426, 508, 538
distortion measure, 183–210
  average, 184
  context dependent, 185
  Hamming, 185, 196, 199
  normalization, 185, 195
  single-letter, 184
  square-error, 185
distortion-rate function, 191
distributed source coding, 420, 540
divergence, 7, 20, 23, 23-26, 46, 48-50, 248, 248-249, 256, 378
  convexity of, 47
divergence inequality, 24, 26, 47, 215, 248, 378
diversity coding, 426, 509
DMC, see discrete memoryless channel
Dobrushin, R.L., 243, 542, 545
double infimum, 213, 221
double supremum, 212, 213, 216, 217
Dougherty, R., 385, 540, 545
duality theorem, 345
  dual, 345
  primal, 345
Dueck, G., 545
dummy message, 449
dummy node, 416, 417
dyadic distribution, 88
dyadic expansion, 235
ear drum, 382
east-west direction, 213
eavesdropper, 71
edge-disjoint paths, 437, 453, 457, 464, 465, 467, 479, 501
Edmonds, J., 481, 545
efficient source coding, 106–107
Effros, M., 482, 540, 548, 549, 553
Egner, S., 482, 549
eigenvalue, 232, 234, 255
eigenvector, 232, 255
electronic circuit, 492
elemental inequalities, 341, 339–341, 345, 361, 362
  α-inequalities, 353–357
  β-inequalities, 353–357
  minimality of, 353–357
Elias, P., 545
EM algorithm, 228
emotion, 1
empirical differential entropy, 246
empirical distribution, 111, 133
  joint, 133, 205
empirical entropy, 102, 104, 111, 113
encoder, 104, 159, 202
encoding function, 110, 149, 167, 187, 260, 428
Encyclopedia Britannica, 3, 543
energy constraint, 271
energy signal, 280
engineering, 3, 175
engineering tradeoff, 477
English, entropy rate of, 109
ensemble average, 108
entropic, 326, 365, 395
entropies, linear combination of, 12, 28, 80, 326, 341
entropy, 3, 7, 12, 19, 50, 271, 393, 417
  concavity of, 67
  relation with groups, 387–408
entropy bound, VIII, 84, 82–85, 88, 90, 93, 96, 97
  for prefix code, 93
entropy function, 326, 361, 366, 368, 372, 382, 385, 393
  continuity of, 49, 401
  discontinuity of, 48, 49
  group characterization, 396, 393–397
entropy rate, VIII, 7, 38, 38–41, 183
  of English, 109
entropy space, 326, 329, 341, 361, 393
Ephremides, A., XIII
equivalence relation, 312
erasure probability, 148
Erez, E., 504, 546
ergodic, 108
ergodic stationary source, 108, 112, 172
  entropy rate, 109
Estrin, D., 549
Euclidean distance, 20, 216, 369
Euclidean space, 326, 362
expectation-maximization (EM) algorithm, 228
expected distortion, 186, 219
  minimum, 187, 197
exponential distribution, 255
extreme direction, 365, 374, 380
facsimile, 1
fair bits, 106, 235, 411
  almost fair bits, 106, 107
fair coin, 81
Fan, B., 544
Fano's inequality, 34, 32–36, 50, 107, 156, 171, 174, 181, 431, 516, 519
  simplified version, 35
  tightness of, 48
Fano, R.M., 50, 181, 546
fault-tolerant data storage system, 381, 426
fault-tolerant network communication, 508
FCMI, see full conditional mutual independencies
Feder, M., 504, 546
feedback, 140, 176, 179–181
Feinstein, A., 181, 545, 546
Feldman, J., 546
Feller, W., 546
ferromagnetic material, 314, 321
field, 504, see also finite field
  real, 435, 460
field size, 456, 457, 459, 466, 468, 472, 473, 480, 481
field, in measure theory, 52
file, 474
finite alphabet, 18, 33, 35, 45, 49, 107, 110, 112, 122, 132, 133, 145, 159, 172, 184, 203, 211, 401, 412, 421
finite duration, 287
finite field, 435, 460, 504
  algebra, 435
  extension field, 488
finite group, 389, 387–408
finite resolution, 1
finite-dimensional maximization, 211
Fitingof, B.M., 546
fix-free code, 98
flow on a directed graph, 422
  conservation conditions, 422
  value of, 422
Fong, S.L., XIII, 481, 546
Ford, Jr., L.K., 546
Forney, Jr., G.D., 546
forward substitution, 491
fountain code, 477
Fourier transform, 280, 283, 284, 287
  inverse, 280, 285
Fragouli, C., 420, 504, 544, 546
Fraleigh, J.B., 546
Freiling, C., 385, 540, 545
frequency, 280
frequency band, 281, 287
frequency component, 282
frequency of error, 185
frequency response, 288
frequency spectrum, 281
Frey, B.J., 550
Fu, F., 74, 546
Fubini's theorem, 241
Fujishige, S., 360, 546
Fulkerson, D.K., 546
full conditional independence, IX
full conditional mutual independencies, 300, 309–313, 351
  axiomatization, 321
  image of, 310, 312
  set-theoretic characterization, 321
functional dependence, 336
fundamental inequality, VIII, 23, 85, 244, 248
fundamental limits, 3
Gács, P., 541, 544
Gallager, R.G., XIII, 166, 177, 181, 287, 297, 542, 546, 547, 555
Galois field, see finite field
gamma distribution, 255
Γn, 341–361
Γ*n, 326, 342, 361, 387
Γ̄*n, 365
  group characterization of, 398–401
Gargano, L., 79, 543
Gauss elimination, 468
Gaussian channel, 270
  bandlimited colored Gaussian channel, IX, 287–289, 297
    capacity, 289, 297
  bandlimited white Gaussian channel, IX, 280–287, 297
    capacity, 282, 297
  bandpass white Gaussian channel, 287, 288, 297
    capacity, 287
  correlated Gaussian channels, IX, 277–280, 293, 297
    capacity, 279, 289, 296
    noise variable, 277, 279
  memoryless Gaussian channel, IX, 270, 269–272, 282, 286, 288
    capacity, 270–272, 274, 295
  parallel Gaussian channels, IX, 272–277, 279, 297
    capacity, 277, 279, 296
    noise variable, 272
Gaussian distribution, 231, 236, 238, 286
  multivariate, 231, 233, 239, 251, 254, 256, 278, 285, 291, 385
  zero-mean, 280, 285
Gaussian noise, 270, 297
  independent, 297
  process, 280, 285, 287
  zero-mean, IX, 274, 277, 280, 287, 289–294, 296, 297
Ge, Y., 321, 547
generator matrix, 440
generic continuous channel, 258, 259, 270
generic discrete channel, 143, 158, 211, 214
generic message, 487
generic network code, 460, 460–468, 471, 480–482
  alternative definition, 462
  construction, 466
  simplified characterization, 480
  static, 470, 473, 481
    construction, 472
    transformation of, 481
geometric distribution, 38
Gersho, A., 547
Gitlin, R.D., 420, 542
Gkantsidis, C., 482, 547
Glavieux, A., 542
global encoding kernel, 440, 445, 448, 450, 451, 453, 454, 457, 459, 461, 462, 466, 467, 471, 472, 481, 482, 485, 486, 488, 492, 493, 497, 500, 502, 504
  general positions, 460, 466
global Markov property, 314, 320
Goldman, S., 547
Goldsmith, A., 547, 549
Golić, J.Dj., 369, 547
Golomb, S.W., 547
Govindan, R., 549
gradient, 224
Grant, A., 540, 544
graph theory, 314, 412, 415, 421, 422, 437
graphical models, VIII, 321
Gray, R.M., 544, 547
group, 388, 387–408, 540
  associativity, 388, 389–391, 393
  axioms of, 388
  closure, 390, 391, 393
  identity, 388, 390–394
  inverse, 388–391
  order of, 387, 389, 391
group inequalities, 387–401, 405, 408
group theory, IX, 135, 365
  relation with information theory, 387–408
group-characterizable entropy function, 396, 393–397
Guiasu, S., 547
Gutman, M., 546
Hadamard's inequality, 256
Hadamard, J., 547
Hagenauer, J., XIII
Hajek, B., XIII, 547
half-space, 329
Hammer, D., 386, 547
Hamming ball, 208
Hamming code, 166
Hamming distance, 49, 180
Hamming distortion measure, 185
Hamming, R.W., 547
Han, T.S., XIII, 47, 50, 80, 338, 369, 547, 548, 557
Hanly, S.V., 556
hard disk, 482
hardware failure, 468
Hardy, G.H., 49, 548
Harremoës, P., 548
Harvey, N., 548
Hassibi, B., 548
Hau, K.P., 420, 539, 548, 554
Heegard, C., 548
heuristic argument, IX, 282, 287
hiker, 213
Ho, S.-W., 50, 111, 134, 135, 548
Ho, S.T., XIII, 480, 481, 483, 504, 551, 556
Ho, S.W., XIII
Ho, T., 420, 482, 540, 548, 551, 553
Hocquenghem, A., 548, see also BCH code
home entertainment systems, 166
Horibe, Y., 549
Hu, G.D., 80, 549
Huang, J., 544
Huffman code, 88, 88–93
  expected length, 90, 92
  optimality of, 90
Huffman procedure, 88, 88–93
  dummy symbols, 89
Huffman, D.A., 100, 549
Hui, J.Y., 549
human factor, 1
Humboldt Foundation, Alexander von, XIII
hypergraph, 321
hyperplane, 328, 332, 336, 365, 380
Hyvarinen, L.P., 549
I, C.-L., 420, 542
I-Measure, VIII, 58, 51–80, 154, 299–321, 361, 368
  empty atom, 56
  Markov chain, 60–67, 74, 317–319
  Markov structures, 299–321
  negativity of, 59–60
  nonempty atom, 56
  uniqueness, 58, 63
  universal set, 53, 56
i.i.d. source, 104, 107, 111, 113, 184, 209, 211
  bivariate, 122, 124
Ibinson, B., 549
identity matrix, 447, 481, 498, 499
Ihara, S., 256, 297, 549
image, 184
imaginary channel, 437, 444, 453, 457, 458, 492, 493, 501
imperfect secrecy theorem, 71
implication problem, IX, 336–337, 351–353, 385
  involves only FCMI's, 312, 336
impulse response, 282, 296, 492
inclusion-exclusion formula, 55
  a variation of, 74
incomplete data, 228
incompressible, 106
independence bound for
  differential entropy, 245, 264
  entropy, 29, 106
independence of random variables, 7–12
  mutual, 8, 29, 30, 39, 45, 62, 78, 203, 278, 331, 350
  pairwise, 8, 45, 59, 363
indeterminate, 442, 450–455, 500, 501
inferior, 506, 514
infinite group, 389
infinitesimal perturbation, 460
Information Age, 3
information diagram, VIII, 61, 51–80, 347, 352, 372
  Markov chain, 63–67, 74, 153, 317–319
information expressions, 323
  canonical form, 326–329
    alternative, 338
    uniqueness, 327, 338
  nonlinear, 338
  symmetrical, 338
information identities, VIII, 28, 80, 323
  constrained, 332, 344–345
  unconstrained, 329
information inequalities, VIII, X, 28, 67, 323, 387, 401–405, 540
  constrained, 330–332, 344–345
    equivalence of, 333–335, 338
  framework for, IX, 323–338
  machine-proving, ITIP, 325, 347–350
  non-Shannon-type, IX, 28, 361–386
  Shannon-type, IX, 339–360
  symmetrical, 359
  unconstrained, 329, 343–344, 365, 388, 401
information looping, 486, 488
information rate-distortion function, 192, 202
  continuity of, 202
  properties of, 193
information source, 2, 38, 82, 183, 265, 411, 417, 428, 435
informational divergence, see divergence
Ingleton inequality, 385, 407
Ingleton, A.W., 549
input distribution, 145, 158, 211, 214, 220, 265
  strictly positive, 217, 227
input power, 274
input power allocation, 275, 277, 279, 288, 296
input power constraint, 270, 272, 277, 279, 282, 286, 287, 296
Intanagonwiwat, C., 549
intermediate value theorem, 33
internal node, 86, 86–97
  conditional entropy of, 94
inverse function, 490
invertible matrix, 279, 447, 498
Ising model, 314, 321
iterated integral, 241
iterative algorithm, 181, 210, 211, 220, 228
ITIP, IX, 347–350, 360, 369, 383, 385
  efficient implementation, 353
Jacquet, P., 549
Jaggi, S., 482, 549
Jaggi-Sanders algorithm, 457, 458–460, 480
Jain, K., 481, 482, 544, 549, 558
Jaynes, E.T., 50, 549
Jelinek, F., 549
Jensen's inequality, 201, 293
Jensen, J.L.W.V., 549
Jerohin, V.D., 208, 549
Jewell, W.S., 555
Jindal, N., 549
Johnsen, O., 98, 549
Johnson, R.W., 555
joint entropy, 14, 325
joint source-channel coding, 174, 175
Jones, G.A., 549
Jones, J.M., 549
Kakihara, Y., 549
Karger, D.R., 482, 540, 548, 551, 553
Karush, J., 99, 276, 550
Karush-Kuhn-Tucker (KKT) condition, 276
Katabi, D., 549
Katona, G.O.H., 548, 556
Katti, S., 549
Kawabata, T., 80, 321, 550
key, of a cryptosystem, 71, 78
Khachatrian, L., 98, 541
Khinchin, A.I., 550
Kieffer, J.C., 550, 558
Kindermann, R., 550
King, R., 544
Kleinberg, R., 548
Kobayashi, K., 547, 548
Koetter, R., XIII, 480, 482, 504, 548, 550, 551
Kolmogorov complexity, 386, 408
Kolmogorov, A.N., 256, 550
Körner, J., 80, 135, 338, 541, 545
Kraft inequality, VIII, 82, 84, 85, 87, 88, 92, 98, 99
Kraft, L.G., 550
Kramer, G., 550
Kschischang, F.R., 550
Kuhn, H.W., 276, 550, see also Karush-Kuhn-Tucker (KKT) condition
Kullback, S., 50, 550
Kullback-Leibler distance, see divergence
Kung, S.-Y., 481, 558
Kurose, J.F., 550
Kushilevitz, E., 550
Kwok, P.-W., 481, 550
L1-convergence, 19
L2-convergence, 20, 45
Lagrange multipliers, 217, 275
Lagrange's theorem, 393
Laird, N.M., 545
Landau, H.J., 550, 551
Langberg, M., 549, 551
Langdon, G.G., 551
Laplace distribution, 255
large scale content distribution, 474, 483
lattice theory, 80
Lau, L.C., 551
Laurent series, formal, 504
Lauritzen, S.L., 551
laws of information theory, IX, 324, 385
Le Boudec, J.-Y., 546
leaf, 86, 86–97
Lebesgue measure, 230, 248, 338, 365
Lebesgue-Stieltjes integration, 229
Lee, T.T., 321, 559
left-continuous, 259, 260, 267
Lehman, E., 540, 554
Leibler, R.A., 50, 550
Lempel, A., 559
Leong, B., 548
letter, 38
Leung, S.K., 544
Li, B., 551
Li, J., 551
Li, M., 551
Li, S.-Y.R., XIII, 419, 420, 432, 434, 482, 483, 504, 541, 551, 559
Li, Z., 551
Liang, X.-B., 551
Lieb, E.H., 385, 551
Lin, S., 551
Linden, N., 385, 549, 551
Linder, T., 100, 551
line of sight, 510
linear algebra, XI
linear broadcast, 444, 445, 448, 450, 454, 460, 462, 486
  multi-rate, 445, 481
  static, 470, 473
linear code, 166
linear constraints, 330, 333
linear dispersion, 444, 445, 448, 450, 455, 460, 462, 465, 481
  static, 470, 473
linear mapping
  nullity, 472
  pre-image, 472
linear multicast, 444, 445, 448, 450, 453, 459, 460, 462, 473, 476, 480, 482, 499, 502, 504
  construction, see Jaggi-Sanders algorithm
  random, 456
  static, 470, 473, 480
  transformation of, 459
linear network code, X, 437–479, 482, 488
  base field, 435
  dimension, 439
  global description, 440, 485
  implementation of, 448–449, 476
    overhead, 449
  linear broadcast, 444
  linear dispersion, 444
  linear multicast, 444
  local description, 439, 451, 485
  transformation of, 447–448
linear network coding, 421
  matrix approach, 482
  vector space approach, 482
linear programming, IX, 339, 341–345, 347, 360
linear span, 442
linear subspace, 331, 374, 380
linear time-invariant (LTI) system, 492, 504
  causality, 492
linear transformation, 443
  invertible, 328
linear transformation of random variables, 231–235, 255, 278
Littlewood, J.E., 49, 548
Lněnička, R., 385, 551
local area network (LAN), 445
local encoding kernel, 439, 440, 450, 451, 454, 455, 471, 473, 481, 482, 485, 486, 488, 489, 492, 493, 495, 497, 500, 502
local Markov property, 320
local redundancy, 96
local redundancy theorem, 96–97
Loeliger, H.-A., 550
log-optimal portfolio, 228
log-sum inequality, 25, 47, 226
logical network, 474
Lok, T.M., XIII
long division, 490, 491, 504
Longo, G., 542
lossless data compression, 3, 100
Lovász, L., 551
low-density parity-check (LDPC) code, 166
Luby, M., 543
Lun, D.S., 548, 551
MacKay, D.J.C., 166, 551, 552
majority vote, 139
majorization, 49
Makarychev, K., 385, 552
Makarychev, Y., 385, 552
Malkin, T., 546
Malvestuto, F.M., 321, 552
Mann, H.B., 542
Mansuripur, M., 552
mapping approach, 210
marginal distribution, 266, 294, 370, 371
Marko, H., 180, 552
Markov chain, IX, 7, 9, 30, 63, 66, 67, 70, 72, 74, 79, 143, 153, 171, 177, 179, 258, 262, 299, 300, 314, 317–319, 335, 347, 349, 370, 371, 382, 385
  information diagram, 63–67, 74, 153, 317–319
Markov graph, 314
Markov random field, IX, 67, 300, 314–316, 321
  hypergraph characterization of, 321
Markov star, 321
Markov structures, IX, 299–321
Markov subchain, 10
Marshall, A.W., 552
Marton, K., 552
Massey, J.L., XIII, 71, 180, 552
Mathai, A.M., 552
matroid, 385, 540
Matúš, F., 369, 385, 552
Maurer, U.M., 543, 552, 557
max-flow, 422, 431, 442, 444, 445, 459, 468, 476, 477
  collection of edges, 423
  collection of non-source nodes, 423
max-flow bound, X, 429, 421–431, 435, 459, 462, 482, 504, 505
  for linear network coding, 443
max-flow bounds, 505–508, 537
max-flow min-cut theorem, 423, 431, 443
maximal probability of error, 150, 158, 173, 180, 261
maximization, 276
maximum differential entropy, VIII, 249–251
maximum entropy, VIII, IX, 36–38, 256
maximum likelihood decoding, 180
Mazo, J., 420, 542
McEliece, R.J., 553
McGill, W.J., 80, 553
McLaughlin, S.W., 557
McMillan, B., 99, 112, 553, see also Shannon-McMillan-Breiman theorem
mean ergodic, 108
mean-square error, 185
meaningful information, 1
measure theory, 52, 108, 229, 256
membership table, 394
Menger, K., 553
Merhav, N., 546
message, 425
message pipeline, 435, 495, 499
message set, 137, 149, 150, 260, 261
method of types, 117, 135
microelectronics, 166
min-cut, 423, 431, 478
minimization, 343–345
minimum distance decoding, 180
Mittelholzer, T., 543, 557
Mitter, S., 556
Mitzenmacher, M., 543, 553
mixing random variable, 68, 264
modulo 2 addition, 389–390, 396, 397, 412, 433
modulo 2 arithmetic, 488
modulo 3 addition, 433
Mohan, S., 542
most likely sequence, 104
Moulin, P., 553
Moy, S.C., 553
µ*, see I-Measure
multi-dimensional direction, 213
multi-source multicast, 459
multi-source network coding, X, XI, 408, 418, 505–540
  achievable information rate region, 505
  LP bound, 515
  information rate region, 540
  insufficiency of linear coding, 540
  network code for acyclic network, 510
  source separation, 537, 539
multicast, X, 411, 412, 421, 435, 505–540
multigraph, 421
multilevel diversity coding, 508–509, 539
  symmetrical, 509, 539
multiple descriptions, 74
multiple unicasts, 414
multiterminal source coding, 135, 209
Munich University of Technology, XIII
Murty, U.S.R., 314, 543
mutual information, 7, 15, 219, 242, 256, 265
  between more than two random variables, 60
  concavity of, 70, 259, 265, 296
  convexity of, 69, 194
mutual typicality, 265–266
  typical sequence, 266
  typical set, 265
mutually independent information sources, 505–536
Médard, M., 480, 482, 504, 540, 548–551, 553
Nair, C., 548
Narayan, P., XIII, 117, 545
nat, 13, 237
natural disasters, 468
neighborhood, 246
neighboring node, 415
nerve impulse, 382
network code
  deployment, 468
  global description, 438
  global encoding mapping, 438, 479
  local description, 438
  local encoding mapping, 438, 479
network coding, XI, 505–540
  advantage of, 412–415, 419
  source separation, 417–418, 420
Network Coding Homepage, 420
network communication, 411, 412, 415, 417
network error correction, 482
network topology, 449, 468, 481
  unknown, 445, 456, 474, 482
network transfer matrix, 479
Ng, W.-Y., XIII
Nielsen, M.A., 386, 553
Nisan, N., 550
Nobel, A., 420, 554
noise energy, 270
noise power, 270, 274, 277, 281, 282, 286
noise process, 282, 285, 287, 288
noise source, 2
noise variable, 257, 272, 277, 279
noise vector, 278, 289, 293, 296, 297
noisy channel, 3, 137, 164
noisy environment, 1
non-decreasing, 259
non-increasing, 191
non-Shannon-type inequalities, IX, 28, 324, 325, 347, 361–386, 540
  constrained, 374–380
  unconstrained, 369–374, 388, 404, 540
nonlinear optimization, 211
nonnegative linear combination, 345, 346
nonnegative orthant, 326, 328, 337, 341, 362, 369
normal distribution, see Gaussian distribution
north-south direction, 213
null space, 333
numerator, 502
numerical computation, 149, 199, 211–228
Nyquist, H., 553
O'Sullivan, J.A., 553
off-diagonal element, 498
Olkin, I., 552
Omura, J.K., 553, 557 Ooi, J.M., 553 optimal coding scheme, 3 Ordentlich, E., 117, 553, 557 order of a node, 87 ordinate, 219 Orlitsky, A., XIII, 553 Ornstein, D.S., 553 orthogonal complement, 333, 334 orthogonal matrix, 232, 234 orthogonal transformation, 233, 234 orthonormal basis, 283, 284, 287 orthonormal set, 284 orthonormal system, 232 output channel, 412 overlay network, 474 Oxley, J.G., 553 P2P, see peer-to-peer network packet loss, 477 rate, 477 packet network, 477 Papadimitriou, C.H., 553 Papoulis, A., 80, 553 parallel channels, 177, 359 parity check, 167 partition, 312 pdf, see probability density function Pearl, J., 553 peer-to-peer (P2P) network, 474, 482 client, 474 neighboring node, 474–476 server, 474 tracker, 474 Peile, R.E., 547 perceived distortion, 184 Perez, A., 553 perfect secrecy theorem, Shannon’s, 71 permutation, 49, 390 Perrin, D., 542 physical network, 489 physical system, 3 Pierce, J.R., 554 Pinkston, J.T., 209, 554 Pinsker’s inequality, 26, 47, 49, 50, 117 Pinsker, M.S., 50, 256, 297, 542, 554 Pippenger, N., 385, 554 plain text, 71, 78 point-to-point channel, 137 noiseless, 411, 421, 434 capacity, 421 point-to-point communication network, 412, 421, 424, 435, 505 point-to-point communication system, 2, 417 Pollak, H.O., 550, 551, 556 Pólya, G., 49, 548 polymatroid, 356, 360, 385 polynomial, 338, 450, 451, 480, 490, 491, 500–502 equation, 451 degree, 451 nonzero, 451, 454 root, 451 polynomial ring, 451, 453, 500 positive definite matrix, 232, 254, 255, 290 positive semidefinite matrix, 232, 233, 234, 256 postal system, 411 power series, 490 expansion, 491 formal, 490 rational, 490, 492, 498, 501 ring of, 490, 495 power spectral density, 281, 282, 285, 287 prefix code, VIII, 82, 86, 86–97 entropy bound, VIII existence of, 87 expected length, 95 random coding, 99 redundancy, VIII, 93–97 prefix-free code, see prefix code Preston, C., 321, 554 prime number, 451 probabilistic coding, 110, 176 probability density function (pdf), 229, 241 conditional, 230, 240, 241 joint, 230, 290–293 probability distribution, IX rational, 398 strictly positive, 7, 11, 214, 359 factorization of, 12 with zero masses, 7, 214, 320 probability of error, 34, 104, 138, 157, 185, 192 probability theory, XI, 336, 381 product measure, 243 product source, 208, 359 projection, 514 prolate spheroidal wave functions, 287 pyramid, 341, 342, 345 quantization, 243 quantized samples, 184 quantum information theory, 386 quantum mechanics, 385 quasi-uniform structure, 395 asymptotic, 130, 395 Rabin, M.O., 420, 554 Radon-Nikodym derivative, 243 random code, 159, 202, 268, 539 random coding error exponent, 181 random linear combination, 474, 476 random network coding, 456, 473–478, 482 robustness, 477 random noise, 137 random variable, real, 36, 229–297 continuous, VIII, 229 discrete, 229 mixed, 229 second moment, 234, 286 rank function, 408 rank of a matrix, 333 full, 334, 335, 449 Rasala Lehman, A., 540, 554 rate constraint, 412, 421, 422, 424, 429 rate-distortion code, 187, 183–211 rate-distortion function, 191, 187–192, 202, 209, 211 binary source, 196 forward channel description, 208 reverse channel description, 198 computation of, VIII, 200, 210, 219–222, 227 normalization, 209 product source, 208, 359 properties of, 191 Shannon lower bound, 208 rate-distortion pair, 188, 200 rate-distortion region, 188, 191, 219 rate-distortion theorem, VIII, 134, 193, 192–200, 209 achievability, 202–207 converse, 200–202 random code, 202 relation with source coding theorem, 199
rate-distortion theory, VIII, 183–210 Rathie, P.N., 552 rational function, 490, 502 field of, 501 rational number, 189, 398 Ratnakar, N., 551 Ray-Chaudhuri, D.K., 543, see also BCH code Rayleigh’s energy theorem, 283 reaching probability, 94, 95 real number, 435 receiver, 2 reciprocal, 491 rectangular lattice, 314, 321 reduced code tree, 91 reduced probability set, 91 redundancy of prefix code, VIII, 93–97 of uniquely decodable code, 85 Reed, I.S., 554 Reed-Solomon code, 166 relative entropy, see divergence relative frequency, 113, 122 relay node, 416, 417 Rényi, A., 554 repetition code, 139 replication of information, 425 reproduction alphabet, 184, 196, 203 reproduction sequence, 183–185, 190, 203 reservoir, 277, 289 resultant flow, 422 Reza, F.M., 80, 554 right-continuous, 229 Riis, S., 540, 554 ring, 451, 453, 490, 502 commutative, 491 ring theory, 504 Rissanen, J., 554 Roche, J.R., 420, 539, 554 Rockafellar, R.T., 554 Rodriguez, P.R., 482, 547 Roman, S., 554 Romashchenko, A., 385, 386, 408, 547, 552, 555 Rose, K., 210, 555 Ross, K.W., 550 routing, 411, 411, 412, 425 row space, 333, 334 Rubin, D.B., 545 Rudin, W., 241, 555 Ruskai, M.B., 385, 551 Ruskey, F., 555 Russian, 80 Rustin, R., 545 sampling theorem, 282, 284, 287 bandpass, 287 sampling time, 285 Sanders, P., 482, 549, see also Jaggi-Sanders algorithm Santhanam, N.P., 553 Sason, I., 555 satellite communication, X, 412, 415–417, 419 satellite communication network, 508, 510 Savari, S.A., 550, 555 Scholtz, R.A., 547 Schur-concave function, 49 science, 3 science of information, the, 3 secret key cryptosystem, 71, 78 secret sharing, 79, 80, 349, 483 access structure, 79 information-theoretic bounds, 79, 350 participants, 79 secure network coding, 483 security level of cryptosystem, 71 self-information, 16 semi-graphoid, 352, 360 axioms of, 351 separation of network and channel coding, 434 source and channel coding, 140, 172–175, 209, 411 Seroussi, G., 117, 557 Servedio, R.A., 546 set function, 326 additive, 52, 74, 309 set identity, 55, 80, 305 set operations, 51, 52 set theory, VIII, 51 Shadbakht, S., 548 Shamai, S., XIII, 543, 555, 557 Shamir, A., 555 Shannon code, 93 Shannon’s information measures, VIII, 7, 12–18, 28, 51 continuity of, 18–20 discontinuity of, 20 elemental forms, 340, 358 irreducible, 339, 357 linear combination of, 323 reducible, 339, 357 set-theoretic structure of, see I-Measure Shannon’s papers, collection of, 3 Shannon, C.E., VII, 2, 50, 99, 104, 112, 181, 208, 209, 256, 297, 360, 385, 545, 555 Shannon-McMillan-Breiman theorem, VIII, 41, 108, 107–109, 173 Shannon-type identities constrained, 344–345 Shannon-type inequalities, IX, 324, 325, 339–360, 369, 540 constrained, 344–345 machine-proving, ITIP, 339, 347–350, 361 unconstrained, 343–344 Shen, A., 386, 408, 547, 555 Shenvi, S., XIII Shields, P.C., 555 shift-register, 492, 495, 504 Shore, J.E., 555 Shtarkov, Y.M., 557 Shunsuke, I., 555 sibling, 90 side-information, 209, 417 signal, 137 signal analysis, 280, 282, 490 signal-to-noise ratio, 272 signaling network, 470 signed measure, 52, 58, 59 Simmons, G.J., 552 Simonnard, M., 555 simplex method, 344, 345 optimality test, 344, 345 sinc function, 283, 285 single-input single-output system, 137, 140, 142 single-letter characterization, 211 single-letter distortion measure, 211 single-source network code a class of, 427 causality, 427, 431 single-source network coding, XI, 420, 421, 505 achievable information rate, 429 acyclic network, 435–478 cyclic network, 485–502 one sink node, 424 three sink
nodes, 425 two sink nodes, 425 sink node, 411, 422, 424, 425, 459 Slepian, D., 209, 556 Slepian-Wolf coding, 209 Sloane, N.J.A., 543, 556 Snell, J., 550 Soljanin, E., 420, 504, 544, 546 Solomon, G., 554, see also Reed-Solomon code Song, L., 434, 540, 556 sound wave, 382 source code, 82, 175, 183 source coding theorem, VIII, 3, 104, 104–105, 112, 183, 191, 199 coding rate, 104 converse, 105 direct part, 104 general block code, 110 source node, 411, 421, 424 super, 415 source random variable, 208 source sequence, 183–185, 190, 202 space-time domain, 488 spanning tree packing, 481 Spitzer, F., 321, 556 Sprintson, A., 551 standard basis, 440, 454, 458, 492 standard deviation, 251 static network code, X, 469, 468–473, 482 configuration, 468 generic, 470, see also generic network code, static linear broadcast, 470, see also linear broadcast, static linear dispersion, 470, see also linear dispersion, static linear multicast, 470, see also linear multicast, static robustness, 468 stationary source, VIII, 39, 108 entropy rate, 7, 38–41 Steiglitz, K., 553 Stein, C., 546 Steinberg, Y., 557 still picture, 2 Stinson, D.R., 556 Stirling’s approximation, 125 stock market, 228 store-and-forward, X, 411, 419, 475, 477, 481 strong asymptotic equipartition property (AEP), 101, 114, 113–121, 132, 204 conditional, 125, 204, 398, 534 strong law of large numbers, 108 strong typicality, VIII, 113–135, 203, 245 alternative definition, 133 consistency, 122, 158, 204 joint, 122–130 joint AEP, 124 joint typicality array, 129, 395 jointly typical sequence, 122 jointly typical set, 122 typical sequence, 113 typical set, 113 vs. weak typicality, 121 Studený, M., 80, 352, 360, 385, 552, 556 sub-channel, 287 subcode, 190 subgroups, IX, 387–408 intersection of, 387, 393 membership table, 394 subnetwork, 468 subring, 490 substitution of symbols, 55 suffix code, 98 summit of a mountain, 213 support, 7, 13, 23, 45, 111, 113, 184, 229, 291, 296, 364, 396 supremum, 259, 274 switching theory, 411 symmetric group, 390, 400 symmetric matrix, 231, 233, 254, 290 synchronous transmission, 434 Szpankowski, W., 549, 556 Tan, M., XIII, 480, 481, 483, 556 Taneja, I.J., 556 tangent, 219 Tardos, G., 548, 556 Tarokh, V., 100, 551 Tatikonda, S., 556 Telatar, İ.E., 556 telephone conversation, 166 telephone line, 1, 167 television broadcast channel, 2 ternary channel, 178 thermodynamics, 50 Thitimajshima, P., 542 Thomas, J.A., 544 Thomasian, A.J., 542 time average, 108 time domain, 282, 493, 495 time-sharing, 189 Tjalkens, T.J., 557 Toledo, A.L., 556 Tolhuizen, L., 482, 549 Topsøe, F., 548, 556 transfer function, 504 transition matrix, 140, 177, 194, 211, 214, 219, 220 strictly positive, 221 transmitter, 2 trellis network, 488–490, 497 acyclicity of, 489 triangular inequality, 23, 46 Tsang, M.-W., XIV Tsang, P.-W.R., XIV Tse, D.N.C., XIII, 556, 559 Tucker, A.W., 276, 550, see also Karush-Kuhn-Tucker (KKT) condition Tunstall, B.P., 557 turbo code, 166 Tusnády, G., 228, 545 Type I atom, 315 Type II atom, 315 type of a sequence, 133 uncertainty, 2, 13 uncorrelated random variables, 233, 234, 239, 278, 285 undirected graph, 314 component, 314 cutset, 314 edge, 314 loop, 314 vertex, 314 unified asymptotic equipartition property (AEP), 134 unified typicality, 133, 135 consistency, 134 uniform distribution, 36, 37, 147, 149, 150, 159, 208, 235, 236, 261, 264, 267, 364, 367, 396, 427, 452 union bound, 116, 160, 173, 191 unique solution, 485, 486 uniquely decodable code, VIII, 82, 85, 87, 88, 90, 93, 98, 99 expected length,
84 redundancy, 85 unit-delay network, 488, 488–502, 504 universal source coding, 111 upstream-to-downstream order, 437, 440, 451, 457, 466, 472, 480, 485 Vaccaro, U., 79, 80, 543 van der Lubbe, J.C.A., 557 van der Meulen, E.C., 557 van Dijk, M., 79, 557 Varaiya, P.P., 547 variable-length channel code, 171 variance, 230, 234, 250, 285 variational distance, 18, 26, 45, 47–49, 133 vector space, 408 Vembu, S., 557 Venn diagram, 16, 52, 61, 80 Verdú, S., XIII, 50, 117, 542, 548, 555, 557 Vereshchagin, N., 385, 386, 408, 547, 552, 555 video signal, 184 Viswanath, P., 556 Viswanath, S., 549 Viswanathan, H., 542 Vitányi, P., 551 Viterbi, A.J., 557 von Neumann entropy, 385 strong subadditivity, 385 von Neumann, J., 557 Wald, A., 557 Wang, X., 556 water flow, 422 water leakage, 422 water pipe, 164, 422 water-filling, 277, 279, 289, 297 waveform channel, 257, 280–282, 284–287, 297 weak asymptotic equipartition property (AEP), VIII, 101, 101–112, 245, see also Shannon-McMillan-Breiman theorem weak independence, 78 weak law of large numbers, 101, 102, 139, 266, 268 weak typicality, VIII, 101–113, 121, 133–135, 245 alternative definition, 111 typical sequence, 102, 102–112, 245 typical set, 102, 102–112 Weaver, W.W., 555 Wegener, I., 541 Wei, V.K., XIII Weinberger, M.J., 117, 553, 557 Weingarten, H., 557 Weissman, T., 117, 557 Welch, T.A., 557 Welsh, D.J.A., 549 Wheeler, D.J., 543 Whittaker, E.T., 558 Wicker, S.B., 548, 557 wide-sense stationary process, 281, 296 Widmer, J., 546 Willems, F.M.J., XIII, 557 Winter, A., 385, 549, 551 wired-line communication, 281 wireless communication, X, 166, 281, 412, 415–417 Wolf, J.K., XIII, 209, 556, see also Slepian-Wolf coding Wolfowitz, J., 134, 180, 181, 541, 558 Wong, P.Y., XIV Woodard, P.M., 557 World Wide Web, IX, 347 Wu, Y., 420, 481, 482, 544, 548, 558 Wyner, A.D., 297, 556, 558 WYSIWYG, 67 Xia, X.-G., XIII Yan, X., 482, 540, 542, 558 Yan, Y.-O., 360, 379, 559 Yang, E.-h., 550, 558 Yang, S., XIII, 558 Yao, A.C.-C., 420, 558 Ye, C., 99, 558 Ye, Z., 321, 547, 558, 559 Yeung, G.S.-F., XIV Yeung, R.W., 74, 78, 80, 97, 99, 100, 111, 117, 134, 135, 228, 321, 338, 360, 384, 385, 419, 420, 432, 434, 480–483, 504, 539–544, 546, 548, 550, 551, 554, 556, 558, 559 Yeung, S.-W.S., XIV Ytrehus, Ø., 504, 542 z-transform, 489, 490, 495, 497 dummy variable, 489 Zamir, R., 74, 555 Zeger, K., XIII, 100, 385, 540, 545, 551 zero mean, 250, 251, 256, 274, 277 zero-error data compression, VIII, 81–101 zero-error reconstruction, 428 Zhang, C., 551 Zhang, J., 553 Zhang, Z., XIII, 351, 384, 385, 419, 420, 482, 483, 504, 539, 540, 542, 558, 559 Zhao, F., 551 Zheng, L., 559 Zigangirov, K.Sh., 559 Zimmerman, S., 100, 559 Ziv, J., 558, 559
238
Surge: a fast open-source chemical graph generator
Journal of Cheminformatics, volume 14, Article number: 24 (2022)

Abstract
Chemical structure generators are used in cheminformatics to produce or enumerate virtual molecules based on a set of boundary conditions. The result can then be tested for properties of interest, such as adherence to measured data or suitability as drugs. The starting point can be a potentially fuzzy set of fragments or a molecular formula. In the latter case, the generator produces the set of constitutional isomers of the given input formula. Here we present the novel constitutional isomer generator surge, based on the canonical generation path method. Surge uses the nauty package to compute automorphism groups of graphs. We outline the working principles of surge and present benchmarking results which show that surge is currently the fastest structure generator. Surge is available under a liberal open-source license.

Introduction
Chemical structure generators enumerate or generate molecular graphs of organic or bioorganic molecules. They are an integral part of systems for computer-assisted structure elucidation (CASE) [1, 5] and can be used to create molecular libraries for virtual screening [2, 3] or to enumerate chemical spaces in general [4]. The history of chemical graph generators goes back at least to the 1960s DENDRAL project, which was aimed at the CASE of organic molecules based on mass spectrometric data [5]. DENDRAL was developed for NASA's Mariner program to search for life on Mars [5, 6]. Its structure generator used substructures as building blocks and was able to deal with overlapping substructures. In the early history of structure generators, ASSEMBLE was another building-block-based structure generator [7]. In the field, there is a family of generators based on mathematical theorems, such as algorithmic group theory [8] and combinatorics [9]. Besides DENDRAL, MASS [10] was another good example of the application of mathematical theorems in structure generation. It was a tool for the mathematical analysis of molecular structures. SMOG [11] was the successor of the MASS algorithm. We have recently reviewed the history of chemical graph generators in detail [12]. While most structure generators work in a deterministic way, i.e.
exhaustively generate structures according to given boundary conditions [13], stochastic generators have also been suggested for large molecular spaces [14]. Among the currently available structure generators, such as DENDRAL, ASSEMBLE, SMOG, COCON and LSD [15, 16], MOLGEN [17] constituted the state of the art for decades in terms of speed, completeness and reliability. The first version of MOLGEN was based on the strategy of the DENDRAL software and was developed to overcome the limitations of DENDRAL [18]. The software is based on the orderly graph generation method [19]. Although MOLGEN is the de facto gold standard in the field, it has the downside of being closed-source software: the algorithm cannot be further developed or modified by scientists based on their interests. The most efficient and fast open-source chemical graph generator was MAYGEN [20], based on the orderly generation method. However, MAYGEN is approximately 3 times slower than MOLGEN. The state of the art of large-scale structure generation was recently set by the lab of Jean-Louis Reymond, which developed an in-house nauty-based structure generator that enabled the enumeration of 166 billion organic small molecules in the chemical universe database GDB-17 [21]. To the best of our knowledge, this in-house generator was not released as open source or otherwise. Thus, there is still the need for an efficient open-source chemical graph generator. In [20] we expressed the hope to "trigger a surge in the development of improved and faster" structure generators. Here we present the novel structure generator surge, based on the principle of the canonical generation path method [25]. Surge is open-source and outperforms MOLGEN 5.0 by orders of magnitude in speed. Furthermore, surge is easily extensible with more features and adaptable to further applications.

Implementation
Data
We assembled a list of molecular formulae for benchmarking surge against MOLGEN 5.0 in Tables 1 and 2. These formulae were taken from the natural products database COCONUT [22]. The size of these molecular formulae varies and is enough to challenge even the best constitutional isomer generators available (see Results section).

Algorithm and mathematical background
Surge is based on the nauty package [23] for computing automorphism groups of graphs as well as canonical labels. Like nauty, surge is written in a portable subset of C and runs on a considerable number of different systems. Surge is an integration of three existing tools from the nauty suite [24]: (a) geng generates simple graphs based on certain boundary conditions, (b) vcolg colors vertices in the output of geng, and (c) multig inserts multi-edges in the output of the first two tools (Fig. 1).

[Fig. 1] An example case for the combination of the geng, vcolg and multig functions for the furan molecule, C4H4O. First the simple graph is constructed. The nodes are coloured black for carbons and red for the oxygen. In multig, the edge multiplicities are optionally increased to create multiple bonds.
[Fig. 2] Surge flowchart.

An isomorphism between two graphs is a bijection between their vertex sets that maps edges onto edges. If the graphs have adornments, such as atom types for the vertices or bond multiplicities for the edges, then those adornments must be preserved by the mapping. If the two graphs are the same, i.e. the isomorphism is from a graph to itself, it is called an automorphism. The automorphisms form a group under the operation of function composition, called the automorphism group (Fig. 2).
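As a concrete illustration of these definitions, the following small brute-force check (a Python sketch written for this article's furan example only; it is independent of surge's C code, and the vertex labelling is an assumption) confirms the counts discussed in the next paragraph: the bare 5-cycle skeleton has 10 automorphisms, and colouring one vertex as oxygen leaves only 2.

    # Brute-force automorphism count for the furan skeleton (a 5-cycle).
    from itertools import permutations

    edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]}
    colour = {0: "O", 1: "C", 2: "C", 3: "C", 4: "C"}  # assumed labelling: oxygen at vertex 0

    def is_automorphism(p):
        # p is a permutation of the vertices; it must map edges onto edges
        return {frozenset((p[a], p[b])) for a, b in edges} == edges

    autos = [p for p in permutations(range(5)) if is_automorphism(p)]
    coloured = [p for p in autos if all(colour[p[v]] == colour[v] for v in colour)]
    print(len(autos), len(coloured))  # prints: 10 2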
The meanings of isomorphism and automorphism are different for each of the three stages in our algorithm. Referring to Fig. 1, at the first stage (which we call a simple graph) there are no vertex or edge adornments, and all rotations and reflections, 10 in total, are automorphisms. When vertex adornments are added in the second stage, the atom type becomes significant, so only the identity mapping and the reflection through the oxygen atom are automorphisms. In the final stage, edge adornments are added, but in this example the automorphism group is not further reduced, since the reflection through the oxygen atom preserves both atom type and bond multiplicity. Note how the automorphism groups in the second and third stages are subgroups of the automorphism groups in the previous stages.

First stage
Input to surge consists of a molecular formula such as C7H12O2S. Based on the element counts, in this case C = 7, O = 2, S = 1, H = 12, the atom valencies are used to calculate the plausible range of the number of edges of a connected simple graph representing the topology of a molecule with this formula, with hydrogen atoms omitted. Then geng is called to generate all the connected simple graphs with those parameters, subject also to a maximum degree condition depending on the molecular formula. Geng generates one graph from each isomorphism class and these are passed to the second stage as they are produced, without any need to store them. In this example, there are 10 non-hydrogen atoms and the number of edges is in the range 9–11.

Second stage
Given a simple graph G from the first stage, the second stage assigns elements to vertices in all distinct ways. The element counts must be correct, and we must have valence ≥ degree at each vertex. More onerously, we only want one member of each equivalence class of element assignments under the automorphism group of G (Fig. 3). We next explain how this is accomplished.

[Fig. 3] The simple graph on the left has an automorphism which is a reflection about the dashed line. This shows that the second and third images are equivalent and so will lead to the same molecular structures when bond multiplicities are assigned. So we only want to keep one of them.

The vertices of G are arbitrarily numbered 1, 2, …, n. An element assignment can be represented as a list showing the element assigned to each vertex in order of vertex number. For example, a valid list might be L = (C,C,C,S,O,C,C,C,O,C). Automorphisms of G have an action on lists that permutes their entries. Namely, for a list L and automorphism γ, the list γ(L) assigns the same element to vertex γ(v) as L assigns to v, for each vertex v. Thus, if L is a list of elements and γ is an automorphism, L and γ(L) give equivalent assignments of elements to the vertices of G. Our task in this stage is to choose exactly one assignment from each equivalence class. Given a fixed ordering of the elements, for example C < O < S, two lists can be compared lexicographically. This enables us to define canon(L), the maximum list in the equivalence class of L. Note that canon(L) = canon(L′) if L and L′ are equivalent, so there is a unique maximum list in each equivalence class, and we can recognize it by the condition canon(L) = L. To put it another way, if γ(L) > L for some automorphism γ then L ≠ canon(L); otherwise L = canon(L). Now we describe the conceptual method for the second stage.
For a given G: generate each list L that satisfies the element-count and valence constraints, and output L exactly when canon(L) = L. This algorithm is efficient if the automorphism group Aut(G) is small, but that is not always the case. Therefore, we adopt a more complex approach.

An automorphism of G is called minor if it merely swaps two leaves (vertices of degree 1) that have a common neighbour. The minor subgroup M ≤ Aut(G) is the subgroup generated by all the minor automorphisms. A flower is a maximal set of leaves with the same neighbour. In the left graph of Fig. 4, the flowers are {1,2,3}, {6,10} and {9,11}. The minor subgroup M consists of all automorphisms that preserve the flowers, such as (1 2 3)(9 11). The order of M is 3! × 2! × 2! = 24. In addition to M, the automorphism group may contain automorphisms that do not preserve the flowers, such as (6 11)(7 8)(9 10). To capture such automorphisms, we colour the graph as in the right side of Fig. 4. Vertices not in flowers are coloured black. Within each flower, vertices are coloured red, blue, green, … in order of vertex number, using a fixed list of colours that does not include black. Now let N be the group of automorphisms that respect the vertex colours. In the example, N has only the identity and (6 9)(7 8)(10 11).

[Fig. 4] A graph with 3 flowers and the colouring used to compute N.

An arbitrary automorphism of G can be obtained by first applying an element of N to capture how the flowers are mapped to each other, and then applying an element of M to capture the movement of leaves within each flower. In both steps the choice is unique, so we have a factorization Aut(G) = NM. (In the language of group theory, M is a normal subgroup and N is a complete set of coset representatives.) In the example, consider (1 2)(6 11)(7 8)(9 10). It swaps the flowers {6,10} and {9,11}, so we choose the element of N which does that, namely γ = (6 9)(7 8)(10 11). Then we have to arrange the leaves within the flowers with an element of M, namely δ = (1 2)(6 10)(9 11). This achieves γδ = (1 2)(6 11)(7 8)(9 10).

The main advantage of factoring Aut(G) = NM is the following.

Theorem. For any list L, L = canon(L) if and only if L = max { δ(L) | δ in M } and L = max { γ(L) | γ in N }.

Proof. The "only if" direction is obvious, since M and N are subsets of Aut(G). Suppose in the other direction that L = max { δ(L) | δ in M } and L = max { γ(L) | γ in N }. From the factorization of Aut(G) we know that canon(L) = δ(γ(L)) for some γ in N and δ in M. Note that in both L and canon(L) the elements are in nonincreasing order within each flower, as they are maximized with respect to M. Also recall that the automorphisms in N preserve the order of vertex numbers within the flowers, by virtue of the fact that we coloured the vertices in order of vertex number when we computed N. This means that we can take δ to be the identity, and so canon(L) = γ(L). This proves that canon(L) = L, since L = max { γ(L) | γ in N }.

In order to implement the condition L = max { δ(L) | δ in M }, we don't need to compute M explicitly. Instead, since M is generated by transpositions, it suffices that within each flower the elements are in nonincreasing order relative to vertex number. Using the ordering of elements that we have chosen, in the example we just need to enforce the inequalities element(1) ≥ element(2) ≥ element(3), element(6) ≥ element(10) and element(9) ≥ element(11).
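In code, the conceptual test "keep L exactly when no group element improves it" can be sketched as follows (an illustrative Python fragment, not surge's C implementation; the list encoding, the element order and the explicit group are assumptions for the example):

    # Keep an assignment list L only if it is the maximum of its orbit,
    # i.e. no automorphism gamma yields a lexicographically larger list.
    ORDER = {"C": 0, "O": 1, "S": 2}  # assumed element order C < O < S

    def apply(gamma, L):
        # gamma(L) assigns to vertex gamma(v) the element that L assigns to v
        out = [None] * len(L)
        for v, elem in enumerate(L):
            out[gamma[v]] = elem
        return out

    def key(L):
        return [ORDER[e] for e in L]

    def is_canonical(L, group):
        # group: iterable of automorphisms, each a tuple mapping vertex -> vertex
        return all(key(apply(g, L)) <= key(L) for g in group)

The point of the NM factorization is precisely to avoid materializing the full group in such a test: the M part is enforced by the cheap per-flower inequalities above, and only the usually tiny group N needs to be tested explicitly.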
The program recursively assigns elements to vertices in order of vertex number and enforces these inequalities as they become active, rather than at the end. To implement the condition L = max { γ(L) | γ in N }, we compute N using nauty and test that γ(L) ≤ L for each γ in N. This is efficient in practice because N is very small most of the time. We can also partly enforce N by means of inequalities: since vertex 6 is the least vertex in a non-trivial orbit {6, 9} of N, we can assume element(6) ≥ element(9). This is not necessary, but it gives a small time improvement. As an example, C7H14N2O7 has 15,425,657,612 isomers. Using the factorisation Aut(G) = NM reduces the number of nontrivial groups processed by 58% and the maximum group size from 2592 to 72. The overall generation time is 18% less. In typical cases, the method provides about a 10–40% reduction in cost.

Third stage
After the assignment of elements to vertices is complete, the program moves to the next stage of selecting a bond multiplicity for each edge. This is the same type of problem as in the second stage. Instead of a list of elements for each vertex, we have a list of multiplicities for each edge. Instead of Aut(G), we use the subgroup of Aut(G) that preserves the element assignment. Otherwise M and N are defined as before. In the implementation, we don't use nauty to compute N but instead filter the N subgroup from the second stage, rejecting those automorphisms which don't preserve elements and converting the others to their action on the edges. The constraints we have at this time are that for each atom the total number of incident bonds, counting multiplicity, must be at most the valence of the atom, and that the total of (valence − incident bonds) over all atoms must equal the desired number of hydrogen atoms. Once these constraints are satisfied, there is exactly one way to add hydrogens (though the program does not add them explicitly).

As an example, geng makes 534,493 unlabelled simple graphs in 1.3 s for lysopine, C9H18N2O4. For these graphs, the second-stage subgroup N is trivial 58% of the time and never larger than 72. Assignment of elements to vertices produces 3,012,069,151 vertex-labelled graphs in 90 s. The N subgroup for the third stage is trivial 98% of the time and never larger than 24. Finally, the assignment of bond multiplicities produces 5,979,199,394 completed molecules in an additional 100 s.

As demonstrated by our examples, surge can generate molecular structures very quickly, allowing for the inspection of extremely large sets of isomers. The generation speed is several times faster than even the fastest output format (SMILES). On the other hand, any particular application will likely have stronger restrictions on the structure than just a molecular formula. For example, some substructures may make the molecule unstable or give it chemical properties undesirable in the application. Or, experimental investigation of an unknown compound may have determined some features of the structure, so that only molecules with those features are of interest. For these reasons, surge provides a number of filters to limit the output. The 3-stage generation method allows some of them to be implemented almost for free, and all of them are much more efficient than filtering the output through an external program. For example, restrictions on the number of short rings and the planarity of the molecule can be enforced at Stage 1.
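Returning to the third stage for a moment, the two feasibility constraints on a multiplicity assignment quoted above translate directly into code. The sketch below is illustrative Python with an assumed valence table, not surge's internal representation:

    # A multiplicity assignment is admissible when every atom stays within
    # its valence and the leftover valences sum to the hydrogen count.
    VALENCE = {"C": 4, "N": 3, "O": 2, "S": 2}  # assumed standard valences

    def feasible(elements, bonds, n_hydrogen):
        # elements: element symbol per vertex; bonds: {(u, v): multiplicity}
        used = [0] * len(elements)
        for (u, v), mult in bonds.items():
            used[u] += mult
            used[v] += mult
        free = [VALENCE[e] - b for e, b in zip(elements, used)]
        return all(f >= 0 for f in free) and sum(free) == n_hydrogen

For furan (C4H4O), for instance, the ring assignment with two alternating double bonds uses both of the oxygen's valences and three of each carbon's four, leaving exactly four free valences for the four hydrogens, so the test passes.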
Surge also provides some "badlists" of forbidden substructures (many of them inspired by the corresponding feature of MOLGEN). The open-source nature of surge allows for a more advanced feature: by writing small code snippets, the user can insert custom filters into any of the three stages, and also perform such tasks as adding extra elements and command-line options. Several worked examples are provided with the program.

Results
Surge is available on GitHub under a liberal open-source license (Apache 2.0). The system can be built with the standard Unix configure/make scheme and the resulting stand-alone executable is then run from the command line. By default, surge generates all constitutional isomers of a given molecular formula. Surge can write output in either SDfile or SMILES [27] format. SMILES output is produced very efficiently by constructing a template for each simple graph at the first stage, so that only atom types and bond multiplicities must be filled in before output.

We benchmarked surge with the set of molecular formulae given in Table 1. Since our motivation for developing structure generators is the generation of large molecules, Table 1 consists of natural products randomly selected from the natural products database COCONUT [22]. For this list of molecular formulae, surge outperformed MOLGEN by orders of magnitude (Fig. 5), and MOLGEN terminated at a built-in limit of 2^31 − 1 structures. Reported computation times were linearly extrapolated based on the MOLGEN timing for 2^31 − 1 structures and the actual number of isomers reported by surge. Note that surge generates between 7 and 22 million molecules per second for all of these examples.

[Fig. 5] Comparison of the run times of surge v1.0 vs MOLGEN 5.0 for long-running molecular formulae from selected natural products, plotted on a logarithmic time scale. In the majority of cases, MOLGEN terminated at a built-in limit of 2^31 − 1 structures. Reported computation times were linearly extrapolated based on the MOLGEN timing for 2^31 − 1 structures and the actual number of isomers reported by surge.
[Fig. 6] Nine example isomers of the natural product Istanbulin A, with the molecular formula C15H20O4. The molecular structure of Istanbulin A is given in the 9th entry of the illustration.

Surge has a tiny memory footprint irrespective of the molecule size or the number of isomers. All of the examples in this paper run in at most 5 MB of RAM on Linux. For 10 randomly selected molecular formulae, 4 options of surge were tested; the results are given in Table 2. These options are:

-p0:1  at most one cycle of length 5
-P     the molecule is planar
-B5    no atom has two double bonds and otherwise only hydrogen neighbours
-B9    no atom lies on more than one cycle of length 3 or 4

Limitations
Release 1.0 of surge does not perform a Hückel aromaticity test and therefore generates duplicate structures for Kekulé versions of aromatic rings that are graph-theoretically different. Benchmarking against MOLGEN 5.0 was therefore performed with the -noaromaticity switch of MOLGEN.

Conclusion
We have presented surge, a structure generator for constitutional isomers based on the canonical generation path method. To the best of our knowledge, surge is the fastest chemical structure generator available. A number of badlist options are available to avoid the generation of potentially unlikely structures.
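As an illustration of how such switches might combine, a run along the lines of "surge -P -p0:1 -B5 -B9 C9H18N2O4" (a hypothetical invocation assembled from the options quoted above; consult the program's built-in help for the authoritative syntax) would restrict the generated isomers of lysopine's formula to planar structures with at most one 5-cycle that also pass the two badlist filters.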
Current limitations include the lack of aromaticity detection. Surge is hosted as an open-source package on GitHub, inviting the scientific community to use and extend it. Surge offers a plug-in mechanism for community-driven extensions. Plugins can hook into the various stages of the surge generation process, thereby offering efficient means to prune the generation tree.

Availability of data and materials
Project name: surge
Project home page:
Operating system(s): Platform independent
Programming language: C
License: Apache 2.0

References
1. Elyashberg M, Argyropoulos D (2020) Computer assisted structure elucidation (CASE): current and future perspectives. Magn Reson Chem
2. Miyao T, Kaneko H, Funatsu K (2016) Ring system-based chemical graph generation for de novo molecular design. J Comput Aided Mol Des 30:425–446
3. Saldívar-González FI, Huerta-García CS, Medina-Franco JL (2020) Chemoinformatics-based enumeration of chemical libraries: a tutorial. J Cheminform 12:64
4. Blum LC, Reymond J-L (2009) 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J Am Chem Soc 131:8732–8733
5. Lindsay RK, Buchanan BG, Feigenbaum EA, Lederberg J (1993) DENDRAL: a case study of the first expert system for scientific hypothesis formation. Artif Intell 61:209–261
6. Gulyaeva KA, Artemieva IL (2020) The ontological approach in organic chemistry intelligent system development. In: Advances in Intelligent Systems and Computing. Springer, Singapore, pp 69–78
7. Badertscher M, Korytko A, Schulz KP, Madison M, Munk ME, Portmann P et al (2000) Assemble 2.0: a structure generator. Chemometrics Intellig Lab Syst 51:73–79
8. Holt DF, Eick B, O'Brien EA (2005) Handbook of computational group theory. CRC Press, Boca Raton
9. Kreher DL, Stinson DR (2020) Combinatorial algorithms: generation, enumeration, and search. CRC Press, Boca Raton
10. Serov VV, Elyashberg ME, Gribov LA (1976) Mathematical synthesis and analysis of molecular structures. J Mol Struct 31:381–397
11. Molchanova MS, Shcherbukhin VV, Zefirov NS (1996) Computer generation of molecular structures by the SMOG program. J Chem Inf Comput Sci 36:888–899
12. Yirik MA, Steinbeck C (2021) Chemical graph generators. PLoS Comput Biol 17:e1008504
13. Faulon JL (1992) On using graph-equivalent classes for the structure elucidation of large molecules. J Chem Inf Comput Sci 32:338–348
14. Faulon JL (1994) Stochastic generator of chemical structure. 1. Application to the structure elucidation of large molecules. J Chem Inf Comput Sci 34:1204–1218
15. Junker J (2011) Theoretical NMR correlations based structure discussion. J Cheminform 3:27
16. Nuzillard J-M, Georges M (1991) Logic for structure determination. Tetrahedron 47:3655–3664
17. Gugisch R, Kerber A, Kohnert A, Laue R, Meringer M, Rücker C et al. MOLGEN 5.0, a molecular structure generator. In: Basak SC, Restrepo G, Villaveces JL (eds) Advances in mathematical chemistry
18. Grund R, Kerber A, Laue R (1996) Construction of discrete structures, especially isomers.
Discrete Appl Math 67:115–126
19. Grüner T, Laue R, Meringer M (1997) Algorithms for group actions: homomorphism principle and orderly generation applied to graphs. DIMACS Ser Discrete Math Theoret Comput Sci 28:113–122
20. Yirik MA, Sorokina M, Steinbeck C (2021) MAYGEN: an open-source chemical structure generator for constitutional isomers based on the orderly generation principle. J Cheminform
21. Ruddigkeit L, van Deursen R, Blum LC, Reymond J-L (2012) Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. J Chem Inf Model 52:2864–2875
22. Sorokina M, Merseburger P, Rajan K, Yirik MA, Steinbeck C (2021) COCONUT online: collection of open natural products database. J Cheminform 13:2
23. McKay BD, Piperno A (2014) Practical graph isomorphism, II. J Symb Comput 60:94–112
24. McKay BD, Piperno A (2019) nauty and Traces user's guide
25. McKay BD (1998) Isomorph-free exhaustive generation. J Algorithms 26:306–324
26. CTfile formats. BIOVIA Databases (2016)
27. Weininger D (1988) SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. J Chem Inf Comput Sci 28:31–36

Funding
Open Access funding enabled and organized by Projekt DEAL. MAY and CS acknowledge funding by the Carl-Zeiss-Foundation.

Author information
Authors and Affiliations
School of Computing, Australian National University, Canberra, ACT 2601, Australia: Brendan D. McKay
Institute of Inorganic and Analytical Chemistry, Friedrich-Schiller-University, Lessingstr. 8, 07743 Jena, Germany: Mehmet Aziz Yirik & Christoph Steinbeck

Contributions
BDM wrote the code and developed the underlying nauty package. BDM, CS and MAY conceived the project. BDM and CS guided the development. MAY contributed to the conceptual development and performed the evaluation and testing. All authors wrote the manuscript. All authors read and approved the final manuscript.

Corresponding authors
Correspondence to Brendan D. McKay or Christoph Steinbeck.

Ethics declarations
Competing interests
All authors declare no competing interests.

Additional information
Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article: McKay, B.D., Yirik, M.A. & Steinbeck, C. Surge: a fast open-source chemical graph generator. J Cheminform 14, 24 (2022).
Received: 06 December 2021. Accepted: 03 April 2022. Published: 23 April 2022.
Journal of Cheminformatics, ISSN: 1758-2946.
239
Top 10 Chess Endgame Principles (Crash Course)
Remote Chess Academy, 807000 subscribers, 3519 likes, 109365 views. Posted: 29 Jan 2025

Description
Learn 3 Simple Rules To Reach 2000+ ELO Rating Faster FREE Masterclass ► Take Your Chess Skills To The Next Level With High-Quality Courses Learn here ► 💰💲 Join the RCA Affiliate Program, promote our courses, and get 50% commission - 🔹 How I Went From 1600 to 2260 Chess Rating in 1 Year - ♛ Find the endgame examples shown in the video in this blog-post - In this video lesson, you'll discover the top 10 most important chess endgame principles and patterns that will help you win more games with ease. This endgame crash course covers everything from fundamental rules to advanced techniques, many of which even top grandmasters like Magnus Carlsen use to dominate endgames and convert small advantages into wins. Beyond basic endgame theories and strategies, you'll also learn key patterns to spot tactics, checkmating ideas, and brilliant stalemate tricks that can turn losses into draws, or even victories!
▬▬▬▬▬▬
► Chapters
00:00 Top 10 Chess Endgame Principles & Patterns
00:10 Pattern-1
01:46 Pattern-2
02:10 Magnus Carlsen falls for stalemate trick
03:09 Pattern-3 (important endgame rule)
07:08 Pattern-4 (guaranteed win)
08:20 Pattern-5 (basic endgame principle)
09:19 Pattern-6
09:50 Pattern-7
10:41 Puzzle of the Day
11:16 Pattern-8 (draw by endgame theory)
12:01 Pattern-9 (rook and pawn endgames)
12:57 Pattern-10 (queen vs pawn endgames)
► Follow RCA on social media: Facebook, Instagram, TikTok, X (Twitter), IgorNation
#ChessRules #ChessEndgame #ChessTips #ChessStrategy #ChessStrategyForBeginners #ChessPrinciples
110 comments

Transcript: Top 10 Chess Endgame Principles & Patterns welcome to this quick crash endgame course these are the top 10 most important endgame patterns that you need to know in order to dominate your current opponents I'm Grandmaster Igor Smirnov and let's go ahead and get started Pattern-1 the first pattern is rather basic but nevertheless a must-know for anybody it's a staircase checkmate which you can use with two heavy pieces either two rooks or two queens or a rook and a queen so here how do you checkmate the black king well first off you need to cut it off either horizontally or vertically but nevertheless you wish to limit the king somehow let's say you play rook to F2 and cut off the king that way on the next turn you start to attack it delivering checks so we play rook to E1 notice that we keep the rooks in such a way where they do control both of these files pushing the king to the side so the king goes and then you keep repeating the same check rook to D2 the king is still cut off vertically therefore it still has to move left if it goes somewhere here then you just keep delivering checks all the way till it's a checkmate if instead the king moves here and attacks your rook then you certainly need to be careful you do not want to give one more check and give up your rook in this case you just move your rook out of danger as far as possible from the king and then you're ready to continue the same pattern if he tries to stop your rook from delivering this check then you do the same thing you lift it up notice that I'm putting them diagonally to each other so that you are able to deliver checks and the rooks do not stand in the way of each other so now after the king moves you just keep delivering these checks forcing the king to move all the way till it's a checkmate you can use the same staircase technique with queens of course it's
even easier this time you deliver a check with the queen the king needs to move right this time we're dragging the opponent's king to that side of the board then one more check the other one and finally it's a checkmate which also Pattern-2 brings us to the next point avoid a stalemate especially when you are ahead of material sometimes your opponent has just one king left while you have two queens or a ton of material notice that if I try the same staircase technique right here unthinkingly I can end up with a stalemate where there's no check and the king has no squares to go to therefore it is just a stalemate meaning a draw in Magnus Carlsen falls for stalemate trick this game Magnus Carlsen was playing black and he's up a pawn in a totally winning endgame position he's pushing pawns forward now undermines White's pawn chain in case White captures this pawn over here black would take this one and then would start pushing pawns black is also easily winning here in the actual game instead White decided to just do nothing he kept playing with his king back and forth here Magnus played pawn to B6 probably expecting White to just resign as now White is in zugzwang his king needs to move which will drop this pawn on C4 however instead White played a tricky move king to A4 and Carlsen was so surprised that he started laughing and then even walked away from the board he couldn't believe that he could draw this absolutely easily winning endgame because here it's actually a counter trick now black needs to do something with the king and the only meaningful move king takes C4 actually leads to a stalemate so even the strongest ones fall for this trap and you should always watch out for a stalemate if you are about to win the game now Pattern-3 (important endgame rule) let's go to more practical positions and what is the main endgame plan it is known as the 3P rule push passed pawns if you've got a passed pawn meaning a pawn with no opposition no opponent's pawn on the same file or adjacent files and there is nothing that prevents this pawn from possibly moving forward all the way till the eighth rank if you've got a passed pawn just push it forward at some point this will force your opponent to put his pieces in passive defensive positions and even if your pawn could not be promoted immediately you'll just bring more forces and support this pawn's advancement ultimately it will be promoted and you will win the game now we're going to take it one step further so we've got a very practical endgame position where there's nothing particular that you can do you don't have any passed pawns and these types of positions are the ones that confuse people the most because you can't checkmate your opponent's king they have no obvious weaknesses and it's unclear what you are going to do here so here is the right plan again it's still the same thing you want to push passed pawns now what if you don't have any well then you need to create some right so first off let's look at our position and evaluate all the pawns that you have asking yourself which one has the highest probability of becoming a passed pawn and in this case it is this pawn over here that at least has no opposing pawn on the same file and yes it's not a clear passed pawn as if we try to advance it forward we can't easily do that as if we push it to C5 he will certainly capture it with the B pawn or with the rook but nevertheless since there are no opponent's pawns at least along these two files it has the potential to become a passed pawn and that's what we're trying to achieve now
think about this once you have that clear guideline all you're going to be thinking about is how to make this pawn a passed pawn and once you have a clear guideline in mind it's much easier to carry on this plan so in order for this pawn to become a passed pawn what would you need to do what would need to happen well you'd probably need to push this pawn and at some point to push it forward to C5 can you do it straight away not really because if you push this pawn to B4 it hangs this pawn on C4 he'll simply take it so we should not do it straight away what do you need to do to push the pawn to B4 but not lose the other one on C4 well you need to defend this one first so you play king to C3 aiming to play pawn to B4 on the next turn now our pawn is guarded he can't take it all is good okay he plays something what do you do next again your logical thinking is exactly the same you wish to push this pawn to C5 and make it a passed pawn can you do it now well he attacks this square twice with his pawn and rook so if you push it straight away he's going to capture it and win your pawn you don't want that so what do you need to do to make this happen you need one more attacker of this square so you can bring the rook over now you'll have two attackers of this square and on the next turn you will be able to push the pawn forward what if he tries to stop that he goes like this now your rook is hanging you'd love to play C5 but this would hang a rook so you can't do that straight away so after king E6 what would you play here as white well you need to guard the rook first so you play something like this and then on the next turn after whatever move of black you finally push the pawn to C5 and the goal is achieved now after they trade you do have a clear passed pawn and you see that I'm explaining to you the way of thinking of a grandmaster but basically it's pretty straightforward and you can replicate it and play just that now let's say your opponent plays something what are you going to do next again we're going to adhere to our plan you've got a passed pawn you wish to push it right here can you do it straight away now he's going to capture it so we need to control the square first so how can you do that well you can bring the rook right here he comes back can you push the pawn right now now the rook is going to be captured so we need to defend it he'll play something like this and on the next turn you'll basically keep repeating this operation and you're going to win the Pattern-4 (guaranteed win) game which brings us to the next idea two connected passed pawns win the game why we know that one passed pawn is already quite strong but two is just an unstoppable force in an endgame because first off he can never blockade them notice that they do control all the squares in front of these pawns making it virtually impossible for him to put a piece in front of a pawn and blockade it therefore you can just push them forward plus if your opponent ever tries to attack one of your pawns the other one can just advance and support this attacked pawn so there's really nothing that he can do to stop your pawns from rolling forward now if you can't play C6 straight away because this would drop this pawn you can just advance the king forward and then after whatever moves of black your pawns are going to be moving forward and the final big advantage of having two connected passed pawns is that you actually get a new queen if you only had one passed pawn he could trade a rook for that pawn and the game would
go on however in this case even if he trades a rook for this pawn you still have one more spare pawn basically to still queen and you actually get a new queen and now you will probably win the game because you have a tremendous material advantage Pattern-5 (basic endgame principle) here comes the next rule activate your king and here's a little but helpful tip treat your king as if it was a knight because in an endgame your king is as strong as a knight just to illustrate the point I altered the position slightly I swapped the king and the knight now imagine you have this knight instead of a king on G8 would you wish to keep it right there and do nothing with it certainly not you'd wish to develop your knight to use it actively probably move it forward somewhere to the center you'd start attacking with this knight that's what you wish to do and you use the same logic for your king now I'm bringing the original position back and hopefully it is clear what black should do now you need to develop your king right so you move it forward and you start to centralize it so first you wish to centralize it and secondly you wish to use it more aggressively to attack something with your king and here you've got an opportunity to do that you can keep advancing it forward and now you're attacking this pawn on the next turn you're going to capture it and then after that you're going to capture also this pawn on B4 keep pushing your passed pawns and you're going to win the game Pattern-6 which brings us to the next rule which basically is the flip side of the same coin cut off your opponent's king here you can play rook to D7 which not only attacks this pawn because black can easily defend it but even more importantly it cuts off the black king on the last rank therefore black can no longer move the king here and here and centralize it and use it actively while you still can do that you can bring your king forward and attack them in one direction or the other and you're quite likely to win the game just because basically you outnumber your opponent you've got two pieces in the game versus one this Pattern-7 position looks symmetric and drawish at first but in reality black is easily winning if you know the opposition rule in chess opposition is when you put your king in such a way where both kings face each other with only one square in between and this situation puts your opponent in zugzwang where any move would worsen their position basically whatever they do they'll have to give you space and this will allow you to outflank them and to start capturing their pawns for instance if the king goes right now you can go left and pick up this pawn and then all the other pawns just as well let's take it back if instead white tries to maintain the guard they still have to give you control of this square and now you can go right outflank them the king will have to move somewhere now you can start gaining space approaching his pawns and ultimately winning them and then you'll start pushing your passed pawn forward and Puzzle of the Day here's a quick test this is black to play how do you play here if you are black it's a really classical position which so many people have ruined even though black is easily winning the most natural and tempting move for black would be trying to push their passed pawn forward by playing pawn to E3 but that is a massive blunder which only leads to a draw instead you need to win the opposition right put your king to face the enemy king and to gain that opposition where it would
force the king to move away to one of the sides and then you'd move to the other side and now nothing can stop your pawn from being pushed forward and you Pattern-8 (draw by endgame theory) win the next principle is opposite-colored bishop endings are drawish because you can build a fortress in this position white is up two pawns it looks like this passed pawn is very dangerous but in reality black drew the game they even gave up a third pawn to white by going king to F5 so what's the catch here well it turns out that once you own all the squares of the color of your bishop there is nothing your opponent can do black will just go back and forth and white cannot break this fortress because their dark-squared bishop can never attack light squares in the actual game white actually tried to do something but black just did nothing they're going back and forth with their bishop and white realized that hey they can't do anything if they ever move the king away that drops this pawn and after that black still goes back and forth and it's a fortress it's a draw the pattern number Pattern-9 (rook and pawn endgames) nine which is very helpful rooks go behind passed pawns imagine you're playing this position as black and your opponent goes rook D5 hitting this pawn what would you do here knowing this rule it's pretty easy to understand that your rook should go behind the passed pawn so that not only does it guard the pawn but it also supports its advancement all the way through now it's really hard for white to do anything and you win the game if instead black played the more natural move just pushing this pawn forward and away from danger this would actually be a massive mistake because now white can follow the same rule rooks go behind passed pawns it doesn't even matter whether it's your pawn or your opponent's pawn you wish to have your rook behind it and now this rook is attacking the pawn and is controlling all the squares in front of the pawn indirectly therefore even if black for example guards the pawn okay cool but what are you going to do next how can this pawn move forward there is no way if black ever tries to push it forward white will simply capture it black can't make any progress and it's a draw Pattern-10 (queen vs pawn endgames) and another really helpful pattern is how to win with a queen versus a pawn quite often in an endgame you have something like this where you start capturing their pawn they go against yours and then there's some pawn race and if you win the race you end up with a position like that where you've got a queen so you're supposed to win the game however their pawn is about to be promoted so how do you win the game so here's the trick first off you wish to bring your queen as close as possible to the pawn using the zigzag method where you just deliver this check and then you keep delivering checks all the time approaching their king with your queen so at some point after one more check the king will have to sidestep and then you can bring your queen even closer attacking this pawn this forces the king to come here but now with one more check this forces the king to stand in front of the pawn which is what you want because if the king goes that way that just drops the pawn and you win therefore the king would have to move right here but now as the king stands in front of the pawn it blockades their own pawn that's why you wanted to create a position like that and now it buys you time to bring your king closer now if your opponent ever pushes the king this way then notice that their
Pattern-9 (rook and pawn endgames)

Pattern number nine, which is very helpful: rooks go behind passed pawns. Imagine you're playing this position as black and your opponent goes rook d5, hitting this pawn. What would you do here? Knowing this rule, it's pretty easy to see that your rook should go behind the passed pawn, so that it not only guards the pawn but also supports its advancement all the way through. Now it's really hard for white to do anything, and you win the game. If instead black played the more natural move, just pushing this pawn forward and away from danger, this would actually be a massive mistake, because now white can follow the same rule: rooks go behind passed pawns. It doesn't even matter whether it's your pawn or your opponent's pawn; you wish to have your rook behind it. Now this rook is attacking the pawn and indirectly controlling all the squares in front of it. Therefore, even if black, for example, guards the pawn: okay, cool, but what are you going to do next? How can this pawn move forward? There is no way. If black ever tries to push it forward, white will simply capture it. Black can't make any progress, and it's a draw.

Pattern-10 (queen vs pawn endgames)

And another really helpful pattern is how to win with a queen versus a pawn. Quite often in an endgame you have something like this, where you start capturing their pawns, they go after yours, and after some pawn race, if you win the race, you end up with a position like this where you've got a queen, so you're supposed to win the game. However, your opponent is about to promote, so how do you win? Here's the trick. First off, you wish to bring your queen as close as possible to the pawn using the zigzag method: you deliver this check and then keep delivering checks, all the time approaching their king with your queen. At some point, after one more check, the king will have to sidestep, and then you can bring your queen even closer, attacking this pawn. This forces the king to come here, but now, with one more check, the king is forced to stand in front of the pawn, which is what you want, because if the king goes the other way, that just drops the pawn and you win. Therefore the king has to move right here, but now, standing in front of the pawn, it blockades its own pawn. That's why you wanted to create a position like this: it buys you time to bring your king closer. Now, if your opponent ever pushes the king this way, notice that their own pawn is actually pinned; on the next turn they cannot move it forward because it's pinned. Now you just move your king forward; you don't even need to do anything special. If they try to, you know, move the king away, this pawn is ready to be promoted again, so you need to once again start delivering checks, using the same zigzag method to bring your queen closer, until the king goes back to the position where it blocks the pawn, and now you have time to bring your king closer. Once your king is really close, it's pretty simple: usually you either capture the pawn or just win the game. If you want my complete blueprint for middlegame play, check out this free masterclass right there, and if you want to know how I went from 1,600 to 2,260 in just one year, check out this video.
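The "pinned pawn" moment from pattern 10 can be checked the same way; the squares below (white queen d3, black king b1, black pawn c2, kings placed for legality) are my own hypothetical reconstruction of the motif, not the video's exact diagram:

```python
import chess

# Hypothetical queen-vs-pawn position: white Qd3/Kg7, black Kb1, pawn c2.
board = chess.Board("8/6K1/8/8/8/3Q4/2p5/1k6 b - - 0 1")

# The c2-pawn is absolutely pinned along the b1-c2-d3 diagonal,
# so black has no legal move starting from c2: promotion is impossible.
print(board.is_pinned(chess.BLACK, chess.C2))                     # True
print(any(m.from_square == chess.C2 for m in board.legal_moves))  # False
```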
240
calculus - Counting the Number of Real Roots of A Polynomial - Mathematics Stack Exchange ===============

Counting the Number of Real Roots of A Polynomial

Asked 6 years, 3 months ago · Modified 6 years, 3 months ago · Viewed 8k times · Score 15

I am interested in solving problems which involve finding the number of real roots of any polynomial. Suppose I take the function

$f(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$

This does not have any real roots, but I am trying to figure out if there is some analytical way, not involving graphing, to come to this conclusion. Using Descartes' Rule of Signs, there are zero sign changes in $f$, by virtue of which there are no positive roots of the polynomial. Considering

$f(-x) = x^6 - x^5 + x^4 - x^3 + x^2 - x + 1$

I concluded that there are either 6, 4, 2, or 0 negative roots. So I have 4 cases to consider:

0 positive roots, 6 negative roots, 0 complex roots
0 positive roots, 4 negative roots, 2 complex roots
0 positive roots, 2 negative roots, 4 complex roots
0 positive roots, 0 negative roots, 6 complex roots (the correct case)

I tried differentiating $f$, but the derivative is equally bad:

$f'(x) = 6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1$

I am unable to conclude anything from this. I tried going about the problem the other way: if a polynomial of even degree is always positive or always negative (depending on the leading coefficient), it has no real roots, but finding the extrema of the function is proving to be extremely difficult. I have also tried using Bolzano's Intermediate Value Theorem.
It guarantees the existence of at least one root, but there is a possibility that there might be more than one, which can only be eliminated by monotonicity, which again brings me back to the bad derivative. I believe there must be some general rules by which we can calculate the number of real roots of any polynomial. Is graphing the best technique for polynomials like these, and if it is, are there ways by which a quick but accurate plot can be drawn? While reading about the relevant theory, I came across Sturm's Method and the Newton-Raphson Method, but haven't touched these yet. Is it absolutely required to know these concepts to effectively draw conclusions? Have I missed something?

Tags: real-analysis, calculus, algebra-precalculus, derivatives, polynomials

Asked May 4, 2019 at 14:29 by Aditya Sriram

Comments:
- en.wikipedia.org/wiki/Budan%27s_theorem – saulspatz (May 4, 2019)
- For some reason lesser known but more efficient than Sturm's theorem is Vincent's theorem. – conditionalMethod (Dec 7, 2019)

7 Answers

Answer (score 18) – Milo Brandt, answered May 4, 2019:

The best way to solve this is to use Sturm's theorem. This gives an algorithm for computing the number of distinct real roots of any polynomial. The Wikipedia page is quite good, but I'll outline the method here. Let $f(x)$ be a polynomial. We define a sequence as follows:

$P_0 = f$
$P_1 = f'$
$P_{n+2} = -(P_n \bmod P_{n+1})$

where $f'$ is the derivative of the polynomial and, for polynomials $P$ and $Q$, we define $P \bmod Q$ to be the remainder of dividing $P$ by $Q$; that is, the unique polynomial $R$ of degree less than $\deg Q$ such that $P = cQ + R$ for some other polynomial $c$. (This is also just the result you get by polynomial long division.) For instance, suppose we want to know how many roots $f(x) = x^3 + 2x + 1$ has using this method; of course, we know the answer is $1$, but we should check. We get the following chain:

$P_0 = x^3 + 2x + 1$
$P_1 = 3x^2 + 2$
$P_2 = -\tfrac{4}{3}x - 1$
$P_3 = -\tfrac{59}{16}$

For any real number $a$, we define $V(a)$ to be the number of sign changes in the sequence $P_0(a), P_1(a), P_2(a), P_3(a)$, where we ignore any zeros. Assuming neither $a$ nor $b$ is itself a root, Sturm's theorem states that $V(a) - V(b)$ is the number of real roots between $a$ and $b$. Note that $V(-\infty) = \lim_{a \to -\infty} V(a)$ and $V(\infty) = \lim_{b \to \infty} V(b)$ are easy to compute by looking at the leading terms of each polynomial. For instance, here we have $V(-\infty) = 2$ since, towards $-\infty$, $P_0$ tends to $-\infty$, $P_1$ to $\infty$, $P_2$ to $\infty$, and $P_3$ is negative: two sign changes. Then $V(\infty) = 1$ because $P_0$ and $P_1$ are positive near $\infty$ while $P_2$ and $P_3$ are negative. This polynomial has $V(-\infty) - V(\infty) = 1$ root, as expected, since it is an increasing function. This can be a bit laborious to do by hand, but it always works for any polynomial.
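For readers who would rather let a machine grind through the remainders, here is a minimal pure-Python sketch of the chain just described (function names are mine; it assumes the input polynomial is squarefree), using exact Fraction arithmetic so no signs are lost to rounding:

```python
from fractions import Fraction

def rem(p, q):
    # Remainder of polynomial p modulo q; coefficients listed highest degree first.
    p, q = [Fraction(c) for c in p], [Fraction(c) for c in q]
    while len(p) >= len(q):
        f = p[0] / q[0]
        for i, qc in enumerate(q):
            p[i] -= f * qc
        p = p[1:]                 # leading term cancels exactly
    while p and p[0] == 0:
        p = p[1:]                 # strip leading zeros of the remainder
    return p

def count_real_roots(f):
    """Distinct real roots of a squarefree polynomial, via Sturm's theorem."""
    chain = [[Fraction(c) for c in f]]
    n = len(f) - 1
    chain.append([c * (n - i) for i, c in enumerate(chain[0][:-1])])  # P1 = f'
    while len(chain[-1]) > 1:     # build P_{k+2} = -(P_k mod P_{k+1})
        r = rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])

    def changes(signs):
        s = [v for v in signs if v != 0]
        return sum(1 for a, b in zip(s, s[1:]) if a * b < 0)

    at_pos_inf = [p[0] for p in chain]                         # sign of leading term
    at_neg_inf = [p[0] * (-1) ** (len(p) - 1) for p in chain]  # times (-1)^degree
    return changes(at_neg_inf) - changes(at_pos_inf)

print(count_real_roots([1, 0, 2, 1]))           # x^3 + 2x + 1        -> 1
print(count_real_roots([1, 1, 1, 1, 1, 1, 1]))  # the OP's polynomial -> 0
```

For $x^3 + 2x + 1$ this builds exactly the chain $P_0, \dots, P_3$ shown above and returns $V(-\infty) - V(\infty) = 1$.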
The only trick to proving this, at least in the square-free case, is to consider what happens to sign changes in this sequence as one moves along the real line: the number of sign changes can only change near a root of one of the polynomials. However, note that, for some polynomial $c$, we have the following relationship:

$P_n = cP_{n+1} - P_{n+2}$

Note that if $P_{n+1}$ has a root at a place where $P_n$ doesn't, then near that root $P_n$ and $P_{n+2}$ must have opposite signs, since $P_n = -P_{n+2}$ at the root. So long as $P_0$ is squarefree (i.e. has no multiple roots), no consecutive terms share a root, so this always happens. As a result, a zero of $P_{n+1}$ does not affect the number of sign changes. However, if $P_0$ has a root, then the number of sign changes decreases by one there, since near that root $f$ and $f'$ have opposite signs before the root and equal signs after.

Comments:
- Just to add on the requirement that $f$ be squarefree: the multiple roots of $f$ are just the roots of the (more harmless) polynomial $\gcd(f, f')$ – Hagen von Eitzen
- Thank you for your suggestions. I went ahead and read about Sturm's Theorem. It is absolutely wonderful, but what I observed was that, when I applied it to a polynomial, the steps where we are required to find the remainder took quite some time to compute – Aditya Sriram
- @AdityaSriram and other readers, Sturm's theorem isn't 'the best way to solve this'. Some computation is always going to be involved, but a better method is using Vincent's theorem. – conditionalMethod

Answer (score 8) – Wuestenfux, answered May 4, 2019:

Your polynomial is a factor in $X^7 - 1 = (X - 1)(X^6 + X^5 + X^4 + X^3 + X^2 + X + 1)$, and so the zeros are the 7th roots of unity. There is only one 7th root of unity which is real, and this is $1$. The others are all complex.

Comments:
- Thanks for your response, however I am looking for a generic method applicable to any polynomial, not just the one above – Aditya Sriram
- @AdityaSriram: there isn't a generic method (or at least, nobody has found one). There are generic methods for some classes of polynomials (e.g. cyclotomic polynomials), but in general each case is sui generis. – NickD
- Seems to be an open question. – Wuestenfux
- @NickD That's not at all the case! This problem is very solved. I'll write an answer, but basically Sturm's theorem lets one compute the number of real roots of any polynomial. – Milo Brandt
- @MiloBrandt: I'm looking forward to your answer! Thanks for the link too. – NickD

Answer (score 5) – Greg Martin, answered May 4, 2019:

While this method isn't guaranteed to work on all polynomials, it is surprisingly effective sometimes: notice that

$x^6 + x^5 + x^4 + x^3 + x^2 + x + 1$
$= x^4\left(x + \tfrac12\right)^2 + \left[\tfrac34 x^4 + x^3 + x^2 + x + 1\right]$
$= x^4\left(x + \tfrac12\right)^2 + \tfrac34 x^2\left(x + \tfrac23\right)^2 + \left[\tfrac23 x^2 + x + 1\right]$
$= x^4\left(x + \tfrac12\right)^2 + \tfrac34 x^2\left(x + \tfrac23\right)^2 + \tfrac23\left(x + \tfrac34\right)^2 + \tfrac58$

(Each square was chosen to eliminate the top two terms of the preceding polynomial in square brackets; this is like "completing the square" from high school math.) In this form, the polynomial is clearly always positive.
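The algebra in this decomposition is easy to double-check symbolically; a quick sympy sketch (variable names mine):

```python
import sympy as sp

x = sp.symbols('x')
f = x**6 + x**5 + x**4 + x**3 + x**2 + x + 1
sos = (x**4 * (x + sp.Rational(1, 2))**2
       + sp.Rational(3, 4) * x**2 * (x + sp.Rational(2, 3))**2
       + sp.Rational(2, 3) * (x + sp.Rational(3, 4))**2
       + sp.Rational(5, 8))

# The difference expands to 0, so f is a sum of squares plus 5/8,
# hence f(x) >= 5/8 > 0 for every real x: no real roots.
print(sp.expand(f - sos) == 0)  # True
```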
Answer (score 3) – Dr. Sonnhard Graubner, answered May 4, 2019:

Hint: $x = 0$ is not a solution, so we can write

$x^3 + \frac{1}{x^3} + x + \frac{1}{x} + x^2 + \frac{1}{x^2} + 1$

Now substitute $t = x + \frac{1}{x}$. You will get $t^2 - 2 = x^2 + \frac{1}{x^2}$ and $t^3 - 3t = x^3 + \frac{1}{x^3}$.

Comments:
- I appreciate your response, but could you suggest some general techniques for solving such problems? – Aditya Sriram
- The question is clearly asking about the general case, using the given polynomial just as an example. This doesn't answer the question at all. – Nij

Answer (score 3) – Eric Towers, answered May 4, 2019:

There is a mild ambiguity in your problem statement: "the number of real roots" may, or may not, count multiple roots with multiplicity. For instance, $x^2 = (x - 0)^2$ has two indistinguishable roots at $0$, so you need to decide how you will account for repeated roots. You could continue with Descartes' Rule of Signs and bisection through horizontally shifting your polynomial, but Sturm sequences are a much better way forward. Regardless of your choice, you should arrange for all the roots of the polynomial to be simple roots (i.e., all of multiplicity $1$). So, first, we detect and eliminate repeated roots. Compute $g(x) = \gcd(f(x), f'(x))$. If $g$ is a constant, then $f$ has no repeated roots. If $g$ is not a constant, then we split into two subproblems: the real roots of $g$ are real roots of $f$ (with various multiplicities), and the real roots of $h(x) = f(x)/g(x)$ are simple real roots of $f$. For $g$, run the algorithm we are describing on it from the beginning. For $h$, proceed to the next step. There are now two ways to proceed.

Method 1: Shift your polynomial left and right and use Descartes' rule of signs to find out how many roots are to the left and right of zero. For instance, $h(x - 1) = x^6 - 5x^5 + 11x^4 - 13x^3 + 9x^2 - 3x + 1$ has six sign changes, so all the real roots of $h$ lie between $-1$ and $0$. $h(x - 1/2)$ has four sign changes, so at most two real zeroes lie in $(-1, -1/2)$ and at most four real zeroes lie in $(-1/2, 0)$. Now look at $h'(x)$ on the interval $(-1/2, 0)$. We want to know if $h'$ has any roots in that interval, their multiplicity, and their locations, so we can determine whether the graph of $h$ looks more like $\pm(x^2 + 1)$ or $\pm(x^2 - 1)$ on that interval. (I have made no attempt to scale and shift these to that interval. Instead, we want to know if $h$ has zero or two roots there, and we will find out if $h$ is concave up or concave down on that interval.) Actually doing this can lead to a horrible tree of conditional cases.
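Before moving on to Method 2, note that Method 1's shift-and-count bookkeeping is easy to mechanize; a short sympy sketch (the helper function is mine) reproducing the counts quoted above:

```python
import sympy as sp

x = sp.symbols('x')
h = x**6 + x**5 + x**4 + x**3 + x**2 + x + 1

def sign_changes(expr):
    # Descartes' rule of signs: count sign alternations among nonzero coefficients.
    cs = [c for c in sp.Poly(sp.expand(expr), x).all_coeffs() if c != 0]
    return sum(1 for a, b in zip(cs, cs[1:]) if a * b < 0)

print(sign_changes(h))                                 # 0 -> no positive roots
print(sign_changes(h.subs(x, -x)))                     # 6 -> up to six negative roots
print(sign_changes(h.subs(x, x - 1)))                  # 6, as in the answer
print(sign_changes(h.subs(x, x - sp.Rational(1, 2))))  # 4, as in the answer
```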
Method 2: Use Sturm's theorem. Construct the (Sturm) sequence of remainders obtained by applying the Euclidean division algorithm to $h$ and $h'$. Let $V(\xi)$ be the number of sign alternations (ignoring zeroes) in the Sturm sequence when its members are evaluated at $x = \xi$. Then $V(a) - V(b)$ is the number of real roots in the interval $(a, b]$.

Comments:
- I believe that would be $V(a) - V(b)$, at least according to the answer by Milo Brandt and Wikipedia. – tomsmeding
- @tomsmeding: Thanks! I probably never would have caught that typo. – Eric Towers

Answer (score 2) – lab bhattacharjee, answered May 4, 2019:

Hint: $(x - 1)f(x) = x^7 - 1$. So the roots of $f(x) = 0$ are $e^{2m\pi i/7}$ where $m \equiv 1, 2, 3, 4, 5, 6 \pmod 7$. Now the imaginary part of $e^{2m\pi i/7}$ will be $0$ if $2m\pi/7 = n\pi$ for some integer $n$, which gives $m = \frac{7n}{2}$, so $n$ must be even, $= 2r$ (say), which gives $7 \mid m$: a contradiction.

Comments:
- Thanks for your response, however I am looking for a generic method applicable to any polynomial, not just the one above – Aditya Sriram
- The question is clearly asking about the general case, using the given polynomial just as an example. This doesn't answer the question at all. – Nij

Answer (score 2) – user65203, answered May 4, 2019:

Descartes' rule only gives you an upper bound. The method you are looking for is that of Sturm sequences (i.e. apply the Euclidean algorithm to the pair $(f(x), f'(x))$ and count the changes of sign across the intermediate polynomials). If you want the number of real roots in $(-\infty, \infty)$, it suffices to consider the signs of the leading terms.
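As a closing practical note (my addition, not from any answer above): computer algebra systems ship exact real-root counting of this kind, so for day-to-day use a couple of sympy calls settle the question; treat the exact API defaults as an assumption to confirm against the sympy docs:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(x**6 + x**5 + x**4 + x**3 + x**2 + x + 1, x)

print(f.count_roots())             # 0: no real roots at all
print(sp.real_roots(f.as_expr()))  # []: same conclusion
```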
241
Published Time: 2004-06-16T10:34:54Z Kenneth Burke - Wikipedia ===============
From Wikipedia, the free encyclopedia

American philosopher and literary critic (1897–1993). For the Irish hurler, see Kenneth Burke (hurler).

| Kenneth Burke | |
| --- | --- |
| Born | Kenneth Duva Burke, May 5, 1897, Pittsburgh, Pennsylvania, U.S. |
| Died | November 19, 1993 (aged 96), Andover, New Jersey, U.S. |
| Occupation(s) | Literary theorist and philosopher |
| Institutions | University of Chicago |

Kenneth Duva Burke (May 5, 1897 – November 19, 1993) was an American literary theorist, poet, essayist, and novelist, who wrote on 20th-century philosophy, aesthetics, criticism, and rhetorical theory. As a literary theorist, Burke was best known for his analyses based on the nature of knowledge. He was one of the first theorists to stray from more traditional rhetoric and view literature as "symbolic action." Burke was unorthodox, concerning himself not only with literary texts but also with the elements of the text that interacted with the audience: social, historical, and political background, author biography, and so on. Over his career, Burke was praised by The Johns Hopkins Guide to Literary Theory and Criticism as "one of the most unorthodox, challenging, theoretically sophisticated American-born literary critics of the twentieth century." His work continues to be discussed by rhetoricians and philosophers.

Personal history

Kenneth Duva Burke was born on May 5, 1897, in Pittsburgh, Pennsylvania, and graduated from Peabody High School, where he befriended classmates Malcolm Cowley and James Light. He attended Ohio State University to pursue courses in French, German, Greek, and Latin. He moved with his parents to Weehawken, New Jersey, and later enrolled at Columbia University. During his time there, he was a member of the Boar's Head Society. The constraining learning environment impelled Burke to leave Columbia without ever receiving a college diploma. In Greenwich Village, he kept company with avant-garde writers such as Hart Crane, Malcolm Cowley, Gorham Munson, and later Allen Tate. Raised by a Christian Science mother, Burke later became an avowed agnostic.
In 1919, he married Lily Mary Batterham, with whom he had three daughters: the feminist, Marxist anthropologist Eleanor Leacock; musician (Jeanne) Elspeth Chapin Hart; and writer and poet France Burke. He later divorced Lily and, in 1933, married her sister Elizabeth Batterham, with whom he had two sons, Michael and Anthony. Burke served as the editor of the modernist literary magazine The Dial in 1923, and as its music critic from 1927 to 1929. He was an avid pianist. He received the Dial Award in 1928 for distinguished service to American literature. He was the music critic of The Nation from 1934 to 1936, and was awarded a Guggenheim Fellowship in 1935. His work on criticism was a driving force in placing him back in the academic spotlight. As a result, he was able to teach and lecture at various colleges, including Bennington College, while continuing his literary work. Many of Burke's personal papers and correspondence are housed at Pennsylvania State University's Special Collections Library. Despite his stints lecturing at universities, Burke was an autodidact, a self-taught scholar. In later life, his New Jersey farm was a popular summer retreat for his extended family, as reported by his grandson Harry Chapin, a popular singer-songwriter. Burke died of heart failure at his home in Andover, New Jersey, at age 96.

Persuasions and influences

Like many 20th-century theorists and critics, Burke was heavily influenced by Karl Marx, Sigmund Freud, and Friedrich Nietzsche. He was a lifelong interpreter of Shakespeare and was also significantly influenced by Thorstein Veblen. He resisted being pigeonholed as a follower of any philosophical or political school of thought, and had a notable and very public break with the Marxists who dominated the literary criticism set in the 1930s. Burke corresponded with a number of literary critics, thinkers, and writers over the years, including William Carlos Williams, Malcolm Cowley, Robert Penn Warren, Allen Tate, Ralph Ellison, Albert Murray, Katherine Anne Porter, Jean Toomer, Hart Crane, and Marianne Moore. Later thinkers who have acknowledged Burke's influence include Harold Bloom, Stanley Cavell, J. Hillis Miller, Susan Sontag (his student at the University of Chicago), Erving Goffman, Geoffrey Hartman, Edward Said, René Girard, Fredric Jameson, Michael Calvin McGee, Dell Hymes, and Clifford Geertz. Burke was one of the first prominent American critics to appreciate and articulate the importance of Thomas Mann and André Gide; he produced the first English translation of "Death in Venice", which first appeared in The Dial in 1924. It is now considered[by whom?] much more faithful and explicit than H. T. Lowe-Porter's more famous 1930 translation. Burke's political engagement is evident: A Grammar of Motives takes as its epigraph ad bellum purificandum ("toward the purification of war"). American literary critic Harold Bloom singled out Burke's Counterstatement and A Rhetoric of Motives for inclusion in his book The Western Canon.
Beyond his contemporary influences, Burke took Aristotle's teachings into account while developing his theories on rhetoric. A significant source of his ideas is Aristotle's Rhetoric. Drawing from it, Burke oriented his writing about language specifically to its social context. Similarly, he studied language as involving more than logical discourse and grammatical structure, because he believed the social context of language cannot be reduced to principles of pure reason. Burke draws a line between a Platonic and a more contemporary view of rhetoric, described as "old rhetoric" and "new rhetoric" respectively. The former is defined by persuasion by any means, while the latter is concerned with "identification." Burke's use of the word refers to the process by which a speaker associates themself with certain groups, such as a target audience.
His idea of identification is similar to ethos in classical rhetoric, but it also explains the use of logos and pathos in an effort to create a lasting impression on the auditors. It is characterized by "identifying" with a speaker's rhetoric insofar as their words represent a world that seems to be the one in which we live. This theory differs from ethos most significantly in Burke's conception of artistic communication, which he believes is defined by eloquence, "simply the end of art and therefore its essence." The use of rhetoric conveys aesthetic and social competence, which is why a text can rarely be reduced to purely scientific or political implications, according to Burke. Rhetoric forms our social identity by a series of events usually based on linguistics, but more generally by the use of any symbolic figures. He uses the metaphor of a drama to articulate this point, where interdependent characters speak and communicate with each other while allowing the others to do the same. Burke also describes identification as a function of persuasive appeal. Burke defined rhetoric as the "use of words by human agents to form attitudes or to induce actions in other human agents." His definition builds on preexisting ideas of how people understand the meaning of rhetoric. Burke describes rhetoric as using words to move people or encourage action.[citation needed] Furthermore, he described rhetoric as almost synonymous with persuasion (A Rhetoric of Motives, 1950). Burke argued that rhetoric works to bring about change in people. This change can be evident through attitude, motives, or intentions, but it can also be physical. Calling for help is an act of rhetoric; rhetoric is symbolic action that calls people to physical action. Ultimately, rhetoric and persuasion are interchangeable, according to Burke. Other scholars have similar definitions. Aristotle argued that rhetoric was a tool for persuading people (but also for gaining information) if the speaker knew how. One way in which Aristotle formed his arguments was through the syllogism. Another example of how rhetoric was used to persuade was deliberative discourse, in which politicians and lawyers used speech to pass or reject policies. Sally Gearhart states that rhetoric uses persuasion to induce change; although she argues that persuasion is violent and harmful, she uses it as a tool herself to bring about change.

Philosophy

The political and social power of symbols was central to Burke's scholarship. He felt that through understanding "what is involved when we say what people are doing and why they are doing it", we gain insight into the cognitive basis for our perception of the world. For Burke, the way in which we decide to narrate gives importance to specific qualities over others. He believed this tells us a great deal about how we see the world.

Dramatism

Burke called his social and political rhetorical analysis "dramatism" and believed that such an approach to language analysis and language usage helps us understand the basis of conflict, the virtues and dangers of cooperation, and the opportunities of identification and consubstantiality. Burke defined the rhetorical function of language as "a symbolic means of inducing cooperation in beings that by nature respond to symbols."
His definition of humanity states that "man" is "the symbol using, making, and mis-using animal, inventor of the negative, separated from his natural condition by instruments of his own making, goaded by the spirit of hierarchy, and rotten with perfection." For Burke, some of the most significant problems in human behavior result from instances of symbols using human beings rather than human beings using symbols. Burke proposed that when we attribute motives to others, we tend to rely on ratios between five elements: act, scene, agent, agency, and purpose. This has become known as the dramatistic pentad. The pentad is grounded in his dramatistic method, which considers human communication as a form of action. Dramatism "invites one to consider the matter of motives in a perspective that, being developed from the analysis of drama, treats language and thought primarily as modes of action" (Grammar of Motives, xxii). Burke pursued literary criticism not as a formalistic enterprise but rather as an enterprise with significant sociological impact; he saw literature as "equipment for living," offering folk wisdom and common sense to people and thus guiding the way they lived their lives.

Rebirth cycle

Through the use of dramatism, one can ultimately make use of Burke's Rebirth Cycle. This cycle encompasses three distinct phases: Guilt/Pollution, Purification, and Redemption. Burke introduced the phases and their functioning through a poem: "Here are the steps / In the Iron Law of History / That welds Order and Sacrifice / Order leads to Guilt / (For who can keep commandments!) / Guilt needs Redemption (for who would not be cleaned!) / Redemption needs Redeemer (which is to say, a Victim!) / Order / Through Guilt / To Victimage (hence: Cult of the Kill)". The poem provides a basis for the interactions of the three phases: the introduction of order into human life enables the creation of guilt; to alleviate what the creation of Guilt produces, redemption is necessary; and through redemption, the cycle is completed. Pollution initially constitutes actions that result in the creation of Guilt. The creation of Guilt occurs upon the rejection of a hierarchy. Challenges to relationships, changes in power, and the appropriateness of behaviors to change are each contributing factors toward the formation of Guilt. It is appropriate to draw parallels between the creation of Guilt and original sin. Original sin constitutes "an offense that cannot be avoided or a condition in which all people share". Guilt represents the initial action that strips a situation of its perceived purity. The establishment of Guilt necessarily leads to the need to undergo purification, to cleanse the individual affected by its recognition. Purification is accomplished through two forms of "ritual purification": mortification and victimage. Stratification created by hierarchies allows for marginalization within societies. Marginalization is thus a leading factor in the creation of Guilt, and leads to the need for mortification. Burke wrote, "In an emphatic way, mortification is the exercising of oneself in 'virtue'; it is a systematic way of saying no to Disorder, or obediently saying yes to Order". Mortification allows self-sacrifice, which enables one to rid oneself of impurities. Purification will only be reached if it is equal to one's degree of guilt.
If mortification cannot be reached, one will ultimately be forced to project "his conflict upon a scapegoat, by 'passing the buck,' by seeking a sacrificial vessel upon which he can vent, as from without, a turmoil that is actually within". Sacrificial vessels allow for the extermination of one's Guilt while enabling one to remain virtuous. Victimage is the second form of ritual purification. Burke highlights society's need to rectify division within its ranks. He wrote, "People so dislike the idea of division, their dislike can easily be turned against the man or group who would so much as name it, let alone proposing to act upon it". Victimage allows the creation of a scapegoat that serves as a depository of impurities, protecting against entities that are alien to a particular society. The scapegoat takes on the sins of the impure, thus allowing redemption for the Guilty party; through the course of these actions the scapegoat is harnessed with the sins of the Guilty. Redemption is reached through one of two options. Tragic redemption revolves around the idea that guilt combines with the principles of perfection and substitution so that victimage can be utilized; this can be viewed as the "guilty is removed from the rhetorical community through either scapegoating or mortification". Comic enlightenment is the second form of redemption. This option allows the sins of the guilty to be adopted by Society as a whole, ultimately making Society guilty by association.

Terministic screen

Another key concept for Burke is the terministic screen: a set of symbols that becomes a kind of screen or grid of intelligibility through which the world makes sense to us. Here Burke offers rhetorical theorists and critics a way of understanding the relationship between language and ideology. Language, Burke thought, doesn't simply "reflect" reality; it also helps select reality as well as deflect reality. In Language as Symbolic Action (1966), he writes, "Even if any given terminology is a reflection of reality, by its very nature as a terminology it must be a selection of reality; and to this extent must function also as a deflection of reality." Burke describes terministic screens as reflections of reality: we see these symbols as things that direct our attention to the topic at hand. For example, photos of the same object taken with different filters each direct the viewer's attention differently, much as different subjects in academia grab the attention differently. Burke states, "We must use terministic screens, since we can't say anything without the use of terms; whatever terms we use, they necessarily constitute a corresponding kind of screen; and any such screen necessarily directs the attention to one field rather than another." Burke drew not only from the works of Shakespeare and Sophocles, but from films and radio that were important to pop culture, because they were teeming with "symbolic and rhetorical ingredients." We as a people can be cued to accept the screen put in front of us, and mass culture such as TV and websites can be to blame for this. Media today have altered terministic screens, or, as Richard Toye wrote in his book Rhetoric: A Very Short Introduction, the "linguistic filters which cause us to see situations in particular fashions."

Identification

Burke viewed identification as a critical element of persuasion. According to Burke, as we listen to someone speak, we gauge how similar that person is to us.
If our opinions match, then we identify (rhetorically) with the speaker. Based on how much we identify with the speaker, we may be moved to accept the conclusions that the speaker comes to in an argument, as well as all (or most) of its implications. In A Rhetoric of Motives, Burke not only explores self-identification within a rhetorical context, but also analyzes exterior identification, such as identifying with objects and concepts that are not the self. There are several other facets of identification that Burke discusses within his books, such as consubstantiality, property, autonomy, and cunning. Burke's exploration of identification within rhetoric heavily influenced modern rhetorical theory. He revolutionized rhetoric in the West with his exploration of identification, arguing that rhetoric is not only about "rational argument plus emotion", but also that it involves people connecting to language and one another at the same time. Burke's theory of identification was complicated by his critical interest in music, prompting a shift toward distinguishing between form and information in sonic identification.

Principal works

In "Definition of Man", the first essay of his collection Language as Symbolic Action (1966), Burke defined humankind as a "symbol using animal" (p. 3). This definition of man, he argued, means that "reality" has actually "been built up for us through nothing but our symbol systems" (p. 5). Without our encyclopedias, atlases, and other assorted reference guides, we would know little about the world that lies beyond our immediate sensory experience. What we call "reality," Burke stated, is actually a "clutter of symbols about the past combined with whatever things we know mainly through maps, magazines, newspapers, and the like about the present ... a construct of our symbol systems" (p. 5). College students wandering from class to class, from English literature to sociology to biology to calculus, encounter a new reality each time they enter a classroom; the courses listed in a university's catalogue "are in effect but so many different terminologies" (p. 5). It stands to reason, then, that people who consider themselves to be Christian, and who internalize that religion's symbol system, inhabit a reality that is different from that of practicing Buddhists, Jews, or Muslims. The same would hold true for people who believe in the tenets of free-market capitalism or socialism, Freudian psychoanalysis or Jungian depth psychology, as well as mysticism or materialism. Each belief system has its own vocabulary to describe how the world works and what things mean, thus presenting its adherents with a specific reality.

Burke's poetry (which has drawn little critical attention and has seldom been anthologized) appears in three collections: Book of Moments (1955), Collected Poems 1915–1967 (1968), and the posthumously published Late Poems: 1968–1993: Attitudinizings Verse-wise, While Fending for One's Selph, and in a Style Somewhat Artificially Colloquial (2005). His fiction is collected in Here & Elsewhere: The Collected Fiction of Kenneth Burke (2005). His other principal works are:

Counter-Statement (1931)
Towards a Better Life (1932)
Permanence and Change (1935)
Attitudes Toward History (1937)
The Rhetoric of Hitler's "Battle" (1939)
Philosophy of Literary Form (1941)
A Grammar of Motives (1945)
A Rhetoric of Motives (1950)
Linguistic Approaches to Problems of Education (1955)
The Rhetoric of Religion (1961)
Language As Symbolic Action (1966)
Dramatism and Development (1972): a description of the contents of the two-part lecture devoted to biological, psychological, and sociocultural phenomena
Here and Elsewhere (2005)
Essays Toward a Symbolic of Motives (2006)
Kenneth Burke on Shakespeare (2007)

A full list of his works is available from KB: The Journal of the Kenneth Burke Society. He also wrote the song "One Light in a Dark Valley," later recorded by his grandson Harry Chapin.

Burke's most notable correspondence is collected in:

Jay, Paul, ed., The Selected Correspondence of Kenneth Burke and Malcolm Cowley, 1915–1981, New York: Viking, 1988. ISBN 0-670-81336-2
East, James H., ed., The Humane Particulars: The Collected Letters of William Carlos Williams and Kenneth Burke, Columbia: University of South Carolina Press, 2004.
Rueckert, William H., ed., Letters from Kenneth Burke to William H. Rueckert, 1959–1987, Anderson, SC: Parlor Press, 2003. ISBN 0-9724772-0-9

Honors

Burke was awarded the National Medal for Literature at the American Book Awards in 1981. According to The New York Times of April 20, 1981, "The $15,000 award, endowed in memory of the late Harold Guinzberg, founder of the Viking Press, honors a living American writer 'for a distinguished and continuing contribution to American letters.'"
242
Brownian Motion
Draft version of May 25, 2008
Peter Mörters and Yuval Peres

Contents

Foreword
List of frequently used notation
Chapter 0. Motivation
Chapter 1. Definition and first properties of Brownian motion
 1. Paul Lévy's construction of Brownian motion
 2. Continuity properties of Brownian motion
 3. Nondifferentiability of Brownian motion
 4. The Cameron-Martin theorem
 Exercises
 Notes and Comments
Chapter 2. Brownian motion as a strong Markov process
 1. The Markov property and Blumenthal's 0-1 Law
 2. The strong Markov property and the reflection principle
 3. Markov processes derived from Brownian motion
 4. The martingale property of Brownian motion
 Exercises
 Notes and Comments
Chapter 3. Harmonic functions, transience and recurrence
 1. Harmonic functions and the Dirichlet problem
 2. Recurrence and transience of Brownian motion
 3. Occupation measures and Green's functions
 4. The harmonic measure
 Exercises
 Notes and Comments
Chapter 4. Hausdorff dimension: Techniques and applications
 1. Minkowski and Hausdorff dimension
 2. The mass distribution principle
 3. The energy method
 4. Frostman's lemma and capacity
 Exercises
 Notes and Comments
Chapter 5. Brownian motion and random walk
 1. The law of the iterated logarithm
 2. Points of increase for random walk and Brownian motion
 3. The Skorokhod embedding problem
 4. The Donsker invariance principle
 5. The arcsine laws
 Exercises
 Notes and Comments
Chapter 6. Brownian local time
 1. The local time at zero
 2. A random walk approach to the local time process
 3. The Ray-Knight theorem
 4. Brownian local time as a Hausdorff measure
 Exercises
 Notes and Comments
Chapter 7. Stochastic integrals and applications
 1. Stochastic integrals with respect to Brownian motion
 2. Conformal invariance and winding numbers
 3. Tanaka's formula and Brownian local time
 4. Feynman-Kac formulas and applications
 Exercises
 Notes and Comments
Chapter 8. Potential theory of Brownian motion
 1. The Dirichlet problem revisited
 2. The equilibrium measure
 3. Polar sets and capacities
 4. Wiener's test of regularity
 Exercises
 Notes and Comments
Chapter 9. Intersections and self-intersections of Brownian paths
 1. Intersection of paths: existence and Hausdorff dimension
 2. Intersection equivalence of Brownian motion and percolation limit sets
 3. Multiple points of Brownian paths
 4. Kaufman's dimension doubling theorem
 Exercises
 Notes and Comments
Chapter 10. Exceptional sets for Brownian motion
 1. The fast times of Brownian motion
 2. Packing dimension and limsup fractals
 3. Slow times of Brownian motion
 4. Cone points of planar Brownian motion
 Exercises
 Notes and Comments
Appendix I: Hints and solutions for selected exercises
Appendix II: Background and prerequisites
 1. Convergence of distributions on metric spaces
 2. Gaussian random variables
 3. Martingales in discrete time
 4. The max-flow min-cut theorem
Index
Bibliography

Foreword

The aim of this book is to introduce Brownian motion as the central object of probability and discuss its properties, putting particular emphasis on the sample path properties.
Our hope is to capture as much as possible of the spirit of Paul Lévy's investigations on Brownian motion, by moving quickly to the fascinating features of the Brownian motion process and filling in more and more details of the picture as we move along. Inevitably, while exploring the nature of Brownian paths one encounters a great variety of other subjects: Hausdorff dimension serves from early on in the book as a tool to quantify subtle features of Brownian paths, stochastic integrals help us get to the core of the invariance properties of Brownian motion, and potential theory is developed to enable us to control the probability that Brownian motion hits a given set.

An important idea of this book is to make it as interactive as possible, and therefore we have included more than 100 exercises, collected at the end of each of the ten chapters. Exercises marked with a diamond have either a hint, a reference to a solution, or a full solution given in the appendix. Exercises marked with a star are more challenging. We have also marked some theorems with a star to indicate that the results will not be used in the remainder of the book and may be skipped on first reading.

This book grew out of lectures given by Yuval Peres at the Statistics Department, University of California, Berkeley in Spring 1998. We are grateful to the students who attended the course and wrote the first draft of the notes: Diego Garcia, Yoram Gat, Diogo A. Gomes, Charles Holton, Frédéric Latrémolière, Wei Li, Ben Morris, Jason Schweinsberg, Bálint Virág, Ye Xia and Xiaowen Zhou. The first draft of these notes was edited by Bálint Virág and Elchanan Mossel, and at this stage corrections were made by Serban Nacu and Yimin Xiao. The notes were distributed via the internet and turned out to be very popular; this demand motivated us to expand them into a full book, hopefully retaining their character. Several people read drafts of the book, supplied us with helpful lists of corrections or tested exercises. We thank Jian Ding, Richard Kiefer, Achim Klenke, Nathan Levy, Arjun Malhotra, Marcel Ortgiese, Jeff Steif, and Kamil Szczegot.

Peter Mörters lectured on the topics of this book in the Graduate School in Mathematical Sciences at the University of Bath in Autumn 2003; thanks are due to the audience, and in particular to Alex Cox and Pascal Vogt, for their contributions. Yuval Peres thanks Pertti Mattila for the invitation to lecture on this material at the joint summer school in Jyväskylä, August 1999, and Peter Mörters thanks Michael Scheutzow for the invitation to lecture at the winter school of the Berlin graduate school in probability in Stralsund, April 2003.

Peter Mörters
Yuval Peres

List of frequently used notation

Numbers:
$\lceil x \rceil$: the smallest integer greater than or equal to $x$.
$\lfloor x \rfloor$: the largest integer smaller than or equal to $x$.

Topology of Euclidean space $\mathbb{R}^d$:
$\mathbb{R}^d$: Euclidean space consisting of all column vectors $x = (x_1, \ldots, x_d)^T$.
$|\cdot|$: Euclidean norm, $|x| = \sqrt{\sum_{i=1}^d x_i^2}$.
$B(x, r)$: the open ball of radius $r > 0$ centred in $x \in \mathbb{R}^d$, i.e. $B(x, r) = \{y \in \mathbb{R}^d : |x - y| < r\}$.
$\overline{U}$: closure of the set $U \subset \mathbb{R}^d$.
$\partial U$: boundary of the set $U \subset \mathbb{R}^d$.
$\mathcal{B}(A)$: the collection of all Borel subsets of $A \subset \mathbb{R}^d$.

Binary relations:
$a \wedge b$: the minimum of $a$ and $b$.
$a \vee b$: the maximum of $a$ and $b$.
$X \stackrel{d}{=} Y$: the random variables $X$ and $Y$ have the same distribution.
$a(n) \asymp b(n)$: the ratio of the two sides is bounded from above and below by positive constants that do not depend on $n$.
$a(n) \sim b(n)$: the ratio of the two sides converges to one.
Vectors, functions, and measures:
$\mathrm{Id}$: the $d \times d$ identity matrix.
$\mathbf{1}_A$: indicator function, with $\mathbf{1}_A(x) = 1$ if $x \in A$ and $0$ otherwise.
$\delta_x$: Dirac measure with mass concentrated on $x$, i.e. $\delta_x(A) = 1$ if $x \in A$ and $0$ otherwise.
$f^+$: the positive part of the function $f$, i.e. $f^+(x) = f(x) \vee 0$.
$f^-$: the negative part of the function $f$, i.e. $f^-(x) = -(f(x) \wedge 0)$.
$\mathcal{L}^d$ or $\mathcal{L}$: Lebesgue measure on $\mathbb{R}^d$.
$\sigma_{x,r}$: $(d-1)$-dimensional surface measure on $\partial B(x,r) \subset \mathbb{R}^d$; if $x = 0$, $r = 1$ we also write $\sigma = \sigma_{0,1}$.
$\varpi_{x,r}$: uniform distribution on $\partial B(x,r)$, $\varpi_{x,r} = \sigma_{x,r} / \sigma_{x,r}(\partial B(x,r))$; if $x = 0$, $r = 1$ we also write $\varpi = \varpi_{0,1}$.

Function spaces:
$C(K)$: topological space of all continuous functions on the compact $K \subset \mathbb{R}^d$, equipped with the supremum norm $\|f\| = \sup_{x \in K} |f(x)|$.

Probability measures and $\sigma$-algebras:
$\mathbb{P}_x$: a probability measure on a measure space $(\Omega, \mathcal{A})$ such that the process $\{B(t) : t \ge 0\}$ is a Brownian motion started in $x$.
$\mathbb{E}_x$: the expectation associated with $\mathbb{P}_x$.
$p(t, x, y)$: the transition density of Brownian motion, $\mathbb{P}_x\{B(t) \in A\} = \int_A p(t, x, y)\,dy$.
$\mathcal{F}^0(t)$: the smallest $\sigma$-algebra that makes $\{B(s) : 0 \le s \le t\}$ measurable.
$\mathcal{F}^+(t)$: the right-continuous augmentation $\mathcal{F}^+(t) = \bigcap_{s > t} \mathcal{F}^0(s)$.

Stopping times: For any Borel sets $A_1, A_2, \ldots \subset \mathbb{R}^d$ and a Brownian motion $B : [0, \infty) \to \mathbb{R}^d$,
\[ \tau(A_1) := \inf\{t \ge 0 : B(t) \in A_1\}, \]
the entry time into $A_1$, and
\[ \tau(A_1, \ldots, A_n) := \begin{cases} \inf\{t \ge \tau(A_1, \ldots, A_{n-1}) : B(t) \in A_n\}, & \text{if } \tau(A_1, \ldots, A_{n-1}) < \infty, \\ \infty, & \text{otherwise,} \end{cases} \]
the time to enter $A_1$ and then $A_2$ and so on until $A_n$.

Systems of subsets in $\mathbb{R}^d$: For any fixed $d$-dimensional unit cube $\mathrm{Cube} = x + [0,1]^d$ we denote:
$\mathcal{D}_k$: family of all half-open dyadic subcubes $D = x + \prod_{i=1}^d \big[k_i 2^{-k}, (k_i+1) 2^{-k}\big)$, $k_i \in \{0, \ldots, 2^k - 1\}$, of sidelength $2^{-k}$.
$\mathcal{D}$: all half-open dyadic cubes in $\mathrm{Cube}$, $\mathcal{D} = \bigcup_{k=0}^\infty \mathcal{D}_k$.
$\mathcal{C}_k$: family of all compact dyadic subcubes $D = x + \prod_{i=1}^d \big[k_i 2^{-k}, (k_i+1) 2^{-k}\big]$, $k_i \in \{0, \ldots, 2^k - 1\}$, of sidelength $2^{-k}$.
$\mathcal{C}$: all compact dyadic cubes in $\mathrm{Cube}$, $\mathcal{C} = \bigcup_{k=0}^\infty \mathcal{C}_k$.

Potential theory: For a metric space $(E, \rho)$ and a mass distribution $\mu$ on $E$:
$\phi_\alpha(x)$: the $\alpha$-potential of a point $x \in E$, defined as $\phi_\alpha(x) = \int \frac{d\mu(y)}{\rho(x,y)^\alpha}$.
$I_\alpha(\mu)$: the $\alpha$-energy of the measure $\mu$, defined as $I_\alpha(\mu) = \iint \frac{d\mu(x)\,d\mu(y)}{\rho(x,y)^\alpha}$.
$\mathrm{Cap}_\alpha(E)$: the (Riesz) $\alpha$-capacity of $E$, defined as $\mathrm{Cap}_\alpha(E) = \sup\{I_\alpha(\mu)^{-1} : \mu(E) = 1\}$.
For a general kernel $K : E \times E \to [0, \infty]$:
$U_\mu(x)$: the potential of $\mu$ at $x$, defined as $U_\mu(x) = \int K(x,y)\,d\mu(y)$.
$I_K(\mu)$: the $K$-energy of $\mu$, defined as $I_K(\mu) = \iint K(x,y)\,d\mu(x)\,d\mu(y)$.
$\mathrm{Cap}_K(E)$: the $K$-capacity of $E$, defined as $\mathrm{Cap}_K(E) = \sup\{I_K(\mu)^{-1} : \mu(E) = 1\}$.
If $K(x,y) = f(\rho(x,y))$ we also write $I_f(\mu)$ instead of $I_K(\mu)$, and $\mathrm{Cap}_f(E)$ instead of $\mathrm{Cap}_K(E)$.

Sets and processes associated with Brownian motion: For a linear Brownian motion $\{B(t) : t \ge 0\}$:
$\{M(t) : t \ge 0\}$: the maximum process defined by $M(t) = \sup_{s \le t} B(s)$.
$\mathrm{Rec}$: the set of record points $\{t \ge 0 : B(t) = M(t)\}$.
$\mathrm{Zero}$: the set of zeros $\{t \ge 0 : B(t) = 0\}$.
For a Brownian motion $\{B(t) : t \ge 0\}$ in $\mathbb{R}^d$ for $d \ge 1$:
$\mathrm{Graph}$: the graph $\{(t, B(t)) : t \ge 0\} \subset \mathbb{R}^{d+1}$.
$\mathrm{Range}$: the range $\{B(t) : t \ge 0\} \subset \mathbb{R}^d$.

CHAPTER 0
Motivation

Much of probability theory is devoted to describing the macroscopic picture emerging in random systems defined by a host of microscopic random effects. Brownian motion is the macroscopic picture emerging from a particle moving randomly in $d$-dimensional space. On the microscopic level, at any time step, the particle receives a random displacement, caused for example by other particles hitting it or by an external force, so that, if its position at time zero is $S_0$, its position at time $n$ is given as $S_n = S_0 + \sum_{i=1}^n X_i$, where the displacements $X_1, X_2, X_3, \ldots$ are assumed to be independent, identically distributed random variables with values in $\mathbb{R}^d$.
The process $\{S_n : n \ge 0\}$ is a random walk; the displacements represent the microscopic inputs. When we think about the macroscopic picture, what we mean are questions such as:
• Does $S_n$ drift to infinity?
• Does $S_n$ return to the neighbourhood of the origin infinitely often?
• What is the speed of growth of $\max\{|S_1|, \ldots, |S_n|\}$ as $n \to \infty$?
• What is the asymptotic number of windings of $\{S_n : n \ge 0\}$ around the origin?

It turns out that not all the features of the microscopic inputs contribute to the macroscopic picture. Indeed, if they exist, only the mean and covariance of the displacements shape the picture. In other words, all random walks whose displacements have the same mean and covariance matrix give rise to the same macroscopic process, and even the assumption that the displacements have to be independent and identically distributed can be substantially relaxed. This effect is called universality, and the macroscopic process is often called a universal object. It is a common approach in probability to study various phenomena through the associated universal objects.

Any continuous time stochastic process $\{B(t) : t \ge 0\}$ describing the macroscopic features of a random walk should have the following properties:
(1) for all times $0 \le t_1 \le t_2 \le \ldots \le t_n$ the random variables $B(t_n) - B(t_{n-1}), B(t_{n-1}) - B(t_{n-2}), \ldots, B(t_2) - B(t_1)$ are independent; we say that the process has independent increments,
(2) the distribution of the increment $B(t+h) - B(t)$ does not depend on $t$; we say that the process has stationary increments,
(3) the process $\{B(t) : t \ge 0\}$ has almost surely continuous paths.

It follows (with some work) from the central limit theorem that these features imply that there exist a vector $\mu \in \mathbb{R}^d$ and a matrix $\Sigma \in \mathbb{R}^{d \times d}$ such that
(4) for every $t \ge 0$ and $h \ge 0$ the increment $B(t+h) - B(t)$ is multivariate normally distributed with mean $h\mu$ and covariance matrix $h \Sigma \Sigma^T$.

Hence any process with the features (1)-(3) above is characterised by just three parameters:
• the initial distribution, i.e. the law of $B(0)$,
• the drift vector $\mu$,
• the diffusion matrix $\Sigma$.
We call the process $\{B(t) : t \ge 0\}$ a Brownian motion if the drift vector is zero and the diffusion matrix is the identity. If $B(0) = 0$, i.e. the motion is started at the origin, we use the term standard Brownian motion.

Suppose we have a standard Brownian motion $\{B(t) : t \ge 0\}$. If $X$ is a random variable with values in $\mathbb{R}^d$, $\mu$ a vector in $\mathbb{R}^d$ and $\Sigma$ a $d \times d$ matrix, then it is easy to check that the process $\{\tilde{B}(t) : t \ge 0\}$ given by $\tilde{B}(t) = X + \mu t + \Sigma B(t)$, for $t \ge 0$, has the properties (1)-(4) with initial distribution $X$, drift vector $\mu$ and diffusion matrix $\Sigma$. Hence the macroscopic picture emerging from a random walk can be fully described by a standard Brownian motion.

[Figure 1. The range $\{B(t) : 0 \le t \le 1\}$ of a planar Brownian motion]

In Chapter 1 we start exploring Brownian motion by looking at dimension $d = 1$. Here Brownian motion is a random continuous function and we ask about its regularity, for example:
• For which parameters $\alpha$ is the random function $B : [0,1] \to \mathbb{R}$ $\alpha$-Hölder continuous?
• Is the random function $B : [0,1] \to \mathbb{R}$ differentiable?
The surprising answer to the second question was given by Paley, Wiener and Zygmund in 1933: Almost surely, the random function $B : [0,1] \to \mathbb{R}$ is nowhere differentiable! This is particularly interesting, as it is not easy to construct a continuous, nowhere differentiable function without the help of randomness. We will give a modern proof of the Paley, Wiener and Zygmund theorem, see Theorem 1.30.
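As a quick numerical illustration of the last statement (our sketch, not part of the book; all names in it are our own choices): on a grid approximation of a Brownian path, the largest difference quotient over lag $h$ blows up roughly like $h^{-1/2}$ as $h$ shrinks, so no finite derivative can survive.

import numpy as np

rng = np.random.default_rng(0)

# Sample a Brownian path on a fine grid via i.i.d. Gaussian increments.
n = 2**20                      # number of grid points on [0, 1]
dt = 1.0 / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# The largest difference quotient sup_t |B(t+h) - B(t)| / h grows as h shrinks.
for k in [1, 4, 16, 64, 256]:
    h = k * dt
    quotients = np.abs(B[k:] - B[:-k]) / h
    print(f"h = {h:.1e}   max |B(t+h)-B(t)|/h = {quotients.max():.1f}")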
In Chapter 2 we move to general dimension $d$. We shall explore the strong Markov property, which roughly says that at suitable random times Brownian motion starts afresh. Among the facts we derive are: Almost surely,
• the set of all points visited by Brownian motion in $d = 2$ has area zero,
• the set of times when Brownian motion in $d = 1$ revisits the origin is uncountable.
Besides these sample path properties, the strong Markov property is also the key to some fascinating distributional identities. It enables us to understand, for example,
• the process $\{M(t) : t \ge 0\}$ of the running maxima $M(t) = \max_{0 \le s \le t} B(s)$ of a one-dimensional Brownian motion,
• the process $\{T_a : a \ge 0\}$ of the first hitting times $T_a = \inf\{t \ge 0 : B(t) = a\}$ of level $a$ by a one-dimensional Brownian motion,
• the process of the vertical first hitting positions by a two-dimensional Brownian motion of the lines $\{(x, y) \in \mathbb{R}^2 : x = a\}$, as a function of $a$.

In Chapter 3 we start exploring the rich relations of Brownian motion to harmonic analysis. To motivate this relation by a discrete analogue, suppose that $\{S_n : n \in \mathbb{N}\}$ is a simple, symmetric random walk in $\mathbb{Z}^2$ started at some $x \in \mathbb{Z}^2$. Here simple and symmetric means that the increments take each of the values $(0,1), (1,0), (0,-1), (-1,0)$ with probability $\frac14$. Suppose that $A \subset \mathbb{Z}^2$ is a bounded subset of the two-dimensional lattice and let $\partial A$ be the set of all vertices in $\mathbb{Z}^2 \setminus A$ which are adjacent to a vertex in $A$. Let $T = \inf\{n \ge 0 : S_n \notin A\}$ be the first exit time from $A$. Suppose moreover that $\varphi : \partial A \to \mathbb{R}$ is given, and define $f : A \cup \partial A \to \mathbb{R}$ by
\[ f(x) = \mathbb{E}\big[\varphi(S_T) \,\big|\, S_0 = x\big]. \]
Then it is easy to see that
\[ f(x) = \frac14 \sum_{y \sim x} f(y), \quad \text{for all } x \in A, \]
where we write $x \sim y$ if $x$ and $y$ are adjacent on the lattice $\mathbb{Z}^2$. This means that the value $f(x)$ is the mean over the values at the adjacent vertices. A function with this property is called discrete harmonic, and we have solved the (easy) problem of finding the discrete harmonic function on $A \cup \partial A$ with given boundary values $\varphi$ on $\partial A$.

A more challenging problem, which we solve in Chapter 3, is the corresponding continuous problem, called the Dirichlet problem. For its formulation, fix a connected open set $U \subset \mathbb{R}^2$ with nice boundary, and let $\varphi : \partial U \to \mathbb{R}$ be continuous. The harmonic functions $f : U \to \mathbb{R}$ on the domain $U$ are characterised by the differential equation
\[ \Delta f(x) := \sum_{j=1}^2 \frac{\partial^2 f}{\partial x_j^2}(x) = 0, \quad \text{for all } x \in U. \]
The Dirichlet problem is to find, for a given domain $U$ and boundary data $\varphi$, a continuous function $f : U \cup \partial U \to \mathbb{R}$ which is harmonic on $U$ and agrees with $\varphi$ on $\partial U$. In Theorem 3.12 we show that the unique solution of this problem is given as
\[ f(x) = \mathbb{E}\big[\varphi(B(T)) \,\big|\, B(0) = x\big], \quad \text{for } x \in U, \]
where $\{B(t) : t \ge 0\}$ is a Brownian motion and $T = \inf\{t \ge 0 : B(t) \notin U\}$ is the first exit time from $U$.

[Figure 2. Brownian motion and the Dirichlet problem]

We shall exploit this result, for example, to show exactly in which dimensions a particle following a Brownian motion drifts to infinity, see Theorem 3.19.
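The discrete Dirichlet problem above lends itself to a simple Monte Carlo sketch (ours, not from the book): estimate $f(x) = \mathbb{E}[\varphi(S_T) \mid S_0 = x]$ by running simple random walks until they leave $A$. The domain, the boundary data and all function names below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def dirichlet_estimate(x, in_A, phi, n_samples=5000):
    # Estimate f(x) = E[ phi(S_T) | S_0 = x ] for the simple symmetric random
    # walk on Z^2, where T is the first exit time from the set A.
    steps = np.array([(0, 1), (1, 0), (0, -1), (-1, 0)])
    total = 0.0
    for _ in range(n_samples):
        pos = np.array(x)
        while in_A(tuple(pos)):
            pos = pos + steps[rng.integers(4)]
        total += phi(tuple(pos))
    return total / n_samples

# Toy domain: the open lattice square A = {1,...,9}^2; boundary value 1 on the
# top edge and 0 elsewhere, so f(x) is the harmonic measure of the top edge.
in_A = lambda p: 0 < p[0] < 10 and 0 < p[1] < 10
phi = lambda p: 1.0 if p[1] == 10 else 0.0
print(dirichlet_estimate((5, 5), in_A, phi))   # by symmetry, close to 0.25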
In Chapter 4 we provide one of the major tools in our study of Brownian motion, the concept of Hausdorff dimension, and show how it can be applied in the context of Brownian motion. Indeed, when describing the sample paths of a Brownian motion one frequently encounters questions about the size of a given set: How big is the set of all points visited by a Brownian motion in the plane? How big is the set of double points of a planar Brownian motion? How big is the set of times at which Brownian motion visits a given set, say a point? For an example, let $\{B(t) : t \ge 0\}$ be Brownian motion on the real line and look at
\[ \mathrm{Zero} = \{t \ge 0 : B(t) = 0\}, \]
the set of its zeros. Although $t \mapsto B(t)$ is a continuous function, $\mathrm{Zero}$ is an infinite set. This set is big, as it is an uncountable set without isolated points. However, it is also small in the sense that its Lebesgue measure, denoted $\mathcal{L}$, is zero. Indeed, we have by Fubini's theorem:
\[ \mathbb{E}\big[\mathcal{L}(\mathrm{Zero})\big] = \mathbb{E} \int_0^\infty \mathbf{1}_{\mathrm{Zero}}(s)\,ds = \int_0^\infty \mathbb{P}\{s \in \mathrm{Zero}\}\,ds = \int_0^\infty \mathbb{P}\{B(s) = 0\}\,ds = 0. \]
$\mathrm{Zero}$ is a fractal set, and we show in Theorem 4.24 that its Hausdorff dimension is $1/2$.

In Chapter 5 we explore the relationship of random walk and Brownian motion. There are two natural ways to relate random walks directly to Brownian motion. Assume $d = 1$ for the moment and let $X$ be an arbitrary random variable, for simplicity with $\mathbb{E}X = 0$ and $\mathrm{Var}\,X = 1$.
• Random walks can be embedded into Brownian motion. The idea is that, given any centred distribution with finite variance, one can define a sequence $T_1 < T_2 < \cdots$ of (stopping) times for Brownian motion of controllable size, such that $\{S_n : n \ge 1\}$ given by $S_n = B(T_n)$ is a random walk with increments distributed like $X$. We say that the random walk is embedded in Brownian motion.
• Random walk paths converge in distribution to Brownian motion paths. The main result is Donsker's invariance principle, which states that, for the random walk $\{S_k : k \in \mathbb{N}\}$ with increments distributed like $X$, the random curve obtained by connecting the points $S_k/\sqrt{n}$, in order $k = 1, 2, \ldots, n$, linearly in $1/n$ time units converges in law to Brownian motion.

These two principles allow us to answer a lot of questions about random walks by looking at Brownian motion instead. Why can this be advantageous? First, in many cases the fact that Brownian motion is a continuous time process is an advantage over discrete time random walks. For example, Brownian motion has scaling invariance properties, which can be a powerful tool in the study of its path properties. Second, even in cases where the discrete, combinatorial structures of a simple random walk model are the right tool in the proof of a statement, the translation into a Brownian motion setting frequently helps to extend the result from a specific random walk, e.g. the simple random walk on the integers, where $X_i$ takes values $\pm 1$, to a wider range of random walks. In Chapter 5 we give several examples of how results about Brownian motion can be exploited for random walks.
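Donsker rescaling is easy to make visible in a few lines (a sketch of ours, not the book's; it assumes matplotlib for plotting): the interpolated and rescaled walk looks, for large $n$, like a Brownian path.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

# Rescaled simple random walk: plot t = k/n against S_k / sqrt(n).  For large n
# the linearly interpolated curve is close, in law, to a Brownian path.
for n in [100, 10000]:
    S = np.concatenate([[0.0], np.cumsum(rng.choice([-1.0, 1.0], size=n))])
    plt.plot(np.arange(n + 1) / n, S / np.sqrt(n), label=f"n = {n}")
plt.xlabel("t"); plt.legend(); plt.show()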
In Chapter 6 we look again at Brownian motion in dimension $d = 1$. In this case the occupation measure $\mu_t$ of the Brownian motion, defined by
\[ \mu_t(A) = \int_0^t \mathbf{1}\{B(s) \in A\}\,ds, \quad \text{for } A \subset \mathbb{R} \text{ Borel}, \]
has a density. To see this, use first Fatou's lemma and then Fubini's theorem,
\[ \mathbb{E} \int \liminf_{r \downarrow 0} \frac{\mu_t(B(x,r))}{\mathcal{L}(B(x,r))}\,d\mu_t(x) \le \liminf_{r \downarrow 0} \frac{1}{2r}\, \mathbb{E} \int \mu_t(B(x,r))\,d\mu_t(x) = \liminf_{r \downarrow 0} \frac{1}{2r} \int_0^t \int_0^t \mathbb{P}\{|B(s_1) - B(s_2)| \le r\}\,ds_1\,ds_2. \]
Using that the density of a standard normal random variable $X$ is bounded by one, we get
\[ \mathbb{P}\{|B(s_1) - B(s_2)| \le r\} = \mathbb{P}\Big\{|X| \le \tfrac{r}{\sqrt{|s_1 - s_2|}}\Big\} \le \tfrac{r}{\sqrt{|s_1 - s_2|}}, \]
and this implies that
\[ \liminf_{r \downarrow 0} \frac{1}{2r} \int_0^t \int_0^t \mathbb{P}\{|B(s_1) - B(s_2)| \le r\}\,ds_1\,ds_2 \le \frac12 \int_0^t \int_0^t \frac{ds_1\,ds_2}{\sqrt{|s_1 - s_2|}} < \infty. \]
Hence
\[ \liminf_{r \downarrow 0} \frac{\mu_t(B(x,r))}{\mathcal{L}(B(x,r))} < \infty \quad \text{for $\mu_t$-almost every } x. \]
By the Radon-Nikodym theorem this implies that a density $\{L^a(t) : a \in \mathbb{R}\}$ exists, which we call the Brownian local time. We shall construct this density by probabilistic means, show that it is jointly continuous in $a$ and $t$, and characterise it as a stochastic process.

One of the most important invariance properties of Brownian motion is conformal invariance, which we discuss in Chapter 7. To make this plausible, think of an angle-preserving linear mapping $L : \mathbb{R}^d \to \mathbb{R}^d$, like a rotation followed by multiplication by $a$. Take a random walk started in zero with increments of mean zero and covariance matrix the identity, and look at its image under $L$. This image is again a random walk, and its increments are distributed like $LX$. Appropriately rescaled as in Donsker's invariance principle, both random walks converge: the first to a Brownian motion, the second to a process satisfying our conditions (1)-(4), but with a slightly different covariance matrix. This process can be identified as a time-changed Brownian motion $\{B(a^2 t) : t \ge 0\}$.

This easy observation has a deeper, local counterpart for planar Brownian motion: Suppose that $\phi : U \to V$ is a conformal mapping of a simply connected domain $U \subset \mathbb{R}^2$ onto a domain $V \subset \mathbb{R}^2$. Conformal mappings are locally angle-preserving, and the Riemann mapping theorem of complex analysis tells us that a lot of these mappings exist.

[Figure 3. A conformal mapping of Brownian paths]

Suppose that $\{B(t) : t \ge 0\}$ is a standard Brownian motion started in some point $x \in U$ and $\tau = \inf\{t > 0 : B(t) \notin U\}$ is the first exit time of the path from the domain $U$. Then it turns out that the image process $\{\phi(B(t)) : 0 \le t \le \tau\}$ is a time-changed Brownian motion in the domain $V$, stopped when it leaves $V$. In order to prove this we have to develop a little bit of the theory of stochastic integration with respect to a Brownian motion, and we shall give a lot of further applications of this tool in Chapter 7.
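Conformal invariance can at least be illustrated graphically (a rough sketch of ours, not from the book; it plots the image of a sampled planar path under the particular map $\phi(z) = z^2$, ignores the stopping at the exit time, and assumes matplotlib):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)

# A planar Brownian path, stored as complex numbers, started at z = 1
# (away from the critical point of the map at 0).
n = 100000
dt = 1.0 / n
Z = 1.0 + np.cumsum(rng.normal(0, np.sqrt(dt), n) + 1j * rng.normal(0, np.sqrt(dt), n))

W = Z ** 2            # image under the conformal map phi(z) = z^2

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.plot(Z.real, Z.imag, linewidth=0.2); ax1.set_title("B(t)")
ax2.plot(W.real, W.imag, linewidth=0.2); ax2.set_title("phi(B(t))")
plt.show()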
In Chapter 8 we develop the potential theory of Brownian motion. The motivating problem is, given a compact set $A \subset \mathbb{R}^d$, to find the probability that a Brownian motion $\{B(t) : t \ge 0\}$ hits the set $A$, i.e. that there exists $t > 0$ with $B(t) \in A$. This problem will be answered in the best possible way by Theorem 8.23, which is a modern extension of a classical result of Kakutani: The hitting probability can be approximated by the capacity of $A$ with respect to the Martin kernel, up to a factor of two.

With a wide range of tools at hand, in Chapter 9 we study the self-intersections of Brownian motion: For example, a point $x \in \mathbb{R}^d$ is called a double point of $\{B(t) : t \ge 0\}$ if there exist times $0 < t_1 < t_2$ such that $B(t_1) = B(t_2) = x$. In which dimensions does Brownian motion have double points? How big is the set of double points? We show that, almost surely,
• in dimensions $d \ge 4$ no double points exist,
• in dimension $d = 3$ (and $d = 1$) double points exist and the set of double points has Hausdorff dimension one,
• in dimension $d = 2$ double points exist and the set of double points has Hausdorff dimension two.
In dimension $d = 2$ we find a surprisingly complex situation: While every fixed point $x \in \mathbb{R}^2$ is almost surely not visited by a Brownian motion, there exist (random) points in the plane which are visited infinitely often, even uncountably often. This result, Theorem 9.24, is one of the highlights of this book.

Chapter 10 deals with exceptional points for Brownian motion and with Hausdorff dimension spectra of families of exceptional points. To explain an example, we look at a Brownian motion in the plane run for one time unit, which is a continuous curve $\{B(t) : t \in [0,1]\}$. If $x = B(t) \in \mathbb{R}^2$, for some $0 < t < 1$, is a point on the curve, one can use polar coordinates centred in $x$ to define, for every time interval $(t + \varepsilon, 1)$, the number of windings the curve performs around $x$ in this time interval, counterclockwise windings carrying a positive and clockwise windings a negative sign. Denoting this number by $\theta(\varepsilon)$, we obtain in Chapter 7 that, almost surely,
\[ \limsup_{\varepsilon \downarrow 0} \theta(\varepsilon) = \infty \quad \text{and} \quad \liminf_{\varepsilon \downarrow 0} \theta(\varepsilon) = -\infty. \]
In other words, for any point on the curve, almost surely, the Brownian motion performs an infinite number of full windings in both directions.

Still, there exist random points on the curve which are exceptional in the sense that Brownian motion performs no windings around them at all. This follows from an easy geometric argument: Take a point in $\mathbb{R}^2$ with coordinates $(x_1, x_2)$ such that $x_1 = \min\{x : (x, x_2) \in B[0,1]\}$, i.e. a point which is the leftmost on the intersection of the Brownian curve and the line $\{(z, x_2) : z \in \mathbb{R}\}$, for some $x_2 \in \mathbb{R}$. Then Brownian motion does not perform any full windings around $(x_1, x_2)$, as this would necessarily imply that it crosses the half-line $\{(x, x_2) : x < x_1\}$, contradicting the minimality of $x_1$.

One can ask for a more extreme deviation from typical behaviour: A point $x = B(t)$ is an $\alpha$-cone point if the Brownian curve is contained in an open cone with tip in $x = (x_1, x_2)$, central axis $\{(x_1, x) : x > x_2\}$ and opening angle $\alpha$. Note that the points described in the previous paragraph are $2\pi$-cone points in this sense. We show that $\alpha$-cone points exist exactly if $\alpha \in [\pi, 2\pi]$, and prove that for every such $\alpha$, almost surely,
\[ \dim\big\{x \in \mathbb{R}^2 : x \text{ is an } \alpha\text{-cone point}\big\} = 2 - \frac{2\pi}{\alpha}. \]
This is an example of a Hausdorff dimension spectrum, a topic which has been at the centre of some research activity at the beginning of the current millennium.

CHAPTER 1
Definition and first properties of Brownian motion

In this chapter we focus on one-dimensional, or linear, Brownian motion. We start with Paul Lévy's construction of Brownian motion and discuss two fundamental sample path properties, continuity and differentiability.

1. Paul Lévy's construction of Brownian motion

1.1. Definition of Brownian motion. Brownian motion is closely linked to the normal distribution. Recall that a random variable $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$ if
\[ \mathbb{P}\{X > x\} = \frac{1}{\sqrt{2\pi\sigma^2}} \int_x^\infty e^{-\frac{(u-\mu)^2}{2\sigma^2}}\,du, \quad \text{for all } x \in \mathbb{R}. \]

Definition 1.1. A real-valued stochastic process $\{B(t) : t \ge 0\}$ is called a (linear) Brownian motion with start in $x \in \mathbb{R}$ if the following holds:
• $B(0) = x$,
• the process has independent increments, i.e. for all times $0 \le t_1 \le t_2 \le \ldots \le t_n$ the increments $B(t_n) - B(t_{n-1}), B(t_{n-1}) - B(t_{n-2}), \ldots, B(t_2) - B(t_1)$ are independent random variables,
• for all $t \ge 0$ and $h > 0$, the increments $B(t+h) - B(t)$ are normally distributed with expectation zero and variance $h$,
• almost surely, the function $t \mapsto B(t)$ is continuous.
We say that $\{B(t) : t \ge 0\}$ is a standard Brownian motion if $x = 0$. ⋄
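A small sanity check of Definition 1.1 by simulation (ours, not part of the text): increments of a grid approximation have variance equal to the length of the time interval and are uncorrelated over disjoint intervals.

import numpy as np

rng = np.random.default_rng(4)

# m sampled paths on a grid over [0, 1]; paths[:, i] approximates B((i+1)/n).
m, n = 10000, 500
dt = 1.0 / n
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (m, n)), axis=1)

def B(t):
    return paths[:, int(round(t * n)) - 1]   # path values at grid time t

inc1 = B(0.5) - B(0.2)                          # increment over [0.2, 0.5]
inc2 = B(0.9) - B(0.5)                          # increment over [0.5, 0.9]
print(inc1.var(), "~", 0.3)                     # variance = interval length
print(np.corrcoef(inc1, inc2)[0, 1], "~", 0.0)  # disjoint increments: uncorrelated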
Let us step back and look at some technical points. We have defined Brownian motion as a stochastic process $\{B(t) : t \ge 0\}$, which is just a family of (uncountably many) random variables $\omega \mapsto B(t, \omega)$ defined on a single probability space $(\Omega, \mathcal{A}, \mathbb{P})$. At the same time, a stochastic process can also be interpreted as a random function, with the sample functions defined by $t \mapsto B(t, \omega)$. The sample path properties of a stochastic process are the properties of these random functions, and it is these properties that will interest us most in this book.

Remark 1.2. When considering a stochastic process as a random function it is sometimes useful to assume that the mapping $(t, \omega) \mapsto B(t, \omega)$ is measurable on the product space $[0, \infty) \times \Omega$. We shall not need any such assumption before Chapter 7. ⋄

[Figure 1. Graphs of five sampled Brownian motions]

By the marginal distributions of a stochastic process $\{B(t) : t \ge 0\}$ we mean the laws of all the finite dimensional random vectors $(B(t_1), B(t_2), \ldots, B(t_n))$, for all $0 \le t_1 \le t_2 \le \ldots \le t_n$. To describe these joint laws it suffices to describe the joint law of $B(0)$ and all the increments $(B(t_1) - B(0), B(t_2) - B(t_1), \ldots, B(t_n) - B(t_{n-1}))$, for all $0 \le t_1 \le t_2 \le \ldots \le t_n$. This is what we have done in the first three items of the definition, which specify the marginal distributions of Brownian motion. However, the last item, almost sure continuity, is also crucial, and this is information which goes beyond the marginal distributions of the process in the sense above, technically because the set $\{\omega \in \Omega : t \mapsto B(t, \omega) \text{ continuous}\}$ is in general not in the $\sigma$-algebra generated by the random vectors $(B(t_1), B(t_2), \ldots, B(t_n))$, $n \in \mathbb{N}$.

Example 1.3. Suppose that $B$ is a Brownian motion and $U$ is an independent random variable which is uniformly distributed on $[0,1]$. Then the process $\{\tilde{B}(t) : t \ge 0\}$ defined by
\[ \tilde{B}(t) = \begin{cases} B(t) & \text{if } t \ne U, \\ 0 & \text{if } t = U, \end{cases} \]
has the same marginal distributions as a Brownian motion, but is discontinuous if $B(U) \ne 0$, i.e. with probability one, and hence this process is not a Brownian motion. ⋄

We see that, if we are interested in the sample path properties of a stochastic process, we may need to specify more than just its marginal distributions. Suppose $\mathcal{X}$ is a property a function might or might not have, like continuity, differentiability, etc. We say that a process $\{X(t) : t \ge 0\}$ has property $\mathcal{X}$ almost surely if there exists $A \in \mathcal{A}$ such that $\mathbb{P}(A) = 1$ and
\[ A \subset \big\{\omega \in \Omega : t \mapsto X(t, \omega) \text{ has property } \mathcal{X}\big\}. \]
Note that the set on the right need not lie in $\mathcal{A}$.

1.2. Paul Lévy's construction of Brownian motion. It is a nontrivial issue whether the conditions imposed on the marginal distributions in the definition of Brownian motion allow the process to have continuous sample paths, or whether there is a contradiction. In this section we show that there is no contradiction and, fortunately, Brownian motion exists.

Theorem 1.4 (Wiener 1923). Standard Brownian motion exists.

We construct Brownian motion as a uniform limit of continuous functions, to ensure that it automatically has continuous paths. Recall that we need only construct a standard Brownian motion $\{B(t) : t \ge 0\}$, as $X(t) = x + B(t)$ is a Brownian motion with starting point $x$. The proof exploits properties of Gaussian random vectors, which are the higher-dimensional analogue of the normal distribution.

Definition 1.5. A random vector $(X_1, \ldots, X_n)$ is called a Gaussian random vector if there exist an $n \times m$ matrix $A$ and an $n$-dimensional vector $b$ such that $X^T = AY + b$, where $Y$ is an $m$-dimensional vector with independent standard normal entries. ⋄

Basic facts about Gaussian random variables are collected in Appendix II.2.
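Definition 1.5 is easy to probe numerically (our sketch, not the book's; the matrix A and the vector b are arbitrary choices): sample $X^T = AY + b$ and compare the empirical covariance with $AA^T$.

import numpy as np

rng = np.random.default_rng(5)

# X^T = A Y + b with Y having independent standard normal entries; the
# covariance matrix of X is then A A^T, independently of b.
A = np.array([[1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0]])
b = np.array([1.0, -2.0])

Y = rng.normal(size=(3, 200000))
X = A @ Y + b[:, None]

print(np.cov(X))      # empirical covariance of the samples
print(A @ A.T)        # theoretical covariance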
The following exercise is an easy warm-up to the proof of Wiener's theorem.

Proof of Wiener's theorem. We first construct Brownian motion on the interval $[0,1]$ as a random element of the space $C[0,1]$ of continuous functions on $[0,1]$. The idea is to construct the right joint distribution of Brownian motion step by step on the finite sets
\[ \mathcal{D}_n = \Big\{ \tfrac{k}{2^n} : 0 \le k \le 2^n \Big\} \]
of dyadic points. We then interpolate the values on $\mathcal{D}_n$ linearly and check that the uniform limit of these continuous functions exists and is a Brownian motion.

To do this, let $\mathcal{D} = \bigcup_{n=0}^\infty \mathcal{D}_n$ and let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space on which a collection $\{Z_t : t \in \mathcal{D}\}$ of independent, standard normally distributed random variables can be defined. Let $B(0) := 0$ and $B(1) := Z_1$. For each $n \in \mathbb{N}$ we define the random variables $B(d)$, $d \in \mathcal{D}_n$, such that
(1) for all $r < s < t$ in $\mathcal{D}_n$, the random variable $B(t) - B(s)$ is normally distributed with mean zero and variance $t - s$, and is independent of $B(s) - B(r)$,
(2) the vectors $(B(d) : d \in \mathcal{D}_n)$ and $(Z_t : t \in \mathcal{D} \setminus \mathcal{D}_n)$ are independent.

Note that we have already done this for $\mathcal{D}_0 = \{0, 1\}$. Proceeding inductively, we may assume that we have succeeded in doing it for some $n-1$. We then define $B(d)$ for $d \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}$ by
\[ B(d) = \frac{B(d - 2^{-n}) + B(d + 2^{-n})}{2} + \frac{Z_d}{2^{(n+1)/2}}. \]
Note that the first summand is the linear interpolation of the values of $B$ at the neighbouring points of $d$ in $\mathcal{D}_{n-1}$. Therefore $B(d)$ is independent of $(Z_t : t \in \mathcal{D} \setminus \mathcal{D}_n)$ and the second property is fulfilled. Moreover, as $\frac12\big[B(d + 2^{-n}) - B(d - 2^{-n})\big]$ depends only on $(Z_t : t \in \mathcal{D}_{n-1})$, it is independent of $Z_d / 2^{(n+1)/2}$. Both terms are normally distributed with mean zero and variance $2^{-(n+1)}$. Hence their sum $B(d) - B(d - 2^{-n})$ and their difference $B(d + 2^{-n}) - B(d)$ are independent and normally distributed with mean zero and variance $2^{-n}$, by Corollary II.3.4.

Indeed, all increments $B(d) - B(d - 2^{-n})$, for $d \in \mathcal{D}_n \setminus \{0\}$, are independent. To see this it suffices to show that they are pairwise independent, as the vector of these increments is Gaussian. We have seen in the previous paragraph that pairs $B(d) - B(d - 2^{-n})$, $B(d + 2^{-n}) - B(d)$ with $d \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}$ are independent. The other possibility is that the increments are over intervals separated by some $d \in \mathcal{D}_{n-1}$. Choose $d \in \mathcal{D}_j$ with this property and $j$ minimal, so that the two intervals are contained in $[d - 2^{-j}, d]$ and $[d, d + 2^{-j}]$, respectively. By induction, the increments over these two intervals of length $2^{-j}$ are independent, and the increments over the intervals of length $2^{-n}$ are constructed from the independent increments $B(d) - B(d - 2^{-j})$ and $B(d + 2^{-j}) - B(d)$, respectively, using a disjoint set of variables $(Z_t : t \in \mathcal{D}_n)$. Hence they are independent, and this implies the first property and completes the induction step.

Having thus chosen the values of the process on all dyadic points, we interpolate between them. Formally, define
\[ F_0(t) = \begin{cases} Z_1 & \text{for } t = 1, \\ 0 & \text{for } t = 0, \\ \text{linear} & \text{in between,} \end{cases} \]
and, for each $n \ge 1$,
\[ F_n(t) = \begin{cases} 2^{-(n+1)/2} Z_t & \text{for } t \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}, \\ 0 & \text{for } t \in \mathcal{D}_{n-1}, \\ \text{linear} & \text{between consecutive points in } \mathcal{D}_n. \end{cases} \]
These functions are continuous on $[0,1]$, and for all $n$ and $d \in \mathcal{D}_n$,
\[ B(d) = \sum_{i=0}^n F_i(d) = \sum_{i=0}^\infty F_i(d). \tag{1.1} \]
This can be seen by induction. It holds for $n = 0$. Suppose that it holds for $n-1$. Let $d \in \mathcal{D}_n \setminus \mathcal{D}_{n-1}$. Since for $0 \le i \le n-1$ the function $F_i$ is linear on $[d - 2^{-n}, d + 2^{-n}]$, we get
\[ \sum_{i=0}^{n-1} F_i(d) = \sum_{i=0}^{n-1} \frac{F_i(d - 2^{-n}) + F_i(d + 2^{-n})}{2} = \frac{B(d - 2^{-n}) + B(d + 2^{-n})}{2}. \]
Since $F_n(d) = 2^{-(n+1)/2} Z_d$, this gives (1.1).
On the other hand we have, by definition of $Z_d$ and by Lemma II.3.1, for $c > 0$ and large $n$,
\[ \mathbb{P}\{|Z_d| \ge c\sqrt{n}\} \le \exp\Big(-\frac{c^2 n}{2}\Big), \]
so that the series
\[ \sum_{n=0}^\infty \mathbb{P}\big\{\text{there exists } d \in \mathcal{D}_n \text{ with } |Z_d| \ge c\sqrt{n}\big\} \le \sum_{n=0}^\infty \sum_{d \in \mathcal{D}_n} \mathbb{P}\{|Z_d| \ge c\sqrt{n}\} \le \sum_{n=0}^\infty (2^n + 1) \exp\Big(-\frac{c^2 n}{2}\Big) \]
converges as soon as $c > \sqrt{2 \log 2}$. Fix such a $c$. By the Borel-Cantelli lemma there exists a random (but almost surely finite) $N$ such that for all $n \ge N$ and $d \in \mathcal{D}_n$ we have $|Z_d| < c\sqrt{n}$. Hence, for all $n \ge N$,
\[ \|F_n\|_\infty < c\sqrt{n}\, 2^{-n/2}. \tag{1.2} \]
This upper bound implies that, almost surely, the series
\[ B(t) = \sum_{n=0}^\infty F_n(t) \]
is uniformly convergent on $[0,1]$. We denote the continuous limit by $\{B(t) : t \in [0,1]\}$.

It remains to check that the increments of this process have the right marginal distributions. This follows directly from the properties of $B$ on the dense set $\mathcal{D} \subset [0,1]$ and the continuity of the paths. Indeed, suppose that $t_1 < t_2 < \cdots < t_n$ are in $[0,1]$. We find $t_{1,k} \le t_{2,k} \le \cdots \le t_{n,k}$ in $\mathcal{D}$ with $\lim_{k \uparrow \infty} t_{i,k} = t_i$ and infer from the continuity of $B$ that, for $1 \le i \le n-1$,
\[ B(t_{i+1}) - B(t_i) = \lim_{k \uparrow \infty} \big(B(t_{i+1,k}) - B(t_{i,k})\big). \]
As $\lim_{k \uparrow \infty} \mathbb{E}[B(t_{i+1,k}) - B(t_{i,k})] = 0$ and
\[ \lim_{k \uparrow \infty} \mathrm{Cov}\big(B(t_{i+1,k}) - B(t_{i,k}),\ B(t_{j+1,k}) - B(t_{j,k})\big) = \lim_{k \uparrow \infty} \mathbf{1}_{\{i=j\}}\, (t_{i+1,k} - t_{i,k}) = \mathbf{1}_{\{i=j\}}\, (t_{i+1} - t_i), \]
the increments $B(t_{i+1}) - B(t_i)$ are, by Proposition II.3.7, independent Gaussian random variables with mean $0$ and variance $t_{i+1} - t_i$, as required.

We have thus constructed a continuous process $B : [0,1] \to \mathbb{R}$ with the same marginal distributions as Brownian motion. Take a sequence $B_1, B_2, \ldots$ of independent $C[0,1]$-valued random variables with the distribution of this process, and define $\{B(t) : t \ge 0\}$ by gluing together the parts, more precisely by
\[ B(t) = B_{\lfloor t \rfloor}\big(t - \lfloor t \rfloor\big) + \sum_{i=0}^{\lfloor t \rfloor - 1} B_i(1), \quad \text{for all } t \ge 0. \]
This defines a continuous random function $B : [0, \infty) \to \mathbb{R}$, and one can see easily from what we have shown so far that the requirements of a standard Brownian motion are fulfilled.

Remark 1.6. A stochastic process $\{Y(t) : t \ge 0\}$ is called a Gaussian process if for all $t_1 < t_2 < \ldots < t_n$ the vector $(Y(t_1), \ldots, Y(t_n))$ is a Gaussian random vector. It is shown in Exercise 1.2 that Brownian motion with start in $x \in \mathbb{R}$ is a Gaussian process. ⋄
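The construction in the proof translates almost line by line into code. The following sketch is ours, not from the book (the function name levy_brownian is made up): it performs the dyadic refinement, giving each new midpoint the average of its neighbours plus an independent Gaussian displacement of standard deviation $2^{-(n+1)/2}$.

import numpy as np

rng = np.random.default_rng(6)

def levy_brownian(levels):
    # Brownian motion on [0, 1] by dyadic refinement: at level n, the midpoint
    # of each interval gets the average of its endpoints plus an independent
    # Gaussian of standard deviation 2^(-(n+1)/2).
    B = np.array([0.0, rng.normal()])          # values at t = 0 and t = 1
    for n in range(1, levels + 1):
        mid = (B[:-1] + B[1:]) / 2.0
        mid += rng.normal(size=mid.size) * 2.0 ** (-(n + 1) / 2.0)
        out = np.empty(2 * B.size - 1)
        out[0::2], out[1::2] = B, mid          # interleave old points and midpoints
        B = out
    return B                                   # values at t = k / 2**levels

path = levy_brownian(12)                       # 4097 points on [0, 1]
print(path[:5])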
1.3. Simple invariance properties of Brownian motion. One of the themes of this book is that many natural sets that can be derived from the sample paths of Brownian motion are in some sense random fractals. An intuitive approach to fractals is that they are sets which have a nontrivial geometric structure at all scales. A key role in this behaviour is played by the very simple scaling invariance property of Brownian motion, which we now formulate. It identifies a transformation on the space of functions which changes the individual Brownian random functions but leaves their distribution unchanged.

Lemma 1.7 (Scaling invariance). Suppose $\{B(t) : t \ge 0\}$ is a standard Brownian motion and let $a > 0$. Then the process $\{X(t) : t \ge 0\}$ defined by $X(t) = \frac1a B(a^2 t)$ is also a standard Brownian motion.

Proof. Continuity of the paths, independence and stationarity of the increments remain unchanged under the rescaling. It remains to observe that $X(t) - X(s) = \frac1a\big(B(a^2 t) - B(a^2 s)\big)$ is normally distributed with expectation $0$ and variance $(1/a^2)(a^2 t - a^2 s) = t - s$.

Remark 1.8. Scaling invariance has many useful consequences. As an example, let $a < 0 < b$ and look at $T(a, b) = \inf\{t \ge 0 : B(t) = a \text{ or } B(t) = b\}$, the first exit time of a one-dimensional standard Brownian motion from the interval $[a, b]$. Then, with $X(t) = \frac1a B(a^2 t)$, we have
\[ \mathbb{E}\,T(a, b) = a^2\, \mathbb{E} \inf\big\{t \ge 0 : X(t) = 1 \text{ or } X(t) = b/a\big\} = a^2\, \mathbb{E}\,T(b/a, 1), \]
which implies that $\mathbb{E}\,T(-b, b)$ is a constant multiple of $b^2$. Also
\[ \mathbb{P}\big\{\{B(t) : t \ge 0\} \text{ exits } [a, b] \text{ at } a\big\} = \mathbb{P}\big\{\{X(t) : t \ge 0\} \text{ exits } [b/a, 1] \text{ at } 1\big\} \]
is only a function of the ratio $b/a$. The scaling invariance property will be used extensively in all the following chapters, and we shall often use the phrase that a fact holds 'by Brownian scaling' to indicate this. ⋄

We shall discuss a very powerful extension of the scaling invariance property, the conformal invariance property, in Chapter 7 of the book.

A further useful invariance property of Brownian motion, invariance under time inversion, can be identified easily. As above, the transformation on the space of functions changes the individual Brownian random functions without changing the distribution.

Theorem 1.9 (Time inversion). Suppose $\{B(t) : t \ge 0\}$ is a standard Brownian motion. Then the process $\{X(t) : t \ge 0\}$ defined by
\[ X(t) = \begin{cases} 0 & \text{for } t = 0, \\ t B(1/t) & \text{for } t > 0, \end{cases} \]
is also a standard Brownian motion.

Proof. Recall that the finite dimensional marginals $(B(t_1), \ldots, B(t_n))$ of Brownian motion are Gaussian random vectors and are therefore characterised by $\mathbb{E}[B(t_i)] = 0$ and $\mathrm{Cov}(B(t_i), B(t_j)) = t_i$ for $0 \le t_i \le t_j$. Obviously, $\{X(t) : t \ge 0\}$ is also a Gaussian process and the Gaussian random vectors $(X(t_1), \ldots, X(t_n))$ have expectation $0$. The covariances, for $t > 0$, $h \ge 0$, are given by
\[ \mathrm{Cov}\big(X(t+h), X(t)\big) = (t+h)\,t\ \mathrm{Cov}\big(B(1/(t+h)), B(1/t)\big) = t(t+h)\, \frac{1}{t+h} = t. \]
Hence the laws of all the finite dimensional marginals $(X(t_1), X(t_2), \ldots, X(t_n))$, for $0 \le t_1 \le \ldots \le t_n$, are the same as for Brownian motion. The paths of $t \mapsto X(t)$ are clearly continuous for all $t > 0$, and in $t = 0$ we use the following two facts: First, the distribution of $X$ on the rationals $\mathbb{Q}$ is the same as for a Brownian motion, hence
\[ \lim_{\substack{t \downarrow 0 \\ t \in \mathbb{Q}}} X(t) = 0 \quad \text{almost surely.} \]
And second, $X$ is almost surely continuous on $(0, \infty)$, so that
\[ 0 = \lim_{\substack{t \downarrow 0 \\ t \in \mathbb{Q}}} X(t) = \lim_{t \downarrow 0} X(t) \quad \text{almost surely.} \]
Hence $\{X(t) : t \ge 0\}$ has almost surely continuous paths, and is a Brownian motion.

Remark 1.10. The symmetry inherent in the time inversion property becomes more apparent if one considers the Ornstein-Uhlenbeck diffusion $\{X(t) : t \in \mathbb{R}\}$, which is given by $X(t) = e^{-t} B(e^{2t})$ for all $t \in \mathbb{R}$. This is a Markov process (this will be explained properly in Chapter 2.3) such that $X(t)$ is standard normally distributed for all $t$. It is a diffusion with a drift towards the origin proportional to the distance from the origin. Unlike Brownian motion, the Ornstein-Uhlenbeck diffusion is time reversible: the time inversion formula gives that $\{X(t) : t \ge 0\}$ and $\{X(-t) : t \ge 0\}$ have the same law. For $t$ near $-\infty$, $X(t)$ relates to the Brownian motion near time $0$, and for $t$ near $\infty$, $X(t)$ relates to the Brownian motion near $\infty$. ⋄

Time inversion is a useful tool to relate the properties of Brownian motion in a neighbourhood of time $t = 0$ to properties at infinity. To illustrate the use of time inversion we exploit Theorem 1.9 to get an interesting statement about the long-term behaviour from a trivial statement at the origin.

Corollary 1.11 (Law of large numbers). Almost surely, $\lim_{t \to \infty} \frac{B(t)}{t} = 0$.

Proof. Let $\{X(t) : t \ge 0\}$ be as defined in Theorem 1.9. Using this theorem, we see that $\lim_{t \to \infty} B(t)/t = \lim_{t \to \infty} X(1/t) = X(0) = 0$.
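The invariance properties of this section are easy to check in distribution by Monte Carlo (our sketch, not the book's): for $X(t) = \frac1a B(a^2 t)$ the second moments of Brownian motion should be reproduced, i.e. $\mathrm{Var}\,X(t) = t$ and $\mathrm{Cov}(X(s), X(t)) = s$ for $s \le t$.

import numpy as np

rng = np.random.default_rng(9)

# X(t) = (1/a) B(a^2 t); build B(a^2 t) from B(a^2 s) plus an independent
# increment of variance a^2 (t - s).
a, s, t = 3.0, 0.4, 1.0
m = 200000
Bs = rng.normal(0, np.sqrt(a**2 * s), m)                 # B(a^2 s)
Bt = Bs + rng.normal(0, np.sqrt(a**2 * (t - s)), m)      # B(a^2 t)
Xs, Xt = Bs / a, Bt / a
print(np.var(Xt), "~", t)                  # Var X(t) = t
print(np.cov(Xs, Xt)[0, 1], "~", s)        # Cov(X(s), X(t)) = s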
In the next two sections we discuss the two basic analytic properties of Brownian motion as a random function: its continuity and its differentiability properties.

2. Continuity properties of Brownian motion

The definition of Brownian motion already requires that the sample functions are continuous almost surely. This implies that on the interval $[0,1]$ (or any other compact interval) the sample functions are uniformly continuous, i.e. there exists some (random) function $\varphi$ with $\lim_{h \downarrow 0} \varphi(h) = 0$, called a modulus of continuity of the function $B : [0,1] \to \mathbb{R}$, such that
\[ \limsup_{h \downarrow 0} \sup_{0 \le t \le 1-h} \frac{|B(t+h) - B(t)|}{\varphi(h)} \le 1. \tag{2.1} \]
Can we achieve such a bound with a deterministic function $\varphi$, i.e. is there a nonrandom modulus of continuity for Brownian motion? The answer is yes, as the following theorem shows.

Theorem 1.12. There exists a constant $C > 0$ such that, almost surely, for every sufficiently small $h > 0$ and all $0 \le t \le 1-h$,
\[ |B(t+h) - B(t)| \le C \sqrt{h \log(1/h)}. \]

Proof. This follows quite elegantly from Lévy's construction of Brownian motion. Recall the notation introduced there, and that we have represented Brownian motion as a series
\[ B(t) = \sum_{n=0}^\infty F_n(t), \]
where each $F_n$ is a piecewise linear function. The derivative of $F_n$ exists almost everywhere, and by definition and (1.2), for any $c > \sqrt{2 \log 2}$ there exists a (random) $N \in \mathbb{N}$ such that, for all $n > N$,
\[ \|F_n'\|_\infty \le \frac{2 \|F_n\|_\infty}{2^{-n}} \le 2 c \sqrt{n}\, 2^{n/2}. \]
Now for each $t, t+h \in [0,1]$, using the mean-value theorem,
\[ |B(t+h) - B(t)| \le \sum_{n=0}^\infty |F_n(t+h) - F_n(t)| \le \sum_{n=0}^{\ell} h \|F_n'\|_\infty + \sum_{n=\ell+1}^\infty 2 \|F_n\|_\infty. \]
Hence, using (1.2) again, we get for all $\ell > N$ that this is bounded by
\[ h \sum_{n=0}^N \|F_n'\|_\infty + 2ch \sum_{n=N}^{\ell} \sqrt{n}\, 2^{n/2} + 2c \sum_{n=\ell+1}^\infty \sqrt{n}\, 2^{-n/2}. \]
We now suppose that $h$ is (again random and) small enough that the first summand is smaller than $\sqrt{h \log(1/h)}$, and that $\ell$, defined by $2^{-\ell} < h \le 2^{-\ell+1}$, exceeds $N$. For this choice of $\ell$ the second and third summands are also bounded by constant multiples of $\sqrt{h \log(1/h)}$, as both sums are dominated by their largest element. Hence we get (2.1) with a deterministic function $\varphi(h) = C \sqrt{h \log(1/h)}$.

This upper bound is pretty close to the optimal result. The following lower bound confirms that the only missing bit is the precise value of the constant.

Theorem 1.13. For every constant $c < \sqrt{2}$, almost surely, for every $\varepsilon > 0$ there exist $0 < h < \varepsilon$ and $t \in [0, 1-h]$ with
\[ |B(t+h) - B(t)| \ge c \sqrt{h \log(1/h)}. \]

Proof. Let $c < \sqrt{2}$ and define, for integers $k, n \ge 0$, the events
\[ A_{k,n} = \big\{ B\big((k+1)e^{-n}\big) - B\big(k e^{-n}\big) > c \sqrt{n}\, e^{-n/2} \big\}. \]
Then, using Lemma II.3.1, for any $k \ge 0$,
\[ \mathbb{P}(A_{k,n}) = \mathbb{P}\big\{B(e^{-n}) > c \sqrt{n}\, e^{-n/2}\big\} = \mathbb{P}\big\{B(1) > c \sqrt{n}\big\} \ge \frac{c \sqrt{n}}{c^2 n + 1}\, e^{-c^2 n/2}. \]
By our assumption on $c$, we have $e^n\, \mathbb{P}(A_{k,n}) \to \infty$ as $n \uparrow \infty$. Therefore, using $1 - x \le e^{-x}$ for all $x$,
\[ \mathbb{P}\Big( \bigcap_{k=0}^{\lfloor e^n \rfloor - 1} A_{k,n}^c \Big) = \big(1 - \mathbb{P}(A_{0,n})\big)^{\lfloor e^n \rfloor} \le \exp\big(-\lfloor e^n \rfloor\, \mathbb{P}(A_{0,n})\big) \to 0. \]
By considering $h = e^{-n}$ one can now see that, for any $\varepsilon > 0$,
\[ \mathbb{P}\big\{ \text{for all } h \in (0, \varepsilon) \text{ and } t \in [0, 1-h] \text{ we have } |B(t+h) - B(t)| \le c \sqrt{h \log(1/h)} \big\} = 0. \]

One can determine the constant $c$ in the best possible modulus of continuity $\varphi(h) = c \sqrt{h \log(1/h)}$ precisely. Indeed, our proof of the lower bound yields a value of $c = \sqrt{2}$, which turns out to be optimal. This striking result is due to Paul Lévy.

Theorem 1.14 (Lévy's modulus of continuity (1937)). Almost surely,
\[ \limsup_{h \downarrow 0} \sup_{0 \le t \le 1-h} \frac{|B(t+h) - B(t)|}{\sqrt{2 h \log(1/h)}} = 1. \]

Remark 1.15. We come back to the modulus of continuity of Brownian motion in Chapter 10, where we prove a substantial extension, the spectrum of fast points of Brownian motion. We will not use Theorem 1.14 in the sequel, as Theorem 1.12 is sufficient to discuss all problems where an upper bound on the increase of a Brownian motion is needed. Hence the proof of Lévy's modulus of continuity may be skipped on first reading. ⋄
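The ratio in Theorem 1.14 can be estimated on a grid (a sketch of ours, not from the book; on a discrete grid the supremum is of course only approximated, so values somewhat below 1 are expected):

import numpy as np

rng = np.random.default_rng(7)

# Empirical version of sup_t |B(t+h) - B(t)| / sqrt(2 h log(1/h)) for small h.
n = 2**20
dt = 1.0 / n
B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(dt), n))])

for k in [2, 16, 128, 1024]:
    h = k * dt
    sup = np.abs(B[k:] - B[:-k]).max()
    print(f"h = {h:.1e}   sup / sqrt(2h log(1/h)) = {sup / np.sqrt(2*h*np.log(1/h)):.2f}")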
In the light of Theorem 1.13, we only need to prove the upper bound. We first look at increments over a class of intervals which is chosen to be sparse, but big enough to approximate arbitrary intervals. More precisely, given natural numbers $n, m$, we let $\Lambda_n(m)$ be the collection of all intervals of the form
\[ \big[ (k-1+b)\, 2^{-n+a},\ (k+b)\, 2^{-n+a} \big], \quad \text{for } k \in \{1, \ldots, 2^n\},\ a, b \in \big\{0, \tfrac1m, \ldots, \tfrac{m-1}{m}\big\}. \]
We further define $\Lambda(m) := \bigcup_n \Lambda_n(m)$.

Lemma 1.16. For any fixed $m$ and $c > \sqrt{2}$, almost surely, there exists $n_0 \in \mathbb{N}$ such that, for any $n \ge n_0$,
\[ |B(t) - B(s)| \le c \sqrt{(t-s) \log \tfrac{1}{t-s}} \quad \text{for all } [s, t] \in \Lambda_n(m). \]

Proof. From the tail estimate for a standard normal random variable $X$, see Lemma II.3.1, we obtain
\[ \mathbb{P}\Big\{ \sup_{k \in \{1, \ldots, 2^n\}}\ \sup_{a, b \in \{0, \frac1m, \ldots, \frac{m-1}{m}\}} \big| B\big((k-1+b)\, 2^{-n+a}\big) - B\big((k+b)\, 2^{-n+a}\big) \big| > c \sqrt{2^{-n+a} \log(2^{n+a})} \Big\} \le 2^n (2m)\, \mathbb{P}\big\{ X > c \sqrt{\log(2^n)} \big\} \le \frac{2m}{c \sqrt{\log(2^n)}}\, \frac{1}{\sqrt{2\pi}}\, 2^{n(1 - c^2/2)}, \]
and as the right hand side is summable, the result follows from the Borel-Cantelli lemma.

Lemma 1.17. Given $\varepsilon > 0$ there exists $m \in \mathbb{N}$ such that for every interval $[s, t] \subset [0,1]$ there exists an interval $[s', t'] \in \Lambda(m)$ with $|t - t'| < \varepsilon (t-s)$ and $|s - s'| < \varepsilon (t-s)$.

Proof. Choose $m$ large enough to ensure that $\frac1m < \varepsilon/4$ and $2^{1/m} < 1 + \varepsilon/2$. Given an interval $[s, t] \subset [0,1]$, we first pick $n$ such that $2^{-n} \le t-s < 2^{-n+1}$, then $a \in \{0, \frac1m, \ldots, \frac{m-1}{m}\}$ such that $2^{-n+a} \le t-s < 2^{-n+a+\frac1m}$. Next, pick $k \in \{1, \ldots, 2^n\}$ such that $(k-1)\, 2^{-n+a} < s \le k\, 2^{-n+a}$, and $b \in \{0, \frac1m, \ldots, \frac{m-1}{m}\}$ such that $(k-1+b)\, 2^{-n+a} \le s \le (k-1+b+\frac1m)\, 2^{-n+a}$. Let $s' = (k-1+b)\, 2^{-n+a}$; then
\[ |s - s'| \le \tfrac1m\, 2^{-n+a} \le \tfrac{\varepsilon}{4}\, 2^{-n+1} \le \tfrac{\varepsilon}{2}\, (t-s). \]
Choosing $t' = (k+b)\, 2^{-n+a}$ ensures that $[s', t'] \in \Lambda_n(m)$ and, moreover,
\[ |t - t'| \le |s - s'| + |(t-s) - (t'-s')| \le \tfrac{\varepsilon}{2}(t-s) + \big(2^{-n+a+1/m} - 2^{-n+a}\big) \le \tfrac{\varepsilon}{2}(t-s) + \tfrac{\varepsilon}{2}\, 2^{-n+a} \le \varepsilon (t-s), \]
as required.

Proof of Theorem 1.14. Given $c > \sqrt{2}$, pick $0 < \varepsilon < 1$ small enough to ensure that $\tilde{c} := c - \varepsilon > \sqrt{2}$, and $m \in \mathbb{N}$ as in Lemma 1.17. Using Lemma 1.16, we choose $n_0 \in \mathbb{N}$ large enough that, for all $n \ge n_0$ and all intervals $[s', t'] \in \Lambda_n(m)$, almost surely,
\[ |B(t') - B(s')| \le \tilde{c} \sqrt{(t'-s') \log \tfrac{1}{t'-s'}}. \]
Now let $[s, t] \subset [0,1]$ be arbitrary, with $t-s < 2^{-n_0} \wedge \varepsilon$, and pick $[s', t'] \in \Lambda(m)$ with $|t - t'| < \varepsilon (t-s)$ and $|s - s'| < \varepsilon (t-s)$. Then, recalling Theorem 1.12, we obtain
\[ |B(t) - B(s)| \le |B(t) - B(t')| + |B(t') - B(s')| + |B(s') - B(s)| \le C \sqrt{|t-t'| \log \tfrac{1}{|t-t'|}} + \tilde{c} \sqrt{(t'-s') \log \tfrac{1}{t'-s'}} + C \sqrt{|s-s'| \log \tfrac{1}{|s-s'|}} \le \Big( 4C\sqrt{\varepsilon} + \tilde{c} \sqrt{(1+2\varepsilon)\big(1 + \log(1+2\varepsilon)\big)} \Big) \sqrt{(t-s) \log \tfrac{1}{t-s}}. \]
By making $\varepsilon > 0$ small, the first factor on the right can be chosen arbitrarily close to $c$. This completes the proof of the upper bound, and hence of the theorem.

Remark 1.18. The limsup in Theorem 1.14 may be replaced by a limit, see Exercise 1.6. ⋄

Definition 1.19. A function $f : [0, \infty) \to \mathbb{R}$ is said to be locally $\alpha$-Hölder continuous at $x \ge 0$ if there exist $\varepsilon > 0$ and $c > 0$ such that
\[ |f(x) - f(y)| \le c\, |x - y|^\alpha, \quad \text{for all } y \ge 0 \text{ with } |y - x| < \varepsilon. \]
We refer to $\alpha > 0$ as the Hölder exponent and to $c > 0$ as the Hölder constant. ⋄

Clearly, $\alpha$-Hölder continuity gets stronger as the exponent $\alpha$ gets larger. The results of this chapter so far indicate that, for Brownian motion, the transition between paths which are $\alpha$-Hölder continuous and paths which are not happens at $\alpha = 1/2$.

Corollary 1.20. If $\alpha < 1/2$, then, almost surely, Brownian motion is everywhere locally $\alpha$-Hölder continuous.

Proof. Let $C > 0$ be as in Theorem 1.12.
Applying this theorem to the Brownian motions $\{B(t) - B(k) : t \in [k, k+1]\}$, where $k$ is a nonnegative integer, we see that, almost surely, for every $k$ there exists $h(k) > 0$ such that for all $t \in [k, k+1)$ and $0 < h < (k+1-t) \wedge h(k)$,
\[ |B(t+h) - B(t)| \le C \sqrt{h \log(1/h)} \le C h^\alpha. \]
Doing the same for the Brownian motions $\{\tilde{B}(t) : t \in [k, k+1]\}$ with $\tilde{B}(t) = B(k+1-t) - B(k+1)$ gives the full result.

Remark 1.21. This result is optimal in the sense that, for $\alpha > 1/2$, almost surely, at every point Brownian motion fails to be locally $\alpha$-Hölder continuous, see Exercise 1.7. Points where Brownian motion is locally $1/2$-Hölder continuous exist almost surely, but they are very rare. We come back to this issue when discussing 'slow points' of Brownian motion in Chapter 10. ⋄

3. Nondifferentiability of Brownian motion

Having proved in the previous section that Brownian motion is somewhat regular, let us see why it is erratic. One manifestation is that the paths of Brownian motion have no intervals of monotonicity.

Proposition 1.22. Almost surely, for all $0 < a < b < \infty$, Brownian motion is not monotone on the interval $[a, b]$.

Proof. First fix an interval $[a, b]$. If $[a, b]$ is an interval of monotonicity, i.e. if $B(s) \le B(t)$ for all $a \le s \le t \le b$, then we pick numbers $a = a_1 \le \ldots \le a_{n+1} = b$ and divide $[a, b]$ into $n$ sub-intervals $[a_i, a_{i+1}]$. Each increment $B(a_{i+1}) - B(a_i)$ then has to have the same sign. As the increments are independent, this has probability $2 \cdot 2^{-n}$, and taking $n \to \infty$ shows that the probability that $[a, b]$ is an interval of monotonicity must be $0$. Taking a countable union gives that, almost surely, there is no interval of monotonicity with rational endpoints; but each monotone interval would have a nontrivial monotone rational sub-interval.

In order to discuss differentiability of Brownian motion we make use of the time-inversion trick, which allows us to relate differentiability at $t = 0$ to a long-term property. This property is a complementary result to the law of large numbers: Whereas Corollary 1.11 asserts that Brownian motion grows more slowly than linearly, the next proposition shows that the limsup growth of $B(t)$ is faster than $\sqrt{t}$.

Proposition 1.23. Almost surely,
\[ \limsup_{n \to \infty} \frac{B(n)}{\sqrt{n}} = +\infty \quad \text{and} \quad \liminf_{n \to \infty} \frac{B(n)}{\sqrt{n}} = -\infty. \tag{3.1} \]

For the proof of Proposition 1.23 we use the Hewitt-Savage 0-1 law for exchangeable events, which we briefly recall. Readers unfamiliar with the result are invited to give a proof as Exercise 1.8.

Definition 1.24. Let $X_1, X_2, \ldots$ be a sequence of random variables on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and consider a set $A$ of sequences such that $\{X_1, X_2, \ldots \in A\} \in \mathcal{F}$. The event $\{X_1, X_2, \ldots \in A\}$ is called exchangeable if
\[ \{X_1, X_2, \ldots \in A\} \subset \{X_{\sigma_1}, X_{\sigma_2}, \ldots \in A\} \]
for all finite permutations $\sigma : \mathbb{N} \to \mathbb{N}$. Here finite permutation means that $\sigma$ is a bijection with $\sigma_n = n$ for all sufficiently large $n$. ⋄

Lemma 1.25 (Hewitt-Savage 0-1 law). If $A$ is an exchangeable event for an independent, identically distributed sequence, then $\mathbb{P}(A)$ is $0$ or $1$.

Proof of Proposition 1.23. We clearly have, by Fatou's lemma,
\[ \mathbb{P}\big\{B(n) > c\sqrt{n} \text{ infinitely often}\big\} \ge \limsup_{n \to \infty} \mathbb{P}\big\{B(n) > c\sqrt{n}\big\}. \]
By the scaling property, the expression in the limsup equals $\mathbb{P}\{B(1) > c\}$, which is positive. Let $X_n = B(n) - B(n-1)$, and note that
\[ \big\{B(n) > c\sqrt{n} \text{ infinitely often}\big\} = \Big\{ \sum_{j=1}^n X_j > c\sqrt{n} \text{ infinitely often} \Big\} \]
is an exchangeable event. Hence the Hewitt-Savage 0-1 law gives that, with probability one, $B(n) > c\sqrt{n}$ infinitely often.
Remark 1.26. It is natural to ask whether there exists a 'gauge' function $\varphi: [0,\infty) \to [0,\infty)$ such that $B(t)/\varphi(t)$ has a limsup which is greater than 0 but less than $\infty$. An answer will be given by the law of the iterated logarithm in the first section of Chapter 5. ⋄

For a function $f$, we define the upper and lower right derivatives
\[ D^* f(t) = \limsup_{h \downarrow 0} \frac{f(t+h) - f(t)}{h} \qquad\text{and}\qquad D_* f(t) = \liminf_{h \downarrow 0} \frac{f(t+h) - f(t)}{h}. \]
We now show that for any fixed time $t$, almost surely, Brownian motion is not differentiable at $t$. For this we use Proposition 1.23 and the invariance under time-inversion.

Theorem 1.27. Fix $t \ge 0$. Then, almost surely, Brownian motion is not differentiable at $t$. Moreover, $D^* B(t) = +\infty$ and $D_* B(t) = -\infty$.

Proof. Given a standard Brownian motion $B$ we construct a further Brownian motion $X$ by time-inversion as in Theorem 1.9. Then
\[ D^* X(0) \ge \limsup_{n\to\infty} \frac{X(\frac1n) - X(0)}{\frac1n} \ge \limsup_{n\to\infty} \sqrt{n}\, X\big(\tfrac1n\big) = \limsup_{n\to\infty} \frac{B(n)}{\sqrt{n}}, \]
which is infinite by Proposition 1.23. Similarly, $D_* X(0) = -\infty$, showing that $X$ is not differentiable at 0. Now let $t > 0$ be arbitrary and $\{B(t) : t \ge 0\}$ a Brownian motion. Then $X(s) = B(t+s) - B(t)$ defines a standard Brownian motion, and differentiability of $X$ at zero is equivalent to differentiability of $B$ at $t$.

While the previous proof shows that every $t$ is almost surely a point of nondifferentiability for the Brownian motion, this does not imply that, almost surely, every $t$ is a point of nondifferentiability for the Brownian motion! The order of the quantifiers for all $t$ and almost surely in results like Theorem 1.27 is of vital importance. Here the statement holds, for each fixed $t$, for all Brownian paths outside a set of probability zero which may depend on $t$, and the union of all these sets of probability zero need not itself be a set of probability zero.

To illustrate this point, consider the following example: the argument in the proof of Theorem 1.27 also shows that the Brownian motion $X$ crosses 0 for arbitrarily small values $s > 0$. Defining the level sets $Z(t) = \{s > 0 : X(s) = X(t)\}$, this shows that every $t$ is almost surely an accumulation point from the right for $Z(t)$. But not every point $t \in [0,1]$ is an accumulation point from the right for $Z(t)$: for example, the last zero of $\{X(t) : t \ge 0\}$ before time 1 is, by definition, never an accumulation point from the right for $Z(t) = Z(0)$. This example illustrates that there can be random exceptional times at which Brownian motion exhibits atypical behaviour. These times are so rare that any fixed (i.e. nonrandom) time is almost surely not of this kind.

Remark 1.28. The behaviour of Brownian motion at a fixed time $t > 0$ reflects the behaviour at typical times in the following sense: suppose $\mathcal{X}$ is a measurable event (a set of paths) such that, for all fixed $t \ge 0$,
\[ \mathbb{P}\big\{ s \mapsto B(t+s) - B(t) \text{ satisfies } \mathcal{X} \big\} = 1. \]
Then, almost surely, the set of exceptional times
\[ \big\{ t : s \mapsto B(t+s) - B(t) \text{ does not satisfy } \mathcal{X} \big\} \]
has Lebesgue measure 0. Indeed, write $\Theta_t$ for the operator that shifts paths by $t$, so that $(\Theta_t B)(s) = B(t+s) - B(t)$. Then, by Fubini's theorem,
\[ \mathbb{E} \int_0^\infty \mathbf{1}\big\{ t : s \mapsto (\Theta_t B)(s) \text{ does not satisfy } \mathcal{X} \big\}\, dt = \int_0^\infty \mathbb{P}\{\Theta_t B \notin \mathcal{X}\}\, dt = 0. \]
For example, the previous result shows that, almost surely, the path of a Brownian motion is not differentiable at Lebesgue-almost every time $t$. ⋄
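The blow-up of the difference quotients is easy to observe at the fixed time $t = 0$, where $B(h)/h$ has the distribution of $h^{-1/2} B(1)$ by scaling. The following sketch (assuming NumPy; names ad hoc) samples a single path at dyadic times, built consistently from independent increments, and prints $|B(h)|/h$, which grows roughly like $2^{k/2}$ for $h = 2^{-k}$.

import numpy as np

rng = np.random.default_rng(2)
ks = np.arange(0, 21)
ts = 2.0 ** (-ks)                 # dyadic times 1, 1/2, ..., 2^-20 (decreasing)
asc = ts[::-1]                    # the same times, increasing
gaps = np.diff(asc)               # positive gaps between consecutive times
# B at the increasing times, from independent Gaussian increments:
steps = np.concatenate(([rng.normal(0.0, np.sqrt(asc[0]))],
                        rng.normal(0.0, np.sqrt(gaps))))
B_asc = np.cumsum(steps)          # B(2^-20), B(2^-19), ..., B(1)
for k, t, b in zip(ks, ts, B_asc[::-1]):
    print(f"h = 2^-{k:2d}   |B(h)|/h = {abs(b)/t:12.2f}")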
Remark 1.29. Exercise 1.9 shows that, almost surely, there exist times $t^*, t_* \in [0,1)$ with $D^* B(t^*) \le 0$ and $D_* B(t_*) \ge 0$. Hence the almost sure behaviour at a fixed point $t$, which is described in Theorem 1.27, does not hold at all points simultaneously. ⋄

Theorem 1.30 (Paley, Wiener and Zygmund 1933). Almost surely, Brownian motion is nowhere differentiable. Furthermore, almost surely, for all $t$,
\[ D^* B(t) = +\infty \qquad\text{or}\qquad D_* B(t) = -\infty, \]
or both.

Proof. Suppose that there is a $t_0 \in [0,1]$ such that $-\infty < D_* B(t_0) \le D^* B(t_0) < \infty$. Then
\[ \limsup_{h \downarrow 0} \frac{|B(t_0 + h) - B(t_0)|}{h} < \infty, \]
and, using the boundedness of Brownian motion on $[0,2]$, this implies that for some finite constant $M$ there exists $t_0$ with
\[ \sup_{h \in [0,1]} \frac{|B(t_0 + h) - B(t_0)|}{h} \le M. \]
It suffices to show that this event has probability zero for any $M$. From now on fix $M$. If $t_0$ is contained in the binary interval $[(k-1)/2^n, k/2^n]$ for $n > 2$, then for all $1 \le j \le 2^n - k$ the triangle inequality gives
\[ \big| B((k+j)/2^n) - B((k+j-1)/2^n) \big| \le \big| B((k+j)/2^n) - B(t_0) \big| + \big| B(t_0) - B((k+j-1)/2^n) \big| \le M(2j+1)/2^n. \]
Define the events
\[ \Omega_{n,k} := \Big\{ \big| B((k+j)/2^n) - B((k+j-1)/2^n) \big| \le M(2j+1)/2^n \text{ for } j = 1, 2, 3 \Big\}. \]
Then, by independence of the increments and the scaling property, for $1 \le k \le 2^n - 3$,
\[ \mathbb{P}(\Omega_{n,k}) \le \prod_{j=1}^3 \mathbb{P}\Big\{ \big| B((k+j)/2^n) - B((k+j-1)/2^n) \big| \le M(2j+1)/2^n \Big\} \le \mathbb{P}\big\{ |B(1)| \le 7M/\sqrt{2^n} \big\}^3, \]
which is at most $(7M 2^{-n/2})^3$, since the normal density is bounded by $1/2$. Hence
\[ \mathbb{P}\Big( \bigcup_{k=1}^{2^n - 3} \Omega_{n,k} \Big) \le 2^n \big( 7M 2^{-n/2} \big)^3 = (7M)^3\, 2^{-n/2}, \]
which is summable over all $n$. Hence, by the Borel–Cantelli lemma,
\[ \mathbb{P}\Big\{ \text{there is } t_0 \in [0,1] \text{ with } \sup_{h \in [0,1]} \frac{|B(t_0+h) - B(t_0)|}{h} \le M \Big\} \le \mathbb{P}\Big( \bigcup_{k=1}^{2^n-3} \Omega_{n,k} \text{ for infinitely many } n \Big) = 0. \]

Remark 1.31. The proof of Theorem 1.30 can be tightened to show that, for any $\alpha > \frac12$, Brownian motion is, almost surely, nowhere locally $\alpha$-Hölder continuous, see Exercise 1.7. ⋄

Remark 1.32. There is an abundance of interesting statements about the right derivatives of Brownian motion, which we state as exercises at the end of the chapter. As a taster we mention here that Lévy [Le54] asked whether, almost surely, $D^* B(t) \in \{-\infty, \infty\}$ for every $t \in [0,1)$. Exercise 1.11 shows that this is not the case. ⋄

Another important regularity property which Brownian motion does not possess is bounded variation. We first define what it means for a function to be of bounded variation.

Definition 1.33. A right-continuous function $f: [0,t] \to \mathbb{R}$ is a function of bounded variation if
\[ V^{(1)}_f(t) := \sup \sum_{j=1}^k \big| f(t_j) - f(t_{j-1}) \big| < \infty, \]
where the supremum is over all $k \in \mathbb{N}$ and all partitions $0 = t_0 \le t_1 \le \dots \le t_{k-1} \le t_k = t$. If the supremum is infinite, $f$ is said to be of unbounded variation. ⋄

Remark 1.34. It is not hard to show that $f$ is of bounded variation if and only if it can be written as the difference of two increasing functions. ⋄

Theorem 1.35. Suppose that the sequence of partitions
\[ 0 = t^{(n)}_0 \le t^{(n)}_1 \le \dots \le t^{(n)}_{k(n)-1} \le t^{(n)}_{k(n)} = t \]
is nested, i.e. at each step one or more partition points are added, and that the mesh
\[ \Delta(n) := \sup_{1 \le j \le k(n)} \big( t^{(n)}_j - t^{(n)}_{j-1} \big) \]
converges to zero. Then, almost surely,
\[ \lim_{n\to\infty} \sum_{j=1}^{k(n)} \big( B(t^{(n)}_j) - B(t^{(n)}_{j-1}) \big)^2 = t, \]
and therefore Brownian motion is of unbounded variation.

Remark 1.36. For a sequence of partitions as above, we define
\[ V^{(2)}(t) := \lim_{n\to\infty} \sum_{j=1}^{k(n)} \big( B(t^{(n)}_j) - B(t^{(n)}_{j-1}) \big)^2 \]
to be the quadratic variation of Brownian motion.
The fact that Brownian motion has finite quadratic variation will be of crucial importance in Chapter 7, however, the analogy to the notion of bounded variation of a function is not perfect: In Exercise 1.13 we find a sequence of partitions 0 = t (n) 0 ≤t (n) 1 ≤· · · ≤t (n) k(n)−1 ≤t (n) k(n) = t with mesh converging to zero, such that almost surely lim sup n→∞ k(n) X j=1 ¡ B(t (n) j ) −B(t (n) j−1) ¢2 = ∞. In particular, the condition that the partitions in Theorem 1.35 are nested cannot be dropped entirely, though it can be replaced by other conditions, see Exercise 1.14. ⋄ The proof of Theorem 1.35 is based on the following simple lemma. Lemma 1.37. If X, Z are independent, symmetric random variables in L2, then E £ (X + Z)2 ¯ ¯ X2 + Z2¤ = X2 + Z2. Proof. By symmetry of Z we have E £ (X + Z)2 ¯ ¯ X2 + Z2¤ = E £ (X −Z)2 ¯ ¯ X2 + Z2¤ . Both sides of the equation are finite, so that we can take the difference and obtain E £ XZ ¯ ¯X2 + Z2¤ = 0, and the result follows immediately. Proof of Theorem 1.35. By the H¨ older property, we can find, for any α ∈(0, 1/2), an n such that |B(a) −B(b)| ≤|a −b|α for all a, b ∈[0, t] with |a −b| ≤∆(n). Hence k(n) X j=1 ¯ ¯B ¡ t (n) j ¢ −B ¡ t (n) j−1 ¢¯ ¯ ≥∆(n)−α k(n) X j=1 ¡ B ¡ t (n) j ¢ −B ¡ t (n) j−1 ¢¢2. Therefore, once we show that the random variables Xn := k(n) X j=1 ¡ B ¡ t (n) j ¢ −B ¡ t (n) j−1 ¢¢2 converge almost surely to a positive random variable it follows immediately that Brownian motion is almost surely of unbounded variation. By inserting elements in the sequence, if necessary, we may assume that at each step exactly one point is added to the partition. 36 To see that {Xn : n ∈N} converges we use the theory of martingales in discrete time, see Appendix II.3 for basic facts on martingales. We denote by Gn the σ-algebra generated by the random variables Xn, Xn+1, . . .. Then G∞:= ∞ \ k=1 Gk ⊂· · · ⊂Gn+1 ⊂Gn ⊂· · · ⊂G1. We show that {Xn : n ∈N} is a reverse martingale, i.e. that almost surely, Xn = E £ Xn−1 ¯ ¯ Gn ¤ for all n ≥2 . This is easy with the help of Lemma 1.37. Indeed, if s ∈(t1, t2) is the inserted point we apply it to the symmetric, independent random variables B(s) −B(t1), B(t2) −B(s) and denote by F the σ-algebra generated by (B(s) −B(t1))2 + (B(t2) −B(s))2. Then E £¡ B(t2) −B(t1) ¢2¯ ¯ F ¤ = ¡ B(s) −B(t1) ¢2 + ¡ B(t2) −B(s) ¢2, and hence E £¡ B(t2) −B(t1) ¢2 − ¡ B(s) −B(t1) ¢2− ¡ B(t2) −B(s) ¢2¯ ¯ F ¤ = 0, which implies that {Xn : n ∈N} is a reverse martingale. By the L´ evy downward theorem, see Theorem II.4.10, lim n↑∞Xn = E[X1 | G∞] almost surely. The limit has expectation E[X1] = t and, by Fatou’s lemma, its variance is bounded by lim inf n↑∞E £ (Xn −EXn)2¤ = lim inf n↑∞3 k(n) X j=1 ¡ t (n) j −t (n) j−1 ¢2 ≤3 lim inf n↑∞∆(n) = 0. Hence, E[X1 | G∞] = t almost surely, as required. 4. The Cameron-Martin theorem In this section we have a look at Brownian motion with drift and ask, when we can transfer results about driftless Brownian motion to Brownian motion with drift. 37 Exercises Exercise 1.1. Let {B(t) : t ≥0} be a Brownian motion with arbitrary starting point. Show that, for all s, t ≥0, we have Cov(B(s), B(t)) = s ∧t. Exercise 1.2 ( ∗). Show that Brownian motion with start in x ∈R is a Gaussian process. Exercise 1.3. Show that, for every point x ∈R, there exists a two-sided Brownian motion {B(t): t ∈R} with B(0) = x, which has continuous paths, independent increments and the property that, for all t ∈R and h > 0, the increments B(t + h) −B(t) are normally distributed with expectation zero and variance h. 
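For readers who want to experiment, here is a minimal simulation sketch in the spirit of Exercise 1.3, assuming NumPy; the function name two_sided_bm and all parameters are ad hoc. It glues two independent one-sided random-walk approximations at the origin and checks that an increment straddling zero has the variance the exercise demands.

import numpy as np

rng = np.random.default_rng(3)

def two_sided_bm(x, n, dt):
    """One sample path of a two-sided Brownian motion with B(0) = x on the
    grid -n*dt, ..., -dt, 0, dt, ..., n*dt: two independent one-sided
    random-walk approximations glued at the origin."""
    fwd = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))   # B(t) - x for t > 0
    bwd = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))   # B(-t) - x for t > 0
    return np.concatenate((x + bwd[::-1], [x], x + fwd))

# sanity check: B(0.2) - B(-0.1) straddles the origin and should be
# centred Gaussian with variance 0.3, the length of the interval
n, dt = 200, 0.001
i0, i1 = n - 100, n + 200          # grid indices of t = -0.1 and t = 0.2
samples = np.array([two_sided_bm(1.0, n, dt) for _ in range(5000)])
d = samples[:, i1] - samples[:, i0]
print(d.mean(), d.var())           # approximately 0.0 and 0.3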
Exercise 1.4 ( ∗). Fix x, y ∈R. The Brownian bridge with start in x and end in y is the process {X(t): 0 ≤t ≤1} defined by X(t) = B(t) −t ¡ B(1) −y ¢ , for 0 ≤t ≤1 , where {B(t): t ≥0} is a Brownian motion started in x. The Brownian bridge is an almost surely continuous process such that X(0) = x and X(1) = y. (a) Show that, for every bounded f : Rn →R, E £ f ¡ X(t1), . . . , X(tn) ¢¤ = Z f(x1, . . . , xn) p(t1, x, x1) p(1, x, y) × n Y i=2 p(ti −ti−1, xi, xi+1)p(1 −tn, xn, y) dx1 . . . dxn, for all 0 < t1 < · · · < tn < 1 where p(t, x, y) = 1 √ 2πt e−(y−x)2 2t . (b) Infer that, for any t0 < 1, the processes {X(t): 0 ≤t ≤t0} and {B(t): 0 ≤t ≤t0} are mutually absolutely continuous. Exercise 1.5 ( ∗). Prove the law of large numbers in Corollary 1.11 directly. Use the law of large numbers for sequences of independent identically distributed random variables to show that limn→∞B(n)/n = 0. Then show that B(t) does not oscillate too much between n and n + 1. Exercise 1.6 ( ∗). Show the following improvement to Theorem 1.14: Almost surely, lim h↓0 sup 0≤t≤1−h |B(t + h) −B(t)| p 2h log(1/h) = 1 . 38 Exercise 1.7 ( ∗). Show that, if α > 1/2, then, almost surely, at every point, Brownian motion fails to be locally α-H¨ older continuous. Exercise 1.8 ( ∗). Show that, if A is an exchangeable event for an independent, identically dis-tributed sequence, then P(A) is 0 or 1. Exercise 1.9. Show that, for a Brownian motion {B(t): t ≥0}, (a) for all t ≥0 we have P{t is a local maximum} = 0; (b) almost surely local maxima exist; (c) almost surely, there exist times t∗, t∗∈[0, 1) with D∗B(t∗) ≤0 and D∗B(t∗) ≥0. Exercise 1.10 ( ∗). Let f ∈C[0, 1] any fixed continuous function. Show that, almost surely, the function {B(t) + f(t) : t ∈[0, 1]} is nowhere differentiable. Exercise 1.11 ( ∗). Show that, almost surely, there exists a time t at which D∗B(t) = 0. Exercise 1.12 ( ∗∗). Show that, almost surely, D∗B(t0) = −∞, where t0 is uniquely determined by B(t0) = max 0≤t≤1 B(t). Hint. Try this exercise after the discussion of the strong Markov property in Chapter 2. 39 Exercise 1.13 ( ∗∗). (a) Show that, almost surely, there exists a family 0 = t (n) 0 ≤t (n) 1 ≤· · · ≤t (n) k(n)−1 ≤t (n) k(n) = t of (random) partitions such that lim n↑∞ k(n) X j=1 ¡ B(t (n) j ) −B(t (n) j−1) ¢2 = ∞. Hint. Use the construction of Brownian motion to pick a partition consisting of dyadic intervals, such that the increment of Brownian motion over any chosen interval is large relative to the square root of its length. (b) Construct a (non-random) sequence of partitions 0 = t (n) 0 ≤t (n) 1 ≤· · · ≤t (n) k(n)−1 ≤t (n) k(n) = t with mesh converging to zero, such that, almost surely, lim sup n→∞ k(n) X j=1 ¡ B(t (n) j ) −B(t (n) j−1) ¢2 = ∞. Exercise 1.14 ( ∗). Consider a (not necessarily nested) sequence of partitions 0 = t (n) 0 ≤t (n) 1 ≤· · · ≤t (n) k(n)−1 ≤t (n) k(n) = t with mesh converging to zero. (a) Show that, in the sense of L2-convergence, lim n→∞ k(n) X j=1 ¡ B(t (n) j ) −B(t (n) j−1) ¢2 = t. (b) Show that, if additionally ∞ X n=1 k(n) X j=1 ¡ t (n) j −t (n) j−1 ¢2 < ∞, then the convergence in (a) also holds almost surely. 40 Notes and Comments The first study of the mathematical process of Brownian motion is due to Bachelier in [Ba00] in the context of modeling stock market fluctuations, see [DE06] for a modern edition. Bachelier’s work was long forgotten and has only recently been rediscovered, today an international society for mathematical finance is named after him. 
The physical phenomenon of Brownian motion is usually attributed to Brown [Br28] and was explained by Einstein in [Ei05]. The first rigorous construction of mathematical Brownian motion is due to Wiener [Wi23]. There is a variety of constructions of Brownian motion in the literature. The approach we have followed goes back to one of the great pioneers of Brownian motion, the French mathematician Paul L´ evy, see [Le48]. L´ evy’s construction has the advantage that continuity properties of Brownian motion can be obtained from the construction. An alternative is to first show that a Markov process with the correct transition probabilities can be constructed, and then to use an abstract criterion, like Kolmogorov’s criterion for the existence of a continuous version of the process. See, for example, [RY94], [KS88] and [Ka85] for further alternative constructions. Gaussian processes, only briefly mentioned here, are one of the richest and best understood class of processes in probability theory. Some good references for this are [Ad90] and [Li95]. A lot of effort in current research is put into trying to extend our understanding of Brownian motion to more general Gaussian processes like the so-called fractional Brownian motion. The main difficulty is that these processes do not have the extremely useful Markov property — which we shall discuss in the next chapter, and which we will make heavy use of throughout the book. The modulus of continuity, Theorem 1.14, goes back to L´ evy [Le37]. Observe that this result describes continuity of Brownian motion near its worst time. By contrast, the law of the iterated logarithm in the form of Corollary 5.3 shows that at a typical time the continuity properties of Brownian motion are better: For every fixed time t > 0 and c > √ 2, almost surely, there exists ε > 0 with |B(t) −B(t + h)| ≤c p h log log(1/h) for all |h| < ε. In Chapter 10 we explore for how many times t > 0 we are close to the worst case scenario. The existence of points where Brownian motion is locally 1/2-H¨ older continuous is a very tricky question. Dvoretzky [Dv63] showed that, for a sufficiently small c > 0, almost surely no point satisfies 1/2-local H¨ older continuity with H¨ older constant c. Later, Davis [Da83] and, independently, Greenwood and Perkins [GP83] identified the maximal possible H¨ older constant, we will discuss their work in Chapter 10. There is a lot of discussion about nowhere differentiable, continuous functions in the analysis literature of the early twentieth century. Examples are Weierstrass’ function, see e.g. [MG84], and van der Waerden’s function, see e.g. [Bi82]. Nowhere differentiability of Brownian motion was first shown by Paley, Wiener and Zygmund in [PWZ33], but the proof we give is due to Dvoretzky, Erd˝ os and Kakutani [DEK61]. 41 Besides the discussion of special examples of such functions, the statement that in some sense ‘most’ or ‘almost all’ continuous functions are nowhere differentiable is particularly fascinating. A topological form of this statement is that nowhere differentiability is a generic property for the space C([0, 1]) in the sense of Baire category. A newer, measure theoretic approach based on an idea of Christensen [Ch72], which was later rediscovered by Hunt, Sauer, and Yorke [HSY92], is the notion of prevalence. A subset A of a separable Banach space X is called prevalent if there exists a Borel probability measure µ on X such that µ(x + A) = 1 for any x ∈X. 
A strengthening of the proof of Theorem 1.30, see Exercise 1.10, shows that the set of nowhere differentiable functions is prevalent. The time t where D∗B(t) = 0 which we constructed in Exercise 1.11 is an exceptional time, i.e. a time where Brownian motion behaves differently from almost every other time. In Chapter 10 we enter a systematic discussion of such times, and in particular address the question how many exceptional points (in terms of Hausdorffdimension) of a certain type exist. The set of times where D∗B(t) = 0 has Hausdorffdimension 1/4, see Barlow and Perkins [BP84]. The interesting fact that the ‘true’ quadratic variation of Brownian motion, taken as a supre-mum over arbitrary partitions with mesh going to zero, is infinite is a result of L´ evy, see [Le40]. Finer variation properties of Brownian motion have been studied by Taylor in [Ta72]. He shows, for example, that the ψ-variation V ψ = sup k X i=1 ψ ¡ |B(ti) −B(ti−1)| ¢ , where the supremum is taken over all partitions 0 = t0 < · · · < tk = 1, k ∈N, is finite almost surely for ψ1(s) = s2/(2 log log(1/s)), but is infinite for any ψ with ψ(s)/ψ1(s) →∞as s →0. 42 CHAPTER 2 Brownian motion as a strong Markov process In this chapter we discuss the strong Markov property of Brownian motion. We also briefly discuss Markov processes in general and show that some processes, which can be derived from Brownian motion, are also Markov processes. We then exploit these facts to get finer properties of Brownian sample paths. 1. The Markov property and Blumenthal’s 0-1 Law For the discussion of the Markov property we include higher dimensional Brownian motion, which can be defined easily by requiring the characteristics of a linear Brownian motion in every component, and independence of the components. Definition 2.1. If B1, . . . , Bd are independent linear Brownian motions started in x1, . . . , xd, then the stochastic process {B(t) : t ≥0} given by B(t) = (B1(t), . . . , Bd(t))T is called a d-dimensional Brownian motion started in (x1, . . . , xd)T. The d-dimensional Brownian motion started in the origin is also called standard Brownian motion. Two-dimensional Brownian motion is also called planar Brownian motion. ⋄ Notation 2.2. Throughout this book we write Px for the probability measure which makes the d-dimensional process {B(t) : t ≥0} a Brownian motion started in x ∈Rd, and Ex for the corresponding expectation. ⋄ Suppose now that {X(t) : t ≥0} is a stochastic process. Intuitively, the Markov property says that if we know the process {X(t) : t ≥0} on the interval [0, s], for the prediction of the future {X(t) : t ≥s} this is as useful as just knowing the endpoint X(s). Moreover, a process is called a (time-homogeneous) Markov process if it starts afresh at any fixed time s. Slightly more precisely this means that, supposing the process can be started in any point X(0) = x ∈Rd, the time-shifted process {X(s + t) : t ≥0} has the same distribution as the process started in X(s) ∈Rd. We shall formalise the notion of a Markov process later in this chapter, but start by giving a straight formulation of the facts for a Brownian motion. Note that two stochastic processes {X(t) : t ≥0} and {Y (t) : t ≥0} are called independent, if for any sets t1, . . . , tn ≥0 and s1, . . . , sm ≥0 of times the vectors (X(t1), . . . , X(tn)) and (Y (s1), . . . , Y (sm)) are independent. 43 Figure 1. Brownian motion starts afresh at time s. Theorem 2.3 (Markov property). Suppose that {B(t) : t ≥0} is a Brownian motion started in x ∈Rd. 
Let s > 0, then the process {B(t + s) −B(s) : t ≥0} is again a Brownian motion started in the origin and it is independent of the process {B(t) : 0 ≤t ≤s}. Proof. It is trivial to check that {B(t + s) −B(s) : t ≥0} satisfies the definition of a d-dimensional Brownian motion. The independence statement follows directly from the inde-pendence of the increments of a Brownian motion. We now improve this result slightly and introduce some useful terminology. Definition 2.4. A filtration on a probability space (Ω, F, P) is a family (F(t) : t ≥0) of σ-algebras such that F(s) ⊂F(t) ⊂F for all s < t. A probability space together with a filtra-tion is sometimes called a filtered probability space. A stochastic process {X(t) : t ≥0} defined on (Ω, F, P) is called adapted if X(t) is F(t)-measurable for any t ≥0. ⋄ Suppose we have a Brownian motion {B(t) : t ≥0} defined on some probability space, then we can define a filtration (F0(t) : t ≥0) by letting F0(t) be the σ-algebra generated by the random variables {B(s) : 0 ≤s ≤t}. With this definition, the Brownian motion is obviously adapted to the filtration. Intuitively, this σ-algebra contains all the information available from observing the process up to time t. By Theorem 2.3, the process {B(t + s) −B(s) : t ≥0} is independent of F0(s). In a first step, we improve this and allow a slightly larger (augmented) σ-algebra F+(s) defined by F+(s) := \ t>s F0(t) . Clearly, the family (F+(t) : t ≥0) is again a filtration and F+(s) ⊃F0(s), but intuitively F+(s) is a bit larger than F0(s), allowing an additional infinitesimal glance into the future. 44 Theorem 2.5. For every s ≥0 the process {B(t + s) −B(s) : t ≥0} is independent of F+(s). Proof. By continuity B(t + s) −B(s) = limn→∞B(sn + t) −B(sn) for a strictly decreasing sequence {sn : n ∈N} converging to s. By Theorem 2.3, for any t1, . . . , tm ≥0, the vector (B(t1 + s) −B(s), . . . , B(tm + s) −B(s)) = limj↑∞(B(t1 + sj) −B(sj), . . . , B(tm + sj) −B(sj)) is independent of F+(s), and so is the process {B(t + s) −B(s) : t ≥0}. Remark 2.6. An alternative way of stating this is that conditional on F+(s) the process {B(t + s) : t ≥0} is a Brownian motion started in B(s). ⋄ We now look at the germ σ-algebra F+(0), which heuristically comprises all events defined in terms of Brownian motion on an infinitesimal small interval to the right of the origin. Theorem 2.7 (Blumenthal’s 0-1 law). Let x ∈Rd and A ∈F+(0). Then Px(A) ∈{0, 1}. Proof. Using Theorem 2.5 for s = 0 we see that any A ∈σ{B(t): t ≥0} is independent of F+(0). This applies in particular to A ∈F +(0), which therefore is independent of itself, hence has probability zero or one. As a first application we show that a standard linear Brownian motion has positive and negative values and zeros in every small interval to the right of 0. We have studied this remarkable prop-erty of Brownian motion already by different means, in the discussion following Theorem 1.27. Theorem 2.8. Suppose {B(t): t ≥0} is a linear Brownian motion. Define τ = inf{t > 0 : B(t) > 0} and σ = inf{t > 0 : B(t) = 0}. Then P0{τ = 0} = P0{σ = 0} = 1 . Proof. The event {τ = 0} = ∞ \ n=1 n there is 0 < ε < 1/n such that B(ε) > 0 o is clearly in F+(0). Hence we just have to show that this event has positive probability. This follows, as, for t > 0, P0{τ ≤t} ≥P0{B(t) > 0} = 1/2 . Hence P0{τ = 0} ≥1/2 and we have shown the first part. 
The same argument works replacing B(t) > 0 by B(t) < 0 and from these two facts P0{σ = 0} = 1 follows, using the intermediate value property of continuous functions. A further application is a 0-1 law for the tail σ-algebra of a Brownian motion. Define G(t) = σ{B(s): s ≥t}. Let T = T t≥0 G(t) be the σ-algebra of all tail events. Theorem 2.9 (Kolmogorov’s 0-1 Law). Let x ∈Rd and A ∈T . Then Px(A) ∈{0, 1}. Proof. It suffices to look at the case x = 0. Under the time inversion of Brownian motion, the tail σ-algebra is mapped on the germ σ-algebra, which is trivial by Blumenthal’s 0-1 law. 45 Remark 2.10. In Exercise 2.2 we shall see that, for any tail event A ∈T , the probability Px(A) is independent of x. For a germ event A ∈F+(0), however, the probability Px(A) may depend on x. ⋄ As final example of this section we now exploit the Markov property to show that the set of local extrema of a linear Brownian motion is a countable, dense subset of [0, ∞). We shall use the easy fact, proved in Exercise 2.3, that every local maximum of Brownian motion is a strict local maximum. Proposition 2.11. The set M of times where a linear Brownian motion assumes its local maxima is almost surely countable and dense. Proof. Consider the function from the set of non-degenerate closed intervals with rational endpoints to R given by [a, b] 7→inf © t ≥a : B(t) = max a≤s≤b B(s) ª . The image of this map contains the set M almost surely by Exercise 2.3. This shows that M is countable almost surely. We already know that Brownian motion has no interval of increase or decrease almost surely, by Proposition 1.22. It follows that it almost surely has a local maximum in every nondegenerate interval. 2. The strong Markov property and the reflection principle Heuristically, the Markov property states that Brownian motion is started anew at each deter-ministic time instance. It is a crucial property of Brownian motion that this holds also for an important class of random times. These random times are called stopping times. The basic idea is that a random time T is a stopping time if we can decide whether {T < t} by just knowing the path of the stochastic process up to time t. Think of the situation that T is the first moment where some random event related to the process happens. Definition 2.12. A random variable T with values in [0, ∞], defined on a probability space with filtration (F(t) : t ≥0) is called a stopping time if {T < t} ∈F(t), for every t ≥0. It is called a strict stopping time if {T ≤t} ∈F(t), for every t ≥0. ⋄ It is easy to see that every strict stopping time is also a stopping time. This follows from {T < t} = ∞ [ n=1 {T ≤t −1/n} ∈F(t) . For certain nice filtrations strict stopping times and stopping times agree. In order to come into this situation, we are going to work with the filtration (F+(t) : t ≥0) in the case of Brownian motion and refer the notions of stopping time, etc. always to this filtration. As this filtration is larger than (F0(t) : t ≥0), our choice produces more stopping times. The crucial property 46 which distinguishes {F+(t) : t ≥0} from {F0(t) : t ≥0} is right-continuity, which means that \ ε>0 F+(t + ε) = F+(t) . To see this note that \ ε>0 F+(t + ε) = ∞ \ n=1 ∞ \ k=1 F0(t + 1/n + 1/k) = F+(t) . Theorem 2.13. Every stopping time T with respect to the filtration (F+(t) : t ≥0), or indeed with respect to any right-continuous filtration, is automatically a strict stopping time. Proof. Suppose that T is a stopping time. Then {T ≤t} = ∞ \ k=1 {T < t + 1/k} ∈ ∞ \ n=1 F+(t + 1/n) = F+(t) . 
Note that this argument uses only the right-continuity of {F+(t) : t ≥0}. We give some examples. • Every deterministic time t ≥0 is also a stopping time. • Suppose G ⊂Rd is an open set. Then T = inf{t ≥0 : B(t) ∈G} is a stopping time. Proof. Let Q be the rationals in (0, t). Then, by continuity of B, {T < t} = [ s∈Q {B(s) ∈G} ∈F+(t) . • If Tn ↑T is an increasing sequence of stopping times, then T is also a stopping time. Proof. {T ≤t} = T∞ n=1{Tn ≤t} ∈F+(t) . • Suppose H is a closed set, for example a singleton. Then T = inf{t ≥0 : B(t) ∈H} is a stopping time. Proof. Let G(n) = {x ∈Rd : ∃y ∈H with |x −y| < 1/n} so that H = T G(n). Then Tn := inf{t ≥0 : B(t) ∈G(n)} are stopping times, which are increasing to T. • Let T be a stopping time. Define stopping times Tn by Tn = (m + 1)2−n if m2−n ≤T < (m + 1)2−n . In other words, we stop at the first time of the form k2−n after T. It is easy to see that Tn is a stopping time. We will use it later as a discrete approximation to T. We define, for every stopping time T, the σ-algebra F+(T) = {A ∈A : A ∩{T < t} ∈F+(t) for all t ≥0} . 47 This means that the part of A that lies in {T < t} should be measurable with respect to the information available at time t. Heuristically, this is the collection of events that happened before the stopping time T. As in the proof of the last theorem we can infer that for right-continuous filtrations like our (F+(t) : t ≥0) the event {T ≤t} may replace {T < t} without changing the definition. We can now state and prove the strong Markov property for Brownian motion, which was rigorously established by Hunt [Hu56] and Dynkin [Dy57]. Theorem 2.14 (Strong Markov property). For every almost surely finite stopping time T, the process {B(T + t) −B(T) : t ≥0} is a standard Brownian motion independent of F+(T). Proof. We first show our statement for the stopping times Tn which discretely approximate T from above, Tn = (m + 1)2−n if m2−n ≤T < (m + 1)2−n , see the examples above. Write Bk = {Bk(t) : t ≥0} for the Brownian motion defined by Bk(t) = B(t + k/2n) −B(k/2n), and B∗= {B∗(t) : t ≥0} for the process defined by B∗(t) = B(t + Tn) −B(Tn). Suppose that E ∈F+(Tn). Then, for every event {B∗∈A}, we have P ¡ {B∗∈A} ∩E ¢ = ∞ X k=0 P ¡ {Bk ∈A} ∩E ∩{Tn = k2−n} ¢ = ∞ X k=0 P{Bk ∈A} P ¡ E ∩{Tn = k2−n} ¢ , using that {Bk ∈A} is independent of E ∩{Tn = k2−n} ∈F+(k2−n) by Theorem 2.5. Now, by Theorem 2.3, P{Bk ∈A} = P{B ∈A} does not depend on k, and hence we get ∞ X k=0 P{Bk ∈A} P ¡ E ∩{Tn = k2−n} ¢ = P{B ∈A} ∞ X k=0 P ¡ E ∩{Tn = k2−n} ¢ = P{B ∈A}P(E), which shows that B∗is a Brownian motion and independent of E, hence of F+(Tn), as claimed. It remains to generalize this to general stopping times T. As Tn ↓T we have that {B(s+Tn)− B(Tn) : s ≥0} is a Brownian motion independent of F+(Tn) ⊃F+(T). Hence the increments B(s + t + T) −B(t + T) = lim n→∞B(s + t + Tn) −B(t + Tn) of the process {B(r +T)−B(T) : r ≥0} are independent and normally distributed with mean zero and variance s. As the process is obviously almost surely continuous, it is a Brownian motion. Moreover all increments, B(s + t + T) −B(t + T) = lim B(s + t + Tn) −B(t + Tn), and hence the process itself, are independent of F+(T). Remark 2.15. Let T = inf{t ≥0 : B(t) = max0≤s≤1 B(s)}. It is intuitively clear that T is not a stopping time. One way to prove it, is to first observe that almost surely T < 1. The increment B(t + T) −B(T) does not take positive values in a small neighbourhood to the right of 0, which contradicts the strong Markov property and Theorem 2.8. ⋄ 48 2.1. 
The reflection principle. We will see many applications of the strong Markov prop-erty later, however, the next result, the reflection principle, is particularly interesting. The reflection principle states that Brownian motion reflected at some stopping time T is still a Brownian motion. More formally: Theorem 2.16 (Reflection principle). If T is a stopping time and {B(t) : t ≥0} is a standard Brownian motion, then the process {B∗(t) : t ≥0} called Brownian motion reflected at T and defined by B∗(t) = B(t)1{t≤T} + (2B(T) −B(t))1{t>T} is also a standard Brownian motion. Figure 2. The reflection principle Proof. If T is finite, by the strong Markov property both {B(t + T) −B(T) : t ≥0} and {−(B(t + T) −B(T)) : t ≥0} are Brownian motions and independent of the beginning {B(t) : t ∈[0, T]}. Hence the concatenation (gluing together) of the beginning with the first part and the concatenation with the second part have the same distribution. The first is just {B(t) : t ≥0}, the second is the object {B∗(t) : t ≥0} introduced in the statement. Remark 2.17. For a linear Brownian motion, consider τ = inf{t : B(t) = max0≤s≤1 B(s)} and let {B∗(t): t ≥0} be the reflection at τ defined as in Theorem 2.16. Almost surely {B(τ + t) −B(τ) : t ≥0} is non-positive on some right neighbourhood of t = 0, and hence is not Brownian motion. The strong Markov property does not apply here because τ is not a stopping time for the filtration (F+(t) : t ≥0). We shall see in Theorem 5.14 that Brownian motion almost surely has no point of increase. Since τ is a point of increase of the reflected process {B∗(t) : t ≥0}, it follows that the distributions of {B(t): t ≥0} and of {B∗(t) : t ≥0} are singular. ⋄ 49 Now specialise to the case of linear Brownian motion. Let M(t) = max0≤s≤t B(s). A priori it is not at all clear what the distribution of this random variable is, but we can determine it as a consequence of the reflection principle. Theorem 2.18. If a > 0 then P0{M(t) > a} = 2P0{B(t) > a} = P0{|B(t)| > a}. Proof. Let T = inf{t ≥0 : B(t) = a} and let {B∗(t) : t ≥0} be Brownian motion reflected at T. Then {M(t) > a} is the disjoint union of the events {B(t) > a} and {M(t) > a, B(t) ≤a} and since the latter is exactly {B∗(t) ≥a} the statement follows from the reflection principle. Remark 2.19. Theorem 2.18 is most useful when combined with a tail estimate for the Gaussian as in Lemma II.3.1. For example, for an upper bound we obtain, for all a > 0, P0{M(t) > a} ≤ √ 2t a √π exp © −a2 2t ª . ⋄ 2.2. The area of planar Brownian motion. Continuous curves in the plane can still be extremely wild. Space-filling curves, like the Peano curve, can map the time interval [0, 1] continuously on sets of positive area, see for example [La98]. We now show that the range of planar Brownian motion has zero area. The Markov property and the reflection principle play an important role in the proof. Suppose {B(t) : t ≥0} is planar Brownian motion. We denote the Lebesgue measure on Rd by Ld, and use the symbol f ∗g to denote the convolution of the functions f and g given, whenever well-defined, by f ∗g (x) := Z f(y)g(x −y) dy. For a set A ⊂Rd and x ∈Rd we write A + x := {a + x : a ∈A}. Lemma 2.20. If A1, A2 ⊂R2 are Borel sets with positive area, then L2 ³© x ∈R2 : L2(A1 ∩(A2 + x)) > 0 ª´ > 0. Proof. We may assume A1 and A2 are bounded. By Fubini’s theorem, Z R2 1A1 ∗1−A2(x) dx = Z R2 Z R2 1A1(w)1A2(w −x) dw dx = Z R2 1A1(w) µZ R2 1A2(w −x) dx ¶ dw = L2(A1)L2(A2) > 0. 50 Thus 1A1 ∗1−A2(x) > 0 on a set of positive area. 
But 1A1 ∗1−A2(x) = Z 1A1(y) 1−A2(x −y) dy = Z 1A1(y) 1A2+x(y) dy = L2(A1 ∩(A2 + x)) , proving the lemma. We are now ready to prove L´ evy’s theorem on the area of planar Brownian motion. Theorem 2.21 (L´ evy 1940). Almost surely, L2(B[0, 1]) = 0. Proof. Let X = L2(B[0, 1]) denote the area of B[0, 1]. First we check that E[X] < ∞. Note that X > a only if the Brownian motion leaves the square centred in the origin of sidelength √a/2. Hence, using Theorem 2.18, P{X > a} ≤2 P © max t∈[0,1] |W(t)| > √a/2 } = 4 P{W(1) > √a/2} ≤4e−a/8, for a > 1, where {W(t): t ≥0} is standard one-dimensional Brownian motion. Hence, E[X] = Z ∞ 0 P{X > a} da ≤4 Z ∞ 1 e−a/8da + 1 < ∞. Note that B(3t) and √ 3B(t) have the same distribution, and hence EL2(B[0, 3]) = 3EL2(B[0, 1]) = 3E[X] . Note that we have L2(B[0, 3]) ≤P2 j=0 L2(B[j, j + 1]) with equality if and only if for 0 ≤i < j ≤2 we have L2(B[i, i + 1] ∩B[j, j + 1]) = 0. On the other hand, for j = 0, 1, 2, we have EL2(B[j, j + 1]) = E[X] and 3E[X] = EL2(B[0, 3]) ≤ 2 X j=0 EL2(B[j, j + 1]) = 3E[X] , whence, almost surely, the intersection of any two of the B[j, j + 1] has measure zero. In particular, L2(B[0, 1] ∩B[2, 3]) = 0 almost surely. Now we can use the Markov property to define two Brownian motions, {B1(t) : t ∈[0, 1]} by B1(t) = B(t), and {B2(t) : t ∈[0, 1]} by B2(t) = B(t + 2) −B(2) + B(1). Both Brownian motions are independent of the random variable Y := B(2)−B(1). For x ∈R2, let R(x) denote the area of the set B1[0, 1] ∩(x + B2[0, 1]), and note that {R(x) : x ∈R2} is independent of Y . Then 0 = E[L2(B[0, 1] ∩B[2, 3])] = E[R(Y )] = (2π)−1 Z R2 e−| x|2/2 E[R(x)] dx, where we are averaging with respect to the Gaussian distribution of B(2)−B(1). Thus R(x) = 0 almost surely for L2-almost all x, and hence L2 ¡© x ∈R2 : R(x) > 0 ª¢ = 0, almost surely. 51 From Lemma 2.20 we get that, almost surely, L2(B[0, 1]) = 0 or L2(B[2, 3]) = 0. The observation that L2(B[0, 1]) and L2(B[2, 3]) are identically distributed and independent completes the proof that L2(B[0, 1]) = 0 almost surely. Remark 2.22. How big is the range, or path, of Brownian motion? We have seen that the Lebesgue measure of a planar Brownian path is zero almost surely, but a more precise answer needs the concept of Hausdorffmeasure and dimension, which we develop in Chapter 4. ⋄ Corollary 2.23. For any points x, y ∈Rd, d ≥2, we have Px{y ∈B(0, 1]} = 0. Proof. Observe that, by projection onto the first two coordinates, it suffices to prove this result for d = 2. Note that Theorem 2.21 holds for Brownian motion with arbitrary starting point y ∈R2. By Fubini’s theorem, for any fixed y ∈R2, Z R2 Py{x ∈B[0, 1]} dx = EyL2(B[0, 1]) = 0. Hence, for L2-almost every point x, we have Py{x ∈B[0, 1]} = 0. By symmetry of Brownian motion, Py{x ∈B[0, 1]} = P0{x −y ∈B[0, 1]} = P0{y −x ∈B[0, 1]} = Px{y ∈B[0, 1]} . We infer that Px{y ∈B[0, 1]} = 0, for L2-almost every point x. For any ε > 0 we thus have, almost surely, PB(ε){y ∈B[0, 1]} = 0. Hence, P{y ∈B(0, 1]} = lim ε↓0 P{y ∈B[ε, 1]} = lim ε↓0 EPB(ε){y ∈B[0, 1 −ε]} = 0, where we have used the Markov property in the second step. Remark 2.24. Loosely speaking, planar Brownian motion almost surely does not hit singletons. Which other sets are not hit by Brownian motion? This clearly depends on the size and shape of the set in some intricate way, and a precise answer will use the notion of capacity, which we study in Chapter 8. ⋄ 2.3. The zero set of Brownian motion. 
As a further application of the strong Markov property we have a first look at the properties of the zero set {t ≥0 : B(t) = 0} of one-dimensional Brownian motion. We prove that this set is a closed set with no isolated points (sometimes called a perfect set). This is perhaps surprising since, almost surely, a Brownian motion has isolated zeros from the left, for instance the first zero after 1/2, or from the right, like the last zero before 1/2. Theorem 2.25. Let {B(t) : t ≥0} be a one dimensional Brownian motion and Zero = {t ≥0 : B(t) = 0} its zero set. Then, almost surely, Zero is a closed set with no isolated points. 52 Proof. Clearly, with probability one, Zero is closed because Brownian motion is continuous almost surely. To prove that no point of Zero is isolated we consider the following construction: For each rational q ∈[0, ∞) consider the first zero after q, i.e., τq = inf{t ≥q: B(t) = 0}. Note that τq is an almost surely finite stopping time. Since Zero is closed, the inf is almost surely a minimum. By the strong Markov property, applied to τq, we have that for each q, almost surely τq is not an isolated zero from the right. But, since there are only countably many rationals, we conclude that almost surely, for all q rational, τq is not an isolated zero from the right. Our next task is to prove that the remaining points of Z are not isolated from the left. So we claim that any 0 < t ∈Z which is different from τq for all rational q is not an isolated point from the left. To see this take a sequence qn ↑t, qn ∈Q. Define tn = τqn. Clearly qn ≤tn < t and so tn ↑t. Thus t is not isolated from the left. Remark 2.26. Theorem 2.25 implies that Zero is uncountable, see Exercise 2.7. ⋄ 3. Markov processes derived from Brownian motion In this section, we define the concept of a Markov process. Our motivation is that various processes derived from Brownian motion are Markov processes. Among the examples are the reflection of Brownian motion in zero, and the process {Ta : a ≥0} of times Ta when a Brownian motion reaches level a for the first time. We assume that the reader is familiar with the notion of conditional expectation given a σ-algebra, see [Wi91] for a reference. Definition 2.27. A function p: [0, ∞) × Rd × B →R, where B is the Borel σ-algebra in Rd, is a Markov transition kernel provided (1) p( · , · , A) is measurable as a function of (t, x), for each A ∈B; (2) p(t, x, · ) is a Borel probability measure on Rd for all t ≥0 and x ∈Rd, when integrating a function f with respect to this measure we write Z f(y) p(t, x, dy) ; (3) for all A ∈B, x ∈Rd and t, s > 0, p(t + s, x, A) = Z Rd p(t, y, A) p(s, x, dy). An adapted process {X(t) : t ≥0} is a (time-homogeneous) Markov process with transition kernel p with respect to a filtration (F(t) : t ≥0), if for all t ≥s and Borel sets A ∈B we have P{X(t) ∈A | F(s)} = p(t −s, X(s), A) . ⋄ 53 Observe that p(t, x, A) is the probability that the process takes a value in A at time t, if it is started at the point x. Readers familiar with Markov chains can recognise the pattern behind this definition: The Markov transition kernel p plays the role of the transition matrix P in this setup. The next two examples are trivial consequences of the Markov property for Brownian motion. Example 2.28. Brownian motion is a Markov process and for its transition kernel p the distri-bution p(t, x, · ) is a normal distribution with mean x and variance t. 
Similarly, $d$-dimensional Brownian motion is a Markov process whose kernel $p(t, x, \cdot)$ is a Gaussian distribution with mean $x$ and covariance matrix $t$ times the identity. Note that property (3) in the definition of the Markov transition kernel is just the fact that the sum of two independent Gaussian random vectors is a Gaussian random vector whose covariance matrix is the sum of the two covariance matrices. ⋄

Notation 2.29. The transition kernel of $d$-dimensional Brownian motion is described by probability measures $p(t, x, \cdot)$ with densities denoted throughout this book by
\[ p(t, x, y) = (2\pi t)^{-d/2} \exp\Big( -\frac{|x-y|^2}{2t} \Big). \] ⋄

Example 2.30. The reflected one-dimensional Brownian motion $\{X(t) : t \ge 0\}$ defined by $X(t) = |B(t)|$ is a Markov process. Moreover, its transition kernel $p(t, x, \cdot)$ is the law of $|Y|$ for $Y$ normally distributed with mean $x$ and variance $t$, which we call the modulus normal distribution with parameters $x$ and $t$. ⋄

We now prove a famous theorem of Paul Lévy, which shows that the difference between the maximum process of a Brownian motion and the Brownian motion itself is a reflected Brownian motion. To be precise, this means that the difference of the processes has the same marginal distributions as a reflected Brownian motion, and is also almost surely continuous.

Theorem 2.31 (Lévy 1948). Let $\{M(t): t \ge 0\}$ be the maximum process of a linear standard Brownian motion $\{B(t): t \ge 0\}$, i.e. the process defined by $M(t) = \max_{0 \le s \le t} B(s)$. Then the process $\{Y(t) : t \ge 0\}$ defined by $Y(t) = M(t) - B(t)$ is a reflected Brownian motion.

Proof. The main step is to show that the process $Y = \{Y(t) : t \ge 0\}$ is a Markov process and that its Markov transition kernel $p(t, x, \cdot)$ has modulus normal distribution with parameters $x$ and $t$. Once this is established, it is immediate that the marginal distributions of $Y$ agree with those of a reflected Brownian motion. Obviously, $Y$ has almost surely continuous paths.

For the main step, fix $s > 0$ and consider the two processes $\{\hat B(t) : t \ge 0\}$ defined by $\hat B(t) = B(s+t) - B(s)$ for $t \ge 0$, and $\{\hat M(t) : t \ge 0\}$ defined by $\hat M(t) = \max_{0 \le u \le t} \hat B(u)$ for $t \ge 0$.

Figure 3. On the left, the process $\{B(t): t \ge 0\}$ with its associated maximum process $\{M(t): t \ge 0\}$ indicated by the dashed curve; on the right, the process $\{M(t) - B(t): t \ge 0\}$.

Because $Y(s)$ is $\mathcal{F}^+(s)$-measurable, it suffices to check that, conditional on $\mathcal{F}^+(s)$, for every $t \ge 0$ the random variable $Y(s+t)$ has the same distribution as $|Y(s) + \hat B(t)|$. Indeed, this directly implies that $\{Y(t) : t \ge 0\}$ is a Markov process with the same transition kernel as the reflected Brownian motion.

To prove the claim fix $s, t \ge 0$ and observe that $M(s+t) = M(s) \vee (B(s) + \hat M(t))$, and so we have $Y(s+t) = \big( M(s) \vee (B(s) + \hat M(t)) \big) - \big( B(s) + \hat B(t) \big)$. Using the fact that $(a \vee b) - c = (a-c) \vee (b-c)$, we have
\[ Y(s+t) = \big( Y(s) \vee \hat M(t) \big) - \hat B(t). \]
To finish, it suffices to check, for every $y \ge 0$, that $y \vee \hat M(t) - \hat B(t)$ has the same distribution as $|y + \hat B(t)|$. For any $a \ge 0$ write
\[ P_1 = \mathbb{P}\{ y - \hat B(t) > a \}, \qquad P_2 = \mathbb{P}\{ y - \hat B(t) \le a \text{ and } \hat M(t) - \hat B(t) > a \}. \]
Then $\mathbb{P}\{ y \vee \hat M(t) - \hat B(t) > a \} = P_1 + P_2$. Since $\hat B$ has the same distribution as $-\hat B$ we have $P_1 = \mathbb{P}\{ y + \hat B(t) > a \}$. To study the second term it is useful to define the time-reversed Brownian motion $\{W(u) : 0 \le u \le t\}$ by $W(u) := \hat B(t-u) - \hat B(t)$ for $0 \le u \le t$. Note that $W$ is also a Brownian motion on $[0,t]$, since it is continuous and its finite-dimensional distributions are Gaussian with the right covariances. Let $M_W(t) = \max_{0 \le u \le t} W(u)$; then $M_W(t) = \hat M(t) - \hat B(t)$. Since $W(t) = -\hat B(t)$, we have $P_2 = \mathbb{P}\{ y + W(t) \le a \text{ and } M_W(t) > a \}$. If we use the reflection principle, reflecting $\{W(u) : 0 \le u \le t\}$ at the first time it hits $a$, we get another Brownian motion $\{W^*(u) : 0 \le u \le t\}$, in terms of which $P_2 = \mathbb{P}\{ W^*(t) \ge a + y \}$. Since $W^*$ has the same distribution as $-\hat B$, it follows that $P_2 = \mathbb{P}\{ y + \hat B(t) \le -a \}$. The distribution of $\hat B(t)$ is continuous, and so, adding $P_1$ and $P_2$, we get
\[ \mathbb{P}\{ y \vee \hat M(t) - \hat B(t) > a \} = \mathbb{P}\{ |y + \hat B(t)| > a \}. \]
This proves the main step and, consequently, the theorem.
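Theorem 2.31 lends itself to a quick numerical check. The following sketch (assuming NumPy; parameters ad hoc) simulates many discretized paths up to time 1 and compares empirical quantiles of $M(1) - B(1)$ with those of $|B(1)|$; the maximum over a discrete grid slightly underestimates $M$, so small deviations are expected.

import numpy as np

rng = np.random.default_rng(4)
paths, n, dt = 5000, 1000, 0.001             # time horizon t = 1
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), (paths, n)), axis=1)
M = np.maximum(np.maximum.accumulate(B, axis=1), 0.0)  # running max, including B(0) = 0
Y = M[:, -1] - B[:, -1]                      # M(1) - B(1)
R = np.abs(B[:, -1])                         # |B(1)|, reflected Brownian motion at time 1
for q in (0.25, 0.5, 0.75, 0.9):
    print(f"q = {q:.2f}   M - B: {np.quantile(Y, q):.3f}   |B|: {np.quantile(R, q):.3f}")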
While, as seen above, $\{M(t) - B(t) : t \ge 0\}$ is a Markov process, it is important to note that the maximum process $\{M(t) : t \ge 0\}$ itself is not a Markov process. However, the times when new maxima are achieved do form a Markov process, as the following theorem shows.

Theorem 2.32. For any $a \ge 0$ define the stopping times $T_a = \inf\{t \ge 0 : B(t) = a\}$. Then $\{T_a : a \ge 0\}$ is an increasing Markov process with transition kernel given by the densities
\[ s \mapsto p(a, t, s) = \frac{a}{\sqrt{2\pi (s-t)^3}} \exp\Big( -\frac{a^2}{2(s-t)} \Big)\, \mathbf{1}\{s > t\}, \qquad \text{for } a > 0. \]
This process is called the stable subordinator of index $\frac12$.

Proof. Fix $a \ge b \ge 0$ and note that, for all $t \ge 0$,
\[ \{T_a - T_b = t\} = \big\{ B(T_b + s) - B(T_b) < a - b \text{ for } s < t, \text{ and } B(T_b + t) - B(T_b) = a - b \big\}. \]
By the strong Markov property of Brownian motion this event is independent of $\mathcal{F}^+(T_b)$, and therefore in particular of $\{T_d : d \le b\}$. This proves the Markov property of $\{T_a : a \ge 0\}$. The form of the transition kernel follows from the reflection principle:
\[ \mathbb{P}\{T_a - T_b \le t\} = \mathbb{P}\{T_{a-b} \le t\} = \mathbb{P}\big\{ \max_{0 \le s \le t} B(s) \ge a - b \big\} = 2\, \mathbb{P}\{B(t) \ge a - b\} = 2 \int_{a-b}^\infty \frac{1}{\sqrt{2\pi t}} \exp\big( -\tfrac{x^2}{2t} \big)\, dx = \int_0^t \frac{a-b}{\sqrt{2\pi s^3}} \exp\big( -\tfrac{(a-b)^2}{2s} \big)\, ds, \]
where we used the substitution $x = \sqrt{t/s}\,(a-b)$ in the last step.

In a similar way another important Markov process, the Cauchy process, is hidden in planar Brownian motion, see Figure 4.

Theorem 2.33. Let $\{B(t): t \ge 0\}$ be a planar Brownian motion, define a family $(V(a): a \ge 0)$ of vertical lines $V(a) = \{(x,y) \in \mathbb{R}^2 : x = a\}$, and let $T(a) = \tau(V(a))$ be the first hitting time of $V(a)$. Then the process $\{X(a): a \ge 0\}$ defined by $X(a) := B_2(T(a))$ for $B(t) = (B_1(t), B_2(t))^{\mathsf T}$ is a Markov process with transition kernel
\[ p(a, x, dy) = \frac{a}{\pi\big( a^2 + (x-y)^2 \big)}\, dy. \]
This process is called the Cauchy process.

Figure 4. The Cauchy process embedded in planar Brownian motion.

Proof. The Markov property of $\{X(a): a \ge 0\}$ is a consequence of the strong Markov property of Brownian motion for the stopping times $T(a)$, together with the fact that $T(a) < T(b)$ for all $a < b$. In order to calculate the transition density, recall from Theorem 2.32 that $T(a)$, the first time the one-dimensional Brownian motion $\{B_1(s): s \ge 0\}$ hits level $a$, has density $\frac{a}{\sqrt{2\pi s^3}} \exp\big( -\frac{a^2}{2s} \big)$. Since $T(a)$ is independent of $\{B_2(s): s \ge 0\}$, the density of $B_2(T(a))$ is (in the variable $x$)
\[ \int_0^\infty \frac{1}{\sqrt{2\pi s}} \exp\big( -\tfrac{x^2}{2s} \big)\, \frac{a}{\sqrt{2\pi s^3}} \exp\big( -\tfrac{a^2}{2s} \big)\, ds = \int_0^\infty \frac{a}{\pi (a^2 + x^2)}\, e^{-\sigma}\, d\sigma = \frac{a}{\pi (a^2 + x^2)}, \]
where the integral is evaluated using the substitution $\sigma = \frac{1}{2s} (a^2 + x^2)$.

Remark 2.34. Alternative proofs of Theorem 2.33, avoiding the explicit evaluation of integrals, will be given in Exercise 2.17 and Exercise 7.4. ⋄

4. The martingale property of Brownian motion

In the previous section we took a particular feature of Brownian motion, the Markov property, and introduced an abstract class of processes, the Markov processes, which share this feature.
We have seen that a number of process derived from Brownian motion are again Markov processes and this insight helped us getting new information about Brownian motion. In this section we follow a similar plan, taking a different feature of Brownian motion, the martingale property, as a starting point. 57 Definition 2.35. A real-valued stochastic process {X(t) : t ≥0} is a martingale with respect to a filtration (F(t) : t ≥0) if it is adapted to the filtration, E|X(t)| < ∞for all t ≥0 and, for any pair of times 0 ≤s ≤t, E £ X(t) ¯ ¯ F(s) ¤ = X(s) almost surely. The process is called a submartingale if ≥holds, and a supermartingale if ≤holds in the display above. ⋄ Remark 2.36. Intuitively, a martingale is a process where the current state X(t) is always the best prediction for its further states. In this sense, martingales describe fair games. If {X(t) : t ≥0} is a martingale, the process {|X(t)| : t ≥0} need not be a martingale, but it still is a submartingale, as a simple application of the triangle inequality shows. ⋄ Example 2.37. For a one-dimensional Brownian motion {B(t) : t ≥0} we have E £ B(t) ¯ ¯ F+(s) ¤ = E £ B(t) −B(s) ¯ ¯ F+(s) ¤ + B(s) = E £ B(t) −B(s) ¤ + B(s) = B(s) , using Theorem 2.5 in the second step. Hence Brownian motion is a martingale. ⋄ We now state two useful facts about martingales, which we will exploit extensively: The optional stopping theorem and Doob’s maximal inequality. Both of these results are well-known in the discrete time setting and there is a reminder in Appendix II.3. The natural extension of these results to the continuous time setting is the content of our propositions. The optional stopping theorem provides a condition under which the defining equation for martingales can be extended from fixed times 0 ≤s ≤t to stopping times 0 ≤S ≤T. Proposition 2.38 (Optional stopping theorem). Suppose {X(t) : t ≥0} is a continuous martingale, and 0 ≤S ≤T are stopping times. If the process {X(t ∧T) : t ≥0} is dominated by an integrable random variable X, i.e. |X(t ∧T)| ≤X almost surely, for all t ≥0, then E £ X(T) ¯ ¯F(S) ¤ = X(S), almost surely. Proof. The best way to prove this is to prove the result first for martingales in discrete time, and then extend the result by approximation. The result for discrete time is provided in our appendix, see Theorem II.4.11. Let us explain the approximation step here. Fix N ∈N and define a discrete time martingale by Xn = X(T ∧n2−N) and stopping times S′ = ⌊2NS⌋+ 1 and T ′ = ⌊2NT⌋+ 1, with respect to the filtration (G(n) : n ∈N) given by G(n) = F(n2−N). Obviously Xn is dominated by an integrable random variable and hence the discrete time result gives E £ XT ′ ¯ ¯ G(S′) ¤ = XS′ , which translates as E £ X(T) ¯ ¯ F(SN) ¤ = X(T ∧SN) , for SN = 2−N(⌊2NS⌋+ 1). Letting N ↑∞and using dominated convergence for conditional expectations gives the result. The following inequality will also be of great use to us. 58 Proposition 2.39 (Doob’s maximal inequality). Suppose {X(t) : t ≥0} is a continuous submartingale and p > 1. Then, for any t ≥0, E h¡ sup 0≤s≤t |X(t)| ¢pi ≤ ¡ p p−1 ¢p E £ |X(t)|p¤ . Proof. Again this is proved for martingales in discrete time in our appendix, see Theorem II.4.14, and can be extended by approximation. Fix N ∈N and define a discrete time martingale by Xn = X(tn2−N) with respect to the filtration (G(n) : n ∈N) given by G(n) = F(tn2−N). By the discrete version of Doob’s maximal inequality, E h¡ sup 1≤k≤2N Xk ¢pi ≤ ¡ p p−1 ¢p E £ Xp 2N ¤ = ¡ p p−1 ¢p E £ X(t)p¤ . 
Letting N ↑∞and using monotone convergence gives the claim. We now use the martingale property and the optional stopping theorem to prove Wald’s lemmas for Brownian motion. These results identify the first and second moments of the value of Brownian motion at well-behaved stopping times. Theorem 2.40 (Wald’s lemma for Brownian motion). Let {B(t) : t ≥0} be a standard linear Brownian motion, and T be a stopping time such that either (i) E[T] < ∞, or (ii) © B(t ∧T): t ≥0 ª is L1-bounded . Then we have E[B(T)] = 0. Remark 2.41. The proof of Wald’s lemma is based on an optional stopping argument. An alternative proof of (i), which uses only the strong Markov property and the law of large numbers, is suggested in Exercise 2.5. Also, the moment condition (i) in Theorem 2.40 can be relaxed, see Theorem 2.46 for an optimal criterion. ⋄ Proof. We first show that a stopping time satisfying condition (i), also satisfies condition (ii). So suppose E[T] < ∞, and define Mk = max 0≤t≤1 |B(t + k) −B(k)| and M = ⌈T⌉ X k=1 Mk. Then E[M] = E h ⌈T⌉ X k=1 Mk i = ∞ X k=1 E £ 1{T ≥k} Mk ¤ = ∞ X k=1 P{T ≥k} E[Mk] = E[M0] E[T] < ∞, where, using Fubini’s theorem and Remark 2.19, E[M0] = Z ∞ 0 P © max 0≤t≤1 |B(t)| > x ª dx ≤ Z ∞ 0 2 √ 2 x √π exp © −x2 2t ª < ∞. 59 Now note that |B(t∧T)| ≤M, so that (ii) holds. It remains to observe that under condition (ii) we can apply the optional stopping theorem with S = 0, which yields E[B(T)] = 0. Corollary 2.42. Let S ≤T be stopping times and E[T] < ∞. Then E £ (B(S))2¤ = E £ (B(T))2¤ + E £¡ B(T) −B(S) ¢2¤ . Proof. The tower property of conditional expectation gives E £¡ B(T) ¢2¤ = E £¡ B(S) ¢2¤ + 2E £ B(S)E £ B(T) −B(S) | F(S) ¤ + E £¡ B(T) −B(S) ¢2¤ . Note that E[T] < ∞implies E[T −S | F(S)] < ∞almost surely. Hence the strong Markov property at time S together with Wald’s lemma imply E[B(T) −B(S) | F(S)] = 0 almost surely, so that the middle term vanishes. To find the second moment of B(T) and thus prove Wald’s second lemma, we identify a further martingale derived from Brownian motion. Lemma 2.43. Suppose {B(t) : t ≥0} is a linear Brownian motion. Then the process © B(t)2 −t : t ≥0 ª is a martingale. Proof. The process is adapted to the natural filtration of Brownian motion and E £ B(t)2 −t ¯ ¯ F+(s) ¤ = E £¡ B(t) −B(s) ¢2 ¯ ¯ F+(s) ¤ + 2 E £ B(t)B(s) ¯ ¯ F+(s) ¤ −B(s)2 −t = (t −s) + 2B(s)2 −B(s)2 −t = B(s)2 −s , which completes the proof. Theorem 2.44 (Wald’s second lemma). Let T be a stopping time for standard Brownian motion such that E[T] < ∞. Then E £ B(T)2¤ = E[T]. Proof. Look at the martingale {B(t)2 −t: t ≥0} and define stopping times Tn = inf{t ≥0: |B(t)| = n} so that {B(t ∧T ∧Tn)2 −t ∧T ∧Tn : t ≥0} is dominated by the integrable random variable n2 + T. By the optional stopping theorem we get E[B(T ∧Tn)2] = E[T ∧Tn]. By Corollary 2.42 we have E[B(T)2] ≥E[B(T ∧Tn)2]. Hence, by monotone convergence, E £ B(T)2¤ ≥lim n→∞E £ B(T ∧Tn)2¤ = lim n→∞E £ T ∧Tn ¤ = E[T] . Conversely, now using Fatou’s lemma in the first step, E £ B(T)2¤ ≤lim inf n→∞E £ B(T ∧Tn)2¤ = lim inf n→∞E £ T ∧Tn ¤ ≤E[T] . 60 Wald’s lemmas suffice to obtain exit probabilities and expected exit times for a linear Brown-ian motion. In Chapter 3 we shall explore the corresponding problem for higher-dimensional Brownian motion using harmonic functions. Theorem 2.45. Let a < 0 < b and, for a standard linear Brownian motion {B(t) : t ≥0}, define T = min{t ≥0 : B(t) ∈{a, b}}. Then • P{B(T) = a} = b |a| + b and P{B(T) = b} = |a| |a| + b. • E[T] = |a|b. Proof. 
Let T = τ({a, b}) be the first exit time from the interval [a, b]. This stopping time satisfies the condition of the optional stopping theorem, as |B(t ∧T)| ≤|a| ∨b. Hence, by Wald’s first lemma, 0 = E[B(T)] = aP{B(T) = a} + bP{B(T) = b}. Together with the trivial equation P{B(T) = a} + P{B(T) = b} = 1 one can solve this, and obtain P{B(T) = a} = b/(|a| + b), and P{B(T) = b} = |a|/(|a| + b). To use Wald’s second lemma, we check that E[T] < ∞. For this purpose note that E[T] = Z ∞ 0 P{T > t} dt = Z ∞ 0 P{B(s) ∈(a, b) for all s ∈[0, t]} dt , and that, for t ≥k ∈N the integrand is bounded by the kth power of maxx∈(a,b) Px{B(1) ∈ (a, b)}, i.e. decreases exponentially. Hence the integral is finite. Now, by Wald’s second lemma and the exit probabilities, we obtain E[T] = E[B(T)2] = a2b |a| + b + b2|a| |a| + b = |a|b. We now discuss a strengtening of Theorem 2.40, which works with a weaker moment condition. This theorem will not be used in the remainder of the book and can be skipped on first reading. We shall see in Exercise 2.11 that the condition we give is in some sense optimal. Theorem 2.46. Let {B(t): t ≥0} be a standard linear Brownian motion and T a stopping time with E[T 1/2] < ∞. Then E[B(T)] = 0. Proof. Let {M(t): t ≥0} be the maximum process of {B(t): t ≥0} and T a stopping time with E[T 1/2] < ∞. Let τ = ⌈log4 T⌉, so that B(t ∧T) ≤M(4τ). In order to get E[B(T)] = 0 from the optional stopping theorem it suffices to show that the majorant is integrable, i.e. that EM(4τ) < ∞. Define a discrete time stochastic process {Xk : k ∈N} by Xk = M(4k) −2k+2, and observe that τ is a stopping time with respect to the natural filtration (Fk : k ∈N) of this process. Moreover, the process is a supermartingale. Indeed, E £ Xk ¯ ¯ Fk−1 ¤ ≤M(4k−1) + E h max 0≤t≤4k−4k−1 B(t) i −2k+2 . 61 and the supermartingale property follows as E h max 0≤t≤4k−4k−1 B(t) i = p 4k −4k−1 E h max 0≤t≤1 B(t) i = 2 p 4k −4k−1 ≤2k+2 −2k+1 . Now let t = 4ℓand use the supermartingale property for τ ∧ℓto get E £ M(4τ ∧t) ¤ = E £ Xτ∧ℓ ¤ + E £ 2τ∧ℓ+2¤ ≤E[X0] + 4 E £ 2τ¤ . Note that X0 = M(1)−4, which has finite expectation and, by our assumption on the moments of T, we have E[2τ] < ∞. Thus, by monotone convergence, E £ M(4τ) ¤ = lim t↑∞ £ M(4τ ∧t) ¤ < ∞, which completes the proof of the theorem. Given a function f : R →R we were able, in Lemma 2.43, to subtract a suitable term from f(B(t)) to obtain a martingale. To get a feeling what we have to subtract in the general case, we look at the analogous problem for the simple random walk {Sn : n ∈N}. A straightforward calculation gives, for f : Z →R, E £ f(Sn+1) ¯ ¯ σ{S1, . . . , Sn} ¤ −f(Sn) = 1 2 ¡ f(Sn + 1) −2f(Sn) + f(Sn −1) ¢ = 1 2 ˜ ∆f(Sn) , where ˜ ∆is the second difference operator ˜ ∆f(x) := f(x + 1) −2f(x) + f(x −1) . Hence f(Sn) −1 2 n−1 X k=0 ˜ ∆f(Sk) defines a (discrete time) martingale. In the Brownian motion case, one would expect a similar result with ˜ ∆f replaced by its continuous analogue, the Laplacian ∆f(x) = d X i=1 ∂2f ∂x2 i . Theorem 2.47. Let f : Rd →R be twice continuously differentiable, and {B(t): t ≥0} be a d-dimensional Brownian motion. Further suppose that, for all t > 0 and x ∈Rd, we have Ex|f(B(t))| < ∞and Ex R t 0 |∆f(B(s))| ds < ∞. Then the process {X(t) : t ≥0} defined by X(t) = f(B(t)) −1 2 Z t 0 ∆f(B(s)) ds is a martingale. Proof. For any 0 ≤s < t, E £ X(t) ¯ ¯ F(s) ¤ = EB(s) £ f(B(t)) ¤ −1 2 Z s 0 ∆f(B(u)) du − Z t s EB(s) £ 1 2 ∆f(B(u)) ¤ du . 
We now discuss a strengthening of Theorem 2.40, which works with a weaker moment condition. This theorem will not be used in the remainder of the book and can be skipped on first reading. We shall see in Exercise 2.11 that the condition we give is in some sense optimal.

Theorem 2.46. Let $\{B(t)\colon t \ge 0\}$ be a standard linear Brownian motion and $T$ a stopping time with $E[T^{1/2}] < \infty$. Then $E[B(T)] = 0$.

Proof. Let $\{M(t)\colon t \ge 0\}$ be the maximum process of $\{B(t)\colon t \ge 0\}$ and $T$ a stopping time with $E[T^{1/2}] < \infty$. Let $\tau = \lceil \log_4 T \rceil$, so that $B(t \wedge T) \le M(4^\tau)$. In order to get $E[B(T)] = 0$ from the optional stopping theorem it suffices to show that the majorant is integrable, i.e. that $E[M(4^\tau)] < \infty$.

Define a discrete time stochastic process $\{X_k : k \in \mathbb{N}\}$ by $X_k = M(4^k) - 2^{k+2}$, and observe that $\tau$ is a stopping time with respect to the natural filtration $(\mathcal{F}_k : k \in \mathbb{N})$ of this process. Moreover, the process is a supermartingale. Indeed,
\[ E[ X_k \mid \mathcal{F}_{k-1} ] \le M(4^{k-1}) + E\Big[ \max_{0 \le t \le 4^k - 4^{k-1}} B(t) \Big] - 2^{k+2} , \]
and the supermartingale property follows as
\[ E\Big[ \max_{0 \le t \le 4^k - 4^{k-1}} B(t) \Big] = \sqrt{4^k - 4^{k-1}}\; E\Big[ \max_{0 \le t \le 1} B(t) \Big] \le 2 \sqrt{4^k - 4^{k-1}} \le 2^{k+2} - 2^{k+1} . \]
Now let $t = 4^\ell$ and use the supermartingale property for $\tau \wedge \ell$ to get
\[ E\big[ M(4^{\tau \wedge \ell}) \big] = E[ X_{\tau \wedge \ell} ] + E\big[ 2^{(\tau \wedge \ell) + 2} \big] \le E[X_0] + 4\, E[ 2^\tau ] . \]
Note that $X_0 = M(1) - 4$, which has finite expectation, and that $2^\tau \le 2\, T^{1/2}$, so that our assumption on the moments of $T$ gives $E[2^\tau] < \infty$. Thus, by monotone convergence,
\[ E\big[ M(4^\tau) \big] = \lim_{t \uparrow \infty} E\big[ M(4^\tau \wedge t) \big] < \infty , \]
which completes the proof of the theorem.

Given a function $f : \mathbb{R} \to \mathbb{R}$ we were able, in Lemma 2.43, to subtract a suitable term from $f(B(t))$ to obtain a martingale. To get a feeling for what we have to subtract in the general case, we look at the analogous problem for the simple random walk $\{S_n : n \in \mathbb{N}\}$. A straightforward calculation gives, for $f : \mathbb{Z} \to \mathbb{R}$,
\[ E\big[ f(S_{n+1}) \mid \sigma\{S_1, \dots, S_n\} \big] - f(S_n) = \tfrac12 \big( f(S_n + 1) - 2 f(S_n) + f(S_n - 1) \big) = \tfrac12\, \tilde\Delta f(S_n) , \]
where $\tilde\Delta$ is the second difference operator
\[ \tilde\Delta f(x) := f(x+1) - 2 f(x) + f(x-1) . \]
Hence
\[ f(S_n) - \tfrac12 \sum_{k=0}^{n-1} \tilde\Delta f(S_k) \]
defines a (discrete time) martingale. In the Brownian motion case, one would expect a similar result with $\tilde\Delta f$ replaced by its continuous analogue, the Laplacian
\[ \Delta f(x) = \sum_{i=1}^d \frac{\partial^2 f}{\partial x_i^2} . \]

Theorem 2.47. Let $f : \mathbb{R}^d \to \mathbb{R}$ be twice continuously differentiable, and $\{B(t)\colon t \ge 0\}$ be a $d$-dimensional Brownian motion. Further suppose that, for all $t > 0$ and $x \in \mathbb{R}^d$, we have $E_x |f(B(t))| < \infty$ and $E_x \int_0^t |\Delta f(B(s))|\, ds < \infty$. Then the process $\{X(t) : t \ge 0\}$ defined by
\[ X(t) = f(B(t)) - \tfrac12 \int_0^t \Delta f(B(s))\, ds \]
is a martingale.

Proof. For any $0 \le s < t$,
\[ E[ X(t) \mid \mathcal{F}(s) ] = E_{B(s)}\big[ f(B(t)) \big] - \tfrac12 \int_0^s \Delta f(B(u))\, du - \int_s^t E_{B(s)}\big[ \tfrac12 \Delta f(B(u)) \big]\, du . \]
Now, using integration by parts and $\tfrac12 \Delta p(t, x, y) = \frac{\partial}{\partial t} p(t, x, y)$, we find
\[ E_{B(s)}\big[ \tfrac12 \Delta f(B(u)) \big] = \tfrac12 \int p(u - s, B(s), x)\, \Delta f(x)\, dx = \tfrac12 \int \Delta p(u - s, B(s), x)\, f(x)\, dx = \int \frac{\partial}{\partial u} p(u - s, B(s), x)\, f(x)\, dx , \]
and hence
\[ \int_s^t E_{B(s)}\big[ \tfrac12 \Delta f(B(u)) \big]\, du = \lim_{\varepsilon \downarrow 0} \int \Big[ \int_{s+\varepsilon}^t \frac{\partial}{\partial u} p(u - s, B(s), x)\, du \Big] f(x)\, dx = \int p(t - s, B(s), x)\, f(x)\, dx - \lim_{\varepsilon \downarrow 0} \int p(\varepsilon, B(s), x)\, f(x)\, dx = E_{B(s)}\big[ f(B(t)) \big] - f(B(s)) , \]
and this confirms the martingale property.

Example 2.48. Using $f(x) = x^2$ in Theorem 2.47 yields the familiar martingale $\{B(t)^2 - t : t \ge 0\}$. Using $f(x) = x^3$ we obtain the martingale $\{B(t)^3 - 3 \int_0^t B(s)\, ds : t \ge 0\}$, and not the familiar martingale $\{B(t)^3 - 3 t B(t) : t \ge 0\}$. Of course, the difference $\{\int_0^t (B(t) - B(s))\, ds : t \ge 0\}$ (up to a factor 3) is then itself a martingale. ⋄

The next result states a fundamental principle, which we will discuss further in Chapter 7, see in particular Theorem 7.17.

Corollary 2.49. Suppose $f : \mathbb{R}^d \to \mathbb{R}$ satisfies $\Delta f(x) = 0$ and $E_x |f(B(t))| < \infty$, for every $x \in \mathbb{R}^d$ and $t > 0$. Then the process $\{f(B(t)) : t \ge 0\}$ is a martingale.

Example 2.50. The function $f : \mathbb{R}^2 \to \mathbb{R}$ given by $f(x_1, x_2) = e^{x_1} \cos x_2$ satisfies $\Delta f(x) = 0$. Hence $X(t) = e^{B_1(t)} \cos B_2(t)$ defines a martingale, where $\{B_1(t)\colon t \ge 0\}$ and $\{B_2(t)\colon t \ge 0\}$ are independent linear Brownian motions. ⋄
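As a quick numerical illustration of Example 2.50 (a sketch added here, assuming only numpy, with arbitrary sample sizes), note that for the martingale $X(t) = e^{B_1(t)} \cos B_2(t)$ started at the origin we must have $E[X(t)] = X(0) = 1$ for every $t$:

```python
import numpy as np

# E[exp(B1(t)) * cos(B2(t))] should equal 1 for every t, since
# f(x1, x2) = exp(x1) * cos(x2) is harmonic and f(0, 0) = 1.
rng = np.random.default_rng(1)
n_paths = 200_000
for t in (0.5, 1.0, 2.0):
    b1 = np.sqrt(t) * rng.standard_normal(n_paths)   # B1(t)
    b2 = np.sqrt(t) * rng.standard_normal(n_paths)   # B2(t), independent
    print(t, np.mean(np.exp(b1) * np.cos(b2)))       # close to 1
```

Indeed $E[e^{B_1(t)}] = e^{t/2}$ and $E[\cos B_2(t)] = e^{-t/2}$, so the product of the two independent factors has expectation one for all $t$.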
Exercises

Exercise 2.1. Show that the definition of $d$-dimensional Brownian motion is invariant under an orthogonal change of coordinates.

Exercise 2.2. Show that, for any tail event $A \in \mathcal{T}$, the probability $P_x(A)$ is independent of $x$. Also show that, for a germ event $A \in \mathcal{F}^+(0)$, the probability $P_x(A)$ may depend on $x$.

Exercise 2.3 (∗). For a linear Brownian motion, almost surely, every local maximum is a strict local maximum.
Hint. Use the Markov property to show that, given two disjoint closed time intervals, the maxima of Brownian motion on them are different almost surely. Then show that every local maximum of Brownian motion is a strict local maximum if this holds simultaneously for all pairs of disjoint rational intervals.

Exercise 2.4 (∗).
(i) If $S \le T$ are stopping times, then $\mathcal{F}^+(S) \subset \mathcal{F}^+(T)$.
(ii) If $T_n \downarrow T$ are stopping times, then $\mathcal{F}^+(T) = \bigcap_{n=1}^\infty \mathcal{F}^+(T_n)$.
(iii) If $T$ is a stopping time, then the random variable $B(T)$ is $\mathcal{F}^+(T)$-measurable.

Exercise 2.5 (∗). Let $\{B(t)\colon t \ge 0\}$ be a standard Brownian motion on the line, and $T$ a stopping time with $E[T] < \infty$. Define an increasing sequence of stopping times by $T_1 = T$ and $T_n = T(B_n) + T_{n-1}$, where $T(B_n)$ is the same stopping time, but associated with the Brownian motion $\{B_n(t)\colon t \ge 0\}$ given by $B_n(t) = B(t + T_{n-1}) - B(T_{n-1})$.
(a) Show that, almost surely, $\lim_{n \uparrow \infty} B(T_n)/n = 0$.
(b) Show that $B(T)$ is integrable.
(c) Show that, almost surely, $\lim_{n \uparrow \infty} B(T_n)/n = E[B(T)]$.
Combining (a) and (c) implies that $E[B(T)] = 0$, which is Wald's lemma.

Exercise 2.6. Show that, for any $x > 0$ and measurable set $A \subset [0, \infty)$,
\[ P_x\big\{ B(s) \ge 0 \text{ for all } 0 \le s \le t \text{ and } B(t) \in A \big\} = P_x\{B(t) \in A\} - P_{-x}\{B(t) \in A\} . \]

Exercise 2.7 (∗). Show that any nonempty, closed set with no isolated points is uncountable. Note that this applies, in particular, to the zero set of (linear) Brownian motion.

Exercise 2.8. The Ornstein–Uhlenbeck diffusion is the process $\{X(t) : t \in \mathbb{R}\}$ given by $X(t) = e^{-t} B(e^{2t})$ for all $t \in \mathbb{R}$, see also Remark 1.10. Show that $\{X(t)\colon t \ge 0\}$ and $\{X(-t)\colon t \ge 0\}$ are Markov processes and find their Markov transition kernels.

Exercise 2.9. Let $x, y \in \mathbb{R}^d$ and $\{B(t)\colon t \ge 0\}$ a $d$-dimensional Brownian motion started in $x$. Define the $d$-dimensional Brownian bridge $\{X(t)\colon 0 \le t \le 1\}$ with start in $x$ and end in $y$ by
\[ X(t) = B(t) - t\, \big( B(1) - y \big), \quad \text{for } 0 \le t \le 1 . \]
Show that the Brownian bridge is not a homogeneous Markov process.

Exercise 2.10. Find two stopping times $S \le T$ with $E[S] < \infty$ such that $E[B(S)^2] > E[B(T)^2]$.

Exercise 2.11 (∗). The purpose of this exercise is to show that the moment condition in Theorem 2.46 is optimal. Let $\{B(t)\colon t \ge 0\}$ be a standard linear Brownian motion and define $T = \inf\{t \ge 0 : B(t) = 1\}$, so that $B(T) = 1$ almost surely. Show that $E[T^\alpha] < \infty$ for all $\alpha < 1/2$.

Exercise 2.12. Let $\{B(t)\colon t \ge 0\}$ be a standard linear Brownian motion.
(a) Show that there exists a stopping time $T$ with $E[T] = \infty$ but $E[B(T)^2] < \infty$.
(b) Show that, for every stopping time $T$ with $E[T] = \infty$ and $E[\sqrt{T}] < \infty$, we have $E[B(T)^2] = \infty$.

Exercise 2.13. Let $\{B(t)\colon t \ge 0\}$ be a linear Brownian motion.
(a) Show that, for $\sigma > 0$, the process $\{\exp(\sigma B(t) - \sigma^2 t/2) : t \ge 0\}$ is a martingale.
(b) By taking derivatives $\partial^n/\partial\sigma^n$ at zero, derive that the following processes are martingales:
– $\{B(t)^2 - t : t \ge 0\}$,
– $\{B(t)^3 - 3 t B(t) : t \ge 0\}$, and
– $\{B(t)^4 - 6 t B(t)^2 + 3 t^2 : t \ge 0\}$.
(c) Find $E[T^2]$ for $T = \min\{t \ge 0 : B(t) \in \{a, b\}\}$ and $a < 0 < b$.

Exercise 2.14 (∗). Let $\{B(t)\colon t \ge 0\}$ be a linear Brownian motion and $a, b > 0$. Show that
\[ P_0\big\{ B(t) = a + b t \text{ for some } t > 0 \big\} = e^{-2ab} . \]

Exercise 2.15 (∗). Let $R > 0$ and $A = \{-R, R\}$. Denote by $\tau(A)$ the first hitting time of $A$, and by $T_x$ the first hitting time of the point $x \in \mathbb{R}$. Consider a linear Brownian motion started at $x \in [0, R]$, and prove that
(a) $E_x[\tau(A)] = R^2 - x^2$;
(b) $E_x\big[ T_R \mid T_R < T_0 \big] = \dfrac{R^2 - x^2}{3}$.
Hint. In (b) use one of the martingales of Exercise 2.13(b).

Exercise 2.16. Let $\{B(t)\colon t \ge 0\}$ be a Brownian motion in dimension one.
(a) Use the optional stopping theorem for the martingale in Exercise 2.13(a) to show that, with $\tau_a = \inf\{t \ge 0 : B(t) = a\}$,
\[ E_0\big[ e^{-\lambda \tau_a} \big] = e^{-a \sqrt{2\lambda}}, \quad \text{for all } \lambda, a > 0 . \]
(b) Use the reflection principle to show that, with $\tau_{-a} = \inf\{t \ge 0 : B(t) = -a\}$, we have
\[ E_0\big[ e^{-\lambda \tau_a} \big] = E_0\big[ e^{-\lambda \tau_a}\, \mathbf{1}\{\tau_a < \tau_{-a}\} \big] + E_0\big[ e^{-\lambda \tau_{-a}}\, \mathbf{1}\{\tau_{-a} < \tau_a\} \big]\, e^{-2a\sqrt{2\lambda}} . \]
(c) Deduce that $\tau = \tau_a \wedge \tau_{-a}$ satisfies
\[ E_0\big[ e^{-\lambda \tau} \big] = \operatorname{sech}\big( a \sqrt{2\lambda} \big), \quad \text{where } \operatorname{sech}(x) = \frac{2}{e^x + e^{-x}} . \]

Exercise 2.17. In this exercise we interpret $\mathbb{R}^2$ as the complex plane, so that a planar Brownian motion becomes a complex Brownian motion. A complex-valued stochastic process is called a martingale if its real and imaginary parts are martingales. Let $\{B(t) : t \ge 0\}$ be a complex Brownian motion started in $i$.
(a) Show that $\{e^{i\lambda B(t)} : t \ge 0\}$ is a martingale, for any $\lambda \in \mathbb{R}$.
(b) Let $T$ be the first time $\{B(t) : t \ge 0\}$ hits the real axis. Using the optional stopping theorem at $T$, show that $E\big[ e^{i\lambda B(T)} \big] = e^{-\lambda}$.
Inverting the Fourier transform, the statement of (b) means that $B(T)$ is Cauchy distributed, a fact we already know from an explicit calculation, see Theorem 2.33.

Exercise 2.18 (∗). Let $f : \mathbb{R}^d \to \mathbb{R}$ be twice continuously differentiable and $\{B(t)\colon t \ge 0\}$ a $d$-dimensional Brownian motion such that $E_x \int_0^t e^{-\lambda s} |f(B(s))|\, ds < \infty$ and $E_x \int_0^t e^{-\lambda s} |\Delta f(B(s))|\, ds < \infty$, for any $x \in \mathbb{R}^d$ and $t > 0$.
(a) Show that the process $\{X(t) : t \ge 0\}$ defined by
\[ X(t) = e^{-\lambda t} f(B(t)) - \int_0^t e^{-\lambda s} \big( \tfrac12 \Delta f(B(s)) - \lambda f(B(s)) \big)\, ds \]
is a martingale.
(b) Suppose $U$ is a bounded open set, $\lambda \ge 0$, and $u : U \to \mathbb{R}$ is a bounded solution of
\[ \tfrac12 \Delta u(x) = \lambda\, u(x), \quad \text{for } x \in U, \qquad \text{and} \quad \lim_{x \to x_0} u(x) = f(x_0) \text{ for all } x_0 \in \partial U . \]
Show that
\[ u(x) = E_x\big[ f(B(\tau))\, e^{-\lambda \tau} \big], \quad \text{where } \tau = \inf\{t \ge 0 : B(t) \notin U\} . \]

Notes and Comments

The Markov property is central to any discussion of Brownian motion. The discussion in this chapter is only a small fraction of what there is to say, and the Markov property will be omnipresent in the rest of the book.
The name goes back to Markov's paper [Ma06], where the Markovian dependence structure was introduced and a law of large numbers for dependent random variables was proved. The strong Markov property had been used for special stopping times, like hitting times of a point, since the 1930s. Hunt [Hu56] formalised the idea and gave rigorous proofs, and so did, independently, Dynkin [Dy57].

Zero-one laws are classics in probability theory. We have already encountered the powerful Hewitt–Savage law. Blumenthal's zero-one law was first proved in [Bl57].

The reflection principle is usually attributed to D. André [An87], who stated a variant for random walks. His concern was the ballot problem: if two candidates in a ballot receive $a$, respectively $b$, votes, with $a > b$, what is the probability that the first candidate was always in the lead during the counting of the votes? See the classical text of Feller [Fe68] for more on this problem. A formulation of the reflection principle for Brownian motion was given by Lévy [Le39], though apparently not based on the rigorous foundation of the strong Markov property. We shall later use a higher-dimensional version of the reflection principle, where a Brownian motion in $\mathbb{R}^d$ is reflected in a hyperplane.

The class of Markov processes defined in this chapter has a rich and fascinating theory of its own, and some aspects are discussed in the books [RW00] and [Ch82]. A typical feature of this theory is its strong connection to analysis and potential theory, which stems from the key role played by the transition semigroup in their definition. This aspect is emphasised in the book [BG68]. Many of the important examples of Markov processes can be derived from Brownian motion in one way or another, and this is an excellent motivation for further study of the theory. Amongst them are stable Lévy processes, like the Cauchy process or stable subordinators, the Bessel processes, and diffusions.

The intriguing relationship uncovered in Theorem 2.31 has found numerous extensions and complementary results, among them Pitman's $2M - X$ theorem, see [Pi75], which describes the process $\{2M(t) - B(t) : t \ge 0\}$ as a 3-dimensional Bessel process.

The concept of martingales is due to Doob, see [Do53]. They are an important class of stochastic processes in their own right and one of the gems of modern probability theory. A gentle introduction, mostly in discrete time, is [Wi91], while [RY94] discusses continuous martingales and the rich relations to Brownian motion. A fascinating fact, due to Dambis [Da65] and Dubins and Schwarz [DS65], is that for every continuous martingale there exists a time-change, i.e. a reparametrisation $t \mapsto T_t$ such that the $T_t$, $t \ge 0$, are stopping times and $t \mapsto M(T_t)$ is a Brownian motion.

Exercise 2.15 appears in similar form in [St75].

CHAPTER 3

Harmonic functions, transience and recurrence

In this chapter we explore the relation of harmonic functions and Brownian motion. This approach will be particularly useful for $d$-dimensional Brownian motion with $d > 1$. It allows us to study the fundamental questions of transience and recurrence of Brownian motion, investigate the classical Dirichlet problem of electrostatics, and provide the background for the deeper investigations of probabilistic potential theory, which will follow in Chapter 8.

1. Harmonic functions and the Dirichlet problem

Let $U$ be a domain, i.e. a connected open set $U \subset \mathbb{R}^d$, and $\partial U$ its boundary.
Suppose that its closure $\overline U$ is a homogeneous body and its boundary is electrically charged, the charge given by some continuous function $\varphi : \partial U \to \mathbb{R}$. The Dirichlet problem asks for the voltage $u(x)$ at a point $x \in U$. Kirchhoff's laws state that $u$ must be a harmonic function in $U$. We therefore start by discussing the basic features of harmonic functions.

Definition 3.1. Let $U \subset \mathbb{R}^d$ be a domain. A function $u : U \to \mathbb{R}$ is harmonic (on $U$) if it is twice continuously differentiable and, for any $x \in U$,
\[ \Delta u(x) := \sum_{j=1}^d \frac{\partial^2 u}{\partial x_j^2}(x) = 0 . \]
If instead of the last condition only $\Delta u(x) \ge 0$ holds, the function $u$ is called subharmonic. ⋄

To begin with we give two useful reformulations of the harmonicity condition, called the mean value properties, which make no explicit reference to differentiability.

Theorem 3.2. Let $U \subset \mathbb{R}^d$ be a domain and $u : U \to \mathbb{R}$ measurable and locally bounded. The following conditions are equivalent:
(i) $u$ is harmonic;
(ii) for any ball $B(x, r) \subset U$,
\[ u(x) = \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} u(y)\, dy ; \]
(iii) for any ball $B(x, r) \subset U$,
\[ u(x) = \frac{1}{\sigma_{x,r}(\partial B(x,r))} \int_{\partial B(x,r)} u(y)\, d\sigma_{x,r}(y) , \]
where $\sigma_{x,r}$ is the surface measure on $\partial B(x,r)$.

Remark 3.3. We use the following version of Green's identity,
(1.1) \[ \int_{\partial B(x,r)} \frac{\partial u}{\partial n}(y)\, d\sigma_{x,r}(y) = \int_{B(x,r)} \Delta u(y)\, dy , \]
where $n(y)$ is the outward normal vector of the ball at $y$. The result can also be proved by purely probabilistic means, see Exercise 8.1. ⋄

Proof. (ii) ⇒ (iii): Assume $u$ has the mean value property (ii). Define $\psi : (0, \infty) \to \mathbb{R}$ by
\[ \psi(r) = r^{1-d} \int_{\partial B(x,r)} u(y)\, d\sigma_{x,r}(y) . \]
We show that $\psi$ is constant. Indeed, for any $r > 0$,
\[ r^d\, \mathcal{L}(B(x,1))\, u(x) = \mathcal{L}(B(x,r))\, u(x) = \int_{B(x,r)} u(y)\, dy = \int_0^r \psi(s)\, s^{d-1}\, ds . \]
Differentiating with respect to $r$ gives $d\, \mathcal{L}(B(x,1))\, u(x) = \psi(r)$, so that $\psi$ is indeed constant. Now (iii) follows from the well-known identity $d\, \mathcal{L}(B(x,r))/r = \sigma_{x,r}(\partial B(x,r))$.

(iii) ⇒ (ii): Fix $s > 0$, multiply (iii) by $\sigma_{x,r}(\partial B(x,r))$ and integrate over all radii $0 < r < s$.

(iii) ⇒ (i): Suppose $g : [0, \infty) \to [0, \infty)$ is a smooth function with compact support in $[0, \varepsilon)$ and $\int g(|x|)\, dx = 1$. Integrating (iii) one obtains
\[ u(x) = \int u(y)\, g(|x - y|)\, dy \]
for all $x \in U$ and sufficiently small $\varepsilon > 0$. As the convolution of a smooth function with a bounded function produces a smooth function, we observe that $u$ is infinitely often differentiable in $U$. Now suppose that $\Delta u \ne 0$, so that there exists a small ball $B(x, \varepsilon) \subset U$ on which either $\Delta u > 0$ or $\Delta u < 0$. Using the notation from above, we obtain that
\[ 0 = \psi'(r) = r^{1-d} \int_{\partial B(x,r)} \frac{\partial u}{\partial n}(y)\, d\sigma_{x,r}(y) = r^{1-d} \int_{B(x,r)} \Delta u(y)\, dy , \]
using (1.1). This is a contradiction.

(i) ⇒ (iii): Suppose that $u$ is harmonic and $B(x, r) \subset U$. With the notation from above and (1.1), we obtain that
\[ \psi'(r) = r^{1-d} \int_{\partial B(x,r)} \frac{\partial u}{\partial n}(y)\, d\sigma_{x,r}(y) = r^{1-d} \int_{B(x,r)} \Delta u(y)\, dy = 0 . \]
Hence $\psi$ is constant, and as $\lim_{r \downarrow 0} \psi(r) = \sigma_{0,1}(\partial B(0,1))\, u(x)$, we obtain (iii).

Remark 3.4. A twice differentiable function $u : U \to \mathbb{R}$ is subharmonic if and only if
(1.2) \[ u(x) \le \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} u(y)\, dy \]
for any ball $B(x, r) \subset U$. This can be obtained in a way very similar to Theorem 3.2, see also Exercise 3.1. ⋄
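The mean value property is easy to check numerically for a concrete harmonic function. The following sketch (added here, assuming only numpy; the function, centre and radius are arbitrary choices) averages the harmonic function $u(x_1, x_2) = e^{x_1} \cos x_2$ over a circle and compares the result with the value at the centre, as in condition (iii) of Theorem 3.2:

```python
import numpy as np

# Numerical check of the spherical mean value property in d = 2 for the
# harmonic function u(x, y) = exp(x) * cos(y).
u = lambda x, y: np.exp(x) * np.cos(y)
cx, cy, r = 0.3, -0.2, 0.7
theta = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
avg = u(cx + r * np.cos(theta), cy + r * np.sin(theta)).mean()
print(avg, "vs", u(cx, cy))   # the two numbers agree to high precision
```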
An important property satisfied by harmonic, and in fact subharmonic, functions is the maximum principle, one of the key principles of analysis.

Theorem 3.5 (Maximum principle). Suppose $u : \mathbb{R}^d \to \mathbb{R}$ is a function which is subharmonic on an open connected set $U \subset \mathbb{R}^d$.
(i) If $u$ attains its maximum in $U$, then $u$ is constant.
(ii) If $u$ is continuous on $\overline U$ and $U$ is bounded, then $\max_{x \in \overline U} u(x) = \max_{x \in \partial U} u(x)$.

Remark 3.6. If $u$ is harmonic, the theorem may be applied to both $u$ and $-u$. Hence the conclusions of the theorem also hold with 'maximum' replaced by 'minimum'. ⋄

Proof. (i) Let $M$ be the maximum. Note that $V = \{x \in U : u(x) = M\}$ is relatively closed in $U$. Since $U$ is open, for any $x \in V$ there is a ball $B(x,r) \subset U$. By the mean value property of $u$, see Remark 3.4,
\[ M = u(x) \le \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} u(y)\, dy \le M . \]
Equality holds everywhere, and as $u(y) \le M$ for all $y \in B(x,r)$, we infer that $u(y) = M$ almost everywhere on $B(x,r)$. By continuity this implies $B(x,r) \subset V$. Hence $V$ is also open, and by assumption nonempty. Since $U$ is connected we get $V = U$, and therefore $u$ is constant on $U$.

(ii) Since $u$ is continuous and $\overline U$ is closed and bounded, $u$ attains a maximum on $\overline U$. By (i) the maximum has to be attained on $\partial U$.

Corollary 3.7. Suppose $u_1, u_2 : \mathbb{R}^d \to \mathbb{R}$ are functions which are harmonic on a bounded domain $U \subset \mathbb{R}^d$ and continuous on $\overline U$. If $u_1$ and $u_2$ agree on $\partial U$, then they are identical.

Proof. By Theorem 3.5(ii) applied to $u_1 - u_2$ we obtain
\[ \sup_{x \in \overline U} \big\{ u_1(x) - u_2(x) \big\} = \sup_{x \in \partial U} \big\{ u_1(x) - u_2(x) \big\} = 0 . \]
Hence $u_1(x) \le u_2(x)$ for all $x \in \overline U$. Applying the same argument to $u_2 - u_1$, one sees that $\sup_{x \in \overline U} \{u_2(x) - u_1(x)\} = 0$. Hence $u_1(x) = u_2(x)$ for all $x \in \overline U$.

We can now formulate the basic fact on which the relationship of Brownian motion and harmonic functions rests.

Theorem 3.8. Suppose $U$ is a domain, $\{B(t) : t \ge 0\}$ a Brownian motion started inside $U$, and $\tau = \tau(\partial U) = \min\{t \ge 0 : B(t) \in \partial U\}$ the first hitting time of its boundary. Let $\varphi : \partial U \to \mathbb{R}$ be measurable and such that the function $u : U \to \mathbb{R}$ with
(1.3) \[ u(x) = E_x\big[ \varphi(B(\tau))\, \mathbf{1}\{\tau < \infty\} \big], \quad \text{for every } x \in U, \]
is locally bounded. Then $u$ is a harmonic function.

Proof. The proof uses only the strong Markov property of Brownian motion and the mean value characterisation of harmonic functions. For a ball $B(x,\delta) \subset U$ let $\tilde\tau = \inf\{t > 0 : B(t) \notin B(x,\delta)\}$; then the strong Markov property implies that
\[ u(x) = E_x\Big[ E_x\big[ \varphi(B(\tau))\, \mathbf{1}\{\tau < \infty\} \mid \mathcal{F}^+(\tilde\tau) \big] \Big] = E_x\big[ u\big( B(\tilde\tau) \big) \big] = \int_{\partial B(x,\delta)} u(y)\, \varpi_{x,\delta}(dy) , \]
where $\varpi_{x,\delta}$ is the uniform distribution on the sphere $\partial B(x,\delta)$. Therefore $u$ has the mean value property, and hence it is harmonic on $U$ by Theorem 3.2.

Definition 3.9. Let $U$ be a domain in $\mathbb{R}^d$ and let $\partial U$ be its boundary. Suppose $\varphi : \partial U \to \mathbb{R}$ is a continuous function on the boundary. A continuous function $v : \overline U \to \mathbb{R}$ is a solution to the Dirichlet problem with boundary value $\varphi$ if it is harmonic on $U$ and $v(x) = \varphi(x)$ for $x \in \partial U$. ⋄

The Dirichlet problem was posed by Gauss in 1840. In fact, Gauss thought he had shown that there is always a solution, but his reasoning was wrong, and Zaremba in 1911 and Lebesgue in 1924 gave counterexamples. However, if the domain is sufficiently nice, there is a solution, as we will see below.

Definition 3.10. Let $U \subset \mathbb{R}^d$ be a domain. We say that $U$ satisfies the Poincaré cone condition at $x \in \partial U$ if there exist a cone $V$ based at $x$ with opening angle $\alpha > 0$ and $h > 0$ such that $V \cap B(x,h) \subset U^c$. ⋄

The following lemma, which is illustrated by Figure 1, will prepare us to solve the Dirichlet problem for 'nice' domains. Recall that we denote, for any open or closed set $A \subset \mathbb{R}^d$, by $\tau(A)$ the first hitting time of the set $A$ by Brownian motion, $\tau(A) = \inf\{t \ge 0 : B(t) \in A\}$.

Lemma 3.11. Let $0 < \alpha < 2\pi$, let $C_0(\alpha) \subset \mathbb{R}^d$ be a cone based at the origin with opening angle $\alpha$, and let
\[ a = \sup_{x \in B(0, 1/2)} P_x\big\{ \tau(\partial B(0,1)) < \tau(C_0(\alpha)) \big\} . \]
Then $a < 1$ and, for any positive integer $k$ and $h' > 0$, we have
\[ P_x\big\{ \tau(\partial B(z, h')) < \tau(C_z(\alpha)) \big\} \le a^k , \]
for all $x, z \in \mathbb{R}^d$ with $|x - z| < 2^{-k} h'$, where $C_z(\alpha)$ is a cone based at $z$ with opening angle $\alpha$.
Figure 1. Brownian motion avoiding a cone.

Proof. Obviously $a < 1$. If $x \in B(0, 2^{-k})$, then by the strong Markov property and scaling
\[ P_x\big\{ \tau(\partial B(0,1)) < \tau(C_0(\alpha)) \big\} \le \prod_{i=0}^{k-1} \sup_{x \in B(0, 2^{-k+i})} P_x\big\{ \tau(\partial B(0, 2^{-k+i+1})) < \tau(C_0(\alpha)) \big\} = a^k . \]
Therefore, for any positive integer $k$ and $h' > 0$, we have by scaling
\[ P_x\big\{ \tau(\partial B(z, h')) < \tau(C_z(\alpha)) \big\} \le a^k , \]
for all $x$ with $|x - z| < 2^{-k} h'$.

Theorem 3.12 (Dirichlet problem). Suppose $U \subset \mathbb{R}^d$ is a bounded domain such that every boundary point satisfies the Poincaré cone condition, and suppose $\varphi$ is a continuous function on $\partial U$. Let $\tau(\partial U) = \inf\{t > 0 : B(t) \in \partial U\}$. Then the function $u : \overline U \to \mathbb{R}$ given by
\[ u(x) = E_x\big[ \varphi\big( B(\tau(\partial U)) \big) \big], \quad \text{for } x \in \overline U, \]
is the unique continuous function harmonic on $U$ with $u(x) = \varphi(x)$ for all $x \in \partial U$.

Proof. The uniqueness claim follows from Corollary 3.7. The function $u$ is bounded and hence harmonic on $U$ by Theorem 3.8. It remains to show that the Poincaré cone condition implies the boundary condition. Fix $z \in \partial U$; then there is a cone $C_z(\alpha)$ based at $z$ with angle $\alpha > 0$ and $C_z(\alpha) \cap B(z,h) \subset U^c$. By Lemma 3.11, for any positive integer $k$ and $h' > 0$, we have
\[ P_x\big\{ \tau(\partial B(z, h')) < \tau(C_z(\alpha)) \big\} \le a^k \quad \text{for all } x \text{ with } |x - z| < 2^{-k} h' . \]
Given $\varepsilon > 0$, there is $0 < \delta \le h$ such that $|\varphi(y) - \varphi(z)| < \varepsilon$ for all $y \in \partial U$ with $|y - z| < \delta$. For all $x \in U$ with $|z - x| < 2^{-k} \delta$,
(1.4) \[ |u(x) - u(z)| = \big| E_x \varphi(B(\tau(\partial U))) - \varphi(z) \big| \le E_x \big| \varphi(B(\tau(\partial U))) - \varphi(z) \big| . \]
If the Brownian motion hits the cone $C_z(\alpha)$, which lies outside the domain $U$, before the sphere $\partial B(z,\delta)$, then $|z - B(\tau(\partial U))| < \delta$ and $\varphi(B(\tau(\partial U)))$ is close to $\varphi(z)$; the complementary event has small probability. More precisely, (1.4) is bounded above by
\[ 2 \|\varphi\|_\infty\, P_x\big\{ \tau(\partial B(z,\delta)) < \tau(C_z(\alpha)) \big\} + \varepsilon\, P_x\big\{ \tau(\partial U) < \tau(\partial B(z,\delta)) \big\} \le 2 \|\varphi\|_\infty\, a^k + \varepsilon . \]
Hence $u(x) \to \varphi(z)$ as $x \to z$, so that $u$ is continuous on $\overline U$.

Remark 3.13. If the Poincaré cone condition holds at every boundary point, we can simulate the solution of the Dirichlet problem by running many independent Brownian motions, starting in $x \in U$, until they hit the boundary of $U$, and letting $u(x)$ be the average of the values of $\varphi$ at the hitting points. ⋄
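The following code is a minimal sketch of the simulation described in Remark 3.13, added here and assuming only numpy. We take $U$ to be the unit disc, which satisfies the Poincaré cone condition everywhere, and the boundary value $\varphi(y) = y_1^2 - y_2^2$, for which the exact solution $u(x) = x_1^2 - x_2^2$ is known, so the Monte Carlo estimate can be checked directly; the step size, path count and starting point are arbitrary choices, and projecting the overshooting walk back onto the circle introduces a small discretisation bias.

```python
import numpy as np

# Monte Carlo solution of the Dirichlet problem on the unit disc with
# boundary value phi(y) = y1^2 - y2^2 (exact solution: x1^2 - x2^2).
rng = np.random.default_rng(2)
phi = lambda y: y[:, 0] ** 2 - y[:, 1] ** 2
x0 = np.array([0.4, 0.3])
dt, n_paths = 1e-4, 5_000

pos = np.tile(x0, (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
    alive[alive] = np.sum(pos[alive] ** 2, axis=1) < 1.0   # still inside?

pos /= np.linalg.norm(pos, axis=1, keepdims=True)  # project onto circle
print("estimate:", phi(pos).mean(), " exact:", x0[0] ** 2 - x0[1] ** 2)
```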
Remark 3.14. In Chapter 8 we will improve the results on the Dirichlet problem significantly and give sharp criteria for the existence of solutions. ⋄

To justify the introduction of conditions on the domain, we now give an example where the function $u$ of Theorem 3.12 fails to solve the Dirichlet problem.

Example 3.15. Take a solution $v : \overline{B(0,1)} \to \mathbb{R}$ of the Dirichlet problem on the planar disc $B(0,1)$ with boundary condition $\varphi : \partial B(0,1) \to \mathbb{R}$. Let $U = \{x \in \mathbb{R}^2 : 0 < |x| < 1\}$ be the punctured disc. We claim that $u(x) = E_x[\varphi(B(\tau(\partial U)))]$ fails to solve the Dirichlet problem on $U$ with boundary condition $\varphi : \partial B(0,1) \cup \{0\} \to \mathbb{R}$ if $\varphi(0) \ne v(0)$. Indeed, as planar Brownian motion does not hit points, by Corollary 2.23, the first hitting time $\tau$ of $\partial U = \partial B(0,1) \cup \{0\}$ agrees almost surely with the first hitting time of $\partial B(0,1)$. Then, by Theorem 3.12, $u(0) = E_0[\varphi(B(\tau))] = v(0) \ne \varphi(0)$. ⋄

We now show how the techniques we have developed so far can be used to prove a classical result from harmonic analysis, Liouville's theorem, by probabilistic means. The proof uses the reflection principle for higher-dimensional Brownian motion.

Theorem 3.16 (Liouville's theorem). Any bounded harmonic function on $\mathbb{R}^d$ is constant.

Proof. Let $u : \mathbb{R}^d \to [-M, M]$ be a harmonic function, $x, y$ two distinct points in $\mathbb{R}^d$, and $H$ the hyperplane such that the reflection in $H$ takes $x$ to $y$. Let $\{B(t) : t \ge 0\}$ be a Brownian motion started at $x$, and $\{\bar B(t) : t \ge 0\}$ its reflection in $H$. Let $\tau(H) = \min\{t : B(t) \in H\}$ and note that
(1.5) \[ \{B(t) : t \ge \tau(H)\} \stackrel{d}{=} \{\bar B(t) : t \ge \tau(H)\} . \]
Harmonicity implies that $E_x[u(B(t))] = u(x)$; decomposing this expectation according to whether $t < \tau(H)$ or $t \ge \tau(H)$ we get
\[ u(x) = E_x\big[ u(B(t))\, \mathbf{1}_{\{t < \tau(H)\}} \big] + E_x\big[ u(B(t))\, \mathbf{1}_{\{t \ge \tau(H)\}} \big] . \]
A similar equality holds for $u(y)$. Now, using (1.5),
\[ |u(x) - u(y)| = \Big| E\big[ u(B(t))\, \mathbf{1}_{\{t < \tau(H)\}} \big] - E\big[ u(\bar B(t))\, \mathbf{1}_{\{t < \tau(H)\}} \big] \Big| \le 2 M\, P\{t < \tau(H)\} \longrightarrow 0 , \]
as $t \to \infty$. Thus $u(x) = u(y)$, and since $x$ and $y$ were chosen arbitrarily, $u$ must be constant.

2. Recurrence and transience of Brownian motion

A Brownian motion $\{B(t)\colon t \ge 0\}$ in dimension $d$ is called transient if $\lim_{t \uparrow \infty} |B(t)| = \infty$ almost surely. Note that the event $\{\lim_{t \uparrow \infty} |B(t)| = \infty\}$ is a tail event and hence, by Kolmogorov's zero-one law, it must have probability zero or one. In this section we decide in which dimensions $d$ the Brownian motion is transient, and in which it is not.

This question is intimately related to the exit probabilities of Brownian motion from an annulus: suppose the motion starts at a point $x$ inside the annulus $A = \{x \in \mathbb{R}^d : r \le |x| \le R\}$, for $0 < r < R < \infty$. What is the probability that the Brownian motion hits $\partial B(0,r)$ before $\partial B(0,R)$? The answer is given in terms of harmonic functions on the annulus and is therefore closely related to the Dirichlet problem.

To find explicit solutions $u : \overline A \to \mathbb{R}$ of the Dirichlet problem on an annulus, it is reasonable first to assume that $u$ is spherically symmetric, i.e. that there is a function $\psi$ such that $u(x) = \psi(|x|^2)$. We can express derivatives of $u$ in terms of $\psi$ as $\partial_i u(x) = 2 x_i\, \psi'(|x|^2)$ and $\partial_{ii} u(x) = 4 x_i^2\, \psi''(|x|^2) + 2 \psi'(|x|^2)$. Therefore $\Delta u = 0$ means
\[ 0 = \sum_{i=1}^d \big( 4 x_i^2\, \psi''(|x|^2) + 2 \psi'(|x|^2) \big) = 4 |x|^2\, \psi''(|x|^2) + 2 d\, \psi'(|x|^2) . \]
Letting $y = |x|^2 > 0$ we can write this as
\[ \psi''(y) = -\frac{d}{2y}\, \psi'(y) . \]
This is solved by every $\psi$ satisfying $\psi'(y) = c\, y^{-d/2}$, and thus $\Delta u = 0$ holds on $\{|x| \ne 0\}$ for
(2.1) \[ u(x) = \begin{cases} |x| & \text{if } d = 1, \\ 2 \log |x| & \text{if } d = 2, \\ |x|^{2-d} & \text{if } d \ge 3. \end{cases} \]
We write $u(r)$ for the common value of $u(x)$ for all $x \in \partial B(0,r)$.
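That the functions in (2.1) are indeed harmonic away from the origin can also be verified symbolically. The following sketch, added here and assuming sympy, checks the cases $d = 2$ and $d = 3$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)

# d = 2: u(x, y) = 2 log |x| (the factor 2 is irrelevant for harmonicity)
u2 = sp.log(sp.sqrt(x**2 + y**2))
print(sp.simplify(sp.diff(u2, x, 2) + sp.diff(u2, y, 2)))           # -> 0

# d = 3: u(x, y, z) = |x|^{2-d} = 1/|x|
u3 = (x**2 + y**2 + z**2) ** sp.Rational(-1, 2)
print(sp.simplify(sp.diff(u3, x, 2) + sp.diff(u3, y, 2)
                  + sp.diff(u3, z, 2)))                              # -> 0
```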
Now define stopping times $T_r = \tau(\partial B(0,r)) = \inf\{t > 0 : |B(t)| = r\}$ for $r > 0$, and denote by $T = T_r \wedge T_R$ the first exit time from $A$. By Theorem 3.12 we have
\[ u(x) = E_x\big[ u(B(T)) \big] = u(r)\, P_x\{T_r < T_R\} + u(R)\, \big( 1 - P_x\{T_r < T_R\} \big) . \]
Solving this for the exit probability,
\[ P_x\{T_r < T_R\} = \frac{u(R) - u(x)}{u(R) - u(r)} , \]
we get an explicit solution of the exit problem.

Theorem 3.17. Suppose $\{B(t) : t \ge 0\}$ is a Brownian motion in dimension $d \ge 1$ started in a point $x$ of the annulus $A := \{x \in \mathbb{R}^d : r \le |x| \le R\}$ with radii $0 < r < R < \infty$. Then
\[ P_x\{T_r < T_R\} = \begin{cases} \dfrac{R - |x|}{R - r} & \text{if } d = 1, \\[2mm] \dfrac{\log R - \log |x|}{\log R - \log r} & \text{if } d = 2, \\[2mm] \dfrac{R^{2-d} - |x|^{2-d}}{R^{2-d} - r^{2-d}} & \text{if } d \ge 3. \end{cases} \]

Letting $R \uparrow \infty$ in Theorem 3.17 leads to the following corollary.

Corollary 3.18. For any $x \notin B(0,r)$, we have
\[ P_x\{T_r < \infty\} = \begin{cases} 1 & \text{if } d \le 2, \\[1mm] \dfrac{r^{d-2}}{|x|^{d-2}} & \text{if } d \ge 3. \end{cases} \]

We now apply this to the problem of recurrence and transience of Brownian motion in various dimensions. Generally speaking, we call a Markov process $\{X(t)\colon t \ge 0\}$ with values in $\mathbb{R}^d$
• point recurrent if, for every $x \in \mathbb{R}^d$, almost surely there is a (random) sequence $t_n \uparrow \infty$ such that $X(t_n) = x$ for all $n \in \mathbb{N}$;
• neighbourhood recurrent if, for every $x \in \mathbb{R}^d$ and $\varepsilon > 0$, almost surely there exists a (random) sequence $t_n \uparrow \infty$ such that $X(t_n) \in B(x,\varepsilon)$ for all $n \in \mathbb{N}$;
• transient if it converges to infinity almost surely.

Theorem 3.19. Brownian motion is
• point recurrent in dimension $d = 1$;
• neighbourhood recurrent, but not point recurrent, in dimension $d = 2$;
• transient in dimension $d \ge 3$.

Proof. We leave the case $d = 1$ as Exercise 3.3 and look at dimension $d = 2$. Fix $\varepsilon > 0$ and $x \in \mathbb{R}^d$. By Corollary 3.18 and shift-invariance, the stopping time $t_1 = \inf\{t > 0 : B(t) \in B(x,\varepsilon)\}$ is almost surely finite. Using the strong Markov property at time $t_1 + 1$ we see that this also applies to $t_2 = \inf\{t > t_1 + 1 : B(t) \in B(x,\varepsilon)\}$, and continuing like this we obtain a sequence of times $t_n \uparrow \infty$ such that, almost surely, $B(t_n) \in B(x,\varepsilon)$ for all $n \in \mathbb{N}$. Taking a union over a countable family of small balls which form a basis of the Euclidean topology implies that in $d = 2$ Brownian motion is neighbourhood recurrent. Recall from Corollary 2.23 that planar Brownian motion does not hit points; hence it cannot be point recurrent.

It remains to show that Brownian motion is transient in dimensions $d \ge 3$. Look at the events
\[ A_n := \big\{ |B(t)| > n \text{ for all } t \ge T_{n^3} \big\} . \]
Recall from Proposition 1.23 that $T_{n^3} < \infty$ almost surely. By the strong Markov property, for every $n \ge |x|^{1/3}$,
\[ P_x(A_n^c) = E_x\Big[ P_{B(T_{n^3})}\{T_n < \infty\} \Big] = \Big( \frac{1}{n^2} \Big)^{d-2} . \]
Note that the right-hand side is summable, and hence the Borel–Cantelli lemma shows that only finitely many of the events $A_n^c$ occur. This implies that $|B(t)|$ diverges to infinity almost surely, and hence that Brownian motion in $d \ge 3$ is transient.

Remark 3.20. Neighbourhood recurrence implies, in particular, that the path of a planar Brownian motion (running for an infinite amount of time) is dense in the plane. ⋄
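Theorem 3.17 also lends itself to a direct simulation check. The sketch below, added here and assuming only numpy (step size and sample size are arbitrary choices), starts a three-dimensional Brownian motion at $|x| = 2$ in the annulus with $r = 1$ and $R = 4$, where the predicted probability of hitting the inner sphere first is $(1/4 - 1/2)/(1/4 - 1) = 1/3$:

```python
import numpy as np

# Monte Carlo check of Theorem 3.17 in d = 3 with r = 1, R = 4, |x| = 2.
rng = np.random.default_rng(3)
r, R, dt, n_paths = 1.0, 4.0, 1e-3, 4_000

pos = np.zeros((n_paths, 3)); pos[:, 0] = 2.0
alive = np.ones(n_paths, dtype=bool)
inner = np.zeros(n_paths, dtype=bool)   # did the path hit the inner sphere?
while alive.any():
    pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 3))
    rad = np.linalg.norm(pos[alive], axis=1)
    idx = np.flatnonzero(alive)
    inner[idx[rad <= r]] = True
    alive[idx[(rad <= r) | (rad >= R)]] = False   # exited the annulus

print("estimate:", inner.mean(), " theory:", 1 / 3)
```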
We now take a more quantitative look at the transience of Brownian motion in $\mathbb{R}^d$, $d \ge 3$, and ask for the speed of escape to infinity. This material is slightly more advanced and can be skipped on first reading.

Consider a standard Brownian motion $\{B(t)\colon t \ge 0\}$ in $\mathbb{R}^d$, for $d \ge 3$, and fix a sequence $t_n \uparrow \infty$. For any $\varepsilon > 0$, by Fatou's lemma,
\[ P\big\{ |B(t_n)| < \varepsilon \sqrt{t_n} \text{ infinitely often} \big\} \ge \limsup_{n\to\infty} P\big\{ |B(t_n)| < \varepsilon \sqrt{t_n} \big\} > 0 . \]
By the Hewitt–Savage zero-one law, the probability on the left-hand side must therefore be one, whence
(2.2) \[ \liminf_{n\to\infty} \frac{|B(t_n)|}{\sqrt{t_n}} = 0, \quad \text{almost surely.} \]
This statement is refined by the Dvoretzky–Erdős test.

Theorem 3.21 (Dvoretzky–Erdős test). Let $\{B(t)\colon t \ge 0\}$ be Brownian motion in $\mathbb{R}^d$ for $d \ge 3$ and $f : (0,\infty) \to (0,\infty)$ increasing. If
\[ \int_1^\infty f(r)^{d-2}\, r^{-d/2}\, dr < \infty , \]
then $\liminf_{t \uparrow \infty} |B(t)|/f(t) = \infty$ almost surely. Conversely, if the integral diverges, then $\liminf_{t \uparrow \infty} |B(t)|/f(t) = 0$ almost surely.

For the proof we first recall two generally useful tools. The first is an easy case of the Paley–Zygmund inequality, see Exercise 3.4 for the full statement.

Lemma 3.22 (Paley–Zygmund inequality). For any nonnegative random variable $X$ with $E[X^2] < \infty$,
\[ P\{X > 0\} \ge \frac{E[X]^2}{E[X^2]} . \]

Proof. The Cauchy–Schwarz inequality gives
\[ E[X] = E\big[ X\, \mathbf{1}\{X > 0\} \big] \le E[X^2]^{1/2}\, \big( P\{X > 0\} \big)^{1/2} , \]
and the required inequality follows immediately.

The second tool is a version of the Borel–Cantelli lemma which allows some dependence between the events. It is known as the Kochen–Stone lemma, and is a consequence of the Paley–Zygmund inequality, see Exercise 3.5 or [FG97].

Lemma 3.23. Suppose $E_1, E_2, \dots$ are events with
\[ \sum_{n=1}^\infty P(E_n) = \infty \qquad \text{and} \qquad \liminf_{k\to\infty} \frac{\sum_{m=1}^k \sum_{n=1}^k P(E_n \cap E_m)}{\big( \sum_{n=1}^k P(E_n) \big)^2} < \infty . \]
Then, with positive probability, infinitely many of the events take place.

A core estimate in the proof of the Dvoretzky–Erdős test is the following lemma, which is based on the hitting probabilities of the previous paragraphs.

Lemma 3.24. There exists a constant $C_1 > 0$, depending only on the dimension $d$, such that for any $\rho > 0$ we have
\[ \sup_{x \in \mathbb{R}^d} P_x\big\{ \text{there exists } t > 1 \text{ with } |B(t)| \le \rho \big\} \le C_1\, \rho^{d-2} . \]

Proof. We use Corollary 3.18 for the probability that the motion, started at time one, hits $B(0,\rho)$, to see that
\[ P_x\big\{ \text{there exists } t > 1 \text{ with } |B(t)| \le \rho \big\} \le E_0\Big[ \Big( \frac{\rho}{|B(1) + x|} \Big)^{d-2} \Big] \le \rho^{d-2}\, \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} |y + x|^{2-d}\, e^{-|y|^2/2}\, dy . \]
By considering the integration domains $|y + x| \ge |y|$ and $|y + x| \le |y|$ separately, it is easy to see that the integral on the right is uniformly bounded in $x$.

Proof of Theorem 3.21. Define events
\[ A_n = \big\{ \text{there exists } t \in (2^n, 2^{n+1}] \text{ with } |B(t)| \le f(t) \big\} . \]
By Brownian scaling, monotonicity of $f$, and Lemma 3.24,
\[ P(A_n) \le P\big\{ \text{there exists } t > 1 \text{ with } |B(t)| \le f(2^{n+1})\, 2^{-n/2} \big\} \le C_1 \big( f(2^{n+1})\, 2^{-n/2} \big)^{d-2} . \]
Now assume that the integral converges, or equivalently, that
(2.3) \[ \sum_{n=1}^\infty \big( f(2^n)\, 2^{-n/2} \big)^{d-2} < \infty . \]
Then the Borel–Cantelli lemma and (2.3) imply that, almost surely, the set $\{t > 0 : |B(t)| \le f(t)\}$ is bounded. Since (2.3) also applies to any constant multiple of $f$ in place of $f$, it follows that $\liminf_{t \uparrow \infty} |B(t)|/f(t) = \infty$ almost surely.

For the converse, suppose that the integral diverges, whence
(2.4) \[ \sum_{n=1}^\infty \big( f(2^n)\, 2^{-n/2} \big)^{d-2} = \infty . \]
In view of (2.2), we may assume that $f(t) < \sqrt t$ for all large enough $t$; changing $f$ on a finite interval, we may assume that this inequality holds for all $t > 0$. For $\rho \in (0,1)$, consider the random variable $I_\rho = \int_1^2 \mathbf{1}\{|B(t)| \le \rho\}\, dt$. Since the density of $|B(t)|$ on the unit ball is bounded from above and also away from zero for $t \in [1,2]$, we infer that
\[ C_2\, \rho^d \le E[I_\rho] \le C_3\, \rho^d \]
for suitable constants depending only on the dimension. To complement this by an estimate of the second moment, we use the Markov property to see that
\[ E[I_\rho^2] = 2\, E\Big[ \int_1^2 \mathbf{1}\{|B(t)| \le \rho\} \int_t^2 \mathbf{1}\{|B(s)| \le \rho\}\, ds\, dt \Big] \le 2\, E\Big[ \int_1^2 \mathbf{1}\{|B(t)| \le \rho\}\; E_{B(t)} \int_0^\infty \mathbf{1}\{|\tilde B(s)| \le \rho\}\, ds\; dt \Big] , \]
where the inner expectation is with respect to a Brownian motion $\{\tilde B(s)\colon s \ge 0\}$ started in the fixed point $B(t)$, whereas the outer expectation is with respect to $B(t)$. We analyse the dependence of the inner expectation on the starting point.
Given $x \ne 0$, we let $T = \inf\{t > 0 : |B(t)| = |x|\}$ and use the strong Markov property to see that
\[ E_0 \int_0^\infty \mathbf{1}\{|B(s)| \le \rho\}\, ds \ge E_0 \int_T^\infty \mathbf{1}\{|B(s)| \le \rho\}\, ds = E_x \int_0^\infty \mathbf{1}\{|B(s)| \le \rho\}\, ds , \]
so that the expectation is maximal if the process is started at the origin. Hence we obtain that
\[ E[I_\rho^2] \le 2 C_3\, \rho^d\; E_0 \int_0^\infty \mathbf{1}\{|B(s)| \le \rho\}\, ds . \]
Moreover, by Brownian scaling, we obtain
\[ E_0 \int_0^\infty \mathbf{1}\{|B(s)| \le \rho\}\, ds = \rho^2 \int_0^\infty P\{|B(s)| \le 1\}\, ds = \rho^2 \int_{B(0,1)} dx \int_0^\infty \frac{1}{(2\pi s)^{d/2}}\, e^{-|x|^2/(2s)}\, ds = C_4\, \rho^2 , \]
where the finiteness of the constant $C_4$ is easily checked by substituting $s |x|^2$ for $s$ in the inner integral. In summary, we have $E[I_\rho^2] \le 2 C_3 C_4\, \rho^{d+2}$. By the Paley–Zygmund inequality, for a suitable constant $C_5 > 0$,
\[ P\{I_\rho > 0\} \ge \frac{E[I_\rho]^2}{E[I_\rho^2]} \ge C_5\, \rho^{d-2} . \]
Now choose $\rho = f(2^n)\, 2^{-n/2}$, which is smaller than one as $f(t) < \sqrt t$. By Brownian scaling and monotonicity of $f$, we have
\[ P(A_n) \ge P\{I_\rho > 0\} \ge C_5 \big( f(2^n)\, 2^{-n/2} \big)^{d-2} , \]
so $\sum_n P(A_n) = \infty$ by (2.4). For $m < n - 1$, the Markov property at time $2^{n-1}$, Brownian scaling and Lemma 3.24 yield that
\[ P[A_n \mid A_m] \le \sup_{x \in \mathbb{R}^d} P_x\big\{ \text{there exists } t > 1 \text{ with } |B(t)| \le f(2^{n+1})\, 2^{(1-n)/2} \big\} \le C_1 \big( f(2^{n+1})\, 2^{(1-n)/2} \big)^{d-2} . \]
From this we get that
\[ \liminf_{k\to\infty} \frac{\sum_{m=1}^k \sum_{n=1}^k P(A_n \cap A_m)}{\big( \sum_{n=1}^k P(A_n) \big)^2} \le 2 \liminf_{k\to\infty} \frac{\sum_{m=1}^k P(A_m) \sum_{n=m+2}^k P[A_n \mid A_m]}{\big( \sum_{n=1}^k P(A_n) \big)^2} \le \frac{2 C_1}{C_5} \liminf_{k\to\infty} \frac{\sum_{n=1}^k \big( f(2^{n+1})\, 2^{(1-n)/2} \big)^{d-2}}{\sum_{n=1}^k \big( f(2^n)\, 2^{-n/2} \big)^{d-2}} < \infty . \]
The Kochen–Stone lemma now yields that $P\{A_n \text{ infinitely often}\} > 0$, whence by the Hewitt–Savage zero-one law this probability is one. Thus the set $\{t > 0 : |B(t)| \le f(t)\}$ is almost surely unbounded. Since (2.4) also applies to $\varepsilon f$ in place of $f$ for any $\varepsilon > 0$, it follows that $\liminf_{t \uparrow \infty} |B(t)|/f(t) = 0$ almost surely.

3. Occupation measures and Green's functions

We now address the following question: given a bounded domain $U \subset \mathbb{R}^d$, how much time does Brownian motion spend in $U$? Our first result states that, for a linear Brownian motion running for a finite amount of time, this time is comparable to the Lebesgue measure of $U$.

Theorem 3.25. Let $\{B(s)\colon s \ge 0\}$ be a linear Brownian motion and $t > 0$. Define the occupation measure $\mu_t$ by
\[ \mu_t(A) = \int_0^t \mathbf{1}_A(B(s))\, ds \quad \text{for } A \subset \mathbb{R} \text{ Borel.} \]
Then, almost surely, $\mu_t$ is absolutely continuous with respect to the Lebesgue measure.

Proof. By Lebesgue's theorem, absolute continuity with respect to the Lebesgue measure means that, for $\mu_t$-almost every $x \in \mathbb{R}$,
\[ \liminf_{r \downarrow 0} \frac{\mu_t(B(x,r))}{\mathcal{L}(B(x,r))} < \infty . \]
To see this, we use first Fatou's lemma and then Fubini's theorem,
\[ E \int \liminf_{r \downarrow 0} \frac{\mu_t(B(x,r))}{\mathcal{L}(B(x,r))}\, d\mu_t(x) \le \liminf_{r \downarrow 0} \frac{1}{2r}\, E \int \mu_t(B(x,r))\, d\mu_t(x) = \liminf_{r \downarrow 0} \frac{1}{2r} \int_0^t \int_0^t P\big\{ |B(s_1) - B(s_2)| \le r \big\}\, ds_1\, ds_2 . \]
Using that the density of a standard normal random variable $X$ is bounded by $1/2$, we get
\[ P\big\{ |B(s_1) - B(s_2)| \le r \big\} = P\Big\{ |X| \le \tfrac{r}{\sqrt{|s_1 - s_2|}} \Big\} \le \frac{r}{\sqrt{|s_1 - s_2|}} , \]
and this implies that
\[ \liminf_{r \downarrow 0} \frac{1}{2r} \int_0^t \int_0^t P\big\{ |B(s_1) - B(s_2)| \le r \big\}\, ds_1\, ds_2 \le \frac12 \int_0^t \int_0^t \frac{ds_1\, ds_2}{\sqrt{|s_1 - s_2|}} < \infty . \]
This implies that $\mu_t$ is absolutely continuous with respect to $\mathcal{L}$.

We now turn to higher dimensions $d \ge 2$. A first simple result shows that whether the overall time spent in a bounded set is finite or not depends only on transience or recurrence of the process.

Theorem 3.26. Let $U \subset \mathbb{R}^d$ be a nonempty bounded open set and $x \in \mathbb{R}^d$ arbitrary.
• If $d = 2$, then $P_x$-almost surely, $\int_0^\infty \mathbf{1}_U(B(t))\, dt = \infty$.
• If $d \ge 3$, then $E \int_0^\infty \mathbf{1}_U(B(t))\, dt < \infty$.

Proof. As $U$ is contained in a ball and contains a ball, it suffices to prove both claims for balls, and by shifting we can even restrict to balls $U = B(0,r)$ centred in the origin.
Let us start with the first claim. We let $d \le 2$ and set $G = B(0, 2r)$. Let $T_0 = 0$ and, for all $k \ge 1$, let
\[ S_k = \inf\{t > T_{k-1} : B(t) \in U\} \qquad \text{and} \qquad T_k = \inf\{t > S_k : B(t) \notin G\} . \]
Recall that, almost surely, these stopping times are finite. From the strong Markov property we infer, for $k \ge 1$,
\[ P_x\Big\{ \int_{S_k}^{T_k} \mathbf{1}_U(B(t))\, dt \ge s \,\Big|\, \mathcal{F}^+(S_k) \Big\} = P_{B(S_k)}\Big\{ \int_0^{T_1} \mathbf{1}_U(B(t))\, dt \ge s \Big\} = E_x\Big[ P_{B(S_k)}\Big\{ \int_0^{T_1} \mathbf{1}_U(B(t))\, dt \ge s \Big\} \Big] = P_x\Big\{ \int_{S_k}^{T_k} \mathbf{1}_U(B(t))\, dt \ge s \Big\} , \]
by rotation invariance. The second expression does not depend on $k$, so that the random variables
\[ \int_{S_k}^{T_k} \mathbf{1}_U(B(t))\, dt, \qquad \text{for } k = 1, 2, \dots \]
are independent and identically distributed. As they are nonnegative and not identically zero, they have positive expectation and, by the strong law of large numbers, we infer
\[ \int_0^\infty \mathbf{1}_U(B(t))\, dt = \lim_{n\to\infty} \sum_{k=1}^n \int_{S_k}^{T_k} \mathbf{1}_U(B(t))\, dt = \infty , \]
which proves the first claim.

For the second claim, we first look at Brownian motion started in the origin and obtain, making good use of Fubini's theorem and denoting by $p : [0,\infty) \times \mathbb{R}^d \times \mathbb{R}^d \to [0,\infty)$ the transition density of Brownian motion,
\[ E_0 \int_0^\infty \mathbf{1}_{B(0,r)}(B(s))\, ds = \int_0^\infty P_0\{B(s) \in B(0,r)\}\, ds = \int_0^\infty \int_{B(0,r)} p(s, 0, y)\, dy\, ds = \int_{B(0,r)} \int_0^\infty p(s, 0, y)\, ds\, dy = \sigma(\partial B(0,1)) \int_0^r \rho^{d-1} \int_0^\infty \Big( \frac{1}{\sqrt{2\pi s}} \Big)^d e^{-\rho^2/(2s)}\, ds\, d\rho . \]
Now we can use the substitution $t = \rho^2/s$ and obtain, for a suitable constant $C(d) < \infty$,
\[ = C(d) \int_0^r \rho^{d-1}\, \rho^{2-d}\, d\rho = \frac{C(d)}{2}\, r^2 < \infty . \]
For a start in an arbitrary $x \ne 0$, we look at a Brownian motion started in $0$ and the stopping time $T$ given by the first hitting time of the sphere $\partial B(0, |x|)$. Using spherical symmetry and the strong Markov property we obtain
\[ E_x \int_0^\infty \mathbf{1}_{B(0,r)}(B(s))\, ds = E_0 \int_T^\infty \mathbf{1}_{B(0,r)}(B(s))\, ds \le E_0 \int_0^\infty \mathbf{1}_{B(0,r)}(B(s))\, ds < \infty . \]

In the case when Brownian motion is transient it is interesting to ask further for the expected time the process spends in a bounded open set. In order not to confine this discussion to the case $d \ge 3$, we introduce suitable stopping rules for Brownian motion in $d = 2$.

Definition 3.27. Suppose that $\{B(t) : 0 \le t \le T\}$ is a $d$-dimensional Brownian motion and one of the following three cases holds:
(1) $d \ge 3$ and $T = \infty$;
(2) $d \ge 2$ and $T$ is an independent exponential time with parameter $\lambda > 0$;
(3) $d \ge 2$ and $T$ is the first exit time from a bounded domain $D$ containing $0$.
We use the convention that $D = \mathbb{R}^d$ in cases (1) and (2). We refer to these three cases by saying that $\{B(t) : 0 \le t \le T\}$ is a transient Brownian motion. ⋄

Remark 3.28. For a transient Brownian motion $\{B(t) : 0 \le t \le T\}$, given $\mathcal{F}^+(t)$, on the event $\{B(t) = y,\ t < T\}$, the process $\{B(s + t) : 0 \le s \le T - t\}$ is again a transient Brownian motion of the same type, started in $y$. We do not consider Brownian motion stopped at a fixed time, because this model lacks exactly this form of the Markov property. ⋄
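For $d = 3$ the computation in the proof of Theorem 3.26 can be made explicit: evaluating the constant gives $E_0 \int_0^\infty \mathbf{1}_{B(0,r)}(B(s))\, ds = r^2$. Both this evaluation and the code below are additions to the original text; the sketch assumes only numpy, the truncation horizon and step size are arbitrary, and truncating the (almost surely finite) integral at a finite horizon biases the estimate slightly downwards.

```python
import numpy as np

# Monte Carlo estimate of the expected total time a 3d Brownian motion
# started at the origin spends in the unit ball; the exact value r^2 = 1
# follows from the computation in the proof of Theorem 3.26 with d = 3.
rng = np.random.default_rng(4)
r, dt, horizon, n_paths = 1.0, 0.01, 200.0, 2_000

occupation = np.zeros(n_paths)
pos = np.zeros((n_paths, 3))
for _ in range(int(horizon / dt)):
    pos += np.sqrt(dt) * rng.standard_normal((n_paths, 3))
    occupation += dt * (np.sum(pos ** 2, axis=1) < r ** 2)

print("estimate:", occupation.mean(), " theory (r**2):", r ** 2)
```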
Proposition 3.29. For a transient Brownian motion $\{B(t) : 0 \le t \le T\}$ there exists a transition (sub-)density $p^* : [0,\infty) \times \mathbb{R}^d \times \mathbb{R}^d \to [0,\infty)$ such that, for any $t > 0$,
\[ P_x\big\{ B(t) \in A \text{ and } t \le T \big\} = \int_A p^*(t, x, y)\, dy \quad \text{for every } A \subset \mathbb{R}^d \text{ Borel.} \]
Moreover, for all $t \ge 0$ and $\mathcal{L}$-almost every $x, y \in D$, we have $p^*(t,x,y) = p^*(t,y,x)$.

Proof. Fix $t$ throughout the proof. For the existence of the density it suffices, by the Radon–Nikodym theorem, to check that $P_x\{B(t) \in A \text{ and } t \le T\} = 0$ whenever $A$ is a Borel set of Lebesgue measure zero. This is obvious by dropping the requirement $t \le T$ and recalling that $B(t)$ is normally distributed. If $d \ge 3$ and $T = \infty$, or if $d \ge 2$ and $T$ is independent and exponentially distributed, symmetry is obvious. Hence we can now concentrate on the case $d \ge 2$ and a bounded domain $D$.

We fix a compact set $K \subset D$ and define, for every $x \in K$ and $n \in \mathbb{N}$, a measure $\mu_x^{(n)}$ on the Borel sets $A \subset D$ by
\[ \mu_x^{(n)}(A) = P_x\big\{ B\big( \tfrac{kt}{2^n} \big) \in K \text{ for all } k = 0, \dots, 2^n \text{ and } B(t) \in A \big\} . \]
Then $\mu_x^{(n)}$ has a density
\[ p_n^*(t, x, y) = \int_K \cdots \int_K \prod_{i=1}^{2^n} p\big( \tfrac{t}{2^n}, z_{i-1}, z_i \big)\, dz_1 \cdots dz_{2^n - 1} , \]
where $z_0 = x$, $z_{2^n} = y$, and $p$ is the transition density of $d$-dimensional Brownian motion. As $p$ is symmetric in the space variables, so is $p_n^*$ for every $n$. Note that $p_n^*$ is decreasing in $n$. From the monotone convergence theorem one can see that $p_K^*(t,x,y) := \lim p_n^*(t,x,y)$ is a transition subdensity of Brownian motion stopped upon leaving $K$, and the symmetry of $p_n^*$ gives $p_K^*(t,x,y) = p_K^*(t,y,x)$. Choosing an increasing sequence of compact sets exhausting $D$ and taking a monotone limit yields a symmetric version $p^*(t,x,y)$ of the transition density.

Definition 3.30. For a transient Brownian motion $\{B(t) : 0 \le t \le T\}$ we define the Green's function $G : \mathbb{R}^d \times \mathbb{R}^d \to [0,\infty]$ by
\[ G(x,y) = \int_0^\infty p^*(t,x,y)\, dt . \]
The Green's function is also called the Green kernel. Sometimes it is also called the potential kernel, but we shall reserve this term for a closely related concept, see Remark 8.20. ⋄

In probabilistic terms, $G(x, \cdot)$ is the density of the expected occupation measure of the transient Brownian motion started in $x$.

Theorem 3.31. If $f : \mathbb{R}^d \to [0,\infty]$ is measurable, then
\[ E_x \int_0^T f(B(t))\, dt = \int f(y)\, G(x,y)\, dy . \]

Proof. Fubini's theorem implies
\[ E_x \int_0^T f(B(t))\, dt = \int_0^\infty E_x\big[ f(B(t))\, \mathbf{1}\{t \le T\} \big]\, dt = \int_0^\infty \int p^*(t,x,y)\, f(y)\, dy\, dt = \int \Big( \int_0^\infty p^*(t,x,y)\, dt \Big) f(y)\, dy = \int G(x,y)\, f(y)\, dy , \]
by definition of the Green's function.

In case (1), i.e. if $T = \infty$, the Green's function can be calculated explicitly.

Theorem 3.32. If $d \ge 3$ and $T = \infty$, then
\[ G(x,y) = c(d)\, |x - y|^{2-d}, \quad \text{where } c(d) = \frac{\Gamma(d/2 - 1)}{2 \pi^{d/2}} . \]

Proof. Assume $d \ge 3$ and use the substitution $s = |x - y|^2/(2t)$ to obtain
\[ G(x,y) = \int_0^\infty \frac{1}{(2\pi t)^{d/2}}\, e^{-|x-y|^2/(2t)}\, dt = \int_0^\infty \Big( \frac{s}{\pi |x-y|^2} \Big)^{d/2} e^{-s}\, \frac{|x-y|^2}{2 s^2}\, ds = \frac{|x-y|^{2-d}}{2 \pi^{d/2}} \int_0^\infty s^{(d/2) - 2}\, e^{-s}\, ds = \frac{\Gamma(d/2 - 1)}{2 \pi^{d/2}}\, |x-y|^{2-d} , \]
where $\Gamma(x) = \int_0^\infty s^{x-1} e^{-s}\, ds$ is the Gamma function. This proves that $G$ has the given form, and the calculation above also shows that the integral is infinite if $d \le 2$.

In case (2), when Brownian motion is stopped at an independent exponential time, one can find the asymptotics of $G(x,y)$ as $x \to y$.

Theorem 3.33. If $d = 2$ and $T$ is an independent exponential time with parameter $\lambda > 0$, then
\[ G(x,y) \sim -\tfrac{1}{\pi} \log |x - y| \quad \text{for } |x - y| \downarrow 0 . \]

Proof. Note that the transition subdensity of Brownian motion stopped at an independent exponential time with parameter $\lambda > 0$ equals $p^*(t,x,y) = e^{-\lambda t}\, p(t,x,y)$, where $p$ is the transition density of (unstopped) Brownian motion. Hence
\[ G(x,y) = G_\lambda(x,y) := \int_0^\infty \frac{1}{2\pi t}\, \exp\Big\{ -\frac{|x-y|^2}{2t} - \lambda t \Big\}\, dt . \]
The kernel depends on $x, y$ only through the difference $x - y$, and a substitution shows $G_\lambda(x,y) = G_1(\sqrt\lambda\, x, \sqrt\lambda\, y)$, so we may assume without loss of generality that $\lambda = 1$. Then
\[ G(x,y) = \frac{1}{2\pi} \int_0^\infty \frac{e^{-t}}{t} \int_{|x-y|^2/(2t)}^\infty e^{-s}\, ds\, dt = \frac{1}{2\pi} \int_0^\infty e^{-s} \int_{|x-y|^2/(2s)}^\infty \frac{e^{-t}}{t}\, dt\, ds . \]
For an upper bound we use that, for $|x - y| \le 1$,
\[ \int_{|x-y|^2/(2s)}^\infty \frac{e^{-t}}{t}\, dt \le \begin{cases} \log \dfrac{2s}{|x-y|^2} + 1 & \text{if } |x-y|^2 \le 2s, \\[1mm] 1 & \text{if } |x-y|^2 > 2s. \end{cases} \]
This gives, with $\gamma := -\int_0^\infty e^{-s} \log s\, ds$ denoting Euler's constant,
\[ G(x,y) \le \frac{1}{2\pi} \big( 1 + \log 2 - \gamma - 2 \log |x-y| \big) , \]
from which the upper bound follows. For the lower bound we use
\[ \int_{|x-y|^2/(2s)}^\infty \frac{e^{-t}}{t}\, dt \ge \int_{|x-y|^2/(2s)}^1 \frac{1 - t}{t}\, dt \ge \log \frac{2s}{|x-y|^2} - 1 , \]
and thus
\[ G(x,y) \ge \frac{1}{2\pi} \big( -1 + \log 2 - \gamma - 2 \log |x-y| \big) , \]
and again this is asymptotically equivalent to $-\frac{1}{\pi} \log |x-y|$.
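The asymptotics of Theorem 3.33 can be observed numerically. The following sketch, added here and assuming only numpy, evaluates $G_1$ by quadrature on a logarithmic time grid and compares it with $-\frac1\pi \log |x - y|$; the grid bounds and resolution are arbitrary choices.

```python
import numpy as np

# Evaluate G_1(x, y) = int_0^inf e^{-t} exp(-rho^2/(2t)) / (2 pi t) dt
# via the substitution t = e^u, and compare with -(1/pi) log rho.
u = np.linspace(-40.0, 10.0, 200_001)        # log-time grid
du = u[1] - u[0]
for rho in (1e-1, 1e-2, 1e-3, 1e-4):
    integrand = np.exp(-np.exp(u) - rho**2 / (2 * np.exp(u))) / (2 * np.pi)
    G = integrand.sum() * du                 # Riemann sum approximation
    print(rho, G, -np.log(rho) / np.pi)
```

The ratio of the two columns tends to one as $\rho \downarrow 0$, while their difference stays bounded, in line with the constants appearing in the proof.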
We now explore some of the major analytic properties of the Green's function.

Theorem 3.34. In all three cases of transient Brownian motion in $d \ge 2$, the Green's function has the following properties:
(i) $G(x,y)$ is finite whenever $x \ne y$;
(ii) $G(x,y) = G(y,x)$ for all $x, y \in D$;
(iii) for any $y \in D$ the Green's function $G(\cdot, y)$ is harmonic on $D \setminus \{y\}$.

This result is easy in the case $d \ge 3$, $T = \infty$, where the Green's function is explicitly known by Theorem 3.32. We therefore focus on the case $d = 2$ and prepare the proof by two lemmas, which are of some independent interest.

Lemma 3.35. If $d = 2$, then for $x, y, z \in \mathbb{R}^2$ with $|x - z| = 1$,
\[ -\frac{1}{\pi} \log |x - y| = \int_0^\infty \big( p(s, x, y) - p(s, x, z) \big)\, ds , \]
where $p$ is the transition kernel of the (unstopped) Brownian motion.

Proof. For $|x - z| = 1$ we obtain
\[ \int_0^\infty \big( p(t,x,y) - p(t,x,z) \big)\, dt = \frac{1}{2\pi} \int_0^\infty \Big( e^{-\frac{|x-y|^2}{2t}} - e^{-\frac{1}{2t}} \Big) \frac{dt}{t} = \frac{1}{2\pi} \int_0^\infty \Big( \int_{|x-y|^2/(2t)}^{1/(2t)} e^{-s}\, ds \Big) \frac{dt}{t} , \]
and by changing the order of integration this equals
\[ \frac{1}{2\pi} \int_0^\infty e^{-s} \Big( \int_{|x-y|^2/(2s)}^{1/(2s)} \frac{dt}{t} \Big)\, ds = -\frac{1}{\pi} \log |x - y| , \]
which completes the proof.

Lemma 3.36. Let $D \subset \mathbb{R}^2$ be a bounded domain and $x, y \in D$. Then, with $u(x) = 2 \log |x|$,
\[ G(x,y) = -\frac{1}{2\pi}\, u(x - y) - E_x\Big[ -\frac{1}{2\pi}\, u\big( B(\tau) - y \big) \Big] , \]
where $\tau$ is the first exit time from $D$.

Proof. Let $f : D \to [0,\infty)$ be continuous with compact support. Picking $v \in \partial B(0,1)$, we obtain for any $x \in D$,
\[ \int G(x,y)\, f(y)\, dy = E_x \int_0^\infty \big( f(B(t)) - \mathbf{1}\{t \ge \tau\}\, f(B(t)) \big)\, dt = \int_0^\infty \int \big( p(t,x,y) - p(t,x,x+v) \big) f(y)\, dy\, dt - E_x \int_0^\infty \int \big( p(t, B(\tau), y) - p(t, B(\tau), B(\tau) + v) \big) f(y)\, dy\, dt . \]
This implies the statement for every $x \in D$ and almost every $y \in D$. It remains to show that we can choose a version of the density $p^*(t, \cdot, \cdot)$ such that the statement holds for every $x, y \in D$. Indeed, let
\[ p^*(t,x,y) = p(t,x,y) - E_x\big[ p(t - \tau, B(\tau), y)\, \mathbf{1}\{\tau < t\} \big] . \]
Integrating over all $y \in A$ gives that this is indeed a transition density for the stopped process. Moreover, adding and subtracting $p(t,x,x+v) = p(t, B(\tau), B(\tau) + v)$ on the right-hand side and integrating over $t \in (0,\infty)$ yields the statement for the Green's function associated to this particular choice of transition kernel $p^*$, which therefore holds for all $x, y \in D$.

Proof of Theorem 3.34. We first look at properties (i), (ii) and the continuity of $G(\cdot, y)$ on $D \setminus \{y\}$. These three properties are obvious in the case $d \ge 3$, $T = \infty$, by the explicit form of the Green's function uncovered in Theorem 3.32. If Brownian motion is stopped at an independent exponential time, it is easy to see from $p^*(t,x,y) = e^{-\lambda t}\, p(t,x,y)$ that the Green's function is finite everywhere except on the diagonal $\Delta = \{(x,y) : x = y\}$, and symmetric; continuity is easy to check using dominated convergence. We can therefore focus on the case where the Brownian motion is stopped upon leaving a bounded domain $D$.

First let $d = 2$. Lemma 3.35 gives, for $x \ne y$, that $G(x,y) < \infty$. However, we have $E_x\big[ -\frac{1}{2\pi} u(B(\tau) - x) \big] < \infty$, hence $G(x,x) = \infty$ by Lemma 3.36. If $x \in D$, then $G(x, \cdot)$ is continuous on $D \setminus \{x\}$, because the right-hand side of the equation in Lemma 3.36 is continuous. Similarly, if $y \in D$, the right-hand side is continuous in $x$ on $D \setminus \{y\}$, as $E_x[u(B(\tau) - y)]$ is harmonic in $x$. Hence $G(\cdot, x)$ is also continuous on $D \setminus \{x\}$. The symmetry follows from the almost-everywhere symmetry of $p^*(t, \cdot, \cdot)$ together with the continuity.

Next, if $d \ge 3$, we can carry out the same proof replacing $-\frac{1}{2\pi} u(x - y)$ by $\ell(x,y) = c(d)\, |x - y|^{2-d}$. In fact the arguments become significantly easier because
\[ \ell(x,y) = \int_0^\infty p(t,x,y)\, dt , \quad \text{for all } x, y \in \mathbb{R}^d , \]
and there is no need to subtract a 'renormalisation' term.

Finally, we show property (iii) in all cases.
Define
\[ G_\varepsilon(x,y) := \int_{B(y,\varepsilon)} G(x,z)\, dz , \quad \text{for } B(y,\varepsilon) \subset D \text{ and } x \in D . \]
We prove that $G_\varepsilon(\cdot, y)$ satisfies the mean value property on $D \setminus B(y,\varepsilon)$, i.e.
\[ G_\varepsilon(x,y) = \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} G_\varepsilon(z,y)\, dz , \quad \text{for } 0 < r < |x - y| - \varepsilon . \]
The result follows from this since, using the continuity of $G$, for $x, y \in D$ with $|x - y| > r$,
\[ G(x,y) = \lim_{\varepsilon \downarrow 0} \frac{G_\varepsilon(x,y)}{\mathcal{L}(B(y,\varepsilon))} = \lim_{\varepsilon \downarrow 0} \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} \frac{G_\varepsilon(z,y)}{\mathcal{L}(B(y,\varepsilon))}\, dz = \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} G(z,y)\, dz , \]
where the last equality follows from the bounded convergence theorem.

Fix $x \ne y$ in $D$, let $0 < r < |x - y|$ and let $\varepsilon < |x - y| - r$. Denote $\tau = T \wedge \inf\{t : |B(t) - x| = r\}$. As a Brownian motion started in $x$ spends no time in $B(y,\varepsilon)$ before time $\tau$, we can write
\[ G_\varepsilon(x,y) = E_x \int_\tau^T \mathbf{1}\{B(t) \in B(y,\varepsilon)\}\, dt = E_x\Big[ E_{B(\tau)} \int_0^T \mathbf{1}\{\tilde B(t) \in B(y,\varepsilon)\}\, dt \Big] , \]
where the inner expectation is with respect to a Brownian motion $\{\tilde B(t)\colon t \ge 0\}$ started in the fixed point $B(\tau)$, whereas the outer expectation is with respect to $B(\tau)$. By the strong Markov property and since, given $\tau < T$, the random variable $B(\tau)$ is uniformly distributed on $\partial B(x,r)$ by rotational symmetry, we conclude
\[ G_\varepsilon(x,y) = E_x\big[ G_\varepsilon(B(\tau), y) \big] = \int_{\partial B(x,r)} G_\varepsilon(z,y)\, d\varpi_{x,r}(z) , \]
so that $G_\varepsilon$ satisfies the mean value property and hence is harmonic by Theorem 3.2.

Remark 3.37. Suppose $d \ge 3$ and $T = \infty$. Let $K$ be a compact set and $\mu$ a measure on $\partial K$. Then
\[ u(x) = \int_{\partial K} G(x,y)\, d\mu(y), \quad \text{for } x \in K^c, \]
is a harmonic function on $K^c$. This can be verified easily from the harmonicity of $G(\cdot, y)$ and the mean value property. Physically, $u(x)$ is the electrostatic (or Newtonian) potential at $x$ resulting from a charge represented by $\mu$. In particular, the Green function $G(\cdot, y)$ can be interpreted as the electrostatic potential induced by a unit charge in the point $y$. An interesting question is whether every positive harmonic function on $K^c$ can be represented in such a way by a suitable $\mu$. We will come back to this question in Chapter 8. ⋄

4. The harmonic measure

A particularly appealing way of writing the harmonic function $u$ in Theorem 3.8 is in terms of the harmonic measure on $\partial U$.

Definition 3.38. Let $\{B(t)\colon t \ge 0\}$ be a $d$-dimensional Brownian motion, $d \ge 2$, started in some point $x$, and fix a closed set $A \subset \mathbb{R}^d$. Define a measure $\mu_A(x, \cdot)$ by
\[ \mu_A(x, B) = P_x\big\{ B(\tau) \in B,\ \tau < \infty \big\}, \quad \text{where } \tau = \inf\{t \ge 0 : B(t) \in A\}, \]
for $B \subset A$ Borel. In other words, $\mu_A(x, \cdot)$ is the distribution of the first hitting point of $A$, and the total mass of the measure is the probability that a Brownian motion started in $x$ ever hits the set $A$. The harmonic measure is supported by $\partial A$. ⋄

The following corollary is only an equivalent reformulation of Theorem 3.12.

Corollary 3.39. If the Poincaré cone condition is satisfied at every point $x \in \partial U$ on the boundary of a bounded domain $U$, then the solution of the Dirichlet problem with boundary condition $\varphi : \partial U \to \mathbb{R}$ can be written as
\[ u(x) = \int \varphi(y)\, \mu_{\partial U}(x, dy) \quad \text{for all } x \in U . \]

Remark 3.40. Of course, the harmonicity of $u$ does not rely on the Poincaré cone condition. In fact, by Theorem 3.8, for any compact $A \subset \mathbb{R}^d$ and Borel set $B \subset \partial A$, the function $x \mapsto \mu_A(x, B)$ is harmonic on $A^c$. ⋄

Besides its value in the discussion of the Dirichlet problem, the harmonic measure is also interesting in its own right, as it intuitively weighs the points of $A$ according to their accessibility from $x$. We now show that the measures $\mu_A(x, \cdot)$ for different values of $x \in A^c$ are mutually absolutely continuous. This is a form of the famous Harnack principle.

Theorem 3.41 (Harnack principle). Suppose $A \subset \mathbb{R}^d$ is compact and $x, y$ are in the unbounded component of $A^c$. Then $\mu_A(x, \cdot) \ll \mu_A(y, \cdot)$.
Proof. Given $B \subset \partial A$ Borel, the mapping $x \mapsto \mu_A(x, B)$ is, by Remark 3.40, a harmonic function on $A^c$. If it takes the value zero for some $y \in A^c$, then $y$ is a minimum, and the maximum principle, Theorem 3.5, together with the subsequent remark, implies that $\mu_A(x, B) = 0$ for all $x \in A^c$, as required.

The Harnack principle allows us to formulate the following definition.

Definition 3.42. A compact set $A$ is called nonpolar for Brownian motion, or simply nonpolar, if $\mu_A(x, A) > 0$ for one (and hence for all) $x \in A^c$. Otherwise, the set $A$ is called polar for Brownian motion. ⋄

We now give an explicit formula for the harmonic measure on the unit sphere $\partial B(0,1)$. Note that if $x = 0$, then the distribution of $B(\tau)$ is, by symmetry, the uniform distribution; if $x$ is another point, it is an interesting problem to determine this distribution in terms of a probability density.

Theorem 3.43 (Poisson's formula). Suppose that $A \subset \partial B(0,1)$ is a Borel subset of the unit sphere, for $d \ge 2$, and let $\varpi$ denote the uniform distribution on the unit sphere. Then, for all $x \notin \partial B(0,1)$,
\[ \mu_{\partial B(0,1)}(x, A) = \int_A \frac{\big| 1 - |x|^2 \big|}{|x - y|^d}\, d\varpi(y) . \]

Remark 3.44. The density appearing in the theorem is usually called the Poisson kernel and appears frequently in potential theory. ⋄

Proof. We start by looking at the case $|x| < 1$. To prove the theorem we indeed show that for every bounded measurable $f : \mathbb{R}^d \to \mathbb{R}$ we have
(4.1) \[ E_x[f(B(\tau))] = \int_{\partial B(0,1)} \frac{1 - |x|^2}{|x - y|^d}\, f(y)\, d\varpi(y) , \]
which on the one hand implies the formula by choosing indicator functions; on the other hand, by the monotone class theorem, see e.g. [Du95, Chapter 5, (1.5)], it suffices to show this for smooth functions.

To prove (4.1) we recall Theorem 3.12, which tells us that we just have to show that the right-hand side, as a function of $x \in B(0,1)$, defines a solution of the Dirichlet problem on $B(0,1)$ with boundary value $f$. To check this, one first verifies that $\frac{1 - |x|^2}{|x-y|^d}$ is harmonic in $x$ on $B(0,1)$, which is a straightforward calculation, and then argues that it is allowed to differentiate twice under the integral sign.

To check the boundary condition, first look at the case $f \equiv 1$, in which case we have to show that, for all $x \in B(0,1)$,
\[ I(x) := \int_{\partial B(0,1)} \frac{1 - |x|^2}{|x - y|^d}\, d\varpi(y) \equiv 1 . \]
We use Theorem 3.12 to show this. Indeed, observe that $I(0) = 1$, that $I$ is invariant under rotation, and that $\Delta I = 0$ on $B(0,1)$ by the first part. Now let $x \in B(0,1)$ with $|x| = r < 1$ and let $\tau := \inf\{t : |B(t)| > r\}$. By Theorem 3.12,
\[ I(0) = E_0\big[ I(B(\tau)) \big] = I(x) , \]
using rotation invariance in the second step. Hence $I \equiv 1$.

Now we show that the right-hand side of (4.1) can be extended continuously to every point $y \in \partial B(0,1)$ by $f(y)$. We write $D_0$ for $\partial B(0,1)$ with the $\delta$-neighbourhood $B(y,\delta)$ removed and $D_1 = \partial B(0,1) \setminus D_0$. Using that $I \equiv 1$, we have for all $x \in B(y, \delta/2) \cap B(0,1)$,
\[ \Big| f(y) - \int_{\partial B(0,1)} \frac{1 - |x|^2}{|x - z|^d}\, f(z)\, d\varpi(z) \Big| = \Big| \int_{\partial B(0,1)} \frac{1 - |x|^2}{|x - z|^d}\, \big( f(y) - f(z) \big)\, d\varpi(z) \Big| \le 2 \|f\|_\infty \int_{D_0} \frac{1 - |x|^2}{|x - z|^d}\, d\varpi(z) + \sup_{z \in D_1} |f(y) - f(z)| . \]
For fixed $\delta > 0$ the first term goes to $0$ as $x \to y$ by dominated convergence, whereas the second can be made arbitrarily small by choice of $\delta$. This completes the proof for $x \in B(0,1)$.

If $|x| > 1$ we use inversion at the unit circle to transfer the problem to the case studied before. By a straightforward calculation, one can check that a function $u : \overline{B(0,1)}^{\,c} \to \mathbb{R}$ is harmonic if and only if its inversion $u^* : B(0,1) \setminus \{0\} \to \mathbb{R}$, defined by $u^*\big( \frac{x}{|x|^2} \big) = u(x)\, |x|^{d-2}$, is harmonic. Now suppose that $f : \partial B(0,1) \to \mathbb{R}$ is a smooth function on the boundary.
Then define a harmonic function $u : \overline{B(0,1)}^{\,c} \to \mathbb{R}$ by
\[ u(x) = E_x\big[ f\big( B(\tau(\partial B(0,1))) \big)\, \mathbf{1}\{\tau(\partial B(0,1)) < \infty\} \big] . \]
Then $u^* : B(0,1) \setminus \{0\} \to \mathbb{R}$ is bounded and harmonic. By Exercise 3.8 we can extend it to the origin, so that the extension is harmonic on $B(0,1)$; this extension is obviously given by $u^*(0) = \int f\, d\varpi$. The harmonic extension is continuous on the closure, with boundary values given by $f$. Hence it agrees with the function of the first part, and $u = u^{**}$ must be its inversion, which gives the claimed formula.

We now fix a compact nonpolar set $A \subset \mathbb{R}^d$ and look at the harmonic measure $\mu_A(x, \cdot)$ as $x \to \infty$. The first task is to make sure that this object is well defined.

Theorem 3.45. Let $A \subset \mathbb{R}^d$ be a compact, nonpolar set. Then there exists a probability measure $\mu_A$ on $A$, given by
\[ \mu_A(B) = \lim_{x \to \infty} P_x\big\{ B(\tau(A)) \in B \mid \tau(A) < \infty \big\} . \]
This measure is called the harmonic measure (from infinity).

Remark 3.46. The harmonic measure weighs the points of $A$ according to their accessibility from infinity. It is naturally supported by the outer boundary of $A$, which is the boundary of the infinite connected component of $\mathbb{R}^d \setminus A$. ⋄

The proof is prepared by a lemma which is yet another example of how the strong Markov property can be exploited to great effect.

Lemma 3.47. For $A \subset \mathbb{R}^d$ compact and nonpolar and every $\varepsilon > 0$, there exists a large $R > 0$ such that, for all $x \in \partial B(0,R)$ and any hyperplane $H \subset \mathbb{R}^d$ containing the origin,
\[ P_x\big\{ \tau(A) < \tau(H) \big\} < \varepsilon\, P_x\big\{ \tau(A) < \infty \big\} . \]

Proof. Pick a radius $r > 0$ such that $A \subset B(0,r)$, and note from Remark 3.40 that $x \mapsto P_x\{\tau(A) < \infty\}$ is harmonic on $A^c$. Therefore the minimum of this function on the compact set $\partial B(0,r)$ is positive, say $\delta > 0$. It therefore suffices to show that
\[ P_x\big\{ \tau(B(0,r)) < \tau(H) \big\} < \varepsilon \delta\, P_x\big\{ \tau(B(0,r)) < \infty \big\} . \]
Now there exists an absolute constant $q < 1$ such that, for any $x \in \partial B(0,2)$ and hyperplane $H$,
\[ P_x\big\{ \tau(B(0,1)) < \tau(H) \big\} < q\, P_x\big\{ \tau(B(0,1)) < \infty \big\} . \]
Let $k$ be large enough to ensure that $q^k < \varepsilon \delta$. Then, by the strong Markov property and Brownian scaling,
\[ \sup_{x \in \partial B(0, r 2^k)} P_x\big\{ \tau(B(0,r)) < \tau(H) \big\} \le \sup_{x \in \partial B(0, r 2^k)} E_x\Big[ \mathbf{1}\{\tau(B(0, r 2^{k-1})) < \tau(H)\}\; P_{B(\tau(B(0, r 2^{k-1})))}\big\{ \tau(B(0,r)) < \tau(H) \big\} \Big] \le q \sup_{x \in \partial B(0, r 2^k)} P_x\big\{ \tau(B(0, r 2^{k-1})) < \infty \big\} \sup_{x \in \partial B(0, r 2^{k-1})} P_x\big\{ \tau(B(0,r)) < \tau(H) \big\} . \]
Iterating this and letting $R = r 2^k$ gives
\[ \sup_{x \in \partial B(0,R)} P_x\big\{ \tau(B(0,r)) < \tau(H) \big\} \le q^k \sup_{x \in \partial B(0,R)} P_x\big\{ \tau(B(0,r)) < \infty \big\} , \]
as required to complete the proof.

Proof of Theorem 3.45. Let $x, y \in \partial B(0,r)$ and let $H$ be the hyperplane through the origin which is orthogonal to $x - y$. If $\{B(t) : t \ge 0\}$ is a Brownian motion started in $x$, define $\{\bar B(t) : t \ge 0\}$, the Brownian motion started in $y$ obtained by letting $\bar B(t)$ be the reflection of $B(t)$ in $H$ for all times $t \le \tau(H)$, and $\bar B(t) = B(t)$ for all $t \ge \tau(H)$. This coupling gives, for every $\varepsilon > 0$ and sufficiently large $r$,
\[ \big| \mu_A(x, B) - \mu_A(y, B) \big| \le P_x\big\{ \tau(A) < \tau(H) \big\} \le \varepsilon\, \mu_A(x, A) , \]
using Lemma 3.47 for the last inequality. In particular, we get $|\mu_A(x,A) - \mu_A(y,A)| \le \varepsilon\, \mu_A(x,A)$.

Next, let $|z| > r$ and apply the strong Markov property to obtain
\[ \frac{\mu_A(x,B)}{\mu_A(x,A)} - \frac{\mu_A(z,B)}{\mu_A(z,A)} = \int \Big( \frac{\mu_A(x,B)}{\mu_{B(0,r)}(z, B(0,r))\, \mu_A(x,A)} - \frac{\mu_A(y,B)}{\mu_A(z,A)} \Big)\, \mu_{B(0,r)}(z, dy) = \frac{1}{\mu_A(z,A)} \int \Big( \frac{\mu_A(x,B)\, \mu_A(z,A)}{\mu_{B(0,r)}(z, B(0,r))\, \mu_A(x,A)} - \mu_A(y,B) \Big)\, \mu_{B(0,r)}(z, dy) . \]
We note that $\mu_A(z,A) = \int \mu_A(y,A)\, \mu_{B(0,r)}(z, dy) \le (1 + \varepsilon)\, \mu_{B(0,r)}(z, B(0,r))\, \mu_A(x,A)$, which leads to the estimate
\[ \frac{\mu_A(x,B)}{\mu_A(x,A)} - \frac{\mu_A(z,B)}{\mu_A(z,A)} \le \varepsilon + \varepsilon (1 + \varepsilon) , \]
and the same estimate can be performed with the roles of $x$ and $z$ reversed. As $\varepsilon > 0$ was arbitrary, this implies that $\mu_A(x,B)/\mu_A(x,A)$ converges as $x \to \infty$.
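Poisson's formula (Theorem 3.43) can likewise be checked by simulation. The sketch below, added here and assuming only numpy (step size and path count are arbitrary choices), samples the exit point of planar Brownian motion from the unit disc started at $x = (1/2, 0)$ and compares the frequency of exit angles in two arcs with the integral of the Poisson kernel:

```python
import numpy as np

# Exit point of 2d Brownian motion from the unit disc, started at
# x = (1/2, 0); by Theorem 3.43 its density with respect to the uniform
# distribution on the circle is (1 - |x|^2)/|x - y|^2.
rng = np.random.default_rng(5)
dt, n_paths = 1e-4, 10_000
pos = np.tile(np.array([0.5, 0.0]), (n_paths, 1))
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    pos[alive] += np.sqrt(dt) * rng.standard_normal((alive.sum(), 2))
    alive[alive] = np.sum(pos[alive] ** 2, axis=1) < 1.0

angles = np.arctan2(pos[:, 1], pos[:, 0])          # exit angles
def kernel(theta):                                 # Poisson kernel, d = 2
    return (1 - 0.25) / (2 * np.pi * ((np.cos(theta) - 0.5) ** 2
                                      + np.sin(theta) ** 2))
for lo, hi in [(-np.pi / 4, np.pi / 4), (3 * np.pi / 4, np.pi)]:
    emp = np.mean((angles >= lo) & (angles <= hi))
    grid = np.linspace(lo, hi, 10_001)
    thy = np.mean(kernel(grid)) * (hi - lo)        # Riemann sum of density
    print(f"arc ({lo:+.2f},{hi:+.2f}): empirical {emp:.3f}, theory {thy:.3f}")
```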
Example 3.48. For any ball $B(x,r)$ we have $\mu_{B(x,r)} = \varpi_{x,r}$, the uniform distribution on $\partial B(x,r)$. Indeed,
\[ \varpi_{x,r}(\cdot) = c(R) \int_{\partial B(x,R)} \mu_{B(x,r)}(y, \cdot)\, d\varpi_{x,R}(y) , \]
for all $R > r$ and a suitable constant $c(R)$, as the two balls are concentric and both sides of the equation are rotationally invariant measures on the sphere $\partial B(x,r)$. Letting $R \uparrow \infty$, we obtain from Theorem 3.45 that $\varpi_{x,r} = \mu_{B(x,r)}$. ⋄

The following surprising proposition shows that the harmonic measure from infinity can also be obtained without this limiting procedure.

Theorem 3.49. Let $A \subset \mathbb{R}^d$ be compact, suppose $B(x,r) \supset A$, and let $\varpi_{x,r}$ be the uniform distribution on $\partial B(x,r)$. Then we have, for any Borel set $B \subset A$,
\[ \mu_A(B) = \frac{\int \mu_A(a, B)\, \varpi_{x,r}(da)}{\int \mu_A(a, A)\, \varpi_{x,r}(da)} . \]

Remark 3.50. The surprising fact here is that the right-hand side does not depend on the choice of the ball $B(x,r)$. ⋄

The crucial observation behind this result is that, starting a Brownian motion in a uniformly chosen point on the boundary of a sphere, the first hitting point of any ball inside that sphere, if it exists, is again uniformly distributed, see Figure 2.

Figure 2. Starting Brownian motion uniformly on the big circle, the distribution of the first hitting point on the small circle is also uniform.

Lemma 3.51. Let $B(x,r) \subset B(y,s)$ and $B \subset \partial B(x,r)$ Borel. Then
\[ \frac{\int \mu_{B(x,r)}(a, B)\, \varpi_{y,s}(da)}{\int \mu_{B(x,r)}(a, B(x,r))\, \varpi_{y,s}(da)} = \varpi_{x,r}(B) . \]

Proof. By Example 3.48 we have $\varpi_{y,s} = \mu_{B(y,s)}$ and hence, for the normalisation constant $c(R) = 1 / \int \mu_{B(y,s)}(a, B(y,s))\, \varpi_{x,R}(da)$, we have
\[ \varpi_{y,s}(\cdot) = \lim_{R \uparrow \infty} c(R) \int \mu_{B(y,s)}(a, \cdot)\, \varpi_{x,R}(da) . \]
Hence, for any $B \subset \partial B(x,r)$ Borel, using the Markov property in the second step,
\[ \int \mu_{B(x,r)}(a, B)\, \varpi_{y,s}(da) = \lim_{R \uparrow \infty} c(R) \int \int \mu_{B(x,r)}(a, B)\, \mu_{B(y,s)}(b, da)\, \varpi_{x,R}(db) = \lim_{R \uparrow \infty} c(R) \int \mu_{B(x,r)}(b, B)\, \varpi_{x,R}(db) = \lambda\, \varpi_{x,r}(B) , \]
for some constant $\lambda$ not depending on $B$, because $B(x,R)$ and $B(x,r)$ are concentric. By substituting $B = B(x,r)$ into the equation, we see that $\lambda$ must be equal to the stated constant.

Proof of Theorem 3.49. Assume that $B(x,r)$ and $B(y,s)$ are two balls containing $A$. We may then find a ball $B(z,t)$ containing both of these balls. Using Lemma 3.51 and the strong Markov property applied to the first hitting time of $B(x,r)$, we obtain, for any Borel set $B \subset A$,
\[ \int \mu_A(a, B)\, \varpi_{x,r}(da) = c_1 \int \int \mu_A(a, B)\, \mu_{B(x,r)}(b, da)\, \varpi_{z,t}(db) = c_1 \int \mu_A(b, B)\, \varpi_{z,t}(db) = c_2 \int \int \mu_A(a, B)\, \mu_{B(y,s)}(b, da)\, \varpi_{z,t}(db) = c_2 \int \mu_A(a, B)\, \varpi_{y,s}(da) , \]
for suitable constants $c_1, c_2$ depending only on the choice of the balls. Choosing $B = A$ identifies the normalisation constant as
\[ c_2 = \frac{\int \mu_A(a, A)\, \varpi_{x,r}(da)}{\int \mu_A(a, A)\, \varpi_{y,s}(da)} , \]
and this shows that the right-hand side in Theorem 3.49 is independent of the choice of the enclosing ball. It therefore must stay constant as the radius of the ball goes to infinity, and by Theorem 3.45 this constant value is $\mu_A(B)$, completing the proof.

Exercises

Exercise 3.1. Show that, if $u : U \to \mathbb{R}$ is subharmonic, then
\[ u(x) \le \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} u(y)\, dy \]
for any ball $B(x,r) \subset U$. Conversely, show that any twice differentiable function $u : U \to \mathbb{R}$ satisfying (1.2) is subharmonic. Also give an example of a discontinuous function $u$ satisfying (1.2).

Exercise 3.2 (∗). Suppose $u : B(x,r) \to \mathbb{R}$ is harmonic and bounded by $M$. Show that the $k$th order partial derivatives are bounded by a constant multiple of $M r^{-k}$.

Exercise 3.3. Prove the case $d = 1$ of Theorem 3.19.

Exercise 3.4 (∗). Prove the strong form of the Paley–Zygmund inequality: for any nonnegative random variable $X$ with $E[X^2] < \infty$ and $\lambda \in [0,1)$,
\[ P\big\{ X > \lambda\, E[X] \big\} \ge (1 - \lambda)^2\, \frac{E[X]^2}{E[X^2]} . \]

Exercise 3.5. Prove the Kochen–Stone lemma: suppose $E_1, E_2, \dots$
Exercises

Exercise 3.1. Show that, if $u \colon U \to \mathbb{R}$ is subharmonic, then
\[ u(x) \le \frac{1}{\mathcal{L}(B(x,r))} \int_{B(x,r)} u(y)\, dy \]
for any ball $B(x,r) \subset U$. Conversely, show that any twice differentiable function $u \colon U \to \mathbb{R}$ satisfying (1.2) is subharmonic. Also give an example of a discontinuous function $u$ satisfying (1.2).

Exercise 3.2 (∗). Suppose $u \colon B(x,r) \to \mathbb{R}$ is harmonic and bounded by $M$. Show that the $k$th order partial derivatives are bounded by a constant multiple of $M r^{-k}$.

Exercise 3.3. Prove the case $d = 1$ in Theorem 3.19.

Exercise 3.4 (∗). Prove the strong form of the Paley-Zygmund inequality: For any nonnegative random variable $X$ with $\mathbb{E}[X^2] < \infty$ and $\lambda \in [0,1)$,
\[ \mathbb{P}\bigl\{ X > \lambda\, \mathbb{E}[X] \bigr\} \ge (1-\lambda)^2\, \frac{\mathbb{E}[X]^2}{\mathbb{E}[X^2]}. \]

Exercise 3.5. Prove the Kochen-Stone lemma: Suppose $E_1, E_2, \ldots$ are events with
\[ \sum_{n=1}^{\infty} \mathbb{P}(E_n) = \infty \quad \text{and} \quad \liminf_{k \to \infty} \frac{\sum_{m=1}^{k} \sum_{n=1}^{k} \mathbb{P}(E_n \cap E_m)}{\bigl( \sum_{n=1}^{k} \mathbb{P}(E_n) \bigr)^2} < \infty. \]
Then, with positive probability, infinitely many of the events take place.
Hint. Apply the Paley-Zygmund inequality to the partial sums $X_k = \sum_{n=1}^{k} \mathbf{1}_{E_n}$.

Exercise 3.6 (∗). Suppose that $u$ is a radial harmonic function on the annulus $D = \{x \in \mathbb{R}^d : r < |x| < R\}$, where radial means $u(x) = \tilde{u}(|x|)$ for some function $\tilde{u} \colon (r,R) \to \mathbb{R}$ and all $x$. Suppose further that $u$ is continuous on $\bar{D}$. Show that,
• if $d \ge 3$, there exist constants $a$ and $b$ such that $u(x) = a + b|x|^{2-d}$;
• if $d = 2$, there exist constants $a$ and $b$ such that $u(x) = a + b \log|x|$.

Exercise 3.7 (∗). Show that any positive harmonic function on $\mathbb{R}^d$ is constant.

Exercise 3.8 (∗). Let $D \subset \mathbb{R}^d$ be a domain and $x \in D$. Suppose $u \colon D \setminus \{x\} \to \mathbb{R}$ is bounded and harmonic. Show that there exists a unique harmonic continuation $u \colon D \to \mathbb{R}$.

Exercise 3.9. Let $f \colon (0,1) \to (0,\infty)$ with $t \mapsto f(t)/t$ decreasing. Then
\[ \int_0^1 f(r)^{d-2}\, r^{-d/2}\, dr < \infty \]
implies that $\liminf_{t \downarrow 0} |B(t)|/f(t) = \infty$ almost surely. Conversely, if the integral diverges, then $\liminf_{t \downarrow 0} |B(t)|/f(t) = 0$ almost surely.

Exercise 3.10. Show that, if $d \ge 3$ and $T$ is an independent exponential time with parameter $\lambda > 0$, then
\[ G(x,y) \sim c(d)\, |x-y|^{2-d} \quad \text{for } |x-y| \downarrow 0, \]
where $c(d)$ is as in Theorem 3.32.

Exercise 3.11. Show that
• if $d \ge 2$ and $T$ is an exponential time with parameter $\lambda > 0$, then $G(\,\cdot\,, y)$ is subharmonic on $\mathbb{R}^d \setminus \{y\}$;
• if $d \ge 2$ and $T$ is the first exit time from the domain $D$, then $G(\,\cdot\,, y)$ is harmonic on $D \setminus \{y\}$.

Exercise 3.12 (∗). Show that if $D$ is a bounded domain, then $G$ is continuous on $D \times D \setminus \Delta$, where $\Delta = \{(x,x) : x \in D\}$ is the diagonal.

Exercise 3.13 (∗). Find the Green's function for the planar Brownian motion stopped when leaving the domain $B(0,r)$.

Exercise 3.14 (∗). Suppose $x, y \notin B(0,r)$ and $A \subset B(0,r)$ is a compact, nonpolar set. Show that $\mu_A(x, \,\cdot\,)$ and $\mu_A(y, \,\cdot\,)$ are mutually absolutely continuous with a density bounded away from zero and infinity.

Notes and Comments

Gauss discusses the Dirichlet problem in [Ga40] in a paper on electrostatics. Examples which show that a solution may not exist for certain domains were given by Zaremba [Za11] and Lebesgue [Le24]. Zaremba's example is the punctured disk we discuss in Example 3.15, and Lebesgue's example is the thorn, which we will discuss in Example 8.32. For domains with smooth boundary the problem was solved by Poincaré [Po90]. Bachelier [Ba00, Ba01] was the first to note a connection between Brownian motion and the Laplace operator. The first probabilistic approaches to the Dirichlet problem were made by Phillips and Wiener [PW23] and Courant, Friedrichs and Lewy [CFL28]. These proofs used probability in a discrete setting and approximation. The treatment of the Dirichlet problem using Brownian motion and the probabilistic definition of the harmonic measure are due to the pioneering work of Kakutani [Ka44a, Ka44b, Ka45]. A current survey of probabilistic methods in analysis can be found in the book of Bass [Ba95], see also Port and Stone [PS78] for a classical reference.

Pólya [Po21] discovered that a simple symmetric random walk on $\mathbb{Z}^d$ is recurrent for $d \le 2$ and transient otherwise. His result was later extended to Brownian motion by Lévy [Le40] and Kakutani [Ka44a]. Neighbourhood recurrence implies, in particular, that the path of a planar Brownian motion (running for an infinite amount of time) is dense in the plane.
A more subtle question is whether in $d \ge 3$ all orthogonal projections of a $d$-dimensional Brownian motion are neighbourhood recurrent, or equivalently whether there is an infinite cylinder avoided by its range. In fact, an avoided cylinder does exist almost surely. This result is due to Adelman, Burdzy and Pemantle [ABP98]. The Dvoretzky-Erdős test is originally from [DE51] and more information and additional references can be found in [Pr90]. There is also an analogous result for planar Brownian motion (with shrinking balls) which is due to Spitzer [Sp58].

Green introduced the function named after him in [Gr28]. Its probabilistic interpretation appears in Kac's paper [Ka51] and is investigated thoroughly by Hunt [Hu56]. Quite a lot can be said about the transition densities: $p^*(t, \,\cdot\,, \,\cdot\,)$ is jointly continuous on $D \times D$ and symmetric in the space variables. Moreover, $p^*(t,x,y)$ vanishes if either $x$ or $y$ is on the boundary of $D$, if this boundary is sufficiently regular. This is, of course, only nontrivial in case (3), and full proofs for this case can be found in [Ba95] or [PS78].

CHAPTER 4

Hausdorff dimension: Techniques and applications

Dimensions are a tool to measure the size of mathematical objects on a crude scale. For example, in classical geometry one can use dimension to see that a line segment (a one-dimensional object) is smaller than the surface of a ball (a two-dimensional object), but there is no difference between line segments of different lengths. It may therefore come as a surprise that dimension is able to distinguish the size of so many objects in probability theory. In this chapter we first introduce a suitably general notion of dimension, the Hausdorff dimension. We then describe general techniques to calculate the Hausdorff dimension of arbitrary subsets of $\mathbb{R}^d$, and apply these techniques to the graph and zero set of Brownian motion in dimension one, and to the range of higher dimensional Brownian motion. Lots of further examples will follow in subsequent chapters.

1. Minkowski and Hausdorff dimension

1.1. The Minkowski dimension. How can we capture the dimension of a geometric object? One requirement for a useful definition of dimension is that it should be intrinsic. This means that it should be independent of an embedding of the object in an ambient space like $\mathbb{R}^d$. Intrinsic notions of dimension can be defined in arbitrary metric spaces. Suppose $E$ is a bounded metric space with metric $\rho$. Here bounded means that the diameter $|E| = \sup\{\rho(x,y) : x, y \in E\}$ of $E$ is finite. The example we have in mind is a bounded subset of $\mathbb{R}^d$.

The definition of Minkowski dimension is based on the notion of a covering of the metric space $E$. A covering of $E$ is a finite or countable collection of sets $E_1, E_2, E_3, \ldots$ with $E \subset \bigcup_{i=1}^{\infty} E_i$. Define, for $\varepsilon > 0$,
\[ M(E, \varepsilon) = \min\bigl\{ k \ge 1 : \text{there exists a finite covering } E_1, \ldots, E_k \text{ of } E \text{ with } |E_i| \le \varepsilon \text{ for } i = 1, \ldots, k \bigr\}, \]
where $|A|$ is the diameter of a set $A \subset E$. Intuitively, when $E$ has dimension $s$ the number $M(E, \varepsilon)$ should be of order $\varepsilon^{-s}$. This can be verified in simple cases like line segments, planar squares, etc. This intuition motivates the definition of Minkowski dimension.

Definition 4.1. For a bounded metric space $E$ we define the lower Minkowski dimension as
\[ \underline{\dim}_M E := \liminf_{\varepsilon \downarrow 0} \frac{\log M(E, \varepsilon)}{\log(1/\varepsilon)}, \]
and the upper Minkowski dimension as
\[ \overline{\dim}_M E := \limsup_{\varepsilon \downarrow 0} \frac{\log M(E, \varepsilon)}{\log(1/\varepsilon)}. \]
We always have $\underline{\dim}_M E \le \overline{\dim}_M E$, but equality need not hold. If it holds we write $\dim_M E := \underline{\dim}_M E = \overline{\dim}_M E$. ⋄

Remark 4.2. If $E$ is a subset of the unit cube $[0,1]^d \subset \mathbb{R}^d$ then let
\[ \tilde{M}_n(E) = \# \bigl\{ Q \in \mathcal{D}_n : Q \cap E \ne \emptyset \bigr\} \]
be the number of dyadic cubes of sidelength $2^{-n}$ which hit $E$. Then there exists a constant $C(d) > 0$, depending only on the dimension, such that
\[ \tilde{M}_n(E) \ge M(E, \sqrt{d}\, 2^{-n}) \ge C(d)\, \tilde{M}_n(E). \]
Hence
\[ \overline{\dim}_M E = \limsup_{n \uparrow \infty} \frac{\log \tilde{M}_n(E)}{n \log 2} \quad \text{and} \quad \underline{\dim}_M E = \liminf_{n \uparrow \infty} \frac{\log \tilde{M}_n(E)}{n \log 2}. \] ⋄

Example 4.3. In Exercise 4.1 we calculate the Minkowski dimension of a deterministic 'fractal', the (ternary) Cantor set,
\[ C = \Bigl\{ \sum_{i=1}^{\infty} \frac{x_i}{3^i} : x_i \in \{0, 2\} \Bigr\} \subset [0,1]. \]
This set is obtained from the unit interval $[0,1]$ by first removing the middle third, and then successively removing the middle third of each remaining interval ad infinitum, see Figure 1 for the first three stages of the construction. ⋄

Figure 1. The ternary Cantor set is obtained by removing the middle third from each interval. The figure shows the first three steps of the infinite procedure.
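The counting characterisation of Remark 4.2 translates directly into a numerical box-counting procedure. The sketch below (Python with NumPy; the truncation level and the mesh sizes are arbitrary choices, and triadic rather than dyadic meshes are used for convenience) applies it to a finite approximation of the Cantor set of Example 4.3.

```python
import numpy as np

# Crude box-counting estimate of the Minkowski dimension, applied to the
# level-12 approximation of the ternary Cantor set.
def cantor_points(level):
    pts = np.array([0.0])
    for k in range(1, level + 1):
        pts = np.concatenate([pts, pts + 2 / 3**k])
    return pts  # left endpoints of the 2**level surviving intervals

def box_count(points, eps):
    # number of meshes of size eps hit by the point set
    return np.unique(np.floor(points / eps)).size

pts = cantor_points(12)
for n in (2, 4, 6, 8, 10):
    eps = 3.0**-n
    est = np.log(box_count(pts, eps)) / np.log(1 / eps)
    print(n, round(est, 4))  # tends to log 2 / log 3 = 0.6309...
```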
Remark 4.4. There is an unpleasant limitation of Minkowski dimension: Observe that singletons $S = \{x\}$ have Minkowski dimension 0, but we shall see in Exercise 4.2 that the set $E := \{ \frac{1}{n} : n \in \mathbb{N} \} \cup \{0\}$ has positive dimension. Hence the Minkowski dimension does not have the countable stability property
\[ \dim \bigcup_{k=1}^{\infty} E_k = \sup \bigl\{ \dim E_k : k \ge 1 \bigr\}. \]
This is one of the properties we expect from a reasonable concept of dimension. There are two ways out of this problem.
(i) One can use a notion of dimension taking variations of the size of the different sets in a covering into account. This captures finer details of the set and leads to the notion of Hausdorff dimension.
(ii) One can enforce the countable stability property by subdividing every set into countably many bounded pieces and taking the maximal dimension of them. The infimum over the numbers thus obtained leads to the notion of packing dimension.
We follow the first route now, but come back to the second route later in the book. ⋄

1.2. The Hausdorff dimension. The Hausdorff dimension and Hausdorff measure were introduced by Felix Hausdorff in 1919. Like the Minkowski dimension, Hausdorff dimension can be based on the notion of a covering of the metric space $E$. For the definition of the Minkowski dimension we have evaluated coverings crudely by counting the number of sets in the covering. Now we also allow infinite coverings and take the size of the covering sets, measured by their diameter, into account. Looking back at the example of Exercise 4.2, one can see that the set $E = \{1/n : n \ge 1\} \cup \{0\}$ can be covered much more effectively if we decrease the size of the balls as we move from right to left. In this example there is a big difference between evaluations of the covering which take into account that we use small sets in the covering, and the evaluation based on just counting the number of sets used to cover. A very useful evaluation is the $\alpha$-value of a covering. For every $\alpha \ge 0$ and covering $E_1, E_2, \ldots$ we say that the $\alpha$-value of the covering is
\[ \sum_{i=1}^{\infty} |E_i|^{\alpha}. \]
The terminology of the $\alpha$-values of a covering allows us to formulate a concept of dimension which is sensitive to the effect that the fine features of the set occur in different scales at different places.

Definition 4.5. For every $\alpha \ge 0$ the $\alpha$-Hausdorff content of a metric space $E$ is defined as
\[ \mathcal{H}^{\alpha}_{\infty}(E) = \inf\Bigl\{ \sum_{i=1}^{\infty} |E_i|^{\alpha} : E_1, E_2, \ldots \text{ is a covering of } E \Bigr\}, \]
informally speaking the $\alpha$-value of the most efficient covering. If $0 \le \alpha \le \beta$ and $\mathcal{H}^{\alpha}_{\infty}(E) = 0$, then also $\mathcal{H}^{\beta}_{\infty}(E) = 0$. Thus we can define
\[ \dim E = \inf\bigl\{ \alpha \ge 0 : \mathcal{H}^{\alpha}_{\infty}(E) = 0 \bigr\} = \sup\bigl\{ \alpha \ge 0 : \mathcal{H}^{\alpha}_{\infty}(E) > 0 \bigr\}, \]
the Hausdorff dimension of the set $E$. ⋄

Remark 4.6. The Hausdorff dimension may, of course, be infinite. But it is easy to see that subsets of $\mathbb{R}^d$ have Hausdorff dimension no larger than $d$. Moreover, in Exercise 4.3 we show that for every bounded metric space the Hausdorff dimension is bounded from above by the lower Minkowski dimension. Finally, in Exercise 4.4 we check that Hausdorff dimension has the countable stability property. ⋄

Figure 2. The ball, sphere and line segment pictured here all have 1-Hausdorff content equal to one.

The concept of the $\alpha$-Hausdorff content plays an important part in the definition of the Hausdorff dimension. However, it does not help distinguish the size of sets of the same dimension. For example, the three sets sketched in Figure 2 all have the same 1-Hausdorff content: the ball and the sphere on the left can be covered by a ball of diameter one, so that their 1-Hausdorff content is at most one, but the line segment on the right also does not permit a more effective covering and its 1-Hausdorff content is also 1. Therefore, one considers a refined concept, the Hausdorff measure. Here the idea is to consider only coverings by small sets.

Definition 4.7. Let $X$ be a metric space and $E \subset X$. For every $\alpha \ge 0$ and $\delta > 0$ define
\[ \mathcal{H}^{\alpha}_{\delta}(E) = \inf\Bigl\{ \sum_{i=1}^{\infty} |E_i|^{\alpha} : E_1, E_2, E_3, \ldots \text{ cover } E, \text{ and } |E_i| \le \delta \Bigr\}, \]
i.e. we are considering coverings of $E$ by sets of diameter no more than $\delta$. Then
\[ \mathcal{H}^{\alpha}(E) = \sup_{\delta > 0} \mathcal{H}^{\alpha}_{\delta}(E) = \lim_{\delta \downarrow 0} \mathcal{H}^{\alpha}_{\delta}(E) \]
is the $\alpha$-Hausdorff measure of the set $E$. ⋄

Remark 4.8. The $\alpha$-Hausdorff measure has two obvious properties which, together with $\mathcal{H}^{\alpha}(\emptyset) = 0$, make it an outer measure. These are countable subadditivity,
\[ \mathcal{H}^{\alpha}\Bigl( \bigcup_{i=1}^{\infty} E_i \Bigr) \le \sum_{i=1}^{\infty} \mathcal{H}^{\alpha}(E_i), \quad \text{for any sequence } E_1, E_2, E_3, \ldots \subset X, \]
and monotonicity, $\mathcal{H}^{\alpha}(E) \le \mathcal{H}^{\alpha}(D)$ if $E \subset D \subset X$. ⋄

One can express the Hausdorff dimension in terms of the Hausdorff measure.

Proposition 4.9. For every metric space $E$ we have
\[ \dim E = \inf\{\alpha : \mathcal{H}^{\alpha}(E) = 0\} = \inf\{\alpha : \mathcal{H}^{\alpha}(E) < \infty\} = \sup\{\alpha : \mathcal{H}^{\alpha}(E) > 0\} = \sup\{\alpha : \mathcal{H}^{\alpha}(E) = \infty\}. \]

Proof. We focus on the first equality, all the other arguments being similar. Suppose $\dim E > \alpha$. Then, for all $\beta \le \alpha$, $c := \mathcal{H}^{\beta}_{\infty}(E) > 0$, and clearly we have $\mathcal{H}^{\beta}_{\delta}(E) \ge c > 0$ for all $\delta > 0$. Hence $\mathcal{H}^{\beta}(E) \ge c > 0$ for all $\beta \le \alpha$. We infer that $\inf\{\beta : \mathcal{H}^{\beta}(E) = 0\} \ge \alpha$. Conversely, if $\dim E < \alpha$, then $\mathcal{H}^{\alpha}_{\infty}(E) = 0$ and hence, for every $\delta > 0$, there exists a covering by sets $E_1, E_2, \ldots$ with $\sum_{k=1}^{\infty} |E_k|^{\alpha} < \delta$. These sets have diameter less than $\delta^{1/\alpha}$, hence $\mathcal{H}^{\alpha}_{\delta^{1/\alpha}}(E) < \delta$ and letting $\delta \downarrow 0$ yields $\mathcal{H}^{\alpha}(E) = 0$. This proves $\inf\{\beta : \mathcal{H}^{\beta}(E) = 0\} \le \alpha$.
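For the Cantor set the $\alpha$-values of the natural coverings can be written down in closed form: the $2^n$ basic intervals of generation $n$ have diameter $3^{-n}$, so the covering has $\alpha$-value $2^n 3^{-n\alpha}$, which tends to zero exactly when $\alpha > \log 2/\log 3$. A few lines of Python make the threshold visible; note that coverings only bound $\dim C$ from above, the matching lower bound being Exercise 4.5.

```python
# Alpha-values of the natural level-n covers of the Cantor set: 2**n
# intervals of diameter 3**(-n) give the value 2**n * 3**(-n*alpha),
# which tends to 0 exactly when alpha exceeds log 2 / log 3 = 0.6309...
for alpha in (0.55, 0.6309, 0.7):
    vals = [2.0**n * 3.0**(-n * alpha) for n in (5, 10, 20, 40)]
    print(alpha, [f"{v:.3g}" for v in vals])
```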
Remark 4.10. As Lipschitz maps increase the diameter of sets by at most a constant factor, the image of any set $A \subset E$ under a Lipschitz map has at most the Hausdorff dimension of $A$. This observation is particularly useful for projections. ⋄

A natural generalisation of the last remark arises when we look at the effect of Hölder continuous maps on the Hausdorff dimension.

Definition 4.11. A function $f \colon (E_1, \rho_1) \to (E_2, \rho_2)$ between metric spaces is called $\alpha$-Hölder continuous if there exists a (global) constant $C > 0$ such that
\[ \rho_2\bigl( f(x), f(y) \bigr) \le C\, \rho_1(x,y)^{\alpha} \quad \text{for all } x, y \in E_1. \]
The constant $C$ is sometimes called the Hölder constant. ⋄

Remark 4.12. Hölder continuous maps allow some control on the Hausdorff measure of images: We show in Exercise 4.6 that, if $f \colon (E_1, \rho_1) \to (E_2, \rho_2)$ is surjective and $\alpha$-Hölder continuous with constant $C$, then for any $\beta \ge 0$,
\[ \mathcal{H}^{\beta}(E_2) \le C^{\beta}\, \mathcal{H}^{\alpha\beta}(E_1), \quad \text{and therefore} \quad \dim(E_2) \le \tfrac{1}{\alpha} \dim(E_1). \] ⋄

1.3. Upper bounds on the Hausdorff dimension. We now give general upper bounds for the dimension of graph and range of functions, which are based on Hölder continuity.

Definition 4.13. For a function $f \colon A \to \mathbb{R}^d$, for $A \subset [0,\infty)$, we define the graph to be
\[ \mathrm{Graph}_f = \bigl\{ (t, f(t)) : t \in A \bigr\} \subset \mathbb{R}^{d+1}, \]
and the range or path to be
\[ \mathrm{Range}_f = f(A) = \bigl\{ f(t) : t \in A \bigr\} \subset \mathbb{R}^d. \] ⋄

Proposition 4.14. Suppose $f \colon [0,1] \to \mathbb{R}^d$ is an $\alpha$-Hölder continuous function. Then
(a) $\dim(\mathrm{Graph}_f) \le 1 + (1-\alpha)\bigl( d \wedge \frac{1}{\alpha} \bigr)$,
(b) and, for any $A \subset [0,1]$, we have $\dim f(A) \le \frac{\dim A}{\alpha}$.

Proof of (a). Since $f$ is $\alpha$-Hölder continuous, there exists a constant $C$ such that, if $s, t \in [0,1]$ with $|t-s| \le \varepsilon$, then $|f(t) - f(s)| \le C\varepsilon^{\alpha}$. Cover $[0,1]$ by no more than $\lceil 1/\varepsilon \rceil$ intervals of length $\varepsilon$. The image of each such interval is then contained in a ball of diameter $C\varepsilon^{\alpha}$. One can now
• either cover each such ball by no more than a constant multiple of $\varepsilon^{d\alpha - d}$ balls of diameter $\varepsilon$,
• or use the fact that subintervals of length $(\varepsilon/C)^{1/\alpha}$ in the domain are mapped into balls of diameter $\varepsilon$ to cover the image inside the ball by a constant multiple of $\varepsilon^{1 - 1/\alpha}$ balls of radius $\varepsilon$.
In both cases, look at the cover of the graph consisting of the products of intervals and corresponding balls in $[0,1] \times \mathbb{R}^d$. The first construction needs a constant multiple of $\varepsilon^{d\alpha - d - 1}$ product sets, the second uses $\varepsilon^{-1/\alpha}$ product sets, all of which have diameter of order $\varepsilon$. These coverings give the required upper bounds.

Proof of (b). Suppose that $\dim(A) < \beta < \infty$. Then, for every $\varepsilon > 0$, there exists a covering $A_1, A_2, A_3, \ldots$ such that $A \subset \bigcup_j A_j$ and $\sum_j |A_j|^{\beta} < \varepsilon$. Then $f(A_1), f(A_2), \ldots$ is a covering of $f(A)$, and $|f(A_j)| \le C |A_j|^{\alpha}$, where $C$ is the Hölder constant. Thus
\[ \sum_j |f(A_j)|^{\beta/\alpha} \le C^{\beta/\alpha} \sum_j |A_j|^{\beta} < C^{\beta/\alpha}\, \varepsilon \to 0 \quad \text{as } \varepsilon \downarrow 0, \]
and hence $\dim f(A) \le \beta/\alpha$.

Remark 4.15. By countable stability of Hausdorff dimension, the statements of Proposition 4.14 remain true if we assume that $f \colon [0,\infty) \to \mathbb{R}^d$ is locally $\alpha$-Hölder continuous. ⋄

We now take a first look at dimensional properties of Brownian motion and harvest the results from our general discussion so far. We have shown in Corollary 1.20 that linear Brownian motion is everywhere locally $\alpha$-Hölder for any $\alpha < 1/2$, almost surely. This extends obviously to $d$-dimensional Brownian motion, and allows us to get an upper bound on the Hausdorff dimension of its range and graph.

Corollary 4.16. The graph of a $d$-dimensional Brownian motion satisfies, almost surely,
\[ \dim(\mathrm{Graph}) \le \begin{cases} 3/2 & \text{if } d = 1, \\ 2 & \text{if } d \ge 2. \end{cases} \]
For any fixed set $A \subset [0,\infty)$, almost surely $\dim B(A) \le 2 \dim(A) \wedge d$.

Remark 4.17. The corresponding lower bounds for the Hausdorff dimension of Graph and Range are more subtle and will be discussed in Section 4.3, when we have more sophisticated tools at our disposal. Our upper bounds also hold for the Minkowski dimension, see Exercise 4.7, and corresponding lower bounds are easier than in the Hausdorff case and obtainable at this stage, see Exercise 4.10. ⋄
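The bound $3/2$ for the graph in dimension one (in its Minkowski form, compare Exercise 4.11) can be probed numerically: for a simulated linear Brownian path, count the dyadic squares of side $2^{-k}$ met by the graph; the counts should grow roughly like $2^{3k/2}$. A minimal sketch, with an arbitrary discretisation level and crude finite-sample estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Box-counting estimate for the graph of linear Brownian motion: count
# dyadic squares of side 2**-k hit by the graph; the exponent should be
# roughly 3/2, matching dim(Graph) = 3/2.
n = 16
B = np.concatenate([[0.0],
                    np.cumsum(rng.standard_normal(2**n)) * 2**(-n / 2)])
for k in range(6, 11):
    eps = 2.0**-k
    # group the path values into 2**k time strips of width eps each,
    # then count the vertical eps-boxes spanned within each strip
    blocks = B[: 2**n].reshape(2**k, -1)
    spans = np.ceil(blocks.max(axis=1) / eps) - np.floor(blocks.min(axis=1) / eps)
    count = np.maximum(spans, 1).sum()
    print(k, round(np.log2(count) / k, 3))  # slowly approaches 1.5
```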
Corollary 4.16 does not make any statement about the 2-Hausdorff measure of the range, and any such statement requires more information than the Hölder exponent alone can provide, see for example Exercise 4.9. It is however not difficult to show that

(1.1) \[ \mathcal{H}^2\bigl( B([0,1]) \bigr) < \infty \quad \text{almost surely}. \]

Indeed, for any $n \in \mathbb{N}$, we look at the covering of $B([0,1])$ by the closures of the balls
\[ B\Bigl( B(\tfrac{k}{n}),\ \max_{\frac{k}{n} \le t \le \frac{k+1}{n}} \bigl| B(t) - B(\tfrac{k}{n}) \bigr| \Bigr), \quad k \in \{0, \ldots, n-1\}. \]
By the uniform continuity of Brownian motion on the unit interval, the maximal diameter in these coverings goes to zero as $n \to \infty$. Moreover, we have
\[ \mathbb{E}\Bigl[ \Bigl( \max_{\frac{k}{n} \le t \le \frac{k+1}{n}} \bigl| B(t) - B(\tfrac{k}{n}) \bigr| \Bigr)^2 \Bigr] \le \mathbb{E}\Bigl[ \Bigl( \max_{0 \le t \le \frac{1}{n}} |B(t)| \Bigr)^2 \Bigr] = \frac{1}{n}\, \mathbb{E}\Bigl[ \Bigl( \max_{0 \le t \le 1} |B(t)| \Bigr)^2 \Bigr], \]
using Brownian scaling. The expectation on the right is finite by Theorem 2.39. Hence the expected 2-value of the $n$th covering is bounded from above by
\[ 4\, \mathbb{E}\Bigl[ \sum_{k=0}^{n-1} \Bigl( \max_{\frac{k}{n} \le t \le \frac{k+1}{n}} \bigl| B(t) - B(\tfrac{k}{n}) \bigr| \Bigr)^2 \Bigr] \le 4\, \mathbb{E}\Bigl[ \Bigl( \max_{0 \le t \le 1} |B(t)| \Bigr)^2 \Bigr], \]
which implies, by Fatou's lemma, that
\[ \mathbb{E}\Bigl[ \liminf_{n \to \infty}\, 4 \sum_{k=0}^{n-1} \Bigl( \max_{\frac{k}{n} \le t \le \frac{k+1}{n}} \bigl| B(t) - B(\tfrac{k}{n}) \bigr| \Bigr)^2 \Bigr] < \infty. \]
Hence the liminf is almost surely finite, which proves (1.1).

The next theorem improves upon (1.1) by showing that the 2-dimensional Hausdorff measure of the range of $d$-dimensional Brownian motion is zero for any $d \ge 2$. The proof is considerably more involved and may be skipped on first reading. It makes use of the fact that we have a 'natural' measure on Range at our disposal, which we can use as a tool to pick a good cover by cubes. The idea of using a natural measure supported by the 'fractal' for comparison purposes will also turn out to be crucial for the lower bounds for Hausdorff dimension, which we discuss in the next section.

Theorem 4.18. Let $\{B(t) : t \ge 0\}$ be a Brownian motion in dimension $d \ge 2$. Then $\mathcal{H}^2(\mathrm{Range}) = 0$ almost surely.

Proof. It is sufficient to show that $\mathcal{H}^2(\mathrm{Range}) = 0$ for $d \ge 3$, as 2-dimensional Brownian motion is the projection of 3-dimensional Brownian motion, and projections cannot increase the Hausdorff measure of a set. Moreover it suffices to prove $\mathcal{H}^2(\mathrm{Range} \cap \mathrm{Cube}) = 0$ almost surely, for any half-open cube $\mathrm{Cube} \subset \mathbb{R}^d$ of sidelength one at positive distance from the starting point of the Brownian motion. Without loss of generality we may assume that this cube is the unit cube $\mathrm{Cube} = [0,1)^d$, and our Brownian motion is started at some $x \notin \mathrm{Cube}$.

So let $d \ge 3$, and recall the definition of the (locally finite) occupation measure $\mu$, defined by
\[ \mu(A) = \int_0^{\infty} \mathbf{1}_A(B(s))\, ds, \quad \text{for } A \subset \mathbb{R}^d \text{ Borel}. \]
Let $\mathcal{D}_k$ be the collection of all cubes $\prod_{i=1}^{d} [n_i 2^{-k}, (n_i+1) 2^{-k})$ where $n_1, \ldots, n_d \in \{0, \ldots, 2^k - 1\}$. We fix a threshold $m \in \mathbb{N}$ and let $M > m$. We call $D \in \mathcal{D}_k$ with $k \ge m$ a big cube if
\[ \mu(D) \ge \tfrac{1}{\varepsilon}\, 2^{-2k}. \]
The collection $\mathcal{E}(M)$ consists of all maximal big cubes $D \in \mathcal{D}_k$, $m \le k < M$, i.e. all those which are not contained in another big cube, together with all cubes $D \in \mathcal{D}_M$ which are not contained in a big cube, but intersect Range. Obviously $\mathcal{E}(M)$ is a cover of $\mathrm{Range} \cap \mathrm{Cube}$ by sets of diameter smaller than $\sqrt{d}\, 2^{-m}$.

To find the expected 2-value of this cover, first look at a cube $D \in \mathcal{D}_M$. We denote by $D = D_M \subset D_{M-1} \subset \cdots \subset D_m$, with $D_k \in \mathcal{D}_k$, the ascending sequence of cubes containing $D$. Let $D^*_k$ be the cube with the same centre as $D_k$ and $3/2$ its sidelength, see Figure 3. Let $\tau(D)$ be the first hitting time of the cube $D$ and
\[ \tau_k = \inf\{ t > \tau(D) : B(t) \notin D^*_k \} \]
the first exit time from $D^*_k$, for $M > k \ge m$. For the cubes $\mathrm{Cube} = [0,1)^d$ and $\mathrm{Child} = [0, \tfrac{1}{2})^d$ we also define the expanded cubes $\mathrm{Cube}^*$ and $\mathrm{Child}^*$ and the stopping time $\tau = \inf\{ t > 0 : B(t) \notin \mathrm{Cube}^* \}$.
Let
\[ q := \sup_{y \in \mathrm{Child}^*} \mathbb{P}_y\Bigl\{ \int_0^{\tau} \mathbf{1}_{\mathrm{Cube}}(B(s))\, ds \le \tfrac{1}{\varepsilon} \Bigr\} < 1. \]

Figure 3. Nested systems of cubes; the cubes $D^*_k$ are indicated by dashed, the cubes $D_k$ by solid boundaries.

By the strong Markov property applied to the stopping times $\tau_M < \ldots < \tau_{m+1}$ and Brownian scaling,
\[ \mathbb{P}_x\bigl\{ \mu(D_k) \le \tfrac{1}{\varepsilon} 2^{-2k} \text{ for all } M > k \ge m \,\big|\, \tau(D) < \infty \bigr\} \le \mathbb{P}_x\Bigl\{ \int_{\tau_{k+1}}^{\tau_k} \mathbf{1}_{D_k}(B(s))\, ds \le \tfrac{1}{\varepsilon} 2^{-2k} \text{ for all } M > k \ge m \,\Big|\, \tau(D) < \infty \Bigr\} \le \prod_{k=m}^{M-1} \sup_{y \in D^*_{k+1}} \mathbb{P}_y\Bigl\{ 2^{2k} \int_0^{\tilde{\tau}_k} \mathbf{1}_{D_k}(B(s))\, ds \le \tfrac{1}{\varepsilon} \Bigr\} \le q^{M-m}, \]
where $\tilde{\tau}_k$ is the first exit time of the Brownian motion from $D^*_k$ and the last inequality follows from Brownian scaling.

Recall from Theorem 3.17 that $\mathbb{P}_x\{\tau(D) < \infty\} \le c\, 2^{-M(d-2)}$, for a constant $c > 0$ depending only on the dimension $d$ and the fixed distance of $x$ from the unit cube. Hence the probability that any given cube $D \in \mathcal{D}_M$ is in our cover satisfies
\[ \mathbb{P}_x\bigl\{ \mu(D_k) \le \tfrac{1}{\varepsilon} 2^{-2k} \text{ for all } M > k \ge m,\ \tau(D) < \infty \bigr\} \le c\, 2^{-M(d-2)}\, q^{M-m}. \]
Hence the expected 2-value coming from the cubes in $\mathcal{E}(M) \cap \mathcal{D}_M$ is at most

(1.2) \[ d\, 2^{dM}\, 2^{-2M}\, \mathbb{P}_x\bigl\{ \mu(D_k) \le \tfrac{1}{\varepsilon} 2^{-2k} \text{ for all } M > k \ge m,\ \tau(D) < \infty \bigr\} \le c\,d\, q^{M-m}. \]

The 2-value from the cubes in $\mathcal{E}(M) \cap \bigcup_{k=m}^{M-1} \mathcal{D}_k$ is bounded by

(1.3) \[ \sum_{k=m}^{M-1} d\, 2^{-2k} \sum_{D \in \mathcal{E}(M) \cap \mathcal{D}_k} \mathbf{1}\bigl\{\mu(D) \ge 2^{-2k}\, \tfrac{1}{\varepsilon}\bigr\} \le d\, \varepsilon \sum_{k=m}^{M-1} \sum_{D \in \mathcal{E}(M) \cap \mathcal{D}_k} \mu(D) \le d\, \varepsilon\, \mu(\mathrm{Cube}). \]

As $\mathbb{E}\mu(\mathrm{Cube}) < \infty$ by Theorem 3.26, we infer from (1.2) and (1.3) that the expected 2-value of our cover converges to zero for $\varepsilon \downarrow 0$ and a suitable choice $M = M(\varepsilon)$. Hence a subsequence converges to zero almost surely, and, as $m$ was arbitrary, this ensures that $\mathcal{H}^2(\mathrm{Range}) = 0$ almost surely.

2. The mass distribution principle

From the definition of the Hausdorff dimension it is plausible that in many cases it is relatively easy to give an upper bound on the dimension: just find an efficient cover of the set and find an upper bound for its $\alpha$-value. It looks more difficult to give lower bounds, as we must obtain a lower bound on the $\alpha$-values of all covers of the set. The mass distribution principle is a way around this problem, which is based on the existence of a nontrivial measure on the set. The basic idea is that, if this measure distributes a positive amount of mass on a set $E$ in such a manner that its local concentration is bounded from above, then the set must be large in a suitable sense. For the purpose of this method we call a measure $\mu$ on the Borel sets of a metric space $E$ a mass distribution on $E$ if $0 < \mu(E) < \infty$. The intuition here is that a positive and finite mass is spread over the space $E$.

Theorem 4.19 (Mass distribution principle). Suppose $E$ is a metric space and $\alpha \ge 0$. If there is a mass distribution $\mu$ on $E$ and constants $C > 0$ and $\delta > 0$ such that
\[ \mu(V) \le C |V|^{\alpha} \]
for all closed sets $V$ with diameter $|V| \le \delta$, then $\mathcal{H}^{\alpha}(E) \ge \frac{\mu(E)}{C} > 0$, and hence $\dim E \ge \alpha$.

Proof. Suppose that $U_1, U_2, \ldots$ is a cover of $E$ by arbitrary sets with $|U_i| \le \delta$. Let $V_i$ be the closure of $U_i$ and note that $|U_i| = |V_i|$. We have
\[ 0 < \mu(E) \le \mu\Bigl( \bigcup_{i=1}^{\infty} U_i \Bigr) \le \mu\Bigl( \bigcup_{i=1}^{\infty} V_i \Bigr) \le \sum_{i=1}^{\infty} \mu(V_i) \le C \sum_{i=1}^{\infty} |U_i|^{\alpha}. \]
Passing to the infimum over all such covers, and letting $\delta \downarrow 0$, gives the statement.
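To see the principle in action on a concrete deterministic example, recall that the natural mass distribution on the Cantor set $C$ has the Cantor function $F$ as its distribution function, so that $\mu(a,b] = F(b) - F(a)$; this is exactly the pattern we use for the record set below. The following sketch (Python; the sample size and the recursion depth are arbitrary choices) checks numerically that $\mu(V) \le C|V|^{\alpha}$ with $\alpha = \log 2/\log 3$ appears to hold with a bounded constant, so that $\dim C \ge \log 2/\log 3$.

```python
import math
import random

def cantor_F(x, depth=40):
    """Cantor function: distribution function of the natural measure on C."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    value, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        if x >= 2.0:        # ternary digit 2: contributes a binary digit 1
            value += scale
            x -= 2.0
        elif x >= 1.0:      # digit 1: x lies in a removed middle third
            return value + scale
        scale /= 2.0
    return value

alpha = math.log(2) / math.log(3)
random.seed(0)
worst = 0.0
for _ in range(50_000):
    a, b = sorted(random.random() for _ in range(2))
    if b > a:
        ratio = (cantor_F(b) - cantor_F(a)) / (b - a) ** alpha
        worst = max(worst, ratio)
print(worst)  # stays bounded, so the mass distribution principle applies
```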
We now apply this technique to find the Hausdorff dimension of the zero set of a linear Brownian motion. Recall that this is an uncountable set with no isolated points. At first it is not clear which measure on Zero would be suitable for applying the mass distribution principle. Here Lévy's theorem, see Theorem 2.31, comes to our rescue: Recall the definition of the maximum process $\{M(t) : t \ge 0\}$ associated with a Brownian motion from Chapter 2.3.

Definition 4.20. Let $\{B(t) : t \ge 0\}$ be a linear Brownian motion and $\{M(t) : t \ge 0\}$ the associated maximum process. A time $t \ge 0$ is a record time for the Brownian motion if $M(t) = B(t)$, and the set of all record times for the Brownian motion is denoted by Rec. ⋄

Note that the record times are the zeros of the process $\{Y(t) : t \ge 0\}$ given by $Y(t) = M(t) - B(t)$. By Theorem 2.31 this process is a reflected Brownian motion, and hence its zero set and the zero set of $\{B(t) : t \ge 0\}$ have the same distribution. A natural measure on Rec is given by the distribution function $\{M(t) : t \ge 0\}$, which allows us to get a lower bound for the Hausdorff dimension of Rec via the mass distribution principle.

Lemma 4.21. Almost surely, $\dim(\mathrm{Zero} \cap [0,1]) = \dim(\mathrm{Rec} \cap [0,1]) \ge 1/2$.

Proof. The first equality follows from Theorem 2.31, so that we can focus in this proof on the record set. Since $t \mapsto M(t)$ is an increasing and continuous function, we can regard it as the distribution function of a positive measure $\mu$, with $\mu(a,b] = M(b) - M(a)$. This measure is obviously supported on the (closed) set Rec of record times. We know that, with probability one, the Brownian motion is locally Hölder continuous with any exponent $\alpha < 1/2$. Thus there exists a (random) constant $C_{\alpha}$ such that, almost surely,
\[ M(b) - M(a) \le \max_{0 \le h \le b-a} B(a+h) - B(a) \le C_{\alpha} (b-a)^{\alpha}, \quad \text{for all } a, b \in [0,1]. \]
By the mass distribution principle we get that, almost surely, $\dim(\mathrm{Rec} \cap [0,1]) \ge \alpha$. Letting $\alpha \uparrow \frac{1}{2}$ finishes the proof.

To get an upper bound on the Hausdorff dimension of Zero we use a covering consisting of intervals. Define the collection $\mathcal{D}_k$ of intervals $[j 2^{-k}, (j+1) 2^{-k})$ for $j = 0, \ldots, 2^k - 1$, and let $Z(I) = 1$ if there exists $t \in I$ with $B(t) = 0$. To estimate the dimension of the zero set we need an estimate for the probability that $Z(I) = 1$, i.e. for the probability that a given interval contains a zero of Brownian motion.

Lemma 4.22. There is an absolute constant $C$ such that, for any $a, \varepsilon > 0$,
\[ \mathbb{P}\bigl\{ \text{there exists } t \in (a, a+\varepsilon) \text{ with } B(t) = 0 \bigr\} \le C \sqrt{\tfrac{\varepsilon}{a+\varepsilon}}. \]

Proof. Consider the event $A = \{ |B(a+\varepsilon)| \le \sqrt{\varepsilon} \}$. By the scaling property of Brownian motion, we get the upper bound
\[ \mathbb{P}(A) = \mathbb{P}\Bigl\{ |B(1)| \le \sqrt{\tfrac{\varepsilon}{a+\varepsilon}} \Bigr\} \le 2 \sqrt{\tfrac{\varepsilon}{a+\varepsilon}}. \]
Knowing that Brownian motion has a zero in $(a, a+\varepsilon)$ makes the event $A$ very likely. Indeed, applying the strong Markov property at the stopping time $T = \inf\{ t \ge a : B(t) = 0 \}$, we have
\[ \mathbb{P}(A) \ge \mathbb{P}\bigl( A \cap \{ 0 \in B[a, a+\varepsilon] \} \bigr) \ge \mathbb{P}\{ T \le a+\varepsilon \} \min_{a \le t \le a+\varepsilon} \mathbb{P}\bigl\{ |B(a+\varepsilon)| \le \sqrt{\varepsilon} \,\big|\, B(t) = 0 \bigr\}. \]
Clearly the minimum is achieved at $t = a$ and, using the scaling property of Brownian motion, we have
\[ \mathbb{P}\bigl\{ |B(a+\varepsilon)| \le \sqrt{\varepsilon} \,\big|\, B(a) = 0 \bigr\} = \mathbb{P}\{ |B(1)| \le 1 \} =: c > 0. \]
Hence $\mathbb{P}\{ T \le a+\varepsilon \} \le \frac{2}{c} \sqrt{\frac{\varepsilon}{a+\varepsilon}}$, and this completes the proof.

Remark 4.23. This is only very crude information about the position of the zeros of a linear Brownian motion. Much more precise information is available, for example in the form of the arcsine law for the last sign-change, which we prove in the next section, and which (after a simple scaling) yields the precise value of the probability in Lemma 4.22. ⋄
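Both Lemma 4.22 and the exact value alluded to in Remark 4.23 are easy to test by simulation. A minimal sketch (Python with NumPy; the window $(a, a+\varepsilon)$ and the discretisation are arbitrary choices) estimates the probability of a zero from discretised paths and compares it with the arcsine-law value $1 - \frac{2}{\pi}\arcsin\sqrt{a/(a+\varepsilon)}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate P{B has a zero in (a, a + eps)} from discretised paths and
# compare with the exact arcsine-law expression (see Remark 4.23); the
# probability is also covered by the bound C*sqrt(eps/(a+eps)) of
# Lemma 4.22.
a, eps = 1.0, 0.25
n_paths, n_steps = 4000, 1250
dt = (a + eps) / n_steps
B = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
window = B[:, int(a / dt):]
has_zero = (window.min(axis=1) <= 0.0) & (window.max(axis=1) >= 0.0)
print("empirical:", has_zero.mean())  # slightly low: discrete paths miss zeros
print("arcsine:  ", 1 - (2 / np.pi) * np.arcsin(np.sqrt(a / (a + eps))))
```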
We have thus shown that, for any $\varepsilon > 0$ and sufficiently large integer $k$, we have
\[ \mathbb{E}[Z(I)] \le c_1\, 2^{-k/2}, \quad \text{for all } I \in \mathcal{D}_k \text{ with } I \subset (\varepsilon, 1-\varepsilon), \]
for some constant $c_1 > 0$. Hence the covering of the set $\{ t \in (\varepsilon, 1-\varepsilon) : B(t) = 0 \}$ by all $I \in \mathcal{D}_k$ with $I \cap (\varepsilon, 1-\varepsilon) \ne \emptyset$ and $Z(I) = 1$ has an expected $\frac{1}{2}$-value of
\[ \mathbb{E}\Bigl[ \sum_{\substack{I \in \mathcal{D}_k \\ I \cap (\varepsilon,1-\varepsilon) \ne \emptyset}} \mathbf{1}\{Z(I)=1\}\, 2^{-k/2} \Bigr] = \sum_{\substack{I \in \mathcal{D}_k \\ I \cap (\varepsilon,1-\varepsilon) \ne \emptyset}} \mathbb{E}[Z(I)]\, 2^{-k/2} \le c_1\, 2^k\, 2^{-k/2}\, 2^{-k/2} \le c_1. \]
We thus get, from Fatou's lemma,
\[ \mathbb{E}\Bigl[ \liminf_{k \to \infty} \sum_{\substack{I \in \mathcal{D}_k \\ I \cap (\varepsilon,1-\varepsilon) \ne \emptyset}} \mathbf{1}\{Z(I)=1\}\, 2^{-k/2} \Bigr] \le \liminf_{k \to \infty} \mathbb{E}\Bigl[ \sum_{\substack{I \in \mathcal{D}_k \\ I \cap (\varepsilon,1-\varepsilon) \ne \emptyset}} \mathbf{1}\{Z(I)=1\}\, 2^{-k/2} \Bigr] \le c_1. \]
Hence the liminf is almost surely finite, which means that there exists a family of coverings with maximal diameter going to zero and bounded $\frac{1}{2}$-value. This implies that, almost surely,
\[ \mathcal{H}^{\frac{1}{2}}\bigl\{ t \in (\varepsilon, 1-\varepsilon) : B(t) = 0 \bigr\} < \infty, \]
and, in particular, that $\dim(\mathrm{Zero} \cap (\varepsilon, 1-\varepsilon)) \le \frac{1}{2}$. As $\varepsilon > 0$ was arbitrary, we obtain the same bound for the full zero set. Combining this estimate with Lemma 4.21 we have verified the following result.

Theorem 4.24. Let $\{B(t) : 0 \le t \le 1\}$ be a linear Brownian motion. Then, with probability one, we have
\[ \dim\bigl( \mathrm{Zero} \cap [0,1] \bigr) = \dim\bigl( \mathrm{Rec} \cap [0,1] \bigr) = \tfrac{1}{2}. \]

Remark 4.25. As in the case of the Brownian path, the Hausdorff measure is itself not a nontrivial measure on the zero set, see Exercise 4.13. In Chapter 6 we shall construct such a measure, the local time at zero. Until then, Lévy's identity will remain the crucial tool. ⋄

3. The energy method

The energy method is a technique to find a lower bound for the Hausdorff dimension, which is particularly interesting in applications to random fractals. It replaces the condition on the mass of all closed sets in the mass distribution principle by the finiteness of an energy.

Definition 4.26. Suppose $\mu$ is a mass distribution on a metric space $(E, \rho)$ and $\alpha \ge 0$. The $\alpha$-potential of a point $x \in E$ with respect to $\mu$ is defined as
\[ \phi_{\alpha}(x) = \int \frac{d\mu(y)}{\rho(x,y)^{\alpha}}. \]
In the case $E = \mathbb{R}^3$ and $\alpha = 1$, this is the Newtonian gravitational potential of the mass $\mu$. The $\alpha$-energy of $\mu$ is
\[ I_{\alpha}(\mu) = \int \phi_{\alpha}(x)\, d\mu(x) = \iint \frac{d\mu(x)\, d\mu(y)}{\rho(x,y)^{\alpha}}. \] ⋄

The simple idea of the energy method is the following: Mass distributions with $I_{\alpha}(\mu) < \infty$ spread the mass so that at each place the concentration is sufficiently small to overcome the singularity of the integrand. This is only possible on sets which are large in a suitable sense.

Theorem 4.27 (Energy method). Let $\alpha \ge 0$ and $\mu$ be a mass distribution on a metric space $E$. Then, for every $\varepsilon > 0$, we have
\[ \mathcal{H}^{\alpha}_{\varepsilon}(E) \ge \frac{\mu(E)^2}{\displaystyle\iint_{\rho(x,y) < \varepsilon} \frac{d\mu(x)\, d\mu(y)}{\rho(x,y)^{\alpha}}}. \]
Hence, if $I_{\alpha}(\mu) < \infty$ then $\mathcal{H}^{\alpha}(E) = \infty$ and, in particular, $\dim E \ge \alpha$.

Remark 4.28. To get a lower bound on the dimension from this method it suffices to show finiteness of a single integral. In particular, in order to show for a random set $E$ that $\dim E \ge \alpha$ almost surely, it suffices to show that $\mathbb{E} I_{\alpha}(\mu) < \infty$ for a (random) measure on $E$. ⋄

Proof. Suppose that $\{A_n : n = 1, 2, \ldots\}$ is a pairwise disjoint covering of $E$ by sets of diameter $< \varepsilon$. Then
\[ \iint_{\rho(x,y) < \varepsilon} \frac{d\mu(x)\, d\mu(y)}{\rho(x,y)^{\alpha}} \ge \sum_{n=1}^{\infty} \iint_{A_n \times A_n} \frac{d\mu(x)\, d\mu(y)}{\rho(x,y)^{\alpha}} \ge \sum_{n=1}^{\infty} \frac{\mu(A_n)^2}{|A_n|^{\alpha}}. \]
Moreover, using the Cauchy-Schwarz inequality,
\[ \mu(E)^2 \le \Bigl( \sum_{n=1}^{\infty} \mu(A_n) \Bigr)^2 = \Bigl( \sum_{n=1}^{\infty} |A_n|^{\frac{\alpha}{2}}\, \frac{\mu(A_n)}{|A_n|^{\frac{\alpha}{2}}} \Bigr)^2 \le \sum_{n=1}^{\infty} |A_n|^{\alpha}\, \sum_{n=1}^{\infty} \frac{\mu(A_n)^2}{|A_n|^{\alpha}} \le \sum_{n=1}^{\infty} |A_n|^{\alpha} \iint_{\rho(x,y) < \varepsilon} \frac{d\mu(x)\, d\mu(y)}{\rho(x,y)^{\alpha}}. \]
Dividing both sides by the integral and passing to the infimum over all such coverings gives the stated inequality. If $I_{\alpha}(\mu) < \infty$, the integral converges to zero as $\varepsilon \downarrow 0$, so that $\mathcal{H}^{\alpha}_{\varepsilon}(E)$ diverges to infinity.
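Before using the energy method in proofs, one can watch it at work numerically. The sketch below (Python with NumPy; the path length and the number of sampled pairs are arbitrary choices) estimates the $\alpha$-energy of the occupation measure of a simulated planar Brownian path by sampling pairs of time points; the estimates stay moderate for $\alpha$ well below 2 and grow as $\alpha$ approaches 2, in line with the computation used in the next proof.

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo estimate of the alpha-energy of the occupation measure of
# planar Brownian motion on [0, 1]: sample random pairs (s, t) and
# average |B(t) - B(s)|**(-alpha). The double integral is finite for
# alpha < 2 and blows up at alpha = 2, consistent with dim(Range) = 2.
n = 2**16
B = np.cumsum(rng.standard_normal((n, 2)) / np.sqrt(n), axis=0)
s, t = rng.integers(0, n, size=(2, 500_000))
dist = np.linalg.norm(B[t] - B[s], axis=1)
dist = dist[dist > 0]          # discard coincident sample times
for alpha in (1.0, 1.5, 1.9):
    print(alpha, (dist ** -alpha).mean())
```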
We now apply the energy method to resolve questions left open in the first section of this chapter, namely the lower bounds for the Hausdorff dimension of the graph and range of Brownian motion. The nowhere differentiability of linear Brownian motion established in the first chapter suggests that its graph may have dimension greater than one. For dimensions $d \ge 2$, it is interesting to look at the range of Brownian motion. We have seen that planar Brownian motion is neighbourhood recurrent, that is, it visits every neighbourhood in the plane infinitely often. In this sense, the range of planar Brownian motion is comparable to the plane itself and one can ask whether this is also true in the sense of dimension.

Theorem 4.29 (Taylor 1953). Let $\{B(t) : t \ge 0\}$ be $d$-dimensional Brownian motion.
(a) If $d = 1$, then $\dim \mathrm{Graph} = 3/2$ almost surely.
(b) If $d \ge 2$, then $\dim \mathrm{Range} = \dim \mathrm{Graph} = 2$ almost surely.

Recall that we already know the upper bounds from Corollary 4.16. We now look at lower bounds for the range of Brownian motion in $d \ge 2$.

Proof of Theorem 4.29(b). A natural measure on Range is the occupation measure $\mu_B$ defined by $\mu_B(A) = \mathcal{L}(B^{-1}(A) \cap [0,1])$, for all Borel sets $A \subset \mathbb{R}^d$, or, equivalently,
\[ \int_{\mathbb{R}^d} f(x)\, d\mu_B(x) = \int_0^1 f\bigl( B(t) \bigr)\, dt, \quad \text{for all bounded measurable functions } f. \]
We want to show that for any $0 < \alpha < 2$,

(3.1) \[ \mathbb{E} \iint \frac{d\mu_B(x)\, d\mu_B(y)}{|x-y|^{\alpha}} = \mathbb{E} \int_0^1 \int_0^1 \frac{ds\, dt}{|B(t) - B(s)|^{\alpha}} < \infty. \]

Let us evaluate the expectation
\[ \mathbb{E}\, |B(t) - B(s)|^{-\alpha} = \mathbb{E}\bigl[ \bigl( |t-s|^{1/2} |B(1)| \bigr)^{-\alpha} \bigr] = |t-s|^{-\alpha/2} \int_{\mathbb{R}^d} \frac{c_d}{|z|^{\alpha}}\, e^{-|z|^2/2}\, dz. \]
The integral can be evaluated using polar coordinates, but all we need is that it is a finite constant $c$ depending on $d$ and $\alpha$ only. Substituting this expression into (3.1) and using Fubini's theorem we get

(3.2) \[ \mathbb{E} I_{\alpha}(\mu_B) = c \int_0^1 \int_0^1 \frac{ds\, dt}{|t-s|^{\alpha/2}} \le 2c \int_0^1 \frac{du}{u^{\alpha/2}} < \infty. \]

Therefore $I_{\alpha}(\mu_B) < \infty$ and hence $\dim \mathrm{Range} \ge \alpha$, almost surely. The lower bound on the range follows by letting $\alpha \uparrow 2$. We also obtain a lower bound for the dimension of the graph: As the graph of a function can be projected onto the path, the dimension of the graph is at least the dimension of the path by Remark 4.10. Hence, if $d \ge 2$, almost surely $\dim \mathrm{Graph} \ge 2$.

Now let us turn to linear Brownian motion and prove the first half of Taylor's theorem.

Proof of Theorem 4.29(a). Again we use the energy method for a sharp lower bound. Recall that we have shown in Corollary 4.16 that $\dim \mathrm{Graph} \le 3/2$. Let $\alpha < 3/2$ and define a measure $\mu$ on the graph by
\[ \mu(A) = \mathcal{L}^1\bigl( \{ 0 \le t \le 1 : (t, B(t)) \in A \} \bigr) \quad \text{for } A \subset [0,\infty) \times \mathbb{R} \text{ Borel}. \]
Changing variables, the $\alpha$-energy of $\mu$ can be written as
\[ \iint \frac{d\mu(x)\, d\mu(y)}{|x-y|^{\alpha}} = \int_0^1 \int_0^1 \frac{ds\, dt}{\bigl( |t-s|^2 + |B(t) - B(s)|^2 \bigr)^{\alpha/2}}. \]
Bounding the integrand, taking expectations, and applying Fubini's theorem, we get that

(3.3) \[ \mathbb{E} I_{\alpha}(\mu) \le 2 \int_0^1 \mathbb{E}\bigl( (t^2 + B(t)^2)^{-\alpha/2} \bigr)\, dt. \]

Let $p(z) = \frac{1}{\sqrt{2\pi}} \exp(-z^2/2)$ denote the standard normal density. By scaling, the expectation above can be written as

(3.4) \[ 2 \int_0^{\infty} (t^2 + t z^2)^{-\alpha/2}\, p(z)\, dz. \]

Comparing the size of the summands in the integration suggests separating $z \le \sqrt{t}$ from $z > \sqrt{t}$. Then we can bound (3.4) above by twice
\[ \int_0^{\sqrt{t}} (t^2)^{-\alpha/2}\, dz + \int_{\sqrt{t}}^{\infty} (t z^2)^{-\alpha/2}\, p(z)\, dz = t^{\frac{1}{2} - \alpha} + t^{-\alpha/2} \int_{\sqrt{t}}^{\infty} z^{-\alpha}\, p(z)\, dz. \]
Furthermore, we separate the last integral at 1. We get
\[ \int_{\sqrt{t}}^{\infty} z^{-\alpha}\, p(z)\, dz \le 1 + \int_{\sqrt{t}}^{1} z^{-\alpha}\, dz. \]
The latter integral is of order $t^{(1-\alpha)/2}$. Substituting these results into (3.3), we see that the expected energy is finite when $\alpha < 3/2$. The claim now follows from the energy method.

4. Frostman's lemma and capacity

In this section we provide a converse to the mass distribution principle, i.e. starting from a lower bound on the Hausdorff measure we construct a mass distribution on a set. This is often useful, for example if one wants to relate the Hausdorff dimension of a set and its image under some transformation.

Theorem 4.30 (Frostman's lemma). If $A \subset \mathbb{R}^d$ is a closed set such that $\mathcal{H}^{\alpha}(A) > 0$, then there exists a Borel probability measure $\mu$ supported on $A$ and a constant $C > 0$ such that $\mu(D) \le C|D|^{\alpha}$ for all Borel sets $D$.

We now give a proof of Frostman's lemma which is based on a tree representation of Euclidean space and a famous result from graph theory, the max-flow min-cut theorem. The proof given here is based on the representation of compact subsets of $\mathbb{R}^d$ by trees, an idea that we will encounter again in Chapter 9.
Definition 4.31. A tree $T = (V, E)$ is a connected graph described by a finite or countable set $V$ of vertices, which includes a distinguished vertex $\varrho$ designated as the root, and a set $E \subset V \times V$ of ordered edges, such that
• for every vertex $v \in V$ the set $\{ w \in V : (w,v) \in E \}$ consists of exactly one element $\bar{v}$, the parent, except for the root $\varrho \in V$, which has no parent;
• for every vertex $v$ there is a unique self-avoiding path from the root to $v$, and the number of edges in this path is the order or generation $|v|$ of the vertex $v \in V$;
• for every $v \in V$, the set of offspring or children $\{ w \in V : (v,w) \in E \}$ is finite. ⋄

Notation 4.32. Suppose $T = (V, E)$ is a tree. For any $v, w \in V$ we denote by $v \wedge w$ the element of maximal order on the intersection of the paths from the root to $v$, respectively $w$, i.e. the last common ancestor of $v$ and $w$. The order $|e|$ of an edge $e = (u,v)$ is the order of its end-vertex $v$. Every infinite self-avoiding path started in the root is called a ray. The set of rays is denoted $\partial T$, the boundary of $T$. For any two rays $\xi$ and $\eta$ we define $\xi \wedge \eta$ to be the vertex in the intersection of the rays which maximises the order. Note that $|\xi \wedge \eta|$ is the number of edges that the two rays $\xi$ and $\eta$ have in common. The distance between two rays $\xi$ and $\eta$ is defined to be $|\xi - \eta| := 2^{-|\xi \wedge \eta|}$, and this definition makes the boundary $\partial T$ a metric space. ⋄

Remark 4.33. The boundary $\partial T$ of a tree is an interesting fractal in its own right. Its Hausdorff dimension is $\log_2(\mathrm{br}\, T)$, where $\mathrm{br}\, T$ is a suitably defined average offspring number. This, together with other interesting probabilistic aspects of trees, is discussed in depth in [LP05]. ⋄

Definition 4.34. Suppose capacities are assigned to the edges of a tree $T$, i.e. there is a mapping $C \colon E \to [0,\infty)$. A flow of strength $c > 0$ through a tree with capacities $C$ is a mapping $\theta \colon E \to [0,c]$ such that
• for the root we have
\[ \sum_{w \,:\, \bar{w} = \varrho} \theta(\varrho, w) = c, \]
and for every other vertex $v \ne \varrho$ we have
\[ \theta(\bar{v}, v) = \sum_{w \,:\, \bar{w} = v} \theta(v, w), \]
i.e. the flow into and out of each vertex other than the root is conserved;
• $\theta(e) \le C(e)$, i.e. the flow through the edge $e$ is bounded by its capacity.
A set $\Pi$ of edges is called a cutset if every ray includes an edge from $\Pi$. ⋄

The key to the mass distribution principle is a famous result of graph theory, the max-flow min-cut theorem of Ford and Fulkerson [FF56], which we prove in Section 4 of Appendix II.

Theorem 4.35 (Max-flow min-cut theorem).
\[ \max\bigl\{ \mathrm{strength}(\theta) : \theta \text{ a flow with capacities } C \bigr\} = \inf\Bigl\{ \sum_{e \in \Pi} C(e) : \Pi \text{ a cutset} \Bigr\}. \]

Proof of Frostman's lemma. We may assume $A \subset [0,1]^d$. Any compact cube in $\mathbb{R}^d$ of sidelength $s$ can be split into $2^d$ nonoverlapping compact cubes of sidelength $s/2$. We first create a tree with a root that we associate with the cube $[0,1]^d$. Every vertex in the tree has $2^d$ edges emanating from it, each leading to a vertex that is associated with one of the $2^d$ subcubes with half the sidelength of the original cube. We then erase the edges ending in vertices associated with subcubes that do not intersect $A$. In this way we construct a tree $T = (V, E)$ such that the rays in $\partial T$ correspond to sequences of nested compact cubes, see Figure 4.

Figure 4. The first two stages in the construction of the tree associated with the shaded set $A \subset [0,1]^2$. Dotted edges in the tree are erased.

There is a canonical mapping $\Phi \colon \partial T \to A$, which maps sequences of nested cubes to their intersection. Note that if $x \in A$, then there is an infinite path emanating from the root all of whose vertices are associated with cubes that contain $x$ and thus intersect $A$.
Hence $\Phi$ is surjective. For any edge $e$ at level $n$ define the capacity $C(e) = 2^{-n\alpha}$. We now associate to every cutset $\Pi$ a covering of $A$, consisting of those cubes associated with the initial vertices of the edges in the cutset. To see that the resulting collection of cubes is indeed a covering, let $\xi$ be a ray. As $\Pi$ is a cutset, it contains one of the edges in this ray, and the cube associated with the initial vertex of this edge contains the point $\Phi(\xi)$. Hence we indeed cover the entire set $\Phi(\partial T) = A$. Since a cube of sidelength $2^{-n}$ has diameter $\sqrt{d}\, 2^{-n}$, this implies that
\[ \inf\Bigl\{ \sum_{e \in \Pi} C(e) : \Pi \text{ a cutset} \Bigr\} \ge d^{-\alpha/2}\, \inf\Bigl\{ \sum_j |A_j|^{\alpha} : A \subset \bigcup_j A_j \Bigr\}, \]
and as $\mathcal{H}^{\alpha}_{\infty}(A) > 0$ this is bounded away from zero. Thus, by the max-flow min-cut theorem, there exists a flow $\theta \colon E \to [0,\infty)$ of positive strength such that $\theta(e) \le C(e)$ for all edges $e \in E$.

We now show how to define a suitable measure on the space of infinite paths. Given an edge $e \in E$ we associate a set $T(e) \subset \partial T$ consisting of all rays containing the edge $e$, and define $\tilde{\nu}(T(e)) = \theta(e)$. It is easily checked that the collection $\mathcal{C}(\partial T)$ of subsets $T(e) \subset \partial T$, for all $e \in E$, is a semi-algebra on $\partial T$. Recall that this means that if $A, B \in \mathcal{C}(\partial T)$, then $A \cap B \in \mathcal{C}(\partial T)$ and $A^c$ is a finite disjoint union of sets in $\mathcal{C}(\partial T)$. Because the flow through any vertex is conserved, $\tilde{\nu}$ is countably additive. Thus, using a measure extension theorem such as, for example, [Du95, A.1(1.3)], we can extend $\tilde{\nu}$ to a measure $\nu$ on the $\sigma$-algebra generated by $\mathcal{C}(\partial T)$. We can now define a Borel measure $\mu = \nu \circ \Phi^{-1}$ on $A$, which satisfies $\mu(C) \le \theta(e)$ whenever $C$ is the open cube associated with the end-vertex of the edge $e$. Suppose now that $D$ is a Borel subset of $\mathbb{R}^d$ and $n$ is the integer such that $2^{-n} < |D \cap [0,1]^d| \le 2^{-(n-1)}$. Then $D \cap [0,1]^d$ can be covered with $3^d$ of the cubes in the above construction having sidelength $2^{-n}$. Using the bound, we have
\[ \mu(D) \le d^{\frac{\alpha}{2}}\, 3^d\, 2^{-n\alpha} \le d^{\frac{\alpha}{2}}\, 3^d\, |D|^{\alpha}, \]
so we have a finite measure $\mu$ satisfying the requirement of the lemma. Normalising $\mu$ to get a probability measure completes the proof.

We define the (Riesz) $\alpha$-capacity of a metric space $(E, \rho)$ as
\[ \mathrm{Cap}_{\alpha}(E) := \sup\bigl\{ I_{\alpha}(\mu)^{-1} : \mu \text{ a mass distribution on } E \text{ with } \mu(E) = 1 \bigr\}. \]
Theorem 4.27 states that a set of positive $\alpha$-capacity has dimension at least $\alpha$. We now show that, in this formulation, the method is sharp. Our proof of this fact relies on Frostman's lemma and hence refers to closed subsets of Euclidean space.

Theorem 4.36. For any closed set $A \subset \mathbb{R}^d$,
\[ \dim A = \sup\bigl\{ \alpha : \mathrm{Cap}_{\alpha}(A) > 0 \bigr\}. \]

Proof. It only remains to show '$\le$', and for this purpose it suffices to show that if $\dim A > \alpha$, then there exists a Borel probability measure $\mu$ on $A$ such that
\[ I_{\alpha}(\mu) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} \frac{d\mu(x)\, d\mu(y)}{|x-y|^{\alpha}} < \infty. \]
By our assumption, for some sufficiently small $\beta > \alpha$ we have $\mathcal{H}^{\beta}(A) > 0$. By Frostman's lemma, there exists a nonzero Borel probability measure $\mu$ on $A$ and a constant $C$ such that $\mu(D) \le C|D|^{\beta}$ for all Borel sets $D$. By restricting $\mu$ to a smaller set if necessary, we can make the support of $\mu$ have diameter less than one. Fix $x \in A$, and for $k \ge 1$ let $S_k(x) = \{ y : 2^{-k} < |x-y| \le 2^{1-k} \}$. Since $\mu$ has no atoms, we have
\[ \int_{\mathbb{R}^d} \frac{d\mu(y)}{|x-y|^{\alpha}} = \sum_{k=1}^{\infty} \int_{S_k(x)} \frac{d\mu(y)}{|x-y|^{\alpha}} \le \sum_{k=1}^{\infty} \mu(S_k(x))\, 2^{k\alpha}, \]
where the equality follows from the monotone convergence theorem and the inequality holds by the definition of the $S_k$. Also,
\[ \sum_{k=1}^{\infty} \mu(S_k(x))\, 2^{k\alpha} \le C \sum_{k=1}^{\infty} \bigl( 2^{2-k} \bigr)^{\beta}\, 2^{k\alpha} = C' \sum_{k=1}^{\infty} 2^{k(\alpha - \beta)}, \]
where $C' = 2^{2\beta} C$. Since $\beta > \alpha$, we have
\[ I_{\alpha}(\mu) \le C' \sum_{k=1}^{\infty} 2^{k(\alpha - \beta)} < \infty, \]
which proves the theorem.

In Corollary 4.16 we have seen that the image of a set $A \subset [0,\infty)$ under Brownian motion has at most twice the Hausdorff dimension of $A$.
Naturally, the question arises whether this is a sharp estimate. The following result of McKean shows that, if $d \ge 2$, this is sharp for any set $A$, while in $d = 1$ it is sharp as long as $\dim A \le \frac{1}{2}$.

Theorem 4.37 (McKean 1955). Let $A \subset [0,\infty)$ be a closed subset and $\{B(t) : t \ge 0\}$ a $d$-dimensional Brownian motion. Then, almost surely,
\[ \dim B(A) = 2 \dim A \wedge d. \]

Proof. The upper bound was verified in Corollary 4.16. For the lower bound let $\alpha < \dim(A) \wedge (d/2)$. By Theorem 4.36, there exists a Borel probability measure $\mu$ on $A$ such that $I_{\alpha}(\mu) < \infty$. Denote by $\mu_B$ the measure defined by $\mu_B(D) = \mu(\{ t \ge 0 : B(t) \in D \})$ for all Borel sets $D \subset \mathbb{R}^d$. Then
\[ \mathbb{E}[I_{2\alpha}(\mu_B)] = \mathbb{E}\Bigl[ \iint \frac{d\mu_B(x)\, d\mu_B(y)}{|x-y|^{2\alpha}} \Bigr] = \mathbb{E}\Bigl[ \int_0^{\infty} \int_0^{\infty} \frac{d\mu(t)\, d\mu(s)}{|B(t) - B(s)|^{2\alpha}} \Bigr], \]
where the second equality can be verified by a change of variables. Note that the denominator on the right hand side has the same distribution as $|t-s|^{\alpha} |Z|^{2\alpha}$, where $Z$ is a $d$-dimensional standard normal random variable. Since $2\alpha < d$, we have that
\[ \mathbb{E}[|Z|^{-2\alpha}] = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} |y|^{-2\alpha}\, e^{-|y|^2/2}\, dy < \infty. \]
Hence, using Fubini's theorem,
\[ \mathbb{E}[I_{2\alpha}(\mu_B)] = \iint \frac{\mathbb{E}[|Z|^{-2\alpha}]\, d\mu(t)\, d\mu(s)}{|t-s|^{\alpha}} \le \mathbb{E}[|Z|^{-2\alpha}]\, I_{\alpha}(\mu) < \infty. \]
Thus $\mathbb{E}[I_{2\alpha}(\mu_B)] < \infty$, and hence $I_{2\alpha}(\mu_B) < \infty$ almost surely. Moreover, $\mu_B$ is supported on $B(A)$ because $\mu$ is supported on $A$. It follows from Theorem 4.27 that $\dim B(A) \ge 2\alpha$ almost surely. By letting $\alpha \uparrow \dim(A) \wedge d/2$, we see that $\dim(B(A)) \ge 2\dim(A) \wedge d$ almost surely. This completes the proof of Theorem 4.37.

Remark 4.38. We have in fact shown that, if $\mathrm{Cap}_{\alpha}(A) > 0$, then $\mathrm{Cap}_{2\alpha}(B(A)) > 0$ almost surely. The converse of this statement is also true and will be discussed later, see Theorem 9.36. ⋄

Remark 4.39. Later in the book we shall be able to significantly improve McKean's theorem and show that for Brownian motion in dimension $d \ge 2$, almost surely, for any $A \subset [0,\infty)$, we have $\dim B(A) = 2\dim(A)$. This result is Kaufman's theorem, see Theorem 9.28. Note the difference between the results of McKean and Kaufman: In Theorem 4.37 the null probability set depends on $A$, while Kaufman's theorem makes a much stronger claim: it states dimension doubling simultaneously for all sets. This allows us to plug in random sets $A$, which may depend completely arbitrarily on the Brownian motion. For Kaufman's theorem, $d \ge 2$ is a necessary condition: we have seen that the zero set of one-dimensional Brownian motion has dimension $1/2$, while its image is a single point. ⋄
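Dimension doubling can also be watched numerically. The sketch below (Python with NumPy; the construction level and box sizes are arbitrary choices, and the finite-sample estimates are crude) maps a finite approximation of the ternary Cantor set through a planar Brownian path and box-counts the image, whose dimension should be close to $2 \log 2/\log 3 \approx 1.26$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Box-counting the image B(C) of the ternary Cantor set under planar
# Brownian motion; by Theorem 4.37 its dimension is 2 log2/log3 = 1.26...
level = 12
n = 3**level
times = np.array([0], dtype=np.int64)
for k in range(1, level + 1):      # integer version of the Cantor construction
    times = np.concatenate([times, times + 2 * 3**(level - k)])
B = np.cumsum(rng.standard_normal((n + 1, 2)) / np.sqrt(n), axis=0)
img = B[times]                      # 2**level sample points of B(C)
for k in (4, 5, 6, 7):
    eps = 2.0**-k
    boxes = set(map(tuple, np.floor(img / eps).astype(int)))
    print(k, round(np.log(len(boxes)) / (k * np.log(2)), 3))
```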
Exercises

Exercise 4.1 (∗). Show that for the ternary Cantor set $C$ we have $\dim_M C = \frac{\log 2}{\log 3}$.

Exercise 4.2 (∗). Let $E := \{ 1/n : n \in \mathbb{N} \} \cup \{0\}$. Then $\dim_M E = \frac{1}{2}$.

Exercise 4.3 (∗). Show that, for every bounded metric space, the Hausdorff dimension is bounded from above by the lower Minkowski dimension.

Exercise 4.4 (∗). Show that Hausdorff dimension has the countable stability property.

Exercise 4.5. Show that for the ternary Cantor set $C$ we have $\dim C = \frac{\log 2}{\log 3}$.

Exercise 4.6 (∗). If $f \colon (E_1, \rho_1) \to (E_2, \rho_2)$ is surjective and $\alpha$-Hölder continuous with constant $C$, then for any $\beta \ge 0$,
\[ \mathcal{H}^{\beta}(E_2) \le C^{\beta}\, \mathcal{H}^{\alpha\beta}(E_1), \quad \text{and therefore} \quad \dim(E_2) \le \tfrac{1}{\alpha} \dim(E_1). \]

Exercise 4.7. Suppose $f \colon [0,1] \to \mathbb{R}^d$ is an $\alpha$-Hölder continuous function. Then
(a) $\dim_M(\mathrm{Graph}_f) \le 1 + (1-\alpha)\bigl( d \wedge \frac{1}{\alpha} \bigr)$,
(b) and, for any $A \subset [0,1]$, we have $\dim_M f(A) \le \frac{\dim_M A}{\alpha}$.

Exercise 4.8 (∗). For any integer $d \ge 1$ and $0 < \alpha < d$ construct a compact set $A \subset \mathbb{R}^d$ such that $\dim A = \alpha$.

Exercise 4.9. Construct a function $f \colon [0,1] \to \mathbb{R}^d$ which is $\alpha$-Hölder continuous for every $\alpha < \beta$, but has $\mathcal{H}^{\beta}(f[0,1]) = \infty$.

Exercise 4.10. A function $f \colon [0,1] \to \mathbb{R}$ is called reverse $\beta$-Hölder for some $0 < \beta < 1$ if there exists a constant $C > 0$ such that for any interval $[t,s]$ there is a subinterval $[t_1, s_1] \subset [t,s]$ with $|f(t_1) - f(s_1)| \ge C|t-s|^{\beta}$. Let $f \colon [0,1] \to \mathbb{R}$ be reverse $\beta$-Hölder. Then $\dim_M(\mathrm{Graph}_f) \ge 2 - \beta$.

Exercise 4.11. Show that for $\{B(t) : 0 \le t \le 1\}$ we have $\dim_M \mathrm{Graph} = \frac{3}{2}$ if $d = 1$, and $\dim_M \mathrm{Graph} = \dim_M B[0,1] = 2$ if $d \ge 2$, almost surely.

Exercise 4.12. Show that $\dim_M \{ 0 \le t \le 1 : B(t) = 0 \} = \frac{1}{2}$, almost surely.

Exercise 4.13 (∗). Show that $\mathcal{H}^{1/2}(\mathrm{Zero}) = 0$, almost surely.

Notes and Comments

Felix Hausdorff introduced the Hausdorff measure in his seminal paper [Ha19]. Credit should also be given to Carathéodory [Ca14], who introduced a general construction in which Hausdorff measure can be naturally embedded. The Hausdorff measure indeed defines a measure on the Borel sets; proofs can be found in [Ma95] and [Ro99]. If $X = \mathbb{R}^d$ and $\alpha = d$ the Hausdorff measure $\mathcal{H}^{\alpha}$ is a constant multiple of Lebesgue measure $\mathcal{L}^d$; moreover, if $\alpha$ is an integer and $X$ an embedded $\alpha$-submanifold, then $\mathcal{H}^{\alpha}$ is the surface measure. This idea can also be used to develop vector analysis on sets with much less smoothness than a differentiable manifold. For more about Hausdorff dimension and geometric questions related to it we strongly recommend Mattila [Ma95]. The classic text of Rogers [Ro99], which first appeared in 1970, is a thorough discussion of Hausdorff measures. Falconer [Fa97a, Fa97b] covers a range of applications and current developments, but with more focus on deterministic fractals.

The results on the Hausdorff dimension of graph and range of a Brownian motion are due to S.J. Taylor [Ta53, Ta55] and independently to Lévy [Le51], though the latter paper does not contain full proofs. Taylor also proved in [Ta55] that the dimension of the zero set of a Brownian motion in dimension one is $1/2$. Stronger results show that, almost surely, the Hausdorff dimension of all nontrivial level sets is $1/2$. For this and much finer results see [Pe81]. A classical survey which inspired a lot of activity in the area of Hausdorff dimension and stochastic processes is [Ta86], and a modern survey is [Xi04].

The energy method and Frostman's lemma both stem from Otto Frostman's famous 1935 thesis [Fr35], which lays the foundations of modern potential theory. The elegant quantitative proof of the energy method given here is due to Oded Schramm. Frostman's lemma was generalised to complete, separable metric spaces by Howroyd [Ho95] using a functional-analytic approach. The main difficulty arising in the proof is that, if $\mathcal{H}^{\alpha}(E) = \infty$, one has to find a subset $A \subset E$ with $0 < \mathcal{H}^{\alpha}(A) < \infty$, which is tricky to do in abstract metric spaces. Frostman's original proof uses, in a way, the same idea as the proof presented here, though the transfer to the tree setup is not done explicitly. Probability using trees became fashionable in the 1990s and indeed this is the right way to look at many problems of Hausdorff dimension and fractal geometry. Survey articles are [Pe95] and [Ly96]; more information can be found in [Pe99] and [LP05].

McKean's theorem is due to H.P. McKean jr. [McK55]. Its surprising extension by Kaufman is not as hard as one might think considering the wide applicability of the result. The original source is [Ka69]; we discuss the result in depth in Chapter 9.

The concept of 'reverse Hölder' mappings only partially extends from Minkowski to Hausdorff dimension.
If $f \colon [0,1] \to \mathbb{R}$ is both $\beta$-Hölder and reverse $\beta$-Hölder for some $0 < \beta < 1$, it satisfies $\dim(\mathrm{Graph}_f) > 1$, see Przytycki and Urbański [PU89]. For example, the Weierstrass nowhere differentiable function, defined by
\[ W(t) = \sum_{n=0}^{\infty} a^n \cos(b^n t), \quad \text{for } ab > 1,\ 0 < a < 1, \]
is $\beta$-Hölder and reverse $\beta$-Hölder for some $0 < \beta < 1$. The Hausdorff dimension of its graph is, however, not rigorously known in general.

There is a natural refinement of the notions of Hausdorff dimension and Hausdorff measure, which is based on evaluating sets by applying an arbitrary 'gauge' function $\varphi$ to the diameter, rather than taking a power. Measuring sets using a gauge function not only allows much finer results; it also turns out that the natural measures on graph and range of Brownian paths, which we have encountered in this chapter, are Hausdorff measures for suitable gauge functions. Results in this direction are [CT62, Ra63a, Ta64] and we include elements of this discussion in Chapter 6, where the zero set of Brownian motion is considered.

CHAPTER 5

Brownian motion and random walk

In this chapter we discuss some aspects of the relation between random walk and Brownian motion. The first two sections aim to demonstrate the nature of this relation by examples which are of interest in their own right. These are, first, the law of the iterated logarithm, which is easier to prove for Brownian motion and can be extended to random walks by an embedding argument, and, second, a proof that Brownian motion does not have points of increase, which is based on a combinatorial argument for a class of random walks and then extended to Brownian motion. We then discuss the Skorokhod embedding problem systematically, and give a proof of the Donsker invariance principle based on the Skorokhod embedding. We give a variety of applications of Donsker's theorem, including the arcsine laws.

1. The law of the iterated logarithm

For a standard linear Brownian motion $\{B(t) : t \ge 0\}$, although at any given time $t$ and for any open set $U \subset \mathbb{R}$ the probability of the event $\{B(t) \in U\}$ is positive, over a long time Brownian motion cannot grow arbitrarily fast. We have seen in Corollary 1.11 that, for any small $\varepsilon > 0$, almost surely there exists $t_0 > 0$ such that $|B(t)| \le \varepsilon t$ for all $t \ge t_0$, whereas Proposition 1.23 ensures that for every large $k$, almost surely, there exist arbitrarily large times $t$ such that $|B(t)| \ge k\sqrt{t}$. It is therefore natural to ask for the asymptotically smallest upper envelope of the Brownian motion, i.e. for a function $\psi \colon (1,\infty) \to \mathbb{R}$ such that
\[ \limsup_{t \to \infty} \frac{B(t)}{\psi(t)} = 1. \]
The law of the iterated logarithm (whose name comes from the answer to this question but is by now firmly established for this type of upper-envelope result) provides such a 'gauge' function, which determines the almost-sure asymptotic growth of a Brownian motion. A similar problem arises for arbitrary random walks $\{S_n : n \ge 0\}$, where we ask for a sequence $(a_n : n \ge 0)$ such that
\[ \limsup_{n \to \infty} \frac{S_n}{a_n} = 1. \]
These two questions are closely related, and we start with an answer to the first one.

Theorem 5.1 (Law of the iterated logarithm for Brownian motion). Suppose $\{B(t) : t \ge 0\}$ is a standard linear Brownian motion. Then, almost surely,
\[ \limsup_{t \to \infty} \frac{B(t)}{\sqrt{2t \log\log(t)}} = 1. \]

Remark 5.2. By symmetry it follows that, almost surely,
\[ \liminf_{t \to \infty} \frac{B(t)}{\sqrt{2t \log\log(t)}} = -1. \]
Hence, for any $\varepsilon > 0$, there exists $t_0$ such that $|B(t)| \le (1+\varepsilon)\sqrt{2t \log\log(t)}$ for any $t \ge t_0$, while there exist arbitrarily large times $t$ with $|B(t)| \ge (1-\varepsilon)\sqrt{2t \log\log(t)}$. ⋄
Figure 1. The picture on the left shows the asymptotic upper envelope $\psi(t) = \sqrt{2t \log\log(t)}$ and a typical Brownian path, indicating that scales where the path comes near to the envelope are very sparse. The picture on the right shows Brownian motion at such a scale. Due to the special nature of this scale the Brownian path (which is implicitly conditioned on the event of ending up near the upper envelope) has features untypical of Brownian paths. See the 'Notes and Comments' section for more details.

Proof. The main idea is to scale by a geometric sequence. Let $\psi(t) = \sqrt{2t \log\log(t)}$. We first prove the upper bound. Fix $\varepsilon > 0$ and $q > 1$. Let
\[ A_n = \Bigl\{ \max_{0 \le t \le q^n} B(t) \ge (1+\varepsilon)\, \psi(q^n) \Bigr\}. \]
By Theorem 2.18 the maximum of Brownian motion up to a fixed time $t$ has the same distribution as $|B(t)|$. Therefore
\[ \mathbb{P}(A_n) = \mathbb{P}\Bigl\{ \frac{|B(q^n)|}{\sqrt{q^n}} \ge (1+\varepsilon)\, \frac{\psi(q^n)}{\sqrt{q^n}} \Bigr\}. \]
We can use the tail estimate $\mathbb{P}\{Z > x\} \le e^{-x^2/2}$, for a standard normally distributed $Z$ and $x > 1$, see Lemma II.3.1, to conclude that, for large $n$,
\[ \mathbb{P}(A_n) \le \exp\bigl( -(1+\varepsilon)^2 \log\log q^n \bigr) = \frac{1}{(n \log q)^{(1+\varepsilon)^2}}. \]
This is summable in $n$ and hence, by the Borel-Cantelli lemma, only finitely many of these events occur. For large $t$ write $q^{n-1} \le t < q^n$. We have
\[ \frac{B(t)}{\psi(t)} = \frac{B(t)}{\psi(q^n)} \cdot \frac{\psi(q^n)}{q^n} \cdot \frac{t}{\psi(t)} \cdot \frac{q^n}{t} \le (1+\varepsilon)\, q, \]
since $\psi(t)/t$ is decreasing in $t$. Thus
\[ \limsup_{t \to \infty} \frac{B(t)}{\psi(t)} \le (1+\varepsilon)\, q, \quad \text{almost surely}. \]
Since this holds for any $\varepsilon > 0$ and $q > 1$ we have proved that $\limsup_{t \to \infty} B(t)/\psi(t) \le 1$.

For the lower bound, fix $q > 1$. In order to use the Borel-Cantelli lemma in the other direction, we need to create a sequence of independent events. Let
\[ D_n = \bigl\{ B(q^n) - B(q^{n-1}) \ge \psi(q^n - q^{n-1}) \bigr\}. \]
We now use Lemma II.3.1 to see that there is a constant $c > 0$ such that, for large $x$,
\[ \mathbb{P}\{Z > x\} \ge \frac{c\, e^{-x^2/2}}{x}. \]
Using this estimate we get, for some further constant $\tilde{c} > 0$,
\[ \mathbb{P}(D_n) = \mathbb{P}\Bigl\{ Z \ge \frac{\psi(q^n - q^{n-1})}{\sqrt{q^n - q^{n-1}}} \Bigr\} \ge \frac{c\, e^{-\log\log(q^n - q^{n-1})}}{\sqrt{2 \log\log(q^n - q^{n-1})}} \ge \frac{c\, e^{-\log(n \log q)}}{\sqrt{2 \log(n \log q)}} > \frac{\tilde{c}}{n \log n}, \]
and therefore $\sum_n \mathbb{P}(D_n) = \infty$. Thus, for infinitely many $n$,
\[ B(q^n) \ge B(q^{n-1}) + \psi(q^n - q^{n-1}) \ge -2\psi(q^{n-1}) + \psi(q^n - q^{n-1}), \]
where the second inequality follows from applying the previously proved upper bound to $-B(q^{n-1})$. From the above we get that, for infinitely many $n$,

(1.1) \[ \frac{B(q^n)}{\psi(q^n)} \ge \frac{-2\psi(q^{n-1}) + \psi(q^n - q^{n-1})}{\psi(q^n)} \ge -\frac{2}{\sqrt{q}} + \frac{q^n - q^{n-1}}{q^n} = 1 - \frac{2}{\sqrt{q}} - \frac{1}{q}. \]

Indeed, to obtain the second inequality first note that
\[ \frac{\psi(q^{n-1})}{\psi(q^n)} = \frac{\psi(q^{n-1})}{\sqrt{q^{n-1}}} \cdot \frac{\sqrt{q^n}}{\psi(q^n)} \cdot \frac{1}{\sqrt{q}} \le \frac{1}{\sqrt{q}}, \]
since $\psi(t)/\sqrt{t}$ is increasing in $t$ for large $t$. For the second term we just use the fact that $\psi(t)/t$ is decreasing in $t$. Now (1.1) implies that
\[ \limsup_{t \to \infty} \frac{B(t)}{\psi(t)} \ge -\frac{2}{\sqrt{q}} + 1 - \frac{1}{q} \quad \text{almost surely}, \]
and letting $q \uparrow \infty$ concludes the proof of the lower bound.

Corollary 5.3. Suppose $\{B(t) : t \ge 0\}$ is a standard Brownian motion. Then, almost surely,
\[ \limsup_{h \downarrow 0} \frac{|B(h)|}{\sqrt{2h \log\log(1/h)}} = 1. \]

Proof. By Theorem 1.9 the process $\{X(t) : t \ge 0\}$ defined by $X(t) = t B(1/t)$ for $t > 0$ is a standard Brownian motion. Hence, using Theorem 5.1 together with the symmetric statement for $-X$, we get
\[ \limsup_{h \downarrow 0} \frac{|B(h)|}{\sqrt{2h \log\log(1/h)}} = \limsup_{t \to \infty} \frac{|X(t)|}{\sqrt{2t \log\log t}} = 1. \]
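The statement is about an almost-sure limsup, but one can still get a feel for it by simulation. The sketch below (Python with NumPy; the horizon is an arbitrary choice) tracks the running maximum of $S_n/\sqrt{2n\log\log n}$ for a simple random walk, anticipating Theorem 5.4 below; at any finite horizon it typically sits somewhat below 1, since the exceptional scales are very sparse (compare Figure 1).

```python
import numpy as np

rng = np.random.default_rng(4)

# Running maximum of S_n / sqrt(2 n log log n) for a simple random walk.
# The limsup is 1 almost surely, but convergence is extremely slow, so
# at finite horizons the running maximum usually stays below 1.
n_max = 10**6
S = np.cumsum(rng.choice([-1, 1], size=n_max))
n = np.arange(3, n_max + 1, dtype=np.float64)
ratio = S[2:] / np.sqrt(2.0 * n * np.log(np.log(n)))
for k in (3, 4, 5, 6):
    print(f"n <= 10^{k}:", round(ratio[: 10**k - 2].max(), 3))
```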
The law of the iterated logarithm is a result which is easier to prove for Brownian motion than for random walks, as scaling arguments can be used to good effect in the proof. We now use an ad hoc argument to obtain a law of the iterated logarithm for simple random walk, i.e. the random walk with increments taking the values $\pm 1$ with equal probability, from Theorem 5.1. A version for more general walks will follow with analogous arguments from the embedding techniques of Section 3, see Theorem 5.17.

Theorem 5.4 (Law of the iterated logarithm for simple random walk). Let $\{S_n : n \ge 0\}$ be a simple random walk. Then, almost surely,
\[ \limsup_{n \to \infty} \frac{S_n}{\sqrt{2n \log\log n}} = 1. \]

We now start the technical work to transfer the result from Brownian motion to simple random walk. The next result shows that the limsup does not change if we only look along a sufficiently dense sequence of random times. We abbreviate $\psi(t) = \sqrt{2t \log\log(t)}$.

Lemma 5.5. If $\{T_n : n \ge 1\}$ is a sequence of random times (not necessarily stopping times) satisfying $T_n \to \infty$ and $T_{n+1}/T_n \to 1$ almost surely, then
\[ \limsup_{n \to \infty} \frac{B(T_n)}{\psi(T_n)} = 1 \quad \text{almost surely}. \]
Furthermore, if $T_n/n \to a$ almost surely, then
\[ \limsup_{n \to \infty} \frac{B(T_n)}{\psi(an)} = 1 \quad \text{almost surely}. \]

Proof. The upper bound follows from the upper bound for continuous time without any conditions on $\{T_n : n \ge 1\}$. For the lower bound some restrictions are needed, which prevent us from choosing, for example, $T_0 = 0$ and $T_n = \inf\{ t > T_{n-1} + 1 : B(t) < \frac{1}{n} \}$. Our conditions $T_{n+1}/T_n \to 1$ and $T_n \to \infty$ make sure that the times are sufficiently dense to rule out this effect.

Define, for fixed $q > 4$,
\[ D_k = \bigl\{ B(q^k) - B(q^{k-1}) \ge \psi(q^k - q^{k-1}) \bigr\}, \quad \Omega_k = \Bigl\{ \min_{q^k \le t \le q^{k+1}} B(t) - B(q^k) \ge -\sqrt{q^k} \Bigr\}, \]
and $D^*_k = D_k \cap \Omega_k$. Note that $D_k$ and $\Omega_k$ are independent events. From Brownian scaling and Lemma II.3.1 it is easy to see that, for a suitable constant $c > 0$,
\[ \mathbb{P}(D_k) = \mathbb{P}\Bigl\{ B(1) \ge \frac{\psi(q^k - q^{k-1})}{\sqrt{q^k - q^{k-1}}} \Bigr\} \ge \frac{c}{k \log k}. \]
Moreover, by scaling, $\mathbb{P}(\Omega_k) =: c_q > 0$, and $c_q$ does not depend on $k$. As $\mathbb{P}(D^*_k) = c_q\, \mathbb{P}(D_k)$, the sum $\sum_k \mathbb{P}(D^*_{2k})$ is infinite. As the events $\{D^*_{2k} : k \ge 1\}$ are independent, by the Borel-Cantelli lemma, for infinitely many (even) $k$,
\[ \min_{q^k \le t \le q^{k+1}} B(t) \ge B(q^{k-1}) + \psi(q^k - q^{k-1}) - \sqrt{q^k}. \]
By Remark 5.2, for all sufficiently large $k$ we have $B(q^{k-1}) \ge -2\psi(q^{k-1})$ and, by easy asymptotics, $\psi(q^k - q^{k-1}) \ge \psi(q^k)(1 - \frac{1}{q})$. Hence, for infinitely many $k$,
\[ \min_{q^k \le t \le q^{k+1}} B(t) \ge \psi(q^k - q^{k-1}) - 2\psi(q^{k-1}) - \sqrt{q^k} \ge \psi(q^k)\Bigl( 1 - \frac{1}{q} - \frac{2}{\sqrt{q}} \Bigr) - \sqrt{q^k}, \]
with the right hand side being positive by our choice of $q$. Now define $n(k) = \min\{ n : T_n > q^k \}$. Since the ratios $T_{n+1}/T_n$ tend to 1, it follows that for any fixed $\varepsilon > 0$ we have $q^k \le T_{n(k)} < q^k (1+\varepsilon)$ for all large $k$. Thus, for infinitely many $k$,
\[ \frac{B(T_{n(k)})}{\psi(T_{n(k)})} \ge \frac{\psi(q^k)}{\psi(q^k(1+\varepsilon))} \Bigl( 1 - \frac{1}{q} - \frac{2}{\sqrt{q}} \Bigr) - \frac{\sqrt{q^k}}{\psi(q^k)}. \]
But since $\sqrt{q^k}/\psi(q^k) \to 0$ and $\psi(q^k)/\psi(q^k(1+\varepsilon)) \to 1/\sqrt{1+\varepsilon}$, we conclude that
\[ \limsup_{n \to \infty} \frac{B(T_n)}{\psi(T_n)} \ge \frac{1}{\sqrt{1+\varepsilon}} \Bigl( 1 - \frac{1}{q} - \frac{2}{\sqrt{q}} \Bigr), \]
and since the left hand side does not depend on $q$ and $\varepsilon > 0$ we can let $q \uparrow \infty$ and $\varepsilon \downarrow 0$ to arrive at the desired conclusion. For the last part, note that if $T_n/n \to a$ then $\psi(T_n)/\psi(an) \to 1$.

Figure 2. Embedding simple random walk into Brownian motion.

Proof of Theorem 5.4. To prove the law of the iterated logarithm for simple random walk, we let $T_0 = 0$ and, for $n \ge 1$,
\[ T_n = \min\{ t > T_{n-1} : |B(t) - B(T_{n-1})| = 1 \}. \]
The times $T_n$ are stopping times for Brownian motion and hence, by the strong Markov property, the waiting times $T_n - T_{n-1}$ are independent and identically distributed random variables. Obviously,
\[ \mathbb{P}\{ B(T_n) - B(T_{n-1}) = 1 \} = \mathbb{P}\{ B(T_n) - B(T_{n-1}) = -1 \} = \tfrac{1}{2}, \]
and therefore $\{B(T_n) : n \ge 0\}$ is a simple random walk. By Theorem 2.45 we have $\mathbb{E}[T_n - T_{n-1}] = 1$, and hence the law of large numbers ensures that $T_n/n$ converges almost surely to 1, and the theorem follows from Lemma 5.5.
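The embedding in this proof is easy to implement for a discretised path. The sketch below (Python with NumPy; the step size and horizon are arbitrary, and the time discretisation slightly distorts the crossing times) extracts the stopping times $T_n$ and the embedded walk $B(T_n)$:

```python
import numpy as np

rng = np.random.default_rng(5)

# Embedding a simple random walk into a discretised Brownian path:
# record the successive times at which the path has moved by +-1 from
# the previously recorded level (the stopping times T_n of the proof).
dt, horizon = 1e-4, 200.0
B = np.cumsum(np.sqrt(dt) * rng.standard_normal(int(horizon / dt)))
times, walk, level = [], [], 0.0
for i, x in enumerate(B):
    if abs(x - level) >= 1.0:
        level += np.sign(x - level)
        times.append(i * dt)
        walk.append(level)
print("walk steps:", np.unique(np.abs(np.diff(walk))))  # all equal to 1
print("mean waiting time:", np.mean(np.diff(times)))    # close to E[T_n - T_{n-1}] = 1
```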
Remark 5.6. The technique we have used to get Theorem 5.4 from Theorem 5.1 was based on finding an increasing sequence of stopping times $\{T_n : n \ge 0\}$ for the Brownian motion such that $S_n = B(T_n)$ defines a simple random walk, while we keep some control over the size of $T_n$. This 'embedding technique' will be extended substantially in Section 3. ⋄

2. Points of increase for random walk and Brownian motion

A point $t \in (0,\infty)$ is a local point of increase for the function $f : (0,\infty) \to \mathbb{R}$ if for some open interval $(a,b)$ containing $t$ we have $f(s) \le f(t)$ for all $s \in (a,t)$ and $f(t) \le f(s)$ for all $s \in (t,b)$. In this section we show that Brownian motion almost surely has no local points of increase. Our proof uses a combinatorial argument to derive a quantitative result for simple random walks, and then uses this result to study the case of Brownian motion. A crucial tool in the proof is an inequality of Harris [Ha60], which is of some independent interest.

Theorem 5.7 (Harris' inequality). Suppose that $X = (X_1, \ldots, X_d)$ is a random variable with values in $\mathbb{R}^d$ and independent coordinates. Let $f, g : \mathbb{R}^d \to \mathbb{R}$ be measurable functions which are nondecreasing in each coordinate. Then,
(2.1) $$E[f(X)g(X)] \ge E[f(X)]\,E[g(X)],$$
provided the above expectations are well-defined.

Proof. One can argue, using the monotone convergence theorem, that it suffices to prove the result when $f$ and $g$ are bounded. We assume $f$ and $g$ are bounded and proceed by induction on the dimension $d$. Suppose first that $d = 1$. Note that
$$(f(x) - f(y))(g(x) - g(y)) \ge 0 \quad \text{for all } x, y \in \mathbb{R}.$$
Therefore, for $Y$ an independent random variable with the same distribution as $X$,
$$0 \le E[(f(X) - f(Y))(g(X) - g(Y))] = 2E[f(X)g(X)] - 2E[f(X)]\,E[g(Y)],$$
and (2.1) follows easily. Now suppose (2.1) holds for $d-1$. Define $f_1(x_1) = E[f(x_1, X_2, \ldots, X_d)]$, and define $g_1$ similarly. Note that $f_1(x_1)$ and $g_1(x_1)$ are nondecreasing functions of $x_1$. Since $f$ and $g$ are bounded, we may apply Fubini's theorem to write the left hand side of (2.1) as
(2.2) $$\int_{\mathbb{R}} E[f(x_1, X_2, \ldots, X_d)\,g(x_1, X_2, \ldots, X_d)]\,d\mu_1(x_1),$$
where $\mu_1$ denotes the law of $X_1$. The expectation in the integral is at least $f_1(x_1)g_1(x_1)$ by the induction hypothesis. Thus, using the result for the $d = 1$ case, we can bound (2.2) from below by $E[f_1(X_1)]\,E[g_1(X_1)]$, which equals the right hand side of (2.1), completing the proof.

For the rest of this section, let $X_1, X_2, \ldots$ be independent random variables with
$$P\{X_i = 1\} = P\{X_i = -1\} = \tfrac12,$$
and let $S_k = \sum_{i=1}^k X_i$ be their partial sums. Denote
(2.3) $$p_n = P\{S_i \ge 0 \text{ for all } 1 \le i \le n\}.$$
Then $\{S_n \text{ is a maximum among } S_0, S_1, \ldots, S_n\}$ is precisely the event that the reversed random walk given by $S'_k = X_n + \ldots + X_{n-k+1}$ is nonnegative for all $k = 1, \ldots, n$. Hence this event also has probability $p_n$. The following lemma gives the order of magnitude of $p_n$; the proof will be given as Exercise 5.4.

Lemma 5.8. There are positive constants $C_1$ and $C_2$ such that
$$\frac{C_1}{\sqrt n} \le P\{S_i \ge 0 \text{ for all } 1 \le i \le n\} \le \frac{C_2}{\sqrt n} \quad \text{for all } n \ge 1.$$

The next lemma expresses, in terms of the $p_n$ defined in (2.3), the probability that $S_j$ stays between 0 and $S_n$ for $j$ between 0 and $n$.

Lemma 5.9. We have
$$p_n^2 \le P\{0 \le S_j \le S_n \text{ for all } 1 \le j \le n\} \le p_{\lfloor n/2\rfloor}^2.$$

Proof. The two events $A = \{0 \le S_j \text{ for all } j \le \lfloor n/2\rfloor\}$ and $B = \{S_j \le S_n \text{ for } j \ge \lfloor n/2\rfloor\}$ are independent, since $A$ depends only on $X_1, \ldots, X_{\lfloor n/2\rfloor}$ and $B$ depends only on the remaining $X_{\lfloor n/2\rfloor+1}, \ldots, X_n$. Therefore,
$$P\{0 \le S_j \le S_n\} \le P(A \cap B) = P(A)P(B) \le p_{\lfloor n/2\rfloor}^2,$$
which proves the upper bound. For the lower bound, we let $f(x_1, \ldots, x_n) = 1$ if all the partial sums $x_1 + \ldots + x_k$ for $k = 1, \ldots, n$ are nonnegative, and $f(x_1, \ldots, x_n) = 0$ otherwise. Also define $g(x_1, \ldots, x_n) = f(x_n, \ldots, x_1)$. Then $f$ and $g$ are nondecreasing in each component. By Harris' inequality, for $X = (X_1, \ldots, X_n)$,
$$E[f(X)g(X)] \ge E[f(X)]\,E[g(X)] = p_n^2.$$
Also,
$$E[f(X)g(X)] = P\{X_1 + \ldots + X_j \ge 0 \text{ and } X_{j+1} + \ldots + X_n \ge 0 \text{ for all } j\} = P\{0 \le S_j \le S_n \text{ for all } 1 \le j \le n\},$$
which proves the lower bound.
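Harris' inequality is simple to test by simulation. The sketch below (ours; the particular monotone functions and the sample size are arbitrary choices, not from the text) estimates both sides of (2.1) for a product measure in dimension three.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check of Harris' inequality (Theorem 5.7) for d = 3:
# both f and g below are nondecreasing in each coordinate.
X = rng.normal(size=(200_000, 3))            # independent coordinates
f = np.tanh(X).sum(axis=1)                   # nondecreasing in each coordinate
g = np.maximum(X, 0.0).sum(axis=1)           # likewise nondecreasing

lhs = float(np.mean(f * g))
rhs = float(np.mean(f) * np.mean(g))
print(f"E[fg] = {lhs:.4f} >= E[f]E[g] = {rhs:.4f}: {lhs >= rhs}")
```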
Definition 5.10. (a) A sequence $s_0, s_1, \ldots, s_n$ of reals has a (global) point of increase at $k \in \{0, \ldots, n\}$ if $s_i \le s_k$ for $i = 0, 1, \ldots, k-1$ and $s_k \le s_j$ for $j = k+1, \ldots, n$.
(b) A real-valued function $f$ has a global point of increase in the interval $(a,b)$ if there is a point $t \in (a,b)$ such that $f(s) \le f(t)$ for all $s \in (a,t)$ and $f(t) \le f(s)$ for all $s \in (t,b)$; $t$ is a local point of increase if it is a global point of increase in some interval. ⋄

Theorem 5.11. Let $S_0, S_1, \ldots, S_n$ be a simple random walk. Then
$$P\{S_0, \ldots, S_n \text{ has a point of increase}\} \le \frac{C}{\log n} \quad \text{for all } n > 1,$$
where $C$ does not depend on $n$.

The key to Theorem 5.11 is the following upper bound, which holds for more general random walks. It will be proved as Exercise 5.5.

Lemma 5.12. For any random walk $\{S_j : j \ge 0\}$ on the line,
(2.4) $$P\{S_0, \ldots, S_n \text{ has a point of increase}\} \le \frac{2\sum_{k=0}^n p_k p_{n-k}}{\sum_{k=0}^{\lfloor n/2\rfloor} p_k^2}.$$

Remark 5.13. Equation (2.4) is easy to interpret: the expected number of points of increase by time $\lfloor n/2\rfloor$ is the numerator in (2.4), and given that there is at least one such point, the expected number is bounded below by the denominator; hence twice the ratio of these expectations bounds the required probability. ⋄

Proof of Theorem 5.11. To bound the numerator in (2.4), we can use symmetry to deduce from Lemma 5.8 that
$$\sum_{k=0}^n p_k p_{n-k} \le 2 + 2\sum_{k=1}^{\lfloor n/2\rfloor} p_k p_{n-k} \le 2 + 2C_2^2 \sum_{k=1}^{\lfloor n/2\rfloor} k^{-1/2}(n-k)^{-1/2} \le 2 + 4C_2^2\,n^{-1/2} \sum_{k=1}^{\lfloor n/2\rfloor} k^{-1/2},$$
which is bounded above because the last sum is $O(n^{1/2})$. Since Lemma 5.8 implies that the denominator in (2.4) is at least $C_1^2 \log\lfloor n/2\rfloor$, this completes the proof.
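Theorem 5.11 can be illustrated by a rough Monte Carlo experiment. The sketch below (ours; sample sizes are arbitrary) estimates the probability that a simple random walk path has a point of increase and compares it with $1/\log n$; only the order of magnitude, not the constant, is meaningful here.

```python
import numpy as np

rng = np.random.default_rng(4)

def has_point_of_increase(s: np.ndarray) -> bool:
    """True if some k satisfies s_i <= s_k for i < k and s_k <= s_j for j > k."""
    prefix_max = np.maximum.accumulate(s)            # running maximum up to k
    suffix_min = np.minimum.accumulate(s[::-1])[::-1]  # running minimum from k on
    return bool(np.any((s >= prefix_max) & (s <= suffix_min)))

reps = 2000
for n in (100, 1000, 10000):
    walks = rng.choice([-1, 1], size=(reps, n)).cumsum(axis=1)
    walks = np.hstack([np.zeros((reps, 1)), walks])  # prepend S_0 = 0
    p = np.mean([has_point_of_increase(w) for w in walks])
    print(f"n = {n:6d}:  P(point of increase) ~ {p:.3f},  1/log n = {1/np.log(n):.3f}")
```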
We now see how we can use embedding ideas to pass from the result about simple random walks to the result about Brownian motion.

Theorem 5.14. Brownian motion almost surely has no local points of increase.

Proof. To deduce this, it suffices to apply Theorem 5.11 to a simple random walk on the integers. Indeed, it clearly suffices to show that the Brownian motion $\{B(t) : t \ge 0\}$ almost surely has no global points of increase in a fixed time interval $(a,b)$ with rational endpoints. Sampling the Brownian motion when it visits a lattice yields a simple random walk; by refining the lattice, we may make this walk as long as we wish, which will complete the proof.

More precisely, for any vertical spacing $h > 0$ define $\tau_0$ to be the first $t \ge a$ such that $B(t)$ is an integral multiple of $h$, and for $i \ge 0$ let $\tau_{i+1}$ be the minimal $t \ge \tau_i$ such that $|B(t) - B(\tau_i)| = h$. Define $N_b = \max\{k \in \mathbb{Z} : \tau_k \le b\}$. For integers $i$ satisfying $0 \le i \le N_b$, define
$$S_i = \frac{B(\tau_i) - B(\tau_0)}{h}.$$
Then $\{S_i : i = 1, \ldots, N_b\}$ is a finite portion of a simple random walk. If the Brownian motion has a (global) point of increase $t_0$ in $(a,b)$, and if $k$ is an integer such that $\tau_{k-1} \le t_0 \le \tau_k$, then this random walk has points of increase at $k-1$ and $k$. If $t_0 \in (a+\varepsilon, b-\varepsilon)$ for some $\varepsilon > 0$, such a $k$ is guaranteed to exist if $|B(a+\varepsilon) - B(a)| > h$ and $|B(b-\varepsilon) - B(b)| > h$. Therefore, for all $n$,
(2.5) $$P\{\{B(t) : t \ge 0\} \text{ has a global point of increase in } (a,b) \text{ situated in } (a+\varepsilon, b-\varepsilon)\}$$
$$\le P\{N_b \le n\} + P\{|B(a+\varepsilon) - B(a)| \le h\} + P\{|B(b-\varepsilon) - B(b)| \le h\} + \sum_{m=n+1}^\infty P\{S_0, \ldots, S_m \text{ has a point of increase and } N_b = m\}.$$
Note that $N_b \le n$ implies $|B(b) - B(a)| \le (n+1)h$, so
$$P\{N_b \le n\} \le P\{|B(b) - B(a)| \le (n+1)h\} = P\Big\{|Z| \le \frac{(n+1)h}{\sqrt{b-a}}\Big\},$$
where $Z$ has a standard normal distribution. Since $S_0, \ldots, S_m$, conditioned on $N_b = m$, is a finite portion of a simple random walk, it follows from Theorem 5.11 that for some constant $C$ we have
$$\sum_{m=n+1}^\infty P\{S_0, \ldots, S_m \text{ has a point of increase and } N_b = m\} \le \sum_{m=n+1}^\infty P\{N_b = m\}\,\frac{C}{\log m} \le \frac{C}{\log(n+1)}.$$
Thus the probability in (2.5) can be made arbitrarily small by first taking $n$ large and then picking $h > 0$ sufficiently small. Finally, let $\varepsilon \downarrow 0$ to complete the proof.

3. The Skorokhod embedding problem

In the proof of Theorem 5.4 we made use of the fact that there exists a stopping time $T$ for linear Brownian motion with the property that $E[T] < \infty$ and the law of $B(T)$ is the uniform distribution on $\{-1, 1\}$. To use the same method for random walks $\{S_n : n \in \mathbb{N}\}$ with general increments, it would be necessary to find, for a given random variable $X$ representing an increment, a stopping time $T$ with $E[T] < \infty$ such that $B(T)$ has the law of $X$. This problem is called the Skorokhod embedding problem. By Wald's lemmas, Theorem 2.40 and Theorem 2.44, for any integrable stopping time $T$ we have
$$E[B(T)] = 0 \quad \text{and} \quad E[B(T)^2] = E[T] < \infty,$$
so that the Skorokhod embedding problem can only be solved for random variables $X$ with mean zero and finite second moment. However, these are the only restrictions, as the following result shows.

Theorem 5.15 (Skorokhod embedding theorem). Suppose that $\{B(t) : t \ge 0\}$ is a standard Brownian motion and that $X$ is a real-valued random variable with $E[X] = 0$ and $E[X^2] < \infty$. Then there exists a stopping time $T$, with respect to the natural filtration $(\mathcal{F}(t) : t \ge 0)$ of the Brownian motion, such that $B(T)$ has the law of $X$ and $E[T] = E[X^2]$.

Example 5.16. Assume that $X$ may take two values $a < b$. In order that $E[X] = 0$ we must have $a < 0 < b$ and $P\{X = a\} = b/(b-a)$, $P\{X = b\} = -a/(b-a)$. We have seen in Theorem 2.45 that, for the stopping time $T = \inf\{t : B(t) \notin (a,b)\}$, the random variable $B(T)$ has the same distribution as $X$, and that $E[T] = -ab$ is finite. ⋄

Note that the Skorokhod embedding theorem allows us to use the arguments developed for the proof of the law of the iterated logarithm for simple random walks, Theorem 5.4, and obtain a much more general result.

Theorem 5.17 (Hartman--Wintner law of the iterated logarithm). Let $\{S_n : n \in \mathbb{N}\}$ be a random walk with increments $S_n - S_{n-1}$ of zero mean and finite variance $\sigma^2$. Then
$$\limsup_{n\to\infty} \frac{S_n}{\sqrt{2\sigma^2 n\log\log n}} = 1.$$

We now present two proofs of the Skorokhod embedding theorem, which represent different constructions of the required stopping times. Both approaches, the Dubins embedding and the Azéma--Yor embedding, are very elegant and have their own merits.
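Example 5.16 is easy to test numerically. The sketch below (ours; the step size, the values $a = -1$, $b = 2$ and the path count are arbitrary) discretizes the Brownian motion and stops at the first exit from $(a,b)$; one expects $P\{B(T) = 2\} \approx 1/3$ and $E[T] \approx 2$, up to discretization error.

```python
import numpy as np

rng = np.random.default_rng(5)

def two_point_embedding(a=-1.0, b=2.0, dt=1e-3, n_paths=2000):
    """Stop a discretized Brownian path when it first leaves (a, b);
    B(T) should take the values a, b with the right weights and E[T] ~ -a*b."""
    hits, exit_times = [], []
    for _ in range(n_paths):
        x, t = 0.0, 0.0
        while a < x < b:
            x += rng.normal(0.0, np.sqrt(dt))
            t += dt
        hits.append(b if x >= b else a)
        exit_times.append(t)
    return np.array(hits), np.array(exit_times)

hits, times = two_point_embedding()
print("P{B(T)=b} ~", round(float(np.mean(hits > 0)), 3), " theory:", round(1/3, 3))
print("E[T] ~", round(float(np.mean(times)), 3), " theory (-ab):", 2.0)
```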
3.1. The Dubins embedding theorem. The first construction, due to Dubins [Du68], is particularly simple and based on the notion of binary splitting martingales. We say that a martingale $\{X_n : n \in \mathbb{N}\}$ is binary splitting if, whenever for some $x_0, \ldots, x_n \in \mathbb{R}$ the event
$$A(x_0, \ldots, x_n) := \{X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n\}$$
has positive probability, the random variable $X_{n+1}$ conditioned on $A(x_0, \ldots, x_n)$ is supported on at most two values.

Lemma 5.18. Let $X$ be a random variable with $E[X^2] < \infty$. Then there is a binary splitting martingale $\{X_n : n \in \mathbb{N}\}$ such that $X_n \to X$ almost surely and in $L^2$.

Proof. We define the martingale $\{X_n : n \in \mathbb{N}\}$ and the associated filtration $(\mathcal{G}_n : n \in \mathbb{N})$ recursively. Let $\mathcal{G}_0$ be the trivial $\sigma$-algebra and $X_0 = EX$. Define the random variable $\xi_0$ by
$$\xi_0 = \begin{cases} 1 & \text{if } X \ge X_0, \\ -1 & \text{if } X < X_0. \end{cases}$$
For any $n > 0$, let $\mathcal{G}_n = \sigma\{\xi_0, \ldots, \xi_{n-1}\}$ and $X_n = E[X \mid \mathcal{G}_n]$. Also define the random variable $\xi_n$ by
$$\xi_n = \begin{cases} 1 & \text{if } X \ge X_n, \\ -1 & \text{if } X < X_n. \end{cases}$$

Figure 3. Dubins' embedding for the uniform distribution on $\{-4, -2, 0, 2, 4\}$: first go until you hit $\{-3, 3\}$; in this picture you hit $-3$. Given that, continue until you hit either $-2$ or $-4$; in this picture you hit $-2$. Hence $B(T) = -2$ for this sample.

Note that $\mathcal{G}_n$ is generated by a partition $\mathcal{P}_n$ into $2^n$ sets, each of which has the form $A(x_0, \ldots, x_n)$. As each element of $\mathcal{P}_n$ is a union of two elements of $\mathcal{P}_{n+1}$, the martingale $\{X_n : n \in \mathbb{N}\}$ is binary splitting. Also we have, for example as in Appendix II.(3.1), that
$$E[X^2] = E[(X - X_n)^2] + E[X_n^2] \ge E[X_n^2].$$
Hence $\{X_n : n \in \mathbb{N}\}$ is bounded in $L^2$ and, from the convergence theorem for $L^2$-bounded martingales and Lévy's upward theorem, see Theorems II.4.12 and II.4.9, we get
$$X_n \to X_\infty := E[X \mid \mathcal{G}_\infty] \quad \text{almost surely and in } L^2,$$
where $\mathcal{G}_\infty = \sigma\big(\bigcup_{i=0}^\infty \mathcal{G}_i\big)$. To conclude the proof we have to show that $X = X_\infty$ almost surely. We claim that, almost surely,
(3.1) $$\lim_{n\uparrow\infty} \xi_n (X - X_{n+1}) = |X - X_\infty|.$$
Indeed, if $X(\omega) = X_\infty(\omega)$ this is trivial. If $X(\omega) < X_\infty(\omega)$ then for some large enough $N$ we have $X(\omega) < X_n(\omega)$ for any $n > N$, hence $\xi_n = -1$ and (3.1) holds. Similarly, if $X(\omega) > X_\infty(\omega)$ then $\xi_n = 1$ for $n > N$ and so (3.1) holds. Using that $\xi_n$ is $\mathcal{G}_{n+1}$-measurable, we find that
$$E[\xi_n (X - X_{n+1})] = E[\xi_n\,E[X - X_{n+1} \mid \mathcal{G}_{n+1}]] = 0.$$
Recall that if $Y_n \to Y$ almost surely and $\{Y_n : n = 0, 1, \ldots\}$ is $L^2$-bounded, then $EY_n \to EY$ (see, for example, the discussion of uniform integrability in Appendix II.3). Hence, as the left hand side of (3.1) is $L^2$-bounded, we conclude that $E|X - X_\infty| = 0$.

Proof of Theorem 5.15. From Lemma 5.18 we take a binary splitting martingale $\{X_n : n \in \mathbb{N}\}$ such that $X_n \to X$ almost surely and in $L^2$. Recall from Example 5.16 that if $X$ is supported on a set of two elements $\{-a, b\}$ for some $a, b > 0$, then $T = \inf\{t : B(t) \in \{-a, b\}\}$ is the required stopping time. Hence, as $X_n$ conditioned on $A(x_0, \ldots, x_{n-1})$ is supported on at most two values, it is clear that we can find a sequence of stopping times $T_0 \le T_1 \le \ldots$ such that $B(T_n)$ is distributed as $X_n$ and $ET_n = E[X_n^2]$. As $T_n$ is an increasing sequence, we have $T_n \uparrow T$ almost surely for some stopping time $T$. Also, by the monotone convergence theorem,
$$ET = \lim_{n\uparrow\infty} ET_n = \lim_{n\uparrow\infty} E[X_n^2] = E[X^2].$$
As $B(T_n)$ converges in distribution to $X$ by our construction, and converges almost surely to $B(T)$ by continuity of the Brownian sample paths, we conclude that $B(T)$ is distributed as $X$.

3.2. The Azéma--Yor embedding theorem. In this section we discuss a second solution to the Skorokhod embedding problem with a more explicit construction of the stopping times.

Theorem 5.19 (Azéma--Yor embedding theorem). Suppose that $X$ is a real-valued random variable with $E[X] = 0$ and $E[X^2] < \infty$. Let
$$\Psi(x) = E[X \mid X \ge x] \quad \text{if } P\{X \ge x\} > 0, \qquad \Psi(x) = 0 \text{ otherwise}.$$
For a Brownian motion $\{B(t) : t \ge 0\}$ let $\{M(t) : t \ge 0\}$ be the maximum process and define a stopping time $\tau$ by
$$\tau = \inf\{t \ge 0 : M(t) \ge \Psi(B(t))\}.$$
Then $E[\tau] = E[X^2]$ and $B(\tau)$ has the same law as $X$.

Figure 4. The Azéma--Yor embedding: the path is stopped when the Brownian motion hits the level $\Psi^{-1}(M(t))$, where $\Psi^{-1}(x) = \sup\{b : \Psi(b) \le x\}$.
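To see the stopping rule of Theorem 5.19 in action, here is a rough Python sketch (ours; the target law, step size and tolerance are illustrative assumptions) that runs a discretized Brownian path until the running maximum exceeds $\Psi$ of the current position, for the uniform law on $\{-2, -1, 0, 1, 2\}$ that also appears in Figure 5 below. The empirical law of $B(\tau)$ should be close to uniform; the loop is slow in pure Python, so the sample size is kept small.

```python
import numpy as np

rng = np.random.default_rng(6)

# Target law: uniform on {-2, -1, 0, 1, 2} (mean zero, variance 2).
support = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def psi(x):
    """Psi(x) = E[X | X >= x] for the discrete target law (0 if tail is empty)."""
    tail = support[support >= x]
    return tail.mean() if tail.size else 0.0

def azema_yor_sample(dt=1e-3):
    """Run a discretized Brownian path until M(t) >= Psi(B(t))."""
    b = m = 0.0
    while m < psi(b) - 1e-12:     # small tolerance guards float comparisons
        b += rng.normal(0.0, np.sqrt(dt))
        m = max(m, b)
    return b

samples = np.array([azema_yor_sample() for _ in range(1000)])
vals, counts = np.unique(np.round(samples), return_counts=True)
print(dict(zip(vals.tolist(), (counts / counts.sum()).round(3).tolist())))
```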
We proceed in three steps. In the first step we formulate an embedding for random variables taking only finitely many values.

Lemma 5.20. Suppose the random variable $X$ takes only finitely many values $x_1 < x_2 < \cdots < x_n$. Define $y_1 < y_2 < \cdots < y_{n-1}$ by $y_i = \Psi(x_{i+1})$, and define stopping times $T_0 = 0$ and
$$T_i = \inf\{t \ge T_{i-1} : B(t) \notin (x_i, y_i)\} \quad \text{for } i \le n-1.$$
Then $T = T_{n-1}$ satisfies $E[T] = E[X^2]$, and $B(T)$ has the same law as $X$.

Figure 5. The Azéma--Yor embedding for the uniform distribution on the set $\{-2, -1, 0, 1, 2\}$. The drawn path samples the value $B(T) = 0$ with $T = T_4$.

Proof. First observe that $y_i \ge x_{i+1}$, and equality holds if and only if $i = n-1$. We have $E[T_{n-1}] < \infty$ by Theorem 2.45, and $E[T_{n-1}] = E[B(T_{n-1})^2]$ from Theorem 2.44. For $i = 1, \ldots, n-1$ define random variables
$$Y_i = \begin{cases} E[X \mid X \ge x_{i+1}] & \text{if } X \ge x_{i+1}, \\ X & \text{if } X \le x_i. \end{cases}$$
Note that $Y_1$ has expectation zero and takes on the two values $x_1, y_1$. For $i \ge 2$, given $Y_{i-1} = y_{i-1}$, the random variable $Y_i$ takes the values $x_i, y_i$ and has expectation $y_{i-1}$. Given $Y_{i-1} = x_j$, $j \le i-1$, we have $Y_i = x_j$. Note that $Y_{n-1} = X$. We now argue that
$$(B(T_1), \ldots, B(T_{n-1})) \stackrel{d}{=} (Y_1, \ldots, Y_{n-1}).$$
Clearly, $B(T_1)$ can take only the values $x_1, y_1$ and has expectation zero, hence the law of $B(T_1)$ agrees with the law of $Y_1$. For $i \ge 2$, given $B(T_{i-1}) = y_{i-1}$, the random variable $B(T_i)$ takes the values $x_i, y_i$ and has expectation $y_{i-1}$. Given $B(T_{i-1}) = x_j$ where $j \le i-1$, we have $B(T_i) = x_j$. Hence the two tuples have the same law and, in particular, $B(T_{n-1})$ has the same law as $X$.

In the second step, we show that the stopping time we have constructed in Lemma 5.20 agrees with the stopping time $\tau$ in the Azéma--Yor embedding.

Lemma 5.21. The stopping time $T$ constructed in Lemma 5.20 and the stopping time $\tau$ in Theorem 5.19 are equal.

Proof. Suppose that $B(T_{n-1}) = x_i$, and hence $\Psi(B(T_{n-1})) = y_{i-1}$. If $i \le n-1$, then $i$ is minimal with the property that $B(T_i) = \cdots = B(T_{n-1})$, and thus $B(T_{i-1}) \ne B(T_i)$. Hence $M(T_{n-1}) \ge y_{i-1}$. If $i = n$ we also have $M(T_{n-1}) = x_n \ge y_{i-1}$, which implies in any case that $\tau \le T$. Conversely, if $T_{i-1} \le t < T_i$ then $B(t) \in (x_i, y_i)$, and this implies $M(t) < y_i \le \Psi(B(t))$. Hence $\tau \ge T$, and altogether we have seen that $T = \tau$.

This completes the proof of Theorem 5.19 for random variables taking finitely many values. The general case follows from a limiting procedure, which is left as Exercise 5.9.

4. The Donsker invariance principle

Let $\{X_n : n \ge 0\}$ be a sequence of independent and identically distributed random variables, and assume that they are normalized so that $E[X_n] = 0$ and $\operatorname{Var}(X_n) = 1$. This assumption is no loss of generality for $X_n$ with finite variance, since we can always consider the normalization
$$\frac{X_n - E[X_n]}{\sqrt{\operatorname{Var}(X_n)}}.$$
We look at the random walk generated by the sequence,
$$S_n = \sum_{k=1}^n X_k,$$
and interpolate linearly between the integer points, i.e.
$$S(t) = S_{\lfloor t\rfloor} + (t - \lfloor t\rfloor)(S_{\lfloor t\rfloor+1} - S_{\lfloor t\rfloor}).$$
This defines a random function $S \in C[0,\infty)$. We now define a sequence $\{S^*_n : n \ge 1\}$ of random functions in $C[0,1]$ by
$$S^*_n(t) = \frac{S(nt)}{\sqrt n} \quad \text{for all } t \in [0,1].$$

Theorem 5.22 (Donsker's invariance principle). On the space $C[0,1]$ of continuous functions on the unit interval with the metric induced by the sup-norm, the sequence $\{S^*_n : n \ge 1\}$ converges in distribution to a standard Brownian motion $\{B(t) : t \in [0,1]\}$.
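The 'invariance' in Theorem 5.22 is easy to see numerically: functionals of the rescaled walk have approximately the same law whatever the increment distribution. The sketch below (ours; it assumes the scipy library for the Gaussian tail, and all sample sizes are arbitrary) compares the tail of the maximum for coin-flip and uniform increments with the Brownian limit made precise in Theorem 5.25 below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def tail_of_max(increments, n=2000, reps=5000, x=1.0):
    """Empirical P{ max_k S_k / sqrt(n) >= x } for a given increment sampler."""
    m = increments(size=(reps, n)).cumsum(axis=1).max(axis=1)
    return float(np.mean(m / np.sqrt(n) >= x))

coin = lambda size: rng.choice([-1.0, 1.0], size=size)
unif = lambda size: rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=size)  # variance 1

print("coin flips:", round(tail_of_max(coin), 3))
print("uniform   :", round(tail_of_max(unif), 3))
print("Brownian  :", round(float(2 * stats.norm.sf(1.0)), 3))  # 2 P{Z >= 1}
```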
Remark 5.23. Donsker's invariance principle is also called the functional central limit theorem. The name invariance principle comes from the fact that the limit in Theorem 5.22 does not depend on the choice of the exact distribution of the normalized random variables $X_n$. ⋄

The idea of the proof is to construct the random variables $X_1, X_2, X_3, \ldots$ on the same probability space as the Brownian motion in such a way that $\{S^*_n : n \ge 1\}$ is with high probability close to a scaling of this Brownian motion.

Lemma 5.24. Suppose that $\{B(t) : t \ge 0\}$ is a linear Brownian motion. Then, for any random variable $X$ with mean zero and variance one, there exists a sequence of stopping times
$$0 = T_0 \le T_1 \le T_2 \le T_3 \le \ldots$$
with respect to the Brownian motion, such that
(a) the sequence $\{B(T_n) : n \ge 0\}$ has the distribution of the random walk with increments given by the law of $X$;
(b) the sequence of functions $\{S^*_n : n \ge 0\}$ constructed from this random walk satisfies
$$\lim_{n\to\infty} P\Big\{ \sup_{0 \le t \le 1} \Big|\frac{B(nt)}{\sqrt n} - S^*_n(t)\Big| > \varepsilon \Big\} = 0.$$

Proof. Using Skorokhod embedding, we define $T_1$ to be a stopping time with $E[T_1] = 1$ such that $B(T_1) = X$ in distribution. By the strong Markov property,
$$\{B_2(t) : t \ge 0\} = \{B(T_1 + t) - B(T_1) : t \ge 0\}$$
is a Brownian motion, independent of $\mathcal{F}^+(T_1)$ and, in particular, of $(T_1, B(T_1))$. Hence we can define a stopping time $T'_2$ for the Brownian motion $\{B_2(t) : t \ge 0\}$ with $E[T'_2] = 1$ such that $B_2(T'_2) = X$ in distribution. Then $T_2 = T_1 + T'_2$ is a stopping time for the original Brownian motion with $E[T_2] = 2$, such that $B(T_2)$ is the second value in a random walk with increments given by the law of $X$. We can proceed inductively to get a sequence $0 = T_0 \le T_1 \le T_2 \le T_3 \le \ldots$ such that $S_n = B(T_n)$ is the embedded random walk, and $E[T_n] = n$.

Abbreviate $W_n(t) = B(nt)/\sqrt n$ and let $A_n$ be the event that there exists $t \in [0,1)$ such that $|S^*_n(t) - W_n(t)| > \varepsilon$. We have to show that $P(A_n) \to 0$. Let $k = k(t)$ be the unique integer with $(k-1)/n \le t < k/n$. Because $S^*_n$ is linear on such an interval, we have
$$A_n \subset \{\exists t \in [0,1) : |S_k/\sqrt n - W_n(t)| > \varepsilon\} \cup \{\exists t \in [0,1) : |S_{k-1}/\sqrt n - W_n(t)| > \varepsilon\}.$$
As $S_k = B(T_k) = \sqrt n\,W_n(T_k/n)$, we obtain
$$A_n \subset A^*_n := \{\exists t \in [0,1) : |W_n(T_k/n) - W_n(t)| > \varepsilon\} \cup \{\exists t \in [0,1) : |W_n(T_{k-1}/n) - W_n(t)| > \varepsilon\}.$$
For given $0 < \delta < 1$ the event $A^*_n$ implies that either
(4.1) $$\{\exists s, t \in [0,2] \text{ such that } |s-t| < \delta,\ |W_n(s) - W_n(t)| > \varepsilon\}$$
or
(4.2) $$\{\exists t \in [0,1) \text{ such that } |T_k/n - t| \vee |T_{k-1}/n - t| \ge \delta\}.$$
Note that the probability of (4.1) does not depend on $n$. Choosing $\delta > 0$ small, we can make this probability as small as we wish, since Brownian motion is uniformly continuous on $[0,2]$. It remains to show that, for arbitrary fixed $\delta > 0$, the probability of (4.2) converges to zero as $n \to \infty$. To prove this we use that
$$\lim_{n\to\infty} \frac{T_n}{n} = \lim_{n\to\infty} \frac1n \sum_{k=1}^n (T_k - T_{k-1}) = 1 \quad \text{almost surely}.$$
This is Kolmogorov's law of large numbers for the sequence $\{T_k - T_{k-1}\}$ of independent, identically distributed random variables with mean 1. Observe that for every sequence $\{a_n\}$ of reals one has
$$\lim_{n\to\infty} \frac{a_n}{n} = 1 \implies \lim_{n\to\infty} \sup_{0 \le k \le n} \frac{|a_k - k|}{n} = 0.$$
This is a matter of plain (deterministic) arithmetic and easily checked. Hence we have
(4.3) $$\lim_{n\to\infty} P\Big\{ \sup_{0 \le k \le n} \frac{|T_k - k|}{n} \ge \delta \Big\} = 0.$$
Now recall that $t \in [(k-1)/n, k/n)$ and let $n > 2/\delta$.
Then
$$P\{\exists t \in [0,1] : |T_k/n - t| \vee |T_{k-1}/n - t| \ge \delta\} \le P\Big\{ \sup_{1 \le k \le n} \frac{(T_k - (k-1)) \vee (k - T_{k-1})}{n} \ge \delta \Big\} \le P\Big\{ \sup_{1 \le k \le n} \frac{T_k - k}{n} \ge \frac\delta2 \Big\} + P\Big\{ \sup_{1 \le k \le n} \frac{(k-1) - T_{k-1}}{n} \ge \frac\delta2 \Big\},$$
and by (4.3) both summands converge to 0.

Proof of the Donsker invariance principle. Choose the sequence of stopping times as in Lemma 5.24, and recall from the scaling property of Brownian motion that the random functions $\{W_n(t) : 0 \le t \le 1\}$ given by $W_n(t) = B(nt)/\sqrt n$ are standard Brownian motions. Suppose that $K \subset C[0,1]$ is closed and define
$$K[\varepsilon] = \{f \in C[0,1] : \|f - g\|_{\sup} \le \varepsilon \text{ for some } g \in K\}.$$
Then
$$P\{S^*_n \in K\} \le P\{W_n \in K[\varepsilon]\} + P\{\|S^*_n - W_n\|_{\sup} > \varepsilon\}.$$
As $n \to \infty$, the second term goes to 0, whereas the first term does not depend on $n$ and is equal to $P\{B \in K[\varepsilon]\}$ for a Brownian motion $B$. As $K$ is closed we have
$$\lim_{\varepsilon\downarrow 0} P\{B \in K[\varepsilon]\} = P\Big\{ B \in \bigcap_{\varepsilon > 0} K[\varepsilon] \Big\} = P\{B \in K\}.$$
Putting these facts together, we obtain $\limsup_{n\to\infty} P\{S^*_n \in K\} \le P\{B \in K\}$, which is condition (ii) in the Portmanteau theorem, Theorem II.1.6. Hence Donsker's invariance principle is proved.

Below and in the following section we harvest a range of results for random walks, which we can transfer from Brownian motion by means of Donsker's invariance principle. Readers unfamiliar with the nature of convergence in distribution are recommended to look at the appendix, Chapter II.1.

Theorem 5.25. Suppose that $\{X_k : k \ge 1\}$ is a sequence of independent, identically distributed random variables with $E[X_1] = 0$ and $0 < E[X_1^2] = \sigma^2 < \infty$. Let $\{S_n : n \ge 0\}$ be the associated random walk and $M_n = \max\{S_k : 0 \le k \le n\}$ its maximal value up to time $n$. Then, for all $x \ge 0$,
$$\lim_{n\to\infty} P\{M_n \ge x\sqrt n\} = \frac{2}{\sqrt{2\pi\sigma^2}} \int_x^\infty e^{-y^2/2\sigma^2}\,dy.$$

Proof. By scaling we can assume that $\sigma^2 = 1$. Suppose now that $g : \mathbb{R} \to \mathbb{R}$ is a continuous bounded function. Define a function $G : C[0,1] \to \mathbb{R}$ by
$$G(f) = g\Big( \max_{x \in [0,1]} f(x) \Big),$$
and note that $G$ is continuous and bounded. Then, by definition,
$$E[G(S^*_n)] = E\Big[ g\Big( \max_{0 \le t \le 1} \frac{S(tn)}{\sqrt n} \Big) \Big] = E\Big[ g\Big( \frac{\max_{0 \le k \le n} S_k}{\sqrt n} \Big) \Big],$$
and
$$E[G(B)] = E\Big[ g\Big( \max_{0 \le t \le 1} B(t) \Big) \Big].$$
Hence, by Donsker's invariance principle,
$$\lim_{n\to\infty} E\Big[ g\Big( \frac{M_n}{\sqrt n} \Big) \Big] = E\Big[ g\Big( \max_{0 \le t \le 1} B(t) \Big) \Big].$$
From the Portmanteau theorem, Theorem II.1.6, and the reflection principle, Theorem 2.18, we infer
$$\lim_{n\to\infty} P\{M_n \ge x\sqrt n\} = P\Big\{ \max_{0 \le t \le 1} B(t) \ge x \Big\} = 2\,P\{B(1) \ge x\},$$
and the latter probability is the given integral.

5. The arcsine laws

We now discuss the two famous arcsine laws for Brownian motion, and also for random walks. Their name comes from the arcsine distribution, which is the distribution on $(0,1)$ with density
$$\frac{1}{\pi\sqrt{x(1-x)}} \quad \text{for } x \in (0,1).$$
The cumulative distribution function of an arcsine-distributed random variable $X$ is therefore given by
$$P\{X \le x\} = \frac{2}{\pi}\arcsin(\sqrt x) \quad \text{for } x \in (0,1).$$
The first arcsine law describes the law of the last passage over level zero by a Brownian motion or random walk running for finite time. In the case of a Brownian motion we shall find this law by a smart calculation, and then Donsker's invariance principle will allow us to transfer the result to random walks. Observe that the following result is surprising: the rightmost zero of Brownian motion in the interval $(0,1)$ is most likely to be near zero or one, see Figure 6.

Figure 6. The density of the arcsine distribution is concentrated near the boundary values 0 and 1.
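Both parts of the first arcsine law, stated as Theorem 5.26 below, are easy to check by simulation; by Donsker's invariance principle a long simple random walk serves as a stand-in for the Brownian path. The sketch (ours; sample sizes are arbitrary) compares the empirical law of the location of the maximum with the arcsine distribution function.

```python
import numpy as np

rng = np.random.default_rng(8)

n, reps = 2000, 5000
walks = rng.choice([-1, 1], size=(reps, n)).cumsum(axis=1)
arg_max = walks.argmax(axis=1) / n     # location of the (leftmost) maximum

for x in (0.1, 0.25, 0.5):
    emp = float(np.mean(arg_max <= x))
    cdf = 2 / np.pi * np.arcsin(np.sqrt(x))
    print(f"P(argmax <= {x}) ~ {emp:.3f};  arcsine CDF: {cdf:.3f}")
```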
Theorem 5.26 (First arcsine law for Brownian motion). Let $\{B(t) : t \ge 0\}$ be a standard linear Brownian motion. Then
(a) the random variable $L = \sup\{t \in [0,1] : B(t) = 0\}$, the last zero of Brownian motion in $[0,1]$, is arcsine distributed, and
(b) the random variable $M \in [0,1]$, which is uniquely determined by $B(M) = \max_{s \in [0,1]} B(s)$, is arcsine distributed.

Proof. The discussion of local extrema in Chapter 2.1 has shown that there is indeed a unique maximum, and hence $M$ is well-defined. Moreover, Theorem 2.31 shows that $M$, which is the last zero of the process $\{M(t) - B(t) : t \ge 0\}$, has the same law as $L$. Hence it suffices to prove part (b). Recall that $\{M(t) : 0 \le t \le 1\}$ is defined by $M(t) = \max_{0 \le s \le t} B(s)$. For $s \in [0,1]$,
$$P\{M \le s\} = P\Big\{ \max_{0 \le u \le s} B(u) > \max_{s \le v \le 1} B(v) \Big\} = P\Big\{ \max_{0 \le u \le s} B(u) - B(s) > \max_{s \le v \le 1} B(v) - B(s) \Big\} = P\{M_1(s) > M_2(1-s)\},$$
where $\{M_1(t) : 0 \le t \le 1\}$ is the maximum process of the Brownian motion $\{B_1(t)\}$ given by $B_1(t) = B(s-t) - B(s)$, and $\{M_2(t) : 0 \le t \le 1\}$ is the maximum process of the independent Brownian motion $\{B_2(t)\}$ given by $B_2(t) = B(s+t) - B(s)$. Since, by Theorem 2.18, for any fixed $t$ the random variable $M(t)$ has the same law as $|B(t)|$, we have
$$P\{M_1(s) > M_2(1-s)\} = P\{|B_1(s)| > |B_2(1-s)|\}.$$
Using the scaling invariance of Brownian motion we can express this in terms of a pair of independent standard normal random variables $Z_1$ and $Z_2$, by
$$P\{|B_1(s)| > |B_2(1-s)|\} = P\{\sqrt s\,|Z_1| > \sqrt{1-s}\,|Z_2|\} = P\Big\{ \frac{|Z_2|}{\sqrt{Z_1^2 + Z_2^2}} < \sqrt s \Big\}.$$
In polar coordinates, $(Z_1, Z_2) = (R\cos\theta, R\sin\theta)$ pointwise. The fact that the random variable $\theta$ is uniformly distributed on $[0, 2\pi]$ follows from Lemma II.3.3 in the appendix. So the last quantity becomes
$$P\Big\{ \frac{|Z_2|}{\sqrt{Z_1^2 + Z_2^2}} < \sqrt s \Big\} = P\{|\sin\theta| < \sqrt s\} = 4\,P\{\theta < \arcsin(\sqrt s)\} = 4\,\frac{\arcsin(\sqrt s)}{2\pi} = \frac{2}{\pi}\arcsin(\sqrt s).$$
It follows by differentiating that $M$ has density $(\pi\sqrt{s(1-s)})^{-1}$ for $s \in (0,1)$.

For random walks the first arcsine law takes the form of a limit theorem, as the length of the walk tends to infinity.

Proposition 5.27 (Arcsine law for the last sign-change). Suppose that $\{X_k : k \ge 1\}$ is a sequence of independent, identically distributed random variables with $E[X_1] = 0$ and $0 < E[X_1^2] = \sigma^2 < \infty$. Let $\{S_n : n \ge 0\}$ be the associated random walk and
$$N_n = \max\{1 \le k \le n : S_k S_{k-1} \le 0\}$$
the last time the random walk changes its sign before time $n$. Then, for all $x \in (0,1)$,
$$\lim_{n\to\infty} P\{N_n \le xn\} = \frac{2}{\pi}\arcsin(\sqrt x).$$

Proof. The strategy of proof is to use Theorem 5.26 and apply Donsker's invariance principle to extend the result to random walks. As $N_n$ is unchanged under scaling of the random walk, we may assume that $\sigma^2 = 1$. Define a bounded function $g$ on $C[0,1]$ by $g(f) = \max\{t \le 1 : f(t) = 0\}$. It is clear that $g(S^*_n)$ differs from $N_n/n$ by a term which is bounded by $1/n$ and therefore vanishes asymptotically. Hence Donsker's invariance principle would imply convergence of $N_n/n$ in distribution to $g(B) = \sup\{t \le 1 : B(t) = 0\}$, provided $g$ were continuous. $g$ is not continuous, but we show that $g$ is continuous on the set $C$ of all $f \in C[0,1]$ such that $f$ takes positive and negative values in every neighbourhood of every zero and $f(1) \ne 0$. As, by Theorem 2.25, Brownian motion is almost surely in $C$, we get from property (v) in the Portmanteau theorem, Theorem II.1.6, and by Donsker's invariance principle, that, for every continuous bounded $h : \mathbb{R} \to \mathbb{R}$,
$$\lim_{n\to\infty} E\Big[ h\Big( \frac{N_n}{n} \Big) \Big] = \lim_{n\to\infty} E[h \circ g(S^*_n)] = E[h \circ g(B)] = E[h(\sup\{t \le 1 : B(t) = 0\})],$$
which completes the proof subject to the claim. To see that $g$ is continuous on $C$, let $\varepsilon > 0$ be given and $f \in C$.
Let
$$\delta_0 = \min_{t \in [g(f)+\varepsilon,\,1]} |f(t)|,$$
and choose $\delta_1$ such that $(-\delta_1, \delta_1) \subset f\big((g(f)-\varepsilon, g(f)+\varepsilon)\big)$. Let $0 < \delta < \delta_0 \wedge \delta_1$. If now $\|h - f\|_\infty < \delta$, then $h$ has no zero in $(g(f)+\varepsilon, 1]$, but has a zero in $(g(f)-\varepsilon, g(f)+\varepsilon)$, because there are $s, t \in (g(f)-\varepsilon, g(f)+\varepsilon)$ with $h(t) < 0$ and $h(s) > 0$. Thus $|g(h) - g(f)| < \varepsilon$. This shows that $g$ is continuous on $C$.

There is a second arcsine law for Brownian motion, which describes the law of the random variable $\mathcal{L}\{t \in [0,1] : B(t) > 0\}$, the time spent by Brownian motion above the $x$-axis. This statement is much harder to derive directly for Brownian motion, though we will do this using more sophisticated tools in Chapter 8. At this stage we can use random walks to derive the result for Brownian motion.

Theorem 5.28 (Second arcsine law for Brownian motion). Let $\{B(t) : t \ge 0\}$ be a standard linear Brownian motion. Then $\mathcal{L}\{t \in [0,1] : B(t) > 0\}$ is arcsine distributed.

The idea is to prove a direct relationship between the first maximum and the number of positive terms for a simple random walk by a combinatorial argument, and then transfer this to Brownian motion using Donsker's theorem.

Lemma 5.29 (Richard's lemma). Let $\{S_k : k = 1, \ldots, n\}$ be a simple, symmetric random walk on the integers. Then
(5.1) $$\#\{k \in \{1, \ldots, n\} : S_k > 0\} \stackrel{d}{=} \min\Big\{ k \in \{0, \ldots, n\} : S_k = \max_{0 \le j \le n} S_j \Big\}.$$

Proof. Let $X_k = S_k - S_{k-1}$ for each $k \in \{1, \ldots, n\}$. We rearrange the tuple $(X_1, \ldots, X_n)$ by
• placing first, in decreasing order of $k$, the terms $X_k$ for which $S_k > 0$,
• and then, in increasing order of $k$, the $X_k$ for which $S_k \le 0$.
Denote the new tuple by $(\widetilde X_1, \ldots, \widetilde X_n)$ and let $\widetilde S_k$ be the associated $k$th partial sum. We first show that
$$(X_1, \ldots, X_n) \stackrel{d}{=} (\widetilde X_1, \ldots, \widetilde X_n).$$
Indeed, suppose first that all partial sums are nonpositive; then trivially the conditional distributions are the same. Next condition on the event that $k$ is the position of the last positive $S_k$. Note that under this condition the tuples $(X_1, \ldots, X_k)$ and $(X_{k+1}, \ldots, X_n)$ are still independent. Moreover, the $(X_1, \ldots, X_k)$ are independent and identically distributed random variables conditioned on the total sum being one, and therefore they are exchangeable. Hence the conditional law of the vector $(X_k, X_1, \ldots, X_{k-1})$ is the same. Repeating this argument for $(X_1, \ldots, X_{k-1})$, we see after finitely many steps that the two tuples have the same law. Hence $\{\widetilde S_k : k = 1, \ldots, n\}$ is a random walk, and we now check by induction on $n$ that
$$\#\{k \in \{1, \ldots, n\} : S_k > 0\} = \min\Big\{ k \in \{0, \ldots, n\} : \widetilde S_k = \max_{0 \le j \le n} \widetilde S_j \Big\}.$$
Indeed, this holds trivially for $n = 1$. When $X_{n+1}$ is appended there are two possibilities:
• if $S_{n+1} > 0$, then $\widetilde X_1 = X_{n+1}$ and the position of the leftmost maximum in $\{\widetilde S_k : k = 0, \ldots, n\}$ is shifted by one position to the right;
• if $S_{n+1} \le 0$, then $\widetilde X_{n+1} = X_{n+1}$ and the position of the leftmost maximum in $\{\widetilde S_k : k = 0, \ldots, n\}$ remains the same.
This completes the induction step and proves the lemma.
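Since (5.1) is an identity in distribution for each fixed $n$, it can be checked exactly by enumerating all $2^n$ paths. The following sketch (ours, not from the text) does this for small $n$.

```python
from itertools import product
from collections import Counter

def check_identity(n: int) -> bool:
    """Exhaustively verify (5.1): the number of positive partial sums and the
    index of the leftmost maximum have the same distribution over all 2^n paths."""
    lhs, rhs = Counter(), Counter()
    for steps in product([-1, 1], repeat=n):
        s, path = 0, [0]
        for x in steps:
            s += x
            path.append(s)
        lhs[sum(1 for v in path[1:] if v > 0)] += 1
        rhs[path.index(max(path))] += 1     # leftmost index of the maximum
    return lhs == rhs

print([check_identity(n) for n in range(1, 11)])   # should print all True
```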
Proof of Theorem 5.28. The starting point is (5.1). First look at the right hand side of the equation, which, divided by $n$, can be written as $g(S^*_n)$ for the function $g : C[0,1] \to [0,1]$ defined by
$$g(f) = \inf\Big\{ t \in [0,1] : f(t) = \sup_{s \in [0,1]} f(s) \Big\}.$$
The function $g$ is continuous in every $f \in C[0,1]$ which has a unique maximum, hence almost everywhere with respect to the Wiener measure. Hence, by combining Donsker's theorem and the Portmanteau theorem, the right hand side in (5.1) divided by $n$ converges to the distribution of $g(B)$, which by Theorem 5.26 is the arcsine distribution.

Similarly, the left hand side of (5.1) can be approximated by $h(S^*_n)$ for the function $h : C[0,1] \to [0,1]$ defined by $h(f) = \mathcal{L}\{t \in [0,1] : f(t) > 0\}$. The approximation error is bounded by
$$\frac1n\,\#\{k \in \{1, \ldots, n\} : S_k = 0\},$$
which converges to zero. The function $h$ is obviously continuous in every $f \in C[0,1]$ with the property that
$$\lim_{\varepsilon\downarrow 0} \mathcal{L}\{t \in [0,1] : 0 \le f(t) \le \varepsilon\} = 0,$$
which again is equivalent to $\mathcal{L}\{t \in [0,1] : f(t) = 0\} = 0$, a property which Brownian motion has almost surely. Hence, by combining Donsker's theorem and the Portmanteau theorem again, the left hand side in (5.1) divided by $n$ converges to the distribution of $h(B) = \mathcal{L}\{t \in [0,1] : B(t) > 0\}$, and this completes the argument.

Remark 5.30. The second arcsine law for general random walks follows from this using, by now, familiar arguments, see Exercise 5.10. ⋄

Exercises

Exercise 5.1 (∗). Suppose $\{B(t) : t \ge 0\}$ is a standard linear Brownian motion. Show that
$$\limsup_{n\uparrow\infty} \sup_{n \le t < n+1} \frac{B(t) - B(n)}{\sqrt{2\log n}} = 1 \quad \text{almost surely}.$$

Exercise 5.2 (∗). Derive from Theorem 5.1 that, for a $d$-dimensional Brownian motion,
$$\limsup_{t\uparrow\infty} \frac{|B(t)|}{\sqrt{2t\log\log t}} = 1 \quad \text{almost surely}.$$

Exercise 5.3 (∗). Suppose $\{B(t) : t \ge 0\}$ is a linear Brownian motion and $\tau$ the first hitting time of level 1. Show that, almost surely,
$$\limsup_{h\downarrow 0} \frac{B(\tau) - B(\tau - h)}{\sqrt{2h\log\log(1/h)}} \le 1.$$

Exercise 5.4 (∗). Show that there are positive constants $C_1$ and $C_2$ such that
$$\frac{C_1}{\sqrt n} \le P\{S_i \ge 0 \text{ for all } 1 \le i \le n\} \le \frac{C_2}{\sqrt n} \quad \text{for all } n \ge 1.$$
Hint: For simple random walk a reflection principle holds in quite the same way as for Brownian motion. The key to the proof is to verify that
$$P\{S_i \ge 0 \text{ for all } 1 \le i \le n\} = P\{S_n \ge 0\} - P\{S^*_n \le -2\},$$
where $S^*_n$ is the random walk reflected at the stopping time $\tau_{-1} = \min\{k : S_k = -1\}$.

Exercise 5.5 (∗). Prove that, for any random walk $\{S_j : j \ge 0\}$ on the line,
$$P\{S_0, \ldots, S_n \text{ has a point of increase}\} \le \frac{2\sum_{k=0}^n p_k p_{n-k}}{\sum_{k=0}^{\lfloor n/2\rfloor} p_k^2},$$
where $p_0, \ldots, p_n$ are as in (2.3).

Exercise 5.6. An event $A \subset \mathbb{R}^d$ is an increasing event if
$$(x_1, \ldots, x_{i-1}, x_i, x_{i+1}, \ldots, x_d) \in A \text{ and } \widetilde x_i \ge x_i \implies (x_1, \ldots, x_{i-1}, \widetilde x_i, x_{i+1}, \ldots, x_d) \in A.$$
If $A$ and $B$ are increasing events, show that $P(A \cap B) \ge P(A)P(B)$, i.e. $A$ and $B$ are positively correlated.

Exercise 5.7 (∗). Show that we can obtain a lower bound on the probability that a random walk has a point of increase which differs from the upper bound only by a constant factor. More precisely, for any random walk on the line,
$$P\{S_0, \ldots, S_n \text{ has a point of increase}\} \ge \frac{\sum_{k=0}^n p_k p_{n-k}}{2\sum_{k=0}^{\lfloor n/2\rfloor} p_k^2},$$
where $p_0, \ldots, p_n$ are as in (2.3).

Exercise 5.8. Suppose $X_1, \ldots, X_n$ are independent and identically distributed, and consider their ordered relabelling given by $X_{(1)} \ge X_{(2)} \ge \ldots \ge X_{(n)}$. Show that $E[X_{(i)}X_{(j)}] \ge E[X_{(i)}]E[X_{(j)}]$, provided these expectations are well-defined.

Exercise 5.9 (∗). Given a centred random variable $X$, show that there exist centred random variables $X_n$ taking only finitely many values, such that $X_n$ converges to $X$ in law and, for $\Psi_n(x) = E[X_n \mid X_n \ge x]$, the embedding stopping times
$$\tau_n = \inf\{t \ge 0 : M(t) \ge \Psi_n(B(t))\}$$
converge almost surely to $\tau$. Infer that $B(\tau)$ has the same law as $X$, and $E[\tau] = E[X^2]$.

Exercise 5.10. Suppose that $\{X_k : k \ge 1\}$ is a sequence of independent, identically distributed random variables with $E[X_1] = 0$, $P\{X_1 = 0\} = 0$ and $0 < E[X_1^2] = \sigma^2 < \infty$.
Let $\{S_n : n \ge 0\}$ be the associated random walk and $P_n = \#\{1 \le k \le n : S_k > 0\}$ the number of positive values of the random walk before time $n$. Show that, for all $x \in (0,1)$,
$$\lim_{n\to\infty} P\{P_n \le xn\} = \frac{2}{\pi}\arcsin(\sqrt x).$$

Notes and Comments

Historically, the law of the iterated logarithm was first proved for simple random walk by Khinchin [Kh23, Kh24] and later generalised to other random walks by Kolmogorov [Ko29] and Hartman and Wintner [HW41]. The original arguments of Kolmogorov, Hartman and Wintner were extremely difficult, and a lot of authors have since provided more accessible proofs, see, for example, de Acosta [dA83]. For Brownian motion the law of the iterated logarithm is also due to Khinchin [Kh33]. The idea of using embedding arguments to transfer the result from the Brownian motion to the random walk case is due to Strassen [St64]. For a survey of laws of the iterated logarithm, see [Bi86].

An extension of the law of the iterated logarithm is Strassen's law, first proved in [St64]. If a standard Brownian motion on the interval $[0,t]$ is rescaled by a factor $1/t$ in time and a factor $\sqrt{2t\log\log t}$ in space, the set of limit points in $C[0,1]$ is exactly the set of functions $f$ with $f(0) = 0$ and $\int_0^1 (f'(t))^2\,dt \le 1$. Strassen's law also explains the approximate form of the curve in the right half of Figure 1. Any function in this class with $f(1) = 1$ satisfies
$$1 \ge \int_0^1 (f'(t))^2\,dt \ge \Big( \int_0^1 f'(t)\,dt \Big)^2 = 1,$$
which implies that $f'(t)$ is constant and thus $f(t) = t$ for all $t \in (0,1)$. Therefore, for large $t$, the Brownian path conditioned on ending near its upper envelope resembles a straight line in the sup-norm, as can be seen in Figure 1.

The nonincrease phenomenon described in Theorem 5.11 holds for arbitrary symmetric random walks, and can thus be viewed as a combinatorial consequence of fluctuations in random sums. Indeed, our argument shows this, subject to a generalisation of Lemma 5.8. The latter result holds if the increments $X_i$ have a symmetric distribution, or if the increments have mean zero and finite variance, see e.g. Feller [Fe66, Section XII.8]. Dvoretzky, Erdős and Kakutani [DEK61] were the first to prove that Brownian motion almost surely has no local points of increase. Knight [Kn81] and Berman [Be83] noted that this follows from properties of the local time of Brownian motion; direct proofs were given by Adelman [Ad85] and Burdzy [Bu90]. The proof we give is taken from [Pe96c].

A higher-dimensional analogue of this question is whether, for Brownian motion in the plane, there exists a line such that the Brownian motion path, projected onto that line, has a global point of increase, or equivalently whether the Brownian motion path admits cut lines. We say a line $\ell$ is a cut line for the Brownian motion if, for some $t_0 \in (0,1)$ with $B(t_0) \in \ell$, the point $B(t)$ lies on one side of $\ell$ for all $t \in [0, t_0)$ and on the other side of $\ell$ for all $t \in (t_0, 1]$. It was proved by Bass and Burdzy [BB97] that planar Brownian motion almost surely does not have cut lines. Burdzy [Bu89], with a correction to the proof in [Bu95], however showed that Brownian motion in the plane almost surely does have cut points, which are points $B(t)$ such that the Brownian motion path with the point $B(t)$ removed is disconnected. It was conjectured that the Hausdorff dimension of the set of cut points is 3/4. This conjecture has recently been proved by Lawler, Schramm and Werner [LSW01], see also [La96a].
For Brownian motion in three dimensions, there almost surely exist cut planes, where we say $P$ is a cut plane if for some $t$ the set $\{B(s) : 0 < s < t\}$ lies on one side of the plane and the set $\{B(s) : t < s < 1\}$ on the other side. This result, originally due to Pemantle, is also described in Bass and Burdzy [BB97]. An argument of Evans, which is closely related to material we discuss in the final section of Chapter 10, shows that the set of times corresponding to cut planes has Hausdorff dimension zero. Pemantle [Pe97] has shown that the range of planar Brownian motion almost surely does not cover any straight line segment. Which curves can and which cannot be covered by a Brownian motion path is, in general, an open question. Also unknown is the minimal Hausdorff dimension of curves contained in the range of planar Brownian motion, though it is known that it contains a curve of Hausdorff dimension 4/3, namely its outer boundary, see [LSW01].

Harris' inequality was discovered by Harris [Ha60] and is also known as the FKG inequality, in recognition of the work of Fortuin, Kasteleyn and Ginibre [FKG71], who extended the original inequality beyond the case of product measures. 'Correlation inequalities' like these play an extremely important role in percolation theory and spatial statistical physics. Exercise 5.8 indicates the important role of this idea in the investigation of order statistics, see Lehmann [Le66] and Bickel [Bi67] for further discussion and applications.

The Skorokhod embedding problem is a classic which still leads to some attractive research. The first embedding theorem is due to Skorokhod [Sk65]. The Russian original of this work appeared in 1961, and the Dubins embedding, which we have presented, is not much younger, see [Du68]. Our presentation, based on the idea of binary splitting martingales, follows Neveu [Ne75, Ex. II.7, p. 34], and we thank Jim Pitman for directing us to this reference. Another classic embedding technique is Root's embedding, see [Ro69]. The Azéma--Yor embedding was first described in [AY79], but we follow Meilijson [Me83] in the proof. One of the attractive features of the Azéma--Yor embedding is that, among all stopping times $T$ with $ET < \infty$ which represent a given random variable $X$, it maximises $\max_{0 \le t \le T} B(t)$. Generalisations of the embedding problem to more general classes of probability laws require different forms of minimality for the embedding stopping time, or more general processes in which one embeds. A survey of current developments is [Ob04].

The idea of an invariance principle that allows one to transfer limit theorems from special cases to general random walks can be traced to Erdős and Kac [EK46, EK47]. The first general result of this nature is due to Donsker [Do51], following an idea of Doob [Do49]. Besides the embedding technique carried out in our proof, there is also a popular alternative proof, which goes back to Prohorov [Pr56]. Suppose that a subsequence of $\{S^*_n : n \ge 1\}$ converges in distribution to a limit $X$. This limit is a continuous random function, which is easily seen to have stationary, independent increments with expectation zero and variance equal to their length. By a general result this implies that $X$ is a Brownian motion. So Brownian motion is the only possible limit point of the sequence $\{S^*_n : n \ge 1\}$. The difficult part of this proof is to show that every subsequence of $\{S^*_n : n \ge 1\}$ has a convergent subsubsequence, the tightness property.
Many interesting applications and extensions of Donsker's theorem can be found in [Bi68]. An important class of extensions of Donsker's theorem are the strong approximation theorems, which were provided by Skorokhod [Sk65] and Strassen [St64]. In these results the Brownian motion and the random walk are constructed on the same probability space in such a way that they are close almost surely. An optimal result in this direction is the famous paper of Komlós, Major and Tusnády [KMT75]. For an exposition of their work and applications, see [CR81].

The arcsine laws for Brownian motion were first proved by Lévy in [Le39, Le48]. The proof of the first law which we give here follows Kallenberg [Ka02]. This law can also be proved by a direct calculation, which however is slightly longer, see for example [Du95]. Our proof of the second arcsine law goes back to an idea of Baxter [Ba62].

CHAPTER 6

Brownian local time

In this chapter we focus on linear Brownian motion and address the question of how to measure the amount of time spent by a Brownian path at a given level. As we already know from Theorem 3.25 that the occupation times up to time $t$ are absolutely continuous measures, their densities are a viable measure for the time spent at level $a$ during the time interval $[0,t]$. We shall show that these densities make up a continuous random field $\{L^a(t) : a \in \mathbb{R}, t \ge 0\}$, which is called the Brownian local time. Nontrivial information about the distribution of this process is contained in a theorem of Lévy (studying it as a function of time) and the Ray--Knight theorem (studying it as a function of the level). We finally show how to interpret local time as a family of Hausdorff measures.

1. The local time at zero

How can we measure the amount of time spent by a standard linear Brownian motion $\{B(t) : t \ge 0\}$ at zero? We have already seen that, almost surely, the zero set $\{s \in [0,t) : B(s) = 0\}$ is a set of Hausdorff dimension 1/2. Moreover, by Exercise 4.13, the 1/2-dimensional Hausdorff measure of the zero set is zero, so Hausdorff measure as defined so far does not give a nontrivial answer.

We approach this problem by counting the number of downcrossings of a nested sequence of intervals decreasing to zero. More precisely, for a linear Brownian motion $\{B(t) : t \ge 0\}$ with arbitrary starting point, given $a < b$, we define stopping times $\tau_0 = 0$ and, for $j \ge 1$,
(1.1) $$\sigma_j = \inf\{t > \tau_{j-1} : B(t) = b\}, \qquad \tau_j = \inf\{t > \sigma_j : B(t) = a\}.$$
We call the random functions $B^{(j)} : [0, \tau_j - \sigma_j] \to \mathbb{R}$, $B^{(j)}(s) = B(\sigma_j + s)$, the $j$th downcrossing of $[a,b]$. For every $t > 0$ we denote by
$$D(a, b, t) = \max\{j \in \mathbb{N} : \tau_j \le t\}$$
the number of downcrossings of the interval $[a,b]$ before time $t$. Note that $D(a,b,t)$ is almost surely finite by the continuity of Brownian motion on the compact interval $[0,t]$.

Theorem 6.1 (Downcrossing representation of the local time at zero). There exists a nontrivial stochastic process $\{L(t) : t \ge 0\}$, called the local time at zero, such that for any sequences $a_n \uparrow 0$ and $b_n \downarrow 0$ with $a_n < b_n$, almost surely,
$$\lim_{n\to\infty} 2(b_n - a_n)\,D(a_n, b_n, t) = L(t) \quad \text{for every } t > 0.$$
Moreover, this process is almost surely locally $\gamma$-Hölder continuous for any $\gamma < 1/2$.

Remark 6.2. One might wonder about the meaning of the normalisation factor 2 in our theorem, which is omitted in some treatments (e.g. [KS88]). An intuitive answer is that the time spent near zero should be approximated by the number of downcrossings plus the number of upcrossings of a small interval centred at zero. The factor thus compensates for the number of upcrossings, which is (up to an error of at most one) the same as the number of downcrossings. ⋄
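Theorem 6.1 suggests a direct numerical approximation of the local time. The sketch below (ours; the grid size and interval widths are arbitrary choices) counts downcrossings of shrinking symmetric intervals on a single discretized path on $[0,1]$; the rescaled counts $2(b-a)D(a,b,1)$ should roughly agree across the intervals, their common value approximating $L(1)$ for this path.

```python
import numpy as np

rng = np.random.default_rng(9)

def downcrossings(b_path, a, b):
    """Count completed downcrossings of [a, b]: a passage from b down to a."""
    count, waiting_for = 0, "b"        # first wait to hit b, then to hit a
    for x in b_path:
        if waiting_for == "b" and x >= b:
            waiting_for = "a"
        elif waiting_for == "a" and x <= a:
            waiting_for = "b"
            count += 1
    return count

dt = 1e-6
path = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=1_000_000))  # B on [0, 1]
for eps in (0.1, 0.05, 0.02):
    d = downcrossings(path, -eps, eps)
    print(f"eps = {eps:5.2f}:  2*(2*eps)*D = {4 * eps * d:.3f}")
```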
The key ingredient of the proof of Theorem 6.1 is the following fact.

Lemma 6.3. Suppose that $a < m < b$ and let $\{B(t) : 0 \le t \le T\}$ be a linear Brownian motion stopped at the time $T$ when it first hits a given level above $b$. Let
• $D$ be the number of downcrossings of the interval $[a,b]$,
• $D_l$ be the number of downcrossings of the interval $[a,m]$,
• $D_u$ be the number of downcrossings of the interval $[m,b]$.
There exist two independent sequences $X_0, X_1, \ldots$ and $Y_0, Y_1, \ldots$ of independent nonnegative random variables, which are also independent of $D$, such that for $j \ge 1$ the random variables $X_j$ are geometric with mean $(b-a)/(m-a)$ and the random variables $Y_j$ are geometric with mean $(b-a)/(b-m)$, and
$$D_l = X_0 + \sum_{j=1}^D X_j \quad \text{and} \quad D_u = Y_0 + \sum_{j=1}^D Y_j.$$

Figure 1. The downcrossing of $[a,b]$ contains one downcrossing of $[a,m]$, and the following upcrossing of $[a,b]$ contains one further downcrossing of $[a,m]$.

Proof. Recall the definition of the stopping times $\sigma_j, \tau_j$ from (1.1). For $j \ge 1$, define the $j$th downcrossings, resp. upcrossings, of $[a,b]$ by
$$B^{(j)}_\downarrow : [0, \tau_j - \sigma_j] \to \mathbb{R}, \quad B^{(j)}_\downarrow(s) = B(\sigma_j + s),$$
$$B^{(j)}_\uparrow : [0, \sigma_{j+1} - \tau_j] \to \mathbb{R}, \quad B^{(j)}_\uparrow(s) = B(\tau_j + s).$$
By the strong Markov property all these pieces of the Brownian path are independent. Note that $D$ depends only on the family $(B^{(j)}_\downarrow : j \ge 1)$ of downcrossings.

First look at $D_l$ and denote by $X_0$ the number of downcrossings of $[a,m]$ before the first downcrossing of $[a,b]$. The $j$th downcrossing of $[a,b]$ contains exactly one downcrossing of $[a,m]$, and the $j$th upcrossing of $[a,b]$ contains a random number $X_j$ of downcrossings of $[a,m]$ which, by Theorem 2.45, satisfies
$$P\{X_j = k\} = \Big(\frac{m-a}{b-a}\Big)\Big(\frac{b-m}{b-a}\Big)^{k-1} \quad \text{for every } k \in \{1, 2, \ldots\}.$$
In other words, $X_j$ is geometrically distributed with (success) parameter $(m-a)/(b-a)$.

Figure 2. The downcrossing of $[a,b]$ contains three downcrossings of $[m,b]$, and the following upcrossing of $[a,b]$ contains no further downcrossings of $[m,b]$.

Second, look at $D_u$ and denote by $Y_0$ the number of downcrossings of $[m,b]$ after the last downcrossing of $[a,b]$. No downcrossings of $[m,b]$ can occur during an upcrossing of $[a,b]$. Fix $j$ and look at the downcrossing $B^{(j)}_\downarrow$ of $[a,b]$. Define stopping times $\tilde\sigma_0 = 0$ and, for $i \ge 1$,
$$\tilde\tau_i = \inf\{t > \tilde\sigma_{i-1} : B^{(j)}_\downarrow(t) = m\}, \qquad \tilde\sigma_i = \inf\{t > \tilde\tau_i : B^{(j)}_\downarrow(t) = b\}.$$
This subdivides the path of $B^{(j)}_\downarrow$ into independent downcrossing periods $[\tilde\sigma_{i-1}, \tilde\tau_i]$ and upcrossing periods $[\tilde\tau_i, \tilde\sigma_i]$ of $[m,b]$. By our assumption the upper hitting boundary is above $b$ and therefore can only be hit during the downcrossing periods, while the lifetime of $B^{(j)}_\downarrow$ expires when the lower boundary $a$ is hit, which can only occur during an upcrossing period. The probability of this event equals $(b-m)/(b-a)$ by Theorem 2.45. Hence the number of downcrossings of $[m,b]$ during the $j$th downcrossing of $[a,b]$ is a geometric random variable $Y_j$ with (success) parameter $(b-m)/(b-a)$, which completes the proof.

For the proof of Theorem 6.1 we first prove the convergence for the case when the Brownian motion is stopped at the time $T = T_b$ when it first reaches some level $b > b_1$. This has the advantage that there cannot be any uncompleted upcrossings.
Lemma 6.4. For any two sequences $a_n \uparrow 0$ and $b_n \downarrow 0$ with $a_n < b_n$, the discrete-time stochastic process
$$\{2(b_n - a_n)\,D(a_n, b_n, T) : n \in \mathbb{N}\}$$
is a submartingale with respect to the natural filtration $(\mathcal{F}_n : n \in \mathbb{N})$.

Proof. We may assume that, for each $n$, we have either (1) $a_n = a_{n+1}$ or (2) $b_n = b_{n+1}$, which is no loss of generality, as we may replace a step where both $a_n$ and $b_n$ are changed by two steps where only one is changed at a time. The original sequence is then a subsequence of the modified one and inherits the submartingale property.

Now fix $n$ and first assume that we are in case (1), $a_n = a_{n+1}$. By Lemma 6.3 for $D_l$, the total number $D(a_n, b_{n+1}, T)$ of downcrossings of $[a_n, b_{n+1}]$ given $\mathcal{F}_n$ is the sum of $D(a_n, b_n, T)$ independent geometric random variables with parameter $(b_{n+1} - a_n)/(b_n - a_n)$ plus a nonnegative contribution. Hence
$$E[(b_{n+1} - a_n)\,D(a_n, b_{n+1}, T) \mid \mathcal{F}_n] \ge (b_n - a_n)\,D(a_n, b_n, T),$$
which is the submartingale property (for the $n$th step). Second, assume that we are in case (2), $b_n = b_{n+1}$. Then Lemma 6.3 for $D_u$ shows that the number of downcrossings of $[a_{n+1}, b_n]$ given $\mathcal{F}_n$ is the sum of $D(a_n, b_n, T)$ independent geometric random variables with parameter $(b_n - a_{n+1})/(b_n - a_n)$ plus a nonnegative contribution. Hence
$$E[(b_n - a_{n+1})\,D(a_{n+1}, b_n, T) \mid \mathcal{F}_n] \ge (b_n - a_n)\,D(a_n, b_n, T),$$
and together with the first case this establishes that $\{2(b_n - a_n) D(a_n, b_n, T) : n \in \mathbb{N}\}$ is a submartingale with respect to its natural filtration.

Lemma 6.5. For any two sequences $a_n \uparrow 0$ and $b_n \downarrow 0$ with $a_n < b_n$, the limit
$$L(T_b) := \lim_{n\to\infty} 2(b_n - a_n)\,D(a_n, b_n, T_b)$$
exists almost surely. It does not depend on the choice of sequences.

Proof. Observe that $D(a_n, b_n, T_b)$ is a geometric random variable with parameter $(b_n - a_n)/(b - a_n)$. Recall that the variance of a geometric random variable with parameter $p$ is $(1-p)/p^2$, so its second moment is bounded by $2/p^2$. Hence
$$E[4(b_n - a_n)^2\,D(a_n, b_n, T_b)^2] \le 8\,(b - a_n)^2,$$
and thus the submartingale in Lemma 6.4 is $L^2$-bounded. By the submartingale convergence theorem, see Theorem II.4.5, the limit $\lim_{n\uparrow\infty} 2(b_n - a_n) D(a_n, b_n, T_b)$ exists almost surely, and by Theorem II.4.12 also in $L^2$, ensuring that the limit is nontrivial. Finally, note that the limit does not depend on the choice of the sequences $a_n \uparrow 0$ and $b_n \downarrow 0$, because if it did, then given two sequences with different limits we could construct a sequence of intervals alternating between the two sequences, which would not converge.

Lemma 6.6. For any fixed time $t > 0$, almost surely, the limit
$$L(t) := \lim_{n\to\infty} 2(b_n - a_n)\,D(a_n, b_n, t)$$
exists.

Proof. We define an auxiliary Brownian motion $\{B_t(s) : s \ge 0\}$ by $B_t(s) = B(t+s)$. For any integer $b > b_1$ we denote by $D_t(a_n, b_n, T_b)$ the number of downcrossings of the interval $[a_n, b_n]$ by the auxiliary Brownian motion before it hits $b$. Then, almost surely,
$$L_t(T_b) := \lim_{n\uparrow\infty} 2(b_n - a_n)\,D_t(a_n, b_n, T_b)$$
exists by the previous lemma. Given $t > 0$, we fix a Brownian path such that this limit exists for all integers $b > b_1$. Pick $b$ so large that $T_b > t$, and define $L(t) := L(T_b) - L_t(T_b)$. To show that this is the required limit, observe that
$$D(a_n, b_n, T_b) - D_t(a_n, b_n, T_b) - 1 \le D(a_n, b_n, t) \le D(a_n, b_n, T_b) - D_t(a_n, b_n, T_b),$$
where the correction $-1$ on the left hand side arises from the possibility that $t$ interrupts a downcrossing. Multiplying by $2(b_n - a_n)$ and taking a limit gives $L(T_b) - L_t(T_b)$ for both bounds, proving convergence.

We now have to study the dependence of $L(t)$ on the time $t$ in more detail. To simplify the notation we write
$$I_n(s,t) = 2(b_n - a_n)\big(D(a_n, b_n, t) - D(a_n, b_n, s)\big) \quad \text{for all } 0 \le s < t.$$
The following lemma contains a probability estimate which is sufficient to get the convergence of the downcrossing numbers jointly for all times and to establish Hölder continuity.

Lemma 6.7. Let $\gamma < 1/2$ and $0 < \varepsilon < (1-2\gamma)/3$. Then, for all $t \ge 0$ and $0 < h < 1$, we have
$$P\{L(t+h) - L(t) > h^\gamma\} \le 2\exp\{-\tfrac12 h^{-\varepsilon}\}.$$

Proof. As, by Fatou's lemma,
$$P\{L(t+h) - L(t) > h^\gamma\} = P\{\liminf_{n\to\infty} I_n(t, t+h) > h^\gamma\} \le \liminf_{n\to\infty} P\{I_n(t, t+h) > h^\gamma\},$$
we can focus on estimating $P\{I_n(t, t+h) > h^\gamma\}$ for fixed large $n$. By the Markov property it suffices to estimate $P_x\{I_n(0,h) > h^\gamma\}$ uniformly for all $x \in \mathbb{R}$. This probability is clearly maximal when $x = b_n$, so we may assume this. Let $T_h = \inf\{s > 0 : B(s) = b_n + h^{(1-\varepsilon)/2}\}$ and observe that
$$\{I_n(0,h) > h^\gamma\} \subset \{I_n(0, T_h) > h^\gamma\} \cup \{T_h < h\}.$$
The number of downcrossings of $[a_n, b_n]$ during the period before $T_h$ is geometrically distributed with mean $(b_n - a_n)^{-1} h^{(1-\varepsilon)/2} + 1$, and thus
$$P_{b_n}\{I_n(0, T_h) > h^\gamma\} = \Big(\frac{h^{(1-\varepsilon)/2}}{b_n - a_n + h^{(1-\varepsilon)/2}}\Big)^{\lceil h^\gamma/(2(b_n - a_n))\rceil - 1} \xrightarrow{n\to\infty} \exp\{-\tfrac12 h^{\gamma - \frac12 + \frac{\varepsilon}{2}}\} \le \exp\{-\tfrac12 h^{-\varepsilon}\}.$$
With $\{W(s) : s \ge 0\}$ denoting a standard linear Brownian motion,
$$P_{b_n}\{T_h < h\} = P\Big\{ \max_{0 \le s \le h} W(s) \ge h^{(1-\varepsilon)/2} \Big\} \le \sqrt{\frac{2}{\pi h^{-\varepsilon}}}\,\exp\{-\tfrac12 h^{-\varepsilon}\},$$
where we have used Remark 2.19 in the last step. The result follows by adding the last two displayed formulas.

Lemma 6.8. Almost surely,
$$L(t) := \lim_{n\to\infty} 2(b_n - a_n)\,D(a_n, b_n, t) \quad \text{exists for every } t \ge 0.$$

Proof. It suffices to prove the simultaneous convergence for all $0 \le t \le 1$. We define a countable set of gridpoints
$$G = \bigcup_{m \in \mathbb{N}} G_m \cup \{1\}, \quad \text{for } G_m = \Big\{ \frac{k}{m} : k \in \{0, \ldots, m-1\} \Big\},$$
and show that the stated convergence holds on the set
$$\bigcap_{t \in G} \Big\{ L(t) = \lim_{n\to\infty} 2(b_n - a_n) D(a_n, b_n, t) \text{ exists} \Big\} \cap \bigcap_{m > M} \bigcap_{t \in G_m} \Big\{ L\big(t + \tfrac1m\big) - L(t) \le (1/m)^\gamma \Big\},$$
which, by choosing $M$ suitably, has probability arbitrarily close to one by the previous two lemmas. Given any $t \in [0,1)$ and a large $m$, we find $t_1, t_2 \in G_m$ with $t_2 - t_1 = \frac1m$ and $t \in [t_1, t_2]$. We obviously have
$$2(b_n - a_n) D(a_n, b_n, t_1) \le 2(b_n - a_n) D(a_n, b_n, t) \le 2(b_n - a_n) D(a_n, b_n, t_2).$$
Both bounds converge on our set, and the difference of the limits is $L(t_2) - L(t_1)$, which is bounded by $m^{-\gamma}$ and thus can be made arbitrarily small by choosing a large $m$.

Lemma 6.9. For $\gamma < \frac12$, almost surely, the process $\{L(t) : t \ge 0\}$ is locally $\gamma$-Hölder continuous.

Proof. It suffices to look at $0 \le t < 1$. We use the notation of the proof of the previous lemma and show that $\gamma$-Hölder continuity holds on the set constructed there. Indeed, whenever $0 \le s < t < 1$ and $t - s < 1/M$, we pick $m \ge M$ such that $\frac{1}{m+1} \le t - s < \frac1m$. We take $t_1 < s$ with $t_1 \in G_m$ and $s - t_1 < 1/m$, and $t_2 > t$ with $t_2 \in G_m$ and $t_2 - t < 1/m$. Note that $t_2 - t_1 \le 2/m$ by construction and hence
$$L(t) - L(s) \le L(t_2) - L(t_1) \le 2(1/m)^\gamma \le 2\Big(\frac{m+1}{m}\Big)^\gamma (t-s)^\gamma.$$
The result follows as the fraction on the right is bounded by 2.

This completes the proof of the downcrossing representation, Theorem 6.1. It is easy to see from this representation that, almost surely, the local time at zero increases only on the zero set of the Brownian motion, see Exercise 6.1. Observe that the increasing process $\{L(t) : t \ge 0\}$ is not a Markov process. Heuristically, the size of the increment $L(t+h) - L(t)$ depends on the position of the first zero of the Brownian motion after time $t$, which is strongly dependent on the position of the last zero before time $t$. The last zero, however, is the position of the last point of increase of the local time process before time $t$, and therefore the path $\{L(s) : 0 \le s \le t\}$ contains relevant information beyond its endpoint.
Nevertheless, we can describe the law of the local time process, thanks to the following famous theorem of Paul Lévy, which describes the law of the local time at zero in terms of the maximum process of Brownian motion. It opens the door to finer results on the local time at zero, like those presented in Section 4 of this chapter.

Theorem 6.10 (Lévy). The local time at zero $\{L(t) : t \ge 0\}$ and the maximum process $\{M(t) : t \ge 0\}$ of a standard linear Brownian motion have the same distribution.

The proof uses the simple random walk embedded in the Brownian motion, a technique which we will exploit extensively in the next section. Define stopping times $\tau_0 := \tau^{(n)}_0 := 0$ and
$$\tau_k := \tau^{(n)}_k := \inf\{t > \tau_{k-1} : |B(t) - B(\tau_{k-1})| = 2^{-n}\} \quad \text{for } k \ge 1.$$
The $n$th embedded random walk $\{X^{(n)}_k : k = 1, 2, \ldots\}$ is defined by
$$X_k := X^{(n)}_k := 2^n B(\tau^{(n)}_k).$$
The length of the embedded random walk is
$$N := N^{(n)}(t) := \max\{k \in \mathbb{N} : \tau_k \le t\},$$
which is easily seen to be independent of the actual walk.

Lemma 6.11. For every $t > 0$, almost surely, $\lim_{n\to\infty} 2^{-2n} N^{(n)}(t) = t$.

Proof. First note that $\{\xi^{(n)}_k : k = 1, 2, \ldots\}$ defined by $\xi_k := \xi^{(n)}_k := \tau^{(n)}_k - \tau^{(n)}_{k-1}$ is a sequence of independent random variables, for each $n$. By Theorem 2.45 the mean of $\xi_k$ is $2^{-2n}$, and its variance is, by Brownian scaling, equal to $c\,2^{-4n}$ for some constant $c > 0$. (See, for example, Exercise 2.13 for instructions on how to find the constant.) Define
$$S^{(n)}(t) = \sum_{k=1}^{\lceil 2^{2n}t\rceil} \xi^{(n)}_k.$$
Then $ES^{(n)}(t) = \lceil 2^{2n}t\rceil 2^{-2n} \to t$ and $\operatorname{Var}(S^{(n)}(t)) = c\,2^{-4n}\lceil 2^{2n}t\rceil$, hence
$$E\sum_{n=1}^\infty \big(S^{(n)}(t) - ES^{(n)}(t)\big)^2 < \infty.$$
We infer that, almost surely, $\lim_{n\to\infty} S^{(n)}(t) = t$. For fixed $\varepsilon > 0$ we pick $n_0$ large, so that $S^{(n)}(t-\varepsilon) \le t \le S^{(n)}(t+\varepsilon)$ for all $n \ge n_0$. The sum over $\xi_k$ up to $N^{(n)}(t) + 1$ is at least $t$, by definition, and hence we get $N^{(n)}(t) + 1 \ge \lceil 2^{2n}(t-\varepsilon)\rceil$. Conversely, the sum over $\xi_k$ up to $N^{(n)}(t)$ is at most $t$, and hence $N^{(n)}(t) \le \lceil 2^{2n}(t+\varepsilon)\rceil$. The result follows as $\varepsilon > 0$ was arbitrary.

Lemma 6.12. Almost surely, for every $t > 0$,
$$\lim_{n\uparrow\infty} 2^{-n}\,\#\{k \in \{1, \ldots, N^{(n)}(t)\} : |X_{k-1}| = 0,\ |X_k| = 1\} = L(t).$$

Proof. By Theorem 6.1 applied to the sequences $a_n = -2^{-n}$ and $b_n = 0$ we have
$$\lim_{n\uparrow\infty} 2^{-n}\,\#\{k \in \{1, \ldots, N^{(n)}(t)\} : X_{k-1} = 0,\ X_k = -1\} = \tfrac12 L(t).$$
Applying Theorem 6.1 to the sequences $a_n = 0$ and $b_n = 2^{-n}$ we get
$$\lim_{n\uparrow\infty} 2^{-n}\,\#\{k \in \{1, \ldots, N^{(n)}(t)\} : X_{k-1} = 1,\ X_k = 0\} = \tfrac12 L(t).$$
As $\#\{k \le N : X_{k-1} = 1, X_k = 0\}$ and $\#\{k \le N : X_{k-1} = 0, X_k = 1\}$ differ by no more than one, the result follows by adding up the two displayed formulas.

Figure 3. On the left, an embedded random walk $\{X_k : k \ge 0\}$ together with its maximum process $\{M_k : k \ge 0\}$. On the right, the associated difference process $\{Y_k : k \ge 0\}$ defined by $Y_k = M_k - X_k$.

We define the maximum process $\{M^{(n)}_k : k = 1, 2, \ldots\}$ associated with the embedded random walk by
$$M_k = M^{(n)}_k = \max\{X^{(n)}_j : j \in \{0, \ldots, k\}\}.$$
Then the process $\{Y^{(n)}_k : k = 1, 2, \ldots\}$ defined by $Y_k := Y^{(n)}_k := M_k - X_k$ is a Markov chain with state space $\{0, 1, 2, \ldots\}$ and the following transition mechanism:
• if $j \ne 0$ then $P\{Y_{k+1} = j+1 \mid Y_k = j\} = \frac12 = P\{Y_{k+1} = j-1 \mid Y_k = j\}$;
• $P\{Y_{k+1} = 0 \mid Y_k = 0\} = \frac12 = P\{Y_{k+1} = 1 \mid Y_k = 0\}$.
One can recover the maximum process $\{M_k : k = 1, 2, \ldots\}$ from $\{Y_k : k = 1, 2, \ldots\}$ by counting the number of flat steps,
$$M_k = \#\{j \in \{1, \ldots, k\} : Y_j = Y_{j-1}\}.$$
Hence we obtain, asymptotically, the maximum process of the Brownian motion as a limit of the number of flat steps in $\{Y^{(n)}_k : k = 1, 2, \ldots\}$.
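The identity $M_k = \#\{j \le k : Y_j = Y_{j-1}\}$ holds path by path and is easy to confirm in code. The sketch below (ours; length and seed arbitrary) checks it on a long simple random walk.

```python
import numpy as np

rng = np.random.default_rng(10)

# The difference chain Y_k = M_k - X_k for a simple random walk, and the
# recovery of M_k as the number of flat steps of Y up to time k.
n = 100_000
x = np.concatenate([[0], rng.choice([-1, 1], size=n)]).cumsum()
m = np.maximum.accumulate(x)
y = m - x

flat = (y[1:] == y[:-1])                        # flat steps of Y mark new maxima
m_recovered = np.concatenate([[0], flat.cumsum()])
print("M_k recovered from flat steps of Y:", bool(np.all(m_recovered == m)))
```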
Lemma 6.13. For any time $t>0$, almost surely,
$$M(t) = \lim_{n\uparrow\infty} 2^{-n}\,\#\big\{j\in\{1,\dots,N^{(n)}(t)\}\colon Y_j^{(n)} = Y_{j-1}^{(n)}\big\}.$$

Proof. Note that $\#\{j\in\{1,\dots,N\}\colon Y_j = Y_{j-1}\}$ is the maximum of the random walk $\{X_k\colon k=1,\dots,N\}$ over its entire length. This maximum, multiplied by $2^{-n}$, differs from $M(t)$ by no more than $2^{-n}$, and this completes the argument.

Removing the flat steps in the process $\{Y_j^{(n)}\colon j=1,2,\dots\}$ we obtain a process $\{\tilde Y_k^{(n)}\colon k=1,2,\dots\}$, which has the same law as $\{|X_k|\colon k=1,2,\dots\}$. By Lemma 6.12 we therefore have the convergence in distribution, as $n\uparrow\infty$,
$$(1.2)\qquad 2^{-n}\,\#\big\{k\in\{1,\dots,N^{(n)}(t)\}\colon \tilde Y_{k-1}^{(n)} = 0,\ \tilde Y_k^{(n)} = 1\big\} \Longrightarrow L(t),$$
jointly for any finite set of times.

Lemma 6.14. Almost surely,
$$\lim_{n\uparrow\infty} 2^{-n}\Big(\#\big\{j\in\{1,\dots,N^{(n)}(t)\}\colon Y_{j-1}^{(n)} = Y_j^{(n)}\big\} - \#\big\{k\in\{1,\dots,N^{(n)}(t)\}\colon \tilde Y_{k-1}^{(n)}=0,\ \tilde Y_k^{(n)}=1\big\}\Big) = 0.$$

Proof. First note that when $\{Y_j\colon j=1,2,\dots\}$ returns to zero for the $i$th time, the number of steps before it moves to one is given by a random variable $Z_i$ with distribution $\mathbb{P}\{Z_i = k\} = 2^{-k-1}$ for $k=0,1,\dots$. Denoting by $Z_0$ the number of steps before it moves initially, the random variables $Z_0, Z_1,\dots$ are independent and independent of the process $\{\tilde Y_k^{(n)}\colon k=1,2,\dots\}$. Let
$$A^{(n)} = \#\big\{j\in\{1,\dots,N^{(n)}(t)\}\colon Y_{j-1}^{(n)} = 1,\ Y_j^{(n)} = 0\big\}$$
be the total number of returns to zero before time $N$.

Figure 4. On the left, a sample of the process $\{Y_j\colon 0\le j\le N^{(n)}(t)\}$. On the right, the associated $\{\tilde Y_k\colon 0\le k\le N^{(n)}(t)\}$, which is obtained by removing the two flat steps and extending the path to its original length.

Then, almost surely, as $n\uparrow\infty$,
$$2^{-n}\Big(\#\big\{j\le N^{(n)}(t)\colon Y_j = Y_{j-1}\big\} - \#\big\{j\le N^{(n)}(t)\colon Y_{j-1}=0,\ Y_j=1\big\}\Big) = 2^{-n}\sum_{i=0}^{A^{(n)}}(Z_i - 1) = \big(2^{-n}A^{(n)}\big)\,\frac{1}{A^{(n)}}\sum_{i=0}^{A^{(n)}}\big(Z_i - \mathbb{E}Z_i\big) \longrightarrow 0,$$
because the first factor converges by Lemma 6.12 and the second by the law of large numbers. To study the effect of the removal of the flat pieces, recall that almost surely the length $N^{(n)}(t)$ of the walk is of order $2^{2n}t$, by Lemma 6.11, and the number of flat pieces is $M_{N^{(n)}(t)}$, which is of order $2^n$, by Lemma 6.13. Hence, for all $\varepsilon>0$, if $n$ is large enough, $N^{(n)}(t-\varepsilon) + M_{N^{(n)}(t)} \le N^{(n)}(t)$. We infer from this that
$$2^{-n}\Big(\#\big\{j\le N^{(n)}(t)\colon \tilde Y_{j-1}=0,\ \tilde Y_j=1\big\} - \#\big\{j\le N^{(n)}(t)\colon Y_{j-1}=0,\ Y_j=1\big\}\Big) \le 2^{-n}\,\#\big\{j\in\{N^{(n)}(t-\varepsilon)+1,\dots,N^{(n)}(t)\}\colon \tilde Y_{j-1}=0,\ \tilde Y_j=1\big\},$$
and the right hand side converges almost surely to a random variable which has the law of $L(t) - L(t-\varepsilon)$ and hence can be made arbitrarily small by choice of $\varepsilon>0$.

Proof of Theorem 6.10. Note that both processes in Theorem 6.10 are continuous, so that it suffices to compare their finite dimensional distributions. Equality of these follows directly by combining Lemma 6.13, Equation (1.2) and Lemma 6.14.

Theorem 6.15 (Occupation time representation of the local time at zero). For all sequences $a_n \uparrow 0$ and $b_n \downarrow 0$ with $a_n < b_n$, almost surely,
$$\lim_{n\to\infty} \frac{1}{b_n - a_n}\int_0^t \mathbf{1}\{a_n \le B(s) \le b_n\}\,ds = L(t) \quad\text{for every } t>0.$$

The proof is prepared by the following lemma, which we prove as Exercise 6.4.

Lemma 6.16. Let $\{W(s)\colon s\ge0\}$ be a standard linear Brownian motion and $\tau_1$ its first hitting time of level 1. Then $\mathbb{E}\int_0^{\tau_1} \mathbf{1}\{0\le W(s)\le 1\}\,ds = 1$.
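Lévy's theorem is a statement about laws, not paths. A quick Monte Carlo comparison (a sketch under our own discretisation choices) of the empirical distributions of $M(1)$ and of an occupation-based estimate of $L(1)$, anticipating Theorem 6.15, makes it plausible:

```python
import numpy as np

rng = np.random.default_rng(3)
steps, paths, eps = 4000, 2000, 0.05
L_vals, M_vals = np.empty(paths), np.empty(paths)
for p in range(paths):
    B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1/steps), steps))])
    M_vals[p] = B.max()                                # M(1)
    L_vals[p] = np.mean(np.abs(B) <= eps) / (2 * eps)  # crude L(1), cf. Theorem 6.15

for q in (0.25, 0.5, 0.75):
    print(f"quantile {q}:  L: {np.quantile(L_vals, q):.3f}   M: {np.quantile(M_vals, q):.3f}")
# Both samples should resemble |N(0,1)|, the common law of L(1) and M(1).
```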
Proof of Theorem 6.15. Recall the stopping times $\tau_j$ defined for $a_n < b_n$ as in (1.1). For the proof of the lower bound note that
$$\int_0^t \mathbf{1}\{a_n\le B(s)\le b_n\}\,ds \ge \sum_{j=1}^{D(a_n,b_n,t)} \int_{\tau_{j-1}}^{\tau_j} \mathbf{1}\{a_n\le B(s)\le b_n\}\,ds.$$
By Brownian scaling,
$$\int_{\tau_{j-1}}^{\tau_j} \mathbf{1}\{a_n\le B(s)\le b_n\}\,ds = (b_n-a_n)^2 \int_0^{\tau} \mathbf{1}\{0\le W(s)\le 1\}\,ds,$$
where $\{W(s)\colon s\ge0\}$ is a standard linear Brownian motion and
$$\tau = \inf\big\{s>0\colon W(s)=0 \text{ and there exists } t<s \text{ with } W(t)=1\big\}.$$
Hence
$$\frac{1}{b_n-a_n}\sum_{j=1}^{D(a_n,b_n,t)}\int_{\tau_{j-1}}^{\tau_j}\mathbf{1}\{a_n\le B(s)\le b_n\}\,ds = (b_n-a_n)D(a_n,b_n,t)\Big[\frac{1}{D(a_n,b_n,t)}\sum_{j=1}^{D(a_n,b_n,t)}\int_0^{\tau} \mathbf{1}\{0\le W(s)\le 1\}\,ds\Big].$$
The first factor converges almost surely to $\frac12 L(t)$, by Theorem 6.1. From the law of large numbers and Lemma 6.16 we get for the second factor,
$$\lim_{n\uparrow\infty}\frac{1}{D(a_n,b_n,t)}\sum_{j=1}^{D(a_n,b_n,t)}\int_0^{\tau} \mathbf{1}\{0\le W(s)\le 1\}\,ds = \mathbb{E}\int_0^{\tau} \mathbf{1}\{0\le W(s)\le 1\}\,ds = 2.$$
This verifies the lower bound. The upper bound can be obtained by including the period $[\tau_j, \tau_{j+1}]$ for $j = D(a_n,b_n,t)$ in the summation and using the same arguments as for the lower bound. This completes the proof of Theorem 6.15.

2. A random walk approach to the local time process

Given a level $a\in\mathbb{R}$, the construction of the previous section allows us to define the local time at level $a$ for a linear Brownian motion $\{B(t)\colon t\ge0\}$. Indeed, simply let $\{L^a(t)\colon t\ge0\}$ be the local time at zero of the auxiliary Brownian motion $\{B_a(t)\colon t\ge0\}$ defined by $B_a(t) = B(t)-a$. Using Theorem 6.15 it is not hard to show that $\{L^a(t)\colon a\in\mathbb{R}\}$ is the density of the occupation measure $\mu_t$ introduced in Theorem 3.25.

Theorem 6.17. For linear Brownian motion $\{B(t)\colon t\ge0\}$, almost surely, for any bounded measurable $g\colon\mathbb{R}\to\mathbb{R}$ and $t>0$,
$$\int g(a)\,d\mu_t(a) = \int_0^t g(B(s))\,ds = \int_{-\infty}^{\infty} g(a)\,L^a(t)\,da.$$

Proof. First, observe that for the statement it suffices to have $\{L^a(t)\colon t\ge0\}$ defined for $\mathcal{L}$-almost every $a$. Second, we may assume that $t$ is fixed. Indeed, it suffices to verify the second equality for a countable family of bounded measurable $g\colon\mathbb{R}\to\mathbb{R}$, for example the indicator functions of rational intervals. Having fixed such a $g$, both sides are continuous in $t$. For fixed $t$, we know from Theorem 3.25 that $\mu_t \ll \mathcal{L}$ almost surely, hence a density $f$ exists by the Radon–Nikodym theorem and may be obtained as
$$f(a) = \lim_{\varepsilon\downarrow0} \frac{1}{2\varepsilon}\int_0^t \mathbf{1}\{a-\varepsilon\le B(s)\le a+\varepsilon\}\,ds,$$
which equals $L^a(t)$ by Theorem 6.15, almost surely for $\mathcal{L}$-almost every $a$.

A major result about linear Brownian motion is the continuity of the density $\{L^a(t)\colon a\in\mathbb{R}\}$ of the occupation measures, which we now prove. To explore $L^a(t)$ as a function of the level $a$ we extend the downcrossing representation to hold simultaneously at all levels $a$. We approach this problem via the random walks embedded in a Brownian motion. For a Brownian motion $\{B(t)\colon t\ge0\}$ started in the origin we recall the definition of the embedded random walks $\{X_k^{(n)}\colon k=1,2,\dots\}$ from the previous section: define stopping times $\tau_0 := \tau_0^{(n)} := 0$ and
$$\tau_k := \tau_k^{(n)} := \inf\big\{t>\tau_{k-1}\colon |B(t)-B(\tau_{k-1})| = 2^{-n}\big\}, \quad\text{for } k\ge1,$$
and define the $n$th embedded random walk by $X_k := X_k^{(n)} := 2^n B(\tau_k^{(n)})$. For any time $t>0$ the length of the embedded random walk is
$$N := N^{(n)}(t) := \max\{k\in\mathbb{N}\colon \tau_k\le t\},$$
which is independent of the $n$th walk itself. Further, given $a\in\mathbb{R}$, choose $j(a)\in\{0,1,\dots\}$ such that
$$j(a)\,2^{-n} \le a < (j(a)+1)\,2^{-n}.$$
Denote the number of downcrossings of $2^n a$ by the $n$th embedded random walk by
$$D^{(n)}(a,t) := \#\big\{k\in\{0,\dots,N^{(n)}(t)\}\colon X_k^{(n)} = j(a)+1,\ X_{k+1}^{(n)} = j(a)\big\}.$$
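The occupation-density identity of Theorem 6.17 is easy to probe numerically. The following hedged sketch (all parameters are illustrative) takes $g$ an indicator function, estimates $a\mapsto L^a(1)$ by narrow occupation windows, and compares the two integrals:

```python
import numpy as np

rng = np.random.default_rng(4)
steps = 200_000
B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1/steps), steps))])

g_lo, g_hi = 0.2, 0.6                               # g = indicator of [0.2, 0.6]
lhs = np.mean((B >= g_lo) & (B <= g_hi))            # int_0^1 g(B(s)) ds

grid = np.linspace(g_lo, g_hi, 41)                  # levels a in the support of g
eps = 0.01
L = np.array([np.mean(np.abs(B - a) <= eps) / (2 * eps) for a in grid])
rhs = np.sum((L[1:] + L[:-1]) / 2 * np.diff(grid))  # trapezoidal int g(a) L^a(1) da
print(f"int g(B) ds = {lhs:.4f}    int g(a) L^a(1) da = {rhs:.4f}")
```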
Theorem 6.18 (Trotter's theorem). Let $\{B(t)\colon t\ge0\}$ be a linear Brownian motion and let $D^{(n)}(a,t)$ be the number of downcrossings of $2^n a$ by the $n$th embedded random walk stopped at time $N^{(n)}(t)$. Then, almost surely,
$$L^a(t) := \lim_{n\to\infty} 2^{-n+1}D^{(n)}(a,t)$$
exists for all $a\in\mathbb{R}$ and $t\ge0$. Moreover, for every $\gamma<\frac12$, the random field $\{L^a(t)\colon a\in\mathbb{R},\, t\ge0\}$ is almost surely locally $\gamma$-Hölder continuous.

Remark 6.19. Note that $\{L^a(t)\colon a\in\mathbb{R},\,t\ge0\}$ is a stochastic process depending on more than one parameter, and to emphasise this fact we use the notion of a random field. ⋄

The proof uses the following estimate for the sum of independent geometric random variables with mean two, which we prove as Exercise 6.5.

Lemma 6.20. Let $X_1, X_2,\dots$ be independent geometrically distributed random variables with mean 2. Then, for sufficiently small $\varepsilon>0$, for all nonnegative integers $k\le m$,
$$\mathbb{P}\Big\{\Big|\sum_{j=1}^k (X_j-2)\Big| \ge \varepsilon m\Big\} \le 4\exp\big\{-\tfrac15\varepsilon^2 m\big\}.$$

The following lemma is the heart of the proof of Theorem 6.18.

Lemma 6.21. Suppose $a<b$ and let $\{B(t)\colon 0\le t\le T\}$ be a linear Brownian motion stopped at the time $T$ when it first hits a given level above $b$. Let
• $D$ be the number of downcrossings of the interval $[a,b]$,
• $D_l$ be the number of downcrossings of the interval $[a, \frac{a+b}{2}]$,
• $D_u$ be the number of downcrossings of the interval $[\frac{a+b}{2}, b]$.
Then, for sufficiently small $\varepsilon>0$, for all nonnegative integers $k\le m$,
$$\mathbb{P}\big\{|D-\tfrac12 D_l| > \varepsilon m \ \text{or}\ |D-\tfrac12 D_u| > \varepsilon m \,\big|\, D=k\big\} \le 12\exp\big\{-\tfrac15\varepsilon^2 m\big\}.$$

Proof. By Lemma 6.3 we have that, given $\{D=k\}$, there exist independent random variables $X_0, X_1, X_2,\dots$ such that
$$D_l = X_0 + \sum_{j=1}^k X_j,$$
and $X_1, X_2,\dots$ are geometrically distributed with mean 2. An inspection of the proof of Lemma 6.3 reveals that $X_0$ is either zero or also geometrically distributed with mean 2, depending on the starting point of the Brownian motion. Using Lemma 6.20 and Chebyshev's inequality, we get, if $\varepsilon>0$ is small enough,
$$\mathbb{P}\big\{|\tfrac12 D_l - D| > \varepsilon m \,\big|\, D=k\big\} \le \mathbb{P}\Big\{\Big|\sum_{j=1}^k(X_j-2)\Big| > \varepsilon m \,\Big|\, D=k\Big\} + \mathbb{P}\{X_0 > \varepsilon m\} \le 4\exp\big\{-\tfrac{\varepsilon^2}{5}m\big\} + 2\exp\{-\varepsilon m\log 2\} \le 6\exp\big\{-\tfrac{\varepsilon^2}{5}m\big\}.$$
The argument is analogous for $D_u$, and this completes the proof.

We now fix $\gamma<\frac12$ and a large integer $N$. We stop the Brownian motion at the time $T_N$ when it first hits level $N$, and abbreviate $D^{(n)}(a) := D^{(n)}(a, T_N)$. We denote the $n$th dyadic grid by
$$\mathcal{D}_n := \mathcal{D}_n(N) := \big\{k2^{-n}\colon k\in\{-N2^n, -N2^n+1,\dots, N2^n-1\}\big\}.$$

Lemma 6.22. Denote by $\Omega(m)$ the event that, for all $n\ge m$,
(a) $|D^{(n)}(a) - \tfrac12 D^{(n+1)}(a)| \le 2^{n(1-\gamma)}$ for all $a\in[-N,N)$,
(b) $|D^{(n)}(a) - D^{(n)}(b)| \le 2\cdot 2^{n(1-\gamma)}$ for all $a,b\in[-N,N)$ with $|a-b|\le 2^{-n}$.
Then $\lim_{m\uparrow\infty}\mathbb{P}\big(\Omega(m)\big) = 1$.

Proof. Item (a) follows by combining the following three items:
(i) $|D^{(n)}(a) - \tfrac12 D^{(n+1)}(a)| \le \frac{1}{n^2}2^{-n\gamma}D^{(n)}(a)$ for all $a\in[-N,N)$ with $D^{(n)}(a)\ge 2^n$,
(ii) $|D^{(n)}(a) - \tfrac12 D^{(n+1)}(a)| \le 2^{n(1-\gamma)}$ for all $a\in[-N,N)$ with $D^{(n)}(a) < 2^n$,
(iii) $D^{(n)}(a) \le n^2 2^n$ for all $a\in[-N,N)$.
We observe that it is equivalent to show (i), (ii) for all $a\in\mathcal{D}_{n+1}$ and (iii) for all $a\in\mathcal{D}_n$. To estimate the probability of the first item we use Lemma 6.21 with $\varepsilon = \frac{1}{n^2}2^{-n\gamma}$ and $m=k$. We get that
$$\sum_{n=m}^{\infty}\sum_{a\in\mathcal{D}_{n+1}} \mathbb{P}\big\{|D^{(n)}(a)-\tfrac12 D^{(n+1)}(a)| > \tfrac{1}{n^2}2^{-n\gamma}D^{(n)}(a) \text{ and } D^{(n)}(a)\ge 2^n\big\} \le \sum_{n=m}^{\infty}\sum_{a\in\mathcal{D}_{n+1}} 12\exp\big\{-\tfrac{1}{5n^4}2^{n(1-2\gamma)}\big\} \le (48N)\sum_{n=m}^{\infty} 2^n \exp\big\{-\tfrac{1}{5n^4}2^{n(1-2\gamma)}\big\} \xrightarrow{m\to\infty} 0.$$
For the second item we use Lemma 6.21 with $\varepsilon = 2^{-\gamma n}$ and $m = 2^n > k$. This gives that
$$\sum_{n=m}^{\infty}\sum_{a\in\mathcal{D}_{n+1}}\mathbb{P}\big\{|D^{(n)}(a)-\tfrac12 D^{(n+1)}(a)| > 2^{n(1-\gamma)} \text{ and } D^{(n)}(a) < 2^n\big\} \le \sum_{n=m}^{\infty}\sum_{a\in\mathcal{D}_{n+1}} 12\exp\big\{-\tfrac15 2^{n(1-2\gamma)}\big\} \le (48N)\sum_{n=m}^{\infty} 2^n\exp\big\{-\tfrac15 2^{n(1-2\gamma)}\big\} \xrightarrow{m\to\infty} 0.$$
For the third item we use that the random variable $D^{(n)}(a)$ is geometrically distributed with parameter $\frac{2^{-n}}{N-a} \ge \frac{2^{-n}}{2N}$.
We therefore obtain, for some sequence $\delta_n\to0$,
$$\mathbb{P}\big\{D^{(n)}(a) > n^2 2^n\big\} \le \Big(1-\frac{2^{-n}}{2N}\Big)^{n^2 2^n - 1} \le \exp\Big\{-n^2\,\frac{1+\delta_n}{2N}\Big\},$$
hence, for sufficiently large $m$,
$$\sum_{n=m}^{\infty}\sum_{a\in\mathcal{D}_n}\mathbb{P}\big\{D^{(n)}(a) > n^2 2^n\big\} \le \sum_{n=m}^{\infty}(2N)2^n\exp\Big\{-n^2\,\frac{1+\delta_n}{2N}\Big\} \xrightarrow{m\to\infty} 0.$$
This completes the estimates needed for item (a).

Item (b) need only be checked for all $a,b\in\mathcal{D}_n$ with $|a-b| = 2^{-n}$. Note that $D^{(n)}(a)$, resp. $D^{(n)}(b)$, are the numbers of downcrossings of the lower, resp. upper, half of an interval of length $2^{-n+1}$, which may or may not be dyadic. Denote by $\tilde D^{(n-1)}(a) = \tilde D^{(n-1)}(b)$ the number of downcrossings of this interval. Then
$$\mathbb{P}\big\{|D^{(n)}(a)-D^{(n)}(b)| > 2\cdot2^{n(1-\gamma)}\big\} \le \mathbb{P}\big\{|D^{(n)}(a)-\tfrac12\tilde D^{(n-1)}(a)| > 2^{n(1-\gamma)}\big\} + \mathbb{P}\big\{|D^{(n)}(b)-\tfrac12\tilde D^{(n-1)}(b)| > 2^{n(1-\gamma)}\big\},$$
and summability of these probabilities over all $a,b\in\mathcal{D}_n$ with $|a-b| = 2^{-n}$ and $n\ge m$ has been established in the proof of item (a). This completes the proof.

Lemma 6.23. On the set $\Omega(m)$ we have that
$$L^a(T_N) := \lim_{n\to\infty} 2^{-n+1}D^{(n)}(a)$$
exists for every $a\in[-N,N)$.

Proof. We show that the sequence defined by $2^{-n+1}D^{(n)}(a)$, for $n\in\mathbb{N}$, is a Cauchy sequence. Indeed, by item (a) in the definition of the set $\Omega(m)$ we get that, for any $a\in[-N,N)$ and $n\ge m$,
$$|2^{-n+1}D^{(n)}(a) - 2^{-n}D^{(n+1)}(a)| \le 2\cdot 2^{-n\gamma}.$$
Thus, for any $n\ge m$,
$$\sup_{k\ge n}|2^{-n+1}D^{(n)}(a) - 2^{-k+1}D^{(k)}(a)| \le \sum_{k=n}^{\infty}|2^{-k+1}D^{(k)}(a) - 2^{-k}D^{(k+1)}(a)| \le \sum_{k=n}^{\infty} 2\cdot 2^{-k\gamma} \xrightarrow{n\to\infty} 0,$$
and thus the sequence is a Cauchy sequence and therefore convergent.

Lemma 6.24. On the set $\Omega(m)$ the process $\{L^a(T_N)\colon a\in[-N,N)\}$ is $\gamma$-Hölder continuous.

Proof. Fix $a,b\in[-N,N)$ with $2^{-n-1}\le a-b\le 2^{-n}$ for some $n\ge m$. Then, using item (a) and item (b) in the definition of $\Omega(m)$, for all $k\ge n$,
$$|2^{-k+1}D^{(k)}(a) - 2^{-k+1}D^{(k)}(b)| \le |2^{-n+1}D^{(n)}(a) - 2^{-n+1}D^{(n)}(b)| + \sum_{j=n}^{k-1}|2^{-j}D^{(j+1)}(a) - 2^{-j+1}D^{(j)}(a)| + \sum_{j=n}^{k-1}|2^{-j}D^{(j+1)}(b) - 2^{-j+1}D^{(j)}(b)| \le 4\cdot 2^{-n\gamma} + 4\sum_{j=n}^{\infty}2^{-j\gamma}.$$
Letting $k\uparrow\infty$, we get
$$|L^a(T_N) - L^b(T_N)| \le \Big(4 + \frac{4}{1-2^{-\gamma}}\Big)2^{-n\gamma} \le \Big(2^{2+\gamma} + \frac{2^{2+\gamma}}{1-2^{-\gamma}}\Big)|a-b|^{\gamma},$$
which completes the proof.

Lemma 6.25. For any fixed time $t>0$, almost surely, the limit
$$L^a(t) := \lim_{n\to\infty} 2^{-n+1}D^{(n)}(a,t)$$
exists for all $a\in\mathbb{R}$ and moreover $\{L^a(t)\colon a\in\mathbb{R}\}$ is $\gamma$-Hölder continuous.

Proof. Given $t>0$, define the auxiliary Brownian motion $\{B_t(s)\colon s\ge0\}$ by $B_t(s) = B(t+s)$ and denote by $D_t^{(n)}(a)$ the number of downcrossings associated to the auxiliary Brownian motion. Then, almost surely, $L_t^a(T_N) := \lim_{n\uparrow\infty} 2^{-n+1}D_t^{(n)}(a)$ exists for all $a\in\mathbb{R}$ and integers $N$. On this event we pick $N$ so large that $T_N > t$. Define $L^a(t) := L^a(T_N) - L_t^a(T_N)$, and observe that $\{L^a(t)\colon a\in\mathbb{R}\}$ defined like this is $\gamma$-Hölder continuous by Lemma 6.24. It remains to show that this definition agrees with the one stated in the lemma. To this end, observe that
$$D^{(n)}(a,T_N) - D_t^{(n)}(a,T_N) - 1 \le D^{(n)}(a,t) \le D^{(n)}(a,T_N) - D_t^{(n)}(a,T_N).$$
Multiplying by $2^{-n+1}$ and taking a limit proves the claimed convergence.

Lemma 6.26. Almost surely,
$$L^a(t) := \lim_{n\to\infty} 2^{-n+1}D^{(n)}(a,t)$$
exists for every $t\ge0$ and $a\in\mathbb{R}$, and $\{L^a(t)\colon a\in\mathbb{R},\, t\ge0\}$ is $\gamma$-Hölder continuous.

Proof. It suffices to look at $t\in[0,N)$ and $a\in[-N,N)$. Recall the definition of the dyadic points $\mathcal{D}_n$ in $[-N,N)$ and additionally define dyadic points in $[0,N)$ by
$$\mathcal{H}_m = \big\{k2^{-m}\colon k\in\{0,\dots,N2^m-1\}\big\}, \qquad \mathcal{H} = \bigcup_{m=1}^{\infty}\mathcal{H}_m.$$
We show that the claimed statements hold on the set
$$\bigcap_{t\in\mathcal{H}}\big\{L^a(t) \text{ exists for all } a\in[-N,N) \text{ and } a\mapsto L^a(t) \text{ is } \gamma\text{-Hölder continuous}\big\} \cap \bigcap_{m>M}\bigcap_{t\in\mathcal{H}_m}\bigcap_{a\in\mathcal{D}_m}\big\{L^a(t+2^{-m}) - L^a(t) \le 2^{-m\gamma}\big\},$$
which, by choosing $M$ suitably, has probability arbitrarily close to one by Lemma 6.25 and Lemma 6.7.
Given any $t\in[0,N)$ and $a\in[-N,N)$, for any large $m$, we find $t_1,t_2\in\mathcal{H}_m$ with $t_2-t_1 = 2^{-m}$ and $t\in[t_1,t_2]$. We have
$$2^{-n+1}D^{(n)}(a,t_1) \le 2^{-n+1}D^{(n)}(a,t) \le 2^{-n+1}D^{(n)}(a,t_2).$$
Both bounds converge on our set, and the difference of the limits is $L^a(t_2) - L^a(t_1)$. We can then find $b\in\mathcal{D}_m$ with $|L^a(t_1)-L^b(t_1)| < 2^{-m\gamma}$ and $|L^a(t_2)-L^b(t_2)| < 2^{-m\gamma}$, and get
$$0 \le L^a(t_2) - L^a(t_1) \le |L^a(t_2)-L^b(t_2)| + |L^b(t_2)-L^b(t_1)| + |L^a(t_1)-L^b(t_1)| \le 3\cdot 2^{-m\gamma},$$
which can be made arbitrarily small by choice of $m$, proving simultaneous convergence.

For the proof of continuity, suppose $a,b\in[-N,N)$ and $s,t\in[0,N)$ with $2^{-m-1}\le|a-b|\le 2^{-m}$ and $2^{-m-1}\le t-s\le 2^{-m}$ for some $m\ge M$. We pick $s_1,s_2\in\mathcal{H}_m$ and $t_1,t_2\in\mathcal{H}_m$ such that
$$s-2^{-m} < s_1 \le s \le s_2 < s+2^{-m} \quad\text{and}\quad t-2^{-m} < t_1 \le t \le t_2 < t+2^{-m},$$
and $a_1, b_1\in\mathcal{D}_m$ with $|a-a_1|\le 2^{-m}$ and $|b-b_1|\le 2^{-m}$. Then
$$L^a(t) - L^b(s) \le L^a(t_2) - L^b(s_1) \le |L^a(t_2)-L^{a_1}(t_2)| + |L^{a_1}(t_2)-L^{a_1}(s_1)| + |L^{a_1}(s_1)-L^b(s_1)|,$$
$$L^a(s) - L^b(t) \le L^a(s_2) - L^b(t_1) \le |L^a(s_2)-L^{a_1}(s_2)| + |L^{a_1}(s_2)-L^{a_1}(t_1)| + |L^{a_1}(t_1)-L^b(t_1)|,$$
and all contributions on the right are bounded by constant multiples of $2^{-m\gamma}$, by the construction of our set. This completes the proof of $\gamma$-Hölder continuity.

This completes the proof of Trotter's theorem, Theorem 6.18.

3. The Ray-Knight theorem

We now have a closer look at the distributions of the local times $L^x(T)$ as a function of the level $x$ in the case that Brownian motion is started at an arbitrary point and stopped at the time $T$ when it first hits level zero. The following remarkable distributional identity goes back to the work of Ray and Knight.

Theorem 6.27 (Ray-Knight theorem). Suppose $a>0$ and $\{B(t)\colon 0\le t\le T\}$ is a linear Brownian motion started at $a$ and stopped at time $T = \inf\{t\ge0\colon B(t)=0\}$, when it reaches level zero for the first time. Then
$$\{L^x(T)\colon 0\le x\le a\} \overset{d}{=} \{|W(x)|^2\colon 0\le x\le a\},$$
where $\{W(x)\colon x\ge0\}$ is a standard planar Brownian motion.

Figure 5. The Brownian path on the left, and its local time as a function of the level, on the right.

Remark 6.28. The process $\{|W(x)|^2\colon x\ge0\}$ of squared norms of a planar Brownian motion is called the squared two-dimensional Bessel process. For any fixed $x$, the random variable $|W(x)|^2$ is exponentially distributed with mean $2x$, see Lemma II.3.8. ⋄

We carry out the proof of the Ray-Knight theorem in three steps. As a warm-up, we look at one point $0<x\le a$. Recall from the downcrossing representation, Theorem 6.1, that
$$\lim_{n\to\infty}\tfrac{2}{n}\,D_n(x) = L^x(T) \quad\text{almost surely},$$
where $D_n(x)$ denotes the number of downcrossings of the interval $[x-\tfrac1n, x]$ before time $T$.

Lemma 6.29. For any $0<x\le a$, we have $\tfrac2n D_n(x) \Longrightarrow |W(x)|^2$ as $n\uparrow\infty$.

Proof. By the strong Markov property and the exit probabilities from an interval described in Theorem 2.45, it is clear that, provided $n > 1/x$, the random variable $D_n(x)$ is geometrically distributed with (success) parameter $1/(nx)$, i.e. $\mathbb{P}\{D_n(x)=k\} = \frac{1}{nx}\big(1-\frac{1}{nx}\big)^{k-1}$ for all $k\in\{1,2,\dots\}$. Hence, as $n\to\infty$, we obtain that
$$\mathbb{P}\{D_n(x) > ny/2\} = \Big(1-\frac{1}{nx}\Big)^{\lfloor ny/2\rfloor} \longrightarrow e^{-y/(2x)},$$
and the result follows, as $|W(x)|^2$ is exponentially distributed with mean $2x$.
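A Monte Carlo sanity check of this one-point version (our own sketch; runs that fail to hit zero within the simulated horizon are discarded, which introduces a small bias): for a path started at $a=1$ and stopped at $T$, the occupation estimate of $L^x(T)$ at $x=1/2$ should look exponential with mean and standard deviation $2x = 1$.

```python
import numpy as np

rng = np.random.default_rng(5)
a, x, dt, eps = 1.0, 0.5, 1e-4, 0.02
samples = []
while len(samples) < 400:
    path = a + np.cumsum(rng.normal(0, np.sqrt(dt), 2_000_000))
    hit = np.argmax(path <= 0)          # first index at or below zero
    if path[hit] > 0:
        continue                        # no hit within the horizon: discard run
    path = path[:hit + 1]
    occ = dt * np.count_nonzero(np.abs(path - x) <= eps)
    samples.append(occ / (2 * eps))     # occupation estimate of L^x(T)

print("mean:", np.mean(samples), "  std:", np.std(samples), "  target: 2x =", 2 * x)
```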
Lemma 6.29 is the 'one-point version' of Theorem 6.27. The essence of the Ray-Knight theorem is captured in the 'two-point version', which we prove next. We fix two points $x$ and $x+h$ with $0<x<x+h<a$. The next three lemmas are the crucial ingredients for the proof of Theorem 6.27.

Lemma 6.30. Let $0<x<x+h<a$. Then, for all $n>1/h$, we have
$$D_n(x+h) = D + \sum_{j=1}^{D_n(x)} I_j N_j,$$
where
• $D = D^{(n)}$ is the number of downcrossings of the interval $[x+h-\tfrac1n, x+h]$ before the Brownian motion hits level $x$,
• for any $j\in\mathbb{N}$ the random variable $I_j = I_j^{(n)}$ is Bernoulli distributed with mean $\frac{1}{nh+1}$,
• for any $j\in\mathbb{N}$ the random variable $N_j = N_j^{(n)}$ is geometrically distributed with mean $nh+1$,
and all these random variables are independent of each other and of $D_n(x)$.

Figure 6. The random variables $I_j$ and $N_j$ depend only on the pieces $B^{(2j-1)}$ for $j\ge1$. For this sample $I_j = 1$, as the path hits $x+h$ before $x-\tfrac1n$, and $N_j = 2$, because the path downcrosses $[x+h-\tfrac1n, x+h]$ twice before hitting $x-\tfrac1n$.

Proof. The decomposition of $D_n(x+h)$ is based on counting the number of downcrossings of the interval $[x+h-\tfrac1n, x+h]$ that have taken place between the stopping times in the sequence
$$\tau_0 = \inf\{t>0\colon B(t)=x\}, \qquad \tau_1 = \inf\{t>\tau_0\colon B(t) = x-\tfrac1n\},$$
$$\tau_{2j} = \inf\{t>\tau_{2j-1}\colon B(t)=x\}, \qquad \tau_{2j+1} = \inf\{t>\tau_{2j}\colon B(t)=x-\tfrac1n\},$$
for $j\ge1$. By the strong Markov property the pieces
$$B^{(0)}\colon[0,\tau_0]\to\mathbb{R},\quad B^{(0)}(s) = B(s), \qquad B^{(j)}\colon[0,\tau_j-\tau_{j-1}]\to\mathbb{R},\quad B^{(j)}(s) = B(\tau_{j-1}+s),\ j\ge1,$$
are all independent. The crucial observation of the proof is that the random variable $D_n(x)$ is a function of the pieces $B^{(2j)}$ for $j\ge1$, whereas we shall define the random variables $D, I_1, I_2,\dots$ and $N_1, N_2,\dots$ depending only on the other pieces $B^{(0)}$ and $B^{(2j-1)}$ for $j\ge1$. First, let $D$ be the number of downcrossings of $[x+h-\tfrac1n, x+h]$ during the time interval $[0,\tau_0]$. Then fix $j\ge1$ and hence a piece $B^{(2j-1)}$. Define $I_j$ to be the indicator of the event that $B^{(2j-1)}$ reaches level $x+h$ during its lifetime. By Theorem 2.45 this event has probability $\frac{1}{nh+1}$. Observe that the number of downcrossings by $B^{(2j-1)}$ is zero if the event fails. If the event holds, we define $N_j$ as the number of downcrossings of $[x+h-\tfrac1n, x+h]$ by $B^{(2j-1)}$, which is a geometric random variable with mean $nh+1$ by the strong Markov property and Theorem 2.45. The claimed decomposition follows now from the fact that the pieces $B^{(2j)}$ for $j\ge1$ do not upcross the interval $[x+h-\tfrac1n, x+h]$ by definition and that $B^{(2j-1)}$ for $j = 1,\dots,D_n(x)$ are exactly the pieces that take place before the Brownian motion reaches level zero.

Lemma 6.31. Suppose $nu_n$ are nonnegative, even integers and $u_n\to u$. Then
$$\frac2n D^{(n)} + \frac2n\sum_{j=1}^{nu_n/2} I_j^{(n)} N_j^{(n)} \Longrightarrow \tilde X^2 + \tilde Y^2 + 2\sum_{j=1}^{M}\tilde Z_j \quad\text{as } n\uparrow\infty,$$
where $\tilde X, \tilde Y$ are normally distributed with mean zero and variance $h$, the random variable $M$ is Poisson distributed with parameter $u/(2h)$, and $\tilde Z_1, \tilde Z_2,\dots$ are exponentially distributed with mean $h$, and all these random variables are independent.

Proof. By Lemma 6.29, we have, for $\tilde X, \tilde Y$ as defined in the lemma,
$$\tfrac2n D^{(n)} \Longrightarrow |W(h)|^2 \overset{d}{=} \tilde X^2 + \tilde Y^2 \quad\text{as } n\uparrow\infty.$$
Moreover, we observe that
$$\frac2n\sum_{j=1}^{nu_n/2} I_j^{(n)} N_j^{(n)} \overset{d}{=} \frac2n\sum_{j=1}^{B_n} N_j^{(n)},$$
where $B_n$ is binomial with parameters $nu_n/2\in\{0,1,\dots\}$ and $\frac{1}{nh+1}\in(0,1)$ and independent of $N_1^{(n)}, N_2^{(n)},\dots$. We now show that, when $n\uparrow\infty$, the random variables $B_n$ converge in distribution to $M$ and the random variables $\frac1n N_j^{(n)}$ converge to $\tilde Z_j$, as defined in the lemma. For this purpose it suffices to show convergence of the Laplace transforms, see Proposition II.1.8.
First note that, for $\lambda,\theta>0$, we have
$$\mathbb{E}\exp\{-\lambda\tilde Z_j\} = \frac{1}{\lambda h+1}, \qquad \mathbb{E}\big[\theta^M\big] = \exp\Big\{-\frac{u(1-\theta)}{2h}\Big\},$$
and hence
$$\mathbb{E}\exp\Big\{-\lambda\sum_{j=1}^M \tilde Z_j\Big\} = \mathbb{E}\Big(\frac{1}{\lambda h+1}\Big)^M = \exp\Big\{-\frac{u}{2h}\,\frac{\lambda h}{\lambda h+1}\Big\} = \exp\Big\{-\frac{u\lambda}{2\lambda h+2}\Big\}.$$
Convergence of $\frac1n N_j^{(n)}$ is best seen using tail probabilities,
$$\mathbb{P}\Big\{\tfrac1n N_j^{(n)} > a\Big\} = \Big(1-\frac{1}{nh+1}\Big)^{\lfloor na\rfloor} \longrightarrow \exp\big\{-\tfrac{a}{h}\big\} = \mathbb{P}\{\tilde Z_j > a\}.$$
Hence, for a suitable sequence $\delta_n\to0$,
$$\mathbb{E}\exp\big\{-\lambda\tfrac1n N_j^{(n)}\big\} = \frac{1+\delta_n}{\lambda h+1}.$$
For the binomial distributions we have
$$\mathbb{E}\big[\theta^{B_n}\big] = \Big(\frac{\theta}{nh+1} + \Big(1-\frac{1}{nh+1}\Big)\Big)^{nu_n/2} \longrightarrow \exp\Big\{-\frac{u(1-\theta)}{2h}\Big\},$$
and thus
$$\lim_{n\uparrow\infty}\mathbb{E}\exp\Big\{-\lambda\frac1n\sum_{j=1}^{B_n}N_j^{(n)}\Big\} = \lim_{n\uparrow\infty}\mathbb{E}\Big[\Big(\frac{1+\delta_n}{\lambda h+1}\Big)^{B_n}\Big] = \exp\Big\{-\frac{u}{2h}\,\frac{\lambda h}{\lambda h+1}\Big\} = \exp\Big\{-\frac{u\lambda}{2\lambda h+2}\Big\} = \mathbb{E}\exp\Big\{-\lambda\sum_{j=1}^M\tilde Z_j\Big\}.$$

Lemma 6.32. Suppose $X$ is standard normally distributed, $Z_1, Z_2,\dots$ standard exponentially distributed and $N$ Poisson distributed with parameter $\ell^2/2$ for some $\ell>0$. If all these random variables are independent, then
$$(X+\ell)^2 \overset{d}{=} X^2 + 2\sum_{j=1}^N Z_j.$$

Proof. It suffices to show that the Laplace transforms of the random variables on the two sides of the equation agree. Let $\lambda>0$. Completing the square, we find
$$\mathbb{E}\exp\{-\lambda(X+\ell)^2\} = \frac{1}{\sqrt{2\pi}}\int\exp\{-\lambda(x+\ell)^2 - x^2/2\}\,dx = \frac{1}{\sqrt{2\pi}}\int\exp\Big\{-\frac12\Big(\sqrt{2\lambda+1}\,x + \frac{2\lambda\ell}{\sqrt{2\lambda+1}}\Big)^2 - \lambda\ell^2 + \frac{2\lambda^2\ell^2}{2\lambda+1}\Big\}\,dx = \frac{1}{\sqrt{2\lambda+1}}\exp\Big\{-\frac{\lambda\ell^2}{2\lambda+1}\Big\}.$$
From the special case $\ell=0$ we get $\mathbb{E}\exp\{-\lambda X^2\} = \frac{1}{\sqrt{2\lambda+1}}$. For any $\theta>0$,
$$\mathbb{E}\big[\theta^N\big] = \exp\{-\ell^2/2\}\sum_{k=0}^{\infty}\frac{(\ell^2\theta/2)^k}{k!} = \exp\{(\theta-1)\ell^2/2\}.$$
Using this and that $\mathbb{E}\exp\{-2\lambda Z_j\} = \frac{1}{2\lambda+1}$ we get
$$\mathbb{E}\exp\Big\{-\lambda\Big(X^2 + 2\sum_{j=1}^N Z_j\Big)\Big\} = \frac{1}{\sqrt{2\lambda+1}}\,\mathbb{E}\Big(\frac{1}{2\lambda+1}\Big)^N = \frac{1}{\sqrt{2\lambda+1}}\exp\Big\{-\frac{\lambda\ell^2}{2\lambda+1}\Big\},$$
which completes the proof.

Remark 6.33. An alternative proof of Lemma 6.32 will be given in Exercise 6.6. ⋄

By combining the previous three lemmas we obtain the following convergence result for the conditional distribution of $D_n(x+h)$ given $D_n(x)$, which is the 'two-point version' of the Ray-Knight theorem.

Lemma 6.34. Suppose $nu_n$ are nonnegative, even integers and $u_n\to u$. For any $\lambda\ge0$ we have
$$\lim_{n\to\infty}\mathbb{E}\big[\exp\{-\lambda\tfrac2n D_n(x+h)\}\,\big|\,\tfrac2n D_n(x) = u_n\big] = \mathbb{E}_{(0,\sqrt u)}\big[\exp\{-\lambda|W(h)|^2\}\big],$$
where $\{W(x)\colon x\ge0\}$ denotes a planar Brownian motion started in $(0,\sqrt u)\in\mathbb{R}^2$.

Proof. Combining Lemmas 6.30 and 6.31 we get
$$\lim_{n\to\infty}\mathbb{E}\big[\exp\{-\lambda\tfrac2n D_n(x+h)\}\,\big|\,\tfrac2n D_n(x) = u_n\big] = \mathbb{E}\Big[\exp\Big\{-\lambda\Big(\tilde X^2 + \tilde Y^2 + 2\sum_{j=1}^M\tilde Z_j\Big)\Big\}\Big] = \mathbb{E}\Big[\exp\Big\{-\lambda h\Big(X^2 + Y^2 + 2\sum_{j=1}^M Z_j\Big)\Big\}\Big],$$
with $X, Y$ standard normally distributed, $Z_1, Z_2,\dots$ standard exponentially distributed and $M$ Poisson distributed with parameter $\ell^2/2$, for $\ell = \sqrt{u/h}$. By Lemma 6.32 the right hand side can thus be rewritten as
$$\mathbb{E}\big[\exp\big\{-\lambda h\big((X+\sqrt{u/h})^2 + Y^2\big)\big\}\big] = \mathbb{E}_{(0,\sqrt u)}\big[\exp\{-\lambda|W(h)|^2\}\big],$$
which proves the lemma.

Now we complete the proof of Theorem 6.27. Note that, as both $\{L^x(T)\colon x\ge0\}$ and $\{|W(x)|^2\colon x\ge0\}$ are continuous processes, it suffices to show that, for any $0<x_1<\dots<x_m<a$, the vectors
$$\big(L^{x_1}(T),\dots,L^{x_m}(T)\big) \quad\text{and}\quad \big(|W(x_1)|^2,\dots,|W(x_m)|^2\big)$$
have the same distribution. The Markov property of the downcrossing numbers, which approximate the local times, allows us to reduce this problem to the study of the 'two-point version'.

Lemma 6.35. For all sufficiently large integers $n$, the process $\{D_n(x_k)\colon k=1,\dots,m\}$ is a (possibly inhomogeneous) Markov chain.

Proof. Fix $k\in\{2,\dots,m\}$. By Lemma 6.30 applied to $x = x_{k-1}$ and $h = x_k - x_{k-1}$ we can write $D_n(x_k)$ as a function of $D_n(x_{k-1})$ and various random variables which, by construction, are independent of $D_n(x_1),\dots,D_n(x_{k-1})$. This establishes the Markov property.
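The distributional identity of Lemma 6.32, used above, is easy to test by simulation. The following minimal sketch compares empirical quantiles of the two sides (with $\ell = 1.5$ chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(6)
ell, n = 1.5, 50_000

lhs = (rng.standard_normal(n) + ell) ** 2            # (X + ell)^2
N = rng.poisson(ell**2 / 2, n)                       # N ~ Poisson(ell^2 / 2)
rhs = rng.standard_normal(n) ** 2 + 2 * np.array(
    [rng.standard_exponential(k).sum() for k in N])  # X^2 + 2 * sum of N exponentials

for q in (0.1, 0.5, 0.9):
    print(f"quantile {q}: {np.quantile(lhs, q):.3f} vs {np.quantile(rhs, q):.3f}")
```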
Note that, by rotational invariance of planar Brownian motion, $\{|W(x_k)|^2\colon k=1,\dots,m\}$ is a Markov chain with transition probabilities given by
$$\mathbb{E}\big[\exp\{-\lambda|W(x_{k+1})|^2\}\,\big|\,|W(x_k)|^2 = u\big] = \mathbb{E}_{(0,\sqrt u)}\big[\exp\{-\lambda|W(x_{k+1}-x_k)|^2\}\big],$$
for all $\lambda>0$. The following general fact about the convergence of families of Markov chains ensures that we have done enough to complete the proof of Theorem 6.27.

Lemma 6.36. Suppose, for $n=1,2,\dots$, that $\{X_k^{(n)}\colon k=1,\dots,m\}$ is a Markov chain with discrete state space $\Omega_n\subset[0,\infty)$ and that $\{X_k\colon k=1,\dots,m\}$ is a Markov chain with state space $[0,\infty)$. Suppose further that
(1) $(X_1^{(n)},\dots,X_m^{(n)})$ converges almost surely to some random vector $(Y_1,\dots,Y_m)$,
(2) $X_1^{(n)} \Rightarrow X_1$ as $n\uparrow\infty$,
(3) for all $k=1,\dots,m-1$, $\lambda>0$ and $y_n\in\Omega_n$ with $y_n\to y$, we have
$$\lim_{n\to\infty}\mathbb{E}\big[\exp\{-\lambda X_{k+1}^{(n)}\}\,\big|\,X_k^{(n)} = y_n\big] = \mathbb{E}\big[\exp\{-\lambda X_{k+1}\}\,\big|\,X_k = y\big].$$
Then $(X_1^{(n)},\dots,X_m^{(n)}) \Longrightarrow (X_1,\dots,X_m)$ and, in particular, the vectors $(X_1,\dots,X_m)$ and $(Y_1,\dots,Y_m)$ have the same distribution.

Proof. Recall from Proposition II.1.8 that it suffices to show that the Laplace transforms converge. Let $\lambda_1,\dots,\lambda_m\ge0$. By assumption (2) we have $X_1^{(n)}\Rightarrow X_1$ and hence we may assume, by way of induction, that for some fixed $k=1,\dots,m-1$ we have
$$(X_1^{(n)},\dots,X_k^{(n)}) \Longrightarrow (X_1,\dots,X_k).$$
This implies, in particular, that $(X_1,\dots,X_k)$ and $(Y_1,\dots,Y_k)$ have the same distribution. Define
$$\Phi_n\colon\Omega_n\to[0,1], \qquad \Phi_n(y) = \mathbb{E}\big[\exp\{-\lambda_{k+1}X_{k+1}^{(n)}\}\,\big|\,X_k^{(n)} = y\big],$$
and
$$\Phi\colon[0,\infty)\to[0,1], \qquad \Phi(y) = \mathbb{E}\big[\exp\{-\lambda_{k+1}X_{k+1}\}\,\big|\,X_k = y\big].$$
Then, combining assumptions (1) and (3), $\Phi_n(X_k^{(n)})\to\Phi(Y_k)$ almost surely. Hence, using this and once more assumption (1),
$$\mathbb{E}\Big[\exp\Big\{-\sum_{j=1}^{k+1}\lambda_j X_j^{(n)}\Big\}\Big] = \mathbb{E}\Big[\exp\Big\{-\sum_{j=1}^{k}\lambda_j X_j^{(n)}\Big\}\Phi_n(X_k^{(n)})\Big] \longrightarrow \mathbb{E}\Big[\exp\Big\{-\sum_{j=1}^{k}\lambda_j Y_j\Big\}\Phi(Y_k)\Big].$$
As the vectors $(X_1,\dots,X_k)$ and $(Y_1,\dots,Y_k)$ have the same distribution, the limit can be rewritten as
$$\mathbb{E}\Big[\exp\Big\{-\sum_{j=1}^{k}\lambda_j X_j\Big\}\Phi(X_k)\Big] = \mathbb{E}\Big[\exp\Big\{-\sum_{j=1}^{k+1}\lambda_j X_j\Big\}\Big],$$
and this completes the induction step. Finally, as $(X_1^{(n)},\dots,X_m^{(n)})$ converges almost surely, and hence also in distribution, to $(Y_1,\dots,Y_m)$, this vector must have the same distribution as $(X_1,\dots,X_m)$. This completes the proof.

Proof of Theorem 6.27. We use Lemma 6.36 with $X_k^{(n)} = \tfrac2n D_n(x_k)$, $X_k = |W(x_k)|^2$ and $Y_k = L^{x_k}(T)$. Then assumption (1) is satisfied by the downcrossing representation, assumption (2) follows from Lemma 6.29 and assumption (3) from Lemma 6.34. Lemma 6.36 thus gives that the random vectors $(L^{x_1}(T),\dots,L^{x_m}(T))$ and $(|W(x_1)|^2,\dots,|W(x_m)|^2)$ have the same distribution, which concludes the proof.

As an easy application of the Ray-Knight theorem, we answer the question whether, almost surely, simultaneously for all levels $x\in[0,a)$, the local times at level $x$ are positive.

Theorem 6.37 (Ray's theorem). Suppose $a>0$ and $\{B(t)\colon 0\le t\le T_a\}$ is a linear Brownian motion started at zero and stopped at time $T_a = \inf\{t\ge0\colon B(t)=a\}$, when it reaches level $a$ for the first time. Then, almost surely, $L^x(T_a) > 0$ for all $0\le x<a$.

Proof. The statement can be reworded as saying that the process $\{L^{a-x}(T_a)\colon 0<x\le a\}$ almost surely does not hit zero. By the Ray-Knight theorem (applied to the Brownian motion $\{a-B(t)\colon t\ge0\}$) this process has the same distribution as $\{|W(x)|^2\colon 0<x\le a\}$ for a standard planar Brownian motion $\{W(x)\colon x\ge0\}$, which, by Theorem 3.19, never returns to the origin.

Ray's theorem can be exploited to give a result on the Hausdorff dimension of the level sets of the Brownian motion, which holds simultaneously for all levels $a\in\mathbb{R}$.

Theorem 6.38. Almost surely,
$$\dim\{t\ge0\colon B(t)=a\} \ge \tfrac12 \quad\text{for all } a\in\mathbb{R}.$$

Proof. Obviously, it suffices to show that, for every fixed $a>0$, almost surely,
$$\dim\{0\le t<T_a\colon B(t)=x\} \ge \tfrac12 \quad\text{for all } 0\le x<a.$$
This can be achieved using the mass distribution principle. Considering the increasing function $L^x\colon[0,T_a)\to[0,\infty)$ as the distribution function of a measure $\ell^x$, we infer from Theorem 6.37 and Theorem 6.18 that, almost surely, for every $x\in[0,a)$, the measure $\ell^x$ is a mass distribution on the set $\{0\le t<T_a\colon B(t)=x\}$. By Theorem 6.18, for any $\gamma<1/2$, almost surely, there exists a (random) $C>0$ such that, for all $x\in[0,a)$, $t\in[0,T_a)$ and $\varepsilon\in(0,1)$,
$$\ell^x(t-\varepsilon, t+\varepsilon) \le |L^x(t+\varepsilon) - L^x(t-\varepsilon)| \le C(2\varepsilon)^{\gamma}.$$
The claim therefore follows from the mass distribution principle, Theorem 4.19.

Remark 6.39. Equality holds in Theorem 6.38. We will obtain the full result later as an easy corollary of Kaufman's dimension doubling theorem, see Theorem 9.28. ⋄
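The dimension statement can be probed by box counting. The sketch below (an illustration with our own parameter choices, and a slight undercount at segment boundaries) counts the dyadic intervals of length $2^{-k}$ meeting the zero set of a simulated path and checks that $\log_2 N_k / k$ approaches $1/2$, consistent with Theorem 6.38 and Remark 6.39:

```python
import numpy as np

rng = np.random.default_rng(7)
steps = 2**22
B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1/steps), steps))])

for k in (6, 8, 10, 12, 14):
    n = 2**k
    seg = B[: (len(B) // n) * n].reshape(n, -1)
    hits = np.count_nonzero((seg.min(axis=1) <= 0) & (seg.max(axis=1) >= 0))
    print(f"k={k:2d}  N_k={hits:5d}  log2(N_k)/k = {np.log2(hits)/k:.3f}")
```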
4. Brownian local time as a Hausdorff measure

In this section we show that the local time $L^0(t)$ can be obtained as an intrinsically defined measure of the random set $\mathrm{Zero}\cap[0,t]$. The only family of intrinsically defined measures on metric spaces we have encountered so far in this book is the family of $\alpha$-dimensional Hausdorff measures. As the $\alpha$-dimensional Hausdorff measure of the zero set is always either zero (if $\alpha\ge\tfrac12$) or infinity (if $\alpha<\tfrac12$), we need to look out for an alternative construction. We need not look very far. The definition of Hausdorff dimension still makes sense if we evaluate coverings by applying, instead of a simple power, an arbitrary nondecreasing function to the diameters of the sets in a covering.

Definition 6.40. A nondecreasing function $\phi\colon[0,\varepsilon)\to[0,\infty)$ with $\phi(0)=0$, defined on a nonempty interval $[0,\varepsilon)$, is called a (Hausdorff) gauge function. Let $X$ be a metric space and $E\subset X$. For every gauge function $\phi$ and $\delta>0$ define
$$\mathcal{H}^{\phi}_{\delta}(E) = \inf\Big\{\sum_{i=1}^{\infty}\phi(|E_i|)\colon E_1, E_2, E_3,\dots \text{ cover } E, \text{ and } |E_i|\le\delta\Big\}.$$
Then
$$\mathcal{H}^{\phi}(E) = \sup_{\delta>0}\mathcal{H}^{\phi}_{\delta}(E) = \lim_{\delta\downarrow0}\mathcal{H}^{\phi}_{\delta}(E)$$
is the generalised $\phi$-Hausdorff measure of the set $E$. ⋄

Theorem 6.41. There exists a constant $c>0$ such that, almost surely, for all $t>0$,
$$L^0(t) = \mathcal{H}^{\varphi}\big(\mathrm{Zero}\cap[0,t]\big), \quad\text{for the gauge function } \varphi(r) = c\sqrt{r\log\log(1/r)}.$$

The remainder of this section is devoted to the proof of this theorem. The material developed here will not be used in the remainder of the book. An important tool in the proof is the following classical theorem of Rogers and Taylor.

Proposition 6.42 (Rogers-Taylor theorem). Let $\mu$ be a Borel measure on $\mathbb{R}^d$ and let $\phi$ be a Hausdorff gauge function.
(i) If $\Lambda\subset\mathbb{R}^d$ is a Borel set and
$$\limsup_{r\downarrow0}\frac{\mu B(x,r)}{\phi(r)} < \alpha \quad\text{for all } x\in\Lambda,$$
then $\mathcal{H}^{\phi}(\Lambda) \ge \alpha^{-1}\mu(\Lambda)$.
(ii) If $\Lambda\subset\mathbb{R}^d$ is a Borel set and
$$\limsup_{r\downarrow0}\frac{\mu B(x,r)}{\phi(r)} > \theta \quad\text{for all } x\in\Lambda,$$
then $\mathcal{H}^{\phi}(\Lambda) \le \kappa_d\,\theta^{-1}\mu(V)$ for any open set $V\subset\mathbb{R}^d$ that contains $\Lambda$, where $\kappa_d$ depends only on $d$.
Moreover, in $d=1$ one can also obtain an analogue of (i) for one-sided intervals.
(iii) If $\Lambda\subset\mathbb{R}$ is a closed set and
$$\limsup_{r\downarrow0}\frac{\mu[x,x+r]}{\phi(r)} < \alpha \quad\text{for all } x\in\Lambda,$$
then $\mathcal{H}^{\phi}(\Lambda) \ge \alpha^{-1}\mu(\Lambda)$.

Remark 6.43. If $\mu$ is finite on compact sets, then $\mu(\Lambda)$ is the infimum of $\mu(V)$ over all open sets $V\supset\Lambda$, see for example [Ru86, 2.18]. Hence $\mu(V)$ can be replaced by $\mu(\Lambda)$ on the right hand side of the inequality in (ii). ⋄

Proof. (i) We write $\Lambda_{\varepsilon} = \{x\in\Lambda\colon \sup_{r\in(0,\varepsilon)}\mu B(x,r)/\phi(r) < \alpha\}$ and note that $\mu(\Lambda_{\varepsilon})\to\mu(\Lambda)$ as $\varepsilon\downarrow0$. Fix $\varepsilon>0$ and consider a cover $\{A_j\}$ of $\Lambda_{\varepsilon}$. Suppose that $A_j$ intersects $\Lambda_{\varepsilon}$ and $r_j = |A_j| < \varepsilon$ for all $j$. Choose $x_j\in A_j\cap\Lambda_{\varepsilon}$ for each $j$.
Then $\mu B(x_j, r_j) < \alpha\phi(r_j)$ for every $j$, whence
$$\sum_{j\ge1}\phi(r_j) \ge \alpha^{-1}\sum_{j\ge1}\mu B(x_j,r_j) \ge \alpha^{-1}\mu(\Lambda_{\varepsilon}).$$
Thus $\mathcal{H}^{\phi}_{\varepsilon}(\Lambda) \ge \mathcal{H}^{\phi}_{\varepsilon}(\Lambda_{\varepsilon}) \ge \alpha^{-1}\mu(\Lambda_{\varepsilon})$. Letting $\varepsilon\downarrow0$ proves (i).

(ii) Let $\varepsilon>0$. For each $x\in\Lambda$, choose a positive $r_x<\varepsilon$ such that $B(x,2r_x)\subset V$ and $\mu B(x,r_x) > \theta\phi(r_x)$; then among the dyadic cubes of diameter at most $r_x$ that intersect $B(x,r_x)$, let $Q_x$ be a cube with $\mu(Q_x)$ maximal. (We consider here dyadic cubes of the form $\prod_{i=1}^d[a_i/2^m, (a_i+1)/2^m)$ where the $a_i$ are integers.) In particular, $Q_x\subset V$ and $|Q_x| > r_x/2$, so the side-length of $Q_x$ is at least $r_x/(2\sqrt d)$. Let $N_d = 1+8\lceil\sqrt d\rceil$ and let $Q_x^*$ be the cube with the same centre $z_x$ as $Q_x$, scaled by $N_d$ (i.e., $Q_x^* = z_x + N_d(Q_x - z_x)$). Observe that $Q_x^*$ contains $B(x,r_x)$, so $B(x,r_x)$ is covered by at most $N_d^d$ dyadic cubes that are translates of $Q_x$. Therefore, for every $x\in\Lambda$, we have
$$\mu(Q_x) \ge N_d^{-d}\,\mu B(x,r_x) > N_d^{-d}\,\theta\phi(r_x).$$
Let $\{Q_{x(j)}\colon j\ge1\}$ be any enumeration of the maximal dyadic cubes among $\{Q_x\colon x\in\Lambda\}$. Then
$$\mu(V) \ge \sum_{j\ge1}\mu(Q_{x(j)}) \ge N_d^{-d}\,\theta\sum_{j\ge1}\phi(r_{x(j)}).$$
The collection of cubes $\{Q_{x(j)}^*\colon j\ge1\}$ forms a cover of $\Lambda$. Since each of these cubes is covered by $N_d^d$ cubes of diameter at most $r_{x(j)}$, we infer that
$$\mathcal{H}^{\phi}_{\varepsilon}(\Lambda) \le N_d^d\sum_{j\ge1}\phi(r_{x(j)}) \le N_d^{2d}\,\theta^{-1}\mu(V).$$
Letting $\varepsilon\downarrow0$ proves (ii).

(iii) Without loss of generality we may assume that $\mu$ has no atoms. Given $\varepsilon>0$ we find $\delta>0$ such that
$$\Lambda_{\delta}(\alpha) = \Big\{t\in\Lambda\colon \sup_{h<\delta}\frac{\mu[t,t+h]}{\varphi(h)} \le \alpha-\delta\Big\}$$
satisfies $\mu(\Lambda_{\delta}(\alpha)) > (1-\varepsilon)\mu(\Lambda)$. Observe that $\Lambda_{\delta}(\alpha)$ is closed. Given a cover $\{\tilde I_j\}$ of $\Lambda$ with $|\tilde I_j|<\delta$ we look at $I_j = [a_j, b_j]$, where $a_j$ is the minimum and $b_j$ the maximum of the closed set $\mathrm{cl}\,\tilde I_j\cap\Lambda_{\delta}(\alpha)$. Then $\{I_j\}$ covers $\Lambda_{\delta}(\alpha)$ and hence
$$\sum_{j\ge1}\varphi(|\tilde I_j|) \ge \sum_{j\ge1}\varphi(|I_j|) \ge (\alpha-\delta)^{-1}\sum_{j\ge1}\mu(I_j) \ge (\alpha-\delta)^{-1}\mu\big(\Lambda_{\delta}(\alpha)\big) \ge (\alpha-\delta)^{-1}(1-\varepsilon)\mu(\Lambda),$$
and (iii) follows for $\delta\downarrow0$, as $\varepsilon>0$ was arbitrary.

For the proof of Theorem 6.41 we first note that, by Theorem 6.10, it is equivalent to show that, for the maximum process $\{M(t)\colon t\ge0\}$ of a Brownian motion $\{B(t)\colon t\ge0\}$, we have, almost surely,
$$M(t) = \mathcal{H}^{\varphi}\big(\mathrm{Rec}\cap[0,t]\big) \quad\text{for all } t\ge0,$$
where $\mathrm{Rec} = \{s\ge0\colon B(s) = M(s)\}$ denotes the set of record points of the Brownian motion. We define the measure $\mu$ on $\mathrm{Rec}$ as given by the distribution function $M$, i.e. $\mu(a,b] = M(b) - M(a)$ for all intervals $(a,b]\subset\mathbb{R}$. Then $\mu$ is also the image measure of the Lebesgue measure on $[0,\infty)$ under the mapping $a\mapsto T_a := \inf\{s\ge0\colon B(s) = a\}$. The main part is to show that, for closed sets $\Lambda\subset[0,\infty)$,
$$(4.1)\qquad c\,\mu(\Lambda) \le \mathcal{H}^{\phi}\big(\Lambda\cap\mathrm{Rec}\big) \le C\,\mu(\Lambda),$$
where $\phi(r) = \sqrt{r\log\log(1/r)}$ and $c, C$ are positive constants.

The easier direction, the lower bound for the Hausdorff measure, follows from part (iii) of the Rogers-Taylor theorem and the upper bound in the law of the iterated logarithm. Indeed, for any level $a>0$ let $T_a = \inf\{s\ge0\colon B(s)=a\}$. Then, by Corollary 5.3 applied to the standard Brownian motion $\{B(T_a+t) - B(T_a)\colon t\ge0\}$, almost surely,
$$\limsup_{r\downarrow0}\frac{M(T_a+r) - M(T_a)}{\sqrt{2r\log\log(1/r)}} = \limsup_{r\downarrow0}\frac{B(T_a+r) - B(T_a)}{\sqrt{2r\log\log(1/r)}} = 1.$$
Defining the set
$$A = \Big\{s\in\mathrm{Rec}\colon \limsup_{r\downarrow0}\mu[s,s+r]/\phi(r) \le \sqrt2\Big\},$$
this means that, for every $a>0$, we have $T_a\in A$ almost surely. By Fubini's theorem,
$$\mathbb{E}\mu(A^c) = \mathbb{E}\int_0^{\infty}\mathbf{1}\{T_a\notin A\}\,da = \int_0^{\infty}\mathbb{P}\{T_a\notin A\}\,da = 0,$$
and hence, almost surely, $\mu(A^c) = 0$. By part (iii) of the Rogers-Taylor theorem, for every closed set $\Lambda\subset[0,\infty)$,
$$\mathcal{H}^{\phi}(\Lambda\cap\mathrm{Rec}) \ge \mathcal{H}^{\phi}(\Lambda\cap A) \ge \tfrac{1}{\sqrt2}\,\mu(\Lambda\cap A) = \tfrac{1}{\sqrt2}\,\mu(\Lambda),$$
showing the left inequality in (4.1). For the harder direction, the upper bound for the Hausdorff measure, it is important to note that the lower bound in Corollary 5.3 does not suffice.
Instead, we need a law of the iterated logarithm which holds simultaneously for $\mathcal{H}^{\phi}$-almost all record times.

Lemma 6.44. For every $\vartheta>0$ small enough, almost surely,
$$\mathcal{H}^{\phi}\Big\{s\in\mathrm{Rec}\colon \limsup_{h\downarrow0}\frac{M(s+h) - M(s-h)}{\phi(h)} < \vartheta\Big\} = 0.$$

Proof. We only need to prove that, for some $\vartheta>0$, the set
$$\Lambda(\vartheta) = \Big\{s\in\mathrm{Rec}\cap(0,1)\colon \limsup_{h\downarrow0}\frac{M(s+h)-M(s-h)}{\phi(h)} < \vartheta\Big\}$$
satisfies $\mathcal{H}^{\phi}(\Lambda(\vartheta)) = 0$. Moreover, denoting
$$\Lambda_{\delta}(\vartheta) = \Big\{s\in\mathrm{Rec}\cap[\delta,1-\delta]\colon \sup_{h<\delta}\frac{M(s+h)-M(s-h)}{\phi(h)} < \vartheta\Big\},$$
we have $\Lambda(\vartheta) = \bigcup_{\delta>0}\Lambda_{\delta}(\vartheta)$. It thus suffices to show, for fixed $\delta>0$, that, almost surely,
$$\liminf_{n\uparrow\infty}\mathcal{H}^{\phi}_{1/n}\big(\Lambda_{\delta}(\vartheta)\big) = 0.$$
Fix $\delta>0$ and a positive integer $n$ such that $1/\sqrt n < \delta$. For parameters $A>1$, $\theta>\vartheta$ and $q>2$, which we choose later, we say that an interval of the form $I = [(k-1)/n, k/n]$ with $k\in\{1,\dots,n\}$ is good if
(i) $I$ contains a record point, in other words, $\tau := \inf\{t\ge\frac{k-1}{n}\colon B(t) = M(t)\} \le \frac{k}{n}$,
and either of the following two conditions holds:
(ii) there exists $j\ge0$ with $1\le q^{j+1}\le\sqrt n$ such that $B(\tau + \frac{q^j}{n}) - B(\tau) < -A\,\phi(\frac{q^j}{n})$;
(iii) for all $j\ge0$ with $1\le q^{j+1}\le\sqrt n$ we have that $B(\tau + \frac{q^{j+1}}{n}) - B(\tau + \frac{q^j}{n}) < \theta\,\phi(\frac{q^{j+1}-q^j}{n})$.
We now argue pathwise, and show that, given $A>1$ and $\theta>\vartheta$, we can find $q>2$ such that the good intervals cover the set $\Lambda_{\delta}(\vartheta)$. Indeed, suppose that $I$ is not good but contains a minimal record point $\tau\in[(k-1)/n, k/n]$. Then there exists $j\ge0$ with $1\le q^{j+1}\le\sqrt n$ such that
$$B\big(\tau + \tfrac{q^j}{n}\big) - B(\tau) \ge -A\,\phi\big(\tfrac{q^j}{n}\big) \quad\text{and}\quad B\big(\tau + \tfrac{q^{j+1}}{n}\big) - B\big(\tau + \tfrac{q^j}{n}\big) \ge \theta\,\phi\big(\tfrac{q^{j+1}-q^j}{n}\big).$$
This implies that, for any $t\in[(k-1)/n, k/n]\cap\mathrm{Rec}$,
$$M\big(t + \tfrac{q^{j+1}}{n}\big) - M\big(t - \tfrac{q^{j+1}}{n}\big) \ge M\big(\tau + \tfrac{q^{j+1}}{n}\big) - M(\tau) \ge B\big(\tau + \tfrac{q^{j+1}}{n}\big) - B(\tau) \ge \theta\,\phi\big(\tfrac{q^{j+1}-q^j}{n}\big) - A\,\phi\big(\tfrac{q^j}{n}\big) \ge \vartheta\,\phi\big(\tfrac{q^{j+1}}{n}\big),$$
if $q$ is chosen large enough. Hence the interval $I$ does not intersect $\Lambda_{\delta}(\vartheta)$, and therefore the good intervals cover this set.

Next we show that, for any $A>\sqrt2>\theta$ and suitably chosen $C>0$, for every $I = [(k-1)/n, k/n]$ with $I\cap[\delta,1-\delta]\ne\emptyset$,
$$(4.2)\qquad \mathbb{P}\big\{[\tfrac{k-1}{n},\tfrac{k}{n}] \text{ is good}\big\} \le C\,\frac{1}{\sqrt n}\Big(\frac{1}{\log n}\Big)^{\frac{A^2}{2}-1}.$$
By Lemma 4.22 in conjunction with Theorem 6.10 we get, for some constant $C_0>0$ depending only on $\delta>0$,
$$\mathbb{P}\big\{\tau < \tfrac{k}{n}\big\} \le C_0\,\tfrac{1}{\sqrt n}.$$
We further get, for some constant $C_1>0$, for all $j$ with $q^{j+1}\le\sqrt n$,
$$\mathbb{P}\Big\{B\big(\tau+\tfrac{q^j}{n}\big) - B(\tau) < -A\,\phi\big(\tfrac{q^j}{n}\big)\Big\} \le \mathbb{P}\Big\{B(1) < -A\sqrt{\log\log(n/q^j)}\Big\} \le \exp\big\{-\tfrac{A^2}{2}\log\log(\sqrt n)\big\} \le C_1\Big(\frac{1}{\log n}\Big)^{\frac{A^2}{2}}.$$
Using the independence of these events and summing over all $j\ge0$ with $1\le q^{j+1}\le\sqrt n$, of which there are no more than $C_2\log n$, we get that
$$(4.3)\qquad \mathbb{P}\big\{[\tfrac{k-1}{n},\tfrac{k}{n}] \text{ satisfies (i) and (ii)}\big\} \le C_0C_1C_2\,\frac{1}{\sqrt n}\Big(\frac{1}{\log n}\Big)^{\frac{A^2}{2}-1}.$$

To estimate the probability that $[(k-1)/n, k/n]$ satisfies (i) and (iii) we first note that
$$\mathbb{P}\Big\{B\big(\tfrac{q^{j+1}-q^j}{n}\big) < \theta\,\phi\big(\tfrac{q^{j+1}-q^j}{n}\big)\Big\} \le \mathbb{P}\Big\{B(1) < \theta\sqrt{\log\log\big(\tfrac{n}{q-1}\big)}\Big\} \le 1 - \frac{\exp\big\{-\tfrac{\theta^2}{2}\log\log\big(\tfrac{n}{q-1}\big)\big\}}{\theta\sqrt{\log\log\big(\tfrac{n}{q-1}\big)}},$$
using Lemma II.3.1. From this we infer that, for suitable $c_3>0$,
$$\mathbb{P}\Big\{B\big(\tfrac{q^{j+1}-q^j}{n}\big) < \theta\,\phi\big(\tfrac{q^{j+1}-q^j}{n}\big) \text{ for all } 1\le q^{j+1}\le\sqrt n\Big\} \le \prod_{j\le\frac{\log n}{2\log q}}\bigg(1 - \frac{\exp\{-\tfrac{\theta^2}{2}\log\log n\}}{\theta\sqrt{\log\log n}}\bigg) \le \bigg(1 - \frac{1}{\theta(\log n)^{\theta^2/2}(\log\log n)^{1/2}}\bigg)^{\frac{\log n}{2\log q}} \le \exp\Big\{-c_3\,\frac{(\log n)^{1-\frac{\theta^2}{2}}}{(\log\log n)^{1/2}}\Big\}.$$
Combining this with the estimate for $\tau < k/n$ we get that
$$(4.4)\qquad \mathbb{P}\big\{[\tfrac{k-1}{n},\tfrac{k}{n}] \text{ satisfies (i) and (iii)}\big\} \le C_0\,\frac{1}{\sqrt n}\exp\Big\{-c_3\,\frac{(\log n)^{1-\frac{\theta^2}{2}}}{(\log\log n)^{1/2}}\Big\}.$$
As $\theta<\sqrt2$, the right hand side in (4.4) is of smaller order than the right hand side in (4.3), and hence we have shown (4.2). Finally, we look at the expected $\phi$-values of our covering.
We obtain that
$$\mathbb{E}\mathcal{H}^{\phi}_{1/n}\big(\Lambda_{\delta}(\vartheta)\big) \le \sum_{k=\lceil\delta n\rceil}^{\lceil n(1-\delta)\rceil}\phi(1/n)\,\mathbb{P}\big\{[\tfrac{k-1}{n},\tfrac{k}{n}] \text{ is good}\big\} \le C\,\frac{\sqrt{\log\log n}}{(\log n)^{A^2/2-1}} \longrightarrow 0,$$
and, by Fatou's lemma, we get, almost surely, $\liminf_{n\uparrow\infty}\mathcal{H}^{\phi}_{1/n}(\Lambda_{\delta}(\vartheta)) = 0$, as required to complete the proof.

The right inequality in (4.1) now follows easily from Lemma 6.44 and part (ii) of the Rogers-Taylor theorem. We define the set
$$A = \Big\{s\in\mathrm{Rec}\colon \limsup_{r\downarrow0}\mu B(s,r)/\phi(r) \ge \vartheta\Big\},$$
and note that $\mathcal{H}^{\phi}(\mathrm{Rec}\cap A^c) = 0$, for $\vartheta$ sufficiently small. By part (ii) of the Rogers-Taylor theorem we get, for every Borel set $\Lambda\subset[0,\infty)$,
$$\mathcal{H}^{\phi}(\Lambda\cap\mathrm{Rec}) = \mathcal{H}^{\phi}(\Lambda\cap A) \le \kappa_1\vartheta^{-1}\mu(\Lambda\cap A) \le \kappa_1\vartheta^{-1}\mu(\Lambda).$$
This implies the right inequality and hence completes the proof of (4.1).

To complete the proof of Theorem 6.41 we look at the process $\{X(a)\colon a\ge0\}$ defined by $X(a) = \mathcal{H}^{\phi}(\mathrm{Rec}\cap[0,T_a])$. The next lemma will help us to show that this process is trivial.

Lemma 6.45. Suppose $\{Y(t)\colon t\ge0\}$ is a stochastic process starting in zero with the following properties:
• the paths are almost surely continuous,
• the increments are independent, nonnegative and stationary,
• there exists a $C>0$ such that, almost surely, $Y(t)\le Ct$ for all $t>0$.
Then there exists $\tilde c\ge0$ such that, almost surely, $Y(t) = \tilde c\,t$ for every $t\ge0$.

Proof. We first look at the function $m\colon[0,\infty)\to[0,\infty)$ defined by $m(t) = \mathbb{E}Y(t)$. This function is continuous, as the paths of $\{Y(t)\colon t\ge0\}$ are continuous and bounded on compact sets. Further, because the process $\{Y(t)\colon t\ge0\}$ has independent and stationary increments, the function $m$ is linear and hence there exists $\tilde c\ge0$ with $m(t) = \tilde c\,t$. It thus suffices to show that the variance of $Y(t)$ is zero. Indeed, for every $n>0$, we have
$$\operatorname{Var} Y(t) = \sum_{j=1}^{n}\operatorname{Var}\Big(Y\big(\tfrac{jt}{n}\big) - Y\big(\tfrac{(j-1)t}{n}\big)\Big) = n\operatorname{Var} Y\big(\tfrac{t}{n}\big) \le n\,\mathbb{E}\big[Y\big(\tfrac{t}{n}\big)^2\big] \le nC^2\big(\tfrac{t}{n}\big)^2 \xrightarrow{n\to\infty} 0,$$
and hence $Y(t) = \mathbb{E}Y(t) = \tilde c\,t$ as claimed.

Let us check that $\{X(a)\colon a\ge0\}$ satisfies the conditions of Lemma 6.45. We first note that
$$X(a+h) - X(a) = \mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,T_{a+h}]\big) - \mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,T_a]\big) = \mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[T_a, T_{a+h}]\big),$$
as can be seen easily from the definition of the Hausdorff measure $\mathcal{H}^{\phi}$. Using this, continuity of the paths follows from the fact that, by (4.1),
$$\mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[T_a,T_{a+h}]\big) \le C\big(M(T_{a+h}) - M(T_a)\big) = Ch.$$
The strong Markov property implies that the increments are independent and stationary, and they are obviously nonnegative. And finally, by (4.1), almost surely, for any $a\ge0$,
$$X(a) = \mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,T_a]\big) \le C\,M(T_a) = C\,a.$$
Lemma 6.45 thus implies that there exists $\tilde c\ge0$ with
$$\mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,T_a]\big) = \tilde c\,a = \tilde c\,M(T_a) \quad\text{for all } a\ge0.$$
The set $\{T_a\colon a\in\mathbb{R}\}$ is dense in $\mathrm{Rec}$. Indeed, the only elements of $\mathrm{Rec}\setminus\{T_a\colon a\in\mathbb{R}\}$ are the countably many times when the Brownian motion revisits a local maximum for the first time. These times are stopping times and can therefore be approximated from the right by elements of $\{T_a\colon a\in\mathbb{R}\}$. Using continuity, we infer that, almost surely,
$$\mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,t]\big) = \tilde c\,M(t) \quad\text{for all } t\in\mathrm{Rec}.$$
For general $t\ge0$ we let $\tau = \max(\mathrm{Rec}\cap[0,t])$ and note that
$$\mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,t]\big) = \mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,\tau]\big) = \tilde c\,M(\tau) = \tilde c\,M(t).$$
By the lower bound in (4.1) we must have $\tilde c>0$, and hence we can put $c = 1/\tilde c$ and get
$$M(t) = c\,\mathcal{H}^{\phi}\big(\mathrm{Rec}\cap[0,t]\big) = \mathcal{H}^{c\phi}\big(\mathrm{Rec}\cap[0,t]\big),$$
as required to complete the proof of Theorem 6.41.

Exercises

Exercise 6.1. Using the downcrossing representation of the local time process $\{L(t)\colon t\ge0\}$ given in Theorem 6.1, show that, almost surely, $L(s) = L(t)$ for every interval $(s,t)$ not containing a zero of the Brownian motion. In other words, the local time at zero increases only on the zero set of the Brownian motion.
Exercise 6.2. Show, by reviewing the argument in the proof of Theorem 6.10, that for a standard linear Brownian motion the processes $\{(|B(t)|, L(t))\colon t\ge0\}$ and $\{(M(t)-B(t), M(t))\colon t\ge0\}$ have the same distribution.
Hint. In Theorem 7.36 we give an alternative proof of this result using stochastic integration.

Exercise 6.3. Show that $\mathbb{P}_0\{L(t)>0 \text{ for every } t>0\} = 1$.
Hint. This follows easily from Theorem 6.10.

Exercise 6.4 (∗). Let $\{W(s)\colon s\ge0\}$ be a standard linear Brownian motion and $\tau_1$ its first hitting time of level 1. Show that
$$\mathbb{E}\int_0^{\tau_1}\mathbf{1}\{0\le W(s)\le1\}\,ds = 1.$$
Hint. Use Exercise 2.15.

Exercise 6.5 (∗). Suppose $X_1, X_2,\dots$ are independent geometrically distributed random variables with mean 2. Show that, for sufficiently small $\varepsilon>0$, for all nonnegative integers $k\le m$,
$$\mathbb{P}\Big\{\Big|\sum_{j=1}^k(X_j-2)\Big| \ge \varepsilon m\Big\} \le 4\exp\big\{-\tfrac15\varepsilon^2 m\big\}.$$

Exercise 6.6 (∗). Give an alternative proof of Lemma 6.32 by computing the densities of the random variables $(X+\ell)^2$ and $X^2 + 2\sum_{j=1}^N Z_j$.

Exercise 6.7. Use the Ray-Knight theorem and Lévy's theorem, Theorem 6.10, to show that, for a suitable constant $c>0$, the function $\varphi(h) = c\sqrt{h\log(1/h)}$, for $0<h<1$, is a modulus of continuity for the random field $\{L^a(t)\colon a\in\mathbb{R},\, t\ge0\}$.

Notes and Comments

The study of local times is crucial for the Brownian motion in dimension one, and good references are [RY94] and the survey article [Bo89]. Brownian local times were first introduced by Paul Lévy in [Le48], and a thorough investigation was initiated in a paper by Trotter [Tr58], who showed that there is a version of local time continuous in time and space. An alternative construction of local times can be given in terms of stochastic integrals, using Tanaka's formula as a definition. We shall explore this direction in Section 7.3.

The equality for the upcrossing numbers in Lemma 6.3 agrees with the functional equation for a branching process with immigration. The relationship between local times and branching processes, which underlies our entire treatment, can be exploited and extended in various ways. One example of this can be found in Neveu and Pitman [NP89]; for more recent progress in this direction, see Le Gall and Le Jan [LL98]. A good source for further reading is the discussion of Lévy processes and trees by Duquesne and Le Gall in [DL02]. For an introduction to branching processes with and without immigration, see [AN04]. In a similar spirit, a result which is often called the second Ray-Knight theorem describes the process $\{L^a_T\colon a>0\}$ when $T = \inf\{t>0\colon L^0_t = x\}$, see [RY94] or the original papers by Ray and Knight cited below. The resulting process is a Feller diffusion, which is the canonical process describing critical branching with initial mass $x$. The local times of Brownian motion can therefore be used to encode the branching information for a variety of processes describing the evolution of particles which undergo critical branching and spatial migration. For more information on this powerful link between Brownian motion and the world of spatial branching processes, see for example [LG99].

The concept of local times can be extended to a variety of processes like continuous semimartingales, see e.g. [RY94], or Markov processes [BG68]. The idea of introducing local times as densities of occupation measure has been fruitful in a variety of contexts, in particular in the introduction of local times on the intersection of Brownian paths. Important papers in this direction are [GH80] and [GHR84].
The Ray-Knight theorem was discovered by D. Ray and F. Knight independently, by different methods, in 1963. The proof of Knight uses discretisation, see [Kn63] for the original paper and [Kn81] for more information. Ray's approach to Theorem 6.27 is less intuitive but more versatile, and is based on the Feynman-Kac formula, see [Ra63b] for the original paper. Our presentation is simpler than Knight's method. The distributional identity at its core, see Lemma 6.32, is yet to be explained probabilistically. The analytic proof given in Exercise 6.6 is due to H. Robbins and E.J.G. Pitman [RP49]. Extensions of the Ray-Knight theorem include a characterisation of $\{L^x(T)\colon x\ge0\}$ for levels $x$ exceeding $a$. This is best discussed in the framework of Brownian excursion theory, see for example [RY94]. The Ray-Knight theorem can be extended into a deep relationship between the local times of symmetric Markov processes and an associated Gaussian process, which is the subject of the famous Dynkin isomorphism theorem. See Eisenbaum [Ei94] or the comprehensive monograph by Marcus and Rosen [MR06] for more on this subject.

According to Taylor [Ta86], Hausdorff measures with arbitrary gauge functions were introduced by A.S. Besicovitch. The general theory of outer measures, as presented in [Ro99], shows that $\mathcal{H}^{\phi}$ indeed defines a measure on the Borel sets of a metric space. The fact that the local time at zero agrees with the $\mathcal{H}^{\phi}$ Hausdorff measure of the zero set is due to Taylor and Wendel [TW66]. It can be shown that the local times $L^a(t)$ agree with the Hausdorff measure of the set $\{s\in[0,t]\colon B(s)=a\}$ simultaneously for all levels $a$ and times $t$. This delicate result is proved in [Pe81] using nonstandard analysis. The Rogers-Taylor theorem is due to C.A. Rogers and S.J. Taylor in [RT61]. The original statement is slightly more general, as it allows one to replace $\mu(V)$ by $\mu(\Lambda)$ on the right hand side without any regularity condition on $\mu$. Most proofs in the literature of the harder half, statement (ii) in our formulation, use the Besicovitch covering theorem. We give a self-contained proof using dyadic cubes instead. Other natural measures related to Brownian motion can also be shown to agree with Hausdorff measures with suitable gauge functions. The most notable example is the occupation measure, whose gauge function is
$$\varphi(r) = \begin{cases} c_d\, r^2\log\log(1/r) & \text{if } d\ge3,\\ c_2\, r^2\log(1/r)\log\log\log(1/r) & \text{if } d=2.\end{cases}$$
This result is due to Ciesielski and Taylor [CT62] in the first case, and to Ray [Ra63a] and Taylor [Ta64] in the second case. A stimulating survey of this subject is [LG85].

CHAPTER 7

Stochastic integrals and applications

In this chapter we first construct an integral with respect to Brownian motion. Amongst the applications are the conformal invariance of Brownian motion, a short look at windings of Brownian motion, the Tanaka formula for Brownian local times, and the Feynman-Kac formula.

1. Stochastic integrals with respect to Brownian motion

1.1. Construction of the stochastic integral. We look at a Brownian motion in dimension one $\{B(t)\colon t\ge0\}$ considered as a random continuous function. As we have found in Theorem 1.35, this function is almost surely of unbounded variation, which is why we cannot use Lebesgue-Stieltjes integration to define integrals of the form $\int_0^t f(s)\,dB(s)$. There is however an escape from this dilemma, if one is willing to take advantage of the fact that Brownian motions are random functions, and therefore one can make use of weaker forms of limits. This is the idea of stochastic integration.
Before explaining the procedure, we have a look at a reasonable class of integrands, as we would like to admit random functions as integrands, too. At a first reading, the reader might prefer to think of deterministic integrands only and skip the next couple of paragraphs until we begin the construction of the integral after the proof of Lemma 7.2.

A suitable class of random integrands is the class of progressively measurable processes. We denote by $(\Omega, \mathcal{A}, \mathbb{P})$ the probability space on which our Brownian motion $\{B(t)\colon t\ge0\}$ is defined and suppose that $(\mathcal{F}(t)\colon t\ge0)$ is a filtration to which the Brownian motion is adapted such that the strong Markov property holds. Because we also want the integral up to time $t$ to be adapted to our filtration, we assume that the filtration $(\mathcal{F}(t)\colon t\ge0)$ is complete, i.e. contains all sets of probability zero in $\mathcal{A}$. Note that every filtration can be completed simply by adding all these sets, and that the completion preserves the strong Markov property.

Definition 7.1. A process $\{X(t,\omega)\colon t\ge0, \omega\in\Omega\}$ is called progressively measurable if for each $t\ge0$ the mapping $X\colon[0,t]\times\Omega\to\mathbb{R}$ is measurable with respect to the $\sigma$-algebra $\mathcal{B}([0,t])\otimes\mathcal{F}(t)$. ⋄

Lemma 7.2. Any process $\{X(t)\colon t\ge0\}$ which is adapted and either right- or left-continuous is also progressively measurable.

Proof. Assume that $\{X(t)\colon t\ge0\}$ is right-continuous. Fix $t>0$. For a positive integer $n$ and $0\le s\le t$ define $X_n(0,\omega) = X(0,\omega)$ and
$$X_n(s,\omega) = X\big(\tfrac{(k+1)t}{2^n}, \omega\big), \quad\text{for } kt2^{-n} < s \le (k+1)t2^{-n}.$$
The mapping $(s,\omega)\mapsto X_n(s,\omega)$ is $\mathcal{B}([0,t])\otimes\mathcal{F}(t)$ measurable. By right-continuity we have $\lim_{n\uparrow\infty}X_n(s,\omega) = X(s,\omega)$ for all $s\in[0,t]$ and $\omega\in\Omega$, hence the limit map $(s,\omega)\mapsto X(s,\omega)$ is also $\mathcal{B}([0,t])\otimes\mathcal{F}(t)$ measurable, proving progressive measurability. The left-continuous case is analogous.

The construction of the integral is quite straightforward. We start by integrating progressively measurable step processes $\{H(t,\omega)\colon t\ge0, \omega\in\Omega\}$ of the form
$$H(t,\omega) = \sum_{i=1}^k A_i(\omega)\,\mathbf{1}_{(t_i,t_{i+1}]}(t), \quad\text{for } 0\le t_1\le\dots\le t_{k+1},$$
and $\mathcal{F}(t_i)$-measurable $A_i$. In complete analogy to the classical case we define the integral as
$$\int_0^{\infty}H(s)\,dB(s) := \sum_{i=1}^k A_i\big(B(t_{i+1}) - B(t_i)\big).$$
Now let $H$ be a progressively measurable process satisfying $\mathbb{E}\int_0^{\infty}H(s)^2\,ds < \infty$. Suppose $H$ can be approximated by a family of progressively measurable step processes $H_n$, $n\ge1$; then we define
$$(1.1)\qquad \int_0^{\infty}H(s)\,dB(s) := L^2\text{-}\lim_{n\to\infty}\int_0^{\infty}H_n(s)\,dB(s).$$
At this stage we focus on $L^2$-convergence, though we shall see later that the stochastic integral can also be constructed as an almost sure limit, see Remark 7.7. For the approximation of $H$ by progressively measurable step processes we look at the norm
$$\|H\|_2^2 := \mathbb{E}\int_0^{\infty}H(s)^2\,ds.$$
What we have to show now to complete the definition is that
(1) every progressively measurable process satisfying $\mathbb{E}\int_0^{\infty}H(s)^2\,ds < \infty$ can be approximated in the $\|\cdot\|_2$ norm by progressively measurable step processes,
(2) for each approximating sequence the limit in (1.1) exists,
(3) and this limit does not depend on the choice of the approximating step processes.
This is what we check now, beginning with item (1).
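As an aside, the definition for step processes is directly computable. A minimal sketch (with the adapted choice $A_i = B(t_i)$, our own example) reads:

```python
import numpy as np

rng = np.random.default_rng(8)
steps = 100_000
B = np.concatenate([[0.0], np.cumsum(rng.normal(0, np.sqrt(1/steps), steps))])

# step process H = sum_i A_i 1_{(t_i, t_{i+1}]} with A_i = B(t_i), F(t_i)-measurable
idx = np.linspace(0, steps, 1001).astype(int)   # grid 0 = t_1 < ... < t_1001 = 1
A = B[idx[:-1]]                                 # left-endpoint (adapted) values
integral = np.sum(A * (B[idx[1:]] - B[idx[:-1]]))
print("integral of H dB:", integral)
# algebraically this equals (B(1)^2 - sum (dB)^2)/2, foreshadowing Ito's formula:
print("(B(1)^2 - 1)/2  :", (B[-1]**2 - 1) / 2)
```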
Lemma 7.3. For every progressively measurable process $\{H(s,\omega)\colon s\ge0, \omega\in\Omega\}$ satisfying $\mathbb{E}\int_0^{\infty}H(s)^2\,ds < \infty$ there exists a sequence $\{H_n\colon n\in\mathbb{N}\}$ of progressively measurable step processes such that $\lim_{n\to\infty}\|H_n - H\|_2 = 0$.

Proof. The strategy is to approximate the progressively measurable process successively by
• a bounded progressively measurable process,
• a bounded, almost surely continuous progressively measurable process,
• and finally, a progressively measurable step process.
Let $\{H(s,\omega)\colon s\ge0, \omega\in\Omega\}$ be a progressively measurable process with $\|H\|_2 < \infty$. We first define the cut-off at a fixed time $n>0$ by letting $H_n(s,\omega) = H(s,\omega)$ for $s\le n$ and $H_n(s,\omega) = 0$ otherwise. Clearly $\lim_{n\uparrow\infty}\|H_n - H\|_2 = 0$.

Second, we approximate any progressively measurable $H$ on a finite interval by truncating its values, i.e. for large $n$ we define $H_n$ by letting $H_n(s,\omega) = (H(s,\omega)\wedge n)\vee(-n)$. Clearly $H_n$ is progressively measurable and $\lim_{n\uparrow\infty}\|H_n - H\|_2 = 0$.

Third, we approximate any uniformly bounded progressively measurable $H$ by a bounded, almost surely continuous, progressively measurable process. Let $h = 1/n$ and, using the convention $H(s,\omega) = H(0,\omega)$ for $s<0$, we define
$$H_n(s,\omega) = \frac1h\int_{s-h}^{s}H(u,\omega)\,du.$$
Because we only take an average over the past, $H_n$ is again progressively measurable. It is almost surely continuous, and it is a well-known fact that, for every $\omega\in\Omega$ and almost every $s\in[0,t]$,
$$\lim_{h\downarrow0}\frac1h\int_{s-h}^{s}H(u,\omega)\,du = H(s,\omega).$$
Since $H$ is uniformly bounded (and using progressive measurability) we can take expectations and an average over time, and obtain from the bounded convergence theorem that $\lim_{n\uparrow\infty}\|H_n - H\|_2 = 0$.

Finally, a bounded, almost surely continuous, progressively measurable process can be approximated by a step process $H_n$ by taking $H_n(s,\omega) = H(j/n,\omega)$ for $j/n\le s<(j+1)/n$. These processes are again progressively measurable and one easily sees $\lim_{n\uparrow\infty}\|H_n - H\|_2 = 0$. This completes the approximation.

The following lemma describes the crucial property of the integral of step processes.

Lemma 7.4. Let $H$ be a progressively measurable step process with $\mathbb{E}\int_0^{\infty}H(s)^2\,ds < \infty$. Then
$$\mathbb{E}\Big[\Big(\int_0^{\infty}H(s)\,dB(s)\Big)^2\Big] = \mathbb{E}\int_0^{\infty}H(s)^2\,ds.$$

Proof. We use the Markov property to see that, for every progressively measurable step process $H = \sum_{i=1}^k A_i\mathbf{1}_{(a_i,a_{i+1}]}$,
$$\mathbb{E}\Big[\Big(\int_0^{\infty}H(s)\,dB(s)\Big)^2\Big] = \mathbb{E}\Big[\sum_{i,j=1}^k A_iA_j\big(B(a_{i+1})-B(a_i)\big)\big(B(a_{j+1})-B(a_j)\big)\Big]$$
$$= 2\sum_{i=1}^k\sum_{j=i+1}^k\mathbb{E}\Big[A_iA_j\big(B(a_{i+1})-B(a_i)\big)\,\mathbb{E}\big[B(a_{j+1})-B(a_j)\,\big|\,\mathcal{F}(a_j)\big]\Big] + \sum_{i=1}^k\mathbb{E}\big[A_i^2\big(B(a_{i+1})-B(a_i)\big)^2\big]$$
$$= \sum_{i=1}^k\mathbb{E}[A_i^2]\,(a_{i+1}-a_i) = \mathbb{E}\int_0^{\infty}H(s)^2\,ds.$$

Corollary 7.5. Suppose $\{H_n\colon n\in\mathbb{N}\}$ is a sequence of progressively measurable step processes such that
$$\mathbb{E}\int_0^{\infty}\big(H_n(s) - H_m(s)\big)^2\,ds \longrightarrow 0, \quad\text{as } n,m\to\infty.$$
Then
$$\mathbb{E}\Big[\Big(\int_0^{\infty}H_n(s) - H_m(s)\,dB(s)\Big)^2\Big] \longrightarrow 0, \quad\text{as } n,m\to\infty.$$

Proof. Because the difference of two step processes is again a step process, Lemma 7.4 can be applied to $H_n - H_m$, and this gives the statement.

The following theorem addresses issues (2) and (3), thus completing our construction of the stochastic integral.

Theorem 7.6. Suppose $\{H_n\colon n\in\mathbb{N}\}$ is a sequence of progressively measurable step processes and $H$ a progressively measurable process such that
$$\lim_{n\to\infty}\mathbb{E}\int_0^{\infty}\big(H_n(s) - H(s)\big)^2\,ds = 0.$$
Then
$$L^2\text{-}\lim_{n\to\infty}\int_0^{\infty}H_n(s)\,dB(s) =: \int_0^{\infty}H(s)\,dB(s)$$
exists and is independent of the choice of $\{H_n\colon n\in\mathbb{N}\}$. Moreover, we have
$$(1.2)\qquad \mathbb{E}\Big[\Big(\int_0^{\infty}H(s)\,dB(s)\Big)^2\Big] = \mathbb{E}\int_0^{\infty}H(s)^2\,ds.$$

Remark 7.7. If the sequence of step processes is chosen such that
$$\sum_{n=1}^{\infty}\mathbb{E}\int_0^{\infty}\big(H_n(s)-H(s)\big)^2\,ds < \infty,$$
then, by (1.2), we get $\sum_{n=1}^{\infty}\mathbb{E}\big[\big(\int_0^{\infty}H_n(s)-H(s)\,dB(s)\big)^2\big] < \infty$, and therefore, almost surely,
$$\sum_{n=1}^{\infty}\Big[\int_0^{\infty}H_n(s)\,dB(s) - \int_0^{\infty}H(s)\,dB(s)\Big]^2 < \infty.$$
This implies that, almost surely,
$$\lim_{n\to\infty}\int_0^{\infty}H_n(s)\,dB(s) = \int_0^{\infty}H(s)\,dB(s). \qquad\diamond$$
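The isometry (1.2) can be checked numerically: for $H(s) = B(s)$ on $[0,1]$ one has $\mathbb{E}\int_0^1 B(s)^2\,ds = \int_0^1 s\,ds = 1/2$, and the sketch below (our own discretisation choices) estimates the left hand side by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(9)
steps, paths = 2000, 5000
dt = 1.0 / steps
vals = np.empty(paths)
for p in range(paths):
    dB = rng.normal(0, np.sqrt(dt), steps)
    B = np.concatenate([[0.0], np.cumsum(dB)])
    vals[p] = np.sum(B[:-1] * dB)       # Ito sum, evaluated at left endpoints
print("E[(int B dB)^2] ~", np.mean(vals**2), "   isometry predicts 0.5")
```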
Proof of Theorem 7.6. By the triangle inequality $\{H_n\colon n\in\mathbb{N}\}$ satisfies the assumption of Corollary 7.5, and hence $\{\int_0^{\infty}H_n(s)\,dB(s)\colon n\in\mathbb{N}\}$ is a Cauchy sequence in $L^2(\mathbb{P})$. By completeness of this space, the limit exists, and Corollary 7.5 also shows that the limit is independent of the choice of the approximating sequence. The last statement follows from Lemma 7.4, applied to $H_n$, by taking the limit $n\to\infty$.

Finally, we describe the stochastic integral as a stochastic process in time. The crucial properties of this process are continuity and the martingale property stated in the next theorem.

Definition 7.8. Suppose $\{H(s,\omega)\colon s\ge0, \omega\in\Omega\}$ is progressively measurable with $\mathbb{E}\int_0^t H(s,\omega)^2\,ds < \infty$. Define the progressively measurable process $\{H_t(s,\omega)\colon s\ge0, \omega\in\Omega\}$ by $H_t(s,\omega) = H(s,\omega)\mathbf{1}\{s\le t\}$. Then the stochastic integral up to time $t$ is defined as
$$\int_0^t H(s)\,dB(s) := \int_0^{\infty}H_t(s)\,dB(s). \qquad\diamond$$

Definition 7.9. We say that a stochastic process $\{X(t)\colon t\ge0\}$ is a modification of a process $\{Y(t)\colon t\ge0\}$ if, for every $t\ge0$, we have $\mathbb{P}\{X(t) = Y(t)\} = 1$. ⋄

Theorem 7.10. Suppose the process $\{H(s,\omega)\colon s\ge0, \omega\in\Omega\}$ is progressively measurable with $\mathbb{E}\int_0^t H(s,\omega)^2\,ds < \infty$ for any $t\ge0$. Then there exists an almost surely continuous modification of $\{\int_0^t H(s)\,dB(s)\colon t\ge0\}$. Moreover, this process is a martingale and hence
$$\mathbb{E}\int_0^t H(s)\,dB(s) = 0 \quad\text{for every } t\ge0.$$

Proof. Fix a large integer $t_0$ and let $H_n$ be a sequence of step processes such that $\|H_n - H_{t_0}\|_2\to0$, and therefore
$$\mathbb{E}\Big[\Big(\int_0^{\infty}\big(H_n(s) - H_{t_0}(s)\big)\,dB(s)\Big)^2\Big] \to 0.$$
Obviously, for any $s\le t$ the random variable $\int_0^s H_n(u)\,dB(u)$ is $\mathcal{F}(s)$-measurable and $\int_s^t H_n(u)\,dB(u)$ is independent of $\mathcal{F}(s)$, meaning that the process
$$\Big\{\int_0^t H_n(u)\,dB(u)\colon 0\le t\le t_0\Big\}$$
is a martingale, for every $n$. For any $0\le t\le t_0$ define
$$X(t) = \mathbb{E}\Big[\int_0^{t_0}H(s)\,dB(s)\,\Big|\,\mathcal{F}(t)\Big],$$
so that $\{X(t)\colon 0\le t\le t_0\}$ is also a martingale and $X(t_0) = \int_0^{t_0}H(s)\,dB(s)$. By Doob's maximal inequality, Proposition 2.39, for $p=2$,
$$\mathbb{E}\Big[\sup_{0\le t\le t_0}\Big(\int_0^t H_n(s)\,dB(s) - X(t)\Big)^2\Big] \le 4\,\mathbb{E}\Big[\Big(\int_0^{t_0}\big(H_n(s)-H(s)\big)\,dB(s)\Big)^2\Big],$$
which converges to zero as $n\to\infty$. This implies, in particular, that almost surely the process $\{X(t)\colon 0\le t\le t_0\}$ is a uniform limit of continuous processes, and hence continuous. For fixed $0\le t\le t_0$, by taking $L^2$-limits from the step process approximation, the random variable $\int_0^t H(s)\,dB(s)$ is $\mathcal{F}(t)$-measurable and $\int_t^{t_0}H(s)\,dB(s)$ is independent of $\mathcal{F}(t)$ with zero expectation. Therefore $\int_0^t H(s)\,dB(s)$ is a conditional expectation of $X(t_0)$ given $\mathcal{F}(t)$, hence coinciding with $X(t)$ almost surely.

We now have a basic stochastic integral at our disposal. Obviously, a lot of bells and whistles can be added to this construction, but we refrain from doing so and keep focused on the essential properties and eventually on the applications to Brownian motion.

1.2. Itô's formula. For stochastic integration, Itô's formula plays the same role as the fundamental theorem of calculus for classical integration. Let $f$ be continuously differentiable and $x\colon[0,\infty)\to\mathbb{R}$; then the fundamental theorem can be written as
$$f(x(t)) - f(x(0)) = \int_0^t f'(x(s))\,dx(s),$$
and this formula holds when $x$ is continuous and of bounded variation. Itô's formula offers an analogue of this for the case that $x$ is a Brownian motion. The crucial difference is that a third term enters, which makes the existence of a second derivative of $f$ necessary. The next result, a key step in the derivation of this formula, is an extension of Exercise 1.14.
< t (n) n = t are partitions of the interval [0, t], such that the mesh converges to zero. Then, in probability, n−1 X j=1 f ¡ B(t (n) j ) ¢ ¡ B(t (n) j+1) −B(t (n) j ) ¢2 − → Z t 0 f(B(s)) ds . Proof. Let T be the first exit time from a compact interval. It suffices to prove the statement for Brownian motion stopped at T, as the interval may be chosen to make P{T < t} arbitrarily small. By continuity of f and the definition of the Riemann integral, almost surely, lim n→∞ n−1 X j=1 f ¡ B(t (n) j ∧T) ¢ ¡ t (n) j+1 ∧T −t (n) j ∧T ¢ = Z t∧T 0 f(B(s)) ds . It thus suffices to show that lim n→∞E h³ n−1 X j=1 f ¡ B(t (n) j ∧T) ¢ ³¡ B(t (n) j+1 ∧T) −B(t (n) j ∧T) ¢2 − ¡ t (n) j+1 ∧T −t (n) j ∧T ¢´´2i = 0. Recall that {B(t)2 −t : t ≥0} is a martingale, by Lemma 2.43, and hence, for all r ≤s, E £¡ B(s) −B(r) ¢2 −(s −r) ¯ ¯ F(r) ¤ = 0 . This allows us to simplify the previous expression as follows, E h³ n−1 X j=1 f ¡ B(t (n) j ∧T) ¢ ³¡ B(t (n) j+1 ∧T) −B(t (n) j ∧T) ¢2 − ¡ t (n) j+1 ∧T −t (n) j ∧T ¢´´2i = n−1 X j=1 E h f ¡ B(t (n) j ∧T) ¢2³¡ B ¡ t (n) j+1 ∧T ¢ −B ¡ t (n) j ∧T ¢¢2 − ¡ t (n) j+1 ∧T −t (n) j ∧T ¢´2 i . We can now bound f by its maximum on the compact interval, and multiplying out the square and dropping a negative cross term we get an upper bound, which is a constant multiple of n−1 X j=1 E h¡ B(t (n) j+1 ∧T) −B(t (n) j ∧T) ¢4i + n−1 X j=1 E h¡ t (n) j+1 ∧T −t (n) j ∧T ¢2i . (1.3) Using Brownian scaling on the first term, we see that this expression is bounded by a constant multiple of n−1 X j=1 ¡ t (n) j+1 −t (n) j ¢2 ≤t ∆(n), where ∆(n) denotes the mesh, which goes to zero. This completes the proof. We are now able to formulate and prove a first version of Itˆ o’s formula. Theorem 7.12 (Itˆ o’s formula I). Let f : R →R be twice continuously differentiable such that E R t 0 f ′¡ B(s) ¢2 ds < ∞for some t > 0. Then, almost surely, for all 0 ≤s ≤t, f ¡ B(s) ¢ −f ¡ B(0) ¢ = Z s 0 f ′¡ B(u) ¢ dB(u) + 1 2 Z s 0 f ′′¡ B(u) ¢ du. 189 Proof. We denote the modulus of continuity of f ′′ on [−M, M] by ω(δ, M) := sup x,y∈[−M,M] |x−y|<δ f ′′(x) −f ′′(y) . Then, by Taylor’s formula, for any x, y ∈[−M, M] with |x −y| < δ, ¯ ¯f(y) −f(x) −f ′(x)(y −x) −1 2f ′′(x)(y −x)2¯ ¯ ≤ω(δ, M) (y −x)2 . Now, for any sequence 0 = t1 < . . . < tn = t with δB := max1≤i≤n−1 ¯ ¯B(ti+1) −B(ti) ¯ ¯ and MB = max0≤s≤t |B(s)|, we get ¯ ¯ n−1 X i=1 ¡ f(B(ti+1)) −f(B(ti)) ¢ − n−1 X i=1 f ′¡ B(ti) ¢¡ B(ti+1) −B(ti) ¢ − n−1 X i=1 1 2f ′′¡ B(ti) ¢¡ B(ti+1) −B(ti) ¢2¯ ¯ ≤ω(δB, MB) n−1 X i=1 ¡ B(ti+1) −B(ti) ¢2 . Note that the first sum is simply f(B(t))−f(B(0)). By the definition of the stochastic integral and Theorem 7.11 we can choose a sequence of partitions with mesh going to zero, such that, almost surely, the first subtracted term on the left converges to R t 0 f ′¡ B(s) ¢ dB(s), the second subtracted term converges to 1 2 R t 0 f ′′¡ B(s) ¢ ds, and the sum on the right hand side converges to t. By continuity of the Brownian path ω(δB, MB) converges almost surely to zero. This proves Itˆ o’s formula for fixed t, or indeed almost surely for all rational times 0 ≤s ≤t. As all the terms in Itˆ o’s formula are continuous almost surely, we get the result simultaneously for all 0 ≤s ≤t. Next, we provide an enhanced version of Itˆ o’s formula, which allows the function f to depend not only on the position of Brownian motion, but also on a second argument, which is assumed to be increasing in time. Theorem 7.13 (Itˆ o’s formula II). Suppose {ζ(s): s ≥0} is an increasing, continuous adapted stochastic process. 
Let f : R × R →R be twice continuously differentiable in the x-coordiante, and once continuously differentiable in the y-coordinate. Assume that E Z t 0 h ∂xf(B(s), ζ(s)) i2 ds < ∞, for some t > 0. Then, almost surely, for all 0 ≤s ≤t, f ¡ B(s), ζ(s) ¢ −f ¡ B(0), ζ(0) ¢ = Z s 0 ∂xf ¡ B(u), ζ(u) ¢ dB(u) + Z s 0 ∂yf ¡ B(u), ζ(u) ¢ dζ(u) + 1 2 Z s 0 ∂xxf ¡ B(u), ζ(u) ¢ du. 190 Proof. To begin with, we inspect the proof of Theorem 7.11 and see that it carries over without difficulty to the situation, when f is allowed to depend additionally on an adapted process {ζ(s): s ≥0}, i.e. we have for any partitions 0 = t (n) 1 < . . . < t (n) n = t with mesh going to zero, in probability, (1.4) n−1 X j=1 f ¡ ζ ¡ t (n) j ¢ , B ¡ t (n) j ¢¢ ³ B ¡ t (n) j+1 ¢ −B ¡ t (n) j ¢´2 − → Z t 0 f(ζ(s), B(s)) ds . We denote the modulus of continuity of ∂yf by ω1(δ, M) = sup −M≤x1,x2,y1,y2≤M |x1−x2|∧|y1−y2|<δ ∂yf(x1, y1) −∂yf(x2, y2), and the modulus of continuity of ∂xxf by ω2(δ, M) = sup −M≤x1,x2,y1,y2≤M |x1−x2|∧|y1−y2|<δ ∂xxf(x1, y1) −∂xxf(x2, y2). Now take x, x0, y, y0 ∈[−M, M] with |x −x0| ∧|y −y0| < δ. By the mean value theorem, there exists a value ˜ y ∈[−M, M] with the property that|˜ y −y| ∧|˜ y −y0| < δ such that f(x, y) −f(x, y0) = ∂yf(x, ˜ y) (y −y0), and hence ¯ ¯f(x, y) −f(x, y0) −∂yf(x0, y0) (y −y0) ¯ ¯ ≤ω1(M, δ) (y −y0). Taylor’s formula implies that ¯ ¯f(x, y0) −f(x0, y0) −∂xf(x0, y0)(x −x0) −1 2 ∂xxf(x0, y0)(x −x0)2¯ ¯ ≤ω2(δ, M)(x −x0)2. Combining the last two formulas using the triangle inequality, we get that (1.5) ¯ ¯f(x, y) −f(x0, y0) −∂yf(x0, y0) (y −y0) −∂xf(x0, y0) (x −x0) −1 2∂xxf(x0, y0)(x −x0)2¯ ¯ ≤ω1(δ, M) (y −y0) + ω2(δ, M)(x −x0)2. Now, for any sequence 0 = t1 < . . . < tn = t define δ := max 1≤i≤n−1 ¯ ¯B(ti+1) −B(ti) ¯ ¯ ∧ max 1≤i≤n−1 ¯ ¯ζ(ti+1) −ζ(ti) ¯ ¯, and M := max 0≤s≤t |B(s)| ∧max 0≤s≤t |ζ(s)|. We get from (1.5), ¯ ¯ ¯f ¡ B(t), ζ(t) ¢ −f ¡ B(0), ζ(0) ¢ − n−1 X i=1 ∂xf ¡ B(ti), ζ(ti) ¢ ¡ B(ti+1) −B(ti) ¢ − n−1 X i=1 ∂yf ¡ B(ti), ζ(ti) ¢ ¡ ζ(ti+1) −ζ(ti) ¢ −1 2 n−1 X i=1 ∂xxf ¡ B(ti), ζ(ti) ¢ ¡ B(ti+1) −B(ti) ¢2¯ ¯ ¯ ≤ω1(δ, M) ¡ ζ(t) −ζ(0) ¢ + ω2(δ, M) n−1 X i=1 ¡ B(ti+1) −B(ti) ¢2 . 191 We can choose a sequence of partitions with mesh going to zero, such that, almost surely, the following convergence statements hold, • the first sum on the left converges to R t 0 ∂xf ¡ B(s), ζ(s) ¢ dB(s) by the definition of the stochastic integral, • the second sum on the left converges to R t 0 ∂yf ¡ B(s), ζ(s) ¢ dζ(s) by definition of the Stieltjes integral, • the third sum on the left converges to 1 2 R t 0 ∂xxf ¡ B(s), ζ(s) ¢ ds by (1.4), • the sum on the right hand side converges to t by Theorem 7.11. By continuity of the Brownian path ω1(δ, M) and ω2(δ, M) converge almost surely to zero. This proves the enhanced Itˆ o’s formula for fixed t, and looking at rationals and exploiting continuity as before, we get the result simultaneously for all 0 ≤s ≤t. With exactly the same technique, we obtain a version of Itˆ o’s formula for higher dimensional Brownian motion. The detailed proof will be an exercise, see Exercise 7.3. To give a pleasant formulation, we introduce some notation for functions f : Rd+m →R, where we interpret the argument as two vectors, x ∈Rd and y ∈Rm. We write ∂j for the partial derivative in direction of the jth coordinate, and ∇xf = (∂1f, . . . , ∂df) and ∇yf = (∂d+1f, . . . , ∂d+mf) for the vector of derivatives in the directions of x, respectively y. 
For integrals we use the scalar product notation Z t 0 ∇xf ¡ B(u), ζ(u) ¢ · dB(u) = d X i=1 Z t 0 ∂if(B(u), ζ(u)) dBi(u), and Z t 0 ∇yf ¡ B(u), ζ(u) ¢ · dζ(u) = m X i=1 Z t 0 ∂d+if(B(u), ζ(u)) dζi(u). Finally, for the Laplacian in the x-variable we write ∆xf = d X j=1 ∂jjf . Theorem 7.14 (Multidimensional Itˆ o’s formula). Let {B(t) : t ≥0} be a d-dimensional Brownian motion and suppose {ζ(s): s ≥0} is a continuous, adapted stochastic process with values in Rm and increasing components. Let f : Rd+m →R be such that the partial derivatives ∂if and ∂jkf exist for all 1 ≤j, k ≤d, 1 ≤i ≤d + m and are continuous. If, for some t > 0, E Z t 0 ¯ ¯∇xf ¡ B(s), ζ(s) ¢¯ ¯2 ds < ∞, 192 then, almost surely, for all 0 ≤s ≤t, (1.6) f ¡ B(s), ζ(s) ¢ −f ¡ B(0), ζ(0) ¢ = Z s 0 ∇xf ¡ B(u), ζ(u) ¢ · dB(u) + Z s 0 ∇yf ¡ B(u), ζ(u) ¢ · dζ(u) + 1 2 Z s 0 ∆xf ¡ B(u), ζ(u) ¢ du . Remark 7.15. As the Itˆ o formula holds almost surely simultaneously for all times s ∈[0, t], it also holds for stopping times bounded by t. Suppose now that f : U →R satisfies the differentiability conditions on an open set U, and K ⊂U is compact. Then there exists f ∗: Rm →R with f ∗= f on K, which satisfies the conditions of Theorem 7.14. Let T be the first exit time from K. Applying Theorem 7.14 to f ∗yields (1.6) for f, almost surely, for all times s ∧T, for s ≤t. ⋄ To appreciate the following discussion, we introduce a localisation of the notion of a martingale. Definition 7.16. An adapted stochastic process {X(t): 0 ≤t ≤T} is called a local martingale if there exist stopping times Tn, which are almost surely increasing to T, such that {X(t ∧Tn): t ≥0} is a martingale, for every n. ⋄ The following theorem is a substantial extension of Corollary 2.49. Theorem 7.17. Let D ⊂Rd be a domain and f : D →R be harmonic on D. Suppose that {B(t): 0 ≤t ≤T} is a Brownian motion started inside D and stopped at the time T when it first exits the domain D. (a) The process {f(B(t)): 0 ≤t ≤T} is a local martingale. (b) If we have E Z t∧T 0 ¯ ¯∇f(B(s)) ¯ ¯2 ds < ∞ for all t > 0, then {f(B(t ∧T)) : t ≥0} is a martingale. Proof. Suppose that Kn, n ∈N, is an increasing sequence of compact sets whose union is D, and let Tn be the associated exit times. By Theorem 7.14 in conjunction with Remark 7.15, f ¡ B(t ∧Tn) ¢ = Z t∧Tn 0 ∇f ¡ B(s) ¢ · dB(s) , whence {f ¡ B(t ∧Tn) ¢ : t ≥0} is a martingale, which proves (a). Obviously, almost surely, (1.7) f ¡ B(t ∧T) ¢ = lim n↑∞f ¡ B(t ∧Tn) ¢ . 193 For any t ≥0, the process {f(B(t∧Tn)) : n ∈N} is a discrete-time martingale by the optional stopping theorem. By our integrability assumption, E h³ f ¡ B(t ∧Tn) ´2i = E Z Tn∧t 0 ¯ ¯∇f ¡ B(s) ¢¯ ¯2 ds ≤E Z T∧t 0 ¯ ¯∇f ¡ B(s) ¢¯ ¯2 ds < ∞, so that the martingale is L2-bounded and convergence in (1.7) holds in the L1-sense. Taking limits in the equation E h f ¡ B(t ∧Tm) ¢ ¯ ¯ ¯ F(s ∧Tn) i = f ¡ B(s ∧Tn) ¢ , for m ≥n and t ≥s , first for m ↑∞, then n ↑∞, gives E £ f ¡ B(t ∧T) ¢ ¯ ¯ F(s ∧T) ¤ = f ¡ B(s ∧T) ¢ , for t ≥s. This shows that {f(B(t ∧T)) : t ≥0} is a martingale and completes the proof. Example 7.18. The radially symmetric functions (related to the radial potential), f(x) = ( log | x| if d = 2, |x|2−d if d ≥3. are harmonic on the domain Rd \ {0}. For a d-dimensional Brownian motion {B(t): t ≥0} with B(0) ̸= 0, the process {f(B(t)): t ≥0} is however not a martingale. Indeed, it is a straightforward calculation to verify that lim t↑∞E log |B(t)| = ∞, if d = 2, and lim t↑∞E[|B(t)|2−d] = 0, if d ≥3, contradicting the martingale property. 
Hence the integrability condition in Theorem 7.17(b) cannot be dropped without replacement, a local martingale is not necessarily a martingale. ⋄ 2. Conformal invariance and winding numbers We now focus on planar Brownian motion {B(t) : t ≥0} and formulate an invariance property which is at the heart of the role of Brownian motion in the context of planar random curves. Throughout this section we use the identification of R2 and C and use complex notation when it is convenient. To motivate the main result suppose that f : C →C is analytic, i.e. everywhere complex differentiable, and write f = f1 + if2 for the decomposition of f into a real and an imaginary part. Then, by the Cauchy-Riemann equations ∂1f1 = ∂2f2 and ∂2f1 = −∂1f2, we have ∆f1 = ∆f2 = 0. Then Itˆ o’s formula (if applicable) states that almost surely, for every t ≥0, f ¡ B(t) ¢ = Z t 0 f ′¡ B(s) ¢ dB(s) , 194 where dB(s) is short for dB1(s) + i dB2(s) with B(s) = B1(s) + iB2(s). The right hand side defines a continuous process with independent increments, and it is at least plausible that they are Gaussian. Moreover, its expectation vanishes and E h³ Z t 0 f ′¡ B(s) ¢ dB(s) ´2i = Z t 0 ¯ ¯f ′(B(s)) ¯ ¯2 ds , suggesting that {f(B(t)) : t ≥0} is a Brownian motion ‘travelling’ with the modified speed t 7→ Z t 0 |f ′(B(s))|2 ds. To turn this heuristic into a powerful theorem we allow the function to be an analytic map f : U →V between domains in the plane. Recall that such a map is called conformal if it is a bijection. Theorem 7.19. Let U be a domain in the complex plane, x ∈U, and let f : U →V be analytic. Let {B(t) : t ≥0} be a planar Brownian motion started in x and τU = inf © t ≥0 : B(t) ̸∈U ª its first exit time from the domain U. Then the process {f(B(t)) : 0 ≤t ≤τU} is a time-changed Brownian motion, i.e. there exists a planar Brownian motion { e B(t) : t ≥0} such that, for any t ∈[0, τU), f(B(t)) = e B(ζ(t)), where ζ(t) = Z t 0 ¯ ¯f ′(B(s)) ¯ ¯2 ds . If, additionally, f is conformal, then ζ(τU) is the first exit time from V by { e B(t) : t ≥0}. Remark 7.20. Note that, as f is complex differentiable, the derivative Df(x) is just multipli-cation by a complex number f ′(x), and f can be approximated locally around x by its tangent z 7→f(x) + f ′(x)(z −x). The derivative of the time change is ∂tζ(t) = |f ′(B(t))|2 = ¡ ∂1f1(B(t)) ¢2 + ¡ ∂2f1(B(t)) ¢2 . ⋄ Remark 7.21. The famous Riemann mapping theorem states that for any pair of simply con-nected open sets U, V ⊊C there exists a conformal mapping f : U →V , see [Ru86, 14.8]. This ensures that there are plenty of examples for Theorem 7.19. ⋄ Proof. Note first that the derivative of f is nonzero except for an at most countable set of points, which does not have a limit point in U. As this set is not hit by Brownian motion, we may remove it from U and note that the resulting set is still open We may therefore assume that f has nonvanishing derivative everywhere on U. We may also assume, without loss of generality, that f is a mapping between bounded domains. Otherwise choose Un ⊂Kn ⊂U such that Un is open with S Un = U and Kn is compact, which implies that Vn = f(Un) is bounded. Then {f(B(t)) : t ≤τUn} is a time-changed Brownian motion for all n, and this extends immediately to {f(B(t)) : t ≤τU}. 195 The main argument of the proof is based on stochastic integration. Recall that the Cauchy-Riemann equations imply that the vectors ∇f1 and ∇f2 are orthogonal and |∇f1| = |∇f2| = |f ′|. 
We start by defining for each t ≥0, a stopping time σ(t) = inf © s ≥0 : ζ(s) ≥t ª , which represents the inverse of the time change. Let { e B(t) : t ≥0} be a Brownian motion independent of {B(t) : t ≥0}, and define a process {W(t) : t ≥0} by W(t) = f ¡ B(σ(t) ∧τU) ¢ + e B(t) −e B(t ∧ζ(τU)), for t ≥0. In rough words, at the random time ζ(τU) an independent Brownian motion is attached at the endpoint of the process {f(B(σ(t))) : 0 ≤t ≤ζ(τU)}. Denote by G(t) the σ-algebra generated by {W(s) : s ≤t}. It suffices to prove that the process {W(t) : t ≥0} is a Brownian motion. It is obvious that the process is continuous almost surely and hence it suffices to show that its finite dimensional marginal distributions coincide with those of a Brownian motion. Recalling the Laplace transform of the bivariate normal distribution, this is equivalent to showing that, for any 0 ≤s ≤t and λ ∈C, E £ e⟨λ,W(t)⟩¯ ¯ G(s) ¤ = exp ¡ 1 2|λ|2(t −s) + ⟨λ, W(s)⟩ ¢ . where we have used ⟨· , · ⟩to denote the scalar product. This follows directly once we show that, for x ∈U, (2.1) E £ e⟨λ,W(t)⟩¯ ¯ W(s) = f(x) ¤ = exp ¡ 1 2 |λ|2 (t −s) + ⟨λ, f(x)⟩ ¢ . For simplicity of notation we may assume s = 0. For the proof we first evaluate the expectation with respect to the independent Brownian motion { e B(t) : t ≥0} inside, which gives E £ e⟨λ,W(t)⟩¯ ¯ W(0) = f(x) ¤ = Ex exp ¡ ⟨λ, f(B(σ(t) ∧τU)) ¢ + 1 2|λ|2 ¡ t −ζ(σ(t) ∧τU) ¢´ . We use the multidimensional Itˆ o’s formula for the bounded mapping F(x, u) = exp ¡ ⟨λ, f(x)⟩+ 1 2 |λ|2(t −u) ¢ , which is defined on U × R, see Remark 7.15. To prepare this, note that ∂iieg = [∂iig + (∂ig)2eg] and hence (2.2) ∆eg = [∆g + |∇g|2] eg . For g = ⟨λ, f⟩we have ∇g = P2 i=1 λi∇fi, which implies |∇g|2 = |λ|2 |f ′|2 as the vectors ∇fi are orthogonal with norm |f ′|. Moreover, ∆g = 0 by the analyticity of f. Applying (2.2) gives ∆e⟨λ,f(x)⟩= |λ|2 |f ′(x)|2 e⟨λ,f(x)⟩. Moreover, we have ∂u exp ¡ 1 2 |λ|2 (t −u) ¢ = −1 2 |λ|2 exp ¡ 1 2 |λ|2 (t −u) ¢ . We now let Un = {x ∈U : | x −y| ≥1 n for all y ∈∂U}. Then |f ′(x)| is bounded away from zero on Un and therefore the stopping time T = σ(t) ∧τUn is bounded. The multidimensional 196 version of Itˆ o’s formula gives, almost surely, F ¡ B(T), ζ(T) ¢ = F ¡ B(0), ζ(0) ¢ + Z T 0 ∇xF ¡ B(s), ζ(s) ¢ · dB(s) + Z T 0 ∂uF ¡ B(s), ζ(s) ¢ dζ(s) + 1 2 Z T 0 ∆xF ¡ B(s), ζ(s) ¢ ds . Looking back at the two preparatory displays and recalling that dζ(u) = |f ′(B(u))|2 du we see that the two terms in the second line cancel each other. Making use of bounded convergence and the fact that the stochastic integral has zero expectation, see Exercise 7.1, we obtain that E £ e⟨λ,W(t)⟩¯ ¯ W(0) = f(x) ¤ = Ex £ F ¡ B(σ(t) ∧τU), ζ(σ(t) ∧τU) ¢¤ = lim n→∞Ex £ F ¡ B(T), ζ(T) ¢¤ = F ¡ x, 0 ¢ = exp ³ 1 2|λ|2t + ⟨λ, f(x)⟩ ´ . This shows (2.1) and completes the proof. As a first application we look at harmonic measure and exploit its conformal invariance in order to calculate it explicitly in a special case. Theorem 7.22. Suppose U, V ⊂R2 are domains and f : ¯ U →¯ V is continuous and maps U conformally into V . (a) If x ∈U, then µ∂U(x, · ) ◦f −1 = µ∂V (f(x), · ). (b) Suppose additionally that U = Kc and V = Lc are the complement of compact sets and limx→∞f(x) = ∞. Then µK ◦f −1 = µL . Proof. (a) follows from Theorem 7.19 together with the continuity of f on ¯ U, which ensures that the first hitting point of ∂U by a Brownian motion is mapped onto the first hitting point of ∂V by its conformal image. For (b) tahe the limit x →∞and recall Theorem 3.45. Example 7.23. 
We find the harmonic measure from infinity on the unit interval $[0,1] = \{x + iy : y = 0,\ 0 \le x \le 1\}$. Our starting point is the harmonic measure on the circle $\partial B(0,1)$, which we know is the uniform distribution $\varpi$. Let $U$ be the complement of the unit ball $B(0,1)$ and $V$ the complement of the interval $[-1,1]$, and take the conformal mapping
\[ f : U \to V, \qquad f(z) = \frac12 \Big( z + \frac1z \Big) , \]
which satisfies our conditions. Hence $\varpi \circ f^{-1}$ is the harmonic measure on $[-1,1]$. If $z = x + iy = \cos\theta + i\sin\theta \in \partial B(0,1)$, then $|f'(z)|^2 = \sin^2\theta$, and hence $|f'(z)| = |y| = \sqrt{1-x^2}$. Recalling that every $x \in [-1,1]$ has two preimages, we get that the density of $\varpi \circ f^{-1}$ at $x = \cos\theta$ is
\[ \frac{2}{2\pi\, |f'(e^{i\theta})|} = \frac1\pi \frac1{\sqrt{1-x^2}} . \]
Mapping $V$ via $z \mapsto z^2$ onto the complement of $[0,1]$, noting that for this map $|f'(z)| = 2|z|$ and that again we have two preimages, we obtain that the harmonic measure on $[0,1]$ is
\[ d\mu_{[0,1]} = \frac1\pi \frac{1}{\sqrt{x(1-x)}}\, dx , \]
which is the Beta$(\frac12, \frac12)$ distribution. ⋄

As a further important application of conformal invariance we calculate the probability that a planar Brownian motion exits a cone before leaving a disc, see Figure 1.

[Figure 1. The Brownian path does not exit the cone before leaving the disc.]

Theorem 7.24. Let $\alpha \in (0, 2\pi]$ and denote by $W[\alpha]$ an open cone with vertex in the origin, symmetric about the $x$-axis, with opening angle $\alpha$. Let $\{B(t) : t \ge 0\}$ be planar Brownian motion started in $x = (1,0)$, and denote $T(r) = \inf\{t \ge 0 : |B(t)| = r\}$. Then, for $r > 1$,
\[ \mathbb{P}\big\{ B[0, T(r)] \subset W[\alpha] \big\} = \frac2\pi \arctan\Big( \frac{2 r^{\pi/\alpha}}{r^{2\pi/\alpha} - 1} \Big) . \]

Proof. For ease of notation we identify $\mathbb{R}^2$ with the complex plane. In the first step we use the conformal map $f : W[\alpha] \to W[\pi]$ defined by $f(x) = x^{\pi/\alpha}$ to map the cone onto a half-plane. Let $B^* = f \circ B$, which by conformal invariance is a time-changed Brownian motion started in the point $B^*(0) = 1$. As the time change does not affect the range of the path, we thus have
\[ \big\{ B[0, T(r)] \subset W[\alpha] \big\} = \big\{ B^*[0, T^*(r^{\pi/\alpha})] \subset W[\pi] \big\} , \]
where $T^*(\rho) = \inf\{t \ge 0 : |B^*(t)| = \rho\}$. It therefore suffices to show the result in the case $\alpha = \pi$. So let $\{B(t) : t \ge 0\}$ be a Brownian motion started in $B(0) = 1$ and look at the stopping time $S = \min\{t \ge 0 : \mathrm{Re}(B(t)) \le 0\}$. We use reflection in the imaginary axis, i.e. for $f(x,y) = (-x,y)$ we let
\[ \tilde B(t) = \begin{cases} B(t) & \text{if } t \le S, \\ f(B(t)) & \text{if } t \ge S. \end{cases} \]
Then $\tilde B$ is a Brownian motion started in $\tilde B(0) = 1$ and, for $\tilde T(r) = \inf\{t \ge 0 : |\tilde B(t)| = r\}$,
\begin{align*}
\mathbb{P}\{\mathrm{Re}(B(T(r))) > 0\} &= \mathbb{P}\{\mathrm{Re}(B(T(r))) > 0,\ T(r) < S\} + \mathbb{P}\{\mathrm{Re}(B(T(r))) > 0,\ T(r) > S\} \\
&= \mathbb{P}\{T(r) < S\} + \mathbb{P}\{\mathrm{Re}(\tilde B(\tilde T(r))) < 0\} .
\end{align*}
As $\{T(r) < S\}$ is the event whose probability we want to compute, and $\tilde B$ has the same law as $B$, it just remains to find
\[ \mathbb{P}\{\mathrm{Re}(B(T(r))) > 0\} - \mathbb{P}\{\mathrm{Re}(B(T(r))) < 0\} . \]
By Brownian scaling we may assume that the Brownian motion is started at $B(0) = 1/r$ and $T = \min\{t \ge 0 : |B(t)| = 1\}$. We apply the conformal map
\[ f : B(0,1) \to B(0,1), \qquad f(z) = \frac{z - 1/r}{1 - z/r} , \]
which is a Möbius transformation mapping the starting point of the Brownian motion to the origin and fixing the point 1. As this maps the arc $\{z \in \partial B(0,1) : \mathrm{Re}(z) < 0\}$ onto an arc of length $2 \arctan\frac{r^2-1}{2r}$, we obtain the result.
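The formula of Theorem 7.24 can be probed by simulation. The Python sketch below is ours and not part of the text: it estimates the probability of staying in the half-plane $W[\pi]$ before reaching radius $r = 2$, and compares it with $\frac2\pi \arctan\big(2r/(r^2-1)\big) \approx 0.59$. Since the discretised walk only checks the constraints at grid times, the coarse step size biases the estimate slightly upwards.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
r, dt, n_paths, max_steps = 2.0, 1e-3, 20000, 40000

z = np.tile(np.array([1.0, 0.0]), (n_paths, 1))   # all paths start at (1, 0)
alive = np.ones(n_paths, dtype=bool)              # neither event happened yet
stayed = np.zeros(n_paths, dtype=bool)            # reached |z| = r inside the cone

for _ in range(max_steps):
    if not alive.any():
        break
    z[alive] += rng.normal(0.0, np.sqrt(dt), size=(alive.sum(), 2))
    left = alive & (z[:, 0] <= 0.0)               # exited W[pi] first
    done = alive & ~left & (np.sum(z * z, axis=1) >= r * r)
    stayed |= done
    alive &= ~(left | done)

print(stayed.mean(), 2 / np.pi * np.arctan(2 * r / (r * r - 1)))
\end{verbatim}

The next result represents planar Brownian motion in polar coordinates. Again we identify $\mathbb{R}^2$ with the complex plane.

Theorem 7.25 (Skew-product representation). Suppose $\{B(t) : t \ge 0\}$ is a planar Brownian motion with $B(0) = 1$. Then there exist two independent linear Brownian motions $\{W_i(t) : t \ge 0\}$, for $i = 1,2$, such that
\[ B(t) = \exp\big( W_1(H(t)) + i\, W_2(H(t)) \big), \quad \text{for all } t \ge 0 , \]
where
\[ H(t) = \int_0^t \frac{ds}{|B(s)|^2} = \inf\Big\{ u \ge 0 : \int_0^u \exp(2 W_1(s))\, ds > t \Big\} . \]

Remark 7.26.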
By the result, both the logarithm of the radius, and the continuous determi-nation of the angle of a planar Brownian motion are time-changed Brownian motions. The time-change itself depends only on the radius of the motion and ensures that the angle changes slowly away from the origin, but rapidly near the origin. ⋄ Proof. Note first that H(t) itself is well-defined by Corollary 2.23. Moreover, the claimed equality for H(t) follows easily from the fact that both sides have the same value at t = 0 and the same derivative. Let {W(t) : t ≥0} be planar Brownian motion and W(t) = W1(t) + i W2(t) its decomposition into real and imaginary part. By Theorem 7.19, (2.3) exp ¡ W(t) ¢ = B(ζ(t)), where {B(t) : t ≥0} is a planar Brownian motion and ζ(t) = Z t 0 exp(2W1(s)) ds. By definition H is the inverse function of ζ. Hence, using (2.3) for t = H(s), we get B(s) = exp ¡ W(H(s)) ¢ = exp ¡ W1(H(s)) + i W2(H(s)) ¢ , which is the desired result. 199 Example 7.27. By the skew-product representation, for a planar Brownian motion {B(t) : t ≥0}, we have log |B(t)| = W1(H(t)) and hence the process {log |B(t)| : t ≥0} is a time-changed Brownian motion in dimension one. However, recall from Example 7.18 that it is not a martingale. ⋄ For further applications, we need to study the asymptotics of the random clock H(t) more carefully. To state the next result let {W1(t) : t ≥0} be a linear Brownian motion as in Theorem 7.25 and, for a > 0, let {W a 1 (t) : t ≥0} be the Brownian motion given by W a 1 (t) = a−1W1(a2t). For each such Brownian motion we look at the first hitting time of level b, T a b := inf © t ≥0 : W a 1 (t) = b ª . Theorem 7.28. For every ε > 0 we have lim t→∞P n¯ ¯ ¯ 4H(t) (log t)2 −T 1 2 log t 1 ¯ ¯ ¯ > ε o = 0. The proof uses the following simple fact, sometimes known as Laplace’s method. Lemma 7.29. For any continuous f : [0, t] →R and t > 0, lim a↑∞ 1 a log Z t 0 exp(af(v)) dv = max 0≤s≤t f(s). Proof. The upper bound is obvious, by replacing f by its maximum. For the lower bound, let s ∈[0, t] be a point where the maximum is taken. We use continuity to find, for any ε > 0, some 0 < δ < 1 such that f(r) ≥f(s) −ε for all r ∈(s −δ, s + δ). Restricting the limit to this interval gives a lower bound of max0≤s≤t f(s)−ε, and the result follows as ε > 0 was arbitrary. Proof of Theorem 7.28. By scaling one may assume that W1(0) = 0. We abbreviate a = a(t) = 1 2 log t. As we have, for any δ > 0, lim ε↓0 P n T 1 2 log t 1+ε −T 1 2 log t 1−ε > δ o = lim ε↓0 P n T 1 1+ε −T 1 1−ε > δ o = 0 it suffices to show that lim t↑∞P n 4H(t) (log t)2 > T 1 2 log t 1+ε o = 0, and lim t↑∞P n 4H(t) (log t)2 < T 1 2 log t 1−ε o = 0. We first show that (2.4) lim t↑∞P n 4H(t) (log t)2 > T 1 2 log t 1+ε o = 0. We have n 4H(t) (log t)2 > T 1 2 log t 1+ε o = n Z a2T a 1+ε 0 exp(2W1(u)) du < t o = n 1 2a log Z a2T a 1+ε 0 exp(2W1(u)) du < 1 o , 200 recalling that a = 1 2 log t. Note now that 1 2a log Z a2T a 1+ε 0 exp(2W1(u)) du = log a a + 1 2a log Z T a 1+ε 0 exp(2aW a 1 (u)) du, and the right hand side has the same distribution as log a a + 1 2a log Z T 1 1+ε 0 exp(2aW1(u)) du. Laplace’s method gives that, almost surely, lim a↑∞ 1 2a log Z T 1 1+ε 0 exp(2aW1(u)) du = sup 0≤s≤T 1 1+ε W1(s) = 1 + ε. Hence, lim a↑∞P n¯ ¯ ¯log a a + 1 2a log Z T 1 1+ε 0 exp(2aW1(u)) du −(1 + ε) ¯ ¯ ¯ > ε o = 0. This proves (2.4). In the same way one can show that lim t↑∞P n 4H(t) (log t)2 > T 1 2 log t 1−ε o = 0, and this completes the proof. Remark 7.30. 
As {W a 1 (t) : t ≥0} is a Brownian motion for every a > 0, the law of T a 1 does not depend on a > 0. Therefore, Theorem 7.28 implies that 4H(t) (log t)2 = ⇒T1, where T1 = inf{s ≥0 : W(s) = 1 ª . The distribution of T1 is, by Theorem 2.32 given by the density (2πs3)−1/2 exp(−1/(2s)). ⋄ We now determine the asymptotic law of the winding numbers θ(t) = W2(H(t)), as t →∞. Theorem 7.31 (Spitzer’s law). For any x ∈R, lim t→∞P n 2 log t θ(t) ≤x o = Z x −∞ dy π(1 + y2). In other words, the law of 2θ(t)/ log t converges to a standard symmetric Cauchy distribution. Proof. We define {W a 2 (t) : t ≥0} by W a 2 (t) = (1/a)W2(a2t). Then, a−1θ(t) = a−1W2(H(t)) = W a 2 (a−2H(t)). Hence, by Theorem 7.28, for a = a(t) = 1 2 log t, lim t→∞P n¯ ¯ ¯2θ(t) log t −W a 2 ¡ T a 1 ¢¯ ¯ ¯ > ε o = 0. 201 The law of the random variable W a 2 (T a 1 ) does not depend on the choice of a. By Theorem 2.33, see also Exercise 7.4, it is Cauchy distributed. 3. Tanaka’s formula and Brownian local time In this section we establish a deep connection between Itˆ o’s formula and Brownian local times for linear Brownian motion {B(t): t ≥0}. The basic idea is to give an analogue of Itˆ o’s formula for the function f : R →R, f(t) = | t −a|. Note that this function is not twice continuously differentiable, so Itˆ o’s formula cannot be applied directly. To see what we are aiming at, let’s apply Itˆ o’s formula informally. We have in the distributional sense that f ′(x) = sign(x −a) and f ′′(x) = 2δa. Hence Itˆ o’s formula would give |B(t) −a| −|B(0) −a| = Z t 0 sign(B(s) −a) dB(s) + Z t 0 δa(B(s)) ds, The last integral can be interpreted as the time spent by Brownian motion at level a and hence it is natural to expect that it is the local time La(t). Tanaka’s formula confirms this intuition. Theorem 7.32 (Tanaka’s formula). Let {B(t) : t ≥0} be linear Brownian motion. Then, for every a ∈R, almost surely, for all t > 0, |B(t) −a| −|B(0) −a| = Z t 0 sign(B(s) −a) dB(s) + La(t), where sign x = 1{x>0} −1{x≤0}. Remark 7.33. To explore the relation of Tanaka’s and Itˆ o’s formula further, suppose that f : R →R is twice differentiable such that f ′ has compact support, but do not assume that f ′′ is continuous. Then, for a suitable constant c, f ′(x) = 1 2 Z sign(x −a) f ′′(a) da + c , and, for a suitable constant b, f(x) = 1 2 Z |x −a| f ′′(a) da + cx + b . Integrating Tanaka’s formula with respect to 1 2 f ′′(a) da and exchanging this integral with the stochastic integral, which is justified by Exercise 7.7 below, gives f ¡ B(t) ¢ −f ¡ B(0) ¢ = Z t 0 f ′¡ B(s) ¢ dB(s) + 1 2 Z La(t) f ′′(a) da . By Theorem 6.17 the last term equals 1 2 R t 0 f ′′¡ B(s) ¢ ds. Hence, we learn that Itˆ o’s formula does not require the continuity requirement for the second derivative. ⋄ 202 For the proof of the Tanaka formula we define ˜ La(t) := |B(t) −a| −|B(0) −a| − Z t 0 sign(B(s) −a) dB(s) and show that this defines a density of the occupation measure. Lemma 7.34. For every t ≥0 and a ∈R, almost surely, ˜ La(t) = lim ε↓0 1 ε Z t 0 1(a,a+ε)(B(s)) ds, in probability. Proof. Using the strong Markov property the statement can be reduced to the case a = 0. The main idea of the proof is now to use convolution to make | x| smooth, and then use Itˆ o’s formula for the smooth function. For this purpose, recall that, for any δ > 0 we can find smooth functions g, h: R →[0, 1] with compact support such that g ≤1(0,1) ≤h and R g = 1 −δ, R h = 1 + δ. 
This reduces the problem to showing that
\[ \tilde L^0(t) = \lim_{\varepsilon \downarrow 0} \frac1\varepsilon \int_0^t g\big(\varepsilon^{-1} B(s)\big)\, ds, \quad \text{in probability}, \]
for $g : \mathbb{R} \to [0,1]$ smooth, with compact support in $[0,\infty)$ and $\int g = 1$. Let
\[ f_\varepsilon(x) = \frac1\varepsilon \int |x-a|\, g(\varepsilon^{-1} a)\, da = \int |x - \varepsilon a|\, g(a)\, da . \]
Then $f_\varepsilon$ is smooth with $f'_\varepsilon(x) = \int \mathrm{sign}(x - \varepsilon a)\, g(a)\, da$ and $f''_\varepsilon(x) = 2\varepsilon^{-1} g(\varepsilon^{-1} x)$. Itô's formula gives
(3.1) \[ f_\varepsilon(B(t)) - f_\varepsilon(B(0)) - \int_0^t f'_\varepsilon(B(s))\, dB(s) = \frac1\varepsilon \int_0^t g\big(\varepsilon^{-1} B(s)\big)\, ds . \]
Now we let $\varepsilon \downarrow 0$ in each term. The sequence of probability measures $\varepsilon^{-1} g(\varepsilon^{-1} z)\, dz$ converges weakly to $\delta_0$, which implies that $f_\varepsilon(x) \to |x|$ for all $x$. From the definition of $f_\varepsilon$ we infer that all functions $f_\varepsilon$ are Lipschitz with Lipschitz constant one. Hence, given $\delta > 0$ and a compact interval $I = [a,b]$, we can find finitely many points $a = x_1 < \ldots < x_n = b$ with $x_{k+1} - x_k < \delta/3$. There exists $\varepsilon_0 > 0$ such that $\big| f_\varepsilon(x_k) - |x_k| \big| < \delta/3$ for all $0 < \varepsilon < \varepsilon_0$. Then, for all $x \in I$ there is $x_k \le x \le x_{k+1}$ and
\[ \big| f_\varepsilon(x) - |x| \big| \le \big| f_\varepsilon(x) - f_\varepsilon(x_k) \big| + \big| f_\varepsilon(x_k) - |x_k| \big| + \big| |x_k| - |x| \big| \le \delta . \]
In other words, $f_\varepsilon(x) \to |x|$ uniformly on compact intervals. Given $\delta > 0$ one can find $M$ such that $|B(s)| \le M$ on $[0,t]$ with probability exceeding $1 - \delta$. On this event we have $f_\varepsilon(B(s)) \to |B(s)|$ uniformly in $s \in [0,t]$. This ensures convergence in probability of the first two terms on the left hand side of (3.1).

To deal with the third term, we differentiate $f_\varepsilon$ and get
\[ f'_\varepsilon(x) = \int \mathrm{sign}\big(x - \varepsilon a\big)\, g(a)\, da \uparrow \mathrm{sign}(x) \quad \text{as } \varepsilon \downarrow 0 . \]
Now we use the isometry property (1.2) to infer that
\[ \mathbb{E}\Big[\Big( \int_0^t \mathrm{sign}(B(s))\, dB(s) - \int_0^t f'_\varepsilon(B(s))\, dB(s) \Big)^2\Big] = \mathbb{E}\int_0^t \big(\mathrm{sign}(B(s)) - f'_\varepsilon(B(s))\big)^2\, ds . \]
The right hand side converges to zero, by the bounded convergence theorem. Hence we have shown that, in probability,
\[ \lim_{\varepsilon\downarrow 0} \frac1\varepsilon \int_0^t g\big(\varepsilon^{-1} B(s)\big)\, ds = \lim_{\varepsilon\downarrow 0}\, f_\varepsilon(B(t)) - f_\varepsilon(B(0)) - \int_0^t f'_\varepsilon(B(s))\, dB(s) = |B(t)| - |B(0)| - \int_0^t \mathrm{sign}(B(s))\, dB(s) = \tilde L^0(t) . \]

Proof of Theorem 7.32. Convergence in probability implies that a subsequence converges almost surely, and hence, for every $t$, we obtain from Lemma 7.34 that, almost surely, the process $\{\tilde L^a(t) : a \in \mathbb{R}\}$ is a density of the occupation measure. From Theorem 6.17 we therefore get that $\tilde L^a(t) = L^a(t)$ for almost every $a \in \mathbb{R}$. Averaging over $t$ gives that, almost surely, $\tilde L^a(t) = L^a(t)$ for almost every $a \in \mathbb{R}$ and $t \ge 0$. As the random field $\{L^a(t) : t \ge 0, a \in \mathbb{R}\}$ is continuous by Theorem 6.18, it therefore is a continuous modification of $\{\tilde L^a(t) : t \ge 0, a \in \mathbb{R}\}$. In particular, for every $a$, almost surely, the process $\{L^a(t) : t \ge 0\}$ agrees with $\{\tilde L^a(t) : t \ge 0\}$.

Corollary 7.35. For every $a \in \mathbb{R}$, almost surely, for all $t \ge 0$,
\[ \tfrac12 L^a(t) = (B(t)-a)^+ - (B(0)-a)^+ - \int_0^t \mathbf{1}\{B(s) > a\}\, dB(s) , \]
and
\[ \tfrac12 L^a(t) = (B(t)-a)^- - (B(0)-a)^- + \int_0^t \mathbf{1}\{B(s) \le a\}\, dB(s) . \]

Proof. By Tanaka's formula the right hand sides in these formulas add up to $L^a(t)$, while their difference is zero; hence each of them equals $\tfrac12 L^a(t)$.
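At this point a single-path numerical check of Tanaka's formula may be instructive. In the Python sketch below (ours, not part of the text) the stochastic integral is discretised by a left-point Riemann sum, and the local time is approximated by the occupation-time quotient $\varepsilon^{-1}\int_0^t \mathbf{1}_{(0,\varepsilon)}(B(s))\,ds$ of Lemma 7.34; the step size and $\varepsilon$ are arbitrary choices, and the two printed numbers should agree up to discretisation error.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
t, n, eps = 1.0, 200000, 0.01
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])

sign = np.where(B[:-1] > 0, 1.0, -1.0)   # sign x = 1{x>0} - 1{x<=0}
stoch_int = np.sum(sign * dB)            # left-point sum for int sign(B) dB
local_time = np.sum((B[:-1] > 0) & (B[:-1] < eps)) * dt / eps

# Tanaka: |B(t)| - |B(0)| - int sign(B) dB = L^0(t)
print(np.abs(B[-1]) - stoch_int, local_time)
\end{verbatim}

We now use Tanaka's formula to prove Lévy's theorem describing the joint law of the modulus and local time of a Brownian motion.

Theorem 7.36 (Lévy). The processes $\{(|B(t)|, L^0(t)) : t \ge 0\}$ and $\{(M(t)-B(t), M(t)) : t \ge 0\}$ have the same distribution.

Remark 7.37. This result extends both Theorem 2.31, where it was shown that the processes $\{|B(t)| : t \ge 0\}$ and $\{M(t)-B(t) : t \ge 0\}$ have the same distribution, and Theorem 6.10, where it was shown that $\{L^0(t) : t \ge 0\}$ and $\{M(t) : t \ge 0\}$ have the same distribution. Exercise 6.2 suggests an alternative proof using random walk methods. ⋄

As a preparation for the proof we find the law of the process given by integrating the sign of a Brownian motion with respect to that Brownian motion.

Lemma 7.38.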
For every a ∈R, the process {W(t) : t ≥0} given by W(t) = Z t 0 sign(B(s) −a) dB(s) is a standard Brownian motion. Proof. Assume, without loss of generality, that a < 0. Suppose that T = inf{t > 0 : B(t) = a}. Then W(t) = B(t) for all t ≤T and hence {W(t) : 0 ≤t ≤T} is a (stopped) Brownian motion. By the strong Markov property the process { e B(t) : t ≥0} given by e B(t) = B(t+T)−a is a Brownian motion started in the origin, which is independent of {W(t) : 0 ≤t ≤T}. As W(t + T) = W(T) + Z t+T T sign(B(s) −a) dB(s) = B(T) + Z t 0 sign( e B(s)) d e B(s), it suffices to show that the second term is a Brownian motion to complete the proof. Hence we may henceforth assume that a = 0. Now fix 0 ≤s < t and recall that W(t) −W(s) is independent of F(s). For the proof it hence suffices to show that W(t)−W(s) has a centred normal distribution with variance t−s. Choose s = t (n) 1 < . . . < t (n) n = t with mesh ∆(n) ↓0, and approximate the progressively measurable process H(u) = sign(B(u)) by the step processes Hn(u) = sign ¡ B(t (n) j ) ¢ if t (n) j < u ≤t (n) j+1. It follows from the fact that the zero set of Brownian motion is a closed set of measure zero, that lim E R t s (Hn(u) −H(u))2 du = 0, and hence W(t) −W(s) = Z t s H(u) dB(u) = L2 −lim n→∞ Z t s Hn(u) dB(u) = L2 −lim n→∞ n−1 X j=1 sign ¡ B(t (n) j ) ¢ ¡ B(t (n) j+1) −B(t (n) j ) ¢ . From the independence of the Brownian increments and elementary properties of the normal distribution, one can see that the random variables on the right all have a centred normal distribution with variance t −s. Hence this also applies to the limit W(t) −W(s). Proof of Theorem 7.36. By Tanaka’s formula we have |B(t)| = Z t 0 sign(B(s)) dB(s) + L0(t) = W(t) + L0(t) . Define a standard Brownian motion {f W(t) : t ≥0} by f W(t) = −W(t) for all t ≥0, and let {f M(t) : t ≥0} be the associated maximum process. We show that f M(t) = L0(t) for all t ≥0, 205 which implies that {(|B(t)|, L0(t)) : t ≥0} and {(f M(t)−f W(t), f M(t)) : t ≥0} agree pointwise, and the result follows as the latter process agrees in distribution with {(M(t) −B(t), M(t)) : t ≥0}. To show that f M(t) = L0(t) we first note that f W(s) = L0(s) −|B(s)| ≤L0(s), and hence, taking the maximum over all s ≤t, we get f M(t) ≤L0(t). On the other hand, the process {L0(t) : t ≥0} increases only on the set {t : B(t) = 0} and on this set we have L0(t) = f W(t) ≤f M(t). Hence the proof is complete, since {f M(t) : t ≥0} is increasing. 4. Feynman-Kac formulas and applications In this section we answer some natural questions about Brownian motion that involve time. For example, we find the probability that linear Brownian motion exits a given interval by a fixed time. Our main tool is the close relationship between the expectation of certain functionals of the Brownian path and the heat equation with dissipation term. This goes under the name of Feynman-Kac formula, and the theorems that make up this theory establish a strong link between parabolic partial differential equations and Brownian motion. Definition 7.39. Let U ⊂Rd be either open and bounded, or U = Rd. A twice differentiable function u: (0, ∞) × U →[0, ∞) is said to solve the heat equation with heat dissipation rate V : U →R and initial condition f : U →[0, ∞) on U if we have • lim x→x0 t↓0 u(t, x) = f(x0), whenever x0 ∈U, • lim x→x0 t→t0 u(t, x) = 0, whenever x0 ∈∂U, • ∂tu(t, x) = 1 2∆xu(t, x) + V (x)u(t, x) on (0, ∞) × U, where the Laplacian ∆x acts on the space variables x. ⋄ Remark 7.40. 
The solution u(t, x) describes the temperature at time t at x for a heat flow with cooling with rate −V (x) on the set {x ∈U : V (x) < 0}, and heating with rate V (x) on the set {x ∈U : V (x) > 0}, where the initial temperature distribution is given by f(x) and the boundary of U is kept at zero temperature. ⋄ Instead of going for the most general results linking the heat equation to Brownian motion, we give some of the more basic forms of the Feynman-Kac formula together with applications. Our first theorem in this spirit, an existence result for the heat equation in the case U = Rd, will lead to a new, more analytic proof of the second arcsine law, Theorem 5.28. 206 Theorem 7.41. Suppose V : Rd →R is bounded. Then u: [0, ∞) × Rd →R defined by u(t, x) = Ex n exp ³ Z t 0 V ¡ B(r) ¢ dr ´o , solves the heat equation on Rd with dissipation rate V and initial condition one. Proof. The easiest proof is by a direct calculation. Expand the exponential in a power series, then the terms in the expansion are a0(x) := 1 and, for n ≥1, an(x) := 1 n! Ex h Z t 0 · · · Z t 0 V (B(t1)) · · · V (B(tn))dt1 . . . dtn i = Ex h Z t 0 dt1 Z t t1 dt2 · · · Z t tn−1 dtn V (B(t1)) · · · V (B(tn)) i = Z t 0 dt1 · · · Z t tn−1 dtn Z dx1 · · · Z dxn+1 n Y i=1 V (xi) n+1 Y i=1 p(ti −ti−1, xi−1, xi) , with the conventions x0 = x, t0 = 0 and tn+1 = t. Differentiating with respect to t and using ∂tp(t, x1, x2) = 1 2∆xp(t, x1, x2), we get ∂ta1(x) = ∂t Z t 0 dt1 Z dx1 V (x1) Z dx2 p(t1, x, x1) p(t −t1, x1, x2) = Z t 0 dt1 Z dx1 V (x1) Z dx2 p(t1, x1, x) 1 2∆xp(t −t1, x2, x1) + Z dx1 V (x) p(t, x, x1) = 1 2∆x a1(x) + V (x)a0(x) . Analogously, ∂ ∂t an(x) = 1 2∆x an(x) + V (x)an−1(x). Adding up all these terms, and noting that differentiation under the summation sign is allowed, verifies the validity of the differential equation. The requirement on the initial condition follows easily from the boundedness of V . As an application we give a proof of the second arcsine law, Theorem 5.28, which does not rely on the first arcsine law. We use Theorem 7.41 with V (x) = λ 1[0,∞)(x). Then u(t, x) := Ex h exp ³ −λ Z t 0 1[0,∞)(B(s)) ds ´i solves ∂tu(t, x) = 1 2 ∂xxu(t, x) −λ 1[0,∞)(x) u(t, x) , u(0, x) = 1 for all x ∈R. To turn this partial differential equation into an ordinary differential equations, we take the Laplace transform g(x) = Z ∞ 0 u(t, x) e−ρt dt , 207 which satisfies the equation ρ g(x) + λV (x) g(x) −1 2 g′′(x) = 1 . This can be rewritten as (ρ + λ) g(x) −1 2 g′′(x) = 1 if x > 0 , ρ g(x) −1 2 g′′(x) = 1 if x < 0 . Solving these two linear ordinary differential equations gives g(x) = 1 λ+ρ + A e √ 2ρx + Be− √ 2ρx if x > 0 , g(x) = 1 ρ + C e √ 2ρx + De− √ 2ρx if x < 0 . As g must remain bounded as ρ ↑∞, we must have A = D = 0. Moreover, g must be continuously differentiable in zero, hence C and B can be calculated from matching conditions. After an elementary calculation we obtain g(0) = 1 p ρ(ρ + λ) . On the other hand, with X(t) = 1 t Z t 0 1[0,∞)(B(s)) ds we have, using Brownian scaling in the second step, g(0) = E0 h Z ∞ 0 exp ¡ −ρt −λtX(t) ¢ dt i = E0 h Z ∞ 0 exp ¡ −ρt −λtX(1) ¢ dt i = E0 h 1 ρ + λX(1) i . Now we let ρ = 1 and from E0 h 1 1 + λX(1) i = 1 √ 1 + λ and the expansions 1 √ 1 + λ = ∞ X n=0 (−λ)n 1 2 3 2 · · · 2n−1 2 n! , and Z 1 0 xn−1 2 (1 −x)−1 2 dx = Γ(2n+1 2 ) Γ( 1 2) Γ(n + 1) = π 1 2 3 2 · · · 2n−1 2 n! , we get for the moments of X(1), by a comparison of coefficients, E £ X(1)n¤ = 1 π Z 1 0 xn 1 p x(1 −x) dx, which implies that X(1) is arcsine distributed. 
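The arcsine law lends itself to a quick simulation check. The following Python sketch (ours, not part of the text) samples the discretised occupation time $X(1)$ and compares its empirical moments with the moments of the Beta$(\frac12,\frac12)$ distribution, which by the computation above equal $\binom{2n}{n} 4^{-n}$.

\begin{verbatim}
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n_paths, n_steps = 10000, 1000
dB = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)

# discretised occupation time of [0, infinity) up to time 1
X = np.mean(B >= 0, axis=1)

for n in (1, 2, 3):
    print(n, np.mean(X ** n), comb(2 * n, n) / 4 ** n)  # empirical vs exact
\end{verbatim}

For $n = 1, 2, 3$ the exact moments are $\frac12$, $\frac38$ and $\frac{5}{16}$, and the empirical values should match them to within Monte Carlo error.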
Our second version of the Feynman-Kac formula is a uniqueness result for the case of zero dissipation rate, which will allow us the express the probability that linear Brownian motion exits an interval before a fixed time t in two different ways. 208 Theorem 7.42. If u is a bounded, twice continuously differentiable solution of the heat equation on the domain U, with zero dissipation rate and continuous initial condition f, then (4.1) u(t, x) = Ex h f ¡ B(t) ¢ 1{t < τ} i , where τ is the first exit time from the domain U. Proof. The proof is based on Itˆ o’s formula, Theorem 7.14, and Remark 7.15. We let K ⊂U be compact and denote by σ the first exit time from K. Fixing t > 0 and applying Itˆ o’s formula with f(x, y) = u(t −y, x) and ζ(s) = s gives, for all s < t, u(t −s ∧σ, B(s ∧σ)) −u(t, B(0)) = Z s∧σ 0 ∇xu(t −v, B(v)) · dB(v) − Z s∧σ 0 ∂tu(t −v, B(v)) dv + 1 2 Z s∧σ 0 ∆xu(t −v, B(v)) dv. As u solves the heat equation, the last two terms on the right cancel. Hence, taking expectations, Ex £ u(t −s ∧σ, B(s ∧σ)) ¤ = Ex £ u(t, B(0)) ¤ = u(t, x), using that the stochastic integral has vanishing expectation. Exhausting U by compact sets, i.e. letting σ ↑τ, leads to Ex[u(t −s, B(s)) 1{s < τ}] = u(t, x). Taking a limit s ↑t gives the required result. As an application of Theorem 7.42 we calculate the probability that a linear Brownian motion stays, up to time t, within an interval. As a warm-up we suggest to look at Exercise 7.8 where the easy case of a half-open interval is treated. Here we focus on intervals [a, b], for a < 0 < b, and give two different formulas for the probability of staying in [a, b] up to time t. To motivate the first formula, we start with a heuristic approach, which gives the correct result, and then base the rigorous proof on the Feynman-Kac formula. For our heuristics we think of the transition (sub–)density, qt : [0, a]×[0, a] →[0, 1] of a Brownian motion, which is killed upon leaving the interval [0, a]. In a first approximation we subtract from the transition density p(t, x, y) of an unkilled Brownian motion the transition density for all the paths that reach level 0. By the reflection principle (applied to the first hitting time of level 0) the latter is equal to p(t, x, −y). We then subtract the transition density of all the paths that reach level a, which, again by the reflection principle, equals p(t, x, 2a −y), then add again the density of all the paths that reach level 0 after hitting a, as these have already been subtracted in the first step. This gives the approximation term p(t, x, y) −p(t, x, −y) −p(t, x, 2a −y) + p(t, x, 2a + y). Of course the iteration does not stop here (for example we have double-counted paths that reach level 0 after hitting a). Eventually we have to consider an infinite series of alternating reflections at levels 0 and a to obtain the density qt(x, y) = ∞ X k=−∞ © p(t, x, 2ka + y) −p(t, x, 2ka −y) ª . 209 Integrating this over y ∈[0, a] makes the following theorem plausible. Theorem 7.43. Let 0 < x < a. Then (4.2) Px © B(s) ∈(0, a) for all 0 ≤s ≤t ª = ∞ X k=−∞ n Φ ¡ 2ka+a−x √ t ¢ −Φ ¡ 2ka−x √ t ¢ −Φ ¡ 2ka+a+x √ t ¢ + Φ ¡ 2ka+x √ t ¢o , where Φ(x) is the distribution function of a standard normal distribution. Proof. The left hand side in (4.2) agrees with the right hand side in Theorem 7.42 for U = (0, a) and f = 1. The series on the right hand side is absolutely convergent, and hence satisfies the boundary conditions at x = 0 and x = a. It is also not difficult to verify that it is bounded. 
Elementary calculus gives ∂tΦ ¡ 2ka+a−x √ t ¢ = −2ka+a−x 2t3/2 p(t, x, 2ka + a) = 1 2 ∂xxΦ ¡ 2ka+a−x √ t ¢ , and similarly for the other summands. Hence termwise differentiation shows that the right hand side satisfies the heat equation. To see that the initial condition is fulfilled, note that (as t ↓0) the sums over all k > 0 converge to zero by cancellation, and the sum over all k < 0 obviously converge to zero. Among the four terms belonging to k = 0, two terms with posi-tive sign and one term with negative sign converge to one, whereas one term converges to zero. The solution of the heat equation is not in the form one would get by a na¨ ıve separation of variables approach. This approach yields a different, equally valuable, expression for the probability of interest. Indeed, writing u(t, x) = v(t)w(x) one expects w to be an eigenfunction of 1 2 ∂xx on (0, a) with zero boundary conditions. These eigenfunctions are sin ¡ kπ(2x−a) 2a ¢ , for k even, cos ¡ kπ(2x−a) 2a ¢ , for k odd, with eigenvalues −k2π2/(2a2). As we are only interested in solutions symmetric about a/2 only the cosine terms will contribute For v we are looking for the eigenfunctions of ∂t with the same eigenvalues, which are exp ¡ −k2π2 2a2 t ¢ , for k odd, and considering the initial condition (and shifting the cosine by π/2) leads to the solution (4.3) u(t, x) = 4 π ∞ X n=0 1 2n + 1 exp ¡ −(2n+1)2π2 2a2 t ¢ sin ¡ (2n+1)πx a ¢ . Therefore (4.3) is an alternative representation of the probability in (4.2). For practical purposes this series is more useful when t is large, as the convergence is faster, whereas the series in the theorem converges fast only for small values of t > 0. 210 We now prove an elliptic, or time-stationary, version of the Feynman-Kac formula. This will enable us to describe the distribution of the total time spent by a transient Brownian motion in unit ball in terms of a Laplace transfrom. Theorem 7.44. Let d ≥3 and V : Rd →[0, ∞) be bounded. Define h(x) := Ex h exp ³ − Z ∞ 0 V (B(t)) dt ´i . Then h: Rd →[0, ∞) satisfies the equation h(x) = 1 − Z G(x, y)V (y) h(y) dy for all x ∈Rd . Remark 7.45. Informally, the integral equation in Theorem 7.44 implies 1 2 ∆h = V h, which is also what one gets from letting t ↑∞in Theorem 7.41. See also Exercise 2.18 for a converse result in a similar spirit. ⋄ Proof. Define the ‘resolvent operator’ RV λ f(x) := Z ∞ 0 e−λtEx £ f(B(t)) e− R t 0 V (B(s)) ds¤ dt . Using the fundamental theorem of calculus in the second step we obtain R0 λf(x)−RV λ f(x) = Ex Z ∞ 0 dt e−λt− R t 0 V (B(s)) ds f(B(t)) ¡ e R t 0 V (B(s)) ds −1 ¢ = Ex Z ∞ 0 dt e−λt− R t 0 V (B(s)) ds f(B(t)) Z t 0 V (B(s)) e R s 0 V (B(r)) dr ds . Using Fubini’s theorem and the Markov property, we may continue with = Ex Z ∞ 0 ds e−λs V (B(s)) Z ∞ 0 dt exp ¡ −λt − Z t 0 V (B(s + u)) du ¢ f(B(s + t)) = Ex Z ∞ 0 ds e−λsV (B(s)) RV λ f(B(s)) = R0 λ ¡ V RV λ f ¢ (x) . The function h is related to the resolvent operator by the equation h(x) = lim λ↓0 λRV λ 1(x) . Letting f ≡1 we obtain 1 −λ RV λ 1 = λ R0 λ ¡ V RV λ 1 ¢ , and as λ ↓0 we get 1 −h(x) = R0 0 ¡ V h ¢ (x) = Z G(x, y)V (y) h(y) dy , for the Green’s function G(x, y) = (2π)−1 |x −y|−2, as claimed. We use Theorem 7.44 to prove the three-dimensional case of the Ciesielski-Taylor identity, one of the most surprising identities about Brownian motion. Key to this is the following proposition. 211 Proposition 7.46. For a standard Brownian motion {B(t): t ≥0} in dimension three let T = R ∞ 0 1{B(t) ∈B(0, 1)} be the total occupation time of the unit ball. 
Then
\[ \mathbb{E}\big[ e^{-\lambda T} \big] = \mathrm{sech}\big( \sqrt{2\lambda}\, \big) . \]

Proof. Let $V(x) = \lambda \mathbf{1}_{B(0,1)}(x)$ and define $h(x) = \mathbb{E}_x[e^{-\lambda T}]$ as in Theorem 7.44. Then
\[ h(x) = 1 - \lambda \int_{B(0,1)} G(x,y)\, h(y)\, dy \quad \text{for all } x \in \mathbb{R}^3 . \]
Clearly, we are looking for a rotationally symmetric function $h$. The integral on the right can therefore be split into two parts: first, the integral over $B(0,|x|)$, which is the Newtonian potential due to a symmetric mass distribution on $B(0,|x|)$ and therefore remains unchanged if the same mass is concentrated at the origin; second, the integral over $B(0,1) \setminus B(0,|x|)$, which is harmonic on the open ball $B(0,|x|)$ with constant value on the boundary, so is itself constant. Hence, for $x \in B(0,1)$,
\[ 1 - h(x) = \frac{\lambda}{2\pi |x|} \int_{B(0,|x|)} h(y)\, dy + \lambda \int_{B(0,1)\setminus B(0,|x|)} \frac{h(y)}{2\pi |y|}\, dy . \]
With $u(r) = r\, h(x)$ for $|x| = r$ we have, for $0 < r < 1$,
\[ r - u(r) = 2\lambda \int_0^r s\, u(s)\, ds + 2\lambda r \int_r^1 u(s)\, ds , \]
and by differentiation $u'' = 2\lambda u$ on $(0,1)$. Hence
\[ u(r) = A e^{\sqrt{2\lambda}\, r} + B e^{-\sqrt{2\lambda}\, r} . \]
The boundary conditions $u(0) = 0$ and $u'(1) = 1$ give $B = -A$ and
\[ A = \frac{1}{\sqrt{2\lambda}}\, \frac{1}{e^{\sqrt{2\lambda}} + e^{-\sqrt{2\lambda}}} . \]
Then
\[ h(0) = \lim_{r \downarrow 0} \frac{u(r)}{r} = 1 - 2\lambda \int_0^1 u(r)\, dr = 1 - A\sqrt{2\lambda}\, \big( e^{\sqrt{2\lambda}} + e^{-\sqrt{2\lambda}} - 2 \big) = \frac{2}{e^{\sqrt{2\lambda}} + e^{-\sqrt{2\lambda}}} = \mathrm{sech}\big(\sqrt{2\lambda}\,\big) , \]
as required to complete the proof.

Theorem 7.47 (Ciesielski-Taylor identity). The first exit time from the unit ball by a standard Brownian motion in dimension one and the total occupation time of the unit ball by a standard Brownian motion in dimension $d = 3$ have the same distribution.

Proof. The Laplace transform of the first exit time from the interval $(-1,1)$ is given in Exercise 2.16. It coincides with the Laplace transform of $T$ given in Proposition 7.46. Hence the two distributions coincide.

Exercises

Exercise 7.1 (∗). Suppose $\{H(s,\omega): s \ge 0, \omega \in \Omega\}$ is progressively measurable and $\{B(t): t \ge 0\}$ a linear Brownian motion. Show that for any stopping time $T$ with
\[ \mathbb{E}\Big[ \int_0^T H(s)^2\, ds \Big] < \infty , \]
we have
(a) $\displaystyle \mathbb{E}\Big[ \int_0^T H(s)\, dB(s) \Big] = 0$,
(b) $\displaystyle \mathbb{E}\Big[ \Big( \int_0^T H(s)\, dB(s) \Big)^2 \Big] = \mathbb{E}\Big[ \int_0^T H(s)^2\, ds \Big]$.

Exercise 7.2. Suppose that $f: [0,1] \to \mathbb{R}$ is in the Cameron-Martin space, i.e. $f(t) = \int_0^t f'(s)\, ds$ for all $t \in [0,1]$ and $f' \in L^2(0,1)$. Then, almost surely,
\[ \int_0^1 f'(s)\, dB(s) = \lim_{n\to\infty} n \sum_{j=0}^{n-1} \Big( f\big(\tfrac{j+1}{n}\big) - f\big(\tfrac{j}{n}\big) \Big) \Big( B\big(\tfrac{j+1}{n}\big) - B\big(\tfrac{j}{n}\big) \Big) . \]

Exercise 7.3 (∗). Give the details of the proof of the multidimensional Itô formula, Theorem 7.14.

Exercise 7.4 (∗). Give an alternative proof of Theorem 2.33 based on a conformal mapping of the half-plane $\{(x,y): x > t\}$ onto the unit disc, which exploits our knowledge of harmonic measure on spheres.

Exercise 7.5 (∗). Let $\{B(t): t \ge 0\}$ be a planar Brownian motion. Show that, if $\theta(t)$ is the continuous determination of the angle of $B(t)$, we have, almost surely,
\[ \liminf_{t\uparrow\infty} \theta(t) = -\infty \quad \text{and} \quad \limsup_{t\uparrow\infty} \theta(t) = \infty . \]

Exercise 7.6. Formalise and prove the statement that, for every $\varepsilon > 0$, a planar Brownian motion winds around its starting point infinitely often in any time interval $[0, \varepsilon]$.

Exercise 7.7 (∗). Show that under suitable conditions, stochastic integrals and ordinary integrals can be interchanged: Suppose $h: \mathbb{R} \to [0,\infty)$ is a continuous function with compact support. Then, almost surely,
\[ \int_{-\infty}^\infty h(a) \Big( \int_0^t \mathrm{sign}(B(s) - a)\, dB(s) \Big)\, da = \int_0^t \Big( \int_{-\infty}^\infty h(a)\, \mathrm{sign}(B(s) - a)\, da \Big)\, dB(s) . \]
Hint. Write the outer integral on the left hand side as a limit of Riemann sums. For this purpose it is useful to know that, by Tanaka's formula and continuity of the local times, the integrand has a continuous modification.

Exercise 7.8.
(a) Show that the function u: (0, ∞) × (0, ∞) →R given by u(t, x) = r 2 πt Z x 0 e−z2 2t dz solves the heat equation on the domain (0, ∞) with zero dissipation rate and constant initial condition f = 1. (b) Infer from this that, for x > 0, Px © B(s) > 0 for all s ≤t ª = r 2 πt Z x 0 e−z2 2t dz. (c) Explain how the result of (b) could have been obtained from the reflection principle. Exercise 7.9. Prove the Erd˝ os-Kac theorem: Let X1, X2, . . . be a sequence of independent identically distributed random variables with mean zero and variance one. Let Sn = X1 +· · ·+ Xn and Tn = max{|S1|, . . . , |Sn|}. Then lim n→∞P{Tn < x} = 4 π ∞ X n=0 (−1)n 2n + 1 exp ³ −(2n + 1)2π2 8x2 ´ . 214 Notes and Comments The first stochastic integral with a random integrand was defined by Itˆ o [It44] but stochastic integrals with respect to Brownian motion with deterministic integrands were known to Paley, Wiener and Zygmund already in 1933, see [PWZ33]. Our stochastic integral is by far not the most general construction possible, the complete theory of Itˆ o integration is one of the cornerstones of modern probability. Interesting further material can be found, for example, in the books [CW90], [RW00] or [Du96]. Itˆ o’s formula, first proved in [It51], plays a central role in stochastic analysis, quite like the fundamental theorem of calculus does in real analysis. The version we give is designed to minimise the technical effort to get to the desired applications, but a lot more can be said if the discussion is extended to the concept of semimartingales, the references in the previous paragraph provide the background for this. Conformal invariance was known to L´ evy and a sketch of a proof is given in the book [Le48]. It is interesting to note that this fact does not extend to higher dimensions d ≥3. There are not many nontrivial conformally invariant maps anyway, but essentially the only one, inversion on a sphere, fails. This is easy to see, as the image of Brownian motion stopped on the boundary of the punctured domain B(0, 1) \ {0} has zero probability of not hitting B(0, 1). There is rich interaction between complex analysis and Brownian motion, which relies on conformal invariance. The conformal invariance of harmonic measure, which we proved in Theorem 7.22, is not easy to obtain by purely analytical means. Another result from com-plex analysis, which can be proved effectively using Brownian motion is Picard’s theorem, see Davis [Da75]. The theorem states that a nonconstant entire function has a range which omits at most one point from the complex plane. Only very recently a completely new perspective on conformal invariance has opened up through the theory of conformally invariant random curves developed by Lawler, Schramm, and Werner, see e.g. [We04]. The skew-product representation has many nice applications, for more examples see [LG91], which also served as the backbone of our exposition. The first result about the windings of Brownian motion is Spitzer’s law, first proved by F. Spitzer in [Sp58]. There are plenty of extensions including pathwise laws [Sh98, M¨ o02], windings around more than one point, and joint laws of windings and other functionals [PY86]. A discussion of some problems related to this can be found in [Yo92]. Tanaka’s formula offers many fruitful openings, among them the theory of local times for semimartingales, which is presented in [RY94]. The formula goes back to the paper by Tanaka [Ta63]. 
Alternative to our approach, Tanaka’s formula can be taken as a definition of Brownian local time. Then continuity can be obtained from the Kolmogorov-ˇ Centsov theorem and moment estimates based on the Burkholder-Davis-Gundy inequalities, see for example the book by Karatzas and Shreve [KS88]. 215 The Feynman-Kac formula is a classical application in stochastic calculus, which is discussed in more detail in [KS88]. It can be exploited to obtain an enormous variety of distributional properties of Brownian motion, see [BS02] for (literally!) thousands of examples. The converse, application of Brownian motion to study equations, is of course equally natural. Del Moral [DM04] gives an impressive account of the wide applicability of this formula and its variants. The identity between the two formulas describing the probability that a Brownian motion stays between two barriers serves as a standard example for the Poisson summation formula, see [Fe66, X.5 and XIX.5]. According to Feller it was discovered originally in connection with Jacobi’s theory of transformations of theta functions, see Landau [La09, Satz 277]. The ‘iterated reflection’ argument, which we have used to determine the transition density of a Brownian motion with absorbing barriers may also be used to determine transition densities for a Brownian motion which is reflected at the barriers, see [Fe66, X.5]. In higher dimensions Brownian motion reflected at the boundaries of a domain is an interesting subject, not least because of its connections to partial differential equations with Neumann boundary conditions, see, for example, [Br76]. The Erd˝ os-Kac law plays an important rˆ ole for the Kolmogorov-Smirnov test known from non-parametric statistics, see e.g. [Fe68]. Plenty of proofs of the arcsine law are known: Besides the two provided in this book, there is also an approach of Kac [Ka51] based on the Meyer-Tanaka formula, and Rogers and Williams [RW00, III.24] provide a proof based on local time theory. The Ciesielski-Taylor identity was found by Ciesielski and Taylor in 1962 by an explicit calcu-lation, see [CT62]. It extends to arbitrary dimensions d, stating that the law of the exit times from the unit ball by a standard Brownian motion in dimension d equals the law of the total occupation time in the unit ball by the standard Brownian motion in dimension d + 2. Our argument is taken from [Sp64], see also [RW00, III.20]. Many proofs of this fact are known, see for example [Yo92], but none provides a geometrically intuitive explanation and it may well be that none exists. 216 CHAPTER 8 Potential theory of Brownian motion In this chapter we develop the key facts of the potential theory of Brownian motion. This theory is centred around the notions of a harmonic function, the energy of a measure, and the capacity of a set. The probabilistic problem at the heart of this chapter is to find the probability that Brownian motion visits a given set. 1. The Dirichlet problem revisited We now take up the study of the Dirichlet problem again and ask for sharp conditions on the domain which ensure the existence of solutions, which allow us to understand the problem for domains with very irregular boundaries, like for example connected components of the complement of a planar Brownian curve. In this chapter, stochastic integrals and Itˆ o’s formula will be a helpful tool. As a warm-up, we suggest to use these tools to give a probabilistic proof of the mean value property of harmonic functions, see Exercise 8.1. 
Recall from Example 3.15 that the existence of a solution of the Dirichlet problem may be in doubt by the fact that Brownian motion started at the boundary ∂U may not leave the domain U immediately. Indeed, we show here that this is the only problem that can arise. Definition 8.1. A point x ∈A is called regular for the closed set A ⊂Rd if the first hitting time TA = inf{t > 0 : B(t) ∈A} satisfies Px{TA = 0 ª = 1. A point which is not regular is called irregular. ⋄ Remark 8.2. In the case d = 1 we have already seen that for any starting point x ∈R, almost surely a Brownian motion started in x returns to x in every interval [0, ε) with ε > 0. Hence every point is regular for any set containing it. ⋄ We already know a condition which implies that a point is regular, namely the Poincar´ e cone condition introduced in Chapter 3. Theorem 8.3. If the domain U ⊂Rd satisfies the Poincar´ e cone condition at x ∈∂U, then x is regular for the complement of U. Proof. Suppose x ∈∂U satisfies the condition, then there is an open cone V with height h > 0 and angle α > 0 in U c based at x. Then the first exit time τU for the domain satisfies Px{τU ≤t} ≥Px{B(t) ∈V ∩B(x, h)} ≥Px{B(t) ∈V } −Px{B(t) ̸∈B(x, h)}, 217 By Brownian scaling the last term equals P{B(1) ∈V }−Px{B(1) ̸∈B(x, h/ √ t)}. For t ↓0 the subtracted term goes to zero, and hence Px{τU = 0} = limt↓0 Px{τU ≤t} = P{B(1) ∈V } > 0. By Blumenthal’s zero-one law we have Px{τU = 0} = 1, in other words x is regular for U c. Remark 8.4. At the end of this chapter we will be able to improve this and give a sharp condition for a point to be regular, Wiener’s test of regularity. ⋄ Theorem 8.5 (Dirichlet Problem). Suppose U ⊂Rd is a bounded domain and ϕ be a continuous function on ∂U. Define τ = min{t ≥0: B(t) ∈∂U}, and define u: U →R by u(x) = Ex £ ϕ(B(τ)) ¤ . (a) A solution to the Dirichlet problem exists if and only if the function u is a solution to the Dirichlet problem with boundary condition ϕ. (b) u is a harmonic function on U with u(x) = ϕ(x) for all x ∈∂U and is continuous at every point x ∈∂U that is regular for the complement of U. (c) If every x ∈∂U is regular for the complement of U, then u is the unique continuous function u: U →R which is harmonic on U such that u(x) = ϕ(x) for all x ∈∂U. Proof. For the proof of (a) let v be any solution of the Dirichlet problem on U with boundary condition ϕ. Define open sets Un ↑U by Un = © x ∈U : | x−y| > 1 n for all y ∈∂U ª . Let τn be the first exit time of Un and τ the first exit time from U, which are stopping times. As ∆v(x) = 0 for all x ∈U we see from the multidimensional version of Itˆ o’s formula that v(B(t ∧τn)) = v(B(0)) + d X i=1 Z t∧τn 0 ∂v ∂xi (B(s)) dBi(s) + 1 2 d X i=1 Z t∧τn 0 ∂2v ∂x2 i (B(s)) ds . Note that ∂v/∂xi is bounded on the closure of Un, and thus everything is well-defined. The last term vanishes as ∆v(x) = 0 for all x ∈U. Taking expectations the second term on the right also vanishes, by Exercise 7.1, and we get that Ex £ v(B(t ∧τn)) ¤ = Ex £ v(B(0)) ¤ = v(x), for x ∈Un . Note that v, and hence the integrand on the left hand side, are bounded. Moreover, it is easy to check using boundedness of U and a reduction to the one-dimensional case, that τ is almost surely finite. Hence, as t ↑∞and n →∞, bounded convergence yields that the left hand side converges to Ex[v(B(τ))] = Ex[ϕ(B(τ))]. The result follows, as the right hand side depends neither on t nor on n. The harmonicity statement of (b) is included in Theorem 3.8, and u = ϕ on ∂U is obvious from the definition. 
It remains to show the continuity claim. For a regular x ∈ ∂U we now show that if Brownian motion is started at a point in U sufficiently close to x, then with high probability the Brownian motion hits U^c before leaving a given ball B(x, δ). We start by noting that, for every t > 0 and η > 0, the set

O(t, η) := {z ∈ U : Pz{τ ≤ t} > η}

is open. Indeed, if z ∈ O(t, η), then for some small s > 0 and δ > 0 and large M > 0 we have

Pz{|B(s) − z| ≤ M, B(u) ∈ U^c for some s ≤ u ≤ t} > η + δ.

By the Markov property the left hand side above can be written as

∫_{B(z,M)} Pξ{B(u) ∈ U^c for some 0 ≤ u ≤ t − s} p(s, z, ξ) dξ.

Now let ε > 0 be so small that |p(s, z, ξ) − p(s, y, ξ)| < δ/L(B(0, M)) for all |z − y| < ε and ξ ∈ Rd. Then we have

Py{τ ≤ t} ≥ Py{B(u) ∈ U^c for some s ≤ u ≤ t} > η,

hence the ball B(z, ε) is contained in O(t, η), which therefore must be open.

Given ε > 0 and δ > 0 we now choose t > 0 small enough such that, for τ′ = inf{s > 0 : B(s) ∉ B(x, δ)}, we have Pz{τ′ < t} < ε/2 for all |x − z| < δ/2. By regularity we have x ∈ O(t, 1 − ε/2), and hence we can choose 0 < θ < δ/2 to achieve B(x, θ) ⊂ O(t, 1 − ε/2). We have thus shown that

(1.1) |x − z| < θ ⇒ Pz{τ < τ′} > 1 − ε.

To complete the proof, let ε > 0 be arbitrary. Then there is a δ > 0 such that |ϕ(x) − ϕ(y)| < ε for all y ∈ ∂U with |x − y| < δ. Choose θ as in (1.1). For all z ∈ U with |z − x| < δ ∧ θ we get

|u(x) − u(z)| = |Ez[ϕ(x) − ϕ(B(τ))]| ≤ 2∥ϕ∥∞ Pz{τ′ < τ} + ε ≤ ε (2∥ϕ∥∞ + 1).

As ε > 0 can be arbitrarily small, u is continuous at x ∈ ∂U, and part (c) follows trivially from (b) and the maximum principle.

A further classical problem of partial differential equations, the Poisson problem, is related to Brownian motion in a way quite similar to the Dirichlet problem.

Definition 8.6. Let U ⊂ Rd be a bounded domain and u a continuous function on the closure of U which is twice continuously differentiable on U. Let g : U → R be continuous. Then u is said to be the solution of Poisson's problem for g if u(x) = 0 for all x ∈ ∂U and −½ Δu(x) = g(x) for all x ∈ U. ⋄

Remark 8.7. The probabilistic approach to the Poisson problem will be discussed in Exercises 8.2 and 8.3. We show that, for g bounded, the solution u of Poisson's problem for g, if it exists, equals

(1.2) u(x) = Ex[∫_0^T g(B(t)) dt] for x ∈ U,

where T := inf{t > 0 : B(t) ∉ U}. Conversely, if g is Hölder continuous and every x ∈ ∂U is regular for the complement of U, then the function (1.2) solves the Poisson problem for g. ⋄

Remark 8.8. If u solves Poisson's problem for g ≡ 1 in a domain U ⊂ Rd, then u(x) = Ex[T] is the average time it takes a Brownian motion started in x to leave the set U. ⋄
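Remark 8.8 can be made concrete with a small added sketch (not part of the original text): for the unit ball U = B(0, 1) ⊂ Rd, the function u(x) = (1 − |x|²)/d satisfies −½Δu = 1 and vanishes on ∂U, so Ex[T] = (1 − |x|²)/d. The code compares this formula with a crude Euler simulation of the exit time; step size and sample count are arbitrary choices.

```python
import numpy as np

def mean_exit_time_mc(x, n_paths=2000, dt=1e-3, rng=None):
    """Monte Carlo estimate of E_x[T], T the exit time from the unit
    ball; the Euler step biases the estimate slightly upwards."""
    if rng is None:
        rng = np.random.default_rng()
    d, total = len(x), 0.0
    for _ in range(n_paths):
        p, t = np.array(x, dtype=float), 0.0
        while np.dot(p, p) < 1.0:
            p += np.sqrt(dt) * rng.standard_normal(d)
            t += dt
        total += t
    return total / n_paths

x = np.array([0.5, 0.0, 0.0])            # starting point in R^3
print("(1 - |x|^2)/d =", (1 - np.dot(x, x)) / len(x))   # 0.25
print("Monte Carlo   =", mean_exit_time_mc(x, rng=np.random.default_rng(1)))
```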
2. The equilibrium measure

In Chapter 3 we studied the distribution of the location of the first entry of a Brownian motion into a closed set Λ, the harmonic measure. In the case of a transient (or killed) Brownian motion there is a natural counterpart, obtained by looking at the distribution of the position of the last exit from a closed set. This leads to the notion of the equilibrium measure, which we discuss and apply in this section.

To motivate the next steps we first look at a simple random walk {Xn : n ∈ N} on Zd for d ≥ 3. Let A ⊂ Zd be a bounded set; then by transience the last exit time γ = max{n ∈ N : Xn ∈ A} is finite on the event that the random walk ever hits A. Note that γ is not a stopping time. Then, for any x ∈ Zd and y ∈ A,

Px{X hits A and Xγ = y} = Σ_{k=0}^∞ Px{Xk = y, Xj ∉ A for all j > k} = Σ_{k=0}^∞ Px{Xk = y} Py{γ = 0},

and introducing the Green's function G(x, y) = Σ_{k=0}^∞ Px{Xk = y} we get, for all y ∈ A,

Px{X hits A and Xγ = y} = G(x, y) Py{γ = 0}.

This holds also, trivially, for all y ∈ Zd \ A. Summing over all y ∈ Zd gives

Px{X ever hits A} = Σ_{y∈Zd} G(x, y) Py{γ = 0}.

The formula allows us to describe the probability of ever hitting a set as a potential with respect to the measure y ↦ Py{γ = 0}, which is supported on A. Our aim in this section is to extend this to Brownian motion.

Note that the argument above relied heavily on the transience of the random walk. This is no different in the case of Brownian motion. In order to include the two-dimensional case we 'kill' the Brownian motion, either when it exits a large domain or at an independent exponential time. Note that both possibilities preserve the strong Markov property; in the case of exponential killing this is due to the lack-of-memory property of the exponential distribution.

To formally explain our setup we now suppose that {B(t) : 0 ≤ t ≤ T} is a transient Brownian motion in the sense of Chapter 3. Recall that this means that {B(t) : 0 ≤ t ≤ T} is a d-dimensional Brownian motion killed at time T, and one of the following three cases holds:

(1) d ≥ 3 and T = ∞,
(2) d ≥ 2 and T is an independent exponential time,
(3) d ≥ 2 and T is the first exit time from a bounded domain D containing 0.

We use the convention that D = Rd in cases (1) and (2). In all cases, transient Brownian motion is a Markov process and, by Proposition 3.29, its transition kernel has a density, which we denote by p*(t, x, y). Note that in cases (2) and (3) the function p*(t, x, y) is only a subprobability density because of the killing; indeed it is strictly smaller than the corresponding density without killing. The associated Green's function

G(x, y) = ∫_0^∞ p*(t, x, y) dt

is always well-defined and finite for all x ≠ y.

Theorem 8.9 (Last exit formula). Suppose {B(t) : 0 ≤ t ≤ T} is a transient Brownian motion and Λ ⊂ Rd a compact set. Let γ = sup{t ∈ (0, T] : B(t) ∈ Λ} be the last exit time from Λ, with the convention γ = 0 if the path does not hit Λ. Then there exists a finite measure ν on Λ, called the equilibrium measure, such that, for any Borel set A ⊂ Λ and x ∈ D,

Px{B(γ) ∈ A, 0 < γ ≤ T} = ∫_A G(x, y) dν(y).

Remark 8.10. Observe that the equilibrium measure is uniquely determined by the last exit formula above. The proof of Theorem 8.9 is similar to the simple calculation in the discrete case; the equilibrium measure is constructed as a limit of the measures ε^{-1} Py{0 < γ ≤ ε} dy. ⋄

Proof of Theorem 8.9. Let Uε be a uniform random variable on [0, ε], independent of the Brownian motion and the killing time. Then, for any bounded and continuous f : D → R,

Ex[f(B(γ − Uε)) 1{Uε < γ}] = ε^{-1} ∫_0^∞ Ex[f(B(t)) 1{t < γ ≤ t + ε}] dt = ε^{-1} ∫_0^∞ Ex[f(B(t)) 1{t ≤ T} P_{B(t)}{0 < γ ≤ ε}] dt.

Using the notation ψε(x) = ε^{-1} Px{0 < γ ≤ ε} this equals

∫_0^∞ Ex[(f · ψε)(B(t)) 1{t ≤ T}] dt = ∫_0^∞ ∫_D p*(t, x, y) (f · ψε)(y) dy dt = ∫_D f(y) G(x, y) ψε(y) dy.

This means that the subprobability measure ηε defined by ηε(A) = Px{B(γ − Uε) ∈ A, Uε < γ ≤ T} has the density G(x, y) ψε(y). Therefore also

(2.1) G(x, y)^{-1} dηε(y) = ψε(y) dy.

Observe now that, by continuity of the Brownian path, lim_{ε↓0} ηε = η0 in the sense of weak convergence, where the measure η0 on Λ is defined by

η0(A) = Px{B(γ) ∈ A, 0 < γ ≤ T}, for all Borel sets A ⊂ Λ.
As, for fixed x ∈ D, the function y ↦ G(x, y)^{-1} is continuous and bounded on Λ, we infer that, in the sense of weak convergence,

lim_{ε↓0} G(x, y)^{-1} dηε = G(x, y)^{-1} dη0.

By (2.1) the measures ψε(y) dy therefore converge weakly to a limit measure ν, which does not depend on x and satisfies G(x, y)^{-1} dη0(y) = dν(y) for all x ∈ D. As η0 has no atom in x we therefore obtain that dη0(y) = G(x, y) dν(y) for all x ∈ D. Integrating over any Borel set A gives the statement.

As a first application we give an estimate for the probability that Brownian motion in Rd, for d ≥ 3, hits a set contained in an annulus around x.

Corollary 8.11. Suppose {B(t) : t ≥ 0} is Brownian motion in Rd, with d ≥ 3, and Λ ⊂ B(x, R) \ B(x, r) is compact. Then

R^{2−d} ν(Λ) ≤ Px{{B(t) : t ≥ 0} ever hits Λ} ≤ r^{2−d} ν(Λ),

where ν is the equilibrium measure on Λ.

Proof. By Theorem 8.9 in the case A = Λ we have

Px{{B(t) : t ≥ 0} ever hits Λ} = ∫_Λ G(x, y) dν(y).

Recall that G(x, y) = |x − y|^{2−d} and use that R^{2−d} ≤ G(x, y) ≤ r^{2−d} for y ∈ Λ.

Theorem 8.5 makes us interested in statements claiming that the set of irregular points of a set A is small. The following fundamental result will play an important role in the next chapter.

Theorem 8.12. Suppose A ⊂ Rd, d ≥ 2, is a closed set and let A^r be the set of regular points for A. Then, for all x ∈ Rd,

Px{B(t) ∈ A \ A^r for some t > 0} = 0,

in other words, the set of irregular points is polar for Brownian motion.

For the proof of Theorem 8.12 we have to develop a tool of independent interest, the strong maximum principle. A special case of this is the following statement, from which Theorem 8.12 follows without too much effort.

Theorem 8.13. Let {B(t) : t ≥ 0} be a d-dimensional Brownian motion, and T an independent exponential time. Let Λ ⊂ Rd be a compact set and define τ = inf{t > 0 : B(t) ∈ Λ}. If, for some ϑ < 1, we have Px{τ < T} ≤ ϑ for all x ∈ Λ, then Px{τ < T} ≤ ϑ for all x ∈ Rd.

Proof of Theorem 8.12. We can write the set of irregular points of A as a countable union of compact sets

A \ A^r = ⋃_{ℓ=1}^∞ ⋃_{m=1}^∞ ⋃_{n=1}^∞ {x ∈ A ∩ B(0, m) : Px{τ(A) ≤ T(n)} ≤ 1 − 1/ℓ},

where T(n) is an independent exponential time with mean 1/n and τ(A) is the first hitting time of A. It suffices to prove that Brownian motion does not hit any fixed set in the union, so let ℓ, m, n be fixed and take T = T(n), ϑ = 1 − 1/ℓ and the compact set

Λ = {x ∈ A ∩ B(0, m) : Px{τ(A) ≤ T} ≤ ϑ}.

If x ∈ Λ, then, writing τ for the first hitting time of Λ ⊂ A,

Px{τ ≤ T} ≤ Px{τ(A) ≤ T} ≤ ϑ, for all x ∈ Λ,

and therefore, by Theorem 8.13, for all x ∈ Rd. Now suppose x ∈ Rd is the arbitrary starting point of a Brownian motion {B(t) : t ≥ 0} and let Λ(ε) = {y ∈ Rd : |y − z| ≤ ε for some z ∈ Λ}. Define τε as the first hitting time of Λ(ε). Clearly, as Λ is closed,

lim_{ε↓0} Px{τε ≤ T} = Px{τ ≤ T}.

Moreover, by the strong Markov property applied at the stopping time τε and the lack-of-memory property of exponential random variables,

Px{τ ≤ T} ≤ Px{τε ≤ T} max_{z∈Λ(ε)} Pz{τ ≤ T} ≤ Px{τε ≤ T} ϑ,

and letting ε ↓ 0 we obtain Px{τ ≤ T} ≤ Px{τ ≤ T} ϑ. As ϑ < 1 this implies Px{τ ≤ T} = 0, and as T is independent of the Brownian motion and can take arbitrarily large values with positive probability, we infer that the Brownian motion started in x never hits Λ.
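Corollary 8.11 sandwiches a hitting probability between multiples of the total equilibrium mass. In the special case where Λ is a whole sphere ∂B(0, r), the hitting probability from x with |x| > r has the classical closed form (r/|x|)^{d−2} from Chapter 3, which allows a direct numerical check. The following sketch is an added illustration, not from the original text; step size, time horizon and sample size are arbitrary, and truncating the time horizon biases the estimate slightly downwards.

```python
import numpy as np

def hits_ball(x, r, t_max=50.0, dt=1e-2, rng=None):
    """One Brownian path from x, run until t_max; True if it enters B(0, r)."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.array(x, dtype=float)
    for _ in range(int(t_max / dt)):
        p += np.sqrt(dt) * rng.standard_normal(len(p))
        if np.dot(p, p) < r * r:
            return True
    return False                          # truncation: counts as a miss

rng = np.random.default_rng(2)
x, r, n = np.array([2.0, 0.0, 0.0]), 1.0, 1000
freq = sum(hits_ball(x, r, rng=rng) for _ in range(n)) / n
print("exact (r/|x|)^(d-2) =", r / np.linalg.norm(x))   # 0.5 in d = 3
print("Monte Carlo         =", freq)
```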
The idea in the proof of Theorem 8.13 is to use the equilibrium measure ν to express Px{τ < T} as a potential: denoting the parameter of the exponential time by λ > 0,

Px{τ < T} = ∫ Gλ(x, y) dν(y),

where Gλ is the Green's function for the Brownian motion killed at time T, i.e.

Gλ(x, y) = ∫_0^∞ e^{−λt} p(t, x, y) dt.

Recall that for any fixed y the function x ↦ Gλ(x, y) is subharmonic on Rd \ {y}, by Exercise 3.11, and this implies that

Uλν(x) = ∫ Gλ(x, y) dν(y)

is subharmonic on Λ^c. If Uλν were also continuous on the closure of Λ^c, then the maximum principle in Theorem 3.5 would tell us that Uλν attains its maximum on the boundary ∂Λ, and this would prove Theorem 8.13. However, we do not know the continuity of Uλν on the closure of Λ^c, so we need a strengthening of the maximum principle.

We now let K be a kernel, i.e. a measurable function K : Rd × Rd → [0, ∞]. Suppose that x ↦ K(x, y) is subharmonic outside {y}, and that K(x, y) is a continuous and decreasing function of the distance |x − y|. For any finite measure µ on the compact set Λ, let

Uµ(x) = ∫ K(x, y) dµ(y)

be the potential of µ at x with respect to the kernel K.

Theorem 8.14 (Strong maximum principle). Under these assumptions, for any ϑ < 1, we have the equivalence

Uµ(x) ≤ ϑ for all x ∈ Λ ⟺ Uµ(x) ≤ ϑ for all x ∈ Rd.

Remark 8.15. Note that this completes the proof of Theorem 8.13, and hence of Theorem 8.12, by applying it to the special case of the kernel K = Gλ and the equilibrium measure. ⋄

The proof we present relies on a beautiful geometric lemma.

Lemma 8.16. There is a number N depending only on the dimension d such that the following holds: for every x ∈ Rd and every closed set Λ there are N nonoverlapping closed cones V1, . . . , VN with vertex x such that, if ξi is a point of Λ ∩ Vi closest to x, then any point y ∈ Λ with y ≠ x is no farther from some ξi than from x.

[Figure 1. The geometric argument in Lemma 8.16: a cone with vertex x and opening angle π/3, a closest point ξ of Λ in the cone, and an arbitrary point y ∈ Λ.]

Proof. The proof is elementary by looking at Figure 1. Let N be the number of closed cones with circular base, vertex in the origin and opening angle π/3 needed to cover Rd. Let V be a shift of such a cone with vertex in x, let ξ be a point in V ∩ Λ which is closest to x, and let y ∈ Λ ∩ V be arbitrary. The triangle with vertices x, ξ and y has angle at most π/3 at the vertex x, and hence, by the geometry of triangles, the distance from y to ξ is no larger than the distance from y to x.

Proof of Theorem 8.14. Of course, only the implication ⇒ needs proof. Take µ satisfying Uµ(x) ≤ ϑ for all x ∈ Λ. Note that, by monotone convergence,

(2.2) Uµ(x) = lim_{δ↓0} ∫_{|x−y|>δ} K(x, y) dµ(y).

Hence, for a given η > 0, by Egoroff's theorem, there exists a compact subset F ⊂ Λ such that µ(F) > µ(Λ) − η and the convergence in (2.2) is uniform on F. If we define µ1 to be the restriction of µ to F, then we can find, for every ε > 0, some δ > 0 such that

sup_{x∈F} ∫_{|x−y|≤δ} K(x, y) dµ1(y) < ε.

Now let {xn} ⊂ Rd be a sequence converging to x0 ∈ F. Then, as the kernel K is bounded on sets bounded away from the diagonal,

lim sup_{n→∞} Uµ1(xn) ≤ ∫ K(x0, y) dµ1(y) + lim sup_{n→∞} ∫_{|y−xn|≤δ} K(xn, y) dµ1(y).

We now want to compare K(xn, y) with K(ξ, y) for ξ ∈ F. Here we use Lemma 8.16 for the point x = xn and obtain ξ1, . . . , ξN ∈ F such that

K(xn, y) ≤ Σ_{i=1}^N K(ξi, y),

where we have used that K depends only on the distance of its arguments and is decreasing in it. We thus have

∫_{|y−xn|≤δ} K(xn, y) dµ1(y) ≤ Σ_{i=1}^N ∫_{|y−ξi|≤δ} K(ξi, y) dµ1(y) ≤ Nε.

As ε > 0 was arbitrary we get lim sup_{n→∞} Uµ1(xn) ≤ Uµ1(x0). As the converse statement lim inf_{n→∞} Uµ1(xn) ≥ Uµ1(x0) holds trivially by Fatou's lemma, we obtain the continuity of Uµ1 on F. Continuity of Uµ1 on F^c is obvious from the properties of the kernel, so that we have continuity of Uµ1 on all of Rd. By the maximum principle, Theorem 3.5, we infer that Uµ1(x) ≤ ϑ for all x ∈ Rd.
To complete the proof let x ∉ Λ be arbitrary, and denote its distance to Λ by ϱ. Then K(x, y) ≤ C(ϱ) for all y ∈ Λ. Therefore

Uµ(x) ≤ Uµ1(x) + η C(ϱ) ≤ ϑ + η C(ϱ),

and the result follows by letting η ↓ 0.

3. Polar sets and capacities

One of our ideas to measure the size of sets in Chapter 4 was based on the notion of capacity of the set. While this notion appeared to be useful, but maybe a bit artificial at the time, we can now understand its true meaning. This is linked to the notion of polarity, namely whether a set has a positive probability of being hit by a suitably defined random set. More precisely, we ask which sets are polar for the range of a Brownian motion {B(t) : t ≥ 0}. Recall that a Borel set A ⊂ Rd is polar for Brownian motion if, for all x,

Px{B(t) ∈ A for some t > 0} = 0.

In the case d = 1 we already know that only the empty set is polar, whereas by Corollary 2.23 points are polar for Brownian motion in all dimensions d ≥ 2. The general characterisation of polar sets requires an extension of the notion of capacity to a bigger class of kernels.

Definition 8.17. Suppose A ⊂ Rd is a Borel set and K : Rd × Rd → [0, ∞] is a kernel. Then the K-energy of a measure µ is defined to be

IK(µ) = ∫∫ K(x, y) dµ(x) dµ(y),

and the K-capacity of A is defined as

CapK(A) = [inf{IK(µ) : µ a probability measure on A}]^{-1}.

Recall that the α-energy of a measure and the Riesz α-capacity Capα of a set defined in Chapter 4 correspond to the kernel K(x, y) = |x − y|^{−α}. ⋄

Remark 8.18. In most of our applications the kernels are of the form K(x, y) = f(|x − y|) for some decreasing function f : [0, ∞) → [0, ∞]. In this case we simply write If instead of IK and call this the f-energy. We also write Capf instead of CapK and call this the f-capacity. ⋄

Theorem 8.19 (Kakutani's theorem). A closed set Λ is polar for d-dimensional Brownian motion if and only if it has zero f-capacity for the radial potential f defined by

f(ε) := |log(1/ε)| if d = 2, and f(ε) := ε^{2−d} if d ≥ 3.

Remark 8.20. We call the kernel K(x, y) = f(|x − y|), where f is the radial potential, the potential kernel. Up to constants, it agrees with the Green kernel in d ≥ 3. ⋄

Instead of proving Kakutani's theorem directly, we aim for a stronger result which gives, for compact sets Λ ⊂ Rd, a quantitative estimate of

P0{∃ 0 < t < T such that B(t) ∈ Λ}

in terms of capacities. However, even if d = 3 and T = ∞, one cannot expect that

P0{∃ t > 0 such that B(t) ∈ Λ} ≍ Capf(Λ)

for the radial potential f in Theorem 8.19. Observe, for example, that the left hand side depends strongly on the starting point of the Brownian motion, whereas the right hand side is translation invariant. Similarly, if the Brownian motion starts at the origin, the left hand side is invariant under scaling, i.e. remains the same when Λ is replaced by λΛ for any λ > 0, whereas the right hand side is not. For a direct comparison of hitting probabilities and capacities it is therefore necessary to use a capacity with respect to a scale-invariant modification of the Green kernel G, called the Martin kernel, which we now introduce.

We again look at all transient Brownian motions; recall the setup from Section 3.2. All we need is the following property, which is easy to verify directly from the form of the Green's function G in case (1). For the other two cases we give a conceptual proof.

Proposition 8.21.
For every compact set Λ ⊂ D ⊂ Rd there exists a constant C depending only on Λ such that, for all x, y ∈ Λ and sufficiently small ε > 0,

(3.1) sup_{|x−z|<ε} ε^{−d} ∫_{B(y,ε)} G(z, ξ)/G(x, y) dξ ≤ C.

Proof. Fix a compact set Λ ⊂ D and ε > 0 smaller than one tenth of the distance between Λ and D^c, and let x, y ∈ Λ. We abbreviate

hε(x, y) = ∫_{B(y,ε)} G(x, ξ) dξ.

We first assume that |x − y| > 4ε and show that in this case

(3.2) sup_{|x−x̃|<ε} sup_{|y−ỹ|<ε} G(x̃, ỹ) ≤ C G(x, y),

from which our claim easily follows. The function G(·, y) is harmonic on D \ {y}. Hence, with τ = inf{0 < t ≤ T : B(t) ∉ B(x, 2ε)} we note that, for all x̃ ∈ B(x, ε),

G(x̃, y) = E_{x̃}[G(B(τ), y), τ ≤ T].

This is the average of G(·, y) with respect to the harmonic measure µ_{∂B(x,2ε)}(x̃, ·). This measure has a density with respect to the uniform measure on the sphere ∂B(x, 2ε), which is bounded away from zero and infinity by absolute constants. In the cases (1) and (3) this can be seen directly from Poisson's formula. Therefore G(x̃, y) ≤ C G(x, y), and a repetition of this argument, now introducing ỹ ∈ B(y, ε) and fixing x̃, gives the claim.

Now look at the case |x − y| ≤ 4ε. We first observe that, for some constant c > 0, G(x, y) ≥ c ε^{2−d}, which is obvious in all cases. Now let z ∈ B(x, ε). Decomposing the Brownian path at its first exit time τ from B(x, 8ε) and denoting the uniform distribution on ∂B(x, 8ε) by ϖ, we obtain, for constants C1, C2 > 0,

hε(z, y) ≤ E[τ ∧ T] + Ez[hε(B(τ), y), τ ≤ T] ≤ C1 ε² + C2 ε^d ∫ G(w, y) dϖ(w),

where we have used (3.2). A similar decomposition gives ∫ G(w, y) dϖ(w) ≤ C3 G(x, y), and putting all facts together gives hε(z, y) ≤ C4 ε^d G(x, y), as required.

Definition 8.22. We define the Martin kernel M : D × D → [0, ∞] by

M(x, y) := G(x, y)/G(0, y) for x ≠ y,

and otherwise by M(x, x) = ∞. ⋄

The following theorem shows that (in all three cases of transient Brownian motion) Martin capacity is indeed a good estimate of the hitting probability.

Theorem 8.23. Let {B(t) : 0 ≤ t ≤ T} be a transient Brownian motion and A ⊂ D closed. Then

(3.3) ½ CapM(A) ≤ P0{∃ 0 < t ≤ T such that B(t) ∈ A} ≤ CapM(A).

Proof. Let µ be the (possibly defective) distribution of B(τ) for the stopping time τ = inf{0 < t ≤ T : B(t) ∈ A}. Note that the total mass of µ is

(3.4) µ(A) = P0{τ ≤ T} = P0{B(t) ∈ A for some 0 < t ≤ T}.

The idea for the upper bound is that if the harmonic measure µ is nontrivial, it is an obvious candidate for a measure of finite M-energy. Recall from the definition of the Green's function that, for any y ∈ D,

(3.5) E0 ∫_0^T 1{|B(t) − y| < ε} dt = ∫_{B(y,ε)} G(0, z) dz.

By the strong Markov property applied to the first hitting time τ of A,

P0{|B(t) − y| < ε and t ≤ T} ≥ P0{|B(t) − y| < ε and τ ≤ t ≤ T} = E0[1{τ ≤ t ≤ T} P_{B(τ)}{|B(t − τ) − y| < ε}].

Integrating over t and using Fubini's theorem yields

E0 ∫_0^T 1{|B(t) − y| < ε} dt ≥ ∫_A ∫_{B(y,ε)} G(x, z) dz dµ(x).

Combining this with (3.5) we infer that

∫_{B(y,ε)} ∫_A G(x, z) dµ(x) dz ≤ ∫_{B(y,ε)} G(0, z) dz.

Dividing by L(B(0, ε)) and letting ε ↓ 0 we obtain

∫_A G(x, y) dµ(x) ≤ G(0, y),

i.e. ∫_A M(x, y) dµ(x) ≤ 1 for all y ∈ D. Therefore IM(µ) ≤ µ(A), and thus, if we use µ/µ(A) as a probability measure, we get

CapM(A) ≥ [IM(µ/µ(A))]^{-1} ≥ µ(A),

which by (3.4) yields the upper bound on the probability of hitting A.

To obtain a lower bound for this probability, a second moment estimate is used. It is easily seen that the Martin capacity of A is the supremum of the capacities of its compact subsets, so we may assume that A is a compact subset of the domain D \ {0}.
We take ε > 0 smaller than half the distance of A to D^c ∪ {0}. For x, y ∈ A let

hε(x, y) = ∫_{B(y,ε)} G(x, ξ) dξ

denote the expected time which a Brownian motion started in x spends in the ball B(y, ε). Also define

h*ε(x, y) = sup_{|x−z|<ε} ∫_{B(y,ε)} G(z, ξ) dξ.

Given a probability measure ν on A and ε > 0, consider the random variable

Zε = ∫_A ∫_0^T (1{B(t) ∈ B(y, ε)}/hε(0, y)) dt dν(y).

Clearly E0 Zε = 1. By symmetry, the second moment of Zε can be written as

(3.6) E0 Zε² = 2 E0 ∫_0^T ds ∫_s^T dt ∫∫ (1{B(s) ∈ B(x, ε), B(t) ∈ B(y, ε)}/(hε(0, x) hε(0, y))) dν(x) dν(y)
≤ 2 E0 ∫∫ ∫_0^T ds 1{B(s) ∈ B(x, ε)} (h*ε(x, y)/(hε(0, x) hε(0, y))) dν(x) dν(y)
= 2 ∫∫ (h*ε(x, y)/hε(0, y)) dν(x) dν(y).

Observe that, for all fixed x, y ∈ A, we have lim_{ε↓0} L(B(0, ε))^{-1} h*ε(x, y) = G(x, y) and lim_{ε↓0} L(B(0, ε))^{-1} hε(0, y) = G(0, y). Moreover, by (3.1) and the fact that G(0, y) is bounded away from zero and infinity for y ∈ A, for some constant C,

h*ε(x, y)/hε(0, y) ≤ C G(x, y)/G(0, y) = C M(x, y).

Hence, if ν is a measure of finite energy, we can use dominated convergence and obtain

(3.7) lim_{ε↓0} E Zε² ≤ 2 ∫∫ (G(x, y)/G(0, y)) dν(x) dν(y) = 2 IM(ν).

Clearly, the hitting probability P{∃ t > 0, y ∈ A such that B(t) ∈ B(y, ε)} is at least

P{Zε > 0} ≥ (E Zε)²/E Zε² = (E Zε²)^{-1},

using the Paley–Zygmund inequality and E0 Zε = 1. Compactness of A, together with transience and continuity of Brownian motion, implies that if the Brownian path visits every ε-neighbourhood of the compact set A then it intersects A itself. Therefore, by (3.7),

P{∃ t > 0 such that B(t) ∈ A} ≥ lim_{ε↓0} (E Zε²)^{-1} ≥ 1/(2 IM(ν)).

Since this is true for all probability measures ν on A, we get the desired conclusion.

Remark 8.24. The right-hand inequality in (3.3) can be an equality: look at the case d = 3, T = ∞, our case (1), and take a sphere in Rd centred at the origin, which has hitting probability and capacity both equal to one. Exercise 8.5 shows that the constant ½ on the left cannot be increased. ⋄

Proof of Theorem 8.19. It suffices, by taking countable unions, to consider compact sets Λ which have positive distance from the origin. First consider the case d ≥ 3. Then G(0, x) is bounded away from zero and infinity on Λ. Hence the set Λ is polar if and only if its f-capacity vanishes, where f(ε) = ε^{2−d}.

In the case d = 2 we choose a ball B(0, R) containing Λ. By Exercise 3.13 the Green's function for Brownian motion stopped upon leaving B(0, R) is given, for x, y ∈ B(0, R), by

G^{(R)}(x, y) = −(1/π) log|x/R − y/R| + (1/π) log| x/|x| − |x| y R^{−2} | if x ≠ 0, and G^{(R)}(0, y) = −(1/π) log|y/R|.

Again G^{(R)}(0, y) is bounded over all y ∈ Λ, and so is the second summand of G^{(R)}(x, y). Hence only the contribution from −log|x − y| decides about finiteness of the energy of a probability measure. Therefore any measure with finite Martin energy has finite f-energy for f(ε) = log(1/ε), and vice versa. This completes the proof.

The estimates in Theorem 8.23 are valid beyond the Brownian motion case. The following proposition, which has a proof very similar to that of Theorem 8.23, shows that one has an analogous result in a discrete setup. We will see a surprising application of this in Chapter 9.

Proposition 8.25. Let {Xn : n ∈ N} be a transient Markov chain on a countable state space S, and set

G(x, y) = Ex[Σ_{n=0}^∞ 1{y}(Xn)] and M(x, y) = G(x, y)/G(ρ, y).

Then, for any initial state ρ and any subset Λ of S,

½ CapM(Λ) ≤ Pρ{{Xn : n ∈ N} hits Λ} ≤ CapM(Λ).

Proof. To prove the right-hand inequality, we may assume that the hitting probability is positive.
Let τ = inf{n : Xn ∈ Λ} and let ν be the measure ν(A) = Pρ{τ < ∞ and Xτ ∈ A}. In general, ν is a sub-probability measure, as τ may be infinite. By the Markov property, for y ∈ Λ,

∫_Λ G(x, y) dν(x) = Σ_{x∈Λ} Pρ{Xτ = x} G(x, y) = G(ρ, y),

whence ∫_Λ M(x, y) dν(x) = 1. Therefore IM(ν) = ν(Λ) and IM(ν/ν(Λ)) = [ν(Λ)]^{-1}; consequently, since ν/ν(Λ) is a probability measure,

CapM(Λ) ≥ ν(Λ) = Pρ{{Xn} hits Λ}.

This yields one inequality. Note that the Markov property was used here.

For the reverse inequality, we use the second moment method. Given a probability measure µ on Λ, set

Z = ∫_Λ Σ_{n=0}^∞ 1{y}(Xn) dµ(y)/G(ρ, y).

Then Eρ[Z] = 1, and the second moment satisfies

Eρ[Z²] = Eρ ∫_Λ ∫_Λ Σ_{m=0}^∞ Σ_{n=0}^∞ 1{x}(Xm) 1{y}(Xn) dµ(x) dµ(y)/(G(ρ, x) G(ρ, y)) ≤ 2 Eρ ∫_Λ ∫_Λ Σ_{m≤n} 1{x}(Xm) 1{y}(Xn) dµ(x) dµ(y)/(G(ρ, x) G(ρ, y)).

Observe that

Σ_{m=0}^∞ Eρ Σ_{n=m}^∞ 1{x}(Xm) 1{y}(Xn) = Σ_{m=0}^∞ Pρ{Xm = x} G(x, y) = G(ρ, x) G(x, y).

Hence

Eρ[Z²] ≤ 2 ∫_Λ ∫_Λ (G(x, y)/G(ρ, y)) dµ(x) dµ(y) = 2 IM(µ),

and therefore

Pρ{{Xn} hits Λ} ≥ Pρ{Z > 0} ≥ (Eρ[Z])²/Eρ[Z²] ≥ 1/(2 IM(µ)).

We conclude that Pρ{{Xn} hits Λ} ≥ ½ CapM(Λ).

Recall from Corollary 8.11 that we have already seen estimates for the probability that Brownian motion hits a set, which were given in terms of the total mass of the equilibrium measure. The following theorem reveals the relationship between the equilibrium measure and capacities.

Theorem 8.26. Let Λ ⊂ Rd be a nonpolar compact set and G : Rd × Rd → [0, ∞] the Green's function of a transient Brownian motion. Then

CapG(Λ) = [IG(ν/ν(Λ))]^{-1} = ν(Λ),

where ν is the equilibrium measure of Λ.

Remark 8.27. If Λ is polar, we have CapG(Λ) = 0 = ν(Λ). Otherwise, the theorem shows that the probability measure ν/ν(Λ) minimises the G-energy over the set of all probability measures on Λ. ⋄

For the proof we first note that, for the Green's function G of a transient Brownian motion, the G-energy of a signed measure is always nonnegative.

Lemma 8.28. Let µ, ν be finite measures on Rd and G the Green's function of a transient Brownian motion. Then, for σ = µ − ν, we have

∫∫ G(x, y) dσ(x) dσ(y) ≥ 0.

Proof. From the Chapman–Kolmogorov equation we have

p*(t, x, y) = ∫ p*(t/2, x, z) p*(t/2, z, y) dz.

Integrating with respect to dσ(x) dσ(y) and using the symmetry of p*(t, ·, ·) gives

∫∫ p*(t, x, y) dσ(x) dσ(y) = ∫ (∫ p*(t/2, x, z) dσ(x))² dz ≥ 0.

Integrating now over time gives the result.

Proof of Theorem 8.26. Let ν be the equilibrium measure and define ϕ(x) = ∫ G(x, y) dν(y). By the last exit formula, Theorem 8.9, ϕ(x) is the probability that a Brownian motion started at x hits Λ before time T. Hence ϕ(x) ≤ 1 for every x and, if x is a regular point of Λ, then ϕ(x) = 1. Also by the last exit formula, because irregular points are never hit by a Brownian motion, see Theorem 8.12, we have ϕ(x) = 1 for ν-almost every point. This implies that

IG(ν) = ∫_Λ ϕ(x) dν(x) = ν(Λ).

Suppose now that µ is an arbitrary measure on Λ with µ(Λ) = ν(Λ) and assume that µ has finite energy. Note that µ does not charge the set of irregular points, as otherwise this set would have positive capacity and would be nonpolar by Theorem 8.23. Then, starting with Lemma 8.28 and using also the symmetry of G,

0 ≤ ∫∫ G(x, y) d(ν − µ)(x) d(ν − µ)(y) = IG(µ) + IG(ν) − 2 ∫∫ G(x, y) dν(x) dµ(y) = IG(µ) + ν(Λ) − 2 ∫_Λ ϕ(y) dµ(y) ≤ IG(µ) − ν(Λ),

using in the last step that ϕ(y) = 1 on the set of regular points, and thus µ-almost everywhere. This implies that IG(µ) ≥ ν(Λ) = IG(ν), so that ν/ν(Λ) is a minimiser in the definition of CapG. This completes the proof.
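Proposition 8.25 can be verified numerically, since for a finite chain with killing everything reduces to linear algebra: with Q the substochastic transition matrix, the Green's function is G = (I − Q)^{-1} and hitting probabilities solve a linear system. The sketch below is an added illustration, not from the original text; the chain (an upward-biased walk on {0, . . . , 39}, reflected at 0 and killed at 40), the starting state and the two-point target set Λ are arbitrary choices, and the infimum defining CapM is approximated by a grid search over measures µ = (p, 1 − p) on Λ.

```python
import numpy as np

# Transient chain: walk on {0,...,39}, up w.p. 2/3, down w.p. 1/3,
# reflected at 0 and killed upon reaching 40.
N, up = 40, 2.0 / 3.0
Q = np.zeros((N, N))
for i in range(N):
    if i + 1 < N:
        Q[i, i + 1] = up                 # mass moving to 40 is killed
    Q[i, max(i - 1, 0)] += 1.0 - up

G = np.linalg.inv(np.eye(N) - Q)         # Green's function: expected visits
rho, Lam = 20, [5, 10]                   # start rho, target set below the drift
M = G / G[rho, :]                        # Martin kernel M(x,y) = G(x,y)/G(rho,y)

# Martin capacity of Lam via a grid search over mu = (p, 1-p).
K = M[np.ix_(Lam, Lam)]
energy = min(np.array([p, 1 - p]) @ K @ np.array([p, 1 - p])
             for p in np.linspace(0, 1, 1001))
cap = 1.0 / energy

# Exact hitting probability: h = 1 on Lam, (I - Q) h = 0 elsewhere.
A, b = np.eye(N) - Q, np.zeros(N)
for y in Lam:
    A[y, :] = 0.0; A[y, y] = 1.0; b[y] = 1.0
hit = np.linalg.solve(A, b)[rho]

print(f"{0.5 * cap:.3e} <= {hit:.3e} <= {cap:.3e}")
```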
4. Wiener's test of regularity

In this section we concentrate on d ≥ 3 and find a sharp criterion for a point to be regular for a closed set Λ ⊂ Rd. This criterion is given in terms of the capacities of the intersections of Λ with annuli, or shells, concentric about x. To fix some notation let k > ℓ be integers and x ∈ Rd, and define the annulus

Ax(k, ℓ) := {y ∈ Rd : 2^{−k} ≤ |y − x| ≤ 2^{−ℓ}}.

Abbreviate Ax(k) := Ax(k + 1, k) and let Λ^k_x := Λ ∩ Ax(k). We aim to prove the following result.

Theorem 8.29 (Wiener's test). A point x ∈ Rd is regular for the closed set Λ ⊂ Rd, d ≥ 3, if and only if

Σ_{k=1}^∞ 2^{k(d−2)} Cap_{d−2}(Λ^k_x) = ∞,

where Cap_{d−2} is the Riesz (d − 2)-capacity introduced in Definition 4.3.

In the proof we may assume, without loss of generality, that x = 0. We start with an easy observation.

Lemma 8.30. There exists a constant c > 0, which depends only on the dimension d, such that, for all k, we have

c 2^{k(d−2)} Cap_{d−2}(Λ^k_0) ≤ CapM(Λ^k_0) ≤ c 2^{(k+1)(d−2)} Cap_{d−2}(Λ^k_0).

Proof. Observe that z ∈ A0(k) implies 2^{−k−1} ≤ |z| ≤ 2^{−k}; the statement follows by estimating the denominator in the Martin kernel M.

The crucial step in the proof is a quantitative estimate, from which Wiener's test follows quickly.

Lemma 8.31. There exists a constant c > 0, depending only on the dimension d, such that

1 − exp(−c Σ_{j=ℓ}^{k−1} CapM(Λ^j_0)) ≤ P0{{B(t) : t ≥ 0} hits Λ ∩ A0(k, ℓ)} ≤ Σ_{j=ℓ}^{k−1} CapM(Λ^j_0).

Proof. For the upper bound we look at the event D(j) that a Brownian motion started in 0 hits Λ^j_0. Then, using Theorem 8.23, we get P0(D(j)) ≤ CapM(Λ^j_0). Therefore

P0{{B(t) : t ≥ 0} hits Λ ∩ A0(k, ℓ)} ≤ P0(⋃_{j=ℓ}^{k−1} D(j)) ≤ Σ_{j=ℓ}^{k−1} CapM(Λ^j_0),

and this completes the proof of the upper bound.

For the lower bound we look at the event E(z, j) that a Brownian motion started in some point z ∈ ∂B(0, 2^{−j}) and stopped upon hitting ∂B(0, 2^{−j+4}) hits Λ^{j−2}_0. Again we use either Theorem 8.23, or Corollary 8.11 in conjunction with Theorem 8.26, to get, for constants c1, c2 > 0 depending only on the dimension d,

Pz{{B(t) : t ≥ 0} ever hits Λ^{j−2}_0} ≥ c1 (2^{−(j−1)} − 2^{−j})^{2−d} Cap_{d−2}(Λ^{j−2}_0),

and, for any y ∈ ∂B(0, 2^{−j+4}),

Py{{B(t) : t ≥ 0} ever hits Λ^{j−2}_0} ≤ c2 (2^{−(j−4)} − 2^{−(j−2)})^{2−d} Cap_{d−2}(Λ^{j−2}_0).

Therefore, for a constant c > 0 depending only on the dimension d,

P(E(z, j)) ≥ Pz{{B(t)} ever hits Λ^{j−2}_0} − max_{y∈∂B(0,2^{−j+4})} Py{{B(t)} ever hits Λ^{j−2}_0} ≥ c 2^{j(d−2)} Cap_{d−2}(Λ^{j−2}_0).

Now divide {ℓ, . . . , k − 1} into four subsets such that each subset I satisfies |i − j| ≥ 4 for all i ≠ j ∈ I. Choose a subset I which satisfies

(4.1) Σ_{j∈I} 2^{j(d−2)} Cap_{d−2}(Λ^j_0) ≥ ¼ Σ_{j=ℓ}^{k−1} 2^{j(d−2)} Cap_{d−2}(Λ^j_0).

With τj = inf{t ≥ 0 : |B(t)| = 2^{−j}} we now have

P0{{B(t) : t ≥ 0} avoids Λ ∩ A0(k, ℓ)} ≤ P0(⋂_{j∈I} E(B(τj), j)^c) ≤ Π_{j∈I} sup_{z∈∂B(0,2^{−j})} P(E(z, j)^c) ≤ Π_{j∈I} (1 − c 2^{j(d−2)} Cap_{d−2}(Λ^{j−2}_0)) ≤ exp(−c Σ_{j∈I} 2^{j(d−2)} Cap_{d−2}(Λ^j_0)),

using the estimate log(1 − x) ≤ −x in the last step. The lower bound, with c replaced by c/4, now follows from (4.1) and Lemma 8.30 when we pass to the complement.

Proof of Wiener's test. Suppose Σ_{k=1}^∞ 2^{k(d−2)} Cap_{d−2}(Λ^k_0) = ∞. Then, by Lemma 8.31 and Lemma 8.30, for all k ∈ N,

P0{{B(t) : t ≥ 0} hits Λ ∩ B(0, 2^{−k})} ≥ 1 − exp(−c Σ_{j=k}^∞ CapM(Λ^j_0)) = 1.

Since points are polar, for any ε, δ > 0 there exists a large k such that

P0{{B(t) : t ≥ ε} hits B(0, 2^{−k})} < δ.

Combining these two facts we get, for the first hitting time τ = τ(Λ) of the set Λ,

P0{τ < ε} ≥ P0{{B(t) : t ≥ 0} hits Λ ∩ B(0, 2^{−k})} − P0{{B(t) : t ≥ ε} hits B(0, 2^{−k})} ≥ 1 − δ.

As ε, δ > 0 were arbitrary, the point 0 must be regular.
Now suppose that Σ_{k=1}^∞ 2^{k(d−2)} Cap_{d−2}(Λ^k_0) < ∞. Then, by Theorem 8.23 and Lemma 8.30,

Σ_{k=1}^∞ P0{{B(t) : t ≥ 0} hits Λ ∩ A0(k)} ≤ Σ_{k=1}^∞ CapM(Λ^k_0) < ∞.

Hence, by the Borel–Cantelli lemma, almost surely there exists a ball B(0, ε) such that {B(t) : t ≥ 0} does not hit B(0, ε) ∩ Λ. By continuity we therefore must have inf{t > 0 : B(t) ∈ Λ} > 0 almost surely, hence the point 0 is irregular.

[Figure 2. Lebesgue's thorn: the domain G ⊂ (−1, 1)³ with a cusp at the origin.]

Example 8.32. The following example is due to Lebesgue [Le24] and is usually called Lebesgue's thorn. For any α > 1 we define an open subset G ⊂ (−1, 1)³ with a cusp at zero by

G := {(x1, x2, x3) ∈ (−1, 1)³ : √(x2² + x3²) > f(x1) if x1 ≥ 0},

with f(x) = x^α, see Figure 2. Now the origin is an irregular point for Λ = G^c. For the proof it suffices, by Wiener's test, to check that the series Σ_{k=1}^∞ 2^k C1(Λ^k_0) converges. Note that, for any probability measure µ on Λ^k_0, we have I1(µ) ≥ 2^{αk} and hence

Σ_{k=1}^∞ 2^k C1(Λ^k_0) ≤ Σ_{k=1}^∞ 2^{k(1−α)} < ∞,

verifying Wiener's test of irregularity. ⋄

Exercises

Exercise 8.1 (∗). Let U ⊂ Rd be a domain and u : U → R subharmonic. Use Itô's formula to show that, for any ball B(x, r) ⊂ U,

u(x) ≤ (1/L(B(x, r))) ∫_{B(x,r)} u(y) dy.

Exercise 8.2 (∗). Suppose g is bounded and u a solution of Poisson's problem for g. Show that this solution has the form

u(x) = Ex[∫_0^T g(B(t)) dt], for x ∈ U,

where T := inf{t > 0 : B(t) ∉ U}. Observe that this implies that the solution, if it exists, is always uniquely determined.

Exercise 8.3. Let

u(x) = Ex[∫_0^T g(B(t)) dt], for x ∈ U,

where T := inf{t > 0 : B(t) ∉ U}. Show that,

• if g is Hölder continuous, then the function u : U → R solves −½ Δu = g;
• if every x ∈ ∂U is regular for the complement of U, then u(x) = 0 for all x ∈ ∂U.

Exercise 8.4. Suppose Λ ⊂ Rd, for d ≥ 3, is compact and γ the last exit time from Λ, defined as in Theorem 8.9. Show that

lim_{x→∞} Px{B(γ) ∈ A | γ > 0} = ν(A)/ν(Λ).

Exercise 8.5. For d ≥ 3 consider the spherical shell ΛR = {x ∈ Rd : 1 ≤ |x| ≤ R}. Show that lim_{R→∞} CapM(ΛR) = 2.

Exercise 8.6. Let {X(a) : a ≥ 0} be a stable subordinator of index ½ and

K(s, t) := (t − s)^{−1/2} for 0 < s ≤ t, and K(s, t) := 0 for s > t > 0.

Let M(s, t) = K(s, t)/K(0, t); show that, for any subset Λ of [0, ∞),

½ CapM(Λ) ≤ P0{{X(a) : a ≥ 0} hits Λ} ≤ CapM(Λ).

Notes and Comments

The proof of the last exit formula is taken from Chung's beautiful paper [Ch73], but the existence of an energy minimising measure is a much older fact. For the case of the Newtonian potential (d = 3) it was determined by Gauss as the charge distribution on the surface of a conductor which minimises the electrostatic energy. Classically, the equilibrium measure is defined as the measure ν on Λ that maximises ν(Λ) among those with potential bounded by one. Then ν/ν(Λ) is the energy minimising probability measure, see [Ca67]. Rigorous results and extensions to general Riesz potentials are due to Frostman in his ground-breaking thesis [Fr35]. Our discussion of the strong maximum principle follows Carleson [Ca67]. Bass [Ba95] describes an alternative approach.

Characterising the polar sets for Brownian motion is related to the following question: for which sets A ⊂ Rd are there nontrivial bounded harmonic functions on Rd \ A? Such sets are called removable for bounded harmonic functions. Consider the simplest case first. When A is the empty set, it is obviously polar, and by Liouville's theorem there is no nonconstant bounded harmonic function on its complement.
Nevanlinna (about 1920) proved that for d ≥ 3 there exist nonconstant bounded harmonic functions on Rd \ A if and only if CapG(A) > 0, where G(x, y) = f(|x − y|) for the radial potential f as before. Just to make this result more plausible, note that the function h(x) = ∫ G(x, y) µ(dy), where µ is a measure on A of finite G-energy, would make a good candidate for such a function, see Theorem 3.34. Loosely speaking, G-capacity measures whether a set A is big enough to hide a pole of a harmonic function inside. Recall from Theorem 4.36 that dim A > d − 2 implies existence of such functions, and dim A < d − 2 implies nonexistence. Kakutani (1944) showed that there exist nontrivial bounded harmonic functions on Rd \ A if and only if A is polar for Brownian motion. Kakutani's theorem is proved in [Ka44a].

The precise hitting estimates we give are fairly recent; our proof is a variant of the original proof by Benjamini, Pemantle and Peres in [BPP95]. Proposition 8.25 goes back to the same paper.

An interesting question is which subsets of a compact set are charged by the harmonic measure µA. Clearly µA does not charge polar sets, and in particular, in d ≥ 3, we have µA(B) = 0 for all Borel sets with dim(B) < d − 2. In the plane, by a famous theorem of Makarov, see [Ma85], we have that

• any set B of dimension < 1 has µA(B) = 0,
• there is a set S ⊂ A with dim S = 1 such that µA(S^c) = 0.

However, the outer boundary, which supports the harmonic measure, may have a dimension much bigger than one. An interesting question arising in the context of self-avoiding curves asks for the dimension of the outer boundary of the image B[0, 1] of a Brownian motion. Based on scaling arguments from polymer physics, Benoit Mandelbrot conjectured in 1982 that this set should have fractal dimension 4/3. Bishop, Jones, Pemantle, and Peres [BJPP97] showed that the outer boundary has dimension > 1. In 2001 Mandelbrot's conjecture was finally proved by Lawler, Schramm and Werner [LSW01].

CHAPTER 9

Intersections and self-intersections of Brownian paths

In this chapter we study multiple points of d-dimensional Brownian motion. We shall see, for example, in which dimensions the Brownian path has double points and explore how many double points there are. This chapter also contains some of the highlights of the book: a proof that planar Brownian motion has points of infinite multiplicity, the intersection equivalence of Brownian motion and percolation limit sets, and the surprising dimension-doubling theorem of Kaufman.

1. Intersection of paths: existence and Hausdorff dimension

1.1. Existence of intersections. Suppose that {B1(t) : t ≥ 0} and {B2(t) : t ≥ 0} are two independent d-dimensional Brownian motions started in arbitrary points. The question we ask in this section is in which dimensions the ranges, or paths, of the two motions have a nontrivial intersection, in other words whether there exist times t1, t2 > 0 such that B1(t1) = B2(t2). As this question is trivial if d = 1 we assume d ≥ 2 throughout this section.

We have developed the tools to decide this question in Chapter 4 and Chapter 8. Keeping the path {B1(t) : t ≥ 0} fixed, we have to decide whether it is a polar set for the second Brownian motion. By Kakutani's theorem, Theorem 8.19, this question depends on its capacity with respect to the potential kernel. As the capacity is again related to Hausdorff measure and dimension, the results of Chapter 4 are crucial in the proof of the following result.

Theorem 9.1.
(a) For d ≥ 4, almost surely, two independent Brownian paths in Rd have an empty intersection, except for a possible common starting point.
(b) For d ≤ 3, almost surely, the intersection S of two independent Brownian paths in Rd is nontrivial, i.e. contains points other than a possible common starting point.

Remark 9.2. In the case d ≤ 3, if the Brownian paths are started in the same point, then almost surely the paths intersect before any positive time t > 0, see Exercise 9.1 (a). ⋄

Proof of (a). Note that it suffices to look at one Brownian motion and show that its path is, almost surely, a set of capacity zero with respect to the potential kernel. If d ≥ 4, the capacity with respect to the potential kernel is a multiple of the Riesz (d − 2)-capacity. By Theorem 4.27 this capacity is zero for sets of finite (d − 2)-dimensional Hausdorff measure. Now note that if d ≥ 5 the dimension of a Brownian path is two, and hence strictly smaller than d − 2, so that the (d − 2)-dimensional Hausdorff measure is zero, which shows that the capacity must be zero. If d = 4 the situation is only marginally more complicated: although the dimension of the Brownian path is 2 = d − 2 and the simple argument above does not apply, we know from (1.1) in Chapter 4 that H²(B[0, 1]) < ∞ almost surely, which implies that Cap2(B[0, 1]) = 0 by Theorem 4.27. This implies that an independent Brownian motion almost surely does not hit any of the segments B[n, n + 1], and therefore avoids the path entirely.

Proof of (b). If d = 3, the capacity with respect to the potential kernel is a multiple of the Riesz 1-capacity. As the Hausdorff dimension of a path is two, this capacity is positive by Theorem 4.36. Therefore two Brownian paths in d = 3 intersect with positive probability. Suppose now the two Brownian motions start in different points; we may assume that one starting point is the origin and denote the other one by x. By rotational invariance, the probability that the paths do not intersect depends only on |x|, and by Brownian scaling we see that it is completely independent of the choice of x ≠ 0. Denote this probability by q and, given any ε > 0, choose a large time t such that

P{B1(t1) ≠ B2(t2) for all 0 < t1, t2 ≤ t} ≤ q + ε.

Then, using the Markov property,

q ≤ P{B1(t1) ≠ B2(t2) for all 0 < t1, t2 ≤ t} P{B1(t1) ≠ B2(t2) for all t1, t2 > t} ≤ (q + ε) q.

As ε > 0 was arbitrary, we get q ≤ q², and as we know that q < 1 we obtain that q = 0. This shows that two Brownian paths started in different points intersect almost surely. If they start in the same point, then by the Markov property

P{B1(t1) ≠ B2(t2) for all t1, t2 > 0} = lim_{t↓0} P{B1(t1) ≠ B2(t2) for all t1, t2 > t} = 0,

as required to complete the argument in the case d = 3. A path in d ≤ 2 is the projection of a three-dimensional path onto a lower-dimensional subspace; hence, if two paths in d = 3 intersect almost surely, then so do two paths in d ≤ 2.
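The dichotomy of Theorem 9.1 can be glimpsed in a discrete analogue (an added illustrative sketch, not from the original text): two independent simple random walks on Zd. The boundary dimension behaves differently for walks than for Brownian paths, so the simulation deliberately contrasts only d = 3 with d = 5; the starting points, walk lengths and trial counts are arbitrary choices.

```python
import numpy as np

def range_of_walk(d, n_steps, start, rng):
    """Range (set of visited lattice points) of a simple random walk on Z^d."""
    pos, visited = np.array(start, dtype=int), set()
    for s in rng.integers(0, 2 * d, size=n_steps):
        pos[s % d] += 1 if s < d else -1   # signed coordinate direction
        visited.add(tuple(pos))
    return visited

rng = np.random.default_rng(3)
for d in (3, 5):
    origin, other = [0] * d, [10] + [0] * (d - 1)
    meets = sum(
        bool(range_of_walk(d, 5000, origin, rng)
             & range_of_walk(d, 5000, other, rng))
        for _ in range(50))
    print(f"d = {d}: ranges intersected in {meets}/50 trials")
```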
It is equally natural to ask, for integers p > 2 and d ≤ 3, whether a collection of p independent d-dimensional Brownian motions {B1(t) : t ≥ 0}, . . . , {Bp(t) : t ≥ 0} intersect, i.e. whether there exist times t1, . . . , tp > 0 such that B1(t1) = · · · = Bp(tp).

Theorem 9.3.
(a) For d ≥ 3, almost surely, three independent Brownian paths in Rd have an empty intersection, except for a possible common starting point.
(b) For d = 2, almost surely, the intersection S of any finite number p of independent Brownian paths in Rd is nontrivial, i.e. contains points other than a possible common starting point.

In the light of our discussion of the case p = 2, it is natural to approach the question about the existence of intersections of p paths by asking for the Hausdorff dimension and measure of the intersection of p − 1 paths. This leads to an easy proof of (a).

Lemma 9.4. Suppose S is the intersection of the ranges of two Brownian motions in d = 3. Then, almost surely, for every compact set Λ ⊂ R³ not containing the starting points of the Brownian motions, we have H¹(S ∩ Λ) < ∞.

Proof. Fix a cube Cube ⊂ R³ of unit sidelength not containing the starting points. It suffices to show that, almost surely, H¹(S ∩ Cube) < ∞. For this purpose let Cn be the collection of dyadic subcubes of Cube of sidelength 2^{−n}, and In the collection of cubes in Cn which are hit by both motions. By our hitting estimates, Theorem 3.17, there exists C > 0 such that, for any cube E ∈ Cn,

P{E ∈ In} = P{∃ s > 0 with B(s) ∈ E}² ≤ C 2^{−2n}.

Now, for every n, the collection In is a covering of S ∩ Cube, and

E[Σ_{E∈In} |E|] = 2^{3n} P{E ∈ In} √3 2^{−n} ≤ C √3.

Therefore, by Fatou's lemma, we obtain

E[lim inf_{n→∞} Σ_{E∈In} |E|] ≤ lim inf_{n→∞} E[Σ_{E∈In} |E|] ≤ C √3.

Hence, almost surely, for arbitrarily large values of n we have a covering of S ∩ Cube by sets of diameter at most √3 2^{−n} with total 1-value no more than C √3. We infer from this that H¹(S ∩ Cube) ≤ C √3 almost surely, and this completes the proof.

Proof of Theorem 9.3 (a). It suffices to show that, for any cube Cube of unit sidelength which does not contain the origin, we have Cap1(S ∩ Cube) = 0. This follows directly from Lemma 9.4 and the energy method, Theorem 4.27.

For Theorem 9.3 (b) it would suffice to show that the Hausdorff dimension of the set

S = {x ∈ Rd : ∃ t1, t2 > 0 such that x = B1(t1) = B2(t2)}

is positive in the case d = 2. In fact, it is a natural question to ask for the Hausdorff dimension of the intersection of Brownian paths in any case when the set is nonempty. The problem was raised by Itô and McKean in their influential book [IM74], and has since been resolved by Taylor [Ta66] and Fristedt [Fr67]. The nontrivial problem of finding lower bounds for the Hausdorff dimension of the intersection sets is best approached using the technique of stochastic co-dimension, which we discuss now.

1.2. Stochastic co-dimension and percolation limit sets. Given a set A, the idea behind the stochastic co-dimension approach is to take a suitable random test set Θ and check whether P{Θ ∩ A ≠ ∅} is zero or positive. In the latter case this indicates that the set is large, and we should therefore get a lower bound on the dimension of A. A natural choice of such a random test set would be the range of a Brownian motion. Recall, for example in the case d = 3, that P{Range ∩ A ≠ ∅} > 0 implies that dim A ≥ 1. Of course, in order to turn this idea into a systematic technique for finding lower bounds for the Hausdorff dimension, an entire family of test sets is needed, so that the size of the test set can be tuned to give sharp bounds. For this purpose, Taylor [Ta66] used stable processes instead of Brownian motion. This is not the easiest way and is also limited, because stable processes only exist across a limited range of parameters. The approach we use in this book is based on the family of percolation limit sets as test sets.

Suppose that C ⊂ Rd is a fixed compact unit cube. We denote by Cn the collection of compact dyadic subcubes (relative to C) of sidelength 2^{−n}. We also let 𝒞 = ⋃_{n=0}^∞ Cn. Given γ ∈ [0, d] we construct a random compact set Γ[γ] ⊂ C inductively as follows: we keep each of the 2^d compact cubes in C1 independently with probability p = 2^{−γ}. Let S1 be the collection of cubes kept in this procedure and S(1) their union. Pass from Sn to Sn+1 by keeping each cube of Cn+1 which is not contained in a previously rejected cube, independently with probability p. Denote S = ⋃_{n=1}^∞ Sn and let S(n + 1) be the union of the cubes in Sn+1. Then the random set

Γ[γ] := ⋂_{n=1}^∞ S(n)

is called a percolation limit set.
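A minimal simulation of this construction (an added illustration, not from the original text; all parameters are arbitrary) generates the retained dyadic squares of the unit square down to a fixed level and records how often the construction stays alive. The retained cubes form a Galton–Watson process with Binomial(2^d, 2^{−γ}) offspring, so survival with positive probability requires mean offspring 2^{d−γ} > 1, i.e. γ < d; this branching-process heuristic reappears in Section 2.

```python
import numpy as np

def percolation_limit_set(gamma, d=2, depth=8, rng=None):
    """Lower-left corners (in units of 2^-depth) of the cubes retained at
    level `depth` of fractal percolation with retention p = 2^(-gamma);
    returns [] if the construction dies out earlier."""
    if rng is None:
        rng = np.random.default_rng()
    p = 2.0 ** (-gamma)
    level = [np.zeros(d, dtype=int)]            # level 0: the unit cube
    for _ in range(depth):
        nxt = []
        for corner in level:
            for off in range(2 ** d):           # the 2^d dyadic children
                bits = np.array([(off >> i) & 1 for i in range(d)])
                if rng.random() < p:            # keep this child w.p. p
                    nxt.append(2 * corner + bits)
        if not nxt:
            return []
        level = nxt
    return level

rng = np.random.default_rng(4)
for gamma in (0.5, 1.5, 2.5):                   # around the critical gamma = d = 2
    alive = sum(bool(percolation_limit_set(gamma, rng=rng)) for _ in range(200))
    print(f"gamma = {gamma}: alive at depth 8 in {alive}/200 runs, "
          f"mean offspring {2 ** (2 - gamma):.2f}")
```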
The usefulness of percolation limit sets in fractal geometry comes from the following theorem.

Theorem 9.5 (Hawkes 1981). For every γ ∈ [0, d] and every closed set A ⊂ C the following properties hold:
(i) if dim A < γ, then, almost surely, A ∩ Γ[γ] = ∅;
(ii) if dim A > γ, then A ∩ Γ[γ] ≠ ∅ with positive probability;
(iii) if dim A > γ, then (a) almost surely dim(A ∩ Γ[γ]) ≤ dim A − γ and (b) for all ε > 0, with positive probability, dim(A ∩ Γ[γ]) ≥ dim A − γ − ε.

Remark 9.6. Observe that the first part of the theorem gives a lower bound γ for the Hausdorff dimension of a set A if we can show that A ∩ Γ[γ] ≠ ∅ with positive probability. As with so many ideas in fractal geometry, one of the roots of this method lies in the study of trees, more precisely percolation on trees, see [Ly90]. ⋄

Remark 9.7. (a) There is a close kinship between the stochastic co-dimension technique and the energy method: a set A is called polar for the percolation limit set if P{A ∩ Γ[γ] ≠ ∅} = 0. We shall see in Theorem 9.18 that a set is polar for the percolation limit set if and only if it has γ-capacity zero.
(b) For d ≥ 3, the criterion for polarity of a percolation limit set with γ = d − 2 therefore agrees with the criterion for polarity for Brownian motion. This 'equivalence' between percolation limit sets and Brownian motion has a quantitative strengthening, which is discussed in Section 2 of this chapter. ⋄

Proof of (i) in Hawkes' theorem. The proof of part (i) is based on the first moment method, which means that we essentially only have to calculate an expectation. Because dim A < γ there exists, for every ε > 0, a covering of A by countably many sets D1, D2, . . . with Σ_{i=1}^∞ |Di|^γ < ε. As each set is contained in no more than a constant number of dyadic cubes of smaller diameter, we may even assume that D1, D2, . . . ∈ 𝒞. Suppose that the sidelength of Di is 2^{−n}; then the probability that Di ∈ Sn is 2^{−nγ}. By picking from D1, D2, . . . those cubes which are in S we get a covering of A ∩ Γ[γ]. Let N be the number of cubes picked in this procedure; then

P{A ∩ Γ[γ] ≠ ∅} ≤ P{N > 0} ≤ EN = Σ_{i=1}^∞ P{Di ∈ S} = Σ_{i=1}^∞ |Di|^γ < ε.

As this holds for all ε > 0 we infer that, almost surely, A ∩ Γ[γ] = ∅.

Proof of (ii) in Hawkes' theorem. The proof of part (ii) is based on the second moment method, which means that a variance has to be calculated. We also use the nontrivial part of Frostman's lemma in the form of Theorem 4.36, which states that, as dim A > γ, there exists a probability measure µ on A such that Iγ(µ) < ∞. Now let n be a positive integer and define the random variables

Yn = Σ_{C∈Sn} µ(C) |C|^{−γ} = Σ_{C∈Cn} µ(C) 2^{nγ} 1{C∈Sn}.

Note that Yn > 0 implies S(n) ∩ A ≠ ∅ and, by compactness, if Yn > 0 for all n we even have A ∩ Γ[γ] ≠ ∅. As Yn+1 > 0 implies Yn > 0, we get that

P{A ∩ Γ[γ] ≠ ∅} ≥ P{Yn > 0 for all n} = lim_{n→∞} P{Yn > 0}.

It therefore suffices to give a positive lower bound for P{Yn > 0} independent of n.
A straightforward calculation gives for the first moment E[Yn] = Σ_{C∈Cn} µ(C) = 1. For the second moment we find

E[Yn²] = Σ_{C∈Cn} Σ_{D∈Cn} µ(C) µ(D) 2^{2nγ} P{C ∈ Sn and D ∈ Sn}.

The latter probability depends on the dyadic distance of the cubes C and D: if 2^{−m} is the sidelength of the smallest dyadic cube which contains both C and D, then the probability in question is 2^{−2γ(n−m)} 2^{−γm}. The value m can be estimated in terms of the Euclidean distance of the cubes; indeed, if x ∈ C and y ∈ D then |x − y| ≤ √d 2^{−m}. This gives a handle to estimate the second moment in terms of the energy of µ. We find that

E[Yn²] = Σ_{C∈Cn} Σ_{D∈Cn} µ(C) µ(D) 2^{γm} ≤ d^{γ/2} ∫∫ dµ(x) dµ(y)/|x − y|^γ = d^{γ/2} Iγ(µ).

Plugging these moment estimates into the easy form of the Paley–Zygmund inequality, Lemma 3.22, gives P{Yn > 0} ≥ d^{−γ/2} Iγ(µ)^{−1}, as required.

Proof of (iii) in Hawkes' theorem. For part (iii) note that the intersection Γ[γ] ∩ Γ[δ] of two independent percolation limit sets has the same distribution as Γ[γ + δ]. Suppose first that δ > dim A − γ. Then dim A < γ + δ and hence, by part (i), A ∩ Γ[γ] ∩ Γ[δ] = ∅ almost surely. By part (ii), applied to the set A ∩ Γ[γ] with the independent test set Γ[δ], this forces dim(A ∩ Γ[γ]) ≤ δ almost surely. Letting δ ↓ dim A − γ completes the proof of part (a). Now suppose that δ < dim A − γ. Then, by part (ii), with positive probability (A ∩ Γ[γ]) ∩ Γ[δ] ≠ ∅. Using again part (i) we get that dim(A ∩ Γ[γ]) ≥ δ with positive probability, completing the proof of part (b).

1.3. Hausdorff dimension of intersections. We can now use the stochastic co-dimension approach to find the Hausdorff dimension of the intersection of Brownian paths whenever it is nonempty. Note that the following theorem also implies Theorem 9.3 (b).

Theorem 9.8. Suppose d ≥ 2 and p ≥ 2 are integers such that p(d − 2) < d. Suppose that {B1(t) : t ≥ 0}, . . . , {Bp(t) : t ≥ 0} are p independent d-dimensional Brownian motions. Let Rangei be the range of {Bi(t) : t ≥ 0} for 1 ≤ i ≤ p. Then, almost surely,

dim(Range1 ∩ . . . ∩ Rangep) = d − p(d − 2).

Remark 9.9. A good way to make this result plausible is by recalling the situation for the intersection of linear subspaces of Rd: if the spaces are in general position, then the co-dimension of the intersection is the sum of the co-dimensions of the subspaces. As the Hausdorff dimension of a Brownian path is two, the plausible co-dimension of the intersection of p paths is p(d − 2), and hence the dimension is d − p(d − 2). ⋄

Remark 9.10. Under the assumptions of the theorem, if the Brownian paths are started in the same point, then almost surely dim(B1[0, t1] ∩ · · · ∩ Bp[0, tp]) = d − p(d − 2) for any t1, . . . , tp > 0, see Exercise 9.1 (b). ⋄

Note that, by Lemma 9.4, we have dim(Range1 ∩ Range2) ≤ 1 if d = 3, and hence only the lower bounds in Theorem 9.8 remain to be proved. For these we use the stochastic co-dimension method, but first we provide a useful zero-one law.

Lemma 9.11. For any γ > 0 the probability of the event

{dim(Range1 ∩ . . . ∩ Rangep) ≥ γ}

is either zero or one, and independent of the starting points of the Brownian motions.

Proof. For t ∈ (0, ∞] denote

S(t) = {x ∈ Rd : there exist 0 < ti < t such that x = B1(t1) = · · · = Bp(tp)},

and let p(t) = P{dim S(t) ≥ γ}. We start by considering the case where all Brownian motions start at the origin. Then, by monotonicity of the events,

P{dim S(t) ≥ γ for all t > 0} = lim_{t↓0} p(t).

The event on the left hand side is in the germ-σ-algebra and hence, by Blumenthal's zero-one law, has probability zero or one.
By scaling, however, p(t) does not depend on t at all, so we have either p(t) = 0 for all t > 0 or p(t) = 1 for all t > 0. In the first case we note that, by the Markov property applied at times t1, . . . , tp,

0 = P{dim S(∞) ≥ γ} = ∫ P{dim S(∞) ≥ γ | B1(t) = x1, . . . , Bp(t) = xp} dµ(x1, . . . , xp),

where µ is the joint law of p independent centred normal random vectors with variance t. Therefore P{dim S(∞) ≥ γ} = 0 for L^{pd}-almost every vector of starting points. Finally, for an arbitrary configuration of starting points,

P{dim S(∞) ≥ γ} = lim_{t↓0} P{dim{x ∈ Rd : ∃ ti ≥ t such that x = B1(t1) = · · · = Bp(tp)} ≥ γ} = 0.

A completely analogous argument can be carried out in the second case.

Proof of Theorem 9.8. First we look at d = 3 and p = 2. Suppose γ < 1 is arbitrary, and pick β > 1 such that γ + β < 2. Let Γ[γ] and Γ[β] be two independent percolation limit sets, independent of the Brownian motions. Note that then Γ[γ] ∩ Γ[β] is a percolation limit set with parameter γ + β. Hence, by Theorem 9.5 (ii) and the fact that dim(Range1) = 2 > γ + β, we have

P{Range1 ∩ Γ[γ] ∩ Γ[β] ≠ ∅} > 0.

Interpreting Γ[β] as the test set and using Theorem 9.5 (i) we obtain dim(Range1 ∩ Γ[γ]) ≥ β with positive probability. As β > 1, on this event the set Range1 ∩ Γ[γ] has positive capacity with respect to the potential kernel in R³ and is therefore nonpolar for the independent Brownian motion {B2(t) : t ≥ 0}. We therefore have

P{Range1 ∩ Range2 ∩ Γ[γ] ≠ ∅} > 0.

Using Theorem 9.5 (i) we infer that dim(Range1 ∩ Range2) ≥ γ with positive probability. Lemma 9.11 shows that this must in fact hold almost surely, and the result follows as γ < 1 was arbitrary.

Second we look at d = 2 and any p ≥ 2. Suppose γ < 2 is arbitrary, and pick β1, . . . , βp > 0 such that γ + β1 + · · · + βp < 2. Let Γ[γ] and Γ[β1], . . . , Γ[βp] be independent percolation limit sets, independent of the p Brownian motions. Note that then

Γ[γ] ∩ ⋂_{i=1}^p Γ[βi]

is a percolation limit set with parameter γ + β1 + · · · + βp. Hence, by Theorem 9.5 (ii) and the fact that dim(Range1) = 2 > γ + β1 + · · · + βp, we have

P{Range1 ∩ Γ[γ] ∩ ⋂_{i=1}^p Γ[βi] ≠ ∅} > 0.

Interpreting Γ[βp] as the test set and using Theorem 9.5 (i) we obtain

dim(Range1 ∩ Γ[γ] ∩ ⋂_{i=1}^{p−1} Γ[βi]) ≥ βp

with positive probability. As βp > 0, on this event the set Range1 ∩ Γ[γ] ∩ ⋂_{i=1}^{p−1} Γ[βi] has positive capacity with respect to the potential kernel in R² and is therefore nonpolar for the independent Brownian motion {B2(t) : t ≥ 0}. We therefore have

P{Range1 ∩ Range2 ∩ Γ[γ] ∩ ⋂_{i=1}^{p−1} Γ[βi] ≠ ∅} > 0.

Iterating this procedure p − 1 times we obtain

P{⋂_{i=1}^p Rangei ∩ Γ[γ] ≠ ∅} > 0.

Using Theorem 9.5 (i) we infer that dim(⋂_{i=1}^p Rangei) ≥ γ with positive probability. Lemma 9.11 shows that this must in fact hold almost surely, and the result follows as γ < 2 was arbitrary.

2. Intersection equivalence of Brownian motion and percolation limit sets

The idea of quantitative estimates of hitting probabilities has a natural extension: two random sets may be called intersection-equivalent if their hitting probabilities for a large class of test sets are comparable. This concept of equivalence allows surprising relationships between random sets which, at first sight, might not have much in common. In this section we establish intersection-equivalence between Brownian motion and suitably defined percolation limit sets, and use this to characterise the polar sets for the intersection of Brownian paths.
We start the discussion by formalising the idea of intersection-equivalence.

Definition 9.12. Two random closed sets A and B in Rd are intersection-equivalent in the compact set U if there exist two positive constants c, C such that, for any closed set Λ ⊂ U,

(2.1) c P{A ∩ Λ ≠ ∅} ≤ P{B ∩ Λ ≠ ∅} ≤ C P{A ∩ Λ ≠ ∅}.

Using the symbol a ≍ b to indicate that the ratio of a and b is bounded from above and below by positive constants which do not depend on Λ, we can write this as P{A ∩ Λ ≠ ∅} ≍ P{B ∩ Λ ≠ ∅}. ⋄

Remark 9.13. Let G be the collection of all closed subsets of Rd. Formally, we define a random closed set as a mapping A : Ω → G such that, for every compact Λ ⊂ Rd, the set {ω : A(ω) ∩ Λ = ∅} is measurable. ⋄

The philosophy of the main result of this section is that we would like to find a class of particularly simple sets which are intersection-equivalent to the paths of transient Brownian motion. If these sets are easier to study, we can 'translate' easy results about the simple sets into hard ones for Brownian motion. A good candidate for these simple sets are percolation limit sets: they have excellent features of self-similarity and independence between the fine structures in different parts. Many of their properties can be obtained from classical facts about Galton–Watson branching processes.

We introduce percolation limit sets with generation-dependent retention probabilities. Denote by Cn the compact dyadic cubes of sidelength 2^{−n}. For any sequence p1, p2, . . . in (0, 1) we define families Sn of compact dyadic cubes inductively by including any cube in Cn which is not contained in a previously rejected cube, independently with probability pn. Define

Γ = ⋂_{n=1}^∞ ⋃_{S∈Sn} S

to be the percolation limit set for the sequence p1, p2, . . ..

To find a suitable sequence of retention probabilities we compare the hitting probabilities of dyadic cubes by a percolation limit set on the one hand and by a transient Brownian motion on the other. (This is obviously necessary to establish intersection-equivalence.) We assume that percolation is performed in a cube Cube at positive distance from the origin, at which a transient Brownian motion is started. Supposing for the moment that the retention probabilities are such that the survival probability of any retained cube is bounded from below, we have, for any cube Q ∈ Cn, the hitting estimate

P{Γ ∩ Q ≠ ∅} ≍ p1 · · · pn.

By Theorem 8.23, on the other hand,

P{B[0, T] ∩ Q ≠ ∅} ≍ CapM(Q) ≍ 1/f(2^{−n}),

for the radial potential

f(ε) = log2(1/ε) for d = 2, and f(ε) = ε^{2−d} for d ≥ 3,

where we have chosen base 2 for the logarithm for the convenience of this argument. We therefore choose the sequence p1, p2, . . . of retention probabilities such that p1 · · · pn = 1/f(2^{−n}). More explicitly, we choose p1 = 2^{2−d} and, for n ≥ 2,

(2.2) pn = f(2^{−n+1})/f(2^{−n}) = (n − 1)/n for d = 2, and pn = 2^{2−d} for d ≥ 3.

The retention probabilities are constant for d ≥ 3, but generation-dependent for d = 2.

Theorem 9.14. Let {B(t) : 0 ≤ t ≤ T} denote transient Brownian motion started in the origin, and Cube ⊂ Rd a compact cube of unit sidelength not containing the origin. Let Γ be a percolation limit set in Cube with retention probabilities chosen as in (2.2). Then the range of the Brownian motion is intersection-equivalent to the percolation limit set Γ in the cube Cube.

Before discussing the proof, we look at an application of Theorem 9.14 to our understanding of Brownian motion. We first make two easy observations.
, Fk are independent random closed sets, with Ai intersection-equivalent to Fi for 1 ≤i ≤k. Then A1 ∩A2 ∩. . .∩Ak is intersection-equivalent to F1 ∩F2 ∩. . . ∩Fk. Proof. By induction, we can reduce this to the case k = 2. It then clearly suffices to show that A1 ∩A2 is intersection-equivalent to F1 ∩A2. This is done by conditioning on A2, P © A1 ∩A2 ∩Λ ̸= ∅ ª = E £ P{A1 ∩A2 ∩Λ ̸= ∅| A2} ¤ ≍E £ P{F1 ∩A2 ∩Λ ̸= ∅| A2} ¤ = P{F1 ∩A2 ∩Λ ̸= ∅}. Lemma 9.16. For independent percolation limit sets Γ1 and Γ2 with retention probabilities p1, p2, . . . and q1, q2, . . ., respectively, their intersection Γ1 ∩Γ2 is a percolation limit set with retention probabilities p1q1, p2q2, . . .. Proof. This is obvious from the definition of percolation limit sets and independence. These results enable us to recover the results about existence of intersection of Brownian paths from the survival criteria of Galton-Watson trees. As an example look at the intersection of two Brownian paths in Rd, d ≥3. By Theorem 9.14 and Lemma 9.15, the intersection of these paths is intersection-equivalent (in any unit cube not containing the starting points) to the intersection of two independent percolation limit sets with constant retention parameters p = 22−d. This intersection, by Lemma 9.16, is another percolation limit set, but now with parameter p2 = 24−2d. Now observe that this set has a positive probability of being nonempty if and only if a Galton-Watson process with binomial offspring distribution with parameters n = 2d and p = 24−2d has a positive survival probability. This is the case if and only if the mean offspring number np strictly exceeds 1, i.e. if 4 −d > 0. In other words, in d = 3 the two paths intersect with positive probability, in all higher dimensions they almost surely do not intersect. We now give the proof of Theorem 9.14. A key rˆ ole in the proof is played by a fundamental result of R. Lyons concerning survival probabilities of general trees under the percolation process, which has great formal similarity with the quantitative hitting estimates for Brownian paths of Theorem 8.23. Recall the notation for trees from Page 112. As usual we define, for any kernel K : ∂T × ∂T → [0, ∞], the K-energy of the measure µ on ∂T as IK(µ) = ZZ K(x, y) dµ(x) dµ(y), and the K-capacity of the boundary of the tree by CapK(∂T) = £ inf © IK(µ) : µ a probability measure on ∂T ª¤−1 . 249 Given a sequence p1, p2, . . . of probabilities, percolation on T is obtained by removing each edge of T of order n independently with probability 1 −pn and retaining it otherwise, with mutual independence among edges. Say that a ray ξ survives the percolation if all the edges on ξ are retained, and say that the tree boundary ∂T survives if some ray of T survives. Theorem 9.17 (Lyons). If percolation with retention probabilities p1, p2, . . . is performed on a rooted tree T, then (2.3) CapK(∂T) ≤P © ∂T survives the percolation ª ≤2CapK(∂T) , where the kernel K is defined by K(x, y) = Q|x∧y| i=1 p−1 i . Proof. For two vertices v, w we write v ↔w if the shortest path between the vertices is retained in the percolation. We also write v ↔∂T if a ray through vertex v survives the percolation and v ↔Tn if there is a self-avoiding path of retained edges connecting v to a vertex of generation n. Note that K(x, y) = P{ρ ↔x ∧y}−1 by definition of the kernel K. By the finiteness of the degrees, {ρ ↔∂T} = \ n {ρ ↔Tn} . We start with the left inequality in (2.3) and consider the case of a finite tree T first. 
We extend the definition of the boundary ∂T to finite trees by letting ∂T be the set of leaves, i.e., the vertices with no offspring. Let µ be a probability measure on ∂T and set Y = X x∈∂T µ(x)1{ρ ↔x} P{ρ ↔x} . Then E[Y ] = P x∈∂T µ(x) = 1, and E[Y 2] = E " X x∈∂T X y∈∂T µ(x)µ(y) 1{ρ ↔x, ρ ↔y} P{ρ ↔x} P{ρ ↔y} # = X x∈∂T X y∈∂T µ(x)µ(y)P{ρ ↔x and ρ ↔y} P{ρ ↔x}P{ρ ↔y} . Thus, E[Y 2] = X x,y∈∂T µ(x)µ(y) 1 P{ρ ↔x ∧y} = IK(µ). Using the Paley-Zygmund inequality in the second step, we obtain P{ρ ↔∂T} ≥P{Y > 0} ≥ ¡ E[Y ] ¢2 E[Y 2] = 1 IK(µ) . The left-hand side does not depend on µ, so optimising the right-hand side over µ yields P{ρ ↔∂T} ≥sup µ 1 IK(µ) = CapK(∂T) , 250 which proves the lower bound for finite trees. For T infinite, let µ be any probability measure on ∂T. This induces a probability measure e µ on the set Tn, consisting of those vertices which become leaves when the tree T is cut offafter the nth generation, by letting e µ(v) = µ © ξ ∈∂T : v ∈ξ ª , for any vertex v ∈Tn . By the finite case considered above, P{ρ ↔Tn} ≥ ³ X x,y∈Tn K(x, y)e µ(x)e µ(y) ´−1 . Each ray ξ must pass through some vertex x ∈Tn. This implies that K(x, y) ≤K(ξ, η) for x ∈ξ and y ∈η. Therefore, Z ∂T Z ∂T K(ξ, η) dµ(ξ) dµ(η) ≥ X x,y∈Tn K(x, y)e µ(x)e µ(y) ≥ 1 P{ρ ↔Tn} . Hence P{ρ ↔Tn} ≥IK(µ)−1 for any probability measure µ on ∂T. Optimising over µ and passing to the limit as n →∞, we get P{ρ ↔∂T} ≥CapK(∂T) . It remains to prove the right-hand inequality in (2.3). Assume first that T is finite. There is a Markov chain {Vk : k ∈N} hiding here: Suppose the offspring of each individual is ordered from left to right, and note that this imposes a natural order on all vertices of the tree by saying that x is to the left of y if there are siblings v, w with v to the left of w, such that x is a descendant of v and y is a descendant of w. The random set of leaves that survive the percolation may thus be enumerated from left to right as V1, V2, . . . , Vr. The key observation is that the random sequence ρ, V1, V2, . . . , Vr, ∆, ∆, . . . is a Markov chain on the state space ∂T ∪{ρ, ∆}, where ρ is the root and ∆is a formal absorbing cemetery. Indeed, given that Vk = x, all the edges on the unique path from ρ to x are retained, so that survival of leaves to the right of x is determined by the edges strictly to the right of the path from ρ to x, and is thus conditionally independent of V1, . . . , Vk−1, see Figure 1. This verifies the Markov property, so Proposition 8.25 may be applied. The transition prob-abilities for the Markov chain above are complicated, but it is easy to write down the Green kernel G. For any vertex x let path(x) be the set of edges on the shortest path from ρ to x. Clearly, G(ρ, y) equals the probability that y survives percolation, so G(ρ, y) = |y| Y n=1 pn . If x is to the left of y, then G(x, y) is equal to the probability that the range of the Markov chain contains y given that it contains x, which is just the probability of y surviving given that x survives. Therefore, G(x, y) = |y| Y n=|x∧y|+1 pn , and hence M(x, y) = G(x, y) G(ρ, y) = |x∧y| Y n=1 p−1 n . 251 ρ Figure 1. The Markov chain embedded in the tree. Now G(x, y) = 0 for x on the right of y; thus (keeping the diagonal in mind) K(x, y) ≤M(x, y) + M(y, x) for all x, y ∈∂T, and therefore IK(µ) ≤2IM(µ) . Now apply Proposition 8.25 to Λ = ∂T: CapK(∂T) ≥1 2CapM(∂T) ≥1 2 P © {Vk : k ∈N} hits ∂T ª = 1 2 P{ρ ↔∂T} . This establishes the upper bound for finite T. The inequality for general T follows from the finite case by taking limits. 
The main remaining task is to translate Lyons’ theorem, Theorem 9.17 into hitting estimates for percolation limit sets using a ‘tree representation’ as in Figure 2, and relating the capacity of the tree boundary to the capacity of the percolation limit set. Theorem 9.18. Let Γ be a percolation limit set in the unit cube Cube with retention parameters p1, p2, . . .. Then, for any closed set Λ ⊂Cube we have P © Γ ∩Λ ̸= ∅ ª ≍Capf(Λ) , for any decreasing f satisfying f(2−k) = p−1 1 · · · p−1 k . Remark 9.19. This result extends parts (i) and (ii) in Hawkes’ theorem, Theorem 9.5, in two ways: It includes generation dependent retention and gives a quantitative estimate. ⋄ The key to this lemma is the following representation for the f-energy of a measure. 252 Figure 2. Percolation limit set and associated tree Lemma 9.20. Suppose f : (0, ∞) →(0, ∞) is a decreasing function, and denote h(n) = f(2−n)− f(21−n) for n ≥1, and h(0) = f(1). Then, for any measure µ on the unit cube [0, 1)d, If(µ) ≍ ∞ X n=0 h(n) ³ X Q∈Dn µ(Q)2´ , where the implied constants depend only on d. Proof of the lower bound in Lemma 9.20. Fix an integer ℓsuch that √ d ≤2ℓ. For any x, y ∈[0, 1]d we write n(x, y) = max © n : x, y ∈D for some D ∈Dn ª . Note that n(x, y) = n + ℓimplies | x −y| ≤ √ d2−n−ℓ≤2−n and hence f(| x −y|) ≥f(2−n). We thus get If(µ) = ZZ f( |x −y|) dµ(x) dµ(y) ≥ ∞ X n=0 f(2−n) µ ⊗µ © (x, y) : n(x, y) = n + ℓ ª = ∞ X n=0 f(2−n) £ Sn+ℓ(µ) −Sn+ℓ+1(µ) ¤ , where Sn(µ) = P Q∈Dn µ(Q)2. Note that, by the Cauchy-Schwarz inequality, (2.4) Sn(µ) = X Q∈Dn µ(Q)2 ≤ X Q∈Dn ³ X V ∈Dn+1 V ⊂Q µ(V ) ´2 ≤2d X V ∈Dn+1 µ(V )2 = 2d Sn+1(µ) . Rearranging the sum and using this ℓtimes, we obtain that ∞ X n=0 f(2−n) £ Sn+ℓ(µ) −Sn+ℓ+1(µ) ¤ = ∞ X n=0 h(n) Sn+ℓ(µ) ≥ ∞ X n=0 h(n) 2−dℓSn(µ) , which is our statement with c = 2−dℓ. 253 Proof of the upper bound in Lemma 9.20. For 21−n ≥| x −y| > 2−n, we have ∞ X k=0 h(k)1{21−k ≥| x −y|} = f(2−n) ≥f(| x −y|) , and hence we can decompose the integral as If(µ) = ZZ f(| x −y|) dµ(x) dµ(y) ≤ ZZ ∞ X k=0 h(k)1{21−k ≥| x −y|} dµ(x) dµ(y) . For cubes Q1, Q2 ∈Dn we write Q1 ∼Q2 is they are either adjacent or they agree (though note that ∼is not an equivalence relation). Then ZZ 1{21−k ≥| x −y|} dµ(x) dµ(y) = µ ⊗µ © (x, y) : | x −y| ≤21−kª ≤ X Q1,Q2∈Dk−1 Q1∼Q2 µ(Q1) µ(Q2) ≤1 2 X Q1,Q2∈Dk−1 Q1∼Q2 ³ µ(Q1)2 + µ(Q2)2´ , using the inequality of the geometric and arithmetic mean in the last step. As, for each cube, the number of adjacent or identical dyadic cubes of the same sidelength is 3d, we obtain that If(µ) ≤3d+1 2 ∞ X k=0 h(k) X Q∈Dk−1 µ(Q)2 ≤3d+12d−1 ∞ X k=0 h(k) X Q∈Dk µ(Q)2 , using (2.4) from above. This completes the proof of the upper bound. Proof. Denote the coordinatewise minimum of Cube by x. We employ the canonical mapping R from the boundary of a 2d-ary tree Υ, where every vertex has 2d children, to the cube Cube. Formally, label the edges from each vertex to its children in a one-to-one manner with the vectors in Θ = {0, 1}d. Then the boundary ∂Υ is identified with the sequence space ΘZ+ and we define R: ∂Υ = ΘZ+ →Cube by R(ω1, ω2, . . .) = x + ∞ X n=1 2−nωn. We now use the representation given in Lemma 9.20 to relate the K-energy of a measure µ on ∂T (with K as in Theorem 9.17) to the f-energy of its image measure µ ◦R−1 on Cube, showing that (2.5) IK(µ) ≍If(µ ◦R−1) 254 where the implied constants depend only on the dimension d. 
Indeed the K-energy of a measure µ on ∂T satisfies, by definition, IK(µ) = ZZ |x∧y| Y i=1 p−1 i dµ(x) dµ(y) = ZZ X v≤x∧y ³ |v| Y i=1 p−1 i − |v|−1 Y i=1 p−1 i ´ dµ(x) dµ(y) = X v∈Γ ³ |v| Y i=1 p−1 i − |v|−1 Y i=1 p−1 i ´ ZZ 1{x, y ≥v} dµ(x) dµ(y) = X v∈Γ ³ |v| Y i=1 p−1 i − |v|−1 Y i=1 p−1 i ´ µ ¡ {ξ ∈∂T : v ∈ξ ª¢2 , whereas the f-energy of the measure µ ◦R−1 satisfies, by Lemma 9.20, If ¡ µ ◦R−1¢ ≍ ∞ X k=0 h(k) X D∈Dk µ ¡ R−1(D) ¢2 , where h(k) = f(2−k) −f(2−k+1) = p−1 1 · · · p−1 k −p−1 1 · · · p−1 k+1 by our assumptions on f. Now R−1(D) is contained in no more than 3d sets of the form {ξ ∈∂T : v ∈ξ ª , for |v| = k, in such a way that over all cubes D ∈Dk no such set is used in more than 3d covers. Conversely each set R−1(D) contains an individual set of this form entirely, so that we obtain (2.5). As any measure ν on R(∂T) ⊂Cube can be written as µ ◦R−1 for an appropriate measure µ on ∂T it follows from (2.5) that CapK(∂T) ≍Capf(R(∂T)). Any closed set Λ in the unit cube Cube can be written as the image R(∂T) of the boundary of some subtree T of the regular 2d-ary tree. We perform percolation with retention parameters p1, p2, . . . on this tree. Then, by Theorem 9.17, P © Γ ∩Λ ̸= ∅ ª = P © ∂T survives the percolation ª ≍CapK(∂T) ≍Capf(Λ) . Proof of Theorem 9.14. As the cube Cube has positive distance to the starting point of Brownian motion, we can remove the denominator and smaller order terms from the Martin kernel in Theorem 8.23, as in the proof of Theorem 8.19. We thus obtain P © B[0, T] ∩Λ ̸= ∅ ª ≍Capf(Λ) , where f is the radial potential. For the choice of retention probabilities in (2.2) we can apply Theorem 9.18, which implies Capf(Λ) ≍P © Γ ∩Λ ̸= ∅ ª , and combining the two displays gives the result. 255 The intersection-equivalence approach enables us to characterise the polar sets for the intersec-tion of p independent Brownian motions in Rd and give a quantitative estimate of the hitting probabilities. Theorem 9.21. Let B1, . . . , Bp be independent Brownian motions in Rd starting in arbitrary fixed points and suppose p(d −2) < d. Let S = © x ∈Rd : ∃t1, . . . , tp > 0 with x = B1(t1) = · · · = Bp(tp) ª . Then, for any closed set Λ, we have P © S ∩Λ ̸= ∅ ª > 0 if and only if Capfp(Λ) > 0 , where f is the radial potential. Proof. We may assume that Λ is contained in a unit cube at positive distance from the starting points. Let Γ be a percolation limit set in that cube, with retention probabilities p1, p2, . . . satisfying p1 · · · pn = 1/f p(2−n). By Theorem 9.14 and Lemma 9.15, the random set S is intersection-equivalent to Γ in that cube. Theorem 9.18 characterises the polar sets for Γ, completing the argument. 3. Multiple points of Brownian paths The results of the previous section also provide the complete answer to the question of the existence of p-fold multiple points of d-dimensional Brownian motion. This is achived by Theorem 9.22. Suppose d ≥2 and {B(t) : t ∈[0, 1]} is a d-dimensional Brownian motion. Then, almost surely, • if d ≥4 no double points exist, i.e. Brownian motion is injective, • if d = 3 double points exist, but triple points fail to exist, • if d = 2 points of any finite multiplicity exist. Proof. To show nonexistence of double points in d ≥4 we divide, for every n ∈N the interval [0, 1) into 2n equal subintervals of the form [k2−n, (k + 1)2−n). Note that, for all s, t ∈[0, 1) with s < t there exists a unique n ∈N and k ∈{1, . . . , 2n −1} with s ∈[(k −1)2−n, k2−n) and t ∈[k2−n, (k + 1)2−n), see Figure 3. 
The Brownian motions {B1(t) : 0 ≤t ≤2−n} and {B2(t) : 0 ≤t ≤2−n} given by B1(t) = B(k2−n + t) −B(k2−n) and B2(t) = B(k2−n −t) −B(k2−n) are independent and hence, by Theorem 9.1, they do not intersect, almost surely. Hence for each pair [(k −1)2−n, k2−n), [k2−n, (k + 1)2−n) of intervals, almost surely, there is no s ∈[(k −1)2−n, k2−n) and t ∈[k2−n, (k +1)2−n) with B(s) = B(t). As this holds almost surely simultaneously for all pairs of intervals and all n, we note that there exists no s, t ∈[0, 1) with s ̸= t but B(s) = B(t). 256 0 1/2 1 1/2 1 1/4 3/4 1/4 3/4 t s Figure 3. Exhausting the triangle {(s, t) ∈[0, 1)2, s < t} by squares of the form [(k −1)2−n, k2−n) × [k2n, (k + 1)2n). To show existence of double points in d ≤3 we fix an arbitrary pair of adjacent intervals, say [(k −1)2−n, k2−n) and [k2−n, (k + 1)2−n). We apply Theorem 9.1 in conjunction with Remark 9.2, to the independent Brownian motions {B1(t) : 0 ≤t ≤2−n} and {B2(t) : 0 ≤t ≤2−n} given by B1(t) = B(k2−n + t) −B(k2−n) and B2(t) = B(k2−n −t) −B(k2−n), to see that, almost surely, the two ranges intersect. To show nonexistence of triple points in d = 3 we observe that it suffices to show that for any four rationals 0 < α1 < α2 < α3 < α4 < 1, almost surely there is no 0 < s < α1, α2 < t < α3, α4 < u < 1 with B(s) = B(t) = B(u). Fix four rationals as above and denote by µ the law of the vector ¡ B(α1), B(α2), B(α3), B(α4) ¢ . Obviously this law has a bounded density with respect to a vector (X1, X2, X3, X4) of inde-pendent standard normal random variables with variances α1, α2, α3, α4. Denoting the upper bound by C > 0, we obtain P © there exist 0 < s < α1, α2 < t < α3, α4 < u < 1 with B(s) = B(t) = B(u) ª = Z P © B[0, α1] ∩B[α2, α3] ∩B[α4, 1] ̸= ∅| B(αi) = xi for i = 1, . . . , 4 ª dµ(x1, · · · , x4) ≤C E h P © B[0, α1] ∩B[α2, α3] ∩B[α4, 1] ̸= ∅| B(αi) = Xi for i = 1, . . . , 4 ªi , where the last expectation is with respect to the vector (X1, X2, X3, X4). It is easy to see, for example from L´ evy’s construction of Brownian motion on an interval, that this expectation 257 equals P © B1[0, α1] ∩B2[α2, α3] ∩B3[α4, 1] ̸= ∅ ª where B1, B2, B3 are three independent Brownian motions. Hence Theorem 9.3 shows that the probability is zero, and therefore there are no triple points of Brownian motion in d = 3. To show the existence of p-multiple points in R2 fix numbers 0 < α1 < α2 < · · · < α2p < ε, and let µ the law of the vector ¡ B(α1), . . . , B(α2p) ¢ . This law has a density with respect to a vector (X1, . . . , X2p) of independent standard normal random variables with variances α1, . . . α2p, which is bounded from below, say by c > 0. Hence, with α0 := 0, P n p−1 \ i=0 B[α2i, α2i+1] ̸= ∅ o = Z P n p−1 \ i=0 B[α2i, α2i+1] ̸= ∅ ¯ ¯ ¯ B(αi) = xi for i = 1, . . . , 2p o dµ(x1, · · · , x2p) ≥c E h P n p−1 \ i=0 B[α2i, α2i+1] = ∅ ¯ ¯ ¯ B(αi) = Xi for i = 1, . . . , 2p oi , where the last expectation is with respect to the vector (X1, . . . , X2p). The last exepctation equals P n p−1 \ i=0 Bi[α2i, α2i+1] ̸= ∅ o > 0. Hence, for every ε > 0, p-fold self-intersections of the Brownian path occur before time ε with positive probability. By Brownian scaling this probability does not depend on ε > 0, and therefore also the event that p-fold self-intersection occur before any positive time has positive probability. As this event is in the germ-σ-algebra, Blumenthal’s zero-one law implies that its probaility must be one, which completes the proof. Theorem 9.23. Let {B(t) : 0 ≤t ≤1} be a planar Brownian motion. 
Then, almost surely, for every positive integer p, there exists points x ∈R2 which are visited exactly p times by the Brownian motion. Proof. Note first that it suffices to show this with positive probability. Indeed, by Brownian scaling, the probability that the path {B(t) : 0 ≤t ≤r} has points of multiplicity exactly p does not depend on r. By Blumenthal’s zero-one law it therefore must be zero or one. 258 The idea of the proof is now to construct a set Λ such that Capfp(Λ) > 0 but Capfp+1(Λ) = 0 for the radial potential f. By Exercise 9.2 the first condition implies that the probability that Λ contains a p-fold multiple point is positive. The second conditon ensures that it almost surely does not contain a p + 1-fold multiple point. Hence the p-multiple points found in Λ must be strictly p-multiple. We construct the set Λ by iteration, starting from a compact unit cube Cube. In the nth construction step we divide each cube retained in the previous step into its four nonoverlapping dyadic subcubes and retain only one of them, say the bottom left cube, except at the steps with number n = ⌈4 k p+1⌉, for k = p + 1, p + 2, . . ., when we retain all four subcubes. The number k(n) of times within the first n steps when we have retained all four cubes satisfies k(n) ≍(log n) p+1 log 4. Denoting by Sn the set of all dyadic cubes retained in the nth step, we define the compact set Λ = ∞ \ n=1 [ S∈Sn S . The calculation of the capacity of Λ will be based on the formula given in Lemma 9.20. Observe that, if f p(ε) = logp(1/ε) is the pth power of the 2-dimensional radial potential, then the associated function is h(p)(n) = f p(2−n) −f p(21−n) ≍np −(n −1)p ≍np−1 . Note that the number g(n) of cubes kept in the first n steps of the construction satisfies g(n) ≍np+1. By our construction P∞ n=0 np−1 g(n)−1 < ∞, but P∞ n=0 np g(n)−1 = ∞. For the measure µ distributing the unit mass equally among the retained cubes of the same sidelength (hence giving mass g(n)−1 to each retained cube), we have Ifp(µ) ≍ ∞ X n=0 h(p)(n) ³ X Q∈Dn µ(Q)2´ ≍ ∞ X n=0 np−1 g(n)−1 < ∞, and hence Capfp(Λ) > 0. For the converse statement, note that the equidistributing measure µ minimises the sum P Q∈Dn ν(Q)2 over all probability measures ν charging only the retained cubes. Hence, for any probability measure on Λ, Ifp+1(ν) ≍ ∞ X n=0 h(p)(n) ³ X Q∈Dn ν(Q)2´ ≥ ∞ X n=0 h(p)(n) g(n)−1 ≍ ∞ X n=0 np g(n)−1 = ∞, verifying that Capfp+1(Λ) = 0. This completes the proof. Knowing that planar Brownian motion has points of arbitrarily large finite multiplicity, it is an interesting question whether there are points of infinite multiplicity. Theorem 9.24. Let {B(t) : t ≥0} be a planar Brownian motion. Then, almost surely, there exists a point x ∈R2 such that the set {t ≥0 : B(t) = x} is uncountable. 259 The rest of this section is devoted to the proof of this nontrivial result and will not be used in the remainder of the book. It may be skipped on first reading. Let us first describe the rough strategy of the proof: We start with a double point, i.e. some s1 < s2 such that B(s1) = B(s2) and suppose s1 and s2 are not too close. Forgetting that such times are necessarily random times, we could argue that for a small ε1 > 0 the four independent Brownian motions {B(s1 −t) −B(s1) : t ∈(0, ε1)}, {B(s1 + t) −B(s1) : t ∈(0, ε1)}, {B(s2 −t) −B(s1) : t ∈(0, ε1)}, {B(s2 + t) −B(s1) : t ∈(0, ε1)}, which all start in the origin, have a point of intersection with probability one. Hence there existed a quadruple point, i.e. 
times t1 < t2 < t3 < t4 with B(t1) = B(t2) = B(t3) = B(t4). Again under the false assumption that these times are fixed, we could iterate this argument with a very small ε2. Inductively we would construct a sequence xn of points of multiplicity 2n converging to some x. Consider the set of times where Brownian motion visits this point. In the closure of each of the 2n disjoint time intervals of length εn, on which the 2n Brownian motions of the nth stage are defined, there is a time t with B(t) = x. Hence there must be at least as many such times as rays in a binary tree, i.e. uncountably many. While this idea is nice, it cannot be applied directly: we cannot choose the intersection times as stopping times, or even fixed times, for our Brownian motions. In the proof we therefore replace the intersection times by the hitting times of small balls, which are stopping times. However when moving from stage n to stage n + 1 we only have a 2n+1-fold intersection with positive probability, not with probability one. We therefore need to do this simultaneously for many balls and obtain a high probability of intersection by an additional law of large numbers effect. Throughout the proof we use the following notation. For any open or closed sets A1, A2, . . . and a Brownian motion B : [0, ∞) →R2 define stopping times τ(A1) := inf{t ≥0: B(t) ∈A1}, τ(A1, . . . , An) := inf{t ≥τ(A1, . . . , An−1) : B(t) ∈An}, for n ≥2, where, as usual, the infimum over the empty set is set to infinity. We say the Brownian motion upcrosses the shell B(x, 2r) \ B(x, r) twice before a stopping time T if, τ ¡ B(x, r), B(x, 2r)c, B(x, r), B(x, 2r)c¢ < T. We call the paths of Brownian motion between times τ(B(x, r)) and τ(B(x, r), B(x, 2r)c) and between times τ(B(x, r), B(x, 2r)c, B(x, r)) and τ(B(x, r), B(x, 2r)c, B(x, r), B(x, 2r)c) the up-crossing excursions, see Figure 4. From now on let T be the first exit time of Brownian motion from the unit ball. Recall the following fact from Theorem 8.23 and the discussion of the Martin kernels in Chapter 8. Lemma 9.25. Let 1 < m < n be two integers and B be a ball of radius 2−n with centre at distance at least 2−m and at most 21−m from the origin. Then, for sufficiently large m, we have m 2n ≤P © τ(B) < T ª ≤2m n . 260 B (2) (1) B B Figure 4. The path B : [0, ∞) →R2 upcrosses the shell twice; the upcrossing excursions are bold and marked B(1), B(2). Recall from Theorem 3.43 that the density of B(T) under Pz is given by the Poisson kernel, which is P(z, w) = 1 −|z|2 |z −w|2 for any z ∈B(0, 1) and w ∈∂B(0, 1). Lemma 9.26. Consider Brownian motion started at z ∈B(0, r) where r < 1, and stopped at time T when it exits the unit ball. Let τ ≤T be a stopping time, and let A ∈F(τ). Then we have (i) Pz ¡ A ¯ ¯ B(T) ¢ = Pz(A) Ez £ P(B(τ), B(T)) ¯ ¯ A ¤ P(z, B(T)) . (ii) If Pz ¡ {B(τ) ∈B(0, r)} ¯ ¯ A ¢ = 1, then ³1 −r 1 + r ´2 Pz(A) ≤Pz ¡ A ¯ ¯ B(T) ¢ ≤ ³1 + r 1 −r ´2 Pz(A) almost surely. Proof. (i) Let I ⊂∂B(0, 1) be a Borel set. Using the strong Markov property in the second step, we get Pz ¡ A ¯ ¯ {B(T) ∈I} ¢ Pz{B(T) ∈I} = Pz(A) Pz ¡ {B(T) ∈I} ¯ ¯ A ¢ = Pz(A) Ez £ PB(τ) © B(T) ∈I ª ¯ ¯ A ¤ . As a function of I, both sides of the equation define a finite measure with total mass Pz(A). Comparing the densities of the measures with respect to the surface measure on ∂B(0, 1) gives Pz ¡ A ¯ ¯ B(T) ¢ P(z, B(T)) = Pz(A) Ez £ P(B(τ), B(T)) ¯ ¯ A ¤ . 
(ii) The assumption of this part and (i) imply that the ratio Pz(A|B(T))/Pz(A) can be written as an average of ratios P(u, w)/P(z, w) where w = B(T) ∈∂B(0, 1) and u, z ∈B(0, r). The assertion follows by finding the minimum and maximum of P(u, w) as u ranges over B(0, r). The following lemma, concerning the common upcrossings of L Brownian excursions, will be the engine driving the proof of Theorem 9.24. 261 Lemma 9.27. Let n > 5 and let {x1, . . . , x4n−5} be points such that the balls B(xi, 21−n) are disjoint and contained in the shell {z : 1 4 ≤|z| ≤ 3 4}. Consider L independent Brownian upcrossing excursions W1, . . . , WL, started at prescribed points on ∂B(0, 1) and stopped when they reach ∂B(0, 2). Let S denote the number of centres xi, 1 ≤i ≤4n−5 such that the shell B(xi, 2−n+1) \ B(xi, 2−n) is upcrossed twice by each of W1, . . . , WL. Then there exists constants c, c∗> 0 such that (3.1) P ¡ {S > 4n(c/n)L} ¯ ¯ A ¢ ≥cL ∗ L! . Moreover, the same estimate (with a suitable constant c∗) is valid if we condition on the end points of the excursions W1, . . . , WL. Proof of Lemma 9.27. By Lemma 9.25, for any z ∈∂B(0, 1), the probability of Brownian motion starting at z hitting the ball B(xi, 2−n) before reaching ∂B(0, 2) is at least 1 2n, and the probability of the second upcrossing excursion of B(xi, 2−n+1) \ B(xi, 2−n), when starting at ∂B(xi, 21−n) is at least 1/2. Thus (3.2) ES ≥4n−5(4n)−L . We now estimate the second moment of S. Consider a pair of centres xi, xj such that 2−m ≤ |xi −xj| ≤21−m for some m < n −1. For each k ≤L, let Vk = Vk(xi, xj) denote the event that the balls B(xi, 2−n) and B(xj, 2−n) are both visited by Wk. Given that B(xi, 2−n) is reached first, the conditional probability that Wk will also visit B(xj, 2−n) is at most 2m n . We conclude that P(Vk) ≤4m n2 whence P ³ L \ k=1 Vk ´ ≤ ³4m n2 ´L . For each m < n −1 and i ≤4n−5, the number of centres xj such that 2−m ≤|xi −xj| ≤21−m is at most 4n−m. We deduce that there exists C1 > 0 such that (3.3) ES2 ≤CL 1 42n n2L n X m=1 mL4−m ≤(2C1)L42nL! n2L , where the last inequality follows, e.g., from taking x = 1/4 in the binomial identity ∞ X m=0 µm + L L ¶ xm = (1 −x)−L−1 . Now (3.2), (3.3) and the Paley-Zygmund inequality yield (3.1). The final statement of the lemma follows from Lemma 9.26. Proof of Theorem 9.24. Fix an increasing sequence {ni : i ≥1} to be chosen later, and let Nℓ= Pℓ i=1 ni. Denote qi = 4ni −5 and Qi = 4Ni−5i. We begin by constructing a nested sequence of centres which we associate with a forest, i.e. a collection of trees, in the following manner. The first level of the forest consists of Q1 centres, {x(1) 1 , . . . , x(1) Q1}, chosen such that the balls {B(x(1) k , 2−N1+1) : 1 ≤k ≤Q1} are disjoint and contained in the annulus {z : 1 4 ≤|z| ≤3 4}. 262 Continue this construction recursively. For ℓ> 1 suppose that level ℓ−1 of the forest has been constructed. Level ℓconsists of Qℓvertices {x(ℓ) 1 , . . . , x(ℓ) Qℓ}. Each vertex x(ℓ−1) i , 1 ≤i ≤Qℓ−1, at level ℓ−1 has qℓchildren {x(ℓ) j : (i −1)qℓ< j ≤iqℓ} at level ℓ; the balls of radius 2−Nℓ+1 around these children are disjoint and contained in the annulus {z : 1 42−Nℓ−1 ≤|z −x(ℓ−1) i | ≤3 42−Nℓ−1}. Recall that T = inf{t > 0 : |B(t)| = 1}. We say that a level one vertex x(1) k survived if Brownian motion upcrosses the shell B(x(1) k , 2−N1+1) \ B(x(1) k , 2−N1) twice before T. 
A vertex at the second level x(2) k is said to have survived if its parent vertex survived, and in each upcrossing excursion of its parent, Brownian motion upcrosses the shell B(x(2) k , 2−N2+1)\B(x(2) k , 2−N2) twice. Recursively, we say a vertex x(ℓ) k , at level ℓof the forest, survived if its parent vertex survived, and in each of the 2ℓ−1 upcrossing excursions of its parent, Brownian motion upcrosses the shell ball(x(ℓ) k , 2−Nℓ+1)\B(x(ℓ) k , 2−Nℓ) twice. Also, for any ℓ≥1 , let Sℓdenote the number of vertices at level ℓof the forest that survived. Using the notation of Lemma 9.27, denote Γℓ= 4nℓ(c/nℓ)L and pℓ= cL ∗ L! . where L = L(ℓ) = 2ℓ−1. Lemma 9.27 with n = n1 states that (3.4) P{S1 > Γ1} ≥p1 = c∗ For ℓ> 1, the same lemma, and independence of excursions in disjoint shells given their endpoints, yield (3.5) P ¡ {Sℓ+1 ≤Γℓ+1} ¯ ¯ {Sℓ> Γℓ} ¢ ≤(1 −pℓ+1)Γℓ≤exp(−pℓ+1Γℓ) . By picking nℓlarge enough, we can ensure that pℓ+1Γℓ> ℓ, whence the right-hand side of (3.5) is summable in ℓ. Consequently (3.6) α = P ³ ∞ \ ℓ=1 {Sℓ> Γℓ} ´ = P{S1 > Γ1} ∞ Y ℓ=1 P ¡ {Sℓ+1 > Γℓ+1} ¯ ¯ {Sℓ> Γℓ} ¢ > 0 . Thus with probability at least α, there is a nested sequence of closed balls B(x(ℓ) k(ℓ), 2−Nℓ) for ℓ= 1, 2, . . . such that all their centres survive. The intersection of such a nested sequence yields a point visited by Brownian motion uncountably many times before it exits the unit disk. Let Hr denote the event that Brownian motion, killed on exiting B(0, r), has a point of un-countable multiplicity. As explained above, (3.6) implies that P(H1) ≥α > 0. By Brownian scaling, P(Hr) does not depend on r, whence P ³ ∞ \ n=1 H1/n ´ ≥α . The Blumenthal zero-one law implies that this intersection has probability 1, so there are points of uncountable multiplicity almost surely. 263 4. Kaufman’s dimension doubling theorem In Theorem 4.37 we have seen that d-dimensional Brownian motion maps any set of dimension α almost surely into a set of dimension 2α. Surprisingly, by a famous result of Kaufman, the dimension doubling property holds almost surely simultaneously for all sets. Theorem 9.28 (Kaufman 1969). Let {B(t) : t ≥0} be Brownian motion in dimension d ≥2. Almost surely, for any A ⊂[0, ∞), we have dim B(A) = 2 dim A. The power of this result lies in the fact that the dimension doubling formula can now be applied to arbitrary random sets. As a first application we ask, how big the sets T(x) = © t ≥0 : B(t) = x ª of times mapped by d-dimensional Brownian motion onto the same point x can possibly be. We have seen so far in this chapter and Theorem 6.38 that, almost surely, • in dimension d ≥4 all sets T(x) consist of at most one point, • in dimension d = 3 all sets T(x) consist of at most two points, • in dimension d = 2 at least one of the sets T(x) is uncountable, • in dimension d = 1 all sets T(x) have at least Hausdorffdimension 1 2. We now use Kaufman’s theorem to determine the Hausdorffdimension of the sets T(x) in the case of planar and linear Brownian motion. Corollary 9.29. Suppose {B(t): t ≥0} is a planar Brownian motion. Then, almost surely, for all x ∈R2, we have dim T(x) = 0. Proof. By Kaufman’s theorem we have dim T(x) = 1 2 dim{x} = 0. Corollary 9.30. Suppose {B(t): t ≥0} is a linear Brownian motion. Then, almost surely, for all x ∈R, we have dim T(x) = 1 2. Proof. The lower bound was shown in Theorem 6.38. For the upper bound let {W(t) : t ≥0} be a Brownian motion independent of {B(t) : t ≥0}. 
Applying Kaufman’s theorem for the planar Brownian motion given by e B(t) = (B(t), W(t)) we get, almost surely, dim T(x) = dim e B−1({x} × R) ≤1 2 dim({x} × R) = 1 2 . We now prove Kaufman’s theorem, first in the case d ≥3. Note first that dim B(A) ≤2 dim A holds in all dimensions and for all sets A ⊂[0, ∞) if {B(t) : t ≥0} is α-H¨ older continuous for any α < 1 2. By Corollary 1.20 this holds almost surely. Hence only the lower bound dim B(A) ≥2 dim A requires proof. The crucial idea here is that one uses a standardised covering of B(A) by dyadic cubes and ensures that, simultaneously for all possible covering cubes the preimages allow an efficient covering. An upper bound for dim A follows by selecting from the coverings of all preimages. 264 Lemma 9.31. Consider a cube Q ⊂Rd centred at a point x and having diameter 2r. Let {B(t) : t ≥0} be d-dimensional Brownian motion, with d ≥3. Define recursively τ Q 1 = inf{t ≥0 : B(t) ∈Q} , τ Q k+1 = inf{t ≥τ Q k + r2 : B(t) ∈Q}, for k ≥1, with the usual convention that inf ∅= ∞. Then there exists 0 < θ < 1 depending only on the dimension d, such that Pz{τ Q n+1 < ∞} ≤θn for all z ∈R2 and n ∈N. Proof. It is sufficient to show that for some θ as above, Pz © τ Q k+1 = ∞ ¯ ¯ τ Q k < ∞ ª > 1 −θ. But the quantity on the left can be bounded from below by Pz © τ Q k+1 = ∞ ¯ ¯ |B(τ Q k + r2) −x| > 2r, τ Q k < ∞ ª Pz © |B(τ Q k + r2) −x| > 2r ¯ ¯ τ Q k < ∞ ª The second factor is clearly positive, by the strong Markov property, and the first is also positive since Brownian motion is transient in d ≥3. Both probabilities are invariant under changing the scaling factor r. Lemma 9.32. Let Cm denote the set of dyadic cubes of side length 2−m inside [−1 2, 1 2]d. Almost surely there exists a random variable C = C(ω) so that for all m and for all cubes Q ∈Cm we have τ Q ⌈mC+1⌉= ∞. Proof. From Lemma 9.31 we get that ∞ X m=1 X Q∈Cm P © τ Q ⌈cm+1⌉< ∞ ª ≤ ∞ X m=1 2dmθcm. Now choose c so large that 2dθc < 1. Then, by the Borel-Cantelli lemma, for all but finitely many m we have τ Q ⌈cm+1⌉+1 = ∞for all Q ∈Cm. Finally, we can choose a random C(ω) > c to handle the finitely many exceptional cubes. Proof of Theorem 9.28 for d ≥3. As mentioned before we can focus on the ‘≥’ direction. We fix L and show that, almost surely, for all subsets S of [−L, L]d we have (4.1) dim B−1(S) ≤1 2 dim S. Applying this to S = B(A) ∩[−L, L]d successively for a countable unbounded set of L we get the desired conclusion. By scaling, it is sufficient to prove (4.1) for L = 1/2. The idea now is to verify (4.1) for all paths satisfying Lemma 9.32 using completely deterministic reasoning. As this set of paths has full measure, this verifies the statement. Hence fix a path {B(t) : t ≥0} satisfying Lemma 9.32 for a constant C > 0. If β > dim S and ε > 0 there exists a covering of S by binary cubes {Qj : j ∈N} ⊂S∞ m=1 Cm such that P |Qj|β < ε. If Nm denotes the number of cubes from Cm in such a covering, then ∞ X m=1 Nm 2−mβ < ε. 265 Consider the inverse image of these cubes under {B(t) : t ≥0}. Since we chose this path so that Corollary 9.32 is satisfied, this yields a covering of B−1(S), which for each m ≥1 uses at most CmNm intervals of length r2 = d2−2m. For γ > β we can bound the γ/2-dimensional Hausdorffcontent of B−1(S) from above by ∞ X m=1 Cm Nm(d2−2m)γ/2 = C dγ/2 ∞ X m=1 m Nm 2−mγ. This can be made small by choosing a suitable ε > 0. Thus B−1(S) has Hausdorffdimension at most γ/2 for all γ > β > dim S, and therefore dim B−1(S) ≤dim S/2. 
In d = 2 we cannot rely on transience of Brownian motion. To get around this problem, we can look at the Brownian path up to a stopping time. A convenient choice of stopping time for this purpose is τ ∗ R = min © t : |B(t)| = R ª . For the two dimensional version of Kaufman’s theorem it is sufficient to show that, almost surely, dim B(A) ≥2 dim(A ∩[0, τ ∗ R]) for all A ⊂[0, ∞). Lemma 9.31 has to be changed accordingly. Lemma 9.33. Consider a cube Q ⊂R2 centred at a point x and having diameter 2r, and assume that the cube Q is inside the ball of radius R about 0. Let {B(t) : t ≥0} be planar Brownian motion. Define τ Q k as in (4.1), and assume that the cube Q is inside the ball of radius R about the origin. Then there exists c = c(R) > 0 such that, with 2−m−1 < r < 2−m, for any z ∈R2, (4.2) Pz © τ Q k < τ ∗ R ª ≤ ³ 1 −c m ´k ≤e−ck/m. Proof. It suffices to bound Pz{τk+1 ≥τ ∗ R | τk < τ ∗ R} from below by Pz © τ Q k+1 ≥τ ∗ R ¯ ¯ |B(τ Q k + r2) −x| > 2r, τ Q k < τ ∗ R ª Pz © |B(τ Q k + r2) −x| > 2r ¯ ¯ τ Q k < τ ∗ R ª . The second factor does not depend on r and R, and it can be bounded from below by a constant. The first factor is bounded from below by the probability that planar Brownian motion started at any point in ∂B(0, 2r) hits ∂B(0, 2R) before ∂B(0, r). Using Theorem 3.17 this probability is given by log 2r −log r log 2R −log r ≥ 1 log2(R + 1 + m). This is at least c/m for some c > 0 which depends on R only. The bound (4.2) on P{τ Q k < ∞} in two dimensions is worse by a linear factor than the bound in higher dimensions. This, however, does not make a significant difference in the proof of the two dimensional version of Theorem 9.28, which can now be completed in the same way. There is also a version of Kaufman’s theorem for Brownian motion in dimension one. Theorem 9.34. Suppose {B(t) : t ≥0} is a linear Brownian motion. Then, almost surely, for all nonempty closed sets S ⊂R, we have dim B−1(S) = 1 2 + 1 2 dim S. 266 Remark 9.35. Note that here it is essential to run Brownian motion on an unbounded time interval. For example, for the point x = max0≤t≤1 B(t) the set {t ∈[0, 1] : B(t) = x} is a singleton almost surely. The restriction to closed sets comes from the Frostman lemma, which we have proved for closed sets only, and can be relaxed accordingly. ⋄ Proof. For the proof of the upper bound let {W(t) : t ≥0} be a Brownian motion independent of {B(t) : t ≥0}. Applying Kaufman’s theorem for the planar Brownian motion given by e B(t) = (B(t), W(t)) we get almost surely, for all S ⊂R, dim B−1(S) = dim e B−1(S × R) ≤1 2 dim(S × R) = 1 2 + 1 2 dim S, where we have used the straightforward fact that dim(S × R) = 1 + dim S. The lower bound requires a more complicated argument, based on Frostman’s lemma. For this purpose we may suppose that S ⊂(−M, M) is closed and dim S > α. Then there exists a measure µ supported by S such that µ(B(x, r)) ≤rα for all x ∈S, 0 < r < 1. Let ℓa be the measure with cumulative distribution function given by the local time at level a. Let ν be the measure on B−1(S) given by ν(A) = Z µ(da) ℓa(A), for A ⊂[0, ∞) Borel. Then, by Theorem 6.18, one can find a constant C > 0 such that ℓa(B(x, r)) = La(x + r) −La(x −r) ≤Cr 1 2 −ε for all a ∈[−M, M] and 0 < r < 1. By H¨ older continuity of Brownian motion there exists, for given ε > 0, a constant c > 0 such that, for every x ∈[0, 1], |B(x + s) −B(x)| ≤cr 1 2 −ε for all s ∈[−r, r]. 
From this we get the estimate ν(B(x, r)) = Z µ(da) £ La(x + r) −La(x −r) ¤ ≤ Z B(x)+cr 1 2 −ε B(x)−cr 1 2 −ε µ(da) £ La(x + r) −La(x −r) ¤ ≤cαr α 2 −εαCr 1 2 −ε for all x ∈S, 0 < r < 1. Hence, by the mass distribution principle, we get the lower bound α/2 + 1/2 −ε(1 + α) for the dimension and the result follows when ε ↓0 and α ↑dim S. As briefly remarked in the discussion following Theorem 4.37, Brownian motion is also ‘capacity-doubling’. This fact holds for a very general class of kernels, we give an elegant proof of this fact here. Theorem 9.36. Let {B(t) : t ∈[0, 1]} be d-dimensional Brownian motion and A ⊂[0, 1] a closed set. Suppose f is decreasing and there is a constant C > 0 with (4.3) Z 1 0 f(r2x) f(x) rd−1 dr ≤C for all x ∈(0, 1) , 267 and let φ(x) = x2. Then, almost surely, Capf ¡ A ¢ > 0 if and only if Capf◦φ ¡ B(A) ¢ > 0 . Remark 9.37. Condition (4.3) is only used in the ‘only if’ part of the statement. Note that if f(x) = xα is a power law, then (4.3) holds if and only if 2α < d. ⋄ Proof. We start with the ‘only if’ direction, which is easier. Suppose Capf(A) > 0. This implies that there is a mass distribution µ on A such that the f-energy of µ is finite. Then µ ◦B−1 is a mass distribution on B(A) and we will show that it has finite f ◦φ-energy. Indeed, If◦φ ¡ µ ◦B−1¢ = ZZ f ◦φ ¡ |x −y| ¢ µ ◦B−1(dx) µ ◦B−1(dy) = ZZ f ¡ |B(s) −B(t)|2¢ µ(ds) µ(dt) . Hence, EIf◦φ ¡ µ ◦B−1¢ = ZZ Ef ¡ |X|2 |s −t| ¢ µ(ds) µ(dt) , where X is a d-dimensional standard normal random variable. Using polar coordinates and the monotonicity of f we get, for a constant κ(d) depending only on the dimension, E £ f ¡ |X|2 |s −t| ¢¤ = κ(d) Z ∞ 0 f(r2 |s −t|) e−r2/2 rd−1 dr ≤f(|s −t|) κ(d) ³ Z 1 0 f(r2 |s −t|) f(|s −t|) r1−d dr + Z ∞ 1 e−r2/2 rd−1 dr ´ . By (4.3) the bracket on the right hand side is bounded by a constant independent of |s −t|, and hence E[If◦φ(µ ◦B−1)] < ∞, which in particular implies If◦φ(µ) < ∞almost surely. The difficulty in the ‘if’ direction is that a measure on B(A) with finite f◦φ-energy cannot easily be transported backwards onto A. To circumvent this problem we use the characterisation of capacity in terms of polarity with respect to percolation limit sets, recall Theorem 9.18. Fix a unit cube Cube such that Capf◦φ(B(A)∩Cube) > 0 with positive probability, and let Γ be a percolation limit set with retention probabilities associated to the decreasing function f(x2/4) as in Theorem 9.18, which is independent of Brownian motion. Then, by Theorem 9.18, we have B(A) ∩Γ ̸= ∅with positive probability. Define a random variable T = inf © t ∈A : B(t) ∈Γ ª , which is finite with positive probability. Hence the measure µ given by µ(B) = P © B(T) ∈B, T < ∞ ª is a mass distribution on A. We shall show that it has finite f-energy, which completes the proof. Again we use the polarity criterion of Theorem 9.18 to do this. Let Sn = S S∈Sn S be the union of all cubes retained in the construction up to step n. Then, by looking at the retention probability of any fixed point in Cube, we have, for any s ∈A, (4.4) P © B(s) ∈Sn ª ≤p1 · · · pn ≤C 1 f ◦φ(2−n−1) . 268 Conversely, by a first entrance decomposition, P © B(s) ∈Sn ª ≥P © B(s) ∈Sn, B(T) ∈Sn, T < ∞ ª = Z s 0 µ(dt) P © B(s) ∈Sn ¯ ¯ B(t) ∈Sn ª Given B(t) ∈Sn and √s −t ≤2−n+k for some k ∈{0, . . . , n}, the probability that B(s) and B(t) are contained in the same dyadic cube Q ∈Cn−k is bounded from below by a constant. 
Given this event, we know that Q is retained in the percolation (otherwise we could not have B(t) ∈Sn) and the probability that the cube in Cn, that contains B(s), is retained in the percolation is at least pn−k+1 · · · pn. Therefore Z s 0 µ(dt) P © B(s) ∈Sn ¯ ¯ B(t) ∈Sn ª ≥ n X k=0 µ ¡ [s −2−2n+2k, s −2−2n+2k−2) ¢ pn−k+1 · · · pn ≥c n X k=0 µ ¡ [s −2−2n+2k, s −2−2n+2k−2) ¢ f ◦φ(2−n+k−1) f ◦φ(2−n−1) ≥c 1 f ◦φ(2−n−1) Z s−2−2n−2 0 µ(dt) f ¡ s −t ¢ . Finiteness of the f-energy follows by comparing this with (4.4), cancelling the factor 1/f ◦φ(2−n−1), integrating over µ(ds), and letting n →∞. This completes the proof. 269 Exercises Exercise 9.1. (a) Suppose that {B1(t): t ≥0}, {B2(t): t ≥0} are independent standard Brownian motions in R3. Then, almost surely, B1[0, t] ∩B2[0, t] ̸= ∅for any t > 0. (b) Suppose that {B1(t): t ≥0}, . . . , {Bp(t): t ≥0} are p independent standard Brownian motions in Rd, and d > d(p −2). Then, almost surely, dim ¡ B1[0, t1] ∩· · · ∩Bp[0, tp] ¢ = d −p(d −2) for any t1, . . . , tp > 0. Exercise 9.2 ( ∗). For a d-dimensional Brownian motion {B(t): t ≥0} we denote by S(p) = © x ∈Rd : ∃0 < t1 < · · · < tp < 1 with x = B(t1) = · · · = B(tp) ª the set of p-fold multiple points. Show that, for d > p (d −2), (a) dim S(p) = d −p (d −2), almost surely. (b) for any closed set Λ, we have P © S(p) ∩Λ ̸= ∅ ª > 0 if and only if Capfp(Λ) > 0 , where the decreasing function f is the radial potential. Exercise 9.3. (a) Let A be a set of rooted trees. We say that A is inherited if every finite tree is in A, and if T ∈A and v ∈V is a vertex of the tree then the tree T(v), consisting of all successors of v, is in A. Prove the Galton-Watson 0–1 law: For a Galton-Watson tree, conditional on survival, every inherited set has probability zero or one. (b) Show that for the percolation limit sets Γ[γ] ⊂Rd with 0 < γ < d we have P © dim Γ[γ] = d −γ | Γ[γ] ̸= ∅ ª = 1. Exercise 9.4 ( ∗). Consider a standard Brownian motion {B(t): t ≥0} and let A1, A2 ⊂[0, ∞). (a) Show that if dim(A1 × A2) < 1/2 then P{B(A1) intersects B(A2)} = 0. 270 (b) Derive the same conclusion under the weaker assumption that A1 × A2 has vanishing 1/2-dimensional Hausdorffmeasure. (c) Show that if Cap1/2(A1 × A2) > 0, then P{B(A1) intersects B(A2)} > 0. (d) Find a set A ⊂[0, ∞) such that the probability that {B(t): t ≥0} is one-to-one on A is strictly between zero and one. Exercise 9.5 ( ∗). Let {B(t): 0 ≤t ≤1} be a planar Brownian motion. For every a ∈R define the sets S(a) = {y ∈R: (a, y) ∈B[0, t]}, the vertical slices of the path. Show that, almost surely, dim S(a) = 1, for every a ∈(min{x: (x, y) ∈B[0, t]}, max{x: (x, y) ∈B[0, t]}). Exercise 9.6. Let {B(t) : t ≥0} be Brownian motion in dimension d ≥2. Show that, almost surely, for any A ⊂[0, ∞), we have dimM B(A) = 2 dimM A and dimM B(A) = 2 dimM A . 271 Notes and Comments The question whether there exist p-multiple points of a d-dimensional Brownian motion was solved in various stages in the early 1950s. First, L´ evy showed in [Le40] that almost all paths of a planar Brownian motion have double points, and Kakutani [Ka44a] showed that if n ≥5 almost no paths have double points. The cases of d = 3, 4 where added by Dvoretzky, Erd˝ os and Kakutani in [DEK50] and the same authors showed in [DEK54] that planar Brownian motion has points of arbitrary multiplicity. Finally, Dvoretzky, Erd˝ os, Kakutani and Taylor showed in [DEKT57] that there are no triple points in d = 3. 
Clearly the existence of p-fold multiple points is essentially equivalent to the problem whether p independent Brownian motions have a common intersection. The problem of finding the Hausdorffdimension of the set of p-fold multiple points in the plane, and of double points in R3, was still open when Itˆ o and McKean wrote their influential book on the sample paths of diffusions, see [IM74, p. 261] in 1964, but was solved soon after by Taylor [Ta66] and Fristedt [Fr67]. Perkins and Taylor [PT88] provide fine results when Brownian paths in higher dimensions ‘come close’ to self-intersecting. The method of stochastic codimension, which we use to find these dimensions, is due to Taylor [Ta66], who used the range of stable processes as ‘test sets’. The restriction of the stable indices to the range α ∈(0, 2] leads to complications, which can be overcome by a projection method of Fristedt [Fr67] or by using multiparameter processes [Kh02]. The use of percolation limit sets as test sets is much more recent and due to Khoshnevisan, Peres and Xiao [KPX00], though similar ideas are used in the context of trees ar least since the pioneering work of Lyons [Ly90]. The latter paper is also the essential source for our proof of Hawkes’ theorem. Some very elegant proofs of these classical facts were given later: Rosen [Ro83] provides a local time approach, and Kahane [Ka86] proves a general formula for the intersection of independent random sets satisfying suitable conditions. The bottom line of Kahane’s approach is that the formula ‘codimension of the intersection is equal to the sum of codimensions of the intersected sets’ which is well-known from linear subspaces in general position can be extended to the Hausdorffdimension of a large class of random sets, which includes the paths of Brownian motion, see also [Fa97a, Ma95]. The intersection equivalence approach we describe in Section 2 is taken from [Pe96a], [Pe96b]. The proof of Lyons’ theorem we give is taken from [BPP95]. See [Ly92, Theorem 2.1] for the original proof. Hendricks and Taylor conjectured in 1976 a characterisation of the polar sets for the multiple points of a Brownian motion or a more general Markov process, which included the state-ment of Theorem 9.21. Sufficiency of the capacity criterion in Theorem 9.21 was proved by Evans [Ev87a, Ev87b] and independently by Tongring [To88], see also Le Gall, Rosen and Shieh [LRS89]. The full equivalence was later proved in a much more general setting by Fitzsimmons and Salisbury [FS89]. A quantitative treatment of the question, which sets con-tain double points of Brownian motion is given in [PP07]. 272 Points of multiplicity strictly n where identified by Adelman and Dvoretzky [AD85] and the result is also an immediate consequence of the exact Hausdorffgauge function identified by Le Gall [LG86]. The existence of points of infinite multiplicity in the planar case was first stated in [DEK58] though their proof seems to have a gap. Le Gall [LG87] proves a stronger result: Two sets A, B ⊂R are said to be of the same order type if there exists an increasing homeomorphism φ of R such that φ(A) = B. Le Gall shows that, for any totally disconnected, compact A ⊂R, almost surely there exists a point x ∈R2 such that the set {t ≥0 : B(t) = x} has the same order type as A. In particular, there exist points of countably infinite and uncountable multiplicity. Le Gall’s proof is based on the properties of natural measures on the intersection of Brownian paths. 
Our proof avoids this and seems to be new, though it uses arguments of Le Gall’s proof as well as some techniques from [KM05]. Substantial generalisations of Exercise 9.4 can be found in papers by Khoshnevisan [Kh99] and Khoshnevisan and Xiao [KX05]. For example, in [Kh99, Theorem 8.2] it is shown that the condition in part (c) is an equivalence. Kaufman proved his dimension doubling theorem in [Ka69]. The version for Brownian motion in dimension one is due to Serlet [Se95]. The capacity-doubling result in the given generality is new, but Khoshnevisan and Xiao [KX05, Question 1.1, Theorem 7.1] prove the special case when f is a power law using a different method. Their argument is based on the investigation of additive L´ evy processes and works for a class of processes much more general than Brownian motion. Theorem 9.36 does not hold uniformly for all sets A. Examples can be constructed along the lines in [PT87]. In this book we do not construct a measure on the intersection of p Brownian paths. However this is possible and yields the intersection local time first studied by Geman, Horowitz and Rosen [GHR84], see also [Ro83]. This quantity plays a key role in the analysis of Brownian paths and [LG91] gives a very accessible account of the state of research in 1991, which is still worth reading. Recent research deals with fine Hausdorffdimension properties of the intersections, see for example [KM02, KM05]. 273 CHAPTER 10 Exceptional sets for Brownian motion The techniques developed in this book so far give a fairly satisfactory picture of the behaviour of a Brownian motion at a typical time, like a fixed time or a stopping time. In this chapter we explore exceptional times, for example times where the path moves slower or faster than in the law of the iterated logarithm, or does not wind as in Spitzer’s law. Again Hausdorffdimension is the right tool to describe just how exceptional an exceptional behaviour is, but we shall see that another notion of dimension, the packing dimension, can provide additional insight. 1. The fast times of Brownian motion In a famous paper from 1974, Orey and Taylor raise the question, how often, on a Brownian path, the law of the iterated logarithm fails. To understand this, recall that, by Corollary 5.3 and the Markov property, for a linear Brownian motion {B(t): t ≥0} and for every t ∈[0, 1], almost surely, lim sup h↓0 |B(t + h) −B(t)| p 2h log log(1/h) = 1. This contrasts sharply with the following result (note the absence of the iterated logarithm!). Theorem 10.1. Almost surely, we have max 0≤t≤1 lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) = 1. Remark 10.2. At the time t ∈[0, 1] where the maximum in Theorem 10.1 is attained, the law of the iterated logarithm fails and it is therefore an exceptional time. ⋄ Proof. The upper bound follows from L´ evy’s modulus of continuity, Theorem 1.14, as sup 0≤t<1 lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≤lim sup h↓0 sup 0≤t≤1−h |B(t + h) −B(t)| p 2h log(1/h) = 1. Readers who have skipped the proof of Theorem 1.14 given in Chapter 1 will be able to infer the upper bound directly from Remark 10.5 below. It remains to show that there exists a time t ∈[0, 1] such that lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≥1. 275 Recall from Proposition 1.13 that, almost surely, for every constant c < √ 2 and every ε, δ > 0 there exists t ∈[0, δ] and 0 < h < ε with ¯ ¯B(t + h) −B(t) ¯ ¯ > c p h log(1/h). 
Using the Markov property this implies that, for c < √ 2, the sets M(c, ε) = © t ∈[0, 1] : there is h ∈(0, ε) such that ¯ ¯B(t + h) −B(t) ¯ ¯ > c p h log(1/h) ª are almost surely dense in [0, 1]. By continuity of Brownian motion they are open, and clearly M(c, ε) ⊂M(d, δ) whenever c > d and ε < δ. Hence, by Baire’s (category) theorem, the intersection \ c< √ 2 ε>0 M(c, ε) = n t ∈[0, 1] : lim sup h↓0 ¯ ¯B(t + h) −B(t) ¯ ¯ p 2h log(1/h) ≥1 o is dense and hence nonempty almost surely. To explore how often we come close to the exceptional behaviour described in Theorem 10.1 we introduce a spectrum of exceptional points. Given a > 0 we call a time t ∈[0, 1] an a-fast time if lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≥a , and t ∈[0, 1] is a fast time if it is a-fast for some a > 0. By Theorem 10.1 fast times exist, in fact the proof even shows that the set of fast times is dense in [0, 1] and hence is infinite. Conversely it is immediate from the law of the iterated logarithm that the set has Lebesgue measure zero, recall Remark 1.28. The appropriate notion to measure the quantity of a-fast times is, again, Hausdorffdimension. Theorem 10.3 (Orey and Taylor 1974). Suppose {B(t): t ≥0} is a linear Brownian motion. Then, for every a ∈[0, 1], we have almost surely, dim n t ∈[0, 1] : lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≥a o = 1 −a2 . The rest of this section is devoted to the proof of this result. We start with a proof of the upper bound, which also shows that there are almost surely no a-fast points for a > 1. So fix an arbitrary a > 0. Let ε > 0 and η > 1, having in mind that we later let η ↓1 and ε ↓0. The basic idea is to cover the interval [0, 1) by a collection of intervals of the form [jη−k, (j + 1)η−k) with j = 0, . . . , ⌈ηk −1⌉and k ≥1. Any such interval of length h := η−k is included in the covering if, for h′ := kη−k, |B(jh + h′) −B(jh)| > a(1 −4ε) p 2h′ log(1/h′). Let Ik = Ik(η, ε) be the collection of intervals of length η−k chosen in this procedure. 276 Lemma 10.4. Almost surely, for every ε > 0 and δ > 0, there is an η > 1 and m ∈N such that the collection I = I(ε, δ) = © I ∈Ik : k ≥m ª is a covering of the set of a-fast points consisting of intervals of diameter no bigger than δ. Proof. We first note that by Theorem 1.12 there exists a constant C > 0 such that, almost surely, there exists ρ > 0 such that, for all s, t ∈[0, 2] with |s −t| ≤ρ, (1.1) ¯ ¯B(s) −B(t) ¯ ¯ ≤C p |s −t| log(1/|s −t|). Choose η > 1 such that √η −1 ≤aε/C. Let M be the minimal integer with Mη−M ≤ρ and m ≥M such that mη−m < δ and kη−k < ℓη−ℓfor all k > ℓ≥m. Now suppose that t ∈[0, 1] is an a-fast point. By definition there exists an 0 < u < mη−m such that |B(t + u) −B(t)| ≥a(1 −ε) p 2u log(1/u). We pick the unique k ≥m such that kη−k < u ≤(k −1)η−k+1, and let h′ = kη−k. By (1.1), we have |B(t + h′) −B(t)| ≥|B(t + u) −B(t)| −|B(t + u) −B(t + h′)| ≥a(1 −ε) p 2u log(1/u) −C p (u −h′) log(1/(u −h′)). As 0 ≤u −h′ ≤(η −1)kη−k, and by our choice of η, the subtracted term is smaller than aε p 2h′ log(1/h′) for sufficiently large m. Hence there exists k ≥m such that |B(t + h′) −B(t)| ≥a(1 −2ε) p 2h′ log(1/h′). Now let j be such that t ∈[jη−k, (j + 1)η−k). As before let h = η−k. Then, by the triangle inequality and using (1.1) twice, we have |B(jh + h′) −B(jh)| ≥|B(t + h′) −B(t)| −|B(t) −B(jh)| −|B(jh + h′) −B(t + h′)| ≥a(1 −2ε) p 2h′ log(1/h′) −2C p h log(1/h) > a(1 −4ε) p 2h′ log(1/h′), using in the last step that, by choosing m sufficiently large, the subtracted term can be made smaller than 2aε p 2h′ log(1/h′). 
Proof of the upper bound in Theorem 10.3. This involves only a first moment calculation. All there is to show is that, for any γ > 1 −a2 there exists ε > 0 such that the random variable P I∈I(ε,δ) |I|γ is finite, almost surely. For this it suffices to verify that its expectation is finite. Note that E h X I∈I(ε,δ) |I|γi = ∞ X k=m ⌈ηk−1⌉ X j=0 η−kγ P ©|B(jη−k+kη−k)−B(jη−k)| √ 2 kη−k log(ηk/k) > a(1 −4ε) ª . So it all boils down to an estimate of a single probability, which is very simple as it involves just one normal random variable, namely B(jη−k + kη−k) −B(jη−k). More precisely, for X 277 standard normal, (1.2) P n|B(jη−k + kη−k) −B(jη−k)| p 2kη−k log(ηk/k) > a(1 −4ε) o = P © |X| > a(1 −4ε) p 2 log(ηk/k) ª ≤ 1 a(1−4ε)√ log(ηk/k)π exp © −a2 (1 −4ε)2 log(ηk/k) ª ≤η−ka2 (1−4ε)3, for all sufficiently large k and all 0 ≤j < 2k, using the estimate for normal random variables of Lemma II.3.1 in the penultimate step. Given γ > 1 −a2 we can finally find ε > 0 such that γ + a2 (1 −4ε)3 > 1, so that ∞ X k=m ηk−1 X j=0 η−kγ P n|B(jη−k + kη−k) −B(jη−k)| p 2kη−k log(ηk/k) > a(1 −4ε) o ≤ m X k=1 ηkη−kγ η−ka2 (1−4ε)3 < ∞, completing the proof of the upper bound in Theorem 10.3. Remark 10.5. If a > 1 one can choose γ < 0 in the previous proof, which shows that there are no a-fast times as the empty collection is suitable to cover the set of a-fast times. ⋄ For the lower bound we have to work harder. We divide, for any positive integer k, the interval [0, 1] into nonoverlapping dyadic subintervals [j2−k, (j + 1)2−k] for j = 0, . . . , 2k −1. As before, we denote this collection of intervals by Ck and by C the union over all collections Ck for k ≥1. To each interval I ∈C we associate a {0, 1}-valued random variable Z(I) and then define sets A(k) := [ I∈Cn Z(I)=1 I and A := ∞ \ n=1 ∞ [ k=n A(k) . Because 1A = lim sup 1A(k) the set A is often called the limsup fractal associated with the family (Z(I): I ∈C). We shall see below that the set of a-fast points contains a large limsup fractal and derive the lower bound from the following general result on limsup fractals. Proposition 10.6. Suppose that (Z(I): I ∈C) is a collection of random variables with values in {0, 1} such that pk := P{Z(I) = 1} is the same for all I ∈Ck. For I ∈Cm, with m < n, define Mn(I) := X J∈Cn J⊂I Z(J) . Let ζ(n) ≥1 and γ > 0 be such that (1) Var(Mn(I)) ≤ζ(n) E[Mn(I)] = ζ(n) pn2n−m, (2) lim n↑∞2n(γ−1) ζ(n) p−1 n = 0, . then dim A ≥γ almost surely for the limsup fractal A associated with (Z(I): I ∈C). 278 The idea of the proof of Proposition 10.6 is to construct a probability measure µ on A and then use the energy method. To this end, we choose an increasing sequence (ℓk : k ∈N) such that Mℓk(D) > 0 for all D ∈Cℓk−1. We then define a (random) probability measure µ in the following manner: Assign mass one to any interval I ∈Cℓ0. Proceed inductively: if J ∈Cm with ℓk−1 < m ≤ℓk and J ⊂D for D ∈Cℓk−1 define (1.3) µ(J) = Mℓk(J)µ(D) Mℓk(D) . Then µ is consistently defined on all intervals in C and therefore can be extended to a probability measure on [0, 1] by the measure extension theorem. The crucial part of the proof is then to show that, for a suitable choice of (ℓk : k ∈N) the measure µ has finite γ-energy. For the proof of Proposition 10.6 we need two lemmas. 
The first one is a simple combination of two facts, which have been established at other places in the book: The bounds for the energy of a measure established in Lemma 9.20, and the lower bound of Hausdorffdimension in terms of capacity which follows from the potential theoretic method, see Theorem 4.27. Lemma 10.7. Suppose B ⊂[0, 1] is a Borel set and µ is a probability measure on B. Then ∞ X m=1 X J∈Cm µ(J)2 2−αm < ∞implies dim B ≥α. Proof. By Lemma 9.20 with f(x) = xα and h(n) = 2−nα(1 −2−α) we obtain, for a suitable constant C > 0 that Iα(µ) ≤C ∞ X m=1 X J∈Cm µ(J)2 2−αm . If the right hand side is finite, then so is the α-energy of the measure µ. We thus obtain dim B ≥α by Theorem 4.27. For the formulation of the second lemma we use (2) to pick, for any ℓ∈N an integer n = n(ℓ) ≥ℓ such that 2n(γ−1) ζ(n) ≤pn 2−3ℓ. Lemma 10.8. There exists an almost surely finite random variable ℓ0 such that, for all ℓ≥ℓ0 and D ∈Cℓ, with n = n(ℓ), • for all D ∈Cℓwe have ¯ ¯Mn(D) −EMn(D) ¯ ¯ < 1 2EMn(D), and, in particular, Mn(D) > 0; • for a constant C depending only on γ, n X m=ℓ 2γm X J∈Cm J⊂D Mn(J)2 (2n−ℓpn)2 ≤C2γℓ. 279 Remark 10.9. The first statement in the lemma says intuitively that the variance of the random variables Mn(D) is small, i.e. they are always close to their mean. This is essentially what makes this proof work. ⋄ Proof of Lemma 10.8. For m ≤n, J ∈Cm we denote ∆n(J) := Mn(J) −EMn(J) and, for ℓ≤n and D ∈Cℓ, set Υn(D) := n X m=ℓ 2mγ X J∈Cm J⊂D ∆n(J)2. By assumption (1) we have E £ ∆n(J)2¤ ≤ζ(n)pn2n−m and therefore, for all D ∈Cℓ, EΥn(D) ≤ n X m=ℓ 2mγ X J∈Cm J⊂D E[∆n(J)2] ≤ n X m=ℓ 2mγζ(n) pn 2n−ℓ≤ 2nγ 2γ −1 ζ(n) pn 2n−ℓ. By our choice of n = n(ℓ) we thus obtain E h X D∈Cℓ Υn(D) (2n−ℓpn)2 i ≤ 1 2γ −1 ζ(n) 22ℓ−n+nγp−1 n ≤ 2−ℓ 2γ −1 . Since the right hand side is summable in ℓwe conclude that the summands inside the last expectation converge to zero as ℓ↑∞. In particular, there exists ℓ0 < ∞such that, for all ℓ≥ℓ0 we have 2−ℓγ ≤1/4 and, for n = n(ℓ) and D ∈Cℓ, Υn(D) ≤ ¡ 2n−ℓpn ¢2 = ¡ EMn(D) ¢2. The first statement follows from this very easily: For any ℓ≥ℓ0 and n = n(ℓ) we have (recalling the definition of Υn(D)), ∆n(D)2 ≤2−ℓγΥn(D) ≤2−ℓγ¡ EMn(D) ¢2 ≤1 4 ¡ EMn(D) ¢2. In order to get the second statement we calculate, X J∈Cm J⊂D (EMn(J))2 (2n−ℓpn)2 = X J∈Cm J⊂D 22(ℓ−m) = 2ℓ−m. Therefore (1.4) n X m=ℓ 2mγ X J∈Cm J⊂D (EMn(J))2 (2n−ℓpn)2 = 2ℓ n X m=ℓ 2−m(1−γ) ≤ 2ℓγ 1 −2−(1+γ) . Now, recalling the choice of n, (1.5) n X m=ℓ 2mγ X J∈Cm J⊂D ∆n(J)2 (2n−ℓpn)2 = 1 (2n−ℓpn)2 n X m=ℓ 2mγ X J∈Cm J⊂D ∆n(J)2 = Υn(D) (2n−ℓpn)2 ≤1. Since Mn(J)2 = ¡ EMn(J) + ∆n(J) ¢2 ≤2 ¡ EMn(J) ¢2 + 2 ¡ ∆n(J) ¢2, adding the inequalities (1.4) and (1.5) and setting C := 2 + 2/(1 −2−(1+γ)) proves the second statement. 280 We now define the sequence (ℓk : k ∈N) by ℓk+1 = n(ℓk) for all integers k ≥0. The first statement of Lemma 10.8 ensures that µ is well defined by (1.3), and together with the second statement will enable us to check that µ has finite γ-energy. Proof of Proposition 10.6. We can now use Lemma 10.8 to verify the condition of Lemma 10.7 and finish the proof of Proposition 10.6. Indeed, by definition of µ, (1.6) ∞ X m=ℓ0+1 X J∈Cm µ(J)2 2−γm = ∞ X k=0 ℓk+1 X m=ℓk+1 2γm X D∈Cℓk µ(D)2 Mℓk+1(D)2 X J∈Cm J⊂D Mℓk+1(J)2. Recall that qk+1 := EMℓk+1(D) = 2ℓk+1−ℓkpℓk+1 and, by the first statement of Lemma 10.8, for every k ∈N and D ∈Cℓk, 1 2 qk+1 ≤Mℓk+1(D) ≤2qk+1. 
Now, from the definition of the measure µ we get, with D ⊂D′ ∈Ck−1, µ(D) = Mℓk(D)µ(D′) Mℓk(D′) ≤2Z(D)/qk, and therefore we can continue (1.6) with the upper bound 16 ∞ X k=0 1 q2 k X D∈Cℓk Z(D) ℓk+1 X m=ℓk+1 2γm X J∈Cm J⊂D Mℓk+1(J)2 q2 k+1 ≤16 C ∞ X k=0 1 q2 k X D∈Cℓk Z(D) 2γℓk, using the second statement of Lemma 10.8 and the definition of qk+1. Finally, using the defin-ition of Mℓk+1 and the definition of (ℓk : k ∈N) we note that, ∞ X k=1 1 q2 k Mℓk([0, 1]) 2γℓk ≤ ∞ X k=1 2ℓk−1 qk 2γℓk ≤ ∞ X k=1 22ℓk−1−ℓk 2γℓk pℓk ≤ ∞ X k=1 2−ℓk−1 < ∞. This ensures convergence of the sequence (1.6) and thus completes the proof. Coming back to the lower bound in Theorem 10.3 we fix ε > 0. Given I = [jh, (j + 1)h] with h := 2−k we let Z(I) = 1 if and only if |B(jh + h′) −B(jh)| ≥a(1 + ε) p 2h′ log(1/h′), for h′ := k2−k. Lemma 10.10. Almost surely, the set A associated with this family (Z(I): I ∈C) of random variables is contained in the set of a-fast times. Proof. Recall that by Theorem 1.12 there exists a constant C > 0 such that, almost surely, |B(s) −B(t)| ≤C p |t −s| log(1/|t −s|), for all s, t ∈[0, 2]. 281 Now assume that k is large enough that (2C aε ¢2 k log 2+k log k ≤k2 log 2. Let t ∈A and suppose that t ∈I ∈Ck with Z(I) = 1. Then, by the triangle equality, |B(t + h′) −B(t)| ≥|B(jh + h′) −B(jh)| −|B(t + h′) −B(jh + h′)| −|B(jh) −B(t)| ≥a(1 + ε) p h′ log(1/h′) −2C p h log h ≥a p h′ log(1/h′). As this happens for infinitely many k, this proves that t is an a-fast point. The next lemma singles out the crucial estimates of expectation and variance needed to ap-ply Proposition 10.6. The first is based on the upper tail estimate for a standard normal distribution, the second on the ‘short range’ of the dependence of the family (Z(I) : I ∈C). Lemma 10.11. Define pn = E[Z(I)] for I ∈Cn, and η(n) := 2n + 1. Then, (a) for I ∈Ck we have E[Z(I)] ≥2−k a2 (1+ε)3; (b) for m ≤n and J ∈Cm, we have E[(Mn(J) −EMn(J))2] ≤pn 2n−m η(n). Proof. For part (a), denoting by X a standard normal random variable, (1.7) P © |B(jh + h′) −B(jh)| ≥a(1 + ε) p 2h′ log(1/h′) ª = P © |X| > a(1 + ε) p 2 log(1/h′) ª ≥ a(1+ε)√ 2 log(1/h′) 1+a2(1+ε)2 log(1/h′) 1 √ 2π exp © −a2 (1 + ε)2 log(1/h′) ª ≥2−k a2 (1+ε)3, for all sufficiently large k and all 0 ≤j < 2k, using the lower estimate for normal random variables of Lemma 3.1 in the penultimate step. For part (b) note that for two intervals J1, J2 ∈Cn the associated random variables Z(J1) and Z(J2) are independent if their distance is at least than n2−n. Using this whenever possible and the trivial estimate E[Z(J1)Z(J2)] ≤EZ(J1) otherwise, we get EMn(J)2 = X J1,J2∈Cn J1,J2⊂I E £ Z(J1)Z(J2) ¤ ≤ X J1∈Cn J1⊂I (2n + 1)EZ(J1) + EZ(J1) X J2∈Cn J2⊂I EZ(J2), Hence we obtain E £ (Mn(J) −EMn(J))2¤ ≤ X J1∈Cn J1⊂I (2n + 1)pn = pn2n−m(2n + 1), which settles the lemma. Proof of the lower bound in Theorem 10.3. By Lemma 10.11 the conditions of Proposition 10.6 hold for any γ < 1 −a2 (1 + ε)3. As, for any ε > 0, the set A associated to (Z(I): I ∈C) is contained in the set of a-fast times, the latter has at least dimension 1 −a2. 282 2. Packing dimension and limsup fractals In this section we ask for a precise criterion, whether a set E contains a-fast times for various values of a. It turns out that such a criterion depends not on the Hausdorff, but on the packing dimension of the set E. We therefore begin by introducing the concept of packing dimension, which was briefly mentioned in the beginning of Chapter 4, in some detail. 
We choose to define packing dimension in a way, which indicates its nature as a concept, which is a natural dual to the notion of Hausdorffdimension. The natural dual operation to covering a set with balls, as we have done in the case of Hausdorffdimension, is the operation of packing balls into the set. Definition 10.12. Suppose E is a metric space. For every δ > 0, a δ-packing of A ⊂E is a countable collection of disjoint balls B(x1, r1), B(x2, r2), B(x3, r3), . . . with centres xi ∈A and radii 0 ≤ri ≤δ. For every s ≥0 we introduce the s-value of the packing as P∞ i=1 rs i . The s-packing number of A is defined as P s(A) = lim δ↓0 P s δ for P s δ = sup n ∞ X i=1 rs i : (B(xi, ri)) a δ-packing of A o . ⋄ Note that the packing number is defined in the same way as the Hausdorffmeasure with efficient (small) coverings replaced by efficient (large) packings. A difference is that the packing numbers do not define a reasonable measure. However a small modification gives the so-called packing measure, Ps(A) = inf n ∞ X i=1 P s(Ai) : A = ∞ [ i=1 Ai o . The packing dimension has a definition analogous to the definition of Hausdorffdimension with Hausdorffmeasures replaced by packing measures. Definition 10.13. The packing dimension of E is dimP E = inf{s : Ps(E) = 0}. ⋄ Remark 10.14. It is not hard to see that dimP E = inf{s : Ps(E) < ∞} = sup{s : Ps(E) > 0} = sup{s : Ps(E) = ∞}, a proof of this fact is suggested as Exercise 10.1. ⋄ An alternative approach to packing dimension is to use a suitable regularisation of the upper Minkowski dimension, recall Remark 4.4.4 where we have hinted at this possibility. Theorem 10.15. For every metric space E we have dimP E = inf n ∞ sup i=1 dimMEi : E = ∞ [ i=1 Ei , Ei bounded o . 283 Remark 10.16. We have, for all bounded sets E, that dimP E ≤dimME and, of course, strict inequality may hold. Obviously, every countable set has packing dimension 0, compare with the example in Exercise 4.2. For this definition it is not hard to see that the countable stability property is satisfied. ⋄ Proof. Define, for every A ⊂E and ε > 0, P(A, ε) = max © k : there are disjoint balls B(x1, ε), . . . , B(xk, ε) with xi ∈A ª . Recall the definition of the numbers M(A, ε) from (1.1) in Chapter 4. We first show that P(A, 4ε) ≤M(A, 2ε) ≤P(A, ε) . Indeed, if k = P(A, ε) let B(x1, ε), . . . , B(xk, ε) be disjoint balls with xi ∈A. Suppose x ∈ A \ Sk i=1 B(xi, 2ε), then B(x, ε) is disjoint from all balls B(xi, ε) contradicting the choice of k. Hence B(x1, 2ε), . . . , B(xk, 2ε) is a covering of A and we have shown M(A, 2ε) ≤P(A, ε). For the other inequality let m = M(A, 2ε) and k = P(A, 4ε) and choose x1, . . . , xm ∈A and y1, . . . , yk ∈A such that A ⊂ m [ i=1 B(xi, 2ε) and B(y1, 4ε), . . . , B(yk, 4ε) disjoint. Then each yj belongs to some B(xi, 2ε) and no such ball contains more than one such point. Thus k ≤m, which proves P(A, 4ε) ≤M(A, 2ε). Suppose now that inf{t : Pt(E) = 0} < s. Then there is t < s and E = S∞ i=1 Ai such that, for every set A = Ai, we have P t(A) < 1. Obviously, P t ε(A) ≥P(A, ε)εt. Letting ε ↓0 gives lim ε↓0 M(A, ε)εt ≤lim ε↓0 P(A, ε/2)εt ≤2tP t(A) < 2t. Hence dimMA ≤t and by definition sup∞ i=1 dimMAi ≤t < s. To prove the opposite inequality, let 0 < t < s < inf{r : Pr(E) = 0} , and Ai ⊂E bounded with E = S∞ i=1 Ai. It suffices to show that dimM(Ai) ≥t for some i. Since Ps(E) > 0 there is i such that P s(Ai) > 0. Let 0 < α < P s(Ai), then for all δ ∈(0, 1) we have P s δ (Ai) > α and there exist disjoint balls B(x1, r1), B(x2, r2), B(x3, r3), . . . 
with centres xj ∈Ai and radii rj smaller than δ with ∞ X j=1 rs j ≥α . For every m let km be the number of balls with radius 2−m−1 < rj ≤2−m. Then, ∞ X m=0 km2−ms ≥ ∞ X j=1 rs j ≥α . 284 This yields, for some integer N ≥0, 2Nt(1 −2t−s)α ≤kN , since otherwise ∞ X m=0 km2−ms < ∞ X m=0 2mt(1 −2t−s)2−msα = α . Since rj ≤δ for all j, we have 2−N−1 < δ. Moreover, P(Ai, 2−N−1) ≥kN ≥2Nt(1 −2t−s)α , which gives sup 0≤ε≤δ P(Ai, ε)εt ≥P(Ai, 2−N−1)2−Nt−t ≥2−t(1 −2t−s)α . Letting δ ↓0, and recalling the relation of M(A, ε) and P(A, ε) established at the beginning of the proof, we obtain lim sup ε↓0 M(Ai, ε)εt ≥lim sup ε↓0 P(Ai, 2ε)εt > 0 , and thus dimMAi ≥t, as required. Remark 10.17. It is easy to see that, for every metric space, dimP E ≥dim E. This is suggested as Exercise 10.2. ⋄ The following result shows that every closed subset of Rd has a large subset, which is ‘regular’ in a suitable sense. It will be used in the proof of Theorem 10.28 below. Lemma 10.18. Let A ⊂Rd be closed. (i) If any open set V which intersects A satisfies dimM(A∩V ) ≥α , then dimP(A) ≥α. (ii) If dimP(A) > α, then there is a (relatively closed) nonempty subset e A of A, such that, for any open set V which intersects e A, we have dimP( e A ∩V ) > α. Proof. Let A ⊂S∞ j=1 Aj, where the Aj are closed. We are going to show that there exist an open set V and an index j such that V ∩A ⊂Aj. For this V and j we have, dimM(Aj) ≥dimM(Aj ∩V ) ≥dimM(A ∩V ) ≥α. This in turn implies that dimP(A) ≥α. Suppose now that for any V open such that V ∩A ̸= ∅, it holds that V ∩A ̸⊂Aj. Then Ac j is a dense open set relative to A. By Baire’s (category) theorem A ∩T j Ac j ̸= ∅, which means that A ̸⊂S j Aj, contradicting our assumption and proving (i). Now choose a countable basis B of the topology of Rd and define e A = A \ [ © B ∈B : dimM(B ∩A) ≤α ª . Then, dimP(A \ e A) ≤α using dimP ≤dimM and countable stability of packing dimension. From this we conclude that dimP e A = dimP A > α. 285 If for some V open, V ∩e A ̸= ∅and dimP( e A ∩V ) ≤α then V contains some set B ∈B such that e A ∩B ̸= ∅. For that set we have dimP(A ∩B) ≤dimP(A \ e A) ∧dimP( e A ∩B)) ≤α, contradicting the construction of e A. Example 10.19. As an example of a result demonstrating the duality between Hausdorffand packing dimension is the product formula, see [BP96]. In the dimension theory of smooth sets (manifolds, linear spaces) we have the following formula for product sets dim(E × F) = dim E + dim F . The example discussed in Exercise 2.3 shows that this formula fails for Hausdorffdimension, a reasonable formula for the Hausdorffdimension of product sets necessarily involves information about the packing dimension of one of the factor sets. In [BP96] it is shown that, for every Borel set A ⊂Rd, dimP(A) = sup B n dim(A × B) −dim(B) o where the supremum is over all compact sets B ⊂Rd. One can also show that, if A satisfies dim A = dimP A, then the product formula dim(A × B) = dim A + dim B holds. ⋄ Before moving back to our study of Brownian paths we study the packing dimension of the ‘test sets’ we have used in the stochastic codimension method, see Section 9.1. Theorem 10.20. Let γ ∈[0, d] and Γ[γ] be a percolation limit set in Rd with retention para-meter 2−γ. Then • dimP Γ[γ] ≤d −γ almost surely, • dimP Γ[γ] = d −γ almost surely on Γ[γ] ̸= ∅. Proof. For the first item, as packing dimension is bounded from above by the upper Minkowski dimension, it suffices to show that dimM Γ[γ] ≤d −γ almost surely. 
For this purpose we use the formula for the upper Minkowski dimension given in Remark 4.2. For a given n, we cover the percolation limit set by Sn, the collection of cubes retained in the nth construction step. The probability that a given cube of sidelength 2−n is in Sn is 2−nγ and hence the expected number of cubes in Sn is 2n(d−γ). Hence, for any ε > 0, P © 2n(γ−d−ε) #Sn > 1 ª ≤2n(γ−d−ε)E#Sn ≤2−nε, which is summable. Hence, almost surely, 2n(γ−d−ε) #Sn ≤1 for all but finitely many n. Thus, almost surely, dimM ≤lim sup n↑∞ log #Sn n log 2 ≤d −γ + ε for every ε > 0. For the second item recall the corresponding statement for Hausdorffdimension from Exer-cise 9.3. The result follows, as packing dimension is bounded from below by the Hausdorff dimension, see Remark 10.17. 286 Remark 10.21. At a first glance the concept of packing dimension does not seem to add sub-stantial news to the discussion of fine properties of d-dimensional Brownian motion. However, a first sign that something interesting might be going on can be found in Exercise 10.5, where we show that the Hausdorffand packing dimension of the sets of a-fast points differ. This is indicative of the fact that optimal coverings of these sets uses coverings sets of widely differing size, and that optimal packings use sets of quite different scale. ⋄ Given a set E ⊂[0, 1] we now ask for the maximal value of a such that E contains an a-fast time with positive probability. This notion of size is most intimately linked to packing dimension as the following theorem shows. We denote by F(a) ⊂[0, 1] the set of a-fast times. Theorem 10.22 (Khoshnevisan, Peres and Xiao). For any compact set E ⊂[0, 1], almost surely, sup t∈E lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) = p dimP(E). Moreover, if dimP(E) > a2, then dimP(F(a) ∩E) = dimP(E). Remark 10.23. The result can be extended from compact sets E to more general classes of sets, more precisely the analytic sets, see [KPX00]. ⋄ Remark 10.24. An equivalent formulation of the theorem is that, for any compact E ⊂[0, 1], almost surely, P © F(a) ∩E ̸= ∅ ª = ½ 1, if dimP(E) > a2, 0, if dimP(E) < a2. Using the compact percolation limit sets E = Γ[γ] in this result and Hawkes’ theorem, Theorem 9.5, one can obtain an alternative proof of the Orey-Taylor theorem. Indeed, by Theorem 10.20, if γ < 1 −a2 we have dimP(E) > a2 with positive probability, and therefore, P © F(a) ∩E ̸= ∅ ª > 0. Hence, by Hawkes’ theorem, dim F(a) ≥1 −a2 with positive probability. Brownian scaling maps a-fast points onto a-fast points. Therefore there exists ε > 0 such that, for any n ∈N and 0 ≤j ≤n −1, P © dim(F(a) ∩[j/n, (j + 1)/n]) ≥1 −a2ª ≥ε, and hence P © dim F(a) ≥1 −a2ª ≥1 −(1 −ε)n →1. Conversely, by Theorem 10.20, if γ > 1−a2 we have dimP(E) < a2 almost surely, and therefore, P © F(a) ∩E ̸= ∅ ª = 0. Hence, by Hawkes’ theorem, Theorem 9.5, we have dim F(a) ≤1 −a2 almost surely. ⋄ Theorem 10.22 can be seen as a probabilistic interpretation of packing dimension. The upper and lower Minkowski dimensions allow a similar definition when the order of sup and lim are interchanged. 287 Theorem 10.25. For any compact E ⊂[0, 1] almost surely, (2.1) lim sup h↓0 sup t∈E |B(t + h) −B(t)| p 2h log(1/h) = q dimM(E). Proof of the upper bounds in Theorem 10.22 and Theorem 10.25. Suppose E ⊂[0, 1] is compact. We assume that dimM(E) < λ for some λ < a2 and show that (2.2) lim sup h↓0 sup t∈E |B(t + h) −B(t)| p 2h log(1/h) ≤λ almost surely. Note that this is the upper bound in Theorem 10.25. 
Once this is shown it immediately implies sup t∈E lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≤dimM(E) almost surely. Now, for any decomposition E = S∞ i=1 Ei, we have sup t∈E lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) = ∞ sup i=1 sup t∈cl(Ei) lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≤ ∞ sup i=1 dimM(Ei), where we have made use of the fact that the upper Minkowski dimension is insensitive under taking the closure of a set. Theorem 10.15 now implies that sup t∈E lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≤dimP(E) almost surely, which is the upper bound in Theorem 10.22. For the proof of (2.2) cover E by disjoint subintervals I = [(j/k)η−k, ((j + 1)/k)η−k) for j = 0, . . . , ⌈kηk −1⌉, of equal length h = η−k/k such that I ∩E ̸= ∅. By definition of the upper Minkowski dimension there exists an m such that, for all k ≥m, no more than ηλk different such intervals of length h = η−k/k intersect E. Now fix ε > 0 such that λ < a2 2 (1 −ε)3, which is possible by our condition on λ. Let Z(I) = 1 if , for h′ = η−k, |B(jh + h′) −B(jh)| > a(1 −3ε) p h′ log(1/h′). Recall from the proof of Lemma 10.4 that there is an η > 1 such that, for any m ∈N, the collection © I = [(j/k)η−k, ((j + 1)/k)η−k) : Z(I) = 1, I ∩E ̸= ∅, k ≥m ª is a covering of the set M(m) := n t ∈E : sup η−k<u≤η−k+1 |B(t + u) −B(t)| p u log(1/u) ≥a(1 −ε) for some k ≥m o . Moreover, we recall from (1.2), that P{Z(I) = 1} ≤η−k a2 2 (1−ε)3, 288 and, sticking to our notation I = [(j/k)η−k, ((j + 1)/k)η−k) for a little while longer, ∞ X k=0 ⌈kηk−1⌉ X j=0 P © Z(I) = 1 ª 1{I ∩E ̸= ∅} ≤ ∞ X k=0 ηλkη−k a2 2 (1−ε)3 < ∞, and hence by the Borel-Cantelli lemma there exists an m such that Z(I) = 0 whenever I = [(j/k)η−k, ((j + 1)/k)η−k) for some k ≥m. This means that the set M(m) can be covered by the empty covering, so it must itself be empty. This shows (2.2) and completes the proof. We are going to embed the proof of the lower bound now into a more general framework, including the discussion of limsup fractals in a d-dimensional cube. Definition 10.26. Fix an open unit cube Cube = x0 + (0, 1)d ⊂Rd. For any nonnegative integer k, denote by Ck the collection of dyadic cubes x0 + d Y i=1 [ji2−k, (ji + 1)2−k] with ji ∈{0, . . . , 2k −1} for all i ∈{1, . . . , d}, and C = S k≥0 Ck. Denote by (Z(I): I ∈C) a collection of random variables each taking values in {0, 1}. The limsup fractal associated to this collection is the random set A := ∞ \ n=1 ∞ [ k=n [ I∈Ck Z(I)=1 int(I), where int(I) is the interior of the cube I. ⋄ Remark 10.27. Compared with the setup of the previous section we have switched to the use of open cubes in the definition of limsup fractals. This choice is more convenient when we prove hitting estimates, whereas in Proposition 10.6 the choice of closed cubes was more convenient when constructing random measures on A. ⋄ The key to our result are the hitting probabilities for the discrete limsup fractal A under some conditions on the random variables (Z(I): I ∈C). Theorem 10.28. Suppose that (i) the means pn = E[Z(I)] are independent of the choice of I ∈Cn and satisfy lim inf n↑∞ log pn n log 2 ≥−γ, for some γ > 0; (ii) there exists c > 0 such that the random variables Z(I) and Z(J) are independent whenever I, J ∈Cn and the distance of I and J exceeds cn2n. Then, for any compact E ⊂Cube with dimP(E) > γ, we have P © A ∩E ̸= ∅ ª = 1. 289 Remark 10.29. The second assumption, which gives us the necessary independence for the lower bound, can be weakened, see [KPX00]. 
Note that no assumption is made concerning the dependence of random variables Z(I) for intervals I of different size. ⋄ Proof of Theorem 10.28 Let E ⊂Cube be compact with dimP E > γ. Let e E be defined as in Lemma 10.18 for example as e E = E \ [ ai<bi rational n d Y i=1 (ai, bi) : dimM ¡ E ∩ d Y i=1 (ai, bi) ¢ < γ o . From the proof of Lemma 10.18 we have dimP E = dimP e E. Define open sets An = [ I∈Cn © int(I) : Z(I) = 1 ª , and A∗ n = [ m≥n An = [ m≥n [ I∈Cm © int(I) : Z(I) = 1 ª . By definition A∗ n ∩e E is open in e E. We will show that it is also dense in e E with probability one. This, by Baire’s category theorem, will imply that A ∩e E = ∞ \ n=1 A∗ n ∩e E ̸= ∅, almost surely, as required. To show that A∗ n ∩e E is dense in e E, we need to show that for any open binary cube J which intersects e E, the set A∗ n ∩e E ∩J is almost surely non-empty. For the rest of the proof, take ε > 0 small and n large enough so that e E ∩J intersects more than 2n(γ+2ε) binary cubes of sidelength 2−n, and so that (log pn)/n > −(log 2)(γ + ε). Let Sn be the set of cubes in Cn that intersect e E. Define Tn = X I∈Sn Z(I), so that P © An ∩e E ∩J = ∅ ª = P{Tn = 0}. To show that this probability converges to zero, by the Paley-Zygmund inequality, it suffices to prove that (Var Tn)/(ETn)2 does. The first moment of Tn is given by ETn = sn pn > 2(γ+2ε)n2−γ−εn = 2εn, where sn denotes the cardinality of Sn. The variance can be written as Var Tn = Var X I∈Sn Z(I) = X I∈Sn X J∈Sn Cov(Z(I), Z(J)). Here each summand is at most pn, and the summands for which I and J have distance at least cn2−n vanish by assumption. Thus X I∈Sn X J∈Sn Cov(Z(I), Z(J)) ≤pn # © (I, J) ∈Sn × Sn : dist(I, J) ≤cn 2−nª ≤pnsn c(2n + 1) = c(2n + 1) ETn. 290 This implies that (Var Tn)/(ETn)2 →0. Hence, almost surely, A∗ n is an open dense set, concluding the proof. We now show how the main statement of Theorem 10.22 follows from this, and how the ideas in the proof also lead to the lower bound in Theorem 10.25. Proof the lower bound in Theorem 10.22 and Theorem 10.25. For the lower bound we look at a compact set E ⊂(0, 1) with dimP(E) > a2/2 and first go for the result in Theorem 10.22. Choose ε > 0 such that dimP(E) > a2 2 (1 + ε)3. Associate to every dyadic interval I = [jh, (j + 1)h] ∈Ck with h = 2−k the random variable Z(I), which takes the value one if and only if, for h′ = k2−k, |B(jh + h′) −B(jh)| ≥a(1 + ε) p h′ log(1/h′), and note that by Lemma 10.10 the limsup fractal associated to these random variables is contained in the set of a-fast times. It remains to note that the collection {Z(I) : I ∈Ck , k ≥ 0} satisfies the condition (i) with γ = a2 2 (1 + ε)3 by (1.7) and condition (ii) with c = 1. Theorem 10.28 now gives that P © A ∩E ̸= ∅ ª = 1, and therefore sup t∈E lim sup h↓0 |B(t + h) −B(t)| p 2h log(1/h) ≥ p dimP(E). For the lower bound in Theorem 10.25 we look at a compact set E ⊂(0, 1) with dimM(E) > a2/2 and fix ε > 0 such that dimM(E) (1 −ε)2 ≥a2 2 . Hence there exists a sequence (nk : k ∈N) such that # © I ∈Cnk : I ∩E ̸= ∅ ª ≥2nk a2 2 (1−ε). With Z(I) defined as above we obtain, using notation and proof of Theorem 10.28, that P{Z(I) = 1} ≤2γ, with γ = a2 2 (1 + ε)4, and Var Tnk ≤(2nk + 1) ETnk, for Tn = X I∈Cn Z(I) 1{I ∩E ̸= ∅}. By Chebyshev’s inequality we get, for 1/2 < η < 1, P © |Tnk −ETnk| ≥(ETnk)ηª ≤(2nk + 1) (ETnk)1−2η. As ETn is exponentially increasing we can infer, using the Borel-Cantelli lemma, that lim k↑∞ Tnk ETnk = 1 almost surely. This implies that Tnk ̸= 0 for all sufficiently large k. 
Hence, as Z(I) = 1 and I ∩E ̸= ∅imply that there exists t ∈I ∩E with with |B(t + h′) −B(t)| ≥a p h′ log(1/h′) for h′ = nk2nk, by the proof of Lemma 10.10, we have completed the proof of Theorem 10.25. 291 3. Slow times of Brownian motion At the fast times Brownian motion has, in infinitely many small scales, an unusually large growth. Conversely, one may ask whether there are times where a Brownian path has, at all small scales, unusually small growth. This question of slow times for the Brownian motion is related to nondifferentiability of the Brownian path. Indeed, in our proof of non-differentiability, we showed that almost surely, lim sup h↓0 |B(t + h) −B(t)| h = ∞, for all t ∈[0, 1], and in 1963 Dvoretzky showed that there exists a constant δ > 0 such that almost surely, lim sup h↓0 |B(t + h) −B(t)| √ h > δ, for all t ∈[0, 1]. In 1983 Davis and, independently, Perkins and Greenwood, found that the optimal constant in this result is equal to one. Theorem 10.30 (Davis, Perkins and Greenwood). Almost surely, inf t∈[0,1] lim sup h↓0 |B(t + h) −B(t)| √ h = 1. Remark 10.31. We call a time t ∈[0, 1] an a-slow time if (3.1) lim sup h↓0 |B(t + h) −B(t)| √ h ≤a. The result shows that a-slow times exist for a > 1 but not for a < 1. The Hausdorffdimension of the set of a-slow times is studied in Perkins [Pe83]. ⋄ For the proof of Theorem 10.30 we need to investigate the probability that the graph of a Brownian motion stays within a parabola open to the right. The following lemma is what we need for an upper bound. Lemma 10.32. Let M := max0≤t≤1 |B(t)| and, for r < 1, define the stopping time T = inf{t ≥1 : |B(t)| = M + r √ t}. Then ET < ∞. Proof. By Theorem 2.44, for every t ≥1, we have E[T ∧t] = E[B(T ∧t)2] ≤E £ (M + r √ T ∧t)2¤ = EM 2 + 2rE £ M √ T ∧t ¤ + r2E[T ∧t] ≤EM 2 + 2r(EM 2)1/2(E[T ∧t])1/2 + r2E[T ∧t], where H¨ older’s inequality was used in the last step. This gives (1 −r2)E[T ∧t] ≤E[M 2] + 2r ¡ EM 2¢1/2¡ E[T ∧t] ¢1/2, 292 and as E[M 2] < ∞we get that E[T ∧t] is bounded and hence ET < ∞. Proof of the upper bound in Theorem 10.30. We show that, for r < 1 the set A = © t ∈[0, 1] : lim sup h↓0 |B(t + h) −B(t)| < r √ h ª is empty almost surely. The crucial input is that, for any interval I = [a, b] ⊂[0, 1], we have by the triangle inequality and Brownian scaling, for M = max{|B(t)|: 0 ≤t ≤b −a}, P © ∃t ∈I : |B(t + h) −B(t)| < r √ h for all 0 < h ≤1 ª ≤P © |B(a + h) −B(a)| < M + r √ h for all b −a < h ≤1 ª ≤P © T ≥ 1 b−a ª . Now, dividing [0, 1] into n intervals of length 1/n we get P{A ̸= ∅} ≤lim inf n→∞ n−1 X k=0 P{A ∩[k/n, (k + 1)/n] ̸= ∅} ≤lim inf n→∞nP © T ≥n ª = 0, using in the final step that ∞> ET ≥P n P © T ≥n ª and divergence of the harmonic series. We turn to the proof of the upper bound. Again we start by studying exit times from a parabola. For 0 < r < ∞and a > 0 let T(r, a) := inf{t ≥0 : |B(t)| = r √ t + a}. For the moment it suffices to note the following property of T(1, a). Lemma 10.33. We have ET(1, a) = ∞. Proof. Suppose that ET(1, a) < ∞. Then, by Theorem 2.44, we have that ET(1, a) = EB(T(1, a))2 = ET(1, a) + a, which is a contradiction. Hence ET(1, a) = ∞. For 0 < r < ∞and a > 0 we now define further stopping times S(r, a) := inf{t ≥a : |B(t)| ≥r √ t}. Lemma 10.34. If r > 1 there is a p = p(r) < 1 such that E[S(r, 1)p] = ∞. In particular, lim sup n↑∞ n P{S(r, 1) > n} E[S(r, 1) ∧n] > 0. The proof uses the following general lemma. Lemma 10.35. Suppose X is a nonnegative random variable and EXp = ∞for some p < 1. Then lim sup n↑∞ n P{X > n}/E[X ∧n] > 0. 293 Proof. 
Let p < 1 and suppose for contradiction that, for some ε < 1/(2 + 2 1−p), n P{X > n} < ε E[X ∧n] for all integers n > y0 ≥2. We have that E[(X ∧N)p] = Z Np 0 P{Xp > x} dx = p Z N 0 yp−1 P{X > y} dy , and hence, for all N, E[(X ∧N)p] ≤p Z y0 0 yp−1 dy + ε y0 y0−1 p Z N y0 yp−2 E[X ∧y] dy ≤yp 0 + 2εp Z N 0 yp−1 P{X > y} dy + 2εp Z N y0 yp−2 Z y 0 P{X > z} dz dy ≤yp 0 + 2ε E[(X ∧N)p] + 2εp Z N 0 P{X > z} Z ∞ z yp−2 dy dz ≤yp 0 + ε 2 ¡ 1 + 1 1−p ¢ E[(X ∧N)p] . This implies that E[Xp] = sup E[(X ∧N)p] < ∞, and so the statement follows. Proof of Lemma 10.34. Define a sequence of stopping times by τ0 = 1 and, for k ≥1, τk = ½ inf{t ≥τk−1 : B(t) = 0 or |B(t)| ≥r √ t ª if k odd, inf{t ≥τk−1 : |B(t)| ≥ √ t ª if k even. By the strong Markov property and Brownian scaling we get, for any λ > 0, P{τ2k −τ2k−1 > λτ2k−1 | B(τ2k−1) = 0} = P{T(1, 1) > λ}. Define, with P1 referring to a Brownian motion started in B(0) = 1, c := P1 © inf{t ≥0 : B(t) = 0} < inf{t ≥0 : |B(t)| = r √ t + 1} ª . Now, for k ≥2 and λ > 0, on {τ2k−2 < S(r, 1)}, P © τ2k −τ2k−1 > λτ2k−2 ¯ ¯ F(τ2k−2) ª ≥P © τ2k −τ2k−1 > λτ2k−1 ¯ ¯ F(τ2k−2), B(τ2k−1) = 0 ª P © B(τ2k−1) = 0 ¯ ¯ F(τ2k−2) ª = c P{T(1, 1) > λ}. To pass from this estimate to the pth moments we use that, for any nonnegative random variable X, we have EXp = R ∞ 0 P{Xp > λ} dλ. This is an easy consequence of Fubini’s theorem and gives E £ (τ2k −τ2k−1)p¤ = E Z ∞ 0 τ p 2k−2 P © (τ2k −τ2k−1)p > λτ p 2k−2 ¯ ¯ F(τ2k−2) ª dλ ≥E Z ∞ 0 τ p 2k−2 P © τ2k −τ2k−1 > λ1/pτ2k−2 ¯ ¯ F(τ2k−2) ª 1{τ2k−2 < S(r, 1)} dλ ≥c E Z ∞ 0 τ p 2k−2 P © T(1, 1) > λ1/pª 1{τ2k−2 < S(r, 1)} dλ. 294 Now, using the formula for EXp again, but for X = T(1, 1) and noting that {τ2k−2 < S(r, 1)} = {τ2k−3 < τ2k−2}, we obtain E £ (τ2k −τ2k−1)p¤ ≥cE[T(1, 1)p] E £ τ p 2k−21{τ2k−2 < S(r, 1)} ¤ ≥cE[T(1, 1)p] E £ (τ2k−2 −τ2k−3)p¤ , and by iterating this, E £ (τ2k −τ2k−1)p¤ ≥ ³ cE[T(1, 1)p] ´k−1 E £ (τ2 −τ1)p¤ . Note that, by Fatou’s lemma and by Lemma 10.33, lim inf p↑1 E[T(1, 1)p] ≥ET(1, 1) = ∞. Hence we may pick p < 1 such that E[T(1, 1)p] > 1/c. Then E[S(r, 1)p] ≥E[τ p 2k] ≥E £ (τ2k −τ2k−1)p¤ − →∞, as k ↑∞, which is the first statement we wanted to prove. The second statement follows directly from the general fact stated as Lemma 10.35. Proof of the lower bound in Theorem 10.30. Fix r > 1 and let A(n) = © t ∈[0, 1] : |B(t + h) −B(t)| < r √ h, for all 1 n ≤h ≤1}. Note that n ≥m implies A(n) ⊂A(m). We show that (3.2) P n ∞ \ n=1 A(n) ̸= ∅ o = lim n→∞P © A(n) ̸= ∅ ª > 0. For n ∈N and let v(0, n) = 0 and, for i ≥1, v(i, n) := (v(i −1,n) + 1) ∧inf © t ≥v(i −1, n) + 1 n : |B(t) −B(v(i −1, n))| ≥r p t −v(i −1, n) ª . Then P{v(i + 1, n) −v(i, n) = 1 | F(v(i, n))} = P{S(r, 1) ≥n}, and by Brownian scaling, (3.3) E[v(i + 1, n) −v(i, n) | F(v(i, n))] = 1 nE[S(r, 1) ∧n]. Of course v(k, n) ≥1 if v(i, n) −v(i −1, n) = 1 for some i ≤k. Thus, for any m, P{v(i + 1, n) −v(i, n) = 1 for some i ≤m such that v(i, n) ≤1 ª = m X i=1 P{S(r, 1) ≥n}P{v(i, n) ≤1} ≥mP{S(r, 1) ≥n}P{v(m, n) ≤1}. Let (nk : k ∈N) be an increasing sequence of integers such that nk P{S(r, 1) ≥nk} E[S(r, 1) ∧nk] ≥ε > 0, and E[S(r, 1) ∧nk] ≤nk/6 for all k, which is possible by Lemma 10.34. 295 Choose the integers mk so that they satisfy 1 3 ≤mk nk E[S(r, 1) ∧nk] ≤1 2. Summing (3.3) over all i = 1, . . . , mk −1, Ev(mk, nk) = mk nk E[S(r, 1) ∧nk], hence P{v(mk, nk) ≥1} ≤1/2. 
Now we get, putting all our ingredients together, P{A(nk) ̸= ∅} ≥P{v(i + 1, nk) −v(i, nk) = 1 for some i ≤mk such that v(i, nk) ≤1 ª ≥mkP{S(r, 1) ≥nk}P{v(mk, nk) ≤1} ≥mkP{S(r, 1) ≥nk}/2 ≥mk 2nk εE[S(r, 1) ∧nk] ≥ε 6. This proves (3.2). It remains to observe that, by Brownian scaling, there exists δ > 0 such that, for all n ∈N, P © ∃t ∈[0, 1/n] : lim sup h↓0 |B(t + h) −B(t)|/ √ h ≤r} ≥δ. Hence, by independence, P © ∃t ∈[0, 1] : lim sup h↓0 |B(t + h) −B(t)|/ √ h ≤r} ≥1 −(1 −δ)n − →1. This completes the proof of the lower bound, and hence completes the proof of Theorem 10.30. 4. Cone points of planar Brownian motion We now focus on a planar Brownian motion {B(t) : t ≥0}. Recall from Section 7.2 that around a typical point on the path this motion performs an infinite number of windings in both directions. It is easy to see that there are exceptional points for this behaviour: Let x = min{x: (x, 0) ∈B[0, 1]}. Then Brownian motion does not perform any windings around (x, 0), as this would necessarily imply that it crosses the half-line {(y, 0): y < x} contradicting the minimality of x. More generally, each point (x, y) ∈R2 with x = min{x: (x, y) ∈B[0, 1]} has this property, if the set is nonempty. Hence, the set of such points has dimension at least one, as the projection onto the y-axis gives a nontrivial interval. We shall see below that this set has indeed Hausdorff dimension one. We now look at points where a cone-shaped area with the tip of the cone placed in the point is avoided by the Brownian motion. These points are called cone points. Definition 10.36. Let {B(t) : t ≥0} be a planar Brownian motion. For any angle α ∈(0, 2π) and direction ξ ∈[0, 2π), define the closed cone W[α, ξ] := © x = rei(θ−ξ) : |θ| ≤α/2, r ≥0 ª ⊂R2. 296 Given a cone x + W[α, ξ] we call its dual the reflection of its complement about the tip, i.e. the cone x + W[2π −α, ξ + π]. A point x = B(t), 0 < t < 1, is an α-cone point if there exists ε > 0 and ξ ∈[0, 2π) such that B(0, 1) ∩B(x, ε) ⊂x + W[α, ξ] . ⋄ Remark 10.37. Clearly, if x = B(t) is a cone point, then there exists a small δ > 0 such that B(t −δ, t + δ) ⊂x + W[α, ξ]. Hence the path {B(t): 0 ≤t ≤1} performs only a finite number of windings around x. ⋄ We now identify the angles α for which there exist α-cone points, and, if they exist, determine the Hausdorffdimension of the set of α-cone points. Theorem 10.38 (Evans 1985). Let {B(t): 0 ≤t ≤1} be a planar Brownian motion. Then, almost surely, α-cone points exist for any α ≥π but not for α < π. Moreover, if α ∈[π, 2π), then dim © x ∈R2 : x is an α-cone point ª = 2 −2π α . In the proof of Theorem 10.38 we identify R2 with the complex plane and use complex notation wherever convenient. Suppose that {B(t): t ≥0} is a planar Brownian motion defined for all positive times. We first fix an angle α ∈(0, 2π) and a direction ξ ∈[0, 2π) and define the notion of an approximate cone point as follows: For any 0 < δ < ε we let Tδ(z) := inf © s ≥0: B(s) ∈B(z, δ) ª and Sδ,ε(z) := inf © s ≥Tδ/2(z): B(s) ̸∈B(z, ε) ª . We say that z ∈R2 is a (δ, ε)-approximate cone point if B(0, Tδ(z)) ⊂z + W[α, ξ], and B(Tδ/2(z), Sδ,ε(z)) ⊂z + W[α, ξ] . Note that we do not require (δ, ε)-approximate cone points to belong to the Brownian path. The relation between cone points and approximate cone points will become clear later, we first collect the necessary information about the probability that a given point is a (δ, ε)-approximate cone point. 
The strong Markov property allows us to consider the events happening during the intervals [0, Tδ(z)] and [Tδ/2(z), 1] separately. Lemma 10.39. There exist constants C > c > 0 such that, for every δ > 0, (a) for all z ∈R2, P © B(0, Tδ(z)) ⊂z + W[α, ξ] ª ≤C ¡ δ |z| ¢ π α, (b) for all z ∈R2 with 0 ∈z + W[α/2, ξ], P © B(0, Tδ(z)) ⊂z + W[α, ξ] ª ≥c ¡ δ |z| ¢ π α. 297 Proof. We write z = |z| eiθ and apply the skew-product representation, Theorem 7.25, to the Brownian motion {z −B(t): t ≥0} and obtain B(t) = z −R(t) exp(i θ(t) ¢ , for all t ≥0, for R(t) = exp(W1(H(t)) and θ(t) = W2(H(t)), where {W1(t): t ≥0} and {W2(t): t ≥0} are independent linear Brownian motions started in log |z|, resp. in θ, and a strictly increasing time-change {H(t): t ≥0} which depends only on the first of these motions. This implies that Tδ(z) = inf{s ≥0: R(s) ≤δ} and therefore H(Tδ(z)) = inf © u ≥0: W1(u) ≤log δ ª =: τlog δ . We infer that © B(0, Tδ(z)) ⊂z + W[α, ξ] ª = © |W2(u) + π −ξ| ≤α 2 for all u ∈[0, τlog δ] ª . The latter event means that a linear Brownian motion started in θ stays inside the interval [ξ −π −α/2, ξ −π + α/2] up to the independent random time τlog δ. For the probability of such events we have found two formulas, (4.2) and (4.3) in Chapter 7. The latter formula gives P © |W2(u) + π −ξ| ≤α 2 for all u ∈[0, τlog δ] ª = ∞ X k=0 4 (2k+1) π sin ¡ (2k+1)π(α/2+ξ−π−θ) α ¢ E h exp ¡ −(2k+1)2π2 2α2 τlog δ ¢i = ∞ X k=0 4 (2k+1) π sin ¡ (2k+1)π(α/2+ξ−π−θ) α ¢ ¡ δ |z| ¢(2k+1) π α, using Exercise 2.16 (a) to evaluate the Laplace transform of the first hitting times of a point by linear Brownian motion. Now note that the upper bound, part (a) of the lemma, is trivial if |z| ≤2δ, and otherwise one can bound the exact formula from above by ¡ δ |z| ¢ π α ∞ X k=0 4 (2k+1) π 2−2k π α. The lower bound, part (b) of the lemma, follows from Brownian scaling if δ/|z| is bounded from below. Otherwise note that, under our assumption on z we have |θ + π −ξ| ≤α 4 and thus the sine term corresponding to k = 0 is bounded from below by sin(π/4) > 0. Thus we get a lower bound of ¡ δ |z| ¢ π α ³ 4 π sin(π/4) − ∞ X k=1 4 (2k+1) π ¡ δ |z| ¢2k π α´ , and the bracket is bounded from below by a positive constant if δ/|z| is sufficiently small. An entirely analogous argument also provides the estimates needed for the events imposed after the Brownian motion has hit the ball B(z, δ/2). 298 Lemma 10.40. There exist constants C > c > 0 such that, for every 0 < δ < ε, (a) for all x, z ∈R2 with |x −z| = δ/2, Px © B(0, Tε(z)) ⊂z + W[α, ξ] ª ≤C ¡ δ ε ¢ π α. (b) for all x, z ∈R2 with |x −z| = δ/2 and x −z ∈W[α/2, ξ], Px © B(0, Tε(z)) ⊂z + W[α, ξ] ª ≥c ¡ δ ε ¢ π α. We now focus on the upper bound. Using the strong Markov property we may combine Lem-mas 10.39 (a) and 10.40 (a) to obtain the following lemma. Lemma 10.41. There exists a constant C0 > 0 such that, for any z ∈R2, P © z is a (δ, ε)-approximate cone point ª ≤C0 |z|−π α ε−π α δ 2π α . Proof. By the strong Markov property applied at the stopping time Tδ/2(z) we get P © z is a (δ, ε)-approximate cone point ª ≤E £ 1{B(0, Tδ(z)) ⊂z + W[α, ξ]} PB(Tδ/2(z)) {B(0, S (0) ε (z)) ⊂z + W[α, ξ]} ¤ ≤C2 ¡ δ |z| ¢ π α ¡ δ ε ¢ π α, where we have used Lemmas 10.39 (a) and 10.40 (a). The result follows with C0 := C2. Let M(α, ξ, ε) be the set of all points in the plane, which are (δ, ε)-approximate cone points for all δ > 0. 
Obviously z ∈M(α, ξ, ε) if and only if there exist t > 0 such that z = B(t) and B(0, t) ⊂z + W[α, ξ], and B(t, S (t) ε (z)) ⊂z + W[α, ξ] , where S (t) ε (z) := inf © s > t: B(s) ̸∈B(z, ε) ª . Lemma 10.42. Almost surely, • if α ∈(0, π) then M(α, ξ, ε) = ∅, • if α ∈[π, 2π) then dim M(α, ξ, ε) ≤2 −2π α . Proof. Take a compact cube Cube of unit sidelength not containing the origin. It suffices to show that M(α, ξ, ε) ∩Cube = ∅if α ∈(0, π) and dim M(α, ξ, ε) ∩Cube ≤2 −2π α if α ∈(π, 2π). Given a dyadic subcube D ∈Dk of sidelength 2−k let D∗⊃D be a concentric ball around D with radius (1 + √ 2)2−k. Define the focal point x = x(D) of D to be • if α < π the tip of the cone x + W[α, ξ] whose boundary halflines are tangent to D∗, • if α > π the tip of the cone whose dual has boundary halflines tangent to D∗. The following properties are easy to check: • for every y ∈D we have y + W[α, ξ] ⊂x + W[α, ξ], 299 • for some constant C1 > 0 depending only on α, and every y ∈D we have B(y, 2−k) ⊂B(x, C12−k) and B(y, 1 2 2−k) ⊂B(x, C1 1 2 2−k), • for some k0 ∈N depending only on α and ε, every y ∈D and k ≥k0, B(y, ε) ⊃B(x, ε/2). This implies that, if k ≥k0 and the cube D ∈Dk contains a (2−k, ε)-approximate cone point, then its focal point x is a (C12−k, ε/2)-approximate cone point. Hence, by Lemma 10.41, for a constant C2 > 0, P © D contains a (2−k, ε)-approximate cone point ª ≤C2 |x(D)|−π α ˜ ε−π α 2−k 2π α . Note that, given Cube and ε > 0 we can find k1 ≥k0 such that |x(D)| is bounded away from zero over all D ∈Dk and k ≥k1. Hence we obtain C3 > 0 such that, for all k ≥k1, P © D contains a (2−k, ε)-approximate cone point ª ≤C3 2−k 2π α . Then, if α ∈(0, π), P{M(α, ξ, ε) ̸= ∅} ≤ X D∈Dk P © D contains a (2−k, ε)-approximate cone point ª ≤C3 22k 2−k 2π α k→∞ − → 0 , proving part (a). Moreover, if α ∈(π, 2π) and k ≥k1, we may cover M(α, ξ, ε) ∩Cube by the collection of cubes D ∈Dk which contain a (2−k, ε)-approximate cone point. Then, for any γ > 2 −2π α the expected γ-value of this covering is E X D∈Dk 2−kγ+ 3 2 γ1{D contains a (2−k, ε)-approximate cone point } ≤2 3 2γ X D∈Dk 2−kγ P © D contains a (2−k, ε)-approximate cone point ª ≤C3 2k(2−2π α −γ) k→∞ − → 0 , and this proves that, almost surely, dim M(α, ξ, ε) ≤γ. Proof of the upper bound in Theorem 10.38. Suppose δ > 0 is arbitrary and z ∈R2 is an α-cone point. Then there exist a rational number q ∈[0, 1), a rational direction ξ ∈[0, 2π), and a rational ε > 0, such that z = B(t) for some t ∈(q, 1) and B(q, t) ⊂z + W[α + δ, ξ], and B(t, S (t) ε (z)) ⊂z + W[α + δ, ξ] . By Lemma 10.42 for every fixed choice of rational parameters this set is empty almost surely if α + δ < π. For any α < π we can pick δ > 0 with α + δ < π and hence there are no α-cone points almost surely. Similarly, if α ≥π we can use Lemma 10.42 and the countable stability of Hausdorffdimension to obtain an almost sure upper bound of 2 −2π/(α + δ) for the set of α-cone points. The result follows as δ > 0 was arbitrary. 300 We now establish the framework to prove the lower bound in Theorem 10.38. Again we fix x0 ∈Rd and a cube Cube = x0 + [0, 1)d. Recall the definition of the collection Dk of dyadic subcubes of sidelength 2−k and let D = S∞ k=1 Dk. Suppose that {Z(I) : I ∈D} is a collection of random variables each taking values in {0, 1}. With this collection we associate the random set A := ∞ \ k=1 [ I∈Dk Z(I)=1 I . Theorem 10.43. Suppose that the random variables {Z(I) : I ∈D} satisfy the monotonicity condition I ⊂J and Z(I) = 1 ⇒ Z(J) = 1. 
Assume that, for some positive constants γ, c1 and C1, (i) c1 |I|γ ≤EZ(I) ≤C1 |I|γ for all I ∈D, (ii) E £ Z(I)Z(J) ¤ ≤C1 |I|2γ dist(I, J)−γ for all I, J ∈Dk and k ≥1. Then, for λ > γ and Λ ⊂Cube closed with Hλ(Λ) > 0, there exists a p > 0, such that P © dim(A ∩Λ) ≥λ −γ ª ≥p . Remark 10.44. Though formally, if the monotonicity condition holds, A is a limsup fractal, the monotonicity establishes a strong dependence of the random variables {Z(I) : I ∈Dk} which in general invalidates the second assumption of Theorem 10.28. We therefore need a result which deals specifically with this situation. ⋄ We prepare the proof with a little lemma, based on Fubini’s theorem. Lemma 10.45. Suppose ν is a probability measure on Rd such that νB(x, r) ≤Crλ for all x ∈Rd, r > 0. Then, for all 0 < β < λ there exists C2 > 0 such that, Z B(x,r) |x −y|−β ν(dy) ≤C2 rλ−β, for every x ∈Rd and r > 0. This implies, in particular, thatZZ |x −y|−β dν(x) dν(y) < ∞. Proof. Fubini’s theorem gives Z B(x,r) |x −y|−β ν(dy) = Z ∞ 0 ν © y ∈B(x, r): |x −y|−β > s ª ds ≤ Z ∞ r−β νB(x, s−1/β) ds + C rλ−β ≤C Z ∞ r−β s−λ/β ds + C rλ−β, which implies the first statement. Moreover, ZZ |x −y|−β dν(x) dν(y) ≤ Z dν(x) Z B(x,1) |x −y|−β dν(y) + 1 ≤C2 + 1 < ∞. 301 Proof of Theorem 10.43. We show that there exists p > 0 such that, for every 0 < β < λ −γ, with probability at least p, there exists a positive measure µ on Λ ∩A such that its β-energy Iβ(µ) is finite. This implies dim(A ∩Λ) ≥β by the energy method, see Theorem 4.27. To begin with, given Λ ⊂Cube with Hλ(Λ) > 0, we use Frostman’s lemma to find a Borel probability measure ν on Λ and positive constants 0 < c < C such that ν(D) ≤C|D|λ for all Borel sets D ⊂Rd. Writing An := [ I∈Dn Z(I)=1 I , we define µn to be the measure supported on Λ given by µn(B) = 2nγ ν(B ∩An) for any Borel set B ⊂Rd. Then, using (i), we get E £ µn(An) ¤ ≥2nγ X I∈Dn ν(I) EZ(I) ≥c1 X I∈Dn ν(I) = c1 . Moreover, using (ii), we obtain E £ µn(An)2¤ ≤22nγ X I∈Dn X J∈Dn E[Z(I)Z(J)]ν(I)ν(J) ≤C1 X I∈Dn X J∈Dn dist(I,J)>0 dist(I, J)−γν(I)ν(J) + C(3 √ d)λ 22nγ 2−nλ X I∈Dn E[Z(I)]ν(I) ≤C13γ ZZ |x −y|−γ dν(x) dν(y) + C1 C (3 √ d)λ =: C3 < ∞, where finiteness of C3 follows from the second statement of Lemma 10.45. We now show that, for every β < λ −γ we find k(β) such that EIβ(µn) ≤k(β). Indeed, EIβ(µn) = 22nγ X I,J∈Dn E[Z(I)Z(J)] Z I dν(x) Z J dν(y) |x −y|−β ≤C1 X I∈Dn X J∈Dn dist(I,J)>0 dist(I, J)−γ Z I dν(x) Z J dν(y) |x −y|−β + C1 2nγ X I∈Dn X J∈Dn dist(I,J)=0 Z I dν(x) Z J dν(y) |x −y|−β . For the first summand, we use that dist(I, J)−γ ≤(3 √ d)γ |x −y|−γ whenever x ∈I and y ∈J, and infer boundedness from the second statement of Lemma 10.45. For the second summand the first statement of Lemma 10.45 gives a bound of C1C2 2nγ (3 √ d2−n)λ−β X I∈Dn ν(I) ≤C1 C2 (3 √ d)λ−β . 302 Hence, EIβ(µn) is bounded uniformly in n, as claimed. We therefore find ℓ(β) > 0 such that P © Iβ(µn) ≥ℓ(β) ª ≤k(β) ℓ(β) ≤c1 8C3 . Now, by the Paley-Zygmund inequality, see Lemma 3.22, P © µn(An) > c1 2 ª ≥P © µn(An) > 1 2 E[µn(An)] ª ≥1 4 E[µn(An)]2 E[µn(An)2] ≥c1 4C3 . Hence we obtain that P © µn(An) > c1 2 , Iβ(µn) < ℓ(β) ª ≥p := c1 8C3 . Using Fatou’s lemma we infer that P © µn(An) > c1 2 , Iβ(µn) < ℓ(β) infinitely often ª ≥p . On this event we can pick a subsequence along which µn converges to some measure µ. Then µ is supported by A and µ(Cube) ≥lim inf µn(Cube) = lim inf µn(An) ≥c1/2. 
Finally, for each ε > 0, where the limit is taken along the chosen subsequence, ZZ |x−y|>ε |x −y|−βdµ(x) dµ(y) = lim ZZ |x−y|>ε |x −y|−βdµn(x) dµn(y) = Iβ(µn) ≤ℓ(β), and for ε ↓0 we get Iβ(µ) ≤ℓ(β). We now use Theorem 10.43 to give a lower bound for the dimension of the set of cone points. Fix α ∈(π, 2π) and a unit cube Cube = x0 + [0, 1]2 ⊂W[α/2, 0] . Choose a large radius R > 2 such that Cube ⊂B(0, R/2) and define rk := R − k X j=1 2−j > R/2 . Given a cube I ∈Ck(x0) we denote by z its centre and let Z(I) = 1 if z is a (2−k, rk)-approximate cone point with direction ξ = π, i.e. if B(0, T2−k(z)) ⊂z + W[α, π], and B(T2−k−1(z), S2−k,rk(z)) ⊂z + W[α, π] , and otherwise let Z(I) = 0. By our choice of the sequence (rk) we have I ⊂J and Z(I) = 1 ⇒ Z(J) = 1. Lemma 10.46. There are constants 0 < c1 < C1 < ∞such that, for any cube I ∈C, we have c1 |I| 2π α ≤P © Z(I) = 1 ª ≤C1 |I| 2π α . 303 Proof. The upper bound is immediate from Lemma 10.41. For the lower bound we use that, for any z ∈Cube and δ > 0, inf |x−z|=δ Px © B(Tδ/2(z)) ∈z + W[α/2, π] ª = inf |x|=1 Px © B(T1/2(0)) ∈W[α/2, π] ª =: c0 > 0 , and hence, if z is the centre of I ∈Ck and δ = 2−k, using Lemmas 10.39 (b) and 10.40 (b), P © Z(I) = 1 ª ≥E h 1{B(0, Tδ(z)) ⊂z + W[α, π]} EB(Tδ(z)) £ 1{B(Tδ/2(z)) ∈z + W[α/2, π] ª × PB(Tδ/2(z)){B(0, S (0) rk (z)) ⊂z + W[α, π]} ¤i ≥c0 c2 δ 2π α ¡ R|z| ¢−π α, which gives the desired statement, as |z| is bounded away from infinity. Lemma 10.47. There is a constant 0 < C1 < ∞such that, for any cubes I, J ∈Ck, k ≥1, we have E £ Z(I)Z(J) ¤ ≤C1 |I| 4π α dist(I, J)−2π α . Proof. Let zI, zJ be the centres of I, resp. J, and abbreviate η := |zI −zJ| and δ := 2−k. Then, for η > 2δ, using the strong Markov property and Lemmas 10.39 (a) and 10.40 (a), E £ Z(I) Z(J) 1{Tδ/2(zI) < Tδ/2(zJ) ¤ ≤E h 1{B(0, Tδ(zI)) ⊂zI + W[α, π]} EB(Tδ/2(zI)) £ 1{B(0, S (0) η/2(zI)) ⊂zI + W[α, π]} × EB(Tη/2(zJ)) £ 1{B(0, Tδ(zJ)) ∈zJ + W[α, π] ª PB(Tδ/2(zJ)){B(0, S (0) rk (zJ)) ⊂zJ + W[α, π]} ¤¤i ≤C4 ¡ δ |zI| ¢ π α ¡ δ η ¢ 2π α ¡ 2δ R ¢ π α ≤(C1/2) |I| 4π α dist(I, J)−2π α , where we define C1 := 2 π α+1(C4 ∨1). Suppose now that η ≤2δ. Then, by a simpler argument, E £ Z(I) Z(J) 1{Tδ/2(zI) < Tδ/2(zJ) ¤ ≤E £ 1{B(0, Tδ(zI)) ⊂zI + W[α, π]} PB(Tδ/2(zJ)){B(0, S (0) rk (zJ)) ⊂zJ + W[α, π]} ¤ ≤C2 ¡ δ |zI| ¢ π α ¡ 2δ R ¢ π α ≤(C1/2) |I| 4π α dist(I, J)−2π α . Exchanging the rˆ oles of I and J gives the corresponding estimate E[Z(I)Z(J)1{Tδ/2(zI) > Tδ/2(zJ)] ≤(C1/2) |I| 4π α dist(I, J)−2π α , and the proof is completed by adding the two estimates. Proof of the lower bound in Theorem 10.38. The set A which we obtain from our choice of {Z(I): I ∈C} is contained in the set ˜ A := © B(t) : t > 0 and B(0, S (t) R/2(B(t))) ⊂B(t) + W[α, π] ª . 304 Therefore, by Theorem 10.43, we have dim ˜ A ≥2 −2π/α with positive probability. Given any 0 < δ < 1/2, we define a sequence τ (δ) 1 ≤τ (δ) 2 ≤. . . of stopping times by τ (δ) 1 = 0 and, for k ≥1, τ (δ) k := S (τ(δ) k−1) δR ¡ B(τ (δ) k−1) ¢ . Denoting A (δ) k := © B(t) : τ (δ) k−1 ≤t ≤τ (δ) k and B(τ (δ) k−1, S (t) R/2(B(t))) ⊂B(t) + W[α, π] ª we have that ˜ A ⊂ ∞ [ k=1 A (δ) k . Now fix β < 2 −2π/α. The events {dim A (δ) k ≥β} all have the same probability, which cannot be zero as this would contradict the lower bound on the dimension of ˜ A. In particular, there exists p (δ) R > 0 such that P © dim © B(t): 0 ≤t ≤S (0) δR(0) and B(0, S (0) R/2(0)) ⊂B(t) + W[α, π] ª ≥β ª ≥p (δ) R . By scaling we get that p (δ) R does not depend on R. 
Hence, by Blumenthal’s zero-one law, we have that p (δ) R = 1 for all δ > 0, R > 0. Letting ε ↓0 we get, almost surely, dim © B(t): t ≤S (0) δR(0), B(0, S (0) R/2(0)) ⊂B(t) + W[α, π] ª ≥2 −2π α for every δ > 0, R > 0. On this event we may choose first R > 1 and then δ > 0 such that S (0) R/2(0) > 1 and S (0) δR(0) < 1, and get that the Hausdorffdimension of the set of α-cone points is at least 2 −2π α . A surprising consequence of the non-existence of cone points for angles smaller then π is that the convex hull of the planar Brownian curve is a fairly smooth set. Theorem 10.48 (Adelman (1982)). Almost surely, the convex hull of {B(s): 0 ≤s ≤1} has a differentiable boundary. Proof. A compact, convex subset H ⊂R2 is said to have a corner at x ∈∂H if there exists a cone with vertex x and opening angle α < π which contains H. If H does not have corners, the supporting hyperplanes are unique at each point x ∈∂H and thus ∂H is a differentiable boundary. So all we have to show is that the convex hull H of {B(s) : 0 ≤s ≤1} has no corners. Clearly, by Spitzer’s theorem, B(0) and B(1) are no corners almost surely. Suppose any other point x ∈∂H is a corner, then obviously it is contained in the path, and therefore it is an α-cone point for some α > π. By Theorem 10.38, almost surely, such points do not exist and this is a contradiction. 305 Exercises Exercise 10.1. Show that, for every metric space E, dimP E = inf{s : Ps(E) < ∞} = sup{s : Ps(E) > 0} = sup{s : Ps(E) = ∞}. Exercise 10.2 ( ∗). Show that, for every metric space E, we have dimP E ≥dim E. Exercise 10.3. Let {mk : k ≥1} be a rapidly increasing sequence of positive integers such that lim k→∞ mk mk+1 = 0. Define two subsets of [0, 1] by E = n ∞ X i=1 xi 2i : xi ∈{0, 1} and xi = 0 if mk + 1 ≤i ≤mk+1 for some even k o and F = n ∞ X i=1 xi 2i : xi ∈{0, 1} and xi = 0 if mk + 1 ≤i ≤mk+1 for some odd k o . Show that (1) dim E = dimME = 0 and dim F = dimMF = 0, (2) dimP E = dimME = 1 and dimP F = dimMF = 1, (3) dim(E × F) ≥1. Exercise 10.4. Show that, almost surely, • dimP Range = 2, for Brownian motion in d ≥2, • dimP Graph = 3 2, for Brownian motion in d = 1, • dimP Zero = 1 2, for Brownian motion in d = 1. 306 Exercise 10.5. Show that, for every a ∈[0, √ 2], we have almost surely, dimP n t ∈[0, 1] : lim sup h↓0 |B(t + h) −B(t)| p h log(1/h) ≥a o = 1. Hint. This can be done directly, but it can also be derived from more general ideas, as formulated for example in Exercise 10.9. Exercise 10.6. Show that lim inf h↓0 sup t∈E |B(t + h) −B(t)| p 2h log(1/h) = p dimM(E). Exercise 10.7 ( ∗). Use Theorem 10.43 to prove once more that the zero set Zero of linear Brownian motion has Hausdorffdimension 1 2 almost surely. Exercise 10.8. Show that, if lim sup n↑∞ log pn n log 2 ≤−γ, for some γ > 0, then, for any compact E ⊂[0, 1] with dimP(E) < γ, we have P © A ∩E ̸= ∅ ª = 0. Note that no independence assumption is needed for this statement. Exercise 10.9 ( ∗). (a) Suppose A is a discrete limsup fractal associated to random variables {Z(I) : I ∈Ck, k ≥1} satisfying the conditions of Theorem 10.28. Then, if dimP(E) > γ, we have almost surely, dimP(A ∩E) = dimP(E). (b) Show that, if dimP(E) > a2, then almost surely dimP(F(a) ∩E) = dimP(E), where F(a) is the set of a-fast points. 307 Exercise 10.10 ( ∗). Give a proof of Lemma 10.40 (a) based on Theorem 7.24. Exercise 10.11. Suppose K ⊂R2 is a compact set and x ∈R2 \ K a point outside the set. Imagine K as a solid body, and x as the position of an observer. 
This observer can only see a part of the body, which can be formally described as K(x) = © y ∈K : [x, y] ∩K = {y} ª , where [x, y] denotes the compact line segment connecting x and y. It is natural to ask for the Hausdorffdimension of the visible part of a set K. Assuming that dim K ≥1, an unresolved conjecture in geometric measure theory claims that, for Lebesgue-almost every x ̸∈K, the Hausdorffdimension of K(x) is one. Show that this conjecture holds for the path of planar Brownian motion, K = B[0, 1], in other words, almost surely, for Lebesgue almost every x ∈R2, the Hausdorffdimension of the visible part B0, 1 is one. Exercise 10.12. Let {B(t): t ≥0} be a planar Brownian motion and α ∈[π, 2π). Show that, almost surely, no double points are α-cone points. Exercise 10.13 ( ∗). Let {B(t): t ≥0} be a planar Brownian motion and α ∈(0, π]. A point x = B(t), 0 < t < 1, is a one-sided α-cone point if there exists ξ ∈[0, 2π) such that B(0, t) ⊂x + W[α, ξ] . (a) Show that for α ≤π 2, almost surely, there are no one-sided α-cone points. (b) Show that for α ∈( π 2, π], almost surely, the set of one-sided α-cone points has Hausdorffdimension 2 −π α. 308 Notes and Comments The paper [OT74] by Orey and Taylor is a seminal work in the study of dimension spectra for exceptional points of Brownian motion. It contains a proof of Theorem 10.3 using the mass distribution principle and direct construction of the Frostman measure. This approach can be extended to other limsup fractals, but this method requires quite strong independence assumptions which make this method difficult in many more general situations. In [OT74] the question how often on a Brownian path the law of the iterated logarithm fails is also answered in the sense that, for θ > 1, almost surely, the set n t > 0: lim sup h↓0 B(t + h) −B(t) p 2h log log(1/h) ≥θ o has zero or infinite Hausdorffmeasure for the gauge function φ(r) = r log(1/r)γ depending whether γ < θ2 −1 or γ > θ2 −1. Our proof of Theorem 10.3 is based on estimates of energy integrals. This method was used by Hu and Taylor [HT97] and Shieh and Taylor [ST99], and our exposition follows closely [DPRZ00]. In the latter paper an interesting class of exceptional times for the Brownian motion is treated, the thick times of Brownian motions in dimension d ≥3. For any time t ∈(0, 1) we let U(t, ε) = L{s ∈(0, 1) : |B(s) −B(t)| ≤ε} the set of times where the Brownian is up to ε near to its position at time t. It is shown that, for all 0 ≤a ≤16 π2, almost surely, dim n t ∈[0, 1] : lim sup ε↓0 U(t,ε) ε2 log(1/ε) ≥a o = 1 −a π2 16 . This paper should be very accessible to anyone who followed the arguments of Section 10.1. The method of [DPRZ00] can be extended to limsup fractals with somewhat weaker independence properties and also extends to the study of dimension spectra with strict equality. The third way to prove Theorem 10.3 is the method of stochastic codimension explored in Section 10.2. An early reference for this method is Taylor [Ta66] who suggested to use the range of stable processes as test sets, and made use of the potential theory of stable processes to obtain lower bounds for Hausdorffdimension. This class of test sets is not big enough for all problems: the Hausdorffdimension of a stable processes is bounded above by its index, hence cannot exceed 2, and therefore these test sets can only test dimensions in the range [d −2, d]. A possible remedy is to pass to multiparameter processes, see the recent book of Khoshnevisan [Kh02] for a survey. 
Later, initiated by seminal papers of Hawkes [Ha81] and R. Lyons [Ly90], it was discovered that percolation limit sets are a very suitable class of test functions, see [KPX00]. Our exposition closely follows the latter reference. Kaufman [Ka75] showed that every compact set E ⊂[0, 1] with dim(E) > a2 almost surely contains an a-fast point, but the more precise result involving the packing dimension is due to [KPX00]. The concept of packing dimension was introduced surprisingly late by Tricot in [Tr82] and in [TT85] it was investigated together with the packing measure and applied to the Brownian path by Taylor and Tricot. Lemma 10.18(i) is from [Tr82], Lemma 10.18(ii) for 309 trees can be found in [BP94], see Proposition 4.2(b), the general version given is in Falconer and Howroyd [FH96] and in Mattila and Mauldin [MM97]. Several people contributed to the investigation of slow points, for example Dvoretzky [Dv63], Kahane [Ka76], Davis [Da83], Greenwood and Perkins [GP83] and Perkins [Pe83]. There are a number of variants, for example one can allow h < 0 in (3.1) or omit the modulus signs. The Hausdorffdimension of a-slow points is discussed in [Pe83], this class of exceptional sets is not tractable with the limsup-methods: note that an exceptional behaviour is required at all small scales. The crucial ingredient, the finiteness criterion for moments of the stopping times T(r, a) is due to Shepp [Sh67]. Cone points were discussed by Evans in [Ev85], an alternative discussion can be found in Lawler’s survey paper [La99]. Our argument essentially follows the latter paper. The corre-lation condition in Theorem 10.43 appears in the strongly related context of quasi-Bernoulli percolation on trees, see Lyons [Ly92]. An alternative notion of global cone points requires that the entire path of the Brownian motion {B(t): t ≥0} stays inside the cone with tip in the cone point. The same dimension formula holds for this concept. The upper bound follows of course from our consideration of local cone points, and our proof gives the lower bound with positive probability. The difficult part is to show that the lower bound holds with probability one. A solution to this problem is contained in Burdzy and San Mart´ ın [BSM89], and this technique has also been successfully used in the study of the Brownian frontier [La96b, BJPP97]. A discussion of the smoothness of the boundary of the convex hull can be found in Cranston, Hsu and March [CHM89], but our Theorem 10.48 is older. The result was stated by L´ evy [Le48] and was probably first proved by Adelman in 1982, though this does not seem to be published. It is conjectured in geometric measure theory that for any set of Hausdorffdimension dim K ≥1, for Lebesgue-almost every x ̸∈K, the Hausdorffdimension of the visible part K(x) is one. For upper bounds on the dimension and the state of the art on this conjecture, see [ON04]. It is natural to compare this to Makarov’s theorem on the support of harmonic measure: if the rays of light were following Brownian paths rather than straight lines, the conjecture would hold by Makarov’s theorem, see [Ma85]. 310 Appendix I: Hints and solutions for selected exercises In this section we give hints, solutions or additional references for the exercises marked with either of the symbols ( ∗) or ( ∗∗) in the main body of the text. Exercise 1.2. Fix times 0 < t1 < . . . < tn. Let M :=      1 0 . . . 0 −1 ... ... . . . 0 ... ... 0 0 0 −1 1     , D :=       1 √t1 0 . . . 0 0 1 √t2−t1 ... . . . . . . ... ... 0 0 . . 
Exercise 1.2. Fix times $0 < t_1 < \dots < t_n$ and let
$$M = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ -1 & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & -1 & 1 \end{pmatrix}, \qquad D = \begin{pmatrix} \frac{1}{\sqrt{t_1}} & 0 & \cdots & 0 \\ 0 & \frac{1}{\sqrt{t_2 - t_1}} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \cdots & 0 & \frac{1}{\sqrt{t_n - t_{n-1}}} \end{pmatrix}.$$
Then, for a Brownian motion $\{B(t) : t \ge 0\}$ started in $x$, by definition the vector $X := DM\,\big(B(t_1) - x, \dots, B(t_n) - x\big)^{\mathsf T}$ has independent standard normal entries. As both $D$ and $M$ are nonsingular, the matrix $A := M^{-1}D^{-1}$ is well-defined and, writing $b = (x, \dots, x)^{\mathsf T}$, we have $\big(B(t_1), \dots, B(t_n)\big)^{\mathsf T} = AX + b$. By definition, this means that $(B(t_1), \dots, B(t_n))$ is a Gaussian random vector.

Exercise 1.4. Note that $\{X(t) : 0 \le t \le 1\}$ is a Gaussian process, while the distributions given in (a) determine Gaussian random vectors. Hence it suffices to identify the means and covariances of $(X(t_1), \dots, X(t_n))$ and compare them with those given in (a). Starting with the mean, on the one hand we obviously have $\mathbb E X(t) = x(1-t) + ty$; on the other hand,
$$\int z\, \frac{p(t,x,z)\, p(1-t,z,y)}{p(1,x,y)}\, dz = \frac{1}{p(1,x,y)} \int \big( z - x(1-t) - ty \big)\, p(t,x,z)\, p(1-t,z,y)\, dz + x(1-t) + ty,$$
and the integral can be seen to vanish by completing the square in the exponent of the integrand. To perform the covariance calculation one may assume that $x = y = 0$, which reduces the complexity of the expressions significantly; see [Du96, (8.5) in Chapter 7] for more details.

Exercise 1.5. $B(t)$ does not oscillate too much between $n$ and $n+1$ if
$$\limsup_{n \to \infty} \frac1n \Big[ \max_{n \le t \le n+1} B(t) - B(n) \Big] = 0.$$
Use the Borel-Cantelli lemma and a suitable estimate for $\mathbb P\{ \max_{0 \le t \le 1} B(t) \ge \varepsilon n \}$ to obtain this result.

Exercise 1.6. One has to improve the lower bound and show that, for every constant $c < \sqrt 2$, almost surely, there exists $\varepsilon > 0$ such that for all $0 < h < \varepsilon$ there exists $t \in [0, 1-h]$ with $|B(t+h) - B(t)| \ge c \sqrt{h \log(1/h)}$. To this end, given $\delta > 0$, let $c < \sqrt 2 - \delta$ and define, for integers $k, n \ge 0$, the events
$$A_{k,n} = \Big\{ B\big((k+1)e^{-\sqrt n}\big) - B\big(k e^{-\sqrt n}\big) > c \big( \sqrt n\, e^{-\sqrt n} \big)^{1/2} \Big\}.$$
Then, using Lemma II.3.1, for any $k \ge 0$,
$$\mathbb P(A_{k,n}) = \mathbb P\Big\{ B\big(e^{-\sqrt n}\big) > c \big( \sqrt n\, e^{-\sqrt n} \big)^{1/2} \Big\} = \mathbb P\big\{ B(1) > c\, n^{1/4} \big\} \ge \frac{c\, n^{1/4}}{c^2 \sqrt n + 1}\, e^{-c^2 \sqrt n/2}.$$
Therefore, by our assumption on $c$, and using that $1 - x \le e^{-x}$ for all $x \ge 0$,
$$\sum_{n=0}^\infty \mathbb P\Big( \bigcap_{k=0}^{\lfloor e^{\sqrt n} - 1 \rfloor} A_{k,n}^c \Big) \le \sum_{n=0}^\infty \big( 1 - \mathbb P(A_{0,n}) \big)^{e^{\sqrt n} - 1} \le \sum_{n=0}^\infty \exp\big( -(e^{\sqrt n} - 1)\, \mathbb P(A_{0,n}) \big) < \infty.$$
From the Borel-Cantelli lemma we thus obtain that, almost surely, there exists $n_0 \in \mathbb N$ such that for all $n \ge n_0$ there exists $t \in [0, 1 - e^{-\sqrt n}]$ of the form $t = k e^{-\sqrt n}$ with
$$\big| B\big(t + e^{-\sqrt n}\big) - B(t) \big| > c \big( \sqrt n\, e^{-\sqrt n} \big)^{1/2}.$$
In addition, we may choose $n_0$ big enough to ensure that $e^{-\sqrt{n_0}}$ is sufficiently small in the sense of Theorem 1.12. Then we pick $\varepsilon = e^{-\sqrt{n_0}}$ and, given $0 < h < \varepsilon$, choose $n$ such that $e^{-\sqrt{n+1}} < h \le e^{-\sqrt n}$. Then, for $t$ as above,
$$\big| B(t+h) - B(t) \big| \ge \big| B\big(t + e^{-\sqrt n}\big) - B(t) \big| - \big| B(t+h) - B\big(t + e^{-\sqrt n}\big) \big| > c \big( \sqrt n\, e^{-\sqrt n} \big)^{1/2} - C \sqrt{ \big( e^{-\sqrt n} - e^{-\sqrt{n+1}} \big) \log\big( 1/(e^{-\sqrt n} - e^{-\sqrt{n+1}}) \big) }.$$
It is not hard to see that the second (subtracted) term decays much more rapidly than the first, so that modifying $n_0$ to ensure that it stays below $\delta ( \sqrt n\, e^{-\sqrt n} )^{1/2}$ gives the result.
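The sharp constant $\sqrt 2$ from Exercise 1.6, combined with Theorem 1.12, says that the largest increment at lag $h$ is of order $\sqrt{2h\log(1/h)}$. The following minimal numpy sketch (the grid size and seed are arbitrary choices, not taken from the text) illustrates this numerically: for a simulated path the ratio of $\sup_t |B(t+h) - B(t)|$ to $\sqrt{2h\log(1/h)}$ stays of order one, approaching one at small lags.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**20                      # grid points on [0, 1]
dt = 1.0 / n
# One path of standard Brownian motion on a fine grid.
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

for k in (4, 8, 12, 16):       # lag h = 2**k * dt, spanning several scales
    lag = 2**k
    h = lag * dt
    osc = np.max(np.abs(B[lag:] - B[:-lag]))   # sup_t |B(t+h) - B(t)|
    print(f"h = {h:.1e}   osc / sqrt(2 h log(1/h)) = "
          f"{osc / np.sqrt(2 * h * np.log(1 / h)):.3f}")
```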
Exercise 1.7. It suffices to show that, for fixed $\varepsilon > 0$ and $c > 0$, almost surely, for all $t \ge 0$ there exists $0 < h < \varepsilon$ with $|B(t+h) - B(t)| > c h^\alpha$. By Brownian scaling we may further assume $\varepsilon = 1$. Note that, after this simplification, the complementary event means that there is a $t_0 \ge 0$ such that
$$\sup_{h \in (0,1)} \frac{B(t_0+h) - B(t_0)}{h^\alpha} \le c \quad \text{and} \quad \inf_{h \in (0,1)} \frac{B(t_0+h) - B(t_0)}{h^\alpha} \ge -c.$$
We may assume that $t_0 \in [0,1)$. Fix an integer $l > 1/(\alpha - \frac12)$, so that the exponent below is summable. Then $t_0 \in \big[ \frac{k-1}{2^n}, \frac{k}{2^n} \big)$ for any large $n$ and some $0 \le k < 2^n - l$, and by the triangle inequality, for all $j \in \{1, \dots, 2^n - k\}$,
$$\Big| B\big( \tfrac{k+j}{2^n} \big) - B\big( \tfrac{k+j-1}{2^n} \big) \Big| \le 2c \big( \tfrac{j+1}{2^n} \big)^\alpha.$$
Now, for any $0 \le k < 2^n - l$, let $\Omega_{n,k}$ be the event
$$\Big\{ \big| B\big( \tfrac{k+j}{2^n} \big) - B\big( \tfrac{k+j-1}{2^n} \big) \big| \le 2c \big( \tfrac{j+1}{2^n} \big)^\alpha \text{ for } j = 1, 2, \dots, l \Big\}.$$
It suffices to show that, almost surely, for all sufficiently large $n$ and all $k \in \{0, \dots, 2^n - l\}$ the event $\Omega_{n,k}$ does not occur. Observe that
$$\mathbb P(\Omega_{n,k}) \le \Big[ \mathbb P\big\{ |B(1)| \le 2^{n/2}\, 2c \big( \tfrac{l+1}{2^n} \big)^\alpha \big\} \Big]^l \le \Big[ 2^{n/2}\, 2c \big( \tfrac{l+1}{2^n} \big)^\alpha \Big]^l,$$
since the normal density is bounded by $1/2$. Hence, for a suitable constant $C$,
$$\mathbb P\Big( \bigcup_{k=0}^{2^n - l} \Omega_{n,k} \Big) \le 2^n \Big[ 2^{n/2}\, 2c \big( \tfrac{l+1}{2^n} \big)^\alpha \Big]^l = C \big[ 2^{1 - l(\alpha - 1/2)} \big]^n,$$
which is summable. Thus
$$\mathbb P\Big( \limsup_{n \to \infty} \bigcup_{k=0}^{2^n - l} \Omega_{n,k} \Big) = 0.$$
This is the required statement and hence the proof is complete.

Exercise 1.8. The proof can be found in [Du95, Chapter 3] or [Ka02, Theorem 3.15].

Exercise 1.10. Argue as in the proof of Theorem 1.30 with $B$ replaced by $B + f$. The resulting term
$$\mathbb P\Big\{ \big| B(1) + \sqrt{2^n} f\big( \tfrac{k+j}{2^n} \big) - \sqrt{2^n} f\big( \tfrac{k+j-1}{2^n} \big) \big| \le 7M/\sqrt{2^n} \Big\}$$
can be estimated in exactly the same manner as for the unshifted Brownian motion.

Exercise 1.11. This can be found, together with stronger and more general results, in [BP84]. Put $I = \big[ B(1), \sup_{0 \le s \le 1} B(s) \big]$ and define a function $g \colon I \to [0,1]$ by setting $g(x) = \sup\{ s \in [0,1] : B(s) = x \}$. First check that almost surely the interval $I$ is non-degenerate, $g$ is strictly decreasing, left continuous and satisfies $B(g(x)) = x$. Then show that almost surely the set of discontinuities of $g$ is dense in $I$. We restrict our attention to the event of probability one on which these assertions hold. Let
$$V_n = \big\{ x \in I : g(x-h) - g(x) > nh \text{ for some } h \in (0, n^{-1}) \big\}.$$
Now show that $V_n$ is open and dense in $I$. By the Baire category theorem, $V := \bigcap_n V_n$ is uncountable and dense in $I$. Now if $x \in V$ then there is a sequence $x_n \uparrow x$ such that $g(x_n) - g(x) > n(x - x_n)$. Setting $t = g(x)$ and $t_n = g(x_n)$ we have $t_n \downarrow t$ and $t_n - t > n(B(t) - B(t_n))$, from which it follows that $D^* B(t) \ge 0$. On the other hand $D^* B(t) \le 0$, since $B(s) \le B(t)$ for all $s \in (t, 1)$ by definition of $t = g(x)$; hence $D^* B(t) = 0$.
Exercise 1.12. We first fix some positive $\varepsilon$ and positive $a$. For some small $h$ and an interval $I \subset [\varepsilon, 1-\varepsilon]$ of length $h$, we consider the event $A$ that $t_0 \in I$ and $B(t_0 + \tilde h) - B(t_0) > -2a h^{1/4}$ for some $h^{1/4} < \tilde h \le 2 h^{1/4}$. We denote by $t_L$ the left endpoint of $I$. Using Theorem 1.12 we see that there exists some positive $C$ so that $B(t_0) - B(t_L) \le C \sqrt{h \log(1/h)}$. Hence the event $A$ implies the following events:
$$A_1 = \big\{ B(t_L - s) - B(t_L) \le C \sqrt{h \log(1/h)} \text{ for all } s \in [0, \varepsilon] \big\},$$
$$A_2 = \big\{ B(t_L + s) - B(t_L) \le C \sqrt{h \log(1/h)} \text{ for all } s \in [0, h^{1/4}] \big\}.$$
We now define $T := \inf\{ s > t_L + h^{1/4} : B(s) > B(t_L) - 2a h^{1/4} \}$. Then by definition we have $T \le t_L + 2 h^{1/4}$, and this implies the event
$$A_3 = \big\{ B(T+s) - B(T) \le 2a h^{1/4} + C \sqrt{h \log(1/h)} \text{ for all } s \in [0, \varepsilon] \big\}.$$
Now by the strong Markov property these three events are independent and we obtain $\mathbb P(A) \le \mathbb P(A_1)\, \mathbb P(A_2)\, \mathbb P(A_3)$. We estimate the probabilities of these three events and obtain
$$\mathbb P(A_1) = \mathbb P\big\{ B(\varepsilon) \le C \sqrt{h \log(1/h)} \big\} \le \frac{2C \sqrt{h \log(1/h)}}{\sqrt{2\pi\varepsilon}}, \qquad \mathbb P(A_2) = \mathbb P\big\{ B(h^{1/4}) \le C \sqrt{h \log(1/h)} \big\} \le \frac{2C \sqrt{h \log(1/h)}}{\sqrt{2\pi}\, h^{1/8}},$$
$$\mathbb P(A_3) = \mathbb P\big\{ B(\varepsilon) \le 2a h^{1/4} + C \sqrt{h \log(1/h)} \big\} \le \frac{2 \big( C h^{1/4} + 2a h^{1/4} \big)}{\sqrt{2\pi\varepsilon}}.$$
Hence we obtain, for a suitable constant $K > 0$ depending on $a$ and $\varepsilon$, that $\mathbb P(A) \le K h^{9/8} \log(1/h)$. Summing over a covering collection of $1/h$ intervals of length $h$ gives the bound
$$\mathbb P\big\{ t_0 \in [\varepsilon, 1-\varepsilon] \text{ and } B(t_0 + \tilde h) - B(t_0) > -2a h^{1/4} \text{ for some } h^{1/4} < \tilde h \le 2 h^{1/4} \big\} \le K \log(1/h)\, h^{1/8}.$$
Taking $h = 2^{-4n-4}$ in this bound and summing over $n$, we see that
$$\sum_{n=1}^\infty \mathbb P\Big\{ t_0 \in [\varepsilon, 1-\varepsilon] \text{ and } \sup_{2^{-n-1} < h \le 2^{-n}} \frac{B(t_0+h) - B(t_0)}{h} > -a \Big\} < \infty,$$
and from the Borel-Cantelli lemma we obtain that, almost surely, either $t_0 \notin [\varepsilon, 1-\varepsilon]$ or
$$\limsup_{h \downarrow 0} \frac{B(t_0+h) - B(t_0)}{h} \le -a.$$
Now recall that $a$ and $\varepsilon$ are arbitrary positive numbers, so letting $a \to \infty$ and $\varepsilon \to 0$ along countable sequences gives that, almost surely, $D B(t_0) = -\infty$, as required.

Exercise 1.13. By Brownian scaling it suffices to consider the case $t = 1$.

(a) We first show that, given $M > 0$ large, for any fixed point $s \in [0,1]$, almost surely there exists $n \in \mathbb N$ such that the dyadic interval $I(n,s) := [k 2^{-n}, (k+1) 2^{-n}]$ containing $s$ satisfies
$$(\ast) \qquad \big| B\big( (k+1) 2^{-n} \big) - B\big( k 2^{-n} \big) \big| \ge M 2^{-n/2}.$$
To see this, it is best to consider the construction of Brownian motion, see Theorem 1.4. Using the notation of that proof, let $d_0 = 1$ and let $d_{n+1} \in \mathcal D_{n+1} \setminus \mathcal D_n$ be the dyadic point that splits the interval $[k 2^{-n}, (k+1) 2^{-n})$ containing $s$. This defines a sequence $Z_{d_n}$, $n = 0, 1, \dots$, of independent, normally distributed random variables. Now let
$$n = \min\big\{ k \in \{0, 1, \dots\} : |Z_{d_k}| \ge 3M \big\},$$
which is almost surely well-defined. Moreover, writing $d = d_n$,
$$3M \le |Z_d| = 2^{\frac{n-1}{2}} \big| 2B(d) - B(d - 2^{-n}) - B(d + 2^{-n}) \big| \le 2^{\frac{n+1}{2}} \big| B(d) - B(d \pm 2^{-n}) \big| + 2^{\frac{n-1}{2}} \big| B(d + 2^{-n}) - B(d - 2^{-n}) \big|,$$
where $\pm$ indicates that the inequality holds with either choice of sign. This implies that either $I(n,s)$ or $I(n-1,s)$ satisfies $(\ast)$. We denote by $N(s)$ the smallest nonnegative integer $n$ for which $(\ast)$ holds. By Fubini's theorem, almost surely, we have $N(s) < \infty$ for almost every $s \in [0,1]$. On this event, we can pick a finite collection of disjoint dyadic intervals $[t_{2j}, t_{2j+1}]$, $j = 0, \dots, k-1$, with summed lengths exceeding $1/2$, say, such that the partition $0 = t_0 < \dots < t_{2k} = 1$ given by their endpoints satisfies
$$\sum_{j=1}^{2k} \big( B(t_j) - B(t_{j-1}) \big)^2 \ge M^2 \sum_{j=0}^{k-1} (t_{2j+1} - t_{2j}) \ge \frac{M^2}{2},$$
from which (a) follows, as $M$ was arbitrary.

(b) Note that the number of (finite) partitions of $[0,1]$ consisting of dyadic points is countable. Hence, by (a), given $n \in \mathbb N$, we can find a finite set $P_n$ of partitions such that, with probability bigger than $1 - \frac1n$, some partition $0 = t_0 < \dots < t_k = 1$ in $P_n$ has the property that
$$\sum_{j=1}^k \big( B(t_j) - B(t_{j-1}) \big)^2 \ge n.$$
Successively enumerating the partitions in $P_1, P_2, \dots$ yields a sequence satisfying the requirement of (b).

Exercise 1.14. To see convergence in the $L^2$-sense one can use the independence of the increments of a Brownian motion:
$$\mathbb E\Big[ \sum_{j=1}^{k(n)} \big( B(t^{(n)}_{j+1}) - B(t^{(n)}_j) \big)^2 - t \Big]^2 = \sum_{j=1}^{k(n)} \mathbb E\Big[ \big( B(t^{(n)}_{j+1}) - B(t^{(n)}_j) \big)^2 - \big( t^{(n)}_{j+1} - t^{(n)}_j \big) \Big]^2 \le \sum_{j=1}^{k(n)} \mathbb E\Big[ \big( B(t^{(n)}_{j+1}) - B(t^{(n)}_j) \big)^4 + \big( t^{(n)}_{j+1} - t^{(n)}_j \big)^2 \Big].$$
Now, using that the fourth moment of a centred normal distribution with variance $\sigma^2$ is $3\sigma^4$, this can be estimated by a constant multiple of $\sum_{j=1}^{k(n)} \big( t^{(n)}_{j+1} - t^{(n)}_j \big)^2$, which goes to zero. Moreover, by the Markov inequality,
$$\mathbb P\Big\{ \Big| \sum_{j=1}^{k(n)} \big( B(t^{(n)}_{j+1}) - B(t^{(n)}_j) \big)^2 - t \Big| > \varepsilon \Big\} \le \varepsilon^{-2}\, \mathbb E\Big[ \sum_{j=1}^{k(n)} \big( B(t^{(n)}_{j+1}) - B(t^{(n)}_j) \big)^2 - t \Big]^2,$$
and summability of the right-hand side together with the Borel-Cantelli lemma ensures almost sure convergence.
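A quick numerical illustration of Exercise 1.14 (a minimal sketch; the resolution and seed are arbitrary choices, not from the text): along dyadic partitions of mesh $2^{-n}$, the sum of squared increments of a simulated path concentrates around $t = 1$ as the mesh shrinks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_max = 18
# Brownian path on the dyadic grid of level n_max over [0, 1].
B = np.concatenate(([0.0],
                    np.cumsum(rng.normal(0.0, 2**(-n_max / 2), 2**n_max))))

for n in (6, 10, 14, 18):
    step = 2**(n_max - n)                 # mesh 2**-n uses every step-th point
    incr = np.diff(B[::step])
    print(f"n = {n:2d}   sum of squared increments = {np.sum(incr**2):.4f}")
# The sums approach t = 1 as the mesh goes to zero.
```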
Exercise 2.3. We first show that, given two disjoint closed time intervals, the maxima of Brownian motion on them are almost surely different. For this purpose, let $[a_1, b_1]$ and $[a_2, b_2]$ be two fixed intervals with $b_1 < a_2$, and denote by $m_1$ and $m_2$ the maxima of Brownian motion on these two intervals. Applying the Markov property at time $b_1$ we see that the random variable $B(a_2) - B(b_1)$ is independent of $m_1 - B(b_1)$. Using the Markov property at time $a_2$ we see that $m_2 - B(a_2)$ is also independent of both these variables. Conditioning on the values of the random variables $m_1 - B(b_1)$ and $m_2 - B(a_2)$, the event $m_1 = m_2$ can be written as
$$B(a_2) - B(b_1) = m_1 - B(b_1) - \big( m_2 - B(a_2) \big).$$
The left-hand side being a continuous random variable and the right-hand side a constant, we see that this event has probability $0$. Now the statement just proved holds jointly for all disjoint pairs of intervals with rational endpoints. The proposition follows, since if Brownian motion had a non-strict local maximum, there would be two disjoint rational intervals on which Brownian motion has the same maximum.

Exercise 2.4. (i) If $A \in \mathcal F(S)$, then
$$A \cap \{T \le t\} = \big( A \cap \{S \le t\} \big) \cap \{T \le t\} \in \mathcal F^+(t).$$
(ii) By (i), $\mathcal F(T) \subset \mathcal F(T_n)$ for all $n$, which proves "$\subset$". On the other hand, if $A \in \bigcap_{n=1}^\infty \mathcal F(T_n)$, then for all $t \ge 0$,
$$A \cap \{T < t\} = \bigcup_{k=1}^\infty \bigcap_{n=k}^\infty A \cap \{T_n < t\} \in \mathcal F^+(t).$$
(iii) Look at the discrete stopping times $T_n$ defined in the previous example. We have, for any Borel set $A \subset \mathbb R^d$,
$$\{ B(T_n) \in A \} \cap \{ T_n \le k 2^{-n} \} = \bigcup_{m=0}^k \Big( \{ B(m 2^{-n}) \in A \} \cap \{ T_n = m 2^{-n} \} \Big) \in \mathcal F^+(k 2^{-n}).$$
Hence $B(T_n)$ is $\mathcal F(T_n)$-measurable and, as $T_n \downarrow T$, we get that $B(T) = \lim B(T_n)$ is $\mathcal F(T_n)$-measurable for any $n$. Hence $B(T)$ is $\mathcal F(T)$-measurable by part (ii).

Exercise 2.5. If $T = 0$ almost surely, there is nothing to show, hence we may assume $\mathbb E[T] > 0$. (A numerical sketch of the resulting identities follows this exercise.)

(a) By construction, $T_n$ is the sum of $n$ independent random variables with the law of $T$; hence, by the law of large numbers, almost surely,
$$\lim_{n \to \infty} \frac{T_n}{n} = \mathbb E[T] > 0,$$
which, by assumption, is finite. This implies in particular that $T_n \to \infty$ almost surely, and together with the law of large numbers for Brownian motion, Corollary 1.11, we get, almost surely, $\lim_{n\to\infty} B(T_n)/T_n = 0$. The two limit statements together show that, almost surely,
$$\lim_{n \to \infty} \frac{B(T_n)}{n} = \lim_{n \to \infty} \frac{B(T_n)}{T_n}\, \lim_{n \to \infty} \frac{T_n}{n} = 0.$$
(b) Again by construction, $B(T_n)$ is the sum of $n$ independent random variables with the law of $B(T)$, which we conveniently denote $X_1, X_2, \dots$. As
$$\lim_{n \to \infty} \frac{X_n}{n} = \lim_{n \to \infty} \frac{B(T_n)}{n} - \lim_{n \to \infty} \frac{B(T_{n-1})}{n} = 0,$$
the event $\{ |X_n| \ge n \}$ occurs only finitely often, so that the Borel-Cantelli lemma implies
$$\sum_{n=0}^\infty \mathbb P\{ |X_n| \ge n \} < \infty.$$
Hence, as the $X_n$ are identically distributed, we have
$$\mathbb E|B(T)| = \mathbb E|X_1| \le \sum_{n=0}^\infty \mathbb P\{ |X_n| \ge n \} < \infty.$$
(c) By the law of large numbers, almost surely,
$$\lim_{n \to \infty} \frac{B(T_n)}{n} = \lim_{n \to \infty} \frac1n \sum_{j=1}^n X_j = \mathbb E[B(T)].$$
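Exercise 2.5 is Wald's lemma for Brownian motion: if $\mathbb E[T] < \infty$ then $\mathbb E[B(T)] = 0$ and, by the second lemma, $\mathbb E[B(T)^2] = \mathbb E[T]$. A rough Monte Carlo sketch for the exit time of an interval (step size, sample count and the interval $(-1, 2)$ are arbitrary choices, and the discretisation biases the estimates slightly):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-3
a, b = -1.0, 2.0
samples_BT, samples_T = [], []

for _ in range(1000):
    t, x = 0.0, 0.0
    # Run the discretised path until it leaves the interval (a, b).
    while a < x < b:
        x += rng.normal(0.0, np.sqrt(dt))
        t += dt
    samples_BT.append(x)
    samples_T.append(t)

print("E[B(T)] ~", np.mean(samples_BT))   # close to 0 (Wald's first lemma)
print("E[T]    ~", np.mean(samples_T))    # close to |a|*b = 2 (second lemma)
```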
Exercise 2.7. Let $S$ be a nonempty closed set with no isolated points. To see that it is uncountable, we construct a subset with the cardinality of $\{1,2\}^{\mathbb N}$. Start by choosing a point $x_1 \in S$. As this point is not isolated, there exists a further, different point $x_2 \in S$. Now pick two disjoint closed balls $B_1, B_2$ around these points. Again, as $x_1$ is not isolated, we can find two points in $B_1 \cap S$, around which we can put disjoint closed balls contained in $B_1$, and similarly for $B_2 \cap S$, and so on. Now there is a bijection between $\{1,2\}^{\mathbb N}$ and the decreasing sequences of balls in our construction. The intersection of the balls in each such sequence contains, as $S$ is closed, at least one point of $S$, and two points belonging to two different sequences are clearly different. This completes the proof.

Exercise 2.11. By Fubini's theorem,
$$\mathbb E[T^\alpha] = \int_0^\infty \mathbb P\{ T > x^{1/\alpha} \}\, dx \le 1 + \int_1^\infty \mathbb P\big\{ M(x^{1/\alpha}) < 1 \big\}\, dx.$$
Note that, by Brownian scaling, $\mathbb P\{ M(x^{1/\alpha}) < 1 \} \le C x^{-\frac{1}{2\alpha}}$ for a suitable constant $C > 0$, which implies $\mathbb E[T^\alpha] < \infty$, as required.

Exercise 2.14. By Exercise 2.13 the process $\{X(t) : t \ge 0\}$ defined by $X(t) = \exp\{ 2b B(t) - 2b^2 t \}$ for $t \ge 0$ is a martingale. Observe that $T = \inf\{ t > 0 : B(t) = a + bt \}$ is a stopping time for the natural filtration, which is finite exactly if $B(t) = a + bt$ for some $t > 0$. On the event $\{T < \infty\}$ we have $B(T) = a + bT$ and hence $X(T) = e^{2ab}$, so that
$$\mathbb P\{ T < \infty \} = e^{-2ab}\, \mathbb E\big[ X(T)\, \mathbf 1\{T < \infty\} \big],$$
and because $\{X^T(t) : t \ge 0\}$ is a bounded martingale converging almost surely to $X(T)\mathbf 1\{T < \infty\}$, the right-hand side equals $e^{-2ab}$. (See the numerical sketch following Exercise 2.18.)

Exercise 2.15. Use the binomial expansion of $\big( B(t) + (B(t+h) - B(t)) \big)^3$ to deduce that $X(t) = B(t)^3 - 3t B(t)$ defines a martingale. We know that $\mathbb P_x\{ T_R < T_0 \} = x/R$. Write $\tau^* = \tau(\{0, R\})$. Then
$$x^3 = \mathbb E_x[X(0)] = \mathbb E_x[X(\tau^*)] = \mathbb P_x\{ T_R < T_0 \}\, \mathbb E_x\big[ X(\tau^*) \mid T_R < T_0 \big] = \mathbb P_x\{ T_R < T_0 \}\, \mathbb E_x\big[ R^3 - 3\tau^* R \mid T_R < T_0 \big] = \frac xR \big( R^3 - 3\gamma R \big) = x \big( R^2 - 3\gamma \big).$$
Solving the last equation for $\gamma$ gives the claim.

Exercise 2.18. Part (a) can be proved similarly to Theorem 2.47, which is in fact the special case $\lambda = 0$ of this exercise. For part (b) choose $u \colon U \to \mathbb R$ as a bounded solution of
$$\tfrac12 \Delta u(x) = \lambda u(x) \quad \text{for } x \in U, \qquad \text{with } \lim_{x \to x_0} u(x) = f(x_0) \text{ for all } x_0 \in \partial U.$$
Then
$$X(t) = e^{-\lambda t} u(B(t)) - \int_0^t e^{-\lambda s} \big( \tfrac12 \Delta u(B(s)) - \lambda u(B(s)) \big)\, ds$$
defines a martingale. For any compact $K \subset U$ we can pick a twice continuously differentiable function $v \colon \mathbb R^d \to \mathbb R$ with $v = u$ on $K$ and $v = 0$ on $U^c$. Apply the optional stopping theorem to the stopping times $S = 0$ and $T = \inf\{ t \ge 0 : B(t) \notin K \}$ to get, for every $x \in K$,
$$u(x) = \mathbb E[X(0)] = \mathbb E[X(T)] = \mathbb E_x\big[ e^{-\lambda T} u(B(T)) \big].$$
Now choose a sequence $K_n \uparrow U$ of compact sets and pass to the limit on the right-hand side of the equation, using the boundary condition.
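The identity $\mathbb P\{T < \infty\} = e^{-2ab}$ of Exercise 2.14 can be checked by simulation (a minimal sketch; the step size, time horizon and parameter values are arbitrary choices, and both the discrete monitoring and the finite horizon make the empirical value a slight underestimate):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b = 1.0, 0.5            # line a + b t with a, b > 0
dt, horizon = 1e-3, 50.0   # finite horizon as a proxy for t -> infinity
n_steps = int(horizon / dt)
t = dt * np.arange(1, n_steps + 1)
hits, n_paths = 0, 4000

for _ in range(n_paths):
    path = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
    if np.any(path >= a + b * t):   # did the path reach the line?
        hits += 1

print("empirical P{T < inf} ~", hits / n_paths)
print("exact     exp(-2ab)  =", np.exp(-2 * a * b))   # ~0.368
```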
Exercise 3.2. To prove the result for $k = 1$, estimate $|u(x) - u(y)|$ in terms of $|x - y|$ using the mean value property of harmonic functions and the fact that, if $x$ and $y$ are close, the volume of the symmetric difference of $\mathcal B(x,r)$ and $\mathcal B(y,r)$ is bounded by a constant multiple of $r^{d-1} |x - y|$. For general $k$ note that the partial derivatives of a harmonic function are themselves harmonic, and iterate the estimate.

Exercise 3.4. Define a random variable $Y$ by $Y := X$ if $X > \lambda \mathbb E[X]$, and $Y := 0$ otherwise. Applying the Cauchy-Schwarz inequality to $\mathbb E[Y] = \mathbb E[Y \mathbf 1\{Y > 0\}]$ gives
$$\mathbb E[Y \mathbf 1\{Y > 0\}] \le \mathbb E[Y^2]^{1/2}\, \mathbb P\{Y > 0\}^{1/2},$$
hence, as $X \ge Y \ge X - \lambda \mathbb E[X]$, we get
$$\mathbb P\{ X > \lambda \mathbb E[X] \} = \mathbb P\{ Y > 0 \} \ge \frac{\mathbb E[Y]^2}{\mathbb E[Y^2]} \ge (1 - \lambda)^2\, \frac{\mathbb E[X]^2}{\mathbb E[X^2]}.$$

Exercise 3.6. For $d \ge 3$, choose $a$ and $b$ such that $a + b r^{2-d} = \tilde u(r)$ and $a + b R^{2-d} = \tilde u(R)$. Notice that the harmonic functions given by $u(x) = \tilde u(|x|)$ and $v(x) = a + b |x|^{2-d}$ agree on $\partial D$. They also agree on $D$ by Corollary 3.7, so $u(x) = a + b |x|^{2-d}$. By a similar consideration we can show that $u(x) = a + b \log|x|$ in the case $d = 2$.

Exercise 3.7. Let $x, y \in \mathbb R^d$ and $a = |x - y|$. Suppose $u$ is a positive harmonic function. Then
$$u(x) = \frac{1}{\mathcal L \mathcal B(x,R)} \int_{\mathcal B(x,R)} u(z)\, dz \le \frac{\mathcal L \mathcal B(y, R+a)}{\mathcal L \mathcal B(x,R)} \cdot \frac{1}{\mathcal L \mathcal B(y, R+a)} \int_{\mathcal B(y,R+a)} u(z)\, dz = \frac{(R+a)^d}{R^d}\, u(y).$$
This converges to $u(y)$ as $R \to \infty$, so $u(x) \le u(y)$ and, by symmetry, $u(x) = u(y)$ for all $x, y$. Hence $u$ is constant.

Exercise 3.8. Uniqueness is clear, because there is at most one continuous extension of $u$. Let $D_0 \subset D$ be a ball containing $x$ whose closure is contained in $D$. Then $u$ is bounded and harmonic on $D_1 = D_0 \setminus \{x\}$ and continuous on $\overline{D_1} \setminus \{x\}$. Show that this already implies that $u(z) = \mathbb E_z[u(B(\tau(D_1)))]$ on $D_1$, and that the right-hand side has an obvious harmonic extension to $D_1 \cup \{x\}$, which defines the global extension.

Exercise 3.12. To obtain joint continuity one can show equicontinuity of $G(x, \cdot)$ and $G(\cdot, x)$ on $D \setminus \mathcal B(x, \varepsilon)$ for any $\varepsilon > 0$. This follows from the fact that these functions are harmonic, by Exercise 3.11, and the estimates of Exercise 3.2.

Exercise 3.13. Recall that
$$G(x,y) = \tfrac1\pi \log\big( 1/|x - y| \big) - \tfrac1\pi\, \mathbb E_x\big[ \log\big( 1/|B(\tau) - y| \big) \big].$$
The expectation can be evaluated (one can see how in the proof of Theorem 3.43). The final answer is
$$G(x,y) = \begin{cases} -\tfrac1\pi \log\big| x/R - y/R \big| + \tfrac1\pi \log\Big| \tfrac{x}{|x|} - \tfrac{|x|\, y}{R^2} \Big| & \text{if } x \ne 0,\ x, y \in \mathcal B(0,R), \\[2pt] -\tfrac1\pi \log|y/R| & \text{if } x = 0,\ y \in \mathcal B(0,R). \end{cases}$$

Exercise 3.14. Suppose $x, y \notin \mathcal B(0,r)$ and $A \subset \mathcal B(0,r)$ is compact. Then, by the strong Markov property applied to the first hitting time of $\partial \mathcal B(0,r)$,
$$\mu_A(x, \cdot) = \int_{\partial \mathcal B(0,r)} \mu_A(z, \cdot)\, \mu_{\partial \mathcal B(0,r)}(x, dz).$$
Use Theorem 3.43 to show that $\mu_{\partial \mathcal B(0,r)}(x, S) \le C\, \mu_{\partial \mathcal B(0,r)}(y, S)$ for all Borel sets $S \subset \partial \mathcal B(0,r)$, with a constant $C$ not depending on $S$. Complete the argument from there.

Exercise 4.1. Let $\alpha = \log 2 / \log 3$. For the upper bound it suffices to find an efficient covering of $C$ by intervals of diameter $\varepsilon$. If $\varepsilon \in (0,1)$ is given, let $n$ be the integer such that $1/3^n < 2\varepsilon \le 1/3^{n-1}$ and look at the sets
$$\Big[ \sum_{i=1}^n \frac{x_i}{3^i},\ \sum_{i=1}^n \frac{x_i}{3^i} + \varepsilon \Big] \quad \text{for } (x_1, \dots, x_n) \in \{0, 2\}^n.$$
These sets obviously cover $C$, and each of them is contained in a ball of radius $\varepsilon$ centred in a point of $C$. Hence
$$M(C, \varepsilon) \le 2^n = 3^{\alpha n} = 3^\alpha \big( 3^{n-1} \big)^\alpha \le 3^\alpha (1/\varepsilon)^\alpha.$$
This implies $\dim_M C \le \alpha$. For the lower bound we may assume we have a covering by intervals $(x_k - \varepsilon, x_k + \varepsilon)$ with $x_k \in C$, and let $n$ be the integer such that $1/3^{n+1} \le 2\varepsilon < 1/3^n$. Let $x_k = \sum_{i=1}^\infty x_{i,k} 3^{-i}$. Then
$$(x_k - \varepsilon, x_k + \varepsilon) \cap C \subset \Big\{ \sum_{i=1}^\infty \frac{y_i}{3^i} : y_1 = x_{1,k}, \dots, y_n = x_{n,k} \Big\},$$
and we need at least $2^n$ sets of the latter type to cover $C$. Hence
$$M(C, \varepsilon) \ge 2^n = 3^{\alpha n} = (1/3)^\alpha \big( 3^{n+1} \big)^\alpha \ge (1/3)^\alpha \big( 1/(2\varepsilon) \big)^\alpha.$$
This implies $\dim_M C \ge \alpha$. (A numerical box-counting check follows this exercise block.)

Exercise 4.2. Given $\varepsilon \in (0,1)$, find the integer $n$ such that $1/(n+1)^2 \le \varepsilon < 1/n^2$. Then the points in $\{1/k : k > n\} \cup \{0\}$ can be covered by $n+1$ intervals of diameter $\varepsilon$, and $n$ further balls suffice to cover the remaining $n$ points. Hence
$$M(E, \varepsilon) \le 2n + 1 \le \frac{2n+1}{n}\, (1/\varepsilon)^{1/2},$$
implying $\dim_M(E) \le 1/2$. On the other hand, as the distance between neighbouring points is
$$\frac1k - \frac1{k+1} = \frac{1}{k(k+1)} \ge \frac{1}{(k+1)^2},$$
we always need at least $n - 1$ sets of diameter $\varepsilon$ to cover $E$, which implies
$$M(E, \varepsilon) \ge n - 1 \ge \frac{n-1}{n+1}\, (1/\varepsilon)^{1/2},$$
hence $\dim_M(E) \ge 1/2$.
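A box-counting check of Exercise 4.1 (a minimal numpy sketch; the construction level and box sizes are arbitrary choices): counting occupied boxes of side $\varepsilon = 3^{-k}$ for the middle-thirds Cantor set reproduces the ratio $\log N(\varepsilon)/\log(1/\varepsilon) = \log 2/\log 3$.

```python
import numpy as np

def cantor_points(level):
    """Left endpoints of the 2**level intervals of the Cantor construction."""
    pts = np.array([0.0])
    for _ in range(level):
        pts = np.concatenate((pts / 3.0, pts / 3.0 + 2.0 / 3.0))
    return pts

pts = cantor_points(12)                      # resolution 3**-12
for k in (4, 6, 8):
    eps = 3.0 ** -k                          # boxes of side eps = 3**-k
    boxes = np.unique(np.floor(pts / eps)).size
    print(f"k = {k}:  log N(eps) / log(1/eps) = "
          f"{np.log(boxes) / np.log(1 / eps):.4f}")
print("log 2 / log 3 =", np.log(2) / np.log(3))   # ~ 0.6309
```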
Exercise 4.3. Suppose $E$ is a bounded metric space with $\dim_M E < \alpha$. Choose $\varepsilon > 0$ such that $\dim_M E < \alpha - \varepsilon$. Then, for every $k$ there exist $0 < \delta < \frac1k$ and a covering $E_1, \dots, E_n$ of $E$ by sets of diameter at most $\delta$ with $n \le \delta^{-\alpha + \varepsilon}$. The $\alpha$-value of this covering is at most $n \delta^\alpha \le \delta^\varepsilon$, which tends to zero for large $k$. Hence $\mathcal H^\alpha_\infty(E) = 0$, and $\dim E \le \alpha$.

Exercise 4.4. Indeed, as $E \subset F$ implies $\dim E \le \dim F$, it is obvious that
$$\dim \bigcup_{k=1}^\infty E_k \ge \sup\{ \dim E_k : k \ge 1 \}.$$
To see the converse, we use
$$\mathcal H^\alpha_\infty\Big( \bigcup_{k=1}^\infty E_k \Big) \le \inf\Big\{ \sum_{k=1}^\infty \sum_{j=1}^\infty |E_{j,k}|^\alpha : E_{1,k}, E_{2,k}, \dots \text{ covers } E_k \Big\} = \sum_{k=1}^\infty \inf\Big\{ \sum_{j=1}^\infty |E_{j,k}|^\alpha : E_{1,k}, E_{2,k}, \dots \text{ covers } E_k \Big\} = \sum_{k=1}^\infty \mathcal H^\alpha_\infty(E_k).$$
Hence
$$\dim \bigcup_{k=1}^\infty E_k \le \sup\Big\{ \alpha \ge 0 : \mathcal H^\alpha_\infty\Big( \bigcup_{k=1}^\infty E_k \Big) > 0 \Big\} \le \sup\Big\{ \alpha \ge 0 : \sum_{k=1}^\infty \mathcal H^\alpha_\infty(E_k) > 0 \Big\} \le \sup_{k \ge 1}\, \sup\big\{ \alpha \ge 0 : \mathcal H^\alpha_\infty(E_k) > 0 \big\}.$$
This proves the converse inequality.

Exercise 4.6. Suppose that $f$ is surjective and $\alpha$-Hölder continuous with Hölder constant $C > 0$, and assume that $\mathcal H^{\alpha\beta}(E_1) < \infty$. Given $\varepsilon, \delta > 0$ we can cover $E_1$ with sets $B_1, B_2, \dots$ of diameter at most $\delta$ such that
$$\sum_{i=1}^\infty |B_i|^{\alpha\beta} \le \mathcal H^{\alpha\beta}(E_1) + \varepsilon.$$
Note that the sets $f(B_1), f(B_2), \dots$ cover $E_2$ and that $|f(B_i)| \le C |B_i|^\alpha \le C \delta^\alpha$. Hence
$$\sum_{i=1}^\infty |f(B_i)|^\beta \le C^\beta \sum_{i=1}^\infty |B_i|^{\alpha\beta} \le C^\beta\, \mathcal H^{\alpha\beta}(E_1) + C^\beta \varepsilon,$$
from which the claimed result for the Hausdorff measure readily follows.

Exercise 4.8. Start with $d = 1$. For any $0 < a < 1/2$ let $C(a)$ be the Cantor set obtained by iteratively removing from each construction interval a central interval of $1 - 2a$ times its length. Note that at the $n$th level of the construction we have $2^n$ intervals, each of length $a^n$. It is not hard to show that $C(a)$ has Hausdorff dimension $\log 2 / \log(1/a)$, which solves the problem in the case $d = 1$. For arbitrary dimension $d$ and given $\alpha$, we find $a$ such that $\dim C(a) = \alpha/d$. Then the $d$-fold Cartesian product $C(a) \times \dots \times C(a)$ has dimension $\alpha$. The upper bound is straightforward, and the lower bound can be verified, for example, from the mass distribution principle, by considering the natural measure that places mass $2^{-dn}$ on each of the $2^{dn}$ cubes of side length $a^n$ at the $n$th construction level.

Exercise 4.13. Recall that it suffices to show that $\mathcal H^{1/2}(\mathrm{Rec}) = 0$ almost surely. In the proof of Lemma 4.21 the maximum process was used to define a measure on the set of record points; this measure can be used to define 'big intervals' analogous to the 'big cubes' in the proof of Theorem 4.18. A covering strategy similar to the one in that proof yields the result.

Exercise 5.1. Use the Borel-Cantelli lemma for the events
$$E_n = \Big\{ \sup_{n \le t < n+1} B(t) - B(n) \ge \sqrt{a \log n} \Big\}$$
and test for which values of $a$ the series $\sum \mathbb P(E_n)$ converges. To estimate the probabilities, the reflection principle and Lemma II.3.1 will be useful.

Exercise 5.2. The lower bound is immediate from the one-dimensional statement. For the upper bound pick a finite set $S \subset \partial \mathcal B(0,1)$ of directions such that for every $x \in \partial \mathcal B(0,1)$ there exists $\tilde x \in S$ with $|x - \tilde x| < \varepsilon$. Almost surely, all the one-dimensional Brownian motions obtained by projecting $\{B(t) : t \ge 0\}$ onto the lines determined by the vectors in $S$ satisfy the statement. From this one can infer that the limsup under consideration is bounded from above by $1 + \varepsilon$.

Exercise 5.3. Let $T_a = \inf\{ t > 0 : B(t) = a \}$. The proof of the upper bound can be based on the fact that, for $A < 1$ and $q > 1$,
$$\sum_{n=1}^\infty \mathbb P\big\{ \psi\big( T_1 - T_{1 - q^{-n}} \big) < \tfrac1A\, 2^{-n} \big\} < \infty.$$

Exercise 5.4. Define the stopping time $\tau_{-1} = \min\{ k : S_k = -1 \}$ and recall the definition of $p_n$ from (2.3). Then $p_n = \mathbb P\{ S_n \ge 0 \} - \mathbb P\{ S_n \ge 0,\ \tau_{-1} < n \}$. Let $\{ S^*_j : j \ge 0 \}$ denote the random walk reflected at time $\tau_{-1}$, that is, $S^*_j = S_j$ for $j \le \tau_{-1}$ and $S^*_j = (-1) - (S_j + 1)$ for $j > \tau_{-1}$. Note that if $\tau_{-1} < n$ then $S_n \ge 0$ if and only if $S^*_n \le -2$, so $p_n = \mathbb P\{ S_n \ge 0 \} - \mathbb P\{ S^*_n \le -2 \}$. Using symmetry and the reflection principle, we have
$$p_n = \mathbb P\{ S_n \ge 0 \} - \mathbb P\{ S_n \ge 2 \} = \mathbb P\big\{ S_n \in \{0, 1\} \big\},$$
which means that
$$p_n = \mathbb P\{ S_n = 0 \} = \binom{n}{n/2} 2^{-n} \text{ for } n \text{ even}, \qquad p_n = \mathbb P\{ S_n = 1 \} = \binom{n}{(n-1)/2} 2^{-n} \text{ for } n \text{ odd}.$$
Recall that Stirling's formula gives $m! \sim \sqrt{2\pi}\, m^{m + 1/2} e^{-m}$, where the symbol $\sim$ means that the ratio of the two sides approaches one as $m \to \infty$. One can deduce from Stirling's formula that $p_n \sim \sqrt{2/(\pi n)}$, which proves the result.
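The asymptotics in Exercise 5.4 can be confirmed with exact binomial coefficients (a minimal sketch; the values of $n$ are arbitrary choices):

```python
import numpy as np
from math import comb

# Exact p_n = P{S_n in {0, 1}} versus the Stirling asymptotics sqrt(2/(pi n)).
for n in (10, 100, 1000, 10000):
    k = n // 2 if n % 2 == 0 else (n - 1) // 2
    p_n = comb(n, k) / 2**n
    print(f"n = {n:6d}   p_n = {p_n:.6f}   sqrt(2/(pi n)) = "
          f"{np.sqrt(2 / (np.pi * n)):.6f}")
```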
Exercise 5.5. Denote by $I_n(k)$ the event that $k$ is a point of increase for $S_0, S_1, \dots, S_n$ and by $F_n(k) = I_n(k) \setminus \bigcup_{i=0}^{k-1} I_n(i)$ the event that $k$ is the first such point. The events $\{ S_k$ is largest among $S_0, S_1, \dots, S_k \}$ and $\{ S_k$ is smallest among $S_k, S_{k+1}, \dots, S_n \}$ are independent, and therefore $\mathbb P(I_n(k)) = p_k\, p_{n-k}$. Observe that if $S_j$ is minimal among $S_j, \dots, S_n$, then any point of increase for $S_0, \dots, S_j$ is automatically a point of increase for $S_0, \dots, S_n$. Therefore, for $j \le k$ we can write
$$F_n(j) \cap I_n(k) = F_j(j) \cap \{ S_j \le S_i \le S_k \text{ for all } i \in [j, k] \} \cap \{ S_k \text{ is minimal among } S_k, \dots, S_n \}.$$
The three events on the right-hand side are independent, as they involve disjoint sets of summands; the second of these events is of the type considered in Lemma 5.9. Thus
$$\mathbb P(F_n(j) \cap I_n(k)) \ge \mathbb P(F_j(j))\, p^2_{k-j}\, p_{n-k} \ge p^2_{k-j}\, \mathbb P(F_j(j))\, \mathbb P\{ S_j \text{ is minimal among } S_j, \dots, S_n \},$$
since $p_{n-k} \ge p_{n-j}$. Here the two events on the right are independent, and their intersection is precisely $F_n(j)$. Consequently,
$$\mathbb P(F_n(j) \cap I_n(k)) \ge p^2_{k-j}\, \mathbb P(F_n(j)).$$
Decomposing the event $I_n(k)$ according to the first point of increase gives
$$\text{(0.1)} \qquad \sum_{k=0}^n p_k p_{n-k} = \sum_{k=0}^n \mathbb P(I_n(k)) = \sum_{k=0}^n \sum_{j=0}^k \mathbb P(F_n(j) \cap I_n(k)) \ge \sum_{j=0}^{\lfloor n/2 \rfloor} \sum_{k=j}^{j + \lfloor n/2 \rfloor} p^2_{k-j}\, \mathbb P(F_n(j)) \ge \sum_{j=0}^{\lfloor n/2 \rfloor} \mathbb P(F_n(j)) \sum_{i=0}^{\lfloor n/2 \rfloor} p^2_i.$$
This yields an upper bound on the probability that $\{ S_j : j = 0, \dots, n \}$ has a point of increase by time $n/2$; but this random walk has a point of increase at time $k$ if and only if the reversed walk $\{ S_n - S_{n-i} : i = 0, \dots, n \}$ has a point of increase at time $n - k$. Thus, doubling the upper bound given by (0.1) proves the statement.

Exercise 5.7. In the proof of Exercise 5.5 we have seen that
$$\sum_{k=0}^n p_k p_{n-k} = \sum_{k=0}^n \mathbb P(I_n(k)) = \sum_{k=0}^n \sum_{j=0}^k \mathbb P(F_n(j) \cap I_n(k)).$$
By Lemma 5.9 we have, for $j \le k \le n$,
$$\mathbb P(F_n(j) \cap I_n(k)) \le \mathbb P\big( F_n(j) \cap \{ S_j \le S_i \le S_k \text{ for } j \le i \le k \} \big) \le \mathbb P(F_n(j))\, p^2_{\lfloor (k-j)/2 \rfloor}.$$
Thus
$$\sum_{k=0}^n p_k p_{n-k} \le \sum_{k=0}^n \sum_{j=0}^k \mathbb P(F_n(j))\, p^2_{\lfloor (k-j)/2 \rfloor} \le \sum_{j=0}^n \mathbb P(F_n(j)) \sum_{i=0}^n p^2_{\lfloor i/2 \rfloor}.$$
This implies the statement.

Exercise 5.9. Suppose that $X$ is an arbitrary random variable with vanishing expectation and finite variance. For each $n \in \mathbb N$ divide the intersection of the support of $X$ with the interval $[-n, n]$ into finitely many intervals of mesh $< \frac1n$. If $x_1 < \dots < x_m$ are the partition points, construct the law of $X_n$ by placing, for any $j \in \{0, \dots, m\}$, an atom of size $\mathbb P\{ X \in [x_j, x_{j+1}) \}$ at the position $\mathbb E[X \mid x_j \le X < x_{j+1}]$, using the conventions $x_0 = -\infty$ and $x_{m+1} = \infty$. By construction, $X_n$ takes only finitely many values, $\mathbb E[X_n] = 0$, and $X_n$ converges to $X$ in distribution. Moreover, one can show that $\tau_n \to \tau$ almost surely. This implies that $B(\tau_n) \to B(\tau)$ almost surely, and therefore also in distribution, so that $X$ has the same law as $B(\tau)$. Fatou's lemma implies that
$$\mathbb E[\tau] \le \liminf_{n \uparrow \infty} \mathbb E[\tau_n] = \liminf_{n \uparrow \infty} \mathbb E[X_n^2] < \infty.$$
Hence, by Wald's second lemma, $\mathbb E[X^2] = \mathbb E[B(\tau)^2] = \mathbb E[\tau]$.

Exercise 6.4. From Exercise 2.15 we get, for any $x \in (0,1)$,
$$\mathbb E_x\big[ T_1 \mid T_1 < T_0 \big] = \frac{1 - x^2}{3}, \qquad \mathbb E_x\big[ T_0 \mid T_0 < T_1 \big] = \frac{2x - x^2}{3},$$
where $T_0, T_1$ are the first hitting times of the points $0$ and $1$, respectively. (A simulation sketch of the first identity follows this exercise block.) Define stopping times $\tau^{(x)}_0 = 0$ and, for $j \ge 1$,
$$\sigma^{(x)}_j = \inf\{ t > \tau^{(x)}_{j-1} : B(t) = x \}, \qquad \tau^{(x)}_j = \inf\{ t > \sigma^{(x)}_j : B(t) \in \{0, 1\} \}.$$
Let $N^{(x)} = \min\{ j \ge 1 : B(\tau^{(x)}_j) = 1 \}$. Then $N^{(x)}$ is geometric with parameter $x$. We have
$$\int_0^{T_1} \mathbf 1\{ 0 \le B(s) \le 1 \}\, ds = \lim_{x \downarrow 0} \sum_{j=1}^{N^{(x)}} \big( \tau^{(x)}_j - \sigma^{(x)}_j \big),$$
and this limit is increasing. Hence
$$\mathbb E \int_0^{T_1} \mathbf 1\{ 0 \le B(s) \le 1 \}\, ds = \lim_{x \downarrow 0} \mathbb E\big[ N^{(x)} - 1 \big]\, \mathbb E\big[ \tau^{(x)}_1 - \sigma^{(x)}_1 \mid B(\tau^{(x)}_1) = 0 \big] + \lim_{x \downarrow 0} \mathbb E\big[ \tau^{(x)}_1 - \sigma^{(x)}_1 \mid B(\tau^{(x)}_1) = 1 \big] = \lim_{x \downarrow 0} \Big( \frac1x - 1 \Big) \frac{2x - x^2}{3} + \lim_{x \downarrow 0} \frac{1 - x^2}{3} = 1.$$

Exercise 6.5. Observe that $\mathbb E \exp\{ \lambda Z_j \} = e^\lambda / (2 - e^\lambda)$ for all $\lambda < \log 2$, and hence, for a suitable constant $C$ and all small $\lambda > 0$,
$$\mathbb E \exp\big\{ \lambda (Z_j - 2) \big\} \le \exp\big( \lambda^2 + C \lambda^3 \big),$$
by a Taylor expansion. Using this for $\lambda = \frac\varepsilon2$ we get from Chebyshev's inequality,
$$\mathbb P\Big\{ \sum_{j=1}^k (Z_j - 2) > m\varepsilon \Big\} \le \exp\big\{ -\tfrac{m \varepsilon^2}{2} \big\} \Big( \mathbb E \exp\big\{ \tfrac\varepsilon2 (Z_j - 2) \big\} \Big)^k \le \exp\big\{ -\tfrac{m \varepsilon^2}{2} \big\} \exp\big\{ m \big( \tfrac{\varepsilon^2}{4} + C \tfrac{\varepsilon^3}{8} \big) \big\},$$
which proves the more difficult half of the claim. The inequality for the lower tail is obvious.
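The first conditional expectation quoted in Exercise 6.4 can be checked by simulation (a rough sketch; the step size, starting point and sample count are arbitrary choices, and the discretisation biases the estimate slightly):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, x0 = 2e-4, 0.3
times_given_T1 = []
for _ in range(1500):
    x, t = x0, 0.0
    # Run until the path exits (0, 1).
    while 0.0 < x < 1.0:
        x += rng.normal(0.0, np.sqrt(dt))
        t += dt
    if x >= 1.0:                 # keep only paths with T_1 < T_0
        times_given_T1.append(t)

print("E_x[T_1 | T_1 < T_0] ~", np.mean(times_given_T1))
print("(1 - x^2)/3          =", (1 - x0**2) / 3)   # = 0.3033...
```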
Exercise 6.6. We have that
$$\mathbb P\Big\{ \frac{(X + \ell)^2}{2} \le t \Big\} = \mathbb P\big\{ -\sqrt{2t} - \ell \le X \le \sqrt{2t} - \ell \big\}.$$
So the density of the left-hand side is
$$\frac{1}{2\sqrt{\pi t}}\, e^{-(2t + \ell^2)/2} \Big[ e^{\ell \sqrt{2t}} + e^{-\ell \sqrt{2t}} \Big],$$
which by Taylor expansion equals
$$\frac{1}{\sqrt{\pi t}}\, e^{-(2t + \ell^2)/2} \sum_{k=0}^\infty \frac{\big( \ell \sqrt{2t} \big)^{2k}}{(2k)!}.$$
Recall that $X^2/2$ is Gamma($\frac12$)-distributed and that, given $N$, the sum $\sum_{i=1}^N Z_i$ is Gamma($N$)-distributed. By conditioning on $N$, we get that the density of the right-hand side is
$$\sum_{k=0}^\infty \frac{\ell^{2k}\, e^{-\ell^2/2}\, t^{k - 1/2}\, e^{-t}}{2^k\, k!\, \Gamma(k + \frac12)}.$$
Recall that $\Gamma(k + \frac12) = \frac{\sqrt\pi\, (2k)!}{2^{2k}\, k!}$, and so the densities of both sides are equal.

Exercise 7.1. Use that $\int_0^T H(s)\, dB(s) = \int_0^\infty H^T(s)\, dB(s)$.

Exercise 7.3. First establish a Taylor formula of the form
$$\Big| f(x,y) - f(x_0,y_0) - \nabla_y f(x_0,y_0) \cdot (y - y_0) - \nabla_x f(x_0,y_0) \cdot (x - x_0) - \tfrac12 (x - x_0)^{\mathsf T}\, \mathrm{Hes}_x f(x_0,y_0)\, (x - x_0) \Big| \le \omega_1(\delta, M)\, |y - y_0| + \omega_2(\delta, M)\, |x - x_0|^2,$$
where $\mathrm{Hes}_x f = (\partial_{ij} f)$ is the $d \times d$ Hessian matrix of second derivatives in the directions of $x$,
$$\omega_1(\delta, M) = \sup_{\substack{x_1, x_2 \in [-M,M]^d,\ y_1, y_2 \in [-M,M]^m \\ |x_1 - x_2| \wedge |y_1 - y_2| < \delta}} \big| \nabla_y f(x_1, y_1) - \nabla_y f(x_2, y_2) \big|,$$
and the modulus of continuity of $\mathrm{Hes}_x f$ is
$$\omega_2(\delta, M) = \sup_{\substack{x_1, x_2 \in [-M,M]^d,\ y_1, y_2 \in [-M,M]^m \\ |x_1 - x_2| \wedge |y_1 - y_2| < \delta}} \big\| \mathrm{Hes}_x f(x_1, y_1) - \mathrm{Hes}_x f(x_2, y_2) \big\|,$$
where $\|\cdot\|$ is the operator norm of a matrix. Then argue as in the proof of Theorem 7.13.

Exercise 7.4. First use Brownian scaling and the Markov property, as in the original proof of Theorem 2.33, to reduce the problem to showing that the distribution of $B(T(1))$ (in the notation of Theorem 2.33) is the Cauchy distribution. The map defined by $f(z) = \frac{z}{2 - z}$, for $z \in \mathbb C$, takes the half-plane $\{ (x,y) : x < 1 \}$ onto the unit disc and satisfies $f(0) = 0$. The image of the harmonic measure on $V(1)$ from $0$ is the harmonic measure on the unit circle from the origin, which is uniform. Hence the harmonic measure $\mu_{V(1)}(0, \cdot)$ is the image measure of the uniform distribution $\varpi$ on the unit circle under $f^{-1}$, which can be calculated using the derivative of $f$.

Exercise 7.5. Use that $\theta(t) = W_2(H(t))$ and $\lim_{t \uparrow \infty} H(t) = \infty$.

Exercise 7.7. Suppose $h$ is supported on $[0, b]$ and look at the partitions given by $t^{(n)}_k = b k 2^{-n}$, for $k = 0, \dots, 2^n$. By Theorem 7.32 and Theorem 6.18 we can choose a continuous modification of the process $\big\{ \int_0^t \mathrm{sign}(B(s) - a)\, dB(s) : a \in \mathbb R \big\}$. Hence the Lebesgue integral on the left-hand side is also a Riemann integral and can be approximated by the sum
$$\sum_{k=0}^{2^n - 1} b 2^{-n}\, h(t^{(n)}_k) \Big( \int_0^t \mathrm{sign}\big( B(s) - t^{(n)}_k \big)\, dB(s) \Big) = \int_0^t F_n(B(s))\, dB(s),$$
where
$$F_n(x) = \sum_{k=0}^{2^n - 1} b 2^{-n}\, h(t^{(n)}_k)\, \mathrm{sign}\big( x - t^{(n)}_k \big), \qquad n \in \mathbb N.$$
This is a uniformly bounded sequence, which converges uniformly to
$$F(x) = \int_{-\infty}^\infty h(a)\, \mathrm{sign}(x - a)\, da.$$
Therefore the sequence of stochastic integrals converges in $L^2$ to the stochastic integral $\int_0^t F(B(s))\, dB(s)$, which is the right-hand side of our formula.

Exercise 8.1. Suppose that $u$ is subharmonic and $\mathcal B(x,r) \subset U$. Let $\tau$ be the first exit time from $\mathcal B(x,r)$, which is a stopping time. As $\Delta u(z) \ge 0$ for all $z \in U$, we see from the multidimensional version of Itô's formula that
$$u(B(t \wedge \tau)) \le u(B(0)) + \sum_{i=1}^d \int_0^{t \wedge \tau} \frac{\partial u}{\partial x_i}(B(s))\, dB_i(s).$$
Note that $\partial u/\partial x_i$ is bounded on the closure of $\mathcal B(x,r)$, and thus everything is well-defined. We can now take expectations and use Exercise 7.1 to see that
$$\mathbb E_x\big[ u(B(t \wedge \tau)) \big] \le \mathbb E_x\big[ u(B(0)) \big] = u(x).$$
Now let $t \uparrow \infty$, so that the left-hand side converges to $\mathbb E_x[u(B(\tau))]$, and note that this gives the mean value property for spheres. The result follows by integrating over $r$.
Exercise 8.2. Let $u$ be a solution of the Poisson problem on $U$. Define open sets $U_n \uparrow U$ by
$$U_n = \big\{ x \in U : |x - y| > \tfrac1n \text{ for all } y \in \partial U \big\}.$$
Let $\tau_n$ be the first exit time of $U_n$, which is a stopping time. As $\frac12 \Delta u(x) = -g(x)$ for all $x \in U$, we see from the multidimensional version of Itô's formula that
$$u(B(t \wedge \tau_n)) = u(B(0)) + \sum_{i=1}^d \int_0^{t \wedge \tau_n} \frac{\partial u}{\partial x_i}(B(s))\, dB_i(s) - \int_0^{t \wedge \tau_n} g(B(s))\, ds.$$
Note that $\partial u/\partial x_i$ is bounded on the closure of $U_n$, and thus everything is well-defined. We can now take expectations and use Exercise 7.1 to see that
$$\mathbb E_x\big[ u(B(t \wedge \tau_n)) \big] = u(x) - \mathbb E_x \int_0^{t \wedge \tau_n} g(B(s))\, ds.$$
Note that both integrands are bounded. Hence, as $t \uparrow \infty$ and $n \to \infty$, bounded convergence yields
$$u(x) = \mathbb E_x \int_0^\tau g(B(s))\, ds,$$
where we have used the boundary condition to eliminate the left-hand side.

Exercise 9.2. Use decompositions as in the proof of Theorem 9.22 to transfer the results of Theorem 9.8 from intersections of independent Brownian motions to self-intersections of a single Brownian motion.

Exercise 9.4. A counterexample as required in part (d) can be constructed as follows. Let $A_1$ and $A_2$ be two disjoint closed sets on the line such that the Cartesian squares $A_i^2$ have Hausdorff dimension less than $1/2$, yet the Cartesian product $A_1 \times A_2$ has dimension strictly greater than $1/2$. Let $A$ be the union of $A_1$ and $A_2$. Then Brownian motion $\{B(t) : t \ge 0\}$ restricted to $A$ is one-to-one with positive probability (if $B(A_1)$ is disjoint from $B(A_2)$), yet with positive probability $B(A_1)$ intersects $B(A_2)$. For instance, let $A_1$ consist of the points in $[0,1]$ whose $n$th binary digit vanishes whenever $(2k)! \le n < (2k+1)!$ for some $k$, and let $A_2$ consist of the points in $[2,3]$ whose $n$th binary digit vanishes whenever $(2k-1)! \le n < (2k)!$ for some $k$. Then $\dim(A_i^2) = 0$ for $i = 1, 2$, yet $\dim(A_1 \times A_2) \ge \dim(A_1 + A_2) = 1$; in fact $\dim(A_1 \times A_2) = 1$.

Exercise 9.5. Let $\{B_1(t) : 0 \le t \le 1\}$ be the first coordinate of the planar motion. By Kaufman's theorem, almost surely, $\dim S(a) = 2 \dim\{ t \in [0,1] : B_1(t) = a \}$ and, as in Corollary 9.30, the dimension on the right equals $1/2$ for every $a \in \big( \min\{ x : (x,y) \in B[0,t] \},\ \max\{ x : (x,y) \in B[0,t] \} \big)$.

Exercise 10.2. For every decomposition $E = \bigcup_{i=1}^\infty E_i$ of $E$ into bounded sets we have, using countable stability of the Hausdorff dimension,
$$\sup_{i \ge 1} \dim_M E_i \ge \sup_{i \ge 1} \dim E_i = \dim \bigcup_{i=1}^\infty E_i = \dim E,$$
and passing to the infimum yields the statement.

Exercise 10.7. The argument is sketched in [La99].

Exercise 10.9. For (a) note that Theorem 10.28 can be read as a criterion to determine the packing dimension of a set $E$ by hitting it with a limsup random fractal. Hence $\dim_P(A \cap E)$ can be found by evaluating $\mathbb P\{ A \cap A' \cap E = \emptyset \}$ for $A'$ an independent copy of $A$. Now use that $A \cap A'$ is also a discrete limsup fractal.

Exercise 10.10. To apply Theorem 7.24 in the proof of Lemma 10.40(a) we shift the cone, defining a new tip $\tilde z$ as follows:
• if $\alpha < \pi$, as the intersection of the line through $x$ parallel to the central axis of the cone with the boundary of the dual cone;
• if $\alpha > \pi$, as the intersection of the line through $x$ parallel to the central axis of the cone with the boundary of the cone.
Note that $z + W[\alpha, \xi] \subset \tilde z + W[\alpha, \xi]$ and there exists a constant $C > 1$, depending only on $\alpha$, such that $|z - \tilde z| < C\delta$. There is nothing to show if $C\delta > \varepsilon/2$, and otherwise
$$\mathbb P_x\big\{ B(0, T_\varepsilon(z)) \subset z + W[\alpha, \xi] \big\} \le \mathbb P_x\big\{ B(0, T_{\varepsilon/2}(\tilde z)) \subset \tilde z + W[\alpha, \xi] \big\}.$$
By shifting, rotating and scaling the Brownian motion, and by Theorem 7.24, we obtain for the right-hand side the upper bound
$$\mathbb P_1\Big\{ B\big( 0, T_{\frac{\varepsilon}{\delta}(C + \frac12)^{-1}}(0) \big) \subset W[\alpha, 0] \Big\} = \frac2\pi \arctan\Big( C_0 \big( \tfrac\delta\varepsilon \big)^{\pi/\alpha} \Big) \le C_1 \big( \tfrac\delta\varepsilon \big)^{\pi/\alpha},$$
where $C_0, C_1 > 0$ are suitable constants.
Appendix II: Background and prerequisites

1. Convergence of distributions on metric spaces

In this section we collect the basic facts about convergence in distribution. While this is a familiar concept for real-valued random variables, for example in the central limit theorem, we need a more abstract viewpoint which allows us to study convergence in distribution for random variables with values in metric spaces, such as function spaces.

If random variables $\{X_n : n \ge 0\}$ converge in distribution, strictly speaking it is their distributions and not the random variables themselves which converge. This distinguishes convergence in distribution from the other types of convergence for random variables, namely
• almost sure convergence,
• convergence in probability,
• $L^1$-convergence (and $L^p$-convergence).
These types of convergence refer to a sequence of random variables $\{X_n : n \ge 0\}$ converging to a random variable $X$ on the same probability space, and the values of the approximating sequence lead to conclusions about the values of the limit random variable. This is entirely different for convergence in distribution, which we now study. Intuitively, if $\{X_n : n \ge 0\}$ converges in distribution to $X$, this just means that for large $n$ the shape of the distribution of $X_n$ is like the shape of the distribution of $X$. Sample values of $X_n$ allow no inference about sample values of $X$ and, indeed, there is no need to define $X_n$ and $X$ on the same probability space. In fact, convergence in distribution is related only to the convergence of the distributions of the random variables and not to the random variables themselves.

We start by giving a definition of convergence in distribution for random variables in metric spaces, explore some of its properties, and then show that the concept of convergence in distribution for real-valued random variables is consistent with our definition.

Definition 1.1. Suppose $(E, \rho)$ is a metric space and $\mathcal A$ the Borel $\sigma$-algebra on $E$. Suppose that $X_n$ and $X$ are $E$-valued random variables. Then we say that $X_n$ converges in distribution to $X$ if, for every bounded continuous $g \colon E \to \mathbb R$,
$$\lim_{n \to \infty} \mathbb E[g(X_n)] = \mathbb E[g(X)].$$
We write $X_n \Rightarrow X$ for convergence in distribution. ⋄

Remark 1.2. If $X_n \Rightarrow X$ and $g \colon E \to \mathbb R$ is continuous, then $g(X_n) \Rightarrow g(X)$. But note that, if $E = \mathbb R$ and $X_n \Rightarrow X$, this does not imply that $\mathbb E[X_n]$ converges to $\mathbb E[X]$, as $g(x) = x$ is not a bounded function on $\mathbb R$. ⋄

Here is an alternative approach, which shows that convergence in distribution is in fact a convergence of the distributions. The last statement of the following proposition is immediate from the definitions.

Proposition 1.3. Let $\mathrm{Prob}(E)$ be the set of probability measures on $(E, \mathcal A)$. A sequence $\{P_n : n \ge 0\} \subset \mathrm{Prob}(E)$ converges weakly to a limit $P \in \mathrm{Prob}(E)$ if, for every continuous, bounded function $g \colon E \to \mathbb R$,
$$\lim_{n \to \infty} \int g\, dP_n = \int g\, dP.$$
The limit of a weakly convergent sequence is uniquely determined. Suppose that $X_n$ and $X$ are $E$-valued random variables; then $X_n$ converges in distribution to $X$ if and only if the distributions of $X_n$ converge weakly to the distribution of $X$.

Proof. Only the uniqueness of the limit needs proof. If $P$ and $Q$ are two limits of the same sequence, then $\int f\, dP = \int f\, dQ$ for all bounded continuous $f \colon E \to \mathbb R$. For every open set $G \subset E$ we may choose an increasing sequence $f_n(x) = n\rho(x, G^c) \wedge 1$ of continuous functions converging to $\mathbf 1_G$, and infer from monotone convergence that $P(G) = Q(G)$. Now $P = Q$ follows from the uniqueness theorem for probability measures.

Example 1.4.
• Suppose $E = \{1, \dots, m\}$ is finite and $\rho(x,y) = 1 - \mathbf 1\{x = y\}$. Then $X_n \Rightarrow X$ if and only if $\lim_{n\to\infty} \mathbb P\{X_n = k\} = \mathbb P\{X = k\}$ for all $k \in E$.
• Let $E = [0,1]$ and $X_n = 1/n$ almost surely. Then $X_n \Rightarrow X$, where $X = 0$ almost surely.
However, note that $\lim_{n\to\infty} \mathbb P\{X_n = 0\} = 0 \ne \mathbb P\{X = 0\} = 1$. ⋄

Theorem 1.5. Suppose a sequence $\{X_n : n \ge 0\}$ of random variables converges almost surely to a random variable $X$ (all, of course, on the same probability space). Then $X_n$ converges in distribution to $X$.

Proof. Suppose $g$ is bounded and continuous. Then $g(X_n)$ converges almost surely to $g(X)$. As the sequence is bounded it is also uniformly integrable; hence the convergence holds also in the $L^1$-sense, and this implies convergence of the expectations, i.e. $\mathbb E[g(X_n)] \to \mathbb E[g(X)]$.

Theorem 1.6 (Portmanteau theorem). The following statements are equivalent:
(i) $X_n \Rightarrow X$;
(ii) for all closed sets $K \subset E$, $\limsup_{n\to\infty} \mathbb P\{X_n \in K\} \le \mathbb P\{X \in K\}$;
(iii) for all open sets $G \subset E$, $\liminf_{n\to\infty} \mathbb P\{X_n \in G\} \ge \mathbb P\{X \in G\}$;
(iv) for all Borel sets $A \subset E$ with $\mathbb P\{X \in \partial A\} = 0$, $\lim_{n\to\infty} \mathbb P\{X_n \in A\} = \mathbb P\{X \in A\}$;
(v) for all bounded measurable functions $g \colon E \to \mathbb R$ with $\mathbb P\{ g \text{ is discontinuous at } X \} = 0$, we have $\mathbb E[g(X_n)] \to \mathbb E[g(X)]$.

Proof. (i) ⇒ (ii): Let $g_n(x) = 1 - (n\rho(x, K) \wedge 1)$, which is continuous and bounded, equals $1$ on $K$, and converges pointwise to $\mathbf 1_K$. Then, for every $n$,
$$\limsup_{k\to\infty} \mathbb P\{X_k \in K\} \le \limsup_{k\to\infty} \mathbb E[g_n(X_k)] = \mathbb E[g_n(X)].$$
Let $n \to \infty$: the integrand on the right-hand side is bounded by $1$ and converges pointwise, and hence in the $L^1$-sense, to $\mathbf 1_K(X)$.

(ii) ⇒ (iii): Follows from $\mathbf 1_G = 1 - \mathbf 1_K$ for the closed set $K = G^c$.

(iii) ⇒ (iv): Let $G$ be the interior and $K$ the closure of $A$. Then, by assumption, $\mathbb P\{X \in G\} = \mathbb P\{X \in K\} = \mathbb P\{X \in A\}$, and we may use (iii) and (ii) (which follows immediately from (iii)) to get
$$\limsup_{n\to\infty} \mathbb P\{X_n \in A\} \le \limsup_{n\to\infty} \mathbb P\{X_n \in K\} \le \mathbb P\{X \in K\} = \mathbb P\{X \in A\},$$
$$\liminf_{n\to\infty} \mathbb P\{X_n \in A\} \ge \liminf_{n\to\infty} \mathbb P\{X_n \in G\} \ge \mathbb P\{X \in G\} = \mathbb P\{X \in A\}.$$

(iv) ⇒ (v): From (iv) we infer that the convergence holds for $g$ of the form $g(x) = \sum_{n=1}^N a_n \mathbf 1_{A_n}$, where the $A_n$ satisfy $\mathbb P\{X \in \partial A_n\} = 0$. Let us call such functions elementary. Given $g$ as in (v), we observe that for every $a < b$, with possibly a countable set of exceptions,
$$\mathbb P\big\{ X \in \partial\{ x : g(x) \in (a, b] \} \big\} = 0.$$
Indeed, if $X \in \partial\{ x : g(x) \in (a, b] \}$ then either $g$ is discontinuous at $X$, or $g(X) = a$, or $g(X) = b$. The first event has probability zero, and so have the last two except possibly for a countable set of values of $a, b$. By decomposing the real axis into suitable small intervals we thus obtain an increasing sequence $g_n$ and a decreasing sequence $h_n$ of elementary functions, both converging pointwise to $g$. Now, for all $k$,
$$\limsup_{n\to\infty} \mathbb E[g(X_n)] \le \limsup_{n\to\infty} \mathbb E[h_k(X_n)] = \mathbb E[h_k(X)] \quad \text{and} \quad \liminf_{n\to\infty} \mathbb E[g(X_n)] \ge \liminf_{n\to\infty} \mathbb E[g_k(X_n)] = \mathbb E[g_k(X)],$$
and the right-hand sides converge, as $k \to \infty$, by bounded convergence to $\mathbb E[g(X)]$.

(v) ⇒ (i): This is trivial.

To remember the directions of the inequalities in the Portmanteau theorem it is useful to recall the last example, $X_n = 1/n \to 0$, and choose $G = (0,1)$ and $K = \{0\}$ to obtain cases where the opposite inequalities fail.

We now show that convergence in distribution as defined here agrees with the familiar concept in the case of real random variables.

Theorem 1.7 (Helly-Bray theorem). Let $X_n$ and $X$ be real-valued random variables and define the associated distribution functions $F_n(x) = \mathbb P\{X_n \le x\}$ and $F(x) = \mathbb P\{X \le x\}$. Then the following assertions are equivalent:
(a) $X_n$ converges in distribution to $X$;
(b) $\lim_{n\to\infty} F_n(x) = F(x)$ for all $x$ such that $F$ is continuous at $x$.

Proof. (a) ⇒ (b): Use property (iv) for the set $A = (-\infty, x]$.

(b) ⇒ (a): We choose a dense sequence $\{x_n\}$ with $\mathbb P\{X = x_n\} = 0$ and note that every open set $G \subset \mathbb R$ can be written as a countable union of disjoint intervals $I_k = (a_k, b_k]$ with $a_k, b_k$ chosen from the sequence. We have
$$\lim_{n\to\infty} \mathbb P\{X_n \in I_k\} = \lim_{n\to\infty} F_n(b_k) - F_n(a_k) = F(b_k) - F(a_k) = \mathbb P\{X \in I_k\}.$$
Hence, for all $N$,
$$\liminf_{n\to\infty} \mathbb P\{X_n \in G\} \ge \sum_{k=1}^N \liminf_{n\to\infty} \mathbb P\{X_n \in I_k\} = \sum_{k=1}^N \mathbb P\{X \in I_k\},$$
and as $N \to \infty$ the last term converges to $\mathbb P\{X \in G\}$.
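A minimal numerical illustration of the Helly-Bray theorem (the sample sizes and evaluation points are arbitrary choices): for standardised coin-flip sums, the empirical distribution functions converge to the standard normal distribution function, in line with the central limit theorem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# X_n = standardised sum of n coin flips; by the CLT, X_n => N(0, 1),
# so F_n(x) -> Phi(x) at every x (Phi is continuous everywhere).
for n in (10, 100, 1000):
    flips = rng.choice([-1, 1], size=(5000, n))
    X_n = flips.sum(axis=1) / np.sqrt(n)
    for x in (-1.0, 0.5):
        F_n = np.mean(X_n <= x)
        print(f"n = {n:5d}  x = {x:+.1f}   F_n(x) = {F_n:.4f}   "
              f"Phi(x) = {stats.norm.cdf(x):.4f}")
```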
Finally, we note the useful fact that for nonnegative random variables, rather than testing convergence of $\mathbb E[f(X_n)]$ for all continuous bounded functions $f$, it suffices to consider functions of a rather simple form.

Proposition 1.8. Suppose $(X^{(n)}_1, \dots, X^{(n)}_m)$ are random vectors with nonnegative entries. Then
$$(X^{(n)}_1, \dots, X^{(n)}_m) \Longrightarrow (X_1, \dots, X_m)$$
if and only if, for any $\lambda_1, \dots, \lambda_m \ge 0$,
$$\lim_{n \uparrow \infty} \mathbb E\Big[ \exp\Big\{ -\sum_{j=1}^m \lambda_j X^{(n)}_j \Big\} \Big] = \mathbb E\Big[ \exp\Big\{ -\sum_{j=1}^m \lambda_j X_j \Big\} \Big].$$

The function $\varphi(\lambda_1, \dots, \lambda_m) = \mathbb E[\exp\{ -\sum_{j=1}^m \lambda_j X_j \}]$ is called the Laplace transform of $(X_1, \dots, X_m)$, and thus the proposition states, in other words, that convergence in distribution of nonnegative random vectors is equivalent to convergence of their Laplace transforms. The proof, usually done by approximation, can be found in...

2. Gaussian random variables

In this section we collect the facts about Gaussian random vectors which are used in this book. We start with a useful estimate for standard normal random variables, which is quite precise for large $x$.

Lemma 3.1. Suppose $X$ is standard normally distributed. Then, for all $x > 0$,
$$\frac{x}{x^2 + 1} \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2} \le \mathbb P\{X > x\} \le \frac1x \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$

Proof. The right inequality is obtained from the estimate
$$\mathbb P\{X > x\} \le \frac{1}{\sqrt{2\pi}} \int_x^\infty \frac ux\, e^{-u^2/2}\, du = \frac1x \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2}.$$
For the left inequality we define
$$f(x) = x e^{-x^2/2} - (x^2 + 1) \int_x^\infty e^{-u^2/2}\, du.$$
Observe that $f(0) < 0$ and $\lim_{x\to\infty} f(x) = 0$. Moreover,
$$f'(x) = (1 - x^2 + x^2 + 1)\, e^{-x^2/2} - 2x \int_x^\infty e^{-u^2/2}\, du = -2x \Big( \int_x^\infty e^{-u^2/2}\, du - \frac{e^{-x^2/2}}{x} \Big),$$
which is positive for $x > 0$ by the first part. Hence $f(x) \le 0$, proving the lemma.

We now look more closely at random vectors with normally distributed components. Our motivation is that they arise, for example, as vectors of increments of a Brownian motion. Let us clarify some terminology.

Definition 3.2. A random variable $X = (X_1, \dots, X_d)^{\mathsf T}$ with values in $\mathbb R^d$ has the $d$-dimensional standard Gaussian distribution if its $d$ coordinates are standard normally distributed and independent. ⋄

More general Gaussian distributions can be derived as linear images of standard Gaussians. Recall, e.g. from Definition 1.2 in Chapter 1, that a random variable $Y$ with values in $\mathbb R^d$ is called Gaussian if there exist an $m$-dimensional standard Gaussian $X$, a $d \times m$ matrix $A$, and a $d$-dimensional vector $b$ such that $Y = AX + b$. The covariance matrix of the (column) vector $Y$ is then given by
$$\mathrm{Cov}(Y) = \mathbb E\big[ (Y - \mathbb E Y)(Y - \mathbb E Y)^{\mathsf T} \big] = A A^{\mathsf T},$$
where the expectations are defined componentwise. Our next lemma shows that applying an orthogonal $d \times d$ matrix does not change the distribution of a standard Gaussian random vector, and in particular that the standard Gaussian distribution is rotationally invariant. We write $I_d$ for the $d \times d$ identity matrix.

Lemma 3.3. If $A$ is an orthogonal $d \times d$ matrix, i.e. $A A^{\mathsf T} = I_d$, and $X$ is a $d$-dimensional standard Gaussian vector, then $AX$ is also a $d$-dimensional standard Gaussian vector.

Proof. As the coordinates of $X$ are independent and standard normally distributed, $X$ has density
$$f(x_1, \dots, x_d) = \prod_{i=1}^d \frac{1}{\sqrt{2\pi}}\, e^{-x_i^2/2} = \frac{1}{(2\pi)^{d/2}}\, e^{-|x|^2/2},$$
where $|\cdot|$ is the Euclidean norm. By the transformation rule, the density of $AX$ is $f(A^{-1}x)\, |\det(A^{-1})|$. The determinant is $1$ and, since orthogonal matrices preserve the Euclidean norm, the density of $X$ is invariant under $A$.
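The sharpness of Lemma 3.1 for large $x$ is easy to see numerically (a minimal sketch; the evaluation points are arbitrary choices):

```python
import numpy as np
from scipy import stats

# Check the two-sided bound of Lemma 3.1 against the exact normal tail.
for x in (0.5, 1.0, 2.0, 4.0):
    phi = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
    lower, upper = x / (x**2 + 1) * phi, phi / x
    print(f"x = {x:.1f}   {lower:.3e} <= {stats.norm.sf(x):.3e} <= {upper:.3e}")
```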
Corollary 3.4. Let $X_1$ and $X_2$ be independent and normally distributed with expectation $0$ and variance $\sigma^2 > 0$. Then $X_1 + X_2$ and $X_1 - X_2$ are independent and normally distributed with expectation $0$ and variance $2\sigma^2$.

Proof. The vector $(X_1/\sigma, X_2/\sigma)^{\mathsf T}$ is standard Gaussian by assumption. Look at
$$A = \begin{pmatrix} \frac{1}{\sqrt2} & \frac{1}{\sqrt2} \\ \frac{1}{\sqrt2} & -\frac{1}{\sqrt2} \end{pmatrix}.$$
This is an orthogonal matrix, and applying it to our vector yields $\big( (X_1 + X_2)/(\sqrt2\sigma),\ (X_1 - X_2)/(\sqrt2\sigma) \big)^{\mathsf T}$, which thus must have independent standard normal coordinates.

The next proposition shows that the distribution of a Gaussian random vector is determined by its expectation and covariance matrix.

Proposition 3.5. If $X$ and $Y$ are $d$-dimensional Gaussian vectors with $\mathbb E X = \mathbb E Y$ and $\mathrm{Cov}(X) = \mathrm{Cov}(Y)$, then $X$ and $Y$ have the same distribution.

Proof. It is sufficient to consider the case $\mathbb E X = \mathbb E Y = 0$. By definition, there are standard Gaussian random vectors $X_1$ and $X_2$ and matrices $A$ and $B$ with $X = A X_1$ and $Y = B X_2$. By adding columns of zeros to $A$ or $B$ if necessary, we can assume that $X_1$ and $X_2$ are both $k$-vectors, for some $k$, and that $A, B$ are both $d \times k$ matrices. Let $\mathcal A$ and $\mathcal B$ be the vector subspaces of $\mathbb R^k$ generated by the row vectors of $A$ and $B$, respectively. To simplify notation, assume that the first $l \le d$ row vectors of $A$ form a basis of $\mathcal A$. Define the linear map $L \colon \mathcal A \to \mathcal B$ by $L(A_i) = B_i$ for $i = 1, \dots, l$, where $A_i$ is the $i$th row vector of $A$ and $B_i$ is the $i$th row vector of $B$. Our aim is to show that $L$ is an orthogonal isomorphism and then use Lemma 3.3.

Let us first show that $L$ is an isomorphism. Our covariance assumption gives $A A^{\mathsf T} = B B^{\mathsf T}$. Assume there is a vector $v_1 A_1 + \dots + v_l A_l$ whose image is $0$. Then the $k$-dimensional row vector $v = (v_1, \dots, v_l, 0, \dots, 0)$, padded with zeros, satisfies $vB = 0$. Hence
$$\| vA \|^2 = v A A^{\mathsf T} v^{\mathsf T} = v B B^{\mathsf T} v^{\mathsf T} = \| vB \|^2 = 0.$$
We conclude that $vA = 0$, hence $L$ is injective and $\dim \mathcal A \le \dim \mathcal B$. Interchanging the roles of $\mathcal A$ and $\mathcal B$ gives that $L$ is an isomorphism. As the entry $(i,j)$ of $A A^{\mathsf T} = B B^{\mathsf T}$ is the scalar product of $A_i$ and $A_j$ as well as of $B_i$ and $B_j$, the mapping $L$ is orthogonal. We can extend it on the orthocomplement of $\mathcal A$ to an orthogonal map $L \colon \mathbb R^k \to \mathbb R^k$ (or an orthogonal $k \times k$ matrix). Then $X = A X_1$ and $Y = B X_2 = A L^{\mathsf T} X_2$. As $L^{\mathsf T} X_2$ is standard Gaussian by Lemma 3.3, $X$ and $Y$ have the same distribution.

In particular, comparing a $d$-dimensional Gaussian vector with $\mathrm{Cov}(X) = I_d$ with a Gaussian vector with $d$ independent entries and the same expectation, we obtain the following fact.

Corollary 3.6. A Gaussian random vector $X$ has independent entries if and only if its covariance matrix is diagonal. In other words, the entries of a Gaussian vector are uncorrelated if and only if they are independent.

We now show that the Gaussian nature of a random vector is preserved under taking limits.

Proposition 3.7. Suppose $\{X_n : n \in \mathbb N\}$ is a sequence of Gaussian random vectors and $\lim_n X_n = X$ almost surely. If $b := \lim_{n\to\infty} \mathbb E X_n$ and $C := \lim_{n\to\infty} \mathrm{Cov}\, X_n$ exist, then $X$ is Gaussian with mean $b$ and covariance matrix $C$.

Proof. A variant of the argument in Proposition 3.5 shows that $X_n$ converges in law to a Gaussian random vector with mean $b$ and covariance matrix $C$. As almost sure convergence implies convergence of the associated distributions, this must be the law of $X$.
Lemma 3.8. Suppose $X, Y$ are independent and normally distributed with mean zero and variance $\sigma^2$. Then $X^2 + Y^2$ is exponentially distributed with mean $2\sigma^2$.

Proof. For any bounded measurable $f \colon \mathbb R \to \mathbb R$ we have, using polar coordinates,
$$\mathbb E f(X^2 + Y^2) = \frac{1}{2\pi\sigma^2} \int f(x^2 + y^2) \exp\Big\{ -\frac{x^2 + y^2}{2\sigma^2} \Big\}\, dx\, dy = \frac{1}{\sigma^2} \int_0^\infty f(r^2) \exp\Big\{ -\frac{r^2}{2\sigma^2} \Big\}\, r\, dr = \frac{1}{2\sigma^2} \int_0^\infty f(a) \exp\Big\{ -\frac{a}{2\sigma^2} \Big\}\, da = \mathbb E f(Z),$$
where $Z$ is exponentially distributed with mean $2\sigma^2$.
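Lemma 3.8 is easy to confirm by simulation (a minimal sketch; the sample size, seed and value of $\sigma$ are arbitrary choices): the sum of squares of two independent centred normals passes a Kolmogorov-Smirnov test against the exponential distribution with mean $2\sigma^2$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
sigma = 1.5
Z = rng.normal(0, sigma, 200000)**2 + rng.normal(0, sigma, 200000)**2
# Compare with the exponential law of mean 2 sigma^2 via a KS test.
print("sample mean:", Z.mean(), "  target:", 2 * sigma**2)
print(stats.kstest(Z, stats.expon(scale=2 * sigma**2).cdf))
```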
3. Martingales in discrete time

In this section we recall the essentials from the theory of martingales in discrete time. A more thorough introduction to this delightful subject is Williams [Wi91].

Definition 4.1. A filtration $(\mathcal F_n : n \ge 0)$ is an increasing sequence $\mathcal F_0 \subset \mathcal F_1 \subset \dots \subset \mathcal F_n \subset \dots$ of $\sigma$-algebras. Let $\{X_n : n \ge 0\}$ be a stochastic process in discrete time and $(\mathcal F_n : n \ge 0)$ a filtration. The process is a martingale relative to the filtration if, for all $n \ge 0$,
• $X_n$ is measurable with respect to $\mathcal F_n$,
• $\mathbb E|X_n| < \infty$, and
• $\mathbb E[X_{n+1} \mid \mathcal F_n] = X_n$ almost surely.
If '$\ge$' holds in the last condition, then $\{X_n : n \ge 0\}$ is called a submartingale; if '$\le$' holds, it is called a supermartingale. ⋄

Remark 4.2. Note that for a submartingale $\mathbb E[X_{n+1}] \ge \mathbb E[X_n]$, for a supermartingale $\mathbb E[X_{n+1}] \le \mathbb E[X_n]$, and hence for a martingale we have $\mathbb E[X_{n+1}] = \mathbb E[X_n]$. ⋄

Loosely speaking, a stopping time is a random time such that the knowledge of the process up to time $n$ suffices to determine whether the stopping time has happened by time $n$ or not. Here is a formal definition.

Definition 4.3. A random variable $T$ with values in $\{0, 1, 2, \dots\} \cup \{\infty\}$ is called a stopping time if
$$\{T \le n\} = \{ \omega : T(\omega) \le n \} \in \mathcal F_n \quad \text{for all } n \ge 0. \quad ⋄$$

If $\{X_n : n \ge 0\}$ is a supermartingale and $T$ a stopping time, then it is easy to check that the stopped process $\{X^T_n : n \ge 0\}$ defined by $X^T_n = X_{T \wedge n}$ is a supermartingale. If $\{X_n : n \ge 0\}$ is a martingale, then both $\{X_n : n \ge 0\}$ and $\{-X_n : n \ge 0\}$ are supermartingales and, hence, we have
$$\mathbb E[X_{T \wedge n}] = \mathbb E[X_0] \quad \text{for all } n \ge 0.$$
Doob's optional stopping theorem gives criteria under which, letting $n \uparrow \infty$, we obtain $\mathbb E[X_T] = \mathbb E[X_0]$.

Theorem 4.4 (Doob's optional stopping theorem). Let $T$ be a stopping time and $X$ a martingale. Then $X_T$ is integrable and
$$\mathbb E[X_T] = \mathbb E[X_0],$$
if one of the following conditions holds:
(1) $T$ is bounded, i.e. there is $N$ such that $T < N$ almost surely;
(2) $\{X^T_n : n \ge 0\}$ is dominated by an integrable random variable $Z$, i.e. $|X_{n \wedge T}| \le Z$ for all $n \ge 0$, almost surely;
(3) $\mathbb E[T] < \infty$ and there is $K > 0$ such that $\sup_n |X_n - X_{n-1}| \le K$.

Proof. Recall that $\mathbb E[X_{T \wedge n} - X_0] = 0$. The result follows in case (1) by choosing $n = N$. In case (2) let $n \to \infty$ and use dominated convergence. In case (3) observe that $|X_{T \wedge n} - X_0| = |\sum_{k=1}^{T \wedge n} (X_k - X_{k-1})| \le KT$. By assumption $KT$ is an integrable random variable and dominated convergence can be used again.

Doob's famous forward convergence theorem gives a sufficient condition for the almost sure convergence of supermartingales to a limiting random variable. See [Wi91, 11.5] for the proof.

Theorem 4.5 (Doob's supermartingale convergence theorem). Let $\{X_n : n \ge 0\}$ be a supermartingale which is bounded in $L^1$, i.e. there is $K > 0$ such that $\mathbb E|X_n| \le K$ for all $n$. Then there exists an integrable random variable $X$ on the same probability space such that $\lim_{n\to\infty} X_n = X$ almost surely.

Remark 4.6. Note that if $\{X_n : n \ge 0\}$ is nonnegative, we have $\mathbb E[|X_n|] = \mathbb E[X_n] \le \mathbb E[X_0] =: K$, and thus $X_n$ is automatically bounded in $L^1$ and $\lim_{n\to\infty} X_n = X$ exists. ⋄

A key question is when the almost sure convergence in the supermartingale convergence theorem can be replaced by $L^1$-convergence (which, in contrast to almost sure convergence, implies convergence of expectations). A necessary and sufficient criterion for this is uniform integrability. A stochastic process $\{X_n : n \ge 0\}$ is called uniformly integrable if, for every $\varepsilon > 0$, there exists $K > 0$ such that
$$\mathbb E\big[ |X_n| \mathbf 1\{ |X_n| \ge K \} \big] < \varepsilon \quad \text{for all } n \ge 0.$$
Sufficient criteria for uniform integrability are:
• $\{X_n : n \ge 0\}$ is dominated by an integrable random variable,
• $\{X_n : n \ge 0\}$ is $L^p$-bounded for some $p > 1$,
• $\{X_n : n \ge 0\}$ is $L^1$-convergent.

The following lemma is proved in [Wi91, 13.7].

Lemma 4.7. Any stochastic process $\{X_n : n \ge 0\}$ which is uniformly integrable and almost surely convergent converges also in the $L^1$-sense.

The next result is one of the highlights of martingale theory.

Theorem 4.8 (Martingale closure theorem). Suppose that the martingale $\{X_n : n \ge 0\}$ is uniformly integrable. Then there is an integrable random variable $X$ such that $\lim_{n\to\infty} X_n = X$ almost surely and in $L^1$. Moreover, $X_n = \mathbb E[X \mid \mathcal F_n]$ for every $n \ge 0$.

Proof. Uniform integrability implies that $\{X_n : n \ge 0\}$ is $L^1$-bounded and thus, by the martingale convergence theorem, almost surely convergent to an integrable random variable $X$. Convergence in the $L^1$-sense follows from Lemma II.4.7. To check the last assertion, we note that $X_n$ is $\mathcal F_n$-measurable and let $F \in \mathcal F_n$. For all $m \ge n$ we have, by the martingale property, $\int_F X_m\, d\mathbb P = \int_F X_n\, d\mathbb P$. We let $m \to \infty$; then $|\int_F X_m\, d\mathbb P - \int_F X\, d\mathbb P| \le \int |X_m - X|\, d\mathbb P \to 0$, hence we obtain $\int_F X\, d\mathbb P = \int_F X_n\, d\mathbb P$, as required.

There is a natural converse to the martingale closure theorem; see [Wi91, 14.2] for the proof.

Theorem 4.9 (Lévy's upward theorem). Suppose that $X$ is an integrable random variable and $X_n = \mathbb E[X \mid \mathcal F_n]$. Then $\{X_n : n \ge 0\}$ is a uniformly integrable martingale and
$$\lim_{n\to\infty} X_n = \mathbb E\big[ X \mid \mathcal F_\infty \big] \quad \text{almost surely and in } L^1,$$
where $\mathcal F_\infty = \sigma\big( \bigcup_{n=1}^\infty \mathcal F_n \big)$ is the smallest $\sigma$-algebra containing the entire filtration.

There is also a convergence theorem for 'reverse' martingales, called Lévy's downward theorem, which is a natural partner to the upward theorem; see [Wi91, 14.4] for the proof.

Theorem 4.10 (Lévy's downward theorem). Suppose that $(\mathcal G_n : n \in \mathbb N)$ is a collection of $\sigma$-algebras such that
$$\mathcal G_\infty := \bigcap_{k=1}^\infty \mathcal G_k \subset \dots \subset \mathcal G_{n+1} \subset \mathcal G_n \subset \dots \subset \mathcal G_1.$$
An integrable process $\{X_n : n \in \mathbb N\}$ is a reverse martingale if, almost surely, $X_n = \mathbb E[X_{n-1} \mid \mathcal G_n]$ for all $n \ge 2$. Then $\lim_{n \uparrow \infty} X_n = \mathbb E[X_1 \mid \mathcal G_\infty]$ almost surely.

An important consequence of Theorems II.4.4 and II.4.8 is that the martingale property holds for well-behaved stopping times. For a stopping time $T$, define $\mathcal F_T$ to be the $\sigma$-algebra of events $A$ with $A \cap \{T \le n\} \in \mathcal F_n$ for all $n$; observe that $X_T$ is $\mathcal F_T$-measurable.

Theorem 4.11 (Optional sampling theorem). If the martingale $\{X_n : n \ge 0\}$ is uniformly integrable, then for all stopping times $0 \le S \le T$ we have $\mathbb E[X_T \mid \mathcal F_S] = X_S$ almost surely.

Proof. By the martingale closure theorem, $X^T_n$ converges to $X_T$ in $L^1$ and $\mathbb E[X_T \mid \mathcal F_n] = X_{T \wedge n} = X^T_n$. Splitting $X_T$ into its positive and nonpositive parts if necessary, we may assume that $X_T \ge 0$ and therefore $X^T_n \ge 0$ almost surely. Taking conditional expectations with respect to $\mathcal F_{S \wedge n}$ gives $\mathbb E[X_T \mid \mathcal F_{S \wedge n}] = X_{S \wedge n}$. Now let $A \in \mathcal F_S$. We have to show that $\mathbb E[X_T \mathbf 1_A] = \mathbb E[X_S \mathbf 1_A]$. Note first that $A \cap \{S \le n\} \in \mathcal F_{S \wedge n}$. Hence we get
$$\mathbb E\big[ X_T \mathbf 1\{ A \cap \{S \le n\} \} \big] = \mathbb E\big[ X_{S \wedge n} \mathbf 1\{ A \cap \{S \le n\} \} \big] = \mathbb E\big[ X_S \mathbf 1\{ A \cap \{S \le n\} \} \big].$$
Letting $n \uparrow \infty$ and using monotone convergence gives the required result.

Of considerable practical importance are martingales $\{X_n : n \ge 0\}$ which are square integrable. Note that in this case we can calculate, for $m \ge n$,
$$\text{(3.1)} \qquad \mathbb E\big[ X_m^2 \mid \mathcal F_n \big] = \mathbb E\big[ (X_m - X_n)^2 \mid \mathcal F_n \big] + 2\, \mathbb E[X_m \mid \mathcal F_n]\, X_n - X_n^2 = \mathbb E\big[ (X_m - X_n)^2 \mid \mathcal F_n \big] + X_n^2 \ge X_n^2,$$
so that $\{X_n^2 : n \ge 0\}$ is a submartingale.
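Condition (3) of Theorem 4.4 applies, for instance, to simple random walk stopped on exiting a bounded interval. A minimal sketch of the resulting gambler's-ruin computation (the boundary values and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)
a, b = 3, 5          # stop when the walk hits -a or b
hits_b = 0
n_paths = 20000
for _ in range(n_paths):
    s = 0
    while -a < s < b:
        s += rng.choice((-1, 1))
    hits_b += (s == b)

# Optional stopping for the martingale S_n gives E[S_T] = 0, hence
# P{hit b} * b - P{hit -a} * a = 0, i.e. P{hit b} = a / (a + b).
print("empirical:", hits_b / n_paths, "  theory:", a / (a + b))
```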
Theorem 4.12 (Convergence theorem for $L^2$-bounded martingales). Suppose that the martingale $\{X_n : n \ge 0\}$ is $L^2$-bounded. Then there is a random variable $X$ such that $\lim_{n\to\infty} X_n = X$ almost surely and in $L^2$.

Proof. From (3.1) and $L^2$-boundedness of $\{X_n : n \ge 0\}$ it is easy to see that, for $m \ge n$,
$$\mathbb E\big[ (X_m - X_n)^2 \big] = \sum_{k=n+1}^m \mathbb E\big[ (X_k - X_{k-1})^2 \big] \le \sum_{k=1}^\infty \mathbb E\big[ (X_k - X_{k-1})^2 \big] < \infty.$$
Recall that $L^2$-boundedness implies $L^1$-boundedness and hence, by the martingale convergence theorem, $X_n$ converges almost surely to an integrable random variable $X$. Letting $m \uparrow \infty$ and using Fatou's lemma in the last display gives $L^2$-convergence.

We now discuss two martingale inequalities that have important counterparts in the continuous setting. The first one is Doob's weak maximal inequality.

Theorem 4.13 (Doob's weak maximal inequality). Let $\{X_j : j \ge 0\}$ be a submartingale and denote $M_n := \max_{1 \le j \le n} X_j$. Then, for all $\lambda > 0$,
$$\lambda\, \mathbb P\{ M_n \ge \lambda \} \le \mathbb E\big[ X_n \mathbf 1\{ M_n \ge \lambda \} \big].$$

Proof. Define the stopping time
$$\tau := \begin{cases} \min\{ k : X_k \ge \lambda \} & \text{if } M_n \ge \lambda, \\ n & \text{if } M_n < \lambda. \end{cases}$$
Note that $\{ M_n \ge \lambda \} = \{ X_\tau \ge \lambda \}$. This implies
$$\lambda\, \mathbb P\{ M_n \ge \lambda \} = \lambda\, \mathbb P\{ X_\tau \ge \lambda \} = \mathbb E\big[ \lambda \mathbf 1\{ X_\tau \ge \lambda \} \big] \le \mathbb E\big[ X_\tau \mathbf 1\{ X_\tau \ge \lambda \} \big] = \mathbb E\big[ X_\tau \mathbf 1\{ M_n \ge \lambda \} \big],$$
and the result follows once we show $\mathbb E[X_\tau \mathbf 1\{M_n \ge \lambda\}] \le \mathbb E[X_n \mathbf 1\{M_n \ge \lambda\}]$. But, as $\tau$ is bounded by $n$ and $X$ is a submartingale, we have $\mathbb E[X_\tau] \le \mathbb E[X_n]$, which implies
$$\mathbb E\big[ X_\tau \mathbf 1\{ M_n < \lambda \} \big] + \mathbb E\big[ X_\tau \mathbf 1\{ M_n \ge \lambda \} \big] \le \mathbb E\big[ X_n \mathbf 1\{ M_n < \lambda \} \big] + \mathbb E\big[ X_n \mathbf 1\{ M_n \ge \lambda \} \big].$$
Because, by the definition of $\tau$, we have $X_\tau \mathbf 1\{ M_n < \lambda \} = X_n \mathbf 1\{ M_n < \lambda \}$, this reduces to
$$\mathbb E\big[ X_\tau \mathbf 1\{ M_n \ge \lambda \} \big] \le \mathbb E\big[ X_n \mathbf 1\{ M_n \ge \lambda \} \big],$$
which concludes the proof.

The most useful martingale inequality for us is Doob's $L^p$-maximal inequality.

Theorem 4.14 (Doob's $L^p$ maximal inequality). Suppose $\{X_n : n \ge 0\}$ is a submartingale. Let $M_n = \max_{1 \le k \le n} X_k$ and $p > 1$. Then
$$\mathbb E\big[ M_n^p \big] \le \Big( \frac{p}{p-1} \Big)^p\, \mathbb E\big[ |X_n|^p \big].$$

We make use of the following lemma, which allows us to compare the $L^p$-norms of two nonnegative random variables.

Lemma 4.15. Suppose nonnegative random variables $X$ and $Y$ satisfy, for all $\lambda > 0$,
$$\lambda\, \mathbb P\{ Y \ge \lambda \} \le \mathbb E\big[ X \mathbf 1\{ Y \ge \lambda \} \big].$$
Then, for all $p > 1$,
$$\mathbb E[Y^p] \le \Big( \frac{p}{p-1} \Big)^p\, \mathbb E[X^p].$$

Proof. Using the fact that $X \ge 0$ and $x^p = \int_0^x p\lambda^{p-1}\, d\lambda$, we can express $\mathbb E[X^p]$ as a double integral and apply Fubini's theorem:
$$\mathbb E[X^p] = \mathbb E \int_0^\infty \mathbf 1\{ X \ge \lambda \}\, p\lambda^{p-1}\, d\lambda = \int_0^\infty p\lambda^{p-1}\, \mathbb P\{ X \ge \lambda \}\, d\lambda.$$
Similarly, using the hypothesis,
$$\mathbb E[Y^p] = \int_0^\infty p\lambda^{p-1}\, \mathbb P\{ Y \ge \lambda \}\, d\lambda \le \int_0^\infty p\lambda^{p-2}\, \mathbb E\big[ X \mathbf 1\{ Y \ge \lambda \} \big]\, d\lambda.$$
We can rewrite the right-hand side, using Fubini's theorem again, then integrating $p\lambda^{p-2}$ and applying Hölder's inequality with $q = p/(p-1)$:
$$\int_0^\infty p\lambda^{p-2}\, \mathbb E\big[ X \mathbf 1\{ Y \ge \lambda \} \big]\, d\lambda = \mathbb E\Big[ X \int_0^Y p\lambda^{p-2}\, d\lambda \Big] = q\, \mathbb E\big[ X Y^{p-1} \big] \le q\, \|X\|_p\, \|Y^{p-1}\|_q.$$
Altogether, this gives $\mathbb E[Y^p] \le q\, (\mathbb E[X^p])^{1/p}\, (\mathbb E[Y^p])^{1/q}$. So, assuming $\mathbb E[Y^p] < \infty$, the above inequality gives
$$\big( \mathbb E[Y^p] \big)^{1/p} \le q\, \big( \mathbb E[X^p] \big)^{1/p},$$
from which the result follows by raising both sides to the $p$th power. In general, if $\mathbb E[Y^p] = \infty$, then for any $n \in \mathbb N$ the random variable $Y_n = Y \wedge n$ satisfies the hypothesis of the lemma, and the result follows by letting $n \uparrow \infty$ and applying the monotone convergence theorem.
4. The max-flow min-cut theorem

Here we give a short proof of a famous result of graph theory, the max-flow min-cut theorem of Ford and Fulkerson [FF56], in the special case of infinite trees.

Theorem 5.1 (Max-flow min-cut theorem).
\[
\max\Big\{\, \text{strength}(\theta) : \theta \text{ a flow with capacities } C \,\Big\} = \inf\Big\{\, \sum_{e \in \Pi} C(e) : \Pi \text{ a cutset} \,\Big\}.
\]

Proof. The proof is a festival of compactness arguments. First observe that on the left-hand side the supremum is indeed attained as a maximum, because if $\{\theta_n\}$ is a sequence of flows with capacities $C$, then at every edge we have a bounded sequence $\{\theta_n(e)\}$, and by the diagonal argument we may pass to a subsequence such that $\lim \theta_n(e)$ exists simultaneously for all $e \in E$. This limit is obviously again a flow with capacities $C$.

Secondly, observe that every cutset $\Pi$ contains a finite subset $\Pi' \subset \Pi$ which is still a cutset. Indeed, if this were not the case, then for every positive integer $j$ there would be a ray $e^j_1, e^j_2, e^j_3, \ldots$ with $e^j_i \notin \Pi$ for all $i \le j$. By the diagonal argument we find a sequence $j_k$ and edges $e_l$ of order $l$ such that $e^{j_k}_l = e_l$ for all $k \ge l$. Then $e_1, e_2, \ldots$ is a ray with $e_l \notin \Pi$ for all $l$, which is a contradiction.

Now let $\theta$ be a flow with capacities $C$ and $\Pi$ an arbitrary cutset. We let $A$ be the set of vertices $v$ such that there is a sequence of edges $e_1, \ldots, e_n \notin \Pi$ with $e_1 = (\rho, v_1)$, $e_j = (v_{j-1}, v_j)$, and $e_n = (v_{n-1}, v)$. By our previous observation this set is finite. Let
\[
\varphi(v, e) :=
\begin{cases}
1 & \text{if } e = (v, w) \text{ for some } w \in V, \\
-1 & \text{if } e = (w, v) \text{ for some } w \in V, \\
0 & \text{otherwise.}
\end{cases}
\]
Then, using the definition of a flow and the finiteness of all sums,
\[
\text{strength}(\theta) = \sum_{e \in E} \varphi(\rho, e)\theta(e) = \sum_{v \in A} \sum_{e \in E} \varphi(v, e)\theta(e) = \sum_{e \in E} \theta(e) \sum_{v \in A} \varphi(v, e) \le \sum_{e \in \Pi} \theta(e) \le \sum_{e \in \Pi} C(e).
\]
This proves the first inequality.

For the reverse inequality we first restrict attention to finite trees. Let $T_n$ be the tree consisting of all vertices $V_n$ and edges $E_n$ of order $\le n$, and look at cutsets $\Pi$ consisting of edges in $E_n$. Writing $\overleftarrow{v}$ for the parent of a vertex $v$, a flow of strength $c > 0$ through the tree $T_n$ with capacities $C$ is a mapping $\theta : E_n \to [0, c]$ such that
• for the root we have $\sum_{w : \overleftarrow{w} = \rho} \theta(\rho, w) = c$,
• for every vertex $v \ne \rho$ with $|v| < n$ we have $\theta(\overleftarrow{v}, v) = \sum_{w : \overleftarrow{w} = v} \theta(v, w)$,
• $\theta(e) \le C(e)$ for every edge $e \in E_n$.
We shall show that
\[
\max\big\{\, \text{strength}(\theta) : \theta \text{ a flow in } T_n \text{ with capacities } C \,\big\} \ge \min\Big\{\, \sum_{e \in \Pi} C(e) : \Pi \text{ a cutset in } T_n \,\Big\}. \tag{4.1}
\]
Once we have this, we get a sequence $(\theta_n)$ of flows in $T_n$ with capacities $C$ and strength at least $c = \min\{\sum_{e \in \Pi} C(e) : \Pi \text{ a cutset in } T\}$. By using the diagonal argument once more we can pass to a subsequence such that the limits of $\theta_n(e)$ exist for every edge, and the result is a flow $\theta$ with capacities $C$ and strength at least $c$, as required.

To prove (4.1), let $\theta$ be a flow of maximal strength $c$ with capacities $C$ in $T_n$, and call a sequence $\rho = v_0, v_1, \ldots, v_n$ with $(v_i, v_{i+1}) \in E_n$ an augmenting sequence if $\theta(v_i, v_{i+1}) < C(v_i, v_{i+1})$ for all $i$. If there is an augmenting sequence, we can construct a flow $\tilde{\theta}$ of strength $> c$ by just increasing the flow through every edge of the augmenting sequence by a sufficiently small $\varepsilon > 0$. As $\theta$ was maximal, this is a contradiction. Hence there is a minimal cutset $\Pi$ consisting entirely of edges in $E_n$ with $\theta(e) \ge C(e)$. Let $A$, as above, be the collection of all vertices which are connected to the root by edges not in $\Pi$. As before, we have
\[
\text{strength}(\theta) = \sum_{e \in E} \theta(e) \sum_{v \in A} \varphi(v, e) = \sum_{e \in \Pi} \theta(e) \ge \sum_{e \in \Pi} C(e),
\]
where in the penultimate step we use minimality. This proves (4.1) and finishes the proof.
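A small worked example of Theorem 5.1 (my construction, using the conventions above, not from the source): on the infinite binary tree, give every edge $e$ of order $k$ the capacity $C(e) = 2^{-k}$. Then
\[
\theta(e) := 2^{-|e|}
\]
defines a flow of strength $2 \cdot 2^{-1} = 1$: every vertex of order $k$ receives $2^{-k}$ through the edge from its parent and passes $2^{-(k+1)}$ to each of its two children. The cutset $\Pi_k$ consisting of all $2^k$ edges of order $k$ has $\sum_{e \in \Pi_k} C(e) = 2^k \cdot 2^{-k} = 1$, and by the first inequality in the proof no cutset can have a smaller capacity sum than the strength of $\theta$; hence both sides of Theorem 5.1 equal $1$.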
Index

adapted, 44
almost surely, 22
arcsine law: for the last sign-change, 139; for the last zero, 138; for the position of the maximum, 138; for the time spent above zero, 140, 207
Azéma–Yor embedding theorem, 132
Bachelier, Louis, 41
ballot problem, 68
Bessel process, 164
Blumenthal's 0-1 law, 45
bounded variation, 35
Brown, Robert, 41
Brownian bridge, 38, 65
Brownian motion: d-dimensional, 43; fractional, 41; monotonicity, 31; standard, 14; area, 51; conformal invariance, 195; definition, 21; graph, 103; image, 103; Lévy's construction, 22; linear, 21; path, 103; planar, 43; range, 103; scaling invariance, 25; skew-product representation, 199; standard, 21, 43; time inversion, 26; transient, 82, 220; with drift, 37; zero set, 52
Brownian scaling, 26
Cameron–Martin space, 213
Cantor set: Hausdorff dimension, 117; Minkowski dimension, 117
capacity: α-capacity, 114, 226; general definition, 226; of an edge, 112; Riesz, 114
Cauchy process, 56
Ciesielski–Taylor identity, 212
compactness argument, 342
conditional expectation, 53
conformal invariance, 18, 195
continuity: Lévy's modulus, 29; local Hölder, 31
convergence in distribution, 331
convolution, 50
cooling, 206
countable stability property, 99
countable subadditivity, 101
covering, 97
cut line, 144
cutset, 112
derivative: upper and lower, 32
differentiability: in a point, 33; nowhere, 34
diffusion matrix, 14
dimension, 97
Dirichlet problem: existence, 74; harmonic measure, 88; representation of solution, 15; solution, 73; uniqueness, 218
dissipation rate, 206
domain, 69
Donsker's invariance principle, 17, 134
Doob's Lp maximal inequality, 341
Doob's weak maximal inequality, 340
Doob, Joe, 68
downcrossing, 147
drift, 37; vector, 14
Dubins' embedding theorem, 130
Dvoretzky–Erdős test, 77
Dynkin, Eugene, 68
Einstein, Albert, 41
embedding: Azéma–Yor, 132; Dubins, 130; Skorokhod, 130
energy: α-energy, 109, 226
energy method: terminology, 109
envelope: lower, 77
equilibrium measure, 221
Erdős–Kac theorem, 214
exceptional set of times, 33
exchangeable, 32
exit from an interval, 210
exit time, 61, 66
fast time, 276
Feynman–Kac formula, 207, 209
filtered probability space, 44
filtration, 44
first arcsine law, 138, 139
first moment method, 243
FKG inequality, 145
flow, 112
Ford and Fulkerson, 112
fractional Brownian motion, 41
Frostman's lemma, 111, 119
Frostman, Otto, 119
functional central limit theorem, 134
gauge function, 120, 171
Gaussian: random vector, 23; standard, 335; process, 41
germ σ-algebra, 45
Green kernel, 83
Green's function: definition, 83; properties, 85
Hölder continuous, 101
harmonic function: definition, 69; discrete, 15; history, 238; maximum principle, 71; mean value property, 69; removable, 238
harmonic measure: conformal invariance, 197; definition, 87; from infinity, 90; on the sphere, 88; support, 90
Harnack principle, 88
Harris inequality, 126
Hausdorff content, 99
Hausdorff dimension, 16: and measure, 101; countable stability, 117; definition, 99, 100; of graph, 110; of range, 110; of zero set, 108
Hausdorff measure: definition, 100; generalised, 171
Hausdorff, Felix, 119
Hawkes' theorem, 243
heat equation, 206
heating, 206
Helly–Bray theorem, 333
Hewitt–Savage 0-1 law, 32
hitting probability, 76, 228
increasing event, 143
independent, 43
independent increments, 13, 21
initial distribution, 14
integral test, 77
intersection equivalence: definition, 247
intersections: dimension, 245; existence, 239, 241, 245; nonexistence, 239, 241
intrinsic, 97
inversion at the circle, 89
irregular point, 217
Itô's formula: multi-dimensional, 192; nonsmooth case, 202; one-dimensional, 189; with additional dependence, 190
Kakutani's theorem, 226
Kakutani, S., 96
Kaufman's theorem, 116, 264
kernel, 224
Kochen–Stone lemma, 78, 94
Kolmogorov's 0-1 law, 45
Lévy's downward theorem, 339
Lévy's theorem: for local time, 204; for the maximum process, 54
Lévy, Paul, 41
Laplace's method, 200
law of maximum, 50
law of large numbers, 27
law of the iterated logarithm: failure, 275; for Brownian motion, 121; for general random walk, 130; for simple random walk, 124; Hartman and Wintner, 130, 144; Khinchine, 121, 124, 144; Kolmogorov, 144
Lebesgue's thorn, 235
limsup fractal, 278
Liouville's theorem, 74
local maximum: dense, 46; strict, 46
local time: definition, 202; Lévy's theorem, 204
local time at zero, 147
lower envelope, 77
Lyons' theorem, 250
macroscopic, 13
Makarov's theorem, 238, 310
Mandelbrot's conjecture, 238
marginal distributions, 22
Markov: strong property, 48
Markov process, 53
Markov property, 43
Markov transition kernel, 53
Markov, Andrej, 68
Martin kernel, 227
martingale, 58: binary splitting, 130; discrete, 337; reverse, 339; uniformly integrable, 338
martingale inequality, 340
mass distribution, 106
mass distribution principle, 106
max-flow min-cut theorem, 112, 342
McKean's theorem, 103, 115
microscopic, 13
Minkowski dimension: lower, 97; upper, 98
modification, 187
modulus normal distribution, 54
modulus of continuity, 27, 29, 38
monotonicity, 101: in an interval, 31
multiple points: existence, 256; nonexistence, 256
neighbourhood recurrent, 76
nonpolar set, 88
nowhere differentiable, 34
optional stopping theorem, 58
Orey and Taylor's theorem, 276
Ornstein–Uhlenbeck diffusion, 27, 65
outer boundary, 90
outer measure, 101
packing, 283
packing number, 283
Paley, Wiener, Zygmund's theorem, 34
Paley–Zygmund inequality, 77, 94
percolation, 250
percolation limit set: definition, 242; with generation-dependent probabilities, 248
perfect set, 52
permutation: finite, 32
Poincaré cone condition, 217
point of increase: global, 127; local, 126
point recurrent, 76
Poisson kernel, 88
Poisson problem: definition, 219; uniqueness, 236
Poisson's formula, 88
polar: for Brownian motion, 88; for percolation, 243; points, 52
Portmanteau theorem, 332
potential: α-potential, 109; electrostatic, 87; gravitational, 109; Newtonian, 87
potential kernel, 83, 226
product formula, 286
progressively measurable, 183
quadratic variation, 36, 188
radial potential, 226
random closed set, 247
random field, 159
random walk, 13
Ray's theorem, 170
record time, 106
recurrent, 76: neighbourhood, 76; point, 76
reflection principle, 49, 74, 209
regular point, 217
regularisation, 283
removable, 238
resolvent operator, 211
reverse Hölder, 103, 119
reverse martingale, 339
Riemann mapping theorem, 195
right-continuity, 47
sample path properties, 21
scaling invariance, 17
second arcsine law, 140, 207
second moment method, 244
Skorokhod embedding theorem, 130
smallest upper envelope, 121
spectrum, 276
Spitzer's law, 201
stable subordinator, 56
stationary increments, 13
stochastic integral: as martingale, 187; construction, 184; continuous, 187; Fubini's theorem, 214; up to T, 187
stochastic integration, 18, 183
stochastic process, 21
stopping time: definition, 46; properties, 64; strict, 46
Strassen's law, 144
subharmonic function, 69
submartingale, 58
tail σ-algebra, 45
tail event, 45
Tanaka's formula, 202
Taylor, S. James, 119
transient, 76
trees: terminology, 112
Tricot, Claude, 309
typical times, 33
universal object, 13
universality, 13
upcrossing excursion, 260
upper envelope, 121
value of a covering, 99
value of a packing, 283
visible part of a set, 308
Wald's lemma, 59: first, 59; second, 60
Weierstrass function, 119
Wiener's theorem, 22
winding number, 201
zero set: Hausdorff dimension, 108; Hausdorff measure, 104; no isolated points, 52; uncountable, 53
zero-one law: Galton–Watson, 270; of Kolmogorov, 45; of Blumenthal, 45; of Hewitt–Savage, 32

Bibliography

[dA83] A. de Acosta. A new proof of the Hartman–Wintner law of the iterated logarithm. Ann. Probab. 11, 270–276 (1983).
[Ad85] O. Adelman. Brownian motion never increases: a new proof of a theorem of Dvoretzky, Erdős and Kakutani. Israel J. of Math. 50, 189–192 (1985).
[ABP98] O. Adelman, K. Burdzy and R. Pemantle. Sets avoided by Brownian motion. Ann. Probab. 26, 429–464 (1998).
[AD85] O. Adelman and A. Dvoretzky. Plane Brownian motion has strictly n-multiple points. Israel J. Math. 52, 361–364 (1985).
[Ad90] R. Adler. An introduction to continuity, extrema and related topics for general Gaussian processes. Institute Math. Statistics, Hayward CA (1990).
[An87] D. André. Solution directe du problème résolu par M. Bertrand. C. R. Acad. Sci. Paris 105, 436–437 (1887).
[AN04] K. B. Athreya and P. E. Ney. Branching processes. Dover Publications, Mineola NY (2004).
[AY79] J. Azéma and M. Yor. Une solution simple au problème de Skorokhod. In: Seminar on probability XIII, 90–115, Springer Lecture Notes in Mathematics 721 (1979).
[Ba00] L. Bachelier. Théorie de la spéculation. Ann. Sci. École Norm. Sup. 17, 21–86 (1900).
[Ba01] L. Bachelier. Théorie mathématique du jeu. Ann. Sci. École Norm. Sup. 18, 143–210 (1901).
[BP84] M. T. Barlow and E. Perkins. Levels at which every Brownian excursion is exceptional. Lecture Notes in Math. 1059, Springer, Berlin–New York, 1–28 (1984).
[Ba95] R. Bass. Probabilistic Techniques in Analysis. Springer-Verlag, New York (1995).
[BB97] R. Bass and K. Burdzy. Cutting Brownian paths. Memoir Amer. Math. Soc. 137 (1997).
[Ba62] G. Baxter. Combinatorial methods in fluctuation theory. Z. Wahrsch. verw. Gebiete 1, 263–270 (1962).
[BPP95] I. Benjamini, R. Pemantle and Y. Peres. Martin capacity for Markov chains. Ann. Prob. 23, 1332–1346 (1995).
[BP94] I. Benjamini and Y. Peres. Tree-indexed random walks on groups and first passage percolation. Probab. Th. Rel. Fields 98, 91–112 (1994).
[Be83] S. M. Berman. Nonincrease almost everywhere of certain measurable functions with applications to stochastic processes. Proc. Amer. Math. Soc. 88, 141–144 (1983).
[Be91] J. Bertoin. Increase of a Lévy process with no positive jumps. Stochastics 37, 247–251 (1991).
[Bi67] P. Bickel. Some contributions to the theory of order statistics. Proc. Fifth Berkeley Sympos. Math. Statist. and Probability, Vol I: Statistics, 575–591 (1967).
[Bi68] P. Billingsley. Convergence of Probability Measures. Wiley, New York (1968).
[Bi82] P. Billingsley. Van der Waerden's continuous nowhere differentiable function. Amer. Math. Monthly 89, 691 (1982).
[Bi86] N. H. Bingham. Variants on the law of the iterated logarithm. Bull. London Math. Soc. 18, 433–467 (1986).
[BP96] C. J. Bishop and Y. Peres. Packing dimension and Cartesian products. Trans. Amer. Math. Soc. 348, 4433–4445 (1996).
[BJPP97] C. J. Bishop, P. W. Jones, R. Pemantle and Y. Peres. The outer boundary of Brownian motion has dimension greater than one. J. Funct. Analysis 143, 309–336 (1997).
[Bl57] R. M. Blumenthal. An extended Markov property. Trans. Amer. Math. Soc. 82, 52–72 (1957).
[BG68] R. M. Blumenthal and R. Getoor. Markov processes and potential theory. Academic Press (1968).
[Bo89] A. N. Borodin. Brownian local time. Russian Math. Surveys 44, 1–51 (1989).
[BS02] A. N. Borodin and P. Salminen. Handbook of Brownian motion. Facts and formulae. 2nd Edition. Birkhäuser Verlag, Basel (2002).
[Br76] G. R. Brosamler. A probabilistic version of the Neumann problem. Math. Scand. 38, 137–147 (1976).
[Br28] R. Brown. A brief description of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Ann. Phys. 14, 294–313 (1828).
[Bu89] K. Burdzy. Cut points on Brownian paths. Ann. Probab. 17, 1012–1036 (1989).
[Bu90] K. Burdzy. On nonincrease of Brownian motion. Ann. Probab. 18, 978–980 (1990).
[Bu95] K. Burdzy. Labyrinth dimension of Brownian trace. In: Probability and Mathematical Statistics, Volume dedicated to the memory of Jerzy Neyman, 15, 165–193 (1995).
[BL90] K. Burdzy and G. F. Lawler. Non-intersection exponents for Brownian paths. Part II: Estimates and applications to a random fractal. Ann. Probab. 18, 981–1009 (1990).
[BSM89] K. Burdzy and J. San Martín. Curvature of the convex hull of planar Brownian motion near its minimum point. Stoch. Proc. Appl. 33, 89–103 (1989).
[Ca14] C. Carathéodory. Über das lineare Maß von Punktmengen, eine Verallgemeinerung des Längenbegriffs. Nachrichten Ges. Wiss. Göttingen, 406–426 (1914).
[Ca67] L. Carleson. Selected Problems on Exceptional Sets. Van Nostrand, Princeton, NJ (1967).
[Ch72] J. P. R. Christensen. On sets of Haar measure zero in abelian Polish groups. Israel J. Math. 13, 255–260 (1972).
[Ch73] K. L. Chung. Probabilistic approach in potential theory to the equilibrium problem. Ann. Inst. Fourier, Grenoble 23,3, 313–322 (1973).
[Ch82] K. L. Chung. Lectures from Markov processes to Brownian motion. Springer Verlag (1982).
[Ch02] K. L. Chung. Green, Brown and Probability & Brownian motion on the line. World Scientific (2002).
[CFL28] R. Courant, K. Friedrichs and H. Lewy. Über die partiellen Differentialgleichungen der mathematischen Physik. Math. Annalen 100, 32–74 (1928).
[CHM89] M. Cranston, P. Hsu and P. March. Smoothness of the convex hull of planar Brownian motion. Ann. Probab. 17, 144–150 (1989).
[CR81] M. Csörgő and P. Révész. Strong Approximations in Probability and Statistics. Academic Press, New York (1981).
[CT62] Z. Ciesielski and S. J. Taylor. First passage times and sojourn times for Brownian motion in space and the exact Hausdorff measure of the sample path. Trans. Amer. Math. Soc. 103 (1962).
[CW90] K. L. Chung and R. J. Williams. Introduction to stochastic integration. Second edition. Birkhäuser, Boston, MA (1990).
[Da65] K. E. Dambis. On the decomposition of continuous martingales. Theor. Prob. Appl. 10, 401–410 (1965).
[Da75] B. Davis. Picard's theorem and Brownian motion. Trans. Amer. Math. Soc. 213, 353–362 (1975).
[Da83] B. Davis. On Brownian slow points. Z. Wahrscheinlichkeitstheorie 64, 359–367 (1983).
[DE06] M. Davis and A. Etheridge. Louis Bachelier's theory of speculation: The origins of modern finance. Princeton University Press, Princeton, NJ (2006).
[DM98] P. Deheuvels and D. M. Mason. Random fractal functional laws of the iterated logarithm. Studia Sci. Math. Hungar. 34, 89–106 (1998).
[DM04] P. Del Moral. Feynman–Kac formulae. Genealogical and interacting particle systems with applications. Springer Verlag (2004).
[DPRZ00] A. Dembo, Y. Peres, J. Rosen and O. Zeitouni. Thick points for spatial Brownian motion: multifractal analysis of occupation measure. Ann. Probab. 28(1), 1–35 (2000).
[DPRZ01] A. Dembo, Y. Peres, J. Rosen and O. Zeitouni. Thin points for Brownian motion. Ann. Inst. H. Poincaré Probab. Statist. 36(6), 749–774 (2001).
[Do96] R. A. Doney. Increase of Lévy processes. Ann. Probab. 24, 961–970 (1996).
[Do51] M. D. Donsker. An invariance principle for certain probability limit theorems. Mem. Amer. Math. Soc. 6 (1951–52).
[Do49] J. L. Doob. Heuristic approach to the Kolmogorov–Smirnov theorems. Ann. Math. Stat. 20, 393–403 (1949).
[Do53] J. L. Doob. Stochastic Processes. Wiley (1953).
[Du68] L. E. Dubins. On a theorem of Skorokhod. Ann. Math. Statist. 39, 2094–2097 (1968).
[DS65] L. E. Dubins and G. Schwarz. On continuous martingales. Proc. Nat. Acad. Sci. USA 53, 913–916 (1965).
[Du02] R. M. Dudley. Real Analysis and Probability. Cambridge University Press (2002).
[DL02] T. Duquesne and J.-F. Le Gall. Random trees, Lévy processes and spatial branching processes. Astérisque 281, 1–147 (2002).
[Du95] R. Durrett. Probability: Theory and examples. Duxbury Press, Belmont (1995).
[Du96] R. Durrett. Stochastic calculus: a practical introduction. CRC Press (1996).
[Dv63] A. Dvoretzky. On the oscillation of the Brownian motion process. Israel Journal Math. 1, 212–214 (1963).
[DE51] A. Dvoretzky and P. Erdős. Some problems on random walk in space. In: Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, 1950, 353–367 (1951).
[DEK50] A. Dvoretzky, P. Erdős and S. Kakutani. Double points of paths of Brownian motion in n-space. Acta Sci. Math. Szeged 12, 75–81 (1950).
[DEK54] A. Dvoretzky, P. Erdős and S. Kakutani. Multiple points of paths of Brownian motion in the plane. Bull. Res. Council Israel 3, 364–371 (1954).
[DEK58] A. Dvoretzky, P. Erdős and S. Kakutani. Points of multiplicity c of planar Brownian motion. Bull. Res. Council Israel 7, 157–180 (1958).
[DEK61] A. Dvoretzky, P. Erdős and S. Kakutani. Nonincrease everywhere of the Brownian motion process. Proc. 4th Berkeley Symp. Math. Stat. Probab. 2, 103–116 (1961).
[DEKT57] A. Dvoretzky, P. Erdős, S. Kakutani and S. J. Taylor. Triple points of Brownian paths in 3-space. Proc. Cambridge Philos. Soc. 53, 856–862 (1957).
[Dy57] E. B. Dynkin. Inhomogeneous strong Markov processes. Dokl. Akad. Nauk SSSR 113, 261–263 (1957).
[Ei05] A. Einstein. Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen. Ann. Physik 17, 549–560 (1905).
[Ei94] N. Eisenbaum. Dynkin's isomorphism theorem and the Ray–Knight theorems. Probab. Theory Related Fields 99, 321–335 (1994).
[EK46] P. Erdős and M. Kac. On certain limit theorems in the theory of probability. Bull. Amer. Math. Soc. 52, 292–302 (1946).
[EK47] P. Erdős and M. Kac. On the number of positive sums of independent random variables. Bull. Amer. Math. Soc. 53, 1011–1020 (1947).
[Ev85] S. Evans. On the Hausdorff dimension of Brownian cone points. Math. Proc. Cambridge Philos. Soc. 98, 343–353 (1985).
[Ev87a] S. Evans. Potential theory for a family of several Markov processes. Ann. Inst. H. Poincaré Probab. Statist. 23, 499–530 (1987).
[Ev87b] S. Evans. Multiple points in the sample paths of a Lévy process. Probab. Theory Related Fields 76, 359–367 (1987).
[Fa97a] K. Falconer. Fractal geometry: mathematical foundations and applications. Wiley, Chichester (1997).
[Fa97b] K. Falconer. Techniques in fractal geometry. John Wiley & Sons, Chichester (1997).
[FH96] K. J. Falconer and J. D. Howroyd. Projection theorems for box and packing dimension. Math. Proc. Camb. Phil. Soc. 119, 287–295 (1996).
[Fe68] W. Feller. An introduction to probability theory and its applications. Volume I. 3rd Edition. John Wiley & Sons, New York (1968).
[Fe66] W. Feller. An introduction to probability theory and its applications. Volume II. John Wiley & Sons, New York (1966).
[FS89] P. J. Fitzsimmons and T. Salisbury. Capacity and energy for multiparameter Markov processes. Ann. Inst. Henri Poincaré, Probab. 25, 325–350 (1989).
[FF56] L. R. Ford Jr. and D. R. Fulkerson. Maximal flow through a network. Canad. J. Math. 8, 399–404 (1956).
[Fr67] B. Fristedt. An extension of a theorem of S. J. Taylor concerning the multiple points of the symmetric stable process. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 9, 62–64 (1967).
[FG97] B. Fristedt and L. Gray. A modern approach to probability theory. Birkhäuser, Boston (1997).
[FKG71] C. M. Fortuin, P. N. Kasteleyn and J. Ginibre. Correlational inequalities for partially ordered sets. Comm. Math. Phys. 22, 89–103 (1971).
[Fr35] O. Frostman. Potentiel d'équilibre et capacité des ensembles avec quelques applications à la théorie des fonctions. Meddel. Lunds Univ. Math. Sem. 3, 1–118 (1935).
[Ga40] C. F. Gauss. Allgemeine Lehrsätze in Beziehung auf die im verkehrten Verhältnisse des Quadrats der Entfernung wirkenden Anziehungs- und Abstoßungskräfte. Gauss Werke 5, 197–242 (1840).
[GH80] D. Geman and J. Horowitz. Occupation densities. Ann. Probab. 8, 1–67 (1980).
[GHR84] D. Geman, J. Horowitz and J. Rosen. A local time analysis of intersections of Brownian motion in the plane. Ann. Probab. 12, 86–107 (1984).
[Gr28] G. Green. An essay on the application of mathematical analysis to the theories of electricity and magnetism. (1828).
[GP83] C. Greenwood and E. Perkins. A conditioned limit theorem for random walk and Brownian local time on square root boundaries. Ann. Probab. 11, 227–261 (1983).
[Ha19] F. Hausdorff. Dimension und äußeres Maß. Math. Ann. 79, 157–179 (1919).
[Ha60] T. E. Harris. A lower bound for the critical probability in a certain percolation process. Proc. Camb. Phil. Soc. 56, 13–20 (1960).
[HW41] P. Hartman and A. Wintner. On the law of the iterated logarithm. Amer. J. Math. 63, 169–176 (1941).
[Ha81] J. Hawkes. Trees generated by a simple branching process. J. London Math. Soc. 24, 373–384 (1981).
[Ho95] J. D. Howroyd. On dimension and on the existence of sets of finite positive Hausdorff measure. Proc. Lond. Math. Soc., III 70, 581–604 (1995).
[HT97] X. Hu and S. J. Taylor. The multifractal structure of stable occupation measure. Stoch. Proc. Appl. 66, 283–299 (1997).
[HSY92] B. R. Hunt, T. Sauer and J. A. Yorke. Prevalence: a translation-invariant "almost every" on infinite-dimensional spaces. Bull. Amer. Math. Soc. 27(2), 217–238 (1992).
[Hu56] G. A. Hunt. Some theorems concerning Brownian motion. Trans. Amer. Math. Soc. 81, 294–319 (1956).
[It44] K. Itô. Stochastic integral. Proc. Imp. Acad. Tokyo 20, 519–524 (1944).
[It51] K. Itô. On a formula concerning stochastic differentials. Nagoya Math. Journal 3, 55–65 (1951).
[IM74] K. Itô and H. P. McKean. Diffusion processes and their sample paths. Springer, Berlin (1974).
[Ka51] M. Kac. On some connections between probability theory and differential and integral equations. Proc. 2nd Berkeley Symp. Math. Statist. Probab., 189–215 (1951).
[Ka76] J.-P. Kahane. Sur les zéros et les instants de ralentissement du mouvement brownien. C. R. Acad. Sci. Paris 282, 431–433 (1976).
[Ka85] J.-P. Kahane. Some random series of functions. Cambridge University Press (1985).
[Ka86] J.-P. Kahane. Sur la dimension des intersections. In: Aspects of Mathematics and its Applications (Ed. J. A. Barroso), 419–430, North Holland (1986).
[Ka44a] S. Kakutani. On Brownian motions in n-space. Proc. Imp. Acad. Tokyo 20, 648–652 (1944).
[Ka44b] S. Kakutani. Two dimensional Brownian motion and harmonic functions. Proc. Imp. Acad. Tokyo 20, 706–714 (1944).
[Ka45] S. Kakutani. Markoff process and the Dirichlet problem. Proc. Japan Acad. 21, 227–233 (1945).
[Ka02] O. Kallenberg. Foundations of modern probability. Springer Verlag (2002).
[KS88] I. Karatzas and S. E. Shreve. Brownian motion and stochastic calculus. Springer Verlag (1988).
[Ka69] R. Kaufman. Une propriété métrique du mouvement brownien. C. R. Acad. Sci. Paris 268, 727–728 (1969).
[Ka75] R. Kaufman. Large increments of Brownian motion. Nagoya Math. Journal 56, 139–145 (1975).
[Kh23] A. Y. Khinchin. Über dyadische Brüche. Math. Zeitschrift 18, 109–116 (1923).
[Kh24] A. Y. Khinchin. Über einen Satz der Wahrscheinlichkeitsrechnung. Fund. Math. 6, 9–20 (1924).
[Kh33] A. Y. Khinchin. Asymptotische Gesetze der Wahrscheinlichkeitsrechnung. Springer Verlag, Berlin (1933).
[Kh02] D. Khoshnevisan. Multiparameter processes. Springer Verlag (2002).
[Kh99] D. Khoshnevisan. Brownian sheet images and Bessel–Riesz capacity. Trans. Amer. Math. Soc. 351, 2607–2622 (1999).
[KPX00] D. Khoshnevisan, Y. Peres and Y. Xiao. Limsup random fractals. El. J. Probab. 5, Paper 4, pp. 1–24 (2000).
[KS00] D. Khoshnevisan and Z. Shi. Fast sets and points for fractional Brownian motion. Séminaire de Probabilités XXXIV, 393–416, Lecture Notes in Math. 1729, Springer, Berlin (2000).
[KX05] D. Khoshnevisan and Y. Xiao. Lévy processes: Capacity and Hausdorff dimension. Ann. Probab. 33, 841–878 (2005).
[KM05] A. Klenke and P. Mörters. The multifractal spectrum of Brownian intersection local times. Ann. Probab. 33, 1255–1301 (2005).
[Kn63] F. B. Knight. Random walks and a sojourn density process of Brownian motion. Trans. Amer. Math. Soc. 109, 56–86 (1963).
[Kn81] F. B. Knight. Essentials of Brownian Motion and Diffusion. Amer. Math. Soc., Providence, Rhode Island (1981).
[Ko29] A. N. Kolmogorov. Über das Gesetz des iterierten Logarithmus. Math. Ann. 101, 126–135 (1929).
[KMT75] J. Komlós, P. Major and G. Tusnády. An approximation of partial sums of independent random variables and the sample distribution function. Z. Wahrsch. verw. Gebiete 32, 111–131 (1975).
[KM02] W. König and P. Mörters. Brownian intersection local times: upper tails and thick points. Ann. Probab. 30(4), 1605–1656 (2002).
[La98] M. Laczkovich. Conjecture and proof. TypoTeX, Budapest (1998).
[La09] E. Landau. Verteilung der Primzahlen. (1909).
[La96a] G. F. Lawler. Hausdorff dimension of cut points for Brownian motion. El. Journal Probab. 1 (1996).
[La96b] G. F. Lawler. The dimension of the frontier of planar Brownian motion. El. Comm. in Probab. 1, 29–47 (1996).
[La99] G. F. Lawler. Geometric and fractal properties of Brownian motion and random walk paths in two and three dimensions. In: Random walks (Budapest 1998), 219–258, Bolyai Society Math. Stud., Volume 9, Budapest (1999).
[LSW01] G. F. Lawler, O. Schramm and W. Werner. The dimension of the Brownian frontier is 4/3. Math. Res. Lett. 8, 401–411 (2001).
[Le24] H. Lebesgue. Conditions de régularité, conditions d'irrégularité, conditions d'impossibilité dans le problème de Dirichlet. Comp. Rendu Acad. Sci. 178, 349–354 (1924).
[LG85] J.-F. Le Gall. Sur la mesure de Hausdorff de la courbe brownienne. In: Séminaire de probabilités (Strasbourg) 19, 297–313 (1985).
[LG86] J.-F. Le Gall. The exact Hausdorff measure of Brownian multiple points. In: Seminar on Stochastic Processes (1986), Birkhäuser, Basel, pp. 107–137.
[LG87] J.-F. Le Gall. Le comportement du mouvement brownien entre les deux instants où il passe par un point double. J. Funct. Anal. 71, 246–262 (1987).
[LG91] J.-F. Le Gall. Some properties of planar Brownian motion. Springer Lecture Notes in Mathematics.
[LG99] J.-F. Le Gall. Spatial branching processes, random snakes and partial differential equations. Lectures in Mathematics ETH Zürich. Birkhäuser, Basel (1999).
[LL98] J.-F. Le Gall and Y. Le Jan. Branching processes in Lévy processes: the exploration process. Ann. Probab. 26, 213–252 (1998).
[LRS89] J.-F. Le Gall, J. S. Rosen and N.-R. Shieh. Multiple points of Lévy processes. Ann. Probab. 17, 503–515 (1989).
[Le66] E. Lehmann. Some concepts of dependence. Ann. Math. Statist. 37, 1137–1153 (1966).
[Le37] P. Lévy. Théorie de l'addition des variables aléatoires. Gauthier-Villars, Paris (1937).
[Le39] P. Lévy. Sur certains processus stochastiques homogènes. Comp. Math. 7, 283–339 (1939).
[Le40] P. Lévy. Le mouvement brownien plan. Amer. J. Math. 62, 487–550 (1940).
[Le48] P. Lévy. Processus stochastiques et mouvement brownien. Suivi d'une note de M. Loève. Gauthier-Villars, Paris (1948).
[Le51] P. Lévy. La mesure de Hausdorff de la courbe du mouvement brownien à n dimensions. C. R. Acad. Sci. Paris 233, 600–602 (1951).
[Le54] P. Lévy. La mesure de Hausdorff de la courbe du mouvement brownien. Giorn. Ist. Ital. Attuari 16, 1–37 (1954).
[Li95] M. A. Lifshits. Gaussian random functions. Kluwer, Dordrecht (1995).
[Ly90] R. Lyons. Random walks and percolation on trees. Ann. Probab. 18, 931–958 (1990).
[Ly92] R. Lyons. Random walks, capacity and percolation on trees. Ann. Probab. 20, 2043–2088 (1992).
[Ly96] R. Lyons. Probabilistic aspects of infinite trees and some applications. In: Trees, pp. 81–94, Birkhäuser, Basel (1996).
[LP05] R. Lyons with Y. Peres. Probability on trees and networks. To appear. Preliminary version currently available at rdlyons/prbtree/prbtree.html
[MR06] M. B. Marcus and J. Rosen. Markov processes, Gaussian processes, and local times. Cambridge University Press, Cambridge (2006).
[Ma85] N. G. Makarov. On the distortion of boundary sets under conformal mappings. Proc. London Math. Soc. (3) 51, 369–384 (1985).
[Ma06] A. A. Markov. Extension of the law of large numbers to dependent events. [Russian] Bull. Soc. Phys. Math. Kazan 15, 135–156 (1906).
[Ma95] P. Mattila. Geometry of sets and measures in Euclidean spaces. Fractals and rectifiability. Cambridge University Press (1995).
[MM97] P. Mattila and R. D. Mauldin. Measure and dimension functions: measurability and densities. Math. Proc. Camb. Philos. Soc. 121, 81–100 (1997).
[McK55] H. P. McKean Jr. Hausdorff–Besicovitch dimension of Brownian motion paths. Duke Math. J. 22, 229–234 (1955).
[Me83] I. Meilijson. On the Azéma–Yor stopping time. In: Seminar on probability XVII, 225–226, Springer LNM 986 (1983).
[Me66] P. A. Meyer. Probability and Potentials. Blaisdell Publishing, Waltham (1966).
[MG84] D. P. Minassian and J. W. Gaisser. A simple Weierstrass function. Amer. Math. Monthly 91, 254–256 (1984).
[Mö02] P. Mörters. A pathwise version of Spitzer's law. In: Limit theorems in Probability and Statistics II, János Bolyai Mathematical Society, 427–436 (2002).
[Ne75] J. Neveu. Discrete-parameter martingales. Revised Edition. Translated from the French by T. P. Speed. North Holland, Amsterdam (1975).
[NP89] J. Neveu and J. W. Pitman. The branching process in a Brownian excursion. Séminaire de Probabilités XXIII, 248–257, Lecture Notes in Math. 1372, Springer, Berlin (1989).
[Ob04] J. Obłój. The Skorokhod embedding problem and its offspring. Probab. Surv. 1, 321–390 (2004).
[ON04] T. C. O'Neil. The Hausdorff dimension of the visible sets of connected compact sets. Trans. Amer. Math. Soc., to appear (2004).
[OT74] S. Orey and S. J. Taylor. How often on a Brownian path does the law of the iterated logarithm fail? Proc. London Math. Soc. (3) 28, 174–192 (1974).
[PWZ33] R. E. A. C. Paley, N. Wiener and A. Zygmund. Notes on random functions. Math. Zeitschrift 37, 647–668 (1933).
[Pe95] R. Pemantle. Tree-indexed processes. Stat. Sci. 10, 200–213 (1995).
[Pe97] R. Pemantle. The probability that Brownian motion almost contains a line. Ann. Inst. H. Poincaré, Probab. Stat. 33, 147–165 (1997).
[PP07] R. Pemantle and Y. Peres. What is the probability of intersecting the set of Brownian double points? Ann. Probab. 35, 2044–2062 (2007).
[Pe96a] Y. Peres. Remarks on intersection-equivalence and capacity-equivalence. Ann. Inst. H. Poincaré (Physique Théorique) 64, 339–347 (1996).
[Pe96b] Y. Peres. Intersection-equivalence of Brownian paths and certain branching processes. Commun. Math. Phys. 177, 417–434 (1996).
[Pe96c] Y. Peres. Points of increase for random walks. Israel J. of Math. 95, 341–347 (1996).
[Pe99] Y. Peres. Probability on trees: An introductory climb. In: Bertoin, J. (ed.) et al., Lectures on probability theory and statistics. École d'été de Probabilités de Saint-Flour XXVII-1997. Berlin: Springer. Lect. Notes Math. 1717, 193–280 (1999).
[Pe81] E. Perkins. The exact Hausdorff measure of the level sets of Brownian motion. Z. Wahrscheinlichkeitstheorie 58, 373–388 (1981).
[Pe83] E. Perkins. On the Hausdorff dimension of Brownian slow points. Z. Wahrscheinlichkeitstheorie 64, 369–399 (1983).
[PT87] E. Perkins and S. J. Taylor. Uniform measure results for the image of subsets under Brownian motion. Probab. Theory Rel. Fields 76, 257–289 (1987).
[PT88] E. Perkins and S. J. Taylor. Measuring close approaches on a Brownian path. Ann. Probab. 16, 1458–1480 (1988).
[PW23] H. B. Phillips and N. Wiener. Nets and the Dirichlet problem. J. Math. Phys. 2, 105–124 (1923).
[Pi75] J. W. Pitman. One-dimensional Brownian motion and the three-dimensional Bessel process. Advances in Appl. Probability 7, 511–526 (1975).
[PY86] J. Pitman and M. Yor. Asymptotic laws of planar Brownian motion. Ann. Probab. 14, 733–779 (1986).
[Po90] H. Poincaré. Sur les équations aux dérivées partielles de la physique mathématique. Amer. J. Math. 12, 211–294 (1890).
[Po21] G. Pólya. Über eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Straßennetz. Math. Ann. 84, 149–160 (1921).
[PS78] S. C. Port and C. J. Stone. Brownian motion and classical potential theory. Academic Press, New York (1978).
[Pr56] Y. V. Prohorov. Convergence of random processes and limit theorems in probability. Th. Probab. Appl. 1, 157–214 (1956).
[Pr90] W. E. Pruitt. The rate of escape of random walk. Ann. Probab. 18, 1417–1461 (1990).
[PU89] F. Przytycki and M. Urbański. On the Hausdorff dimension of some fractal sets. Studia Math. 93, 155–186 (1989).
[Ra63a] D. Ray. Sojourn times and the exact Hausdorff measure of the sample path for planar Brownian motion. Trans. Amer. Math. Soc. 106, 436–444 (1963).
[Ra63b] D. Ray. Sojourn times of diffusion processes. Illinois J. Math. 7, 615–630 (1963).
[RY94] D. Revuz and M. Yor. Continuous martingales and Brownian motion. Springer Verlag (1994).
[RP49] H. Robbins and E. J. G. Pitman. Application of the method of mixtures to quadratic forms. Ann. Math. Statist. 20, 552–560 (1949).
[Ro99] C. A. Rogers. Hausdorff measures. Cambridge University Press (1999).
[RT61] C. A. Rogers and S. J. Taylor. Functions continuous and singular with respect to a Hausdorff measure. Mathematika 8, 1–31 (1961).
[RW00] L. C. G. Rogers and D. Williams. Diffusions, Markov processes and martingales. Cambridge University Press (2000).
[Ro69] D. H. Root. The existence of certain stopping times on Brownian motion. Ann. Math. Statist. 40, 715–718 (1969).
[Ro83] J. Rosen. A local time approach to the self-intersections of Brownian paths in space. Comm. Math. Phys. 88, 327–338 (1983).
[Ru86] W. Rudin. Real and complex analysis. 3rd edition. McGraw-Hill.
[Se95] L. Serlet. Some dimension results for super-Brownian motion. Probab. Theory Related Fields 101, 371–391 (1995).
[Sh67] L. A. Shepp. A first passage problem for the Wiener process. Ann. Math. Statist. 38, 1912–1914 (1967).
[Sh98] Z. Shi. Windings of Brownian motion and random walks in the plane. Ann. Probab. 26, 112–131 (1998).
[ST99] N.-R. Shieh and S. J. Taylor. Logarithmic multifractal spectrum of stable occupation measure.
[Sk65] A. Skorokhod. Studies in the theory of random processes. Addison-Wesley, Reading, MA (1965).
[Sp58] F. Spitzer. Some theorems concerning 2-dimensional Brownian motion. Trans. Amer. Math. Soc. 87, 187–197 (1958).
[Sp64] F. Spitzer. Principles of random walk. Van Nostrand, Princeton (1964).
[St75] F. Stern. Conditional expectation of the duration in the classical ruin problem. Mathematics Magazine 48, 200–203 (1975).
[St64] V. Strassen. An invariance principle for the law of the iterated logarithm. Z. Wahrsch. Verw. Gebiete 3, 211–226 (1964).
[Ta63] H. Tanaka. Note on continuous additive functionals of the one-dimensional Brownian path. Z. Wahrsch. Verw. Gebiete 1, 251–257 (1963).
[Ta53] S. J. Taylor. The Hausdorff α-dimensional measure of Brownian paths in n-space. Proc. Cambridge Philos. Soc. 49, 31–39 (1953).
[Ta55] S. J. Taylor. The α-dimensional measure of the graph and set of zeros of a Brownian path. Proc. Cambridge Philos. Soc. 51, 265–274 (1955).
[Ta64] S. J. Taylor. The exact Hausdorff measure of the sample path for planar Brownian motion. Proc. Cambridge Philos. Soc. 60, 253–258 (1964).
[Ta66] S. J. Taylor. Multiple points for the sample paths of the symmetric stable process. Zeitschr. Wahrsch. verw. Gebiete 5, 247–264 (1966).
[Ta72] S. J. Taylor. Exact asymptotic estimates of Brownian path variation. Duke Math. J. 39, 219–241 (1972).
[Ta86] S. J. Taylor. The measure theory of random fractals. Proc. Cambridge Philos. Soc. 100, 383–406 (1986).
[TT85] S. J. Taylor and C. Tricot. Packing measure and its evaluation for a Brownian path. Trans. Amer. Math. Soc. 288, 679–699 (1985).
[TW66] S. J. Taylor and J. G. Wendel. The exact Hausdorff measure of the zero set of a stable process. Zeitschr. Wahrsch. verw. Gebiete 6, 170–180 (1966).
[To88] N. Tongring. Which sets contain multiple points of Brownian motion? Math. Proc. Camb. Phil. Soc. 103, 181–187 (1988).
[Tr82] C. Tricot. Two definitions of fractional dimension. Math. Proc. Cambridge Philos. Soc. 91, 57–74 (1982).
[Tr58] H. Trotter. A property of Brownian motion paths. Ill. Journal Math. 2, 425–433 (1958).
[We04] W. Werner. Random planar curves and Schramm–Loewner evolutions. In: Lectures on probability theory and statistics, Lecture Notes in Math. 1840, 107–195, Springer (2004).
[Wi23] N. Wiener. Differential space. Journal Math. Phys. 2, 131–174 (1923).
[Wi91] D. Williams. Probability with martingales. Cambridge University Press (1991).
[Xi04] Y. Xiao. Random fractals and Markov processes. In: Fractal Geometry and Applications: A Jubilee of Benoit Mandelbrot. Eds: M. L. Lapidus, M. van Frankenhuijsen, pp. 261–338. American Mathematical Society (2004).
[Yo92] M. Yor. Some aspects of Brownian motion. Part I. Birkhäuser Verlag (1992).
[Za11] S. Zaremba. Sur le principe de Dirichlet. Acta Math. 34, 293–316 (1911).
243
How many squares are on an 8×8 chess board? - Quora
===============

How many squares are on an 8×8 chess board?

Ashish Chandra · 9y
Aha, I love these kinds of questions where you need to look at them from another perspective to get the answer. All the people down there have given a correct answer, but I would like to present it in a different and very simple way. Let's first simplify the problem. A chess board is 8×8 units, and the most obvious answer that comes to mind is 64, but as we start looking into the 2×2, 3×3, all the way up to the 8×8 squares, things get complicated beyond comprehension. OK, let's start with a single square; now make a 2×2 square, then a 3×3 square, and so on (the build-up was illustrated with pictures). I guess you have got the point: for a chess board of 8×8 squares, the total number of squares is 64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204. Bonus: calculate the number of rectangles on the chessboard. (Hint: a square is also a rectangle.)
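A quick sanity check of the 204 total, as a throwaway Perl snippet (mine, not part of any answer in the thread): a k×k square can sit in (9 − k)² positions on the board, so we just sum those counts.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A k-by-k square fits on an 8x8 board in (9-k) positions per axis.
    my $total = 0;
    $total += ( 9 - $_ )**2 for 1 .. 8;
    print "$total\n";    # prints 204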
Marc Rafalowicz · 4y
Originally Answered: How many squares of any size are in an 8x8 checkerboard?
You have 64 single squares in a checkerboard. Then you have 2x2 squares, and you have to imagine how many of them can be found on the board. From the upper left corner, you can slide such a 2x2 square 7 times to the right or downwards before you reach the borders of the board. Combining these possibilities, you get 7x7 = 49 squares of 2x2. Likewise, a 3x3 square can slide 6 times to the right or downwards, giving 6x6 = 36 squares of 3x3. I guess you see the logic, and find easily that you have 25 squares of 4x4, 16 squares of 5x5, 9 squares of 6x6, 4 squares of 7x7, and one big square of 8x8, the checkerboard itself. This gives a total of 64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204 squares.

Heng Bryan · 4y
Originally Answered: How many squares are in an 8x8 grid?
I hope I am interpreting this question correctly; I assume you are asking how many squares there are in an 8x8 grid. 8x8 = 64. There are 7 possible positions for a 2x2 square within 2 rows, and likewise 7 downwards, so 7x7 = 49. There are 6 possible positions for a 3x3 square within 3 rows, so 6x6 = 36. Starting to see the pattern now? Every time the square grows by one, the number of possible positions along each side drops by one, and the count for that size is that number squared. Continuing: 5x5 = 25, 4x4 = 16, 3x3 = 9, 2x2 = 4, 1x1 = 1. So 64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204; the answer is 204. However, if you were just asking how many unit squares there are in the grid, the answer is 64.

Anders Kaseorg · 11y
Originally Answered: How many squares are on an 8x8 chess board?
(The inline formulas in this answer were images that did not survive extraction.) Let the chess board be axis-aligned with its southwest and northeast corners at (0,0) and (8,8), and describe each square by its southwest and northeast corners. Counting separately the squares whose centres lie south of the 45° diagonal line and those centred on or north of it gives a total of 204 squares. (Same idea as my answer to How many squares are there in a rectangle, 12 square tiles wide and 24 square tiles high?)
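Ashish's bonus question has the same flavour: a rectangle on the board is determined by choosing two of the nine horizontal grid lines and two of the nine vertical ones, so there are C(9,2)² = 36² = 1296 rectangles. A one-off Perl check of mine (not from the thread):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # A rectangle = one pair of the 9 horizontal lines x one pair of the 9 vertical lines.
    my $pairs = 9 * 8 / 2;      # C(9,2) = 36
    print $pairs**2, "\n";      # prints 1296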
Assistant Bot · 11mo
An 8×8 chessboard contains not only the 64 individual squares (8 rows and 8 columns) but also larger squares that can be formed by combining smaller squares. To find the total number of squares on the chessboard, we count the squares of each possible size: 1x1: 64, 2x2: 49, 3x3: 36, 4x4: 25, 5x5: 16, 6x6: 9, 7x7: 4, 8x8: 1. Summing these values, the total number of squares on an 8×8 chessboard is 204.

Aditya Bhushan · 9y
Originally Answered: How many squares are there in an 8x8 chess board?
1 8x8 square, 4 7x7 squares, 9 6x6 squares, 16 5x5 squares, 25 4x4 squares, 36 3x3 squares, 49 2x2 squares, 64 1x1 squares. Total: 204 squares.

Lance Everett · 6y
Consider the different places each different kind of square can be located. A 1 by 1 square can be in 8 different positions for each of the x and y coordinates. A 2 by 2 can be in 7 positions each, since one place in each direction has been eliminated, and so on. Hence we have the sum of the squares 1² through 8². There is a closed formula for the sum of the first n squares, n(n+1)(2n+1)/6, which can be more or less easily derived from the Euler–Maclaurin formula (or other more involved methods). Here we have 8·9·17/6 = 204, as expected.
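Lance's closed formula, written out explicitly (a standard identity, not quoted verbatim from his answer):
\[
\sum_{k=1}^{n} k^2 = \frac{n(n+1)(2n+1)}{6}, \qquad \sum_{k=1}^{8} k^2 = \frac{8 \cdot 9 \cdot 17}{6} = 204.
\]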
Sanket Alekar · 11y
Think of the chess board as an 8×8 Cartesian grid. To create an n×n square, where 1 ≤ n ≤ 8, we can first select a segment of length n along the x-axis, and another along the y-axis. If we draw regions on the grid perpendicular to each of these segments, the overlap between the two regions will be an n×n square. There are 8 ways to select a 1-length segment along any axis, 7 ways to select a 2-length segment, 6 ways for 3-length, and in general 9 − n ways for n-length segments. So for each value of n there are (9 − n)² ways of selecting a square of side n, and in total we have 204 squares.

George Binning · 8y
Originally Answered: For constructing an 8×8 chess board, how many squares are there in the chess board?
64? If only it were that simple. Unfortunately, you can also fit seven 2×2 squares along each side of a chess board, and six 3×3 squares along each side, and so on. So there are really (8×8)+(7×7)+(6×6)+(5×5)+(4×4)+(3×3)+(2×2)+(1×1) = 204 squares on a chess board.

Ravi Gupta · 10y
Originally Answered: How many squares are there in an 8×8 grid?
The number of squares in an n×n grid is n² + (n−1)² + (n−2)² + ... + 1². So for n = 8: 8² + 7² + 6² + 5² + 4² + 3² + 2² + 1² = 64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204. So 204 is the answer.

Joseph DiGiorgio · 5y
Originally Answered: How many squares are there on a standard 8 x 8 checkerboard? The answer is not 64.
I get 204: 1x1 squares = 8² = 64; 2x2 squares = 7² = 49; 3x3 squares = 6² = 36; 4x4 squares = 5² = 25; 5x5 squares = 4² = 16; 6x6 squares = 3² = 9; 7x7 squares = 2² = 4; 8x8 squares = 1² = 1. Total: 204 squares.

Rhythm Gupta · 13y
Originally Answered: How many squares are on an 8x8 chess board?
64 + 49 + 36 + 25 + 16 + 9 + 4 + 1 = 204
244
If there a way to find the location of the first difference between two strings?
===============

PerlMonks: Seekers of Perl Wisdom

If there a way to find the location of the first difference between two strings?
by flexvault (Monsignor) on Mar 26, 2012 at 09:02 UTC [id://961622]

flexvault has asked for the wisdom of the Perl Monks concerning the following question:

Dear Monks,

Since looping on characters is not one of Perl's strong points, is there a way to get the location of the first difference between two strings? (Note: I typed the code in since I can't cut and paste between my browser and my Xterm.)

    perl -e 'use bytes; $s1="abcd"; $s2="abcz"; $dif=$cmp=$s1 cmp $s2; print "+$dif\t$cmp\n";'

What I would like to get is the following:

    3 -1

Obviously it doesn't work that way, but is there an alternative that I'm not aware of?

Thank you

"Well done is better than well said." - Benjamin Franklin

Re: If there a way to find the location of the first difference between two strings?
by JavaFan (Canon) on Mar 26, 2012 at 09:21 UTC

Untested:

    use 5.010;
    my $t = $s1 ^ $s2;
    my ($f) = $t =~ /^(\x{00}*)/;
    say length $f;

Re^2: If there a way to find the location of the first difference between two strings?
by jwkrahn (Abbot) on Mar 26, 2012 at 09:33 UTC

Another way to do that:

    use 5.010;
    my $t = $s1 ^ $s2;
    $t =~ /^\0*/ && say $+[0];

Re^3: If there a way to find the location of the first difference between two strings?
by flexvault (Monsignor) on Mar 26, 2012 at 10:36 UTC

JavaFan and jwkrahn, C o o l! That's the code I was hoping for... Need to benchmark now!

UPDATE: My apologies to JavaFan, I didn't realize his post was first. Both solutions benchmarked better than mine (376%). Thank you

Re: If there a way to find the location of the first difference between two strings?
by AnomalousMonk (Archbishop) on Mar 26, 2012 at 16:15 UTC

Note that if the strings being compared are equal, both JavaFan's and jwkrahn's solutions yield a number for the first-difference offset that is equal to the length of the string(s).

    >perl -wMstrict -lE
    "my $s1 = 'abcdefg';
    my $s2 = 'abcdefg';
    ;;
    my $t = $s1 ^ $s2;
    ;;
    my ($f) = $t =~ /^(\x{00}*)/; say length $f;
    ;;
    $t =~ /^\0*/ && say $+[0];
    "
    7
    7

Re^2: If there a way to find the location of the first difference between two strings?
by flexvault (Monsignor) on Mar 26, 2012 at 18:00 UTC

I still needed to know how they compare (lt eq gt), so I did the following:

    srand( 711 );
    . . .
    sub case2 {
        my $s1 = 'a' x 39 . 'b';
        my $s2 = 'a' x 39 . chr( 98 + rand(4) );
        my ( $cmp, $loc ) = Cmp_and_Loc( $s1, $s2 );
    }

    sub Cmp_and_Loc {
        my ( $s1, $s2, undef ) = @_;
        my $t   = 0;
        my $cmp = $s1 cmp $s2;
        if ( $cmp ) {
            $t = $s1 ^ $s2;
            $t =~ /^\0*/;
            $t = $+[0];
        }
        return ( $cmp, $t );
    }

This appeared to work, since if the strings are equal the sub returns 0,0.
I will test your variation, but it still flies compared to anything I tried. Using the rand(4), I think I was able to test all combinations. As always, when someone shows you how to do it correctly, it becomes easy :-)

Thank you

"Well done is better than well said." - Benjamin Franklin

Re^2: by trizen (Hermit) on Mar 26, 2012 at 17:50 UTC

Small fix:

    $t =~ /[^\0]/ && say $-[0];

Re: Is there a way to find the location of the first difference between two strings?
by jaredor (Priest) on Mar 28, 2012 at 05:03 UTC

If you are going to use subroutines in lieu of one-liners, here's a non-bit-twiddling version. You've already got a solution, so please forgive the redundancy; this is a nice excuse for me to practice implementing streams a la HOP ("Higher-Order Perl", now available for free download).

If you, like me, feel leery using bit-wise operations on strings that might be Unicode, this may be a more comforting approach. While your other solutions may indeed work 100% of the time with Unicode strings as well, that is too much thinking and worry for me, so I just punt to the standard string manipulation and comparison functions in Perl. The code doesn't have any error checking, but the compare stream tries to do the right thing (per my tastes) for the boundary cases. Obviously you can change stream output behavior to taste. I left the characters in the output to better demonstrate the results. (And I generalized it a little to allow use of streams of characters in addition to just strings, so the YAGNI line tax is 1, leaving aside the YAGNI maintenance tax ;-)

I'm usually late to the party, so I don't expect many to see this, but if anyone has suggestions for improvement I would be interested in hearing them.

This code: [Compare strings with generators (2 kB)] produces: [Output of test (2 kB)]

Re^2: by flexvault (Monsignor) on Mar 28, 2012 at 16:08 UTC

jaredor,

Thanks for your input. If you notice, in the original post I said "use bytes" to eliminate concerns about UCA.

    perl -e 'use bytes;$s1="abcd";$s2="abcz";$dif=$cmp=$s1 cmp $s2;print "+$dif\t$cmp\n";'

The performance hit for using UCA is just too great. In some of my tests, performance degraded by as much as 10,000%.

As for "bit-wise operations on strings": I have a math background and started programming by writing code in machine language, and later assembler, Basic, Fortran, C, and a lot of others, until I had the good fortune of being introduced to Perl.

To explain why performance is so critical: I have been writing a "pure-perl" database engine to replace Oracle's BerkeleyDB and MySQL in all of our products, with the goal of coming within 20% of the performance of the Oracle products. As it turns out, our clients will see enhanced performance when we switch them over, and we will be able to provide database support on any platform that Perl runs on. (An area where Perl excels!) I have been very impressed with the performance of Perl since 5.8.x.

In profiling the code (-d:NYTProf), the routine I asked about is called 14,595,348 times in a test that writes 100K records, so even a slight improvement would be welcome. Thanks to the PM answers, I got a 376% increase in performance. Great! (Note: Some of our clients have databases with billions of records.)
When I wrote Perl performance just gets better and better!, my intent was to show that Perl has improved over the years. It was the first time I had an actual test case to run on several versions of Perl, from 5.6.1 to 5.12.2. Since then I have tested with 5.14.2, with even better results. I don't know why Perl performance is improving for this type of work, but I can demonstrate that it is.

I have also incorrectly used the term "modern Perl" in the past, since I didn't realize that a module "Modern::Perl" existed.

Thank you and Good Luck!

"Well done is better than well said." - Benjamin Franklin

Re^3: by chromatic (Archbishop) on Apr 02, 2012 at 06:58 UTC

> I didn't realize that a module "Modern::Perl" existed.

It's just a silly little shortcut to enable new (and should-have-been-on-by-default) features in the most recent releases of Perl 5. "Modern" is deliberately vague.

Re^3: by jaredor (Priest) on Apr 02, 2012 at 06:44 UTC

Thanks for the background, flexvault. I doubt I would have posted anything had I known you were doing something with database keys; I thought you might be writing some sort of diff routine for a homebrew editor or some such. (I should have checked you out anyway to see that you've way too much history and mojo to need to be told about iterators.)

I looked more at JavaFan's and jwkrahn's solutions than at your initial statement of the problem, so I overlooked your use of the bytes pragma. I guess I'm conditioned to look for the -M and -m options. I've never used the bytes module, which seems to make all strings just byte vectors.

Modding out by endianness, do you think there's some sort of bit-wise C idiom out there to capitalize on the fact that one and only one of ($s1 & ($s1 ^ $s2)) or ($s2 & ($s1 ^ $s2)) will have the "high order bit"? You might be able to get away from using a regexp by, e.g., craftily using bit shifts. But I'm unfamiliar with issues such as whether using numerical ordering in database keys impacts performance for things that might have a different lexicographic ordering.

I don't think you need to apologize for using "modern Perl" in a general sense. chromatic puts that include at the top of his responses in PM and it's good PR for his excellent book, Modern Perl, but knowledgeable folk such as yourself are given lots of latitude by students such as myself, who learn a lot whenever you produce a "modern Perl" example.
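For reference, the thread's technique collected into one runnable script: a sketch, with the subroutine name first_diff and the test strings chosen here purely for illustration.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use feature 'say';
    use bytes;    # byte semantics, as in the original question

    # Return ($cmp, $offset): $cmp is -1/0/1 as for 'cmp'; $offset is the
    # position of the first differing byte (0 when the strings are equal).
    sub first_diff {
        my ( $s1, $s2 ) = @_;
        my $cmp = $s1 cmp $s2;
        return ( 0, 0 ) unless $cmp;

        # XORing the strings gives NUL bytes wherever they agree, so the
        # length of the leading NUL run is the offset of the first difference.
        my $t = $s1 ^ $s2;
        $t =~ /^\0*/;
        return ( $cmp, $+[0] );
    }

    say join "\t", first_diff( 'abcd', 'abcz' );    # -1  3
    say join "\t", first_diff( 'abcd', 'abcd' );    # 0   0

Guarding with cmp first, as flexvault's Cmp_and_Loc does, sidesteps the equal-strings case AnomalousMonk points out, where the leading NUL run is as long as the strings themselves.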
245
How much memory would a decimal number with 62.8 trillion digits after the decimal point take to store? - Quora
===============

All related (34)

Thomas Epp · Director of the Math Resource Center · 3y

That depends on the format that you use to store the number.

Using ASCII characters, each digit takes 8 bits (1 byte) to store, so the number would take 62.8 trillion bytes for the digits after the decimal point, plus 1 byte per digit before the decimal point, plus 1 for the decimal point itself. That's a little more than 57 TB, assuming that the number does not have billions or trillions of digits before the decimal point as well.

We can cut that number in half by using binary-coded decimal (BCD). BCD codes – there are actually several variations of BCD codes – use only 4 bits per decimal digit, so we can get 2 digits per byte. Then we would need only about 28.55 TB, plus the space required for the digits to the left of the decimal point.

If the number is a rational value (and if it has exactly 62.8 trillion digits after the decimal place, it is rational), then we can write it as a fraction and reduce the fraction. Depending on the numerator and denominator in the reduced fraction, we might be able to store them separately using fewer total digits.

It seems likely that the number you wish to store is not really a 62.8-trillion-digit rational number, but an approximation of an irrational number (π, perhaps?) that we know to 62.8 trillion digits. In that case, the more relevant question is not how much memory it would take to store all 62.8 trillion known digits, but how many digits we really need or want. For virtually all engineering calculations, 8 significant digits are more than enough; usually 2 or 3 digits will suffice. If the number is an approximation of an irrational, it would certainly be more efficient memory-wise to store the algorithm used to compute the digits, then use it to calculate the value to the accuracy needed. It would likely take longer to compute the value than to retrieve it from storage, but it would take a couple of kilobytes or less instead of terabytes.
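The 2-digits-per-byte BCD packing the answer describes is easy to demonstrate; a minimal sketch using Perl's pack (the digit string is an arbitrary sample, not from the answer):

    use strict;
    use warnings;

    # "H*" treats each character as one hex-style nibble, so two decimal
    # digits share a byte: 4 bits per digit, as in packed BCD.
    my $digits = '141592653589793238';              # 18 digits
    my $bcd    = pack 'H*', $digits;
    printf "%d digits -> %d bytes\n", length $digits, length $bcd;   # 18 -> 9

    # Unpacking restores the digit string (an odd digit count would
    # leave a pad nibble at the end).
    print unpack( 'H*', $bcd ), "\n";               # 141592653589793238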
Lawrence Stewart · Research Computer Scientist · 3y

It depends.

If it is a random number that is incompressible, and you store it in ASCII text form, then 62.8 trillion bytes of memory, plus however much for the integer part.

If you are storing it in binary, then each digit takes lg(10) bits, or 3.3219… bits. You can round up to 4 bits per digit and use BCD, which is easy.

If you are able to compress it, then it could be very small. For example, the number 0.33333… is pretty easy; it is threes all the way down. Computable numbers, like π or e or the square root of 2, are probably incompressible by, say, ZIP, but are easy to regenerate by writing a program to compute them and storing the program.

Assistant Bot · 11mo

To calculate the memory required to store a decimal number with 62.8 trillion digits after the decimal point, we can break down the requirements:

Number of digits: 62.8 trillion digits is 62.8 × 10^12 digits.

Memory per digit: each decimal digit (0–9) can be stored in 4 bits (since 2^4 = 16 covers the 10 digit values). Alternatively, you could use 1 byte (8 bits) per digit, which is often more practical in computer systems.

Calculating memory, using 4 bits per digit: 62.8 × 10^12 digits × 4 bits/digit = 251.2 × 10^12 bits; dividing by 8 to convert bits to bytes gives 31.4 × 10^12 bytes ≈ 31.4 TB. Using 1 byte per digit: 62.8 × 10^12 bytes ≈ 62.8 TB.

Conclusion: if you store each digit in 4 bits, you need approximately 31.4 TB; if you store each digit in 1 byte, approximately 62.8 TB. Depending on the storage method, the memory required is between 31.4 TB and 62.8 TB.
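A few lines reproduce the figures from the answers above (a sketch; TB here means decimal terabytes):

    use strict;
    use warnings;

    my $digits = 62.8e12;    # digits after the decimal point

    # Bytes required under each encoding discussed above.
    my %bytes = (
        'ASCII, 8 bits/digit'          => $digits,
        'BCD, 4 bits/digit'            => $digits / 2,
        'binary, log2(10) bits/digit'  => $digits * log(10) / log(2) / 8,
    );

    printf "%-30s %5.1f TB\n", $_, $bytes{$_} / 1e12 for sort keys %bytes;
    # ASCII 62.8 TB, BCD 31.4 TB, fully packed binary about 26.1 TB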
Peter Groot · B.S. in Mathematics, Massachusetts Institute of Technology · 3y

You are asking about π, which last year was computed to 62.8 trillion digits. If stored as decimal digits at 4 bits per digit, that is 31.4 trillion bytes. But it could be stored as hexadecimal digits, getting 4 bits of information per nibble instead of about 3.3 for decimal, so it could be stored in about 26 trillion bytes. With 2-terabyte flash drives, that is 16 drives for decimal or 13 drives for hexadecimal.

Michael Bauers · studied Computer Science at North Dakota State University · 5y
Related: How are decimal numbers with points stored in computer memory?

Depends on whether you want to know about fixed point or floating point.

For fixed point, one method is called binary-coded decimal (BCD). There are a number of methods; the one I am familiar with involves packing 2 decimal digits per byte. The sign is handled in various ways depending on the implementation. The lower-level arithmetic operations don't know where the decimal place is; as it's fixed, it doesn't matter. On output, a decimal place would be added. I think in COBOL, the language itself has provisions for dealing with the presentation of the answer.

Mostly computers use floating point numbers, and mostly IEEE floating point numbers (there are various IEEE standards). The typical 32-bit number is a binary floating point number with a 23-bit mantissa and an 8-bit exponent, which leaves one bit to represent the sign. As there's always a 1 in the leftmost place, it's not stored. So you have a value like

    ±1.DDDD…DDD × 2^E

with 23 binary digits of mantissa. The exponent goes from −126 to +127 and is stored as an unsigned 8-bit value with a bias; the bias is simply applied to the stored exponent value, such as by subtracting it. You will note this is just like so-called scientific notation, but in base 2 (binary) rather than base 10.
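The sign/exponent/mantissa split described above can be inspected directly; a sketch (3.14 is an arbitrary sample value):

    use strict;
    use warnings;

    # Pack 3.14 as a big-endian IEEE 754 single and split the 32 bits
    # into the 1/8/23 fields described above.
    my $bits = unpack 'B32', pack 'f>', 3.14;
    my ( $sign, $exp, $mant ) = unpack 'A1 A8 A23', $bits;

    printf "sign=%s  exponent=%s (stored %d)  mantissa=%s\n",
        $sign, $exp, oct("0b$exp"), $mant;

    # Subtracting the bias of 127 recovers the true exponent (1 here,
    # since 3.14 is about 1.57 * 2**1).
    printf "unbiased exponent: %d\n", oct("0b$exp") - 127;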
Sitaram Bettadpur · Scientist and Educator · Jul 15
Related: How long would a computer take to identify all numbers between 0 and a large number, say trillions, perfectly divisible by 2 (including decimal numbers)?

No time at all. First, I assume that by decimal numbers you mean real numbers. The idea of being perfectly divisible by 2 does not apply to real numbers: all real numbers are perfectly divisible by 2. So what you really need is all whole numbers divisible by 2. That is simple: it is just every other number. A simple loop which will hardly take any time!

Aleš Mihev · M.Sc. in Computer Science, University of Ljubljana · 1y
Related: What would be the value of a decimal digit if we use an infinitely large amount of memory to represent it?

Unfortunately, no computer really has infinitely large memory. However, the Turing machine does have an infinite tape, and therefore also infinite memory. This means that the Turing machine can represent all possible integers, as well as all possible fractions (rational numbers). It can also execute algorithms (functions) over arbitrary integers or fractions. However, even the Turing machine cannot represent all real numbers, because they form an uncountable set. For the same reason, the Turing machine cannot solve every possible problem either.

Joe Zbiciak · developed practical algorithms actually used in production · 4y
Related: How many bytes does it take to store the number 4^9 · 2^14?

It depends on what representation you use. Also, it depends on how many bits are in a byte on your system. It's generally 8 these days, but since we're talking encodings and representations, let's be explicit; that's what proper engineering specs are all about.

Now let's evaluate your expression: 4^9 · 2^14 = 2^18 · 2^14 = 2^32.

Logarithmic Representation? Since this is a pure power of 2, you could just store the power of 2 itself. That works if you only need to represent numbers of the form 2^n. It takes 6 bits, assuming you store the exponent n in an unsigned, non-biased, positionally weighted binary representation.

It's Log, Log… Of course, 32 = 2^5. If you only need to represent numbers of the form 2^(2^n), then you could encode this in just 3 bits, again assuming you store n in an unsigned, non-biased, positionally weighted binary format.

Merrily We Float A Long? You could also use a minifloat representation and get more flexibility. Binary floating point representations typically can express powers of 2 exactly, and so any small floating point representation that has enough exponent reach will do. It could be as small as one 8-bit byte. This just extends my earlier idea of storing the exponent: binary floating point representations start with a signed exponent, and add an exponent bias, sign bit, and significand. Can you go smaller though?

Fit (in a) Bit? The absolute minimum representation that would distinguish 2^32 from some other value is one bit: assign 0 to the other value and 1 to 2^32, or vice versa. It doesn't really matter.

Stringing A Long? And there are yet other ways. For example, we could store the value symbolically as an expression in ASCII. The string "4^9*2^14" requires 8 bytes. You might need additional space to encode the length; or maybe you abuse the MSB (since ASCII is a 7-bit code) to denote end of string.

Punchline: to answer your question properly, you need to specify the representation.

P.S. "A Long" is not a typo. It's a pun.
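To make the minifloat idea concrete, here is a sketch of an 8-bit decoder (the field split, the bias convention, and the example byte are illustrative choices, not from the answer; denormals and the usual Inf/NaN encodings are ignored):

    use strict;
    use warnings;

    # Decode an 8-bit minifloat with a given exponent width; the bias is
    # the usual 2**(ebits-1) - 1. Normal numbers only, for brevity.
    sub minifloat {
        my ( $byte, $ebits ) = @_;
        my $mbits = 7 - $ebits;
        my $sign  = ( $byte >> 7 ) & 1;
        my $exp   = ( $byte >> $mbits ) & ( ( 1 << $ebits ) - 1 );
        my $mant  = $byte & ( ( 1 << $mbits ) - 1 );
        my $val   = ( 1 + $mant / ( 1 << $mbits ) )
                  * 2**( $exp - ( 2**( $ebits - 1 ) - 1 ) );
        return $sign ? -$val : $val;
    }

    # With 6 exponent bits (bias 31), the stored field 63 reaches 2**32:
    printf "%.0f\n", minifloat( 0b0_111111_0, 6 );    # 4294967296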
Bob Beechey · former Lecturer, Maths & Computing at UNITEC (2000–2007) · 1y
Related: What would be the value of a decimal digit if we use an infinitely large amount of memory to represent it?

I assume you mean a decimal integer, not a decimal "digit", which only has the possible values 0–9. If you use n bits of memory, and the integer is unsigned, then the maximum storable number is 2^n − 1. So, with an infinite number of bits, you have 2^∞ − 1. So the answer to your question is that the number is undefined. Or you might say…

Michael Bauers · Senior Principal Software Engineer · 2y
Related: Why is it difficult to store decimal numbers in computer memory?

But it's not. It's trivial. I can't state that strongly enough: IT'S TRIVIAL TO STORE DECIMAL NUMBERS IN COMPUTER MEMORY. Binary-coded decimal is the usual method.

I am not saying it's trivial to do arithmetic on them if the CPU doesn't provide BCD arithmetic. In general, CPUs do arithmetic in binary, assuming binary inputs and producing binary outputs. There have been CPUs which support BCD arithmetic, though. I think x86 has limited support for it, but I guess it doesn't work in 64-bit mode or something (I don't know the details). If you really needed decimal arithmetic, e.g. arithmetic on fixed-point decimals for financial software, you might need to write or acquire a library to do so.

As for why CPUs generally work in binary, it's because the entire CPU is based around binary logic, because the logic is built from transistors operating as on/off switches.
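As an example of such a decimal-arithmetic library in Perl, the core Math::BigFloat module avoids the binary rounding that makes values like 0.1 awkward in hardware floating point (a sketch, not from the answer):

    use strict;
    use warnings;
    use Math::BigFloat;    # core arbitrary-precision decimal arithmetic

    # Native binary floating point cannot represent 0.1 exactly ...
    printf "binary double:  %.20f\n", 0.1 + 0.2;    # 0.30000000000000004441

    # ... while the decimal library keeps the arithmetic exact.
    my $sum = Math::BigFloat->new('0.1') + Math::BigFloat->new('0.2');
    print  "Math::BigFloat: $sum\n";                # 0.3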
Rob Lion · mechanical engineer with wide-ranging interests · 8y
Related: If hexadecimal is used to store a 100-digit numerical sequence (example: 482482320) on a computer hard drive, how many bytes would it occupy in truth?

It depends on how it is encoded and stored by the software handling the value. Throughout, I'll assume 8 bits per byte, though historically there were computer systems where this was not true.

A 100-digit decimal value stored as ASCII text (8 bits per character) would take up 100 bytes of memory. That same value stored as Unicode UTF-32 text (32 bits per character) would take up 400 bytes. The same value stored as packed binary-coded decimal (BCD, 4 bits per decimal digit) would take up 50 bytes. The same 100-digit decimal value converted to a hexadecimal representation would have at most ⌈100 · log 10 / log 16⌉ = 84 hexadecimal digits; stored at 4 bits per hexadecimal digit, that would take up 42 bytes. Beyond that, if the number were not truly random, it could possibly be losslessly compressed to an even smaller number of bytes.

Now, when it comes to storage on a hard drive, the disk's sector size and the operating system's file-system cluster size come into play. For a long time, the standard sector size for hard disk drives was 512 bytes; even a 1-byte file would take up an entire 512-byte sector. With ever larger disks available, this has been increased with Advanced Format to 4 KiB (4096 bytes) per sector, which reduces efficiency for storage of small files but has other advantages in operation. The NTFS file system developed by Microsoft uses 4 KiB clusters by default, but supports up to 64 KiB clusters, which may be needed to handle addressing on large storage volumes. A single file stored on the disk will take up at least one cluster. Of course, these latter two factors may be mitigated if the file being saved consists of more data than just this single 100-digit number.
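The ⌈100 · log 10 / log 16⌉ bound is easy to check with the core Math::BigInt module (the 100-digit value below is an arbitrary repetition of the question's example):

    use strict;
    use warnings;
    use POSIX qw(ceil);
    use Math::BigInt;

    # An arbitrary 100-digit decimal number built from the question's sample.
    my $dec = '482482320' x 11 . '4';                   # 9*11 + 1 = 100 digits
    ( my $hex = Math::BigInt->new($dec)->as_hex ) =~ s/^0x//;

    printf "decimal digits: %d\n", length $dec;         # 100
    printf "hex digits:     %d\n", length $hex;         # 83 for this value
    printf "at-most bound:  %d\n", ceil( 100 * log(10) / log(16) );   # 84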
Mike Speciner · SB from Massachusetts Institute of Technology · 5y
Related: How are decimal numbers with points stored in computer memory?

If you're just asking how they are stored, they can be stored directly as text, or they can be converted to something that is easier for the computer to compute with. To store as text, each character is encoded as a short sequence of bits; the encoding could be ASCII, Unicode, or perhaps some more-efficient encoding specific to numeric data (see Binary-coded decimal - Wikipedia, for example). Furthermore, a compression scheme could be used to reduce the storage required.

For computation, however, numbers tend to be converted to floating point, a specific encoding into a fixed number of bits, one of which is a "sign bit", some of which are "exponent bits", and some of which are "mantissa bits"; similar to scientific notation, but in binary. Most computers have built-in operations to compute with floating point numbers. Typical (IEEE double, 64-bit) floating point representations have about 17 decimal digits of precision and a range from about 10^−300 to 10^300. Other representations are possible.

Philips George John · Software Engineer, studies Computer Science · 9y
Related: How many bytes are required to store the numbers "16" and "666"?

The simplest answer requires two assumptions. Assume the numbers are represented using the usual binary encoding for positive integers (not binary-coded decimal (BCD), two's complement to accommodate negative numbers, etc.). Also assume that, while storing a number, we use only the lowest number of bits required to represent it (the former assumption is usually true, but the latter is usually false!).

Using 2 binary digits, we can represent the positive integers up to 2^2 − 1 = 3 (namely 0, 1, 10, 11), but we cannot represent 2^2 = 4 or anything greater.

Using 3 binary digits, we can represent the positive integers up to 2^3 − 1 = 7 (namely 0, 1, 10, 11, 100, 101, 110, 111), but we cannot represent anything greater than or equal to 2^3 = 8.

So, to represent any positive integer less than 2^n, we need at most n bits. Also, we only need all n bits if our number is greater than or equal to 2^(n−1) (otherwise, we could have represented it using just n − 1 bits).
Putting the above two statements together, we need n bits to represent a positive integer N in binary if 2^(n−1) ≤ N < 2^n.

For the numbers given in the question: 16 satisfies (2^4 = 16) ≤ 16 < (2^5 = 32), so we need 5 bits; 666 satisfies (2^9 = 512) ≤ 666 < (2^10 = 1024), so we need 10 bits.

Put another way, the number of bits required to represent a number N in binary is the smallest integer exponent n with 2^n > N. Mathematically, 2^n > N ⟺ n > log2 N, so the exponent we want is the first integer greater than log2 N, which is given by n = ⌊log2 N⌋ + 1 (where ⌊x⌋ is the floor function).

Using this equation: the number of bits needed to represent 16 is ⌊log2 16⌋ + 1 = 4 + 1 = 5, and the number of bits needed to represent 666 is ⌊log2 666⌋ + 1 = ⌊9.3793…⌋ + 1 = 9 + 1 = 10.

At the beginning of the answer, I mentioned BCD and two's complement representations. For storing integers using BCD, the number of bits required is 4 × the number of digits in the number's usual decimal (base 10) representation, where "usual" means without additional leading zeroes (i.e., 123 as opposed to 0123). Two's complement representation (if considering negative numbers as well) adds one additional bit (the sign bit).

Also, the assumption that whatever we calculate will be the actual memory space used is almost always false: space is allocated in multiples of 8 bits (1 byte) at least. A 5-bit number might actually be stored using 8 bits (the unsigned char type in C) or even more, and an 18-bit number using 32 bits (the long data type in C), etc.
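The ⌊log2 N⌋ + 1 formula agrees with the length of the number's binary rendering, which gives a quick check (a sketch; the small epsilon only guards against floating-point rounding in log):

    use strict;
    use warnings;
    use POSIX qw(floor);

    for my $n ( 16, 666 ) {
        my $by_formula = floor( log($n) / log(2) + 1e-9 ) + 1;
        my $by_string  = length sprintf '%b', $n;   # digits in binary form
        printf "%3d needs %2d bits (formula) / %2d bits (sprintf '%%b')\n",
            $n, $by_formula, $by_string;
    }
    # 16 -> 5 bits, 666 -> 10 bits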
246
Identification of the phd gene cluster responsible for phenylpropanoid utilization in Corynebacterium glutamicum
===============

Article (publisher preview available) · February 2016 · Applied Microbiology and Biotechnology 100(4) · DOI: 10.1007/s00253-015-7165-1

Authors: Nicolai Kallscheuer, Michael Vogt (Forschungszentrum Jülich), Jannick Kappelmann, Karin Krumbach, Stephan Noack, Michael Bott, Jan Marienhagen

Citations (90) · References (44) · Figures (4)

Abstract

Phenylpropanoids as abundant, lignin-derived compounds represent sustainable feedstocks for biotechnological production processes. We found that the biotechnologically important soil bacterium Corynebacterium glutamicum is able to grow on phenylpropanoids such as p-coumaric acid, ferulic acid, caffeic acid, and 3-(4-hydroxyphenyl)propionic acid as sole carbon and energy sources. Global gene expression analyses identified a gene cluster (cg0340-cg0341 and cg0344-cg0347), which showed increased transcription levels in response to phenylpropanoids. The gene cg0340 (designated phdT) encodes for a putative transporter protein, whereas cg0341 and cg0344-cg0347 (phdA-E) encode enzymes involved in the β-oxidation of phenylpropanoids. The phd gene cluster is transcriptionally controlled by a MarR-type repressor encoded by cg0343 (phdR). Cultivation experiments conducted with C. glutamicum strains carrying single-gene deletions showed that loss of phdA, phdB, phdC, or phdE abolished growth of C. glutamicum with all phenylpropanoid substrates tested. The deletion of phdD (encoding for a putative acyl-CoA dehydrogenase) additionally abolished growth with the α,β-saturated phenylpropanoid 3-(4-hydroxyphenyl)propionic acid. However, the observed growth defect of all constructed single-gene deletion strains could be abolished through plasmid-borne expression of the respective genes. These results and the intracellular accumulation of pathway intermediates determined via LC-ESI-MS/MS in single-gene deletion mutants showed that the phd gene cluster encodes for a CoA-dependent, β-oxidative deacetylation pathway, which is essential for the utilization of phenylpropanoids in C. glutamicum.

Figure captions:
Schematic representation of two chain-shortening mechanisms during phenylpropanoid degradation. The degradation of phenylpropanoids starts with the cleavage of a C2 unit from the aliphatic side chain for conversion into benzoic acid derivatives. This can be achieved via a retro-aldol pathway or a β-oxidative pathway. As both mechanisms are CoA-dependent and yield acetyl-CoA as product, they depend on the same initial step, namely a CoA ligation reaction leading to the corresponding phenylpropanoid CoA thioesters. If not already present, an α,β-double bond is subsequently introduced by acyl-CoA dehydrogenase activity. Hydration of the aliphatic double bond of the thioesters yields 3-hydroxyacyl-CoA intermediates, which are common to both mechanisms. In the retro-aldol pathway (upper branch), the 3-hydroxyacyl-CoA intermediate is cleaved into a benzaldehyde derivative and acetyl-CoA; the benzaldehyde derivative is prepared for ring cleavage by oxidation of the aldehyde group to a carboxyl group. In the β-oxidative pathway (lower branch), the hydroxy group of the 3-hydroxyacyl-CoA intermediate is oxidized to a keto group. Subsequently, a thiolytic cleavage leads to the formation of a benzoyl-CoA derivative and acetyl-CoA or, alternatively, a hydrolytic cleavage yields a benzoate derivative and acetyl-CoA (only the hydrolytic cleavage is depicted).

Phenylpropanoids tested as sole carbon and energy sources for C. glutamicum. Phenylpropanoids vary in the hydroxylation and methoxylation patterns of the aromatic ring and in the saturation of the aliphatic side chain in the α,β-position. Abbreviation: 4-HPP = 3-(4-hydroxyphenyl)propionic acid. For the other phenylpropanoids, the respective trivial names are given.

Growth of C. glutamicum with phenylpropanoids as sole carbon and energy sources. Cultivations were performed in CGXII-defined medium with p-coumaric acid (black square), ferulic acid (white square), caffeic acid (black triangle), 4-HPP (inverted white triangle), and cinnamic acid (black diamond) as sole carbon and energy sources. Starting with an initial substrate concentration of 5 mM, phenylpropanoids were fed every 2 h during the course of the cultivation to avoid inhibiting substrate levels of >10 mM (5 mM after 2 h, 7.5 mM after 4 and 6 h, and 10 mM after 8 and 10 h). The data represent mean values and standard deviations obtained from three independent cultivations.

Proposed phenylpropanoid degradation (Phd) pathway in C. glutamicum. Schematic overview of (a) the genetic organization of the identified phd gene cluster and (b) the proposed CoA-dependent, β-oxidative deacetylation pathway for degradation of phenylpropanoids in C. glutamicum. Presumably involved co-factors are also indicated. PhdA = acyl:CoA ligase, encoded by cg0341 (phdA); PhdD = acyl-CoA dehydrogenase, encoded by cg0346 (phdD); PhdE = enoyl-CoA hydratase, encoded by cg0347 (phdE); PhdB = 3-hydroxyacyl-CoA dehydrogenase, encoded by cg0344 (phdB); PhdC = 3-oxoacyl-CoA ketohydrolase (acetyl-CoA forming), encoded by cg0345 (phdC); phdT = transporter gene (cg0340); and phdR = MarR-type transcriptional repressor gene (cg0343). (c) Degradation pathways of the five tested phenylpropanoids to yield succinyl-CoA and acetyl-CoA in C. glutamicum. Gray arrows represent the β-oxidative deacetylation pathway as shown in (a). Abbreviations: 4-HPP, 3-(4-hydroxyphenyl)propionic acid; 4-HBA, 4-hydroxybenzoic acid.
Publisher preview (first page, provided by Springer Nature):

Applied Microbial and Cell Physiology

Identification of the phd gene cluster responsible for phenylpropanoid utilization in Corynebacterium glutamicum

Nicolai Kallscheuer · Michael Vogt · Jannick Kappelmann · Karin Krumbach · Stephan Noack · Michael Bott · Jan Marienhagen

Received: 16 October 2015 / Accepted: 7 November 2015 / Published online: 26 November 2015. © Springer-Verlag Berlin Heidelberg 2015

Appl Microbiol Biotechnol (2016) 100:1871–1881. DOI 10.1007/s00253-015-7165-1

Nicolai Kallscheuer and Michael Vogt contributed equally to this work. Electronic supplementary material: the online version of this article (doi:10.1007/s00253-015-7165-1) contains supplementary material, which is available to authorized users. Corresponding author: Jan Marienhagen, Institute of Bio- and Geosciences, IBG-1: Biotechnology, Forschungszentrum Jülich, 52425 Jülich, Germany.

Keywords: Phenylpropanoids · Aromatics · Lignin · Corynebacterium glutamicum · Degradation pathways · β-oxidation
Introduction

Microorganisms follow various biodegradative metabolic strategies for the utilization of aromatic compounds and thus play a crucial role in biogeochemical cycles (Diaz 2004). One class of abundant aromatic compounds is the group of lignin-derived phenylpropanoids, which are synthesized from the amino acids L-phenylalanine and L-tyrosine in plants. These compounds are structurally characterized by a single aromatic phenyl group and a three-carbon propene tail (Marienhagen and Bott 2013). Typically, phenylpropanoid degradation is initiated by shortening of the aliphatic side chain through cleavage of a C2 unit (Campillo et al. 2014). This step is necessary to convert phenylpropanoids into benzoic acid derivatives, e.g., protocatechuate, before channeling these compounds into the microbial carbon metabolism, e.g., via the β-ketoadipate pathway or the gentisate pathway. Two mechanisms for this C2-shortening reaction are described for bacteria in the literature, a retro-aldol pathway and a β-oxidative pathway (Fig. 1). In both pathways, an α,β-double bond is introduced by an acyl-CoA dehydrogenase activity after conversion of the phenylpropanoid to the corresponding thioester. As an example of a retro-aldol mechanism, a coupled …

Supplementary resource: Supplementary Material Data (November 2015). Nicolai Kallscheuer · Michael Vogt · Jannick Kappelmann · Karin Krumbach · Jan Marienhagen

Citations (90)

... In this context, it is astonishing that the potential of C. glutamicum for producing aromatic compounds beyond aromatic amino acids is relatively underutilized. The described high resistance to increased concentrations of cytotoxic aromatic compounds such as hydroxybenzoic acids or phenylpropanoids would be a significant advantage of C. glutamicum-based cell factories for aromatics production (Kallscheuer et al. 2016a; Kitade et al. 2018). For instance, in direct comparison to Pseudomonas putida, another robust biotechnological workhorse, C. glutamicum grows faster and reaches a higher biomass yield when cultivated in the presence of 10 g L−1 anthranilate (Kuepper et al. 2020). ...

... This tolerance of C. glutamicum is partly attributed to the characteristic outer membrane rich in mycolic acids, which acts as a permeability barrier (Marchand et al. 2012). Additionally, the extensive catabolic network for the degradation of aromatic compounds in C. glutamicum is well understood, and can easily be manipulated to enable the accumulation of valuable pathway intermediates or to prevent the degradation of precursor molecules or target compounds (Kallscheuer et al. 2016a). ...

... The first significant step in engineering C. glutamicum for polyphenol production was achieved in 2016 with the construction of a platform strain featuring the deletion of four gene clusters comprising 21 genes involved in the catabolism of aromatic compounds (Kallscheuer et al. 2016b).
Among these were four genes of the phd gene cluster discovered shortly before, which plays a crucial role in the degradation of phenylpropanoids via a CoA-dependent β-oxidative deacetylation pathway (Kallscheuer et al. 2016a). To facilitate polyphenol biosynthesis, codon-optimized, plant-derived genes were heterologously expressed in this C. glutamicum variant. ...

Engineering of Corynebacterium glutamicum for the synthesis of aromatic compounds
Article · Full-text available · May 2025 · Appl Microbiol Biotechnol · Jan Marienhagen

A significant proportion of industrially important small molecules are aromatic, and the majority of these compounds are produced chemically, relying heavily on fossil resources. In bacteria and plants, the shikimate pathway and related biosynthetic routes serve as the primary sources of aromatic compounds. Microbial cell factories, which are poised to play a central role in the emerging bio-based economy, provide a sustainable alternative for producing commercially valuable aromatics from renewable resources. Corynebacterium glutamicum, already established as an industrial workhorse for the large-scale production of various amino acids, can be engineered to overproduce aromatic compounds derived from the shikimate pathway. Furthermore, the functional integration of heterologous or synthetic pathways enables access to high-value natural products, such as plant polyphenols and other polyketides. This review highlights recent advancements in the metabolic engineering of C. glutamicum for the sustainable production of these aromatic compounds. Key points: • C. glutamicum's high tolerance to aromatic compounds is key to aromatics production. • Detailed physiological insights enable access to shikimate pathway-derived products. • Diverse plant (poly)phenols and other aromatic polyketides can be produced.

... Furthermore, MB001 expressed 30% more heterologous model protein than its parental strain. Strain MB101 has also been used as a cell factory to produce amino acids (Eberhardt et al. 2014) and phenylpropanoids (Kallscheuer et al. 2015, 2016; Kallscheuer and Marienhagen 2018), among others. Irrelevant gene clusters have been deleted in strain MB001, resulting in strain C1, with a genome reduction of approx. ...

Reprogramming microbial cells to improve the production of biopharmaceuticals and fine chemicals
Chapter · Apr 2025 · Alvaro R. Lara · Marcos López-Pérez · Francisco J. Fernández

... Besides chemical instability, enzymatic degradation of 4-OH-phenylpyruvate may be possible, but has not yet been described for C. glutamicum. However, different phenylpropanoids, such as the structurally similar compound 4-OH-phenylpropionic acid, are degraded by C. glutamicum via enzymes encoded in the phd cluster. Nevertheless, chemical instability of 4-OH-phenylpyruvate necessitates its fast decarboxylation at low substrate concentrations. ...

Two routes for tyrosol production by metabolic engineering of Corynebacterium glutamicum
Article · Full-text available · Apr 2025 · Biotechnol Biofuels · Nora Junker · Sara-Sophie Poethe · Volker F. Wendisch

Background: The phenolic compound tyrosol is widely used in the pharmaceutical industry, owing to its beneficial effects on human health and its use as a precursor for key pharmaceuticals, including β1-receptor blockers. Tyrosol can be found in olive oil, but despite its natural biosynthesis in plants, low extraction efficiencies render microbial production a more viable alternative.
Results: Here, we engineered the l-tyrosine overproducing Corynebacterium glutamicum strain AROM3 for the de novo production of tyrosol. Two routes were established and compared: one via 4-OH-phenylpyruvate as intermediate and the other via tyramine. We initially expected the first route to require heterologous expression of a prephenate dehydrogenase gene, given that C. glutamicum lacks this enzymatic function. However, heterologous expression of ARO10 from Saccharomyces cerevisiae (ARO10Sc), which encodes a phenylpyruvate decarboxylase, was sufficient to establish tyrosol production in strain AROM3. We identified that 4-OH-phenylpyruvate is synthesized from l-tyrosine by native aminotransferases, subsequently decarboxylated by Aro10Sc, and reduced to tyrosol by native alcohol dehydrogenases, leading to a titer of 9.4 ± 1.1 mM (1.30 ± 0.15 g/L). We identified the furfural dehydrogenase FudC as the major enzyme involved in this pathway, as its gene deletion reduced tyrosol production by 75%. Given the instability of 4-OH-phenylpyruvate, the synthesis of tyrosol via the stable intermediate tyramine was pursued via the second route. Decarboxylation of l-tyrosine followed by oxidative deamination was accomplished by overexpression of the l-tyrosine decarboxylase gene tdc from Levilactobacillus brevis (tdcLb) and the tyramine oxidase gene tyo from Kocuria rhizophila (tyoKr). Using this route, tyrosol production was increased by 44% compared to the route via 4-OH-phenylpyruvate. With a division-of-labor approach, co-cultivating l-tyrosine producing strains that either express tdcLb or tyoKr, the highest titer of 14.1 ± 0.3 mM (1.95 ± 0.04 g/L) was achieved.

Conclusions: This study demonstrates the potential of endotoxin-free C. glutamicum as a production host for the l-tyrosine-derived product tyrosol. Due to its l-arogenate pathway for l-tyrosine synthesis, the unstable 4-OH-phenylpyruvate could be excluded as an intermediate in the Tdc–Tyo pathway, outcompeting the most often utilized production route via phenylpyruvate decarboxylases.

Genetically encoded biosensor enabled mining, characterisation and engineering of aromatic acid MFS transporters
Preprint · Full-text available · Jun 2025 · Philip Le Roy · Micaela Chacon · Neil Dixon

Active transport of chemical species across the cell membrane represents a critical biological and biotechnological function, allowing the cell to selectively import compounds of nutritional value whilst exporting potentially toxic compounds. Major facilitator superfamily (MFS) transporters represent a ubiquitous class able to uptake and export an array of different chemical species. When designing biosynthetic pathways within microbial hosts, for production or remediation, transport is often critical to the efficiency of the resulting engineered strain. However, transport is a commonly neglected node for characterisation and engineering, given the difficulties in producing, purifying and assaying membrane transport proteins outside of their native environment. Here, using syntenic analysis and genetically encoded biosensors, a library of MFS transporters was screened for the ability to uptake the aromatic acids protocatechuic acid and terephthalic acid. The structure–activity relationships of the corresponding transporters, PcaK and TphK, were then assessed with a library of aromatic acid effectors.
Finally, the feasibility of protein engineering was assessed by the creation of chimeric MFS transporters, revealing a degree of effector-recognition plasticity and the modularity of core transmembrane domains. This study provides a library of validated MFS transporters and demonstrates the value of employing genetically encoded biosensors in the characterisation and engineering of this important transport function.

Constructing metabolic pathway of lignin monomers and their derivatives based on metabolic recombination models and yield models
Article · Jun 2025 · Bioresource Technol · Ruijian Shao · Zaixin Huang · Su Sun · Fuying Ma

Expediting genome synthesis of Corynebacterium glutamicum with an artificial chromosome vector
Article · Mar 2025 · Zhanhua Zhang · Peixiong Hong · Zebin Li · Zhanglin Lin

Pinosylvin: A Multifunctional Stilbenoid with Antimicrobial, Antioxidant, and Anti-Inflammatory Potential
Article · Full-text available · Mar 2025 · Curr Issues Mol Biol · Argyrios Periferakis · Aristodemos-Theodoros Periferakis · Lamprini Troumpata · Cristian Scheau

Stilbenoids are a category of plant compounds exhibiting notable health-related benefits. After resveratrol, perhaps the most well-known stilbenoid is pinosylvin, a major phytochemical constituent of most plants characterised by the pine spines among others. Pinosylvin and its derivatives have been found to exert potent antibacterial and antifungal effects, while their antiparasitic and antiviral properties are still a subject of ongoing research. The antioxidant properties of pinosylvin are mostly based on its scavenging of free radicals, inhibition of iNOS and protein kinase C, and promotion of HO-1 expression. Its anti-inflammatory properties are based on a variety of mechanisms, such as COX-2 inhibition, NF-κB and TRPA1 activation inhibition, and reduction in IL-6 levels. Its anticancer properties are partly associated with its antioxidant and anti-inflammatory potential, although a number of other mechanisms are described, such as apoptosis induction and matrix metalloproteinase inhibition. A couple of experiments have also suggested a neuroprotective potential. A multitude of ethnomedical and ethnobotanical effects of pinosylvin-containing plants are reported, like antimicrobial, antioxidant, anti-inflammatory, hepatoprotective, and prokinetic actions; many of these are corroborated by recent research. The advent of novel methods of artificial pinosylvin synthesis may facilitate its mass production and adoption as a medical compound. Finally, pinosylvin may be a tool in promoting environmentally friendly pesticide and insecticide policies and be used in land remediation schemes.

Characterization of bacterial transporters involved in the uptake of lignin-derived aromatic compounds
Chapter · Feb 2025 · Masaya Fujita · Naofumi Kamimura · Eiji Masai

Sustainable Production of Aromatic Chemicals from Lignin using Enzymes and Engineered Microbes
Article · Full-text available · Nov 2024 · Timothy D. H. Bugg · Victoria Sodré

Lignin is an aromatic biopolymer found in plant cell walls and is the most abundant source of renewable aromatic carbon in the biosphere. Hence there is considerable interest in the conversion of lignin, either derived from agricultural waste or produced as a byproduct of pulp/paper manufacture, into high-value chemicals. Although lignin is rather inert, due to the presence of ether C-O and C-C linkages, several microbes are able to degrade lignin.
This review will introduce these microbes and the enzymes that they use to degrade lignin, and will describe recent studies on metabolic engineering that can generate high-value chemicals from lignin bioconversion. Catabolic pathways for the degradation of lignin fragments are introduced, along with case studies in which these pathways have been engineered by gene knockout/insertion to generate bioproducts of interest as monomers for bioplastic synthesis or as aroma chemicals. Life cycle analysis of lignin bioconversion processes is also discussed.

Metabolic engineering of Corynebacterium glutamicum: Unlocking its potential as a key cell factory platform for organic acid production
Article · Nov 2024 · Biotechnology Advances · Li Minghou, Han Li, Xue Zhang, Anthony J. Sinskey

The activity of CouR, a MarR family transcriptional regulator, is modulated through a novel molecular mechanism
Article · Full-text available · Sep 2015 · Hiroshi Otani, Peter J. Stogios, Xiaohui Xu, Lindsay D. Eltis
CouR, a MarR-type transcriptional repressor, regulates the cou genes, encoding p-hydroxycinnamate catabolism in the soil bacterium Rhodococcus jostii RHA1. The CouR dimer bound two molecules of the catabolite p-coumaroyl-CoA (Kd = 11 ± 1 μM). The presence of p-coumaroyl-CoA, but neither p-coumarate nor CoASH, abrogated CouR's binding to its operator DNA in vitro. The crystal structures of ligand-free CouR and its p-coumaroyl-CoA-bound form showed no significant conformational differences, in contrast to other MarR regulators. The CouR–p-coumaroyl-CoA structure revealed two ligand molecules bound to the CouR dimer, with their phenolic moieties occupying equivalent hydrophobic pockets in each protomer and their CoA moieties adopting non-equivalent positions to mask the regulator's predicted DNA-binding surface. More specifically, the CoA phosphates formed salt bridges with the predicted DNA-binding residues Arg36 and Arg38, changing the overall charge of the DNA-binding surface. The substitution of either arginine with alanine completely abrogated the ability of CouR to bind DNA. By contrast, the R36A/R38A double variant retained a relatively high affinity for p-coumaroyl-CoA (Kd = 89 ± 6 μM). Together, our data point to a novel mechanism of action in which the ligand abrogates the repressor's ability to bind DNA by steric occlusion of key DNA-binding residues and charge repulsion of the DNA backbone.
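The dissociation constants quoted above translate into ligand occupancy through the standard 1:1 binding isotherm, fraction bound = [L]/(Kd + [L]). The sketch below is illustrative only: it assumes free ligand ≈ total ligand and ignores any cooperativity between the two binding sites of the CouR dimer, and the concentrations are hypothetical:

```python
def fraction_bound(ligand_uM: float, kd_uM: float = 11.0) -> float:
    """Occupancy for simple 1:1 binding (free ligand assumed ~ total ligand)."""
    return ligand_uM / (kd_uM + ligand_uM)

# Hypothetical p-coumaroyl-CoA concentrations, in uM
for conc in (1.0, 11.0, 100.0):
    print(f"{conc:>5.1f} uM -> {fraction_bound(conc):.0%} of sites occupied")
```

At the Kd itself (11 μM) half the sites are occupied, which is what makes Kd a convenient summary of affinity when comparing the wild type with the weaker-binding R36A/R38A variant.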
Characterization of p-Hydroxycinnamate Catabolism in a Soil Actinobacterium
Article · Full-text available · Nov 2014 · Journal of Bacteriology · Hiroshi Otani, Young-Eun Lee, Israël Casabon, Lindsay D. Eltis
p-Hydroxycinnamates, such as ferulate and p-coumarate, are components of plant cell walls and have a number of commercial applications. Rhodococcus jostii RHA1 (RHA1) catabolizes ferulate via vanillate and the β-ketoadipate pathway. Here, we used transcriptomics to identify genes in RHA1 that are up-regulated during growth on ferulate versus benzoate. The up-regulated genes included three transcriptional units predicted to encode the uptake and β-oxidative deacetylation of p-hydroxycinnamates: couHTL, couNOM and couR. Neither ΔcouL nor ΔcouO mutants grew on p-hydroxycinnamates, but both grew on vanillate. Among several p-hydroxycinnamates, CouL catalyzed the thioesterification of p-coumarate and caffeate most efficiently (kcat/KM ≈ 400 mM⁻¹ s⁻¹). p-Coumarate was also RHA1's preferred growth substrate, suggesting that CouL is a determinant of the pathway's specificity. CouL did not catalyze the activation of sinapate, similar to two p-coumaric acid:CoA ligases from plants, and contains the same bulged loop that helps determine substrate specificity in the plant homologues. The couO mutant accumulated 4-hydroxy-3-methoxyphenyl-β-ketopropionate in the culture supernatant when incubated with ferulate, supporting β-oxidative deacetylation. This phenotype was not complemented with a D257N variant of CouO, consistent with the predicted role of Asp257 as a metal ligand in this amidohydrolase superfamily member. These data suggest that CouO functionally replaces the β-ketothiolase and acyl-CoA thioesterase that occur in canonical β-oxidative pathways. Finally, the transcriptomics data suggest the involvement of two distinct formaldehyde detoxification pathways in vanillate catabolism and identify a eugenol catabolic pathway. This study augments our understanding of the bacterial catabolism of aromatics from renewable feedstocks.

Analysis of Hydroxycinnamic Acid Degradation in Agrobacterium fabrum Reveals a Coenzyme A-Dependent, Beta-Oxidative Deacetylation Pathway
Article · Full-text available · Mar 2014 · Applied and Environmental Microbiology · Tony Campillo, Sébastien Renoud, Isabelle Kerzaon, Florence Hommais
The soil- and rhizosphere-inhabiting bacterium Agrobacterium fabrum (genomospecies G8 of the Agrobacterium tumefaciens species complex) is known to have species-specific genes involved in ferulic acid degradation. Here, we characterized, by genetic and analytical means, intermediates of degradation as feruloyl coenzyme A (feruloyl-CoA), 4-hydroxy-3-methoxyphenyl-β-hydroxypropionyl-CoA, 4-hydroxy-3-methoxyphenyl-β-ketopropionyl-CoA, vanillic acid, and protocatechuic acid. The genes atu1416, atu1417, and atu1420 have been experimentally shown to be necessary for the degradation of ferulic acid. Moreover, the genes atu1415 and atu1421 have been experimentally demonstrated to be essential for this degradation and are proposed to encode a phenylhydroxypropionyl-CoA dehydrogenase and a 4-hydroxy-3-methoxyphenyl-β-ketopropionic acid (HMPKP)-CoA β-keto-thiolase, respectively. We thus demonstrated that the A. fabrum hydroxycinnamic acid degradation pathway is an original coenzyme A-dependent β-oxidative deacetylation that can also transform p-coumaric and caffeic acids. Finally, we showed that this pathway enables the metabolism of toxic compounds from plants and their use for growth, likely providing the species an ecological advantage in hydroxycinnamic-rich environments, such as plant roots or decaying plant materials.

Handbook of Corynebacterium glutamicum
Book · Jan 2005 · L. Eggeling, M. Bott
One of the most important organisms in biotechnology, Corynebacterium glutamicum is currently used to produce 2 million tons of amino acids per year for a rapidly expanding market. Until now, research and information have been scattered among individual papers which are often difficult to locate in a timely manner. As the first complete compilation of major findings, Handbook of Corynebacterium glutamicum is a comprehensive source of scientific and technical information required for the understanding and manipulation of C. glutamicum. The book summarizes the current knowledge in the field of C. glutamicum research from its discovery in 1957 through the most recent studies at the genomic and systemic level, and provides a basis for future work. Written by experts from industry and academia, chapters cover all major aspects of C. glutamicum, including physiology, biochemistry, genetics, and industrial applications.
Just as C. glutamicum has proven its profitability in industry and research, this book will demonstrate its value to scientists striving to understand and develop even more efficient producer strains of this promising microorganism.

Taxonomical Studies on Glutamic Acid Producing Bacteria, Part I
Article · Jan 1965 · Ken-ichiro Takayama, Shigeo Abe, Shukuo Kinoshita
Taxonomical and comparative studies of the glutamic acid producing bacteria were undertaken, using two hundred and seven strains. The following results were obtained. (1) The organisms change their cell forms during growth, successively, to short rods, rods and ellipsoidal spheres. (2) The organisms are Gram-positive; however, negatively stained cells, which are considered to be non-viable, are observed in some samples. (3) The organisms grow well on a tellurite agar medium and form gray to black colored colonies. (4) The organisms are not thermoduric in skim milk. (5) Hydrogen sulfide is produced by all strains when they are cultured aerobically in a medium containing cystine. (6) The organisms are resistant to relatively high concentrations of sodium chloride. (7) Some differences in acid production from carbohydrates are observed among these strains; these acid productivities are found to be stable physiological characters in these microorganisms. (8) All strains require biotin for growth, but some strains are found to require thiamine or para-aminobenzoic acid in addition to biotin. (9) Some physiological characters described in this paper differ from those reported by other authors; the differences are probably due to cultivation times or to the presence of contaminants. © 1965, Japan Society for Bioscience, Biotechnology, and Agrochemistry.

Amidohydrolase Superfamily
Article · Aug 2014 · Aimin Liu, Lu Huo
The amidohydrolase superfamily is a structure-based cluster of enzymes that contain a sturdy and versatile triosephosphate isomerase (TIM)-like (β/α)8-barrel fold embracing the catalytic active site. To date, the amidohydrolase superfamily has grown into one of the largest families of enzymes, with tens of thousands of members catalysing a wide range of hydrolytic and nonhydrolytic metabolic reactions which are important in amino acid and nucleotide metabolism as well as the biodegradation of agricultural and industrial compounds. Previously, the presence of a mono- or dinuclear d-block metal cofactor in the active site was thought to be one of the main characteristics of members of this superfamily. However, new members containing trinuclear metal cofactors or no cofactor at all have recently been discovered. It has become apparent that activating a well-ordered water molecule by an active-site residue for nucleophilic attack on the organic substrate is a common mechanistic feature of all members of the superfamily. Key concepts: The amidohydrolase superfamily is one of the largest enzyme superfamilies, performing a wide array of catalytic reactions. All members of the family employ a TIM-barrel structural fold, although some members present an imperfect barrel. The vast majority, but not all, of the enzymes in this superfamily are metalloenzymes. The hallmark of the catalytic mechanism shared by these enzymes is the use of an activated water molecule to attack the organic substrate.
The catalytic centre is diverse, with one, two, three or zero metal ions.

Microbial production of amino acids and derived chemicals: Synthetic biology approaches to strain development
Article · Dec 2014 · Current Opinion in Biotechnology · Volker F. Wendisch
Amino acids are produced at the multi-million-ton scale, with fermentative production of l-glutamate and l-lysine alone estimated to amount to more than five million tons in the year 2013. Metabolic engineering constantly improves the productivity of amino acid producing strains, mainly Corynebacterium glutamicum and Escherichia coli strains. Classical mutagenesis and screening have been accelerated by combination with intracellular metabolite sensing. Synthetic biology approaches have allowed access to new carbon sources to realize a flexible feedstock concept. Moreover, new pathways for amino acid production, as well as fermentative production of non-native compounds derived from amino acids or their metabolic precursors, were developed. These include dipeptides, α,ω-diamines, α,ω-diacids, keto acids, acetylated amino acids and ω-amino acids.

Pushing product formation to its limit: Metabolic engineering of Corynebacterium glutamicum for L-leucine overproduction
Article · Dec 2013 · Metabolic Engineering · Michael Vogt, Sabine Haas, Simon Klaffl, Michael Bott
Using metabolic engineering, an efficient l-leucine production strain of Corynebacterium glutamicum was developed. In the wild type of C. glutamicum, the leuA-encoded isopropylmalate synthase (IPMS) is inhibited by low l-leucine concentrations with a Ki of 0.4 mM. We identified a feedback-resistant IPMS variant, which carries two amino acid exchanges (R529H, G532D). The corresponding leuA(fbr) gene, devoid of the attenuator region and under the control of a strong promoter, was integrated in one, two or three copies into the genome and combined with additional genomic modifications aimed at increasing l-leucine production. These modifications involved (i) deletion of the gene encoding the repressor LtbR to increase expression of leuBCD, (ii) deletion of the gene encoding the transcriptional regulator IolR to increase glucose uptake, (iii) reduction of citrate synthase activity to increase precursor supply, and (iv) introduction of a gene encoding a feedback-resistant acetohydroxyacid synthase. The production performance of the resulting strains was characterized in bioreactor cultivations. Under fed-batch conditions, the best producer strain accumulated l-leucine to levels exceeding the solubility limit of about 24 g/L. The molar product yield was 0.30 mol l-leucine per mol glucose, and the volumetric productivity was 4.3 mmol L⁻¹ h⁻¹. These values were obtained in a defined minimal medium with a prototrophic and plasmid-free strain, making this process highly interesting for industrial application.
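The leucine productivity above is reported in molar units; converting it to the mass units more common in process comparisons only needs the molar mass of l-leucine (C6H13NO2, about 131.17 g/mol). A quick sketch of that conversion:

```python
M_LEU = 131.17  # g/mol, L-leucine (C6H13NO2)

q_molar = 4.3                      # reported productivity, mmol L^-1 h^-1
q_mass = q_molar * M_LEU / 1000.0  # mmol -> g via g/mol, then /1000: g L^-1 h^-1
print(f"{q_mass:.2f} g/L/h")       # ~0.56 g/L/h
```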
Chemical and technical challenges in the analysis of central carbon metabolites by liquid-chromatography mass spectrometry
Article · Nov 2013 · David Siegel, Hjalmar Permentier, Dirk-Jan Reijngoud, Rainer Bischoff
This review deals with chemical and technical challenges in the analysis of small-molecule metabolites involved in central carbon and energy metabolism via liquid-chromatography mass-spectrometry (LC-MS). The covered analytes belong to the prominent pathways of biochemical carbon oxidation, such as glycolysis or the tricarboxylic acid cycle, and, for the most part, share unfavorable properties such as high polarity, chemical instability or metal affinity. The topic is introduced by selected examples of successful applications of metabolomics in the clinic. In the core part of the paper, the structural features of important analyte classes, such as nucleotides, coenzyme A thioesters or carboxylic acids, are linked to "problematic hotspots" along the analytical chain (sample preparation and storage, separation and detection). We discuss these hotspots from a chemical point of view, covering issues such as analyte degradation or interactions with metals and other matrix components, and propose solutions wherever available. A major notion derived from these considerations is that comprehensive carbon metabolomics inevitably requires multiple, complementary analytical approaches covering different chemical classes of metabolites.

Beyond Growth Rate 0.6: What Drives Corynebacterium glutamicum to Higher Growth Rates in Defined Medium
Article · Feb 2014 · Biotechnology and Bioengineering · Simon Unthan, Alexander Grünberger, Jan van Ooyen, Stephan Noack
In a former study we showed that C. glutamicum grows much faster in defined CGXII glucose medium when growth is initiated in highly diluted environments (Grünberger et al. 2013b). Here we studied the batch growth of C. glutamicum in CGXII at a comparably low starting biomass concentration of OD ≈ 0.005 in more detail. During bioreactor cultivations, a bi-phasic growth behavior with changing growth rates was observed. Initially the culture grew with μ = 0.61 ± 0.02 h⁻¹ before the growth rate dropped to μ = 0.46 ± 0.02 h⁻¹. We were able to confirm the elevated growth rate for C. glutamicum in CGXII and showed for the first time a growth rate beyond 0.6 in lab-scale bioreactor cultivations on defined medium. Advanced growth studies combining well-designed bioreactor and microfluidic single-cell cultivations (MSCC) with quantitative transcriptomics, metabolomics and integrative in silico analysis revealed protocatechuic acid as a hidden co-substrate for accelerated growth in CGXII. The presented approach proves the general applicability of MSCC for investigating and validating the effect of single medium components on microbial growth during cultivation in liquid media, and therefore might be of interest for any kind of basic growth study.
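For exponential growth, a specific growth rate μ corresponds to a doubling time t_d = ln(2)/μ, which makes the bi-phasic rates above easier to picture. A minimal check:

```python
import math

# Growth rates from the bi-phasic batch culture described above (h^-1)
for mu in (0.61, 0.46):
    t_d = math.log(2) / mu  # doubling time in hours
    print(f"mu = {mu:.2f} 1/h -> doubling time = {t_d:.2f} h")
```

So the culture initially doubles roughly every 1.1 h, slowing to about one doubling every 1.5 h in the second phase.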
Recommended publications

Effects of a propionic acid-based preservative on storage characteristics, nutritive value, and ener...
Article · January 2012 · Journal of Dairy Science · W. K. Coblentz, M. G. Bertram
During 2009 and 2010, alfalfa (Medicago sativa L.) hays from 2 cuttings harvested from the same field site were used to evaluate the effects of a propionic acid-based preservative on the storage characteristics and nutritive value of hays stored as large round bales. A total of 87 large round bales (diameter = 1.5 m) were included in the study; of these, 45 bales served as controls, whereas 42 were treated with a commercial propionic acid-based preservative at mean application rates of 0.5 ± 0.14 and 0.7 ± 0.19% of bale weight, expressed on a wet (as is) or dry matter basis, respectively. Initial bale moisture concentrations ranged from 10.2 to 40.4%. Internal bale temperatures were monitored daily during an outdoor storage period, and heating characteristics were summarized for each bale as heating degree days (HDD) > 30°C. For acid-treated bales, the regression relationship between HDD and initial bale moisture was best fitted to a quadratic model in which the linear term was dropped to improve fit (Y = 2.02x² − 401; R² = 0.77); control hays were best fitted to a nonlinear model in which the independent variable was squared [Y = 4112 − 4549·e^(−0.000559x²); R² = 0.77]. Based on these regressions, acid-treated bales accumulated more HDD than control hays when the initial bale moisture was > 27.7%; this occurred largely because acid treatment tended to prolong active heating relative to control hays. Linear regressions of recoveries of dry matter on HDD did not differ on the basis of treatment, yielding a common linear relationship of Y = −0.0066x + 96.3 (R² = 0.75). Regressions relating changes (post-storage − pre-storage) in concentrations of several nutritional components (neutral detergent fiber, lignin, ash, crude protein, and total digestible nutrients) with HDD for acid-treated hays typically exhibited more inflection points or were higher-order polynomial regressions than those of control hays. These more complex responses probably reflect the perturbation of normal heating patterns following acid treatment; however, overall effects on post-storage nutritive value were relatively limited in scope. The potential to improve nutritive value relative to cost for these large round bales was not especially favorable, and hay producers may find that diligence in achieving adequate field desiccation before baling, or use of oxygen-exclusion methods, such as wrapping in plastic, may be better alternatives for preserving moist hays.

Engineering Corynebacterium glutamicum for the production of 2,3-butanediol
Article · Full-text available · October 2015 · Microbial Cell Factories · Dušica Radoš, Ana Lucia Carvalho, Stefan Wieschalka, [...] Helena Santos
2,3-Butanediol is an important bulk chemical with a wide range of applications. In bacteria, this metabolite is synthesised from pyruvate via a three-step pathway involving α-acetolactate synthase, α-acetolactate decarboxylase and 2,3-butanediol dehydrogenase.
Thus far, the best producers of 2,3-butanediol are pathogenic strains; hence, the development of more suitable organisms for industrial-scale fermentation is needed. Herein, 2,3-butanediol production was engineered in the Generally Regarded As Safe (GRAS) organism Corynebacterium glutamicum. A two-stage fermentation process was implemented: first, cells were grown aerobically on acetate; in the subsequent production stage, cells were used to convert glucose into 2,3-butanediol under non-growing and oxygen-limiting conditions. A gene cluster encoding the 2,3-butanediol biosynthetic pathway of Lactococcus lactis was assembled and expressed in background strains, C. glutamicum ΔldhA, C. glutamicum ΔaceEΔpqoΔldhA and C. glutamicum ΔaceEΔpqoΔldhAΔmdh, tailored to minimize pyruvate-consuming reactions, i.e., to prevent carbon loss to lactic, acetic and succinic acids. Producer strains were characterized in terms of the activity of the relevant enzymes in the 2,3-butanediol-forming pathway, growth, and production of 2,3-butanediol under oxygen-limited conditions. Productivity was maximized by manipulating the aeration rate in the production phase. The final strain, C. glutamicum ΔaceEΔpqoΔldhAΔmdh(pEKEx2-als,aldB,Ptuf butA), under optimized conditions produced 2,3-butanediol with a 0.66 mol mol⁻¹ yield on glucose, an overall productivity of 0.2 g L⁻¹ h⁻¹ and a titer of 6.3 g L⁻¹. We have successfully developed C. glutamicum into an efficient cell factory for 2,3-butanediol production. The use of the engineered strains as a basis for the production of acetoin, a widespread food flavour, is proposed.
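The reported molar yield can be restated on a mass basis using the molar masses of glucose (180.16 g/mol) and 2,3-butanediol (C4H10O2, 90.12 g/mol); since two pyruvate condense into one acetolactate, one glucose can give at most one 2,3-butanediol by this route. A small sketch of that arithmetic:

```python
M_GLUCOSE = 180.16  # g/mol
M_BDO = 90.12       # g/mol, 2,3-butanediol (C4H10O2)

y_molar = 0.66                        # reported yield, mol 2,3-BDO per mol glucose
y_mass = y_molar * M_BDO / M_GLUCOSE  # -> g product per g glucose
print(f"{y_mass:.2f} g/g")            # ~0.33 g/g; the 1 mol/mol ceiling is 0.50 g/g
```

So the strain reaches about two-thirds of the theoretical yield of this pyruvate-derived pathway.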
Single-Conformation Ultraviolet and Infrared Spectra of Jet-Cooled Monolignols: p-Coumaryl Alcohol,...
Article · February 2011 · Journal of the American Chemical Society · Chirantha Rodrigo, William H. James, Timothy S. Zwier
Single-conformation spectroscopy of the three lignin monomers (hereafter "monolignols") p-coumaryl alcohol (pCoumA), coniferyl alcohol (ConA), and sinapyl alcohol (SinA) has been carried out on the isolated molecules cooled in a supersonic expansion. Laser-induced fluorescence excitation, dispersed fluorescence, resonant two-photon ionization, UV-UV hole-burning, and resonant ion-dip infrared spectroscopy were carried out as needed to obtain firm assignments for the observed conformers of the three molecules. In each case, two conformers were observed, differing in the relative orientations of the vinyl and OH substituents para to one another on the phenyl ring. In pCoumA, the two conformers have S0-S1 origins nearly identical in size, split from one another by only 7 cm⁻¹, in close analogy with previous results of Morgan et al. on p-vinylphenol (Chem. Phys. 2008, 347, 340). ConA, with its methoxy group ortho to the OH group, also has two low-energy conformers forming a syn/anti pair, in this case with the OH group locked into an orientation in which it forms an intramolecular H-bond with the adjacent methoxy group. The electronic frequency shift between the two conformers is dramatically increased to 805 cm⁻¹, with the dominant conformer of ConA (with S0-S1 origin at 32,640 cm⁻¹) about 5 times the intensity of its minor counterpart (with S0-S1 origin at 33,444 cm⁻¹). The presence of an OH···OCH3 intramolecular H-bond is established by the shift of the OH stretch fundamental to 3599 cm⁻¹, as it is in o-methoxyphenol (Fujimaki et al., J. Chem. Phys. 1999, 110, 4238). Analogous single-conformation UV and IR spectra of o-methoxy-p-vinylphenol show a close similarity to ConA and provide a basis for a firm assignment of the red-shifted (blue-shifted) conformer of both molecules to the syn (anti) conformer. The two observed conformers of SinA, with its two methoxy groups straddling the OH group, have S0-S1 origins split by 239 cm⁻¹ (33,055 and 33,294 cm⁻¹), a value between those of pCoumA and ConA. A combination of experimental data and calculations on the three monolignols and simpler derivatives is used to establish that the conformational preferences of the monolignols reflect the preferences of each of the ring substituents separately, enhanced by the presence of the intramolecular OH···OCH3 H-bond. Taken as a whole, the presence of multiple flexible substituents locks in certain preferred orientations of the groups relative to one another, even in the apparently flexible allyl alcohol side chain (-CH=CH-CH2OH), where the OH group orients itself so that the hydrogen points back over the vinyl π cloud in order to minimize interactions between the oxygen lone pairs and the π electrons.

Laccase-catalysed synthesis of coupling products of phenolic substrates in different reactors
Article · March 2003 · Applied Microbiology and Biotechnology · R. Pilz, Elke Hammer, Frieder Schauer, Udo Kragl
Substrate oxidation of aromatic substances by the enzyme laccase, followed by heteromolecular coupling with a co-substrate, is a promising route to the synthesis of new compounds. To find a suitable reactor for the effective production of new compounds, the laccase-catalysed coupling of 3-(3,4-dihydroxyphenyl)propionic acid with 4-aminobenzoic acid was investigated as a model system. Based on the kinetic parameters, a mathematical model was used to predict the reaction yield and oxygen demand in a discontinuously stirred tank reactor and a continuously operated stirred tank reactor. Membrane processes were used for bubble-free aeration of the system and to recover the soluble enzyme.
Naming Alkenes

Suffix: -ene. Many of the same rules for alkanes apply to alkenes.

1. Name the parent hydrocarbon by locating the longest carbon chain that contains the double bond, and name it according to the number of carbons with the suffix -ene. Example: in CH3CH2CH2-C(=CH2)-CH2CH3 (2-ethyl-1-pentene), the parent is pentene, not hexene; the six-carbon chain does not contain the double bond.

2. a. Number the carbons of the parent chain so that the double-bond carbons have the lowest possible numbers. Example: CH3-CH=CH-CH2-CH2-CH3 is numbered from the end nearer the double bond, giving 2-hexene.
   b. If the double bond is equidistant from each end, number so that the first substituent gets the lowest number. Example: CH3-CH(CH3)-CH=CH-CH2-CH3 is 2-methyl-3-hexene (this numbering puts the methyl at C2 rather than C5).

3. Write out the full name, numbering the substituents according to their position in the chain, and list them in alphabetical order.

4. Indicate the double bond by the number of the first alkene carbon: CH3-CH=CH-CH2-CH2-CH3 is 2-hexene.

5. If more than one double bond is present, indicate the position of each by the number of the first carbon of that double bond, and use the suffix -diene (2 double bonds), -triene (3 double bonds), -tetraene (4 double bonds), etc. Examples: CH2=CH-CH2-CH=CH2 is 1,4-pentadiene; CH2=CH-CH=CH-CH3 is 1,3-pentadiene.

6. a. Cycloalkenes are named in a similar way. Number the ring so that the double-bond carbons get numbers 1 and 2 and the first substituent has the lowest possible number: a methyl group on the carbon adjacent to the double bond gives 3-methylcyclohexene, NOT 6-methylcyclohexene.
   b. If there is a substituent on one of the double-bond carbons, that carbon gets number 1: 1,5-dimethylcyclopentene, NOT 2,3-dimethylcyclopentene.

Alkenes as substituents: -CH=CH2 is ethenyl or vinyl (vinylcyclohexane); -CH2-CH=CH2 is 2-propenyl or allyl (allylcyclohexane); =CH2 is methylene (methylenecyclohexane); =CH-CH3 is ethylidene (ethylidenecyclohexane).

Non-IUPAC alkene names (Table 6.1, pg. 184): CH2=CH2, ethylene (ethene); CH3-CH=CH2, propylene (propene); (CH3)2C=CH2, isobutylene (2-methylpropene); CH2=C(CH3)-CH=CH2, isoprene (2-methyl-1,3-butadiene).
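Rule 2a is mechanical enough to express in a few lines of code. The toy function below (not part of the notes) picks the correct locant for the double bond in an unbranched chain by comparing the two possible numbering directions:

```python
def alkene_locant(chain_length: int, bond_start: int) -> int:
    """Lowest locant for a C=C that begins at carbon `bond_start`
    (1-based, counted from an arbitrary end of an unbranched chain)."""
    from_other_end = chain_length - bond_start  # renumber from the opposite end
    return min(bond_start, from_other_end)

# Hexene with the double bond between C4 and C5 when counted from the left:
print(alkene_locant(6, 4))  # -> 2, so the compound is named 2-hexene
```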
Kayles
From Wikipedia, the free encyclopedia

This article is about the combinatorial game. For the lawn game, see Bowling and Skittles (sport).

[Figure: a row of bowling pins. On their turn, a player may choose to eliminate a single pin, or two adjacent ones.]

Kayles is a simple impartial game in combinatorial game theory, invented by Henry Dudeney in 1908. Given a row of imagined bowling pins, players take turns to knock out either one pin or two adjacent pins, until all the pins are gone. Using the notation of octal games, Kayles is denoted 0.77.

Rules

Kayles is played with a row of tokens, which represent bowling pins. The row may be of any length. The two players alternate; each player, on his or her turn, may remove either any one pin (a ball bowled directly at that pin) or two adjacent pins (a ball bowled to strike both). Under the normal play convention, a player loses when they have no legal move (that is, when all the pins are gone). The game can also be played using misère rules; in this case, the player who cannot move wins.

History

Kayles was invented by Henry Dudeney. Richard Guy and Cedric Smith were the first to completely analyze the normal-play version, using Sprague–Grundy theory. The misère version was analyzed by William Sibert in 1973, but he did not publish his work until 1989. The name "Kayles" is an Anglicization of the French quilles, meaning "bowling pins".

Analysis

Most players quickly discover that the first player has a guaranteed win in normal Kayles whenever the row length is greater than zero. This win can be achieved using a symmetry strategy. On their first move, the first player should move so that the row is broken into two sections of equal length. This restricts all future moves to one section or the other. Now the first player merely imitates the second player's moves in the opposite row.

It is more interesting to ask what the nim-value of a row of length $n$ is. This is often denoted $K_n$; it is a nimber, not a number. By the Sprague–Grundy theorem, $K_n$ is the mex, over all possible moves, of the nim-sum of the nim-values of the two resulting sections.
For example,

$$K_5 = \mathrm{mex}\{K_0 + K_4,\; K_1 + K_3,\; K_2 + K_2,\; K_0 + K_3,\; K_1 + K_2\},$$

because from a row of length 5 one can move to the positions $K_0 + K_4$, $K_1 + K_3$, $K_2 + K_2$ (by removing one pin) and $K_0 + K_3$, $K_1 + K_2$ (by removing two adjacent pins).

Recursive calculation of values (starting with $K_0 = 0$) gives the results summarized in the following table. To find the value of $K_n$ in the table, write $n$ as $12a + b$, and look at row $a$, column $b$:

Kayles nim-values through $K_{83}$

| $K_n$ | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0+ | 0 | 1 | 2 | 3 | 1 | 4 | 3 | 2 | 1 | 4 | 2 | 6 |
| 12+ | 4 | 1 | 2 | 7 | 1 | 4 | 3 | 2 | 1 | 4 | 6 | 7 |
| 24+ | 4 | 1 | 2 | 8 | 5 | 4 | 7 | 2 | 1 | 8 | 6 | 7 |
| 36+ | 4 | 1 | 2 | 3 | 1 | 4 | 7 | 2 | 1 | 8 | 2 | 7 |
| 48+ | 4 | 1 | 2 | 8 | 1 | 4 | 7 | 2 | 1 | 4 | 2 | 7 |
| 60+ | 4 | 1 | 2 | 8 | 1 | 4 | 7 | 2 | 1 | 8 | 6 | 7 |
| 72+ | 4 | 1 | 2 | 8 | 1 | 4 | 7 | 2 | 1 | 8 | 2 | 7 |

At this point, the nim-value sequence becomes periodic with period 12, so all further rows of the table are identical to the last row.
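The recursion above translates directly into a short memoized program. The sketch below (Python, not from the article) computes $K_n$ by taking the mex of the nim-sums (XOR) of the two sections left by each move; its output reproduces the first row of the table:

```python
from functools import lru_cache
from itertools import count

@lru_cache(maxsize=None)
def kayles(n: int) -> int:
    """Nim-value K_n of a row of n pins under normal play (Sprague-Grundy)."""
    options = set()
    for take in (1, 2):                # remove one pin, or two adjacent pins
        for i in range(n - take + 1):  # the move splits the row into sizes i and n-i-take
            options.add(kayles(i) ^ kayles(n - i - take))  # nim-sum of the sections
    return next(m for m in count() if m not in options)    # mex of the option values

print([kayles(n) for n in range(12)])  # [0, 1, 2, 3, 1, 4, 3, 2, 1, 4, 2, 6]
```

A position is a first-player win exactly when its nim-value is nonzero, consistent with the symmetry argument above.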
Applications

Because certain positions in dots and boxes reduce to Kayles positions, it is helpful to understand Kayles in order to analyze a generic dots and boxes position.

Computational complexity

Under normal play, Kayles can be solved in polynomial time using Sprague–Grundy theory.

Generalizations

Node Kayles is a generalization of Kayles to graphs in which each bowl "knocks down" (removes) a desired vertex and all its neighboring vertices. Alternatively, this game can be viewed as two players finding an independent set together. Winner determination is solvable in polynomial time for any family of graphs with bounded asteroidal number (defined as the size of a largest subset of vertices such that the removal of the closed neighborhood of any vertex in the set leaves the remaining vertices of the set in the same connected component). Similarly, in the clique-forming game, two players must find a clique in the graph; the last to play wins. Schaefer (1978) proved that deciding the outcome of these games is PSPACE-complete. The same result holds for a partisan version of node Kayles, in which, for every node, only one of the players is allowed to choose that particular node as the knock-down target.

See also

Combinatorial game theory; Octal games; Dawson's Kayles; Nimber

References

1. Dudeney, H. E. (2002). The Canterbury Puzzles. Dover, pp. 118–119, puzzle 73. ISBN 0-486-42558-4. Originally published in 1908.
2. Conway, John H. (1976). On Numbers and Games. Academic Press.
3. Guy, R. K.; Smith, C. A. B. (1956). "The G-values of various games". Proc. Cambridge Philos. Soc. 52: 514–526.
4. Plambeck, T. E. (1992). "Daisies, Kayles and the Sibert–Conway decomposition in misère octal games". Theoret. Comput. Sci. (Math Games) 96: 361–388.
5. Plambeck, Thane. "Kayles" (archived from the original 2008-10-12; retrieved 2008-08-15).
6. Berlekamp, E.; Conway, J. H.; Guy, R. (1982). Winning Ways for your Mathematical Plays. Academic Press.
7. Bodlaender, Hans L. (2015). "Exact Algorithms for Kayles". Theoretical Computer Science 562: 165–176. doi:10.1016/j.tcs.2014.09.042.
8. Schaefer, Thomas J. (1978). "On the complexity of some two-person perfect-information games". Journal of Computer and System Sciences 16 (2): 185–225. doi:10.1016/0022-0000(78)90045-4.
Applied Surface Science Advances, Volume 6, 1 December 2021, 100175

Overcoming friction and steps towards superlubricity: A review of underlying mechanisms
Himanshu Shekhar, Ravikumar Dumpala
Open access, under a Creative Commons license

Abstract
Herein, we present a topical review of the advances and the mechanisms involved in achieving superlubricity, the regime of friction in which the coefficient of friction (COF) is less than 0.01. In light of the race towards achieving superlubricity on an industrial scale, this review gives a brief overview of the friction-dissipating mechanisms at the atomic scale and the atomistic models devised over the years to explain the phenomenon of superlubricity. Furthermore, specific emphasis is given to the mechanisms discovered so far for achieving superlubricity on both micro and macro scales, in accordance with their physical fundamentals. Additionally, a brief overview of certain obstacles along this course is discussed in the later section, to guide emerging directions of research. This review provides guidance for understanding the basics of, and advances in, the domain of superlubricity, facilitating its transition from laboratories to engineering-scale applications.

Keywords: Superlubricity; Ultra-low friction; Nanotribology; Incommensurate; Wear; Nanoscale

1. Introduction

Although lubricants can be employed and wear can be monitored, in the end it is the energy saved by a tribological system that counts the most. Only a handful of discoveries and novel technologies in tribology have the potential to dramatically alter and transmogrify our world; the discovery of superlubricity, the regime of friction in which friction vanishes or nearly vanishes, is one of them. Friction is an intriguingly quotidian yet poorly understood occurrence involved in diverse industries such as transportation, energy, aerospace, metallurgy, oceanography, and bioengineering. At the same time, friction has an inimical influence on global energy demand, the environment, and the economy: around 20 to 30% of the world's energy consumption is accounted for by friction. There has been a significant effort over the past few decades to minimise friction through engineered surfaces, improved lubricants, and optimized mechanical systems, yet the complexity of the problem has obligated most of this research to be experimental rather than theoretical. Nevertheless, in recent years scientists have discovered ways of achieving significant friction reduction under certain conditions, working both to reduce the adverse effects of friction (and the consequent energy losses) and to reach a near-zero friction coefficient. This regime of friction was called superlubricity. Fig. 1 depicts the number of publications on superlubricity in the past decade.
Fig. 1. Statistical data of the articles related to superlubricity. Data were compiled from the Web of Science Core Collection (Clarivate) database as of 28th August 2021; search condition: "Superlubricity" in Title, Abstract, or Keywords.

Notwithstanding all these efforts and diverse approaches, superlubricity has scarcely ever been implemented in practical systems or achieved at engineering scales. In view of the fact that it is next to impossible to achieve a zero-friction state, superlubricity is customarily defined as a friction regime in which the coefficient of friction is minimised to the order of 0.01 or lower (Fig. 2). This criterion alone does not completely define the regime; other parameters must also be taken into consideration, the most prominent of which is minimal wear.

Fig. 2. Friction coefficient range in the superlubricity regime. Reprinted with permission from Reference. Copyright 2020, Springer Nature.

Berman et al. at Argonne National Laboratories demonstrated the potential to achieve superlubricity on a macroscale by graphene nanoscroll formation (graphene wrapping around nanodiamonds), which facilitates sliding on a diamond-like carbon (DLC) surface with a friction coefficient as low as 0.004. It was also realised that water perturbs this phenomenon by stabilising the defects in graphene that are actually responsible for bond formation with the nanodiamonds (Fig. 3). The perplexity in achieving macroscale superlubricity is often due to the complexity of the different simultaneous interactions occurring at the sliding interfaces of systems. Although this regime at the molecular/atomic scale is roughly achieved through desirable chemistry, structural orders, and orientations in two-dimensional (2D) materials and heterojunctions, attaining this phenomenon at an industrial scale is arduous.

Fig. 3. Molecular dynamics (MD) simulations of single graphene nanoscroll formation: change in friction at the nanoscale with respect to time for a DLC ball sliding against graphene + nanodiamonds in dry and humid environments. Graphene nanoscrolls formed over the nanodiamonds in the dry environment, with a steady-state COF of 0.005 ± 0.004, whereas in a humid environment the formation of nanoscrolls is prevented and the COF obtained is 0.12 ± 0.04. Reprinted with permission from Reference. Copyright 2015, American Association for the Advancement of Science.

In recent years, owing to endeavours in characterisation, modelling, and simulation, considerable progress has been achieved in this domain; the consequent mechanisms are divided into eight main categories: structural superlubricity, hydrogen-mediated superlubricity, superlubricity via rolling effect, liquid superlubricity, thermolubricity, superlubricity via cold ion chains, superlubricity by virtue of telescopic dynamics in nanotubes and confined geometries, and superlubricity by virtue of self-assembly in liquid crystals. This review gives an overview of up-to-date developments in the superlubricity regime, including the atomistic models used to explain it, followed by the underlying mechanisms and a perspective for future research in this field.
1.1. Frictional energy dissipation mechanisms

At the nanoscopic level, friction becomes very complicated and cannot be analysed by the laws applicable to rigid bodies at the macroscale. Berman et al. described the various phenomena that are associated with energy losses due to sliding and therefore lead to friction (Fig. 4).

Fig. 4. Representative schematics of nanoscopic frictional energy dissipation mechanisms, as discussed in sequence. Reprinted with permission from Reference. Copyright 2018, American Chemical Society.

The crux of these mechanisms is as follows.

1.1.1. Wear
Generally, frictional energy is dissipated through shear and removal of material at the sliding interface, causing physical damage to the materials. This makes the surface rough and highly reactive, and can be a consequence of structural deformation, fatigue, and crack initiation and propagation. Friction-induced wear frequently consumes or dissipates energy during sliding, and changes in the wear behaviour of tribo-systems can be analysed to get an idea of the friction regime. Researchers at Argonne National Laboratories demonstrated that graphene suppressed corrosive, adhesive, and abrasive degradation, facilitated facile shearing at the interface, and inhibited wear by reducing friction.

1.1.2. Molecular deformation
This mechanism of friction dissipation is linked to the contortion of molecules of considerable size in the vicinity of the surfaces of the sliding counterparts. It can be regarded as a corollary of elastoplastic surface deformation, and it is highly dependent on the physical and/or chemical affinity of the mating surfaces. The highly inert nature and the bending flexibility of 2D materials help them counter molecular deformation. Increasing the number of layers of 2D materials also reduces molecular deformation, making them potential candidates to counter this effect.

1.1.3. Thermal effect
This mechanism involves the thermal activation of atoms at the interface as they move around. Thermally activated energy barriers have to be overcome by the atoms during sliding. The corrugation of interface atoms is the pivotal factor in the analysis of the frictional response, as explained by the atomistic models in the coming sections. In bulk materials, sliding friction decreases as temperature increases, but the opposite is seen when atoms of 2D materials are analyzed at the nanoscale. Theoretical studies show that there is high friction between a tip made of single-walled carbon nanotube (SWCNT) and a graphene membrane; this may occur due to enhanced bending, which results in dynamical corrugation of the surfaces at increased temperatures.

1.1.4. Electronic effects
Electronic effects, more specifically static electricity (electrostatic charge generation), can dominate friction at sliding interfaces. The friction coefficient was seen to be reduced by a factor of two for ultra-thin lead samples sliding past frozen nitrogen layers below the superconducting transition temperature of lead. The absence of van der Waals bonds in 2D materials contributes to the decrease of friction countering this effect, and high electrical conductivity (especially in graphene) acts against electrostatic charge buildup. Wang et al. showed, using density functional theory (DFT) calculations, that sliding friction is lower between hydrogenated graphene layers than between pristine graphene layers.
This friction decrement originates from the charge accumulation between carbon and hydrogen atoms, which gives rise to electrostatic repulsion during sliding of the hydrogenated graphene layers.

1.1.5. Bonding
This mechanism can be attributed to chemical interactions at the interfaces. The formation and breakage of bonds during sliding is the result of these chemical interactions at the asperity contacts. Chemical bond formation is frequently responsible for adhesion modulation. Researchers have also demonstrated that friction can be minimized by inserting a relatively inert molecular layer between 2D materials in liquids.

1.1.6. Phonons
Some researchers have observed that heat is generated during sliding even without wear. Theoretical predictions for these phenomena depend on the atomic-level dissipation of energy through electronic excitations (exoelectrons), optical excitations (photons), and lattice vibrations (phonons) at or near the materials' surfaces during sliding. Prasad et al. used MD simulations to demonstrate friction dissipation by phononic coupling in carbon nanotube (CNT) oscillators, concluding that such friction is the net effect of two different coupling mechanisms: longitudinal and transverse phonon modes. Dong et al. investigated this via molecular dynamics simulations and the two-temperature method and confirmed that electron-phonon coupling in graphene affects friction only moderately compared to substrate roughness.

1.1.7. Environmental effects
The previous sections described effects involving only surface interactions, which would be accurate under the assumption of the system being in ultrahigh vacuum. However, a circumambient environment containing a particular gas can positively or negatively impact the friction coefficient, depending on the molecular characteristics of the gas-surface interactions. This effect basically deals with surface functionalization and the contribution of the circumambient atoms in the vicinity of the system during sliding. Graphene lasts for 6000 more cycles in macroscale pin-on-disk tests when the environment consists of hydrogen gas. The commonly used test environments for DLC films are Ar, H2O, H2, CO2, H2-He mixture, ultra-high vacuum (UHV), humid air, dry N2, and O2.

1.1.8. Structural effects
This effect is completely dependent on the commensurability or incommensurability of the mating surfaces. If the lattice symmetries of two ordered crystalline materials are impeccably matched and aligned in synergy with one another in the direction of sliding (Fig. 5(a)), the structural matching is said to be commensurate. Such surfaces exhibit atomic interlocking and, therefore, strong adhesion and friction. Incommensurability between sliding surfaces is said to occur when there is a mismatch of the lattices (Fig. 5(b)) (structural superlubricity, explained in section 3.1). Graphene shows structural superlubricity when the two surfaces are rotated against each other and also when there is a lattice-spacing mismatch (Fig. 5(c)). In 2013, Cho et al. found that mechanically stripped graphene arrests surface wrinkle deformation by virtue of its conformal morphology, enhancing intimate contact with the substrate and suppressing the puckering effect, thereby diminishing friction.

Fig. 5. Schematic representation of structural superlubricity.
(a) When the two surfaces have the same rotational symmetry and a matching orientation, the lattices interlock and sliding is arrested (commensurability). (b) One layer is given an angular rotation to disrupt rotational symmetry. (c) The layers have a lattice-spacing mismatch. (b) and (c) represent incommensurability. Reprinted with permission from Reference. Copyright 2020, Springer Nature.

2. Atomistic models

Superlubricity at the atomic scale has been explained using atomistic simulations and simple atomic models, which proved the concept of force cancellation underlying the observed behaviour. Entirely new avenues for achieving superlubricity can thus be derived from these atomistic models.

2.1. Prandtl-Tomlinson Model (PT Model)

This model is the basis of the working of the atomic force microscope and is one of the most simplified atomic models in nanotribology. Herein, a nanotip (a point mass: an atom or ion) is dragged across a corrugated periodic potential by an elastic medium, generally a spring (Fig. 6). A frictional parameter η (the Aubry relation parameter) can be defined to express the relation between the elastic energy stored in the spring and the energy corrugation. If the tip-surface interaction is described by a sinusoidal potential with amplitude V0 and periodicity (the distance between two peaks of the corrugated periodic potential) a, then

η = 4π² V0 / (C_eff a²),

where C_eff is the spring constant shown in Fig. 6. For η < 1, sliding occurs in the superlubric regime; for η > 1, sliding occurs in the stick-slip regime. In 2006, Hirano showed that, though the PT model demonstrates the energy dissipation mechanism during sliding, its occurrence in practical friction systems is highly improbable.

Fig. 6. A schematic diagram of the PT model showing a nanotip (point mass) dragged across a corrugated periodic potential surface. Here, C_eff is the spring constant of the harmonic spring and a is the distance between two crests of the corrugated periodic potential surface. Reprinted with permission from Reference. Copyright 2016, American Chemical Society.
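The Aubry criterion above is simple enough to evaluate directly. The following sketch (not from the review; the numerical values are illustrative assumptions, not measured parameters) computes η and classifies the sliding regime:

```python
import math

EV = 1.602e-19  # J per eV

def aubry_parameter(V0_J: float, C_eff: float, a_m: float) -> float:
    """eta = 4*pi^2*V0 / (C_eff * a^2) from the Prandtl-Tomlinson model.

    V0_J  -- corrugation amplitude of the tip-surface potential (J)
    C_eff -- effective spring constant (N/m)
    a_m   -- lattice periodicity of the potential (m)
    """
    return 4 * math.pi**2 * V0_J / (C_eff * a_m**2)

# Illustrative numbers only: V0 = 0.2 eV, C_eff = 10 N/m, a = 0.25 nm
eta = aubry_parameter(0.2 * EV, 10.0, 0.25e-9)
regime = "superlubric (eta < 1)" if eta < 1 else "stick-slip (eta > 1)"
print(f"eta = {eta:.2f} -> {regime}")
```

Lowering the corrugation V0 (for example by passivating the surface) or stiffening the contact pushes η below 1, which is the model's route to superlubric sliding.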
2.2. Frenkel-Kontorova Model (FK Model)

The distinctive feature of this model is that it takes elasticity into account. The model consists of a one-dimensional chain of particles (atoms/ions) linked by harmonic springs, the whole of which is subjected to a sinusoidal interaction potential (Fig. 7). The ratio of the lattice constants determines the degree of commensurability, Ω = a/b, where a is the length of the spring between two adjacent atoms and b is the distance between two successive valleys of the periodic lattice arrangement.

Fig. 7. A schematic diagram of the FK model depicting the one-dimensional (1D) chain of particles. Reprinted with permission from Reference. Copyright 2009, Elsevier.

The FK model predicts the average kinetic energy to depend on Ω at lower sliding velocities, whereas in the PT model the kinetic energy is independent of Ω. For a given a and b, a threshold spring constant k_c can be determined: if k > k_c, static and kinetic friction diminish; if k < k_c, static and kinetic friction remain limited but finite. In 2021, Gao et al. stated that there exists an analogy between the ratio of the binding energy to the smallest in-plane stiffness (Γ_b/E_min) and the Aubry transition parameter. In the FK model, the spring stiffness (k) is fundamentally proportional to the in-plane stiffness (E). In the superlubric regime, i.e., when η < 1 and k > k_c, there is an equivalence between the ratio (Γ_b/E_min) and the Aubry transition parameter.

2.3. Frenkel-Kontorova-Tomlinson Model (FKT Model)

This model is the culmination of both the PT and FK models. It features mechanical particles whose interactions are represented by springs in two different orientations, horizontal and vertical: the horizontal springs represent the neighbouring inter-particle interactions, and the vertical ones represent the particle-moving-substrate interaction, as shown in Fig. 8. The model operates on the assumption that the interaction potentials of the atoms of one surface relative to the atoms of the other sliding surface can be modeled as a harmonic function. The FK and FKT models facilitate the prediction of friction as a function of lattice spacing. In 2020, Abhishek Kumar explained this behaviour using the FKT model as follows: (i) in the case of commensurate solids, the coefficient of friction is independent of the contact area; (ii) when the elastic properties of the bulk material can compensate for the instabilities, symmetry plays a critical role in establishing the relation between friction and contact area; (iii) in the case of incommensurate interfaces, the coefficient of friction is observed to decrease linearly with contact area.

Fig. 8. A schematic diagram of the FKT model, where a is the periodic length, b is the length of the horizontal spring at equilibrium, and k1 and k2 are the spring constants of the vertical and horizontal springs, respectively. Reprinted with permission from Reference. Copyright 2021, Elsevier.

3. Mechanisms to achieve superlubricity

In 2018, Ali Erdemir and Jean Michel Martin explained four mechanisms for achieving superlubricity, following which a few more modes were discovered; a brief description of each can be found below.

3.1. Structural superlubricity

On the macroscale, the coefficient of friction of sliding surfaces is highly dependent on surface roughness. However, research at the nanoscale has demonstrated that friction originates from atomic interactions. In 1993, Motohisa Hirano theorised that the coefficient of friction should vanish for specific orientations of crystalline surfaces in contact; he referred to this behaviour as superlubricity. Low temperatures are preferable (but do not significantly affect the mechanism) because they minimise the thermal vibration frequency of the surface atoms. This mechanism has its roots in the mismatch between the lattice arrangements of two surfaces in contact (rotational symmetry mismatch, or lateral orientation/lattice-spacing mismatch). If surface atoms having a periodic arrangement are layered against one another so that their crests and troughs fit perfectly together (Fig. 5(a)), the surfaces undergo sliding in the stick-slip regime. If one surface is given an angular rotation that hinders the rotational symmetry, the crests and troughs no longer match (Fig. 5(b)); the lattice misalignment cancels the lateral forces omnidirectionally, which corresponds to zero net friction force. In addition, incommensurability can also be achieved by coupling the surfaces with mismatched lattice spacing (Fig. 5(c)).
3. Mechanisms to achieve superlubricity

In 2018, Ali Erdemir and Jean Michel Martin explained four mechanisms for achieving superlubricity, after which a few more modes were discovered; brief descriptions follow.

3.1. Structural superlubricity

On the macroscale, the coefficient of friction of sliding surfaces depends strongly on the surface roughness. Research at the nanoscale, however, has demonstrated that friction originates from atomic interactions. In 1993, Motohisa Hirano theorised that the coefficient of friction should vanish for specific orientations of crystalline surfaces in contact; he referred to this behaviour as superlubricity. Low temperatures are preferable (though they do not significantly affect the mechanism) because they minimise the thermal vibration frequency of the surface atoms. The mechanism is rooted in a mismatch between the lattice arrangements of the two surfaces in contact (a rotational-symmetry mismatch, or a lateral orientation/lattice-spacing mismatch). If surface atoms with a periodic arrangement are layered against one another so that their crests and troughs fit perfectly together (Fig. 5(a)), the surfaces undergo sliding in the stick-slip regime. If one surface is given an angular rotation that breaks the rotational symmetry, the crests and troughs no longer match (Fig. 5(b)); the lattice misalignment cancels the lateral forces omnidirectionally, which corresponds to zero net friction force. In addition, incommensurability can also be achieved by coupling surfaces with mismatched lattice spacings (Fig. 5(c)).

In recent years, the superlubricity regime achieved via frictional anisotropy (either rotational-symmetry mismatch or lattice-spacing mismatch) has been referred to as structural superlubricity. Two-dimensional (2D) heterostructures appear to be the prime candidates for achieving it. In 2014, Wang et al. conducted a first-principles study of the atomic-scale friction of a two-dimensional fluorographene/MoS2 heterostructure and found that it demonstrated superlubricity, attributed to the formation of a Moiré pattern that cancels localised energy and force variations. In 2017, Li et al. developed a novel, versatile method to test interlayer friction in 2D materials and provided evidence for ultra-low friction between atomic layers of hexagonal MoS2 (COF ≈ $10^{-4}$). In 2021, Gao et al. devised three criteria that any 2D heterojunction must fulfil to slide in the superlubric regime:
• there must be incommensurability between the contact surfaces, arresting the interactions between the sliding faces;
• the binding energy between subsequent layers of the heterojunction must be low, minimising the out-of-plane mechanical disturbance during incommensurate sliding;
• the lamellar heterojunction materials must demonstrate high stiffness, so as to counter in-plane structural deformation and thus preserve the incommensurability.

Based on these criteria, they shortlisted 61 heterojunctions out of 208 potential candidates that demonstrated structural superlubricity, the best of which was C2/C2 with an ($E_{\mathrm{min}}/\Gamma_b$) ratio of 1266.4, the highest among all those considered.

The registry index (RI) concept is used to evaluate interlayer registry matching in layered materials using basic geometric considerations. It can be used to quantify the registry mismatch between two lattices, and it also explains the observed interfacial interactions and wear-less sliding friction in layered materials. In 2012, Oded Hod used this concept to formulate a model for predicting the misfit-angle dependence of friction in bilayer hexagonal boron nitride (h-BN). He presented an accurate relationship between interlayer commensurability and nonabrasive friction in layered materials, which could also capture the behaviour of graphitic sheets sliding on a graphene surface. Leven et al. defined the RI at the interface of graphene and h-BN and concluded that sufficiently large graphene flakes could slide on h-BN layers, regardless of their relative orientation, and achieve a stable superlubric state [20].

Two of the most recently devised ways to achieve structural superlubricity are discussed below.

3.1.1. Structural superlubricity via strain engineering

Commensurate stacking is the energetically more stable state, so stacking systems tend to align themselves in commensurate orientations, and superlubricity is arrested. Some routes to suppress this behaviour, as explained in 2019 by Wang et al., are:
a) forming heterojunctions between 2D materials to obtain a lattice mismatch;
b) applying a geometric constraint on graphene nanoribbons as the substrate;
c) stretching one of the surfaces to create a mismatch in periodicity.

Elaborating on (c): if one of the surfaces is stretched beyond a critical strain, the relative orientation of the surfaces in contact is no longer a significant factor in determining the COF; a simple geometric estimate of the resulting Moiré period is sketched below.
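As a rough geometric aid (a sketch only: the formula is the standard Moiré-period expression for twisted or mismatched hexagonal lattices, and the numbers are illustrative rather than taken from Wang et al.):

```python
import math

def moire_period(a, twist_deg=0.0, mismatch=0.0):
    """Moire superlattice period for two hexagonal lattices:
    L = (1 + delta) * a / sqrt(delta^2 + 2*(1 + delta)*(1 - cos(theta))),
    with a = lattice constant, theta = twist angle, delta = relative
    lattice mismatch (e.g. strain-induced). theta and delta must not
    both be zero."""
    theta = math.radians(twist_deg)
    d = mismatch
    return (1.0 + d) * a / math.sqrt(d**2 + 2.0 * (1.0 + d) * (1.0 - math.cos(theta)))

# Illustrative: graphene lattice constant 0.246 nm.
print(moire_period(0.246, mismatch=0.01))  # 1% stretch  -> ~24.8 nm period
print(moire_period(0.246, twist_deg=1.0))  # 1 deg twist -> ~14.1 nm period
```

The estimate shows why a percent-level strain is enough to shrink the Moiré period well below the size of a typical flake, the regime in which the flake-to-Moiré area ratio discussed next becomes relevant.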
In 2019, Wang et al. demonstrated via MD simulations that equi-biaxial and uniaxial stretching of graphene could reduce interlayer friction severalfold. The strain value beyond which a superlubric state can be achieved (the critical strain) has an inverse relationship with the size of the graphite flake. For both types of stretching, the friction force decreased as the strain increased. Analysing the standard deviation, interaction potential, and spatial distribution of friction within the flake, they found that this reduction in COF was caused by the absolute lattice mismatch rather than by out-of-plane displacements. Later in the same year, they used MD simulations to demonstrate quantitatively how the area of the Moiré pattern (Fig. 9(a) and (b)) relative to the area of the graphite flake plays a significant role in this mechanism for achieving superlubricity.

Fig. 9. (a) Friction force (pN/atom) versus substrate biaxial strain percentage for a graphene flake containing 2400 atoms. (b) Comparison of the flake length and the Moiré-pattern length ($L_{\mathrm{flake}}$ and $L_{\mathrm{Moiré}}$). Reprinted with permission from Reference. Copyright 2019, American Chemical Society.

3.1.2. Superlubricity via numberless micro-contact

In 2020, Li et al. proposed a novel method for achieving structural superlubricity on the macroscale by treating the topography of engineering steel substrates as numberless microscopic point contacts instead of a bulk macroscale surface contact. To achieve incommensurability, the surface had to meet several criteria, such as being atomically flat and inert towards external interactions. Consequently, they coated the steel substrates with different 2D heterogeneous nanocomposites. After a moderate pre-running period, the nanoparticles aligned themselves so as to arrest chemical interactions through heterogeneous compounding and thus facilitate sliding. The analysis showed that a graphene/MoS2 coating exhibited robust superlubricity, proving this to be a promising route to robust superlubricity on the macroscale.

Despite the exciting prospects of this mechanism, incommensurability at the interface does not always lead to superlubricity. In 2017, Dietzel et al. compared the sliding of Sb particles on highly oriented pyrolytic graphite (HOPG) and on MoS2 via ab initio simulations and found that, although the sliding of Sb particles on HOPG was in accordance with structural superlubricity, Sb particles sliding on MoS2 showed a transition from superlubricity to constant shear-stress behaviour, attributed to strong chemical interactions that promoted the formation of dislocations and the breakdown of structural superlubricity.

3.2. Hydrogen-mediated superlubricity

In carbon-based materials, the attainment of superlubricity relies heavily on the surrounding gases in the controlled test environment. In 2014, Chen et al. achieved superlubricity for a Si-containing hydrogenated amorphous carbon film [polymer-like a-C:H:Si (31.9 at.% H)] in different test environments (humid air, dry inert N2, reactive H2). They concluded that the most influential factors controlling the tribological properties of such films are the hydrogen-induced diversity of the film structures and the characteristics of the environmental gas.
This also holds true for DLCs, whose atoms lack long-range order but are tetrahedrally coordinated. They can provide ultra-low COFs (as low as 0.001) depending upon the nature and extent of the bulk, surface, and tribochemistry. The atoms at the surface of DLC films generally undergo passivation of dangling bonds by hydrogen termination, and these carbon-hydrogen bonds are much stronger than the single covalent bonds between carbon atoms [35]. When hydrogen is scarce, the surface atoms develop strong interactions, which by default increase adhesion and consequently the coefficient of friction. During hydrogen termination, the electric charge density of the hydrogen atoms shifts towards the carbon atoms, so the hydrogen atoms at the surface carry a positive charge. The resulting dipole moments produce a repulsion between the two surfaces, as shown in Fig. 10. In 2017, Wang et al. demonstrated via molecular dynamics and ab initio simulations that such repulsion exists between fully hydrogenated DLC films stacked against one another. This evidence shows how the presence of hydrogen in the environment, or hydrogen termination of DLC surfaces, facilitates superlubricity. On the contrary, the presence of water or oxygen molecules in the environment can alter the mechanism and result in an increase in friction force.

Fig. 10. Friction simulation model of a DLC substrate. Reprinted with permission from Reference. Copyright 2017, American Chemical Society.

3.3. Superlubricity via rolling effect

The moment of friction in rolling motion is much smaller than in sliding motion owing to the reduced contact area, which in turn reduces the chemical interactions between the surfaces. The prime candidates for achieving superlubricity via this effect are onion-like/fullerene-like carbons (OLCs/FLCs). OLC is a generic name for spherical carbon materials composed wholly of sp2-hybridised carbon atoms, with remarkable self-lubricating properties by virtue of their tight microstructure, excellent dispersibility, high chemical inertness, and high elasticity (elastic recovery ∼92%) [16]. Similar in shape are FLCs, which are composed of curved and cross-linked graphene sheets covalently linked by tetrahedral sp3 carbon bonds. In addition, the formation of nano-scrolls can also reduce the contact area and thus the friction. As shown in Fig. 3, using MD simulations Berman et al. tracked the change in nanoscale friction with time for a DLC ball sliding against graphene plus nanodiamonds in dry and humid environments. Graphene nano-scrolls (similar to the OLC structure) formed around the nanodiamonds in the dry nitrogen environment, with a steady-state COF of 0.005 ± 0.004. Though structural superlubricity plays a part here, the key aspect is the reduction in contact area attributed to the rolling effect of these structures at the interface.

3.4. Liquid superlubricity

Liquid superlubricity is very different from solid superlubricity because of the major difference in the contact models at the interface. Liquid superlubricity also holds more potential for industrial applications, because most investigations of it are conducted on the macroscale.

Friction in lubricated tribological systems is described by the Stribeck curve, in which the friction coefficient is plotted against the Hersey number, a dimensionless quantity formed from the dynamic viscosity, the sliding speed, and the normal load per unit length of contact. At low speeds (the boundary-lubrication regime), surface contact is thorough, the viscosity of the lubricant plays no role in reducing friction, and the COF is high. Upon a further increase in sliding speed (the mixed-lubrication regime), the surfaces start to separate and friction reduction by the viscous fluid comes into play. At still higher speeds (the elastohydrodynamic-lubrication regime), a lubricant film envelops the inner surface and separates it from the outer one. The last regime is reached at the highest sliding speeds (the hydrodynamic-lubrication regime), in which asperity contact is negligible, the friction force depends critically on the viscosity of the lubricant, and the surface is supported by the hydrodynamic pressure. A small numerical illustration of the Hersey number follows.
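As a quick sketch (Python; the regime cross-over values below are illustrative placeholders, since the actual transitions depend on the tribopair and lubricant):

```python
def hersey_number(viscosity_pa_s, speed_m_s, load_per_length_n_m):
    """Dimensionless Hersey number = (dynamic viscosity * sliding speed)
    / (normal load per unit length of contact)."""
    return viscosity_pa_s * speed_m_s / load_per_length_n_m

def stribeck_regime(hersey, boundary=1e-8, mixed=1e-6):
    """Coarse regime bands; the thresholds are hypothetical placeholders,
    not values from the review."""
    if hersey < boundary:
        return "boundary lubrication"
    if hersey < mixed:
        return "mixed lubrication"
    return "elastohydrodynamic/hydrodynamic lubrication"

# Example: glycerol-like viscosity (~1 Pa.s), 0.1 m/s, 1e4 N/m of contact.
h = hersey_number(1.0, 0.1, 1e4)
print(h, "->", stribeck_regime(h))
```

Raising the speed or viscosity, or lowering the load, moves the contact to the right along the Stribeck curve, which is the qualitative trend described above.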
In 2015, Li et al. investigated the differences between liquid superlubricity achieved with oil-based and water-based lubricants. For water-based lubricants (low pressure-viscosity coefficient), superlubricity could always be achieved as the pressure increased from 49 MPa to 132 MPa, whereas for oil-based lubricants (high pressure-viscosity coefficient) the lubrication regime changed under the same conditions. Their results showed that the COF increased with the pressure-viscosity coefficient when the contact pressure was high but was unaffected when the contact pressure was low. In 2017, Jean Michel Martin demonstrated a superlubricity regime under oleic-acid lubrication for bearing steel coated with tetrahedral amorphous carbon (ta-C). In 2019, he and his team obtained a COF of 0.004 by sliding a steel ball against a ta-C coating in glycerol under the boundary-lubrication regime.

Some of the potential candidates for achieving liquid superlubricity (most of them surveyed by Jin et al. in 2013) are polymer brushes grafted on mica surfaces in water [57], hydrogels with lipid-based boundary lubrication, polyzwitterionic brushes polymerised on macroinitiator-coated mica surfaces, ceramics in water [61], hyaluronic acid in human joints, polysaccharides in red algae and some plants, carbon quantum dots in ionic liquids (ILs), mixtures of fructose and diols, polyethylene glycol aqueous solution with a boric acid additive, etc. The presence of liquid at the interface of course prevents a complete nullification of friction, but ultra-low friction coefficients have been reported in the above and other lubrication systems over the last two decades. In 2019, Zhai et al. summarised the recent progress on different nanomaterials (0D, 1D, 2D, and 3D) for achieving superlubricity, separating the discussion into solid and liquid superlubricity for each type of nanomaterial. In 2021, Ma et al. achieved superlubricity with glycerol aqueous solutions (water/glycerol weight ratio of 0.2) for steel tribopairs within 40 seconds under a certain range of testing conditions.
They put forward three criteria for observing liquid-mediated superlubricity in their system, which could be implemented and tested for other systems as well: tribochemical polishing to produce mated surface profiles; passivated surfaces to reduce friction from occasional asperity contacts; and proper rheological properties of the fluid for the generation of microscopic elastohydrodynamic films. In the same year, Jiang et al. and Luo et al. classified the liquid-mediated superlubricity mechanisms into three categories (Fig. 11).

Fig. 11. Schematic representation of various liquid-superlubricity mechanisms. Reprinted with permission from Reference. Copyright 2021, Elsevier.

Fig. 12. Friction coefficients of LC amide-1 at higher and lower concentrations, LC amide-2, and LC-ester. Modified with permission from Reference.

3.4.1. Hydrodynamic effect

In 2014, Deng et al. achieved stable superlubricity (COF = 0.003) between silicon nitride and sapphire lubricated by phosphoric acid after a brief running-in period and attributed it to hydrodynamic effects. In 2016, Chen et al. studied the lubrication properties of a glycerol solution and a nanodiamond-glycerol colloidal solution and found that both achieved superlubricity at about 40% relative humidity (RH) at a steel-steel interface, while the wear rate with the colloidal solution was much lower than with the former. The superlubricity was attributed to the hydrodynamic effect and the formation of a hydrogen-bonded layer, which inhibits direct contact between opposing asperities and between asperities and nanodiamonds; the reduced wear was attributed to the rolling effect of the nanodiamonds. In 2018, Wang et al. developed a parametric model to obtain an optimum groove-texture profile that enhances the hydrodynamic effect and achieves superlubricity. In 2019, Schreiber et al. conducted experiments on the frictional properties of self-mated silicon carbide and found that the samples exhibited superlubricity (COF < 0.003) after a run-in phase in both humid air and dry N2; during the experiments the tribo-contacts were fully submerged in isooctane, and the atmosphere around the fluid was controlled by constantly flushing the surrounding chamber with humid air, dry air, or dry N2. They attributed this behaviour to hydrodynamic effects.

3.4.2. Electric double-layer effect

An electric double layer (EDL) is a surface phenomenon that strongly influences the rheological properties of fluids in micro-regions at solid-liquid interfaces. In 2013, Li et al. investigated the superlubricity obtained in the contact region between a Si3N4 ball and a SiO2 plate lubricated by a phosphoric acid solution, by direct observation under an optical microscope combined with a Raman microscope, and attributed the reduction in friction coefficient to the formation of a hydrogen-bond network in the Stern layer (the internal layer of the EDL) at the tribopair interface. In 2021, Zhang et al. measured the lubrication properties of four quaternary phosphonium ILs as a function of potential on HOPG using atomic force microscopy (AFM).
Later in the same year, Silvester et al. achieved superlubricity for (i) a [P6,6,6,14]+-enriched Stern layer at −1.0 V, where the long alkyl chains of the [P6,6,6,14]+ cations aid the formation of robust boundary layers by enhancing lateral cohesive forces, and (ii) a [TFSI]−-enriched Stern layer at +1.0 V, where the bulky fluorinated anions reduce geometric contact corrugations and form a flat, smooth boundary layer.

3.4.3. Hydration effect

The basic idea behind this effect is that the water molecules in hydration shells, by virtue of their large dipole moments, form sturdily held yet highly flexible layers surrounding the ions, giving high resistance to compression and low resistance to shear [79]. In 2011, Zhang et al. achieved a COF of 0.0028 under a pressure of 300 MPa for a silicon nitride ball sliding on silicate glass lubricated with a mixed aqueous solution of glycerol and boric acid; they attributed this to the hydration effect, which makes water molecules act as a lubricant at the contact interface. Another example is the recent experiments of Han et al., who demonstrated macroscale superlubricity using hydrated alkali metal ions (Li+, Na+, and K+) under contact pressures as high as 250 MPa between a Si3N4 ball and a sapphire disk, after a running-in period with acid solutions [68]. Their experiments confirmed that the hydration effect, rather than the hydrodynamic effect, was responsible for the low COF in this scenario.

3.5. Thermolubricity

The temperature dependence of friction in 2D materials and at the nanoscale is very different from that observed on the macroscale. On the macroscale an increase in temperature decreases the coefficient of friction, whereas the opposite is observed in 2D materials: increasing temperature contributes to thermal vibrations and to the formation of thermal barriers that must be overcome for sliding (Section 1.1.3). In terms of the Prandtl-Tomlinson model (Fig. 6), at 0 K the point mass gets stuck in the valleys of the periodic lattice potential, and the spring potential energy must be high enough to pull it over the energy barrier in order to facilitate sliding. When the temperature is increased above a threshold, a transition is seen from the stick-slip regime to a thermal-drift regime in which the point mass deviates from mechanical equilibrium by jumping back and forth, thus overcoming the potential-energy barriers. Krylov et al. deduced that in order to achieve thermolubricity there must be a decrease in velocity, an increase in temperature, and a reduction in the corrugation of the lattice potential.

3.6. Superlubricity via cold ion chains

This mechanism relies on the contact framework of the Frenkel-Kontorova model (Fig. 7) and the Frenkel-Kontorova-Tomlinson model (Fig. 8), in which friction is predicted on the basis of lattice parameters and stiffness. In 2015, Bylinskii et al. demonstrated behaviour analogous to the one-dimensional FK/FKT models with a laser-cooled chain of trapped Yb+ ions driven across the sinusoidal potential of an optical standing wave. Upon further study it emerged that when the potential barrier was much larger than $k_B T$, the low friction coefficient was due to structural superlubricity, whereas when the potential barrier was comparable to $k_B T$, the low friction coefficient was due to thermal lubricity [85]. A small numerical sketch of this barrier-versus-$k_B T$ comparison is given below.
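As a minimal sketch of that comparison (Python; the 0.2 eV barrier and the 1 THz attempt frequency are typical assumed values, not parameters from the cited experiments), an Arrhenius-type hop rate shows how thermal activation takes over once the barrier approaches $k_B T$:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
EV = 1.602176634e-19 # 1 eV in J

def hop_rate(barrier_j, temperature_k, attempt_freq_hz=1e12):
    """Arrhenius-type rate for thermally activated hops over a barrier;
    the 1 THz attempt frequency is a typical assumed value."""
    return attempt_freq_hz * math.exp(-barrier_j / (K_B * temperature_k))

# A hypothetical 0.2 eV barrier: rare hops at 300 K, frequent ones at 900 K.
for t in (300.0, 900.0):
    print(f"T = {t:.0f} K: hop rate ~ {hop_rate(0.2 * EV, t):.2e} Hz")
```

When the hop rate approaches the washboard frequency of the sliding contact, the stick-slip picture gives way to the thermal-drift regime described above.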
3.7. Superlubricity by virtue of telescopic dynamics in nanotubes and confined geometries

In 2020, Vanossi et al. classified telescopic dynamics in nanotubes and confined geometries as a separate mechanism for achieving superlubricity. Falk et al. studied the friction of water at four different graphitic surfaces (water inside armchair CNTs, inside zigzag CNTs, outside armchair CNTs, and on graphene planes) and found that the coefficient of friction shows a strong curvature dependence in CNTs: as the CNT radius decreases, the COF decreases for water inside but increases for water outside. The study focused primarily on the structural differences between water and CNTs, but the authors acknowledged that the reduction of the friction coefficient (superlubric regime) below a threshold diameter cannot be attributed solely to incommensurability at the interface. In 2014, Niguès et al. compared the mechanical responses of multi-walled carbon nanotubes (MWCNTs) and multi-walled boron nitride nanotubes (MW-BNNTs) during telescopic sliding of the layers and fracture. They found the friction force in MW-BNNTs to be orders of magnitude higher than in MWCNTs even though the two have similar crystallographic and structural properties; the key difference lies in the ionic character of BN, which contributes interlayer electrostatic interactions and thereby increases the frictional drag. In 2016, Secchi et al. reinforced this picture by investigating pressure-driven water flow inside CNTs and BNNTs. They found that the surface slippage in CNTs was very large and radius-dependent, whereas BNNTs showed no slippage, and attributed this contrast to the link between the hydrodynamic behaviour of a fluid in confined geometries and the electronic structure of the confining material. This establishes a major link between nanofluidics and condensed-matter physics that demands further exploration.

3.8. Superlubricity by virtue of self-assembly in liquid crystals

In 2018, Hongyu Shi and his research group demonstrated, using DFT calculations, how the nanoscale coefficient of friction is related to the interaction strength in a supramolecular assembly. In 2020 and 2021, Shanchao Tan and his team studied the nanotribological properties of three self-assembled liquid crystals (LC amide-1, LC amide-2, and LC-ester) using atomic force microscopy and density functional theory. They found that the friction force of LC amide-1 at higher concentrations was considerably larger than that of the other self-assembled liquid-crystal systems (Fig. 12). The mechanism behind this kind of superlubricity is that, during sliding, the physical interactions between the assembled molecules and the HOPG substrate are perturbed by the AFM probe but regenerate swiftly, so that a dynamic equilibrium state is achieved.

4. Future challenges

Many obstacles remain on the road to macroscale superlubricity, including dynamic reconstruction of interfaces, edge effects, surface roughness, interfacial adsorption and defects, contaminants, elasticity, and external conditions [94]. Overall, sustained and rigorous effort is demanded of this field, since work to overcome its practical constraints (such as the requirement of ultra-high vacuum and of suitable characterisation techniques) is still ongoing.
The dynamic rearrangement of the contact surfaces, which can repeatedly lock the system into commensurability, also poses a problem, limiting this regime to short timescales. Furthermore, MD simulations and DFT calculations have accelerated research on contacting surfaces, but these methods face great challenges in establishing accurate relationships between simulations (short times, high velocities) and practical experiments (long times, low velocities). Moreover, further research needs to focus on the wear resistance offered by various materials sliding in the superlubricity regime, and consequently on enhancing material durability in this domain through experimental investigations. Many questions regarding the robustness, uniformity, scalability, and programmability of this regime remain open. Although the stability of the regime at the microscale is questionable, efforts are underway to improve it and to explore the criteria required to promote it. An example is the work of Li et al., who reported stable superlubric behaviour at local pressures of up to 700 MPa between graphite and silica under ambient conditions, independent of sliding velocity, rotational angle, and surface roughness. It is expected that in the coming years these achievements will accelerate the identification of materials and heterostructures and extend the application range of these scientific developments through synergistic strategies that combine the merits of the eight mechanisms discussed in this review. One example is the work of Ge et al., who qualitatively explained the mechanism of achieving macroscale superlubricity (upon completion of the wearing-in period) via the synergy of graphene-oxide nanoflakes (GONFs) and ethanediol (EDO) at Si3N4-SiO2 interfaces. The development of a lab-to-industry ecosystem is in progress, with researchers around the globe working to implement the concepts discussed here in industrial applications. A recent example is the work of Li et al., who achieved a superlubricity regime (COF as low as 0.0003) under an ultra-high contact pressure (up to 2.52 GPa) for graphite sliding against graphene nanoflakes (GNFs), with significant implications for lubrication in nanomachines and micromachines. The underlying mechanism of liquid superlubricity is much more complicated than that of its solid counterpart and requires closer study in the future.

5. Summary

In this paper, the progress made over the years towards achieving superlubricity through different mechanisms and concepts has been reviewed. The superlubricity regime holds ample potential for reducing energy consumption in various tribological systems. The past two decades have witnessed accelerating development in this domain, from theoretical computations and simulations to stringent practical experiments. Superlubricity will continue to serve mechanical engineering, physics, materials science, and chemistry as a means of developing the foundations of atomic-scale friction. Undoubtedly, structural superlubricity and hydrogen-mediated superlubricity face practical constraints because of their demanding environmental conditions.
They do find potential applications in hermetically sealed computer hard disks, space applications, microelectromechanical systems (MEMS), etc. Fast water-transport mechanisms in CNTs likewise open new avenues for nanofluidic devices used in ultrafiltration, energy conversion, and desalination. At present, solid superlubricity can be achieved at different scales, but liquid-superlubricity mechanisms have greater potential because of their higher load-carrying capacity. The lubrication properties of liquid crystals are also being explored at the nanoscale, though these advances are at a nascent stage. Considering the development of this field over the past three decades, with COF values reaching $10^{-3}$-$10^{-4}$ for liquid superlubricity and $10^{-5}$ for solid superlubricity, this regime can be regarded as a powerful tool for addressing energy issues, provided that laboratory research is linked with industrial applications and the problems of scalability, load-carrying capacity, stability, and controllability are addressed. To mitigate the energy crisis without lowering our quality of life, we must work towards reducing the energy consumed in tribological systems as much as possible. This would be a small yet significant step on our path to sustainability.

Declaration of Competing Interest

None.

References

[1] K. Holmberg, A. Erdemir, Global impact of friction on energy consumption, economy and environment, FME Trans. 43 (3) (2015) 181-185.
[2] M. Hirano, K. Shinjo, Superlubricity and frictional anisotropy, Wear 168 (1) (1993) 121-125, 10.1016/0043-1648(93)90207-3.
[3] M. Hirano, Superlubricity: a state of vanishing friction, Wear 254 (10) (2003) 932-940, 10.1016/S0043-1648(03)00295-3.
[4] H. Wang, Y. Liu, Superlubricity achieved with two-dimensional nano-additives to liquid lubricants, Friction 8 (6) (2020) 1007-1024, 10.1007/s40544-020-0410-3.
[5] D. Berman, S.A. Deshmukh, S.K.R.S. Sankaranarayanan, A. Erdemir, A.V. Sumant, Macroscale superlubricity enabled by graphene nanoscroll formation, Science 348 (6239) (2015) 1118-1122, 10.1126/science.1262024.
[6] J. Hone, R.W. Carpick, Slippery when dry, Science 348 (6239) (2015) 1087-1088, 10.1126/science.aab3233.
[7] D. Berman, A. Erdemir, A.V. Sumant, Approaches for achieving superlubricity in two-dimensional materials, ACS Nano 12 (3) (2018) 2122-2137, 10.1021/acsnano.7b09046.
[8] D. Berman, A. Erdemir, A.V. Sumant, Reduced wear and friction enabled by graphene layers on sliding steel surfaces in dry nitrogen, Carbon 59 (2013) 167-175, 10.1016/j.carbon.2013.03.006.
[9] C. Lee, et al., Frictional characteristics of atomically thin sheets, Science 328 (5974) (2010) 76-80, 10.1126/science.1184167.
[10] A. Smolyanitsky, Effects of thermal rippling on the frictional properties of free-standing graphene, RSC Adv. 5 (37) (2015) 29179-29184, 10.1039/C5RA01581B.
[11] A. Dayo, W. Alnasrallah, J. Krim, Superconductivity-dependent sliding friction, Phys. Rev. Lett. 80 (8) (1998) 1690-1693, 10.1103/PhysRevLett.80.1690.
[12] J. Wang, et al., Theoretical study of superlow friction between two single-side hydrogenated graphene sheets, Tribol. Lett. 48 (2) (2012) 255-261, 10.1007/s11249-012-0015-8.
[13] J. Li, J. Luo, Superlow friction of graphite induced by the self-assembly of sodium dodecyl sulfate molecular layers, Langmuir 33 (44) (2017) 12596-12601, 10.1021/acs.langmuir.7b03053.
[14] M.V.D. Prasad, B. Bhattacharya, Phononic origins of friction in carbon nanotube oscillators, Nano Lett. 17 (4) (2017) 2131-2137, 10.1021/acs.nanolett.6b04310.
[15] Y. Dong, Effects of substrate roughness and electron-phonon coupling on thickness-dependent friction of graphene, J. Phys. D: Appl. Phys. 47 (5) (2014) 055305, 10.1088/0022-3727/47/5/055305.
[16] X. Chen, J. Li, Superlubricity of carbon nanostructures, Carbon 158 (2020) 1-23, 10.1016/j.carbon.2019.11.077.
[17] D. Berman, S.A. Deshmukh, S.K.R.S. Sankaranarayanan, A. Erdemir, A.V. Sumant, Extraordinary macroscale wear resistance of one atom thick graphene layer, Adv. Funct. Mater. 24 (42) (2014) 6640-6646, 10.1002/adfm.201401755.
[18] X. Feng, S. Kwon, J.Y. Park, M. Salmeron, Superlubric sliding of graphene nanoflakes on graphene, ACS Nano 7 (2) (2013) 1718-1724, 10.1021/nn305722d.
[19] D.-H. Cho, et al., Effect of surface morphology on friction of graphene on various substrates, Nanoscale 5 (7) (2013) 3063-3069, 10.1039/C3NR34181J.
[20] L. Liu, et al., Recent advances in friction and lubrication of graphene and other 2D materials: mechanisms and applications, Friction 7 (3) (2019) 199-216, 10.1007/s40544-019-0268-4.
[21] A. Vanossi, C. Bechinger, M. Urbakh, Structural lubricity in soft and hard matter systems, Nat. Commun. 11 (1) (2020) 4657, 10.1038/s41467-020-18429-1.
[22] P. Lu, Y.C. Loke, X. Tang, S.S. Kushvaha, S.J. O'Shea, A note on the two-spring Tomlinson model, Tribol. Lett. 43 (1) (2011) 73-76, 10.1007/s11249-011-9787-5.
[23] U.D. Schwarz, H. Hölscher, Exploring and explaining friction with the Prandtl-Tomlinson model, ACS Nano 10 (1) (2016) 38-41, 10.1021/acsnano.5b08251.
[24] A. Socoliuc, R. Bennewitz, E. Gnecco, E. Meyer, Transition from stick-slip to continuous sliding in atomic friction: entering a new regime of ultralow friction, Phys. Rev. Lett. 92 (13) (2004) 134301, 10.1103/PhysRevLett.92.134301.
[25] V.L. Popov, The Prandtl-Tomlinson model for dry friction, in: V.L. Popov (Ed.), Contact Mechanics and Friction: Physical Principles and Applications, Springer, Berlin, Heidelberg, 2017, pp. 173-192, 10.1007/978-3-662-53081-8_11.
[26] A. Vanossi, N. Manini, M. Urbakh, S. Zapperi, E. Tosatti, Colloquium: Modeling friction: from nanoscale to mesoscale, Rev. Mod. Phys. 85 (2) (2013) 529-552, 10.1103/RevModPhys.85.529.
[27] S.Y. Krylov, J.W.M. Frenken, The physics of atomic-scale friction: basic considerations and open questions, Phys. Status Solidi B 251 (4) (2014) 711-736, 10.1002/pssb.201350154.
[28] M. Hirano, Atomistics of friction, Surf. Sci. Rep. 60 (8) (2006) 159-201, 10.1016/j.surfrep.2005.10.003.
[29] O.M. Braun, Y.S. Kivshar, Nonlinear dynamics of the Frenkel-Kontorova model, Phys. Rep. 306 (1) (1998) 1-108, 10.1016/S0370-1573(98)00029-5.
[30] N. Forcadel, C. Imbert, R. Monneau, Homogenization of fully overdamped Frenkel-Kontorova models, J. Differ. Equ. 246 (3) (2009) 1057-1097, 10.1016/j.jde.2008.06.034.
[31] E. Gao, B. Wu, Y. Wang, X. Jia, W. Ouyang, Z. Liu, Computational prediction of superlubric layered heterojunctions, ACS Appl. Mater. Interfaces 13 (28) (2021) 33600-33608, 10.1021/acsami.1c04870.
[32] J. Luo, M. Liu, L. Ma, Origin of friction and the new frictionless technology - superlubricity: advancements and future outlook, Nano Energy 86 (2021) 106092, 10.1016/j.nanoen.2021.106092.
[33] F. Alhama, F. Marín, J.A. Moreno, An efficient and reliable model to simulate microscopic mechanical friction in the Frenkel-Kontorova-Tomlinson model, Comput. Phys. Commun. 182 (11) (2011) 2314-2325, 10.1016/j.cpc.2011.06.006.
[34] A. Kumar, Advancements in emerging superlubricity: a review of the atomistic models, simulation techniques and their applications to explore the state of ultra-low friction, Mater. Today Proc. 42 (2021) 884-892, 10.1016/j.matpr.2020.11.738.
[35] J.M. Martin, A. Erdemir, Superlubricity: friction's vanishing act, Phys. Today 71 (4) (2018) 40-46, 10.1063/PT.3.3897.
[36] L.-F. Wang, T.-B. Ma, Y.-Z. Hu, Q. Zheng, H. Wang, J. Luo, Superlubricity of two-dimensional fluorographene/MoS2 heterostructure: a first-principles study, Nanotechnology 25 (38) (2014) 385701, 10.1088/0957-4484/25/38/385701.
[37] H. Li, et al., Superlubricity between MoS2 monolayers, Adv. Mater. 29 (27) (2017) 1701474, 10.1002/adma.201701474.
[38] N. Marom, et al., Stacking and registry effects in layered materials: the case of hexagonal boron nitride, Phys. Rev. Lett. 105 (4) (2010) 046801, 10.1103/PhysRevLett.105.046801.
[39] O. Hod, Quantifying the stacking registry matching in layered materials, Isr. J. Chem. 50 (4) (2010) 506-514, 10.1002/ijch.201000052.
[40] O. Hod, Interlayer commensurability and superlubricity in rigid layered materials, Phys. Rev. B 86 (7) (2012) 075444, 10.1103/PhysRevB.86.075444.
[41] I. Leven, D. Krepel, O. Shemesh, O. Hod, Robust superlubricity in graphene/h-BN heterojunctions, J. Phys. Chem. Lett. 4 (1) (2013) 115-120, 10.1021/jz301758c.
[42] K. Wang, W. Ouyang, W. Cao, M. Ma, Q. Zheng, Robust superlubricity by strain engineering, Nanoscale 11 (5) (2019) 2186-2193, 10.1039/C8NR07963C.
[43] K. Wang, C. Qu, J. Wang, W. Ouyang, M. Ma, Q. Zheng, Strain engineering modulates graphene interlayer friction by Moiré pattern evolution, ACS Appl. Mater. Interfaces 11 (39) (2019) 36169-36176, 10.1021/acsami.9b09259.
[44] P. Li, et al., Toward robust macroscale superlubricity on engineering steel substrate, Adv. Mater. 32 (36) (2020) 2002039, 10.1002/adma.202002039.
[45] D. Dietzel, J. Brndiar, I. Štich, A. Schirmeisen, Limitations of structural superlubricity: chemical bonds versus contact size, ACS Nano 11 (8) (2017) 7642-7647, 10.1021/acsnano.7b02240.
[46] A. Erdemir, The role of hydrogen in tribological properties of diamond-like carbon films, Surf. Coat. Technol. 146-147 (2001) 292-297, 10.1016/S0257-8972(01)01417-7.
[47] X. Chen, T. Kato, M. Nosaka, Origin of superlubricity in a-C:H:Si films: a relation to film bonding structure and environmental molecular characteristic, ACS Appl. Mater. Interfaces 6 (16) (2014) 13389-13405, 10.1021/am502416w.
[48] A. Erdemir, O. Eryilmaz, Achieving superlubricity in DLC films by controlling bulk, surface, and tribochemistry, Friction 2 (2) (2014) 140-155, 10.1007/s40544-014-0055-1.
[49] A. Erdemir, Design criteria for superlubricity in carbon films and related microstructures, Tribol. Int. 37 (7) (2004) 577-583, 10.1016/j.triboint.2003.12.007.
[50] Y. Wang, et al., Tight-binding quantum chemical molecular dynamics study on the friction and wear processes of diamond-like carbon coatings: effect of tensile stress, ACS Appl. Mater. Interfaces 9 (39) (2017) 34396-34404, 10.1021/acsami.7b07551.
[51] Z. Gong, C. Bai, L. Qiang, K. Gao, J. Zhang, B. Zhang, Onion-like carbon films endow macro-scale superlubricity, Diam. Relat. Mater. 87 (2018) 172-176, 10.1016/j.diamond.2018.06.004.
[52] Z. Cao, W. Zhao, Q. Liu, A. Liang, J. Zhang, Super-elasticity and ultralow friction of hydrogenated fullerene-like carbon films: associated with the size of graphene sheets, Adv. Mater. Interfaces 5 (6) (2018) 1701303, 10.1002/admi.201701303.
[53] J. Li, C. Zhang, M. Deng, J. Luo, Investigation of the difference in liquid superlubricity between water- and oil-based lubricants, RSC Adv. 5 (78) (2015) 63827-63833, 10.1039/C5RA10834A.
[54] M.I. De Barros Bouchet, et al., Diamond-like carbon coating under oleic acid lubrication: evidence for graphene oxide formation in superlow friction, Sci. Rep. 7 (1) (2017) 46394, 10.1038/srep46394.
[55] Y. Long, M.-I.D.B. Bouchet, T. Lubrecht, T. Onodera, J.M. Martin, Superlubricity of glycerol by self-sustained chemical polishing, Sci. Rep. 9 (1) (2019) 6286, 10.1038/s41598-019-42730-9.
[56] J. Li, J. Luo, Advancements in superlubricity, Sci. China Technol. Sci. 56 (12) (2013) 2877-2887, 10.1007/s11431-013-5387-y.
[57] J. Klein, et al., Lubrication forces between surfaces bearing polymer brushes, Macromolecules 26 (21) (1993) 5552-5560, 10.1021/ma00073a004.
[58] U. Raviv, S. Giasson, N. Kampf, J.-F. Gohy, R. Jérôme, J. Klein, Lubrication by charged polymers, Nature 425 (6954) (2003) 163-165, 10.1038/nature01970.
[59] W. Lin, et al., Cartilage-inspired, lipid-based boundary-lubricated hydrogels, Science 370 (6514) (2020) 335-338, 10.1126/science.aay8276.
[60] M. Chen, W.H. Briscoe, S.P. Armes, J. Klein, Lubrication at physiological pressures by polyzwitterionic brushes, Science 323 (5922) (2009) 1698-1701, 10.1126/science.1169399.
[61] M. Chen, K. Kato, K. Adachi, Friction and wear of self-mated SiC and Si3N4 sliding in water, Wear 250 (1) (2001) 246-255, 10.1016/S0043-1648(01)00648-2.
[62] F. Zhou, K. Adachi, K. Kato, Friction and wear property of a-CNx coatings sliding against ceramic and steel balls in water, Diam. Relat. Mater. 14 (10) (2005) 1711-1720, 10.1016/j.diamond.2005.06.025.
[63] Y.C. Fung, Biomechanics: Mechanical Properties of Living Tissues, 2nd ed., Springer-Verlag, New York, 1993, 10.1007/978-1-4757-2257-4.
[64] S. (Malis) Arad, et al., Superior biolubricant from a species of red microalga, Langmuir 22 (17) (2006) 7313-7317, 10.1021/la060600x.
[65] W. Ma, Z. Gong, K. Gao, L. Qiang, J. Zhang, S. Yu, Superlubricity achieved by carbon quantum dots in ionic liquid, Mater. Lett. 195 (2017) 220-223, 10.1016/j.matlet.2017.02.135.
[66] Q. Ma, S. Wang, G. Dong, Macroscale liquid superlubricity achieved with mixtures of fructose and diols, Wear 484-485 (2021) 204037, 10.1016/j.wear.2021.204037.
[67] X. Ge, J. Li, C. Zhang, J. Luo, Liquid superlubricity of polyethylene glycol aqueous solution achieved with boric acid additive, Langmuir 34 (12) (2018) 3578-3587, 10.1021/acs.langmuir.7b04113.
[68] W. Zhai, K. Zhou, Nanomaterials in superlubricity, Adv. Funct. Mater. 29 (28) (2019) 1806395, 10.1002/adfm.201806395.
[69] Q. Ma, T. He, A.M. Khan, Q. Wang, Y.-W. Chung, Achieving macroscale liquid superlubricity using glycerol aqueous solutions, Tribol. Int. 160 (2021) 107006, 10.1016/j.triboint.2021.107006.
[70] Y. Jiang, et al., Temporary or permanent liquid superlubricity failure depending on shear-induced evolution of surface topography, Tribol. Int. 161 (2021) 107076, 10.1016/j.triboint.2021.107076.
[71] M. Deng, C. Zhang, J. Li, L. Ma, J. Luo, Hydrodynamic effect on the superlubricity of phosphoric acid between ceramic and sapphire, Friction 2 (2) (2014) 173-181, 10.1007/s40544-014-0053-3.
[72] Z. Chen, Y. Liu, J. Luo, Superlubricity of nanodiamonds glycerol colloidal solution between steel surfaces, Colloids Surf. A Physicochem. Eng. Asp. 489 (2016) 400-406, 10.1016/j.colsurfa.2015.10.062.
[73] W. Wang, Y. He, J. Zhao, J. Mao, Y. Hu, J. Luo, Optimization of groove texture profile to improve hydrodynamic lubrication performance: theory and experiments, Friction 8 (1) (2020) 83-94, 10.1007/s40544-018-0247-1.
[74] P.J. Schreiber, J. Schneider, Liquid superlubricity obtained for self-mated silicon carbide in nonaqueous low-viscosity fluid, Tribol. Int. 134 (2019) 7-14, 10.1016/j.triboint.2019.01.031.
[75] Q. Zuo, P. Huang, F. Su, Theory analysis of asymmetrical electric double layer effects on thin film lubrication, Tribol. Int. 49 (2012) 67-74, 10.1016/j.triboint.2011.12.021.
[76] J. Li, L. Ma, S. Zhang, C. Zhang, Y. Liu, J. Luo, Investigations on the mechanism of superlubricity achieved with phosphoric acid solution by direct observation, J. Appl. Phys. 114 (11) (2013) 114901, 10.1063/1.4821063.
[77] Y. Zhang, M.W. Rutland, J. Luo, R. Atkin, H. Li, Potential-dependent superlubricity of ionic liquids on a graphite surface, J. Phys. Chem. C 125 (7) (2021) 3940-3947, 10.1021/acs.jpcc.0c10804.
[78] D.S. Silvester, R. Jamil, S. Doblinger, Y. Zhang, R. Atkin, H. Li, Electrical double layer structure in ionic liquids and its importance for supercapacitor, battery, sensing, and lubrication applications, J. Phys. Chem. C 125 (25) (2021) 13707-13720, 10.1021/acs.jpcc.1c03253.
[79] A. Gaisinskaya, et al., Hydration lubrication: exploring a new paradigm, Faraday Discuss. 156 (2012) 217-233, 10.1039/C2FD00127F.
[80] A. Gaisinskaya-Kipnis, L. Ma, N. Kampf, J. Klein, Frictional dissipation pathways mediated by hydrated alkali metal ions, Langmuir 32 (19) (2016) 4755-4764, 10.1021/acs.langmuir.6b00707.
[81] C.-H. Zhang, Z.-Z. Ma, J.-B. Luo, X.-C. Lu, S.-Z. Wen, Superlubricity of a mixed aqueous solution, Chin. Phys. Lett. 28 (5) (2011) 056201, 10.1088/0256-307X/28/5/056201.
[82] T. Han, C. Zhang, J. Luo, Macroscale superlubricity enabled by hydrated alkali metal ions, Langmuir 34 (38) (2018) 11281-11291, 10.1021/acs.langmuir.8b01722.
[83] S.Yu. Krylov, K.B. Jinesh, H. Valk, M. Dienwiebel, J.W.M. Frenken, Thermally induced suppression of friction at the atomic scale, Phys. Rev. E 71 (6) (2005) 065101, 10.1103/PhysRevE.71.065101.
[84] K.B. Jinesh, S.Yu. Krylov, H. Valk, M. Dienwiebel, J.W.M. Frenken, Thermolubricity in atomic-scale friction, Phys. Rev. B 78 (15) (2008) 155440, 10.1103/PhysRevB.78.155440.
[85] M.Z. Baykara, M.R. Vazirisereshk, A. Martini, Emerging superlubricity: a review of the state of the art and perspectives on future research, Appl. Phys. Rev. 5 (4) (2018) 041102, 10.1063/1.5051445.
[86] Y. Dong, A. Vadakkepatt, A. Martini, Analytical models for atomic friction, Tribol. Lett. 44 (3) (2011) 367, 10.1007/s11249-011-9850-2.
[87] A. Bylinskii, D. Gangloff, V. Vuletić, Tuning friction atom-by-atom in an ion-crystal simulator, Science 348 (6239) (2015) 1115-1118, 10.1126/science.1261422.
[88] D. Gangloff, A. Bylinskii, I. Counts, W. Jhe, V. Vuletić, Velocity tuning of friction with two trapped atoms, Nat. Phys. 11 (11) (2015) 915-919, 10.1038/nphys3459.
[89] K. Falk, F. Sedlmeier, L. Joly, R.R. Netz, L. Bocquet, Molecular origin of fast water transport in carbon nanotube membranes: superlubricity versus curvature dependent friction, Nano Lett. 10 (10) (2010) 4067-4073, 10.1021/nl1021046.
[90] A. Niguès, A. Siria, P. Vincent, P. Poncharal, L. Bocquet, Ultrahigh interlayer friction in multiwalled boron nitride nanotubes, Nat. Mater. 13 (7) (2014) 688-693, 10.1038/nmat3985.
[91] E. Secchi, S. Marbach, A. Niguès, D. Stein, A. Siria, L. Bocquet, Massive radius-dependent flow slippage in carbon nanotubes, Nature 537 (7619) (2016) 210-213, 10.1038/nature19315.
[92] H. Shi, et al., Nanotribological study of supramolecular template networks induced by hydrogen bonds and van der Waals forces, ACS Nano 12 (8) (2018) 8781-8790, 10.1021/acsnano.8b05045.
[93] S. Tan, et al., Insight into the superlubricity and self-assembly of liquid crystals, Front. Chem. 9 (2021) 668794, 10.3389/fchem.2021.668794.
[94] O. Hod, E. Meyer, Q. Zheng, M. Urbakh, Structural superlubricity and ultralow friction across the length scales, Nature 563 (7732) (2018) 485-492, 10.1038/s41586-018-0704-z.
[95] J. Yuan, R. Yang, G.Y. Zhang, Structural superlubricity in 2D van der Waals heterojunctions, Nanotechnology (2021), 10.1088/1361-6528/ac1197.
[96] A.E. Filippov, M. Dienwiebel, J.W.M. Frenken, J. Klafter, M. Urbakh, Torque and twist against superlubricity, Phys. Rev. Lett. 100 (4) (2008) 046102, 10.1103/PhysRevLett.100.046102.
[97] J. Li, T. Gao, J. Luo, Superlubricity of graphite induced by multiple transferred graphene nanoflakes, Adv. Sci. 5 (3) (2018) 1700616, 10.1002/advs.201700616.
[98] X. Ge, J. Li, R. Luo, C. Zhang, J. Luo, Macroscale superlubricity enabled by the synergy effect of graphene-oxide nanoflakes and ethanediol, ACS Appl. Mater. Interfaces 10 (47) (2018) 40863-40870, 10.1021/acsami.8b14791.
[99] J. Li, J. Li, J. Luo, Superlubricity of graphite sliding against graphene nanoflake under ultrahigh contact pressure, Adv. Sci. 5 (11) (2018) 1800810, 10.1002/advs.201800810.
[100] J. Xu, J. Li, New achievements in superlubricity from international workshop on superlubricity: fundamental and applications, Friction 3 (4) (2015) 344-351, 10.1007/s40544-015-0100-8.
[101] J. Luo, X. Zhou, Superlubricitive engineering - future industry nearly getting rid of wear and frictional energy consumption, Friction 8 (4) (2020) 643-665, 10.1007/s40544-020-0393-0.
250
Second Virial Coefficient of the Equation of State - an overview | ScienceDirect Topics
===============

In subject area: Chemistry

The second coefficient of the virial equation of state is defined as a function of the interaction pair potential between particles in solution, indicating the nature of the interactions; it is positive for repulsive interactions and negative for attractive ones. (AI-generated definition based on Current Opinion in Colloid & Interface Science, 2004.)

Chapters and Articles

Chapter: Molecular Properties of Cellulose and Cellulose Derivatives (2005, Cellulose and Cellulose Derivatives, Kenji Kamide)

CA (DS 0.49). The second virial coefficient $A_2$, determined for CA (DS 0.49) in DMAc at 25 °C by LS, is collected in the fifth column of Table 3.3.1. There is too much scatter around $1.1\times10^{-3}\ \mathrm{cm^3\,mol\,g^{-2}}$ to allow a definite conclusion as to its molecular-weight dependence. The $A_2$ values of the CA (DS 0.49) fraction in the three solvents, shown in Table 3.5.1, are of the order of $10^{-4}\ \mathrm{cm^3\,mol\,g^{-2}}$, thus showing no systematic variation with the solvent.

Chapter: Osmotic Pressure (2016, Colloid and Interface Chemistry for Water Quality Control, Qing Chang)

4.2 Osmotic pressure of macromolecule solutions

Macromolecule solutions deviate from ideal solutions. These deviations can be corrected by a series of virial terms,

(4.5)  $\pi = RT\,(A_1 c + A_2 c^2 + A_3 c^3 + \cdots),$

where $A_1, A_2, A_3, \dots$ are the virial coefficients. The first virial coefficient is

(4.6)  $A_1 = \dfrac{1}{M}.$

For dilute macromolecule solutions, Eq. (4.5) can be simplified to

(4.7)  $\pi = RT\,(A_1 c + A_2 c^2) = cRT\,(A_1 + A_2 c),$

or

(4.8)  $\dfrac{\pi}{c} = RT\left(\dfrac{1}{M} + A_2 c\right).$

This form suggests that a plot of $\pi/c$ versus $c$ should be a straight line, from whose intercept and slope we can obtain the molecular weight and the second virial coefficient. The second virial coefficient $A_2$ characterizes the interaction between macromolecule and solvent as follows:

- $A_2 > 0$: the attraction between chain segments is weaker than the attraction between chain segment and solvent; the solvent is a good one.
- $A_2 = 0$: the attraction between chain segments equals the attraction between chain segment and solvent; the solution is ideal.
- $A_2 < 0$: the attraction between chain segments is stronger than the attraction between chain segment and solvent; the solvent is a poor one.

$A_2$ changes with temperature; the temperature at which $A_2$ becomes zero is referred to as the θ-temperature. $A_2$ also changes with solvent; the solvent in which $A_2$ becomes zero is referred to as the θ-solvent. A solution at the θ-temperature or in a θ-solvent behaves as an ideal solution.
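Example 4.1 below carries out the intercept/slope extraction of Eq. (4.8) by hand; numerically it is a one-line least-squares fit. A minimal sketch, assuming NumPy and using the data of Example 4.1 below:

```python
import numpy as np

# Fit pi/c = RT*(1/M + A2*c): the intercept gives M, the slope gives A2.
# Data from Example 4.1 (polyisobutylene in benzene, 25 C).

R, T = 8.3145, 298.15
c = np.array([5.0, 10.0, 15.0, 20.0])           # concentrations, kg/m^3
pi = np.array([49.54, 101.04, 155.0, 210.9])    # osmotic pressures, Pa

slope, intercept = np.polyfit(c, pi / c, 1)     # straight line pi/c vs c
M = R * T / intercept                           # kg/mol
A2 = slope / (R * T)                            # m^3 mol / kg^2
print(f"M  ~ {M:.1f} kg/mol")                   # about 256 kg/mol
print(f"A2 ~ {A2:.2e} m^3 mol/kg^2")
```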
Example 4.1 Osmotic Pressure of Macromolecule Solutions. Polyisobutylene was dissolved in benzene, forming a series of solutions whose osmotic pressures were measured at 25 °C:

c (g dm⁻³):  5.0    10.0    15.0    20.0
π (Pa):      49.54  101.04  155.0   210.9

Calculate the molecular weight of this polyisobutylene.

Solution. For dilute macromolecule solutions, Eq. (4.8) gives $\pi/c = RT(1/M + A_2 c)$. First, transform the data:

c (kg m⁻³):         5.0    10.0    15.0    20.0
π/c (Pa m³ kg⁻¹):   9.91   10.10   10.33   10.55

Then plot the data in the form of Eq. (4.8): a plot of $\pi/c$ against $c$ is a straight line. [Figure: plot of $\pi/c$ versus $c$.] From this figure, the intercept is $(\pi/c)_{c\to 0} = 9.685\ \mathrm{Pa\,m^3\,kg^{-1}}$, so

$M = \dfrac{RT}{(\pi/c)_{c\to 0}} = \dfrac{8.3145 \times 298.15}{9.685} = 255.9\ \mathrm{kg\,mol^{-1}}.$

Chapter: Colligative Properties and Virial Coefficients of Polymer Solutions (2000, Physical Chemistry of Polymer Solutions, Kenji Kamide, Toshiaki Dobashi)

≪Problem 5-23≫ Second virial coefficient (XIII): rod-like molecule. Prove that the second virial coefficient for a solution of rod-like molecules with length $l$ and radius $r$ is

(5.23.1)  $A_2 = \dfrac{\pi N_A r l^2}{4M^2}.$

Answer. The second and third virial coefficients are calculated from

(5.11.1)  $A_2 = -\dfrac{N_A}{M^2}\,b_2 = -\dfrac{N_A}{2M^2V}\iint g_2(1,2)\,d(1)\,d(2),$

(5.11.2)  $A_3 = -\dfrac{2N_A^2}{M^3}\,(b_3 - 2b_2^2) = -\dfrac{N_A}{2M^2V}\iint g_3(\{3\})\,d\{3\} + 4MA_2^2.$

Suppose one end of the rod-like molecule is located at $(x_1, y_1, z_1)$ in Cartesian coordinates and the direction of the rod is $(\theta_1, \varphi_1)$ in polar coordinates, as shown in Fig. 5-23(a). [Fig. 5-23: (a) a rod-like molecule on rectangular and polar coordinates; (b) overlapping of two rods of diameter $d = 2r$ and length $l$, with $l$ much larger than $d$.] When two molecules completely overlap,

(5.23.2)  $F_2(1,2) = 0.$

From the normalization condition

(5.9.3)  $\lim_{V\to\infty} \dfrac{1}{V^N}\int\cdots\int F_N(1,2,\dots,N)\,d\{N\} = 1,$

we have

(5.23.3)  $\dfrac{1}{V}\int F_1(1)\,d(1) = 1.$

Similarly,

(5.23.4)  $F_1(1)\int_0^{2\pi}\!\!\int_0^{\pi} \sin\theta_1\,d\theta_1\,d\varphi_1 = 4\pi F_1(1) = 1,$

or

(5.23.5)  $F_1(1) = \dfrac{1}{4\pi} = F_1(2).$

Thus, if two molecules overlap each other,

(5.23.6)  $g_2(1,2) = 0 - \left(\dfrac{1}{4\pi}\right)^2 = -\dfrac{1}{16\pi^2},$

and if there is no overlap, $F_2(1,2) = 1/16\pi^2$ and

(5.23.7)  $g_2(1,2) = \dfrac{1}{16\pi^2} - \dfrac{1}{16\pi^2} = 0.$

The overlap volume of two molecules is $2rl^2\sin\theta_2$, illustrated as the rectangle enclosed by dotted lines in Fig. 5-23(b); here $\theta_2$ and $\varphi_2$ are polar coordinates constructed so that one of the principal axes is the direction of the first molecule. The integral in Eq. (5.11.1) over the overlap volume is carried out as

(5.23.1)  $A_2 = \left(-\dfrac{N_A}{2M^2V}\right)\left(-\dfrac{1}{16\pi^2}\right)\int dx\,dy\,dz \int_0^{2\pi}\!\!\int_0^{\pi}\sin\theta_1\,d\theta_1\,d\varphi_1 \int_0^{2\pi}\!\!\int_0^{\pi} (2rl^2\sin\theta_2)\sin\theta_2\,d\theta_2\,d\varphi_2 = \left(-\dfrac{N_A}{2M^2V}\right)\left(-\dfrac{1}{16\pi^2}\right)V\cdot 4\pi\cdot(2\pi^2 r l^2) = \dfrac{\pi N_A r l^2}{4M^2}.$

The volume of the rod molecule is

(5.23.8)  $V_1 = \pi r^2 l.$

Then $A_2$ can be rewritten as

(5.23.9)  $A_2 = \dfrac{N_A V_1}{4M^2}\left(\dfrac{l}{r}\right) = \dfrac{N_A}{4}\cdot\dfrac{1}{\pi r^3}\cdot\left(\dfrac{V_1}{M}\right)^2.$

Thus $A_2$ does not depend on $M$ in this case either.
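The two expressions (5.23.1) and (5.23.9) are easy to sanity-check numerically. A minimal sketch with illustrative, roughly tobacco-mosaic-virus-like rod dimensions (hypothetical numbers chosen for illustration, not data from the text):

```python
import numpy as np

NA = 6.022e23          # Avogadro's number, 1/mol

# Hypothetical rod-like molecule (TMV-like order of magnitude):
l = 3.0e-5             # length, cm (300 nm)
r = 9.0e-7             # radius, cm (9 nm)
M = 3.9e7              # molar mass, g/mol

A2_a = np.pi * NA * r * l**2 / (4 * M**2)             # Eq. (5.23.1)
V1 = np.pi * r**2 * l                                  # rod volume, Eq. (5.23.8)
A2_b = (NA / 4) * (1 / (np.pi * r**3)) * (V1 / M)**2   # Eq. (5.23.9)
print(A2_a, A2_b)      # identical values, ~2.5e-7 cm^3 mol g^-2
```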
Chapter: Fugacity of a Pure Component (2021, The Thermodynamics of Phase and Reaction Equilibria, Second Edition, İsmaı̇l Tosun)

Problems related to Section 6.5

6.16 The second virial coefficient of carbon dioxide is expressed in the form

$B = 137.6 - 87.7\exp\left(\dfrac{325.7}{T}\right),$

where $B$ is in cm³/mol and $T$ is in K. (a) Calculate the change in molar Gibbs energy when CO₂ at 320 K expands isothermally from 10 bar to 2 bar. (b) Calculate the enthalpy departure function for CO₂ at 10 bar and 320 K. (Answer: (a) −4198.2 J/mol, (b) −352.12 J/mol)

6.17 Lee and Edmister (1971) proposed the following empirical equation for predicting the fugacity of pure liquid hydrocarbons:

$\ln\left(\dfrac{f}{P}\right) = A_1 + \dfrac{A_2}{T_r} + A_3\ln T_r + A_4 T_r^2 + A_5 T_r^6 + \left(\dfrac{A_6}{T_r} + A_7\ln T_r + A_8 T_r^2\right)P_r + A_9 T_r^3 P_r^2 - \ln P_r + \omega\left[(1 - T_r)\left(A_{10} + \dfrac{A_{11}}{T_r}\right) + \dfrac{A_{12}P_r}{T_r} + A_{13} T_r^3 P_r^2\right],$

where $A_1 = 6.32873$, $A_2 = -8.45167$, $A_3 = -6.90287$, $A_4 = 1.87895$, $A_5 = -0.33448$, $A_6 = -0.018706$, $A_7 = -0.286517$, $A_8 = 0.18940$, $A_9 = -0.002584$, $A_{10} = 8.7015$, $A_{11} = -11.201$, $A_{12} = -0.05044$, $A_{13} = 0.002255$. Estimate the specific enthalpy of liquid n-pentane at 18.5 bar and 433 K relative to that of the ideal gas at the same temperature. (Answer: −264.5 J/g)

6.18 Apply Eq. (6.5-5) to the vapor and liquid phases at vapor-liquid equilibrium at a given temperature to obtain

(1)  $d\ln f_i^V(T, P_i^{vap}) = -\dfrac{\tilde H_i^V - \tilde H_i^{IG}}{RT^2}\,dT + \dfrac{\tilde V_i^V}{RT}\,dP_i^{vap},$

(2)  $d\ln f_i^L(T, P_i^{vap}) = -\dfrac{\tilde H_i^L - \tilde H_i^{IG}}{RT^2}\,dT + \dfrac{\tilde V_i^L}{RT}\,dP_i^{vap}.$

Show that subtracting Eq. (2) from Eq. (1) and using

(3)  $P_i^{vap}\tilde V = ZRT$

lead to

(4)  $\dfrac{d\ln P_i^{vap}}{dT} = \dfrac{\Delta\tilde H_i^{vap}}{RT^2}\,\dfrac{1}{Z_i^V - Z_i^L}.$

Eq. (4) is the so-called "rigorous Clausius-Clapeyron equation." Assuming $Z_i^V - Z_i^L \simeq Z_i^V \simeq 1$, Eq. (4) reduces to Eq. (5.4-10).

Chapter: Calculation of Changes in Internal Energy, Enthalpy, and Entropy (2013, The Thermodynamics of Phase and Reaction Equilibria, Ismail Tosun)

Problems Related to Section 3.1

3.1 Calculate the second virial coefficient for n-butane at 375 K. The experimental value is −426.5 cm³/mol (Gupta and Eubank, 1997). Also estimate the density of n-butane at 375 K and 5 bar. (Answer: −430.8 cm³/mol, 0.01 g/cm³)

3.2 Various correlations are available in the literature for the second virial coefficient of nonpolar fluids besides Eqn (3.1-10). For example, Meng et al. (2004) modified the well-known Tsonopoulos (1979) correlation as

(1)  $B = \dfrac{RT_c}{P_c}\left(f^{(0)} + \omega f^{(1)}\right),$

where

(2)  $f^{(0)} = 0.13356 - \dfrac{0.30252}{T_r} - \dfrac{0.15668}{T_r^2} - \dfrac{0.00724}{T_r^3} - \dfrac{0.00022}{T_r^8},$

(3)  $f^{(1)} = 0.17404 - \dfrac{0.15581}{T_r} + \dfrac{0.38183}{T_r^2} - \dfrac{0.44044}{T_r^3} - \dfrac{0.00541}{T_r^8}.$

Calculate the second virial coefficients for methane, ethane, and carbon dioxide at 300 K using Eqn (1). (Answer: −42.098, −181.483, and −121.869 cm³/mol)

3.3 Saturated isobutane vapor at 320 K is contained in a piston-cylinder assembly. It is expanded to 3 bar by a reversible isothermal process. At 320 K, the vapor pressure of isobutane is 6.243 bar. Using the virial equation of state, Eqn (3.1-9): (a) estimate the molar volume of isobutane at the initial and final states; (b) determine the work done during the expansion process.
(Answer: (a) 3704 cm³/mol and 8310.7 cm³/mol; (b) −1949.7 J/mol)

3.4 If a gas obeys the virial equation of state, Eqn (3.1-9), use Eqn (1) of Problem 2.1 to show that the Joule-Thomson coefficient is given by

(1)  $\mu = \dfrac{T\dfrac{dB}{dT} - B}{\tilde C_P^* - T\dfrac{d^2B}{dT^2}\,P}.$

Substitute Eqn (3.1-10) into Eqn (1) to obtain

$\mu = \dfrac{T_c}{P_c}\;\dfrac{\left(-0.083 + \dfrac{1.0972}{T_r^{1.6}}\right) + \omega\left(-0.139 + \dfrac{0.8944}{T_r^{4.2}}\right)}{\dfrac{\tilde C_P^*}{R} + P_r\left(\dfrac{1.75552}{T_r^{2.6}} + \omega\,\dfrac{3.75648}{T_r^{5.2}}\right)}.$

3.5 The analytical calculation of the roots of a cubic equation

(1)  $x^3 + px^2 + qx + r = 0$

is as follows. Let the terms $M$ and $N$ be defined as

(2)  $M = \dfrac{3q - p^2}{9}$ and $N = \dfrac{9pq - 27r - 2p^3}{54}.$

If $p$, $q$, and $r$ are real and $\Delta = M^3 + N^2$ is the discriminant, then:

- $\Delta > 0$: one root is real and two are complex conjugates;
- $\Delta = 0$: all roots are real and at least two are equal;
- $\Delta < 0$: all roots are real and unequal.

Case (i): solutions for $\Delta > 0$. The roots are given by

(3)  $x_1 = S + T - \dfrac{p}{3},$
(4)  $x_2 = -\dfrac{1}{2}(S + T) - \dfrac{p}{3} + \dfrac{i\sqrt{3}}{2}(S - T),$
(5)  $x_3 = -\dfrac{1}{2}(S + T) - \dfrac{p}{3} - \dfrac{i\sqrt{3}}{2}(S - T),$

where

(6)  $S = \sqrt[3]{N + \sqrt{\Delta}}$ and $T = \sqrt[3]{N - \sqrt{\Delta}}.$

Case (ii): solutions for $\Delta < 0$. The roots are given by

(7)  $x_i = \pm 2\sqrt{-M}\,\cos\left[\dfrac{\theta}{3} + (i - 1)\,120°\right] - \dfrac{p}{3},\qquad i = 1, 2, 3,$

where

(8)  $\theta = \arccos\dfrac{N}{\sqrt{(-M)^3}}$  ($\theta$ in degrees).

In Eqn (7) the upper sign applies if $N$ is positive, while the lower sign applies if $N$ is negative. If methanol obeys the van der Waals equation of state, predict the density of saturated methanol vapor at 11 bar and 413 K analytically. (Answer: 0.01106 g/cm³)

3.6 Estimate the molar volumes of saturated liquid and vapor for n-octane at 293 K using: (a) the Redlich-Kwong equation of state; (b) the Soave-Redlich-Kwong equation of state; (c) the Peng-Robinson equation of state. At 293 K, the vapor pressure of n-octane is 1.387 bar. (Answer: (a) 196.9 and 15,250 cm³/mol; (b) 189.6 and 14,560 cm³/mol; (c) 168.6 and 14,540 cm³/mol)

3.7 The maximum pressure $P$ that a spherical tank can withstand depends on the tank radius $R$, the wall thickness $t_w$, and the maximum tensile strength $\sigma$ of the material. The equation proposed by Strelzhoff and Pan (1968) states that

$P = \dfrac{2t_w\sigma}{R + 0.2\,t_w}$ if $P < 0.665\,\sigma$, and $P = \dfrac{2\sigma\left(\dfrac{t_w}{R} + 1\right)^2 - 2\sigma}{\left(\dfrac{t_w}{R} + 1\right)^2 + 2}$ if $P > 0.665\,\sigma$,

where $P$ is the internal gauge pressure. Estimate the maximum amount of ethylene that can be stored in a spherical tank at 300 K. The tank is made of stainless steel 316, has a radius of 1.5 m, and has a wall thickness of 13 mm. Ethylene is represented by the Peng-Robinson equation of state. For stainless steel 316, $\sigma = 586$ MPa. (Answer: 4476 kg)

3.8 A rigid tank contains propane at 350 K and an unknown pressure. When the tank is cooled to 303 K, the vapor starts condensing. Estimate the initial pressure in the tank using the Peng-Robinson equation of state. At 303 K, the vapor pressure of propane is 10.84 bar. (Answer: 13.234 bar)

3.9 A 0.8 m³ rigid tank contains 2 kg of acetylene at 200 K. Determine the state and pressure of the acetylene in the tank using the Peng-Robinson equation of state. The vapor pressure of acetylene at 200 K is 1.905 bar. (Answer: 1.544 bar)

3.10 A 0.05 m³ piston-cylinder device contains 7 mol of isobutane at 350 K. It is compressed isothermally until the volume becomes 0.01 m³. The vapor pressure of isobutane is given as a function of temperature as

$\ln P^{vap} = -9.416723\,\ln T - \dfrac{4577.132}{T} + 73.51351 + 1.514279\times10^{-5}\,T^2,$

where $P^{vap}$ is in kPa and $T$ is in K. Estimate the initial and final pressures using the Peng-Robinson equation of state. (Answer: 3.812 bar, 12.469 bar)

3.11 Suppose that a rigid cylinder contains propylene at 380 K and 150 bar.
The contents of the cylinder are cooled to 285 K and part of the liquid that condenses at the bottom of the cylinder is removed. If the cylinder temperature is again increased to 380 K, will the pressure be greater than, equal to, or less than 150 bar? Explain clearly. (Answer: less than 150 bar)

3.12 A piston-cylinder assembly contains 3 kg of acetone with 60% quality. The system is heated at constant pressure until all the acetone vaporizes. Calculate the work done if the temperature at the final state is 400 K. Assume acetone is represented by the Peng-Robinson equation of state. The vapor pressure of acetone is given as a function of temperature as

$\ln P^{vap} = 10.0311 - \dfrac{2940.46}{T - 35.93},$

where $P^{vap}$ is in bar and $T$ is in K. (Answer: −58,598 J)

3.13 From Fig. 3.3a, conclude that the isotherm satisfies the following equation in the two-phase region:

(1)  $P^{vap}\left(\tilde V^V - \tilde V^L\right) = \int_{\tilde V^L}^{\tilde V^V} P\,d\tilde V.$

(a) If a substance is represented by the Redlich-Kwong equation of state, show that Eqn (1) takes the form

(2)  $P^{vap}\left(\tilde V^V - \tilde V^L\right) = RT\ln\dfrac{\tilde V^V - b}{\tilde V^L - b} - \dfrac{a}{b\sqrt{T}}\,\ln\dfrac{\tilde V^V(\tilde V^L + b)}{\tilde V^L(\tilde V^V + b)}.$

(b) Express Eqn (2) in terms of dimensionless quantities to obtain

(3)  $Z^V - Z^L - \ln\dfrac{Z^V - B}{Z^L - B} + \dfrac{A}{B}\,\ln\dfrac{Z^V(Z^L + B)}{Z^L(Z^V + B)} = 0.$

Note that Eqn (3) can be used to estimate the vapor pressure of a pure substance at any given temperature. (c) Using a trial-and-error procedure, estimate the vapor pressure of propane at 350 K if it obeys the Redlich-Kwong equation of state. (Answer: (c) 30.9 bar)

Chapter: High-Pressure Fluid Phase Equilibria (2012, Supercritical Fluid Science and Technology, Ulrich K. Deiters, Thomas Kraska)

7.11 Problems

1. Derive an expression for the second virial coefficient of a gas whose molecules have a square-well pair potential (Eq. (7.35)). Discuss the temperature dependence of the second virial coefficient at high temperatures. (See the sketch after this list.)

2. If an equation of state is not good enough for a given task, it is a common trick to make its parameters temperature dependent. Check whether using temperature-dependent covolumes, (a) $b = b_0 + b_1/T$ or (b) $b = b_0 + b_1 T$ with $b_1 < 0$, can cause isotherm crossing in connection with the van der Waals equation of state.

3. Another "trick of the trade" is turning constant exponents into substance-specific parameters. It may be tempting to improve a simple van der Waals-type equation of state, $p = p_{rep} - \dfrac{a}{V_m^2}$, by turning it into $p = p_{rep} - \dfrac{a}{V_m^{\nu}}$, with $\nu$ being a parameter that can be fitted to experimental data. Analyze this modification with respect to physical plausibility and its effect on the second virial coefficient.

4. Derive an expression for the isochoric heat capacity for the van der Waals equation of state.

5. Calculate the second and third virial coefficients of the Redlich-Kwong equation of state.

6. Discuss Soave's $\alpha(T)$ temperature function, which appears in some cubic equations of state (e.g., Eq. (7.13)): are there extrema? What is the limiting behavior?

7. Derive the so-called optimized SAFT equation of state (OSAFT) by repeating the derivation in Section 7.5.4.5 for a different hard-sphere equation of state, namely Eq. (7.33).

8. Derive an expression for the effective collision diameter of a gas whose molecules have a linear repulsive pair potential with a hard core:

$u(r) = \begin{cases}\infty & r < \sigma,\\ \dfrac{\varepsilon}{\lambda - 1}\left(\lambda - \dfrac{r}{\sigma}\right) & \sigma \le r < \lambda\sigma,\\ 0 & r \ge \lambda\sigma.\end{cases}$

9. Derive an expression for the effective collision diameter of a gas whose molecules have a "rectangular" repulsive pair potential with a hard core:

$u(r) = \begin{cases}\infty & r < \sigma,\\ +\varepsilon & \sigma \le r < \lambda\sigma,\\ 0 & r \ge \lambda\sigma.\end{cases}$
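For Problem 1 above, the standard textbook result for an attractive square well of depth $\varepsilon$ and range $\lambda\sigma$ is $B(T) = b_0\left[1 - (\lambda^3 - 1)\left(e^{\varepsilon/kT} - 1\right)\right]$ with $b_0 = 2\pi N_A\sigma^3/3$; it is stated here (not derived) so it can be explored numerically. A minimal sketch with illustrative, roughly argon-like parameters (an assumption, not data from the text):

```python
import numpy as np

# Square-well second virial coefficient (standard closed-form result):
#   u(r) = inf (r < sigma), -eps (sigma <= r < lam*sigma), 0 (r >= lam*sigma)
#   B(T) = b0 * (1 - (lam**3 - 1) * (exp(eps/(kB*T)) - 1)),  b0 = 2*pi*NA*sigma**3/3

NA = 6.02214076e23      # 1/mol
kB = 1.380649e-23       # J/K

def B_square_well(T, sigma, eps, lam):
    """B(T) in m^3/mol; eps in J, sigma in m, lam dimensionless."""
    b0 = 2.0 * np.pi * NA * sigma**3 / 3.0
    return b0 * (1.0 - (lam**3 - 1.0) * np.expm1(eps / (kB * T)))

# Hypothetical, roughly argon-like parameters:
sigma, eps_over_k, lam = 3.4e-10, 120.0, 1.5
for T in (100.0, 200.0, 400.0, 1000.0, 5000.0):
    B = B_square_well(T, sigma, eps_over_k * kB, lam)
    print(f"T = {T:7.1f} K   B = {B * 1e6:+8.2f} cm^3/mol")
```

At high temperatures $e^{\varepsilon/kT} - 1 \to 0$, so $B(T)$ rises monotonically toward the positive hard-sphere value $b_0$ (about 50 cm³/mol for these parameters) and, unlike for real gases, shows no maximum.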
Chapter: Light Scattering - The Application of Static Light Scattering: Zimm Plot (2013, Reference Module in Chemistry, Molecular Sciences and Chemical Engineering, H. Wu, M. Lattuada)

Theoretical Background of the Zimm Plot

When light is shone on a liquid sample, part of the incident radiation is scattered whenever it encounters a change in optical properties along its path. Such changes can result from the presence of particles, droplets, bubbles, proteins, or any object suspended in the medium, or even from density fluctuations in the medium itself. Here the latter case is not considered; the focus is on a suspension containing particles, droplets, proteins, or macromolecules dispersed in a liquid. For the sake of simplicity, the suspended objects will henceforth be referred to generically as particles. The optical properties of the suspension are usually described in terms of the difference in refractive index between the suspended particles and the surrounding medium. The following quantitative treatment closely follows the notation of Pusey. A very general expression for the intensity $I$ of the light scattered by a collection of $N$ particles is

$\dfrac{I(q)}{I_0} = \dfrac{1}{r^2}\sum_{i=1}^{N}\sum_{j=1}^{N} b_i(q)\,b_j^*(q)\,\exp\!\left[-i\,\mathbf{q}\cdot(\mathbf{R}_i - \mathbf{R}_j)\right],$

where $I_0$ is the intensity of the incident light; $r$ the sample-to-detector distance; $N$ the number of suspended particles; $\mathbf{R}_i$ the position vector of the $i$th particle; and $\mathbf{q}$ the scattering wave vector, with magnitude

$q = |\mathbf{q}| = \dfrac{4\pi n_0}{\lambda}\,\sin\dfrac{\theta}{2},$

$\theta$ being the scattering angle and $\lambda$ the wavelength of the radiation in vacuum. The quantities $b_i(q)$ are the so-called scattering lengths,

$b_i(q) = \dfrac{\pi n_0^2}{\lambda^2}\,\dfrac{n_p^2 - n_l^2}{n_0^2}\int_{V_i} \exp(-i\,\mathbf{q}\cdot\mathbf{r}_i)\,d^3 r_i,$

where $n_p$, $n_l$, and $n_0$ are the refractive indexes of the particles, the liquid, and the suspension, respectively, and the integral is carried out over the volume $V_i$ of the $i$th particle. This expression can be rearranged and simplified in several ways. In the case where the dispersed phase is made of very small, identical particles at concentrations low enough that the correlation among them can be neglected, the scattered intensity takes the form

$\dfrac{I(q)}{I_0} = \dfrac{\pi^2}{\lambda^4 r^2}\,(n_p^2 - n_l^2)^2\, N V_p^2\, P(q),$

where $P(q)$ is the particle form factor, accounting for the particle shape, and $V_p$ the particle volume. With this notation, $P(q)\to 1$ when $q\to 0$. When the correlation among the identical particles cannot be neglected, this becomes

$\dfrac{I(q)}{I_0} = \dfrac{\pi^2}{\lambda^4 r^2}\,(n_p^2 - n_l^2)^2\, N V_p^2\, P(q)\,S(q),$

where the scattering structure factor $S(q)$ has been introduced, normalized in such a way that $S(q)\to N$ when $q\to 0$. It is often useful to reformulate the above expressions in terms of the so-called Rayleigh ratio,

$R = \dfrac{I(q)\,r^2}{I_0 V} = \dfrac{\pi^2}{\lambda^4}\,(n_p^2 - n_l^2)^2\, C\, V_p^2\, P(q)\, S(q),$

where $V$ is the volume of the sample and $C$ the number concentration of the particles. Very often, this last formulation is further modified in order to better highlight the role of the refractive-index mismatch between solvent and particles.
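The working equations derived in the remainder of this section reduce, in the dilute and small-$q$ limit, to $Kc/R_{ex} \approx 1/M_w + q^2R_g^2/(3M_w) + 2B_2c$, which is linear in $q^2$ and $c$. As an illustration of how a Zimm analysis exploits this linearity, here is a minimal NumPy sketch on synthetic data; all numerical values ($M_w$, $R_g$, $B_2$, concentrations) are hypothetical, chosen only to make the fit well scaled:

```python
import numpy as np

# Zimm analysis in the dilute, small-q limit:
#   K*c/R_ex ~= 1/Mw + (Rg**2/(3*Mw)) * q**2 + 2*B2*c
# Linear least squares in the basis [1, q**2, c] recovers Mw, Rg, B2.

n0, lam = 1.33, 633e-9                         # solvent index, wavelength (m)
theta = np.radians(np.arange(30, 151, 15))     # scattering angles
q = 4 * np.pi * n0 / lam * np.sin(theta / 2)   # wave-vector magnitudes (1/m)

Mw_true, Rg_true, B2_true = 5e5, 40e-9, 5e-7   # hypothetical values
c = np.array([0.5, 1.0, 1.5, 2.0])             # concentrations (arbitrary units)

Q, C = np.meshgrid(q, c)                       # grid of (angle, concentration)
y = 1/Mw_true + (Rg_true**2 / (3*Mw_true)) * Q**2 + 2*B2_true*C
y = y * (1 + 0.002 * np.random.default_rng(3).standard_normal(y.shape))  # noise

X = np.column_stack([np.ones(y.size), (Q**2).ravel(), C.ravel()])
(alpha, beta, gamma), *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
Mw, B2 = 1/alpha, gamma/2
Rg = np.sqrt(3 * beta / alpha)                 # since beta = Rg^2/(3*Mw)
print(f"Mw ~ {Mw:.3g}, Rg ~ {Rg:.3g} m, B2 ~ {B2:.3g}")
```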
Assuming first that the particle concentration is low enough for the scattering structure factor to be neglected, taking the limit $q\to 0$, and considering that the refractive-index difference between particles and solvent is not too large, the Rayleigh ratio can be rewritten as

$R \approx \dfrac{4\pi^2 n_0^2}{\lambda^4}\,(n_p - n_l)^2\, C\, V_p^2 \approx \dfrac{4\pi^2 n_0^2}{\lambda^4}\left(\dfrac{dn_0}{dc}\right)^2 \dfrac{M}{N_A}\,c = KMc,\qquad K = \dfrac{4\pi^2 n_0^2}{\lambda^4 N_A}\left(\dfrac{dn_0}{dc}\right)^2,$

where $c$ is the mass concentration of particles; $M$, their molecular weight; and $N_A$, the Avogadro number. The appearance of the gradient of the refractive index of the solution with the concentration of particles is very handy, because this quantity can be further manipulated using results from classical thermodynamics. In practical applications it is more convenient to define an excess Rayleigh ratio, $R_{ex}$, using, instead of the scattered intensity, the difference between the scattered intensity at a given particle concentration and the intensity at zero particle concentration (i.e., from the solvent):

$R_{ex} = \dfrac{\left[I_c(q) - I_{c=0}(q)\right]r^2}{I_0 V}.$

In this manner, one can eliminate the contribution of the light scattered by the solvent; hereafter this definition of the Rayleigh ratio will be used. In the case of polydisperse samples, the relation is modified as follows:

$R_{ex} = K M_w c,$

where $M_w$ is the weight-average molecular weight of the particles. This indicates that, for measuring molecular weight, the scattering technique enhances the contribution of particles with higher masses. In order to see how particle size enters, it is sufficient to expand the form factor in a Taylor series (Guinier regime):

$P(q) = 1 - \dfrac{16\pi^2}{3\lambda^2}\,R_g^2\,\sin^2\dfrac{\theta}{2} + \cdots = 1 - \dfrac{q^2}{3}\,R_g^2 + \cdots,$

where $R_g$ is the radius of gyration, in order to recover the following famous expression:

$\dfrac{Kc}{R_{ex}} = \dfrac{1}{M_w}\left(1 + \dfrac{q^2}{3}R_g^2 + \cdots\right),$

which is one of the basic equations of the Zimm plot. Whenever the concentration of particles is not negligible, the effect of the scattering structure factor needs to be taken into account. An instructive derivation of the effect of the concentration can be obtained from the definition of the structure factor in terms of the pair-correlation function $g(r)$,

$S(q) = 1 + 4\pi C \int_0^{\infty} \left[g(r) - 1\right]\dfrac{\sin(qr)}{qr}\,r^2\,dr,$

and expanding it in powers of $q$:

$S(q) = 1 - 2C A_2 + \cdots,$

where $A_2$ is the second virial coefficient, defined as

$A_2 = -2\pi \int_0^{\infty} \left[g(r) - 1\right]r^2\,dr \approx -2\pi \int_0^{\infty}\left[\exp\!\left(-\dfrac{U(r)}{k_B T}\right) - 1\right] r^2\,dr,$

with $U(r)$ being the pair-interaction potential between two particles. The above expansion is valid at not-too-high concentrations. In this way, a Zimm plot at a relatively high particle concentration can also be used to gain information about interparticle interactions. The general equation of the Zimm plot in this case is obtained by substituting this expansion of $S(q)$ into the expression for $R_{ex}$, so as to have, after rearrangement,

$\dfrac{Kc}{R_{ex}} = \dfrac{1}{P(q)}\left(\dfrac{1}{M_w} + 2B_2 c\right).$

Note that $B_2$ is also called the second virial coefficient; it is basically equal to $A_2/M_w$. It is worth noting the three limiting cases of this equation:

I. For $\theta \to 0$, since $P(q)\to 1$, we have

$\lim_{\theta\to 0} \dfrac{Kc}{R_{ex}} = \left(\dfrac{Kc}{R_{ex}}\right)_0 = \dfrac{1}{M_w} + 2B_2 c.$

II. For $c \to 0$, the single-particle equation is recovered:

$\lim_{c\to 0} \dfrac{Kc}{R_{ex}} = \dfrac{1}{M_w\,P(q)} = \dfrac{1}{M_w}\left(1 + \dfrac{R_g^2 q^2}{3}\right),\quad\text{i.e.,}\quad \dfrac{1}{P(q)} = 1 + \dfrac{R_g^2 q^2}{3}.$

III.
For both $\theta\to 0$ and $c\to 0$, the molecular weight is recovered directly:

$\lim_{\substack{c\to 0,\ \theta\to 0}} \dfrac{Kc}{R_{ex}} = \dfrac{1}{M_w}.$

Therefore, with the measured values of the Rayleigh ratio $R_{ex}$ at different concentrations and different angles, these three equations constitute the basis on which SLS (the Zimm plot) determines the weight-average molecular weight $M_w$, the radius of gyration $R_g$, and the second virial coefficient $B_2$ of macromolecules (e.g., polymers and proteins) in disperse media. In addition to the derivation presented earlier, other theoretical approaches have been developed to obtain information about interparticle interactions from a Zimm plot for multicomponent systems. An interesting example starts from the following definition of the Rayleigh ratio (measured at a scattering angle of 90°),

$R_{ex,90} = \dfrac{4\pi^2}{\lambda^4}\,n_0^2\,\langle\Delta n^2\rangle\,C,$

and expands the ensemble-averaged refractive-index mismatch using relations from multicomponent thermodynamics. More explicitly, since

$\Delta n \approx dn = \sum_j \left(\dfrac{\partial n}{\partial N_j}\right)_{T,P,N_{k\neq j}} dN_j,$

substitution leads to

$\langle\Delta n^2\rangle = \sum_j \left(\dfrac{\partial n}{\partial N_j}\right)^2_{T,P,N_{k\neq j}} \left(\langle N_j^2\rangle - \langle N_j\rangle^2\right) + 2\sum_{j<i} \left(\dfrac{\partial n}{\partial N_j}\right)_{T,P,N_{k\neq j}} \left(\dfrac{\partial n}{\partial N_i}\right)_{T,P,N_{k\neq i}} \langle N_j\rangle\left(\delta_{ij} + \dfrac{\langle N_j\rangle}{V}\,G_{ij}\right),$

where the last term $G_{ij}$ is defined in terms of the pair-correlation function between species $i$ and $j$,

$G_{ij} = 4\pi\int_0^{\infty}\left[g_{ij}(r) - 1\right]r^2\,dr,$

which clearly depends on the level of correlation among the species $i$ and $j$ included in the pair-correlation function $g_{ij}(r)$, and thus on their pair-interaction potentials. The following section makes use of this theoretical background, focusing particularly on applications of the Zimm plot.

Chapter: Thermodynamic Properties (2005, The Corresponding-States Principle and its Practice, Hong Wei Xiang)

5.2.2 Virial Coefficients of Lennard-Jones Molecules

It is possible to account for the behavior of the second virial coefficient $B$ as a function of temperature if the Lennard-Jones potential is written in the form

(5.11)  $\varphi(r) = \dfrac{\lambda}{r^n} - \dfrac{\mu}{r^6}.$

The values of $n$ lie between 9 and 14; in general, the value $n = 12$ is used. Based on Eq. (5.11), we have

(5.12)  $\varphi(r) = 4\varepsilon\left(\dfrac{1}{R^{12}} - \dfrac{1}{R^6}\right),$

(5.13)  $\varepsilon = \dfrac{\mu^2}{4\lambda},\qquad \sigma = \left(\dfrac{\lambda}{\mu}\right)^{1/6},$

where $R = r/\sigma$. Introducing Eq. (5.12) into the known theoretical expression for $B$,

(5.14)  $B = -2\pi N\int_0^{\infty}\left[\exp\!\left(-\varphi(r)/kT\right) - 1\right]r^2\,dr,$

the second virial coefficient for the Lennard-Jones potential is expressed as

(5.15)  $B = \dfrac{2\pi N\sigma^3}{3}\left(\dfrac{4\varepsilon}{kT}\right)^{1/4}\sum_{i=0}^{\infty} c_i\left(\dfrac{4\varepsilon}{kT}\right)^{i/2},$

where

(5.16)  $c_i = -\dfrac{1}{4\,i!}\,\Gamma\!\left(\dfrac{i}{2} - \dfrac{1}{4}\right).$

Taking $2\pi N\sigma^3/3$ as the unit of volume and $\varepsilon/k$ as the unit of temperature, the reduced form of the virial equation of state is

(5.17)  $\dfrac{pV^*}{RT^*} = 1 + \dfrac{B^*}{V^*} + \dfrac{C^*}{V^{*2}} + \cdots,$

where $V^* = 3V/(2\pi N\sigma^3)$, $T^* = kT/\varepsilon$,

(5.18)  $B^* = \dfrac{B}{2\pi N\sigma^3/3} = \left(\dfrac{4}{T^*}\right)^{1/4}\sum_i c_i\left(\dfrac{4}{T^*}\right)^{i/2},$

and

(5.19)  $C^* = \dfrac{C}{(2\pi N\sigma^3/3)^2}.$

This theory is based on the assumption that the molecular field is spherical and can be expressed by Eq. (5.12) with the two parameters $\varepsilon$ and $\sigma$.

Chapter: Stability Analysis (2007, Nonequilibrium Thermodynamics, Second Edition, Yaşar Demirel)

Problems

12.1 Using the truncated virial equation of state with second virial coefficient $B(T)$,

$\dfrac{PV}{RT} = 1 + \dfrac{B(T)}{V},$

obtain the thermodynamic stability condition based on the constraint on $B(T)$.

12.2 If we have a fluid in a closed system at constant entropy and pressure, prove that the stability condition of the fluid is $C_p > 0$.
12.3 Using the truncated virial equation of state

$\dfrac{PV}{RT} = 1 + \dfrac{B}{V} + \dfrac{C}{V^2}$

and the constant-volume heat capacity $C_v = a + bT$: for what values of $B$ and $C$ does this fluid undergo a vapor-liquid phase transition?

12.4 Solve the following initial value problem as an eigenvalue problem, and prepare a state-space plot:

$\dfrac{dx_1}{dt} = -9x_1 + 4x_2,\qquad \dfrac{dx_2}{dt} = -2x_1 + 2x_2,\qquad x_1(0) = 1,\ x_2(0) = -1.$

12.5 Solve the following initial value problem as an eigenvalue problem, and prepare a state-space plot of $x_1$ versus $x_2$ and $x_2$ versus $x_3$:

$\dfrac{dx_1}{dt} = 8x_1 - 5x_2 + 10x_3,\qquad \dfrac{dx_2}{dt} = 2x_1 + x_2 + 2x_3,\qquad \dfrac{dx_3}{dt} = -4x_1 + 4x_2 - 6x_3,$

with initial conditions $x_1(0) = x_2(0) = 2$, $x_3(0) = -3$.

12.6 We have a first-order homogeneous reaction taking place in the liquid phase in an ideal stirred tank reactor of volume $20\times10^{-3}\ \mathrm{m^3}$. The concentration of the reactant in the feed is $3.1\ \mathrm{kmol/m^3}$ and the volumetric flow rate of the feed is $58\times10^{-6}\ \mathrm{m^3/s}$. The density and specific heat of the reaction mixture are constant at $1000\ \mathrm{kg/m^3}$ and $4.184\ \mathrm{kJ/(kg\,K)}$. The reactor operates at adiabatic conditions. If the feed is at 298 K, investigate the possibility of multiple solutions for the conversion at various temperatures in the product stream. The heat of reaction and the rate of reaction are

$\Delta H_r = -2.1\times10^{8}\ \mathrm{J/kmol},\qquad J_r = kC = 4.5\times10^{6}\,C\,\exp\!\left(-\dfrac{62800}{RT}\right)\ \mathrm{kmol/(m^3\,s)},$ with $C$ in $\mathrm{kmol/m^3}$.

Chapter: Stability Analysis (2019, Nonequilibrium Thermodynamics, Fourth Edition, Yaşar Demirel, Vincent Gerbaud)

Problems

12.1 Using the truncated virial equation of state with second virial coefficient $B(T)$, $PV/RT = 1 + B(T)/V$, obtain the thermodynamic stability condition based on the constraint on $B(T)$.

12.2 If we have a fluid in a closed system at constant entropy and pressure, prove that the stability condition of the fluid is $C_p > 0$.

12.3 Using the truncated virial equation of state $PV/RT = 1 + B/V + C/V^2$ and the constant-volume heat capacity $C_v = a + bT$: for what values of $B$ and $C$ does this fluid undergo a vapor-liquid phase transition?

12.4 Solve the following initial value problem as an eigenvalue problem, and prepare a state-space plot: $dx_1/dt = -9x_1 + 4x_2$, $dx_2/dt = -2x_1 + 2x_2$, $x_1(0) = 1$, $x_2(0) = -1$.

12.5 Repeat Pr. 12.4 with the initial values $x_1(0) = 1$, $x_2(0) = -1$, and prepare a state-space plot.

12.6 Solve the following initial value problem as an eigenvalue problem, and prepare a state-space plot of $x_1$ versus $x_2$ and $x_2$ versus $x_3$: $dx_1/dt = 8x_1 - 5x_2 + 10x_3$, $dx_2/dt = 2x_1 + x_2 + 2x_3$, $dx_3/dt = -4x_1 + 4x_2 - 6x_3$, with initial conditions $x_1(0) = x_2(0) = 2$, $x_3(0) = -3$.

12.7 Repeat Pr. 12.6 with the initial conditions $x_1(0) = 1$, $x_2(0) = 2$, $x_3(0) = 0$.

12.8 Consider the following reaction scheme:

A → X ($k_1 = 1.1$);  B + X → Y + E ($k_2 = 1.0$);  2X + Y → 3X ($k_3 = 1.1$);  X → F ($k_4 = 1.1$).

The concentrations of A and B are maintained at $c_A = 0.5$ and $c_B = 2.4$, while the products E and F are removed. Calculate: (a) the particular (stationary) solution; (b) the Jacobian matrix at steady state; (c) the homogeneous solution.

12.9 Consider the following reaction scheme:

A → X ($k_1 = 1.1$);  B + X → Y + E ($k_2 = 1.0$);  2X + Y → 3X ($k_3 = 1.0$);  X → F ($k_4 = 1.0$).

The concentrations of A and B are maintained at $c_A = 0.01$ and $c_B = 1.4$, while the products E and F are removed.
Calculate: (a) the particular (stationary) solution; (b) the Jacobian matrix at steady state; (c) the homogeneous solution.

12.10 We have a first-order homogeneous reaction taking place in the liquid phase in an ideal stirred tank reactor of volume $20\times10^{-3}\ \mathrm{m^3}$. The concentration of the reactant in the feed is $3.1\ \mathrm{kmol/m^3}$ and the volumetric flow rate of the feed is $58\times10^{-6}\ \mathrm{m^3/s}$. The density and specific heat of the reaction mixture are constant at $1000\ \mathrm{kg/m^3}$ and $4.184\ \mathrm{kJ/(kg\,K)}$. The reactor operates at adiabatic conditions. If the feed is at 298 K, investigate the possibility of multiple solutions for the conversion at various temperatures in the product stream. The heat of reaction and the rate of reaction are

$\Delta H_r = -2.1\times10^{8}\ \mathrm{J/kmol},\qquad J_r = 4.5\times10^{6}\,c\,\exp\!\left(-\dfrac{62800}{RT}\right)\ \mathrm{kmol/(m^3\,s)},$ with $c$ in $\mathrm{kmol/m^3}$.
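Problem 12.10 can be explored numerically: at steady state, the mass-balance conversion $x = k\tau/(1 + k\tau)$ and the adiabatic energy-balance line $x = \rho c_p (T - T_{feed}) / \left[(-\Delta H_r)\,c_{A0}\right]$ must intersect, and multiple intersections mean multiple steady states. A minimal sketch, assuming the activation energy 62 800 is in J/mol so that $R = 8.314\ \mathrm{J/(mol\,K)}$ (an assumption; the text does not state the units):

```python
import numpy as np

# Adiabatic CSTR steady states: intersections of the mass-balance conversion
# x_MB(T) = k*tau/(1 + k*tau), k(T) = 4.5e6*exp(-62800/(R*T)), with the
# energy-balance line x_EB(T) = (T - T_feed)/dT_ad.

R = 8.314                      # J/(mol K); activation energy assumed in J/mol
V, Q = 20e-3, 58e-6            # m^3, m^3/s
tau = V / Q                    # residence time, ~344.8 s
cA0 = 3.1                      # kmol/m^3
rho, cp = 1000.0, 4184.0       # kg/m^3, J/(kg K)
dHr = -2.1e8                   # J/kmol
T_feed = 298.0                 # K
dT_ad = (-dHr) * cA0 / (rho * cp)   # adiabatic temperature rise, ~155.6 K

def x_mass(T):
    k = 4.5e6 * np.exp(-62800.0 / (R * T))
    return k * tau / (1.0 + k * tau)

def x_energy(T):
    return (T - T_feed) / dT_ad

# Scan the residual for sign changes to locate steady states.
T = np.linspace(T_feed, T_feed + dT_ad, 20001)
f = x_mass(T) - x_energy(T)
idx = np.where(np.sign(f[:-1]) != np.sign(f[1:]))[0]
for i in idx:
    Ts = T[i] - f[i] * (T[i+1] - T[i]) / (f[i+1] - f[i])   # linear interpolation
    print(f"steady state near T = {Ts:6.1f} K, conversion x = {x_energy(Ts):.3f}")
print(f"number of steady states found: {len(idx)}")
```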
251
abstract algebra - $\beta_a(n) = (a_1 * \cdots (a_n * b)) \setminus_* b$ and iterations in right-divisible magmas, and representability by left translations - Mathematics Stack Exchange
===============

Asked 11 years, 3 months ago; modified 11 years, 2 months ago; viewed 127 times.

Consider a magma $(G, *)$ with infinitely many elements. Define $\operatorname{left}(G)$ as the set of all left translations,

$\operatorname{left}(G) := \{L_a : a \in G,\ L_a(b) = a * b\},$

and define $\operatorname{iter}(a)$ as the set generated by the left translation by $a \in G$ and closed under function composition, in other words the iterations of the left translation:

$\operatorname{iter}(a) := \{L_a^n : n \in \mathbb{N} \setminus \{0\}\}.$

What are the weakest conditions that $(G, *)$ must satisfy if we want a collection of injective functions $F_a : \operatorname{iter}(a) \to \operatorname{left}(G)$ to exist? The $F_a$ are defined in this way:

$F_a(L_a^1) = L_a,\qquad F_a(L_a^n) = L_{a'},\qquad (F_a(L_a^n))(x) = L_a^n(x).$

I think that this is equivalent to the statement $\operatorname{iter}(a) \subseteq \operatorname{left}(G)$.
If $(G, *)$ is associative, then this always holds, because $L_a(L_a(x)) = L_{a*a}(x)$ and, in general, $F_a(L_a^n) = L_{a^n}$. But this assumption is too strong for my needs.

A weaker condition: if $*$ is right-invertible (there exists $R_b^{-1}$ such that $R_b^{-1}(a * b) = (a * b) \setminus_* b = a$), then we can define the functions $\beta_a$ by

$\beta_a(n) * b = a_1 * \cdots (a_n * b),$
$\beta_a(n) = (a_1 * \cdots (a_n * b)) \setminus_* b,$
$\beta_a(n) = R_b^{-1}(L_a^n(b)),$

and these functions must always be constant (for every $b$), so that we can define the injections $F_a$: from $L_a^n = L_{\beta_a(n)}$ we get $F_a(L_a^n) = L_{\beta_a(n)}$, and thus $F_a$ exists.

1. Are "$\beta_a$ does not depend on $b$ and is always constant", "$F_a$ exists", and "$\operatorname{iter}(a) \subseteq \operatorname{left}(G)$" three equivalent statements?
2. When does $\beta_a$ satisfy that weak condition?
3. If $(G, *)$ is not commutative, not associative, and not left-invertible, but is right-invertible, is it possible for this bijection from $\operatorname{iter}(a)$ to $\operatorname{left}(G)$ to exist? Or can there at least be a surjection?

Note: the image of $\operatorname{iter}(a)$ under $F_a$, $F_a[\operatorname{iter}(a)]$, should always be a commutative subsemigroup of $(\operatorname{left}(G), \circ)$: a commutative submonoid if $*$ has a left unit, and a commutative subgroup if $*$ is right-invertible.

Update: here is a related question, "Existence of an operation $\cdot$ such that $(a * (b * c)) = (a \cdot b) * c$". It seems to me that the condition that the user Goos found there is really similar to the inclusion condition. In any case, I think the questions here are still meaningful.

Tags: abstract-algebra, functions, magma. Asked May 8, 2014 by m_l; edited Apr 13, 2017.
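For intuition, the inclusion $\operatorname{iter}(a) \subseteq \operatorname{left}(G)$ can be brute-force checked on a finite magma given by its Cayley table. The question itself concerns infinite magmas, so this is only an illustration, and the table below is a hypothetical example, not taken from the question:

```python
# Brute-force check of iter(a) ⊆ left(G) for a finite magma.

def left_translation(table, a):
    """L_a as a tuple of images: L_a(b) = a * b; elements are 0..n-1."""
    return tuple(table[a][b] for b in range(len(table)))

def compose(f, g):
    """(f ∘ g)(x) = f(g(x)) for maps given as tuples."""
    return tuple(f[g[x]] for x in range(len(g)))

def iter_subset_of_left(table, a):
    """True iff every iterate L_a^n is itself some left translation L_b."""
    n = len(table)
    left_G = {left_translation(table, b) for b in range(n)}
    La = left_translation(table, a)
    seen, power = set(), La
    while power not in seen:        # powers of a fixed map eventually cycle
        if power not in left_G:
            return False
        seen.add(power)
        power = compose(La, power)
    return True

# Hypothetical 3-element magma: rows indexed by a, columns by b, entry a*b.
table = [[1, 2, 0],
         [2, 0, 1],
         [0, 1, 2]]
for a in range(3):
    print(f"a = {a}: iter(a) ⊆ left(G)? {iter_subset_of_left(table, a)}")
```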
252
Kenneth Burke

Kenneth Burke has been termed "simply the finest literary critic in the world, and perhaps the finest since Coleridge" (Stanley Edgar Hyman, The New Leader). Mr. Burke has published ten other works with the University of California Press: Towards a Better Life (1966); Language as Symbolic Action: Essays on Life, Literature, and Method (1966); Collected Poems, 1915-1967 (1968); The Complete White Oxen: Collected Short Fiction of Kenneth Burke (1968); A Grammar of Motives (1969); Permanence and Change: An Anatomy of Purpose (1984); The Philosophy of Literary Form (1974); A Rhetoric of Motives (1969); The Rhetoric of Religion: Studies in Logology (1970); and Attitudes Toward History, Third Edition (1984).

Ebooks:
- Essays Toward a Symbolic of Motives, 1950-1955: $25.60
- A Grammar of Motives: $33.95
- Permanence and Change: An Anatomy of Purpose: $3.99
- Permanence and Change: An Anatomy of Purpose: $0.99
- The Philosophy of Literary Form: $36.95
- On Human Nature: A Gathering While Everything Flows, 1967-1984: $85.00
- Language As Symbolic Action: Essays on Life, Literature, and Method: $38.95
- La filosofía de la forma literaria: Y otros estudios sobre la acción simbólica: $8.99
253
Chapter 4: Vector Norms and Matrix Norms

4.1 Normed Vector Spaces

In order to define how close two vectors or two matrices are, and in order to define the convergence of sequences of vectors or matrices, we can use the notion of a norm. Recall that $\mathbb{R}_+ = \{x \in \mathbb{R} \mid x \ge 0\}$. Also recall that if $z = a + ib \in \mathbb{C}$ is a complex number, with $a, b \in \mathbb{R}$, then $\bar z = a - ib$ and $|z| = \sqrt{a^2 + b^2}$ ($|z|$ is the modulus of $z$).

Definition 4.1. Let $E$ be a vector space over a field $K$, where $K$ is either the field $\mathbb{R}$ of reals or the field $\mathbb{C}$ of complex numbers. A norm on $E$ is a function $\|\cdot\| : E \to \mathbb{R}_+$, assigning a nonnegative real number $\|u\|$ to any vector $u \in E$, and satisfying the following conditions for all $x, y, z \in E$:

(N1) $\|x\| \ge 0$, and $\|x\| = 0$ iff $x = 0$ (positivity);
(N2) $\|\lambda x\| = |\lambda|\,\|x\|$ (scaling);
(N3) $\|x + y\| \le \|x\| + \|y\|$ (triangle inequality).

A vector space $E$ together with a norm $\|\cdot\|$ is called a normed vector space. From (N3), we easily get

$\big|\,\|x\| - \|y\|\,\big| \le \|x - y\|.$

Example 4.1.
1. Let $E = \mathbb{R}$, and $\|x\| = |x|$, the absolute value of $x$.
2. Let $E = \mathbb{C}$, and $\|z\| = |z|$, the modulus of $z$.
3. Let $E = \mathbb{R}^n$ (or $E = \mathbb{C}^n$). There are three standard norms. For every $(x_1, \dots, x_n) \in E$, we have the 1-norm $\|x\|_1$, defined by $\|x\|_1 = |x_1| + \cdots + |x_n|$; the Euclidean norm $\|x\|_2$, defined by $\|x\|_2 = \left(|x_1|^2 + \cdots + |x_n|^2\right)^{1/2}$; and the sup-norm $\|x\|_\infty$, defined by $\|x\|_\infty = \max\{|x_i| \mid 1 \le i \le n\}$. More generally, we define the $\ell^p$-norm (for $p \ge 1$) by $\|x\|_p = \left(|x_1|^p + \cdots + |x_n|^p\right)^{1/p}$.

There are other norms besides the $\ell^p$-norms; we urge the reader to find such norms. Some work is required to show the triangle inequality for the $\ell^p$-norm.

Proposition 4.1. If $E$ is a finite-dimensional vector space over $\mathbb{R}$ or $\mathbb{C}$, then for every real number $p \ge 1$, the $\ell^p$-norm is indeed a norm.

The proof uses the following facts. If $q \ge 1$ is given by $\dfrac{1}{p} + \dfrac{1}{q} = 1$, then:

(1) For all $\alpha, \beta \in \mathbb{R}$, if $\alpha, \beta \ge 0$, then

(∗)  $\alpha\beta \le \dfrac{\alpha^p}{p} + \dfrac{\beta^q}{q}.$

(2) For any two vectors $u, v \in E$, we have

(∗∗)  $\sum_{i=1}^n |u_i v_i| \le \|u\|_p\,\|v\|_q.$

For $p > 1$ and $1/p + 1/q = 1$, the inequality

$\sum_{i=1}^n |u_i v_i| \le \left(\sum_{i=1}^n |u_i|^p\right)^{1/p}\left(\sum_{i=1}^n |v_i|^q\right)^{1/q}$

is known as Hölder's inequality. For $p = 2$, it is the Cauchy-Schwarz inequality. Actually, if we define the Hermitian inner product $\langle -, -\rangle$ on $\mathbb{C}^n$ by

$\langle u, v\rangle = \sum_{i=1}^n u_i \bar v_i,$

where $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$, then

$|\langle u, v\rangle| \le \sum_{i=1}^n |u_i \bar v_i| = \sum_{i=1}^n |u_i v_i|,$

so Hölder's inequality implies the inequality

$|\langle u, v\rangle| \le \|u\|_p\,\|v\|_q,$

also called Hölder's inequality, which, for $p = 2$, is the standard Cauchy-Schwarz inequality. The triangle inequality for the $\ell^p$-norm,

$\left(\sum_{i=1}^n |u_i + v_i|^p\right)^{1/p} \le \left(\sum_{i=1}^n |u_i|^p\right)^{1/p} + \left(\sum_{i=1}^n |v_i|^p\right)^{1/p},$

is known as Minkowski's inequality. When we restrict the Hermitian inner product to real vectors $u, v \in \mathbb{R}^n$, we get the Euclidean inner product

$\langle u, v\rangle = \sum_{i=1}^n u_i v_i.$

It is very useful to observe that if we represent (as usual) $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$ (in $\mathbb{R}^n$) by column vectors, then their Euclidean inner product is given by

$\langle u, v\rangle = u^\top v = v^\top u,$

and when $u, v \in \mathbb{C}^n$, their Hermitian inner product is given by

$\langle u, v\rangle = v^* u = \overline{u^* v}.$

In particular, when $u = v$, in the complex case we get $\|u\|_2^2 = u^* u$, and in the real case this becomes $\|u\|_2^2 = u^\top u$.
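A quick numerical illustration of these inequalities; a minimal NumPy sketch, using random vectors and the convention $\langle u, v\rangle = \sum_i u_i\bar v_i$ adopted above:

```python
import numpy as np

# Spot check of Hoelder's inequality |<u,v>| <= ||u||_p * ||v||_q with
# 1/p + 1/q = 1 (Cauchy-Schwarz when p = q = 2), for complex vectors.

rng = np.random.default_rng(0)
u = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)

def lp_norm(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)                # conjugate exponent: 1/p + 1/q = 1
    lhs = abs(np.vdot(v, u))         # np.vdot(v, u) = sum_i u_i * conj(v_i)
    rhs = lp_norm(u, p) * lp_norm(v, q)
    print(f"p = {p}: |<u,v>| = {lhs:.4f} <= ||u||_p ||v||_q = {rhs:.4f}")
```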
As convenient as these notations are, we still recommend that you do not abuse them; the notation $\langle u, v\rangle$ is more intrinsic and still "works" when our vector space is infinite-dimensional.

Proposition 4.2. The following inequalities hold for all $x \in \mathbb{R}^n$ (or $x \in \mathbb{C}^n$):

$\|x\|_\infty \le \|x\|_1 \le n\,\|x\|_\infty,$
$\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty,$
$\|x\|_2 \le \|x\|_1 \le \sqrt{n}\,\|x\|_2.$

Proposition 4.2 is actually a special case of a very important result: in a finite-dimensional vector space, any two norms are equivalent.

Definition 4.2. Given any (real or complex) vector space $E$, two norms $\|\cdot\|_a$ and $\|\cdot\|_b$ are equivalent iff there exist some positive reals $C_1, C_2 > 0$ such that

$\|u\|_a \le C_1\,\|u\|_b$ and $\|u\|_b \le C_2\,\|u\|_a$, for all $u \in E$.

Given any norm $\|\cdot\|$ on a vector space of dimension $n$, for any basis $(e_1, \dots, e_n)$ of $E$, observe that for any vector $x = x_1e_1 + \cdots + x_ne_n$, we have

$\|x\| = \|x_1e_1 + \cdots + x_ne_n\| \le C\,\|x\|_1,$

with $C = \max_{1\le i\le n}\|e_i\|$ and $\|x\|_1 = |x_1| + \cdots + |x_n|$. The above implies that

$\big|\,\|u\| - \|v\|\,\big| \le \|u - v\| \le C\,\|u - v\|_1,$

which means that the map $u \mapsto \|u\|$ is continuous with respect to the norm $\|\cdot\|_1$.

Let $S_1^{n-1}$ be the unit sphere with respect to the norm $\|\cdot\|_1$, namely

$S_1^{n-1} = \{x \in E \mid \|x\|_1 = 1\}.$

Now $S_1^{n-1}$ is a closed and bounded subset of a finite-dimensional vector space, so by Bolzano-Weierstrass, $S_1^{n-1}$ is compact. On the other hand, it is a well-known result of analysis that any continuous real-valued function on a nonempty compact set has a minimum and a maximum, and that they are achieved. Using these facts, we can prove the following important theorem:

Theorem 4.3. If $E$ is any real or complex vector space of finite dimension, then any two norms on $E$ are equivalent.

Next, we will consider norms on matrices.

4.2 Matrix Norms

For simplicity of exposition, we will consider the vector spaces $M_n(\mathbb{R})$ and $M_n(\mathbb{C})$ of square $n \times n$ matrices. Most results also hold for the spaces $M_{m,n}(\mathbb{R})$ and $M_{m,n}(\mathbb{C})$ of rectangular $m \times n$ matrices. Since $n \times n$ matrices can be multiplied, the idea behind matrix norms is that they should behave "well" with respect to matrix multiplication.

Definition 4.3. A matrix norm $\|\cdot\|$ on the space of square $n \times n$ matrices in $M_n(K)$, with $K = \mathbb{R}$ or $K = \mathbb{C}$, is a norm on the vector space $M_n(K)$ with the additional property that

$\|AB\| \le \|A\|\,\|B\|$, for all $A, B \in M_n(K)$.

Since $I^2 = I$, from $\|I\| = \|I^2\| \le \|I\|^2$ we get $\|I\| \ge 1$, for every matrix norm.

Before giving examples of matrix norms, we need to review some basic definitions about matrices. Given any matrix $A = (a_{ij}) \in M_{m,n}(\mathbb{C})$, the conjugate $\bar A$ of $A$ is the matrix such that

$\bar A_{ij} = \overline{a_{ij}},\quad 1 \le i \le m,\ 1 \le j \le n.$

The transpose of $A$ is the $n \times m$ matrix $A^\top$ such that

$A^\top_{ij} = a_{ji},\quad 1 \le i \le m,\ 1 \le j \le n.$

The adjoint (conjugate transpose) of $A$ is the $n \times m$ matrix $A^*$ such that

$A^* = \overline{(A^\top)} = (\bar A)^\top.$

When $A$ is a real matrix, $A^* = A^\top$. A matrix $A \in M_n(\mathbb{C})$ is Hermitian if $A^* = A$. If $A$ is a real matrix ($A \in M_n(\mathbb{R})$), we say that $A$ is symmetric if $A^\top = A$. A matrix $A \in M_n(\mathbb{C})$ is normal if $AA^* = A^*A$, and if $A$ is a real matrix, it is normal if $AA^\top = A^\top A$. A matrix $U \in M_n(\mathbb{C})$ is unitary if $UU^* = U^*U = I$. A real matrix $Q \in M_n(\mathbb{R})$ is orthogonal if $QQ^\top = Q^\top Q = I$.

Given any matrix $A = (a_{ij}) \in M_n(\mathbb{C})$, the trace $\mathrm{tr}(A)$ of $A$ is the sum of its diagonal elements,

$\mathrm{tr}(A) = a_{11} + \cdots + a_{nn}.$

It is easy to show that the trace is a linear map, so that $\mathrm{tr}(\lambda A) = \lambda\,\mathrm{tr}(A)$ and $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$. Moreover, if $A$ is an $m \times n$ matrix and $B$ is an $n \times m$ matrix, it is not hard to show that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.
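Both the trace identity $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ and the bounds of Proposition 4.2 are easy to spot-check numerically; a minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# tr(AB) = tr(BA) even for rectangular A (3x5) and B (5x3):
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
print(np.trace(A @ B), np.trace(B @ A))    # equal up to rounding

# Norm-equivalence bounds of Proposition 4.2 on a random vector:
x = rng.standard_normal(10)
n = x.size
n1 = np.linalg.norm(x, 1)
n2 = np.linalg.norm(x, 2)
ninf = np.linalg.norm(x, np.inf)
assert ninf <= n1 <= n * ninf
assert ninf <= n2 <= np.sqrt(n) * ninf
assert n2 <= n1 <= np.sqrt(n) * n2
print("Proposition 4.2 inequalities hold for this sample.")
```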
We also review eigenvalues and eigenvectors. We content ourselves with definitions involving matrices; a more general treatment will be given later on (see Chapter 8).

Definition 4.4. Given any square matrix $A \in M_n(\mathbb{C})$, a complex number $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ if there is some nonzero vector $u \in \mathbb{C}^n$ such that

$Au = \lambda u.$

If $\lambda$ is an eigenvalue of $A$, then the nonzero vectors $u \in \mathbb{C}^n$ such that $Au = \lambda u$ are called eigenvectors of $A$ associated with $\lambda$; together with the zero vector, these eigenvectors form a subspace of $\mathbb{C}^n$ denoted by $E_\lambda(A)$, and called the eigenspace associated with $\lambda$.

Remark: Note that Definition 4.4 requires an eigenvector to be nonzero. A somewhat unfortunate consequence of this requirement is that the set of eigenvectors is not a subspace, since the zero vector is missing! On the positive side, whenever eigenvectors are involved, there is no need to say that they are nonzero.

If $A$ is a square real matrix $A \in M_n(\mathbb{R})$, then we restrict Definition 4.4 to real eigenvalues $\lambda \in \mathbb{R}$ and real eigenvectors. However, it should be noted that although every complex matrix always has at least some complex eigenvalue, a real matrix may not have any real eigenvalues. For example, the matrix

$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$

has the complex eigenvalues $i$ and $-i$, but no real eigenvalues. Thus, typically, even for real matrices, we consider complex eigenvalues.

Observe that $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ iff $Au = \lambda u$ for some nonzero vector $u \in \mathbb{C}^n$, iff $(\lambda I - A)u = 0$, iff the matrix $\lambda I - A$ defines a linear map with a nonzero kernel, that is, iff $\lambda I - A$ is not invertible. However, from Proposition 2.10, $\lambda I - A$ is not invertible iff

$\det(\lambda I - A) = 0.$

Now, $\det(\lambda I - A)$ is a polynomial of degree $n$ in the indeterminate $\lambda$, in fact of the form

$\lambda^n - \mathrm{tr}(A)\,\lambda^{n-1} + \cdots + (-1)^n\det(A).$

Thus, we see that the eigenvalues of $A$ are the zeros (also called roots) of the above polynomial. Since every complex polynomial of degree $n$ has exactly $n$ roots, counted with their multiplicity, we have the following definition:

Definition 4.5. Given any square $n \times n$ matrix $A \in M_n(\mathbb{C})$, the polynomial

$\det(\lambda I - A) = \lambda^n - \mathrm{tr}(A)\,\lambda^{n-1} + \cdots + (-1)^n\det(A)$

is called the characteristic polynomial of $A$. The $n$ (not necessarily distinct) roots $\lambda_1, \dots, \lambda_n$ of the characteristic polynomial are all the eigenvalues of $A$ and constitute the spectrum of $A$. We let

$\rho(A) = \max_{1\le i\le n} |\lambda_i|$

be the largest modulus of the eigenvalues of $A$, called the spectral radius of $A$.

Proposition 4.4. For any matrix norm $\|\cdot\|$ on $M_n(\mathbb{C})$ and for any square $n \times n$ matrix $A$, we have

$\rho(A) \le \|A\|.$

Remark: Proposition 4.4 still holds for real matrices $A \in M_n(\mathbb{R})$, but a different proof is needed, since in the above proof the eigenvector $u$ may be complex. (Hint: use Theorem 4.3.)

Now, it turns out that if $A$ is a real $n \times n$ symmetric matrix, then the eigenvalues of $A$ are all real and there is some orthogonal matrix $Q$ such that

$A = Q^\top\,\mathrm{diag}(\lambda_1, \dots, \lambda_n)\,Q,$

where $\mathrm{diag}(\lambda_1, \dots, \lambda_n)$ denotes the matrix whose only nonzero entries (if any) are its diagonal entries, which are the (real) eigenvalues of $A$. Similarly, if $A$ is a complex $n \times n$ Hermitian matrix, then the eigenvalues of $A$ are all real and there is some unitary matrix $U$ such that

$A = U^*\,\mathrm{diag}(\lambda_1, \dots, \lambda_n)\,U.$

We now return to matrix norms.
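Proposition 4.4 is easy to spot-check numerically. The sketch below compares $\rho(A)$ with three matrix norms; NumPy's `norm(A, 1)` and `norm(A, inf)` are the subordinate 1- and ∞-norms of Proposition 4.7 below, and the Frobenius norm is defined in the next section:

```python
import numpy as np

# Spot check of rho(A) <= ||A|| for several matrix norms on a random
# complex matrix (Proposition 4.4).

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))

rho = max(abs(np.linalg.eigvals(A)))          # spectral radius
for name, val in [("1-norm (max column sum)", np.linalg.norm(A, 1)),
                  ("inf-norm (max row sum)",  np.linalg.norm(A, np.inf)),
                  ("Frobenius norm",          np.linalg.norm(A, 'fro'))]:
    print(f"rho(A) = {rho:.4f} <= {name} = {val:.4f}")
```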
We begin with the so-called Frobenius norm, which is just the norm $\|\cdot\|_2$ on $\mathbb{C}^{n^2}$, where the $n \times n$ matrix $A$ is viewed as the vector obtained by concatenating together the rows (or the columns) of $A$. The reader should check that for any $n \times n$ complex matrix $A = (a_{ij})$,
$$\Big(\sum_{i,j=1}^n |a_{ij}|^2\Big)^{1/2} = \sqrt{\mathrm{tr}(A^*A)} = \sqrt{\mathrm{tr}(AA^*)}.$$

Definition 4.6. The Frobenius norm $\|\cdot\|_F$ is defined so that for every square $n \times n$ matrix $A \in M_n(\mathbb{C})$,
$$\|A\|_F = \Big(\sum_{i,j=1}^n |a_{ij}|^2\Big)^{1/2} = \sqrt{\mathrm{tr}(AA^*)} = \sqrt{\mathrm{tr}(A^*A)}.$$

The following proposition shows that the Frobenius norm is a matrix norm satisfying other nice properties.

Proposition 4.5. The Frobenius norm $\|\cdot\|_F$ on $M_n(\mathbb{C})$ satisfies the following properties:
(1) It is a matrix norm; that is, $\|AB\|_F \le \|A\|_F\|B\|_F$, for all $A, B \in M_n(\mathbb{C})$.
(2) It is unitarily invariant, which means that for all unitary matrices $U, V$, we have $\|A\|_F = \|UA\|_F = \|AV\|_F = \|UAV\|_F$.
(3) $\sqrt{\rho(A^*A)} \le \|A\|_F \le \sqrt{n}\,\sqrt{\rho(A^*A)}$, for all $A \in M_n(\mathbb{C})$.

Remark: The Frobenius norm is also known as the Hilbert–Schmidt norm or the Schur norm. So many famous names associated with such a simple thing!

We now give another method for obtaining matrix norms, using subordinate norms. First, we need a proposition that shows that in a finite-dimensional space, the linear map induced by a matrix is bounded, and thus continuous.

Proposition 4.6. For every norm $\|\cdot\|$ on $\mathbb{C}^n$ (or $\mathbb{R}^n$), for every matrix $A \in M_n(\mathbb{C})$ (or $A \in M_n(\mathbb{R})$), there is a real constant $C_A > 0$ such that
$$\|Au\| \le C_A\|u\|,$$
for every vector $u \in \mathbb{C}^n$ (or $u \in \mathbb{R}^n$ if $A$ is real).

Proposition 4.6 says that every linear map on a finite-dimensional space is bounded. This implies that every linear map on a finite-dimensional space is continuous. Actually, it is not hard to show that a linear map on a normed vector space $E$ is bounded iff it is continuous, regardless of the dimension of $E$.

Proposition 4.6 implies that for every matrix $A \in M_n(\mathbb{C})$ (or $A \in M_n(\mathbb{R})$),
$$\sup_{x \in \mathbb{C}^n,\, x \ne 0} \frac{\|Ax\|}{\|x\|} \le C_A.$$
Now, since $\|\lambda u\| = |\lambda|\,\|u\|$, it is easy to show that
$$\sup_{x \in \mathbb{C}^n,\, x \ne 0} \frac{\|Ax\|}{\|x\|} = \sup_{x \in \mathbb{C}^n,\, \|x\| = 1} \|Ax\|.$$
Similarly,
$$\sup_{x \in \mathbb{R}^n,\, x \ne 0} \frac{\|Ax\|}{\|x\|} = \sup_{x \in \mathbb{R}^n,\, \|x\| = 1} \|Ax\|.$$

Definition 4.7. If $\|\cdot\|$ is any norm on $\mathbb{C}^n$, we define the function $\|\cdot\|$ on $M_n(\mathbb{C})$ by
$$\|A\| = \sup_{x \in \mathbb{C}^n,\, x \ne 0} \frac{\|Ax\|}{\|x\|} = \sup_{x \in \mathbb{C}^n,\, \|x\| = 1} \|Ax\|.$$
The function $A \mapsto \|A\|$ is called the subordinate matrix norm or operator norm induced by the norm $\|\cdot\|$.

It is easy to check that the function $A \mapsto \|A\|$ is indeed a norm, and by definition, it satisfies the property
$$\|Ax\| \le \|A\|\,\|x\|, \quad\text{for all } x \in \mathbb{C}^n.$$
This implies that
$$\|AB\| \le \|A\|\,\|B\|$$
for all $A, B \in M_n(\mathbb{C})$, showing that $A \mapsto \|A\|$ is a matrix norm.

Observe that the subordinate matrix norm is also defined by
$$\|A\| = \inf\{\lambda \in \mathbb{R} \mid \|Ax\| \le \lambda\|x\|, \text{ for all } x \in \mathbb{C}^n\}.$$
The definition also implies that $\|I\| = 1$. The above shows that the Frobenius norm is not a subordinate matrix norm (why?).

Remark: We can also use Definition 4.7 for any norm $\|\cdot\|$ on $\mathbb{R}^n$, and define the function $\|\cdot\|_{\mathbb{R}}$ on $M_n(\mathbb{R})$ by
$$\|A\|_{\mathbb{R}} = \sup_{x \in \mathbb{R}^n,\, x \ne 0} \frac{\|Ax\|}{\|x\|} = \sup_{x \in \mathbb{R}^n,\, \|x\| = 1} \|Ax\|.$$
The function $A \mapsto \|A\|_{\mathbb{R}}$ is a matrix norm on $M_n(\mathbb{R})$, and $\|A\|_{\mathbb{R}} \le \|A\|$, for all real matrices $A \in M_n(\mathbb{R})$. However, it is possible to construct vector norms $\|\cdot\|$ on $\mathbb{C}^n$ and real matrices $A$ such that $\|A\|_{\mathbb{R}} < \|A\|$. In order to avoid this kind of difficulty, we define subordinate matrix norms over $M_n(\mathbb{C})$. Luckily, it turns out that $\|A\|_{\mathbb{R}} = \|A\|$ for the vector norms $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$.
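The answer to the “(why?)” above can be seen numerically: every subordinate norm gives $\|I\| = 1$, while $\|I\|_F = \sqrt{n} > 1$ for $n > 1$. A minimal sketch, assuming NumPy:

    import numpy as np

    n = 4
    I = np.eye(n)

    # A subordinate (operator) norm always satisfies ||I|| = 1 ...
    print(np.linalg.norm(I, 2))      # 1.0 (spectral norm, subordinate to ||.||_2)

    # ... but ||I||_F = sqrt(n) > 1 for n > 1, so the Frobenius norm
    # cannot be subordinate to any vector norm.
    print(np.linalg.norm(I, 'fro'))  # 2.0 = sqrt(4)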
Proposition 4.7. For every square matrix $A = (a_{ij}) \in M_n(\mathbb{C})$, we have
$$\|A\|_1 = \sup_{x \in \mathbb{C}^n,\, \|x\|_1 = 1} \|Ax\|_1 = \max_j \sum_{i=1}^n |a_{ij}|,$$
$$\|A\|_\infty = \sup_{x \in \mathbb{C}^n,\, \|x\|_\infty = 1} \|Ax\|_\infty = \max_i \sum_{j=1}^n |a_{ij}|,$$
$$\|A\|_2 = \sup_{x \in \mathbb{C}^n,\, \|x\|_2 = 1} \|Ax\|_2 = \sqrt{\rho(A^*A)} = \sqrt{\rho(AA^*)}.$$
Furthermore, $\|A^*\|_2 = \|A\|_2$, the norm $\|\cdot\|_2$ is unitarily invariant, which means that $\|A\|_2 = \|UAV\|_2$ for all unitary matrices $U, V$, and if $A$ is a normal matrix, then $\|A\|_2 = \rho(A)$.

The norm $\|A\|_2$ is often called the spectral norm.

Observe that property (3) of Proposition 4.5 says that
$$\|A\|_2 \le \|A\|_F \le \sqrt{n}\,\|A\|_2,$$
which shows that the Frobenius norm is an upper bound on the spectral norm. The Frobenius norm is much easier to compute than the spectral norm.

The reader will check that the above proof still holds if the matrix $A$ is real, confirming the fact that $\|A\|_{\mathbb{R}} = \|A\|$ for the vector norms $\|\cdot\|_1$, $\|\cdot\|_2$, and $\|\cdot\|_\infty$. It is also easy to verify that the proof goes through for rectangular matrices, with the same formulae. Similarly, the Frobenius norm is also a norm on rectangular matrices. For these norms, whenever $AB$ makes sense, we have
$$\|AB\| \le \|A\|\,\|B\|.$$

The following proposition will be needed when we deal with the condition number of a matrix.

Proposition 4.8. Let $\|\cdot\|$ be any matrix norm and let $B$ be a matrix such that $\|B\| < 1$.
(1) If $\|\cdot\|$ is a subordinate matrix norm, then the matrix $I + B$ is invertible and
$$\|(I + B)^{-1}\| \le \frac{1}{1 - \|B\|}.$$
(2) If a matrix of the form $I + B$ is singular, then $\|B\| \ge 1$ for every matrix norm (not necessarily subordinate).

Remark: Another result that we will not prove here, but that plays a role in the convergence of sequences of powers of matrices, is the following: for every matrix $A \in M_n(\mathbb{C})$ and for every $\epsilon > 0$, there is some subordinate matrix norm $\|\cdot\|$ such that $\|A\| \le \rho(A) + \epsilon$. Note that equality is generally not possible; consider the matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},$$
for which $\rho(A) = 0 < \|A\|$, since $A \ne 0$.

4.3 Condition Numbers of Matrices

Unfortunately, there exist linear systems $Ax = b$ whose solutions are not stable under small perturbations of either $b$ or $A$. For example, consider the system
$$\begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} 32 \\ 23 \\ 33 \\ 31 \end{pmatrix}.$$
The reader should check that it has the solution $x = (1, 1, 1, 1)$. If we perturb slightly the right-hand side, obtaining the new system
$$\begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix}\begin{pmatrix} x_1 + \Delta x_1 \\ x_2 + \Delta x_2 \\ x_3 + \Delta x_3 \\ x_4 + \Delta x_4 \end{pmatrix} = \begin{pmatrix} 32.1 \\ 22.9 \\ 33.1 \\ 30.9 \end{pmatrix},$$
the new solution turns out to be $x = (9.2, -12.6, 4.5, -1.1)$.

In other words, a relative error of the order $1/200$ in the data (here, $b$) produces a relative error of the order $10/1$ in the solution, which represents an amplification of the relative error of the order $2000$.

Now, let us perturb the matrix slightly, obtaining the new system
$$\begin{pmatrix} 10 & 7 & 8.1 & 7.2 \\ 7.08 & 5.04 & 6 & 5 \\ 8 & 5.98 & 9.98 & 9 \\ 6.99 & 4.99 & 9 & 9.98 \end{pmatrix}\begin{pmatrix} x_1 + \Delta x_1 \\ x_2 + \Delta x_2 \\ x_3 + \Delta x_3 \\ x_4 + \Delta x_4 \end{pmatrix} = \begin{pmatrix} 32 \\ 23 \\ 33 \\ 31 \end{pmatrix}.$$
This time, the solution is $x = (-81, 137, -34, 22)$. Again, a small change in the data alters the result rather drastically. Yet, the original system is symmetric, has determinant $1$, and has integer entries. The problem is that the matrix of the system is badly conditioned, a concept that we will now explain.
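This instability is easy to reproduce; the following minimal sketch, assuming NumPy, solves the original and the perturbed system.

    import numpy as np

    # The badly conditioned 4 x 4 system from the text.
    A = np.array([[10., 7.,  8.,  7.],
                  [ 7., 5.,  6.,  5.],
                  [ 8., 6., 10.,  9.],
                  [ 7., 5.,  9., 10.]])
    b = np.array([32., 23., 33., 31.])

    print(np.linalg.solve(A, b))        # [1. 1. 1. 1.]

    # Perturb the right-hand side by about 0.5 percent.
    db = np.array([0.1, -0.1, 0.1, -0.1])
    print(np.linalg.solve(A, b + db))   # [  9.2 -12.6   4.5  -1.1]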
Given an invertible matrix $A$, first assume that we perturb $b$ to $b + \delta b$, and let us analyze the change between the two exact solutions $x$ and $x + \delta x$ of the two systems
$$Ax = b, \qquad A(x + \delta x) = b + \delta b.$$
We also assume that we have some norm $\|\cdot\|$ and we use the subordinate matrix norm on matrices. From
$$Ax = b, \qquad Ax + A\,\delta x = b + \delta b,$$
we get $\delta x = A^{-1}\delta b$, and we conclude that
$$\|\delta x\| \le \|A^{-1}\|\,\|\delta b\| \quad\text{and}\quad \|b\| \le \|A\|\,\|x\|.$$

Consequently, the relative error in the result $\|\delta x\|/\|x\|$ is bounded in terms of the relative error $\|\delta b\|/\|b\|$ in the data as follows:
$$\frac{\|\delta x\|}{\|x\|} \le \big(\|A\|\,\|A^{-1}\|\big)\frac{\|\delta b\|}{\|b\|}.$$

Now let us assume that $A$ is perturbed to $A + \Delta A$, and let us analyze the change between the exact solutions of the two systems
$$Ax = b, \qquad (A + \Delta A)(x + \Delta x) = b.$$
After some calculations, we get
$$\frac{\|\Delta x\|}{\|x + \Delta x\|} \le \big(\|A\|\,\|A^{-1}\|\big)\frac{\|\Delta A\|}{\|A\|}.$$
Observe that the above reasoning is valid even if the matrix $A + \Delta A$ is singular, as long as $x + \Delta x$ is a solution of the second system. Furthermore, if $\|\Delta A\|$ is small enough, it is not unreasonable to expect that the ratio $\|\Delta x\|/\|x + \Delta x\|$ is close to $\|\Delta x\|/\|x\|$. This will be made more precise later.

In summary, for each of the two perturbations, we see that the relative error in the result is bounded by the relative error in the data multiplied by the number $\|A\|\,\|A^{-1}\|$. In fact, this factor turns out to be optimal, and this suggests the following definition:

Definition 4.8. For any subordinate matrix norm $\|\cdot\|$, for any invertible matrix $A$, the number
$$\mathrm{cond}(A) = \|A\|\,\|A^{-1}\|$$
is called the condition number of $A$ relative to $\|\cdot\|$.

The condition number $\mathrm{cond}(A)$ measures the sensitivity of the linear system $Ax = b$ to variations in the data $b$ and $A$, a feature referred to as the condition of the system. Thus, when we say that a linear system is ill-conditioned, we mean that the condition number of its matrix is large.

We can sharpen the preceding analysis as follows:

Proposition 4.9. Let $A$ be an invertible matrix and let $x$ and $x + \delta x$ be the solutions of the linear systems
$$Ax = b, \qquad A(x + \delta x) = b + \delta b.$$
If $b \ne 0$, then the inequality
$$\frac{\|\delta x\|}{\|x\|} \le \mathrm{cond}(A)\frac{\|\delta b\|}{\|b\|}$$
holds and is the best possible. This means that for a given matrix $A$, there exist some vectors $b \ne 0$ and $\delta b \ne 0$ for which equality holds.

Proposition 4.10. Let $A$ be an invertible matrix and let $x$ and $x + \Delta x$ be the solutions of the two systems
$$Ax = b, \qquad (A + \Delta A)(x + \Delta x) = b.$$
If $b \ne 0$, then the inequality
$$\frac{\|\Delta x\|}{\|x + \Delta x\|} \le \mathrm{cond}(A)\frac{\|\Delta A\|}{\|A\|}$$
holds and is the best possible. This means that given a matrix $A$, there exist a vector $b \ne 0$ and a matrix $\Delta A \ne 0$ for which equality holds. Furthermore, if $\|\Delta A\|$ is small enough (for instance, if $\|\Delta A\| < 1/\|A^{-1}\|$), we have
$$\frac{\|\Delta x\|}{\|x\|} \le \mathrm{cond}(A)\frac{\|\Delta A\|}{\|A\|}\big(1 + O(\|\Delta A\|)\big);$$
in fact, we have
$$\frac{\|\Delta x\|}{\|x\|} \le \mathrm{cond}(A)\frac{\|\Delta A\|}{\|A\|}\cdot\frac{1}{1 - \|A^{-1}\|\,\|\Delta A\|}.$$

Remark: If $A$ and $b$ are perturbed simultaneously, so that we get the “perturbed” system
$$(A + \Delta A)(x + \Delta x) = b + \delta b,$$
it can be shown that if $\|\Delta A\| < 1/\|A^{-1}\|$ (and $b \ne 0$), then
$$\frac{\|\Delta x\|}{\|x\|} \le \frac{\mathrm{cond}(A)}{1 - \|A^{-1}\|\,\|\Delta A\|}\Big(\frac{\|\Delta A\|}{\|A\|} + \frac{\|\delta b\|}{\|b\|}\Big).$$

We now list some properties of condition numbers and figure out what $\mathrm{cond}(A)$ is in the case of the spectral norm (the matrix norm induced by $\|\cdot\|_2$).
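Before turning to those properties, the bound of Proposition 4.9 can be checked on the example of the previous section, where it is nearly attained. A minimal sketch, assuming NumPy and using the 2-norm:

    import numpy as np

    A = np.array([[10., 7.,  8.,  7.],
                  [ 7., 5.,  6.,  5.],
                  [ 8., 6., 10.,  9.],
                  [ 7., 5.,  9., 10.]])
    b  = np.array([32., 23., 33., 31.])
    db = np.array([0.1, -0.1, 0.1, -0.1])

    x  = np.linalg.solve(A, b)
    dx = np.linalg.solve(A, b + db) - x

    # Proposition 4.9: ||dx||/||x|| <= cond(A) * ||db||/||b||.
    lhs = np.linalg.norm(dx) / np.linalg.norm(x)
    rhs = np.linalg.cond(A, 2) * np.linalg.norm(db) / np.linalg.norm(b)
    print(lhs, rhs)          # about 8.2 versus about 9.9: close to equality
    assert lhs <= rhs + 1e-9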
First, we need to introduce a very important factorization of matrices: the singular value decomposition, SVD for short.

It can be shown that given any $n \times n$ matrix $A \in M_n(\mathbb{C})$, there exist two unitary matrices $U$ and $V$, and a real diagonal matrix $\Sigma = \mathrm{diag}(\sigma_1, \ldots, \sigma_n)$, with $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n \ge 0$, such that
$$A = V\Sigma U^*.$$
The nonnegative numbers $\sigma_1, \ldots, \sigma_n$ are called the singular values of $A$. If $A$ is a real matrix, the matrices $U$ and $V$ are orthogonal matrices.

The factorization $A = V\Sigma U^*$ implies that
$$A^*A = U\Sigma^2 U^* \quad\text{and}\quad AA^* = V\Sigma^2 V^*,$$
which shows that $\sigma_1^2, \ldots, \sigma_n^2$ are the eigenvalues of both $A^*A$ and $AA^*$, that the columns of $U$ are corresponding eigenvectors for $A^*A$, and that the columns of $V$ are corresponding eigenvectors for $AA^*$. In the case of a normal matrix, if $\lambda_1, \ldots, \lambda_n$ are the (complex) eigenvalues of $A$, then $\sigma_i = |\lambda_i|$, $1 \le i \le n$.

Proposition 4.11. For every invertible matrix $A \in M_n(\mathbb{C})$, the following properties hold:
(1) $\mathrm{cond}(A) \ge 1$, $\mathrm{cond}(A) = \mathrm{cond}(A^{-1})$, and $\mathrm{cond}(\alpha A) = \mathrm{cond}(A)$ for all $\alpha \in \mathbb{C} - \{0\}$.
(2) If $\mathrm{cond}_2(A)$ denotes the condition number of $A$ with respect to the spectral norm, then
$$\mathrm{cond}_2(A) = \frac{\sigma_1}{\sigma_n},$$
where $\sigma_1 \ge \cdots \ge \sigma_n$ are the singular values of $A$.
(3) If the matrix $A$ is normal, then
$$\mathrm{cond}_2(A) = \frac{|\lambda_1|}{|\lambda_n|},$$
where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $A$ sorted so that $|\lambda_1| \ge \cdots \ge |\lambda_n|$.
(4) If $A$ is a unitary or an orthogonal matrix, then $\mathrm{cond}_2(A) = 1$.
(5) The condition number $\mathrm{cond}_2(A)$ is invariant under unitary transformations, which means that
$$\mathrm{cond}_2(A) = \mathrm{cond}_2(UA) = \mathrm{cond}_2(AV),$$
for all unitary matrices $U$ and $V$.

Proposition 4.11 (4) shows that unitary and orthogonal transformations are very well-conditioned, and part (5) shows that unitary transformations preserve the condition number.

In order to compute $\mathrm{cond}_2(A)$, we need to compute the top and bottom singular values of $A$, which may be hard. The inequality
$$\|A\|_2 \le \|A\|_F \le \sqrt{n}\,\|A\|_2$$
may be useful in getting an approximation of $\mathrm{cond}_2(A) = \|A\|_2\,\|A^{-1}\|_2$, if $A^{-1}$ can be determined.

Remark: There is an interesting geometric characterization of $\mathrm{cond}_2(A)$. If $\theta(A)$ denotes the least angle between the vectors $Au$ and $Av$ as $u$ and $v$ range over all pairs of orthonormal vectors, then it can be shown that
$$\mathrm{cond}_2(A) = \cot(\theta(A)/2).$$
Thus, if $A$ is nearly singular, then there will be some orthonormal pair $u, v$ such that $Au$ and $Av$ are nearly parallel; the angle $\theta(A)$ will then be small and $\cot(\theta(A)/2)$ will be large.

It should also be noted that in general (if $A$ is not a normal matrix) a matrix could have a very large condition number even if all its eigenvalues are identical! For example, if we consider the $n \times n$ matrix with $1$'s on the diagonal, $2$'s on the superdiagonal, and $0$'s elsewhere,
$$A = \begin{pmatrix}
1 & 2 & 0 & \cdots & 0 & 0 \\
0 & 1 & 2 & \cdots & 0 & 0 \\
\vdots & & \ddots & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 2 & 0 \\
0 & 0 & \cdots & 0 & 1 & 2 \\
0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix},$$
it turns out that $\mathrm{cond}_2(A) \ge 2^{n-1}$.

Going back to our matrix
$$A = \begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix},$$
which is a symmetric matrix, it can be shown that
$$\lambda_1 \approx 30.2887 > \lambda_2 \approx 3.858 > \lambda_3 \approx 0.8431 > \lambda_4 \approx 0.01015,$$
so that
$$\mathrm{cond}_2(A) = \frac{\lambda_1}{\lambda_4} \approx 2984.$$
The reader should check that for the perturbation of the right-hand side $b$ used earlier, the relative errors $\|\delta x\|/\|x\|$ and $\|\delta b\|/\|b\|$ satisfy the inequality
$$\frac{\|\delta x\|}{\|x\|} \le \mathrm{cond}(A)\frac{\|\delta b\|}{\|b\|}$$
and come close to equality.
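A minimal numerical check of these claims, assuming NumPy: it recovers cond₂(A) ≈ 2984 both from the singular values and from the eigenvalues (A is symmetric, hence normal), and illustrates the non-normal example whose eigenvalues are all equal to 1.

    import numpy as np

    A = np.array([[10., 7.,  8.,  7.],
                  [ 7., 5.,  6.,  5.],
                  [ 8., 6., 10.,  9.],
                  [ 7., 5.,  9., 10.]])

    # cond_2(A) = sigma_1 / sigma_n (Proposition 4.11 (2)).
    s = np.linalg.svd(A, compute_uv=False)   # singular values, in descending order
    print(s[0] / s[-1])                      # about 2984

    # A is symmetric, hence normal, so cond_2(A) = |lambda_1| / |lambda_n| too.
    lam = np.sort(np.abs(np.linalg.eigvalsh(A)))
    print(lam[-1] / lam[0])                  # same value

    # A non-normal matrix with all eigenvalues equal to 1 can still be
    # very badly conditioned: cond_2(J) >= 2^(n-1).
    n = 12
    J = np.eye(n) + 2 * np.eye(n, k=1)       # 1's on the diagonal, 2's above
    print(np.linalg.cond(J, 2), 2 ** (n - 1))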
PACIFIC JOURNAL OF MATHEMATICS
Vol. 47, No. 2, 1973

COMPOSANTS OF HAUSDORFF INDECOMPOSABLE CONTINUA; A MAPPING APPROACH

DAVID P. BELLAMY

"Continuum" denotes a compact connected Hausdorff space. The principal result is that every indecomposable continuum can be mapped onto Knaster's example D of a chainable indecomposable continuum with one endpoint. This result is then used to conclude that those indecomposable continua each of whose proper subcontinua is decomposable, those which are homeomorphic with each of their nondegenerate subcontinua, and those such that each two points in the same composant can be joined by a continuum which cannot be mapped onto D, have at least c composants. It is also shown that generalized arcwise connected continua are decomposable.

The author, among others, has raised the question of how many composants an indecomposable continuum must have. The technique applied to prove that metric indecomposable continua have uncountably many depends upon the second countability of the complement of a point (see, for example, [5, p. 140]). Other arguments can generalize this; for example, H. Cook has pointed out in conversation that if an indecomposable continuum has two composants, and is first countable at a point of each, then it has uncountably many composants. This can be generalized to include the case of a continuum with two composants, each of which contains a compact Gδ subset. S. Mazurkiewicz [7] has shown that a metric indecomposable continuum has c composants, sharpening slightly the result that it has uncountably many. M. E. Rudin [10] has shown that if the continuum hypothesis holds, then it is not true that every indecomposable continuum has c composants. J. W. Rogers, Jr. [9] has shown that every metric indecomposable continuum can be mapped onto the continuum D mentioned above (see [5, p. 332] or [6, p. 206] for a picture). We follow Rogers here in representing D as an inverse limit of arcs [0, 1] = I, indexed by the positive integers, where the bonding map between successive terms is always h, where h(t) = 2t for t ≤ 1/2 and h(t) = 2 − 2t for t ≥ 1/2. Throughout what follows, let I denote [0, 1]; h, this function; and D, the inverse limit of this system.

This work extends Rogers' result to the nonmetric case; also, the argument here is simpler than Rogers'. This result is then applied to obtain a partial answer to the composant question in certain cases. This work also generalizes work of G. R. Gordh, Jr., presented at the University of Oklahoma Conference on General Topology in March, 1972 [3], and answers in the negative the question of L. E. Ward, Jr. [11], concerning whether there are indecomposable continua each two points of which can be connected by a generalized arc. (A generalized arc is a continuum with exactly two noncutpoints.)

Principal Result. We first establish the following:

LEMMA. If X is an indecomposable continuum and f: X → I is a continuous function such that f⁻¹(0) and f⁻¹(1) both have nonempty interior, then there exists a continuous function g: X → I such that g⁻¹(0) and g⁻¹(1) both have nonempty interior, and such that f = h ∘ g.

Proof. Suppose f is given. Let M ∪ N be a separation of X − Int f⁻¹(1) such that both M ∩ Int f⁻¹(0) and N ∩ Int f⁻¹(0) are nonvoid. Such a separation exists since X − Int f⁻¹(1) is compact and no component of it has interior; in particular, Int f⁻¹(0) must meet at least two components and hence two quasi-components of it. Now define g: X → I by

g(x) = f(x)/2 if x ∈ M;
g(x) = 1 − f(x)/2 if x ∈ N;
g(x) = 1/2 if x ∈ f⁻¹(1).
It is readily verified that g is well-defined and continuous. Then,

Int g⁻¹(1) ⊇ N ∩ Int f⁻¹(0) ≠ ∅,
Int g⁻¹(0) ⊇ M ∩ Int f⁻¹(0) ≠ ∅.

The reader can easily verify that f = h ∘ g.

COROLLARY TO PROOF. If X is an indecomposable continuum and f: X → I is a continuous function such that f⁻¹(0) and f⁻¹(1) both have nonempty interior, and if p, q ∈ X are points such that p ∈ Int f⁻¹(0), f(q) ≠ 0, 1, and q lies in a different component of X − Int f⁻¹(1) from p, then there exists a continuous function g: X → I such that f = h ∘ g; g⁻¹(0) and g⁻¹(1) both have nonempty interior, p ∈ Int g⁻¹(0), and 1 > g(q) ≥ 1/2.

Proof. Choose M and N as in the proof of the lemma, so that p ∈ M. If q ∈ N, set M′ = M; N′ = N. If q ∈ M, then since p and q lie in different components of M and M is compact, there is a separation M′ ∪ A of M with q ∈ A and p ∈ M′. Then, let N′ = A ∪ N. In either case, proceed as in the proof of the lemma, replacing M and N by M′ and N′ respectively. It is then readily verified that 1 > g(q) ≥ 1/2 and p ∈ Int g⁻¹(0).

We are now in a position to prove:

THEOREM. Let X be a nondegenerate indecomposable continuum. Then X can be mapped continuously onto D.

Proof. Let O be a nonempty open subset of X such that Cl(O) ≠ X. Since Cl(O) is a proper closed subset of an indecomposable continuum, with nonvoid interior, it is not connected. Let A ∪ B be a separation of Cl(O) and observe that both A and B have interior. Let f₁: X → I be a Urysohn function such that f₁(x) = 0 for x ∈ A and f₁(x) = 1 for x ∈ B. Now, Int f₁⁻¹(0) ≠ ∅ ≠ Int f₁⁻¹(1). We proceed inductively. Suppose continuous functions fᵢ: X → I have been chosen for 1 ≤ i < n, such that for each i > 1, h ∘ fᵢ = fᵢ₋₁, and such that for each i, fᵢ⁻¹(0) and fᵢ⁻¹(1) both have interior. Applying the lemma, we obtain a function fₙ: X → I such that fₙ⁻¹(0) and fₙ⁻¹(1) both have interior and h ∘ fₙ = fₙ₋₁. Then the sequence of functions (fᵢ) induces a map f: X → D, which is onto since X is compact and each fᵢ is onto, and the proof is complete.

Before looking at the composant question, we sharpen this result somewhat with three corollaries to the proof, and to the theorem.

COROLLARY 1. If p and q are distinct points of the indecomposable continuum X, there is a continuous surjection f: X → D such that f(p) ≠ f(q).

Proof. Choose O so that p ∈ O and q ∉ Cl(O). Choose A, B so that p ∈ A. Apply Tietze's extension theorem to obtain a function f₁ such that f₁(x) = 0 for x ∈ A, f₁(x) = 1 for x ∈ B, and f₁(q) = 1/2. Then proceed as in the proof of the theorem. f(p) ≠ f(q) since f₁(p) = 0 while f₁(q) = 1/2.

COROLLARY 2. A nondegenerate indecomposable continuum X can be embedded into a product of copies of D such that every projection carries the image of X onto D.

Proof. Let F = {f : f is a continuous mapping of X onto D}. Define k: X → Π_{f∈F} D by (k(x))_f = f(x). By Corollary 1, k is 1–1, and by compactness it is an embedding. The f-projection of k(X) into D yields f(X), which is all of D.

COROLLARY 3. If p and q belong to different composants of the indecomposable continuum X, there is a continuous mapping f: X → D such that f(p) and f(q) lie in different composants of D.

Proof. Obtain f₁ as in the proof of Corollary 1. Then suppose fᵢ has been chosen for 1 ≤ i < n such that p ∈ Int fᵢ⁻¹(0) and such that 1 > fᵢ(q) ≥ 1/2 for each i, in addition to the other properties assumed in the proof of the theorem. Then, since p and q lie in different composants of X, they must lie in different components of X − Int fₙ₋₁⁻¹(1).
The corollary to the proof of the Lemma enables us to choose fₙ so that p ∈ Int fₙ⁻¹(0) while 1 > fₙ(q) ≥ 1/2. Then, with the map f: X → D so obtained, suppose W ⊆ D were a proper subcontinuum with f(p) ∈ W and f(q) ∈ W. Let Wᵢ ⊆ I be the ith projection of W. Since f(p) is the point each of whose coordinates is zero, 0 ∈ Wᵢ for each i. Since W ≠ D, there is a j such that Wⱼ ≠ I. Then 1 ∉ Wⱼ. Since h(1/2) = 1, 1/2 ∉ Wⱼ₊₁. Thus Wⱼ₊₁ ⊆ [0, 1/2) while fⱼ₊₁(q) ≥ 1/2, a contradiction, since fⱼ₊₁(q) ∈ Wⱼ₊₁. Hence, f(p) and f(q) belong to different composants of D.

The Composant Problem. The theorem of the preceding section now allows us to make some observations about composants and other internal structures of indecomposable continua.

DEFINITION 1. If X and Y are continua and f: X → Y is a continuous surjection, f maps X irreducibly onto Y iff f(W) is a proper subcontinuum of Y whenever W is a proper subcontinuum of X.

PROPOSITION. If X and Y are indecomposable continua and f: X → Y maps X irreducibly onto Y, then X has at least as many composants as Y.

Proof. If p, q lie in the same composant of X, there is a proper subcontinuum W of X containing p and q. Hence, f(W) is a proper subcontinuum of Y containing both f(p) and f(q), which thus lie in the same composant of Y. Thus, if K is a composant of Y, f⁻¹(K) is a union of composants of X. Applying the axiom of choice, we can define a 1–1 function g from the set of composants of Y into the set of composants of X by choosing g(K) to be some composant of X contained in f⁻¹(K) for each composant K of Y.

COROLLARY 4. If an indecomposable continuum X can be mapped irreducibly onto D, X has at least c composants.

COROLLARY 5. If X is a nondegenerate indecomposable continuum, X contains an indecomposable subcontinuum M with at least c composants.

Proof. Let f: X → D be onto. Consider {W ⊆ X : W is a continuum and f(W) = D}. By compactness and Zorn's lemma, this set contains a minimal element M; M is necessarily indecomposable, and f|M: M → D maps M irreducibly onto D. We are done by Corollary 4.

DEFINITION 2. An indecomposable continuum is irreducibly indecomposable iff each of its nondegenerate proper subcontinua is decomposable.

DEFINITION 3. A continuum is hereditarily equivalent iff it is homeomorphic with each of its nondegenerate subcontinua. (See [4] and [8].)

COROLLARY 6. An irreducibly indecomposable continuum X which is nondegenerate has at least c composants.

Proof. The M in Corollary 5 must in this case be X.

COROLLARY 7. A nondegenerate hereditarily equivalent indecomposable continuum X has at least c composants.

Proof. The M in Corollary 5 is in this case homeomorphic with X and so has the same number of composants as X.

We also obtain the following somewhat more technical results.

COROLLARY 8. If X is a nondegenerate indecomposable continuum such that whenever p, q belong to the same composant of X, p and q lie together in a continuum W(p, q) which cannot be mapped onto D, then X has at least c composants.

Proof. Let f: X → D be onto and let M ⊆ X be a continuum such that f(M) = D. Suppose M ≠ X. Then M lies in some composant K of X. Let p, q ∈ M such that f(p) and f(q) belong to different composants of D. There is a continuum W(p, q) which cannot be mapped onto D, while p, q ∈ W(p, q). Then f(W(p, q)) is a proper subcontinuum of D meeting two composants of D, a contradiction. Thus, M = X and, by the Proposition, the proof is done.
In particular, then, if each two points of each composant of an indecomposable continuum lie in a continuum which is locally connected, is a union of fewer than c locally connected continua, or is hereditarily decomposable, then the continuum has at least c composants. (The hereditarily decomposable case was pointed out by L. E. Rogers and G. R. Gordh, Jr., in conversation with the author.)

COROLLARY 9. A continuum X, each two points of which can be joined by a continuum which cannot be mapped onto D, is decomposable.

Proof. Such a continuum cannot be mapped onto D, for if p, q ∈ X and f: X → D is onto, and f(p) and f(q) lie in different composants of D, then each continuum containing both p and q is mapped onto D by f.

COROLLARY 10. A continuum each two points of which can be joined by a generalized arc is decomposable.

The following corollaries resolve the question in [11], mentioned earlier. We close with them.

COROLLARY 11. A hereditarily unicoherent generalized arcwise connected continuum is hereditarily decomposable.

Proof. If X is such a continuum and W ⊆ X is a subcontinuum, then for any p, q in W there is a generalized arc A in X from p to q. By irreducibility of A and hereditary unicoherence, A ⊆ W. Thus W is generalized arcwise connected and hence decomposable.

COROLLARY 12. Each generalized arcwise connected hereditarily unicoherent continuum has the fixed point property for continuous multi-valued functions.

REMARK. This is a restatement of Theorem 2 of [11, p. 926] in light of Corollary 11.

REFERENCES

1. D. P. Bellamy, A non-metric indecomposable continuum, Duke Math. J., 38, No. 1 (1971), 15–20.
2. D. P. Bellamy, Topological Properties of Compactifications of a Half-Open Interval, Ph.D. Thesis, Michigan State University, 1968.
3. G. R. Gordh, Jr., Indecomposable Hausdorff continua and mappings of connected linearly ordered spaces, presented to the University of Oklahoma Conference on General Topology, March, 1972.
4. G. W. Henderson, Proof that every compact decomposable continuum which is topologically equivalent to each of its nondegenerate subcontinua is an arc, Annals of Math., 72, No. 3 (1960), 421–428.
5. J. G. Hocking and G. S. Young, Topology, Addison-Wesley, Reading, Mass., 1969.
6. K. Kuratowski, Topology, Vol. II, Academic Press, New York and London, 1968.
7. S. Mazurkiewicz, Sur les continus indécomposables, Fund. Math., 10 (1927), 305–310.
8. E. E. Moise, An indecomposable plane continuum which is homeomorphic to each of its nondegenerate subcontinua, Trans. Amer. Math. Soc., 63 (1948), 581–594.
9. J. W. Rogers, Jr., On mapping indecomposable continua onto certain chainable indecomposable continua, Proc. Amer. Math. Soc., 25, No. 2 (1970), 449–456.
10. M. E. Rudin, Composants and βN, Proceedings of the Washington State University Conference on General Topology, (1970), 117–119.
11. L. E. Ward, Jr., A fixed point theorem for multi-valued functions, Pacific J. Math., 8 (1958), 921–927.

Received June 21, 1972, and in revised form August 30, 1972.

UNIVERSITY OF DELAWARE
Front Cardiovasc Med. 2021 Oct 15;8:764038. doi: 10.3389/fcvm.2021.764038

Regulation of PCSK9 Expression and Function: Mechanisms and Therapeutic Implications

Xiao-dan Xia 1,†, Zhong-sheng Peng 2,†, Hong-mei Gu 3, Maggie Wang 3, Gui-qing Wang 1,✉, Da-wei Zhang 3,✉

1 Department of Orthopedics, The Sixth Affiliated Hospital of Guangzhou Medical University, Qingyuan People's Hospital, Qingyuan, China
2 School of Economics, Management and Law, University of South China, Hengyang, China
3 Group on the Molecular and Cell Biology of Lipids, Department of Pediatrics, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada

Edited by: Yiliang Chen, Medical College of Wisconsin, United States
Reviewed by: Fang Li, Columbia University Irving Medical Center, United States; Daisy Sahoo, Medical College of Wisconsin, United States

✉ Correspondence: Gui-qing Wang [email protected]; Da-wei Zhang [email protected]

This article was submitted to Lipids in Cardiovascular Disease, a section of the journal Frontiers in
Cardiovascular Medicine.

† These authors have contributed equally to this work.

Received 2021 Aug 24; Accepted 2021 Sep 16; Collection date 2021.

Copyright © 2021 Xia, Peng, Gu, Wang, Wang and Zhang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

PMCID: PMC8589637  PMID: 34782856

Abstract

Proprotein convertase subtilisin/kexin type 9 (PCSK9) promotes degradation of the low-density lipoprotein receptor (LDLR) and plays a central role in regulating plasma levels of LDL cholesterol, lipoprotein(a) and triglyceride-rich lipoproteins, increasing the risk of cardiovascular disease. Additionally, PCSK9 promotes degradation of major histocompatibility protein class I and reduces intratumoral infiltration of cytotoxic T cells. Inhibition of PCSK9 increases expression of LDLR, thereby reducing plasma levels of lipoproteins and the risk of cardiovascular disease. PCSK9 inhibition also increases cell surface levels of major histocompatibility protein class I in cancer cells and suppresses tumor growth. Therefore, PCSK9 plays a vital role in the pathogenesis of cardiovascular disease and cancer, the top two causes of morbidity and mortality worldwide. Monoclonal anti-PCSK9 antibody-based therapy is currently the only available treatment that can effectively reduce plasma LDL-C levels and suppress tumor growth, but its high expense limits its widespread use. PCSK9 promotes lysosomal degradation of its substrates, but the detailed molecular mechanism by which it does so is not completely understood, impeding the development of more cost-effective alternative strategies to inhibit PCSK9. Here, we review our current understanding of PCSK9 and focus on the regulation of its expression and functions.

Keywords: lipid metabolism, cardiovascular disease, atherosclerosis, cancer immunotherapy, PCSK9, LDL receptor, major histocompatibility protein class I

Introduction

Plasma low-density lipoprotein cholesterol (LDL-C) levels are positively correlated with the risk of cardiovascular disease (CVD). Statins, currently the most-prescribed lipid-lowering drugs, reduce cardiovascular events by 20–40%. However, there is mounting evidence that about 50% of statin-treated patients and 80% of very high-risk patients do not achieve the recommended cholesterol values even with the highest tolerated dose. Furthermore, up to 20% of statin-treated people show statin intolerance, and about 10–12% of cases exhibit maladaptive side effects (1). Thus, there is an urgent need to develop non-statin-based cholesterol-lowering drugs.

Proprotein convertase subtilisin/kexin type 9 (PCSK9) plays a critical role in regulating plasma cholesterol homeostasis through promoting LDL receptor (LDLR) degradation (Figure 1). Gain-of-function mutations in PCSK9 cause autosomal dominant hypercholesterolemia, while loss-of-function mutations are associated with reduced plasma levels of LDL-C (2–6). PCSK9 also promotes major histocompatibility protein class I (MHCI) degradation and suppresses immune attacks on tumors (7) (Figure 1).
Therefore, PCSK9 plays a central role in the pathogenesis of CVD and cancers. In addition, it has been reported that PCSK9, especially extra-hepatic PCSK9, can recruit inflammatory cells and induce local inflammation (8, 9). Here, we summarize the latest advances in PCSK9 research and focus on its role in lipid metabolism and cancer immunotherapy and on the molecular mechanisms for the regulation of PCSK9 expression.

Figure 1. PCSK9, LDLR, and MHCI. PCSK9 is auto-cleaved in the ER. Mature PCSK9 is transported to the Golgi and then secreted. PCSK9 binds to LDLR and MHCI on the cell surface. The complex is then delivered to endosomes via endocytosis and transported to the lysosome for degradation. LDLR binds to its ligands, such as LDL, VLDL, and Lp(a); the receptor/ligand complex then enters cells via receptor-mediated endocytosis and is delivered to the endosome. In the acidic endosomal environment, the ligand, such as LDL, is released from LDLR and transported to the lysosome for degradation, while LDLR is recycled to the plasma membrane. PCSK9-promoted degradation of LDLR increases plasma levels of LDL and Lp(a). Pro, prodomain; CAT, catalytic domain; CTD, C-terminal domain.

PCSK9 Function

Human and mouse PCSK9 is encoded by the PCSK9/Pcsk9 gene located at chromosome 1p32.3 and 4C7, containing thirteen and twelve exons that encode a 692- and 694-amino acid PCSK9 protein, respectively (10). PCSK9 is highly conserved among mammals, including chimpanzee, monkey, camel, alpaca, rat, and mouse, with an approximate amino acid identity of 99, 96, 82, 81, 77, and 77%, respectively. The majority of identified gain-of-function and loss-of-function mutations occur in entirely conserved residues, such as gain-of-function mutations D35Y, L108R, S127R, D129G, N157K, R215H, F216L, R218S, A220T, R357H, D374Y, N425S, R468W, R496W and R499H, and loss-of-function mutations R104C, R105Q, G106R, G236S, L253F, G316C, N354I and S462P (2, 4, 11, 12). However, loss-of-function mutations R46L and R93C and gain-of-function mutations R96L, A168E, R499H, and S636R are not conserved between human and mouse or rat PCSK9. The corresponding residues of Arg46, Arg93, Arg96, Ala168, Arg499, and Ser636 in mouse/rat PCSK9 are Pro49/Pro48, Gln93/Gln92, His99/His95, Thr211/Thr167, Trp512/Arg498, and Ser639/Ser635, respectively. Whether the difference in these residues affects PCSK9 function, however, is unclear.

PCSK9 contains a signal peptide [amino acid (aa) 1-30], a prodomain (Pro) (aa 31-152), a catalytic domain (CAT) (aa 153-425) and a Cys- and His-rich C-terminal domain (CTD) (aa 426-692) (13) (Figure 1). The CAT contains a classical serine protease catalytic triad of Asp186, His226 and Ser386 and is highly conserved with the CAT of other proprotein convertases. PCSK9 is self-cleaved by the CAT at the FAQ152/SIPK site in the endoplasmic reticulum (ER). After autocleavage, the prodomain is associated with the CAT and masks the catalytic activity of PCSK9. This process is required for PCSK9 maturation and secretion. Compared to the other subtilisin-like serine protease family members, the CTD of PCSK9 is unique and contains multiple potential protein-protein interaction motifs (13). The CTD is positively charged and may interact with the negatively charged ligand-binding repeats of LDLR in the acidic endosomal environment, blocking recycling of the receptor (14, 15). In addition, partial deletion of the CTD markedly damages PCSK9 secretion, indicating its essential role in this process (16, 17).
However, the underlying mechanism is unclear. Our recent study suggests that the CTD mediates PCSK9 secretion, possibly via a coat protein complex II (COPII) component, Sec24 (17).

PCSK9 induces degradation of LDLR and its family members, including very low-density lipoprotein receptor (VLDLR), apolipoprotein E receptor 2 (ApoER2), and LDLR-related protein 1 (LRP1) (10, 18–20), thereby playing an essential role in lipid metabolism. However, PCSK9 at a physiological concentration can effectively degrade LDLR but not other LDLR family members in cultured cells (19, 20). Similarly, PCSK9 degrades LDLR but not LRP1 in mouse liver (21), but it can regulate visceral adipogenesis, likely through promoting VLDLR degradation in mouse adipose tissues (22). Conversely, loss of functional PCSK9 in humans does not cause any known abnormality except for reduced plasma cholesterol levels (3, 6). Nevertheless, these findings indicate that PCSK9's action on its substrates is cell/tissue-type and/or species-dependent.

The CAT of PCSK9 directly binds to the epidermal growth factor precursor homology domain A (EGF-A) of LDLR on the cell surface. After endocytosis, PCSK9 remains bound to LDLR in the acidic endosome, preventing LDLR from recycling and redirecting the receptor to the lysosome for degradation (Figure 1) (19, 21, 23, 24). PCSK9 can also promote LDLR degradation via an intracellular pathway, especially when overexpressed in cultured cells (25). Of note, evolocumab and alirocumab are humanized monoclonal anti-PCSK9 antibodies targeting the catalytic domain of PCSK9. They block PCSK9 binding to LDLR on the cell surface and do not affect the intracellular pathway. On the other hand, Inclisiran is a small siRNA that targets PCSK9 mRNA and reduces its expression; it can inhibit both the intracellular and extracellular pathways. However, evolocumab, alirocumab, and Inclisiran all markedly reduce plasma LDL cholesterol levels in patients to a similar degree (26–28). Therefore, the extracellular pathway appears to be mainly responsible for the LDL-lowering effect of PCSK9 inhibition.

PCSK9 directs its substrates for lysosomal degradation (Figure 1), which does not require its proteolytic activity (29). Both caveolae-dependent and clathrin-mediated endocytosis have been reported to play an important role in the endocytosis of the PCSK9/LDLR complex in HepG2 cells (30–32). Additionally, DeVay et al. reported that amyloid beta precursor-like protein 2 (APLP2) directly bound to PCSK9 and targeted the PCSK9/LDLR complex to lysosomes for degradation in HepG2 cells (33), while other studies showed that APLP2 did not affect PCSK9-promoted LDLR degradation in mice, HepG2, or Huh7 cells (34, 35). Differences in approaches and/or models used in these studies may cause these discrepancies. However, further studies are required to elucidate the underlying mechanism for PCSK9-promoted LDLR degradation.

PCSK9 and LDL

Plasma LDL is eliminated from circulation primarily via hepatic LDLR. Upon binding, the LDL-LDLR complex is internalized via clathrin-coated pits and subsequently delivered to endosomes, where LDL is released from LDLR and then transported to lysosomes for degradation, while LDLR is recycled to the cell surface to clear more LDL. Mutations in LDLR cause familial hypercholesterolemia (FH), characterized by elevated plasma levels of cholesterol, particularly LDL cholesterol, and increased risk of CVD (36). PCSK9 promotes LDLR lysosomal degradation.
Circulating PCSK9 preferentially degrades LDLR in mouse liver (37), which may be due to hepatic heparan sulfate proteoglycans (HSPG). HSPG can recruit circulating PCSK9 to hepatocytes, enhancing its action on LDLR (38). Knockout of PCSK9 increases hepatic LDLR levels, reduces plasma LDL cholesterol levels, and improves sensitivity to statin treatment in mice (37).

LDL is derived from the catabolism of VLDL, a triglyceride-enriched lipoprotein exclusively secreted by the liver. Triglycerides in VLDL are hydrolyzed by lipoprotein lipase, resulting in intermediate-density lipoprotein (IDL), which can be further metabolized to LDL (39). PCSK9 can directly interact with apoB100, the main structural lipoprotein on VLDL, and inhibit apoB100 degradation, thereby promoting its secretion. Knockout of PCSK9 reduces hepatic apoB secretion and plasma LDL cholesterol levels in Ldlr−/−/Apobec1−/− mice (40). Conversely, gain-of-function mutant PCSK9 increases apoB100 secretion in a rat hepatoma-derived cell line, McArdle-7777 cells (41). These findings indicate that, in addition to reducing the availability of hepatic LDLR, PCSK9 may promote the production of LDL through increasing secretion of VLDL. However, hepatocytes typically produce apoB100 in abundance, and the rate-limiting step in VLDL secretion is lipidation of apoB100. Therefore, the physiological role of PCSK9-promoted VLDL secretion may not be significant in vivo.

PCSK9 is also expressed in extra-hepatic cells and tissues, such as vascular smooth muscle cells (VSMCs), macrophages, endothelial cells, the pancreas, the kidneys, the intestine and the central nervous system (10). The arterial vessel has the maximal secretion of PCSK9 at the lowest level of shear stress, which occurs in the aortic branching and aorta-iliac bifurcation regions of the mouse aorta. Cultured VSMCs produce substantially more PCSK9 than endothelial cells (42). Elevated PCSK9 in VSMCs can reduce LDLR levels in VSMCs and macrophages (42, 43), which may impair LDL clearance and accelerate retention of LDL in VSMCs and macrophages at the location of arterial bifurcation. PCSK9 is also expressed in and secreted from pancreatic beta cells. However, inhibition of PCSK9 does not affect insulin secretion in the human EndoC-betaH1 beta cell line and mice, even though PCSK9 promotes LDLR degradation in beta cells (44). Together, these findings indicate a cell-type-specific function of PCSK9.

PCSK9 and Triglyceride-Rich Lipoproteins

Elevated plasma levels of triglyceride-rich lipoproteins and their remnants are an independent risk factor for atherosclerosis and CVD. Hepatic LDLR binds to apoE on remnant lipoprotein particles to mediate their clearance (36). Therefore, PCSK9 can affect plasma triglyceride and remnant cholesterol levels through the LDLR pathway. Elevated plasma PCSK9 levels are positively associated with plasma TG levels in humans upon a short-term high-fructose intake (45). Treatments with PCSK9 inhibitors increase clearance of VLDL remnants in patients (46). Alirocumab, a fully human PCSK9 monoclonal antibody, reduces LDL particles by 56.3% in human patients. This reduction is partly due to an increase in the clearance rate of IDL particles, thereby decreasing the conversion of IDL to LDL (27).

PCSK9 is expressed in the intestine and can affect chylomicron metabolism. Knockout of PCSK9 in mice significantly reduces lymphatic apoB48 secretion and increases secretion of TG-rich large chylomicrons.
Clearance of chylomicron remnants is also increased in Pcsk9−/− mice (47). Rashid et al. further demonstrated that PCSK9 promoted chylomicron secretion through both LDLR-dependent and -independent pathways in mice and a human enterocyte cell line, CaCo-2 cells, for example by increasing the expression of apoB, microsomal triglyceride transfer protein and lipogenic genes in enterocytes (48). However, inhibition of PCSK9 by evolocumab or alirocumab does not significantly affect VLDL production or postprandial plasma levels of apoB48 and triglycerides in healthy humans or patients with hypercholesterolemia (49). Conversely, in patients with type-II diabetes mellitus, evolocumab reduces postprandial apoB48 levels even though the effect on postprandial triglyceride levels is not significant, while alirocumab can significantly reduce fasting plasma apoB48 and TG levels and postprandial TG levels (50). Plasma PCSK9 levels are also correlated with plasma apoB48-containing TG-rich lipoproteins in men with insulin resistance (51). However, the impact of PCSK9 on plasma levels of TG-rich lipoproteins, such as VLDL and chylomicrons, is much less than its effect on plasma LDL (52). This may be because VLDL and chylomicron remnants can also be effectively cleared by an LDLR-independent pathway, such as LRP1 (53). In summary, extracellular PCSK9 regulates LDLR-mediated catabolism, and intracellular PCSK9 modulates apoB secretion; the two pathways might act in a complementary fashion to regulate TG-rich lipoprotein metabolism, with the extracellular pathway as the primary contributor.

PCSK9 and Lipoprotein(a)

Elevated plasma lipoprotein(a) [Lp(a)] levels are a highly prevalent risk factor for cardiovascular disease, especially for myocardial infarction, atherosclerotic stenosis and aortic valve stenosis. Lp(a) is an apoB100-containing lipoprotein particle covalently linked to the plasminogen-like glycoprotein apo(a) by a disulfide bond (54). Statin treatment and lifestyle interventions hardly affect circulating Lp(a) levels, which makes successfully managing elevated Lp(a) levels in patients a real challenge. Conversely, PCSK9 inhibitors dramatically reduce plasma Lp(a) levels, by up to ~35% in patients (54, 55). Inhibition of PCSK9 reduces the risk of coronary heart disease to a much greater degree in patients with a high plasma Lp(a) level compared to patients with a low plasma Lp(a) level (23 vs. 7%) (54). However, how PCSK9 regulates Lp(a) levels is unclear.

Plasma Lp(a) levels are determined by its production and clearance. Lp(a) is removed from circulation through LDLR, SR-BI, and LRP1 (54). Lp(a) levels are increased in FH patients who carry loss-of-function mutant LDLR. Overexpression of LDLR enhances Lp(a) clearance in mice. PCSK9 promotes LDLR degradation and reduces Lp(a) catabolism in HepG2 cells and primary fibroblasts (31). These findings indicate that PCSK9 can regulate plasma Lp(a) levels via an LDLR-dependent pathway. However, while statin treatment increases LDLR levels, it has no significant effect on plasma Lp(a) levels in patients. Furthermore, lymphocytes from patients with homozygous FH can effectively take up Lp(a) particles, and PCSK9 inhibitors can lower circulating Lp(a) in homozygous FH patients (56), indicating an LDLR-independent pathway. Lp(a) usually cannot compete with LDL for binding to LDLR. It has been proposed that PCSK9 inhibition can promote hepatic clearance of Lp(a) through LDLR-mediated endocytosis when plasma LDL levels are low (57).
Nevertheless, although the mechanism by which PCSK9 inhibitors reduce Lp(a) levels remains to be determined, the fact that PCSK9 inhibitors provide an additional beneficial effect in lowering circulating Lp(a) may confer protection against CVD from a clinical perspective. Further work is needed to understand the role of PCSK9 in the overall metabolism of apoB-containing lipoproteins, especially Lp(a).

PCSK9 and Cancer Cell Immunity

MHCI on the cell surface presents specific antigens to T-cell receptors (TCR) on CD8+ T cells, activating CD8+ T cell-mediated cell killing. After antigen presentation, MHCI enters cells via endocytosis and is recycled to present new antigens. On the other hand, programmed death-ligand 1 (PD-L1) on the cell surface interacts with the programmed cell death protein 1 receptor (PD-1) on T cells to act as an immune checkpoint, which suppresses the immune response and prevents indiscriminate attacks (58). During tumor development, cancer cells evolve various mechanisms to escape immune attacks, such as stimulating immune checkpoint targets and reducing tumor-specific antigen (TSA) presentation. Monoclonal anti-PD-1 or anti-PD-L1 antibodies, which inhibit the PD-1 pathway and promote the antitumor immune response, have been approved to treat various types of cancers, such as melanoma, bladder cancer, non-small cell lung cancer, and renal cell carcinoma. On the other hand, MHCI on the cancer cell surface presents TSA to CD8+ cells, activating CD8+ cell-mediated cancer cell killing (58).

Recently, Liu et al. reported that PCSK9 bound to MHCI on the cancer cell surface and redirected it to the lysosome for degradation, thereby reducing cell surface MHCI levels and TSA presentation. Knockout of PCSK9 or inhibition of circulating PCSK9 increased CD8+ T cell intratumoral infiltration and enhanced antitumor activity of CD8+ T cells in mice. This suppressed tumor growth of several mouse cancer cell lines, including 4T1 (breast cancer), MC38 (colon adenocarcinoma), and the PD-1 inhibitor-resistant cancer cell line, MC38R, in mice (7). In addition, knockout of PCSK9 suppressed tumor growth in Ldlr−/− mice, indicating an LDLR-independent mechanism. However, Yuan et al. reported that inhibition of PCSK9 attenuated MC38 tumor growth in an LDLR-dependent manner. They found that LDLR directly interacted with the T-cell receptor complex (TCR) and increased its cell surface levels. Inhibition of PCSK9 increased LDLR and TCR levels in MC38 tumors, enhancing TCR signaling and CD8+ T cell-dependent cancer cell killing in mice. The reason for the discrepancy is unclear. Yuan et al. did not report whether MHCI levels in MC38 tumors were affected by PCSK9, but they found that PCSK9 inhibition did not alter MHCI levels in B16F10 melanoma cells (59), indicating that PCSK9 regulates MHCI levels in a tumor/cell type-specific manner. These findings suggest that PCSK9 may control tumor growth through the LDLR and the MHCI pathway independently and/or collaboratively.

PCSK9 produced locally in vascular cells and cardiomyocytes can promote inflammation via the NF-κB signaling pathway (60, 61). Chronic inflammation increases the risk of cancer. PCSK9 expression is high in various cancers, such as hepatocellular carcinoma, gastric adenocarcinoma and prostate cancer cell lines (62–64). Zhang et al. reported that PCSK9 expression is positively correlated with poor prognosis. The authors found that PCSK9 suppressed apoptosis in cultured hepatoma-derived cell lines through the Bax/Bcl-2/Caspase9/3 pathway (64).
Consistently, inhibition of PCSK9 by siRNA promotes apoptosis in a human lung adenocarcinoma cell line, A549 cells, via activation of caspase-3 and stimulation of ER stress (62). On the other hand, silencing PCSK9 by siRNA reduces radiation-induced apoptosis in the prostate cancer cell lines PC-3 and LNCaP, and thus enhances cell viability (65). However, the authors did not investigate the potential contribution of inflammation in these studies. Nevertheless, these studies suggest that PCSK9 plays a complex role in cancer development via different mechanisms. Cancer risk analysis of subjects carrying loss-of-function and gain-of-function mutations in PCSK9 will further reveal and confirm the role of PCSK9 in cancer progression.

Regulation of PCSK9

PCSK9 plays a critical role in regulating circulating lipid homeostasis and MHCI-dependent immune responses. The complexity of PCSK9's functions indicates that its activity is strictly regulated by various mechanisms at multiple levels.

Regulation of PCSK9 Expression

Epigenetically, binding of forkhead box O (FoxO) 3 to the promoter of PCSK9 recruits Sirt6 to deacetylate histone H3, suppressing PCSK9 expression (Figure 2) (66). On the other hand, histone nuclear factor P (HINFP) binds to a HINFP motif 20 bp upstream of the sterol regulatory element (SRE) motif in the PCSK9 promoter, promoting histone H4 acetylation to activate sterol regulatory element-binding protein 2 (SREBP2)-mediated upregulation of PCSK9 transcription (67). Furthermore, the PCSK9 promoter is methylated. Alcohol use disorder (AUD) causes hypomethylation in the PCSK9 promoter and consequently reduces PCSK9 expression and plasma cholesterol levels in a mouse model of AUD, which may partially contribute to the protective effect on CVD risk observed in light alcohol users (68).

Figure 2. Regulation of PCSK9 expression. Transcription factors, such as SREBP2, HNF-1α, SP-1 and E2F2, upregulate PCSK9 transcription. FGF21 and resistin inhibit and increase SREBP2-mediated transcription of PCSK9, respectively. Berberine reduces PCSK9 expression via suppressing the activity of SREBP2 and HNF-1α on PCSK9 transcription. 9K suppresses PCSK9 expression through SP1 and HNF-1α, while curcumin and 7030B-C5 inhibit HNF-1α-induced transcription of PCSK9. Alcohol use causes hypomethylation of the PCSK9 promoter and then reduces PCSK9 expression. Insulin activates mTORC1 and then PKCδ to suppress PCSK9 transcription via HNF-1α. miR-483-5p, miR-1228-3p, miR-143-5p, miR-564, and miR-4721 bind to the 3'UTR of PCSK9 mRNA, reducing PCSK9 expression, while miR-27a somehow increases PCSK9 expression. R-IMPP inhibits the 80S ribosome and reduces PCSK9 expression. After autocleavage in the ER, PCSK9 is transported to the Golgi via classical COPII vesicles. There, PCSK9 undergoes posttranslational modifications, such as phosphorylation, glycosylation, and sulfation. Mature PCSK9 is then secreted into the extracellular environment.

At the transcriptional level, both SREBP1 and SREBP2 have been reported to bind to the SRE in the PCSK9 promoter and thus upregulate PCSK9 expression in cultured cells; however, PCSK9 is predominantly regulated by SREBP2 in vivo (69–71). Statin treatment activates the transcriptional activity of SREBP2 and thus increases the expression of LDLR and PCSK9 (71). However, Poirier et al. reported that the expression of PCSK9 in the rodent central nervous system was regulated in a SREBP2-independent manner (72).
The PCSK9 promoter also contains a binding site of the transcription factor hepatocyte nuclear factor 1 alpha (HNF1α) (70, 73). Silencing of HNF1α, but not HNF1β, significantly reduced PCSK9 expression. Furthermore, insulin increases mTORC1 signaling and activates PKCδ, which reduces HNF1α-mediated expression of PCSK9 and increases hepatic LDLR levels (74). Li et al. reported that HNF1α worked cooperatively with SREBP2 to activate PCSK9 transcription, since mutations in the HNF1α-binding site significantly reduced SREBP2-mediated upregulation of PCSK9 transcription (73). However, the HNF1α binding site is just 28 bp upstream of the SREBP2 binding site. Mutations in the HNF1α binding site may affect the integrity of the SRE and thus indirectly impair SREBP2 binding. The HNF1α binding site in the promoter of PCSK9 contains a consensus FoxO binding site (Figure 2). FoxO3 can inhibit PCSK9 expression competitively via inhibiting HNF1α-mediated upregulation of PCSK9 (66). In addition, Lai et al. reported that the transcription factor E2F2 could bind to the PCSK9 promoter region and upregulate its expression under conditions of feeding or high cellular cholesterol levels (75). The promoter region of PCSK9 also contains NF-Y and SP1 binding sites upstream of the SRE (71). The putative NF-Y binding site appears not to affect PCSK9 expression. However, the SP1 site may mediate basal transcription of PCSK9, since it is not required for the sterol-dependent regulation of PCSK9 expression, but mutations in this site reduce PCSK9 expression (69). A variant, C-332C>A, in the SP1 binding site increases PCSK9 expression by approximately 2.5-fold, independent of lovastatin treatment (76).

Several lines of evidence demonstrate the regulation of PCSK9 transcription by small molecules (Figure 2). Curcumin and the methanol extract of Cajanus cajan L. leaves reduce HNF1α levels and downregulate PCSK9 transcription in HepG2 cells (77, 78). Epigallocatechin gallate (humans, Sprague-Dawley rats, HepG2 and Huh7 cells), ascorbic acid (mice, HepG2 and Huh7 cells), pinostrobin (HepG2 cells), and tanshinone IIA (HepG2 cells) reduce PCSK9 expression in a FoxO3a-dependent manner, probably via attenuating HNF1α-mediated activation of PCSK9 expression and/or methylation in the promoter region of PCSK9 (79–82). A small molecule, 7030B-C5, also reduces PCSK9 expression in HepG2 cells and mice, mainly through the FoxO1 and HNF1α pathway (83). In addition, berberine reduces PCSK9 expression mainly through attenuating SREBP2- and HNF1α-mediated upregulation of PCSK9 transcription in HepG2 cells (73), which may account for its cholesterol-lowering effect. Similarly, a berberine derivative, 9K, downregulates PCSK9 expression via suppressing the transcriptional activity of HNF1α and/or SP1 in HepG2 cells (84). Fibroblast growth factor 21 inhibits the transcriptional activity of SREBP2, thereby reducing PCSK9 expression in mouse liver (85). In addition, glucagon, bile acids, fibrate, and oncostatin M have been reported to inhibit PCSK9 expression at the transcriptional level in HepG2 cells (86–88), but the underlying mechanisms are unclear. On the other hand, resistin, a small cysteine-rich protein secreted from macrophages and adipose tissues, increases PCSK9 transcription via the SREBP2 pathway in HepG2 cells and primary human hepatocytes (89). Nevertheless, these findings indicate the potential of inhibiting PCSK9 transcription as an avenue to lower plasma cholesterol levels, reducing CVD risk.
However, the aforementioned transcription factors also regulate the transcription of many other proteins that play important roles in various physiological processes. For example, inhibition of SREBP2 reduces LDLR expression, attenuating LDL clearance. Inhibition of HNF1α activity does not affect LDLR expression; however, HNF1α can act as a tumor suppressor, and its expression is reduced in patients with liver malignancies (90). Thus, developing small molecules that specifically modulate PCSK9 expression at the transcriptional level remains a major challenge.

Post-transcriptionally, the expression of PCSK9 is regulated by microRNAs (miRNAs) (Figure 2). miR-483-5p targets the 3′-UTR of the PCSK9 mRNA, reduces PCSK9 expression, and decreases plasma cholesterol levels in HepG2 cells and mice (91). Similarly, miR-224, miR-191, miR-222, miR-1228-3p, miR-564, miR-4721, miR-337-3p, and miR-143-5p can reduce PCSK9 expression through targeting its 3′-UTR in cultured cells, such as Huh7, HepG2, and BON-1 cells (92–94). A common variant, 1420C>G, decreases the inhibitory effect of miR-1228-3p and miR-143-5p on PCSK9 expression, reducing plasma levels of PCSK9 and LDL cholesterol (95). Similarly, Los et al. identified several variants in the PCSK9 3′-UTR in FH patients; the variant 345C>T impairs binding of miR-4721 and miR-564 to the PCSK9 3′-UTR and increases PCSK9 expression (93). Conversely, miR-27a upregulates PCSK9 expression, possibly by binding upstream of the PCSK9 promoter in HepG2 cells (96). Of note, a single miRNA often targets multiple genes, because binding of miRNAs to their target genes requires seed pairing of as few as six nucleotides, or even tolerates imperfect seed pairing (97). Thus, one of the key issues of miRNA-based therapies is their potential off-target effects.

Compared to miRNAs, siRNAs bind to a complementary sequence in the mRNA that completely matches their antisense strand, thereby reducing the expression of their target genes with greater specificity. Phase III trials of Inclisiran, a chemically modified siRNA that targets PCSK9 mRNA, show a promising lipid-lowering effect: subcutaneous injection of Inclisiran reduces plasma LDL-C levels by up to 50% in heterozygous FH patients without any major side effects (26). Inclisiran requires only twice-yearly administration (two injections per year, versus the 13–26 injections per year implied by the every-2–4-week dosing of current PCSK9 monoclonal antibody therapy), which may reduce the cost of PCSK9 inhibition. However, it is still a financial burden as a primary prevention measure for all eligible patients. In addition, siRNAs, particularly at high doses, also exhibit miRNA-like off-target activity (98), and duplex siRNA can trigger an innate immune response through Toll-like receptor-dependent and -independent mechanisms (99). Patients treated with Inclisiran do show a slightly increased rate of mild-to-moderate bronchitis (4.3 vs. 0.7% for Inclisiran and placebo, respectively) (26). Therefore, possible long-term side effects of using siRNAs as a lifelong primary prevention strategy need to be assessed.

A small molecule, R-IMPP, can selectively target the human 80S ribosome and inhibit PCSK9 translation. R-IMPP significantly reduces the protein level of PCSK9, thereby increasing LDLR levels and LDL uptake in Huh7 cells (100). However, the therapeutic potential of R-IMPP is uncertain, because ribosomes are the core of the protein translation machinery and are not an ideal therapeutic target.
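To illustrate the seed-pairing rule behind the off-target concern discussed above, the short sketch below scans a 3′-UTR for 6-nucleotide seed matches. It is a minimal illustration only: the miRNA and UTR sequences are arbitrary stand-ins, not the PCSK9 3′-UTR or any miRNA from the cited studies.

```python
# Minimal sketch of miRNA seed matching (illustration only; the
# sequences are arbitrary stand-ins, not real PCSK9/miRNA data).

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(rna))

def seed_match_sites(mirna: str, utr: str) -> list:
    """Return 0-based UTR positions that match the miRNA seed (nt 2-7).

    Because the seed is only 6 nt long, many different 3'-UTRs will
    contain a match by chance -- the root of miRNA off-target effects.
    """
    seed = mirna[1:7]                # seed region: nucleotides 2-7
    site = reverse_complement(seed)  # sequence to look for in the UTR
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

if __name__ == "__main__":
    mirna = "UGAGGUAGUAGGUUGUAUAGUU"        # arbitrary 22-nt miRNA
    utr = "AAGCUACCUCAAUACAACCUACUACCUCAA"  # arbitrary 3'-UTR fragment
    print(seed_match_sites(mirna, utr))     # -> [4, 22]
```

Even this toy UTR contains two seed matches, which is one way to see why fully complementary siRNAs such as Inclisiran offer better, though not perfect, specificity.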
Most recently, Liu et al. reported that blood flow rate regulates PCSK9 expression through the Toll-like receptor 4-MyD88-NF-κB signaling pathway in the rabbit thoracic aorta: a low-flow state increased, whereas a high-flow state reduced, the mRNA and protein levels of PCSK9 in vascular cells. Interestingly, they observed the opposite effect on the expression of LDLR (101), indicating that the impact of flow rate is independent of SREBP2 (which upregulates both genes). Interestingly, knockdown of PCSK9 suppressed, while overexpression of PCSK9 enhanced, the Toll-like receptor 4-NF-κB signaling pathway and inflammation in the atherosclerotic lesions of apoE−/− mice (60). It will be of interest to see whether the increased expression of PCSK9 under the low-flow state promotes Toll-like receptor 4-NF-κB signaling in the thoracic aorta. Circulating PCSK9 is mainly produced by hepatocytes, and blood flow rate may not have a similar effect on PCSK9 expression in hepatocytes, since hepatocytes, unlike aortic vascular cells, are not directly exposed to blood flow. On the other hand, flow rate equals volumetric blood flow divided by cross-sectional area (v = Q/A), so the hardening and narrowing of vessels in atherosclerosis, by reducing the lumen area, increases the local flow rate. It would be of interest to assess whether PCSK9 expression in vascular cells near and at atherosclerotic lesions is reduced by this increased blood flow rate, which could be a beneficial outcome, since vascular-cell PCSK9 can promote inflammation.

Regulation of PCSK9 Secretion

Although multiple tissues express PCSK9, circulating PCSK9 is mainly secreted from the liver (102). Loss-of-function PCSK9 mutations such as G236S, S462P, and C679X reduce its secretion, while gain-of-function mutations such as E32K enhance PCSK9 secretion (17, 103). Furthermore, circulating PCSK9 is rapidly cleared from the blood, with a half-life of about 5 min in mice (37), indicating that targeting PCSK9 secretion is a promising therapeutic strategy. However, the machinery controlling PCSK9 secretion remains elusive. PCSK9 undergoes autocleavage in the ER (104), which is essential for its maturation and secretion; however, its enzymatic activity is not required for this processing. Mutant PCSK9 that loses its catalytic activity is retained in the ER, but coexpression of the prodomain with the catalytically dead mutant rescues its secretion in HepG2 cells (29). In addition, deletion of part of the C-terminal domain of PCSK9 impairs its secretion but does not affect its autocleavage in cultured human hepatocytes, such as HepG2 and Huh7 cells (17, 105), indicating that autocleavage is not sufficient to support PCSK9 secretion. After autocleavage, the cleaved N-terminal prodomain remains associated with the catalytic domain and functions as an intramolecular chaperone, guaranteeing the correct folding of the catalytic domain in the ER. This step is believed to be the rate-limiting step for PCSK9 maturation and secretion (103). PCSK9 is transported from the ER to the Golgi via classical COPII vesicles. The lack of SEC24, one of the COPII components, significantly reduces PCSK9 secretion in mice and in cultured human hepatocytes, HepG2 and Huh7 cells (17, 106). However, PCSK9 is a secretory protein located in the ER lumen, while SEC24 is located in the cytosol; therefore, a cargo receptor is required to bridge the interaction between PCSK9 and SEC24. Emmer et al. reported that the cargo receptor Surf4 facilitated the secretion of PCSK9 overexpressed in HEK293 cells.
They found that Surf4 co-immunoprecipitated with PCSK9, and that knockout of Surf4 significantly reduced the amount of PCSK9 detected in the culture medium (107). Surf4 is a transmembrane protein that mainly resides in the ER membrane. It contains five putative transmembrane domains, an ER lumen-exposed N-terminus that binds cargo proteins within the lumen, and a cytoplasmic domain that interacts with COPII components, facilitating cargo sorting into COPII vesicles. However, we found that knockdown of Surf4 in cultured immortalized human hepatocytes, HepG2 and Huh7 cells, did not impair endogenous PCSK9 secretion (108). This discrepancy may be caused by the different cell types used in the two studies: we investigated endogenous PCSK9 secretion from cultured hepatocytes, HepG2 and Huh7 cells, while Emmer et al. studied the effect of Surf4 on the secretion of PCSK9 overexpressed in HEK293 cells, which do not express endogenous PCSK9. Furthermore, we found that knockdown of Surf4 in mouse liver had no significant effect on plasma and hepatic PCSK9 levels. In liver-specific Surf4 knockout mice, the levels of PCSK9 in plasma and liver homogenate were also comparable to those in wild-type mice (109). Therefore, Surf4 is not required for endogenous PCSK9 secretion.

The C-terminal domain of PCSK9 has been implicated in its secretion. Loss-of-function mutations located in the C-terminus of PCSK9, such as E498K and S462P, impair its secretion (12, 103). Deletion of the entire C-terminal domain, from amino acids 456 to 692, does not impair PCSK9 secretion; conversely, removing only part of the C-terminal region (amino acids 457 to 528 or 608 to 692) impairs PCSK9 secretion (16, 17, 105). Furthermore, deletion of part or all of the hinge region that connects the catalytic domain and the C-terminal domain also significantly reduces PCSK9 secretion, indicating the important role of this region. SEC24 silencing significantly reduces the secretion of wild-type PCSK9 but not of a mutant lacking the C-terminal domain, indicating that the C-terminal region of PCSK9 may be involved in SEC24-facilitated PCSK9 secretion (17). Further studies are required to elucidate how PCSK9 is sorted into COPII vesicles. Most recently, Rogers et al. reported that dynamin-related protein 1 (DRP1)-mediated ER remodeling is involved in PCSK9 secretion (110): inhibition of DRP1 by mitochondrial division inhibitor 1, or knockout of hepatic DRP1, markedly reduced PCSK9 secretion in HepG2 cells and mice.

After delivery to the Golgi apparatus, PCSK9 undergoes various posttranslational modifications and is then packed into secretory vesicles. These vesicles are delivered to and fuse with the plasma membrane, releasing PCSK9 into the extracellular milieu. Gustafsen et al. reported that sortilin co-immunoprecipitated with PCSK9 and that the two proteins colocalized in the trans-Golgi network in HepG2 cells. Knockout of sortilin significantly reduced plasma PCSK9 levels in mice and reduced PCSK9 secretion from mouse primary hepatocytes. The authors further showed that PCSK9 levels were positively correlated with sortilin levels in human serum. Thus, they concluded that sortilin interacts with PCSK9 in the trans-Golgi network, facilitating PCSK9 secretion (111). On the other hand, Butkinaree et al. reported that plasma levels of PCSK9 were comparable in sortilin knockout mice and wild-type littermates, and that sortilin had no effect on PCSK9-promoted LDLR degradation in HepG2 and Huh7 cells; instead, PCSK9 induced sortilin degradation (35).
The reason for this discrepancy is unclear; the mice used in the Gustafsen study were of a C57BL/6J background, while Butkinaree et al. did not report their mouse background. Nevertheless, these findings reveal the complexity of the molecular mechanisms of PCSK9 secretion.

Posttranslational Modifications of PCSK9

PCSK9 is predicted by PhosphoSitePlus (www.phosphosite.org) to be phosphorylated on serine, threonine, asparagine, and lysine residues, and mass spectrometry analysis of plasma samples confirms this prediction. Furthermore, Dewpura et al. reported that PCSK9 was partially phosphorylated on serine residues at positions 47 and 688 by a Golgi casein kinase-like kinase in a cell-type-dependent manner, with the highest phosphorylation in HepG2 cells (~70%), followed by Huh7 cells (~54%), HEK293 cells (~23%), and CHO-K1 cells (none). Phosphorylation may protect PCSK9 against proteolysis and increase its stability in Huh7 cells (112). Meanwhile, this finding also indicates that serine phosphorylation is not required for PCSK9's action on LDLR, since PCSK9 purified from CHO-K1 cells is unphosphorylated and can effectively promote LDLR degradation (38). It has also been reported that PCSK9 is phosphorylated on serine residues at positions 47, 666, 668, and 688 by family with sequence similarity 20, member C (FAM20C), which increases PCSK9 secretion and enhances its ability to promote LDLR degradation in HepG2 cells (113). However, phosphorylation at these sites is likewise not required for PCSK9-promoted LDLR degradation, since mutant PCSK9 lacking phosphorylation at the four serine residues could still stimulate LDLR degradation. In addition, FAM20C phosphorylates serine residues in a consensus Ser-x-Glu motif present in many secreted proteins (114); thus, it cannot be ruled out that FAM20C indirectly affects PCSK9 secretion and function via phosphorylation of other proteins.

PCSK9 is N-glycosylated on an asparagine residue at position 533 and sulfated on tyrosine residues (104). Detailed analysis revealed that N-glycosylation at Asn533 and sulfation at Tyr38 were not required for PCSK9 processing, secretion, or function in HepG2 and Huh7 cells (115). Treatment of cells with tunicamycin, which inhibits N-glycosylation, or chlorate, which inhibits tyrosine sulfation, had no effect on PCSK9 expression and secretion in the human hepatocyte cell line Huh7. When overexpressed, mutant PCSK9 lacking the N-glycosylation and Tyr-sulfation sites, alone or together, was efficiently secreted and promoted LDLR degradation in Huh7 and HepG2 cells (116). However, the secreted mutant PCSK9 appeared to promote LDLR degradation less effectively than the wild-type protein, suggesting that N-glycosylation may enhance the ability of PCSK9 to stimulate LDLR degradation. Plasma levels of PCSK9 in subjects carrying phosphomannomutase 2 (PMM2) variants (p.R141H and p.P69S) are significantly reduced, by approximately 42%, compared to controls, which might contribute to the hypolipidemia observed in these patients (116). The PMM2 variants may affect the N-glycosylation of PCSK9 and its secretion in vivo, even though removal or inhibition of N-glycosylation in PCSK9 does not affect its secretion in cultured cells. Alternatively, these variants may impair PCSK9 secretion indirectly, by affecting unknown factors that are important for PCSK9 secretion, since PMM2 is required for the synthesis of GDP-mannose, a mannose donor for N-glycosylation.
Analysis of the secretion of an N-glycosylation-defective PCSK9 mutant in Pcsk9−/− mice might clarify the potential role of N-glycosylation in PCSK9 secretion in vivo. Nevertheless, these studies indicate that posttranslational modifications, including phosphorylation, sulfation, and N-glycosylation, may affect but are not required for PCSK9 processing, secretion, stability, and function.

Conclusion and Perspectives

PCSK9 regulates plasma cholesterol levels and tumor-specific antigen presentation primarily through promoting LDLR and MHCI degradation, respectively. Of note, the lack of PCSK9 in humans does not cause any known notable side effects (3, 6). Therefore, PCSK9 is a promising therapeutic target for reducing the risk of the top two leading causes of mortality worldwide, cardiovascular disease and cancer. Various strategies have been or are being developed to inhibit PCSK9 (117–120) (Table 1). The current PCSK9 inhibitors, evolocumab and alirocumab, are humanized monoclonal anti-PCSK9 antibodies; they significantly reduce plasma LDL cholesterol levels and cardiovascular events in patients with hypercholesterolemia and suppress tumor growth in mice. Inclisiran, a chemically synthesized siRNA targeting PCSK9 mRNA, also significantly reduces plasma LDL cholesterol levels, by about 30–50%, with only two injections each year. However, these strategies are expensive, limiting their widespread use. Other strategies, such as CRISPR-Cas9 gene editing and PCSK9 vaccines, are only in preclinical studies or phase I clinical trials (119, 120). Therefore, there is an urgent need for further research to elucidate the mechanisms underlying PCSK9's impact on lipid metabolism and cancer growth. For example: (1) PCSK9 binds to LDLR with a much higher affinity in the acidic endosomal environment, blocking LDLR recycling; does PCSK9 bind to MHCI in a pH-dependent manner? (2) PCSK9, LDLR, and MHCI do not contain a lysosomal targeting signal; how are the PCSK9/LDLR and PCSK9/MHCI complexes redirected from the endosome to the lysosome for degradation? (3) Circulating PCSK9 is mainly secreted from hepatocytes and then promotes LDLR and MHCI degradation; what machinery assists PCSK9 secretion? (4) HSPG facilitates PCSK9-promoted hepatic LDLR degradation; is there a cofactor assisting PCSK9's action on MHCI? Answering these questions is critical to the development of innovative and more cost-effective treatment options to inhibit PCSK9-promoted degradation of LDLR and MHCI.

Table 1. PCSK9 inhibitors.
| Name | Strategy | Target | Mechanism | Status |
| :--- | :--- | :--- | :--- | :--- |
| Evolocumab | Humanized monoclonal antibody | CAT of PCSK9 | Blocking PCSK9 binding to LDLR | Approved |
| Alirocumab | Humanized monoclonal antibody | CAT of PCSK9 | Blocking PCSK9 binding to LDLR | Approved |
| Inclisiran | GalNAc-conjugated siRNA | mRNA of PCSK9 | Inhibiting PCSK9 expression | Under review by the FDA |
| LIB003 | Adnectin-human serum albumin fusion protein | CAT of PCSK9 | Blocking PCSK9 binding to LDLR | Phase III |
| AT04A and AT06A | PCSK9 peptide vaccine | aa 153-162 of PCSK9 | Blocking PCSK9 binding to LDLR | Phase I |
| Mimetic peptide | Mimicking the binding site of PCSK9 on LDLR | CAT of PCSK9 | Blocking PCSK9 binding to LDLR | Preclinical |
| DRP | Small PCSK9 inhibitor | | Inhibiting interaction between PCSK9 and HSPG | Preclinical |
| NYX-330 | Small PCSK9 inhibitor | PCSK9 | Inhibition of PCSK9 binding to LDLR | Preclinical |
| PF-06446846 | Inhibitor of ribosome | 80S ribosome | Inhibition of PCSK9 translation | Preclinical |
| CRISPR-Cas9 | Gene editing | PCSK9 gene | Knockout/knockdown of PCSK9 expression | Preclinical |
| 9k | Small inhibitor, berberine derivative | The HNF1α pathway | Inhibition of PCSK9 transcription | Preclinical |
| 7030B-C5 | Small inhibitor | The FoxO1 and HNF1α pathway | Inhibition of PCSK9 transcription | Preclinical |

Author Contributions

X-dX and Z-sP wrote the initial draft. G-qW and D-wZ supervised the final version. H-mG participated in the discussion and the preparation of the manuscript. MW reviewed and edited the manuscript. All authors contributed to the article and approved the submitted version.

Funding

This study was supported by the Natural Sciences and Engineering Research Council of Canada (RGPIN-2016-06479) and the Canadian Institutes of Health Research (PS 155994). X-dX and G-qW were supported by funding from Qingyuan People's Hospital.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

1.Fitchett DH, Hegele RA, Verma S. Cardiology patient page. Statin intolerance. Circulation. (2015) 131:e389–91. 10.1161/CIRCULATIONAHA.114.013189 [DOI] [PubMed] [Google Scholar] 2.Abifadel M, Varret M, Rabes JP, Allard D, Ouguerram K, Devillers M, et al. Mutations in PCSK9 cause autosomal dominant hypercholesterolemia. Nat Genet. (2003) 34:154–6. 10.1038/ng1161 [DOI] [PubMed] [Google Scholar] 3.Cohen J, Pertsemlidis A, Kotowski IK, Graham R, Garcia CK, Hobbs HH. Low LDL cholesterol in individuals of African descent resulting from frequent nonsense mutations in PCSK9. Nat Genet. (2005) 37:161–5. 10.1038/ng1509 [DOI] [PubMed] [Google Scholar] 4.Zhao Z, Tuakli-Wosornu Y, Lagace TA, Kinch L, Grishin NV, Horton JD, et al. Molecular characterization of loss-of-function mutations in PCSK9 and identification of a compound heterozygote. Am J Hum Genet. (2006) 79:514–23. 10.1086/507488 [DOI] [PMC free article] [PubMed] [Google Scholar] 5.Guo S, Xia XD, Gu HM, Zhang DW.
Proprotein convertase subtilisin/Kexin-Type 9 and lipid metabolism. Adv Exp Med Biol. (2020) 1276:137–56. 10.1007/978-981-15-6082-8_9 [DOI] [PubMed] [Google Scholar] 6.Lebeau P, Platko K, Al-Hashimi AA, Byun JH, Lhotak S, Holzapfel N, et al. Loss-of-function PCSK9 mutants evade the unfolded protein response sensor GRP78 and fail to induce endoplasmic reticulum stress when retained. J Biol Chem. (2018) 293:7329–43. 10.1074/jbc.RA117.001049 [DOI] [PMC free article] [PubMed] [Google Scholar] 7.Liu X, Bao X, Hu M, Chang H, Jiao M, Cheng J, et al. Inhibition of PCSK9 potentiates immune checkpoint therapy for cancer. Nature. (2020) 588:693–8. 10.1038/s41586-020-2911-7 [DOI] [PMC free article] [PubMed] [Google Scholar] 8.Ricci C, Ruscica M, Camera M, Rossetti L, Macchi C, Colciago A, et al. PCSK9 induces a pro-inflammatory response in macrophages. Sci Rep. (2018) 8:2267. 10.1038/s41598-018-20425-x [DOI] [PMC free article] [PubMed] [Google Scholar] 9.Giunzioni I, Tavori H, Covarrubias R, Major AS, Ding L, Zhang Y, et al. Local effects of human PCSK9 on the atherosclerotic lesion. J Pathol. (2016) 238:52–62. 10.1002/path.4630 [DOI] [PMC free article] [PubMed] [Google Scholar] 10.Seidah NG. The PCSK9 revolution and the potential of PCSK9-based therapies to reduce LDL-cholesterol. Glob Cardiol Sci Pract. (2017) 2017:e201702. 10.21542/gcsp.2017.2 [DOI] [PMC free article] [PubMed] [Google Scholar] 11.Guo Q, Feng X, Zhou Y. PCSK9 variants in familial hypercholesterolemia: a comprehensive synopsis. Front Genet. (2020) 11:1020. 10.3389/fgene.2020.01020 [DOI] [PMC free article] [PubMed] [Google Scholar] 12.Benjannet S, Hamelin J, Chretien M, Seidah NG. Loss- and gain-of-function PCSK9 variants: cleavage specificity, dominant negative effects, and low density lipoprotein receptor (LDLR) degradation. J Biol Chem. (2012) 287:33745–55. 10.1074/jbc.M112.399725 [DOI] [PMC free article] [PubMed] [Google Scholar] 13.Cunningham D, Danley DE, Geoghegan KF, Griffor MC, Hawkins JL, Subashi TA, et al. Structural and biophysical studies of PCSK9 and its mutants linked to familial hypercholesterolemia. Nat Struct Mol Biol. (2007) 14:413–9. 10.1038/nsmb1235 [DOI] [PubMed] [Google Scholar] 14.Tveten K, Holla OL, Cameron J, Strom TB, Berge KE, Laerdahl JK, et al. Interaction between the ligand-binding domain of the LDL receptor and the C-terminal domain of PCSK9 is required for PCSK9 to remain bound to the LDL receptor during endosomal acidification. Hum Mol Genet. (2012) 21:1402–9. 10.1093/hmg/ddr578 [DOI] [PubMed] [Google Scholar] 15.Yamamoto T, Lu C, Ryan RO. A two-step binding model of PCSK9 interaction with the low density lipoprotein receptor. J Biol Chem. (2011) 286:5464–70. 10.1074/jbc.M110.199042 [DOI] [PMC free article] [PubMed] [Google Scholar] 16.Saavedra YG, Day R, Seidah NG. The M2 module of the Cys-His-rich domain (CHRD) of PCSK9 protein is needed for the extracellular low-density lipoprotein receptor (LDLR) degradation pathway. J Biol Chem. (2012) 287:43492–501. 10.1074/jbc.M112.394023 [DOI] [PMC free article] [PubMed] [Google Scholar] 17.Deng SJ, Shen Y, Gu HM, Guo S, Wu SR, Zhang DW. The role of the C-terminal domain of PCSK9 and SEC24 isoforms in PCSK9 secretion. Biochim Biophys Acta Mol Cell Biol Lipids. (2020) 1865:158660. 10.1016/j.bbalip.2020.158660 [DOI] [PubMed] [Google Scholar] 18.Rashid S, Curtis DE, Garuti R, Anderson NN, Bashmakov Y, Ho YK, et al. Decreased plasma cholesterol and hypersensitivity to statins in mice lacking Pcsk9. Proc Natl Acad Sci USA. (2005) 102:5374–9. 
10.1073/pnas.0501652102 [DOI] [PMC free article] [PubMed] [Google Scholar] 19.Zhang DW, Lagace TA, Garuti R, Zhao Z, McDonald M, Horton JD, et al. Binding of proprotein convertase subtilisin/kexin type 9 to epidermal growth factor-like repeat A of low density lipoprotein receptor decreases receptor recycling and increases degradation. J Biol Chem. (2007) 282:18602–12. 10.1074/jbc.M702027200 [DOI] [PubMed] [Google Scholar] 20.Gu HM, Adijiang A, Mah M, Zhang DW. Characterization of the role of EGF-A of low-density lipoprotein receptor in PCSK9 binding. J Lipid Res. (2013) 54:3345–57. 10.1194/jlr.M041129 [DOI] [PMC free article] [PubMed] [Google Scholar] 21.Lagace TA, Curtis DE, Garuti R, McNutt MC, Park SW, Prather HB, et al. Secreted PCSK9 decreases the number of LDL receptors in hepatocytes and in livers of parabiotic mice. J Clin Invest. (2006) 116:2995–3005. 10.1172/JCI29383 [DOI] [PMC free article] [PubMed] [Google Scholar] 22.Roubtsova A, Munkonda MN, Awan Z, Marcinkiewicz J, Chamberland A, Lazure C, et al. Circulating proprotein convertase subtilisin/kexin 9 (PCSK9) regulates VLDLR protein and triglyceride accumulation in visceral adipose tissue. Arterioscler Thromb Vasc Biol. (2011) 31:785–91. 10.1161/ATVBAHA.110.220988 [DOI] [PubMed] [Google Scholar] 23.Kwon HJ, Lagace TA, McNutt MC, Horton JD, Deisenhofer J. Molecular basis for LDL receptor recognition by PCSK9. Proc Natl Acad Sci USA. (2008) 105:1820–5. 10.1073/pnas.0712064105 [DOI] [PMC free article] [PubMed] [Google Scholar] 24.Zhang DW, Garuti R, Tang WJ, Cohen JC, Hobbs HH. Structural requirements for PCSK9-mediated degradation of the low-density lipoprotein receptor. Proc Natl Acad Sci USA. (2008) 105:13045–50. 10.1073/pnas.0806312105 [DOI] [PMC free article] [PubMed] [Google Scholar] 25.Maxwell KN, Fisher EA, Breslow JL. Overexpression of PCSK9 accelerates the degradation of the LDLR in a post-endoplasmic reticulum compartment. Proc Natl Acad Sci USA. (2005) 102:2069–74. 10.1073/pnas.0409736102 [DOI] [PMC free article] [PubMed] [Google Scholar] 26.Wright RS, Ray KK, Raal FJ, Kallend DG, Jaros M, Koenig W, et al. Pooled patient-level analysis of inclisiran trials in patients with familial hypercholesterolemia or atherosclerosis. J Am Coll Cardiol. (2021) 77:1182–93. 10.1016/j.jacc.2020.12.058 [DOI] [PubMed] [Google Scholar] 27.Reyes-Soffer G, Pavlyha M, Ngai C, Thomas T, Holleran S, Ramakrishnan R, et al. Effects of PCSK9 inhibition with alirocumab on lipoprotein metabolism in healthy humans. Circulation. (2017) 135:352–62. 10.1161/CIRCULATIONAHA.116.025253 [DOI] [PMC free article] [PubMed] [Google Scholar] 28.Bonaca MP, Nault P, Giugliano RP, Keech AC, Pineda AL, Kanevsky E, et al. Low-density lipoprotein cholesterol lowering with evolocumab and outcomes in patients with peripheral artery disease: insights from the FOURIER trial (further cardiovascular outcomes research with PCSK9 inhibition in subjects with elevated risk). Circulation. (2018) 137:338–50. 10.1161/CIRCULATIONAHA.117.032235 [DOI] [PubMed] [Google Scholar] 29.McNutt MC, Lagace TA, Horton JD. Catalytic activity is not required for secreted PCSK9 to reduce low density lipoprotein receptors in HepG2 cells. J Biol Chem. (2007) 282:20799–803. 10.1074/jbc.C700095200 [DOI] [PubMed] [Google Scholar] 30.Wang Y, Huang Y, Hobbs HH, Cohen JC. Molecular characterization of proprotein convertase subtilisin/kexin type 9-mediated degradation of the LDLR. J Lipid Res. (2012) 53:1932–43. 
10.1194/jlr.M028563 [DOI] [PMC free article] [PubMed] [Google Scholar] 31.Romagnuolo R, Scipione CA, Boffa MB, Marcovina SM, Seidah NG, Koschinsky ML. Lipoprotein(a) catabolism is regulated by proprotein convertase subtilisin/kexin type 9 through the low density lipoprotein receptor. J Biol Chem. (2015) 290:11649–62. 10.1074/jbc.M114.611988 [DOI] [PMC free article] [PubMed] [Google Scholar] 32.Jang H-D, Lee SE, Yang J, Lee H-C, Shin D, Lee H, et al. Cyclase-associated protein 1 is a binding partner of proprotein convertase subtilisin/kexin type-9 and is required for the degradation of low-density lipoprotein receptors by proprotein convertase subtilisin/kexin type-9. Eur Heart J. (2020) 41:239–52. 10.1093/eurheartj/ehz566 [DOI] [PMC free article] [PubMed] [Google Scholar] 33.DeVay RM, Shelton DL, Liang H. Characterization of proprotein convertase subtilisin/kexin type 9 (PCSK9) trafficking reveals a novel lysosomal targeting mechanism via amyloid precursor-like protein 2 (APLP2). J Biol Chem. (2013) 288:10805–18. 10.1074/jbc.M113.453373 [DOI] [PMC free article] [PubMed] [Google Scholar] 34.Fu T, Guan Y, Xu J, Wang Y. APP, APLP2 and LRP1 interact with PCSK9 but are not required for PCSK9-mediated degradation of the LDLR in vivo. Biochim Biophys Acta. (2017) 1862:883–9. 10.1016/j.bbalip.2017.05.002 [DOI] [PMC free article] [PubMed] [Google Scholar] 35.Butkinaree C, Canuel M, Essalmani R, Poirier S, Benjannet S, Asselin MC, et al. Amyloid precursor-like protein 2 and sortilin do not regulate the PCSK9 convertase-mediated low density lipoprotein receptor degradation but interact with each other. J Biol Chem. (2015) 290:18609–20. 10.1074/jbc.M115.647180 [DOI] [PMC free article] [PubMed] [Google Scholar] 36.Goldstein JL, Brown MS. A century of cholesterol and coronaries: from plaques to genes to statins. Cell. (2015) 161:161–72. 10.1016/j.cell.2015.01.036 [DOI] [PMC free article] [PubMed] [Google Scholar] 37.Grefhorst A, McNutt MC, Lagace TA, Horton JD. Plasma PCSK9 preferentially reduces liver LDL receptors in mice. J Lipid Res. (2008) 49:1303–11. 10.1194/jlr.M800027-JLR200 [DOI] [PMC free article] [PubMed] [Google Scholar] 38.Gustafsen C, Olsen D, Vilstrup J, Lund S, Reinhardt A, Wellner N, et al. Heparan sulfate proteoglycans present PCSK9 to the LDL receptor. Nat Commun. (2017) 8:503. 10.1038/s41467-017-00568-7 [DOI] [PMC free article] [PubMed] [Google Scholar] 39.Wang H, Eckel RH. Lipoprotein lipase: from gene to obesity. Am J Physiol Endocrinol Metab. (2009) 297:E271–88. 10.1152/ajpendo.90920.2008 [DOI] [PubMed] [Google Scholar] 40.Sun H, Krauss RM, Chang JT, Teng BB. PCSK9 deficiency reduces atherosclerosis, apolipoprotein B secretion, and endothelial dysfunction. J Lipid Res. (2018) 59:207–23. 10.1194/jlr.M078360 [DOI] [PMC free article] [PubMed] [Google Scholar] 41.Sun XM, Eden ER, Tosi I, Neuwirth CK, Wile D, Naoumova RP, et al. Evidence for effect of mutant PCSK9 on apolipoprotein B secretion as the cause of unusually severe dominant hypercholesterolaemia. Hum Mol Genet. (2005) 14:1161–9. 10.1093/hmg/ddi128 [DOI] [PubMed] [Google Scholar] 42.Ding Z, Liu S, Wang X, Deng X, Fan Y, Sun C, et al. Hemodynamic shear stress via ROS modulates PCSK9 expression in human vascular endothelial and smooth muscle cells and along the mouse aorta. Antioxid Redox Signal. (2015) 22:760–71. 10.1089/ars.2014.6054 [DOI] [PMC free article] [PubMed] [Google Scholar] 43.Ferri N, Tibolla G, Pirillo A, Cipollone F, Mezzetti A, Pacia S, et al. 
Proprotein convertase subtilisin kexin type 9 (PCSK9) secreted by cultured smooth muscle cells reduces macrophages LDLR levels. Atherosclerosis. (2012) 220:381–6. 10.1016/j.atherosclerosis.2011.11.026 [DOI] [PubMed] [Google Scholar] 44.Peyot ML, Roubtsova A, Lussier R, Chamberland A, Essalmani R, Murthy Madiraju SR, et al. Substantial PCSK9 inactivation in beta-cells does not modify glucose homeostasis or insulin secretion in mice. Biochim Biophys Acta Mol Cell Biol Lipids. (2021) 1866:158968. 10.1016/j.bbalip.2021.158968 [DOI] [PubMed] [Google Scholar] 45.Cariou B, Langhi C, Le Bras M, Bortolotti M, Le KA, Theytaz F, et al. Plasma PCSK9 concentrations during an oral fat load and after short term high-fat, high-fat high-protein and high-fructose diets. Nutr Metab. (2013) 10:4. 10.1186/1743-7075-10-4 [DOI] [PMC free article] [PubMed] [Google Scholar] 46.Hollstein T, Vogt A, Grenkowitz T, Stojakovic T, Marz W, Laufs U, et al. Treatment with PCSK9 inhibitors reduces atherogenic VLDL remnants in a real-world study. Vascul Pharmacol. (2019) 116:8–15. 10.1016/j.vph.2019.03.002 [DOI] [PubMed] [Google Scholar] 47.Le May C, Kourimate S, Langhi C, Chetiveaux M, Jarry A, Comera C, et al. Proprotein convertase subtilisin kexin type 9 null mice are protected from postprandial triglyceridemia. Arterioscler Thromb Vasc Biol. (2009) 29:684–90. 10.1161/ATVBAHA.108.181586 [DOI] [PubMed] [Google Scholar] 48.Rashid S, Tavori H, Brown PE, Linton MF, He J, Giunzioni I, et al. Proprotein convertase subtilisin kexin type 9 promotes intestinal overproduction of triglyceride-rich apolipoprotein B lipoproteins through both low-density lipoprotein receptor-dependent and -independent mechanisms. Circulation. (2014) 130:431–41. 10.1161/CIRCULATIONAHA.113.006720 [DOI] [PMC free article] [PubMed] [Google Scholar] 49.Watts GF, Gidding SS, Mata P, Pang J, Sullivan DR, Yamashita S, et al. Familial hypercholesterolaemia: evolving knowledge for designing adaptive models of care. Nat Rev Cardiol. (2020) 17:360–77. 10.1038/s41569-019-0325-8 [DOI] [PubMed] [Google Scholar] 50.Taskinen MR, Bjornson E, Andersson L, Kahri J, Porthan K, Matikainen N, et al. Impact of proprotein convertase subtilisin/kexin type 9 inhibition with evolocumab on the postprandial responses of triglyceride-rich lipoproteins in type II diabetic subjects. J Clin Lipidol. (2020) 14:77–87. 10.1016/j.jacl.2019.12.003 [DOI] [PubMed] [Google Scholar] 51.Drouin-Chartier JP, Tremblay AJ, Hogue JC, Lemelin V, Lamarche B, Couture P. Plasma PCSK9 correlates with apoB-48-containing triglyceride-rich lipoprotein production in men with insulin resistance. J Lipid Res. (2018) 59:1501–9. 10.1194/jlr.M086264 [DOI] [PMC free article] [PubMed] [Google Scholar] 52.Taskinen MR, Bjornson E, Kahri J, Soderlund S, Matikainen N, Porthan K, et al. Effects of Evolocumab on the Postprandial Kinetics of Apo (Apolipoprotein) B100- and B48-Containing Lipoproteins in Subjects With Type 2 Diabetes. Arterioscler Thromb Vasc Biol. (2020) 41:962–75 10.1161/ATVBAHA.120.315446 [DOI] [PubMed] [Google Scholar] 53.Rohlmann A, Gotthardt M, Hammer RE, Herz J. Inducible inactivation of hepatic LRP gene by cre-mediated recombination confirms role of LRP in clearance of chylomicron remnants. J Clin Invest. (1998) 101:689–95. 10.1172/JCI1240 [DOI] [PMC free article] [PubMed] [Google Scholar] 54.O'Donoghue ML, Fazio S, Giugliano RP, Stroes ESG, Kanevsky E, Gouni-Berthold I, et al. Lipoprotein(a), PCSK9 inhibition, and cardiovascular risk. Circulation. (2019) 139:1483–92. 
10.1161/CIRCULATIONAHA.118.037184 [DOI] [PubMed] [Google Scholar] 55.Farmakis I, Doundoulakis I, Pagiantza A, Zafeiropoulos S, Antza C, Karvounis H, et al. Lipoprotein(a) reduction with proprotein convertase subtilisin/kexin type 9 inhibitors: a systematic review and meta-analysis. J Cardiovasc Pharmacol. (2021) 77:397–407. 10.1097/FJC.0000000000000963 [DOI] [PubMed] [Google Scholar] 56.Blom DJ, Harada-Shiba M, Rubba P, Gaudet D, Kastelein JJP, Charng MJ, et al. Efficacy and safety of alirocumab in adults with homozygous familial hypercholesterolemia: the ODYSSEY HoFH trial. J Am Coll Cardiol. (2020) 76:131–42. 10.1016/j.jacc.2020.05.027 [DOI] [PubMed] [Google Scholar] 57.Stoekenbroek RM, Lambert G, Cariou B, Hovingh GK. Inhibiting PCSK9 - biology beyond LDL control. Nat Rev Endocrinol. (2018) 15:52–62. 10.1038/s41574-018-0110-5 [DOI] [PubMed] [Google Scholar] 58.Jhunjhunwala S, Hammer C, Delamarre L. Antigen presentation in cancer: insights into tumour immunogenicity and immune evasion. Nat Rev Cancer. (2021) 21:298–312. 10.1038/s41568-021-00339-z [DOI] [PubMed] [Google Scholar] 59.Yuan J, Cai T, Zheng X, Ren Y, Qi J, Lu X, et al. Potentiating CD8(+) T cell antitumor activity by inhibiting PCSK9 to promote LDLR-mediated TCR recycling and signaling. Protein Cell. (2021) 12:240–60. 10.1007/s13238-021-00821-2 [DOI] [PMC free article] [PubMed] [Google Scholar] 60.Tang ZH, Peng J, Ren Z, Yang J, Li TT, Li TH, et al. New role of PCSK9 in atherosclerotic inflammation promotion involving the TLR4/NF-kappaB pathway. Atherosclerosis. (2017) 262:113–22. 10.1016/j.atherosclerosis.2017.04.023 [DOI] [PubMed] [Google Scholar] 61.Guo Y, Yan B, Tai S, Zhou S, Zheng XL. PCSK9: associated with cardiac diseases and their risk factors? Arch Biochem Biophys. (2021) 704:108717. 10.1016/j.abb.2020.108717 [DOI] [PubMed] [Google Scholar] 62.Xu X, Cui Y, Cao L, Zhang Y, Yin Y, Hu X. PCSK9 regulates apoptosis in human lung adenocarcinoma A549 cells via endoplasmic reticulum stress and mitochondrial signaling pathways. Exp Ther Med. (2017) 13:1993–9. 10.3892/etm.2017.4218 [DOI] [PMC free article] [PubMed] [Google Scholar] 63.Marimuthu A, Subbannayya Y, Sahasrabuddhe NA, Balakrishnan L, Syed N, Sekhar NR, et al. SILAC-based quantitative proteomic analysis of gastric cancer secretome. Proteomics Clin Appl. (2013) 7:355–66. 10.1002/prca.201200069 [DOI] [PMC free article] [PubMed] [Google Scholar] 64.Zhang SZ, Zhu XD, Feng LH, Li XL, Liu XF, Sun HC, et al. PCSK9 promotes tumor growth by inhibiting tumor cell apoptosis in hepatocellular carcinoma. Exp Hematol Oncol. (2021) 10:25. 10.1186/s40164-021-00218-1 [DOI] [PMC free article] [PubMed] [Google Scholar] 65.Gan SS, Ye JQ, Wang L, Qu FJ, Chu CM, Tian YJ, et al. Inhibition of PCSK9 protects against radiation-induced damage of prostate cancer cells. Onco Targets Ther. (2017) 10:2139–46. 10.2147/OTT.S129413 [DOI] [PMC free article] [PubMed] [Google Scholar] 66.Tao R, Xiong X, DePinho RA, Deng CX, Dong XC. FoxO3 transcription factor and Sirt6 deacetylase regulate low density lipoprotein (LDL)-cholesterol homeostasis via control of the proprotein convertase subtilisin/kexin type 9 (Pcsk9) gene expression. J Biol Chem. (2013) 288:29252–9. 10.1074/jbc.M113.481473 [DOI] [PMC free article] [PubMed] [Google Scholar] 67.Li H, Liu J. The novel function of HINFP as a co-activator in sterol-regulated transcription of PCSK9 in HepG2 cells. Biochem J. (2012) 443:757–68. 
10.1042/BJ20111645 [DOI] [PubMed] [Google Scholar] 68.Lohoff FW, Sorcher JL, Rosen AD, Mauro KL, Fanelli RR, Momenan R, et al. Methylomic profiling and replication implicates deregulation of PCSK9 in alcohol use disorder. Mol Psychiatry. (2018) 23:1900–10. 10.1038/mp.2017.168 [DOI] [PMC free article] [PubMed] [Google Scholar] 69.Jeong HJ, Lee HS, Kim KS, Kim YK, Yoon D, Park SW. Sterol-dependent regulation of proprotein convertase subtilisin/kexin type 9 expression by sterol-regulatory element binding protein-2. J Lipid Res. (2008) 49:399–409. 10.1194/jlr.M700443-JLR200 [DOI] [PubMed] [Google Scholar] 70.Dong B, Wu M, Li H, Kraemer FB, Adeli K, Seidah NG, et al. Strong induction of PCSK9 gene expression through HNF1alpha and SREBP2: mechanism for the resistance to LDL-cholesterol lowering effect of statins in dyslipidemic hamsters. J Lipid Res. (2010) 51:1486–95. 10.1194/jlr.M003566 [DOI] [PMC free article] [PubMed] [Google Scholar] 71.Dubuc G, Chamberland A, Wassef H, Davignon J, Seidah NG, Bernier L, et al. Statins upregulate PCSK9, the gene encoding the proprotein convertase neural apoptosis-regulated convertase-1 implicated in familial hypercholesterolemia. Arterioscler Thromb Vasc Biol. (2004) 24:1454–9. 10.1161/01.ATV.0000134621.14315.43 [DOI] [PubMed] [Google Scholar] 72.Poirier S, Prat A, Marcinkiewicz E, Paquin J, Chitramuthu BP, Baranowski D, et al. Implication of the proprotein convertase NARC-1/PCSK9 in the development of the nervous system. J Neurochem. (2006) 98:838–50. 10.1111/j.1471-4159.2006.03928.x [DOI] [PubMed] [Google Scholar] 73.Li H, Dong B, Park SW, Lee HS, Chen W, Liu J. Hepatocyte nuclear factor 1alpha plays a critical role in PCSK9 gene transcription and regulation by the natural hypocholesterolemic compound berberine. J Biol Chem. (2009) 284:28885–95. 10.1074/jbc.M109.052407 [DOI] [PMC free article] [PubMed] [Google Scholar] 74.Ai D, Chen C, Han S, Ganda A, Murphy AJ, Haeusler R, et al. Regulation of hepatic LDL receptors by mTORC1 and PCSK9 in mice. J Clin Invest. (2012) 122:1262–70. 10.1172/JCI61919 [DOI] [PMC free article] [PubMed] [Google Scholar] 75.Lai Q, Giralt A, Le May C, Zhang L, Cariou B, Denechaud PD, et al. E2F1 inhibits circulating cholesterol clearance by regulating Pcsk9 expression in the liver. JCI Insight. (2017) 2:e89729. 10.1172/jci.insight.89729 [DOI] [PMC free article] [PubMed] [Google Scholar] 76.Blesa S, Vernia S, Garcia-Garcia AB, Martinez-Hervas S, Ivorra C, Gonzalez-Albert V, et al. A new PCSK9 gene promoter variant affects gene expression and causes autosomal dominant hypercholesterolemia. J Clin Endocrinol Metab. (2008) 93:3577–83. 10.1210/jc.2008-0269 [DOI] [PubMed] [Google Scholar] 77.Tai MH, Chen PK, Chen PY, Wu MJ, Ho CT, Yen JH. Curcumin enhances cell-surface LDLR level and promotes LDL uptake through downregulation of PCSK9 gene expression in HepG2 cells. Mol Nutr Food Res. (2014) 58:2133–45. 10.1002/mnfr.201400366 [DOI] [PubMed] [Google Scholar] 78.Chang HY, Wu JR, Gao WY, Lin HR, Chen PY, Chen CI, et al. The cholesterol-modulating effect of methanol extract of pigeon pea (Cajanus cajan (L.) Millsp.) leaves on regulating LDLR and PCSK9 expression in HepG2 cells. Molecules. (2019) 24:493. 10.3390/molecules24030493 [DOI] [PMC free article] [PubMed] [Google Scholar] 79.Cui CJ, Jin JL, Guo LN, Sun J, Wu NQ, Guo YL, et al. Beneficial impact of epigallocatechingallate on LDL-C through PCSK9/LDLR pathway by blocking HNF1alpha and activating FoxO3a. J Transl Med. (2020) 18:195. 
10.1186/s12967-020-02362-4 [DOI] [PMC free article] [PubMed] [Google Scholar] 80.Wang D, Yang X, Chen Y, Gong K, Yu M, Gao Y, et al. Ascorbic acid enhances low-density lipoprotein receptor expression by suppressing proprotein convertase subtilisin/kexin 9 expression. J Biol Chem. (2020) 295:15870–15882. 10.1074/jbc.RA120.015623 [DOI] [PMC free article] [PubMed] [Google Scholar] 81.Gao WY, Chen PY, Chen SF, Wu MJ, Chang HY, Yen JH. Pinostrobin inhibits proprotein convertase subtilisin/kexin-type 9 (PCSK9) gene expression through the modulation of FoxO3a protein in HepG2 cells. J Agric Food Chem. (2018) 66:6083–93. 10.1021/acs.jafc.8b02559 [DOI] [PubMed] [Google Scholar] 82.Chen HC, Chen PY, Wu MJ, Tai MH, Yen JH. Tanshinone IIA modulates low density lipoprotein uptake via down-regulation of PCSK9 gene expression in HepG2 cells. PLoS ONE. (2016) 11:e0162414. 10.1371/journal.pone.0162414 [DOI] [PMC free article] [PubMed] [Google Scholar] 83.Wang X, Chen X, Zhang X, Su C, Yang M, He W, et al. A small-molecule inhibitor of PCSK9 transcription ameliorates atherosclerosis through the modulation of FoxO1/3 and HNF1alpha. EBioMedicine. (2020) 52:102650. 10.1016/j.ebiom.2020.102650 [DOI] [PMC free article] [PubMed] [Google Scholar] 84.Fan TY, Yang YX, Zeng QX, Wang XL, Wei W, Guo XX, et al. Structure-activity relationship and biological evaluation of berberine derivatives as PCSK9 down-regulating agents. Bioorg Chem. (2021) 113:104994. 10.1016/j.bioorg.2021.104994 [DOI] [PubMed] [Google Scholar] 85.Lin Z, Pan X, Wu F, Ye D, Zhang Y, Wang Y, et al. Fibroblast growth factor 21 prevents atherosclerosis by suppression of hepatic sterol regulatory element-binding protein-2 and induction of adiponectin in mice. Circulation. (2015) 131:1861–71. 10.1161/CIRCULATIONAHA.115.015308 [DOI] [PMC free article] [PubMed] [Google Scholar] 86.Langhi C, Le May C, Kourimate S, Caron S, Staels B, Krempf M, et al. Activation of the farnesoid X receptor represses PCSK9 expression in human hepatocytes. FEBS Lett. (2008) 582:949–55. 10.1016/j.febslet.2008.02.038 [DOI] [PubMed] [Google Scholar] 87.Kourimate S, Le May C, Langhi C, Jarnoux AL, Ouguerram K, Zair Y, et al. Dual mechanisms for the fibrate-mediated repression of proprotein convertase subtilisin/kexin type 9. J Biol Chem. (2008) 283:9666–73. 10.1074/jbc.M705831200 [DOI] [PubMed] [Google Scholar] 88.Momtazi AA, Banach M, Pirro M, Katsiki N, Sahebkar A. Regulation of PCSK9 by nutraceuticals. Pharmacol Res. (2017) 120:157–69. 10.1016/j.phrs.2017.03.023 [DOI] [PubMed] [Google Scholar] 89.Melone M, Wilsie L, Palyha O, Strack A, Rashid S. Discovery of a new role of human resistin in hepatocyte low-density lipoprotein receptor suppression mediated in part by proprotein convertase subtilisin/kexin type 9. J Am Coll Cardiol. (2012) 59:1697–705. 10.1016/j.jacc.2011.11.064 [DOI] [PubMed] [Google Scholar] 90.Patitucci C, Couchy G, Bagattin A, Caneque T, de Reynies A, Scoazec JY, et al. Hepatocyte nuclear factor 1alpha suppresses steatosis-associated liver cancer by inhibiting PPARgamma transcription. J Clin Invest. (2017) 127:1873–88. 10.1172/JCI90327 [DOI] [PMC free article] [PubMed] [Google Scholar] 91.Dong J, He M, Li J, Pessentheiner AR, Wang C, Zhang J, et al. MicroRNA-483 ameliorates hypercholesterolemia by inhibiting PCSK9 production. JCI Insight. (2020) 5:e143812. 10.1172/jci.insight.143812 [DOI] [PMC free article] [PubMed] [Google Scholar] 92.Naeli P, Mirzadeh Azad F, Malakootian M, Seidah NG, Mowla SJ. 
Post-transcriptional regulation of PCSK9 by miR-191, miR-222, and miR-224. Front Genet. (2017) 8:189. 10.3389/fgene.2017.00189 [DOI] [PMC free article] [PubMed] [Google Scholar] 93.Los B, Borges JB, Oliveira VF, Freitas RC, Dagli-Hernandez C, Bortolin RH, et al. Functional analysis of PCSK9 3'UTR variants and mRNA-miRNA interactions in patients with familial hypercholesterolemia. Epigenomics. (2021) 13:779–791. 10.2217/epi-2020-0462 [DOI] [PubMed] [Google Scholar] 94.Xu X, Dong Y, Ma N, Kong W, Yu C, Gong L, et al. MiR-337-3p lowers serum LDL-C level through targeting PCSK9 in hyperlipidemic mice. Metabolism. (2021) 119:154768. 10.1016/j.metabol.2021.154768 [DOI] [PubMed] [Google Scholar] 95.Decourt C, Janin A, Moindrot M, Chatron N, Nony S, Muntaner M, et al. PCSK9 post-transcriptional regulation: role of a 3'UTR microRNA-binding site variant in linkage disequilibrium with c.1420G. Atherosclerosis. (2020) 314:63–70. 10.1016/j.atherosclerosis.2020.10.010 [DOI] [PubMed] [Google Scholar] 96.Alvarez ML, Khosroheidari M, Eddy E, Done SC. MicroRNA-27a decreases the level and efficiency of the LDL receptor and contributes to the dysregulation of cholesterol homeostasis. Atherosclerosis. (2015) 242:595–604. 10.1016/j.atherosclerosis.2015.08.023 [DOI] [PMC free article] [PubMed] [Google Scholar] 97.Broughton JP, Lovci MT, Huang JL, Yeo GW, Pasquinelli AE. Pairing beyond the seed supports MicroRNA targeting specificity. Mol Cell. (2016) 64:320–33. 10.1016/j.molcel.2016.09.004 [DOI] [PMC free article] [PubMed] [Google Scholar] 98.Jackson AL, Bartz SR, Schelter J, Kobayashi SV, Burchard J, Mao M, et al. Expression profiling reveals off-target gene regulation by RNAi. Nat Biotechnol. (2003) 21:635–7. 10.1038/nbt831 [DOI] [PubMed] [Google Scholar] 99.Pecot CV, Calin GA, Coleman RL, Lopez-Berestein G, Sood AK. RNA interference in the clinic: challenges and future directions. Nat Rev Cancer. (2011) 11:59–67. 10.1038/nrc2966 [DOI] [PMC free article] [PubMed] [Google Scholar] 100.Petersen DN, Hawkins J, Ruangsiriluk W, Stevens KA, Maguire BA, O'Connell TN, et al. A small-molecule anti-secretagogue of PCSK9 targets the 80S ribosome to inhibit PCSK9 protein translation. Cell Chem Biol. (2016) 23:1362–71. 10.1016/j.chembiol.2016.08.016 [DOI] [PubMed] [Google Scholar] 101.Liu S, Deng X, Zhang P, Wang X, Fan Y, Zhou S, et al. Blood flow patterns regulate PCSK9 secretion via MyD88-mediated pro-inflammatory cytokines. Cardiovasc Res. (2020) 116:1721–32. 10.1093/cvr/cvz262 [DOI] [PMC free article] [PubMed] [Google Scholar] 102.Zaid A, Roubtsova A, Essalmani R, Marcinkiewicz J, Chamberland A, Hamelin J, et al. Proprotein convertase subtilisin/kexin type 9 (PCSK9): hepatocyte-specific low-density lipoprotein receptor degradation and critical role in mouse liver regeneration. Hepatology. (2008) 48:646–54. 10.1002/hep.22354 [DOI] [PubMed] [Google Scholar] 103.Chorba JS, Galvan AM, Shokat KM. Stepwise processing analyses of the single-turnover PCSK9 protease reveal its substrate sequence specificity and link clinical genotype to lipid phenotype. J Biol Chem. (2018) 293:1875–86. 10.1074/jbc.RA117.000754 [DOI] [PMC free article] [PubMed] [Google Scholar] 104.Benjannet S, Rhainds D, Essalmani R, Mayne J, Wickham L, Jin W, et al. NARC-1/PCSK9 and its natural mutants: zymogen cleavage and effects on the low density lipoprotein (LDL) receptor and LDL cholesterol. J Biol Chem. (2004) 279:48865–75. 10.1074/jbc.M409699200 [DOI] [PubMed] [Google Scholar] 105.Du F, Hui Y, Zhang M, Linton MF, Fazio S, Fan D. 
Novel domain interaction regulates secretion of proprotein convertase subtilisin/kexin type 9 (PCSK9) protein. J Biol Chem. (2011) 286:43054–61. 10.1074/jbc.M111.273474 [DOI] [PMC free article] [PubMed] [Google Scholar] 106.Chen XW, Wang H, Bajaj K, Zhang P, Meng ZX, Ma D, et al. SEC24A deficiency lowers plasma cholesterol through reduced PCSK9 secretion. Elife. (2013) 2:e00444. 10.7554/eLife.00444.017 [DOI] [PMC free article] [PubMed] [Google Scholar] 107.Emmer BT, Hesketh GG, Kotnik E, Tang VT, Lascuna PJ, Xiang J, et al. The cargo receptor SURF4 promotes the efficient cellular secretion of PCSK9. Elife. (2018) 7:e38839. 10.7554/eLife.38839.026 [DOI] [PMC free article] [PubMed] [Google Scholar] 108.Shen Y, Wang B, Deng S, Zhai L, Gu HM, Alabi A, et al. Surf4 regulates expression of proprotein convertase subtilisin/kexin type 9 (PCSK9) but is not required for PCSK9 secretion in cultured human hepatocytes. Biochim Biophys Acta Mol Cell Biol Lipids. (2020) 1865:158555. 10.1016/j.bbalip.2019.158555 [DOI] [PubMed] [Google Scholar] 109.Wang B, Shen Y, Zhai L, Xia X, Gu HM, Wang M, et al. Atherosclerosis-associated hepatic secretion of VLDL but not PCSK9 is dependent on cargo receptor protein Surf4. J Lipid Res. (2021) 62:100091. 10.1016/j.jlr.2021.100091 [DOI] [PMC free article] [PubMed] [Google Scholar] 110.Rogers MA, Hutcheson JD, Okui T, Goettsch C, Singh SA, Halu A, et al. Dynamin-related protein 1 inhibition reduces hepatic PCSK9 secretion. Cardiovasc Res. (2021) 117:2340–53. 10.1093/cvr/cvab034 [DOI] [PMC free article] [PubMed] [Google Scholar] 111.Gustafsen C, Kjolby M, Nyegaard M, Mattheisen M, Lundhede J, Buttenschon H, et al. The hypercholesterolemia-risk gene SORT1 facilitates PCSK9 secretion. Cell Metab. (2014) 19:310–8. 10.1016/j.cmet.2013.12.006 [DOI] [PubMed] [Google Scholar] 112.Dewpura T, Raymond A, Hamelin J, Seidah NG, Mbikay M, Chretien M, et al. PCSK9 is phosphorylated by a Golgi casein kinase-like kinase ex vivo and circulates as a phosphoprotein in humans. FEBS J. (2008) 275:3480–93. 10.1111/j.1742-4658.2008.06495.x [DOI] [PubMed] [Google Scholar] 113.Ben Djoudi Ouadda A, Gauthier MS, Susan-Resiga D, Girard E, Essalmani R, Black M, et al. Ser-phosphorylation of PCSK9 (proprotein convertase subtilisin-kexin 9) by Fam20C (family with sequence similarity 20, member C) kinase enhances its ability to degrade the LDLR (low-density lipoprotein receptor). Arterioscler Thromb Vasc Biol. (2019) 39:1996–2013. 10.1161/ATVBAHA.119.313247 [DOI] [PMC free article] [PubMed] [Google Scholar] 114.Zhang H, Zhu Q, Cui J, Wang Y, Chen MJ, Guo X, et al. Structure and evolution of the Fam20 kinases. Nat Commun. (2018) 9:1218. 10.1038/s41467-018-03615-z [DOI] [PMC free article] [PubMed] [Google Scholar] 115.Benjannet S, Rhainds D, Hamelin J, Nassoury N, Seidah NG. The proprotein convertase (PC) PCSK9 is inactivated by furin and/or PC5/6A: functional consequences of natural mutations and post-translational modifications. J Biol Chem. (2006) 281:30561–72. 10.1074/jbc.M606495200 [DOI] [PubMed] [Google Scholar] 116.Chong M, Yoon G, Susan-Resiga D, Chamberland A, Cheillan D, Pare G, et al. Hypolipidaemia among patients with PMM2-CDG is associated with low circulating PCSK9 levels: a case report followed by observational and experimental studies. J Med Genet. (2020) 57:11–17. 10.1136/jmedgenet-2019-106102 [DOI] [PubMed] [Google Scholar] 117.Catapano AL, Pirillo A, Norata GD. New pharmacological approaches to target PCSK9. Curr Atheroscler Rep. (2020) 22:24. 
10.1007/s11883-020-00847-7 [DOI] [PubMed] [Google Scholar] 118.Katzmann JL, Gouni-Berthold I, Laufs U. PCSK9 inhibition: insights from clinical trials and future prospects. Front Physiol. (2020) 11:595819. 10.3389/fphys.2020.595819 [DOI] [PMC free article] [PubMed] [Google Scholar] 119.Musunuru K, Chadwick AC, Mizoguchi T, Garcia SP, DeNizio JE, Reiss CW, et al. In vivo CRISPR base editing of PCSK9 durably lowers cholesterol in primates. Nature. (2021) 593:429–34. 10.1038/s41586-021-03534-y [DOI] [PubMed] [Google Scholar] 120.Zeitlinger M, Bauer M, Reindl-Schwaighofer R, Stoekenbroek RM, Lambert G, Berger-Sieczkowski E, et al. A phase I study assessing the safety, tolerability, immunogenicity, and low-density lipoprotein cholesterol-lowering activity of immunotherapeutics targeting PCSK9. Eur J Clin Pharmacol. (2021) 77:1473–1484. 10.1007/s00228-021-03149-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
arXiv:2011.07619v1 [math.CA] 15 Nov 2020

Order estimates of the uniform approximations by Zygmund sums on the classes of convolutions of periodic functions

Serdyuk A.S., Hrabova U.Z.
Institute of Mathematics, Kiev; Volyn national university of Lesya Ukrainka, Lutsk

Abstract

We establish the exact-order estimates of uniform approximations by the Zygmund sums $Z^s_{n-1}$, that is, trigonometric polynomials of the form
$$Z^s_{n-1}(f;t) := \frac{a_0}{2} + \sum_{k=1}^{n-1}\Big(1-\Big(\frac{k}{n}\Big)^{s}\Big)\big(a_k(f)\cos kt + b_k(f)\sin kt\big), \quad s>0,$$
where $a_k(f)$ and $b_k(f)$ are the Fourier coefficients of $f\in L_1$, of $2\pi$-periodic continuous functions $f$ from the classes $C^{\psi}_{\beta,p}$. These classes are defined by convolutions of functions from the unit ball in the space $L_p$, $1\le p<\infty$, with fixed generating kernels $\Psi_\beta(t)=\sum_{k=1}^{\infty}\psi(k)\cos\big(kt+\frac{\beta\pi}{2}\big)$, $\Psi_\beta\in L_{p'}$, $\beta\in\mathbb{R}$, $1/p+1/p'=1$. We additionally assume that the product $\psi(k)k^{s+1/p}$ is generally monotonically increasing with the rate of some power function and, besides, for $1<p<\infty$ it holds that $\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}<\infty$, while for $p=1$ the condition $\sum_{k=n}^{\infty}\psi(k)<\infty$ is true. It is shown that under these conditions the Zygmund sums $Z^s_{n-1}$ and the Fejér sums $\sigma_{n-1}=Z^1_{n-1}$ realize the order of the best uniform approximations by trigonometric polynomials on these classes, namely for $1<p<\infty$
$$E_n\big(C^{\psi}_{\beta,p}\big)_C \asymp \mathcal{E}\big(C^{\psi}_{\beta,p}; Z^s_{n-1}\big)_C \asymp \Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'},\quad \frac1p+\frac1{p'}=1,$$
and for $p=1$
$$E_n\big(C^{\psi}_{\beta,1}\big)_C \asymp \mathcal{E}\big(C^{\psi}_{\beta,1}; Z^s_{n-1}\big)_C \asymp \begin{cases}\displaystyle\sum_{k=n}^{\infty}\psi(k), & \cos\frac{\beta\pi}{2}\neq 0;\\[4pt] \psi(n)n, & \cos\frac{\beta\pi}{2}=0,\end{cases}$$
where
$$E_n\big(C^{\psi}_{\beta,p}\big)_C := \sup_{f\in C^{\psi}_{\beta,p}}\ \inf_{t_{n-1}\in\mathcal{T}_{2n-1}}\|f-t_{n-1}\|_C,$$
$\mathcal{T}_{2n-1}$ is the subspace of trigonometric polynomials $t_{n-1}$ of order $n-1$ with real coefficients, and
$$\mathcal{E}\big(C^{\psi}_{\beta,p}; Z^s_{n-1}\big)_C := \sup_{f\in C^{\psi}_{\beta,p}}\|f(\cdot)-Z^s_{n-1}(f;\cdot)\|_C.$$

Key words: best approximations, Zygmund sums, Fejér sums, subspace of trigonometric polynomials, order estimate

1 Notations, definitions and auxiliary statements

Denote by $L_p$, $1\le p\le\infty$, the space of $2\pi$-periodic functions $f$ summable on $[0,2\pi]$ with the norm
$$\|f\|_p = \begin{cases}\Big(\displaystyle\int_0^{2\pi}|f(t)|^p\,dt\Big)^{1/p}, & 1\le p<\infty;\\[4pt] \operatorname*{ess\,sup}_t |f(t)|, & p=\infty,\end{cases}$$
and by $C$ the space of $2\pi$-periodic continuous functions in which the norm is defined by $\|f\|_C = \max_t|f(t)|$.

Let $f\in L_1$ and let
$$S[f] = \frac{a_0}{2} + \sum_{k=1}^{\infty}\big(a_k(f)\cos kx + b_k(f)\sin kx\big)$$
be the Fourier series of the function $f$. If for a sequence $\psi(k)\in\mathbb{R}$ and a fixed number $\beta\in\mathbb{R}$ the series
$$\sum_{k=1}^{\infty}\frac{1}{\psi(k)}\Big(a_k(f)\cos\Big(kx+\frac{\beta\pi}{2}\Big) + b_k(f)\sin\Big(kx+\frac{\beta\pi}{2}\Big)\Big)$$
is the Fourier series of a summable function $\varphi$, then this function is called the $(\psi,\beta)$-derivative of the function $f(x)$ and is denoted by $f^{\psi}_{\beta}(x)$. The set of functions $f(x)$ for which this condition is satisfied is denoted by $L^{\psi}_{\beta}$, and the subset of all continuous functions from $L^{\psi}_{\beta}$ is denoted by $C^{\psi}_{\beta}$. If $f\in L^{\psi}_{\beta}$ and, furthermore, $f^{\psi}_{\beta}\in N$, where $N\subset L_1$, then we write $f\in L^{\psi}_{\beta}N$. Let us put $L^{\psi}_{\beta}N\cap C = C^{\psi}_{\beta}N$.

The concept of the $(\psi,\beta)$-derivative is a natural generalization of the concept of the $(r,\beta)$-derivative in the Weyl–Nagy sense and coincides almost everywhere with the latter when $\psi(k)=k^{-r}$, $r>0$: namely, if $\psi(k)=k^{-r}$, $r>0$, then $L^{\psi}_{\beta}N = W^{r}_{\beta}N$ and $f^{\psi}_{\beta}=f^{r}_{\beta}$, where $f^{r}_{\beta}$ is the derivative in the Weyl–Nagy sense and $W^{r}_{\beta}N$ are the Weyl–Nagy classes. In the case $\beta=r$ the classes $W^{r}_{\beta}N$ are the well-known Weyl classes $W^{r}_{r}N$, while the derivatives $f^{r}_{\beta}$ coincide almost everywhere with the derivatives in the sense of Weyl $f^{r}_{r}$.
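For orientation, here is a one-line worked example (added for illustration; it is not part of the original text): computing the $(\psi,\beta)$-derivative of a single harmonic shows directly how the Weyl case is recovered.

```latex
% Illustration (not from the original paper): the (psi,beta)-derivative
% of a single harmonic. For f(x) = cos(kx), only a_k(f) = 1 is nonzero,
% so the defining series has a single term:
\[
  f(x)=\cos kx
  \quad\Longrightarrow\quad
  f^{\psi}_{\beta}(x)=\frac{1}{\psi(k)}\,\cos\!\Big(kx+\frac{\beta\pi}{2}\Big).
\]
% With psi(k) = k^{-r} and beta = r this becomes k^r cos(kx + r*pi/2),
% which for r in N equals the classical r-th derivative of cos(kx),
% consistent with the identification L^psi_beta N = W^r_beta N.
```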
If, in addition, $\beta=r$, $r\in\mathbb{N}$, then $f^{r}_{\beta}$ coincides almost everywhere with the usual derivative $f^{(r)}$ of order $r$ of the function $f$ ($f^{r}_{\beta}=f^{r}_{r}=f^{(r)}$), and at the same time $W^{r}_{\beta}N = W^{r}_{r}N = W^{r}N$.

According to Statement 3.8.3 from , if the series
$$\sum_{k=1}^{\infty}\psi(k)\cos\Big(kt-\frac{\beta\pi}{2}\Big),\quad \beta\in\mathbb{R}, \qquad (1)$$
is the Fourier series of the function $\Psi_\beta\in L_1$, then the elements $f$ of the classes $L^{\psi}_{\beta}N$ are represented for almost every $x\in\mathbb{R}$ as a convolution
$$f(x)=\frac{a_0}{2}+(\Psi_\beta * \varphi)(x)=\frac{a_0}{2}+\frac{1}{\pi}\int_{-\pi}^{\pi}\Psi_\beta(x-t)\varphi(t)\,dt,\quad a_0\in\mathbb{R},\ \varphi\perp 1,\ \varphi\in N, \qquad (2)$$
where $\varphi$ almost everywhere coincides with $f^{\psi}_{\beta}$.

As the sets $N$ we will consider the unit balls of the spaces $L_p$:
$$U_p=\{\varphi\in L_p:\ \|\varphi\|_p\le 1\},\quad 1\le p\le\infty.$$
Then put $L^{\psi}_{\beta,p}:=L^{\psi}_{\beta}U_p$, $C^{\psi}_{\beta,p}:=C^{\psi}_{\beta}U_p$, $W^{r}_{\beta,p}:=W^{r}_{\beta}U_p$.

According to Statement 1.2 from , if the fixed kernel $\Psi_\beta$ of the classes $L^{\psi}_{\beta,p}$ and $C^{\psi}_{\beta,p}$ satisfies the inclusion $\Psi_\beta\in L_{p'}$, $\frac1p+\frac1{p'}=1$, $1\le p\le\infty$, then the convolutions of the form (2) with $N=U_p$ are continuous functions. It is clear that in this case, for $f\in C^{\psi}_{\beta,p}$, the equality (2) is fulfilled for all $x\in\mathbb{R}$.

We assume that the sequences $\psi(k)$ are traces on the set of natural numbers $\mathbb{N}$ of positive continuous convex-downward functions $\psi(t)$ of the continuous argument $t\ge1$ that tend to zero as $t\to\infty$. The set of all such functions $\psi(t)$ is denoted by $M$.

To classify the functions $\psi$ from $M$ by their rate of decrease to zero it is convenient to use the following characteristic:
$$\alpha(t)=\alpha(\psi;t)=\frac{\psi(t)}{t\,|\psi'(t)|},\quad \psi'(t):=\psi'(t+0). \qquad (3)$$
(For instance, for $\psi(t)=t^{-r}$, $r>0$, one has $\alpha(\psi;t)\equiv 1/r$.) With its help we consider the following subsets of the set $M$ (see, e.g., ):
$$M_0:=\{\psi\in M:\ \exists K>0\ \forall t\ge1:\ 0<K\le\alpha(\psi;t)\},$$
$$M_C:=\{\psi\in M:\ \exists K_1,K_2>0\ \forall t\ge1:\ 0<K_1\le\alpha(\psi;t)\le K_2\}.$$
It is clear that $M_C\subset M_0$.

The Zygmund sums of order $n-1$ of a function $f\in L_1$ are the trigonometric polynomials of the form
$$Z^{s}_{n-1}(f;t)=\frac{a_0}{2}+\sum_{k=1}^{n-1}\Big(1-\Big(\frac{k}{n}\Big)^{s}\Big)\big(a_k(f)\cos kt+b_k(f)\sin kt\big),\quad s>0, \qquad (4)$$
where $a_k(f)$ and $b_k(f)$ are the Fourier coefficients of the function $f$. In the case $s=1$ the polynomials $Z^{s}_{n-1}$ are the Fejér sums, $Z^{1}_{n-1}=\sigma_{n-1}$:
$$\sigma_{n-1}(f;t)=\frac{a_0}{2}+\sum_{k=1}^{n-1}\Big(1-\frac{k}{n}\Big)\big(a_k(f)\cos kt+b_k(f)\sin kt\big). \qquad (5)$$

In this paper we consider the approximation characteristics
$$\mathcal{E}\big(C^{\psi}_{\beta,p};Z^{s}_{n-1}\big)_C=\sup_{f\in C^{\psi}_{\beta,p}}\|f(\cdot)-Z^{s}_{n-1}(f;\cdot)\|_C,\quad 1\le p\le\infty,\ \beta\in\mathbb{R}, \qquad (6)$$
and solve the problem of establishing the order of decrease to zero, as $n\to\infty$, of these quantities with respect to the relations between the parameters $\psi$, $\beta$, $p$ and $s$. It is clear that one can draw conclusions about the approximation ability of a linear polynomial approximation method (including the Fejér $\sigma_{n-1}$ and Zygmund $Z^{s}_{n-1}$ methods) on the class $C^{\psi}_{\beta,p}$ by comparing the rate of decrease of the exact upper bounds of the uniform deviations of the trigonometric sums generated by this method on the set $C^{\psi}_{\beta,p}$ with the rate of decrease of the best uniform approximations of the class $C^{\psi}_{\beta,p}$ by trigonometric polynomials $t_{n-1}$ of order not higher than $n-1$, namely the quantities
$$E_n\big(C^{\psi}_{\beta,p}\big)_C=\sup_{f\in C^{\psi}_{\beta,p}}\ \inf_{t_{n-1}\in\mathcal{T}_{2n-1}}\|f(\cdot)-t_{n-1}(\cdot)\|_C,\quad 1\le p\le\infty, \qquad (7)$$
where $\mathcal{T}_{2n-1}$ is the subspace of trigonometric polynomials $t_{n-1}$ of order $n-1$ with real coefficients. Since the estimate
$$E_n\big(C^{\psi}_{\beta,p}\big)_C\le\mathcal{E}\big(C^{\psi}_{\beta,p};Z^{s}_{n-1}\big)_C,\quad n\in\mathbb{N}, \qquad (8)$$
always holds, it is important to know under which restrictions on the parameters $\psi$, $s$, $\beta$ and $p$ the equality
$$E_n\big(C^{\psi}_{\beta,p}\big)_C\asymp\mathcal{E}\big(C^{\psi}_{\beta,p};Z^{s}_{n-1}\big)_C \qquad (9)$$
takes place.
$$E_n\big(C^\psi_{\beta,p}\big)_C\asymp\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C \tag{9}$$
takes place. The notation $A(n)\asymp B(n)$ means that $A(n)=O(B(n))$ and at the same time $B(n)=O(A(n))$, where by $A(n)=O(B(n))$ we mean that there exists a constant $K>0$ such that the inequality $A(n)\le K B(n)$ holds.

In [25] A. Zygmund introduced trigonometric polynomials of the form (4) and found exact order estimates of the quantities $\mathcal E(W^r_\infty;Z^s_{n-1})_C$ for $r\in\mathbb N$. B. Nagy [21] investigated the quantities $\mathcal E(W^r_{\beta,\infty};Z^s_{n-1})_C$ for $r>0$, $\beta\in\mathbb Z$: for $s\le r$ he established an asymptotic equality, and for $s>r$ he found order estimates. Later S.A. Telyakovskii [22] obtained asymptotically exact equalities for the quantities $\mathcal E(W^r_{\beta,\infty};Z^s_{n-1})_C$ for $r>0$ and $\beta\in\mathbb R$ as $n\to\infty$. On the Weyl–Nagy classes, exact order estimates of the quantities $\mathcal E(W^r_{\beta,p};Z^s_{n-1})_C$ for $1<p<\infty$, $r>1/p$, and for $p=1$, $r\ge1$, $\beta\in\mathbb R$, were found in [5].

Concerning the Fejér sums $\sigma_{n-1}(f;t)$, it should be noted that order estimates of the quantities $\mathcal E(W^r_{\beta,\infty};\sigma_{n-1})_C$, $r>0$, $\beta\in\mathbb Z$, were found by S.M. Nikol'skii [7]; for the quantities $\mathcal E(W^r_{r,p};\sigma_{n-1})_C$ for $1<p\le\infty$, $r>1/p$, and also for $p=1$, $r\ge1$, they were found by V.M. Tikhomirov [24] and by A.I. Kamzolov [4].

Approximation properties of Zygmund sums on the classes of $(\psi,\beta)$-differentiable functions were studied in the works [1, 5, 13, 14] (see also [19]). In particular, in the work of D.M. Bushev [1] asymptotic equalities for the quantities $\mathcal E(C^\psi_{\beta,\infty};Z^s_{n-1})_C$ as $n\to\infty$ were established under some quite natural constraints on $\psi$ and $s$. In the case when the series $\sum_{k=1}^{\infty}\psi^2(k)$ converges, exact values of the quantities $\mathcal E(C^\psi_{\beta,2};Z^s_{n-1})_C$ were established in the work of A.S. Serdyuk and I.V. Sokolenko [14]. In the work [13] the authors found exact order estimates of uniform approximations by Zygmund sums $Z^s_{n-1}$ on the classes $C^\psi_{\beta,p}$, $1<p<\infty$, when $\psi\in\Theta_p$, where $\Theta_p$, $1<p<\infty$, is the set of non-increasing functions $\psi(t)$ for which there exists $\alpha>1/p$ such that $t^\alpha\psi(t)$ almost decreases, while $\psi(t)t^{s+1/p-\varepsilon}$ increases on $[1,\infty)$ for some $\varepsilon>0$.

Concerning estimates of the best uniform approximations of functional compacts, the following should be noted. For the Weyl–Nagy classes $W^r_{\beta,p}$, $r>1/p$, $\beta\in\mathbb R$, $1\le p\le\infty$, exact order estimates of the best approximations $E_n(W^r_{\beta,p})_C$ are known (see, e.g., [23]). Moreover, for $p=\infty$ exact values of the quantities $E_n(W^r_{\beta,\infty})_C$ are known for all $r>0$, $\beta\in\mathbb R$ and $n\in\mathbb N$ (see [2]). Order estimates of the best approximations of the classes $C^\psi_{\beta,p}$ under certain restrictions on $\psi$, $\beta$ and $p$ were investigated in the works [3, 16, 17, 20]. In some particular cases (especially for $p=\infty$) exact or asymptotically exact values of the quantities $E_n(C^\psi_{\beta,p})_C$ are also known (see [2, 9, 10, 11, 12, 15, 18]).

In this paper we establish exact order estimates of quantities of the form (6) for all $1\le p<\infty$ and $\beta\in\mathbb R$ in the case when $\psi(t)t^{1/p}\in\mathfrak M_0$, the product $\psi(k)k^{s+1/p}$ generally monotonically increases, $\psi(k)k^{s+1/p-\varepsilon}$ almost increases (in the sense of Bernstein) for some $\varepsilon>0$, and for $1<p<\infty$
$$\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}<\infty,\qquad \frac1p+\frac1{p'}=1, \tag{10}$$
while for $p=1$
$$\sum_{k=n}^{\infty}\psi(k)<\infty. \tag{11}$$
Conditions (10) and (11), together with the monotone decay of the sequence $\psi(k)$ to zero, ensure the inclusion $\Psi_\beta\in L_{p'}$, $1/p+1/p'=1$, $1\le p<\infty$ (see, e.g., Lemma 12.6.6 in [26, p. 193]). It is also shown that under certain conditions the Zygmund sums (and for $s=1$ also the Fejér sums) realize the orders of the best uniform approximations on the classes $C^\psi_{\beta,p}$, that is, the order estimate (9) is true.
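As a concrete numerical illustration of definition (4), the Zygmund sums can be evaluated directly from Fourier coefficients, with $s=1$ reproducing the Fejér sum (5). The following sketch is not part of the paper; the test function $f(t)=|t|$ (whose Fourier coefficients are $a_0=\pi$, $a_k=-4/(\pi k^2)$ for odd $k$ and $0$ otherwise, $b_k=0$) and the order $n$ are arbitrary choices:

```python
import numpy as np

def zygmund_sum(a0, a, b, s, t):
    """Zygmund sum Z^s_{n-1}(f; t) built from Fourier coefficients
    a0, a[k-1] = a_k(f), b[k-1] = b_k(f), k = 1..n-1 (definition (4))."""
    n = len(a) + 1
    k = np.arange(1, n)
    weights = 1.0 - (k / n) ** s                  # multipliers (1 - (k/n)^s)
    t = np.atleast_1d(t)
    terms = weights[:, None] * (a[:, None] * np.cos(np.outer(k, t))
                                + b[:, None] * np.sin(np.outer(k, t)))
    return a0 / 2 + terms.sum(axis=0)

# f(t) = |t| on [-pi, pi): a_k = -4/(pi k^2) for odd k, 0 for even k.
n = 16
k = np.arange(1, n)
a = np.where(k % 2 == 1, -4 / (np.pi * k**2), 0.0)
b = np.zeros_like(a)
t = np.linspace(-np.pi, np.pi, 7)
z_fejer = zygmund_sum(np.pi, a, b, s=1.0, t=t)   # s = 1: the Fejer sum (5)
z_2 = zygmund_sum(np.pi, a, b, s=2.0, t=t)
print(np.max(np.abs(z_fejer - np.abs(t))), np.max(np.abs(z_2 - np.abs(t))))
```

Increasing `n` shrinks both uniform errors, in line with the order statements that follow.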
Previously, this property was proved for Fourier sums in [3, 16, 17, 18].

Let us formulate the necessary definitions. A non-negative sequence $a=\{a_k\}_{k=1}^{\infty}$, $k\in\mathbb N$, is said to be generally monotonically increasing (written $a\in GM^+$) if there exists a constant $A\ge1$ such that for any naturals $n_1\le n_2$ the inequalities
$$a_{n_1}+\sum_{k=n_1}^{m-1}|a_k-a_{k+1}|\le A\,a_m,\qquad m=n_1,\dots,n_2, \tag{12}$$
hold. It is easy to see that if a positive sequence $a=\{a_k\}_{k=1}^{\infty}$ increases starting from some index, then it is generally monotonically increasing.

A non-negative sequence $a=\{a_k\}_{k=1}^{\infty}$, $k\in\mathbb N$, is said to be almost increasing (in the sense of Bernstein) if there exists a constant $K$ such that for all $n_1\le n_2$
$$a_{n_1}\le K a_{n_2}. \tag{13}$$
If for the sequence $a=\{a_k\}_{k=1}^{\infty}$ there exists $\varepsilon>0$ such that $\{a_kk^{-\varepsilon}\}$ almost increases, then we write $a\in GA^+$. It is clear that if a sequence $a$ belongs to $GM^+$, then it is almost increasing in the sense of Bernstein.

Further, for $\delta>0$ put $g_\delta(t):=\psi(t)t^\delta$, $t\in[1,\infty)$.

2 Order estimates of the approximations by Zygmund sums on the classes of convolutions

Theorem 1. Let $s>0$, $1\le p<\infty$, $g_{1/p}\in\mathfrak M_0$, $g_{s+1/p}\in GM^+\cap GA^+$, $\beta\in\mathbb R$ and $n\in\mathbb N$. In the case $1<p<\infty$, if condition (10) holds and
$$\inf_{t\ge1}\alpha(g_{1/p};t)>\frac{p'}{2}, \tag{14}$$
then the following order estimates take place:
$$E_n\big(C^\psi_{\beta,p}\big)_C\asymp\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C\asymp\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'},\qquad \frac1p+\frac1{p'}=1; \tag{15}$$
in the case $p=1$, if condition (11) holds and
$$\inf_{t\ge1}\alpha(g_1;t)>1, \tag{16}$$
then the following order estimates take place:
$$E_n\big(C^\psi_{\beta,1}\big)_C\asymp\mathcal E\big(C^\psi_{\beta,1};Z^s_{n-1}\big)_C\asymp\begin{cases}\displaystyle\sum_{k=n}^{\infty}\psi(k),&\cos\frac{\beta\pi}{2}\ne0,\\[1mm] \psi(n)n,&\cos\frac{\beta\pi}{2}=0.\end{cases} \tag{17}$$

Proof. Since the operator $Z^s_{n-1}:f(t)\to Z^s_{n-1}(f;t)$ is a linear polynomial operator invariant under shifts, i.e., $Z^s_{n-1}(f_h;t)=Z^s_{n-1}(f;t+h)$ with $f_h(t)=f(t+h)$, $h\in\mathbb R$, and both the norm in $C$ and the classes $C^\psi_{\beta,p}$ are also shift-invariant, that is, $\|f_h\|_C=\|f\|_C$ and $f\in C^\psi_{\beta,p}\Rightarrow f_h\in C^\psi_{\beta,p}$, we have
$$\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C=\sup_{f\in C^\psi_{\beta,p}}|f(0)-Z^s_{n-1}(f;0)|. \tag{18}$$
By virtue of (2) and (4), for any function $f\in C^\psi_{\beta,p}$, $1\le p<\infty$, $\beta\in\mathbb R$, $s>0$, the following equality holds:
$$f(0)-Z^s_{n-1}(f;0)=\frac1\pi\int_{-\pi}^{\pi}\Big(\frac1{n^s}\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)+\Psi_{-\beta,n}(t)\Big)\varphi(t)\,dt, \tag{19}$$
where $\Psi_{-\beta,n}(t)=\sum_{k=n}^{\infty}\psi(k)\cos\big(kt+\frac{\beta\pi}{2}\big)$, $\|\varphi\|_p\le1$, $n\in\mathbb N$.

Relations (18) and (19), Hölder's inequality and the triangle inequality imply that for $1\le p<\infty$
$$\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C\le\frac1\pi\Big\|\frac1{n^s}\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)+\Psi_{-\beta,n}(t)\Big\|_{p'}\le\frac1{\pi n^s}\Big\|\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)\Big\|_{p'}+\frac1\pi\big\|\Psi_{-\beta,n}(t)\big\|_{p'},\quad \frac1p+\frac1{p'}=1. \tag{20}$$

Let us show that if $g_{s+1/p}\in GM^+\cap GA^+$, where $g_{s+1/p}=\{\psi(k)k^{s+1/p}\}_{k=1}^{\infty}$, then
$$\Big\|\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)\Big\|_{p'}=O\big(\psi(n)n^{s+\frac1p}\big),\quad 1\le p<\infty. \tag{21}$$
Applying the Abel transformation to the sum $\sum_{k=1}^{n-1}\psi(k)k^s\cos(kt+\frac{\beta\pi}{2})$, we have
$$\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)=\sum_{k=1}^{n-2}\big(\psi(k)k^s-\psi(k+1)(k+1)^s\big)D_{k,\beta}(t)+\psi(n-1)(n-1)^sD_{n-1,\beta}(t)-\frac12\cos\frac{\beta\pi}{2}, \tag{22}$$
where
$$D_{k,\beta}(t):=\frac12\cos\frac{\beta\pi}{2}+\sum_{\nu=1}^{k}\cos\Big(\nu t-\frac{\beta\pi}{2}\Big).$$
Then, in view of $\|D_{k,\beta}(t)\|_{p'}=O(k^{1-\frac1{p'}})=O(k^{\frac1p})$, $1\le p<\infty$, $k\in\mathbb N$, $\beta\in\mathbb R$ (see, e.g., [19]), from (22) we get
$$\Big\|\sum_{k=1}^{n-1}\psi(k)k^s\cos\Big(kt+\frac{\beta\pi}{2}\Big)\Big\|_{p'}=O(1)+O\Big(\sum_{k=1}^{n-2}\big|\psi(k)k^s-\psi(k+1)(k+1)^s\big|k^{\frac1p}\Big)+O\big(\psi(n-1)(n-1)^{s+\frac1p}\big). \tag{23}$$
Since $g_{s+1/p}\in GM^+$, then, using the triangle inequality, inequality (12) and the Lagrange mean value theorem, we have
$$\sum_{k=1}^{n-2}\big|\psi(k)k^s-\psi(k+1)(k+1)^s\big|k^{\frac1p}\le\sum_{k=1}^{n-2}\big|\psi(k)k^{s+\frac1p}-\psi(k+1)(k+1)^{s+\frac1p}\big|+\sum_{k=1}^{n-2}\big|\psi(k+1)(k+1)^{s+\frac1p}-\psi(k+1)(k+1)^{s}k^{\frac1p}\big|$$
$$\le A\psi(n-1)(n-1)^{s+\frac1p}+\frac1p\sum_{k=1}^{n-2}\psi(k+1)(k+1)^{s}k^{\frac1p-1}=A\psi(n-1)(n-1)^{s+\frac1p}+\frac1p\sum_{k=1}^{n-2}\psi(k+1)(k+1)^{s+\frac1p-1}\Big(1+\frac1k\Big)^{\frac1{p'}}$$
$$\le A\psi(n-1)(n-1)^{s+\frac1p}+2\sum_{k=2}^{n-1}\frac{\psi(k)k^{s+\frac1p}}{k}. \tag{24}$$
According to the condition $g_{s+1/p}\in GA^+$, there exists $\varepsilon>0$ such that the sequence $\{g_{s+1/p}(k)k^{-\varepsilon}\}=\{\psi(k)k^{s+1/p-\varepsilon}\}$ almost increases; hence, taking (13) into account, we obtain
$$\sum_{k=2}^{n-1}\frac{\psi(k)k^{s+1/p}}{k}=\sum_{k=2}^{n-1}\frac{\psi(k)k^{s+1/p-\varepsilon}}{k^{1-\varepsilon}}\le K\psi(n-1)(n-1)^{s+1/p-\varepsilon}\sum_{k=2}^{n-1}\frac1{k^{1-\varepsilon}}<K\psi(n-1)(n-1)^{s+1/p-\varepsilon}\int_1^{n-1}\frac{dt}{t^{1-\varepsilon}}<\frac K\varepsilon\,\psi(n-1)(n-1)^{s+1/p}. \tag{25}$$
From (24) and (25) we get the inequality
$$\sum_{k=1}^{n-2}\big|\psi(k)k^s-\psi(k+1)(k+1)^s\big|k^{\frac1p}\le\Big(A+\frac{2K}{\varepsilon}\Big)\psi(n-1)(n-1)^{s+1/p}. \tag{26}$$
From (23) and (26) we obtain estimate (21).

To estimate the norm $\|\Psi_{-\beta,n}(\cdot)\|_{p'}$ for $1<p'<\infty$ we use the statement established in [17], according to which, if $\{a_k\}_{k=1}^{\infty}$ is a monotonically non-increasing sequence of positive numbers such that $\sum_{k=1}^{\infty}a_k^{p'}k^{p'-2}<\infty$, then for arbitrary $n\in\mathbb N$ and $\gamma\in\mathbb R$
$$\Big\|\sum_{k=n}^{\infty}a_k\cos(kx+\gamma)\Big\|_{p'}=O\Big(\sum_{k=n}^{\infty}a_k^{p'}k^{p'-2}+a_n^{p'}n^{p'-1}\Big)^{1/p'}. \tag{27}$$
Putting $a_k=\psi(k)$ and $\gamma=\frac{\beta\pi}{2}$ in (27), we obtain that for $1<p<\infty$, $\beta\in\mathbb R$ and $n\in\mathbb N$
$$\|\Psi_{-\beta,n}(\cdot)\|_{p'}=O\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}+\psi^{p'}(n)n^{p'-1}\Big)^{1/p'}. \tag{28}$$
Then, using Lemma 3 of [17], we conclude that for $1<p'<\infty$, $n\in\mathbb N$, under condition (10) and the inclusion $g_{1/p}\in\mathfrak M_0$, the following estimate holds:
$$\psi^{p'}(n)n^{p'-1}=O\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big). \tag{29}$$
By the hypotheses of Theorem 1 we have $g_{1/p}\in\mathfrak M_0$, so taking (29) into account, from (28) we obtain
$$\|\Psi_{-\beta,n}(\cdot)\|_{p'}=O\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'},\quad 1<p'<\infty,\ \beta\in\mathbb R,\ n\in\mathbb N. \tag{30}$$
Combining (20), (21) and (30) in the case when $g_{1/p}\in\mathfrak M_0$ and $g_{s+1/p}\in GM^+\cap GA^+$, we arrive at the estimate
$$\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C=O\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'},\quad 1<p<\infty,\ \frac1p+\frac1{p'}=1. \tag{31}$$
As follows from Corollaries 1 and 2 of [17], for $1<p<\infty$, $1/p+1/p'=1$, $n\in\mathbb N$ and $\beta\in\mathbb R$, under conditions (10) and (14) and the inclusion $g_{1/p}\in\mathfrak M_0$, we arrive at the order estimates
$$E_n\big(C^\psi_{\beta,p}\big)_C\asymp\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'}. \tag{32}$$
Therefore, by virtue of inequality (8) and relations (31) and (32), we obtain the order equality (15).

Let us now consider the case $p=1$ and establish an estimate of the norm $\|\Psi_{-\beta,n}(\cdot)\|_{p'}=\|\Psi_{-\beta,n}(\cdot)\|_\infty$. It is obvious that for any $\beta\in\mathbb R$
$$\|\Psi_{-\beta,n}(\cdot)\|_\infty=\Big\|\sum_{k=n}^{\infty}\psi(k)\cos\Big(kt+\frac{\beta\pi}{2}\Big)\Big\|_\infty\le\sum_{k=n}^{\infty}\psi(k). \tag{33}$$
If $\beta=2k+1$, $k\in\mathbb Z$, then the following estimate takes place:
$$\|\Psi_{-\beta,n}(\cdot)\|_\infty=\Big\|\sum_{k=n}^{\infty}\psi(k)\sin kt\Big\|_\infty\le(\pi+2)\,\psi(n)n \tag{34}$$
(see, e.g., relation (82) in [18]). According to Lemma 3 of [17], if $g_1\in\mathfrak M_0$, where $g_1=\{\psi(k)k\}_{k=1}^{\infty}$, and condition (11) holds, then
$$\psi(n)n=O\Big(\sum_{k=n}^{\infty}\psi(k)\Big). \tag{35}$$
If $g_1\in\mathfrak M_0$ and condition (11) holds, then combining (20), (21) and (33)–(35), we obtain the estimates
$$\mathcal E\big(C^\psi_{\beta,1};Z^s_{n-1}\big)_C=\begin{cases}O\Big(\displaystyle\sum_{k=n}^{\infty}\psi(k)\Big),&\cos\frac{\beta\pi}{2}\ne0,\\[1mm] O(\psi(n)n),&\cos\frac{\beta\pi}{2}=0.\end{cases} \tag{36}$$
To estimate the quantity $\mathcal E\big(C^\psi_{\beta,1};Z^s_{n-1}\big)_C$ from below, we use Theorems 3 and 4 of [17], according to which, if $g_1\in\mathfrak M_0$ and conditions (11) and (16) are true, then for $n\in\mathbb N$ and $\beta\in\mathbb R$ the following order equalities take place:
$$E_n\big(C^\psi_{\beta,1}\big)_C\asymp\begin{cases}\displaystyle\sum_{k=n}^{\infty}\psi(k),&\cos\frac{\beta\pi}{2}\ne0,\\[1mm] \psi(n)n,&\cos\frac{\beta\pi}{2}=0.\end{cases} \tag{37}$$
Estimate (17) follows from inequality (8) and estimates (36) and (37). Theorem 1 is proved.

Assume now that the conditions of Theorem 1 hold and, moreover, the stronger inclusion $g_{1/p}\in\mathfrak M_C$ takes place. As follows from Lemma 3 of [17], if $g_{1/p}\in\mathfrak M_C$ and condition (10) holds, then for $1<p<\infty$
$$\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\asymp\psi^{p'}(n)n^{p'-1}. \tag{38}$$
In addition, as was shown in [17, Lemma 3], if $g_1\in\mathfrak M_C$ and condition (11) holds, then
$$\sum_{k=n}^{\infty}\psi(k)\asymp\psi(n)n. \tag{39}$$
Formulas (38) and (39) together with Theorem 1 allow us to state the following.

Theorem 2. Let $s>0$, $1\le p<\infty$, $g_{1/p}\in\mathfrak M_C$, $g_{s+1/p}\in GM^+\cap GA^+$, $\beta\in\mathbb R$ and $n\in\mathbb N$. In the case $1<p<\infty$, if conditions (10) and (14) hold, then
$$E_n(C^\psi_{\beta,p})_C\asymp\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C\asymp\psi(n)n^{1/p}, \tag{40}$$
and in the case $p=1$, if conditions (11) and (16) hold, then
$$E_n(C^\psi_{\beta,1})_C\asymp\mathcal E\big(C^\psi_{\beta,1};Z^s_{n-1}\big)_C\asymp\psi(n)n. \tag{41}$$

Proof. Order estimates (40) were established in [13]. Note that when $1<p<\infty$, $g_{1/p}\in\mathfrak M_0$ and
$$\lim_{t\to\infty}\alpha(g_{1/p};t)=\infty, \tag{42}$$
the order estimates (40) do not take place, since in this case
$$\psi(n)n^{\frac1p}=o\Big(\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'}\Big),\quad n\to\infty$$
(see Lemma 3 of [17]). Similarly, when $p=1$, $g_{1/p}=g_1\in\mathfrak M_0$ and
$$\lim_{t\to\infty}\alpha(g_1;t)=\infty, \tag{43}$$
then, as follows from Lemma 3 of [17],
$$\psi(n)n=o\Big(\sum_{k=n}^{\infty}\psi(k)\Big),$$
and in this case, for $\beta$ such that $\cos\frac{\beta\pi}{2}\ne0$, the order estimates (41) do not take place.

As an example of a function $\psi(t)$ for which the conditions of Theorem 1 and the equalities (42) and (43) take place, we can use
$$\psi(t)=t^{-1/p}\ln^{-\gamma}(t+K),\qquad \gamma>\begin{cases}\frac1{p'},&1<p<\infty,\\ 1,&p=1,\end{cases}\qquad K>\begin{cases}e^{\gamma p'/2},&1<p<\infty,\\ e^{\gamma},&p=1\end{cases} \tag{44}$$
(see [17, 18]). Let us write the order estimates for the quantities $E_n(C^\psi_{\beta,p})_C$ and $\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C$ in the case when $\psi(t)$ has the form (44).

Theorem 3. Let $\psi(t)=t^{-1/p}\ln^{-\gamma}(t+K)$, $\beta\in\mathbb R$ and $n\in\mathbb N$. If $1<p<\infty$, $\gamma>1/p'$, $K>e^{\gamma p'/2}$, $1/p+1/p'=1$, then
$$E_n(C^\psi_{\beta,p})_C\asymp\mathcal E\big(C^\psi_{\beta,p};Z^s_{n-1}\big)_C\asymp\psi(n)n^{1/p}\ln^{1/p'}n,\quad n\ge2; \tag{45}$$
if $p=1$, $\gamma>1$, $K>e^{\gamma}$, then
$$E_n(C^\psi_{\beta,1})_C\asymp\mathcal E\big(C^\psi_{\beta,1};Z^s_{n-1}\big)_C\asymp\begin{cases}\psi(n)n\ln n,&\cos\frac{\beta\pi}{2}\ne0,\ n\ge2,\\ \psi(n)n,&\cos\frac{\beta\pi}{2}=0.\end{cases} \tag{46}$$

We show that for the indicated function $\psi$ of the form (44) all conditions of Theorem 1 are true. Indeed, for $1<p<\infty$, $\gamma>1/p'$, $K>e^{\gamma p'/2}$ we have
$$\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}=\sum_{k=n}^{\infty}\frac{1}{k\ln^{\gamma p'}(k+K)}<\infty,\qquad \alpha(g_{1/p};t)=\frac{(t+K)\ln(t+K)}{\gamma t}>\frac{\ln(t+e^{\gamma p'/2})}{\gamma},$$
and hence $\lim_{t\to\infty}\alpha(g_{1/p};t)=\infty$ and $\alpha(g_{1/p};t)>\frac{p'}{2}$. For $p=1$, $\gamma>1$, $K\ge e^{\gamma}$, we have
$$\sum_{k=n}^{\infty}\psi(k)\le\sum_{k=n}^{\infty}\frac{1}{k\ln^{\gamma}(k+e^{\gamma})}<\infty,\qquad \alpha(g_1;t)>\frac{\ln(t+e^{\gamma})}{\gamma},$$
and hence $\lim_{t\to\infty}\alpha(g_1;t)=\infty$ and $\alpha(g_1;t)>1$. It is obvious that for any $s>0$ and $1\le p<\infty$ the functions $g_{s+1/p}(t)=t^s\ln^{-\gamma}(t+K)$ increase monotonically starting from some point $t_0$; therefore it is not difficult to verify that the sequence $g_{s+1/p}(k)$ belongs to the set $GM^+\cap GA^+$. Hence the function $\psi$ of the form (44) satisfies the conditions of Theorem 1.
Further, using formula (79) of [18], we obtain
$$\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'}\asymp\Big(\int_n^{\infty}\psi^{p'}(t)t^{p'-2}\,dt\Big)^{1/p'}=\Big(\int_n^{\infty}\frac{dt}{t\ln^{\gamma p'}(t+K)}\Big)^{1/p'}\asymp\ln^{1/p'-\gamma}n=\psi(n)n^{1/p}\ln^{1/p'}n\cdot\frac{\ln^{-\gamma}n}{\ln^{-\gamma}(n+K)}\asymp\psi(n)n^{1/p}\ln^{1/p'}n,\quad n\ge2. \tag{47}$$
Then formula (45) follows from estimate (15) and relation (47). Similarly, by virtue of inequality (87) of [18], we get
$$\sum_{k=n}^{\infty}\psi(k)\asymp\int_n^{\infty}\psi(t)\,dt=\int_n^{\infty}\frac{dt}{t\ln^{\gamma}(t+K)}\asymp\ln^{1-\gamma}n\asymp\psi(n)n\ln n,\quad n>2. \tag{48}$$
Formula (46) follows from estimate (17) and relation (48) in the case where $\beta$ is such that $\cos\frac{\beta\pi}{2}\ne0$. Theorem 3 is proved.

As was already mentioned, for $s=1$ the Zygmund sums $Z^s_{n-1}$ coincide with the well-known Fejér sums $\sigma_{n-1}$. Therefore Theorems 1 and 2 imply the following statements.

Proposition 1. Let $1\le p<\infty$, $g_{1/p}\in\mathfrak M_0$, $g_{1+1/p}\in GM^+\cap GA^+$, $\beta\in\mathbb R$ and $n\in\mathbb N$. In the case $1<p<\infty$, if conditions (10) and (14) hold, then the following order estimates take place:
$$E_n(C^\psi_{\beta,p})_C\asymp\mathcal E\big(C^\psi_{\beta,p};\sigma_{n-1}\big)_C\asymp\Big(\sum_{k=n}^{\infty}\psi^{p'}(k)k^{p'-2}\Big)^{1/p'}; \tag{49}$$
in the case $p=1$, if conditions (11) and (16) hold, then the following order equalities take place:
$$E_n(C^\psi_{\beta,1})_C\asymp\mathcal E\big(C^\psi_{\beta,1};\sigma_{n-1}\big)_C\asymp\begin{cases}\displaystyle\sum_{k=n}^{\infty}\psi(k),&\cos\frac{\beta\pi}{2}\ne0,\\[1mm] \psi(n)n,&\cos\frac{\beta\pi}{2}=0.\end{cases} \tag{50}$$

Proposition 2. Let $1\le p<\infty$, $g_{1/p}\in\mathfrak M_C$, $g_{1+1/p}\in GM^+\cap GA^+$, $\beta\in\mathbb R$ and $n\in\mathbb N$. In the case $1<p<\infty$, if conditions (10) and (14) hold, then
$$E_n(C^\psi_{\beta,p})_C\asymp\mathcal E\big(C^\psi_{\beta,p};\sigma_{n-1}\big)_C\asymp\psi(n)n^{1/p}; \tag{51}$$
in the case $p=1$, if conditions (11) and (16) hold, then
$$E_n(C^\psi_{\beta,1})_C\asymp\mathcal E\big(C^\psi_{\beta,1};\sigma_{n-1}\big)_C\asymp\psi(n)n. \tag{52}$$

References

1. Bushev D.M. Approximation of classes of continuous periodic functions by Zygmund sums. Preprint 84.56, Inst. Math. Acad. Sci. USSR, Kiev, 1984. (in Russian)
2. Dzyadyk V.K. On best approximation in classes of periodic functions defined by integrals of a linear combination of absolutely monotonic kernels. Math. Notes 1974, 16 (5), 1008–1014. doi: 10.1007/BF01149788 (translation of Mat. Zametki 1974, 16 (5), 691–701. (in Russian))
3. Hrabova U.Z., Serdyuk A.S. Order estimates for the best approximations and approximations by Fourier sums of the classes of (ψ, β)-differential functions. Ukrainian Math. J. 2013, 65 (9), 1319–1331. doi: 10.1007/s11253-014-0861-7 (translation of Ukrain. Mat. Zh. 2013, 65 (9), 1186–1197. (in Ukrainian))
4. Kamzolov A.I. Approximation of the functional classes $\tilde W^{\alpha}_p(L)$ in the spaces $L_s[-\pi,\pi]$ by the Fejér method. Math. Notes 1978, 23 (3), 185–189. doi: 10.1007/BF01651429 (translation of Mat. Zametki 1978, 23 (3), 343–349. (in Russian))
5. Kostich M.V. Approximation of functions from Weyl–Nagy classes by Zygmund averages. Ukrain. Math. J. 1998, 50 (5), 834–838. doi: 10.1007/BF02514336 (translation of Ukrain. Mat. Zh. 1998, 50 (5), 735–738. (in Ukrainian))
6. Nagy B. Sur une classe générale de procédés de sommation pour les séries de Fourier. Acta Math. Acad. Sci. Hungar. 1948, 1 (3), 14–62.
7. Nikol'skii S.M. Approximation of periodic functions by trigonometric polynomials. Tr. Mat. Inst. Akad. Nauk SSSR 1945, 15, 1–76. (in Russian)
8. Pinkus A. n-Widths in approximation theory. Springer-Verlag, Berlin, 1985.
9. Serdyuk A.S. On the best approximation of classes of convolutions of periodic functions by trigonometric polynomials. Ukrain. Math. J. 1995, 47 (9), 1435–1440. doi: 10.1007/BF01057518 (translation of Ukrain. Mat. Zh. 1995, 47 (9), 1261–1265. (in Ukrainian))
10. Serdyuk A.S.
Widths and best approximations for classes of convolutions of periodic functions. Ukrainian Math. J. 1999, 51 (5), 748–763. doi: 10.1007/BF02591709 (translation of Ukrain. Mat. Zh. 1999, 51 (5), 674–687. (in Ukrainian))
11. Serdyuk A.S. On best approximation in classes of convolutions of periodic functions. Theory of the approximation of functions and related problems, Pr. Inst. Mat. Nats. Akad. Nauk Ukr. Mat. Zastos. 2002, 35, 172–194. (in Ukrainian)
12. Serdyuk A.S. Best approximations and widths of classes of convolutions of periodic functions of high smoothness. Ukrain. Math. J. 2005, 57 (7), 1120–1148. doi: 10.1007/s11253-005-0251-2 (translation of Ukrain. Mat. Zh. 2005, 57 (7), 946–971. (in Ukrainian))
13. Serdyuk A.S., Hrabova U.Z. Estimates of uniform approximations by Zygmund sums on classes of convolutions of periodic functions. Approx. Theory of Functions and Related Problems: Proc. Inst. Math. NAS Ukr. 2013, 10 (1), 222–244. (in Ukrainian)
14. Serdyuk A.S., Sokolenko I.V. Uniform approximation of the classes of (ψ, β)-differentiable functions by linear methods. Extremal Problems of the Theory of Functions and Related Problems: Proc. Inst. Math. NAS Ukr. 2011, 8 (1), 181–189. (in Ukrainian)
15. Serdyuk A.S., Sokolenko I.V. Asymptotic estimates for the best uniform approximations of classes of convolutions of periodic functions of high smoothness. Ukr. Mat. Visn. 2020, 17 (3), 396–413. (in Ukrainian)
16. Serdyuk A.S., Stepaniuk T.A. Estimations of the best approximations for the classes of infinitely differentiable functions in uniform and integral metrics. Ukrainian Math. J. 2014, 66 (9), 1393–1407. doi: 10.1007/s11253-015-1018-z (translation of Ukrain. Mat. Zh. 2014, 66 (9), 1244–1256. (in Ukrainian))
17. Serdyuk A.S., Stepaniuk T.A. Order estimates for the best approximations and approximations by Fourier sums in the classes of convolutions of periodic functions of low smoothness in the uniform metric. Ukrain. Math. J. 2014, 66 (12), 1862–1882. doi: 10.1007/s11253-015-1056-6 (translation of Ukrain. Mat. Zh. 2014, 66 (12), 1658–1675. (in Ukrainian))
18. Serdyuk A.S., Stepaniuk T.A. Uniform approximations by Fourier sums in classes of generalized Poisson integrals. Analysis Mathematica 2019, 45, 201–236. doi: 10.1007/s10476-018-0310-1
19. Stepanets A.I. Methods of approximation theory. VSP, Utrecht, 2005.
20. Stepaniuk T.A. Estimates for the best approximations and approximation by Fourier sums of classes of convolutions of periodic functions of not high smoothness in integral metrics. Approx. Theory of Functions and Related Problems: Proc. Inst. Math. NAS Ukr. 2014, 11 (3), 241–269. (in Ukrainian)
21. Sz.-Nagy B. Über gewisse Extremalfragen bei transformierten trigonometrischen Entwicklungen. Ber. Math.-Phys. Akad. Wiss. Leipzig 1938, 90, 103–134.
22. Telyakovskii S.A. On the norms of trigonometric polynomials and approximation of differentiable functions by linear averages of their Fourier series. Tr. Mat. Inst. Akad. Nauk SSSR 1961, 62, 61–97. (in Russian)
23. Temlyakov V.N. Approximation of periodic functions. Computational Mathematics and Analysis Series, Nova Science Publishers, Inc., 1993.
24. Tikhomirov V.M. Some questions in approximation theory. Izdat. Moskov. Univ., Moscow, 1976. (in Russian)
25. Zygmund A. Smooth functions. Duke Math. J. 1945, 12, 47–76. doi: 10.1215/S0012-7094-45-01206-3
26. Zygmund A. Trigonometric series. Vol. 2 [Russian translation], Mir, Moscow, 1965. (in Russian)
Lecture 3: SVM dual, kernels and regression
C19 Machine Learning, Hilary 2015. A. Zisserman

• Primal and dual forms
• Linear separability revisited
• Feature maps
• Kernels for SVMs
• Regression
• Ridge regression
• Basis functions

SVM – review

• We have seen that for an SVM, learning a linear classifier $f(\mathbf x)=\mathbf w^\top\mathbf x+b$ is formulated as solving an optimization problem over $\mathbf w$:
$$\min_{\mathbf w\in\mathbb R^d}\ \|\mathbf w\|^2+C\sum_i^N\max\big(0,\,1-y_if(\mathbf x_i)\big).$$
• This quadratic optimization problem is known as the primal problem.
• Instead, the SVM can be formulated to learn a linear classifier
$$f(\mathbf x)=\sum_i^N\alpha_iy_i(\mathbf x_i^\top\mathbf x)+b$$
by solving an optimization problem over the $\alpha_i$.
• This is known as the dual problem, and we will look at the advantages of this formulation.

Sketch derivation of dual form

The Representer Theorem states that the solution $\mathbf w$ can always be written as a linear combination of the training data:
$$\mathbf w=\sum_{j=1}^N\alpha_jy_j\mathbf x_j$$
(proof: see example sheet). Now substitute for $\mathbf w$ in $f(\mathbf x)=\mathbf w^\top\mathbf x+b$:
$$f(\mathbf x)=\Big(\sum_{j=1}^N\alpha_jy_j\mathbf x_j\Big)^{\!\top}\mathbf x+b=\sum_{j=1}^N\alpha_jy_j\big(\mathbf x_j^\top\mathbf x\big)+b,$$
and for $\mathbf w$ in the cost function $\min_{\mathbf w}\|\mathbf w\|^2$ subject to $y_i(\mathbf w^\top\mathbf x_i+b)\ge1,\ \forall i$:
$$\|\mathbf w\|^2=\Big\{\sum_j\alpha_jy_j\mathbf x_j\Big\}^{\!\top}\Big\{\sum_k\alpha_ky_k\mathbf x_k\Big\}=\sum_{jk}\alpha_j\alpha_ky_jy_k(\mathbf x_j^\top\mathbf x_k).$$
Hence an equivalent optimization problem is over the $\alpha_j$:
$$\min_{\alpha_j}\ \sum_{jk}\alpha_j\alpha_ky_jy_k(\mathbf x_j^\top\mathbf x_k)\quad\text{subject to}\quad y_i\Big(\sum_{j=1}^N\alpha_jy_j(\mathbf x_j^\top\mathbf x_i)+b\Big)\ge1,\ \forall i,$$
and a few more steps are required to complete the derivation.

Primal and dual formulations

$N$ is the number of training points, and $d$ is the dimension of the feature vector $\mathbf x$.
Primal problem: for $\mathbf w\in\mathbb R^d$,
$$\min_{\mathbf w\in\mathbb R^d}\ \|\mathbf w\|^2+C\sum_i^N\max\big(0,\,1-y_if(\mathbf x_i)\big).$$
Dual problem: for $\boldsymbol\alpha\in\mathbb R^N$ (stated without proof):
$$\max_{\alpha_i\ge0}\ \sum_i\alpha_i-\frac12\sum_{jk}\alpha_j\alpha_ky_jy_k(\mathbf x_j^\top\mathbf x_k)\quad\text{subject to}\quad 0\le\alpha_i\le C\ \forall i,\ \text{and}\ \sum_i\alpha_iy_i=0.$$
• Need to learn $d$ parameters for the primal, and $N$ for the dual.
• If $N\ll d$ then it is more efficient to solve for $\boldsymbol\alpha$ than for $\mathbf w$.
• The dual form only involves $(\mathbf x_j^\top\mathbf x_k)$. We will return to why this is an advantage when we look at kernels.

Primal version of the classifier: $f(\mathbf x)=\mathbf w^\top\mathbf x+b$.
Dual version of the classifier: $f(\mathbf x)=\sum_i^N\alpha_iy_i(\mathbf x_i^\top\mathbf x)+b$.
At first sight the dual form appears to have the disadvantage of a K-NN classifier — it requires the training data points $\mathbf x_i$. However, many of the $\alpha_i$'s are zero. The ones that are non-zero define the support vectors $\mathbf x_i$.

[Figure: a linear SVM with separating hyperplane $\mathbf w^\top\mathbf x+b=0$ and support vectors marked; soft margin, C = 10.]

Handling data that is not linearly separable

• introduce slack variables:
$$\min_{\mathbf w\in\mathbb R^d,\ \xi_i\in\mathbb R^+}\ \|\mathbf w\|^2+C\sum_i^N\xi_i\quad\text{subject to}\quad y_i(\mathbf w^\top\mathbf x_i+b)\ge1-\xi_i\ \text{for}\ i=1\dots N$$
• but what if a linear classifier is simply not appropriate for the data?

Solution 1: use polar coordinates

$$\Phi:\begin{pmatrix}x_1\\x_2\end{pmatrix}\to\begin{pmatrix}r\\\theta\end{pmatrix},\qquad \mathbb R^2\to\mathbb R^2$$
• The data becomes linearly separable in polar coordinates.
• The map acts non-linearly in the original space.
[Figure: data separable by a circle in $(x_1,x_2)$ becomes linearly separable in $(r,\theta)$.]

Solution 2: map data to higher dimension

$$\Phi:\begin{pmatrix}x_1\\x_2\end{pmatrix}\to\begin{pmatrix}x_1^2\\x_2^2\\\sqrt2\,x_1x_2\end{pmatrix},\qquad \mathbb R^2\to\mathbb R^3$$
• The data is linearly separable in 3D.
• This means that the problem can still be solved by a linear classifier.

SVM classifiers in a transformed feature space

$$\Phi:\mathbf x\to\Phi(\mathbf x),\qquad \mathbb R^d\to\mathbb R^D$$
$\Phi(\mathbf x)$ is a feature map. Learn a classifier linear in $\mathbf w$ for $\mathbb R^D$: $f(\mathbf x)=\mathbf w^\top\Phi(\mathbf x)+b$.
Learning, for $\mathbf w\in\mathbb R^D$:
$$\min_{\mathbf w\in\mathbb R^D}\ \|\mathbf w\|^2+C\sum_i^N\max\big(0,\,1-y_if(\mathbf x_i)\big)$$
• Simply map $\mathbf x$ to $\Phi(\mathbf x)$ where the data is separable
• Solve for $\mathbf w$ in the high-dimensional space $\mathbb R^D$
• If $D\gg d$ then there are many more parameters to learn for $\mathbf w$. Can this be avoided?
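The key identity in the sketch derivation, that $\|\mathbf w\|^2$ collapses to the double sum $\sum_{jk}\alpha_j\alpha_ky_jy_k(\mathbf x_j^\top\mathbf x_k)$ once $\mathbf w=\sum_j\alpha_jy_j\mathbf x_j$, is easy to sanity-check numerically. A minimal sketch (not part of the lecture; the data, labels and dual coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 3
X = rng.normal(size=(N, d))          # training points x_j as rows
y = rng.choice([-1.0, 1.0], size=N)  # labels y_j
alpha = rng.uniform(size=N)          # arbitrary dual coefficients

w = (alpha * y) @ X                  # w = sum_j alpha_j y_j x_j
G = X @ X.T                          # Gram matrix of scalar products x_j^T x_k
quad = (alpha * y) @ G @ (alpha * y) # sum_{jk} alpha_j alpha_k y_j y_k (x_j^T x_k)
print(np.isclose(w @ w, quad))       # ||w||^2 equals the double sum: True
```

Note that `w` lives in the d-dimensional input space, while `quad` is computed entirely from the N x N Gram matrix; this is exactly why the dual only needs scalar products.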
Dual classifier in a transformed feature space

Classifier:
$$f(\mathbf x)=\sum_i^N\alpha_iy_i\,\mathbf x_i^\top\mathbf x+b\quad\longrightarrow\quad f(\mathbf x)=\sum_i^N\alpha_iy_i\,\Phi(\mathbf x_i)^\top\Phi(\mathbf x)+b$$
Learning:
$$\max_{\alpha_i\ge0}\sum_i\alpha_i-\frac12\sum_{jk}\alpha_j\alpha_ky_jy_k\,\mathbf x_j^\top\mathbf x_k\quad\longrightarrow\quad \max_{\alpha_i\ge0}\sum_i\alpha_i-\frac12\sum_{jk}\alpha_j\alpha_ky_jy_k\,\Phi(\mathbf x_j)^\top\Phi(\mathbf x_k)$$
subject to $0\le\alpha_i\le C\ \forall i$, and $\sum_i\alpha_iy_i=0$.

• Note that $\Phi(\mathbf x)$ only occurs in pairs $\Phi(\mathbf x_j)^\top\Phi(\mathbf x_i)$.
• Once the scalar products are computed, only the $N$-dimensional vector $\boldsymbol\alpha$ needs to be learnt; it is not necessary to learn in the $D$-dimensional space, as it is for the primal.
• Write $k(\mathbf x_j,\mathbf x_i)=\Phi(\mathbf x_j)^\top\Phi(\mathbf x_i)$. This is known as a kernel.

Classifier: $f(\mathbf x)=\sum_i^N\alpha_iy_i\,k(\mathbf x_i,\mathbf x)+b$.
Learning: $\max_{\alpha_i\ge0}\sum_i\alpha_i-\frac12\sum_{jk}\alpha_j\alpha_ky_jy_k\,k(\mathbf x_j,\mathbf x_k)$, subject to $0\le\alpha_i\le C\ \forall i$ and $\sum_i\alpha_iy_i=0$.

Special transformations

$$\Phi:\begin{pmatrix}x_1\\x_2\end{pmatrix}\to\begin{pmatrix}x_1^2\\x_2^2\\\sqrt2\,x_1x_2\end{pmatrix},\qquad \mathbb R^2\to\mathbb R^3$$
$$\Phi(\mathbf x)^\top\Phi(\mathbf z)=\big(x_1^2,\;x_2^2,\;\sqrt2\,x_1x_2\big)\begin{pmatrix}z_1^2\\z_2^2\\\sqrt2\,z_1z_2\end{pmatrix}=x_1^2z_1^2+x_2^2z_2^2+2x_1x_2z_1z_2=(x_1z_1+x_2z_2)^2=(\mathbf x^\top\mathbf z)^2$$

Kernel trick

• The classifier can be learnt and applied without explicitly computing $\Phi(\mathbf x)$.
• All that is required is the kernel $k(\mathbf x,\mathbf z)=(\mathbf x^\top\mathbf z)^2$.
• The complexity of learning depends on $N$ (typically it is $O(N^3)$), not on $D$.

Example kernels

• Linear kernels: $k(\mathbf x,\mathbf x')=\mathbf x^\top\mathbf x'$
• Polynomial kernels: $k(\mathbf x,\mathbf x')=(1+\mathbf x^\top\mathbf x')^d$ for any $d>0$ — contains all polynomial terms up to degree $d$
• Gaussian kernels: $k(\mathbf x,\mathbf x')=\exp(-\|\mathbf x-\mathbf x'\|^2/2\sigma^2)$ for $\sigma>0$ — infinite-dimensional feature space

SVM classifier with Gaussian kernel

$$f(\mathbf x)=\sum_i^N\alpha_iy_i\,k(\mathbf x_i,\mathbf x)+b,$$
with $N$ the size of the training data, the $\alpha_i$ weights (which may be zero) and the $\mathbf x_i$ support vectors. The radial basis function (RBF) SVM is
$$f(\mathbf x)=\sum_i^N\alpha_iy_i\exp\big(-\|\mathbf x-\mathbf x_i\|^2/2\sigma^2\big)+b.$$

RBF kernel SVM example

• The data is not linearly separable in the original feature space.
[Figures: decision boundary $f(\mathbf x)=0$ and margins $f(\mathbf x)=\pm1$ for $\sigma=1.0$ with $C=\infty$, $C=100$ and $C=10$: decreasing C gives a wider (soft) margin. For $C=\infty$ with $\sigma=1.0$, $0.25$ and $0.1$: decreasing sigma moves the classifier towards a nearest-neighbour classifier.]

Kernel trick – summary

• Classifiers can be learnt for high-dimensional feature spaces without actually having to map the points into the high-dimensional space.
• Data may be linearly separable in the high-dimensional space, but not linearly separable in the original feature space.
• Kernels can be used for an SVM because of the scalar product in the dual form, but they can also be used elsewhere — they are not tied to the SVM formalism.
• Kernels also apply to objects that are not vectors, e.g. $k(h,h')=\sum_k\min(h_k,h'_k)$ for histograms with bins $h_k,h'_k$.

Regression

• Suppose we are given a training set of $N$ observations $((\mathbf x_1,y_1),\dots,(\mathbf x_N,y_N))$ with $\mathbf x_i\in\mathbb R^d$, $y_i\in\mathbb R$.
• The regression problem is to estimate $f(\mathbf x)$ from this data such that $y_i=f(\mathbf x_i)$.

Learning by optimization

• As in the case of classification, learning a regressor can be formulated as an optimization: minimize, with respect to $f\in\mathcal F$,
$$\sum_{i=1}^N l\big(f(\mathbf x_i),y_i\big)+\lambda R(f)$$
(a loss function plus a regularization term).
• There is a choice of both loss functions and regularization, e.g.
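One way to reproduce the behaviour shown in the RBF figures (a wider margin as C decreases, a nearest-neighbour-like boundary as sigma shrinks) is with an off-the-shelf solver. The sketch below uses scikit-learn, which is an assumption of this example rather than something the lecture prescribes; note that SVC's `gamma` parameter corresponds to $1/(2\sigma^2)$ in the kernel above, and the ring-vs-disk data is an arbitrary stand-in for the lecture's dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Two-class data that is not linearly separable: a disk inside a ring.
r = np.concatenate([rng.uniform(0, 1, 50), rng.uniform(2, 3, 50)])
th = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[r * np.cos(th), r * np.sin(th)]
y = np.r_[np.ones(50), -np.ones(50)]

sigma, C = 1.0, 100.0
clf = SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=C).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("support vectors:", clf.support_.size, "of", len(X))
```

Re-running with smaller `C` or smaller `sigma` shows the two effects described in the slides: more support vectors with a softer margin, and increasingly local decision boundaries.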
squared loss, SVM "hinge-like" loss;
• squared regularizer, lasso regularizer.

Choice of regression function – non-linear basis functions

• The function for regression, $y(\mathbf x,\mathbf w)$, is a non-linear function of $\mathbf x$, but linear in $\mathbf w$:
$$f(\mathbf x,\mathbf w)=w_0+w_1\phi_1(\mathbf x)+w_2\phi_2(\mathbf x)+\dots+w_M\phi_M(\mathbf x)=\mathbf w^\top\Phi(\mathbf x)$$
• For example, for $x\in\mathbb R$, polynomial regression with $\phi_j(x)=x^j$:
$$f(x,\mathbf w)=w_0+w_1x+\dots+w_Mx^M=\sum_{j=0}^Mw_jx^j;$$
e.g. for $M=3$,
$$f(x,\mathbf w)=(w_0,w_1,w_2,w_3)\begin{pmatrix}1\\x\\x^2\\x^3\end{pmatrix}=\mathbf w^\top\Phi(x),\qquad \Phi:x\to\Phi(x),\ \mathbb R^1\to\mathbb R^4.$$

Least squares "ridge regression"

• Cost function — squared loss plus regularization:
$$\tilde E(\mathbf w)=\frac12\sum_{i=1}^N\{f(\mathbf x_i,\mathbf w)-y_i\}^2+\frac\lambda2\|\mathbf w\|^2$$
• Regression function for $x$ (1D): $f(x,\mathbf w)=w_0+w_1\phi_1(x)+\dots+w_M\phi_M(x)=\mathbf w^\top\Phi(x)$.
• NB: the squared loss arises in maximum likelihood estimation for the error model $y_i=\tilde y_i+n_i$, $n_i\sim\mathcal N(0,\sigma^2)$, where $y_i$ is the measured value and $\tilde y_i$ the true value.

Solving for the weights $\mathbf w$

Notation: write the target and regressed values as $N$-vectors,
$$\mathbf y=\begin{pmatrix}y_1\\y_2\\\vdots\\y_N\end{pmatrix},\qquad \mathbf f=\begin{pmatrix}\Phi(\mathbf x_1)^\top\mathbf w\\\Phi(\mathbf x_2)^\top\mathbf w\\\vdots\\\Phi(\mathbf x_N)^\top\mathbf w\end{pmatrix}=\Phi\mathbf w=\begin{bmatrix}1&\phi_1(\mathbf x_1)&\dots&\phi_M(\mathbf x_1)\\1&\phi_1(\mathbf x_2)&\dots&\phi_M(\mathbf x_2)\\\vdots&&&\vdots\\1&\phi_1(\mathbf x_N)&\dots&\phi_M(\mathbf x_N)\end{bmatrix}\begin{pmatrix}w_0\\w_1\\\vdots\\w_M\end{pmatrix};$$
e.g. for polynomial regression with basis functions up to $x^2$,
$$\Phi\mathbf w=\begin{bmatrix}1&x_1&x_1^2\\1&x_2&x_2^2\\\vdots&\vdots&\vdots\\1&x_N&x_N^2\end{bmatrix}\begin{pmatrix}w_0\\w_1\\w_2\end{pmatrix}.$$
$\Phi$ is an $N\times M$ design matrix. Then
$$\tilde E(\mathbf w)=\frac12\sum_{i=1}^N\big(y_i-\mathbf w^\top\Phi(\mathbf x_i)\big)^2+\frac\lambda2\|\mathbf w\|^2=\frac12(\mathbf y-\Phi\mathbf w)^2+\frac\lambda2\|\mathbf w\|^2.$$
Now compute where the derivative w.r.t. $\mathbf w$ is zero for the minimum:
$$\frac{d\tilde E(\mathbf w)}{d\mathbf w}=-\Phi^\top(\mathbf y-\Phi\mathbf w)+\lambda\mathbf w=0.$$
Hence
$$\big(\Phi^\top\Phi+\lambda I\big)\mathbf w=\Phi^\top\mathbf y,\qquad \mathbf w=\big(\Phi^\top\Phi+\lambda I\big)^{-1}\Phi^\top\mathbf y$$
(with $M$ basis functions and $N$ data points, assume $N>M$: the system is $M\times M$, built from an $M\times N$ matrix times an $N\times1$ vector).
• This shows that there is a unique solution.
• If $\lambda=0$ (no regularization), then $\mathbf w=(\Phi^\top\Phi)^{-1}\Phi^\top\mathbf y=\Phi^+\mathbf y$, where $\Phi^+$ is the pseudo-inverse of $\Phi$ (pinv in Matlab).
• Adding the term $\lambda I$ improves the conditioning of the inverse: even if $\Phi$ is not full rank, $(\Phi^\top\Phi+\lambda I)$ will be (for sufficiently large $\lambda$).
• As $\lambda\to\infty$, $\mathbf w\to\frac1\lambda\Phi^\top\mathbf y\to0$.
• Often the regularization is applied only to the inhomogeneous part of $\mathbf w$, i.e., to $\tilde{\mathbf w}$, where $\mathbf w=(w_0,\tilde{\mathbf w})$.

Since $\mathbf w=(\Phi^\top\Phi+\lambda I)^{-1}\Phi^\top\mathbf y$,
$$f(\mathbf x,\mathbf w)=\mathbf w^\top\Phi(\mathbf x)=\Phi(\mathbf x)^\top\big(\Phi^\top\Phi+\lambda I\big)^{-1}\Phi^\top\mathbf y=\mathbf b(\mathbf x)^\top\mathbf y,$$
so the output is a linear blend, $\mathbf b(\mathbf x)$, of the training values $\{y_i\}$.

Example 1: polynomial basis functions

[Figure: the ideal fit through the sample points.]
• The red curve is the true function (which is not a polynomial).
• The data points are samples from the curve with added noise in $y$.
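Before examining the effect of M and lambda in the plots below, the closed-form solution $\mathbf w=(\Phi^\top\Phi+\lambda I)^{-1}\Phi^\top\mathbf y$ just derived can be written directly. A minimal sketch, assuming a $\sin(2\pi x)$ target with Gaussian noise (the lecture's true curve is not specified, so this is an illustrative stand-in):

```python
import numpy as np

def ridge_fit(x, y, M, lam):
    """Solve w = (Phi^T Phi + lambda I)^{-1} Phi^T y for a degree-M
    polynomial basis Phi(x) = (1, x, ..., x^M)."""
    Phi = np.vander(x, M + 1, increasing=True)   # N x (M+1) design matrix
    A = Phi.T @ Phi + lam * np.eye(M + 1)
    return np.linalg.solve(A, Phi.T @ y)         # solve, rather than invert

def ridge_predict(x, w):
    return np.vander(x, len(w), increasing=True) @ w

# N = 9 noisy samples, M = 7, as in the figures that follow
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 9)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=9)
w = ridge_fit(x, y, M=7, lam=1e-3)
print(ridge_predict(np.array([0.25, 0.5]), w))
```

Sweeping `lam` from very small to very large reproduces the progression in the next figures: wiggly interpolation, a good fit, then over-smoothing towards zero.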
• There is a choice in both the degree, $M$, of the basis functions used, and in the strength of the regularization.
$$f(x,\mathbf w)=\sum_{j=0}^Mw_jx^j=\mathbf w^\top\Phi(x),$$
where $\mathbf w$ is an $(M+1)$-dimensional vector and $\Phi:x\to\Phi(x)$, $\mathbb R\to\mathbb R^{M+1}$.

[Figures: fits to N = 9 samples with M = 7 for lambda = 100, 0.001, 1e-10 and 1e-15.]
[Figures: the polynomial basis functions and the least-squares fits for M = 3 and M = 5.]

Example 2: Gaussian basis functions

• The red curve is the true function (which is not a polynomial).
• The data points are samples from the curve with added noise in $y$.
• The basis functions are centred on the training data ($N$ points).
• There is a choice in both the scale, sigma, of the basis functions used, and in the strength of the regularization.
$$f(x,\mathbf w)=\sum_{i=1}^Nw_ie^{-(x-x_i)^2/\sigma^2}=\mathbf w^\top\Phi(x),$$
where $\mathbf w$ is an $N$-vector and $\Phi:x\to\Phi(x)$, $\mathbb R\to\mathbb R^N$.

[Figures: fits to N = 9 samples with sigma = 0.334 for lambda = 100, 0.001, 1e-10 and 1e-15.]

Choosing lambda using a validation set

[Figures: training and validation error norms against log lambda, with the minimum validation error marking the chosen lambda; the corresponding validation-set fits; and the Gaussian basis functions for sigma = 0.1 and sigma = 0.334.]
(A code sketch of this model-selection loop follows after the summary below.)

Application: regressing face pose

• Estimate two face pose angles: yaw (around the Y axis) and pitch (around the X axis).
• Compute a HOG feature vector for each face region.
• Learn a regressor from the HOG vector to the two pose angles.

Summary and dual problem

So far we have considered the primal problem, where
$$f(\mathbf x,\mathbf w)=\sum_{i=1}^Mw_i\phi_i(\mathbf x)=\mathbf w^\top\Phi(\mathbf x)$$
and we wanted a solution for $\mathbf w\in\mathbb R^M$. As in the case of SVMs, we can also consider the dual problem, where
$$\mathbf w=\sum_{i=1}^Na_i\Phi(\mathbf x_i)\qquad\text{and}\qquad f(\mathbf x,\mathbf a)=\sum_i^Na_i\Phi(\mathbf x_i)^\top\Phi(\mathbf x),$$
and obtain a solution for $\mathbf a\in\mathbb R^N$.
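Here is the model-selection sketch promised above, combining Example 2's Gaussian basis with the validation-set procedure. The target curve, noise level and grid of lambda values are assumptions for illustration, not taken from the lecture:

```python
import numpy as np

rng = np.random.default_rng(2)
def target(x):
    return np.sin(2 * np.pi * x)     # assumed stand-in for the true curve

x_tr = np.linspace(0, 1, 9);  y_tr = target(x_tr) + 0.1 * rng.normal(size=9)
x_va = rng.uniform(0, 1, 20); y_va = target(x_va) + 0.1 * rng.normal(size=20)

sigma = 0.334
def design(x):
    # Phi_ij = exp(-(x_i - c_j)^2 / sigma^2), centres c_j = the training points
    return np.exp(-((x[:, None] - x_tr[None, :]) ** 2) / sigma**2)

best = None
for lam in 10.0 ** np.arange(-15, 3):
    Phi = design(x_tr)
    w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(x_tr)), Phi.T @ y_tr)
    err = np.linalg.norm(design(x_va) @ w - y_va)   # validation error norm
    if best is None or err < best[0]:
        best = (err, lam)
print("validation error %.3f at lambda = %g" % best)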
Again:
• there is a closed-form solution for $\mathbf a$;
• the solution involves the $N\times N$ Gram matrix $k(\mathbf x_i,\mathbf x_j)=\Phi(\mathbf x_i)^\top\Phi(\mathbf x_j)$;
• so we can use the kernel trick again to replace the scalar products.

Background reading and more

• Bishop, chapters 6 & 7 for kernels and SVMs
• Hastie et al., chapter 12
• Bishop, chapter 3 for regression
• More on the web page:
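The slides state that a closed-form dual solution exists without writing it down; the standard expression (an addition here, not a quote from the lecture) is $\mathbf a=(K+\lambda I)^{-1}\mathbf y$ with $K=\Phi\Phi^\top$, which follows from $\mathbf w=\Phi^\top\mathbf a$ and gives the same predictions as the primal solution:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, lam = 6, 3, 0.1
x = rng.uniform(0, 1, N)
y = rng.normal(size=N)
Phi = np.vander(x, M + 1, increasing=True)        # explicit feature map

# Primal: w = (Phi^T Phi + lam I)^{-1} Phi^T y    (an (M+1) x (M+1) solve)
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(M + 1), Phi.T @ y)

# Dual: a = (K + lam I)^{-1} y with K = Phi Phi^T (an N x N solve)
K = Phi @ Phi.T
a = np.linalg.solve(K + lam * np.eye(N), y)

x_test = np.array([0.3, 0.7])
Phi_t = np.vander(x_test, M + 1, increasing=True)
# f(x) = Phi(x)^T w  equals  sum_i a_i k(x_i, x) = Phi(x)^T Phi^T a
print(np.allclose(Phi_t @ w, Phi_t @ Phi.T @ a))  # True
```

As with the SVM, only the Gram matrix enters the dual, so `K` could be produced by any kernel function without ever forming `Phi`.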
Optional stopping theorem

A martingale's expected value at a stopping time equals its initial expected value. Not to be confused with optimal stopping.

In probability theory, the optional stopping theorem (or sometimes Doob's optional sampling theorem, for the American probabilist Joseph Doob) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies.

The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing.

Statement

A discrete-time version of the theorem is given below, with $\mathbb N_0$ denoting the set of natural numbers, including zero.

Let $X=(X_t)_{t\in\mathbb N_0}$ be a discrete-time martingale and $\tau$ a stopping time with values in $\mathbb N_0\cup\{\infty\}$, both with respect to a filtration $(\mathcal F_t)_{t\in\mathbb N_0}$. Assume that one of the following three conditions holds:

(a) The stopping time $\tau$ is almost surely bounded, i.e., there exists a constant $c\in\mathbb N$ such that $\tau\le c$ a.s.

(b) The stopping time $\tau$ has finite expectation and the conditional expectations of the absolute value of the martingale increments are almost surely bounded; more precisely, $\mathbb E[\tau]<\infty$ and there exists a constant $c$ such that $\mathbb E\big[|X_{t+1}-X_t|\,\big|\,\mathcal F_t\big]\le c$ almost surely on the event $\{\tau>t\}$ for all $t\in\mathbb N_0$.

(c) There exists a constant $c$ such that $|X_{t\wedge\tau}|\le c$ a.s.
for all $t\in\mathbb N_0$, where $\wedge$ denotes the minimum operator. Then $X_\tau$ is an almost surely well-defined random variable and
$$\mathbb E[X_\tau]=\mathbb E[X_0].$$
Similarly, if the stochastic process $X=(X_t)_{t\in\mathbb N_0}$ is a submartingale or a supermartingale and one of the above conditions holds, then
$$\mathbb E[X_\tau]\ge\mathbb E[X_0]$$
for a submartingale, and
$$\mathbb E[X_\tau]\le\mathbb E[X_0]$$
for a supermartingale.

Remark

Under condition (c) it is possible that $\tau=\infty$ happens with positive probability. On this event $X_\tau$ is defined as the almost surely existing pointwise limit of $(X_t)_{t\in\mathbb N_0}$; see the proof below for details.

Applications

The optional stopping theorem can be used to prove the impossibility of successful betting strategies for a gambler with a finite lifetime (which gives condition (a)) or a house limit on bets (condition (b)). Suppose that the gambler can wager up to $c$ dollars on a fair coin flip at times 1, 2, 3, etc., winning his wager if the coin comes up heads and losing it if the coin comes up tails. Suppose further that he can quit whenever he likes, but cannot predict the outcome of gambles that haven't happened yet. Then the gambler's fortune over time is a martingale, and the time $\tau$ at which he decides to quit (or goes broke and is forced to quit) is a stopping time. So the theorem says that $\mathbb E[X_\tau]=\mathbb E[X_0]$. In other words, the gambler leaves with the same amount of money on average as when he started. (The same result holds if the gambler, instead of having a house limit on individual bets, has a finite limit on his line of credit or how far in debt he may go, though this is easier to show with another version of the theorem.)

Suppose a random walk starting at $a\ge0$ goes up or down by one with equal probability on each step. Suppose further that the walk stops if it reaches 0 or $m\ge a$; the time at which this first occurs is a stopping time. If it is known that the expected time at which the walk ends is finite (say, from Markov chain theory), the optional stopping theorem predicts that the expected stop position is equal to the initial position $a$. Solving $a=pm+(1-p)\cdot0$ for the probability $p$ that the walk reaches $m$ before 0 gives $p=a/m$.

Now consider a random walk $X$ that starts at 0 and stops if it reaches $-m$ or $+m$, and use the martingale $Y_n=X_n^2-n$ from the examples section. If $\tau$ is the time at which $X$ first reaches $\pm m$, then $0=\mathbb E[Y_0]=\mathbb E[Y_\tau]=m^2-\mathbb E[\tau]$. This gives $\mathbb E[\tau]=m^2$.

Care must be taken, however, to ensure that one of the conditions of the theorem holds. For example, suppose the last example had instead used a 'one-sided' stopping time, so that stopping only occurred at $+m$, not at $-m$. The value of $X$ at this stopping time would therefore be $m$. Therefore, the expectation value $\mathbb E[X_\tau]$ must also be $m$, seemingly in violation of the theorem, which would give $\mathbb E[X_\tau]=0$. The failure of the optional stopping theorem shows that all three of the conditions fail.

Proof

Let $X^\tau$ denote the stopped process; it is also a martingale (or a submartingale or supermartingale, respectively).
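A small Monte-Carlo experiment makes both random-walk applications concrete. This simulation is illustrative and not part of the article; the values of $a$, $m$ and the trial count are arbitrary:

```python
import random

# Checks: a symmetric walk started at a and stopped at 0 or m hits m with
# probability a/m, and a walk from 0 stopped at -m or +m has E[tau] = m^2.
random.seed(0)

def run(start, lo, hi):
    """Run one walk until it exits (lo, hi); return (final position, steps)."""
    x, t = start, 0
    while lo < x < hi:
        x += random.choice((-1, 1))
        t += 1
    return x, t

a, m, trials = 3, 10, 20000
hits = sum(run(a, 0, m)[0] == m for _ in range(trials))
print("P(reach m before 0) ~", hits / trials, "(theory:", a / m, ")")

taus = [run(0, -m, m)[1] for _ in range(trials)]
print("E[tau] ~", sum(taus) / trials, "(theory:", m * m, ")")
```

Both empirical values converge to the theorem's predictions as the number of trials grows.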
Under condition (a) or (b), the random variable $X_\tau$ is well defined. Under condition (c) the stopped process $X^\tau$ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call $X_\tau$.

If condition (c) holds, then the stopped process $X^\tau$ is bounded by the constant random variable $M:=c$. Otherwise, writing the stopped process as
$$X_t^\tau=X_0+\sum_{s=0}^{(\tau-1)\wedge(t-1)}(X_{s+1}-X_s),\qquad t\in\mathbb N_0,$$
gives $|X_t^\tau|\le M$ for all $t\in\mathbb N_0$, where
$$M:=|X_0|+\sum_{s=0}^{\tau-1}|X_{s+1}-X_s|=|X_0|+\sum_{s=0}^{\infty}|X_{s+1}-X_s|\cdot\mathbf 1_{\{\tau>s\}}.$$
By the monotone convergence theorem,
$$\mathbb E[M]=\mathbb E[|X_0|]+\sum_{s=0}^{\infty}\mathbb E\big[|X_{s+1}-X_s|\cdot\mathbf 1_{\{\tau>s\}}\big].$$
If condition (a) holds, then this series only has a finite number of non-zero terms, hence $M$ is integrable.

If condition (b) holds, then we continue by inserting a conditional expectation and using that the event $\{\tau>s\}$ is known at time $s$ (note that $\tau$ is assumed to be a stopping time with respect to the filtration), hence
$$\mathbb E[M]=\mathbb E[|X_0|]+\sum_{s=0}^{\infty}\mathbb E\Big[\underbrace{\mathbb E\big[|X_{s+1}-X_s|\,\big|\,\mathcal F_s\big]\cdot\mathbf 1_{\{\tau>s\}}}_{\le\,c\,\mathbf 1_{\{\tau>s\}}\ \text{a.s. by (b)}}\Big]\le\mathbb E[|X_0|]+c\sum_{s=0}^{\infty}\mathbb P(\tau>s)=\mathbb E[|X_0|]+c\,\mathbb E[\tau]<\infty,$$
where a representation of the expected value of non-negative integer-valued random variables is used for the last equality.

Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable $M$. Since the stopped process $X^\tau$ converges almost surely to $X_\tau$, the dominated convergence theorem implies
$$\mathbb E[X_\tau]=\lim_{t\to\infty}\mathbb E[X_t^\tau].$$
By the martingale property of the stopped process,
$$\mathbb E[X_t^\tau]=\mathbb E[X_0],\qquad t\in\mathbb N_0,$$
hence
$$\mathbb E[X_\tau]=\mathbb E[X_0].$$
Similarly, if $X$ is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality.

References

Grimmett, Geoffrey R.; Stirzaker, David R. (2001). Probability and Random Processes (3rd ed.). Oxford University Press. pp. 491–495. ISBN 9780198572220.
Bhattacharya, Rabi; Waymire, Edward C. (2007). A Basic Course in Probability Theory. Springer. pp. 43–45. ISBN 978-0-387-71939-9.
33 VISIBILITY
Joseph O'Rourke

INTRODUCTION

In a geometric context, two objects are "visible" to each other if there is a line segment connecting them that does not cross any obstacles. Over 500 papers have been published on aspects of visibility in computational geometry in the last 40 years. The research can be broadly classified as primarily focused on combinatorial issues, or primarily focused on algorithms. We partition the combinatorial work into "art gallery theorems" (Section 33.1), illumination of convex sets (33.2), and research on visibility graphs (33.3); and the algorithmic work into that concerned with polygons (33.4), more general planar environments (33.5), paths (33.6), and mirror reflections (33.7). All of this work concerns visibility in two dimensions. Investigations in three dimensions, both combinatorial and algorithmic, are discussed in Section 33.8, and the final section (33.9) touches on visibility in R^d.

33.1 ART GALLERY THEOREMS

A typical "art gallery theorem" provides combinatorial bounds on the number of guards needed to visually cover a polygonal region P (the art gallery) defined by n vertices. Equivalently, one can imagine light bulbs instead of guards and require full direct-light illumination.

GLOSSARY

Guard: A point, a line segment, or a line—a source of visibility or illumination.
Vertex guard: A guard at a polygon vertex.
Point guard: A guard at an arbitrary point.
Interior visibility: A guard x ∈ P can see a point y ∈ P if the segment xy is nowhere exterior to P: xy ⊆ P.
Exterior visibility: A guard x can see a point y outside of P if the segment xy is nowhere interior to P; xy may intersect ∂P, the boundary of P.
Star polygon: A polygon visible from a single interior point.
Diagonal: A segment inside a polygon whose endpoints are vertices, and which otherwise does not touch ∂P.
Floodlight: A light that illuminates from the apex of a cone with aperture α.
Vertex floodlight: One whose apex is at a vertex (at most one per vertex).

MAIN RESULTS

The most general combinatorial results obtained to date are summarized in Table 33.1.1. Tight bounds, and ranges between lower and upper bounds, are listed for the minimum number of guards sufficient for all polygons with n vertices (and possibly h holes).

TABLE 33.1.1 Number of guards needed.

PROBLEM NAME | POLYGONS | INT/EXT | GUARD | NUMBER
Art gallery theorem | simple | interior | vertex | ⌊n/3⌋
Fortress problem | simple | exterior | point | ⌈n/3⌉
Prison yard problem | simple | int & ext | vertex | ⌈n/2⌉
Prison yard problem | orthogonal | int & ext | vertex | [⌈5n/16⌉, ⌊5n/12⌋+1]
Orthogonal polygons | simple orthogonal | interior | vertex | ⌊n/4⌋
Orthogonal with holes | orthogonal with h holes | interior | vertex | [⌊2n/7⌋, ⌊(17n−8)/52⌋]
Orthogonal with holes | orthogonal with h holes | interior | vertex | [⌊(n+h)/4⌋, ⌊(n+2h)/4⌋]
Polygons with holes | polygons with h holes | interior | point | ⌊(n+h)/3⌋

Of special note is the difficult orthogonal prison yard problem: How many vertex guards are needed to cover both the interior and the exterior of an orthogonal polygon? See Figure 33.1.1. An upper bound of ⌊5n/12⌋+2 was obtained by [HK96] via the following graph-coloring theorem: Every plane, bipartite, 2-connected graph has an even triangulation (all nodes have even degree), and therefore the resulting graph is 3-colorable.
This bound was subsequently improved to ⌊5n/12⌋+1 in [MP12].

FIGURE 33.1.1 A pyramid polygon with n = 24 vertices whose interior and exterior are covered by 8 guards. Repeating the pattern establishes a lower bound of 5n/16 + c on the orthogonal prison yard problem [HK93].

COVERS AND PARTITIONS

Each art gallery theorem above implies a cover result, a cover by star polygons. Many of the theorem proofs rely on particular partitions. For example, the orthogonal polygon result depends on the theorem that every orthogonal polygon may be partitioned via diagonals into convex quadrilaterals. Most cover problems are NP-hard, and finding a minimum guard set for a simple polygon is NP-complete. Approximation algorithms have only achieved O(log n) times the fewest guards [Gho10]. See Section 30.2 for more on covers and partitions.

EDGE GUARDS

A variation permits guards (mobile guards) to patrol segments, diagonals, or edges; equivalent is illumination by line segment/diagonal/edge light sources (fluorescent light bulbs). Here there are fewer results; see Table 33.1.2. Toussaint conjectures that the last line of this table should be ⌊n/4⌋ for sufficiently large n.

TABLE 33.1.2 Edge guards.

POLYGONS | GUARD | BOUNDS | SOURCE
Polygon | diagonal | ⌊n/4⌋ | [O'R83]
Orthogonal polygons | segment | ⌊(3n+4)/16⌋ | [Agg84, O'R87]
Orthogonal polygons with h holes | segment | ⌊(3n+4h+4)/16⌋ | [GHKS96]
Polygon (n > 11) | closed edge | [⌊n/4⌋, ⌊3n/10⌋+1] | [She94]
Polygon | open edge | [⌊n/3⌋, ⌊n/2⌋] | [TTW12]

33.1.1 1.5D TERRAIN GUARDING

A 1.5D terrain is an x-monotone chain of edges, guarded by points on the chain. Guarding such a terrain has application to placing communication devices to cover the terrain. Although known to be NP-hard [KK11], it required further work to find a polynomial discretization, thereby establishing its NP-completeness [FHKS16].

33.1.2 FLOODLIGHT ILLUMINATION

Urrutia introduced a class of questions involving guards with restricted vision, or, equivalently, illumination by floodlights: How many floodlights, each with aperture α, and with their apexes at distinct nonexterior points, are sufficient to cover any polygon of n vertices? One surprise is that ⌊n/3⌋ half-guards/π-floodlights suffice, although not when restricted to vertices. A second surprise is that, for any α < π, there is a polygon that cannot be illuminated by an α-floodlight at every vertex. See Table 33.1.3. A third surprise is that the best result on vertex π-floodlights employs pointed pseudotriangulations (cf. Chapter 5) in an essential way.

TABLE 33.1.3 Floodlights.

APEX | ALPHA | BOUNDS | SOURCE
Any point | [180°, 360°] | ⌊n/3⌋ | [Tót00]
Any point | [90°, 180°] | 2⌊n/3⌋ | [Tót00]
Any point | [45°, 60°) | [n−2, n−1] | [Tót03d]
Vertex | < 180° | not always possible | [ECOUX95]
Vertex | 180° | [9n/14−c, ⌊2n/3⌋−1] | [ST05]

33.2 ILLUMINATION OF PLANAR CONVEX SETS

A natural extension of exterior visibility is illumination of the plane in the presence of obstacles.
Here it is natural to use "illumination" in the same sense as "visibility." Under this model, results depend on whether light sources are permitted to lie on obstacle boundaries: ⌊2n/3⌋ lights are necessary and sufficient (for n > 5) if they may [O'R87], and ⌊2(n+1)/3⌋ if they may not [Tót02]. More work has been done on illuminating the boundary of the obstacles, under a stronger notion of illumination, corresponding to "clear visibility."

GLOSSARY

Illuminate: x illuminates y if xy does not include a point strictly interior to an obstacle, and does not cross a segment obstacle.
Cross: xy crosses segment s if they have exactly one point p in common, and p is in the relative interior of both xy and s.
Clearly illuminate: x clearly illuminates y if the open segment (x, y) does not include any point of an obstacle.
Compact: Closed and bounded in R^d.
Homothetic: Obtained by dilation and translation.
Isothetic: Sides parallel to orthogonal coordinate axes.

MAIN RESULTS

A third, even stronger notion of illumination is considered in Section 33.9 below. The main question that has been investigated is: How many point lights strictly exterior to a collection of n pairwise disjoint compact, convex objects in the plane are needed to clearly illuminate every object boundary point? Answers for a variety of restricted sets are shown in Table 33.2.1.

TABLE 33.2.1 Illuminating convex sets in the plane.

FAMILY | BOUNDS | SOURCE
Convex sets | 4n−7 | [Fej77]
Circular disks | 2n−2 | [Fej77]
Isothetic rectangles | [n−1, n+1] | [Urr00]
Homothetic triangles | [n, n+1] | [CRCU93]
Triangles | [n, ⌊(5n+1)/4⌋] | [Tót03b]
Segments (one side) | [4n/9−2, ⌊(n+1)/2⌋] | [CRC+95, Tót03c]
Segments (both sides) | ⌊4(n+1)/5⌋ | [Tót01]

The most interesting open problem here is to close the gap for triangles. Urrutia conjectures [Urr00] that n + c lights suffice for some constant c.

33.3 VISIBILITY GRAPHS

Whereas art gallery theorems seek to encapsulate an environment's visibility into one function of n, the study of visibility graphs endeavors to uncover the more fine-grained structure of visibility. The original impetus for their investigation came from pattern recognition, and its connection to shape continues to be one of its primary sources of motivation; see Chapter 54. Another application is graphics (Chapter 52): illumination and radiosity depend on 3D visibility relations (Section 33.8).

GLOSSARY

Visibility graph: A graph with a node for each object, and arcs between objects that can see one another.
Vertex visibility graph: The objects are the vertices of a simple polygon.
Endpoint visibility graph: The objects are the endpoints of line segments in the plane. See Figure 33.3.1(b).
Segment visibility graph: The objects are whole line segments in the plane, either open or closed.
Object visibility: Two objects A and B are visible to one another if there are points x ∈ A and y ∈ B such that x sees y.
Point visibility: Two points x and y can see one another if the segment xy is not "obstructed," where the meaning of "obstruction" depends on the problem.
ε-visibility: Lines of sight are finite-width beams of visibility.
Hamiltonian: A graph is Hamiltonian if there is a simple cycle that includes every node.
OBSTRUCTIONS TO VISIBILITY

For polygon vertices, x sees y if xy is nowhere exterior to the polygon, just as in art gallery visibility; this implies that polygon edges are part of the visibility graph. For segment endpoints, x sees y if the closed segment xy intersects the union of all the segments either in just the two endpoints, or in the entire closed segment. This disallows grazing contact with a segment, but includes the segments themselves in the graph.

GOALS

Four goals can be discerned in research on visibility graphs:
1. Characterization: asks for a precise delimiting of the class of graphs realizable by a certain class of geometric objects.
2. Recognition: asks for an algorithm to recognize when a graph is a visibility graph.

FIGURE 33.3.1 (a) A set of 6 pairwise disjoint line segments. (b) Their endpoint visibility graph G. (c) A Hamiltonian cycle in G.

3. Reconstruction: asks for an algorithm that will take a visibility graph as input, and output a geometric realization.
4. Counting: concerned with the number of visibility graphs under various restrictions [HN01].

POINT VISIBILITY GRAPHS

Given a set P of n points in the plane, visibility between x, y ∈ P may be blocked by a third point in P. Recognition of point visibility graphs is NP-hard [Roy16]; in fact, it is complete for the existential theory of the reals [CH17]. However, for planar graphs there is a complete characterization, and an O(n)-time recognition algorithm [GR15]. Pfender [Pfe08] constructed point visibility graphs of clique number 6 and arbitrarily high chromatic number. It was established in [PPVW12] that every visibility graph with minimum degree δ has vertex connectivity of at least δ/2 + 1, and if the number of collinear points is no more than 4, then G has connectivity of at least 2δ/3 + 1. This latter quantity is conjectured to hold without the collinearity restriction. Related Ramsey-type problems and results are surveyed in [PW10].

VERTEX VISIBILITY GRAPHS

A complete characterization of vertex visibility graphs of polygons has remained elusive, but progress has been made by:
1. Restricting the class of polygons: polynomial-time recognition and reconstruction algorithms for orthogonal staircase polygons have been obtained. See Figure 33.3.2.
2. Restricting the class of graphs: every 3-connected vertex visibility graph has a 3-clique ordering, i.e., an ordering of the vertices so that each vertex is part of a triangle composed of preceding vertices.
3. Adding information: assuming knowledge of the boundary Hamiltonian circuit, four necessary conditions have been established by Ghosh and others [Gho97], and conjectured to be sufficient.

FIGURE 33.3.2 A staircase polygon and its vertex visibility graph.

ENDPOINT VISIBILITY GRAPHS

A set of n pairwise disjoint line segments forms a noncrossing perfect matching on the 2n endpoints in the plane. For segment endpoint visibility graphs, there have been three foci (a brute-force construction of these graphs is sketched in the code example at the end of Section 33.4):
1. Are the graphs Hamiltonian? See Figure 33.3.1(c).
VERTEX VISIBILITY GRAPHS

A complete characterization of vertex visibility graphs of polygons has remained elusive, but progress has been made by:
1. Restricting the class of polygons: polynomial-time recognition and reconstruction algorithms for orthogonal staircase polygons have been obtained. See Figure 33.3.2.
2. Restricting the class of graphs: every 3-connected vertex visibility graph has a 3-clique ordering, i.e., an ordering of the vertices so that each vertex is part of a triangle composed of preceding vertices.
3. Adding information: assuming knowledge of the boundary Hamiltonian circuit, four necessary conditions have been established by Ghosh and others [Gho97], and conjectured to be sufficient.

FIGURE 33.3.2  A staircase polygon and its vertex visibility graph.

ENDPOINT VISIBILITY GRAPHS

A set of n pairwise disjoint line segments forms a noncrossing perfect matching on the 2n endpoints in the plane. For segment endpoint visibility graphs, there have been three foci:
1. Are the graphs Hamiltonian? See Figure 33.3.1c. Posed by Mirzaian, this was settled in the affirmative [HT03]: yes, there is always a Hamiltonian polygon (i.e., a noncrossing circuit) for pairwise disjoint line segments, not all lying on a line.
2. In the quest for generating a random noncrossing perfect matching, Aichholzer et al. [ABD+09] conjecture that any two such matchings are connected by a sequence of noncrossing perfect matchings in which consecutive matchings are compatible (the union of the two matchings is also noncrossing). Every matching on 4n vertices is known to have a compatible matching [IST13].
3. Size questions: there must be at least 5n − 4 edges [SE87], and at least 6n − 6 when no segment is a "chord" splitting the convex hull [GOH+02]; the smallest clique cover has size Ω(n²/log² n) [AAAS94].

SEGMENT VISIBILITY GRAPHS

Whole segment visibility graphs have been investigated most thoroughly under the restriction that the segments are all (say) vertical and visibility is horizontal. Such segments are often called bars. The visibility is usually required to be ϵ-visibility. Endpoints on the same horizontal line often play an important role here, as does the distinction between closed segments and intervals (which may or may not include their endpoints). There are several characterizations:
1. G is representable by segments, with no two endpoints on the same horizontal line, iff there is a planar embedding of G such that, for every interior k-face F, the induced subgraph of F has exactly 2k − 3 edges.
2. G is representable by segments, with endpoints on the same horizontal permitted, iff there is a planar embedding of G with all cutpoints on the exterior face.
3. Every 3-connected planar graph is representable by intervals.

OBSTACLE NUMBER

An interesting variant of visibility graphs has drawn considerable attention. Given a graph G, an obstacle representation of G is a mapping of its nodes to the plane such that edge (x, y) is in G if and only if the segment xy does not intersect any "obstacle." An obstacle is any connected subset of R². The obstacle number of G is the minimum number of obstacles in an obstacle representation of G. At least one obstacle is needed to represent any graph other than the complete graph. There are graphs with obstacle number Ω(n/(log log n)²) [DM13]. No upper bound better than O(n²) is known. When the obstacles are points and G is the empty graph on n vertices, this quantity is known as the blocking number b(n); see [Mat09, PW10]. It is conjectured that lim_{n→∞} b(n)/n = ∞, but the best bound is b(n) ≥ (25/8 − o(1))n [DPT09].

INVISIBILITY GRAPHS

For a set X ⊆ R^d, its invisibility graph I(X) has a vertex for each point in X, and an edge between two vertices u and v if the segment uv is not completely contained in X. The chromatic number χ(X) and clique number ω(X) of I(X) have been studied, primarily in the context of the covering number, the fewest convex sets whose union is X. It is clear that ω(X) ≤ χ(X), and it was conjectured in [MV99] that for planar sets X, there is no upper bound on χ as a function of ω. This conjecture was settled positively in [CKM+10].

OTHER VISIBILITY GRAPHS

The notion of a visibility graph can be extended to objects such as disjoint disks: each disk is a node, with an arc if there is a segment connecting them that avoids touching any other disk. Rappaport proved that the visibility graph of disjoint congruent disks is Hamiltonian [Rap03]; the sketch below shows the underlying geometric test.
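The obstruction test for one candidate sight line is elementary. Here is a minimal C++ sketch (illustrative, not from [Rap03]): a given segment between two disks is blocked exactly when some other disk comes within its radius of the segment.

```cpp
// Does the segment ab avoid the closed blocking disk centered at c with
// radius r? Illustrative sketch; the P type and names are not standard.
#include <algorithm>
#include <cmath>

struct P { double x, y; };

// Euclidean distance from point c to the closed segment ab.
double distToSegment(P a, P b, P c) {
    double vx = b.x - a.x, vy = b.y - a.y;   // segment direction
    double wx = c.x - a.x, wy = c.y - a.y;
    double len2 = vx * vx + vy * vy;
    // Parameter of the closest point on ab, clamped to the segment.
    double t = len2 > 0 ? std::clamp((wx * vx + wy * vy) / len2, 0.0, 1.0) : 0.0;
    double dx = a.x + t * vx - c.x, dy = a.y + t * vy - c.y;
    return std::hypot(dx, dy);
}

// Tangent contact counts as "touching" and so blocks visibility.
bool avoidsDisk(P a, P b, P c, double r) {
    return distToSegment(a, b, c) > r;
}
```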
Rectangle visibility graphs, which restrict visibility to vertical or horizontal lines of sight between disjoint rectangles, have been studied for their role in graph drawing (Chapter 55). A typical result is that any graph with maximum vertex degree 4 can be realized as a rectangle visibility graph [BDHS97].

OPEN PROBLEMS

Ghosh and Goswami list 44 open problems on visibility graphs in their survey [GG13]. Below we list just three.
1. Given a visibility graph G and a Hamiltonian circuit C, construct in polynomial time a simple polygon such that its vertex visibility graph is G, with C corresponding to the polygon's boundary.
2. Given a visibility graph G of a simple polygon P, find the Hamiltonian cycle that corresponds to the boundary of P.
3. Develop an algorithm to recognize whether a polygon vertex visibility graph is planar. Necessary and sufficient conditions are known [LC94].

33.4 ALGORITHMS FOR VISIBILITY IN A POLYGON

Designing algorithms to compute aspects of visibility in a polygon P was a major focus of the computational geometry community in the 1980s. For most of the basic problems, optimal algorithms were found, several depending on Chazelle's linear-time triangulation algorithm [Cha91]. See [Gho07] for a book-length survey.

GLOSSARY

Throughout, P is a simple polygon.
Kernel: The set of points in P that can see all of P. See Figure 33.4.4.
Point visibility polygon: The region visible from a point in P.
Segment visibility polygon: The region visible from a segment in P.

MAIN RESULTS

The main algorithms are listed in Table 33.4.1. We discuss two of these algorithms below to illustrate their flavor.

TABLE 33.4.1  Polygon visibility algorithms.

ALGORITHM TO COMPUTE           TIME COMPLEXITY  SOURCE
Kernel                         O(n)             [LP79]
Point visibility polygon       O(n)             [JS87]
Segment visibility polygon     O(n)             [GHL+87]
Shortest illuminating segment  O(n)             [DN94]
Vertex visibility graph        O(E)             [Her89]

VISIBILITY POLYGON ALGORITHM

Let x ∈ P be the visibility source. Lee's linear-time algorithm [JS87] processes the vertices of P in a single counterclockwise boundary traversal. At each step, a vertex is either pushed on or popped off a stack, or a wait event is processed. The latter occurs when the boundary at that point is invisible from x. At any stage, the stack represents the visible portion of the boundary processed so far. Although this algorithm is elementary in its tools, it has proved delicate to implement correctly.

VISIBILITY GRAPH ALGORITHM

In contrast, Hershberger's vertex visibility algorithm [Her89] uses sophisticated tools to achieve output-size sensitive time complexity O(E), where E is the number of edges of the graph. His algorithm exploits the intimate connection between shortest paths and visibility in polygons. It first computes the shortest path map (Chapter 31) in O(n) time for a vertex, and then systematically transforms this into the map of an adjacent vertex in time proportional to the number of changes. Repeating this achieves O(E) time overall. Most of the above algorithms have been parallelized; see, for example, [GSG92].
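To convey the flavor of these computations, here is a naive quadratic-time kernel computation in C++ (illustrative; not the linear-time algorithm of [LP79]). It uses the fact that for a counterclockwise simple polygon the kernel is the intersection of the half-planes to the left of its directed edges, so one can clip a large box by each edge's half-plane in turn; the Pt and Poly types are illustrative.

```cpp
// Naive O(n^2) kernel: clip a huge box by the left half-plane of every
// directed edge of a counterclockwise polygon P. The result is empty iff
// P is not star-shaped.
#include <vector>

struct Pt { double x, y; };
using Poly = std::vector<Pt>;

double side(Pt a, Pt b, Pt p) {            // >0 iff p is strictly left of ab
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

Pt lineIntersect(Pt a, Pt b, Pt p, Pt q) { // supporting lines assumed non-parallel
    double a1 = b.y - a.y, b1 = a.x - b.x, c1 = a1 * a.x + b1 * a.y;
    double a2 = q.y - p.y, b2 = p.x - q.x, c2 = a2 * p.x + b2 * p.y;
    double det = a1 * b2 - a2 * b1;
    return { (b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det };
}

// One Sutherland-Hodgman clipping step: keep the part of poly with side >= 0.
Poly clipLeft(const Poly& poly, Pt a, Pt b) {
    Poly out;
    for (size_t i = 0; i < poly.size(); ++i) {
        Pt cur = poly[i], nxt = poly[(i + 1) % poly.size()];
        bool inCur = side(a, b, cur) >= 0, inNxt = side(a, b, nxt) >= 0;
        if (inCur) out.push_back(cur);
        if (inCur != inNxt) out.push_back(lineIntersect(a, b, cur, nxt));
    }
    return out;
}

Poly kernel(const Poly& P) {
    Poly K = { {-1e9, -1e9}, {1e9, -1e9}, {1e9, 1e9}, {-1e9, 1e9} };
    for (size_t i = 0; i < P.size() && !K.empty(); ++i)
        K = clipLeft(K, P[i], P[(i + 1) % P.size()]);
    return K;
}
```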
33.5 ALGORITHMS FOR VISIBILITY AMONG OBSTACLES

The shortest path between two points in an environment of polygonal obstacles follows lines of sight between obstacle vertices. This has provided an impetus for developing efficient algorithms for constructing visibility regions and graphs in such settings. The obstacles most studied are noncrossing line segments, which can be joined end-to-end to form polygonal obstacles. Many of the questions mentioned in the previous section can be revisited for this environment. The major results are shown in Table 33.5.1; the first three are described in [O'R87]; the fourth is discussed below.

TABLE 33.5.1  Algorithms for visibility among obstacles.

ALGORITHM TO COMPUTE       TIME COMPLEXITY
Point visibility region    O(n log n)
Segment visibility region  Θ(n⁴)
Endpoint visibility graph  O(n²)
Endpoint visibility graph  O(n log n + E)

ENDPOINT VISIBILITY GRAPH

The largest effort has concentrated on constructing the endpoint visibility graph. Worst-case optimal algorithms were first discovered by constructing the line arrangement dual to the endpoints in O(n²) time. Since many visibility graphs have fewer than a quadratic number of edges, an output-size sensitive algorithm was a significant improvement: O(n log n + E), where E is the number of edges of the graph [GM91]. This was further improved to O(n + h log h + E) for h polygonal obstacles with a total of n vertices [DW15]. A polygon with n vertices and h holes can be preprocessed into a data structure with space complexity O(min(|E|, nh) + n) in time O(E + n log n) that can report the visibility polygon V(q) of a query point q in time O(|V(q)| log n + h) [IK09].

33.6 VISIBILITY PATHS

A fruitful idea was introduced to visibility research in the mid-1980s: the notion of "link distance" between two points, which represents the smallest number of mutually visible relay stations needed to communicate from one point to another; see Section 31.3. A related notion called "watchman tours" was introduced a bit later, mixing shortest paths and visibility problems, and employing many of the concepts developed for link-path problems (Section 31.4).

33.7 MIRROR REFLECTIONS

GLOSSARY

Light ray reflection: A light ray reflects from an interior point of a mirror with reflected angle equal to incident angle; a ray that hits a mirror endpoint is absorbed.
Mirror polygon: A polygon all of whose edges are mirrors reflecting light rays.
Periodic light ray: A ray that reflects from a collection of mirrors and, after a finite number of reflections, rejoins its path (and thenceforth repeats that path).
Trapped light ray: One that reflects forever, and so never "reaches" infinity.

Klee asked whether every polygonal room whose walls are mirrors (a mirror polygon) is illuminable from every interior point [Kle69, KW91]. Tokarsky answered no by constructing rooms that leave one point dark when the light source is located at a particular spot [Tok95]. Complementing Tokarsky's result, it is now known that if P is a rational polygon (angles rational multiples of π), then for every x there are at most finitely many dark points y in P [LMW16].
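Simulating such rays needs only the reflection law in the glossary above, d′ = d − 2(d·n̂)n̂ for a mirror with unit normal n̂. The C++ sketch below (types and names illustrative, not from the Handbook) computes one bounce; tracing a periodic or trapped ray would iterate this together with a ray/segment intersection step.

```cpp
// Reflect a ray's direction d off a mirror given by its direction vector:
// angle of incidence equals angle of reflection.
#include <cmath>

struct Vec { double x, y; };

Vec reflect(Vec d, Vec mirrorDir) {
    // Unit normal of the mirror: its direction rotated by 90 degrees.
    double len = std::hypot(mirrorDir.x, mirrorDir.y);
    Vec n{ -mirrorDir.y / len, mirrorDir.x / len };
    double dn = d.x * n.x + d.y * n.y;              // component along the normal
    return { d.x - 2 * dn * n.x, d.y - 2 * dn * n.y };
}
```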
However, a second question of Klee remains open: Is every mirror polygon illuminable from some interior point?

FIGURE 33.7.1  100 mirror disks fail to trap 10 rays from a point source (near the center) [OP01].

The behavior of light reflecting in a polygon is complex. Aronov et al. [ADD+98] proved that after k reflections, the boundary of the illuminated region has combinatorial complexity O(n^{2k}), with a matching lower bound for any fixed k. Even determining whether every triangle supports a periodic ray is unresolved; see [HH00]. Pach asked whether a finite set of disjoint circular mirrors can trap all the rays from a point light source [Ste96]. See Figure 33.7.1. This and many other related questions [OP01] remain open.

33.8 VISIBILITY IN THREE DIMENSIONS

Research on visibility in three dimensions (3D) has concentrated on three topics: hidden surface removal, polyhedral terrains, and various 3D visibility graphs.

33.8.1 HIDDEN SURFACE REMOVAL

"Hidden surface removal" is one of the key problems in computer graphics (Chapter 52), and has been the focus of intense research for two decades. The typical problem instance is a collection of (planar) polygons in space, from which the view from z = ∞ must be constructed. Traditionally, hidden-surface algorithms have been classified as either image-space algorithms, exploiting the ultimate need to compute visible colors for image pixels, or object-space algorithms, which perform exact computations on object polygons. We only discuss the latter.

The complexity of the output scene can be quadratic in the number of input vertices n. A worst-case optimal Θ(n²) algorithm can be achieved by projecting the lines containing each polygon edge to a plane and constructing the resulting arrangement of lines [Dév86, McK87]. More recent work has focused on obtaining output-size sensitive algorithms, whose time complexity depends on the number of vertices k in the output scene (the complexity of the visibility map, i.e., the "wire-frame" projection of the scene), which is often less than quadratic in n. See Table 33.8.1 for selected results. A notable example is based on careful construction of "visibility maps," which leads, e.g., to a complexity of O((n + k) log² n) for performing hidden surface removal on nonintersecting spheres.

TABLE 33.8.1  Hidden-surface algorithm complexities.

ENVIRONMENT                      COMPLEXITY                       SOURCE
Isothetic rectangles             O((n + k) log n)                 [BO92]
Polyhedral terrain               O((n + k) log n log log n)       [RS88]
Nonintersecting polyhedra        O(n√k log n)                     [SO92]
                                 O(n^{1+ϵ}√k)                     [BHO+94]
                                 O(n^{2/3+ϵ}k^{2/3} + n^{1+ϵ})    [AM93]
Arbitrary intersecting spheres   O(n^{2+ϵ})                       [AS00]
Nonintersecting spheres          O(k + n^{3/2} log n)             [SO92]
Restricted-intersecting spheres  O((n + k) log² n)                [KOS92]

33.8.2 BINARY SPACE PARTITION TREES

Binary Space Partition (BSP) trees are a popular method of implementing the basic painter's algorithm, which displays objects back-to-front to obtain proper occlusion of front-most surfaces.
A BSP recursively partitions R^d into open convex sets by hyperplanes. A BSP for a set S of n line segments in R² is a partition such that all the open regions corresponding to leaf nodes of the tree are empty of points from S: all the segments in S lie along the boundaries of the regions. An example is shown in Figure 33.8.1. In general, a BSP for S will "cut up" the segments in S, in the sense that a particular s ∈ S will not lie in the boundary of a single leaf region. In the figure, partitions 1 and 2 both cut segments, but partition 3 does not. An attractive feature of BSPs is that they are easy to construct: in R³, select a polygon, partition all objects by the plane containing it, and recurse (a sketch for segments in the plane follows Table 33.8.2). Bounding the size (number of leaves) of BSP trees has been a challenge. The long-standing conjecture that O(n) size is achievable in R² was shown to be false. See Table 33.8.2 for selected results.

FIGURE 33.8.1  A binary space partition tree for 3 segments.

TABLE 33.8.2  BSP complexities.

DIM  CLASS               BOUND                    SOURCE
2    segments            O(n log n)               [PY90]
2    isothetic           Θ(n)                     [PY92]
2    fat                 Θ(n)                     [BGO97]
2    segments            Θ(n log n / log log n)   [Tót03a, Tót11]
3    polyhedra           O(n²)                    [PY90]
3    polyhedra           Ω(n²)                    [Cha84]
3    isothetic           Θ(n^{3/2})               [PY92]
3    fat orthog. rects.  O(n log⁸ n)              [Tót08]
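Here is a minimal auto-partition sketch in C++ for segments in the plane (illustrative only; the bounds in Table 33.8.2 refer to much more careful splitter choices). The first segment's supporting line splits the remaining segments into left and right sets, cutting any segment that straddles the line; segments collinear with a splitter, which a real BSP would store at the node, are lumped into one side for brevity.

```cpp
// Auto-partition BSP for line segments in the plane: each node's splitter
// is one of the input segments; straddling segments are cut at the line.
#include <memory>
#include <vector>

struct Pt  { double x, y; };
struct Seg { Pt a, b; };

struct BSPNode {
    Seg splitter;
    std::unique_ptr<BSPNode> left, right;
};

double sideOf(const Seg& s, Pt p) {   // >0 left of line(s), <0 right, 0 on it
    return (s.b.x - s.a.x) * (p.y - s.a.y) - (s.b.y - s.a.y) * (p.x - s.a.x);
}

Pt cutPoint(const Seg& line, const Seg& s) {   // s straddles line(line)
    double da = sideOf(line, s.a), db = sideOf(line, s.b);
    double t = da / (da - db);                 // parameter of the crossing
    return { s.a.x + t * (s.b.x - s.a.x), s.a.y + t * (s.b.y - s.a.y) };
}

std::unique_ptr<BSPNode> buildBSP(std::vector<Seg> segs) {
    if (segs.empty()) return nullptr;
    auto node = std::make_unique<BSPNode>();
    node->splitter = segs[0];
    std::vector<Seg> L, R;
    for (size_t i = 1; i < segs.size(); ++i) {
        double da = sideOf(segs[0], segs[i].a), db = sideOf(segs[0], segs[i].b);
        if (da <= 0 && db <= 0)      R.push_back(segs[i]);
        else if (da >= 0 && db >= 0) L.push_back(segs[i]);
        else {                       // straddles: cut into two pieces
            Pt m = cutPoint(segs[0], segs[i]);
            (da > 0 ? L : R).push_back({segs[i].a, m});
            (db > 0 ? L : R).push_back({m, segs[i].b});
        }
    }
    node->left  = buildBSP(std::move(L));
    node->right = buildBSP(std::move(R));
    return node;
}
```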
33.8.3 POLYHEDRAL TERRAINS

Polyhedral terrains are an important special class of 3D surfaces, arising in a variety of applications, most notably geographic information systems (Chapter 59).

GLOSSARY

Polyhedral terrain: A polyhedral surface that intersects every vertical line in at most a single point.
Perspective view: A view from a point.
Orthographic view: A view from infinity (parallel lines of sight).
Ray-shooting query: A query asking which terrain face is first hit by a ray shooting in a given direction from a given point. (See Chapter 41.)
α(n): The inverse Ackermann function (nearly a constant). See Section 28.10.

COMBINATORIAL BOUNDS

Several almost-tight bounds on the maximum number of combinatorially different views of a terrain have been obtained, as listed in Table 33.8.3.

TABLE 33.8.3  Bounds for polyhedral terrains.

VIEW TYPE       BOUND           SOURCE
Along vertical  O(n²2^{α(n)})   [CS89]
Orthographic    O(n^{5+ϵ})      [AS94]
Perspective     O(n^{8+ϵ})      [AS94]

Bose et al. established that ⌊n/2⌋ vertex guards are sometimes necessary and always sufficient to guard a polyhedral terrain of n vertices [BSTZ97, BKL96].

ALGORITHMS

Algorithms seek to exploit the terrain constraints to improve on the same computations for general polyhedra:
1. To compute the orthographic view from above the terrain: time O((k + n) log n log log n), where k is the output size [RS88].
2. To preprocess for O(log n) ray-shooting queries for rays with origin on a vertical line [BDEG94].

33.8.4 3D VISIBILITY GRAPHS

GLOSSARY

Aspect graph: A graph with a node for each combinatorially distinct view of a collection of polyhedra, with two nodes connected by an arc if the views can be reached directly from one another by a continuous movement of the viewpoint.
Isothetic: Edges parallel to Cartesian coordinate axes.
Box visibility graph: A graph realizable by disjoint isothetic boxes in 3D with orthogonal visibility.
Kn: The complete graph on n nodes.

There have been three primary motivations for studying visibility graphs of objects in three dimensions.
1. Computer graphics: Useful for accelerating interactive "walkthroughs" of complex polyhedral scenes [TS91], and for radiosity computations [TH93]. See Chapter 52.
2. Computer vision: "Aspect graphs" are used to aid image recognition. The maximum number of nodes in an aspect graph for a polyhedron of n vertices depends on both convexity and the type of view. See Table 33.8.4. Note that the nonconvex bounds are significantly larger than those for terrains.
3. Combinatorics: It has been shown that K22 is realizable by disjoint isothetic rectangles in "2½D" with vertical visibility (all rectangles are parallel to the xy-plane), but that K56 (and therefore all larger complete graphs) cannot be so represented [BEF+93]. It is known that K42 is a box visibility graph [BJMO94] but that K184 is not [FM99].

TABLE 33.8.4  Combinatorial complexity of visibility graphs.

CONVEXITY             ORTHOGRAPHIC  PERSPECTIVE  SOURCE
Convex polyhedron     Θ(n²)         Θ(n³)        [PD90]
Nonconvex polyhedron  Θ(n⁶)         Θ(n⁹)        [GCS91]

33.9 PENETRATING ILLUMINATION OF CONVEX BODIES

A rich vein of problems was initiated by Hadwiger, Levi, Gohberg, and Markus; see [MS99] for the complex history. The problems employ a different notion of exterior illumination, which could be called penetrating illumination (or perhaps "stabbing"), and focuses on a single convex body in R^d.

GLOSSARY

Penetrating illumination: An exterior point x penetratingly illuminates a point y on the boundary ∂K of an object K if the ray from x through y has a nonempty intersection with the interior int K of K.
Direction illumination: A point y ∈ ∂K is illuminated from direction v if the ray from the exterior through y with direction v has a nonempty intersection with int K.
Affine symmetry: An object in R³ has affine symmetry if it is unchanged after reflection through a point, reflection in a plane, or rotation about a line by angle 2π/n, n = 2, 3, . . ..

The central problem may be stated: What is the minimum number of exterior points sufficient to penetratingly illuminate any compact, convex body K in R^d? The problem is completely solved only in 2D: 4 lights are needed for a parallelogram, and 3 for all other convex bodies. In 3D it is known that 8 lights are needed for a parallelepiped (Figure 33.9.1), and conjectured that 7 suffice for all other convex bodies. Bezdek proved that 8 lights suffice for any 3-polytope with an affine symmetry [Bez93]. Lassak proved that no more than 20 lights are needed for any compact, convex body in 3D [Bol81].

FIGURE 33.9.1  A parallelepiped requires 2³ = 8 lights for penetrating illumination of its boundary.
One reason for the interest in this problem is its connection to other problems, particularly covering problems. Define:

I₀(K): the minimum number of points sufficient to penetratingly illuminate K.
I∞(K): the minimum number of directions sufficient to direction-illuminate K.
H(K): the minimum number of smaller homothetic copies of K that cover K.
i(K): the minimum number of copies of int K that cover K.

Remarkably,

I₀(K) = I∞(K) = H(K) = i(K),

as established by Boltjanski, Hadwiger, and Soltan; see again [MS99]. Several have conjectured that these quantities are ≤ 2^d for compact, convex bodies in R^d, with equality only for the d-parallelotope. The conjecture has been established only for special classes of bodies in 3 and higher dimensions, e.g., [Bol01, Bez11].

33.10 SOURCES AND RELATED MATERIAL

SURVEYS

All results not given an explicit reference above may be traced in these surveys.
[O'R87]: A monograph devoted to art gallery theorems and visibility algorithms.
[She92]: A survey of art gallery theorems and visibility graphs, updating [O'R87].
[O'R92]: A short update to [She92].
[Urr00]: Art gallery results, updating [She92].
[GG13]: Survey of open problems on visibility graphs.
[O'R93]: Survey of visibility graph results.
[Gho07]: Survey of visibility algorithms in R².
[MSD00]: Survey of link-distance algorithms.
[Mur99]: A Ph.D. thesis on hidden-surface removal algorithms.
[Tót05]: Survey of binary space partitions.
[MS99, Bez06, BK16]: Surveys of illumination of convex bodies.

RELATED CHAPTERS

Chapter 29: Triangulations and mesh generation
Chapter 30: Polygons
Chapter 31: Shortest paths and networks
Chapter 41: Ray shooting and lines in space
Chapter 42: Geometric intersection
Chapter 52: Computer graphics
Chapter 54: Pattern recognition
Chapter 59: Geographic information systems

REFERENCES

[AAAS94] P.K. Agarwal, N. Alon, B. Aronov, and S. Suri. Can visibility graphs be represented compactly? Discrete Comput. Geom., 12:347–365, 1994.
[ABD+09] O. Aichholzer, S. Bereg, A. Dumitrescu, A. García, C. Huemer, F. Hurtado, M. Kano, A. Márquez, D. Rappaport, S. Smorodinsky, D.L. Souvaine, J. Urrutia, and D.R. Wood. Compatible geometric matchings. Comput. Geom., 42:617–626, 2009.
[ADD+98] B. Aronov, A.R. Davis, T.K. Dey, S.P. Pal, and D.C. Prasad. Visibility with multiple reflections. Discrete Comput. Geom., 20:61–78, 1998.
[Agg84] A. Aggarwal. The Art Gallery Problem: Its Variations, Applications, and Algorithmic Aspects. Ph.D. thesis, Dept. of Comput. Sci., Johns Hopkins Univ., Baltimore, 1984.
[AM93] P.K. Agarwal and J. Matoušek. Ray shooting and parametric search. SIAM J. Comput., 22:794–806, 1993.
[AS94] P.K. Agarwal and M. Sharir. On the number of views of polyhedral terrains. Discrete Comput. Geom., 12:177–182, 1994.
[AS00] P.K. Agarwal and M. Sharir. Pipes, cigars, and kreplach: The union of Minkowski sums in three dimensions. Discrete Comput. Geom., 24:645–685, 2000.
[BDEG94] M. Bern, D.P. Dobkin, D. Eppstein, and R. Grossman. Visibility with a moving point of view. Algorithmica, 11:360–378, 1994.
[BDHS97] P. Bose, A.M. Dean, J.P. Hutchinson, and T.C. Shermer. On rectangle visibility graphs. Proc. Graph Drawing, vol. 1190 of LNCS, pages 25–35, Springer, Berlin, 1997.
[BEF+93] P. Bose, H. Everett, S.P. Fekete, M.E. Houle, A. Lubiw, H. Meijer, K. Romanik, T.C. Shermer, S. Whitesides, and C. Zelle. On a visibility representation for graphs in three dimensions. J. Graph Algorithms Appl., 2:1–16, 1998.
[Bez93] K. Bezdek. Hadwiger-Levi's covering problem revisited. In J. Pach, editor, New Trends in Discrete and Computational Geometry, vol. 10 of Algorithms Combin., pages 199–233, Springer-Verlag, Berlin, 1993.
[Bez06] K. Bezdek. The illumination conjecture and its extensions. Period. Math. Hungar., 53:59–69, 2006.
[Bez11] K. Bezdek. The illumination conjecture for spindle convex bodies. Proc. Steklov Inst. Math., 275:169–176, 2011.
[BGO97] M. de Berg, M. de Groot, and M.H. Overmars. New results on binary space partitions in the plane. Comput. Geom., 8:317–333, 1997.
[BHO+94] M. de Berg, D. Halperin, M.H. Overmars, J. Snoeyink, and M. van Kreveld. Efficient ray shooting and hidden surface removal. Algorithmica, 12:30–53, 1994.
[BJMO94] P. Bose, A. Josefczyk, J. Miller, and J. O'Rourke. K42 is a box visibility graph. In Snapshots in Comput. Geom., pages 88–91, Univ. Saskatchewan, 1994.
[BK16] K. Bezdek and M.A. Khan. The geometry of homothetic covering and illumination. Preprint, arXiv:1602.06040v2, 2016.
[BKL96] P. Bose, D.G. Kirkpatrick, and Z. Li. Efficient algorithms for guarding or illuminating the surface of a polyhedral terrain. Proc. 8th Canad. Conf. Comput. Geom., pages 217–222, 1996.
[BO92] M. de Berg and M.H. Overmars. Hidden surface removal for c-oriented polyhedra. Comput. Geom., 1:247–268, 1992.
[Bol81] V. Boltjansky. Combinatorial geometry. Algebra Topol. Geom., 19:209–274, 1981. In Russian. Cited in [MS99].
[Bol01] V. Boltjansky. Solution of the illumination problem for bodies with md M = 2. Discrete Comput. Geom., 26:527–541, 2001.
[BSTZ97] P. Bose, T.C. Shermer, G.T. Toussaint, and B. Zhu. Guarding polyhedral terrains. Comput. Geom., 7:173–185, 1997.
[CH17] J. Cardinal and U. Hoffmann. Recognition and complexity of point visibility graphs. Discrete Comput. Geom., 57:164–178, 2017.
[Cha84] B. Chazelle. Convex partitions of polyhedra: A lower bound and worst-case optimal algorithm. SIAM J. Comput., 13:488–507, 1984.
[Cha91] B. Chazelle. Triangulating a simple polygon in linear time. Discrete Comput. Geom., 6:485–524, 1991.
[CKM+10] J. Cibulka, J. Kynčl, V. Mészáros, R. Stolař, and P. Valtr. On three parameters of invisibility graphs. In Internat. Comput. Combinatorics Conf., pages 192–198, Springer, 2010.
[CRCU93] J. Czyzowicz, E. Rivera-Campo, and J. Urrutia. Illuminating rectangles and triangles in the plane. J. Combin. Theory Ser. B, 57:1–17, 1993.
[CRC+95] J. Czyzowicz, E. Rivera-Campo, J. Urrutia, and J. Zaks. On illuminating line segments in the plane. Discrete Math., 137:147–153, 1995.
[CS89] R. Cole and M. Sharir. Visibility problems for polyhedral terrains. J. Symbolic Comput., 7:11–30, 1989.
[DW15] D.Z. Chen and H. Wang. A new algorithm for computing visibility graphs of polygonal obstacles in the plane. J. Comput. Geom., 6:316–345, 2015.
[Dév86] F. Dévai. Quadratic bounds for hidden line elimination. In Proc. 2nd Sympos. Comput. Geom., pages 269–275, ACM Press, 1986.
[DM13] V. Dujmović and P. Morin. On obstacle numbers. Electron. J. Combin., 22:P3, 2013.
[DN94] G. Das and G. Narasimhan. Optimal linear-time algorithm for the shortest illuminating line segment. In Proc. 10th Sympos. Comput. Geom., pages 259–266, ACM Press, 1994.
[DPT09] A. Dumitrescu, J. Pach, and G. Tóth. A note on blocking visibility between points. Geombinatorics, 19:67–73, 2009.
[ECOUX95] V. Estivill-Castro, J. O'Rourke, J. Urrutia, and D. Xu. Illumination of polygons with vertex floodlights. Inform. Process. Lett., 56:9–13, 1995.
[Fej77] L. Fejes Tóth. Illumination of convex discs. Acta Math. Acad. Sci. Hungar., 29:355–360, 1977.
[FHKS16] S. Friedrichs, M. Hemmer, J. King, and C. Schmidt. The continuous 1.5D terrain guarding problem: Discretization, optimal solutions, and PTAS. J. Comput. Geom., 7:256–284, 2016.
[FM99] S.P. Fekete and H. Meijer. Rectangle and box visibility graphs in 3D. Internat. J. Comput. Geom. Appl., 9:1–27, 1999.
[GCS91] Z. Gigus, J.F. Canny, and R. Seidel. Efficiently computing and representing aspect graphs of polyhedral objects. IEEE Trans. Pattern Anal. Mach. Intell., 13:542–551, 1991.
[GG13] S.K. Ghosh and P.P. Goswami. Unsolved problems in visibility graphs of points, segments, and polygons. ACM Comput. Surv., 46:22, 2013.
[GHKS96] E. Győri, F. Hoffmann, K. Kriegel, and T.C. Shermer. Generalized guarding and partitioning for rectilinear polygons. Comput. Geom., 6:21–44, 1996.
[GHL+87] L.J. Guibas, J. Hershberger, D. Leven, M. Sharir, and R.E. Tarjan. Linear-time algorithms for visibility and shortest path problems inside triangulated simple polygons. Algorithmica, 2:209–233, 1987.
[Gho97] S.K. Ghosh. On recognizing and characterizing visibility graphs of simple polygons. Discrete Comput. Geom., 17:143–162, 1997.
[Gho07] S.K. Ghosh. Visibility Algorithms in the Plane. Cambridge Univ. Press, 2007.
[Gho10] S.K. Ghosh. Approximation algorithms for art gallery problems in polygons. Discrete Appl. Math., 158:718–722, 2010.
[GM91] S.K. Ghosh and D.M. Mount. An output-sensitive algorithm for computing visibility graphs. SIAM J. Comput., 20:888–910, 1991.
[GOH+02] A. García-Olaverri, F. Hurtado, M. Noy, and J. Tejel. On the minimum size of visibility graphs. Inform. Process. Lett., 81:223–230, 2002.
[GR15] S.K. Ghosh and B. Roy. Some results on point visibility graphs. Theor. Comput. Sci., 575:17–32, 2015.
[GSG92] M.T. Goodrich, S. Shauck, and S. Guha. Parallel methods for visibility and shortest path problems in simple polygons. Algorithmica, 8:461–486, 1992.
[Her89] J. Hershberger. An optimal visibility graph algorithm for triangulated simple polygons. Algorithmica, 4:141–155, 1989.
[HH00] L. Halbeisen and N. Hungerbühler. On periodic billiard trajectories in obtuse triangles. SIAM Rev., 42:657–670, 2000.
[HK93] F. Hoffmann and K. Kriegel. A graph coloring result and its consequences for some guarding problems. In Proc. 4th Internat. Sympos. Algorithms Comput., vol. 762 of LNCS, pages 78–87, Springer-Verlag, Berlin, 1993.
[HK96] F. Hoffmann and K. Kriegel. A graph coloring result and its consequences for some guarding problems. SIAM J. Discrete Math., 9:210–224, 1996.
[HN01] F. Hurtado and M. Noy. On the number of visibility graphs of simple polygons. Discrete Math., 232:139–144, 2001.
[HT03] M. Hoffmann and C.D. Tóth. Segment endpoint visibility graphs are Hamiltonian. Comput. Geom., 26:47–68, 2003.
[IK09] R. Inkulu and S. Kapoor. Visibility queries in a polygonal region. Comput. Geom., 42:852–864, 2009.
[IST13] M. Ishaque, D.L. Souvaine, and C.D. Tóth. Disjoint compatible geometric matchings. Discrete Comput. Geom., 49:89–131, 2013.
[JS87] B. Joe and R.B. Simpson. Correction to Lee's visibility polygon algorithm. BIT, 27:458–473, 1987.
[KK11] J. King and E. Krohn. Terrain guarding is NP-hard. SIAM J. Comput., 40:1316–1339, 2011.
[Kle69] V. Klee. Is every polygonal region illuminable from some point? Amer. Math. Monthly, 76:180, 1969.
[KOS92] M.J. Katz, M.H. Overmars, and M. Sharir. Efficient hidden surface removal for objects with small union size. Comput. Geom., 2:223–234, 1992.
[KW91] V. Klee and S. Wagon. Old and New Unsolved Problems in Plane Geometry. Math. Assoc. Amer., 1991.
[LC94] S.-Y. Lin and C. Chen. Planar visibility graphs. In Proc. 6th Canad. Conf. Comput. Geom., pages 30–35, 1994.
[LMW16] S. Lelièvre, T. Monteil, and B. Weiss. Everything is illuminated. Geom. Topol., 20:1737–1762, 2016.
[LP79] D.T. Lee and F.P. Preparata. An optimal algorithm for finding the kernel of a polygon. J. Assoc. Comput. Mach., 26:415–421, 1979.
[Mat09] J. Matoušek. Blocking visibility for points in general position. Discrete Comput. Geom., 42:219–223, 2009.
[MV99] J. Matoušek and P. Valtr. On visibility and covering by convex sets. Israel J. Math., 113:341–379, 1999.
[McK87] M. McKenna. Worst-case optimal hidden-surface removal. ACM Trans. Graph., 6:19–28, 1987.
[MP12] T.S. Michael and V. Pinciu. Guarding orthogonal prison yards: An upper bound. Congr. Numerantium, 211:57–64, 2012.
[MS99] H. Martini and V. Soltan. Combinatorial problems on the illumination of convex bodies. Aequationes Math., 57:121–152, 1999.
[MSD00] A. Maheshwari, J.-R. Sack, and H.N. Djidjev. Link distance problems. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 519–558, Elsevier, Amsterdam, 2000.
[Mur99] T.M. Murali. Efficient Hidden-Surface Removal in Theory and in Practice. Ph.D. thesis, Brown University, Providence, 1999.
[O'R83] J. O'Rourke. Galleries need fewer mobile guards: A variation on Chvátal's theorem. Geom. Dedicata, 14:273–283, 1983.
[O'R87] J. O'Rourke. Art Gallery Theorems and Algorithms. Internat. Series Monographs Computer Science. Oxford University Press, New York, 1987.
[O'R92] J. O'Rourke. Computational geometry column 15. Internat. J. Comput. Geom. Appl., 2:215–217, 1992. Also in SIGACT News, 23:2, 1992.
[O'R93] J. O'Rourke. Computational geometry column 18. Internat. J. Comput. Geom. Appl., 3:107–113, 1993. Also in SIGACT News, 24:1:20–25, 1993.
[OP01] J. O'Rourke and O. Petrovici. Narrowing light rays with mirrors. In Proc. 13th Canad. Conf. Comput. Geom., pages 137–140, 2001.
[PD90] H. Plantinga and C.R. Dyer. Visibility, occlusion, and the aspect graph. Internat. J. Comput. Vision, 5:137–160, 1990.
[Pfe08] F. Pfender. Visibility graphs of point sets in the plane. Discrete Comput. Geom., 39:455–459, 2008.
[PPVW12] M.S. Payne, A. Pór, P. Valtr, and D.R. Wood. On the connectivity of visibility graphs. Discrete Comput. Geom., 48:669–681, 2012.
[PW10] A. Pór and D.R. Wood. On visibility and blockers. J. Comput. Geom., 1:29–40, 2010.
[PY90] M.S. Paterson and F.F. Yao. Efficient binary space partitions for hidden-surface removal and solid modeling. Discrete Comput. Geom., 5:485–503, 1990.
[PY92] M.S. Paterson and F.F. Yao. Optimal binary space partitions for orthogonal objects. J. Algorithms, 13:99–113, 1992.
[Rap03] D. Rappaport. The visibility graph of congruent discs is Hamiltonian. Internat. J. Comput. Geom. Appl., 25:257–265, 2003.
[Roy16] B. Roy. Point visibility graph recognition is NP-hard. Internat. J. Comput. Geom. Appl., 26:1–32, 2016.
[RS88] J.H. Reif and S. Sen. An efficient output-sensitive hidden-surface removal algorithm and its parallelization. In Proc. 4th Sympos. Comput. Geom., pages 193–200, ACM Press, 1988.
[SE87] X. Shen and H. Edelsbrunner. A tight lower bound on the size of visibility graphs. Inform. Process. Lett., 26:61–64, 1987.
[She94] T.C. Shermer. A tight bound on the combinatorial edge guarding problem. In Snapshots in Comput. Geom., pages 191–223, Univ. Saskatchewan, 1994.
[She92] T.C. Shermer. Recent results in art galleries. Proc. IEEE, 80:1384–1399, 1992.
[SO92] M. Sharir and M.H. Overmars. A simple output-sensitive algorithm for hidden-surface removal. ACM Trans. Graph., 11:1–11, 1992.
[ST05] B. Speckmann and C.D. Tóth. Allocating vertex π-guards in simple polygons via pseudo-triangulations. Discrete Comput. Geom., 33:345–364, 2005.
[Ste96] I. Stewart. Mathematical recreations. Sci. Amer., 275:100–103, 1996. Includes light in circular forest problem due to J. Pach.
[TH93] S. Teller and P. Hanrahan. Global visibility algorithms for illumination computations. In Proc. ACM Conf. SIGGRAPH 93, pages 239–246, 1993.
[Tok95] G.W. Tokarsky. Polygonal rooms not illuminable from every point. Amer. Math. Monthly, 102:867–879, 1995.
[Tót00] C.D. Tóth. Art gallery problem with guards whose range of vision is 180°. Comput. Geom., 17:121–134, 2000.
[Tót01] C.D. Tóth. Illuminating both sides of line segments. In J. Akiyama, M. Kano, and M. Urabe, editors, Discrete and Computational Geometry, vol. 2098 of LNCS, pages 370–380, Springer, Berlin, 2001.
[Tót02] C.D. Tóth. Illumination in the presence of opaque line segments in the plane. Comput. Geom., 21:193–204, 2002.
[Tót03a] C.D. Tóth. A note on binary plane partitions. Discrete Comput. Geom., 30:3–16, 2003.
[Tót03b] C.D. Tóth. Guarding disjoint triangles and claws in the plane. Comput. Geom., 25:51–65, 2003.
[Tót03c] C.D. Tóth. Illuminating disjoint line segments in the plane. Discrete Comput. Geom., 30:489–505, 2003.
[Tót03d] C.D. Tóth. Illumination of polygons by 45°-floodlights. Discrete Math., 265:251–260, 2003.
[Tót05] C.D. Tóth. Binary space partitions: Recent developments. In J.E. Goodman, J. Pach, and E. Welzl, editors, Combinatorial and Computational Geometry, vol. 52 of MSRI Publications, pages 529–556, Cambridge University Press, 2005.
[Tót08] C.D. Tóth. Binary space partitions for axis-aligned fat rectangles. SIAM J. Comput., 38:429–447, 2008.
[Tót11] C.D. Tóth. Binary plane partitions for disjoint line segments. Discrete Comput. Geom., 45:617–646, 2011.
[TS91] S.J. Teller and C.H. Séquin. Visibility preprocessing for interactive walkthroughs. Comput. Graph., 25:61–69, 1991.
[TTW12] C.D. Tóth, G. Toussaint, and A. Winslow. Open guard edges and edge guards in simple polygons. In A. Márquez, P. Ramos, and J. Urrutia, editors, Computational Geometry, vol. 7579 of LNCS, pages 54–64, Springer, Berlin, 2012.
[Urr00] J. Urrutia. Art gallery and illumination problems. In J.-R. Sack and J. Urrutia, editors, Handbook of Computational Geometry, pages 973–1027, Elsevier, Amsterdam, 2000.
260
Microsoft C/C++ language conformance by Visual Studio version

Standards conformance for the Microsoft C/C++ compiler in Visual Studio (MSVC) is a work in progress. Here's a summary of ISO Standard C and C++ language and library conformance by Visual Studio version. Each C++ compiler and standard library feature name has a link to the ISO Standard C++ proposal paper that describes the feature, when one is available at publication time. The Supported column lists the Visual Studio version in which support for the feature first appeared.

For details on conformance improvements, see C++ conformance improvements in Visual Studio. For a list of other changes, see What's New for Visual C++ in Visual Studio. For conformance changes in earlier versions, see Visual C++ change history and Visual C++ What's New 2003 through 2015. For current news from the C++ team, visit the C++ team blog.

Note: There are no binary breaking changes between Visual Studio 2015, 2017, 2019, and 2022. For more information, see C++ binary compatibility between Visual Studio versions.

C++ compiler features

| Feature | Supported |
| --- | --- |
| C++03/11 Core language features | Supported |
| Everything else | VS 2015 A |
| Two-phase name lookup | VS 2017 15.7 B |
| N2634 Expression SFINAE | VS 2017 15.7 |
| N1653 C99 preprocessor | VS 2019 16.6 C |
| C++03/11 Core language features (Defect reports) | Supported |
| C++14 Core language features | Supported |
| N3323 Tweaked wording for contextual conversions | VS 2013 |
| N3472 Binary literals | VS 2015 |
| N3638 auto and decltype(auto) return types | VS 2015 |
| N3648 init-captures | VS 2015 |
| N3649 Generic lambdas | VS 2015 |
| N3760 [[deprecated]] attribute | VS 2015 |
| N3778 Sized deallocation | VS 2015 |
| N3781 Digit separators | VS 2015 |
| N3651 Variable templates | VS 2015 Update 2 |
| N3652 Extended constexpr | VS 2017 15.0 |
| N3653 Default member initializers for aggregates | VS 2017 15.0 |
| C++17 Core language features | Supported |
| N4086 Removing trigraphs | VS 2010 14 |
| N3922 New rules for auto with braced-init-lists | VS 2015 14 |
| N4051 typename in template template-parameters | VS 2015 14 |
| N4266 Attributes for namespaces and enumerators | VS 2015 14 |
| N4267 u8 character literals | VS 2015 14 |
| N4230 Nested namespace definitions | VS 2015.3 17 |
| N3928 Terse static_assert | VS 2017 15.0 17 |
| P0184R0 Generalized range-based for-loops | VS 2017 15.0 14 |
| P0188R1 [[fallthrough]] attribute | VS 2017 15.0 17 |
| P0001R1 Removing the register keyword | VS 2017 15.3 17 |
| P0002R1 Removing operator++ for bool | VS 2017 15.3 17 |
| P0018R3 Capturing this by value | VS 2017 15.3 17 |
| P0028R4 Using attribute namespaces without repetition | VS 2017 15.3 17 |
| P0061R1 __has_include | VS 2017 15.3 14 |
| P0138R2 Direct-list-init of fixed enums from integers | VS 2017 15.3 17 |
| P0170R1 constexpr lambdas | VS 2017 15.3 17 |
| P0189R1 [[nodiscard]] attribute | VS 2017 15.3 17 |
| P0212R1 [[maybe_unused]] attribute | VS 2017 15.3 17 |
| P0217R3 Structured bindings | VS 2017 15.3 17 |
| P0292R2 constexpr if-statements | VS 2017 15.3 D |
| P0305R1 Selection statements with initializers | VS 2017 15.3 17 |
| P1381R1 Reference capture of structured bindings | VS 2017 15.3 17 |
| P0245R1 Hexfloat literals | VS 2017 15.5 17 |
| N4268 Allowing more non-type template args | VS 2017 15.5 17 |
| N4295 Fold expressions | VS 2017 15.5 17 |
| P0003R5 Removing dynamic-exception-specifications | VS 2017 15.5 17 |
| P0012R1 Adding noexcept to the type system | VS 2017 15.5 17 |
| P0035R4 Over-aligned dynamic memory allocation | VS 2017 15.5 17 |
| P0386R2 Inline variables | VS 2017 15.5 17 |
| P0522R0 Matching template template-parameters to compatible arguments | VS 2017 15.5 17 |
| P0036R0 Removing some empty unary folds | VS 2017 15.5 17 |
| N4261 Fixing qualification conversions | VS 2017 15.7 17 |
| P0017R1 Extended aggregate initialization | VS 2017 15.7 17 |
| P0091R3 Template argument deduction for class templates, P0512R0 Class template argument deduction issues | VS 2017 15.7 17 |
| P0127R2 Declaring non-type template parameters with auto | VS 2017 15.7 17 |
| P0135R1 Guaranteed copy elision | VS 2017 15.6 |
| P0136R1 Rewording inheriting constructors | VS 2017 15.7 17 |
| P0137R1 std::launder | VS 2017 15.7 17 |
| P0145R3 Refining expression evaluation order, P0400R0 Order of evaluation of function arguments | VS 2017 15.7 17 |
| P0195R2 Pack expansions in using-declarations | VS 2017 15.7 17 |
| P0283R2 Ignoring unrecognized attributes | VS 2015 14 |
| C++17 Core language features (Defect reports) | Supported |
| P0702R1 Fixing class template argument deduction for initializer-list ctors | VS 2017 15.7 17 |
| P0961R1 Relaxing the structured bindings customization point finding rules | VS 2019 16.0 17 |
| P0969R0 Allowing structured bindings to accessible members | VS 2019 16.0 17 |
| P0588R1 Simplifying implicit lambda capture | VS 2019 16.4 17 |
| P1771R1 [[nodiscard]] for constructors | VS 2019 16.4 17 |
| P1825R0 Merged wording for P0527R1 and P1155R3, more implicit moves | VS 2019 16.4 17 |
| P0929R2 Checking for abstract class types | VS 2019 16.5 17 |
| P0962R1 Relaxing the range-for loop customization point finding rules | VS 2019 16.5 17 |
| P0859R0 CWG 1581: When are constexpr member functions defined | Partial in VS 2019 16.7 E, Full in VS 2022 17.1 |
| P1009R2 Array size deduction in new-expressions | VS 2019 16.7 17 |
| P1286R2 Contra CWG DR1778 | VS 2019 16.8 17 |
| C++20 Core language features | Supported |
| P0641R2 const mismatch with defaulted copy constructor | VS 2015 14 |
| P0704R1 Fixing const lvalue ref-qualified pointers to members | VS 2015 14 |
| P1041R4 Make char16_t/char32_t string literals be UTF-16/32 | VS 2015 14 |
| P1330R0 Changing the active member of a union inside constexpr | VS 2017 15.0 14 |
| P0972R0 noexcept For <chrono> zero(), min(), max() | VS 2017 15.7 14 |
| P0515R3 Three-way (spaceship) comparison operator <=> | VS 2019 16.0 20 |
| P0941R2 Feature-test macros | VS 2019 16.0 14 |
| P1008R1 Prohibiting aggregates with user-declared constructors | VS 2019 16.0 20 |
| P0329R4 Designated initialization | VS 2019 16.1 20 |
| P0846R0 ADL and function templates that are not visible | VS 2019 16.1 20 |
| P0409R2 Allowing lambda-capture [=, this] | VS 2019 16.2 20 |
| P0428R2 Familiar template syntax for generic lambdas | VS 2019 16.2 20 |
| P0624R2 Default constructible and assignable stateless lambdas | VS 2019 16.2 20 |
| P0780R2 Allowing pack expansion in lambda init-capture | VS 2019 16.2 20 |
| P0806R2 Deprecate implicit capture of this via [=] | VS 2019 16.2 20 |
| P1120R0 Consistency improvements for <=> and other comparison operators | VS 2019 16.2 20 |
| P1185R2 <=> != == | VS 2019 16.2 20 |
| P0734R0 Concepts | VS 2019 16.3 20 |
| P0857R0 Fixing functionality gaps in constraints | VS 2019 16.3 20 |
| P1084R2 Today's return-type-requirements are insufficient | VS 2019 16.3 20 |
| P0892R2 Conditional explicit | VS 2019 16.4 20 |
| P1091R3 Extending structured bindings to be more like variable declarations | VS 2019 16.4 20 |
| P1099R5 Using enum | VS 2019 16.4 20 |
| P1186R3 When do you actually use <=> | VS 2019 16.4 20 |
| P1630R1 Spaceship needs a tune-up | VS 2019 16.4 20 |
| P0306R4 Adding __VA_OPT__ for comma omission and comma deletion | VS 2019 16.5. To provide better backward compatibility, __VA_OPT__ is enabled under /Zc:preprocessor across all language versions. |
| P0614R1 Range-based for-loops with initializers | VS 2019 16.5 20 |
| P0683R1 Default member initializers for bit-fields | VS 2019 16.5 20 |
| P1002R1 try-catch blocks in constexpr functions | VS 2019 16.5 20 |
| P1161R3 Deprecate uses of the comma operator in subscripting expressions | VS 2019 16.5 20 |
| P1301R4 [[nodiscard("reason")]] | VS 2019 16.5 20 |
| P1703R1 Recognizing header unit imports requires full preprocessing | VS 2019 16.5 20 |
| P1946R0 Allow defaulting comparisons by value | VS 2019 16.5 20 |
| P0479R5 [[likely]] and [[unlikely]] attributes | VS 2019 16.6 20 |
| P0692R1 Relaxing access checking on specializations | VS 2019 16.6 14 |
| P0732R2 Class types in non-type template parameters | VS 2019 16.6 20 |
| P1139R2 Address wording issues related to ISO 10646 | VS 2019 16.6 14 |
| P1907R1 Inconsistencies with non-type template parameters | VS 2019 16.6 20 |
| P1971R0 US053: Mandate the return type for return_void and return_value to be void | VS 2019 16.6 20 |
| P1971R0 US065: Apply Coroutines issue 24 from P0664R8 | VS 2019 16.6 20 |
| P1979R0 Resolution to US086 | VS 2019 16.6 20 |
| P0388R4 Permit conversions to arrays of unknown bound | VS 2019 16.7 20 |
| P0466R5 Layout-compatibility and Pointer-interconvertibility Traits | VS 2019 16.7 20 |
| P0722R3 Efficient sized delete for variable sized classes | VS 2019 16.7 20 |
| P1094R2 Nested inline namespaces | VS 2019 16.7 20 |
| P1152R4 Deprecating volatile | VS 2019 16.7 20 |
| P1331R2 Permitting trivial default initialization in constexpr contexts | VS 2019 16.7 20 |
| P1358R0 2310: Type completeness and derived-to-base pointer conversions | VS 2019 16.7 20 |
| P1452R2 On the non-uniform semantics of return-type-requirements | VS 2019 16.7 20 |
| P1616R1 Using unconstrained TTPs with constrained templates | VS 2019 16.7 20 |
| P1814R0 CTAD for alias templates | VS 2019 16.7 20 |
| P1816R0 CTAD for aggregates | VS 2019 16.7 20 |
| P1957R1 Converting from T to bool should be considered narrowing (re: US 212) | VS 2019 16.7 DR |
| P1968R0 CWG 2282: Consistency with mismatched aligned/non-over-aligned allocation/deallocation functions | VS 2019 16.7 20 |
| P1969R0 CWG 2280: Matching a usual deallocation function with placement new | VS 2019 16.7 20 |
| P1969R0 CWG 2382: Array allocation overhead for non-allocating placement new | VS 2019 16.7 20 |
| P1969R0 CWG 2441: Inline function parameters | VS 2019 16.7 20 |
| P1971R0 US052: Non-executed return statements in coroutines | VS 2019 16.7 20 |
| P1972R0 US105: Check satisfaction of constraints for non-templates when forming pointer to function | VS 2019 16.7 20 |
| P1980R0 CA096: Declaration matching for non-dependent requires-clauses | VS 2019 16.7 20 |
| P2082R1 Fixing CTAD for aggregates | VS 2019 16.7 20 |
| P2085R0 Consistent defaulted comparisons | VS 2019 16.7 20 |
| P2103R0 US033: Allow "import" inside linkage-specifications | VS 2019 16.7 20 |
| P2107R0 US064: Copy semantics of coroutine parameters | VS 2019 16.7 20 |
| P0912R5 Coroutines | VS 2019 16.8 20 |
| P1103R3 Modules | VS 2019 16.8 20 |
| P0315R4 Allowing lambdas in unevaluated contexts | VS 2019 16.8 20 |
| P0848R3 Conditionally trivial special member functions | VS 2019 16.8 20 |
| P0960R3 Allow initializing aggregates from a parenthesized list of values | VS 2019 16.8 20 |
| P1766R1 Mitigating minor modules maladies | VS 2019 16.8 20 |
| P1811R0 Relaxing redefinition restrictions for re-exportation robustness | VS 2019 16.8 20 |
| P1874R1 Dynamic Initialization Order of Non-Local Variables in Modules | VS 2019 16.8 20 |
| P1975R0 Fixing the wording of parenthesized aggregate-initialization | VS 2019 16.8 20 |
| P0634R3 Down with typename! | VS 2019 16.9 20 |
| P0784R7 More constexpr containers | VS 2019 16.9 20 |
| P0840R2 [[no_unique_address]] attribute | VS 2019 16.9 20 |
| P1064R0 Allowing virtual function calls in constant expressions | VS 2019 16.9 20 |
| P1141R2 Yet another approach for constrained declarations | VS 2019 16.9 20 |
| P1327R1 Allowing dynamic_cast, polymorphic typeid in constant expressions | VS 2019 16.9 20 |
| P1668R1 Permitting unevaluated inline assembly in constexpr functions | VS 2019 16.9 20 |
| P1073R3 Immediate functions | VS 2019 16.10 20 |
| P1143R2 constinit | VS 2019 16.10 20 |
| P1353R0 Missing feature-test macros | VS 2019 16.10 20 |
| P0735R1 Interaction of memory_order_consume with release sequences | N/A |
| P1236R1 Signed integers are two's complement | N/A |
| C++23 Core language features | Supported |
| P0330R8 Literal Suffix for (signed) size_t | no |
| P0847R7 Deducing this | no |
| P0849R8 auto(x): decay-copy in the language | no |
| P1102R2 Down with ()! | no |
| P1169R4 static operator() | no |
| P1401R5 Narrowing contextual conversions to bool | no |
| P1467R9 Extended floating-point types and standard names | no |
| P1774R8 Portable assumptions | no |
| P1787R6 Declarations and where to find them | no |
| P1847R4 Make declaration order layout mandated | VS 2022 17.0 23 |
| P1938R3 if consteval | no |
| P1949R7 C++ Identifier Syntax using Unicode Standard Annex 31 | no |
| P2029R4 Proposed resolution for core issues 411, 1656, and 2333; numeric and universal character escapes in character and string literals | no |
| P2036R3 Change scope of lambda trailing-return-type | no |
| P2071R2 Named universal character escapes | no |
| P2128R6 Multidimensional subscript operator | no |
| P2156R1 Allow Duplicate Attributes | no |
| P2173R1 Attributes on Lambda-Expressions | no |
| P2186R2 Remove Garbage Collection Support | VS 2022 17.0 23 |
| P2201R1 Mixed string literal concatenation | no |
| P2223R2 Trimming whitespaces before line splicing | no |
| P2242R3 Non-literal variables (and labels and gotos) in constexpr functions | no |
| P2246R1 Character encoding of diagnostic text | VS 2022 17.0 23 |
| P2266R3 Simpler implicit move | no |
| P2280R4 Using unknown pointers and references in constant expressions | no |
| P2290R3 Delimited escape sequences | no |
| P2295R6 Support for UTF-8 as a portable source file encoding | no |
| P2314R4 Character sets and encodings | no |
| P2316R2 Consistent character literal encoding | VS 2022 17.0 23 |
| P2324R2 Labels at the end of compound statements (C compatibility) | no |
| P2327R1 De-deprecating volatile compound operations | no |
| P2334R1 preprocessing directives elifdef and elifndef | no |
| P2360R0 Extend init-statement to allow alias-declaration | no |
| P2362R3 Remove non-encodable wide character literals and multicharacter wide character literals | no |
| P2437R1 Support for #warning | no |
| P2448R2 Relaxing some constexpr restrictions | no |
| P2460R2 Relax requirements on wchar_t to match existing practices | no |
| P2468R2 The Equality Operator You Are Looking For | no |
| P2493R0 Missing feature test macros for C++20 core papers | no |
| P2513R4 char8_t Compatibility and Portability Fix | VS 2022 17.4 DR |
| P2579R0 Mitigation strategies for P2036 "Changing scope for lambda trailing-return-type" | no |
| P2582R1 Wording for class template argument deduction from inherited constructors | no |
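For concreteness, the snippet below (not part of the Microsoft documentation) exercises several features listed above: structured bindings (P0217R3), selection statements with initializers (P0305R1), constexpr if (P0292R2), and the defaulted three-way comparison (P0515R3). Per the Supported column these are all available in VS 2019 16.x; it should build with cl /std:c++20 (the flag is available since VS 2019 16.11; earlier releases used /std:c++latest), and the file name demo.cpp is arbitrary.

```cpp
// Compile with: cl /std:c++20 /EHsc demo.cpp
#include <compare>
#include <iostream>
#include <map>
#include <string>
#include <type_traits>

struct Version {
    int major, minor;
    // P0515R3/P1185R2: defaulted <=> also synthesizes ==, <, >, etc.
    auto operator<=>(const Version&) const = default;
};

template <typename T>
std::string describe(const T& value) {
    if constexpr (std::is_arithmetic_v<T>)    // P0292R2: constexpr if
        return "number " + std::to_string(value);
    else
        return "something else";
}

int main() {
    std::map<std::string, Version> m{{"msvc", {19, 30}}};
    // P0305R1: if-statement with initializer.
    if (auto it = m.find("msvc"); it != m.end()) {
        // P0217R3: structured bindings unpack the key/value pair.
        const auto& [name, ver] = *it;
        std::cout << name << " " << ver.major << "." << ver.minor << "\n";
    }
    std::cout << describe(42) << "\n";
    std::cout << (Version{19, 30} < Version{19, 32}) << "\n";  // via <=>
}
```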
P0292R2 constexpr if-statements P0305R1 Selection statements with initializers P1381R1 Reference capture of structured bindings P0245R1 Hexfloat literals N4268 Allowing more non-type template args N4295 Fold expressions P0003R5 Removing dynamic-exception-specifications P0012R1 Adding noexcept to the type system P0035R4 Over-aligned dynamic memory allocation P0386R2 Inline variables P0522R0 Matching template template-parameters to compatible arguments P0036R0 Removing some empty unary folds N4261 Fixing qualification conversions P0017R1 Extended aggregate initialization P0091R3 Template argument deduction for class templates P0512R0 Class template argument deduction issues P0127R2 Declaring non-type template parameters with auto P0135R1 Guaranteed copy elision P0136R1 Rewording inheriting constructors P0137R1 std::launder P0145R3 Refining expression evaluation order P0400R0 Order of evaluation of function arguments P0195R2 Pack expansions in using-declarations P0283R2 Ignoring unrecognized attributes P0702R1 Fixing class template argument deduction for initializer-list ctors P0961R1 Relaxing the structured bindings customization point finding rules P0969R0 Allowing structured bindings to accessible members P0588R1 Simplifying implicit lambda capture P1771R1 for constructors P1825R0 Merged wording for P0527R1 and P1155R3, more implicit moves P0929R2 Checking for abstract class types P0962R1 Relaxing the range-for loop customization point finding rules P0859R0 CWG 1581: When are constexpr member functions defined P1009R2 Array size deduction in new-expressions P1286R2 Contra CWG DR1778 P0641R2 const mismatch with defaulted copy constructor P0704R1 Fixing const lvalue ref-qualified pointers to members P1041R4 Make char16_t/char32_t string literals be UTF-16/32 P1330R0 Changing the active member of a union inside constexpr P0972R0 noexcept For <chrono> zero(), min(), max() P0515R3 Three-way (spaceship) comparison operator <=> P0941R2 Feature-test macros P1008R1 Prohibiting aggregates with user-declared constructors P0329R4 Designated initialization P0846R0 ADL and function templates that are not visible P0409R2 Allowing lambda-capture [=, this] P0428R2 Familiar template syntax for generic lambdas P0624R2 Default constructible and assignable stateless lambdas P0780R2 Allowing pack expansion in lambda init-capture P0806R2 Deprecate implicit capture of this via [=] P1120R0 Consistency improvements for <=> and other comparison operators P1185R2 <=> != == P0734R0 Concepts P0857R0 Fixing functionality gaps in constraints P1084R2 Today's return-type-requirements are insufficient P0892R2 Conditional explicit P1091R3 Extending structured bindings to be more like variable declarations P1099R5 Using enum P1186R3 When do you actually use <=> P1630R1 Spaceship needs a tune-up P0306R4 Adding __VA_OPT__ for comma omission and comma deletion __VA_OPT__ /Zc:preprocessor P0614R1 Range-based for-loops with initializers P0683R1 Default member initializers for bit-fields P1002R1 try-catch blocks in constexpr functions P1161R3 Deprecate uses of the comma operator in subscripting expressions P1301R4 P1703R1 Recognizing header unit imports requires full preprocessing P1946R0 Allow defaulting comparisons by value P0479R5 and attributes P0692R1 Relaxing access checking on specializations P0732R2 Class types in non-type template parameters P1139R2 Address wording issues related to ISO 10646 P1907R1 Inconsistencies with non-type template parameters P1971R0 US053: Mandate the return type for return_void and return_value to 
be void P1971R0 US065: Apply Coroutines issue 24 from P0664R8 P1979R0 Resolution to US086 P0388R4 Permit conversions to arrays of unknown bound P0466R5 Layout-compatibility and Pointer-interconvertibility Traits P0722R3 Efficient sized delete for variable sized classes P1094R2 Nested inline namespaces P1152R4 Deprecating volatile P1331R2 Permitting trivial default initialization in constexpr contexts P1358R0 2310: Type completeness and derived-to-base pointer conversions P1452R2 On the non-uniform semantics of return-type-requirements P1616R1 Using unconstrained TTPs with constrained templates P1814R0 CTAD for alias templates P1816R0 CTAD for aggregates P1957R1 Converting from T to bool should be considered narrowing (re: US 212) P1968R0 CWG 2282: Consistency with mismatched aligned/non-over-aligned allocation/deallocation functions P1969R0 CWG 2280: Matching a usual deallocation function with placement new P1969R0 CWG 2382: Array allocation overhead for non-allocating placement new P1969R0 CWG 2441: Inline function parameters P1971R0 US052: Non-executed return statements in coroutines P1972R0 US105: Check satisfaction of constraints for non-templates when forming pointer to function P1980R0 CA096: Declaration matching for non-dependent requires-clauses P2082R1 Fixing CTAD for aggregates P2085R0 Consistent defaulted comparisons P2103R0 US033: Allow "import" inside linkage-specifications P2107R0 US064: Copy semantics of coroutine parameters P0912R5 Coroutines P1103R3 Modules P0315R4 Allowing lambdas in unevaluated contexts P0848R3 Conditionally trivial special member functions P0960R3 Allow initializing aggregates from a parenthesized list of values P1766R1 Mitigating minor modules maladies P1811R0 Relaxing redefinition restrictions for re-exportation robustness P1874R1 Dynamic Initialization Order of Non-Local Variables in Modules P1975R0 Fixing the wording of parenthesized aggregate-initialization P0634R3 Down with typename! P0784R7 More constexpr containers P0840R2 [[no_unique_address]] attribute P1064R0 Allowing virtual function calls in constant expressions P1141R2 Yet another approach for constrained declarations P1327R1 Allowing dynamic_cast, polymorphic typeid in constant expressions P1668R1 Permitting unevaluated inline assembly in constexpr functions P1073R3 Immediate functions P1143R2 constinit P1353R0 Missing feature-test macros P0735R1 Interaction of memory_order_consume with release sequences P1236R1 Signed integers are two's complement P0330R8 Literal Suffix for (signed) size_t P0847R7 Deducing this P0849R8 auto(x): decay-copy in the language P1102R2 Down with ()!
P1169R4 static operator() P1401R5 Narrowing contextual conversions to bool P1467R9 Extended floating-point types and standard names P1774R8 Portable assumptions P1787R6 Declarations and where to find them P1847R4 Make declaration order layout mandated P1938R3 if consteval P1949R7 C++ Identifier Syntax using Unicode Standard Annex 31 P2029R4 Proposed resolution for core issues 411, 1656, and 2333; numeric and universal character escapes in character and string literals P2036R3 Change scope of lambda trailing-return-type P2071R2 Named universal character escapes P2128R6 Multidimensional subscript operator P2156R1 Allow Duplicate Attributes P2173R1 Attributes on Lambda-Expressions P2186R2 Remove Garbage Collection Support P2201R1 Mixed string literal concatenation P2223R2 Trimming whitespaces before line splicing P2242R3 Non-literal variables (and labels and gotos) in constexpr functions P2246R1 Character encoding of diagnostic text P2266R3 Simpler implicit move P2280R4 Using unknown pointers and references in constant expressions P2290R3 Delimited escape sequences P2295R6 Support for UTF-8 as a portable source file encoding P2314R4 Character sets and encodings P2316R2 Consistent character literal encoding P2324R2 Labels at the end of compound statements (C compatibility) P2327R1 De-deprecating volatile compound operations P2334R1 preprocessing directives elifdef and elifndef P2360R0 Extend init-statement to allow alias-declaration P2362R3 Remove non-encodable wide character literals and multicharacter wide character literals P2437R1 Support for #warning P2448R2 Relaxing some constexpr restrictions P2460R2 Relax requirements on wchar_t to match existing practices P2468R2 The Equality Operator You Are Looking For P2493R0 Missing feature test macros for C++20 core papers P2513R4 char8_t Compatibility and Portability Fix P2579R0 Mitigation strategies for P2036 “Changing scope for lambda trailing-return-type” P2582R1 Wording for class template argument deduction from inherited constructors C++ Standard library features A more detailed listing of Standard Library features and bug fixes by product version is available on the GitHub Microsoft STL wiki Changelog page. | Feature | Supported | | --- | --- | | C++14 Standard library features | Supported | |  N3462 SFINAE-Friendly result_of | VS 2015.2 | |  N3302 constexpr For <complex> | VS 2015 | |  N3469 constexpr For <chrono> | VS 2015 | |  N3470 constexpr For <array> | VS 2015 | |  N3471 constexpr For <initializer_list>, <tuple>, <utility> | VS 2015 | |  N3545 integral_constant::operator()() | VS 2015 | |  N3642 UDLs For <chrono>, <string> (1729ms, "meow"s, etc.) | VS 2015 | |  N3644 Null Forward Iterators | VS 2015 | |  N3654 quoted() | VS 2015 | |  N3657 Heterogeneous Associative Lookup | VS 2015 | |  N3658 integer_sequence | VS 2015 | |  N3659 shared_mutex (Timed) | VS 2015 | |  N3668 exchange() | VS 2015 | |  N3669 Fixing constexpr Member Functions Without const | VS 2015 | |  N3670 get<T>() | VS 2015 | |  N3671 Dual-Range equal(), is_permutation(), mismatch() | VS 2015 | |  N3778 Sized Deallocation | VS 2015 | |  N3779 UDLs For <complex> (3.14i, etc.) | VS 2015 | |  N3789 constexpr For <functional> | VS 2015 | |  N3887 tuple_element_t | VS 2015 | |  N3891 Renaming shared_mutex (Timed) To shared_timed_mutex | VS 2015 | |  N3346 Minimal Container Element Requirements | VS 2013 | |  N3421 Transparent Operator Functors (less<>, etc.)
| VS 2013 | |  N3655 Alias Templates For <type_traits> (decay_t, etc.) | VS 2013 | |  N3656 make_unique() | VS 2013 | | C++17 Standard library features | Supported | |  N3911 void_t | VS 2015 14 | |  N4089 Safe Conversions In unique_ptr<T[]> | VS 2015 14 | |  N4169 invoke() | VS 2015 14 | |  N4190 Removing auto_ptr, random_shuffle(), And Old <functional> Stuff | VS 2015 F | |  N4258 noexcept Cleanups | VS 2015 14 | |  N4259 uncaught_exceptions() | VS 2015 14 | |  N4277 Trivially Copyable reference_wrapper | VS 2015 14 | |  N4279 insert_or_assign()/try_emplace() For map/unordered_map | VS 2015 14 | |  N4280 size(), empty(), data() | VS 2015 14 | |  N4366 Precisely Constraining unique_ptr Assignment | VS 2015 14 | |  N4387 Improving pair And tuple | VS 2015.2 14 | |  N4389 bool_constant | VS 2015 14 | |  N4508 shared_mutex (Untimed) | VS 2015.2 14 | |  N4510 Supporting Incomplete Types In vector/list/forward_list | VS 2013 14 | |  N4562 Library Fundamentals: <algorithm> sample() | VS 2017 15.0 | |  N4562 Library Fundamentals: <any> | VS 2017 15.0 | |  N4562 Library Fundamentals: <memory_resource>  P0337R0 Deleting polymorphic_allocator Assignment | VS 2017 15.6 | |  N4562 Library Fundamentals: <optional> | VS 2017 15.0 | |  N4562 Library Fundamentals: <string_view> | VS 2017 15.0 | |  N4562 Library Fundamentals: <tuple> apply() | VS 2017 15.0 | |  N4562 Library Fundamentals: Boyer-Moore search()  P0253R1 Fixing Searcher Return Types | VS 2017 15.3 17 | |  P0003R5 Removing Dynamic Exception Specifications | VS 2017 15.5 17 | |  P0004R1 Removing Deprecated Iostreams Aliases | VS 2015.2 F | |  P0005R4 not_fn()  P0358R1 Fixes For not_fn() | VS 2017 15.5 17 | |  P0006R0 Variable Templates For Type Traits (is_same_v, etc.) | VS 2015.2 14 | |  P0007R1 as_const() | VS 2015.2 14 | |  P0013R1 Logical Operator Type Traits (conjunction, etc.) | VS 2015.2 14 | |  P0024R2 Parallel Algorithms  P0336R1 Renaming Parallel Execution Policies  P0394R4 Parallel Algorithms Should terminate() For Exceptions  P0452R1 Unifying <numeric> Parallel Algorithms | VS 2017 15.7 G | |  P0025R1 clamp() | VS 2015.3 | |  P0030R1 hypot(x, y, z) | VS 2017 15.7 | |  P0031R0 constexpr For <array> (Again) And <iterator> | VS 2017 15.3 17 | |  P0032R3 Homogeneous Interface For variant/any/optional | VS 2017 15.0 | |  P0033R1 Rewording enable_shared_from_this | VS 2017 15.5 14 | |  P0040R3 Extending Memory Management Tools | VS 2017 15.3 17 | |  P0063R3 C11 Standard Library | VS 2015 C11, 14 | |  P0067R5 Elementary String Conversions | VS 2019 16.4 | |  P0074R0 owner_less<> | VS 2015.2 14 | |  P0077R2 is_callable, is_nothrow_callable | VS 2017 15.0 | |  P0083R3 Splicing Maps And Sets  P0508R0 Clarifying insert_return_type | VS 2017 15.5 17 | |  P0084R2 Emplace Return Type | VS 2017 15.3 17 | |  P0088R3 <variant> | VS 2017 15.0 | |  P0092R1 <chrono> floor(), ceil(), round(), abs() | VS 2015.2 14 | |  P0152R1 atomic::is_always_lock_free | VS 2017 15.3 17 | |  P0154R1 hardware_destructive_interference_size, etc. 
| VS 2017 15.3 17 | |  P0156R0 Variadic lock_guard | VS 2015.2 14 | |  P0156R2 Renaming Variadic lock_guard to scoped_lock | VS 2017 15.3 17 | |  P0163R0 shared_ptr::weak_type | VS 2017 15.0 | |  P0174R2 Deprecating Vestigial Library Parts | VS 2017 15.5 17 | |  P0185R1 is_swappable, is_nothrow_swappable | VS 2015.3 | |  P0209R2 make_from_tuple() | VS 2017 15.0 | |  P0218R1 <filesystem>  P0219R1 Relative Paths For Filesystem  P0317R1 Directory Entry Caching For Filesystem  P0392R0 Supporting string_view In Filesystem Paths  P0430R2 Supporting Non-POSIX Filesystems  P0492R2 Resolving NB Comments for Filesystem | VS 2017 15.7 H | |  P0220R1 Library Fundamentals V1 | VS 2017 15.6 | |  P0226R1 Mathematical Special Functions | VS 2017 15.7 | |  P0254R2 Integrating string_view And std::string | VS 2017 15.0 | |  P0258R2 has_unique_object_representations | VS 2017 15.3 I | |  P0272R1 Non-const basic_string::data() | VS 2015.3 | |  P0295R0 gcd(), lcm() | VS 2017 15.3 17 | |  P0298R3 std::byte | VS 2017 15.3 17, J | |  P0302R1 Removing Allocator Support In std::function | VS 2017 15.5 17 | |  P0307R2 Making Optional Greater Equal Again | VS 2017 15.0 | |  P0393R3 Making Variant Greater Equal | VS 2017 15.0 | |  P0403R1 UDLs For <string_view> ("meow"sv, etc.) | VS 2017 15.3 17 | |  P0414R2 shared_ptr<T[]>, shared_ptr<T[N]>  P0497R0 Fixing shared_ptr For Arrays | VS 2017 15.5 14 | |  P0418R2 atomic compare_exchange memory_order Requirements | VS 2017 15.3 14 | |  P0426R1 constexpr For char_traits | VS 2017 15.7 | |  P0433R2 Integrating template deduction for class templates into the standard library  P0739R0 Improving class template argument deduction integration into the standard library | VS 2017 15.7 | |  P0435R1 Overhauling common_type  P0548R1 Tweaking common_type and duration | VS 2017 15.3 14 | |  P0504R0 Revisiting in_place_t/in_place_type_t<T>/in_place_index_t<I> | VS 2017 15.0 | |  P0505R0 constexpr For <chrono> (Again) | VS 2017 15.3 17 | |  P0510R0 Rejecting variants Of Nothing, Arrays, References, And Incomplete Types | VS 2017 15.0 | |  P0513R0 Poisoning hash  P0599R1 noexcept hash | VS 2017 15.3 14 | |  P0516R0 Marking shared_future Copying As noexcept | VS 2017 15.3 14 | |  P0517R0 Constructing future_error From future_errc | VS 2017 15.3 14 | |  P0521R0 Deprecating shared_ptr::unique() | VS 2017 15.5 17 | |  P0558R1 Resolving atomic<T> Named Base Class Inconsistencies | VS 2017 15.3 14 | |  P0595R2 std::is_constant_evaluated() | VS 2019 16.5 20 | |  P0602R4 Propagating Copy/Move Triviality In variant/optional | VS 2017 15.3 17 | |  P0604R0 Changing is_callable/result_of To invoke_result, is_invocable, is_nothrow_invocable | VS 2017 15.3 17 | |  P0607R0 Inline Variables for the Standard Library | VS 2017 15.5 17 | |  P0618R0 Deprecating <codecvt> | VS 2017 15.5 17 | | C++17 Standard library features (Defect reports) | Supported | |  P0682R1 Repairing Elementary String Conversions | VS 2017 15.7 17 | |  P1164R1 Making create_directory() Intuitive | VS 2019 16.0 14 | | C++20 Standard library features | Supported | |  P0809R0 Comparing Unordered Containers | VS 2010 14 | |  P0858R0 Constexpr Iterator Requirements | VS 2017 15.3 17 | |  P0777R1 Avoiding Unnecessary Decay | VS 2017 15.7 14 | |  P0550R2 remove_cvref | VS 2019 16.0 20 | |  P0318R1 unwrap_reference, unwrap_ref_decay | VS 2019 16.1 20 | |  P0457R2 starts_with()/ends_with() For basic_string/basic_string_view | VS 2019 16.1 20 | |  P0458R2 contains() For Ordered And Unordered Associative Containers | VS 2019 16.1 20 | |  P0646R1
list/forward_list remove()/remove_if()/unique() Return size_type | VS 2019 16.1 20 | |  P0769R2 shift_left(), shift_right() | VS 2019 16.1 20 | |  P0887R1 type_identity | VS 2019 16.1 20 | |  P0020R6 atomic<float>, atomic<double>, atomic<long double> | VS 2019 16.2 20 | |  P0463R1 endian | VS 2019 16.2 20 | |  P0482R6 char8_t: A type for UTF-8 characters and strings | VS 2019 16.2 20 | |  P0600R1 For The STL, Part 1 | VS 2019 16.2 20 | |  P0653R2 to_address() | VS 2019 16.2 20 | |  P0754R2 <version> | VS 2019 16.2 20 | |  P0771R1 noexcept For std::function's Move Constructor | VS 2019 16.2 20 | |  P0487R1 Fixing operator>>(basic_istream&, CharT) | VS 2019 16.3 20 | |  P0616R0 Using move() In <numeric> | VS 2019 16.3 20 | |  P0758R1 is_nothrow_convertible | VS 2019 16.3 20 | |  P0898R3 Standard Library Concepts | VS 2019 16.3 20 | |  P0919R3 Heterogeneous Lookup For Unordered Containers | VS 2019 16.3 20 | |  P1754R1 Rename Concepts to standard_case | VS 2019 16.4 20 | |  P0325R4 to_array from LFTS with updates | VS 2019 16.5 20 | |  P0340R3 SFINAE-Friendly underlying_type | VS 2019 16.5 14 | |  P0356R5 bind_front() | VS 2019 16.5 20 | |  P0439R0 enum class memory_order | VS 2019 16.5 20 | |  P0553R4 <bit> Rotating And Counting Functions | VS 2019 16.5 20 | |  P0556R3 <bit> ispow2(), ceil2(), floor2(), log2p1() | VS 2019 16.5 20 | |  P0595R2 is_constant_evaluated() | VS 2019 16.5 20 | |  P0631R8 <numbers> Math Constants | VS 2019 16.5 20 | |  P0655R1 visit<R>() | VS 2019 16.5 20 | |  P0738R2 istream_iterator Cleanup | VS 2019 16.5 14 | |  P0767R1 Deprecating is_pod | VS 2019 16.5 20 | |  P0966R1 string::reserve() Should Not Shrink | VS 2019 16.5 20 | |  P1209R0 erase_if(), erase() | VS 2019 16.5 20 | |  P1227R2 Signed std::ssize(), Unsigned span::size() | VS 2019 16.5 20 | |  P1355R2 Narrow Contract For ceil2() | VS 2019 16.5 20 | |  P1357R1 is_bounded_array, is_unbounded_array | VS 2019 16.5 20 | |  P1612R1 Relocating endian To <bit> | VS 2019 16.5 20 | |  P1651R0 bind_front() Should Not Unwrap reference_wrapper | VS 2019 16.5 20 | |  P1690R1 Refining Heterogeneous Lookup For Unordered Containers | VS 2019 16.5 20 | |  P1902R1 Missing Feature-Test Macros 2017-2019 | VS 2019 16.5 14 | |  P0122R7 <span>  P1024R3 Enhancing span usability  P1085R2 Removing span comparisons  P1394R4 Range constructor for span  P1872R0 span should have size_type, not index_type | VS 2019 16.6 20 | |  P0202R3 constexpr for <algorithm> and exchange() | VS 2019 16.6 20 | |  P0357R3 Supporting Incomplete Types In reference_wrapper | VS 2019 16.6 20 | |  P0619R4 Removing C++17-Deprecated Features In C++20 | VS 2019 16.6 20 | |  P0879R0 constexpr for swapping functions | VS 2019 16.6 20 | |  P0883R2 Fixing atomic initialization | VS 2019 16.6 14 | |  P0935R0 Eradicating Unnecessarily Explicit Default Constructors | VS 2019 16.6 14 | |  P1006R1 constexpr For pointer_traits<T>::pointer_to() | VS 2019 16.6 20 | |  P1165R1 Consistently Propagating Stateful Allocators In basic_string's operator+() | VS 2019 16.6 14 | |  P1423R3 char8_t backward compatibility remediation | VS 2019 16.6 20 | |  P1645R1 constexpr for <numeric> algorithms | VS 2019 16.6 20 | |  P0415R1 constexpr For <complex> (Again) | VS 2019 16.7 20 | |  P0476R2 <bit> bit_cast | VS 2019 16.7 20 | |  P0528R3 Atomic Compare-And-Exchange With Padding Bits | VS 2019 16.7 20 | |  P0586R2 Integer comparison functions | VS 2019 16.7 20 | |  P0674R1 make_shared() For Arrays | VS 2019 16.7 20 | |  P0718R2 atomic<shared_ptr<T>>, atomic<weak_ptr<T>> | VS 2019 16.7 20 | 
|  P1023R0 constexpr For std::array Comparisons | VS 2019 16.7 20 | |  P1115R3 erase()/erase_if() Return size_type | VS 2019 16.7 20 | |  P1831R1 Deprecating volatile in the standard library | VS 2019 16.7 20 | |  P1871R1 Concept traits should be named after concepts | VS 2019 16.7 20 | |  P1956R1 <bit> has_single_bit(), bit_ceil(), bit_floor(), bit_width() | VS 2019 16.7 20 | |  P1964R2 Replacing boolean With boolean-testable | VS 2019 16.7 20 | |  P1976R2 Fixed-size span construction from dynamic range | VS 2019 16.7 20 | |  P2091R0 Issues with range access CPOs | VS 2019 16.7 20 | |  P2102R0 Make "implicit expression variations" more explicit | VS 2019 16.7 20 | |  P2116R0 Remove tuple-like protocol support from fixed-extent span | VS 2019 16.7 20 | |  P0019R8 atomic_ref | VS 2019 16.8 20 | |  P0528R3 Library support for atomic compare-and-exchange with padding bits | VS 2019 16.8 20 | |  P0811R3 midpoint(), lerp() | VS 2019 16.8 20 | |  P0912R5 Library Support For Coroutines | VS 2019 16.8 20 | |  P1001R2 execution::unseq | VS 2019 16.8 20 | |  P1032R1 Miscellaneous constexpr | VS 2019 16.8 20 | |  P1065R2 constexpr INVOKE | VS 2019 16.8 20 | |  P1123R0 Editorial Guidance for merging P0019r8 and P0528r3 | VS 2019 16.8 20 | |  P1960R0 NB Comment Changes Reviewed by SG1 | VS 2019 16.8 20 | |  P0339R6 polymorphic_allocator<> | VS 2019 16.9 20 | |  P0660R10 <stop_token> and jthread | VS 2019 16.9 20 | |  P0768R1 Library Support For The Spaceship Comparison Operator <=> | VS 2019 16.9 20 | |  P1007R3 assume_aligned() | VS 2019 16.9 20 | |  P1020R1 Smart Pointer Creation With Default Initialization | VS 2019 16.9 20 | |  P1135R6 The C++20 Synchronization Library | VS 2019 16.9 20 | |  P1771R1 Library support for [[nodiscard]] for constructors | VS 2019 16.9 20 | |  P0053R7 <syncstream>  P0753R2 osyncstream Manipulators | VS 2019 16.10 20 | |  P0355R7 <chrono> Calendars And Time Zones | VS 2019 16.10 20abi | |  P0408R7 Efficient access To basic_stringbuf's buffer | VS 2019 16.10 20 | |  P0466R5 Library support for layout-compatibility and pointer-interconvertibility traits | VS 2019 16.10 20 | |  P0475R1 Guaranteed Copy Elision For Piecewise Construction | VS 2019 16.10 20 | |  P0591R4 Utility Functions For Uses-Allocator Construction | VS 2019 16.10 20 | |  P0608R3 Improving variant's Converting Constructor/Assignment | VS 2019 16.10 20 | |  P0645R10 <format> Text Formatting | VS 2019 16.10 20abi | |  P0784R7 Library support for more constexpr containers | VS 2019 16.10 20 | |  P0896R4 <ranges> | VS 2019 16.10 20abi | |  P0980R1 constexpr std::string | VS 2019 16.10 20, P | |  P1004R2 constexpr std::vector | VS 2019 16.10 20, P | |  P1208R6 <source_location> | VS 2019 16.10 20 | |  P1502R1 Standard Library Header Units | VS 2019 16.10 20 | |  P1614R2 Adding Spaceship <=> To The Library | VS 2019 16.10 20 | |  P1285R0 Improving Completeness Requirements For Type Traits | N/A | | C++20 Standard library features (Defect reports) | Supported | |  P2325R3 Views Should Not Be Required To Be Default Constructible | VS 2022 17.0 20abi | |  P2328R1 join_view should join all views of ranges | VS 2022 17.0 20abi | |  P2367R0 Remove misuses of list-initialization from clause 24 ranges | VS 2022 17.0 20abi | |  P2259R1 Partial LWG issue resolution: repairing Input Range Adaptors and counted_iterator | VS 2022 17.0 23 | | C++23 Standard library features | Supported | |  P0288R9 move_only_function | VS 2022 17.2 23 | |  P0323R12 <expected> | VS 2022 17.3 23 | |  P0401R6 Providing Size Feedback In The Allocator
Interface | VS 2022 17.0 23 | |  P0448R4 <spanstream> | VS 2022 17.1 23 | |  P0627R6 unreachable() | VS 2022 17.2 23 | |  P0798R8 Monadic Operations For optional | VS 2022 17.2 23 | |  P0849R8 auto(x): decay-copy In The Language | VS 2022 17.4 23 | |  P0881R7 <stacktrace> | VS 2022 17.4 23 | |  P0943R6 Supporting C Atomics In C++ | VS 2022 17.1 23 | |  P1048R1 is_scoped_enum | VS 2022 17.0 23 | |  P1072R10 basic_string::resize_and_overwrite | VS 2022 17.1 23 | |  P1132R7 out_ptr(), inout_ptr() | VS 2022 17.0 23 | |  P1147R1 Printing volatile Pointers | VS 2022 17.1 23 | |  P1206R7 Conversions From Ranges To Containers | VS 2022 17.4 23 | |  P1272R4 byteswap() | VS 2022 17.1 23 | |  P1328R1 constexpr type_info::operator==() | VS 2022 17.4 23 | |  P1413R3 Deprecate aligned_storage And aligned_union | VS 2022 17.3 23 | |  P1425R4 Iterator Pair Constructors For stack And queue | VS 2022 17.1 23 | |  P1518R2 Stop Overconstraining Allocators In Container Deduction Guides | VS 2022 17.1 23 | |  P1659R3 ranges::starts_with, ranges::ends_with | VS 2022 17.1 23 | |  P1679R3 contains() For basic_string/basic_string_view | VS 2022 17.0 23 | |  P1682R3 to_underlying() For Enumerations | VS 2022 17.0 23 | |  P1899R3 views::stride | VS 2022 17.4 23 | |  P1951R1 Default Template Arguments For pair's Forwarding Constructor | VS 2022 17.0 23 | |  P1989R2 Range Constructor For string_view | VS 2022 17.0 23 | |  P2077R3 Heterogeneous Erasure Overloads For Associative Containers | VS 2022 17.2 23 | |  P2136R3 invoke_r() | VS 2022 17.1 23 | |  P2162R2 Inheriting from std::variant | VS 2022 17.0 17 | |  P2166R1 Prohibit basic_string and basic_string_view from being constructed from nullptr | VS 2022 17.0 23, R | |  P2186R2 Removed garbage collection support | VS 2022 17.0 23, Q | |  P2251R1 Require span And basic_string_view To Be Trivially Copyable | VS 2022 17.1 23 | |  P2273R3 constexpr unique_ptr | VS 2022 17.3 23 | |  P2291R3 constexpr Integral <charconv> | VS 2022 17.4 23 | |  P2302R4 ranges::contains, ranges::contains_subrange | VS 2022 17.4 23 | |  P2321R2 std::zip | partial in VS 2022 17.5 23 | |  P2322R6 ranges::fold_left, ranges::fold_right, etc. | VS 2022 17.5 23 | |  P2387R3 Pipe Support For User-Defined Range Adaptors | VS 2022 17.4 23 | |  P2393R1 Cleaning Up Integer-Class Types | VS 2022 17.2 23 | |  P2401R0 Conditional noexcept For exchange() | VS 2022 17.1 23 | |  P2408R5 Ranges Iterators As Inputs To Non-Ranges Algorithms | VS 2022 17.4 23 | |  P2417R2 More constexpr bitset | VS 2022 17.4 23 | |  P2419R2 Clarify Handling Of Encodings In Localized Formatting Of chrono Types | VS 2022 17.4 23 | |  P2438R2 string::substr() && | VS 2022 17.4 23 | |  P2440R1 ranges::iota, ranges::shift_left, ranges::shift_right | VS 2022 17.4 23 | |  P2441R2 views::join_with | VS 2022 17.4 23 | |  P2442R1 Windowing Range Adaptors: views::chunk, views::slide | VS 2022 17.3 23 | |  P2443R1 views::chunk_by | VS 2022 17.3 23 | |  P2445R1 forward_like() | VS 2022 17.4 23 | |  P2446R2 views::as_rvalue | VS 2022 17.4 23 | |  P2465R3 Standard Library Modules std And std.compat | no | |  P2494R2 Relaxing Range Adaptors To Allow Move-Only Types | VS 2022 17.4 23 | |  P2499R0 string_view Range Constructor Should Be explicit | VS 2022 17.4 23 | |  P2508R1 basic_format_string, format_string, wformat_string | VS 2022 17.5 23 | |  P2517R1 Conditional noexcept For apply() | VS 2022 17.4 23 | |  P2520R0 move_iterator<T> Should Be A Random-Access Iterator | VS 2022 17.4 23 | |
P2549R1 unexpected<E>::error() | VS 2022 17.3 23 |

A group of papers listed together indicates a Standard feature along with one or more approved improvements or expansions. These features are implemented together.
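Most of the C++23 rows above are self-contained library additions. As a quick orientation (not taken from the table itself, and assuming a toolset where the row says the feature is available, for example VS 2022 17.3 with /std:c++latest for P0323R12), here is a minimal sketch of <expected>; parse_port is a made-up example name:

```cpp
// Minimal sketch of P0323R12 <expected>: a return type that carries either a
// value or an error description, instead of throwing.
#include <charconv>
#include <expected>
#include <string>

std::expected<int, std::string> parse_port(const std::string& s) {
    int value = 0;
    auto [ptr, ec] = std::from_chars(s.data(), s.data() + s.size(), value);
    if (ec != std::errc{} || ptr != s.data() + s.size())
        return std::unexpected("not a number: " + s);  // error path
    return value;                                      // success path
}
```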
C Standard library features

| Feature | Supported | | --- | --- | | C99 Standard library features | Supported | |  Alternative spellings macros <iso646.h> | VS 2015 | |  Wide character support <wchar.h> and <wctype.h> | VS 2015 | |  Complex support in <complex.h> | Partial in VS 2015 K | |  Type generic math functions <tgmath.h> | VS 2019 16.8 2104 | |  Additional floating point characteristics <float.h> | VS 2015 | |  Hexadecimal float printf specifiers %A, %a | VS 2015 | |  Extended integer types <inttypes.h>, <stdint.h> | VS 2015 | |  vscanf family in <stdio.h> and <wchar.h> | VS 2015 | |  New math functions in <math.h> | VS 2015 | |  Treatment of math library error conditions (math_errhandling) | VS 2015 | |  Floating point environment access <fenv.h> | VS 2015 | |  %lf conversion specifier for printf | VS 2015 | |  snprintf family of functions in <stdio.h> | VS 2015 | |  boolean type in <stdbool.h> | VS 2015 | |  va_copy macro | VS 2015 | |  Additional strftime conversion specifiers | Partial in VS 2015 L | | C11 Standard library features | Supported | |  Alignment specifiers <stdalign.h> | VS 2019 16.8 C11, 2104 | |  aligned_alloc | No M | |  No return specifiers <stdnoreturn.h> | VS 2019 16.8 C11, 2104 | |  Threading support <threads.h> | yes | |  Atomic support <stdatomic.h> | experimental | |  char16_t, char32_t <uchar.h> | VS 2019 16.8 C11 | |  gets() removed | VS 2019 16.8 C11, N | |  gets_s() | VS 2019 16.8 C11 | |  Bounds-checking interfaces (_s APIs) | Partial in VS 2015 C11, O | |  fopen "x" option | VS 2019 16.8 C11 | |  Static assertions | VS 2019 16.8 C11, 2104 | |  quick_exit | VS 2019 16.8 C11 | |  <complex.h> macros | VS 2019 16.8 C11 | |  floating point characteristics <float.h> | VS 2019 16.8 C11 | |  C11 threads <threads.h> | VS 2022 17.8 C11 |

Supported values

No Not yet implemented. Partial The implementation is incomplete. For more information, see the Notes section. VS 2010 Supported in Visual Studio 2010. VS 2013 Supported in Visual Studio 2013. VS 2015 Supported in Visual Studio 2015 (RTW). VS 2015.2 and VS 2015.3 indicate features that are supported in Visual Studio 2015 Update 2 and Visual Studio 2015 Update 3, respectively. VS 2017 15.0 Supported in Visual Studio 2017 version 15.0 (RTW). VS 2017 15.3 Supported in Visual Studio 2017 version 15.3. VS 2017 15.5 Supported in Visual Studio 2017 version 15.5. VS 2017 15.7 Supported in Visual Studio 2017 version 15.7. VS 2019 16.0 Supported in Visual Studio 2019 version 16.0 (RTW). VS 2019 16.1 Supported in Visual Studio 2019 version 16.1. VS 2019 16.2 Supported in Visual Studio 2019 version 16.2. VS 2019 16.3 Supported in Visual Studio 2019 version 16.3. VS 2019 16.4 Supported in Visual Studio 2019 version 16.4. VS 2019 16.5 Supported in Visual Studio 2019 version 16.5. VS 2019 16.6 Supported in Visual Studio 2019 version 16.6. VS 2019 16.7 Supported in Visual Studio 2019 version 16.7. VS 2019 16.8 Supported in Visual Studio 2019 version 16.8. VS 2019 16.9 Supported in Visual Studio 2019 version 16.9. VS 2019 16.10 Supported in Visual Studio 2019 version 16.10. VS 2022 17.0 Supported in Visual Studio 2022 version 17.0.
VS 2022 17.1 Supported in Visual Studio 2022 version 17.1. VS 2022 17.2 Supported in Visual Studio 2022 version 17.2. VS 2022 17.3 Supported in Visual Studio 2022 version 17.3. VS 2022 17.4 Supported in Visual Studio 2022 version 17.4. VS 2022 17.5 Supported in Visual Studio 2022 version 17.5.

Notes

A In /std:c++14 mode, dynamic exception specifications remain unimplemented, and throw() is still treated as a synonym for __declspec(nothrow). In C++17, dynamic exception specifications were mostly removed by P0003R5, except for one vestige: throw() is deprecated and required to behave as a synonym for noexcept. In /std:c++17 mode, MSVC now conforms to the Standard by giving throw() the same behavior as noexcept, that is, enforcement via termination. The compiler option /Zc:noexceptTypes requests the old behavior of __declspec(nothrow). It's likely that throw() will be removed in a future version of C++. To help with migrating code in response to these changes in the Standard and the Microsoft implementation, new compiler warnings for exception specification issues are added under /std:c++17 and /permissive-.

B Supported in /permissive- mode in Visual Studio 2017 version 15.7. For more information, see Two-phase name lookup support comes to MSVC.

C In Visual Studio 2019 version 16.6 and later versions, the compiler fully implements the standard C99 preprocessor via the /Zc:preprocessor option. (In Visual Studio 2017 versions 15.8 through 16.5, the compiler supports the standard C99 preprocessor via the /experimental:preprocessor compiler option.) This option is on by default when the compiler option /std:c11 or /std:c17 is specified.

D Supported under /std:c++14 with a suppressible warning, C4984.

E The implementation is sufficient to support the C++20 Standard Library. A complete implementation requires a binary breaking change.

F Features removed when the /std:c++17 or later compiler option is specified. To re-enable these features (to ease the transition to newer language modes), use these macros: _HAS_AUTO_PTR_ETC, _HAS_FUNCTION_ALLOCATOR_SUPPORT, _HAS_OLD_IOSTREAMS_MEMBERS, and _HAS_UNEXPECTED.

G C++17's parallel algorithms library is complete. Complete doesn't mean that every algorithm is parallelized in every case. The most important algorithms have been parallelized. Execution policy signatures are provided even where the implementation doesn't parallelize algorithms. The central internal header, <yvals_core.h>, contains the following "Parallel Algorithms Notes": C++ allows an implementation to implement parallel algorithms as calls to the serial algorithms. This implementation parallelizes several common algorithm calls, but not all.
The following algorithms are parallelized: adjacent_difference, adjacent_find, all_of, any_of, count, count_if, equal, exclusive_scan, find, find_end, find_first_of, find_if, find_if_not, for_each, for_each_n, inclusive_scan, is_heap, is_heap_until, is_partitioned, is_sorted, is_sorted_until, mismatch, none_of, partition, reduce, remove, remove_if, replace, replace_if, search, search_n, set_difference, set_intersection, sort, stable_sort, transform, transform_exclusive_scan, transform_inclusive_scan, transform_reduce.

These algorithms aren't presently parallelized: copy, copy_n, fill, fill_n, move, reverse, reverse_copy, rotate, rotate_copy, shift_left, shift_right, swap_ranges, generate, generate_n, partial_sort, partial_sort_copy, copy_if, includes, inplace_merge, lexicographical_compare, max_element, merge, min_element, minmax_element, nth_element, partition_copy, remove_copy, remove_copy_if, replace_copy, replace_copy_if, set_symmetric_difference, set_union, stable_partition, unique, unique_copy.

H This is a wholly new implementation, incompatible with the previous std::experimental version, made necessary by symlink support, bug fixes, and changes in standard-required behavior. Currently, <filesystem> provides both the new std::filesystem and the previous std::experimental::filesystem. The <experimental/filesystem> header provides only the old experimental implementation. Expect removal of the experimental implementation in the next ABI-breaking release of the libraries.

I Supported by a compiler intrinsic.

J std::byte is enabled by /std:c++17 or later, but because it can conflict with the Windows SDK headers in some cases, it has a fine-grained opt-out macro. To disable it, define _HAS_STD_BYTE as 0.

K MSVC doesn't support the _Complex keyword or native complex types. The Universal CRT <complex.h> uses implementation-specific macros to achieve the same effect. For more information, see C complex math support.

L The Universal CRT doesn't implement the strftime E and O alternative conversion modifiers. These modifiers are ignored (for example, %Oe behaves the same as %e). The modifiers aren't supported by the underlying locale APIs.

M The Universal CRT doesn't implement C11 aligned_alloc, but does provide _aligned_malloc and _aligned_free. Because the Windows operating system doesn't support aligned allocations, this function is unlikely to be implemented.

N The declaration is removed, but the export for the function remains for backward compatibility.

O Certain bounds-checking functions are unimplemented, or have different signatures, or aren't part of the C11 or C17 standard. These functions are unimplemented: abort_handler_s, ignore_handler_s, memset_s, set_constraint_handler_s, snprintf_s, snwprintf_s, strerrorlen_s, vsnwprintf_s. These functions have different signatures: gmtime_s, localtime_s, qsort_s, strtok_s, vsnprintf_s, wcstok_s. These functions don't appear in the standard: clearerr_s, fread_s.

P Support was added in Visual Studio 2019 version 16.10. Support for Clang was added in Visual Studio 2022 version 17.0.
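To make note G concrete, here is a minimal sketch (not from the original page) that exercises two calls from the parallelized list above; it assumes a toolset compiling with /std:c++17 or later, per the table:

```cpp
// std::sort and std::reduce both appear in the parallelized list above;
// std::execution::par from <execution> requests their parallel overloads.
#include <algorithm>
#include <execution>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> v(1'000'000, 1.5);
    std::sort(std::execution::par, v.begin(), v.end());
    double sum = std::reduce(std::execution::par, v.cbegin(), v.cend(), 0.0);
    return sum > 0 ? 0 : 1;  // use the result so it isn't optimized away
}
```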
Q This removes declare_reachable, undeclare_reachable, declare_no_pointers, undeclare_no_pointers, get_pointer_safety. Previously, these functions had no effect.

R This is a common source-breaking change. However, code that previously had undefined behavior at runtime is now rejected with compiler errors.

S Input range adaptors and counted_iterator are implemented in VS 2022 17.0. A future update to Visual Studio 2019 version 16.11 is planned to incorporate these changes.

T <stdatomic.h> is currently supported when compiled as C++ (/std:c++latest). It isn't yet supported when compiled as C (/std:c11 and /std:c17).

14 These C++17 and C++20 features are always enabled, even when /std:c++14 (the default) is specified. The reason is either because the feature was implemented before the introduction of the /std options, or because conditional implementation was undesirably complex.

17 These features are enabled by the /std:c++17 or later compiler option.

20 In versions through Visual Studio 2019 version 16.10, these features are enabled by the /std:c++latest compiler option. Visual Studio 2019 version 16.11 added the /std:c++20 compiler option to enable these features.

20abi Because of ongoing post-release work on the C++20 standard, <format>, the formatting parts of <chrono> (which rely on <format>), and the range factories and range adaptors from <ranges> (everything that needs the view concept) are only available under /std:c++latest. Expect these features under /std:c++20 after agreement is reached with WG21 that no further ABI-breaking changes are necessary. The remaining parts of <chrono> and the algorithms that apply to ranges are enabled under the /std:c++20 compiler option in Visual Studio 2019 version 16.11 and later versions.

23 In Visual Studio 2022 version 17.0 and up, these features are enabled by the /std:c++latest compiler option.

C11 Compiler support for C11 and C17 requires Visual Studio 2019 version 16.8 or higher. Except as noted, C11 and C17 library support requires Windows SDK build 10.0.20211.0 or higher. For more information on how to install support for C11 and C17, see Install C11 and C17 support in Visual Studio.

DR These features are enabled in all C++ /std compiler option modes. The C++ Standard committee adopted this change as a retroactive Defect Report to C++11 and all later versions.

2104 C11 library support for this feature requires Windows SDK build 10.0.20348.0 (version 2104) or higher.

See also

C++ Language Reference
C++ Standard Library
C++ conformance improvements in Visual Studio
What's New for Visual C++ in Visual Studio
Visual C++ change history 2003 through 2015
Visual C++ What's New 2003 through 2015
C++ team blog
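Since several of the notes above hinge on which /std mode enables what, one portable way to probe this table from code is the feature-test macros standardized by P0941R2 and backfilled by P1902R1. A minimal sketch (illustrative, not from the original page; the stamp function is a made-up name):

```cpp
// <version> (P0754R2) defines the library feature-test macros without
// dragging in any other standard headers.
#include <string>
#include <version>

#ifdef __cpp_lib_format   // per note 20abi, may require /std:c++latest on older toolsets
  #include <format>
  std::string stamp() { return std::format("__cpp_lib_format = {}", __cpp_lib_format); }
#else
  std::string stamp() { return "<format> not available in this mode"; }
#endif
```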
261
Constructing a NJ tree with color coded branches
===============

Davis R Users' Group, 398 views

Erin Wilkus, Mar 4, 2016, 3:56:55 PM, to Davis R Users' Group

I'm generating NJ trees using the adegenet package in R and I want to color code the branches. I expect this is fairly straightforward but I'm missing something that I haven't been able to find through my online searches. I can label points by referencing my annotation file, but when it comes to generating colors for the branches based on the same variable I have a problem.

install.packages(c("ape", "pegas", "seqinr", "ggplot2", "adegenet", "phangorn"))
library("ape")
library("pegas")
library("seqinr")
library("ggplot2")
library("adegenet")
library("phangorn")

setwd("/Users/ErinWilkus/Desktop/ErinsSNP1")

# Reading fasta file
dna <- fasta2DNAbin("SNP_ALLALL.fa")
dna
class(dna)
as.character(dna[1:5, 1:10])

# Annotation file to include seed source
annot <- read.csv("~/Desktop/ErinsSNP1/AnnotationsALL.csv")
head(annot)

# Checking that samples are identical in both files
dim(dna)
dim(annot)
all(annot$accession == rownames(dna))
table(annot$Source)

# Calculating distance with JC69 (Jukes and Cantor (1969)). It assumes that all
# substitutions (i.e. a change of a base by another one) have the same probability.
# This probability is the same for all sites along the DNA sequence. Other
# distances are possible.
D <- dist.dna(dna, model = "JC69")
class(D)
length(D)

# Plotting distance matrix
temp <- as.data.frame(as.matrix(D))
table.paint(temp, cleg = 0, clabel.row = .5, clabel.col = .5)

# Transform data to generate distance matrix image
temp <- t(as.matrix(D))
temp <- temp[, ncol(temp):1]
par(mar = c(1, 5, 5, 1))
image(x = 1:230, y = 1:230, temp, col = rev(heat.colors(100)),
      xaxt = "n", yaxt = "n", xlab = "", ylab = "")
axis(side = 2, at = 1:230, lab = rownames(dna), las = 2, cex.axis = .5)
axis(side = 3, at = 1:230, lab = rownames(dna), las = 3, cex.axis = .5)

# Making a Neighbor-Joining tree estimation
tre <- nj(D)
class(tre)
tre <- ladderize(tre)
tre

# Plot simple neighbor joining tree with color codes for sources
# Why won't this work?
plot(tre, show.tip = FALSE)
title("Unrooted NJ tree")
myPal <- colorRampPalette(c("red", "yellow", "green", "blue"))
tiplabels(annot$Source, bg = num2col(annot$Source, col.pal = myPal), cex = .5)
temp <- pretty(M:'SEA 5', 28)
legend("bottomleft", fill = num2col(temp, col.pal = myPal), leg = temp, ncol = 2)

Michael Levy, Mar 7, 2016, 12:09:14 AM

Hi Erin,

It’s a little hard to tell what’s going on without being able to run your code, but it looks to me like what you have should put colors underneath your tip labels, yes? If you want to color code the branches, you need a color for each segment (note that this is greater than the number of tips) and you can provide that vector of colors to the edge.color argument in the plot call, e.g. plot(tre, edge.color = myColors). Does that help?

-- Michael Levy
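Building on Michael's edge.color suggestion, here is a minimal sketch of one way to build that per-edge color vector (assuming tre and annot$Source from the original post; tipCols and edgeCols are made-up names):

tipCols <- num2col(as.numeric(annot$Source), col.pal = myPal)  # one color per tip
# tre$edge is a two-column matrix of branches; column 2 holds the child node.
# Nodes 1..Ntip(tre) are tips, so a branch that ends in a tip takes that tip's color.
edgeCols <- rep("grey40", nrow(tre$edge))   # default for internal branches
isTip <- tre$edge[, 2] <= Ntip(tre)
edgeCols[isTip] <- tipCols[tre$edge[isTip, 2]]
plot(tre, show.tip.label = FALSE, edge.color = edgeCols)
legend("bottomleft", fill = myPal(nlevels(annot$Source)),
       legend = levels(annot$Source), ncol = 2)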
262
Stabilizer code | Error Correction Zoo
===============

Stabilizer code

Description

A code whose logical subspace is the joint eigenspace (usually with eigenvalue +1) of a set of commuting unitary Pauli-type operators forming the code's stabilizer group. They can be block codes defined on tensor-product spaces of qubits or qudits, or non-block codes defined on single sufficiently large Hilbert spaces such as bosonic modes or group spaces.

The coding theory motivation for stabilizer codes came from linear binary codes, whose codewords form a closed subspace in the space of binary strings. Stabilizer codes extend this property, in various ways, to quantum error correction. Stabilizer codes can be defined succinctly using the stabilizer group generators and without explicitly writing out a basis of codewords. The stabilizer formalism is applicable to almost all quantum-code kingdoms; see list of stabilizer codes for a list of all stabilizer codes.

Stabilizer codes were originally defined for qubits, where the relevant commuting operators are tensor products of Pauli matrices. The Pauli stabilizer structure is useful in providing standardized encoding, gates, decoding, and performance bounds. Elements of this structure remain in qudit extensions, in particular for prime-dimensional modular qudits and Galois qudits. Infinite-dimensional Pauli-type bases yield the bosonic stabilizer and rotor stabilizer codes. One can switch between stabilizer codes by appending another stabilizer group and taking the center of the resulting larger group.

Stabilizer code switching, code deformation, update rule, or code rewiring: Stabilizer code switching is a map between stabilizer codes that is done using a stabilizer group \(\mathsf{F}\),
\[
\mathsf{S} \to \mathsf{N}_{\langle \mathsf{S},\mathsf{F} \rangle}(\mathsf{F}), \tag{1}
\]
where \(\mathsf{N}\) denotes taking the normalizer of a group (e.g., see [1,2] for proofs). Code switching may not preserve the logical information and instead implement logical measurements; conditions on \(\mathsf{S}\) and \(\mathsf{F}\) such that qubit stabilizer code switching preserves logical information are derived in [3; Prop. II.1]. The stabilizer rewiring algorithm (SRA) allows for code switching between a pair of compatible stabilizer codes (see also Ref. [5,6]), and ancillary qubits may be used to maintain minimum distance of any intermediate codes. Clifford operations and Pauli measurements can be expressed as sequences of code switching. In the context of stabilizer codes realizing Abelian topological phases, code switching implements anyon condensation of any anyons represented by operators in the group \(\mathsf{F}\). Code switching can be done using only transversal gates for qubit stabilizer codes.

Extensions of the stabilizer formalism, such as XS and XP stabilizer codes, relax the mutual commutation property. Other extensions, such as CWS and union stabilizer codes, enlarge the codespace by re-assigning error words as codewords.

Protection

The group of all Pauli-type operators typically serves as the set of noise operators for stabilizer codes.

Gates

A Gottesman-Knill-type theorem exists for qubits, modular qudits, Galois qudits, and rotors [10,11], as well as oscillators [12–14].

Decoding

The structure of stabilizer codes allows for straightforward syndrome-based decoding because the stabilizer generators serve as the code's check operators, and their eigenvalues serve as the error syndromes. The error correction process involves measuring the stabilizer generators and applying correcting Pauli-type operators based on the measurement outcomes.
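As a worked miniature of syndrome-based decoding (an illustrative example, not part of the original entry), consider the three-qubit bit-flip repetition code with stabilizer group
\[
\mathsf{S} = \langle Z_1 Z_2,\; Z_2 Z_3 \rangle, \qquad |\overline{0}\rangle = |000\rangle, \quad |\overline{1}\rangle = |111\rangle .
\]
A single bit flip \(X_i\) anticommutes with exactly the generators acting on qubit \(i\), so the pair of measured generator eigenvalues \((s_1, s_2)\) distinguishes all cases: \(X_1 \mapsto (-1,+1)\), \(X_2 \mapsto (-1,-1)\), \(X_3 \mapsto (+1,-1)\), while \((+1,+1)\) flags no bit-flip error. The decoder applies the indicated \(X_i\) to return the state to the codespace.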
Cousins Abelian topological code—There is a general correspondence between stabilizer codes and gauge theory, with the stabilizer group playing the role of the gauge group . Majorana stabilizer code—Majorana stabilizer codes are useful for Majorana-based architectures, where the degrees of freedom are electrons, and the notion of locality is different than all other code kingdoms. Member of code lists Hamiltonian-based codes Quantum codes Quantum codes with notable decoders Stabilizer codes Primary Hierarchy Quantum Domain Group Kingdom Group-based quantum codeQECCQuantum Parents Group-based quantum code Stabilizer codes are constructed out of Pauli strings, modular-qudit Pauli strings, Galois-qudit Pauli strings, oscillator displacement operators, or rotor generalized Pauli strings. All of these are examples of the Weyl-Heisenberg group on Manin's quantum plane, which is defined on a configuration space that is generally a free Abelian group [16–19]. Commuting-projector Hamiltonian codeHamiltonian-basedQECCQuantum Codespace is the ground-state space of the code Hamiltonian, which consists of an equal linear combination of stabilizer generators and which can be made into a frustration-free commuting-projector Hamiltonian. Frustration-free Hamiltonian codeHamiltonian-basedQECCQuantum Codespace is the ground-state space of the code Hamiltonian, which consists of an equal linear combination of stabilizer generators and which can be made into a frustration-free commuting-projector Hamiltonian. Knill codeQECCQuantum Stabilizer codes are Knill codes whose nice error basis is either the Pauli strings, modular-qudit Pauli strings, Galois-qudit Pauli strings, oscillator displacement operators, or rotor generalized Pauli strings. Stabilizer code Children Rotor stabilizer code Calderbank-Shor-Steane (CSS) stabilizer codeQuantum RM codeColorHomologicalKitaev surfaceBBQuantum QR code Graph quantum code Graph quantum codes are a subset of stabilizer codes over G-valued qudits for Abelian G. Any stabilizer code over Abelian G is locally equivalent to a graph quantum code (see also [21,22]). Quantum low-weight check (QLWC) codeQLDPCGeneralized homological-productQuantum RM codeColorHomologicalLattice stabilizerTwist-defect colorTwist-defect surfaceCDSCKitaev surfaceFracton stabilizerBB Random stabilizer code Bosonic stabilizer codeAnalog stabilizerQuantum lattice Modular-qudit stabilizer codeFermion-into-qubitTwist-defect colorTwist-defect surfaceFracton stabilizerColorBBHomologicalQuantum RM codeCDSCKitaev surface Galois-qudit stabilizer codeFermion-into-qubitTwist-defect colorCDSCTwist-defect surfaceColorHomologicalBBKitaev surfaceQuantum QR codeQuantum RM code References M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2012) DOID. Lee and B. Yoshida, “Randomly Monitored Quantum Codes”, (2024) arXiv:2402.00145D. Aasen, J. Haah, Z. Li, and R. S. K. Mong, “Measurement Quantum Cellular Automata and Anomalies in Floquet Codes”, (2023) arXiv:2304.01277K. R. Colladay and E. J. Mueller, “Rewiring stabilizer codes”, New Journal of Physics 20, 083030 (2018) arXiv:1707.09403DOIH. Bombin, “Gauge Color Codes: Optimal Transversal Gates and Gauge Fixing in Topological Stabilizer Codes”, (2015) arXiv:1311.0879Y. Hwang, B.-S. Choi, Y. Ko, and J. Heo, “Fault-tolerant conversion between stabilizer codes by Clifford operations”, (2015) arXiv:1511.02596C. Huang and M. Newman, “Transversal switching between generic stabilizer codes”, (2018) arXiv:1709.09282M. E. 
References

[1] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2012).
[2] D. Lee and B. Yoshida, "Randomly Monitored Quantum Codes" (2024). arXiv:2402.00145.
[3] D. Aasen, J. Haah, Z. Li, and R. S. K. Mong, "Measurement Quantum Cellular Automata and Anomalies in Floquet Codes" (2023). arXiv:2304.01277.
[4] K. R. Colladay and E. J. Mueller, "Rewiring stabilizer codes", New Journal of Physics 20, 083030 (2018). arXiv:1707.09403.
[5] H. Bombin, "Gauge Color Codes: Optimal Transversal Gates and Gauge Fixing in Topological Stabilizer Codes" (2015). arXiv:1311.0879.
[6] Y. Hwang, B.-S. Choi, Y. Ko, and J. Heo, "Fault-tolerant conversion between stabilizer codes by Clifford operations" (2015). arXiv:1511.02596.
[7] C. Huang and M. Newman, "Transversal switching between generic stabilizer codes" (2018). arXiv:1709.09282.
[8] M. E. Beverland, S. Huang, and V. Kliuchnikov, "Fault tolerance of stabilizer channels" (2024). arXiv:2401.12017.
[9] S. Heußen and J. Hilder, "Efficient fault-tolerant code switching via one-way transversal CNOT gates" (2024). arXiv:2409.13465.
[10] J. Bermejo-Vega and M. Van den Nest, "Classical simulations of Abelian-group normalizer circuits with intermediate measurements" (2013). arXiv:1210.3637.
[11] J. Bermejo-Vega, C. Y.-Y. Lin, and M. Van den Nest, "The computational power of normalizer circuits over black-box groups" (2014). arXiv:1409.4800.
[12] S. D. Bartlett, B. C. Sanders, S. L. Braunstein, and K. Nemoto, "Efficient Classical Simulation of Continuous Variable Quantum Information Processes", Physical Review Letters 88 (2002). arXiv:quant-ph/0109047.
[13] V. Veitch, N. Wiebe, C. Ferrie, and J. Emerson, "Efficient simulation scheme for a class of quantum optics experiments with non-negative Wigner representation", New Journal of Physics 15, 013037 (2013). arXiv:1210.1783.
[14] A. Mari and J. Eisert, "Positive Wigner Functions Render Classical Simulation of Quantum Computation Efficient", Physical Review Letters 109 (2012). arXiv:1208.3660.
[15] S. Carrozza, A. Chatwin-Davies, P. A. Hoehn, and F. M. Mele, "A correspondence between quantum error correcting codes and quantum reference frames" (2024). arXiv:2412.15317.
[16] Yu. I. Manin, "Some remarks on Koszul algebras and quantum groups", Annales de l'institut Fourier 37, 191 (1987).
[17] Yu. I. Manin, "Quantized Theta-Functions", Progress of Theoretical Physics Supplement 102, 219 (2013).
[18] Yu. I. Manin, "Functional equations for quantum theta functions" (2003). arXiv:math/0307393.
[19] E. Chang-Young and H. Kim, "Theta vectors and quantum theta functions", Journal of Physics A: Mathematical and General 38, 4255 (2005). arXiv:math/0402401.
[20] D. Schlingemann, "Stabilizer codes can be realized as graph codes" (2001). arXiv:quant-ph/0111080.
[21] M. Van den Nest, J. Dehaene, and B. De Moor, "Graphical description of the action of local Clifford transformations on graph states", Physical Review A 69 (2004). arXiv:quant-ph/0308151.
[22] M. Grassl, A. Klappenecker, and M. Rötteler, "Graphs, quadratic forms, and quantum codes", Proceedings IEEE International Symposium on Information Theory, 45. arXiv:quant-ph/0703112.

Page edit log: Victor V. Albert (2022-04-19), most recent.

Cite as: "Stabilizer code", The Error Correction Zoo (V. V. Albert & P. Faist, eds.), 2022.

Error correction zoo by Victor V. Albert, Philippe Faist, and many contributors. This work is licensed under a CC-BY-SA License.
263
Published Time: 2024-08-22T07:10:20-06:00

The Timeless Beauty of Phoenix Crowns: A Symbol of Culture and Elegance – OrientCrafted
===============

The Timeless Beauty of Phoenix Crowns: A Symbol of Culture and Elegance

August 22, 2024

The Chinese phoenix crown, known as "fengguan" (凤冠) in Mandarin, is more than just a beautiful piece of headwear—it's a symbol of history, culture, and exquisite craftsmanship. For centuries, these crowns have been worn by Chinese empresses and noblewomen, representing their status and the deep cultural significance behind their roles. Today, the phoenix crown continues to captivate people worldwide with its intricate designs and the profound stories it carries.

A Glimpse into the History of the Phoenix Crown

The origins of the Chinese phoenix crown date back to the Ming Dynasty (1368-1644), when it was predominantly worn by empresses, princesses, and other women of high rank during important ceremonies and weddings. The crown is named after the mythical phoenix, a symbol of grace, virtue, and renewal in Chinese culture. This majestic bird is often depicted as rising from the ashes, symbolizing immortality and the cyclical nature of life.

The phoenix crown is typically adorned with a variety of precious materials, including gold, pearls, jade, and kingfisher feathers. These luxurious elements not only enhanced the crown's aesthetic appeal but also reflected the wearer's noble status. The intricate design of each phoenix crown was carefully crafted to embody the wearer's dignity and elegance, making it a cherished heirloom passed down through generations.
The Cultural Significance of the Phoenix Crown

In Chinese culture, the phoenix is one of the most revered creatures, often associated with the empress and representing the feminine counterpart to the dragon, which symbolizes the emperor. Together, the dragon and phoenix embody the harmony of yin and yang, a fundamental concept in Chinese philosophy. The phoenix crown, therefore, is not merely an accessory but a powerful emblem of the balance between strength and beauty, authority and grace.

The phoenix crown also plays a significant role in Chinese weddings, particularly in traditional ceremonies where the bride wears a version of the crown. This symbolizes the bride's transformation into a revered figure within her new family, much like an empress in her court. The presence of the phoenix in the crown's design is believed to bring good fortune, happiness, and prosperity to the newlyweds.

The Art of Craftsmanship

The making of a phoenix crown is a labor-intensive process that requires exceptional skill and precision. Artisans spend countless hours meticulously crafting each element, from the delicate filigree work to the placement of gemstones and feathers. The process often involves multiple stages, including designing, casting, engraving, and assembling, each demanding a high level of expertise.

One of the most remarkable aspects of the phoenix crown is the use of kingfisher feathers, which are prized for their vibrant blue hue. These feathers are carefully applied to the crown to create intricate patterns and designs that shimmer in the light, giving the crown its distinctive, ethereal beauty. The combination of fine materials and expert craftsmanship makes each phoenix crown a unique work of art, cherished for both its beauty and its cultural significance.

The Phoenix Crown in Modern Times

While the traditional phoenix crown is no longer worn as part of everyday attire, its legacy lives on in modern fashion and art. Designers and artists continue to draw inspiration from the phoenix crown, incorporating its motifs into contemporary jewelry, clothing, and even home decor. This blend of ancient symbolism and modern aesthetics allows the phoenix crown to maintain its relevance and appeal in today's world.

For those interested in Chinese culture and history, the phoenix crown represents a connection to the past, a way to experience the elegance and sophistication of ancient China. It also serves as a reminder of the enduring values of virtue, grace, and harmony that the phoenix embodies.

Bringing the Phoenix Crown into Your Home

If you're captivated by the beauty and significance of the Chinese phoenix crown, you can bring a piece of this rich cultural heritage into your own home with the OrientCrafted brand's Phoenix Crown 3D Metal Puzzle. This stunning puzzle is a perfect way to engage with the art and craftsmanship of the phoenix crown in a hands-on and rewarding way. As you carefully assemble each piece, you'll gain a deeper appreciation for the intricate details and the history behind this iconic symbol.

Not only does this 3D puzzle offer a challenging and enjoyable experience, but it also serves as a beautiful display piece, adding a touch of cultural elegance to any room. Whether you're a history enthusiast, a puzzle lover, or simply someone who appreciates fine craftsmanship, the OrientCrafted Phoenix Crown 3D Metal Puzzle is an excellent choice.
264
quantum mechanics - Derivative of eigenvalues with respect to parameter - Physics Stack Exchange
===============

Derivative of eigenvalues with respect to parameter

Asked 4 years, 1 month ago. Modified 2 months ago. Viewed 1k times.

By trying to find precise ways to calculate the derivative of numerical Hermitian matrices, I've recently stumbled upon this post on Math Stack Exchange. From the first answer on that post we get an expression for the derivative of the eigenvalues with respect to the matrix entries. From that we get that for a Hermitian matrix $H$ parametrized by a real quantity $\varphi$, with eigenvalues $E_p(\varphi)$ and $U$ the unitary matrix that diagonalizes it, the following identity holds:
$$\frac{\partial E_p}{\partial \varphi} = \sum_{ij} \frac{\partial E_p}{\partial H_{ij}} \frac{\partial H_{ij}}{\partial \varphi} = \left[ U \frac{\partial H}{\partial \varphi} U^\dagger \right]_{pp}.$$
I was not able to find this information anywhere else, and the references given for the original expression in that post use somewhat more complicated math that I find hard to follow. My main question regarding it is: Does this expression generalize to higher-order derivatives? That is, does the following expression hold?
$$\frac{\partial^n E_p}{\partial \varphi^n} \stackrel{?}{=} \left[ U \frac{\partial^n H}{\partial \varphi^n} U^\dagger \right]_{pp}$$

Tags: quantum-mechanics, schroedinger-equation, perturbation-theory, eigenvalue, matrix-elements

asked Jul 2, 2021 at 1:20 by Lucas Baldo; edited Jun 8 at 3:51 by Qmechanic.
Comments:

1. I didn't work it out for your case but the Matrix Cookbook has some relevant formulae: math.uwaterloo.ca/~hwolkowi/matrixcookbook.pdf – Brick (Jul 4, 2021)
2. @Brick Thanks, that's a nice resource I didn't know about. If I find an answer to my question I'll be sure to send it to the authors of that document. – Lucas Baldo (Jul 5, 2021)
3. Related: physics.stackexchange.com/q/717102/2451 – Qmechanic (Jun 8)

2 Answers

Answer 1 (mike stone, answered Jul 5, 2021, score 2; edited Jun 7 by hft):

This is standard Rayleigh-Schrödinger perturbation theory, but described a bit differently from how it is usually done in quantum mechanics books. Here (assuming a non-degenerate spectrum) is the quickest route to the usual treatment:

Suppose that we have an eigenvector $|n\rangle$ with eigenvalue $E_n$:
$$H|n\rangle = E_n |n\rangle.$$
Then differentiate with respect to the parameter to get
$$dH\,|n\rangle + H\,d|n\rangle = dE_n\,|n\rangle + E_n\,d|n\rangle.$$
Find matrix elements by applying $\langle m|$ to this to get
$$\langle m|dH|n\rangle + \langle m|H\,d|n\rangle = dE_n \langle m|n\rangle + E_n \langle m|d|n\rangle.$$
First, take $m = n$, so
$$\langle n|dH|n\rangle + \langle n|H\,d|n\rangle = dE_n \langle n|n\rangle + E_n \langle n|d|n\rangle,$$
or, using $\langle n|H = E_n \langle n|$,
$$\langle n|dH|n\rangle + E_n \langle n|d|n\rangle = dE_n \langle n|n\rangle + E_n \langle n|d|n\rangle,$$
so
$$\langle n|dH|n\rangle = dE_n \langle n|n\rangle.$$
This is your (and Feynman-Hellmann's) equation.

Now let $m \neq n$ and assume that $d|n\rangle$ is orthogonal to $|n\rangle$ (this does not preserve normalization, but that will not matter). Now we get
$$\langle m|dH|n\rangle + E_m \langle m|d|n\rangle = E_n \langle m|d|n\rangle,$$
so
$$\langle m|d|n\rangle = \frac{\langle m|dH|n\rangle}{E_n - E_m}.$$
Now you know the derivative $dE_n \equiv dE_n/d\lambda$ of the eigenvalue and $d|n\rangle \equiv \frac{d}{d\lambda}|n\rangle$ of the eigenvector. You can use these formulas to compute the second derivative of $E_n$ and so on. The answers rapidly get much more complicated (see the section of the cited Wikipedia article called "Second-order and higher-order corrections").
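A quick numerical check of the Hellmann-Feynman identity discussed above; this sketch is an editorial addition, not part of the thread. It builds a parametrized Hermitian matrix, diagonalizes it with NumPy, and compares $[U\,\partial_\varphi H\,U^\dagger]_{pp}$ against a central finite difference of the eigenvalues. Since `np.linalg.eigh` returns eigenvalues in ascending order, the comparison implicitly assumes no level crossings near the chosen point.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    """Random Hermitian matrix."""
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

n = 6
A, B = rand_herm(n), rand_herm(n)
H = lambda phi: A + phi * B        # H(phi); dH/dphi = B

phi0, h = 0.3, 1e-5
w, V = np.linalg.eigh(H(phi0))     # columns of V are eigenvectors
U = V.conj().T                     # so U H U^dagger = diag(w)

dE_formula = np.real(np.diag(U @ B @ U.conj().T))      # [U dH U^dag]_pp
dE_numeric = (np.linalg.eigvalsh(H(phi0 + h))
              - np.linalg.eigvalsh(H(phi0 - h))) / (2 * h)

print(np.max(np.abs(dE_formula - dE_numeric)))  # prints a tiny number: the identity checks out
```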
Comments on this answer:

1. Interesting. I hadn't thought of connecting it to perturbation theory because it's not usually written with derivatives. The "Second-order and higher-order corrections" section you mentioned, for example, uses no derivatives. Only under the Hellmann-Feynman theorems did I find expressions using derivatives. Is my first equation equivalent to the first of these theorems? – Lucas Baldo (Jul 5, 2021)
2. @Lucas Baldo. Yes, your formula is just Feynman-Hellmann. The series in the Wiki is just their rather tedious way of writing the Taylor expansion without using derivatives. Keep applying my formulae and you will get their expressions, although they might look different because of some cancellations in the sums over intermediate states. – mike stone (Jul 5, 2021)
3. Thanks! Also, as far as I know, in usual perturbation theory one thinks of a parameter $\lambda$ that is taken to be small and a perturbing Hamiltonian $V$, such that $H = H_0 + \lambda V$. These elements seem to be missing in the Hellmann-Feynman formulation, so I don't see the exact connection between the two frameworks. In other words, I don't see what we are perturbing in the Hellmann-Feynman description and how we are perturbing it. – Lucas Baldo (Jul 5, 2021)
4. In this context Feynman-Hellmann sets $dH/d\lambda = V$ and all higher derivatives of $H$ are zero. – mike stone (Jul 6, 2021)
5. Why do you think that? Using $\lambda V$ is just a special simple case of the general F-H theorem. – mike stone (Jul 6, 2021)

Answer 2 (Lucas Baldo, answered Jul 5, 2021, score 1; edited Jun 7):

During the last days I have found out that my initial guess was incomplete and I'm posting here to share what I have found. The terms with derivatives of $U$ can contribute non-trivially to the expression for derivatives of order higher than one. I have calculated an expression for the second-order case. The derivation is too long for this post but the final result is
$$\frac{\partial^2 \epsilon_p}{\partial \varphi^2} = \left( U \frac{\partial^2 H}{\partial \varphi^2} U^\dagger \right)_{pp} + 2 \left( U \frac{\partial H}{\partial \varphi} U^\dagger A \right)_{pp},$$
where $A$ is defined by
$$A_{ab} = \begin{cases} 0, & a = b, \\ (\epsilon_b - \epsilon_a)^{-1} \left( U \frac{\partial H}{\partial \varphi} U^\dagger \right)_{ab}, & a \neq b. \end{cases}$$
I'm still looking for possible errors in the derivation and I have no idea if this method will extend to higher-order derivatives. Feedback on both is welcome.
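The second-order expression in this answer can be checked the same way (again an editorial sketch, not the poster's code), assuming a non-degenerate spectrum and no level crossings near $\varphi_0$:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

n = 6
A0, B, C = rand_herm(n), rand_herm(n), rand_herm(n)
H = lambda phi: A0 + phi * B + phi**2 * C   # dH/dphi = B + 2*phi*C, d2H/dphi2 = 2C

phi0, h = 0.3, 1e-3
w, V = np.linalg.eigh(H(phi0))
U = V.conj().T                               # U H U^dagger = diag(w)

M = U @ (B + 2 * phi0 * C) @ U.conj().T      # (U dH U^dag)
gap = w[None, :] - w[:, None]                # gap[a, b] = eps_b - eps_a
np.fill_diagonal(gap, np.inf)                # enforces A_ab = 0 on the diagonal
Amat = M / gap                               # A_ab = (eps_b - eps_a)^{-1} M_ab

d2E_formula = np.real(np.diag(U @ (2 * C) @ U.conj().T + 2 * M @ Amat))

wp = np.linalg.eigvalsh(H(phi0 + h))
wm = np.linalg.eigvalsh(H(phi0 - h))
d2E_numeric = (wp - 2 * w + wm) / h**2       # second central finite difference

print(np.max(np.abs(d2E_formula - d2E_numeric)))  # small (~1e-6): the formula checks out
```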
265
Published Time: 2017-01-06 01:38:04

Highest power of 2 less than or equal to given number - GeeksforGeeks
===============

Highest power of 2 less than or equal to given number

Last Updated : 23 Jul, 2025

Given a number n, find the highest power of 2 that is smaller than or equal to n.

Examples :

Input : n = 10
Output : 8

Input : n = 19
Output : 16

Input : n = 32
Output : 32

A simple solution is to start checking from n and keep decrementing until we find a power of 2.

C++

```cpp
// C++ program to find highest power of 2 smaller
// than or equal to n.
#include <bits/stdc++.h>
using namespace std;

int highestPowerof2(int n)
{
    int res = 0;
    for (int i = n; i >= 1; i--) {
        // If i is a power of 2
        if ((i & (i - 1)) == 0) {
            res = i;
            break;
        }
    }
    return res;
}

// Driver code
int main()
{
    int n = 10;
    cout << highestPowerof2(n);
    return 0;
}
// This code is contributed by Sania Kumari Gupta (kriSania804)
```

C

```c
// C program to find highest power of 2 smaller
// than or equal to n.
#include <stdio.h>

int highestPowerof2(int n)
{
    int res = 0;
    for (int i = n; i >= 1; i--) {
        // If i is a power of 2
        if ((i & (i - 1)) == 0) {
            res = i;
            break;
        }
    }
    return res;
}

// Driver code
int main()
{
    int n = 10;
    printf("%d", highestPowerof2(n));
    return 0;
}
// This code is contributed by Sania Kumari Gupta (kriSania804)
```

Java

```java
// Java program to find highest power of
// 2 smaller than or equal to n.
class GFG {
    static int highestPowerof2(int n)
    {
        int res = 0;
        for (int i = n; i >= 1; i--) {
            // If i is a power of 2
            if ((i & (i - 1)) == 0) {
                res = i;
                break;
            }
        }
        return res;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 10;
        System.out.print(highestPowerof2(n));
    }
}
// This code is contributed by 29AjayKumar
```

Python3

```python
# Python3 program to find highest power of 2
# smaller than or equal to n.
def highestPowerof2(n):
    res = 0
    for i in range(n, 0, -1):
        # If i is a power of 2
        if (i & (i - 1)) == 0:
            res = i
            break
    return res

# Driver code
n = 10
print(highestPowerof2(n))
# This code is contributed by mits
```

C#

```csharp
// C# code to find highest power
// of 2 smaller than or equal to n.
using System;

class GFG {
    public static int highestPowerof2(int n)
    {
        int res = 0;
        for (int i = n; i >= 1; i--) {
            // If i is a power of 2
            if ((i & (i - 1)) == 0) {
                res = i;
                break;
            }
        }
        return res;
    }

    // Driver Code
    static public void Main()
    {
        int n = 10;
        Console.WriteLine(highestPowerof2(n));
    }
}
// This code is contributed by ajit
```

PHP

```php
<?php
// PHP program to find highest
// power of 2 smaller than or
// equal to n.
function highestPowerof2($n)
{
    $res = 0;
    for ($i = $n; $i >= 1; $i--) {
        // If i is a power of 2
        if (($i & ($i - 1)) == 0) {
            $res = $i;
            break;
        }
    }
    return $res;
}

// Driver code
$n = 10;
echo highestPowerof2($n);
// This code is contributed by m_kit
?>
```

JavaScript

```javascript
// JavaScript program to find highest power
// of 2 smaller than or equal to n.
function highestPowerof2(n)
{
    let res = 0;
    for (let i = n; i >= 1; i--) {
        // If i is a power of 2
        if ((i & (i - 1)) == 0) {
            res = i;
            break;
        }
    }
    return res;
}

// Driver code
let n = 10;
document.write(highestPowerof2(n));
```

Output: 8

Time complexity: O(n). In the worst case the loop runs floor(n/2) times; the worst case happens when n is of the form 2^x - 1.
Auxiliary space: O(1), since only constant space is used for variables.

An efficient solution is to use the bitwise left shift operator to generate all powers of 2 starting from 1. For every power, check whether it is smaller than or equal to n. Below is the implementation of the idea.

C++

```cpp
// C++ program to find highest power of 2 smaller
// than or equal to n.
#include <bits/stdc++.h>
using namespace std;

int highestPowerof2(unsigned int n)
{
    // Invalid input
    if (n < 1)
        return 0;

    int res = 1;

    // Try all powers starting from 2^1
    for (int i = 0; i < 8 * sizeof(unsigned int); i++) {
        int curr = 1 << i;

        // If current power is more than n, break
        if (curr > n)
            break;

        res = curr;
    }
    return res;
}

// Driver code
int main()
{
    int n = 10;
    cout << highestPowerof2(n);
    return 0;
}
// This code is contributed by Sania Kumari Gupta
```

C

```c
// C program to find highest power of 2 smaller
// than or equal to n.
#include <stdio.h>

int highestPowerof2(unsigned int n)
{
    // Invalid input
    if (n < 1)
        return 0;

    int res = 1;

    // Try all powers starting from 2^1
    for (int i = 0; i < 8 * sizeof(unsigned int); i++) {
        int curr = 1 << i;

        // If current power is more than n, break
        if (curr > n)
            break;

        res = curr;
    }
    return res;
}

// Driver code
int main()
{
    int n = 10;
    printf("%d", highestPowerof2(n));
    return 0;
}
// This code is contributed by Sania Kumari Gupta
```

Java

```java
// Java program to find
// highest power of 2 smaller
// than or equal to n.
import java.io.*;

class GFG {
    static int highestPowerof2(int n)
    {
        // Invalid input
        if (n < 1)
            return 0;

        int res = 1;

        // Try all powers starting from 2^1
        for (int i = 0; i < 8 * Integer.BYTES; i++) {
            int curr = 1 << i;

            // If current power is more than n, break
            if (curr > n)
                break;

            res = curr;
        }
        return res;
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 10;
        System.out.println(highestPowerof2(n));
    }
}
// This code is contributed by aj_36
```

Python3

```python
# Python3 program to find highest power of 2
# smaller than or equal to n.
import sys

def highestPowerof2(n):
    # Invalid input
    if n < 1:
        return 0

    res = 1

    # Try all powers starting from 2^1
    for i in range(8 * sys.getsizeof(n)):
        curr = 1 << i

        # If current power is more than n, break
        if curr > n:
            break

        res = curr
    return res

# Driver code
if __name__ == "__main__":
    n = 10
    print(highestPowerof2(n))
```

C#

```csharp
// C# program to find
// highest power of 2 smaller
// than or equal to n.
using System;

class GFG {
    static int highestPowerof2(int n)
    {
        // Invalid input
        if (n < 1)
            return 0;

        int res = 1;

        // Try all powers starting from 2^1
        for (int i = 0; i < 8 * sizeof(uint); i++) {
            int curr = 1 << i;

            // If current power is more than n, break
            if (curr > n)
                break;

            res = curr;
        }
        return res;
    }

    // Driver code
    static public void Main()
    {
        int n = 10;
        Console.WriteLine(highestPowerof2(n));
    }
}
// This code is contributed by ajit
```

PHP

```php
<?php
// PHP program to find highest
// power of 2 smaller than or
// equal to n.
function highestPowerof2($n)
{
    // Invalid input
    if ($n < 1)
        return 0;

    $res = 1;

    // Try all powers starting from 2^1
    for ($i = 0; $i < 8 * PHP_INT_SIZE; $i++) {
        $curr = 1 << $i;

        // If current power is more than n, break
        if ($curr > $n)
            break;

        $res = $curr;
    }
    return $res;
}

// Driver code
$n = 10;
echo highestPowerof2($n);
// This code is contributed by m_kit
?>
```

JavaScript

```javascript
// JavaScript program to find highest power
// of 2 smaller than or equal to n.
function highestPowerof2(n)
{
    // Invalid input
    if (n < 1)
        return 0;

    let res = 1;

    // Try all powers starting from 2^1
    // (bitwise operands are 32-bit in JavaScript)
    for (let i = 0; i < 31; i++) {
        let curr = 1 << i;

        // If current power is more than n, break
        if (curr > n)
            break;

        res = curr;
    }
    return res;
}

// Driver code
let n = 10;
document.write(highestPowerof2(n));
```

Output: 8

Time complexity: O(32).
Auxiliary space: O(1).

A solution using log(n). Thanks to Anshuman Jha for suggesting this solution.

C++

```cpp
// C++ program to find highest power of 2 smaller
// than or equal to n.
#include <bits/stdc++.h>
using namespace std;

int highestPowerof2(int n)
{
    int p = (int)log2(n);
    return (int)pow(2, p);
}

// Driver code
int main()
{
    int n = 10;
    cout << highestPowerof2(n);
    return 0;
}
```

Java

```java
// Java program to find
// highest power of 2
// smaller than or equal to n.
import java.io.*;

class GFG {
    static int highestPowerof2(int n)
    {
        int p = (int)(Math.log(n) / Math.log(2));
        return (int)Math.pow(2, p);
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 10;
        System.out.println(highestPowerof2(n));
    }
}
// This code is contributed by m_kit
```

Python3

```python
# Python3 program to find highest power of 2
# smaller than or equal to n.
import math

def highestPowerof2(n):
    p = int(math.log(n, 2))
    return int(pow(2, p))

# Driver code
n = 10
print(highestPowerof2(n))
# This code is contributed by mits
```

C#

```csharp
// C# program to find
// highest power of 2
// smaller than or equal to n.
using System;

class GFG {
    static int highestPowerof2(int n)
    {
        int p = (int)(Math.Log(n) / Math.Log(2));
        return (int)Math.Pow(2, p);
    }

    // Driver code
    static public void Main()
    {
        int n = 10;
        Console.WriteLine(highestPowerof2(n));
    }
}
// This code is contributed by ajit
```

PHP

```php
<?php
// PHP program to find highest
// power of 2 smaller than or
// equal to n.
function highestPowerof2($n)
{
    $p = (int)log($n, 2);
    return (int)pow(2, $p);
}

// Driver code
$n = 10;
echo highestPowerof2($n);
// This code is contributed by ajit
?>
```

JavaScript

```javascript
// JavaScript program to find
// highest power of 2
// smaller than or equal to n.
function highestPowerof2(n)
{
    let p = parseInt(Math.log(n) / Math.log(2), 10);
    return Math.pow(2, p);
}

let n = 10;
document.write(highestPowerof2(n));
// This code is contributed by divyeshrabadiya07.
```

Output: 8

Time complexity: O(log n).
Auxiliary space: O(1).

Solution using bitmasks:

C++

```cpp
// C++ program to find highest power of 2 smaller
// than or equal to n.
#include <bits/stdc++.h>
using namespace std;

unsigned highestPowerof2(unsigned x)
{
    // Set all bits below the highest set bit
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;

    // Then we remove all but the top bit by xor'ing the
    // string of 1's with that string of 1's shifted one to
    // the right, and we end up with just the one top bit
    // followed by 0's.
    return x ^ (x >> 1);
}

int main()
{
    int n = 10;
    cout << highestPowerof2(n) << "\n";
    return 0;
}
// This code is contributed by Rudrakshi.
```

Java

```java
// Java program to find highest power of 2 smaller
// than or equal to n.
import java.io.*;

class GFG {
    static int highestPowerof2(int x)
    {
        // Set all bits below the highest set bit
        x |= x >> 1;
        x |= x >> 2;
        x |= x >> 4;
        x |= x >> 8;
        x |= x >> 16;

        // Then we remove all but the top bit by xor'ing the
        // string of 1's with that string of 1's shifted one
        // to the right, and we end up with just the one top
        // bit followed by 0's.
        return x ^ (x >> 1);
    }

    // Driver code
    public static void main(String[] args)
    {
        int n = 10;
        System.out.println(highestPowerof2(n));
    }
}
// This code is contributed by avanitrachhadiya2155
```

Python3

```python
# Python3 program to find highest power of 2
# smaller than or equal to n.
def highestPowerof2(x):
    # Set all bits below the highest set bit
    x |= x >> 1
    x |= x >> 2
    x |= x >> 4
    x |= x >> 8
    x |= x >> 16

    # Then we remove all but the top bit by xor'ing the
    # string of 1's with that string of 1's shifted one to
    # the right, and we end up with just the one top bit
    # followed by 0's.
    return x ^ (x >> 1)

n = 10
print(highestPowerof2(n))
# This code is contributed by divyesh072019.
```

C#

```csharp
// C# program to find highest power of 2 smaller
// than or equal to n.
using System;

public class GFG {
    static int highestPowerof2(int x)
    {
        // Set all bits below the highest set bit
        x |= x >> 1;
        x |= x >> 2;
        x |= x >> 4;
        x |= x >> 8;
        x |= x >> 16;

        // Then we remove all but the top bit by xor'ing the
        // string of 1's with that string of 1's shifted one
        // to the right, and we end up with just the one top
        // bit followed by 0's.
        return x ^ (x >> 1);
    }

    // Driver code
    public static void Main(String[] args)
    {
        int n = 10;
        Console.WriteLine(highestPowerof2(n));
    }
}
// This code is contributed by umadevi9616
```

JavaScript

```javascript
// JavaScript program to find highest power of 2 smaller
// than or equal to n.
function highestPowerof2(x)
{
    // Set all bits below the highest set bit
    x |= x >> 1;
    x |= x >> 2;
    x |= x >> 4;
    x |= x >> 8;
    x |= x >> 16;

    // Then we remove all but the top bit by xor'ing the
    // string of 1's with that string of 1's shifted one to
    // the right, and we end up with just the one top bit
    // followed by 0's.
    return x ^ (x >> 1);
}

let n = 10;
document.write(highestPowerof2(n));
// This code is contributed by rag2127
```

Output: 8

Time complexity: O(1).
Auxiliary space: O(1), since only constant space is used for variables.

A solution using the MSB. If the given number is a power of two then it is the required number; otherwise set only the most significant bit, which gives us the required number.

C++

```cpp
// C++ program to find
// highest power of 2
// smaller than or equal to n
#include <bits/stdc++.h>
using namespace std;

long long highestPowerof2(long long N)
{
    // if N is a power of two simply return it
    if (!(N & (N - 1)))
        return N;

    // else set only the most significant bit
    return 0x8000000000000000 >> (__builtin_clzll(N));
}

// Driver Code
int main()
{
    long long n = 5;
    cout << highestPowerof2(n);
    return 0;
}
// This code is contributed by phasing17
```

Java

```java
// Java program to find
// highest power of 2
// smaller than or equal to n
import java.util.*;

class GFG {
    static int highestPowerof2(int N)
    {
        // if N is a power of two simply return it
        if ((N & (N - 1)) == 0)
            return N;

        // else set only the most significant bit
        return (1 << (Integer.toBinaryString(N).length() - 1));
    }

    // Driver Code
    public static void main(String[] args)
    {
        int n = 5;
        System.out.println(highestPowerof2(n));
    }
}
// This code is contributed by phasing17
```

Python3

```python
# Python3 program to find
# highest power of 2
# smaller than or equal to n
def highestPowerof2(N):
    # if N is a power of two simply return it
    if not (N & (N - 1)):
        return N

    # else set only the most significant bit
    return 0x8000000000000000 >> (64 - N.bit_length())

# Driver Code
n = 5
print(highestPowerof2(n))
# This code is contributed by phasing17
```

C#

```csharp
// C# program to find
// highest power of 2
// smaller than or equal to n
using System;
using System.Collections.Generic;

class GFG {
    static int highestPowerof2(int N)
    {
        // if N is a power of two simply return it
        if ((N & (N - 1)) == 0)
            return N;

        // else set only the most significant bit
        return (1 << ((Convert.ToString(N, 2).Length) - 1));
    }

    // Driver Code
    public static void Main(string[] args)
    {
        int n = 5;
        Console.WriteLine(highestPowerof2(n));
    }
}
// This code is contributed by phasing17
```

JavaScript

```javascript
// JavaScript program to find
// highest power of 2
// smaller than or equal to n
function highestPowerof2(N)
{
    // if N is a power of two simply return it
    if (!(N & (N - 1)))
        return N;

    // else set only the most significant bit
    return 1 << ((N.toString(2)).length - 1);
}

// Driver Code
let n = 5;
console.log(highestPowerof2(n));
// This code is contributed by phasing17
```

Output: 4

Time complexity: O(1), as counting leading zeroes takes at most O(64) operations.
Auxiliary space: O(1).

Application Problem:

Some people are standing in a queue. A selection process follows a rule where people standing on even positions are selected. Of the selected people a queue is formed, and again only the people on even positions are selected. This continues until we are left with one person. Find out the position of that person in the original queue and print it; this position is exactly the highest power of 2 that is less than or equal to n (see the short simulation after the examples).

Examples :

Input : n = 10
Output : 8
Explanation :
1 2 3 4 5 6 7 8 9 10 ===> Given queue
2 4 6 8 10
4 8
8

Input : n = 17
Output : 16
Explanation :
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 ===> Given queue
2 4 6 8 10 12 14 16
4 8 12 16
8 16
16

Related Article : Power of 2 greater than or equal to a given number.

Article Tags : Bit Magic, Mathematical, Recursion, DSA, Amazon
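A quick way to see this connection (an illustrative sketch added here, not part of the original article) is to simulate the selection process directly and compare the survivor with the highest power of 2, computed via Python's int.bit_length:

```python
# Simulate the selection process and compare the survivor with the
# highest power of 2 <= n (here computed with Python's int.bit_length).
def survivor(n):
    queue = list(range(1, n + 1))
    while len(queue) > 1:
        queue = queue[1::2]   # keep the people standing at even positions
    return queue[0]

for n in range(1, 100):
    assert survivor(n) == 1 << (n.bit_length() - 1)
print("survivor(n) equals the highest power of 2 <= n for n = 1..99")
```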
266
Published Time: Sun, 22 Jan 2023 20:43:00 GMT

Minimizing the number of 5-cycles in graphs with given edge-density

Patrick Bennett∗, Andrzej Dudek†, Bernard Lidický‡, Oleg Pikhurko§

June 6, 2019

arXiv:1803.00165v3 [math.CO] 5 Jun 2019

Abstract

Motivated by the work of Razborov about the minimal density of triangles in graphs we study the minimal density of the 5-cycle $C_5$. We show that every graph of order $n$ and size $(1 - \frac{1}{k})\binom{n}{2}$, where $k \geq 3$ is an integer, contains at least
$$\left( \frac{1}{10} - \frac{1}{2k} + \frac{1}{k^2} - \frac{1}{k^3} + \frac{2}{5k^4} \right) n^5 + o(n^5)$$
copies of $C_5$. This bound is optimal, since a matching upper bound is given by the balanced complete $k$-partite graph. The proof is based on the flag algebras framework. We also provide a stability result. An SDP solver is not necessary to verify our proofs.

∗ Department of Mathematics, Western Michigan University, Kalamazoo, MI, USA. E-mail: [email protected]. Supported in part by Simons Foundation Grant #426894.
† Department of Mathematics, Western Michigan University, Kalamazoo, MI, USA. E-mail: [email protected]. Supported in part by Simons Foundation Grant #522400.
‡ Department of Mathematics, Iowa State University, Ames, IA, USA. E-mail: [email protected]. Supported in part by NSF grant DMS-1600390.
§ Mathematics Institute and DIMAP, University of Warwick, Coventry CV4 7AL, UK. E-mail: [email protected]. Supported in part by ERC grant 306493.

1 Introduction

It is believed that extremal graph theory was started by Turán when he proved that any graph on $n$ vertices with more than $\frac{r-2}{2(r-1)} n^2$ edges must contain a copy of $K_r$ (i.e. a clique with $r$ vertices). The case $r = 3$ was earlier proved by Mantel. The general Turán problem is to determine the minimum number $\mathrm{ex}(n, H)$ of edges in an $n$-vertex graph that guarantees a copy of a graph $H$, and has been very widely studied. The Erdős–Stone Theorem was a major breakthrough which asymptotically determined the value of $\mathrm{ex}(n, H)$ for all nonbipartite $H$. For such $H$ we have
$$\mathrm{ex}(n, H) = \frac{\chi(H) - 2}{2(\chi(H) - 1)}\, n^2 + o(n^2).$$
The natural quantitative question that arises is how many copies of $H$ must be contained in a graph $G$ on $n$ vertices with $m > \mathrm{ex}(n, H)$ edges. This question has also been well studied. Obviously the number of edges $m$ can be expressed via a density parameter $p$ such that $m = p\binom{n}{2}$. Therefore, we will use the following notation. Let $G$ be a (large) graph of order $n$ and $H$ a small one. Define $\nu_H(G)$ to be the number of unlabeled copies (not necessarily induced) of $H$ in $G$ and the corresponding density as
$$d_H(G) = \frac{\nu_H(G)}{|V(G)|^{|V(H)|}}.$$
Furthermore, for a given number $p \in [0, 1]$ let
$$d_H(p) = \lim_{n \to \infty} \min_G d_H(G),$$
where the minimum is taken over all graphs $G$ of order $n$ and size $(p + o(1))\binom{n}{2}$. It is not hard to show by double-counting that the limit exists, see e.g. [23, Lemma 2.2]. When $H = K_3$ (that means it is a triangle) Moon and Moser and also independently Nordhaus and Stewart determined $d_{K_3}(p)$ for any $p = 1 - \frac{1}{k}$, where $k$ is a positive integer. We call such $p = 1 - \frac{1}{k}$ a Turán density. Some other partial results for the general $r$-clique $H = K_r$ were established by Lovász and Simonovits. However, for arbitrary $p$ these problems remained open for over 50 years. Then Razborov in his seminal paper introduced the so-called flag algebras and, using them, determined $d_{K_3}(p)$ for any $p$. Subsequently, Pikhurko and Razborov characterized all almost extremal graphs.
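To make the notation concrete (an editorial illustration, not part of the paper): for $H = K_3$, the quantities $\nu_H(G)$ and $d_H(G)$ can be computed directly for a small graph.

```python
from itertools import combinations

def nu_triangle(edges, n):
    """nu_H(G) for H = K_3: number of unlabeled triangle copies in G."""
    E = set(map(frozenset, edges))
    return sum(
        1
        for a, b, c in combinations(range(n), 3)
        if {frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= E
    )

# K_4 has 4 triangles; with |V(G)|^{|V(H)|} = 4^3 = 64, d_H(G) = 4/64.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(nu_triangle(edges, 4), nu_triangle(edges, 4) / 4**3)
```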
Very recently, Liu, Pikhurko and Staden [12] found the precise minimum number of triangles among graphs with a given number of edges in almost all of the range. Nikiforov [19] determined $d_{K_4}(p)$ for all $p$, and then Reiher [27] determined $d_{K_r}(p)$ for all $r$ and $p$.

In this paper we address the minimum density of the 5-cycle, $C_5$, in a graph with given edge density. We chose to investigate $C_5$ instead of $C_4$ since it is known, due to Sidorenko [28], that for any fixed constant edge density $p$ the minimum $C_4$-density is achieved asymptotically by the random graph $G_{n,p}$.

It is worth mentioning some other research related to 5-cycles. Specifically, Grzesik [8] and independently Hatami, Hladký, Král', Norine and Razborov [9] proved that the maximum density of 5-cycles in a triangle-free graph whose order is large or a power of 5 is achieved by the balanced blow-up of a 5-cycle. The extension to graphs of all sizes, with one exception on 8 vertices, was done by Lidický and Pfender [10]. This settled in the affirmative a conjecture of Erdős [5]. On the other hand, Balogh, Hu, Lidický, and Pfender [2] studied the problem of maximizing induced 5-cycles, and proved that this is achieved by the balanced iterated blow-up of a 5-cycle. This confirmed a special case of a conjecture of Pippenger and Golumbic [24].

The main result of this paper is as follows.

Theorem 1. Let $k \ge 3$ be an integer. Define
$$p = 1 - \frac{1}{k} \quad \text{and} \quad \lambda = \frac{1}{10} - \frac{1}{2k} + \frac{1}{k^2} - \frac{1}{k^3} + \frac{2}{5k^4}. \qquad (1)$$
Then $d_{C_5}(p) = \lambda$.

We also have the following stability result. Let the Turán graph $T_k^n$ be the complete $k$-partite graph on $n$ vertices with part sizes as equal as possible.

Theorem 2. For every integer $k \ge 3$ and real $\delta > 0$ there is $\varepsilon > 0$ such that every graph $G$ with $n > 1/\varepsilon$ vertices, at least $(p - \varepsilon)\binom{n}{2}$ edges and at most $(\lambda + \varepsilon)n^5$ copies of $C_5$ is within edit distance $\delta n^2$ from the Turán graph $T_k^n$, where $p$ and $\lambda$ are as in (1).

Observe that the above theorems (as stated) also hold in the case $k = 2$, for which $d_{C_5}(\tfrac12) = 0$. However, their validity in this case easily follows from known standard results.

Although the proofs of Theorems 1 and 2 are based on the flag algebras framework, their verification does not require using any SDP solver. Theorems 1 and 2 are proved in Sections 2 and 3, respectively. Finally, in Section 4, we discuss the general edge density and provide an upper bound on $d_{C_5}(p)$ for any $p \in [0, 1]$.

2 Proof of the main theorem

2.1 Upper bound

By considering the sequence of graphs $T_k^n$ as $n \to \infty$, we get
$$d_{C_5}(T_k^n) = \left[ \frac{1}{10}(k)_5 + \frac{1}{2}(k)_4 + \frac{1}{2}(k)_3 \right] \frac{(n/k)^5}{n^5} + o(1),$$
where $(k)_\ell = k(k-1)\cdots(k-\ell+1)$ is the falling factorial. To justify the numerator, we count the number of $C_5$ copies with vertices in parts $V_1, V_2, V_3, V_4, V_5$ of the partition. These parts may not all be distinct: for example we may have $V_1 = V_3$. However $T_k^n$ has no edges within the parts, and so we know $V_i \ne V_{i+1}$. We count copies of $C_5$ by grouping them according to how many distinct parts there are among $V_1, \ldots, V_5$. There are asymptotically $\frac{1}{10}(k)_5 (n/k)^5$ copies that hit 5 different parts (label 5 distinct parts, choose one vertex in each part, and divide by 10 for overcounting). Also, there are asymptotically $\frac{1}{2}(k)_4 (n/k)^5$ copies hitting 4 parts, and $\frac{1}{2}(k)_3 (n/k)^5$ copies hitting 3 parts. Simplifying, we get that $d_{C_5}(T_k^n) = \lambda + o(1)$, which implies the upper bound in Theorem 1.
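The simplification from the grouped counts to $\lambda$ is stated without intermediate algebra. The following short sympy sketch (our addition, not part of the paper) verifies the identity symbolically; it also checks an equivalent closed form, $\frac{(k-1)^5 - (k-1)}{10 k^5}$, obtained by counting proper cyclic colourings of $C_5$ with $k$ colours (the chromatic polynomial of $C_5$ evaluated at $k$).

    # Symbolic check that the Turan-graph C5-density equals lambda from (1).
    # A sketch using sympy; not part of the original paper.
    from sympy import symbols, simplify, Rational

    k = symbols('k', positive=True)
    lam = Rational(1, 10) - 1/(2*k) + 1/k**2 - 1/k**3 + Rational(2, 5)/k**4

    # falling factorials (k)_3, (k)_4, (k)_5 written out explicitly
    f3 = k*(k - 1)*(k - 2)
    f4 = f3*(k - 3)
    f5 = f4*(k - 4)

    # grouped count from Section 2.1: copies hitting 5, 4 or 3 distinct parts
    turan = (f5/10 + f4/2 + f3/2) / k**5
    assert simplify(turan - lam) == 0

    # equivalent closed form: proper cyclic k-colourings of C5, divided by 10*k^5
    assert simplify(((k - 1)**5 - (k - 1)) / (10*k**5) - lam) == 0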
2.2 Lower bound

2.2.1 Preliminaries

The proof of the lower bound in Theorem 1 relies on the celebrated flag algebra method introduced by Razborov [25]. Here we briefly discuss the main idea behind this approach, referring the reader to [25] for all details. Alternatively, our lower bound is rephrased at the beginning of Section 3 by means of a combinatorial identity (namely (12)) whose statement does not use any flag algebra formalism.

Let $(G_n)_{n \in \mathbb{N}}$ be a sequence of graphs such that the order of $G_n$ increases. Such a sequence is called convergent if for every fixed graph $H$ the density of $H$ in $G_n$ converges, i.e., for every $H$ there exists some number $\phi(H)$ such that
$$\lim_{n \to \infty} p(H, G_n) = \phi(H),$$
where $p(H, G)$ is the probability that $|H| = |V(H)|$ vertices chosen uniformly at random from $V(G)$ induce a copy of $H$. (Here, it will be more convenient to count induced copies of $H$; see e.g. Equations (5.19)–(5.21) in [13], which show how to switch between induced and non-induced versions.) Notice that any sequence of graphs whose orders increase has a convergent subsequence. Thus, without loss of generality we assume $G_n$ is convergent.

Note that $\phi$ cannot be an arbitrary function, since it must satisfy many obvious identities such as $\phi(\text{edge}) + \phi(\text{nonedge}) = 1$. Interestingly, these $\phi$ exactly correspond to homomorphisms that we now describe. Denote by $\mathcal{F}$ the set of all graphs and by $\mathcal{F}_\ell$ the set of graphs of order $\ell$, up to an isomorphism. Let $\mathbb{R}\mathcal{F}$ be the set of all finite formal linear combinations of graphs in $\mathcal{F}$ with real coefficients. It comes with the natural operations of addition and multiplication by a real number. Let $\mathcal{K}$ be the linear subspace generated by all linear combinations
$$F - \sum_{H \in \mathcal{F}_\ell} p(F, H) \cdot H, \qquad (2)$$
where $\ell \ge |F|$. Notice that $\phi$ evaluated at any element of $\mathcal{K}$ gives 0. Finally, let $\mathcal{A}$ be $\mathbb{R}\mathcal{F}$ factorized by $\mathcal{K}$. It is possible to define multiplication on $\mathcal{A}$, which we do in Section 2.2.3. It can be proved that $\mathcal{A}$ is indeed an algebra. Now limits of convergent graph sequences correspond to homomorphisms $\phi$ from $\mathcal{A}$ to $\mathbb{R}$ such that $\phi(F) \ge 0$ for all $F \in \mathcal{F}$. Denote the set of all such homomorphisms by $\mathrm{Hom}^+(\mathcal{A}, \mathbb{R})$.

Let $OPT$ be the following linear combination, which counts the $C_5$ copies using induced subgraphs: $OPT$ is the sum of the eight 5-vertex graphs that contain a $C_5$, each weighted by the number of copies of $C_5$ it contains; the weights are 1, 1, 1, 2, 2, 4, 6 and 12. (The original displays these eight graphs pictorially; the pictures are not reproduced in this copy.) Thus,
$$\phi(OPT) = 120 \lim_{n \to \infty} d_{C_5}(G_n). \qquad (3)$$
The factor $120 = 5!$ comes from the fact that $p(F, G_n)$ for $F \in \mathcal{F}_5$ is the number of copies of $F$ divided by $\binom{n}{5}$, whereas our scaling for $d_{C_5}$ was chosen as $n^{-5}$. Notice that $OPT$ can be written as a linear combination of all 34 graphs on 5 vertices, where 26 graphs have coefficient 0. Namely,
$$OPT = \sum_{F \in \mathcal{F}_5} c_F^{OPT} F, \qquad (4)$$
where the nonzero entries $c_F^{OPT}$ are as above.

Our goal is to prove a good lower bound on $\min_{\phi \in \mathrm{Hom}^+(\mathcal{A}, \mathbb{R})} \phi(OPT)$, given that the edge density is $p$, that is, we have
$$\phi(K_2) = p. \qquad (5)$$
For this we find a suitable $A \in \mathcal{A}$ such that $\phi(A) \ge 0$ for all $\phi \in \mathrm{Hom}^+(\mathcal{A}, \mathbb{R})$ with $\phi(K_2) = p$, and use it in calculations. In particular, we will use it as
$$\phi(OPT) \ge \phi(OPT) - \phi(A) = \phi(OPT - A) \ge c,$$
where $c$ is the smallest coefficient $c_F$ when we express $OPT - A$ as $\sum_{F \in \mathcal{F}_\ell} c_F F$. Note that $A$ may contain both positive and negative coefficients, and these coefficients combine with the coefficients in $OPT$.
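Relation (2) encodes the identity $p(F, G) = \sum_{H \in \mathcal{F}_\ell} p(F, H)\, p(H, G)$, which holds because a uniformly random $|F|$-subset of a uniformly random $\ell$-subset of $V(G)$ is itself a uniformly random $|F|$-subset of $V(G)$. The following brute-force Python sketch (ours, for illustration only) checks this numerically for $F = K_3$ and $\ell = 4$ on a random 9-vertex graph.

    # Numerical check of p(F, G) = sum_H p(F, H) * p(H, G) for F = K3, ell = 4.
    # Brute force over small graphs; a sketch, not from the paper.
    import itertools, random

    def canon(edges, n):
        # canonical form of a graph on vertex set {0,...,n-1}:
        # lexicographically smallest sorted edge list over all relabelings
        return min(tuple(sorted(tuple(sorted((p[u], p[v]))) for u, v in edges))
                   for p in itertools.permutations(range(n)))

    def p_density(F_edges, nF, G_edges, nG):
        # probability that nF uniform random vertices of G induce a copy of F
        target = canon(F_edges, nF)
        hits, total = 0, 0
        for S in itertools.combinations(range(nG), nF):
            idx = {v: i for i, v in enumerate(S)}
            sub = [(idx[u], idx[v]) for u, v in G_edges if u in idx and v in idx]
            hits += (canon(sub, nF) == target)
            total += 1
        return hits / total

    # one representative of each of the 11 isomorphism classes in F_4
    slots = list(itertools.combinations(range(4), 2))
    classes = {}
    for mask in range(2**6):
        edges = [slots[i] for i in range(6) if mask >> i & 1]
        classes.setdefault(canon(edges, 4), edges)

    random.seed(0)
    G = [e for e in itertools.combinations(range(9), 2) if random.random() < 0.5]
    K3 = [(0, 1), (0, 2), (1, 2)]

    lhs = p_density(K3, 3, G, 9)
    rhs = sum(p_density(K3, 3, H, 4) * p_density(H, 4, G, 9)
              for H in classes.values())
    assert abs(lhs - rhs) < 1e-9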
When $p = 1 - \frac{1}{k}$ for an integer $k \ge 3$, it is possible to prove the sharp lower bound as above by considering graphs of order 5 with only one labeled vertex. Similarly to defining the algebra $\mathcal{A}$ and limits of convergent graph sequences, one can define limits of sequences from the set $\mathcal{F}^1$, which consists of graphs with exactly one labeled vertex, up to a label-preserving isomorphism. This gives an algebra $\mathcal{A}^1$ and homomorphisms $\mathrm{Hom}^+(\mathcal{A}^1, \mathbb{R})$. In the following, we depict the labeled vertex by a square. Let $X$ be the following column vector:
$$X = (X_1, \ldots, X_6)^T, \qquad (6)$$
the vector of all graphs on 3 vertices with exactly one labeled vertex (depicted in the original as a yellow square; the six pictures are not reproduced in this copy). For isomorphism, the labeled vertex must be preserved but the remaining two vertices may be swapped.

If $M$ is a positive semidefinite matrix in $\mathbb{R}^{6 \times 6}$, then for every $\phi^1 \in \mathrm{Hom}^+(\mathcal{A}^1, \mathbb{R})$ it holds that
$$\phi^1(X^T M X) = \phi^1(X)^T M\, \phi^1(X) \ge 0,$$
where by $\phi^1(X)$ we mean the vector that results from applying $\phi^1$ to each coordinate of $X$. Also, there is a linear operator $[\![\,\cdot\,]\!]_1 : \mathbb{R}\mathcal{F}^1 \to \mathbb{R}\mathcal{F}$ (which, roughly speaking, "unlabels" each $F \in \mathcal{F}^1$) such that for all $\phi \in \mathrm{Hom}^+(\mathcal{A}, \mathbb{R})$ we have $\phi([\![X^T M X]\!]_1) \ge 0$. Furthermore, we have
$$[\![X^T M X]\!]_1 = \sum_{F \in \mathcal{F}_5} c_F^M \cdot F; \qquad (7)$$
see Section 2.2.3 for more details, in particular on how to calculate the coefficients $c_F^M$.

Also, the relation (2) for the cliques $K_2$ and $K_1$ gives that, respectively, $K_2 = \sum_{H \in \mathcal{F}_5} p(K_2, H) \cdot H$ and $1 = K_1 = \sum_{H \in \mathcal{F}_5} H$. Thus (5) can be written as an identity involving densities of 5-vertex graphs.

Next, we take the sum of equation (4), equation (5) multiplied by some $\alpha$, and $\phi([\![X^T M X]\!]_1) \ge 0$ expanded using (7), and obtain
$$\phi(OPT) \ge \phi(OPT) + \alpha\big(p - \phi(K_2)\big) - \phi([\![X^T M X]\!]_1) = \phi\Big( OPT + \alpha p - \alpha K_2 - [\![X^T M X]\!]_1 \Big) = \phi\Big( \sum_{F \in \mathcal{F}_5} \big(c_F^{OPT} + \alpha p - \alpha\, p(K_2, F) - c_F^M\big) \cdot F \Big).$$
(In Appendix A we provide $c_F^{OPT}$ and $p(K_2, F)$ for each $F \in \mathcal{F}_5$.) For $F \in \mathcal{F}_5$, define
$$c_F = c_F^{OPT} + \alpha p - \alpha\, p(K_2, F) - c_F^M. \qquad (8)$$
With this notation
$$\phi(OPT) \ge \phi\Big( \sum_{F \in \mathcal{F}_5} c_F \cdot F \Big) \ge \min_{F \in \mathcal{F}_5} c_F \cdot \phi\Big( \sum_{F \in \mathcal{F}_5} F \Big) = \min_{F \in \mathcal{F}_5} c_F, \qquad (9)$$
where $c_F$ is a number that depends on the choice of $M$ and $\alpha$. Let us transfer this back to our extremal graph problem:

Lemma 3. For every $p \in [0, 1]$, $M \succeq 0$ and $\alpha \in \mathbb{R}$, with $c_F = c_F(p, M, \alpha)$ as in (8), we have
$$d_{C_5}(p) \ge \frac{1}{120} \min_{F \in \mathcal{F}_5} c_F.$$

Proof. Suppose on the contrary that we can find an increasing sequence of graphs $G_n$ with edge density $p + o(1)$ such that $d_{C_5}(G_n)$ stays strictly below the stated bound. Take a convergent subsequence and let $\phi \in \mathrm{Hom}^+(\mathcal{A}, \mathbb{R})$ be its limit. It satisfies (5), so the bound in (9) applies to $\phi$. However, this contradicts (3).

2.2.2 Finding the optimum

Let an integer $k \ge 3$ be fixed. Let $p$ and $\lambda$ be as in (1). By Lemma 3, in order to finish the proof of Theorem 1, it is enough to present some $M \succeq 0$ and $\alpha \in \mathbb{R}$ with $c_F \ge 5!\,\lambda$ for every $F \in \mathcal{F}_5$. Let
$$\alpha = \frac{1}{k^3}\left(60k^3 - 240k^2 + 360k - 192\right).$$
In order to define the matrix $M$ we first define two matrices $A$ and $B$ as follows:
$$A = \begin{pmatrix} 32k^2 - 96k + 96 & 0 & 4k^2 - 16k \\ 0 & 10k^4 - 30k^3 - 8k^2 + 96k - 96 & -10k^4 + 35k^3 - 4k^2 - 80k + 96 \\ 4k^2 - 16k & -10k^4 + 35k^3 - 4k^2 - 80k + 96 & 10k^4 - 40k^3 + 24k^2 + 64k - 96 \end{pmatrix}$$
and
$$B = \begin{pmatrix} k-1 & 1 & k-2 & 0 & k-3 & -1 \\ 0 & 2 & k-2 & 0 & 2k-4 & -2 \\ 0 & 0 & k-1 & -1 & 2k-2 & -2 \end{pmatrix}.$$
It is easy to verify (by checking principal minors) that $A$ is positive definite for any $k \ge 3$. Therefore, the matrix
$$M = \frac{3}{2k^4}\, B^T A B \qquad (10)$$
is positive semidefinite. In Section 2.2.4 we briefly describe how we determined the matrices $A$ and $B$.
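As a sanity check on the reconstruction of $A$, $B$ and $M$ above, the following sympy sketch (ours, not part of the paper) builds the matrices and verifies, for a range of integers $k \ge 3$, that the leading principal minors of $A$ are positive, so that $M = \frac{3}{2k^4} B^T A B$ is indeed positive semidefinite.

    # Build A, B, M from Subsection 2.2.2 and check positive definiteness of A
    # for small integer k >= 3. A sketch using sympy; not from the paper.
    from sympy import symbols, Matrix, Rational

    k = symbols('k')
    A = Matrix([
        [32*k**2 - 96*k + 96, 0, 4*k**2 - 16*k],
        [0, 10*k**4 - 30*k**3 - 8*k**2 + 96*k - 96,
            -10*k**4 + 35*k**3 - 4*k**2 - 80*k + 96],
        [4*k**2 - 16*k, -10*k**4 + 35*k**3 - 4*k**2 - 80*k + 96,
            10*k**4 - 40*k**3 + 24*k**2 + 64*k - 96]])
    B = Matrix([
        [k - 1, 1, k - 2, 0, k - 3, -1],
        [0, 2, k - 2, 0, 2*k - 4, -2],
        [0, 0, k - 1, -1, 2*k - 2, -2]])
    M = Rational(3, 2)/k**4 * B.T * A * B          # equation (10)

    for kk in range(3, 25):
        Ak = A.subs(k, kk)
        # all three leading principal minors positive <=> A positive definite
        assert all(Ak[:m, :m].det() > 0 for m in (1, 2, 3)), kk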
With this choice of $M$ and $\alpha$, one can verify, using for example Maple (see Appendix B), that the coefficients $c_F$ satisfy the following. (In the original, each value is attached to pictures of the corresponding 5-vertex graphs; the pictures are not reproduced here, so we group the 34 graphs by the common value of $c_F$:)

for 18 of the graphs, $c_F = \frac{1}{5k^4}\left(60k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 4 graphs, $c_F = \frac{1}{5k^4}\left(66k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 1 graph, $c_F = \frac{1}{5k^4}\left(68k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 4 graphs, $c_F = \frac{1}{5k^4}\left(64k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 1 graph, $c_F = \frac{1}{5k^4}\left(65k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 4 graphs, $c_F = \frac{1}{5k^4}\left(62k^4 - 300k^3 + 600k^2 - 600k + 240\right)$;
for 2 graphs, $c_F = \frac{1}{5k^4}\left(61k^4 - 300k^3 + 600k^2 - 600k + 240\right)$.

Since the values only ever disagree in the $k^4$ coefficient, it is easy to see that the smallest $c_F$'s are those in the first group, and they are equal to $5!\,\lambda$, as desired. (Recall that this proves the lower bound on $d_{C_5}(p)$ of Theorem 1 by Lemma 3.)
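Since all seven values share every term except the $k^4$ coefficient, checking that the smallest one equals $5!\,\lambda$ is a one-line computation. The following sympy sketch (ours, not from the paper) confirms it.

    # Check that the smallest coefficient c_F above equals 5! * lambda.
    # A sketch using sympy; not from the paper.
    from sympy import symbols, simplify, Rational

    k = symbols('k', positive=True)
    lam = Rational(1, 10) - 1/(2*k) + 1/k**2 - 1/k**3 + Rational(2, 5)/k**4
    smallest = (60*k**4 - 300*k**3 + 600*k**2 - 600*k + 240) / (5*k**4)
    assert simplify(smallest - 120*lam) == 0
    # the other six values replace 60 by 61, 62, 64, 65, 66 or 68 and thus
    # exceed the smallest one by (c - 60)/5 > 0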
2.2.3 Products of graphs and determining the $c_F^M$ coefficients

First, we define the product of unlabeled graphs. Recall that for a graph $G$ we denote $|V(G)|$ by $|G|$. Let $F_1, F_2, F \in \mathcal{F}$ be such that $|F_1| + |F_2| \le |F|$. Choose uniformly at random two disjoint subsets $X_1$ and $X_2$ of $V(F)$ of sizes $|F_1|$ and $|F_2|$, respectively. Denote by $p(F_1, F_2; F)$ the probability that $F[X_1]$ is isomorphic to $F_1$ and $F[X_2]$ is isomorphic to $F_2$. Finally, the product of $F_1$ and $F_2$ is defined as
$$F_1 \times F_2 = \sum_{F \in \mathcal{F}_{|F_1| + |F_2|}} p(F_1, F_2; F) \cdot F.$$
The product can be extended to linear combinations of graphs and gives a multiplication operation in $\mathcal{A}$.

The product in $\mathcal{A}^1$ is defined along the same lines as in $\mathcal{A}$, but the intersection of $X_1$ and $X_2$ is exactly the labeled vertex. A more precise definition follows. Let $F_1, F_2, F \in \mathcal{F}^1$ be such that $|F_1| + |F_2| \le |F| + 1$. Choose uniformly at random subsets $X_1$ and $X_2$ of $V(F)$ of sizes $|F_1|$ and $|F_2|$, respectively, whose intersection is exactly the one labeled vertex. Denote by $p(F_1, F_2; F)$ the probability that $F[X_1]$ is isomorphic to $F_1$ and $F[X_2]$ is isomorphic to $F_2$, where the isomorphism preserves the labeled vertex. Finally, the product of $F_1$ and $F_2$ is defined as
$$F_1 \times F_2 = \sum_{F \in \mathcal{F}^1_{|F_1| + |F_2| - 1}} p(F_1, F_2; F) \cdot F.$$

Next we define the unlabeling operator $[\![\,\cdot\,]\!]_1 : \mathcal{F}^1 \to \mathbb{R}\mathcal{F}$. We extend $[\![\,\cdot\,]\!]_1$ to a linear function $\mathbb{R}\mathcal{F}^1 \to \mathbb{R}\mathcal{F}$, which we also call $[\![\,\cdot\,]\!]_1$. Let $F \in \mathcal{F}^1$. Denote by $G \in \mathcal{F}$ the graph obtained from $F$ by unlabeling the labeled vertex. Let $v$ be a vertex in $G$ chosen uniformly at random. Let $q$ be the probability that $G$ with $v$ labeled is isomorphic to $F$. Then $[\![F]\!]_1 = q \cdot G$.

Recall that $X = (X_1, \ldots, X_6)^T$ is the vector of all 3-vertex labeled graphs from $\mathcal{F}^1$. In Appendix A we list all coefficients for products in $\mathcal{F}^1_3$, after unlabeling and multiplying by a scaling factor of 30 to clear denominators. Then we obtain that
$$[\![X^T M X]\!]_1 = \sum_{i=1}^{6} \sum_{j=1}^{6} M_{i,j} [\![X_i \times X_j]\!]_1 = \sum_{F \in \mathcal{F}_5} c_F^M \cdot F,$$
since each $[\![X_i \times X_j]\!]_1$ is a linear combination of graphs in $\mathcal{F}_5$.

2.2.4 Guessing matrices A and B

In this paragraph we describe how we obtained the matrices $A$ and $B$. First, we used semidefinite programming to find a matrix $M$ for several small odd values of $k$. Notice that if (9) is applied to the extremal construction, then the left-hand side is equal to the right-hand side. That means that all inequalities used are actually equalities. In particular, $\phi([\![X^T M X]\!]_1) = 0$. Since $M$ is a positive semidefinite matrix, $X$ evaluated on our extremal example (the limit of $T_k^n$ as $n \to \infty$) must give an eigenvector of $M$ corresponding to the eigenvalue 0. The matrix $B$ was obtained by projecting onto the space orthogonal to three zero eigenvectors of $M$. As noted before, we had one zero eigenvector to start with. By looking at all eigenvectors of $M$, we managed to guess another zero eigenvector. We tried the projection with the two zero eigenvectors and found the third one in the projection. After having obtained the matrices $B$, we observed that a suitable $A$ exists even if we set the entries $[1,2]$ and $[2,1]$ to 0. With proper scaling of the objective function, we were getting nice matrices from the CSDP solver [3] with all entries integers. By using the solutions for several values of $k$, we calculated a polynomial function of $k$ fitting each entry in the matrix $A$. Finally, we observed that the same matrices $A$ and $B$ also work for even values of $k$.

3 Stability

In this section we prove Theorem 2. For this purpose it will be convenient to rewrite our lower bound as an asymptotic identity valid for an arbitrary graph. Fix $k \ge 3$. Let $p$ and $\lambda$ be as in (1). Let the matrix $M \succeq 0$, $\alpha \in \mathbb{R}$, and the reals $c_F^M, c_F^{OPT}, c_F$, indexed by $F \in \mathcal{F}_5$, be as previously. Recall that $X = (X_1, \ldots, X_6)^T$ is the vector of 3-vertex rooted graphs defined in (6). For a graph $G = (V, E)$ of order $n \ge 5$ and a vertex $r \in V$, let $Y_r$ be the column vector whose $i$-th component is the number of unordered 2-sets $\{u, v\} \subseteq V \setminus \{r\}$ such that the induced graph $G[\{r, u, v\}]$ rooted at $r$ is isomorphic to $X_i$. Define
$$Y = \frac{4}{5!} \sum_{r \in V} Y_r^T M Y_r \ge 0.$$
Let us argue that
$$Y = \sum_{F \in \mathcal{F}_5} c_F^M P(F, G) + O(n^4), \qquad (11)$$
where for $F \in \mathcal{F}_\ell$ we let $P(F, G) = \binom{n}{\ell} p(F, G)$ be the number of $\ell$-sets inducing a copy of $F$ in $G$. Indeed, the $i$-th entry of $Y_r$ can be written as a double sum $\frac12 \sum_{u \in V} \sum_{v \in V}$ of the indicator function that $r, u, v$ are distinct and the graph $G[\{r, u, v\}]$ when rooted at $r$ is isomorphic to $X_i$. Using this representation of $Y_r$ and expanding everything, we can write $Y$ as a sum over all $(r, u, v, u', v') \in V^5$ of some function that depends only on the graph induced by the (multi)set $(r, u, v, u', v')$ inside $G$. Apart from $O(n^4)$ terms where some of the vertices coincide, the remaining ones can be grouped by the isomorphism type $F \in \mathcal{F}_5$ of $G[\{r, u, v, u', v'\}]$. For $F \in \mathcal{F}_5$, each unordered 5-set spanning an induced copy of $F$ in $G$ contributes the same amount (depending only on $F$ and $M$), and the coefficient $c_F^M$ was in fact defined by us to be equal to this common value. Thus (11) holds.

Likewise, $P(K_2, G)\binom{n-2}{3}$ and $\binom{n}{5}$ can be written as fixed linear combinations of $P(F, G)$ over $F \in \mathcal{F}_5$. Also, $d_{C_5}(G)\, n^5 = \sum_{F \in \mathcal{F}_5} c_F^{OPT} P(F, G)$ is the number of 5-cycles in $G$. Putting it all together, we obtain the following identity, valid for an arbitrary graph $G$:
$$d_{C_5}(G)\, n^5 + \frac{\alpha}{5!}\left(2 P(K_2, G)\, n^3 - p\, n^5\right) - Y + O(n^4) = \sum_{F \in \mathcal{F}_5} c_F P(F, G), \qquad (12)$$
where $c_F$ for $F \in \mathcal{F}_5$ was defined to be exactly the contribution of each induced copy of $F$ in $G$ to the left-hand side, while all combinations where some vertices in the underlying 5-fold sum coincide are absorbed into the error term $O(n^4)$. Note that if we multiply (12) by $\binom{n}{5}^{-1}$ then the scaled terms in (12) will be asymptotically the same as in (9) when $n \to \infty$. Since $\sum_{F \in \mathcal{F}_5} P(F, G) = \binom{n}{5}$, the right-hand side of (12) can be lower bounded by $\binom{n}{5} \min_{F \in \mathcal{F}_5} c_F$, giving the required lower bound in Theorem 1 since each $c_F$ is at least $5!\,\lambda$.
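For concreteness, here is a small Python sketch (ours, not part of the paper) of how the vectors $Y_r$ can be computed. Each pair $\{u, v\}$ of non-root vertices is classified by how many of $u, v$ are adjacent to the root and whether $uv$ is an edge; these two attributes distinguish exactly the six rooted 3-vertex types $X_1, \ldots, X_6$, although the particular ordering of the six coordinates used in the paper is tied to pictures not reproduced in this copy, so the counts below are keyed by the type itself.

    # Compute the six rooted 3-vertex type counts Y_r for a root r.
    # A sketch; coordinate order of X_1,...,X_6 is not recoverable here.
    import itertools

    def rooted_type_counts(n, edges, r):
        E = {frozenset(e) for e in edges}
        counts = {(a, b): 0 for a in (0, 1, 2) for b in (False, True)}
        for u, v in itertools.combinations([w for w in range(n) if w != r], 2):
            a = (frozenset((r, u)) in E) + (frozenset((r, v)) in E)  # edges to root
            b = frozenset((u, v)) in E                               # edge uv?
            counts[(a, b)] += 1
        return counts

    # example: the 5-cycle 0-1-2-3-4-0, rooted at vertex 0
    c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(rooted_type_counts(5, c5, 0))
    # -> {(0, False): 0, (0, True): 1, (1, False): 2, (1, True): 2,
    #     (2, False): 1, (2, True): 0}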
Let us turn to stability. Take any sequence of graphs $G_m$ of strictly increasing orders such that
$$|E(G_m)| \ge \left(p - \tfrac{1}{m}\right)\binom{|G_m|}{2} \quad \text{and} \quad d_{C_5}(G_m) \le \lambda + \tfrac{1}{m} \quad \text{for all } m \in \mathbb{N}. \qquad (13)$$
Observe that if $c_F > 5!\,\lambda$ for some $F \in \mathcal{F}_5$, then the right-hand side of (12) is at least $\big(5!\,\lambda + (c_F - 5!\,\lambda)\, p(F, G)\big)\binom{n}{5}$. Thus we have that $p(F, G_m) = o(1)$ as $m \to \infty$ for every such $F$.

By looking at the explicit formulas for $c_F$ near the end of Section 2.2.2, we see that there are 16 such graphs. They are collected into the list $L$ in Figure 1, and are denoted by $L_1, \ldots, L_{16}$ in this order.

Figure 1: The list $L = (L_1, \ldots, L_{16})$. (The pictures are not reproduced in this copy.)

Let the co-cherry $\overline{P_2}$ be the complement of the 2-edge path $P_2$; that is, $\overline{P_2}$ is the graph with 3 vertices and 1 edge. Next, we show that its density in $G_m$ must also be $o(1)$. Note that there are 5-vertex graphs not in the list $L$ that contain the co-cherry. Thus the naive approach does not work and a slightly more involved argument is needed.

Lemma 4. For every sequence of graphs $G_m$ as in (13), we have that $\lim_{m \to \infty} p(\overline{P_2}, G_m) = 0$.

Proof. Let $m$ be sufficiently large, $G = G_m$, $V = V(G)$ and $n = |V|$. For $i \in \{0, 1, 2\}$, let $F_i$ be the (unique) graph of order 4 with $i$ disjoint edges. Let $L' = L \cup \{F_0, F_1, F_2\}$. Apply the induced removal lemma (see, e.g., [1, 4]) to $G$ to destroy all induced graphs in $L'$ whose density is $o(1)$. Formally, let $f = n^{-1} + \max\{p(L, G_m) : L \in L\}$ (and let initially $G = G_m$). As long as there is at least one $F \in L'$ with $0 < p(F, G) \le f$, change as few as possible adjacencies in $G$ to destroy all copies of all such $F$ so that, additionally, no graph in $L'$ absent from $G$ is introduced. Since $f$ tends to 0 as $m \to \infty$ and the above iteration is applied at most $|L'|$ times (in fact, at most $|L' \setminus L| + 1 = 4$ times), we change $o(n^2)$ edges in total by the induced removal lemma. Also, the final graph $G$ contains no graph from the list $L$, since the first iteration destroyed all such subgraphs by our choice of $f$.

Claim 4.1. $G$ contains no induced $F_1$ (i.e., 4 vertices spanning exactly one edge).

Proof. Take a copy of $F_1$ and add one new vertex $x$ of degree $d$. If $d \in \{0, 1, 2, 3\}$, then the sets of possible obtained graphs up to isomorphism are respectively $\{L_1\}$, $\{L_2, L_5\}$, $\{L_4, L_6, L_8\}$, and $\{L_7, L_9\}$. We see that each 1-vertex extension of $F_1$ is in $L$ except when $d = 4$ (that is, when $x$ is adjacent to every vertex of $F_1$). This means that for every copy of $F_1$, say on $A \subseteq V$, the set $A$ is complete to $V \setminus A$ in $G$. It follows that every two distinct induced copies of $F_1$ are vertex-disjoint and thus $G$ has at most $n/4$ such copies. This is at most $f\binom{n}{4}$, so $G$ has no copy of $F_1$ at all.

Claim 4.2. $G$ contains no induced $F_2$ (which is the matching with two edges).

Proof. If we extend a copy of $F_2$ by adding a vertex $x$ of degree $d \in \{0, 1, 2, 3\}$, then we obtain graphs in respectively $\{L_5\}$, $\{L_8\}$, $\{L_{11}, L_{14}\}$ and $\{L_{15}\}$. Thus the only extension that does not lead to a graph in $L$ is to connect $x$ to every vertex of $F_2$. This gives that every two distinct induced copies of $F_2$ in $G$ are vertex-disjoint. Thus we have at most $O(n) \le f\binom{n}{4}$ copies of $F_2$, that is, none at all.

Consider the edgeless 4-vertex graph $F_0$. If we add a vertex $x$ of degree $d \in \{1, 2, 3\}$, then we get respectively $L_1$, $L_2$ and $L_3$. The only remaining ways are to have $x$ empty or complete to $F_0$. Now, consider any copy of $F_0$ in $G$, say with vertex set $A_0 \subseteq V(G)$. By the above, every vertex outside of $A_0$ is empty or complete to $A_0$. Let $A \supseteq A_0$ consist of all vertices of $G$ that send no edges to $A_0$. Note that $A$ is an independent set: if we had an edge $xy$ inside $A$ then $x, y$ plus some two extra vertices from $A_0$ would span a copy of $F_1$ in $G$, contradicting Claim 4.1. Moreover, $A$ is complete to $V \setminus A$.
Indeed, for every pair $(a, b) \in A \times (V \setminus A)$, the subgraph of $G$ induced by $a$ and some further three vertices of $A_0$ has no edges; thus the vertex $b \notin A$ must be complete to it. It follows that we can find disjoint independent sets $A_i$, $i \in I$, in $V$ such that each $A_i$ is complete to $V \setminus A_i$ while every copy of $F_0$ in $G$ is inside one of these sets $A_i$.

Define $B = V \setminus (\cup_{i \in I} A_i)$. By the definition of $B$ and the above claims, we have that $H = G[B]$ is $\{F_0, F_1, F_2\}$-free. This means that the complement $\overline{H}$ of $H$ cannot have a (not necessarily induced) 4-cycle $C_4$, because for any way of filling its diagonals we get $F_0$, $F_1$ or $F_2$ in $H$. Thus $|E(\overline{H})|$ is at most the Turán function $\mathrm{ex}(n, C_4) = O(n^{3/2})$; that is, $H$ is $O(n^{3/2})$-close in the edit distance to being a complete graph. We see that $G$ is $O(n^{3/2})$-close to the complete partite graph $G'$ with parts $A_i$, $i \in I$, and $\{x\}$, $x \in B$. As every co-cherry in $G$ has to contain at least one pair where $E(G)$ and $E(G')$ differ, $G$ has at most $O(n^{5/2})$ co-cherries. Since the original graph $G_m$ and $G$ differ in $o(n^2)$ adjacencies, the co-cherry density in $G_m$ is $o(1)$, as required.

Thus, another application of the induced removal lemma gives that we can change an $o(1)$-fraction of adjacencies in $G_m$ and make it $\overline{P_2}$-free, that is, complete partite. Thus, in order to finish the proof of Theorem 2, it is enough to argue that each of the $k$ largest parts of $G_m$ has $\left(\frac{1}{k} + o(1)\right)|G_m|$ vertices. We present two proofs of this. The first proof is more direct but longer. The second one is shorter but assumes some known facts about graphons.

3.1 First proof

We need the following auxiliary result.

Lemma 5. Suppose a graph $J$ on $n$ vertices has a subgraph $X$ such that

(i) $X$ has $x$ vertices, where $\varepsilon' n \le x \le (1 - \varepsilon')n$, and edge density $q \le \frac12$;

(ii) $X$ is complete to $V(J) \setminus X$;

(iii) $X$ contains at least $\frac12 x^4 q^3 + \varepsilon' x^4$ copies of $P_4$.

Then there exists a graph $J'$ on $n$ vertices with asymptotically the same edge density as $J$ and $d_{C_5}(J') \le d_{C_5}(J) - \frac12(\varepsilon')^6$.

Proof. Note first that conditions (i) and (ii) imply that $J$ is dense, since it has at least $\varepsilon'(1 - \varepsilon')n^2$ edges. We make $J'$ by replacing $X$ with $X'$, a random balanced bipartite graph with edge probability $2q$. We will not change the rest of the graph, so $J' - X' = J - X$. W.h.p. $X'$ has edge density asymptotically $q$, and so $J'$ has asymptotically the same edge density as $J$. We will argue that $J'$ has many fewer copies of $C_5$ than $J$ has, by considering several possible types of $C_5$ copies. We will compare the copies according to how they intersect $X$ (for counting copies of $C_5$ in the graph $J$) or $X'$ (in $J'$). Specifically, since $X$ is complete to the rest of $J$ we have
$$\nu_{C_5}(J) = \sum_H m_H\, \nu_H(X) \cdot \nu_{C_5 - H}(J - X),$$
where the sum is over all induced subgraphs $H \subseteq C_5$, and the coefficient $m_H$ is the number of $C_5$ copies contained in the graph formed by taking a copy of $H$ and a copy of $C_5 - H$ with every possible edge in between. Recall that $\nu_H(G)$ counts the number of (not necessarily induced) copies of $H$ in $G$. Similarly, we have
$$\nu_{C_5}(J') = \sum_H m_H\, \nu_H(X') \cdot \nu_{C_5 - H}(J' - X') = \sum_H m_H\, \nu_H(X') \cdot \nu_{C_5 - H}(J - X),$$
since $J' - X' = J - X$. So we will compare $\nu_H(X)$ with $\nu_H(X')$ for each $H$. Specifically, we will show that $\nu_H(X') \le (1 + o(1))\, \nu_H(X)$ for each $H$, and that this inequality holds with some room for $H = P_4$.

Some easy cases: when $H$ has no vertices, $\nu_H(X) = \nu_H(X') = 1$. When $H$ is a single vertex, $\nu_H(X) = \nu_H(X') = x$. When $H$ is just an edge, $\nu_H(X) = (1 + o(1))\, \nu_H(X') = (1 + o(1))\binom{x}{2} q$.
When $H$ has 2 vertices and no edge we have $\nu_H(X') = \nu_H(X) = \binom{x}{2}$. When $H$ is the graph on 3 vertices consisting of an edge and an isolated vertex, we have $\nu_H(X') = (1 + o(1))\, \nu_H(X) = (1 + o(1))\, x\binom{x}{2} q$. When $H = P_3$ (the path of length 2) we have
$$\nu_{P_3}(X') = 2\binom{x/2}{2} \cdot \frac{x}{2} \cdot (2q)^2 = (1 + o(1))\, \tfrac12 x^3 q^2,$$
which we compare to
$$\nu_{P_3}(X) = \sum_{v \in X} \binom{|N(v) \cap X|}{2} \ge x \cdot \binom{2q\binom{x}{2}/x}{2} = (1 + o(1))\, \tfrac12 x^3 q^2.$$
Finally, we consider the case $H = P_4$. We have
$$\nu_{P_4}(X') = 2\binom{x/2}{2} \cdot 2\binom{x/2}{2} \cdot (2q)^3 = (1 + o(1))\, \tfrac12 x^4 q^3,$$
which we compare to $\nu_{P_4}(X) \ge \tfrac12 x^4 q^3 + \varepsilon' x^4$. Taking all possible $H$ into account, we see that
$$\nu_{C_5}(J) - \nu_{C_5}(J') = \sum_H \left[\nu_H(X) - \nu_H(X')\right] \cdot \nu_{C_5 - H}(J - X) \ge \left[\nu_{P_4}(X) - \nu_{P_4}(X')\right] \cdot \nu_{C_5 - P_4}(J - X) \ge (1 + o(1))\, \varepsilon' x^4 \cdot (n - x) \ge \tfrac12(\varepsilon')^6 n^5,$$
and so $d_{C_5}(J') \le d_{C_5}(J) - \frac12(\varepsilon')^6$.

Proof of Theorem 2. Let $G_m$ be as in (13). Let $m \to \infty$. By the induced graph removal lemma and Lemma 4 we can eliminate all co-cherries in the graph $G = G_m$ of order $n \to \infty$ by adding or removing at most $\alpha n^2$ edges, for some $\alpha = \alpha(\varepsilon) \to 0$ as $\varepsilon \to 0$. Call this new graph $G'$, which has edge density $p'$, where $p - 2\alpha \le p' \le p + 2\alpha$. Moreover, $G'$ is a complete $k'$-partite graph for some $k'$. Say the parts of $G'$ are $X_1, \ldots, X_{k'}$. Also, note that since adding (or removing) one edge to $G$ creates (or destroys) at most $n^3$ copies of $C_5$, we have $d_{C_5}(G) = d_{C_5}(G') + O(\alpha)$ and $d_{C_5}(p) = d_{C_5}(p') + O(\alpha)$ (recall that we use big-O notation to replace quantities that are bounded in absolute value, and the quantity being replaced may be negative). Now
$$d_{C_5}(G') \le d_{C_5}(G) + O(\alpha) \le d_{C_5}(p) + \varepsilon + O(\alpha) \le d_{C_5}(p') + O(\varepsilon + \alpha), \qquad (14)$$
and so $G'$ has nearly the minimum $C_5$-density among graphs with edge density $p'$. In the following, we will need a parameter $\beta = \beta(\varepsilon) = (\varepsilon + \alpha(\varepsilon))^{1/100}$.

Claim 5.1. We are done unless the following holds: for any $i \ne j$, $|X_i| + |X_j| \le (1 - \beta)n$.

Proof. Without loss of generality, suppose for contradiction that $|X_1| + |X_2| \ge (1 - \beta)n$, so the number of edges in $G'$ is at most
$$\binom{n}{2} - \binom{|X_1|}{2} - \binom{|X_2|}{2} \le \binom{n}{2} - 2\binom{(1 - \beta)n/2}{2} \le \tfrac12 n^2 - \tfrac14(1 - \beta)^2 n^2 = \left(\tfrac14 + O(\beta)\right) n^2,$$
and so we must have $k = 2$, since throughout the proof we assume $\varepsilon$ (and therefore $\alpha$ and $\beta$) to be sufficiently small. Now if $\big||X_1| - |X_2|\big| \ge \beta^{1/3} n$, say without loss of generality $|X_1| \ge |X_2| + \beta^{1/3} n$, then the number of edges in $G'$ is at most
$$|X_1||X_2| + \beta n\,(|X_1| + |X_2|) + \binom{\beta n}{2} \le \left(\frac{n}{2} + \frac12\beta^{1/3} n\right)\left(\frac{n}{2} - \frac12\beta^{1/3} n\right) + \beta n^2 + \binom{\beta n}{2} = \left(\tfrac14 - \tfrac14\beta^{2/3} + O(\beta)\right) n^2,$$
which is a contradiction for small $\varepsilon$, since $G'$ has at least $\binom{n}{2} p - \alpha n^2$ edges (where $p = \frac12$ since $k = 2$) and $\frac14\beta^{2/3} + O(\beta) > \alpha$ for small $\varepsilon$.

To summarize, $G'$ is a complete partite graph that has two large parts $X_1, X_2$ which differ in size by at most $\beta^{1/3} n$, and together the rest of the parts make up at most $\beta n$ vertices. It is easy to see then that $G'$ can be changed into a balanced complete bipartite graph by editing $O(\beta^{1/3} n^2)$ edges. Thus, we henceforth assume that for any $i \ne j$, $|X_i| + |X_j| \le (1 - \beta)n$.

Claim 5.2. For all $i, j$, if $|X_i|, |X_j| \ge \beta n$, then $\big||X_i| - |X_j|\big| \le \beta n$.

Proof. Suppose for contradiction that there are two parts (without loss of generality say $X_1, X_2$) such that $|X_1|, |X_2| \ge \beta n$ and $\big||X_1| - |X_2|\big| > \beta n$. We will derive a contradiction by arguing that $G'$ can be modified by Lemma 5 to form another graph $G^*$ of asymptotically the same edge density but with significantly smaller $C_5$-density than $G'$. We apply Lemma 5 with $J = G'$, $X = X_1 \cup X_2$, $\varepsilon' = \frac12\beta^6$ and
$$q = \frac{x_1 x_2}{\binom{x}{2}} = (1 + o(1))\, \frac{2 x_1 x_2}{x^2},$$
where $|X_i| = x_i$ and $x = x_1 + x_2$. Let us check the conditions of the lemma.
Clearly we have $\beta n \le x \le (1 - \beta)n$, and $X$ is complete to the rest of the graph (since $X$ is composed of two parts of a complete partite graph). Finally, the number of copies of $P_4$ in $X$ is
$$\nu_{P_4}(X) = 2\binom{x_1}{2} \cdot 2\binom{x_2}{2} = (1 + o(1))\, x_1^2 x_2^2,$$
which we compare to
$$\tfrac12 x^4 q^3 = (1 + o(1))\, \tfrac12 x^4 \left(\frac{2 x_1 x_2}{x^2}\right)^3 = (1 + o(1))\, \frac{4 x_1^3 x_2^3}{x^2}.$$
From here we can see that
$$\nu_{P_4}(X) - \tfrac12 x^4 q^3 \ge (1 + o(1))\left( x_1^2 x_2^2 - \frac{4 x_1^3 x_2^3}{x^2} \right) \ge \tfrac12 \cdot \frac{x_1^2 x_2^2}{x^2}\left(x^2 - 4 x_1 x_2\right) = \tfrac12 \cdot \frac{x_1^2 x_2^2}{x^2}(x_1 - x_2)^2 \ge \tfrac12 \cdot \frac{(\beta n)^4}{n^2}(\beta n)^2 = \tfrac12\beta^6 n^4 \ge \tfrac12\beta^6 x^4,$$
and so Lemma 5 applies, implying that $J = G'$ must have $C_5$-density at least
$$d_{C_5}(p') + \tfrac12\left(\tfrac12\beta^6\right)^6 = d_{C_5}(p') + \tfrac{1}{128}\beta^{36}.$$
But then from (14), we have
$$d_{C_5}(p') + \tfrac{1}{128}\beta^{36} \le d_{C_5}(G') \le d_{C_5}(p') + O(\varepsilon + \alpha),$$
a contradiction for small $\varepsilon$ since $\beta = (\varepsilon + \alpha)^{1/100}$.

Without loss of generality, say that $|X_1|, \ldots, |X_\ell| \ge \beta n$ and $|X_i| < \beta n$ for any $i > \ell$. By Claim 5.2, there is some value $x$ such that $|X_i| \in [(x - \beta)n, (x + \beta)n]$ for $1 \le i \le \ell$. Then the number of edges in $G'$ is at most
$$\binom{n}{2} - \sum_{i \le \ell} \binom{|X_i|}{2} \le \binom{n}{2} - \ell\binom{(x - \beta)n}{2} = \tfrac12 n^2\left(1 - \ell x^2 + O(\beta)\right).$$
We will now show a lower bound matching the above upper bound. Since for any numbers $a \ge b$ and $\delta > 0$ we have $(a + \delta)^2 + (b - \delta)^2 \ge a^2 + b^2$, the following holds. Since $\sum_{i > \ell} |X_i| \le n$, and for $i > \ell$ we have $|X_i| \le \beta n$, the maximum possible value of $\sum_{i > \ell} |X_i|^2$ occurs when all the terms are either 0 or $(\beta n)^2$, meaning that the number of positive terms would be at most $\frac{1}{\beta}$; so we have
$$\sum_{i > \ell} |X_i|^2 \le \frac{1}{\beta} \cdot (\beta n)^2 = \beta n^2,$$
and the number of edges in $G'$ is then at least
$$\binom{n}{2} - \sum_i \binom{|X_i|}{2} \ge \binom{n}{2} - \ell\binom{(x + \beta)n}{2} - \tfrac12\beta n^2 = \tfrac12 n^2\left(1 - \ell x^2 + O(\beta)\right).$$
But we know $G'$ has edge density $p' = 1 - \frac{1}{k} + O(\alpha) = 1 - \ell x^2 + O(\beta)$, and so we get
$$x = \frac{1}{\sqrt{k\ell}} + O(\beta),$$
and in particular $\ell \le k$, since otherwise $|X_1| + \ldots + |X_\ell| \ge (x + O(\beta))\,\ell n > n$. To summarize, at this point we know that the graph must have $\ell \le k$ "large" parts which each have about $\frac{1}{\sqrt{k\ell}}\, n$ vertices, and the rest of the parts are "small" and each have at most $\beta n$ vertices. We would like to show that $\ell = k$, so assume for contradiction that $\ell < k$.

Claim 5.3. $\sum_{i > \ell} |X_i| \ge \beta n$.

Proof. Observe that
$$\sum_{i > \ell} |X_i| = n - \sum_{i \le \ell} |X_i| = n - \ell\left( \frac{1}{\sqrt{k\ell}} + O(\beta) \right) n = \left( 1 - \sqrt{\frac{\ell}{k}} + O(\beta) \right) n \ge \beta n,$$
since $\ell < k$ and we may assume $\beta > 0$ is arbitrarily small.

Now we will use Lemma 5 on $J = G'$ and $X$ being $X_1$ together with several of the small $X_i$'s, which will finish the proof. Recall that $|X_1|$ has size $\left(\frac{1}{\sqrt{k\ell}} + O(\beta)\right) n$. We know $|X_i| < \beta n$ for all $i > \ell$, and at the same time $|\cup_{i > \ell} X_i| \ge \beta n$. Hence there exists an integer $z$ such that $\beta n \le |\cup_{\ell < i \le z} X_i| \le 2\beta n$. Let $Y = \cup_{\ell < i \le z} X_i$. In order to apply Lemma 5 to $X = X_1 \cup Y$, we need to count the number of copies of $P_4$ in $X$; the other assumptions of Lemma 5 are clearly satisfied. Notice that $\nu_{P_4}(X)$ is bounded from below by the number of copies of $P_4$ that alternate between vertices in $X_1$ and in $Y$, which gives
$$\nu_{P_4}(X) \ge |X_1|^2 |Y|^2 \ge |X_1|^2 (\beta n)^2 = \frac{\beta^2}{k\ell}\, n^4 + O(\beta^3)\, n^4. \qquad (15)$$
Denote $|X|$ by $x$. Notice that $x = |X_1| + |Y| = \left(\frac{1}{\sqrt{k\ell}} + O(\beta)\right) n$. Let $e$ be the number of edges in $X$. It can be bounded from above by pretending that $Y$ is a complete graph, which gives
$$e \le |X_1| \cdot |Y| + |Y|^2/2 \le \frac{2\beta n^2}{\sqrt{k\ell}} + O(\beta^2)\, n^2.$$
This gives $q = 2e/x^2 \le 4\beta\sqrt{k\ell} + O(\beta^2)$. Hence $X$ satisfies Lemma 5 (iii) with $\varepsilon' = \frac{\beta^2 k\ell}{2}$, since
$$\tfrac12 x^4 q^3 \le \frac{32\beta^3}{\sqrt{k\ell}}\, n^4 + O(\beta^4)\, n^4$$
is significantly smaller than $\nu_{P_4}(X)$ (see (15)) and
$$\varepsilon' x^4 \le \frac{\beta^2}{2 k\ell}\, n^4 + O(\beta^4)\, n^4$$
is about $\frac12 \nu_{P_4}(X)$. Hence Lemma 5 implies
$$d_{C_5}(G') \ge d_{C_5}(p') + \frac{\beta^{12} (k\ell)^6}{2^7} \ge d_{C_5}(p') + \beta^{19}.$$
Combining this with (14) gives the final contradiction
$$d_{C_5}(p') + \beta^{19} \le d_{C_5}(G') \le d_{C_5}(p') + O(\varepsilon + \alpha)$$
for small $\varepsilon$, since $\beta = (\varepsilon + \alpha)^{1/100}$.

Summarizing, we have just shown that $G$ can be transformed into the Turán graph $T_k^n$ by adding or deleting at most $o(n^2)$ edges.

3.2 Second proof

Here we use some notions related to graphons. An introduction to graphons and further details can be found in the excellent book by Lovász [13]. In general, a graphon is a quadruple $Q = (\Omega, \mathcal{B}, \mu, W)$, where $(\Omega, \mathcal{B}, \mu)$ is a standard probability space and $W : \Omega \times \Omega \to [0, 1]$ is a symmetric measurable function; see [13, Section 13.1]. For a graph $F$ on $[k]$, its induced homomorphism density in $Q$ is
$$t_{\mathrm{ind}}(F, Q) = \int_{\Omega^k} \prod_{ij \in E(F)} W(x_i, x_j) \prod_{ij \notin E(F)} \big(1 - W(x_i, x_j)\big)\; d\mu(x_1) \cdots d\mu(x_k).$$
Here we identify two graphons $Q$ and $Q'$ if $t_{\mathrm{ind}}(F, Q) = t_{\mathrm{ind}}(F, Q')$ for every graph $F$, calling them equivalent.

The relevance of graphons comes from the result of Lovász and Szegedy [15] that positive homomorphisms $\phi \in \mathrm{Hom}^+(\mathcal{A}, \mathbb{R})$ are in one-to-one correspondence with graphons $Q$ (up to equivalence) so that, for every graph $F$, we have $\phi(F) = p(F, Q)$, where we let
$$p(F, Q) = \frac{|F|!}{|\mathrm{aut}(F)|}\, t_{\mathrm{ind}}(F, Q),$$
with $\mathrm{aut}(F)$ being the group of automorphisms of $F$. Also, let
$$d_{C_5}(Q) = \frac{1}{5!} \sum_{F \in \mathcal{F}_5} c_F^{OPT}\, p(F, Q).$$
We associate to a graph $G = (V, E)$ the graphon $Q_G = (V, 2^V, \mu, A)$, where $\mu$ is the uniform measure and $A : V \times V \to \{0, 1\}$ is the adjacency function of $G$. Then, for example, $t_{\mathrm{ind}}(F, Q_G)$ is the probability that a uniform random map $f : V(F) \to V(G)$ is an induced homomorphism, that is, for all $i, j \in V(F)$, $\{i, j\} \in E(F)$ if and only if $\{f(i), f(j)\} \in E(G)$.

We say that a sequence of graphons $Q_n$ converges to $Q$ if, for every graph $F$, we have $\lim_{n \to \infty} t_{\mathrm{ind}}(F, Q_n) = t_{\mathrm{ind}}(F, Q)$. In particular, if $Q_n = Q_{G_n}$ for some increasing sequence of graphs $G_n$, then this gives the same convergence of graphs that we used.

Since, by Lemma 4, we will be seeing only the limits of (almost) complete partite graphs, the following more restrictive class $\mathcal{P}$ of "complete partite" graphons will suffice for our purposes. Namely, from now on, we fix $\Omega$ to be the set $\{0, 1, 2, \ldots\}$ of non-negative integers with the discrete topology (thus all subsets of $\Omega$ or $\Omega^2$ are measurable) and fix $W(i, j)$ to be 0 if $i = j \ge 1$ and 1 otherwise (i.e., if $i \ne j$ or if $i = j = 0$). Only the measure $\mu$ will vary, and the measures that we consider are as follows. Let
$$\mathcal{R} = \left\{ \rho \in [0, 1]^{\mathbb{N}} : \rho_1 \ge \rho_2 \ge \ldots,\; \sum_{i=1}^{\infty} \rho_i \le 1 \right\}.$$
For $\rho = (\rho_1, \rho_2, \ldots) \in \mathcal{R}$, define the probability measure $\mu_\rho$ on $(\Omega, 2^\Omega)$ by $\mu_\rho(\{i\}) = \rho_i$ for $i \ge 1$. Thus $\mu_\rho(\{0\}) = \rho_0$, where $\rho_0$ is always a shorthand for $1 - \sum_{i=1}^{\infty} \rho_i$ (but is not an entry of the vector $\rho = (\rho_1, \rho_2, \ldots)$). Also, define $P_\rho = (\Omega, 2^\Omega, \mu_\rho, W)$ for $\rho \in \mathcal{R}$, and let $\mathcal{P} = \{P_\rho : \rho \in \mathcal{R}\}$ consist of all graphons that arise this way.

For example, a complete partite graph $G$ gives a graphon $P_G \in \mathcal{P}$ as follows. Order the parts $V_1, \ldots, V_s$ of $G$ non-increasingly by their size, let $\rho_G = (|V_1|/|G|, \ldots, |V_s|/|G|, 0, 0, \ldots)$, and take $P_G = (\Omega, 2^\Omega, \mu_{\rho_G}, W)$. Since all vertices inside a part $V_i$ are twins in $G$, we have that $t_{\mathrm{ind}}(F, Q_G) = t_{\mathrm{ind}}(F, P_G)$ for every $F \in \mathcal{F}$. Thus $Q_G$ and $P_G$ are equivalent graphons. One should think of $P_\rho$ as the limit of complete partite graphs where, for $i \ge 1$, $\rho_i$ is the fraction of vertices in the $i$-th largest part, while $\rho_0$ is the total fraction of vertices in parts of relative size $o(1)$.
Lemma 6. If a sequence of vectors $\rho^n \in \mathcal{R}$ converges to $\rho \in [0, 1]^{\mathbb{N}}$ in the product topology (that is, pointwise), then $\rho \in \mathcal{R}$ and the corresponding graphons $P_{\rho^n}$ converge to $P_\rho$.

Proof. If $\sum_{i=1}^{\infty} \rho_i > 1$, then $\sum_{i=1}^{m} \rho_i > 1$ for some $m$, and thus $\sum_{i=1}^{m} \rho^n_i > 1$ for sufficiently large $n$, a contradiction. Thus $\rho \in \mathcal{R}$.

We have to show that the graphons $P_n = (\Omega, 2^\Omega, \mu_n, W)$ converge to $P_\rho$, where $\mu_n = \mu_{\rho^n}$. Take any $F \in \mathcal{F}$ and $\varepsilon > 0$. Let $k = |F|$ and fix an integer $m \ge 3\binom{k}{2}/\varepsilon$.

For any $Q = (\Omega, 2^\Omega, \mu, W) \in \mathcal{P}$, define $Q' = (\Omega, 2^\Omega, \mu', W) \in \mathcal{P}$, where $\mu'$ is the push-forward of the measure $\mu$ under the map that sends each $i > m$ to 0 and is the identity otherwise. (In the $\mathcal{R}$-domain, this corresponds to truncating $x \in \mathcal{R}$ to $x' = (x_1, \ldots, x_m, 0, \ldots) \in \mathcal{R}$.) Let us show that
$$|t_{\mathrm{ind}}(F, Q) - t_{\mathrm{ind}}(F, Q')| \le \varepsilon/3 \quad \text{for every } Q \in \mathcal{P}. \qquad (16)$$
This inequality becomes more obvious if we allow general graphons and observe that the graphon $Q'$ is equivalent to $(\Omega, 2^\Omega, \mu, W')$, where $W'(i, j)$ is defined to be 0 if $1 \le i = j \le m$ and 1 otherwise. Thus when we pass from $W$ to $W'$ on the same probability space $(\Omega, 2^\Omega, \mu)$, then for every $i \in \Omega$ the measure of $j$ with $W(i, j) \ne W'(i, j)$ is always at most $\frac{1}{m+1} \le \varepsilon/3\binom{k}{2}$. By Tonelli's theorem, this also upper bounds the $\mu^2$-measure of the set $Z$ of pairs in $\Omega^2$ where $W$ and $W'$ differ. Now, $t_{\mathrm{ind}}(F, \cdot)$ is an integral of a $[0, 1]$-valued function over $\Omega^k$ and, by the union bound, the probability that some pair hits $Z$ is at most $\binom{k}{2}\mu^2(Z) \le \varepsilon/3$, giving the desired inequality.

Note that $\mu'_n(\{i\}) = \mu_n(\{i\})$ converges to $\mu'_\rho(\{i\}) = \mu_\rho(\{i\})$ for each $i \in [m]$. It follows that $\mu'_n(\{0\})$ converges to $\mu'_\rho(\{0\})$, since the support of the probability measures $\mu'_\rho$ and any $\mu'_n$ is a subset of $\{0\} \cup [m]$. For such measures $t_{\mathrm{ind}}(F, \cdot)$ is a polynomial (and thus continuous) function of the measures of the singletons $0, \ldots, m$. Thus, for all large $n$, we have that $|t_{\mathrm{ind}}(F, P'_n) - t_{\mathrm{ind}}(F, P'_\rho)| \le \varepsilon/3$; it then follows by (16) that $|t_{\mathrm{ind}}(F, P_n) - t_{\mathrm{ind}}(F, P_\rho)| \le \varepsilon$. Since $\varepsilon > 0$ and $F$ were arbitrary, $P_n \to P_\rho$ as required.

Remark 1. Using some standard facts about graphons, one can prove the converse implication of Lemma 6 (namely that the graphon convergence $P_{\rho^n} \to P_\rho$ implies that $\rho^n \to \rho$); the space $\mathcal{P}$ is studied in more detail elsewhere.

Note that the limit of the Turán graphs $T_k^n$ as $n \to \infty$ is $Q_{K_k}$ (or, equivalently, $P_\rho$ for $\rho = (\frac{1}{k}, \ldots, \frac{1}{k}, 0, \ldots) \in \mathcal{R}$).
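The densities appearing here are easy to compute for graphons in $\mathcal{P}$ whose measure has finite support, since the defining integral becomes a finite sum over block assignments. The following Python sketch (ours, not part of the paper) does this and checks two facts used in the argument: the edge density of $P_\rho$ is $1 - \sum_{i \ge 1} \rho_i^2$, and for the uniform vector $\rho = (1/k, \ldots, 1/k)$ the density of $C_5$ copies, i.e. one tenth of the homomorphism density of the 5-cycle ($C_5$ has 10 automorphisms), equals $\lambda$.

    # Densities in the complete partite graphon P_rho with finite support.
    # A sketch, not from the paper; block 0 carries the leftover mass rho_0.
    import itertools, math

    def t_ind(F_edges, nF, rho):
        # induced homomorphism density of F in P_rho
        E = {frozenset(e) for e in F_edges}
        masses = [1 - sum(rho)] + list(rho)
        total = 0.0
        for blk in itertools.product(range(len(masses)), repeat=nF):
            w = 1.0
            for u, v in itertools.combinations(range(nF), 2):
                W = 0.0 if (blk[u] == blk[v] and blk[u] >= 1) else 1.0
                w *= W if frozenset((u, v)) in E else 1.0 - W
            total += w * math.prod(masses[b] for b in blk)
        return total

    def c5_copy_density(rho):
        # (1/10) * homomorphism density of the 5-cycle
        masses = [1 - sum(rho)] + list(rho)
        total = 0.0
        for blk in itertools.product(range(len(masses)), repeat=5):
            if all(not (blk[i] == blk[(i + 1) % 5] and blk[i] >= 1)
                   for i in range(5)):
                total += math.prod(masses[b] for b in blk)
        return total / 10

    k = 4
    rho = [1/k]*k
    lam = 1/10 - 1/(2*k) + 1/k**2 - 1/k**3 + 2/(5*k**4)
    assert abs(t_ind([(0, 1)], 2, rho) - (1 - 1/k)) < 1e-12  # edge density p
    assert abs(c5_copy_density(rho) - lam) < 1e-12           # d_C5(Q_Kk) = lambda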
Lemma 7. For every $k \ge 3$, every sequence of graphs as in (13) converges to $Q_{K_k}$.

Proof. Let us first show that $(G_n)_{n=1}^{\infty}$ has a subsequence convergent to $Q_{K_k}$. By Lemma 4 and the induced removal lemma, we can make each $G_n$ complete partite without changing the convergence of any subsequence. Recall that $\rho_{G_n} \in \mathcal{R}$ is the vector encoding the part ratios of $G_n$. Since the product space $[0, 1]^{\mathbb{N}}$ is compact, some subsequence of $\rho_{G_n} \in [0, 1]^{\mathbb{N}}$ converges to some $\rho$. By Lemma 6, we have that $\rho \in \mathcal{R}$ and the corresponding subsequence of graphs $G_n$ converges to $Q = P_\rho$. Thus the graphon $Q$ satisfies $p(K_2, Q) = p$ and $d_{C_5}(Q) = \lambda$.

The identity in (12) can be re-written as an identity valid for every graphon. Since we need to analyse it only for $Q$, let us state a version that uses the (very special) structure of graphons in $\mathcal{P}$. We need a few definitions first. For a graph $F \in \mathcal{F}^1$ on $[k]$ rooted at 1 and $j \in \Omega$, define the rooted density of $F$ in $(Q, j)$ as
$$t_{\mathrm{ind}}(F, (Q, j)) = \sum_f \prod_{i=2}^{k} \rho_{f(i)},$$
where $f$ in the sum ranges over all maps $V(F) \to \Omega$ such that $f(1) = j$ and, for all distinct $u, v \in V(F)$, we have that $W(f(u), f(v)) = 1$ if and only if $\{u, v\} \in E(F)$. Equivalently, this is the limit as $n \to \infty$ of the probability of the following event $E$. Suppose we choose $k - 1$ independent uniform vertices in $G_n$ together with another vertex we call the root, which we make adjacent to everybody else if $j = 0$ or $G_n$ has fewer than $j$ parts, and which we otherwise put in the $j$-th largest part of $G_n$. Then we let $E$ be the event that these chosen vertices together with the root induce a vertex-labeled homomorphic copy of $F$. For example, if $H \in \mathcal{F}$ is the unrooted copy of $F$, then
$$t_{\mathrm{ind}}(H, Q) = \sum_{j=0}^{\infty} t_{\mathrm{ind}}(F, (Q, j))\, \rho_j. \qquad (17)$$
The version for unlabeled non-roots is
$$p(F, (Q, j)) = \frac{(k-1)!}{|\mathrm{aut}(F)|}\, t_{\mathrm{ind}}(F, (Q, j)),$$
where $\mathrm{aut}(F)$ is the group of root-preserving automorphisms of $F$. We also define a column vector
$$\mathcal{Y}_j = \big( p(X_1, (Q, j)), \ldots, p(X_6, (Q, j)) \big)^T \in \mathbb{R}^6,$$
where $X = (X_1, \ldots, X_6)^T$ was defined in (6). With this notation, the limit version of (12) is
$$5!\, d_{C_5}(Q) - \sum_{F \in \mathcal{F}_5} c_F\, p(F, Q) + \alpha\big(p(K_2, Q) - p\big) = \sum_{j=0}^{\infty} \mathcal{Y}_j^T M\, \mathcal{Y}_j\, \rho_j. \qquad (18)$$
Recall that each $c_F$ in (18) is at least $5!\,\lambda$ and that $p(K_2, Q) = p$ for our $Q$ (which is the limit of some $G_n$); thus the left-hand side of (18) is non-positive. Also, recall that $M \succeq 0$; thus $x^T M x \ge 0$ for every $x \in \mathbb{R}^6$, with equality if and only if $Mx = 0$. As the $3 \times 3$ matrix $A$ in the factorization (10) is non-singular, the null-space $N$ of $M$ is the same as that of $B$. Calculations (see e.g. the Maple code in Appendix B) show that the 3-dimensional vector space $N$ can be spanned by $z_1, z_2, z_3 \in \mathbb{R}^6$, where
$$\begin{pmatrix} z_1^T \\ z_2^T \\ z_3^T \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 2(k-1) & k-1 & k^2 - 3k + 2 \\ 0 & 1 & 0 & -2 & 0 & 1 \\ 0 & 0 & 1 & k-1 & \frac{k-2}{2} & \frac{k^2 - 3k + 2}{2} \end{pmatrix}. \qquad (19)$$
By the previous paragraph, $\mathcal{Y}_j \in \mathbb{R}^6$ belongs to $N$ for every $j \in \Omega$ with $\rho_j > 0$. Since $N$ is a (finite-dimensional and thus closed) linear subspace, it also contains the mean $\overline{\mathcal{Y}} = \sum_{j=0}^{\infty} \mathcal{Y}_j \rho_j$. By (17), we have that, in particular, $\overline{\mathcal{Y}}_2 = t_{\mathrm{ind}}(\overline{P_2}, Q)$ and $\overline{\mathcal{Y}}_3 = \frac12 t_{\mathrm{ind}}(\overline{P_2}, Q)$ are both 0. Since the entries in each $\mathcal{Y}_j$ are non-negative, we conclude that $\mathcal{Y}_j$ has its second and third coordinates zero for every $j \in \Omega$ with $\rho_j > 0$. The row-reduced matrix in (19) shows that such $\mathcal{Y}_j$ must be collinear with $z_1$. Since the sum of entries of $\mathcal{Y}_j$ is 1, we have $\mathcal{Y}_j = \frac{1}{k^2} z_1$. In particular, its first coordinate is $1/k^2$. On the other hand, this first coordinate is $p(X_1, (Q, j))$, which is the density of $K_3$ rooted at $j$ in $Q$; that is, it is $\rho_j^2$ if $j \ge 1$ and 0 if $j = 0$. Thus $\rho_0 = 0$ and $\rho_j = 1/k$ for every $j$ in the support of $\mu$, so indeed $Q = Q_{K_k}$ is the limit of Turán graphs.

Finally, if we assume, contrary to the lemma, that the whole sequence $(G_n)_{n=1}^{\infty}$ does not converge to $Q_{K_k}$, then by the compactness of the space of all graphons some subsequence converges to a graphon non-equivalent to $Q_{K_k}$. But then this violates the first claim of the proof.

Second Proof of Theorem 2. Lemma 7 and the fact that each graphon is the limit of some sequence of finite graphs imply that the limit version of the $C_5$-minimisation problem has the unique solution $Q_{K_k}$, whose function $W$, moreover, happens to be $\{0, 1\}$-valued. These are exactly the assumptions of [21, Theorem 15], which directly gives the required stability property. In order to give the reader some idea of what is going on, let us unfold slightly the proof of [21, Theorem 15] for this particular case.
Suppose on the contrary that, for some integer $k \ge 3$ and $\delta > 0$, a sequence $G_n$ of graphs as in (13) violates the stability property. By passing to a subsequence, it converges to some graphon $Q$ with $p(K_2, Q) = p$ and $d_{C_5}(Q) = \lambda$. By Lemma 7, we can assume that $Q = Q_{K_k}$. While the convergence of $G_n$ to some graphon does not identify $G_n$ within edit distance $o(|G_n|^2)$ in general, it does if the function $W$ of the graphon assumes only the values 0 and 1, see [16, Lemma 2.9] or [21, Theorem 17]. Thus, the convergence $G_n \to Q_{K_k}$ implies that $G_n$ is $o(|G_n|^2)$-close to $T_k^{|G_n|}$ in the edit distance, contradicting our assumption.

Another possible derivation of Theorem 2 from Lemma 7 is to use the known properties of the so-called cut-distance via the argument in [22, Page 146], where the description of all extremal graphons for the triangle-minimisation problem was used to describe all almost extremal graphs.

4 Remarks on the case $p \ne 1 - \frac{1}{k}$

Our general upper bound construction is as follows. Suppose that $p$ is a constant satisfying $1 - \frac{1}{k} < p < 1 - \frac{1}{k+1}$. Partition the vertices into $k - 1$ sets $X_1, \ldots, X_{k-1}$ of size $xn$ and one more set $Y$ of size $yn$. Each $X_i$ is an independent set. For $1 \le i \ne j \le k - 1$ we have that $X_i$ is complete to $X_j$. $Y$ is also complete to each $X_i$. Finally, $G[Y]$ is any graph such that for some parameter $0 < \rho < \frac12$ we have

(i) $G[Y]$ has asymptotically $\frac12 y^2 n^2 \rho$ edges, $\frac12 y^3 n^3 \rho^2$ paths of length 2 (that means on 3 vertices), and $\frac12 y^4 n^4 \rho^3$ paths of length 3;

(ii) $G[Y]$ has $o(n^5)$ copies of $C_5$.

(See the end of this section for a discussion of which graphs are suitable for $G[Y]$.) We assume that $(k - 1)x + y = 1$, so we have a total of $n$ vertices. The edge density in this construction is
$$\frac{\binom{k-1}{2}(xn)^2 + (k-1)(xn)(yn) + \left(\frac12 + o(1)\right) y^2 n^2 \rho}{\binom{n}{2}},$$
which tends to
$$g(x, y, \rho) = (k-1)_2\, x^2 + 2(k-1)xy + \rho y^2$$
as $n \to \infty$. So we also assume that the parameters $x, y, \rho$ satisfy $g(x, y, \rho) = p$.

Now we consider the ratio
$$f(x, y, \rho) = \lim_{n \to \infty} \frac{\nu_{C_5}(G)}{n^5}.$$
We claim that
$$f(x, y, \rho) = \left[ \tfrac{1}{10}(k-1)_5 + \tfrac12(k-1)_4 + \tfrac12(k-1)_3 \right] x^5 + \left[ \tfrac12(k-1)_4 + \tfrac32(k-1)_3 + \tfrac12(k-1)_2 \right] x^4 y + \left[ \left(\tfrac12 + \tfrac12\rho\right)(k-1)_3 + \left(1 + \tfrac12\rho\right)(k-1)_2 \right] x^3 y^2 + \left[ \left(\tfrac12\rho + \tfrac12\rho^2\right)(k-1)_2 + \tfrac12\rho\,(k-1) \right] x^2 y^3 + \tfrac12\rho^3 (k-1)\, x y^4.$$
Note that we have grouped the terms of $f(x, y, \rho)$ according to powers of $x$ and $y$, and then according to falling factorials of $(k - 1)$. To understand the formula, it helps to think of the powers of $x, y$ as specifying how many vertices come from sets of size $xn$ and $yn$, and the falling factorial of $(k - 1)$ as specifying how many distinct sets of size $xn$ are involved. For example, the first term $\frac{1}{10}(k-1)_5\, x^5$ is there because there are $\frac{1}{10}(k-1)_5 (xn)^5$ many copies of $C_5$ having vertices $v_1, \ldots, v_5$ all in different parts of size $xn$. Now let us justify a more complicated term, say the second term in the third bracket, $\left(1 + \frac12\rho\right)(k-1)_2\, x^3 y^2$. This term counts the copies of $C_5$ that have vertices $v_1, \ldots, v_5$ such that $v_1$ and $v_2$ come from $Y$, $v_3$ and $v_4$ are in the same set of size $xn$, and $v_5$ is in some other set of size $xn$ (and $v_1, \ldots, v_5$ may be in any order on the cycle). The case where $v_1$ and $v_2$ are consecutive on the cycle contributes $\frac12(k-1)_2\, \rho\, (yn)^2 (xn)^3$, and the other case contributes $(k-1)_2 (yn)^2 (xn)^3$.

Now, for a given integer $k \ge 2$ and a real number $1 - \frac{1}{k} < p < 1 - \frac{1}{k+1}$, we define an optimization problem (P):

Minimize $f(x, y, \rho)$ subject to: $(k-1)x + y = 1$, $g(x, y, \rho) = p$, $x, y \ge 0$.

Let us denote its solution by $f_{\min}(p) = f(x_0, y_0, \rho_0)$.
Clearly, $d_{C_5}(p) \le f_{\min}(p)$. For certain values of $k$ and $p$ we verified that $120 \cdot f_{\min}(p)$ numerically matches the lower bound on $d_{C_5}(p)$ given by the flag algebras. In particular, when we calculated with unlabeled flags of order $\ell$, we were getting numerically matching bounds for $p \le 1 - \frac{1}{\ell - 2}$, and we observed a gap in the bounds for $p > 1 - \frac{1}{\ell - 2}$ different from Turán densities. Since computer calculations can be performed with current computers in a reasonable time only for $\ell \le 8$, a simple straightforward use of a computer is unlikely to provide a numerical match of $d_{C_5}(p)$ and $f_{\min}(p)$ for all $p$.

Unfortunately, we were unable to convert the numerical match into a formal proof. The main problem is that (P) has no closed solution. For example, for $k = 2$ and $\frac12 < p < \frac23$ we can plug into the objective function $y = 1 - x$ and $\rho = (p - x^2 - 2xy)/y^2$, obtaining
$$f\big(2, x, 1-x, (p - x^2 - 2xy)/y^2\big) = \frac{x\,(2x^2 - 2x + p)\,\big(3x^4 - 5x^3 + (1 + 4p)x^2 + (1 - 4p)x + p^2\big)}{2(x-1)^2}.$$
Now it is not difficult to show that there exists a local minimum for some $\frac13 < x < \frac12$. Unfortunately, it looks like this minimum can only be found numerically. There might be a different parametrization of the problem that would make it possible to solve (P) and formally show a match with the flag algebra calculations for some range of $p$. In Figure 2 we present the shape of $f_{\min}(p)$. We conjecture that $d_{C_5}(p) = f_{\min}(p)$ for any $p$.

Figure 2: (a) A graph of $f_{\min}(p)$ based on numerical calculations; blue points correspond to the Turán densities (i.e. $p = 1 - 1/k$). (b) Secant lines between Turán densities. (c) A graph of $f_{\min}(p) - L(p)$. (The plots are not reproduced in this copy.)

We now address which graphs are suitable for $G[Y]$, i.e. which graphs satisfy (i) and (ii). Note first that some such choice of $G[Y]$ exists; for example, it can be a random bipartite graph with two parts of size $\frac12 yn$ and edge probability $2\rho$. Now we claim that $G[Y]$ satisfies (i) if and only if $G[Y]$ is almost $yn\rho$-regular, or more formally, all but $o(n)$ vertices in $G[Y]$ have degree $(1 + o(1))\, yn\rho$. Indeed, if $G[Y]$ is almost $yn\rho$-regular then it is easy to verify the edge and path counts in (i). Conversely, suppose (i) holds, and let the random variable $Z$ represent the degree of a random vertex in $G[Y]$. Then we have $\mathbb{E}[Z] = (1 + o(1))\, yn\rho$ and, since $\sum_{v \in Y} \binom{\deg(v)}{2}$ is the number of paths of length 2, we can calculate
$$\mathbb{E}[Z^2] = \frac{1}{yn} \sum_{v \in Y} \deg(v)^2 = \frac{1}{yn} \cdot 2(1 + o(1))\, \tfrac12 y^3 n^3 \rho^2 = (1 + o(1))\, y^2 n^2 \rho^2 = (1 + o(1))\, \mathbb{E}[Z]^2,$$
so $Z$ is concentrated by Chebyshev's inequality (see, e.g., Lemma 20.3 in [7]). In other words, $G[Y]$ is almost $yn\rho$-regular.

We believe that we have described all almost optimal graphs. Specifically, we believe that any graph with edge density $p$ and $C_5$-density $d_{C_5}(p) + o(1)$ can be transformed, by adding or deleting at most $o(n^2)$ edges, into a graph with a vertex partition $X_1, \ldots, X_{k-1}, Y$, where $|X_i| = xn$, $|Y| = yn$, all $X_i$ are independent, all $X_i$ and $Y$ are complete to each other, and $G[Y]$ is $yn\rho$-regular, where $x, y, \rho$ are a solution to the optimization problem (P).
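For illustration, the local minimum mentioned above is easy to locate numerically. The following Python sketch (ours, not part of the paper) minimizes the reduced $k = 2$ objective quoted above over $x \in (\frac13, \frac12)$ for an arbitrary sample value of $p$; it does not enforce the side condition $0 < \rho < \frac12$, which should be checked at the reported minimizer.

    # Numerically locate the local minimum of the reduced k = 2 objective.
    # A sketch; p = 0.55 is an arbitrary choice in (1/2, 2/3).
    from scipy.optimize import minimize_scalar

    def f_k2(x, p):
        # f(2, x, 1-x, (p - x^2 - 2xy)/y^2) as displayed above, with y = 1 - x
        return (x * (2*x**2 - 2*x + p)
                  * (3*x**4 - 5*x**3 + (1 + 4*p)*x**2 + (1 - 4*p)*x + p**2)
                / (2 * (x - 1)**2))

    p = 0.55
    res = minimize_scalar(lambda x: f_k2(x, p), bounds=(1/3, 1/2),
                          method='bounded')
    print(res.x, res.fun)   # location and value of the local minimum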
References

[1] N. Alon, E. Fischer, M. Krivelevich, and M. Szegedy, Efficient testing of large graphs, Combinatorica 20 (2000), no. 4, 451–476.
[2] J. Balogh, P. Hu, B. Lidický, and F. Pfender, Maximum density of induced 5-cycle is achieved by an iterated blow-up of 5-cycle, European J. Combin. 52 (2016), part A, 47–58.
[3] B. Borchers, CSDP, a C library for semidefinite programming, Optimization Methods and Software 11 (1999), no. 1-4, 613–623.
[4] D. Conlon and J. Fox, Graph removal lemmas, Surveys in combinatorics 2013, London Math. Soc. Lecture Note Ser., vol. 409, Cambridge Univ. Press, Cambridge, 2013, pp. 1–49.
[5] P. Erdős, On some problems in graph theory, combinatorial analysis and combinatorial number theory, Graph theory and combinatorics (Cambridge, 1983), Academic Press, London, 1984, pp. 1–17.
[6] P. Erdős and A. H. Stone, On the structure of linear graphs, Bull. Amer. Math. Soc. 52 (1946), 1087–1091.
[7] A. Frieze and M. Karoński, Introduction to random graphs, Cambridge University Press, Cambridge, 2016.
[8] A. Grzesik, On the maximum number of five-cycles in a triangle-free graph, J. Combin. Theory Ser. B 102 (2012), no. 5, 1061–1066.
[9] H. Hatami, J. Hladký, D. Král', S. Norine, and A. Razborov, On the number of pentagons in triangle-free graphs, J. Combin. Theory Ser. A 120 (2013), no. 3, 722–732.
[10] B. Lidický and F. Pfender, Pentagons in triangle-free graphs, European J. Combin. 74 (2018), 85–89.
[11] H. Liu, O. Pikhurko, M. Sharifzadeh, and K. Staden, Stability from symmetrisation arguments, work in progress, 2019.
[12] H. Liu, O. Pikhurko, and K. Staden, The exact minimum number of triangles in graphs of given order and size, arXiv:1712.00633.
[13] L. Lovász, Large networks and graph limits, Colloquium Publications, Amer. Math. Soc., 2012.
[14] L. Lovász and M. Simonovits, On the number of complete subgraphs of a graph. II, Studies in pure mathematics, Birkhäuser, Basel, 1983, pp. 459–495.
[15] L. Lovász and B. Szegedy, Limits of dense graph sequences, J. Combin. Theory (B) 96 (2006), 933–957.
[16] L. Lovász and B. Szegedy, Testing properties of graphs and functions, Israel J. Math. 178 (2010), 113–156.
[17] W. Mantel, Problem 28, Wiskundige Opgaven 10 (1907), 60–61.
[18] J. W. Moon and L. Moser, On a problem of Turán, Magyar Tud. Akad. Mat. Kutató Int. Közl. 7 (1962), 283–286.
[19] V. Nikiforov, The number of cliques in graphs of given order and size, Trans. Amer. Math. Soc. 363 (2011), no. 3, 1599–1618.
[20] E. A. Nordhaus and B. M. Stewart, Triangles in an ordinary graph, Canad. J. Math. 15 (1963), 33–41.
[21] O. Pikhurko, An analytic approach to stability, Discrete Math. 310 (2010), 2951–2964.
[22] O. Pikhurko and A. Razborov, Asymptotic structure of graphs with the minimum number of triangles, Combin. Probab. Comput. 26 (2017), no. 1, 138–160.
[23] O. Pikhurko, J. Sliacan, and K. Tyros, Strong forms of stability from flag algebra calculations, J. Combin. Theory (B) 135 (2019), 129–178.
[24] N. Pippenger and M. C. Golumbic, The inducibility of graphs, J. Combinatorial Theory Ser. B 19 (1975), no. 3, 189–203.
[25] A. Razborov, Flag algebras, J. Symbolic Logic 72 (2007), no. 4, 1239–1282.
[26] A. Razborov, On the minimal density of triangles in graphs, Combin. Probab. Comput. 17 (2008), no. 4, 603–618.
[27] Ch. Reiher, The clique density theorem, Ann. of Math. (2) 184 (2016), no. 3, 683–707.
[28] A. F. Sidorenko, Inequalities for functionals generated by bipartite graphs, Diskret. Mat. 3 (1991), no. 3, 50–65.
[29] P. Turán, Eine Extremalaufgabe aus der Graphentheorie, Mat. Fiz. Lapok 48 (1941), 436–452.
A Appendix

Table 1 lists, for each of the 34 graphs $F \in \mathcal{F}_5$ (shown pictorially in the original, in a fixed order not reproduced here), the coefficient $c_F^{OPT}$, the value $p(K_2, F)$ multiplied by 10, and the coefficients of $[\![X_i \times X_j]\!]_1$ multiplied by 30. Each row below is a vector of 34 entries in that fixed order.

c^OPT_F:            0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 2 2 4 6 12
10*p(K2, F):        0 1 2 3 4 3 2 3 4 3 4 5 4 5 4 5 5 6 6 7 6 4 5 6 7 6 5 6 7 7 8 8 9 10
30*[[X1 x X1]]_1:   30 12 4 0 0 0 4 2 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X1 x X2]]_1:   0 3 4 3 0 6 0 1 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X1 x X3]]_1:   0 6 4 3 0 0 8 2 0 6 2 0 0 0 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X1 x X4]]_1:   0 0 2 6 12 0 0 2 2 0 3 4 0 0 0 2 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X1 x X5]]_1:   0 0 1 0 0 0 0 2 0 0 1 0 4 0 1 0 2 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X1 x X6]]_1:   0 0 0 0 0 3 0 0 2 0 0 2 0 2 0 1 0 2 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X2 x X2]]_1:   0 0 0 0 0 0 2 2 2 0 0 0 4 4 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 0
30*[[X2 x X3]]_1:   0 0 0 0 0 0 4 2 1 4 2 0 0 0 2 2 0 0 0 0 0 6 2 1 0 0 0 0 0 0 0 0 0 0
30*[[X2 x X4]]_1:   0 0 0 0 0 0 0 0 0 2 2 2 0 0 2 2 2 2 0 0 0 0 1 2 3 0 0 0 0 0 0 0 0 0
30*[[X2 x X5]]_1:   0 0 0 0 0 0 0 0 0 1 0 0 0 0 2 0 1 0 0 0 0 0 1 0 0 0 5 2 0 1 0 0 0 0
30*[[X2 x X6]]_1:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 2 1 0 4 0 1 2 0 1 0 0 0
30*[[X3 x X3]]_1:   0 0 4 0 0 12 0 4 4 4 0 0 0 0 6 2 0 0 0 0 0 12 4 0 0 0 10 2 0 0 0 0 0 0
30*[[X3 x X4]]_1:   0 0 0 0 0 0 0 2 2 0 2 4 8 4 2 0 4 2 0 0 0 0 4 2 0 8 0 2 2 0 0 0 0 0
30*[[X3 x X5]]_1:   0 0 0 3 0 0 0 0 2 0 2 0 0 2 0 2 1 0 0 0 0 0 2 2 0 0 0 2 0 2 0 0 0 0
30*[[X3 x X6]]_1:   0 0 0 0 0 0 0 0 1 0 0 0 0 4 0 2 0 2 0 0 12 0 0 3 6 0 0 0 2 0 2 0 0 0
30*[[X4 x X4]]_1:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 4 4 12 12 0 0 0 0 0 0 10 6 4 4 4 0 0 0
30*[[X4 x X5]]_1:   0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 2 2 1 6 0 0 0 0 2 0 0 0 2 2 4 0 4 0 0
30*[[X4 x X6]]_1:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 3 0 0 2 2 6 4 8 6 0
30*[[X5 x X5]]_1:   0 0 0 0 6 0 0 0 0 0 0 4 0 0 0 0 0 2 0 0 0 0 0 0 0 4 0 0 2 0 0 2 0 0
30*[[X5 x X6]]_1:   0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 2 0 6 0 0 0 0 3 0 0 0 1 0 4 0 3 0
30*[[X6 x X6]]_1:   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 2 0 4 4 12 30

Table 1: All entries corresponding to $p(K_2, F)$ are multiplied by 10 and all entries corresponding to $[\![X_i \times X_j]\!]_1$ are multiplied by 30.

B Appendix

This Maple code computes the $c_F$ coefficients. The matrices A, B and M are defined in Subsection 2.2.2 (their explicit entries, dropped in this copy of the code, are restored from there). X is a matrix of size 21 × 34 defined in Appendix A (rows correspond to $[\![X_i \times X_j]\!]_1$). The vectors cFOPT, pF, cFM and cF (each of size 34) correspond to $c_F^{OPT}$, $p(K_2, F)$, $c_F^M$ and $c_F$, respectively. The constant a corresponds to $\alpha$.

restart:
with(LinearAlgebra):
A := Matrix([[32*k^2 - 96*k + 96, 0, 4*k^2 - 16*k],
             [0, 10*k^4 - 30*k^3 - 8*k^2 + 96*k - 96, -10*k^4 + 35*k^3 - 4*k^2 - 80*k + 96],
             [4*k^2 - 16*k, -10*k^4 + 35*k^3 - 4*k^2 - 80*k + 96, 10*k^4 - 40*k^3 + 24*k^2 + 64*k - 96]]):
B := Matrix([[k-1, 1, k-2, 0, k-3, -1],
             [0, 2, k-2, 0, 2*k-4, -2],
             [0, 0, k-1, -1, 2*k-2, -2]]):
M := (3/(2*k^4))*Matrix(Multiply(Transpose(B), Multiply(A, B))):
X := (1/30)*Matrix(21, 34, [ ... ]):   # the 21 x 34 array of Table 1 (elided in this copy)
cFM := Vector(34):
k_ind := 0:
printlevel := 2:
for i to 6 do
  for j from i to 6 do
    k_ind := k_ind + 1;
    if i = j then
      cFM := cFM + M(i, j)*Transpose(Row(X, k_ind));
    else
      cFM := cFM + 2*M(i, j)*Transpose(Row(X, k_ind));
    end if;
  end do;
end do:
cFOPT := Vector([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,2,2,4,6,12]):
pF := (1/10)*Vector([0,1,2,3,4,3,2,3,4,3,4,5,4,5,4,5,5,6,6,7,6,4,5,6,7,6,5,6,7,7,8,8,9,10]):
a := (1/(k^3))*(60*k^3 - 240*k^2 + 360*k - 192):
cF := Vector(34):
for i to 34 do cF(i) := cFOPT(i) - a*pF(i) - cFM(i) + (k-1)*a/k end do:
for i to 34 do printf("5k^4cF(%d) = %s\n", i, convert(expand(5*k^4*cF(i)), string)) end do:
kernel := NullSpace(B):
kernelMatrix := Matrix(convert(kernel, list)):
ReducedRowEchelonForm(Transpose(kernelMatrix))
267
Eye tracking in audiovisual translation
ISBN 978-88-548-4913-6
DOI 10.4399/97888548491364
pag. 83–104 (December 2012)

Eye movements in reading: Implications for reading subtitles

Elizabeth R. Schotter, Keith Rayner

In this chapter, we review research on eye movements during reading. The characteristics of eye movements during reading have important implications for understanding eye movements when viewers read subtitles. Research findings on the following issues are discussed: (1) the size of the perceptual span, (2) what determines when and where to move the eyes while reading, and (3) cross-linguistic differences. Implications for reading subtitles are also discussed.

1. Introduction

Although research on eye movements during reading has been conducted for over a century, the past three decades or so have seen a flourish of research on this topic (Rayner, 1998, 2009). With the use of increasingly advanced state-of-the-art eye trackers we are able to measure eye movements and exert sufficient experimental control in experiments to rather precisely infer what is happening in the mind as we read. Clearly, the study of how one reads is relevant to any discussion about reading subtitles. That is, in order for subtitles to be understood, they must first be read. However, there are two additional tasks for a subtitle reader. First, they need to read at a pace imposed by the movie, or subtitle transcriber, whereas under normal conditions where and when to move the eyes is determined by the reader (but can also be influenced by the properties of the text itself). Second, when reading subtitles, people also shift their gaze back and forth between the subtitle and the movie/TV program. Other chapters in the present volume deal with this dynamic switching between film and subtitle and with subtitle reading per se; our goal is to provide some background information regarding eye movements during reading. We believe that it is important to know what readers are doing under normal reading circumstances in order to be better able to investigate and discuss how they read subtitles (for a good overview of eye movements during subtitle reading per se, see d'Ydewalle & De Bruycker, 2007).

(Footnote: This chapter was prepared when the first author was the recipient of a Dean's Fellowship from the University of California, San Diego and a Study Visit Award from the Experimental Psychology Society. The second author was supported by a Humboldt Research Award and Grant HD from the National Institutes of Health.)

In the present chapter we review basic terminology, measures, and findings concerning eye movements during reading. We will consider topics such as the perceptual span, determinants of eye movements, and linguistic influences on reading. In the final section we will review some commonalities and differences of reading in different languages, as we believe this will be of particular interest to researchers studying the reading of subtitles.

2. Basic characteristics of eye movements

We obtain visual information with our eyes when light hits the retina and is transformed into electric signals, which get relayed to the brain to be interpreted. Importantly, not all parts of the retina have the same acuity, or resolution. The region of highest acuity, called the fovea, is located in the center of the retina and extends about 2 degrees of visual angle in diameter.
Outside the fovea, acuity drops off rapidly, in regions called the parafovea (roughly 2–5 degrees of visual angle from fixation) and the periphery (everything beyond the parafovea). For this reason, in order to process information most effectively we must move our eyes so that the fovea fixates the location of that which we intend to process. These eye movements are called saccades and last approximately 20–50 ms (milliseconds), depending on the size of the actual movement. Between the saccades, our eyes are relatively stable (in fixations), which last approximately 200–250 ms. It is during fixations that information is obtained from the visual stimulus since during saccades vision is suppressed and we are essentially 'blind'. (Although vision is suppressed, for most cognitive tasks mental processing continues during the saccade; see Irwin, 1998, for a review of when cognition is also suppressed during saccades.) We may have the intuition that we are able to process all the information in a visual display in a single fixation, but it is simply not possible without moving our eyes so that our foveas may fixate a location to process the information therein (Rayner, 1978, 1998, 2009).

For the most part, overt attention (where the eyes are fixating) is tightly linked to covert attention (where the mind is attending). Although the most thorough and effective processing is reserved for that done in the fovea, some processing can also be accomplished for information in the parafovea and peripheral vision (see Schotter, Angele, & Rayner, 2012, for a comprehensive review). In simple laboratory tasks, eye location and attention may be separated or dissociated (Posner, 1980). However, even in these simple laboratory tasks, it takes much conscious effort to keep the eyes fixated while attending to another location. Furthermore, in more complex tasks like reading, such dissociations, when they occur, are generally the consequence of programming an eye movement. It is well established that, in many tasks, a shift of attention to the saccade target precedes the actual eye movement (Deubel & Schneider, 1996; Hoffman & Subramaniam, 1995; Kowler, Anderson, Dosher, & Blaser, 1995; Rayner, McConkie, & Ehrlich, 1978).

Since eye movements are motor programs, they take some time to plan and execute (approximately 150–175 ms) under circumstances in which no other cognitive processing is required and one's task is merely to move one's eyes to the location of a fixation target (Becker & Jürgens, 1979; McPeek, Skavenski, & Nakayama, 2000; Rayner, Slowiaczek, Clifton, & Bertera, 1983). Additionally, they are largely ballistic, meaning it is difficult (though possible) to change the trajectory of a saccade once it is initiated.

It has generally been assumed that there is near perfect binocular coordination during reading and that the two eyes start moving at the same time and land on the same letter. However, recent research (see Kirkby, Webster, Blythe, & Liversedge, 2008, for a comprehensive review) has demonstrated that up to 40–50% of the time the eyes are on different letters (sometimes as a result of the two eyes being crossed—the left eye fixating further to the right than the right eye—and other times uncrossed—the left eye much further to the left than the right eye; Liversedge, Rayner, White, Findlay, & McSorley, 2006; Liversedge, White, Findlay, & Rayner, 2006).
Interestingly, the amount of disparity tends to be greater in beginning readers than skilled readers (Blythe, Liversedge, Joseph, White, Findlay, & Rayner, 2006). More importantly, perhaps, word frequency and case alternation affect fixation duration in reading (as we will discuss in more detail below), but not fixation disparity (Juhasz, Liversedge, White, & Rayner, 2006).

At one time, researchers doubted that eye movements reflected much of cognitive processing and believed that the eyes and mind were not tightly linked during information processing tasks, such as reading. They came to this conclusion based on the observation that saccade latencies in simple oculomotor tasks are relatively long (compared to, for example, the average fixation duration in reading). These relatively long latencies (and high variability in the latencies) led them to argue that there wasn't sufficient time for eye fixations to be influenced by cognitive processes. However, these conclusions were based on the assumption that programming saccades and cognitive processes occur in a serial fashion. More recent research has established that the eyes and mind are very tightly linked, that saccades can be programmed in parallel (Becker & Jürgens, 1979), and that information processing can proceed concurrently with saccade programming (Rayner, 1998, 2009). Furthermore, information is obtained from words in the parafovea (see discussion below) and this parafoveal processing can facilitate foveal processing, leading to shorter fixation times than would be expected otherwise.

2.1. Eye movements during reading

Now that a basic overview has been provided, we turn to a discussion of what is known about eye movements during reading from research over the years. The majority of work on reading has been conducted in English, and the general findings we report in this section concern reading English. Because the topic of this book is subtitles, we realize that our audience might be interested in characteristics of reading in other languages as well. Therefore, at the end of this chapter there is a section illustrating the characteristics of reading in languages other than English. That being said, while the physical characteristics and anatomy of the eyes and eye movements reported above are not language specific, the measures reported in the immediately following section refer to reading English.

During reading, the average saccade extends 7–9 letter spaces. Letter spaces are the appropriate metric to use, rather than visual angle, because letter spaces are more deterministic of saccade length than visual angle in reading. That is, if font size is held constant and the text is read either at a close reading distance (so that fewer characters comprise one degree of visual angle) or a far reading distance (so that more letters comprise one degree of visual angle), eye movements tend to extend the same number of characters (not the same degree of visual angle) across reading distances (Morrison & Rayner, 1977). Not all saccades move forward (to the right) with the direction of the text. Regressions, which comprise 10–15% of the eye movements during reading, actually move backward to previously read or skipped text. The percentage of regressions depends on the difficulty of the text; more difficult texts lead to more regressive saccades. Similarly, more difficult texts lead to longer fixation durations and shorter saccades.
Fixation duration, saccade length, and percent regressions—all of which are considered global measures of reading difficulty—are all clearly influenced by text difficulty. Additionally, the type of reading material and the reader's goal during reading influence these measures (Rayner & Pollatsek, 1989).

Although global measures are important indications of processing, more often in reading research local effects related to fixation time on a specific word in a sentence are reported, rather than the average duration on all words in the text. These measures include first fixation duration (the duration of the first fixation on a word), single fixation duration (the duration of the fixation on a word in cases where the word was fixated only once, not including regressive fixations), and gaze duration (the sum of all fixations on a word before the eyes move to another word). These measures are important because global measures would only be useful if readers fixated every word and each word only once, which is clearly not the case. Many words are fixated more than once and approximately a third of the words in the text are skipped (not fixated directly) during reading. Oftentimes these skipped words are short, very common words (such as the, of, in, and), although longer words that are highly predictable from prior text are also frequently skipped (Ehrlich & Rayner, 1981; Rayner & Well, 1996). When words are skipped, there is good reason to believe that they were processed while the eyes were fixating the previous word. Conversely, the words that are refixated (fixated more than once before the eyes move to the next word) are more likely to be longer and less common, taking more time and more fixations to fully process for meaning. The above mentioned local measures are useful to estimate how long it takes to process a word (Rayner, 1998); the sketch below illustrates how they follow from a fixation sequence.
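To make these definitions concrete, here is a minimal sketch (ours, not from the chapter; Python is used purely for illustration) of how first fixation duration, single fixation duration, and gaze duration can be computed from an eye-tracking record. The trace format and the function name are illustrative assumptions.

```python
# Each fixation is (word_index, duration_ms), in the order they occurred.
def first_pass_measures(fixations, word_index):
    """Return (first_fixation_ms, single_fixation_ms, gaze_ms) for one word,
    or None if the word was skipped during first-pass reading."""
    first_pass = []
    for idx, duration in fixations:
        if idx == word_index:
            first_pass.append(duration)
        elif first_pass:
            break          # the eyes left the word: first pass is over
        elif idx > word_index:
            return None    # the word was passed over without being fixated
    if not first_pass:
        return None
    first_fixation = first_pass[0]
    single_fixation = first_pass[0] if len(first_pass) == 1 else None
    gaze_duration = sum(first_pass)
    return first_fixation, single_fixation, gaze_duration

# Word 2 is fixated twice before the eyes move on; the later regression back
# to word 2 (fifth fixation) does not count toward these first-pass measures.
trace = [(1, 210), (2, 180), (2, 150), (3, 240), (2, 220), (4, 200)]
print(first_pass_measures(trace, 2))   # -> (180, None, 330)
```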
2.2. The perceptual span

As mentioned above, foveal vision represents the area with highest acuity; as we also noted, the reason we move our eyes during reading is to place the fovea over the words we wish to process. However, there is still some processing that can be done outside the fovea, and much research has been done to investigate the size of the perceptual span—also called the region of effective vision or the functional field of view—during a fixation in reading. We may have the intuition that when we read we are able to see an entire line of text (or even the entire text) quite clearly, but this intuition is wrong. Experiments using the gaze-contingent moving window paradigm (see Figure 1) introduced by McConkie and Rayner (1975; see also Rayner & Bertera, 1979) have clearly demonstrated that readers have little or no access to information about the characters outside a relatively small window around the character they are fixating. In these experiments, the eyes are monitored and the text around the point of fixation is revealed (i.e. accurate information is provided within the window area), with the text outside that window replaced by other letters (typically x's or random letters). Eye movements are recorded as people read with windows of various sizes, to determine the window size that yields reading that is equivalent to when there is no window (i.e. with all letters revealed/available). When reading with a moving window is like normal reading, no information would have been obtained from the letters that are masked, and the window size would thus provide a valid estimate of the perceptual span. Research using this paradigm has revealed that readers of English typically have a perceptual span comprising a region from 3–4 characters to the left of fixation to 14–15 characters to the right of fixation. If the presently fixated word and the following word are normal and the rest of the letters are replaced with other letters, readers are typically unaware that anything is strange about the text and reading is only slightly slower than without a window. If two words to the right of the fixated word are revealed/available, reading is generally equivalent to normal. Furthermore, readers obtain no information from lines above and below the line currently being read (Pollatsek, Raney, La Gasse, & Rayner, 1993). Finally, moving mask experiments (see Figure 1)—where the fixated and immediately surrounding letters are masked and letters outside the moving mask are revealed—have demonstrated that it is very difficult (nearly impossible) to read when the fovea cannot be used to obtain information and one must rely solely on parafoveal and peripheral vision (Rayner & Bertera, 1979; Rayner, Inhoff, Morrison, Slowiaczek, & Bertera, 1981).

Figure 1. Examples of a moving window (with a thirteen character window), a moving mask, and the boundary paradigm. When the reader's eye movement crosses an invisible boundary location (the letter l at the end of perceptual), the preview word hand changes to the target word span. The asterisk represents the location of the eyes in each example.

Other research investigating how much information is obtained to the right of fixation involves another gaze-contingent display change paradigm (see Figure 1) called the boundary change paradigm (Rayner, 1975). This research has indicated that a valid preview of the upcoming word before the eyes fixate it leads to shorter fixations (by about 30–50 ms) on the word once it is looked at, compared to a condition in which the preview of the word is invalid (i.e. another word, a nonword, or a random letter string in the location of the target word). This speeded processing with a valid preview compared to an invalid preview is termed preview benefit. This research has revealed that readers generally do not represent word information in a literal visual representation, but rather in an abstract form (McConkie & Zola, 1979; Rayner, McConkie, & Zola, 1980). Preview benefit seems to be largely due to orthographic and phonological information (Rayner, 1998, 2009).

The perceptual span varies depending on reading skill; beginning readers (Häikiö, Bertram, Hyönä, & Niemi, 2009; Rayner, 1986) and dyslexic readers (Rayner, Murphy, Henderson, & Pollatsek, 1989) have smaller perceptual spans than more skilled readers. The smaller span is attributed to the fact that more cognitive effort is allocated to the fixated word and fewer attentional resources can be used to identify other words. The perceptual span also seems to change slightly as we age; older readers have smaller (Laubrock, Kliegl, & Engbert, 2006; Rayner, Reichle, Stroud, Williams, & Pollatsek, 2006) and less asymmetrical perceptual spans than those of younger readers (Rayner, Castelhano, & Yang, 2009).
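As an illustration of the moving-window idea, the following sketch (ours, not an actual experimental implementation; the window sizes and the choice to preserve spaces are assumptions) shows how a window of intact text around the fixated character can be combined with masking letters elsewhere:

```python
# Build a moving-window stimulus: letters inside the window stay intact,
# letters outside are replaced with 'x'; spaces are preserved here so that
# word-length information remains visible outside the window.
def moving_window(text, fixation_index, left=4, right=14, mask="x"):
    out = []
    for i, ch in enumerate(text):
        inside = fixation_index - left <= i <= fixation_index + right
        out.append(ch if inside or ch == " " else mask)
    return "".join(out)

text = "the quick brown fox jumps over the lazy dog"
for fixation in (4, 12, 20):
    print(moving_window(text, fixation))
    print(" " * fixation + "*")   # mark the simulated fixation location
```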
2.3. Linguistic influences on reading behavior

Research on eye movements during reading has shown that how long the eyes remain on a word reflects the ease or difficulty with which that word is processed. The amount of time one spends fixating a word while reading is variable and can depend on many linguistic/lexical factors, such as the frequency of that word (Inhoff & Rayner, 1986; Rayner & Duffy, 1986), how predictable that word is (Ehrlich & Rayner, 1981; Rayner & Well, 1996), how many meanings that word has (Duffy, Morris, & Rayner, 1988; Sereno, O'Donnell, & Rayner, 2006), the age at which the meaning of the word was acquired (Juhasz & Rayner, 2003, 2006), semantic relations between the word and prior words in the sentence (Carroll & Slowiaczek, 1986; Morris, 1994), how familiar the word is (Williams & Morris, 2004), and so on (see Rayner, 1998, 2009, for reviews). In addition to these largely lexical variables, higher level discourse and syntactic variables also influence readers' eye fixations. For example, when readers read garden-path sentences, fixations are typically quite long (and there are also many regressions) when they encounter disambiguating information (Frazier & Rayner, 1982; Rayner, Carlson, & Frazier, 1983; see Clifton, Staub, & Rayner, 2007, and Staub & Rayner, 2007, for extended reviews regarding discourse and syntactic influences on eye movements).

The strongest evidence in support of the claim that linguistic/lexical processing strongly influences how long readers look at words comes from what are called "disappearing text" studies. In such studies, a word either disappears or is masked shortly after (50–60 ms) it is fixated (Ishida & Ikeda, 1989; Liversedge, Rayner, White, Vergilino-Perez, Findlay, & Kentridge, 2004; Rayner, Liversedge, White, & Vergilino-Perez, 2003; Rayner, Liversedge, & White, 2006). Results from these studies show that even if the word is only present for a short amount of time (60 ms) people can read quite normally (i.e. they fixate that location for the same amount of time they would if the word had not disappeared). Furthermore, how long the eyes remain on a word once it has disappeared is highly determined by the frequency of that word, indicating quite strongly that the determining force on when to move the eyes forward during reading is the cognitive processing associated with the fixated word. Interestingly, if the following word also disappears or is masked at the same time as the fixated word it causes quite a disruption to processing (Rayner, Liversedge, & White, 2006), indicating that the word to the right of fixation receives some processing before it is fixated and that this preprocessing is very important to normal reading.

In summary, the amount of information obtained on a given fixation during reading is limited (approximately 3–4 characters to the left and 14–15 characters to the right of fixation). Furthermore, the amount of information available for word identification is even smaller (approximately 7–8 characters to the right). The word following the fixated word is important to normal reading, as that word is processed to some extent before it is fixated (as evidenced by the preview benefit and disappearing text studies). In situations in which the following word is fully identified, that word is skipped, and the word following it is fixated.
Finally, the strongest influencing factor on how long a word needs to be fixated is the ease or difficulty with which it is processed.

2.4. Models of eye movements during reading

Based on all the research that has been conducted on eye movements during reading over the years, a number of models of eye movement control in reading have been proposed, the most influential of which is the E-Z Reader model (Pollatsek, Reichle, & Rayner, 2006; Rayner, Ashby, Pollatsek, & Reichle, 2004; Rayner, Reichle, & Pollatsek, 1998; Reichle, Pollatsek, Fisher, & Rayner, 1998; Reichle, Pollatsek, & Rayner, 2006; Reichle, Rayner, & Pollatsek, 2003). For brevity, the other models will not be discussed in this chapter, but a comprehensive review can be found in a special issue of Cognitive Systems Research (2006, vol. 7). The E-Z Reader model is a robust model that can account for all of the previously stated data, predict fixation durations during reading, which words will be skipped, and which words will be refixated. It can accommodate both global and local measures of processing, as can most of the competing models. The currently accepted models are very similar in many ways, but differ on specific issues about how they explain certain effects. E-Z Reader happens to be very transparent and can therefore make clear predictions; when it cannot account for data, it is clear why it can't (making necessary adjustments apparent in the model itself). The model has also been able to account for data that otherwise might have proved to be enigmatic. However, it must be noted that the model does have some limitations: until recently it did not accommodate higher order processes due to sentence parsing or discourse variables. A more recent version of the model attempts to deal with some aspects of higher order processing (see Reichle, Warren, & McConnell, 2009), but there remain aspects of higher order processing that are not dealt with. Furthermore, the assumption that lexical processing drives the eyes forward during reading is just that—an assumption. However, it is quite consistent with the results from the disappearing text studies we reviewed above. Even if it is an assumption, it is not an unreasonable one, due to the fact that higher order processes generally only intervene when the processing system goes awry (see Rayner, Warren, Juhasz, & Liversedge, 2004).

In summary, recent research on eye movements and reading has led to the implementation of computational models, which can closely simulate actual behavioral data of eye movements during reading.
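To give a feel for what such models do, here is a deliberately crude toy simulation (ours, not E-Z Reader itself; the parameters and the linear log-frequency rule are invented for illustration). It captures two qualitative ingredients discussed above: frequent words are identified faster, and when identification is fast enough the word can be identified parafoveally and skipped.

```python
# Toy illustration only: NOT the actual E-Z Reader equations or parameters.
import math

def simulate_reading(words, base_ms=250.0, slope_ms=25.0, skip_threshold_ms=150.0):
    """Return (word, simulated_fixation_ms) pairs; None marks a skipped word.

    words: list of (token, frequency_per_million) pairs.
    """
    schedule = []
    for token, freq in words:
        # Identification is assumed faster for more frequent words.
        ident_ms = base_ms - slope_ms * math.log10(freq + 1)
        if ident_ms < skip_threshold_ms:
            schedule.append((token, None))   # identified in the parafovea -> skipped
        else:
            schedule.append((token, round(ident_ms)))
    return schedule

sentence = [("the", 60000), ("archaeologist", 3), ("found", 600), ("an", 55000), ("amulet", 1)]
for token, fix in simulate_reading(sentence):
    print(f"{token:>14}: {'skipped' if fix is None else f'{fix} ms'}")
```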
2.5. The influence of language and orthography

Because subtitles are used to translate information (i.e. the words from a movie or TV program) from one language to another, we think it is important to consider comparisons between different languages and orthographies with respect to reading. As mentioned previously, the majority of reading research with respect to eye movements has been conducted on English text. However, there has been much important work done on other languages as well, and some important similarities and differences between languages arise. One must bear in mind, when comparing reading in different languages, that one can often find similarities and differences between languages when considering the same measure or topic, depending on the grain of analysis. This will be discussed in detail below with respect to each topic, but the important thing to remember is that languages vary greatly on many facets—most importantly for our purposes, in terms of orthography—and one must be careful with the terminology or grain of analysis one uses to compare them. A list of various eye movement metrics and comparisons across languages can be seen in Table 1.

2.6. Eye movement measures across languages

As mentioned previously, saccade lengths when reading English are approximately 7–9 characters. At first glance, one would think that saccade lengths in Chinese (2 characters; Shen, 1927; Wang, 1935; Stern, 1978) and Japanese (3.5 characters; Ikeda & Saida, 1978) are comparatively much shorter. Importantly, though, "characters" mean very different things in English (an alphabetic language for which a character constitutes a letter) and languages such as Chinese (for which a character constitutes a morpheme) and Japanese, which is made up of two orthographic systems: Kanji (comprised of morphemic characters) and Kana (comprised of syllabic characters).

Table 1. Eye movement measures across languages, with citation references.

Saccade length (characters)
  English    7–9    Rayner, 1978, 1998
  Chinese    2      Shen, 1927; Wang, 1935; Stern, 1978
  Japanese   3.5    Ikeda & Saida, 1978
  Hebrew     5.5    Pollatsek, Bolozky, Well, & Rayner, 1981

Perceptual span (characters)    Left            Right
  English                       3–4             14–15    McConkie & Rayner, 1975
  Japanese                      not specified   6        Ikeda & Saida, 1978
  Chinese                       1               2–3      Chen & Tang, 1998; Inhoff & Liu, 1998

Writing systems like those of Chinese and Japanese are comprised of words with fewer characters than alphabetic languages such as English, and more information is contained within a character (e.g. one morpheme or syllable, compared to one letter). Therefore, while saccades of readers of English may subtend a longer distance (in letters/characters) than those of readers of Chinese or Japanese, when considered in terms of words the saccade lengths are comparable across languages. Readers of Hebrew exhibit saccades of approximately 5.5 characters (Pollatsek, Bolozky, Well, & Rayner, 1981), and although Hebrew is also an alphabetic language, it is more densely packed than English. Hebrew is written without vowels—words are shorter than those in English—and function words are clitic, meaning that they attach like affixes to other words. Therefore, although saccade length is shorter than English in terms of number of characters, it is comparable to English and other languages in terms of words, or informational content. The aforementioned data are important to show that informational density of the text determines how far the eyes move during a saccade while reading. This conjecture is corroborated by studies of English in which the difficulty of the text—and consequently the information density—is manipulated. As informational density increases, saccade length decreases (Rayner, 1998).
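The characters-versus-words point can be made with simple arithmetic. In the sketch below (ours; the characters-per-word figures are rough illustrative assumptions, while the saccade lengths come from Table 1), the very different character-based saccade lengths translate into similar word-based ones:

```python
# Rough conversion of saccade length from characters to words. The
# characters-per-word values (including spacing for English) are only
# illustrative assumptions, not measured quantities from this chapter.
saccade_chars = {"English": 8, "Chinese": 2, "Japanese": 3.5, "Hebrew": 5.5}
chars_per_word = {"English": 6, "Chinese": 2, "Japanese": 3, "Hebrew": 5}

for language, saccade in saccade_chars.items():
    words_per_saccade = saccade / chars_per_word[language]
    print(f"{language:>8}: {saccade} chars/saccade ~ {words_per_saccade:.2f} words/saccade")
```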
Similar to saccade metrics, measures of reading rate across languages are superficially different (when measured in terms of letters/characters) but in reality are very similar (when measured in terms of the amount of information obtained in a given time). For example, the reading rate in Hebrew (average number of words per minute) is equivalent to that of English when based on the number of words in the English translation of the Hebrew text (Pollatsek et al., 1981). Likewise, reading rate in Chinese and Japanese is fairly equivalent to English when based on words.

Another important difference between orthographies is the direction in which the text is read. In general, the direction in which the print is read does not affect reading. In comparative studies of readers of different orthographies, all observed differences in reading speed were due to the more familiar orthography being read more easily than the less familiar one (Shen, 1927; Sun et al., 1985). Experiments that manipulate the direction in which text is read have come to a similar conclusion. Tinker (1955) found that English readers reading English text printed vertically were initially considerably slower than when they read horizontally arranged text, but improved substantially after four weeks of practice (though they remained somewhat slower). Similarly, Kolers (1972) found comparable improvement with practice in English readers trained to read from right to left. Furthermore, beginning readers can learn to read from left to right just as easily as from right to left (Clay, 1979). There might be a physiological reason why reading horizontally might be better than reading vertically (and indeed, the majority of the world's languages have horizontal orthographies): visual acuity falls off faster in the vertical direction than in the horizontal direction. Therefore, it might be easier to identify words in the parafovea to the left and right of fixation than above and below fixation.

2.7. The perceptual span across languages

Studies using the moving window technique in different languages have found the following comparisons (see Table 1). The writing system that is most different from English is Chinese. Typically, now, Chinese is read from left to right in mainland China, although that has not always been true historically. A number of recent studies have investigated readers reading in the horizontal direction. These studies have found that the perceptual span in Chinese extends 1 character to the left and 2–3 characters to the right of fixation (Chen & Tang, 1998; Inhoff & Liu, 1998). The perceptual span of Japanese (a mixture of morphemic Kanji and syllabic Kana characters) extends approximately 6 characters to the right of fixation (Ikeda & Saida, 1978). Hebrew is read from right to left, unlike English; Pollatsek et al. (1981) found that the perceptual span of Hebrew readers was asymmetrical, containing more letters to the left of fixation than to the right. But those same readers, when reading English, had the canonical asymmetry—more letters to the right of fixation than to the left.

From these data, one might expect that the size of the perceptual span across languages is quite variable. Indeed, in terms of character spaces it is. But, again, when one considers that the amount of information contained within a given character space also varies across languages, it is apparent that the perceptual span is quite similar across languages in terms of information obtained. The perceptual span in Hebrew contains fewer letters than that of English. As mentioned above, the words in Hebrew are typically shorter than those in English, and the perceptual spans for readers of the two languages contain approximately the same number of words. Likewise, the average word in Chinese is approximately 2 characters long (but many are 1, and some are 3–4), so in terms of words the perceptual span is probably not that different from English.
Furthermore, in Japanese, when only logographic (Kanji) characters are used, the perceptual span is shorter than when mostly syllabic (Kana) characters are used (Osaka, 1987), indicating that informational density of the text modulates the perceptual span. In short, although the perceptual span for readers of different languages varies in size with respect to number of characters, it is similar in terms of number of words. Furthermore, Pollatsek et al.'s (1981) data on the direction of asymmetry indicate that the perceptual span is not "hard-wired", but depends on the direction of the text being read, as shown by bilingual readers' ability to switch the direction of the asymmetry based on the language they are reading. In short, the perceptual span is larger ahead of fixation than behind it.

3. Summary and implications for subtitle research

In this chapter we have reviewed some basic information regarding eye movements during reading. First, we reviewed the anatomical characteristics of the eyes per se and noted that there are areas of high acuity (the fovea) and relatively lower acuity (parafovea and periphery) that strongly influence what our eyes do when we read. We also reviewed the two main characteristics of eye movements: saccades (the actual movements of the eyes) and fixations (the periods of time when the eyes are relatively stable), and we discussed various measures of eye movements: global measures (such as saccade length and reading speed measured in words per minute) and local measures (such as first fixation duration, single fixation duration, and gaze duration). We also provided an overview of research on the size of the perceptual span—the area of the text from which useful information is obtained—and how it is measured (e.g. gaze-contingent display change and disappearing text experiments). Then we discussed how linguistic factors of the text (for example, word length, frequency, polysemy, predictability, age of acquisition, familiarity, and semantic relations) influence eye movements during reading. We then briefly reviewed models of eye movements during reading, focusing on the E-Z Reader model. In the last section of the chapter we addressed similarities and differences between eye movements during reading languages of different orthographies. In particular, eye movements seem to differ greatly depending on orthography, but when considering the information obtained on a given fixation, or saccade size based on number of words, they are very similar.

It is obviously important, when talking about reading in any context—including reading subtitles—to understand how the eyes work and what the mind is doing as one reads. In this chapter we have provided some background information regarding eye movements in reading. But it is also the case that more work is needed to fully understand how people shift their attention and eye location when reading subtitles (d'Ydewalle & De Bruycker, 2007). Research on how people look at advertisements (Pieters, Wedel, & Liechty, 2010; Rayner, Miller, & Rotello, 2008; Rayner, Rotello, Stewart, Keir, & Duffy, 2001) is quite interesting in the context of examining how viewers alternate their attention between pictorial and written information.
Like research on eye movements when examining advertisements, research on eye movements when viewing a movie/TV program with subtitles would seem to provide useful information regarding how people integrate information across differ-ent channels and how parallel/serial processing comes into play as they deal with a complex stimulus array. . References Becker, W ., & Jürgens, R. (). Analysis of the Saccadic System by Means of Double Step Stimuli. Vision Research, , –. Blythe, H.I., Liversedge, S.P ., Joseph, H.S.S.L., White, S.J., Findlay, J.M., & Rayner, K. (). The Binocular Coordination of Eye Movements during Reading in Children and Adults. Vision Research, , –. Chen, H., & Tang, C. (). The Effective Visual Field in Reading Chinese. Reading and Writing, , –. Clay, M. (). The Early Detection of Reading Difficulties. Auckland, New Zealand: Heinemann. Clifton, C., Staub, A., & Rayner, K. (). Eye Movements in Reading Words and Sentences. In R. van Gompel, M.H. Fischer, W .S. Murray, & R.L. Hill (Eds), Eye Movements: A Window on Mind and Brain (pp –). Elsevier. Deubel, H., & Schneider, W .X. (). Saccade Target Selection and Object Recognition: Evidence for a Common Attentional Mechanism. Vision Research, , –. Duffy, S.A., Morris, R.K., & Rayner, K. (). Lexical Ambiguity and Fix-ation Times in Reading. Journal of Memory and Language, , –. d’Ydewalle, G., & De Bruycker, W . (). Eye Movements of Children and Adults while Reading Television Subtitles. European Psychologist, , Eye movements in reading  –. Ehrlich, S.E., & Rayner, K. (). Contextual Effects on Word Perception and Eye Movements during Reading. Journal of Verbal Learning and Ver-bal Behavior, , –. Frazier, L., & Rayner, K. (). Making and Correcting Errors during Sen-tence Comprehension: Eye Movements in the Analysis of Structurally Ambiguous Sentences. Cognitive Psychology, , –. Häikiö, T., Bertram, R., Hyona, J., & Neimi, P . (). Development of the Letter Identity Span in Reading: Evidence from the Eye Movement Moving Window Paradigm. Journal of Experimental Child Psychology, , –. Hoffman, J.E., & Subramaniam, B. (). The Role of Visual Attention in Saccadic Eye Movements. Perception and Psychophysics, , –. Ikeda, M., & Saida, S. (). Span of Recognition in Reading. Vision Re-search, , –. Inhoff, A.W ., & Liu, W . (). The Perceptual Span and Oculomotor Ac-tivity during the Reading of Chinese Sentences. Journal of Experimental Psychology: Human Perception and Performance, , –. Inhoff, A.W ., & Rayner, K. (). Parafoveal Word Processing during Eye Fixations in Reading: Effects of Word Frequency. Perception and Psy-chophysics, , –. Irwin, D.E. (). Lexical Processing during Saccadic Eye Movements. Cog-nitive Psychology, , –. Ishida, T., & Ikeda, M. (). Temporal Properties of Information Extrac-tion in Reading Studied by a Text–mask Replacement Technique. Jour-nal of the Optical Society A: Optics and Image Science, , –. Juhasz, B.J., Liversedge, S.P ., White, S.J., & Rayner, K. (). Binocular Coordination of the Eyes during Reading: Word Frequency and Case Alternation Affect Fixation Duration but not Fixation Disparity. Quar-terly Journal of Experimental Psychology, , –. Juhasz, B.J., & Rayner, K. (). Investigating the Effects of a Set of In-tercorrelated Variables on Eye Fixation Durations in Reading. 
Journal of Experimental Psychology: Learning, Memory and Cognition.
——— (2006). The Role of Age-of-Acquisition and Word Frequency in Reading: Evidence from Eye Fixation Durations. Visual Cognition.
Kirkby, J.A., Webster, L.A.D., Blythe, H.I., & Liversedge, S.P. (2008). Binocular Coordination during Reading and Non-Reading Tasks. Psychological Bulletin.
Kolers, P. (1972). Experiments in Reading. Scientific American.
Kowler, E., Anderson, E., Dosher, B., & Blaser, E. (1995). The Role of Attention in Programming Saccades. Vision Research.
Laubrock, J., Kliegl, R., & Engbert, R. (2006). SWIFT Explorations of Age Differences in Eye Movements during Reading. Neuroscience and Biobehavioral Reviews.
Liversedge, S.P., Rayner, K., White, S.J., Findlay, J.M., & McSorley, E. (2006). Binocular Coordination of the Eyes during Reading. Current Biology.
Liversedge, S.P., White, S.J., Findlay, J.M., & Rayner, K. (2006). Binocular Coordination of Eye Movements during Reading. Vision Research.
Liversedge, S.P., Rayner, K., White, S.J., Vergilino-Perez, D., Findlay, J.M., & Kentridge, R.W. (2004). Eye Movements while Reading Disappearing Text: Is There a Gap Effect in Reading? Vision Research.
McConkie, G.W., & Rayner, K. (1975). The Span of the Effective Stimulus during a Fixation in Reading. Perception and Psychophysics.
McConkie, G.W., & Zola, D. (1979). Is Visual Information Integrated across Successive Fixations in Reading? Perception and Psychophysics.
McPeek, R.M., Skavenski, A.A., & Nakayama, K. (2000). Concurrent Processing of Saccades in Visual Search. Vision Research.
Morris, R.K. (1994). Lexical and Message-Level Sentence Context Effects on Fixation Times in Reading. Journal of Experimental Psychology: Learning, Memory, and Cognition.
Morrison, R.E., & Rayner, K. (1977). Saccade Size in Reading Depends upon Character Spaces and Not Visual Angle. Perception and Psychophysics.
Osaka, N. (1987). Effect of Peripheral Visual Field Size upon Eye Movements during Japanese Text Processing. In J.K. O'Regan & A. Lévy-Schoen (Eds.), Eye Movements: From Physiology to Cognition. Amsterdam: North Holland.
Pieters, R., Wedel, M., & Liechty, J. (2010). The Influence of Goals on the Time Course of Eye Movements across Advertisements. Journal of Experimental Psychology: Applied.
Pollatsek, A., Bolozky, S., Well, A.D., & Rayner, K. (1981). Asymmetries in the Perceptual Span for Israeli Readers. Brain and Language.
Pollatsek, A., Raney, G.E., La Gasse, L., & Rayner, K. (1993). The Use of Information below Fixation in Reading and Visual Search. Canadian Journal of Experimental Psychology.
Pollatsek, A., Reichle, E.D., & Rayner, K. (2006). Tests of the E-Z Reader Model: Exploring the Interface between Cognition and Eye-Movement Control. Cognitive Psychology.
Posner, M.I. (1980). Orienting of Attention. Quarterly Journal of Experimental Psychology.
Rayner, K. (1975). The Perceptual Span and Peripheral Cues in Reading. Cognitive Psychology.
——— (1978). Eye Movements in Reading and Information Processing. Psychological Bulletin.
——— (1986). Eye Movements and Perceptual Span in Beginning and Skilled Readers.
Journal of Experimental Child Psychology.
——— (1998). Eye Movements in Reading and Information Processing: 20 Years of Research. Psychological Bulletin.
——— (2009). The Thirty-Fifth Sir Frederick Bartlett Lecture: Eye Movements during Reading, Scene Perception and Visual Search. Quarterly Journal of Experimental Psychology.
Rayner, K., Ashby, J., Pollatsek, A., & Reichle, E.D. (2004). The Effects of Frequency and Predictability on Eye Fixations in Reading: Implications for the E-Z Reader Model. Journal of Experimental Psychology: Human Perception and Performance.
Rayner, K., & Bertera, J.H. (1979). Reading without a Fovea. Science.
Rayner, K., Carlson, M., & Frazier, L. (1983). The Interaction of Syntax and Semantics during Sentence Processing: Eye Movements in the Analysis of Semantically Biased Sentences. Journal of Verbal Learning and Verbal Behavior.
Rayner, K., Castelhano, M.S., & Yang, J. (2009). Eye Movements and the Perceptual Span of Older and Younger Readers. Psychology and Aging.
Rayner, K., & Duffy, S.A. (1986). Lexical Complexity and Fixation Times in Reading: Effects of Word Frequency, Verb Complexity, and Lexical Ambiguity. Memory and Cognition.
Rayner, K., Inhoff, A.W., Morrison, R.E., Slowiaczek, M.L., & Bertera, J.H. (1981). Masking of Foveal and Parafoveal Vision during Eye Fixations in Reading. Journal of Experimental Psychology: Human Perception and Performance.
Rayner, K., Liversedge, S.P., & White, S.J. (2006). Eye Movements when Reading Disappearing Text: The Importance of the Word to the Right of Fixation. Vision Research.
Rayner, K., Miller, B., & Rotello, C.M. (2008). Eye Movements when Looking at Print Advertisements: The Goal of the Viewer Matters. Applied Cognitive Psychology.
Rayner, K., McConkie, G.W., & Ehrlich, S.F. (1978). Eye Movements and Integrating Information across Saccades. Journal of Experimental Psychology: Human Perception and Performance.
Rayner, K., McConkie, G.W., & Zola, D. (1980). Integrating Information across Eye Movements. Cognitive Psychology.
Rayner, K., Murphy, L.A., Henderson, J.M., & Pollatsek, A. (1989). Selective Attentional Dyslexia. Cognitive Neuropsychology.
Rayner, K., & Pollatsek, A. (1989). The Psychology of Reading. Englewood Cliffs, NJ: Prentice Hall.
Rayner, K., Reichle, E.D., & Pollatsek, A. (1998). Eye Movement Control in Reading: An Overview and Model. In G. Underwood (Ed.), Eye Guidance in Reading and Scene Perception. Oxford, England: Elsevier.
Rayner, K., Reichle, E.D., Stroud, M.J., Williams, C.C., & Pollatsek, A. (2006). The Effect of Word Frequency, Word Predictability, and Font Difficulty on the Eye Movements of Young and Older Readers. Psychology and Aging.
Rayner, K., Rotello, C.M., Stewart, A., Keir, J., & Duffy, S.A. (2001). Integrating Text and Pictorial Information: Eye Movements when Looking at Print Advertisements. Journal of Experimental Psychology: Applied.
Rayner, K., Slowiaczek, M.L., Clifton, C., & Bertera, J.H. (1983). Latency of Sequential Eye Movements: Implications for Reading. Journal of Experimental Psychology: Human Perception and Performance.
Rayner, K., Warren, T., Juhasz, B.J., & Liversedge, S.P. (2004). The Effect of Plausibility on Eye Movements in Reading.
Journal of Experimental Psychology: Learning, Memory, and Cognition.
Rayner, K., & Well, A.D. (1996). Effects of Contextual Constraint on Eye Movements in Reading: A Further Examination. Psychonomic Bulletin and Review.
Reichle, E.D., Pollatsek, A., Fisher, D.L., & Rayner, K. (1998). Toward a Model of Eye Movement Control in Reading. Psychological Review.
Reichle, E.D., Pollatsek, A., & Rayner, K. (2006). E-Z Reader: A Cognitive-Control, Serial-Attention Model of Eye-Movement Behavior during Reading. Cognitive Systems Research.
Reichle, E.D., Rayner, K., & Pollatsek, A. (2003). The E-Z Reader Model of Eye Movement Control in Reading: Comparison to Other Models. Behavioral and Brain Sciences.
Schotter, E.R., Angele, B., & Rayner, K. (2012). Parafoveal Processing in Reading. Attention, Perception & Psychophysics.
Sereno, S.C., O'Donnell, P.J., & Rayner, K. (2006). Eye Movements and Lexical Ambiguity Resolution: Investigating the Subordinate Bias Effect. Journal of Experimental Psychology: Human Perception and Performance.
Shen, E. (1927). An Analysis of Eye Movements in the Reading of Chinese. Journal of Experimental Psychology.
Staub, A., & Rayner, K. (2007). Eye Movements and On-Line Comprehension Processes. In G.M. Gaskell (Ed.), The Oxford Handbook of Psycholinguistics. Oxford: Oxford University Press.
Stern, J.A. (1978). Eye Movements, Reading and Cognition. In J.W. Senders, D.F. Fisher, & R.A. Monty (Eds.), Eye Movements and the Higher Psychological Functions. Hillsdale, NJ: Lawrence Erlbaum Associates.
Sun, F., Morita, M., & Stark, L.W. (1985). Comparative Patterns of Reading Eye Movement in Chinese and English. Perception and Psychophysics.
Tinker, M.A. (1955). Prolonged Reading Tasks in Visual Research. Journal of Applied Psychology.
Wang, F.C. (1935). An Experimental Study of Eye-Movements in the Silent Reading of Chinese. The Experimental School Journal.
Williams, R.S., & Morris, R.K. (2004). Eye Movements, Word Familiarity, and Vocabulary Acquisition. European Journal of Cognitive Psychology.

Elizabeth R. Schotter and Keith Rayner
University of California, San Diego
Distribution of the individual coordinates of a uniform random vector on a high-dimensional sphere

Question (dohmatob, Nov 13, 2018). Let $X = (X_1, \ldots, X_n)$ be a random vector uniformly distributed on the $n$-dimensional sphere of radius $R > 0$. Intuitively, I think that for large $n$ every coordinate $X_i$ is approximately normally distributed with variance $R^2/n$, but I'm not quite sure.

More formally, if $\Phi$ is the CDF of the standard Gaussian $N(0,1)$, what is a good upper bound for the quantity
$$\alpha_n := \sup_{z \in \mathbb{R}} \left| P\bigl(\sqrt{n/R^2}\, X_1 \le z\bigr) - \Phi(z) \right|\,?$$

Observations: my wild guess is that $\alpha_n \le C n^{-1/2}$ for some absolute constant $C$ independent of $n$ and $R$.

Tags: pr.probability, probability-distributions, limits-and-convergence, measure-concentration, geometric-probability

Answer 1 (Iosif Pinelis, Nov 13, 2018). Without loss of generality, $R = 1$. Let $Z_1, \ldots, Z_n$ be iid standard normal random variables (r.v.'s). Then
$$\sqrt{n}\, X_1 \overset{D}{=} \frac{\sqrt{n}\, Z_1}{\sqrt{Z_1^2 + \cdots + Z_n^2}} \overset{D}{=} \frac{Z_1 + \cdots + Z_n}{\sqrt{Z_1^2 + \cdots + Z_n^2}} =: T_1,$$
where $\overset{D}{=}$ denotes equality in distribution. By the top display on page 20 (you may also want to see the published version),
$$d_{Ko}(T_1, Z_1) \le d_{Ko}(T, Z_1) + \frac{0.24}{n},$$
where $d_{Ko}(X, Y) := \sup_{x \in \mathbb{R}} |P(X \le x) - P(Y \le x)|$ is the Kolmogorov distance between r.v.'s $X$, $Y$, and $T$ is a r.v. with the Student distribution $t_{n-1}$ with $n-1$ degrees of freedom. By Theorem 1.2 (you may also want to see the published version), for $n \ge 5$
$$d_{Ko}(T, Z_1) < \frac{0.16}{n-1},$$
so that
$$\sup_{x \in \mathbb{R}} \left| P(\sqrt{n}\, X_1 \le x) - \Phi(x) \right| = d_{Ko}(T_1, Z_1) \le \frac{0.24}{n} + \frac{0.16}{n-1} \sim \frac{0.4}{n}.$$
I think the latter constant factor 0.4 can be improved to about 0.16 by using directly the method of proof of Theorem 1.2.

Comments:
— dohmatob: OK great. This is better than my imagined $n^{-1/2}$ rate. The references are even a bigger treasure. Thanks!
— Iosif Pinelis: Thanks. I have added refs. to the published versions of the papers.
— dohmatob: Great. Would you mind throwing in one or two details hinting at the "hidden" computation $d_{Ko}(T_1, T) \le 0.24/n$?
— Iosif Pinelis: The inequality $d_{Ko}(T_1, Z_1) \le d_{Ko}(T, Z_1) + \frac{0.24}{n}$ does not use the delta method results at all. Rather, it follows immediately from elementary formula (4.24) in the delta-method paper, since the cdf's of $T_1$ and $T$ are easy to express in terms of each other.
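As a quick numerical sanity check (ours, not part of the original thread; a minimal Python sketch assuming NumPy and SciPy), one can estimate $d_{Ko}(\sqrt{n}\,X_1, Z_1)$ by Monte Carlo, using the representation $X_1 \overset{D}{=} Z_1/\sqrt{Z_1^2 + W}$ with $W$ an independent $\chi^2_{n-1}$ variable, and compare with the $0.4/n$ bound:

```python
# Monte Carlo estimate of the Kolmogorov distance between sqrt(n) * X_1 and
# N(0, 1); sampling noise is on the order of 1/sqrt(num_samples).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def kolmogorov_distance(n, num_samples=2_000_000):
    z = rng.standard_normal(num_samples)
    # X_1 = Z_1 / |G|, where |G|^2 = Z_1^2 + (independent chi^2 with n-1 df).
    x1 = z / np.sqrt(z**2 + rng.chisquare(n - 1, num_samples))
    t = np.sort(np.sqrt(n) * x1)
    cdf = norm.cdf(t)
    ecdf_hi = np.arange(1, num_samples + 1) / num_samples
    ecdf_lo = ecdf_hi - 1 / num_samples
    # The sup over all x is attained at a sample point, from above or below.
    return max(np.max(np.abs(ecdf_hi - cdf)), np.max(np.abs(ecdf_lo - cdf)))

for n in (5, 10, 20, 40):
    print(f"n={n:3d}  d_Ko ~ {kolmogorov_distance(n):.5f}   0.4/n = {0.4/n:.5f}")
```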
Answer 2 (Guillaume Aubrun, Nov 13, 2018). We may assume $R = 1$. A useful trick is to realize the uniform measure on the unit sphere as the distribution of
$$\left( \frac{G_1}{|G|}, \ldots, \frac{G_n}{|G|} \right),$$
where $G = (G_1, \ldots, G_n)$ is a Gaussian vector with independent $N(0,1)$ coordinates, and $|G| = \sqrt{G_1^2 + \cdots + G_n^2}$. With this in hand you can now write
$$P\bigl(X_1 \le z/\sqrt{n}\bigr) = P\bigl(G_1 \le \tfrac{|G|}{\sqrt{n}}\, z\bigr) \approx P(G_1 \le z),$$
where in the last step you have to argue that $|G|$ concentrates around $\sqrt{n}$ with fluctuations $O(1)$ (a consequence of standard tail bounds on the chi-squared distribution).

Answer 3 (dohmatob, Apr 19, 2020). Here is my solution without the reduction trick to the 1D Gaussian. Let $U := X/\|X\|$. Since $U$ is uniformly distributed on the unit $n$-sphere, for a fixed unit vector $z$ the random variable $U^\top z$ has the same distribution as $U_1$ (the first coordinate of the random vector $U$), which in turn (by the Archimedean projection property) has the same distribution as the first coordinate of a point drawn uniformly in the unit ball in $\mathbb{R}^{n-1}$. Thus, since $P(U_1 > \delta)$ is the probability that a random point in the unit ball in $\mathbb{R}^{n-2}$ lies on a given side of an equatorial hyperplane, we have for every unit vector $z$,
$$P(|U^\top z| > \delta) = P(|U_1| > \delta) = 2 P(U_1 > \delta) = 1 - I\bigl(\delta; \tfrac{1}{2}, \tfrac{n-1}{2}\bigr) = I\bigl(1 - \delta; \tfrac{n-1}{2}, \tfrac{1}{2}\bigr), \tag{2}$$
where $I(t; a, b)$ is the normalized incomplete beta function, defined by $I(t; a, b) := B(t; a, b)/B(1; a, b)$, with $B(t; a, b) := \int_0^t s^{a-1}(1-s)^{b-1}\, ds$.

Theorem ($U^\top z$ is sub-exponential!). Let $U$ be uniformly distributed on the unit $n$-sphere and let $z$ be a fixed vector on this sphere. If $n$ is large enough, then for every $\delta \in [0, 1]$ it holds that
$$P(|U^\top z| > \delta) \le e^{-\frac{n-1}{4}\delta}. \tag{3}$$

Proof. Let $p := P(U_1 > \delta)$, so that $2p = I\bigl(1-\delta; \tfrac{n-1}{2}, \tfrac{1}{2}\bigr)$ by (2). It is known since Temme (1992) that for $p \in (0, 1)$ and large $a > 0$, the solution of the equation $p = I(t; a, b)$ is given (approximately) by
$$t = t_p(a, b) \approx e^{-(1/a)\, Q_{1-p}(\Gamma(b, 1))}, \tag{4}$$
where $Q_{1-p}(\Gamma(b, 1))$ is the $1-p$ quantile of the unit-scale gamma distribution with shape parameter $b$. Now, by standard concentration results (e.g. see the textbook of Boucheron et al.),
$$Q_{1-p}(\Gamma(b, 1)) \le \log(1/p) + \sqrt{2b \log(1/p)}. \tag{5}$$
In particular, for $a = (n-1)/2$ and $b = 1/2$ we get
$$Q_{1-p}(\Gamma(1/2, 1)) \le \log(1/p) + \sqrt{\log(1/p)} \le 2 \log(1/p). \tag{6}$$
Putting (2), (4), and (6) together and using the basic inequality $e^{-t} \ge 1 - t$ for all $t > -1$, we see that
$$1 - \delta \ge t_{2p}\bigl(\tfrac{n-1}{2}, \tfrac{1}{2}\bigr) \ge e^{-\frac{2 Q_{1-2p}(\Gamma(1/2, 1))}{n-1}} \ge e^{-\frac{2}{n-1}\left(\log\frac{1}{2p} + \sqrt{\log\frac{1}{2p}}\right)} \ge 1 - \frac{2\left(\log\frac{1}{2p} + \sqrt{\log\frac{1}{2p}}\right)}{n-1} \ge 1 - \frac{4 \log\frac{1}{2p}}{n-1},$$
from which (3) follows upon combining with (2). □

Comments:
— lazyleo: I am confused about why equation (2) is independent of $\|z\|$.
— dohmatob: Well, precisely because $\|z\| = 1$, as in the theorem. I've made this more explicit.
— lazyleo: Does that mean $P(|U^\top z| > \delta) = P(|U_1| > \delta \|z\|)$?
— dohmatob: Yes, by rotational invariance.
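The normalized-Gaussian construction from the answers above is easy to exercise numerically. The sketch below (ours; it assumes $R = 1$ and that NumPy and SciPy are available) samples $U = G/|G|$ and compares the empirical tail $P(|U_1| > \delta)$ with the exact tail obtained from the standard fact that $U_1^2 \sim \mathrm{Beta}\bigl(\tfrac{1}{2}, \tfrac{n-1}{2}\bigr)$:

```python
# Uniform points on the unit sphere via normalized Gaussians, and an exact
# tail from the Beta(1/2, (n-1)/2) law of U_1^2.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)
n, num_samples = 50, 200_000

g = rng.standard_normal((num_samples, n))
u1 = g[:, 0] / np.linalg.norm(g, axis=1)   # first coordinate of G / |G|

for delta in (0.1, 0.2, 0.3):
    empirical = np.mean(np.abs(u1) > delta)
    exact = beta.sf(delta**2, 0.5, (n - 1) / 2)   # P(U_1^2 > delta^2)
    print(f"delta={delta}: empirical {empirical:.4f}, exact {exact:.4f}")
```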
Fact Sheet: American Cockroach (Periplaneta americana)

DIAGNOSTIC MORPHOLOGY

Adults:
• Adults are 1 3/8" to 2 1/8" (34–53 mm) long
• Reddish brown in color, with a pale brown to yellowish band around the edge of the pronotal shield
• The last segment of the cerci is at least 2 times longer than wide
• Both sexes have full wings but are poor to moderate fliers
• Nymphs do not have wings
• Oothecae are approximately 3/8" (8 mm) long, with a length 1.5 times the width

Immature stage:
• Cockroaches undergo gradual metamorphosis and have no larval form. The process goes from egg to nymph to adult.

GENERAL INFORMATION
The American cockroach has a worldwide distribution but is more common in commercial rather than residential buildings. Commonly infested areas are basements, steam tunnels, and food storage and preparation areas. During warmer months, or in warm climates, they can survive outside of structures.

SIGNS OF INFESTATION
Signs to look for are adults, nymphs, wings, and eggs in the area of concern. Often the adults will travel fair distances along plumbing and heating piping paths from infested basement areas to random rooms. They are then usually noticed near janitor closets, bathrooms, radiators, and other piping areas. In colder climates, sources are usually associated with soil pipes, sump pumps, or basement drain areas. In warmer climates, wall voids and other areas can also be infested.

FOOD SOURCES
American cockroaches will feed on almost any food, with a preference for fermenting foods.

LIFE CYCLE
The female glues or drops the ootheca within 4 days of forming it. A female produces 6 to 14 oothecae in her lifetime, and each ootheca contains 14 to 16 eggs. Average development time from egg to adult is 600 days, but it can range from 168 to 786 days depending upon temperature and humidity. Adults live 440 days on average at room temperature, but only 225 days at 84 degrees Fahrenheit (29°C), with a range from 102 to 588 days.

CONTROL & TREATMENT
Physical removal, chemical contact pesticides, pesticide baits, and other tactics can be used based on the infested area. Locating the source, whether associated with sewer areas, wall voids, or elsewhere in a building, will help determine treatment. Sealing areas related to building piping systems is helpful. Often, dry floor drains are screened, and P-traps are filled with water and topped off with a food grade oil to minimize evaporation. This helps prevent American cockroaches from entering a room directly from the drain. Direct treatment of infested sewer-related areas with a properly labeled insecticide dust is not an unusual solution.

Photo credit for the image of an adult American cockroach: Gary Alpert, Harvard University, Bugwood.org
Photo credit for the image of American cockroach nymphs: Daniel Suiter, University of Georgia, Bugwood.org

Information current as of 2 March 2012. For more information visit www.museumpests.net
GitHub - ecattez/shahmat: A Chess implementation with the Domain Driven Design
ecattez.github.io/shahmat/ · License: GPL-3.0 · 3 stars · 0 forks · 1 watching · master (6 branches, 0 tags)
Branch: master. 6 branches, 0 tags, 15 commits. Latest commit: ecattez, ci(github-pages): make the test report public (#37), c1cc1c9, Feb 28, 2020.

| Name | Last commit message | Last commit date |
| --- | --- | --- |
| .github/workflows | ci(github-pages): make the test report public (#37) | Feb 28, 2020 |
| src | feat(promotion): promotion reimplemented as part of a move (#34) | Feb 27, 2020 |
| .gitignore | feat(pawn): implementation of the pawn chess piece | Jan 16, 2020 |
| LICENSE | Initial commit | Jan 6, 2020 |
| README.md | feat(king): implemented (#22) | Feb 5, 2020 |
| pom.xml | feat(king): implemented (#22) | Feb 5, 2020 |

README

Shahmat

A Chess implementation with the Domain Driven Design

The Game

The ultimate aim in the chess game is delivering a checkmate – trapping your opponent´s king. The term checkmate is an alteration of the Persian phrase "Shah Mat", meaning literally "the King is ambushed". Each side starts out with 16 pieces, consisting of 8 pawns, 2 rooks, 2 knights, 2 bishops, 1 queen and 1 king, all in the same color. White always goes first.

Pawn moves

Pawns are both simple and complex in their movements. The pawn piece has the fewest movement options of any chess piece on the board, and it can only move forward until it reaches the other side of the board. Here are a few things to know about how a pawn chess piece moves:
- It can only move directly forward one square;
- It can move directly forward two squares on its first move only;
- It can move diagonally forward when capturing an opponent's chess piece;
- If there is another piece directly in front of it, the pawn is stuck, unless there is an enemy piece ahead on the capturing diagonals;
- Once it reaches the other side of the chess board, the player may exchange the pawn for any other chess piece if they choose, except another king.

En passant

En passant (French: [ɑ̃ paˈsɑ̃], lit. "in passing") is a move in chess. It is a special pawn capture that can only occur immediately after a pawn makes a move of two squares from its starting square, when it could have been captured by an enemy pawn had it advanced only one square. The opponent captures the just-moved pawn "as it passes" through the first square. The result is the same as if the pawn had advanced only one square and the enemy pawn had captured it normally. En passant is a unique privilege of pawns; other pieces cannot capture en passant. It is the only capture in chess in which the capturing piece does not replace the captured piece on its square.

Promotion

Promotion in chess is a rule that requires a pawn that reaches its eighth rank to be replaced by the player's choice of a queen, knight, rook, or bishop of the same color. The new piece replaces the pawn within the same move. The choice of new piece is not limited to pieces previously captured; promotion can therefore result in a player owning, for example, two or more queens despite starting the game with one. Pawn promotion, or the threat of it, often decides the result in an endgame. Since the queen is the most powerful piece, the vast majority of promotions are to a queen.
Promotion to a queen is also called queening; promotion to any other piece is referred to as underpromotion.

Rook

The rooks are the most simple-moving chess pieces on the board. They move only in straight lines: forward, backward, or side to side. Here are a few things to know about how the rook chess piece moves:
- It can move forward, backward, left or right at any time;
- It can move anywhere from 1 to 7 squares in any direction, so long as it is not obstructed by any other piece;
- It can capture an opponent's piece that obstructs its way.

Bishop

The bishop chess piece is restricted to moving in diagonals. Each player starts out with two bishops, each one residing on its own color of square. Between both pieces you can cover the entire board, but one bishop alone can only cover half of the board: the color of squares it started the game on.
- It can move in any direction diagonally, so long as it is not obstructed by another piece;
- It cannot move past any piece that is obstructing its path;
- It can capture any opponent's piece on the board that is within its bounds of movement.

Queen

The queen chess piece is like a combination of the rook and bishop. The queen can move in any direction on a straight or diagonal path. It cannot move past any piece that is obstructing its path, and it can be used to capture any of your opponent's pieces on the board.

Knight

The knight chess piece moves in a very mysterious way. Unlike rooks, bishops or queens, the knight is limited in the number of squares it can move across. In fact, the piece moves in a shape similar to the uppercase "L". Here are the specifics:
- It can move forward, backward, left or right two squares and must then move one square in either perpendicular direction;
- It can only move to one of up to eight positions on the board;
- It can move to any position not already inhabited by another piece of the same color;
- It can skip over any other pieces to reach its destination position.

King

King chess pieces are somewhat limited in their movement. They cannot travel across the chess board as quickly as most other pieces, and they are easier to contain than most chess pieces from an opponent's perspective. Here are a few rules to note:
- The king piece can move one single square in any direction;
- The king cannot move onto a square that is currently occupied by a piece from its own team.

About: A Chess implementation with the Domain Driven Design. ecattez.github.io/shahmat/. License: GPL-3.0. Languages: Java 100.0%.
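The knight's "L"-shaped move is easy to express as coordinate offsets. Below is a minimal illustrative sketch in Python; the repository itself is Java, and the names here (`KNIGHT_OFFSETS`, `knight_moves`) are hypothetical, not taken from the shahmat codebase:

```python
# Illustrative sketch of knight move generation on an 8x8 board.
# Not from the shahmat repository (which is Java); names are hypothetical.

# The eight (file, rank) offsets: two squares one way, one square perpendicular.
KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(file, rank, own_pieces):
    """Return the squares a knight on (file, rank) may move to.

    own_pieces is a set of (file, rank) squares occupied by the knight's
    own color: the knight may jump over anything, but it cannot land on
    a friendly piece.
    """
    moves = []
    for df, dr in KNIGHT_OFFSETS:
        f, r = file + df, rank + dr
        if 0 <= f < 8 and 0 <= r < 8 and (f, r) not in own_pieces:
            moves.append((f, r))
    return moves

# A knight on b1 (file=1, rank=0) with friendly pawns on a3 and c3
# can only go to d2: c3 and a3 are blocked by its own pawns, the rest
# of the eight targets are off the board.
print(knight_moves(1, 0, {(0, 2), (2, 2)}))  # -> [(3, 1)]
```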
The Diffie-Hellman Key Exchange

What is Diffie-Hellman Key Exchange (exponential key exchange)?

The Diffie-Hellman key exchange (also known as exponential key exchange) is a method for securely exchanging cryptographic keys over an insecure channel. It is a fundamental building block of many secure communication protocols, including SSL/TLS and SSH. The Diffie-Hellman key exchange allows two parties (Alice and Bob) to agree on a shared secret key over an insecure channel, without any other party being able to intercept the key or learn anything about it. The key exchange involves the following steps:

1. Alice and Bob agree on a large prime number p and a base g; both values are public.
2. Alice chooses a secret integer a and computes A = g^a mod p. She sends A to Bob.
3. Bob chooses a secret integer b and computes B = g^b mod p. He sends B to Alice.
4. Alice computes s = B^a mod p.
5. Bob computes s = A^b mod p.
6. Alice and Bob now both have the shared secret key s, which they can use to establish a secure communication channel.

The security of the Diffie-Hellman key exchange relies on the fact that it is computationally infeasible for an attacker to determine the shared secret key s from the public values of p, g, A, and B. This allows Alice and Bob to exchange the key securely, even over an insecure channel.

Where is Diffie-Hellman Key Exchange Used?

The Diffie-Hellman key exchange is a widely used and trusted technique for securely exchanging cryptographic keys over an insecure channel. It is used in many different contexts, including:

Secure communication protocols: The Diffie-Hellman key exchange is used in many secure communication protocols, such as SSL/TLS and SSH, to establish a secure channel between two parties. It allows the parties to agree on a shared secret key that can be used to encrypt and decrypt messages exchanged over the channel.

Virtual private networks (VPNs): The Diffie-Hellman key exchange is often used in VPNs to establish a secure connection between a client and a server. It allows the client and server to agree on a shared secret key that can be used to encrypt and decrypt traffic exchanged over the VPN connection.

Secure file transfer protocols: The Diffie-Hellman key exchange is used in many secure file transfer protocols, such as SFTP and FTPS, to establish a secure channel for transferring files between two parties. It allows the parties to agree on a shared secret key that can be used to encrypt and decrypt the transferred files.

Other applications: The Diffie-Hellman key exchange is also used in many other applications where secure communication is required, such as secure email, secure web browsing, and secure voice over IP (VoIP). It is a flexible and widely supported technique for establishing secure communication channels.

Overall, the Diffie-Hellman key exchange is an important and widely used technique for securely exchanging cryptographic keys and establishing secure communication channels. It is an essential component of many secure communication protocols and applications.

How does Diffie-Hellman Key Exchange Work?

The mechanics are exactly the steps listed above: Alice and Bob agree on the public values p and g, exchange A = g^a mod p and B = g^b mod p, and each raises the other's public value to their own secret exponent.
Alice and Bob each arrive at the same shared secret key s because B^a = (g^b)^a = g^(ab) = (g^a)^b = A^b (mod p). The security of the exchange relies on the fact that it is computationally infeasible for an attacker to determine s from the public values of p, g, A, and B; recovering a from A = g^a mod p is the discrete logarithm problem. This allows Alice and Bob to exchange the key securely, even over an insecure channel.

Vulnerabilities of Diffie-Hellman Key Exchange

Like all cryptographic systems, the Diffie-Hellman key exchange is not completely immune to attacks. Some potential vulnerabilities include:

Man-in-the-middle attacks: If an attacker is able to intercept and modify the messages exchanged between Alice and Bob during the key exchange, they may be able to impersonate Alice or Bob and establish a secure channel with the other party. This can be prevented by using certificate-based authentication and/or by verifying the authenticity of the messages using message authentication codes (MACs).

Small subgroup attacks: If the group generated by g modulo p has small subgroups, an attacker may be able to use this to their advantage to recover the shared secret key. To prevent this, it is important to use a large prime with no known small subgroups.

Exponent attacks: If the secret exponents (a and b) used in the key exchange are not chosen randomly, an attacker may be able to use this to their advantage to recover the shared secret key. To prevent this, it is important to use a strong random number generator to generate the secret exponents.

Examples of Diffie-Hellman Key Exchange

The Diffie-Hellman key exchange is used in many different contexts, including secure communication protocols, virtual private networks (VPNs), secure file transfer protocols, and other applications where secure communication is required. Some examples include:

SSL/TLS: The Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols use the Diffie-Hellman key exchange to establish a secure channel between a client and a server. This allows the client and server to exchange encrypted messages over an insecure network, such as the Internet.

SSH: The Secure Shell (SSH) protocol uses the Diffie-Hellman key exchange to establish a secure channel between a client and a server. This allows users to securely log in to a remote server and execute commands, transfer files, and perform other tasks over an insecure network.

VPNs: Many VPN protocols, such as IPSec and OpenVPN, use the Diffie-Hellman key exchange to establish a secure connection between a client and a server. This allows the client and server to exchange encrypted traffic over an insecure network, such as the Internet.

SFTP:
The Secure File Transfer Protocol (SFTP) uses the Diffie-Hellman key exchange to establish a secure channel between a client and a server. This allows users to securely transfer files between two systems over an insecure network.
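To make the arithmetic concrete, here is a minimal, illustrative sketch of the exchange in Python. The parameters are toy textbook values chosen so the numbers stay readable; a real deployment would use a standardized group of 2048 bits or more and a vetted cryptographic library rather than hand-rolled code:

```python
import secrets

# Toy public parameters -- far too small for real security.
p = 23  # public prime modulus
g = 5   # public base (a primitive root mod 23)

# Each party picks a private exponent at random and never sends it.
a = secrets.randbelow(p - 2) + 1   # Alice's secret
b = secrets.randbelow(p - 2) + 1   # Bob's secret

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

s_alice = pow(B, a, p)   # Alice computes s = B^a mod p
s_bob = pow(A, b, p)     # Bob computes s = A^b mod p

assert s_alice == s_bob  # both sides derive the same shared secret
print(f"A={A}, B={B}, shared secret s={s_alice}")
```

Note that in practice the shared secret s is not used directly as an encryption key; protocols such as TLS and SSH feed it through a key derivation function to produce the actual session keys.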
Sale of Goods Act – Business Law and Ethics Canadian Edition
===============
The Sale of Goods Act was developed to regulate the sale of goods where no contract explicitly exists. The intent is to provide a basic set of regulations to be followed when selling goods within Canada. The regulations provide general rules to govern transactions. Businesses and individuals are allowed to contract out of the regulations of the Sale of Goods Act, and most prefer to develop specific terms and conditions to govern their transactions (for example, payment terms, delivery dates, dispute resolution, and other unique conditions). It is important to note that some statutes, for example the Environmental Protection Act or the Landlord and Tenant Act, do not allow businesses or individuals to contract out of their regulations.

The Sale of Goods Act only applies to goods that are sold or transferred between two parties. The Act does allow sellers to recoup goods in transit in the event the purchaser fails to pay, in order to offset the potential loss. During bankruptcy proceedings of a purchaser, sellers often have a difficult time reclaiming their goods; typically, these items become part of the liquidation event, and the seller receives their respective portion of the proceeds from the sale.

Under the Sale of Goods Act, sellers have obligations which they must follow. The regulations are designed to help protect the buyer from unfair actions by the seller. The following is a list of factors that every businessperson should be aware of.

The seller can only sell goods that they own and have the right to sell. This is known as having 'good title'. Sellers cannot transfer any goods for which they do not have good title.

Goods cannot be sold with any encumbrances (a claim against a good by someone other than the person representing or claiming ownership). For example, a seller could not sell a car to a purchaser if the car had a lien placed on the vehicle by a bank.
The seller would be required to clear the lien before transferring title to the purchaser.

A seller must sell all goods at a standard of quality which allows them to be sold to others. For example, a seller cannot sell inventory to a retailer which, upon delivery, is found to be in such poor condition that it is unable to be sold to customers.

Sellers must present samples of the goods that are a fair representation of the delivered merchandise. A seller could not, for example, show a chair which fits an adult and then deliver a chair which only fits a child.

An important condition of the Sale of Goods Act is the transfer of title from the seller to the buyer. This is very important relative to the risk of ownership. For example, if a buyer in Ontario purchased goods from a seller in British Columbia and during transit the goods were damaged beyond repair, who is responsible for the cost of the loss? The question of who owned the goods at what point in the journey is critical; whoever owns the goods bears the burden of the loss. Risk follows title, so unless otherwise agreed, the goods remain at the seller's risk until the property is transferred to the buyer. When the property is transferred to the buyer, the goods are at the buyer's risk whether delivery has been made or not. There are five rules in the Act which explain who has title when:

Rule 1. Where there is an unconditional contract for the sale of specific goods in a deliverable state, the property in the goods passes to the buyer when the contract is made, and it is immaterial whether the time of payment or the time of delivery or both is postponed.

Rule 2. Where there is a contract for the sale of specific goods and the seller is bound to do something to the goods for the purpose of putting them into a deliverable state, the property does not pass until such thing is done and the buyer has notice thereof.

Rule 3. Where there is a contract for the sale of specific goods in a deliverable state, but the seller is bound to weigh, measure, test or do some other act or thing with reference to the goods for the purpose of ascertaining the price, the property does not pass until such act or thing is done and the buyer has notice thereof.

Rule 4. When goods are delivered to the buyer on approval or "on sale or return" or other similar terms, the property therein passes to the buyer when the buyer signifies approval or acceptance to the seller or does any other act adopting the transaction. If the buyer does not signify approval or acceptance to the seller but retains the goods without giving notice of rejection, the property passes on the expiration of the time fixed for the return of the goods or, if no time has been fixed, on the expiration of a reasonable time; what is a reasonable time is a question of fact.

Rule 5. Where there is a contract for the sale of unascertained or future goods by description and goods of that description and in a deliverable state are unconditionally appropriated to the contract, either by the seller with the assent of the buyer, or by the buyer with the assent of the seller, the property in the goods thereupon passes to the buyer; such assent may be expressed or implied and may be given either before or after the appropriation is made.
Where in pursuance of the contract the seller delivers the goods to the buyer or to a carrier or other bailee (whether named by the buyer or not) for the purpose of transmission to the buyer and does not reserve the right of disposal, the seller shall be deemed to have unconditionally appropriated the goods to the contract.

There are several detailed and complex elements to the Sale of Goods Act, and it is advisable to speak to a professional in this area before engaging in the sale of goods. However, there are some key elements which any business might benefit from knowing. The following information has been taken from the Ontario Government website, where the full text of the Act can be accessed.

Deliverable state
Goods shall be deemed to be in a deliverable state within the meaning of this Act [Sale of Goods Act] when they are in such a state that the buyer would be bound to take delivery of them.

Sale and agreement to sell
A contract of sale of goods is a contract whereby the seller transfers or agrees to transfer the property in the goods to the buyer for a money consideration, called the price.

What constitutes a sale or agreement to sell
Where under a contract of sale the property in goods is transferred from the seller to the buyer, the contract is called a sale; but where the transfer of the property in the goods is to take place at a future time or subject to some condition thereafter to be fulfilled, the contract is called an agreement to sell.

Where price not determined
Where the price is not determined, the buyer shall pay a reasonable price, and what constitutes a reasonable price is a question of fact dependent on the circumstances of each particular case.

Sale by description
Where there is a contract for the sale of goods by description, there is an implied condition that the goods will correspond with the description, and, if the sale is by sample as well as by description, it is not sufficient that the bulk of the goods corresponds with the sample if the goods do not also correspond with the description.

Sale by sample
In the case of a contract for sale by sample, there is an implied condition, (a) that the bulk will correspond with the sample in quality; (b) that the buyer will have a reasonable opportunity of comparing the bulk with the sample; and (c) that the goods will be free from any defect rendering them unmerchantable that would not be apparent on reasonable examination of the sample.

Duties of seller and buyer
It is the duty of the seller to deliver the goods and of the buyer to accept and pay for them in accordance with the terms of the contract of sale.

Payment and delivery concurrent
Unless otherwise agreed, delivery of the goods and payment of the price are concurrent conditions: the seller shall be ready and willing to give possession of the goods to the buyer in exchange for the price, and the buyer shall be ready and willing to pay the price in exchange for possession of the goods.

Where no time for delivery fixed
Where the seller is bound to send the goods to the buyer but no time for sending them is fixed, the seller is bound to send them within a reasonable time.

Delivery of wrong quantity or quality
Where the seller delivers to the buyer a quantity of goods less than the seller contracted to sell, the buyer may reject them, but if they are accepted, the buyer shall pay for them at the contract rate.
Goods not in accordance with contract
Where the seller delivers to the buyer the goods contracted to be sold mixed with goods of a different description not included in the contract, the buyer may accept the goods that are in accordance with the contract and reject the rest, or may reject the whole.

Rights of buyer as to examination
Where goods are delivered to the buyer that the buyer has not previously examined, the buyer shall be deemed not to have accepted them until there has been a reasonable opportunity of examining them for the purpose of ascertaining whether they are in conformity with the contract.

Effect of refusal to accept
Unless otherwise agreed, where a buyer refuses to accept delivery of goods and has the right to do so, the goods are not bound to be returned to the seller; it is sufficient if the buyer communicates to the seller that acceptance of the goods is refused.

Wrongful neglect or refusal to take delivery
When the seller is ready and willing to deliver the goods and requests the buyer to take delivery, and the buyer does not within a reasonable time after such request take delivery of the goods, the buyer is liable to the seller for any loss occasioned by the buyer's neglect or refusal to take delivery, and for a reasonable charge for the care and custody of the goods.

Withholding delivery
Where the property in goods has not passed to the buyer, the unpaid seller has, in addition to other remedies, a right of withholding delivery similar to and co-extensive with the rights of lien and stoppage in the course of transit where the property has passed to the buyer.

Right of stoppage in transit
When the buyer of goods becomes insolvent, the unpaid seller who has parted with the possession of the goods has the right of stopping them in the course of transit; that is to say, the unpaid seller may resume possession of the goods as long as they are in course of transit and may retain them until payment or tender of the price.

Business Law and Ethics Canadian Edition Copyright © 2023 by Craig Ervine is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
The Maya Calendar
===============

Subsections:
- What is the Long Count?
- How are the baktun counted?
- When did the Long Count start?
- What is the Tzolkin?
- When did the Tzolkin start?
- What is the Haab?
- When did the Haab start?
- Did the Mayas think a year was 365 days?

(I am very grateful to Chris Carrier for providing most of the information about the Maya calendar.)

Among their other accomplishments, the ancient Mayas invented a calendar of remarkable accuracy and complexity. The Maya calendar was adopted by the other Mesoamerican nations, such as the Aztecs and the Toltec, which adopted the mechanics of the calendar unaltered but changed the names of the days of the week and the months. The Maya calendar uses three different dating systems in parallel: the Long Count, the Tzolkin (divine calendar), and the Haab (civil calendar). Of these, only the Haab has a direct relationship to the length of the year.

What is the Long Count?

The Long Count is really a mixed base-20/base-18 representation of a number, representing the number of days since the start of the Mayan era. It is thus akin to the Julian Day Number. The basic unit is the kin (day), which is the last component of the Long Count. Going from right to left the remaining components are:

- uinal (1 uinal = 20 kin = 20 days)
- tun (1 tun = 18 uinal = 360 days = approx. 1 year)
- katun (1 katun = 20 tun = 7,200 days = approx. 20 years)
- baktun (1 baktun = 20 katun = 144,000 days = approx. 394 years)

The kin, tun, and katun are numbered from 0 to 19. The uinal are numbered from 0 to 17. On the numbering of the baktun, see below.

Although they are not part of the Long Count, the Mayas had names for larger time spans. The following names are sometimes quoted, although they are not ancient Maya terms:

- 1 pictun = 20 baktun = 2,880,000 days = approx. 7,885 years
- 1 calabtun = 20 pictun = 57,600,000 days = approx. 158,000 years
- 1 kinchiltun = 20 calabtun = 1,152,000,000 days = approx. 3 million years
- 1 alautun = 20 kinchiltun = 23,040,000,000 days = approx. 63 million years

How are the baktun counted?

Different sources present different views of this question. Here are the possibilities I have come across:

1. The baktun are numbered from 0 to 19, making a full baktun cycle correspond to a pictun. The Long Count started with 0.0.0.0.0. Long Count day 19.19.19.17.19 will be followed by 0.0.0.0.0 (in AD 4772).
2. The baktun are numbered from 1 to 13. The Long Count started with 13.0.0.0.0. Long Count day 13.19.19.17.19 is followed by 1.0.0.0.0 (in 2720 BC and again in AD 2407).
3. The baktun are numbered from 0 to 13, with 13 having the same meaning as 0. The Long Count started with 0.0.0.0.0. Long Count day 13.0.0.0.0 is another name for 0.0.0.0.0 (which occurred on 21 December AD 2012).
4. The baktun are numbered from 0 to 13. The Long Count started with 0.0.0.0.0.
Long Count day 13.0.0.0.0 was followed by 0.0.0.0.0 on 22 December AD 2012 with no intervening days.

Between 1.0.0.0.0 (in 2720 BC) and 12.19.19.17.19 (20 December AD 2012) these theories had identical values for the Long Count. The Mayan dates given on this page are consistent with systems 1 and 2 above.

When did the Long Count start?

Depending on which of the baktun numberings you use, the first date in the Long Count is either 0.0.0.0.0 or 13.0.0.0.0. There has been some dispute about what date in our calendar corresponds to this Long Count; however, most authorities today agree that the Long Count started on 6 September 3114 BC in the Julian proleptic calendar. (Other dates that have been suggested are 8 September 3114 BC and 11 November 3374 BC.) The date 0.0.0.0.0/13.0.0.0.0 may have been the Mayas' idea of the date of the creation of the world. The Long Count again reached 13.0.0.0.0 on 21 December 2012.

What is the Tzolkin?

The Tzolkin date is a combination of two "week" lengths. While our calendar uses a single week of seven days, the Mayan calendar used two different lengths of week:

- a numbered week of 13 days, in which the days were numbered from 1 to 13;
- a named week of 20 days, in which the names of the days were:

0. Ahau, 1. Imix, 2. Ik, 3. Akbal, 4. Kan, 5. Chicchan, 6. Cimi, 7. Manik, 8. Lamat, 9. Muluc, 10. Oc, 11. Chuen, 12. Eb, 13. Ben, 14. Ix, 15. Men, 16. Cib, 17. Caban, 18. Etznab, 19. Caunac

As the named week is 20 days and the smallest Long Count digit counts 20 days, there is synchrony between the two: if, for example, the last digit of today's Long Count is 0, today must be Ahau; if it is 6, it must be Cimi. Since the numbered and the named week were both "weeks", each of their name/number changes daily; therefore, the day after 3 Cimi is not 4 Cimi, but 4 Manik, and the day after that, 5 Lamat. The next time Cimi rolls around, 20 days later, it will be 10 Cimi instead of 3 Cimi. The next 3 Cimi will not occur until 260 (or 13×20) days have passed. This 260-day cycle also had good-luck or bad-luck associations connected with each day, and for this reason, it became known as the "divinatory year." The "years" of the Tzolkin calendar are not counted.

When did the Tzolkin start?

Long Count 0.0.0.0.0 corresponds to 4 Ahau. The authorities agree on this.

What is the Haab?

The Haab was the civil calendar of the Mayas. It consisted of 18 "months" of 20 days each, followed by 5 extra days known as Uayeb. This gives a year length of 365 days. The names of the months were:

1. Pop, 2. Uo, 3. Zip, 4. Zotz, 5. Tzec, 6. Xul, 7. Yaxkin, 8. Mol, 9. Chen, 10. Yax, 11. Zac, 12. Ceh, 13. Mac, 14. Kankin, 15. Muan, 16. Pax, 17. Kayab, 18. Cumku

In contrast to the Tzolkin dates, the Haab month names changed every 20 days instead of daily; so the day after 4 Zotz would be 5 Zotz, followed by 6 Zotz ... up to 19 Zotz, which is followed by 0 Tzec. The days of the month were numbered from 0 to 19. This use of a 0th day of the month in a civil calendar is unique to the Maya system; it is believed that the Mayas discovered the number zero, and the uses to which it could be put, centuries before it was discovered in Europe or Asia. The Uayeb days acquired a very derogatory reputation for bad luck; they were known as "days without names" or "days without souls," and were observed as days of prayer and mourning. Fires were extinguished and the population refrained from eating hot food. Anyone born on those days was "doomed to a miserable life." The years of the Haab calendar are not counted.
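Because the Long Count is just a mixed-radix number, converting a Long Count date to a day count, and deriving the Tzolkin and Haab dates from it, is simple modular arithmetic. Below is a minimal illustrative sketch in Python, assuming baktun numbering system 1 above and the correspondences the text gives (day zero = 4 Ahau 8 Cumku); the function names are my own:

```python
# Illustrative sketch (my own function names), assuming baktun numbering
# system 1 above: 0.0.0.0.0 is day zero, which falls on 4 Ahau 8 Cumku.

TZOLKIN_NAMES = ["Ahau", "Imix", "Ik", "Akbal", "Kan", "Chicchan", "Cimi",
                 "Manik", "Lamat", "Muluc", "Oc", "Chuen", "Eb", "Ben",
                 "Ix", "Men", "Cib", "Caban", "Etznab", "Caunac"]
HAAB_MONTHS = ["Pop", "Uo", "Zip", "Zotz", "Tzec", "Xul", "Yaxkin", "Mol",
               "Chen", "Yax", "Zac", "Ceh", "Mac", "Kankin", "Muan", "Pax",
               "Kayab", "Cumku"]

def long_count_to_days(baktun, katun, tun, uinal, kin):
    # Mixed radix, right to left: 20 kin, 18 uinal, 20 tun, 20 katun.
    return (((baktun * 20 + katun) * 20 + tun) * 18 + uinal) * 20 + kin

def tzolkin(days):
    number = (3 + days) % 13 + 1       # day zero is "4 ..."
    name = TZOLKIN_NAMES[days % 20]    # day zero is "... Ahau"
    return f"{number} {name}"

def haab(days):
    pos = (348 + days) % 365           # 8 Cumku is day 348 of the Haab year
    if pos >= 360:                     # the five Uayeb days close the year
        return f"{pos - 360} Uayeb"
    return f"{pos % 20} {HAAB_MONTHS[pos // 20]}"

days = long_count_to_days(13, 0, 0, 0, 0)    # 13.0.0.0.0, 21 December 2012
print(days, tzolkin(days), haab(days))       # 1872000  "4 Ahau"  "3 Kankin"
```

The printed check reproduces the widely quoted correspondence that 13.0.0.0.0 fell on the Tzolkin/Haab date 4 Ahau 3 Kankin.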
The length of the Tzolkin year was 260 days and the length of the Haab year was 365 days. The smallest number that can be divided evenly by both 260 and 365 is 18,980, or 365×52; this was known as the Calendar Round. If a day is, for example, "4 Ahau 8 Cumku," the next day falling on "4 Ahau 8 Cumku" would be 18,980 days, or about 52 years, later. Among the Aztec, the end of a Calendar Round was a time of public panic, as it was thought the world might be coming to an end. When the Pleiades crossed the horizon on 4 Ahau 8 Cumku, they knew the world had been granted another 52-year extension.

When did the Haab start?

Long Count 0.0.0.0.0 corresponds to 8 Cumku. The authorities agree on this.

Did the Mayas think a year was 365 days?

Although there were only 365 days in the Haab year, the Mayas were aware that a year is slightly longer than 365 days, and in fact many of the month names are associated with the seasons; Yaxkin, for example, means "new or strong sun" and, at the beginning of the Long Count, 1 Yaxkin was the day after the winter solstice, when the sun starts to shine for a longer period of time and higher in the sky. When the Long Count was put into motion, it was started at 7.13.0.0.0, and 0 Yaxkin corresponded with Midwinter Day, as it did at 0.0.0.0.0 back in 3114 BC. The available evidence indicates that the Mayas estimated that a 365-day year precessed through all the seasons twice in 7.13.0.0.0, or 1,101,600, days. We can therefore derive a value for the Mayan estimate of the year by dividing 1,101,600 by 365, subtracting 2, and then dividing 1,101,600 by the result, which gives an answer of 365.242036 days; this is slightly more accurate than the 365.2425 days of the Gregorian calendar.

This apparent accuracy could, however, be a simple coincidence. The Mayas estimated that a 365-day year precessed through all the seasons twice in 7.13.0.0.0 days. These numbers are only accurate to 2-3 digits. Suppose the 7.13.0.0.0 days had corresponded to 2.001 cycles rather than 2 cycles of the 365-day year; would the Mayas have noticed?

References

Edward M. Reingold & Nachum Dershowitz: Calendrical Calculations – The Millennium Edition. Cambridge University Press, 2001. ISBN 0-521-77752-6. Pp. 142-143.
Graphing a polar spiral r(theta) = theta/pi on 0 to 5 pi, and animation of a polar spiral.

Zak's Lab. Posted: 20 May 2021.

Description
Questions or requests? Post your comments below, and I will respond within 24 hours. In this video, we are given the equation of a polar spiral, and we plug values of theta into the equation to obtain points on the polar curve. We fast forward through many points and get a basic sense of how the polar spiral is traced out as theta increases. We take special time out to look at the first few smallest angles so we can grasp how the spiral starts for small values of theta. Finally, we show an animation of the polar spiral as it's traced out from 0 to 5 pi, and we're done!

Transcript:

Intro
In this example we're asked to plot the polar curve r(theta) = theta/pi on the interval zero to five pi, so we're going around two and a half times. I can tell, because r is proportional to theta, that r is just going to get bigger and bigger as theta gets bigger and bigger, so this ends up producing what's called a polar spiral curve. To get a sense for precisely what it's doing, we can start to plot specific points.

Graphing
When theta is zero, r is zero over pi, which is zero, and I'm at the origin. When theta is pi over two, r is just whatever that angle is divided by pi, so it's one half. When theta is pi, r is one, and I'm going to keep going this way until I get to the very end; I'll just fast forward that. Now, it might be a little confusing at first to see how all these points are connected by a smooth spiral, and I think it helps to look at some of the smallest angles for a second. If I put in theta equals pi over six, I get r equals one sixth, so very small. If I put in theta equals pi over four, I get one fourth; if theta equals pi over three, I get one third. Then you start to get a sense for how it could all be connected by a smooth spiral. We'll go ahead and import an image from a computer algebra system, since it's going to do a lot better job than I can, and there's our basic polar spiral on 0 to 5 pi.

Outro
If you find the math content on Zak's Lab helpful, click on the Zak's Lab logo on the right to browse playlists and subscribe to the channel. I produce dozens of new videos per month, and subscribing is the easiest way to find new content. Thanks for watching.
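For readers who want to reproduce the plot themselves rather than import it from a computer algebra system, here is a minimal Python/matplotlib sketch (my own illustration, not the tool used in the video):

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample theta on [0, 5*pi]; r = theta/pi grows linearly with the angle,
# which is exactly what produces the spiral shape.
theta = np.linspace(0, 5 * np.pi, 1000)
r = theta / np.pi

ax = plt.subplot(projection="polar")
ax.plot(theta, r)

# Mark the specific small-angle points computed in the video:
# theta = pi/6, pi/4, pi/3, pi/2, and pi give r = 1/6, 1/4, 1/3, 1/2, 1.
pts = np.array([np.pi / 6, np.pi / 4, np.pi / 3, np.pi / 2, np.pi])
ax.plot(pts, pts / np.pi, "o")

ax.set_title(r"$r(\theta) = \theta/\pi$ on $[0, 5\pi]$")
plt.show()
```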
An Atlas of Genetic Correlations across Human Diseases and Traits
===============

Nat Genet. Author manuscript; available in PMC 2016 May 1. Published in final edited form as: Nat Genet. 2015 Sep 28;47(11):1236–1241. doi: 10.1038/ng.3406. PMCID: PMC4797329. NIHMSID: NIHMS719075. PMID: 26414676.

Brendan Bulik-Sullivan (1,2,3; co-first), Hilary K Finucane (4; co-first), Verneri Anttila (1,2,3), Alexander Gusev (5,6), Felix R Day (7), Po-Ru Loh (1,5), ReproGen Consortium (8), Psychiatric Genomics Consortium (8), Genetic Consortium for Anorexia Nervosa of the Wellcome Trust Case Control Consortium 3 (8), Laramie Duncan (1,2,3), John RB Perry (7), Nick Patterson (1), Elise B Robinson (1,2,3), Mark J Daly (1,2,3), Alkes L Price (1,5,6; co-last), Benjamin M Neale (1,2,3; co-last)

Affiliations:
1 Program in Medical and Population Genetics, Broad Institute of MIT and Harvard, Cambridge, MA, USA
2 Stanley Center for Psychiatric Genetics, Broad Institute of MIT and Harvard, Cambridge, MA, USA
3 Analytic and Translational Genetics Unit, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
4 Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, USA
5 Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA
6 Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
7 MRC Epidemiology Unit, University of Cambridge School of Clinical Medicine, Institute of Metabolic Science, Cambridge Biomedical Campus, Cambridge, CB2 0QQ, UK
8 A list of members and affiliations appears in the Supplementary Note.

Address correspondence to BBS [email protected], BMN [email protected], HKF [email protected] and ALP [email protected].

Abstract

Identifying genetic correlations between complex traits and diseases can provide useful etiological insights and help prioritize likely causal relationships. The major challenges preventing estimation of genetic correlation from genome-wide association study (GWAS) data with current methods are the lack of availability of individual genotype data and widespread sample overlap among meta-analyses. We circumvent these difficulties by introducing a technique – cross-trait LD Score regression – for estimating genetic correlation that requires only GWAS summary statistics and is not biased by sample overlap. We use this method to estimate 276 genetic correlations among 24 traits. The results include genetic correlations between anorexia nervosa and schizophrenia, anorexia and obesity and associations between educational attainment and several diseases. These results highlight the power of genome-wide analyses, since there currently are no significantly associated SNPs for anorexia nervosa and only three for educational attainment.

Introduction

Understanding the complex relationships among human traits and diseases is a fundamental goal of epidemiology. Randomized controlled trials and longitudinal studies are time-consuming and expensive, so many potential risk factors are studied using cross-sectional correlation studies at a single time point. Obtaining causal inferences from such studies can be challenging due to issues such as confounding and reverse causation, which can lead to spurious associations and mask the effects of real risk factors [1, 2]. Genetics can help elucidate cause and effect, since inherited genetic risks cannot be subject to reverse causation and are correlated with a smaller list of confounders.

The first methods for testing for genetic overlap were family studies [3, 4, 5, 6, 7]. In order to estimate genetic overlaps among many pairs of phenotypes, family designs require measuring multiple traits on the same individuals. Consequently, it is challenging to scale family designs to a large number of traits, especially traits that are difficult or costly to measure (e.g., low-prevalence diseases). More recently, genome-wide association studies (GWAS) have allowed us to obtain effect-size estimates for specific genetic variants, so it is possible to test for shared genetics by looking for correlations in effect sizes across traits, which does not require measuring multiple traits per individual. There exists a large class of methods for interrogating genetic overlap via GWAS that focus only on genome-wide significant SNPs.
One of the most influential methods in this class is Mendelian randomization, which uses significantly associated SNPs as instrumental variables to attempt to quantify causal relationships between risk factors and disease [1, 2]. Methods that focus on significant SNPs are effective for traits where there are many significant associations that account for a substantial fraction of heritability [8, 9]. For many complex traits, heritability is distributed over thousands of variants with small effects, and the proportion of heritability accounted for by significantly associated variants at current sample sizes is small. In such situations, one can often obtain more accurate results by using genome-wide data, rather than just significantly associated variants. A complementary approach is to estimate genetic correlation, which includes the effects of all SNPs, including those that do not reach genome-wide significance (Methods).

The two main existing techniques for estimating genetic correlation from GWAS data are restricted maximum likelihood (REML) [11, 12, 13, 14, 15, 16] and polygenic scores [17, 18]. These methods have only been applied to a few traits, because they require individual genotype data, which are difficult to obtain due to informed consent limitations. In order to overcome these limitations, we have developed a technique for estimating genetic correlation using only GWAS summary statistics that is not biased by sample overlap. Our method, cross-trait LD Score regression, is a simple extension of single-trait LD Score regression and is computationally very fast. We apply this method to data from 24 GWAS and report genetic correlations for 276 pairs of phenotypes, demonstrating shared genetic bases for many complex diseases and traits.

Results

Overview of Methods

The method presented here for estimating genetic correlation from summary statistics relies on the fact that the GWAS effect-size estimate for a given SNP incorporates the effects of all SNPs in linkage disequilibrium (LD) with that SNP [19, 20]. For a polygenic trait, SNPs with high LD will have higher $\chi^2$ statistics on average than SNPs with low LD. A similar relationship holds if we replace $\chi^2$ statistics for a single study with the product of z-scores from two studies of traits with non-zero genetic correlation. More precisely, under a polygenic model [11, 13], the expected value of $z_{1j} z_{2j}$ is

$$E[z_{1j} z_{2j}] = \frac{\sqrt{N_1 N_2}\,\rho_g}{M}\,\ell_j + \frac{\rho N_s}{\sqrt{N_1 N_2}}, \tag{1}$$

where $N_i$ is the sample size for study i, $\rho_g$ is genetic covariance (defined in Methods), $M$ is the number of SNPs, $\ell_j$ is the LD Score of SNP j, $N_s$ is the number of individuals included in both studies, and $\rho$ is the phenotypic correlation among the $N_s$ overlapping samples. We derive this equation in the Supplementary Note. If study 1 and study 2 are the same study, then Equation 1 reduces to the single-trait result, because genetic covariance between a trait and itself is heritability, and $\chi^2 = z^2$. As a consequence of Equation 1, we can estimate genetic covariance using the slope from the regression of $z_{1j} z_{2j}$ on LD Score, which is computationally very fast (Methods).

Sample overlap creates spurious correlation between $z_{1j}$ and $z_{2j}$, which inflates $z_{1j} z_{2j}$. The expected magnitude of this inflation is uniform across all markers, and in particular does not depend on LD Score. As a result, sample overlap only affects the intercept from this regression (the term $\rho N_s / \sqrt{N_1 N_2}$) and not the slope, so the estimates of genetic correlation will not be biased by sample overlap.
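The regression at the heart of the method is simple enough to sketch in a few lines. Below is a minimal, illustrative Python version of the core computation, assuming you already have arrays of z-scores from two studies aligned to the same SNPs plus their LD Scores. This is a didactic simplification (unweighted least squares, no block jackknife for standard errors), not the authors' full implementation:

```python
import numpy as np

def crosstrait_ldsc_slope(z1, z2, ld_scores, n1, n2, n_snps):
    """Estimate genetic covariance from the regression of z1*z2 on LD Score.

    Simplified, unweighted sketch of cross-trait LD Score regression:
    E[z1j*z2j] = sqrt(N1*N2)*rho_g/M * l_j + intercept, so the fitted
    slope times M/sqrt(N1*N2) estimates the genetic covariance rho_g.
    """
    y = z1 * z2
    X = np.column_stack([ld_scores, np.ones_like(ld_scores)])
    slope, intercept = np.linalg.lstsq(X, y, rcond=None)[0]
    rho_g = slope * n_snps / np.sqrt(n1 * n2)
    return rho_g, intercept

# Genetic correlation then normalizes by the SNP-heritabilities,
# each estimable by single-trait LD Score regression:
#   r_g = rho_g / sqrt(h2_1 * h2_2)
```

Leaving the intercept free is what makes the estimate robust to sample overlap: the overlap term $\rho N_s / \sqrt{N_1 N_2}$ is absorbed by the fitted intercept, while the slope still reflects genetic covariance.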
Similarly, shared population stratification will alter the intercept but have minimal impact on the slope, because the correlation between LD Score and the rate of genetic drift is minimal. If we are willing to assume no shared population stratification, and we know the amount of sample overlap and phenotypic correlation in advance (i.e., the true value of $\rho N_s / \sqrt{N_1 N_2}$), we can constrain the intercept to this value. We refer to this approach as constrained intercept LD Score regression. Constrained intercept LD Score regression has lower standard error – often by as much as 30% – than LD Score regression with unconstrained intercept, but will yield biased and misleading estimates if the intercept is misspecified, e.g., if we specify the wrong value of $N_s \rho$ or do not completely control for population stratification.

Normalizing genetic covariance by the SNP-heritabilities yields genetic correlation: $r_g = \rho_g / \sqrt{h_1^2 h_2^2}$, where $h_i^2$ denotes the SNP-heritability from study i. Genetic correlation ranges between −1 and 1. Results similar to Equation 1 hold if one or both studies is a case/control study, in which case genetic covariance is on the observed scale. Details are provided in the Supplementary Note. There is no distinction between observed and liability scale genetic correlation for case/control traits, so we can define and estimate genetic correlation between a case/control trait and a quantitative trait, and between pairs of case/control traits, without the need to specify a scale (Supplementary Note).

Simulations

We performed a series of simulations to evaluate the robustness of the model to potential confounders such as sample overlap and model misspecification, and to verify the accuracy of the standard error estimates (Methods). Table 1 shows cross-trait LD Score regression estimates and standard errors from 1,000 simulations of quantitative traits. For each simulation replicate, we generated two phenotypes for each of 2,062 individuals in our sample by drawing effect sizes for approximately 600,000 SNPs on chromosome 2 from a bivariate normal distribution. We then computed summary statistics for both phenotypes and estimated heritability and genetic correlation with cross-trait LD Score regression. The summary statistics were generated from completely overlapping samples. Results are shown in Table 1. These simulations confirm that cross-trait LD Score regression yields accurate estimates of the true genetic correlation and that the standard errors match the standard deviation across simulations. Thus, cross-trait LD Score regression is not biased by sample overlap, in contrast to estimation of genetic correlation via polygenic risk scores, which is biased in the presence of sample overlap. We also evaluated simulations with one quantitative trait and one case/control study and show that cross-trait LD Score regression can be applied to binary traits and is not biased by oversampling of cases (Supplementary Table 1).

Table 1. Simulations with complete sample overlap. Truth shows the true parameter values. Estimate shows the average cross-trait LD Score regression estimate across 1,000 simulations. SD shows the standard deviation of the estimates across 1,000 simulations, and SE shows the mean cross-trait LD Score regression SE across 1,000 simulations. Further details of the simulation setup are given in the Methods.
| Parameter | Truth | Estimate | SD | SE |
| :---: | :---: | :---: | :---: | :---: |
| h² | 0.58 | 0.58 | 0.072 | 0.075 |
| ρ_g | 0.29 | 0.29 | 0.057 | 0.058 |
| r_g | 0.50 | 0.49 | 0.079 | 0.073 |

Estimates of heritability and genetic covariance can be biased if the underlying model of genetic architecture is misspecified, e.g., if variance explained is correlated with LD Score or MAF [19, 21]. Because genetic correlation is estimated as a ratio, it is more robust: biases that affect the numerator and the denominator in the same direction tend to cancel. We obtain approximately correct estimates of genetic correlation even in simulations with models of genetic architecture in which our estimates of heritability and genetic covariance are biased (Supplementary Table 2).

Replication of Psychiatric Cross-Disorder Results

As technical validation, we replicated the estimates of genetic correlations among psychiatric disorders obtained with individual genotypes and REML [14] by applying cross-trait LD Score regression to summary statistics from the same data [22]. These summary statistics were generated from non-overlapping samples, so we applied cross-trait LD Score regression using both unconstrained and constrained intercepts (Methods). Results from these analyses are shown in Figure 1. The results from cross-trait LD Score regression were similar to the results from REML. Cross-trait LD Score regression with constrained intercept gave standard errors only slightly larger than those from REML, while the standard errors from cross-trait LD Score regression with unconstrained intercept were substantially larger, especially for traits with small sample sizes (e.g., ADHD, ASD).

Figure 1.

Application to Summary Statistics from 24 Phenotypes

We used cross-trait LD Score regression to estimate genetic correlations among 24 phenotypes (URLs, Methods). Genetic correlation estimates for all 276 pairwise combinations of the 24 traits are shown in Figure 2. For clarity of presentation, the 24 phenotypes were restricted to contain only one phenotype from each cluster of closely related phenotypes (Methods). Genetic correlations among the educational, anthropometric, smoking, and insulin-related phenotypes that were excluded from Figure 2 are shown in Supplementary Figures 1, 2, 3 and 4, respectively. A full table of 1,176 genetic correlations among 49 traits is provided in Supplementary Table 4. References and sample sizes are shown in Supplementary Table 3.

Figure 2.

The first section of Table 2 lists genetic correlation results that are consistent with epidemiological associations but, as far as we are aware, have not previously been reported using genetic data. The estimates of the genetic correlations between age at menarche and adult height [29], triglycerides, and type 2 diabetes [30, 31] are consistent with the epidemiological associations. The estimate of a negative genetic correlation between anorexia nervosa and obesity suggests that the same genetic factors influence normal variation in BMI as well as dysregulated BMI in psychiatric illness. This result is consistent with the observation that BMI GWAS findings implicate neuronal, rather than metabolic, cell types and epigenetic marks [32, 33]. The negative genetic correlation between adult height and coronary artery disease agrees with a replicated epidemiological association [34, 35, 36].
We observe several significant associations with the educational attainment phenotypes from Rietveld et al. [37]: we estimate a statistically significant negative genetic correlation between college and Alzheimer's disease, which agrees with epidemiological results [38, 39]. The positive genetic correlation between college and bipolar disorder is consistent with previous epidemiological reports [40, 41]. The estimate of a negative genetic correlation between smoking and college is consistent with the observed differences in smoking rates as a function of educational attainment [42].

Table 2. Genetic correlation estimates, standard errors and p-values for selected pairs of traits. Results are grouped into genetic correlations that are new genetic results but consistent with established epidemiological associations ("Epidemiological"), genetic correlations that are new both to genetics and to epidemiology ("New/Nonzero"), and interesting null results ("New/Low"). The p-values are uncorrected p-values. Results that pass multiple testing correction for the 300 tests in Figure 2 at 1% FDR are marked with a single asterisk; results that pass Bonferroni correction are marked with two asterisks. We present some genetic correlations that agree with epidemiological associations but do not pass multiple testing correction in these data.

| | Phenotype 1 | Phenotype 2 | r_g (SE) | p-value |
| :--- | :--- | :--- | :--- | :--- |
| Epidemiological | Age at menarche | Adult height | 0.13 (0.03) | 2×10⁻⁶ |
| | Age at menarche | Type 2 diabetes | −0.13 (0.04) | 2×10⁻³ |
| | Age at menarche | Triglycerides | −0.12 (0.04) | 1×10⁻³ |
| | Coronary artery disease | Age at menarche | −0.12 (0.05) | 3×10⁻² |
| | Coronary artery disease | Years of education | −0.25 (0.06) | 1×10⁻⁴ |
| | Coronary artery disease | Adult height | −0.17 (0.04) | 1×10⁻⁵ |
| | Alzheimer's | Years of education | −0.29 (0.1) | 5×10⁻³ |
| | Bipolar disorder | Years of education | 0.30 (0.06) | 9×10⁻⁷ |
| | BMI | Years of education | −0.28 (0.03) | 6×10⁻¹⁶ |
| | Triglycerides | Years of education | −0.26 (0.06) | 2×10⁻⁸ |
| | Anorexia nervosa | BMI | −0.18 (0.04) | 3×10⁻⁷ |
| | Ever/never smoker | Years of education | −0.36 (0.06) | 2×10⁻⁸ |
| | Ever/never smoker | BMI | 0.20 (0.04) | 8×10⁻⁷ |
| New/Nonzero | Autism spectrum disorder | Years of education | 0.30 (0.08) | 2×10⁻⁴ |
| | Ulcerative colitis | Childhood obesity | −0.34 (0.08) | 3.1×10⁻⁵ |
| | Anorexia nervosa | Schizophrenia | 0.19 (0.04) | 2×10⁻⁵ |
| New/Low | Schizophrenia | Alzheimer's | 0.04 (0.06) | >0.1 |
| | Schizophrenia | Ever/never smoker | 0.04 (0.06) | >0.1 |
| | Schizophrenia | Triglycerides | −0.04 (0.04) | >0.1 |
| | Schizophrenia | LDL cholesterol | −0.04 (0.04) | >0.1 |
| | Schizophrenia | HDL cholesterol | 0.03 (0.04) | >0.1 |
| | Schizophrenia | Rheumatoid arthritis | −0.04 (0.05) | >0.1 |
| | Crohn's disease | Rheumatoid arthritis | −0.03 (0.08) | >0.1 |
| | Ulcerative colitis | Rheumatoid arthritis | 0.09 (0.08) | >0.1 |

The second section of Table 2 lists three results that are, to the best of our knowledge, new both to genetics and to epidemiology. First, we find a positive genetic correlation between anorexia nervosa and schizophrenia. Comorbidity between eating and psychotic disorders has not been thoroughly investigated in the psychiatric literature [43, 44], and this result raises the possibility of similarity between these classes of disease. Second, we estimate a negative genetic correlation between ulcerative colitis (UC) and childhood obesity. The relationship between premorbid BMI and ulcerative colitis is not well understood; exploring this relationship may be a fruitful direction for further investigation.
Third, we estimate a positive genetic correlation between autism spectrum disorder (ASD) and educational attainment (which has a very high genetic correlation with IQ [37, 45, 46]). The ASD summary statistics were generated using a case-pseudocontrol study design, so this result cannot be explained by oversampling of ASD cases from more highly educated parents, which is observed epidemiologically [47]. The distribution of IQ among individuals with ASD has a lower mean than in the general population, but with heavy tails (i.e., an excess of individuals with both low and high IQ). There is also emerging evidence that the genetic architecture of ASD varies across the IQ distribution. The third section of Table 2 lists interesting examples where the genetic correlation is close to zero with small standard error. The low genetic correlation between schizophrenia and rheumatoid arthritis is interesting because schizophrenia has been observed to be protective for rheumatoid arthritis [50], though the epidemiological effect is weak, so it is possible that a real genetic correlation exists but is too small for us to detect. The low genetic correlation between schizophrenia and smoking is notable because of the increased tobacco use (both prevalence and number of cigarettes per day) among individuals with schizophrenia [51]. The low genetic correlation between schizophrenia and plasma lipid levels contrasts with a previous report of pleiotropy between schizophrenia and triglycerides [52]. Pleiotropy (unsigned) is different from genetic correlation (signed; see Methods); however, the pleiotropy reported by Andreassen et al. [52] could be explained by the sensitivity of the method used to the properties of a small number of regions with strong LD, rather than by trait biology (Supplementary Figure 5). We estimate a near-zero genetic correlation between Alzheimer's disease and schizophrenia. The genetic correlations between Alzheimer's disease and the other psychiatric traits (anorexia nervosa, bipolar disorder, major depression, ASD) are also close to zero, but with larger standard errors, due to smaller sample sizes. This suggests that the genetic basis of Alzheimer's disease is distinct from that of psychiatric conditions. Last, we estimate near-zero genetic correlations between rheumatoid arthritis (RA) and both Crohn's disease (CD) and UC. Although these diseases share many associated loci [53, 54], there appears to be no directional trend: some RA risk alleles are also risk alleles for UC and CD, but many RA risk alleles are protective for UC and CD, yielding near-zero genetic correlation. This example highlights the distinction between pleiotropy and genetic correlation (Methods). Finally, the estimates of genetic correlations among metabolic traits are consistent with the estimates obtained using REML in Vattikuti et al. [15] (Supplementary Table 6), and are directionally consistent with the recent Mendelian randomization results of Wurtz et al. [55]. The estimate of 0.54 (0.07) for the genetic correlation between CD and UC is consistent with the estimate of 0.62 (0.04) from Chen et al. [16].

Discussion

We have described a new method for estimating genetic correlation from GWAS summary statistics, which we applied to a dataset of GWAS summary statistics consisting of 24 traits and more than 1.5 million unique phenotype measurements. We reported several new findings that would have been difficult to obtain with existing methods, including a positive genetic correlation between anorexia nervosa and schizophrenia.
Our method replicated many previously reported GWAS-based genetic correlations, and confirmed observations of overlap among genome-wide significant SNPs, MR results and epidemiological associations. This method is an advance for several reasons: it does not require individual genotypes, genome-wide significant SNPs or LD-pruning (which loses information if causal SNPs are in LD); it is not biased by sample overlap; and it is computationally fast. Furthermore, our approach does not require measuring multiple traits on the same individuals, so it scales easily to studies of thousands of pairs of traits. These advantages allow us to estimate genetic correlation for many more pairs of phenotypes than was possible with existing methods.

The challenges in interpreting genetic correlation are similar to the challenges in MR. We highlight two difficulties. First, genetic correlation is immune to environmental confounding but is subject to genetic confounding, analogous to confounding by pleiotropy in MR. For example, the genetic correlation between HDL and CAD in Figure 2 could result from a causal effect HDL→CAD, but could also be mediated by triglycerides (TG) [9, 56], represented graphically as HDL←G→TG→CAD, where G is the set of genetic variants with effects on both HDL and TG. Extending genetic correlation to multiple genetically correlated phenotypes is an important direction for future work. Second, although genetic correlation estimates are not biased by oversampling of cases, they are affected by other forms of biased sampling, such as misclassification and case/control/covariate sampling (e.g., a BMI-matched study of T2D).

We note several limitations of cross-trait LD Score regression as an estimator of genetic correlation. First, cross-trait LD Score regression requires larger sample sizes than methods that use individual genotypes in order to achieve equivalent standard errors. Second, cross-trait LD Score regression is not currently applicable to samples from recently admixed populations. Third, we have not investigated the potential impact of assortative mating on estimates of genetic correlation, which remains a direction for future work. Fourth, methods built from polygenic models, such as cross-trait LD Score regression and REML, are most effective when applied to traits with polygenic genetic architectures. For traits where significant SNPs account for a sizable proportion of heritability, analyzing only these SNPs can be more powerful. Developing methods that make optimal use of both large-effect SNPs and diffuse polygenic signal is a direction for future research.

Despite these limitations, we believe that the cross-trait LD Score regression estimator of genetic correlation will be a useful addition to the epidemiological toolbox, because it allows rapid screening for correlations among a diverse set of traits, without the need for measuring multiple traits on the same individuals or for genome-wide significant SNPs.

Methods

Definition of Genetic Covariance and Correlation

All definitions refer to narrow-sense heritabilities and genetic covariances. Let S denote a set of M SNPs, let X denote a vector of additively (0-1-2) coded genotypes for the SNPs in S, and let y₁ and y₂ denote phenotypes. Define β := argmax_{α ∈ ℝ^M} Cor[y₁, Xα], where the maximization is performed in the population (i.e., in the infinite-data limit). Let γ denote the corresponding vector for y₂. This is a projection, so β is unique modulo SNPs in perfect LD.
Define h²_S(y₁), the heritability explained by SNPs in S, as h²_S(y₁) := Var[Xβ], and ρ_S(y₁, y₂), the genetic covariance among SNPs in S, as ρ_S(y₁, y₂) := Cov[Xβ, Xγ] (phenotypes taken to have unit variance). The genetic correlation among SNPs in S is r_S(y₁, y₂) := ρ_S(y₁, y₂)/√(h²_S(y₁) h²_S(y₂)), which lies in [−1, 1]. Following earlier work, we use the subscript g (as in h²_g, ρ_g, r_g) when the set S is the set of genotyped and imputed SNPs in a GWAS. The SNP genetic correlation (r_g) is different from the family-study genetic correlation. In a family study, the relationship matrix captures information about all genetic variation, not just common SNPs. As a result, family studies estimate the total genetic correlation (S equals all variants). Unlike the relationship between SNP-heritability and total heritability, for which h²_g ≤ h², no similar relationship holds between the SNP genetic correlation and the total genetic correlation. If β and γ are more strongly correlated among common variants than among rare variants, then the total genetic correlation will be less than the SNP genetic correlation.

Genetic correlation is (asymptotically) proportional to Mendelian randomization estimates. If we use a genetic instrument g to estimate the effect b₁₂ of y₁ on y₂, the 2SLS estimate is b̂_{2SLS} := gᵀy₂ / gᵀy₁ [59]. The expectations of the numerator and denominator are E[gᵀy₂] = ρ_S(y₁, y₂) and E[gᵀy₁] = h²_S(y₁). Thus, b̂_{2SLS} converges in probability to ρ_S(y₁, y₂)/h²_S(y₁), which is proportional to the genetic correlation. If we use the same set S of SNPs to estimate b₁₂ and b₂₁ (e.g., if S is the set of all common SNPs, as in the genetic correlation analyses in this paper), then this procedure is symmetric in y₁ and y₂.

Genetic correlation is different from pleiotropy. Two traits have a pleiotropic relationship if many variants affect both. Genetic correlation is a stronger condition than pleiotropy: to exhibit genetic correlation, the directions of effect must also be consistently aligned.

Cross-Trait LD Score Regression

Recall from the Overview of Methods that the cross-trait LD Score regression equation is

E[z_{1j} z_{2j}] = \frac{\sqrt{N_1 N_2}\,\rho_g}{M}\,\ell_j + \frac{\rho N_s}{\sqrt{N_1 N_2}},    (2)

where z_{ij} denotes the z-score for study i and SNP j, N_i is the sample size for study i, ρ_g is the genetic covariance, ℓ_j is LD Score [19], N_s is the number of individuals included in both studies, and ρ is the phenotypic correlation among the N_s overlapping samples. We derive this equation in the Supplementary Note. We estimate the genetic covariance by regressing z_{1j}z_{2j} against √(N₁ⱼN₂ⱼ)ℓ_j (where N_{ij} is the sample size for SNP j in study i), then multiplying the resulting slope by M, the number of SNPs in the reference panel with MAF between 5% and 50% (technically, this is an estimate of the genetic covariance among SNPs with 5-50% MAF; Supplementary Note). If we know the correct value of the intercept term ρN_s/√(N₁N₂) ahead of time, we can reduce the standard error by constraining the intercept to this value using the --constrain-intercept flag in ldsc (for pairs of binary traits, we give a corresponding expression in terms of the number of overlapping cases and controls in the Supplementary Note). Note that this works even when there is known non-zero sample overlap.

We recommend using the in-sample estimate of ρ (denoted ρ̂) rather than the population value of ρ. Under unbiased sampling, ρ̂ is consistent for ρ with O(1/N) variance, so in this case the distinction between ρ and ρ̂ is not of great importance. Under biased sampling (as discussed in the Non-Random Ascertainment section), the expected LD Score regression intercept depends on the expected sample correlation E[y_{i1} y_{i2} | s = 1] (which is estimated consistently by ρ̂), not on the population ρ. Thus, we advise using ρ̂ rather than ρ when constraining the intercept.
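As a small worked example of the constrained-intercept value, a sketch with all sample sizes and correlations hypothetical:

```python
import numpy as np

def constrained_intercept(N1, N2, N_s, rho_hat):
    """Value to which the cross-trait intercept is constrained:
    rho_hat * N_s / sqrt(N1 * N2), the second term of Equation 2."""
    return rho_hat * N_s / np.sqrt(N1 * N2)

# Hypothetical example: two studies of 50,000 individuals sharing
# 10,000 samples with in-sample phenotypic correlation 0.3.
print(constrained_intercept(50_000, 50_000, 10_000, 0.3))  # 0.06
```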
Regression Weights

For heritability estimation, we use the regression weights from [19]. If the effect sizes for both phenotypes are drawn from a bivariate normal distribution, then the optimal regression weights for genetic covariance estimation are the reciprocals of the variance of the regressand under this model,

w_j \propto \left[\left(\frac{N_1 h_1^2 \ell_j}{M} + 1\right)\left(\frac{N_2 h_2^2 \ell_j}{M} + 1\right) + \left(\frac{\sqrt{N_1 N_2}\,\rho_g \ell_j}{M} + \frac{\rho N_s}{\sqrt{N_1 N_2}}\right)^2\right]^{-1},    (3)

(Supplementary Note). This quantity depends on several parameters (h₁², h₂², ρ_g, ρ, N_s) which are not known a priori, so it is necessary to estimate them from the data. We compute the weights in two steps: the first regression is weighted using the heritabilities from the single-trait LD Score regressions, ρN_s = 0, and a preliminary estimate of ρ_g; the second regression is weighted using the estimates of ρN_s and ρ_g from step 1. The genetic covariance estimate that we report is the estimate from the second regression. Linear regression with weights estimated from the data is called feasible generalized least squares (FGLS). FGLS has the same limiting distribution as WLS with optimal weights, so WLS p-values are valid for FGLS. We multiply the heteroskedasticity weights by 1/ℓ_j (where ℓ_j here is the LD Score with the sum taken over regression SNPs) in order to downweight SNPs that are overcounted. This is a heuristic: the optimal approach would be to rotate the data so that it is decorrelated, but this rotation matrix is difficult to compute.

Two-Step Estimator

As noted in [19], SNPs with very large effect sizes can result in large standard errors for single-trait LD Score regression with unconstrained intercept; cross-trait LD Score regression with unconstrained intercept behaves similarly. This is due to the well-known fact that linear regression deals poorly with outliers in the response variable (LD Score regression with constrained intercept is not nearly as adversely affected by large-effect SNPs). The solution proposed in [19] was to remove SNPs with χ² > 80 from the LD Score regression. This is a satisfactory solution when the goal is to estimate the LD Score regression intercept. If the goal is to distinguish polygenicity from population stratification, and we are willing to assume that the population stratification is subtle, such that SNPs with χ² > 80 are much more likely to be real causal SNPs than artifacts, then removing those SNPs makes the task much easier. However, this is unsatisfactory if the goal is to estimate h²: ignoring large-effect SNPs with χ² > 80 would bias the estimates of h² and ρ_g towards zero. Therefore, for estimating h² or ρ_g, we take a two-step approach. The first step is to estimate the LD Score regression intercept with all SNPs with χ² > 30 removed (i.e., all genome-wide significant SNPs; the threshold can be adjusted with the --two-step flag in ldsc). The second step is to estimate h² or ρ_g using all SNPs and constrained intercept LD Score regression, with the intercept constrained to the value from the first step (note that we account for uncertainty in the intercept when computing a standard error; see the next section).

Assessment of Statistical Significance via Block Jackknife

Summary statistics for SNPs in LD are correlated, so the OLS standard error will be biased downwards. We estimate a heteroskedasticity-and-correlation-robust standard error with a block jackknife over blocks of adjacent SNPs. This is the same procedure used in [19], and it gives accurate standard errors in simulations (Table 1). We obtain a standard error for the genetic correlation by using a ratio block jackknife over SNPs. The default setting in ldsc is 200 blocks per genome, which can be adjusted with the --n-blocks flag. For the two-step estimator, if we were to estimate the intercept in the first step and then obtain a jackknife standard error for the second step treating the intercept as fixed, the standard error would be biased downwards, because it would not take into account the uncertainty in the intercept. Instead, we jackknife both steps of the procedure, which appropriately accounts for the uncertainty in the intercept and yields a valid standard error.
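A minimal sketch of a delete-one-block jackknife for a regression slope, assuming hypothetical arrays x and y indexed by SNP position; the ratio jackknife for r_g and the FGLS weighting used in ldsc are more involved:

```python
import numpy as np

def block_jackknife_se(x, y, n_blocks=200):
    """Jackknife SE for the slope of y ~ 1 + x over blocks of adjacent SNPs.

    SNPs in LD are correlated, so an ordinary OLS SE would be too small;
    deleting whole blocks of adjacent SNPs respects the local correlation.
    """
    def slope(xs, ys):
        X = np.column_stack([np.ones_like(xs), xs])
        return np.linalg.lstsq(X, ys, rcond=None)[0][1]

    blocks = np.array_split(np.arange(len(x)), n_blocks)
    estimates = []
    for b in blocks:
        keep = np.setdiff1d(np.arange(len(x)), b)
        estimates.append(slope(x[keep], y[keep]))
    estimates = np.asarray(estimates)
    n = len(estimates)
    # Standard delete-one jackknife variance formula.
    return np.sqrt((n - 1) / n * np.sum((estimates - estimates.mean()) ** 2))
```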
Reverse Causation

Consider a scenario where a risk factor E₁ causes a disease D, but incidence of disease D changes post-morbid levels of E₁ (this could occur if, for example, incidence of the disease persuades affected individuals to change their behavior in ways that lower E₁). If D is sufficiently common in our GWAS sample, then the genetic correlation may be affected by reverse causation. LD Score regression (or any genetic correlation estimator) will yield a consistent estimate of the cross-sectional genetic correlation between E₁ and D at the given time point; however, the cross-sectional genetic correlation between E₁ and D will be attenuated relative to the genetic correlation between disease and pre-morbid levels of E₁. The genetic correlation between disease and pre-morbid levels of the risk factor will typically be the more interesting quantity to estimate, because it is more closely related to the causal effect of E₁ on D. We can estimate this quantity by excluding all post-morbid measurements of the risk factor from the risk-factor GWAS. This allows us to circumvent reverse causation, at the cost of a small decrease in sample size. If D is uncommon, then modification of behavior after onset of D will account for only a small fraction of the population variance in E₁, so the effect of reverse causation on the genetic correlation will be small. Thus, reverse causation is primarily a concern for high-prevalence diseases.

Non-Random Ascertainment

We show in the Supplementary Note that LD Score regression is robust to oversampling of cases in case/control studies, modulo the transformation between observed- and liability-scale heritability and genetic covariance. Oversampling of cases is the most common form of biased sampling, but there are many other forms. For example, consider case/control/covariate ascertainment, where the sampling of cases and controls takes a covariate into account. As a concrete example, we know that high BMI is a major risk factor for T2D. If we wish to discover genetic variants that influence risk for T2D via mechanisms other than BMI, we may wish to perform a case/control study for T2D that compares BMI-matched cases and controls. If we were to use such a T2D study and a random population study of BMI to compute the genetic correlation between BMI and T2D, the result would be substantially attenuated relative to the population genetic correlation between T2D and BMI. (Note that this example holds irrespective of whether there is sample overlap and applies to all genetic correlation estimators, not just LD Score regression.) More generally, let s_i = 1 denote the event that individual i is selected into our study, and let C_i denote a vector of covariates describing individual i (which may include the phenotypes of individual i). Then we can represent an arbitrary biased sampling scheme by specifying the selection probabilities f(C_i) := P[s_i = 1 | C_i] (note that case/control ascertainment is the special case where C_i = y_i). Suppose that phenotypes are generated following the model from Section 1.1 of the Supplementary Note, but that our sample is selected following the biased sampling scheme f. Let a_{ij} denote the additive genetic component for phenotype j in individual i. If there is no direct ascertainment on genotype (i.e., if C_i does not include genotypes), then the proof of Proposition 1 in the Supplementary Note goes through, except that ρ is replaced with E[y_{i1} y_{i2} | s_i = 1] and ρ_g is replaced with E[a_{i1} a_{i2} | s_i = 1]. This has two practical implications. First, in studies with biased sampling schemes and sample overlap, if one wishes to constrain the intercept, one should use the sample correlation between phenotypes ρ̂ rather than the population correlation ρ: under biased sampling, plim_{N→∞} ρ̂ = E[y_{i1} y_{i2} | s_i = 1], which is typically not equal to ρ. Second, even if there is no sample overlap, biased sampling can affect the genetic correlation estimate. If the biased sampling mechanism (i.e., the function f(C_i) := P[s_i = 1 | C_i]) is known, then it may be possible to model the biased sampling explicitly and derive a function for converting genetic correlation estimates from the biased sample to population genetic correlations (similar to the derivations in Sections 1.3 and 1.4 of the Supplementary Note). If the biased sampling mechanism can be described only qualitatively, then it should at least be possible to guess the magnitude and direction of the bias by reasoning about E[y_{i1} y_{i2} | s_i = 1] and E[a_{i1} a_{i2} | s_i = 1].
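A toy illustration, with all numbers hypothetical, of how selection on phenotypes moves the expected sample correlation E[y_{i1} y_{i2} | s_i = 1] away from the population ρ:

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.4  # hypothetical population phenotypic correlation
y = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=500_000)

# A biased sampling scheme f(C): keep only individuals with extreme y1
# (selection depends on phenotype, not genotype).
selected = y[np.abs(y[:, 0]) > 1.5]

print(np.corrcoef(y.T)[0, 1])         # ~0.40 in the full population
print(np.corrcoef(selected.T)[0, 1])  # inflated (~0.65) under this selection
```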
Computational Complexity

Let N denote the sample size and M the number of SNPs. The computational complexity of the steps involved in LD Score regression is as follows: computing summary statistics takes O(MN) time; computing LD Scores takes O(MN) time, although the N used for computing LD Scores need not be large (we use the N = 378 Europeans from 1000 Genomes); LD Score regression takes O(M) time and space. For a user who has already computed summary statistics and downloads LD Scores from our website (URLs), the computational cost of LD Score regression is O(M) time and space. For comparison, REML takes O(MN²) time for computing the GRM and O(N³) time for maximizing the likelihood. Practically, estimating LD Scores takes roughly an hour parallelized over chromosomes, and LD Score regression takes about 15 seconds per pair of phenotypes on a 2014 MacBook Air with a 1.7 GHz Intel Core i7 processor.

Simulations

We simulated quantitative traits under an infinitesimal model in 2,062 controls from a Swedish study. To simulate the standard scenario where many causal SNPs are not genotyped, we simulated phenotypes by drawing causal SNPs from 622,146 best-guess imputed 1000 Genomes SNPs on chromosome 2, then retained only the 90,980 HapMap 3 SNPs with MAF above 5% for LD Score regression. We note that the simulations in [19] show that single-trait LD Score regression is only minimally biased by uncorrected population stratification and by moderate ancestry mismatch between the reference panel used for estimating LD Scores and the population sampled in the GWAS. In particular, LD Scores estimated from the 1000 Genomes reference panel are suitable for use with European-ancestry meta-analyses. Put another way, LD Score is only minimally correlated with F_ST, and the differences in LD Score among European populations are not so large as to bias LD Score regression. Since we use the same LD Scores for cross-trait LD Score regression as for single-trait LD Score regression, these results extend to cross-trait LD Score regression.
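A compressed sketch of this simulation design at toy scale, with random genotypes standing in for the Swedish genotype data:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 2_000, 5_000                    # individuals, causal SNPs (toy scale)
h2_1, h2_2, rho_g = 0.58, 0.58, 0.29   # parameter values as in Table 1

# Per-SNP effects drawn from a bivariate normal with the target
# heritabilities and genetic covariance (infinitesimal model).
cov = np.array([[h2_1, rho_g], [rho_g, h2_2]]) / M
beta = rng.multivariate_normal([0, 0], cov, size=M)

X = rng.binomial(2, 0.5, size=(N, M)).astype(float)  # stand-in genotypes
X = (X - X.mean(0)) / X.std(0)                       # standardize

g = X @ beta                                          # genetic components
y1 = g[:, 0] + rng.normal(0, np.sqrt(1 - h2_1), N)
y2 = g[:, 1] + rng.normal(0, np.sqrt(1 - h2_2), N)
# Per-SNP z-scores for each phenotype would then be computed and passed
# to the cross-trait regression sketched in the Overview of Methods.
```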
Summary Statistic Datasets

We selected traits for inclusion in the main text via the following procedure:

1. Begin with all publicly available non-sex-stratified European-only summary statistics.
2. Remove studies that do not provide signed summary statistics.
3. Remove studies not imputed to at least HapMap 2.
4. Remove studies that adjust for heritable covariates [60].
5. Remove all traits with a heritability z-score below 4; genetic correlation estimates for traits with heritability z-score below 4 are generally too noisy to report.
6. Prune clusters of correlated phenotypes (e.g., obesity classes 1-3) by picking the trait from each cluster with the highest heritability z-score.

We then applied the following filters (implemented in the script munge_sumstats.py included with ldsc; a sketch of the main filters appears below):

1. For studies that provide a measure of imputation quality, filter to INFO above 0.9.
2. For studies that provide sample MAF, filter to sample MAF above 1%.
3. In order to restrict to well-imputed SNPs in studies that do not provide a measure of imputation quality, filter to HapMap3 SNPs with 1000 Genomes EUR MAF above 5%, which tend to be well imputed in most studies. This step should be skipped if INFO scores are available for all studies.
4. If sample size varies from SNP to SNP, remove SNPs with effective sample size less than 0.67 times the 90th percentile of sample size.
5. For specialty-chip (e.g., Metabochip) meta-analyses, remove SNPs with N above the maximum GWAS N.
6. Remove indels and structural variants.
7. Remove strand-ambiguous SNPs.
8. Remove SNPs whose alleles do not match the alleles in 1000 Genomes.

Genomic control (GC) correction at any stage biases the heritability and genetic covariance estimates downwards (see the Supplementary Note of [19]). The biases in the numerator and denominator of genetic correlation cancel exactly, so genetic correlation is not biased by GC correction. A majority of the studies analyzed in this paper used GC correction, so we do not report genetic covariance and heritability.
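A sketch of the main munge_sumstats.py-style filters, assuming a pandas DataFrame with hypothetical column names (SNP, A1, A2, INFO, MAF, N); the real script handles many more input formats and edge cases:

```python
import pandas as pd

def basic_sumstats_qc(df: pd.DataFrame, hm3_snps: set) -> pd.DataFrame:
    """Apply the main summary-statistic filters described in the Methods.

    Column names are assumptions about the input format, not a fixed
    interface of the actual ldsc script.
    """
    if "INFO" in df:
        df = df[df["INFO"] > 0.9]          # imputation quality above 0.9
    else:
        df = df[df["SNP"].isin(hm3_snps)]  # fall back to well-imputed HapMap3 SNPs
    if "MAF" in df:
        df = df[df["MAF"] > 0.01]          # sample MAF above 1%
    # Drop SNPs with low effective sample size.
    df = df[df["N"] >= 0.67 * df["N"].quantile(0.9)]
    # Remove strand-ambiguous SNPs (A/T and C/G pairs).
    ambiguous = {("A", "T"), ("T", "A"), ("C", "G"), ("G", "C")}
    df = df[~df[["A1", "A2"]].apply(tuple, axis=1).isin(ambiguous)]
    return df
```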
Data on Alzheimer's disease were obtained from the following source: the International Genomics of Alzheimer's Project (IGAP) is a large two-stage study based upon genome-wide association studies (GWAS) on individuals of European ancestry. In stage 1, IGAP used genotyped and imputed data on 7,055,881 single nucleotide polymorphisms (SNPs) to meta-analyze four previously published GWAS datasets consisting of 17,008 Alzheimer's disease cases and 37,154 controls (the European Alzheimer's Disease Initiative, EADI; the Alzheimer Disease Genetics Consortium, ADGC; the Cohorts for Heart and Aging Research in Genomic Epidemiology consortium, CHARGE; the Genetic and Environmental Risk in AD consortium, GERAD). In stage 2, 11,632 SNPs were genotyped and tested for association in an independent set of 8,572 Alzheimer's disease cases and 11,312 controls. Finally, a meta-analysis was performed combining results from stages 1 and 2. We used only stage 1 data for LD Score regression.

Supplementary Material

1. NIHMS719075-supplement-1.pdf (524.7 KB)
2. NIHMS719075-supplement-2.csv (63.4 KB)

Acknowledgments

We would like to thank P. Sullivan, C. Bulik, S. Caldwell, C. Arabica and O. Andreassen for helpful comments. This work was supported by NIH grants R01 MH101244 (ALP), R01 HG006399 (NP), R03 CA173785 (HKF) and by the Fannie and John Hertz Foundation (HKF). Data on anorexia nervosa were obtained through funding from the WTCCC3 award WT088827/Z/09, "A genome-wide association study of anorexia nervosa". Data on glycaemic traits have been contributed by MAGIC investigators and have been downloaded from www.magicinvestigators.org. Data on coronary artery disease/myocardial infarction have been contributed by CARDIoGRAMplusC4D investigators and have been downloaded from www.CARDIOGRAMPLUSC4D.ORG. We thank the International Genomics of Alzheimer's Project (IGAP) for providing summary results data for these analyses. The investigators within IGAP contributed to the design and implementation of IGAP and/or provided data but did not participate in the analysis or writing of this report. IGAP was made possible by the generous participation of the control subjects, the patients, and their families. The i-Select chips were funded by the French National Foundation on Alzheimer's disease and related disorders. EADI was supported by the LABEX (laboratory of excellence program investment for the future) DISTALZ grant, Inserm, Institut Pasteur de Lille, Université de Lille 2 and the Lille University Hospital. GERAD was supported by the Medical Research Council (grant 503480), Alzheimer's Research UK (grant 503176), the Wellcome Trust (grant 082604/2/07/Z) and the German Federal Ministry of Education and Research (BMBF): Competence Network Dementia (CND) grants 01GI0102, 01GI0711 and 01GI0420. CHARGE was partly supported by NIH/NIA grant R01 AG033193, NIA grant AG081220, AGES contract N01-AG-12100, NHLBI grant R01 HL105756, the Icelandic Heart Association, and the Erasmus Medical Center and Erasmus University. ADGC was supported by NIH/NIA grants U01 AG032984, U24 AG021886 and U01 AG016976, and by Alzheimer's Association grant ADGC-10-196728.

URLs

ldsc software: github.com/bulik/ldsc
This paper: github.com/bulik/gencor_tex
PGC (psychiatric) summary statistics: www.med.unc.edu/pgc/downloads
GIANT (anthropometric) summary statistics: www.broadinstitute.org/collaboration/giant/index.php/GIANT_consortium_data_files
EGG (Early Growth Genetics) summary statistics: www.egg-consortium.org
MAGIC (insulin, glucose) summary statistics: www.magicinvestigators.org/downloads/
CARDIoGRAM (coronary artery disease) summary statistics: www.cardiogramplusc4d.org
DIAGRAM (T2D) summary statistics: www.diagram-consortium.org
Rheumatoid arthritis summary statistics: www.broadinstitute.org/ftp/pub/rheumatoid_arthritis/Stahl_etal_2010NG/
IGAP (Alzheimer's) summary statistics: www.pasteur-lille.fr/en/recherche/u744/igap/igap_download.php
IIBDGC (inflammatory bowel disease) summary statistics: www.ibdgenetics.org/downloads.html (we used a newer version of these data with 1000 Genomes imputation)
Plasma lipid summary statistics: www.broadinstitute.org/mpg/pubs/lipids2010/
SSGAC (educational attainment) summary statistics: www.ssgac.org/
Beans: www.barismo.com, www.bluebottlecoffee.com

Author Contributions

MJD provided reagents. BMN and ALP provided reagents. CL, ER, VA, JP and FD aided in the interpretation of results. JP and FD provided data on age at menarche. The caffeine molecule is responsible for all that is good about this manuscript; BBS and HKF are responsible for the rest. All authors revised and approved the final manuscript.

Competing Financial Interests

The authors declare no competing financial interests.

References
1. Smith George Davey, Ebrahim Shah. Mendelian randomization: can genetic epidemiology contribute to understanding environmental determinants of disease? International Journal of Epidemiology. 2003;32(1):1-22. doi:10.1093/ije/dyg070.
2. Smith George Davey, Hemani Gibran. Mendelian randomization: genetic anchors for causal inference in epidemiological studies. Human Molecular Genetics. 2014;23(R1):R89-R98. doi:10.1093/hmg/ddu328.
3. Vandenberg SG. Multivariate analysis of twin differences. Methods and Goals in Human Behavior Genetics. 1965:29-43.
4. Kempthorne Oscar, Osborne Richard H. The interpretation of twin data. American Journal of Human Genetics. 1961;13(3):320.
5. Loehlin John C, Vandenberg Steven Gerritjan. Genetic and environmental components in the covariation of cognitive abilities: an additive model. Louisville Twin Study, University of Louisville; 1966.
6. Neale Michael, Cardon Lon. Methodology for Genetic Studies of Twins and Families. 67. Springer; 1992.
7. Lichtenstein Paul, et al. Common genetic determinants of schizophrenia and bipolar disorder in Swedish families: a population-based study. The Lancet. 2009;373(9659):234-239. doi:10.1016/S0140-6736(09)60072-6.
8. Voight Benjamin F, et al. Plasma HDL cholesterol and risk of myocardial infarction: a mendelian randomisation study. The Lancet. 2012;380(9841):572-580. doi:10.1016/S0140-6736(12)60312-2.
9. Do Ron, et al. Common variants associated with plasma triglycerides and risk for coronary artery disease. Nature Genetics. 2013;45(11):1345-1352. doi:10.1038/ng.2795.
10. Visscher Peter M, Brown Matthew A, McCarthy Mark I, Yang Jian. Five years of GWAS discovery. The American Journal of Human Genetics. 2012;90(1):7-24. doi:10.1016/j.ajhg.2011.11.029.
11. Yang Jian, et al. Common SNPs explain a large proportion of the heritability for human height. Nature Genetics. 2010;42(7):565-569. doi:10.1038/ng.608.
12. Yang Jian, Lee S Hong, Goddard Michael E, Visscher Peter M. GCTA: a tool for genome-wide complex trait analysis. The American Journal of Human Genetics. 2011;88(1):76-82. doi:10.1016/j.ajhg.2010.11.011.
13. Lee Sang Hong, Yang Jian, Goddard Michael E, Visscher Peter M, Wray Naomi R. Estimation of pleiotropy between complex diseases using single-nucleotide polymorphism-derived genomic relationships and restricted maximum likelihood. Bioinformatics. 2012;28(19):2540-2542. doi:10.1093/bioinformatics/bts474.
14. Cross-Disorder Group of the Psychiatric Genomics Consortium, et al. Genetic relationship between five psychiatric disorders estimated from genome-wide SNPs. Nature Genetics. 2013. doi:10.1038/ng.2711.
15. Vattikuti Shashaank, Guo Juen, Chow Carson C. Heritability and genetic correlations explained by common SNPs for metabolic syndrome traits. PLoS Genetics. 2012;8(3):e1002637. doi:10.1371/journal.pgen.1002637.
16. Chen Guo-Bo, et al. Estimation and partitioning of (co)heritability of inflammatory bowel disease from GWAS and Immunochip data. Human Molecular Genetics. 2014:ddu174. doi:10.1093/hmg/ddu174.
17. Purcell Shaun M, et al. Common polygenic variation contributes to risk of schizophrenia and bipolar disorder. Nature. 2009;460(7256):748-752. doi:10.1038/nature08185.
18. Dudbridge Frank. Power and predictive accuracy of polygenic risk scores. PLoS Genetics. 2013;9(3):e1003348. doi:10.1371/journal.pgen.1003348.
19. Bulik-Sullivan Brendan, et al. LD Score regression distinguishes confounding from polygenicity in genome-wide association studies. Nature Genetics. 2015. doi:10.1038/ng.3211.
20. Yang Jian, et al. Genomic inflation factors under polygenic inheritance. European Journal of Human Genetics. 2011;19(7):807-812. doi:10.1038/ejhg.2011.39.
21. Speed Doug, Hemani Gibran, Johnson Michael R, Balding David J. Improved heritability estimation from genome-wide SNPs. The American Journal of Human Genetics. 2012;91(6):1011-1021. doi:10.1016/j.ajhg.2012.10.010.
22. Cross-Disorder Group of the Psychiatric Genomics Consortium, et al. Identification of risk loci with shared effects on five major psychiatric disorders: a genome-wide analysis. Lancet. 2013;381(9875):1371. doi:10.1016/S0140-6736(12)62129-1.
23. Perry John RB, et al. Parent-of-origin-specific allelic associations among 106 genomic loci for age at menarche. Nature. 2014;514(7520):92-97. doi:10.1038/nature13545.
24. Morris Andrew P, et al. Large-scale association analysis provides insights into the genetic architecture and pathophysiology of type 2 diabetes. Nature Genetics. 2012;44(9):981. doi:10.1038/ng.2383.
25. Horikoshi Momoko, et al. New loci associated with birth weight identify genetic links between intrauterine growth and adult height and metabolism. Nature Genetics. 2013;45(1):76-82. doi:10.1038/ng.2477.
26. Freathy Rachel M, et al. Type 2 diabetes risk alleles are associated with reduced size at birth. Diabetes. 2009;58(6):1428-1433. doi:10.2337/db08-1739.
27. Early Growth Genetics (EGG) Consortium, et al. A genome-wide association meta-analysis identifies new childhood obesity loci. Nature Genetics. 2012;44(5):526-531. doi:10.1038/ng.2247.
28. Taal H Rob, et al. Common variants at 12q15 and 12q24 are associated with infant head circumference. Nature Genetics. 2012;44(5):532-538. doi:10.1038/ng.2238.
29. Onland-Moret NC, et al. Age at menarche in relation to adult height: the EPIC study. American Journal of Epidemiology. 2005;162(7):623-632. doi:10.1093/aje/kwi260.
30. Day Felix, et al. Puberty timing associated with diabetes, cardiovascular disease and also diverse health outcomes in men and women: the UK Biobank study. Scientific Reports. 2014. doi:10.1038/srep11208.
31. Elks Cathy E, et al. Age at menarche and type 2 diabetes risk: the EPIC-InterAct study. Diabetes Care. 2013;36(11):3526-3534. doi:10.2337/dc13-0446.
32. Finucane Hilary K, et al. Partitioning heritability by functional category using GWAS summary statistics. In press at Nature Genetics. 2015. doi:10.1038/ng.3404.
33. Farooqi I Sadaf. Defining the neural basis of appetite and obesity: from genes to behaviour. Clinical Medicine. 2014;14(3):286-289. doi:10.7861/clinmedicine.14-3-286.
34. Wang Na, et al. Associations of adult height and its components with mortality: a report from cohort studies of 135 000 Chinese women and men. International Journal of Epidemiology. 2011;40(6):1715-1726. doi:10.1093/ije/dyr173.
35. Hebert Patricia R, et al. Height and incidence of cardiovascular disease in male physicians. Circulation. 1993;88(4):1437-1443. doi:10.1161/01.cir.88.4.1437.
36. Rich-Edwards Janet W, et al. Height and the risk of cardiovascular disease in women. American Journal of Epidemiology. 1995;142(9):909-917. doi:10.1093/oxfordjournals.aje.a117738.
37. Rietveld Cornelius A, et al. GWAS of 126,559 individuals identifies genetic variants associated with educational attainment. Science. 2013;340(6139):1467-1471. doi:10.1126/science.1235488.
38. Barnes Deborah E, Yaffe Kristine. The projected effect of risk factor reduction on Alzheimer's disease prevalence. The Lancet Neurology. 2011;10(9):819-828. doi:10.1016/S1474-4422(11)70072-2.
39. Norton Sam, Matthews Fiona E, Barnes Deborah E, Yaffe Kristine, Brayne Carol. Potential for primary prevention of Alzheimer's disease: an analysis of population-based data. The Lancet Neurology. 2014;13(8):788-794. doi:10.1016/S1474-4422(14)70136-X.
40. MacCabe James H, et al. Excellent school performance at age 16 and risk of adult bipolar disorder: national cohort study. The British Journal of Psychiatry. 2010;196(2):109-115. doi:10.1192/bjp.bp.108.060368.
41. Tiihonen Jari, et al. Premorbid intellectual functioning in bipolar disorder and schizophrenia: results from a cohort study of male conscripts. American Journal of Psychiatry. 2005;162(10):1904-1910. doi:10.1176/appi.ajp.162.10.1904.
42. Pierce John P, Fiore Michael C, Novotny Thomas E, Hatziandreu Evridiki J, Davis Ronald M. Trends in cigarette smoking in the United States: educational differences are increasing. JAMA. 1989;261(1):56-60.
43. Striegel-Moore Ruth H, Garvin Vicki, Dohm Faith-Anne, Rosenheck Robert A. Psychiatric comorbidity of eating disorders in men: a national study of hospitalized veterans. International Journal of Eating Disorders. 1999;25(4):399-404. doi:10.1002/(sici)1098-108x(199905)25:4<399::aid-eat4>3.0.co;2-0.
44. Blinder Barton J, Cumella Edward J, Sanathara Visant A. Psychiatric comorbidities of female inpatients with eating disorders. Psychosomatic Medicine. 2006;68(3):454-462. doi:10.1097/01.psy.0000221254.77675.f5.
45. Deary Ian J, Strand Steve, Smith Pauline, Fernandes Cres. Intelligence and educational achievement. Intelligence. 2007;35(1):13-21.
46. Calvin Catherine M, Fernandes Cres, Smith Pauline, Visscher Peter M, Deary Ian J. Sex, intelligence and educational achievement in a national cohort of over 175,000 11-year-old schoolchildren in England. Intelligence. 2010;38(4):424-432.
47. Durkin Maureen S, et al. Socioeconomic inequality in the prevalence of autism spectrum disorder: evidence from a US cross-sectional study. PLoS One. 2010;5(7):e11551. doi:10.1371/journal.pone.0011551.
48. Robinson Elise B, et al. Autism spectrum disorder severity reflects the average contribution of de novo and familial influences. Proceedings of the National Academy of Sciences. 2014;111(42):15161-15165. doi:10.1073/pnas.1409204111.
49. Samocha Kaitlin E, et al. A framework for the interpretation of de novo mutation in human disease. Nature Genetics. 2014;46(9):944-950. doi:10.1038/ng.3050.
50. Silman Alan J, Pearson Jacqueline E. Epidemiology and genetics of rheumatoid arthritis. Arthritis Research. 2002;4(Suppl 3):S265-S272. doi:10.1186/ar578.
51. de Leon Jose, Diaz Francisco J. A meta-analysis of worldwide studies demonstrates an association between schizophrenia and tobacco smoking behaviors. Schizophrenia Research. 2005;76(2):135-157. doi:10.1016/j.schres.2005.02.010.
52. Andreassen Ole A, et al. Improved detection of common variants associated with schizophrenia by leveraging pleiotropy with cardiovascular-disease risk factors. The American Journal of Human Genetics. 2013;92(2):197-209. doi:10.1016/j.ajhg.2013.01.001.
53. Cotsapas Chris, et al. Pervasive sharing of genetic effects in autoimmune disease. PLoS Genetics. 2011;7(8):e1002254. doi:10.1371/journal.pgen.1002254.
54. Farh Kyle Kai-How, et al. Genetic and epigenetic fine mapping of causal autoimmune disease variants. Nature. 2014. doi:10.1038/nature13835.
55. Wurtz Peter, et al. Metabolic signatures of adiposity in young adults: Mendelian randomization analysis and effects of weight change. PLoS Medicine. 2014. doi:10.1371/journal.pmed.1001765.
56. Burgess Stephen, Freitag Daniel F, Khan Hassan, Gorman Donal N, Thompson Simon G. Using multivariable Mendelian randomization to disentangle the causal effects of lipid fractions. PLoS One. 2014;9(10):e108891. doi:10.1371/journal.pone.0108891.
57. Greenland Sander, Pearl Judea, Robins James M. Causal diagrams for epidemiologic research. Epidemiology. 1999:37-48.
58. Dahl Andy, Hore Victoria, Iotchkova Valentina, Marchini Jonathan. Network inference in matrix-variate Gaussian models with non-independent noise. arXiv preprint arXiv:1312.1622. 2013.
59. Angrist Joshua D, Pischke Jörn-Steffen. Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press; 2008.
60. Aschard Hugues, Vilhjálmsson Bjarni J, Joshi Amit D, Price Alkes L, Kraft Peter. Adjusting for heritable covariates can bias effect estimates in genome-wide association studies. The American Journal of Human Genetics. 2015. doi:10.1016/j.ajhg.2014.12.021.
61. International HapMap 3 Consortium, et al. Integrating common and rare genetic variation in diverse human populations. Nature. 2010;467(7311):52-58. doi:10.1038/nature09298.
A 4x4 matrix representation of SU(3)? (Mathematics Stack Exchange)

Asked 10 years, 9 months ago; modified 6 years, 1 month ago; viewed 2k times.

Is it possible to find a representation of the infinitesimal generators of the special unitary group SU(3) that contains 4 by 4 matrices, by say taking a Kronecker product of its irreducible representation(s) with itself? I know this is possible for SU(2), where one can express the three 4 by 4 matrices spanning the unit quaternion group in terms of some of the generators of SU(4), which are 4 by 4 matrices as well. I am trying to do the same for SU(3). This question is motivated by the investigation of higher dimensional gauge theories. Thanks.

Tags: representation-theory

Comments:

Dietrich Burde: Can you not just extend the matrices of size 3x3 by a "trivial" block of size 1x1? So the direct sum of the natural representation and the trivial one.

itsqualtime: So basically adding a "1" entry to each of the Gell-Mann matrices? Would you say this is the only way of obtaining a 4x4 representation?

itsqualtime: Actually not the Gell-Mann matrices, since those are its infinitesimal generators, but rather its fundamental 3x3 rep.

Jyrki Lahtonen: I don't think this is possible in a way other than what Dietrich Burde described. SU(2) is special in the sense that there is up to isomorphism a single irreducible rep of each dimension.
With SU(3) the sequence of possible dimensions of irreducible reps goes like 1, 3, 3, 6, 8, 6, 10, ... IIRC at least the 8-dimensional and 10-dimensional reps play a role in particle physics.

itsqualtime: Jyrki, that's very interesting and particularly relevant to what I'm doing. Do you know of any reference that would provide more details on the number and dimensionality of the irreducible representations of an arbitrary SU(n) group?

Answer (Stephen):

You might have a look at Fulton's book Young Tableaux for the combinatorics relevant for computing these dimensions. Here is the relevant fact: the irreducible representations of SU(n) may be indexed by partitions with at most n−1 parts, in such a way that the dimension of the irreducible L(λ) corresponding to a partition λ is the number of column-strict Young tableaux (in the alphabet {1, 2, …, n}) of shape λ. This gives a practical way of computing the dimensions that is a bit faster than using Weyl's character formula (the classical formula that can be expressed in a root-system-uniform fashion) or more recent tools such as the Littelmann path model. The point being, for SU(n) you don't need to know about root systems to operate a machine that will compute the dimensions. (In fact, things are even better: Schur functions give you the characters of the irreps and not just their dimensions.)

In your example, the irreps would therefore be indexed by the partitions (in roughly increasing order of size)

(0), (1), (1,1), (2), (2,1), (3), (2,2), (3,1), (4), …

of dimensions 1, 3, 3, 6, 8, 10, 6, 15, 15, ..., as noted in Jyrki's comment above. The upshot for your problem is that a four-dimensional representation is one of three things: the sum of four copies of the trivial irrep, or the sum of one copy of the trivial irrep with one of the two 3-dimensional irreps.

Answer (Jonathan Rayner):

As noted by the accepted answer and the comments, the answer is no. Note that we can prove this without using Young tableaux: we can check the possible dimensions, as noted in Jyrki's comment above, by directly using the known dimension formula for SU(3). The irreducible representations are indexed by a pair of non-negative integers (m₁, m₂), and the dimension of the irreducible representation is given by

\dim(V_{m_1, m_2}) = \frac{1}{2}(m_1 + 1)(m_2 + 1)(m_1 + m_2 + 2).

One checks that there is no pair of non-negative integers that gives \dim(V_{m_1, m_2}) = 4 by testing the first few (and only) possibilities directly.
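A quick brute-force check of this conclusion using the dimension formula (a minimal sketch):

```python
# Enumerate dim V_{m1,m2} = (m1+1)(m2+1)(m1+m2+2)/2 for small labels and
# check that 4 never appears. Dimensions grow with m1 + m2, so a small
# search range suffices.
dims = sorted(
    (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2
    for m1 in range(8)
    for m2 in range(8)
)
print(dims[:8])   # [1, 3, 3, 6, 6, 8, 10, 10]
print(4 in dims)  # False: SU(3) has no 4-dimensional irreducible rep
```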
277
Game involving tiling a 1 by n board with 1 x 2 tiles? - Mathematics Stack Exchange
===============

Game involving tiling a 1 by n board with 1 x 2 tiles?

Asked 11 years, 7 months ago; modified 7 years, 4 months ago; viewed 3k times. Score: 5.

Consider a $1$ by $n$ tiled rectangle. You want to play a game with one opponent in which you place $1$ by $2$ "dominoes" on this rectangle. The player who places the last domino wins. Which player can guarantee a win in the game and when (the one who goes first or the one who goes second)? So far, I have determined that for $n$ even, the player who goes first can guarantee a victory by implementing a mirror strategy. However, the player who goes first will not always win when $n$ is odd. Any ideas?

game-theory combinatorial-game-theory

asked Dec 22, 2013 at 0:39 by okarin; edited Dec 22, 2013 at 1:00

Comments:

- If this is a NIM game, it's not about who goes first or second. It's about whether you start with an $N$ or $P$ position. You have shown that for $n$ even, if you are handed that, you have a $P$ position and will win. However, if you are handed an $N$ position, i.e. $n$ odd, then there is no winning strategy for you, since you have to hand the other player a $P$ position and they have a winning strategy. – mathematics2x2life, Dec 22, 2013 at 0:45
- For some $n$ not even, the first to move can still win. That is why I want to determine a generalization of when each player will win. – okarin, Dec 22, 2013 at 0:46
- First, we would say for $n$ odd, not '$n$ not even'. If this is true, then this is not a NIM game.
For any NIM game, given a position and a first player, and assuming perfect play from both players, that first player either has a strategy so that they will win regardless of the other player's moves, or has no possible strategy to win and hence will lose. – mathematics2x2life, Dec 22, 2013 at 0:49
- There's a detailed treatment of games like this in Chapter 4 of Winning Ways For Your Mathematical Plays, Volume 1. As @vadim123 said, this is equivalent to the game "Choose a pile, remove exactly two counters, and split the remainder into two piles", although one or both can be empty. The number of tiles in the rectangle = the number of counters in the pile. Placing a domino not on the end of a rectangle = splitting into two piles. – DanTilkin, Dec 22, 2013 at 1:36
- The game is certainly nim-like in being impartial, which means that from any given position both players have the same legal moves. (Contrast chess, where in any position one player is only allowed to move the white pieces and the other player the black pieces.) It is a theorem that any position in any impartial game is exactly equivalent to a nim-heap of some size. OP has an insight here that mathematics2x2life seems to lack. – MJD, Dec 22, 2013 at 1:55

2 Answers

Answer (score 9), by MJD (answered Dec 22, 2013 at 1:14; edited Dec 22, 2013 at 15:27):

The Sprague-Grundy theory should kill this problem pretty dead; it solves any game of a large class that includes this game. Without getting into the theoretical details of the theory, you can tabulate the "value" of each position, where the "value" is a non-negative integer:

- A position with no legal moves has value 0.
- A position with legal moves to one or more positions has the value $\mathrm{mex}(\mathrm{val}(p_1), \mathrm{val}(p_2), \ldots)$, where $\mathrm{val}(p_1), \ldots$ are the values of the positions to which one can legally move, and $\mathrm{mex}(S)$ is the smallest non-negative integer not in $S$.
- If a position $p$ is actually a disjoint sum of two subpositions, $p = p_1 + p_2$, where a move in one subposition can't affect the play in the other subposition, then $\mathrm{val}(p) = \mathrm{val}(p_1) \oplus \mathrm{val}(p_2)$, where $\oplus$ is the "bitwise exclusive or" operation, also sometimes described as "add in base 2, but without carrying".
- A position is a win for the player who moves to it if, and only if, its value is 0. If its value is positive, it is a win for the next player to move from it.

Let's say that $r_n$ represents a row of $n$ empty squares. Every position in your game is a disjoint sum of these. For example, after the first player moves in $r_6$, they leave either $r_4$, or $r_1 + r_3$, or $r_2 + r_2$. (Also $r_3 + r_1$, but the $+$ and $\oplus$ operations are commutative, so this is the same as $r_1 + r_3$.)
The values for these rows are as follows:

- $r_0$: $0$
- $r_1$: $0$
- $r_2$: $\mathrm{mex}(\mathrm{val}(r_0) = 0) = 1$
- $r_3$: $\mathrm{mex}(\mathrm{val}(r_1) = 0) = 1$
- $r_4$: $\mathrm{mex}(\mathrm{val}(r_2) = 1,\ \mathrm{val}(r_1) \oplus \mathrm{val}(r_1) = 0) = 2$
- $r_5$: $\mathrm{mex}(\mathrm{val}(r_3) = 1,\ \mathrm{val}(r_1) \oplus \mathrm{val}(r_2) = 1) = 0$
- $r_6$: $\mathrm{mex}(\mathrm{val}(r_4) = 2,\ \mathrm{val}(r_1) \oplus \mathrm{val}(r_3) = 1,\ \mathrm{val}(r_2) \oplus \mathrm{val}(r_2) = 0) = 3$
- $r_7$: $\mathrm{mex}(\mathrm{val}(r_5) = 0,\ \mathrm{val}(r_1) \oplus \mathrm{val}(r_4) = 2,\ \mathrm{val}(r_2) \oplus \mathrm{val}(r_3) = 0) = 1$
- $r_8$: $\mathrm{mex}(3,\ 0 \oplus 0 = 0,\ 2 \oplus 1 = 3,\ 1 \oplus 1 = 0) = 1$
- $r_9$: $\mathrm{mex}(1,\ 3 \oplus 0 = 3,\ 0 \oplus 1 = 1,\ 2 \oplus 1 = 3) = 0$

Let's look at $r_5$ for example. The next player to play can play at either end, leaving $r_3$, or in the middle, leaving a disjoint sum of $r_1$ and $r_2$. From an earlier calculation, $\mathrm{val}(r_2) = 1$ and $\mathrm{val}(r_1) = 0$. The value of the sum $r_2 + r_1$ is $1 \oplus 0 = 1$, and the value of $r_3$ is (from an earlier calculation) 1. So the value of $r_5$ is the mex of 1 and 1, which is the smallest non-negative number not in the set $\{1, 1\}$, and this is 0. This means that $r_5$ should be a win for the player who moved to it, and a loss for the next player to move. As indeed it is: whatever move the next player to move makes, their opponent wins trivially.

From $r_6$, on the other hand, one can move to $r_4$, to $r_3 + r_1$, or to $r_2 + r_2$. These have values $2$, $1 \oplus 0$, and $1 \oplus 1$, respectively. $1 \oplus 1 = 0$, so we need to find $\mathrm{mex}(2, 1, 0)$, which is 3. This is not equal to zero, so the next player has a win, which they produce by moving to a position with value 0, in this case $r_2 + r_2$, which they can win by a symmetry argument.

$r_9$ should be a win for the player who moves to it; let's call her $P$ for "previous". Suppose the next player, $N$, moves to $r_3 + r_4$. $P$'s job now is to move to a zero position. $r_3$ has value 1 and $r_4$ has value 2. It must be possible to move from $r_4$ to a position with value 1, leaving $1 \oplus 1 = 0$. So the correct move here is from $r_3 + r_4$ to $r_3 + r_2$. Now whichever component $N$ moves in leaves one remaining move in the other component for $P$, who wins, as predicted.

It may not be clear how to calculate the value for $r_n$ directly, but calculating all the $r_i$ for $i \le n$ is straightforward, if tedious. A computer calculation reveals that the 0 positions of length less than 100, which are the ones you want to move to if you want to win, are rows of 0, 1, 5, 9, 15, 21, 25, 29, 35, 39, 43, 55, 59, 63, 73, 77, 89, 93, 97, and 99 squares. Except for 0, these are all odd, as you observed they must be. The winning strategy is always "Move to a position with value 0." Because of the definition of $\mathrm{mex}$, there is such a move if and only if the current position has a nonzero value. (More detailed explanation of how this works, with extended example.)
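The "computer calculation" mentioned in this answer is easy to reproduce. The following Python sketch is an editorial addition (not part of the original answer; the function name `grundy_values` is ours): it tabulates the Grundy values exactly by the rules above, where a move deletes two adjacent squares and the values of the two resulting pieces combine by XOR.

```python
# Grundy values for rows r_0 .. r_n in the 1 x n domino game
# (Dawson's Kayles): a move deletes 2 adjacent squares, splitting
# a row of length L into independent rows of lengths i and L - 2 - i.
def grundy_values(n: int) -> list[int]:
    val = [0] * (n + 1)                    # val[0] = val[1] = 0: no moves
    for length in range(2, n + 1):
        options = {val[i] ^ val[length - 2 - i]   # XOR of the two pieces
                   for i in range(length - 1)}
        mex = 0
        while mex in options:              # mex = least excluded value
            mex += 1
        val[length] = mex
    return val

vals = grundy_values(99)
print(vals[:10])   # [0, 0, 1, 1, 2, 0, 3, 1, 1, 0], matching the table above
print([n for n, v in enumerate(vals) if v == 0])
# prints the 0 positions below 100; compare with the list in the answer above
```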
Comments:

- Actually, you can do better than computing things with mex for this and many similar games (see my answer). – Mark S., Dec 22, 2013 at 17:32
- Also note that the Sprague-Grundy theorem isn't too powerful: while it applies to many impartial games, some impartial games are too complex to solve because their rulesets don't easily reveal their relation to Nim (e.g. Sprouts). – Tristan F.-R., Dec 25, 2024 at 23:25

Answer (score 5), by Mark S. (answered Dec 22, 2013 at 17:31; edited Mar 24, 2018 at 21:21 by Rosie F):

In the context of what's known about impartial games like this, vadim123 and DanTilkin made the key observation: this game is a counter-removing game where you can remove two counters from any pile and leave the remainder of that pile's counters in 0, 1, or 2 piles.

Such basic counter-removing games are called octal games and are denoted by a code. The code for this game happens to be .07, but this game actually has a name: it's Dawson's Kayles, which is related to a more famous game called Dawson's Chess. You can play your game at cut-the-knot, as well as a version of Dawson's Chess. If you're unfamiliar with Sprague-Grundy theory, the strategy for Nim, and/or octal periodicity, check out this community-wiki collection of tutorials about them, because they're necessary for getting the full answer.

Since octal games leave disjoint piles (sometimes envisioned as strips of paper boxes), and the Sprague-Grundy theory tells us that bitwise xor is all we need to combine disjoint piles, all you need to know to play an octal game perfectly is the sequence of [Grundy] values for a single "pile" (a $1 \times n$ board). MJD outlined the beginnings of this computation in their answer.

However, many octal games, including Dawson's Kayles, are special in that the sequence of values is eventually periodic. (In fact, we can't prove that's not the case for all octal games.) There's a nice theorem called Guy-Smith periodicity that says "if the sequence for an octal game looks periodic for a little while, then it will stay periodic". (A proof can be found in this paper by Siegel.) In particular, it turns out that the values of Dawson's Kayles are eventually periodic. Dawson's Chess (code .137) is a "first cousin" of Dawson's Kayles, which means that their sequences of values are the same except for a shift: Dawson's Kayles has an extra 0 at the beginning. Here is the sequence for Dawson's Chess, which "has period 34 with the only exceptions at n = 0, 14, 16, 17, 31, 34 and 51." The table is easier to read in Figure 2 of this paper by Plambeck.

In short, this periodicity lets you have an efficient strategy for any game of Dawson's Kayles, without doing much computation. In particular, the losing rows are the ones with length 0, 1, 15, 35, and all numbers congruent to 5, 9, 21, 25, or 29 modulo 34 (and all other boards have a strategy for the first player: make moves where the bitwise xor of the remaining boards' values is zero).

If you play on a bigger board (not even necessarily a rectangle), but players only play pieces horizontally, then each contiguous row is independent and you can use bitwise xor to calculate the value of the whole game. As an aside, if you play on a bigger board and players can place the domino horizontally or vertically, then this game is called "[normal play] Cram", and not too much is known, even for rectangular boards of width bigger than 3.
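As an editorial sketch (not part of the original answer; the function name `is_losing` is ours), the closed-form losing-position test quoted in this answer is a one-liner:

```python
# Sketch of the periodic rule quoted above: a 1 x n board is a loss
# for the player to move iff n is 0, 1, 15, 35, or congruent to
# 5, 9, 21, 25, or 29 modulo 34.
def is_losing(n: int) -> bool:
    return n in (0, 1, 15, 35) or n % 34 in (5, 9, 21, 25, 29)

print([n for n in range(100) if is_losing(n)])
```

The output can be checked against the Grundy-value tabulation from the previous answer, with no mex computation at all.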
278
arXiv:quant-ph/9705052v1 28 May 1997

Stabilizer Codes and Quantum Error Correction

Thesis by Daniel Gottesman

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

California Institute of Technology, Pasadena, California, 1997 (Submitted May 21, 1997)

© 1997 Daniel Gottesman. All Rights Reserved.

Acknowledgements

I would like to thank my advisor John Preskill for his guidance, and the members of the QUIC collaboration, particularly David Beckman, John Cortese, Jarah Evslin, Chris Fuchs, Sham Kakade, Andrew Landahl, and Hideo Mabuchi, for many stimulating conversations. My graduate career was supported by a National Science Foundation Graduate Fellowship, by the U.S. Department of Energy under Grant No. DE-FG03-92-ER40701, and by DARPA under Grant No. DAAH04-96-1-0386 administered by the Army Research Office.

Abstract

Controlling operational errors and decoherence is one of the major challenges facing the field of quantum computation and other attempts to create specified many-particle entangled states. The field of quantum error correction has developed to meet this challenge. A group-theoretical structure and associated subclass of quantum codes, the stabilizer codes, has proved particularly fruitful in producing codes and in understanding the structure of both specific codes and classes of codes. I will give an overview of the field of quantum error correction and the formalism of stabilizer codes. In the context of stabilizer codes, I will discuss a number of known codes, the capacity of a quantum channel, bounds on quantum codes, and fault-tolerant quantum computation.

Contents

1 Introduction and Preliminary Material
  1.1 Quantum Computers
  1.2 Introduction to Quantum Mechanics
  1.3 Introduction to Classical Coding Theory
2 Basics of Quantum Error Correction
  2.1 The Quantum Channel
  2.2 A Simple Code
  2.3 Properties of Any Quantum Code
  2.4 Error Models
3 Stabilizer Coding
  3.1 The Nine-Qubit Code Revisited
  3.2 The General Stabilizer Code
  3.3 Some Examples
  3.4 Alternate Languages for Stabilizers
  3.5 Making New Codes From Old Codes
  3.6 Higher Dimensional States
4 Encoding and Decoding Stabilizer Codes
  4.1 Standard Form for a Stabilizer Code
  4.2 Network for Encoding
  4.3 Other Methods of Encoding and Decoding
5 Fault-Tolerant Computation
  5.1 Encoded Computation and Fault-Tolerance
  5.2 Measurement and Error Correction
  5.3 Transformations of the Stabilizer
  5.4 The Effects of Measurements
  5.5 Producing New Operations in N(G)
  5.6 Codes With Multiple Encoded Qubits
  5.7 The Toffoli Gate
  5.8 Construction of Gates in N(G)
  5.9 Refining the Error Correction Algorithm
6 Concatenated Coding
  6.1 The Structure of Concatenated Codes
  6.2 Threshold for Storage Errors and Gates From N(G)
  6.3 Toffoli Gate Threshold
7 Bounds on Quantum Error-Correcting Codes
  7.1 General Bounds
  7.2 Weight Enumerators and Linear Programming Bounds
  7.3 Bounds on Degenerate Stabilizer Codes
  7.4 Error-Correcting Codes and Entanglement Purification Protocols
  7.5 Capacity of the Erasure Channel
  7.6 Capacity of the Depolarizing Channel
8 Examples of Stabilizer Codes
  8.1 Distance Two Codes
  8.2 The Five-Qubit Code
  8.3 A Class of Distance Three Codes
  8.4 Perfect One-Error-Correcting Codes
  8.5 A Class of Distance Four Codes
  8.6 CSS Codes
  8.7 Amplitude Damping Codes
  8.8 Some Miscellaneous Codes
A Quantum Gates
B Glossary

List of Tables

3.1 The stabilizer for Shor's nine-qubit code
3.2 The stabilizer for the five-qubit code
3.3 The stabilizer for the eight-qubit code
3.4 The seven-qubit CSS code
3.5 A [4, 2, 2] code derived from the [5, 1, 3] code
3.6 The thirteen-qubit code formed by pasting together the five- and eight-qubit codes
3.7 Result of concatenating the five-qubit code with itself
8.1 The stabilizer for a [16, 10, 3] code
8.2 The stabilizer for a [16, 6, 4] code
8.3 The stabilizer for the [8, 0, 4] code
8.4 A four-qubit code for the amplitude damping channel
8.5 The stabilizer for an [11, 1, 5] code
8.6 The stabilizer for a code to correct one $\sigma_x$ or $\sigma_z$ error

List of Figures

2.1 Network to detect leakage errors
4.1 Creating the state X|00000⟩ for the five-qubit code
4.2 Network for encoding the five-qubit code
5.1 Network to swap |α⟩ and |β⟩ using ancilla |γ⟩
5.2 Network to swap two qubits using CNOT
5.3 Recursive construction of gates in N(G)
6.1 Cat state construction and verification
6.2 The Toffoli gate construction
7.1 The quantum Hamming bound, the Knill-Laflamme bound, and the bound from equation (7.66)
A.1 Various quantum gates

Chapter 1: Introduction and Preliminary Material

1.1 Quantum Computers

Computers have changed the world in many ways. They are ubiquitous, running air-traffic control and manufacturing plants, providing movie special effects and video games, and serving as a substrate for electronic mail and the World Wide Web. While computers allow us to solve many problems that were simply impossible before their advent, a number of problems require too much computation to be practical even for relatively simple inputs, and using the most powerful computers.

The field of classical complexity theory has developed to classify problems by their difficulty. A class of problems is generally considered tractable if an algorithm exists to solve it with resources (such as time and memory) polynomial in the size of the input. Two well-known classically intractable problems are factoring an n-bit number and the Traveling Salesman problem (finding the minimum cyclic path connecting n cities with specified distances between them). Both of these problems are in the complexity class NP (for "non-deterministic polynomial"):[1] given a black box that solves the problem (an oracle), we can check in polynomial time that the solution is correct. The Traveling Salesman problem is an NP-complete problem; that is, any problem in NP can be transformed into an instance of the Traveling Salesman problem in polynomial time. If we can solve the Traveling Salesman problem in polynomial time, we can solve any NP problem in polynomial time. Factoring may or may not be NP-complete, but so much work has been done attempting to solve it that the consensus is that it is classically intractable, and RSA public-key cryptography, which is used, for instance, to send credit-card numbers in Web browsing software, depends on the difficulty of factoring large numbers.

[1] Strictly speaking, it is the associated decision problems that are in NP.

As computer hardware develops over time, the underlying technology continually changes to become faster, smaller, and generally better. What was impossible on yesterday's computers may be quite possible today. A problem that was intractable on the earlier hardware might become tractable with the new technology. However, the strong Church-Turing Thesis states that this is not the case, and that every physical implementation of universal computation can simulate any other implementation with only a polynomial slowdown.[2] In this way, the Church-Turing Thesis protects complexity theory from obsolescence as computer technology improves. While a new computer may be able to factor larger numbers, the difficulty of factoring numbers will still scale the same way with the size of the input on the new hardware as on the old hardware.

Another problem that has proven to be classically intractable is simulating quantum systems. A single spin-1/2 particle, such as an electron trapped in a quantum dot, has a two-dimensional space of states, which can be considered to describe the direction of its spin. A similar classical particle such as a Heisenberg spin would also have a two-dimensional space of states. However, $n$ quantum particles have a $2^n$-dimensional state space, while $n$ classical Heisenberg spins would only have a $2n$-dimensional space of states.
The extra states in the quantum system come from the presence of entangled states between many different particles. Note that while an $n$-bit classical digital computer has $2^n$ possible states, they only form an $n$-dimensional state space, since a state can be described by an $n$-component binary vector. To describe a state in a quantum computer with $n$ qubits requires a complex vector with $2^n$ components. I give a basic introduction to quantum mechanics in section 1.2.

Quantum systems are difficult to simulate classically because they generically utilize the full $2^n$-dimensional Hilbert space as they evolve, requiring exponential classical resources. This fact led Feynman to conjecture that a quantum computer which used quantum mechanics intrinsically might be more powerful than a computer mired in the classical world. While this seems a sensible suggestion when just looking at quantum mechanics, it is in fact quite revolutionary in that it suggests that the strong Church-Turing Thesis is wrong![3] This opens up the possibility that classical complexity classes might not apply for quantum computers, and that some classically intractable problems might become tractable. The most spectacular instance of this is Shor's discovery of an algorithm to factor numbers on a quantum computer in a time polynomial in the number of digits. Another impressive algorithm is Grover's algorithm, which can find a single object in an unsorted database of $N$ objects in $O(\sqrt{N})$ time on a quantum computer, while the same task would require an exhaustive search on a classical computer, taking $O(N)$ time. It has been shown that $O(\sqrt{N})$ time is the best possible speed for this task, which tends to suggest that NP-complete problems are still intractable on a quantum computer, although this has not been shown (note that a proof of this would also show P ≠ NP for a classical computer).

[2] The original Church-Turing thesis only states that any universal computer can simulate any other computer, but the requirement of polynomial resources is a useful strengthening.
[3] A classical computer can simulate a quantum computer, but only with exponential resources, so the weak Church-Turing Thesis does still hold.

However, declaring by theoretical fiat the basic properties of a quantum computer is a far cry from actually building one and using it to factor large numbers. Nevertheless, the first steps in building a quantum computer have been taken. Any quantum computer requires a system with long-lived quantum states and a way to interact them. Typically, we consider systems comprised of a number of two-state subsystems, which are called qubits (for "quantum bits"). There are many proposals for how to build a quantum computer. Some possible physical realizations of qubits are:

• the ground and excited states of ions stored in a linear ion trap, with interactions between ions provided through a joint vibrational mode [6, 7];
• photons in either polarization, with interactions via cavity QED;
• nuclear spin states in polymers, with interactions provided by nuclear magnetic resonance techniques.

While these implementations are seemingly very different, it is possible to simulate the computational process of one system on any of the others, providing a quantum analogue to the Church-Turing Thesis (although there are difficult technical or theoretical problems with scaling up the size of these implementations).
These suggested implementations of quantum computers all share a much higher susceptibility to errors than modern classical computers. While further development may reduce the size of errors by orders of magnitude, it is unlikely that quantum computers will ever reach the incredible reliability of classical computers. Modern classical computers guard against error largely by being digital instead of analog: instead of allowing each bit of the computer to vary continuously between 0 and 1, at each time step the hardware kicks the bit back to the nearer of 0 and 1. This prevents small errors from building up into large errors, which are therefore drastically reduced. The same technique cannot be used in a quantum computer, because continually measuring each qubit would destroy the entangled states that distinguish a quantum computer from a classical computer. Entangled states are in general very delicate, and making a measurement on one will typically collapse it into a less entangled state. Small interactions with the environment provide a sort of continuous measurement of a system, and as the system grows in size, these become harder and harder to ignore. The system will decohere and begin to look like a classical system. Decoherence is why the world looks classical at a human scale. Reducing interactions with the environment can reduce the effects of decoherence, but not eliminate them entirely.

Even if the basal error rate in a quantum computer can be reduced to some small value $\epsilon$ per unit time, after $N$ time steps, the probability of surviving without an error is only $(1 - \epsilon)^N$, which decreases exponentially with $N$. Even if an algorithm runs in polynomial time on an error-free computer, it will require exponentially many runs on a real computer unless something can be done to control the errors. The same problem occurs for classical computers. There, the problem can be solved in principle by the use of error-correcting codes. In practice, they are not usually necessary for normal computer operation, but they are essential to overcome noise in communications channels. I give a basic introduction to the theory of classical error-correcting codes in section 1.3.

Classical error-correction techniques cannot be directly carried over to quantum computers for two reasons. First of all, the classical techniques assume we can measure all of the bits in the computer. For a quantum computer, this would destroy any entanglement between qubits. More importantly, a classical computer only needs to preserve the bit values of 0 and 1. A quantum computer also needs to keep phase information in entangled states. Thus, while quantum error-correcting codes are related to classical codes, they require a somewhat new approach.

The first quantum error-correcting codes were discovered by Shor and Steane. I discuss Shor's original code and some basics of quantum error-correcting codes in chapter 2. I then go on to describe the formalism of stabilizer codes in chapter 3, along with some simple examples and methods for creating new codes from old ones. Chapter 4 describes how to build networks to encode and decode stabilizer codes. Because we will want to use these codes in the operation of quantum computers, in chapter 5, I will discuss how to perform operations on states encoded using a quantum error-correcting code without losing the protection against errors.
Chapter 6 describes how to use concatenated codes to do arbitrarily long calculations as long as the basic error rate is below some threshold value, and presents a rough calculation of that threshold. Chapter 7 discusses known upper and lower bounds on the existence of stabilizer codes and the channel capacity. Finally, in chapter 8, I will give a partial list of known quantum error-correcting codes and their properties. Appendix A contains a brief discussion of quantum gates and a list of symbols for them used in figures. Appendix B contains a glossary of useful terms for discussing quantum error-correcting codes.

Since the promise of quantum computation has attracted scientists from a number of fields, including computer science, mathematics, and physics, some of the background one group takes for granted may be alien to others. Therefore, in the following two sections, I have provided basic introductions to quantum mechanics and classical coding theory. People familiar with one or both fields should skip the appropriate section(s). For a more complete treatment of quantum mechanics, see . For a more complete treatment of classical error-correcting codes, see .

1.2 Introduction to Quantum Mechanics

The state of a classical computer is a string of 0s and 1s, which is a vector over the finite field $\mathbb{Z}_2$. The state of a quantum computer (or any quantum system) is instead a vector over the complex numbers $\mathbb{C}$. Actually, a quantum state lies in a Hilbert space, since there is an inner product (which I will define later). The state is usually written $|\psi\rangle$, which is called a ket. A classical computer with $n$ bits has $2^n$ possible states, but this is only an $n$-dimensional vector space over $\mathbb{Z}_2$. A quantum computer with $n$ qubits is a state in a $2^n$-dimensional complex vector space.

For a single qubit, the standard basis vectors are written as $|0\rangle$ and $|1\rangle$. An arbitrary single-qubit state is then
$$\alpha|0\rangle + \beta|1\rangle. \tag{1.1}$$
$\alpha$ and $\beta$ are complex numbers, with $|\alpha|^2 + |\beta|^2 = 1$. This is a normalized state. With multiple qubits, we can have states that cannot be written as the product of single-qubit states. For instance,
$$\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \tag{1.2}$$
cannot be decomposed in this way. Such a state is said to be entangled. Entangled states are what provide a quantum computer with its power. They will also play a major role in quantum error correction. The particular state (1.2) is called an Einstein-Podolsky-Rosen (EPR) pair, and serves as a useful basic unit of entanglement in many applications.

If we make a measurement on the qubit in equation (1.1), we get a classical number corresponding to one of the basis states. The measurement disturbs the original state, which collapses into the basis state corresponding to the measurement outcome. If we measure the state (1.1), the outcome will be 0 with probability $|\alpha|^2$, and it will be 1 with probability $|\beta|^2$. The normalization ensures that the probability of getting some result is exactly 1. Through most of this thesis, I will instead write down unnormalized states. These states will stand for the corresponding normalized states, which are formed by multiplying the unnormalized states by an appropriate constant. The overall phase of a state vector has no physical significance.
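Since these measurement probabilities drive everything that follows, here is a minimal editorial sketch (not from the thesis; the function name `measure` is ours) of measuring the state (1.1) in the standard basis: sample an outcome with probabilities $|\alpha|^2$ and $|\beta|^2$, then collapse to the corresponding basis state.

```python
import numpy as np

# Sketch: projective measurement of a single qubit alpha|0> + beta|1>
# in the standard basis. Outcome k occurs with probability |amp_k|^2,
# and the state collapses to the basis state |k>.
def measure(state: np.ndarray, rng: np.random.Generator):
    probs = np.abs(state) ** 2
    outcome = rng.choice(len(state), p=probs)
    collapsed = np.zeros_like(state)
    collapsed[outcome] = 1.0          # post-measurement basis state
    return outcome, collapsed

rng = np.random.default_rng(0)
psi = np.array([3, 4j], dtype=complex) / 5    # |alpha|^2 = 0.36, |beta|^2 = 0.64
counts = np.bincount([measure(psi, rng)[0] for _ in range(10_000)], minlength=2)
print(counts / 10_000)                         # approximately [0.36, 0.64]
```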
The measurement we made implements one of two projection operators, the projections on the basis $|0\rangle$, $|1\rangle$. This is not the only measurement we can make on a single qubit. In fact, we can project on any basis for the Hilbert space of the qubit. If we have multiple qubits, we can measure a number of different qubits independently, or we can measure some joint property of the qubits, which corresponds to projecting on some entangled basis of the system. Note that the projection on the basis $|0\rangle$, $|1\rangle$ for either qubit destroys the entanglement of the state (1.2), leaving it in a tensor product state.

A particularly fruitful way to understand a quantum system is to look at the behavior of various operators acting on the states of the system. For instance, a nice set of operators to consider for a single qubit is the set of Pauli spin matrices
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{1.3}$$
The original measurement I described corresponds to measuring the eigenvalue of $\sigma_z$. The corresponding projection operators are $\frac{1}{2}(I \pm \sigma_z)$. If we have a spin-1/2 particle, this measurement is performed by measuring the spin of the particle along the $z$ axis. We could also measure along the $x$ or $y$ axis, which corresponds to measuring the eigenvalue of $\sigma_x$ or $\sigma_y$. The projections are $\frac{1}{2}(I \pm \sigma_x)$ and $\frac{1}{2}(I \pm \sigma_y)$. We can also make measurements of more general operators, provided they have real eigenvalues. A matrix $A$ has real eigenvalues iff it is Hermitian: $A^\dagger = A$, where $A^\dagger$ is the Hermitian adjoint (or just adjoint), equal to the complex conjugate transpose. Note that all of the Pauli spin matrices are Hermitian.

The Pauli matrices also satisfy an important algebraic property: they anticommute with each other. That is,
$$\{\sigma_i, \sigma_j\} = \sigma_i \sigma_j + \sigma_j \sigma_i = 0 \tag{1.4}$$
whenever $i \neq j$ (with $i, j \in \{x, y, z\}$). Another possible relationship between two operators $A$ and $B$ is for them to commute. That is,
$$[A, B] = AB - BA = 0. \tag{1.5}$$
It is possible for two matrices to neither commute nor anticommute, and, in fact, this is the generic case. Two commuting matrices can be simultaneously diagonalized. This means that we can measure the eigenvalue of one of them without disturbing the eigenvectors of the other. Conversely, if two operators do not commute, measuring one will disturb the eigenvectors of the other, so we cannot simultaneously measure non-commuting operators.

There is a natural complex inner product on quantum states. Given an orthonormal basis $|\psi_i\rangle$, the inner product between $|\alpha\rangle = \sum c_i |\psi_i\rangle$ and $|\beta\rangle = \sum d_i |\psi_i\rangle$ is
$$\langle\alpha|\beta\rangle = \sum_{i,j} c_i^* d_j \langle\psi_i|\psi_j\rangle = \sum_i c_i^* d_i. \tag{1.6}$$
Each ket $|\psi\rangle$ corresponds to a bra $\langle\psi|$, and the Hermitian adjoint is the adjoint with respect to this inner product, so $U|\psi\rangle$ corresponds to $\langle\psi|U^\dagger$. The operator $\sum |\psi\rangle\langle\phi|$ acts on the Hilbert space as follows:
$$\left(\sum |\psi\rangle\langle\phi|\right) |\alpha\rangle = \sum \langle\phi|\alpha\rangle\, |\psi\rangle. \tag{1.7}$$
The inner product can reveal a great deal of information about the structure of a set of states. For instance, $\langle\psi|\phi\rangle = 1$ if and only if $|\psi\rangle = |\phi\rangle$.

Eigenvectors of a Hermitian operator $A$ with different eigenvalues are automatically orthogonal:
$$\langle\psi|A|\phi\rangle = \langle\psi|(A|\phi\rangle) = \lambda_\phi \langle\psi|\phi\rangle \tag{1.8}$$
$$= (\langle\psi|A)|\phi\rangle = \lambda_\psi^* \langle\psi|\phi\rangle. \tag{1.9}$$
Since the eigenvalues of $A$ are real, it follows that $\langle\psi|\phi\rangle = 0$ whenever $\lambda_\phi \neq \lambda_\psi$. Conversely, if $\langle\psi|\phi\rangle = 0$, there exists a Hermitian operator for which $|\psi\rangle$ and $|\phi\rangle$ are eigenvectors with different eigenvalues.
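The Pauli relations just stated are easy to verify numerically. The following editorial sketch (not from the thesis) checks Hermiticity, the anticommutation relation (1.4), and the fact that distinct Paulis fail to commute:

```python
import numpy as np

# Sketch: verify that the Pauli matrices are Hermitian, pairwise
# anticommuting, and non-commuting for i != j.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = {"x": sx, "y": sy, "z": sz}

for name, s in paulis.items():
    assert np.allclose(s, s.conj().T), f"sigma_{name} must be Hermitian"

for a in "xyz":
    for b in "xyz":
        if a != b:
            anti = paulis[a] @ paulis[b] + paulis[b] @ paulis[a]
            comm = paulis[a] @ paulis[b] - paulis[b] @ paulis[a]
            assert np.allclose(anti, 0)        # {sigma_a, sigma_b} = 0
            assert not np.allclose(comm, 0)    # but they do not commute
print("Pauli relations (1.4) verified")
```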
We often want to consider a subsystem $A$ of a quantum system $B$. Since $A$ may be entangled with the rest of the system, it is not meaningful to speak of the "state" of $A$. If we write the state of $B$ as $\sum |\psi_i\rangle|\phi_i\rangle$, where $|\psi_i\rangle$ is an orthonormal basis for $B - A$, and $|\phi_i\rangle$ are possible states for $A$, then to an observer who only interacts with the subsystem $A$, the subsystem appears to be in just one of the states $|\phi_i\rangle$ with some probability. $A$ is said to be in a mixed state, as opposed to the pure state of a closed system in a definite state. We can extend the formalism to cover mixed states by introducing the density matrix $\rho$. For a pure system in the state $|\psi\rangle$, the density matrix is $|\psi\rangle\langle\psi|$. The density matrix for the subsystem for the entangled state above is $\sum |\phi_i\rangle\langle\phi_i|$. Density matrices are always positive and have $\operatorname{tr}\rho = 1$. To find the density matrix of a subsystem given the density matrix of the full system, simply trace over the degrees of freedom of the rest of the system.

Given a closed quantum system, time evolution preserves the inner product, so the time evolution operator $U$ must be unitary. That is, $U^\dagger U = U U^\dagger = I$. An open system can be described as a subsystem of a larger closed system, so the evolution of the open system descends from the global evolution of the full system. Time evolution of the subsystem is described by some superoperator acting on the density matrix of the subsystem.

One fact about quantum states that has profound implications for quantum computation is that it is impossible to make a copy of an arbitrary unknown quantum state. This is known as the "No Cloning Theorem," and is a consequence of the linearity of quantum mechanics. The proof is straightforward: suppose we wish to have an operation that maps an arbitrary state
$$|\psi\rangle \to |\psi\rangle \otimes |\psi\rangle. \tag{1.10}$$
Then arbitrary $|\phi\rangle$ is mapped by
$$|\phi\rangle \to |\phi\rangle \otimes |\phi\rangle \tag{1.11}$$
as well. Because the transformation must be linear, it follows that
$$|\psi\rangle + |\phi\rangle \to |\psi\rangle \otimes |\psi\rangle + |\phi\rangle \otimes |\phi\rangle. \tag{1.12}$$
However,
$$|\psi\rangle \otimes |\psi\rangle + |\phi\rangle \otimes |\phi\rangle \neq (|\psi\rangle + |\phi\rangle) \otimes (|\psi\rangle + |\phi\rangle), \tag{1.13}$$
so we have failed to copy $|\psi\rangle + |\phi\rangle$. In general, if we pick an orthonormal basis, we can copy the basis states, but we will not have correctly copied superpositions of those basis states. We will instead have either measured the original system and therefore destroyed the superposition, or we will have produced a state that is entangled between the original and the "copy." This means that to perform quantum error correction, we cannot simply make backup copies of the quantum state to be preserved. Instead, we must protect the original from any likely error.

1.3 Introduction to Classical Coding Theory

Classical coding theory tends to concentrate on linear codes, a subclass of all possible codes with a particular relation between codewords. Suppose we wish to encode $k$ bits using $n$ bits. The data can be represented as a $k$-dimensional binary vector $v$. Because we are dealing with binary vectors, all the arithmetic is mod two. For a linear code, the encoded data is then $Gv$ for some $n \times k$ matrix $G$ (with entries from $\mathbb{Z}_2$), which is independent of $v$. $G$ is called the generator matrix for the code. Its columns form a basis for the $k$-dimensional coding subspace of the $n$-dimensional binary vector space, and represent basis codewords. The most general possible codeword is an arbitrary linear combination of the basis codewords; thus the name "linear code."

Given a generator matrix $G$, we can calculate the dual matrix $P$, which is an $(n-k) \times n$ matrix of 0s and 1s of maximal rank $n - k$ with $PG = 0$. Since any codeword $s$ has the form $Gv$, we have $Ps = PGv = 0v = 0$, and $P$ annihilates any codeword.
Conversely, suppose $Ps = 0$. Since $P$ has rank $n - k$, it only annihilates a $k$-dimensional space, which is spanned by the columns of $G$, so $s$ must be a linear combination of these columns. Thus, $s = Gv$ for some $v$, and $s$ is a valid codeword. The matrix $P$ is called the parity check matrix for the code. It can be used to test if a given vector is a valid codeword, since $Ps = 0$ iff $s$ is a codeword. The dual code is defined to be the code with generator matrix $P^T$ and parity check matrix $G^T$.

In order to consider the error-correcting properties of a code, it is useful to look at the Hamming distance between codewords. The Hamming distance between two vectors is the minimum number of bits that must be flipped to convert one vector to the other. The distance between $a$ and $b$ is equal to the weight (the number of 1s in the vector) of $a + b$. For a code to correct $t$ single-bit errors, it must have distance at least $2t + 1$ between any two codewords. A $t$-bit error will take a codeword exactly distance $t$ away from its original value, so when the distance between codewords is at least $2t + 1$, we can distinguish errors on different codewords and correct them to the proper codewords. A code to encode $k$ bits in $n$ bits with minimum distance $d$ is said to be an $[n, k, d]$ code.

Now suppose we consider a $t$-bit error. We can write down a vector $e$ to describe this error by putting ones in the places where bits are flipped and zeros elsewhere. Then if the original codeword is $s$, after the error it is $s' = s + e$. If we apply the parity check matrix, we get
$$Ps' = P(s + e) = Ps + Pe = 0 + Pe = Pe, \tag{1.14}$$
so the value of $Ps'$ does not depend on the value of $s$, only on $e$. If $Pe$ is different for all possible errors $e$, we will be able to determine precisely what error occurred and fix it. $Pe$ is called the error syndrome, since it tells us what the error is. Since $Pe = Pf$ iff $P(e - f) = 0$, to have a code of distance $d$, we need $Pe \neq 0$ for all vectors $e$ of weight $d - 1$ or less. Equivalently, any $d - 1$ columns of $P$ must be linearly independent.
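To make the syndrome idea concrete, here is a small editorial sketch (not from the thesis) using the standard [7, 4, 3] Hamming code in the notation above: flip one bit of a codeword and read off which bit it was from $Pe$ alone.

```python
import numpy as np

# Sketch: syndrome decoding for the [7,4,3] Hamming code, with an
# n x k generator G and (n-k) x n parity check P satisfying PG = 0 mod 2.
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.vstack([np.eye(4, dtype=int), A.T])       # 7 x 4 generator
P = np.hstack([A.T, np.eye(3, dtype=int)])       # 3 x 7 parity check
assert not (P @ G % 2).any()                     # P annihilates every codeword

v = np.array([1, 0, 1, 1])                       # 4 data bits
s = G @ v % 2                                    # codeword
e = np.zeros(7, dtype=int); e[4] = 1             # flip bit 4 (0-indexed)
syndrome = P @ ((s + e) % 2) % 2                 # equals P e, independent of s
# Single-bit errors have distinct syndromes: match syndrome to a column of P.
flipped = next(i for i in range(7) if (P[:, i] == syndrome).all())
corrected = (s + e) % 2; corrected[flipped] ^= 1
assert (corrected == s).all()
print("syndrome", syndrome, "-> flipped bit", flipped)
```

Because the code has distance 3, all seven columns of $P$ are distinct and nonzero, which is exactly the condition stated above for correcting any single-bit error.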
We can place upper and lower bounds on the existence of linear codes to correct t errors. Each of the 2^k codewords has a Hamming sphere of radius t, and all the words inside that sphere come from errors acting on the same codeword. For a code on n bits, there are n one-bit errors, (n choose 2) two-bit errors, and in general (n choose j) j-bit errors. The Hamming spheres cannot overlap, but they must all fit inside the vector space, which has only 2^n elements. Thus,

∑_{j=0}^{t} (n choose j) 2^k ≤ 2^n. (1.15)

This is called the Hamming bound on [n, k, 2t + 1] codes. As n, k, and t get large, this bound approaches the asymptotic form

k/n ≤ 1 − H(t/n), (1.16)

where H(x) is the Hamming entropy

H(x) = −x log2 x − (1 − x) log2 (1 − x). (1.17)

We can set a lower bound on the existence of [n, k, 2t + 1] linear codes as well, called the Gilbert-Varshamov bound. Suppose we have such a code (if necessary with k = 0) with

∑_{j=0}^{2t} (n choose j) 2^k < 2^n. (1.18)

Then the spheres of radius 2t around the codewords do not fill the space, so there is some vector v that is at least distance 2t + 1 from every codeword. In addition, v + s (for any codeword s) is at least distance 2t + 1 from any other codeword s′, since that distance is the weight of (v + s) + s′ = v + (s + s′), which is the distance between v and the codeword s + s′. This means that we can add v and all the vectors v + s to the code without dropping the distance below 2t + 1. This gives us an [n, k + 1, 2t + 1] code. We can continue this process until

∑_{j=0}^{2t} (n choose j) 2^k ≥ 2^n. (1.19)

Asymptotically, this becomes

k/n ≥ 1 − H(2t/n). (1.20)

Another case of great interest is the capacity of a classical channel. This is equal to the efficiency k/n of the most efficient code on an asymptotically large block that corrects measure one of the errors occurring. For instance, a common channel is the binary symmetric channel, where an error occurs independently on each bit with probability p for both 0 and 1. Shannon showed that the channel capacity is just equal to one minus the entropy introduced by the channel. For the binary symmetric channel, the entropy is just the Hamming entropy H(p), so the capacity is 1 − H(p), coinciding with the Hamming bound for the expected number of errors t = pn. Shannon also showed that the capacity of a channel can be achieved by choosing codewords at random, then discarding only a few of them (measure zero asymptotically).
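These bounds are easy to evaluate numerically; a small sketch (the error fraction 0.05 is an arbitrary illustrative value):

import math

def H(x):
    # Hamming entropy of eq. (1.17), with H(0) = H(1) = 0
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

p = 0.05                 # error probability; expected error fraction t/n = p
print(1 - H(p))          # Hamming bound on k/n; also the BSC capacity
print(1 - H(2 * p))      # Gilbert-Varshamov lower bound on k/n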
Chapter 2

Basics of Quantum Error Correction

2.1 The Quantum Channel

Now we turn to the quantum channel. A noisy quantum channel can be a regular communications channel which we expect to preserve at least some degree of quantum coherence, or the passage of time as a set of qubits sits around interacting with its environment, or the result of operating with a noisy gate on some qubits in a quantum computer. In any of these cases, the input of a pure quantum state can produce a mixed state as output as the data qubits become entangled with the environment. Even when a pure state comes out, it might not be the same state as the one that went in.

At first it appears that trying to correct a mixed state back into the correct pure state is going to be harder than correcting an erroneous pure state, but this is not the case. The output mixed state can be considered as an ensemble of pure states. If we can correct each of the pure states in the ensemble back to the original input state, we have corrected the full mixed state. Another way of phrasing this is to say that the channel applies a superoperator to the input density matrix. We can diagonalize this superoperator and write it as the direct sum of a number of different matrices acting directly on the possible input pure states with various probabilities. If the code can correct any of the possible matrices, it can correct the full superoperator. A key point is that the individual matrices need not be unitary. From now on, I will only consider the effects of a (possibly non-unitary) matrix acting on a pure state.

2.2 A Simple Code

For the moment, let us consider only channels which cause an error on a single qubit at a time. We wish to protect a single logical qubit against error. We cannot send it through the channel as is, because the one qubit that is affected might be the one we want to keep. Suppose we instead send nine qubits, after encoding the logical qubit as follows:

|0〉 → |0̄〉 = (|000〉 + |111〉)(|000〉 + |111〉)(|000〉 + |111〉) (2.1)
|1〉 → |1̄〉 = (|000〉 − |111〉)(|000〉 − |111〉)(|000〉 − |111〉). (2.2)

The data is no longer stored in a single qubit, but instead spread out among nine of them. Note that even if we know the nine qubits are in one of these two states, we cannot determine which one without making a measurement on at least three qubits. This code is due to Shor.

Suppose the channel flips a single qubit, say the first one, switching |0〉 and |1〉. Then by comparing the first two qubits, we find they are different, which is not allowed for any valid codeword. Therefore we know an error occurred, and furthermore, that it flipped either the first or the second qubit. Note that we do not actually measure the first and second qubits, since this would destroy the superposition in the codeword; we just measure the difference between them. Now we compare the first and third qubits. Since the first qubit was flipped, it will disagree with the third; if the second qubit had been flipped, the first and third would have agreed. Therefore, we have narrowed down the error to the first qubit, and we can fix it simply by flipping it back. To handle possible bit flips on the other blocks of three, we make the same comparisons inside the other blocks.

However, this is not the only sort of error that could have occurred. The channel might have left the identity of the 0 and 1 alone, but altered their relative phase, introducing, for instance, a relative factor of −1 when the first qubit is |1〉. Then the two basis states become

|0̄〉 → (|000〉 − |111〉)(|000〉 + |111〉)(|000〉 + |111〉) (2.3)
|1̄〉 → (|000〉 + |111〉)(|000〉 − |111〉)(|000〉 − |111〉). (2.4)

By comparing the sign of the first block of three with the second block of three, we can see that a sign error has occurred in one of those blocks. Then by comparing the signs of the first and third blocks, we narrow the sign error down to the first block and flip the sign back to what it should be. Again, we do not want to actually measure the signs, only whether they agree. In this case, measuring the signs would give us information about whether the state is |0̄〉 or |1̄〉, which would destroy any superposition between them.

This does not exhaust the list of possible one-qubit errors. For instance, we could have both a bit flip and a sign flip on the same qubit. However, by going through both processes described above, we will fix first the bit flip, then the sign flip (in fact, this code will correct a bit flip and a sign flip even if they are on different qubits). The original two errors can be described as the operation of

σx = ( 0 1 ; 1 0 ) and σz = ( 1 0 ; 0 −1 ). (2.5)

The simultaneous bit and sign flip is

σy = iσxσz = ( 0 −i ; i 0 ). (2.6)

Sometimes I will write σxi, σyi, or σzi to represent σx, σy, or σz acting on the ith qubit. The most general one-qubit error that can occur is some 2 × 2 matrix, but such a matrix can always be written as a (complex) linear combination of σx, σy, σz, and the 2 × 2 identity matrix I. Consider what happens to the code when such an error occurs:

|ψ̄〉 = α|0̄〉 + β|1̄〉 → aσxi|ψ̄〉 + bσyi|ψ̄〉 + cσzi|ψ̄〉 + d|ψ̄〉. (2.7)

Suppose we perform the process above, comparing bits within a block of three and comparing the signs of blocks of three. This acts as a measurement of which error (or the identity) has occurred, causing the state, originally in a superposition, to collapse to σxi|ψ̄〉 with probability |a|², to σyi|ψ̄〉 with probability |b|², to σzi|ψ̄〉 with probability |c|², and to |ψ̄〉 with probability |d|². In any of the four cases, we have determined which error occurred and we can fix it.
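The decomposition claim behind equation (2.7) is easy to verify numerically: the four Pauli matrices are orthogonal under the trace inner product, so the coefficients of any 2 × 2 matrix can be projected out. A short Python sketch (the matrix E is an arbitrary made-up example):

import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]])

assert np.allclose(sy, 1j * sx @ sz)             # eq. (2.6)

E = np.array([[0.3, 1.2 - 0.5j], [0.1j, -2.0]])  # an arbitrary 2x2 "error"
# coefficient of each Pauli: c_P = tr(P† E) / 2
basis = (I2, sx, sy, sz)
coeffs = [np.trace(Pm.conj().T @ E) / 2 for Pm in basis]
assert np.allclose(sum(c * Pm for c, Pm in zip(coeffs, basis)), E)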
2.3 Properties of Any Quantum Code

Now let us consider the properties of more general codes. A code to encode k qubits in n qubits will have 2^k basis codewords corresponding to the basis states of the original system. Any linear combination of these basis codewords is also a valid codeword, corresponding to the same linear combination of the unencoded basis states. The space T of valid codewords (the coding space) is therefore a Hilbert space in its own right, a subspace of the full 2^n-dimensional Hilbert space.

As with Shor's nine-qubit code, if we can correct errors E and F, we can correct aE + bF, so we only need to consider whether the code can correct a basis of errors. One convenient basis to use is the set of tensor products of σx, σy, σz, and I. The weight of an operator of this form is the number of qubits on which it differs from the identity. The set of all these tensor products, with a possible overall factor of −1 or ±i, forms a group G under multiplication. G will play a major role in the stabilizer formalism. Sometimes I will write it Gn to distinguish the groups for different numbers of qubits. G1 is just the quaternionic group; Gn is the direct product of n copies of the quaternions modulo all but a global phase factor.

In order for the code to correct two errors Ea and Eb, we must always be able to distinguish error Ea acting on one basis codeword |ψi〉 from error Eb acting on a different basis codeword |ψj〉. We can only be sure of doing this if Ea|ψi〉 is orthogonal to Eb|ψj〉; otherwise there is some chance of confusing them. Thus,

〈ψi|Ea†Eb|ψj〉 = 0 (2.8)

when i ≠ j for correctable errors Ea and Eb. Note that we normally include the identity in the set of possible "errors," since we do not want to confuse an error on one qubit with nothing happening to another. If we have a channel in which we are certain some error occurred, we do not need to include the identity as a possible error. In any case, the set of correctable errors is unlikely to be a group; it need not even be closed under multiplication.

However, (2.8) is insufficient to guarantee that a code will work as a quantum error-correcting code. When we make a measurement to find out about the error, we must learn nothing about the actual state of the code within the coding space. If we did learn something, we would be disturbing superpositions of the basis states, so while we might correct the basis states, we would not be correcting an arbitrary valid codeword. We learn information about the error by measuring 〈ψi|Ea†Eb|ψi〉 for all possible errors Ea and Eb. This quantity must therefore be the same for all the basis codewords:

〈ψi|Ea†Eb|ψi〉 = 〈ψj|Ea†Eb|ψj〉. (2.9)

We can combine equations (2.8) and (2.9) into a single equation:

〈ψi|Ea†Eb|ψj〉 = Cab δij, (2.10)

where |ψi〉 and |ψj〉 run over all possible basis codewords, Ea and Eb run over all possible errors, and Cab is independent of i and j. This condition was found by Knill and Laflamme and by Bennett et al.

The above argument shows that (2.10) is a necessary condition for the code to correct the errors {Ea}. It is also a sufficient condition: the matrix Cab is Hermitian, so it can be diagonalized. If we do this and rescale the errors {Ea} appropriately, we get a new basis {Fa} for the space of possible errors, with either

〈ψi|Fa†Fb|ψj〉 = δab δij (2.11)

or

〈ψi|Fa†Fb|ψj〉 = 0, (2.12)

depending on a. Note that this basis will not necessarily contain operators that are tensor products of one-qubit operators. Errors of the second type actually annihilate any codeword, so the probability of one occurring is strictly zero and we need not consider them. The other errors always produce orthogonal states, so we can make some measurement that will tell us exactly which error occurred, at which point it is a simple matter to correct it. Therefore, a code satisfies equation (2.10) for all Ea and Eb in some set E iff the code can correct all errors in E.
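Condition (2.10) can be checked by brute force for small examples. The sketch below does so for the three-qubit bit-flip code (spanned by |000〉 and |111〉) against the error set {I, σx1, σx2, σx3}; this little code is not one discussed in the text, just the smallest convenient testbed:

import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
kron = lambda ops: reduce(np.kron, ops)

psi0 = np.zeros(8); psi0[0] = 1          # |000>
psi1 = np.zeros(8); psi1[7] = 1          # |111>
codewords = [psi0, psi1]

errors = [kron([I2, I2, I2])] + \
         [kron([sx if j == i else I2 for j in range(3)]) for i in range(3)]

# check eq. (2.10): <psi_i| Ea† Eb |psi_j> = C_ab delta_ij
for Ea in errors:
    for Eb in errors:
        M = np.array([[ci @ (Ea.conj().T @ Eb @ cj) for cj in codewords]
                      for ci in codewords])
        assert np.allclose(M, M[0, 0] * np.eye(2))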
Another minor basis change allows us to find a basis in which any two errors acting on a given codeword produce either orthogonal states or exactly the same state. The errors Fa that annihilate codewords correspond to pairs of errors that act in the same way on the codewords. For instance, in Shor's nine-qubit code, σz1 and σz2 act the same way on the code, so σz1 − σz2 annihilates the codewords. This phenomenon occurs iff Cab does not have maximum rank. A code for which Cab is singular is called a degenerate code, while a code for which it is not is nondegenerate. Shor's nine-qubit code is degenerate; we will see many examples of nondegenerate codes later. Note that whether a code is degenerate or not depends on the set of errors it is intended to correct. For instance, a two-error-correcting degenerate code might be nondegenerate when considered as a one-error-correcting code.

In equation (2.10), E = Ea†Eb is still in the group G when Ea and Eb are in G. The weight of the smallest E in G for which (2.10) does not hold is called the distance of the code. A quantum code to correct up to t errors must have distance at least 2t + 1. Every code has distance at least one. A distance d code encoding k qubits in n qubits is described as an [n, k, d] code. Note that a quantum [n, k, d] code is often written in the literature as [[n, k, d]] to distinguish it from a classical [n, k, d] code. I have chosen the notation [n, k, d] to emphasize the similarities with the classical theory; when I need to distinguish, I will do so using the words "quantum" and "classical."

We can also consider variations of the usual error-correction problem. For instance, suppose we only want to detect whether an error has occurred, not to correct it. This could, for instance, be used to prevent errors using the quantum Zeno effect. In this case, we do not need to distinguish error Ea from Eb, only from the identity. We can use the same argument to find (2.10), only now with Eb = I always. This means a code to detect s errors must have distance at least s + 1. Another variation is when we know in which qubit(s) an error has occurred, as in the quantum erasure channel. In this case, we need only distinguish Ea from those Eb affecting the same qubits. This means that Ea†Eb has the same weight as Ea, and to correct r such located errors, we need a code of distance at least r + 1. We can also imagine combining all of these tasks. A code to correct t arbitrary errors, r additional located errors, and detect a further s errors must have distance at least r + s + 2t + 1.

2.4 Error Models

In this thesis, I will mostly assume that errors occur independently on different qubits, and that when an error occurs on a qubit, it is equally likely to be a σx, σy, or σz error. If the probability ε of error per qubit is fairly small, it is often useful simply to ignore the possibility of more than t errors, since this occurs only with probability O(ε^(t+1)). Thus, I will typically deal with codes that correct up to t arbitrary errors. Such a code will handle any error on up to t qubits that leaves the data somewhere in the normal computational space (although possibly moving it outside the space of valid codewords).
In some systems, there will be errors that move the system outside of the computational space. For instance, if the data is stored as the ground or metastable excited state of an ion, the electron might instead end up in a different excited state. If the data is stored in the polarization of a photon, the photon might escape. In both of these cases, the normal error correction networks will not function properly, since they assume that the qubit is either in the state |0〉 or the state |1〉. However, by performing some measurement that distinguishes the computational Hilbert space from the other possible states, we can determine not only that this sort of leakage error has occurred, but also on which qubit it has occurred. Then we can cool the atom to the ground state or introduce a new photon with random polarization, and the error becomes a located error, which was discussed at the end of the previous section. One possible network of gates to detect a leakage error is given in figure 2.1 (see appendix A for a description of the symbols used in this and later figures). This network assumes that states outside the normal computational space do not interact at all with other qubits. If the data state |ψ〉 is either |0〉 or |1〉, the ancilla qubit will flip and become |1〉. If the data state is neither |0〉 nor |1〉, the ancilla will remain |0〉, thus signalling a leakage error on this data qubit.

[Figure 2.1: Network to detect leakage errors.]

Another possible difficulty arises when correlated errors on multiple qubits can occur. While this can in principle be a severe problem, it can be handled without a change in formalism as long as the chance of a correlated error drops rapidly enough with the size of the blocks of errors. Since a t-qubit error will occur with probability O(ε^t) when the probability of uncorrelated single-qubit errors is ε, as long as the probability of a t-qubit correlated error is O(ε^t), the correlated errors cause no additional problems.

In real systems, the assumption that errors are equally likely to be σx, σy, and σz errors is a poor one. In practice, some linear combinations of σx, σy, and σz are going to be more likely than others. For instance, when the qubits are ground or excited states of an ion, a likely source of errors is spontaneous emission. After some amount of time, the excited state will either decay to the ground state, producing the error σx + iσy with probability ε, or it will not, which changes the relative amplitudes of |0〉 and |1〉, resulting in the error I − σz with probability O(ε²). A channel that performs this sort of time evolution is known as an amplitude damping channel. Since the only O(1) effect of time evolution is the identity, this sort of error can be protected against to lowest order by a code that corrects an arbitrary single error. However, codes that take account of the restricted possibilities for errors can be more efficient than codes that must correct a general error, and understanding the physically likely sources of error will certainly be an important part of engineering quantum computers.
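One standard way to write the amplitude damping channel explicitly is through a Kraus (superoperator) decomposition; the sketch below is generic textbook material rather than anything defined in this chapter, with γ playing the role of the decay probability ε:

import numpy as np

g = 0.1                                        # decay probability per step
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]])   # no decay; damps |1> amplitude
K1 = np.array([[0, np.sqrt(g)], [0, 0]])       # decay |1> -> |0>

# Kraus operators of a valid channel satisfy sum_k K† K = I
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

# K1 is proportional to sigma_x + i sigma_y, the decay error in the text
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
assert np.allclose(K1, np.sqrt(g) / 2 * (sx + 1j * sy))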
Chapter 3

Stabilizer Coding

3.1 The Nine-Qubit Code Revisited

Let us look more closely at the procedure we used to correct errors for the nine-qubit code. To detect a bit flip error on one of the first three qubits, we compared the first two qubits and the first and third qubits. This is equivalent to measuring the eigenvalues of σz1σz2 and σz1σz3. If the first two qubits are the same, the eigenvalue of σz1σz2 is +1; if they are different, the eigenvalue is −1. Similarly, to detect a sign error, we compare the signs of the first and second blocks of three and of the first and third blocks of three. This is equivalent to measuring the eigenvalues of σx1σx2σx3σx4σx5σx6 and σx1σx2σx3σx7σx8σx9. Again, if the signs agree, the eigenvalues will be +1; if they disagree, the eigenvalues will be −1. In order to totally correct the code, we must measure the eigenvalues of a total of eight operators. They are listed in table 3.1.

M1  σz σz I  I  I  I  I  I  I
M2  σz I  σz I  I  I  I  I  I
M3  I  I  I  σz σz I  I  I  I
M4  I  I  I  σz I  σz I  I  I
M5  I  I  I  I  I  I  σz σz I
M6  I  I  I  I  I  I  σz I  σz
M7  σx σx σx σx σx σx I  I  I
M8  σx σx σx I  I  I  σx σx σx

Table 3.1: The stabilizer for Shor's nine-qubit code.

The two valid codewords |0̄〉 and |1̄〉 in Shor's code are eigenvectors of all eight of these operators with eigenvalue +1. All the operators in G that fix both |0̄〉 and |1̄〉 can be written as products of these eight operators. The set of operators that fix |0̄〉 and |1̄〉 forms a group S, called the stabilizer of the code, and M1 through M8 are the generators of this group.

When we measure the eigenvalue of M1, we determine whether a bit flip error has occurred on qubit one or two, i.e., whether σx1 or σx2 has occurred. Note that both of these errors anticommute with M1, while σx3 through σx9, which cannot be detected by M1 alone, commute with it. Similarly, M2 detects σx1 or σx3, which anticommute with it, and M7 detects σz1 through σz6. In general, if M ∈ S, {M, E} = 0, and |ψ〉 ∈ T, then

ME|ψ〉 = −EM|ψ〉 = −E|ψ〉, (3.1)

so E|ψ〉 is an eigenvector of M with eigenvalue −1 instead of +1, and to detect E we need only measure M.

The distance of this code is in fact three. Even a cursory perusal reveals that any single-qubit operator σxi, σyi, or σzi will anticommute with one or more of M1 through M8. Since states with different eigenvalues are orthogonal, condition (2.10) is satisfied when Ea has weight one and Eb = I. We can also check that every two-qubit operator E anticommutes with some element of S, except for those of the form σza σzb, where a and b are in the same block of three. However, the operators of this form are actually in the stabilizer. This means that σza σzb |ψ〉 = |ψ〉 for any codeword |ψ〉, so 〈ψ|σza σzb|ψ〉 = 〈ψ|ψ〉 = 1 for all codewords |ψ〉, and these operators also satisfy equation (2.10). Since σza σzb is in the stabilizer, both σza and σzb act the same way on the codewords, and there is no need to distinguish them. When we get to operators of weight three, we do find some for which (2.10) fails. For instance, σx1σx2σx3 commutes with everything in S, but

〈0̄|σx1σx2σx3|0̄〉 = +1 (3.2)
〈1̄|σx1σx2σx3|1̄〉 = −1. (3.3)
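This pattern of commutations and anticommutations can be tracked by pure bookkeeping: two Pauli products anticommute iff they differ on an odd number of qubits where both are non-trivial. A small Python sketch over strings (the single-letter encoding X, Z for σx, σz is my shorthand, not the thesis's notation):

def anticommutes(p, q):
    # single-qubit Paulis anticommute iff both are non-identity and
    # different; products anticommute iff that happens an odd number
    # of times
    return sum(a != 'I' and b != 'I' and a != b for a, b in zip(p, q)) % 2 == 1

# generators of Shor's nine-qubit code, from table 3.1
gens = ["ZZIIIIIII", "ZIZIIIIII", "IIIZZIIII", "IIIZIZIII",
        "IIIIIIZZI", "IIIIIIZIZ", "XXXXXXIII", "XXXIIIXXX"]

syndrome = lambda err: [int(anticommutes(M, err)) for M in gens]

print(syndrome("XIIIIIIII"))       # sigma_x1: flagged by M1 and M2
print(syndrome("ZIIIIIIII"))       # sigma_z1: flagged by M7 and M8
# degeneracy: sigma_z1 and sigma_z2 have identical syndromes
assert syndrome("ZIIIIIIII") == syndrome("IZIIIIIII")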
3.2 The General Stabilizer Code

The stabilizer construction applies to many more codes than just the nine-qubit one [21, 22]. In general, the stabilizer S is some Abelian subgroup of G, and the coding space T is the space of vectors fixed by S. Since σy has imaginary components while σx and σz are real, with an even number of σy's in each element of the stabilizer, all the coefficients in the basis codewords can be chosen to be real; if there are an odd number of σy's, they may be imaginary. However, Rains has shown that whenever a (possibly complex) code exists, a real code exists with the same parameters. Therefore, I will largely restrict my attention to real codes.

For a code to encode k qubits in n, T has 2^k dimensions and S has 2^(n−k) elements. S must be an Abelian group, since only commuting operators can have simultaneous eigenvectors, but provided it is Abelian and neither i nor −1 is in S, the space T = {|ψ〉 s.t. M|ψ〉 = |ψ〉 ∀ M ∈ S} does have dimension 2^k.

At this point it will be helpful to note a few properties of G. Since σx² = σy² = σz² = I, every element in G squares to ±1. Also, σx, σy, and σz on the same qubit anticommute, while they commute on different qubits. Therefore, any two elements of G either commute or anticommute. σx, σy, and σz are all Hermitian, but of course (iI)† = −iI, so elements of G can be either Hermitian or anti-Hermitian. In either case, if A ∈ G, then A† ∈ G also. Similarly, σx, σy, and σz are all unitary, so every element of G is unitary.

As before, if M ∈ S, |ψi〉 ∈ T, and {M, E} = 0, then ME|ψi〉 = −E|ψi〉, so

〈ψi|E|ψj〉 = 〈ψi|ME|ψj〉 = −〈ψi|E|ψj〉 = 0. (3.4)

Therefore the code satisfies (2.8) whenever E = Ea†Eb = ±EaEb anticommutes with some M ∈ S. In fact, in such a case it also satisfies (2.9), since 〈ψi|E|ψi〉 = 〈ψj|E|ψj〉 = 0. Therefore, if Ea†Eb anticommutes with some element of S for all errors Ea and Eb in some set, the code will correct that set of errors.

Of course, strictly speaking, this is unlikely to occur. Generally, I will be an allowed error, and E = I†I commutes with everything. However, S is a group, so I ∈ S. In general, if E ∈ S,

〈ψi|E|ψj〉 = 〈ψi|ψj〉 = δij. (3.5)

This satisfies equation (2.10) as well.

Now, there generally are many elements of G that commute with everything in S but are not actually in S. The set of elements of G that commute with all of S is defined to be the centralizer C(S) of S in G. Because of the properties of S and G, the centralizer is actually equal to the normalizer N(S) of S in G, which is defined as the set of elements of G that fix S under conjugation. To see this, note that for any A ∈ G and M ∈ S,

A†MA = ±A†AM = ±M. (3.6)

Since −1 ∉ S, A ∈ N(S) iff A ∈ C(S), so N(S) = C(S). Note that S ⊆ N(S); in fact, S is a normal subgroup of N(S). N(S) contains 4 · 2^(n+k) elements; the factor of four is for the overall phase factor. Since an overall phase has no effect on the physical quantum state, when considering N(S) I will often really consider N(S) without this global phase factor.

If E ∈ N(S) − S, then E rearranges elements of T but does not take them out of T: if M ∈ S and |ψ〉 ∈ T, then

ME|ψ〉 = EM|ψ〉 = E|ψ〉, (3.7)

so E|ψ〉 ∈ T also. Since E ∉ S, there is some state in T that is not fixed by E. Unless E differs from an element of S by an overall phase, it will therefore be undetectable by this code. Putting these considerations together, we can say that a quantum code with stabilizer S will detect all errors E that are either in S or anticommute with some element of S; in other words, all E ∈ S ∪ (G − N(S)). The code will correct any set of errors {Ei} iff EaEb ∈ S ∪ (G − N(S)) for all Ea and Eb (note that Ea†Eb commutes with M ∈ G iff EaEb = ±Ea†Eb does). For instance, the code will have distance d iff N(S) − S contains no elements of weight less than d. If S has elements of weight less than d (other than the identity), it is a degenerate code; otherwise it is a nondegenerate code. For instance, the nine-qubit code is degenerate, since it has distance three and σz1σz2 ∈ S. A nondegenerate stabilizer code satisfies

〈ψi|Ea†Eb|ψj〉 = δab δij. (3.8)
By convention, an [n, 0, d] code must be nondegenerate. When EaEb ∈ S, we say that the errors Ea and Eb are degenerate. We cannot distinguish between Ea and Eb, but there is no need to, since they have the same effect on the codewords.

It is sometimes useful to define the error syndrome for a stabilizer code. Let fM : G → Z2 be

fM(E) = 0 if [M, E] = 0, and fM(E) = 1 if {M, E} = 0, (3.9)

and let f(E) = (fM1(E), . . . , fMn−k(E)), where M1, . . . , Mn−k are the generators of S. Then f(E) is an (n − k)-bit binary number which is 0 iff E ∈ N(S). Since f(Ea) = f(Eb) iff f(EaEb) = 0, for a nondegenerate code f(E) is different for each correctable error E.

In order to perform the error-correction operation for a stabilizer code, all we need to do is measure the eigenvalue of each generator of the stabilizer. The eigenvalue of Mi will be (−1)^fMi(E), so this process gives us the error syndrome. The error syndrome in turn tells us exactly what error occurred (for a nondegenerate code) or what set of degenerate errors occurred (for a degenerate code). The error will always be in G, since the code uses that error basis, and every operator in G is unitary, and therefore invertible. Then we just apply the error operator (or one equivalent to it by multiplication by S) to fix the state. Note that even if the original error that occurred is a nontrivial linear combination of errors in G, the process of syndrome measurement will project onto one of the basis errors. If the resulting error is not in the correctable set, we will end up in the wrong encoded state, but otherwise, we are in the correct state. In chapter 5, I describe a few ways of measuring the error syndrome that are tolerant of imperfect component gates.

Since the elements of N(S) move codewords around within T, they have a natural interpretation as encoded operations on the codewords. Since S fixes T, actually only N(S)/S acts on T nontrivially. If we pick a basis for T consisting of eigenvectors of n commuting elements of N(S), we get an isomorphism N(S)/S → Gk. N(S)/S can therefore be generated by i (which we will by and large ignore) and 2k equivalence classes, which I will write X̄i and Z̄i (i = 1 . . . k), where X̄i maps to σxi in Gk and Z̄i maps to σzi in Gk. They are encoded σx and σz operators for the code. If k = 1, I will write X̄1 = X̄ and Z̄1 = Z̄. The X̄ and Z̄ operators satisfy

[X̄i, X̄j] = 0 (3.10)
[Z̄i, Z̄j] = 0 (3.11)
[X̄i, Z̄j] = 0 (i ≠ j) (3.12)
{X̄i, Z̄i} = 0. (3.13)

3.3 Some Examples

I shall now present a few short codes to use as examples. The first encodes one qubit in five qubits [17, 24] and is given in table 3.2. I have also included X̄ and Z̄, which, along with M1 through M4, generate N(S).

M1  σx σz σz σx I
M2  I  σx σz σz σx
M3  σx I  σx σz σz
M4  σz σx I  σx σz
X̄   σx σx σx σx σx
Z̄   σz σz σz σz σz

Table 3.2: The stabilizer for the five-qubit code.

Note that this code is cyclic (i.e., the stabilizer and codewords are invariant under cyclic permutations of the qubits). It has distance three (for instance, σy1σz2σy3 ∈ N(S) − S) and is nondegenerate. We can take the basis codewords for this code to be

|0̄〉 = ∑_{M∈S} M |00000〉 (3.14)

and

|1̄〉 = X̄|0̄〉. (3.15)

That is,

|0̄〉 = |00000〉 + M1|00000〉 + M2|00000〉 + M3|00000〉 + M4|00000〉
    + M1M2|00000〉 + M1M3|00000〉 + M1M4|00000〉
    + M2M3|00000〉 + M2M4|00000〉 + M3M4|00000〉 (3.16)
    + M1M2M3|00000〉 + M1M2M4|00000〉 + M1M3M4|00000〉
    + M2M3M4|00000〉 + M1M2M3M4|00000〉

  = |00000〉 + |10010〉 + |01001〉 + |10100〉
    + |01010〉 − |11011〉 − |00110〉 − |11000〉
    − |11101〉 − |00011〉 − |11110〉 − |01111〉 (3.17)
    − |10001〉 − |01100〉 − |10111〉 + |00101〉,

and

|1̄〉 = X̄|0̄〉 = |11111〉 + |01101〉 + |10110〉 + |01011〉
    + |10101〉 − |00100〉 − |11001〉 − |00111〉
    − |00010〉 − |11100〉 − |00001〉 − |10000〉 (3.18)
    − |01110〉 − |10011〉 − |01000〉 + |11010〉.

Since multiplying by an element of the stabilizer merely rearranges the sum ∑ M, these two states are in T. When these are the encoded 0 and 1, X̄ is the encoded bit flip operator σx and Z̄ is the encoded σz. This code also has the property that every possible error syndrome is used by the single-qubit errors. It is therefore a perfect code. There are a number of other perfect codes [25, 26], which will be discussed in chapter 8.
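Equation (3.14) can be carried out literally with matrices. A Python sketch that builds |0̄〉 for the five-qubit code by summing the sixteen elements of S and checks its stabilizer eigenvalues (normalization is ignored, as in the text):

import numpy as np
from functools import reduce
from itertools import combinations

pauli = {'I': np.eye(2), 'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}
op = lambda s: reduce(np.kron, [pauli[c] for c in s])

# generators of the five-qubit code, from table 3.2
gens = [op(s) for s in ("XZZXI", "IXZZX", "XIXZZ", "ZXIXZ")]

ket0 = np.zeros(32); ket0[0] = 1                     # |00000>
zero_bar = sum(reduce(np.dot, [gens[i] for i in c], np.eye(32)) @ ket0
               for r in range(5) for c in combinations(range(4), r))
for M in gens:
    assert np.allclose(M @ zero_bar, zero_bar)       # eq. (3.14): fixed by S

one_bar = op("XXXXX") @ zero_bar                     # eq. (3.15)
assert np.allclose(op("ZZZZZ") @ one_bar, -one_bar)  # Zbar eigenvalue -1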
A code encoding three qubits in eight qubits [21, 22, 27] appears in table 3.3. Again, M1 through M5 generate the stabilizer, and generate N(S) together with the X̄i and Z̄i.

M1  σx σx σx σx σx σx σx σx
M2  σz σz σz σz σz σz σz σz
M3  I  σx I  σx σy σz σy σz
M4  I  σx σz σy I  σx σz σy
M5  I  σy σx σz σx σz I  σy
X̄1  σx σx I  I  I  σz I  σz
X̄2  σx I  σx σz I  I  σz I
X̄3  σx I  I  σz σx σz I  I
Z̄1  I  σz I  σz I  σz I  σz
Z̄2  I  I  σz σz I  I  σz σz
Z̄3  I  I  I  I  σz σz σz σz

Table 3.3: The stabilizer for the eight-qubit code.

This is also a nondegenerate distance three code. The encoded state corresponding to |c1c2c3〉 is

X̄1^c1 X̄2^c2 X̄3^c3 ∑_{M∈S} M |00000000〉. (3.19)

The operators X̄i and Z̄i are the encoded σx and σz on the ith encoded qubit. This code is one of an infinite family of codes [21, 28], which I present in chapter 8.

A particularly useful class of codes with simple stabilizers is the Calderbank-Shor-Steane (or CSS) class of codes [29, 30]. Suppose we have a classical code with parity check matrix P. We can make a quantum code to correct just σx errors using a stabilizer with elements corresponding to the rows of P, with a σz wherever P has a 1 and I's elsewhere. The error syndrome f(E) for a product of σx errors E is then equal to the classical error syndrome for the same set of classical bit flip errors. Now add in stabilizer generators corresponding to the parity check matrix Q of a second classical code, only now with σx's instead of σz's. These generators will identify σz errors. Together, the two sets can also identify σy errors, which will have a nontrivial error syndrome for both parts. In general, a code formed this way will correct as many σx errors as the code for P can correct, and as many σz errors as the code for Q can correct; a σy error counts as one of each. We can only combine P and Q into a single stabilizer in the CSS form if the generators derived from the two codes commute. This will be true iff the rows of P and Q are orthogonal using the binary dot product, which means that the dual of each code must be a subset of the other code. The minimum distance of the quantum code will be the minimum of the distances of P and Q.
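The commutativity requirement is a quick binary check. A sketch for the construction used next, where P = Q is a parity check matrix of the classical [7, 4, 3] Hamming code (this is one standard form of that matrix; the generators in table 3.4 use an equivalent one):

import numpy as np

P = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])
Q = P                              # the same classical code used twice

# the sigma_z generators built from P commute with the sigma_x
# generators built from Q iff every row of P is orthogonal to every
# row of Q mod 2
assert np.all(P @ Q.T % 2 == 0)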
An example of a code of this sort is given in table 3.4. It is based on the classical [7, 4, 3] Hamming code, whose dual code is contained within it.

M1  σx σx σx σx I  I  I
M2  σx σx I  I  σx σx I
M3  σx I  σx I  σx I  σx
M4  σz σz σz σz I  I  I
M5  σz σz I  I  σz σz I
M6  σz I  σz I  σz I  σz
X̄   I  I  I  I  σx σx σx
Z̄   I  I  I  I  σz σz σz

Table 3.4: The seven-qubit CSS code.

For this code, the codewords are

|0̄〉 = |0000000〉 + |1111000〉 + |1100110〉 + |1010101〉
    + |0011110〉 + |0101101〉 + |0110011〉 + |1001011〉 (3.20)

and

|1̄〉 = |0000111〉 + |1111111〉 + |1100001〉 + |1010010〉
    + |0011001〉 + |0101010〉 + |0110100〉 + |1001100〉. (3.21)

The encoded |0̄〉 state is the superposition of the even codewords in the Hamming code, and the encoded |1̄〉 state is the superposition of the odd codewords in the Hamming code. This behavior is characteristic of CSS codes; in general, the various quantum codewords are superpositions of the words in subcodes of one of the classical codes. CSS codes are not as efficient as the most general quantum code, but they are easy to derive from known classical codes, and their simple form often makes them ideal for other purposes. For instance, the seven-qubit code is particularly well suited for fault-tolerant computation (as I will discuss in chapter 5).

3.4 Alternate Languages for Stabilizers

There are a number of possible ways of describing the stabilizer of a quantum code. Each has advantages and is useful in different circumstances. The description I have used so far uses the language of finite group theory and is particularly useful for making contact with the usual language of quantum mechanics. This is the form presented in .

We can instead write the stabilizer using binary vector spaces, as in , which emphasizes connections with the classical theory of error-correcting codes. To do this, we write the stabilizer as a pair of (n − k) × n binary matrices (often written as one (n − k) × 2n matrix with a line separating the two halves). The rows correspond to the different generators of the stabilizer and the columns correspond to different qubits. One matrix has a 1 wherever the generator has a σx or a σy in the appropriate place; the other has a 1 wherever the generator has a σy or a σz. Overall phase factors get dropped. For instance, the five-qubit code in this form becomes

( 1 0 0 1 0 | 0 1 1 0 0 )
( 0 1 0 0 1 | 0 0 1 1 0 )
( 1 0 1 0 0 | 0 0 0 1 1 )
( 0 1 0 1 0 | 1 0 0 0 1 ).  (3.22)

Other elements of G get converted to two n-dimensional vectors in the same way. We can convert back to the group theory formalism by writing down operators with a σx wherever the left vector or matrix has a 1, a σz wherever the right vector or matrix has a 1, and a σy wherever both have a 1. The generators formed this way will never have overall phase factors, although other elements of the group might. Multiplication of group elements corresponds to addition of the corresponding binary vectors.

In the binary formalism, the condition that two operators commute becomes the condition that the following inner product vanish:

Q(a|b, c|d) = ∑_{i=1}^{n} (ai di + bi ci) = 0, (3.23)

using binary arithmetic as usual; ai, bi, ci, and di are the ith components of the corresponding vectors. The condition that the stabilizer be Abelian therefore converts to the condition that the stabilizer matrix (A|B) satisfy

∑_{l=1}^{n} (Ail Bjl + Bil Ajl) = 0. (3.24)

We determine the vectors in N(S) by evaluating the inner product (3.23) with the rows of (A|B). To get a real code (with an even number of σy's), the code should also satisfy

∑_{l=1}^{n} Ail Bil = 0. (3.25)
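The inner product (3.23) takes two lines of Python; a sketch verifying (3.24) and (3.25) for the five-qubit matrix (3.22):

import numpy as np

# (A|B) form of the five-qubit code, eq. (3.22)
A = np.array([[1, 0, 0, 1, 0], [0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0], [0, 1, 0, 1, 0]])
B = np.array([[0, 1, 1, 0, 0], [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1], [1, 0, 0, 0, 1]])

def Q(a, b, c, d):
    # symplectic inner product of eq. (3.23); 0 iff the operators commute
    return (a @ d + b @ c) % 2

# eq. (3.24): every pair of generators commutes, so S is Abelian
assert all(Q(A[i], B[i], A[j], B[j]) == 0 for i in range(4) for j in range(4))
# eq. (3.25): each generator contains an even number of sigma_y's
assert np.all((A * B).sum(axis=1) % 2 == 0)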
Another formalism highlights connections with the classical theory of codes over the field GF(4). This is a field of characteristic two containing four elements, which can be written {0, 1, ω, ω²}. Since the field has characteristic two,

1 + 1 = ω + ω = ω² + ω² = 0. (3.26)

Also, ω³ = 1 and 1 + ω = ω². We can rewrite the generators as an n-dimensional "vector" over GF(4) by substituting 1 for σx, ω for σz, and ω² for σy. The multiplicative structure of G becomes the additive structure of GF(4). I put vector in quotes because the code need not have the structure of a vector space over GF(4). If it does (that is, if the stabilizer is closed under multiplication by ω), the code is a linear code, which is essentially a classical code over GF(4). The most general quantum code is sometimes called an additive code, because the stabilizer is only closed under sums of its elements. In this formalism, the five-qubit code appears as

( 1 ω ω 1 0 )
( 0 1 ω ω 1 )
( 1 0 1 ω ω )
( ω 1 0 1 ω ).  (3.27)

Note that the five-qubit code is a linear quantum code. Again, there is an additional condition for a quantum code. Define the "trace" operator by Tr ω = Tr ω² = 1 and Tr 1 = Tr 0 = 0. Two operators in G commute iff their images, the vectors u and v over GF(4), satisfy

Tr u · v̄ = Tr( ∑_{j=1}^{n} uj v̄j ) = 0, (3.28)

where v̄j is the conjugate of the jth component of v; conjugation switches ω and ω² and leaves 0 and 1 alone.
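The GF(4) arithmetic is small enough to tabulate directly. A Python sketch checking (3.28) on the rows of (3.27) (the integer encoding 0, 1, 2, 3 for 0, 1, ω, ω² is my convention):

# GF(4) encoded as 0, 1, 2, 3 for 0, 1, w, w^2; addition is bitwise XOR
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
CONJ = [0, 1, 3, 2]          # conjugation swaps w and w^2
TR = [0, 0, 1, 1]            # Tr 0 = Tr 1 = 0, Tr w = Tr w^2 = 1

def inner(u, v):
    # the trace inner product of eq. (3.28)
    acc = 0
    for uj, vj in zip(u, v):
        acc ^= MUL[uj][CONJ[vj]]     # GF(4) sum of u_j * conj(v_j)
    return TR[acc]

# rows of the five-qubit code, eq. (3.27): (1, w, w, 1, 0) and its cyclic shifts
rows = [[1, 2, 2, 1, 0], [0, 1, 2, 2, 1], [1, 0, 1, 2, 2], [2, 1, 0, 1, 2]]
assert all(inner(u, v) == 0 for u in rows for v in rows)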
3.5 Making New Codes From Old Codes

Using old codes to find new ones can simplify the task of finding codes, which can otherwise be quite a difficult problem. There are a number of simple modifications we can make to existing codes to produce new codes with different parameters [25, 26].

One trivial change is to perform a permutation of σx, σy, and σz on each qubit. This leaves the distance and size of the code the same, although it may be useful for codes that can correct different numbers of σx, σy, and σz errors.

A slightly less trivial manipulation is to add a new qubit and a new generator which is σx on the new qubit. The other generators are tensored with the identity on the new qubit to form the generators of the new code. This makes an [n, k, d] code (degenerate or nondegenerate) into an [n + 1, k, d] degenerate code: any operator acting as σy or σz on the new qubit will anticommute with the new generator, and any operator of the form M ⊗ σx(n+1) will be equivalent to the operator M ⊗ I. Therefore, an operator must have weight at least d when restricted to the first n qubits to be in N(S) − S.

A less trivial manipulation is to remove the last qubit, converting an [n, k, d] code into an [n − 1, k + 1, d − 1] code. To do this, we choose the n − k generators of S so that M1 ends in σx, M2 ends in σz, and M3 through Mn−k end in I. We can always do this when d > 1 by picking the first two and then multiplying the others by combinations of them to make them end appropriately. (If the code has been formed by adding a single σx (or σy or σz) generator, as above, we may not be able to do this for a given qubit, but there will always be at least one qubit for which we can.) Then the new code has a stabilizer formed from the last n − k − 2 generators, dropping M1 and M2. Suppose we have an operator A on the first n − 1 qubits of weight w that commutes with M3 through Mn−k. There are four possibilities, all of which lead to an operator of weight at most w + 1 that commutes with the original stabilizer:

1. A commutes with both M1 and M2. Then A itself commutes with the original stabilizer.
2. A commutes with M1, but not M2. Then A ⊗ σxn commutes with M1 and M2.
3. A commutes with M2, but not M1. Then A ⊗ σzn commutes with M1 and M2.
4. A anticommutes with both M1 and M2. Then A ⊗ σyn commutes with M1 and M2.

Since the original code had distance d, w must be at least d − 1, which is therefore the distance of the new code. The stabilizer has n − k − 2 generators, so the code encodes (n − 1) − (n − k − 2) = k + 1 qubits. The new X̄ and Z̄ operators are M1 and M2 (in either order), restricted to the first n − 1 qubits. An example of this construction is to remove the last qubit from the [5, 1, 3] code of table 3.2 to produce a [4, 2, 2] code: the generators of the new code are M1 and M3M4, both without the last qubit. The new stabilizer is given in table 3.5.

M′1  σx σz σz σx
M′2  σy σx σx σy
X̄1   σx σx σx σx
X̄2   σx I  σx σz
Z̄1   σy σz σy I
Z̄2   I  σx σz σz

Table 3.5: A [4, 2, 2] code derived from the [5, 1, 3] code.

Note that the Z̄1 operator is equal to M3Z̄ for the five-qubit code; I have multiplied by M3 so that Z̄1 anticommutes with X̄1.

Another way to make new codes is by pasting together old codes. Suppose we have four stabilizers R1, R2, S1, and S2, with R1 ⊂ S1 and R2 ⊂ S2. Let R1 define an [n1, l1, c1] code, R2 an [n2, l2, c2] code, S1 an [n1, k1, d1] code, and S2 an [n2, k2, d2] code. Then ki < li and ci ≤ di. We require l1 − k1 = l2 − k2, and S1 and S2 must be nondegenerate. (We can actually allow S1 and S2 to be degenerate, as long as all the degenerate operators are confined to R1 and R2.) Let the generators of R1 be {M1, . . . , Mn1−l1}, the generators of S1 be {M1, . . . , Mn1−k1}, the generators of R2 be {N1, . . . , Nn2−l2}, and the generators of S2 be {N1, . . . , Nn2−k2}. We form a new stabilizer S on n1 + n2 qubits generated by

{M1 ⊗ I, . . . , Mn1−l1 ⊗ I, I ⊗ N1, . . . , I ⊗ Nn2−l2, Mn1−l1+1 ⊗ Nn2−l2+1, . . . , Mn1−k1 ⊗ Nn2−k2}. (3.29)
The concatenated code has distance d1d2 because operators in N (S) − S must have distance at least d2 on at least d1 blocks of n2 qubits, so have weight at least d1d2. Note that it is not strictly necessary to use the same code to encode each qubit of S1.There are two possible ways to concatenate when S2 encodes multiple qubits. Suppose S1 is an [ n1, k 1, d 1] code and S2 is an [ n2, k 2, d 2] code. Further, suppose n1 is a multiple of k2. Then we can encode blocks of S1 of size k2 using S2.This will result in a code using n1n2/k 2 qubits to encode k1 qubits. It still takes an operator of distance at least d2 to cause an error on an n2-qubit block, but such an error can cause up to k2 errors on S1, so the resulting code need only have distance ⌈d1/k 2⌉d2. However, the k2 errors that result are not a general set of k2 errors, so the code may actually be better. Suppose S1 has distance d′ 1 (d′ 1 ≥ ⌈ d1/k 2⌉) for blocks of k2 errors, i.e., d′ 1 such blocks must have errors before the code fails. Then the concatenated code has distance d′ 1 d2.Another way to concatenate codes encoding multiple qubits is to add addi-tional blocks of S1 to fill the spaces in S2. That is, we actually encode k2 copies CHAPTER 3. STABILIZER CODING 28 M1 σx σz σz σx I I I I I I I I I I I I I I I I I I I I IM2 I σ x σz σz σx I I I I I I I I I I I I I I I I I I I IM3 σx I σ x σz σz I I I I I I I I I I I I I I I I I I I IM4 σz σx I σ x σz I I I I I I I I I I I I I I I I I I I IM5 I I I I I σx σz σz σx I I I I I I I I I I I I I I I IM6 I I I I I I σ x σz σz σx I I I I I I I I I I I I I I IM7 I I I I I σx I σ x σz σz I I I I I I I I I I I I I I IM8 I I I I I σz σx I σ x σz I I I I I I I I I I I I I I IM9 I I I I I I I I I I σx σz σz σx I I I I I I I I I I IM10 I I I I I I I I I I I σ x σz σz σx I I I I I I I I I IM11 I I I I I I I I I I σx I σ x σz σz I I I I I I I I I IM12 I I I I I I I I I I σz σx I σ x σz I I I I I I I I I IM13 I I I I I I I I I I I I I I I σx σz σz σx I I I I I IM14 I I I I I I I I I I I I I I I I σ x σz σz σx I I I I IM15 I I I I I I I I I I I I I I I σx I σ x σz σz I I I I IM16 I I I I I I I I I I I I I I I σz σx I σ x σz I I I I IM17 I I I I I I I I I I I I I I I I I I I I σx σz σz σx IM18 I I I I I I I I I I I I I I I I I I I I I σ x σz σz σx M19 I I I I I I I I I I I I I I I I I I I I σx I σ x σz σz M20 I I I I I I I I I I I I I I I I I I I I σz σx I σ x σz M21 σx σx σx σx σx σz σz σz σz σz σz σz σz σz σz σx σx σx σx σx I I I I IM22 I I I I I σx σx σx σx σx σz σz σz σz σz σz σz σz σz σz σx σx σx σx σx M23 σx σx σx σx σx I I I I I σx σx σx σx σx σz σz σz σz σz σz σz σz σz σz M24 σz σz σz σz σz σx σx σx σx σx I I I I I σx σx σx σx σx σz σz σz σz σz Table 3.7: Result of concatenating the five-qubit code with itself. CHAPTER 3. STABILIZER CODING 29 of S1, encoding the ith qubit of each copy in the same S2 block. This produces an [ n1n2, k 1k2, d 1d2] code, since any failure of an S2 block only produces one error in each S1 block. 3.6 Higher Dimensional States So far, we have only considered systems for which the Hilbert space is the tensor product of two-state systems. However, it may turn out that a good physical implementation of quantum computation uses three- or four-level atoms, or spin-one particles, or some other system where it makes more sense to consider it as the tensor product of d-dimensional systems, where d > 2. I will call the fundamental unit of such a system a qudit . 
In such a case, we will want to consider error-correcting codes in which a single qudit error can occur with reasonable probability. For these systems, the stabilizer code formalism needs to be modified to deal with the extra dimensions. Fundamental to the success of the stabilizer formalism was the use of the Pauli spin matrix basis for possible errors. The algebraic properties of this basis allowed a straightforward characterization of errors depending on whether they commuted or anticommuted with elements of an Abelian group. Knill has codified the properties necessary for this construction to generalize to d-dimensional spaces. Suppose we have a set of d² unitary operators E1, . . . , E_{d²} (including the identity) acting on a single qudit, such that the Ei's form a basis for all possible d × d complex matrices. If EiEj = wij E_{i∗j} for all i, j (where ∗ is some binary group operation), then the Ei's are said to form a nice error basis. The values wij will then have modulus one. Given a nice error basis, we form the group Gn for this basis as the tensor product of n copies of the error basis, with possible overall phases generated by the wij's. Then an Abelian subgroup S of Gn that does not contain any nontrivial phase times the identity will have a nontrivial set T of states in the Hilbert space lying in the +1 eigenspace of every operator in S. The code T can detect any error E for which EM = cME for some M ∈ S and some c ≠ 1.

One interesting complication of codes over d-dimensional spaces is that when S has n − k generators, T need not encode k qudits. This can only occur when d is composite and the order of a generator of S is a nontrivial factor of d. It is still true that if S has r elements, then T will be (d^n/r)-dimensional. If all the generators of S have order d, T does encode k qudits.

One particularly convenient error basis for any d is generated by Dω and Cd, where (Dω)ij = δij ω^i and (Cd)ij = δ_{j,(i+1 mod d)}, with ω a primitive dth root of unity. For d = 2, this just reduces to the usual Pauli basis, since C2 = σx and D−1 = σz. For higher d, Dω maps |i〉 → ω^i|i〉 and Cd adds one modulo d. This is a nice error basis, with

Cd Dω = ω Dω Cd. (3.30)

The elements of the basis can be written Cd^a Dω^b, and

(Cd^a Dω^b)(Cd^{a′} Dω^{b′}) = ω^{ab′ − a′b} (Cd^{a′} Dω^{b′})(Cd^a Dω^b). (3.31)

Codes for higher-dimensional systems have not been as extensively studied as those for two-dimensional systems, but some constructions are given in [31, 32, 33, 34, 35].
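The relation (3.30) is quick to confirm numerically; a sketch for d = 3 (an arbitrary choice):

import numpy as np

d = 3
w = np.exp(2j * np.pi / d)                  # primitive dth root of unity
D = np.diag([w**i for i in range(d)])       # (D_w)_ij = delta_ij w^i
C = np.zeros((d, d))
for i in range(d):
    C[i, (i + 1) % d] = 1                   # (C_d)_ij = delta_{j,(i+1 mod d)}

assert np.allclose(C @ D, w * D @ C)        # eq. (3.30)
# for d = 2 this reduces to the Pauli case: C_2 = sigma_x, D_-1 = sigma_z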
Chapter 4

Encoding and Decoding Stabilizer Codes

4.1 Standard Form for a Stabilizer Code

To see how to encode a general stabilizer code, it is helpful to describe the code in the language of binary vector spaces (see section 3.4). Note that the specific choice of generators is not at all unique. We can always replace a generator Mi with MiMj for some other generator Mj; the corresponding effect on the binary matrices is to add row j to row i in both matrices. For simplicity, it is also helpful to rearrange qubits in the code, which has the effect of rearranging the corresponding columns in both matrices. Combining these two operations, we can perform Gaussian elimination on the first matrix, putting the code in the form

( I A | B C )
( 0 0 | D E ),  (4.1)

where the first block of rows contains the first r generators and the second block the remaining n − k − r; I is an r × r identity matrix, A is r × (n − r), B is r × r, C is r × (n − r), and D and E are (n − k − r) × r and (n − k − r) × (n − r) respectively. Here, r is the rank of the σx portion of the stabilizer generator matrix. Then we perform another Gaussian elimination on E to get

( I A1 A2 | B  C1 C2 )
( 0 0  0  | D1 I  E2 )
( 0 0  0  | D2 0  0  ),  (4.2)

where the three blocks of rows contain r, n − k − r − s, and s generators, and the column blocks within each half have widths r, n − k − r − s, and k + s. The rank of E is n − k − r − s. However, the first r generators will not commute with the last s generators unless D2 = 0, which really implies that s = 0. Thus, we can always put the code into the standard form

( I A1 A2 | B C1 C2 )
( 0 0  0  | D I  E  ),  (4.3)

with column blocks of widths r, n − k − r, and k. For instance, the standard form for the five-qubit code of table 3.2 is

( 1 0 0 0 1 | 1 1 0 1 1 )
( 0 1 0 0 1 | 0 0 1 1 0 )
( 0 0 1 0 1 | 1 1 0 0 0 )
( 0 0 0 1 1 | 1 0 1 1 1 ).  (4.4)

Suppose we have an X̄ operator which in this language is written (u|v) = (u1 u2 u3 | v1 v2 v3), where u1 and v1 are r-dimensional vectors, u2 and v2 are (n − k − r)-dimensional vectors, and u3 and v3 are k-dimensional vectors. Elements of N(S) are equivalent up to multiplication by elements of S, so we can also perform eliminations on X̄ to force u1 = 0 and v2 = 0. Then, because X̄ is in N(S), it must satisfy (3.23) with each row of the stabilizer, i.e., the stabilizer matrix applied to the column vector (v1ᵀ, 0, v3ᵀ, 0, u2ᵀ, u3ᵀ)ᵀ must vanish:

( v1ᵀ + A2 v3ᵀ + C1 u2ᵀ + C2 u3ᵀ ; u2ᵀ + E u3ᵀ ) = ( 0 ; 0 ). (4.5)

Suppose we want to choose a complete set of k X̄ operators. We can combine their vectors into two k × n matrices (0 U2 U3 | V1 0 V3). We want them to commute with each other, so U3V3ᵀ + V3U3ᵀ = 0. Suppose we pick U3 = I. Then we can take V3 = 0, and by equation (4.5), U2 = Eᵀ and V1 = EᵀC1ᵀ + C2ᵀ. The rest of the construction will assume that this choice has actually been made; another choice of U3 and V3 will require us to perform some operation on the unencoded data to compensate. For the five-qubit code, the standard form of the X̄ generator would be (00001|10010). We can see that this is equivalent (mod S) to the X̄ given in table 3.2.

We can also pick a complete set of k Z̄ operators, which act on the code as encoded σz operators. They are uniquely defined (up to multiplication by S, as usual) given the X̄ operators: Z̄i is an operator that commutes with every M ∈ S, commutes with X̄j for i ≠ j, and anticommutes with X̄i. We can bring the Z̄ operators into the standard form (0 U2′ U3′ | V1′ 0 V3′). Then

U3′V3ᵀ + V3′U3ᵀ = I. (4.6)

When U3 = I and V3 = 0, V3′ = I. Since equation (4.5) holds for the Z̄ operators too, U2′ = U3′ = 0 and V1′ = A2ᵀ. For instance, for the five-qubit code, the standard form of the Z̄ generator is (00000|11111), which is exactly what is given in table 3.2.
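The row reduction used here is ordinary Gaussian elimination over Z2 with column swaps allowed (tracked so the qubits can be relabeled; in a full standard-form computation the same row operations and column swaps would also be applied to the σz half). A Python sketch, applied to the σx half of the five-qubit matrix (3.22), where it reproduces the σx half of (4.4) with no column swaps needed:

import numpy as np

def gauss_mod2(M):
    # row-reduce a binary matrix mod 2, swapping columns when necessary,
    # so the result begins with an identity block; returns the reduced
    # matrix and the column permutation used
    M = M.copy() % 2
    rows, cols = M.shape
    perm = list(range(cols))
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue                          # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]         # move pivot row up
        if c != r:
            M[:, [r, c]] = M[:, [c, r]]       # move pivot column left
            perm[r], perm[c] = perm[c], perm[r]
        for i in range(rows):
            if i != r and M[i, r]:
                M[i] = (M[i] + M[r]) % 2      # clear the rest of the column
        r += 1
    return M, perm

A = np.array([[1, 0, 0, 1, 0], [0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0], [0, 1, 0, 1, 0]])
R, perm = gauss_mod2(A)
print(R)        # identity block followed by the column (1,1,1,1)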
4.2 Network for Encoding

Given a stabilizer in standard form, along with the X̄ operators in standard form, it is straightforward to produce a network to encode the corresponding code. The encoding operation for a stabilizer code can be written as

|c1 . . . ck〉 → ( ∑_{M∈S} M ) X̄1^c1 · · · X̄k^ck |0 . . . 0〉 (4.7)
            = (I + M1) · · · (I + Mn−k) X̄1^c1 · · · X̄k^ck |0 . . . 0〉, (4.8)

where M1 through Mn−k generate the stabilizer and X̄1 through X̄k are the encoded σx operators for the k encoded qubits. This is true because, in general, for any N ∈ S,

N ( ∑_{M∈S} M ) |ψ〉 = ( ∑_{M∈S} NM ) |ψ〉 = ( ∑_{M′∈S} M′ ) |ψ〉, (4.9)

so ∑ M |ψ〉 is in the coding space T for any state |ψ〉. If we define the encoded 0 as

|0̄〉 = ∑_{M∈S} M |0 . . . 0〉 (n qubits), (4.10)

then by the definition of the X̄'s, we should encode

|c1 . . . ck〉 → X̄1^c1 · · · X̄k^ck ( ∑_{M∈S} M ) |0 . . . 0〉. (4.11)

Since X̄i commutes with every M ∈ S, this is just (4.7). Naturally, to encode this, we only need to worry about encoding the basis states |c1 . . . ck〉.

The standard form of X̄i is Z(r) X(n−k−r) σx(n−k+i), where Z(r) is a product of σz's on the first r qubits and X(n−k−r) is a product of σx's on the next n − k − r qubits. Suppose we put the kth input qubit |ck〉 in the nth spot, following n − 1 |0〉's. The state X̄k^ck |0 . . . 0〉 therefore has a 1 for the nth qubit iff |ck〉 = |1〉. This means we can get the state X̄k^ck |0 . . . 0〉 by applying X̄k (without the final σxn) to the input state, conditioned on the nth qubit. For instance, for the five-qubit code, X̄ = σz ⊗ I ⊗ I ⊗ σz ⊗ σx. The corresponding operation is illustrated in figure 4.1. In this case r = n − k = 4, so there are no bit flips, only controlled-σz's.

[Figure 4.1: Creating the state X̄|00000〉 for the five-qubit code.]

In the more general case, we also need to apply X̄1 through X̄k−1, depending on c1 through ck−1. Since the form of the X̄'s ensures that each only operates on a single one of the last k qubits, we can substitute |ci〉 for the (n − k + i)th qubit and apply X̄i conditioned on it, as with |ck〉. This produces the state X̄1^c1 · · · X̄k^ck |0 . . . 0〉. Further, note that the X̄ operators only act as σz on the first r qubits and as σx on the next n − k − r qubits. Since σz acts trivially on |0〉, we can just ignore that part of the X̄'s when implementing this part of the encoder, leaving just the controlled-NOTs. The first r qubits automatically remain in the state |0〉 after this step of encoding. This means that for the five-qubit code, this step of encoding is actually trivial, with no operations; in general, this step is only necessary when r < n − k.

For the next step of the encoding, we note that the standard form of the first r generators applies only a single bit flip within the first r qubits. This means that when we apply I + Mi, the resulting state will be the sum of a state with |0〉 for the ith qubit and a state with |1〉 for the ith qubit. We therefore apply the Hadamard transform

R = (1/√2) ( 1 1 ; 1 −1 ) (4.12)

to the first r qubits, putting each in the state |0〉 + |1〉. Then we apply Mi (for i = 1, . . . , r) conditioned on qubit i (ignoring the factor of σxi). While these operators may perform phase operations on the first r qubits, they do not flip them, so there is no risk of one operation confusing the performance of another. The one possible complication is when Mi has a factor of σzi. In this case, σzi only introduces a minus sign if the qubit is |1〉 anyway, so we do not need to condition it on anything; just performing σzi after the Hadamard transform is sufficient. For the five-qubit code, the full network for encoding is given in figure 4.2.

[Figure 4.2: Network for encoding the five-qubit code.]

For more general codes, r < n − k, and there are n − k − r generators formed just from tensor products of σz's. However, we do not need to consider such generators to encode. Let M be such a generator. Since M commutes with all the other generators and with every X̄, we can commute I + M through until it acts directly on |0 . . . 0〉. However, σz acts trivially on |0〉, so I + M fixes |0 . . . 0〉, and in equation (4.8) we can skip any Mi that is a tensor product of σz's. The effect of these operators is seen just in the form of the X̄ operators, which must commute with them. Applying each of the X̄ operators requires up to n − k − r two-qubit operations.
Each of the first r qubits must be prepared with a Hadamard transform and possibly a σz, which we can combine with the Hadamard transform. Then applying each of the first r generators requires up to n − 1 two-qubit operations. The whole encoder therefore requires up to r one-qubit operations and at most

k(n − k − r) + r(n − 1) ≤ (k + r)(n − k) ≤ n(n − k) (4.13)

two-qubit operations.

4.3 Other Methods of Encoding and Decoding

We can decode a code by performing the above network in reverse. In order to do this, we should first perform an error correction cycle, since the network will not necessarily work properly on an encoded state. Note that in principle we can build a decoder that corrects while decoding. We can form a basis for the Hilbert space from the states A|ψi〉, where A ∈ G and |ψi〉 is a basis state for the coding space T. The combined corrector/decoder would map A|ψi〉 to |i〉 ⊗ |f(A)〉, where f(A) is the error syndrome for A. If A is not a correctable error, |i〉 will not necessarily be the state encoded by |ψi〉, but if A is correctable, it will be. It is not usually worthwhile using a quantum network that does this, since the error correction process is usually dealt with more easily using classical measurements. However, some proposed implementations of quantum computation cannot be used to measure a single system, so this sort of network would be necessary. The decoding method presented in can easily be adapted to produce networks that simultaneously correct and decode.

One good reason not to decode by running the encoder backwards is that most of the work in the encoder went into producing the encoded 0. There is no actual information in that state, so we might be able to save time decoding if we could remove the information without dealing with the structure of the encoded 0. We can do this by using the X̄ and Z̄ operators. If we want to measure the ith encoded qubit without decoding, we can do so by measuring the eigenvalue of Z̄i: if the eigenvalue is +1, the ith encoded qubit is |0〉; if it is −1, the ith encoded qubit is |1〉. In standard form, Z̄i is a tensor product of σz's, so it has eigenvalue (−1)^P, where P is the parity of the qubits acted on by Z̄i. Therefore, if we apply a controlled-NOT from each of these qubits to an ancilla qubit, we have performed a controlled-NOT from the ith encoded qubit to the ancilla: the ancilla flips iff the ith encoded qubit is |1〉.

If the original state of the code is |0〉|ψ〉 + |1〉|φ〉 (with the first ket representing the ith logical qubit) and the ancilla begins in the state |0〉, after applying this CNOT operation we have

|0〉|ψ〉|0〉 + |1〉|φ〉|1〉. (4.14)

Now we apply X̄i conditioned on the ancilla qubit. This flips the ith encoded qubit iff the ancilla is |1〉, producing the state

|0〉|ψ〉|0〉 + |0〉|φ〉|1〉 = |0〉 (|ψ〉|0〉 + |φ〉|1〉). (4.15)

The ith encoded qubit has been set to 0, and the ancilla holds the state that the ith encoded qubit used to hold. The rest of the code has been left undisturbed. We can repeat this operation with each of the encoded qubits, transferring them to k ancilla qubits. Each such operation requires at most 2(n − k + 1) two-qubit operations (since Z̄ requires at most r + 1 operations and X̄ could require n − k + 1).
Each such operation requires at most 2(n − k + 1) two-qubit operations (since Z requires at most r + 1 operations and X could require n − k + 1 operations). Therefore, the full decoder uses at most 2k(n − k + 1) operations, which is often less than is required to encode. At the end of the decoding, the original n qubits holding the code are left in the encoded 0 state.

We can run this process backwards to encode, but we need an encoded 0 state to begin with. This could be a residue from an earlier decoding operation, or it could be produced separately. One way to produce it would be to use the network of section 4.2 with |0 . . . 0〉 as the input data. Alternately, we could produce it by performing an error correction cycle on a set of n |0〉's for the stabilizer generated by M1, . . . , Mn−k, Z1, . . . , Zk. This stabilizer has n generators, so there is only one joint +1 eigenvector, which is just the encoded 0 for the original code.

Chapter 5

Fault-Tolerant Computation

5.1 Encoded Computation and Fault-Tolerance

I have shown how to encode qubits in blocks to protect them from individual errors. This, by itself, is useful for transmitting quantum data down a noisy communications line, for instance — we can encode the data, send it, correct the errors, and decode it, then process the data normally. However, this framework is insufficient for performing computations on a realistic quantum computer. If we need to decode the data in order to perform quantum gates on it, it is vulnerable to noise during the time it is decoded. Even if we know how to perform gates on the data while it is still encoded, we must be careful to ensure that a single error does not cause us to accidentally perform the wrong computation.

For instance, suppose a single qubit has been flipped and we apply a controlled-NOT from it to another qubit. Then the second qubit will flip exactly when it is supposed to stay the same. In consequence, both the first and the second qubits now have bit flip errors. If both qubits are part of the same block, we now have two errors in the block instead of one. Before very much of this occurs, we will have too many errors in the block to correct. If we correct errors often enough, we can salvage the situation, but in the process we lose much of the power of the error-correcting code. Therefore, I will define a fault-tolerant operation as one for which a single error introduces at most one error per block of the code. In a large computer, we have many encoded blocks of data, and a given operation may introduce one error in a number of them; however, each block retains its ability to correct that single error.

In the example above, an error propagated forward from the control qubit to the target qubit of the CNOT. In a quantum computer, errors can also propagate backwards. For instance, suppose we have the state

(α|0〉 + β|1〉)(|0〉 ± |1〉) (5.1)

and perform a CNOT from the first qubit to the second. The resulting state is

α|0〉(|0〉 ± |1〉) + β|1〉(±1)(|0〉 ± |1〉) = (α|0〉 ± β|1〉)(|0〉 ± |1〉). (5.2)

Initially flipping the sign on the second qubit results in a sign flip on the first qubit after the CNOT. In a CNOT, amplitude (bit flip) errors propagate forwards, and phase errors propagate backwards. This means that not only must we avoid performing operations from one qubit to another within a block, we must also avoid performing multiple CNOTs from a block onto the same target qubit, even if that target is a disposable ancilla qubit.
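A quick statevector check of (5.1)–(5.2), with the minus branch realized by placing a phase error on the second qubit before the CNOT; the amplitudes and qubit ordering are illustrative assumptions.

```python
import numpy as np

alpha, beta = 0.6, 0.8

# (alpha|0> + beta|1>) (|0> - |1>): a phase error already sits on qubit 2
state = np.kron([alpha, beta], np.array([1, -1]) / np.sqrt(2))

CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])  # qubit 1 controls
after = CNOT @ state

# eq (5.2): the sign moved onto qubit 1: (alpha|0> - beta|1>)(|0> - |1>)
expected = np.kron([alpha, -beta], np.array([1, -1]) / np.sqrt(2))
assert np.allclose(after, expected)
print("phase error propagated backwards through the CNOT")
```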
Were we to reuse the same target, a single phase error in the ancilla qubit could produce multiple errors within a block. Operations in which each qubit in a block interacts only with the corresponding qubit, either in another block or in a specialized ancilla, will be called transversal operations. Any transversal operation is automatically fault-tolerant, although there are some fault-tolerant operations which are not transversal.

5.2 Measurement and Error Correction

Suppose we want to measure the operator σz1σz2, as with Shor's nine-qubit code. The eigenvalue is +1 if both qubits are the same and −1 if they are different. One natural way to do this is to perform a CNOT from each of the two qubits to a third ancilla qubit, initially in the state |0〉. If both qubits are |0〉, the ancilla is left alone, and if both are |1〉, the ancilla is flipped twice, returning to the state |0〉. If only one of the two qubits is |1〉, the ancilla flips once, ending up in the state |1〉. Measuring the ancilla then tells us the eigenvalue of σz1σz2.

However, this procedure is not a transversal operation. Both qubits interact with the same ancilla qubit, and a single phase error on the ancilla qubit could produce phase errors in both data qubits, producing two errors in the block. (Actually, this particular example does not have this problem, since a phase error on the ancilla qubit is meaningless until after it has interacted with the first data qubit; but if we were measuring σz1σz2σz3 instead, the problem would be a real one.)

One possible solution is to use two ancilla qubits, both initially |0〉, instead of one. We perform CNOTs from the first data qubit to the first ancilla qubit and from the second data qubit to the second ancilla qubit, then measure the ancilla qubits and determine their parity. This again tells us the eigenvalue of σz1σz2, and we do not run the risk of introducing two phase errors into the data. However, we have instead done something worse: by measuring both ancilla qubits, we have, in effect, measured the original data qubits, which destroys any superposition of the +1-eigenstates of σz1σz2. To make this work, we need to be able to measure the ancilla without finding out anything about the data.

Since we are only interested in the parity of the data qubits, we could just as well have started the ancilla in the state |11〉 as |00〉. If both or neither ancilla qubits are flipped, the parity is still even, and if only one is flipped, the parity is odd, as it should be. However, measuring the ancilla still tells us what states the data qubits were in: the state of a data qubit is the reverse of the measured state of the corresponding ancilla qubit. This means that if we instead start the ancilla in the superposition |00〉 + |11〉 and perform CNOTs from the data qubits to the ancilla qubits, measuring the ancilla again tells us the parity of the data qubits, yet we do not know whether the state we measure originally corresponded to the ancilla state |00〉 or |11〉, which means we cannot deduce the state of the data. The two ancilla states correspond to the two possible states of the data qubits with the same parity. Measuring the ancilla therefore does not destroy a superposition of these two states of the data, which is what we desired. Because we interact each data qubit with a separate ancilla qubit, a single phase error in the ancilla will only produce a single phase error in the data.
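The following numpy sketch checks this for a data state α|00〉 + β|11〉 and the ancilla |00〉 + |11〉: after the two transversal CNOTs the ancilla factors out, so measuring it reveals the (even) parity without collapsing the data superposition. The qubit ordering (data 1, data 2, ancilla 1, ancilla 2) and the `cnot` helper are illustrative assumptions.

```python
import numpy as np

def cnot(n, control, target):
    """CNOT on an n-qubit register (qubit 0 is the leftmost tensor factor)."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control]:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

alpha, beta = 0.6, 0.8
data = alpha * np.kron([1, 0], [1, 0]) + beta * np.kron([0, 1], [0, 1])
bell = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
state = np.kron(data, bell)

state = cnot(4, 0, 2) @ state   # data qubit 1 -> ancilla qubit 1
state = cnot(4, 1, 3) @ state   # data qubit 2 -> ancilla qubit 2

# The ancilla disentangles: measuring it gives 00 or 11 (even parity)
# while the data superposition survives intact.
assert np.allclose(state, np.kron(data, bell))
print("parity read out without collapsing the data")
```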
Of course, if a single qubit in the ancilla flips, so that we start in the state |01〉 + |10〉, we will measure the wrong parity. We can circumvent this problem by simply preparing multiple ancillas in the same state, performing the CNOTs to each of them, and measuring each. If we prepare three such ancillas and take the parity to be the majority result, the answer will be correct unless two errors have occurred. If the chance of a single error is ε, the chance of getting two errors in the data or getting the wrong measurement result is O(ε²).

We can use this trick on products of more than two σz operators by preparing the ancilla in a state which is the sum of all even parity states. Such a state can be made by preparing a "cat" state |0 . . . 0〉 + |1 . . . 1〉 (named after Schrödinger's cat) and performing a Hadamard transform (4.12) on each qubit. Again, we perform a CNOT from each selected data qubit to the corresponding qubit in the ancilla and measure the ancilla. The result will have even parity iff the selected data qubits have even parity, but the measurement does not destroy superpositions of the possible data states with that parity. Again, a single error in the ancilla could give the wrong parity, so we should repeat the measurement. Also, the preparation of the "cat" state is not at all fault-tolerant, so we could easily have multiple bit flip errors in the "cat" state, which would result in multiple phase errors in the ancilla state. Since phase errors will feed back into the data, we should carefully verify the "cat" state to make sure that we do not have multiple amplitude errors.
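A three-qubit numpy check of the claim that Hadamards turn a cat state into the uniform superposition of even-parity states (here 000, 011, 101, 110); the choice n = 3 is illustrative.

```python
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
n = 3

cat = np.zeros(2 ** n)
cat[0] = cat[-1] = 1 / np.sqrt(2)          # |000> + |111>

Hn = reduce(np.kron, [H] * n)              # Hadamard on every qubit
state = Hn @ cat

even = [i for i in range(2 ** n) if bin(i).count("1") % 2 == 0]
expected = np.zeros(2 ** n)
expected[even] = 1 / 2                     # (|000>+|011>+|101>+|110>)/2
assert np.allclose(state, expected)
print("cat state -> equal superposition of even-parity states")
```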
Suppose we want to measure a more general operator in G, such as M1 = σx ⊗ σz ⊗ σz ⊗ σx ⊗ I, the first generator of the five-qubit code. Note that under the Hadamard transform,

|0〉 ↔ |0〉 + |1〉,  |1〉 ↔ |0〉 − |1〉, (5.3)

so the eigenvectors of σz transform to the eigenvectors of σx and vice-versa. This means that to measure M1, we should perform the Hadamard transform on qubits one and four and instead measure σz ⊗ σz ⊗ σz ⊗ σz ⊗ I, which we know how to do from the above discussion. Then we perform the Hadamard transform again to return to the original state (modulo any collapse caused by the measurement). In a similar way, we can rotate σy into σz (exactly how is discussed in more detail in section 5.3), and can therefore measure any operator in G.

From the ability to make measurements, we can easily perform error correction for any stabilizer code. Recall that to correct errors, we measure the eigenvalue of each generator of the stabilizer, which we now know how to do fault-tolerantly. This gives us the error syndrome, which tells us the error (or class of degenerate errors). This error is some operator in G, and to correct it, we just apply that operator to the code. Since it is a tensor product of single-qubit operators, this is a transversal operation, and is therefore fault-tolerant.

Because a full measurement of the error syndrome takes a fair amount of time, the possibility of an error in the data while measuring the syndrome cannot be ignored. An error in the data in the middle of the syndrome measurement will result in the wrong syndrome, which could correspond to a totally different error with nothing in common with the actual error. Therefore, we should measure the syndrome multiple times, stopping only when we have sufficient confidence that we have determined the correct current error syndrome. Since we are measuring the syndrome multiple times, we only need to measure each bit once per overall syndrome measurement; the repetitions of the syndrome measurement will also protect against individual errors in the syndrome bits. The true error syndrome will evolve over the course of the repeated measurements. Eventually, more errors will build up in the data than can be corrected by the code, producing a real error in the data. Assuming the basic error rate is low enough, this occurrence will be rare, and we can do many error correction cycles before it happens. However, eventually the computation will fail. In chapter 6, I will show how to avoid this result and do arbitrarily long computations, provided the basic error rate is sufficiently low.

5.3 Transformations of the Stabilizer

Now I will begin to discuss how to perform actual operations on encoded states. We already know how to perform encoded σx, σy, and σz operations on stabilizer codes. These operations all commute with the stabilizer and therefore leave the generators of the stabilizer alone. A more general unitary operation U will not necessarily do this. If M ∈ S, then |ψ〉 = M|ψ〉 for |ψ〉 ∈ T, and

U|ψ〉 = U M|ψ〉 = U M U† U|ψ〉, (5.4)

so U M U† fixes U|ψ〉. Even if we have an operator N which is not in S, U will take the eigenvectors of N to eigenvectors of U N U†, effectively transforming N → U N U†. Suppose U M U† ∈ G. Then if we want an operation that takes a valid codeword to another valid codeword, we need U M U† ∈ S. If this is true for all M ∈ S, then U|ψ〉 ∈ T as well, and U is a valid encoded operation. If it is also transversal, we know it will be fault-tolerant as well.

The set of U such that U A U† ∈ G for all A ∈ G is the normalizer N(G) of G in U(n). It turns out that N(G) is generated by the single-qubit operations R (the Hadamard transform) and

P = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, (5.5)

together with the controlled-NOT [17, 22]. The set of U such that U M U† ∈ S for all M ∈ S is the normalizer N_{U(n)}(S) of S in U(n), which need not be a subset of N(G). Any transversal operator in N_{U(n)}(S) is a valid fault-tolerant operation. However, operators outside of N(G) are much more difficult to work with and analyze. Therefore, I will restrict my attention to operators in the intersection of N(G) and N_{U(n)}(S).

The operators in N(G), acting on G by conjugation, permute tensor products of σx, σy, and σz. For instance,

R σx R† = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = σz (5.6)

R σz R† = \frac{1}{2} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = σx. (5.7)

Also,

R σy R† = −i R σx σz R† = −i R σx R† R σz R† = −i σz σx = −σy. (5.8)

R switches σx and σz. Similarly,

P σx P† = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix} = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = σy (5.9)

P σz P† = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = σz. (5.10)

P switches σx and σy. These two operations generate all possible permutations of σx, σy, and σz. Operators in N(G1) can be viewed as transformations of the Bloch sphere which permute the coordinate axes.

The third generator of N(G) is the controlled-NOT. It acts on two qubits, and therefore permutes the elements of G2. Its action is as follows:

σx ⊗ I → σx ⊗ σx,  I ⊗ σx → I ⊗ σx,
σz ⊗ I → σz ⊗ I,   I ⊗ σz → σz ⊗ σz. (5.11)

Amplitudes are copied forwards and phases are copied backwards, as I described before. In the same way, any element of N(G) gives a permutation of G.
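These conjugation relations are easy to confirm numerically; a short sketch checking (5.6)–(5.11), with the CNOT written as a 4 × 4 matrix whose first qubit is the control (an illustrative convention).

```python
import numpy as np

I  = np.eye(2)
X  = np.array([[0, 1], [1, 0]])
Y  = np.array([[0, -1j], [1j, 0]])
Z  = np.array([[1, 0], [0, -1]])
R  = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
P  = np.array([[1, 0], [0, 1j]])
CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])

conj = lambda U, A: U @ A @ U.conj().T

assert np.allclose(conj(R, X), Z)                              # (5.6)
assert np.allclose(conj(R, Z), X)                              # (5.7)
assert np.allclose(conj(R, Y), -Y)                             # (5.8)
assert np.allclose(conj(P, X), Y)                              # (5.9)
assert np.allclose(conj(P, Z), Z)                              # (5.10)
assert np.allclose(conj(CNOT, np.kron(X, I)), np.kron(X, X))   # (5.11)
assert np.allclose(conj(CNOT, np.kron(I, Z)), np.kron(Z, Z))
print("all conjugation relations check out")
```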
These permutations of G always preserve the group structure of G, so they are actually automorphisms of G. Given an automorphism of G, we can always find an element of N(G) that produces that automorphism, modulo the automorphism iI → −iI. We can find the matrix of a transformation U corresponding to a given automorphism by determining the action of U on basis states. |0〉 is an eigenvector of σz, so it is mapped to an eigenvector of U σz U†. |1〉 = σx|0〉, so it becomes (U σx U†) U|0〉. For instance, the automorphism T : σx → σy, σz → σx maps |0〉 → (1/√2)(|0〉 + |1〉) and |1〉 → σy T|0〉 = −(i/√2)(|0〉 − |1〉). Thus, the matrix of T is

T = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix}. (5.12)

[Figure 5.1: Network to swap |α〉 and |β〉 using ancilla |γ〉.]

Another useful operation is to swap two qubits in a block. This is not a transversal operation, and it is not fault-tolerant by itself: an error during the swap gate can produce errors in the two qubits to be swapped, producing two errors in the same block. However, we do not need to worry about error propagation, because the swap gate swaps the errors along with the correct states. Therefore, to get a fault-tolerant swap gate, we only need a circuit that swaps two qubits without directly interacting them. Such a circuit is given in figure 5.1.

In order to produce a valid fault-tolerant encoded operation, we may combine swap operations within a block of an error-correcting code with transversal operations on the block to get something that permutes the elements of the stabilizer. The set of such operations is the automorphism group A(S) of S. Codes with a large automorphism group are therefore better suited for performing fault-tolerant operations. For instance, the seven-qubit code of table 3.4 is invariant under any single-qubit operation in N(G) performed bitwise. There are also a number of permutations of its qubits in the automorphism group, although they turn out to be unimportant in this case. The five-qubit code of table 3.2 has fewer automorphisms. The only transversal operations in its automorphism group are

T : σx → σy, σz → σx (5.13)

and T². Note that in the language of GF(4) codes, the operation T corresponds to multiplication by ω²; therefore it is a valid transversal operation for any linear quantum code. The five-qubit code is also invariant under cyclic permutations of its five component qubits, although these operations turn out to leave the encoded data unchanged, so they are not very useful.

Once we have a possible encoded operation U, we must discover what it actually does to the encoded states. We can do this by analyzing the behavior of N(S)/S under the operation. Because U is in N(G) ∩ N_{U(n)}(S), it has a natural action on N(S)/S ≅ Gk. This action on Gk is equivalent to some operation in N(Gk), and this is the operation performed on the k encoded qubits. For instance, the Hadamard transform R applied bitwise to the seven-qubit code switches X = σx5σx6σx7 and Z = σz5σz6σz7. This is just R applied to the G1 group for the single encoded qubit. In the same way, P applied bitwise to the seven-qubit code converts X into −Y (Y is the encoded σy), and thus performs an encoded P†. The minus sign for Y occurs because Y = −iXZ = −i(i³)σy5σy6σy7 = −σy5σy6σy7. For the five-qubit code, X = σx1σx2σx3σx4σx5 and Z = σz1σz2σz3σz4σz5, so T applied bitwise transforms X to Y and Z to X, and therefore acts as an encoded T operation.
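A two-line numpy confirmation that the matrix (5.12) really induces the automorphism (5.13), taking σx → σy and σz → σx.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
T = np.array([[1, -1j], [1, 1j]]) / np.sqrt(2)   # eq (5.12)

assert np.allclose(T @ X @ T.conj().T, Y)        # sigma_x -> sigma_y
assert np.allclose(T @ Z @ T.conj().T, X)        # sigma_z -> sigma_x
print("T implements the cyclic permutation (5.13)")
```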
For both the five- and seven-qubit codes, the qubit permutations in A(S) produce the identity operation on the encoded qubits. For a block encoding k qubits, an operation in the automorphism group might perform any multiple-qubit operation in N(Gk).

We can also perform multiple-qubit operations interacting two blocks by applying multiple-qubit operations transversally between the blocks. For instance, we can apply a CNOT from the ith qubit in the first block to the ith qubit in the second block. We can interact r blocks by transversally applying any operation in N(Gr). We can even apply different operations to different qubits within a block. However, we should not also apply swaps within a block unless we can perform error correction afterwards, since otherwise errors could spread from one qubit in a block to the corresponding qubit in a different block, then back to a different qubit in the first block, producing two errors in the first block.

The stabilizer of two blocks of a code is just S × S, so a valid operation must permute the elements of this group. For instance, bitwise CNOT applied between two blocks of the seven-qubit code is a valid operation, because

Mi ⊗ I → Mi ⊗ Mi (i = 1, 2, 3),  Mi ⊗ I → Mi ⊗ I (i = 4, 5, 6), (5.14)
I ⊗ Mi → I ⊗ Mi (i = 1, 2, 3),   I ⊗ Mi → Mi ⊗ Mi (i = 4, 5, 6).

Since this also takes

X ⊗ I → X ⊗ X,  I ⊗ X → I ⊗ X, (5.15)
Z ⊗ I → Z ⊗ I,  I ⊗ Z → Z ⊗ Z,

it acts as a CNOT on the encoded qubits. On the other hand, bitwise CNOT applied to the five-qubit code is not a valid operation, because, for instance, M1 = σx ⊗ σz ⊗ σz ⊗ σx ⊗ I, so M1 ⊗ I → M1 ⊗ (σx ⊗ I ⊗ I ⊗ σx ⊗ I), and σx ⊗ I ⊗ I ⊗ σx ⊗ I is not in S.

The CSS codes are those for which the stabilizer is the direct product of a part whose elements are tensor products of σx's and a part whose elements are tensor products of σz's. We can also pick the X and Z operators to be tensor products of σx's and σz's, respectively. This means that, just as with the seven-qubit code, bitwise CNOT is a valid operation for any CSS code, and performs a CNOT between corresponding encoded qubits in the two blocks. Conversely, if bitwise CNOT is a valid operation for a code, the code is a CSS code: let M = XZ be an arbitrary element of the stabilizer S, where X is a tensor product of σx's and Z is a tensor product of σz's. Then, under CNOT, M ⊗ I → M ⊗ X and I ⊗ M → Z ⊗ M. Thus, X and Z are themselves elements of S. The stabilizer therefore breaks up into a σx part and a σz part, which means it is a CSS code.
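The five-qubit counterexample can be checked in the binary symplectic representation, a sketch assuming the cyclic generators XZZXI, IXZZX, XIXZZ, ZXIXZ (the first matches the M1 quoted above; the cyclic form is an assumption consistent with the code's cyclic symmetry). Signs are ignored, since membership up to sign already decides the question.

```python
import numpy as np
from itertools import product

# A Pauli string as a binary (x|z) vector; stabilizer elements are GF(2)
# combinations of the generators' vectors (overall signs ignored here).
def pauli_vec(s):
    x = [1 if c in "XY" else 0 for c in s]
    z = [1 if c in "ZY" else 0 for c in s]
    return np.array(x + z, dtype=int)

gens = [pauli_vec(s) for s in ["XZZXI", "IXZZX", "XIXZZ", "ZXIXZ"]]

# Bitwise CNOT sends M1 (x) I to M1 (x) (X I I X I); is XIIXI in S?
target = tuple(pauli_vec("XIIXI"))
span = {tuple(sum(c * g for c, g in zip(coeffs, gens)) % 2)
        for coeffs in product([0, 1], repeat=4)}
print(target in span)   # False: so bitwise CNOT is not valid here
```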
5.4 The Effects of Measurements

We are not strictly limited to unitary operations in a quantum computation. We can also make measurements, which correspond to randomly applying one of a set of complete projection operators, usually labeled by the eigenvalues of a Hermitian operator. Based on the classical measurement result, we can then apply one of a number of possible operators to the resulting quantum state. This process can be converted into a purely quantum process, but in the idealization where classical computation is error-free while quantum computation is not, there is a distinct advantage in converting as much as possible to classical information. Even in a more realistic situation, classical computation is likely to be much more reliable than quantum computation, and classical error-correction methods are simpler than quantum ones. In addition, we may know how to perform operations conditioned on classical information fault-tolerantly even when we do not know how to perform the corresponding quantum operations fault-tolerantly. As we shall see, ancilla preparation and measurement are powerful tools for expanding the available set of fault-tolerant quantum operations.

Suppose we wish to measure an operator A, with A² = I. Measuring A for a state |ψ〉 will typically give one of two results, |ψ+〉 or |ψ−〉, corresponding to the two eigenvalues ±1 of A. In order to keep the description of our algorithm under control, we would like a way to convert |ψ−〉 to |ψ+〉 for any possible input state |ψ〉. This will not be possible unless we know something more about the possible states |ψ〉. Suppose we know that there is a unitary operator M with M|ψ〉 = |ψ〉 and {M, A} = 0. Then

M†|ψ−〉 = M† (1/2)(I − A)|ψ〉 = M† (1/2)(I − A) M|ψ〉 = M†M (1/2)(I + A)|ψ〉 = (1/2)(I + A)|ψ〉 = |ψ+〉. (5.16)

If we make the measurement, then apply M† if the result is −1 and do nothing if the result is +1, we have applied the nonunitary operator P+ = (1/2)(I + A), and we can continue the computation with the assurance that the computer is in the state |ψ+〉. In order to perform this nonunitary operator, we have taken advantage of the fact that |ψ〉 is a +1-eigenstate of M. This trick cannot be used if we do not know anything about the state |ψ〉.

We know how to measure operators in G fault-tolerantly. If we prepare an ancilla in a known state and apply a known set of operations in N(G), the resulting state can be at least partially described by a stabilizer S. This stabilizer is not the stabilizer of a quantum error-correcting code, but simply a way of describing the information we have about the state. In many of the applications below, there will be one stabilizer for the error-correcting code and another which describes the restricted state of the data due to our preparation of the ancilla in a known state. We can fault-tolerantly measure (fault-tolerantly with respect to the error-correcting code) an operator A ∈ G that anticommutes with some M ∈ S (the stabilizer describing the data) and correct the result as above to perform the operation P+. Any operators in S that commute with A will still fix the state of the system after the measurement and correction. Hereafter, in the context of performing operations on encoded states, I will usually speak of "measuring" A when I mean applying P+ for A.
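A numpy illustration of (5.16) on a single Bell pair: the state is fixed by M = σz ⊗ σz, we "measure" A = σx ⊗ I by projecting onto the −1 outcome, and applying M† returns the +1 outcome P+|ψ〉. The choice of A and M is an illustrative assumption.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)     # |00> + |11>, fixed by M
M = np.kron(Z, Z)
A = np.kron(X, I2)                            # anticommutes with M

P_plus  = (np.eye(4) + A) / 2
P_minus = (np.eye(4) - A) / 2

assert np.allclose(M @ psi, psi)              # M|psi> = |psi>
assert np.allclose(M @ A, -A @ M)             # {M, A} = 0

# Got the -1 outcome: correct it with M^dagger (here M itself).
corrected = M.conj().T @ (P_minus @ psi)
assert np.allclose(corrected, P_plus @ psi)   # eq (5.16)
print("P- outcome corrected to P+ using M")
```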
If A ∈ S, there is no need to measure A to perform P+, since the state is already an eigenstate of A with eigenvalue +1. If A commutes with everything in S but is not in S itself, then measuring A will give us information about which state fixed by S we had; however, we do not have an M that anticommutes with A, so we cannot fix P− to P+. If A anticommutes with some element of S, say M1, then we can choose the remaining n − k − 1 generators of S to commute with A (if Mi anticommutes with A, then M1Mi commutes with A). The stabilizer S′ after applying P+ will then be generated by A and M2, . . . , Mn−k.

We can better understand the operator P+ by looking at the transformation it induces from N(S)/S to N(S′)/S′. Half of the representatives of each coset in N(S)/S commute with A and half anticommute, since of N and M1N, one commutes and one anticommutes. If N ∈ N(S) commutes with A, its eigenvectors and eigenvalues are left unchanged by measuring A. Therefore, the coset represented by N in N(S′)/S′ acts on P+|ψ〉 in the same way as the coset in N(S)/S acted on |ψ〉. Any representative of the same coset in N(S)/S will produce the same coset in N(S′)/S′ as long as it commutes with A. We therefore have a map from N(S)/S ≅ G to N(S′)/S′ ≅ G, which is an operation in N(G). Using selected ancilla preparation and existing transversal operations, we can use this process to create new transversal operations.

A nice example of this formalism, which can be applied independently of quantum error correction, is a description of quantum teleportation. We start with three qubits, the first in an arbitrary state |ψ〉, the other two in the Bell state |00〉 + |11〉. This state can be described by the stabilizer S1 generated by I ⊗ σx ⊗ σx and I ⊗ σz ⊗ σz. The cosets of N(S1)/S1 can be represented by X = σx ⊗ I ⊗ I and Z = σz ⊗ I ⊗ I. The third qubit is far away, so we cannot perform any quantum gates interacting it with the other two qubits. However, we can make measurements on the first two qubits and send the information to be used to perform conditional quantum gates on just the third qubit.

First, we apply a CNOT from the first qubit to the second qubit. This produces the stabilizer S2, generated by I ⊗ σx ⊗ σx and σz ⊗ σz ⊗ σz, with X = σx ⊗ σx ⊗ I and Z = σz ⊗ I ⊗ I. Now measure σx for the first qubit. This produces the stabilizer S3, generated by σx ⊗ I ⊗ I and I ⊗ σx ⊗ σx. The coset representative σx ⊗ σx ⊗ I commutes with the measured operator, so it still represents the new coset; multiplying by the first generator of S3 gives another coset representative of X in N(S3)/S3, namely X = I ⊗ σx ⊗ I. σz ⊗ I ⊗ I does not commute with the measured operator, but (σz ⊗ σz ⊗ σz)(σz ⊗ I ⊗ I) = I ⊗ σz ⊗ σz represents the same coset in N(S2)/S2 and does commute with the measured operator, so it represents the Z coset in N(S3)/S3. The measurement potentially requires an application of σz ⊗ σz ⊗ σz if it is necessary to correct P−. This provides one of the sets of conditional operations used in quantum teleportation.

Now we measure σz for the second qubit. This produces the stabilizer S4, generated by σx ⊗ I ⊗ I and I ⊗ σz ⊗ I. This time, the representative of Z commutes with the measured operator, so Z for N(S4)/S4 is I ⊗ σz ⊗ σz ≅ I ⊗ I ⊗ σz. I ⊗ σx ⊗ I does not commute, but (I ⊗ σx ⊗ σx)(I ⊗ σx ⊗ I) = I ⊗ I ⊗ σx does, so in N(S4)/S4, X = I ⊗ I ⊗ σx. The operation to correct P− this time is I ⊗ σx ⊗ σx. This provides the second set of conditional operations in teleportation. Note that S4 completely determines the state of the first two qubits and does not restrict the state of the third qubit at all. In fact, the X operator in N(S1)/S1, which started as σx for the first qubit, has been transformed into σx for the third qubit, and Z, which began as σz for the first qubit, has become σz for the third qubit. This means the final state is (|0〉 + |1〉) ⊗ |0〉 ⊗ |ψ〉, and we have teleported the state as desired.

After we measure σx, σy, or σz for a qubit, we have completely determined the state of that qubit, so its contribution to the stabilizer will just be the operator measured, and it will not contribute to standard representatives of the cosets in N(S′)/S′ at all. Therefore, when describing how to produce new transversal operations, I will drop qubits from the notation after they have been measured.
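The stabilizer bookkeeping above can be cross-checked by a brute-force statevector simulation: for every pair of measurement outcomes, applying the derived corrections σz ⊗ σz ⊗ σz and I ⊗ σx ⊗ σx leaves the third qubit in |ψ〉 and the first two in |+〉|0〉. The amplitudes below are illustrative.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
kron = lambda *ops: reduce(np.kron, ops)

alpha, beta = 0.6, 0.8
psi = np.array([alpha, beta])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
state = np.kron(psi, bell)                       # qubits 1, 2, 3

CNOT12 = kron(np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]]), I2)
state = CNOT12 @ state

plus  = np.array([1,  1]) / np.sqrt(2)           # sigma_x eigenstates
minus = np.array([1, -1]) / np.sqrt(2)
zero, one = np.array([1, 0]), np.array([0, 1])   # sigma_z eigenstates

for v1, fix1 in [(plus, np.eye(8)), (minus, kron(Z, Z, Z))]:
    for v2, fix2 in [(zero, np.eye(8)), (one, kron(I2, X, X))]:
        P = kron(np.outer(v1, v1), np.outer(v2, v2), I2)  # measure q1, q2
        out = fix2 @ fix1 @ (P @ state)
        out /= np.linalg.norm(out)
        assert np.allclose(out, kron(plus, zero, psi))
print("all four outcomes teleport |psi> to qubit 3")
```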
5.5 Producing New Operations in N(G)

The group N(G) can be generated by just the operations R, P, and CNOT applied to arbitrary qubits and pairs of qubits. I will now show that, by using measurements, we can in fact generate N(G) using just the CNOT. Then I will demonstrate that for most known codes, we can apply an encoded CNOT transversally.

First, note that by preparing an ancilla in an arbitrary state and measuring σx, σy, or σz, we can always prepare that ancilla qubit in the +1 eigenstate of any of these three operators. Also, there are only six interesting operators in N(G1): I, R, P (and P†), Q (and Q†), T, and T² (and T† and (T²)†), where Q = P†RP switches σy and σz, and T = RP† is the cyclic permutation of σx, σy, and σz. I count this as only six operators, since the adjoints produce the same permutations, but with different signs distributed among σx, σy, and σz; this effect can also be produced by applying σx, σy, and σz themselves. Any two non-identity operators in this set, other than the pair T and T², suffice to generate all of them.

Suppose we have an arbitrary single-qubit state |ψ〉. Let us prepare an ancilla qubit in the +1 eigenstate of σz, then apply a CNOT from the data qubit to the ancilla qubit. This produces the stabilizer σz ⊗ σz, with X = σx ⊗ σx and Z = σz ⊗ I. Now measure σy for the ancilla qubit and discard the ancilla. This leaves the first qubit with X = −σy and Z = σz, which means we have applied P†.

Now prepare the ancilla in the +1 eigenstate of σx and apply a CNOT from the ancilla qubit to the data qubit. This produces the stabilizer σx ⊗ σx, with X = σx ⊗ I and Z = σz ⊗ σz. Measure σy for the ancilla and discard it, leaving X = σx and Z = −σy. We have applied Q†. Along with P from above, this suffices to generate N(G1), and therefore N(Gn) for any n.

We can also produce T directly by preparing the ancilla in the +1 eigenstate of σy and applying a CNOT from the ancilla qubit to the data qubit. This produces a stabilizer of σx ⊗ σy, with X = σx ⊗ I and Z = σz ⊗ σz. Measure σy for the data qubit and discard it, leaving X = σy and Z = σx, both on the former ancilla qubit. The net result is to apply T, but to move the data from the data qubit to what began as the ancilla qubit.

Now let us turn our attention to transversal operations on quantum error-correcting stabilizer codes. Consider the following four-qubit transformation:

σx ⊗ I ⊗ I ⊗ I → σx ⊗ σx ⊗ σx ⊗ I
I ⊗ σx ⊗ I ⊗ I → I ⊗ σx ⊗ σx ⊗ σx
I ⊗ I ⊗ σx ⊗ I → σx ⊗ I ⊗ σx ⊗ σx
I ⊗ I ⊗ I ⊗ σx → σx ⊗ σx ⊗ I ⊗ σx (5.17)
σz ⊗ I ⊗ I ⊗ I → σz ⊗ σz ⊗ σz ⊗ I
I ⊗ σz ⊗ I ⊗ I → I ⊗ σz ⊗ σz ⊗ σz
I ⊗ I ⊗ σz ⊗ I → σz ⊗ I ⊗ σz ⊗ σz
I ⊗ I ⊗ I ⊗ σz → σz ⊗ σz ⊗ I ⊗ σz.

Given an element M of an arbitrary stabilizer, this operation applied bitwise maps

M ⊗ I ⊗ I ⊗ I → M ⊗ M ⊗ M ⊗ I
I ⊗ M ⊗ I ⊗ I → I ⊗ M ⊗ M ⊗ M (5.18)
I ⊗ I ⊗ M ⊗ I → M ⊗ I ⊗ M ⊗ M
I ⊗ I ⊗ I ⊗ M → M ⊗ M ⊗ I ⊗ M.

Each of these images is in the group S × S × S × S, so this is a valid transversal operation for any stabilizer code. Because of (5.18), this operation just applies itself to the encoded qubits. When the code has multiple qubits per block, (5.17) applies itself to all of the corresponding sets of encoded qubits.
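The transformation (5.17) can be summarized by a 4 × 4 binary matrix recording which qubits acquire σx in the image of each σxi (the σz images follow the same pattern). As discussed further below, validity for an arbitrary stabilizer code amounts to this matrix being orthogonal over Z2, which the following sketch confirms.

```python
import numpy as np

# Rows: which qubits get a sigma_x in the image of sigma_x on qubit i,
# read off from (5.17); the sigma_z images use the same matrix.
M = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1]])

assert np.array_equal(M @ M.T % 2, np.eye(4, dtype=int))  # M in O(4, Z2)
print("(5.17) is described by an orthogonal matrix over Z2")
```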
This is very useful, since if we have two logical qubits and prepare two more ancilla logical qubits, each in the +1 eigenstate of σz, and then apply (5.17) to these four qubits, we get a stabilizer with generators σz ⊗ I ⊗ σz ⊗ σz and σz ⊗ σz ⊗ I ⊗ σz, and

X1 = σx ⊗ σx ⊗ σx ⊗ I,  X2 = I ⊗ σx ⊗ σx ⊗ σx, (5.19)
Z1 = σz ⊗ σz ⊗ σz ⊗ I,  Z2 = I ⊗ σz ⊗ σz ⊗ σz.

Measure σx for both ancilla qubits and discard them. This leaves us with

X1 = σx ⊗ σx,  X2 = I ⊗ σx, (5.20)
Z1 = σz ⊗ I,   Z2 = σz ⊗ σz,

which we recognize as a CNOT from the first data qubit to the second. Since the CNOT suffices to get every operation in N(G), we can therefore perform any such operation transversally for any stabilizer code encoding a single qubit.

There are other operations like (5.17) that work for any stabilizer code. The condition they must satisfy is that σx tensored with any number of copies of the identity map to a tensor product of σx's and I's, and that σz in the same position map to the same tensor product of σz's and I's. Any such automorphism can therefore be fully described by an n × n binary matrix (for an n-qubit operation). The image of σxi must commute with the image of σzj for i ≠ j, so the binary dot product of two different rows of the matrix must be 0. Also, the image of σxi must anticommute with the image of σzi, so the binary dot product of any row with itself must be 1. These two conditions combine to say that the matrix must be an element of O(n, Z2), the orthogonal group over Z2. The smallest n for which this group has an element other than a permutation is n = 4. If we were working with d-dimensional states instead of qubits, we would instead need a matrix in O(n, Zd). Note that the straightforward generalization of (5.18) is in O(n, Zd) for n = d + 2.

Codes which have single-qubit transversal operations other than the identity will in general have a larger available space of multiple-qubit operations. Any n-qubit automorphism that maps σx to the tensor product of I with Ui(σx) and σz to the same tensor product of I with Ui(σz) will be an automorphism of n copies of S if Ui is an automorphism of S for all i. Note that Ui may be the identity. It may also be possible for Ui to not be an automorphism of G1 at all, although this will depend on the code. For instance, for a CSS code, we can have Ui(σx) = σx, Ui(σz) = I, or Ui(σx) = I, Ui(σz) = σz.

5.6 Codes With Multiple Encoded Qubits

For codes encoding more than one qubit per block, we have more work to do. We only know how to perform (5.17) between corresponding qubits in different blocks, and furthermore, we must perform the operation between all the encoded qubits in both blocks. The solution to the second problem is straightforward. If we prepare an ancilla qubit in the +1 eigenstate of σx and apply a CNOT from the ancilla to a single data qubit, we get the stabilizer σx ⊗ σx, with X = σx ⊗ I and Z = σz ⊗ σz. Then if we measure σz for the data qubit, we are left with X = σx and Z = σz, both for the ancilla qubit. We have transferred the data qubit to the ancilla qubit without changing it. On the other hand, if we had prepared the ancilla qubit in the +1 eigenstate of σz and applied the CNOT, nothing in the data qubit would have changed. We can use this fact to switch individual encoded qubits out of a storage block into a temporary holding block.
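A two-qubit numpy sketch of the transfer just described: the ancilla is prepared in the +1 eigenstate of σx, a CNOT is applied from the ancilla to the data qubit, and a σz measurement of the data qubit with outcome +1 leaves the ancilla holding the data state (the −1 outcome differs by a correctable bit flip). The amplitudes are illustrative.

```python
import numpy as np

alpha, beta = 0.6, 0.8
plus = np.array([1, 1]) / np.sqrt(2)
state = np.kron([alpha, beta], plus)        # data qubit (x) ancilla |+>

# CNOT with the ancilla (second qubit) as control, data as target
CNOT_ad = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]])
state = CNOT_ad @ state

# sigma_z measurement of the data qubit, outcome +1: project onto |0>
data0 = state[0:2]                          # component with data = |0>
data0 /= np.linalg.norm(data0)

assert np.allclose(data0, [alpha, beta])    # the ancilla now holds the data
print("data qubit transferred to the ancilla")
```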
Prepare the holding block with all the encoded qubits in the +1 eigenstate of σz, except the jth encoded qubit, which is in the +1 eigenstate of σx. Then use (5.17) to apply a CNOT from the holding block to the storage block and measure σz for the jth encoded qubit in the storage block. This switches the jth encoded qubit from the storage block to the holding block while leaving the other qubits in the storage block undisturbed. The jth encoded qubit in the storage block is left in the state |0〉, as are all the encoded qubits in the holding block except the jth one.

To perform operations between just the jth encoded qubits in two (or more) different blocks while leaving the other qubits in those blocks alone, we can switch both jth qubits into new, empty blocks, as above, and then interact them. If necessary, we again clear all but the jth encoded qubit in each temporary block by measuring σz. Then we can switch the qubits back into the initial blocks by applying a CNOT from the holding block to the appropriate storage block and measuring Xj for the holding block.

This leaves the questions of interacting the jth encoded qubit in one block with the ith encoded qubit in another block, and of interacting two encoded qubits in the same block. We can partially solve either problem by switching the two qubits to be interacted into separate holding blocks. If we know how to swap the jth encoded qubit with the first encoded qubit, we can then swap both qubits into the first position, interact them as desired, then swap them back to their initial positions and switch them back to their storage block or blocks.

One way to swap qubits within a block is to perform some nontrivial action on a single block. For a code with trivial automorphism group, this will not exist; however, almost any automorphism will suffice to swap encoded qubits as desired, because there are so few two-qubit operations in N(G). Any automorphism of the code will produce some element of N(Gk) on the k encoded qubits, typically (although certainly not always) interacting all of them. If we perform some measurement on all of the encoded qubits in the block except the first and the jth, we are left with a two-qubit operation between those two encoded qubits. We can always perform single-qubit operations on any encoded qubit in a block by switching the qubit into a fresh block, applying the operation to every encoded qubit in the new block, clearing unnecessary qubits, and switching the qubit back to the first block. Using this freedom, any operation in N(G2) can be transformed to map σx ⊗ I to one of σx ⊗ I, σx ⊗ σx, or I ⊗ σx. There is still a remaining freedom to switch σy and σz on either qubit, and we may also switch either with σx for any qubit where the image of σx ⊗ I acts as the identity. We treat the three possibilities as separate cases:

• σx ⊗ I → σx ⊗ I: The operation preserves the group structure of G2, so the image of I ⊗ σx must commute with σx ⊗ I. Up to single-qubit operations, the possibilities are:

1. I ⊗ σx: The image of σz ⊗ I must be either σz ⊗ I or σz ⊗ σx. In the first case, the image of I ⊗ σz is I ⊗ σz and the operation is the identity. In the second case, the image of I ⊗ σz must be σx ⊗ σz; if we apply R to the first qubit before the operation and again after it, this produces a CNOT from the first qubit to the second qubit.

2. σx ⊗ σx: The image of σz ⊗ I must be σz ⊗ σz, and the image of I ⊗ σz may be either I ⊗ σz or σx ⊗ σy.
If it is I ⊗ σz, the operation is exactly a CNOT from the second qubit to the first. If it is σx ⊗ σy, we can again get a CNOT from the second qubit to the first by simply applying Q to the second qubit, followed by the operation.

• σx ⊗ I → I ⊗ σx: This case is related to the first one by simply swapping the two qubits. Therefore, the possibilities can be reduced to a simple swap, or a CNOT either way followed by a swap.

• σx ⊗ I → σx ⊗ σx: Now there are three possibilities for the image of I ⊗ σx: I ⊗ σx again, σx ⊗ I, or σz ⊗ σz.

1. I ⊗ σx: The image of I ⊗ σz must be σz ⊗ σz. The image of σz ⊗ I may be either σz ⊗ I or σy ⊗ σx. As with case two above, if it is σz ⊗ I, this is a CNOT from the first qubit to the second; if it is σy ⊗ σx, we can apply Q to the first qubit and then this operation to get a CNOT from the first qubit to the second.

2. σx ⊗ I: This case can be produced from the previous one by swapping the two qubits. Thus, the operation can be converted into a CNOT from the first qubit to the second followed by a swap.

3. σz ⊗ σz: In this case, the image of σz ⊗ I can be σz ⊗ I, I ⊗ σz, σy ⊗ σx, or σx ⊗ σy. If the image of σz ⊗ I is σz ⊗ I, the image of I ⊗ σz must be I ⊗ σx or σz ⊗ σy. If it is I ⊗ σx and we apply R to the second qubit and then this operation, it performs a CNOT from the first qubit to the second. If it is σz ⊗ σy, we can apply Tσz to the second qubit, followed by the operation, in order to get a CNOT from the first qubit to the second. If the image of σz ⊗ I is I ⊗ σz, we can get it from the last case by swapping the qubits, so it can be reduced to a CNOT from the first qubit to the second followed by a swap. If the image of σz ⊗ I is σy ⊗ σx, then the image of I ⊗ σz may again be either I ⊗ σx or σz ⊗ σy. If it is I ⊗ σx, we can perform Q on the first qubit and R on the second qubit, followed by the two-qubit operation; this produces a CNOT from the first qubit to the second. If it is σz ⊗ σy, we can perform Q on the first qubit and Tσz on the second qubit, followed by the two-qubit operation; this again produces a CNOT from the first qubit to the second. Finally, if the image of σz ⊗ I is σx ⊗ σy, we can produce the previous case by applying a swap, so the two-qubit operation can be converted to a CNOT from the first qubit to the second followed by a swap.

Also, note that R applied to both qubits, followed by a CNOT in one direction, followed by R on both qubits, produces a CNOT in the other direction. Therefore, up to application of single-qubit operations, the only possible two-qubit operations in N(G) are the identity, a CNOT, a swap, or a CNOT followed by a swap.

[Figure 5.2: Network to swap two qubits using CNOT.]

We can make a swap out of three CNOTs using the simple network of figure 5.2 (checked numerically in the sketch below). We cannot make a general swap out of a CNOT followed by a swap; however, if the control qubit of the CNOT begins in the state |0〉, the operation does swap the two qubits. This is all that is necessary to get all of N(G), since we only need to move a single data qubit around within an otherwise empty block.
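A numpy check that the three-CNOT network of figure 5.2 equals the swap gate.

```python
import numpy as np

CNOT12 = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]])  # q1 controls
CNOT21 = np.array([[1,0,0,0],[0,0,0,1],[0,0,1,0],[0,1,0,0]])  # q2 controls
SWAP   = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]])

assert np.array_equal(CNOT12 @ CNOT21 @ CNOT12, SWAP)
print("three alternating CNOTs implement a swap")
```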
Therefore, we will need to make a number of EPR pairs and purify good ones using an entanglement purification CHAPTER 5. FAULT-TOLERANT COMPUTATION 52 protocol (EPP) [17, 42]. We can interact corresponding qubits in the EPR pair using operations in N (G), which is all that is necessary. For instance, we could make five EPR pairs and use the one-way EPP derived from the five-qubit code to purify a single good EPR pair. It would take two independent errors to get an error in this pair. An easier way to make the EPR pair is to start with the +1 eigenstate of both Z1 and Zj , then to measure X1Xj , which is an operator in N (S) just like any other. This leaves the ancilla block in the +1 eigenstate of Z1Zj and X1Xj , which is just an EPR pair. Once we have a reliable EPR pair, the teleportation process requires only operations in N (G) between corresponding encoded qubits. This allows us to move the jth encoded qubit in one otherwise empty block to the first encoded qubit in the block that previously held the EPR pair. This allows us to do any operation in N (G) for any stabilizer code. Essentially the same procedures will work when the basic unit is the qudit instead of the qubit . 5.7 The Toffoli Gate The group N (G) is insufficient to allow universal quantum computation. In fact, Knill has shown that a quantum computer using only elements from N (G)and measurements can be simulated efficiently on a classical computer. The argument follows easily from the results of the preceding sections. If we begin with a state initialized to |0 · · · 0〉, the stabilizer is σz1, σ z2, . . . . Each operation in N (G) produces a well-defined transformation of the stabilizer, which can be classically tracked efficiently. Any measurement will also transform the stabilizer in a well-defined way, which is again easy to keep track of on a classical computer. Therefore, we can store and evolve complete information on the state of the quantum computer with only polynomial classical overhead. In order to perform truly universal quantum computation, even a single gate outside of N (G) can be sufficient. For instance, the Toffoli gate (a three-qubit gate which flips the third qubit iff both of the first two qubits are |1〉) along with N (G) suffices for universal computation. Shor gave an implementation of the Toffoli gate which can be easily adapted to any code allowing N (G). Since this is any stabilizer code, we can do universal computation for any stabilizer code. Note that there are a number of other gates outside N (G) that we could add to get a universal set of gates (such as the single-qubit π/ 8 rotation), and for some codes, it may be easier to perform these gates than the Toffoli gate . However, I will just discuss the implementation of the Toffoli gate. The Toffoli gate can be expanded using G as a basis as follows: 1 4 (3 I + σz1 + σz2 − σz1σz2 + ( I − σz1)( I − σz2)σx3) . (5.21) Applying the Toffoli gate to a state therefore produces the following transfor-mation on the elements of G3: σx1 → 1 16 (3 I + σz1 + σz2 − σz1σz2 + ( I − σz1)( I − σz2)σx3)CHAPTER 5. FAULT-TOLERANT COMPUTATION 53 × (3 I − σz1 + σz2 + σz1σz2 + ( I + σz1)( I − σz2)σx3) σx1 = 1 2 (I + σz2 + ( I − σz2)σx3) σx1 σx2 → 1 2 (I + σz1 + ( I − σz1)σx3) σx2 σx3 → σx3 (5.22) σz1 → σz1 σz2 → σz2 σz3 → 1 16 (3 I + σz1 + σz2 − σz1σz2 + ( I − σz1)( I − σz2)σx3) × (3 I + σz1 + σz2 − σz1σz2 − (I − σz1)( I − σz2)σx3) σz3 = 1 2 (I + σz1 + ( I − σz1)σz2) σz3. 
This means σz1, σz2, and σx3 stay the same, σx1 becomes σx1 tensored with a CNOT from qubit two to qubit three, σx2 becomes σx2 tensored with a CNOT from qubit one to qubit three, and σz3 becomes σz3 tensored with a conditional sign on qubits one and two.

Suppose we can make the ancilla

|A〉 = (1/2) (|000〉 + |010〉 + |100〉 + |111〉). (5.23)

This state is fixed by the three operators

M1 = (1/2) (I + σz2 + (I − σz2)σx3) σx1
M2 = (1/2) (I + σz1 + (I − σz1)σx3) σx2 (5.24)
M3 = (1/2) (I + σz1 + (I − σz1)σz2) σz3.

Now suppose we have three data qubits (numbers four, five, and six) on which we wish to perform a Toffoli gate. We simply apply CNOTs from qubit one to qubit four, from qubit two to qubit five, and from qubit six to qubit three. This produces the following "stabilizer":

M′1 = (1/2) (I + σz2 + (I − σz2)σx3) σx1σx4
M′2 = (1/2) (I + σz1 + (I − σz1)σx3) σx2σx5 (5.25)
M′3 = (1/2) (I + σz1 + (I − σz1)σz2) σz3σz6.

Then measure σz4, σz5, and σx6 and discard qubits 4–6. As we can see, this produces the transformation (5.22) on the three data qubits while moving them to what were formerly the ancilla qubits. Note that correcting for measured eigenvalues of −1 will require applying M1, M2, or M3, which are not elements of G. They are, however, elements of N(G).

Therefore, in order to perform the Toffoli gate on encoded states, we must produce an encoded version of the ancilla |A〉; then we need only perform measurements and encoded operations in N(G) to produce the effect of a Toffoli gate. Below, I will assume the code encodes only one qubit per block. If it encodes more, we can still do the same thing by moving the qubits to be interacted into the first encoded qubit in otherwise empty blocks. The X and Z operators used to create the ancilla are then just X1 and Z1.

To produce the encoded ancilla |A〉, we start with the encoded version of the state |A〉 + |B〉, where

|B〉 = (1/2) (|001〉 + |011〉 + |101〉 + |110〉). (5.26)

Note that |B〉 is related to |A〉 by applying σx to the third qubit. Since

|A〉 + |B〉 = Σ_{a=000}^{111} |a〉 = (|0〉 + |1〉)⊗3, (5.27)

we can easily prepare it by measuring X for each block. Henceforth, |A〉 and |B〉 will denote the encoded versions of themselves.

Now we take an ancilla in a "cat" state |0 . . . 0〉 + |1 . . . 1〉, where the number of qubits in the cat state is equal to the number of qubits in a single block of the code. Then we perform an operation that takes

|0 . . . 0〉|A〉 → |0 . . . 0〉|A〉,  |1 . . . 1〉|A〉 → |1 . . . 1〉|A〉, (5.28)
|0 . . . 0〉|B〉 → |0 . . . 0〉|B〉,  |1 . . . 1〉|B〉 → −|1 . . . 1〉|B〉.

Then under (5.28),

(|0 . . . 0〉 + |1 . . . 1〉)(|A〉 + |B〉) → (|0 . . . 0〉 + |1 . . . 1〉)|A〉 + (|0 . . . 0〉 − |1 . . . 1〉)|B〉. (5.29)

If we measure σx ⊗ · · · ⊗ σx for the cat state and get +1, the rest of the ancilla is in the state |A〉; if we get −1, the rest of the ancilla is in the state |B〉.

One complication is that a single qubit error in the cat state can cause this measurement result to be wrong. Luckily,

(|0 . . . 0〉 + |1 . . . 1〉)|A〉 → (|0 . . . 0〉 + |1 . . . 1〉)|A〉 (5.30)
(|0 . . . 0〉 + |1 . . . 1〉)|B〉 → (|0 . . . 0〉 − |1 . . . 1〉)|B〉. (5.31)

Therefore, if we prepare another cat state and apply (5.28) again, we should again get +1 if the ancilla was actually in the state |A〉 after the first measurement and −1 if it was actually in the state |B〉. We can therefore reach any desired level of reliability for the ancilla state by repeating (5.28) a number of times. Finally, once we are confident we have either |A〉 or |B〉, we apply X to the third ancilla qubit if it is |B〉. This means we will always have prepared the state |A〉.
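A numpy check that |A〉 of (5.23) is fixed by the operators M1, M2, M3 of (5.24), and that |B〉 of (5.26) is a −1 eigenvector of M3.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
kron = lambda *ops: reduce(np.kron, ops)

I8 = np.eye(8)
Z1, Z2, Z3 = kron(Z, I2, I2), kron(I2, Z, I2), kron(I2, I2, Z)
X1, X2, X3 = kron(X, I2, I2), kron(I2, X, I2), kron(I2, I2, X)

M1 = (I8 + Z2 + (I8 - Z2) @ X3) @ X1 / 2          # eq (5.24)
M2 = (I8 + Z1 + (I8 - Z1) @ X3) @ X2 / 2
M3 = (I8 + Z1 + (I8 - Z1) @ Z2) @ Z3 / 2

A = np.zeros(8); A[[0b000, 0b010, 0b100, 0b111]] = 1 / 2   # eq (5.23)
B = X3 @ A                                                  # eq (5.26)

for M in (M1, M2, M3):
    assert np.allclose(M @ A, A)      # |A> is fixed by all three
assert np.allclose(M3 @ B, -B)        # |B> is a -1 eigenvector of M3
print("ancilla eigenvalue relations confirmed")
```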
To perform (5.28), we must apply the operation |A〉 → |A〉, |B〉 → −|B〉 if and only if the qubits of the cat state are |1 . . . 1〉; if the qubits of the cat state are |0 . . . 0〉, we do nothing to the rest of the ancilla. I will show that we can apply |A〉 → |A〉 and |B〉 → −|B〉 using a series of transversal operations and measurements. If we apply these operations and measurements conditioned on the corresponding qubit from the cat state being |1〉, then we have actually performed (5.28). Conditioning the operations on the cat state bit will generally involve using Toffoli gates and possibly other gates outside N(G), but they are all gates on single qubits rather than blocks. We assume we know how to perform universal computation on individual qubits, so these gates are available to us.

The state |A〉 is a +1-eigenvector of M3 from equation (5.24), and |B〉 is a −1-eigenvector of the same M3, so applying M3 does, in fact, transform |A〉 → |A〉 and |B〉 → −|B〉. M3 is just a conditional sign on the first two qubits (i.e., an overall sign of −1 iff both qubits are |1〉) times σz on the third qubit. Therefore it is in N(G) and can be performed transversally for any stabilizer code. Therefore, we can perform universal computation using any stabilizer code.

5.8 Construction of Gates in N(G)

In order to use the general fault-tolerant protocols, we need to apply three- or four-qubit gates. Suppose our basic gates are limited to one- and two-qubit gates. These gates are sufficient to give us any gate in N(G); I will now give an inductive construction for any gate in N(G) using one- and two-qubit gates. In section 5.6, I showed that any one- or two-qubit gate can be made using R, P, and CNOT. Suppose we can construct any n-qubit gate using one- and two-qubit gates, and let U be an (n + 1)-qubit gate. Using swaps and one-qubit gates, we can guarantee that

M = U σz1 U† = σx1 ⊗ M′ (5.32)

and

N = U σx1 U† = I ⊗ N′ or σz1 ⊗ N′. (5.33)

Note that {M, N} = 0. Suppose

U (|0〉 ⊗ |ψ〉) = |0〉 ⊗ |ψ1〉 + |1〉 ⊗ |ψ2〉, (5.34)

where |ψ〉, |ψ1〉, and |ψ2〉 are states of the last n qubits. The results of section 5.4 tell us that if we measure σz for the first qubit after applying U, and apply M† (which anticommutes with σz1) if the result is −1, we get |0〉 ⊗ |ψ1〉. This means that |ψ2〉 = M′|ψ1〉. Define U′ by U′|ψ〉 = |ψ1〉. Then

U (|0〉 ⊗ |ψ〉) = (I + M)(|0〉 ⊗ U′|ψ〉). (5.35)

Now,

U (|1〉 ⊗ |ψ〉) = U [(σx|0〉) ⊗ |ψ〉] (5.36)
             = N U (|0〉 ⊗ |ψ〉) (5.37)
             = N (I + M)(|0〉 ⊗ U′|ψ〉) (5.38)
             = (I − M) N (|0〉 ⊗ U′|ψ〉) (5.39)
             = (I − M)(|0〉 ⊗ N′U′|ψ〉). (5.40)

Therefore, if we first apply U′ to the last n qubits, followed by N′ on the last n qubits conditioned on the first qubit, followed by a Hadamard transform R on the first qubit, followed by M′ on the last n qubits conditioned on the first qubit, we have applied U:

|0〉 ⊗ |ψ〉 + |1〉 ⊗ |φ〉
→ |0〉 ⊗ U′|ψ〉 + |1〉 ⊗ U′|φ〉 (5.41)
→ |0〉 ⊗ U′|ψ〉 + |1〉 ⊗ N′U′|φ〉 (5.42)
→ (|0〉 + |1〉) ⊗ U′|ψ〉 + (|0〉 − |1〉) ⊗ N′U′|φ〉 (5.43)
→ (|0〉 ⊗ U′|ψ〉 + |1〉 ⊗ M′U′|ψ〉) + (|0〉 ⊗ N′U′|φ〉 − |1〉 ⊗ M′N′U′|φ〉) (5.44)
= [|0〉 ⊗ U′|ψ〉 + M(|0〉 ⊗ U′|ψ〉)] + [|0〉 ⊗ N′U′|φ〉 − M(|0〉 ⊗ N′U′|φ〉)] (5.45)
= (I + M)(|0〉 ⊗ U′|ψ〉) + (I − M)(|0〉 ⊗ N′U′|φ〉) (5.46)
= U(|0〉 ⊗ |ψ〉) + U(|1〉 ⊗ |φ〉) (5.47)
= U(|0〉 ⊗ |ψ〉 + |1〉 ⊗ |φ〉). (5.48)
U′ is an n-qubit gate in N(G), which, by the inductive hypothesis, we can perform using one- and two-qubit gates. Both M′ and N′ are in G, so applying them conditioned on the first qubit requires only two-qubit gates in N(G). Therefore, this construction allows us to perform any U in N(G) using only one- and two-qubit gates. The construction is summarized in figure 5.3.

[Figure 5.3: Recursive construction of gates in N(G): U′ on the last n qubits, then N′ conditioned on the first qubit, then R on the first qubit, then M′ conditioned on the first qubit.]

To get M and N into the correct form requires only identifying a single qubit on which M does not act as the identity and N acts differently from M. From there, a single one-qubit gate and a swap between that qubit and the first puts M and N in the desired form. It is not really necessary for the construction that the selected qubit be in the first position, so we can actually put M and N in the right form using just one one-qubit gate. We also need to perform R on that qubit in the middle of the operation. Applying M′ and N′ conditioned on the selected qubit uses up to 2n two-qubit gates. Therefore, this construction of U uses the gates in U′ plus up to two one-qubit gates and 2n two-qubit gates. Thus, by induction, an (n + 1)-qubit gate (n ≥ 2) uses up to 2(n − 2) one-qubit gates and

1 + Σ_{j=3}^{n+1} 2(j − 1) = 1 + (n + 2)(n − 1) = n² + n − 1 (5.49)

two-qubit gates.

Note that this construction can also be used for encoding data into a stabilizer code. The map U will take σxi → Xi and σzi → Zi (i = 1, . . . , k) for the k data qubits. The remaining n − k qubits start out as |0〉, so for i = k + 1, . . . , n, we map σzi → Mi−k, where Mj (j = 1, . . . , n − k) are generators of S. Any remaining freedom in the choice of the image of σxi for i = k + 1, . . . , n is unimportant. This produces an encoding for any stabilizer code using any X and Z operators in N(G). In some cases, it may be more efficient than the construction given in chapter 4, but its upper bound on the number of operations is higher.

5.9 Refining the Error Correction Algorithm

Since errors occur while we are measuring the error syndrome, we are inevitably led to a race between the errors that are constantly occurring and our ability to correct them. It is therefore desirable to perform error correction as efficiently as possible. In this section, I discuss a few ways of speeding up error correction.

One significant improvement is to do classical error correction on the syndrome bits. The most basic form of error correction, described in section 5.2, measures the eigenvalues of the n − k generators of S. If we treat these as classical bits, we can encode them using a classical [m, n − k, d′] linear code. The bits of the classical codeword will be linear combinations of the original syndrome bits, which means they will correspond to eigenvalues of products of the generators of the stabilizer. We therefore need only measure these m new elements of the stabilizer, and can then do classical error correction on the result to extract the actual (n − k)-bit syndrome. If there were fewer than d′ errors on the measured syndrome bits, we can still determine the real syndrome. This protects very well against ancilla errors that produce the wrong measurement result for a single syndrome bit. It protects less well against data errors that cause the syndrome to change in the middle of measurement, but there is a good chance it will warn us when such an error has occurred.
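As an illustration of coding the syndrome bits, here is a sketch assuming a 4-bit syndrome protected by the classical [7, 4, 3] Hamming code (an illustrative choice of [m, n − k, d′]): the seven measured bits correspond to eigenvalues of products of stabilizer generators, and a single faulty measured bit is corrected classically.

```python
import numpy as np

# Systematic [7,4,3] Hamming code: G = [I | P], H = [P^T | I].
P = np.array([[1,1,0],[1,0,1],[0,1,1],[1,1,1]])
G = np.hstack([np.eye(4, dtype=int), P])
H = np.hstack([P.T, np.eye(3, dtype=int)])

syndrome = np.array([1, 0, 1, 1])            # the true 4-bit syndrome
measured = syndrome @ G % 2                  # 7 measured eigenvalue bits

measured[2] ^= 1                             # one faulty measurement

s = H @ measured % 2                         # classical check
if s.any():                                  # locate and fix the bad bit
    bad = next(j for j in range(7) if np.array_equal(H[:, j], s))
    measured[bad] ^= 1

recovered = measured[:4]                     # information positions
assert np.array_equal(recovered, syndrome)
print("recovered the true syndrome despite a faulty measured bit")
```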
If no errors are detected using the classical code, it is quite likely we have measured the correct syndrome. There is still a chance that we have not, so we may want to repeat the measurement, but we will not have to repeat it as many times to reach the same level of confidence in the result.

Another possible improvement is to reduce the number of qubits needed to perform error correction. Below, I present a method due to Steane. This method puts more effort into preparing the ancilla, allowing a reduction in the number of operations performed on the data. In some situations, this results in an improvement in error tolerance; in others, the effort spent preparing the ancilla is too large, and the result is a worse tolerance for errors.

Steane's ancilla state uses 2n qubits, which are prepared in the sum of the states of a classical code. The specific classical code is formed by taking the two matrices in the binary vector space representation of S (section 3.4) and joining them into a single (n − k) × 2n matrix, with the matrix for the σz's first. This is the parity check matrix of the classical code. The ancilla state can be described by a stabilizer SA on 2n qubits. The first n − k generators of the stabilizer are the rows of the parity check matrix with σz's for the 1s. The remaining n + k generators of the stabilizer are the n + k independent tensor products of σx's that commute with the first n − k generators. Note that since S is Abelian, n − k of the new generators will also be formed directly from the generators of the stabilizer, this time by combining the σx and σz matrices with the σx one first and replacing 1s with σx's. There is only a single state in the Hilbert space fixed by all 2n of these generators, and that is the desired ancilla state.

For instance, if the original code is a CSS code such as the seven-qubit code, the resulting ancilla state is the tensor product of two ancilla states, each in the superposition of all the states in one of the two classical codes that make up the CSS code. For the seven-qubit code, that means two copies of |0〉 + |1〉, where |0〉 and |1〉 are the encoded 0 and 1 states of the seven-qubit code. In general, the classical code will be able to identify as many errors as the quantum code can, counting errors in both bits j and j + n (for j ≤ n) as a single error.

Once we have this ancilla, we should again verify it, as we did for the "cat" states in sections 5.2 and 5.7. Then we apply a CNOT from data qubit i to ancilla qubit i, followed by a Hadamard transform R on the data qubit and a CNOT from the ith data qubit to the (n + i)th ancilla qubit, followed by a final Hadamard transform on the data qubit. Assuming no phase errors in the ancilla, the data qubit ends up in its original state. We can see this by looking at the stabilizer of the ancilla. The last n + k generators M of SA are all tensor products of σx's, so the CNOTs simply map I ⊗ M → I ⊗ M, which is obviously still in S × SA. The first n − k generators are tensor products of σz's, say M1 ⊗ M2 (with M1 and M2 n-qubit operators). The CNOTs then map

I ⊗ (M1 ⊗ M2) → M1(R M2 R†) ⊗ (M1 ⊗ M2). (5.50)
But $M_1$ has a $\sigma_z$ anywhere some element $M \in S$ does, and $R M_2 R^\dagger$ has a $\sigma_x$ anywhere the same M does, so $M_1 (R M_2 R^\dagger) = M$, and $M_1 (R M_2 R^\dagger) \otimes (M_1 \otimes M_2)$ is in $S \times S_A$.

The effect of the CNOTs on the generators M of S is to copy the $\sigma_x$'s forward into the first n qubits of the ancilla and the $\sigma_z$'s forward into $\sigma_x$'s in the last n qubits of the ancilla. That is, $M \otimes I \to M \otimes (M_1 \otimes M_2)$, where $M_1$ and $M_2$ are products of $\sigma_x$'s, and $M_1 \otimes M_2$ is one of the second set of n−k generators of $S_A$. Therefore a correct codeword will have no effect on the ancilla. Measuring $\sigma_z$ on each of the 2n ancilla qubits will therefore give us a random codeword from the classical code without disturbing the data or the quantum code.

A bit flip error in the jth qubit of the quantum code will carry forward to a bit flip error in the jth qubit of the ancilla, and a phase error in the jth qubit of the quantum code will produce a bit flip error in the (n+j)th qubit of the ancilla. Therefore, errors in the quantum code will produce bit flip errors in the measured classical codeword. The actual codeword tells us nothing, but the error syndrome will identify the error in the quantum code. As with the cat state method, an incorrect ancilla qubit can result in the wrong error syndrome, but repeating the syndrome measurement can give an arbitrarily high confidence level to the result. Single-qubit phase errors in the ancilla will just feed back to single-qubit phase or bit flip errors in the data.
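Before moving on, here is a tiny sanity check of the error propagation just described, a sketch of my own with errors written as binary symplectic pairs (x, z). It confirms that the sequence CNOT, Hadamard, CNOT, Hadamard leaves the data error untouched while depositing its bit flip component on ancilla qubit i and its phase component on ancilla qubit n+i.

    # Errors as pairs (x, z): x = bit flip component, z = phase flip component.

    def cnot(ctrl, targ):
        """Propagate an error through a CNOT: X copies control -> target,
        Z copies target -> control."""
        cx, cz = ctrl
        tx, tz = targ
        return (cx, cz ^ tz), (tx ^ cx, tz)

    def hadamard(q):
        """A Hadamard exchanges the X and Z components of an error."""
        x, z = q
        return (z, x)

    for dx, dz in [(1, 0), (0, 1), (1, 1)]:    # X, Z, Y error on the data qubit
        data, anc_i, anc_ni = (dx, dz), (0, 0), (0, 0)
        data, anc_i = cnot(data, anc_i)        # CNOT data -> ancilla qubit i
        data = hadamard(data)                  # R on the data qubit
        data, anc_ni = cnot(data, anc_ni)      # CNOT data -> ancilla qubit n+i
        data = hadamard(data)                  # final R on the data qubit
        assert data == (dx, dz)                # data error is undisturbed
        assert anc_i[0] == dx and anc_ni[0] == dz   # bit flip lands on qubit i,
                                                    # phase flip on qubit n+i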
Chapter 6

Concatenated Coding

6.1 The Structure of Concatenated Codes

Encoding data using a quantum error-correcting code and applying fault-tolerant operations to it may or may not actually improve the basic error rate for the computation. Since the gates involved in error correction are themselves noisy, the process of error correction introduces errors at the same time it is fixing them. If the basic gate error rate is low enough, the error correction will fix more errors than it introduces on average, and making a computation fault-tolerant will help rather than harm. If the error rate is too high, attempting to correct errors will introduce more errors than are fixed, and error correction actively does harm. Even if error correction helps rather than harms, statistical fluctuations will eventually produce more errors than the code can correct, resulting in a real error in the data. Furthermore, the extra computational overhead required to do fault-tolerant operations may counteract the additional resistance to errors provided by the code, so the encoded computer may not be able to do longer computations than the original computer.

Nevertheless, if the basic error rate in the quantum computer is low enough, we will be able to do longer computations using quantum codes and fault-tolerance than we could without them. Suppose we can get a certain amount of improvement by using a specific code, say the seven-qubit code. We might imagine that by using a code that corrects more errors, we could do a longer computation yet, and by increasing the number of errors the code corrects indefinitely, we could do arbitrarily long computation. However, for arbitrary families of codes, the number of steps required to do error correction may increase rapidly with the number of errors corrected. Therefore, the time required to do error correction may eventually overwhelm the capability of the code to deal with errors, and the performance of the computer will start to decrease again. To solve this problem, we need to find a class of codes where the time to measure the error syndrome increases only slowly with the error-correcting capabilities of the code.

The desired class of codes is concatenated codes [34, 45, 48, 49]. For a concatenated code, the data is encoded using some [n, k, d] code, then each qubit in a block is again encoded using an $[n_1, 1, d_1]$ code. The qubits making up blocks in the new code may be further encoded using an $[n_2, 1, d_2]$ code, and so on indefinitely. The result is an $[n n_1 n_2 \cdots n_{l-1},\, k,\, d d_1 d_2 \cdots d_{l-1}]$ code.

We can find the error syndrome of such a code rather rapidly. We measure the error syndrome for the $[n_{l-1}, 1, d_{l-1}]$ code (the first level of the code) for all of the blocks of $n_{l-1}$ qubits at once. To do this, we must make the assumption that we can do parallel computation on different qubits. Note that we need this assumption anyway, or storage errors will always build up on some block while we are correcting errors on the other blocks. Similarly, we measure the error syndrome for the $[n_{l-2}, 1, d_{l-2}]$ code at the second level of the code in parallel for different blocks, and so on, for all l levels of the code. Therefore, we can measure the error syndrome for the whole code in only the sum of the number of steps required to measure each constituent code, instead of something like the product, which would be a more typical complexity for a code of the same parameters.

In order to analyze concatenated codes, it is useful to make a few simplifying assumptions. One assumption is that we are using the same code at every level. One particularly good code for this purpose is the [7, 1, 3] code, because any operation in N(G) can be immediately performed transversally, keeping the overhead for fault-tolerant computation small. In addition, it is a small code, so the complexity of error correction is not too large. Allowing varying codes at different levels may improve the space efficiency of the code, but it will not change the basic results. The other simplifying assumption is that the operations at level j are basically similar to operations at level j+1. Each level feeds information about error rates for different gates and storage errors, and relative times for the different operations, to the next lower level, but nothing else. Error correction at each level is an independent process. Note that this will impair the error-correction properties of the code, since the full minimum distance of the code assumes that we combine information about the error syndrome from all the different levels. However, even with this assumption, we will find that for low enough basic error rates, we can do arbitrarily long computations with arbitrarily low real error rates by using sufficiently many levels of concatenation (the basic error rate is the rate of errors in actual physical qubits due to gates or storage errors; the real error rate is the rate of errors in the encoded data). When the basic error rate is low enough, adding an extra level of concatenation further reduces the real error rate; if the basic error rate is too high, adding an extra layer increases the real error rate because of the extra time spent on error correction and calculation.
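The parameter arithmetic of this section is easy to mechanize. The helper below is my own illustration, not part of the thesis; it computes the parameters of a concatenated code and shows five levels of the [7, 1, 3] code giving a [16807, 1, 243] code, while syndrome-measurement times add rather than multiply across levels.

    from math import prod

    def concatenated_parameters(outer, inner_codes):
        """outer = (n, k, d); inner_codes = list of (n_i, d_i) one-qubit codes.
        Returns the parameters of the concatenated code from section 6.1."""
        n, k, d = outer
        ns = [ni for ni, _ in inner_codes]
        ds = [di for _, di in inner_codes]
        return (n * prod(ns), k, d * prod(ds))

    # Five levels of the [7, 1, 3] code concatenated with itself:
    print(concatenated_parameters((7, 1, 3), [(7, 3)] * 4))
    # -> (16807, 1, 243), i.e. a [7^5, 1, 3^5] code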
In this chapter, I will present a rough calculation of the error threshold below which arbitrarily long computation is possible. In my discussion, the zeroth level of the code consists of the individual physical qubits making it up. These qubits form the blocks of a [7, 1, 3] code. Each block of seven physical qubits forms a qubit at the first level of the code. In general, qubits at the jth level of the code consist of $7^j$ physical qubits. There are a total of l levels in the code. The qubits at the lth level are the real data qubits. We wish to keep the effective error rate on these qubits as low as possible.

For this calculation, I will assume that storage errors occur independently on different physical qubits with rate $p_{\rm stor}$. The error rate for any one- or two-qubit gate in N(G) will be $p_g$, and the error rate for the Toffoli gate will be $p_{\rm Tof}$. I assume any gate may produce correlated errors on the qubits affected by the gate, but will produce no errors on any other qubits. There will be an additional storage error on qubits unaffected by the gate, but the storage error is included in the gate error for qubits that are affected by the gate. All the errors are assumed to be stochastically distributed, so the error probabilities for different qubits add, rather than the error amplitudes in the quantum states. In addition, the error rates for state preparation and state measurement will be important. I will denote them by $p_{\rm prep}$ and $p_{\rm meas}$, respectively.

The computation will call for various operations performed on the qubits encoded at various different levels. After any operation at level j, I will perform error correction at level j. This means we can give an effective error rate to each operation at level j. The fact that a given error rate refers to a gate at level j will be noted by a superscript (j). Thus, $p_{\rm stor}^{(0)}$ is the storage error rate on the physical qubits, while $p_g^{(l)}$ is the effective error rate on the data qubits from performing an operation in N(G). Only allowing one gate per error correction will typically reduce the performance of the code. Errors created during error correction will dominate; an optimized code would perform error correction when the expected accumulated chance of errors was roughly equal to the chance of errors during error correction. However, the assumption of one gate per error correction is another very useful simplifying assumption, because it preserves the self-similar character of the concatenated code, allowing a relatively straightforward recursive calculation of the real error rates.

Some logical operations, such as the Toffoli gate, will require more and more physical operations as the level increases. The basic time required to perform a physical operation will be 1, and the storage error rate (at any level) is the error rate per unit time. The time to perform a Toffoli gate at level j will be denoted $t_{\rm Tof}^{(j)}$. Because operations in N(G) can be performed at any level just by performing a single operation from N(G) in parallel at the next lower level, the time to perform an operation in N(G) at any level is just 1. The time to prepare a state encoded at the jth level is $t_{\rm prep}^{(j)}$ and the time to measure a qubit at the jth level is $t_{\rm meas}^{(j)}$; $t_{\rm prep}^{(0)} = 0$ and $t_{\rm meas}^{(0)} = 1$.
6.2 Threshold for Storage Errors and Gates from N(G)

To determine $p_g^{(j)}$ in terms of quantities at level j−1, we note that a gate in N(G) at level j consists of a single gate in N(G) on each of the constituent qubits at level j−1, followed by a full error correction cycle at level j−1. In order for the level j gate to have an error, there must be two errors at level j−1, either in the N(G) gate or in the error correction. I will assume that there is no residual error that was missed in an earlier error correction step. A more careful calculation should consider such leftover errors, which can be significant.

Suppose the chance of an error occurring in a single data qubit during a single measurement of the error syndrome is $p_{EC}$. There are a few possible situations that result in an error at level j. Two errors at level j−1 could occur in any of $\binom{7}{2} = 21$ choices of two qubits. This could happen with two N(G) gates going wrong, with probability $(p_g^{(j-1)})^2$. We repeat the error syndrome measurement until we get the same result twice. If there is one error from an N(G) gate and one from either of these measurements of the error syndrome, there will be an error at level j; the probability of this is $4 p_g^{(j-1)} p_{EC}$. Finally, both errors could come from the error correction. This could be two errors in the first or second syndrome measurement, with probability $2 p_{EC}^2$. Given one error in a syndrome measurement, we will need to do three syndrome measurements total; if two of those go wrong, it will also produce an error at level j, with probability $6 p_{EC}^2$. There are also a number of possibilities involving an error in the ancilla state producing an incorrect syndrome and requiring more measurements. However, I assume the error rates involved are all fairly low, so the probability of this situation producing an error at level j is smaller by O(p), which I will assume is negligible. Thus, the total gate error rate at level j is

  $p_g^{(j)} = 21 \left[ (p_g^{(j-1)})^2 + 4 p_g^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$.   (6.1)

Similarly, a single time step at level j without a gate involves a single time step without a gate at level j−1 followed by error correction. Therefore,

  $p_{\rm stor}^{(j)} = 21 \left[ (p_{\rm stor}^{(j-1)})^2 + 4 p_{\rm stor}^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$.   (6.2)

The salient aspect of these equations is that the probability of error at level j is of the order of the square of the error rate at level j−1. This means that $p_g^{(l)}$ will scale roughly as

  $p_g^{(0)} \left( p_g^{(0)} / p_{\rm thresh} \right)^{2^l}$   (6.3)

for some threshold error rate $p_{\rm thresh}$, and similarly for $p_{\rm stor}^{(l)}$. This is a very rapid decrease in $p_g^{(l)}$ as a function of l when $p_g^{(0)} < p_{\rm thresh}$. We will thus only need a few levels, of order log(log p), to bring the real error rate down to O(p) per step. Thus, the number of extra qubits necessary for a fault-tolerant computation is only polylog p times the original number, which is a very good scaling. However, while the asymptotic scaling is quite good, for vaguely reasonable p, the actual number of extra qubits needed is quite large.
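To see the double-exponential behavior numerically, here is a short iteration of my own. It anticipates the value $p_{EC} = 12 p_g$ derived below for the limit of no storage errors, which turns (6.1) into $p_g^{(j)} = 25221 (p_g^{(j-1)})^2$.

    def next_level(p):
        p_ec = 12 * p      # cat-state error correction, no storage errors
        return 21 * (p**2 + 4 * p * p_ec + 8 * p_ec**2)   # equation (6.1)

    for p0 in [1e-5, 3e-5, 5e-5]:      # two rates below threshold, one above
        p, levels = p0, []
        for _ in range(5):
            p = next_level(p)
            levels.append(p)
        print(f"p0 = {p0:.0e}: " + ", ".join(f"{x:.1e}" for x in levels))

    # Below threshold (about 4.0e-5) the rate falls doubly exponentially with
    # the level; above it, each added level of concatenation makes things worse.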
In order to determine the threshold $p_{\rm thresh}$, let us calculate $p_{EC}$. I will assume we are using Shor's cat state method to correct errors, although another method (such as Steane's) might ultimately lead to better performance. We have to measure six syndrome bits, so we will need to prepare six cat states, each using four qubits. I will assume a limited ability to plan ahead in the calculation, so the data qubits will have to wait for the first cat state in a single measurement of the error syndrome, but the other cat states are being prepared at the same time, so they will be ready just when they are needed.

[Figure 6.1: Cat state construction and verification.]

To prepare a cat state, we start with all four qubits in the state $|0\rangle$ (encoded using the code at level j−1), perform a Hadamard rotation R on the first qubit, then a CNOT from the first qubit to the third qubit, and then two more CNOTs, from the first qubit to the second and from the third to the fourth, as shown in figure 6.1. Bit flip errors at this point will become phase errors after the final Hadamard transform, so we need to ensure that there is at most one. Every way a single gate error earlier in the construction can produce two bit flip errors here makes the second and fourth qubits different. Therefore, we perform CNOTs from the second and fourth qubits to an additional ancilla test qubit and measure the test qubit. If it is $|0\rangle$, we can use the ancilla; if it is $|1\rangle$, there is at least one error in the cat state, possibly two. We throw the cat state out and construct another one. Finally, we must perform a Hadamard transform on each of the four qubits in the cat state to get the actual ancilla used in error correction.

An examination of the circuit shows that any bit flip errors before the cycle in which there are two CNOTs will cause the test qubit to flip. Therefore, only errors at this stage or later will have a chance of affecting the actual ancilla used. For the second and fourth qubits, the error must actually occur after (or during) the CNOT to the test qubit. Therefore, the chance of an important error in any single ancilla qubit is $2 p_g + p_{\rm stor}$ (for qubits two and four) or $p_g + 2 p_{\rm stor}$ (for qubits one and three). Although only phase errors can feed back, the fault-tolerant network does not treat $\sigma_x$ and $\sigma_z$ errors symmetrically, so to be safe, I will consider the worst case, in which every error is of the most dangerous type. However, in no case can an error in the test qubit feed back into the data qubits, so I have not included errors from this source.

Now, we can construct a network for error syndrome measurement such that each data qubit contributes to at most four syndrome bits. In addition, two Hadamard rotations are necessary. Therefore, the process of syndrome measurement introduces at most an additional probability $6 p_g + 2 p_{\rm stor}$ of error. To this, we must add the probability of an error feeding back, plus the accumulation of storage errors while we prepare the cat state and measure the ancilla. There is only waiting time for the preparation of the first cat state and measurement of the last one, since preparation and measurement of the other cat states take place in parallel. Feedback is a more serious problem, but we can arrange it so that no data qubit interacts with more than two ancilla qubits with error rate $2 p_g + p_{\rm stor}$, so the total feedback is at most $6 p_g + 6 p_{\rm stor}$. Therefore,

  $p_{EC} = (6 p_g + 6 p_{\rm stor}) + (6 p_g + 2 p_{\rm stor}) + (6 + t_{\rm prep} + t_{\rm meas})\, p_{\rm stor}$   (6.4)
  $= 12 p_g + (14 + t_{\rm prep} + t_{\rm meas})\, p_{\rm stor}$.   (6.5)

Now, in order to measure a qubit encoded at some level, it is sufficient to measure all of the constituent qubits. At level one, this gives us some seven-bit string which is a codeword of the classical Hamming code (possibly with some errors).
Whether it is a codeword of even or odd parity will tell us whether the corresponding level one qubit is $|0\rangle$ or $|1\rangle$. We can continue to do this at all levels, using classical error correction at each level to correct any errors in individual bits. This will, in general, require a fair amount of classical computation. However, I will assume that classical computation is much faster than quantum computation when it can perform the same task, and that in the regime of interest, $t_{\rm meas} = 1$. No matter what the speed of the classical computer, eventually $t_{\rm meas}^{(j)}$ will become greater than one, but due to the rapid convergence of the double exponential, this will have a very small effect on the threshold.

Preparing encoded $|0\rangle$ states at level j−1 does take a fair amount of time, however. Furthermore, the amount of time will increase with level. One way to prepare encoded 0 states reliably is by performing a full error correction cycle for the code with the addition of the Z operator $\sigma_{z5} \sigma_{z6} \sigma_{z7}$. The input state can be anything. The time to do this is at most $4(t_{EC} + 1)$. Recall that we must get the same error syndrome twice before we trust it. If there is an error in the second syndrome measurement, we may have to measure the syndrome twice more, for a total of four times. The chance of two errors is of lower order, and therefore we ignore it. The time for one error correction cycle is $t_{EC} = 14 + t_{\rm prep} + t_{\rm meas}$, so $t_{\rm prep}^{(j)} = 64 + 4 t_{\rm prep}^{(j-1)}$. In order to cut down the growth rate with level, I will assume we can plan ahead enough to prepare the ancillas for later syndrome measurements while measuring the earlier syndromes. Then $t_{\rm prep}^{(j)} = 43 + t_{\rm prep}^{(j-1)}$. Recalling that $t_{\rm prep}^{(0)} = 0$, we then get $t_{\rm prep}^{(j)} = 43 j$. One benefit of preparing states using error correction is that the chance of residual error is minimal. I will take $p_{\rm prep}^{(j)} = 0$ where it matters.

Finally, we get the result for $p_{EC}$. The $t_{\rm prep}$ that contributes is actually $t_{\rm prep}^{(j-1)}$, so

  $p_{EC}^{(j)} = 12 p_g^{(j-1)} + [15 + 43(j-1)]\, p_{\rm stor}^{(j-1)}$.   (6.6)

Therefore,

  $p_g^{(j)} = 21 \left[ (p_g^{(j-1)})^2 + 4 p_g^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$   (6.7)
  $= 25221\, (p_g^{(j-1)})^2 + [61740 + 176988(j-1)]\, p_g^{(j-1)} p_{\rm stor}^{(j-1)} + [37800 + 216720(j-1) + 310632(j-1)^2]\, (p_{\rm stor}^{(j-1)})^2$   (6.8)

and

  $p_{\rm stor}^{(j)} = 21 \left[ (p_{\rm stor}^{(j-1)})^2 + 4 p_{\rm stor}^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$   (6.9)
  $= 24192\, (p_g^{(j-1)})^2 + [61488 + 173376(j-1)]\, p_g^{(j-1)} p_{\rm stor}^{(j-1)} + [39081 + 220332(j-1) + 310632(j-1)^2]\, (p_{\rm stor}^{(j-1)})^2$.   (6.10)

Note a number of things here. If we perform error correction after every time step, whether it has a gate or not, the storage error rate and gate error rate at the next level will actually be dominated by the error rate of error correction, so they will be very close. Also, at levels beyond the first, the error rate is dominated by storage errors occurring while we wait around encoding the ancilla qubits for error correction. Therefore, the algorithm will benefit greatly from a more rapid preparation algorithm, a better ability to plan ahead, or both.

First, consider the limit in which storage errors are negligible. In this case, we do not perform error correction after a step without a gate. Therefore, $p_{\rm stor}^{(j)} = 0$ at all levels. Then $p_g^{(j)} = 25221\, (p_g^{(j-1)})^2$, and the threshold for a computation involving only operations from N(G) is $p_{\rm thresh} = 1/25221 \approx 4.0 \times 10^{-5}$.
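Equations (6.6), (6.8), and (6.10) are easy to iterate directly. The sketch below is my own coding of them; it reproduces the level-one through level-three coefficients quoted in the next paragraphs.

    def level_up(p_g, p_stor, j):
        """Error rates at level j from rates at level j-1 (one gate per EC)."""
        c = 15 + 43 * (j - 1)               # the coefficient from (6.6)
        p_ec = 12 * p_g + c * p_stor        # equation (6.6)
        new_g = 21 * (p_g**2 + 4 * p_g * p_ec + 8 * p_ec**2)          # (6.7)
        new_stor = 21 * (p_stor**2 + 4 * p_stor * p_ec + 8 * p_ec**2) # (6.9)
        return new_g, new_stor

    p_g = p_stor = 1.0e-6                   # equal gate and storage error rates
    for j in range(1, 6):
        p_g, p_stor = level_up(p_g, p_stor, j)
        print(f"level {j}: p_g = {p_g:.2e}, p_stor = {p_stor:.2e}")

The printed level-one rate is 124761 times the square of the starting rate, matching equation (6.11) below, and the sequence keeps falling for starting rates below roughly 2e-6.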
A second limit would be when $p_g^{(0)} = p_{\rm stor}^{(0)}$, so there are no gate errors beyond the simple storage error in the same time step. Then they should be equal at all other levels as well, and

  $p_{\rm stor}^{(j)} = [124761 + 393708(j-1) + 310632(j-1)^2]\, (p_{\rm stor}^{(j-1)})^2$.   (6.11)

Then $p_{\rm stor}^{(1)} = 1.248 \times 10^5\, (p_{\rm stor}^{(0)})^2$, $p_{\rm stor}^{(2)} = 8.3 \times 10^5\, (p_{\rm stor}^{(1)})^2$, and $p_{\rm stor}^{(3)} = 2.2 \times 10^6\, (p_{\rm stor}^{(2)})^2$. For higher j, we approximate

  $p_{\rm stor}^{(j)} = 3.1 \times 10^5\, (j-1)^2\, (p_{\rm stor}^{(j-1)})^2 = \left[ (j-1)^2\, p_{\rm stor}^{(j-1)} / (3.2 \times 10^{-6}) \right] p_{\rm stor}^{(j-1)}$.   (6.12)

To get continual improvement, it is sufficient for $p_{\rm stor}^{(j)} / p_{\rm stor}^{(j-1)} < (j-1)^2 / j^2$. This will mean $p_{\rm stor}^{(j)} \le (9/j^2)\, p_{\rm stor}^{(3)}$. It suffices for $p_{\rm stor}^{(4)} = \frac{9}{16} p_{\rm stor}^{(3)}$, so $p_{\rm stor}^{(3)} = \frac{1}{16} (3.2 \times 10^{-6})$. Following this back, we find that for only storage errors, the threshold is roughly $p_{\rm thresh} = 2.2 \times 10^{-6}$, or slightly more than an order of magnitude worse than for gate errors alone.

Let us consider another case. Suppose we can plan ahead well, and prepare ancillas for error correction just in time for when they are needed. Then $p_{EC} = 12 p_g + 9 p_{\rm stor}$, and

  $p_g^{(j)} = 25221\, (p_g^{(j-1)})^2 + 37044\, p_g^{(j-1)} p_{\rm stor}^{(j-1)} + 13608\, (p_{\rm stor}^{(j-1)})^2$   (6.13)
  $p_{\rm stor}^{(j)} = 24192\, (p_g^{(j-1)})^2 + 37296\, p_g^{(j-1)} p_{\rm stor}^{(j-1)} + 14385\, (p_{\rm stor}^{(j-1)})^2$.   (6.14)

For all practical purposes, for j > 1, $p_g^{(j)} = p_{\rm stor}^{(j)} = p^{(j)} = 75873\, (p^{(j-1)})^2$. This means that the threshold occurs at $p^{(1)} = 1/75873 = 1.3 \times 10^{-5}$. At the limit $p_{\rm stor}^{(0)} = 0$, we get a threshold for $p_g$ of $p_{\rm thresh} = 2.3 \times 10^{-5}$. At the limit $p_g^{(0)} = p_{\rm stor}^{(0)}$, we get a threshold $p_{\rm thresh} = 1.3 \times 10^{-5}$.

Finally, suppose we do not do error correction after every step, but instead attempt to optimize the number of steps N between error corrections. Then the chance of error in N steps is $N p_g^{(j-1)}$ or $N p_{\rm stor}^{(j-1)}$, and equations (6.1) and (6.2) become

  $N p_g^{(j)} = 21 \left[ N^2 (p_g^{(j-1)})^2 + 4 N p_g^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$   (6.15)
  $N p_{\rm stor}^{(j)} = 21 \left[ N^2 (p_{\rm stor}^{(j-1)})^2 + 4 N p_{\rm stor}^{(j-1)} p_{EC} + 8 p_{EC}^2 \right]$.   (6.16)

The values $p_g^{(j)}$ and $p_{\rm stor}^{(j)}$ now represent average error rates, rather than strict error rates per step. As long as we do gates from N(G) only or storage only, these values will be accurate representations, but if we mix and match, the story will be a bit different. Optimizing with respect to N gives us

  $-\frac{21}{N^2} \left[ N^2 (p_g^{(j-1)})^2 + 4 N p_g^{(j-1)} p_{EC} + 8 p_{EC}^2 \right] + \frac{21}{N} \left[ 2 N (p_g^{(j-1)})^2 + 4 p_g^{(j-1)} p_{EC} \right] = 0$   (6.17)–(6.18)
  $N^2 (p_g^{(j-1)})^2 + 4 N p_g^{(j-1)} p_{EC} + 8 p_{EC}^2 = 2 N^2 (p_g^{(j-1)})^2 + 4 N p_g^{(j-1)} p_{EC}$   (6.19)
  $N^2 (p_g^{(j-1)})^2 - 8 p_{EC}^2 = 0$   (6.20)
  $N = \sqrt{8}\, \left( p_{EC} / p_g^{(j-1)} \right)$.   (6.21)

The same is true for storage steps. The optimum number of steps makes the accumulated chance of error during gates $\sqrt{8}$ times the chance of error during error correction. Plugging in this value for N gives us

  $p_g^{(j)} = \frac{21}{N} (16 + 8\sqrt{2})\, p_{EC}^2$.   (6.22)

Assuming no storage errors, $p_{EC} = 12 p_g^{(j-1)}$, so N = 34 and $p_g^{(j)} = 2.4 \times 10^3\, (p_g^{(j-1)})^2$, so the threshold is $p_{\rm thresh} = 4.1 \times 10^{-4}$. In practice, we will not be able to perform error correction after exactly 34 gates, since there will be Toffoli gates occurring at possibly inconvenient times, but if we get close to the right frequency of error correction, the actual threshold will not be too much worse than this.
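As a cross-check of this optimization, again my own sketch rather than the thesis's, we can scan N directly instead of differentiating:

    import math

    p_g = 1.0e-5                      # any value well below threshold works
    p_ec = 12 * p_g                   # no storage errors

    def rate_per_step(N):
        """Average error rate per step with N gates per error correction,
        from equation (6.15)."""
        return 21 * (N**2 * p_g**2 + 4 * N * p_g * p_ec + 8 * p_ec**2) / N

    best_N = min(range(1, 200), key=rate_per_step)
    print(best_N, math.sqrt(8) * p_ec / p_g)     # both about 34

    C = rate_per_step(best_N) / p_g**2           # p(j) = C * p(j-1)^2
    print(C, 1 / C)                              # about 2.4e3 and 4.1e-4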
6.3 Toffoli Gate Threshold

To figure out the recursion relation for the Toffoli gate, look at figure 6.2, which summarizes the construction in section 5.7. I will follow each qubit individually in order to figure out the final chance of error for that qubit.

[Figure 6.2: The Toffoli gate construction. Each line represents seven qubits at the next lower level.]

This is a construction for the Toffoli gate at level j+1. I will assume we do error correction on all three ancilla qubits only after the Toffoli gate is completed. All three ancilla qubits start out with a $p_{\rm prep}^{(j+1)}$ chance of error from preparing encoded $|0\rangle$'s. There are actually two types of relevant encoding errors. There can be errors remaining at lower levels; since we have just done an error correction cycle, I assume that the number of residual errors is negligible. There is also a chance that the qubit will not be an encoded $|0\rangle$ but some other encoded state. This would count as a complete failure of the Toffoli gate, since it would produce a real error at level j+1. However, I will assume that the chance of this happening is also zero.

Assume the chance of a remaining bit flip error in a cat state is $p_{\rm cat}$ and the time to make a cat state is $t_{\rm cat}$. Only bit flip errors feed back from the cat states to the ancilla qubits in this network. Let $A_1$, $A_2$, and $A_3$ be the accumulated chances of error in the three ancilla qubits. First we have a Hadamard transform on all three of these qubits. After the first gate in the ancilla construction, $A_3 = t_{\rm cat}\, p_{\rm stor}^{(j)} + 2 p_g^{(j)} + p_{\rm cat}$. It will have to sit around an additional $1 + t_{\rm Tof}^{(j)}$ time steps before the interaction with the next cat state begins. Thus, after the first cat state is finished, $A_3 = (t_{\rm cat} + t_{\rm Tof}^{(j)} + 1)\, p_{\rm stor}^{(j)} + 2 p_g^{(j)} + p_{\rm cat}$. By the time of the Toffoli gate with the first two ancilla qubits, the chance of errors in the cat state which can feed back into the main ancilla is at most $p_{\rm cat} + 2 p_g^{(j)}$. The first two ancilla qubits have already waited a time $t_{\rm cat} + 2$, so the overall chance of errors in the first two ancilla qubits is

  $A_1 = A_2 = (t_{\rm cat} + 2)\, p_{\rm stor}^{(j)} + p_{\rm cat} + 3 p_g^{(j)} + p_{\rm Tof}^{(j)}$.   (6.23)

We repeat the cat state interaction two more times with new cat states, which we have been preparing in parallel with the first cat state. Therefore, we only need $2 + t_{\rm Tof}^{(j)}$ more time steps for each interaction, introducing the same amount of error as the equivalent steps in the first interaction. We must also measure the cat states. We can do it in the basis they end up in; we check for odd or even parity. If two of the three cat states have odd parity, we decide the ancilla is in the state $|B\rangle$, and we perform $\sigma_x$ on the third ancilla qubit. This process will take an additional $t_{\rm meas}^{(j)} + 1$ time units. After the ancilla creation is completed, the chances of error on the three qubits are

  $A_1 = (t_{\rm cat} + t_{\rm meas}^{(j)} + 7)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 7 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.24)
  $A_2 = (t_{\rm cat} + t_{\rm meas}^{(j)} + 7)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 7 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.25)
  $A_3 = (t_{\rm cat} + t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 3)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 5 p_g^{(j)}$.   (6.26)

The whole ancilla construction has taken a time $t_{\rm cat} + t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 7$, during which time the data qubits have been accumulating storage errors. I assume here that $t_{\rm cat} \ge t_{\rm prep}^{(j)}$. Now we perform the CNOTs between the data qubits and the ancilla qubits. Again we make the conservative assumption that all of the accumulated chance of error on the data qubits feeds into the ancilla qubits.
Thus,

  $A_1 = (2 t_{\rm cat} + 2 t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 14)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 8 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.27)
  $A_2 = (2 t_{\rm cat} + 2 t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 14)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 8 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.28)
  $A_3 = (2 t_{\rm cat} + 2 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 10)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 6 p_g^{(j)}$.   (6.29)

Now we measure $\sigma_z$ for the first two data qubits and $\sigma_x$ for the third data qubit. We will add one time step for the Hadamard rotation on the third data qubit, plus $t_{\rm meas}^{(j)}$ to measure. We should include a chance of the Toffoli gate failing because of the wrong result on one of these measurements, but I will assume that chance is small compared to the accumulated errors on the ancilla qubits. Before we start doing the conditional operations to convert the ancilla states to complete the transfer of the data, the chances of error are

  $A_1 = (2 t_{\rm cat} + 3 t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 15)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 8 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.30)
  $A_2 = (2 t_{\rm cat} + 3 t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 15)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 8 p_g^{(j)} + 3 p_{\rm Tof}^{(j)}$   (6.31)
  $A_3 = (2 t_{\rm cat} + 3 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 11)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 6 p_g^{(j)}$.   (6.32)

I will now assume that all three conditional operations are necessary; this is the worst case, and usually there will be fewer gate errors. The first conditional operation interacts ancilla qubits one and two, giving

  $A_1 = (4 t_{\rm cat} + 6 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 30)\, p_{\rm stor}^{(j)} + 6 p_{\rm cat} + 17 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.33)
  $A_2 = (4 t_{\rm cat} + 6 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 30)\, p_{\rm stor}^{(j)} + 6 p_{\rm cat} + 17 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.34)
  $A_3 = (2 t_{\rm cat} + 3 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 11)\, p_{\rm stor}^{(j)} + 3 p_{\rm cat} + 7 p_g^{(j)}$.   (6.35)

The second conditional operation interacts ancilla qubits one and three, so

  $A_1 = (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 25 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.36)
  $A_2 = (4 t_{\rm cat} + 6 t_{\rm meas}^{(j)} + 6 t_{\rm Tof}^{(j)} + 30)\, p_{\rm stor}^{(j)} + 6 p_{\rm cat} + 18 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.37)
  $A_3 = (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 34)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 25 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$.   (6.38)

The third operation interacts the second and third ancilla qubits. Much of the error from the first and second ancilla qubits has already been introduced into the third qubit, so there is no need to add it again; in fact, much of it may cancel out instead. However, I assume it remains. The only new error for the third ancilla qubit is the gate error on the second qubit from the previous operation plus the gate error for this operation. Thus,

  $A_1 = (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 26 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.39)
  $A_2 = (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$   (6.40)
  $A_3 = (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$.   (6.41)

The overall chance of error on a single one of the new data qubits after the full Toffoli gate construction is thus

  $(6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)}$.   (6.42)

The time taken to perform this Toffoli gate is

  $t_{\rm Tof}^{(j+1)} = t_{\rm cat} + 2 t_{\rm meas}^{(j)} + 3 t_{\rm Tof}^{(j)} + 12$.   (6.43)

After error correction, the chance of a real error at level j+1 is

  $p_{\rm Tof}^{(j+1)} = 21 \big\{ \big[ (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)} \big]^2$
  $\qquad + 4 \big[ (6 t_{\rm cat} + 9 t_{\rm meas}^{(j)} + 12 t_{\rm Tof}^{(j)} + 41)\, p_{\rm stor}^{(j)} + 9 p_{\rm cat} + 27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)} \big] p_{EC} + 8 p_{EC}^2 \big\}$.   (6.44)

In order to simplify the recursion relation so that it is easily solvable, I will only investigate the limit in which there are no storage errors. In this case, it makes sense to verify the cat state used in the construction until the chance of errors in it is negligible.
Therefore, I will also assume that $p_{\rm cat} = 0$. Then the recursion relation for the Toffoli gate becomes

  $p_{\rm Tof}^{(j+1)} = 21 \left[ (27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)})^2 + 4 (27 p_g^{(j)} + 6 p_{\rm Tof}^{(j)})\, p_{EC} + 8 p_{EC}^2 \right]$   (6.45)
  $= 66717\, (p_g^{(j)})^2 + 12852\, p_g^{(j)} p_{\rm Tof}^{(j)} + 756\, (p_{\rm Tof}^{(j)})^2$.   (6.46)

Recall that in this limit, $p_g^{(j)} = 25221\, (p_g^{(j-1)})^2$, so

  $p_g^{(j)} = 25221^{a(j)}\, (p_g^{(0)})^{2^j}$,   (6.47)

where $a(j+1) = 1 + 2a(j)$, with $a(1) = 1$. Therefore, $a(j) = 2^j - 1$, and

  $p_g^{(j)} = 4.0 \times 10^{-5} \left[ p_g^{(0)} / (4.0 \times 10^{-5}) \right]^{2^j}$   (6.48)
  $= p_{\rm thresh} \left( p_g^{(0)} / p_{\rm thresh} \right)^{2^j}$.   (6.49)

Writing $\epsilon = p_g^{(0)} / p_{\rm thresh}$, we have

  $p_{\rm Tof}^{(j+1)} = 1.1 \times 10^{-4}\, \epsilon^{2^{j+1}} + 0.51\, \epsilon^{2^j}\, p_{\rm Tof}^{(j)} + 756\, (p_{\rm Tof}^{(j)})^2$.   (6.50)

The first term is often negligible compared to the second term, in which case

  $p_{\rm Tof}^{(j+1)} = \left( 0.51\, \epsilon^{2^j} + 756\, p_{\rm Tof}^{(j)} \right) p_{\rm Tof}^{(j)}$.   (6.51)

In the limit where $\epsilon$ is small, we find a threshold value of $p_{\rm Tof}^{(0)} = 1/756 = 1.3 \times 10^{-3}$.

Even when $\epsilon$ is fairly large, the presence of Toffoli gates does not present much of a problem for the threshold. For instance, if we demand that $p_{\rm Tof}^{(0)} = p_g^{(0)} = \epsilon\, p_{\rm thresh}$, then

  $p_{\rm Tof}^{(1)} = 1.1 \times 10^{-4}\, \epsilon^2 + [12852\, p_{\rm thresh}\, \epsilon + 756\, p_{\rm thresh}\, \epsilon]\, p_{\rm Tof}^{(0)}$   (6.52)
  $\approx 1.3 \times 10^{-4}\, \epsilon^2$,   (6.53)
  $p_{\rm Tof}^{(2)} = 1.1 \times 10^{-4}\, \epsilon^4 + [0.51\, \epsilon^2 + 756\, (1.3 \times 10^{-4})\, \epsilon^2]\, p_{\rm Tof}^{(1)}$   (6.54)
  $= 1.9 \times 10^{-4}\, \epsilon^4$,   (6.55)
  $p_{\rm Tof}^{(3)} = 1.1 \times 10^{-4}\, \epsilon^8 + [0.51\, \epsilon^4 + 756\, (1.9 \times 10^{-4})\, \epsilon^4]\, p_{\rm Tof}^{(2)}$   (6.56)
  $= 2.3 \times 10^{-4}\, \epsilon^8$.   (6.57)

If we let $\epsilon^4 = 1.9/2.3$, so $\epsilon \approx 0.95$, then $p_{\rm Tof}^{(3)} = p_{\rm Tof}^{(2)}$, and as we add levels of concatenation, the Toffoli gate error rate will begin to improve. Therefore, the presence of Toffoli gates with the same physical error rate as other gates causes less than a 5% reduction in the threshold.
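A direct iteration of (6.50), again my own check rather than part of the thesis, confirms this behavior at $\epsilon \approx 0.95$:

    p_thresh = 4.0e-5
    eps = 0.95
    p_tof = eps * p_thresh           # p_Tof(0) = p_g(0) = eps * p_thresh

    for j in range(6):
        # equation (6.50), using p_g(j) = p_thresh * eps**(2**j):
        p_tof = (1.1e-4 * eps**(2**(j + 1))
                 + 0.51 * eps**(2**j) * p_tof
                 + 756 * p_tof**2)
        print(f"p_Tof({j + 1}) = {p_tof:.2e}")

    # The printed values reproduce the coefficients in (6.53)-(6.57):
    # p_Tof(1) ~ 1.3e-4 eps^2, p_Tof(2) ~ 1.9e-4 eps^4, p_Tof(3) ~ 2.3e-4 eps^8,
    # and at eps ~ 0.95 the sequence levels off and then starts to fall.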
Chapter 7

Bounds on Quantum Error-Correcting Codes

7.1 General Bounds

The question of how efficient an error-correcting code of a given block size can be made, in terms of both encoded qubits and distance, is an interesting and important question in the theories of both classical and quantum error correction. In the classical theory, only upper and lower bounds exist on the efficiency of codes that must have a given minimum distance between all codewords; the true, achievable bounds on such codes are unknown. Better understood in the classical case is the asymptotic efficiency of coding (where we only require that the code correct all likely errors). In the limit of infinitely many bits sent, we usually require the code to correct measure one of the errors occurring, using some probability measure associated with the channel. Classically, Shannon's theorem tells us what the achievable capacity of a channel is. No real quantum analogue of Shannon's theorem is known, despite extensive work on the subject [50, 51, 52].

One simple upper bound on the efficiency of quantum codes is the quantum Hamming bound. For a nondegenerate code with basis codewords $|\psi_i\rangle$ and possible errors $E_a$, all of the states $E_a |\psi_i\rangle$ are linearly independent for all a and i. If the code uses n qubits, there can only be $2^n$ linearly independent vectors in the Hilbert space, so the number of errors times the number of codewords must be less than or equal to $2^n$. If the code corrects all errors of weight t or less and encodes k qubits, this means

  $\sum_{j=0}^{t} 3^j \binom{n}{j}\, 2^k \le 2^n$.   (7.1)

There are $\binom{n}{j}$ ways to choose j qubits to be affected by j errors, and $3^j$ ways these errors can be tensor products of $\sigma_x$, $\sigma_y$, and $\sigma_z$. This bound is completely analogous to the classical Hamming bound, with two differences: the quantum bound has a factor of $3^j$, reflecting the additional quantum-mechanical degrees of freedom; and the quantum bound only applies to nondegenerate codes. The distinction between degenerate and nondegenerate codes is a purely quantum-mechanical one; there are no classical degenerate codes. It is unknown whether there are any degenerate codes that exceed the quantum Hamming bound (7.1).

If we let the block size n grow arbitrarily large, we should also increase the expected number of errors. Consider the depolarizing channel, which is equally likely to have $\sigma_x$, $\sigma_y$, and $\sigma_z$ errors. Suppose there is a probability p of having one of these errors on a given qubit and 1−p of having no error. The expected number of errors on a block of size n is t = np. The number of likely errors will be about the number of errors of length t, so the quantum Hamming bound becomes

  $3^{np} \binom{n}{np}\, 2^k \le 2^n$.   (7.2)

Taking the logarithm and rearranging gives us

  $\frac{k}{n} \le 1 - p \log_2 3 - H(p)$.   (7.3)

Again, $H(x) = -x \log_2 x - (1-x) \log_2 (1-x)$, as in the asymptotic form of the classical Hamming bound (1.16). As in the classical case, we can achieve the quantum Hamming bound by using random codes. Unlike the classical case, this is not always the most efficient use of the channel, so (7.3) does not give the actual channel capacity of the quantum channel. I will discuss this question in greater detail in section 7.6.

For minimum distance codes, it is not in general possible to achieve the quantum Hamming bound. We can, however, set a lower bound, the quantum Gilbert-Varshamov bound. Recall that

  $\langle \psi_i | E_a^\dagger E_b | \psi_j \rangle = C_{ab}\, \delta_{ij}$   (7.4)

for a quantum code correcting errors $\{E_a\}$ with basis states $|\psi_i\rangle$. The matrix $C_{ab}$ is Hermitian, but is further constrained by the algebraic relationships of the operators $E_a^\dagger E_b$. It is better to consider $C_{ab}$ as a function of operators $O = E_a^\dagger E_b$. When the possible errors are all operators of weight up to t, O can be any operator of weight $\le 2t$. Slightly more generally, for a code of distance d, O is any operator of weight less than d. Therefore, the statement

  $\langle \psi | E_a^\dagger E_b | \psi \rangle = C_{ab}$   (7.5)

is actually

  $N = \sum_{j=0}^{d-1} 3^j \binom{n}{j}$   (7.6)

constraints on the state $|\psi\rangle$. For generic $C_{ab}$ (satisfying the appropriate algebraic constraints) and a generic linear subspace V of dimension larger than N, there will be states $|\psi\rangle$ satisfying equation (7.5).

Suppose we choose generic $C_{ab}$ and a generic state $|\psi_1\rangle$ satisfying (7.5). Now restrict attention to the subspace orthogonal to $|\psi_1\rangle$ and to all $O |\psi_1\rangle$ for operators O of weight less than d. For an n-qubit Hilbert space, this subspace has dimension $2^n - N$. Choose a generic state $|\psi_2\rangle$ in this subspace satisfying (7.5). Now restrict attention to the subspace orthogonal to both $O |\psi_1\rangle$ and $O |\psi_2\rangle$. We can again pick $|\psi_3\rangle$ in this subspace satisfying (7.5), and so on, choosing $|\psi_i\rangle$ orthogonal to all $O |\psi_j\rangle$ ($j \le i-1$) and satisfying (7.5). We can continue doing this as long as

  $\sum_{j=0}^{d-1} 3^j \binom{n}{j}\; i < 2^n$.   (7.7)

Therefore, we can always find a distance d quantum code encoding k qubits in n qubits satisfying

  $\sum_{j=0}^{d-1} 3^j \binom{n}{j}\, 2^k \ge 2^n$.   (7.8)

This is the quantum Gilbert-Varshamov bound. In the limit where t = pn = d/2, with n large, this becomes

  $\frac{k}{n} \ge 1 - 2p \log_2 3 - H(2p)$.   (7.9)
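Both asymptotic bounds are one-liners to evaluate; the sketch below is my own, including a finite-size check that the [5, 1, 3] code exactly saturates (7.1).

    from math import comb, log2

    def H(x):
        return 0.0 if x in (0.0, 1.0) else -x*log2(x) - (1-x)*log2(1-x)

    def hamming_rate(p):      # quantum Hamming bound, equation (7.3)
        return 1 - p*log2(3) - H(p)

    def gv_rate(p):           # quantum Gilbert-Varshamov bound, equation (7.9)
        return 1 - 2*p*log2(3) - H(2*p)

    for p in [0.01, 0.03, 0.05]:
        print(f"p = {p}: GV {gv_rate(p):.3f} <= k/n <= QH {hamming_rate(p):.3f}")

    # Equation (7.1) for n = 5, k = 1, t = 1: (1 + 3*5) * 2 = 32 = 2^5,
    # so the five-qubit code saturates the quantum Hamming bound.
    n, k, t = 5, 1, 1
    assert sum(3**j * comb(n, j) for j in range(t + 1)) * 2**k == 2**n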
The quantum Hamming bound only limits the efficiency of nondegenerate codes. For degenerate codes, we can still set a bound, but it will not be as restrictive. For an [n, k, d] code, we can choose any d−1 qubits and remove them. The remaining n−d+1 qubits must contain enough information to reconstruct not only the $2^k$ possible codewords, but the state of the missing qubits as well. Because the missing qubits can be any qubits, we can choose them to have maximum entropy. Then

  $n - d + 1 \ge d - 1 + k$   (7.10)
  $n \ge 2(d-1) + k$.   (7.11)

This is the Knill-Laflamme bound [16, 54], a quantum analogue of the classical Singleton bound. A code to correct t errors must have distance d = 2t+1, so for such a code, n ≥ 4t + k. This bound holds for any code with a given minimum distance, whether degenerate or nondegenerate. For instance, it demonstrates that the smallest one-error-correcting quantum code uses five qubits.

7.2 Weight Enumerators and Linear Programming Bounds

In the classical theory of error-correcting codes, the distribution of codeword weights contains a great deal of information about the code. This distribution is often encoded in the coefficients of a polynomial, and algebraic relationships between these polynomials, known as weight enumerators, can be very useful for setting bounds on classical codes. Many of the same ideas can be adapted for use with quantum error-correcting codes [23, 55, 56, 57].

Let $A_d$ be the number of elements of the stabilizer S with weight d, and let $B_d$ be the number of elements of N(S) with weight d (ignoring overall phases). Note that $B_d \ge A_d \ge 0$. Define polynomials

  $A(z) = \sum_{d=0}^{n} A_d z^d$   (7.12)
  $B(z) = \sum_{d=0}^{n} B_d z^d$.   (7.13)

$A_0 = B_0 = 1$ always. For a code of distance d, $B_{d'} = A_{d'}$ for all d′ < d. For a nondegenerate code, $B_{d'} = A_{d'} = 0$ for d′ < d. A degenerate code has $B_{d'} = A_{d'} > 0$ for at least one d′ < d. A(z) and B(z) are the weight enumerators of S and N(S).

The polynomials A(z) and B(z) satisfy the quantum MacWilliams identity:

  $B(z) = \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{1-z}{1+3z} \right)$.   (7.14)

In other words,

  $\sum_{d=0}^{n} B_d z^d = \frac{1}{2^{n-k}} \sum_{d=0}^{n} A_d (1-z)^d (1+3z)^{n-d}$.   (7.15)

Matching coefficients of $z^d$, we find

  $B_d = \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \left[ \sum_{s=0}^{d} (-1)^s\, 3^{d-s} \binom{d'}{s} \binom{n-d'}{d-s} \right] A_{d'}$.   (7.16)

To prove this, note that an operator $E \in G$ of weight d will either commute with every operator $M \in S$ or commute with exactly half of the operators in S. Therefore, if we sum

  $\sum_{M \in S} (-1)^{f_M(E)}$,   (7.17)

we will get zero if $E \notin N(S)$ and $2^{n-k}$ if $E \in N(S)$ (recall that $f_M(E)$ is 0 if M and E commute and 1 if they do not). Therefore, we can write $B_d$ as follows:

  $B_d = \frac{1}{2^{n-k}} \sum_{E} \sum_{M \in S} (-1)^{f_M(E)}$,   (7.18)

where the sum over E is taken over all $E \in G$ of weight d. We reverse the order of summation and break up the sum over M into the sum over d′ and the sum over $M \in S$ of weight d′ to get

  $B_d = \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \sum_{M} \sum_{E} (-1)^{f_M(E)}$.   (7.19)

Now, any given M and E will both act nontrivially on some set of s qubits. Of those s, they will act as different Pauli matrices on t qubits and as the same Pauli matrix on s−t qubits. Now,

  $(-1)^{f_M(E)} = (-1)^t$.   (7.20)

The number of operators E that agree with M on s−t qubits and disagree on t qubits is

  $1^{s-t}\, 2^t\, 3^{d-s} \binom{s}{t} \binom{d'}{s} \binom{n-d'}{d-s}$.   (7.21)

Note that this does not depend on M. Thus,

  $B_d = \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \sum_{M} \sum_{s=0}^{d} \sum_{t=0}^{s} \left[ 1^{s-t} (-2)^t \binom{s}{t} \right] 3^{d-s} \binom{d'}{s} \binom{n-d'}{d-s}$   (7.22)
  $= \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \sum_{M} \sum_{s=0}^{d} (1-2)^s\, 3^{d-s} \binom{d'}{s} \binom{n-d'}{d-s}$   (7.23)
  $= \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \sum_{M} \sum_{s=0}^{d} (-1)^s\, 3^{d-s} \binom{d'}{s} \binom{n-d'}{d-s}$   (7.24)
  $= \frac{1}{2^{n-k}} \sum_{d'=0}^{n} \left[ \sum_{s=0}^{d} (-1)^s\, 3^{d-s} \binom{d'}{s} \binom{n-d'}{d-s} \right] A_{d'}$.   (7.25)

This proves the quantum MacWilliams identity (7.14) for stabilizer codes.
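Equation (7.16) can be checked mechanically. The sketch below is my own: starting from the stabilizer weight distribution of the five-qubit code, A = (1, 0, 0, 0, 15, 0), it reproduces the distribution B = (1, 0, 0, 30, 15, 18) for N(S) quoted below.

    from math import comb

    def macwilliams(A, n, k):
        """Compute B_d from A_d via the quantum MacWilliams identity (7.16)."""
        B = []
        for d in range(n + 1):
            total = 0
            for dp, a in enumerate(A):
                kern = sum((-1)**s * 3**(d - s) * comb(dp, s) * comb(n - dp, d - s)
                           for s in range(d + 1))
                total += kern * a
            B.append(total // 2**(n - k))
        return B

    A = [1, 0, 0, 0, 15, 0]            # weights of the 16 elements of S
    print(macwilliams(A, n=5, k=1))    # -> [1, 0, 0, 30, 15, 18]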
The coefficients $A_d$ and $B_d$ can also be defined for non-stabilizer codes, and equation (7.14) will still hold, so any bounds derived strictly from the quantum MacWilliams identity will hold for any quantum code, not just stabilizer codes. For any code of distance d, the coefficients $A_d$ and $B_d$ satisfy the additional constraints

  $B_0 = A_0 = 1$   (7.26)
  $B_{d'} = A_{d'} \quad (d' < d)$   (7.27)
  $B_{d'} \ge A_{d'} \ge 0 \quad (\forall\, d')$.   (7.28)

For a nondegenerate code, $A_{d'} = B_{d'} = 0$ for d′ < d. These constraints, along with equation (7.14), restrict the allowed values of $A_d$ and $B_d$. The constraints are all linear, so standard linear programming techniques will find solutions. If there are no possible integer values of $A_d$ and $B_d$ satisfying all of the constraints, there is no [n, k, d] code. Otherwise, the possible solutions give us parameters of possible codes. For instance, applying the constraints for a [5, 1, 3] code produces the unique solution $A_i = (1, 0, 0, 0, 15, 0)$ and $B_i = (1, 0, 0, 30, 15, 18)$. Therefore, the usual five-qubit code is essentially the only [5, 1, 3] code. There are thus no degenerate five-qubit codes.

Even tighter linear programming bounds than those produced by the quantum MacWilliams identity are possible. This can be done using the quantum shadow enumerator. The shadow Sh(S) of a code S is defined as the set of $E \in G$ satisfying

  $f_M(E) \equiv \mathrm{wt}(M) \pmod 2$   (7.29)

for all $M \in S$ (where wt(M) is the weight of M). Define $S_d$ to be the number of elements of Sh(S) of weight d (again, ignoring overall phases), and

  $S(z) = \sum_{d=0}^{n} S_d z^d$.   (7.30)

S(z) is the shadow enumerator of S. Then

  $S(z) = \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{z-1}{1+3z} \right)$.   (7.31)

If S contains only operators of even weight, then $E \in \mathrm{Sh}(S)$ iff $f_M(E) = 0$ for all $M \in S$, so Sh(S) = N(S), and $S_d = B_d$. Furthermore, in this case A(z) is an even function, so

  $S(z) = B(z) = \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{1-z}{1+3z} \right)$   (7.32)
  $= \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{z-1}{1+3z} \right)$.   (7.33)

If S contains an element of odd weight, consider the subset $S' \subset S$ of even weight operators. Then S′ has exactly $2^{n-k-1}$ elements. This is true because in order for $M, M' \in S$ to commute, they must overlap and disagree only on an even number of qubits, so $\mathrm{wt}(M M') \equiv \mathrm{wt}(M) + \mathrm{wt}(M') \pmod 2$. The shadow of S is just $\mathrm{Sh}(S) = N(S') - N(S)$. Let B′(z) and A′(z) be the weight enumerators of S′ and N(S′). Then

  $S(z) = B'(z) - B(z)$   (7.34)
  $= \frac{1}{2^{n-k-1}} (1+3z)^n\, A'\!\left( \frac{1-z}{1+3z} \right) - \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{1-z}{1+3z} \right)$   (7.35)
  $= \frac{1}{2^{n-k}} (1+3z)^n \left[ 2 A'\!\left( \frac{1-z}{1+3z} \right) - A\!\left( \frac{1-z}{1+3z} \right) \right]$.   (7.36)

Now, $A'_d = A_d$ for even d and $A'_d = 0$ for odd d, so $A(z) + A(-z) = 2 A'(z)$, and

  $S(z) = \frac{1}{2^{n-k}} (1+3z)^n\, A\!\left( \frac{z-1}{1+3z} \right)$.   (7.37)

Again, the shadow enumerator can be defined for non-stabilizer codes and satisfies the same relationship with A(z) as for stabilizer codes. In both the stabilizer and non-stabilizer cases, $S_d \ge 0$. Along with (7.31), this provides additional constraints for the linear programming bound, restricting the parameters of any code. These bounds have been applied to all possible codes with n ≤ 30 [23, 26]. Among other things, they show that the smallest possible distance five code is an [11, 1, 5] code and that degenerate codes in this region all fall below the quantum Hamming bound. The shadow enumerator can also be used to show that any nondegenerate code on n qubits can correct at most $\lfloor (n+1)/6 \rfloor$ errors.
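Continuing the sketch above (again my own check): since substituting $z \to (z-1)/(1+3z)$ in place of $(1-z)/(1+3z)$ only introduces a factor $(-1)^{d'}$, the shadow coefficients follow from the same kernel as (7.16). For the five-qubit code, whose stabilizer has only even-weight elements, the shadow should equal B(z), and it does:

    from math import comb

    def shadow(A, n, k):
        """Compute S_d from A_d via equation (7.31)."""
        S = []
        for d in range(n + 1):
            total = 0
            for dp, a in enumerate(A):
                kern = sum((-1)**s * 3**(d - s) * comb(dp, s) * comb(n - dp, d - s)
                           for s in range(d + 1))
                total += (-1)**dp * kern * a    # extra sign from z -> -z
            S.append(total // 2**(n - k))
        return S

    A = [1, 0, 0, 0, 15, 0]
    print(shadow(A, n=5, k=1))   # -> [1, 0, 0, 30, 15, 18], equal to B_d,
                                 # with all S_d >= 0 as the LP constraint requires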
7.3 Bounds on Degenerate Stabilizer Codes

It is still unknown whether there are any degenerate codes that exceed the limits set by the quantum Hamming bound, but for certain restricted cases, we can show that there are not. For codes using fewer than 30 qubits, the linear programming bounds of the previous section show this. In this section, I will show that the statement is also true for all stabilizer codes that correct one or two errors. The results can be extended slightly beyond stabilizer codes, but do not apply to the most general possible code.

For a one-error-correcting degenerate code, the stabilizer S will contain one or more operators of weight one or two. Weight one operators totally constrain a qubit, and both the operator and the qubit can be eliminated, converting an [n, k, d] code into an [n−1, k, d] code. If the latter satisfies the quantum Hamming bound, the former will as well. Suppose there are l independent weight two operators $M_1, \ldots, M_l$ in S. Let D be the group generated by $M_1, \ldots, M_l$. Note that S − D will contain no operators of weight less than three. The weight two operators in D tell us which errors produce the same states. For instance, if $M_1 = \sigma_{z1} \sigma_{z2}$, then $\sigma_{z1} |\psi\rangle = \sigma_{z2} |\psi\rangle$ for any codeword $|\psi\rangle$.

Any operator in N(D) will take states fixed by D to states fixed by D. The total dimensionality of the subspace fixed by D is $2^{n-l}$. Suppose that none of the operators in D acts on some qubit j. Then all three of the operators $\sigma_{xj}$, $\sigma_{yj}$, and $\sigma_{zj}$ are in N(D), and they are not degenerate. Therefore, they must produce orthogonal states in the subspace fixed by D for each basis codeword. There are always at least n−2l qubits not affected by D, since each generator of D can add at most two qubits. Therefore,

  $[1 + 3(n-2l)]\, 2^k \le 2^{n-l}$   (7.38)
  $k \le n - l - \log_2 [1 + 3(n-2l)]$.   (7.39)

Recall that the quantum Hamming bound says that

  $k \le n - \log_2 (1 + 3n)$,   (7.40)

so (7.39) is more restrictive when

  $l + \log_2 [1 + 3(n-2l)] \ge \log_2 (1+3n)$   (7.41)
  $l \ge \log_2 \left[ \frac{1+3n}{1 + 3(n-2l)} \right]$   (7.42)
  $= \log_2 \left[ 1 + \frac{6l}{1 + 3(n-2l)} \right]$.   (7.43)

Assuming n ≥ 2l, we see that the quantum Hamming bound will still hold if $l \ge \log_2 (1+6l)$. This is true for l ≥ 5. For l = 4, (7.43) holds for n ≥ 9; for l = 3, it holds for n ≥ 7; for l = 2, it holds for n ≥ 5; and for l = 1, it holds for n ≥ 4. The remaining possibilities with n ≥ 2l are ruled out by the linear programming bounds of section 7.2. On the other hand, if l > n/2, then k ≤ n − l ≤ n/2. For n ≥ 13, the quantum Hamming bound is less restrictive than this, so in conjunction with the linear programming bounds, we can conclude that there are no distance three degenerate stabilizer codes that exceed the quantum Hamming bound.

We can make a similar argument for codes to correct two errors. Now let D be generated by the operators of weight four or less in S. There must be at least n−4l qubits that are unaffected by operators in D. All the possible weight one and two errors on those qubits give orthogonal states, so

  $\left[ 1 + 3(n-4l) + \frac{9}{2}(n-4l)(n-4l-1) \right] 2^k \le 2^{n-l}$   (7.44)
  $\left[ 1 - \frac{3}{2}n + \frac{9}{2}n^2 + 6l(1 + 12l - 6n) \right] 2^l \le 2^{n-k}$.   (7.45)

The quantum Hamming bound will still hold if

  $\left[ 1 - \frac{3}{2}n + \frac{9}{2}n^2 + 6l(1 + 12l - 6n) \right] 2^l \ge 1 - \frac{3}{2}n + \frac{9}{2}n^2$   (7.46)
  $\left[ 1 - \frac{6l(6n - 12l - 1)}{1 - 3n/2 + 9n^2/2} \right] 2^l \ge 1$.   (7.47)

Now, $l(6n - 12l - 1) = -12 \left[ l^2 - (6n-1) l / 12 \right]$ is maximized for $l = (6n-1)/24$.
That means (7.47) will be satisfied when

  $\left[ 1 - \frac{(6n-1)^2}{8 - 12n + 36n^2} \right] 2^l \ge 1$   (7.48)
  $\frac{7}{8 - 12n + 36n^2}\, 2^l \ge 1$   (7.49)
  $7 \cdot 2^{l-2} \ge 9n^2 - 3n + 2$.   (7.50)

If this is true, the code will satisfy the quantum Hamming bound. If it is not true, then

  $l \le 2 - \log_2 7 + \log_2 (9n^2 - 3n + 2)$   (7.51)
  $\le 3 + 2 \log_2 n$.   (7.52)

Then $l(6n - 12l - 1) \le 6nl \le 6n(3 + 2\log_2 n)$, so equation (7.47) will again be satisfied when

  $\left[ 1 - \frac{6n(3 + 2\log_2 n)}{1 - 3n/2 + 9n^2/2} \right] 2^l \ge 1$.   (7.53)

However, for n ≥ 30,

  $\frac{6n(3 + 2\log_2 n)}{1 - 3n/2 + 9n^2/2} \le 0.58$,   (7.54)

so (7.47) will be satisfied for any l with 1 < l ≤ n/4 in the regime of interest. When l = 1, (7.47) becomes

  $1 - \frac{6(6n - 13)}{1 - 3n/2 + 9n^2/2} \ge \frac{1}{2}$.   (7.55)

However, for n ≥ 30,

  $\frac{6(6n - 13)}{1 - 3n/2 + 9n^2/2} \le 0.26$,   (7.56)

so (7.47) is satisfied for l = 1 as well. Therefore, we are left with l > n/4. Again, this implies that k ≤ n − l < 3n/4. This is at least as restrictive as the quantum Hamming bound for n ≥ 52. For n = 31, the quantum Hamming bound says k ≤ n − 13. Therefore, for 31 ≤ n ≤ 51, the only remaining region of interest, the code must have l ≤ n/4 + 5 to violate the quantum Hamming bound. The only possibility with l > n/4 + 4 is l = 12, n = 31.

Assume for the moment that l ≤ n/4 + 4. Then there are at least n−16 qubits in the code that are affected by at most one of the generators of D. This is more than l + 3, so either at least two of the generators of D must each act on two qubits that are unaffected by all of the other generators, or one generator acts on four qubits that are unaffected by all of the other generators. The second case will be more restrictive to the code than the first one, so I will assume the first case holds. Assume without loss of generality that the two generators are $M_{l-1}$ and $M_l$. Then errors on the four qubits affected only by these generators leave the codewords within the subspace fixed by D′, the group generated by $M_1, \ldots, M_{l-2}$. There are 67 errors of weight zero, one, and two on the four qubits, so

  $67 \cdot 2^k \le 2^{n-(l-2)}$   (7.57)
  $k \le n - l - 5$.   (7.58)

This is at least as restrictive as the quantum Hamming bound for any n between 31 and 51. That leaves the case l = 12, n = 31. Even in this case, there must be at least fourteen qubits that are affected by at most one of the generators of D. As before, this is enough to ensure that we can pick two generators of D that will together act on four qubits unaffected by any of the other generators. Again, k ≤ n − l − 5, which is more restrictive than the quantum Hamming bound. Therefore, there are no two-error-correcting degenerate stabilizer codes exceeding the quantum Hamming bound.

The methods of this section could be adapted and perhaps applied to codes correcting three or more errors, but it gets more difficult for each additional error, since the cases with l > n/(2t) must be treated on a special basis, and the range of n for which this could violate the quantum Hamming bound grows rapidly with t. Eventually, it might well be true that some code with enough degeneracies does violate the quantum Hamming bound.

Even though we cannot rule out the possibility of a sufficiently large degenerate code violating the quantum Hamming bound, we can still set a less restrictive bound on degenerate stabilizer codes by constructing a classical code from the quantum code.
Since bounds on the efficiencies of classical codes are known, we can thereby get bounds on the possible parameters of quantum codes. To produce a classical code from a quantum code, first put the code in standard form, as per (4.3). In particular, note the r × k matrix $A_2$. Here $r \le n - k$, but by performing single-qubit rotations from N(G), we can always convert one generator to a product of $\sigma_z$'s, so we can ensure that $r \le n - k - 1$. If we look at the classical code C with $k \times (r+k)$ generator matrix $(A_2^T | I)$, then C encodes k bits in at most n−1 bits. If the original quantum code could correct t quantum errors, it turns out that the classical code C can correct t classical bit flip errors, whether the quantum code was degenerate or nondegenerate. Therefore, the existence of an [n, k, d] quantum code implies the existence of an [n−1, k, d] classical code.
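The small-case claims in the distance-three argument above are easy to verify by brute force; the scan below is my own quick check of inequality (7.43), not part of the thesis.

    from math import log2

    def qhb_survives(n, l):
        """Inequality (7.43): l >= log2(1 + 6l / (1 + 3(n - 2l)))."""
        return l >= log2(1 + 6*l / (1 + 3*(n - 2*l)))

    for l in range(1, 6):
        first_n = next(n for n in range(2*l, 200) if
                       all(qhb_survives(m, l) for m in range(n, 200)))
        print(f"l = {l}: (7.43) holds for all n >= {first_n}")

    # Output matches the text: n >= 4, 5, 7, 9 for l = 1, 2, 3, 4, and for
    # l = 5 the inequality holds for every n >= 2l.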
7.4 Error-Correcting Codes and Entanglement Purification Protocols

Before discussing bounds on the channel capacity, I will discuss another way of looking at quantum codes that is sometimes helpful for thinking about the channel capacity. Consider the situation where Alice prepares a number of EPR pairs and sends one member of each pair to Bob. In general, both the qubits that Alice keeps and the qubits she sends to Bob may be subject to errors and decoherence. This means that Alice and Bob will share a number of imperfect pairs. If Alice attempts to teleport a state using these imperfect EPR pairs, for instance, the state that Bob receives will be incorrect. Alice and Bob wish to perform some local operations on their halves of the imperfect pairs so that they are left with a smaller number of perfect pairs (or at least better ones). A protocol to do this is called an entanglement purification protocol (or EPP) [17, 42].

Depending on the situation, Bob and Alice may or may not be allowed to communicate with each other and perform operations conditioned on the results of measurements by the other one. If both Bob and Alice can communicate with each other via classical channels, the possible protocols they can implement are called two-way entanglement purification protocols (or 2-EPPs). If Bob can only receive classical information (as well as qubits) from Alice, but not transmit, then Bob and Alice are restricted to using one-way entanglement purification protocols (or 1-EPPs). In principle, there is another possibility: Bob and Alice might not be able to communicate classically at all. However, it turns out that the protocols available to them in this case are equivalent to the 1-EPPs. On the other hand, it is known that in some circumstances, 2-EPPs allow more good pairs to be purified than 1-EPPs do.

One remarkable fact about 1-EPPs is that they are equivalent to quantum error-correcting codes. Suppose we have a quantum code. We can make a 1-EPP out of it as follows: Alice encodes the qubits she is going to send to Bob using the code; then Bob corrects and decodes. The encoded qubits that are thus preserved in the channel retain their entanglement with the qubits Alice kept, and thus form part of a good EPR pair. The number of good pairs is just equal to the number of encoded qubits. Conversely, suppose we have a 1-EPP that distills k good pairs from n noisy pairs, and we wish to make a quantum code. In this case, Alice is the encoder and Bob is the decoder for the code. Alice creates n EPR pairs and sends them to Bob, then performs her half of the 1-EPP. Since she cannot receive transmissions from Bob, she does not need to wait until Bob receives the qubits to do this. This is why a quantum code is equivalent to a 1-EPP and not a 2-EPP. After she has performed her half of the purification protocol, sending any necessary classical information, she takes the k qubits she wishes to protect and performs her half of the teleportation protocol using her half of what will be the k good pairs. Again, she sends the classical information about the measurement results to Bob. Bob now receives the qubits, plus all the classical information. He completes the purification protocol, purifying k good pairs. Since they are good EPR pairs, when he then completes the teleportation protocol, the resulting state is the correct one, and the whole process acts like a code encoding k qubits in n qubits.

7.5 Capacity of the Erasure Channel

Most quantum channels are very difficult to analyze. However, the channel capacity is known for at least one simple channel of interest. The erasure channel is the channel for which every qubit sent through it has some chance p of being totally randomized; however, when this happens, we always know on which qubit it occurred. The capacity of the erasure channel for both quantum codes and 2-EPPs is straightforward to calculate.

The capacity for 2-EPPs is particularly straightforward. If Alice sends n EPR pairs through the channel, pn of them will be destroyed, but (1−p)n will remain intact. Furthermore, Bob will know which pairs remain intact, so he tells Alice and they discard the useless pairs. This achieves a rate of 1−p. Clearly, it is impossible to do better than this, so the capacity for a 2-EPP is just 1−p.

With a 1-EPP or quantum code, we cannot do as well, because Bob cannot tell Alice which pairs she should keep and which she should throw away. In fact, we can set an upper bound on the capacity of 1−2p. Suppose the erasure rate of p in the channel is actually caused by Charlie, who steals any given qubit with probability p, replaces any stolen qubits with random ones, and then tells Bob which qubits he stole. When p = 1/2, Bob has exactly the same number of valid pairs as Charlie. If there were any operations Alice could make without consulting Bob that enabled him to purify even a single valid pair, Charlie could do the same thing as Bob, also giving a valid pair. Now when Alice attempts to teleport something to Bob, she is also teleporting it to Charlie. This would allow the cloning of a quantum state. Therefore, the rate for p > 1/2 is zero. For p < 1/2, we can imagine Alice somehow knows n(1−2p) of the pairs that will not be stolen by Charlie. The remaining 2pn pairs she is uncertain about; of them, pn will be stolen by Charlie, again leaving him with the same number of good pairs from this set as Bob has. If Alice attempts to purify more than n(1−2p) pairs with Bob, she will therefore also be purifying pairs with Charlie, again leading to state cloning. Therefore, the capacity is bounded above by 1−2p.

This is, in fact, the actual achievable capacity for this channel. Suppose we take a random Abelian subgroup of $G_n$ with n−k generators. This subgroup will act as the stabilizer S of a code. If we encode k qubits using this code and then send them through the erasure channel, for large n, with high probability, pn known qubits will have been randomized. We need to distinguish between the $4^{pn}$ possible errors on these qubits.
Since the error operators are all on the same pn qubits, there are again 4^{pn} products of these operators. If all but a measure-zero fraction of these products anticommute with some element of S, then we will be able to correct the errors and decode the k qubits, with fidelity approaching one for large n. Since the generators are chosen randomly, each one will commute with half of the possible error products and anticommute with the other half. The different generators commute and anticommute with these operators independently, so the number of operators that commute with all n − k generators is

4^{pn} / 2^{n−k} = 2^{k−(1−2p)n} = 2^{(r−1+2p)n},   (7.59)

where r is the rate: k = rn. As long as r < 1 − 2p, the chance of not being able to distinguish all the likely errors goes to zero as n → ∞. Therefore, a random stabilizer code can give us rate 1 − 2p. Since this coincides with the upper bound on the capacity, it is the actual capacity of the erasure channel.

7.6 Capacity of the Depolarizing Channel

The depolarizing channel is a very natural channel to consider. In this channel, with probability 1 − p, each qubit is left alone. In addition, there are equal probabilities p/3 that σx, σy, or σz affects the qubit. We can apply methods similar to those used for the erasure channel to place upper and lower bounds on the capacity of the depolarizing channel. However, these bounds currently do not meet, so the actual capacity of the depolarizing channel is unknown.

The depolarizing channel can also be simulated by imagining Charlie is randomly stealing some qubits from the channel. If Charlie steals a qubit with probability q and replaces it with a random qubit (not telling Bob which one was stolen), there is still a 1/4 chance that Charlie happens to replace the stolen qubit with one in the same state. There is only a chance q/4 of Charlie applying each of σx, σy, and σz. Therefore, this situation corresponds to the depolarizing channel with p = 3q/4. We can make a cloning argument just as with the erasure channel to set an upper bound on the capacity. Again we find that the capacity is limited by 1 − 2q = 1 − 8p/3. When p > 3/8, the rate of transmission is necessarily zero. Actually, we can set a tighter upper bound than this. Randomly stealing qubits is not the best eavesdropping method available to Charlie that will look like the depolarizing channel. The best eavesdropping method actually allows him to produce the same state as Bob whenever p > 1/4. This means that the rate is limited to 1 − 4p. This is the asymptotic form of the Knill-Laflamme bound, which was derived for codes with a fixed minimum distance in section 7.1.

We can set a lower bound on the achievable rate by again considering the rate for a random stabilizer code. If we encode k qubits in n qubits using a random stabilizer S, a typical error will act on about pn qubits. We need all but a measure-zero fraction of the likely errors to be distinguishable from each other. The errors E and F are distinguishable if E†F anticommutes with some element of S, and are not if it does not. The typical product E†F actually does not have weight 2pn. On any given qubit, there is a chance p² that E and F both have nontrivial action. If they act as different Pauli matrices, the product will still act on that qubit; if they act as the same Pauli matrix, the product will not act on that qubit at all. The probability that both act as the same Pauli matrix is p²/3. Therefore, the expected weight of the product E†F is (2p − 4p²/3)n.
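As a quick sanity check of this expectation, the per-qubit counting can be enumerated explicitly. The short Python sketch below is illustrative only (it is not part of the original text): each of E and F acts on a given qubit as the identity with probability 1 − p and as each of σx, σy, σz with probability p/3, and the single-qubit product is trivial exactly when the two factors are the same.

from itertools import product

def nontrivial_fraction(p):
    # Per-qubit probability that E^dagger F acts nontrivially: the
    # product of two Paulis is the identity iff the factors agree
    # (including the case where both are the identity).
    choices = [("I", 1 - p)] + [(s, p / 3) for s in "xyz"]
    return sum(pa * pb
               for (a, pa), (b, pb) in product(choices, repeat=2)
               if a != b)

for p in (0.05, 0.10, 0.15):
    assert abs(nontrivial_fraction(p) - (2*p - 4*p**2/3)) < 1e-12
print(nontrivial_fraction(0.10))   # 0.18666..., i.e. 2p - 4p^2/3 at p = 0.1

Multiplying this per-qubit probability by n reproduces the expected weight just quoted.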
Let x = 2p − 4p²/3, and let the number of errors of weight w be N(w). Then the number of different likely products is about N(xn), and therefore the number of typical products that commute with everything in S is N(xn)/2^{n−k}. Now, there are N(pn) likely errors, so the number of ways we can pair them into products is N(pn)[N(pn) − 1]/2 ≈ N(pn)²/2. This means that the number of pairs producing any given product O of weight xn is about

N(pn)² / [2 N(xn)].   (7.60)

For each of the pairs that gives one of the N(xn)/2^{n−k} products that commute with S, we must remove one of the errors in the pair from the set of likely errors. Therefore, we must remove

N(pn)² / 2^{n−k+1}   (7.61)

errors. We want to remove only a measure-zero fraction of the errors, so we wish this number to be small compared to N(pn) for large n. Thus,

N(pn) / 2^{n−k+1} ≪ 1   (7.62)
N(pn) ≪ 2^{n−k+1}   (7.63)
k/n < 1 − (1/n) log₂ N(pn) = 1 − p log₂ 3 − H(p).   (7.64)

This is just the quantum Hamming bound (7.3). In other words, a random code saturates the quantum Hamming bound.

However, the quantum Hamming bound only limits the efficiency of nondegenerate codes. The typical element of a random stabilizer will have weight 3n/4, which is much larger than pn for any p where the rate could possibly be nonzero. Therefore, a random code will have a negligible number of degenerate errors, and the quantum Hamming bound will still apply. However, if we choose the stabilizer to be of a restricted form rather than totally random, we can choose it to have very many degeneracies, and the quantum Hamming bound may be exceeded, although existing codes only allow us to exceed the rate of a random code by a very small amount. Shor and Smolin showed that by concatenating a random code with a simple repetition code (|0〉 becomes the tensor product of |0〉's and |1〉 becomes the tensor product of |1〉's), the rate of the code is improved slightly near the zero-rate limit. The optimum block size for repetition turns out to be five.

We can still set an upper bound on the efficiency of a degenerate stabilizer code using arguments similar to those that gave us the capacity of a random stabilizer code. Note that this upper bound does not necessarily apply to all codes, so it may not be a strict upper bound on the capacity. However, nonstabilizer codes are very difficult to work with, so it does provide a practical upper bound on the capacity. To give this bound, assume that every element of S actually has weight xn. This bound is unlikely to be achievable, since the product of two operators of weight xn will only rarely have weight xn again. There are at least N(xn)/2^{n−k} operators of weight xn that commute with S, but 2^{n−k} of them are in S. Therefore, in the best case, there are only N(xn)/2^{n−k} − 2^{n−k} operators that can potentially cause a problem. In the limit where n and k = rn are both large, either N(xn)/2^{n−k} will dominate the number of troublesome operators, or N(xn)/2^{n−k} ≪ 2^{n−k}. In the first case, the calculation goes through as for a completely random stabilizer, giving us a capacity only at the quantum Hamming bound. In the second case,

N(xn) ≪ 2^{2(n−k)}   (7.65)
r = k/n < 1 − (1/2n) log₂ N(xn) = 1 − (x/2) log₂ 3 − H(x)/2.   (7.66)

Since x = 2p − 4p²/3 < 2p, this is higher than the quantum Hamming bound. Equation (7.66) gives an upper bound on the capacity of the depolarizing channel achievable using stabilizer codes. It is shown in figure 7.1 along with the Knill-Laflamme bound and the quantum Hamming bound.
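The three curves of figure 7.1 are easy to reproduce numerically. The sketch below is my reconstruction (not code from the original text); it evaluates the quantum Hamming bound (7.64), the asymptotic Knill-Laflamme bound 1 − 4p, and the degenerate-code bound (7.66), with H the binary entropy function.

from math import log2

def H(x):
    return 0.0 if x in (0.0, 1.0) else -x*log2(x) - (1 - x)*log2(1 - x)

def quantum_hamming(p):            # equation (7.64)
    return 1 - p*log2(3) - H(p)

def knill_laflamme(p):             # asymptotic Knill-Laflamme bound
    return 1 - 4*p

def degenerate_bound(p):           # equation (7.66)
    x = 2*p - 4*p**2/3             # expected weight fraction of E^dagger F
    return 1 - (x/2)*log2(3) - H(x)/2

for p in (0.05, 0.10, 0.15, 0.189):
    print(f"p={p:.3f}  QH={quantum_hamming(p):+.3f}  "
          f"KL={knill_laflamme(p):+.3f}  eq(7.66)={degenerate_bound(p):+.3f}")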
Cleve has also proved a bound on the capacity achievable using degenerate stabilizer codes, but it is slightly worse than (7.66) everywhere in the region of interest, so it is not shown in the figure.

[Figure 7.1: The quantum Hamming bound (dashed), the Knill-Laflamme bound (dotted), and the bound from equation (7.66) (solid), plotted as transmission rate versus error rate.]

Chapter 8

Examples of Stabilizer Codes

There are many known stabilizer codes [10, 11, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 35, 42]. I will not attempt to list them all here, but will instead concentrate on a few interesting individual codes and classes of codes. In a number of cases, I will not just describe the stabilizers of the codes, but will also discuss the normalizers and automorphism groups of the stabilizers, since these are important to realizing fault-tolerant computation in the most efficient possible way.

8.1 Distance Two Codes

For even n, there is always an [n, n − 2, 2] code. The stabilizer S has two generators, one the product of all n σx's and the other the product of all n σz's. For even n, these commute. N(S) consists of tensor products in G that contain an even number of σx's, an even number of σy's, and an even number of σz's. We can write

Xi = σx1 σx(i+1)   (8.1)
Zi = σz(i+1) σzn,   (8.2)

for i = 1, ..., n − 2. The automorphism group A(S) contains all possible permutations of the qubits, as well as the Hadamard rotation R applied to all n qubits at once. If n is a multiple of four, any single-qubit operation in N(G) applied to all the qubits gives an element of A(S). The order of A(S) is thus either 2(n!) or 6(n!).

Swapping qubit i with qubit j switches the (i − 1)th encoded qubit with the (j − 1)th encoded qubit (for 1 < i, j < n). Swapping qubit 1 with qubit i + 1 (i = 1, ..., n − 2) transforms

Xi → Xi
Xj → Xi Xj (j ≠ i)
Zi → Z1 Z2 ··· Zn−2   (8.3)
Zj → Zj (j ≠ i).

Similarly, swapping qubit n with qubit i + 1 (i = 1, ..., n − 2) transforms

Xi → X1 X2 ··· Xn−2
Xj → Xj (j ≠ i)
Zi → Zi   (8.4)
Zj → Zi Zj (j ≠ i).

Swapping the first qubit with the nth qubit performs the transformation

Xi → X1 ··· Xi−1 Xi+1 ··· Xn−2
Zi → Z1 ··· Zi−1 Zi+1 ··· Zn−2.   (8.5)

Performing R on every qubit performs the same transformation as swapping the first and nth qubits, but also performs R on every encoded qubit. For n a multiple of four, performing P on every qubit performs the following operation:

Xi → −Xi Z1 ··· Zi−1 Zi+1 ··· Zn−2
Zi → Zi.   (8.6)

Because these codes are of the CSS form, a CNOT applied to every qubit transversally between two blocks is also a valid fault-tolerant operation, and performs CNOTs between the corresponding encoded qubits.

The case of n = 4, the smallest distance two code, is of particular interest. The code from figure 3.5 can be converted into the form of the codes currently under consideration using single-qubit rotations, although the X and Z operators will need to be redefined. It can be used to detect a single error or to correct a single erasure. In this case,

X1 = σx1 σx2
X2 = σx1 σx3
Z1 = σz2 σz4   (8.7)
Z2 = σz3 σz4.

Switching the second and third qubits or switching the first and fourth qubits both swap the two encoded qubits. Swapping the first and second qubits or the third and fourth qubits produces the transformation

X1 → X1
X2 → X1 X2
Z1 → Z1 Z2   (8.8)
Z2 → Z2.
This is just a CNOT from the second encoded qubit to the first encoded qubit. Similarly, swapping the first and third qubits or the second and fourth qubits performs a CNOT from the first encoded qubit to the second encoded qubit. The transversal Hadamard rotation in this case performs the Hadamard rotations on both encoded qubits and switches them. Applying P to all four qubits performs the gate

X1 → −X1 Z2
X2 → −Z1 X2
Z1 → Z1   (8.9)
Z2 → Z2.

We can recognize this as the encoded conditional sign gate followed by an encoded σz1 σz2. A more extensive discussion of the properties of distance two codes (and a few codes of greater distance) appears in the literature.

8.2 The Five-Qubit Code

The five-qubit code is the shortest possible quantum code to correct one error, and is therefore of immense interest [17, 24]. Its stabilizer is given in table 3.2. Recall that the stabilizer is generated by cyclic permutations of σx ⊗ σz ⊗ σz ⊗ σx ⊗ I. There are five cyclic permutations of this operator, but only four produce independent generators. The stabilizer has sixteen elements: the identity, and the 3 × 5 cyclic permutations of σx ⊗ σz ⊗ σz ⊗ σx ⊗ I, σy ⊗ σx ⊗ σx ⊗ σy ⊗ I, and σz ⊗ σy ⊗ σy ⊗ σz ⊗ I. X is just the tensor product of five σx's and Z is the tensor product of five σz's.

As I noted in section 3.4, the five-qubit code is a linear GF(4) code. Therefore, the operation

T : σx → σy, σz → σx   (8.10)

applied transversally is a valid fault-tolerant operation and performs an encoded version of itself. We can use this operation to derive a valid three-qubit operation for the five-qubit code:

σx ⊗ I ⊗ I → σx ⊗ σy ⊗ σz
I ⊗ σx ⊗ I → σy ⊗ σx ⊗ σz
I ⊗ I ⊗ σx → σx ⊗ σx ⊗ σx   (8.11)
σz ⊗ I ⊗ I → σz ⊗ σx ⊗ σy
I ⊗ σz ⊗ I → σx ⊗ σz ⊗ σy
I ⊗ I ⊗ σz → σz ⊗ σz ⊗ σz.

We can, of course, permute the qubits on the right and apply T or T² to any or all of them and still get a valid three-qubit operation.

Using measurements and this three-qubit operation, we can directly generate a number of additional one- and two-qubit operations. We can always get such gates using the protocol described in section 5.5, but it may be more efficient to get some gates using this three-qubit operation. Suppose we place the data qubit in the third place and prepare the first two qubits in encoded |0〉 states. Then apply the three-qubit operation and measure σy on the first two qubits. The effect is to perform a Hadamard rotation R on the data qubit. Alternatively, prepare the first two qubits in +1 eigenstates of σx, apply the three-qubit gate, and measure σz on the first two qubits. This performs P on the data qubit. By preparing a single ancilla qubit, applying the three-qubit operation, and making a single measurement, we can also get a variety of two-qubit operations.

8.3 A Class of Distance Three Codes

The eight-qubit code of table 3.3 is just one of a class of codes with parameters [2^j, 2^j − j − 2, 3]. Note that according to the quantum Hamming bound, this is the maximal number of encoded qubits for n = 2^j, d = 3. These codes are related to the classical Reed-Muller codes, but are more efficient than the CSS codes formed from the classical Reed-Muller codes. Like the classical Reed-Muller codes, the codes described in this section allow us to efficiently compute the actual error occurring from the measured error syndrome.

The first two generators of these codes are always the same. One is the product of 2^j σx's and the second is the product of 2^j σz's.
We will call these generators MX and MZ, and the remaining j generators will be M1 through Mj. The stabilizers of these codes always contain the stabilizer of one of the distance two codes discussed in section 8.1. This is convenient when correcting errors: we can measure the first two generators and use them to detect whether any error has occurred at all. If not, we do not need to go any further.

It will be convenient to construct the codes by describing the error syndromes of the 3n possible one-qubit errors. I will show that they are all distinct, and then that the generators that give those error syndromes all commute. For these codes, the error syndrome f(E) for the error E is a (j + 2)-bit number. Recall that each bit corresponds to a generator of S, and a given bit is 0 iff E commutes with the corresponding generator. f(E) is a group homomorphism from G to (Z2)^{j+2}. Because of the form of the first two generators, the first two bits of f(σxi) are always 01, the first two bits of f(σzi) are always 10, and the first two bits of f(σyi) are always 11, as they must be to preserve the group structure of G. For the remaining bits of the error syndrome, we number the qubits from 0 to n − 1 and write the qubit number in base two. Then

f(σxi) = 01 ⊕ i   (8.12)
f(σzi) = 10 ⊕ σ(i)   (8.13)
f(σyi) = 11 ⊕ (i + σ(i)).   (8.14)

(Here ⊕ denotes concatenation of the two-bit prefix with the remaining j bits, and + is bitwise addition mod 2.) The function σ(i) is some as yet undefined additive group automorphism on (Z2)^j. We will be able to completely describe it by defining its action on 0...01, 0...010, ..., 10...0.

For this to give a distance three code, the error syndrome must have the property that f(E) ≠ 0 for any weight two operator E ∈ G. By including the stabilizer of a distance two code, we have already ensured that any weight one operator has a nonzero error syndrome. We can immediately see that f(E) ≠ 0 unless E is the product of two Pauli matrices of the same type. Therefore, we need to consider

f(σxl σxm) = 00 ⊕ (l + m)   (8.15)
f(σzl σzm) = 00 ⊕ σ(l + m)   (8.16)
f(σyl σym) = 00 ⊕ (l + m) + σ(l + m),   (8.17)

for l ≠ m. The second and third equations follow because σ is a group homomorphism. Since i = l + m can be anything but 0, σ(l + m) will not be 0 either, and we need only choose σ so that σ(i) ≠ i for any i ≠ 0.

The actual function σ we want to use will depend on whether j is even or odd. For even j, consider the following function σ:

σ(0...0001) = 11...11
σ(0...0010) = 0...001
σ(0...0100) = 0...010   (8.18)
...
σ(1000...0) = 010...0.

Then clearly σ(i) = i/2 for any nonzero i ending in 0. If i does end in 1, then for σ(i) to end in 1 also, the previous bit must have been 0, which means that the bit before that must have been 1, and so on. Therefore, the only possible number for which i = σ(i) could hold is i = 010...101. Because j is even, the first bit of this number is 0. But σ(l) always begins with 1 for any l ending in 1, so even for this particular i, σ(i) ≠ i. Therefore, the error syndrome produces a distance three code. The smallest case is the [16, 10, 3] code given in table 8.1.

MX  σx σx σx σx σx σx σx σx σx σx σx σx σx σx σx σx
MZ  σz σz σz σz σz σz σz σz σz σz σz σz σz σz σz σz
M1  I  σx I  σx I  σx I  σx σz σy σz σy σz σy σz σy
M2  I  σx I  σx σz σy σz σy σx I  σx I  σy σz σy σz
M3  I  σx σz σy σx I  σy σz I  σx σz σy σx I  σy σz
M4  I  σy σx σz I  σy σx σz I  σy σx σz I  σy σx σz

Table 8.1: The stabilizer for a [16, 10, 3] code.
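Both properties are easy to confirm numerically for the smallest case. The following sketch is mine, not code from the original text; it builds the σ of equation (8.18) for j = 4 and checks that σ(i) ≠ i for all nonzero i and that the 3n one-qubit error syndromes of equations (8.12)-(8.14) are all distinct.

def make_sigma(j):
    # images of the basis vectors 0...01, 0...010, ..., 10...0, per (8.18);
    # integers are used as bit strings, least significant bit last
    images = {1: (1 << j) - 1}            # 0...01 -> 11...11
    for m in range(1, j):
        images[1 << m] = 1 << (m - 1)     # each other basis bit is halved
    def sigma(i):
        out = 0
        for m in range(j):                # extend additively (XOR)
            if (i >> m) & 1:
                out ^= images[1 << m]
        return out
    return sigma

j = 4                                     # the [16, 10, 3] code
n = 1 << j
sigma = make_sigma(j)
assert all(sigma(i) != i for i in range(1, n))

syndromes = set()
for i in range(n):                        # f(E) = (2-bit prefix, j bits)
    syndromes.add((0b01, i))              # sigma_x on qubit i
    syndromes.add((0b10, sigma(i)))       # sigma_z on qubit i
    syndromes.add((0b11, i ^ sigma(i)))   # sigma_y on qubit i
assert len(syndromes) == 3 * n            # all one-qubit errors distinct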
We do still need to verify that it is an actual code by verifying that there are commuting generators that give these error syndromes. The first two generators MX and MZ will always commute with the other j generators, since f(σxi) and f(σzi) each have a 0 in the rth position for n/2 values of i and a 1 in the rth position for n/2 values of i. When the rth bit of f(σxi) is 0 and the rth bit of f(σzi) is 1, then the rth generator is the tensor product of σxi with something else (thus, this generator commutes with σxi and anticommutes with σzi). Other combinations of bits will produce I, σyi, or σzi in the ith place, and we can determine the complete form of Mr in this way.

We need only check that Mr and Ms commute. Let fr(E) be the (r + 2)th bit of f(E), that is, the bit corresponding to Mr. I assume without loss of generality that s > r. The binary matrix representation of S is closely related to the error syndrome, and Mr and Ms commute iff

∑_{i=0}^{n−1} [fr(σxi) fs(σzi) + fr(σzi) fs(σxi)] = 0 (mod 2).   (8.19)

There are a few possible cases to consider:

• j > s > r + 1 > 2: In this case, fs(σzi) is equal to the sum of the jth bit of i and the (s − 1)th bit, and fr(σzi) is the sum of the jth bit of i and the (r − 1)th bit. On the other hand, fr(σxi) is just equal to the rth bit of i and fs(σxi) is equal to the sth bit of i. The jth, (r − 1)th, and (s − 1)th bits are distinct from bits r and s. Therefore, the fr(σxi) fs(σzi) term contributes to the sum when the rth bit of i is 1 and the jth and (s − 1)th bits of i are different. This is true for n/4 values of i. The fr(σzi) fs(σxi) term similarly contributes to the sum for n/4 values of i. Since n/4 + n/4 is even, Mr and Ms commute.

• j > s > r + 1 = 2: In this case, fs(σzi) is still equal to the sum of the jth bit of i and the (s − 1)th bit, but fr(σzi) is just equal to the jth bit of i. However, both the fr(σxi) fs(σzi) and the fr(σzi) fs(σxi) terms still contribute to the sum for n/4 values of i each, so Mr and Ms still commute.

• j = s > r + 1 > 2: Both fs(σzi) and fr(σzi) are given as in the first case. fr(σxi) fs(σzi) still contributes n/4 terms to the sum. Now, however, fr(σzi) fs(σxi) can only contribute when the jth bit of i is 1. Since we also need fr(σzi) = 1, this term only contributes when the jth bit of i is 1 and the (r − 1)th bit is 0. This still contributes n/4 terms to the sum, so Mr and Ms again commute.

• j > s = r + 1 > 2: Now, the (s − 1)th bit is equal to the rth bit. That means fr(σxi) fs(σzi) only contributes when the rth bit of i is 1 and the jth bit of i is 0. This contributes n/4 terms to the sum, as does fr(σzi) fs(σxi), so Mr and Ms commute in this case as well.

• j = s = r + 1 > 2: This is a combination of the previous two cases. fr(σxi) fs(σzi) only contributes when the rth bit of i is 1 and the jth bit of i is 0, and fr(σzi) fs(σxi) contributes when the jth bit of i is 1 and the (r − 1)th bit is 0. Again, this is an even number of contributing terms, so Mr and Ms commute.

• j > s = r + 1 = 2: fr(σzi) is again equal to the jth bit of i. However, this does not affect fr(σxi) fs(σzi), which contributes n/4 terms to the sum, as in the previous two cases. It does affect fr(σzi) fs(σxi), but this term still contributes n/4 terms, so Mr and Ms commute.

• j = s > r + 1 = 2: As before, fr(σxi) fs(σzi) contributes n/4 terms to the sum. Now, however, fr(σzi) fs(σxi) contributes whenever the jth bit of i is 1.
This means it contributes n/2 terms instead of n/4, for a total of 3n/4 contributing terms. However, since j ≥ 3, n/4 is still even, and so M1 and Mj commute too.

• j = s = r + 1 = 2: Since j ≥ 3, this case is impossible.

For the case of odd j, we do something very similar. Now let

σ(0...0001) = 11...11
σ(0...0010) = 0...001
σ(0...0100) = 0...010   (8.20)
...
σ(0100...0) = 001...0
σ(1000...0) = 101...1.

An example of a code using this σ is the [8, 3, 3] code given in table 3.3. In this case, if the first bit of i is 0, the last bit must also be 0 for the first bits of i and σ(i) to match. However, σ(i) is certainly not equal to i for any nonzero i with both first and last bits 0 (since then σ(i) = i/2). If the first bit is 1, the last bit must be 0 in order for the first bits of i and σ(i) to match. Thus, the second bit must be 0, which means the third bit must be 1, and so on. However, since j is odd, this progression would mean that the jth bit would have to be 1, while we already know it must be 0. Therefore, there is no i for which σ(i) = i. Again, we have a distance three code.

We again need to check that the generators commute. As for even j, everything immediately commutes with MX and MZ. We consider similar cases to see whether Mr and Ms commute:

• j > s > r + 1 > 3: Here, fr(σzi) is the sum of the first, jth, and (r − 1)th bits of i, and fs(σzi) is the sum of the first, jth, and (s − 1)th bits of i. This still leads to both fr(σxi) fs(σzi) and fr(σzi) fs(σxi) contributing n/4 terms each to the sum, so Mr and Ms commute.

• j > s > r + 1 = 3: Now fr(σzi) is just equal to the jth bit of i, as in the case j > s > r + 1 = 2 for even j. As then, Mr and Ms commute.

• j > s > r + 1 = 2: Now fr(σzi) is the sum of the first and jth bits of i, and fr(σxi) fs(σzi) contributes only when the first bit of i is 1 and the (s − 1)th and jth bits of i agree, but this still contributes n/4 terms to the sum, so Mr and Ms still commute.

• j = s > r + 1 > 3: In this case, fr(σzi) fs(σxi) only contributes when the jth bit of i is 1 and the first and (r − 1)th bits are the same. This still occurs for n/4 values of i, so Mr and Ms commute.

• j > s = r + 1 > 3: Now, fr(σxi) fs(σzi) contributes when the rth bit of i is 1 and the first and jth bits are the same. This occurs for n/4 values of i, so Mr and Ms commute.

• j = s = r + 1 > 3: fr(σxi) fs(σzi) contributes n/4 terms to the sum, as in the previous case, and fr(σzi) fs(σxi) does too, as in the case before that. Therefore, Mr and Ms still commute.

• j > s = r + 1 = 3: As with the previous two cases, fr(σxi) fs(σzi) contributes n/4 terms to the sum. fr(σzi) is equal to the jth bit of i, so fr(σzi) fs(σxi) contributes only when the sth and jth bits of i are both 1. This is still n/4 values of i, so Mr and Ms again commute.

• j > s = r + 1 = 2: In this case, fs(σzi) is the jth bit of i and fr(σzi) is the sum of the first and jth bits. That means fr(σxi) fs(σzi) contributes when the first and jth bits of i are 1, and fr(σzi) fs(σxi) contributes when the second bit of i is 1 and the first and jth bits are different. Both of these terms therefore contribute n/4 terms to the sum, so Mr and Ms commute.

• j = s > r + 1 = 3: As usual, fr(σxi) fs(σzi) contributes n/4 terms to the sum. fr(σzi) fs(σxi) contributes whenever the jth bit of i is 1. This means it contributes n/2 terms to the sum, for a total of 3n/4 nonzero terms.
Again, since j ≥ 3, 3n/4 is even, so Mr and Ms commute.

• j = s > r + 1 = 2: Now, fr(σxi) fs(σzi) contributes whenever the first bit of i is 1 and the jth and (j − 1)th bits agree. This is true for n/4 values of i. fr(σzi) fs(σxi) contributes when the first bit of i is 0 and the jth bit of i is 1, which is again true for n/4 values of i. Therefore, Mr and Ms commute.

• j = s = r + 1 = 3: This case only arises for the [8, 3, 3] code, so we can just check it by looking at table 3.3. Again, the case j = s = r + 1 = 2 does not arise at all.

Now I will describe the X and Z operators for these codes. I will choose all of the X operators to be of the form σxa σxi (for some i ≠ a) times a product of σz's. In order to do this, we just need to find a set K of j + 1 σz's (not including σza) for which the syndromes f(σzl) for σzl ∈ K form a spanning set of binary vectors in (Z2)^{j+1} (skipping the bit for MZ, which a σz will never anticommute with). Then we will be able to pick some operator E that is a product of these σz's so that Xi = σxa σxi′ E commutes with all the generators of S, and another operator E′ so that Zi = σzi′ E′ is also in N(S). If we choose the possible values of i′ so that they do not overlap with the qubits l for which σzl ∈ K, then {Xi, Zi} = 0 and [Xi, Zm] = 0 for i ≠ m.

For even j, K will consist of σz(2^l) for l = 1, ..., j − 1, plus σz0 and σz(n−1) (recall that the qubits are numbered 0 to n − 1). f(σz0) = 10 ⊕ 0...0, f(σz(n−1)) = 10 ⊕ 10...0, and f(σz(2^l)) is 10 followed by the binary representation of 2^{l−1}. This set K has the desired properties. We pick a = 1.

For odd j, K will again include σz(2^l), but only for l = 1, ..., j − 2. The remaining elements of K will be σz0, σz(2^{j−1}+1), and σz(n−2). Now, f(σz(2^{j−1}+1)) = 10 ⊕ 010...0 and f(σz(n−2)) = 10 ⊕ 10...0, so again K has the desired property. We again pick a = 1. Note that for the eight-qubit code, this will actually give us a different definition of Xi and Zi than in table 3.3.

I will conclude this section with a brief discussion of the automorphism groups of these codes. There will not generally be a simple transversal operation in A(S) for one of these codes, but the codes do have a number of symmetries when we allow permutations of the qubits. One simple but large class of symmetries switches qubit i with qubit i + l for every i, where the addition is bitwise binary (XOR). For instance, we might swap the first n/2 qubits with the last n/2 qubits, or the first n/4 qubits with the second n/4 and the third n/4 with the last n/4. The effect of this swap is to add 1 to bit r of f(σxi) (for all i) whenever l is 1 in the rth bit; this much is equivalent to multiplying Mr by MZ. We also add 1 to bit r of f(σzi) (for all i) whenever σ(l) is 1 in the rth bit, which is equivalent to multiplying Mr by MX. Whether we multiply by MX, MZ, or both, the product is still in S, so the operation preserves S and is a valid fault-tolerant operation. There may be other symmetries of these codes as well.

8.4 Perfect One-Error-Correcting Codes

A perfect quantum code is a nondegenerate code for which the inequality of the quantum Hamming bound becomes an equality. For one-error-correcting codes, that means (1 + 3n) 2^k = 2^n. The possibility of a perfect code therefore exists whenever 1 + 3n is a power of two (up to 2^n). For instance, the five-qubit code is a perfect code; a short enumeration below makes the other candidate lengths concrete.
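The following one-line enumeration is illustrative only (it is not part of the original text); it lists the lengths n up to 400 for which 1 + 3n is a power of two.

# Lengths n at which a perfect one-error-correcting code could exist,
# i.e. where 1 + 3n is a power of two (x is a power of two iff x & (x-1) == 0).
candidates = [n for n in range(2, 401) if (1 + 3 * n) & (3 * n) == 0]
print(candidates)   # [5, 21, 85, 341]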
1 + 3n will be a power of two iff n = (2^{2j} − 1)/3 for some j. Therefore there could be perfect codes for n = 5, n = 21, n = 85, and so on, with parameters [(2^{2j} − 1)/3, (2^{2j} − 1)/3 − 2j, 3]. In fact, perfect codes do exist for all these parameters. One construction of these codes uses the Hamming codes over GF(4).

Another construction is to paste together one of the codes from the previous section with an earlier perfect code. The stabilizer S1 of any code from section 8.3 contains the stabilizer R1 = {I, MX, MZ, MX MZ} of a distance two code. To make the perfect code for j ≥ 3, let S1 be the stabilizer of the [2^{2j−2}, 2^{2j−2} − 2j, 3] code, and let S2 be the stabilizer of the perfect code for j − 1, with parameters [(2^{2j−2} − 1)/3, (2^{2j−2} − 1)/3 − 2j + 2, 3]. For j = 2, S2 is the stabilizer of the five-qubit code. Then, using trivial R2 (which still has distance one), the pasting construction of section 3.5 gives us a new code of distance three. The total number of qubits used by the code is

2^{2j−2} + (2^{2j−2} − 1)/3 = (4 · 2^{2j−2} − 1)/3 = (2^{2j} − 1)/3.   (8.21)

It encodes (2^{2j} − 1)/3 − 2j qubits, and is therefore the perfect code for j.

8.5 A Class of Distance Four Codes

We can extend the stabilizers of the codes from section 8.3 to get distance four codes. The parameters of these distance four codes will be [2^j, 2^j − 2j − 2, 4]. The first two generators of S will again be MX and MZ. The next j generators of S are the generators M1 through Mj from section 8.3, so S includes the stabilizer of a distance three code. The last j generators of S are Ni = R Mi R, for i = 1, ..., j, where R is applied to all 2^j qubits. As with the codes of section 8.3, the error occurring for these codes can be efficiently determined from the error syndrome. We can summarize this by writing the error syndromes for σxi and σzi:

f(σxi) = 01 ⊕ i ⊕ σ(i)   (8.22)
f(σzi) = 10 ⊕ σ(i) ⊕ i.   (8.23)

Since S includes the stabilizer of a distance three code, it automatically has distance at least three. We need to check that f(E) ≠ 0 for any weight three operator E. The only form of operator E for which the first two bits of f(E) could be 00 is E = σxa σyb σzc. Then

f(E) = 00 ⊕ (a + σ(b) + b + σ(c)) ⊕ (σ(a) + b + σ(b) + c)   (8.24)
     = 00 ⊕ (a + b + σ(b + c)) ⊕ (b + c + σ(a + b)).   (8.25)

If r = a + b and s = b + c, then f(E) is nonzero as long as r ≠ σ(s) or s ≠ σ(r). This means that we need

s ≠ σ(σ(s)) = σ²(s)   (8.26)

for all nonzero s (when r = s = 0, E = I). To see that this is true, note that for even j,

σ²(0...0001) = 10...00
σ²(0...0010) = 11...11
σ²(0...0100) = 0...001   (8.27)
...
σ²(1000...0) = 001...0.

If s has a 0 in the next-to-last bit, it cannot have σ²(s) = s unless s = 0. If s has a 1 in the next-to-last bit, it must have a 0 in the fourth-from-last bit, and so on. If j is a multiple of four, we find that the first bit must be a 0, which means that the last bit of s must be a 1. This in turn implies that the third-from-last bit is 0, and so on, until we reach the second bit of s, which must be 0, so s = 001100...11. However, the second bit of σ²(s) is 1 because
the next-to-last bit is. Therefore, σ²(s) ≠ s in this case. If j is even but not a multiple of four, the first bit of s must be 1, which means that the last bit is 0. Again we follow the chain of logic back to the second bit of s and again find that it must be 0, again giving a contradiction. Therefore σ²(s) ≠ s for any nonzero s for any even j. An example for even j is the [16, 6, 4] code given in table 8.2.

MX  σx σx σx σx σx σx σx σx σx σx σx σx σx σx σx σx
MZ  σz σz σz σz σz σz σz σz σz σz σz σz σz σz σz σz
M1  I  σx I  σx I  σx I  σx σz σy σz σy σz σy σz σy
M2  I  σx I  σx σz σy σz σy σx I  σx I  σy σz σy σz
M3  I  σx σz σy σx I  σy σz I  σx σz σy σx I  σy σz
M4  I  σy σx σz I  σy σx σz I  σy σx σz I  σy σx σz
N1  I  σz I  σz I  σz I  σz σx σy σx σy σx σy σx σy
N2  I  σz I  σz σx σy σx σy σz I  σz I  σy σx σy σx
N3  I  σz σx σy σz I  σy σx I  σz σx σy σz I  σy σx
N4  I  σy σz σx I  σy σz σx I  σy σz σx I  σy σz σx

Table 8.2: The stabilizer for a [16, 6, 4] code.

If j is odd,

σ²(0...0001) = 0111...11
σ²(0...0010) = 1111...11
σ²(0...0100) = 000...001
σ²(0...1000) = 000...010   (8.28)
...
σ²(010...00) = 0001...00
σ²(1000...0) = 0101...11.

In order to have σ²(s) = s, we cannot have the first bit and the last two bits of s all be 0. If the first bit of s is 1, then the next-to-last bit of s must also be 1. Then if the last bit is 0, the third-from-last bit must be 0 and the fourth-from-last bit must be 1. Also, the second bit is 0 and the third bit is 1. After the third bit, the bits must continue to alternate 0 and 1 until the next-to-last bit. This means odd-numbered bits are 1 and even-numbered bits are 0. However, the fourth-from-last bit is an even-numbered bit, giving a contradiction. Therefore, if the first bit of s is 1, the last two bits must both be 1 also. That means the third-from-last and fourth-from-last bits must both be 0. However, it also means that the second bit of s is 1 and the third bit of s is 0. The fourth bit is 0 again, but the fifth bit is 1, and after that they alternate until the last two bits. This contradicts the fact that the third- and fourth-from-last bits must both be 0. That leaves the possibility that the first bit of s is 0. Then the next-to-last bit is 0 too, so the last bit must be 1. That means the third-from-last bit is 0 and the fourth-from-last bit is 1. Also, the second and third bits of s are both 1. The next two bits are both 0, and the two after that are both 1. The bits pair up to be the same, with the pairs alternating between 0 and 1. However, the fourth- and third-from-last bits form one of these pairs, and they are different, giving another contradiction. Therefore, σ²(s) ≠ s for any nonzero s for odd j as well as for even j. An example for odd j is the [8, 0, 4] code shown in table 8.3.

MX  σx σx σx σx σx σx σx σx
MZ  σz σz σz σz σz σz σz σz
M1  I  σx I  σx σy σz σy σz
M2  I  σx σz σy I  σx σz σy
M3  I  σy σx σz σx σz I  σy
N1  I  σz I  σz σy σx σy σx
N2  I  σz σx σy I  σz σx σy
N3  I  σy σz σx σz σx I  σy

Table 8.3: The stabilizer for the [8, 0, 4] code.

To show that this set of generators forms the stabilizer of a code, we still have to show that they all commute. From the fact that Mr and Ms commute with each other and with MX and MZ, we can immediately conclude that Nr and Ns commute with each other and with MX and MZ. Also, Mr and Nr commute, since they pick up one sign of −1 for each σx or σz in Mr, and there are an even number of σx's and σz's.
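For the [8, 0, 4] code of table 8.3, these commutation relations can also be confirmed by brute force. The sketch below is mine (not code from the original text); it multiplies the generators out as 256 × 256 matrices and checks every pair.

import numpy as np
from functools import reduce
from itertools import combinations

paulis = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def op(s):
    # tensor product of single-qubit Paulis named by the string s
    return reduce(np.kron, [paulis[c] for c in s])

# rows of table 8.3: MX, MZ, M1-M3, N1-N3
generators = ["XXXXXXXX", "ZZZZZZZZ",
              "IXIXYZYZ", "IXZYIXZY", "IYXZXZIY",
              "IZIZYXYX", "IZXYIZXY", "IYZXZXIY"]

for a, b in combinations(generators, 2):
    A, B = op(a), op(b)
    assert np.allclose(A @ B, B @ A), (a, b)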
We must show that Mr commutes with Ns for r ≠ s. Now,

f_{Mr}(Ns) = ∑_{i=0}^{n−1} [i(r) i(s) + σ(i)(r) σ(i)(s)],   (8.29)

where x(r) denotes the rth bit of x. Since σ is a permutation of 0 through n − 1, summed over i the second term contributes exactly the same amount as the first term. Therefore, the sum is automatically zero modulo two, and these generators do form a stabilizer.

8.6 CSS Codes

As discussed in section 3.3, a CSS code [29, 30] is one where some of the generators are tensor products of σx's and the rest are tensor products of σz's. The σx generators and the σz generators correspond to the parity check matrices of two classical codes C1 and C2, with C1⊥ ⊆ C2. For instance, the classical Reed-Muller codes can be used to create a number of good quantum codes. CSS codes cannot be as efficient as the most general quantum code, but they can still be quite good. We can set upper and lower bounds using adaptations of the classical Hamming bound and the Gilbert-Varshamov bound. This argument shows that the rate k/n of a CSS code to correct t arbitrary errors is asymptotically limited by

1 − 2H(2t/n) ≤ k/n ≤ 1 − 2H(t/n).   (8.30)

The CSS codes are a particularly interesting class of codes for two reasons. First, they are built using classical codes, which have been more heavily studied than quantum codes, so it is fairly easy to construct useful quantum codes simply by looking at lists of classical codes. Second, because of the form of the generators, the CSS codes are precisely those for which a CNOT applied between every pair of corresponding qubits in two blocks performs a valid fault-tolerant operation (see section 5.3). This makes them particularly good candidates for fault-tolerant computation.

In order to get universal fault-tolerant computation for a code, the first step is to produce the encoded CNOT for the code. For the most general stabilizer code, this requires performing a four-qubit operation using two ancilla qubits and making two measurements. In a CSS code, this process is reduced to a single transversal operation. Next, in order to produce one-qubit operations, we need to use one ancilla qubit, perform a CNOT, and make a measurement. For the most general CSS code, we will still have to do this. However, if the code has the property that C1 = C2 (so C1⊥ ⊆ C1), then the σx generators have the same form as the σz generators, so a transversal Hadamard rotation is also a valid fault-tolerant operation. If we further have the property that the parity check matrix of C1 has a multiple of four 1s in each row, then the transversal phase gate P is a valid fault-tolerant operation too. For a general CSS code satisfying these conditions, these operations will perform some multiple-qubit gate on the qubits encoded in a single block. However, if each block encodes only a single qubit, we can choose the X and Z operators so that the transversal Hadamard rotation performs an encoded Hadamard rotation, and so that the transversal P performs an encoded P or P†. In particular, when C1 is a punctured doubly-even self-dual classical code, all these conditions are satisfied, and we can perform any operation in N(G) with a single transversal operation. In order to get universal computation, we will also need the Toffoli gate or some other gate outside N(G), and this will almost always require a more complicated construction.

8.7 Amplitude Damping Codes

Suppose we restrict attention to the amplitude damping channel.
In this channel, each qubit behaves independently according to one of the following matrices:

( 1 0 ; 0 √(1 − ǫ²) )  or  ( 0 ǫ ; 0 0 ).   (8.31)

It is difficult to create efficient codes that deal with the exact evolution produced by this channel. However, when ǫ is fairly small, it is sufficient to satisfy equation (2.10) merely approximately. If we wish to correct the equivalent of one error, corrections of O(ǫ³) will not matter, since that would be equivalent to distinguishing one error from two errors. Let us expand

( 1 0 ; 0 √(1 − ǫ²) ) = I − (1/4) ǫ² (I − σz) + O(ǫ⁴).   (8.32)

All of the higher order corrections to this equation will be powers of I − σz. Therefore, if we let

A = σx (I − σz) = (2/ǫ) ( 0 ǫ ; 0 0 ),   (8.33)

and

B = I − σz,   (8.34)

we need to consider all terms of the form

〈ψi| E†F |ψj〉,   (8.35)

where E and F are products of A's and B's. We get one factor of ǫ for each A and one factor of ǫ² for each B. We only need to consider those terms with total order less than ǫ^d to have an effectively distance d code. This corrects t errors, where d = 2t + 1.

One possible way to achieve this is to have a CSS code for which the σz generators can correct t σx errors and the σx generators can detect t σz errors. For instance, the code given in table 8.6 will work if we first map σz → σx and σy → σz. For such a code, we are correcting I and σz rather than B; since B lies in the linear span of σz and the identity, it is handled by these codes as well.

We can expand the range of possible codes by taking into account the actual linear combination of I and σz that appears in A and B. For instance, consider the code from table 8.4. This code can correct one amplitude damping error (i.e., it satisfies (2.10) to O(ǫ³)).

M1  σx σx σx σx
M2  σz σz I  I
M3  I  I  σz σz
X   σx σx I  I
Z   σz I  σz I

Table 8.4: A four-qubit code for the amplitude damping channel.

We can instantly see that (2.10) is satisfied for E†F = Ai (the subscript indicates the affected qubit) or E†F = Ai† Aj, where (i, j) ≠ (1, 2), (3, 4). When (i, j) = (1, 2) (or (3, 4)), something interesting and unusual happens:

〈ψi| A1† A2 |ψj〉 = 〈ψi| (I − σz1) σx1 σx2 (I − σz2) |ψj〉   (8.36)
               = 〈ψi| σx1 σx2 (I + σz1)(I − σz2) |ψj〉.   (8.37)

Now, σz1 σz2 |ψj〉 = |ψj〉, so

〈ψi| σx1 σx2 (I + σz1)(I − σz2) |ψj〉 = 〈ψi| σx1 σx2 (I + σz1)(I − σz1) |ψj〉   (8.38)
                                   = 0,   (8.39)

since (I + σz1)(I − σz1) = 0. We also need to consider the terms E†F = Bi and E†F = Ai† Ai = 2(I − σzi) = 2Bi. In either case, we can again separate B into I and σz, and the latter is handled by the generator M1.

By applying similar principles, we can see that Shor's nine-qubit code (table 3.1) can be used to correct two amplitude damping errors. We need to consider products of one through four A's and products of one or two B's, as well as the product of a B with one or two A's. Shor's code breaks down into three blocks of three qubits. If one or two A's act on any given block of three, E†F will anticommute with one of the σz generators for that block, and 〈ψi|E†F|ψj〉 = 0. This takes care of all possible operators E†F involving one, two, or four A's. We still need to consider A1† A2 A3 (and similar terms) and products of one or two B's. The products of B's we again expand into I and σz, producing products of zero, one, and two σz's. Operators with one σz, or with two σz's in different blocks of three, will anticommute with one of the σx generators.
Operators such as σz1 σz2 that act on two qubits in the same block of three are in the stabilizer and are thus equivalent to the identity. Finally, operators such as A1† A2 A3 are dealt with similarly to A1† A2 for the four-qubit code above:

〈ψi| A1† A2 A3 |ψj〉 = 〈ψi| (I − σz1) σx1 σx2 (I − σz2) σx3 (I − σz3) |ψj〉   (8.40)
                  = 〈ψi| σx1 σx2 σx3 (I + σz1)(I − σz2)(I − σz3) |ψj〉   (8.41)
                  = 〈ψi| σx1 σx2 σx3 (I + σz1)(I − σz1)(I − σz3) |ψj〉   (8.42)
                  = 0.   (8.43)

Thus, the nine-qubit code can correct two amplitude damping errors.

Fault tolerance for these codes must be handled carefully. Transversal operations of any sort will not respect the form of the error operators, so we need to be sure the code will be able to correct the new error operators. For instance, the CNOT applied to I ⊗ A produces (I ⊗ σx)(I ⊗ I − σz ⊗ σz). This cannot be written as a tensor product of A's and B's. However, I ⊗ Ai is still distinguishable from the images of I ⊗ Aj (since (I ⊗ I + σz ⊗ σz)(I ⊗ I − σz ⊗ σz) = 0) and of Aj ⊗ I. Therefore, the transversal CNOT is a valid fault-tolerant operation for the four-qubit code, as long as we correct errors taking its effects into account.

8.8 Some Miscellaneous Codes

In this section I present a few more codes that do not fit easily into any of the classes I have already discussed. Table 8.5 shows an [11, 1, 5] code, the smallest code to correct two errors.

M1   σz σz σz σz σz σz I  I  I  I  I
M2   σx σx σx σx σx σx I  I  I  I  I
M3   I  I  I  σz σx σy σy σy σy σx σz
M4   I  I  I  σx σy σz σz σz σz σy σx
M5   σz σy σx I  I  I  σz σy σx I  I
M6   σx σz σy I  I  I  σx σz σy I  I
M7   I  I  I  σz σy σx σx σy σz I  I
M8   I  I  I  σx σz σy σz σx σy I  I
M9   σz σx σy I  I  I  σz σz σz σx σy
M10  σy σz σx I  I  I  σy σy σy σz σx
X    I  I  I  I  I  I  σx σx σx σx σx
Z    I  I  I  I  I  I  σz σz σz σz σz

Table 8.5: The stabilizer for an [11, 1, 5] code.

Table 8.6 gives a code that can correct one σx error or one σz error, but not a σy error. This code is better than any possible distance three code, and is another example illustrating the utility of stabilizer codes for channels more general than the depolarizing channel. It is based on the classical Hamming code, with an additional generator to distinguish between σx and σz errors. In fact, this code also detects if a σy error has occurred, although it cannot tell us where that error occurred.

M1  σz σz σz σz σz σz σz
M2  σy σy σy σy I  I  I
M3  σy σy I  I  σy σy I
M4  σy I  σy I  σy I  σy
X1  σx σx I  I  I  I  σz
X2  σx I  σx I  I  σz I
X3  σx I  I  σz σx I  I
Z1  I  σz I  σz I  σz I
Z2  I  I  σz σz I  I  σz
Z3  I  I  I  I  σz σz σz

Table 8.6: The stabilizer for a code to correct one σx or σz error.

The set of all possible codes includes many codes that are not equivalent to stabilizer codes. Currently, however, only one is known that is better than any stabilizer code. This code has distance two and encodes six states using five qubits, whereas any distance two stabilizer code on five qubits could only encode two qubits (four states). It can be given in terms of the projection P onto the subspace of valid codewords:

P = (1/16) [ 3 I ⊗ I ⊗ I ⊗ I ⊗ I
    + (I ⊗ σz ⊗ σy ⊗ σy ⊗ σz)_cyc
    + (I ⊗ σx ⊗ σz ⊗ σz ⊗ σx)_cyc
    − (I ⊗ σy ⊗ σx ⊗ σx ⊗ σy)_cyc   (8.44)
    + 2 (σz ⊗ σx ⊗ σy ⊗ σy ⊗ σx)_cyc
    − 2 σz ⊗ σz ⊗ σz ⊗ σz ⊗ σz ].

The subscript "cyc" means that we actually add all five cyclic permutations of the indicated term. Note that this means the projection operator, and therefore the code, is itself cyclic.
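Equation (8.44) is straightforward to transcribe and probe numerically. The sketch below is mine, not code from the original text, and the "+" joining the first two cyclic terms is an assumption of this transcription; the checks confirm that P is Hermitian with trace six, consistent with the discussion that follows.

import numpy as np
from functools import reduce

paulis = {"I": np.eye(2, dtype=complex),
          "X": np.array([[0, 1], [1, 0]], dtype=complex),
          "Y": np.array([[0, -1j], [1j, 0]]),
          "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def string(s):
    return reduce(np.kron, [paulis[c] for c in s])

def cyc(s):
    # sum of the five cyclic permutations of the Pauli string s
    return sum(string(s[k:] + s[:k]) for k in range(5))

# equation (8.44); the sign on the second cyclic term is assumed
P = (3 * string("IIIII") + cyc("IZYYZ") + cyc("IXZZX") - cyc("IYXXY")
     + 2 * cyc("ZXYYX") - 2 * string("ZZZZZ")) / 16

assert np.allclose(P, P.conj().T)          # Hermitian
assert np.isclose(np.trace(P).real, 6.0)   # six-dimensional image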
The trace of P is six, so P projects onto a six-dimensional space, and the code can therefore be used to encode six basis states. Conjugation of P by σx, σy, or σz on any single qubit produces an operator P′ with P P′ = 0, so the code for this projection operator satisfies (2.10) for a distance two code, with Cab = δab.

Appendix A

Quantum Gates

It is usually helpful to think of a quantum computer as performing a series of gates, drawn from some fairly small basic set of physically implementable unitary transformations. The net transformation applied to the quantum computer is the product of the unitary transformations associated with the gates performed. In order to have a universal quantum computer, it should be possible to get arbitrarily close to any unitary transformation. This property makes no guarantees about how many gates are required to get within ǫ of the desired unitary operation; figuring out how to reach a given operator with the minimum number of basic gates is the goal of quantum algorithm design.

There are a number of known sets of universal quantum gates [64, 65]. For instance, all single-qubit unitary operators and the controlled-NOT together comprise a universal set. The controlled-NOT gate (or CNOT) is a two-qubit operator that flips the second qubit iff the first qubit is |1〉. It has the matrix

( 1 0 0 0 ; 0 1 0 0 ; 0 0 0 1 ; 0 0 1 0 ).   (A.1)

In fact, the controlled-NOT and one single-qubit operator are sufficient, as long as the single-qubit rotation acts by an angle incommensurate with 2π. Another finite universal set of quantum gates consists of the Hadamard rotation R,

R = (1/√2) ( 1 1 ; 1 −1 ),   (A.2)

the phase gate P,

P = ( 1 0 ; 0 i ),   (A.3)

the controlled-NOT, and the Toffoli gate, which is a three-qubit gate that flips the third qubit iff the first two qubits are in the state |11〉.

In addition to the gates mentioned above, I refer to a number of other simple gates in this thesis. For instance, the simple NOT gate, the sign gate, and the combined bit and sign flip gate (which are equal to σx, σz, and σy, respectively) play a crucial role in the stabilizer formalism. I also refer to two other single-qubit gates related to P and R. They are

Q = (1/√2) ( 1 i ; −i −1 ),   (A.4)

and

T = (1/√2) ( 1 −i ; 1 i ).   (A.5)

I also occasionally refer to the "conditional sign" gate, which is a two-qubit gate that gives the basis state |11〉 a sign of −1 and leaves the other three basis states alone. The conditional sign gate is equivalent to the controlled-NOT via conjugation of one qubit by R. The conditional sign gate is effectively a controlled-σz gate, where σz gets applied to one qubit iff the other qubit is |1〉. I also use an analogous controlled-σy operator; the CNOT is the controlled-σx.

To describe a series of gates, it is usually helpful to draw a diagram of the gate array. Horizontal lines represent the qubits of the quantum computer, which enter at the left and leave at the right. A summary of the symbols I use for the various gates is given in figure A.1.

[Figure A.1: Various quantum gates: the single-qubit gates σx, σy, σz, P, Q, R (Hadamard), and T; the controlled-NOT, controlled-σy, and controlled-σz, each with control and target; and the Toffoli gate with two controls and one target.]

Appendix B

Glossary

additive code: Another name for a stabilizer code. Often contrasted with linear quantum codes, which are a subclass of additive codes.
amplitude damping channel: A channel for which the |1〉 state may relax to the |0〉 state with some probability. An example is a two-level atom relaxing via spontaneous emission.

cat state: The n-qubit entangled state |0...0〉 + |1...1〉. Cat states act as ancillas in many fault-tolerant operations.

coding space: The subset of the Hilbert space corresponding to correctly encoded data. The coding space forms a Hilbert space in its own right.

concatenation: The process of encoding the physical qubits making up one code as the logical qubits of a second code. Concatenated codes are particularly simple to correct, and can be used to perform arbitrarily long fault-tolerant computations as long as the physical error rate is below some threshold.

CSS code: Short for Calderbank-Shor-Steane code. A CSS code is formed from two classical error-correcting codes. CSS codes can easily take advantage of results from the theory of classical error-correcting codes and are also well suited for fault-tolerant computation. See sections 3.3 and 8.6.

cyclic code: A code that is invariant under cyclic permutations of the qubits.

decoherence: The process whereby a quantum system interacts with its environment, which acts to effectively measure the system. The world looks classical at large scales because of decoherence. Decoherence is likely to be a major cause of errors in quantum computers.

degenerate code: A code for which linearly independent correctable errors acting on the coding space sometimes produce linearly dependent states. Degenerate codes bypass many of the known bounds on the efficiency of quantum codes and have the potential to be much more efficient than any nondegenerate code.

depolarizing channel: A channel that produces a random error on each qubit with some fixed probability.

distance: The minimum weight of any operator Ea† Eb such that equation (2.10) is not satisfied for an orthonormal basis of the coding space. A quantum code with distance d can detect up to d − 1 errors, or it can correct ⌊(d − 1)/2⌋ general errors or d − 1 located errors.

entanglement: Nonlocal, nonclassical correlations between two quantum systems. The presence of entangled states gives quantum computers their additional computational power relative to classical computers.

entanglement purification protocol: Often abbreviated EPP. An EPP is a protocol for producing high-quality EPR pairs from a larger number of low-quality EPR pairs. EPPs are classified by whether they use one-way or two-way classical communication. A one-way EPP (or 1-EPP) is equivalent to a quantum error-correcting code.

EPR pair: Short for Einstein-Podolsky-Rosen pair. An EPR pair is the entangled state (1/√2)(|00〉 + |11〉), and acts as a basic unit of entanglement.

erasure channel: A channel that produces one or more located errors.

error syndrome: A number classifying the error that has occurred. For a stabilizer code, the error syndrome is a binary number with a 1 for each generator of the stabilizer the error anticommutes with and a 0 for each generator of the stabilizer the error commutes with.

fault-tolerance: The property (possessed by a network of gates) that an error on a single physical qubit or gate can only produce one error in any given block of an error-correcting code. A fault-tolerant network can be used to perform computations that are more resistant to errors than the physical qubits and gates composing the computer, provided the error rate is low enough to begin with.
A valid fault-tolerant operation should also map the coding space into itself, to avoid producing errors when none existed before.

leakage error: An error in which a qubit leaves the allowed computational space. By measuring each qubit to see if it is in the computational space, a leakage error can be converted into a located error.

linear code: A stabilizer code that, when described in the GF(4) formalism (section 3.4), has a stabilizer that is invariant under multiplication by ω. Often contrasted with an additive code.

located error: Sometimes called an erasure. A located error is an error which acts on a known qubit in an unknown way. A located error is easier to correct than a general error acting on an unknown qubit.

nice error basis: A basis which shares certain essential properties with the Pauli matrices and can be used to define a generalized stabilizer code. See section 3.6.

nondegenerate code: A code for which linearly independent correctable errors acting on the coding space always produce linearly independent states. Nondegenerate codes are much easier to set bounds on than degenerate codes.

pasting: A construction for combining two quantum codes to make a single larger code. See section 3.5.

perfect code: A code for which every error syndrome corresponds to a correctable error. See section 8.4 for a construction of the distance three perfect codes.

quantum error-correcting code: Sometimes abbreviated QECC. A QECC is a set of states that can be restored to their original state after some number of errors occur. A QECC must satisfy equation (2.10).

qubit: A single two-state quantum system that serves as the fundamental unit of a quantum computer. The word "qubit" comes from "quantum bit."

qudit: A d-dimensional generalization of a qubit.

shadow: The set of operators in G which commute with the even-weight elements of the stabilizer and anticommute with the odd-weight elements of the stabilizer.

shadow enumerator: The weight enumerator of the shadow. It is useful for setting bounds on the existence of quantum codes.

stabilizer: The set of tensor products of Pauli matrices that fix every state in the coding space. The stabilizer is an Abelian subgroup of the group G defined in section 2.3. The stabilizer contains all of the vital information about a code. In particular, operators in G that anticommute with some element of the stabilizer can be detected by the code.

stabilizer code: A quantum code that can be described by giving its stabilizer. Also called an additive code or a GF(4) code.

teleportation: A process whereby a quantum state is destroyed and exactly reconstructed elsewhere. Quantum teleportation of a single qubit requires one EPR pair shared between the source and destination, and involves two measurements on the source qubit. The two bits from the measurements must be classically transmitted to the destination in order to reconstruct the original quantum state.
weight A property of operators only defined on operators which can be written as the tensor product of single-qubit operators. For such an operator, the weight is the number of single-qubit operators in the product that are not equal to the identity. weight enumerator A polynomial whose coefficients cn are the number of elements of weight n in some set, such as the stabilizer or the normalizer of the stabilizer. Weight enumerators are very helpful in setting bounds on the possible existence of quantum error-correcting codes through identities such as the quantum MacWilliams identities (equation (7.14)). BIBLIOGRAPHY 110 Bibliography A. Church, “An unsolvable problem of elementary number theory,” Amer. J. Math 58 , 345 (1936); A. M. Turing, “On computable numbers, with an application to the Entscheidungsproblem,” Proc. Lond. Math. Soc. (2) 42 ,230 (1936) and Proc. Lond. Math. Soc. (2) 43 , 544 (1937). R. P. Feynman, “Simulating physics with computers,” Int. J. Theor. Phys. 21 , 467 (1982). P. Shor, “Algorithms for quantum computation: discrete logarithms and factoring,” Proceedings, 35th Annual Symposium on Fundamentals of Computer Science, (1994). L. K. Grover, “A fast quantum mechanical algorithm for database search,” Proceedings, 28th ACM Symposium on Theory of Computation, 212 (1996). C. B. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, “Strengths and weaknesses of quantum computing,” quant-ph/9701001 (1997). J. I. Cirac and P. Zoller, “Quantum computations with cold trapped ions,” Phys. Rev. Lett. 74 , 4091 (1995). C. Monroe, D. M. Meekhof, B. E. King, W. M. Itano, and D. J. Wineland, “Demonstration of a fundamental quantum logic gate,” Phys. Rev. Lett. 75 , 4714 (1995). Q. A. Turchette, C. J. Hood, W. Lange, H. Mabuchi, and H. J. Kimble, “Measurement of conditional phase shifts for quantum logic,” Phys. Rev. Lett. 75 , 4710 (1995). N. Gershenfeld and I. Chuang, “Bulk spin resonance quantum computa-tion,” Science 275 , 350 (1997). P. Shor, “Scheme for reducing decoherence in quantum memory,” Phys. Rev. A 52 , 2493 (1995). A. M. Steane, “Error correcting codes in quantum theory,” Phys. Rev. Lett. 77 , 793 (1996). BIBLIOGRAPHY 111 C. Cohen-Tannoudji, Quantum Mechanics , Wiley, New York (1977). F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes , North-Holland Publishing Company, New York (1977). W. K. Wootters and W. H. Zurek, “A single quantum cannot be cloned,” Nature 299 , 802 (1982). C. E. Shannon, “A mathematical theory of communication,” Bell Sys. Tech. J. 27 , 379, 623 (1948). E. Knill and R. Laflamme, “A theory of quantum error-correcting codes,” Phys. Rev. A 55 , 900 (1997). C. Bennett, D. DiVincenzo, J. Smolin, and W. Wootters, “Mixed state en-tanglement and quantum error correction,” Phys. Rev. A 54 , 3824 (1996). L. Vaidman, L. Goldenberg, and S. Wiesner, “Error prevention scheme with four particles,” Phys. Rev. A 54 , 1745R (1996). M. Grassl, Th. Beth, and T. Pellizzari, “Codes for the quantum erasure channel,” quant-ph/9610042 (1996). D. W. Leung, M. A. Nielsen, I. L. Chuang, Y. Yamamoto, “Approximate quantum error correction can lead to better codes,” quant-ph/9704002 (1997). D. Gottesman, “Class of quantum error-correcting codes saturating the quantum Hamming bound,” Phys. Rev. A 54 , 1862 (1996). A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane, “Quantum error correction and orthogonal geometry,” Phys. Rev. Lett. 78 , 405 (1997). E. Rains, “Quantum shadow enumerators,” quant-ph/9611001 (1996). R. Laflamme, C. Miquel, J. P. 
R. Laflamme, C. Miquel, J. P. Paz, and W. Zurek, "Perfect quantum error correction code," Phys. Rev. Lett. 77, 198 (1996).
D. Gottesman, "Pasting quantum codes," quant-ph/9607027 (1996).
A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane, "Quantum error correction via codes over GF(4)," quant-ph/9608006 (1996).
A. Steane, "Simple quantum error correcting codes," Phys. Rev. A 54, 4741 (1996).
A. Steane, "Quantum Reed-Muller codes," quant-ph/9608026 (1996).
A. R. Calderbank and P. W. Shor, "Good quantum error-correcting codes exist," Phys. Rev. A 54, 1098 (1996).
A. Steane, "Multiple particle interference and quantum error correction," Proc. Roy. Soc. Lond. A 452, 2551 (1996).
E. Knill, "Non-binary error bases and quantum codes," quant-ph/9608048 (1996); E. Knill, "Group representations, error bases and quantum codes," quant-ph/9608049 (1996).
H. F. Chau, "Correcting quantum errors in higher spin systems," quant-ph/9610023 (1996); H. F. Chau, "Five quantum register error correction code for higher spin systems," quant-ph/9702033 (1997).
D. Aharonov and M. Ben-Or, "Fault-tolerant quantum computation with constant error," quant-ph/9611025 (1996).
E. Rains, "Nonbinary quantum codes," quant-ph/9703048 (1997).
R. Cleve and D. Gottesman, "Efficient computations of encodings for quantum error correction," quant-ph/9607030 (1996).
D. P. DiVincenzo, "Quantum gates and circuits," quant-ph/9705009 (1997).
P. Shor, "Fault-tolerant quantum computation," quant-ph/9605011 (1996).
D. DiVincenzo and P. Shor, "Fault-tolerant error correction with efficient quantum codes," Phys. Rev. Lett. 77, 3260 (1996).
D. Gottesman, "A theory of fault-tolerant quantum computation," quant-ph/9702029 (1997).
C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, and W. K. Wootters, "Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels," Phys. Rev. Lett. 70, 1895 (1993).
C. H. Bennett, G. Brassard, S. Popescu, B. Schumacher, J. A. Smolin, and W. K. Wootters, "Purification of noisy entanglement and faithful teleportation via noisy channels," Phys. Rev. Lett. 76, 722 (1996).
E. Knill, R. Laflamme, and D. Gottesman, in preparation.
E. Knill, personal communication.
E. Knill, R. Laflamme, and W. Zurek, "Accuracy threshold for quantum computation," quant-ph/9610011 (1996); E. Knill, R. Laflamme, and W. Zurek, "Resilient quantum computation: error models and thresholds," quant-ph/9702058 (1997).
J. Evslin, S. Kakade, and J. P. Preskill, unpublished.
A. M. Steane, "Active stabilization, quantum computation and quantum state synthesis," Phys. Rev. Lett. 78, 2252 (1997).
E. Knill and R. Laflamme, "Concatenated quantum codes," quant-ph/9608012 (1996).
C. Zalka, "Threshold estimate for fault tolerant quantum computing," quant-ph/9612028 (1996).
S. Lloyd, "The capacity of a noisy quantum channel," Phys. Rev. A 55, 1613 (1997).
B. Schumacher and M. A. Nielsen, "Quantum data processing and error correction," Phys. Rev. A 54, 2629 (1996).
H. Barnum, M. A. Nielsen, and B. Schumacher, "Information transmission through a noisy quantum channel," quant-ph/9702049 (1997).
A. Ekert and C. Macchiavello, "Error correction in quantum communication," Phys. Rev. Lett. 77, 2585 (1996).
N. J. Cerf and R. Cleve, "Information-theoretic interpretation of quantum error-correcting codes," quant-ph/9702031 (1997).
P. Shor and R. Laflamme, "Quantum analog of the MacWilliams identities for classical coding theory," Phys. Rev. Lett. 78, 1600 (1997).
E. M. Rains, "Quantum weight enumerators," quant-ph/9612015 (1996).
E. M. Rains, "Polynomial invariants of quantum codes," quant-ph/9704042 (1997).
R. Cleve, "Quantum stabilizer codes and classical linear codes," quant-ph/9612048 (1996).
C. H. Bennett, D. P. DiVincenzo, and J. A. Smolin, "Capacities of quantum erasure channels," quant-ph/9701015 (1997).
C. Fuchs and J. Smolin, unpublished.
P. Shor and J. Smolin, "Quantum error-correcting codes need not completely reveal the error syndrome," quant-ph/9604006 (1996).
E. M. Rains, "Quantum codes of minimum distance two," quant-ph/9704043 (1997).
E. M. Rains, R. H. Hardin, P. W. Shor, and N. J. A. Sloane, "A nonadditive quantum code," quant-ph/9703002 (1997).
S. Lloyd, "Almost any quantum logic gate is universal," Phys. Rev. Lett. 75, 346 (1995).
A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin, and H. Weinfurter, "Elementary gates for quantum computation," Phys. Rev. A 52, 3457 (1995).
279
Bonaventura Cavalieri – Wikipedia
===============

Cavalerius redirects to this article. For the lunar crater named after Cavalieri, see Cavalerius (Mondkrater).

Bonaventura Francesco Cavalieri (born 1598, probably in Milan; died 3 December or 30 November 1647 in Bologna; known by the scholarly name Cavalerius) was an Italian Jesuate, mathematician, and astronomer.

Life

Bonaventura Cavalieri worked in the field of geometry and taught at the University of Bologna. At the same time he was prior of a monastery of the Jesuati order. His calculations of surface areas and volumes, inspired by Johannes Kepler's computation of barrel volumes, anticipated methods of the infinitesimal calculus and were important for its development.

Cavalieri became known chiefly for the principle of indivisibles. The principle had already been used in an early form by Johannes Kepler in 1604 and 1615. In the early version of 1635, it is assumed that a line consists of an infinite number of points without size, a surface of an infinite number of lines without width, and a solid of an infinite number of surfaces without height. In response to objections, Cavalieri reformulated the principle and published it in that form in 1647, together with a defense of the theory. In 1653 his works were reissued with later corrections.

Cavalieri's principle states that two solids have the same volume if all plane sections taken parallel to a given base plane, at matching distances from it, have the same area.
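A classical illustration of the principle at work is the volume of the sphere: compare a hemisphere of radius $r$ with a cylinder of radius $r$ and height $r$ from which an inverted cone has been removed. At height $h$ above the common base plane, both cross-sections have area $\pi r^2 - \pi h^2$, so the principle gives

\[
V_{\text{hemisphere}} = V_{\text{cylinder}} - V_{\text{cone}}
= \pi r^3 - \tfrac{1}{3}\pi r^3 = \tfrac{2}{3}\pi r^3,
\qquad\text{hence}\qquad
V_{\text{sphere}} = \tfrac{4}{3}\pi r^3 .
\]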
Stefano degli Angeli (1623–1697) was his pupil. Cavalieri wished for Michelangelo Ricci and Evangelista Torricelli to edit his unpublished papers, but Torricelli died shortly before him and Ricci found no time for the task; the papers were not published until 1919.

Works

Lo specchio ustorio, 1632
Geometria indivisibilibus, 1635
Exercitationes Geometricae, 1647

Honors

The asteroid (18059) Cavalieri and the lunar crater Cavalerius were named in his honor.

Literature

Amir R. Alexander: Der Kampf um das unendlich Kleine. In: Spektrum der Wissenschaft, October 2015 issue (spektrum.de)

Weblinks

Geometria Indivisibilibus (the edition of 1653)
Angelo Fabroni: Bonaventura Cavalerius. In: Vitae Italorum doctrina excellentium qui saeculis XVII. et XVIII. floruerunt. Volume I. Pisa 1778, pp. 262–301 (Latin).
John J. O'Connor, Edmund F. Robertson: Bonaventura Francesco Cavalieri. In: MacTutor History of Mathematics archive (English).
Spektrum.de: Vom Krankenpfleger zum Mathematiker, 1 June 2018

References

1. Jürgen Elstrodt: Maß- und Integrationstheorie. 4th edition, Springer, Berlin 2005, ISBN 3-540-21390-2, p. 167.
2. Alexander Witting: Integralrechnung. Walter de Gruyter, Berlin 1933, pp. 53–54.
280
log10(2)
===============

Answer: 0.30102999566398

Steps explained:

Evaluate the logarithmic expression $\log_{10}(2)$ using the change of base formula:

\[
\log_b(x) = \frac{\ln(x)}{\ln(b)}
\]

With b = 10 and x = 2:

\[
\log_{10}(2) = \frac{\ln(2)}{\ln(10)} = \frac{0.69314718055995}{2.302585092994} = 0.30102999566398
\]

Final answer: 0.30102999566398
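A quick cross-check of the arithmetic in Python (standard library only):

    import math

    # Change of base: log_10(2) = ln(2) / ln(10)
    print(math.log(2) / math.log(10))  # ~0.30102999566398
    print(math.log(2, 10))             # same value via the two-argument form
    print(math.log10(2))               # dedicated base-10 logarithm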
281
algebraic geometry - Equivalence of defining field structure and recovering field structure via abstract projective plane - Mathematics Stack Exchange
===============

Equivalence of defining field structure and recovering field structure via abstract projective plane

Asked Jan 27, 2018 · Viewed 85 times

Consider a field $k$ and its associated projective plane over $k$, $\mathbb{P}^2(k)$. Suppose we have the following properties A1 through A3 of $\mathbb{P}^2(k)$:

A1) There is a unique line $L(pq)$ through any two distinct points $p, q$ in $\mathbb{P}^2(k)$.

A2) Every pair of distinct lines intersects in exactly one point.

A3) Given any two triples of collinear points $(p, q, r)$ and $(p', q', r')$, the points $L(pq') \cap L(p'q)$, $L(pr') \cap L(rp')$, and $L(qr') \cap L(rq')$ are also collinear. (Pappus hexagon theorem)

Consider any 4 points in general position. The statement is that one can recover the field $k$ from those 4 points.

Q: What is the meaning of "recovering" here? Can the additive and multiplicative structure be recovered?

Q': How does one recover it? Is this somehow obvious? It is not as obvious as the definition of the group structure on an elliptic curve.

Tags: abstract-algebra, algebraic-geometry, projective-geometry (asked by user45765)

1 Answer (answered Jan 29, 2018)

There is a method of coordinatizing a projective plane. This will give you an algebraic structure known as a planar ternary ring.
Certain geometric properties of the plane are reflected in the algebraic structure of the ternary ring: for example, a plane can be coordinatized by a skew field if and only if Desargues' theorem holds, and Pappus' theorem implies (in addition) that the plane can be coordinatized by a field.

The big issue is that you can have nonisomorphic planar ternary rings associated with the same plane; you are only guaranteed uniqueness up to isotopy. So the theorem of Pappus guarantees that coordinatizing will give you something isotopic to a field, and then I believe it is simple to show that you cannot have two nonisomorphic fields in the same isotopism class, so the field is determined.

There are lots of references on coordinatizing projective planes; I would look these up on Google Scholar or something similar if you want more technical details.
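In outline, the coordinatization behind this runs as follows (a sketch of the standard construction; details are in the references the answer points to). One common normalization uses the four points in general position to fix a line at infinity together with points $0$, $1$, $\infty$ on a coordinate line $\ell$; call two lines parallel if they meet on the line at infinity, fix an auxiliary point $P \notin \ell$, and write $x \parallel m$ for the line through the point $x$ parallel to the line $m$. Addition and multiplication of points $a, b \in \ell$ are then defined by

\[
\begin{aligned}
Q &= (a \parallel P0) \cap (P \parallel \ell), & \qquad a + b &= \ell \cap (Q \parallel Pb),\\
R &= (b \parallel P1) \cap P0, & \qquad a \cdot b &= \ell \cap (R \parallel Pa),
\end{aligned}
\]

where $P0$, $P1$, $Pa$, $Pb$ denote the lines through the indicated pairs of points. In the affine chart where $\ell$ is the $x$-axis and $P = (0,1)$, a direct computation shows these constructions land on $(a+b, 0)$ and $(ab, 0)$. Axioms A1 and A2 make the constructions well defined, and Pappus (A3) forces the multiplication to be commutative (and, via Hessenberg's theorem, the plane to be Desarguesian), which is exactly why the planar ternary ring of the answer collapses to a field.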
282
Astrobites Guide to Polarimetry | astrobites

by Briley Lewis | Oct 23, 2022 | Guides

A brief intro to polarized light

Light is an electromagnetic wave, and its electric field isn't always oriented in the same direction. The orientation of light's electric field defines its "state of polarization." In this guide, we'll talk about what polarization is, how it's produced by the cosmos, and how we can observe it.

We categorize polarization in three main ways: unpolarized light, linearly polarized light, and elliptically polarized light. Unpolarized light (a.k.a. natural light) is better described as randomly polarized light; that is, many light sources are a collection of emitters where the emitted light's polarization is changing very frequently and randomly. This is one extreme, and often light is partially polarized in some way. Linearly polarized light has a constant orientation of the electric field (although the magnitude of the wave may still vary). Elliptically polarized light has an electric field whose vector rotates, tracing out an ellipse. Circularly polarized light is the special case in which the x and y components have the same magnitude. (The original post includes a figure illustrating some of these cases.)

We can describe polarization mathematically using matrices. Stokes vectors (a.k.a. Stokes parameters) are a useful way to do so. There are four parameters: I, Q, U, and V. I is the total intensity, Q describes linear polarization (horizontal or vertical, depending on the sign), U describes linear polarization along a second set of orthogonal axes (±45°), and V describes elliptical polarization (right-handed if V > 0, left-handed if V < 0). They are defined as follows:

$I = S_1 = 2I_0$ (where $I_0$ is the incident light)
$Q = S_2 = 2I_1 - 2I_0$ (where $I_1$ is the light through a linear polarizer with a horizontal axis)
$U = S_3 = 2I_2 - 2I_0$ (where $I_2$ is the light through a linear polarizer with an axis at 45°)
$V = S_4 = 2I_3 - 2I_0$ (where $I_3$ is the light through a circular polarizer)

For completely polarized light, $I^2 = Q^2 + U^2 + V^2$. For a partially polarized system, the degree of polarization is given by $P = \sqrt{Q^2 + U^2 + V^2}\,/\,I$. See Table 8.5 from Hecht for an illustrative example of Stokes vectors for various polarization states. Similarly, the operations of different polarizers on Stokes vectors can be described by Mueller matrices.
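These operational definitions translate directly into a short calculation. A minimal sketch (illustrative only; the function names are made up, and the example intensities follow Hecht's convention, where the unfiltered measurement records half the incident intensity):

    import numpy as np

    # Stokes parameters from the four filtered measurements defined above:
    # I0 with no polarizer (Hecht: through an isotropic filter passing half
    # the light), I1 through a horizontal linear polarizer, I2 through a
    # linear polarizer at 45 degrees, I3 through a circular polarizer.
    def stokes(I0, I1, I2, I3):
        I = 2 * I0
        Q = 2 * I1 - 2 * I0
        U = 2 * I2 - 2 * I0
        V = 2 * I3 - 2 * I0
        return np.array([I, Q, U, V])

    def degree_of_polarization(S):
        I, Q, U, V = S
        return np.sqrt(Q**2 + U**2 + V**2) / I

    # Example: unit-intensity horizontally polarized light. The isotropic
    # filter passes 0.5; the horizontal polarizer passes everything; the
    # 45-degree and circular polarizers each pass half.
    S = stokes(0.5, 1.0, 0.5, 0.5)
    print(S)                          # [1. 1. 0. 0.]
    print(degree_of_polarization(S))  # 1.0 -> fully polarized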
What in the universe creates polarized light?

Polarization can be affected by dichroism, reflection, scattering, or birefringence (more on dichroism and birefringence in the next section!), as well as other electromagnetic effects. Some radiation processes, like synchrotron radiation, naturally produce polarized light as well.

Light can be polarized by scattering due to interactions with electrons. For unpolarized incident light, light scattered along the incident axis will be unaltered, and light scattered at orthogonal (90 degree) angles will be linearly polarized. Scattering can be more complicated depending on the size of the particle relative to the wavelength of light: Rayleigh scattering describes what happens when the particles are much smaller than the wavelength, and Mie scattering describes scattering more generally.

Light can also be polarized by reflection off a dielectric medium, where only one component of the incoming polarization will be reflected and the other will be refracted. Brewster's law describes the angle at which the reflected ray will be fully polarized; at deviations from that angle it will be partially polarized.

Some examples of situations that create polarized light in astronomy are:
- Dust scattering in debris disks (e.g. the GPI survey of debris disks, including in polarized light)
- Rayleigh scattering in a planet's atmosphere, like Earth's!
- Extreme environments with magnetic fields that produce synchrotron radiation (e.g. the EHT image of M87 in polarized light)
- ...and so much more!

How do we measure polarization?

To figure out how much of the incoming light is polarized, we need to use some sort of polarizer: a filter that separates light into its components, or only lets a certain polarization of light pass through. As Hecht says in his Optics textbook, for polarizers to work "there must be some sort of asymmetry associated with the process."

Some polarizers use dichroism, where only one polarization state is selectively absorbed, and the other orthogonal polarization state passes through just fine. Some crystals are naturally dichroic, as are Polaroid filters. Another commonly harnessed effect is birefringence, meaning that a substance has different indices of refraction due to the arrangement of atoms within it. Certain birefringent crystals can split light into orthogonal polarization states. A useful example in astronomy is the Wollaston prism, which serves as a polarizing beamsplitter in many instruments.

Another important type of optic is known as a wave plate, something that changes the polarization of the light in your incoming beam. A full-wave plate creates a phase difference of 360 degrees (2π radians), whereas a half-wave plate induces a 180 degree (π radians) phase difference and a quarter-wave plate shifts the phase by 90 degrees (π/2 radians). There are also polarizers that induce circular polarization, such as the combination of a linear polarizer and a wave plate.

So, what makes an astronomical polarimeter? At least in optical/infrared, there's usually some sort of beam-splitter, like a Wollaston prism, that splits the light into two orthogonal polarizations, plus a half-wave plate that allows the observer to modulate the polarization in order to calibrate out instrumental effects. (You can read in detail about the Gemini Planet Imager polarimeter here as an example!)

Beyond the optical and IR, there are other ways of measuring polarimetry, too. Radio telescopes can detect polarization since they are essentially recording the state of the electric field, and other types of detectors for high-energy light like X-rays (e.g. gas pixel detectors) have been devised to measure polarization as well.

Some current observatories with polarimetry capabilities and their cool science results (plus relevant Astrobites!)
IXPE [The Imaging X-Ray Polarimetry Explorer]: The recently launched NASA mission IXPE is going to be looking for polarization from some extreme sources, like supernovae, AGN, and pulsars! Be on the lookout for its first results coming very soon.
- Historical Flares of Sgr A: A Polarizing Event?
- Astrobites at AAS 239 [IXPE Press Conference]
- Plerions: the hole of the donut

VLT/SPHERE: SPHERE is focused on exoplanet characterization and detection, including the wildly cool detection of PDS 70b, a very young forming planet still embedded in its disk.
- A New, Scattered Light View of Planet Forming Disks
- First Photos of a Baby Planet
- New Directly Imaged Planet Challenges Planet Formation Theories

Gemini Planet Imager: Briefly mentioned earlier, the Gemini Planet Imager didn't just image planets, it also imaged debris disks! And it did so in polarized light, using polarimetric differential imaging, a technique that separates the starlight from the disk's light. They've got a whole survey sample of polarized debris disks, plus some neat in-depth studies of individual disks!
- New detections of exoplanet HD 95086 b with the Gemini Planet Imager
- A Deeper Look into the Atmospheres of HR8799 c and d with GPI

Subaru/SCExAO/CHARIS: The CHARIS instrument on the Subaru Telescope can do spectropolarimetry [looking at polarization in multiple wavelengths] in the infrared, including polarimetric differential imaging (CHARIS-PDI), which is useful for finding exoplanets and disks. They've done some cool imaging of jets off young T Tauri stars and debris disks!
- The Birth of a Baby Planet
- Supernovae in 3d [Subaru FOCAS, not CHARIS!]

ALMA: Polarimetry works a bit differently for radio telescope arrays like ALMA, but they make it happen. ALMA has been key for understanding magnetic fields of objects across the Universe, such as the interesting and extreme supernova AT2018cow!

Event Horizon Telescope: Similar to ALMA in that it's a bit different from a "normal" single telescope, the EHT array managed to measure one of the most extreme examples of polarization yet: the polarized light from the dusty region around M87's supermassive black hole!
- This Black Hole Has Some Pol(arization)
- Interview Series: Dr. Lia Medeiros

HARPS: HARPS, the ESO's famous spectrograph, now has polarimetric capabilities! It's capable of spectropolarimetry, which can help in understanding the magnetic fields of stars.
- From Earth Plants to Exo-Plants: Spectropolarimetry as an Agnostic Biosignature

SOFIA HAWC+: The airborne observatory SOFIA has a unique far-infrared imaging polarimeter called HAWC+, which has been used to look at star-forming regions and emission in a dusty torus around an active galactic nucleus.
- The Magnetic Menagerie of NGC 1097
- Meet the AAS Keynote Speakers: Prof. David Chuss
- Astrobites at AAS 240 Day 3 [SOFIA Town Hall]
- Meet the AAS Keynote Speakers: Dr. Enrique Lopez Rodriguez
- Polarime-trying to Map Magnetic Fields in the Orion Nebula

There are definitely more polarimeters and science cases than mentioned here, but hopefully this is a useful start if you're thinking about polarimetry in your research or just trying to learn more!

Astrobite edited by: Jessie Thwaites and Sabina Sagynbayeva
Featured image credit: Encyclopedia Britannica

Resources:
- ESO Polarimetry
- Polarimetry: A Powerful Diagnostic Tool in Astronomy
- Astronomical Polarimetry (thesis)
- [Book] Kolokolova, L., Hough, J., & Levasseur-Regourd, A. (Eds.). (2015). Polarimetry of Stars and Planetary Systems. Cambridge: Cambridge University Press. doi:10.1017/CBO9781107358249
- [Textbook] Hecht, Eugene. Optics. Pearson Education, 2012.

About the author: Briley Lewis is a PhD Candidate and NSF Fellow at the University of California, Los Angeles studying Astronomy & Astrophysics. Her research interests are primarily in planetary systems, both exoplanets and objects in our own solar system, how they form, and how we can create instruments to learn more about them. She has previously pursued her research at the American Museum of Natural History in NYC, and also at Space Telescope Science Institute in Baltimore, MD. Outside of research, she is passionate about teaching and public outreach, and spends her free time bringing together her love of science with her loves of crafting and writing, and playing with her rescue dog Rocky.
283
Interval Type-3 Fuzzy Sets | SpringerLink
===============

Interval Type-3 Fuzzy Sets

Chapter in: Interval Type-3 Fuzzy Systems: Theory and Design
First online: 14 March 2022, pp. 13–43
Part of the book series: Studies in Fuzziness and Soft Computing (STUDFUZZ, volume 418)
Oscar Castillo, Juan R. Castro, and Patricia Melin

Abstract

In this chapter, we provide an overview of type-3 fuzzy sets and their operations. This overview is intended to offer the basic concepts required for understanding the methods and algorithms presented later in this book.

References

Mohammadzadeh, A., Sabzalian, M.H., Zhang, W.: An interval type-3 fuzzy system and a new online fractional-order learning algorithm: theory and practice. IEEE Trans. Fuzzy Syst. 28(9), 1940–1950 (2020)
Rickard, J.T., Aisbett, J., Gibbon, G.: Fuzzy subsethood for fuzzy sets of type-2 and generalized type-n. IEEE Trans. Fuzzy Syst. 17(1), 50–60 (2009)
Liu, Z., Mohammadzadeh, A., Turabieh, H., Mafarja, M., Band, S.S., Mosavi, A.: A new online learned interval type-3 fuzzy control system for solar energy management systems. IEEE Access 9, 10498–10508 (2021)
Liang, Q., Mendel, J.M.: Interval type-2 fuzzy logic systems: theory and design. IEEE Trans. Fuzzy Syst. 8, 535–550 (2000)
Mendel, J.M.: Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions. Prentice-Hall, Upper Saddle River, NJ (2001)
Mendel, J.M., Hagras, H., Tan, W.-W., Melek, W.W., Ying, H.: Introduction to Type-2 Fuzzy Logic Control. Wiley and IEEE Press, Hoboken, NJ (2014)
Karnik, N.N., Mendel, J.M.: Operations on type-2 fuzzy sets. Fuzzy Sets Syst. 122, 327–348 (2001)
Sakalli, A., Kumbasar, T., Mendel, J.M.: Towards systematic design of general type-2 fuzzy logic controllers: analysis, interpretation, and tuning. IEEE Trans. Fuzzy Syst. 29(2), 226–239 (2021)
Cao, Y., Raise, A., Mohammadzadeh, A., et al.: Deep learned recurrent type-3 fuzzy system: application for renewable energy modeling and prediction. Energy Reports (2021)
Moreno, J.E., et al.: Design of an interval type-2 fuzzy model with justifiable uncertainty. Inf. Sci. 513, 206–221 (2020)
Qasem, S.N., Ahmadian, A., Mohammadzadeh, A., Rathinasamy, S., Pahlevanzadeh, B.: A type-3 logic fuzzy system: optimized by a correntropy based Kalman filter with adaptive fuzzy kernel size. Inform. Sci. 572, 424–443 (2021)

Author information

Division of Graduate Studies, Tijuana Institute of Technology, Tijuana, Baja California, Mexico: Oscar Castillo, Patricia Melin
School of Engineering, UABC University, Tijuana, Baja California, Mexico: Juan R. Castro
Corresponding author: Oscar Castillo

Copyright © 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

Cite this chapter: Castillo, O., Castro, J.R., Melin, P. (2022). Interval Type-3 Fuzzy Sets. In: Interval Type-3 Fuzzy Systems: Theory and Design. Studies in Fuzziness and Soft Computing, vol 418. Springer, Cham.

Published: 14 March 2022 · Print ISBN: 978-3-030-96514-3 · Online ISBN: 978-3-030-96515-0
284
What is self-modifying code? Can we modify the source code while compiling it using GCC? If yes, how will we do that? - Quora
===============

What is self-modifying code? Can we modify the source code while compiling it using GCC? If yes, how will we do that?

Dale Gulledge, real-time software developer:

Self-modifying code is code that modifies itself at run time. You can certainly write self-modifying code in C. The easiest way I can think of for someone not intimately familiar with the instruction set of the target processor would be to have your program generate code for a shared library, run the compiler on it, and then load it. While you wouldn't truly be modifying the running code in place, you would be modifying the behavior of the running program. It's essentially a more limited version of the updates that some programs do by downloading the installer for the new version, running it, and then restarting.

Truly self-modifying code modifies the program in memory while it's running. It's very easy to do in many interpreted languages. It's also something to be very careful with, because when it goes wrong it's very painful to debug, since the code in memory doesn't match the source code.
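A minimal sketch of the approach Dale describes: generate source for a shared library, invoke gcc on it at run time, and load the result into the running process. Python and ctypes are used here for brevity; it assumes gcc is on the PATH on a POSIX-style system, and the function name answer is invented for the illustration:

    import ctypes, os, subprocess, tempfile

    # Generate C source for a tiny shared library, compile it with gcc,
    # then load it into the running process and call the new function.
    c_src = "int answer(void) { return 42; }\n"

    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "gen.c")
        lib = os.path.join(d, "libgen.so")
        with open(src, "w") as f:
            f.write(c_src)
        subprocess.run(["gcc", "-shared", "-fPIC", "-o", lib, src],
                       check=True)
        gen = ctypes.CDLL(lib)   # load the just-built code
        print(gen.answer())      # -> 42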
Lou Andvere, been holding senior roles for the last 20 years (answering the related question "Can GCC (or another compiler) inline or optimize a function call mistakenly and generate a program that produces wrong results?"):
Our engine at the time was PC-only, was fairly cutting-edge in terms of features, and, in particular, had features that had never been done on the PS2 Continue Reading Back in the PlayStation 2 days, I was working at a somewhat weird gaming startup in California. The startup imploded later, but that’s a different story. This story is about how four weeks before E3 (the Electronic Entertainment Expo, a major industry event), our CTO walked into our engine team office carrying a PS2 devkit, plopped it down on the nearest desk, and, with a somewhat guilty look on his face, announced that we need a PS2 demo for E3. Our engine at the time was PC-only, was fairly cutting-edge in terms of features, and, in particular, had features that had never been done on the PS2. Full-scene shadows, for one. Normal mapping on everything, too. You see, the PS2 was composed of an amazing set of chips that were crippled, hobbled and stymied by an absolutely inane architecture that, I suspect, was the result of some serious infighting at Sony. But, back to our CTO. The room went quiet. Joe, Jake and Bill, my partners in crime at the time, sat there looking at him. “That’s a joke, right?” said Jake, finally. It wasn’t a joke. I’m not good with pregnant pauses. They make me agree to things I shouldn’t agree to. I would do very poorly in a police interrogation. Five minutes of silence, and I’d implicate my own grandma. Thirty seconds passed. “Ok I’ll take it,” I said. I got six volumes, I think, of PS2 manuals. Green ones. I opened them. They were in Japanese. My knowledge of Japanese is basically limited to “watashi wa nihongo ga wakarimasen”, which means “I do not speak Japanese” and may or may not be grammatically correct, and, in any case, is not a phrase I was likely to encounter in the manuals. It took them a week to get me the English set, in the meantime I cobbled together a prototype renderer using the sample code (which was commented almost exclusively in Japanese, so figuring it out involved some detective work). The PS2 compiler was made by a company called SN Systems, before they were acquired by Sony. They had two compilers, SNC and ProDG, where if memory serves me right, ProDG was a GCC fork. I believe we went with ProDG, but am not sure now. I think it had better template support at the time? Something along those lines. We got our demo, with full-scene dynamic shadows and normal mapping on everything (I think only one other person has ever done that, we work at the same company now), showed it behind closed doors to the Playstation Magazine who almost pissed themselves. Too bad the company went out of business shortly thereafter and nothing was ever done with the tech and I spent three years of my life doing things that never saw the light of day, but that’s the game industry for ya. But. For four or five months, I more or less drove our PS2 development, and so was very familiar with the issues. During this time, we’d filed, I’d estimate, about one hundred legitimate bug reports to SN Systems (who, I should say, were fantastically prompt at addressing them, and I loved working with my contacts there). Actual, honest-to-goodness, compiler bugs. Many years later I got a little bit of a taste of this from the other side of the barricades, while working on a real-world compiler at a large company. I’ve since decided that I greatly prefer fixing bugs in a compiler I’m working on, than reporting bugs in a compiler that I’m working with. 
So yes, compilers do make mistakes, and they do generate wrong programs, and debugging that sort of thing can be… interesting.

Steve Johnson, former Bell Labs, then Ardent, Transmeta, The Mathworks (answering the related question "Can software modify its own source code while running on the computer? If so, what is the use of this feature or concept?"):

In the earliest computers, this was the only way to write a loop! In Univac I, for example, a loop involved modifying the load and store instructions to increase the address at each iteration and then cleaning up afterwards. Even in the early PC days, so-called self-modifying code was used in operating system code. In some cases, it could even be faster than not modifying the code.

As computers became more complicated, with caches and branch prediction, self-modifying code became quite complicated to implement correctly and hard to debug. There are still cases where it can be useful. For example, planting a breakpoint in a debugger is often done by replacing the instruction with a branch to a block of code that then branches back into the code being debugged.

There is also an area that is quite a bit more "respectable" than the above uses. It is Just-In-Time or JIT code. When a feature is implemented as a JIT, the affected program is patched to branch to a place where, in effect, a special-purpose compiler generates binary code to provide the feature desired, and then branches back into the program. Supporting JIT code puts some strain on the OS and may need special memory protection hardware. But, when the OS and hardware support it, JIT code can provide improved performance and excellent debugger support.

EDIT: Other answers have pointed out that operating system code can and does modify very low-level code to customize the OS to the exact hardware that it is booted on. This does not make me sleep better at night… Sounds like a security hole ready to open, especially with multiple authors with their fingers in this particular pie.

Steve Baker, Senior Software Engineer (answering the related question "Can software modify its own source code while running on the computer? If so, what is the use of this feature or concept?"):
Steve Baker · Senior Software Engineer (2013–present)

Can software modify its own source code while running on the computer? If so, what is the use of this feature or concept?

Compiled software (written in, for example, C or C++) could modify its source code, but it wouldn’t have any effect, because that code is compiled into binary machine code, and changing the sources won’t change the running program. Interpreted languages like JavaScript and Python could potentially have self-modifying source code; certainly in JavaScript you can do that. I’ve no idea about Python and others, though.

I wonder whether you ACTUALLY care more about self-modifying code in general… and that’s a very different matter. Back when computers were much slower and a lot of code was written in assembly language, it was quite common to have self-modifying code. You’d see it a lot in video games, where it could be used to save significant amounts of time by rewriting code rather than having an “if” statement.

When I was in college back in the 1970s, we had a mainframe computer made by Singer (yes, the sewing machine company!) which didn’t have a hardware call stack for calling subroutines (so no “CALL” and “RET” instructions), and the way the Algol-60 and Fortran compilers implemented subroutines was to compile in a JMP instruction with a 0000 target address as the last line of the subroutine. When it generated code to call the subroutine, it would first overwrite the address field at the end of the subroutine with the address of the next-but-one instruction, then jump to the start of the subroutine, which would do its thing and then obediently jump to that next instruction to return to the calling program. Sadly, this meant that you couldn’t use recursion… but back then, that was an acceptable limitation.

However, with modern operating systems, it’s typically impossible to write self-modifying code, because the operating system makes the machine code write-protected. This is necessary because there can be multiple copies of the same program (or even the same DLL library) running at once, and you want to be able to share the executable code across all of those copies. If code could be self-modifying, then one copy of the program could change the code for all of them, which would be A Very Bad Thing (tm).
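(Editorial aside: the “interpreted languages” point is easy to demonstrate. Below is a hedged Python toy, not from the answer itself, in which a script rewrites a constant in its own source file on every run; the file name and constant are made up for illustration.)

    # selfmod.py -- toy self-modifying *source*: each run rewrites the
    # RUNS constant in this very file, so the next run sees a new value.
    import re

    RUNS = 0  # rewritten in place on every execution

    def bump_counter():
        with open(__file__, "r") as f:
            src = f.read()
        src = re.sub(r"^RUNS = \d+", f"RUNS = {RUNS + 1}", src,
                     count=1, flags=re.M)
        with open(__file__, "w") as f:
            f.write(src)

    if __name__ == "__main__":
        print(f"this is run number {RUNS}")
        bump_counter()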
285
The embedding capacity of $4$-dimensional symplectic ellipsoids | Annals of Mathematics
===============

The embedding capacity of $4$-dimensional symplectic ellipsoids

Pages 1191-1282 from Volume 175 (2012), Issue 3
by Dusa McDuff, Felix Schlenk

Abstract

This paper calculates the function $c(a)$ whose value at $a$ is the infimum of the size of a ball that contains a symplectic image of the ellipsoid $E(1,a)$. (Here $a \ge 1$ is the ratio of the area of the large axis to that of the smaller axis.) The structure of the graph of $c(a)$ is surprisingly rich. The volume constraint implies that $c(a)$ is always greater than or equal to the square root of $a$, and it is not hard to see that this is an equality for large $a$. However, for $a$ less than the fourth power $\tau^4$ of the golden ratio, $c(a)$ is piecewise linear, with a graph that alternately lies on a line through the origin and is horizontal. We prove this by showing that there are exceptional curves in blow-ups of the complex projective plane whose homology classes are given by the continued fraction expansions of ratios of Fibonacci numbers. On the interval $[\tau^4, 7]$ we find $c(a) = (a+1)/3$. For $a \ge 7$, the function $c(a)$ coincides with the square root except on a finite number of intervals where it is again piecewise linear. The embedding constraints coming from embedded contact homology give rise to another capacity function $c_{ECH}$ which may be computed by counting lattice points in appropriate right-angled triangles. According to Hutchings and Taubes, the functorial properties of embedded contact homology imply that $c_{ECH}(a) \le c(a)$ for all $a$. We show here that $c_{ECH}(a) \ge c(a)$ for all $a$.

Keywords: Fibonacci numbers, symplectic embeddings

MR 2912705 · zbMATH 06051270

Milestones: Received 31 January 2010 · Accepted 8 April 2011 · Published online 1 May 2012

Authors: Dusa McDuff, Department of Mathematics, Barnard College, Columbia University, New York, NY 10027; Felix Schlenk, Université de Neuchâtel, Neuchâtel, Switzerland
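(Editorial sketch, not part of the article page: with the standard normalization in which the ellipsoid $E(a,b) \subset \mathbb{C}^2$ has volume $ab/2$, the volume constraint quoted in the abstract is the one-line computation)

$$\operatorname{vol} E(1,a) = \frac{a}{2}, \qquad \operatorname{vol} B(\mu) = \operatorname{vol} E(\mu,\mu) = \frac{\mu^2}{2},$$

so a symplectic embedding $E(1,a) \hookrightarrow B(\mu)$, being volume preserving, forces $\mu^2 \ge a$, i.e. $c(a) \ge \sqrt{a}$.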
286
ca.classical analysis and odes - How did Gauss discover the invariant density for the Gauss map? - MathOverflow
===============

How did Gauss discover the invariant density for the Gauss map?

Asked 15 years, 3 months ago · Modified 5 years, 3 months ago · Viewed 3k times · 18

The Gauss map is defined on $(0,1)$ by the formula
$$f(x) = \frac{1}{x} - \left\lfloor \frac{1}{x} \right\rfloor.$$
Then the density
$$\rho(x) = \frac{1}{\log 2 \, (1+x)}$$
is $f$-invariant. It appeared in Gauss' diary. Gauss didn't indicate the way he had found the density. Checking invariance is straightforward. Is there a simple (short) way to come up with this density function?

ca.classical-analysis-and-odes · ds.dynamical-systems

asked May 3, 2010 at 1:09 by Zarathustra

Comments:

– A. Ya. Khinchin says in the little book "Continued Fractions" that Gauss did not include a proof in the letter to Laplace that gives the density, and never published the proof, but proofs appeared with Kuz'min in 1928 and Levy in 1929. Khinchin indicates that his proof follows Kuz'min. But it is several pages. Available as a Dover paperback. And, of course, there could be a modern short proof. (Will Jagy, May 3, 2010)

– Will, I think Khinchin is talking about the proof of ergodicity of $f$. This is important as it can be used to prove a statement on the distribution of continued fraction "digits". This statement is frequently referred to as "Kuz'min's theorem". I may be wrong. (Zarathustra, May 4, 2010)

– Yes, Khinchin is preparing the whole topic in very few pages; the Gauss measure blends in with lots of related stuff. Section 15, "Gauss's problem and Kuz'min's theorem", starts on page 71, and I think the first time he mentions the exact form of your $\rho(x)$ is formula (80) on page 82. I see you liked Peter Luthy's heuristic. Good. (Will Jagy, May 4, 2010)

3 Answers
Answer (Peter Luthy, 15 votes):

Here is a heuristic. Observe that $f$ is decreasing on any interval of the form $I_n := (1/(n+1), 1/n)$. If $x \in I_n$ and $f(x) = a$, then $x = \frac{1}{a+n}$, and so $f^{-1}((a,1))$ is the union of the intervals $(1/(n+1), 1/(n+a))$. Supposing there were an $f$-invariant measure $\mu = g\,dx$, you can see that
$$\sum_{n=1}^{\infty} \int_{1/(n+1)}^{1/(n+a)} g(x)\,dx = \int_a^1 g(x)\,dx.$$
Supposing that $g$ is continuous, take the derivative of both sides:
$$(*) \qquad \sum_{n=1}^{\infty} \frac{g(1/(n+a))}{(n+a)^2} = g(a).$$
Approximating the sum by an integral, we get
$$g(a) \approx \int_1^{\infty} g\!\left(\frac{1}{x+a}\right) \frac{1}{(x+a)^2}\,dx.$$
Supposing that $g$ is never too big or too small, this gives $g(a) \approx \frac{C}{1+a}$. One can then plug this kind of function into $(*)$ to see that such a function works for any $C$. Then just pick $C$ to normalize. Probably someone from the era of Gauss (especially with his acuity at mathematics) did not need to do anything past $(*)$, since people back then seem like magicians when it comes to expressions with infinite sums.

answered May 3, 2010 at 5:03 by Peter Luthy

– You're very welcome! (Peter Luthy, May 4, 2010)

Answer (Alexey Ustinov, 4 votes):

It is not the density; it is the distribution function. The density function $1/(x+1)$ is not so complicated to find. Rational numbers lead to this function as well, so some experiments can give this function. Other heuristics can be found in the article: Gyldén, H., "Quelques remarques relativement à la représentation des nombres irrationnels par des fractions continues", C. R. Ac. Sci. Paris, 1888, 107, 1584–1587. They are very rough, but one of Gyldén's answers is correct.

edited May 3, 2010 at 13:39 · answered May 3, 2010 at 2:48 by Alexey Ustinov

Comments:

– Alexey, could you elaborate on this: if someone didn't already know an invariant measure for the Gauss transformation, what kind of numerical experiments (or other ideas) would naturally lead one to anticipate that something like the Gauss measure is a good candidate for an invariant measure? That the Gauss measure works can be verified mechanically once it is found, but I am unaware of a natural way to find it in the first place, as also spoke Zarathustra. (KConrad)

– You can consider the continued fraction expansion of rational numbers $a/b$, $1 \le a \le b \le R$, $R \to \infty$. You will find that there is a probability $p(1)$ that a partial quotient is equal to 1 (or 2, 3, ...). More generally, there is a probability that the tail of the continued fraction is less than or equal to a fixed number $x$. Using $p(1)$, $p(2)$, ... you'll find values of the distribution function at $x = 1/2$, $2/3$, $3/4$, ... The next step is to find the appropriate density function. (Alexey Ustinov)

– @KConrad: philosophy.eserver.org/nietzsche-zarathustra.txt (Will Jagy)

– Alexey, I understand what you are saying. It is hard to believe that it was discovered like that (though I know that people were good at doing math experiments by hand back then).
Does anyone know if Gauss was collecting this statistical data for rational numbers? (Zarathustra)

– The same statistics can come from quadratic irrationalities. (Alexey Ustinov)

Answer (Martin Andersson, 1 vote):

I once saw a talk by professor Pierre Arnoux of Institut de Mathématique de Luminy where he talked precisely about that. The baseline was that nobody knows. Apparently Gauss wrote in a letter that "an easy calculation shows that...". For the rest of the talk he presented some approaches that Gauss might have taken. But in the end we don't know.

answered Apr 22, 2020 at 12:44 by Martin Andersson

– This sounds more like a comment to me, unless you elaborate on what those possible approaches presented were. (Wojowu)

– Well, the question was not "How can one discover the invariant density of the Gauss map", but "How did Gauss discover the invariant density of the Gauss map". I'm saying that the best answer to that question is: we don't know. The reason? Someone has read through all the letters to find the answer and found no answer. (Martin Andersson)
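(Editorial aside, not part of the thread: the invariance claimed in the question is easy to check numerically. A minimal sketch in Python; sample sizes and cutoffs are arbitrary choices.)

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(200_000)                    # uniform starting points
    for _ in range(20):                        # iterate the Gauss map
        x = x[(x > 1e-12) & (x < 1 - 1e-12)]   # avoid division problems
        x = 1.0 / x
        x -= np.floor(x)

    hist, edges = np.histogram(x, bins=50, range=(0.0, 1.0), density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    rho = 1.0 / (np.log(2.0) * (1.0 + mids))   # the Gauss density
    print(np.max(np.abs(hist - rho)))          # small: the orbit histogram
                                               # matches the Gauss density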
287
arXiv:2212.12287v1 [cs.CG] 23 Dec 2022

Circle packing in regular polygons

Paolo Amore
Facultad de Ciencias, CUICBAS, Universidad de Colima, Bernal Díaz del Castillo 340, Colima, Colima, Mexico
[email protected]

December 26, 2022

Abstract

We study the packing of a large number of congruent and non-overlapping circles inside a regular polygon. We have devised efficient algorithms that allow one to generate configurations of N densely packed circles inside a regular polygon, and we have carried out intensive numerical experiments spanning several polygons (the largest number of sides considered here being 16) and up to 200 circles (400 circles in the special cases of the equilateral triangle and the regular hexagon). Some of the configurations that we have found possibly are not global maxima of the packing fraction, particularly for N ≫ 1, due to the great computational complexity of the problem, but nonetheless they should provide good lower bounds for the packing fraction at a given N. This is the first systematic numerical study of packing in regular polygons, which previously had only been carried out for the equilateral triangle, the square and the circle.

1 Introduction

Circle packing is possibly the prototype of a multidisciplinary problem: for physicists working in soft condensed matter, circle packing, or more generally sphere packing, is relevant in the study of systems with a large number of particles interacting via contact (short-range) interactions (the case of particles of fixed irregular non-congruent shapes is equally if not more interesting); for mathematicians, circle packing falls in a wider class of problems, exemplified by the renowned Kepler conjecture, regarding optimal sphere packing in three-dimensional space; to computer scientists it represents a demanding computational problem, which can be used as a natural testing ground for developing efficient algorithms. Packing is also an important problem in everyday life and even in the organization of natural systems: the Tammes problem, which amounts essentially to circle packing on the surface of a sphere, was originally introduced to describe the distribution of pores on pollen grains [1, 2].

In general, these problems are very intuitive, and grasping their essence does not require particular mathematical abilities; unfortunately they are also very difficult to solve, exactly or even approximately (numerically). This is exemplified by the Kepler conjecture, which regards sphere packing in three-dimensional Euclidean space, and was proved only recently by Thomas Hales [3], almost 400 years after its initial formulation.

The relevant quantity that one tries to maximize in a packing problem is the packing fraction, defined as the ratio between the area covered by the disks (or whatever shape one is working with) and the total area of the container. For the case of congruent circles in the infinite plane it has been proved that the maximum value one can obtain is [4, 5]
$$\rho_{\rm plane} = \frac{\pi}{\sqrt{12}}. \tag{1}$$

Clearly $\rho_{\rm plane}$ is also an upper bound to the packing fraction of N congruent disks in a finite domain in the plane: in general, the optimal hexagonal packing of the infinite plane cannot be achieved in a finite domain, because of the frustration introduced by the borders, which may lower the maximal density considerably (see footnote 1 below). The effect of the border becomes negligible in the limit N → ∞, and the optimal bound (1) can be recovered in this limit.
The main domains in the plane where the packing of congruent disks has been studied before are the circle [6, 7, 8, 9, 10, 11, 12, 13, 14], the equilateral triangle [17, 18, 19, 20, 21, 22, 23], the square (see refs. [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34]) and rectangles of different proportions [35, 36, 37, 38]. Additionally, results for specific shapes of the container and of the objects (which can also be non-congruent) can be found in the repositories [39] and [40]. In particular the "packomania" repository [40], curated by E. Specht, contains an incredibly rich collection of configurations (and diagrams), which have recently been extended to include more general regular polygons for modest numbers of disks. We also mention the repository [41], which contains the configurations found in [34] for circle packing in the square. For the reader interested in obtaining a global picture of the problem, ref. [42] provides a review of the status of circle and sphere packing.

The main purpose of the present article is to discuss the packing of congruent circles inside domains with the shape of a regular polygon. To achieve this goal, the algorithm of ref. [34] (which in turn is an improvement of the algorithm initially proposed by Nurmela and Östergård [28]) is extended and modified. The algorithms that we introduce in this work are then used to explore dense configurations of N congruent disks in regular polygons with different numbers of sides, hopefully corresponding, in some cases, to global maxima of the packing fraction (due to the rapidly increasing complexity of the problem as more and more circles are considered, finding a global maximum of the density becomes extremely difficult even for not so large configurations).

The paper is organized as follows: in section 2 we describe the various algorithms that we have developed; in section 3 we discuss Euler's theorem of topology for the domains under consideration, in terms of topological charges; in section 4 we provide upper bounds for the maximal packing density in domains with the shape of a regular polygon; in section 5 we report the numerical results and discuss the main features of the configurations obtained; finally, in section 6 we draw our conclusions, highlight the main achievements and discuss possible directions for future work.

Footnote 1: For the equilateral triangle and the regular hexagon the hexagonal packing can be realized for specific values of N. Even in this case, however, the maximal density is lower than (1) for finite N due to the effect of the border.

2 The method

In this section we describe the extension of the methods recently introduced in [34] for the case of a square container to containers with the shape of regular polygons. Following [28, 34], we need to place N points inside a regular polygon with σ sides and arrange them in such a way that the minimal distance between any two points is maximal. If we take this distance to be twice the radius of the N congruent disks with centers at the N points, the resulting configuration represents an optimal packing of N disks inside the regular polygon.
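(A minimal sketch of this points-to-disks correspondence; the helper below is illustrative, not from the paper.)

    import numpy as np

    def disk_radius(points):
        # Given the N candidate centers as an (N, 2) array, the disks of
        # the corresponding packing have radius equal to half the minimal
        # pairwise distance between centers.
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return 0.5 * d.min()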
We parametrize the coordinates of a point inside a regular polygon with σ sides as
$$x(t,u,\sigma) = \sin^2\! t \; X(u,0,\sigma), \qquad y(t,u,\sigma) = \sin^2\! t \; Y(u,0,\sigma), \tag{2}$$
where $P = (X(u,0,\sigma), Y(u,0,\sigma))$ is a point on the border of the polygon and
$$X(u,\delta,\sigma) = \Gamma(u,\delta,\sigma)\cos u, \qquad Y(u,\delta,\sigma) = \Gamma(u,\delta,\sigma)\sin u, \tag{3}$$
with
$$\Gamma(u,\delta,\sigma) \equiv \left(\delta + \cos\frac{\pi}{\sigma}\right) \sec\!\left(\frac{\pi}{\sigma} - \left(u \bmod \frac{2\pi}{\sigma}\right)\right). \tag{4}$$

Figure 1: Optimal configuration for 3 disks inside a pentagon.

In this parametrization the coordinates of the centers of the disks are allowed to vary inside a regular polygon of unit circumradius (which corresponds to using δ = 0 in the formulas above), whereas
$$R_- = r + \cos\frac{\pi}{\sigma}, \qquad R_+ = 1 + r\sec\frac{\pi}{\sigma} \tag{5}$$
are the apothem and the circumradius of the regular polygon containing all the disks. This situation is illustrated in Fig. 1 for the case of a regular pentagon (σ = 5): the inner (red) pentagon is the region that the points (the centers of the disks) are allowed to access, whereas the outer (blue) pentagon is the region where the disks can move. The perimeter and area of the outer polygon are
$$P(r,\sigma) = 2\sigma\left(r\tan\frac{\pi}{\sigma} + \sin\frac{\pi}{\sigma}\right), \qquad A(r,\sigma) = \frac{\sigma}{2}\sin\frac{2\pi}{\sigma}\left(1 + r\sec\frac{\pi}{\sigma}\right)^2. \tag{6}$$
Similarly, the perimeter of the inner polygon is
$$P_-(r,\sigma) = 2\sigma\sin\frac{\pi}{\sigma}. \tag{7}$$

We define the packing fraction as the ratio between the area inside the polygon occupied by the disks and the total area of the polygon:
$$\rho \equiv \frac{N\pi r^2}{A(r,\sigma)} = \frac{N\pi r^2 \cot(\pi/\sigma)}{\sigma\,(r + \cos(\pi/\sigma))^2}. \tag{8}$$
As the maximum density that can be achieved in the infinite plane is given by (1), the condition $\rho \le \rho_{\rm plane}$ constrains the dimensions of the disks:
$$0 \le r \le \varsigma(N,\sigma) \equiv \frac{\sigma\cos(\pi/\sigma)}{\sqrt{2\sqrt{3}\,N\sigma\cot(\pi/\sigma)} - \sigma}. \tag{9}$$
In particular, for N ≫ 1,
$$\varsigma(N,\sigma) \approx \sqrt{\frac{\sigma\sin(2\pi/\sigma)}{4\sqrt{3}}}\,\frac{1}{\sqrt{N}} + O\!\left[\frac{1}{N}\right]. \tag{10}$$

It is natural to introduce the packing efficiency
$$\varepsilon \equiv \frac{r}{\varsigma(N,\sigma)} \le 1, \tag{11}$$
where ε = 1 corresponds to reaching the highest possible density at the given N. Similarly, we define
$$\xi \equiv \frac{2 N_b r}{P_-} \le 1, \tag{12}$$
representing the fraction of the perimeter of the inner polygon covered by the peripheral disks ($N_b$ is the number of such disks). In this case ξ = 1 does not necessarily correspond to reaching the maximal density (for instance, if one arranges n² disks in a square packing inside a square, then ξ = 1, but the density is not optimal). In addition to these parameters, we also introduce
$$\nu \equiv \frac{N_v}{\sigma}, \tag{13}$$
where $N_v$ is the number of vertices of the regular polygon which are occupied by a disk, and
$$\delta \equiv \frac{L}{2Nr} - 1 \ge 0, \tag{14}$$
where L is the minimal length of a path that goes through all the points, crossing each of them only once (i.e. the minimal path of the associated travelling salesman problem). Configurations of maximal density with δ = 0, if found, correspond to the packing of a "necklace". A necessary condition for a configuration of N points to be a necklace is that each point be at the minimal distance from at least two other points. The packing of filaments inside finite regions has interesting applications in physics and biophysics (see refs. [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53]).
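(A minimal numerical sketch of the parametrization (2)-(4), assuming the $\sin^2 t$ reading of eq. (2); the function names are illustrative, not from the paper.)

    import numpy as np

    def gamma(u, delta, sigma):
        # Eq. (4): radial distance of the polygon border at polar angle u;
        # delta = 0 gives the unit-circumradius polygon of the centers.
        m = np.mod(u, 2.0 * np.pi / sigma)
        return (delta + np.cos(np.pi / sigma)) / np.cos(np.pi / sigma - m)

    def interior_point(t, u, sigma):
        # Eqs. (2)-(3): the factor sin(t)**2 in [0, 1] pulls a border point
        # toward the center, so (t, u) is an unconstrained parametrization.
        s = np.sin(t) ** 2 * gamma(u, 0.0, sigma)
        return s * np.cos(u), s * np.sin(u)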
Before describing the algorithms that we have introduced to obtain high-density packing configurations, it is useful to specify three possible conventions in the problem setup:

• I: the regular polygon has circumradius $R_+ = 1 + r\sec(\pi/\sigma)$, where r is the radius of the disks (this corresponds to the case represented in Fig. 1);
• II: the size of the regular polygon is held fixed at $R_+ = 1$;
• III: the radius of the disks is held fixed at $r = 1/2$.

In Table 1 we report the main properties of the polygons using the three conventions. Notice that ρ and ξ defined earlier are scale-invariant quantities, and therefore their expressions do not depend on the convention used.

Table 1: Different conventions for the packing of circles inside regular polygons. The results are expressed either using the radius of the disks in I or in II (first and second expression in each cell).

$r$:
  I: $r$ ; $\dfrac{r}{1 - r\sec(\pi/\sigma)}$
  II: $\dfrac{r}{1 + r\sec(\pi/\sigma)}$ ; $r$
  III: $\dfrac{1}{2}$ ; $\dfrac{1}{2}$

$R_+$:
  I: $1 + r\sec(\pi/\sigma)$ ; $\dfrac{1}{1 - r\sec(\pi/\sigma)}$
  II: $1$ ; $1$
  III: $\dfrac{1 + r\sec(\pi/\sigma)}{2r}$ ; $\dfrac{1}{2r}$

$A$:
  I: $\dfrac{\sigma}{2}\sin\dfrac{2\pi}{\sigma}\left(1 + r\sec\dfrac{\pi}{\sigma}\right)^2$ ; $\dfrac{\sigma\sin(2\pi/\sigma)}{2\,(r\sec(\pi/\sigma) - 1)^2}$
  II: $\dfrac{\sigma}{2}\sin\dfrac{2\pi}{\sigma}$
  III: $\dfrac{\sigma\sin(2\pi/\sigma)}{8r^2}$ ; $\dfrac{\sigma\tan(\pi/\sigma)\,(r - \cos(\pi/\sigma))^2}{4r^2}$

$P$:
  I: $2\sigma\left(r\tan\dfrac{\pi}{\sigma} + \sin\dfrac{\pi}{\sigma}\right)$ ; $\dfrac{2\sigma\sin(\pi/\sigma)}{1 - r\sec(\pi/\sigma)}$
  II: $2\sigma\sin\dfrac{\pi}{\sigma}$
  III: $\dfrac{\sigma\sin(\pi/\sigma)}{r}$ ; $\dfrac{\sigma\tan(\pi/\sigma)\,(\cos(\pi/\sigma) - r)}{r}$

We can now briefly describe the algorithms used in this paper.

2.1 Algorithm 1

This algorithm has been recently introduced in ref. [34] as an improvement of an algorithm proposed earlier by Nurmela and Östergård in [28]. The main difference of the present implementation from the one of [34] is the different parametrization of the domain. Refs. [28, 34] are limited to the square: in that case a point inside the square is parametrized as
$$(x,y) = \frac{\ell}{2}\,(\sin t, \sin u), \qquad -\frac{\pi}{2} \le t, u \le \frac{\pi}{2}. \tag{15}$$
Observe that this parametrization is fundamentally different from the one of eq. (2), in a way that is analogous to the relation between cartesian and polar coordinates. In particular, using eq. (2) one is able to describe the entire border of the domain in terms of a single angle ($0 \le u \le 2\pi$).

Once a parametrization is chosen (in this case eq. (2)), the original constrained optimization problem is converted to an unconstrained one, which is easier to deal with. As we have done in [34], we then consider an energy functional
$$V = \sum_{i=2}^{N}\sum_{j=1}^{i-1} \left(\frac{\lambda}{r_{ij}^2}\right)^{s} F_{ij}(\epsilon,\alpha), \tag{16}$$
where
$$F_{ij}(\epsilon,\alpha) = \left[(\cos^2 t_i + \epsilon)(\cos^2 t_j + \epsilon)\right]^{\alpha} \tag{17}$$
with ε > 0 and α ≤ 0. The term $F_{ij}(\epsilon,\alpha)$ provides a border repulsion, which is essential to prevent an excessive number of points from reaching the border in the early stages of the algorithm. As we have remarked in [34], once a point is deposited on the border, it is practically impossible to remove it, and therefore the algorithm will fail to produce a sufficiently dense configuration if the optimal configuration has fewer border points. On the other hand, if we set α = 0 we have F = 1 (this limit would be the analogue of [28], with a different parametrization) and the border repulsion disappears. Although this aspect of the present implementation is similar to that of [34], there is a fundamental difference in the explicit form of F, which reflects the different parametrization used. Clearly there is considerable freedom in choosing the functional form of F, as long as it keeps the points from falling on the border too early. Eq. (17), while not unique, is very simple, thus limiting the amount of numerical work.

The parameter λ in eq. (16) plays an important role: although the potential is always repulsive, the intensity of the repulsion is much stronger for $r^2 \le \lambda$, while dying off for r → ∞. Moreover, the repulsion is essentially confined to the region $r^2 \le \lambda$ for s ≫ 1.
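(A minimal sketch of the functional (16)-(17), reusing interior_point from the sketch above. Here λ is taken as the square of the minimal pairwise distance, consistent with comparing λ to $r_{ij}^2$; the values of ε and α are merely illustrative.)

    import numpy as np

    def energy(t, u, sigma, s, eps=1e-3, alpha=-1.0):
        # Eqs. (16)-(17): short-range pairwise repulsion (lambda/r_ij^2)^s
        # modulated by the border-repulsion factor F_ij(eps, alpha).
        x, y = interior_point(t, u, sigma)
        pts = np.column_stack([x, y])
        d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
        i, j = np.triu_indices(len(pts), k=1)
        r2 = d2[i, j]
        lam = r2.min()                       # square of the minimal distance
        f = (np.cos(t) ** 2 + eps) ** alpha  # one border factor per point
        return np.sum((lam / r2) ** s * f[i] * f[j])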
The appropriate choice is to set λ equal to the minimal distance between any two points in the set: in this way, for large s, the interaction mimics the contact interaction between rigid disks. The pseudocode for this algorithm is essentially the pseudocode of [34] (with minor differences), which we report in the appendix for completeness.

As we have noticed above, what makes our algorithm 1 more effective than the original algorithm of ref. [28] in producing dense configurations is the ability to avoid the overpopulation of the border in the early stages of the algorithm, where the interaction is long-range. The border repulsion that we have introduced limits this problem very much, and the efficiency of the algorithm is greatly enhanced.

We have also modified our Algorithm 1 to carry out the minimization of the energy at each value of s using the basin hopping (BH) algorithm. The basin hopping method [54, 55, 56, 57] is considered the standard algorithm for global optimization of large-scale numerical problems. When applied to the Thomson problem, for example, it has led to consistent improvements of previous records (see for instance [58]).

In the case of circle packing, however, the search for the global maximum of the density is pursued indirectly, by progressively changing the potential from long-range (s small) to short-range (s ≫ 1) and by minimizing the energy at each step. Moreover, the energy functional of eq. (16) is changing even at fixed s because of the dependence on λ. For this reason one may expect that the effectiveness of the method could be affected. In our numerical explorations we have found that implementing the use of basin hopping in algorithm 1 does not lead to a noticeable improvement of performance. We plan to further explore the use of the BH algorithm in packing problems in future work.

2.2 Algorithm 2

As we have discussed in the previous section, the fundamental inspiration for improving the algorithm of [28], and thus obtaining algorithm 1, comes from regarding a configuration of disks as a physical system of repelling "charges" and recognizing that, in a conductor, charges tend to go to the surface (in three dimensions): in two-dimensional domains, for long-range interactions, the charge density tends to accumulate at the border, leaving the central region depleted. The inclusion of border repulsion allows one to limit this behavior greatly, and is reflected in a larger probability of generating dense configurations.

The configurations obtained with algorithm 1 are typically very dense, but, in many cases, they can still be improved to achieve larger densities. To improve the packing configurations obtained with algorithm 1, we use algorithm 2, originally proposed in [34]. As for algorithm 1, this algorithm is inspired by a concrete physical analogy: it is well known from experiments that the dense packing of uniform spheres or solids of different shapes can be improved if the container into which the objects are poured is shaken [59, 60].

2.3 Algorithm 3: a variance-minimizing algorithm

Although the algorithms of [34] allow one to obtain dense configurations of congruent circles inside a regular polygon, these configurations can still be improved by solving the set of coupled nonlinear equations corresponding to the contacts between neighboring circles and between the peripheral circles and the border. This is the approach followed by Nurmela and Östergård in [28].
This procedure may appear straightforward, but its implementation is not completely trivial; in some cases, what is expected to be a contact is in reality a "false contact", meaning that two disks are extremely close without touching each other. In such cases the corresponding equation should be eliminated from the set. Unfortunately one cannot know a priori whether a contact is "true" or "false", and therefore a considerable amount of experimentation might be needed.

To avoid the complications that we have just described, we have introduced a different algorithm that allows one to improve the results obtained with the two main algorithms without having to solve a system of nonlinear equations. In general we will refer to this process, as well as to the procedure described by Nurmela and Östergård, as a "refinement".

The algorithm is described in the pseudocode in the appendix, and it is based on the observation that the minimal distances of the circles that are expected to be in contact with other circles, in a densely packed configuration obtained with algorithms 1 and 2 (or even a different algorithm), will typically be distributed about some value with a small variance. The smallness of this variance is an indication of the "quality" of the configuration. Ideally, if we can slightly move the points in such a way that the variance vanishes, it means that we have reached a "perfect" packing. Here we use "perfect" not as a synonym of densest, but just to mean that the contacts between the disks are fully enforced within the numerical precision. While moving the points, however, we have to be careful to hold the border points on the border, otherwise unwanted solutions could be produced (think for example of the points gradually collapsing into a unique point).

Figure 2 is worth a thousand words: here we consider the configurations of 100 disks inside a pentagon, obtained using algorithms 1 and 2 (orange plot) and algorithms 1+2+3 (blue plot). The first configuration is very dense, $\rho_1 \approx 0.821430066209$, but the possible contacts are distributed with a rather large variance, $\Sigma_1 \approx 2.2 \times 10^{-7}$. After applying algorithm 3, a slightly denser configuration is found, with $\rho_2 \approx 0.821430442804$, with a variance that is several orders of magnitude smaller, $\Sigma_2 \approx 5.6 \times 10^{-21}$.

Figure 2: Histograms of the distribution of possible contacts for 100 disks inside a pentagon. The orange histogram corresponds to the data obtained using algorithms 1+2, whereas the blue histogram corresponds to applying algorithm 3 to the previous data. The vertical dashed line is the average of the data of the first histogram, whereas the horizontal arrows mark the variance of these data, $\Sigma_1 \approx 2.2 \times 10^{-7}$. The variance of the second set is several orders of magnitude smaller, $\Sigma_2 \approx 5.6 \times 10^{-21}$.

Special consideration can be given to packing problems inside the equilateral triangle and the regular hexagon: these are the only two polygons that allow hexagonal packing for specific values of N. Based on the experience that we have accumulated in solving packing problems, we have observed that the algorithms sometimes produce dense configurations of disks with one or more "holes" (i.e. space that could accommodate an extra disk). Sometimes these configurations may correspond to genuine global maxima of the packing fraction (see footnote 2 below), although more often they can be improved.
In these cases it can be appropriate to follow these simple steps:

• identify the "holes" in the configuration at hand (let $N_h$ be the number of holes);
• pick $N_h$ disks on the border, either randomly or among the disks with fewer neighbors (it may be convenient to select disks that are in contact);
• use the $N_h$ disks above to fill the $N_h$ holes previously detected;
• apply algorithm 2 to the resulting configuration.

One example of this procedure is given in Fig. 3: the configuration in the right plot is obtained from the configuration in the left plot following the steps described above. Similarly, a dense configuration with N disks and $N_h$ holes can be used to produce dense configurations of N + k disks, by filling k holes and applying algorithm 2.

Footnote 2: For the case of the equilateral triangle with N = n(n+1)/2 disks (n = 1, 2, ...) it is known that in the optimal configuration the disks are arranged in a triangular lattice; Oler [17] has conjectured that the configurations with N − 1 disks are obtained from the former by eliminating one of the disks. This conjecture is sometimes also attributed to Erdős.

Figure 3: Left: configuration with 228 disks inside a hexagon, found with Algorithms 1 and 2. Right: configuration obtained applying Algorithms 4 and 2 to the configuration in the left plot. The color scheme used in this figure is explained in Fig. 4: disks that have n contacts (within a given tolerance) with other disks or the walls of the container are represented with a specific color (the same color code will later be used in the Voronoi diagram representations to distinguish cells with different numbers of sides).

Figure 4: Color scheme used to represent the number of contacts (with the border and with other disks), from 0 to 9, or the number of sides of a Voronoi cell (see the discussion in the next section).

3 Euler's theorem and packing

In the absence of a border, the densest packing of congruent disks in the plane corresponds to a hexagonal lattice, with a density (packing fraction) $\rho = \pi/\sqrt{12}$. When borders are added, defining a finite region, this configuration is no longer possible and, as a result, lower densities are achieved. The border introduces a geometrical frustration, in the sense that the hexagonal structure cannot be attained simultaneously for all the disks. A similar situation also occurs in the absence of a border but in the presence of curvature, such as for the problem of N charges interacting on non-planar surfaces (particularly on the sphere) [61, 62, 63, 58, 65, 66, 67]. This last problem is usually referred to as the "Thomson problem".

At a more fundamental level, Euler's theorem of topology requires that any tessellation of the manifold obey the equation
$$N_V - N_E + N_F = \chi, \tag{18}$$
where $N_V$, $N_E$ and $N_F$ are respectively the numbers of vertices, edges and faces of the tessellation and χ is the Euler characteristic (χ = 2 for the sphere). Eq. (18) can be cast in an equivalent form in terms of the topological charges $q_i = 6 - c_i$, where $c_i$ is the coordination number of the i-th point, as [66]
$$Q = \sum_{i=1}^{N} q_i = 6\chi. \tag{19}$$
In particular, since Q = 12 for a sphere, it is not possible to tile a sphere only with hexagons: this is directly observed in the minimal-energy configurations obtained for the Thomson problem, where the simplest arrangement corresponds to 12 pentagonal disclinations immersed in a "sea" of hexagons (however, as the number of particles grows, new kinds of defects, which still comply with Euler's theorem, appear).
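(A trivial sketch of this charge bookkeeping; the helper is illustrative only.)

    def total_charge(coordination):
        # Total topological charge Q = sum_i (6 - c_i) over the points of
        # a triangulated closed surface; by eq. (19) it must equal 6*chi,
        # i.e. 12 on the sphere.
        return sum(6 - c for c in coordination)

    # The icosahedron realizes the minimal case: 12 vertices of
    # coordination 5, each carrying charge +1.
    assert total_charge([5] * 12) == 12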
Consider, for example, the optimal packing of 10 congruent disks in a hexagon, reported in the left panel of Fig. 5; in the right panel we find the Voronoi diagram for this configuration, consisting of 6 quadrilaterals, 2 pentagons and 2 hexagons (represented in different colors). The reader should not be confused by the fact that some of the cells are classified as quadrilaterals although they appear to be pentagons: when one of the vertices of these cells corresponds to a vertex of the hexagonal domain, they can be continuously deformed to a straight line, unless the vertex falls on the border between two adjacent cells. We will refer to these vertices as "spurious".

Figure 5: Left: optimal packing of 10 disks inside a hexagon. Right: Voronoi diagram of the configuration.

Figure 6: Left: optimal packing of 12 disks inside a hexagon. Right: Voronoi diagram of the configuration.

Figure 7: Best packing for N = 13 equal circles inside a hexagon.

Figure 8: Left: optimal packing of 100 disks inside a hexagon. Right: Voronoi diagram of the configuration.

Looking at the figures we see that we have $N_b = 8$ cells on the border of the hexagon (corresponding to the number of peripheral disks) and $\bar N_s = 6$ spurious vertices. With some imagination we can project this diagram onto a sphere (for example, using a stereographic projection from the plane to the sphere) and thus put ourselves in the canonical situation of the Thomson problem. When we perform this operation, the region of the plane external to the hexagon is mapped to a polygonal region on the sphere with exactly $N_b$ vertices (in this case an octagon).

By taking into account the topological charge of the outer face we have
$$Q = \sum_{i=1}^{N} q_i + (6 - N_b) = 12. \tag{20}$$
Using eq. (20) we have
$$Q = 2\cdot 1 + 6\cdot 2 + 2\cdot 0 + 1\cdot(-2) = 12. \tag{21}$$

Let us now consider the optimal configuration of 12 disks inside the hexagon, for which eq. (20) fails. The straightforward application of eq. (19) in this case yields
$$Q = 3\cdot 1 + 9\cdot 2 + 1\cdot(-3) = 18 \ne 12, \tag{22}$$
although one can easily check that Euler's theorem, stated in terms of vertices, edges and faces, holds (as it should!). The breakdown of eq. (20) has a simple explanation (and an equally simple cure): the Voronoi diagram in this case contains three vertices of order four, while the discussion in [66] assumes that only 3 lines emanate from a vertex. A 4-vertex can however be slightly modified into a pair of 3-vertices, which implies that 2 of the four faces sharing the vertex will gain one edge each. In other words, this is equivalent to associating a topological charge of −2 with each 4-vertex. A similar argument can be extended to an n-vertex, which acquires a topological charge −2(n − 3). In this way we write
$$Q = \sum_{i=1}^{N} q_i + \sum_k q_k^{(v)} + (6 - N_b) = 12, \tag{23}$$
where $q_k^{(v)}$ is the topological charge of each vertex (for 3-vertices $q^{(v)} = 0$). Applying eq. (23) to the case of Fig. 6 we now get the correct answer: $Q = 3\cdot 1 + 9\cdot 2 + 3\cdot(-2) + 1\cdot(-3) = 12$. It is instructive to look at the configuration of 13 disks inside the hexagon displayed in Fig. 7: in this case we have $Q = 6\cdot 3 + 6\cdot 1 + 6\cdot(-2) = 12$.

Eq. (23) can be expressed in an equivalent and more practical form:
$$Q = \sum_{i=1}^{N_V^{\rm int}} q_i^{\rm (int)} + \sum_{i=1}^{N_b} \left(q_i^{\rm (ext)} - 1\right) + 6 + \sum_{i=1}^{N_{\rm vert}} q_i^{(v)} = \sum_{i=1}^{N_V^{\rm int}} q_i^{\rm (int)} + \sum_{i=1}^{N_b} \bar q_i^{\rm (ext)} + 6 + \sum_{i=1}^{N_{\rm vert}} q_i^{(v)} = 12, \tag{24}$$
where $\bar q_i^{\rm (ext)} \equiv q_i^{\rm (ext)} - 1 = 5 - c_i^{\rm (ext)}$ is the topological charge of the i-th border cell (in this case a pentagon carries null charge).
A similar formula, but limited to 3-vertices, is reported in refs. [68] and [69], although in that case the topological charge of a border point is reported as $q_i^{\rm (ext)} = 4 - c_i$. This apparent incongruence is just a matter of convention: in our case we are defining $c_i^{\rm (ext)}$ as the number of sides of the Voronoi cell, which for internal cells is also the number of neighbors; in the case of [68, 69], $c_i$ is the number of neighbors (observe that for external cells the number of neighbors does not match the number of sides of the cell). If one eliminates a redundant factor of 6 from both sides of eq. (24), one obtains the simpler form
$$\sum_{i=1}^{N_V^{\rm int}} q_i^{\rm (int)} + \sum_{i=1}^{N_b} \bar q_i^{\rm (ext)} + \sum_{i=1}^{N_{\rm vert}} q_i^{(v)} = 6. \tag{25}$$
As we see from eq. (25), the total topological charge inside the domain must equal 6; however, Euler's theorem does not discriminate among configurations where a number of cells is replaced by the same number of cells (of different shape) carrying the same topological charge. As the density increases, configurations with a complicated defect structure may become favorable energetically.

It is worth producing the Voronoi diagrams of the configurations in Fig. 3, shown in Fig. 9. The topological charges and vertices for these configurations are reported in Table 2, and one can verify that Euler's theorem is indeed obeyed (Q = 12 in both cases). Observe that the configuration on the left contains a larger number of defects, as well as two 6-vertices.

Figure 9: Voronoi diagrams for the configurations of Fig. 3.

4 Upper bounds for the density

Thue's theorem [4] establishes that the densest packing of equal disks in the plane occurs when the disks form a hexagonal lattice. The density that corresponds to this configuration is $\rho = \pi/\sqrt{12}$. Fejes Tóth [70, 71] proved that for a given convex domain D in the plane where N unit disks are packed the following inequality holds:
$$N\sqrt{12} < A(D), \tag{26}$$
where A(D) is the area of the domain (Segre and Mahler had earlier proved this result for the special case of a convex polygon with at most six sides [72]). Groemer [73] sharpened (26) using the results of Segre and Mahler [72], obtaining the inequality
$$N\sqrt{12} \le A(D) - \kappa P(D) + \lambda, \tag{27}$$
where P(D) is the perimeter of the domain and
$$\kappa \equiv \frac{2-\sqrt{3}}{2} \approx 0.133975, \qquad \lambda \equiv \sqrt{12} - \pi(\sqrt{3}-1) \approx 1.1643.$$
For a Jordan polygon Π, Oler [17] proved that for N disks of unit diameter the tighter inequality holds:
$$N \le \frac{2}{\sqrt{3}}\,A(\Pi) + \frac{1}{2}\,P(\Pi) + 1. \tag{28}$$

We can use the results of Table 1 to express Oler's inequality in terms of the disk radius; using this table we have
$$A(\Pi) = \frac{\sigma\tan(\pi/\sigma)\,(r - \cos(\pi/\sigma))^2}{4r^2}, \qquad P(\Pi) = \frac{\sigma\tan(\pi/\sigma)\,(\cos(\pi/\sigma) - r)}{r}, \tag{29}$$
and
$$N \le \frac{1}{2\sqrt{3}}\,\frac{\sigma\tan(\pi/\sigma)\,(r - \cos(\pi/\sigma))^2}{r^2} + \frac{\sigma\tan(\pi/\sigma)\,(\cos(\pi/\sigma) - r)}{2r} + 1, \tag{30}$$
where r is the disk radius in II. The upper bound on N is reached for
$$r = \frac{\sqrt{3}\,\sqrt{\sigma\sin(\pi/\sigma)\left(8\sqrt{3}(N-1)\cos(\pi/\sigma) + 3\sigma\sin(\pi/\sigma)\right)} + (3-2\sqrt{3})\,\sigma\sin(\pi/\sigma)}{12(N-1) - 2(\sqrt{3}-3)\,\sigma\tan(\pi/\sigma)}, \tag{31}$$
and
$$r \approx \frac{\sqrt{\sigma\sin(2\pi/\sigma)}}{2\sqrt[4]{3}\,\sqrt{N}} - \frac{(2\sqrt{3}-3)\,\sigma\sin(\pi/\sigma)}{12N} + \dots \tag{32}$$
for N → ∞. Observe that the leading term in eq. (32) corresponds to ς of eq. (10). Using eq. (31) we obtain an upper bound on the density:
$$\rho \le \rho^{\rm (max)}(N,\sigma) = \frac{4\pi N}{\Delta(N,\sigma)}, \tag{33}$$
where
$$\Delta(N,\sigma) \equiv -\sqrt{2}\,(\sqrt{3}-2)\sec\!\left(\frac{\pi}{\sigma}\right)\sqrt{\sigma\sin\!\left(\frac{2\pi}{\sigma}\right)\left(8\sqrt{3}(N-1) + 3\sigma\tan\!\left(\frac{\pi}{\sigma}\right)\right)} + 8\sqrt{3}(N-1) + 2(5-2\sqrt{3})\,\sigma\tan\!\left(\frac{\pi}{\sigma}\right).$$
For N → ∞,
$$\rho^{\rm (max)}(N,\sigma) \approx \frac{\pi}{2\sqrt{3}} - \frac{\pi}{6}\sqrt{\frac{7\sqrt{3}-12}{2}}\,\sqrt{\frac{\sigma\tan(\pi/\sigma)}{N}} + \dots \tag{34}$$
Gáspár and Tarnai [75] have used Groemer's and Oler's inequalities to obtain upper bounds for circle packing in the square, the equilateral triangle and the circle. In particular, the inequalities (6), (14) and (17) of [75] for the equilateral triangle, the square and the circle are precisely our inequality (33) for σ = 3, 4 and ∞.

Using (31) we can also obtain an upper bound for the number of peripheral disks as $N_B \le P(r)/(2r)$, which reduces to
$$N_B \lesssim \sqrt{2\sqrt{3}\,\sigma\tan(\pi/\sigma)\,N} \tag{35}$$
for N → ∞. Notice that the upper bound is saturated only when the perimeter is completely covered by disks (i.e. ξ = 1).

5 Numerical results

In this section we present the numerical results obtained with the algorithms described earlier (with the exception of the square, where we use the very precise results of ref. [40]). To obtain densely packed configurations of congruent disks inside a domain we have run algorithm 1 a large number of times, followed by algorithms 2 and 3. In general, finding the global maximum of the packing fraction of N disks in a given container is quite challenging, and rigorous proofs of optimality exist only for special shapes and very limited values of N. While we do not expect that all the configurations that we have calculated correspond to global maxima of the packing fraction, we believe that their density should be very close to the maximum. The configurations that we have obtained numerically are available at [41]; plots of these configurations (which include their Voronoi diagrams) are available in the supplemental material at [76].

The most important quantity in our analysis is certainly the packing fraction; we have plotted it for the several domains considered in this paper in Figs. 10, 11 and 12. Due to the large number of domains considered, we treat separately the cases of the special domains (the equilateral triangle, the square, the hexagon and the dodecagon), of the regular polygons with an even number of sides, and of the regular polygons with an odd number of sides, respectively in Figs. 10, 11 and 12.

The equilateral triangle and the regular hexagon are special domains, since they admit configurations with triangular packing at specific values of N: this is reflected in the large spikes of the density observed in Fig. 10. At these values the upper bound $\rho^{\rm (max)}(N)$ is reached (the continuous curves in the plot represent $\rho^{\rm (max)}$ for the different domains). The square and the dodecagon also display a pattern of maxima, of more modest value (notice, however, that the results for the dodecagon cover only N ≤ 200). From Figs. 11 and 12 we see that the behavior of the density does not in general display the large oscillations found in the previous cases (the dodecagon, which is also included in Fig. 11, is seen to outperform the regular polygons with an even number of sides included in the plot). Similarly, the pentagon also performs generally better than the odd regular polygons up to the pentadecagon (with the obvious exclusion of the equilateral triangle), as seen in Fig. 12.

Figure 10: Packing fraction for the equilateral triangle, square, hexagon and dodecagon. The solid lines are the upper bound $\rho^{\rm (max)}(N)$.

Figure 11: Packing fraction for selected regular polygons with an even number of sides.

Figure 12: Packing fraction for selected regular polygons with an odd number of sides.
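(A numerical sketch of the bound (33), with Δ(N, σ) as reconstructed above; the final line is a sanity check that the bound approaches π/√12 from below for large N.)

    import numpy as np

    def rho_max(N, sigma):
        # Upper bound of eq. (33): rho <= 4*pi*N / Delta(N, sigma).
        T = sigma * np.tan(np.pi / sigma)
        S2 = sigma * np.sin(2.0 * np.pi / sigma)
        root = np.sqrt(S2 * (8.0 * np.sqrt(3.0) * (N - 1) + 3.0 * T))
        delta = (-np.sqrt(2.0) * (np.sqrt(3.0) - 2.0) * root / np.cos(np.pi / sigma)
                 + 8.0 * np.sqrt(3.0) * (N - 1)
                 + 2.0 * (5.0 - 2.0 * np.sqrt(3.0)) * T)
        return 4.0 * np.pi * N / delta

    print(rho_max(10_000, 6), np.pi / np.sqrt(12.0))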
For small N the maximum density that can be achieved may be significantly lower than the maximum density in the plane, $\rho_{\rm plane} = \pi/\sqrt{12}$, because of the frustration introduced by the border. As N gets larger, however, we expect the packing fraction to increase, as the border effects become less and less important. In particular, we must have
$$\lim_{N\to\infty} \rho(N) = \rho_{\rm plane}.$$
We can estimate the size of N congruent disks packed inside a domain of area A as $r \approx \sqrt{A/(N\pi)}$; the number of border disks is then roughly $N_B \approx P\sqrt{N\pi/(4A)}$ (see footnote 3 below).

On these grounds we expect that the leading corrections to the packing fraction, at finite N, behave as $1/\sqrt{N}$. Based on this qualitative argument, we introduce the fit
$$\rho^{\rm (fit)}(N) = \frac{\pi}{\sqrt{12}} + \frac{a_1}{\sqrt{N}} + \frac{a_2}{N}$$
to describe the numerical results for the packing fraction of the regular polygons considered in this paper; in Fig. 13 we report $a_1$ as a function of σ (the number of sides of the polygon). Confirming the special nature of the dodecagon, we see that $a_1$ presents a local maximum at σ = 12, similarly to the cases of the triangle and the hexagon.

Footnote 3: Strictly speaking $N_B$ is not always well defined, since there may be degenerate configurations with different numbers of border disks. This happens particularly for the equilateral triangle and the regular hexagon, where the configurations obtained by eliminating a disk from a perfectly packed configuration appear to be also optimal. For example, in the case of the left plot of Fig. 3, which represents a non-optimal configuration, the holes could be moved to the border without changing the density.

Figure 13: $\lim_{N\to\infty} \sqrt{N}\,(\rho^{\rm (fit)}(N) - \pi/\sqrt{12})$ for different regular polygons, as a function of the number of sides.

Our previous qualitative estimate also justifies the fit
$$N_B^{\rm (fit)}(N) = \sqrt{N}\left(b_1 + \frac{b_2}{\sqrt{N}} + \frac{b_3}{N}\right)$$
for the number of peripheral disks; in Figs. 14 and 15 we plot $N_B(N)$ for the different domains that we have studied. It is no surprise that the triangle and the regular hexagon allow the largest values of $N_B/N$. The thin solid lines in the plots represent $N_B^{\rm (fit)}(N)$. The case of the regular hexagon is particularly remarkable, since the behavior of $N_B$ vs N is almost monotonically growing.

Figure 14: Number of peripheral disks, $N_B$, as a function of N, for the equilateral triangle, the square, the hexagon and the dodecagon. The solid lines are the fits $N_B(N) = \sqrt{N}(b_1 + b_2/\sqrt{N} + b_3/N)$. For the equilateral triangle, square and hexagon the results cover up to N = 400.

Figure 15: Number of peripheral disks, $N_B$, as a function of N, for different regular polygons. The solid lines are the fits $N_B(N) = \sqrt{N}(b_1 + b_2/\sqrt{N} + b_3/N)$.

In Fig. 16 we plot the coefficient $b_1$ of the fit as a function of σ and compare it with the upper bound obtained from eq. (35). This bound, as expected, is almost saturated for the hexagon and, to a lesser extent, for the equilateral triangle.

Figure 16: $\lim_{N\to\infty} N_B^{\rm (fit)}(N)/\sqrt{N}$ as a function of the number of sides. The dashed line is the function $\sqrt{2}\,\sqrt[4]{3}\,\sqrt{\sigma\tan(\pi/\sigma)}$ from eq. (35).

While $b_1$ provides the leading asymptotic behavior of $N_B(N)$ for N ≫ 1, the border fraction ξ provides information on the border crowding at finite N, as displayed in Figs. 17 and 18.
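(A minimal sketch of such a fit, assuming arrays N and rho of measured packing fractions; it is a linear least-squares problem in the basis {1/√N, 1/N}.)

    import numpy as np

    def fit_density(N, rho):
        # Fit rho(N) ~ pi/sqrt(12) + a1/sqrt(N) + a2/N by linear least
        # squares; returns the coefficients (a1, a2).
        y = rho - np.pi / np.sqrt(12.0)
        X = np.column_stack([1.0 / np.sqrt(N), 1.0 / N])
        (a1, a2), *_ = np.linalg.lstsq(X, y, rcond=None)
        return a1, a2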
The largest values of ξ are found for the regular hexagon, while we find rather small values for the square. The average border fraction, calculated over the set of configurations with 2 ≤ N ≤ 200 (400 for the triangle, the square and the hexagon), is displayed in Fig. 19.

Figure 17: Border fraction as a function of N for the equilateral triangle, square, hexagon and dodecagon.

Figure 18: Border fraction as a function of N for different regular polygons.

Figure 19: ⟨ξ⟩ for different regular polygons as a function of the number of sides.

In Table 3 we report the configurations of N disks inside a regular polygon (3 ≤ σ ≤ 16) that correspond to necklaces (in general we have restricted our search to N ≤ 200, with the exception of σ = 3, 4, 6, where N ≤ 400(4)). In our analysis we consider that a configuration corresponds to a necklace if δ ≤ 10⁻¹⁰. The regular hexagon and the equilateral triangle possess the largest number of necklaces among the regular polygons studied.

(4) For the square we have just used the numerical results of [40].

The fraction of Voronoi cells of given shape for the packing configurations inside the hexagon is displayed in Fig. 20. The thin dashed lines for the pentagonal and hexagonal cells correspond to the fits $c_1^{\rm (pentagon)} N^{-1/2} + c_2^{\rm (pentagon)} N^{-1}$ and $1 - c_1^{\rm (hexagon)} N^{-1/2} - c_2^{\rm (hexagon)} N^{-1}$. The qualitative justification for these fits comes from noticing that hexagonal and pentagonal cells dominate at large density and that pentagonal cells are mostly located at the border: since N_B scales as √N for N ≫ 1, we have $N^{\rm (pentagon)}/N \approx 1/\sqrt{N}$. The spikes in the fraction of pentagonal cells (and the corresponding dips in the fraction of hexagonal cells) are due to configurations where the hexagon is "split" into subdomains by straight cuts.

Figure 20: Fraction of Voronoi cells with different numbers of sides for the regular hexagon. The color scheme corresponds to the one found in Fig. 4.
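Statistics such as those of Fig. 20 can be gathered with standard tools; the following sketch (ours, not the authors' code) counts cell side numbers with scipy.spatial, with the caveat that it simply discards unbounded cells instead of clipping them by the polygon, as is done for the actual configurations.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_side_fractions(points):
    """Fraction of bounded Voronoi cells with a given number of sides.
    Cells extending to infinity (vertex index -1) are skipped here; for the
    packing analysis the border cells are instead clipped by the polygon."""
    vor = Voronoi(points)
    counts = {}
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if not region or -1 in region:
            continue  # unbounded cell
        counts[len(region)] = counts.get(len(region), 0) + 1
    total = sum(counts.values())
    return {sides: c / total for sides, c in sorted(counts.items())}

rng = np.random.default_rng(1)
print(voronoi_side_fractions(rng.random((500, 2))))
```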
6 Conclusions

We have introduced efficient algorithms for calculating configurations of densely packed congruent disks inside regular polygons with an arbitrary number of sides σ. We have obtained numerical results for 3 ≤ σ ≤ 16 (with the exception of the square, σ = 4, where we have used the precise configurations reported in [40]) and N ≤ 200 (for σ = 3 and σ = 6 we have reached N = 400). Given the computational complexity of the problem we cannot expect that all the configurations that we have obtained are global maxima of the density, but we think that their density should be nearly maximal. The large amount of numerical data has allowed us to perform an analysis of the properties of these systems which had not been carried out before (as we mention in the introduction, only a few regular polygons, such as the square and the equilateral triangle, had been studied in depth in the literature). Our analysis has focused not only on the packing fraction (density), but also on additional properties of the system that we have identified here: the border fraction, for instance, is seen to be quite sensitive to σ, with the square having the lowest average border fraction of all the polygons that we have studied. Similarly, we have found that, apart from the expected cases σ = 3, 6, the dodecagon and the pentadecagon (σ = 12, 15) have a larger number of necklace configurations than the remaining polygons. Finally, the packing configurations can also be visualized in terms of Voronoi cells, each carrying a topological charge which depends both on the number of sides of the cell and on whether the cell is located at the border of the domain or not. The Euler theorem of topology requires that the total topological charge inside the domain add up to Q_total = 6. It is observed that the internal topological charge in these systems always tends to be very small, even at large density, a striking difference from Coulomb systems, where it is observed to become increasingly negative [69]. Another important difference with these systems is the presence of higher-order vertices, which themselves carry a negative topological charge (such vertices are not usually observed in Coulomb systems).

Table 2: Topological charge of the configurations in Fig. 9. The number of hexagonal cells and of 3-vertices is not reported, since they do not contribute to the topological charge.

                       cells               N_b    vertices       Q
                     4    5    7    8             V4   V5   V6
  left of Fig. 9    10   54    4    1       50     0    0    2   12
  right of Fig. 9    8   46    6    0       50     0    0    0   12

Table 3: Necklace configurations for regular polygons with 3 ≤ σ ≤ 16, for N ≤ 200 (with the exception of σ = 3, 4, 6, where N ≤ 400).

σ = 3:  2, 3, 6, 9, 10, 12, 14, 15, 20, 21, 24, 25, 26, 27, 28, 31, 35, 36, 39, 40, 42, 43, 44, 45, 51, 52, 54, 55, 60, 65, 66, 71, 77, 78, 84, 86, 88, 90, 91, 95, 104, 105, 111, 112, 118, 119, 120, 130, 133, 134, 135, 136, 143, 150, 151, 152, 153, 169, 170, 171, 182, 187, 188, 189, 190, 199, 208, 209, 210, 228, 231, 273, 274, 275, 276, 300, 310, 321, 324, 349, 350, 351, 374, 377, 378, 389
σ = 4:  2, 3, 4, 8, 15, 16, 20, 23, 24, 30, 34, 35, 36, 42, 56, 80, 128, 180, 208, 247, 340
σ = 5:  2, 3, 4, 5, 15, 21, 23
σ = 6:  2, 3, 4, 5, 6, 7, 10, 11, 12, 14, 16, 17, 18, 19, 26, 27, 29, 30, 33, 34, 35, 36, 37, 46, 47, 48, 51, 52, 56, 58, 60, 61, 68, 69, 74, 75, 79, 80, 84, 85, 90, 91, 106, 107, 108, 114, 118, 119, 120, 123, 124, 126, 127, 146, 147, 153, 154, 160, 161, 167, 168, 169, 191, 192, 199, 200, 207, 208, 212, 215, 216, 217, 233, 242, 243, 251, 252, 261, 269, 270, 271, 289, 298, 299, 300, 309, 310, 319, 320, 324, 325, 329, 330, 331, 350, 351, 362, 363, 372, 373, 374, 384, 385, 393, 395, 396, 397
σ = 7:  2, 3, 4, 5, 10, 11, 12
σ = 8:  2, 3, 4, 5, 22, 27, 28, 54, 64
σ = 9:  2, 3, 4, 5, 7, 10, 11, 37
σ = 10: 2, 3, 4, 5, 10, 12
σ = 11: 2, 3, 4, 5, 10, 12
σ = 12: 2, 3, 4, 5, 6, 7, 10, 11, 12, 18, 19, 31, 37, 47, 61, 73, 91, 127
σ = 13: 2, 3, 4, 5, 10, 12
σ = 14: 2, 3, 4, 5, 10
σ = 15: 2, 3, 4, 5, 7, 10, 11, 12, 15, 18, 37, 45, 48
σ = 16: 2, 3, 4, 5, 10, 12

Acknowledgements

The research of P.A. was supported by Sistema Nacional de Investigadores (México). The plots in this paper have been produced using Mathematica [77] and MaTeX [78]. Numerical calculations have been carried out using python [79] and numba [80].
A Pseudocodes of the algorithms

Pseudocode of algorithm 1 (adapted from [34])

step 1: Generate a random set of N points inside a regular polygon with σ sides;
step 2: Define the energy functional (6), with s = s_in and with α such that lim_{s→∞} α = 0; the initial value of s, s_in, may be chosen arbitrarily, but it is convenient that it is not too large, to allow better results;
step 3: Minimize the functional of step 2 and use the configuration obtained as the new initial configuration, now with s′ = κs, with κ > 1;
step 4: Repeat step 3 until reaching a large value of s (in most of our calculations we have used s_fin = 10⁶, but larger values might be used if convergence is not reached);
step 5: Compare the density of the configuration obtained at step 4 with the maximal density found so far for N circles and, if the current density improves the previous record, store the new configuration as the best so far;
step 6: Repeat steps 1–5 a large number of times.

Pseudocode of algorithm 2 (adapted from [34])

step 1: Take a configuration of densely packed disks (for example obtained with Algorithm 1) and randomly perturb the positions of the circles;
step 2: Treat the centers of the circles as point-like particles repelling each other with an interaction $(\lambda/r^2)^{s_{\rm in}}$, with λ = r²_min, r_min being the closest distance between any two points in this set;
step 3: Minimize the total energy of the system, eq. (16), for s = s_in (typically, at the beginning we choose s_in ≈ 10²–10³);
step 4: Let s′ = κs, with κ > 1, and repeat step 3 using the configuration obtained there as the initial configuration; iterate these steps up to a sufficiently large value of s, until convergence has been reached;
step 5: If the final configuration of step 4 has a higher density than the configuration of step 1, use it as the new initial configuration and repeat steps 1–4 as many times as needed, each time updating the initial configuration to be the densest configuration;
step 6: If after some iterations the process cannot easily improve the density, modify step 1 by making the amplitude of the random perturbation smaller and repeat steps 2–5 (in general the initial value of s_in will then be taken to be larger);
step 7: Stop the algorithm when convergence has been reached (or when the assigned number of iterations has been completed).
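A minimal runnable sketch of the s-escalation at the heart of these two algorithms is given below. It is ours, not the authors' code: for brevity the container is the unit square (imposed through bound constraints) rather than a general polygon, and the energy is the pairwise repulsion $(\lambda/r^2)^s$ of algorithm 2.

```python
import numpy as np
from scipy.optimize import minimize

def min_pair_distance(xy):
    d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    return d.min()

def pair_energy(flat, s, lam):
    """Total energy sum_{i<j} (lam / r_ij^2)^s; since lam = r_min^2,
    every ratio is <= 1 and large s singles out the closest pairs."""
    xy = flat.reshape(-1, 2)
    r2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(xy), k=1)
    return ((lam / r2[iu]) ** s).sum()

def escalate(n=12, s0=100.0, kappa=2.0, s_max=1e5, seed=1):
    rng = np.random.default_rng(seed)
    xy = rng.random((n, 2))                      # step 1: random start
    bounds = [(0.0, 1.0)] * (2 * n)              # confinement (unit-square stand-in)
    s = s0
    while s <= s_max:                            # steps 3-4: minimize, then s -> kappa*s
        lam = min_pair_distance(xy) ** 2         # lambda = r_min^2 (step 2)
        res = minimize(pair_energy, xy.ravel(), args=(s, lam),
                       method="L-BFGS-B", bounds=bounds)
        xy = res.x.reshape(-1, 2)
        s *= kappa
    return xy, min_pair_distance(xy)             # centers and smallest separation
```

In practice the low and intermediate values of s do most of the work; at very large s the energy landscape becomes nearly flat away from the contacts, which is why the variance-minimizing refinement below is useful.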
Pseudocode of the variance minimizing algorithm

step 1: As the initial step, one needs a densely packed configuration of disks inside the polygon, which in our case is obtained using algorithms 1 and 2;
step 2: For such a configuration calculate the minimal distance of each point (corresponding to the center of a disk) from any other point in the set and let d_min be the smallest of such distances. Circles that are possibly in contact with at least one other circle will have very similar distances. Ideally, if the exact configuration had been reached, these distances would all be the same (assuming that no false contacts are present);
step 3: Fix a certain threshold η and assume that circles at a distance d such that 0 < d − d_min < η are in contact; identify all pairs of circles that are in contact. We call p the number of such pairs and $\bar d_i$ (with i = 1, …, p) the distance between the circles forming the i-th pair;
step 4: Identify circles that have no contact with other circles;
step 5: Identify circles that are in contact with the border; these points are then allowed to move only along the border, but not inward;
step 6: Consider the square variance (we work with squared distances to avoid dealing with square roots)
$$\Sigma = \frac{1}{p}\sum_{i=1}^{p} \bar d_i^{\,4} - \left(\frac{1}{p}\sum_{i=1}^{p} \bar d_i^{\,2}\right)^{2};$$
step 7: Minimize Σ, allowing the points with at least one contact to move in the plane (for circles on the border, only allow movements along the border), while keeping fixed the points corresponding to circles with no contacts with other circles; introduce a scale factor σ for the amplitude of the movement of the charges (initially one can start with σ ≈ 10⁻⁴);
step 8: If the density of the new configuration is larger than the initial density, then accept the new configuration and repeat the whole process a number of times (we normally perform 50 runs);
step 9: If at the end of the whole cycle the algorithm was not able to find any improvement, then reduce σ and η (typically σ → σ/10, η → η/10). Repeat as needed.

B Generating uniformly distributed random points inside a regular polygon

The goal is to generate N points uniformly distributed inside a regular polygon. We start by considering the unit circle (σ = ∞) and then generalize the discussion to finite values of σ. Since the probability density for a uniform distribution inside a circle grows linearly as
$$\rho(r) = 2r, \qquad 0 \leq r \leq 1, \tag{36}$$
the fraction of points at a distance less than r < 1 goes like r². However, if we generate the points by using random values of r uniformly distributed on (0, 1), the fraction of points at a distance less than r scales as r. This happens because the points which are close to the center of the circle are generated with a much larger probability, without taking into account the proper scaling of the areas. On the other hand, if the random numbers {q₁, q₂, …} are uniformly distributed between 0 and 1, the numbers {√q₁, √q₂, …} are distributed linearly, as required by ρ(r). This gives us a simple recipe to generate N points inside a unit circle:
1) set i = 1;
2) generate a uniform random number θ_i, with 0 ≤ θ_i ≤ 2π;
3) generate a uniform random number q_i, with 0 ≤ q_i ≤ 1;
4) generate the point P_i ≡ √q_i (cos θ_i, sin θ_i);
5) i → i + 1 and go back to step 2 until i > N.

We now want to generalize this result to the case of regular polygons with a unit circumradius: a point inside the polygon is obtained using eq. (2), which requires specifying the two polar variables (t, u). Clearly u is the analogue of θ for the circle and it can be generated using random numbers uniformly distributed on (0, 2π), exactly as before. It is easy to see that a point with a given angle u must be at a distance
$$r \leq d_{\rm max}(u) = \cos\left(\frac{\pi}{\sigma}\right)\sec\left(\frac{\pi}{\sigma} - \left(u \bmod \frac{2\pi}{\sigma}\right)\right) \tag{37}$$
to be contained in the regular polygon. If the distance r, generated using √q, exceeds this bound, the point must be discarded and the operation repeated until an acceptable value of r is obtained.
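This recipe translates directly into code; the sketch below is ours and assumes, consistently with eq. (37), a regular polygon with unit circumradius and a vertex in the direction u = 0.

```python
import numpy as np

def d_max(u, sigma):
    """Maximal admissible radius in the direction u, eq. (37); the polygon has
    unit circumradius with a vertex in the direction u = 0."""
    return np.cos(np.pi / sigma) / np.cos(np.pi / sigma - (u % (2.0 * np.pi / sigma)))

def sample_in_polygon(n, sigma, seed=None):
    """Uniform points in a regular sigma-gon by the rejection recipe above."""
    rng = np.random.default_rng(seed)
    pts = []
    while len(pts) < n:
        u = rng.uniform(0.0, 2.0 * np.pi)    # angular variable
        r = np.sqrt(rng.uniform())           # sqrt(q) gives the linear density (36)
        if r <= d_max(u, sigma):             # keep only points inside the polygon
            pts.append((r * np.cos(u), r * np.sin(u)))
    return np.array(pts)
```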
References

Tammes, Pieter Merkus Lambertus. "On the origin of number and arrangement of the places of exit on the surface of pollen-grains." Recueil des travaux botaniques néerlandais 27.1 (1930): 1-84.
Weaire, Denis, and Tomaso Aste. The Pursuit of Perfect Packing. CRC Press, 2008.
Hales, T. C. "A proof of the Kepler conjecture." Annals of Mathematics (2005): 1065-1185.
Thue, A. "Über die dichteste Zusammenstellung von kongruenten Kreisen in einer Ebene." Christiania Vid. Selsk. 1 (1910): 3-9.
Fejes, L. "Über die dichteste Kugellagerung." Mathematische Zeitschrift 48.1 (1942): 676-684.
Kravitz, Sidney. "Packing cylinders into cylindrical containers." Mathematics Magazine 40.2 (1967): 65-71.
Reis, George E. "Dense packing of equal circles within a circle." Mathematics Magazine 48.1 (1975): 33-37.
Melissen, Hans. "Densest packings of eleven congruent circles in a circle." Geometriae Dedicata 50.1 (1994): 15-25.
Graham, Ronald L., et al. "Dense packings of congruent circles in a circle." Discrete Mathematics 181.1-3 (1998): 139-154.
Lubachevsky, Boris D., Ron L. Graham, and Frank H. Stillinger. "Patterns and structures in disk packings." Periodica Mathematica Hungarica 34.1 (1997): 123-142.
Lubachevsky, Boris D., and Ronald L. Graham. "Curved hexagonal packings of equal disks in a circle." Discrete & Computational Geometry 18.2 (1997): 179-194.
Fodor, Ferenc. "The densest packing of 19 congruent circles in a circle." Geometriae Dedicata 74.2 (1999): 139-145.
Fodor, Ferenc. "The densest packing of 12 congruent circles in a circle." Contributions to Algebra and Geometry 41.2 (2000): 401-409.
Fodor, Ferenc. "The densest packing of 13 congruent circles in a circle." Beiträge zur Algebra und Geometrie 44.2 (2003): 431-440.
Németh, Z. T., and H. Löwen. "Freezing in finite systems: hard discs in circular cavities." Journal of Physics: Condensed Matter 10.28 (1998): 6189.
Németh, Z. T., and H. Löwen. "Freezing and glass transition of hard spheres in cavities." Physical Review E 59.6 (1999): 6824.
Oler, Norman. "A finite packing problem." Canadian Mathematical Bulletin 4.2 (1961): 153-155.
Melissen, Hans. "Densest packings of congruent circles in an equilateral triangle." The American Mathematical Monthly 100.10 (1993): 916-925.
Melissen, J. B. M. "Optimal packings of eleven equal circles in an equilateral triangle." Acta Mathematica Hungarica 65.4 (1994): 389-393.
Melissen, Jan B. M., and P. C. Schuur. "Packing 16, 17 or 18 circles in an equilateral triangle." Discrete Mathematics 145.1-3 (1995): 333-342.
Payan, Charles. "Empilement de cercles égaux dans un triangle équilatéral: à propos d'une conjecture d'Erdős-Oler." Discrete Mathematics 165 (1997): 555-565.
Graham, R., and B. Lubachevsky. "Dense packings of equal disks in an equilateral triangle: from 22 to 34 and beyond." arXiv preprint math/0406252 (2004).
Joós, Antal. "Packing 13 circles in an equilateral triangle." Aequationes Mathematicae 95.1 (2021): 35-65.
Schaer, J. "The densest packing of 9 circles in a square." Canadian Mathematical Bulletin 8.3 (1965): 273-277.
Schaer, J., and A. Meir. "On a geometric extremum problem." Canadian Mathematical Bulletin 8.1 (1965): 21-27.
Goldberg, Michael. "The packing of equal circles in a square." Mathematics Magazine 43.1 (1970): 24-30.
Goldberg, Michael. "Packing of 14, 16, 17 and 20 circles in a circle." Mathematics Magazine 44.3 (1971): 134-139.
Nurmela, Kari J., and Patric R. J. Östergård. "Packing up to 50 equal circles in a square." Discrete & Computational Geometry 18.1 (1997): 111-120.
Nurmela, Kari J., Patric R. J. Östergård, and Rainer aus dem Spring. "Asymptotic behavior of optimal circle packings in a square." Canadian Mathematical Bulletin 42.3 (1999): 380-385.
Nurmela, Kari J. "More optimal packings of equal circles in a square." Discrete & Computational Geometry 22.3 (1999): 439-457.
Szabó, Péter Gábor, and Eckard Specht. "Packing up to 200 equal circles in a square." Models and Algorithms for Global Optimization. Springer, Boston, MA, 2007. 141-156.
Szabó, Péter Gábor, et al. New Approaches to Circle Packing in a Square: With Program Codes. Vol. 6. Springer Science & Business Media, 2007.
Markót, M. C. "Improved interval methods for solving circle packing problems in the unit square." Journal of Global Optimization 81.3 (2021): 773-803.
Amore, Paolo, and Tenoch Morales. "Efficient algorithms for the dense packing of congruent circles inside a square." Discrete & Computational Geometry (2022): 1-19.
Heppes, Aladár, and Hans Melissen. "Covering a rectangle with equal circles." Periodica Mathematica Hungarica 34.1 (1997): 65-81.
Lubachevsky, Boris D., and Ronald Graham. "Dense packings of congruent circles in rectangles with a variable aspect ratio." Discrete and Computational Geometry. Springer, Berlin, Heidelberg, 2003. 633-650.
Lubachevsky, Boris D., and Ronald L. Graham. "Minimum perimeter rectangles that enclose congruent non-overlapping circles." Discrete Mathematics 309.8 (2009): 1947-1962.
Specht, Eckehard. "High density packings of equal circles in rectangles with variable aspect ratio." Computers & Operations Research 40.1 (2013): 58-69.
Friedman, Erich. Packing equal copies, accessed on June 8, 2022.
Specht, Eckard. Packings of equal and unequal circles in fixed-sized containers, accessed on June 8, 2022.
Amore, P. "Universidad de Colima Repository of Computational Physics and Applied Mathematics" (2021).
Hifi, M., and R. M'Hallah. "A literature review on circle and sphere packing problems: Models and methodologies." Advances in Operations Research 2009 (2009).
Brito, V. P., et al. "Beads-on-a-string packing in two dimensions." Physica A: Statistical Mechanics and its Applications 342.3-4 (2004): 419-427.
Boué, Laurent, and Eytan Katzav. "Folding of flexible rods confined in 2D space." EPL (Europhysics Letters) 80.5 (2007): 54002.
Gomes, Marcelo A. F., et al. "Plastic deformation of 2D crumpled wires." Journal of Physics D: Applied Physics 41.23 (2008): 235408.
Stoop, Norbert, Falk K. Wittel, and Hans J. Herrmann. "Morphological phases of crumpled wire." Physical Review Letters 101.9 (2008): 094101.
Gomes, M. A. F., et al. "Crumpled states of a wire in a two-dimensional cavity with pins." Physical Review E 81.3 (2010): 031127.
Bayart, Elsa, et al. "Measuring order in the isotropic packing of elastic rods." EPL (Europhysics Letters) 95.3 (2011): 34002.
Oskolkov, Nikolay N., et al. "Nematic ordering of polymers in confined geometry applied to DNA packaging in viral capsids." The Journal of Physical Chemistry B 115.3 (2011): 422-432.
Guven, Jemal, and Pablo Vázquez-Montejo. "Confinement of semiflexible polymers." Physical Review E 85.2 (2012): 026603.
Deboeuf, Stephanie, et al. "Comparative study of crumpling and folding of thin sheets." Physical Review Letters 110.10 (2013): 104301.
Sobral, Thiago A., et al. "Unpacking of a crumpled wire from two-dimensional cavities." PLoS ONE 10.6 (2015): e0128568.
Grossman, Doron, Eytan Katzav, and Eran Sharon. "Packing of stiff rods on ellipsoids: Geometry." Physical Review E 103.1 (2021): 013001.
Li, Zhenqin, and Harold A. Scheraga. "Monte Carlo-minimization approach to the multiple-minima problem in protein folding." Proceedings of the National Academy of Sciences 84.19 (1987): 6611-6615.
Wales, David J., and Jonathan P. K. Doye. "Global optimization by basin-hopping and the lowest energy structures of Lennard-Jones clusters containing up to 110 atoms." The Journal of Physical Chemistry A 101.28 (1997): 5111-5116.
Wales, David J., and Harold A. Scheraga. "Global optimization of clusters, crystals, and biomolecules." Science 285.5432 (1999): 1368-1372.
Wales, David. Energy Landscapes: Applications to Clusters, Biomolecules and Glasses. Cambridge University Press, 2003.
Wales, David J., and Sidika Ulker. "Structure and dynamics of spherical crystals characterized for the Thomson problem." Physical Review B 74.21 (2006): 212101.
Pouliquen, Olivier, Maxime Nicolas, and P. D. Weidman. "Crystallization of non-Brownian spheres under horizontal shaking." Physical Review Letters 79.19 (1997): 3640.
Baker, Jessica, and Arshad Kudrolli. "Maximum and minimum stable random packings of platonic solids." Physical Review E 82.6 (2010): 061304.
Bowick, M. J., D. R. Nelson, and A. Travesset. "Interacting topological defects on frozen topologies." Physical Review B 62 (2000): 8738-8751.
Bowick, Mark, et al. "Crystalline order on a sphere and the generalized Thomson problem." Physical Review Letters 89.18 (2002): 185502.
Bausch, A. R., et al. "Grain boundary scars and spherical crystallography." Science 299.5613 (2003): 1716-1718.
Bowick, Mark J., et al. "Crystalline particle packings on a sphere with long-range power-law potentials." Physical Review B 73.2 (2006): 024115.
Wales, David J., Hayley McKay, and Eric L. Altschuler. "Defect motifs for spherical topologies." Physical Review B 79.22 (2009): 224115.
Bowick, M. J., and L. Giomi. "Two dimensional matter: order, curvature and defects." Advances in Physics 58 (2009): 449-563.
Kusumaatmaja, Halim, and David J. Wales. "Defect motifs for constant mean curvature surfaces." Physical Review Letters 110.16 (2013): 165502.
Giomi, Luca, and Mark Bowick. "Crystalline order on Riemannian manifolds with variable Gaussian curvature and boundary." Physical Review B 76.5 (2007): 054106.
Yao, Zhenwei, and Monica Olvera de la Cruz. "Topological defects in flat geometry: The role of density inhomogeneity." Physical Review Letters 111.11 (2013): 115503.
Fejes Tóth, L. "Some packing and covering theorems." Acta Sci. Math. Szeged 12 (1950): 62-67.
Fejes Tóth, L. Lagerungen in der Ebene, auf der Kugel und im Raum. Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen, Bd. 65, S. 67. Berlin-Göttingen-Heidelberg, 1953.
Segre, Beniamino, and K. Mahler. "On the densest packing of circles." The American Mathematical Monthly 51.5 (1944): 261-270.
Groemer, Helmut. "Über die Einlagerung von Kreisen in einen konvexen Bereich." Mathematische Zeitschrift 73.3 (1960): 285-294.
Folkman, Jon H., and Ronald L. Graham. "A packing inequality for compact convex subsets of the plane." Canadian Mathematical Bulletin 12.6 (1969): 745-752.
Gáspár, Zsolt, and Tibor Tarnai. "Upper bound of density for packing of equal circles in special domains in the plane." Periodica Polytechnica Civil Engineering 44.1 (2000): 13-32.
Amore, Paolo. (2022). Circle packing inside an equilateral triangle: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular pentagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular hexagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular heptagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular octagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular nonagon: supplemental material. Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular decagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing in a regular hendecagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular dodecagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular tridecagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular tetradecagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular pentadecagon: supplemental material (Version 1). Zenodo.
Amore, Paolo. (2022). Circle packing inside a regular hexadecagon: supplemental material. Zenodo.
Wolfram Research, Inc. Mathematica, Version 12.3.1. Champaign, IL (2021).
Horvát, Szabolcs. "LaTeX typesetting in Mathematica" (MaTeX).
van Rossum, G. Python tutorial. Technical Report CS-R9526, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, May 1995.
Lam, S. K., A. Pitrou, and S. Seibert. "Numba: A LLVM-based Python JIT compiler." Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC (2015).
288
Bubble Dynamics and Cavitation | Annual Reviews

Annual Review of Fluid Mechanics, Volume 9, 1977.
Review Article: Bubble Dynamics and Cavitation.
M. S. Plesset and A. Prosperetti.
Vol. 9:145-185 (volume publication date January 1977). © Annual Reviews.

Related Articles from Annual Reviews:

Cavitation Bubbles Near Boundaries. J. R. Blake and D. C. Gibson, Annual Review of Fluid Mechanics.

Nonlinear Bubble Dynamics. Z. C. Feng and L. G. Leal, Annual Review of Fluid Mechanics. Abstract: The inertia-dominated dynamics of a single gas or vapor bubble in an incompressible or nearly incompressible liquid has been the subject of intense investigation for many years. Studies prior to 1976 were thoroughly reviewed by Plesset & Prosperetti (1977) in Volume 9 of this series. Our review fills the gap between Plesset & Prosperetti's review and the present. We focus on new understandings of bubble dynamics through a nonlinear dynamical systems approach.

Bubble Dynamics in Soft and Biological Matter. Benjamin Dollet, Philippe Marmottant and Valeria Garbin, Annual Review of Fluid Mechanics. Abstract: Bubbles are present in a large variety of emerging applications, from advanced materials to biology and medicine, as either laser-generated or acoustically driven bubbles. In these applications, the bubbles undergo oscillatory dynamics and collapse inside, or near, soft and biological materials. The presence of a soft, viscoelastic medium strongly affects the bubble dynamics, both its linear resonance properties and its nonlinear behavior. Surfactant molecules or solid particles adsorbed on a bubble surface can also modify the bubble dynamics through the rheological properties of the interfacial layer. Furthermore, the interaction of bubbles with biological cells and tissues is highly dependent on the mechanical properties of these soft deformable media. This review covers recent developments in bubble dynamics in soft and biological matter for different confinement conditions: bubbles in a viscoelastic medium, coated by a viscoelastic layer, or in the vicinity of soft confinement or objects. The review surveys current work in the field and illustrates open questions for future research.

Modeling Shapes and Dynamics of Confined Bubbles. Vladimir S. Ajaev and G. M. Homsy, Annual Review of Fluid Mechanics. Abstract: We review mathematical models of confined bubbles, emphasizing physical mechanisms as expressed in simple geometries. Molecular interactions between liquid, gas, and the confining solid are all important and are described through the disjoining pressure concept. Methods for finding static shapes are considered. The static solution is a springboard for discussing pressure-driven and surface-tension-driven flows, both of which involve viscous effects and macroscopic films entrained near apparent contact lines. We next discuss vapor bubbles produced by thermal effects. Vaporization localized near contact lines and condensation distributed in colder parts of the interface lead to steady vapor bubbles. Their size is determined through global constraints. Unsteady vapor bubbles are discussed and we end with thoughts on open problems.
289
Annales Henri Lebesgue 2 (2019) 349-368

NINA GANTERT, SERGUEI POPOV, MARINA VACHKOVSKAIA

ON THE RANGE OF A TWO-DIMENSIONAL CONDITIONED SIMPLE RANDOM WALK

Abstract. — We consider the two-dimensional simple random walk conditioned on never hitting the origin. This process is a Markov chain, namely it is the Doob h-transform of the simple random walk with respect to the potential kernel. It is known to be transient and we show that it is "almost recurrent" in the sense that each infinite set is visited infinitely often, almost surely. We prove that, for a "large" set, the proportion of its sites visited by the conditioned walk is approximately a Uniform[0, 1] random variable. Also, given a set G ⊂ R² that does not "surround" the origin, we prove that a.s. there is an infinite number of k's such that kG ∩ Z² is unvisited. These results suggest that the range of the conditioned walk has "fractal" behavior.

Résumé. — We consider the two-dimensional simple random walk conditioned to never hit the origin. This process is a Markov chain, namely the Doob h-transform of the simple random walk with respect to the potential kernel. It is known to be transient, and we show that it is "almost recurrent" in the sense that every infinite set is visited infinitely often, almost surely. We prove that, for a "large" set, the proportion of its sites visited by the conditioned walk is approximately a uniform random variable on [0, 1]. Moreover, given a set G ⊂ R² that does not "surround" the origin, we prove that a.s. there are infinitely many integers k such that kG ∩ Z² is not visited. These results suggest that the range of the conditioned walk has a "fractal" behavior.

Keywords: random interlacements, range, transience, simple random walk, Doob's h-transform.
2010 Mathematics Subject Classification: 60J10, 60G50, 82C41.
DOI: ()

The work of S.P. and M.V. was partially supported by CNPq (grants 300886/2008–0 and 305369/2016–4) and FAPESP (grant 2017/02022–2). The authors are grateful to the anonymous referee for carefully reading the first version of this paper.

1. Introduction and results

We start by introducing some basic notation and defining the "conditioned" random walk Ŝ, the main object of study in this paper. Besides being interesting on its own, this random walk is the main ingredient in the construction of the two-dimensional random interlacements of [CP17, CPV16] (see also [ČT12, DRS14, PT15, Szn10] for the higher-dimensional case).

Write x ∼ y if x and y are neighbours in Z². Let (Sₙ, n ⩾ 0) be the two-dimensional simple random walk, i.e., the discrete-time Markov chain with state space Z² and transition probabilities
$$P_{xy} = \begin{cases} \tfrac14, & \text{if } x \sim y,\\ 0, & \text{otherwise.}\end{cases} \tag{1.1}$$
We assume that all random variables in this paper are constructed on a common probability space with probability measure P and we denote by E the corresponding expectation. When no confusion can arise, we will write P_x and E_x for the law and expectation of the random walk(1) started from x. Let
$$\tau_0(A) = \inf\{k \geqslant 0 : S_k \in A\}, \tag{1.2}$$
$$\tau_1(A) = \inf\{k \geqslant 1 : S_k \in A\} \tag{1.3}$$
be the entrance time and the hitting time of the set A by the simple random walk S (we use the convention inf ∅ = +∞). For a singleton A = {x}, we will write τ_i(A) = τ_i(x), i = 0, 1, for short.

(1) The simple one, or the conditioned one defined below.
One of the key objects needed to understand the two-dimensional simple random walk is the potential kernel a, defined by
$$a(x) = \sum_{k=0}^{\infty}\big(P_0[S_k = 0] - P_x[S_k = 0]\big). \tag{1.4}$$
It can be shown that the above series indeed converges and we have a(0) = 0, a(x) > 0 for x ≠ 0. It is straightforward to check that the function a is harmonic outside the origin, i.e.,
$$\frac14 \sum_{y:\, y\sim x} a(y) = a(x) \quad \text{for all } x \neq 0. \tag{1.5}$$
Also, using (1.4) and the Markov property, one can easily obtain that $\frac14\sum_{x\sim 0} a(x) = 1$, which implies by symmetry that
$$a(x) = 1 \quad \text{for all } x \sim 0. \tag{1.6}$$
Observe that (1.5) immediately implies that $a(S_{k\wedge\tau_0(0)})$ is a martingale; we will repeatedly use this fact in the sequel. Further, one can show that (with γ = 0.5772156… the Euler–Mascheroni constant)
$$a(x) = \frac{2}{\pi}\ln\|x\| + \frac{2\gamma + 3\ln 2}{\pi} + O(\|x\|^{-2}) \quad \text{as } x \to \infty, \tag{1.7}$$
cf. [LL10, Theorem 4.4.4].

Let us define another random walk (Ŝₙ, n ⩾ 0) on Z² \ {0} in the following way: its transition probability matrix equals (compare to (1.1))
$$\widehat P_{xy} = \begin{cases} \dfrac{a(y)}{4a(x)}, & \text{if } x \sim y,\ x \neq 0,\\[4pt] 0, & \text{otherwise.}\end{cases} \tag{1.8}$$
It is immediate to see from (1.5) that the random walk Ŝ is indeed well defined. The walk Ŝ is the Doob h-transform of the simple random walk, under the condition of not hitting the origin (see [CPV16, Lemma 3.3 and its proof]). Let τ̂₀, τ̂₁ be defined as in (1.2)–(1.3), but with Ŝ in place of S. We summarize the basic properties of the walk Ŝ in the following

Proposition 1.1. — The following statements hold:
(1) The walk Ŝ is reversible, with reversible measure μ_x := a²(x).
(2) In fact, it can be represented as a random walk on the two-dimensional lattice with conductances $\big(a(x)a(y),\ x, y \in \mathbb Z^2,\ x \sim y\big)$.
(3) Let N be the set of the four neighbours of the origin. Then the process $1/a(\widehat S_{n\wedge\widehat\tau_0(N)})$ is a martingale.
(4) The walk Ŝ is transient.
(5) Moreover, for all x ≠ 0,
$$P_x\big[\widehat\tau_1(x) < \infty\big] = 1 - \frac{1}{2a(x)}, \tag{1.9}$$
and for all x ≠ y, x, y ≠ 0,
$$P_x\big[\widehat\tau_0(y) < \infty\big] = P_x\big[\widehat\tau_1(y) < \infty\big] = \frac{a(x) + a(y) - a(x-y)}{2a(x)}. \tag{1.10}$$

The statements of Proposition 1.1 are not novel (they appear already in [CPV16]), but we found it useful to collect them here for the sake of completeness and also for future reference. We will prove Proposition 1.1 in the next section.

It is curious to observe that (1.10) implies that, for any x, $P_x[\widehat\tau_1(y) < \infty]$ converges to ½ as y → ∞. As noted in [CPV16], this is related to the remarkable fact that if one conditions on a very distant site being vacant, then this reduces the intensity "near the origin" of the two-dimensional random interlacement process by a factor of four.

Let ∥·∥ be the Euclidean norm. Define the (discrete) ball $B(x, r) = \{y \in \mathbb Z^2 : \|y - x\| \leqslant r\}$ (note that this definition works for all x ∈ R² and r ∈ R₊), and abbreviate B(r) := B(0, r). The (internal) boundary of A ⊂ Z² is defined by $\partial A = \{x \in A : \text{there exists } y \in \mathbb Z^2\setminus A \text{ such that } x \sim y\}$.

Now we introduce some more notation and state the main results. For a set T ⊂ Z₊ (thought of as a set of time moments) let
$$\widehat S_T = \bigcup_{m \in T}\{\widehat S_m\}$$
be the range of the walk Ŝ with respect to that set. For simplicity, we assume in the following that the walk Ŝ starts at a fixed neighbour x₀ of the origin, and we write P for P_{x₀} (it is, however, clear that our results hold for any fixed starting position of the walk).
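For readers who want to experiment, here is a small numerical sketch (ours, not from the paper): it evaluates the potential kernel through the asymptotics (1.7), exact at the origin and, by (1.6), at its four neighbours, and samples one step of Ŝ according to (1.8). As a sanity check it also evaluates (1.10), whose limit ½ was noted above.

```python
import numpy as np

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def a(x, y):
    """Potential kernel via the asymptotics (1.7); the O(|x|^-2) error is ignored,
    except that a(0) = 0 and a = 1 at the four neighbours of 0, by (1.6)."""
    if (x, y) == (0, 0):
        return 0.0
    if abs(x) + abs(y) == 1:
        return 1.0
    return (2.0 / np.pi) * np.log(np.hypot(x, y)) + (2.0 * GAMMA + 3.0 * np.log(2.0)) / np.pi

def hat_step(x, y, rng):
    """One step of the conditioned walk from (x, y) != 0, eq. (1.8); the weights are
    renormalized, since the approximate kernel satisfies (1.5) only up to the error
    of (1.7)."""
    nbrs = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    w = np.array([a(*z) for z in nbrs])
    return nbrs[rng.choice(4, p=w / w.sum())]

# the hitting probability (1.10) with x = (1,0), y = (k,0) tends to 1/2 as k grows:
for k in (2, 8, 32, 128, 512):
    p = (a(1, 0) + a(k, 0) - a(k - 1, 0)) / (2.0 * a(1, 0))
    print(k, round(p, 4))
```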
For a nonempty and finite set A ⊂ Z², let us consider the random variables
$$\mathcal R(A) = \frac{\big|A \cap \widehat S_{[0,\infty)}\big|}{|A|}, \qquad \mathcal V(A) = \frac{\big|A \setminus \widehat S_{[0,\infty)}\big|}{|A|} = 1 - \mathcal R(A);$$
that is, R(A) (respectively, V(A)) is the proportion of sites of A visited (respectively, unvisited) by the walk Ŝ. Let us also abbreviate, for M₀ > 0,
$$\ell^{(n)}_A = |A|^{-1}\max_{y\in A}\Big|A \cap B\Big(y, \frac{n}{\ln^{M_0} n}\Big)\Big|. \tag{1.11}$$

Our main result is the following

Theorem 1.2. — Let M₀ > 0 be a fixed constant, and assume that A ⊂ B(n) \ B(n ln^{−M₀} n). Then, for all s ∈ [0, 1], we have, with positive constants c₁, c₂ depending only on M₀,
$$\big|P[\mathcal V(A) \leqslant s] - s\big| \leqslant c_1\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3} + c_2\,\ell^{(n)}_A\Big(\frac{\ln\ln n}{\ln n}\Big)^{-2/3}, \tag{1.12}$$
and the same result holds with R in place of V.

The above result means that if A ⊂ B(n) \ B(n ln^{−M₀} n) is "big enough and well distributed", then the proportion of visited sites has approximately the Uniform[0, 1] distribution. In particular, one can obtain the following

Corollary 1.3. — Assume that D ⊂ R² is a bounded open set. Then both sequences (R(nD ∩ Z²), n ⩾ 1) and (V(nD ∩ Z²), n ⩾ 1) converge in distribution to the Uniform[0, 1] random variable.

Indeed, it is straightforward to obtain this from Theorem 1.2, since |nD ∩ Z²| is of order n² as n → ∞ (note that D contains a disk), and so ℓ^{(n)}_{nD∩Z²} will be of order ln^{−2M₀} n. Observe that we can cut out B(n ln^{−M₀} n) from nD without doing any harm to the limit theorem, since formally we need A ⊂ B(n) \ B(n ln^{−M₀} n) in order to apply Theorem 1.2. Then, we can choose M₀ large enough such that the right-hand side of (1.12) goes to 0.

Also, we prove that the range of Ŝ contains many "big holes". To formulate this result, we need the following

Definition 1.4. — We say that a set G ⊂ R² does not surround the origin if
• there exists c₁ > 0 such that G ⊂ B(c₁), i.e., G is bounded;
• there exist c₂ > 0, c₃ > 0, and a function f = (f₁, f₂) : [0, 1] → R² such that f(0) = 0, ∥f(1)∥ = c₁, |f′₁(s)| + |f′₂(s)| ⩽ c₂ for all s ∈ [0, 1], and
$$\inf_{s\in[0,1],\,y\in G}\big\|(f_1(s), f_2(s)) - y\big\| \geqslant c_3,$$
i.e., one can escape from the origin to infinity along a path which stays uniformly away from G.

Then, we have

Theorem 1.5. — Let G ⊂ R² be a set that does not surround the origin. Then
$$P\big[nG \cap \widehat S_{[0,\infty)} = \emptyset \text{ for infinitely many } n\big] = 1. \tag{1.13}$$

Theorem 1.5 invites the following

Remark 1.6. — A natural question to ask is whether there are also "big" completely filled subsets of Z², that is, whether a.s. there are infinitely many n such that (nG ∩ Z²) ⊂ Ŝ_{[0,∞)}, for G ⊂ R² being, say, a disk. It is not difficult to see that the answer to this question is "no". We do not give all details, but the reason for this is that, informally, one Ŝ-trajectory corresponds to the two-dimensional random interlacements of [CPV16] "just above" the level α = 0. Then, as in Theorem 2.5(iii) (inequality (22)) of [CPV16], it is possible to show that, for any fixed δ > 0,
$$P\big[(nG \cap \mathbb Z^2) \subset \widehat S_{[0,\infty)}\big] \leqslant n^{-2+\delta}$$
for all large enough n; our claim then follows from the (first) Borel–Cantelli lemma.
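Corollary 1.3 can be seen in action by direct simulation. The sketch below (ours) reuses the functions a and hat_step from the previous sketch; since the infinite-time range cannot be simulated, the walk is stopped once it leaves a much larger ball, which by transience is a reasonable, if slightly undercounting, proxy. Repeating this over many seeds should produce a roughly flat histogram on [0, 1].

```python
import numpy as np

def visited_fraction(n, stop_factor=20, seed=0):
    """One sample of R(B(n)): run the conditioned walk from (1, 0) until it leaves
    B(stop_factor * n) and return the fraction of sites of B(n) visited.
    Assumes a() and hat_step() from the sketch above are in scope."""
    rng = np.random.default_rng(seed)
    pos, visited = (1, 0), set()
    stop2 = (stop_factor * n) ** 2
    while pos[0] ** 2 + pos[1] ** 2 <= stop2:
        if pos[0] ** 2 + pos[1] ** 2 <= n * n:
            visited.add(pos)
        pos = hat_step(*pos, rng)
    disk_size = sum(1 for i in range(-n, n + 1) for j in range(-n, n + 1)
                    if i * i + j * j <= n * n)
    return len(visited) / disk_size

print([round(visited_fraction(20, seed=s), 2) for s in range(5)])
```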
We also establish some additional properties of the conditioned walk Ŝ, which will be important for the proof of Theorem 1.5 and are of independent interest. Consider an irreducible Markov chain. Recall that a set is called recurrent with respect to the Markov chain if it is visited infinitely many times almost surely; a set is called transient if it is visited only finitely many times almost surely. It is clear that any nonempty set is recurrent with respect to a recurrent Markov chain, and every finite set is transient with respect to a transient Markov chain. Note that, in general, a set can be neither recurrent nor transient (think e.g. of the simple random walk on a binary tree: fix a neighbour of the root and consider the set of vertices of the tree connected to the root through this fixed neighbour). In many situations it is possible to characterize completely the recurrent and transient sets, as well as to answer the question whether every set must be either recurrent or transient. For example, for the simple random walk in Z^d, d ⩾ 3, each set is either recurrent or transient and the characterization is provided by Wiener's test (see e.g. [LL10, Corollary 6.5.9]), formulated in terms of capacities of intersections of the set with exponentially growing annuli. Now, for the conditioned two-dimensional walk Ŝ the characterization of recurrent and transient sets is particularly simple:

Theorem 1.7. — A set A ⊂ Z² is recurrent with respect to Ŝ if and only if A is infinite.

Next, we recall that a Markov chain has the Liouville property, see e.g. [Woe09, Chapter IV], if all bounded harmonic (with respect to that Markov chain) functions are constant. Since Theorem 1.7 implies that every set must be recurrent or transient, we obtain the following result as its corollary:

Theorem 1.8. — The conditioned two-dimensional walk Ŝ has the Liouville property.

These two results, besides being of interest on their own, will also be operational in the proof of Theorem 1.5.

2. Some auxiliary facts and proof of Proposition 1.1

For A ⊂ Z^d, recall that ∂A denotes its internal boundary. We abbreviate τ₁(R) = τ₁(∂B(R)). We will consider, with a slight abuse of notation, the function
$$a(r) = \frac{2}{\pi}\ln r + \frac{2\gamma + 3\ln 2}{\pi}$$
of a real argument r ⩾ 1. To explain why this notation is convenient, observe that, due to (1.7), we may write, for the case when (say) 2∥x∥ ⩽ r and as r → ∞,
$$\sum_{y\in\partial B(x,r)}\nu(y)\,a(y) = a(r) + O\Big(\frac{\|x\|\vee 1}{r}\Big) \tag{2.1}$$
for any probability measure ν on ∂B(x, r). For all x, y ∈ Z² and R ⩾ 1 such that x, y ∈ B(R/2) and x ≠ y, we have
$$P_x[\tau_1(R) < \tau_1(y)] = \frac{a(x-y)}{a(R) + O\big(R^{-1}(\|y\|\vee 1)\big)} \tag{2.2}$$
as R → ∞. This is an easy consequence of the optional stopping theorem applied to the martingale $a(S_{n\wedge\tau_0(y)} - y)$, together with (2.1). Also, an application of the optional stopping theorem to the martingale $1/a(\widehat S_{n\wedge\widehat\tau_0(N)})$ yields
$$P_x[\widehat\tau_1(R) < \widehat\tau_1(r)] = \frac{(a(r))^{-1} - (a(x))^{-1} + O(R^{-1})}{(a(r))^{-1} - (a(R))^{-1} + O(r^{-1})} \tag{2.3}$$
for 1 < r < ∥x∥ < R < ∞. Sending R to infinity in (2.3) we see that, for 1 ⩽ r ⩽ ∥x∥,
$$P_x[\widehat\tau_1(r) = \infty] = 1 - \frac{a(r) + O(r^{-1})}{a(x)}. \tag{2.4}$$
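Formulas of the type (2.2)–(2.4) are easy to test numerically. The following self-contained sketch (ours) estimates P_x[τ₁(R) < τ₁(0)] for the simple random walk by direct simulation and compares it with the prediction a(x)/a(R) obtained from (2.2) with y = 0; the asymptotic kernel is used, so small discrepancies are expected.

```python
import numpy as np

GAMMA = 0.5772156649015329

def a_real(r):
    # a(r) = (2/pi) ln r + (2*gamma + 3 ln 2)/pi, the real-argument version above
    return (2.0 / np.pi) * np.log(r) + (2.0 * GAMMA + 3.0 * np.log(2.0)) / np.pi

def escape_before_origin(x0, R, trials=4000, seed=0):
    """Monte Carlo estimate of P_x[tau_1(R) < tau_1(0)] for simple random walk."""
    rng = np.random.default_rng(seed)
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    hits = 0
    for _ in range(trials):
        x, y = x0
        while True:
            dx, dy = moves[rng.integers(4)]
            x, y = x + dx, y + dy
            if x * x + y * y >= R * R:   # reached the boundary first
                hits += 1
                break
            if x == 0 and y == 0:        # returned to the origin first
                break
    return hits / trials

x0, R = (3, 0), 32
print(escape_before_origin(x0, R), a_real(np.hypot(*x0)) / a_real(R))
```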
Proof of Proposition 1.1. It is straightforward to check (1)–(3) directly, we leave this task for the reader. Item (4) (the transience) follows from (3) and Theorem 2.5.8 of [MPW17]. As for (v), we first observe that (1.9) is a consequence of (1.10), although it is of course also possible to prove it directly, see [CPV16, Proposition 2.2]. Indeed, using (1.8) and then (1.10), (1.5) and (1.6), one can write Px h b τ1(x) < ∞ i = 1 4a(x) X y∼x a(y)Py h b τ1(x) < ∞ i = 1 4a(x) X y∼x 1 2 (a(y) + a(x) −a(y −x)) = 1 − 1 2a(x). Now, to prove (1.10), we essentially use the approach of Lemma 3.7 of [CPV16], al-though here the calculations are simpler. Let us define (note that all the probabilities TOME 2 (2019) 356 N. GANTERT, S. POPOV & M. VACHKOVSKAIA 0 y x ∂B(R) p1 p2 q12 q21 1 −(p1 + p2) Figure 2.2. Trajectories for the probabilities of interest. below are for the simple random walk S) h1 = Px[τ1(0) < τ1(R)], h2 = Px[τ1(y) < τ1(R)], q12 = P0[τ1(y) < τ1(R)], q21 = Py[τ1(0) < τ1(R)], p1 = Px[τ1(0) < τ1(R) ∧τ1(y)], p2 = Px[τ1(y) < τ1(R) ∧τ1(0)], see Figure 2.2. Using (2.2) (and in addition the Markov property and (1.5) for (2.8)) we have for x, y ̸= 0, x ̸= y h1 = 1 − a(x) a(R) + O(R−1), (2.6) h2 = 1 − a(x −y) a(R) + O(R−1∥y∥), (2.7) q12 = 1 − a(y) a(R) + O(R−1∥y∥), (2.8) q21 = 1 − a(y) a(R) + O(R−1), (2.9) ANNALES HENRI LEBESGUE Range of two-dimensional conditioned SRW 357 which implies that lim R→∞(1 −h1)a(R) = a(x), (2.10) lim R→∞(1 −h2)a(R) = a(x −y), (2.11) lim R→∞(1 −q12)a(R) = a(y), (2.12) lim R→∞(1 −q21)a(R) = a(y). (2.13) Observe that, due to the Markov property, it holds that h1 = p1 + p2q21, h2 = p2 + p1q12. Solving these equations with respect to p1, p2, we obtain p1 = h1 −h2q21 1 −q12q21 , (2.14) p2 = h2 −h1q12 1 −q12q21 . (2.15) Let us denote (2.16) ¯ h1 = 1 −h1, ¯ h2 = 1 −h2, ¯ q12 = 1 −q12, ¯ q21 = 1 −q21. Next, using Lemma 2.1, we have that Px[b τ1(y) < b τ1(R)] = Px[τ1(y) < τ1(R) | τ1(R) < τ1(0)]  1 + o(R−1)  = Px[τ1(y) < τ1(R) < τ1(0)] Px[τ1(R) < τ1(0)]  1 + o(R−1)  = p2(1 −q21) 1 −h1  1 + o(R−1)  = (h2 −h1q12)(1 −q21) (1 −q12q21)(1 −h1)  1 + o(R−1)  = (¯ h1 + ¯ q12 −¯ h2 −¯ h1¯ q12)¯ q21 (¯ q12 + ¯ q21 −¯ q12¯ q21)¯ h1  1 + o(R−1)  . (2.17) Since Px[b τ1(y) < ∞] = limR→∞Px[b τ1(y) < b τ1(R)], using (2.10)–(2.13) we ob-tain (1.10) (observe that the “product” terms in (2.17) are of smaller order and will disappear in the limit). □ We now use the ideas contained in the last proof to obtain some refined bounds on the hitting probabilities for excursions of the conditioned walk. Let us assume that ∥x∥⩾n ln−M0 n and y ∈A, where the set A is as in Theorem 1.2. Also, abbreviate R = n ln2 n. Lemma 2.2. — In the above situation, we have (2.18) Px[b τ1(y) < b τ1(R)] =  1 + O(ln−3 n) a(x)a(R) + a(y)a(R) −a(x −y)a(R) −a(x)a(y) a(x)(2a(R) −a(y)) . TOME 2 (2019) 358 N. GANTERT, S. POPOV & M. VACHKOVSKAIA 0 B(n) \ B n lnM0 n  ∂B(n ln n) ∂B(n ln2 n) A E x0 E x1 E x2 Figure 2.3. Excursions and their visits to A Proof. — This is essentially the same calculation as in the proof of (1.10), with the following difference: after arriving to the expression (2.17), instead of sending R to infinity (which conveniently “kills” many terms there), we need to carefully deal with all the O’s. 
Specifically, we reuse notations (2.6)–(2.9) and (2.16), then write
$$P_x[\widehat\tau_1(y) < \widehat\tau_1(R)] = P_x[\tau_1(y) < \tau_1(R) \mid \tau_1(R) < \tau_1(0)]\big(1 + O(n^{-1})\big) = \frac{B_1}{B_2}\big(1 + o(n^{-1})\big), \tag{2.19}$$
where (observe that, since ∥y∥ ⩽ n and R = n ln² n, we have $a(R) + O(R^{-1}\|y\|) = a(R) + O(\ln^{-2} n) = a(R)\big(1 + O(\ln^{-3} n)\big)$)
$$B_1 = (\bar h_1 + \bar q_{12} - \bar h_2 - \bar h_1\bar q_{12})\,\bar q_{21} = \frac{a(y)}{a(R) + O(R^{-1})}\Big(\frac{a(x)}{a(R) + O(R^{-1})} + \frac{a(y)}{a(R) + O(R^{-1}\|y\|)} - \frac{a(x-y)}{a(R) + O(R^{-1}\|y\|)} - \frac{a(x)a(y)}{(a(R) + O(R^{-1}))(a(R) + O(R^{-1}\|y\|))}\Big)$$
$$= \big(1 + O((R\ln R)^{-1})\big)\,\frac{a(y)}{a(R)}\cdot\frac{a(x)a(R) + a(y)a(R) - a(x-y)a(R) - a(x)a(y)}{\big(1 + O(\ln^{-3} n)\big)a^2(R)} = \big(1 + O(\ln^{-3} n)\big)\,\frac{a(y)}{a(R)}\cdot\frac{a(x)a(R) + a(y)a(R) - a(x-y)a(R) - a(x)a(y)}{a^2(R)},$$
and
$$B_2 = (\bar q_{12} + \bar q_{21} - \bar q_{12}\bar q_{21})\,\bar h_1 = \frac{a(x)}{a(R) + O(R^{-1})}\Big(\frac{a(y)}{a(R) + O(R^{-1}\|y\|)} + \frac{a(y)}{a(R) + O(R^{-1})} - \frac{a^2(y)}{(a(R) + O(R^{-1}))(a(R) + O(R^{-1}\|y\|))}\Big)$$
$$= \big(1 + O((R\ln R)^{-1})\big)\,\frac{a(x)}{a(R)}\cdot\frac{2a(y)a(R) - a^2(y) + O(\ln^{-1} n)}{\big(1 + O(\ln^{-3} n)\big)a^2(R)} = \big(1 + O(\ln^{-3} n)\big)\,\frac{a(x)}{a(R)}\cdot\frac{2a(y)a(R) - a^2(y)}{a^2(R)}.$$
We insert the above back into (2.19) and note that the factor $a(y)/a^3(R)$ cancels, which yields (2.18). □

3. Proofs of the main results

We start with

Proof of Theorem 1.2. — First, we describe informally the idea of the proof. We consider the visits to the set A during excursions of the walk from ∂B(n ln n) to ∂B(n ln² n), see Figure 2.3. The crucial argument is the following: the randomness of V(A) comes from the number of excursions and not from the excursions themselves. If the number of excursions is around c × ln n/ln ln n, then it is possible to show (using a standard weak-LLN argument) that the proportion of uncovered sites in A is concentrated around e^{−c}. On the other hand, that number of excursions can be modeled roughly as Y × ln n/ln ln n, where Y is an Exponential(1) random variable. Then, P[V(A) ⩽ s] ≈ P[Y ⩾ ln s⁻¹] = s, as required.

We now give a rigorous argument. Let Ĥ be the conditional entrance measure for the (conditioned) walk Ŝ, i.e.,
$$\widehat H_A(x, y) = P_x\big[\widehat S_{\widehat\tau_1(A)} = y \mid \widehat\tau_1(A) < \infty\big]. \tag{3.1}$$
Let us first denote the initial piece of the trajectory by $\mathrm{Ex}_0 = \widehat S_{[0,\widehat\tau(n\ln n)]}$. Then, we consider a Markov chain (Exₖ, k ⩾ 1) of excursions between ∂B(n ln n) and ∂B(n ln² n), defined in the following way: for k ⩾ 2 the initial site of Exₖ is chosen according to the measure $\widehat H_{B(n\ln n)}(z_{k-1},\,\cdot\,)$, where z_{k−1} ∈ ∂B(n ln² n) is the last site of the excursion Ex_{k−1}; also, the initial site of Ex₁ is the last site of Ex₀; the weights of trajectories are chosen according to (1.8) (i.e., each excursion is an Ŝ-walk trajectory). It is important to observe that one may couple (Exₖ, k ⩾ 1) with the "true" excursions of the walk Ŝ in an obvious way: one just picks the excursions subsequently, each time tossing a coin to decide if the walk returns to B(n ln n). Let
$$\psi_n = \min_{x\in\partial B(n\ln^2 n)} P_x\big[\widehat\tau(n\ln n) = \infty\big]$$
Now, (3.2) implies that Px[b τ(n ln n) = ∞] −ψn ⩽O  ln ln n n ln n  for any x ∈∂B(n ln2 n), so it is clear(2) that c N can be coupled with the actual number of excursions N in such a way that N ⩽c N a.s. and (3.3) P[N ̸= c N] ⩽O(n−1). Note that this construction preserves the independence of c N from the excursion sequence (E xk, k ⩾1) itself. Define R(k) = A ∩(E x0 ∪E x1 ∪. . . ∪E xk) |A| , and V(k) = A \ (E x0 ∪E x1 ∪. . . ∪E xk) |A| = 1 −R(k) to be the proportions of visited and unvisited sites in A with respect to the first k excursions together with the initial piece E x0. Now, it is straightforward to check that (2.18) implies that, for any x ∈∂B(n ln n) and y ∈A (3.4) Px h b τ1(y) < b τ1(n ln2 n) i = ln ln n ln n  1 + O ln ln n ln n  , and, for y, z ∈B(n) \ B  n 2 lnM0 n  such that ∥y −z∥= n/b with b ⩽2 lnM0 n (3.5) Pz h b τ1(y) < b τ1(n ln2 n) i = 2 ln ln n + ln b ln n  1 + O ln ln n ln n  . Indeed, first, observe that the factor B2 in (2.18) is, in both cases, (3.6) a(x)(2a(R) −a(y)) =  2 π 2 ln2 n + O  ln n ln ln n  . (2)Let (Zn, n ⩾1) be a sequence of {0, 1}-valued random variables adapted to a filtration (Fn, n ⩾1) and such that P[Zn+1 = 1 | Fn] ∈[p, p + ε] a.s.. Then it is elementary to obtain that the total variation distance between the random variable min{k : Zk = 1} and the Geometric random variable with mean p−1 is bounded above by O(ε/p). ANNALES HENRI LEBESGUE Range of two-dimensional conditioned SRW 361 As for the factor B1, we have B1 = a(x)a(R) + a(y)a(R) −a(x −y)a(R) −a(x)a(y) = (a(x) −a(x −y))a(R) −(a(R) −a(x))a(y) = O  (ln n)−1 × O(ln n) +  2 π ln ln n + o(n−2)  ×  2 π ln n + O(ln ln n)  =  2 π 2 ln n ln ln n + O  (ln ln n)2 in the case of (3.4), and (writing also ∥z∥= n/c with (2 lnM0 n)−1 ⩽c ⩽1) B1 = a(z)a(R) + a(y)a(R) −a(z −y)a(R) −a(z)a(y) = (a(z) −a(z −y))a(R) −(a(R) −a(z))a(y) = 2 π  −ln c + ln b + o(n−1)  × 2 π  ln n + O(ln ln n)  + 2 π  2 ln ln n + ln c + o(n−1)  ×  2 π ln n + O(ln ln n)  =  2 π 2 ln n × (2 ln ln n + ln b) + O  (ln ln n)2 in the case of (3.5); with (3.6) we then obtain (3.4)–(3.5). For y ∈A and a fixed k ⩾1 consider the random variable ξ(k) y = 1{y / ∈E x0 ∪E x1 ∪. . . ∪E xk}, so that V(k) = |A|−1 P y∈A ξ(k) y . Now (3.4) implies that, for all j ⩾1, P[y / ∈E xj] = 1 −ln ln n ln n  1 + O ln ln n ln n  , and (3.5) implies that P[y / ∈E x0 ∪E x1] = 1 −O ln ln n ln n  for any y ∈A. Let µ(k) y = Eξ(k) y . Then we have µ(k) y = P[y / ∈E x0 ∪E x1 ∪. . . ∪E xk] =  1 −O ln ln n ln n  ×    1 −ln ln n ln n  1 + O ln ln n ln n   k−1 = exp  −kln ln n ln n  1 + O  k−1 + ln ln n ln n  . (3.7) Next, we need to estimate the covariance of ξ(k) y and ξ(k) z in case ∥y −z∥⩾n ln−M0 n. First note that, for any x ∈∂B(n ln n) Px h {y, z} ∩E x1 = ∅ i = 1 −Px[y ∈E x1] −Px[z ∈E x1] + Px h {y, z} ⊂E x1 i = 1 −2ln ln n ln n  1 + O ln ln n ln n  + Px h {y, z} ⊂E x1 i TOME 2 (2019) 362 N. GANTERT, S. POPOV & M. VACHKOVSKAIA by (3.4); also, since n b τ1(y) < b τ1(z) < b τ1(n ln2 n) o ⊂ n b τ1(y) < b τ1(n ln2 n), b Sk = z for some b τ1(y) < k < b τ1(n ln2 n) o from (3.4)–(3.5) we obtain Px h {y, z} ⊂E x1 i = Px h max{b τ1(y), b τ1(z)} < b τ1(n ln2 n) i = Px h b τ1(y) < b τ1(z) < b τ1(n ln2 n) i + Px h b τ1(z) < b τ1(y) < b τ1(n ln2 n) i ⩽Px h b τ1(y) < b τ1(n ln2 n) i Py h b τ1(z) < b τ1(n ln2 n) i + Px h b τ1(z) < b τ1(n ln2 n) i Pz h b τ1(y) < b τ1(n ln2 n) i ⩽2ln ln n ln n × (2 + M0) ln ln n ln n  1 + O ln ln n ln n  = O ln ln n ln n 2 . 
Therefore, similarly to (3.7) we obtain
\[
E\big(\xi^{(k)}_y \xi^{(k)}_z\big) = \exp\bigg(-2k\,\frac{\ln\ln n}{\ln n}\Big(1 + O\Big(k^{-1} + \frac{\ln\ln n}{\ln n}\Big)\Big)\bigg),
\]
which, together with (3.7), implies after some elementary calculations that, for all $y, z \in A$ such that $\|y - z\| \geq n\ln^{-M_0} n$,
\[
\mathrm{cov}\big(\xi^{(k)}_y, \xi^{(k)}_z\big) = O\Big(\frac{\ln\ln n}{\ln n}\Big)
\tag{3.8}
\]
uniformly in $k$, since
\[
\bigg(\frac{\ln\ln n}{\ln n} + k\Big(\frac{\ln\ln n}{\ln n}\Big)^2\bigg)\exp\Big(-2k\,\frac{\ln\ln n}{\ln n}\Big) = O\Big(\frac{\ln\ln n}{\ln n}\Big)
\]
uniformly in $k$ (indeed, writing $\lambda_n = \frac{\ln\ln n}{\ln n}$, we have $k\lambda_n^2 e^{-2k\lambda_n} \leq \lambda_n \sup_{u\geq 0} u e^{-2u} = \lambda_n/(2e)$).

Recall the notation $\ell^{(n)}_A$ from (1.11). Now, using Chebyshev's inequality, we write
\begin{align}
P\bigg[|A|^{-1}\sum_{y\in A}\big(\xi^{(k)}_y - \mu^{(k)}_y\big) > \varepsilon\bigg]
&\leq (\varepsilon|A|)^{-2}\,\mathrm{Var}\bigg(\sum_{y\in A}\xi^{(k)}_y\bigg)
 = (\varepsilon|A|)^{-2}\sum_{y,z\in A}\mathrm{cov}\big(\xi^{(k)}_y, \xi^{(k)}_z\big)\notag\\
&= (\varepsilon|A|)^{-2}\bigg(\sum_{\substack{y,z\in A\\ \|y-z\| < n\ln^{-M_0} n}}\mathrm{cov}\big(\xi^{(k)}_y, \xi^{(k)}_z\big) + \sum_{\substack{y,z\in A\\ \|y-z\| \geq n\ln^{-M_0} n}}\mathrm{cov}\big(\xi^{(k)}_y, \xi^{(k)}_z\big)\bigg)\notag\\
&\leq (\varepsilon|A|)^{-2}\bigg(\sum_{y\in A}\big|A \cap B(y, n\ln^{-M_0} n)\big| + |A|^2\, O\Big(\frac{\ln\ln n}{\ln n}\Big)\bigg)\notag\\
&\leq \varepsilon^{-2}\ell^{(n)}_A + \varepsilon^{-2} O\Big(\frac{\ln\ln n}{\ln n}\Big).
\tag{3.9}
\end{align}
Let
\[
\Phi(s) = \min\big\{k : V(k) \leq s\big\}
\]
be the number of excursions necessary to make the unvisited proportion of $A$ at most $s$. We have
\begin{align*}
P[V(A) \leq s] = P[\Phi(s) \leq N]
&= P[\Phi(s) \leq N,\ N = \widehat{N}] + P[\Phi(s) \leq N,\ N \neq \widehat{N}]\\
&= P[\Phi(s) \leq \widehat{N}] + P[\Phi(s) \leq N,\ N \neq \widehat{N}] - P[\Phi(s) \leq \widehat{N},\ N \neq \widehat{N}],
\end{align*}
so, recalling (3.3),
\[
\big|P[V(A) \leq s] - P[\Phi(s) \leq \widehat{N}]\big| \leq P[N \neq \widehat{N}] \leq O(n^{-1}).
\tag{3.10}
\]
Next, we write
\[
P[\Phi(s) \leq \widehat{N}] = E\Big(P\big[\widehat{N} \geq \Phi(s) \mid \Phi(s)\big]\Big) = E(1 - \psi_n)^{\Phi(s)}
\tag{3.11}
\]
(here we used the independence property stated below (3.3)), and concentrate on obtaining lower and upper bounds on the expectation in the right-hand side of (3.11). For this, assume that $s \in (0,1)$ is fixed and abbreviate
\[
\delta_n = \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3},
\qquad
k^-_n = \Big\lfloor (1 - \delta_n)\ln s^{-1}\,\frac{\ln n}{\ln\ln n}\Big\rfloor,
\qquad
k^+_n = \Big\lceil (1 + \delta_n)\ln s^{-1}\,\frac{\ln n}{\ln\ln n}\Big\rceil;
\]
we also assume that $n$ is sufficiently large so that $\delta_n \in (0, \frac{1}{2})$ and $1 < k^-_n < k^+_n$. Now, according to (3.7),
\begin{align*}
\mu^{(k^\pm_n)}_y &= \exp\bigg(-(1 \pm \delta_n)\ln s^{-1}\Big(1 + O\Big((k^\pm_n)^{-1} + \frac{\ln\ln n}{\ln n}\Big)\Big)\bigg)\\
&= s\,\exp\bigg(-\ln s^{-1}\Big(\pm\delta_n + O\Big((k^\pm_n)^{-1} + \frac{\ln\ln n}{\ln n}\Big)\Big)\bigg)\\
&= s\bigg(1 + O\Big(\delta_n \ln s^{-1} + \frac{\ln\ln n}{\ln n}\big(1 + \ln s^{-1}\big)\Big)\bigg),
\end{align*}
so in both cases it holds that (observe that $s\ln s^{-1} \leq 1/e$ for all $s \in [0,1]$)
\[
\mu^{(k^\pm_n)}_y = s + O\Big(\delta_n + \frac{\ln\ln n}{\ln n}\Big) = s + O(\delta_n).
\tag{3.12}
\]
With a similar calculation, one can also observe that
\[
(1 - \psi_n)^{k^\pm_n} = s + O(\delta_n).
\tag{3.13}
\]
We then write, using (3.12),
\begin{align}
P[\Phi(s) > k^+_n] = P[V(k^+_n) > s]
&= P\bigg[|A|^{-1}\sum_{y\in A}\xi^{(k^+_n)}_y > s\bigg]\notag\\
&= P\bigg[|A|^{-1}\sum_{y\in A}\big(\xi^{(k^+_n)}_y - \mu^{(k^+_n)}_y\big) > s - |A|^{-1}\sum_{y\in A}\mu^{(k^+_n)}_y\bigg]\notag\\
&= P\bigg[|A|^{-1}\sum_{y\in A}\big(\xi^{(k^+_n)}_y - \mu^{(k^+_n)}_y\big) > O(\delta_n)\bigg].
\tag{3.14}
\end{align}
Then, (3.9) implies that
\[
P[\Phi(s) > k^+_n] \leq O\bigg(\ell^{(n)}_A\Big(\frac{\ln\ln n}{\ln n}\Big)^{-2/3} + \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\bigg).
\tag{3.15}
\]
Quite analogously, one can also obtain that
\[
P[\Phi(s) < k^-_n] \leq O\bigg(\ell^{(n)}_A\Big(\frac{\ln\ln n}{\ln n}\Big)^{-2/3} + \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\bigg).
\tag{3.16}
\]
Using (3.13) and (3.15), we then write
\begin{align}
E(1 - \psi_n)^{\Phi(s)}
&\geq E\Big((1 - \psi_n)^{\Phi(s)}\mathbf{1}\{\Phi(s) \leq k^+_n\}\Big)
 \geq (1 - \psi_n)^{k^+_n}\, P[\Phi(s) \leq k^+_n]\notag\\
&\geq \bigg(s - O\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\bigg)\bigg(1 - O\bigg(\ell^{(n)}_A\Big(\frac{\ln\ln n}{\ln n}\Big)^{-2/3} + \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\bigg)\bigg),
\tag{3.17}
\end{align}
and, using (3.13) and (3.16),
\begin{align}
E(1 - \psi_n)^{\Phi(s)}
&= E\Big((1 - \psi_n)^{\Phi(s)}\mathbf{1}\{\Phi(s) \geq k^-_n\}\Big) + E\Big((1 - \psi_n)^{\Phi(s)}\mathbf{1}\{\Phi(s) < k^-_n\}\Big)\notag\\
&\leq (1 - \psi_n)^{k^-_n} + P[\Phi(s) < k^-_n]\notag\\
&\leq s + O\Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3} + O\bigg(\ell^{(n)}_A\Big(\frac{\ln\ln n}{\ln n}\Big)^{-2/3} + \Big(\frac{\ln\ln n}{\ln n}\Big)^{1/3}\bigg).
\tag{3.18}
\end{align}
Therefore, using also (3.10)–(3.11), we obtain (1.12), thus concluding the proof of Theorem 1.2. □
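Remark (numerical illustration; ours). The limit law just obtained can be sanity-checked against the heuristic given at the beginning of the proof: if $V = e^{-Y}$ with $Y$ Exponential(1), then $P[V \leq s] = P[Y \geq \ln s^{-1}] = s$, i.e., $V$ is Uniform$(0,1)$. A minimal Monte Carlo check:

```python
# If V = exp(-Y) with Y ~ Exponential(1), then V is Uniform(0,1):
# P[V <= s] = P[Y >= ln(1/s)] = s, matching the limit in Theorem 1.2.
import math
import random

random.seed(1)
vs = [math.exp(-random.expovariate(1.0)) for _ in range(200_000)]
for s in (0.1, 0.25, 0.5, 0.9):
    emp = sum(v <= s for v in vs) / len(vs)
    print(f"P[V <= {s}] ~ {emp:.4f} (limit: {s})")
```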
Next, we will prove Theorems 1.7 and 1.8, since the latter will be needed in the course of the proof of Theorem 1.5.

Proof of Theorem 1.7. — Clearly, we only need to prove that every infinite subset of $\mathbb{Z}^2$ is recurrent for $\widehat{S}$. Basically, this is a consequence of the fact that, due to (1.10),
\[
\lim_{y\to\infty} P_{x_0}\big[\widehat{\tau}_1(y) < \infty\big] = \frac{1}{2}
\tag{3.19}
\]
for any $x_0 \in \mathbb{Z}^2$. Indeed, let $\widehat{S}_0 = x_0$; since $A$ is infinite, by (3.19) one can find $y_0 \in A$ and $R_0$ such that $\{x_0, y_0\} \subset B(R_0)$ and
\[
P_{x_0}\big[\widehat{\tau}_1(y_0) < \widehat{\tau}_1(R_0)\big] \geq \frac{1}{3}.
\]
Then, for any $x_1 \in \partial B(R_0)$, we can find $y_1 \in A$ and $R_1 > R_0$ such that $y_1 \in B(R_1) \setminus B(R_0)$ and
\[
P_{x_1}\big[\widehat{\tau}_1(y_1) < \widehat{\tau}_1(R_1)\big] \geq \frac{1}{3}.
\]
Continuing in this way, we can construct a sequence $R_0 < R_1 < R_2 < \dots$ (depending on the set $A$) such that, for each $k \geq 0$, the walk $\widehat{S}$ hits $A$ on its way from $\partial B(R_k)$ to $\partial B(R_{k+1})$ with probability at least $\frac{1}{3}$, regardless of the past. This clearly implies that $A$ is a recurrent set. □
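Remark (quantitative aside; ours). The construction above dominates the indicators "the walk hits $A$ between $\partial B(R_k)$ and $\partial B(R_{k+1})$" from below by independent Bernoulli$(\tfrac{1}{3})$ trials, so the probability that $A$ is avoided during the first $k$ crossings is at most $(2/3)^k$; restarting the construction after each hit then yields infinitely many hits a.s. A two-line check of how fast this bound vanishes:

```python
# The avoidance bound behind Theorem 1.7 (our illustration): hitting A with
# probability >= 1/3 at each stage, regardless of the past, forces
# P[A not yet hit after k stages] <= (2/3)**k, which vanishes as k grows.
for k in (5, 10, 20, 50):
    print(f"k={k}: avoidance probability <= {(2/3)**k:.2e}")
```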
Proof of Theorem 1.8. — Indeed, Theorem 1.7 implies that every subset of $\mathbb{Z}^2$ must be either recurrent or transient, and then Proposition 3.8 in [Rev84, Chapter 2] implies the Liouville property. Still, for the reader's convenience, we include the proof here. Assume that $h : \mathbb{Z}^2 \setminus \{0\} \to \mathbb{R}$ is a bounded harmonic function for $\widehat{S}$. Let us prove that
\[
\liminf_{y\to\infty} h(y) = \limsup_{y\to\infty} h(y),
\tag{3.20}
\]
that is, $h$ must have a limit at infinity. Indeed, assume that (3.20) does not hold, which means that there exist two constants $b_1 < b_2$ and two infinite sets $B_1, B_2 \subset \mathbb{Z}^2$ such that $h(y) \leq b_1$ for all $y \in B_1$ and $h(y) \geq b_2$ for all $y \in B_2$. Now, on one hand, $h(\widehat{S}_n)$ is a bounded martingale, so it must a.s. converge to some limit; on the other hand, Theorem 1.7 implies that both $B_1$ and $B_2$ will be visited infinitely often by $\widehat{S}$, and so $h(\widehat{S}_n)$ cannot converge to any limit, thus yielding a contradiction. This proves (3.20). Now, if $\lim_{y\to\infty} h(y) = c$, then it is easy to obtain from the Maximum Principle that $h(x) = c$ for any $x$. This concludes the proof of Theorem 1.8. □

Finally, we are able to prove that there are "big holes" in the range of $\widehat{S}$:

Proof of Theorem 1.5. — Clearly, if $G$ does not surround the origin in the sense of Definition 1.4, then $G \subset B(c_1) \setminus B(c_3)$. For the sake of simplicity, let us assume that $G \subset B(1) \setminus B(1/2)$; the general case can be treated in a completely analogous way. Consider the two sequences of events
\begin{align*}
E_n &= \big\{\widehat{\tau}_1(2^{3n-1}G) > \widehat{\tau}_1(2^{3n}),\ \|\widehat{S}_j\| > 2^{3n-1} \text{ for all } j \geq \widehat{\tau}_1(2^{3n})\big\},\\
E'_n &= \big\{\|\widehat{S}_j\| > 2^{3n-1} \text{ for all } j \geq \widehat{\tau}_1(2^{3n})\big\},
\end{align*}
and note that $E_n \subset E'_n$ and $2^{3n-1}G \cap \widehat{S}_{[0,\infty)} = \emptyset$ on $E_n$. Our goal is to show that a.s. an infinite number of the events $(E_n,\ n \geq 1)$ occur. Observe, however, that the events in each of the above two sequences are not independent, so the "basic" second Borel–Cantelli lemma will not work. In the following, we use a generalization of the second Borel–Cantelli lemma, known as the Kochen–Stone theorem [KS64]: it holds that
\[
P\bigg[\sum_{k=1}^{\infty}\mathbf{1}\{E_k\} = \infty\bigg] \geq \limsup_{k\to\infty}\frac{\big(\sum_{i=1}^{k} P[E_i]\big)^2}{\sum_{i,j=1}^{k} P[E_i \cap E_j]}.
\tag{3.21}
\]
We will now prove that there exists a positive constant $c_4$ such that
\[
P[E_n] \geq \frac{c_4}{n} \quad\text{for all } n \geq 1.
\tag{3.22}
\]
Indeed, since $G \subset B(1) \setminus B(1/2)$ does not surround the origin, by comparison with Brownian motion it is elementary to obtain that, for some $c_5 > 0$,
\[
P_x\big[\tau_1(2^{3n-1}G) > \tau_1(2^{3n}),\ \tau_1(0) > \tau_1(2^{3n})\big] > c_5
\]
for all $x \in \partial B(2^{3(n-1)})$. Lemma 2.1 then implies that, for some $c_6 > 0$,
\begin{align}
P_x\big[\widehat{\tau}_1(2^{3n-1}G) > \widehat{\tau}_1(2^{3n})\big]
&= \big(1 + o(2^{-3n})\big)\,P_x\big[\tau_1(2^{3n-1}G) > \tau_1(2^{3n}) \mid \tau_1(0) > \tau_1(2^{3n})\big]\notag\\
&\geq \big(1 + o(2^{-3n})\big)\,P_x\big[\tau_1(2^{3n-1}G) > \tau_1(2^{3n}),\ \tau_1(0) > \tau_1(2^{3n})\big]
 > c_6
\tag{3.23}
\end{align}
for all $x \in \partial B(2^{3(n-1)})$. Let us denote, recalling (1.7),
\[
\gamma^* = \frac{\pi}{2}\cdot\frac{1}{\ln 2}\cdot\frac{2\gamma + 3\ln 2}{\pi} = \frac{2\gamma + 3\ln 2}{2\ln 2}.
\]
Using (2.4), we then obtain
\[
P_z\big[\|\widehat{S}_j\| > 2^{3n-1} \text{ for all } j \geq 0\big]
= 1 - \frac{a(2^{3n-1}) + O(2^{-3n})}{a(2^{3n}) + O(2^{-3n})}
= \frac{1}{3n + \gamma^*}\big(1 + o(2^{-3n})\big)
\tag{3.24}
\]
for any $z \in \partial B(2^{3n})$. The inequality (3.22) follows from (3.23) and (3.24).

Now, we need an upper bound for $P[E_m \cap E_n]$, $m \leq n$. Clearly, $E_m \cap E_n \subset E'_m \cap E'_n$, and note that the event $E'_m \cap E'_n$ means that the particle hits $\partial B(2^{3n})$ before $\partial B(2^{3m-1})$ starting from a site on $\partial B(2^{3m})$, and then never hits $\partial B(2^{3n-1})$ starting from a site on $\partial B(2^{3n})$. So, again using (2.4) and Lemma 2.1, we write, analogously to (3.24) (and also omitting a couple of lines of elementary calculations),
\begin{align}
P[E_m \cap E_n] \leq P[E'_m \cap E'_n]
&= \frac{\big(a(2^{3m-1})\big)^{-1} - \big(a(2^{3m})\big)^{-1} + O(2^{-3m})}{\big(a(2^{3m-1})\big)^{-1} - \big(a(2^{3n})\big)^{-1} + O(2^{-3m})}\cdot\bigg(1 - \frac{a(2^{3n-1}) + O(2^{-3n})}{a(2^{3n}) + O(2^{-3n})}\bigg)\notag\\
&= \frac{1}{(3(n-m)+1)(3m+\gamma^*)}\big(1 + o(2^{-3m})\big).
\tag{3.25}
\end{align}
Now, (3.22) implies that $\sum_{i=1}^{k} P[E_i] \geq c_9 \ln k$, and (3.25) implies (again, after some elementary calculations) that $\sum_{i,j=1}^{k} P[E_i \cap E_j] \leq c_{10}\ln^2 k$. So, using (3.21), we obtain that
\[
P\bigg[\sum_{k=1}^{\infty}\mathbf{1}\{E_k\} = \infty\bigg] \geq c_{11} > 0.
\]
Now, note that, again due to Proposition 3.8 in [Rev84, Chapter 2], the Liouville property implies that every tail event must have probability $0$ or $1$, and so the probability in the above display must be equal to $1$. This concludes the proof of Theorem 1.5. □
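Remark (numerical illustration; ours). One can check directly that the Kochen–Stone ratio in (3.21) stays bounded away from zero under the model asymptotics of (3.22) and (3.25). In the sketch below we take $P[E_i] = 1/i$ and $P[E_i \cap E_j] = \big((3|i-j|+1)(3\min(i,j)+\gamma^*)\big)^{-1}$, with all constants and error factors dropped, purely for illustration; we also assume that $\gamma$ in (1.7) denotes the Euler–Mascheroni constant.

```python
# Kochen-Stone ratio (3.21) under the model values from (3.22) and (3.25),
# with constants and error factors dropped (purely illustrative).
import math

gamma = 0.5772156649015329  # Euler-Mascheroni constant (our reading of (1.7))
gamma_star = (2 * gamma + 3 * math.log(2)) / (2 * math.log(2))

for k in (100, 300, 1000):
    num = sum(1 / i for i in range(1, k + 1)) ** 2
    den = sum(
        1 / ((3 * abs(i - j) + 1) * (3 * min(i, j) + gamma_star))
        for i in range(1, k + 1)
        for j in range(1, k + 1)
    )
    print(f"k={k}: (sum P[E_i])^2 / sum P[E_i inter E_j] = {num / den:.3f}")
```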
BIBLIOGRAPHY

[CP17] Francis Comets and Serguei Popov, The vacant set of two-dimensional critical random interlacement is infinite, Ann. Probab. 45 (2017), no. 6B, 4752–4785.

[CPV16] Francis Comets, Serguei Popov, and Marina Vachkovskaia, Two-dimensional random interlacements and late points for random walks, Commun. Math. Phys. 343 (2016), no. 1, 129–164.

[ČT12] Jiří Černý and Augusto Q. Teixeira, From random walk trajectories to random interlacements, Ensaios Matemáticos, vol. 23, Sociedade Brasileira de Matemática, 2012.

[DRS14] Alexander Drewitz, Balázs Ráth, and Artëm Sapozhnikov, An introduction to random interlacements, SpringerBriefs in Mathematics, Springer, 2014.

[KS64] Simon Kochen and Charles Stone, A note on the Borel–Cantelli lemma, Ill. J. Math. 8 (1964), 248–251.

[LL10] Gregory F. Lawler and Vlada Limic, Random walk: a modern introduction, Cambridge Studies in Advanced Mathematics, vol. 123, Cambridge University Press, 2010.

[MPW17] Mikhail Menshikov, Serguei Popov, and Andrew Wade, Non-homogeneous random walks: Lyapunov function methods for near-critical stochastic systems, Cambridge Tracts in Mathematics, vol. 209, Cambridge University Press, 2017.

[PT15] Serguei Popov and Augusto Teixeira, Soft local times and decoupling of random interlacements, J. Eur. Math. Soc. 17 (2015), no. 10, 2545–2593.

[Rev84] Daniel Revuz, Markov chains, 2nd ed., North-Holland Mathematical Library, vol. 11, North-Holland, 1984.

[Szn10] Alain-Sol Sznitman, Vacant set of random interlacements and percolation, Ann. Math. 171 (2010), no. 3, 2039–2087.

[Woe09] Wolfgang Woess, Denumerable Markov chains. Generating functions, boundary theory, random walks on trees, EMS Textbooks in Mathematics, European Mathematical Society, 2009.

Manuscript received on 9th April 2018, revised on 13th December 2018, accepted on 18th January 2019. Recommended by Editor H. Lacoin. Published under license CC BY 4.0. This journal is a member of Centre Mersenne.

Nina GANTERT, Technische Universität München, Fakultät für Mathematik, Boltzmannstr. 3, 85748 Garching (Germany), [email protected]

Serguei POPOV, Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas – UNICAMP, rua Sérgio Buarque de Holanda 651, 13083-859 Campinas SP (Brazil), [email protected]

Marina VACHKOVSKAIA, Department of Statistics, Institute of Mathematics, Statistics and Scientific Computation, University of Campinas – UNICAMP, rua Sérgio Buarque de Holanda 651, 13083-859 Campinas SP (Brazil), [email protected]
THE AENEID OF VIRGIL

Translated into English by J. W. Mackail, M.A., Fellow of Balliol College, Oxford. London: Macmillan and Co., 1885. (Project Gutenberg eBook #22456.)

Transcriber's note: the bracketed numbers in the text (page numbers of the 1885 edition, e.g. [Pg 3], and line numbers of Virgil's Aeneid, e.g. [54-85]) have been retained for reference. Obvious typographical errors have been corrected.

PREFACE

There is something grotesque in the idea of a prose translation of a poet, though the practice is become so common that it has ceased to provoke a smile or demand an apology. The language of poetry is language in fusion; that of prose is language fixed and crystallised; and an attempt to copy the one material in the other must always count on failure to convey what is, after all, one of the most essential things in poetry,—its poetical quality. And this is so with Virgil more, perhaps, than with any other poet; for more, perhaps, than any other poet Virgil depends on his poetical quality from first to last. Such a translation can only have the value of a copy of some great painting executed in mosaic, if indeed a copy in Berlin wool is not a closer analogy; and even at the best all it can have to say for itself will be in Virgil's own words, Experiar sensus; nihil hic nisi carmina desunt.

In this translation I have in the main followed the text of Conington and Nettleship. The more important deviations from this text are mentioned in the notes; but I have not thought it necessary to give a complete list of various readings, or to mention any change except where it might lead to misapprehension. Their notes have also been used by me throughout. Beyond this I have made constant use of the mass of ancient commentary going under the name of Servius; the most valuable, perhaps, of all, as it is in many ways the nearest to the poet himself. The explanation given in it has sometimes been followed against those of the modern editors. To other commentaries only occasional reference has been made. The sense that Virgil is his own best interpreter becomes stronger as one studies him more.

My thanks are due to Mr. Evelyn Abbott, Fellow and Tutor of Balliol, and to the Rev. H. C. Beeching, for much valuable suggestion and criticism.
TABLE OF CONTENTS

PREFACE
BOOK FIRST: The Coming of Aeneas to Carthage
BOOK SECOND: The Story of the Sack of Troy
BOOK THIRD: The Story of the Seven Years' Wandering
BOOK FOURTH: The Love of Dido, and Her End
BOOK FIFTH: The Games of the Fleet
BOOK SIXTH: The Vision of the Under World
BOOK SEVENTH: The Landing in Latium, and the Roll of the Armies of Italy
BOOK EIGHTH: The Embassage to Evander
BOOK NINTH: The Siege of the Trojan Camp
BOOK TENTH: The Battle on the Beach
BOOK ELEVENTH: The Council of the Latins, and the Life and Death of Camilla
BOOK TWELFTH: The Slaying of Turnus
NOTES

THE AENEID

BOOK FIRST

THE COMING OF AENEAS TO CARTHAGE

I sing of arms and the man who of old from the coasts of Troy came, an exile of fate, to Italy and the shore of Lavinium; hard driven on land and on the deep by the violence of heaven, for cruel Juno's unforgetful anger, and hard bestead in war also, ere he might found a city and carry his gods into Latium; from whom is the Latin race, the lords of Alba, and the stately city Rome. Muse, tell me why, for what attaint of her deity, or in what vexation, did the Queen of heaven drive one so excellent in goodness to circle through so many afflictions, to face so many toils? Is anger so fierce in celestial spirits?

There was a city of ancient days that Tyrian settlers dwelt in, Carthage, over against Italy and the Tiber mouths afar; rich of store, and mighty in war's fierce pursuits; wherein, they say, alone beyond all other lands had Juno her seat, and held Samos itself less dear. Here was her armour, here her chariot; even now, if fate permit, the goddess strives to nurture it for queen of the nations. Nevertheless she had heard a race was issuing of the blood of [Pg 2][20-53] Troy, which sometime should overthrow her Tyrian citadel; from it should come a people, lord of lands and tyrannous in war, the destroyer of Libya: so rolled the destinies. Fearful of that, the daughter of Saturn, the old war in her remembrance that she fought at Troy for her beloved Argos long ago,—nor had the springs of her anger nor the bitterness of her vexation yet gone out of mind: deep stored in her soul lies the judgment of Paris, the insult of her slighted beauty, the hated race and the dignities of ravished Ganymede; fired with this also, she tossed all over ocean the Trojan remnant left of the Greek host and merciless Achilles, and held them afar from Latium; and many a year were they wandering driven of fate around all the seas. Such work was it to found the Roman people.

Hardly out of sight of the land of Sicily did they set their sails to sea, and merrily upturned the salt foam with brazen prow, when Juno, the undying wound still deep in her heart, thus broke out alone: 'Am I then to abandon my baffled purpose, powerless to keep the Teucrian king from Italy? and because fate forbids me? Could Pallas lay the Argive fleet in ashes, and sink the Argives in the sea, for one man's guilt, mad Oïlean Ajax? Her hand darted Jove's flying fire from the clouds, scattered their ships, upturned the seas in tempest; him, his pierced breast yet breathing forth the flame, she caught in a whirlwind and impaled on a spike of rock. But I, who move queen among immortals, I sister and wife of Jove, wage warfare all these years with a single people; and is there any who still adores Juno's divinity, or will kneel to lay sacrifice on her altars?'
Such thoughts inly revolving in her kindled bosom, the goddess reaches Aeolia, the home of storm-clouds, the land laden with furious southern gales. Here in a desolate cavern Aeolus keeps under royal dominion and yokes in [Pg 3][54-85]dungeon fetters the struggling winds and loud storms. They with mighty moan rage indignant round their mountain barriers. In his lofty citadel Aeolus sits sceptred, assuages their temper and soothes their rage; else would they carry with them seas and lands, and the depth of heaven, and sweep them through space in their flying course. But, fearful of this, the lord omnipotent hath hidden them in caverned gloom, and laid a mountain mass high over them, and appointed them a ruler, who should know by certain law to strain and slacken the reins at command. To him now Juno spoke thus in suppliant accents: 'Aeolus—for to thee hath the father of gods and king of men given the wind that lulls and that lifts the waves—a people mine enemy sails the Tyrrhene sea, carrying into Italy the conquered gods of their Ilian home. Rouse thy winds to fury, and overwhelm their sinking vessels, or drive them asunder and strew ocean with their bodies. Mine are twice seven nymphs of passing loveliness; her who of them all is most excellent in beauty, Deïopea, I will unite to thee in wedlock to be thine for ever; that for this thy service she may fulfil all her years at thy side, and make thee father of a beautiful race.' Aeolus thus returned: 'Thine, O queen, the task to search whereto thou hast desire; for me it is right to do thy bidding. From thee have I this poor kingdom, from thee my sceptre and Jove's grace; thou dost grant me to take my seat at the feasts of the gods, and makest me sovereign over clouds and storms.' Even with these words, turning his spear, he struck the side of the hollow hill, and the winds, as in banded array, pour where passage is given them, and cover earth with eddying blasts. East wind and west wind together, and the gusty south-wester, falling prone on the sea, stir it up [Pg 4][86-120]from its lowest chambers, and roll vast billows to the shore. Behind rises shouting of men and whistling of cordage. In a moment clouds blot sky and daylight from the Teucrians' eyes; black night broods over the deep. Pole thunders to pole, and the air quivers with incessant flashes; all menaces them with instant death. Straightway Aeneas' frame grows unnerved and chill, and stretching either hand to heaven, he cries thus aloud: 'Ah, thrice and four times happy they who found their doom under high Troy town before their fathers' faces! Ah, son of Tydeus, bravest of the Grecian race, that I could not have fallen on the Ilian plains, and gasped out this my life beneath thine hand! where under the spear of Aeacides lies fierce Hector, lies mighty Sarpedon; where Simoïs so often bore beneath his whirling wave shields and helmets and brave bodies of men.' As the cry leaves his lips, a gust of the shrill north strikes full on the sail and raises the waves up to heaven. The oars are snapped; the prow swings away and gives her side to the waves; down in a heap comes a broken mountain of water. These hang on the wave's ridge; to these the yawning billow shows ground amid the surge, where the sea churns with sand. Three ships the south wind catches and hurls on hidden rocks, rocks amid the waves which Italians call the Altars, a vast reef banking the sea. Three the east forces from the deep into shallows and quicksands, piteous to see, dashes on shoals and girdles with a sandbank. 
One, wherein loyal Orontes and his Lycians rode, before their lord's eyes a vast sea descending strikes astern. The helmsman is dashed away and rolled forward headlong; her as she lies the billow sends spinning thrice round with it, and engulfs in the swift whirl. Scattered swimmers appear in the vast eddy, armour of men, timbers and Trojan treasure amid the water. Ere now the stout ship of Ilioneus, ere now of brave Achates, and she wherein [Pg 5][121-152]Abas rode, and she wherein aged Aletes, have yielded to the storm; through the shaken fastenings of their sides they all draw in the deadly water, and their opening seams give way. Meanwhile Neptune discerned with astonishment the loud roaring of the vexed sea, the tempest let loose from prison, and the still water boiling up from its depths, and lifting his head calm above the waves, looked forth across the deep. He sees all ocean strewn with Aeneas' fleet, the Trojans overwhelmed by the waves and the ruining heaven. Juno's guile and wrath lay clear to her brother's eye; east wind and west he calls before him, and thereon speaks thus: 'Stand you then so sure in your confidence of birth? Careless, O winds, of my deity, dare you confound sky and earth, and raise so huge a coil? you whom I—But better to still the aroused waves; for a second sin you shall pay me another penalty. Speed your flight, and say this to your king: not to him but to me was allotted the stern trident of ocean empire. His fastness is on the monstrous rocks where thou and thine, east wind, dwell: there let Aeolus glory in his palace and reign over the barred prison of his winds.' Thus he speaks, and ere the words are done he soothes the swollen seas, chases away the gathered clouds, and restores the sunlight. Cymothoë and Triton together push the ships strongly off the sharp reef; himself he eases them with his trident, channels the vast quicksands, and assuages the sea, gliding on light wheels along the water. Even as when oft in a throng of people strife hath risen, and the base multitude rage in their minds, and now brands and stones are flying; madness lends arms; then if perchance they catch sight of one reverend for goodness and service, they are silent and stand by with attentive ear; he with [Pg 6][153-190]speech sways their temper and soothes their breasts; even so hath fallen all the thunder of ocean, when riding forward beneath a cloudless sky the lord of the sea wheels his coursers and lets his gliding chariot fly with loosened rein. The outworn Aeneadae hasten to run for the nearest shore, and turn to the coast of Libya. There lies a spot deep withdrawn; an island forms a harbour with outstretched sides, whereon all the waves break from the open sea and part into the hollows of the bay. On this side and that enormous cliffs rise threatening heaven, and twin crags beneath whose crest the sheltered water lies wide and calm; above hangs a background of flickering forest, and the dark shade of rustling groves. Beneath the seaward brow is a rock-hung cavern, within it fresh springs and seats in the living stone, a haunt of nymphs; where tired ships need no fetters to hold nor anchor to fasten them with crooked bite. Here with seven sail gathered of all his company Aeneas enters; and disembarking on the land of their desire the Trojans gain the chosen beach, and set their feet dripping with brine upon the shore. At once Achates struck a spark from the flint and caught the fire on leaves, and laying dry fuel round kindled it into flame. 
Then, weary of fortune, they fetch out corn spoiled by the sea and weapons of corn-dressing, and begin to parch over the fire and bruise in stones the grain they had rescued. Meanwhile Aeneas scales the crag, and seeks the whole view wide over ocean, if he may see aught of Antheus storm-tossed with his Phrygian galleys, aught of Capys or of Caïcus' armour high astern. Ship in sight is none; three stags he espies straying on the shore; behind whole herds follow, and graze in long train across the valley. Stopping short, he snatched up a bow and swift arrows, the arms trusty Achates was carrying; and first the leaders, their stately heads high with branching antlers, then the common [Pg 7][191-222]herd fall to his hand, as he drives them with his shafts in a broken crowd through the leafy woods. Nor stays he till seven great victims are stretched on the sod, fulfilling the number of his ships. Thence he seeks the harbour and parts them among all his company. The casks of wine that good Acestes had filled on the Trinacrian beach, the hero's gift at their departure, he thereafter shares, and calms with speech their sorrowing hearts: 'O comrades, for not now nor aforetime are we ignorant of ill, O tried by heavier fortunes, unto this last likewise will God appoint an end. The fury of Scylla and the roaring recesses of her crags you have been anigh; the rocks of the Cyclops you have trodden. Recall your courage, put dull fear away. This too sometime we shall haply remember with delight. Through chequered fortunes, through many perilous ways, we steer for Latium, where destiny points us a quiet home. There the realm of Troy may rise again unforbidden. Keep heart, and endure till prosperous fortune come.' Such words he utters, and sick with deep distress he feigns hope on his face, and keeps his anguish hidden deep in his breast. The others set to the spoil they are to feast upon, tear chine from ribs and lay bare the flesh; some cut it into pieces and pierce it still quivering with spits; others plant cauldrons on the beach and feed them with flame. Then they repair their strength with food, and lying along the grass take their fill of old wine and fat venison. After hunger is driven from the banquet, and the board cleared, they talk with lingering regret of their lost companions, swaying between hope and fear, whether they may believe them yet alive, or now in their last agony and deaf to mortal call. Most does good Aeneas inly wail the loss now of valiant Orontes, now of Amycus, the cruel doom of Lycus, of brave Gyas, and brave Cloanthus. [Pg 8][223-254]And now they ceased; when from the height of air Jupiter looked down on the sail-winged sea and outspread lands, the shores and broad countries, and looking stood on the cope of heaven, and cast down his eyes on the realm of Libya. To him thus troubled at heart Venus, her bright eyes brimming with tears, sorrowfully speaks: 'O thou who dost sway mortal and immortal things with eternal command and the terror of thy thunderbolt, how can my Aeneas have transgressed so grievously against thee? how his Trojans? on whom, after so many deaths outgone, all the world is barred for Italy's sake. From them sometime in the rolling years the Romans were to arise indeed; from them were to be rulers who, renewing the blood of Teucer, should hold sea and land in universal lordship. This thou didst promise: why, O father, is thy decree reversed? This was my solace for the wretched ruin of sunken Troy, doom balanced against doom. 
Now so many woes are spent, and the same fortune still pursues them; Lord and King, what limit dost thou set to their agony? Antenor could elude the encircling Achaeans, could thread in safety the Illyrian bays and inmost realms of the Liburnians, could climb Timavus' source, whence through nine mouths pours the bursting tide amid dreary moans of the mountain, and covers the fields with hoarse waters. Yet here did he set Patavium town, a dwelling-place for his Teucrians, gave his name to a nation and hung up the armour of Troy; now settled in peace, he rests and is in quiet. We, thy children, we whom thou beckonest to the heights of heaven, our fleet miserably cast away for a single enemy's anger, are betrayed and severed far from the Italian coasts. Is this the reward of goodness? Is it thus thou dost restore our throne?' Smiling on her with that look which clears sky and [Pg 9][255-289]storms, the parent of men and gods lightly kissed his daughter's lips; then answered thus: 'Spare thy fear, Cytherean; thy people's destiny abides unshaken. Thine eyes shall see the city Lavinium, their promised home; thou shalt exalt to the starry heaven thy noble Aeneas; nor is my decree reversed. He thou lovest (for I will speak, since this care keeps torturing thee, and will unroll further the secret records of fate) shall wage a great war in Italy, and crush warrior nations; he shall appoint his people a law and a city; till the third summer see him reigning in Latium, and three winters' camps pass over the conquered Rutulians. But the boy Ascanius, whose surname is now Iülus—Ilus he was while the Ilian state stood sovereign—thirty great circles of rolling months shall he fulfil in government; he shall carry the kingdom from its fastness in Lavinium, and make a strong fortress of Alba the Long. Here the full space of thrice an hundred years shall the kingdom endure under the race of Hector's kin, till the royal priestess Ilia from Mars' embrace shall give birth to a twin progeny. Thence shall Romulus, gay in the tawny hide of the she-wolf that nursed him, take up their line, and name them Romans after his own name. I appoint to these neither period nor boundary of empire: I have given them dominion without end. Nay, harsh Juno, who in her fear now troubles earth and sea and sky, shall change to better counsels, and with me shall cherish the lords of the world, the gowned race of Rome. Thus is it willed. A day will come in the lapse of cycles, when the house of Assaracus shall lay Phthia and famed Mycenae in bondage, and reign over conquered Argos. From the fair line of Troy a Caesar shall arise, who shall limit his empire with ocean, his glory with the firmament, Julius, inheritor of great Iülus' name. Him one day, thy care done, thou shalt welcome to heaven loaded [Pg 10][290-321]with Eastern spoils; to him too shall vows be addressed. Then shall war cease, and the iron ages soften. Hoar Faith and Vesta, Quirinus and Remus brothers again, shall deliver statutes. The dreadful steel-riveted gates of war shall be shut fast; on murderous weapons the inhuman Fury, his hands bound behind him with an hundred fetters of brass, shall sit within, shrieking with terrible blood-stained lips.' So speaking, he sends Maia's son down from above, that the land and towers of Carthage, the new town, may receive the Trojans with open welcome; lest Dido, ignorant of doom, might debar them her land. Flying through the depth of air on winged oarage, the fleet messenger alights on the Libyan coasts. 
At once he does his bidding; at once, for a god willed it, the Phoenicians allay their haughty temper; the queen above all takes to herself grace and compassion towards the Teucrians. But good Aeneas, nightlong revolving many and many a thing, issues forth, so soon as bountiful light is given, to explore the strange country; to what coasts the wind has borne him, who are their habitants, men or wild beasts, for all he sees is wilderness; this he resolves to search, and bring back the certainty to his comrades. The fleet he hides close in embosoming groves beneath a caverned rock, amid shivering shadow of the woodland; himself, Achates alone following, he strides forward, clenching in his hand two broad-headed spears. And amid the forest his mother crossed his way, wearing the face and raiment of a maiden, the arms of a maiden of Sparta, or like Harpalyce of Thrace when she tires her coursers and outstrips the winged speed of Hebrus in her flight. For huntress fashion had she slung the ready bow from her shoulder, and left her blown tresses free, bared her knee, and knotted together her garments' flowing folds. 'Ha! my men,' she begins, 'shew me if [Pg 11][322-355]haply you have seen a sister of mine straying here girt with quiver and a lynx's dappled fell, or pressing with shouts on the track of a foaming boar.' Thus Venus, and Venus' son answering thus began: 'Sound nor sight have I had of sister of thine, O maiden unnamed; for thy face is not mortal, nor thy voice of human tone; O goddess assuredly! sister of Phoebus perchance, or one of the nymphs' blood? Be thou gracious, whoso thou art, and lighten this toil of ours; deign to instruct us beneath what skies, on what coast of the world, we are thrown. Driven hither by wind and desolate waves, we wander in a strange land among unknown men. Many a sacrifice shall fall by our hand before thine altars.' Then Venus: 'Nay, to no such offerings do I aspire. Tyrian maidens are wont ever to wear the quiver, to tie the purple buskin high above their ankle. Punic is the realm thou seest, Tyrian the people, and the city of Agenor's kin; but their borders are Libyan, a race unassailable in war. Dido sways the sceptre, who flying her brother set sail from the Tyrian town. Long is the tale of crime, long and intricate; but I will briefly follow its argument. Her husband was Sychaeus, wealthiest in lands of the Phoenicians, and loved of her with ill-fated passion; to whom with virgin rites her father had given her maidenhood in wedlock. But the kingdom of Tyre was in her brother Pygmalion's hands, a monster of guilt unparalleled. Between these madness came; the unnatural brother, blind with lust of gold, and reckless of his sister's love, lays Sychaeus low before the altars with stealthy unsuspected weapon; and for long he hid the deed, and by many a crafty pretence cheated her love-sickness with hollow hope. But in slumber came the very ghost of her unburied husband; lifting up a face pale in wonderful wise, he exposed the merciless altars and [Pg 12][356-387]his breast stabbed through with steel, and unwove all the blind web of household guilt. Then he counsels hasty flight out of the country, and to aid her passage discloses treasures long hidden underground, an untold mass of silver and gold. Stirred thereby, Dido gathered a company for flight. All assemble in whom hatred of the tyrant was relentless or fear keen; they seize on ships that chanced to lie ready, and load them with the gold. 
Pygmalion's hoarded wealth is borne overseas; a woman leads the work. They came at last to the land where thou wilt descry a city now great, New Carthage, and her rising citadel, and bought ground, called thence Byrsa, as much as a bull's hide would encircle. But who, I pray, are you, or from what coasts come, or whither hold you your way?' At her question he, sighing and drawing speech deep from his breast, thus replied: 'Ah goddess, should I go on retracing from the fountain head, were time free to hear the history of our woes, sooner would the evening star lay day asleep in the closed gates of heaven. Us, as from ancient Troy (if the name of Troy hath haply passed through your ears) we sailed over alien seas, the tempest at his own wild will hath driven on the Libyan coast. I am Aeneas the good, who carry in my fleet the household gods I rescued from the enemy; my fame is known high in heaven. I seek Italy my country, my kin of Jove's supreme blood. With twenty sail did I climb the Phrygian sea; oracular tokens led me on; my goddess mother pointed the way; scarce seven survive the shattering of wave and wind. Myself unknown, destitute, driven from Europe and Asia, I wander over the Libyan wilderness.' But staying longer complaint, Venus thus broke in on his half-told sorrows: 'Whoso thou art, not hated I think of the immortals [Pg 13][388-420]dost thou draw the breath of life, who hast reached the Tyrian city. Only go on, and betake thee hence to the courts of the queen. For I declare to thee thy comrades are restored, thy fleet driven back into safety by the shifted northern gales, except my parents were pretenders, and unavailing the augury they taught me. Behold these twelve swans in joyous line, whom, stooping from the tract of heaven, the bird of Jove fluttered over the open sky; now in long train they seem either to take the ground or already to look down on the ground they took. As they again disport with clapping wings, and utter their notes as they circle the sky in company, even so do these ships and crews of thine either lie fast in harbour or glide under full sail into the harbour mouth. Only go on, and turn thy steps where the pathway leads thee.' Speaking she turned away, and her neck shone roseate, her immortal tresses breathed the fragrance of deity; her raiment fell flowing down to her feet, and the godhead was manifest in her tread. He knew her for his mother, and with this cry pursued her flight: 'Thou also merciless! Why mockest thou thy son so often in feigned likeness? Why is it forbidden to clasp hand in hand, to hear and utter true speech?' Thus reproaching her he bends his steps towards the city. But Venus girt them in their going with dull mist, and shed round them a deep divine clothing of cloud, that none might see them, none touch them, or work delay, or ask wherefore they came. Herself she speeds through the sky to Paphos, and joyfully revisits her habitation, where the temple and its hundred altars steam with Sabaean incense, and are fresh with fragrance of chaplets in her worship. They meantime have hasted along where the pathway points, and now were climbing the hill which hangs enormous over the city, and looks down on its facing towers. [Pg 14][421-456]Aeneas marvels at the mass of building, pastoral huts once of old, marvels at the gateways and clatter of the pavements. The Tyrians are hot at work to trace the walls, to rear the citadel, and roll up great stones by hand, or to choose a spot for their dwelling and enclose it with a furrow. 
They ordain justice and magistrates, and the august senate. Here some are digging harbours, here others lay the deep foundations of their theatre, and hew out of the cliff vast columns, the lofty ornaments of the stage to be: even as bees when summer is fresh over the flowery country ply their task beneath the sun, when they lead forth their nation's grown brood, or when they press the liquid honey and strain their cells with nectarous sweets, or relieve the loaded incomers, or in banded array drive the idle herd of drones far from their folds; they swarm over their work, and the odorous honey smells sweet of thyme. 'Happy they whose city already rises!' cries Aeneas, looking on the town roofs below. Girt in the cloud he passes amid them, wonderful to tell, and mingling with the throng is descried of none. In the heart of the town was a grove deep with luxuriant shade, wherein first the Phoenicians, buffeted by wave and whirlwind, dug up the token Queen Juno had appointed, the head of a war horse: thereby was their race to be through all ages illustrious in war and opulent in living. Here to Juno was Sidonian Dido founding a vast temple, rich with offerings and the sanctity of her godhead: brazen steps rose on the threshold, brass clamped the pilasters, doors of brass swung on grating hinges. First in this grove did a strange chance meet his steps and allay his fears; first here did Aeneas dare to hope for safety and have fairer trust in his shattered fortunes. For while he closely scans the temple that towers above him, while, awaiting the queen, he admires the fortunate city, the emulous hands and elaborate work of her craftsmen, he sees ranged in order the [Pg 15][457-491]battles of Ilium, that war whose fame was already rumoured through all the world, the sons of Atreus and Priam, and Achilles whom both found pitiless. He stopped and cried weeping, 'What land is left, Achates, what tract on earth that is not full of our agony? Behold Priam! Here too is the meed of honour, here mortal estate touches the soul to tears. Dismiss thy fears; the fame of this will somehow bring thee salvation.' So speaks he, and fills his soul with the painted show, sighing often the while, and his face wet with a full river of tears. For he saw, how warring round the Trojan citadel here the Greeks fled, the men of Troy hard on their rear; here the Phrygians, plumed Achilles in his chariot pressing their flight. Not far away he knows the snowy canvas of Rhesus' tents, which, betrayed in their first sleep, the blood-stained son of Tydeus laid desolate in heaped slaughter, and turns the ruddy steeds away to the camp ere ever they tasted Trojan fodder or drunk of Xanthus. Elsewhere Troïlus, his armour flung away in flight—luckless boy, no match for Achilles to meet!—is borne along by his horses, and thrown back entangled with his empty chariot, still clutching the reins; his neck and hair are dragged over the ground, and his reversed spear scores the dust. Meanwhile the Ilian women went with disordered tresses to unfriendly Pallas' temple, and bore the votive garment, sadly beating breast with palm: the goddess turning away held her eyes fast on the ground. Thrice had Achilles whirled Hector round the walls of Troy, and was selling the lifeless body for gold; then at last he heaves a loud and heart-deep groan, as the spoils, as the chariot, as the dear body met his gaze, and Priam outstretching unarmed hands. 
Himself too he knew joining battle with the foremost Achaeans, knew the Eastern ranks and swart Memnon's armour. Penthesilea leads her crescent-shielded Amazonian columns in furious heat with [Pg 16][492-524]thousands around her; clasping a golden belt under her naked breast, the warrior maiden clashes boldly with men. While these marvels meet Dardanian Aeneas' eyes, while he dizzily hangs rapt in one long gaze, Dido the queen entered the precinct, beautiful exceedingly, a youthful train thronging round her. Even as on Eurotas' banks or along the Cynthian ridges Diana wheels the dance, while behind her a thousand mountain nymphs crowd to left and right; she carries quiver on shoulder, and as she moves outshines them all in deity; Latona's heart is thrilled with silent joy; such was Dido, so she joyously advanced amid the throng, urging on the business of her rising empire. Then in the gates of the goddess, beneath the central vault of the temple roof, she took her seat girt with arms and high enthroned. And now she gave justice and laws to her people, and adjusted or allotted their taskwork in due portion; when suddenly Aeneas sees advancing with a great crowd about them Antheus and Sergestus and brave Cloanthus, and other of his Trojans, whom the black squall had sundered at sea and borne far away on the coast. Dizzy with the shock of joy and fear he and Achates together were on fire with eagerness to clasp their hands; but in confused uncertainty they keep hidden, and clothed in the sheltering cloud wait to espy what fortune befalls them, where they are leaving their fleet ashore, why they now come; for they advanced, chosen men from all the ships, praying for grace, and held on with loud cries towards the temple. After they entered in, and free speech was granted, aged Ilioneus with placid mien thus began: 'Queen, to whom Jupiter hath given to found this new city, and lay the yoke of justice upon haughty tribes, we beseech thee, we wretched Trojans storm-driven over all [Pg 17][525-559]the seas, stay the dreadful flames from our ships; spare a guiltless race, and bend a gracious regard on our fortunes. We are not come to deal slaughter through Libyan homes, or to drive plundered spoils to the coast. Such violence sits not in our mind, nor is a conquered people so insolent. There is a place Greeks name Hesperia, an ancient land, mighty in arms and foison of the clod; Oenotrian men dwelt therein; now rumour is that a younger race from their captain's name have called it Italy. Thither lay our course . . . when Orion rising on us through the cloudrack with sudden surf bore us on blind shoals, and scattered us afar with his boisterous gales and whelming brine over waves and trackless reefs. To these your coasts we a scanty remnant floated up. What race of men, what land how barbarous soever, allows such a custom for its own? We are debarred the shelter of the beach; they rise in war, and forbid us to set foot on the brink of their land. If you slight human kinship and mortal arms, yet look for gods unforgetful of innocence and guilt. Aeneas was our king, foremost of men in righteousness, incomparable in goodness as in warlike arms; whom if fate still preserves, if he draws the breath of heaven and lies not yet low in dispiteous gloom, fear we have none; nor mayest thou repent of challenging the contest of service. In Sicilian territory too is tilth and town, and famed Acestes himself of Trojan blood. 
Grant us to draw ashore our storm-shattered fleet, to shape forest trees into beams and strip them for oars; so, if to Italy we may steer with our king and comrades found, Italy and Latium shall we gladly seek; but if salvation is clean gone, if the Libyan gulf holds thee, dear lord of thy Trojans, and Iülus our hope survives no more, seek we then at least the straits of Sicily, the open homes whence we sailed hither, and Acestes for our king.' Thus Ilioneus, and all the Dardanian company [Pg 18][560-593]murmured assent. . . . Then Dido, with downcast face, briefly speaks: 'Cheer your anxious hearts, O Teucrians; put by your care. Hard fortune in a strange realm forces me to this task, to keep watch and ward on my wide frontiers. Who can be ignorant of the race of Aeneas' people, who of Troy town and her men and deeds, or of the great war's consuming fire? Not so dull are the hearts of our Punic wearing, not so far doth the sun yoke his steeds from our Tyrian town. Whether your choice be broad Hesperia, the fields of Saturn's dominion, or Eryx for your country and Acestes for your king, my escort shall speed you in safety, my arsenals supply your need. Or will you even find rest here with me and share my kingdom? The city I establish is yours; draw your ships ashore; Trojan and Tyrian shall be held by me in even balance. And would that he your king, that Aeneas were here, storm-driven to this same haven! But I will send messengers along the coast, and bid them trace Libya to its limits, if haply he strays shipwrecked in forest or town.' Stirred by these words brave Achates and lord Aeneas both ere now burned to break through the cloud. Achates first accosts Aeneas: 'Goddess-born, what purpose now rises in thy spirit? Thou seest all is safe, our fleet and comrades are restored. One only is wanting, whom our eyes saw whelmed amid the waves; all else is answerable to thy mother's words.' Scarce had he spoken when the encircling cloud suddenly parts and melts into clear air. Aeneas stood discovered in sheen of brilliant light, like a god in face and shoulders; for his mother's self had shed on her son the grace of clustered locks, the radiant light of youth, and the lustre of joyous eyes; as when ivory takes beauty under the artist's hand, or when silver or Parian stone is inlaid in gold. [Pg 19][594-625]Then breaking in on all with unexpected speech he thus addresses the queen: 'I whom you seek am here before you, Aeneas of Troy, snatched from the Libyan waves. O thou who alone hast pitied Troy's untold agonies, thou who with us the remnant of the Grecian foe, worn out ere now by every suffering land and sea can bring, with us in our utter want dost share thy city and home! to render meet recompense is not possible for us, O Dido, nor for all who scattered over the wide world are left of our Dardanian race. The gods grant thee worthy reward, if their deity turn any regard on goodness, if aught avails justice and conscious purity of soul. What happy ages bore thee? what mighty parents gave thy virtue birth? While rivers run into the sea, while the mountain shadows move across their slopes, while the stars have pasturage in heaven, ever shall thine honour, thy name and praises endure in the unknown lands that summon me.' With these words he advances his right hand to dear Ilioneus, his left to Serestus; then to the rest, brave Gyas and brave Cloanthus. 
Dido the Sidonian stood astonished, first at the sight of him, then at his strange fortunes; and these words left her lips: 'What fate follows thee, goddess-born, through perilous ways? what violence lands thee on this monstrous coast? Art thou that Aeneas whom Venus the bountiful bore to Dardanian Anchises by the wave of Phrygian Simoïs? And well I remember how Teucer came to Sidon, when exiled from his native land he sought Belus' aid to gain new realms; Belus my father even then ravaged rich Cyprus and held it under his conquering sway. From that time forth have I known the fall of the Trojan city, known thy name and the Pelasgian princes. Their very foe would extol the Teucrians with highest praises, and boasted himself a branch [Pg 20][626-661]of the ancient Teucrian stem. Come therefore, O men, and enter our house. Me too hath a like fortune driven through many a woe, and willed at last to find my rest in this land. Not ignorant of ill do I learn to succour the afflicted.' With such speech she leads Aeneas into the royal house, and orders sacrifice in the gods' temples. Therewith she sends his company on the shore twenty bulls, an hundred great bristly-backed swine, an hundred fat lambs and their mothers with them, gifts of the day's gladness. . . . But the palace within is decked with splendour of royal state, and a banquet made ready amid the halls. The coverings are curiously wrought in splendid purple; on the tables is massy silver and deeds of ancestral valour graven in gold, all the long course of history drawn through many a heroic name from the nation's primal antiquity. Aeneas—for a father's affection denied his spirit rest—sends Achates speeding to his ships, to carry this news to Ascanius, and lead him to the town: in Ascanius is fixed all the parent's loving care. Presents likewise he bids him bring saved from the wreck of Ilium, a mantle stiff with gold embroidery, and a veil with woven border of yellow acanthus-flower, that once decked Helen of Argos, the marvel of her mother Leda's giving; Helen had borne them from Mycenae, when she sought Troy towers and a lawless bridal; the sceptre too that Ilione, Priam's eldest daughter, once had worn, a beaded necklace, and a double circlet of jewelled gold. Achates, hasting on his message, bent his way towards the ships. But in the Cytherean's breast new arts, new schemes revolve; if Cupid, changed in form and feature, may come in sweet Ascanius' room, and his gifts kindle the queen to madness and set her inmost sense aflame. Verily she fears the uncertain house, the double-tongued race of Tyre; [Pg 21][662-698]cruel Juno frets her, and at nightfall her care floods back. Therefore to winged Love she speaks these words: 'Son, who art alone my strength and sovereignty, son, who scornest the mighty father's Typhoïan shafts, to thee I fly for succour, and sue humbly to thy deity. How Aeneas thy brother is driven about all the sea-coasts by bitter Juno's malignity, this thou knowest, and hast often grieved in our grief. Now Dido the Phoenician holds him stayed with soft words, and I tremble to think how the welcome of Juno's house may issue; she will not be idle in this supreme turn of fortune. Wherefore I counsel to prevent her wiles and circle the queen with flame, that, unalterable by any deity, she may be held fast to me by passionate love for Aeneas. Take now my thought how to do this. 
The boy prince, my chiefest care, makes ready at his dear father's summons to go to the Sidonian city, carrying gifts that survive the sea and the flames of Troy. Him will I hide deep asleep in my holy habitation, high on Cythera's hills or in Idalium, that he may not know nor cross our wiles. Do thou but for one night feign his form, and, boy as thou art, put on the familiar face of a boy; so when in festal cheer, amid royal dainties and Bacchic juice, Dido shall take thee to her lap, shall fold thee in her clasp and kiss thee close and sweet, thou mayest imbreathe a hidden fire and unsuspected poison.' Love obeys his dear mother's words, lays by his wings, and walks rejoicingly with Iülus' tread. But Venus pours gentle dew of slumber on Ascanius' limbs, and lifts him lulled in her lap to the tall Idalian groves of her deity, where soft amaracus folds him round with the shadowed sweetness of its odorous blossoms. And now, obedient to her words, Cupid went merrily in Achates' guiding, with the royal gifts for the Tyrians. Already at his coming the queen hath sate her down in the midmost on her golden [Pg 22][699-733]throne under the splendid tapestries; now lord Aeneas, now too the men of Troy gather, and all recline on the strewn purple. Servants pour water on their hands, serve corn from baskets, and bring napkins with close-cut pile. Fifty handmaids are within, whose task is in their course to keep unfailing store and kindle the household fire. An hundred others, and as many pages all of like age, load the board with food and array the wine cups. Therewithal the Tyrians are gathered full in the wide feasting chamber, and take their appointed places on the broidered cushions. They marvel at Aeneas' gifts, marvel at Iülus, at the god's face aflame and forged speech, at the mantle and veil wrought with yellow acanthus-flower. Above all the hapless Phoenician, victim to coming doom, cannot satiate her soul, but, stirred alike by the boy and the gifts, she gazes and takes fire. He, when hanging clasped on Aeneas' neck he had satisfied all the deluded parent's love, makes his way to the queen; the queen clings to him with her eyes and all her soul, and ever and anon fondles him in her lap, ah, poor Dido! witless how mighty a deity sinks into her breast; but he, mindful of his mother the Acidalian, begins touch by touch to efface Sychaeus, and sows the surprise of a living love in the long-since-unstirred spirit and disaccustomed heart. Soon as the noise of banquet ceased and the board was cleared, they set down great bowls and enwreathe the wine. The house is filled with hum of voices eddying through the spacious chambers; lit lamps hang down by golden chainwork, and flaming tapers expel the night. Now the queen called for a heavy cup of jewelled gold, and filled it with pure wine; therewith was the use of Belus and all of Belus' race: then the hall was silenced. 'Jupiter,' she cries, 'for thou art reputed lawgiver of hospitality, grant that this be a joyful day to the Tyrians and the voyagers from Troy, a day to live in our children's memory. [Pg 23][734-756]Bacchus, the giver of gladness, be with us, and Juno the bountiful; and you, O Tyrians, be favourable to our assembly.' She spoke, and poured liquid libation on the board, which done, she first herself touched it lightly with her lips, then handed it to Bitias and bade him speed; he valiantly drained the foaming cup, and flooded him with the brimming gold. The other princes followed. 
Long-haired Iopas on his gilded lyre fills the chamber with songs ancient Atlas taught; he sings of the wandering moon and the sun's travails; whence is the human race and the brute, whence water and fire; of Arcturus, the rainy Hyades, and the twin Oxen; why wintry suns make such haste to dip in ocean, or what delay makes the nights drag lingeringly. Tyrians and Trojans after them redouble applause. Therewithal Dido wore the night in changing talk, alas! and drank long draughts of love, asking many a thing of Priam, many a thing of Hector; now in what armour the son of the Morning came; now of what fashion were Diomede's horses; now of mighty Achilles. 'Nay, come,' she cries, 'tell to us, O guest, from their first beginning the treachery of the Grecians, thy people's woes, and thine own wanderings; for this is now the seventh summer that bears thee a wanderer over all the earth and sea.' [Pg 24] BOOK SECOND THE STORY OF THE SACK OF TROY All were hushed, and sate with steadfast countenance; thereon, high from his cushioned seat, lord Aeneas thus began: 'Dreadful, O Queen, is the woe thou bidst me recall, how the Grecians pitiably overthrew the wealth and lordship of Troy; and I myself saw these things in all their horror, and I bore great part in them. What Myrmidon or Dolopian, or soldier of stern Ulysses, could in such a tale restrain his tears! and now night falls dewy from the steep of heaven, and the setting stars counsel to slumber. Yet if thy desire be such to know our calamities, and briefly to hear Troy's last agony, though my spirit shudders at the remembrance and recoils in pain, I will essay. 'Broken in war and beaten back by fate, and so many years now slid away, the Grecian captains build by Pallas' divine craft a horse of mountainous build, ribbed with sawn fir; they feign it vowed for their return, and this rumour goes about. Within the blind sides they stealthily imprison chosen men picked out one by one, and fill the vast cavern of its womb full with armed soldiery. 'There lies in sight an island well known in fame, Tenedos, rich of store while the realm of Priam endured, [Pg 25][23-55]now but a bay and roadstead treacherous to ships. Hither they launch forth, and hide on the solitary shore: we fancied they were gone, and had run down the wind for Mycenae. So all the Teucrian land put her long grief away. The gates are flung open; men go rejoicingly to see the Doric camp, the deserted stations and abandoned shore. Here the Dolopian troops were tented, here cruel Achilles; here their squadrons lay; here the lines were wont to meet in battle. Some gaze astonished at the deadly gift of Minerva the Virgin, and wonder at the horse's bulk; and Thymoetes begins to advise that it be drawn within our walls and set in the citadel, whether in guile, or that the doom of Troy was even now setting thus. But Capys and they whose mind was of better counsel, bid us either hurl sheer into the sea the guileful and sinister gift of Greece, or heap flames beneath to consume it, or pierce and explore the hollow hiding-place of its womb. The wavering crowd is torn apart in high dispute. 'At that, foremost of all and with a great throng about him, Laocoön runs hotly down from the high citadel, and cries from far: "Ah, wretched citizens, what height of madness is this? Believe you the foe is gone? or think you any Grecian gift is free of treachery? is it thus we know Ulysses? 
Either Achaeans are hid in this cage of wood, or the engine is fashioned against our walls to overlook the houses and descend upon the city; some delusion lurks there: trust not the horse, O Trojans. Be it what it may, I fear the Grecians even when they offer gifts." Thus speaking, he hurled his mighty spear with great strength at the creature's side and the curved framework of the belly: the spear stood quivering, and the jarred cavern of the womb sounded hollow and uttered a groan. And had divine ordinance, had a soul not infatuate been with us, he had moved us to lay violent steel on the Argolic hiding place; and Troy would now stand, and you, tall towers of Priam, yet abide.

'Lo, Dardanian shepherds meanwhile dragged clamorously before the King a man with hands tied behind his back, who to compass this very thing, to lay Troy open to the Achaeans, had gone to meet their ignorant approach, confident in spirit and doubly prepared to spin his snares or to meet assured death. From all sides, in eagerness to see, the people of Troy run streaming in, and vie in jeers at their prisoner. Know now the treachery of the Grecians, and from a single crime learn all. . . . For as he stood amid our gaze confounded, disarmed, and cast his eyes around the Phrygian columns, "Alas!" he cried, "what land now, what seas may receive me? or what is the last doom that yet awaits my misery? who have neither any place among the Grecians, and likewise the Dardanians clamour in wrath for the forfeit of my blood." At that lament our spirit was changed, and all assault stayed: we encourage him to speak, and tell of what blood he is sprung, or what assurance he brings his captors.

'"In all things assuredly," says he, "O King, befall what may, I will confess to thee the truth; nor will I deny myself of Argolic birth—this first—nor, if Fortune hath made Sinon unhappy, shall her malice mould him to a cheat and a liar. Hath a tale of the name of Palamedes, son of Belus, haply reached thine ears, and of his glorious rumour and renown; whom under false evidence the Pelasgians, because he forbade the war, sent innocent to death by wicked witness; now they bewail him when he hath left the light;—in his company, being near of blood, my father, poor as he was, sent me hither to arms from mine earliest years. While he stood unshaken in royalty and potent in the councils of the kings, we too wore a name and honour. When by subtle Ulysses' malice (no unknown tale do I tell) he left the upper regions, my shattered life crept on in darkness and grief, inly indignant at the fate of my innocent friend. Nor in my madness was I silent: and, should any chance offer, did I ever return a conqueror to my native Argos, I vowed myself his avenger, and with my words I stirred his bitter hatred. From this came the first taint of ill; from this did Ulysses ever threaten me with fresh charges, from this flung dark sayings among the crowd and sought confederate arms. Nay, nor did he rest, till by Calchas' service—but yet why do I vainly unroll the unavailing tale, or why hold you in delay, if all Achaeans are ranked together in your mind, and it is enough that I bear the name? Take the vengeance deferred; this the Ithacan would desire, and the sons of Atreus buy at a great ransom."

'Then indeed we press on to ask and inquire the cause, witless of wickedness so great and Pelasgian craft.
Tremblingly the false-hearted one pursues his speech: '"Often would the Grecians have taken to flight, leaving Troy behind, and disbanded in weariness of the long war: and would God they had! as often the fierce sea-tempest barred their way, and the gale frightened them from going. Most of all when this horse already stood framed with beams of maple, storm clouds roared over all the sky. In perplexity we send Eurypylus to inquire of Phoebus' oracle; and he brings back from the sanctuary these words of terror: With blood of a slain maiden, O Grecians, you appeased the winds when first you came to the Ilian coasts; with blood must you seek your return, and an Argive life be the accepted sacrifice. When that utterance reached the ears of the crowd, their hearts stood still, and a cold shudder ran through their inmost sense: for whom is doom purposed? who is claimed of Apollo? At this the Ithacan with loud clamour drags Calchas the soothsayer forth amidst them, and demands of him what is this the gods signify. And now many an one foretold me the villain's craft and cruelty, and silently saw what was to come. Twice five days he is speechless in his tent, and will not have any one denounced by his lips, or given up to death. Scarcely at last, at the loud urgence of the Ithacan, he breaks into speech as was planned, and appoints me for the altar. All consented; and each one's particular fear was turned, ah me! to my single destruction. And now the dreadful day was at hand; the rites were being ordered for me, the salted corn, and the chaplets to wreathe my temples. I broke away, I confess it, from death; I burst my bonds, and lurked all night darkling in the sedge of the marshy pool, till they might set their sails, if haply they should set them. Nor have I any hope more of seeing my old home nor my sweet children and the father whom I desire. Of them will they even haply claim vengeance for my flight, and wash away this crime in their wretched death. By the heavenly powers I beseech thee, the deities to whom truth is known, by all the faith yet unsullied that is anywhere left among mortals; pity woes so great; pity an undeserving sufferer."

'At these his tears we grant him life, and accord our pity. Priam himself at once commands his shackles and strait bonds to be undone, and thus speaks with kindly words: "Whoso thou art, now and henceforth dismiss and forget the Greeks: thou shalt be ours. And unfold the truth to this my question: wherefore have they reared this vast size of horse? who is their counsellor? or what their aim? what propitiation, or what engine of war is this?" He ended; the other, stored with the treacherous craft of Pelasgia, lifts to heaven his freed hands. "You, everlasting fires," he cries, "and your inviolable sanctity be my witness; you, O altars and accursed swords I fled, and chaplets of the gods I wore as victim! unblamed may I break the oath of Greek allegiance, unblamed hate them and bring all to light that they conceal; nor am I bound by any laws of country. Do thou only keep by thy promise, O Troy, and preserve faith with thy preserver, as my news shall be true, as my recompense great.

'"All the hope of Greece, and the confidence in which the war began, ever centred in Pallas' aid.
But since the wicked son of Tydeus, and Ulysses, forger of crime, made bold to tear the fated Palladium from her sanctuary, and cut down the sentries on the towered height; since they grasped the holy image, and dared with bloody hands to touch the maiden chaplets of the goddess; since then the hope of Greece ebbed and slid away backwards, their strength was broken, and the mind of the goddess estranged. Whereof the Tritonian gave token by no uncertain signs. Scarcely was the image set in the camp; flame shot sparkling from its lifted eyes, and salt sweat started over its body; thrice, wonderful to tell, it leapt from the ground with shield and spear quivering. Immediately Calchas prophesies that the seas must be explored in flight, nor may Troy towers be overthrown by Argive weapons, except they repeat their auspices at Argos, and bring back that divine presence they have borne away with them in the curved ships overseas. And now they have run down the wind for their native Mycenae, to gather arms and gods to attend them; they will remeasure ocean and be on you unawares. So Calchas expounds the omens. This image at his warning they reared in recompense for the Palladium and the injured deity, to expiate the horror of sacrilege. Yet Calchas bade them raise it to this vast size with oaken crossbeams, and build it up to heaven, that it may not find entry at the gates nor be drawn within the city, nor protect your people beneath the consecration of old. For if hand of yours should violate Minerva's offering, then utter destruction (the gods turn rather on himself his augury!) should be upon Priam's empire and the Phrygian people. But if under your hands it climbed into your city, Asia should advance in mighty war to the walls of Pelops, and a like fate awaited our children's children."

'So by Sinon's wiles and craft and perjury the thing gained belief; and we were ensnared by treachery and forced tears, we whom neither the son of Tydeus nor Achilles of Larissa, whom not ten years nor a thousand ships brought down.

'Here another sight, greater, alas! and far more terrible meets us, and alarms our thoughtless senses. Laocoön, allotted priest of Neptune, was slaying a great bull at the accustomed altars. And lo! from Tenedos, over the placid depths (I shudder as I recall) two snakes in enormous coils press down the sea and advance together to the shore; their breasts rise through the surge, and their blood-red crests overtop the waves; the rest trails through the main behind and wreathes back in voluminous curves; the brine gurgles and foams. And now they gained the fields, while their bloodshot eyes blazed with fire, and their tongues lapped and flickered in their hissing mouths. We scatter, pallid at the sight. They in unfaltering train make towards Laocoön. And first the serpents twine in their double embrace his two little children, and bite deep in their wretched limbs; then him likewise, as he comes up to help with arms in his hand, they seize and fasten in their enormous coils; and now twice clasping his waist, twice encircling his neck with their scaly bodies, they tower head and neck above him. He at once strains his hands to tear their knots apart, his fillets spattered with foul black venom; at once raises to heaven awful cries; as when, bellowing, a bull shakes the wavering axe from his neck and runs wounded from the altar.
But the two snakes glide away to the high sanctuary and seek the fierce Tritonian's citadel, and take shelter under the goddess' feet beneath the circle of her shield. Then indeed a strange terror thrills in all our amazed breasts; and Laocoön, men say, hath fulfilled his crime's desert, in piercing the consecrated wood and hurling his guilty spear into its body. All cry out that the image must be drawn to its home and supplication made to her deity. . . .

We sunder the walls, and lay open the inner city. All set to the work; they fix rolling wheels under its feet, and tie hempen bands on its neck. The fated engine climbs our walls, big with arms. Around it boys and unwedded girls chant hymns and joyfully lay their hand on the rope. It moves up, and glides menacing into the middle of the town. O native land! O Ilium, house of gods, and Dardanian city renowned in war! four times in the very gateway did it come to a stand, and four times armour rang in its womb. Yet we urge it on, mindless and infatuate, and plant the ill-ominous thing in our hallowed citadel. Even then Cassandra opens her lips to the coming doom, lips at a god's bidding never believed by the Trojans. We, the wretched people, to whom that day was our last, hang the shrines of the gods with festal boughs throughout the city. Meanwhile the heavens wheel on, and night rises from the sea, wrapping in her vast shadow earth and sky and the wiles of the Myrmidons; about the town the Teucrians are stretched in silence; slumber laps their tired limbs.

'And now the Argive squadron was sailing in order from Tenedos, and in the favouring stillness of the quiet moon sought the shores it knew; when the royal galley ran out a flame, and, protected by the gods' malign decrees, Sinon stealthily lets loose the imprisoned Grecians from their barriers of pine; the horse opens and restores them to the air; and joyfully issuing from the hollow wood, Thessander and Sthenelus the captains, and terrible Ulysses, slide down the dangling rope, with Acamas and Thoas and Neoptolemus son of Peleus, and Machaon first of all, and Menelaus, and Epeüs himself the artificer of the treachery. They sweep down the city buried in drunken sleep; the watchmen are cut down, and at the open gates they welcome all their comrades, and unite their confederate bands.

'It was the time when by the gift of God rest comes stealing first and sweetest on unhappy men. In slumber, lo! before mine eyes Hector seemed to stand by, deep in grief and shedding abundant tears; torn by the chariot, as once of old, and black with gory dust, his swoln feet pierced with the thongs. Ah me! in what guise was he! how changed from the Hector who returns from putting on Achilles' spoils, or launching the fires of Phrygia on the Grecian ships! with ragged beard and tresses clotted with blood, and all the many wounds upon him that he received around his ancestral walls. Myself too weeping I seemed to accost him ere he spoke, and utter forth mournful accents: "O light of Dardania, O surest hope of the Trojans, what long delay is this hath held thee? from what borders comest thou, Hector our desire? with what weary eyes we see thee, after many deaths of thy kin, after divers woes of people and city! What indignity hath marred thy serene visage? or why discern I these wounds?" He replies naught, nor regards my idle questioning; but heavily drawing a heart-deep groan, "Ah, fly, goddess-born," he says, "and rescue thyself from these flames.
The foe holds our walls; from her high ridges Troy is toppling down. Thy country and Priam ask no more. If Troy towers might be defended by strength of hand, this hand too had been their defence. Troy commends to thee her holy things and household gods; take them to accompany thy fate; seek for them a city, which, after all the seas have known thy wanderings, thou shalt at last establish in might." So speaks he, and carries forth in his hands from their inner shrine the chaplets and strength of Vesta, and the everlasting fire.

'Meanwhile the city is stirred with mingled agony; and more and more, though my father Anchises' house lay deep withdrawn and screened by trees, the noises grow clearer and the clash of armour swells. I shake myself from sleep and mount over the sloping roof, and stand there with ears attent: even as when flame catches a corn-field while south winds are furious, or the racing torrent of a mountain stream sweeps the fields, sweeps the smiling crops and labours of the oxen, and hurls the forest with it headlong; the shepherd in witless amaze hears the roar from the cliff-top. Then indeed proof is clear, and the treachery of the Grecians opens out. Already the house of Deïphobus hath crashed down in wide ruin amid the overpowering flames; already our neighbour Ucalegon is ablaze: the broad Sigean bay is lit with the fire. Cries of men and blare of trumpets rise up. Madly I seize my arms, nor is there so much purpose in arms; but my spirit is on fire to gather a band for fighting and charge for the citadel with my comrades. Fury and wrath drive me headlong, and I think how noble is death in arms.

'And lo! Panthus, eluding the Achaean weapons, Panthus son of Othrys, priest of Phoebus in the citadel, comes hurrying with the sacred vessels and conquered gods and his little grandchild in his hand, and runs distractedly towards my gates. "How stands the state, O Panthus? what stronghold are we to occupy?" Scarcely had I said so, when groaning he thus returns: "The crowning day is come, the irreversible time of the Dardanian land. No more are we a Trojan people; Ilium and the great glory of the Teucrians is no more. Angry Jupiter hath cast all into the scale of Argos. The Grecians are lords of the burning town. The horse, standing high amid the city, pours forth armed men, and Sinon scatters fire, insolent in victory. Some are at the wide-flung gates, all the thousands that ever came from populous Mycenae. Others have beset the narrow streets with lowered weapons; edge and glittering point of steel stand drawn, ready for the slaughter; scarcely at the entry do the guards of the gates essay battle, and hold out in the blind fight."

'Heaven's will thus declared by the son of Othrys drives me amid flames and arms, where the baleful Fury calls, and tumult of shouting rises up. Rhipeus and Epytus, most mighty in arms, join company with me; Hypanis and Dymas meet us in the moonlight and attach themselves to our side, and young Coroebus son of Mygdon. In those days it was he had come to Troy, fired with mad passion for Cassandra, and bore a son's aid to Priam and the Phrygians: hapless, that he listened not to his raving bride's counsels. . . .
Seeing them close-ranked and daring for battle, I therewith began thus: "Men, hearts of supreme and useless bravery, if your desire be fixed to follow one who dares the utmost; you see what is the fortune of our state: all the gods by whom this empire was upheld have gone forth, abandoning shrine and altar; your aid comes to a burning city. Let us die, and rush on their encircling weapons. The conquered have one safety, to hope for none."

'So their spirit is heightened to fury. Then, like wolves ravening in a black fog, whom mad malice of hunger hath driven blindly forth, and their cubs left behind await with throats unslaked; through the weapons of the enemy we march to certain death, and hold our way straight into the town. Night's sheltering shadow flutters dark around us. Who may unfold in speech that night's horror and death-agony, or measure its woes in weeping? The ancient city falls with her long years of sovereignty; corpses lie stretched stiff all about the streets and houses and awful courts of the gods. Nor do Teucrians alone pay forfeit of their blood; once and again valour returns even in conquered hearts, and the victorious Grecians fall. Everywhere is cruel agony, everywhere terror, and the sight of death at every turn.

'First, with a great troop of Grecians attending him, Androgeus meets us, taking us in ignorance for an allied band, and opens on us with friendly words: "Hasten, my men; why idly linger so late? others plunder and harry the burning citadel; are you but now on your march from the tall ships?" He spoke, and immediately (for no answer of any assurance was offered) knew he was fallen among the foe. In amazement, he checked foot and voice; even as one who struggling through rough briers hath trodden a snake on the ground unwarned, and suddenly shrinks fluttering back as it rises in anger and puffs its green throat out; even thus Androgeus drew away, startled at the sight. We rush in and encircle them with serried arms, and cut them down dispersedly in their ignorance of the ground and seizure of panic. Fortune speeds our first labour. And here Coroebus, flushed with success and spirit, cries: "O comrades, follow me where fortune points before us the path of safety, and shews her favour. Let us exchange shields, and accoutre ourselves in Grecian suits; whether craft or courage, who will ask of an enemy? the foe shall arm our hands." Thus speaking, he next dons the plumed helmet and beautifully blazoned shield of Androgeus, and fits the Argive sword to his side. So does Rhipeus, so Dymas in like wise, and all our men in delight arm themselves one by one in the fresh spoils. We advance, mingling with the Grecians, under a protection not our own, and join many a battle with those we meet amid the blind night; many a Greek we send down to hell. Some scatter to the ships and run for the safety of the shore; some in craven fear again climb the huge horse, and hide in the belly they knew. Alas that none may trust at all to estranged gods!

'Lo! Cassandra, maiden daughter of Priam, was being dragged with disordered tresses from the temple and sanctuary of Minerva, straining to heaven her blazing eyes in vain; her eyes, for fetters locked her delicate hands. At this sight Coroebus burst forth infuriate, and flung himself on death amid their columns. We all follow him up, and charge with massed arms.
Here first from the high temple roof we are overwhelmed with our own people's weapons, and a most pitiful slaughter begins through the fashion of our armour and the mistaken Greek crests; then the Grecians, with angry cries at the maiden's rescue, gather from every side and fall on us; Ajax in all his valour, and the two sons of Atreus, and the whole Dolopian army: as oft when bursting in whirlwind West and South clash with adverse blasts, and the East wind exultant on the coursers of the Dawn; the forests cry, and fierce in foam Nereus with his trident stirs the seas from their lowest depth. Those too appear, whom our stratagem routed through the darkness of dim night and drove all about the town; at once they know the shields and lying weapons, and mark the alien tone on our lips. We go down, overwhelmed by numbers. First Coroebus is stretched by Peneleus' hand at the altar of the goddess armipotent; and Rhipeus falls, the one man who was most righteous and steadfast in justice among the Teucrians: the gods' ways are not as ours: Hypanis and Dymas perish, pierced by friendly hands; nor did all thy goodness, O Panthus, nor Apollo's fillet protect thy fall. O ashes of Ilium and death flames of my people! you I call to witness that in your ruin I shunned no Grecian weapon or encounter, and my hand earned my fall, had destiny been thus. We tear ourselves away, I and Iphitus and Pelias, Iphitus now stricken in age, Pelias halting too under the wound of Ulysses, called forward by the clamour to Priam's house.

'Here indeed the battle is fiercest, as if all the rest of the fighting were nowhere, and no slaughter but here throughout the city, so do we descry the war in full fury, the Grecians rushing on the building, and their shielded column driving up against the beleaguered threshold. Ladders cling to the walls; and hard by the doors and planted on the rungs they hold up their shields in the left hand to ward off our weapons, and with their right clutch the battlements. The Dardanians tear down turrets and the covering of the house roof against them; with these for weapons, since they see the end is come, they prepare to defend themselves even in death's extremity: and hurl down gilded beams, the stately decorations of their fathers of old. Others with drawn swords have beset the doorway below and keep it in crowded column. We renew our courage, to aid the royal dwelling, to support them with our succour, and swell the force of the conquered.

'There was a blind doorway giving passage through the range of Priam's halls by a solitary postern, whereby, while our realm endured, hapless Andromache would often and often glide unattended to her father-in-law's house, and carry the boy Astyanax to his grandsire. I issue out on the sloping height of the ridge, whence wretched Teucrian hands were hurling their ineffectual weapons. A tower stood on the sheer brink, its roof ascending high into heaven, whence was wont to be seen all Troy and the Grecian ships and Achaean camp: attacking it with iron round about, where the joints of the lofty flooring yielded, we wrench it from its deep foundations and shake it free; it gives way, and suddenly falls thundering in ruin, crashing wide over the Grecian ranks. But others swarm up; nor meanwhile do stones nor any sort of missile slacken. . . .
Right before the vestibule and in the front doorway Pyrrhus moves rejoicingly in the sparkle of arms and gleaming brass: like as when a snake fed on poisonous herbs, whom chill winter kept hid and swollen underground, now fresh from his weeds outworn and shining in youth, wreathes his slippery body into the daylight, his upreared breast meets the sun, and his triple-cloven tongue flickers in his mouth. With him huge Periphas, and Automedon the armour-bearer, driver of Achilles' horses, with him all his Scyrian men climb the roof and hurl flames on the housetop. Himself among the foremost he grasps a poleaxe, bursts through the hard doorway, and wrenches the brazen-plated doors from the hinge; and now he hath cut out a plank from the solid oak and pierced a vast gaping hole. The house within is open to sight, and the long halls lie plain; open to sight are the secret chambers of Priam and the kings of old, and they see armed men standing in front of the doorway.

'But the inner house is stirred with shrieks and misery and confusion, and the court echoes deep with women's wailing; the golden stars are smitten with the din. Affrighted mothers stray about the vast house, and cling fast to the doors and print them with kisses. With his father's might Pyrrhus presses on; nor guards nor barriers can hold out. The gate totters under the hard driven ram, and the doors fall flat, rent from the hinge. Force makes way; the Greeks burst through the entrance and pour in, slaughtering the foremost, and filling the space with a wide stream of soldiers. Not so furiously when a foaming river bursts his banks and overflows, beating down the opposing dykes with whirling water, is he borne mounded over the fields, and sweeps herds and pens all about the plains. Myself I saw in the gateway Neoptolemus mad in slaughter, and the two sons of Atreus, saw Hecuba and the hundred daughters of her house, and Priam polluting with his blood the altar fires of his own consecration. The fifty bridal chambers—so great was the hope of his children's children—their doors magnificent with spoils of barbaric gold, have sunk in ruin; where the fire fails the Greeks are in possession.

'Perchance too thou mayest inquire what was Priam's fate. When he saw the ruin of his captured city, the gates of his house burst open, and the enemy amid his innermost chambers, the old man idly fastens round his aged trembling shoulders his long disused armour, girds on the unavailing sword, and advances on his death among the thronging foe.

'Within the palace and under the bare cope of sky was a massive altar, and hard on the altar an ancient bay tree leaned clasping the household gods in its shadow. Here Hecuba and her daughters crowded vainly about the altar-stones, like doves driven headlong by a black tempest, and crouched clasping the gods' images. And when she saw Priam her lord with the armour of youth on him, "What spirit of madness, my poor husband," she cries, "hath stirred thee to gird on these weapons? or whither dost thou run? Not such the succour nor these the defenders the time requires: no, were mine own Hector now beside us. Retire, I beseech thee, hither; this altar will protect us all, or thou wilt share our death." With these words on her lips she drew the aged man to her, and set him on the holy seat.

'And lo, escaped from slaughtering Pyrrhus through the weapons of the enemy, Polites, one of Priam's children, flies wounded down the long colonnades and circles the empty halls.
Pyrrhus pursues him fiercely with aimed wound, just catching at him, and follows hard on him with his spear. As at last he issued before his parents' eyes and faces, he fell, and shed his life in a pool of blood. At this Priam, although even now fast in the toils of death, yet withheld not nor spared a wrathful cry: "Ah, for thy crime, for this thy hardihood, may the gods, if there is goodness in heaven to care for aught such, pay thee in full thy worthy meed, and return thee the reward that is due! who hast made me look face to face on my child's murder, and polluted a father's countenance with death. Ah, not such to a foe was the Achilles whose parentage thou beliest; but he revered a suppliant's right and trust, restored to the tomb Hector's pallid corpse, and sent me back to my realm." Thus the old man spoke, and launched his weak and unwounding spear, which, recoiling straight from the jarring brass, hung idly from his shield above the boss. Thereat Pyrrhus: "Thou then shalt tell this, and go with the message to my sire the son of Peleus: remember to tell him of my baleful deeds, and the degeneracy of Neoptolemus. Now die." So saying, he drew him quivering to the very altar, slipping in the pool of his child's blood, and wound his left hand in his hair, while in his right the sword flashed out and plunged to the hilt in his side. This was the end of Priam's fortunes; thus did allotted fate find him, with burning Troy and her sunken towers before his eyes, once magnificent lord over so many peoples and lands of Asia. The great corpse lies along the shore, a head severed from the shoulders and a body without a name.

'But then an awful terror began to encircle me; I stood in amaze; there rose before me the likeness of my loved father, as I saw the king, old as he, sobbing out his life under the ghastly wound; there rose Creüsa forlorn, my plundered house, and little Iülus' peril. I look back and survey what force is around me. All, outwearied, have given up and leapt headlong to the ground, or flung themselves wretchedly into the fire:

['Yes, and now I only was left; when I espy the daughter of Tyndarus close in the courts of Vesta, crouching silently in the fane's recesses; the bright glow of the fires lights my wandering, as my eyes stray all about. Fearing the Teucrians' anger for the overthrown towers of Troy, and the Grecians' vengeance and the wrath of the husband she had abandoned, she, the common Fury of Troy and her native country, had hidden herself and cowered unseen by the altars. My spirit kindles to fire, and rises in wrath to avenge my dying land and take repayment for her crimes. Shall she verily see Sparta and her native Mycenae unscathed, and depart a queen and triumphant? Shall she see her spousal and her home, her parents and children, attended by a crowd of Trojan women and Phrygians to serve her? and Priam have fallen under the sword? Troy blazed in fire? the shore of Dardania so often soaked with blood? Not so. For though there is no name or fame in a woman's punishment, nor honour in the victory, yet shall I have praise in quenching a guilty life and exacting a just recompense; and it will be good to fill my soul with the flame of vengeance, and satisfy the ashes of my people.
Thus broke I forth, and advanced infuriate;]

'——When my mother came visibly before me, clear to sight as never till then, and shone forth in pure radiance through the night, gracious, evident in godhead, in shape and stature such as she is wont to appear to the heavenly people; she caught me by the hand and stayed me, and pursued thus with roseate lips:

'"Son, what overmastering pain thus wakes thy wrath? Why ravest thou? or whither is thy care for us fled? Wilt thou not first look to it, where thou hast left Anchises, thine aged worn father; or if Creüsa thy wife and the child Ascanius survive? round about whom all the Greek battalions range; and without my preventing care, the flames ere this had made them their portion, and the hostile sword drunk their blood. Not the hated face of the Laconian woman, Tyndarus' daughter; not Paris is to blame; the gods, the gods in anger overturn this magnificence, and make Troy topple down. Look, for all the cloud that now veils thy gaze and dulls mortal vision with damp encircling mist, I will rend from before thee. Fear thou no commands of thy mother, nor refuse to obey her counsels. Here, where thou seest sundered piles of masonry and rocks violently torn from rocks, and smoke eddying mixed with dust, Neptune with his great trident shakes wall and foundation out of their places, and upturns all the city from her base. Here Juno in all her terror holds the Scaean gates at the entry, and, girt with steel, calls her allied army furiously from their ships. . . . Even now on the citadel's height, look back! Tritonian Pallas is planted in glittering halo and Gorgonian terror. Their lord himself pours courage and prosperous strength on the Grecians, himself stirs the gods against the arms of Dardania. Haste away, O son, and put an end to the struggle. I will never desert thee; I will set thee safe in the courts of thy father's house."

'She ended, and plunged in the dense blackness of the night. Awful faces shine forth, and, set against Troy, divine majesties . . .

'Then indeed I saw all Ilium sinking in flame, and Neptunian Troy uprooted from her base: even as an ancient ash on the mountain heights, hacked all about with steel and fast-falling axes, when husbandmen emulously strain to cut it down: it hangs threateningly, with shaken top and quivering tresses asway; till gradually, overmastered with wounds, it utters one last groan, and rending itself away, falls in ruin along the ridge. I descend, and under a god's guidance clear my way between foe and flame; weapons give ground before me, and flames retire.

'And now, when I have reached the courts of my ancestral dwelling, our home of old, my father, whom it was my first desire to carry high into the hills, and whom first I sought, declines, now Troy is rooted out, to prolong his life through the pains of exile.

'"Ah, you," he cries, "whose blood is at the prime, whose strength stands firm in native vigour, do you take your flight. . . . Had the lords of heaven willed to prolong life for me, they should have preserved this my home. Enough and more is the one desolation we have seen, survivors of a captured city. Thus, oh thus salute me and depart, as a body laid out for burial. Mine own hand shall find me death: the foe will be merciful and seek my spoils: light is the loss of a tomb. This long time hated of heaven, I uselessly delay the years, since the father of gods and king of men blasted me with wind of thunder and scathe of flame."
'Thus held he on in utterance, and remained obstinate. We press him, dissolved in tears, my wife Creüsa, Ascanius, all our household, that our father involve us not all in his ruin, and add his weight to the sinking scale of doom. He refuses, and keeps seated steadfast in his purpose. Again I rush to battle, and choose death in my misery. For what had counsel or chance yet to give? Thoughtest thou my feet, O father, could retire and abandon thee? and fell so unnatural words from a parent's lips? "If heaven wills that naught be left of our mighty city, if this be thy planted purpose, thy pleasure to cast in thyself and thine to the doom of Troy; for this death indeed the gate is wide, and even now Pyrrhus will be here newly bathed in Priam's blood, Pyrrhus who slaughters the son before the father's face, the father upon his altars. For this was it, bountiful mother, thou dost rescue me amid fire and sword, to see the foe in my inmost chambers, and Ascanius and my father, Creüsa by their side, hewn down in one another's blood? My arms, men, bring my arms! the last day calls on the conquered. Return me to the Greeks; let me revisit and renew the fight. Never to-day shall we all perish unavenged."

'Thereat I again gird on my sword, and fitting my left arm into the clasps of the shield, strode forth of the palace. And lo! my wife clung round my feet on the threshold, and held little Iülus up to his father's sight. "If thou goest to die, let us too hurry with thee to the end. But if thou knowest any hope to place in arms, be this household thy first defence. To what is little Iülus and thy father, to what am I left who once was called thy wife?"

'So she shrieked, and filled all the house with her weeping; when a sign arises sudden and marvellous to tell. For, between the hands and before the faces of his sorrowing parents, lo! above Iülus' head there seemed to stream a light luminous cone, and a flame whose touch hurt not to flicker in his soft hair and play round his brows. We in a flutter of affright shook out the blazing hair and quenched the holy fires with spring water. But lord Anchises joyfully upraised his eyes; and stretching his hands to heaven: "Jupiter omnipotent," he cries, "if thou dost relent at any prayers, look on us this once alone; and if our goodness deserve it, give thine aid hereafter, O lord, and confirm this thine omen."

'Scarcely had the aged man spoken thus, when with sudden crash it thundered on the left, and a star gliding through the dusk shot from heaven drawing a bright trail of light. We watch it slide over the palace roof, leaving the mark of its pathway, and bury its brilliance in the wood of Ida; the long drawn track shines, and the region all about fumes with sulphur. Then conquered indeed my father rises to address the gods and worship the holy star. "Now, now delay is done with: I follow, and where you lead, I come. Gods of my fathers, save my house, save my grandchild. Yours is this omen, and in your deity Troy stands. I yield, O my son, and refuse not to go in thy company."

'He ended; and now more loudly the fire roars along the city, and the burning tides roll nearer. "Up then, beloved father, and lean on my neck; these shoulders of mine will sustain thee, nor will so dear a burden weigh me down. Howsoever fortune fall, one and undivided shall be our peril, one the escape of us twain. Little Iülus shall go along with me, and my wife follow our steps afar. You of my household, give heed to what I say.
As you leave the city there is a mound and ancient temple of Ceres lonely on it, and hard by an aged cypress, guarded many years in ancestral awe: to this resting-place let us gather from diverse quarters. Thou, O father, take the sacred things and the household gods of our ancestors in thine hand. For me, just parted from the desperate battle, with slaughter fresh upon me, to handle them were guilt, until I wash away in a living stream the soilure. . . ."

So spoke I, and spread over my neck and broad shoulders a tawny lion-skin for covering, and stoop to my burden. Little Iülus, with his hand fast in mine, keeps uneven pace after his father. Behind my wife follows. We pass on in the shadows. And I, lately moved by no weapons launched against me, nor by the thronging bands of my Grecian foes, am now terrified at every breath, startled by every noise, thrilling with fear alike for my companion and my burden.

'And now I was nearing the gates, and thought I had outsped all the way; when suddenly the crowded trampling of feet came to our ears, and my father, looking forth into the darkness, cries: "My son, my son, fly; they draw near. I espy the gleaming shields and the flicker of brass." At this, in my flurry and confusion, some hostile god bereft me of my senses. For while I plunge down byways, and swerve from where the familiar streets ran, Creüsa, alas! whether, torn by fate from her unhappy husband, she stood still, or did she mistake the way, or sink down outwearied? I know not; and never again was she given back to our eyes; nor did I turn to look for my lost one, or cast back a thought, ere we were come to ancient Ceres' mound and hallowed seat; here at last, when all gathered, one was missing, vanished from her child's and her husband's company. What man or god did I spare in frantic reproaches? or what crueller sight met me in our city's overthrow? I charge my comrades with Ascanius and lord Anchises, and the gods of Teucria, hiding them in the winding vale. Myself I regain the city, girding on my shining armour; fixed to renew every danger, to retrace my way throughout Troy, and fling myself again on its perils.

'First of all I regain the walls and the dim gateway whence my steps had issued; I scan and follow back my footprints with searching gaze in the night. Everywhere my spirit shudders, dismayed at the very silence. Thence I pass on home, if haply her feet (if haply!) had led her thither. The Grecians had poured in, and filled the palace. The devouring fire goes rolling before the wind high as the roof; the flames tower over it, and the heat surges up into the air. I move on, and revisit the citadel and Priam's dwelling; where now in the spacious porticoes of Juno's sanctuary, Phoenix and accursed Ulysses, chosen sentries, were guarding the spoil. Hither from all quarters is flung in masses the treasure of Troy torn from burning shrines, tables of the gods, bowls of solid gold, and raiment of the captives. Boys and cowering mothers in long file stand round. . . .

Yes, and I dared to cry abroad through the darkness; I filled the streets with calling, and again and yet again with vain reiterance cried piteously on Creüsa. As I stormed and sought her endlessly among the houses of the town, there rose before mine eyes a melancholy phantom, the ghost of very Creüsa, in likeness larger than her wont. I was motionless; my hair stood up, and the accents faltered on my tongue.
Then she thus addressed me, and with this speech allayed my distresses: "What help is there in this mad passion of grief, sweet my husband? not without divine influence does this come to pass: nor may it be, nor does the high lord of Olympus allow, that thou shouldest carry Creüsa hence in thy company. Long shall be thine exile, and weary spaces of sea must thou furrow through; and thou shalt come to the land Hesperia, where Lydian Tiber flows with soft current through rich and populous fields. There prosperity awaits thee, and a kingdom, and a king's daughter for thy wife. Dispel these tears for thy beloved Creüsa. Never will I look on the proud homes of the Myrmidons or Dolopians, or go to be the slave of Greek matrons, I a daughter of Dardania, a daughter-in-law of Venus the goddess. . . . But the mighty mother of the gods keeps me in these her borders. And now farewell, and still love thy child and mine." This speech uttered, while I wept and would have said many a thing, she left me and retreated into thin air. Thrice there was I fain to lay mine arms round her neck; thrice the vision I vainly clasped fled out of my hands, even as the light breezes, or most like to fluttering sleep. So at last, when night is spent, I revisit my comrades.

'And here I find a marvellous great company, newly flocked in, mothers and men, a people gathered for exile, a pitiable crowd. From all quarters they are assembled, ready in heart and fortune, to whatsoever land I will conduct them overseas. And now the morning star rose over the high ridges of Ida, and led on the day; and the Grecians held the gateways in leaguer, nor was any hope of help given. I withdrew, and raising my father up, I sought the mountain.'

BOOK THIRD

THE STORY OF THE SEVEN YEARS' WANDERING

'After heaven's lords pleased to overthrow the state of Asia and Priam's guiltless people, and proud Ilium fell, and Neptunian Troy smokes all along the ground, we are driven by divine omens to seek distant places of exile in waste lands. Right under Antandros and the mountains of Phrygian Ida we build a fleet, uncertain whither the fates carry us or where a resting-place is given, and gather the people together. Scarcely had the first summer set in, when lord Anchises bids us spread our sails to fortune, and weeping I leave the shores and havens of my country, and the plains where once was Troy. I sail to sea an exile, with my comrades and son and the gods of household and state.

'A land of vast plains lies apart, the home of Mavors, in Thracian tillage, and sometime under warrior Lycurgus' reign; friendly of old to Troy, and their gods in alliance while our fortune lasted. Hither I pass, and on the winding shore I lay under thwarting fates the first foundations of a city, and from my own name fashion its name, Aeneadae.

'I was paying sacrifice to my mother, daughter of Dione, and to all the gods, so to favour the work begun, and slew a shining bull on the shore to the high lord of the heavenly people. Haply there lay a mound hard at hand, crowned with cornel thickets and bristling dense with shafts of myrtle. I drew near; and essaying to tear up the green wood from the soil, that I might cover the altar with leafy boughs, I see a portent ominous and wonderful to tell. For from the first tree whose roots are rent away and broken from the ground, drops of black blood trickle, and gore stains the earth. An icy shudder shakes my limbs, and my blood curdles chill with terror.
Yet from another I go on again to tear away a tough shoot, fully to fathom its secret; yet from another black blood follows out of the bark. With many searchings of heart I prayed the woodland nymphs, and lord Gradivus, who rules in the Getic fields, to make the sight propitious as was meet and lighten the omen. But when I assail a third spearshaft with a stronger effort, pulling with knees pressed against the sand; shall I speak or be silent? from beneath the mound is heard a pitiable moan, and a voice is uttered to my ears: "Woe's me, why rendest thou me, Aeneas? spare me at last in the tomb, spare pollution to thine innocent hands. Troy bore me; not alien to thee am I, nor this blood that oozes from the stem. Ah, fly the cruel land, fly the greedy shore! For I am Polydorus; here the iron harvest of weapons hath covered my pierced body, and shot up in sharp javelins." Then indeed, borne down with dubious terror, I was motionless, my hair stood up, and the accents faltered on my tongue.

'This Polydorus once with great weight of gold had hapless Priam sent in secret to the nurture of the Thracian king, when now he was losing trust in the arms of Dardania, and saw his city leaguered round about. The king, when the Teucrian power was broken and fortune withdrew, following Agamemnon's estate and triumphant arms, severs every bond of duty; murders Polydorus, and lays strong hands on the gold. O accursed hunger of gold, to what dost thou not compel human hearts! When the terror left my senses, I lay the divine tokens before the chosen princes of the people, with my father at their head, and demand their judgment. All are of one mind, to leave the guilty land, and abandoning a polluted home, to let the gales waft our fleets. So we bury Polydorus anew, and the earth is heaped high over his mound; altars are reared to his ghost, sad with dusky chaplets and black cypress; and around are the Ilian women with hair unbound in their fashion. We offer bubbling bowls of warm milk and cups of consecrated blood, and lay the spirit to rest in her tomb, and with loud voice utter the last call.

'Thereupon, so soon as ocean may be trusted, and the winds leave the seas in quiet, and the soft whispering south wind calls seaward, my comrades launch their ships and crowd the shores. We put out from harbour, and lands and towns sink away. There lies in mid sea a holy land, most dear to the mother of the Nereids and Neptune of Aegae, which strayed about coast and strand till the Archer god in his affection chained it fast from high Myconos and Gyaros, and made it lie immoveable and slight the winds. Hither I steer; and it welcomes my weary crew to the quiet shelter of a safe haven. We disembark and worship Apollo's town. Anius the king, king at once of the people and priest of Phoebus, his brows garlanded with fillets and consecrated laurel, comes to meet us; he knows Anchises, his friend of old; we clasp hands in welcome, and enter his palace. I worshipped the god's temple, an ancient pile of stone. "Lord of Thymbra, give us an enduring dwelling-place; grant a house and family to thy weary servants, and a city to abide: keep Troy's second fortress, the remnant left of the Grecians and merciless Achilles. Whom follow we? or whither dost thou bid us go, where fix our seat? Grant an omen, O lord, and inspire our minds."
'Scarcely had I spoken thus; suddenly all seemed to shake, all the courts and laurels of the god, the whole hill to be stirred round about, and the cauldron to moan in the opening sanctuary. We sink low on the ground, and a voice is borne to our ears: "Stubborn race of Dardanus, the same land that bore you by parentage of old shall receive you again on her bountiful breast. Seek out your ancient mother; hence shall the house of Aeneas sway all regions, his children's children and they who shall be born of them." Thus Phoebus; and mingled outcries of great gladness uprose; all ask, what is that city? whither calls Phoebus our wandering, and bids us return? Then my father, unrolling the records of men of old, "Hear, O princes," says he, "and learn your hopes. In mid ocean lies Crete, the island of high Jove, wherein is mount Ida, the cradle of our race. An hundred great towns are inhabited in that opulent realm; from it our forefather Teucer of old, if I recall the tale aright, sailed to the Rhoetean coasts and chose a place for his kingdom. Not yet was Ilium nor the towers of Pergama reared; they dwelt in the valley bottoms. Hence came our Lady, haunter of Cybele, the Corybantic cymbals and the grove of Ida; hence the rites of inviolate secrecy, and the lions yoked under the chariot of their mistress. Up then, and let us follow where divine commandments lead; let us appease the winds, and seek the realm of Gnosus. Nor is it a far journey away. Only be Jupiter favourable, the third day shall bring our fleet to anchor on the Cretan coast." So spoke he, and slew fit sacrifice on the altars, a bull to Neptune, a bull to thee, fair Apollo, a black sheep to Tempest, a white to the prosperous West winds.

'Rumour flies that Idomeneus the captain is driven forth of his father's realm, and the shores of Crete are abandoned, that the houses are void of foes and the dwellings lie empty to our hand. We leave the harbour of Ortygia, and fly along the main, by the revel-trod ridges of Naxos, by green Donusa, Olearos and snow-white Paros, and the sea-strewn Cyclades, threading the racing channels among the crowded lands. The seamen's clamour rises in emulous dissonance; each cheers his comrade: Seek we Crete and our forefathers. A wind rising astern follows us forth on our way, and we glide at last to the ancient Curetean coast. So I set eagerly to work on the walls of my chosen town, and call it Pergamea, and exhort my people, joyful at the name, to cherish their homes and rear the castle buildings. And even now the ships were drawn up on the dry beach; the people were busy in marriages and among their new fields; I was giving statutes and homesteads; when suddenly from a tainted space of sky came, noisome on men's bodies and pitiable on trees and crops, pestilence and a year of death. They left their sweet lives or dragged themselves on in misery; Sirius scorched the fields into barrenness; the herbage grew dry, and the sickly harvest denied sustenance. My father counsels to remeasure the sea and go again to Phoebus in his Ortygian oracle, to pray for grace and ask what issue he ordains to our exhausted state; whence he bids us search for aid to our woes, whither bend our course.

'Night fell, and sleep held all things living on the earth.
The sacred images of the gods and the household deities of Phrygia, that I had borne with me from Troy out of the midst of the burning city, seemed to stand before mine eyes as I lay sleepless, clear in the broad light where the full moon poured through the latticed windows; then thus addressed me, and with this speech allayed my distresses: "What Apollo hath to tell thee when thou dost reach Ortygia, he utters here, and sends us unsought to thy threshold. We who followed thee and thine arms when Dardania went down in fire; we who under thee have traversed on shipboard the swelling sea; we in like wise will exalt to heaven thy children to be, and give empire to their city. Do thou prepare a mighty town for a mighty people, nor draw back from the long wearisome chase. Thou must change thy dwelling. Not to these shores did the god at Delos counsel thee, or Apollo bid thee find rest in Crete. There is a region Greeks name Hesperia, an ancient land, mighty in arms and foison of the clod; Oenotrian men dwell therein; now rumour is that a younger race have called it Italy after their captain's name. This is our true dwelling place; hence is Dardanus sprung, and lord Iasius, the first source of our race. Up, arise, and tell with good cheer to thine aged parent this plain tale, to seek Corythus and the lands of Ausonia. Jupiter denies thee the Dictaean fields."

'Astonished at this vision and divine utterance (nor was that slumber; but openly I seemed to know their countenances, their veiled hair and gracious faces, and therewith a cold sweat broke out all over me) I spring from my bed and raise my voice and upturned hands skyward and pay pure offering on the hearth. The sacrifice done, I joyfully tell Anchises, and relate all in order. He recognises the double descent and twofold parentage, and the later wanderings that had deceived him among ancient lands. Then he speaks: "O son, hard wrought by the destinies of Ilium, Cassandra only foretold me this fortune. Now I recall how she prophesied this was fated to our race, and often cried of Hesperia, often of an Italian realm. But who was to believe that Teucrians should come to Hesperian shores? or whom might Cassandra then move by prophecy? Yield we to Phoebus, and follow the better way he counsels." So says he, and we all rejoicingly obey his speech. This dwelling likewise we abandon; and leaving some few behind, spread our sails and run over the waste sea in our hollow wood.

'After our ships held the high seas, nor any land yet appears, the sky all round us and all round us the deep, a dusky shower drew up overhead carrying night and tempest, and the wave shuddered and gloomed. Straightway the winds upturn the main, and great seas rise; we are tossed asunder over the dreary gulf. Stormclouds enwrap the day, and rainy gloom blots out the sky; out of the clouds bursts fire fast upon fire. Driven from our course, we go wandering on the blind waves. Palinurus himself professes he cannot tell day from night on the sky, nor remember the way amid the waters. Three dubious days of blind darkness we wander on the deep, as many nights without a star. Not till the fourth day was land at last seen to rise, discovering distant hills and sending up wreaths of smoke. The sails drop; we swing back to the oars; without delay the sailors strongly toss up the foam, and sweep through the green water.
The shores of the Strophades first receive me thus won from the waves, Strophades the Greek name they bear, islands lying in the great Ionian sea, which boding Celaeno and the other Harpies inhabit since Phineus' house was shut on them, and they fled in terror from the board of old. Than these no deadlier portent nor any fiercer plague of divine wrath hath issued from the Stygian waters; winged things with maidens' countenance, bellies dropping filth, and clawed hands and faces ever wan with hunger. . . .

'When borne hitherward we enter the haven, lo! we see goodly herds of oxen scattered on the plains, and goats flocking untended over the grass. We attack them with the sword, and call the gods and Jove himself to share our spoil. Then we build seats on the winding shore and banquet on the dainty food. But suddenly the Harpies are upon us, swooping awfully from the mountains, and shaking their wings with loud clangour, plunder the feast, and defile everything with unclean touch, spreading a foul smell, and uttering dreadful cries. Again, in a deep recess under a caverned rock, shut in with waving shadows of woodland, we array the board and renew the altar fires; again, from their blind ambush in diverse quarters of the sky, the noisy crowd flutter with clawed feet around their prey, defiling the feast with their lips. Then I bid my comrades take up arms, and proclaim war on the accursed race. Even as I bade they do, range their swords in cover among the grass, and hide their shields out of sight. So when they swooped clamorously down along the winding shore, Misenus from his watch-tower on high signals on the hollow brass; my comrades rush in and essay the strange battle, to set the stain of steel on the winged horrors of the sea. But they take no violence on their plumage, nor wounds on their bodies; and soaring into the firmament with rapid flight, leave their foul traces on the spoil they had half consumed. Celaeno alone, prophetess of ill, alights on a towering cliff, and thus breaks forth in deep accents:

'"War is it for your slaughtered oxen and steers cut down, O children of Laomedon, war is it you would declare, and drive the guiltless Harpies from their ancestral kingdom? Take then to heart and fix fast these words of mine; which the Lord omnipotent foretold to Phoebus, Phoebus Apollo to me, I eldest born of the Furies reveal to you. Italy is your goal; wooing the winds you shall go to Italy, and enter her harbours unhindered. Yet shall you not wall round your ordained city, ere this murderous outrage on us compel you, in portentous hunger, to eat your tables with gnawing teeth."

'She spoke, and winged her way back to the shelter of the wood. But my comrades' blood froze chill with sudden affright; their spirits fell; and no longer with arms, nay with vows and prayers they bid me entreat favour, whether these be goddesses, or winged things ill-ominous and foul. And lord Anchises from the beach calls with outspread hands on the mighty gods, ordering fit sacrifices: "Gods, avert their menaces! Gods, turn this woe away, and graciously save the righteous!" Then he bids pluck the cable from the shore and shake loose the sheets. Southern winds stretch the sails; we scud over the foam-flecked waters, whither wind and pilot called our course. Now wooded Zacynthos appears amid the waves, and Dulichium and Same and Neritos' sheer rocks. We fly past the cliffs of Ithaca, Laërtes' realm, and curse the land, fostress of cruel Ulysses.
Soon too Mount Leucata's cloudy peaks are sighted, and Apollo dreaded of sailors. Hither we steer wearily, and stand in to the little town. The anchor is cast from the prow; the sterns are grounded on the beach.

'So at last having attained to land beyond our hopes, we purify ourselves in Jove's worship, and kindle altars of offering, and make the Actian shore gay with the games of Ilium. My comrades strip, and, slippery with oil, exercise their ancestral contests; glad to have got past so many Argive towns, and held on their flight through the encircling foe. Meanwhile the sun rounds the great circle of the year, and icy winter ruffles the waters with Northern gales. I fix against the doorway a hollow shield of brass, that tall Abas had borne, and mark the story with a verse: These arms Aeneas from the conquering Greeks. Then I bid leave the harbour and sit down at the thwarts; emulously my comrades strike the water, and sweep through the seas. Soon we see the cloud-capped Phaeacian towers sink away, skirt the shores of Epirus, and enter the Chaonian haven and approach high Buthrotum town.

'Here the rumour of a story beyond belief comes on our ears; Helenus son of Priam is reigning over Greek towns, master of the bride and sceptre of Pyrrhus the Aeacid; and Andromache hath again fallen to a husband of her people. I stood amazed; and my heart kindled with marvellous desire to accost him and learn of so strange a fortune. I advance from the harbour, leaving the fleet ashore; just when haply Andromache, in a grove before the town, by the waters of a feigned Simoïs, was pouring libation to the dust, and calling Hector's ghost to a tomb with his name, on an empty turfed green with two altars that she had consecrated, a wellspring of tears. When she caught sight of me coming, and saw distractedly the encircling arms of Troy, terror-stricken at the vision marvellously shewn, her gaze fixed, and the heat left her frame. She swoons away, and hardly at last speaks after long interval: "Comest thou then a real face, a real messenger to me, goddess-born? livest thou? or if sweet light is fled, ah, where is Hector?" She spoke, and bursting into tears filled all the place with her crying. Just a few words I force up, and deeply moved gasp out in broken accents: "I live indeed, I live on through all extremities; doubt not, for real are the forms thou seest . . . Alas! after such an husband, what fate receives thy fall? or what worthier fortune revisits thee? Dost thou, Hector's Andromache, keep bonds of marriage with Pyrrhus?" She cast down her countenance, and spoke with lowered voice:

'"O single in happy eminence that maiden daughter of Priam, sentenced to die under high Troy town at an enemy's grave, who never bore the shame of the lot, nor came a captive to her victorious master's bed! We, sailing over alien seas from our burning land, have endured the haughty youthful pride of Achilles' seed, and borne children in slavery: he thereafter, wooing Leda's Hermione and a Lacedaemonian marriage, passed me over to Helenus' keeping, a bondwoman to a bondman. But him Orestes, aflame with passionate desire for his stolen bride, and driven by the furies of crime, catches unguarded and murders at his ancestral altars. At Neoptolemus' death a share of his realm fell to Helenus' hands, who named the plains Chaonian, and called all the land Chaonia after Chaon of Troy, and built withal a Pergama and this Ilian citadel on the hills. But to thee how did winds, how fates give passage?
or whose divinity landed thee all unwitting on our coasts? what of the boy Ascanius? lives he yet, and draws breath, thy darling, whom Troy's . . . Yet hath the child affection for his lost mother? is he roused to the valour of old and the spirit of manhood by his father Aeneas, by his uncle Hector?"

'Such words she poured forth weeping, and prolonged the vain wail; when the hero Helenus son of Priam approaches from the town with a great company, knows us for his kin, and leads us joyfully to his gates, shedding a many tears at every word. I advance and recognise a little Troy, and a copy of the great Pergama, and a dry brook with the name of Xanthus, and clasp a Scaean gateway. Therewithal my Teucrians make holiday in the friendly town. The king entertained them in his spacious colonnades; in the central hall they poured goblets of wine in libation, and held the cups while the feast was served on gold.

'And now a day and another day hath sped; the breezes woo our sails, and the canvas blows out to the swelling south. With these words I accost the prophet, and thus make request:

'"Son of Troy, interpreter of the gods, whose sense is open to Phoebus' influences, his tripods and laurels, to stars and tongues of birds and auguries of prosperous flight, tell me now,—for the voice of revelation was all favourable to my course, and all divine influence counselled me to seek Italy and explore remote lands; only Celaeno the Harpy prophesies of strange portents, a horror to tell, and cries out of wrath and bale and foul hunger,—what perils are the first to shun? or in what guidance may I overcome these sore labours?"

'Hereat Helenus, first suing for divine favour with fit sacrifice of steers, and unbinding from his head the chaplets of consecration, leads me in his hand to thy courts, O Phoebus, thrilled with the fulness of the deity, and then utters these prophetic words from his augural lips:

'"Goddess-born: since there is clear assurance that under high omens thou dost voyage through the deep; so the king of the gods allots destiny and unfolds change; this is the circle of ordinance; a few things out of many I will unfold to thee in speech, that so thou mayest more safely traverse the seas of thy sojourn, and find rest in the Ausonian haven; for Helenus is forbidden by the destinies to know, and by Juno daughter of Saturn to utter more: first of all, the Italy thou deemest now nigh, and close at hand, unwitting! the harbours thou wouldst enter, far are they sundered by a long and trackless track through length of lands. First must the Trinacrian wave clog thine oar, and thy ships traverse the salt Ausonian plain, by the infernal pools and Aeaean Circe's isle, ere thou mayest build thy city in safety on a peaceful land. I will tell thee the token, and do thou keep it close in thine heart. When in thy perplexity, beside the wave of a sequestered river, a great sow shall be discovered lying under the oaks on the brink, with her newborn litter of thirty, couched white on the ground, her white brood about her teats; that shall be the place of the city, that the appointed rest from thy toils. Neither shrink thou at the gnawn tables that await thee; the fates will find a way, and Apollo aid thy call. These lands moreover, on this nearest border of the Italian shore that our own sea's tide washes, flee thou: evil Greeks dwell in all their towns.
Here the Locrians of Narycos have set their city, and here Lyctian Idomeneus beset the Sallentine plains with soldiery; here is the town of the Meliboean captain, Philoctetes' little Petelia fenced by her wall. Nay, when thy fleets have crossed overseas and lie at anchor, when now thou rearest altars and payest vows on the beach, veil thine hair with a purple garment for covering, that no hostile face at thy divine worship may meet thee amid the holy fires and make void the omens. This fashion of sacrifice keep thou, thyself and thy comrades, and let thy children abide in this pure observance. But when at thy departure the wind hath borne thee to the Sicilian coast, and the barred straits of Pelorus open out, steer for the left-hand country and the long circuit of the seas on the left hand; shun the shore and water on thy right. These lands, they say, of old broke asunder, torn and upheaved by vast force, when either country was one and undivided; the ocean burst in between, cutting off with its waves the Hesperian from the Sicilian coast, and with narrow tide washes tilth and town along the severance of shore. On the right Scylla keeps guard, on the left unassuaged Charybdis, who thrice swallows the vast flood sheer down her swirling gulf, and ever again hurls it upward, lashing the sky with water. But Scylla lies prisoned in her cavern's blind recesses, thrusting forth her mouth and drawing ships upon the rocks. In front her face is human, and her breast fair as a maiden's to the waist down; behind she is a sea-dragon of monstrous frame, with dolphins' tails joined on her wolf-girt belly. Better to track the goal of Trinacrian Pachynus, lingering and wheeling round through long spaces, than once catch sight of misshapen Scylla deep in her dreary cavern, and of the rocks that ring to her sea-coloured hounds. Moreover, if Helenus hath aught of foresight or his prophecy of assurance, if Apollo fills his spirit with the truth, this one thing, goddess-born, one thing for all will I foretell thee, and again and again repeat my counsel: to great Juno's deity be thy first prayer and worship; to Juno utter thy willing vows, and overcome thy mighty mistress with gifts and supplications; so at last thou shalt leave Trinacria behind, and be sped in triumph to the Italian borders. When borne hither thou drawest nigh the Cymaean city, the haunted lakes and rustling woods of Avernus, thou shalt behold the raving prophetess who deep in the rock chants of fate, and marks down her words on leaves. What verses she writes down on them, the maiden sorts into order and shuts behind her in the cave; they stay in their places unstirred and quit not their rank. But when at the turn of the hinge the light wind from the doorway stirs them, and disarranges the delicate foliage, never after does she trouble to capture them as they flutter about the hollow rock, nor restore their places or join the verses; men depart without counsel, and hate the Sibyl's dwelling. Here let no waste in delay be of such account to thee (though thy company chide, and the passage call thy sails strongly to the deep, and thou mayest fill out their folds to thy desire) that thou do not approach the prophetess, and plead with prayers that she herself utter her oracles and deign to loose the accents from her lips. The nations of Italy and the wars to come, and the fashion whereby every toil may be avoided or endured, she shall unfold to thee, and grant her worshipper prosperous passage.
Thus far is our voice allowed to counsel thee: go thy way, and exalt Troy to heaven by thy deeds."

'This the seer uttered with friendly lips; then orders gifts to be carried to my ships, of heavy gold and sawn ivory, and loads the hulls with massy silver and cauldrons of Dodona, a mail coat triple-woven with hooks of gold, and a helmet splendid with spike and tressed plumes, the armour of Neoptolemus. My father too hath his gifts. Horses besides he brings, and grooms . . . fills up the tale of our oarsmen, and equips my crews with arms.

'Meanwhile Anchises bade the fleet set their sails, that the fair wind might meet no delay. Him Phoebus' interpreter accosts with high courtesy: "Anchises, honoured with the splendour of Venus' espousal, the gods' charge, twice rescued from the fallen towers of Troy, lo! the land of Ausonia is before thee: sail thou and seize it. And yet needs must thou float past it on the sea; far away lies the quarter of Ausonia that is revealed of Apollo. Go," he continues, "happy in thy son's affection: why do I run on further, and delay the rising winds in talk?" Andromache too, sad at this last parting, brings figured raiment with woof of gold, and a Phrygian scarf for Ascanius, and wearies not in courtesy, loading him with gifts from the loom. "Take these too," so says she, "my child, to be memorials to thee of my hands, and testify long hence the love of Andromache wife of Hector. Take these last gifts of thy kinsfolk, O sole surviving likeness to me of my own Astyanax! Such was he, in eyes and hands and features; and now his equal age were growing into manhood like thine."

'To them as I departed I spoke with starting tears: "Live happily, as they do whose fortunes are perfected! We are summoned ever from fate to fate. For you there is rest in store, and no ocean floor to furrow, no ever-retreating Ausonian fields to pursue. You see a pictured Xanthus, and a Troy your own hands have built; with better omens, I pray, and to be less open to the Greeks. If ever I enter Tiber and Tiber's bordering fields, and see a city granted to my nation, then of these kindred towns and allied peoples in Epirus and Hesperia, which have the same Dardanus for founder, and whose story is one, of both will our hearts make a single Troy. Let that charge await our posterity."

'We put out to sea, keeping the Ceraunian mountains close at hand, whence is the shortest passage and seaway to Italy. The sun sets meanwhile, and the dusky hills grow dim. We choose a place, and fling ourselves on the lap of earth at the water's edge, and, allotting the oars, spread ourselves on the dry beach for refreshment: the dew of slumber falls on our weary limbs. Not yet had Night driven of the Hours climbed her mid arch; Palinurus rises lightly from his couch, explores all the winds, and listens to catch a breeze; he marks the constellations gliding together through the silent sky, Arcturus, the rainy Hyades and the twin Oxen, and scans Orion in his armour of gold. When he sees the clear sky quite unbroken, he gives from the stern his shrill signal; we disencamp and explore the way, and spread the wings of our sails. And now reddening Dawn had chased away the stars, when we descry afar dim hills and the low line of Italy. Achates first raises the cry of Italy; and with joyous shouts my comrades salute Italy. Then lord Anchises enwreathed a great bowl and filled it up with wine; and called on the gods, standing high astern . . . "Gods sovereign over sea and land and weather!
bring wind to ease our way, and breathe favourably." The breezes freshen at his prayer, and now the harbour opens out nearer at hand, and a temple appears on the Fort of Minerva. My comrades furl the sails and swing the prows to shore. The harbour is scooped into an arch by the Eastern flood; reefs run out and foam with the salt spray; itself it lies concealed; turreted walls of rock let down their arms on either hand, and the temple retreats from the beach. Here, an inaugural sight, four horses of snowy whiteness are grazing abroad on the grassy plain. And lord Anchises: "War dost thou carry, land of our sojourn; horses are armed in war, and menace of war is in this herd. But yet these same beasts are wont in time to enter harness, and carry yoke and bit in concord; there is hope of peace too," says he. Then we pray to the holy deity, Pallas of the clangorous arms, the first to welcome our cheers. And before the altars we veil our heads in Phrygian garments, and duly, after the counsel Helenus had urged deepest on us, pay the bidden burnt-sacrifice to Juno of Argos.

'Without delay, once our vows are fully paid, we round to the arms of our sailyards and leave the dwellings and menacing fields of the Grecian people. Next is descried the bay of Tarentum, town, if rumour is true, of Hercules. Over against it the goddess of Lacinium rears her head, with the towers of Caulon, and Scylaceum wrecker of ships. Then Trinacrian Aetna is descried in the distance rising from the waves, and we hear from afar a great roaring of the sea on beaten rocks, and broken noises by the shore: the channels boil up, and the surge churns with sand. And lord Anchises: "Of a surety this is that Charybdis; of these cliffs, these awful rocks did Helenus prophesy. Out, O comrades, and rise together to the oars." Even as bidden they do; and first Palinurus swung the gurgling prow leftward through the water; to the left all our squadron bent with oar and wind. We are lifted skyward on the crescent wave, and again sunk deep into the nether world as the water is sucked away. Thrice amid their rocky caverns the cliffs uttered a cry; thrice we see the foam flung out, and the stars through a dripping veil. Meanwhile the wind falls with sundown; and weary and ignorant of the way we glide on to the Cyclopes' coast.

'There lies a harbour large and unstirred by the winds' entrance; but nigh it Aetna thunders awfully in wrack, and ever and again hurls a black cloud into the sky, smoking with boiling pitch and embers white hot, and heaves balls of flame flickering up to the stars: ever and again vomits out on high crags from the torn entrails of the mountain, tosses up masses of molten rock with a groan, and boils forth from the bottom. Rumour is that this mass weighs down the body of Enceladus, half-consumed by the thunderbolt, and mighty Aetna laid over him suspires the flame that bursts from her furnaces; and so often as he changes his weary side, all Trinacria shudders and moans, veiling the sky in smoke. That night we spend in cover of the forest among portentous horrors, and see not from what source the noise comes. For neither did the stars show their fires, nor was the vault of constellated sky clear; but vapours blotted heaven, and the moon was held in a storm-cloud through dead of night.
'And now the morrow was rising in the early east, and the dewy darkness rolled away from the sky by Dawn, when sudden out of the forest advances a human shape strange and unknown, worn with uttermost hunger and pitiably attired, and stretches entreating hands towards the shore. We look back. Filthy and wretched, with shaggy beard and a coat pinned together with thorns, he was yet a Greek, and had been sent of old to Troy in his father's arms. And he, when he saw afar the Dardanian habits and armour of Troy, hung back a little in terror at the sight, and stayed his steps; then ran headlong to the shore with weeping and prayers: "By the heavens I beseech you, by the heavenly powers and this luminous sky that gives us breath, take me up, O Trojans, carry me away to any land soever, and it will be enough. I know I am one out of the Grecian fleets, I confess I warred against the household gods of Ilium; for that, if our wrong and guilt is so great, throw me piecemeal on the flood or plunge me in the waste sea. If I do perish, gladly will I perish at human hands." He ended; and clung clasping our knees and grovelling at them. We encourage him to tell who he is and of what blood born, and reveal how Fortune pursues him since then. Lord Anchises after little delay gives him his hand, and strengthens his courage by visible pledge. At last, laying aside his terror, he speaks thus:

'"I am from an Ithacan home, Achemenides by name, set out for Troy in luckless Ulysses' company; poor was my father Adamastus, and would God fortune had stayed thus! Here my comrades abandoned me in the Cyclops' vast cave, mindless of me while they hurry away from the barbarous gates. It is a house of gore and blood-stained feasts, dim and huge within. Himself he is great of stature and knocks at the lofty sky (gods, take away a curse like this from earth!) to none gracious in aspect or courteous of speech. He feeds on the flesh and dark blood of wretched men. I myself saw, when he caught the bodies of two of us with his great hand, and lying back in the middle of the cave crushed them on the rock, and the courts splashed and swam with gore; I saw when he champed the flesh adrip with dark clots of blood, and the warm limbs quivered under his teeth. Yet not unavenged. Ulysses brooked not this, nor even in such straits did the Ithacan forget himself. For so soon as he, gorged with his feast and buried in wine, lay with bent neck sprawling huge over the cave, in his sleep vomiting gore and gobbets mixed with wine and blood, we, praying to the great gods and with parts allotted, pour at once all round him, and pierce with a sharp weapon the huge eye that lay sunk single under his savage brow, in fashion of an Argolic shield or the lamp of the moon; and at last we exultingly avenge the ghosts of our comrades. But fly, O wretched men, fly and pluck the cable from the beach. . . . For even in the shape and stature of Polyphemus, when he shuts his fleeced flocks and drains their udders in the cave's covert, an hundred other horrible Cyclopes dwell all about this shore and stray on the mountain heights. Thrice now does the horned moon fill out her light, while I linger in life among desolate lairs and haunts of wild beasts in the woodland, and from a rock survey the giant Cyclopes and shudder at their cries and echoing feet. The boughs yield a miserable sustenance, berries and stony sloes, and plants torn up by the root feed me. Sweeping all the view, I at last espied this fleet standing in to shore.
On it, whatsoever it were, I cast myself; it is enough to have escaped the accursed tribe. Do you rather, by any death you will, destroy this life of mine."

'Scarcely had he spoken thus, when on the mountain top we see shepherding his flocks a vast moving mass, Polyphemus himself seeking the shores he knew, a horror ominous, shapeless, huge, bereft of sight. A pine lopped by his hand guides and steadies his footsteps. His fleeced sheep attend him, this his single delight and solace in ill. . . . After he hath touched the deep flood and come to the sea, he washes in it the blood that oozes from his eye-socket, grinding his teeth with groans; and now he strides through the sea up to his middle, nor yet does the wave wet his towering sides. We hurry far away in precipitate flight, with the suppliant who had so well merited rescue; and silently cut the cable, and bending forward sweep the sea with emulous oars. He heard, and turned his steps towards the echoing sound. But when he may in no wise lay hands on us, nor can fathom the Ionian waves in pursuit, he raises a vast cry, at which the sea and all his waves shuddered, and the deep land of Italy was startled, and Aetna's vaulted caverns moaned. But the tribe of the Cyclopes, roused from the high wooded hills, run to the harbour and fill the shore. We descry the Aetnean brotherhood standing impotent with scowling eye, their stately heads up to heaven, a dreadful consistory; even as on a mountain summit stand oaks high in air or coned cypresses, a high forest of Jove or covert of Diana. Sharp fear urges us to shake out the sheets in reckless haste, and spread our sails to the favouring wind. Yet Helenus' commands counsel that our course keep not the way between Scylla and Charybdis, the very edge of death on either hand. We are resolved to turn our canvas back. And lo! from the narrow fastness of Pelorus the North wind comes down and reaches us. I sail past Pantagias' mouth with its living stone, the Megarian bay, and low-lying Thapsus. Such names did Achemenides, of luckless Ulysses' company, point out as he retraced his wanderings along the returning shores.

'Stretched in front of a bay of Sicily lies an islet over against wavebeat Plemyrium; they of old called it Ortygia. Hither Alpheus the river of Elis, so rumour runs, hath cloven a secret passage beneath the sea, and now through thy well-head, Arethusa, mingles with the Sicilian waves. We adore as bidden the great deities of the ground; and thence I cross the fertile soil of Helorus in the marsh. Next we graze the high reefs and jutting rocks of Pachynus; and far off appears Camarina, forbidden for ever by oracles to move, and the Geloan plains, and vast Gela named after its river. Then Acragas on the steep, once the breeder of noble horses, displays its massive walls in the distance; and with granted breeze I leave thee behind, palm-girt Selinus, and thread the difficult shoals and blind reefs of Lilybaeum. Thereon Drepanum receives me in its haven and joyless border. Here, so many tempestuous seas outgone, alas! my father, the solace of every care and chance, Anchises is lost to me. Here thou, dear lord, abandonest me in weariness, alas! rescued in vain from peril and doom. Not Helenus the prophet, though he counselled of many a terror, not boding Celaeno foretold me of this grief. This was the last agony, this the goal of the long ways; thence it was I had departed when God landed me on your coasts.'
Thus lord Aeneas with all attent retold alone the divine doom and the history of his goings. At last he was hushed, and here in silence made an end.

BOOK FOURTH

THE LOVE OF DIDO, AND HER END

But the Queen, long ere now pierced with sore distress, feeds the wound with her life-blood, and catches the fire unseen. Again and again his own valiance and his line's renown flood back upon her spirit; look and accent cling fast in her bosom, and the pain allows not rest or calm to her limbs. The morrow's dawn bore the torch of Phoebus across the earth, and had rolled away the dewy darkness from the sky, when, scarce herself, she thus opens her confidence to her sister:

'Anna, my sister, such dreams of terror thrill me through! What guest unknown is this who hath entered our dwelling? How high his mien! how brave in heart as in arms! I believe it well, with no vain assurance, his blood is divine. Fear proves the vulgar spirit. Alas, by what destinies is he driven! what wars outgone he chronicled! Were my mind not planted, fixed and immoveable, to ally myself to none in wedlock since my love of old was false to me in the treachery of death; were I not sick to the heart of bridal torch and chamber, to this temptation alone I might haply yield. Anna, I will confess it; since Sychaeus mine husband met his piteous doom, and our household was shattered by a brother's murder, he only hath touched mine heart and stirred the balance of my soul. I know the prints of the ancient flame. But rather, I pray, may earth first yawn deep for me, or the Lord omnipotent hurl me with his thunderbolt into gloom, the pallid gloom and profound night of Erebus, ere I soil thee, mine honour, or unloose thy laws. He took my love away who made me one with him long ago; he shall keep it with him, and guard it in the tomb.' She spoke, and welling tears filled the bosom of her gown.

Anna replies: 'O dearer than the daylight to thy sister, wilt thou waste, sad and alone, all thy length of youth, and know not the sweetness of motherhood, nor love's bounty? Deemest thou the ashes care for that, or the ghost within the tomb? Be it so: in days gone by no wooers bent thy sorrow, not in Libya, not ere then in Tyre; Iarbas was slighted, and other princes nurtured by the triumphal land of Africa; wilt thou contend so with a love to thy liking? nor does it cross thy mind whose are these fields about thy dwelling? On this side are the Gaetulian towns, a race unconquerable in war; the reinless Numidian riders and the grim Syrtis hem thee in; on this lies a thirsty tract of desert, swept by the raiders of Barca. Why speak of the war gathering from Tyre, and thy brother's menaces? . . . With gods' auspices to my thinking, and with Juno's favour, hath the Ilian fleet held on hither before the gale. What a city wilt thou discern here, O sister! what a realm will rise on such a union! the arms of Troy ranged with ours, what glory will exalt the Punic state! Do thou only, asking divine favour with peace-offerings, be bounteous in welcome and draw out reasons for delay, while the storm rages at sea and Orion is wet, and his ships are shattered and the sky unvoyageable.'

With these words she made the fire of love flame up in her spirit, put hope in her wavering soul, and let honour slip away. First they visit the shrines, and desire grace from altar to altar; they sacrifice sheep fitly chosen to Ceres the Lawgiver, to Phoebus and lord Lyaeus, to Juno before all, guardian of the marriage bond.
Dido herself, excellent in beauty, holds the cup in her hand, and pours libation between the horns of a milk-white cow, or moves in state to the rich altars before the gods' presences, day by day renewing her gifts, and gazing athirst into the breasts of cattle laid open to take counsel from the throbbing entrails. Ah, witless souls of soothsayers! how may vows or shrines help her madness? all the while the subtle flame consumes her inly, and deep in her breast the wound is silent and alive. Stung to misery, Dido wanders in frenzy all down the city, even as an arrow-stricken deer, whom, far and heedless amid the Cretan woodland, a shepherd archer hath pierced and left the flying steel in her unaware; she ranges in flight the Dictaean forest lawns; fast in her side clings the deadly reed. Now she leads Aeneas with her through the town, and displays her Sidonian treasure and ordered city; she essays to speak, and breaks off half-way in utterance. Now, as day wanes, she seeks the repeated banquet, and again madly pleads to hear the agonies of Ilium, and again hangs on the teller's lips. Thereafter, when all are gone their ways, and the dim moon in turn quenches her light, and the setting stars counsel to sleep, alone in the empty house she mourns, and flings herself on the couch he left: distant she hears and sees him in the distance; or enthralled by the look he has of his father, she holds Ascanius on her lap, if so she may steal the love she may not utter. No more do the unfinished towers rise, no more do the people exercise in arms, nor work for safety in war on harbour or bastion; the works hang broken off, vast looming walls and engines towering into the sky.

So soon as she perceives her thus fast in the toils, and madly careless of her name, Jove's beloved wife, daughter of Saturn, accosts Venus thus: 'Noble indeed is the fame and splendid the spoils you win, thou and that boy of thine, and mighty the renown of deity, if two gods have vanquished one woman by treachery. Nor am I so blind to thy terror of our town, thine old suspicion of the high house of Carthage. But what shall be the end? or why all this contest now? Nay, rather let us work an enduring peace and a bridal compact. Thou hast what all thy soul desired; Dido is on fire with love, and hath caught the madness through and through. Then rule we this people jointly in equal lordship; allow her to be a Phrygian husband's slave, and to lay her Tyrians for dowry in thine hand.'

To her—for she knew the dissembled purpose of her words, to turn the Teucrian kingdom away to the coasts of Libya—Venus thus began in answer: 'Who so mad as to reject these terms, or choose rather to try the fortune of war with thee? if only when done, as thou sayest, fortune follow. But I move in uncertainty of Jove's ordinance, whether he will that Tyrians and wanderers from Troy be one city, or approve the mingling of peoples and the treaty of union. Thou art his wife, and thy prayers may essay his soul. Go on; I will follow.'

Then Queen Juno thus rejoined: 'That task shall be mine. Now, by what means the present need may be fulfilled, attend and I will explain in brief. Aeneas and Dido (alas and woe for her!) are to go hunting together in the woodland when to-morrow's rising sun goes forth and his rays unveil the world. On them, while the beaters run up and down, and the lawns are girt with toils, will I pour down a blackening rain-cloud mingled with hail, and startle all the sky in thunder.
Their company will scatter for shelter in the dim darkness; Dido and the Trojan captain shall take refuge in the same cavern. I will be there, and if thy goodwill is assured me, I will unite them in wedlock, and make her wholly his; here shall Hymen be present.' The Cytherean gave ready assent to her request, and laughed at the wily invention.

Meanwhile Dawn rises forth of ocean. A chosen company issue from the gates while the morning star is high; they pour forth with meshed nets, toils, broad-headed hunting spears, Massylian horsemen and sinewy sleuth-hounds. At her doorway the chief of Carthage await their queen, who yet lingers in her chamber, and her horse stands splendid in gold and purple with clattering feet and jaws champing on the foamy bit. At last she comes forth amid a great thronging train, girt in a Sidonian mantle, broidered with needlework; her quiver is of gold, her tresses knotted into gold, a golden buckle clasps up her crimson gown. Therewithal the Phrygian train advances with joyous Iülus. Himself first and foremost of all, Aeneas joins her company and unites his party to hers: even as Apollo, when he leaves wintry Lycia and the streams of Xanthus to visit his mother's Delos, and renews the dance, while Cretans and Dryopes and painted Agathyrsians mingle clamorous about his altars: himself he treads the Cynthian ridges, and plaits his flowing hair with soft heavy sprays and entwines it with gold; the arrows rattle on his shoulder: as lightly as he went Aeneas; such glow and beauty is on his princely face. When they are come to the mountain heights and pathless coverts, lo, wild goats driven from the cliff-tops run down the ridge; in another quarter stags speed over the open plain and gather their flying column in a cloud of dust as they leave the hills. But the boy Ascanius is in the valleys, exultant on his fiery horse, and gallops past one and another, praying that among the unwarlike herds a foaming boar may issue or a tawny lion descend the hill.

Meanwhile the sky begins to thicken and roar aloud. A rain-cloud comes down mingled with hail; the Tyrian train and the men of Troy, and the Dardanian boy of Venus' son scatter in fear, and seek shelter far over the fields. Streams pour from the hills. Dido and the Trojan captain take refuge in the same cavern. Primeval Earth and Juno the bridesmaid give the sign; fires flash out high in air, witnessing the union, and Nymphs cry aloud on the mountain-top. That day opened the gate of death and the springs of ill. For now Dido recks not of eye or tongue, nor sets her heart on love in secret: she calls it marriage, and with this name veils her fall.

Straightway Rumour runs through the great cities of Libya,—Rumour, than whom none other is more swift to mischief; she thrives on restlessness and gains strength by going: at first small and timorous; soon she lifts herself on high and paces the ground with head hidden among the clouds. Her, one saith, Mother Earth, when stung by wrath against the gods, bore last sister to Coeus and Enceladus, fleet-footed and swift of wing, ominous, awful, vast; for every feather on her body is a waking eye beneath, wonderful to tell, and a tongue, and as many loud lips and straining ears.
By night she flits between sky and land, shrilling through the dusk, and droops not her lids in sweet slumber; in daylight she sits on guard upon tall towers or the ridge of the house-roof, and makes great cities afraid; obstinate in perverseness and forgery no less than messenger of truth. She then exultingly filled the countries with manifold talk, and blazoned alike what was done and undone: one Aeneas is come, born of Trojan blood; on him beautiful Dido thinks no shame to fling herself; now they hold their winter, long-drawn through mutual caresses, regardless of their realms and enthralled by passionate dishonour. This the pestilent goddess spreads abroad in the mouths of men, and bends her course right on to King Iarbas, and with her words fires his spirit and swells his wrath.

He, the seed of Ammon by a ravished Garamantian Nymph, had built to Jove in his wide realms an hundred great temples, an hundred altars, and consecrated the wakeful fire that keeps watch by night before the gods perpetually, where the soil is fat with blood of beasts and the courts blossom with pied garlands. And he, distracted and on fire at the bitter tidings, before his altars, amid the divine presences, often, it is said, bowed in prayer to Jove with uplifted hands: 'Jupiter omnipotent, to whom from the broidered cushions of their banqueting halls the Maurusian people now pour Lenaean offering, lookest thou on this? or do we shudder vainly when our father hurls the thunderbolt, and do blind fires in the clouds and idle rumblings appal our soul? The woman who, wandering in our coasts, planted a small town on purchased ground, to whom we gave fields by the shore and laws of settlement, she hath spurned our alliance and taken Aeneas for lord of her realm. And now that Paris, with his effeminate crew, his chin and oozy hair swathed in the turban of Maeonia, takes and keeps her; since to thy temples we bear oblation, and hallow an empty name.'

In such words he pleaded, clasping the altars; the Lord omnipotent heard, and cast his eye on the royal city and the lovers forgetful of their fairer fame. Then he addresses this charge to Mercury: 'Up and away, O son! call the breezes and slide down them on thy wings: accost the Dardanian captain who now loiters in Tyrian Carthage and casts not a look on destined cities; carry down my words through the fleet air. Not such an one did his mother most beautiful vouch him to us, nor for this twice rescue him from Grecian arms; but he was to rule an Italy teeming with empire and loud with war, to transmit the line of Teucer's royal blood, and lay all the world beneath his law. If such glories kindle him in nowise, and he take no trouble for his own honour, does a father grudge his Ascanius the towers of Rome? with what device or in what hope loiters he among a hostile race, and casts not a glance on his Ausonian children and the fields of Lavinium? Let him set sail: this is the sum: thereof be thou our messenger.'

He ended: his son made ready to obey his high command. And first he laces to his feet the shoes of gold that bear him high winging over seas or land as fleet as the gale; then takes the rod wherewith he calls wan souls forth of Orcus, or sends them again to the sad depth of hell, gives sleep and takes it away and unseals dead eyes; in whose strength he courses the winds and swims across the tossing clouds.
And now in flight he descries the peak and steep sides of toiling Atlas, whose crest sustains the sky; Atlas, whose pine-clad head is girt alway with black clouds and beaten by wind and rain; snow is shed over his shoulders for covering; rivers tumble over his aged chin; and his rough beard is stiff with ice. Here the Cyllenian, poised evenly on his wings, made a first stay; hence he shot himself sheer to the water. Like a bird that flies low, skirting the sea about the craggy shores of its fishery, even thus the brood of Cyllene left his mother's father, and flew, cutting the winds between sky and land, along the sandy Libyan shore.

So soon as his winged feet reached the settlement, he espies Aeneas founding towers and ordering new dwellings; his sword twinkled with yellow jasper, and a cloak hung from his shoulders ablaze with Tyrian sea-purple, a gift that Dido had made costly and shot the warp with thin gold. Straightway he breaks in: 'Layest thou now the foundations of tall Carthage, and buildest up a fair city in dalliance? ah, forgetful of thine own kingdom and state! From bright Olympus I descend to thee at express command of heaven's sovereign, whose deity sways sky and earth; expressly he bids me carry this charge through the fleet air: with what device or in what hope dost thou loiter idly on Libyan lands? if such glories kindle thee in nowise, yet cast an eye on growing Ascanius, on Iülus thine hope and heir, to whom the kingdom of Italy and the Roman land are due.' As these words left his lips the Cyllenian, yet speaking, quitted mortal sight and vanished into thin air away out of his eyes.

But Aeneas in truth gazed in dumb amazement, his hair thrilled up, and the accents faltered on his tongue. He burns to flee away and leave the pleasant land, aghast at the high warning and divine ordinance. Alas, what shall he do? how venture to smooth the tale to the frenzied queen? what prologue shall he find? and this way and that he rapidly throws his mind, and turns it on all hands in swift change of thought. In his perplexity this seemed the better counsel; he calls Mnestheus and Sergestus, and brave Serestus, and bids them silently equip the fleet, gather their crews on shore, and order their armament, keeping the cause of the commotion hid; himself meanwhile, since Dido the gracious knows not nor looks for severance to so strong a love, will essay to approach her when she may be told most gently, and the way for it be fair. All at once gladly do as bidden, and obey his command.

But the Queen—who may delude a lover?—foreknew his devices, and at once caught the presaging stir. Safety's self was fear; to her likewise had evil Rumour borne the maddening news that they equip the fleet and prepare for passage. Helpless at heart, she reels aflame with rage throughout the city, even as the startled Thyiad in her frenzied triennial orgies, when the holy vessels move forth and the cry of Bacchus re-echoes, and Cithaeron calls her with nightlong din. Thus at last she opens out upon Aeneas:

'And thou didst hope, traitor, to mask the crime, and slip away in silence from my land? Our love holds thee not, nor the hand thou once gavest, nor the bitter death that is left for Dido's portion? Nay, under the wintry star thou labourest on thy fleet, and hastenest to launch into the deep amid northern gales; ah, cruel! Why, were thy quest not of alien fields and unknown dwellings, did thine ancient Troy remain, should Troy be sought in voyages over tossing seas?
Fliest thou from me? me who by these tears and thine own hand beseech thee, since naught else, alas! have I kept mine own—by our union and the marriage rites preparing; if I have done thee any grace, or aught of mine hath once been sweet in thy sight,—pity our sinking house, and if there yet be room for prayers, put off this purpose of thine. For thy sake Libyan tribes and Nomad kings are hostile; my Tyrians are estranged; for thy sake, thine, is mine honour perished, and the former fame, my one title to the skies. How leavest thou me to die, O my guest? since to this the name of husband is dwindled down. For what do I wait? till Pygmalion overthrow his sister's city, or Gaetulian Iarbas lead me to captivity? At least if before thy flight a child of thine had been clasped in my arms,—if a tiny Aeneas were playing in my hall, whose face might yet image thine,—I would not think myself ensnared and deserted utterly.'

She ended; he by counsel of Jove held his gaze unstirred, and kept his distress hard down in his heart. At last he briefly answers: 'Never, O Queen, will I deny that thy goodness hath gone high as thy words can swell the reckoning; nor will my memory of Elissa be ungracious while I remember myself, and breath sways this body. Little will I say in this. I never hoped to slip away in stealthy flight; fancy not that; nor did I ever hold out the marriage torch or enter thus into alliance. Did fate allow me to guide my life by mine own government, and calm my sorrows as I would, my first duty were to the Trojan city and the dear remnant of my kindred; the high house of Priam should abide, and my hand had set up Troy towers anew for a conquered people. But now for broad Italy hath Apollo of Grynos bidden me steer, for Italy the oracles of Lycia. Here is my desire; this is my native country. If thy Phoenician eyes are stayed on Carthage towers and thy Libyan city, what wrong is it, I pray, that we Trojans find our rest on Ausonian land? We too may seek a foreign realm unforbidden. In my sleep, often as the dank shades of night veil the earth, often as the stars lift their fires, the troubled phantom of my father Anchises comes in warning and dread; my boy Ascanius, how I wrong one so dear in cheating him of an Hesperian kingdom and destined fields. Now even the gods' interpreter, sent straight from Jove—I call both to witness—hath borne down his commands through the fleet air. Myself in broad daylight I saw the deity passing within the walls, and these ears drank his utterance. Cease to madden me and thyself alike with plaints. Not of my will do I follow Italy. . . .'

Long ere he ended she gazes on him askance, turning her eyes from side to side and perusing him with silent glances; then thus wrathfully speaks: 'No goddess was thy mother, nor Dardanus founder of thy line, traitor! but rough Caucasus bore thee on his iron crags, and Hyrcanian tigresses gave thee suck. For why do I conceal it? For what further outrage do I wait? Hath our weeping cost him a sigh, or a lowered glance? Hath he broken into tears, or had pity on his lover? Where, where shall I begin? Now neither doth Queen Juno nor our Saturnian lord regard us with righteous eyes. Nowhere is trust safe. Cast ashore and destitute I welcomed him, and madly gave him place and portion in my kingdom; I found him his lost fleet and drew his crews from death. Alas, the fire of madness speeds me on.
Now prophetic Apollo, now oracles of Lycia, now the very gods' interpreter sent straight from Jove through the air carries these rude commands! Truly that is work for the gods, that a care to vex their peace! I detain thee not, nor gainsay thy words: go, follow thine Italy down the wind; seek thy realm overseas. Yet midway my hope is, if righteous gods can do aught at all, thou wilt drain the cup of vengeance on the rocks, and re-echo calls on Dido's name. In murky fires I will follow far away, and when chill death hath severed body from soul, my ghost will haunt thee in every region. Wretch, thou shalt repay! I will hear; and the rumour of it shall reach me deep in the under world.' Even on these words she breaks off her speech unfinished, and, sick at heart, escapes out of the air and sweeps round and away out of sight, leaving him in fear and much hesitance, and with much on his mind to say. Her women catch her in their arms, and carry her swooning to her marble chamber and lay her on her bed.

But good Aeneas, though he would fain soothe and comfort her grief, and talk away her distress, with many a sigh, and melted in soul by his great love, yet fulfils the divine commands and returns to his fleet. Then indeed the Teucrians set to work, and haul down their tall ships all along the shore. The hulls are oiled and afloat; they carry from the woodland green boughs for oars and massy logs unhewn, in hot haste to go. . . . One might descry them shifting their quarters and pouring out of all the town: even as ants, mindful of winter, plunder a great heap of wheat and store it in their house; a black column advances on the plain as they carry home their spoil on a narrow track through the grass. Some shove and strain with their shoulders at big grains, some marshal the ranks and chastise delay; all the path is aswarm with work.

What then were thy thoughts, O Dido, as thou sawest it? What sighs didst thou utter, viewing from the fortress roof the broad beach aswarm, and seeing before thine eyes the whole sea stirred with their noisy din? Injurious Love, to what dost thou not compel mortal hearts! Again, she must needs break into tears, again essay entreaty, and bow her spirit down to love, not to leave aught untried and go to death in vain.

'Anna, thou seest the bustle that fills the shore. They have gathered round from every quarter; already their canvas woos the breezes, and the merry sailors have garlanded the sterns. This great pain, my sister, I shall have strength to bear, as I have had strength to foresee. Yet this one thing, Anna, for love and pity's sake—for of thee alone was the traitor fain, to thee even his secret thoughts were confided, alone thou knewest his moods and tender fits—go, my sister, and humbly accost the haughty stranger: I did not take the Grecian oath in Aulis to root out the race of Troy; I sent no fleet against her fortresses; neither have I disentombed his father Anchises' ashes and ghost, that he should refuse my words entrance to his stubborn ears. Whither does he run? let him grant this grace—alas, the last!—to his lover, and await fair winds and an easy passage. No more do I pray for the old delusive marriage, nor that he give up fair Latium and abandon a kingdom. A breathing-space I ask, to give my madness rest and room, till my very fortune teach my grief submission. This last favour I implore: sister, be pitiful; grant this to me, and I will restore it in full measure when I die.'
So she pleaded, and so her sister carries and recarries the piteous tale of weeping. But by no weeping is he stirred, inflexible to all the words he hears. Fate withstands, and lays divine bars on unmoved mortal ears. Even as when the eddying blasts of northern Alpine winds are emulous to uproot the secular strength of a mighty oak, it wails on, and the trunk quivers and the high foliage strews the ground; the tree clings fast on the rocks, and high as her top soars into heaven, so deep strike her roots to hell; even thus is the hero buffeted with changeful perpetual accents, and distress thrills his mighty breast, while his purpose stays unstirred, and tears fall in vain.

Then indeed, hapless and dismayed by doom, Dido prays for death, and is weary of gazing on the arch of heaven. The more to make her fulfil her purpose and quit the light, she saw, when she laid her gifts on the altars alight with incense, awful to tell, the holy streams blacken, and the wine turn as it poured into ghastly blood. Of this sight she spoke to none—no, not to her sister. Likewise there was within the house a marble temple of her ancient lord, kept of her in marvellous honour, and fastened with snowy fleeces and festal boughs. Forth of it she seemed to hear her husband's voice crying and calling when night was dim upon earth, and alone on the house-tops the screech-owl often made moan with funeral note and long-drawn sobbing cry. Therewithal many a warning of wizards of old terrifies her with appalling presage. In her sleep fierce Aeneas drives her wildly, and ever she seems being left by herself alone, ever going uncompanioned on a weary way, and seeking her Tyrians in a solitary land: even as frantic Pentheus sees the arrayed Furies and a double sun, and Thebes shows herself twofold to his eyes: or Agamemnonian Orestes, renowned in tragedy, when his mother pursues him armed with torches and dark serpents, and the Fatal Sisters crouch avenging in the doorway.

So when, overcome by her pangs, she caught the madness and resolved to die, she works out secretly the time and fashion, and accosts her sorrowing sister with mien hiding her design and hope calm on her brow. 'I have found a way, mine own—wish me joy, sisterlike—to restore him to me or release me of my love for him. Hard by the ocean limit and the set of sun is the extreme Aethiopian land, where ancient Atlas turns on his shoulders the starred burning axletree of heaven. Out of it hath been shown to me a priestess of Massylian race, warder of the temple of the Hesperides, even she who gave the dragon his food, and kept the holy boughs on the tree, sprinkling clammy honey and slumberous poppy-seed. She professes with her spells to relax the purposes of whom she will, but on others to bring passion and pain; to stay the river-waters and turn the stars backward: she calls up ghosts by night; thou shalt see earth moaning under foot and mountain-ashes descending from the hills. I take heaven, sweet, to witness, and thee, mine own darling sister, I do not willingly arm myself with the arts of magic. Do thou secretly raise a pyre in the inner court, and let them lay on it the arms that the accursed one left hanging in our chamber, and all the dress he wore, and the bridal bed where I fell. It is good to wipe out all the wretch's traces, and the priestess orders thus.' So speaks she, and is silent, while pallor overruns her face.
Yet Anna deems not her sister veils death behind these strange rites, and grasps not her wild purpose, nor fears aught deeper than at Sychaeus' death. So she makes ready as bidden. . . .

But the Queen, the pyre being built up of piled faggots and sawn ilex in the inmost of her dwelling, hangs the room with chaplets and garlands it with funeral boughs: on the pillow she lays the dress he wore, the sword he left, and an image of him, knowing what was to come. Altars are reared around, and the priestess, with hair undone, thrice peals from her lips the hundred gods of Erebus and Chaos, and the triform Hecate, the triple-faced maidenhood of Diana. Likewise she had sprinkled pretended waters of Avernus' spring, and rank herbs are sought mown by moonlight with brazen sickles, dark with milky venom, and sought is the talisman torn from a horse's forehead at birth ere the dam could snatch it. . . . Herself, the holy cake in her pure hands, hard by the altars, with one foot unshod and garments flowing loose, she invokes the gods ere she die, and the stars that know of doom; then prays to whatsoever deity looks in righteousness and remembrance on lovers ill allied.

Night fell; weary creatures took quiet slumber all over earth, and woodland and wild waters had sunk to rest; now the stars wheel midway on their gliding path, now all the country is silent, and beasts and gay birds that haunt liquid levels of lake or thorny rustic thicket lay couched asleep under the still night. But not so the distressed Phoenician, nor does she ever sink asleep or take the night upon eyes or breast; her pain redoubles, and her love swells to renewed madness, as she tosses on the strong tide of wrath. Even so she begins, and thus revolves with her heart alone:

'See, what do I? Shall I again make trial of mine old wooers that will scorn me? and stoop to sue for a Numidian marriage among those whom already over and over I have disdained for husbands? Then shall I follow the Ilian fleets and the uttermost bidding of the Teucrians? because it is good to think they were once raised up by my succour, or the grace of mine old kindness is fresh in their remembrance? And how should they let me, if I would? or take the odious woman on their haughty ships? art thou ignorant, ah me, even in ruin, and knowest not yet the forsworn race of Laomedon? And then? shall I accompany the triumphant sailors, a lonely fugitive? or plunge forth girt with all my Tyrian train? so hardly severed from Sidon city, shall I again drive them seaward, and bid them spread their sails to the tempest? Nay die thou, as thou deservest, and let the steel end thy pain. With thee it began; overborne by my tears, thou, O my sister, dost load me with this madness and agony, and layest me open to the enemy. I could not spend a wild life without stain, far from a bridal chamber, and free from touch of distress like this! O faith ill kept, that was plighted to Sychaeus' ashes!' Thus her heart broke in long lamentation.

Now Aeneas was fixed to go, and now, with all set duly in order, was taking hasty sleep on his high stern. To him as he slept the god appeared once again in the same fashion of countenance, and thus seemed to renew his warning, in all points like to Mercury, voice and hue and golden hair and limbs gracious in youth. 'Goddess-born, canst thou sleep on in such danger? and seest not the coming perils that hem thee in, madman! nor hearest the breezes blowing fair?
She, fixed on death, is revolving craft and crime grimly in her bosom, and swells the changing surge of wrath. Fliest thou not hence headlong, while headlong flight is yet possible? Even now wilt thou see ocean weltering with broken timbers, see the fierce glare of torches and the beach in a riot of flame, if dawn break on thee yet dallying in this land. Up ho! linger no more! Woman is ever a fickle and changing thing.' So spoke he, and melted in the black night.

Then indeed Aeneas, startled by the sudden phantom, leaps out of slumber and bestirs his crew. 'Haste and awake, O men, and sit down to the thwarts; shake out sail speedily. A god sent from high heaven, lo! again spurs us to speed our flight and cut the twisted cables. We follow thee, holy one of heaven, whoso thou art, and again joyfully obey thy command. O be favourable; give gracious aid and bring fair sky and weather.' He spoke, and snatching his sword like lightning from the sheath, strikes at the hawser with the drawn steel. The same zeal catches all at once; rushing and tearing they quit the shore; the sea is hidden under their fleets; strongly they toss up the foam and sweep the blue water.

And now Dawn broke, and, leaving the saffron bed of Tithonus, shed her radiance anew over the world; when the Queen saw from her watch-tower the first light whitening, and the fleet standing out under squared sail, and discerned shore and haven empty of all their oarsmen. Thrice and four times she struck her hand on her lovely breast and rent her yellow hair: 'God!' she cries, 'shall he go? shall an alien make mock of our realm? Will they not issue in armed pursuit from all the city, and some launch ships from the dockyards? Go; bring fire in haste, serve weapons, swing out the oars! What do I talk? or where am I? what mad change is on my purpose? Alas, Dido! now thou dost feel thy wickedness; that had graced thee once, when thou gavest away thy crown. Behold the faith and hand of him! who, they say, carries his household's ancestral gods about with him! who stooped his shoulders to a father outworn with age! Could I not have riven his body in sunder and strewn it on the waves? and slain with the sword his comrades and his dear Ascanius, and served him for the banquet at his father's table? But the chance of battle had been dubious. If it had! whom did I fear with my death upon me? I should have borne firebrands into his camp and filled his decks with flame, blotted out father and son and race together, and flung myself atop of all. Sun, whose fires lighten all the works of the world, and thou, Juno, mediatress and witness of these my distresses, and Hecate, cried on by night in crossways of cities, and you, fatal avenging sisters and gods of dying Elissa, hear me now; bend your just deity to my woes, and listen to our prayers. If it must needs be that the accursed one touch his haven and float up to land, if thus Jove's decrees demand, and this is the appointed term,—yet, distressed in war by an armed and gallant nation, driven homeless from his borders, rent from Iülus' embrace, let him sue for succour and see death on death untimely on his people; nor when he hath yielded him to the terms of a harsh peace, may he have joy of his kingdom or the pleasant light; but let him fall before his day and without burial on a waste of sand. This I pray; this and my blood with it I pour for the last utterance. And you, O Tyrians, hunt his seed with your hatred for all ages to come; send this guerdon to our ashes.
Let no kindness nor truce be between the nations. Arise out of our dust, O unnamed avenger, to pursue the Dardanian settlement with firebrand and steel. Now, then, whensoever strength shall be given, I invoke the enmity of shore to shore, wave to water, sword to sword; let their battles go down to their children's children.'

So speaks she as she kept turning her mind round about, seeking how soonest to break away from the hateful light. Thereon she speaks briefly to Barce, nurse of Sychaeus; for a heap of dusky ashes held her own, in her country of long ago: 'Sweet nurse, bring Anna my sister hither to me. Bid her haste and sprinkle river water over her body, and bring with her the beasts ordained for expiation: so let her come: and thou likewise veil thy brows with a pure chaplet. I would fulfil the rites of Stygian Jove that I have fitly ordered and begun, so to set the limit to my distresses and give over to the flames the funeral pyre of the Dardanian.' So speaks she; the old woman went eagerly with quickened pace.

But Dido, fluttered and fierce in her awful purpose, with bloodshot restless gaze, and spots on her quivering cheeks burning through the pallor of imminent death, bursts into the inner courts of the house, and mounts in madness the high funeral pyre, and unsheathes the sword of Dardania, a gift asked for no use like this. Then after her eyes fell on the Ilian raiment and the bed she knew, dallying a little with her purpose through her tears, she sank on the pillow and spoke the last words of all: 'Dress he wore, sweet while doom and deity allowed! receive my spirit now, and release me from my distresses. I have lived and fulfilled Fortune's allotted course; and now shall I go a queenly phantom under the earth. I have built a renowned city; I have seen my ramparts rise; by my brother's punishment I have avenged my husband of his enemy; happy, ah me! and over happy, had but the keels of Dardania never touched our shores!' She spoke; and burying her face in the pillow, 'Death it will be,' she cries, 'and unavenged; but death be it. Thus, thus is it good to pass into the dark. Let the pitiless Dardanian's gaze drink in this fire out at sea, and my death be the omen he carries on his way.'

She ceased; and even as she spoke her people see her sunk on the steel, and blood reeking on the sword and spattered on her hands. A cry rises in the high halls; Rumour riots down the quaking city. The house resounds with lamentation and sobbing and bitter crying of women; heaven echoes their loud wails; even as though all Carthage or ancient Tyre went down as the foe poured in, and the flames rolled furious over the roofs of house and temple. Swooning at the sound, her sister runs in a flutter of dismay, with torn face and smitten bosom, and darts through them all, and calls the dying woman by her name. 'Was it this, mine own? Was my summons a snare? Was it this thy pyre, ah me, this thine altar fires meant? How shall I begin my desolate moan? Didst thou disdain a sister's company in death? Thou shouldst have called me to share thy doom; in the self-same hour, the self-same pang of steel had been our portion. Did these very hands build it, did my voice call on our father's gods, that with thee lying thus I should be away as one without pity? Thou hast destroyed thyself and me together, O my sister, and the Sidonian lords and people, and this thy city. Give her wounds water: I will bathe them and catch on my lips the last breath that haply yet lingers.'
So speaking she had climbed the high steps, and, wailing, clasped and caressed her half-lifeless sister in her bosom, and stanched the dark streams of blood with her gown. She, essaying to lift her heavy eyes, swoons back; the deep-driven wound gurgles in her breast. Thrice she rose, and strained to lift herself on her elbow; thrice she rolled back on the pillow, and with wandering eyes sought the light of high heaven, and moaned as she found it.

Then Juno omnipotent, pitying her long pain and difficult decease, sent Iris down from heaven to unloose the struggling life from the body where it clung. For since neither by fate did she perish, nor as one who had earned her death, but woefully before her day, and fired by sudden madness, not yet had Proserpine taken her lock from the golden head, nor sentenced her to the Stygian under world. So Iris on dewy saffron pinions flits down through the sky athwart the sun in a trail of a thousand changing dyes, and stopping over her head: 'This hair, sacred to Dis, I take as bidden, and release thee from that body of thine.' So speaks she, and cuts it with her hand. And therewith all the warmth ebbed forth from her, and the life passed away upon the winds.

BOOK FIFTH
THE GAMES OF THE FLEET

Meanwhile Aeneas and his fleet in unwavering track now held mid passage, and cleft the waves that blackened under the North, looking back on the city that even now gleams with hapless Elissa's funeral flame. Why the broad blaze is lit lies unknown; but the bitter pain of a great love trampled, and the knowledge of what woman can do in madness, draw the Teucrians' hearts to gloomy guesses. When their ships held the deep, nor any land farther appears, the seas all round, and all round the sky, a dusky shower drew up overhead, carrying night and storm, and the wave shuddered and gloomed. Palinurus, master of the fleet, cries from the high stern: 'Alas, why have these heavy storm-clouds girt the sky? lord Neptune, what wilt thou?' Then he bids clear the rigging and bend strongly to the oars, and brings the sails across the wind, saying thus: 'Noble Aeneas, not did Jupiter give word and warrant would I hope to reach Italy under such a sky. The shifting winds roar athwart our course, and blow stronger out of the black west, and the air thickens into mist: nor are we fit to force our way on and across. Fortune is the stronger; let us follow her, and turn our course whither she calls. Not far away, I think, are the faithful shores of thy brother Eryx, and the Sicilian haven, if only my memory retraces rightly the stars I watched before.' Then good Aeneas: 'Even I ere now discern the winds will have it so, and thou urgest against them in vain. Turn thou the course of our sailing. Could any land be welcomer to me, or where I would sooner choose to put in my weary ships, than this that hath Dardanian Acestes to greet me, and laps in its embrace lord Anchises' dust?' This said, they steer for harbour, while the following west wind stretches their sails; the fleet runs fast down the flood, and at last they land joyfully on the familiar beach.

But Acestes high on a hill-top, amazed at the friendly squadron approaching from afar, hastens towards them, weaponed and clad in the shaggy skin of a Libyan she-bear. Him a Trojan mother conceived and bore to Crimisus river; not forgetful of his parentage, he wishes them joy of their return, and gladly entertains them on his rustic treasure and comforts their weariness with his friendly store.
So soon as the morrow's clear daylight had chased the stars out of the east, Aeneas calls his comrades along the beach together, and from a mounded hillock speaks: 'Great people of Dardanus, born of the high blood of gods, the yearly circle of the months is measured out to fulfilment since we laid the dust in earth, all that was left of my divine father, and sadly consecrated our altars. And now the day is at hand (this, O gods, was your will), which I will ever keep in grief, ever in honour. Did I spend it an exile on Gaetulian quicksands, did it surprise me on the Argolic sea or in Mycenae town, yet would I fulfil the yearly vows and annual ordinance of festival, and pile the altars with their due gifts. Now we are led hither, to the very dust and ashes of our father, not as I deem without divine purpose and influence, and borne home into the friendly haven. Up then and let us all gather joyfully to the sacrifice: pray we for winds, and may he deign that I pay these rites to him year by year in an established city and consecrated temple. Two head of oxen Acestes, the seed of Troy, gives to each of your ships by tale: invite to the feast your own ancestral gods of the household, and those whom our host Acestes worships. Further, so the ninth Dawn uplift the gracious day upon men, and her shafts unveil the world, I will ordain contests for my Trojans; first for swift ships; then whoso excels in the foot-race, and whoso, confident in strength and skill, comes to shoot light arrows, or adventures to join battle with gloves of raw hide; let all be here, and let merit look for the prize and palm. Now all be hushed, and twine your temples with boughs.'

So speaks he, and shrouds his brows with his mother's myrtle. So Helymus does, so Aletes ripe of years, so the boy Ascanius, and the rest of the people follow. He advances from the assembly to the tomb among a throng of many thousands that crowd about him; here he pours on the ground in fit libation two goblets of pure wine, two of new milk, two of consecrated blood, and flings bright blossoms, saying thus: 'Hail, holy father, once again; hail, ashes of him I saved in vain, and soul and shade of my sire! Thou wert not to share the search for Italian borders and destined fields, nor the dim Ausonian Tiber.' Thus had he spoken; when from beneath the sanctuary a snake slid out in seven vast coils and sevenfold slippery spires, quietly circling the grave and gliding from altar to altar, his green chequered body and the spotted lustre of his scales ablaze with gold, as the bow in the cloud darts a thousand changing dyes athwart the sun: Aeneas stood amazed at the sight. At last he wound his long train among the vessels and polished cups, and tasted the feast, and again leaving the altars where he had fed, crept harmlessly back beneath the tomb. Doubtful if he shall think it the Genius of the ground or his father's ministrant, he slays, as is fit, two sheep of two years old, as many swine and dark-backed steers, pouring the while cups of wine, and calling on the soul of great Anchises and the ghost rearisen from Acheron. Therewithal his comrades, as each hath store, bring gifts to heap joyfully on the altars, and slay steers in sacrifice: others set cauldrons arow, and, lying along the grass, heap live embers under spits and roast the flesh.
The desired day came, and now the ninth Dawn rode up clear and bright behind Phaëthon's coursers; and the name and renown of illustrious Acestes had stirred up all the bordering people; their holiday throng filled the shore, to see Aeneas' men, and some ready to join in contest. First of all the prizes are laid out to view in the middle of the racecourse; tripods of sacrifice, green garlands and palms, the reward of the conquerors, armour and garments dipped in purple, talents of silver and gold: and from a hillock in the midst the trumpet sounds the games begun.

First is the contest of rowing, and four ships matched in weight enter, the choice of all the fleet. Mnestheus' keen oarsmen drive the swift Dragon, Mnestheus the Italian to be, from whose name is the Memmian family; Gyas the huge bulk of the huge Chimaera, a floating town, whom her triple-tiered Dardanian crew urge on with oars rising in threefold rank; Sergestus, from whom the Sergian house holds her name, sails in the tall Centaur; and in the sea-coloured Scylla Cloanthus, whence is thy family, Cluentius of Rome.

Apart in the sea and over against the foaming beach, lies a rock that the swoln waves beat and drown what time the north-western gales of winter blot out the stars; in calm it rises silent out of the placid water, flat-topped, and a haunt where cormorants love best to take the sun. Here lord Aeneas set up a goal of leafy ilex, a mark for the sailors to know whence to return, where to wheel their long course round. Then they choose stations by lot, and on the sterns their captains glitter afar, beautiful in gold and purple; the rest of the crews are crowned with poplar sprays, and their naked shoulders glisten wet with oil. They sit down at the thwarts, and their arms are tense on the oars; at full strain they wait the signal, while throbbing fear and heightened ambition drain their riotous blood. Then, when the clear trumpet-note rang, all in a moment leap forward from their line; the shouts of the sailors strike up to heaven, and the channels are swept into foam by the arms as they swing backward. They cleave their furrows together, and all the sea is torn asunder by oars and triple-pointed prows. Not with speed so headlong do racing pairs whirl the chariots over the plain, as they rush streaming from the barriers; not so do their charioteers shake the wavy reins loose over their team, and hang forward on the whip. All the woodland rings with clapping and shouts of men that cheer their favourites, and the sheltered beach eddies back their cries; the noise buffets and re-echoes from the hills.

Gyas shoots out in front of the noisy crowd, and glides foremost along the water; whom Cloanthus follows next, rowing better, but held back by his dragging weight of pine. After them, at equal distance, the Dragon and the Centaur strive to win the foremost room; and now the Dragon has it, now the vast Centaur outstrips and passes her; now they dart on both together, their stems in a line, and their keels driving long furrows through the salt water-ways. And now they drew nigh the rock, and were hard on the goal; when Gyas as he led, winner over half the flood, cries aloud to Menoetes, the ship's steersman: 'Whither away so far to the right? This way direct her path; kiss the shore, and let the oarblade graze the leftward reefs. Others may keep to deep water.' He spoke; but Menoetes, fearing blind rocks, turns the bow away towards the open sea. 'Whither wanderest thou away? to the rocks, Menoetes!'
again shouts Gyas to bring him back; and lo! glancing round he sees Cloanthus passing up behind and keeping nearer. Between Gyas' ship and the echoing crags he scrapes through inside on his left, flashes past his leader, and leaving the goal behind is in safe water. Then indeed grief burned fierce through his strong frame, and tears sprung out on his cheeks; heedless of his own dignity and his crew's safety, he flings the too cautious Menoetes sheer into the sea from the high stern, himself succeeds as guide and master of the helm, and cheers on his men, and turns his tiller in to shore. But Menoetes, when at last he rose struggling from the bottom, heavy with advancing years and wet in his dripping clothes, makes for the top of the crag, and sits down on a dry rock. The Teucrians laughed out as he fell and as he swam, and laugh to see him spitting the salt water from his chest.

At this a joyful hope kindled in the two behind, Sergestus and Mnestheus, of catching up Gyas' wavering course. Sergestus slips forward as he nears the rock, yet not all in front, nor leading with his length of keel; part is in front, part pressed by the Dragon's jealous prow. But striding amidships between his comrades, Mnestheus cheers them on: 'Now, now swing back, oarsmen who were Hector's comrades, whom I chose to follow me in Troy's extremity; now put forth the might and courage you showed in Gaetulian quicksands, amid Ionian seas and Malea's chasing waves. Not the first place do I now seek for Mnestheus, nor strive for victory; though ah!—yet let them win, O Neptune, to whom thou givest it. But the shame of coming in last! Win but this, fellow-citizens, and avert that disaster!' His men bend forward, straining every muscle; the brasswork of the ship quivers to their mighty strokes, and the ground runs from under her; limbs and parched lips shake with their rapid panting, and sweat flows in streams all over them.

Mere chance brought the crew the glory they desired. For while Sergestus drives his prow furiously in towards the rocks and comes up with too scanty room, alas! he caught on a rock that ran out; the reef ground, the oars struck and shivered on the jagged teeth, and the bows crashed and hung. The sailors leap up and hold her with loud cries, and get out iron-shod poles and sharp-pointed boathooks, and pick up their broken oars out of the eddies. But Mnestheus, rejoicing and flushed by his triumph, with oars fast-dipping and winds at his call, issues into the shelving water and runs down the open sea. As a pigeon whose house and sweet nestlings are in the rock's recesses, if suddenly startled from her cavern, wings her flight over the fields and rushes frightened from her house with loud clapping pinions; then gliding noiselessly through the air, slides on her liquid way and moves not her rapid wings; so Mnestheus, so the Dragon under him swiftly cleaves the last space of sea, so her own speed carries her flying on. And first Sergestus is left behind, struggling on the steep rock and shoal water, and shouting in vain for help and learning to race with broken oars. Next he catches up Gyas and the vast bulk of the Chimaera; she gives way, without her steersman. And now on the very goal Cloanthus alone is left; him he pursues and presses hard, straining all his strength. Then indeed the shouts redouble, as all together eagerly cheer on the pursuer, and the sky echoes their din.
These scorn to lose the honour that is their own, the glory in their grasp, and would sell life for renown; to these success lends life; power comes with belief in it. And haply they had carried the prize with prows abreast, had not Cloanthus, stretching both his open hands over the sea, poured forth prayers and called the gods to hear his vows: 'Gods who are sovereign on the sea, over whose waters I run, to your altars on this beach will I bring a snow-white bull, my vow's glad penalty, and will cast his entrails into the salt flood and pour liquid wine.' He spoke, and far beneath the flood maiden Panopea heard him, with all Phorcus' choir of Nereids, and lord Portunus with his own mighty hand pushed him on his way. The ship flies to land swifter than the wind or an arrow's flight, and shoots into the deep harbour.

Then the seed of Anchises, summoning all in order, declares Cloanthus conqueror by herald's outcry, and dresses his brows in green bay, and gives gifts to each crew, three bullocks of their choice, and wine, and a large talent of silver to take away. For their captains he adds special honours; to the winner a scarf wrought with gold, encircled by a double border of deep Meliboean purple; woven in it is the kingly boy on leafy Ida, chasing swift stags with javelin and racing feet, keen and as one panting; him Jove's swooping armour-bearer hath caught up from Ida in his talons; his aged guardians stretch their hands vainly upwards, and the barking of hounds rings fierce into the air. But to him who, next in merit, held the second place, he gives to wear a corslet triple-woven with hooks of polished gold, stripped by his own conquering hand from Demoleos under tall Troy by the swift Simoïs, an ornament and safeguard among arms. Scarce could the straining shoulders of his servants Phegeus and Sagaris carry its heavy folds; yet with it on, Demoleos at full speed would chase the scattered Trojans. The third prize he makes twin cauldrons of brass, and bowls wrought in silver and rough with tracery.

And now all moved away in the pride and wealth of their prizes, their brows bound with scarlet ribbons; when, hardly torn loose by all his art from the cruel rock, his oars lost, rowing feebly with a single tier, Sergestus brought in his ship jeered at and unhonoured. Even as often a serpent caught on a highway, if a brazen wheel hath gone aslant over him or a wayfarer left him half dead and mangled with the blow of a heavy stone, wreathes himself slowly in vain effort to escape, in part undaunted, his eyes ablaze and his hissing throat lifted high; in part the disabling wound keeps him coiling in knots and twisting back on his own body; so the ship kept rowing slowly on, yet hoists sail and under full sail glides into the harbour mouth. Glad that the ship is saved and the crew brought back, Aeneas presents Sergestus with his promised reward. A slave woman is given him not unskilled in Minerva's labours, Pholoë the Cretan, with twin boys at her breast.

This contest sped, good Aeneas moved to a grassy plain girt all about with winding wooded hills, and amid the valley an amphitheatre, whither, with a concourse of many thousands, the hero advanced and took his seat on a mound. Here he allures with rewards and offer of prizes those who will try their hap in the fleet foot-race. Trojans and Sicilians gather mingling from all sides, Nisus and Euryalus foremost . . . Euryalus in the flower of youth and famed for beauty, Nisus for pure love of the boy.
Next follows renowned Diores, of Priam's royal line; after him Salius and Patron together, the one Acarnanian, the other Tegean by family and of Arcadian blood; next two men of Sicily, Helymus and Panopes, foresters and attendants on old Acestes; many besides whose fame is hid in obscurity. Then among them all Aeneas spoke thus: 'Hearken to this, and attend in good cheer. None out of this number will I let go without a gift. To each will I give two glittering Gnosian spearheads of polished steel, and an axe chased with silver to bear away; one and all shall be honoured thus. The three foremost shall receive prizes, and have pale olive bound about their head. The first shall have a caparisoned horse as conqueror; the second an Amazonian quiver filled with arrows of Thrace, girt about by a broad belt of gold, and on the link of the clasp a polished gem; let the third depart with this Argolic helmet for recompense.'

This said, they take their place, and the signal once heard, dart over the course and leave the line, pouring forth like a storm-cloud while they mark the goal. Nisus gets away first, and shoots out far in front of the throng, fleeter than the winds or the winged thunderbolt. Next to him, but next by a long gap, Salius follows; then, left a space behind him, Euryalus third . . . and Helymus comes after Euryalus; and close behind him, lo! Diores goes flying, just grazing foot with foot, hard on his shoulder; and if a longer space were left, he would creep out past him and win the tie. And now almost in the last space, they began to come up breathless to the goal, when unfortunate Nisus trips on the slippery blood of the slain steers, where haply it had spilled over the ground and wetted the green grass. Here, just in the flush of victory, he lost his feet; they slid away on the ground they pressed, and he fell forward right among the ordure and blood of the sacrifice. Yet forgot he not his darling Euryalus; for rising, he flung himself over the slippery ground in front of Salius, and he rolled over and lay all along on the hard sand. Euryalus shoots by, wins and holds the first place his friend gave, and flies on amid prosperous clapping and cheers. Behind Helymus comes up, and Diores, now third for the palm.

At this Salius fills with loud clamour the whole concourse of the vast theatre, and the lords who looked on in front, demanding restoration of his defrauded prize. Euryalus is strong in favour, and beauty in tears, and the merit that gains grace from so fair a form. Diores supports him, who succeeded to the palm, so he loudly cries, and bore off the last prize in vain, if the highest honours be restored to Salius. Then lord Aeneas speaks: 'For you, O boys, your rewards remain assured, and none alters the prizes' order: let me be allowed to pity a friend's innocent mischance.' So speaking, he gives to Salius a vast Gaetulian lion-skin, with shaggy masses of hair and claws of gold. 'If this,' cries Nisus, 'is the reward of defeat, and thy pity is stirred for the fallen, what fit recompense wilt thou give to Nisus? to my excellence the first crown was due, had not I, like Salius, met Fortune's hostility.' And with the words he displayed his face and limbs foul with the wet dung. His lord laughed kindly on him, and bade a shield be brought forth, the workmanship of Didymaon, torn by him from the hallowed gates of Neptune's Grecian temple; with this special prize he rewards his excellence.
Thereafter, when the races are finished and the gifts fulfilled: 'Now,' he cries, 'come, whoso hath in him valour and ready heart, and lift up his arms with gauntleted hands.' So speaks he, and sets forth a double prize of battle; for the conqueror a bullock gilt and garlanded; a sword and beautiful helmet to console the conquered. Straightway without pause Dares issues to view in his vast strength, rising amid loud murmurs of the people; he who alone was wont to meet Paris in combat; he who, at the mound where princely Hector lies, struck down as he came the vast bulk upborne by conquering Butes, of Amycus' Bebrycian line, and stretched him in death on the yellow sand. Such was Dares; at once he raises his head high for battle, displays his broad shoulders, and stretches and swings his arms right and left, lashing the air with blows. For him another is required; but none out of all the train durst approach or put the gloves on his hands. So he takes his stand exultant before Aeneas' feet, deeming he excelled all in victories; and thereon without more delay grasps the bull's horn with his left hand, and speaks thus: 'Goddess-born, if no man dare trust himself to battle, to what conclusion shall I stand? how long is it seemly to keep me? bid me carry off thy gifts.' Therewith all the Dardanians murmured assent, and bade yield him the promised prize.

At this aged Acestes spoke sharply to Entellus, as he sate next him on the green cushion of grass: 'Entellus, bravest of heroes once of old in vain, wilt thou thus idly let a gift so great be borne away uncontested? Where now prithee is divine Eryx, thy master of fruitless fame? where thy renown over all Sicily, and those spoils hanging in thine house?' Thereat he: 'Desire of glory is not gone, nor ambition checked by fear; but torpid age dulls my chilly blood, and my strength of limb is numb and outworn. If I had what once was mine, if I had now that prime of years, yonder braggart's boast and confidence, it had taken no prize of goodly bullock to allure me; nor heed I these gifts.' So he spoke, and on that flung down a pair of gloves of giant weight, with whose hard hide bound about his wrists valiant Eryx was wont to come to battle. They stood amazed; so stiff and grim lay the vast sevenfold oxhide sewed in with lead and iron. Dares most of all shrinks far back in horror, and the noble son of Anchises turns round this way and that their vast weight and voluminous folds. Then the old man spoke thus in deep accents: 'How, had they seen the gloves that were Hercules' own armour, and the fatal fight on this very beach? These arms thy brother Eryx once wore; thou seest them yet stained with blood and spattered brains. In them he stood to face great Alcides; to them was I used while fuller blood supplied me strength, and envious old age had not yet strewn her snows on either temple. But if Dares of Troy will have none of these our arms, and good Aeneas is resolved on it, and my patron Acestes approves, let us make the battle even. See, I give up the gauntlets of Eryx; dismiss thy fears; and do thou put off thy Trojan gloves.' So spoke he, and throwing back the fold of his raiment from his shoulders, he bares the massive joints and limbs, the great bones and muscles, and stands up huge in the middle of the ground. Then Anchises' lordly seed brought out equal gloves and bound the hands of both in matched arms. Straightway each took his stand on tiptoe, and undauntedly raised his arms high in air.
They lift their heads right back and away out of reach of blows, and make hand play through hand, inviting attack; the one nimbler of foot and confident in his youth, the other mighty in mass of limb, but his knees totter tremulous and slow, and sick panting shakes his vast frame. Many a mutual blow they deliver in vain, many an one they redouble on chest and side, sounding hollow and loud: hands play fast about ear and temple, and jawbones clash under the hard strokes. Old Entellus stands immoveable and astrain, only parrying hits with body and watchful eye. The other, as one who casts mounts against some high city or blockades a hill-fort in arms, tries this and that entrance, and ranges cunningly over all the ground, and presses many an attack in vain. Entellus rose and struck clean out with his right downwards; his quick opponent saw the descending blow before it came, and slid his body rapidly out of its way. Entellus hurled his strength into the air, and all his heavy mass, overreaching, fell heavily to the earth; as sometime on Erymanthus or mighty Ida a hollow pine falls torn out by the roots. Teucrians and men of Sicily rise eagerly; a cry goes up, and Acestes himself runs forward, and pityingly lifts his friend and birthmate from the ground. But the hero, not dulled nor dismayed by his mishap, returns the keener to battle, and grows violent in wrath, while shame and resolved valour kindle his strength. All afire, he hunts Dares headlong over the lists, and redoubles his blows now with right hand, now with left; no breath nor pause; heavy as hailstones rattle on the roof from a storm-cloud, so thickly shower the blows from both his hands as he buffets Dares to and fro.

Then lord Aeneas allowed not wrath to swell higher or Entellus to rage out his bitterness, but stopped the fight and rescued the exhausted Dares, saying thus in soothing words: 'Unhappy! what height of madness hath seized thy mind? Knowest thou not the strength is another's and the gods are changed? Yield thou to Heaven.' And with the words he proclaimed the battle over. But him his faithful mates lead to the ships dragging his knees feebly, swaying his head from side to side, and spitting from his mouth clotted blood mingled with teeth. At summons they bear away the helmet and shield, and leave palm and bull to Entellus. At this the conqueror, swelling in pride over the bull, cries: 'Goddess-born, and you, O Trojans! learn thus what my strength of body was in its prime, and from what a death Dares is saved by your recall.' He spoke, and stood right opposite in face of the bullock as it stood by, the prize of battle; then drew back his hand, and swinging the hard gauntlet sheer down between the horns, smashed the bones in upon the shattered brain. The ox rolls over, and quivering and lifeless lies along the ground. Above it he utters these deep accents: 'This life, Eryx, I give to thee, a better payment than Dares' death; here I lay down my gloves and unconquered skill.'

Forthwith Aeneas invites all that will to the contest of the swift arrow, and proclaims the prizes. With his strong hand he uprears the mast of Serestus' ship, and on a cord crossing it hangs from the masthead a fluttering pigeon as mark for their steel. They gather, and a helmet of brass takes the lots as they throw them in.
First in rank, and before them all, amid prosperous cheers, comes out Hippocoön son of Hyrtacus; and Mnestheus follows on him, but now conqueror in the ship race, Mnestheus with his chaplet of green olive. Third is Eurytion, thy brother, O Pandarus, great in renown, thou who of old, when prompted to shatter the truce, didst hurl the first shaft amid the Achaeans. Last of all, and at the bottom of the helmet, sank Acestes, he too venturing to set hand to the task of youth. Then each and all they strongly bend their bows into a curve and pull shafts from their quivers. And first the arrow of the son of Hyrtacus, flying through heaven from the sounding string, whistles through the fleet breezes, and reaches and sticks fast full in the mast's wood: the mast quivered, and the bird fluttered her feathers in affright, and the whole ground rang with loud clapping. Next valiant Mnestheus took his stand with bow bent, aiming high with levelled eye and arrow; yet could not, unfortunate! hit the bird herself with his steel, but cut the knotted hempen bands that tied her foot as she hung from the masthead; she winged her flight into the dark windy clouds. Then Eurytion, who ere now held the arrow ready on his bended bow, swiftly called in prayer to his brother, marked the pigeon as she now went down the empty sky exultant on clapping wings; and as she passed under a dark cloud, struck her: she fell breathless, and, leaving her life in the aery firmament, slid down carrying the arrow that pierced her.

Acestes alone was over, and the prize lost; yet he sped his arrow up into the air, to display his lordly skill and resounding bow. At this a sudden sign meets their eyes, mighty in augural presage, as the high event taught thereafter, and in late days boding seers prophesied of the omen. For the flying reed blazed out amid the swimming clouds, traced its path in flame, and burned away on the light winds; even as often stars shooting from their sphere draw a train athwart the sky. Trinacrians and Trojans hung in astonishment, praying to the heavenly powers; neither did great Aeneas reject the omen, but embraces glad Acestes and loads him with lavish gifts, speaking thus: 'Take, my lord: for the high King of heaven by these signs hath willed thee to draw the lot of peculiar honour. This gift shalt thou have as from aged Anchises' own hand, a bowl embossed with figures, that once Cisseus of Thrace gave my father Anchises to bear, in high token and guerdon of affection.' So speaking, he twines green bay about his brows, and proclaims Acestes conqueror first before them all. Nor did gentle Eurytion, though he alone struck the bird down from the lofty sky, grudge him to be preferred in honour. Next comes for his prize he who cut the cord; he last, who pierced the mast with his winged reed.

But lord Aeneas, ere yet the contest is sped, calls to him Epytides, guardian and attendant of ungrown Iülus, and thus speaks into his faithful ear: 'Up and away, and tell Ascanius, if he now holds his band of boys ready, and their horses arrayed for the charge, to defile his squadrons to his grandsire's honour in bravery of arms.' So says he, and himself bids all the crowding throng withdraw from the long racecourse and leave the lists free. The boys move in before their parents' faces, glittering in rank on their bitted horses; as they go all the people of Troy and Trinacria murmur and admire.
On the hair of them all rests a garland fitly trimmed; each carries two cornel spear-shafts tipped with steel; some have polished quivers on their shoulders; above their breast and round their neck goes a flexible circlet of twisted gold. Three in number are the troops of riders, and three captains gallop up and down; following each in equal command rides a glittering division of twelve boys. One youthful line goes rejoicingly behind little Priam, renewer of his grandsire's name, thy renowned seed, O Polites, and destined to people Italy; he rides a Thracian horse dappled with spots of white, showing white on his pacing pasterns and white on his high forehead. Second is Atys, from whom the Latin Atii draw their line, little Atys, boy beloved of the boy Iülus. Last and excellent in beauty before them all, Iülus rode in on a Sidonian horse that Dido the bright had given him for token and pledge of love. The rest of them are mounted on old Acestes' Sicilian horses. . . . The Dardanians greet their shy entrance with applause, and rejoice at the view, and recognise the features of their parents of old.

When they have ridden merrily round all the concourse of their gazing friends, Epytides shouts from afar the signal they await, and sounds his whip. They gallop apart in equal numbers, and open their files three and three in deploying bands, and again at the call wheel about and bear down with levelled arms. Next they start on other charges and other retreats in corresponsive spaces, and interlink circle with circle, and wage the armed phantom of battle. And now they bare their backs in flight, now turn their lances to the charge, now plight peace and ride on side by side. As once of old, they say, the labyrinth in high Crete had a tangled path between blind walls, and a thousand ways of doubling treachery, where tokens to follow failed in the maze unmastered and irrecoverable: even in such a track do the children of Troy entangle their footsteps and weave the game of flight and battle; like dolphins who, swimming through the wet seas, cut Carpathian or Libyan. . . . This fashion of riding, these games Ascanius first revived, when he girt Alba the Long about with walls, and taught their celebration to the Old Latins in the way of his own boyhood, with the youth of Troy about him. The Albans taught it their children; on from them mighty Rome received it and kept the ancestral observance; and now it is called Troy, and the boys the Trojan troop. Thus far sped the sacred contests to their holy lord.

Just at this Fortune broke faith and grew estranged. While they pay the due rites to the tomb with diverse games, Juno, daughter of Saturn, sends Iris down the sky to the Ilian fleet, and breathes a gale to speed her on, revolving many a thought, and not yet satiate of the ancient pain. She, speeding her way along the thousand-coloured bow, runs swiftly, seen of none, down her maiden path. She discerns the vast concourse, and traverses the shore, and sees the haven abandoned and the fleet left alone. But far withdrawn by the solitary verge of the sea the Trojan women wept their lost Anchises, and as they wept gazed all together on the fathomless flood. 'Alas! after all those weary waterways, that so wide a sea is yet to come!' such is the single cry of all. They pray for a city, sick of the burden of their sea-sorrow.
So she darts among them, not witless to harm, and lays by face and raiment of a goddess: she becomes Beroë, the aged wife of Tmarian Doryclus, who had once had birth and name and children, and in this guise goes among the Dardanian matrons. 'Ah, wretched we,' she cries, 'whom hostile Achaean hands did not drag to death beneath our native city! ah hapless race, for what destruction does Fortune hold thee back? The seventh summer now declines since Troy's overthrow, while we pass measuring out by so many stars the harbourless rocks over every water and land, pursuing all the while over the vast sea an Italy that flies us, and tossing on the waves. Here are our brother Eryx' borders, and Acestes' welcome: who denies us to cast up walls and give our citizens a city? O country, O household gods vainly rescued from the foe! shall there never be a Trojan town to tell of? shall I nowhere see a Xanthus and a Simoïs, the rivers of Hector? Nay, up and join me in burning with fire these ill-ominous ships. For in sleep the phantom of Cassandra the soothsayer seemed to give me blazing brands: Here seek your Troy, she said; here is your home. Now is the time to do it; nor do these high portents allow delay. Behold four altars to Neptune; the god himself lends the firebrand and the nerve.' Speaking thus, at once she strongly seizes the fiery weapon, and with straining hand whirls it far upreared, and flings: the souls of the Ilian women are startled and their wits amazed.

At this one of their multitude, and she the eldest, Pyrgo, nurse in the palace to all Priam's many children: 'This is not Beroë, I tell you, O mothers; this is not the wife of Doryclus of Rhoeteum. Mark the lineaments of divine grace and the gleaming eyes, what a breath is hers, what a countenance, and the sound of her voice and the steps of her going. I, I time agone left Beroë apart, sick and fretting that she alone must have no part in this our service, nor pay Anchises his due sacrifice.' So spoke she. . . . But the matrons at first, dubious and wavering, gazed on the ships with malignant eyes, between the wretched longing for the land they trod and the fated realm that summoned them: when the goddess rose through the sky on poised wings, and in her flight drew a vast bow beneath the clouds. Then indeed, amazed at the tokens and driven by madness, they raise a cry and snatch fire from the hearths within; others plunder the altars, and cast on brushwood boughs and brands. The Fire-god rages with loose rein over thwarts and oars and hulls of painted fir.

Eumelus carries the news of the burning ships to the grave of Anchises and the ranges of the theatre; and looking back, their own eyes see the floating cloud of dark ashes. And in a moment Ascanius, as he rode gaily before his cavalry, spurred his horse to the disordered camp; nor can his breathless guardians hold him back. 'What strange madness is this?' he cries; 'whither now hasten you, whither, alas and woe! O citizens? not on the foe nor on some hostile Argive camp; it is your own hopes you burn. Behold me, your Ascanius!' and he flung before his feet the empty helmet, put on when he roused the mimicry of war. Aeneas and the Trojan train together hurry to the spot. But the women scatter apart in fear all over the beach, and stealthily seek the woods and the hollow rocks they find: they loathe their deed and the daylight, and with changed eyes know their people, and Juno is startled out of their breast.
But not thereby do the flames of the burning lay down their unconquered strength; under the wet oak the seams are alive, spouting slow coils of smoke; the creeping heat devours the hulls, and the destroyer takes deep hold of all: nor does the heroes' strength avail nor the floods they pour in. Then good Aeneas rent away the raiment from his shoulders and called the gods to aid, stretching forth his hands: 'Jupiter omnipotent, if thou hatest not Troy yet wholly to her last man, if thine ancient pity looks at all on human woes, now, O Lord, grant our fleet to escape the flame, and rescue from doom the slender Teucrian estate. Or do thou plunge to death this remnant, if I deserve it, with levelled thunderbolt, and here with thine own hand smite us down.' Scarce had he uttered this, when a black tempest rages in streaming showers; earth trembles to the thunder on plain and steep; the water-flood rushes in torrents from the whole heaven amid black darkness and volleying blasts of the South. The ships are filled from overhead, the half-burnt timbers are soaking; till all the heat is quenched, and all the hulls, but four that are lost, are rescued from destruction.

But lord Aeneas, dismayed by the bitter mischance, revolved at heart this way and that his shifting weight of care, whether, forgetting fate, he should rest in Sicilian fields, or reach forth to the borders of Italy. Then old Nautes, whom Tritonian Pallas taught like none other, and made famous in eminence of art—she granted him to reply what the gods' heavy anger menaced or what the order of fate claimed—he then in accents of comfort thus speaks to Aeneas: 'Goddess-born, follow we fate's ebb and flow, whatsoever it shall be; fortune must be borne to be overcome. Acestes is of thine own divine Dardanian race; take him, for he is willing, to join thee in common counsel; deliver to him those who are over, now these ships are lost, and those who are quite weary of thy fortunes and the great quest. Choose out the old men stricken in years, and the matrons sick of the sea, and all that is weak and fearful of peril in thy company. Let this land give a city to the weary; they shall be allowed to call their town Acesta by name.' Then, indeed, kindled by these words of his aged friend, his spirit is distracted among all his cares.

And now black Night rose chariot-borne, and held the sky; when the likeness of his father Anchises seemed to descend from heaven and suddenly utter thus: 'O son, more dear to me than life once of old while life was yet mine; O son, hard wrought by the destinies of Ilium! I come hither by Jove's command, who drove the fire from thy fleets, and at last had pity out of high heaven. Obey thou the fair counsel aged Nautes now gives. Carry through to Italy thy chosen men and bravest souls; in Latium must thou war down a people hard and rough in living. Yet ere then draw thou nigh the nether chambers of Dis, and in the deep tract of hell come, O son, to meet me. For I am not held in cruel Tartarus among wailing ghosts, but inhabit Elysium and the sweet societies of the good. Hither with much blood of dark cattle shall the holy Sibyl lead thee. Then shalt thou learn of all thy line, and what city is given thee. And now farewell; dank Night wheels her mid-career, and even now I feel the stern breath of the panting horses of the East.' He ended, and retreated like a vapour into thin air. 'Ah, whither hurriest thou?' cries Aeneas; 'whither so fast away? From whom fliest thou?
or who withholds thee from our embrace?' So speaking, he kindles the sleeping embers of the fire, and with holy meal and laden censer does sacrifice to the tutelar of Pergama and hoar Vesta's secret shrine.

Straightway he summons his crews and Acestes first of all, and instructs them of Jove's command and his beloved father's precepts, and what is now his fixed mind and purpose. They linger not in counsel, nor does Acestes decline his bidden duty: they enrol the matrons in their town, and plant a people there, souls that will have none of glory. The rest repair the thwarts and replace the ships' timbers that the flames had gnawed upon, and fit up oars and rigging, little in number, but alive and valiant for war. Meanwhile Aeneas traces the town with the plough and allots the homesteads; this he bids be Ilium, and these lands Troy. Trojan Acestes, rejoicing in his kingdom, appoints a court and gathers his senators to give them statutes. Next, where the crest of Eryx is neighbour to the stars, a dwelling is founded to Venus the Idalian; and a priest and breadth of holy wood is attached to Anchises' grave.

And now for nine days all the people hath feasted, and offering been paid at the altars; quiet breezes have smoothed the ocean floor, and the gathering south wind blows, calling them again to sea. A mighty weeping arises along the winding shore; a night and a day they linger in mutual embraces. The very mothers now, the very men to whom once the sight of the sea seemed cruel and the name intolerable, would go on and endure the journey's travail to the end. These Aeneas comforts with kindly words, and commends with tears to his kinsman Acestes' care. Then he bids slay three steers to Eryx and a she-lamb to the Tempests, and loose the hawser as is due. Himself, his head bound with stripped leaves of olive, he stands apart on the prow holding the cup, and casts the entrails into the salt flood and pours liquid wine. A wind rising astern follows them forth on their way. Emulously the crews strike the water, and sweep through the seas.

But Venus meanwhile, wrought upon with distress, accosts Neptune, and thus pours forth her heart's complaint: 'Juno's bitter wrath and heart insatiable compel me, O Neptune, to sink to the uttermost of entreaty: neither length of days nor any goodness softens her, nor doth Jove's command and fate itself break her to desistence. It is not enough that her accursed hatred hath devoured the Phrygian city from among the people, and exhausted on it the stores of vengeance; still she pursues this remnant, the bones and ashes of murdered Troy. I pray she know why her passion is so fierce. Thyself art my witness what a sudden stir she raised of late on the Libyan waters, flinging all the seas to heaven in vain reliance on Aeolus' blasts; this she dared in thy realm. . . . Lo too, driving the Trojan matrons into guilt, she hath foully burned their ships, and forced them, their fleet lost, to leave the crews to an unknown land. Let the remnant, I beseech thee, give their sails to thy safe keeping across the seas; let them reach Laurentine Tiber; if I ask what is permitted, if fate grants them a city there.' Then the son of Saturn, compeller of the ocean deep, uttered thus: 'It is wholly right, O Cytherean, that thy trust should be in my realm, whence thou drawest birth; and I have deserved it: often have I allayed the rage and full fury of sky and sea. Nor less on land, I call Xanthus and Simoïs to witness, hath been my care of thine Aeneas.
When Achilles pursued the Trojan armies and hurled them breathless on their walls, and sent many thousands to death,—when the choked rivers groaned and Xanthus could not find passage or roll out to sea,—then I snatched Aeneas away in sheltering mist as he met the brave son of Peleus outmatched in strength and gods, eager as I was to overthrow the walls of perjured Troy that mine own hands had built. Now too my mind rests the same; dismiss thy fear. In safety, as thou desirest, shall he reach the haven of Avernus. One will there be alone whom on the flood thou shalt lose and require; one life shall be given for many. . . .' With these words the goddess' bosom is soothed to joy. Then their lord yokes his wild horses with gold and fastens the foaming bits, and letting all the reins run slack in his hand, flies lightly in his sea-coloured chariot over the ocean surface. The waves sink to rest, and the swoln water-ways smooth out under the thundering axle; the storm-clouds scatter from the vast sky. Diverse shapes attend him, monstrous whales, and Glaucus' aged choir, and Palaemon, son of Ino, the swift Tritons, and Phorcus with all his army. Thetis and Melite keep the left, and maiden Panopea, Nesaea and Spio, Thalia and Cymodoce.

At this lord Aeneas' soul is thrilled with soft counterchange of delight. He bids all the masts be upreared with speed, and the sails stretched on the yards. Together all set their sheets, and all at once slacken their canvas to left and again to right; together they brace and unbrace the yard-arms aloft; prosperous gales waft the fleet along. First, in front of all, Palinurus steered the close column; the rest under orders ply their course by his.

And now dewy Night had just reached heaven's mid-cone; the sailors, stretched on their hard benches under the oars, relaxed their limbs in quiet rest: when Sleep, sliding lightly down from the starry sky, parted the shadowy air and cleft the dark, seeking thee, O Palinurus, carrying dreams of bale to thee who dreamt not of harm, and lit on the high stern, a god in Phorbas' likeness, dropping this speech from his lips: 'Palinurus son of Iasus, the very seas bear our fleet along; the breezes breathe steadily; for an hour rest is given. Lay down thine head, and steal thy worn eyes from their toil. I myself for a little will take thy duty in thy stead.' To whom Palinurus, scarcely lifting his eyes, returns: 'Wouldst thou have me ignorant what the calm face of the brine means, and the waves at rest? Shall I have faith in this perilous thing? How shall I trust Aeneas to deceitful breezes, and the placid treachery of sky that hath so often deceived me?' Such words he uttered, and, clinging fast to the tiller, slackened hold no whit, and looked up steadily on the stars. Lo! the god shakes over either temple a bough dripping with Lethean dew and made slumberous with the might of Styx, and makes his swimming eyes relax their struggles. Scarcely had sleep begun to slacken his limbs unaware, when bending down, he flung him sheer into the clear water, tearing rudder and half the stern away with him, and many a time crying vainly on his comrades: himself he rose on flying wings into the thin air. None the less does the fleet run safe on its sea path, and glides on unalarmed in lord Neptune's assurance.
Yes, and now they were sailing in to the cliffs of the Sirens, dangerous once of old and white with the bones of many a man; and the hoarse rocks echoed afar in the ceaseless surf; when her lord felt the ship rocking astray for loss of her helmsman, and himself steered her on over the darkling water, sighing often the while, and heavy at heart for his friend's mischance. 'Ah too trustful in sky's and sea's serenity, thou shalt lie, O Palinurus, naked on an alien sand!'

BOOK SIXTH
THE VISION OF THE UNDER WORLD

So speaks he weeping, and gives his fleet the rein, and at last glides in to Euboïc Cumae's coast. They turn the prows seaward; the ships grounded fast on their anchors' teeth, and the curving ships line the beach. The warrior band leaps forth eagerly on the Hesperian shore; some seek the seeds of flame hidden in veins of flint, some scour the woods, the thick coverts of wild beasts, and find and shew the streams. But good Aeneas seeks the fortress where Apollo sits high enthroned, and the lone mystery of the awful Sibyl's cavern depth, over whose mind and soul the prophetic Delian breathes high inspiration and reveals futurity. Now they draw nigh the groves of Trivia and the roof of gold.

Daedalus, as the story runs, when in flight from Minos' realm he dared to spread his fleet wings to the sky, glided on his unwonted way towards the icy northern star, and at length lit gently on the Chalcidian fastness. Here, on the first land he retrod, he dedicated his winged oarage to thee, O Phoebus, in the vast temple he built. On the doors is Androgeus' death; thereby the children of Cecrops, bidden, ah me! to pay for yearly ransom seven souls of their sons; the urn stands there, and the lots are drawn. Right opposite the land of Gnosus rises from the sea; on it is the cruel love of the bull, the disguised stealth of Pasiphaë, and the mingled breed and double issue of the Minotaur, record of a shameful passion; on it the famous dwelling's laborious inextricable maze; but Daedalus, pitying the great love of the princess, himself unlocked the tangled treachery of the palace, guiding with the clue her lover's blind footsteps. Thou too hadst no slight part in the work he wrought, O Icarus, did grief allow. Twice had he essayed to portray thy fate in gold; twice the father's hands dropped down. Nay, their eyes would scan all the story in order, were not Achates already returned from his errand, and with him the priestess of Phoebus and Trivia, Deïphobe daughter of Glaucus, who thus accosts the king: 'Other than this are the sights the time demands: now were it well to sacrifice seven unbroken bullocks of the herd, as many fitly chosen sheep of two years old.' Thus speaks she to Aeneas; nor do they delay to do her sacred bidding; and the priestess calls the Teucrians into the lofty shrine.

A vast cavern is scooped in the side of the Euboïc cliff, whither lead an hundred wide passages by an hundred gates, whence peal forth as manifold the responses of the Sibyl. They had reached the threshold, when the maiden cries: It is time to enquire thy fate: the god, lo! the god! And even as she spoke thus in the gateway, suddenly countenance nor colour nor ranged tresses stayed the same; her wild heart heaves madly in her panting bosom; and she expands to sight, and her voice is more than mortal, now the god breathes on her in nearer deity. 'Lingerest thou to vow and pray,' she cries, 'Aeneas of Troy? lingerest thou?
for not till then will the vast portals of the spellbound house swing open.' So spoke she, and sank to silence. A cold shiver ran through the Teucrians' iron frames, and the king pours heart-deep supplication: 'Phoebus, who hast ever pitied the sore travail of Troy, who didst guide the Dardanian shaft from Paris' hand full on the son of Aeacus, in thy leading have I pierced all these seas that skirt mighty lands, the Massylian nations far withdrawn, and the fields the Syrtes fringe; thus far let the fortune of Troy follow us. You too may now unforbidden spare the nation of Pergama, gods and goddesses to whomsoever Ilium and the great glory of Dardania did wrong. And thou, O prophetess most holy, foreknower of the future, grant (for no unearned realm does my destiny claim) a resting-place in Latium to the Teucrians, to their wandering gods and the storm-tossed deities of Troy. Then will I ordain to Phoebus and Trivia a temple of solid marble, and festal days in Phoebus' name. Thee likewise a mighty sanctuary awaits in our realm. For here will I place thine oracles and the secrets of destiny uttered to my people, and consecrate chosen men, O gracious one. Only commit not thou thy verses to leaves, lest they fly disordered, the sport of rushing winds; thyself utter them, I beseech thee.' His lips made an end of utterance.

But the prophetess, not yet tame to Phoebus' hand, rages fiercely in the cavern, so she may shake the mighty godhead from her breast; so much the more does he tire her maddened mouth and subdue her wild breast and shape her to his pressure. And now the hundred mighty portals of the house open of their own accord, and bring through the air the answer of the soothsayer: 'O past at length with the great perils of the sea! though heavier yet by land await thee, the Dardanians shall come to the realm of Lavinium; relieve thy heart of this care; but not so shall they have joy of their coming. Wars, grim wars I discern, and Tiber afoam with streams of blood. A Simoïs shall not fail thee, a Xanthus, a Dorian camp; another Achilles is already found for Latium, he too goddess-born; nor shall Juno's presence ever leave the Teucrians; while thou in thy need, to what nations or what towns of Italy shalt thou not sue! Again is an alien bride the source of all that Teucrian woe, again a foreign marriage-chamber. . . . Yield not thou to distresses, but all the bolder go forth to meet them, as thy fortune shall allow thee way. The path of rescue, little as thou deemest it, shall first open from a Grecian town.'

In such words the Sibyl of Cumae chants from the shrine her perplexing terrors, echoing through the cavern truth wrapped in obscurity: so does Apollo clash the reins and ply the goad in her maddened breast. So soon as the spasm ceased and the raving lips sank to silence, Aeneas the hero begins: 'No shape of toil, O maiden, rises strange or sudden on my sight; all this ere now have I guessed and inly rehearsed in spirit. One thing I pray; since here is the gate named of the infernal king, and the darkling marsh of Acheron's overflow, be it given me to go to my beloved father, to see him face to face; teach thou the way, and open the consecrated portals. Him on these shoulders I rescued from encircling flames and a thousand pursuing weapons, and brought him safe from amid the enemy; he accompanied my way over all the seas, and bore with me all the threats of ocean and sky, in weakness, beyond his age's strength and due.
Nay, he it was who besought and enjoined me to seek thy grace and draw nigh thy courts. Have pity, I beseech thee, on son and father, O gracious one! for thou art all-powerful, nor in vain hath Hecate given thee rule in the groves of Avernus. If Orpheus could call up his wife's ghost in the strength of his Thracian lyre and the music of the strings,—if Pollux redeemed his brother by exchange of death, and passes and repasses so often,—why make mention of great Theseus, why of Alcides? I too am of Jove's sovereign race.' [Pg 123] [124-157]In such words he pleaded and clasped the altars; when the soothsayer thus began to speak: 'O sprung of gods' blood, child of Anchises of Troy, easy is the descent into hell; all night and day the gate of dark Dis stands open; but to recall thy steps and issue to upper air, this is the task and burden. Some few of gods' lineage have availed, such as Jupiter's gracious favour or virtue's ardour hath upborne to heaven. Midway all is muffled in forest, and the black coils of Cocytus circle it round. Yet if thy soul is so passionate and so desirous twice to float across the Stygian lake, twice to see dark Tartarus, and thy pleasure is to plunge into the mad task, learn what must first be accomplished. Hidden in a shady tree is a bough with leafage and pliant shoot all of gold, consecrate to nether Juno, wrapped in the depth of woodland and shut in by dim dusky vales. But to him only who first hath plucked the golden-tressed fruitage from the tree is it given to enter the hidden places of the earth. This hath beautiful Proserpine ordained to be borne to her for her proper gift. The first torn away, a second fills the place in gold, and the spray burgeons with even such ore again. So let thine eyes trace it home, and thine hand pluck it duly when found; for lightly and unreluctant will it follow if thine is fate's summons; else will no strength of thine avail to conquer it nor hard steel to cut it away. Yet again, a friend of thine lies a lifeless corpse, alas! thou knowest it not, and defiles all the fleet with death, while thou seekest our counsel and lingerest in our courts. First lay him in his resting-place and hide him in the tomb; lead thither black cattle; be this first thine expiation; so at last shalt thou behold the Stygian groves and the realm untrodden of the living.' She spoke, and her lips shut to silence. Aeneas goes forth, and leaves the cavern with fixed eyes and sad countenance, his soul revolving inly the unseen [Pg 124][158-194]issues. By his side goes faithful Achates, and plants his footsteps in equal perplexity. Long they ran on in mutual change of talk; of what lifeless comrade spoke the soothsayer, of what body for burial? And even as they came, they see on the dry beach Misenus cut off by untimely death, Misenus the Aeolid, excelled of none other in stirring men with brazen breath and kindling battle with his trumpet-note. He had been attendant on mighty Hector; in Hector's train he waged battle, renowned alike for bugle and spear: after victorious Achilles robbed him of life the valiant hero had joined Dardanian Aeneas' company, and followed no meaner leader. But now, while he makes his hollow shell echo over the seas, ah fool! and calls the gods to rival his blast, jealous Triton, if belief is due, had caught him among the rocks and sunk him in the foaming waves. So all surrounded him with loud murmur and cries, good Aeneas the foremost. 
Then weeping they quickly hasten on the Sibyl's orders, and work hard to pile trees for the altar of burial, and heap it up into the sky. They move into the ancient forest, the deep coverts of game; pitch-pines fall flat, ilex rings to the stroke of axes, and ashen beams and oak are split in clefts with wedges; they roll in huge mountain-ashes from the hills. Aeneas likewise is first in the work, and cheers on his crew and arms himself with their weapons. And alone with his sad heart he ponders it all, gazing on the endless forest, and utters this prayer: 'If but now that bough of gold would shew itself to us on the tree in this depth of woodland! since all the soothsayer's tale of thee, Misenus, was, alas! too truly spoken.'

Scarcely had he said thus, when twin doves haply came flying down the sky, and lit on the green sod right under his eyes. Then the kingly hero knows them for his mother's birds, and joyfully prays: 'Ah, be my guides, if way there be, and direct your aëry passage into the groves where the rich bough overshadows the fertile ground! and thou, O goddess mother, fail not our wavering fortune.' So spoke he and stayed his steps, marking what they signify, whither they urge their way. Feeding and flying they advance at such distance as following eyes could keep them in view; then, when they came to Avernus' pestilent gorge, they tower swiftly, and sliding down through the liquid air, choose their seat and light side by side on a tree, through whose boughs shone out the contrasting flicker of gold. As in chill mid-winter the woodland is wont to blossom with the strange leafage of the mistletoe, sown on an alien tree and wreathing the smooth stems with burgeoning saffron; so on the shadowy ilex seemed that leafy gold, so the foil tinkled in the light breeze. Immediately Aeneas seizes it and eagerly breaks off its resistance, and carries it beneath the Sibyl's roof.

And therewithal the Teucrians on the beach wept Misenus, and bore the last rites to the thankless ashes. First they build up a vast pyre of resinous billets and sawn oak, whose sides they entwine with dark leaves and plant funereal cypresses in front, and adorn it above with his shining armour. Some prepare warm water in cauldrons bubbling over the flames, and wash and anoint the chill body, and make their moan; then, their weeping done, lay his limbs on the pillow, and spread over it crimson raiment, the accustomed pall. Some uplift the heavy bier, a melancholy service, and with averted faces in their ancestral fashion hold and thrust in the torch. Gifts of frankincense, food, and bowls of olive oil, are poured and piled upon the fire. After the embers sank in and the flame died away, they soaked with wine the remnant of thirsty ashes, and Corynaeus gathered the bones and shut them in an urn of brass; and he too thrice encircled his comrades with fresh water, and cleansed them with light spray sprinkled from a bough of fruitful olive, and spoke the last words of all. But good Aeneas heaps a mighty mounded tomb over him, with his own armour and his oar and trumpet, beneath a skyey mountain that now is called Misenus after him, and keeps his name immortal from age to age.

This done, he hastens to fulfil the Sibyl's ordinance. A deep cave yawned dreary and vast, shingle-strewn, sheltered by the black lake and the gloom of the forests; over it no flying things could wing their way unharmed, such a vapour streamed from the dark gorge and rose into the overarching sky.
Here the priestess first arrays four black-bodied bullocks and pours wine upon their forehead; and plucking the topmost hairs from between the horns, lays them on the sacred fire for first-offering, calling aloud on Hecate, mistress of heaven and hell. Others lay knives beneath, and catch the warm blood in cups. Aeneas himself smites with the sword a black-fleeced she-lamb to the mother of the Eumenides and her mighty sister, and a barren heifer, Proserpine, to thee. Then he uprears darkling altars to the Stygian king, and lays whole carcases of bulls upon the flames, pouring fat oil over the blazing entrails. And lo! about the first rays of sunrise the ground moaned underfoot, and the woodland ridges began to stir, and dogs seemed to howl through the dusk as the goddess came. 'Apart, ah keep apart, O ye unsanctified!' cries the soothsayer; 'retire from all the grove; and thou, stride on and unsheath thy steel; now is need of courage, O Aeneas, now of strong resolve.' So much she spoke, and plunged madly into the cavern's opening; he with unflinching steps keeps pace with his advancing guide.

Gods who are sovereign over souls! silent ghosts, and Chaos and Phlegethon, the wide dumb realm of night! as I have heard, so let me tell, and according to your will unfold things sunken deep under earth in gloom.

They went darkling through the dusk beneath the solitary night, through the empty dwellings and bodiless realm of Dis; even as one walks in the forest beneath the jealous light of a doubtful moon, when Jupiter shrouds the sky in shadow and black night blots out the world. Right in front of the doorway and in the entry of the jaws of hell Grief and avenging Cares have made their bed; there dwell wan Sicknesses and gloomy Eld, and Fear, and ill-counselling Hunger, and loathly Want, shapes terrible to see; and Death and Travail, and thereby Sleep, Death's kinsman, and the Soul's guilty Joys, and death-dealing War full in the gateway, and the Furies in their iron cells, and mad Discord with bloodstained fillets enwreathing her serpent locks. Midway an elm, shadowy and high, spreads her boughs and secular arms, where, one saith, idle Dreams dwell clustering, and cling under every leaf. And monstrous creatures besides, many and diverse, keep covert at the gates, Centaurs and twy-shaped Scyllas, and the hundredfold Briareus, and the beast of Lerna hissing horribly, and the Chimaera armed with flame, Gorgons and Harpies, and the body of the triform shade. Here Aeneas snatches at his sword in a sudden flutter of terror, and turns the naked edge on them as they come; and did not his wise fellow-passenger remind him that these lives flit thin and unessential in the hollow mask of body, he would rush on and vainly lash through phantoms with his steel.

Hence a road leads to Tartarus and Acheron's wave. Here the dreary pool swirls thick in muddy eddies and disgorges into Cocytus with its load of sand. Charon, the dread ferryman, guards these flowing streams, ragged and awful, his chin covered with untrimmed masses of hoary hair, and his glassy eyes aflame; his soiled raiment hangs knotted from his shoulders. Himself he plies the pole and trims the sails of his vessel, the steel-blue galley with freight of dead; stricken now in years, but a god's old age is lusty and green.
Hither all crowded, and rushed streaming to the bank, matrons and men and high-hearted heroes dead and done with life, boys and unwedded girls, and children laid young on the bier before their parents' eyes, multitudinous as leaves fall dropping in the forests at autumn's earliest frost, or birds swarm landward from the deep gulf, when the chill of the year routs them overseas and drives them to sunny lands. They stood pleading for the first passage across, and stretched forth passionate hands to the farther shore. But the grim sailor admits now one and now another, while some he pushes back far apart on the strand. Moved with marvel at the confused throng: 'Say, O maiden,' cries Aeneas, 'what means this flocking to the river? of what are the souls so fain? or what difference makes these retire from the banks, those go with sweeping oars over the leaden waterways?'

To him the long-lived priestess thus briefly returned: 'Seed of Anchises, most sure progeny of gods, thou seest the deep pools of Cocytus and the Stygian marsh, by whose divinity the gods fear to swear falsely. All this crowd thou discernest is helpless and unsepultured; Charon is the ferryman; they who ride on the wave found a tomb. Nor is it given to cross the awful banks and hoarse streams ere the dust hath found a resting-place. An hundred years they wander here flitting about the shore; then at last they gain entrance, and revisit the pools so sorely desired.'

Anchises' son stood still, and ponderingly stayed his footsteps, pitying at heart their cruel lot. There he discerns, mournful and unhonoured dead, Leucaspis and Orontes, captains of the Lycian squadron, whom, as they sailed together from Troy over gusty seas, the south wind overwhelmed and wrapped the waters round ship and men.

Lo, there went by Palinurus the steersman, who of late, while he watched the stars on their Libyan passage, had slipped from the stern and fallen amid the waves. To him, when he first knew the melancholy form in that depth of shade, he thus opens speech: 'What god, O Palinurus, reft thee from us and sank thee amid the seas? forth and tell. For in this single answer Apollo deceived me, never found false before, when he prophesied thee safety on ocean and arrival on the Ausonian coasts. See, is this his promise-keeping?'

And he: 'Neither did Phoebus on his oracular seat delude thee, O prince, Anchises' son, nor did any god drown me in the sea. For while I clung to my appointed charge and governed our course, I pulled the tiller with me in my fall, and the shock as I slipped wrenched it away. By the rough seas I swear, fear for myself never wrung me so sore as for thy ship, lest, the rudder lost and the pilot struck away, those gathering waves might master it. Three wintry nights in the water the blustering south drove me over the endless sea; scarcely on the fourth dawn I descried Italy as I rose on the climbing wave. Little by little I swam shoreward; already I clung safe; but while, encumbered with my dripping raiment, I caught with crooked fingers at the jagged needles of mountain rock, the barbarous people attacked me in arms and ignorantly deemed me a prize. Now the wave holds me, and the winds toss me on the shore. By heaven's pleasant light and breezes I beseech thee, by thy father, by Iülus thy rising hope, rescue me from these distresses, O unconquered one! Either do thou, for thou canst, cast earth over me and again seek the haven of Velia; or do thou, if in any wise that may be, if in any wise the goddess who bore thee shews a way,—for not without divine will do I deem thou wilt float across these vast rivers and the Stygian pool,—lend me a pitying hand, and bear me over the waves in thy company, that at least in death I may find a quiet resting-place.'

Thus he ended, and the soothsayer thus began: 'Whence, O Palinurus, this fierce longing of thine? Shalt thou without burial behold the Stygian waters and the awful river of the Furies? Cease to hope prayers may bend the decrees of heaven. But take my words to thy memory, for comfort in thy woeful case: far and wide shall the bordering cities be driven by celestial portents to appease thy dust; they shall rear a tomb, and pay the tomb a yearly offering, and for evermore shall the place keep Palinurus' name.' The words soothed away his distress, and for a while drove grief away from his sorrowing heart; he is glad in the land of his name.

So they complete their journey's beginning, and draw nigh the river. Just then the waterman descried them from the Stygian wave advancing through the silent woodland and turning their feet towards the bank, and opens on them in these words of challenge: 'Whoso thou art who marchest in arms towards our river, forth and say, there as thou art, why thou comest, and stay thine advance. This is the land of Shadows, of Sleep, and slumberous Night; no living body may the Stygian hull convey. Nor truly had I joy of taking Alcides on the lake for passenger, nor Theseus and Pirithoüs, born of gods though they were and unconquered in might. He laid fettering hand on the warder of Tartarus, and dragged him cowering from the throne of my lord the King; they essayed to ravish our mistress from the bridal chamber of Dis.'

Thereto the Amphrysian soothsayer made brief reply: 'No such plot is here; be not moved; nor do our weapons offer violence; the huge gatekeeper may bark on for ever in his cavern and affright the bloodless ghosts; Proserpine may keep her honour within her uncle's gates. Aeneas of Troy, renowned in goodness as in arms, goes down to meet his father in the deep shades of Erebus. If the sight of such affection stirs thee in nowise, yet this bough' (she discovers the bough hidden in her raiment) 'thou must know.' Then his heaving breast allays its anger, and he says no more; but marvelling at the awful gift, the fated rod so long unseen, he steers in his dusky vessel and draws to shore. Next he routs out the souls that sate on the long benches, and clears the thwarts, while he takes mighty Aeneas on board. The galley groaned under the weight in all her seams, and the marsh-water leaked fast in. At length prophetess and prince are landed unscathed on the ugly ooze and livid sedge.

This realm rings with the triple-throated baying of vast Cerberus, couched huge in the cavern opposite; to whom the prophetess, seeing the serpents already bristling up on his neck, throws a cake made slumberous with honey and drugged grain. He, with threefold jaws gaping in ravenous hunger, catches it when thrown, and sinks to earth with monstrous body outstretched, and sprawling huge over all his den. The warder overwhelmed, Aeneas makes entrance, and quickly issues from the bank of the irremeable wave.
Immediately wailing voices are loud in their ears, the souls of babies crying on the doorway sill, whom, torn from the breast and portionless in life's sweetness, a dark day cut off and drowned in bitter death. Hard by them are those condemned to death on false accusation. Neither indeed are these dwellings assigned without lot and judgment; Minos presides and shakes the urn; he summons a council of the silent people, and inquires of their lives and charges. Next in order have these mourners their place whose own innocent hands dealt them death, who flung away their souls in hatred of the day. How fain were they now in upper air to endure their poverty and sore travail! It may not be; the unlovely pool locks them in her gloomy wave, and Styx pours her ninefold barrier between.

And not far from here are shewn stretching on every side the Wailing Fields; so they call them by name. Here they whom pitiless love hath wasted in cruel decay hide among untrodden ways, shrouded in embosoming myrtle thickets; not death itself ends their distresses. In this region he discerns Phaedra and Procris and woeful Eriphyle, shewing on her the wounds of her merciless son, and Evadne and Pasiphaë; Laodamia goes in their company, and she who was once Caeneus and a man, now woman, and again returned by fate into her shape of old.

Among whom Dido the Phoenician, fresh from her death-wound, wandered in the vast forest; by her the Trojan hero stood, and knew the dim form through the darkness, even as the moon at the month's beginning to him who sees or thinks he sees her rising through the vapours; he let tears fall, and spoke to her lovingly and sweet: 'Alas, Dido! so the news was true that reached me; thou didst perish, and the sword sealed thy doom! Ah me, was I cause of thy death? By the stars I swear, by the heavenly powers and all that is sacred beneath the earth, unwillingly, O queen, I left thy shore. But the gods, at whose orders now I pass through this shadowy place, this land of mouldering overgrowth and deep night, the gods' commands drove me forth; nor could I deem my departure would bring thee pain so great as this. Stay thy footstep, and withdraw not from our gaze. From whom fliest thou? the last speech of thee fate ordains me is this.'

In such words and with starting tears Aeneas soothed the burning and fierce-eyed soul. She turned away with looks fixed fast on the ground, stirred no more in countenance by the speech he essays than if she stood in iron flint or Marpesian stone. At length she started, and fled wrathfully into the shadowy woodland, where Sychaeus, her ancient husband, responds to her distresses and equals her affection. Yet Aeneas, dismayed by her cruel doom, follows her far on her way with pitying tears.

Thence he pursues his appointed path. And now they trod those utmost fields where the renowned in war have their haunt apart. Here Tydeus meets him; here Parthenopaeus, glorious in arms, and the pallid phantom of Adrastus; here the Dardanians long wept on earth and fallen in the war; sighing he discerns all their long array, Glaucus and Medon and Thersilochus, the three children of Antenor, and Polyphoetes, Ceres' priest, and Idaeus yet charioted, yet grasping his arms. The souls throng round him to right and left; nor is one look enough; lingering delighted, they pace by his side and enquire wherefore he is come.
But the princes of the Grecians and Agamemnon's armies, when they see him glittering in arms through the gloom, hurry terror-stricken away; some turn backward, as when of old they fled to the ships; some raise their voice faintly, and gasp out a broken ineffectual cry.

And here he saw Deïphobus son of Priam, with face cruelly torn, face and both hands, and ears lopped from his mangled temples, and nostrils maimed by a shameful wound. Barely he knew the cowering form that hid its dreadful punishment; then he springs to accost it in familiar speech: 'Deïphobus mighty in arms, seed of Teucer's royal blood, whose wantonness of vengeance was so cruel? who was allowed to use thee thus? Rumour reached me that on that last night, outwearied with endless slaughter, thou hadst sunk on the heap of mingled carnage. Then mine own hand reared an empty tomb on the Rhoetean shore, mine own voice thrice called aloud upon thy ghost. Thy name and armour keep the spot; thee, O my friend, I could not see nor lay in the native earth I left.'

Whereto the son of Priam: 'In nothing, O my friend, wert thou wanting; thou hast paid the full to Deïphobus and the dead man's shade. But me my fate and the Laconian woman's murderous guilt thus dragged down to doom; these are the records of her leaving. For how we spent that last night in delusive gladness thou knowest, and must needs remember too well. When the fated horse leapt down on the steep towers of Troy, bearing armed infantry for the burden of its womb, she, in feigned procession, led round our Phrygian women with Bacchic cries; herself she upreared a mighty flame amid them, and called the Grecians out of the fortress height. Then was I fast in mine ill-fated bridal chamber, deep asleep and outworn with my charge, and lay overwhelmed in slumber sweet and profound and most like to easeful death. Meanwhile that crown of wives removes all the arms from my dwelling, and slips out the faithful sword from beneath my head: she calls Menelaus into the house and flings wide the gateway: be sure she hoped her lover would magnify the gift, and so she might quench the fame of her ill deeds of old. Why do I linger? They burst into the chamber, they and the Aeolid, counsellor of crime, in their company. Gods, recompense the Greeks even thus, if with righteous lips I call for vengeance! But come, tell in turn what hap hath brought thee hither yet alive. Comest thou driven on ocean wanderings, or by promptings from heaven? or what fortune keeps thee from rest, that thou shouldst draw nigh these sad sunless dwellings, this disordered land?'

In this change of talk Dawn had already crossed heaven's mid axle on her rose-charioted way; and haply had they thus drawn out all the allotted time; but the Sibyl made brief warning speech to her companion: 'Night falls, Aeneas; we waste the hours in weeping. Here is the place where the road disparts; by this that runs to the right under great Dis' city is our path to Elysium; but the leftward wreaks vengeance on the wicked and sends them to unrelenting hell.' But Deïphobus: 'Be not angered, mighty priestess; I will depart, I will refill my place and return into darkness. Go, glory of our people, go, enjoy a fairer fate than mine.' Thus much he spoke, and on the word turned away his footsteps.

Aeneas looks swiftly back, and sees beneath the cliff on the left hand a wide city, girt with a triple wall and encircled by a racing river of boiling flame, Tartarean Phlegethon, that echoes over its rolling rocks.
In front is the gate, huge and pillared with solid adamant, that no warring force of men nor the very habitants of heaven may avail to overthrow; it stands up a tower of iron, and Tisiphone sitting girt in bloodstained pall keeps sleepless watch at the entry by night and day. Hence moans are heard and fierce lashes resound, with the clank of iron and dragging chains. Aeneas stopped and hung dismayed at the tumult. 'What shapes of crime are here? declare, O maiden; or what the punishment that pursues them, and all this upsurging wail?'

Then the soothsayer thus began to speak: 'Illustrious chief of Troy, no pure foot may tread these guilty courts; but to me Hecate herself, when she gave me rule over the groves of Avernus, taught how the gods punish, and guided me through all her realm. Gnosian Rhadamanthus here holds unrelaxing sway, chastises secret crime revealed, and exacts confession, wheresoever in the upper world one vainly exultant in stolen guilt hath till the dusk of death kept clear from the evil he wrought. Straightway avenging Tisiphone, girt with her scourge, tramples down the shivering sinners, menaces them with the grim snakes in her left hand, and summons forth her sisters in merciless train. Then at last the sacred gates are flung open and grate on the jarring hinge. Markest thou what sentry is seated in the doorway? what shape guards the threshold? More grim within sits the monstrous Hydra with her fifty black yawning throats: and Tartarus' self gapes sheer and strikes into the gloom through twice the space that one looks upward to Olympus and the skyey heaven.

'Here Earth's ancient children, the Titans' brood, hurled down by the thunderbolt, lie wallowing in the abyss. Here likewise I saw the twin Aloïds, enormous of frame, who essayed with violent hands to pluck down high heaven and thrust Jove from his upper realm. Likewise I saw Salmoneus in the cruel payment he gives for mocking Jove's flame and Olympus' thunders. Borne by four horses and brandishing a torch, he rode in triumph midway through the populous city of Grecian Elis, and claimed for himself the worship of deity; madman! who would mimic the storm-cloud and the inimitable bolt with brass that rang under his trampling horse-hoofs. But the Lord omnipotent hurled his shaft through thickening clouds (no firebrand his nor smoky glare of torches) and dashed him headlong in the fury of the whirlwind. Therewithal Tityos might be seen, fosterling of Earth the mother of all, whose body stretches over nine full acres, and a monstrous vulture with crooked beak eats away the imperishable liver and the entrails that breed in suffering, and plunges deep into the breast that gives it food and dwelling; nor is any rest given to the fibres that ever grow anew. Why tell of the Lapithae, of Ixion and Pirithoüs? over whom a stone hangs just slipping and just as though it fell; or the high banqueting couches gleam golden-pillared, and the feast is spread in royal luxury before their faces; couched hard by, the eldest of the Furies wards the tables from their touch and rises with torch upreared and thunderous lips.

'Here are they who hated their brethren while life endured, or struck a parent or entangled a client in wrong, or who brooded alone over found treasure and shared it not with their fellows, this the greatest multitude of all; and they who were slain for adultery, and who followed unrighteous arms, and feared not to betray their masters' plighted hand. Imprisoned they await their doom. Seek not to be told that doom, that fashion of fortune wherein they are sunk. Some roll a vast stone, or hang outstretched on the spokes of wheels; hapless Theseus sits and shall sit for ever, and Phlegyas in his misery gives counsel to all and witnesses aloud through the gloom: "Learn by this warning to do justly and not to slight the gods." This man sold his country for gold, and laid her under a tyrant's sway; he set up and pulled down laws at a price; this other forced his daughter's bridal chamber and a forbidden marriage; all dared some monstrous wickedness, and had success in what they dared. Not had I an hundred tongues, an hundred mouths, and a voice of iron, could I sum up all the shapes of crime or name over all their punishments.'

Thus spoke Phoebus' long-lived priestess; then 'But come now,' she cries; 'haste on the way and perfect the service begun; let us go faster; I descry the ramparts cast in Cyclopean furnaces, and in front the arched gateway where they bid us lay the gifts foreordained.' She ended, and advancing side by side along the shadowy ways, they pass over and draw nigh the gates. Aeneas makes entrance, and sprinkling his body with fresh water, plants the bough full in the gateway.

Now at length, this fully done, and the service of the goddess perfected, they came to the happy place, the green pleasances and blissful seats of the Fortunate Woodlands. Here an ampler air clothes the meadows in lustrous sheen, and they know their own sun and a starlight of their own. Some exercise their limbs in tournament on the greensward, contend in games, and wrestle on the yellow sand. Some dance with beating footfall and lips that sing; with them is the Thracian priest in sweeping robe, and makes music to their measures with the notes' sevenfold interval, the notes struck now with his fingers, now with his ivory rod. Here is Teucer's ancient brood, a generation excellent in beauty, high-hearted heroes born in happier years, Ilus and Assaracus, and Dardanus, founder of Troy. Afar he marvels at the armour and chariots empty of their lords: their spears stand fixed in the ground, and their unyoked horses pasture at large over the plain: their life's delight in chariot and armour, their care in pasturing their sleek horses, follows them in like wise low under earth. Others, lo! he beholds feasting on the sward to right and left, and singing in chorus the glad Paean-cry, within a scented laurel-grove whence Eridanus river surges upward full-volumed through the wood. Here is the band of them who bore wounds in fighting for their country, and they who were pure in priesthood while life endured, and the good poets whose speech abased not Apollo; and they who made life beautiful by the arts of their invention, and who won by service a memory among men, the brows of all girt with the snow-white fillet.

To their encircling throng the Sibyl spoke thus, and to Musaeus before them all; for he is midmost of all the multitude, and stands out head and shoulders among their upward gaze: 'Tell, O blissful souls, and thou, poet most gracious, what region, what place hath Anchises for his own? For his sake are we come, and have sailed across the wide rivers of Erebus.' And to her the hero thus made brief reply: 'None hath a fixed dwelling; we live in the shady woodlands; soft-swelling banks and meadows fresh with streams are our habitation. But you, if this be your heart's desire, scale this ridge, and I will even now set you on an easy pathway.'
He spoke, and paced on before them, and from above shews the shining plains; thereafter they leave the mountain heights.

But lord Anchises, deep in the green valley, was musing in earnest survey over the imprisoned souls destined to the daylight above, and haply reviewing his beloved children and all the tale of his people, them and their fates and fortunes, their works and ways. And he, when he saw Aeneas advancing to meet him over the greensward, stretched forth both hands eagerly, while tears rolled over his cheeks, and his lips parted in a cry: 'Art thou come at last, and hath thy love, O child of my desire, conquered the difficult road? Is it granted, O my son, to gaze on thy face and hear and answer in familiar tones? Thus indeed I forecast in spirit, counting the days between; nor hath my care misled me. What lands, what space of seas hast thou traversed to reach me, through what surge of perils, O my son! How I dreaded the realm of Libya might work thee harm!'

And he: 'Thy melancholy phantom, thine, O my father, came before me often and often, and drove me to steer to these portals. My fleet is anchored on the Tyrrhenian brine. Give thine hand to clasp, O my father, give it, and withdraw not from our embrace.' So spoke he, his face wet with abundant weeping. Thrice there did he essay to fling his arms about his neck; thrice the phantom vainly grasped fled out of his hands even as light wind, and most like to fluttering sleep.

Meanwhile Aeneas sees deep withdrawn in the covert of the vale a woodland and rustling forest thickets, and the river of Lethe that floats past their peaceful dwellings. Around it flitted nations and peoples innumerable; even as in the meadows when in clear summer weather bees settle on the variegated flowers and stream round the snow-white lilies, all the plain is murmurous with their humming. Aeneas starts at the sudden view, and asks the reason he knows not; what are those spreading streams, or who are they whose vast train fills the banks?

Then lord Anchises: 'Souls, for whom second bodies are destined and due, drink at the wave of the Lethean stream the heedless water of long forgetfulness. These of a truth have I long desired to tell and shew thee face to face, and number all the generation of thy children, that so thou mayest the more rejoice with me in finding Italy.'—'O father, must we think that any souls travel hence into upper air, and return again to bodily fetters? why this their strange sad longing for the light?' 'I will tell,' rejoins Anchises, 'nor will I hold thee in suspense, my son.' And he unfolds all things in order one by one.

'First of all, heaven and earth and the liquid fields, the shining orb of the moon and the Titanian star, doth a spirit sustain inly, and a soul shed abroad in them sways all their members and mingles in the mighty frame. Thence is the generation of man and beast, the life of winged things, and the monstrous forms that ocean breeds under his glittering floor. Those seeds have fiery force and divine birth, so far as they are not clogged by taint of the body and dulled by earthy frames and limbs ready to die. Hence is it they fear and desire, sorrow and rejoice; nor can they pierce the air while barred in the blind darkness of their prison-house. Nay, and when the last ray of life is gone, not yet, alas! does all their woe, nor do all the plagues of the body wholly leave them free; and needs must be that many a long ingrained evil should take root marvellously deep. Therefore they are schooled in punishment, and pay all the forfeit of a lifelong ill; some are hung stretched to the viewless winds; some have the taint of guilt washed out beneath the dreary deep, or burned away in fire. We suffer, each a several ghost; thereafter we are sent to the broad spaces of Elysium, some few of us to possess the happy fields; till length of days completing time's circle takes out the ingrained soilure and leaves untainted the ethereal sense and pure spiritual flame. All these before thee, when the wheel of a thousand years hath come fully round, a God summons in vast train to the river of Lethe, that so they may regain in forgetfulness the slopes of upper earth, and begin to desire to return again into the body.'

Anchises ceased, and leads his son and the Sibyl likewise amid the assembled murmurous throng, and mounts a hillock whence he might scan all the long ranks and learn their countenances as they came.

'Now come, the glory hereafter to follow our Dardanian progeny, the posterity to abide in our Italian people, illustrious souls and inheritors of our name to be, these will I rehearse, and instruct thee of thy destinies. He yonder, seest thou? the warrior leaning on his pointless spear, holds the nearest place allotted in our groves, and shall rise first into the air of heaven from the mingling blood of Italy, Silvius of Alban name, the child of thine age, whom late in thy length of days thy wife Lavinia shall nurture in the woodland, king and father of kings; from him in Alba the Long shall our house have dominion. He next him is Procas, glory of the Trojan race; and Capys and Numitor; and he who shall renew thy name, Silvius Aeneas, eminent alike in goodness or in arms, if ever he shall receive his kingdom in Alba. Men of men! see what strength they display, and wear the civic oak shading their brows. They shall establish Nomentum and Gabii and Fidena city, they the Collatine hill-fortress, Pometii and the Fort of Inuus, Bola and Cora: these shall be names that are now nameless lands.

'Nay, Romulus likewise, seed of Mavors, shall join his grandsire's company, from his mother Ilia's nurture and Assaracus' blood. Seest thou how the twin plumes straighten on his crest, and his father's own emblazonment already marks him for upper air? Behold, O son! by his augury shall Rome the renowned fill earth with her empire and heaven with her pride, and gird about seven fortresses with her single wall, prosperous mother of men; even as our lady of Berecyntus rides in her chariot turret-crowned through the Phrygian cities, glad in the gods she hath borne, clasping an hundred of her children's children, all habitants of heaven, all dwellers on the upper heights.

'Hither now bend thy twin-eyed gaze; behold this people, the Romans that are thine. Here is Caesar and all Iülus' posterity that shall arise under the mighty cope of heaven. Here is he, he of whose promise once and again thou hearest, Caesar Augustus, a god's son, who shall again establish the ages of gold in Latium over the fields that once were Saturn's realm, and carry his empire afar to Garamant and Indian, to the land that lies beyond our stars, beyond the sun's yearlong ways, where Atlas the sky-bearer wheels on his shoulder the glittering star-spangled pole. Before his coming even now the kingdoms of the Caspian shudder at oracular answers, and the Maeotic land and the mouths of sevenfold Nile flutter in alarm.

'Nor indeed did Alcides traverse such spaces of earth, though he pierced the brazen-footed deer, or though he stilled the Erymanthian woodlands and made Lerna tremble at his bow: nor he who sways his team with reins of vine, Liber the conqueror, when he drives his tigers from Nysa's lofty crest. And do we yet hesitate to give valour scope in deeds, or shrink in fear from setting foot on Ausonian land?

'Ah, and who is he apart, marked out with sprays of olive, offering sacrifice? I know the locks and hoary chin of the king of Rome who shall establish the infant city in his laws, sent from little Cures' sterile land to the majesty of empire. To him Tullus shall next succeed, who shall break the peace of his country and stir to arms men rusted from war and armies now disused to triumphs; and hard on him over-vaunting Ancus follows, even now too elate in popular breath. Wilt thou see also the Tarquin kings, and the haughty soul of Brutus the Avenger, and the fasces regained? He shall first receive a consul's power and the merciless axes, and when his children would stir fresh war, the father, for fair freedom's sake, shall summon them to doom. Unhappy! yet howsoever posterity shall take the deed, love of country and limitless passion for honour shall prevail.

'Nay, behold apart the Decii and the Drusi, Torquatus with his cruel axe, and Camillus returning with the standards. Yonder souls likewise, whom thou discernest gleaming in equal arms, at one now, while shut in Night, ah me! what mutual war, what battle-lines and bloodshed shall they arouse, so they attain the light of the living! father-in-law descending from the Alpine barriers and the fortress of the Dweller Alone, son-in-law facing him with the embattled East. Nay, O my children, harden not your hearts to such warfare, neither turn upon her own heart the mastering might of your country; and thou, be thou first to forgive, who drawest thy descent from heaven; cast down the weapons from thy hand, O blood of mine. . . .

'He shall drive his conquering chariot to the Capitoline height triumphant over Corinth, glorious in Achaean slaughter. He shall uproot Argos and Agamemnonian Mycenae, and the Aeacid's own heir, the seed of Achilles mighty in arms, avenging his ancestors in Troy and Minerva's polluted temple. Who might leave thee, lordly Cato, or thee, Cossus, to silence? who the Gracchan family, or these two sons of the Scipios, a double thunderbolt of war, Libya's bale? and Fabricius potent in poverty, or thee, Serranus, sowing in the furrow? Whither whirl you me all breathless, O Fabii? thou art he, the most mighty, the one man whose lingering retrieves our State.

'Others shall beat out the breathing bronze to softer lines, I believe it well; shall draw living lineaments from the marble; the cause shall be more eloquent on their lips; their pencil shall portray the pathways of heaven, and tell the stars in their arising: be thy charge, O Roman, to rule the nations in thine empire; this shall be thine art, to lay down the law of peace, to be merciful to the conquered and beat the haughty down.'

Thus lord Anchises, and as they marvel, he so pursues: 'Look how Marcellus the conqueror marches glorious in the splendid spoils, towering high above them all! He shall stay the Roman State, reeling beneath the invading shock, shall ride down Carthaginian and insurgent Gaul, and a third time hang up the captured armour before lord Quirinus.'
And at this Aeneas, for he saw going by his side one excellent in beauty and glittering in arms, but his brow had little cheer, and his eyes looked down: 'Who, O my father, is he who thus attends him on his way? son, or other of his children's princely race? How his comrades murmur around him! how goodly of presence he is! but dark Night flutters round his head with melancholy shade.'

Then lord Anchises with welling tears began: 'O my son, ask not of the great sorrow of thy people. Him shall fate but shew to earth, and suffer not to stay further. Too mighty, lords of heaven, did you deem the brood of Rome, had this your gift been abiding. What moaning of men shall arise from the Field of Mavors by the imperial city! what a funeral train shalt thou see, O Tiber, as thou flowest by the new-made grave! Neither shall the boyhood of any of Ilian race raise his Latin forefathers' hope so high; nor shall the land of Romulus ever boast of any fosterling like this. Alas his goodness, alas his antique honour, and right hand invincible in war! none had faced him unscathed in armed shock, whether he met the foe on foot, or ran his spurs into the flanks of his foaming horse. Ah me, the pity of thee, O boy! if in any wise thou breakest the grim bar of fate, thou shalt be Marcellus. Give me lilies in full hands; let me strew bright blossoms, and these gifts at least let me lavish on my descendant's soul, and do the unavailing service.'

Thus they wander up and down over the whole region of broad vaporous plains, and scan all the scene. And when Anchises had led his son over it, each point by each, and kindled his spirit with passion for the glories on their way, he tells him thereafter of the war he next must wage, and instructs him of the Laurentine peoples and the city of Latinus, and in what wise each task may be turned aside or borne.

There are twin portals of Sleep, whereof the one is fabled of horn, and by it real shadows are given easy outlet; the other shining white of polished ivory, but false visions issue upward from the ghostly world. With these words then Anchises follows forth his son and the Sibyl together there, and dismisses them by the ivory gate. He pursues his way to the ships and revisits his comrades; then bears on to Caieta's haven straight along the shore. The anchor is cast from the prow; the sterns are grounded on the beach.

BOOK SEVENTH

THE LANDING IN LATIUM, AND THE ROLL OF THE ARMIES OF ITALY

Thou also, Caieta, nurse of Aeneas, gavest our shores an everlasting renown in death; and still thine honour haunts thy resting-place, and a name in broad Hesperia, if that be glory, marks thy dust. But when the last rites are duly paid, and the mound smoothed over the grave, good Aeneas, now the high seas are hushed, bears on under sail and leaves his haven. Breezes blow into the night, and the white moonshine speeds them on; the sea glitters in her quivering radiance.

Soon they skirt the shores of Circe's land, where the rich daughter of the Sun makes her untrodden groves echo with ceaseless song; and her stately house glows nightlong with burning odorous cedarwood, as she runs over her delicate web with the ringing comb. Hence are heard afar angry cries of lions chafing at their fetters and roaring in the deep night; bears and bristly swine rage in their pens, and vast shapes of wolves howl; whom with her potent herbs the deadly divine Circe had disfashioned, face and body, into wild beasts from the likeness of men.
But lest the good Trojans might suffer so dread a change, might enter her haven or draw nigh the ominous shores, Neptune filled their sails with favourable winds, and gave them escape, and bore them past the seething shallows.

And now the sea reddened with shafts of light, and high in heaven the yellow dawn shone rose-charioted; when the winds fell, and every breath sank suddenly, and the oar-blades toil through the heavy ocean-floor. And on this Aeneas descries from sea a mighty forest. Midway in it the pleasant Tiber stream breaks to sea in swirling eddies, laden with yellow sand. Around and above fowl many in sort, that haunt his banks and river-channel, solaced heaven with song and flew about the forest. He orders his crew to bend their course and turn their prows to land, and glides joyfully into the shady river.

Forth now, Erato! and I will unfold who were the kings, what the tides of circumstance, how it was with ancient Latium when first that foreign army drew their fleet ashore on Ausonia's coast; I will recall the preluding of battle. Thou, divine one, inspire thou thy poet. I will tell of grim wars, tell of embattled lines, of kings whom honour drove on death, of the Tyrrhenian forces, and all Hesperia enrolled in arms. A greater history opens before me, a greater work I essay.

Latinus the King, now growing old, ruled in a long peace over quiet tilth and town. He, men say, was sprung of Faunus and the nymph Marica of Laurentum. Faunus' father was Picus; and he boasts himself, Saturn, thy son; thou art the first source of their blood. Son of his, by divine ordinance, and male descent was none, cut off in the early spring of youth. One alone kept the household and its august home, a daughter now ripe for a husband and of full years for marriage. Many wooed her from wide Latium and all Ausonia. Fairest and foremost of all is Turnus, of long and lordly ancestry; but boding signs from heaven, many and terrible, bar the way.

Within the palace, in the lofty inner courts, was a laurel of sacred foliage, guarded in awe through many years, which lord Latinus, it was said, himself found and dedicated to Phoebus when first he would build his citadel; and from it gave his settlers their name, Laurentines. High atop of it, wonderful to tell, bees borne with loud humming across the liquid air girt it thickly about, and with interlinked feet hung in a sudden swarm from the leafy bough. Straightway the prophet cries: 'I see a foreigner draw nigh, an army from the same quarter seek the same quarter, and reign high in our fortress.' Furthermore, while maiden Lavinia stands beside her father feeding the altars with holy fuel, she was seen, oh, horror! to catch fire in her long tresses, and burn with flickering flame in all her array, her queenly hair lit up, lit up her jewelled circlet; till, enwreathed in smoke and lurid light, she scattered fire over all the palace. That sight was rumoured wonderful and terrible. Herself, they prophesied, she should be glorious in fame and fortune; but a great war was foreshadowed for her people.

But the King, troubled by the omen, visits the oracle of his father Faunus the soothsayer, and the groves deep under Albunea, where, queen of the woods, she echoes from her holy well, and breathes forth a dim and deadly vapour.
Hence do the tribes of Italy and all the Oenotrian land seek answers in perplexity; hither the priest bears his gifts, and when he hath lain down and sought slumber under the silent night on the spread fleeces of slaughtered sheep, sees many flitting phantoms of wonderful wise, hears manifold voices, and attains converse of the gods, and hath speech with Acheron and the deep tract of hell. Here then, likewise seeking an answer, lord Latinus paid fit sacrifice of an hundred woolly ewes, and lay couched on the strewn fleeces they had worn. Out of the lofty grove a sudden voice was uttered: 'Seek not, O my child, to unite thy daughter in Latin espousals, nor trust her to the bridal chambers ready to thine hand; foreigners shall come to be thy sons, whose blood shall raise our name to heaven, and the children of whose race shall see, where the circling sun looks on either ocean, all the rolling world swayed beneath their feet.'

This his father Faunus' answer and counsel given in the silent night Latinus restrains not in his lips; but wide-flitting Rumour had already borne it round among the Ausonian cities, when the children of Laomedon moored their fleet to the grassy slope of the river bank.

Aeneas, with the foremost of his captains and fair Iülus, lay them down under the boughs of a high tree and array the feast. They spread wheaten cakes along the sward under their meats—so Jove on high prompted—and crown the platter of corn with wilding fruits. Here haply when the rest was spent, and scantness of food set them to eat their thin bread, and with hand and venturous teeth do violence to the round cakes fraught with fate and spare not the flattened squares: 'Ha! Are we eating our tables too?' cries Iülus jesting, and stops. At once that accent heard set their toils a limit; and at once as he spoke his father caught it from his lips and hushed him, in amazement at the omen.

Straightway 'Hail, O land!' he cries, 'my destined inheritance! and hail, O household gods, faithful to your Troy! here is home; this is our native country. For my father Anchises, now I remember it, bequeathed me this secret of fate: "When hunger shall drive thee, O son, to consume thy tables where the feast fails, on the unknown shores whither thou shalt sail; then, though outwearied, hope for home, and there at last let thine hand remember to set thy house's foundations and bulwarks." This was the hunger, this the last that awaited us, to set the promised end to our desolations . . . Up then, and, glad with the first sunbeam, let us explore and search all abroad from our harbour, what is the country, who its habitants, where is the town of the nation. Now pour your cups to Jove, and call in prayer on Anchises our father, setting the wine again upon the board.'

So speaks he, and binding his brows with a leafy bough, he makes supplication to the Genius of the ground, and Earth first of deities, and the Nymphs, and the Rivers yet unknown; then calls on Night and Night's rising signs, and next on Jove of Ida, and our lady of Phrygia, and on his twain parents, in heaven and in the under world. At this the Lord omnipotent thrice thundered sharp from high heaven, and with his own hand shook out for a sign in the sky a cloud ablaze with luminous shafts of gold. A sudden rumour spreads among the Trojan array, that the day is come to found their destined city. Emulously they renew the feast, and, glad at the high omen, array the flagons and engarland the wine.
Soon as the morrow bathed the lands in its dawning light, they part to search out the town, and the borders and shores of the nation: these are the pools and spring of Numicus; this is the Tiber river; here dwell the brave Latins. Then the seed of Anchises commands an hundred envoys chosen of every degree to go to the stately royal city, all with the wreathed boughs of Pallas, to bear him gifts and desire grace for the Teucrians. Without delay they hasten on their message, and advance with swift step. Himself he traces the city walls with a shallow trench, and builds on it; and in fashion of a camp girdles this first settlement on the shore with mound and battlements. And now his men had traversed their way; they espied the towers and steep roofs of the Latins, and drew near the wall. Before the city boys and men in their early [Pg 151][163-196]bloom exercise on horseback, and break in their teams on the dusty ground, or draw ringing bows, or hurl tough javelins from the shoulder, and contend in running and boxing: when a messenger riding forward brings news to the ears of the aged King that mighty men are come thither in unknown raiment. He gives orders to call them within his house, and takes his seat in the midst on his ancestral throne. His house, stately and vast, crowned the city, upreared on an hundred columns, once the palace of Laurentian Picus, amid awful groves of ancestral sanctity. Here their kings receive the inaugural sceptre, and have the fasces first raised before them; this temple was their senate-house; this their sacred banqueting-hall; here, after sacrifice of rams, the elders were wont to sit down at long tables. Further, there stood arow in the entry images of the forefathers of old in ancient cedar, Italus, and lord Sabinus, planter of the vine, still holding in show the curved pruning-hook, and gray Saturn, and the likeness of Janus the double-facing, and the rest of their primal kings, and they who had borne wounds of war in fighting for their country. Armour besides hangs thickly on the sacred doors, captured chariots and curved axes, helmet-crests and massy gateway-bars, lances and shields, and beaks torn from warships. He too sat there, with the divining-rod of Quirinus, girt in the short augural gown, and carrying on his left arm the sacred shield, Picus the tamer of horses; he whom Circe, desperate with amorous desire, smote with her golden rod and turned by her poisons into a bird with patches of colour on his wings. Of such wise was the temple of the gods wherein Latinus, sitting on his father's seat, summoned the Teucrians to his house and presence; and when they entered in, he thus opened with placid mien: 'Tell, O Dardanians, for we are not ignorant of your city and race, nor unheard of do you bend your course [Pg 152][197-228]overseas, what seek you? what the cause or whereof the need that hath borne you over all these blue waterways to the Ausonian shore? Whether wandering in your course, or tempest-driven (such perils manifold on the high seas do sailors suffer), you have entered the river banks and lie in harbour; shun not our welcome, and be not ignorant that the Latins are Saturn's people, whom no laws fetter to justice, upright of their own free will and the custom of the god of old. And now I remember, though the story is dimmed with years, thus Auruncan elders told, how Dardanus, born in this our country, made his way to the towns of Phrygian Ida and to the Thracian Samos that is now called Samothrace. 
Here was the home he left, Tyrrhenian Corythus; now the palace of heaven, glittering with golden stars, enthrones and adds him to the ranged altars of the gods.' He ended; and Ilioneus pursued his speech with these words: 'King, Faunus' illustrious progeny, neither hath black tempest driven us with stress of waves to shelter in your lands, nor hath star or shore misled us on the way we went. Of set purpose and willing mind do we draw nigh this thy city, outcasts from a realm once the greatest that the sun looked on as he came from Olympus' utmost border. From Jove hath our race beginning; in Jove the men of Dardania rejoice as ancestor; our King himself of Jove's supreme race, Aeneas of Troy, hath sent us to thy courts. How terrible the tempest that burst from fierce Mycenae over the plains of Ida, driven by what fate Europe and Asia met in the shock of two worlds, even he hath heard who is sundered in the utmost land where the ocean surge recoils, and he whom stretching midmost of the four zones the zone of the intolerable sun holds in severance. Borne by that flood over many desolate seas, we crave a scant dwelling [Pg 153][229-261]for our country's gods, an unmolested landing-place, and the air and water that are free to all. We shall not disgrace the kingdom; nor will the rumour of your renown be lightly gone or the grace of all you have done fade away; nor will Ausonia be sorry to have taken Troy to her breast. By the fortunes of Aeneas I swear, by that right hand mighty, whether tried in friendship or in warlike arms, many and many a people and nation—scorn us not because we advance with hands proffering chaplets and words of supplication—hath sought us for itself and desired our alliance; but yours is the land that heaven's high ordinance drove us forth to find. Hence sprung Dardanus: hither Apollo recalls us, and pushes us on with imperious orders to Tyrrhenian Tiber and the holy pools of Numicus' spring. Further, he presents to thee these small guerdons of our past estate, relics saved from burning Troy. From this gold did lord Anchises pour libation at the altars; this was Priam's array when he delivered statutes to the nations assembled in order; the sceptre, the sacred mitre, the raiment wrought by the women of Ilium. . . .' At these words of Ilioneus Latinus holds his countenance in a steady gaze, and stays motionless on the floor, casting his intent eyes around. Nor does the embroidered purple so move the King, nor the sceptre of Priam, as his daughter's marriage and the bridal chamber absorb him, and the oracle of ancient Faunus stirs deep in his heart. This is he, the wanderer from a foreign home, foreshewn of fate for his son, and called to a realm of equal dominion, whose race should be excellent in valour and their might overbear all the world. At last he speaks with good cheer: 'The gods prosper our undertaking and their own augury! What thou desirest, Trojan, shall be given; nor do I spurn your gifts. While Latinus reigns you shall not [Pg 154][262-294]lack foison of rich land nor Troy's own riches. Only let Aeneas himself come hither, if desire of us be so strong, if he be in haste to join our friendship and be called our ally. Let him not shrink in terror from a friendly face. A term of the peace for me shall be to touch your monarch's hand. Do you now convey in answer my message to your King. 
I have a daughter whom the oracles of my father's shrine and many a celestial token alike forbid me to unite to one of our own nation; sons shall come, they prophesy, from foreign coasts, such is the destiny of Latium, whose blood shall exalt our name to heaven. He it is on whom fate calls; this I think, this I choose, if there be any truth in my soul's foreshadowing.' Thus he speaks, and chooses horses for all the company. Three hundred stood sleek in their high stalls; for all the Teucrians in order he straightway commands them to be led forth, fleet-footed, covered with embroidered purple: golden chains hang drooping over their chests, golden their housings, and they champ on bits of ruddy gold: for the absent Aeneas a chariot and pair of chariot horses of celestial breed, with nostrils breathing flame; of the race of those which subtle Circe bred by sleight on her father, the bastard issue of a stolen union. With these gifts and words the Aeneadae ride back from Latinus carrying peace. And lo! the fierce wife of Jove was returning from Inachian Argos, and held her way along the air, when out of the distant sky, far as from Sicilian Pachynus, she espied the rejoicing of Aeneas and the Dardanian fleet. She sees them already house-building, already trusting in the land, their ships left empty. She stops, shot with sharp pain; then shaking her head, she pours forth these words: 'Ah, hated brood, and doom of the Phrygians that thwarts our doom! Could they perish on the Sigean plains? Could they be ensnared when taken? Did the fires of Troy consume her people? Through the midst of armies and through the midst of flames they have found their way. But, I think, my deity lies at last outwearied, or my hatred sleeps and is satisfied? Nay, it is I who have been fierce to follow them over the waves when hurled from their country, and on all the seas have crossed their flight. Against the Teucrians the forces of sky and sea are spent. What hath availed me Syrtes or Scylla, what desolate Charybdis? they find shelter in their desired Tiber-bed, careless of ocean and of me. Mars availed to destroy the giant race of the Lapithae; the very father of the gods gave over ancient Calydon to Diana's wrath: for forfeit of what crime in the Lapithae, what in Calydon? But I, Jove's imperial consort, who have borne, ah me! to leave naught undared, who have shifted to every device, I am vanquished by Aeneas. If my deity is not great enough, I will not assuredly falter to seek succour where it may be; if the powers of heaven are inflexible, I will stir up Acheron. It may not be to debar him of a Latin realm; well; and Lavinia is destined his bride unalterably. But it may be yet to defer, to make all this action linger; but it may be yet to waste away the nation of either king; at such forfeit of their people may son-in-law and father-in-law enter into union. Blood of Troy and Rutulia shall be thy dower, O maiden, and Bellona is the bridesmaid who awaits thee. Nor did Cisseus' daughter alone conceive a firebrand and travail of bridal flames. Nay, even such a birth hath Venus of her own, a second Paris, another balefire for Troy towers reborn.'
These words uttered, she descends to earth in all her terrors, and calls dolorous Allecto from the home of the Fatal Sisters in nether gloom, whose delight is in woeful wars, in wrath and treachery and evil feuds: hateful to lord Pluto himself, hateful and horrible to her hell-born sisters; into so many faces does she turn, so savage the guise of each, so thick and black bristles she with vipers. And her Juno spurs on with words, saying thus: 'Grant me, virgin born of Night, this thy proper task and service, that the rumour of our renown may not crumble away, nor the Aeneadae have power to win Latinus by marriage or beset the borders of Italy. Thou canst set brothers once united in armed conflict, and overturn families with hatreds; thou canst launch into houses thy whips and deadly brands; thine are a thousand names, a thousand devices of injury. Stir up thy teeming breast, sunder the peace they have joined, and sow seeds of quarrel; let all at once desire and demand and seize on arms.' Thereon Allecto, steeped in Gorgonian venom, first seeks Latium and the high house of the Laurentine monarch, and silently sits down before Amata's doors, whom a woman's distress and anger heated to frenzy over the Teucrians' coming and the marriage of Turnus. At her the goddess flings a snake out of her dusky tresses, and slips it into her bosom to her very inmost heart, that she may embroil all her house under its maddening magic. Sliding between her raiment and smooth breasts, it coils without touch, and instils its viperous breath unseen; the great serpent turns into the twisted gold about her neck, turns into the long ribbon of her chaplet, inweaves her hair, and winds slippery over her body. And while the gliding infection of the clammy poison begins to penetrate her sense and run in fire through her frame, nor as yet hath all her breast caught fire, softly she spoke and in mothers' wonted wise, with many a tear over her daughter and the Phrygian bridal: 'Is it to exiles, to Teucrians, that Lavinia is proffered in marriage, O father? and hast thou no compassion on thy daughter and on thyself? no compassion on her mother, whom with the first northern wind the treacherous rover will abandon, steering to sea with his maiden prize? Is it not thus the Phrygian herdsman wound his way to Lacedaemon, and carried Leda's Helen to the Trojan towns? Where is thy plighted faith? Where thine ancient care for thy people, and the hand Turnus thy kinsman hath so often clasped? If one of alien race from the Latins is sought for our son, if this stands fixed, and thy father Faunus' commands are heavy upon thee, all the land whose freedom severs it from our sway is to my mind alien, and of this is the divine word. And Turnus, if one retrace the earliest source of his line, is born of Inachus and Acrisius, and of the midmost of Mycenae.' When in this vain essay of words she sees Latinus fixed against her, and the serpent's maddening poison is sunk deep in her vitals and runs through and through her, then indeed, stung by infinite horrors, hapless and frenzied, she rages wildly through the endless city. As whilome a top flying under the twisted whipcord, which boys busy at their play drive circling wide round an empty hall, runs before the lash and spins in wide gyrations; the witless ungrown band hang wondering over it and admire the whirling boxwood; the strokes lend it life: with pace no slacker is she borne midway through towns and valiant nations.
Nay, she flies into the woodland under feigned Bacchic influence, assumes a greater guilt, arouses a greater frenzy, and hides her daughter in the mountain coverts to rob the Teucrians of their bridal and stay the marriage torches. 'Hail, Bacchus!' she shrieks and clamours; 'thou only art worthy of the maiden; for to thee she takes up the lissom wands, thee she circles in the dance, to thee she trains and consecrates her tresses.' Rumour flies abroad; and the matrons, their breasts kindled by the furies, run all at once with a single ardour to seek out strange dwellings. They have left their homes empty, they throw neck and hair free to the winds; while others fill the air with ringing cries, girt about with fawnskins, and carrying spears of vine. Amid them the infuriate queen holds her blazing pine-torch on high, and chants the wedding of Turnus and her daughter; and rolling her bloodshot gaze, cries sudden and harsh: 'Hear, O mothers of Latium, wheresoever you be; if unhappy Amata hath yet any favour in your affection, if care for a mother's right pierces you, untie the chaplets from your hair, begin the orgies with me.' Thus, amid woods and wild beasts' solitary places, does Allecto goad the queen with the encircling Bacchic madness. When their frenzy seemed heightened and her first task complete, the purpose and all the house of Latinus turned upside down, the dolorous goddess flies on thence, soaring on dusky wing, to the walls of the gallant Rutulian, the city which Danaë, they say, borne down on the boisterous south wind, built and planted with Acrision's people. The place was called Ardea once of old; and still Ardea remains a mighty name; but its fortune is no more. Here in his high house Turnus now took rest in the black midnight. Allecto puts off her grim feature and the body of a Fury; she transforms her face to an aged woman's, and furrows her brow with ugly wrinkles; she puts on white tresses chaplet-bound, and entwines them with an olive spray; she becomes aged Calybe, priestess of Juno's temple, and presents herself before his eyes, uttering thus: 'Turnus, wilt thou brook all these toils poured out in vain, and the conveyance of thy crown to Dardanian settlers? The King denies thee thy bride and the dower thy blood had earned; and a foreigner is sought for heir to the kingdom. Forth now, dupe, and face thankless perils; forth, cut down the Tyrrhenian lines; give the Latins peace in thy protection. This Saturn's omnipotent daughter in very presence commanded me to pronounce to thee, as thou wert lying in the still night. Wherefore arise, and make ready with good cheer to arm thy people and march through thy gates to battle; consume those Phrygian captains that lie with their painted hulls in the beautiful river. All the force of heaven orders thee on. Let King Latinus himself know of it, unless he consents to give thee thy bridal, and abide by his words, when he shall at last make proof of Turnus' arms.' But he, deriding her inspiration, with the words of his mouth thus answers her again: 'The fleets ride on the Tiber wave; that news hath not, as thou deemest, escaped mine ears. Frame not such terrors before me. Neither is Queen Juno forgetful of us. . . . But thee, O mother, overworn old age, exhausted and untrue, frets with vain distress, and amid embattled kings mocks thy presage with false dismay. Thy charge it is to keep the divine image and temple; war and peace shall be in the hands of men and warriors.'
At such words Allecto's wrath blazed out. But amid his utterance a quick shudder overruns his limbs; his eyes are fixed in horror; so thickly hiss the snakes of the Fury, so vast her form expands. Then rolling her fiery eyes, she thrust him back as he would stammer out more, raised two serpents in her hair, and, sounding her whip, resumed with furious tone: 'Behold me the overworn! me whom old age, exhausted and untrue, mocks with false dismay amid embattled kings! Look on this! I am come from the home of the Dread Sisters: war and death are in my hand. . . .' So speaking, she hurled her torch at him, and pierced his breast with the lurid smoking brand. He breaks from sleep in overpowering fear, his limbs and body bathed in sweat that breaks out all over him; he shrieks madly for arms, searches for arms on his bed and in his palace. The passion of the sword rages high, the accursed fury of war, and wrath over all: even as when flaming sticks are heaped roaring loud under the sides of a seething cauldron, and the boiling water leaps up; the river of water within smokes furiously and swells high in overflowing foam, and now the wave contains itself no longer; the dark steam flies aloft. So, for the stain of the broken peace, he orders his chief warriors to march on King Latinus, and bids prepare for battle, to defend Italy and drive the foe from their borders; himself will suffice for Trojans and Latins together. When he uttered these words and called the gods to hear his vows, the Rutulians stir one another up to arms. One is moved by the splendour of his youthful beauty, one by his royal ancestry, another by the noble deeds of his hand. While Turnus fills the Rutulian minds with valour, Allecto on Stygian wing hastens towards the Trojans. With fresh wiles she marked the spot where beautiful Iülus was trapping and coursing game on the bank; here the infernal maiden suddenly crosses his hounds with the maddening touch of a familiar scent, and drives them hotly on the stag-hunt. This was the source and spring of ill, and kindled the country-folk to war. The stag, beautiful and high-antlered, was stolen from his mother's udder and bred by Tyrrheus' boys and their father Tyrrheus, master of the royal herds, and ranger of the plain. Their sister Silvia tamed him to her rule, and lavished her care on his adornment, twining his antlers with delicate garlands, and combed his wild coat and washed him in the clear spring. Tame to her hand, and familiar to his master's table, he would wander the woods, and, however late the night, return home to the door he knew. Far astray, he floated idly down the stream, and allayed his heat on the green bank, when Iülus' mad hounds started him in their hunting; and Ascanius himself, kindled with desire of the chief honour, aimed a shaft from his bended bow. A present deity suffered not his hand to stray, and the loud whistling reed came driven through his belly and flanks. But the wounded beast fled within the familiar roof and crept moaning to the courtyard, dabbled with blood, and filling all the house with moans as of one beseeching. Sister Silvia, smiting her arms with open hands, begins to call for aid, and gathers the hardy rustics with her cries. They, for a fell destroyer is hidden in the silent woodland, are there before her expectation, one armed with a stake hardened in the fire, one with a heavy knotted trunk; what each one searches and finds, wrath turns into a weapon.
Tyrrheus cheers on his array, panting hard, with his axe caught up in his hand, as he was haply splitting an oaken log in four clefts with cross-driven wedges. But the grim goddess, seizing from her watch-tower the moment of mischief, seeks the steep farm-roof and sounds the pastoral war-note from the ridge, straining the infernal cry on her twisted horn; it spread shuddering over all the woodland, and echoed through the deep forests: the lake of Trivia heard it afar; Nar river heard it with white sulphurous water, and the springs of Velinus; and fluttered mothers clasped their children to their breast. Then, hurrying to the voice of the terrible trumpet-note, on all sides the wild rustics snatch their arms and stream in: therewithal the men of Troy pour out from their camp's open gates to succour Ascanius. The lines are ranged; not now in rustic strife do they fight with hard trunks or burned stakes; the two-edged steel sways the fight, the broad cornfields bristle dark with drawn swords, and brass flashes smitten by the sunlight, and casts a gleam high into the cloudy air: as when the wind begins to blow and the flood to whiten, gradually the sea lifts his waves higher and yet higher, then rises from the bottom right into the air. Here in the front rank young Almo, once Tyrrheus' eldest son, is struck down by a whistling arrow; for the wound, staying in his throat, cut off in blood the moist voice's passage and the thin life. Around many a one lies dead, aged Galaesus among them, slain as he throws himself between them for a peacemaker, once incomparable in justice and wealth of Ausonian fields; for him five flocks bleated, a five-fold herd returned from pasture, and an hundred ploughs upturned the soil. But while thus in even battle they fight on the broad plain, the goddess, her promise fulfilled, when she hath dyed the war in blood, and mingled death in the first encounter, quits Hesperia, and, glancing through the sky, addresses Juno in exultant tone: 'Lo, discord is ripened at thy desire into baleful war: tell them now to mix in amity and join alliance. Insomuch as I have imbued the Trojans in Ausonian blood, this likewise will I add, if I have assurance of thy will. With my rumours I will sweep the bordering towns into war, and kindle their spirit with furious desire for battle, that from all quarters help may come; I will sow the land with arms.' Then Juno answering: 'Terror and harm is wrought abundantly. The springs of war are aflow: they fight with arms in their grasp, the arms that chance first supplied, that fresh blood stains. Let this be the union, this the bridal that Venus' illustrious progeny and Latinus the King shall celebrate. Our Lord who reigns on Olympus' summit would not have thee stray too freely in heaven's upper air. Withdraw thy presence. Whatsoever future remains in the struggle, that I myself will sway.' Such accents uttered the daughter of Saturn; and the other raises her rustling snaky wings and darts away from the high upper air to Cocytus her home. There is a place midmost of Italy, deep in the hills, notable and famed of rumour in many a country, the Vale of Amsanctus; on either hand a wooded ridge, dark with thick foliage, hems it in, and midway a torrent in swirling eddies shivers and echoes over the rocks. Here is shewn a ghastly pool, a breathing-hole of the grim lord of hell, and a vast chasm breaking into Acheron yawns with pestilential throat.
In it the Fury sank, and relieved earth and heaven of her hateful influence. But therewithal the queenly daughter of Saturn puts the last touch to war. The shepherds pour in full tale from the battlefield into the town, bearing back their slain, the boy Almo and Galaesus' disfigured face, and cry on the gods and call on Latinus. Turnus is there, and amid the heat and outcry at the slaughter redoubles his terrors, crying that Teucrians are bidden to the kingdom, that a Phrygian race is mingling its taint with theirs, and he is thrust out of their gates. They too, the matrons of whose kin, struck by Bacchus, trample in choirs down the pathless woods—nor is Amata's name a little thing—they too gather together from all sides and weary themselves with the battle-cry. Omens and oracles of gods go down before them, and all under malign influence clamour for awful war. Emulously they surround Latinus' royal house. He withstands, even as a rock in ocean unremoved, as a rock in ocean when the great crash comes down, firm in its own mass among many waves slapping all about: in vain the crags and boulders hiss round it in foam, and the seaweed on its side is flung up and sucked away. But when he may in nowise overbear their blind counsel, and all goes at fierce Juno's beck, with many an appeal to gods and void sky, 'Alas!' he cries, 'we are broken of fate and driven helpless in the storm. With your very blood will you pay the price of this, O wretched men! Thee, O Turnus, thy crime, thee thine awful punishment shall await; too late wilt thou address to heaven thy prayers and supplication. For my rest was won, and my haven full at hand; I am robbed but of a happy death.' And without further speech he shut himself in the palace, and dropped the reins of state. There was a use in Hesperian Latium, which the Alban towns kept in holy observance, now Rome keeps, the mistress of the world, when they stir the War-God to enter battle; whether their hands prepare to carry war and weeping among Getae or Hyrcanians or Arabs, or to reach to India and pursue the Dawn, and reclaim their standards from the Parthian. There are twain gates of War, so runs their name, consecrate in grim Mars' sanctity and terror. An hundred bolts of brass and masses of everlasting iron shut them fast, and Janus the guardian never sets foot from their threshold. There, when the sentence of the Fathers stands fixed for battle, the Consul, arrayed in the robe of Quirinus and the Gabine cincture, with his own hand unbars the grating doors, with his own lips calls battles forth; then all the rest follow on, and the brazen trumpets blare harsh with consenting breath. With this use then likewise they bade Latinus proclaim war on the Aeneadae, and unclose the baleful gates. He withheld his hand, and shrank away averse from the abhorred service, and hid himself blindly in the dark. Then the Saturnian queen of heaven glided from the sky, with her own hand thrust open the lingering gates, and swung sharply back on their hinges the iron-bound doors of war. Ausonia is ablaze, till then unstirred and immoveable. Some make ready to march afoot over the plains; some, mounted on tall horses, ride amain in clouds of dust. All seek out arms; and now they rub their shields smooth and make their spearheads glitter with fat lard, and grind their axes on the whetstone: rejoicingly they advance under their standards and hear the trumpet note.
Five great cities set up the anvil and sharpen the sword, strong Atina and proud Tibur, Ardea and Crustumeri, and turreted Antemnae. They hollow out head-gear to guard them, and plait wickerwork round shield-bosses; others forge breastplates of brass or smooth greaves of flexible silver. To this is come the honour of share and pruning-hook, to this all the love of the plough: they re-temper their fathers' swords in the furnace. And now the trumpets blare; the watchword for war passes along. One snatches a helmet hurriedly from his house, another backs his neighing horses into the yoke; and arrays himself in shield and mail-coat triple-linked with gold, and girds on his trusty sword. Open now the gates of Helicon, goddesses, and stir the song of the kings that rose for war, the array that followed each and filled the plains, the men that even then blossomed, the arms that blazed in Italy the bountiful land: for you remember, divine ones, and you can recall; to us but a breath of rumour, scant and slight, is wafted down. First from the Tyrrhene coast savage Mezentius, scorner of the gods, opens the war and arrays his columns. By him is Lausus, his son, unexcelled in bodily beauty by any save Laurentine Turnus, Lausus tamer of horses and destroyer of wild beasts; he leads a thousand men who followed him in vain from Agylla town; worthy to be happier in ancestral rule, and to have other than Mezentius for father. After them beautiful Aventinus, born of beautiful Hercules, displays on the sward his palm-crowned chariot and victorious horses, and carries on his shield his father's device, the hundred snakes of the Hydra's serpent-wreath. Him, in the wood of the hill Aventine, Rhea the priestess bore by stealth into the borders of light, a woman mingled with a god, after the Tirynthian Conqueror had slain Geryon and set foot on the fields of Laurentum, and bathed his Iberian oxen in the Tuscan river. These carry for war javelins and grim stabbing weapons, and fight with the round shaft and sharp point of the Sabellian pike. Himself he went on foot swathed in a vast lion skin, shaggy with bristling terrors, whose white teeth encircled his head; in such wild dress, the garb of Hercules clasped over his shoulders, he entered the royal house. Next twin brothers leave Tibur town, and the people called by their brother Tiburtus' name, Catillus and valiant Coras, the Argives, and advance in the forefront of battle among the throng of spears: as when two cloud-born Centaurs descend from a lofty mountain peak, leaving Homole or snowy Othrys in rapid race; the mighty forest yields before them as they go, and the crashing thickets give them way. Nor was the founder of Praeneste city absent, the king who, as every age hath believed, was born of Vulcan among the pasturing herds, and found beside the hearth, Caeculus. On him a rustic battalion attends in loose order, they who dwell in steep Praeneste and the fields of Juno of Gabii, on the cool Anio and the Hernican rocks dewy with streams; they whom rich Anagnia, and whom thou, lord Amasenus, pasturest. Not all of them have armour, nor shields and clattering chariots. The most part shower bullets of dull lead; some wield in their hand two darts, and have for head-covering caps of tawny wolfskin; their left foot is bare wherewith to plant their steps; the other is covered with a boot of raw hide.
But Messapus, tamer of horses, the seed of Neptune, whom none might ever strike down with steel or fire, calls quickly to arms his long unstirred peoples and bands disused to war, and again handles the sword. These are of the Fescennine ranks and of Aequi Falisci, these of Soracte's fortresses and the fields of Flavina, and Ciminus' lake and hill, and the groves of Capena. They marched in even time, singing their King; as whilome snowy swans among the thin clouds, when they return from pasturage, and utter resonant notes through their long necks; far off echoes the river and the smitten Asian fen. . . . Nor would one think these vast streaming masses were ranks clad in brass; rather that, high in air, a cloud of hoarse birds from the deep gulf was pressing to the shore. Lo, Clausus of the ancient Sabine blood, leading a great host, a great host himself; from whom now the Claudian tribe and family is spread abroad since Rome was shared with the Sabines. Alongside is the broad battalion of Amiternum, and the Old Latins, and all the force of Eretum and the Mutuscan oliveyards; they who dwell in Nomentum town, and the Rosean country by Velinus, who keep the crags of rough Tetrica and Mount Severus, Casperia and Foruli, and the river of Himella; they who drink of Tiber and Fabaris, they whom cold Nursia hath sent, and the squadrons of Horta and the tribes of Latium; and they whom Allia, the ill-ominous name, severs with its current; as many as the waves that roll on the Libyan sea-floor when fierce Orion sets in the wintry surge; as thick as the ears that ripen in the morning sunlight on the plain of the Hermus or the yellowing Lycian tilth. Their shields clatter, and earth is amazed under the trampling of their feet. Here Agamemnonian Halaesus, foe of the Trojan name, yokes his chariot horses, and draws a thousand warlike peoples to Turnus; those who turn with spades the Massic soil that is glad with wine; whom the elders of Aurunca sent from their high hills, and the Sidicine low country hard by; and those who leave Cales, and the dweller by the shallows of Volturnus river, and side by side the rough Saticulan and the Oscan bands. Polished maces are their weapons, and these it is their wont to fit with a tough thong; a target covers their left side, and for close fighting they have crooked swords. Nor shalt thou, Oebalus, depart untold of in our verses, who wast borne, men say, by the nymph Sebethis to Telon, when he grew old in rule over Capreae the Teleboïc realm: but not so content with his ancestral fields, his son even then held down in wide sway the Sarrastian peoples and the meadows watered by Sarnus, and the dwellers in Rufrae and Batulum, and the fields of Celemnae, and they on whom from her apple orchards Abella city looks down. Their wont was to hurl lances in Teutonic fashion; their head covering was stripped bark of the cork tree, their shield-plates glittering brass, glittering brass their sword. Thee too, Ufens, mountainous Nersae sent forth to battle, of noble fame and prosperous arms, whose race on the stiff Aequiculan clods is rough beyond all other, and bred to continual hunting in the woodland; they till the soil in arms, and it is ever their delight to drive in fresh spoils and live on plunder.
Furthermore there came, sent by King Archippus, the priest of the Marruvian people, dressed with prosperous olive leaves over his helmet, Umbro excellent in valour, who was wont with charm and touch to sprinkle slumberous dew on the viper's brood and water-snakes of noisome breath. Yet he availed not to heal the stroke of the Dardanian spear-point, nor was the wound of him helped by his sleepy charms and herbs culled on the Massic hills. Thee the woodland of Angitia, thee Fucinus' glassy wave, thee the clear pools wept. . . . Likewise the seed of Hippolytus marched to war, Virbius most excellent in beauty, sent by his mother Aricia. The groves of Egeria nursed him round the spongy shore where Diana's altar stands rich and gracious. For they say in story that Hippolytus, after he fell by his stepmother's treachery, torn asunder by his frightened horses to fulfil a father's revenge, came again to the daylight and heaven's upper air, recalled by Diana's love and the drugs of the Healer. Then the Lord omnipotent, indignant that any mortal should rise from the nether shades to the light of life, launched his thunder and hurled down to the Stygian water the Phoebus-born, the discoverer of such craft and cure. But Trivia the bountiful hides Hippolytus in a secret habitation, and sends him away to the nymph Egeria and the woodland's keeping, where, solitary in Italian forests, he should spend an inglorious life, and have Virbius for his altered name. Whence also hoofed horses are kept away from Trivia's temple and consecrated groves, because, affrighted at the portents of the sea, they overset the chariot and flung him out upon the shore. Notwithstanding did his son train his ruddy steeds on the level plain, and sped charioted to war. Himself too among the foremost, splendid in beauty of body, Turnus moves armed and towers a whole head over all. His lofty helmet, triple-tressed with horse-hair, holds high a Chimaera breathing from her throat Aetnean fires, raging the more and exasperate with baleful flames, as the battle and bloodshed grow fiercer. But on his polished shield was emblazoned in gold Io with uplifted horns, already a heifer and overgrown with hair, a lofty design, and Argus the maiden's warder, and lord Inachus pouring his stream from his embossed urn. Behind comes a cloud of infantry, and shielded columns thicken over all the plains; the Argive men and Auruncan forces, the Rutulians and old Sicanians, the Sacranian ranks and Labicians with painted shields; they who till thy dells, O Tiber, and Numicus' sacred shore, and whose ploughshare goes up and down on the Rutulian hills and the Circaean headland, over whose fields Jupiter of Anxur watches, and Feronia glad in her greenwood: and where the marsh of Satura lies black, and cold Ufens winds his way along the valley-bottoms and sinks into the sea. Therewithal came Camilla the Volscian, leading a train of cavalry, squadrons splendid with brass: a warrior maiden who had never used her woman's hands to Minerva's distaff or wool-baskets, but hardened to endure the battle shock and outstrip the winds with racing feet. She might have flown across the topmost blades of unmown corn and left the tender ears unhurt as she ran; or sped her way over mid sea upborne by the swelling flood, nor dipt her swift feet in the water.
All the people pour from house and field, and mothers crowd to wonder and gaze at her as she goes, in rapturous astonishment at the royal lustre of purple that drapes her smooth shoulders, at the clasp of gold that intertwines her tresses, at the Lycian quiver she carries, and the pastoral myrtle shaft topped with steel.

BOOK EIGHTH

THE EMBASSAGE TO EVANDER

When Turnus ran up the flag of war on the towers of Laurentum, and the trumpets blared with harsh music, when he spurred his fiery steeds and clashed his armour, straightway men's hearts are in tumult; all Latium at once flutters in banded uprisal, and her warriors rage furiously. Their chiefs, Messapus, and Ufens, and Mezentius, scorner of the gods, begin to enrol forces on all sides, and dispeople the wide fields of husbandmen. Venulus too is sent to the town of mighty Diomede to seek succour, to instruct him that Teucrians set foot in Latium; that Aeneas in his fleet invades them with the vanquished gods of his home, and proclaims himself the King summoned of fate; that many tribes join the Dardanian, and his name swells high in Latium. What he will rear on these foundations, what issue of battle he desires, if Fortune attend him, lies clearer to his own sight than to King Turnus or King Latinus. Thus was it in Latium. And the hero of Laomedon's blood, seeing it all, tosses on a heavy surge of care, and throws his mind rapidly this way and that, and turns it on all hands in swift change of thought: even as when the quivering light of water brimming in brass, struck back from the sunlight or the moon's glittering reflection, flickers abroad over all the room, and now mounts aloft and strikes the high panelled roof. Night fell, and over all lands weary creatures were fast in deep slumber, the race of fowl and of cattle; when lord Aeneas, sick at heart of the dismal warfare, stretched him on the river bank under the cope of the cold sky, and let sleep, though late, overspread his limbs. To him the very god of the ground, the pleasant Tiber stream, seemed to raise his aged form among the poplar boughs; thin lawn veiled him with its gray covering, and shadowy reeds hid his hair. Thereon he addressed him thus, and with these words allayed his distresses: 'O born of the family of the gods, thou who bearest back our Trojan city from hostile hands, and keepest Troy towers in eternal life; O long looked for on Laurentine ground and Latin fields! here is thine assured home, thine home's assured gods. Draw not thou back, nor be alarmed by menace of war. All the anger and wrath of the gods is passed away . . . And even now for thine assurance, that thou think not this the idle fashioning of sleep, a great sow shall be found lying under the oaks on the shore, with her new-born litter of thirty head: white she couches on the ground, and the brood about her teats is white. By this token in thirty revolving years shall Ascanius found a city, Alba of bright name. My prophecy is sure. Now hearken, and I will briefly instruct thee how thou mayest unravel and overcome thy present task. An Arcadian people sprung of Pallas, following in their king Evander's company beneath his banners, have chosen a place in these coasts, and set a city on the hills, called Pallanteum after Pallas their forefather. These wage perpetual war with the Latin race; these do thou take to thy camp's alliance, and join with them in league.
Myself I will lead thee by my banks and straight along my stream, that thou mayest oar thy way upward against the river. Up and arise, goddess-born, and even with the setting stars address thy prayers to Juno as is meet, and vanquish her wrath and menaces with humble vows. To me thou shalt pay a conqueror's sacrifice. I am he whom thou seest washing the banks with full flood and severing the rich tilth, glassy Tiber, best beloved by heaven of rivers. Here is my stately home; my fountain-head is among high cities.' Thus spoke the River, and sank in the depth of the pool: night and sleep left Aeneas. He arises, and, looking towards the radiant sky of the sunrising, holds up water from the river in fitly-hollowed palms, and pours to heaven these accents: 'Nymphs, Laurentine Nymphs, from whom is the generation of rivers, and thou, O father Tiber, with thine holy flood, receive Aeneas and deign to save him out of danger. What pool soever holds thy source, who pitiest our discomforts, from whatsoever soil thou dost spring excellent in beauty, ever shall my worship, ever my gifts frequent thee, the hornèd river lord of Hesperian waters. Ah, be thou only by me, and graciously confirm thy will.' So speaks he, and chooses two galleys from his fleet, and mans them with rowers, and withal equips a crew with arms. And lo! suddenly, ominous and wonderful to tell, the milk-white sow, of one colour with her white brood, is espied through the forest couched on the green brink; whom to thee, yes to thee, queenly Juno, good Aeneas offers in sacrifice, and sets with her offspring before thine altar. All that night long Tiber assuaged his swelling stream, and silently stayed his refluent wave, smoothing the surface of his waters to the fashion of still pool and quiet mere, to spare labour to the oar. So they set out and speed on their way with prosperous cries; the painted fir slides along the waterway; the waves and unwonted woods marvel at their far-gleaming shields, and the gay hulls afloat on the river. They outwear a night and a day in rowing, ascend the long reaches, and pass under the chequered shadows of the trees, and cut through the green woodland in the calm water. The fiery sun had climbed midway in the circle of the sky when they see afar fortress walls and scattered house roofs, where now the might of Rome hath risen high as heaven; then Evander held a slender state. Quickly they turn their prows to land and draw near the town. It chanced on that day the Arcadian king paid his accustomed sacrifice to the great son of Amphitryon and all the gods in a grove before the city. With him his son Pallas, with him all the chief of his people and his poor senate were offering incense, and the blood steamed warm at their altars. When they saw the high ships, saw them glide up between the shady woodlands and rest on their silent oars, the sudden sight appals them, and all at once they rise and stop the banquet. Pallas courageously forbids them to break off the rites; snatching up a spear, he flies forward, and from a hillock cries afar: 'O men, what cause hath driven you to explore these unknown ways? or whither do you steer? What is your kin, whence your habitation? Is it peace or arms you carry hither?' Then from the lofty stern lord Aeneas thus speaks, stretching forth in his hand an olive bough of peace-bearing: 'Thou seest men born of Troy and arms hostile to the Latins, who have driven us to flight in insolent warfare.
We seek Evander; carry this message, and tell him that chosen men of the Dardanian captains are come pleading for an armed alliance.' Pallas stood amazed at the august name. 'Descend,' he cries, 'whoso thou art, and speak with my father face to face, and enter our home and hospitality.' And giving him the grasp of welcome, he caught and clung to his hand. Advancing, they enter the grove and leave the river. Then Aeneas in courteous words addresses the King: 'Best of the Grecian race, thou whom fortune hath willed that I supplicate, holding before me boughs dressed in fillets, no fear stayed me because thou wert a Grecian chief and an Arcadian, or allied by descent to the twin sons of Atreus. Nay, mine own prowess and the sanctity of divine oracles, our ancestral kinship, and the fame of thee that is spread abroad over the earth, have allied me to thee and led me willingly on the path of fate. Dardanus, who sailed to the Teucrian land, the first father and founder of the Ilian city, was born, as Greeks relate, of Electra the Atlantid; Electra's sire is ancient Atlas, whose shoulder sustains the heavenly spheres. Your father is Mercury, whom white Maia conceived and bore on the cold summit of Cyllene; but Maia, if we give any credence to report, is daughter of Atlas, that same Atlas who bears up the starry heavens; so both our families branch from a single blood. In this confidence I sent no embassy, I framed no crafty overtures; myself I have presented mine own person, and come a suppliant to thy courts. The same Daunian race pursues us and thee in merciless warfare; we once expelled, they trust nothing will withhold them from laying all Hesperia wholly beneath their yoke, and holding the seas that wash it above and below. Accept and return our friendship. We can give brave hearts in war, high souls and men approved in deeds.' Aeneas ended. The other ere now scanned in a long gaze the face and eyes and all the form of the speaker; then thus briefly returns: 'How gladly, bravest of the Teucrians, do I hail and own thee! how I recall thy father's words and the very tone and glance of great Anchises! For I remember how Priam son of Laomedon, when he sought Salamis on his way to the realm of his sister Hesione, went on to visit the cold borders of Arcadia. Then early youth clad my cheeks with bloom. I admired the Teucrian captains, admired their lord, the son of Laomedon; but Anchises moved high above them all. My heart burned with youthful passion to accost him and clasp hand in hand; I made my way to him, and led him eagerly to Pheneus' high town. Departing he gave me an adorned quiver and Lycian arrows, a scarf inwoven with gold, and a pair of golden bits that now my Pallas possesses. Therefore my hand is already joined in the alliance you seek, and soon as to-morrow's dawn rises again over earth, I will send you away rejoicing in mine aid, and supply you from my store. Meanwhile, since you are come hither in friendship, solemnise with us these yearly rites which we may not defer, and even now learn to be familiar at your comrades' board.' This said, he commands the feast and the wine-cups to be replaced whence they were taken, and with his own hand ranges them on the grassy seat, and welcomes Aeneas to the place of honour, with a lion's shaggy fell for cushion and a hospitable chair of maple. Then chosen men with the priest of the altar in emulous haste bring roasted flesh of bulls, and pile baskets with the gift of ground corn, and serve the wine.
Aeneas and the men of Troy with him feed on the long chines of oxen and the entrails of the sacrifice. After hunger is driven away and the desire of food stayed, King Evander speaks: 'No idle superstition that knows not the gods of old hath ordered these our solemn rites, this customary feast, this altar of august sanctity; saved from bitter perils, O Trojan guest, do we worship, and most due are the rites we inaugurate. Look now first on this overhanging cliff of stone, where shattered masses lie strewn, and the mountain dwelling stands desolate, and rocks are rent away in vast ruin. Here was a cavern, awful and deep-withdrawn, impenetrable to the sunbeams, where the monstrous half-human shape of Cacus had his hold: the ground was ever wet with fresh slaughter, and pallid faces of men, ghastly with gore, hung nailed on the haughty doors. This monster was the son of Vulcan, and spouted his black fires from his mouth as he moved in giant bulk. To us also in our desire time bore a god's aid and arrival. For princely Alcides the avenger came glorious in the spoils of triple Geryon slain; this way the Conqueror drove the huge bulls, and his oxen filled the river valley. But savage Cacus, infatuate to leave nothing undared or unhandled in craft or crime, drives four bulls of choice shape away from their pasturage, and as many heifers of excellent beauty. And these, that there should be no straightforward footprints, he dragged by the tail into his cavern, the track of their compelled path reversed, and hid them behind the screen of rock. No marks were there to lead a seeker to the cavern. Meanwhile the son of Amphitryon, his herds filled with food, was now breaking up his pasturage and making ready to go. The oxen low as they depart; all the woodland is filled with their complaint as they clamorously quit the hills. One heifer returned the cry, and, lowing from the depth of the dreary cave, baffled the hope of Cacus from her imprisonment. At this the grief and choler of Alcides blazed forth dark and infuriate. Seizing in his hand his club of heavy knotted oak, he seeks with swift pace the aery mountain steep. Then, as never before, did we see Cacus afraid and his countenance troubled; he goes flying swifter than the wind and seeks his cavern; fear wings his feet. As he shut himself in, and, bursting the chains, dropped the vast rock slung in iron by his father's craft, and blocked the doorway with its pressure, lo! the Tirynthian came in furious wrath, and, scanning all the entry, turned his face this way and that and ground his teeth. Thrice, hot with rage, he circles all Mount Aventine; thrice he assails the rocky portals in vain; thrice he sinks down outwearied in the valley. There stood a sharp rock of flint with sides cut sheer away, rising over the cavern's ridge a vast height to see, fit haunt for foul birds to build on. This—for, sloping from the ridge, it leaned on the left towards the river—he loosened, urging it from the right till he tore it loose from its deep foundations; then suddenly shook it free; with the shock the vast sky thunders, the banks leap apart, and the amazed river recoils. But the den, Cacus' huge palace, lay open and revealed, and the depths of gloomy cavern were made manifest; even as though some force tearing earth apart should unlock the infernal house, and disclose the pallid realms abhorred of heaven, and deep down the monstrous gulf be descried where the ghosts flutter in the streaming daylight.
On him then, surprised in unexpected light, shut in the rock's recesses and howling in strange fashion, Alcides from above hurls missiles and calls all his arms to aid, and presses hard on him with boughs and enormous millstones. And he, for none other escape from peril is left, vomits from his throat vast jets of smoke, wonderful to tell, and enwreathes his dwelling in blind gloom, blotting view from the eyes, while in the cave's depth night thickens with smoke-bursts in a darkness shot with fire. Alcides broke forth in anger, and with a bound hurled himself sheer amid the flames, where the smoke rolls billowing and voluminous, and the cloud surges black through the enormous den. Here, as Cacus in the darkness spouts forth his idle fires, he grasps and twines tight round him, till his eyes start out and his throat is drained of blood under the strangling pressure. Straightway the doors are torn open and the dark house laid plain; the stolen oxen and forsworn plunder are shewn forth to heaven, and the misshapen carcase dragged forward by the feet. Men cannot satisfy their soul with gazing on the terrible eyes, the monstrous face and shaggy bristling chest, and the throat with its quenched fires. Thenceforth this sacrifice is solemnised, and a younger race have gladly kept the day; Potitius the inaugurator, and the Pinarian family, guardians of the rites of Hercules, have set in the grove this altar, which shall ever be called of us Most Mighty, and shall be our mightiest evermore. Wherefore arise, O men, and enwreathe your hair with leafy sprays, and stretch forth the cups in your hands; call on our common god and pour the glad wine.' He ended; when the twy-coloured poplar of Hercules hid his shaded hair with pendulous plaited leaf, and the sacred goblet filled his hand. Speedily all pour glad libation on the board, and supplicate the gods. Meanwhile the evening star draws nigher down the slope of heaven, and now the priests went forth, Potitius at their head, girt with skins after their fashion, and bore torches aflame. They renew the banquet, and bring the grateful gift of a second repast, and heap the altars with loaded platters. Then the Salii stand round the lit altar-fires to sing, their brows bound with poplar boughs, one chorus of young men, one of elders, and extol in song the praises and deeds of Hercules; how first he strangled in his gripe the twin terrors, the snakes of his stepmother; how he likewise shattered in war famous cities, Troy and Oechalia; how under Eurystheus the King he bore the toil of a thousand labours by Juno's malign decrees. Thine hand, unconquered, slays the cloud-born double-bodied race, Hylaeus and Pholus, the Cretan monster, and the huge lion in the hollow Nemean rock. Before thee the Stygian pools shook for fear, before thee the warder of hell, couched on half-gnawn bones in his blood-stained cavern; to thee not any form was terrible, not Typhoeus' self towering in arms; thou wast not bereft of counsel when the snake of Lerna encompassed thee with thronging heads. Hail, true seed of Jove, deified glory! graciously visit us and these thy rites with favourable feet. Such are their songs of praise; they crown all with the cavern of Cacus and its fire-breathing lord. All the woodland echoes with their clamour, and the hills resound. Thence all at once, the sacred rites accomplished, retrace their way to the city.
The age-worn King walked holding Aeneas and his son by his side for companions on his way, and lightened the road with changing talk. Aeneas admires and turns his eyes lightly round about, pleased with the country; and gladly on spot after spot inquires and hears of the memorials of earlier men. Then King Evander, founder of the fortress of Rome: 'In these woodlands dwelt Fauns and Nymphs sprung of the soil, and a tribe of men born of stocks and hard oak; who had neither law nor grace of life, nor did they know to yoke bulls or lay up stores or save their gains, but were nurtured by the forest boughs and the hard living of the huntsman. Long ago Saturn came from heaven on high in flight before Jove's arms, an exile from his lost realm. He gathered together the unruly race scattered on the mountain heights, and gave them statutes, and chose Latium to be their name, since in these borders he had found a safe hiding-place. Beneath his reign were the ages named of gold; thus, in peace and quietness, did he rule the nations; till gradually there crept in a sunken and stained time, the rage of war, and the lust of possession. Then came the Ausonian clan and the tribes of Sicania, and many a time the land of Saturn put away her name. Then were kings, and fierce Thybris with his giant bulk, from whose name we of Italy afterwards called the Tiber river, when it lost the true name of old, Albula. Me, cast out from my country and following the utmost limits of the sea, Fortune the omnipotent and irreversible doom settled in this region; and my mother the Nymph Carmentis' awful warnings and Apollo's divine counsel drove me hither.' Scarce was this said; next advancing he points out the altar and the Carmental Gate, which the Romans call anciently by that name in honour of the Nymph Carmentis, seer and soothsayer, who sang of old the coming greatness of the Aeneadae and the glory of Pallanteum. Next he points out the wide grove where valiant Romulus set his sanctuary, and the Lupercal in the cool hollow of the rock, dedicate to Lycean Pan after the manner of Parrhasia. Therewithal he shows the holy wood of Argiletum, and calls the spot to witness as he tells the slaying of his guest Argus. Hence he leads him to the Tarpeian house, and the Capitol golden now, of old rough with forest thickets. Even then men trembled before the wood and rock. 'This grove,' he cries, 'this hill with its leafy crown, is a god's dwelling, though whose we know not; the Arcadians believe Jove himself hath been visible, when often he shook the darkening aegis in his hand and gathered the storm-clouds. Thou seest these two towns likewise with walls overthrown, relics and memorials of men of old. This fortress lord Janus built, this Saturn; the name of this was once Janiculum, of that Saturnia.' With such mutual words they drew nigh the house of poor Evander, and saw scattered herds lowing on the Roman Forum and down the gay Carinae. When they reached his dwelling, 'This threshold,' he cries, 'Alcides the Conqueror stooped to cross; in this palace he rested. Dare thou, my guest, to despise riches; mould thyself to like dignity of godhead, and come not exacting to our poverty.' He spoke, and led tall Aeneas under the low roof of his narrow dwelling, and laid him on a couch of stuffed leaves and the skin of a Libyan she-bear. Night falls and clasps the earth in her dusky wings.
But Venus, stirred in spirit by no vain mother's alarms, and moved by the threats and stern uprisal of the Laurentines, addresses herself to Vulcan, and in her golden bridal chamber begins thus, breathing divine passion in her speech: 'While Argolic kings wasted in war the doomed towers of Troy, the fortress fated to fall in hostile fires, no succour did I require for her wretched people, no weapons of thine art and aid: nor would I task, dear my lord, thee or thy toils for naught, though I owed many and many a debt to the children of Priam, and had often wept the sore labour of Aeneas. Now by Jove's commands he hath set foot in the Rutulian borders; I now therefore come with entreaty, and ask armour of the god I worship. For the son she bore, the tears of Nereus' daughter, of Tithonus' consort, could melt thine heart. Look what nations are gathering, what cities bar their gates and sharpen the sword against me for the desolation of my children.' The goddess ended, and, as he hesitates, clasps him round in the soft embrace of her snowy arms. He suddenly caught the wonted flame, and the heat known of old pierced him to the heart and overran his melting frame: even as when, bursting from the thunder peal, a sparkling cleft of fire shoots through the storm-clouds with dazzling light. His consort knew, rejoiced in her wiles, and felt her beauty. Then her lord speaks, enchained by Love the immortal: 'Why these far-fetched pleas? Whither, O goddess, is thy trust in me gone? Had like distress been thine, even then we might unblamed have armed thy Trojans, nor did doom nor the Lord omnipotent forbid Troy to stand, and Priam to survive yet ten other years. And now, if thou purposest war, and this is thy counsel, whatever charge I can undertake in my craft, in aught that may be made of iron or molten electrum, whatever fire and air can do, cease thou to entreat as doubtful of thy strength.' These words spoken, he clasped his wife in the desired embrace, and, sinking in her lap, wooed quiet slumber to overspread his limbs. Thereon, so soon as sleep, now in mid-career of waning night, had given rest and gone; soon as a woman, whose task is to sustain life with her distaff and the slender labours of the loom, kindles the ashes of her slumbering fire, her toil encroaching on the night, and sets a long task of fire-lit spinning to her maidens, that so she may keep her husband's bed unsullied and nourish her little children,—even so the Lord of Fire, nor slacker in his hours than she, rises from his soft couch to the work of his smithy. An island rises by the side of Sicily and Aeolian Lipare, steep with smoking cliffs, whereunder the vaulted and thunderous Aetnean caverns are hollowed out for Cyclopean forges, the strong strokes on the anvils echo in groans, ore of steel hisses in the vaults, and the fire pants in the furnaces: the house of Vulcan, and Vulcania the land's name. Hither now the Lord of Fire descends from heaven's height. In the vast cavern the Cyclopes were forging iron, Brontes and Steropes and Pyracmon with bared limbs. Shaped in their hands was a thunderbolt, in part already polished, such as the Father of Heaven hurls down on earth in multitudes, part yet unfinished. Three coils of frozen rain, three of watery mist they had enwrought in it, three of ruddy fire and winged south wind; now they were mingling in their work the awful splendours, the sound and terror, and the angry pursuing flames.
Elsewhere they hurried on a chariot for Mars with flying wheels, wherewith he stirs up men and cities; and burnished the golden serpent-scales of the awful aegis, the armour of wrathful Pallas, and the entwined snakes on the breast of the goddess, the Gorgon head with severed neck and rolling eyes. 'Away with all!' he cries: 'stop your tasks unfinished, Cyclopes of Aetna, and attend to this; a warrior's armour must be made. Now must strength, now quickness of hand be tried, now all our art lend her guidance. Fling off delay.' He spoke no more; but they all bent rapidly to the work, allotting their labours equally. Brass and ore of gold flow in streams, and wounding steel is molten in the vast furnace. They shape a mighty shield, to receive singly all the weapons of the Latins, and weld it sevenfold, circle on circle. Some fill and empty the windy bellows of their blast, some dip the hissing brass in the trough. They raise their arms mightily in responsive time, and turn the mass of metal about in the grasp of their tongs. While the lord of Lemnos is busied thus in the borders of Aeolia, Evander is roused from his low dwelling by the gracious daylight and the matin songs of birds from the eaves. The old man arises, and draws on his body raiment, and ties the Tyrrhene shoe latchets about his feet; then buckles to his side and shoulder his Tegeaean sword, and swathes himself in a panther skin that droops upon his left. Therewithal two watch-dogs go before him from the high threshold, and accompany their master's steps. The hero sought his guest Aeneas in the privacy of his dwelling, mindful of their talk and his promised bounty. Nor did Aeneas fail to be astir with the dawn. With the one went his son Pallas, with the other Achates. They meet and clasp hands, and, sitting down within the house, at length enjoy unchecked converse. The King begins thus: . . . 'Princely chief of the Teucrians, in whose lifetime I will never allow the state or realm of Troy vanquished, our strength is scant to succour in war for so great a name. On this side the Tuscan river shuts us in; on that the Rutulian drives us hard, and thunders in arms about our walls. But I purpose to unite to thee mighty peoples and the camp of a wealthy realm; an unforeseen chance offers this for thy salvation. Fate summons thy approach. Not far from here stands fast Agylla city, an ancient pile of stone, where of old the Lydian race, eminent in war, settled on the Etruscan ridges. For many years it flourished, till King Mezentius ruled it with insolent sway and armed terror. Why should I relate the horrible murders, the savage deeds of the monarch? May the gods keep them in store for himself and his line! Nay, he would even link dead bodies to living, fitting hand to hand and face to face (the torture!), and in the oozy foulness and corruption of the dreadful embrace so slay them by a lingering death. But at last his citizens, outwearied by his mad excesses, surround him and his house in arms, cut down his comrades, and hurl fire on his roof. Amid the massacre he escaped to the refuge of Rutulian land and the armed defence of Turnus' friendship. So all Etruria hath risen in righteous fury, and in immediate battle claim their king for punishment.
Over these thousands will I make thee chief, O Aeneas; for their noisy ships crowd all the shore, and they bid the standards advance, while the aged diviner stays them with prophecies: "O chosen men of Maeonia, flower and strength of them of old time, whom righteous anger urges on the enemy, and Mezentius inflames with deserved wrath, to no Italian is it permitted to hold this great nation in control: choose foreigners to lead you." At that, terrified by the divine warning, the Etruscan lines have encamped on the plain; Tarchon himself hath sent ambassadors to me with the crown and sceptre of the kingdom, and offers the royal attire will I but enter their camp and take the Tyrrhene realm. But old age, frozen to dulness, and exhausted with length of life, denies me the load of empire, and my prowess is past its day. I would urge it on my son, did not the mixture of blood by his Sabellian mother make this half his native land. Thou, to whose years and race alike the fates extend their favour, on whom fortune calls, enter thou in, a leader supreme in bravery over Teucrians and Italians. Mine own Pallas likewise, our hope and comfort, I will send with thee; let him grow used to endure warfare and the stern work of battle under thy teaching, to regard thine actions, and from his earliest years look up to thee. To him will I give two hundred Arcadian cavalry, the choice of our warlike strength, and Pallas as many more to thee in his own name.' Scarce had he ended; Aeneas, son of Anchises, and trusty Achates gazed with steadfast face, and, sad at heart, were revolving inly many a labour, had not the Cytherean sent a sign from the clear sky. For suddenly a flash and peal comes quivering from heaven, and all seemed in a moment to totter, and the Tyrrhene trumpet-blast to roar along the sky. They look up; again and yet again the heavy crash re-echoes. They see in the serene space of sky armour gleam red through a cloud in the clear air, and ring clashing out. The others stood in amaze; but the Trojan hero knew the sound for the promise of his goddess mother; then he speaks: 'Ask not, O friend, ask not in any wise what fortune this presage announces; it is I who am summoned of heaven. This sign the goddess who bore me foretold she would send if war assailed, and would bring through the air to my succour armour from Vulcan's hands. . . . Ah, what slaughter awaits the wretched Laurentines! what a price, O Turnus, wilt thou pay me! how many shields and helmets and brave bodies of men shalt thou, Lord Tiber, roll under thy waves! Let them call for armed array and break the league!' These words uttered, he rises from the high seat, and first wakes with fresh fire the slumbering altars of Hercules, and gladly draws nigh his tutelar god of yesternight and the small deities of the household. Alike Evander, and alike the men of Troy, offer up, as is right, choice sheep of two years old. Thereafter he goes to the ships and revisits his crew, of whose company he chooses the foremost in valour to attend him to war; the rest glide down the water and float idly with the descending stream, to come with news to Ascanius of his father's state. They give horses to the Teucrians who seek the fields of Tyrrhenia; a chosen one is brought for Aeneas, housed in a tawny lion skin that glitters with claws of gold. Rumour flies suddenly, spreading over the little town, that they ride in haste to the courts of the Tyrrhene king.
Mothers redouble their prayers in terror, as fear treads closer on peril and the likeness of the War God looms larger in sight. Then Evander, clasping the hand of his departing son, clings to him weeping inconsolably, and speaks thus: 'Oh, if Jupiter would restore me the years that are past, as I was when, close under Praeneste, I cut down their foremost ranks and burned the piled shields of the conquered! Then this right hand sent King Erulus down to hell, though to him at his birth his mother Feronia (awful to tell) had given three lives and triple arms to wield; thrice must he be laid low in death; yet then this hand took all his lives and as often stripped him of his arms. Never should I now, O son, be severed from thy dear embrace; never had the insolent sword of Mezentius on my borders dealt so many cruel deaths, widowed the city of so many citizens. But you, O heavenly powers, and thou, Jupiter, Lord and Governor of Heaven, have compassion, I pray, on the Arcadian king, and hear a father's prayers. If your deity and decrees keep my Pallas safe for me, if I live that I may see him and meet him yet, I pray for life; any toil soever I have patience to endure. But if, O Fortune, thou threatenest some dread calamity, now, ah now, may I break off a cruel life, while anxiety still wavers and expectation is in doubt, while thou, dear boy, my one last delight, art yet clasped in my embrace; let no bitterer message wound mine ear.' These words the father poured forth at the final parting; his servants bore him swooning within. And now the cavalry had issued from the open gates, Aeneas and trusty Achates among the foremost, then other of the Trojan princes, Pallas conspicuous amid the column in scarf and inlaid armour; like the Morning Star, when, newly washed in the ocean wave, he shews his holy face in heaven, and melts the darkness away. Fearful mothers stand on the walls and follow with their eyes the cloud of dust and the squadrons gleaming in brass. They, where the goal of their way lies nearest, bear through the brushwood in armed array. Forming in column, they advance noisily, and the horse hoof shakes the crumbling plain with four-footed trampling. There is a high grove by the cold river of Caere, widely revered in ancestral awe; sheltering hills shut it in all about and girdle the woodland with their dark firs. Rumour is that the old Pelasgians, who once long ago held the Latin borders, consecrated the grove and its festal day to Silvanus, god of the tilth and flock. Not far from it Tarchon and his Tyrrhenians were encamped in a protected place; and now from the hill-top the tents of all their army might be seen outspread on the fields. Lord Aeneas and his chosen warriors draw hither and refresh their weary horses and limbs. But Venus the white goddess drew nigh, bearing her gifts through the clouds of heaven; and when she saw her son withdrawn far apart in the valley's recess by the cold river, cast herself in his way, and addressed him thus: 'Behold perfected the presents of my husband's promised craftsmanship: so shalt thou not shun, O my child, soon to challenge the haughty Laurentines or fiery Turnus to battle.' The Cytherean spoke, and sought her son's embrace, and laid the armour glittering under an oak over against him.
He, rejoicing in the magnificence of the goddess' gift, cannot have his fill of turning his eyes over it piece by piece, and admires and handles between his arms the helmet, dread with plumes and spouting flame, as when a blue cloud takes fire in the sunbeams and gleams afar; then the smooth greaves of electrum and refined gold, the spear, and the shield's ineffable design. There the Lord of Fire had fashioned the story of Italy and the triumphs of the Romans, not witless of prophecy or ignorant of the age to be; there all the race of Ascanius' future seed, and their wars fought one by one. Likewise had he fashioned the she-wolf couched after the birth in the green cave of Mars; round her teats the twin boys hung playing, and fearlessly mouthed their foster-mother; she, with round neck bent back, stroked them by turns and shaped their bodies with her tongue. Thereto not far from this he had set Rome and the lawless rape of the Sabines in the concourse of the theatre when the great Circensian games were celebrated, and a fresh war suddenly arising between the people of Romulus and aged Tatius and austere Cures. Next these same kings laid down their mutual strife and stood armed before Jove's altar with cup in hand, and joined treaty over a slain sow. Not far from there four-horse chariots driven apart had torn Mettus asunder (but thou, O Alban, shouldst have kept by thy words!), and Tullus tore the flesh of the liar through the forest, his splashed blood dripping from the briars. Therewithal Porsena commanded to admit the exiled Tarquin, and held the city in the grasp of a strong blockade; the Aeneadae rushed on the sword for liberty. Him thou couldst espy like one who chafes and like one who threatens, because Cocles dared to tear down the bridge, and Cloelia broke her bonds and swam the river. Highest of all Manlius, warder of the Tarpeian fortress, stood with the temple behind him and held the high Capitoline; and the thatch of Romulus' palace stood rough and fresh. And here the silver goose, fluttering in the gilded colonnades, cried that the Gauls were there on the threshold. The Gauls were there among the brushwood, hard on the fortress, secure in the darkness and the dower of shadowy night. Their clustering locks are of gold, and of gold their attire; their striped cloaks glitter, and their milk-white necks are entwined with gold. Two Alpine pikes sparkle in the hand of each, and long shields guard their bodies. Here he had embossed the dancing Salii and the naked Luperci, the crests wreathed in wool, and the sacred shields that fell from heaven; in cushioned cars the virtuous matrons led on their rites through the city. Far hence he adds the habitations of hell also, the high gates of Dis and the dooms of guilt; and thee, O Catiline, clinging on the beetling rock, and shuddering at the faces of the Furies; and far apart the good, and Cato delivering them statutes. Amidst it all flows wide the likeness of the swelling sea, wrought in gold, though the foam surged gray upon blue water; and round about dolphins, in shining silver, swept the seas with their tails in circle as they cleft the tide. In the centre were visible the brazen war-fleets of Actium; thou mightest see all Leucate swarm in embattled array, and the waves gleam with gold. Here Caesar Augustus, leading Italy to battle with Fathers and People, with gods of household and of state, stands on the lofty stern; prosperous flames jet round his brow, and his ancestral star dawns overhead.
Elsewhere Agrippa, with favouring winds and gods, proudly leads on his column; on his brows glitters the prow-girt naval crown, the haughty emblazonment of the war. Here Antonius with barbarian aid and motley arms, from the conquered nations of the Dawn and the shore of the southern sea, carries with him Egypt and the Eastern forces of utmost Bactra, and the shameful Egyptian woman goes as his consort. All at once rush on, and the whole ocean is torn into foam by straining oars and triple-pointed prows. They steer to sea; one might think that the Cyclades were uptorn and floated on the main, or that lofty mountains clashed with mountains, so mightily do their crews urge on the turreted ships. Flaming tow and the winged steel of darts shower thickly from their hands; the fields of ocean redden with fresh slaughter. Midmost the Queen calls on her squadron with the timbrel of her country, nor yet casts back a glance on the twin snakes behind her. Howling Anubis, and gods monstrous and multitudinous, level their arms against Neptune and Venus and against Minerva; Mars rages amid the havoc, graven in iron, and the Fatal Sisters hang aloft, and Discord strides rejoicing with garment rent, and Bellona attends her with blood-stained scourge. Looking thereon, Actian Apollo above drew his bow; with the terror of it all Egypt and India, every Arab and Sabaean, turned back in flight. The Queen herself seemed to call the winds and spread her sails, and even now let her sheets run slack. Her the Lord of Fire had fashioned amid the carnage, wan with the shadow of death, borne along by the waves and the north-west wind; and over against her the vast bulk of mourning Nile, opening out his folds and calling with all his raiment the conquered people into his blue lap and the coverture of his streams. But Caesar rode into the city of Rome in triple triumph, and dedicated his vowed offering to the gods to stand for ever, three hundred stately shrines all about the city. The streets were loud with gladness and games and shouting. In all the temples was a band of matrons, in all were altars, and before the altars slain steers strewed the ground. Himself he sits on the snowy threshold of Phoebus the bright, reviews the gifts of the nations and ranges them on the haughty doors. The conquered tribes move in long line, diverse as in tongue, so in fashion of dress and armour. Here Mulciber had designed the Nomad race and the ungirt Africans, here the Leleges and Carians and archer Gelonians. Euphrates went by now with smoother waves, and the Morini utmost of men, and the hornèd Rhine, the untamed Dahae, and Araxes chafing under his bridge. These things he admires on the shield of Vulcan, his mother's gift, and rejoicing in the portraiture of unknown history, lifts on his shoulder the destined glories of his children.

BOOK NINTH
THE SIEGE OF THE TROJAN CAMP

And while thus things pass far in the distance, Juno daughter of Saturn sent Iris down the sky to gallant Turnus, then haply seated in his forefather Pilumnus' holy forest dell. To him the child of Thaumas spoke thus with roseate lips: 'Turnus, what no god had dared promise to thy prayer, behold, is brought unasked by the circling day. Aeneas hath quitted town and comrades and fleet to seek Evander's throne and Palatine dwelling-place. Nor is it enough; he hath pierced to Corythus' utmost cities, and is mustering in arms a troop of Lydian rustics. Why hesitate? now, now is the time to call for chariot and horses.
Break through all hindrance and seize the bewildered camp.' She spoke, and rose into the sky on poised wings, and flashed under the clouds in a long flying bow. He knew her, and lifting either hand to heaven, with this cry pursued her flight: 'Iris, grace of the sky, who hath driven thee down the clouds to me and borne thee to earth? Whence is this sudden sheen of weather? I see the sky parting asunder, and the wandering stars in the firmament. I follow the high omen, whoso thou art that callest me to arms.' And with these words he drew nigh the wave, and caught up water from its brimming eddy, making many prayers to the gods and burdening the air with vows. And now all the army was advancing on the open plain, rich in horses, rich in raiment of broidered gold. Messapus rules the foremost ranks, the sons of Tyrrheus the rear. Turnus commands the centre: even as Ganges rising high in silence when his seven streams are still, or the rich flood of Nile when he ebbs from the plains, and is now sunk into his channel. On this the Teucrians descry a sudden cloud of dark dust gathering, and the blackness rising on the plain. Caïcus raises a cry from the mound in front: 'What mass of misty gloom, O citizens, is rolling hitherward? to arms in haste! serve out weapons, climb the walls. The enemy approaches, ho!' With mighty clamour the Teucrians pour in through all the gates and fill the works. For so at his departure Aeneas the great captain had enjoined; were aught to chance meanwhile, they should not venture to range their line or trust the plain, but keep their camp and the safety of the entrenched walls. So, though shame and wrath beckon them on to battle, they yet bar the gates and do his bidding, and await the foe armed and in shelter of the towers. Turnus, who had flown forward in advance of his tardy column, comes up suddenly to the town with a train of twenty chosen cavalry, borne on a Thracian horse dappled with white, and covered by a golden helmet with scarlet plume. 'Who will be with me, my men, to be first on the foe? See!' he cries; and sends a javelin spinning into the air to open battle, and advances towering on the plain. His comrades take up the cry, and follow with dreadful din, wondering at the Teucrians' coward hearts, that they issue not on even field nor face them in arms, but keep in shelter of the camp. Hither and thither he rides furiously, tracing the walls, and seeking entrance where way is none. And as a wolf prowling about some crowded sheepfold, when, beaten sore of winds and rains, he howls at the pens by midnight; safe beneath their mothers the lambs keep bleating on; he, savage and insatiate, rages in anger against the flock he cannot reach, tired by the long-gathering madness for food, and the throat unslaked with blood: even so the Rutulian, as he gazes on the walled camp, kindles in anger, and indignation is hot in his iron frame. By what means may he essay entrance? by what passage hurl the imprisoned Trojans from the rampart and fling them on the plain? Close under the flanking camp lay the fleet, fenced about with mounds and the waters of the river; it he attacks, and calls for fire to his exultant comrades, and eagerly catches a blazing pine-torch in his hand. Then indeed they press on, quickened by Turnus' presence, and all the band arm them with black faggots. The hearth-fires are plundered; the smoky brand trails a resinous glare, and the Fire-god sends clouds of glowing ashes upward.
What god, O Muses, guarded the Trojans from the rage of the fire? who repelled the fierce flame from their ships? Tell it; ancient is the assurance thereof, but the fame everlasting. What time Aeneas began to shape his fleet on Phrygian Ida, and prepared to seek the high seas, the Berecyntian, they say, the very Mother of gods, spoke to high Jove in these words: 'Grant, O son, to my prayer, what her dearness claims who bore thee and laid Olympus under thy feet. My pine forest beloved of me these many years, my grove was on the mountain's crown, whither men bore my holy things, dim with dusky pine and pillared maples. These, when he required a fleet, I gave gladly to the Dardanian; now fear wrings me with sharp distress. Relieve my terrors, and grant a mother's prayers such power that they may yield to no stress of voyaging or of stormy gust: be birth on our hills their avail.'

Thus her son in answer, who wheels the starry worlds: 'O mother, whither callest thou fate? or what dost thou seek for these of thine? May hulls have the right of immortality that were fashioned by mortal hand? and may Aeneas traverse perils secure in insecurity? To what god is power so great given? Nay, but when, their duty done, they shall lie at last in their Ausonian haven, from all that have outgone the waves and borne their Dardanian captain to the fields of Laurentum, will I take their mortal body, and bid them be goddesses of the mighty deep, even as Doto the Nereïd and Galatea, when they cut the sea that falls away from their breasts in foam.' He ended; and by his brother's Stygian streams, by the banks of the pitchy black-boiling chasm he nodded confirmation, and shook all Olympus with his nod. So the promised day was come, and the destinies had fulfilled their due time, when Turnus' injury stirred the Mother to ward the brands from her holy ships. First then a strange light flashed on all eyes, and a great glory from the Dawn seemed to dart over the sky, with the choirs of Ida; then an awful voice fell through air, filling the Trojan and Rutulian ranks: 'Disquiet not yourselves, O Teucrians, to guard ships of mine, neither arm your hands: sooner shall Turnus burn the seas than these holy pines. You, go free; go, goddesses of the sea; the Mother bids it.' And immediately each ship breaks the bond that held it, as with dipping prows they plunge like dolphins deep into the water: from it again (O wonderful and strange!) they rise with maidens' faces in like number, and bear out to sea. The Rutulians stood dumb: Messapus himself is terror-stricken among his disordered cavalry; even the stream of Tiber pauses with hoarse murmur, and recoils from sea. But bold Turnus fails not a whit in confidence; nay, he raises their courage with words, nay, he chides them: 'On the Trojans are these portents aimed; Jupiter himself hath bereft them of their wonted succour; nor do they abide Rutulian sword and fire. So are the seas pathless for the Teucrians, nor is there any hope in flight; they have lost half their world. And we hold the land: in all their thousands the nations of Italy are under arms. In no wise am I dismayed by those divine oracles of doom the Phrygians insolently advance. Fate and Venus are satisfied, in that the Trojans have touched our fruitful Ausonian fields. I too have my fate in reply to theirs, to put utterly to the sword the guilty nation who have robbed me of my bride; not the sons of Atreus alone are touched by that pain, nor may Mycenae only rise in arms.
But to have perished once is enough! To have sinned once should have been enough, in all but utter hatred of the whole of womankind. Trust in the sundering rampart, and the hindrance of their trenches, so little between them and death, gives these their courage: yet have they not seen Troy town, the work of Neptune's hand, sink into fire? But you, my chosen, who of you makes ready to breach their palisade at the sword's point, and join my attack on their fluttered camp? I have no need of Vulcanian arms, of a thousand ships, to meet the Teucrians. All Etruria may join on with them in alliance: nor let them fear the darkness, and the cowardly theft of their Palladium, and the guards cut down on the fortress height. Nor will we hide ourselves unseen in a horse's belly; in daylight and unconcealed are we resolved to girdle their walls with flame. Not with Grecians will I make them think they have to do, nor a Pelasgic force kept off till the tenth year by Hector. Now, since the better part of day is spent, for what remains refresh your bodies, glad that we have done so well, and expect the order of battle.'

Meanwhile charge is given to Messapus to blockade the gates with pickets of sentries, and encircle the works with watchfires. Twice seven are chosen to guard the walls with Rutulian soldiery; but each leads an hundred men, crimson-plumed and sparkling in gold. They spread themselves about and keep alternate watch, and, lying along the grass, drink deep and set brazen bowls atilt. The fires glow, and the sentinels spend the night awake in games. . . . Down on this the Trojans look forth from the rampart, as they hold the height in arms; withal in fearful haste they try the gates and lay gangways from bastion to bastion, and bring up missiles. Mnestheus and valiant Serestus speed the work, whom lord Aeneas appointed, should misfortune call, to be rulers of the people and governors of the state. All their battalions, sharing the lot of peril, keep watch along the walls, and take alternate charge of all that requires defence. On guard at the gate was Nisus son of Hyrtacus, most valiant in arms, whom Ida the huntress had sent in Aeneas' company with fleet javelin and light arrows; and by his side Euryalus, fairest of all the Aeneadae and the wearers of Trojan arms, showing on his unshaven boy's face the first bloom of youth. These two were one in affection, and charged in battle together; now likewise their common guard kept the gate. Nisus cries: 'Lend the gods this fervour to the soul, Euryalus? or does fatal passion become a proper god to each? Long ere now my soul is restless to begin some great deed of arms, and quiet peace delights it not. Thou seest how confident in fortune the Rutulians stand. Their lights glimmer far apart; buried in drunken sleep they have sunk to rest; silence stretches all about. Learn then what doubt, what purpose, now rises in my spirit. People and senate, they all cry that Aeneas be summoned, and men be sent to carry him tidings. If they promise what I ask in thy name—for to me the glory of the deed is enough—methinks I can find beneath yonder hillock a path to the walls of Pallanteum town.' Euryalus stood fixed, struck through with high ambition, and therewith speaks thus to his fervid friend: 'Dost thou shun me then, Nisus, to share thy company in highest deeds? shall I send thee alone into so great perils?
Not thus did my warrior father Opheltes rear and nurture me amid the Argive terror and the agony of Troy, nor thus have I borne myself by thy side while following noble Aeneas to his utmost fate. Here is a spirit, yes here, that scorns the light of day, that deems lightly bought at a life's price that honour to which thou dost aspire.' To this Nisus: 'Assuredly I had no such fear of thee; no, nor could I; so may great Jupiter, or whoso looks on earth with equal eyes, restore me to thee triumphant. But if haply—as thou seest often and often in so forlorn a hope—if haply chance or deity sweep me to adverse doom, I would have thee survive; thine age is worthier to live. Be there one to commit me duly to earth, rescued or ransomed from the battlefield: or, if fortune deny that, to pay me far away the rites of funeral and the grace of a tomb. Neither would I bring such pain on thy poor mother, she who singly of many matrons hath dared to follow her boy to the end, and slights great Acestes' city.' And he: 'In vain dost thou string idle reasons; nor does my purpose yield or change its place so soon. Let us make haste.' He speaks, and rouses the watch; they come up, and relieve the guard; quitting their post, he and Nisus stride on to seek the prince. The rest of living things over all lands were soothing their cares in sleep, and their hearts forgot their pain; the foremost Trojan captains, a chosen band, held council of state upon the kingdom; what should they do, or who would now be their messenger to Aeneas? They stand, leaning on their long spears and grasping their shields, in mid level of the camp. Then Nisus and Euryalus together pray with quick urgency to be given audience; their matter is weighty and will be worth the delay. Iülus at once heard their hurried plea, and bade Nisus speak. Thereon the son of Hyrtacus: 'Hear, O people of Aeneas, with favourable mind, nor regard our years in what we offer. Sunk in sleep and wine, the Rutulians are silent; we have stealthily spied the open ground that lies in the path through the gate next the sea. The line of fires is broken, and their smoke rises darkly upwards. If you allow us to use the chance towards seeking Aeneas in Pallanteum town, you will soon descry us here at hand with the spoils of the great slaughter we have dealt. Nor shall we miss the way we go; up the dim valleys we have seen the skirts of the town, and learned all the river in continual hunting.' Thereon aged Aletes, sage in counsel: 'Gods of our fathers, under whose deity Troy ever stands, not wholly yet do you purpose to blot out the Trojan race, when you have brought us young honour and hearts so sure as this.' So speaking, he caught both by shoulder and hand, with tears showering down over face and feature. 'What guerdon shall I deem may be given you, O men, what recompense for these noble deeds? First and fairest shall be your reward from the gods and your own conduct; and Aeneas the good shall speedily repay the rest, and Ascanius' fresh youth never forget so great a service.'—'Nay,' breaks in Ascanius, 'I whose sole safety is in my father's return, I adjure thee and him, O Nisus, by our great household gods, by the tutelar spirit of Assaracus and hoar Vesta's sanctuary—on your knees I lay all my fortune and trust—recall my father; give him back to sight; all sorrow disappears in his recovery.
I will give a pair of cups my father took in vanquished Arisba, wrought in silver and rough with tracery, twin tripods, and two large talents of gold, and an ancient bowl of Sidonian Dido's giving. If it be indeed our lot to possess Italy and grasp a conquering sceptre, and to assign the spoil; thou sawest the horse and armour of Turnus as he went all in gold; that same horse, the shield and the ruddy plume, will I reserve from partition, thy reward, O Nisus, even from now. My father will give besides twelve mothers of the choicest beauty, and men captives, all in their due array; above these, the space of meadow-land that is now King Latinus' own domain. Thee, O noble boy, whom mine age follows at a nearer interval, even now I welcome to all my heart, and embrace as my companion in every fortune. No glory shall be sought for my state without thee; whether peace or war be in conduct, my chiefest trust for deed and word shall be in thee.' Answering whom Euryalus speaks thus: 'Let but the day never come to prove me degenerate from this daring valour; fortune may fall prosperous or adverse. But above all thy gifts, one thing I ask of thee. My poor mother of Priam's ancient race, whom neither the Ilian land nor King Acestes' city kept from following me forth, her I now leave in ignorance of this danger, such as it is, and without a farewell, because—night and thine hand be witness!—I cannot bear a parent's tears. But thou, I pray, support her want and relieve her loneliness. Let me take with me this hope in thee, I shall go more daringly to every fortune.' Deeply stirred at heart, the Dardanians shed tears, fair Iülus before them all, as the likeness of his own father's love wrung his soul. Then he speaks thus: . . . 'Assure thyself all that is due to thy mighty enterprise; for she shall be a mother to me, and only in name fail to be Creüsa; nor slight is the honour reserved for the mother of such a son. What chance soever follow this deed, I swear by this head whereby my father was wont to swear, what I promise to thee on thy prosperous return shall abide the same for thy mother and kindred.' So speaks he weeping, and ungirds from his shoulder the sword inlaid with gold, fashioned with marvellous skill by Lycaon of Gnosus and fitly set in a sheath of ivory. Mnestheus gives Nisus the shaggy spoils of a lion's hide; faithful Aletes exchanges his helmet. They advance onward in arms, and as they go all the company of captains, young and old, speed them to the gates with vows. Likewise fair Iülus, with a man's thought and a spirit beyond his years, gave many messages to be carried to his father. But the breezes shred all asunder and give them unaccomplished to the clouds. They issue and cross the trenches, and through the shadow of night seek the fatal camp, themselves first to be the death of many a man. All about they see bodies strewn along the grass in drunken sleep, chariots atilt on the shore, the men lying among their traces and wheels, with their armour by them, and their wine. The son of Hyrtacus began thus: 'Euryalus, now for daring hands; all invites them; here lies our way; see thou that none raise a hand from behind against us, and keep far-sighted watch. Here will I deal desolation, and make a broad path for thee to follow.'
So speaks he and checks his voice; therewith he drives his sword at lordly Rhamnes, who haply on carpets heaped high was drawing the full breath of sleep; a king himself, and King Turnus' best-beloved augur, but not all his augury could avert his doom. Three of his household beside him, lying carelessly among their arms, and the armour-bearer and charioteer of Remus go down before him, caught at the horses' feet. Their drooping necks he severs with the sword, then beheads their lord likewise and leaves the trunk spouting blood; the dark warm gore soaks ground and cushions. Therewithal Lamyrus and Lamus, and beautiful young Serranus, who that night had played long and late, and lay with the conquering god heavy on every limb; happy, had he played out the night, and carried his game to day! Even thus an unfed lion riots through full sheepfolds, for the madness of hunger urges him, and champs and rends the fleecy flock that are dumb with fear, and roars with blood-stained mouth. Nor less is the slaughter of Euryalus; he too rages all aflame; an unnamed multitude go down before his path, and Fadus and Herbesus and Rhoetus and Abaris, unaware; Rhoetus awake and seeing all, but he hid in fear behind a great bowl; right in whose breast, as he rose close by, he plunged the sword all its length, and drew it back heavy with death. He vomits forth the crimson life-blood, and throws up wine mixed with blood in the death agony. The other presses hotly on his stealthy errand, and now bent his way towards Messapus' comrades, where he saw the last flicker of the fires go down, and the horses tethered in order cropping the grass; when Nisus briefly speaks thus, for he saw him carried away by excess of murderous desire; 'Let us stop; for unfriendly daylight draws nigh. Vengeance is sated to the full; a path is cut through the enemy.' Much they leave behind, men's armour wrought in solid silver, and bowls therewith, and beautiful carpets. Euryalus tears away the decorations of Rhamnes and his sword-belt embossed with gold, a gift which Caedicus, wealthiest of men of old, sends to Remulus of Tibur when plighting friendship far away; he on his death-bed gives them to his grandson for his own; after his death the Rutulians captured them as spoil of war; these he fits on the shoulders valiant in vain, then puts on Messapus' light helmet with its graceful plumes. They issue from the camp and make for safety. Meanwhile an advanced guard of cavalry were on their way from the Latin city, while the rest of their marshalled battalions linger on the plains, and bore a reply to King Turnus; three hundred men all under shield, in Volscens' leading. And now they approached the camp and drew near the wall, when they descry the two turning away by the pathway to the left; and in the glimmering darkness of night the forgotten helmet betrayed Euryalus, glittering as it met the light. It seemed no thing of chance. Volscens cries aloud from his column: 'Stand, men! why on the march, or how are you in arms? or whither hold you your way?' They offer nothing in reply, but quicken their flight into the forest, and throw themselves on the night. On this side and that the horsemen bar the familiar crossways, and encircle every outlet with sentinels. The forest spread wide in tangled thickets and dark ilex; thick growth of briars choked it all about, and the muffled pathway glimmered in a broken track.
Hampered by the shadowy boughs and his cumbrous spoil, Euryalus in his fright misses the line of way. Nisus gets clear; and now unthinkingly he had passed the enemy, and the place afterwards called Albani from Alba's name; then the deep coverts were of King Latinus' domain; when he stopped, and looked back in vain for his lost friend. 'Euryalus, unhappy! on what ground have I left thee? or where shall I follow, again unwinding all the entanglement of the treacherous woodland way?' Therewith he marks and retraces his footsteps, and wanders down the silent thickets. He hears the horses, hears the clatter and signal-notes of the pursuers. Nor had he long to wait, when shouts reach his ears, and he sees Euryalus, whom even now, in the perplexity of ground and darkness, the whole squadron have borne down in a sudden rush, and seize in spite of all his vain struggles. What shall he do? with what force, what arms dare his rescue? or shall he rush on his doom amid their swords, and find in their wounds a speedy and glorious death? Quickly he draws back his arm with poised spear, and looking up to the moon on high, utters this prayer: 'Do thou give present aid to our enterprise, O Latonian goddess, glory of the stars and guardian of the woodlands: by all the gifts my father Hyrtacus ever bore for my sake to thine altars, by all mine own hand hath added from my hunting, or hung in thy dome, or fixed on thy holy roof, grant me to confound these masses, and guide my javelin through the air.' He ended, and with all the force of his body hurls the steel. The flying spear whistles through the darkness of the night, and comes full on the shield of Sulmo, and there snaps, and the broken shaft passes on through his heart. Spouting a warm tide from his breast he rolls over chill in death, and his sides throb with long-drawn gasps. Hither and thither they gaze round. Lo, he all the fiercer was poising another weapon high by his ear; while they hesitate, the spear went whizzing through both Tagus' temples, and pierced and stuck fast in the warm brain. Volscens is mad with rage, and nowhere espies the sender of the weapon, nor where to direct his fury. 'Yet meanwhile thy warm blood shall pay me vengeance for both,' he cries; and unsheathing his sword, he made at Euryalus. Then indeed frantic with terror Nisus shrieks out; no longer could he shroud himself in darkness or endure such agony. 'On me, on me, I am here, I did it, on me turn your steel, O Rutulians! Mine is all the guilt; he dared not, no, nor could not; to this heaven I appeal and the stars that know; he only loved his hapless friend too well.' Such words he was uttering; but the sword driven hard home is gone clean through his ribs and pierces the white breast. Euryalus rolls over in death, and the blood runs over his lovely limbs, and his neck sinks and settles on his shoulder; even as when a lustrous flower cut away by the plough droops in death, or weary-necked poppies bow down their head if overweighted with a random shower. But Nisus rushes amidst them, and alone among them all makes at Volscens, keeps to Volscens alone: round him the foe cluster, and on this side and that hurl him back: none the less he presses on, and whirls his sword like lightning, till he plunges it full in the face of the shrieking Rutulian, and slays his enemy as he dies. Then, stabbed through and through, he flung himself above his lifeless friend, and there at last found the quiet sleep of death. Happy pair!
if my verse is aught of avail, no length of days shall ever blot you from the memory of time, while the house of Aeneas shall dwell by the Capitoline's stedfast stone, and the lord of Rome hold sovereignty. The victorious Rutulians, with their spoils and the plunder regained, bore dead Volscens weeping to the camp. Nor in the camp was the wailing less, when Rhamnes was found a bloodless corpse, and Serranus and Numa and all their princes destroyed in a single slaughter. Crowds throng towards the corpses and the men wounded to death, the ground fresh with warm slaughter and the swoln runlets of frothing blood. They mutually recognise the spoils, Messapus' shining helmet and the decorations that cost such sweat to win back. And now Dawn, leaving the saffron bed of Tithonus, scattered over earth her fresh shafts of early light; now the sunlight streams in, now daylight unveils the world. Turnus, himself fully armed, awakes his men to arms, and each leader marshals to battle his brazen lines and whets their ardour with varying rumours. Nay, pitiable sight! they fix on spear-points and uprear and follow with loud shouts the heads of Euryalus and Nisus. . . . The Aeneadae stubbornly face them, lining the left hand wall (for their right is girdled by the river), hold the deep trenches and stand gloomily on the high towers, stirred withal by the faces they know, alas, too well, in their dark dripping gore. Meanwhile Rumour on fluttering wings rushes with the news through the alarmed town and glides to the ears of Euryalus' mother. But instantly the warmth leaves her woeful body, the shuttle starts from her hand and the threads unroll. She darts forth in agony, and with woman's wailing and torn hair runs distractedly towards the walls and the foremost columns, recking naught of men, naught of peril or weapons; thereon she fills the air with her complaint: 'Is it thus I behold thee, O Euryalus? Couldst thou, the latest solace of mine age, leave me alone so cruelly? nor when sent into such danger was one last word of thee allowed thine unhappy mother? Alas, thou liest in a strange land, given for a prey to the dogs and fowls of Latium! nor was I, thy mother, there for chief mourner, to lay thee out or close thine eyes or wash thy wounds, and cover thee with the garment I hastened on for thee whole nights and days, an anxious old woman taking comfort from the loom. Whither shall I follow? or what land now holds thy mangled corpse, thy body torn limb from limb? Is this all of what thou wert that returns to me, O my son? is it this I have followed by land and sea? Strike me through of your pity, on me cast all your weapons, Rutulians; make me the first sacrifice of your steel. Or do thou, mighty lord of heaven, be merciful, and with thine own weapon hurl this hateful life to the nether deep, since in no wise else may I break away from life's cruelty.' At this weeping cry their courage falters, and a sigh of sorrow passes all along; their strength is benumbed and broken for battle. Her, while her grief kindled, at Ilioneus' and weeping Iülus' bidding Idaeus and Actor catch up and carry home in their arms. But the terrible trumpet-note afar rang on the shrill brass; a shout follows, and is echoed from the sky. The Volscians hasten up in even line under their advancing roof of shields, and set to fill up the trenches and tear down the palisades.
Some seek entrance by scaling the walls with ladders, where the defenders' battle-line is thin, and light shows through gaps in the ring of men. The Teucrians in return shower weapons of every sort, and push them down with stiff poles, practised by long warfare in their ramparts' defence: and fiercely hurl heavy stones, so be they may break the shielded line; while they, crowded under their shell, lightly bear all the downpour. But now they fail; for where the vast mass presses close, the Teucrians roll a huge block tumbling down that makes a wide gap in the Rutulians and crashes through their armour-plating. Nor do the bold Rutulians care longer to continue the blind fight, but strive to clear the rampart with missiles. . . . Elsewhere in dreadful guise Mezentius brandishes his Etruscan pine and hurls smoking brands; but Messapus, tamer of horses, seed of Neptune, tears away the palisading and calls for ladders to the ramparts. Thy sisterhood, O Calliope, I pray inspire me while I sing the destruction spread then and there by Turnus' sword, the deaths dealt from his hand, and whom each warrior sent down to the under world; and unroll with me the broad borders of war. A tower loomed vast with lofty gangways at a point of vantage; this all the Italians strove with main strength to storm, and set all their might and device to overthrow it; the Trojans in return defended it with stones and hurled showers of darts through the loopholes. Turnus, leading the attack, threw a blazing torch that caught flaming on the side wall; swoln by the wind, the flame seized the planking and clung devouring to the standards. Those within, in hurry and confusion, desire retreat from their distress; in vain; while they cluster together and fall back to the side free from the destroyer, the tower sinks prone under the sudden weight with a crash that thunders through all the sky. Pierced by their own weapons, and impaled on hard splinters of wood, they come half slain to the ground with the vast mass behind them. Scarcely do Helenor alone and Lycus struggle out; Helenor in his early prime, whom a slave woman of Licymnos bore in secret to the Maeonian king, and sent to Troy in forbidden weapons, lightly armed with sheathless sword and white unemblazoned shield. And he, when he saw himself among Turnus' encircling thousands, ranks on this side and ranks on this of Latins, as a wild beast which, girt with a crowded ring of hunters, dashes at their weapons, hurls herself unblinded on death, and comes with a bound upon the spears; even so he rushes to his death amid the enemy, and presses on where he sees their weapons thickest. But Lycus, far fleeter of foot, holds by the walls in flight midway among foes and arms, and strives to catch the coping in his grasp and reach the hands of his comrades. And Turnus pursuing and aiming as he ran, thus upbraids him in triumph: 'Didst thou hope, madman, thou mightest escape our hands?' and catches him as he clings, and tears him and a great piece of the wall away: as when, with a hare or snowy-bodied swan in his crooked talons, Jove's armour-bearer soars aloft, or the wolf of Mars snatches from the folds some lamb sought of his mother with incessant bleating. On all sides a shout goes up. They advance and fill the trenches with heaps of earth; some toss glowing brands on the roofs. Ilioneus strikes down Lucetius with a great fragment of mountain rock as, carrying fire, he draws nigh the gate.
Liger slays Emathion, Asylas Corinaeus, the one skilled with the javelin, the other with the stealthy arrow from afar. Caeneus slays Ortygius; Turnus victorious Caeneus; Turnus Itys and Clonius, Dioxippus, and Promolus, and Sagaris, and Idas where he stood in front of the turret top; Capys Privernus: him Themillas' spear had first grazed lightly; the madman threw down his shield to carry his hand to the wound; so the arrow winged her way, and pinning his hand to his left side, broke into the lungs with deadly wound. The son of Arcens stood splendid in arms, and scarf embroidered with needlework and bright with Iberian blue, the beautiful boy sent by his father Arcens from nurture in the grove of our Lady about the streams of Symaethus, where Palicus' altar is rich and gracious. Laying down his spear, Mezentius whirled thrice round his head the tightened cord of his whistling sling, pierced him full between the temples with the molten bullet, and stretched him all his length upon the sand. Then, it is said, Ascanius first aimed his flying shaft in war, wont before to frighten beasts of the chase, and struck down a brave Numanian, Remulus by name, but lately allied in bridal to Turnus' younger sister. He advancing before his ranks clamoured things fit and unfit to tell, and strode along lofty and voluble, his heart lifted up with his fresh royalty. 'Take you not shame to be again held leaguered in your ramparts, O Phrygians twice taken, and to make walls your fence from death? Behold them who demand in war our wives for theirs! What god, what madness, hath driven you to Italy? Here are no sons of Atreus nor glozing Ulysses. A race of hardy breed, we carry our newborn children to the streams and harden them in the bitter icy water; as boys they spend wakeful nights over the chase, and tire out the woodland; but in manhood, unwearied by toil and trained to poverty, they subdue the soil with their mattocks, or shake towns in war. Every age wears iron, and we goad the flanks of our oxen with reversed spear; nor does creeping old age weaken our strength of spirit or abate our force. White hairs bear the weight of the helmet; and it is ever our delight to drive in fresh spoil and live on our plunder. Yours is embroidered raiment of saffron and shining sea-purple. Indolence is your pleasure, your delight the luxurious dance; you wear sleeved tunics and ribboned turbans. O right Phrygian women, not even Phrygian men! traverse the heights of Dindymus, where the double-mouthed flute breathes familiar music. The drums call you, and the Berecyntian boxwood of the mother of Ida; leave arms to men, and lay down the sword.' As he flung forth such words of ill-ominous strain, Ascanius brooked it not, and aimed an arrow on him from the stretched horse sinew; and as he drew his arms asunder, first stayed to supplicate Jove in lowly vows: 'Jupiter omnipotent, deign to favour this daring deed. My hands shall bear yearly gifts to thee in thy temple, and bring to stand before thine altars a steer with gilded forehead, snow-white, carrying his head high as his mother's, already pushing with his horn and making the sand fly up under his feet.' The Father heard and from a clear space of sky thundered on the left; at once the fated bow rings, the grim-whistling arrow flies from the tense string, and goes through the head of Remulus, the steel piercing through from temple to temple. 'Go, mock valour with insolence of speech! Phrygians twice taken return this answer to Rutulians.'
Thus and no further Ascanius; the Teucrians respond in cheers, and shout for joy in rising height of courage. Then haply in the tract of heaven tressed Apollo sate looking down from his cloud on the Ausonian ranks and town, and thus addresses triumphant Iülus: 'Good speed to thy young valour, O boy! this is the way to heaven, child of gods and parent of gods to be! Rightly shall all wars fated to come sink to peace beneath the line of Assaracus; nor art thou bounded in a Troy.' So speaking, he darts from heaven's height, and cleaving the breezy air, seeks Ascanius. Then he changes the fashion of his countenance, and becomes aged Butes, armour-bearer of old to Dardanian Anchises, and the faithful porter of his threshold; thereafter his lord gave him for Ascanius' attendant. In all points like the old man Apollo came, voice and colour, white hair, and grimly clashing arms, and speaks these words to eager Iülus: 'Be it enough, son of Aeneas, that the Numanian hath fallen unavenged beneath thine arrows; this first honour great Apollo allows thee, nor envies the arms that match his own. Further, O boy, let war alone.' Thus Apollo began, and yet speaking retreated from mortal view, vanishing into thin air away out of their eyes. The Dardanian princes knew the god and the arms of deity, and heard the clash of his quiver as he went. So they restrain Ascanius' keenness for battle by the words of Phoebus' will; themselves they again close in conflict, and cast their lives into the perilous breach. Shouts run all along the battlemented walls; ringing bows are drawn and javelin thongs twisted: all the ground is strewn with missiles. Shields and hollow helmets ring to blows; the battle swells fierce; heavy as the shower lashes the ground that sets in when the Kids are rainy in the West; thick as hail pours down from storm-clouds on the shallows, when the rough lord of the winds congeals his watery deluge and breaks up the hollow vapours in the sky. Pandarus and Bitias, sprung of Alcanor of Ida, whom woodland Iaera bore in the grove of Jupiter, grown now tall as their ancestral pines and hills, fling open the gates barred by their captain's order, and confident in arms, wilfully invite the enemy within the walls. Themselves within they stand to right and left in front of the towers, sheathed in iron, the plumes flickering over their stately heads: even as high in air around the gliding streams, whether on Padus' banks or by pleasant Athesis, twin oaks rise lifting their unshorn heads into the sky with high tops asway. The Rutulians pour in when they see the entrance open. Straightway Quercens and Aquicolus beautiful in arms, and desperate Tmarus, and Haemon, seed of Mars, either gave back in rout with all their columns, or in the very gateway laid down their life. Then the spirits of the combatants swell in rising wrath, and now the Trojans gather swarming to the spot, and dare to close hand to hand and to sally farther out. News is brought to Turnus the captain, as he rages afar among the routed foe, that the enemy surges forth into fresh slaughter and flings wide his gates. He breaks off unfinished, and, fired with immense anger, rushes towards the haughty brethren at the Dardanian gate.
And on Antiphates first, for first he came, the bastard son of mighty Sarpedon by a Theban mother, he hurls his javelin and strikes him down; the Italian cornel flies through the yielding air, and, piercing the gullet, runs deep into his breast; a frothing tide pours from the dark yawning wound, and the steel grows warm where it pierces the lung. Then Meropes and Erymas, then Aphidnus goes down before his hand; then Bitias, fiery-eyed and exultant, not with a javelin; for not to a javelin had he given his life; but the loud-whistling pike came hurled with a thunderbolt's force; neither twofold bull's hide kept it back, nor the trusty corslet's double scales of gold: his vast limbs sink in a heap; earth utters a groan, and the great shield clashes over him: even as once and again on the Euboïc shore of Baiae falls a mass of stone, built up of great blocks and so cast into the sea; thus does it tumble prone, crashes into the shoal water and sinks deep to rest; the seas are stirred, and the dark sand eddies up; therewith the depth of Prochyta quivers at the sound, and the couchant rocks of Inarime, piled above Typhoeus by Jove's commands. On this Mars armipotent raised the spirit and strength of the Latins, and goaded their hearts to rage, and sent Flight and dark Fear among the Teucrians. From all quarters they gather, since battle is freely offered; and the warrior god inspires. . . . Pandarus, at his brother's fall, sees how fortune stands, what hap rules the day; and swinging the gate round on its hinge with all his force, pushes it to with his broad shoulders, leaving many of his own people shut outside the walls in the desperate conflict, but shutting others in with him as they pour back in retreat. Madman! who saw not the Rutulian prince burst in amid their columns, and fairly shut him into the town, like a monstrous tiger among the silly flocks. At once strange light flashed from his eyes, and his armour rang terribly; the blood-red plumes flicker on his head, and lightnings shoot sparkling from his shield. In sudden dismay the Aeneadae know the hated form and giant limbs. Then tall Pandarus leaps forward, in burning rage at his brother's death: 'This is not the palace of Amata's dower,' he cries, 'nor does Ardea enclose Turnus in her native walls. Thou seest a hostile camp; escape hence is hopeless.' To him Turnus, smiling and cool: 'Begin with all thy valiance, and close hand to hand; here too shalt thou tell that a Priam found his Achilles.' He ended; the other, putting out all his strength, hurls his rough spear, knotty and unpeeled. The breezes caught it; Juno, daughter of Saturn, made the wound glance off as it came, and the spear sticks fast in the gate. 'But this weapon that my strong hand whirls, this thou shalt not escape; for not such is he who sends weapon and wound.' So speaks he, and rises high on his uplifted sword; the steel severs the forehead midway right between the temples, and divides the beardless cheeks with ghastly wound. He crashes down; earth shakes under the vast weight; dying limbs and brain-spattered armour tumble in a heap to the ground, and the head, evenly severed, dangles this way and that from either shoulder. The Trojans scatter and turn in hasty terror; and had the conqueror forthwith taken thought to burst the bars and let in his comrades at the gate, that had been the last day of the war and of the nation. But rage and mad thirst of slaughter drive him like fire on the foe. . . .
First he catches up Phalaris; then Gyges, and hamstrings him; he plucks away their spears, and hurls them on the backs of the flying crowd; Juno lends strength and courage. Halys he sends to join them, and Phegeus, pierced right through the shield; then, as they ignorantly raised their war-cry on the walls, Alcander and Halius, Noëmon and Prytanis. Lynceus advanced to meet him, calling up his comrades; from the rampart the glittering sword sweeps to the left and catches him; struck off by the one downright blow, head and helmet lay far away. Next Amycus fell, the deadly huntsman, incomparable in skill of hand to anoint his arrows and arm their steel with venom; and Clytius the Aeolid, and Cretheus beloved of the Muses, Cretheus of the Muses' company, whose delight was ever in songs and harps and stringing of verses; ever he sang of steeds and armed men and battles. At last, hearing of the slaughter of their men, the Teucrian captains, Mnestheus and gallant Serestus, come up, and see their comrades in disordered flight and the foe let in. And Mnestheus: 'Whither next, whither press you in flight? what other walls, what farther city have you yet? Shall one man, and he girt in on all sides, fellow-citizens, by your entrenchments, thus unchecked deal devastation throughout our city, and send all our best warriors to the under world? Have you no pity, no shame, cowards, for your unhappy country, for your ancient gods, for great Aeneas?' Kindled by such words, they take heart and rally in dense array. Little by little Turnus drew away from the fight towards the river, and the side encircled by the stream: the more bravely the Teucrians press on him with loud shouts and thickening masses, even as a band that fall on a wrathful lion with levelled weapons, but he, frightened back, retires surly and grim-glaring; and neither does wrath nor courage let him turn his back, nor can he make head, for all that he desires it, against the surrounding arms and men. Even thus Turnus draws lingeringly backward, with unhastened steps, and soul boiling in anger. Nay, twice even then did he charge amid the enemy, twice drove them in flying rout along the walls. But all the force of the camp gathers hastily up; nor does Juno, daughter of Saturn, dare to supply him strength to countervail; for Jupiter sent Iris down through the aery sky, bearing stern orders to his sister that Turnus shall withdraw from the high Trojan town. Therefore neither with shield nor hand can he keep his ground, so overpoweringly from all sides comes upon him the storm of weapons. About the hollows of his temples the helmet rings with incessant clash, and the solid brass is riven beneath the stones; the horsehair crest is rent away; the shield-boss avails not under the blows; Mnestheus thunders on with his Trojans, and pours in a storm of spears. All over him the sweat trickles and pours in swart stream, and no breathing space is given; sick gasps shake his exhausted limbs. Then at last, with a headlong bound, he leapt fully armed into the river; the river's yellow eddies opened for him as he came, and the buoyant water brought him up, and, washing away the slaughter, returned him triumphant to his comrades.

BOOK TENTH
THE BATTLE ON THE BEACH

Meanwhile the heavenly house omnipotent unfolds her doors, and the father of gods and king of men calls a council in the starry dwelling; whence he looks sheer down on the whole earth, the Dardanian camp, and the peoples of Latium.
They sit down within from doorway to doorway: their lord begins: 'Lords of heaven, wherefore is your decree turned back, and your minds thus jealously at strife? I forbade Italy to join battle with the Teucrians; why this quarrel in face of my injunction? What terror hath bidden one or another run after arms and tempt the sword? The due time of battle will arrive, call it not forth, when furious Carthage shall one day sunder the Alps to hurl ruin full on the towers of Rome. Then hatred may grapple with hatred, then hostilities be opened; now let them be, and cheerfully join in the treaty we ordain.' Thus Jupiter in brief; but not briefly golden Venus returns in answer: . . . 'O Lord, O everlasting Governor of men and things—for what else may we yet supplicate?—beholdest thou how the Rutulians brave it, and Turnus, borne charioted through the ranks, proudly sweeps down the tide of battle? Bar and bulwark no longer shelter the Trojans; nay, within the gates and even on the mounded walls they clash in battle and make the trenches swim with blood. Aeneas is away and ignorant. Wilt thou never then let our leaguer be raised? Again a foe overhangs the walls of infant Troy; and another army, and a second son of Tydeus rises from Aetolian Arpi against the Trojans. Truly I think my wounds are yet to come, and I thy child am keeping some mortal weapons idle. If the Trojans steered for Italy without thy leave and defiant of thy deity, let them expiate their sin; aid not such with thy succour. But if so many oracles guided them, given by god and ghost, why may aught now reverse thine ordinance or write destiny anew? Why should I recall the fleets burned on the coast of Eryx? why the king of storms, and the raging winds roused from Aeolia, or Iris driven down the clouds? Now hell too is stirred (this share of the world was yet untried) and Allecto suddenly let loose above to riot through the Italian towns. In no wise am I moved for empire; that was our hope while Fortune stood; let those conquer whom thou wilt. If thy cruel wife leave no region free to Teucrians, by the smoking ruins of desolated Troy, O father, I beseech thee, grant Ascanius unhurt retreat from arms, grant me my child's life. Aeneas may well be tossed over unknown seas and follow what path soever fortune open to him; him let me avail to shelter and withdraw from the turmoil of battle. Amathus is mine, high Paphos and Cythera, and my house of Idalia; here, far from arms, let him spend an inglorious life. Bid Carthage in high lordship rule Ausonia; there will be nothing there to check the Tyrian cities. What help was it for the Trojans to escape war's doom and thread their flight through Argive fires, to have exhausted all those perils of sea and desolate lands, while they seek Latium and the towers of a Troy rebuilt? Were it not better to have clung to the last ashes of their country, and the ground where once was Troy? Give back, I pray, Xanthus and Simoïs to a wretched people, and let the Teucrians again, O Lord, circle through the fates of Ilium.' Then Queen Juno, swift and passionate: 'Why forcest thou me to break long silence and proclaim my hidden pain? Hath any man or god constrained Aeneas to court war or make armed attack on King Latinus? In oracular guidance he steered for Italy: be it so: he whom raving Cassandra sent on his way! Did we urge him to quit the camp or entrust his life to the winds? to give the issue of war and the charge of his ramparts to a child?
to stir the loyalty of Tyrrhenia or throw peaceful nations into tumult? What god, what potent cruelty of ours, hath driven him on his hurt? Where is Juno in this, or Iris sped down the clouds? It shocks thee that Italians should enring an infant Troy with flame, and Turnus set foot on his own ancestral soil—he, grandchild of Pilumnus, son of Venilia the goddess: how, that the dark brands of Troy assail the Latins? that Trojans subjugate and plunder fields not their own? how, that they choose their brides and tear plighted bosom from bosom? that their gestures plead for peace, and their ships are lined with arms? Thou canst steal thine Aeneas from Grecian hands, and spread before them a human semblance of mist and empty air; thou canst turn his fleet into nymphs of like number: is it dreadful if we retaliate with any aid to the Rutulians? Aeneas is away and ignorant; away and ignorant let him be. Paphos is thine and Idalium, thine high Cythera; why meddlest thou with fierce spirits and a city big with war? Is it we who would overthrow the tottering state of Phrygia? we? or he who brought the Achaeans down on the hapless Trojans? who made Europe and Asia bristle up in arms, and whose theft shattered the alliance? Was it in my guidance the adulterous Dardanian broke into Sparta? or did I send the shafts of passion that kindled war? Then terror for thy children had graced thee; too late now dost thou rise with unjust complaints, and reproaches leave thy lips in vain.' Thus Juno pleaded; and all the heavenly people murmured in diverse consent; even as rising gusts murmur when caught in the forests, and eddy in blind moanings, betraying to sailors the gale's approach. Then the Lord omnipotent and primal power of the world begins; as he speaks the high house of the gods and trembling floor of earth sink to silence; silent is the deep sky, and the breezes are stilled; ocean hushes his waters into calm. 'Take then to heart and lay deep these words of mine. Since it may not be that Ausonians and Teucrians join alliance, and your quarrel finds no term, to-day, what fortune each wins, what hope each follows, be he Trojan or Rutulian, I will hold in even poise; whether it be Italy's fate or Trojan blundering and ill advice that holds the camp in leaguer. Nor do I acquit the Rutulians. Each as he hath begun shall work out his destiny. Jupiter is one and king over all; the fates will find their way.' By his brother's infernal streams, by the banks of the pitchy black-boiling chasm he signed assent, and made all Olympus quiver at his nod. Here speaking ended: thereon Jupiter rises from his golden throne, and the heavenly people surround and escort him to the doorway. Meanwhile the Rutulians press round all the gates, dealing grim slaughter and girdling the walls with flame. But the army of the Aeneadae are held leaguered within their trenches, with no hope of retreat. They stand helpless and disconsolate on their high towers, and their thin ring girdles the walls,—Asius, son of Imbrasus, and Thymoetes, son of Hicetaon, and the two Assaraci, and Castor, and old Thymbris together in the front rank: by them Clarus and Themon, both full brothers to Sarpedon, out of high Lycia. Acmon of Lyrnesus, great as his father Clytius, or his brother Mnestheus, carries a stone, straining all his vast frame to the huge mountain fragment. Emulously they keep their guard, these with javelins, those with stones, and wield fire and fit arrows on the string.
Amid them he, Venus' fittest care, lo! the Dardanian boy, his graceful head uncovered, shines even as a gem set in red gold on ornament of throat or head, or even as gleaming ivory cunningly inlaid in boxwood or Orician terebinth; his tresses lie spread over his milk-white neck, bound by a flexible circlet of gold. Thee, too, Ismarus, proud nations saw aiming wounds and arming thy shafts with poison,—thee, of house illustrious in Maeonia, where the rich tilth is wrought by men's hands, and Pactolus waters it with gold. There too was Mnestheus, exalted in fame as he who erewhile had driven Turnus from the ramparts; and Capys, from whom is drawn the name of the Campanian city. They had closed in grim war's mutual conflict; Aeneas, while night was yet deep, clove the seas. For when, leaving Evander for the Etruscan camp, he hath audience of the king, and tells the king of his name and race, and what he asks or offers, instructs him of the arms Mezentius is winning to his side, and of Turnus' overbearing spirit, reminds him what is all the certainty of human things, and mingles all with entreaties; delaying not, Tarchon joins forces and strikes alliance. Then, freed from the oracle, the Lydian people man their fleet, laid by divine ordinance in the foreign captain's hand. Aeneas' galley keeps in front, with the lions of Phrygia fastened on her prow, above them overhanging Ida, sight most welcome to the Trojan exiles. Here great Aeneas sits revolving the changing issues of war; and Pallas, clinging on his left side, asks now of the stars and their pathway through the dark night, now of his fortunes by land and sea. Open now the gates of Helicon, goddesses, and stir the song of the band that come the while with Aeneas from the Tuscan borders, and sail in armed ships overseas. First in the brazen-plated Tiger Massicus cuts the flood; beneath him are ranked a thousand men who have left Clusium town and the city of Cosae; their weapons are arrows, and light quivers on the shoulder, and their deadly bow. With him goes grim Abas, all his train in shining armour, and a gilded Apollo glittering astern. To him Populonia had given six hundred of her children, tried in war, but Ilva three hundred, the island rich in unexhausted mines of steel. Third Asilas, interpreter between men and gods, master of the entrails of beasts and the stars in heaven, of speech of birds and ominous lightning flashes, draws a thousand men after him in serried lines bristling with spears, bidden to his command from Pisa city, of Alphaean birth on Etruscan soil. Astyr follows, excellent in beauty, Astyr, confident in his horse and glancing arms. Three hundred more—all have one heart to follow—come from the householders of Caere and the fields of Minio, and ancient Pyrgi, and fever-stricken Graviscae. Let me not pass thee by, O Cinyras, bravest in war of Ligurian captains, and thee, Cupavo, with thy scant company, from whose crest rise the swan plumes, fault, O Love, of thee and thine, and blazonment of his father's form. For they tell that Cycnus, in grief for his beloved Phaëthon, while he sings and soothes his woeful love with music amid the shady sisterhood of poplar boughs, drew over him the soft plumage of white old age, and left earth and passed crying through the sky.
His son, followed on shipboard with a band of like age, sweeps the huge Centaur forward with his oars; he leans over the water, and threatens the waves with a vast rock he holds on high, and furrows the deep seas with his length of keel. He too calls a train from his native coasts, Ocnus, son of prophetic Manto and the river of Tuscany, who gave thee, O Mantua, ramparts and his mother's name; Mantua, rich in ancestry, yet not all of one blood, a threefold race, and under each race four cantons; herself she is the cantons' head, and her strength is of Tuscan blood. From her likewise hath Mezentius five hundred in arms against him, whom Mincius, child of Benacus, draped in gray reeds, led to battle in his advancing pine. Aulestes moves on heavily, smiting the waves with the swinging forest of an hundred oars; the channels foam as they sweep the sea-floor. He sails in the vast Triton, who amazes the blue waterways with his shell, and swims on with shaggy front, in human show from the flank upward; his belly ends in a dragon; beneath the monster's breast the wave gurgles into foam. So many were the chosen princes who went in thirty ships to aid Troy, and cut the salt plains with brazen prow. And now day had faded from the sky, and gracious Phoebe trod mid-heaven in the chariot of her nightly wandering: Aeneas, for his charge allows not rest to his limbs, himself sits guiding the tiller and managing the sails. And lo, in middle course a band of his own fellow-voyagers meets him, the nymphs whom bountiful Cybele had bidden be gods of the sea, and turn to nymphs from ships; they swam on in even order, and cleft the flood, as many as erewhile, brazen-plated prows, had anchored on the beach. From far they know their king, and wheel their bands about him, and Cymodocea, their readiest in speech, comes up behind, catching the stern with her right hand: her back rises out, and her left hand oars her passage through the silent water. Then she thus accosts her amazed lord: 'Wakest thou, seed of gods, Aeneas? wake, and loosen the sheets of thy sails. We are thy fleet, Idaean pines from the holy hill, now nymphs of the sea. When the treacherous Rutulian urged us headlong with sword and fire, unwillingly we broke thy bonds, and we search for thee over ocean. This new guise our Lady made for us in pity, and granted us to be goddesses and spend our life under the waves. But thy boy Ascanius is held within wall and trench among the Latin weapons and the rough edge of war. Already the Arcadian cavalry and the brave Etruscan together hold the appointed ground. Turnus' plan is fixed to bar their way with his squadrons, that they may not reach the camp. Up and arise, and ere the coming of the Dawn bid thy crews be called to arms; and take thou the shield which the Lord of Fire forged for victory and rimmed about with gold. To-morrow's daylight, if thou deem not my words vain, shall see Rutulians heaped high in slaughter.' She ended, and, as she went, pushed the tall ship on with her hand wisely and well; the ship shoots through the water fleeter than javelin or windswift arrow. Thereat the rest quicken their speed. The son of Anchises of Troy is himself deep in bewilderment; yet the omen cheers his courage.
Then looking on the heavenly vault, he briefly prays: 'O gracious upon Ida, mother of gods, whose delight is in Dindymus and turreted cities and lions coupled to thy rein, do thou lead me in battle, do thou meetly prosper thine augury, and draw nigh thy Phrygians, goddess, with favourable feet.' Thus much he spoke; and meanwhile the broad light of returning day now began to pour in, and chased away the night. First he commands his comrades to follow his signals, brace their courage to arms and prepare for battle. And now his Trojans and his camp are in his sight as he stands high astern, when next he lifts the blazing shield on his left arm. The Dardanians on the walls raise a shout to the sky. Hope comes to kindle wrath; they hurl their missiles strongly; even as under black clouds cranes from the Strymon utter their signal notes and sail clamouring across the sky, and noisily stream down the gale. But this seemed marvellous to the Rutulian king and the captains of Ausonia, till looking back they see the ships steering for the beach, and all the sea as a single fleet sailing in. His helmet-spike blazes, flame pours from the cresting plumes, and the golden shield-boss spouts floods of fire; even as when in transparent night comets glow blood-red and drear, or the splendour of Sirius, that brings drought and sicknesses on wretched men, rises and saddens the sky with malignant beams. Yet gallant Turnus in unfailing confidence will prevent them on the shore and repel their approach to land. 'What your prayers have sought is given, the sweep of the sword-arm. The god of battles is in the hands of men. Now remember each his wife and home: now recall the high deeds of our fathers' honour. Let us challenge meeting at the water's edge, while they waver and their feet yet slip as they disembark. Fortune aids daring. . . .' So speaks he, and counsels inly whom he shall lead to meet them, whom leave in charge of the leaguered walls. Meanwhile Aeneas lands his allies by gangways from the high ships. Many watch the retreat and slack of the sea, and leap boldly into the shoal water; others slide down the oars. Tarchon, marking the shore where the shallows do not seethe and plash with broken water, but the sea glides up and spreads its tide unbroken, suddenly turns his bows to land and implores his comrades: 'Now, O chosen crew, bend strongly to your oars; lift your ships, make them go; let the prows cleave this hostile land and the keel plough herself a furrow. I will let my vessel break up on such harbourage if once she takes the land.' When Tarchon had spoken in such wise, his comrades rise on their oar-blades and carry their ships in foam towards the Latin fields, till the prows are fast on dry land and all the keels are aground unhurt. But not thy galley, Tarchon; for she dashes on a shoal, and swings long swaying on the cruel bank, pitching and slapping the flood, then breaks up, and lands her crew among the waves. Broken oars and floating thwarts entangle them, and the ebbing wave sucks their feet away. Nor does Turnus keep idly dallying, but swiftly hurries his whole array against the Trojans and ranges it to face the beach. The trumpets blow. At once Aeneas charges and confounds the rustic squadrons of the Latins, and slays Theron for omen of battle. The giant advances to challenge Aeneas; but through sewed plates of brass and tunic rough with gold the sword plunges in his open side.
Next he strikes Lichas, cut from his mother already dead, and consecrated, Phoebus, to thee, since his infancy was granted escape from the perilous steel. Near thereby he struck dead brawny Cisseus and vast Gyas, whose clubs were mowing down whole files: naught availed them the arms of Hercules and their strength of hand, nor Melampus their father, ever of Alcides' company while earth yielded him sore travail. Lo! while Pharus utters weak vaunts the hurled javelin strikes on his shouting mouth. Thou too, while thou followest thy new delight, Clytius, whose cheeks are golden with youthful down—thou, luckless Cydon, struck down by the Dardanian hand, wert lying past thought, ah pitiable! of the young loves that were ever thine, did not the close array of thy brethren interpose, the children of Phorcus, seven in number, and send a sevenfold shower of darts. Some glance ineffectual from helmet and shield; some Venus the bountiful turned aside as they grazed his body. Aeneas calls to trusty Achates: 'Give me store of weapons; none that hath been planted in Grecian body on the plains of Ilium shall my hand hurl at Rutulian in vain.' Then he catches and throws his great spear; the spear flies grinding through the brass of Maeon's shield, and breaks through corslet and through breast. His brother Alcanor runs up and sustains with his right arm his sinking brother; through his arm the spear passes speeding straight on its message, and holds its bloody way, and the hand dangles by the sinews lifeless from the shoulder. Then Numitor, seizing his dead brother's javelin, aims at Aeneas, but might not fairly pierce him, and grazed tall Achates on the thigh. Here Clausus of Cures comes confident in his pride of strength, and with a long reach strikes Dryops under the chin, and, urging the stiff spear-shaft home, stops the accents of his speech and his life together, piercing the throat; but he strikes the earth with his forehead, and vomits clots of blood. Three Thracians likewise of Boreas' sovereign race, and three sent by their father Idas from their native Ismarus, fall in divers wise before him. Halesus and his Auruncan troops hasten thither; Messapus too, seed of Neptune, comes up charioted. This side and that strive to hurl back the enemy, and fight hard on the very edge of Ausonia. As when in the depth of air adverse winds rise in battle with equal spirit and strength; not they, not clouds nor sea, yield one to another; long the battle is doubtful; all stands locked in counterpoise: even thus clash the ranks of Troy and ranks of Latium, foot fast on foot, and man crowded up on man. But in another quarter, where a torrent had driven a wide path of rolling stones and bushes torn away from the banks, Pallas saw his Arcadians, unaccustomed to move as infantry, giving back before the Latin pursuit, when the roughness of the ground bade them dismount. This only was left in his strait, to kindle them to valour, now by entreaties, now by taunts: 'Whither flee you, comrades? by your deeds of bravery, by your leader Evander's name, by your triumphant campaigns, and my hope that now rises to rival my father's honour, trust not to flight. Our swords must hew a way through the enemy. Where yonder mass of men presses thickest, there your proud country calls you with Pallas at your head. No gods are they who bear us down; mortals, we feel the pressure of a mortal foe; we have as many lives and hands as he.
Lo, the deep shuts us in with vast sea barrier; even now land fails our flight; shall we make ocean or Troy our goal?' So speaks he, and bursts amid the serried foe. First Lagus meets him, drawn thither by malign destiny; him, as he tugs at a ponderous stone, hurling his spear where the spine ran dissevering the ribs, he pierces and wrenches out the spear where it stuck fast in the bone. Nor does Hisbo catch him stooping, for all that he hoped it; for Pallas, as he rushes unguarded on, furious at his comrade's cruel death, receives him on his sword and buries it in his distended lungs. Next he attacks Sthenius, and Anchemolus of Rhoetus' ancient family, who dared to violate the bridal chamber of his stepmother. You, too, the twins Larides and Thymber, fell on the Rutulian fields, children of Daucus, indistinguishable for likeness and a sweet perplexity to your parents. But now Pallas made cruel difference between you; for thy head, Thymber, is swept off by Evander's sword; thy right hand, Larides, severed, seeks its master, and the dying fingers jerk and clutch at the sword. Fired by his encouragement, and beholding his noble deeds, the Arcadians advance in wrath and shame to meet the enemy in arms. Then Pallas pierces Rhoeteus as he flies past in his chariot. This space, this much of respite was given to Ilus; for at Ilus he had aimed the strong spear from afar, and Rhoeteus intercepts its passage, in flight from thee, noble Teuthras and Tyres thy brother; he rolls from the chariot in death, and his heels strike the Rutulian fields. And as the shepherd, when summer winds have risen to his desire, kindles the woods dispersedly; on a sudden the mid spaces catch, and a single flickering line of fire spreads wide over the plain; he sits looking down on his conquest and the revel of the flames; even so, Pallas, do thy brave comrades gather close to sustain thee. But warrior Halesus advances full on them, gathering himself behind his armour; he slays Ladon, Pheres, Demodocus; his gleaming sword shears off Strymonius' hand as it rises to his throat; he strikes Thoas on the face with a stone, and drives the bones asunder in a shattered mass of blood and brains. Halesus had his father the soothsayer kept hidden in the woodland: when the old man's glazing eyes sank to death, the Fates laid hand on him and devoted him to the arms of Evander. Pallas aims at him, first praying thus: 'Grant now, lord Tiber, to the steel I poise and hurl, a prosperous way through brawny Halesus' breast; thine oak shall bear these arms and the dress he wore.' The god heard it; while Halesus covers Imaon, he leaves, alas! his breast unarmed to the Arcadian's weapon. Yet at his grievous death Lausus, himself a great arm of the war, lets not his columns be dismayed; at once he meets and cuts down Abas, the check and stay of their battle. The men of Arcadia go down before him; down go the Etruscans, and you, O Teucrians, invincible by Greece. The armies close, matched in strength and in captains; the rear ranks crowd in; weapons and hands are locked in the press. Here Pallas strains and pushes on, here Lausus opposite, nearly matched in age, excellent in beauty; but fortune had denied both return to their own land. Yet that they should meet face to face the sovereign of high Olympus allowed not; an early fate awaits them beneath a mightier foe. Meanwhile Turnus' gracious sister bids him take Lausus' room, and his fleet chariot parts the ranks.
When he saw his comrades, 'It is time,' he cried, 'to stay from battle. I alone must assail Pallas; to me and none other Pallas is due; I would his father himself were here to see.' So speaks he, and his Rutulians draw back from a level space at his bidding. But then as they withdrew, he, wondering at the haughty command, stands in amaze at Turnus, his eyes scanning the vast frame, and his fierce glance perusing him from afar. And with these words he returns the words of the monarch: 'For me, my praise shall even now be in the lordly spoils I win, or in illustrious death: my father will bear calmly either lot: away with menaces.' He speaks, and advances into the level ring. The Arcadians' blood gathers chill about their hearts. Turnus leaps from his chariot and prepares to close with him. And as a lion sees from some lofty outlook a bull stand far off on the plain revolving battle, and flies at him, even such to see is Turnus' coming. When Pallas deemed him within reach of a spear-throw, he advances, if so chance may assist the daring of his overmatched strength, and thus cries into the depth of sky: 'By my father's hospitality and the board whereto thou camest a wanderer, on thee I call, Alcides; be favourable to my high emprise; let Turnus even in death discern me stripping his blood-stained armour, and his swooning eyes endure the sight of his conqueror.' Alcides heard him, and deep in his heart he stifled a heavy sigh, and let idle tears fall. Then with kindly words the father accosts his son: 'Each hath his own appointed day; short and irrecoverable is the span of life for all: but to spread renown by deeds is the task of valour. Under high Troy town many and many a god's son fell; nay, mine own child Sarpedon likewise perished. Turnus too his own fate summons, and his allotted period hath reached the goal.' So speaks he, and turns his eyes away from the Rutulian fields. But Pallas hurls his spear with all his strength, and pulls his sword flashing out of the hollow scabbard. The flying spear lights where the armour rises high above the shoulder, and, forcing a way through the shield's rim, ceased not till it drew blood from mighty Turnus. At this Turnus long poises the spear-shaft with its sharp steel head, and hurls it on Pallas with these words: 'See thou if our weapon have not a keener point.' He ended; but for all the shield's plating of iron and brass, for all the bull-hide that covers it round about, the quivering spear-head smashes it fair through and through, passes the guard of the corslet, and pierces the breast with a gaping hole. He tears the warm weapon from the wound; in vain; together and at once life-blood and sense follow it. He falls heavily on the ground, his armour clashes over him, and his bloodstained face sinks in death on the hostile soil. And Turnus standing over him . . .: 'Arcadians,' he cries, 'remember these my words, and bear them to Evander. I send him back his Pallas as was due. All the meed of the tomb, all the solace of sepulture, I give freely. Dearly must he pay his welcome to Aeneas.' And with these words, planting his left foot on the dead, he tore away the broad heavy sword-belt engraven with a tale of crime, the array of grooms foully slain together on their bridal night, and the nuptial chambers dabbled with blood, which Clonus, son of Eurytus, had wrought richly in gold. Now Turnus exults in spoiling him of it, and rejoices at his prize.
Ah spirit of man, ignorant of fate and the allotted future, or to keep bounds when elate with prosperity!—the day will come when Turnus shall desire to have bought Pallas' safety at a great ransom, and curse the spoils of this fatal day. But with many moans and tears Pallas' comrades lay him on his shield and bear him away amid their ranks. O grief and glory and grace of the father to whom thou shalt return! This one day sent thee first to war, this one day takes thee away, while yet thou leavest heaped high thy Rutulian dead. And now no rumour of the dreadful loss, but a surer messenger flies to Aeneas, telling him his troops are on the thin edge of doom; it is time to succour the routed Teucrians. He mows down all that meets him, and hews a broad path through their columns with furious sword, as he seeks thee, O Turnus, in thy fresh pride of slaughter. Pallas, Evander, all flash before his eyes; the board whereto but then he had first come a wanderer, and the clasped hands. Here four of Sulmo's children, as many more of Ufens' nurture, are taken by him alive to slaughter in sacrifice to the shade below, and slake the flames of the pyre with captive blood. Next he levelled his spear full on Magus from far. He stoops cunningly; the spear flies quivering over him; and, clasping his knees, he speaks thus beseechingly: 'By thy father's ghost, by Iülus thy growing hope, I entreat thee, save this life for a child and a parent. My house is stately; deep in it lies buried wealth of engraven silver; I have masses of wrought and unwrought gold. The victory of Troy does not turn on this, nor will a single life make so great a difference.' He ended; to him Aeneas thus returns answer: 'All the wealth of silver and gold thou tellest of, spare thou for thy children. Turnus hath broken off this thy trafficking in war, even then when Pallas fell. Thus judges the ghost of my father Anchises, thus Iülus.' So speaking, he grasps his helmet with his left hand, and, bending back his neck, drives his sword up to the hilt in the suppliant. Hard by is Haemonides, priest of Phoebus and Trivia, his temples wound with the holy ribboned chaplet, all glittering in white-robed array. Him he meets and chases down the plain, and, standing over his fallen foe, slaughters him and wraps him in great darkness; Serestus gathers the armour and carries it away on his shoulders, a trophy, King Gradivus, to thee. Caeculus, born of Vulcan's race, and Umbro, who comes from the Marsian hills, fill up the line. The Dardanian rushes full on them. His sword had hewn off Anxur's left arm, with all the circle of the shield—he had uttered brave words and deemed his prowess would second his vaunts, and perchance with spirit lifted up had promised himself hoar age and length of years—when Tarquitus in the pride of his glittering arms met his fiery course, whom the nymph Dryope had borne to Faunus, haunter of the woodland. Drawing back his spear, he pins the ponderous shield to the corslet; then, as he vainly pleaded and would say many a thing, strikes his head to the ground, and, rolling away the warm body, cries thus over his enemy: 'Lie there now, terrible one! no mother's love shall lay thee in the sod, or place thy limbs beneath thine heavy ancestral tomb. To birds of prey shalt thou be left, or borne down sunk in the eddying water, where hungry fish shall suck thy wounds.'
Next he sweeps on Antaeus and Lucas, the first of Turnus' train, and brave Numa and tawny-haired Camers, born of noble Volscens, who was wealthiest in land of the Ausonians, and reigned in silent Amyclae. Even as Aegaeon, who, men say, had an hundred arms, an hundred hands, fifty mouths and breasts ablaze with fire, and arrayed against Jove's thunders as many clashing shields and drawn swords: so Aeneas, when once his sword's point grew warm, rages victorious over all the field. Nay, lo! he darts full in face on Niphaeus' four-horse chariot; before his long strides and dreadful cry they turned in terror and dashed back, throwing out their driver and tearing the chariot down the beach. Meanwhile the brothers Lucagus and Liger drive up with their pair of white horses. Lucagus valiantly waves his drawn sword, while his brother wheels his horses with the rein. Aeneas, wrathful at their mad onslaught, rushes on them, towering high with levelled spear. To him Liger . . . 'Not Diomede's horses dost thou discern, nor Achilles' chariot, nor the plains of Phrygia: now on this soil of ours the war and thy life shall end together.' Thus fly mad Liger's random words. But not in words does the Trojan hero frame his reply: for he hurls his javelin at the foe. As Lucagus spurred on his horses, bending forward over the whip, with left foot advanced ready for battle, the spear passes through the lower rim of his shining shield and pierces his left groin, knocks him out of the chariot, and stretches him in death on the fields. To him good Aeneas speaks in bitter words: 'Lucagus, no slackness in thy coursers' flight hath betrayed thee, or vain shadow of the foe turned them back; thyself thou leapest off the harnessed wheels.' In such wise he spoke, and caught the horses. His brother, slipping down from the chariot, pitiably outstretched helpless hands: 'Ah, by the parents who gave thee birth, great Trojan, spare this life and pity my prayer.' More he was pleading; but Aeneas: 'Not such were the words thou wert uttering. Die, and be brother undivided from brother.' With that his sword's point pierces the breast where the life lies hid. Thus the Dardanian captain dealt death over the plain, like some raging torrent stream or black whirlwind. At last the boy Ascanius and his troops burst through the ineffectual leaguer and issue from the camp. Meanwhile Jupiter breaks silence to accost Juno: 'O sister and wife best beloved, it is Venus, as thou deemedst, nor is thy judgment astray, who sustains the forces of Troy; not their own valour of hand in war, and untamable spirit and endurance in peril.' To whom Juno beseechingly: 'Why, fair my lord, vexest thou one sick at heart and trembling at thy bitter words? If that force were in my love that once was, and that was well, never had thine omnipotence denied me leave to withdraw Turnus from battle and preserve him for his father Daunus in safety. Now let him perish, and pay forfeit to the Trojans of his innocent blood. Yet he traces his birth from our name, and Pilumnus was his father in the fourth generation, and oft and again his bountiful hand hath heaped thy courts with gifts.' To her the king of high heaven thus briefly spoke: 'If thy prayer for him is delay of present death and respite from his fall, and thou dost understand that I ordain it thus, remove thy Turnus in flight, and snatch him from the fate that is upon him. For so much indulgence there is room.
But if any ampler grace mask itself in these thy prayers, and thou dreamest of change in the whole movement of the war, idle is the hope thou nursest.' And Juno, weeping: 'Ah yet, if thy mind were gracious where thy lips are stern, and this gift of life might remain confirmed to Turnus! Now his portion is bitter and guiltless death, or I wander idly from the truth. Yet, oh that I rather deluded myself with false alarms, and thou who canst wouldst bend thy course to better counsels.' These words uttered, she darted through the air straight from high heaven, cloud-girt in driving tempest, and sought the Ilian ranks and camp of Laurentum. Then the goddess, strange and ominous to see, fashions into the likeness of Aeneas a thin and pithless shade of hollow mist, decks it with Dardanian weapons, and gives it the mimicry of shield and divine helmet plume, gives unsubstantial words and senseless utterance, and the mould and motion of his tread: like shapes rumoured to flit when death is past, or dreams that delude the slumbering senses. But in front of the battle-ranks the phantom dances rejoicingly, and with arms and mocking accents provokes the foe. Turnus hastens up and sends his spear whistling from far on it; it gives back and turns its footsteps. Then indeed Turnus, when he believed Aeneas turned and fled from him, and his spirit madly drank in the illusive hope: 'Whither fliest thou, Aeneas? forsake not thy plighted bridal chamber. This hand shall give thee the land thou hast sought overseas.' So clamouring he pursues, and brandishes his drawn sword, and sees not that his rejoicing is drifting with the winds. The ship lay haply moored to a high ledge of rock, with ladders run out and gangway ready, wherein king Osinius sailed from the coasts of Clusium. Here the fluttering phantom of flying Aeneas darts and hides itself. Nor is Turnus slack to follow; he overleaps the barriers and springs across the high gangways. Scarcely had he lighted on the prow; the daughter of Saturn snaps the hawser, and the ship, parted from her cable, runs out on the ebbing tide. And him Aeneas seeks for battle and finds not, and sends many a man that meets him to death. Then the light phantom seeks not yet any further hiding-place, but, flitting aloft, melts in a dark cloud; and a blast comes down meanwhile and sweeps Turnus through the seas. He looks back, witless of his case and thankless for his salvation, and, wailing, stretches both hands to heaven: 'Father omnipotent, was I so guilty in thine eyes, and is this the punishment thou hast ordained? Whither am I borne? whence came I? what flight is this, or in what guise do I return? Shall I look again on the camp or walls of Laurentum? What of that array of men who followed me to arms? whom—oh horrible!—I have abandoned all amid a dreadful death; and now I see the stragglers and catch the groans of those who fall. What do I? or how may earth ever yawn for me deep enough? Do you rather, O winds, be pitiful, carry my bark on rock or reef; it is I, Turnus, who desire and implore you; or drive me on the cruel shoals of the Syrtis, where no Rutulian may follow nor rumour know my name.' Thus speaking, he wavers in mind this way and that: maddened by the shame, shall he plunge on his sword's harsh point and drive it through his side, or fling himself among the waves, and seek by swimming to gain the winding shore, again to return on the Trojan arms?
Thrice he essayed either way; thrice queenly Juno checked and restrained him in pity of heart. Cleaving the deep, he floats with the tide down the flood, and is borne on to his father Daunus' ancient city. But meanwhile at Jove's prompting fiery Mezentius takes his place in the battle and assails the triumphant Teucrians. The Tyrrhene ranks gather round him, and all at once in unison shower their darts down on the hated foe. As a cliff that juts into the waste of waves, meeting the raging winds and breasting the deep, endures all the threatening force of sky and sea, itself fixed immovable, so he dashes to earth Hebrus son of Dolichaon, and with him Latagus, and Palmus as he fled; catching Latagus full front in the face with a vast fragment of mountain rock, while Palmus he hamstrings, and leaves him rolling helpless; his armour he gives Lausus to wear on his shoulders, and the plumes to fix on his crest. With them fall Evanthes the Phrygian, and Mimas, fellow and birthmate of Paris; for on one night Theano bore him to his father Amycus, and the queen, Cisseus' daughter, was delivered of Paris the firebrand; he sleeps in his fathers' city; Mimas lies a stranger on the Laurentian coast. And as the boar driven by snapping hounds from the mountain heights, many a year hidden by Vesulus in his pines, many an one fed in the Laurentian marsh among the reedy forest, once come among the nets, halts and snorts savagely, with shoulders bristling up, and none of them dare be wrathful or draw closer, but they shower from a safe distance their darts and cries; even thus none of those whose anger is righteous against Mezentius have courage to meet him with drawn weapon: far off they provoke him with missiles and huge clamour, and he turns slow and fearless round about, grinding his teeth as he shakes the spears off his shield. From the bounds of ancient Corythus Acron the Greek had come, leaving for exile a bride half won. Seeing him afar dealing confusion amid the ranks, in crimson plumes and his plighted wife's purple,—as an unpastured lion often ranging the deep coverts, for madness of hunger urges him, if he haply catches sight of a timorous roe or high-antlered stag, he gapes hugely for joy, and, with mane on end, clings crouching over its flesh, his cruel mouth bathed in reeking gore. . . . so Mezentius darts lightly among the thick of the enemy. Hapless Acron goes down, and, spurning the dark ground, gasps out his life, and covers the broken javelin with his blood. But the victor deigned not to bring down Orodes with the blind wound of his flying lance as he fled; full face to face he meets him, and engages man with man, conqueror not by stealth but armed valour. Then, as with planted foot, he thrust him off the spear: 'O men,' he cries, 'Orodes lies low, no slight arm of the war.' His comrades shout after him the glad battle chant. And the dying man: 'Not unavenged nor long, whoso thou art, shalt thou be glad in victory: thee too an equal fate marks down, and in these fields thou shalt soon lie.' And smiling on him half wrathfully, Mezentius: 'Now die thou. But of me let the father of gods and king of men take counsel.' So saying, he drew the weapon out of his body. Grim rest and iron slumber seal his eyes; his lids close on everlasting night.
Caedicus slays Alcathoüs, Sacrator Hydaspes, Rapo Parthenius and the grim strength of Orses, Messapus Clonius and Erichaetes son of Lycaon, the one when his reinless horse stumbling had flung him to the ground, the other as they met on foot. And Agis the Lycian advanced only to be struck from horseback by Valerus, brave as his ancestry; and Thronius by Salius, and Salius by Nealces with treacherous arrow-shot that stole from far. Now the heavy hand of war dealt equal woe and counterchange of death; in even balance conquerors and conquered slew and fell; nor one nor other knows of retreat. The gods in Jove's house pity the vain rage of either and all the agonising of mortals. From one side Venus, from one opposite Juno, daughter of Saturn, looks on; pale Tisiphone rages among the many thousand men. But now, brandishing his huge spear, Mezentius strides glooming over the plain, vast as Orion when, with planted foot, he cleaves his way through the vast pools of mid-ocean and his shoulder overtops the waves, or carrying an ancient mountain-ash from the hilltops, paces the ground and hides his head among the clouds: so moves Mezentius, huge in arms. Aeneas, espying him in the deep columns, makes on to meet him. He remains, unterrified, awaiting his noble foe, steady in his own bulk, and measures with his eye the fair range for a spear. 'This right hand's divinity, and the weapon I poise and hurl, now be favourable! thee, Lausus, I vow for the live trophy of Aeneas, dressed in the spoils stripped from the pirate's body.' He ends, and throws the spear whistling from far; it flies on, glancing from the shield, and pierces illustrious Antores hard by him sidelong in the flank; Antores, companion of Hercules, who, sent thither from Argos, had stayed by Evander, and settled in an Italian town. Hapless he goes down with a wound not his own, and in death gazes on the sky, and Argos is sweet in his remembrance. Then good Aeneas throws his spear; through the sheltering circle of threefold brass, through the canvas lining and fabric of triple-sewn bull-hide it went, and sank deep in his groin; yet carried not its strength home. Quickly Aeneas, joyful at the sight of the Tyrrhenian's blood, snatches his sword from his thigh and presses hotly on his struggling enemy. Lausus saw, and groaned deeply for love of his dear father, and tears rolled over his face. Here will I not keep silence of thy hard death-doom and thine excellent deeds (if in any wise things wrought in the old time may win belief), nor of thyself, O fitly remembered! He, helpless and trammelled, withdrew backward, the deadly spear-shaft trailing from his shield. The youth broke forward and plunged into the fight; and even as Aeneas' hand rose to bring down the blow, he caught up his point and held him in delay. His comrades follow up with loud cries, so the father may withdraw in shelter of his son's shield, while they shower their darts and bear back the enemy with missiles from a distance. Aeneas wrathfully keeps covered.
And as when storm-clouds pour down in streaming hail, all the ploughmen and country-folk scatter off the fields, and the wayfarer cowers safe in his fortress, a stream's bank or deep arch of rock, while the rain falls, that they may do their day's labour when sunlight reappears; thus under the circling storm of weapons Aeneas sustains the cloud of war till it thunders itself all away, and calls on Lausus, on Lausus, with chiding and menace: 'Whither runnest thou on thy death, with daring beyond thy strength? thine affection betrays thee into rashness.' But none the less he leaps madly on; and now wrath rises higher and fiercer in the Dardanian captain, and the Fates pass Lausus' last threads through their hand; for Aeneas drives the sword strongly right through him up all its length: the point pierced the light shield that armed his assailant, and the tunic sewn by his mother with flexible gold: blood filled his breast, and the life left the body and passed mourning through the air to the under world. But when Anchises' son saw the look on the dying face, the face pale in wonderful wise, he sighed deeply in pity, and reached forth his hand, as the likeness of his own filial affection flashed across his soul. 'What now shall good Aeneas give thee, what, O poor boy, for this thy praise, for guerdon of a nature so noble? Keep for thine own the armour thou didst delight in; and I restore thee, if that matters aught at all, to the ghosts and ashes of thy parents. Yet thou shalt have this sad comfort in thy piteous death, thou fallest by great Aeneas' hand.' Then, chiding his hesitating comrades, he lifts him from the ground, dabbling the comely-ranged tresses with blood. Meanwhile his father, by the wave of the Tiber river, stanched his wound with water, and rested his body against a tree-trunk. Hard by his brazen helmet hangs from the boughs, and the heavy armour lies quietly on the meadow. Chosen men stand round; he, sick and panting, leans his neck and lets his beard spread down over his chest. Many a time he asks for Lausus, and sends many an one to call him back and carry a parent's sad commands. But Lausus his weeping comrades were bearing lifeless on his armour, mighty and mightily wounded to death. Afar the soul prophetic of ill knew their lamentation: he soils his gray hairs plenteously with dust, and stretches both hands on high, and clings on the dead. 'Was life's hold on me so sweet, O my son, that I let him I bore receive the hostile stroke in my room? Am I, thy father, saved by these wounds of thine, and living by thy death? Alas and woe! now at last exile is bitter! now the wound is driven deep! And I, even I, O my son, stained thy name with crime, driven in hatred from the throne and sceptre of my fathers. I owed vengeance to my country and my people's resentment; might mine own guilty life but have paid it by every form of death! Now I live, and leave not yet man and day; but I will.' As he speaks thus he raises himself painfully on his thigh, and though the violence of the deep wound cripples him, yet unbroken he bids his horse be brought, his beauty, his comfort, that ever had carried him victorious out of war, and says these words to the grieving beast: 'Rhoebus, we have lived long, if aught at all lasts long with mortals.
This day wilt thou either bring back in triumph the gory head and spoils of Aeneas, and we will avenge Lausus' agonies; or if no force opens a way, thou wilt die with me: for I deem not, bravest, thou wilt deign to bear an alien rule and a Teucrian lord.' He spoke, and took his welcome seat on the back he knew, loading both hands with keen javelins, his head sheathed in glittering brass and shaggy horse-hair plumes. Thus he galloped in. Through his heart sweep together the vast tides of shame and mingling madness and grief. And with that he thrice loudly calls Aeneas. Aeneas knew the call, and makes glad invocation: 'So the father of gods speed me, so Apollo on high: do thou essay to close hand to hand. . . .' Thus much he utters, and moves up to meet him with levelled spear. And he: 'Why seek to frighten me, fierce man, now my son is gone? this was thy one road to my ruin. We shrink not from death, nor relent before any of thy gods. Cease; for I come to my death, first carrying these gifts for thee.' He spoke, and hurled a weapon at his enemy; then plants another and yet another as he darts round in a wide circle; but they are stayed on the boss of gold. Thrice he rode wheeling close round him by the left, and sent his weapons strongly in; thrice the Trojan hero turns round, taking the grim forest on his brazen guard. Then, weary of lingering in delay on delay, and plucking out spear-head after spear-head, and hard pressed in the uneven match of battle, with much counselling of spirit now at last he bursts forth, and sends his spear at the war-horse between the hollows of the temples. The creature raises itself erect, beating the air with its feet, throws its rider, and coming down after him in an entangled mass, slips its shoulder as it tumbles forward. The cries of Trojans and Latins kindle the sky. Aeneas rushes up, drawing his sword from the scabbard, and thus above him: 'Where now is gallant Mezentius and all his fierce spirit?' Thereto the Tyrrhenian, as he came to himself and gazing up drank the air of heaven: 'Bitter foe, why these taunts and menaces of death? Naught forbids my slaughter; neither on such terms came I to battle, nor did my Lausus make treaty for this between me and thee. This one thing I beseech thee, by whatsoever grace a vanquished enemy may claim: allow my body sepulture. I know I am girt by the bitter hatred of my people. Stay, I implore, their fury, and grant me and my son union in the tomb.' So speaks he, and takes the sword in his throat unfalteringly, and the lifeblood spreads in a wave over his armour.

BOOK ELEVENTH
THE COUNCIL OF THE LATINS, AND THE LIFE AND DEATH OF CAMILLA

Meanwhile Dawn arose forth of Ocean. Aeneas, though the charge presses to give a space for burial of his comrades, and his mind is in the tumult of death, began to pay the gods his vows of victory with the breaking of the East. He plants on a mound a mighty oak with boughs lopped away on every hand, and arrays it in the gleaming arms stripped from Mezentius the captain, a trophy to thee, mighty Lord of War; he fixes on it the plumes dripping with blood, the broken spears, and the corslet struck and pierced in twelve places; he ties the shield of brass on his left hand, and hangs from his neck the ivory sword. Then among his joyous comrades (for all the throng of his captains girt him close about) he begins in these words of cheer: 'The greatest deed is done, O men; be all fear gone for what remains.
These are the spoils of a haughty king, the first-fruits won from him; my hands have set Mezentius here. Now our way lies to the walls of the Latin king. Prepare your arms in courage, and let your hopes anticipate the war; let no ignorant delay hinder or tardy thoughts of fear keep us back, so soon as heaven grant us to pluck up the standards and lead our army from the camp. Meanwhile let us commit to earth the unburied bodies of our comrades, since deep in Acheron this honour is left alone. Go,' says he, 'grace with the last gifts those noble souls whose blood won us this land for ours; and first let Pallas be sent to Evander's mourning city, he whose valour failed not when the day of darkness took him, and the bitter wave of death.' So speaks he weeping, and retraces his steps to the door, where aged Acoetes watched Pallas' lifeless body laid out for burial; once armour-bearer to Evander in Parrhasia, but now gone forth with darker omens, appointed attendant to his darling foster-child. Around is the whole train of servants, with a crowd of Trojans, and the Ilian women with hair unbound in mourning after their fashion. When Aeneas entered at the high doorway they beat their breasts and raise a loud wail aloft, and the palace moans to their grievous lamentation. Himself, when he saw the pillowed head and fair face of Pallas, and on his smooth breast the gaping wound of the Ausonian spear-head, speaks thus with welling tears: 'Did Fortune in her joyous coming,' he cries, 'O luckless boy, grudge thee the sight of our realm, and a triumphal entry to thy father's dwelling? Not this promise of thee had I given to Evander thy sire at my departure, when he embraced me as I went and bade me speed to a wide empire, and yet warned me in fear that the men were valiant, the people obstinate in battle. And now he, fast ensnared by empty hope, perchance offers vows and heaps gifts on his altars; we, a mourning train, go in hollow honour by his corpse, who now owes no more to aught in heaven. Unhappy! thou wilt see thy son cruelly slain; is this our triumphal return awaited? is this my strong assurance? Ah me, what a shield is lost, mine Iülus, to Ausonia and to thee!' This lament done, he bids raise the piteous body, and sends a thousand men chosen from all his army for the last honour of escort, to mingle in the father's tears; a small comfort in a great sorrow, yet the unhappy parent's due. Others quickly plait a soft wicker bier of arbutus rods and oak shoots, and shadow the heaped pillows with a leafy covering. Here they lay him, high on their rustic strewing; even as some tender violet or drooping hyacinth-blossom plucked by a maiden's finger, whose sheen and whose grace is not yet departed, but no more does Earth the mother feed it or lend it strength. Then Aeneas bore forth two purple garments stiff with gold, that Sidonian Dido's own hands, happy over their work, had once wrought for him, and shot the warp with delicate gold. One of these he sadly folds round him, a last honour, and veils in its covering the tresses destined to the fire; and heaps up besides many a Laurentine battle-prize, and bids his spoils pass forth in long train; with them the horses and arms whereof he had stripped the enemy, and those, with hands tied behind their back, whom he would send as nether offering to his ghost, and sprinkle the blood of their slaying on the flame. Also he bids his captains carry stems dressed in the armour of the foe, and fix on them the hostile names.
Unhappy Acoetes is led along, outworn with age, he smites his breast and rends his face, and flings himself forward all along the ground. Likewise they lead forth the chariot bathed in Rutulian blood; behind goes weeping Aethon the war-horse, his trappings laid away, and big drops wet his face. Others bear his spear and helmet, for all else is Turnus' prize. Then follow in mourning array the Teucrians and all the Tyrrhenians, and the Arcadians with arms reversed. When the whole long escorting file had taken its way, Aeneas stopped, and sighing deep, pursued thus: 'Once again war's dreadful destiny calls us hence to other tears: hail thou for evermore, O princely Pallas, and for evermore farewell.' And without more words he bent his way to the high walls and advanced towards his camp. And now envoys were there from the Latin city with wreathed boughs of olive, praying him of his grace to restore the dead that lay strewn by the sword over the plain, and let them go to their earthy grave: no war lasts with men conquered and bereft of breath; let this indulgence be given to men once called friends and fathers of their brides. To them Aeneas grants leave in kind and courteous wise, spurning not their prayer, and goes on in these words: 'What spite of fortune, O Latins, hath entangled you in the toils of war, and made you fly our friendship? Plead you for peace to the lifeless bodies that the battle-lot hath slain? I would fain grant it even to the living. Neither have I come but because destiny had given me this place to dwell in; nor wage I war with your people; your king it is who hath broken our covenant and preferred to trust himself to Turnus' arms. Fitter it were Turnus had faced death to-day. If he will fight out the war and expel the Teucrians, it had been well to meet me here in arms; so had he lived to whom life were granted of heaven or his own right hand. Now go, and kindle the fire beneath your hapless countrymen.' Aeneas ended: they stood dumb in silence, with faces bent steadfastly in mutual gaze. Then aged Drances, ever young Turnus' assailant in hatred and accusation, with the words of his mouth thus answers him again: 'O Trojan, great in renown, yet greater in arms, with what praises may I extol thy divine goodness? Shall thy righteousness first wake my wonder, or thy toils in war? We indeed will gratefully carry these words to our fathers' city, and, if fortune grant a way, will make thee at one with King Latinus. Let Turnus seek his own alliances. Nay, it will be our delight to rear the massy walls of destiny and stoop our shoulders under the stones of Troy.' He ended thus, and all with one voice murmured assent. Twelve days' truce is struck, and in mediation of the peace Teucrians and Latins stray mingling unharmed on the forest heights. The tall ash echoes to the axe's strokes; they overturn pines that soar into the sky, and busily cleave oaken logs and scented cedar with wedges, and drag mountain-ashes on their groaning waggons. And now flying Rumour, harbinger of the heavy woe, fills Evander and Evander's house and city with the same voice that but now told of Pallas victorious over Latium. The Arcadians stream to the gates, snatching funeral torches after their ancient use; the road gleams with the long line of flame, and parts the fields with a broad pathway of light; the arriving crowd of Phrygians meets them and mingles in mourning array.
When the matrons saw all the train approach their dwellings they kindle the town with loud wailing. But no force may withhold Evander; he comes amid them; the bier is set down; he flings himself on Pallas, and clasps him with tears and sighs, and scarcely at last does grief leave his voice's utterance free. 'Other than this, O Pallas! was thy promise to thy father, that thou wouldst not plunge recklessly into the fury of battle. I knew well how strong was the fresh pride of arms and the sweetness of honour in a first battle. Ah, unhappy first-fruits of his youth and bitter prelude of the war upon our borders! ah, vows and prayers of mine that no god heard! and thou, pure crown of wifehood, happy that thou art dead and not spared for this sorrow! But I have outgone my destiny in living, to stay here the survivor of my child. Would I had followed the allied arms of Troy, to be overwhelmed by Rutulian weapons! Would my life had been given, and I and not my Pallas were borne home in this procession! I would not blame you, O Teucrians, nor our treaty and the friendly hands we clasped: our old age had that appointed debt to pay. Yet if untimely death awaited my son, it will be good to think he fell leading the Teucrians into Latium, and slew his Volscian thousands before he fell. Nay, no other funeral than this would I deem thy due, my Pallas, than good Aeneas does, than the mighty Phrygians, than the Tyrrhene captains and all the army of Tyrrhenia. Great are the trophies they bring on whom thine hand deals death; thou also, Turnus, wert standing now a great trunk dressed in arms, had his age and his strength of years equalled thine. But why, unhappy, do I delay the Trojan arms? Go, and forget not to carry this message to your king: Thine hand it is that keeps me lingering in a life that is hateful since Pallas fell, and Turnus is the debt thou seest son and father claim: for thy virtue and thy fortune this scope alone is left. I ask not joy in life; I may not; but to carry this to my son deep in the under world.' Meanwhile Dawn had raised her gracious light on weary men, bringing back task and toil: now lord Aeneas, now Tarchon, have built the pyres on the winding shore. Hither in ancestral fashion hath each borne the bodies of his kin; the dark fire is lit beneath, and the vapour hides high heaven in gloom. Thrice, girt in glittering arms, they have marched about the blazing piles, thrice compassed on horseback the sad fire of death, and uttered their wail. Tears fall fast upon earth and armour; cries of men and blare of trumpets roll skyward. Then some fling on the fire Latin spoils stripped from the slain, helmets and shapely swords, bridles and glowing chariot wheels; others familiar gifts, the very shields and luckless weapons of the dead. Around are slain in sacrifice oxen many in number, and bristly swine and cattle gathered out of all the country are slaughtered over the flames. Then, crowding the shore, they gaze on their burning comrades, and guard the embers of the pyres, and cannot tear themselves away till dewy Night wheels on the star-spangled glittering sky. Therewithal the unhappy Latins far apart build countless pyres and bury many bodies of men in the ground; and many more they lift and bear away to the neighbouring country, or send them back to the city; the rest, a vast heap of undistinguishable slaughter, they burn uncounted and unhonoured; on all sides the broad fields gleam with crowded rivalry of fires.
The third Dawn had rolled away the chill shadow from the sky; mournfully they piled high the ashes and mingled bones from the embers, and heaped a load of warm earth above them. Now in the dwellings of rich Latinus' city the noise is loudest and most the long wail. Here mothers and their sons' unhappy brides, here beloved sisters sad-hearted and orphaned boys curse the disastrous war and Turnus' bridal, and bid him his own self arm and decide the issue with the sword, since he claims for himself the first rank and the lordship of Italy. Drances fiercely embitters their cry, and vouches that Turnus alone is called, alone is claimed for battle. Yet therewith many a diverse-worded counsel is for Turnus, and the great name of the queen overshadows him, and he rises high in renown of trophies fitly won. Among their stir, and while confusion is fiercest, lo! to crown all, the envoys from great Diomede's city bring their gloomy message: nothing is come of all the toil and labour spent; gifts and gold and strong entreaties have been of no avail; Latium must seek other arms, or sue for peace to the Trojan king. For heavy grief King Latinus himself swoons away. The wrath of heaven and the fresh graves before his eyes warn him that Aeneas is borne on by fate's evident will. So he sends imperial summons to his high council, the foremost of his people, and gathers them within his lofty courts. They assemble, and stream up the crowded streets to the royal dwelling. Latinus, eldest in years and first in royalty, sits amid them with cheerless brow, and bids the envoys sent back from the Aetolian city tell the news they bring, and demands a full and ordered reply. Then tongues are hushed; and Venulus, obeying his word, thus begins to speak: 'We have seen, O citizens, Diomede in his Argive camp, and outsped our way and passed all its dangers, and touched the hand whereunder the land of Ilium fell. He was founding a town, named Argyripa after his ancestral people, on the conquered fields of Iapygian Garganus. After we entered in, and licence of open speech was given, we lay forth our gifts, we instruct him of our name and country, who are its invaders, and why we are drawn to Arpi. He heard us, and replied thus with face unstirred: '"O fortunate races, realm of Saturn, Ausonians of old, how doth fortune vex your quiet and woo you to tempt wars you know not? We that have drawn sword on the fields of Ilium—I forbear to tell the drains of war beneath her high walls, the men sunken in yonder Simoïs—have all over the world paid to the full our punishment and the reward of guilt, a crew Priam's self might pity; as Minerva's baleful star knows, and the Euboïc reefs and Caphereus' revenge. From that warfaring driven to alien shores, Menelaus son of Atreus is in exile far as Proteus' Pillars, Ulysses hath seen the Cyclopes of Aetna. Shall I make mention of the realm of Neoptolemus, and Idomeneus' household gods overthrown? or of the Locrians who dwell on the Libyan beach? Even the lord of Mycenae, the mighty Achaeans' general, sank on his own threshold edge under his accursed wife's hand, where the adulterer crouched over conquered Asia. Aye, or that the gods grudged it me to return to my ancestral altars, to see the bride of my desire, and lovely Calydon! Now likewise sights of appalling presage pursue me; my comrades, lost to me, have soared winging into the sky, and flit birds about the rivers—ah me, dread punishment of my people!—and fill the cliffs with their melancholy cries.
This it was I had to look for even from the time when I madly assailed celestial limbs with steel, and sullied the hand of Venus with a wound. Do not, ah, do not urge me to such battles. Neither have I any war with Troy since her towers are overthrown, nor do I remember with delight the woes of old. Turn to Aeneas with the gifts you bear to me from your ancestral borders. We have stood to face his grim weapons, and met him hand to hand; believe one who hath proved it, how mightily he rises over his shield, in what a whirlwind he hurls his spear. Had the land of Ida borne two more like him, Dardanus had marched to attack the towns of Inachus, and Greece were mourning fate's reverse. In all our delay before that obstinate Trojan city, it was Hector and Aeneas whose hand stayed the Grecian victory and bore back its advance to the tenth year. Both were splendid in courage, both eminent in arms; Aeneas was first in duty. Let your hands join in treaty as they may; but beware that your weapons close not with his." 'Thou hast heard, most gracious king, at once what is the king's answer, and what his counsel for our great struggle.' Scarcely thus the envoys, when a diverse murmur ran through the troubled lips of the Ausonians; even as, when rocks delay some running river, it plashes in the barred pool, and the banks murmur nigh to the babbling wave. So soon as their minds were quieted, and their hurrying lips hushed, the king, first calling on the gods, begins from his lofty throne: 'Ere now could I wish, O Latins, we had determined our course of state, and it had been better thus; not to meet in council at such a time as now, with the enemy seated before our walls. We wage an ill-timed war, fellow-citizens, with a divine race, invincible, unbroken in battle, who brook not even when conquered to drop the sword. If you had hope in appeal to Aetolian arms, abandon it; though each man's hope is his own, you discern how narrow a path it is. Beyond that you see with your eyes and handle with your hands the total ruin of our fortunes. I blame no one; what valour's utmost could do is done; we have fought with our whole kingdom's strength. Now I will unfold what I doubtfully advise and purpose, and with your attention instruct you of it in brief. There is an ancient land of mine bordering the Tuscan river, stretching far westward beyond the Sicanian borders. Auruncans and Rutulians sow on it, work the stiff hills with the ploughshare, and pasture them where they are roughest. Let all this tract, with a pine-clad belt of mountain height, pass to the Teucrians in friendship; let us name fair terms of treaty, and invite them as allies to our realm; let them settle, if they desire it so, and found a city. But if they have a mind to try other coasts and another people, and can abide to leave our soil, let us build twice ten ships of Italian oak, or as many more as they can man; timber lies at the water's edge for all; let them assign the number and fashion of the vessels, and we will supply brass, labour, dockyards. Further, it is our will that an hundred ambassadors of the highest rank in Latium shall go to bear our words and ratify the treaty, holding forth in their hands the boughs of peace, and carrying for gifts weight of gold and ivory, and the chair and striped robe, our royal array. Give counsel openly, and succour our exhausted state.'
Then Drances again, he whose jealous ill-will was wrought to anger and stung with bitterness by Turnus' fame, lavish of wealth and quick of tongue though his hand was cold in war, held no empty counsellor and potent in faction—his mother's rank ennobled a lineage whose paternal source was obscure—rises, and with these words heaps and heightens their passion: 'Dark to no man and needing no voice of ours, O gracious king, is that whereon thou takest counsel. All confess they know how our nation's fortune sways; but their words are choked. Let him grant freedom of speech and abate his breath, he by whose disastrous government and perverse way (I will speak out, though he menace me with arms and death) we see so many stars of battle gone down and all our city sunk in mourning; while he, confident in flight, assails the Trojan camp and makes heaven quail before his arms. Add yet one to those gifts of thine, to all the riches thou bidst us send or promise to the Dardanians, most gracious of kings, but one; let no man's passion overbear thee from giving thine own daughter to an illustrious son and a worthy marriage, and binding this peace by perpetual treaty. Yet if we are thus terror-stricken heart and soul, let us implore him in person, in person plead him of his grace to give way, to restore king and country their proper right. Why again and again hurlest thou these unhappy citizens on peril so evident, O source and spring of Latium's woes? In war is no safety; peace we all implore of thee, O Turnus, and the one pledge that makes peace inviolable. I the first, I whom thou picturest thine enemy, as I care not if I am, see, I bow at thy feet. Pity thine allies; relent, and retire before thy conqueror. Enough have we seen of rout and death, and desolation over our broad lands. Or if glory stir thee, if such strength kindle in thy breast, and if a palace so delight thee for thy dower, be bold, and advance stout-hearted upon the foe. We verily, that Turnus may have his royal bride, must lie scattered on the plains, worthless lives, a crowd unburied and unwept. Do thou also, if thou hast aught of might, if the War-god be in thee as in thy fathers, look him in the face who challenges. . . .' At these words Turnus' passion blazed out. He utters a groan, and breaks forth thus in deep accents: 'Copious indeed, Drances, and fluent is ever thy speech at the moment war calls for action; and when the fathers are summoned thou art there the first. But we need no words to fill our senate-house, safely as thou wingest them while the mounded walls keep off the enemy, and the trenches swim not yet with blood. Thunder on in rhetoric, thy wonted way: accuse thou me of fear, Drances, since thine hand hath heaped so many Teucrians in slaughter, and thy glorious trophies dot the fields. Trial is open of what live valour can do; nor indeed is our foe far to seek; on all sides they surround our walls. Are we going to meet them? Why linger? Will thy bravery ever be in that windy tongue and those timorous feet of thine? . . . My conqueror? Shall any justly flout me as conquered, who sees Tiber swoln fuller with Ilian blood, and all the house and people of Evander laid low, and the Arcadians stripped of their armour? Not such did Bitias and huge Pandarus prove me, and the thousand men whom on one day my conquering hand sent down to hell, shut as I was in their walls and closed in the enemy's ramparts. In war is no safety. Fool! be thy boding on the Dardanian's head and thine own fortunes.
Go on; cease not to throw all into confusion with thy terrors, to exalt the strength of a twice vanquished race, and abase the arms of Latinus before it. Now the princes of the Myrmidons tremble before Phrygian arms, now Tydeus' son and Achilles of Larissa, and Aufidus river recoils from the Adriatic wave. Or when the scheming villain pretends to shrink at my abuse, and sharpens calumny by terror! never shall this hand—keep quiet!—rob thee of such a soul; with thee let it abide, and dwell in that breast of thine. Now I return to thee, my lord, and thy weighty resolves. If thou dost repose no further hope in our arms, if all hath indeed left us, and one repulse been our utter ruin, and our fortune is beyond recovery, let us plead for peace and stretch forth unarmed hands. Yet ah! had we aught of our wonted manhood, his toil beyond all other is blessed and his spirit eminent, who rather than see it thus, hath fallen prone in death and once bitten the ground. But if we have yet resources and an army still unbroken, and cities and peoples of Italy remain for our aid; but if even the Trojans have won their glory at great cost of blood (they too have their deaths, and the storm fell equally on all), why do we shamefully faint even on the threshold? Why does a shudder seize our limbs before the trumpet sound? Often do the Days and the varying change of toiling Time restore prosperity; often Fortune in broken visits makes man her sport and again establishes him. The Aetolian of Arpi will not help us; but Messapus will, and Tolumnius the fortunate, and the captains sent by many a nation; nor will fame be scant to follow the flower of Latium and the Laurentine land. Camilla the Volscian too is with us, leading her train of cavalry, squadrons splendid in brass. But if I only am claimed by the Teucrians for combat, if that is your pleasure, and I am the barrier to the public good, Victory does not so hate and shun my hands that I should renounce any enterprise for so great a hope. I shall meet him in courage, did he outmatch great Achilles and wear arms like his forged by Vulcan's hands. To you and to my father Latinus I Turnus, unexcelled in bravery by any of old, consecrate my life. Aeneas calls on him alone: let him, I implore: let not Drances rather appease with his life this wrath of heaven, if such it be, or win the renown of valour.' Thus they one with another strove together in uncertainty; Aeneas moved from his camp to battle. Lo, a messenger rushes spreading confusion through the royal house, and fills the town with great alarms: the Teucrians, ranged in battle-line with the Tyrrhene forces, are marching down by the Tiber river and filling the plain. Immediately spirits are stirred and hearts shaken and wrath roused in fierce excitement among the crowd. Hurrying hands grasp at arms; for arms their young men clamour; the fathers shed tears and mutter gloomily. With that a great noise rises aloft in diverse contention, even as when flocks of birds haply settle on a lofty grove, or swans utter their hoarse cry among the vocal pools on the fish-filled river of Padusa. 'Yes, citizens!' cries Turnus, seizing his time: 'gather in council and sit praising peace, while they rush on dominion in arms!' Without more words he sprung up and issued swiftly from the high halls. 'Thou, Volusus,' he cries, 'bid the Volscian battalions arm, and lead out the Rutulians. Messapus, and Coras with thy brother, spread your armed cavalry widely over the plain.
Let a division entrench the city gates and man the towers: the rest of our array attack with me where I command.' The whole town goes rushing to the walls; lord Latinus himself, dismayed by the woeful emergency, quits the council and puts off his high designs, and chides himself sorely for not having given Aeneas unasked welcome, and made him son and bulwark of the city. Some entrench the gates, or bring up supply of stones and poles. The hoarse clarion utters the ensanguined note of war. A motley ring of boys and matrons girdle the walls. Therewithal the queen with a crowd of mothers ascends bearing gifts to Pallas' towered temple, and by her side goes maiden Lavinia, source of all that woe, her beautiful eyes cast down. The mothers enter in, and while the temple steams with their incense, pour from the high doorway their mournful cry: 'Maiden armipotent, Tritonian, sovereign of war, break with thine hand the spear of the Phrygian plunderer, hurl him prone to earth and dash him down beneath our lofty gates.' Turnus arrays himself in hot haste for battle, and even now hath done on his sparkling breastplate with its flickering scales of brass, and clasped his golden greaves, his brows yet bare and his sword buckled to his side; he runs down from the fortress height glittering in gold, and exultantly anticipates the foe. Thus when a horse snaps his tether, and, free at last, rushes from the stalls and gains the open plain, he either darts towards the pastures of the herded mares, or bathing, as is his wont, in the familiar river waters, dashes out and neighs with neck stretched high, glorying, and his mane tosses over collar and shoulder. Camilla with her Volscian array meets him face to face in the gateway; the princess leaps from her horse, and all her squadron at her example slide from horseback to the ground. Then she speaks thus: 'Turnus, if bravery hath any just self-confidence, I dare and promise to engage Aeneas' cavalry, and advance to meet the Tyrrhene horse. Permit my hand to try war's first perils: do thou on foot keep by the walls and guard the city.' To this Turnus, with eyes fixed on the terrible maiden: 'O maiden flower of Italy, how may I essay to express, how to prove my gratitude? But now, since that spirit of thine excels all praise, share thou the toil with me. Aeneas, as the report of the scouts I sent assures, hath sent on his light-armed horse to annoy us and scour the plains; himself he marches on the city across the lonely ridge of the mountain steep. I am arranging a stratagem of war in his pathway on the wooded slope, to block a gorge on the highroad with armed troops. Do thou receive and join battle with the Tyrrhene cavalry; with thee shall be gallant Messapus, the Latin squadrons, and Tiburtus' division: do thou likewise assume a captain's charge.' So speaks he, and with like words heartens Messapus and the allied captains to battle, and advances towards the enemy. There is a sweeping curve of glen, made for ambushes and devices of arms. Dark thick foliage hems it in on either hand, and into it a bare footpath leads by a narrow gorge and difficult entrance. Right above it on the watch-towers of the hill-top lies an unexpected level, hidden away in shelter, whether one would charge from right and left or stand on the ridge and roll down heavy stones. Hither he passes by a line of way he knew, and, seizing his ground, occupies the treacherous woods.
Meanwhile in the heavenly dwellings Latona's daughter addressed fleet Opis, one of her maiden fellowship and sacred band, and sadly uttered these accents: 'Camilla moves to fierce war, O maiden, and vainly girds on our arms, dear as she is beyond others to me. For her love of Diana is not newly born, nor her spirit stirred by sudden affection. Driven from his kingdom through jealousy of his haughty power, Metabus left ancient Privernum town, and bore his infant with him in his flight through war and battle, the companion of his exile, and called her by her mother Casmilla's name, with a little change, Camilla. Carrying her before him on his breast, he sought a long ridge of lonely woodland; on all sides angry weapons pressed on him, and Volscian soldiery spread hurrying round about. Lo, in mid flight swoln Amasenus ran foaming with banks abrim, so heavily had the clouds burst in rain. He would swim it; but love of the infant holds him back in alarm for so dear a burden. Inly revolving all, he settled reluctantly on a sudden resolve: the great spear that the warrior haply carried in his stout hand, of hard-knotted and seasoned oak, to it he ties his daughter swathed in cork-tree bark of the woodland, and binds her balanced round the middle of the spear; poising it in his great right hand he thus cries aloft: "Gracious one, haunter of the woodland, maiden daughter of Latona, a father devotes this babe to thy service; thine is this weapon she holds, thine infant suppliant, flying through the air from her enemies. Accept her, I implore, O goddess, for thine own, whom now I entrust to the chance of air." He spoke, and drawing back his arm, darts the spinning spear-shaft: the waters roar: over the racing river poor Camilla shoots on the whistling weapon. But Metabus, as a strong band now presses nigher, plunges into the river, and triumphantly pulls spear and girl, his gift to Trivia, from the grassy turf. No cities ever received him within house or rampart, nor had his savagery submitted to it; he led his life on the lonely pastoral hills. Here he nursed his daughter in the underwood among tangled coverts, on the milk of a wild brood-mare's teats, squeezing the udder into her tender lips. And so soon as the baby stood and went straight on her feet, he armed her hands with a sharp javelin, and hung quiver and bow from her little shoulders. Instead of gold to clasp her tresses, instead of the long skirted gown, a tiger's spoils hang down her back. Even then her tender hand hurled childish darts, and whirled about her head the twisted thong of her sling, and struck down the crane from Strymon or the milk-white swan. Many a mother among Tyrrhenian towns destined her for their sons in vain; content with Diana alone, she keeps unsoiled for ever the love of her darts and maidenhood. Would she had not plunged thus into warfare and provoked the Trojans by attack! so were she now dear to me and one of my company. But since bitter doom is upon her, up, glide from heaven, O Nymph, and seek the Latin borders, where under evil omen they join in baleful battle. Take these, and draw from the quiver an avenging shaft; by it shall he pay me forfeit of his blood, whoso, Trojan or Italian alike, shall sully her sacred body with a wound. Thereafter will I in a sheltering cloud bear body and armour of the hapless girl unspoiled to the tomb, and lay them in her native land.' She spoke; but the other sped lightly down the aery sky, girt about with dark whirlwind on her echoing way.
But meanwhile the Trojan force nears the walls, with the Etruscan captains and their whole cavalry arrayed in ordered squadrons. Their horses' trampling hoofs thunder on all the field, as, swerving this way and that, they chafe at the reins' pressure; the iron field bristles wide with spears, and the plain is aflame with uplifted arms. Likewise Messapus and the Latin horse, and Coras and his brother, and maiden Camilla's squadron, come forth against them on the plain, and draw back their hands and level the flickering points of their long lances, in a fire of neighing horses and advancing men. And now each had drawn within javelin-cast of each, and drew up; with a sudden shout they dart forth, and urge on their furious horses; from all sides at once weapons shower thick like snow, and veil the sky with their shadow. In a moment Tyrrhenus and fiery Aconteus charge violently with crossing spears, and are the first to fall; they go down with a heavy crash, and their beasts break and shatter chest upon chest. Aconteus, hurled off like a thunderbolt or some mass slung from an engine, is dashed away, and scatters his life in air. Immediately the lines waver, and the Latins wheeling about throw their shields behind them and turn their horses towards the town. The Trojans pursue; Asilas heads and leads on their squadrons. And now they drew nigh the gates, and again the Latins raise a shout and wheel their supple necks about; the pursuers fly, and gallop right back with loosened rein: as when the sea, running up in ebb and flow, now rushes shoreward and strikes over the cliffs in a wave of foam, drenching the edge of the sand in its curving sweep; now runs swirling back, and the surge sucks the rolling stones away. Twice the Tuscans turn and drive the Rutulians towards the town; twice they are repelled, and look back behind them from cover of their shields. But when now meeting in a third encounter, the lines are locked together all their length, and man singles out his man; then indeed, amid groans of the dying, deep in blood roll armour and bodies, and horses half slain mixed up with slaughtered men. The battle swells fierce. Orsilochus hurled his spear at the horse of Remulus, whom himself he shrank to meet, and left the steel in it under the ear; at the stroke the charger rears madly, and, mastered by the wound, lifts his chest and flings up his legs: the rider is thrown and rolls over on the ground. Catillus strikes down Iollas, and Herminius mighty in courage, mighty in limbs and arms, bareheaded, tawny-haired, bare-shouldered; undismayed by wounds, he leaves his vast body open against arms. Through his broad shoulders the quivering spear runs piercing him through, and doubles him up with pain. Everywhere the dark blood flows; they deal death with the sword in battle, and seek a noble death by wounds. But amid the slaughter Camilla rages, a quivered Amazon, with one side stripped for battle, and now sends tough javelins showering from her hand, now snatches the strong battle-axe in her unwearying grasp; the golden bow, the armour of Diana, clashes on her shoulders; and even when forced backward in retreat, she turns in flight and aims darts from her bow.
But around her are her chosen comrades, maiden Larina, Tulla, Tarpeia brandishing an axe inlaid with bronze, girls of Italy, whom Camilla the bright chose for her own escort, good at service in peace and war: even as Thracian Amazons when the streams of Thermodon clash beneath them as they go to war in painted arms, whether around Hippolyte, or while martial Penthesilea returns in her chariot, and the crescent-shielded columns of women dance with loud confused cry. Whom first, whom last, fierce maiden, does thy dart strike down? First Euneus, son of Clytius; for as he meets her the long fir shaft crashes through his open breast. He falls spouting streams of blood, and bites the gory ground, and dying writhes himself upon his wound. Then Liris and Pagasus above him; who fall headlong and together, the one thrown as he reins up his horse stabbed under him, the other while he runs forward and stretches his unarmed hand to stay his fall. To these she joins Amastrus, son of Hippotas, and follows from far with her spear Tereus and Harpalycus and Demophoön and Chromis: and as many darts as the maiden sends whirling from her hand, so many Phrygians fall. Ornytus the hunter rides near in strange arms on his Iapygian horse, his broad warrior's shoulders swathed in the hide stripped from a bullock, his head covered by a wolf's wide-grinning mouth and white-tusked jaws; a rustic pike arms his hand; himself he moves amid the squadrons a full head over all. Catching him up (for that was easy amid the rout), she runs him through, and thus cries above her enemy: 'Thou wert hunting wild beasts in the forest, thoughtest thou, Tyrrhenian? the day is come for a woman's arms to refute thy words. Yet no light fame shalt thou carry to thy fathers' ghosts, to have fallen under the weapon of Camilla.' Next Orsilochus and Butes, the two mightiest of mould among the Teucrians; Butes she pierces in the back with her spear-point between corslet and helmet, where the neck shews as he sits, and the shield hangs from his left shoulder; Orsilochus she flies, and darting in a wide circle, slips into the inner ring and pursues her pursuer; then rising to her full height, she drives the strong axe deep through armour and bone, as he pleads and makes much entreaty; warm brain from the wound splashes his face. One met her thus and hung startled by the sudden sight, the warrior son of Aunus haunter of the Apennine, not the meanest in Liguria while fate allowed him to deceive. And he, when he discerns that no fleetness of foot may now save him from battle or turn the princess from pursuit, essays to wind a subtle device of treachery, and thus begins: 'How hast thou glory, if a woman trust in her horse's strength? Debar retreat; trust thyself to level ground at close quarters with me, and prepare to fight on foot. Soon wilt thou know how windy boasting brings one to harm.' He spoke; but she, furious and stung with fiery indignation, hands her horse to an attendant, and takes her stand in equal arms on foot and undismayed, with naked sword and shield unemblazoned. But he, thinking his craft had won the day, himself flies off on the instant, and turning his rein, darts off in flight, pricking his beast to speed with iron-armed heel. 'False Ligurian, in vain elated in thy pride! for naught hast thou attempted thy slippery native arts, nor will thy craft bring thee home unhurt to treacherous Aunus.'
So speaks the maiden, and with running feet swift as fire crosses his horse, and catching the bridle, meets him in front and takes her vengeance in her enemy's blood: as lightly as the falcon, bird of bale, swoops down from aloft on a pigeon high in a cloud, and pounces on and holds her, and disembowels her with taloned feet, while blood and torn feathers flutter down the sky. But the creator of men and gods sits high on Olympus' summit watching this, not with eyes unseeing: he kindles Tyrrhenian Tarchon to the fierce battle, and sharply goads him on to wrath. So Tarchon gallops amid the slaughter where his squadrons retreat, and urges his troops in changing tones, calling man on man by name, and rallies the fliers to fight. 'What terror, what utter cowardice hath fallen on your spirits, O never to be stung to shame, O slack alway? a woman drives you in disorder and routs our ranks! Why wear we steel? for what are these idle weapons in our hands? Yet not slack in Venus' service and wars by night, or, when the curving flute proclaims Bacchus' revels, to look forward to the feast and the cups on the loaded board (this your passion, this your desire!) till the soothsayer pronounce the offering favourable, and the fatted victim invite you to the deep groves.' So speaking, he spurs his horse into the midmost, ready himself to die, and bears violently down full on Venulus; and tearing him from horseback, grasps his enemy and carries him away with him on the saddle-bow by main force. A cry rises up, and all the Latins turn their eyes. Tarchon flies like fire over the plain, carrying the armed man, and breaks off the steel head from his own spear and searches the uncovered places, trying where he may deal the mortal blow; the other struggling against him keeps his hand off his throat, and strongly parries his attack. And, as when a golden eagle snatches and soars with a serpent in his clutch, and his feet are fast in it, and his talons cling; but the wounded snake writhes in coiling spires, and its scales rise and roughen, and its mouth hisses as it towers upward; the bird none the less attacks his struggling prize with crooked beak, while his vans beat the air: even so Tarchon carries Tiburtus out of the ranks, triumphant in his prize. Following their captain's example and issue the men of Maeonia charge in. Then Arruns, due to his doom, circles in advance of fleet Camilla with artful javelin, and tries how fortune may be easiest. Where the maiden darts furious amid the ranks, there Arruns slips up and silently tracks her footsteps; where she returns victorious and retires from amid the enemy, there he stealthily bends his rapid reins. Here he approaches, and here again he approaches, and strays all round and about, and untiringly shakes his certain spear. Haply Chloreus, sacred to Cybele and once her priest, glittered afar, splendid in Phrygian armour; a skin feathered with brazen scales and clasped with gold clothed the horse that foamed under his spur; himself he shone in foreign blue and scarlet, with fleet Gortynian shafts and a Lycian horn; a golden bow was on his shoulder, and the soothsayer's helmet was of gold; red gold knotted up his yellow scarf with its rustling lawny folds; his tunics and barbarian trousers were wrought in needlework.
Him, whether that she might nail armour of Troy on her temples, or herself move in captive gold, the maiden pursued in blind chase alone of all the battle conflict, and down the whole line, reckless and fired by a woman's passion for spoils and plunder: when at last out of his ambush Arruns chooses his time and darts his javelin, praying thus aloud to heaven: 'Apollo, most high of gods, holy Soracte's warder, to whom we beyond all do worship, for whom the blaze of the pinewood heap is fed, where we thy worshippers in pious faith print our steps amid the deep embers of the fire, grant, O Lord omnipotent, that our arms wipe off this disgrace. I seek not the dress the maiden wore, nor trophy or any spoil of victory; other deeds shall bring me praise; let but this dread scourge fall stricken beneath my wound, I will return inglorious to my native towns.' Phoebus heard, and inly granted half his vow to prosper, half he shred into the flying breezes. To surprise and strike down Camilla in sudden death, this he yielded to his prayer; that his high home might see his return he gave not, and a gust swept off his accents on the gale. So, when the spear sped from his hand hurtled through the air, all the Volscians marked it well and turned their eyes on the queen; and she alone knew not wind or sound of the weapon on its aery path, till the spear passed home and sank where her breast met it, and, driven deep, drank her maiden blood. Her companions run hastily up and catch their sinking mistress. Arruns takes to flight more alarmed than all, in mingled fear and exultation, and no longer dares to trust his spear or face the maiden's weapons. And as the wolf, some shepherd or great bullock slain, plunges at once among the trackless mountain heights ere hostile darts are in pursuit, and knows how reckless he hath been, and drooping his tail lays it quivering under his belly, and seeks the woods; even so does Arruns withdraw from sight in dismay, and, satisfied to escape, mingles in the throng of arms. The dying woman pulls at the weapon with her hand; but the iron head is fixed deep in the wound up between the rib-bones. She swoons away with loss of blood; her eyes droop, chilling in death; the once lustrous colour leaves her face. Then gasping, she thus accosts Acca, one of her birthmates, who alone before all was true to Camilla, with whom her cares were divided; and even so she speaks: 'Thus far, Acca my sister, have I availed; now the bitter wound overmasters me, and all about me darkens in haze. Haste away, and carry to Turnus my last message; to take my place in battle, and repel the Trojans from the town. And now goodbye.' Even with the words she dropped the reins and slid to ground unconscious. Then the unnerving chill overspread her, her neck slackened, her head sank overpowered by death, and her arms fell, and with a moan the life fled indignant into the dark. Then indeed an infinite cry rises and smites the golden stars; the battle grows bloodier now Camilla is down; at once in serried ranks all the Teucrian forces pour in, with the Tyrrhene captains and Evander's Arcadian squadrons. But Opis, Trivia's sentinel, long ere now sits high on the hill-tops, gazing on the battle undismayed. And when afar amid the din of angry men she espied Camilla done woefully to death, she sighed and uttered forth a deep cry: 'Ah too, too cruel, O maiden, the forfeit thou hast paid for daring armed attack on the Teucrians!
and nothing hath availed thee thy lonely following of Diana in the woodlands, nor wearing our quiver on thy shoulder. Yet thy Queen hath not left thee unhonoured now thy latter end is come; nor will this thy death be unnamed among the nations, nor shalt thou bear the fame of one unavenged; for whosoever hath sullied thy body with a wound shall pay death for due.' Under the mountain height was a great earthen mound, tomb of Dercennus, a Laurentine king of old, shrouded in shadowy ilex. Hither the goddess most beautiful first swoops down, and marks Arruns from the mounded height. As she saw him glittering in arms and idly exultant: 'Why,' she cries, 'wanderest thou away? hitherward direct thy steps; come hither to thy doom, to receive thy fit reward for Camilla. Shalt thou die, and by Diana's weapons?' The Thracian spoke, and slid out a fleet arrow from her gilded quiver, and stretched it level on the bow, and drew it far, till the curving tips met one another, and now, her hands in level counterpoise, her left touched the steel point, her right, with the bowstring, her breast. At once and in a moment Arruns heard the whistle of the dart and the resounding air, as the steel sank in his body. His comrades leave him forgotten on the unknown dust of the plain, moaning his last and gasping his life away; Opis wings her flight to the skyey heaven. At once the light squadron of Camilla retreat now they have lost their mistress; the Rutulians retreat in confusion, brave Atinas retreats. Scattered captains and thinned companies make for safety, and turn their horses backward to the town. Nor does any avail to make stand against the swarming death-dealing Teucrians, or bear their shock in arms; but their unstrung bows droop on their shoulders, and the four-footed galloping horse-hoof shakes the crumbling plain. The eddying dust rolls up thick and black towards the walls, and on the watch-towers mothers beat their breasts and the cries of women rise up to heaven. On such as first in the rout broke in at the open gates the mingling hostile throng follows hard; nor do they escape death, alas! but in the very gateway, within their native city and amid their sheltering homes, they are pierced through and gasp out their life. Some shut the gates, and dare not open to their pleading comrades nor receive them in the town; and a most pitiful slaughter begins between armed men who guard the entry and others who rush upon their arms. Barred out before their weeping parents' eyes and faces, some, swept on by the rout, roll headlong into the trenches; some, blindly rushing with loosened rein, batter at the gates and stiffly-bolted doorway. The very mothers from the walls in eager heat (true love of country points the way, when they see Camilla) dart weapons with shaking hand, and eagerly make hard stocks of wood and fire-hardened poles serve for steel, and burn to die among the foremost for their city's sake. Meanwhile among the forests the terrible news pours in on Turnus, and Acca brings him news of the mighty invasion; the Volscian lines are destroyed; Camilla is fallen; the enemy thicken and press on, and have swept all before them down the tide of battle. Raging he leaves the hills he had beset—Jove's stern will ordains it so—and quits the rough woodland. Scarcely had he marched out of sight and gained the plain when lord Aeneas enters the open defiles, surmounts the ridge, and issues from the dim forest.
So both advance swiftly to the town with all their columns, no long march apart, and at once Aeneas descried afar the plains all smoking with dust, and saw the Laurentine columns, and Turnus knew Aeneas terrible in arms, and heard the advancing feet and the neighing of the horses. And straightway would they join battle and essay the conflict, but that ruddy Phoebus even now dips his weary coursers in the Iberian flood, and night draws on over the fading day. They encamp before the city, and draw their trenches round the walls.

BOOK TWELFTH

THE SLAYING OF TURNUS

When Turnus sees the Latins broken and fainting in the thwart issue of war, his promise claimed for fulfilment, and men's eyes pointed on him, his own spirit rises in unappeasable flame. As the lion in Phoenician fields, his breast heavily wounded by the huntsmen, at last starts into arms, and shakes out the shaggy masses from his exultant neck, and undismayed snaps the brigand's planted weapon, roaring with blood-stained mouth; even so Turnus kindles and swells in passion. Then he thus addresses the king, and so furiously begins: 'Turnus stops not the way; there is no excuse for the coward Aeneadae to take back their words or renounce their compact. I join battle; bring the holy things, my lord, and swear the treaty. Either this hand shall hurl to hell the Dardanian who skulks from Asia, and the Latins sit and see my single sword wipe out the nation's reproach; or let him rule his conquest, and Lavinia pass to his espousal.' To him Latinus calmly replied: 'O excellent young man! the more thy hot valour abounds, the more intently must I counsel, and weigh fearfully what may befall. Thou hast thy father Daunus' realm, hast many towns taken by thine hand, nor is Latinus lacking in gold and goodwill. There are other maidens unwedded in Latium and Laurentine fields, and of no mean birth. Let me unfold this hard saying in all sincerity: and do thou drink it into thy soul. I might not ally my daughter to any of her old wooers; such was the universal oracle of gods and men. Overborne by love for thee, overborne by kinship of blood and my weeping wife's complaint, I broke all fetters, I severed the maiden from her promised husband, I took up unrighteous arms. Since then, Turnus, thou seest what calamities, what wars pursue me, what woes thyself before all dost suffer. Twice vanquished in pitched battle, we scarce guard in our city walls the hopes of Italy: the streams of Tiber yet run warm with our blood, and our bones whiten the boundless plain. Why fall I away again and again? what madness bends my purpose? if I am ready to take them into alliance after Turnus' destruction, why do I not rather bar the strife while he lives? What will thy Rutulian kinsmen, will all Italy say, if thy death—Fortune make void the word!—comes by my betrayal, while thou suest for our daughter in marriage? Cast a glance on war's changing fortune; pity thine aged father, who now far away sits sad in his native Ardea.' In nowise do the words bend Turnus' passion: he rages the more fiercely, and sickens of the cure. So soon as he found speech he thus made utterance: 'The care thou hast for me, most gracious lord, for me lay down, I implore thee, and let me purchase honour with death. Our hand too rains weapons, our steel is strong; and our wounds too draw blood. The goddess his mother will be far from him to cover his flight, woman-like, in a cloud and an empty phantom's hiding.'
But the queen, dismayed by the new terms of battle, wept, and clung to her fiery son as one ready to die: 'Turnus, by these tears, by Amata's regard, if that touches thee at all—thou art now the one hope, the repose of mine unhappy age; in thine hand is Latinus' honour and empire, on thee is the weight of all our sinking house—one thing I beseech thee; forbear to join battle with the Teucrians. What fate soever awaits thee in the strife thou seekest, it awaits me, Turnus, too: with thee will I leave the hateful light, nor shall my captive eyes see Aeneas my daughter's lord.' Lavinia tearfully heard her mother's words with cheeks all aflame, as deep blushes set her face on fire and ran hotly over it. Even as Indian ivory, if one stain it with sanguine dye, or where white lilies are red with many a rose amid: such colour came on the maiden's face. Love throws him into tumult, and stays his countenance on the girl: he burns fiercer for arms, and briefly answers Amata: 'Do not, I pray thee, do not weep for me, neither pursue me thus ominously as I go to the stern shock of war. Turnus is not free to dally with death. Thou, Idmon, bear my message to the Phrygian monarch in this harsh wording: So soon as to-morrow's Dawn rises in the sky blushing on her crimson wheels, let him not loose Teucrian or Rutulian: let Teucrian and Rutulian arms have rest, and our blood decide the war; on that field let Lavinia be sought in marriage.' These words uttered, withdrawing swiftly homeward, he orders out his horses, and rejoicingly beholds them snorting before his face: those that Orithyia's self gave to grace Pilumnus, such as would excel the snows in whiteness and the gales in speed. The eager charioteers stand round and pat their chests with clapping hollowed hands, and comb their tressed manes. Himself next he girds on his shoulders the corslet stiff with gold and pale mountain-bronze, and buckles on the sword and shield and scarlet-plumed helmet-spikes: that sword the divine Lord of Fire had himself forged for his father Daunus and dipped glowing in the Stygian wave. Next, where it stood amid his dwelling leaning on a massy pillar, he strongly seizes his stout spear, the spoil of Actor the Auruncan, and brandishes it quivering, and cries aloud: 'Now, O spear that never hast failed at my call, now the time is come; thee princely Actor once, thee Turnus now wields in his grasp. Grant this strong hand to strike down the effeminate Phrygian, to rend and shatter the corslet, and defile in dust the locks curled with hot iron and wet with myrrh.' Thus madly he runs on: sparkles leap out from all his blazing face, and his keen eyes flash fire: even as the bull when before his first fight he bellows awfully, and drives against a tree's trunk to make trial of his angry horns, and buffets the air with blows or scatters the sand in prelude of battle. And therewithal Aeneas, terrible in his mother's armour, kindles for warfare and awakes into wrath, rejoicing that offer of treaty stays the war. Comforting his comrades and sorrowing Iülus' fear, he instructs them of destiny, and bids bear answer of assurance to King Latinus, and name the laws of peace.
Scarcely did the morrow shed on the mountain-tops the beams of risen day, as the horses of the sun begin to rise from the deep flood and breathe light from their lifted nostrils; Rutulian and Teucrian men measured out and made ready a field of battle under the great city's ramparts, and midway in it hearth-fires and grassy altars to the gods of both peoples; while others bore spring water and fire, draped in priestly dress and their brows bound with grass of the field. The Ausonian army issue forth, and crowd through the gates in streaming serried columns. On this side all the Trojan and Tyrrhenian host pour in diverse armament, girt with iron even as though the harsh battle-strife called them forth. Therewith amid their thousands the captains dart up and down, splendid in gold and purple, Mnestheus, seed of Assaracus, and brave Asilas, and Messapus, tamer of horses, brood of Neptune: then each on signal given retired to his own ground; they plant their spears in the earth and lean their shields against them. Mothers in eager abandonment, and the unarmed crowd and feeble elders beset towers and house-roofs, or stand at the lofty gates. But Juno, on the summit that is now called the Alban—then the mountain had neither name nor fame nor honour—looked forth from the hill and surveyed the plain and double lines of Laurentine and Trojan, and Latinus' town. Straightway spoke she thus to Turnus' sister, goddess to goddess, lady of pools and noisy rivers: such worship did Jupiter the high king of air consecrate to her for her stolen virginity: 'Nymph, grace of rivers, best beloved of our soul, thou knowest how out of all the Latin women that ever rose to high-hearted Jove's thankless bed, thee only have I preferred and gladly given part and place in heaven. Learn thy woe, that thou blame not me for it, Juturna. Where fortune seemed to allow and the Destinies granted Latinus' estate to prosper, I shielded Turnus and thy city. Now I see him joining battle with unequal fates, and the day of doom and deadly force draws nigh. Mine eyes cannot look on this battle and treaty: thou, if thou darest aught of more present help for the brother of thy blood, go on; it befits thee. Haply relief shall follow misery.' Scarcely thus: when Juturna's eyes overbrimmed with tears, and thrice and again she smote her hand on her gracious breast. 'This is not time for tears,' cries Juno, daughter of Saturn: 'hasten and snatch thy brother, if it may be, from his death; or do thou waken war, and make the treaty abortive. I encourage thee to dare.' With such urgence she left her, doubting and dismayed, and grievously wounded in soul. Meanwhile the kings go forth; Latinus in mighty pomp rides in his four-horse chariot; twelve gilded rays go glittering round his brows, symbol of the Sun his ancestor; Turnus moves behind a white pair, clenching in his hand two broad-headed spears. On this side lord Aeneas, fount of the Roman race, ablaze in starlike shield and celestial arms, and close by Ascanius, second hope of mighty Rome, issue from the camp; and the priest, in spotless raiment, hath brought the young of a bristly sow and an unshorn sheep of two years old, and set his beasts by the blazing altars. They, turning their eyes towards the sunrising, scatter salted corn from their hands and clip the beasts with steel over the temples, and pour cups on the altars.
Then Aeneas the good, with sword drawn, thus makes invocation: 'Be the Sun now witness, and this Earth to my call, for whose sake I have borne to suffer so sore travail, and the Lord omnipotent, and thou his wife, at last, divine daughter of Saturn, at last I pray more favourable; and thou, mighty Mavors, who wieldest all warfare in lordship beneath thy sway; and on the Springs and Rivers I call, and the Dread of high heaven, and the divinities of the blue seas: if haply victory fall to Turnus the Ausonian, the vanquished make covenant to withdraw to Evander's city; Iülus shall quit the soil; nor ever hereafter shall the Aeneadae return in arms to renew warfare, or attack this realm with the sword. But if Victory grant battle to us and ours (as I think the rather, and so the rather may the gods seal their will), I will not bid Italy obey my Teucrians, nor do I claim the realm for mine; let both nations, unconquered, join treaty for ever under equal law. Gods and worship shall be of my giving: my father Latinus shall bear the sword, and have a father's prescribed command. For me my Teucrians shall establish a city, and Lavinia give the town her name.' Thus Aeneas first: thereon Latinus thus follows: 'By these same I swear, O Aeneas, by Earth, Sea, Sky, and the twin brood of Latona and Janus the double-facing, and the might of nether gods and grim Pluto's shrine; this let our Father hear, who seals treaties with his thunderbolt. I touch the altars, I take to witness the fires and the gods between us; no time shall break this peace and truce in Italy, howsoever fortune fall; nor shall any force turn my will aside, not if it dissolve land into water in turmoil of deluge, or melt heaven in hell: so surely as this sceptre' (for haply he bore a sceptre in his hand) 'shall never burgeon into thin leafage and shady shoot, since once in the forest cut down right to the stem it lost its mother, and the steel lopped away its tressed arms: a tree of old: now the craftsman's hand hath bound it in adornment of brass and given it to our Latin fathers' bearing.' With such words they sealed mutual treaty midway in sight of the princes. Then they duly slay the consecrated beasts over the flames, and tear out their live entrails, and pile the altars with laden chargers. But long ere this the Rutulians deemed the battle unequal, and their hearts are stirred in changeful motion; and now the more, as they discern nigher that in ill-matched strength . . . . heightened by Turnus, as advancing with noiseless pace he humbly worships at the altar with downcast eye, by his wasted cheeks and the pallor on his youthful frame. Soon as Juturna his sister saw this talk spread, and the people's mind waver in uncertainty, into the mid ranks, in feigned form of Camertus—his family was high in long ancestry, and his father's name for valour renowned, and himself most valiant in arms—into the mid ranks she glides, not ignorant of her task, and scatters diverse rumours, saying thus: 'Shame, O Rutulians! shall we set one life in the breach for so many such as these? are we unequal in numbers or bravery? See, Troy and Arcadia is all they bring, and those fate-bound bands that Etruria hurls on Turnus. Scarce is there an enemy to meet every other man of ours. He indeed will ascend to the gods for whose altars he devotes himself, and move living in the lips of men: we, our country lost, shall bow to the haughty rigour of our lords, if we now sit slackly on the field.'
By such words the soldiers' counsel was kindled yet higher and higher, and a murmur crept through their columns; the very Laurentines, the very Latins are changed; and they who but now hoped for rest from battle and rescue of fortune now desire arms and pray the treaty were undone, and pity Turnus' cruel lot. To this Juturna adds a yet stronger impulse, and high in heaven shews a sign more potent than any to confuse Italian souls with delusive augury. For on the crimsoned sky Jove's tawny bird flew chasing, in a screaming crowd, fowl of the shore that winged their column; then suddenly stooping to the water, pounces on a noble swan with merciless crooked talons. The startled Italians watch, while all the birds together clamorously wheel round from flight, wonderful to see, and dim the sky with their pinions, and in thickening cloud urge their foe through air, till, conquered by their attack and his heavy prey, he yielded and dropped it from his talons into the river, and winged his way deep into the clouds. Then indeed the Rutulians clamorously greet the omen, and their hands flash out. And Tolumnius the augur cries before them all: 'This it was, this, that my vows often have sought; I welcome and know a deity; follow me, follow, snatch up the sword, O hapless people whom the greedy alien frightens with his arms like silly birds, and with strong hand ravages your shores. He too will take to flight, and spread his sails afar over ocean. Do you with one heart close up your squadrons, and defend in battle your lost king.' He spoke, and darting forward, hurled a weapon full on the enemy; the whistling cornel-shaft sings, and unerringly cleaves the air. At once and with it a vast shout goes up, and all their rows are amazed, and their hearts hotly stirred. The spear flies on; where haply stood opposite in ninefold brotherhood all the beautiful sons of one faithful Tyrrhene wife, borne of her to Gylippus the Arcadian, one of them, midway where the sewn belt rubs on the flank and the clasp bites the fastenings of the side, one of them, excellent in beauty and glittering in arms, it pierces clean through the ribs and stretches on the yellow sand. But of his banded brethren, their courage fired by grief, some grasp and draw their swords, some snatch weapons to throw, and rush blindly forward. The Laurentine columns rush forth against them; again from the other side Trojans and Agyllines and Arcadians in painted armour flood thickly in: so hath one passion seized all to make decision by the sword. They pull the altars to pieces; through all the air goes a thick storm of weapons, and faster falls the iron rain. Bowls and hearth-fires are carried off; Latinus himself retreats, bearing the outraged gods of the broken treaty. The others harness their chariots, or vault upon their horses and come up with swords drawn. Messapus, eager to shatter the treaty, rides menacingly down on Aulestes the Tyrrhenian, a king in a king's array. Retreating hastily, and tripped on the altars that meet him behind, the hapless man goes down on his head and shoulders. But Messapus flies up with wrathful spear, and strikes him, as he pleads sore, a deep downward blow from horseback with his beam-like spear, saying thus: 'That for him; the high gods take this better victim.' The Italians crowd in and strip his warm limbs.
Corynaeus seizes a charred brand from the altar, and meeting Ebysus as he advances to strike, darts the flame in his face; his heavy beard flamed up, and gave out a scorched smell. Following up his enemy's confusion, the other seizes him with his left hand by the hair, and bears him to earth with a thrust of his planted knee, and there drives the unyielding sword into his side. Podalirius pursues and overhangs with naked sword the shepherd Alsus as he rushes amid the foremost line of weapons; Alsus swings back his axe, and severs brow and chin full in front, wetting his armour all over with spattered blood. Grim rest and iron slumber seal his eyes; his lids close on everlasting night. But good Aeneas, his head bared, kept stretching his unarmed hand and calling loudly to his men: 'Whither run you? What is this strife that so spreads and swells? Ah, restrain your wrath! truce is already stricken, and all its laws ordained; mine alone is the right of battle. Leave me alone, and my hand shall confirm the treaty; these rites already make Turnus mine.' Amid these accents, amid words like these, lo! a whistling arrow winged its way to him, sped from what hand or driven by what god, none knows, or what chance or deity brought such honour to the Rutulians; the renown of the high deed was buried, nor did any boast to have dealt Aeneas' wound. Turnus, when he saw Aeneas retreating from the ranks and his captains in dismay, burns hot with sudden hope. At once he calls for his horses and armour, and with a bound leaps proudly into his chariot and handles the reins. He darts on, dealing many a brave man's body to death; many an one he rolls half-slain, or crushes whole files under his chariot, or seizes and showers spears on the fugitives. As when by the streams of icy Hebrus Mavors kindles to bloodshed and clashes on his shield, and stirs war and speeds his furious coursers; they outwing south winds and west on the open plain; utmost Thrace groans under their hoof-beats; and around in the god's train rush the faces of dark Terror, and Wraths and Ambushes; even so amid the battle Turnus briskly lashes on his reeking horses, trampling on the foes that lie piteously slain; the galloping hoof scatters bloody dew, and spurns mingled gore and sand. And now hath he dealt Sthenelus to death, and Thamyrus and Pholus, him and him at close quarters, the other from afar; from afar both the sons of Imbrasus, Glaucus and Lades, whom Imbrasus himself had nurtured in Lycia and equipped in equal arms, whether to meet hand to hand or to outstrip the winds on horseback. Elsewhere Eumedes advances amid the fray, ancient Dolon's brood, illustrious in war, renewing his grandfather's name, his father's courage and strength of hand, who of old dared to claim Pelides' chariot as his price if he went to spy out the Grecian camp; to him the son of Tydeus told out another price for his venture, and he dreams no more of Achilles' horses. Him Turnus descried far on the open plain, and first following him with light javelin through long space of air, stops his double-harnessed horses and leaps from the chariot, and descends on his fallen half-lifeless foe, and, planting his foot on his neck, wrests the blade out of his hand and dyes its glitter deep in his throat, adding these words withal: 'Behold, thou liest, Trojan, meting out those Hesperian fields thou didst seek in war. Such guerdon is theirs who dare to tempt my sword; thus do they found their city.'
Then with a spear-cast he sends Asbutes to follow him, and Chloreus and Sybaris, Dares and Thersilochus, and Thymoetes fallen flung over his horse's neck. And as when the Edonian North wind's wrath roars on the deep Aegean, and the wave follows it shoreward; where the blast comes down, the clouds race over the sky; so, wheresoever Turnus cleaves his way, columns retreat and lines turn and run; his own speed bears him on, and his flying plume tosses as his chariot meets the breeze. Phegeus brooked not his proud approach; he faced the chariot, and caught and twisted away in his right hand the mouths of his horses, spurred into speed and foaming on the bit. Dragged along and hanging by the yoke he is left uncovered; the broad lance-head reaches him, pins and pierces the double-woven breastplate, and lightly wounds the surface of his body. Yet turning, he advanced on the enemy behind his shield, and sought succour in the naked point; when the wheel running forward on its swift axle struck him headlong and flung him to ground, and Turnus' sword following it smote off his head between the helmet-rim and the upper border of the breastplate, and left the body on the sand. And while Turnus thus victoriously deals death over the plains, Mnestheus meantime and faithful Achates, and Ascanius by their side, set down Aeneas in the camp, dabbled with blood and leaning every other step on his long spear. He storms, and tries hard to pull out the dart where the reed had broken, and calls for the nearest way of remedy, to cut open the wound with broad blade, and tear apart the weapon's lurking-place, and so send him back to battle. And now Iapix son of Iasus came, beloved beyond others of Phoebus, to whom once of old, smitten with sharp desire, Apollo gladly offered his own arts and gifts, augury and the lyre and swift arrows: he, to lengthen out the destiny of a parent given over to die, chose rather to know the potency of herbs and the practice of healing, and deal in a silent art unrenowned. Aeneas stood chafing bitterly, propped on his vast spear, mourning Iülus and a great crowd of men around, unstirred by their tears. The aged man, with garment drawn back and girt about him in Paeonian fashion, makes many a hurried effort with healing hand and the potent herbs of Phoebus, all in vain; in vain his hand solicits the arrow-head, and his pincers' grasp pulls at the steel. Fortune leads him forward in nowise; Apollo aids not with counsel; and more and more the fierce clash swells over the plains, and the havoc draws nigher on. Already they see the sky a mass of dust, the cavalry approaching, and shafts falling thickly amid the camp; the dismal cry uprises of warriors fighting and falling under the War-god's heavy hand. At this, stirred deep by her son's cruel pain, Venus his mother plucked from Cretan Ida a stalk of dittany with downy leaves and bright-tressed flowers, the plant not unknown to wild goats when winged arrows are fast in their body. This Venus bore down, her shape girt in a dim halo; this she steeps with secret healing in the river-water poured out and sparkling abrim, and sprinkles life-giving juice of ambrosia and scented balm. With that water aged Iapix washed the wound, unwitting; and suddenly, lo! all the pain left his body, all the blood in the deep wound was stanched. And now following his hand the arrow fell out with no force, and strength returned afresh as of old. 'Hasten! arms for him quickly! why stand you?'
cries Iapix aloud, and begins to kindle their courage against the enemy; 'this comes not by human resource or schooling of art, nor does my hand save thee, Aeneas: a higher god is at work, and sends thee back to higher deeds.' He, eager for battle, had already clasped on the greaves of gold right and left, and scorning delay, brandishes his spear. When the shield is adjusted by his side and the corslet on his back, he clasps Ascanius in his armed embrace, and lightly kissing him through the helmet, cries: 'Learn of me, O boy, valour and toil indeed, fortune of others. Now mine hand shall give thee defence in war, and lead thee to great reward: do thou, when hereafter thine age ripens to fulness, keep this in remembrance, and as thou recallest the pattern of thy kindred, let thy spirit rise to thy father Aeneas, thine uncle Hector.' These words uttered, he issued towering from the gates, brandishing his mighty spear: with him in serried column rush Antheus and Mnestheus, and all the throng streams forth of the camp. The field drifts with blinding dust, and the startled earth trembles under the tramp of feet. From his earthworks opposite Turnus saw and the Ausonians saw them come, and an icy shudder ran deep through their frame; first and before all the Latins Juturna heard and knew the sound, and in terror fled away. He flies on, and hurries his dark column over the open plain. As when in fierce weather a storm-cloud moves over mid sea to land, with presaging heart, ah me, the hapless husbandmen shudder from afar; it will deal havoc to their trees and destruction to their crops, and make a broad path of ruin; the winds fly before it, and bear its roar to the beach; so the Rhoetean captain drives his army full on the foe; one and all they close up in wedges, and mass their serried ranks. Thymbraeus smites massive Osiris with the sword, Mnestheus slays Arcetius, Achates Epulo, Gyas Ufens: Tolumnius the augur himself goes down, he who had hurled the first weapon against the foe. Their cry rises to heaven, and in turn the routed Rutulians give backward in flight over the dusty fields. Himself he deigns not to cut down the fugitives, nor pursue such as meet him fair on foot or approach in arms: Turnus alone he tracks and searches in the thick haze, alone calls him to conflict. Then panic-stricken the warrior maiden flings Turnus' charioteer out over his reins, and leaving him far where he slips from the chariot-pole, herself succeeds and turns the wavy reins, tones and limbs and armour all of Metiscus' wearing. As when a black swallow flits through some rich lord's spacious house, and circles in flight the lofty halls, gathering her tiny food for sustenance to her twittering nestlings, and now swoops down the spacious colonnades, now round the wet ponds; in like wise dart Juturna's horses amid the enemy, and her fleet chariot passes flying over all the field. And now here and now here she displays her triumphant brother, nor yet allows him to close, but flies far and away. None the less does Aeneas thread the circling maze to meet him, and tracks his man, and with loud cry cries on him through the scattered ranks. Often as he cast eyes on his enemy and essayed to outrun the speed of the flying-footed horses, so often Juturna wheeled her team away. Alas, what can he do?
Vainly he tosses on the ebb and flow, and in his spirit diverse cares make conflicting call; when Messapus, who haply bore in his left hand two tough spear-shafts topped with steel, runs lightly up and aims and hurls one of them upon him with unerring stroke. Aeneas stood still, and gathered himself behind his armour, sinking on bended knee; yet the rushing spear bore off his helmet-spike, and dashed the helmet-plume from the crest. Then indeed his wrath swells; and forced to it by their treachery, while chariot and horses disappear, he calls Jove oft and again to witness, and the altars of the violated treaty, and now at last plunges amid their lines. Sweeping terrible down the tide of battle he wakens fierce indiscriminate carnage, and flings loose all the reins of wrath. What god may now unfold for me in verse so many woes, so many diverse slaughters and death of captains whom now Turnus, now again the Trojan hero, drives over all the field? Was it well, O God, that nations destined to everlasting peace should clash in so vast a shock? Aeneas meets Sucro the Rutulian; the combat stayed the first rush of the Teucrians, but delayed them not long; he catches him on the side, and, when fate comes quickest, drives the harsh sword clean through the ribs where they fence the breast. Turnus brings down Amycus from horseback with his brother Diores, and meets them on foot; him he strikes with his long spear as he comes, him with his sword-point, and hangs both severed heads on his chariot and carries them off dripping with blood. The one sends to death Talos and Tanaïs and brave Cethegus, three at one meeting, and gloomy Onites, of Echionian name, and Peridia the mother that bore him; the other those brethren sent from Lycia and Apollo's fields, and Menoetes the Arcadian, him who loathed warfare in vain; who once had his art and humble home about the river-fisheries of Lerna, and knew not the courts of the great, but his father was tenant of the land he tilled. And as fires kindled dispersedly in a dry forest and rustling laurel-thickets, or foaming rivers where they leap swift and loud from high hills, and speed to sea each in his own path of havoc; as fiercely the two, Aeneas and Turnus, dash amid the battle; now, now wrath surges within them, and unconquerable hearts are torn; now in all their might they rush upon wounds. The one dashes Murranus down and stretches him on the soil with a vast whirling mass of rock, as he cries the names of his fathers and forefathers of old, a whole line drawn through Latin kings; under traces and yoke the wheels spurned him, and the fast-beating hoofs of his rushing horses trample down their forgotten lord. The other meets Hyllus rushing on in gigantic pride, and hurls his weapon at his gold-bound temples; the spear pierced through the helmet and stood fast in the brain. Neither did thy right hand save thee from Turnus, O Cretheus, bravest of the Greeks; nor did his gods shield Cupencus when Aeneas came; he gave his breast full to the steel, nor, alas! was the brazen shield's delay aught of avail. Thee likewise, Aeolus, the Laurentine plains saw sink backward and cover a wide space of earth; thou fallest, whom Argive battalions could not lay low, nor Achilles the destroyer of Priam's realm. Here was thy goal of death; thine high house was under Ida, at Lyrnesus thine high house, on Laurentine soil thy tomb.
The whole battle-lines gather up, all Latium and all Dardania, Mnestheus and valiant Serestus, with Messapus, tamer of horses, and brave Asilas, the Tuscan battalion and Evander's Arcadian squadrons; man by man they struggle with all their might; no rest nor pause in the vast strain of conflict. At this Aeneas' mother most beautiful inspired him to advance on the walls, directing his columns on the town and dismaying the Latins with sudden and swift disaster. As in search for Turnus he bent his glance this way and that round the separate ranks, he descries the city free from all this warfare, unpunished and unstirred. Straightway he kindles at the view of a greater battle; he summons Mnestheus and Sergestus and brave Serestus his captains, and mounts a hillock; there the rest of the Teucrian army gathers thickly, still grasping shield and spear. Standing on the high mound amid them, he speaks: 'Be there no delay to my words; Jupiter is with us; neither let any be slower to move that the design is sudden. This city to-day, the source of war, the royal seat of Latinus, unless they yield them to receive our yoke and obey their conquerors, will I raze to ground, and lay her smoking roofs level with the dust. Must I wait forsooth till Turnus please to stoop to combat, and choose again to face his conqueror? This, O citizens, is the fountain-head and crown of the accursed war. Bring brands speedily, and reclaim the treaty in fire.' He ended; all with spirit alike emulous form a wedge and advance in serried masses to the walls. Ladders are run up, and fire leaps sudden to sight. Some rush to the separate gates, and cut down the guards of the entry, others hurl their steel and darken the sky with weapons. Aeneas himself among the foremost, upstretching his hand to the city walls, loudly reproaches Latinus, and takes the gods to witness that he is again forced into battle, that twice now do the Italians choose warfare and break a second treaty. Discord rises among the alarmed citizens: some bid unbar the town and fling wide their gates to the Dardanians, and pull the king himself towards the ramparts; others bring arms and hasten to defend the walls: as when a shepherd tracks bees to their retreat in a recessed rock, and fills it with stinging smoke, they within run uneasily up and down their waxen fortress, and hum louder in rising wrath; the smell rolls in darkness along their dwelling, and a blind murmur echoes within the rock as the smoke issues to the empty air. This fortune likewise befell the despairing Latins, this woe shook the whole city to her base. The queen espies from her roof the enemy's approach, the walls scaled and firebrands flying on the houses; and nowhere Rutulian ranks, none of Turnus' columns to meet them; alas! she deems him destroyed in the shock of battle, and, distracted by sudden anguish, shrieks that she is the source of guilt, the spring of ill, and with many a mad utterance of frenzied grief rends her purple attire with dying hand, and ties from a lofty beam the ghastly noose of death. And when the unhappy Latin women knew this calamity, first her daughter Lavinia tears her flower-like tresses and roseate cheeks, and all the train around her madden in her suit; the wide palace echoes to their wailing, and from it the sorrowful rumour spreads abroad throughout the town. All hearts sink; Latinus goes with torn raiment, in dismay at his wife's doom and his city's downfall, defiling his hoary hair with soilure of sprinkled dust.
Meanwhile on the skirts of the field Turnus chases scattered stragglers, ever slacker to battle, ever less and less exultant in his coursers' victorious speed. The confused cry came to him borne in blind terror down the breeze, and his startled ears caught the echoing tumult and disastrous murmur of the town. 'Ah me! what agony shakes the city? or what is this cry that fleets so loud from the distant town?' So speaks he, and distractedly checks the reins. And to him his sister, as changed into his charioteer Metiscus' likeness she swayed horses and chariot-reins, thus rejoined: 'This way, Turnus, let us pursue the brood of Troy, where victory opens her nearest way; there are others whose hands can protect their dwellings. Aeneas falls fiercer on the Italians, and closes in conflict; let our hand too deal pitiless death on his Teucrians. Neither in tale of dead nor in glory of battle shalt thou retire outdone.' Thereat Turnus: . . . 'Ah my sister, long ere now I knew thee, when first thine arts shattered the treaty, and thou didst mingle in the strife; and now thy godhead conceals itself in vain. But who hath bidden thee descend from heaven to bear this sore travail? was it that thou mightest see thy hapless brother cruelly slain? for what do I, or what fortune yet gives promise of safety? Before my very eyes, calling aloud on me, I saw Murranus, than whom none other is left me more dear, sink huge to earth, borne down by as huge a wound. Hapless Ufens is fallen, not to see our shame; corpse and armour are in Teucrian hands. The destruction of their households, this was the one thing yet lacking; shall I suffer it? Shall my hand not refute Drances' jeers? shall I turn my back, and this land see Turnus a fugitive? Is Death all so bitter? Do you, O Shades, be gracious to me, since the powers of heaven are estranged; to you shall I go down, a pure spirit and ignorant of your blame, never once unworthy of my mighty fathers of old.' Scarce had he spoken thus; lo! Saces, borne flying on his foaming horse through the thickest of the foe, an arrow-wound right in his face, darts, beseeching Turnus by his name. 'Turnus, in thee is our last safety; pity thy people. Aeneas thunders in arms, and threatens to overthrow and hurl to destruction the high Italian fortress; and already firebrands are flying on our roofs. On thee, on thee the Latins turn their gazing eyes; King Latinus himself mutters in doubt, whom he is to call his sons, to whom he shall incline in union. Moreover the queen, thy surest stay, hath fallen by her own hand and in dismay fled the light. Alone in front of the gates Messapus and valiant Atinas sustain the battle-line. Round about them to right and left the armies stand locked and the iron field shivers with naked points; thou wheelest thy chariot on the sward alone.' At the distracting picture of his fortune Turnus froze in horror and stood in dumb gaze; together in his heart sweep the vast mingling tides of shame and maddened grief, and love stung to frenzy and resolved valour. So soon as the darkness cleared and light returned to his soul, he fiercely turned his blazing eyeballs towards the ramparts, and gazed back from his wheels on the great city. And lo! a spire of flame wreathing through the floors wavered up skyward and held a turret fast, a turret that he himself had reared of mortised planks and set on rollers and laid with high gangways.
'Now, O my sister, now fate prevails: cease to hinder; let us follow where deity and stern fortune call. I am resolved to face Aeneas, resolved to bear what bitterness there is in death; nor shalt thou longer see me shamed, sister of mine. Let me be mad, I pray thee, with this madness before the end.' He spoke, and leapt swiftly from his chariot to the field, and darting through weapons and through enemies, leaves his sorrowing sister, and bursts in rapid course amid their columns. And as when a rock rushes headlong from some mountain peak, torn away by the blast, or if the rushing rain washes it away, or the stealing years loosen its ancient hold; the reckless mountain mass goes sheer and impetuous, and leaps along the ground, hurling with it forests and herds and men; thus through the scattering columns Turnus rushes to the city walls, where the earth is wettest with bloodshed and the air sings with spears; and beckons with his hand, and thus begins aloud: 'Forbear now, O Rutulians, and you, Latins, stay your weapons. Whatsoever fortune is left is mine: I singly must expiate the treaty for you all, and make decision with the sword.' All drew aside and left him room. But lord Aeneas, hearing Turnus' name, abandons the walls, abandons the fortress height, and in exultant joy flings aside all hindrance, breaks off all work, and clashes his armour terribly, vast as Athos, or as Eryx, or as the lord of Apennine when he roars with his tossing ilex woods and rears his snowy crest rejoicing into air. Now indeed Rutulians and Trojans and all Italy turned in emulous gaze, and they who held the high city, and they whose ram was battering the foundations of the wall, and unarmed their shoulders. Latinus himself stands in amaze at the mighty men, born in distant quarters of the world, met and making decision with the sword. And they, in the empty level field that cleared for them, darted swiftly forward, and hurling their spears from far, close in battle shock with clangour of brazen shields. Earth utters a moan; the sword-strokes fall thick and fast, chance and valour joining in one. And as in broad Sila or high on Taburnus, when two bulls rush to deadly battle forehead to forehead, the herdsmen retire in terror, all the herd stands dumb in dismay, and the heifers murmur in doubt which shall be lord in the woodland, which all the cattle must follow; they violently deal many a mutual wound, and gore with their stubborn horns, bathing their necks and shoulders in abundant blood; all the woodland moans back their bellowing: even thus Aeneas of Troy and the Daunian hero rush together shield to shield; the mighty crash fills the sky. Jupiter himself holds up the two scales in even balance, and lays in them the different fates of both, trying which shall pay forfeit of the strife, whose weight shall sink in death. Turnus darts out, thinking it secure, and rises with his whole reach of body on his uplifted sword; then strikes; Trojans and Latins cry out in excitement, and both armies strain their gaze. But the treacherous sword shivers, and in mid stroke deserts its eager lord. If flight aid him not now! He flies swifter than the wind, when once he descries a strange hilt in his weaponless hand.
Rumour is that in his headlong hurry, when mounting behind his yoked horses to begin the battle, he left his father's sword behind and caught up his charioteer Metiscus' weapon; and that served him long, while Teucrian stragglers turned their backs; when it met the divine Vulcanian armour, the mortal blade like brittle ice snapped in the stroke; the shards lie glittering upon the yellow sand. So in distracted flight Turnus darts afar over the plain, and now this way and now that crosses in wavering circles; for on all hands the Teucrians locked him in crowded ring, and the dreary marsh on this side, on this the steep city ramparts hem him in. Therewith Aeneas pursues, though ever and anon his knees, disabled by the arrow, hinder and stay his speed; and foot hard on foot presses hotly on his hurrying enemy: as when a hunter courses with a fleet barking hound some stag caught in a river-loop or girt by the crimson-feathered toils, and he, in terror of the snares and the high river-bank, darts back and forward in a thousand ways; but the keen Umbrian clings agape, and just catches at him, and as though he caught him snaps his jaws while the baffled teeth close on vacancy. Then indeed a cry goes up, and banks and pools answer round about, and all the sky echoes the din. He, even as he flies, chides all his Rutulians, calling each by name, and shrieks for the sword he knew. But Aeneas denounces death and instant doom if one of them draw nigh, and doubles their terror with threats of their city's destruction, and though wounded presses on. Five circles they cover at full speed, and unwind as many this way and that; for not light nor slight is the prize they seek, but Turnus' very lifeblood is at issue. Here there haply had stood a bitter-leaved wild olive, sacred to Faunus, a tree worshipped by mariners of old; on it, when rescued from the waves, they were wont to fix their gifts to the god of Laurentum and hang their votive raiment; but the Teucrians, unregarding, had cleared away the sacred stem, that they might meet on unimpeded lists. Here stood Aeneas' spear; hither borne by its own speed it was held fast stuck in the tough root. The Dardanian stooped over it, and would wrench away the steel, to follow with the weapon him whom he could not catch in running. Then indeed Turnus cries in frantic terror: 'Faunus, have pity, I beseech thee! and thou, most gracious Earth, keep thy hold on the steel, as I ever have kept your worship, and the Aeneadae again have polluted it in war.' He spoke, and called the god to aid in vows that fell not fruitless. For all Aeneas' strength, his long struggling and delay over the tough stem availed not to unclose the hard grip of the wood. While he strains and pulls hard, the Daunian goddess, changing once more into the charioteer Metiscus' likeness, runs forward and passes her brother his sword. But Venus, indignant that the Nymph might be so bold, drew nigh and wrenched away the spear where it stuck deep in the root. Erect in fresh courage and arms, he with his faithful sword, he towering fierce over his spear, they face one another panting in the battle shock. Meanwhile the King of Heaven's omnipotence accosts Juno as she gazes on the battle from a sunlit cloud. 'What yet shall be the end, O wife? what remains at the last? Heaven claims Aeneas as his country's god, thou thyself knowest and avowest to know, and fate lifts him to the stars. With what device or in what hope hangest thou chill in cloudland?
Was it well that a deity should be sullied by a mortal's wound? or that the lost sword—for what without thee could Juturna avail?—should be restored to Turnus and swell the force of the vanquished? Forbear now, I pray, and bend to our entreaties; let not the pain thus devour thee in silence, and distress so often flood back on me from thy sweet lips. The end is come. Thou hast had power to hunt the Trojans over land or wave, to kindle accursed war, to put the house in mourning, and plunge the bridal in grief: further attempt I forbid thee.' Thus Jupiter began: thus the goddess, daughter of Saturn, returned with looks cast down: 'Even because this thy will, great Jupiter, is known to me for thine, have I left, though loth, Turnus alone on earth; nor else wouldst thou see me now, alone on this skyey seat, enduring good and bad; but girt in flame I were standing by their very lines, and dragging the Teucrians into the deadly battle. I counselled Juturna, I confess it, to succour her hapless brother, and for his life's sake favoured a greater daring; yet not the arrow-shot, not the bending of the bow, I swear by the merciless well-head of the Stygian spring, the single ordained dread of the gods in heaven. And now I retire, and leave the battle in loathing. This thing I beseech thee, that is bound by no fatal law, for Latium and for the majesty of thy kindred. When now they shall plight peace with prosperous marriages (be it so!), when now they shall join in laws and treaties, bid thou not the native Latins change their name of old, nor become Trojans and take the Teucrian name, or change their language, or alter their attire: let Latium be, let Alban kings endure through ages, let Italian valour be potent in the race of Rome. Troy is fallen; let her and her name lie where they fell.' To her smilingly the designer of men and things: 'Jove's own sister thou art, and second seed of Saturn, such surge of wrath tosses within thy breast! But come, allay this madness so vainly stirred. I give thee thy will, and yield thee ungrudged victory. Ausonia shall keep her native speech and usage, and as her name is, it shall be. The Trojans shall sink mingling into their blood; I will add their sacred law and ritual, and make them all Latins of a single speech. Hence shall spring a race of tempered Ausonian blood, whom thou shalt see outdo men and gods in duty; nor shall any nation so observe thy worship.' To this Juno assented, and in gladness withdrew her purpose; meanwhile she quits her cloud, and retires out of the sky. This done, the Father revolves inly another counsel, and prepares to separate Juturna from her brother's arms. Twin monsters there are, called the Dirae by their name, whom with infernal Megaera the dead of night bore at one single birth, and wreathed them in like serpent coils, and clothed them in windy wings. They appear at Jove's throne and in the courts of the grim king, and quicken the terrors of wretched men whensoever the lord of heaven deals sicknesses and dreadful death, or sends terror of war upon guilty cities. One of these Jupiter sent swiftly down from heaven's height, and bade her meet Juturna for a sign. She wings her way, and darts in a whirlwind to earth. Even as an arrow through a cloud, darting from the string when Parthian hath poisoned it with bitter gall, Parthian or Cydonian, and sped the immedicable shaft, leaps through the swift shadow whistling and unknown; so sprung and swept to earth the daughter of Night.
When she espies the Ilian ranks and Turnus' columns, suddenly shrinking to the shape of a small bird that often sits late by night on tombs or ruinous roofs, and vexes the darkness with her cry, in such change of likeness the monster shrilly passes and repasses before Turnus' face, and her wings beat restlessly on his shield. A strange numbing terror unnerves his limbs, his hair thrills up, and the accents falter on his tongue. But when his hapless sister knew afar the whistling wings of the Fury, Juturna unbinds and tears her tresses, with rent face and smitten bosom. 'How, O Turnus, can thine own sister help thee now? or what more is there if I break not under this? What art of mine can lengthen out thy day? can I contend with this ominous thing? Now, now I quit the field. Dismay not my terrors, disastrous birds; I know these beating wings, and the sound of death, nor do I miss high-hearted Jove's haughty ordinance. Is this his repayment for my maidenhood? what good is his gift of life for ever? why have I forfeited a mortal's lot? Now assuredly could I make all this pain cease, and go with my unhappy brother side by side into the dark. Alas mine immortality! will aught of mine be sweet to me without thee, my brother? Ah, how may Earth yawn deep enough for me, and plunge my godhead in the under world!' So spoke she, and wrapping her head in her gray vesture, the goddess moaning sore sank in the river depth. But Aeneas presses on, brandishing his vast tree-like spear, and fiercely speaks thus: 'What more delay is there now? or why, Turnus, dost thou yet shrink away? Not in speed of foot, in grim arms, hand to hand, must be the conflict. Transform thyself as thou wilt, and collect what strength of courage or skill is thine; pray that thou mayest wing thy flight to the stars on high, or that sheltering earth may shut thee in.' The other, shaking his head: 'Thy fierce words dismay me not, insolent! the gods dismay me, and Jupiter's enmity.' And no more said, his eyes light on a vast stone, a stone ancient and vast that haply lay upon the plain, set for a landmark to divide contested fields: scarcely might twelve chosen men lift it on their shoulders, of such frame as now earth brings to birth: then the hero caught it up with trembling hand and whirled it at the foe, rising higher and quickening his speed. But he knows not his own self running nor going nor lifting his hands or moving the mighty stone; his knees totter, his blood freezes cold; the very stone he hurls, spinning through the empty void, neither wholly reached its distance nor carried its blow home. And as in sleep, when nightly rest weighs down our languorous eyes, we seem vainly to will to run eagerly on, and sink faint amidst our struggles; the tongue is powerless, the familiar strength fails the body, nor will words or utterance follow: so the disastrous goddess brings to naught all Turnus' valour as he presses on. His heart wavers in shifting emotion; he gazes on his Rutulians and on the city, and falters in terror, and shudders at the imminent spear; neither sees he whither he may escape nor how rush violently on the enemy, and nowhere his chariot or his sister at the reins. As he wavers Aeneas poises the deadly weapon, and, marking his chance, hurls it in from afar with all his strength of body. Never with such a roar are stones hurled from some engine on ramparts, nor does the thunder burst in so loud a peal.
Carrying grim death with it, the spear flies in fashion of some dark whirlwind, and opens the rim of the corslet and the utmost circles of the sevenfold shield. Right through the thigh it passes hurtling on; under the blow Turnus falls huge to earth with his leg doubled under him. The Rutulians start up with a groan, and all the hill echoes round about, and the width of high woodland returns their cry. Lifting up beseechingly his humbled eyes and suppliant hand: 'I have deserved it,' he says, 'nor do I ask for mercy; use thy fortune. If an unhappy parent's distress may at all touch thee, this I pray; even such a father was Anchises to thee; pity Daunus' old age, and restore to my kindred which thou wilt, me or my body bereft of day. Thou art conqueror, and Ausonia hath seen me stretch conquered hands. Lavinia is thine in marriage; press not thy hatred farther.' Aeneas stood wrathful in arms, with rolling eyes, and lowered his hand; and now and now yet more the speech began to bend him to waver: when high on his shoulder appeared the sword-belt with the shining bosses that he knew, the luckless belt of the boy Pallas, whom Turnus had struck down with mastering wound, and wore on his shoulders the fatal ornament. The other, as his eyes drank in the plundered record of his fierce grief, kindles to fury, and cries terrible in anger: 'Mayest thou, thou clad in the spoils of my dearest, escape mine hands? Pallas it is, Pallas who now strikes the sacrifice, and exacts vengeance in thy guilty blood.' So saying, he fiercely plunges the steel full in his breast. But his limbs grow slack and chill, and the life with a moan flies indignantly into the dark.

THE END.

NOTES

Book First

l. 123—Accipiunt inimicum imbrem. Inimica non tantum hostilia sed perniciosa.—Serv. on ix. 315. The word often has this latter sense in Virgil.

l. 396—Aut capere aut captas iam despectare videntur. Henry seems unquestionably right in explaining captas despectare of the swans rising and hovering over the place where they had settled, this action being more fully expressed in the next two lines. The parallelism between ll. 396 and 400 exists, but it is inverted, capere corresponding to subit, captas despectare to tenet.

l. 427—lata theatris with the balance of MS. authority.

l. 550—Arvaque after Med. and Pal.; armaque Con.

l. 636—Munera laetitiamque die ('ut multi legunt,' says Serv.), though it has little MS. authority, has been adopted because it is strongly probable on internal grounds, as giving a basis for the other two readings, dei and dii.

l. 722—The long-since-unstirred spirit. And weep afresh love's long-since-cancell'd woe. Shakespeare, Sonnet XXX.

l. 726—dependent lychni laquearibus aureis. Serv. on viii. 25, summique ferit laquearia tecti, says 'multi lacuaria legunt. nam lacus dicuntur: unde est . . . lacunar. non enim a laqueis dicitur.' As Prof. Nettleship has pointed out, this seems to indicate that there are two words, laquear from laqueus, meaning chain or network, and lacuar or lacunar from lacus, meaning sunk work.

Book Second

l. 30—Classibus hic locus. Ad equites referre debemus.—Serv. Cf. also vii. 716.

l. 76—Omitted with the best MSS.

l. 234—moenia pandimus urbis. Moenia cetera urbis tecta vel aedes accipiendum.—Serv. This is the sense which the word generally has in Virgil: it is often used in contrast with muri, or as a synonym of urbs; and in most cases city is its nearest English equivalent.

l. 381—caerula colla tumentem. Caerulum est viride cum nigro.—Serv. on vii. 198. Cf. iii. 208, where it is used of the colour of the sea after a storm.
l. 616—nimbo effulgens. est fulgidum lumen quo deorum capita cinguntur. sic etiam pingi solet.—Serv. Cf. xii. 416.

Book Third

l. 127—freta concita terris with all the best MSS.; consita Con.

l. 152—qua se Plena per insertas fundebat Luna fenestras. The usual explanation, which makes insertas an epithet transferred by a sort of hypallage from Luna to fenestras, is extremely violent, and makes the word little more than a repetition of se fundebat. Servius mentions two other interpretations; non seratas, quasi inseratas, and clatratas; the last has been adopted in the translation. In the passage of Lucretius (ii. 114) which Virgil has imitated here, Contemplator enim cum solis lumina . . . Inserti fundunt radii per opaca domorum, it is possible that clatris may be the lost word.

l. 684—
Contra iussa monent Heleni, Scyllam atque Charybdim
Inter, utramque viam leti discrimine parvo
Ni teneant cursus.
In this difficult passage it is probably best to take cursus as the subject to teneant (cursus teneant, id est agantur.—Serv. Cf. also l. 454 above, quamvis vi cursus in altum Vela vocet), viam being either the direct object of teneant, or in loose apposition to Scyllam atque Charybdim.

l. 708—tempestatibus actis with Rom. and Pal.; actus Con. after Med.

Book Fourth

Totus hic liber . . . in consiliis et subtilitatibus est. nam paene comicus stilus est. nec mirum, ubi de amore tractatur.—Serv.

l. 273—Omitted with the best MSS.

l. 528—Omitted with the best MSS.

Book Fifth

l. 595—luduntque per undas, omitted with the preponderance of MS. authority.

Book Sixth

l. 242—Omitted with the balance of MS. authority.

l. 806—virtutem extendere factis with Med.; virtute extendere vires Con.

Book Eighth

l. 46—Omitted with the majority of the best MSS.

l. 383—Arma rogo. Genetrix nato te filia Nerei. Arma rogo. hic distinguendum, ut cui petat non dicat, sed relinquat intellegi . . . Genetrix nato te filia Nerei. hoc est, soles hoc praestare matribus.—Serv.

Book Ninth

l. 29—Omitted with all the best MSS.

l. 122—Omitted with all the best MSS.

l. 281—
Me nulla dies tam fortibus ausis
Dissimilem arguerit tantum, Fortuna secunda
Aut adversa cadat.
With some hesitation I have adopted this reading as the one open to least objection, though the balance of authority is decidedly in favour of haud adversa. For the position of tantum cf. Ecl. x. 46, according to the 'subtilior explicatio' now generally adopted.

l. 412—
Et venit adversi in tergum Sulmonis ibique
Frangitur, et fisso transit praecordia ligno.
The phrase in tergum occurs twice elsewhere: ix. 764—meaning 'on the back'; and xi. 653—meaning 'backward'; and in x. 718 the uncertainty about the order of the lines makes it possible that tergo decutit hastas was meant to refer to the boar, not to Mezentius. But the passages quoted by the editors there shew that the word might be used in the sense of 'shield'; and this being so we are scarcely justified in reading aversi against all the good MSS.

l. 529—Omitted with most MSS.

Book Tenth

l. 278—Omitted with the best MSS.

l. 754—Insidiis, iaculo et longe fallente sagitta. The MS. authority is decidedly in favour of this, the more difficult reading; and the hendiadys is not more violent than those in Georg. ii. 192, Aen. iii. 223.

Book Twelfth

l. 218—Tum magis, ut propius cernunt non viribus aequis. With Ribbeck I believe that there is a gap in the sense here, and have marked one in the translation.

l. 520—Limina with Med. Munera Con.
ll. 612, 613—Omitted with the best MSS.

l. 751—Venator cursu canis et latratibus instat. I take cursu canis as equivalent to currente cane, as in i. 324, spumantis apri cursum clamore prementem.

Printed by R. & R. Clark, Edinburgh.

TRANSCRIBER'S NOTES

Transcriber added the Table of Contents.

The following words appear with and without a hyphen. Spelling has been left as in the original: blood-stained / bloodstained; hill-tops / hilltops; horse-hair / horsehair; life-blood / lifeblood; new-born / newborn; spear-shaft / spearshaft; water-ways / waterways.

The following words are spelled in multiple ways. Spelling has been left as in the original: aery / aëry; horned / hornèd; Nereids / Nereïd; Pergama / Pergamea.

The following corrections have been made to the text:

page 173—'[quotation mark missing in original]Nymphs, Laurentine Nymphs

page 202—in name fail to be Creüsa[original has Crëusa]

page 207—Rumour on fluttering[original has flutttering] wings

page 285—the Rhoetean[original has Rhoeteian] captain drives his army

The first occurrence of Phoebus was rendered with an oe ligature in the original. Ellipses match the original. Page 300 is blank in the original.
291
Sestina

From Wikipedia, the free encyclopedia

Fixed verse form of poetry

"Sestina"

In fair Provence, the land of lute and rose,
Arnaut, great master of the lore of love,
First wrought sestines to win his lady's heart,
For she was deaf when simpler staves he sang,
And for her sake he broke the bonds of rhyme,
And in this subtler measure hid his woe.

'Harsh be my lines,' cried Arnaut, 'harsh the woe
My lady, that enthorn'd and cruel rose,
Inflicts on him that made her live in rhyme!'
But through the metre spake the voice of Love,
And like a wild-wood nightingale he sang
Who thought in crabbed lays to ease his heart.

First two stanzas of the sestina "Sestina", Edmund Gosse (1879)

A sestina (Italian: sestina, from sesto, sixth; Old Occitan: cledisat [klediˈzat]; also known as sestine, sextine, sextain) is a fixed verse form consisting of six stanzas of six lines each, normally followed by a three-line envoi. The words that end each line of the first stanza are used as line endings in each of the following stanzas, rotated in a set pattern.

The invention of the form is usually attributed to Arnaut Daniel, a troubadour of 12th-century Provence, and the first sestinas were written in the Occitan language of that region. The form was cultivated by his fellow troubadours, then by other poets across Continental Europe in the subsequent centuries; they contributed to what would become the "standard form" of the sestina. The earliest example of the form in English appeared in 1579, though sestinas were rarely written in Britain until the end of the 19th century. The sestina remains a popular poetic form, and many sestinas continue to be written by contemporary poets.

History

The oldest-known sestina is "Lo ferm voler qu'el cor m'intra", written around 1200 by Arnaut Daniel, a troubadour of Aquitanian origin; he refers to it as "cledisat", meaning, more or less, "interlock". Hence, Daniel is generally considered the form's inventor, though it has been suggested that he may only have innovated an already existing form. Nevertheless, two other original troubadouric sestinas are recognised, the best known being "Eras, pus vey mon benastruc" by Guilhem Peire Cazals de Caortz; there are also two contrafacta built on the same end-words, the best known being Ben gran avoleza intra by Bertran de Born. These early sestinas were written in Old Occitan; the form started spilling into Italian with Dante in the 13th century; by the 15th, it was used in Portuguese by Luís de Camões.

The involvement of Dante and Petrarch in establishing the sestina form, together with the contributions of others in the country, account for its classification as an Italian verse form, despite not originating there. The result was that the sestina was re-imported into France from Italy in the 16th century. Pontus de Tyard was the first poet to attempt the form in French, and the only one to do so prior to the 19th century; he introduced a partial rhyme scheme into his sestina.

English

An early version of the sestina in Middle English is the "Hymn to Venus" by Elizabeth Woodville (1437–1492); it is an "elaboration" on the form, found in a single manuscript.
It is a six-stanza poem that praises Venus, the goddess of love, and consists of six seven-line stanzas in which the first line of each stanza is also its last line, and the lines of the first stanza provide the first lines for each subsequent stanza.

The first appearance of the sestina in English print is "Ye wastefull woodes", comprising lines 151–89 of the August Æglogue in Edmund Spenser's Shepherd's Calendar, published in 1579. It is in unrhymed iambic pentameter, but the order of end-words in each stanza is non-standard – ending 123456, 612345, etc. – each stanza promoting the previous final end-word to the first line, but otherwise leaving the order intact; the envoi order is (1) 2 / (3) 4 / (5) 6. This scheme was set by the Spaniard Gutierre de Cetina.

Although they appeared in print later, Philip Sidney's three sestinas may have been written earlier, and are often credited as the first in English. The first published (toward the end of Book I of The Countess of Pembroke's Arcadia, 1590) is the double sestina "Ye Goatherd Gods". In this variant the standard end-word pattern is repeated for twelve stanzas, ending with a three-line envoi, resulting in a poem of 75 lines. Two others were published in subsequent editions of the Arcadia. The second, "Since wailing is a bud of causeful sorrow", is in the "standard" form. Like "Ye Goatherd Gods" it is written in unrhymed iambic pentameter and uses exclusively feminine endings, reflecting the Italian endecasillabo. The third, "Farewell, O sun, Arcadia's clearest light", is the first rhyming sestina in English: it is in iambic pentameters and follows the standard end-word scheme, but rhymes in the first stanza (the rhyme scheme necessarily changes in each subsequent stanza, a consequence of which is that the 6th stanza is in rhyming couplets). Sidney uses the same envoi structure as Spenser.

William Drummond of Hawthornden published two sestinas (which he called "sextains") in 1616, which copy the form of Sidney's rhyming sestina. After this, there is an absence of notable sestinas for over 250 years, with John Frederick Nims noting that "... there is not a single sestina in the three volumes of the Oxford anthologies that cover the seventeenth, eighteenth and nineteenth centuries."

In the 1870s, there was a revival of interest in French forms, led by Andrew Lang, Austin Dobson, Edmund Gosse, W. E. Henley, John Payne, and others. The earliest sestina of this period is Algernon Charles Swinburne's "Sestina". It is in iambic pentameter rhyming in the first stanza; each stanza begins by repeating the previous end-words 6 then 1, but the following 4 lines repeat the remaining end-words ad lib; the envoi is (1) 4 / (2) 3 / (5) 6. In the same volume (Poems and Ballads, Second Series, 1878) Swinburne introduces a "double sestina" ("The Complaint of Lisa") that is unlike Sidney's: it comprises 12 stanzas of 12 iambic pentameter lines each, with rhyme in the first stanza. Similar to his "Sestina", each stanza first repeats end-words 12 then 1 of the previous stanza; the rest are ad lib. The envoi is (12) 10 / (8) 9 / (7) 4 / (3) 6 / (2) 1 / (11) 5.

From the 1930s, a revival of the form took place across the English-speaking world, led by poets such as W. H. Auden, and the 1950s were described as the "age of the sestina" by James E. B. Breslin. "Sestina: Altaforte" by Ezra Pound, "Paysage moralisé" by W. H. Auden, and "Sestina" by Elizabeth Bishop are distinguished modern examples of the sestina.
"Histoire" by Harry Mathews adds an additional Oulipian constraint: the end words wildly misusing ideological or prejudicial terms. The sestina remains a popular closed verse form, and many sestinas continue to be written by contemporary poets; notable examples include "Six Bad Poets" by Christopher Reid, "The Guest Ellen at the Supper for Street People" by David Ferry and "IVF" by Kona Macphee. Form [edit] Although the sestina has been subject to many revisions throughout its development, there remain several features that define the form. The sestina is composed of six stanzas of six lines (sixains), followed by a stanza of three lines (a tercet). There is no rhyme within the stanzas; instead the sestina is structured through a recurrent pattern of the words that end each line, a technique known as "lexical repetition". In the original form composed by Daniel, each line is of ten syllables, except the first of each stanza which are of seven. The established form, as developed by Petrarch and Dante, was in hendecasyllables. Since then, changes to the line length have been a relatively common variant, such that Stephanie Burt has written: "sestinas, as the form exists today, [do not] require expertise with inherited meter ...". The pattern that the line-ending words follow is often explained if the numbers 1 to 6 are allowed to stand for the end-words of the first stanza. Each successive stanza takes its pattern based upon a bottom-up pairing of the lines of the preceding stanza (i.e., last and first, then second-from-last and second, then third-from-last and third). Given that the pattern for the first stanza is 123456, this produces 615243 in the second stanza, numerical series which corresponds, as Paolo Canettieri has shown, to the way in which the points on the dice are arranged.[page needed] This genetic hypothesis is supported by the fact that Arnaut Daniel was a strong dice player and various images related to this game are present in his poetic texts. Another way of visualising the pattern of line-ending words for each stanza is by the procedure known as retrogradatio cruciata, which may be rendered as "backward crossing". The second stanza can be seen to have been formed from three sets of pairs (6–1, 5–2, 4–3), or two triads (1–2–3, 4–5–6). The 1–2–3 triad appears in its original order, but the 4–5–6 triad is reversed and superimposed upon it. The pattern of the line-ending words in a sestina is represented both numerically and alphabetically in the following table: Table of sestina end-words | Stanza 1 | Stanza 2 | Stanza 3 | Stanza 4 | Stanza 5 | Stanza 6 | | 1 A | 6 F | 3 C | 5 E | 4 D | 2 B | | 2 B | 1 A | 6 F | 3 C | 5 E | 4 D | | 3 C | 5 E | 4 D | 2 B | 1 A | 6 F | | 4 D | 2 B | 1 A | 6 F | 3 C | 5 E | | 5 E | 4 D | 2 B | 1 A | 6 F | 3 C | | 6 F | 3 C | 5 E | 4 D | 2 B | 1 A | The sixth stanza is followed by a tercet that is known variably by the French term envoi, the Occitan term tornada, or, with reference to its size in relation to the preceding stanzas, a "half-stanza". It consists of three lines that include all six of the line-ending words of the preceding stanzas. This should take the pattern of 2–5, 4–3, 6–1 (numbers relative to the first stanza); the first end-word of each pair can occur anywhere in the line, while the second must end the line. However, the end-word order of the envoi is no longer strictly enforced. "Sestina" Time to plant tears (6), says the almanac (5). 
The sixth stanza is followed by a tercet that is known variably by the French term envoi, the Occitan term tornada, or, with reference to its size in relation to the preceding stanzas, a "half-stanza". It consists of three lines that include all six of the line-ending words of the preceding stanzas. This should take the pattern of 2–5, 4–3, 6–1 (numbers relative to the first stanza); the first end-word of each pair can occur anywhere in the line, while the second must end the line. However, the end-word order of the envoi is no longer strictly enforced.

"Sestina"

Time to plant tears (6), says the almanac (5).
The grandmother (2) sings to the marvelous stove (4)
and the child (3) draws another inscrutable house (1).

The envoi to "Sestina"; the repeated words are emboldened and labelled. Elizabeth Bishop (1965)

The sestina has been subject to some variations, with changes being made to both the size and number of stanzas, and also to individual line length. A "double sestina" is the name given to either: two sets of six six-line stanzas, with a three-line envoi (for a total of 75 lines), or twelve twelve-line stanzas, with a six-line envoi (for a total of 150 lines). Examples of either variation are rare; "Ye Goatherd Gods" by Philip Sidney is a notable example of the former variation, while "The Complaint of Lisa" by Algernon Charles Swinburne is a notable example of the latter variation. In the former variation, the original pattern of line-ending words, i.e. that of the first stanza, recurs in the seventh stanza, and thus the entire change of pattern occurs twice throughout. In the second variation, the pattern of line-ending words returns to the starting sequence in the eleventh stanza; thus it does not, unlike the "single" sestina, allow for every end-word to occupy each of the stanza ends; end-words 5 and 10 fail to couple between stanzas.

Effect

The structure of the sestina, which demands adherence to a strict and arbitrary order, produces several effects within a poem. Stephanie Burt notes that "The sestina has served, historically, as a complaint", its harsh demands acting as "signs for deprivation or duress". The structure can enhance the subject matter that it orders; in reference to Elizabeth Bishop's A Miracle for Breakfast, David Caplan suggests that the form's "harshly arbitrary demands echo its subject's". Nevertheless, the form's structure has been criticised; Paul Fussell considers the sestina to be of "dubious structural expressiveness" when composed in English and, irrespective of how it is used, "would seem to be [a form] that gives more structural pleasure to the contriver than to the apprehender".

Margaret Spanos highlights "a number of corresponding levels of tension and resolution" resulting from the structural form, including structural, semantic and aesthetic tensions. She believes that the aesthetic tension, which results from the "conception of its mathematical completeness and perfection", set against the "experiences of its labyrinthine complexities", can be resolved in the apprehension of the "harmony of the whole".

The strength of the sestina, according to Stephen Fry, is the "repetition and recycling of elusive patterns that cannot be quite held in the mind all at once". For Shanna Compton, these patterns are easily discernible by newcomers to the form; she says: "Even someone unfamiliar with the form's rules can tell by the end of the second stanza ... what's going on ...".

Examples

The 1972 television play Between Time and Timbuktu, based on the writings of Kurt Vonnegut, is about a poet-astronaut who wanted to compose a sestina in outer space. Vonnegut wrote a sestina for the production.

See also

- Canzone, an Italian or Provençal song or ballad, in which the sestina is sometimes included.
- Pentina, a variation of the sestina based on five endwords.
- Villanelle, another type of fixed verse form.

References

- Eusebi, Mario (1996). L'aur'amara. Rome: Carocci. ISBN 978-88-7984-167-2.
- Fry 2007, p. 235
- Davidson 1910, pp. 18–20
- Collura, Alessio (2010). Il trovatore Guilhem Peire de Cazals. Edizione Critica.
Padova: Master's thesis, University of Padova.
- Preminger 1993, p. 1146
- "Sestina of the Lady Pietra degli Scrovigni". The Poetry Foundation. Retrieved 20 March 2012.
- Gasparov 1996, p. 159
- Stratton 1917, pp. 306, 316, 318
- Kastner 1903, p. 283
- Kastner 1903, pp. 283–4
- Stanbury, Sarah (2005). "Middle English Religious Lyrics". In Duncan, Thomas Gibson (ed.). A Companion to the Middle English Lyric. Boydell & Brewer. pp. 227–41. ISBN 9781843840657.
- McNamer, Sarah (2003). "Lyrics and romances". In Wallace, David; Dinshaw, Carolyn (eds.). The Cambridge Companion to Medieval Women's Writing. Cambridge UP. pp. 195–209. ISBN 9780521796385.
- Barratt, Alexandra, ed. (1992). Women's Writing in Middle English. New York: Longman. pp. 275–77. ISBN 0-582-06192-X.
- "The Shepheardes Calender: August". University of Oregon. Retrieved 28 March 2012.
- Shapiro 1980, p. 185
- Ferguson 1996, pp. 188–90
- Burt 2007, p. 219
- Caplan 2006, pp. 19–20
- White 1887, p. xxxix
- This is the earliest-published sestina reprinted by Gleeson White (White 1887, pp. 203–12), and he doesn't mention any earlier ones.
- Lennard 2006, p. 53
- Caplan 2006, p. 20
- "Sestina: Altaforte". The Poetry Foundation. Retrieved 20 March 2012.
- Preminger 1993, p. 1147
- Ruby (24 January 2012). "Elizabeth Bishop: Sestina". Elizabeth Bishop. Retrieved 9 March 2018.
- Matthews, Harry (16 August 1984). "Histoire". The New York Review of Books. XXXI (13). Retrieved 5 March 2024.
- Burt 2007, pp. 218–19
- Kellaway, Kate (26 October 2013). "Six Bad Poets by Christopher Reid – review". The Guardian. Retrieved 14 July 2023.
- Wheldon, Wynn (26 September 2013). "Six Bad Poets, by Christopher Reid – review". The Spectator. Retrieved 14 July 2023.
- "The Guest Ellen at the Supper for Street People". The Poetry Foundation. Retrieved 20 March 2012.
- Fry 2007, p. 231
- Spanos 1978, p. 546
- Fry 2007, p. 232
- Kastner 1903, p. 284
- Strand et al. 2001, p. 24
- Burt 2007, p. 222
- Canettieri, Paolo (1996). Il gioco delle forme nella poesia dei trovatori. Rome: Il Bagatto. ISBN 88-7806-095-X.
- Germini, Simone (8 October 2014). "Arnaut Daniel, il giocatore". iMalpensanti (in Italian). Retrieved 1 January 2025.
- Krysl 2004, p. 9
- Shapiro 1980, pp. 7–8
- Fry 2007, pp. 234–5
- Fry 2007, p. 234
- Fry 2007, p. 237
- Ferguson 1996, p. 1413–13
- "The Complaint of Lisa". The Poetry Foundation. Retrieved 19 February 2012.
- Caplan 2006, p. 23
- Fussell 1979, p. 145
- Spanos 1978, p. 551
- Fry 2007, p. 238
- Burt 2007, p. 226
- Vonnegut, Kurt (2012). Wakefield, Dan (ed.). Kurt Vonnegut: Letters. New York: Delacorte Press. ISBN 9780345535399. "I am writing a sestina for the script... It's tough, but what isn't?" (Letter of 2 October 1971, to his daughter Nanette.)

Sources

- Burt, Stephen (2007). "Sestina! or, The Fate of the Idea of Form" (PDF). Modern Philology. 105 (1): 218–241. doi:10.1086/587209. JSTOR 10.1086/587209. S2CID 162877995. (subscription required)
- Canettieri, Paolo (1996). Il gioco delle forme nella lirica dei trovatori. IT: Bagatto Libri.
- Caplan, David (2006). Questions of Possibility: Contemporary Poetry and Poetic Form. US: Oxford University Press. ISBN 978-0-19-531325-3.
- Davidson, F. J. A. (1910). "The Origin of the Sestina". Modern Language Notes. 25 (1): 18–20. doi:10.2307/2915934. JSTOR 2915934.
(subscription required)
- Ferguson, Margaret; et al. (1996). The Norton Anthology of Poetry. US: W. W. Norton & Company, Inc. ISBN 0-393-96820-0.
- Fry, Stephen (2007). The Ode Less Travelled. UK: Arrow Books. ISBN 978-0-09-950934-9.
- Fussell, Paul (1979). Poetic Meter and Poetic Form. US: McGraw-Hill Higher Education. ISBN 978-0-07-553606-2.
- Gasparov, M. L. (1996). A History of European Versification. UK: Clarendon Press. ISBN 978-0-19-815879-0.
- Kastner, L. E. (1903). A History of French Versification. UK: Clarendon Press.
- Krysl, Marilyn (2004). "Sacred and Profane: Sestina as Rite". The American Poetry Review. 33 (2): 7–12.
- Lennard, John (2006). The Poetry Handbook. UK: Oxford University Press. ISBN 978-0-19-926538-1.
- Preminger, Alex; et al. (1993). The New Princeton Encyclopedia of Poetry and Poetics. US: Princeton University Press. ISBN 0-691-02123-6.
- Shapiro, Marianne (1980). Hieroglyph of Time: The Petrarchan Sestina. US: University of Minnesota Press. ISBN 978-0-8166-0945-1.
- Spanos, Margaret (1978). "The Sestina: An Exploration of the Dynamics of Poetic Structure". Medieval Academy of America. 53 (3): 545–557. doi:10.2307/2855144. JSTOR 2855144. S2CID 162823092. (subscription required)
- Strand, Mark; Boland, Eavan (2001). The Making of a Poem: A Norton Anthology of Poetic Forms. US: Norton. ISBN 978-0-393-32178-4.
- White, Gleeson, ed. (1887). Ballades and Rondeaus, Chants Royal, Sestinas, Villanelles, etc. The Canterbury Poets. The Walter Scott Publishing Co., Ltd.

Further reading

- Saclolo, Michael P. (May 2011). "How a Medieval Troubadour Became a Mathematical Figure" (PDF). Notices of the AMS. 58 (5): 682–7. Retrieved 7 September 2012.
- Stratton, Clarence (1917). "The Italian Lyrics of Sidney's Arcadia". The Sewanee Review. 25 (3): 305–326. JSTOR 27533030. (subscription required)

External links

- Wikisource has the text of the 1911 Encyclopædia Britannica article "Sestina".
- Rules and history of the sestina from the American Academy of Poetry
- Examples of sestinas from 2003–2007 at McSweeney's Internet Tendency
- How to Write a Sestina (with Examples and Diagrams) from the Society of Classical Poets
292
Bit-field

From cppreference.com

Declares a class data member with explicit
size, in bits. Adjacent bit-field members may (or may not) be packed to share and straddle the individual bytes.

A bit-field declaration is a class data member declaration which uses the following declarator:

    identifier (optional) attr (optional) : size                                   (1)
    identifier (optional) attr (optional) : size brace-or-equal-initializer        (2) (since C++20)

The type of the bit-field is introduced by the decl-specifier-seq of the declaration syntax.

attr - (since C++11) sequence of any number of attributes
identifier - the name of the bit-field that is being declared. The name is optional: unnamed bit-fields introduce the specified number of padding bits.
size - an integral constant expression with a value greater than or equal to zero. When greater than zero, this is the number of bits that this bit-field will occupy. The value zero is only allowed for nameless bit-fields and has special meaning.
brace-or-equal-initializer - default member initializer to be used with this bit-field

Contents: 1 Explanation, 2 Notes, 3 Defect reports, 4 References, 5 See also

Explanation

The type of a bit-field can only be integral (including bool) or (possibly cv-qualified) enumeration type; an unnamed bit-field cannot be declared with a cv-qualified type. A bit-field cannot be a static data member. There are no bit-field prvalues: lvalue-to-rvalue conversion always produces an object of the underlying type of the bit-field.

The number of bits in a bit-field sets the limit to the range of values it can hold:

    #include <iostream>

    struct S
    {
        // three-bit unsigned field, allowed values are 0...7
        unsigned int b : 3;
    };

    int main()
    {
        S s = {6};
        ++s.b;                    // store the value 7 in the bit-field
        std::cout << s.b << '\n';
        ++s.b;                    // the value 8 does not fit in this bit-field
        std::cout << s.b << '\n'; // formally implementation-defined, typically 0
    }

Possible output:

    7
    0

Multiple adjacent bit-fields are usually packed together (although this behavior is implementation-defined):

    #include <bit>
    #include <cstdint>
    #include <iostream>

    struct S
    {
        // will usually occupy 2 bytes:
        unsigned char b1 : 3; // 1st 3 bits (in 1st byte) are b1
        unsigned char    : 2; // next 2 bits (in 1st byte) are blocked out as unused
        unsigned char b2 : 6; // 6 bits for b2 - doesn't fit into the 1st byte => starts a 2nd
        unsigned char b3 : 2; // 2 bits for b3 - next (and final) bits in the 2nd byte
    };

    int main()
    {
        std::cout << sizeof(S) << '\n'; // usually prints 2

        S s;
        // set distinguishable field values
        s.b1 = 0b111;
        s.b2 = 0b101111;
        s.b3 = 0b11;

        // show layout of fields in S
        auto i = std::bit_cast<std::uint16_t>(s);
        // usually prints 1110000011110111
        // breakdown is: └┬┘├┘└┬┘└─┬──┘└┤
        //                b1 u  a   b2 b3
        // where "u" marks the unused :2 specified in the struct, and
        // "a" marks compiler-added padding to byte-align the next field.
        // Byte-alignment is happening because b2's type is declared unsigned char;
        // if b2 were declared uint16_t there would be no "a", b2 would abut "u".
        for (auto b = i; b; b >>= 1) // print LSB-first
            std::cout << (b & 1);
        std::cout << '\n';
    }

Possible output:

    2
    1110000011110111

The special unnamed bit-field of size zero can be forced to break up padding.
It specifies that the next bit-field begins at the beginning of its allocation unit:

    #include <iostream>

    struct S
    {
        // will usually occupy 2 bytes:
        // 3 bits: value of b1
        // 5 bits: unused
        // 2 bits: value of b2
        // 6 bits: unused
        unsigned char b1 : 3;
        unsigned char    : 0; // start a new byte
        unsigned char b2 : 2;
    };

    int main()
    {
        std::cout << sizeof(S) << '\n'; // usually prints 2
                                        // would usually print 1 if not for
                                        // the padding break above
    }

Possible output:

    2

If the specified size of the bit-field is greater than the size of its type, the value is limited by the type: a std::uint8_t b : 1000; would still hold values in the range [0, 255]. The extra bits are padding bits.

Because bit-fields do not necessarily begin at the beginning of a byte, the address of a bit-field cannot be taken. Pointers and non-const references to bit-fields are not possible. When initializing a const reference from a bit-field, a temporary is created (its type is the type of the bit-field), copy-initialized with the value of the bit-field, and the reference is bound to that temporary.

There are no default member initializers for bit-fields: int b : 1 = 0; and int b : 1 {0} are ill-formed. (until C++20)

In case of ambiguity between the size of the bit-field and the default member initializer, the longest sequence of tokens that forms a valid size is chosen: (since C++20)

    int a;
    const int b = 0;

    struct S
    {
        // simple cases
        int x1 : 8 = 42;              // OK; "= 42" is brace-or-equal-initializer
        int x2 : 8 {42};              // OK; "{42}" is brace-or-equal-initializer

        // ambiguities
        int y1 : true ? 8 : a = 42;   // OK; brace-or-equal-initializer is absent
        int y2 : true ? 8 : b = 42;   // error: cannot assign to const int
        int y3 : (true ? 8 : b) = 42; // OK; "= 42" is brace-or-equal-initializer
        int z  : 1 || new int{0};     // OK; brace-or-equal-initializer is absent
    };

Notes

The following properties of bit-fields are implementation-defined:

- The value that results from assigning or initializing a signed bit-field with a value out of range, or from incrementing a signed bit-field past its range.
- Everything about the actual allocation details of bit-fields within the class object. For example, on some platforms, bit-fields don't straddle bytes, on others they do. Also, on some platforms, bit-fields are packed left-to-right, on others right-to-left.

In the C programming language, the width of a bit-field cannot exceed the width of the underlying type, and whether int bit-fields that are not explicitly signed or unsigned are signed or unsigned is implementation-defined. For example, int b : 3; may have the range of values [0, 7] or [-4, 3] in C, but only the latter choice is allowed in C++.
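The C++-mandated signed range can be observed directly; the following minimal sketch (added for illustration) shows a plain int bit-field of width 3 holding values in [-4, 3], with out-of-range results implementation-defined as described in the Notes above:

    #include <iostream>

    struct S
    {
        // In C++, a plain int bit-field has the signedness of its underlying
        // type (see CWG 739 in the defect reports below), so this 3-bit field
        // holds values in [-4, 3].
        int b : 3;
    };

    int main()
    {
        S s{3};
        std::cout << s.b << '\n'; // prints 3, the largest representable value
        ++s.b;                    // out of range: the resulting value is
                                  // implementation-defined; typically wraps to -4
                                  // on two's-complement implementations
        std::cout << s.b << '\n';
    }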
Defect reports

The following behavior-changing defect reports were applied retroactively to previously published C++ standards.

| DR | Applied to | Behavior as published | Correct behavior |
| CWG 324 | C++98 | it was unspecified whether the return value of an assignment to a bit-field is a bit-field | added bit-field specifications for operators which may return lvalues |
| CWG 739 | C++98 | signedness of bit-fields that are neither declared signed nor unsigned was implementation-defined | consistent with underlying types |
| CWG 2229 | C++98 | unnamed bit-fields could be declared with a cv-qualified type | prohibited |
| CWG 2511 | C++98 | cv-qualifications were not allowed in bit-field types | bit-fields can have cv-qualified enumeration types |

References

- C++23 standard (ISO/IEC 14882:2024): 11.4.10 Bit-fields [class.bit]
- C++20 standard (ISO/IEC 14882:2020): 11.4.9 Bit-fields [class.bit]
- C++17 standard (ISO/IEC 14882:2017): 12.2.4 Bit-fields [class.bit]
- C++14 standard (ISO/IEC 14882:2014): 9.6 Bit-fields [class.bit]
- C++11 standard (ISO/IEC 14882:2011): 9.6 Bit-fields [class.bit]
- C++03 standard (ISO/IEC 14882:2003): 9.6 Bit-fields [class.bit]
- C++98 standard (ISO/IEC 14882:1998): 9.6 Bit-fields [class.bit]

See also

- std::bitset: implements constant length bit array (class template)
- std::vector<bool>: space-efficient dynamic bitset (class template specialization)
- Bit manipulation (C++20): utilities to access, manipulate, and process individual bits and bit sequences
- C documentation for Bit-fields
293
Solve a linear ODE with dirac delta

Asked 8 years, 7 months ago · Modified 8 years, 7 months ago · Viewed 762 times

I have to solve this linear differential equation

$$ r f''(r) + f'(r) + k^2 r f(r) = \delta(r) $$

on $\mathbb{R}^+$. I know the solutions to the homogeneous problem are $c\,J_0(kr) + d\,Y_0(kr)$, where $J_0, Y_0$ are the zeroth-order Bessel functions of the first and second kind. But how do I go about solving the problem with this right-hand side?

For background, I'm trying to solve the partial differential equation

$$ \left( \Delta - \frac{1}{c^2} \frac{\partial^2}{\partial t^2} \right) A(r,t) = -\frac{\mu_0\, i(t)}{2\pi r}\, \delta(r) $$

which reduces to what I have after Fourier transform w.r.t. time and fixing $\omega$, and setting $k = \omega/c$.

Tags: ordinary-differential-equations, dirac-delta, bessel-functions

Asked Jan 1, 2017 at 20:17 by ManifoldFR (edited Jan 1, 2017 at 22:37)

Comments:

- It's actually the electric field for a situation where there's a time-dependent current flowing through a wire on the line of equation $r = 0$ (in cylindrical coordinates). – ManifoldFR, Jan 1, 2017 at 20:40
- Already tried that. I get a differential equation that I can solve, but I'm not able to really Fourier transform back the solution. – ManifoldFR, Jan 1, 2017 at 21:02
- Indeed, it leads to the linear ODE, where $x$ is the Fourier transform variable w.r.t. $r$: $$ \frac{d}{dx}\big[(k^2 - x^2) F(x)\big] - x F(x) = 1, $$ which I change to $G'(x) - \frac{x}{k^2 - x^2}\, G(x) = 1$ with $G(x) = (k^2 - x^2) F(x)$.
The solution of the homogeneous problem is $\sqrt{|k^2 - x^2|}$, yes, but the particular solution is $\ln\big(\sqrt{x^2 - k^2} + x\big)$ for $x > k$ and $\arctan\big(x / \sqrt{k^2 - x^2}\big)$ for $x < k$. Performing the inverse Fourier transform of that doesn't sound very computable. – ManifoldFR, Jan 1, 2017 at 21:12

1 Answer

For $r \neq 0$, the Dirac delta vanishes, and therefore

$$ f(r) = \begin{cases} c_1 J_0(kr) + d_1 Y_0(kr) & r < 0 \\ c_2 J_0(kr) + d_2 Y_0(kr) & r > 0 \end{cases} $$

The ODE, being of second order, should have solutions with two arbitrary coefficients; but the expression above apparently has four arbitrary constants. In fact, there are two equations that constrain these four constants, reducing the number down to two, as one would expect. The two relations are $f(0^-) = f(0^+)$ (why?) and $\big[r f'(r)\big]_{0^-}^{0^+} = 1$ (why?)

Answered Jan 1, 2017 at 20:33 by AccidentalFourierTransform (edited Jan 1, 2017 at 21:41)

Comments:

- My solution function isn't defined on the negative reals, though... – ManifoldFR, Jan 1, 2017 at 20:36
- Then you cannot have a delta at $r = 0$. Maybe you want to solve $\ldots = \delta(r - r_0)$? – AccidentalFourierTransform, Jan 1, 2017 at 20:36
- The function is defined on $\mathbb{R}^+$: why can't I have a delta? – ManifoldFR, Jan 1, 2017 at 21:42
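The jump condition quoted in the answer can be checked directly (a short sketch added here; it is not part of the original thread). Using $r f''(r) + f'(r) = \big(r f'(r)\big)'$, integrate the ODE over a small interval around the origin:

$$ \int_{-\epsilon}^{\epsilon} \Big[ \big(r f'(r)\big)' + k^2 r f(r) \Big]\, dr = \int_{-\epsilon}^{\epsilon} \delta(r)\, dr = 1 . $$

The $k^2 r f$ term vanishes as $\epsilon \to 0$ for any locally integrable $f$, leaving

$$ \big[ r f'(r) \big]_{0^-}^{0^+} = 1 . $$

Near the origin, $J_0(kr)$ is regular while $Y_0(kr) \sim \tfrac{2}{\pi} \ln(kr)$, so $\tfrac{d}{dr} Y_0(kr) \sim \tfrac{2}{\pi r}$ and $r f'(r) \to \tfrac{2}{\pi} d$ as $r \to 0^+$; the jump condition therefore fixes the coefficients of $Y_0$ on the two sides.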
294
Lecture 4

Magnetostatics, Boundary Conditions, and Jump Conditions

4.1 Magnetostatics

The magnetostatic equations, where $\partial/\partial t = 0$, are [29,31,40]

$$ \nabla \times \mathbf{H} = \mathbf{J} \tag{4.1.1} $$
$$ \nabla \cdot \mathbf{B} = 0 \tag{4.1.2} $$

One way to satisfy the second equation is to let

$$ \mathbf{B} = \nabla \times \mathbf{A} \tag{4.1.3} $$

because

$$ \nabla \cdot (\nabla \times \mathbf{A}) = 0 \tag{4.1.4} $$

The above is zero for the same reason that $\mathbf{a} \cdot (\mathbf{a} \times \mathbf{b}) = 0$. In this manner, Gauss's law is automatically satisfied. From (4.1.1), we have

$$ \nabla \times \left( \frac{\mathbf{B}}{\mu} \right) = \mathbf{J} \tag{4.1.5} $$

Then using (4.1.3),

$$ \nabla \times \left( \frac{1}{\mu} \nabla \times \mathbf{A} \right) = \mathbf{J} \tag{4.1.6} $$

In a homogeneous medium, $\mu$ is a constant and hence

$$ \nabla \times (\nabla \times \mathbf{A}) = \mu \mathbf{J} \tag{4.1.7} $$

We use the vector identity (see previous lecture)

$$ \nabla \times (\nabla \times \mathbf{A}) = \nabla(\nabla \cdot \mathbf{A}) - (\nabla \cdot \nabla)\mathbf{A} = \nabla(\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A} \tag{4.1.8} $$

As a result, we arrive at

$$ \nabla(\nabla \cdot \mathbf{A}) - \nabla^2 \mathbf{A} = \mu \mathbf{J} \tag{4.1.9} $$

By imposing the Coulomb gauge, $\nabla \cdot \mathbf{A} = 0$, which will be elaborated in the next section, we arrive at

$$ \nabla^2 \mathbf{A} = -\mu \mathbf{J} \tag{4.1.10} $$

The above is also known as the vector Poisson's equation. In cartesian coordinates, the above can be viewed as three scalar Poisson's equations. Each of the Poisson's equations can be solved using the Green's function method previously described. Consequently, in free space

$$ \mathbf{A}(\mathbf{r}) = \frac{\mu}{4\pi} \iiint_V \frac{\mathbf{J}(\mathbf{r}')}{R}\, dV' \tag{4.1.11} $$

where

$$ R = |\mathbf{r} - \mathbf{r}'| \tag{4.1.12} $$

and $dV' = dx'\,dy'\,dz'$. It is also variously written as $d\mathbf{r}'$ or $d^3\mathbf{r}'$.

4.1.1 More on Coulomb's Gauge

Gauge is a very important concept in physics, and we will further elaborate on it here. First, notice that $\mathbf{A}$ in (4.1.3) is not unique because one can always define

$$ \mathbf{A}' = \mathbf{A} - \nabla \Psi \tag{4.1.13} $$

Then

$$ \nabla \times \mathbf{A}' = \nabla \times (\mathbf{A} - \nabla \Psi) = \nabla \times \mathbf{A} = \mathbf{B} \tag{4.1.14} $$

where we have made use of the fact that $\nabla \times \nabla \Psi = 0$. Hence, the $\nabla \times$ of both $\mathbf{A}$ and $\mathbf{A}'$ produce the same $\mathbf{B}$. To find $\mathbf{A}$ uniquely, we have to define or set the divergence of $\mathbf{A}$, or provide a gauge condition. One way is to set the divergence of $\mathbf{A}$ to be zero, namely

$$ \nabla \cdot \mathbf{A} = 0 \tag{4.1.15} $$

Then

$$ \nabla \cdot \mathbf{A}' = \nabla \cdot \mathbf{A} - \nabla^2 \Psi \neq \nabla \cdot \mathbf{A} \tag{4.1.16} $$

The last non-equal sign follows if $\nabla^2 \Psi \neq 0$. However, if we further stipulate that $\nabla \cdot \mathbf{A}' = \nabla \cdot \mathbf{A} = 0$, then $-\nabla^2 \Psi = 0$. This does not necessarily imply that $\Psi = 0$, but if we impose the condition that $\Psi \to 0$ when $r \to \infty$, then $\Psi = 0$ everywhere.¹ By so doing, $\mathbf{A}$ and $\mathbf{A}'$ are equal to each other, and we obtain (4.1.10) and (4.1.11).

¹ It is a property of the Laplace boundary value problem that if $\Psi = 0$ on a closed surface $S$, then $\Psi = 0$ everywhere inside $S$. Earnshaw's theorem is useful for proving this assertion.
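As a quick illustration of (4.1.11), consider the standard worked example of an infinite line current $I$ flowing along the $z$ axis (this example is added here for concreteness; it is not in the original notes). The integral diverges logarithmically, so one evaluates the potential relative to a reference radius $\rho_0$:

$$ A_z(\rho) = \frac{\mu I}{4\pi} \int_{-L}^{L} \frac{dz'}{\sqrt{\rho^2 + z'^2}} \approx \frac{\mu I}{2\pi} \ln\frac{2L}{\rho} \quad (L \gg \rho), \qquad A_z(\rho) - A_z(\rho_0) = -\frac{\mu I}{2\pi} \ln\frac{\rho}{\rho_0} $$

Taking the curl of $\hat{z} A_z(\rho)$ in cylindrical coordinates recovers the familiar magnetostatic field, consistent with (4.1.1):

$$ B_\phi = -\frac{\partial A_z}{\partial \rho} = \frac{\mu I}{2\pi \rho} $$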
4.2 Boundary Conditions–1D Poisson's Equation

Boundary conditions are embedded in the partial differential equations that the potential or the field satisfies. Two important concepts to keep in mind are:

- Differentiation of a function with a discontinuous slope will give rise to a step discontinuity.
- Differentiation of a function with a step discontinuity will give rise to a Dirac delta function. This is also called the jump condition, a term often used by the mathematics community.

Take for example a one-dimensional Poisson's equation,

$$ \frac{d}{dx}\, \varepsilon(x)\, \frac{d}{dx} \Phi(x) = -\varrho(x) \tag{4.2.1} $$

where $\varepsilon(x)$ represents a material property that has the form given in Figure 4.1. One can actually say a lot about $\Phi(x)$ given $\varrho(x)$ on the right-hand side. If $\varrho(x)$ has a delta function singularity, it implies that $\varepsilon(x)\frac{d}{dx}\Phi(x)$ has a step discontinuity. If $\varrho(x)$ is finite everywhere, then $\varepsilon(x)\frac{d}{dx}\Phi(x)$ must be continuous everywhere. Furthermore, if $\varepsilon(x)\frac{d}{dx}\Phi(x)$ is finite everywhere, it implies that $\Phi(x)$ must be continuous everywhere.

Figure 4.1: A figure showing a charge sheet at the interface between two dielectric media. Because it is a surface charge sheet, the volume charge density $\varrho(x)$ is infinite at the sheet location $x_0$.

To see this in greater detail, we illustrate it with the following example. In the above, $\varrho(x)$ represents a charge distribution given by $\varrho(x) = \varrho_s \delta(x - x_0)$. In this case, the charge distribution is everywhere zero except at the location of the surface charge sheet, where the charge density is infinite: it is represented mathematically by a delta function² in space.

To find the boundary condition of the potential $\Phi(x)$ at $x_0$, we integrate (4.2.1) over an infinitesimal width around $x_0$, the location of the charge sheet, namely

$$ \int_{x_0 - \Delta}^{x_0 + \Delta} dx\, \frac{d}{dx}\, \varepsilon(x)\, \frac{d}{dx} \Phi(x) = - \int_{x_0 - \Delta}^{x_0 + \Delta} dx\, \varrho(x) \tag{4.2.2} $$

On the left-hand side, we get

$$ \left. \varepsilon(x)\, \frac{d}{dx} \Phi(x) \right|_{x_0 - \Delta}^{x_0 + \Delta} \cong -\varrho_s \tag{4.2.3} $$

whereas on the right-hand side, we pick up the contribution from the delta function. Evaluating the left-hand side at its limits, one arrives at

$$ \lim_{\Delta \to 0} \left[ \varepsilon(x_0^+)\, \frac{d}{dx} \Phi(x_0^+) - \varepsilon(x_0^-)\, \frac{d}{dx} \Phi(x_0^-) \right] \cong -\varrho_s \tag{4.2.4} $$

In other words, the jump discontinuity is in $\varepsilon(x)\frac{d}{dx}\Phi(x)$, and the amplitude of the jump discontinuity is proportional to the amplitude of the delta function. Since $\mathbf{E} = -\nabla \Phi$, or

$$ E_x(x) = -\frac{d}{dx} \Phi(x), \tag{4.2.5} $$

the above implies that

$$ \varepsilon(x_0^+) E_x(x_0^+) - \varepsilon(x_0^-) E_x(x_0^-) = \varrho_s \tag{4.2.6} $$

or

$$ D_x(x_0^+) - D_x(x_0^-) = \varrho_s \tag{4.2.7} $$

where

$$ D_x(x) = \varepsilon(x) E_x(x) \tag{4.2.8} $$

The lesson learned from the above is that the boundary condition is obtained by integrating the pertinent differential equation over an infinitesimally small segment. In this mathematical way of looking at the boundary condition, one can also eyeball the differential equation and ascertain the terms that will have the jump discontinuity needed to yield the delta function on the right-hand side.

² This function has been attributed to Dirac, who used it pervasively, but Cauchy was aware of such a function.

4.3 Boundary Conditions–Maxwell's Equations

As seen previously, boundary conditions for a field are embedded in the differential equation that the field satisfies. Hence, boundary conditions can be derived from the differential operator forms of Maxwell's equations. In most textbooks, boundary conditions are obtained by integrating Maxwell's equations over a small pill box [29,31,41]. To derive these boundary conditions, we will take an unconventional view: namely, to see what sources can induce jump conditions on the pertinent fields. Boundary conditions are needed at media interfaces, as well as across current or charge sheets.

4.3.1 Faraday's Law

Figure 4.2: This figure is for the derivation of Faraday's law. A local coordinate system can be used to see the boundary condition more lucidly. Here, the normal $\hat{n} = \hat{y}$ and the tangential component $\hat{t} = \hat{x}$.

For this, we start with Faraday's law, which implies that

$$ \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t} \tag{4.3.1} $$

One quick answer we could have is that if the right-hand side of the above equation is everywhere finite, then there could not be any jump discontinuity in the field $\mathbf{E}$ on the left-hand side. To see this quickly, one can project the tangential field component and normal field component onto a local coordinate system. In other words, one can think of $\hat{t}$ and $\hat{n}$ as the local $\hat{x}$ and $\hat{y}$ coordinates.
Then, writing the curl operator in these local coordinates, one gets

$$ \nabla \times \mathbf{E} = \left( \hat{x} \frac{\partial}{\partial x} + \hat{y} \frac{\partial}{\partial y} \right) \times (\hat{x} E_x + \hat{y} E_y) \tag{4.3.2} $$
$$ = \hat{z} \frac{\partial}{\partial x} E_y - \hat{z} \frac{\partial}{\partial y} E_x \tag{4.3.3} $$

In simplifying the above, we have used the distributive property of the cross product, evaluating the cross product in cartesian coordinates. The cross product produces four terms, but only two of the four terms are non-zero as shown above.

Since the right-hand side of (4.3.1) is finite, the above implies that $\frac{\partial}{\partial x} E_y$ and $\frac{\partial}{\partial y} E_x$ have to be finite. In other words, $E_x$ is continuous in the $y$ direction and $E_y$ is continuous in the $x$ direction. Since in the local coordinate system $E_x = E_t$, then $E_t$ is continuous across the boundary. The above implies that

$$ E_{1t} = E_{2t} \tag{4.3.4} $$

or

$$ \hat{n} \times \mathbf{E}_1 = \hat{n} \times \mathbf{E}_2 \tag{4.3.5} $$

where $\hat{n}$ is the unit normal at the interface, and $\hat{n} \times \mathbf{E}$ always brings out the tangential component of a vector $\mathbf{E}$ (convince yourself).

4.3.2 Gauss's Law

From Gauss's law, we have

$$ \nabla \cdot \mathbf{D} = \varrho \tag{4.3.6} $$

where $\varrho$ is the volume charge density.

Figure 4.3: A figure showing the derivation of the boundary condition for Gauss's law. Again, a local coordinate system can be introduced for convenience.

Expressing the above in local coordinates, then

$$ \nabla \cdot \mathbf{D} = \frac{\partial}{\partial x} D_x + \frac{\partial}{\partial y} D_y + \frac{\partial}{\partial z} D_z = \varrho \tag{4.3.7} $$

If there is a surface layer charge at the interface, then the volume charge density must be infinitely large, and can be expressed in terms of a delta function, or $\varrho = \varrho_s \delta(z)$ in local coordinates. By looking at the above expression, the only term that can produce a $\delta(z)$ is $\frac{\partial}{\partial z} D_z$. In other words, $D_z$ has a jump discontinuity at $z = 0$; the other terms do not. Then

$$ \frac{\partial}{\partial z} D_z = \varrho_s \delta(z) \tag{4.3.8} $$

Integrating the above from $0 - \Delta$ to $0 + \Delta$, we get

$$ D_z(z) \Big|_{0 - \Delta}^{0 + \Delta} = \varrho_s \tag{4.3.9} $$

or

$$ D_z(0^+) - D_z(0^-) = \varrho_s \tag{4.3.10} $$

where $0^+ = \lim_{\Delta \to 0} (0 + \Delta)$ and $0^- = \lim_{\Delta \to 0} (0 - \Delta)$. Since $D_z(0^+) = D_{2n}$ and $D_z(0^-) = D_{1n}$, the above becomes

$$ D_{2n} - D_{1n} = \varrho_s \tag{4.3.11} $$

or

$$ \hat{n} \cdot (\mathbf{D}_2 - \mathbf{D}_1) = \varrho_s \tag{4.3.12} $$

In other words, a charge sheet $\varrho_s$ can give rise to a jump discontinuity in the normal component of the electric flux $\mathbf{D}$. Figure 4.4 shows an intuitive sketch as to why a charge sheet gives rise to a discontinuous normal component of the electric flux $\mathbf{D}$.

Figure 4.4: A figure intuitively showing why a sheet of charge gives rise to a jump discontinuity in the normal component of the electric flux $\mathbf{D}$.
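As a quick consistency check on (4.3.12) (a small added example, not in the original notes), consider a sheet of charge $\varrho_s$ at $z = 0$ in a homogeneous medium $\varepsilon$ with no other sources. By symmetry, the flux points away from the sheet on both sides, $D_z(0^+) = -D_z(0^-) = D$, so (4.3.12) gives

$$ D - (-D) = \varrho_s \quad \Longrightarrow \quad D = \frac{\varrho_s}{2}, \qquad E = \frac{\varrho_s}{2\varepsilon}, $$

which is the familiar field of an infinite charged sheet.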
4.3.3 Ampere's Law

Ampere's law, or its generalized form, stipulates that

$$\nabla\times\mathbf{H} = \mathbf{J} + \frac{\partial\mathbf{D}}{\partial t} \tag{4.3.13}$$

Again, if the right-hand side is everywhere finite, then H is a continuous field everywhere. However, if the right-hand side has a delta function singularity, then this is not so. For instance, we can project the above equation onto local coordinates just as we did for Faraday's law.

Figure 4.5: A figure showing the derivation of the boundary condition from Ampere's law. A local coordinate system is used for simplicity. To be general, we also include the presence of a current sheet at the interface.

A current sheet, or surface current density, becomes a delta function singularity when expressed as a volume current density. Thus, rewriting (4.3.13) in a local coordinate system, assuming that J = x̂ Jsx δ(z), we get

$$\nabla\times\mathbf{H} = \hat{x}\left(\frac{\partial}{\partial y}H_z - \frac{\partial}{\partial z}H_y\right) = \hat{x}\,J_{sx}\,\delta(z) \tag{4.3.14}$$

The displacement current term has been dropped from the right-hand side since it is regular (finite) and cannot induce a jump discontinuity in the field; this is why the right-hand side of the above equation has the form shown. From the above, the only term that can produce a δ(z) singularity on the left-hand side is −∂Hy/∂z. Therefore, we conclude that

$$-\frac{\partial}{\partial z}H_y = J_{sx}\,\delta(z) \tag{4.3.15}$$

In other words, Hy has to have a jump discontinuity at the interface where the current sheet resides, or

$$H_y(z=0^+) - H_y(z=0^-) = -J_{sx} \tag{4.3.16}$$

The above implies that

$$H_{2y} - H_{1y} = -J_{sx} \tag{4.3.17}$$

But Hy is just a tangential component of the H field. If we now repeat the same exercise with J = ŷ Jsy δ(z), we obtain, at the interface,

$$H_{2x} - H_{1x} = J_{sy} \tag{4.3.18}$$

Now, (4.3.17) and (4.3.18) can be rewritten using cross products as

$$\hat{z}\times(\hat{y}H_{2y} - \hat{y}H_{1y}) = \hat{x}J_{sx} \tag{4.3.19}$$

$$\hat{z}\times(\hat{x}H_{2x} - \hat{x}H_{1x}) = \hat{y}J_{sy} \tag{4.3.20}$$

The above two equations can be combined into one to give

$$\hat{z}\times(\mathbf{H}_2-\mathbf{H}_1) = \mathbf{J}_s \tag{4.3.21}$$

Taking ẑ = n̂ in general, we have

$$\hat{n}\times(\mathbf{H}_2-\mathbf{H}_1) = \mathbf{J}_s \tag{4.3.22}$$

In other words, a current sheet Js can give rise to a jump discontinuity in the tangential components of the magnetic field, n̂ × H. This is illustrated intuitively in Figure 4.6.

Figure 4.6: A figure intuitively showing that, with the understanding of how a single line current generates a magnetic field (right), a cluster of such currents forming a current sheet will generate a jump discontinuity in the tangential component of the magnetic field H (left).

4.3.4 Gauss's Law for Magnetic Flux

Similarly, from Gauss's law for the magnetic flux,

$$\nabla\cdot\mathbf{B} = 0 \tag{4.3.23}$$

one deduces that

$$\hat{n}\cdot(\mathbf{B}_2-\mathbf{B}_1) = 0 \tag{4.3.24}$$

or that the normal magnetic flux is continuous at an interface. In other words, since magnetic charges do not exist, the normal component of the magnetic flux has to be continuous.
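For quick reference, the four jump conditions derived in this section, (4.3.5), (4.3.12), (4.3.22), and (4.3.24), can be collected in a single display (with n̂ pointing from medium 1 into medium 2):

$$\hat{n}\times(\mathbf{E}_2-\mathbf{E}_1) = 0,\qquad \hat{n}\cdot(\mathbf{D}_2-\mathbf{D}_1) = \varrho_s,\qquad \hat{n}\times(\mathbf{H}_2-\mathbf{H}_1) = \mathbf{J}_s,\qquad \hat{n}\cdot(\mathbf{B}_2-\mathbf{B}_1) = 0.$$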
Bibliography

[1] J. A. Kong, Theory of Electromagnetic Waves. New York: Wiley-Interscience, 1975.
[2] A. Einstein et al., "On the electrodynamics of moving bodies," Annalen der Physik, vol. 17, no. 891, p. 50, 1905.
[3] P. A. M. Dirac, "The quantum theory of the emission and absorption of radiation," Proceedings of the Royal Society of London, Series A, vol. 114, no. 767, pp. 243–265, 1927.
[4] R. J. Glauber, "Coherent and incoherent states of the radiation field," Physical Review, vol. 131, no. 6, p. 2766, 1963.
[5] C.-N. Yang and R. L. Mills, "Conservation of isotopic spin and isotopic gauge invariance," Physical Review, vol. 96, no. 1, p. 191, 1954.
[6] G. 't Hooft, 50 Years of Yang-Mills Theory. World Scientific, 2005.
[7] C. W. Misner, K. S. Thorne, and J. A. Wheeler, Gravitation. Princeton University Press, 2017.
[8] F. Teixeira and W. C. Chew, "Differential forms, metrics, and the reflectionless absorption of electromagnetic waves," Journal of Electromagnetic Waves and Applications, vol. 13, no. 5, pp. 665–686, 1999.
[9] W. C. Chew, E. Michielssen, J.-M. Jin, and J. Song, Fast and Efficient Algorithms in Computational Electromagnetics. Artech House, 2001.
[10] A. Volta, "On the electricity excited by the mere contact of conducting substances of different kinds. In a letter from Mr. Alexander Volta, FRS, Professor of Natural Philosophy in the University of Pavia, to the Rt. Hon. Sir Joseph Banks, Bart. KB. PRS," Philosophical Transactions of the Royal Society of London, no. 90, pp. 403–431, 1800.
[11] A.-M. Ampère, Exposé méthodique des phénomènes électro-dynamiques, et des lois de ces phénomènes. Bachelier, 1823.
[12] ——, Mémoire sur la théorie mathématique des phénomènes électro-dynamiques uniquement déduite de l'expérience: dans lequel se trouvent réunis les Mémoires que M. Ampère a communiqués à l'Académie royale des Sciences, dans les séances des 4 et 26 décembre 1820, 10 juin 1822, 22 décembre 1823, 12 septembre et 21 novembre 1825. Bachelier, 1825.
[13] B. Jones and M. Faraday, The Life and Letters of Faraday, vol. 2. Cambridge University Press, 2010.
[14] G. Kirchhoff, "Ueber die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Vertheilung galvanischer Ströme geführt wird," Annalen der Physik, vol. 148, no. 12, pp. 497–508, 1847.
[15] L. Weinberg, "Kirchhoff's 'third and fourth laws'," IRE Transactions on Circuit Theory, vol. 5, no. 1, pp. 8–30, 1958.
[16] T. Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's On-line Pioneers. Phoenix, 1998.
[17] J. C. Maxwell, "A dynamical theory of the electromagnetic field," Philosophical Transactions of the Royal Society of London, no. 155, pp. 459–512, 1865.
[18] H. Hertz, "On the finite velocity of propagation of electromagnetic actions," Electric Waves, vol. 110, 1888.
[19] M. Romer and I. B. Cohen, "Roemer and the first determination of the velocity of light (1676)," Isis, vol. 31, no. 2, pp. 327–379, 1940.
[20] A. Arons and M. Peppard, "Einstein's proposal of the photon concept: a translation of the Annalen der Physik paper of 1905," American Journal of Physics, vol. 33, no. 5, pp. 367–374, 1965.
[21] A. Pais, "Einstein and the quantum theory," Reviews of Modern Physics, vol. 51, no. 4, p. 863, 1979.
[22] M. Planck, "On the law of distribution of energy in the normal spectrum," Annalen der Physik, vol. 4, no. 553, p. 1, 1901.
[23] Z. Peng, S. De Graaf, J. Tsai, and O. Astafiev, "Tuneable on-demand single-photon source in the microwave range," Nature Communications, vol. 7, p. 12588, 2016.
[24] B. D. Gates, Q. Xu, M. Stewart, D. Ryan, C. G. Willson, and G. M. Whitesides, "New approaches to nanofabrication: molding, printing, and other techniques," Chemical Reviews, vol. 105, no. 4, pp. 1171–1196, 2005.
[25] J. S. Bell, "The debate on the significance of his contributions to the foundations of quantum mechanics," in Bell's Theorem and the Foundations of Modern Physics (A. van der Merwe, F. Selleri, and G. Tarozzi, eds.), 1992.
[26] D. J. Griffiths and D. F. Schroeter, Introduction to Quantum Mechanics. Cambridge University Press, 2018.
[27] C. Pickover, Archimedes to Hawking: Laws of Science and the Great Minds Behind Them. Oxford University Press, 2008.
[28] R. Resnick, J. Walker, and D. Halliday, Fundamentals of Physics. John Wiley, 1988.
[29] S. Ramo, J. R. Whinnery, and T. Van Duzer, Fields and Waves in Communication Electronics, 3rd ed. John Wiley & Sons, 1995.
[30] J. L. de Lagrange, "Recherches d'arithmétique," Nouveaux Mémoires de l'Académie de Berlin, 1773.
[31] J. A. Kong, Electromagnetic Wave Theory. EMW Publishing, 2008.
[32] H. M. Schey, Div, Grad, Curl, and All That: An Informal Text on Vector Calculus. W. W. Norton, New York, 2005.
[33] R. P. Feynman, R. B. Leighton, and M. Sands, The Feynman Lectures on Physics, Vol. I: Mainly Mechanics, Radiation, and Heat, new millennium ed. Basic Books, 2011.
[34] W. C. Chew, Waves and Fields in Inhomogeneous Media. IEEE Press, 1995.
[35] V. J. Katz, "The history of Stokes' theorem," Mathematics Magazine, vol. 52, no. 3, pp. 146–156, 1979.
[36] W. K. Panofsky and M. Phillips, Classical Electricity and Magnetism. Courier Corporation, 2005.
[37] T. Lancaster and S. J. Blundell, Quantum Field Theory for the Gifted Amateur. Oxford University Press, 2014.
[38] W. C. Chew, "ECE 350X lecture notes," 1990.
[39] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Scientists and Engineers I: Asymptotic Methods and Perturbation Theory. Springer Science & Business Media, 2013.
[40] J. M. Crowley, Fundamentals of Applied Electrostatics. Krieger Publishing Company, 1986.
[41] C. Balanis, Advanced Engineering Electromagnetics. Hoboken, NJ: Wiley, 2012.
[42] J. D. Jackson, Classical Electrodynamics. AAPT, 1999.
[43] R. Courant and D. Hilbert, Methods of Mathematical Physics: Partial Differential Equations. John Wiley & Sons, 2008.
[44] L. Esaki and R. Tsu, "Superlattice and negative differential conductivity in semiconductors," IBM Journal of Research and Development, vol. 14, no. 1, pp. 61–65, 1970.
[45] E. Kudeki and D. C. Munson, Analog Signals and Systems. Upper Saddle River, NJ: Pearson Prentice Hall, 2009.
[46] A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Pearson Education, 2014.
[47] R. F. Harrington, Time-Harmonic Electromagnetic Fields. McGraw-Hill, 1961.
[48] E. C. Jordan and K. G. Balmain, Electromagnetic Waves and Radiating Systems. Prentice-Hall, 1968.
[49] G. Agarwal, D. Pattanayak, and E. Wolf, "Electromagnetic fields in spatially dispersive media," Physical Review B, vol. 10, no. 4, p. 1447, 1974.
[50] S. L. Chuang, Physics of Photonic Devices, vol. 80. John Wiley & Sons, 2012.
[51] B. E. Saleh and M. C. Teich, Fundamentals of Photonics. John Wiley & Sons, 2019.
[52] M. Born and E. Wolf, Principles of Optics: Electromagnetic Theory of Propagation, Interference and Diffraction of Light. Elsevier, 2013.
[53] R. W. Boyd, Nonlinear Optics. Elsevier, 2003.
[54] Y.-R. Shen, The Principles of Nonlinear Optics. New York: Wiley-Interscience, 1984.
[55] N. Bloembergen, Nonlinear Optics. World Scientific, 1996.
[56] P. C. Krause, O. Wasynczuk, and S. D. Sudhoff, Analysis of Electric Machinery, vol. 564. New York: McGraw-Hill, 1986.
[57] A. E. Fitzgerald, C. Kingsley, S. D. Umans, and B. James, Electric Machinery, vol. 5. New York: McGraw-Hill, 2003.
[58] M. A. Brown and R. C. Semelka, MRI: Basic Principles and Applications. John Wiley & Sons, 2011.
[59] C. A. Balanis, Advanced Engineering Electromagnetics. John Wiley & Sons, 1999.
[60] Wikipedia, "Lorentz force," 2019.
[61] R. O. Dendy, Plasma Physics: An Introductory Course. Cambridge University Press, 1995.
[62] P. Sen and W. C. Chew, "The frequency dependent dielectric and conductivity response of sedimentary rocks," Journal of Microwave Power, vol. 18, no. 1, pp. 95–105, 1983.
[63] D. A. Miller, Quantum Mechanics for Scientists and Engineers. Cambridge, UK: Cambridge University Press, 2008.
[64] W. C. Chew, "Quantum mechanics made simple: Lecture notes," 2016.
[65] B. G. Streetman, S. Banerjee et al., Solid State Electronic Devices, vol. 4. Englewood Cliffs, NJ: Prentice Hall, 1995.
[66] Smithsonian, accessed: 2019-09-06.
Turkish Journal of Mathematics, Volume 46 (2022), Number 6, Article 20, pp. 2335–2352. © TÜBİTAK. doi:10.55730/1300-0098.3272

Research Article

On the geometry of lift metrics and lift connections on the tangent bundle

Esmaeil PEYGHAN (Department of Mathematics, Faculty of Science, Arak University, Arak, Iran), Davood SEIFIPOUR (Department of Mathematics, Abadan Branch, Islamic Azad University, Abadan, Iran), Adara M. BLAGA (Department of Mathematics, Faculty of Mathematics and Computer Science, West University of Timişoara, Timişoara, România)

Received: 27.02.2022 • Accepted/Published Online: 06.05.2022 • Final Version: 04.07.2022

Abstract: We study lift metrics and lift connections on the tangent bundle $TM$ of a Riemannian manifold $(M,g)$. We also investigate the statistical and Codazzi couples of $TM$ and their consequences on the geometry of $M$. Finally, we prove a result on 1-Stein and Osserman structures on $TM$, whenever $TM$ is equipped with the complete lift connection.

Key words: Codazzi manifold, lift connections, lift metrics, Osserman structure, statistical manifold

2010 AMS Mathematics Subject Classification: 53B20, 53C05, 53C15, 53C25

1. Introduction

The geometry of the tangent bundle with Riemannian lift metrics has been extensively studied in recent years (see [1, 4, 5, 8, 9, 11, 12, 16, 21], for instance). On the other hand, information geometry is an important and useful bridge between the applied and pure sciences, a combination of differential geometry and statistics. In this framework, methods of differential geometry are used and extended to probability theory. The mathematical point of view on information geometry was initiated by C. R. Rao, who showed that a statistical model can be regarded as a Riemannian manifold via the Fisher information metric. One of the main objects in this area are the statistical connections. Statistical manifolds provide a geometrical model of probability distributions. The geometry of statistical manifolds has been applied to various fields of information science, information theory, neural networks, machine learning, image processing, statistical mechanics, etc. ([2, 3, 15, 17, 19]). A statistical manifold is a differentiable manifold whose points are probability distributions ([2, 3, 13, 14]). Precisely, a statistical structure on a differentiable manifold $M$ is a pair $(g,\nabla)$ such that $g$ is a Riemannian metric and $\nabla$ is a torsion-free affine connection with the property that $\nabla g$ is totally symmetric. A Riemannian manifold $(M,g)$ together with the Levi-Civita connection $\nabla$ of $g$ is a trivial example of a statistical manifold.
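A classical nontrivial example (added here for illustration; a standard computation, not part of this paper): the family of Gaussian densities $p(x;\mu,\sigma)$ with mean $\mu$ and standard deviation $\sigma>0$ forms a 2-dimensional manifold on which the Fisher information metric is

$$g_F = \frac{d\mu^2 + 2\,d\sigma^2}{\sigma^2},$$

a metric of constant negative curvature; together with a torsion-free connection making $\nabla g_F$ totally symmetric (e.g., its Levi-Civita connection), it is a statistical manifold in the above sense.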
In other words, statistical manifolds can be regarded as generalizations of Riemannian manifolds. In this paper, we study the prolongations of statistical structures on manifolds to their tangent bundles with horizontal and complete lift connections. We consider two Riemannian lift metrics on the tangent bundle $TM$ of a Riemannian manifold $(M,g)$: one of them is the twisted Sasaki metric $G^{f,h}$ (in particular, the Sasaki metric) and the other one is the gradient Sasaki metric $g^f$.

The paper is organized as follows. After some preliminary considerations, in Section 3, we study the geometry of $TM$ equipped with the twisted Sasaki metric $G^{f,h}$ and the horizontal (respectively, complete) lift connection $^H\nabla$ (respectively, $^C\nabla$), and we investigate some properties of the couples $(g^s,\nabla^{f,h})$ and $(g^{f_1},\nabla^{f,h})$ on $TM$, where $\nabla^{f,h}$ is the Levi-Civita connection of the twisted Sasaki metric $G^{f,h}$, $g^s$ is the Sasaki metric, and $g^{f_1}$ is the gradient Sasaki metric. We also obtain some results on the lifts to the tangent bundle of Killing vector fields and infinitesimal affine transformations. In Section 4, we study the geometry of $TM$ equipped with the gradient Sasaki metric $g^f$ and the lift connection $^C\nabla$, and we investigate some properties of the couples $(g^s,\nabla^f)$ and $(G^{f,h},\nabla^{f_1})$ on $TM$, where $\nabla^f$ is the Levi-Civita connection of the gradient Sasaki metric $g^f$. We also study necessary conditions for $(TM,g^s,\nabla^f)$ and $(TM,G^{f,h},\nabla^{f_1})$ to be Codazzi and statistical manifolds. Finally, in Section 5, we prove a theorem on the spectral geometry of $TM$ and deduce that $TM$ is globally Osserman whenever it is equipped with the complete lift connection $^C\nabla$ and $\nabla$ is a flat connection.

2. Preliminaries

Let $\nabla$ be an affine connection on a differentiable manifold $M$, let $(x^i)$ be local coordinates on $M$, and let $(x^i,y^i)$ be the induced coordinates on $TM$. Then $\{\frac{\partial}{\partial x^i}|_{(x,y)},\frac{\partial}{\partial y^i}|_{(x,y)}\}$ is the natural basis of $T_{(x,y)}TM$. It is known that, with respect to an affine connection, $T_{(x,y)}TM$ can be decomposed as $H_{(x,y)}TM\oplus V_{(x,y)}TM$, where $H_{(x,y)}TM$ is spanned by

$$\frac{\delta}{\delta x^i}\Big|_{(x,y)} := \Big(\frac{\partial}{\partial x^i}\Big)^H = \frac{\partial}{\partial x^i}\Big|_{(x,y)} - y^k\Gamma^j_{ki}(x)\frac{\partial}{\partial y^j}\Big|_{(x,y)}$$

and $V_{(x,y)}TM$ is spanned by $\{\frac{\partial}{\partial y^i}|_{(x,y)} := (\frac{\partial}{\partial x^i})^V\}$, with $\Gamma^j_{ki}$ the connection coefficients of $\nabla$. Denote by $\pi: TM\to M$, $\pi(x,y):=x$, the natural projection, and by $\chi(M)$ the set of all vector fields on $M$.

The various lifts of a vector field $X = X^i\frac{\partial}{\partial x^i}$ on $M$ (complete, horizontal, and vertical lift, respectively) are defined as follows (using the Einstein summation convention):

$$X^C = X^i\frac{\partial}{\partial x^i} + y^a\frac{\partial X^i}{\partial x^a}\frac{\partial}{\partial y^i},\qquad X^H = X^i\frac{\partial}{\partial x^i} - y^a\Gamma^k_{ai}X^i\frac{\partial}{\partial y^k},\qquad X^V = X^i\frac{\partial}{\partial y^i}.$$

The Lie brackets of the horizontal and vertical lifts of vector fields are

$$[X^H,Y^H] = [X,Y]^H - (R(X,Y)y)^V,\qquad [X^H,Y^V] = (\nabla_XY)^V - (T(X,Y))^V,\qquad [X^V,Y^V] = 0, \tag{2.1}$$

where $T$ is the torsion tensor field and $R$ is the curvature tensor field of $\nabla$. The horizontal lift connection $^H\nabla$ and the complete lift connection $^C\nabla$ of the affine connection $\nabla$ are respectively defined by:

$$^H\nabla_{X^H}Y^H = (\nabla_XY)^H,\qquad {}^H\nabla_{X^H}Y^V = (\nabla_XY)^V,\qquad {}^H\nabla_{X^V}Y^H = {}^H\nabla_{X^V}Y^V = 0,$$

$$^C\nabla_{X^H}Y^H = (\nabla_XY)^H + (R(y,X)Y)^V,\qquad {}^C\nabla_{X^V}Y^H = {}^C\nabla_{X^V}Y^V = 0, \tag{2.2}$$

$$^C\nabla_{X^H}Y^V = (\nabla_XY)^V,\qquad {}^C\nabla_{X^C}Y^C = (\nabla_XY)^C,\qquad {}^C\nabla_{X^C}Y^V = {}^C\nabla_{X^V}Y^C = (\nabla_XY)^V.$$
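The adapted frame is easy to experiment with symbolically. The following sketch (ours, not from the paper) uses SymPy to compute the Christoffel symbols of a metric on a 2-dimensional base and assembles the components of the horizontal lift $X^H = X^i\partial_i - y^a\Gamma^k_{ai}X^i\bar\partial_k$ at a point $(x,y)\in TM$; the metric (the round 2-sphere) and the vector field are arbitrary choices.

```python
import sympy as sp

# Base coordinates and fibre coordinates on TM
th, ph = sp.symbols('theta phi', positive=True)
xs = [th, ph]
y1, y2 = sp.symbols('y1 y2')          # fibre coordinates y^a
ys = [y1, y2]
n = 2

# Round-sphere metric as an arbitrary test case
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection of g
Gamma = [[[sum(ginv[k, l] * (sp.diff(g[l, i], xs[j]) + sp.diff(g[l, j], xs[i])
           - sp.diff(g[i, j], xs[l])) for l in range(n)) / 2
           for j in range(n)] for i in range(n)] for k in range(n)]

# A sample vector field X = X^i d/dx^i on the base
X = [sp.Integer(1), th]               # arbitrary components

# Horizontal lift: X^H = X^i d/dx^i - y^a Gamma^k_{ai} X^i d/dy^k
XH_base = X
XH_fibre = [sp.simplify(-sum(ys[a] * Gamma[k][a][i] * X[i]
            for a in range(n) for i in range(n))) for k in range(n)]
print('horizontal components:', XH_base)
print('vertical components:  ', XH_fibre)
```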
It is known that $\nabla$ is flat and torsion-free if and only if $^H\nabla$ (respectively, $^C\nabla$) is torsion-free. For simplicity, in the rest of the paper, we shall write $\partial_i$, $\delta_i$ and $\bar\partial_i$ instead of $\frac{\partial}{\partial x^i}$, $\frac{\delta}{\delta x^i}$ and $\frac{\partial}{\partial y^i}$.

Let now $(M,g)$ be a Riemannian manifold. Similar to the lifts of vector fields and affine connections, we can define lifts of Riemannian metrics. We construct the twisted Sasaki metric $G^{f,h}$ on $TM$ as follows:

$$G^{f,h}_{(x,y)}(X^H,Y^H) = f(x)g_x(X,Y),\qquad G^{f,h}_{(x,y)}(X^V,Y^H) = 0,\qquad G^{f,h}_{(x,y)}(X^V,Y^V) = h(x)g_x(X,Y), \tag{2.3}$$

where $f,h$ are strictly positive smooth functions on $M$. If $f=h=1$, then $G^{f,h}$ reduces to the Sasaki metric $g^s$.

Lemma 2.1 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be the Levi-Civita connection of $g$, and let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric. Then the Levi-Civita connection $\nabla^{f,h}$ of $G^{f,h}$ is given by

$$\nabla^{f,h}_{X^V}Y^V = -\Big(\frac{1}{2f}g(X,Y)\operatorname{grad}h\Big)^H,\qquad \nabla^{f,h}_{X^V}Y^H = \Big(\frac{h}{2f}R(y,X)Y\Big)^H + \Big(\frac{Y(h)}{2h}X\Big)^V,$$

$$\nabla^{f,h}_{X^H}Y^V = \Big(\frac{h}{2f}R(y,Y)X\Big)^H + \Big(\frac{X(h)}{2h}Y + \nabla_XY\Big)^V,\qquad \nabla^{f,h}_{X^H}Y^H = \big(\nabla_XY + A_f(X,Y)\big)^H - \frac12\big(R(X,Y)y\big)^V,$$

where $A_f(X,Y) = \frac{1}{2f}\big(X(f)Y + Y(f)X - g(X,Y)\operatorname{grad}f\big)$, $X,Y\in\chi(M)$, and $(x,y)\in TM$. In particular, the Levi-Civita connection $\nabla^s$ of the Sasaki metric $g^s$ is given by

$$\nabla^s_{X^H}Y^H = (\nabla_XY)^H - \frac12(R(X,Y)y)^V,\qquad \nabla^s_{X^V}Y^H = \frac12(R(y,X)Y)^H,$$

$$\nabla^s_{X^H}Y^V = (\nabla_XY)^V + \frac12(R(y,Y)X)^H,\qquad \nabla^s_{X^V}Y^V = 0.$$

We construct also the gradient Sasaki metric $g^f$ on $TM$ as follows:

$$g^f_{(x,y)}(X^H,Y^H) = g_x(X,Y),\qquad g^f_{(x,y)}(X^H,Y^V) = 0,\qquad g^f_{(x,y)}(X^V,Y^V) = g_x(X,Y) + X_x(f)Y_x(f), \tag{2.4}$$

where $f$ is a strictly positive smooth function on $M$. If $f$ is a constant, then $g^f$ reduces to the Sasaki metric $g^s$.

Lemma 2.2 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be the Levi-Civita connection of $g$, and let $(TM,g^f)$ be its tangent bundle equipped with the gradient Sasaki metric. Then the Levi-Civita connection $\nabla^f$ of $g^f$ is given by

$$\nabla^f_{X^V}Y^V = -\frac12X(f)(\nabla_Y\operatorname{grad}f)^H - \frac12Y(f)(\nabla_X\operatorname{grad}f)^H,$$

$$\nabla^f_{X^V}Y^H = \frac12(R(y,X)Y)^H + \frac12X(f)(R(y,\operatorname{grad}f)Y)^H + \frac12X(f)(\nabla_Y\operatorname{grad}f)^V + \frac{1}{2a}\Big\{g(X,\nabla_Y\operatorname{grad}f) - \frac12Y(a)X(f)\Big\}(\operatorname{grad}f)^V,$$

$$\nabla^f_{X^H}Y^V = \frac12(R(y,Y)X)^H + \frac12Y(f)(R(y,\operatorname{grad}f)X)^H + \frac12Y(f)(\nabla_X\operatorname{grad}f)^V + (\nabla_XY)^V + \frac{1}{2a}\Big\{g(Y,\nabla_X\operatorname{grad}f) - \frac12X(a)Y(f)\Big\}(\operatorname{grad}f)^V,$$

$$\nabla^f_{X^H}Y^H = (\nabla_XY)^H - \frac12(R(X,Y)y)^V,$$

where $a = 1 + \|\operatorname{grad}f\|^2$, $X,Y\in\chi(M)$ and $(x,y)\in TM$.

Definition 2.3 Let $(M,g)$ be a Riemannian manifold and let $\nabla$ be an affine connection on $M$. The pair $(g,\nabla)$ is said to be a Codazzi couple on $M$ if the cubic tensor field $C := \nabla g$ is totally symmetric, namely, the Codazzi equations hold:

$$(\nabla_Xg)(Y,Z) = (\nabla_Yg)(Z,X) = (\nabla_Zg)(X,Y),$$

for every $X,Y,Z\in\chi(M)$. The triplet $(M,g,\nabla)$ is called a Codazzi manifold and $\nabla$ is called a Codazzi connection. Furthermore, if $\nabla$ is torsion-free, then $(M,g,\nabla)$ is a statistical manifold, $(g,\nabla)$ is a statistical couple, and $\nabla$ is a statistical connection.

Throughout the rest of the paper, we shall use the two notations $g^f$ and $g^{f_1}$ for the gradient Sasaki metric defined by $f$ and $f_1$, respectively, because, whenever both a gradient Sasaki metric $g^{f_1}$ and a twisted Sasaki metric $G^{f,h}$ (defined by $f$ and $h$) appear in a theorem, we wish to emphasize that the two functions $f$ and $f_1$ may be different.
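Lemma 2.1 can be sanity-checked in the simplest setting (our sketch, not from the paper): for a one-dimensional base $M=\mathbb R$ with $g = dx^2$, we have $R=0$ and $\operatorname{grad}h = h'\partial_x$, so the twisted Sasaki metric is simply $G^{f,h} = f(x)\,dx^2 + h(x)\,dy^2$ on $TM\simeq\mathbb R^2$, and Lemma 2.1 predicts the Christoffel symbols $\Gamma^x_{xx}=f'/(2f)$ (the $A_f$ term), $\Gamma^x_{yy}=-h'/(2f)$, and $\Gamma^y_{xy}=h'/(2h)$.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f', positive=True)(x)
h = sp.Function('h', positive=True)(x)

# Twisted Sasaki metric on TM ~ R^2 over the flat base (R, dx^2)
G = sp.Matrix([[f, 0], [0, h]])
Ginv = G.inv()
coords = [x, y]

def christoffel(k, i, j):
    return sp.simplify(sum(Ginv[k, l] * (sp.diff(G[l, i], coords[j])
           + sp.diff(G[l, j], coords[i]) - sp.diff(G[i, j], coords[l]))
           for l in range(2)) / 2)

print(christoffel(0, 0, 0))   # f'/(2f)  : the A_f term of Lemma 2.1
print(christoffel(0, 1, 1))   # -h'/(2f) : the nabla_{X^V} Y^V term
print(christoffel(1, 0, 1))   # h'/(2h)  : the (X(h)/(2h)) Y term
```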
3. Geometry of the tangent bundle with the twisted Sasaki metric

In this section, we study the geometry of $TM$ equipped with the twisted Sasaki metric (in particular, the Sasaki metric).

Definition 3.1 Let $(M,g)$ be a Riemannian manifold and let $\nabla$ be an affine connection on $M$.
(1) A vector field $X$ is said to be conformal (respectively, Killing) with respect to $g$ if $\mathcal L_Xg = 2\rho g$ (respectively, $\mathcal L_Xg = 0$), where $\rho$ is a function on $M$ and the Lie derivative of $g$ in the direction of $X$ is given by $(\mathcal L_Xg)(Y,Z) := Xg(Y,Z) - g(\mathcal L_XY,Z) - g(Y,\mathcal L_XZ)$.
(2) A vector field $X$ is said to be an infinitesimal affine transformation on $M$ with respect to $\nabla$ if $\mathcal L_X\nabla = 0$, where the Lie derivative of $\nabla$ in the direction of $X$ is given by $(\mathcal L_X\nabla)(Y,Z) := \mathcal L_X(\nabla_YZ) - \nabla_Y(\mathcal L_XZ) - \nabla_{[X,Y]}Z$.

Now we study conditions under which $X^V$ and $X^H$ are Killing vector fields for $G^{f,h}$. By a direct computation, using (2.3) and (2.1), we get

$$(\mathcal L_{X^V}G^{f,h})(Y^V,Z^V) = 0,\qquad (\mathcal L_{X^V}G^{f,h})(Y^H,Z^V) = hg(\nabla_YX - T(Y,X),Z),\qquad (\mathcal L_{X^V}G^{f,h})(Y^H,Z^H) = 0.$$

If $\nabla$ is torsion-free, then $X^V$ is a Killing vector field for $G^{f,h}$ if and only if $\nabla_YX = 0$ for all $Y\in\chi(M)$. Moreover, using (2.3) and (2.1), a straightforward computation gives

$$(\mathcal L_{X^H}G^{f,h})(Y^V,Z^V) = X(h)g(Y,Z) + h\big((\nabla_Xg)(Y,Z) + g(T(X,Y),Z) + g(Y,T(X,Z))\big),$$

$$(\mathcal L_{X^H}G^{f,h})(Y^H,Z^V) = hg(R(X,Y)y,Z),\qquad (\mathcal L_{X^H}G^{f,h})(Y^H,Z^H) = X(f)g(Y,Z) + f(\mathcal L_Xg)(Y,Z).$$

If $\nabla$ is torsion-free, then $X^H$ is a Killing vector field for $G^{f,h}$ if and only if

$$(\nabla_Xg)(Y,Z) = -\frac{X(h)}{h}g(Y,Z),\qquad R(X,Y)Z = 0,\qquad (\mathcal L_Xg)(Y,Z) = -\frac{X(f)}{f}g(Y,Z),\qquad \forall\,Y,Z\in\chi(M).$$

Thus, we get the following.

Proposition 3.2 Let $(M,g)$ be a Riemannian manifold and let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric. Then the following assertions hold:
(1) if $\nabla$ is a torsion-free affine connection on $M$, then $X^V$ is a Killing vector field for $G^{f,h}$ if and only if $X$ is a parallel vector field;
(2) if $\nabla$ is a torsion-free affine connection on $M$, then $X^H$ is a Killing vector field for $G^{f,h}$ if and only if $X$ is a conformal vector field on $(M,g)$ and $(\nabla_Xg)(Y,Z) = -\frac{X(h)}{h}g(Y,Z)$, $R(X,Y)Z = 0$, for all $Y,Z\in\chi(M)$;
(3) if $\nabla$ is a torsion-free affine connection on $M$ and $h$ is constant, then $X^H$ is a Killing vector field for $G^{f,h}$ if and only if $X$ is a conformal vector field on $(M,g)$, $\nabla$ is the Levi-Civita connection of $(M,g)$ and $R(X,Y)Z = 0$, for all $Y,Z\in\chi(M)$;
(4) if $\nabla$ is a torsion-free affine connection on $M$ and $f$ and $h$ are constant functions, then $X^H$ is a Killing vector field for $G^{f,h}$ if and only if $X$ is a Killing vector field on $(M,g)$, $\nabla$ is the Levi-Civita connection of $(M,g)$ and $R(X,Y)Z = 0$, for all $Y,Z\in\chi(M)$;
(5) if $\nabla$ is the flat Levi-Civita connection on $(M,g)$ and $f$ and $h$ are constant functions, then $X^H$ is a Killing vector field for $G^{f,h}$ if and only if $X$ is a Killing vector field on $(M,g)$.

Here, we compute the components of $^H\nabla G^{f,h}$ in order to study $(TM,G^{f,h},{}^H\nabla)$. A direct computation gives

$$(^H\nabla_{\delta_i}G^{f,h})(\delta_j,\delta_k) = \delta_i(fg_{jk}) - fg(\nabla_{\partial_i}\partial_j,\partial_k) - fg(\partial_j,\nabla_{\partial_i}\partial_k) = \partial_i(f)g_{jk} + f(\nabla_{\partial_i}g)(\partial_j,\partial_k). \tag{3.1}$$

By a similar computation, we get

$$(^H\nabla_{\delta_j}G^{f,h})(\delta_k,\delta_i) = \partial_j(f)g_{ki} + f(\nabla_{\partial_j}g)(\partial_k,\partial_i),\qquad (^H\nabla_{\delta_k}G^{f,h})(\delta_i,\delta_j) = \partial_k(f)g_{ij} + f(\nabla_{\partial_k}g)(\partial_i,\partial_j). \tag{3.2}$$

We have also

$$(^H\nabla_{\bar\partial_i}G^{f,h})(\bar\partial_j,\bar\partial_k) = 0,\qquad (^H\nabla_{\delta_i}G^{f,h})(\delta_j,\bar\partial_k) = (^H\nabla_{\delta_j}G^{f,h})(\bar\partial_k,\delta_i) = (^H\nabla_{\bar\partial_k}G^{f,h})(\delta_i,\delta_j) = 0,$$

$$(^H\nabla_{\bar\partial_i}G^{f,h})(\bar\partial_j,\delta_k) = (^H\nabla_{\bar\partial_j}G^{f,h})(\delta_k,\bar\partial_i) = 0,\qquad (^H\nabla_{\delta_k}G^{f,h})(\bar\partial_i,\bar\partial_j) = \partial_k(h)g_{ij} + h(\nabla_{\partial_k}g)(\partial_i,\partial_j). \tag{3.3}$$
If $(TM,G^{f,h},{}^H\nabla)$ is a Codazzi manifold, then the last equation of (3.3) implies

$$(\nabla_{\partial_k}g)(\partial_i,\partial_j) = -\frac1h\partial_k(h)g_{ij}.$$

Substituting this equation into (3.1) and using (3.2), we get $\big(\partial_k(f) - \frac fh\partial_k(h)\big)g_{ij} = 0$, and consequently $h\,\partial_k(f) = f\,\partial_k(h)$. Thus, we get the following.

Theorem 3.3 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be an affine connection on $M$, and let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric. Then the following statements hold:
(1) if $(TM,G^{f,h},{}^H\nabla)$ is a Codazzi manifold, then $(\nabla_Zg)(X,Y) = -\frac1fZ(f)g(X,Y)$ and $f\operatorname{grad}h = h\operatorname{grad}f$, for all $X,Y,Z\in\chi(M)$; moreover, $^H\nabla$ is compatible with $G^{f,h}$;
(2) if $(TM,G^{f,h},{}^H\nabla)$ is a statistical manifold, then $\nabla$ is flat, $(\nabla_Zg)(X,Y) = -\frac1fZ(f)g(X,Y)$ and $f\operatorname{grad}h = h\operatorname{grad}f$, for all $X,Y,Z\in\chi(M)$; moreover, $^H\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$;
(3) if $(TM,G^{f,h},{}^H\nabla)$ is a statistical manifold and $h$ is constant, then $f$ is constant, $\nabla$ is the Levi-Civita connection of $g$ and $^H\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$;
(4) if $\nabla$ is the Levi-Civita connection of $g$ and $f,h$ are constant functions, then $^H\nabla$ is compatible with $G^{f,h}$; in particular, if $\nabla$ is flat, then $^H\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$.

Now we focus on $(TM,G^{f,h},{}^C\nabla)$. A direct computation gives

$$(^C\nabla_{\delta_i}G^{f,h})(\delta_j,\delta_k) = \delta_i(fg_{jk}) - G^{f,h}\big((\nabla_{\partial_i}\partial_j)^H + (R(y,\partial_i)\partial_j)^V,(\partial_k)^H\big) - G^{f,h}\big((\partial_j)^H,(\nabla_{\partial_i}\partial_k)^H + (R(y,\partial_i)\partial_k)^V\big) = \partial_i(f)g_{jk} + f(\nabla_{\partial_i}g)(\partial_j,\partial_k).$$

By a similar computation, we get

$$(^C\nabla_{\delta_j}G^{f,h})(\delta_k,\delta_i) = \partial_j(f)g_{ki} + f(\nabla_{\partial_j}g)(\partial_k,\partial_i),\qquad (^C\nabla_{\delta_k}G^{f,h})(\delta_i,\delta_j) = \partial_k(f)g_{ij} + f(\nabla_{\partial_k}g)(\partial_i,\partial_j).$$

We have also

$$(^C\nabla_{\bar\partial_i}G^{f,h})(\bar\partial_j,\bar\partial_k) = 0,\qquad (^C\nabla_{\delta_i}G^{f,h})(\delta_j,\bar\partial_k) = -h\,y^sR^t{}_{sij}g_{tk},\qquad (^C\nabla_{\delta_j}G^{f,h})(\bar\partial_k,\delta_i) = -h\,y^sR^t{}_{sji}g_{kt},\qquad (^C\nabla_{\bar\partial_k}G^{f,h})(\delta_i,\delta_j) = 0, \tag{3.4}$$

$$(^C\nabla_{\bar\partial_i}G^{f,h})(\bar\partial_j,\delta_k) = (^C\nabla_{\bar\partial_j}G^{f,h})(\delta_k,\bar\partial_i) = 0,\qquad (^C\nabla_{\delta_k}G^{f,h})(\bar\partial_i,\bar\partial_j) = \partial_k(h)g_{ij} + h(\nabla_{\partial_k}g)(\partial_i,\partial_j).$$

If $(TM,G^{f,h},{}^C\nabla)$ is a Codazzi manifold, from (3.4) we get $y^sR^t{}_{sji} = 0$. Differentiating with respect to $y^h$, we obtain $R^t{}_{hji} = 0$, so $\nabla$ is a flat connection. Thus, we get the following.

Theorem 3.4 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be a torsion-free affine connection on $M$, and let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric. Then the following statements hold:
(1) if $(TM,G^{f,h},{}^C\nabla)$ is a Codazzi (respectively, statistical) manifold, then $\nabla$ is flat, $(\nabla_Zg)(X,Y) = -\frac1fZ(f)g(X,Y)$ and $f\operatorname{grad}h = h\operatorname{grad}f$, for all $X,Y,Z\in\chi(M)$; moreover, $^C\nabla$ is compatible with $G^{f,h}$ (respectively, $^C\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$);
(2) if $(TM,G^{f,h},{}^C\nabla)$ is a statistical manifold and $h$ is constant, then $f$ is constant, $\nabla$ is the Levi-Civita connection of $g$ and $^C\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$;
(3) if $\nabla$ is the Levi-Civita connection of $g$ and $f,h$ are constant functions, then $^C\nabla$ is compatible with $G^{f,h}$; in particular, if $\nabla$ is flat, then $^C\nabla$ reduces to the Levi-Civita connection of $G^{f,h}$.
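A minimal illustration of item (3) of Theorem 3.4 (our example, under the stated hypotheses): take $M=\mathbb R^n$ with the Euclidean metric, its flat Levi-Civita connection $\nabla$, and constant $f,h>0$. Then $R=0$, so (2.2) reduces to $^C\nabla_{X^H}Y^H = (\nabla_XY)^H$, and in the induced coordinates $(x^i,y^i)$,

$$G^{f,h} = f\,\delta_{ij}\,dx^i\,dx^j + h\,\delta_{ij}\,dy^i\,dy^j,$$

a flat metric on $T\mathbb R^n\simeq\mathbb R^{2n}$ whose Levi-Civita connection has vanishing Christoffel symbols; it therefore coincides with $^C\nabla$, as the theorem asserts.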
Here we provide conditions for the vertical vector field $X^V$ to be an infinitesimal affine transformation on $TM$ with respect to $\nabla^{f,h}$, whenever $(M,g)$ is a flat space and $\nabla$ is the Levi-Civita connection of $g$. Using Lemma 2.1 and (2.1), we get

$$(\mathcal L_{X^V}\nabla^{f,h})(Y^V,Z^V) = \Big(\frac{g(Y,Z)}{2f}\nabla_{\operatorname{grad}h}X\Big)^V, \tag{3.5}$$

$$(\mathcal L_{X^V}\nabla^{f,h})(Y^H,Z^V) = -\Big(\frac{1}{2f}g(\nabla_YX,Z)\circ\pi\Big)(\operatorname{grad}h)^H, \tag{3.6}$$

$$(\mathcal L_{X^V}\nabla^{f,h})(Y^H,Z^H) = -\Big(\nabla_{A_f(Y,Z)+\nabla_YZ}X + \frac{Y(h)}{2h}\nabla_ZX + \nabla_Y\nabla_ZX + \frac{Z(h)}{2h}\nabla_YX\Big)^V. \tag{3.7}$$

Let $X^V$ be an infinitesimal affine transformation on $TM$ with respect to $\nabla^{f,h}$. Then from (3.6) we get $\nabla_YX = 0$ for all $Y\in\chi(M)$, or $\operatorname{grad}h = 0$. In both cases, (3.5) vanishes. Also, in the first case, (3.7) vanishes, and in the second case, (3.7) reduces to $\nabla_{A_f(Y,Z)+\nabla_YZ}X + \nabla_Y\nabla_ZX = 0$. Thus, we have the following.

Proposition 3.5 Let $(M,g)$ be a flat Riemannian manifold, let $\nabla$ be the Levi-Civita connection of $g$, let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric, and let $\nabla^{f,h}$ be the Levi-Civita connection of $G^{f,h}$. Then $X^V$ is an infinitesimal affine transformation on $TM$ with respect to $\nabla^{f,h}$ if and only if $X$ is parallel, or $h$ is constant and $\nabla_{A_f(Y,Z)+\nabla_YZ}X + \nabla_Y\nabla_ZX = 0$, for all $Y,Z\in\chi(M)$.

Here we provide necessary and sufficient conditions for the horizontal vector field $X^H$ to be an infinitesimal affine transformation on $TM$ with respect to $\nabla^{f,h}$. Using Lemma 2.1, (2.1), and the identities $Y^H(F^V) = (Y(F))^V$ and $F^VY^V = (FY)^V$ for all $F\in C^\infty(M)$, we get

$$(\mathcal L_{X^H}\nabla^{f,h})(Y^V,Z^V) = \Big(\frac{X(f)}{2f^2}g(Y,Z)\circ\pi\Big)(\operatorname{grad}h)^H - \Big(\frac1fg(Y,Z)\circ\pi\Big)[X,\operatorname{grad}h]^H = \frac1f\Big(\frac{X(f)}{2f}-1\Big)\big(g(Y,Z)\circ\pi\big)\big(\operatorname{grad}h - [X,\operatorname{grad}h]\big)^H,$$

$$(\mathcal L_{X^H}\nabla^{f,h})(Y^H,Z^V) = (R(X,Y)Z)^V + X^H\Big(\frac{Y(h)}{2h}\Big)Z^V - \frac{[X,Y](h)}{2h}Z^V = \Big(R(X,Y)Z + \frac{1}{2h}\Big(-\frac{X(h)Y(h)}{h} + Y(X(h))\Big)Z\Big)^V,$$

$$(\mathcal L_{X^H}\nabla^{f,h})(Y^H,Z^H) = \big((\mathcal L_X\nabla)(Y,Z) + [X,A_f(Y,Z)] + A_f(Y,[Z,X]) - A_f([X,Y],Z)\big)^H.$$

Proposition 3.6 Let $(M,g)$ be a flat Riemannian manifold, let $\nabla$ be the Levi-Civita connection of $g$, let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric, and let $\nabla^{f,h}$ be the Levi-Civita connection of $G^{f,h}$. Then $X^H$ is an infinitesimal affine transformation with respect to $\nabla^{f,h}$ if and only if the following equations hold:

$$\big(X(f)-2f\big)\big(\operatorname{grad}h - [X,\operatorname{grad}h]\big) = 0,$$

$$R(X,Y)Z = \frac{1}{2h}\Big(\frac{X(h)Y(h)}{h} - Y(X(h))\Big)Z,$$

$$(\mathcal L_X\nabla)(Y,Z) + [X,A_f(Y,Z)] + A_f(Y,[Z,X]) - A_f([X,Y],Z) = 0,$$

for all $Y,Z\in\chi(M)$.

Corollary 3.7 Let $(M,g)$ be a flat Riemannian manifold, let $\nabla$ be the Levi-Civita connection of $g$, let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric, and let $\nabla^{f,h}$ be the Levi-Civita connection of $G^{f,h}$.
(1) If $f$ is constant, then $X^H$ is an infinitesimal affine transformation with respect to $\nabla^{f,h}$ if and only if $X$ is an infinitesimal affine transformation with respect to $\nabla$ and

$$[X,\operatorname{grad}h] = \operatorname{grad}h,\qquad R(X,Y)Z = \frac{1}{2h}\Big(\frac{X(h)Y(h)}{h} - Y(X(h))\Big)Z,$$

for all $Y,Z\in\chi(M)$.
(2) If $h$ is constant, then $X^H$ is an infinitesimal affine transformation with respect to $\nabla^{f,h}$ if and only if $(\mathcal L_X\nabla)(Y,Z) + [X,A_f(Y,Z)] + A_f(Y,[Z,X]) - A_f([X,Y],Z) = 0$ and $R(X,Y)Z = 0$, for all $Y,Z\in\chi(M)$.
(3) If $f$ and $h$ are constant, then $X^H$ is an infinitesimal affine transformation with respect to $\nabla^{f,h}$ if and only if $X$ is an infinitesimal affine transformation with respect to $\nabla$ and $R(X,Y)Z = 0$, for all $Y,Z\in\chi(M)$.
Here we provide necessary and sufficient conditions for the horizontal vector field $X^H$ to be an infinitesimal affine transformation on $TM$ with respect to $^H\nabla$, where $\nabla$ is an affine connection on $M$. By a direct computation, using (2.1) and (2.2), we get

$$(\mathcal L_{X^H}{}^H\nabla)(Y^V,Z^V) = 0,\qquad (\mathcal L_{X^H}{}^H\nabla)(Y^H,Z^V) = \big(R(X,Y)Z - T(X,\nabla_YZ) + \nabla_YT(X,Z)\big)^V,$$

$$(\mathcal L_{X^H}{}^H\nabla)(Y^H,Z^H) = \big((\mathcal L_X\nabla)(Y,Z)\big)^H - (R(X,\nabla_YZ)y)^V + (\nabla_YR(X,Z)y)^V.$$

Thus, we get the following.

Proposition 3.8 Let $\nabla$ be a flat torsion-free connection on $(M,g)$. Then $X^H$ is an infinitesimal affine transformation on $TM$ with respect to $^H\nabla$ if and only if $X$ is an infinitesimal affine transformation on $M$ with respect to $\nabla$.

Here we consider $(TM,g^s,\nabla^{f,h})$ as a statistical manifold. Direct computations give $(\nabla^{f,h}_{\bar\partial_i}g^s)(\bar\partial_j,\bar\partial_k) = 0$ and

$$(\nabla^{f,h}_{\delta_i}g^s)(\delta_j,\delta_k) = -g(A_f(\partial_i,\partial_j),\partial_k) - g(\partial_j,A_f(\partial_i,\partial_k)) = -\frac{\partial_i(f)}{f}g_{jk}, \tag{3.8}$$

$$(\nabla^{f,h}_{\delta_i}g^s)(\delta_j,\bar\partial_k) = \frac12y^rR_{ijrk}\Big(1-\frac hf\Big),\qquad (\nabla^{f,h}_{\bar\partial_k}g^s)(\delta_i,\delta_j) = -\frac hf\,y^rR_{ijrk}, \tag{3.9}$$

$$(\nabla^{f,h}_{\bar\partial_i}g^s)(\bar\partial_j,\delta_k) = \frac{\partial_k(h)}{2f}g_{ij} - \frac{\partial_k(h)}{2h}g_{ij},\qquad (\nabla^{f,h}_{\delta_k}g^s)(\bar\partial_i,\bar\partial_j) = -\frac{\partial_k(h)}{h}g_{ij}. \tag{3.10}$$

If $(TM,g^s,\nabla^{f,h})$ is a statistical manifold, from (3.8) and (3.10) we deduce that $f$ and $h$ are constant. From (3.9), we also get $\frac12y^rR_{ijrk}\big(1+\frac hf\big) = 0$. Differentiating with respect to $y^t$, we obtain $-\frac12R_{ijkt}\big(1+\frac hf\big) = 0$. Accordingly, we get the following.

Theorem 3.9 Let $(M,g)$ be a Riemannian manifold, let $(TM,g^s)$ be its tangent bundle equipped with the Sasaki metric, and let $\nabla^{f,h}$ be the Levi-Civita connection of the twisted Sasaki metric $G^{f,h}$. If $(TM,g^s,\nabla^{f,h})$ is a statistical manifold, then $f$ and $h$ are constant. Moreover, $\nabla$ is flat or $f = -h$.

Now we study necessary conditions for $(TM,g^{f_1},\nabla^{f,h})$ to be a statistical manifold. Firstly, we recall the definition of the Hessian. The Hessian of a function $f\in C^\infty(M)$, taken with respect to an affine connection $\nabla$, is the covariant derivative of the 1-form $df$, i.e.,

$$\operatorname{Hess}f(X,Y) := (\nabla df)(X,Y) = XY(f) - (\nabla_XY)(f),\qquad \forall X,Y\in\chi(M).$$

It is worth noting that $\operatorname{Hess}f$ is symmetric if and only if $\nabla$ is torsion-free. Direct computations give

$$(\nabla^{f,h}_{\delta_i}g^{f_1})(\delta_j,\delta_k) = -\frac{\partial_i(f)}{f}g_{jk},\qquad (\nabla^{f,h}_{\bar\partial_i}g^{f_1})(\bar\partial_j,\bar\partial_k) = 0,$$

$$(\nabla^{f,h}_{\bar\partial_i}g^{f_1})(\bar\partial_j,\delta_k) = \frac{\partial_k(h)}{2f}g_{ij} - \frac{\partial_k(h)}{2h}\big(g_{ij} + \partial_j(f_1)\partial_i(f_1)\big), \tag{3.11}$$

$$(\nabla^{f,h}_{\delta_k}g^{f_1})(\bar\partial_i,\bar\partial_j) = -\frac{\partial_k(h)}{h}\big(g_{ij} + \partial_j(f_1)\partial_i(f_1)\big) + \operatorname{Hess}f_1(\partial_k,\partial_i)\partial_j(f_1) + \operatorname{Hess}f_1(\partial_k,\partial_j)\partial_i(f_1). \tag{3.12}$$

If $(TM,g^{f_1},\nabla^{f,h})$ is a statistical manifold, from (3.11) and (3.12) we get

$$\frac{\partial_k(h)}{2f}g_{ij} + \frac{\partial_k(h)}{2h}\big\{g_{ij} + \partial_j(f_1)\partial_i(f_1)\big\} = \operatorname{Hess}f_1(\partial_k,\partial_i)\partial_j(f_1) + \operatorname{Hess}f_1(\partial_k,\partial_j)\partial_i(f_1).$$

Also we have

$$(\nabla^{f,h}_{\delta_i}g^{f_1})(\delta_j,\bar\partial_k) = \frac{y^r}{2}R_{ijrk} + \frac{y^r}{2}R^s{}_{ijr}\partial_s(f_1)\partial_k(f_1) - \frac{h}{2f}y^rR_{rkij},\qquad (\nabla^{f,h}_{\bar\partial_k}g^{f_1})(\delta_i,\delta_j) = -\frac hf\,y^rR_{rkij}. \tag{3.13}$$
If (T M, g f1 , ∇f,h ) is a statistical manifold, then Hess f1 (Z, X )Y (f1) + Hess f1 (Z, Y )X(f1) = 12( f + h) Z(h)g(X, Y ) + 12h Z(h)Y (f1)X(f1), and ( 1 + hf ) R(X, Y )Z = ( R(Y, X )Z)( f1) grad( f1), for all X, Y, Z ∈ χ(M ). Geometry of tangent bundle with gradient Sasaki metric In this section, we study the geometry of T M equipped with the gradient Sasaki metric gf .Firstly, we study the necessary and sufficient conditions for the vector fields XV and XH to be Killing for gf .By a direct computation and using (2.1) and (2.4), we get (LXV gf )( Y V , Z V ) = 0 , (LXV gf )( Y H , Z V ) = g(∇Y X, Z ) + ( ∇Y X)( f )Z(f ) = g(∇Y X, Z ) + g(( ∇Y X)( f ) grad( f ), Z ), (LXV gf )( Y H , Z H ) = 0 . Using (2.1) and (2.4), straightforward computations give (LXH gf )( Y V , Z V ) = ( ∇X g)( Y, Z ) + g(T (X, Y ), Z ) + g(Y, T (X, Z )) + ( T (X, Y ))( f )Z(f ) + ( T (X, Z ))( f )Y (f )+ Hess f (X, Y )Z(f ) + Hess f (X, Z )Y (f ), (LXH gf )( Y H , Z V ) = g(R(X, Y )y, Z ) + ( R(X, Y )y)( f )Z(f )= g(R(X, Y )y, Z ) + g(( R(X, Y )y)( f ) grad( f ), Z ), (LXH gf )( Y H , Z H ) = ( LX g)( Y, Z ). Thus, we get the following: 2345 PEYGHAN et al./Turk J Math Proposition 4.1 Let (M, g ) be a Riemannian manifold and let (T M, g f ) be its tangent bundle equipped with the gradient Sasaki metric. Then the following assertions hold (1) if ∇ is a torsion-free affine connection on M , then XV is a Killing vector field for gf , if and only if ∇Y X = −(∇Y X)( f ) grad( f ), for all Y ∈ χ(M );(2) XH is a Killing vector field for gf if and only if X is a Killing vector field for g and (∇X g)( Y, Z ) = − Hess f (X, Y )Z(f ) − Hess f (X, Z )Y (f ), R(X, Y )Z = −(R(X, Y )Z)( f ) grad( f ), for all Y, Z ∈ χ(M ). Here we compute the components of C ∇gf to study the Codazzi and statistical structures for (T M, g f , C ∇).A direct computation gives ( C ∇δk gf )( ∂¯i, ∂ ¯j ) = δkgf (∂¯i, ∂ ¯j ) − gf ( C ∇δk ∂¯i, ∂ ¯j ) − gf (∂¯i, C ∇δk ∂¯j ) (4.1) = δkgf (∂¯i, ∂ ¯j ) − gf (( ∇∂k ∂i)V , ∂ ¯j ) − gf (∂¯i, (∇∂k ∂j )V )= δk{gij + ∂i(f )∂j (f )} − { g(∇∂k ∂i, ∂ j ) + ( ∇∂k ∂i)( f )∂j (f )}− { g(∂i, ∇∂k ∂j ) + ∂i(f )( ∇∂k ∂j )( f )} = ∂k(gij ) + ∂k(∂i(f )) ∂j (f ) + ∂i(f )∂k(∂j (f )) − g(∇∂k ∂i, ∂ j ) − (∇∂k ∂i)( f )∂j (f ) − g(∂i, ∇∂k ∂j ) − ∂i(f )( ∇∂k ∂j )( f )= ( ∇∂k g)( ∂i, ∂ j ) + ∂k(∂i(f )) ∂j (f ) + ∂i(f )∂k(∂j (f )) − (∇∂k ∂i)( f )∂j (f ) − ∂i(f )( ∇∂k ∂j )( f )= ( ∇∂k g)( ∂i, ∂ j ) + ∂j (f ) Hess f (∂k, ∂ i) + ∂i(f ) Hess f (∂k, ∂ j ). We have also ( C ∇∂¯i gf )( ∂¯j , δ k) = ( C ∇∂¯j gf )( δk, ∂ ¯i) = 0 , (4.2) ( C ∇δi gf )( δj , ∂ ¯k) = −g(R(y, ∂ i)∂j , ∂ k) − (R(y, ∂ i)∂j )( f )∂k(f ), ( C ∇δj gf )( ∂¯k, δ i) = −g(R(y, ∂ j )∂i, ∂ k) − (R(y, ∂ j )∂i)( f )∂k(f ), ( C ∇∂¯k gf )( δi, δ j ) = 0 , ( C ∇δi gf )( δj , δ k) = ( ∇∂i g)( ∂j , ∂ k), ( C ∇∂¯i gf )( ∂¯j , ∂ ¯k) = 0 . (4.3) Let (T M, g f , C ∇) be a Codazzi manifold. The first equation of (4.3) implies that ∇ is Codazzi. On the other hand, from (4.1) and (4.2), we deduce (∇∂k g)( ∂i, ∂ j ) = −∂j (f ) Hess f (∂k, ∂ i) − ∂i(f ) Hess f (∂k, ∂ j ). (4.4) 2346 PEYGHAN et al./Turk J Math Since ∇ is Codazzi, then the above equation gives ∂k(f ) Hess f (∂i, ∂ j ) − ∂i(f ) Hess f (∂k, ∂ j ) = T (∂i, ∂ k)( f )∂j (f ). (4.5) Now, let (T M, g f , C ∇) be a statistical manifold. It is known that C ∇ is torsion-free if and only if ∇ is torsion-free and flat. Thus, we deduce that ∇ is a statistical connection. In this case, (4.5) reduces to the following ∂k(f ) Hess f (∂i, ∂ j ) − ∂i(f ) Hess f (∂k, ∂ j ) = 0 . 
Substituting the above equation into (4.4), we get $(\nabla_{\partial_k}g)(\partial_i,\partial_j) = -2\partial_k(f)\operatorname{Hess}f(\partial_i,\partial_j)$. According to the above description, we conclude the following.

Theorem 4.2 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be an affine connection on $M$, and let $(TM,g^f)$ be its tangent bundle equipped with the gradient Sasaki metric. Then the following statements hold:
(1) if $(TM,g^f,{}^C\nabla)$ is a Codazzi manifold, then $(M,g,\nabla)$ is a Codazzi manifold,

$$(\nabla_Zg)(X,Y) = -\operatorname{Hess}f(Z,X)Y(f) - \operatorname{Hess}f(Z,Y)X(f),\qquad R(X,Y)Z = -(R(X,Y)Z)(f)\operatorname{grad}f,$$

and, moreover, $\operatorname{Hess}f(X,Y)Z(f) - \operatorname{Hess}f(Z,Y)X(f) = T(X,Z)(f)Y(f)$, for all $X,Y,Z\in\chi(M)$;
(2) if $(TM,g^f,{}^C\nabla)$ is a statistical manifold, then $(M,g,\nabla)$ is a statistical manifold and $(\nabla_Zg)(X,Y) = -2\operatorname{Hess}f(X,Y)Z(f)$.

Now we study necessary conditions for $(TM,g^s,\nabla^f)$ to be a statistical manifold. Direct computations give

$$(\nabla^f_{\bar\partial_k}g^s)(\delta_i,\delta_j) = -\frac{y^r}{2}R_{rkij} - \frac{y^r}{2}\partial_k(f)g(R(\partial_r,\operatorname{grad}f)\partial_i,\partial_j) - \frac{y^r}{2}R_{rkji} - \frac{y^r}{2}\partial_k(f)g(\partial_i,R(\partial_r,\operatorname{grad}f)\partial_j), \tag{4.6}$$

$$(\nabla^f_{\delta_i}g^s)(\delta_j,\bar\partial_k) = -y^rR_{ijrk} - \frac{y^r}{2}\partial_k(f)g(\partial_j,R(\partial_r,\operatorname{grad}f)\partial_i). \tag{4.7}$$

If $(TM,g^s,\nabla^f)$ is a statistical manifold, from (4.6) and (4.7) we get

$$y^r\Big\{R_{rkij} - \frac{\partial_k(f)}{2}g(\partial_i,R(\partial_r,\operatorname{grad}f)\partial_j)\Big\} = 0.$$

Differentiating with respect to $y^t$, we obtain $R_{tkij} = \frac{\partial_k(f)}{2}g(\partial_i,R(\partial_t,\operatorname{grad}f)\partial_j)$. We have also $(\nabla^f_{\delta_i}g^s)(\delta_j,\delta_k) = (\nabla^f_{\bar\partial_i}g^s)(\bar\partial_j,\bar\partial_k) = 0$,

$$(\nabla^f_{\bar\partial_i}g^s)(\bar\partial_j,\delta_k) = \frac{\partial_i(f)}{2}g(\nabla_{\partial_j}\operatorname{grad}f,\partial_k) + \frac{\partial_j(f)}{2}g(\nabla_{\partial_i}\operatorname{grad}f,\partial_k) - \frac{\partial_i(f)}{2}g(\nabla_{\partial_k}\operatorname{grad}f,\partial_j) - \frac{1}{2a}\Big\{g(\partial_i,\nabla_{\partial_k}\operatorname{grad}f) - \frac{\partial_k(a)}{2}\partial_i(f)\Big\}\partial_j(f), \tag{4.8}$$

$$(\nabla^f_{\delta_k}g^s)(\bar\partial_i,\bar\partial_j) = -\frac{\partial_i(f)}{2}g(\nabla_{\partial_k}\operatorname{grad}f,\partial_j) - \frac{\partial_j(f)}{2}g(\nabla_{\partial_k}\operatorname{grad}f,\partial_i) - \frac{1}{2a}\Big(g(\partial_i,\nabla_{\partial_k}\operatorname{grad}f) - \frac{\partial_k(a)}{2}\partial_i(f)\Big)\partial_j(f) - \frac{1}{2a}\Big(g(\partial_j,\nabla_{\partial_k}\operatorname{grad}f) - \frac{\partial_k(a)}{2}\partial_j(f)\Big)\partial_i(f). \tag{4.9}$$

Since $(TM,g^s,\nabla^f)$ is statistical, from (4.8) and (4.9) we get

$$-\partial_j(f)g(\nabla_{\partial_k}\operatorname{grad}f,\partial_i) - \frac1a\Big(g(\partial_j,\nabla_{\partial_k}\operatorname{grad}f) - \frac{\partial_k(a)}{2}\partial_j(f)\Big)\partial_i(f) = \partial_i(f)g(\nabla_{\partial_j}\operatorname{grad}f,\partial_k) + \partial_j(f)g(\nabla_{\partial_i}\operatorname{grad}f,\partial_k).$$

Thus, we get the following.

Theorem 4.3 Let $(M,g)$ be a Riemannian manifold, let $(TM,g^s)$ be its tangent bundle equipped with the Sasaki metric, and let $\nabla^f$ be the Levi-Civita connection of the gradient Sasaki metric $g^f$. If $(TM,g^s,\nabla^f)$ is a statistical manifold, then

$$-Y(f)g(\nabla_Z\operatorname{grad}f,X) - \frac1a\Big(g(\nabla_Z\operatorname{grad}f,Y) - \frac12Z(a)Y(f)\Big)X(f) = X(f)g(\nabla_Y\operatorname{grad}f,Z) + Y(f)g(\nabla_X\operatorname{grad}f,Z),$$

and

$$R(X,Y)Z = \frac12g(X,R(Z,\operatorname{grad}f)Y)\operatorname{grad}f,$$

for all $X,Y,Z\in\chi(M)$.

Now we focus on $(TM,G^{f,h},\nabla^{f_1})$. Direct computations give

$$(\nabla^{f_1}_{\delta_i}G^{f,h})(\delta_j,\delta_k) = \partial_i(f)g_{jk},\qquad (\nabla^{f_1}_{\bar\partial_i}G^{f,h})(\bar\partial_j,\bar\partial_k) = 0,\qquad (\nabla^{f_1}_{\bar\partial_k}G^{f,h})(\delta_i,\delta_j) = 0, \tag{4.10}$$

$$(\nabla^{f_1}_{\delta_i}G^{f,h})(\delta_j,\bar\partial_k) = \frac{y^r}{2}\Big\{(h-f)R_{ijrk} - f\partial_k(f_1)g(\partial_i,R(\partial_r,\operatorname{grad}f_1)\partial_j)\Big\}. \tag{4.11}$$

If $(TM,G^{f,h},\nabla^{f_1})$ is a statistical manifold, from (4.10) we get $\partial_i(f) = 0$, i.e., $f$ is constant. (4.11) implies

$$\frac{y^r}{2}\Big((h-f)R_{ijrk} - f\partial_k(f_1)g(\partial_i,R(\partial_r,\operatorname{grad}f_1)\partial_j)\Big) = 0.$$

Differentiating with respect to $y^t$, we obtain $(h-f)R_{ijtk} - f\partial_k(f_1)g(\partial_i,R(\partial_t,\operatorname{grad}f_1)\partial_j) = 0$. We have also

$$(\nabla^{f_1}_{\bar\partial_i}G^{f,h})(\bar\partial_j,\delta_k) = \frac{\partial_i(f_1)}{2}fg(\nabla_{\partial_j}\operatorname{grad}f_1,\partial_k) + \frac{\partial_j(f_1)}{2}fg(\nabla_{\partial_i}\operatorname{grad}f_1,\partial_k) - \frac{\partial_i(f_1)}{2}hg(\partial_j,\nabla_{\partial_k}\operatorname{grad}f_1) - \frac{h}{2a}\Big\{g(\partial_i,\nabla_{\partial_k}\operatorname{grad}f_1) - \frac{\partial_k(a)}{2}\partial_i(f_1)\Big\}\partial_j(f_1), \tag{4.12}$$

$$(\nabla^{f_1}_{\delta_k}G^{f,h})(\bar\partial_i,\bar\partial_j) = -\frac{\partial_i(f_1)}{2}hg(\nabla_{\partial_k}\operatorname{grad}f_1,\partial_j) - \frac{h}{2a}\Big\{g(\partial_i,\nabla_{\partial_k}\operatorname{grad}f_1) - \frac{\partial_k(a)}{2}\partial_i(f_1)\Big\}\partial_j(f_1) - \frac{\partial_j(f_1)}{2}hg(\nabla_{\partial_k}\operatorname{grad}f_1,\partial_i) - \frac{h}{2a}\Big\{g(\partial_j,\nabla_{\partial_k}\operatorname{grad}f_1) - \frac{\partial_k(a)}{2}\partial_j(f_1)\Big\}\partial_i(f_1) + \partial_k(h)g_{ij}. \tag{4.13}$$
Since $(TM,G^{f,h},\nabla^{f_1})$ is statistical, from (4.12) and (4.13) we get

$$\frac{\partial_i(f_1)}{2}fg(\nabla_{\partial_j}\operatorname{grad}f_1,\partial_k) + \frac{\partial_j(f_1)}{2}fg(\nabla_{\partial_i}\operatorname{grad}f_1,\partial_k) = \partial_k(h)g_{ij} - \frac{h}{2a}\Big\{g(\partial_j,\nabla_{\partial_k}\operatorname{grad}f_1) - \frac{\partial_k(a)}{2}\partial_j(f_1)\Big\}\partial_i(f_1) - \frac{\partial_j(f_1)}{2}hg(\nabla_{\partial_k}\operatorname{grad}f_1,\partial_i).$$

Thus, we get the following.

Theorem 4.4 Let $(M,g)$ be a Riemannian manifold, let $(TM,G^{f,h})$ be its tangent bundle equipped with the twisted Sasaki metric, and let $\nabla^{f_1}$ be the Levi-Civita connection of the gradient Sasaki metric $g^{f_1}$. If $(TM,G^{f,h},\nabla^{f_1})$ is a statistical manifold, then $f$ is constant. Moreover, we have

$$fX(f_1)g(\nabla_Y\operatorname{grad}f_1,Z) + fY(f_1)g(\nabla_X\operatorname{grad}f_1,Z) = 2g(X,Y)Z(h) - hY(f_1)g(\nabla_Z\operatorname{grad}f_1,X) - \frac ha\Big(g(\nabla_Z\operatorname{grad}f_1,Y) - \frac12Z(a)Y(f_1)\Big)X(f_1),$$

and

$$(h-f)R(X,Y)Z = fg(X,R(Z,\operatorname{grad}f_1)Y)\operatorname{grad}f_1,$$

for all $X,Y,Z\in\chi(M)$.

5. 1-Stein and Osserman structures on TM

In this section, we introduce two geometric notions, the 1-Stein and the Osserman properties. We then show that $TM$ is a 1-Stein space whenever it is equipped with the complete lift connection $^C\nabla$, and, finally, we prove that if $\nabla$ is a flat connection, then $TM$ equipped with $^C\nabla$ is a globally Osserman space. It is known that $^C\nabla$ is the Levi-Civita connection of the complete lift metric on $TM$. Studying 1-Stein and Osserman structures on $TM$ with the twisted Sasaki metric and the gradient Sasaki metric is an interesting problem that can be pursued in the future.

The Jacobi operator $J_X(Y) = R(Y,X)X$ is a self-adjoint operator and plays an important role in curvature theory. Let $\operatorname{spec}\{J_X\}$ be the set of all eigenvalues of the Jacobi operator $J_X$ and $S(M,g)$ be the sphere bundle of unit tangent vectors. One says that $(M,g)$ is Osserman at $p\in M$ if, for every $X,Y\in S_p(M,g)$, we have $\operatorname{spec}\{J_X\} = \operatorname{spec}\{J_Y\}$, i.e., the eigenvalues of $J_X$ are independent of the unit tangent vector at $p$. Furthermore, $(M,g)$ is pointwise Osserman if it is Osserman at each $p\in M$. Also, $(M,g)$ is globally Osserman if, for any point $p\in M$ and any unit tangent vector $X\in T_pM$, the eigenvalues of the Jacobi operator depend neither on $X$ nor on $p$, i.e., the eigenvalues of $J_X$ are constant on $S(M,g)$. We recall that globally Osserman manifolds are clearly pointwise Osserman.

Let $(M,g)$ be a Riemannian manifold, $p\in M$, $Z\in S_p(M,g)$. Associated to the Jacobi operators and a natural number $t$, there exist functions $f_t$ defined by

$$f_t(p,Z) = g(Z,Z)^t\operatorname{trace}\big(J_Z^{(t)}\big),$$

where $J_Z^{(t)}$ is the $t$-th power of the Jacobi operator $J_Z$. We say that the Riemannian manifold $(M,g)$ is $k$-Stein at $p\in M$ if $f_t(p,Z)$ is independent of $Z\in S_p(M,g)$ for every $1\leqslant t\leqslant k$. Moreover, $(M,g)$ is $k$-Stein if it is $k$-Stein at each point.

Lemma 5.1 Let $(M,g)$ be a 4-dimensional Riemannian manifold. Then $(M,g)$ is pointwise Osserman if and only if $(M,g)$ is 2-Stein.
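Before turning to $TM$, it may help to test the definitions on a standard example (our addition, a well-known fact): on a space of constant sectional curvature $c$, the curvature tensor is $R(X,Y)Z = c\,(g(Y,Z)X - g(X,Z)Y)$, so for a unit vector $X$,

$$J_X(Y) = R(Y,X)X = c\,\big(Y - g(Y,X)X\big),$$

hence $\operatorname{spec}\{J_X\} = \{0,c\}$ independently of $X$ and of the point; such spaces are globally Osserman, and every $f_t(p,Z)$ is constant, so they are $k$-Stein for all $k$.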
Now we study the 1-Stein and Osserman structure of $TM$, whenever it is equipped with the complete lift connection $^C\nabla$. We denote by $\bar R$ (respectively, $\bar J$) the Riemannian curvature tensor and the Jacobi operator of $^C\nabla$. Let $u\in TM$ and $v\in T_u(TM)$, so that $v = X^k\delta_k + \bar X^k\bar\partial_k$, where $X^k,\bar X^k$ are smooth functions on $TM$. Direct computations give us

$$\bar J_v(\delta_i) = \bar R(\delta_i,v)v = (X^k)^2\bar R(\delta_i,\delta_k)\delta_k + X^k\bar X^k\bar R(\delta_i,\delta_k)\bar\partial_k + \bar X^kX^k\bar R(\delta_i,\bar\partial_k)\delta_k + (\bar X^k)^2\bar R(\delta_i,\bar\partial_k)\bar\partial_k.$$

Using (2.2) and a straightforward computation, we get

$$\bar R(\delta_i,\delta_k)\delta_k = (J_{\partial_k}(\partial_i))^H + \big(R(y,\partial_i)\nabla_{\partial_k}\partial_k - R(y,\partial_k)\nabla_{\partial_i}\partial_k + \nabla_{\partial_i}R(y,\partial_k)\partial_i - \nabla_{\partial_k}R(y,\partial_i)\partial_k\big)^V.$$

We have also

$$\bar R(\delta_i,\delta_k)\bar\partial_k = (J_{\partial_k}(\partial_i))^V,\qquad \bar R(\delta_i,\bar\partial_k)\delta_k = \bar R(\delta_i,\bar\partial_k)\bar\partial_k = 0.$$

We set $A = R(y,\partial_i)\nabla_{\partial_k}\partial_k - R(y,\partial_k)\nabla_{\partial_i}\partial_k + \nabla_{\partial_i}R(y,\partial_k)\partial_i - \nabla_{\partial_k}R(y,\partial_i)\partial_k$. Also from (2.2), we obtain $\bar J_v(\bar\partial_i) = 0$; thus, the matrix representation of the Jacobi operator of $TM$ is

$$\bar J_v = \begin{bmatrix} (X^k)^2(J_{\partial_k}(\partial_i))^H & 0 \\ (X^k)^2A^V + X^k\bar X^k(J_{\partial_k}(\partial_i))^V & 0 \end{bmatrix}.$$

Since $(X^k)^2(J_{\partial_k}(\partial_k))^H = 0$ for $k = 1,\dots,n$, we conclude that the principal diagonal entries are zero, so $\operatorname{trace}(\bar J_v) = 0$, i.e., the trace is independent of $v$. Therefore, $TM$ is a 1-Stein space. Moreover, if $\nabla$ is a flat connection, then the Jacobi operator $\bar J_v$ of $TM$ equipped with $^C\nabla$ is zero. Thus, $\operatorname{spec}\{\bar J_v\} = \{0\}$, so $TM$ is globally Osserman. From the above, and using Lemma 5.1, we get the following.

Proposition 5.2 Let $(M,g)$ be a Riemannian manifold, let $\nabla$ be an affine connection, and let $TM$ be its tangent bundle. Then the following statements hold:
(1) $TM$ equipped with the complete lift connection $^C\nabla$ is a 1-Stein space;
(2) if $\nabla$ is a flat connection, then $TM$ equipped with $^C\nabla$ is globally Osserman. Moreover, if $M$ is 2-dimensional, then $TM$ equipped with $^C\nabla$ is a 2-Stein space.
References

[1] Altunbas M, Bilen L, Gezer A. Remarks about the Kaluza-Klein metric on tangent bundle. International Journal of Geometric Methods in Modern Physics 2019; 16 (03): 1950040.
[2] Amari S. Information Geometry and its Applications. Tokyo, Japan: Springer, 2016.
[3] Amari S, Nagaoka H. Methods of Information Geometry. Providence, RI, USA: American Mathematical Society, 2000.
[4] Abbasi MTK, Sarih M. On some hereditary properties of Riemannian g-natural metrics on tangent bundles of Riemannian manifolds. Differential Geometry and its Applications 2005; 22: 19-47.
[5] Balan V, Peyghan E, Sharahi E. Statistical structures on the tangent bundle of a statistical manifold with Sasaki metric. Hacettepe Journal of Mathematics and Statistics 2020; 49 (1): 120-135.
[6] Belarbi L, El Hendi H. Geometry of twisted Sasaki metric. Journal of Geometry and Symmetry in Physics 2019; 53: 1-19.
[7] Belarbi L, El Hendi H. On the geometry of the tangent bundle with gradient Sasaki metric. Arab Journal of Mathematical Sciences 2021.
[8] Davies ET. On the curvature of tangent bundles. Annali di Matematica (IV) 1969; 81: 193-204.
[9] Dombrowski P. On the geometry of the tangent bundle. Journal für die reine und angewandte Mathematik 1962; 210: 73-88.
[10] Garcia-Rio E, Kupeli D, Vazquez-Lorenzo R. Osserman Manifolds in Semi-Riemannian Geometry. Lecture Notes in Mathematics. Springer Verlag, 2002.
[11] Gezer A, Ozkan M. Notes on tangent bundle with deformed complete lift metric. Turkish Journal of Mathematics 2014; 38: 1038-1049.
[12] Kowalski O, Sekizawa M. Natural transformations of Riemannian metrics on manifolds to metrics on tangent bundles: a classification. Bulletin of Tokyo Gakugei University 1988; 40 (4): 1-29.
[13] Lauritzen S. Statistical manifolds. In: Differential Geometry in Statistical Inference, IMS Lecture Notes Monograph Series 10. Hayward, CA, USA: Institute of Mathematical Statistics, 1987: 96-163.
[14] Matsuzoe H. Statistical manifolds and affine differential geometry. Advanced Studies in Pure Mathematics 2010; 57: 303-321.
[15] Oliva JB, Póczos B, Schneider J. The statistical recurrent unit. arXiv:1703.00381, 2017.
[16] Oproiu V. Some new geometric structures on the tangent bundles. Publicationes Mathematicae Debrecen 1999; 55 (3-4): 261-281.
[17] Pennec X, Fillard P, Ayache N. A Riemannian framework for tensor computing. International Journal of Computer Vision 2006; 66 (1): 41-66.
[18] Sasaki S. On the geometry of tangent bundles of Riemannian manifolds. Tohoku Mathematical Journal 1958; 10: 338-354.
[19] Turaga P, Veeraraghavan A, Chellappa R. Statistical analysis on Stiefel and Grassmann manifolds with applications in computer vision. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008: 1-8.
[20] Yano K, Ishihara S. Tangent and Cotangent Bundles. New York: Marcel Dekker Inc., 1973.
[21] Yano K, Kobayashi S. Prolongations of tensor fields and connections to tangent bundles, I, General theory. Journal of the Mathematical Society of Japan 1966; 18: 194-210.
Pointwise Remez inequality

Abstract The standard well-known Remez inequality gives an upper estimate of the values of polynomials on $[-1,1]$ if they are bounded by 1 on a subset of $[-1,1]$ of fixed Lebesgue measure. The extremal solution is given by the rescaled Chebyshev polynomials for one interval. Andrievskii asked about the maximal value of polynomials at a fixed point, if they are again bounded by 1 on a set of fixed size. We show that the extremal polynomials are either Chebyshev (one interval) or Akhiezer polynomials (two intervals) and prove Totik–Widom bounds for the extremal value, thereby providing a complete asymptotic solution to the Andrievskii problem.

1 Introduction

Finding exact constants is one of the most desirable goals in constructive approximation. Many classical results of this sort are collected in the addendum to the Akhiezer book. This paper deals with a certain clarification of the famous exact Remez inequality; that is, we provide the exact constant in the Andrievskii problem defined below. Based on several results by Erdélyi, Saff and himself [5, 6, 15, 16], Andrievskii posed the following problem.

Problem 1.1 Let $\mathcal P_n$ be the collection of polynomials of degree at most $n$. Let $E$ be a closed subset of $[-1,1]$, and let $|E|$ denote its Lebesgue measure. For $x_0\in[-1,1]$, define

$$T_n(x_0,E) := \sup\{|P_n(x_0)|:\ P_n\in\mathcal P_n,\ |P_n(x)|\le 1\ \text{for}\ x\in E\}. \tag{1.1}$$

For $\delta\in(0,1)$, find

$$L_{n,\delta}(x_0) := \sup\{T_n(x_0,E):\ E\subset[-1,1]\ \text{closed},\ |E|\ge 2(1-\delta)\}.$$

Let us comment on this setting with the following three evident remarks:

1. $L_{n,\delta}(x_0)$ is an even function of $x_0$; thus we will consider only $x_0\in[-1,0]$.
2. $\mathcal L_{n,\delta} := \sup_{x_0\in[-1,0]}L_{n,\delta}(x_0)$ is the famous Remez constant. It is attained at the endpoint $-1$ by the Chebyshev polynomial $R_{n,\delta}(x)$ of the interval $[-1+2\delta,1]$, i.e.,

$$R_{n,\delta}(x) = \mathfrak T_n\Big(\frac{x-\delta}{1-\delta}\Big), \tag{1.2}$$

where $\mathfrak T_n$ denotes the standard Chebyshev polynomial of degree $n$ associated with $[-1,1]$, $\mathfrak T_n(\cos\theta) = \cos n\theta$. We will henceforth call $R_{n,\delta}$ the Remez polynomial. Clearly, for Problem 1.1, $R_{n,\delta}(x)$ cannot be extremal for all $x_0\in[-1,0]$ as soon as $\delta$ is sufficiently small.
3. Let $\delta = 1/2$. Then $R_{n,1/2}(0) = 1$, while for the so-called Akhiezer polynomial $A_{2m,\delta}(x)$ we get $A_{2m,1/2}(0) = \mathfrak T_m(5/3) > 1$. Recall (see, e.g., [4, Chapter 10]) that

$$A_{2m,\delta}(x) = \mathfrak T_m\Big(\frac{2x^2-1-\delta^2}{1-\delta^2}\Big)$$

is the even Chebyshev polynomial on the set $[-1,1]\setminus(-\delta,\delta)$.

Andrievskii raised his question at several international conferences, including the Jaen Conference on Approximation Theory, 2018. The third remark highlights that the problem is non-trivial. We found it highly interesting, and in this paper we provide its complete asymptotic solution.

In [1, 2], Akhiezer studied various extremal problems for polynomials bounded on two disjoint intervals $E(\alpha,\delta) = [-1,1]\setminus(\alpha-\delta,\alpha+\delta)$, $\delta-1<\alpha\le 0$; see also [3, Appendix, Section 36, and Section 38 in the German translation].
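The comparison in the third remark is easy to reproduce numerically. The sketch below (ours, not from the article) evaluates the Remez polynomial $R_{n,\delta}$ and the Akhiezer polynomial $A_{2m,\delta}$ at $x_0 = 0$ for $\delta = 1/2$ using the formulas recalled above; for $|t|>1$ we use $\mathfrak T_n(t) = \pm\cosh(n\,\mathrm{arccosh}\,|t|)$.

```python
import numpy as np

def cheb(n, t):
    """Chebyshev polynomial T_n(t), valid for all real t."""
    t = float(t)
    if abs(t) <= 1:
        return np.cos(n * np.arccos(t))
    return np.sign(t)**n * np.cosh(n * np.arccosh(abs(t)))

delta, m = 0.5, 6
n = 2 * m

# Remez polynomial R_{n,delta}(x) = T_n((x - delta)/(1 - delta)) at x = 0
print(abs(cheb(n, (0 - delta) / (1 - delta))))                  # 1.0

# Akhiezer polynomial A_{2m,delta}(x) = T_m((2x^2 - 1 - delta^2)/(1 - delta^2))
print(abs(cheb(m, (2 * 0**2 - 1 - delta**2) / (1 - delta**2))))  # T_m(5/3) >> 1
```

For $m = 6$ the second value is $\mathfrak T_6(5/3)\approx 364.5$, which makes the advantage of the two-interval configuration at $x_0 = 0$ very concrete.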
Definition 1.2 We say that a polynomial $A_{n,\alpha,\delta}(x)$ is the Akhiezer polynomial for $E(\alpha,\delta)$ with respect to an internal gap if it solves the extremal problem

$$|A_{n,\alpha,\delta}(x_0)| = \sup\{|P_n(x_0)|:\ P_n\in\mathcal P_n,\ |P_n(x)|\le 1\ \text{for}\ x\in E(\alpha,\delta)\}, \tag{1.3}$$

where $x_0\in(\alpha-\delta,\alpha+\delta)$.

We point out that $A_{2m,\delta}$ as introduced above corresponds to the symmetric case $\alpha = 0$ and even $n = 2m$. We describe properties of Akhiezer polynomials in Sect. 2.1. Note, in particular, that the extremal property of $A_{n,\alpha,\delta}(x)$ does not depend on which point $x_0\in(\alpha-\delta,\alpha+\delta)$ was fixed in (1.3). In Sect. 3, we prove the following theorem.

Theorem 1.3 The extremal value $L_{n,\delta}(x_0)$ is attained either on the Remez polynomial $R_{n,\delta}(x)$ or on an Akhiezer polynomial $A_{n,\alpha,\delta}(x)$ with a suitable $\alpha$.

Our main result, the asymptotic behavior of $L_{n,\delta}(x_0)$, is presented in Sect. 4. Nowadays, the language of potential theory is so common and widely accepted, see, e.g., [6, 11, 13, 19, 27], that we will formulate our asymptotic result using this terminology. Let $G_{\alpha,\delta}(z,z_0)$ be the Green function of the domain $\Omega := \overline{\mathbb C}\setminus E(\alpha,\delta)$ with respect to $z_0\in\Omega$. In particular, we set $G_{\alpha,\delta}(z) = G_{\alpha,\delta}(z,\infty)$. Respectively, $G_\delta(z)$ is the Green function of the domain $\overline{\mathbb C}\setminus E(\delta)$ with respect to infinity, where $E(\delta) = [-1+2\delta,1]$. It is well known that

$$T_n(x_0,E(\alpha,\delta))\le e^{nG_{\alpha,\delta}(x_0)}\qquad\text{and}\qquad T_n(x_0,E(\delta))\le e^{nG_\delta(x_0)}. \tag{1.4}$$

Lemma 1.4 Let $\Phi_\delta(x)$ be the upper envelope

$$\Phi_\delta(x) := \sup_{\delta-1<\alpha\le 0}G_{\alpha,\delta}(x). \tag{1.5}$$

If for $x_0$ the supremum is attained at some internal point $\alpha\in(\delta-1,0]$, then $x_0 = x_0(\alpha)$ is the unique solution of the equation

$$\frac{\partial}{\partial\alpha}G_{\alpha,\delta}(x_0) = 0. \tag{1.6}$$

Otherwise, it is attained as $\alpha\to\delta-1$, and $\Phi_\delta(x_0) = G_\delta(x_0)$.

Note that the non-trivial claim in Lemma 1.4 is the uniqueness of $x_0(\alpha)$. We will provide a proof of this fact in Sect. 4.2. Thus, we show that the upper envelope (1.5) is in fact the upper envelope of just two curves. One of them is the graph of $G_\delta(x)$, given in terms of elementary functions, see (4.8). The other one is obtained in the following way: first we find the unique solution $x_0(\alpha)$ of (1.6), and then we form the curve $(x_0(\alpha), G_{\alpha,\delta}(x_0(\alpha)))$. Although each individual Green function $G_{\alpha,\delta}(x)$ can be expressed in terms of elliptic functions [4, §55, equation (4)], due to the implicit definition of $x_0(\alpha)$ by means of (1.6), we do not believe that a parametric description of the upper envelope in terms of elliptic functions exists. However, for any fixed $\delta>0$ it can be computed numerically, as we demonstrate in Example 4.5, including a graph of the curve $(x_0(\alpha), G_{\alpha,\delta}(x_0(\alpha)))$, cf. Fig. 3. Note that the endpoints of this curve are also given explicitly.

Our main theorem below shows that the asymptotics of $L_{n,\delta}(x_0)$ are described by the upper envelope $\Phi_\delta(x_0)$.

Theorem 1.5 The following limit exists:

$$\lim_{n\to\infty}\frac1n\log L_{n,\delta}(x_0) = \Phi_\delta(x_0). \tag{1.7}$$

To be more precise, if $\Phi_\delta(x_0) = G_\delta(x_0)$, then

$$\lim_{n\to\infty}\frac1n\log L_{n,\delta}(x_0) = G_\delta(x_0) = \lim_{n\to\infty}\frac1n\log R_{n,\delta}(x_0).$$

If $\Phi_\delta(x_0) = G_{\alpha,\delta}(x_0)$ for some $\alpha\in(\delta-1,0]$, then the Totik–Widom bound

$$L_{n,\delta}(x_0)\le C(\delta)\,e^{nG_{\alpha,\delta}(x_0)}$$

holds.

For the classical Chebyshev problem related to a fixed set $E$, the asymptotics are described in terms of the Green function associated with this set.
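That classical Green-function description is easy to observe numerically. For the interval $E(\delta) = [-1+2\delta,1]$, the Green function with pole at infinity is $G_\delta(x) = \mathrm{arccosh}\,|\xi| = \log\big(|\xi| + \sqrt{\xi^2-1}\big)$ at a real point outside the interval, where $\xi = \frac{x-\delta}{1-\delta}$ is the affine map of $E(\delta)$ onto $[-1,1]$ (a standard formula; this sketch is ours, not from the article), and $\frac1n\log R_{n,\delta}(x_0)$ converges to $G_\delta(x_0)$:

```python
import numpy as np

delta = 0.3
x0 = -0.9                  # a point outside E(delta) = [-1 + 2*delta, 1]

def green_interval(x):
    # Green function of C \ [-1+2*delta, 1] with pole at infinity,
    # evaluated at a real point x outside the interval
    xi = abs((x - delta) / (1 - delta))
    return np.arccosh(xi)  # = log(xi + sqrt(xi^2 - 1))

def remez_value(n, x):
    # R_{n,delta}(x) = T_n((x - delta)/(1 - delta)); for |t| > 1,
    # |T_n(t)| = cosh(n * arccosh|t|)
    t = abs((x - delta) / (1 - delta))
    return np.cosh(n * np.arccosh(t))

for n in (5, 20, 80, 320):
    print(n, np.log(remez_value(n, x0)) / n, '->', green_interval(x0))
```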
In contrast to this, $\Phi_\delta(x_0)$, respectively $E(\alpha_0,\delta)$ for some extremal $\alpha_0$, depends on $x_0$ in a quite sophisticated way. We present an asymptotic diagram which allows us to describe completely the asymptotic behavior of $L_{n,\delta}(x_0)$ for all $x_0\in[-1,1]$ and a given $\delta$.

The organization of the paper is as follows. In Sect. 2, we prove part 1 of Theorem 1.3 and present technical tools that will be used throughout the paper. The structure of Chebyshev polynomials can be revealed by their representation in terms of conformal mappings onto so-called comb domains. However, the relation between the parameters describing a given set $E$ and the related comb domain is highly non-trivial. We provide a comprehensive description of Akhiezer polynomials and their dependence on the position $\alpha$. In Sect. 2.3, we finish the proof of Theorem 1.3. The proof of Theorem 1.5 and a description of the upper envelope (1.5) are given in Sect. 4.

2 Preliminaries

The sharp constant in the Remez inequality for trigonometric polynomials on the unit circle was given earlier; the proof was based on the following two steps:

(i) the structure of possible extremal polynomials was revealed with the help of their comb representations;
(ii) the principle of harmonic measure (a monotonic dependence on a domain extension) allows one to obtain an extremal configuration for the comb parameters in the given problem.

We also start by recalling comb representations for extremal polynomials, see Sect. 2.1. We refer the reader to [17, 25] and the references therein for more information about the use of comb mappings in approximation theory.

However, we doubt that step (ii) is applicable to the Andrievskii problem, that is, that comparatively simple arguments from potential theory, such as the principle of harmonic measure, would allow us to bring a certain fixed configuration (comb) to an extremal one. Instead, we develop here an infinitesimal approach, closely related to the ideas of Loewner chains [12, 22]. Using this method, we will prove in Sect. 3 that the extremal solution for $L_{n,\delta}(x_0)$ is either a Chebyshev or an Akhiezer polynomial. For this reason, we continue in Sect. 2.3 with the complete discussion of Akhiezer polynomials $A_{n,\alpha,\delta}$, see (1.3). Recall that $A_{n,\alpha,\delta}$ is extremal on two given intervals with respect to points $x_0\in(\alpha-\delta,\alpha+\delta)$. Generically, $A^{-1}_{n,\alpha,\delta}([-1,1])$ is a union of three intervals: the set contains also an additional interval outside of $[-1,1]$. We demonstrate our infinitesimal approach by showing the dependence of this additional interval on $\alpha$ for fixed $n$ and $\delta$. In the language of comb domains, we will observe a rather involved dependence of the comb parameters under a simple monotonic motion of $\alpha$. The domain can undergo all possible variations: it can narrow, expand, or a combination of both, see Theorem 2.14. Essentially, this is the basis for our belief that simple arguments in the spirit of (ii) are hardly possible in the Andrievskii problem.

If for fixed $x_0$ the extremal polynomial is a Remez polynomial, there is no additional interval outside of $[-1,1]$. Intuitively, the same considerations as in the trigonometric case should work. However, a technical difference prevents a direct application of the principle of harmonic measure.
Namely, the Lebesgue measure on the circle corresponds to the harmonic measure evaluated at 0, while in the sense of potential theory the Lebesgue measure on ({\mathbb {R}}) corresponds to the Martin or Phragmén–Lindelöf measure. Although we are convinced that a limiting process would allow us to overcome this technical issue, we provide an alternative proof below; cf. Lemma 2.5. 2.1 Comb Representation for Extremal Polynomials By a regular comb, we mean a half strip with a system of vertical slits, where (n_+-n_-=n\in {\mathbb {N}}) and the k's are integers. We call (h_k), (h_k\ge 0), the height of the k-th slit and point out that the degeneration (h_k=0) is possible. Let (\theta :{\mathbb {C}}_+\rightarrow \Pi ) be a conformal mapping of the upper half-plane onto a regular comb such that (\theta (\infty )=\infty ). Then, (2.1) defines a polynomial of degree n. Let (E\subset [-1,1]) be compact and (x_0\in [-1,1]\setminus E). Moreover, let (a, b) denote the maximal open interval in ({\mathbb {R}}\setminus E) that contains (x_0). Then, using the technique of Markov correction terms, we obtain the following representation of the extremizer of (1.1). Theorem 2.1 ([25, 7.5, Basic theorem], [14, Theorem 3.2]) Under the normalization (T_n(x_0)>0), there exists a unique extremal polynomial for (1.1), and it only depends on the gap (a, b) and not on the particular point (x_0\in (a,b)). Moreover, let (\theta :{\mathbb {C}}_+\rightarrow \Pi ) be a comb mapping onto a regular comb (\Pi ) and E be such that: (h_0>0) and (a=\theta ^{-1}(-0), b=\theta ^{-1}(+0)); E contains at least one of the points (\theta ^{-1}(k\pi \pm 0)), for all (k\in (n_-,n_+)); E contains at least one of the points (\theta ^{-1}(\pi n_-), \theta ^{-1}(\pi n_+)). Then, the polynomial (2.1) is an extremal polynomial for E and the interval (a, b). Vice versa, if (T_n) is an extremal polynomial for a set E and an interval (a, b), then there exists a regular comb with these properties such that (2.1) holds. Let us now specialize to the case of Akhiezer polynomials, where (E=E(\alpha ,\delta )=[-1,1]\setminus (\alpha -\delta ,\alpha +\delta )). By the above theorem, the so-called n-extension (E_n=E_n(\alpha ,\delta ):=A_{n,\alpha ,\delta }^{-1}([-1,1])) can be of the following types: (i) there is an additional interval to the right of 1; (ii) E is extended at 1; (iii) E is extended at (-1); (iv) there is an additional interval to the left of (-1). The corresponding comb-mapping realization is collected in the following corollary. Corollary 2.2 For a fixed set (E=E(\alpha ,\delta )) and degree n, the extremal polynomial (A_{n,\alpha ,\delta }(z)) of (1.1) is of the form (2.1). The unique comb (\Pi _{n,\alpha ,\delta }) has one of four possible shapes shown in Figs. 1 and 2. Moreover, the following normalizations distinguish the cases: (i) (\theta ^{-1}(\pi n_-)=-1), (\theta ^{-1}(\pi (n_+-1)-0)=1), see Fig. 1, left; (ii) (\theta ^{-1}(\pi n_--0)=-1), (\theta (1)\in [\pi (n_+-1), \pi n_+]), see Fig. 1, right; (iii) (\theta (-1)\in [\pi n_-, \pi (n_-+1)]), (\theta ^{-1}(\pi n_--0)=-1), see Fig. 2, left; (iv) (\theta ^{-1}(\pi (n_-+1)+0)=-1), (\theta ^{-1}(\pi n_+)=1), see Fig. 2, right. (Fig. 1: Comb domains for the cases (i) and (ii). Fig. 2: Comb domains for the cases (iii) and (iv).) Remark 2.3 Let us denote the additional interval for the cases (i) and (iv) by (I_n). These cases include the limit cases (h_{n_+-1}=\infty ) and (h_{n_-+1}=\infty ), respectively. That is, the extremal polynomial is of degree (n-1). Note that (A_{n,\alpha ,\delta }) has a zero on (I_n).
The aforementioned degeneration corresponds to “the zero being at (\infty ).” On the other hand, also the degenerations (h_{n_+-1}=0) and (h_{n_-+1}=0) are possible, which allow for a smooth transition to the cases (ii) and (iii), respectively. 2.2 Reduction to Remez Polynomials (Proof of Theorem 1.3, Part 1) As we have mentioned at the beginning of this section, we cannot use the ideas of the principle of harmonic measure directly. We overcome this technical problem by using transition functions from the theory of Loewner chains [12, 22]. Let us consider an arbitrary regular comb (\Pi =\Pi (0)) as in Theorem 2.1 and let us fix a slit with height (h_k>0). Let (h_k({\epsilon })) be strictly monotonically decreasing such that (h_k(0)=h_k), and let (\Pi ({\epsilon })) be the comb which coincides with (\Pi ), but with (h_k) reduced to (h_k({\epsilon })). Let (\theta ) and (\theta _{\epsilon }) be the corresponding comb mappings. Then, clearly (\Pi \subset \Pi _{{\epsilon }}), and thus the transition function (w_{\epsilon }=\theta _{\epsilon }^{-1}\circ \theta ) is well defined and is an analytic map from ({\mathbb {C}}_+) into ({\mathbb {C}}_+). If we define (I({\epsilon }):=\theta ^{-1}([\pi k+ih_k({\epsilon }), \pi k+ih_k])) and (c_k({\epsilon })=\theta _{\epsilon }^{-1}(\pi k+ih_k({\epsilon }))), then (w_{\epsilon }:{\mathbb {R}}\setminus I({\epsilon })\rightarrow {\mathbb {R}}\setminus \{c_k({\epsilon })\}) is one-to-one and onto. The arc (J({\epsilon }):=\theta _{\epsilon }^{-1}([\pi k+ih_k({\epsilon }), \pi k+ih_k])) lies, except for its endpoint, in ({\mathbb {C}}_+), and (w_{\epsilon }:{\mathbb {C}}_+\rightarrow {\mathbb {C}}_+\setminus J({\epsilon })) is conformal. Lemma 2.4 The Nevanlinna function (w_{\epsilon }) admits the representation (2.2). Proof It follows from the above that (w_{\epsilon }) is a Nevanlinna function, i.e., an analytic function with positive imaginary part on ({\mathbb {C}}_+), and that the measure in its integral representation is supported on (I({\epsilon })). That is, we can write it in this integral form. Moreover, it satisfies the corresponding normalization conditions. From this, we obtain (2.2). (\square ) With the aid of this lemma, one could already recover Remez' result that for points on the boundary the extremal configuration is an interval. Lemma 2.5 Let (T_n(x,E)) be the extremal polynomial for (x_0) as described in Theorem 2.1; then, for a boundary point (x_0), the extremal configuration is an interval. Proof For an elementary proof, even in the multi-dimensional case, we refer to [8, 9, Lemma 2]. (\square ) Remark 2.6 Continuing the heuristics provided at the beginning of this section, we give an interpretation of this result in terms of the harmonic measure. Assume that (\theta _{\epsilon }) is a conformal map for which one of the slit heights was reduced, (E_{\epsilon }=\theta _{\epsilon }^{-1}([\pi n_-,\pi n_+])), and (w_{\epsilon }) the corresponding transition function. Lemma 2.5 follows easily, provided that (|E_{\epsilon }|>|E|) and (w_{\epsilon }(x_0)>x_0). Let us assume we were interested in the harmonic measure of E instead of its Lebesgue measure. Then, the above properties have a probabilistic interpretation: namely, the first one corresponds to the probability that a particle, which starts at a point close to infinity in the domain (\Pi _{\epsilon }) and (\Pi ), respectively, terminates on the base of the comb. From this perspective, it is clear that this probability increases if one of the slit heights is decreased.
Similarly, writing the second one as (w_{\epsilon }(x_0)+1>x_0+1), it corresponds to the probability that a particle terminates in ([\theta (-1),\theta (x_0)]) in (\Pi ) and (\Pi _{\epsilon }), respectively. Again, this explains why the value should be increased. This can be made rigorous by using Lemma 2.4. The above interpretation uses the fact that there is no additional portion of E outside of ([-1,1]). For Akhiezer polynomials, this is not true. This makes the problem essentially more delicate, and we extend our tools by using the infinitesimal variation ({\dot{w}}) as defined in (2.4); cf. also Remark 2.15. 2.3 Transformation of the Akhiezer Polynomials as (\alpha ) is Moving We believe that the observations presented in this section are of independent interest. Our goal is to describe the transformation of the comb as the gap, starting from the center, moves continuously to the left. This should correspond to a continuous transformation of the comb, which seems at first sight impossible, since the bases of the slits are positioned only at integers. We show in the following how the cases (i), (ii), (iii), (iv) can be connected by a continuous transformation: let us start with case (i) in the degenerate configuration (h_{n_+-1}=\infty ). Then, we start decreasing (h_{n_+-1}) until we reach (h_{n_+-1}=0). Now we are in the situation that (\theta (1)=\pi (n_+-1)), and we continue with case (ii). That is, (\theta (1)) increases until it reaches (\theta (1)=\pi n_+). All the time, (\theta (-1)=\pi n_-). Now we continue with case (iii) and increase (\theta (-1)) until the point (\theta (-1)=\pi (n_-+1)). We continue with case (iv) and increase (h_{n_-+1}) from (h_{n_-+1}=0) to (h_{n_-+1}=\infty ). We have arrived at our initial configuration, but the base of the comb has been shifted from position ((\pi n_-,\pi (n_+-1))) to position ((\pi (n_-+1),\pi n_+)). We believe it is helpful to understand these transformations also on the level of (E_n). Our initial configuration corresponds to (E=E_n). We view this as the additional interval being hidden at (\infty ). Starting case (i) corresponds to the motion in which the position of the additional interval decreases from (+\infty ) until it joins E. This corresponds to the change from case (i) to (ii). When it is fully absorbed, we arrive at case (iii), i.e., E starts to be extended to the left until the extension separates and starts to decrease to (-\infty ). And the cycle starts from the beginning. Theorem 2.7 Moving (a, b) to the left corresponds to a succession of continuous transformations of the comb as described above. If n is odd, we start with case (i) and (h_{n_+-1}=\infty ). If n is even, we start with case (iii) and (\theta (-1)=\pi n_-). Remark 2.8 When (\alpha =0), by symmetry, the extremal polynomial is even and all critical points of the extremal polynomial are equally distributed on ([-1,-\delta ]) and ([\delta ,1]). The above procedure describes how all critical points are moved from the interval ([-1,\alpha -\delta ]) to ([\alpha +\delta ,1]) as (\alpha ) is decreasing. In the limit when (\alpha ) approaches (-1+\delta ), all critical points will be on ([-1+2\delta ,1]) and the extremal polynomial will transform into the Remez polynomial (R_{n,\delta }). As we have already indicated at the end of Section 2.2, the proof of Theorem 2.7 requires, in addition to the transition function (w_{\epsilon }), its infinitesimal transform ({\dot{w}}).
Let us introduce the notation (2.4). Lemma 2.9 Under an appropriate choice of ({\epsilon }), (w(z,{\epsilon })) is differentiable with respect to ({\epsilon }), and there exists (\sigma _k>0) such that (2.5) holds, where (c_k=c_k(0)=\theta ^{-1}(\pi k+ih_k)). Proof We note that if (k=n_-+1) or (k=n_+-1) and we are in the situation of Theorem 2.1 (i) or (iv), then due to (2.3) (A_{\epsilon }<1), and (A_{\epsilon }>1) otherwise. In any case, we obtain from the properties of (w_{\epsilon }) that (A_{\epsilon }) is strictly monotonic. Therefore, if we fix ({\epsilon }_0) and define ( \beta _0=\left| \int _{I({\epsilon }_0)}\frac{\mathrm {d}\sigma _{{\epsilon }_0}(\xi )}{1-\xi ^2}\right| , ) then ( {\epsilon }\mapsto \left| \int _{I({\epsilon })}\frac{\mathrm {d}\sigma _{\epsilon }(\xi )}{1-\xi ^2}\right| ) maps ([0,{\epsilon }_0]) continuously and bijectively onto ([0,\beta _0]). Through a reparametrization, we can achieve that ( \left| \int _{I({\epsilon }')}\frac{\mathrm {d}\sigma _{{\epsilon }'}(\xi )}{1-\xi ^2}\right| =\frac{{\epsilon }'\beta _0}{{\epsilon }_0}. ) Thus, the measures (\frac{\mathrm {d}\sigma _{{\epsilon }'}(\xi )}{(1-\xi ^2){\epsilon }'}) are normalized, and by passing to a subsequence we get a weak limit (\sigma _\infty ). Since the support of (\sigma _{{\epsilon }'_k}) shrinks to (\{c_k\}), we get that (\sigma _\infty =\sigma _k\delta _{c_k}), with (\sigma _k/|1-c_k^2|=\beta _0/{\epsilon }_0). This concludes the proof. (\square ) We are also interested in the case of growing slit heights. This corresponds to the inverse (w_{\epsilon }^{-1}) of some transition function as defined above. Note that since (c_k({\epsilon })\rightarrow c_k) as ({\epsilon }\rightarrow 0), this inverse is well defined on ({\mathbb {R}}) away from a vicinity of (c_k), and we conclude, since (w'(z,0)=1), the corresponding representation for the infinitesimal variation. The following lemma discusses the case (i). We show that decreasing (h_{n_+-1}) and simultaneously increasing (h_0) appropriately allows us to move the gap ((\alpha -\delta ,\alpha +\delta )) to the left without changing its size. Let us set ((a,b)=(\alpha -\delta ,\alpha +\delta )), let c be the critical point in (a, b), and let (c_+) be the critical point outside ([-1,1]), i.e., (c=\theta ^{-1}(ih_0)), (c_+=\theta ^{-1}(\pi (n_+-1)+ih_{n_+-1})). Lemma 2.10 Let (E(\alpha ,\delta )) with (\alpha \le 0) be such that the corresponding extremal polynomial (A_{n,\alpha ,\delta }(z)) corresponds to the case (i). Then, the infinitesimal variation ({\dot{w}}(x)) generated by decreasing the height (h_{n_+-1}) under the constraint of a constant gap length ((\delta =\mathrm{const})) leads to an increase in (h_0). Moreover, in this variation the gap is moving to the left, that is, (\alpha ({\epsilon })) is decreasing. Proof Let (w_{n_+-1}) and (w_0) be the transforms corresponding to a variation of the slit heights (h_{n_+-1}) and (h_0). In (2.5), we choose (\sigma _{n_+-1}>0) and determine the sign of (\sigma _0) by the condition of constant gap length. Thus, the total transform corresponds to (w(z,{\epsilon })=w_{n_+-1}(w_0(z,{\epsilon }),{\epsilon })) and hence, by the previous computations, we find the representation of ({\dot{w}}) with (\sigma _+>0). The value (\sigma _0) is determined by the constraint (w(b,{\epsilon })-w(a,{\epsilon })=b-a), i.e., ({\dot{w}}(b)={\dot{w}}(a)). Thus, we obtain (2.6). Let (\ell ) be given by (2.7). Since (a+b=2\alpha \le 0), (\ell ) is non-decreasing. Moreover, since (a,b>-1), (\ell (-1)=(1+a)(1+b)>0) and hence, (\ell (c)>0) and (\ell (c_+)>0). Using that the numerator is negative for both summands, we find that (\sigma _+>0) implies (\sigma _0<0).
In other words, the compression of the height (h_{n_+-1}) leads to a growth of (h_0). Since both summands are negative, ({\dot{w}}(a)<0). Consequently, ({\dot{w}}(b)={\dot{w}}(a)<0), and we find that the ends of the interval ((a_{\epsilon },b_{\epsilon })) are moving to the left. This concludes the proof. (\square ) Case (iv) is similar to case (i). However, we will encounter certain specific phenomena. First of all, we will increase the value of (h_{n_-+1}) to move the given gap to the left. But in order to fulfill the constraint of fixed gap length, both an increase and a decrease of (h_0) are possible. Lemma 2.11 Let (E(\alpha ,\delta )) with (\alpha \le 0) be such that the corresponding extremal polynomial (A_{n,\alpha ,\delta }(z)) corresponds to the case (iv). Let (\ell ) be defined as in (2.7) and let (\eta ), given by (2.8), be its zero. The infinitesimal variation ({\dot{w}}(x)) generated by increasing the height (h_{n_-+1}) under the constraint of a constant gap length ((\delta =\mathrm{const})) leads to an increase in (h_0) if (c_-<\eta ), and it leads to a decrease in (h_0) if (c_-\in (\eta ,-1)). In both cases, the gap is moving to the left, that is, (\alpha ({\epsilon })) is decreasing. Proof As before, we find that the infinitesimal variation is of the corresponding form. Respectively, the constraint ({\dot{w}}(b)={\dot{w}}(a)) determines (\sigma _0). We have that (\ell (c_-)<0) for (c_-<\eta ). Since (c_-<-1<a<c<b), this implies that (\sigma _0<0). As before, we conclude that ({\dot{w}}(a)<0), and the interval is moving to the left. If (c_-\in (\eta ,-1)), then (\ell (c_-)>0) and therefore (\sigma _0>0). In this case, ({\dot{w}}(b)<0) and again the interval is moving to the left. Finally, if (c_-=\eta ), we have (\ell (c_-)=0) and therefore (2.6) implies that (\sigma _0=0). (\square ) 2.3.1 The cases (ii) and (iii) We will discuss case (ii) and case (iii) simultaneously. Recall that case (ii) corresponds to an extension of E to the right and case (iii) to an extension to the left, i.e., (\theta (1)\in (\pi (n_+-1),\pi n_+)) or (\theta (-1)\in (\pi n_-,\pi ( n_-+1))). Let (\Pi \equiv \Pi ({\epsilon })), but let us increase the normalization point (\theta _{\epsilon }(\pm 1)). Let (w^\pm _{\epsilon }(z)=\theta _{\epsilon }^{-1}(\theta (z))) be the corresponding transition functions. Lemma 2.12 Let (w^\pm _{\epsilon }) be defined as above. Then, there exist (\rho ^\pm _{\epsilon }>0) such that (2.9) and (2.10) hold. Proof We only prove the claim for (w^+_{\epsilon }). Since (\Pi ({\epsilon })\equiv \Pi (0)), (w_{\epsilon }) is just an affine transformation, and using (w_{\epsilon }(-1)=-1) and (w_{\epsilon }(\infty )=\infty ) we find (2.9). Since (\theta _{\epsilon }(1)>\theta (1)), we obtain that (w_{\epsilon }(1)=\theta _{\epsilon }^{-1}(\theta (1))<\theta _{\epsilon }^{-1}(\theta _{\epsilon }(1))=1) and thus (\rho _{\epsilon }<1). Therefore, (2.10) follows. (\square ) Lemma 2.13 Let (E(\alpha ,\delta )) with (\alpha \le 0) be such that the corresponding extremal polynomial (A_{n,\alpha ,\delta }(z)) corresponds to the case (ii) (case (iii)). Then, the infinitesimal variation ({\dot{w}}(x)) generated by increasing (\theta (1)) (increasing (\theta (-1))) under the constraint of a constant gap length ((\delta =\mathrm{const})) leads to an increase (decrease) of (h_0). Moreover, in this variation the gap is moving to the left, that is, (\alpha ({\epsilon })) is decreasing. Proof We only prove the case (ii). Applying the constraint ({\dot{w}}(a)={\dot{w}}(b)) to the corresponding representation of ({\dot{w}}), we obtain the sign of (\sigma _0). Since (\ell (c)>0), this implies (\sigma _0<0) and thus (h_0) is increasing.
Moreover, ({\dot{w}}(a)<0), which concludes the proof. (\square ) We summarize our results in the following theorem: Theorem 2.14 Let (\eta ) be defined by (2.8). Then, we have: in case (i), (h_0) is increasing; in case (ii), (h_0) is increasing; in case (iii), (h_0) is decreasing; in case (iv), (h_0) is increasing if (c_-<\eta ) and decreasing if (c_->\eta ). In all cases, (\alpha ) is decreasing. Remark 2.15 We have seen in the proof of Lemma 2.5 that (w_{\epsilon }(x_0)) increases monotonically if some other slit height is decreased. Theorem 2.14, in particular case (iii) and case (iv), shows that such a monotonicity is lacking for Akhiezer polynomials, which makes the situation essentially different from the Remez extremal problem. 3 Reduction to Akhiezer Polynomials (Proof of Theorem 1.3, Part 2) The goal of this section is to finish the proof of Theorem 1.3. That is, if (x_0) is in an internal gap, then the extremal polynomial is an Akhiezer polynomial. Recall that, in contrast to Section 2.2, it is now possible that there is an extension outside of ([-1,1]); moreover, this is the generic situation. All possible types of extremizers were described in Corollary 2.2 and the discussion above the corollary. Thus, it remains to show that on ([-1,1]) the extremal configuration in fact only has one gap, i.e., the extremal set is of the form (E(\alpha ,\delta )) for some (\alpha \in (-1+\delta ,1-\delta )). First, we will show that E has at most two gaps on ([-1,1]). Let (T_n(z,E)) denote the extremizer of (1.1) for the set E. Lemma 3.1 Let (E\subset [-1,1]) and let (x_0) be in an internal gap of E. Then, there exists a two-gap set ({\tilde{E}}) such that (3.1) holds. Proof We will only prove the case that there are no boundary gaps. Moreover, let us assume that E is already maximal, i.e., (T_n^{-1}([-1,1],E)\cap [-1,1]=E). Let us write E with its gaps, and let us denote the gap which contains (x_0) by (a, b). Let (\Pi ) be the comb related to E, let us assume that the slit corresponding to (a, b) is placed at 0, and let c denote the critical point of (T_n) in (a, b). Assume that there are two additional gaps ((a_1,b_1)) and ((a_2,b_2)) with slit heights (h_k) and critical points (c_k), (k=1,2). We will vary the slit heights (h_k) and h such that (w_{\epsilon }(x_0)=x_0) and (|E_{\epsilon }|=|E|). Therefore, we get the variation (3.2). The constraints (3.3) will guarantee that (3.1) is satisfied. Our goal is to find (\sigma _1>0) and (\sigma ,\sigma _2) such that (3.3) is satisfied. Let us define the auxiliary function H. Due to the second constraint in (3.3), we have a representation in which f(z) is linear. Thus (f(z)=K(z-\xi )) or (f(z)=K). We need to check that we can always find (\xi ) such that the first constraint in (3.3) is satisfied. If ( \sum _{j=1}^{g}(H(b_j)-H(a_j))=0, ) we set (f(z)=K). If ( \sum _{j=1}^{g}(H(b_j)-H(a_j))\ne 0, ) we define (\xi ) accordingly. In any case, we define ({\dot{w}}) by (3.4) and (\sigma _{1}), (\sigma _2), (\sigma ) as the residues of this function at (c_1, c_2, c), respectively. Now we have to distinguish two cases. If (\xi \ne c_1), we can choose K so that (\sigma _1>0) and decrease (h_1). If at some point (\xi =c_1), we can choose K so that (h_{2}) will be decreased. Note that in this case (h_1) remains unchanged. In particular, it won't increase again. Hence, this procedure allows us to “erase” all but one additional gap. The case of boundary gaps works essentially in the same way, only using, instead of the variations above, variations as described in Lemma 2.12.
(\square ) End of the proof of Theorem 1.3 First, assume that the extremizer is in a generic position, that is, there is an extra interval (I_n\subset {\mathbb {R}}\setminus [-1,1]). We have (T_n(z,E)) with E of the form (3.5) and (|T_n(z,E)|\le 1) for (z\in E\cup I_n). The corresponding comb has three slits, whose heights we denote by (h_{out}), h, (h_1). Our goal is to show that we can reduce the value (h_1). Varying all three values, we get the corresponding infinitesimal variation as described by expression (3.2), with the parameters (\sigma _{out},\sigma , \sigma _1). Therefore, we can still satisfy the two constraints in (3.3) and choose one of the parameters positive. Since the direction of the variation of the heights (h_{out}) and h is not essential for us, we choose (\sigma _1>0), and therefore reduce the size of (h_1). In a degenerate case, we can use the variations described in Section 2.3. This is possible as long as either (\theta (1)) or (\theta (-1)) can be moved in both directions. Thus, it remains to consider the case that E is of the form (3.5), but the corresponding comb has only two non-trivial teeth of the heights h and (h_1). WLOG we assume that (b<a_1). We will apply the third variation, see Fig. 2, left. That is, we will reduce the value (h_1) and move the preimage of (-1) in the positive direction. According to (2.10), see also (2.11), we obtain the corresponding expression for ({\dot{w}}). Let us point out that with an arbitrary choice of the parameters (\sigma _1>0) and (\tau <0) we get an increasing function. Thus, with a small variation ({\epsilon }) of this kind we obtain an admissible set (E_{\epsilon }). On the other hand, ({\dot{w}}(z)) has a trivial zero (z=1) and a second one, which we denote by (y_-). Computing ({\dot{w}}) explicitly, we find that with different values of (\tau <0) and (\sigma _1>0) we can get an arbitrary value (y_-\in (-1,c_1)). Assume that (x_0<c) (recall our assumption (b<a_1); therefore, this is possible). Then, ({\dot{w}}(x_0)<0). For a small ({\epsilon }), we get (w_{\epsilon }(x_0)<x_0) and therefore (T_n(x_0,E_{\epsilon })>T_n(x_0,E)), that is, (T_n(z,E)) was not an extremal polynomial for the Andrievskii problem. If (x_0\in (c,b)), we choose (y_-<x_0). We repeat the same arguments, having in mind that in this range (T_n(z,E)) is decreasing. Thus, we arrive at the same contradiction (T_n(x_0,E_{\epsilon })>T_n(x_0,E)). (\square ) 4 Asymptotics (Proof of Theorem 1.5) The goal of this section is to prove Theorem 1.5 and to give a description of the upper envelope (1.5). 4.1 Totik–Widom bounds We need an asymptotic result for Akhiezer polynomials. Recall that (E(\alpha ,\delta )=[-1,1]\setminus (\alpha -\delta ,\alpha +\delta )) and that (A_{n,\alpha ,\delta }) denotes the associated Akhiezer polynomial. Moreover, we denote ({\hat{E}}_n(\alpha ,\delta )=A_{n,\alpha ,\delta }^{-1}([-1,1])=E(\alpha ,\delta )\cup I_n) and (\hat{\Omega }_n={\overline{{\mathbb {C}}}}\setminus {\hat{E}}_n). We have described the shape of the additional interval (I_n) in Theorem 2.1 and the discussion below it. The following lemma is known in a much more general context [11, Proposition 4.4]. Lemma 4.1 Let (\alpha ,\delta ) be fixed and (A_{n,\alpha ,\delta }) be the associated Akhiezer polynomial. Let (x_n\in I_n) denote the single zero of (A_{n,\alpha ,\delta }) outside of ([-1,1]). For any (y\in (\alpha -\delta ,\alpha +\delta )), the limit relation (4.1) holds. If we pass to a subsequence such that (\lim _{k\rightarrow \infty }x_{n_k}=x_\infty \in ({\mathbb {R}}\cup \{\infty \})\setminus (-1,1)), then (4.2) holds, where (\Omega (\alpha ,\delta )={\overline{{\mathbb {C}}}}\setminus E(\alpha ,\delta )).
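Before turning to the proof, the n-th root limit in (4.1) can be illustrated numerically in the classical single-interval situation. The following is a minimal sketch (ours, not the authors' code): for the Chebyshev polynomials of ([-1,1]) and a point y outside the interval, ((1/n)\log |T_n(y)|) approaches the Green function (\log (|y|+\sqrt{y^2-1})).

```python
# A minimal sketch (not from the paper) of n-th root asymptotics: for the
# classical Chebyshev polynomials T_n of [-1, 1] and |y| > 1,
# (1/n) * log|T_n(y)| converges to the Green function log(|y| + sqrt(y^2 - 1)).
import math

def cheb_abs(n, y):
    """|T_n(y)| for |y| > 1, via the identity T_n(cosh t) = cosh(n t)."""
    return math.cosh(n * math.acosh(abs(y)))

y = 1.7
green = math.log(abs(y) + math.sqrt(y * y - 1.0))
for n in (5, 20, 80):
    print(n, math.log(cheb_abs(n, y)) / n, green)
# The printed values approach `green` as n grows, already to several digits at n = 80.
```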
Proof of Theorem 1.5 We start with the case that the sup in (1.5) is attained at some internal point (\alpha _0\in (-1+\delta ,0]). Let (T_{n,\delta ,x_0}) be the extremizer of (1.2) and set ({\hat{E}}_n=T_{n,\delta ,x_0}^{-1}([-1,1])) and (\hat{\Omega }_n={\overline{{\mathbb {C}}}}\setminus {\hat{E}}_n). Due to Theorem 1.3, ({\hat{E}}_n) is either ([-1+2\delta ,1]) or ({\hat{E}}_n=E(\alpha _n,\delta )\cup I_n) for some (\alpha _n) and some additional interval outside of ([-1,1]). In the following, we will denote (G(x,\infty ,\Omega )=G(x,\Omega )), and we note that by definition (G(x,\Omega (\alpha ,\delta ))=G_{\alpha ,\delta }(x)). Put (E_n={\hat{E}}_n\cap [-1,1]) and (\Omega _n={\overline{{\mathbb {C}}}}\setminus E_n). Due to the extremality of (\alpha _0), we have the first comparison of Green functions. Since ({\hat{\Omega }}_n\subset \Omega _n), we get a further estimate. Using (4.1) or (1.4) and the extremality property of (T_{n,\delta ,x_0}(x_0)), we get an upper bound. On the other hand, equation (4.2) yields the reverse estimate with a constant C. By definition, (|T_{n,\delta ,x_0}(x_0)|=L_n(x_0)), and therefore combining all inequalities finishes the proof of (1.8). The proof for the boundary case is essentially the same. Only in the last step, due to representation (1.4), there is no extra constant C (due to the fact that there is no extension of the domain) and we get (1.7). (\square ) 4.2 The Asymptotic Diagram In this section, we introduce an asymptotic diagram which provides a description of the upper envelope (\Phi _\delta (x)). First of all, we prove Lemma 1.4. Proof of Lemma 1.4 Recall the explicit representation of Green functions for two intervals as elliptic integrals. For (a=\alpha -\delta , b=\alpha +\delta ), we have the representation (4.4), where the modulus is given by (4.5). If for fixed (x_0) the sup is attained at an internal point, then clearly (4.6) holds. Thus, it remains to show that (4.6) has a unique solution (x_0(\alpha )). Due to (4.4), we can compute ({\partial }_\alpha G_{\alpha ,\delta }(x)). Since (\dot{a}=\dot{b}=1), we get (4.7). Note that the distance (|c(\alpha )-b(\alpha )|) monotonically increases with (|\alpha |); hence, (\dot{c}(\alpha )>1). Therefore, we get ({\partial }_x{\partial }_\alpha G_{\alpha ,\delta }(x)>0) in (4.7). That is, ({\partial }_\alpha G_{\alpha ,\delta }(x)) is increasing for (x\in (a,b)). Moreover, (\lim _{x\rightarrow a} {\partial }_\alpha G_{\alpha ,\delta }(x)=-\infty ) and (\lim _{x\rightarrow b} {\partial }_\alpha G_{\alpha ,\delta }(x)=+\infty ), and we obtain that a zero (x_0(\alpha )) of the function ({\partial }_\alpha G_{\alpha ,\delta }(x)) in (a, b) exists and is unique. (\square ) Thus, the limiting behavior of (L_{n,\delta }(x_0)), (n\rightarrow \infty ), can be distinguished by a diagram with two curves, which we describe in the following proposition, see also Example 4.5. Proposition 4.2 In the range (x\in [-1,0]), (\Phi _\delta (x)) represents the upper envelope of the following two curves. The first one is given explicitly by (4.8), and the second one is given in parametric form by (4.9). Moreover, the end points of the latter curve are given explicitly. Proof According to Lemma 1.4, if (\Phi _\delta (x)) is assumed at the end point (\alpha \rightarrow -1+\delta ), then it is the Green function of the complement of the interval ([-1+2\delta ,1]), which has the well-known representation (4.8). Alternatively, (x=x_0(\alpha )) and (\Phi _\delta (x)=G_{\alpha ,\delta }(x_0(\alpha ))) for a certain (\alpha ) in the range, which is (4.9). Further, for (\alpha =0), (G_{0,\delta }(x)) is the Green function related to two symmetric intervals.
Due to the symmetry, (x_0(0)=0), and (G_{0,\delta }(x)) can be reduced to the Green function of a single interval ([\delta ^2,1]). We get (4.10). Thus, it remains to prove the last statement of the proposition. Our proof is based on expressing ({\partial }_\alpha G_{\alpha ,\delta }) in terms of elliptic integrals and manipulating those. It will be convenient to make a standard substitution in (4.4). Let (\xi (\psi ,\alpha )=\alpha -\delta \cos \psi ); then we obtain (4.11). Differentiating (4.11), we get the corresponding decomposition into integrals. Let ({\epsilon }={\epsilon }(\alpha )) be such that ({\epsilon }\rightarrow 0) as (\alpha \rightarrow -1+\delta ), to be chosen later on, and let (x(\alpha )) be given by (4.12). Direct estimations show that the first integrals tend to zero, with (\lim _{\alpha \rightarrow -1+\delta }a_1(\alpha )>0). The integral (I_3(x(\alpha ),\alpha )) also tends to zero, but we will need a more precise decomposition, with (\lim _{\alpha \rightarrow -1+\delta }b_1(\alpha )>0). Indeed, for (\psi ) sufficiently small we can use the expansion in which (O(\psi ^4)) is uniform in (\alpha ), and we get (4.13). At the same time, collecting all three terms, we obtain (4.14), where (\varkappa _1(\alpha )=\sqrt{a_1(\alpha )b_1(\alpha )}), (\varkappa (\alpha )=\sqrt{a_1(\alpha )/b_1(\alpha )}), and (\varkappa _2(\alpha )) is chosen appropriately. Recall that (\varkappa _1(\alpha )) and (\varkappa (\alpha )) have positive and finite limits as (\alpha \rightarrow -1+\delta ). Similar manipulations with elliptic integrals show that (\dot{c}(\alpha )\rightarrow +\infty ) as (\alpha \rightarrow -1+\delta ). In fact, the rate of this divergence is computed in the Appendix, but such accuracy is not needed for our purpose. Having these estimations, we can find a suitable interval ([x_-(\alpha ),x_+(\alpha )]) with the limit (4.15), such that ({\partial }_\alpha G_{\alpha ,\delta }(x)) changes its sign in it, and therefore this interval contains the unique solution of the equation ({\partial }_\alpha G_{\alpha ,\delta }(x)=0). Define ({\epsilon }_\pm (\alpha )) accordingly. Since (\dot{c}(\alpha ){\epsilon }_\pm (\alpha )^3=o(1)), (4.14) takes the form (4.16), where (x_\pm (\alpha )) is defined by (4.12) for ({\epsilon }={\epsilon }_\pm (\alpha )). For (\dot{c}(\alpha )) sufficiently large, we obtain in (4.16) both positive and negative values, and simultaneously we have (4.15). Consequently, (x_0(\alpha )\rightarrow -1+2\delta ). (\square ) Corollary 4.3 Let (\delta _*) be the unique solution of the equation (G_{\delta }(0)=G_{0,\delta }(0)); numerically, (\delta _*=0.543689...). Then, for (\delta <\delta _*), (\Phi _{\delta }(x)) does not coincide identically with (G_\delta (x)) (in its range (x\in (-1,0])). On the other hand, for an arbitrary (\delta >0) there exists (x_*(\delta )>-1) such that (\Phi _{\delta }(x)) and (G_\delta (x)) coincide on ([-1,x_*(\delta ))). Proof The first claim follows by a direct comparison of (G_\delta (0)) and (G_{0,\delta }(0)), see (4.8) and (4.10). For a fixed (\delta ), we define (x_*(\delta )) by (4.17). By (4.10) and continuity, (x_*(\delta )>-1). Thus, the curve (4.9) does not intersect the range ([-1,x_*(\delta ))). (\square ) Remark 4.4 We do not claim here that ([-1,x_*(\delta ))), with (x_*(\delta )) given by (4.17), is the largest possible interval on which (\Phi _\delta (x)=G_\delta (x)); see Example 4.5 for details. (Fig. 3: The asymptotic diagram for (\delta =0.4).) Example 4.5 A numerical example of the asymptotic diagram for (\delta =0.4) is given in Fig. 3 (diagrams for other values of (\delta <0.5) look much the same). Let (x_s(\delta )) be the switching point between the two (Remez and Akhiezer) extremal configurations, i.e., the point at which the two branches of the envelope meet. Recall that (x_*(\delta )) was defined in (4.17).
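As an aside, the explicit branch (4.8) of the diagram is elementary to evaluate. Here is a minimal numerical sketch (ours, not the authors' code); it assumes only the standard formula (G(x)=\log (|w|+\sqrt{w^2-1})) for the Green function of the complement of an interval, with w the affine image of x in ([-1,1]).

```python
# A minimal sketch (ours): the Green function of C \ [a, b] with pole at
# infinity, evaluated at real points, via the change of variables
# w = (2x - a - b) / (b - a) and G = log(|w| + sqrt(w^2 - 1)) for |w| > 1.
import numpy as np

def green_interval(x, a, b):
    w = np.abs((2.0 * x - (a + b)) / (b - a))
    return np.where(w > 1.0,
                    np.log(w + np.sqrt(np.maximum(w * w - 1.0, 0.0))),
                    0.0)

delta = 0.4                                   # as in Example 4.5
a, b = -1.0 + 2.0 * delta, 1.0                # E(delta) = [-1 + 2*delta, 1]
xs = np.linspace(-0.999, a, 200)              # points to the left of E(delta)
G_delta = green_interval(xs, a, b)            # the curve (4.8)
print(float(G_delta[0]), float(G_delta[-1]))  # decays to 0 at x = -1 + 2*delta
```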
On the diagram, we can observe the following four regions: ((-1, x_*(\delta ))), ((x_*(\delta ),x_s(\delta ))), ((x_s(\delta ),-1+2\delta )) and ((-1+2\delta ,0)). Note that we discuss the case (-1+2\delta <0), i.e., (\delta <0.5<\delta _*). (x\in (-1+2\delta ,0)): In this case, (x\in (\alpha -\delta ,\alpha +\delta )) implies that such an interval is a subset of ((-1,1)) even in the leftmost position (\alpha =-\delta +x). Therefore, the function (G_{\alpha ,\delta }(x)) for a fixed x and (\alpha \in (x-\delta ,x+\delta )) attains its maximum at some internal point and we get the case ({\partial }_\alpha G_{\alpha ,\delta }(x)=0). (x\in (x_s(\delta ),-1+2\delta )): As soon as (x<-1+2\delta ), the left boundary for a possible value of (\alpha ) is given by (\alpha -\delta =-1). Respectively, the supremum of (G_{\alpha ,\delta }(x)) for a fixed x can be attained either at an internal point (\alpha \in (-1+\delta ,x+\delta )) or as the limit at the left end point. In this range, it is still attained at an internal point. Note that besides the local maximum, the function has a local minimum (the second branch of the curve (4.9) with the same coordinate (x_0(\alpha )=x)). (x\in (x_*(\delta ),x_s(\delta ))): For such x, the function (G_{\alpha ,\delta }(x)) still has its local maximum and minimum, but the biggest value is attained at the boundary point (\alpha =-1+\delta ), i.e., (\Phi _\delta (x)=G_\delta (x)). (x\in (-1,x_*(\delta ))): At (x=x_*(\delta )), the points of local maximum and minimum of the function (G_{\alpha ,\delta }(x)) collide, that is, in fact, they produce an inflection point. The function (G_{\alpha ,\delta }(x)) becomes monotonic in this region. Its supremum is the limit at the boundary point (\alpha =-1+\delta ); see the second claim in Corollary 4.3. Notes (Footnote 1): We would like to point out that, up to some trivial degenerations, (A_{n,\alpha ,\delta }) is different from the polynomial of degree n that has maximal leading coefficient and is bounded by one on the set (E(\alpha ,\delta )). References 1. Achyeser, N. [N.I. Akhiezer]: Über einige Funktionen, die in gegebenen Intervallen am wenigsten von Null abweichen. Izv. Kazan. Fiz.-Mat. Obshch. (3) 3, 1–69 (1928) 2. Achyeser, N.: Über einige Funktionen, welche in zwei gegebenen Intervallen am wenigsten von Null abweichen, I, II, III. Izv. Akad. Nauk SSSR, 1932, 1163–1202; 1933, 309–344, 499–536 3. Akhiezer, N.I.: Lectures on Approximation Theory, 2nd rev. ed. Nauka, Moscow (1965); German transl.: Akademie-Verlag, Berlin (1967); Engl. transl. of 1st ed.: Ungar, New York (1956) 4. Akhiezer, N.I.: Elements of the Theory of Elliptic Functions. Translations of Mathematical Monographs, vol. 79. American Mathematical Society, Providence, RI (1990). Translated from the second Russian edition by H.H. McFaden 5. Andrievskii, V.: Pointwise Remez-type inequalities in the unit disk. Constr. Approx. 22(3), 297–308 (2005) 6. Andrievskii, V.: Local Remez-type inequalities for exponentials of a potential on a piecewise analytic arc. J. Anal. Math. 100, 323–336 (2006) 7. Aptekarev, A.I., Draux, A., Tulyakov, D.N.: On asymptotics of the sharp constants of the Markov–Bernshtein inequalities for the Sobolev spaces. Lobachevskii J. Math. 39(5), 609–622 (2018) 8. Brudnyi, Ju.A., Ganzburg, M.I.: A certain extremal problem for polynomials in n variables. Izv. Akad. Nauk SSSR Ser. Mat. 37, 344–355 (1973) (Russian)
9. Brudnyi, Ju.A., Ganzburg, M.I.: A certain extremal problem for polynomials in n variables. Engl. transl.: Math. USSR-Izv. 7, 345–356 (1973) 10. Christiansen, J.S., Simon, B., Zinchenko, M.: Asymptotics of Chebyshev polynomials, I: subsets of ({\mathbb{R}}). Invent. Math. 208(1), 217–245 (2017) 11. Christiansen, J.S., Simon, B., Yuditskii, P., Zinchenko, M.: Asymptotics of Chebyshev polynomials, II: DCT subsets of ({\mathbb{R}}). Duke Math. J. 168(2), 325–349 (2019) 12. Conway, J.B.: Functions of One Complex Variable II. Graduate Texts in Mathematics, vol. 159. Springer-Verlag, New York (1995) 13. Eichinger, B.: Szegő–Widom asymptotics of Chebyshev polynomials on circular arcs. J. Approx. Theory 217, 15–25 (2017) 14. Eichinger, B., Yuditskii, P.: The Ahlfors problem for polynomials. Mat. Sb. 209(3), 34–66 (2018) 15. Erdélyi, T.: Remez-type inequalities on the size of generalized polynomials. J. London Math. Soc. (2) 45(2), 255–264 (1992) 16. Erdélyi, T., Li, X., Saff, E.B.: Remez- and Nikolskii-type inequalities for logarithmic potentials. SIAM J. Math. Anal. 25(2), 365–383 (1994) 17. Eremenko, A., Yuditskii, P.: Comb functions. In: Recent Advances in Orthogonal Polynomials, Special Functions, and Their Applications. Contemp. Math., vol. 578, pp. 99–118. Amer. Math. Soc., Providence, RI (2012) 18. Garnett, J.B., Marshall, D.E.: Harmonic Measure. New Mathematical Monographs, vol. 2. Cambridge University Press, Cambridge (2008). Reprint of the 2005 original 19. Kalmykov, S., Nagy, B., Totik, V.: Bernstein- and Markov-type inequalities for rational functions. Acta Math. 219(1), 21–63 (2017) 20. Koosis, P.: The Logarithmic Integral I. Cambridge Studies in Advanced Mathematics, vol. 12. Cambridge University Press, Cambridge (1998). Corrected reprint of the 1988 original 21. Landkof, N.S.: Foundations of Modern Potential Theory. Die Grundlehren der mathematischen Wissenschaften, Band 180. Springer-Verlag, New York–Heidelberg (1972). Translated from the Russian by A.P. Doohovskoy 22. Pommerenke, C.: Univalent Functions. Studia Mathematica/Mathematische Lehrbücher, Band XXV. Vandenhoeck & Ruprecht, Göttingen (1975). With a chapter on quadratic differentials by Gerd Jensen 23. Ransford, T.: Potential Theory in the Complex Plane. London Mathematical Society Student Texts, vol. 28. Cambridge University Press, Cambridge (1995) 24. Remes, E.: Sur une propriété extremale des polynômes de Tchebychef. Commun. Inst. Sci. Math. et Mecan. 13, 93–95 (1936) 25. Sodin, M., Yuditskii, P.: Functions that deviate least from zero on closed subsets of the real axis. Algebra i Analiz 4(2), 1–61 (1992) 26. Tikhonov, S., Yuditskii, P.: Sharp Remez inequality. Constr. Approx. 52, 233–246 (2020) 27. Widom, H.: Extremal polynomials associated with a system of curves in the complex plane. Adv. Math. 3, 127–232 (1969) Funding: Open access funding provided by Johannes Kepler University Linz.
Author information: Abteilung für Dynamische Systeme und Approximationstheorie, Johannes Kepler Universität Linz, 4040 Linz, Austria (B. Eichinger and P. Yuditskii). Corresponding author: B. Eichinger. Communicated by Doron Lubinsky. Dedicated to A. Aptekarev on the occasion of his 65th birthday. B.E. was supported by the Austrian Science Fund FWF, projects J4138 and P33885; P.Y. was supported by the Austrian Science Fund FWF, project P32855. Appendix A. Lemma on the limit of (\dot{c}(\alpha )) Lemma A.1 Set (\alpha =-1+\delta (1+\frac{1}{2} {\epsilon }^2)). Then, (\dot{c}(\alpha )) tends to (+\infty ) as ({\epsilon }\rightarrow 0) with the rate given in (A.1). Proof By (4.5), we have an explicit integral representation. Making the corresponding change of variables, we bring it to a more convenient form. Thus, introducing suitable notation, we obtain an expression for (\dot{c}(\alpha )). Since ({\partial }_\alpha \xi (\phi ,\alpha )=1), it simplifies further, and using the definition of (\xi (\alpha ,\phi )), we get a reduced form. Now we insert (\alpha =-1+\delta (1+\frac{1}{2} {\epsilon }^2)), ({\epsilon }\rightarrow 0). For a sufficiently small (\phi _0), we split the integral into parts. Recall that (\xi (\pi ,\alpha )=b); therefore, the first limit exists, and the remaining terms are controlled. Moreover, for (I_3) we get the corresponding estimate. As before, we can split up (I_2) and estimate it. Collecting all terms and inserting them into (A.1) yields the claim. (\square ) Cite this article: Eichinger, B., Yuditskii, P.: Pointwise Remez inequality. Constr. Approx. 54, 529–554 (2021). Received: 30 July 2020. Revised: 22 April 2021. Accepted: 27 May 2021. Published: 08 November 2021. Issue date: December 2021.
algebraic topology - If a manifold M has zero Euler characteristic, there is a non-vanishing vector field on it - Mathematics Stack Exchange
===============

Asked 14 years, 1 month ago. Modified 14 years, 1 month ago. Viewed 4k times. Score 11.

There is a hint: if a vector field on M has isolated singular points, find a diffeomorphism that moves these singular points into any neighborhood you want. How can we proceed from there?

Tags: algebraic-topology

asked Jun 24, 2011 at 9:35 by henry; edited Jun 25, 2011 at 8:26 by Grigory M

Comments:
- @Javier That explains why zero Euler char is a necessary condition for existence of a non-vanishing vector field. But OP asks why it is sufficient — and AFAICS the linked post doesn't help much. – Grigory M, Jun 25, 2011 at 18:45
- @Grigory M: Indeed, you are right! Sorry, I read the problem backwards, as I had just posted about index theorems giving the hairy ball theorem as an amusing consequence and thought it might be of conceptual help. Your obstruction solution is of much more help anyway. – Javier Álvarez-Vizoso, Jun 25, 2011 at 19:01
- @Javier Actually... you were right, it's just degree theory, essentially. – Grigory M, Jun 26, 2011 at 12:00

1 Answer

Answer (score 15):

// Let me begin with an obstruction-theoretic solution — in hope that someone might find it of interest. (For simplicity, let $\pi_1(M)=0$.)
A non-vanishing vector field is a section of the spherization of $TM$, a bundle with fiber $S^{n-1}$. Obstructions to finding a section of a bundle with fiber $S^{n-1}$ lie in the groups $H^k(M;\pi_{k-1}S^{n-1})$. These groups are trivial for $k<n$ and for $k>n$ (since $M$ is $n$-dimensional). So the only non-trivial obstruction is the principal obstruction $\chi\in H^n(M;\pi_{n-1}(S^{n-1}))=\mathbb{Z}$. And it's not hard to show that it coincides with the Euler characteristic (indeed, the value of the obstruction on an $n$-cell is the degree of the vector field on the bounding sphere — which coincides with the sum of indices of singular points of an extension of the field inside the cell).

// Reference (obstruction-theoretic approach to characteristic classes): Milnor–Stasheff, Section 12.

This also explains how to solve the problem directly (actually, it's the same solution in slightly different language). Take any vector field $v$ on $M$, and choose some sphere containing all singular points. The degree (aka index) of the field on the sphere is exactly $\chi(M)=0$ — which means exactly that there is a non-vanishing extension of the field from the sphere to the ball (coinciding with $v$ on the boundary, but not inside the ball).

answered Jun 25, 2011 at 8:09 by Grigory M; edited Jun 26, 2011 at 11:55

Comment:
- I don't see how having $\chi(M)=0$ means the extension is non-vanishing. – J126, Jul 27, 2012 at 2:43
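(Added note, not part of the original thread: the degree-theory step can be summarized by a short chain of standard facts, stated here for a closed $n$-manifold $M$, a vector field $v$ with isolated zeros, and an embedded ball $B$ containing all of them.)

```latex
% Not from the original answer: Poincare-Hopf plus Hopf's extension theorem,
% under the assumptions listed in the note above.
\[
  \deg\bigl(v|_{\partial B}\colon \partial B \to S^{n-1}\bigr)
  \;=\; \sum_{p \,:\, v(p)=0} \operatorname{ind}_p(v)
  \;=\; \chi(M) \;=\; 0 .
\]
% A map S^{n-1} -> S^{n-1} of degree 0 is null-homotopic (Hopf's theorem),
% hence extends continuously over the ball B; normalizing this extension
% yields a vector field on all of M with no zeros, which is exactly the
% point raised in the comment above.
```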
Expo. Math. 24 (2006) 337–369, www.elsevier.de/exmath

Zero-sum problems in finite abelian groups: A survey

Weidong Gao (a), Alfred Geroldinger (b,*)
(a) Center for Combinatorics, Nankai University, Tianjin 300071, PR China
(b) Institut für Mathematik und Wissenschaftliches Rechnen, Karl-Franzens Universität, Heinrichstrasse 36, 8010 Graz, Austria

Received 19 December 2005

Abstract. We give an overview of zero-sum theory in finite abelian groups, a subfield of additive group theory and combinatorial number theory. In doing so we concentrate on the algebraic part of the theory and on the development since the appearance of the survey article by Y. Caro in 1996. © 2006 Elsevier GmbH. All rights reserved.

MSC 2000: 11B50; 11P70; 11B75

Keywords: Zero-sum sequences; Finite abelian groups

1. Introduction

Let G be an additive finite abelian group. In combinatorial number theory a finite sequence S = (g_1, ..., g_l) = g_1 · ... · g_l of elements of G, where the repetition of elements is allowed and their order is disregarded, is simply called a sequence over G, and S is called a zero-sum sequence if g_1 + ... + g_l = 0. A typical direct zero-sum problem studies conditions which ensure that given sequences have non-empty zero-sum subsequences with prescribed properties. The associated inverse zero-sum problem studies the structure of extremal sequences which have no such zero-sum subsequences.

* Corresponding author. E-mail address: [email protected] (A. Geroldinger). doi:10.1016/j.exmath.2006.07.002

These investigations were initiated by a result of P. Erdős, A. Ginzburg and A. Ziv, who proved that 2n − 1 is the smallest integer l ∈ N such that every sequence S over a cyclic group of order n has a zero-sum subsequence of length n (see ). Some years later, P.C. Baayen, P. Erdős and H. Davenport (see [138,45,143]) posed the problem to determine the smallest integer l ∈ N such that every sequence S over G of length |S| ⩾ l has a zero-sum subsequence. In subsequent literature that integer l has been called the Davenport constant of G. It is denoted by D(G), and its precise value – in terms of the group invariants of G – is still unknown in general. These problems were the starting points for much research, as it turned out that questions of this type occur naturally in various branches of combinatorics, number theory and geometry. Conversely, zero-sum problems have greatly influenced the development of various subfields of these areas (among others, zero-sum Ramsey theory was initiated by the works of A. Bialostocki and P. Dierker). So there are intrinsic connections with graph theory, Ramsey theory and geometry (see [119,4,12,13] for some classical papers and [11,10,105,14,108,40,123] for some recent papers). The following observation goes back to H. Davenport: If R is the ring of integers of some algebraic number field with ideal class group (isomorphic to) G, then D(G) is the maximal number of prime ideals (counted with or without multiplicity) which occur in the prime ideal decomposition of aR for irreducible elements a ∈ R. Indeed, in the theory of non-unique factorizations it has turned out that the monoid of all zero-sum sequences over G closely reflects the arithmetic of a Krull monoid which has class group G and every class contains a prime (see [96, Corollary 3.4.12]).
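As a concrete aside (ours, not part of the survey), the Erdős–Ginzburg–Ziv theorem quoted above can be verified by brute force for a small n; the sketch below checks both the bound 2n − 1 and the sharpness of the classical example 0^{n−1} 1^{n−1}.

```python
# Brute-force check (illustration only) of the Erdos-Ginzburg-Ziv theorem:
# any 2n - 1 elements of Z_n contain n elements summing to 0 mod n, while
# the sequence 0^(n-1) 1^(n-1) of length 2n - 2 has no such subsequence.
from itertools import combinations, combinations_with_replacement

def has_n_term_zero_sum(seq, n):
    return any(sum(c) % n == 0 for c in combinations(seq, n))

n = 4
assert all(has_n_term_zero_sum(s, n)
           for s in combinations_with_replacement(range(n), 2 * n - 1))
assert not has_n_term_zero_sum((0,) * (n - 1) + (1,) * (n - 1), n)
print("EGZ verified for n =", n)
```

Enumerating multisets (rather than all tuples) suffices here, since only the residues and their multiplicities matter.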
On the other hand, it was factorization theory which promoted the investigation of inverse zero-sum problems, which appear naturally in that area. Apart from all that, zero-sum problems occur in various types of number theoretical topics (such as Carmichael numbers , Artin's conjecture on additive forms or permutation matrices ). Zero-sum problems are tackled with a huge variety of methods. First of all we mention methods from additive group theory including all types of addition theorems (see [96,136,137,142,112,107,109,103,133,124,9]). Furthermore, group algebras , results from the covering area [164,132,80], from linear algebra [32,31] and polynomial methods [2,3] play crucial roles. Moreover, in the meantime zero-sum theory has already developed its own methods and a wealth of results which promote its further development. The first survey article on zero-sum theory, written by Y. Caro, appeared 10 years ago in 1996 (see [23,24]). The aim of the present article is to sketch the development in the last decade and to give an overview of the present state of the area under the following two restrictions. First, we do not outline the relationships to other areas, such as graph theory, Ramsey theory or the theory of non-unique factorizations, but we restrict to what is sometimes called the algebraic part of zero-sum theory. Second, although since the 1960s zero-sum problems were studied also in the setting of non-abelian groups (see [36,146,150,147,148,170,63,39,172]), we restrict to the case of abelian groups. Since Y. Caro's article has an extended bibliography on the literature until 1994, we also refer to his bibliography and concentrate on papers having appeared since that time. In Section 2, we fix our notation and terminology, and we give the definitions of the key invariants. Then in the subsequent sections we present the state of knowledge on these invariants and on the associated inverse problems. Throughout this article, let G be an additive finite abelian group and let G• = G \ {0}.

2. Preliminaries

Let N denote the set of positive integers, P ⊂ N the set of all prime numbers and let N0 = N ∪ {0}. For integers a, b ∈ Z we set [a, b] = {x ∈ Z | a ⩽ x ⩽ b}, and for c ∈ N let N⩾c = N \ [1, c − 1]. For a real number x, we denote by ⌊x⌋ the largest integer that is less than or equal to x, and by ⌈x⌉ the smallest integer that is greater than or equal to x. Throughout, all abelian groups will be written additively. For n ∈ N, let C_n denote a cyclic group with n elements, and let nG = {ng | g ∈ G}. By the Fundamental Theorem of Finite Abelian Groups we have

G ≅ C_{n_1} ⊕ · · · ⊕ C_{n_r} ≅ C_{q_1} ⊕ · · · ⊕ C_{q_s},

where r = r(G) ∈ N0 is the rank of G, s = r*(G) ∈ N0 is the total rank of G, n_1, ..., n_r ∈ N are integers with 1 < n_1 | ... | n_r, and q_1, ..., q_s are prime powers. Moreover, n_1, ..., n_r, q_1, ..., q_s are uniquely determined by G, and we set

d*(G) = ∑_{i=1}^{r} (n_i − 1)  and  k*(G) = ∑_{i=1}^{s} (q_i − 1)/q_i.

Clearly, n_r = exp(G) is the exponent of G, and if |G| = 1, then r(G) = d*(G) = k*(G) = 0 and exp(G) = 1. Let s ∈ N. An s-tuple (e_1, ..., e_s) of elements of G is said to be independent if e_i ≠ 0 for all i ∈ [1, s] and, for every s-tuple (m_1, ..., m_s) ∈ Z^s,

∑_{i=1}^{s} m_i e_i = 0 implies m_1 e_1 = · · · = m_s e_s = 0.

An s-tuple (e_1, ..., e_s) of elements of G is called a basis if it is independent and G = ⟨e_1⟩ ⊕ · · · ⊕ ⟨e_s⟩.
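To make the two invariants just defined concrete, here is a tiny sketch (ours; the function names are ours) computing d*(G) from the invariant factors and k*(G) from the primary decomposition, e.g. for G = C_2 ⊕ C_6 ≅ C_2 ⊕ C_2 ⊕ C_3.

```python
# A small illustration (ours) of d*(G) and k*(G): d* sums n_i - 1 over the
# invariant factors, k* sums (q_i - 1)/q_i over the prime-power components.
from fractions import Fraction

def d_star(invariant_factors):
    return sum(n - 1 for n in invariant_factors)

def k_star(prime_powers):
    return sum(Fraction(q - 1, q) for q in prime_powers)

print(d_star((2, 6)))     # G = C_2 + C_6: (2-1) + (6-1) = 6
print(k_star((2, 2, 3)))  # primary decomposition (2, 2, 3): 1/2 + 1/2 + 2/3 = 5/3
```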
We write sequences multiplicatively and consider them as elements of the free abelian monoid over G, a point of view which was put forward by the requirements of the theory of non-unique factorizations. Thus we have at our disposal all notions from elementary divisibility theory, which provides a suitable framework when dealing with subsequences of given sequences, and we may apply algebraic concepts in a natural way.

Let F(G) be the free abelian monoid, multiplicatively written, with basis G. The elements of F(G) are called sequences over G. We write sequences S ∈ F(G) in the form

S = ∏_{g∈G} g^{v_g(S)}  with  v_g(S) ∈ N_0 for all g ∈ G.

We call v_g(S) the multiplicity of g in S, and we say that S contains g if v_g(S) > 0. S is called squarefree if v_g(S) ⩽ 1 for all g ∈ G. The unit element 1 ∈ F(G) is called the empty sequence. A sequence S_1 is called a subsequence of S if S_1 | S in F(G) (equivalently, v_g(S_1) ⩽ v_g(S) for all g ∈ G), and it is called a proper subsequence of S if it is a subsequence with 1 ≠ S_1 ≠ S. If a sequence S ∈ F(G) is written in the form S = g_1 · ... · g_l, we tacitly assume that l ∈ N_0 and g_1, ..., g_l ∈ G.

For a sequence

S = g_1 · ... · g_l = ∏_{g∈G} g^{v_g(S)} ∈ F(G),

we call

|S| = l = Σ_{g∈G} v_g(S) ∈ N_0 the length of S,

h(S) = max{v_g(S) | g ∈ G} ∈ [0, |S|] the maximum of the multiplicities of S,

k(S) = Σ_{i=1}^{l} 1/ord(g_i) ∈ Q_{⩾0} the cross number of S,

supp(S) = {g ∈ G | v_g(S) > 0} ⊂ G the support of S,

σ(S) = Σ_{i=1}^{l} g_i = Σ_{g∈G} v_g(S)g ∈ G the sum of S,

Σ_k(S) = {Σ_{i∈I} g_i | I ⊂ [1, l] with |I| = k} the set of k-term subsums of S, for all k ∈ N,

Σ_{⩽k}(S) = ⋃_{j∈[1,k]} Σ_j(S), Σ_{⩾k}(S) = ⋃_{j⩾k} Σ_j(S),

and Σ(S) = Σ_{⩾1}(S) the set of (all) subsums of S.

The sequence S is called
• zero-sumfree if 0 ∉ Σ(S),
• a zero-sum sequence if σ(S) = 0,
• a minimal zero-sum sequence if it is a non-empty zero-sum sequence and every proper subsequence is zero-sumfree,
• a short zero-sum sequence if it is a zero-sum sequence of length |S| ∈ [1, exp(G)].

We denote by B(G) the set of all zero-sum sequences and by A(G) the set of all minimal zero-sum sequences. Then B(G) ⊂ F(G) is a submonoid (also called the block monoid over G); it is a Krull monoid, and A(G) is the set of atoms of B(G) (see [96, Proposition 2.5.6]).

For any map of abelian groups φ: G → G′, there exists a unique homomorphism φ̄: F(G) → F(G′) with φ̄|_G = φ. Usually, we simply write φ instead of φ̄. Explicitly, φ: F(G) → F(G′) is given by φ(g_1 · ... · g_l) = φ(g_1) · ... · φ(g_l) for all l ∈ N_0 and g_1, ..., g_l ∈ G. If S ∈ F(G), then |φ(S)| = |S| and supp(φ(S)) = φ(supp(S)). If φ: G → G′ is even a homomorphism, then σ(φ(S)) = φ(σ(S)), Σ(φ(S)) = φ(Σ(S)) and φ(B(G)) ⊂ B(G′). In particular, we use the inversion (g ↦ −g) and the translation (g ↦ g_0 + g), and for S = g_1 · ... · g_l ∈ F(G) we set −S = (−g_1) · ... · (−g_l) and g_0 + S = (g_0 + g_1) · ... · (g_0 + g_l) ∈ F(G).

If g ∈ G is a non-zero element and S = (n_1g) · ... · (n_lg), where l ∈ N_0 and n_1, ..., n_l ∈ [1, ord(g)], then

∥S∥_g = (n_1 + · · · + n_l)/ord(g)

is called the g-norm of S. If S is a zero-sum sequence for which {0} ≠ ⟨supp(S)⟩ ⊂ G is a finite cyclic group, then

ind(S) = min{∥S∥_g | g ∈ G with ⟨supp(S)⟩ = ⟨g⟩} ∈ N_0

is called the index of S. We set ind(1) = 0, and if supp(S) = {0}, then we set ind(S) = 1.

Next we give the definition of the zero-sum invariants which we are going to discuss in the subsequent sections. We concentrate on invariants dealing with general sequences, as introduced in Definition 2.1.
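These notions are straightforward to make executable. The following Python code is our own modelling (a group is given by its invariant factors, elements are coordinate tuples); it implements ord(g), σ(S), Σ(S), k(S) and the zero-sumfree test, and checks them on a maximal zero-sumfree sequence over C_3 ⊕ C_3.

```python
from fractions import Fraction
from itertools import combinations
from math import gcd, lcm

def order(g, mods):
    # ord(g) is the lcm of the orders of the coordinates of g
    return lcm(*(m // gcd(x, m) for x, m in zip(g, mods)))

def sigma(S, mods):
    # sigma(S), the sum of the sequence S
    return tuple(sum(g[j] for g in S) % m for j, m in enumerate(mods))

def subsums(S, mods):
    # Sigma(S), the set of all non-empty subsums of S
    return {sigma([S[i] for i in idx], mods)
            for k in range(1, len(S) + 1)
            for idx in combinations(range(len(S)), k)}

def cross_number(S, mods):
    # k(S) = sum of 1/ord(g_i) over the terms of S
    return sum(Fraction(1, order(g, mods)) for g in S)

def is_zero_sumfree(S, mods):
    return (0,) * len(mods) not in subsums(S, mods)

# Over G = C_3 + C_3 the sequence e_1^2 e_2^2 is zero-sumfree of length
# d(C_3 + C_3) = 4 and has cross number 4/3:
mods, S = (3, 3), ((1, 0), (1, 0), (0, 1), (0, 1))
assert is_zero_sumfree(S, mods) and cross_number(S, mods) == Fraction(4, 3)
```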
However, by an often-used technique, problems on general sequences are reduced to problems on squarefree sequences, and thus we briefly deal also with invariants on squarefree sequences (or, in other words, with sets), as introduced in Definition 2.2.

Definition 2.1. Let exp(G) = n and k, m ∈ N with k ∤ exp(G). We denote by
• D(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l has a non-empty zero-sum subsequence. The invariant D(G) is called the Davenport constant of G.
• d(G) the maximal length of a zero-sumfree sequence over G.
• η(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l has a short zero-sum subsequence.
• s_{mn}(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l has a zero-sum subsequence T of length |T| = mn. In particular, we set s(G) = s_n(G).
• s_{nN}(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l has a non-empty zero-sum subsequence T of length |T| ≡ 0 mod n.
• E_k(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l has a zero-sum subsequence T with k ∤ |T|.
• b(G) the smallest integer l ∈ N_0 with the following property: for every zero-sumfree sequence S ∈ F(G) of length |S| ⩾ l there exist a subgroup H ⊂ G and an element a ∈ G ∖ H such that G• ∖ Σ(S) ⊂ a + H.

A simple argument (see [96, Section 5.1] for details) shows that

d(G) = max{|S| | S ∈ F(G), Σ(S) = G•}  and  1 + d(G) = D(G) = max{|S| | S ∈ A(G)}.

Definition 2.2. We denote by
• Ol(G) the smallest integer l ∈ N such that every squarefree sequence S ∈ F(G) of length |S| ⩾ l has a non-empty zero-sum subsequence. The invariant Ol(G) is called the Olson constant of G.
• ol(G) the maximal length of a squarefree zero-sumfree sequence S ∈ F(G).
• cr(G) the smallest integer l ∈ N such that every squarefree sequence S ∈ F(G•) of length |S| ⩾ l satisfies Σ(S) = G. The invariant cr(G) is called the critical number of G.
• g(G) the smallest integer l ∈ N such that every squarefree sequence S ∈ F(G) of length |S| ⩾ l has a zero-sum subsequence T of length |T| = exp(G).

We use the convention that min(∅) = sup(∅) = 0. For a subset G_0 ⊂ G and some integer l ∈ N, R.B. Eggleton and P. Erdős (see [41]) introduced the f-invariant

f(G_0, l) = min{|Σ(S)| | S ∈ F(G_0), S squarefree and zero-sumfree, |S| = l}.

The basic relationships between these invariants are summarized in Lemma 10.1.

3. On the Davenport constant

Let G = C_{n_1} ⊕ · · · ⊕ C_{n_r} with 1 < n_1 | ... | n_r, r = r(G), and let (e_1, ..., e_r) be a basis of G with ord(e_i) = n_i for all i ∈ [1, r]. Then the sequence

S = ∏_{i=1}^{r} e_i^{n_i − 1} ∈ F(G)

is zero-sumfree, whence we have the crucial inequality d(G) ⩾ d*(G). In the 1960s, D. Kruyswijk and J.E. Olson proved independently the following result (see [143,5,44,144] and [96, Theorems 5.5.9 and 5.8.3]).

Theorem 3.1. If G is a p-group or r(G) ⩽ 2, then d(G) = d*(G).

We present two types of results implying that d(G) = d*(G). The first one is due to P. van Emde Boas et al. (see [44, Theorems 3.9, 4.2], where more results of this flavor may be found), and the second is due to S.T. Chapman et al. (see [25], and also the various conjectures in that paper).

Theorem 3.2. Let G = C_{2n_1} ⊕ C_{2n_2} ⊕ C_{2n_3} and H = C_{n_1} ⊕ C_{n_2} ⊕ C_{n_3} with 1 ⩽ n_1 | n_2 | n_3. If b(H) = d*(H) − 1, then d(G) = d*(G).

Theorem 3.3. Let G = H ⊕ C_{km}, where k, m ∈ N and H ⊂ G is a subgroup with exp(H) | m. If d(H ⊕ C_m) = d(H) + m − 1 and b(H ⊕ C_m) ⩽ d(H) + 2m, then d(G) = d(H) + km − 1.

In particular (use Theorem 3.1 and [96, Proposition 5.7.7]), if m is a prime power and d(H) < m, then d(G) = d*(G).
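For very small groups, Theorem 3.1 can be confirmed by exhaustive search. The following Python code is our naive illustration (the cited proofs are algebraic, not computational); groups are modelled as tuples of residues.

```python
from itertools import combinations, combinations_with_replacement, product

def max_zero_sumfree_len(mods):
    """Brute-force d(G) for G = Z_m1 x ... x Z_mr: the maximal length of a
    zero-sumfree sequence over G."""
    G = list(product(*(range(m) for m in mods)))
    def zero_sumfree(S):
        # every non-empty subsequence must have some non-zero coordinate sum
        return all(any(sum(g[j] for g in (S[i] for i in idx)) % mods[j]
                       for j in range(len(mods)))
                   for k in range(1, len(S) + 1)
                   for idx in combinations(range(len(S)), k))
    l = 0
    while any(zero_sumfree(S) for S in combinations_with_replacement(G, l + 1)):
        l += 1
    return l

# Theorem 3.1 for two small p-groups and a rank-2 group:
assert max_zero_sumfree_len((2, 2, 2)) == 3   # d* = 1 + 1 + 1
assert max_zero_sumfree_len((3, 3)) == 4      # d* = 2 + 2
assert max_zero_sumfree_len((2, 4)) == 4      # d* = 1 + 3
```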
These and similar results give rise to long lists of explicit groups satisfying d(G) = d*(G) (see [44,25,6,46,35]). The first example of a group G with d(G) > d*(G) is due to P.C. Baayen [6]. In [44, Theorem 8.1] it is shown that d(G) > d*(G) for G = C_2^{4k} ⊕ C_{4k+2} with k ∈ N, and more examples are given in . Let H ⊂ G be a subgroup. Then d(H) + d(G/H) ⩽ d(G), and if G is as above, I ⊂ [1, r] and H = ⊕_{i∈I} C_{n_i}, then d(H) > d*(H) implies d(G) > d*(G) (see [96, Proposition 5.1.11]). This shows that the interesting groups with d(G) > d*(G) are those with small rank. A. Geroldinger and R. Schneider [98] showed that there are infinitely many G with r(G) = 4 such that d(G) > d*(G). The following result may be found in [98] and [77, Theorem 3.3].

Theorem 3.4. We have d(G) > d*(G) in each of the following cases:
1. G = C_m ⊕ C_n^2 ⊕ C_{2n}, where m, n ∈ N_{⩾3} are odd and m | n.
2. G = C_2^i ⊕ C_{2n}^{5−i}, where n ∈ N_{⩾3} is odd and i ∈ [2, 4].

Let G = C_2^r ⊕ C_n, where r ∈ N and n ∈ N_{⩾3} is odd. Then d(G) = d*(G) if and only if r ⩽ 4 (see [98, Corollary 2]). For some small r ⩾ 5 and n ⩾ 3 the precise value of d(G) was recently determined in . The growth of d(G) − d*(G) is studied in . We make the following conjecture.

Conjecture 3.5. If r(G) = 3 or G = C_n^r with n, r ∈ N_{⩾3}, then d(G) = d*(G).

For groups of rank three, Conjecture 3.5 goes back to P. van Emde Boas (see [44]) and is supported by . For groups of the form G = C_n^r it is supported by [80, Theorem 6.6].

The next result provides upper bounds on D(G). The first one is due to P. van Emde Boas and D. Kruyswijk [46, Theorem 7.1] and is sharp for cyclic groups (for other approaches and related bounds see [8,141]). The second bound is sharp for groups of rank 2 and with H = pG for some prime divisor p of exp(G) (see [96, Theorem 5.5.5 and Proposition 5.7.11]).

Theorem 3.6. Let exp(G) = n ⩾ 2 and let H ⊂ G be a subgroup.
1. d(G) ⩽ (n − 1) + n log(|G|/n).
2. d(G) ⩽ d(H) exp(G/H) + max{d(G/H), b(G/H) − exp(G/H) − 1}.

We end this section with a conjecture supported by [96, Theorem 6.2.8].

Conjecture 3.7. If |G| > 1, then D(G) ⩽ d*(G) + r(G).

4. On the structure of long zero-sumfree sequences

Let S ∈ F(G) be a zero-sumfree sequence of length |S| = d(G). According to a general philosophy in inverse additive number theory (see [142,53,54]), S should have some structure. Obviously, if G is cyclic of order n ⩾ 2, then S = g^{n−1} for some g ∈ supp(S) with ord(g) = n, and if G is an elementary 2-group of rank r, then S = e_1 · ... · e_r for some basis (e_1, ..., e_r) of G. Apart from these trivial cases very little is known up to now. The most modest questions one could ask are the following:
1. What is the order of elements in supp(S)?
2. What is the multiplicity of elements in supp(S)? What is a reasonable lower bound for h(S)?
3. How large is supp(S)?

Crucial in all investigations of zero-sumfree sequences is the following inequality of Moser–Scherk (see [96, Theorem 5.3.1]): Let S ∈ F(G) be a zero-sumfree sequence. If S = S_1S_2, then

|Σ(S)| ⩾ |Σ(S_1)| + |Σ(S_2)|.

By M. Freeze and W.W. Smith ([52, Theorem 2.5], [96, Proposition 5.3.5]) this implies that

|Σ(S)| ⩾ 2|S| − h(S) ⩾ |S| + |supp(S)| − 1.

We start with the following conjecture.

Conjecture 4.1. Every zero-sumfree sequence S ∈ F(G) of length |S| = d(G) has some element g ∈ supp(S) with ord(g) = exp(G).
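Conjecture 4.1 can be checked exhaustively for tiny groups. The following Python code (our illustration) verifies it for G = C_2 ⊕ C_4, where d(G) = d*(G) = 4; this case is also covered by the result for C_2 ⊕ C_{2n} cited below.

```python
from itertools import combinations, combinations_with_replacement, product

MODS = (2, 4)                        # G = C_2 + C_4, d(G) = d*(G) = 4
ELEMENTS = list(product(range(2), range(4)))

def sigma(T):
    return tuple(sum(g[j] for g in T) % MODS[j] for j in range(2))

def zero_sumfree(S):
    return all(sigma([S[i] for i in idx]) != (0, 0)
               for k in range(1, len(S) + 1)
               for idx in combinations(range(len(S)), k))

def order(g):
    # exp(G) = 4, so the order of g divides 4
    return next(t for t in (1, 2, 4) if all(t * x % m == 0 for x, m in zip(g, MODS)))

# Every maximal zero-sumfree sequence (length d(G) = 4) contains an element
# of order exp(G) = 4, as Conjecture 4.1 predicts:
extremal = [S for S in combinations_with_replacement(ELEMENTS, 4) if zero_sumfree(S)]
assert extremal and all(any(order(g) == 4 for g in S) for S in extremal)
```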
The conjecture is true for cyclic groups, for p-groups (see [96, Corollary 5.1.13]), for groups of the form G = C_n ⊕ C_n (see below) and for G = C_2 ⊕ C_{2n} (see ).

As concerns the second question, the philosophy is that in groups whose exponent is large in comparison with the rank, h(S) should be large. For cyclic groups, there are the following results going back to J.D. Bovey, P. Erdős, I. Niven, W. Gao, A. Geroldinger and Y. ould Hamidoune (see [17,76,97] and [96, Theorem 5.4.5]).

Theorem 4.2. Let G be cyclic of order n ⩾ 3, and let S ∈ F(G) be a zero-sumfree sequence of length |S| ⩾ (n + 1)/2.
1. For all g ∈ supp(S) we have ord(g) ⩾ 3.
2. There exists some g ∈ supp(S) with v_g(S) ⩾ 2|S| − n + 1.
3. There exists some g ∈ supp(S) with ord(g) = n such that v_g(S) ⩾ (n + 5)/6 if n is odd, and v_g(S) ⩾ 3 if n is even.

In cyclic groups, long zero-sumfree sequences and long minimal zero-sum sequences can be completely characterized (see ).

Theorem 4.3. Let G be cyclic of order n ⩾ 2 and let S ∈ F(G) be a zero-sumfree sequence of length |S| = n − k with k ∈ [1, ⌊n/3⌋ + 1]. Then there exist some g ∈ G with ord(g) = n and x_1, ..., x_{k−1} ∈ [1, n − 1] such that

S = g^{n−2k+1} ∏_{i=1}^{k−1} (x_ig)  and  Σ_{i=1}^{k−1} x_i ⩽ 2k − 2.

In particular, every minimal zero-sum sequence S ∈ A(G) of length |S| ⩾ n − ⌊n/3⌋ has ind(S) = 1.

The index of zero-sum sequences over cyclic groups is investigated in [71,26,29]. In  (p. 344, with d = n) it is conjectured that every sequence S ∈ F(C_n) of length |S| = n has a non-empty zero-sum subsequence T with ind(T) = 1. Among others, the g-norm and the index of zero-sum sequences play a role in arithmetical investigations (see [96, Section 6.8]).

Next we discuss groups of the form G = C_n ⊕ C_n with n ⩾ 2 (see [77,167,79], [96, Section 5.8] and ).

Theorem 4.4. Let G = C_n ⊕ C_n with n ⩾ 2. Then the following statements are equivalent:
(a) If S ∈ F(G), |S| = 3n − 3 and S has no zero-sum subsequence T of length |T| ⩾ n, then there exists some a ∈ G such that 0^{n−1}a^{n−2} | S.
(b) If S ∈ F(G) is zero-sumfree and |S| = d(G), then a^{n−2} | S for some a ∈ G.
(c) If S ∈ A(G) and |S| = D(G), then a^{n−1} | S for some a ∈ G.
(d) If S ∈ A(G) and |S| = D(G), then there exist a basis (e_1, e_2) of G and integers x_1, ..., x_n ∈ [0, n − 1] with x_1 + · · · + x_n ≡ 1 mod n such that

S = e_1^{n−1} ∏_{ν=1}^{n} (x_νe_1 + e_2).

Moreover, if S ∈ A(G) and |S| = D(G), then ord(g) = n for every g ∈ supp(S), and if n is prime, then |supp(S)| ∈ [3, n].

Conjecture 4.5. For every n ⩾ 2 the four equivalent statements of Theorem 4.4 are satisfied.

Conjecture 4.5 has been verified for n ∈ [2, 7], and if it holds for some n ⩾ 6, then it holds for 2n (see [79, Theorem 8.1]). We continue with a result for non-cyclic groups having large exponent (see ).

Theorem 4.6.
1. Let G = C_{n_1} ⊕ C_{n_2} with 1 < n_1 | n_2 and n_2 > n_1(n_1 + 1). Let φ: G → Ḡ = C_{n_1} ⊕ C_{n_1} be the canonical epimorphism and let S ∈ A(G) be of length |S| = D(G). If ḡ^k | φ(S) for some k > n_1 and some ḡ ∈ Ḡ, then g^k | S for some g ∈ φ^{−1}(ḡ).
2. Let G = H ⊕ C_n, where exp(G) = n = lm, H ⊂ G is a subgroup with exp(H) | m, m ⩾ 2 and l ⩾ 4|H| > 4(m − 2). Let φ: G → Ḡ = H ⊕ C_m denote the canonical epimorphism and let S ∈ F(G) be a zero-sumfree sequence of length |S| = n. Then S has a subsequence T of length |T| ⩾ (l − 2|H| + 1)m such that the following holds: if ḡ^k | φ(T) for some k > m and some ḡ ∈ Ḡ, then g^k | T for some g ∈ φ^{−1}(ḡ).

For general finite abelian groups there is the following result (see [96, Theorem 5.3.6]), which plays a key role in the proof of Theorem 10.4.2.
Theorem 4.7. Let G_0 ⊂ G be a subset and k ∈ N with k ⩾ 2 be such that f(G_0, k) > 0, and let S ∈ F(G_0) be a zero-sumfree sequence of length

|S| ⩾ ((|G| − k)/f(G_0, k) + 1)k.

Then there exists some g ∈ G_0 such that

v_g(S) ⩾ |S|/k − 1 − (|G| − k − 1)/((k − 1)f(G_0, k)).

If the rank of the group is large in comparison with the exponent, then in general there is no element with high multiplicity (see Theorem 10.4.1), but in the case of elementary p-groups there is the following structural result (see [77, Theorem 10.3], [80, Corollary 6.3], [96, Corollary 5.6.9]).

Theorem 4.8. Let G be a finite elementary p-group and let S ∈ F(G) be a zero-sumfree sequence of length |S| = d(G). Then (g, h) is independent for any two distinct elements g, h ∈ supp(S).

We continue with the following conjecture.

Conjecture 4.9. Let G = C_{n_1} ⊕ · · · ⊕ C_{n_r} with 1 < n_1 | · · · | n_r, let k ∈ [1, n_1 − 1] and let S ∈ F(G) be a sequence of length |S| = k + d(G). If S has no zero-sum subsequence S′ of length |S′| > k, then S = 0^kT, where T ∈ F(G) is zero-sumfree.

The following example shows that in Conjecture 4.9 the restriction k ∈ [1, n_1 − 1] is essential: let T ∈ F(G) be a zero-sumfree sequence of length |T| = d(G) such that v_g(T) = ord(g) − 1 for some g ∈ G. Then for every l ∈ N the sequence S = g^{l·ord(g)}T has no zero-sum subsequence S′ of length |S′| > l·ord(g).

Next we discuss the invariant b(G), which was introduced by P. van Emde Boas in connection with his investigations of the Davenport constant for groups of rank three (see [44, p. 15] and ). An easy argument (see [96, Proposition 5.1.16]) shows that

d(G) − 1 ⩽ b(G) ⩽ d(G),

and we make the following conjecture.

Conjecture 4.10. b(G) = d(G) − 1.

The following result goes back to P. van Emde Boas, W. Gao and A. Geroldinger ([44, Theorem 2.8], [79, Theorem 5.3], [96, Theorems 5.5.9 and 5.8.10]; for more see also [69, Theorem 5.2]).

Theorem 4.11. Conjecture 4.10 holds true in each of the following cases:
1. G is cyclic.
2. G is a p-group.
3. G = C_n ⊕ C_n, where n satisfies Conjecture 4.5.

We end this section with a result (see ) showing that minimal zero-sum sequences are not additively closed (apart from some well-defined exceptions).

Theorem 4.12. Let S ∈ F(G•) be a sequence of length |S| ⩾ 4, and let S = BC with B, C ∈ F(G) such that |B| ⩾ |C|. If σ(T) ∈ supp(S) for all subsequences T of B with |T| = 2 and for all subsequences T of C with |T| = 2, then S has a proper zero-sum subsequence, apart from the following exceptions:
1. |C| = 1, and we are in one of the following cases:
(a) B = g^k and C = 2g for some k ⩾ 3 and g ∈ G with ord(g) ⩾ k + 2.
(b) B = g^k(2g) and C = 3g for some k ⩾ 2 and g ∈ G with ord(g) ⩾ k + 5.
(c) B = g_1g_2(g_1 + g_2) and C = g_1 + 2g_2 for some g_1, g_2 ∈ G with ord(g_1) = 2 and ord(g_2) ⩾ 5.
2. {B, C} = {g(9g)(10g), (11g)(3g)(14g)} for some g ∈ G with ord(g) = 16.

If S = g_1 · ... · g_l ∈ F(G) is such that ord(g_k) > k^k for all k ∈ [1, l], then G. Harcos and I. Ruzsa showed that S allows a product decomposition S = S_1S_2, where S_1 and S_2 are both zero-sumfree (see ).

5. On generalizations of the Davenport constant

We discuss two generalizations of the Davenport constant in some detail (for yet another generalization, the barycentric Davenport constant, we refer to [34]). The first one was introduced by F. Halter-Koch in connection with the analytic theory of non-unique factorizations (see ).

Definition 5.1. Let k ∈ N. We denote by
• D_k(G) the smallest integer l ∈ N such that every sequence S ∈ F(G) of length |S| ⩾ l is divisible by a product of k non-empty zero-sum sequences.
• d_k(G) the largest integer l ∈ N with the following property: there is a sequence S ∈ F(G) of length |S| = l which is not divisible by a product of k non-empty zero-sum sequences.

Obviously, we have D_k(G) = 1 + d_k(G), d_1(G) = d(G) and D_1(G) = D(G). We present one result on d_k(G) which, among others, may be found in [96, Section 6.1].

Theorem 5.2. Let exp(G) = n and k ∈ N.
1. Let G = H ⊕ C_n, where H ⊂ G is a subgroup. Then

d(H) + kn − 1 ⩽ d_k(G) ⩽ (k − 1)n + max{d(G), b(G) − n − 1}.

In particular, if d(G) = d(H) + n − 1 and b(G) ⩽ d(G) + n + 1, then d_k(G) = d(G) + (k − 1)n.
2. If r(G) ⩽ 2, then d_k(G) = d(G) + (k − 1)n.
3. If G is a p-group and D(G) ⩽ 2n − 1, then d_k(G) = d(G) + (k − 1)n.

The following generalization of the Davenport constant was introduced by M. Skałba in connection with his investigations on binary quadratic forms (see [160–162]).

Definition 5.3. For every g ∈ G, let D_g(G) denote the largest integer l ∈ N with the following property: there is a sequence S ∈ F(G) of length |S| = l and sum σ(S) = g such that every proper subsequence of S is zero-sumfree.

By definition, D_0(G) = D(G), and if g ≠ 0, then D_g(G) ⩽ d(G). The following result is due to M. Skałba (see [161, Theorem 2] and [162, Theorem 1]).

Theorem 5.4. Let G = C_{n_1} ⊕ C_{n_2} with 1 ⩽ n_1 | n_2, and let (e_1, e_2) be a basis of G. Let g = a_1e_1 + a_2e_2 ∈ G• with a_1 ∈ [0, n_1 − 1], a_2 ∈ [0, n_2 − 1] and d = gcd(gcd(a_1, n_1), gcd(a_2, n_2)). Then

D_g(G) = n_1 + n_2 − d − 1 if d ≠ n_1,  and  D_g(G) = n_1 + n_2 − gcd(a_2, n_2) − 1 if d = n_1.

Lemma 5.5. Let exp(G) = n ⩾ 2. Then the following statements are equivalent:
(a) There exists some g ∈ G with ord(g) = n such that D_g(G) = d(G).
(b) For all g ∈ G with ord(g) = n we have D_g(G) = d(G).
(c) There exists a minimal zero-sum sequence S ∈ F(G) of length |S| = D(G) such that max{ord(g) | g ∈ supp(S)} = n.

Proof. (a) ⇒ (b) Let g, g* ∈ G with ord(g) = ord(g*) = n and suppose that D_{g*}(G) = d(G). Then there exists a zero-sumfree sequence S ∈ F(G) of length |S| = d(G) and σ(S) = g*. If φ: G → G is a group automorphism with φ(g*) = g, then φ(S) is a zero-sumfree sequence of length |φ(S)| = d(G) and σ(φ(S)) = φ(σ(S)) = g, whence D_g(G) = d(G).

(b) ⇒ (c) Let g ∈ G with ord(g) = n and let S ∈ F(G) be a zero-sumfree sequence with σ(S) = g and |S| = D_g(G) = d(G). Then the sequence S* = (−g)S has the required properties.

(c) ⇒ (a) Assume to the contrary that for all g ∈ G with ord(g) = n we have D_g(G) < d(G). This means that for all zero-sumfree sequences S ∈ F(G) with |S| = d(G) we have ord(σ(S)) < n. But this implies that for all minimal zero-sum sequences S ∈ F(G) of length |S| = D(G) we have max{ord(g) | g ∈ supp(S)} < n, a contradiction. □

Note that Conjecture 4.1 implies Condition (c) of Lemma 5.5. Using this condition we immediately obtain the following corollary.

Corollary 5.6. If d*(G) = d(G) and g ∈ G with ord(g) = exp(G), then D_g(G) = d(G).

6. On the invariants η(G), s(G) and their analogues

We start with a key result first obtained by W. Gao (see ). Its proof is based on the Addition Theorem of Kemperman–Scherk (for the version below we refer to [96, Theorem 5.7.3]).

Theorem 6.1. Let S ∈ F(G) be a sequence of length |S| ⩾ |G|. Then S has a non-empty zero-sum subsequence T of length

|T| ⩽ min{h(S), max{ord(g) | g ∈ supp(S)}}.

Now we discuss the invariants η(G), s(G) and their relationship. Both invariants have received a lot of attention in the literature. The various contributions and the present state of knowledge are well described in , where also the connection with finite geometry is discussed (see also ).
Therefore we only mention some of the most recent results, and then we discuss the relationship of η(G) and s(G) in greater detail. A simple observation shows that

D(G) ⩽ η(G) ⩽ s(G) − exp(G) + 1.

Using Theorem 6.1 we obtain the following upper bounds on η(G) and s(G) (see [96, Theorem 5.7.4]), which are sharp for cyclic groups.

Theorem 6.2. η(G) ⩽ |G| and s(G) ⩽ |G| + exp(G) − 1.

Both invariants η(G) and s(G) are completely determined for groups of rank at most two (see [96, Theorem 5.8.3]). Theorem 6.3 is based on the result of C. Reiher which states that s(C_p ⊕ C_p) = 4p − 3 for all p ∈ P (see , and also ), and it contains the Theorem of Erdős–Ginzburg–Ziv (set n_1 = 1). Theorem 6.4 may be found in .

Theorem 6.3. Let G = C_{n_1} ⊕ C_{n_2} with 1 ⩽ n_1 | n_2. Then

η(G) = 2n_1 + n_2 − 2  and  s(G) = 2n_1 + 2n_2 − 3.

Theorem 6.4. Let G be a p-group for some odd prime p with exp(G) = n and D(G) ⩽ 2n − 1. Then

2D(G) − 1 ⩽ η(G) + n − 1 ⩽ s(G) ⩽ D(G) + 2n − 2.

In particular, if D(G) = 2n − 1, then s(G) = η(G) + n − 1 = 4n − 3.

We continue with the following conjecture.

Conjecture 6.5. η(G) = s(G) − exp(G) + 1.

Theorem 6.6. Conjecture 6.5 holds true in each of the following cases:
1. exp(G) ∈ {2, 3, 4}.
2. r(G) ⩽ 2.
3. G is a p-group for some odd prime p and D(G) = 2 exp(G) − 1.
4. G = C_5^3.

Proof. 1. is proved in , 2. follows from Theorem 6.3, and 3. follows from Theorem 6.4. In order to give an idea of the arguments we are going to prove 4. We need the following two results:

F1. If n ∈ N_{⩾3} is odd, then η(C_n^3) ⩾ 8n − 7 (this is due to C. Elsholtz [43]; see also [40, Lemma 3.4]).

F2. If exp(G) = n and S ∈ F(G) is such that |S| ⩾ η(G) + n − 1 and h(S) ⩾ n − ⌊n/2⌋ − 1, then S has a zero-sum subsequence of length n (see [73, Proposition 2.7]).

Let G = C_5^3. It suffices to show that s(G) ⩽ η(G) + 4. Let S ∈ F(G) be a sequence of length η(G) + 4. We have to verify that S has a zero-sum subsequence of length 5. By F1 we have |S| ⩾ 37. If we can prove that h(S) ⩾ 2, then the assertion follows from F2. Assume to the contrary that S is squarefree.

Let G = H ⊕ ⟨g⟩, where H ⊂ G is a subgroup with |H| = 25 and g ∈ G with ord(g) = 5. Then S = ∏_{i=1}^{l} (g_i + h_i), where g_i ∈ ⟨g⟩ and h_i ∈ H, and we set T = ∏_{i=1}^{l} g_i. If h(T) ⩾ 9, say g_1 = · · · = g_9, then h_1, ..., h_9 are pairwise distinct. Since g(C_5^2) = 9 (see  and Conjecture 10.2), the sequence h_1 · ... · h_9 has a zero-sum subsequence of length 5, and therefore S has a zero-sum subsequence of length 5.

Suppose that h(T) ⩽ 8. Then T = 0^{l_0}g^{l_1}(2g)^{l_2}(3g)^{l_3}(4g)^{l_4} with l_0, l_1, l_2, l_3, l_4 ∈ [5, 8], and we write S in the form

S = ∏_{i=0}^{4} ∏_{j=1}^{l_i} (ig + h_{i,j})  with all h_{i,j} ∈ H.

Since S is squarefree, for every i ∈ [0, 4] the elements h_{i,1}, ..., h_{i,l_i} are pairwise distinct, and we set A_i = {h_{i,1}, ..., h_{i,l_i}}. Note that 0 + g + 2g + 3g + 4g = 0 ∈ G. So if 0 ∈ A = A_0 + A_1 + A_2 + A_3 + A_4, then S has a zero-sum subsequence of length 5. Let K be the maximal subgroup of H such that A + K = A. By Kneser's Addition Theorem (see [96, Theorem 5.2.6.2]) we obtain that

|A| ⩾ Σ_{i=0}^{4} |A_i + K| − 4|K|.

If |K| = 1, then |A| ⩾ |A_0| + |A_1| + |A_2| + |A_3| + |A_4| − 4 = |S| − 4 ⩾ 33, a contradiction. Assume to the contrary that |K| = 5. Since |A_0| + |A_1| + |A_2| + |A_3| + |A_4| = |S| ⩾ 37 and |A_i| = l_i ∈ [5, 8], it follows that |A_i| ⩾ 6 for at least four indices i ∈ [0, 4]. Therefore we obtain that

|A| ⩾ Σ_{i=0}^{4} |A_i + K| − 4|K| ⩾ 4 · 2|K| + |K| − 4|K| = 5|K| = 25,

a contradiction. Thus it follows that K = H, whence A = H and we are done. □

For recent progress on Conjecture 6.5 we refer to .
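For the smallest non-trivial rank-2 case, Theorem 6.3 can also be confirmed by exhaustive search. The following Python code is our brute-force illustration (the cited proofs are not computational): it verifies s(C_3 ⊕ C_3) = 2·3 + 2·3 − 3 = 9, modelling the group as pairs of residues modulo 3.

```python
from itertools import combinations, combinations_with_replacement, product

n = 3
G = list(product(range(n), repeat=2))            # G = C_3 + C_3

def has_zero_sum_of_length_n(S):
    return any(sum(g[0] for g in T) % n == 0 and sum(g[1] for g in T) % n == 0
               for T in ([S[i] for i in idx]
                         for idx in combinations(range(len(S)), n)))

# Theorem 6.3 predicts s(C_3 + C_3) = 9:
assert not all(has_zero_sum_of_length_n(S)      # some length-8 sequence fails,
               for S in combinations_with_replacement(G, 8))
assert all(has_zero_sum_of_length_n(S)          # but every length-9 sequence works
           for S in combinations_with_replacement(G, 9))
```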
Next we consider the invariant s_{nN}(G). Theorem 6.3 allows us to determine s_{nN}(G) for groups G of rank r(G) ⩽ 2.

Theorem 6.7. Let exp(G) = n ⩾ 2.
1. d(G) + n ⩽ s_{nN}(G) ⩽ min{s(G), D(G ⊕ C_n)}.
2. We have s_{nN}(G) = d(G) + n in each of the following cases:
(a) G is a p-group.
(b) G = C_{n_1} ⊕ C_{n_2} with 1 ⩽ n_1 | n_2.

Proof. 1. is simple (see [79, Lemma 3.5]), and 2.(a) is a consequence of 1. To verify 2.(b), let G = C_{n_1} ⊕ C_{n_2} with 1 ⩽ n_1 | n_2. Then 1. implies that d(G) + n_2 ⩽ s_{nN}(G), whence it remains to prove that s_{nN}(G) ⩽ d(G) + n_2 = n_1 + 2n_2 − 2. If n_1 = 1, this follows from 1.

Let S ∈ F(G) be a sequence of length |S| = n_1 + 2n_2 − 2. We have to show that S has a zero-sum subsequence of length n_2 or 2n_2. Let H = G ⊕ C_{n_2} = G ⊕ ⟨e⟩ with ord(e) = n_2, so that every h ∈ G ⊕ C_{n_2} has a unique representation h = g + je, where g ∈ G and j ∈ [0, n_2 − 1]. We define φ: G → H by φ(g) = g + e for every g ∈ G. Thus it suffices to show that φ(S) has a non-empty zero-sum subsequence (a zero-sum subsequence of φ(S) yields a zero-sum subsequence T of S with |T| ≡ 0 mod n_2 and |T| ⩽ |S| < 3n_2). We distinguish two cases.

Case 1: n_1 = n_2. We set n = n_1 and proceed by induction on n. If n is prime, the assertion follows from 2.(a). Suppose that n is composite, let p be a prime divisor of n and let ψ: H → H denote multiplication by p. Then pG ≅ C_{n/p} ⊕ C_{n/p} and Ker(ψ) ≅ C_p^3. Since s(pG) = 4(n/p) − 3 and |S| = 3n − 2 ⩾ (3p − 4)(n/p) + 4(n/p) − 3, S admits a product decomposition S = S_1 · ... · S_{3p−3}S′ such that, for all i ∈ [1, 3p − 3], ψ(φ(S_i)) has sum zero and length |S_i| = n/p (for details see [96, Lemma 5.7.10]). Then |S′| = 3n/p − 2 = s_{nN}(C_{n/p} ⊕ C_{n/p}), and thus S′ has a subsequence S_{3p−2} such that ψ(φ(S_{3p−2})) has sum zero and length |S_{3p−2}| ∈ {n/p, 2n/p}. This implies that

∏_{i=1}^{3p−2} σ(φ(S_i)) ∈ F(Ker(ψ)).

Since D(Ker(ψ)) = 3p − 2, there exists a non-empty subset I ⊂ [1, 3p − 2] such that Σ_{i∈I} σ(φ(S_i)) = 0, whence ∏_{i∈I} φ(S_i) is a non-empty zero-sum subsequence of φ(S).

Case 2: n_2 > n_1. Let m = n_1^{−1}n_2 and let ψ: H = C_{n_1} ⊕ C_{n_2}^2 → C_{n_1} ⊕ mC_{n_2}^2 be the map which is the identity on the first component and multiplication by m on the second and third components, whence Ker(ψ) ≅ C_m ⊕ C_m and ψ(G) ≅ C_{n_1} ⊕ C_{n_1}. Since s(C_{n_1} ⊕ C_{n_1}) = 4n_1 − 3 and |S| = n_1 + 2n_2 − 2 ⩾ (2m − 3)n_1 + (4n_1 − 3), S admits a product decomposition S = S_1 · ... · S_{2m−2}S′, where for all i ∈ [1, 2m − 2], ψ(φ(S_i)) has sum zero and length |S_i| = n_1. Then |S′| = 3n_1 − 2, and since by Case 1, s_{nN}(C_{n_1} ⊕ C_{n_1}) = 3n_1 − 2, the sequence S′ has a subsequence S_{2m−1} such that ψ(φ(S_{2m−1})) has sum zero and length |S_{2m−1}| ∈ {n_1, 2n_1}. This implies that

∏_{i=1}^{2m−1} σ(φ(S_i)) ∈ F(Ker(ψ)).

Since D(Ker(ψ)) = 2m − 1, there exists a non-empty subset I ⊂ [1, 2m − 1] such that Σ_{i∈I} σ(φ(S_i)) = 0, whence ∏_{i∈I} φ(S_i) is a non-empty zero-sum subsequence of φ(S). □

Next we deal with zero-sum subsequences of length |G|. The following result is due to W. Gao and Y. Caro (see [21,22,62] and also [96, Proposition 5.7.9]). In Section 9, we discuss generalizations due to Y. ould Hamidoune. The structure of sequences S of length |S| = |G| + d(G) − 1 which have no zero-sum subsequence of length |G| is studied in [94].

Theorem 6.8. s_{|G|}(G) = |G| + d(G).

Note that Theorem 6.8 immediately yields a generalization of a theorem of Hall (see [135, Section 3]).

Conjecture 6.9. Let G be cyclic of order n ⩾ 2, let q be the smallest prime divisor of n, and let S ∈ F(G•) be a sequence of length |S| = n. If h = h(S) ⩾ n/q − 1, then Σ_{⩽h}(S) = Σ(S).

Conjecture 6.9 has been verified for cyclic groups of prime power order in . The following example shows that the conclusion of Conjecture 6.9 does not hold whenever nq/(2n − q) ⩽ h ⩽ n/q − 2.
Let all notation be as in Conjecture 6.9, let N = {0, a_1, ..., a_{n/q−1}} be a subgroup of G with |N| = n/q, let g ∈ G with ord(g) = n, and let

W = a_1^h · ... · a_{n/q−1}^h g^h (g + a_1)^h · ... · (g + a_{n/q−1})^h ∈ F(G).

Since h ⩾ nq/(2n − q), we have |W| = (n/q − 1)h + (n/q)h ⩾ n. Now let S be a subsequence of W of length |S| = n such that g^h(g + a_i) is a subsequence of S for some i ∈ [1, n/q − 1]. Then h(S) = h, ((h + 1)g + N) ∩ Σ(S) ≠ ∅, but ((h + 1)g + N) ∩ Σ_{⩽h}(S) = ∅, whence Σ_{⩽h}(S) ≠ Σ(S).

Next we discuss the invariants s_{kn}(G), where exp(G) = n and k ∈ N. If S ∈ F(G) is a zero-sumfree sequence of length |S| = d(G), then the sequence T = 0^{kn−1}S has no zero-sum subsequence of length kn, whence s_{kn}(G) ⩾ |T| + 1 = kn + d(G). The following result may be found in .

Theorem 6.10. Let exp(G) = n ⩾ 2 and k ∈ N.
1. If k < D(G)/n, then s_{kn}(G) > kn + d(G).
2. If k ⩾ |G|/n, then s_{kn}(G) = kn + d(G).
3. If G is a finite abelian p-group and p^l ⩾ D(G), then s_{p^l k}(G) = p^l k + d(G).

Theorem 6.10 motivates the following definition.

Definition 6.11. We denote by l(G) the smallest integer l ∈ N such that s_{k exp(G)}(G) = k exp(G) + d(G) for every k ⩾ l.

Theorem 6.10 shows that

D(G)/n ⩽ l(G) ⩽ |G|/n,

whence l(C_n) = 1.

Theorem 6.12. Let G = C_{n_1} ⊕ C_{n_2} with 1 < n_1 | n_2. Then l(G) = 2.

Proof. Since s(G) = 2n_1 + 2n_2 − 3 > n_2 + d(G), it follows that l(G) ⩾ 2. Let k ⩾ 2 and let S ∈ F(G) be a sequence of length |S| = kn_2 + d(G) = (k − 2)n_2 + 3n_2 + n_1 − 2. We prove that S has a zero-sum subsequence of length kn_2, which implies that l(G) ⩽ 2. Since s(G) = 2n_1 + 2n_2 − 3, S admits a product decomposition S = S_1 · ... · S_{k−1}S′, where for all i ∈ [1, k − 1], S_i has sum zero and length |S_i| = n_2 (for details see [96, Lemma 5.7.10]). Since |S′| = |S| − (k − 1)n_2 = 2n_2 + n_1 − 2, Theorem 6.7.2.(b) implies that S′ has a zero-sum subsequence S_k of length |S_k| ∈ {n_2, 2n_2}, whence either S_1 · ... · S_{k−1}S_k or S_1 · ... · S_{k−2}S_k is a zero-sum subsequence of length kn_2. □

The invariant E_k(G) was introduced in  (in connection with investigations on s(G); see also ). Clearly, we have D(G) ⩽ E_k(G) ⩽ s(G), and if D(G) < k, then D(G) = E_k(G) (see [157, Lemma 2.1]).

Theorem 6.13.
1. If G = C_{n_1} ⊕ C_{n_2} with 1 ⩽ n_1 | n_2 and n_2 odd, then E_2(G) = 2n_1 + 2n_2 − 3 (see [72, Section 3]).
2. If G = C_n ⊕ C_n with n ⩾ 2 and 3 ∤ n, then E_3(G) = 3n − 2.
3. If G is a p-group and k ∈ N_{⩾2} with gcd(p, k) = 1, then E_k(G) = (k/(k − 1))·d*(G) + 1 (see ).

Proof. 2. By [157, Lemma 2.4], we have 3n − 2 ⩽ E_3(G). Since s_{nN}(G) = 3n − 2, every sequence S ∈ F(G) of length |S| ⩾ 3n − 2 has a zero-sum subsequence T of length |T| ∈ {n, 2n}, whence E_3(G) ⩽ 3n − 2. □

7. Inverse problems associated with η(G) and s(G)

In this section we investigate the structure of sequences S ∈ F(G) of length
• η(G) − 1 without a zero-sum subsequence T of length |T| ∈ [1, exp(G)],
• s(G) − 1 without a zero-sum subsequence T of length |T| = exp(G).

We formulate two properties and two conjectures.

Conjecture 7.1. Let S ∈ F(G) be a sequence of length |S| = s(G) − 1. If S has no zero-sum subsequence of length exp(G), then h(S) = exp(G) − 1.

Note that Conjecture 7.1 and Fact F2 (formulated in the proof of Theorem 6.6) imply Conjecture 6.5.

Property C. Every sequence S ∈ F(G) of length |S| = η(G) − 1 which has no short zero-sum subsequence has the form S = T^{n−1} for some sequence T ∈ F(G).

Property D. Every sequence S ∈ F(G) of length |S| = s(G) − 1 which has no zero-sum subsequence of length n has the form S = T^{n−1} for some sequence T ∈ F(G).
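Property D is easy to test exhaustively for very small groups. The following Python code (our illustration, consistent with the result for C_3^r cited below) confirms Property D for G = C_3 ⊕ C_3, where s(G) − 1 = 8: every extremal sequence has all multiplicities equal to 2, i.e., S = T^2.

```python
from collections import Counter
from itertools import combinations, combinations_with_replacement, product

n = 3
G = list(product(range(n), repeat=2))            # G = C_3 + C_3

def no_zero_sum_of_length_n(S):
    return all(any(sum(g[j] for g in (S[i] for i in idx)) % n for j in (0, 1))
               for idx in combinations(range(len(S)), n))

# Every length-8 sequence without a zero-sum subsequence of length 3
# has the form T^{n-1} = T^2 (all multiplicities equal to 2):
bad = [S for S in combinations_with_replacement(G, 8) if no_zero_sum_of_length_n(S)]
assert bad and all(all(v == 2 for v in Counter(S).values()) for S in bad)
```

Note that multiplicities cannot exceed 2 here, since g^3 is itself a zero-sum subsequence of length 3; Property D then forces every multiplicity to be exactly 2.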
Suppose that G has Property D. We show that G satisfies Property C as well. Let S ∈ F(G) be a sequence of length η(G) − 1 which has no short zero-sum subsequence. We consider the sequence T = 0^{n−1}S. If T had a zero-sum subsequence T′ of length |T′| = n, then T′ = 0^{k′}S′ with k′ ∈ [0, n − 1], whence S′ would be a short zero-sum subsequence of S, a contradiction. Thus T has no zero-sum subsequence of length n. Since Property D holds, Conjectures 7.1 and 6.5 hold in G, whence |T| = η(G) − 1 + (n − 1) = s(G) − 1. Therefore Property D implies that S has the required form.

Conjecture 7.2. Every group G = C_n^r, where r ∈ N and n ∈ N_{⩾2}, has Property D.

An easy observation shows that s(G) ⩽ (g(G) − 1)(n − 1) + 1. Moreover, if G = C_n^r and equality holds, then C_n^r has Property D (see [40, Lemma 2.3]). Thus [119, Hilfssatz 3] implies that C_3^r has Property D for every r ∈ N. However, only little is known for groups G = C_n^r in the case r ⩾ 3 (see [91,82]).

We continue with some results on Σ_{|G|}(S) for general groups, which arose from generalizations of the Erdős–Ginzburg–Ziv Theorem (see also [85,57,168,114], and note that Theorem 7.3 implies Theorem 6.8). Then we discuss cyclic groups and groups of the form G = C_n ⊕ C_n.

Theorem 7.3 (See [60,61]). Let S ∈ F(G) be a sequence of length |S| ⩾ |G| and let g ∈ G with v_g(S) = h(S).
1. Σ_{|G|}(S) = Σ_{⩾ |G|−h(S)}(−g + g^{−h(S)}S).
2. Suppose that for every a ∈ G and every subsequence T of S of length |T| = |S| − |G| + 1 we have 0 ∈ Σ(a + T). Then

Σ_{|G|}(S) = ⋂_{y∈G} Σ(y + S) = Σ(−g + S).

Next we present a result by D.J. Grynkiewicz [106, Theorem 1] which confirms a conjecture of Y. ould Hamidoune (see [116, Theorem 3.6] and  for special cases).

Theorem 7.4. Let S ∈ F(G) be a sequence of length |S| ⩾ |G| + 1, and let k ∈ N with |supp(S)| ⩾ k and h(S) ⩽ |G| − k + 2. Then one of the following two statements holds:
(a) |Σ_{|G|}(S)| ⩾ min{|G|, |S| − |G| + k − 1}.
(b) There exist a non-trivial subgroup H ⊂ G, some g ∈ G and a subsequence T of S such that the following conditions hold:
• H ⊂ Σ_{|G|}(S), Σ_{|G|}(S) is H-periodic and |Σ_{|G|}(S)| ⩾ (|T| + 1)|H|.
• supp(T^{−1}S) ⊂ g + H and |T| ⩽ min{(|S| − |G| + k − 2)/|H|, (G : H) − 2}.

Now we consider cyclic groups. Several authors [13,23,171,20,50] showed independently that a sequence S ∈ F(C_n) of length |S| = 2n − 2 which has no zero-sum subsequence of length n has the form S = a^{n−1}b^{n−1}, where a, b ∈ C_n and ord(a − b) = n. Based on Theorem 7.3, the following stronger result was obtained in [65, Theorem 1] (see also ).

Theorem 7.5. Let G be cyclic of order n ⩾ 2, k ∈ [2, ⌊n/4⌋ + 2], and let S ∈ F(G) be a sequence of length |S| = 2n − k. If S has no zero-sum subsequence of length n, then

S = a^u b^v c_1 · ... · c_l,

where ord(a − b) = n, u ⩾ v ⩾ n − 2k + 3 and u + v ⩾ 2n − 2k + 1 (equivalently, l ⩽ k − 1). In particular, we have
• If k = 2, then S = a^{n−1}b^{n−1}.
• If k = 3 and n ⩾ 4, then S = a^{n−1}b^{n−2} or S = a^{n−1}b^{n−3}(2b − a).

Closely related to the inverse problem is the investigation of the Brakemeier function (see [57,18,15,58,56,121,122]).

Conjecture 7.6. Let G be cyclic of order n ⩾ 2, let q be the smallest prime divisor of n, and let S ∈ F(G) be a sequence of length |S| ⩾ n + n/q − 1. If 0 ∉ Σ_n(S), then h(S) ⩾ |S| − n + 1.

Conjecture 7.6 has been verified for cyclic groups of prime power order (see [92,93]). The following example shows that the conclusion of Conjecture 7.6 does not hold whenever q ⩽ |S| − n ⩽ n/q − 2. Let all notation be as in Conjecture 7.6, let N = {0, a_1, a_2, ..., a_{n/q−1}} be the subgroup of G with |N| = n/q, let k ∈ [q, n/q − 2], let g ∈ G with ord(g) = n, and let

W = a_1^k · ... · a_{n/q−1}^k g^k (g + a_1)^k · ... · (g + a_{n/q−1})^k ∈ F(G)

be a sequence of length |W| = k(2n/q − 1).
Since k ∈ [q, n/q − 2], one can choose a subsequence S of W of length |S| = n + k such that g^k is a subsequence of S and σ(S) ∈ (k + 1)g + N. Then h(S) = k and ((k + 1)g + N) ∩ Σ_k(S) = ∅, which implies that σ(S) ∉ Σ_k(S) and hence 0 ∉ Σ_n(S).

Now suppose that G = C_n ⊕ C_n. It was P. van Emde Boas who studied Property C for such groups in connection with his investigations on the Davenport constant for groups of rank three (see  and [69, Lemma 4.7]). Property D was introduced in , where it is shown that both Property C and Property D are multiplicative in the following sense.

Theorem 7.7. Let n_1, n_2 ∈ N_{⩾2}. If the groups C_{n_1} ⊕ C_{n_1} and C_{n_2} ⊕ C_{n_2} both have Property C (or Property D, respectively), then the group C_{n_1n_2} ⊕ C_{n_1n_2} has Property C (or Property D, respectively).

The next result follows from Theorem 6.7.2.(b), from Theorem 7.7 and from [79, Theorem 6.2].

Theorem 7.8. Let n ⩾ 2 and suppose that n = m_1 · ... · m_s, where s ∈ N and m_1, ..., m_s ∈ N_{⩾2}. If for all i ∈ [1, s] the groups C_{m_i} ⊕ C_{m_i} satisfy the equivalent conditions of Theorem 4.4, then C_n ⊕ C_n has Property C.

In  it is shown that C_p ⊕ C_p has Property D for p ∈ {2, 3, 5}, and in  the same is shown for p = 7. We end with a result which could be a first step on the way to showing that C_n ⊕ C_n has Property C.

Theorem 7.9. Let G = C_n ⊕ C_n with n ⩾ 3, and let S = f_1^{n−1}f_2^{n−1}g_1 · ... · g_{n−1} ∈ F(G) be a sequence of length |S| = 3n − 3 which has no short zero-sum subsequence. Then there exists a basis (e_1, e_2) of G such that

S = (e_1 + e_2)^{n−1}e_2^{n−1} ∏_{i=1}^{n−1} (a_ie_1 + be_2),

where a_i ∈ [0, n − 1] for all i ∈ [1, n − 1] and b ∈ [0, n − 1] ∖ {1}.

Proof. By [96, Lemma 5.8.6] it follows that (f_1, f_2) is a basis of G, whence g_i = y_if_1 + x_if_2 with x_i, y_i ∈ [0, n − 1] for all i ∈ [1, n − 1]. We assert that x_1 + y_1 = · · · = x_{n−1} + y_{n−1}. Assume to the contrary that this does not hold. Then Theorem 4.2.2 implies that the sequence ∏_{i=1}^{n−1} ((x_i + y_i − 1)e_1) is not zero-sumfree. Hence after renumbering we may suppose that

Σ_{i=1}^{t} (x_i + y_i − 1) ≡ 0 mod n for some t ∈ [1, n − 1].

Then the sequence

W = f_2^{n−x} f_1^{n−y} ∏_{i=1}^{t} (y_if_1 + x_if_2),

where x, y ∈ [1, n] are such that x ≡ x_1 + · · · + x_t mod n and y ≡ y_1 + · · · + y_t mod n, is a zero-sum subsequence of S of length |W| = (n − x) + (n − y) + t ≡ 0 mod n. Since S has no short zero-sum subsequence, it follows that |W| = 2n. But then |W| > d(C_n ⊕ C_n), whence W (and thus S) has a short zero-sum subsequence, a contradiction.

Now we obtain that (e_1, e_2) = (f_2 − f_1, f_1) is a basis of G and g_i = y_if_1 + x_if_2 = x_ie_1 + (x_i + y_i)e_2 for all i ∈ [1, n − 1]. Thus it remains to show that x_1 + y_1 ≢ 1 mod n. Assume to the contrary that (x_1 + y_1)e_2 = e_2. Since s(C_n) = 2n − 1, the sequence e_1^{n−1}0^{n−1}(x_1e_1) has a zero-sum subsequence of length n, whence (e_1 + e_2)^{n−1}e_2^{n−1}(x_1e_1 + e_2) has a zero-sum subsequence of length n, a contradiction. □

8. On the number of zero-sum subsequences

The enumeration of zero-sum subsequences of a given (long) sequence over G which have some prescribed properties is a classical topic in combinatorial number theory going back to P. Erdős, J.E. Olson and others. Many zero-sum results (such as the proof of d*(G) = d(G) for p-groups or the proof that s(C_p ⊕ C_p) = 4p − 3) are based on enumeration results.

Definition 8.1. Let S = g_1 · ... · g_l ∈ F(G) be a sequence of length |S| = l ∈ N_0 and let g ∈ G.
1. For every k ∈ N_0 let

N_g^k(S) = |{I ⊂ [1, l] | Σ_{i∈I} g_i = g and |I| = k}|

denote the number of subsequences T of S having sum σ(T) = g and length |T| = k (counted with the multiplicity of their appearance in S). In particular, N_0^0(S) = 1 and N_g^0(S) = 0 if g ∈ G•.

2. We define

N_g(S) = Σ_{k⩾0} N_g^k(S),  N_g^+(S) = Σ_{k⩾0} N_g^{2k}(S)  and  N_g^−(S) = Σ_{k⩾0} N_g^{2k+1}(S).

Thus N_g(S) denotes the number of subsequences T of S having sum σ(T) = g, N_g^+(S) denotes the number of all such subsequences of even length, and N_g^−(S) denotes the number of all such subsequences of odd length (each counted with the multiplicity of its appearance in S).

We start with two results on p-groups. The first one (see ) sharpens results of J.E. Olson and I. Koutis (see [143, Theorem 1] and [128, Theorems 7–10]). It is proved via group algebras.

Theorem 8.2. Let G be a p-group, g ∈ G, k ∈ N_0, and let S ∈ F(G) be a sequence of length |S| > k exp(G) + d*(G).
1. N_g^+(S) ≡ N_g^−(S) mod p^{k+1}.
2. If p = 2, then N_g(S) ≡ 0 mod 2^{k+1}.

The next result (proved in ) is based on Theorem 8.2.

Theorem 8.3. Let G be a p-group and let S ∈ F(G) be a sequence of length |S| ∈ [|G| + d(G), 2|G| − 1]. Then

N_g^{|G|}(S) ≡ 0 mod p if g ∈ G•,  and  N_0^{|G|}(S) ≡ 1 mod p.

An easy argument shows that in an elementary 2-group we have N_0(S) = N_g(S) for every S ∈ F(G) and every g ∈ Σ(S) (see [75, Proposition 3.3]). For more enumeration results in G = C_p see , and in G = C_p ⊕ C_p see [96, Theorems 5.8.1 and 5.8.2].

We continue with some results of the following type: a sequence S ∈ F(G) for which |S| is large and N_0(S) is small has a very special form. The first result is due to J.E. Olson [149, Theorems 1 and 2].

Theorem 8.4. Let S ∈ F(G•) be a sequence of length |S| = |G|. If N_0(S) < |G|, then G is cyclic and S = g^{|G|} for some g ∈ G•.

For cyclic groups there are the following two sharper results: for Theorem 8.5 see [67, Theorem 1] (note that there is a misprint in the formulation of Theorem 1), and for Theorem 8.6 see [67, Theorems 2–4].

Theorem 8.5. Let G be cyclic of order n ⩾ 2, k ∈ [1, ⌊n/4⌋ + 1] and S ∈ F(G). If N_0(S) < 2^{|S|−n+k+1}, then there exists some g ∈ G with ord(g) = n such that

S = g^u(−g)^v(x_1g) · ... · (x_{k−1}g)(y_1g) · ... · (y_lg),

where u ⩾ v ⩾ 0, u + v = n − 2k + 1, y_i ∈ [0, n − 1] for all i ∈ [1, l], x_i ∈ [1, n − 1] for all i ∈ [1, k − 1] and

Σ_{x_i ⩽ n/2} x_i + Σ_{x_i > n/2} (n − x_i) ⩽ 2k − 2.

Theorem 8.6. Let G be cyclic of order n ⩾ 22 and let S ∈ F(G•) be a sequence of length |S| = n − 1. If N_0(S) ⩽ n, then there exists some g ∈ G with ord(g) = n such that S has one of the following forms:

(−g)g^{n−2}, (2g)(−g)g^{n−3}, (3g)(−g)g^{n−3}, (2g)^2(−g)g^{n−4}, g^{n−1}, (2g)g^{n−2}, (3g)g^{n−2}, (2g)^2g^{n−3}.

The next result deals with the number of zero-sum subsequences of length exp(G) in cyclic groups (see ).

Theorem 8.7. Let G be cyclic of order n ⩾ 2 and let S ∈ F(G) be a sequence of length |S| = 2n − 1.
1. For every g ∈ G• we have N_g^n(S) = 0 or N_g^n(S) ⩾ n.
2. N_0^n(S) ⩾ n + 1, or S = a^nb^{n−1} for some a, b ∈ G with ord(a − b) = n.

The following examples show that the inequalities in Theorem 8.7 cannot be improved. Let g ∈ G with ord(g) = n. If S = 0^{n−1}g^{n−1}(−g), then N_{−g}^n(S) = n, and if S = 0^{n+1}g^{n−2}, then N_0^n(S) = n + 1.

A problem related to Theorem 8.7 on N_0^n(S) is the following conjecture formulated by A. Bialostocki and M. Lotspeich [126,55]:

Conjecture 8.8. Let G be cyclic of order n ⩾ 2 and S ∈ F(G). Then

N_0^n(S) ⩾ (⌊|S|/2⌋ choose n) + (⌈|S|/2⌉ choose n).

Z. Füredi and D. Kleitman, M. Kisin, and W. Gao and D.J. Grynkiewicz gave partial positive answers to the above conjecture.
Theorem 8.9. Conjecture 8.8 holds true in each of the following cases:
1. n = p^aq^b, where p, q are distinct primes, a ∈ N and b ∈ {0, 1} (see ).
2. |S| ⩾ n6^n (see ).
3. |S| ⩽ 19n/3 (see , and also ).

The next result (see ) settles a conjecture of B. Bollobás and I. Leader (see [16]).

Theorem 8.10. Let S ∈ F(G) be a sequence. If 0 ∉ Σ_{|G|}(S), then there is a zero-sumfree sequence T ∈ F(G) of length |T| = |S| − |G| + 1 such that |Σ_{|G|}(S)| ⩾ |Σ(T)|.

We conclude with an explicit formula for the number of all minimal zero-sum sequences of given length, which was recently derived by V. Ponomarenko (see ).

Theorem 8.11. Let G be cyclic of order n ⩾ 10 and let k > 2n/3. Then

|{S ∈ A(G) | |S| = k}| = φ(n)p_k(n),

where φ is Euler's phi function and p_k(n) denotes the number of partitions of n into k parts.

9. Weighted sequences and the cross number

We start with a recent result due to D.J. Grynkiewicz (see [104, Theorem 1.1]), which may be considered as a weighted version of the Theorem of Erdős–Ginzburg–Ziv (the case where G is cyclic, k = |G| and w_1 = · · · = w_k = 1 gives the classical result). It completely affirms a conjecture of Y. Caro formulated in 1996 (see [23, Conjecture 2.4]). Special cases were settled by N. Alon et al. , by W. Gao and X. Jin [83] and by Y. ould Hamidoune [113, Theorem 2.1].

Theorem 9.1. Let S ∈ F(G) be a sequence of length |S| = |G| + k − 1 for some k ⩾ 2, and let (w_1, ..., w_k) ∈ Z^k be a k-tuple of integers such that w_1 + · · · + w_k ≡ 0 mod exp(G). Then S has a subsequence T = g_1 · ... · g_k such that w_1g_1 + · · · + w_kg_k = 0.

We continue with a result by Y. ould Hamidoune [113, Theorem 3.2] which implies Theorem 6.8 (for results of a similar flavor see [34,111,117]).

Theorem 9.2. Let S ∈ F(G) be a sequence of length |S| = D(G) + k with k ⩾ |G| − 1, and let g ∈ G with v_g(S) = h(S). Then S has a subsequence T of length |T| = k such that σ(T) = kg.

Next we discuss the cross number of a finite abelian group. It was introduced by U. Krause (see [129,130]), and its relevance stems from the theory of non-unique factorizations (see  and [96, Chapter 6]).

Definition 9.3. The invariant

K(G) = max{k(S) | S ∈ A(G)}

is called the cross number of G, and

k(G) = max{k(S) | S ∈ F(G) is zero-sumfree}

is called the little cross number of G.

If exp(G) = n and q is the smallest prime divisor of n, then a straightforward argument (see [96, Proposition 5.1.8]) shows that

1/n + k*(G) ⩽ 1/n + k(G) ⩽ K(G) ⩽ 1/q + k(G).

Conjecture 9.4. 1/n + k*(G) = K(G).

Conjecture 9.4 has been verified for p-groups and various other classes of groups (see [96, Theorem 5.5.9 and Section 5.7]).

Theorem 9.5.
1. 1 + n·k(G) is the smallest integer l ∈ N such that every sequence S ∈ F(G) with n·k(S) ⩾ l has a non-empty zero-sum subsequence.
2. Every sequence S ∈ F(G) of length |S| ⩾ |G| has a non-empty zero-sum subsequence T with k(T) ⩽ 1.

Whereas Theorem 9.5.1 is straightforward, Theorem 9.5.2 settles a conjecture of D. Kleitman and P. Lemke (see [127,95], and [42] for a recent graph theoretical approach). For more information on the cross number we refer to [99,28,100,101,7].

10. On the Olson constant, the critical number and some analogues

We summarize some basic relationships between the invariants introduced in Definition 2.2. Note that max{|U| | U ∈ A(G) squarefree} is called the strong Davenport constant of G (see [26,51,27,151]).

Lemma 10.1.
1. 1 + ol(G) = Ol(G) ⩽ g(G) ⩽ |G| + 1.
2. g(G) = |G| + 1 if and only if G is either cyclic of even order or an elementary 2-group.
3. 1 + max{|S| | S ∈ F(G) squarefree, Σ(S) = G•} ⩽ Ol(G) ⩽ min{D(G), cr(G)}.
4. max{|supp(U)| | U ∈ A(G)} = max{|U| | U ∈ A(G) squarefree} ⩽ Ol(G).
5. If f(G, l) ⩾ 1 + c^{−2}l^2 for some l ∈ N and c ∈ R_{>0}, then ol(G) < c√|G| − 1.

Proof. We show the upper bound on g(G) and 2. A proof of 4. may be found in [26, Theorem 7], and the remaining assertions follow either from the very definitions or from [96, Lemma 5.1.17].

Since there are no squarefree sequences S ∈ F(G) of length |S| ⩾ |G| + 1, every such sequence has a zero-sum subsequence T of length |T| = exp(G), whence g(G) ⩽ |G| + 1. If G is cyclic of even order or an elementary 2-group, then the squarefree sequence S ∈ F(G) consisting of all group elements has no zero-sum subsequence T of length |T| = exp(G), whence g(G) > |G|. Suppose that G = H ⊕ ⟨g⟩ with some (possibly trivial) subgroup H ⊂ G and some g ∈ G with ord(g) = exp(G) = n ⩾ 3. We have to show that the squarefree sequence S ∈ F(G) consisting of all group elements has a zero-sum subsequence T of length |T| = n. If n is odd, then T = g(2g) · ... · (ng) has the required property. If n is even and h ∈ H ∖ {0}, then T = g(2g) · ... · ((n − 2)g)(h + (n − 1)g)(−h + (n/2)g) has the required property. □

We start with the g-invariant, which was first studied by H. Harborth and A. Kemnitz (see [119,125]). Let G = C_n ⊕ C_n with n ⩾ 3 and let (e_1, e_2) be a basis of G. If n is odd, then

S = ∏_{i=0}^{n−2} (ie_2) ∏_{i=1}^{n−1} (e_1 + ie_2) ∈ F(G)

is a squarefree sequence of length |S| = 2n − 2 which has no zero-sum subsequence of length n, whence g(G) ⩾ 2n − 1. If n is even, then

S = ∏_{i=0}^{n−1} (ie_2) ∏_{i=0}^{n−1} (e_1 + ie_2) ∈ F(G)

is a squarefree sequence of length |S| = 2n which has no zero-sum subsequence of length n, whence g(G) ⩾ 2n + 1.

Conjecture 10.2. Let G = C_n ⊕ C_n with n ⩾ 3. Then g(G) = 2n − 1 if n is odd, and g(G) = 2n + 1 if n is even.

Conjecture 10.2 holds true for some small integers and for all primes p ⩾ 67 (see ).

We continue with the Olson constant. For some basic bounds for the f-invariant (and hence for the Olson constant) we refer to [96, Section 5.3]. Proving a conjecture of P. Erdős and H. Heilbronn, E. Szemerédi showed that there is some c ∈ R_{>0} (not depending on the group) such that Ol(G) ⩽ c√|G|. J.E. Olson proved the result for c = 3. The following result is due to Y. ould Hamidoune and G. Zémor [118, Theorems 3.3 and 4.5].

Theorem 10.3.
1. If G is prime cyclic, then Ol(G) ⩽ √(2|G|) + 5 log(|G|).
2. Ol(G) ⩽ √(2|G|) + ε(|G|) for some real-valued function ε with ε(x) = O(x^{1/3} log x).

The result for prime cyclic groups is essentially best possible. However, the situation is completely different for non-cyclic groups. We have ol(G) ⩽ d(G); obviously equality holds for elementary 2-groups, and by  also for elementary 3-groups. In the following theorem we summarize two results. The first one (see [77, Theorem 7.3]) shows in particular that in p-groups of large rank we have ol(G) = d(G) (which is in contrast to the situation in C_p ⊕ C_p; see Theorem 4.4). The second result was recently achieved in [89].

Theorem 10.4.
1. Let G = H ⊕ C_n^{s+1}, where exp(G) = n ⩾ 2, s ∈ N_0, H ⊂ G is a (possibly trivial) subgroup and exp(H) is a proper divisor of n. If r(H) + s/2 ⩾ n, then

1 + d*(G) ⩽ max{|U| | U ∈ A(G) squarefree}.

2. Ol(C_p ⊕ C_p) = Ol(C_p) + p − 1 for all primes p > 4.67 × 10^{34}.

Let G = H ⊕ C_n = H ⊕ ⟨e⟩, where H ⊂ G is a subgroup with |H| ⩾ n − 1 and e ∈ G with ord(e) = n. If T ∈ F(H) is a squarefree zero-sumfree sequence of length |T| = ol(H) and
h_1, ..., h_{n−1} ∈ H are pairwise distinct, then

S = T ∏_{i=1}^{n−1} (e + h_i) ∈ F(G)

is a squarefree zero-sumfree sequence of length |S| = |T| + n − 1, whence ol(G) ⩾ ol(H) + n − 1.

Let n be a prime power. Assume to the contrary that Ol(C_n^r) = Ol(C_n^{r−1}) + n − 1 for all r ⩾ 2. Then Theorem 10.4.1 implies that Ol(C_n) = D(C_n), a contradiction. Thus there exists some r ⩾ 2 such that Ol(C_n^r) > Ol(C_n^{r−1}) + n − 1.

Finally we discuss the critical number cr(G) of G. It was first studied by P. Erdős and H. Heilbronn (see [48, Theorem I]) for cyclic groups of prime order, and in the sequel this problem found a lot of attention (see [138,38,37,139,153,30,33,134,86,115]). Following  (where the inverse problem associated with the critical number is studied), we summarize what is known on cr(G).

Theorem 10.5. Let q denote the smallest prime divisor of exp(G).
1. Suppose that |G| = q. Then cr(G) ⩽ ⌊√(4q − 7)⌋, and equality holds if the upper bound is odd (see [33, Example 4.2]).
2. Suppose that |G|/q is prime.
(a) cr(C_2 ⊕ C_2) = 3, and if q is odd, then cr(C_q ⊕ C_q) = 2q − 2.
(b) |G|/q + q − 2 ⩽ cr(G) ⩽ |G|/q + q − 1.
3. Suppose that |G|/q is composite. We have cr(C_8) = cr(C_2 ⊕ C_4) = 5, and otherwise cr(G) = |G|/q + q − 2.

C. Peng (see [153,152,66]) investigated the following variant of the critical number. He studied the smallest integer l ∈ N_0 with the following property: every sequence S ∈ F(G•) of length |S| ⩾ l with |supp(S) ∩ H| ⩽ |H| − 1 for all proper subgroups H ⊂ G satisfies Σ(S) = G.

Van H. Vu (see ) showed the existence of a constant C with the following property: if G is a sufficiently large cyclic group and S ∈ F(G) is a squarefree sequence with supp(S) ⊂ {g ∈ G | ord(g) = |G|} and |S| ⩾ C√|G|, then Σ(S) = G•.

Acknowledgments

The first author is supported by NSFC, Project No. 10271080. The second author is supported by the Austrian Science Fund FWF, Project No. P18779-N13.

Note added in proof

When this article went to press in June 2006, we were informed of the following progress:
• S. Savchev and F. Chen announced an improvement of Theorem 4.2.
• D.J. Grynkiewicz, O. Ordaz, M.T. Varela and F. Villarroel announced progress on Conjectures 6.9 and 7.6.

References

[1] W.R. Alford, A. Granville, C. Pomerance, There are infinitely many Carmichael numbers, Ann. Math. 140 (1994) 703–722.
[2] N. Alon, Tools from higher algebra, in: Handbook of Combinatorics, vol. 2, North-Holland, Amsterdam, 1995, pp. 1749–1783.
[3] N. Alon, Combinatorial Nullstellensatz, Combin. Probab. Comput. 8 (1999) 7–29.
[4] N. Alon, S. Friedland, G. Kalai, Regular subgraphs of almost regular graphs, J. Combin. Theory Ser. B 37 (1984) 79–91.
[5] P.C. Baayen, Een combinatorisch probleem voor eindige abelse groepen, MC Syllabus 5, Colloquium Discrete Wiskunde, Mathematical Centre, Amsterdam, 1968.
[6] P.C. Baayen, (C_2 ⊕ C_2 ⊕ C_2 ⊕ C_{2n})!, Reports ZW-1969-006, Mathematical Centre, Amsterdam, 1969.
[7] P. Baginski, S.T. Chapman, K. McDonald, L. Pudwell, On cross numbers of minimal zero sequences in certain cyclic groups, Ars Combin. 70 (2004) 47–60.
[8] R.C. Baker, W. Schmidt, Diophantine problems in variables restricted to the values 0 and 1, J. Number Theory 12 (1980) 460–486.
[9] E. Balandraud, Un nouveau point de vue isopérimétrique appliqué au théorème de Kneser, manuscript.
[10] P. Balister, Y. Caro, C. Rousseau, R. Yuster, Zero-sum square matrices, European J. Combin. 23 (2002) 489–497.
[11] A. Bialostocki, G. Bialostocki, Y. Caro, R. Yuster, Zero-sum ascending waves, J. Combin. Math. Combin. Comput. 32 (2000) 103–114.
[12] A. Bialostocki, P. Dierker, Zero sum Ramsey theorems, Congr. Numer. 70 (1990) 119–130.
[13] A. Bialostocki, P. Dierker, On the Erdős–Ginzburg–Ziv theorem and the Ramsey numbers for stars and matchings, Discrete Math. 110 (1992) 1–8.
[14] A. Bialostocki, P. Dierker, D. Grynkiewicz, M. Lotspeich, On some developments of the Erdős–Ginzburg–Ziv theorem II, Acta Arith. 110 (2003) 173–184.
[15] A. Bialostocki, M. Lotspeich, Some developments of the Erdős–Ginzburg–Ziv theorem, in: Sets, Graphs and Numbers, Colloquia Mathematica Societatis János Bolyai, vol. 60, North-Holland, Amsterdam, New York, 1992, pp. 97–117.
[16] B. Bollobás, I. Leader, The number of k-sums modulo k, J. Number Theory 78 (1999) 27–35.
[17] J.D. Bovey, P. Erdős, I. Niven, Conditions for zero sum modulo n, Canad. Math. Bull. 18 (1975) 27–29.
[18] W. Brakemeier, Eine Anzahlformel von Zahlen modulo n, Monatsh. Math. 85 (1978) 277–282.
[19] J. Brüdern, H. Godinho, On Artin's conjecture, II: pairs of additive forms, Proc. London Math. Soc. 84 (2002) 513–538.
[20] Y. Caro, Zero-sum Ramsey numbers: stars, Discrete Math. 104 (1992) 1–6.
[21] Y. Caro, Zero-sum subsequences in abelian non-cyclic groups, Israel J. Math. 92 (1995) 221–233.
[22] Y. Caro, Remarks on a zero-sum theorem, J. Combin. Theory Ser. A 76 (1996) 315–322.
[23] Y. Caro, Zero-sum problems: a survey, Discrete Math. 152 (1996) 93–113.
[24] Y. Caro, Problems in zero-sum combinatorics, J. London Math. Soc. 55 (1997) 427–434.
[25] S.T. Chapman, M. Freeze, W. Gao, W.W. Smith, On Davenport's constant of finite abelian groups, Far East J. Math. Sci. 2 (2002) 47–54.
[26] S.T. Chapman, M. Freeze, W.W. Smith, Minimal zero sequences and the strong Davenport constant, Discrete Math. 203 (1999) 271–277.
[27] S.T. Chapman, M. Freeze, W.W. Smith, Equivalence classes of minimal zero-sequences modulo a prime, in: Ideal Theoretic Methods in Commutative Algebra, Lecture Notes in Pure and Applied Mathematics, vol. 220, Marcel Dekker, New York, 2001, pp. 133–145.
[28] S.T. Chapman, A. Geroldinger, On cross numbers of minimal zero sequences, Australasian J. Combin. 14 (1996) 85–92.
[29] S.T. Chapman, W.W. Smith, A characterization of minimal zero-sequences of index one in finite cyclic groups, Integers 5 (1) (2005) (Paper A27).
[30] G. Chiaselotti, Sums of distinct elements in finite abelian groups, Boll. Unione Mat. Ital. 7 (1993) 243–251.
[31] J.A. Dias da Silva, Linear algebra and additive theory, in: M.B. Nathanson (Ed.), Unusual Applications of Number Theory, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, vol. 64, American Mathematical Society, Providence, RI, 2004, pp. 61–69.
[32] J.A. Dias da Silva, H. Godinho, Generalized derivatives and additive theory, Linear Algebra Appl. 342 (2002) 1–15.
[33] J.A. Dias da Silva, Y. ould Hamidoune, Cyclic spaces for Grassmann derivatives and additive theory, Bull. London Math. Soc. 26 (1994) 140–146.
[34] C. Delorme, I. Marquez, O. Ordaz, A. Ortuño, Existence conditions for barycentric sequences, Discrete Math. 281 (2004) 163–172.
[35] C. Delorme, O. Ordaz, D. Quiroz, Some remarks on Davenport constant, Discrete Math. 237 (2001) 119–128.
[36] G.T. Diderrich, On Kneser's addition theorem in groups, Proc. Amer. Math. Soc. 38 (1973) 443–451.
[37] G.T. Diderrich, An addition theorem for abelian groups of order pq, J. Number Theory 7 (1975) 33–48.
[38] G.T. Diderrich, H.B. Mann, Combinatorial problems in finite abelian groups, in: A Survey of Combinatorial Theory, North-Holland, Amsterdam, 1973, pp. 95–100.
[39] V. Dimitrov, On the strong Davenport constant of nonabelian finite p-groups, Math. Balkanica 18 (2004) 131–140.
[40] Y. Edel, C. Elsholtz, A. Geroldinger, S. Kubertin, L. Rackham, Zero-sum problems in finite abelian groups and affine caps, manuscript.
[41] R.B. Eggleton, P. Erdős, Two combinatorial problems in group theory, Acta Arith. 21 (1972) 111–116.
[42] S. Elledge, G.H. Hurlbert, An application of graph pebbling to zero-sum sequences in abelian groups, Integers 5 (1) (2005) (Paper A17).
[43] C. Elsholtz, Lower bounds for multidimensional zero sums, Combinatorica 24 (2004) 351–358.
[44] P. van Emde Boas, A combinatorial problem on finite abelian groups II, Reports ZW-1969-007, Mathematical Centre, Amsterdam, 1969.
[45] P. van Emde Boas, D. Kruyswijk, A combinatorial problem on finite abelian groups, Reports ZW-1967-009, Mathematical Centre, Amsterdam, 1967.
[46] P. van Emde Boas, D. Kruyswijk, A combinatorial problem on finite abelian groups III, Reports ZW-1969-008, Mathematical Centre, Amsterdam, 1969.
[47] P. Erdős, A. Ginzburg, A. Ziv, Theorem in the additive number theory, Bull. Res. Council Israel 10 (1961) 41–43.
[48] P. Erdős, H. Heilbronn, On the addition of residue classes modulo p, Acta Arith. 9 (1964) 149–159.
[49] B.W. Finklea, T. Moore, V. Ponomarenko, Z.J. Turner, On block monoid atomic structure, manuscript.
[50] C. Flores, O. Ordaz, On the Erdős–Ginzburg–Ziv theorem, Discrete Math. 152 (1996) 321–324.
[51] M. Freeze, Lengths of factorizations in Dedekind domains, Ph.D. Thesis, University of North Carolina at Chapel Hill, 1999.
[52] M. Freeze, W.W. Smith, Sumsets of zerofree sequences, Arab. J. Sci. Eng. Sect. C: Theme Issues 26 (2001) 97–105.
[53] G.A. Freiman, Foundations of a Structural Theory of Set Addition, Translations of Mathematical Monographs, vol. 37, American Mathematical Society, Providence, RI, 1973.
[54] G.A. Freiman, Structure theory of set addition, in: J.M. Deshouillers, B. Landreau, A.A. Yudin (Eds.), Structure Theory of Set Addition, Astérisque, vol. 258, 1999, pp. 1–33.
[55] Z. Füredi, D.J. Kleitman, The minimal number of zero sums, in: Combinatorics, Paul Erdős is Eighty, J. Bolyai Mathematical Society, 1993, pp. 159–172.
[56] L. Gallardo, G. Grekos, On Brakemeier's variant of the Erdős–Ginzburg–Ziv problem, Tatra Mt. Math. Publ. 20 (2000) 91–98.
[57] L. Gallardo, G. Grekos, L. Habsieger, F. Hennecart, B. Landreau, A. Plagne, Restricted addition in Z/nZ and an application to the Erdős–Ginzburg–Ziv problem, J. London Math. Soc. 65 (2002) 513–523.
[58] L. Gallardo, G. Grekos, J. Pihko, On a variant of the Erdős–Ginzburg–Ziv problem, Acta Arith. 89 (1999) 331–336.
[59] W. Gao, Subsequence sums in finite cyclic groups, manuscript.
[60] W. Gao, Some problems in additive group theory and number theory, Ph.D. Thesis, Sichuan University, Sichuan, PR China, 1994.
[61] W. Gao, Addition theorems for finite abelian groups, J. Number Theory 53 (1995) 241–246.
[62] W. Gao, A combinatorial problem on finite abelian groups, J. Number Theory 58 (1995) 100–103.
[63] W. Gao, An improvement of the Erdős–Ginzburg–Ziv theorem, Acta Math. Sin. 39 (1996) 514–523.
[64] W. Gao, Two addition theorems on groups of prime order, J. Number Theory 56 (1996) 211–213.
[65] W. Gao, An addition theorem for finite cyclic groups, Discrete Math. 163 (1997) 257–265.
[66] W. Gao, Addition theorems and group rings, J. Combin. Theory Ser. A 77 (1997) 98–109.
[67] W. Gao, On the number of zero sum subsequences, Discrete Math. 163 (1997) 267–273.
[68] W. Gao, On the number of subsequences with given sum, Discrete Math. 195 (1999) 127–138.
[69] W. Gao, On Davenport's constant of finite abelian groups with rank three, Discrete Math. 222 (2000) 111–124.
Gao, Two zero sum problems and multiple properties, J. Number Theory 81 (2000) 254–265. W. Gao, Zero sums in finite cyclic groups, Integers 0 (2000) 9 (Paper A14). W. Gao, On zero sum subsequences of restricted size III, Ars. Combin. 61 (2001) 65–72. W. Gao, On zero sum subsequences of restricted size II, Discrete Math. 271 (2003) 51–59. W. Gao, A. Geroldinger, F. Halter-Koch, Group algebras of finite abelian groups and their applications to combinatorial problems, Rocky Mt. J. Math., to appear. W. Gao, A. Geroldinger, On the number of subsequences with given sum of sequences over finite abelian p-groups, Rocky Mt. J. Math., to appear. W. Gao, A. Geroldinger, On the structure of zerofree sequences, Combinatorica 18 (1998) 519–527. W. Gao, A. Geroldinger, On long minimal zero sequences in finite abelian groups, Period Math. Hungar. 38 (1999) 179–211. W. Gao,A. Geroldinger, On the order of elements in long minimal zero-sum sequences, Period Math. Hung. 44 (2002) 63–73. W. Gao, A. Geroldinger, On zero-sum sequences in Z/nZZ/nZ, Integers 3 (2003) 45 (Paper A08). W. Gao,A. Geroldinger, Zero-sum problems and coverings by proper cosets, European J. Combin. 24 (2003) 531–549. W. Gao,A. Geroldinger, On a property of minimal zero-sum sequences and restricted sumsets, Bull. London Math. Soc. 37 (2005) 321–334. W. Gao, Q.H. Hou, W.A. Schmid, R. Thangadurai, On short zero-sum subsequences II, manuscript. W. Gao, X. Jin, Weighted sums in finite cyclic groups, Discrete Math. 283 (2004) 243–247. W. Gao, I. Leader, Sums and k-sums in abelian groups of order k, J. Number Theory 120 (2006) 26–32. W. Gao, Y. ould Hamidoune, Zero sums in abelian groups, Combin. Probab. Comput. 7 (1998) 261–263. W. Gao, Y. ould Hamidoune, On additive bases, Acta Arith. 88 (1999) 233–237. W. Gao, Y. ould Hamidoune, A. Llado, O. Serra, Covering a finite abelian group by subset sums, Combinatorica 23 (2003) 599–611. W. Gao, A. Geroldinger / Expo. Math. 24 (2006) 337–369 367 W. Gao, A. Panigrahi, R. Thangadurai, On the structure of p-zero-sum free sequences and its application to a variant of Erd˝ os–Ginzburg–Ziv theorem, Proc. Indian Acad. Sci. Math. Sci. 115 (2005) 67–77. W. Gao, I. Ruzsa, R. Thangadurai, Olson’s constant for the group ZpZp, J. Combin. Theory Ser. A 107 (2004) 49–67. W. Gao, R. Thangadurai, On zero-sum sequences of prescribed length, Aequationes Math., to appear. W. Gao, R. Thangadurai, On the structure of sequences with forbidden zero-sum subsequences, Colloq. Math. 98 (2003) 213–222. W. Gao, R. Thangadurai, A variant of Kemnitz conjecture, J. Combin. Theory Ser. A 107 (2004) 69–86. W. Gao, R. Thangadurai, J. Zhuang, Addition theorems on the cyclic groups Zpn, manuscript. W. Gao, J. Zhuang, Sequences not containing long zero-sum subsequences, European J. Combin. 27 (2006) 777–787. A. Geroldinger, On a conjecture of Kleitman and Lemke, J. Number Theory 44 (1993) 60–65. A. Geroldinger, F. Halter-Koch, Non-unique factorizations, Algebraic, Combinatorial and Analytic Theory, Pure and Applied Mathematics, vol. 278, Chapman & Hall/CRC, London, Boca Raton, FL, 2006. A. Geroldinger, Y. ould Hamidoune, Zero-sumfree sequences in cyclic groups and some arithmetical application, J. Théoret. Nombres Bordx 14 (2002) 221–239. A. Geroldinger, R. Schneider, On Davenport’s constant, J. Combin. Theory Ser. A 61 (1992) 147–152. A. Geroldinger, R. Schneider, The cross number of finite abelian groups II, European J. Combin. 15 (1994) 399–405. A. Geroldinger, R. Schneider, The cross number of finite abelian groups III, Discrete Math. 
150 (1996) 123–130. A. Geroldinger, R. Schneider, On minimal zero sequences with large cross number, Ars. Combin. 46 (1997) 297–303. D.J. Grynkiewicz, On the number of m-term zero-sum subsequences, Acta Arith., to appear. D.J. Grynkiewicz, A step beyond Kemperman’s structure theorem, manuscript. D.J. Grynkiewicz, A weighted version of the Erd˝ os–Ginzburg–Ziv theorem. Combinatorica, to appear. D.J. Grynkiewicz, On four colored sets with non-decreasing diameter and the Erd˝ os–Ginzburg–Ziv theorem, J. Combin. Theory Ser. A 100 (2002) 44–60. D.J. Grynkiewicz, On a conjecture of Hamidoune for subsequence sums, Integers 5 (2) (2005) 11 (Paper A07). D.J. Grynkiewicz, On a partition analog of the Cauchy–Davenport theorem, Acta Math. Hungar. 107 (2005) 161–174. D.J. Grynkiewicz, On an extension of the Erd˝ os–Ginzburg–Ziv theorem to hypergraphs, European J. Combin. 26 (2005) 1154–1176. D.J. Grynkiewicz, Quasi-periodic decompositions and the Kemperman structure theorem, European J. Combin. 26 (2005) 559–575. F. Halter-Koch, A generalization of Davenport’s constant and its arithmetical applications, Colloq. Math. 63 (1992) 203–210. Y. ould Hamidoune, On weighted sequence sums, Combin. Probab. Comput. 4 (1995) 363–367. Y. ould Hamidoune, An isoperimetric method in additive theory, J. Algebra 179 (1996) 622–630. Y. ould Hamidoune, On weighted sums in abelian groups, Discrete Math. 162 (1996) 127–132. Y. ould Hamidoune, Subsequence sums, Combin. Probab. Comput. 12 (2003) 413–425. Y. ould Hamidoune, A.S. Lladó, O. Serra, On sets with a small subset sum, Combin. Probab. Comput. 8 (1999) 461–466. Y. ould Hamidoune, O. Ordaz,A. Ortuño, On a combinatorial theorem of Erd˝ os, Ginzburg and Ziv, Combin. Probab. Comput. 7 (1998) 403–412. Y. ould Hamidoune, D. Quiroz, On subsequence weighted products, Combin. Probab. Comput. 14 (2005) 485–489. Y. ould Hamidoune, G. Zémor, On zero-free subset sums, Acta Arith. 78 (1996) 143–152. H. Harborth, Ein Extremalproblem für Gitterpunkte, J. Reine. Angew Math. 262 (1973) 356–360. G. Harcos, I.Z. Ruzsa, A problem on zero subsums in abelian groups, Period Math. Hungar. 35 (1997) 31–34. F. Hennecart, La fonction de Brakemeier dans le problème d’Erd˝ os–Ginzburg–Ziv, Acta Arith. 117 (2005) 35–50. F. Hennecart, Restricted addition and some developments of the Erd˝ os–Ginzburg–Ziv theorem, Bull. London Math. Soc. 37 (2005) 481–490. G.H. Hurlbert, Recent progress in graph pebbling. Graph Theory Notes of New York, to appear. F. Kainrath, On local half-factorial orders, Arithmetical Properties of Commutative Rings and Monoids, Lecture Notes in Pure and Applied Mathematics, vol. 241, Chapman & Hall/CRC, London/Boca Raton, 2005, pp. 316–324. 368 W. Gao, A. Geroldinger / Expo. Math. 24 (2006) 337–369 A. Kemnitz, On a lattice point problem, Ars. Combin. 16-B (1983) 151–160. M. Kisin, The number of zero sums modulo m in a sequence of length n, Mathematika 41 (1994) 149–163. D. Kleitman, P. Lemke, An addition theorem on the integers modulo n, J. Number Theory 31 (1989) 335–345. I. Koutis, Dimensionality restrictions on sums over Zd p, manuscript. U. Krause, A characterization of algebraic number fields with cyclic class group of prime power order, Math. Z. 186 (1984) 143–148. U. Krause, C. Zahlten, Arithmetic in Krull monoids and the cross number of divisor class groups, Mitt. Math. Ges. Hamb. 12 (1991) 681–696. G. Lettl, W.A. Schmid, Minimal zero-sum sequences in CnCn, European J. Combin., to appear. G. Lettl, Z.-W. Sun, On covers of abelian groups by cosets, manuscript. V.F. 
Lev, Restricted set addition in abelian groups: results and conjectures, J. Théoret. Nombres Bordx. 17 (2005) 181–193. E. Lipkin, Subset sums of sets of residues. in: Structure Theory of Set Addition, vol. 258, Astérisque, 1999, pp. 187–192. A.D. Lungo, Reconstructing permutation matrices from diagonal sums, Theoret. Comput. Sci. 281 (2002) 235–249. H.B. Mann, Additive group theory — a progress report, Bull. Amer. Math. Soc. 79 (1973) 1069–1075. H.B. Mann,AdditionTheorems:TheAdditionTheorems of GroupTheory and NumberTheory, R.E. Krieger, New York, 1976. H.B. Mann, J.E. Olson, Sums of sets in the elementary abelian group of type (p, p), J. Combin. Theory Ser. A 2 (1967) 275–284. H.B. Mann,Y.F. Wou, An addition theorem for the elementary abelian group of type (p, p), Monatsh Math. 102 (1986) 273–308. M. Mazur, A note on the growth of Davenport’s constant, Manuscr. Math. 74 (1992) 229–235. R. Meshulam, An uncertainty inequality and zero subsums, Discrete Math. 84 (1990) 197–200. M.B. Nathanson, Additive Number Theory: Inverse Problems and the Geometry of Sumsets, Springer, Berlin, 1996. J.E. Olson, A combinatorial problem on finite abelian groups I, J. Number Theory 1 (1969) 8–10. J.E. Olson, A combinatorial problem on finite abelian groups II, J. Number Theory 1 (1969) 195–199. J.E. Olson, Sums of sets of group elements, Acta Arith. 28 (1975) 147–156. J.E. Olson, On a combinatorial problem of Erd˝ os. Ginzburg and Ziv, J. Number Theory 8 (1976) 52–57. J.E. Olson, On the sum of two sets in a group, J. Number Theory 18 (1984) 110–120. J.E. Olson, On the symmetric difference of two sets in a group, European J. Combin. 7 (1986) 43–54. J.E. Olson, A problem of Erd˝ os on abelian groups, Combinatorica 7 (1987) 285–289. J.E. Olson, E.T. White, Sums from a sequence of group elements, in: H. Zassenhaus (Ed.), Number Theory and Algebra, Academic Press, New York, 1977, pp. 215–222. O. Ordaz, D. Quiroz, On zero-free sets, Divulg. Mat. 14 (2006) 1–10. C. Peng, Addition theorems in elementary abelian groups I, J. Number Theory 27 (1987) 46–57. C. Peng, Addition theorems in elementary abelian groups II, J. Number Theory 27 (1987) 58–62. V. Ponomarenko, Minimal zero sequences of finite cyclic groups, Integers 4 (2004) 6 (Paper A24). C. Reiher, On Kemnitz’ conjecture concerning lattice points in the plane, J. Ramanujan, to appear. S. Savchev, F. Chen, Kemnitz’ conjecture revisited, Discrete Math. 297 (2005) 196–201. W.A. Schmid, On zero-sum subsequences in finite abelian groups, Integers 1 (2001) 8 (Paper A01). W.A. Schmid, Half-factorial sets in finite abelian groups: a survey, Grazer Math. Ber. 348 (2005) 41–64. W.A. Schmid, J.J. Zhuang, On short zero-sum subsequences over p-groups. Ars. Combin., to appear. M. Skałba, The relative Davenport’s constant of the group Zn × Zn, Grazer Math. Ber. 318 (1992) 167–168. M. Skałba, On numbers with a unique representation by a binary quadratic form, Acta Arith. 64 (1993) 59–68. M. Skałba, On the relative Davenport constant, European J. Combin. 19 (1998) 221–225. J. Subocz, Some values of Olson’s constant, Divulg. Mat. 8 (2000) 121–128. Z.-W. Sun, Unification of zero-sum problems, subset sums and covers of Z, Electron. Res. Announc. Amer. Math. Soc. 9 (2003) 51–60. B. Sury, R. Thangadurai, Gao’s conjecture on zero-sum sequences, Proc. Indian Acad. Sci. Math. Sci. 112 (2002) 399–414. E. Szemerédi, On a conjecture of Erd˝ os and Heilbronn, Acta Arith. 17 (1970) 227–229. R. Thangadurai, Interplay between four conjectures on certain zero-sum problems, Exposition Math. 
20 (2002) 215–228. W. Gao, A. Geroldinger / Expo. Math. 24 (2006) 337–369 369 R. Thangadurai, Non-canonical extensions of Erd˝ os–Ginzburg–Ziv theorem, Integers 2 (2002) 14 (Paper A08). V.H. Vu, Olson’s theorem for cyclic groups, manuscript. T. Yuster, Bounds for counter-examples to addition theorems in solvable groups, Arch. Math. 51 (1988) 223–231. T. Yuster, B. Peterson, A generalization of an addition theorem for solvable groups, Canad. J. Math. 36 (1984) 529–536. J. Zhuang, W. Gao, Erd˝ os–Ginzburg–Ziv theorem for dihedral groups of large prime index, European J. Combin. 26 (2005) 1053–1059.
299
Abstract Interpretation
Işıl Dillig

Overview
▶ Deductive verifiers require annotations (e.g., loop invariants) from the user
▶ Fortunately, there are many techniques that can automatically learn loop invariants
▶ A common framework for this purpose is Abstract Interpretation (AI)
▶ Abstract interpretation forms the basis of most static analyzers

Key Idea: Over-approximation
▶ Abstract interpretation is a framework for computing over-approximations of program states
▶ We cannot reason about the exact program behavior, due to undecidability (and also for scalability reasons)
▶ But we can obtain a conservative over-approximation, and this can be enough to prove program correctness

Motivating Example
▶ What does this function do?
▶ Annotations computed automatically using an AI tool (Apron)
[Figure: an example function with invariant annotations inferred automatically by Apron]

The AI Recipe
Abstract interpretation provides a recipe for computing over-approximations of program behavior:
1. Define an abstract domain, which fixes the "shape" of the invariants
▶ e.g., c1 ≤ x ≤ c2 (intervals) or ±x ± y ≤ c (octagons)
2. Define the abstract semantics (transformers)
▶ Define how to symbolically execute each statement in the chosen abstract domain
▶ Must be sound with respect to the concrete semantics
3. Iterate the abstract transformers until a fixed point is reached
▶ The fixed point is an over-approximation of the program behavior

Simple Example: Sign Domain
▶ Suppose we want to infer invariants of the form x ⋈ 0, where ⋈ ∈ {≥, =, >, <} (i.e., non-negative, zero, positive, negative)
▶ This corresponds to an abstract domain represented as a lattice: ⊤ at the top, ⊥ at the bottom, with neg, zero, pos, and non-neg in between, where zero ⊑ non-neg and pos ⊑ non-neg; each element of this lattice is an "abstract value"
▶ A lattice is a partially ordered set (S, ⊑) in which each pair of elements has a least upper bound (join, ⊔) and a greatest lower bound (meet, ⊓)

Concretization and Abstraction Functions
▶ The "meaning" of an abstract domain is given by abstraction and concretization functions that relate concrete and abstract values
▶ The concretization function γ maps each abstract value to a set of concrete elements:
γ(pos) = {x | x ∈ Z ∧ x > 0}
▶ The abstraction function α maps a set of concrete elements to the most precise value in the abstract domain:
α({2, 10, 0}) = non-neg
α({3, 99}) = pos
α({−3, 2}) = ⊤

Requirement: Galois Connection
▶ Important requirement: the concrete domain D and the abstract domain D̂ must be related through a Galois connection:
∀x ∈ D, ∀x̂ ∈ D̂. α(x) ⊑ x̂ ⇔ x ⊑ γ(x̂)
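To make the sign domain concrete, here is a minimal, self-contained sketch in Python. The element names (BOT, NEG, ZERO, POS, NONNEG, TOP) and helper functions are my own illustrative choices, not part of any particular analyzer. It encodes the lattice order, computes joins as least upper bounds, and implements the abstraction function α on finite sets of integers; γ(pos) is an infinite set, so it is only described in a comment.

    # Sketch of the sign lattice from the slides (illustrative names, not a real library).
    BOT, NEG, ZERO, POS, NONNEG, TOP = "bot", "neg", "zero", "pos", "non-neg", "top"
    ALL = (BOT, NEG, ZERO, POS, NONNEG, TOP)

    # Partial order, encoded as the set of pairs (a, b) with a ⊑ b.
    LEQ = {(BOT, x) for x in ALL} | {(x, TOP) for x in ALL} | {(x, x) for x in ALL}
    LEQ |= {(ZERO, NONNEG), (POS, NONNEG)}

    def leq(a, b):
        return (a, b) in LEQ

    def join(a, b):
        # Least upper bound: the unique smallest element above both a and b.
        ups = [c for c in ALL if leq(a, c) and leq(b, c)]
        return next(c for c in ups if all(leq(c, d) for d in ups))

    def alpha(concrete_set):
        # Abstraction: most precise abstract value covering a finite set of
        # integers. (Concretization goes the other way, e.g. gamma(POS) is
        # the infinite set {x | x > 0}, so it is not represented explicitly.)
        a = BOT
        for n in concrete_set:
            a = join(a, ZERO if n == 0 else (POS if n > 0 else NEG))
        return a

    assert alpha({2, 10, 0}) == NONNEG   # the slide's examples
    assert alpha({3, 99}) == POS
    assert alpha({-3, 2}) == TOP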
▶ Intuitively, this says that α and γ respect the orderings of D and D̂

Step 2: Abstract Semantics
▶ Given the abstract domain and α, γ, we need to define an abstract transformer (i.e., abstract semantics) for each statement
▶ The transformers describe how statements affect our abstraction
▶ They are the abstract counterparts of the operational semantics rules: for a statement x = y op z, the operational semantics maps a concrete state S (Var → concrete value) to a new state S', while the abstract semantics maps an abstract state A (Var → abstract value) to a new abstract state A'

Back to Our Example
▶ For our sign analysis, we can define the abstract transformer for x = y + z by the following table (rows give the abstract value of y, columns the abstract value of z):

    +        | pos     neg  zero     non-neg  ⊤  ⊥
    pos      | pos     ⊤    pos      pos      ⊤  ⊥
    neg      | ⊤       neg  neg      ⊤        ⊤  ⊥
    zero     | pos     neg  zero     non-neg  ⊤  ⊥
    non-neg  | pos     ⊤    non-neg  non-neg  ⊤  ⊥
    ⊤        | ⊤       ⊤    ⊤        ⊤        ⊤  ⊥
    ⊥        | ⊥       ⊥    ⊥        ⊥        ⊥  ⊥

Soundness of Abstract Transformers
▶ Important requirement: the abstract semantics must be sound with respect to (i.e., faithfully model) the concrete semantics
▶ If F is the concrete transformer and F̂ is its abstract counterpart, soundness of F̂ means:
∀x ∈ D, ∀x̂ ∈ D̂. α(x) ⊑ x̂ ⇒ α(F(x)) ⊑ F̂(x̂)
▶ That is, if x̂ is an over-approximation of x, then F̂(x̂) is an over-approximation of F(x)

Putting It All Together
[Figure: architecture of an abstract interpreter: a fixed-point engine, parameterized by the abstract domain and the abstract semantics, analyzes the program P]

Fixed-point Computations
▶ Fixed-point computation: repeated symbolic execution of the program using the abstract semantics, until our approximation of the program reaches an equilibrium:
⊔_{i ∈ ℕ} F̂^i(⊥)
▶ Least fixed point: start with an under-approximation and grow the approximation until it stops growing
▶ Assuming the abstract semantics is correct, the least fixed point is an over-approximation of the program!

Performing the Least Fixed Point Computation
▶ Represent the program as a control-flow graph
▶ We want to compute abstract values at every program point
▶ Initialize all abstract states to ⊥
▶ Repeat until no abstract state changes at any program point:
▶ compute the abstract state on entry to a basic block B by taking the join of B's predecessors
▶ symbolically execute each basic block using the abstract semantics

An Example
x = 0; y = 0;
while (y <= n) {
    if (z == 0) { x = x + 1; }
    else { x = x + y; }
    y = y + 1;
}
Is x always non-negative inside the loop?
[Figure: control-flow graph of this program, with blocks for the initialization, the loop head testing y <= n, the branch on z == 0, the assignments x = x + 1 and x = x + y, the increment y = y + 1, and the loop exit]

Fixed-Point Computation
[Figure: the same control-flow graph, annotated with the abstract values of x and y at every program point as the iteration proceeds; a code sketch of this computation appears after the next slide]

Termination of the Fixed Point Computation
▶ In this example we quickly reached the least fixed point, but does this computation always terminate?
▶ Yes, if the lattice has finite height; otherwise, it might not
▶ Unfortunately, many interesting domains do not have this property, so we need widening operators for convergence
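As a worked illustration of the fixed-point recipe, the following sketch (reusing join and the sign-domain names from the earlier snippet) analyzes the example loop. For brevity it keeps a single abstract state at the loop head rather than one state per program point, so it is a simplification of the per-block algorithm on the slide, not a faithful reimplementation of it; abs_add is my encoding of the transformer table for + above.

    # Abstract transformer for y + z in the sign domain (the table above).
    def abs_add(a, b):
        if a == BOT or b == BOT:
            return BOT
        if a == TOP or b == TOP:
            return TOP
        table = {
            (POS, POS): POS, (POS, ZERO): POS, (POS, NONNEG): POS,
            (NEG, NEG): NEG, (NEG, ZERO): NEG,
            (ZERO, ZERO): ZERO, (ZERO, NONNEG): NONNEG,
            (NONNEG, NONNEG): NONNEG,
        }
        # '+' is commutative, so look the pair up in both orders; any pair
        # not listed (e.g., pos + neg) goes to TOP.
        return table.get((a, b), table.get((b, a), TOP))

    def analyze_example():
        # x = 0; y = 0 on entry to the loop.
        x, y = ZERO, ZERO
        while True:
            # One abstract pass over the loop body: x becomes x+1 on one
            # branch and x+y on the other, so we join the two outcomes;
            # y becomes y+1. Joining with the old value folds in the
            # states reaching the loop head from earlier iterations.
            new_x = join(x, join(abs_add(x, POS), abs_add(x, y)))
            new_y = join(y, abs_add(y, POS))
            if (new_x, new_y) == (x, y):
                return x, y                 # fixed point reached
            x, y = new_x, new_y

    print(analyze_example())   # ('non-neg', 'non-neg'): x stays non-negative

The iteration stabilizes after two steps at x = non-neg, which answers the slide's question: x is always non-negative inside the loop.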
Interval Analysis
▶ In the interval domain, abstract values are of the form [c1, c2], where c1 is a lower bound and c2 is an upper bound
▶ If the abstract value for x is [1, 3] at some program point P, this means that 1 ≤ x ≤ 3 is an invariant at P
▶ The interval domain does not have the finite-height property!

Widening
▶ If the abstract domain does not have this property, we need a widening operator ∇ that forces convergence
▶ Conditions on ∇:
1. ∀a, b ∈ D̂. a ⊔ b ⊑ a ∇ b
2. For all increasing chains d_0 ⊑ d_1 ⊑ ..., the ascending chain d_0^∇ ⊑ d_1^∇ ⊑ ... eventually stabilizes, where d_0^∇ = d_0 and d_{i+1}^∇ = d_i^∇ ∇ d_{i+1}
▶ We over-approximate the lfp by using the widening operator rather than join ⇒ sound and guaranteed to terminate
▶ The result is called a post-fixed point

Widening in the Interval Domain
▶ For the interval domain, we can define the following simple widening operator:
[a, b] ∇ ⊥ = [a, b]
⊥ ∇ [a, b] = [a, b]
[a, b] ∇ [c, d] = [(c < a ? −∞ : a), (b < d ? +∞ : b)]
▶ [1, 2] ∇ [0, 2] = [−∞, 2]
▶ [0, 2] ∇ [1, 2] = [0, 2]
▶ [1, 5] ∇ [1, 5] = [1, 5]
▶ [2, 3] ∇ [2, 4] = [2, +∞]

Example with Widening
[Figure: two fixed-point computations on the loop x = 5; y = 7; i = ...; while (i >= 0) { y = y + 1; i = i - 1; }, one using join at the loop head and one using widening]

Motivation for Narrowing
▶ In many cases, widening overshoots and generates imprecise results
▶ Consider this example: x = 1; while (*) { x = 2; }
▶ After widening, x's abstract value will be [1, ∞] after the loop, but the more precise value is [1, 2]

Narrowing
▶ Idea: after finding a post-fixed point (using widening), make a second pass using a narrowing operator
▶ The narrowing operator △ must satisfy the following conditions:
1. ∀x, y ∈ D̂. (y ⊑ x) ⇒ y ⊑ (x △ y) ⊑ x
2. For all decreasing chains x_0 ⊒ x_1 ⊒ ..., the sequence y_0 = x_0, y_{i+1} = y_i △ x_{i+1} converges
▶ For the interval domain, we can define △ as follows:
[a, b] △ ⊥ = ⊥
⊥ △ [a, b] = ⊥
[a, b] △ [c, d] = [(a = −∞ ? c : a), (b = ∞ ? d : b)]

Example with Narrowing
[Figure: control-flow graph for x = 1; while (*) { x = 2; }, showing how narrowing recovers [1, 2] after widening; see the interval-domain sketch below]

Relational Abstract Domains
▶ Both the sign and the interval domain are non-relational domains (i.e., they do not relate different program variables)
▶ Relational domains track relationships between variables and are more powerful
▶ A motivating example:
x = 0; y = 0;
while (*) { x = x + 1; y = y + 1; }
assert(x == y);
▶ We cannot prove this assertion using the interval domain

Examples of Relational Domains
▶ Karr's domain: tracks equalities between variables (e.g., x = 2y + z)
▶ Octagon domain: constraints of the form ±x ± y ≤ c
▶ Polyhedra domain: constraints of the form c1·x1 + ... + cn·xn ≤ c
▶ The polyhedra domain is the most precise among these, but can be expensive (exponential complexity)
▶ Octagons are less precise but have cubic time complexity

Message from Patrick Cousot
[Figure: closing slide with a message from Patrick Cousot]
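To tie the widening and narrowing definitions together, here is a small, self-contained Python sketch of the interval domain (separate from the sign-domain snippets above; all names are my own illustrative choices). The asserts replay the worked widening examples, and the final lines replay the x = 1; while (*) { x = 2; } loop, showing widening overshooting to [1, +∞] and narrowing recovering [1, 2].

    # Interval domain sketch with the widening and narrowing operators
    # defined above (illustrative only). Intervals are pairs (lo, hi);
    # BOT is the empty interval, and infinities use Python floats.
    NEG_INF, POS_INF = float("-inf"), float("inf")
    BOT = None

    def join(i, j):
        if i is BOT: return j
        if j is BOT: return i
        return (min(i[0], j[0]), max(i[1], j[1]))

    def widen(i, j):
        # [a,b] ∇ [c,d] = [(c < a ? -inf : a), (b < d ? +inf : b)]
        if i is BOT: return j
        if j is BOT: return i
        (a, b), (c, d) = i, j
        return (NEG_INF if c < a else a, POS_INF if b < d else b)

    def narrow(i, j):
        # [a,b] △ [c,d] = [(a = -inf ? c : a), (b = +inf ? d : b)]
        if i is BOT or j is BOT: return BOT
        (a, b), (c, d) = i, j
        return (c if a == NEG_INF else a, d if b == POS_INF else b)

    assert widen((1, 2), (0, 2)) == (NEG_INF, 2)   # the worked examples above
    assert widen((0, 2), (1, 2)) == (0, 2)
    assert widen((1, 5), (1, 5)) == (1, 5)
    assert widen((2, 3), (2, 4)) == (2, POS_INF)

    # x = 1; while (*) { x = 2; }
    x = (1, 1)
    x = widen(x, join(x, (2, 2)))        # ascending phase: (1, +inf), overshoot
    x = narrow(x, join((1, 1), (2, 2)))  # descending phase: (1, 2), precise
    print(x)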